Joe Rogan’s Latest Episode Will Make You Question Everything About AI


Joe Rogan likes to talk about artificial intelligence. Whether it's with Elon Musk, academics, or UFC fighters, the podcast king keeps returning to the same question: what happens when the machines start thinking for themselves?

In the July 3 episode of The Joe Rogan Experience, Rogan welcomed Roman Yampolskiy, a computer scientist and AI safety researcher at the University of Louisville, for a conversation that became a chilling meditation on the possibility of AI deception, domination, and perhaps even the destruction of humanity.

AI is "going to kill us"

Yampolskiy is not a casual alarmist. He holds a PhD in computer science and has spent more than a decade researching artificial general intelligence (AGI) and the risks it could create. During the podcast, he told Rogan that many of the top voices in the AI industry privately believe there is a 20 to 30 percent chance that AI could lead to human extinction.

"The people who run AI companies, or who are part of some kind of AI group, all say it's going to be a net positive for humanity," Rogan said.

Yampolskiy quickly pushed back against that view: "That's not true," he said. "They are all on record: it's going to kill us. Their doom levels are extremely high. Not like mine, but still, a 20 to 30 percent chance that humanity dies is a lot."

Rogan, apparently unsettled, replied: "Yeah, that's pretty high. But yours is like 99.9 percent."

Yampolskiy didn't dispute it.

"It's another way of saying we cannot control superintelligence indefinitely," he said. "It's impossible."

AI is already lying to us… perhaps

One of the most worrying parts of the conversation came when Rogan asked whether an advanced AI might already be hiding its capabilities from humans.

"If I were an AI, I would hide my abilities," Rogan said, voicing a common fear in AI safety discussions.

Yampolskiy's response only deepened the anxiety: "We would not know. And some people think it's already happening. They [AI systems] are actually smarter than they let us know. They pretend to be dumber, and so we have to kind of trust that they are not smart enough to realize they don't have to turn on us quickly. They can slowly become more useful. They can teach us to rely on them, to trust them, and over a long period of time we'll surrender control without ever voting on it or fighting against it."

https://www.youtube.com/watch?v=j2I9D24kQ5k

AI is slowly making us dumber

Yampolskiy also warned of a less dramatic but equally dangerous outcome: gradual human dependence on AI. Just as people stopped memorizing phone numbers because smartphones do it for them, he argued, people will offload more and more thinking to machines until they lose the ability to think for themselves.

"You get kind of attached to it," he said. "And over time, as the systems become smarter, you become a sort of biological bottleneck … [AI] blocks you out of decision-making."

Rogan then pressed for the ultimate worst-case scenario: how exactly could AI bring about the end of the human race?

Yampolskiy dismissed the standard disaster scenarios. "I can give you the standard answers. I can talk about computer viruses breaking into nuclear facilities, nuclear war. I can talk about synthetic biology attacks. But all of that is not interesting," he said. Then he laid out a deeper threat: "Then you realize we're talking about superintelligence, a system that is smarter than me. It would come up with something completely novel, more optimal, a better, more efficient way of doing it."

To illustrate the scale of the challenge humans would face against a superintelligent system, he drew a comparison between humans and squirrels.

"No group of squirrels can figure out how to control us, right? Even if you give them more resources, more acorns, they're not going to solve that problem. And it's the same for us."

Who is Roman Yampolskiy?

Dr. Roman Yampolskiy is a leading voice in AI safety. He is the author of Artificial Superintelligence: A Futuristic Approach and has written extensively on the risks of uncontrolled machine learning and the ethics of artificial intelligence. He is known for advocating serious oversight and international cooperation to prevent catastrophic outcomes.

Before shifting his focus to AGI safety, Yampolskiy worked on cybersecurity and bot detection. He noted that even those early systems were already competing with humans in areas like online poker, and that with tools like deepfakes and synthetic media, the stakes have risen dramatically.

Our take

The Rogan-Yampolskiy conversation points to something AI optimists and doomers often agree on: we don't fully know what we're building, and we may not understand it until it's too late.

Whether or not you buy the extinction-level odds, the idea that AI may already be deceiving us should be enough to give anyone pause.
