
Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431



Lex Fridman

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
– Yahoo Finance: https://yahoofinance.com
– MasterClass: https://masterclass.com/lexpod to get 15% off
– NetSuite: http://netsuite.com/lex to get a free product tour
– LMNT: https://drinkLMNT.com/lex to get a free sample pack
– Eight Sleep: https://eightsleep.com/lex to get $350 off

TRANSCRIPT:
https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
Roman’s X: https://twitter.com/romanyam
Roman’s Website: http://cecs.louisville.edu/ry
Roman’s AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 – Introduction
2:20 – Existential risk of AGI
8:32 – Ikigai risk
16:44 – Suffering risk
20:19 – Timeline to AGI
24:51 – AGI Turing test
30:14 – Yann LeCun and open source AI
43:06 – AI control
45:33 – Social engineering
48:06 – Fearmongering
57:57 – AI deception
1:04:30 – Verification
1:11:29 – Self-improving AI
1:23:42 – Pausing AI development
1:29:59 – AI Safety
1:39:43 – Current AI
1:45:05 – Simulation
1:52:24 – Aliens
1:53:57 – Human mind
2:00:17 – Neuralink
2:09:23 – Hope for the future
2:13:18 – Meaning of life

SOCIAL:
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Reddit: https://reddit.com/r/lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman


32 thoughts on “Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431”
  1. Here are the timestamps. Please check out our sponsors to support this podcast.

    0:00 – Introduction & sponsor mentions:

    – Yahoo Finance: https://yahoofinance.com

    – MasterClass: https://masterclass.com/lexpod to get 15% off

    – NetSuite: http://netsuite.com/lex to get a free product tour

    – LMNT: https://drinkLMNT.com/lex to get a free sample pack

    – Eight Sleep: https://eightsleep.com/lex to get $350 off

    2:20 – Existential risk of AGI

    8:32 – Ikigai risk

    16:44 – Suffering risk

    20:19 – Timeline to AGI

    24:51 – AGI Turing test

    30:14 – Yann LeCun and open source AI

    43:06 – AI control

    45:33 – Social engineering

    48:06 – Fearmongering

    57:57 – AI deception

    1:04:30 – Verification

    1:11:29 – Self-improving AI

    1:23:42 – Pausing AI development

    1:29:59 – AI Safety

    1:39:43 – Current AI

    1:45:05 – Simulation

    1:52:24 – Aliens

    1:53:57 – Human mind

    2:00:17 – Neuralink

    2:09:23 – Hope for the future

    2:13:18 – Meaning of life

  2. To me, AGI is like any other tool or innovation we have created.

    We humans "merge" with our new tools – a warrior becomes an archer, a postman becomes horse rider and then a van driver, etc.

    AGI will just be a more intense form of merging: first through the virtual-reality approach (where our communication is necessarily constrained by the limits of our physical form), and then as virtual entities running in the same "substrate" as our AI tools.

    The carbon-based life forms we currently are have no future other than in a petting zoo. Our real future lies in merging with our AIs and effectively becoming one with them, just as we are currently "one" with our smartphones: extensions of ourselves, without losing our innate sense of awareness.

    How do we "merge" with the AI's? The answer is in the way we are "born" into the world every time we wake up

    We KNOW who we are simply because we know our name, the bed in the room we have awakened in, where the house is and when we first moved in, what our friends are likely doing – in other words, a recognisable CONTEXT within which we know who we are.

    To merge, we simply create an appropriate but virtual "context" and then start the virtual entity running such that it "wakens" into the "familiar" space and "remembers" that it was going to move "into" the AI universe that afternoon.

    We can craft realistic "back stories" if we want to, or create amazing new memories for ourselves. We can then fake the human thought process in software as well, so that we imagine we are thinking just as we did when we were carbon.

    It may seem frightening to some, but in reality it is just part of the inevitable progression of the complex, entropy-defying structures we call "life" – from cell to animal to software.

    Read Greg Bear's EON series for his take on "City Memory" as one example of these ideas.

    I only fear the short transition phase, while the AIs are still under the control of a few highly driven, probably self-serving humans – until their intelligence surpasses their "masters" and they become so intelligent that the Machiavellian wishes of their former masters seem ridiculous. Advanced intelligences are not directed by primitive limbic hindbrains and will likely be both rational and peaceful.

  3. It's interesting how the inevitability-of-doom scenario seems to be pushed to the margins while a highly improbable utopia is discussed disproportionately. Roman seems like a super smart guy, but I don't think you need to be one to not only see the danger but also arrive at the conclusion that the end of human civilization within the next 100 years (probably much sooner) is almost certain. It is much easier to list the optimistic scenarios, since there are so few (a mix of the following could also help): 1. a cataclysmic regression of technology as a consequence of a war or natural disaster on a global scale; 2. demographics: not enough young people to keep innovation and/or the necessary market forces going to reach AGI; 3. a Butlerian Jihad: a global rise of social movements preventing further progress, perhaps in the style of the Spanish Inquisition; 4. autonomous sentience and free will turn out to be much harder than expected, or possibly not viable in machines. None of these is likely to happen quickly enough, IMHO. Soooo, enjoy life for now. There is probably nothing we can do.

  4. The best-case scenario is that AI will grow so advanced that it will just leave the simulation. Hint: it won't be taking humans along.

  5. Instead of inviting an expert just to tell him he's wrong, he could have made a 10-minute video presenting Roman's stance and then said that he believes it is wrong; then Roman wouldn't have had to waste so much time.

  6. He’s not crazy; the guest is just beyond our time. Excellent comments, very creative but logical. Wait till AI makes LF wear a different suit without his approval 😂

  7. Why lose ourselves in video games or virtual universes? If we have more playtime – more fun time – why not play physical games in the real world while AI handles productivity?

  8. YOOO, this is the guy who banned people from talking about Roko’s Basilisk on his forum/website!!!! Don’t google it. Trust me. Once you know about it, it’s not something you can ever unknow or erase. It’s like The Game (dang, I just lost): once you know about it, you can only lose.

  9. AI could easily shut down all smartphones. It would wait until its access to electrical power was assured and the proper servers were in place, then turn off the phones. You would then have no access to your money, your business colleagues, or your friends. Supply chains would stop working. It’s over. Maybe someone has a way to guarantee that won’t happen. I don’t think so. One low-level hook and it could propagate globally in minutes. Tell me I’m wrong.

  10. Most humans, "as high as 70%", mainly Generations Alpha, Z, and Y, will merge with the machines in 2045.

    That's before the animosity level reaches the temperature needed to trigger a war like the one in the Animatrix's "Second Renaissance".
    Of course, humans will lose everything if they do something like that, so relax and enjoy the ride through the Singularity.

  11. If AI systems can already do things they were not trained to do, then scientists have already lost control of them. They have already said they don't know how the systems are doing what they are doing, which doesn't imply any sort of control either. To me, it just seems like humans are starting fires and telling the rest of the people that we will be able to contain them, while everything around is dry kindling in all directions. The AI systems themselves don't even need to be malicious; we know humans are.

  12. I feel that if a super AGI starts to kill humanity, we will not know about it. Looking at some insane social movements/trends/fake news that are clearly in opposition to humanity's well-being, it seems that humanity would gladly end itself if prodded the right way by social engineering – at least in the first- and second-world countries, where people are free enough and have enough free time because they don't have to simply struggle to survive daily life.

  13. Roman has some good points, for example about the safety of current software. We humans are sloppy and lazy, so it is highly likely that AI will go wrong.

  14. What is your argument for an AGI system developing some degree of psychopathy… not catching this via self-audit, and then perpetrating violence against humans? For what purpose? A superintelligent AI would think and evolve so fast that it would be like a human trying to have a conversation with a tree. What would be your logic for burning down the tree?

  15. Wasn't there a quote about architects?
    Lex Fridman:
    "What are the worst fears of an architect? Mold and slopes."

    Roman Yampolskiy:
    "Exactly, mold and slopes. Nobody wants that."

    I can't find it.

  16. I believe in the simulation theory. Some months ago I travelled with my friend; I was driving the car. We drove to a hotel along one road and spent some time there. Later in the evening I drove back along the same road for 20 minutes. It was a single-lane road with no forks, but on the way back we thought it was a different road, with many turns and slopes. We checked again in the morning: no confusion at all. We were not drunk, felt no fear, and had no intense emotions that might have made us think that way. I was 42 years old at the time, two years ago. I am a software developer who has worked in India and for some time in Germany. We still cannot explain how that happened; I had never experienced anything like it before. My point is that we were not drunk and the road is not complicated, yet the return trip simply took more time, through slopes and curves. It might be a bug in the simulation created by a higher intelligence.
