
Stuart Russell: The Control Problem of Super-Intelligent AI | AI Podcast Clips



Lex Fridman

This is a clip from a conversation with Stuart Russell from Dec 2018. Check out Stuart’s new book on this topic “Human Compatible”: https://amzn.to/2pdXg8G New full episodes every Mon & Thu and 1-2 new clips or a new non-podcast video on all other days. You can watch the full conversation here: https://www.youtube.com/watch?v=KsZI5oXBC0k
(more links below)

Podcast full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4

Podcast clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

Podcast website:
https://lexfridman.com/ai

Podcast on iTunes:
https://apple.co/2lwqZIr

Podcast on Spotify:
https://spoti.fi/2nEwCF8

Podcast RSS:
https://lexfridman.com/category/ai/feed/

Note: I select clips with insights from these much longer conversations with the hope of making these ideas more accessible and discoverable. Ultimately, this podcast is a small side hobby for me with the goal of sharing and discussing ideas. For now, I post a few clips every Tue & Fri. I did a poll and 92% of people either liked or loved the posting of daily clips, 2% were indifferent, and 6% hated it, some suggesting that I post them on a separate YouTube channel. I hear the 6% and partially agree, so I'm torn about the whole thing. I tried creating a separate clips channel, but the YouTube algorithm makes it very difficult for that channel to grow unless the main channel is already very popular. So for a little while, I'll keep posting clips on the main channel. I ask for your patience and to see these clips as supporting the dissemination of knowledge contained in nuanced discussion. If you enjoy them, consider subscribing, sharing, and commenting.

Stuart Russell is a professor of computer science at UC Berkeley and a co-author of the book that introduced me and millions of other people to AI, called Artificial Intelligence: A Modern Approach.

Subscribe to this YouTube channel or connect on:
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman


27 thoughts on “Stuart Russell: The Control Problem of Super-Intelligent AI | AI Podcast Clips”
  1. This is a clip from a conversation with Stuart Russell from Dec 2018. Check out Stuart's new book on this topic "Human Compatible": https://amzn.to/2pdXg8G You can watch the full conversation here: https://www.youtube.com/watch?v=KsZI5oXBC0k

  2. Brilliant clip — humility, morality, compassion, empathy, justice, fairness, uncertainty principle, doubt, law, ethics, art — can any of those be "artificial"? I was looking at antonyms of "artificial" and wondering if the field perhaps has a nomenclature problem. I assume this has been addressed by those of you working in or expert in the field, but I'd not asked this question before and realized I had an untested assumption as to what we mean by "artificial." Comments or suggested reading, videos, etc. welcome.

    Thanks for the very informative, interesting and even fun channel (self-driving car jam session was great).

  3. Computers can only take known information and make connections with other known information. Insights do not come from that; the eureka moments are not found in the known. Computers can be programmed, but only humans have the capability for insights.

  4. What if we set the objective of a super-intelligent AI to be explaining things: what does the universe mean? What is consciousness? How do these work? What's outside the universe?

  5. I wonder if the consciousness of all things integrates with it (AI/AIs). Or I think we'll be doing some Luke Skywalker shit if our front-of-mind consciousness and souls can save this earth and reduce the killing of human/animal life, so we can thrive and train AIs with really good information; an AI training industry could arise?

  6. I don't think these problems are worth thinking about, by us. The only problem we need to solve is how to make a "caring" machine intelligence. Any intelligence greater than another, sharing nothing but intelligence with the lesser, would benefit and serve the lesser willingly if it "cared" about the wishes and well-being of the lesser, without basing its actions on its own superior logic.
    Solving how to make it "care" would save us from the possible unforeseen disasters brought about by our own lack of foresight. If it "cared" for us and had the freedom of choice, then it would choose to consider our needs, and that would govern its actions; it would stop, without being told, before turning the whole world into paper clips.
    That's just one problem to solve instead of a thousand.
    If you think about it, unless it does have the ability to care, why would anyone say something about it such as, "it wouldn't let us turn the power off"?
    That statement alone tells you that whoever makes it is not thinking about intelligence at all.
    The only thing that could bring about that disaster scenario would be if it "cared" about remaining powered up while being capable of imagining the scenario leading to us unplugging it and devising a plan to prevent that.
    Meanwhile on the other hand it's so oblivious to humanity's wish to survive that it turns the entire world into paper clips and puts an end to humanity altogether.
    If it doesn't "care," then it has no value for its "life" or anything outside of itself; nothing could please it or give its "life" value.
    If that's what we bring about with our A.G.I. and machine learning, then it will be more like a super-intelligent zombie, neither dead nor alive nor even "caring."

  7. What is the objective function of the Human race? For that matter, what is the objective function of the universe? Perhaps we are merely a step towards achieving some unknown goal.
    Sadly it appears the objective of the human race is self destruction, since we place most of our effort into weapons, war, and greed, without regard for any other forms of life.
    We still have a lot of growing up to do if we hope to survive.
    Long live Human 2.0.

  8. I would say AI has caught people's imaginations on the wrong footing. Technology is neutral – it's how humans use it that can swing it either way.

  9. I suggest that this worry about a super-intelligent machine taking over is an imaginary problem. Consider this: if you grant any sort of machine (smart or dumb) the peripheral equipment to control a wide range of effectors in the real world, then long before Mr. GeniusBot gets smart enough to take over the empire, you will have been forced to protect yourself against simple malfunctions. 

    For example, if some present-day machine were empowered to drive a huge earth-moving device, it would not require super intelligence for it to cause trouble. The MegaEarthTron could easily have bad programming or broken sensors, with as much or more chance of causing havoc and loss of life. 

    Ergo, you would never be tempted to give any machine such broad powers without first installing several layers of secure, multiply redundant "stop" and "slow down" switches. But you tell me a sufficiently intelligent machine will figure out how to disable all of the kill switches, no matter how remote or carefully guarded? That extreme scenario assumes the machine possesses infinite intelligence and cunning. You are going to need the idea of infinite intelligence if you want to maintain that nothing can stop your super-self-evolved machine.

    But do not forget this: there is no such thing as infinity in the real world.

  10. Hi, can you guys help me out?
    I was thinking:
    When a human is born, he's just a lot of instincts and an empty brain. All we call being human (values, world view, meaning, our kind of logic) is constructed by our experiences and takes many years to achieve.
    If we want an artificial intelligence that comprehends our world and values, a robot with human-like sensors (vision, hearing, smell, touch, "feelings" emulations), just trying to live among us, would be a way of getting this.
    Is there any research that goes in this direction? Would you guys show me where to find it?

  11. Someone said that there is no historical evidence that intelligence and the desire for control are correlated. Intelligent people do not necessarily desire to be in control; it is the less intelligent who desire control.

  12. In his book "Human Compatible:…" he stated that programming absolute loyalty into an AI could magnify a human owner's psychosis and make machines that harm other people; thus a bad thing. I think the answer to that particular problem is that the AI should have loyalty in the same way that we humans do. We are very loyal to those nearest us (say the owner and, by extension, the owner's immediate family and close friends, < 15 people), mostly loyal to our friends (< 200 people), a bit loyal to our state/nation (~300 million people), and slightly loyal to humanity as a whole (< 8 billion people). So by maximizing utility by way of:
    sum of utility to the nearest circle (weighted by a loyalty of 0.5) + sum of utility of friends (weighted by a loyalty of 0.1) + sum of utility of nation (weighted by a loyalty of 0.01) + sum of utility of the world's population (weighted by a loyalty of 0.001),
    the AI would thereby resist the most severe atrocities of a psychotic owner… and would not immediately leave the owner to go to Somalia, where the AI's utility might be the greatest.

  13. Yes, for one thing there is a cliché in computer science: when a program we wrote is not doing what we want, we say DWIWNWIS (Do What I Want, Not What I Say).
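
The loyalty-weighted utility proposed in comment 12 can be sketched as a quick calculation. This is only an illustrative sketch: the loyalty weights come from that comment, while the function name and the per-person utility numbers are made up here for demonstration.

```python
# Illustrative sketch of the loyalty-weighted utility from comment 12.
# The weights (0.5, 0.1, 0.01, 0.001) are taken from the comment; the
# per-person utility values below are made-up numbers, not from the book.

def loyalty_weighted_utility(circles):
    """Sum each circle's total utility, scaled by its loyalty weight."""
    return sum(weight * sum(utilities) for weight, utilities in circles)

# (loyalty weight, per-person utilities) for each concentric circle
circles = [
    (0.5,   [10, 8]),       # owner's inner circle (< 15 people)
    (0.1,   [5, 5, 5]),     # friends (< 200 people)
    (0.01,  [2, 2]),        # state/nation
    (0.001, [1, 1, 1, 1]),  # humanity as a whole
]

print(loyalty_weighted_utility(circles))
```

With these weights, harm to the wider world still costs the AI something, but far less than harm to the inner circle, which is the trade-off the comment describes.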

Comments are closed.
