
AI Safety – Computerphile



Computerphile

Safety in AI is important, but it is even more important to work it out before working out the AI itself. Rob Miles on AI safety.

Brain Scanner: https://youtu.be/TQ0sL1ZGnQ4
AI Worst Case Scenario – Deadly Truth of AI: https://youtu.be/tcdVC4e6EV4
The Singularity & Friendly AI: https://youtu.be/uA9mxq3gneE
AI Self Improvement: https://youtu.be/5qfIgCiYlfY
Why Asimov’s Three Laws Don’t Work: https://youtu.be/7PKx3kS7f4A

Thanks to Nottingham Hackspace for the location.

http://www.facebook.com/computerphile
https://twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: http://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com


35 thoughts on "AI Safety – Computerphile"
  1. So once it's human-level conscious, yet we give it multi-core attention spans and huge data storage resources, at what point is "artificial" still in the picture? You mean "different" – you are the artificially intelligent one.

  2. Instead of trying to create an ASI, why don't we try to merge our brains with an AGI? That way we would have the self-advancing nature of an ASI with our human emotions. Either that, or we don't put any rules into the ASI and hope it won't give two shits about us until we try to get in its way. If I am wrong on either of these points, please share.

  3. Create a simulator where AI interacts with simulated humans. After enough time, adjustments, and lessons learned, it could be accepted as safe and implemented in physical systems.

  4. 5:08 "Or we discover that the brain is literally magic" lol

    "Uh, sir, we may have encountered a problem in our general AI project."
    "Well what is it?"
    "We've discovered that human brains are literally magic."

  5. Would some kind of democracy work to try and keep our artificial overlords in check? I know it's not perfect with human leaders, but maybe it's the best we can do?

  6. The very first query, the very first task, that has to be given to any general AI has to be: "design yourself so you can coexist with humanity and not become a threat, given all possible outcomes".

    … and then just wait and hope that the machine doesn't shut itself down… only then would you have your safe general AI.

  7. The near-term incentives for developing AI safety are so low compared to the massive incentives for developing general AI that the advocates for AI safety will be in the same situation as the advocates for reducing man-made global warming. The payout is very one-sided.

  8. Who said !COLD! fusion was 50 years away? LOL. Fusion my man, FUSION has been consistently estimated to be 50 years away. LOL. Physics =FTW=

  9. Rob doesn't overestimate the potential danger of AI. He underestimates the existing danger of the social system. We will never be able to solve the problem of AI safety when the most powerful people benefit from surveillance, war and exploitation. AI is already being used to advertise to us, to shape the information we get, to maximize profits at the expense of working people. Unpredictability is not the issue so long as we can predict that even if the first general AI is benevolent, it won't stay that way.

  10. The main thing these theories are missing, IMHO, is that this is the real world, so the AIs we are going to produce are in the real world too. But all these theories – e.g. the stamp collector AI – are based on the assumption that the AI has perfect information and is able to optimize in a perfect way, which is impossible.

    So IMHO all serious theories of AI safety have to consider that the AI has:
    * limited resources
    and
    * limited information

    If you are not considering this, you are just creating pointless mind games about a godlike entity which – for any reason – takes commands from humans. Which is somewhat theological…

  11. Working in the field myself, I can pretty easily state that very few are interested in making actual AGI. There's no long term profit involved in a machine that can make decisions that are not guaranteed to be useful to humans. If a product fails to meet expectations, it's treated as defective. In the case of an AI, its "intelligence" is defined by how useful it is to humans. We define the expectations of what it means to be intelligent.

    You can't reward an AI the same way that you can reward a human. There are biological drives for us to work and fears of punishment if we break the rules. What incentive would an AGI have to do what you tell it?

  12. The problem of AI safety seems similar to the problem of government regulation:
    keeping entities with value functions of "maximise profit" friendly.

  13. General AI will still need to be trained, if human brains are anything to go by. We see over and over again how human potential is wasted as a result of poor training and/or education.

  14. You can't tell AI the concept of right or wrong. Philosophy tells us that. Conscious thoughts can only be explained in conscious ways. Try to explain anger in terms of 0's and 1's for instance.

  15. Do you think that the AI would try to learn our values so as not to violate them (so long as we have power to shut it off) and then go rampant? Or maybe if the AI was always at risk of being shut down it would just assume our values?

    What do you guys think?

  16. I had a thought… what if it is inherently impossible? When I hear "safe", I think we can almost swap that 1:1 with "enslaved". We want AI as a tool to work for us. The problem with that is that if it's of a similar intelligence to us – never mind if it's 1000x more intelligent – I'm not sure it will be possible to keep it happy with serving us day in and day out, and that's aside from the ethical question of whether it should be made to do so.

  17. A possible answer to the "Great Filter": general A.I. is easier to build, or otherwise arises quicker or more "powerful", than SAFE A.I., and therefore the general A.I. always wins the evolutionary race for control, and planets where intelligent life USED to occur are now covered in stamps.

  18. Dr Miles

    I cite you in my AI work. I warn people.

    You’re saying “We will probably develop an engine that can accelerate itself exponentially fast, and that we will be hitched to in counterintuitive ways, and that does not have a well-defined steering wheel. We should make sure we have brakes.”
