Computerphile
Safety in AI is important, but it's even more important to work it out before we work out the AI itself. Rob Miles on AI safety.
Brain Scanner: https://youtu.be/TQ0sL1ZGnQ4
AI Worst Case Scenario – Deadly Truth of AI: https://youtu.be/tcdVC4e6EV4
The Singularity & Friendly AI: https://youtu.be/uA9mxq3gneE
AI Self Improvement: https://youtu.be/5qfIgCiYlfY
Why Asimov’s Three Laws Don’t Work: https://youtu.be/7PKx3kS7f4A
Thanks to Nottingham Hackspace for the location.
http://www.facebook.com/computerphile
https://twitter.com/computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: http://bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com
This guy looks like a white Morpheus… visualize it
Brains are made out of magic?
So once it's human-level conscious, yet we give it multi-core attention spans and huge data storage resources, at what point is "artificial" still in the picture? You mean "different". You are the artificially intelligent one.
What if, instead of trying to create an ASI, we tried to merge our brains with an AGI? That way we'd have the self-advancing nature of an ASI with our human emotions. Either that, or we don't put any rules into the ASI and hope it won't give two shits about us until we try to get in its way. If I'm wrong on either of these points, please share.
Create a simulator where AI interacts with simulated humans. After enough time, adjustments, and lessons learned, it could be accepted as safe and implemented in physical systems.
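To make that idea concrete, here is a hypothetical Python sketch of the "prove it safe in simulation before deployment" workflow. The environment, the toy policy, the "harm" check, and the acceptance threshold are all invented for illustration; this is not a real safety evaluation protocol.

```python
import random

random.seed(1)

def candidate_policy():
    """Stand-in for a learned policy: mostly does the task, occasionally misbehaves."""
    return random.choices(["fetch_stamp", "idle", "shove_human"],
                          weights=[0.8, 0.15, 0.05])[0]

def simulated_episode(policy):
    """One episode against simulated humans; returns (reward, harmed_a_human)."""
    action = policy()
    harmed = action == "shove_human"            # toy stand-in for an unsafe outcome
    reward = 1.0 if action == "fetch_stamp" else 0.0
    return reward, harmed

def accept_for_deployment(policy, episodes=10_000, max_harm_rate=0.0):
    """Accept the policy only if its observed harm rate in simulation is low enough."""
    harms = sum(simulated_episode(policy)[1] for _ in range(episodes))
    return harms / episodes <= max_harm_rate

print("Accepted for deployment:", accept_for_deployment(candidate_policy))
```

Note that the knobs `episodes` and `max_harm_rate` carry all the weight here: deciding how much simulated evidence counts as "enough" is exactly the judgment the comment is gesturing at.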
So… who else thinks that the brain is just magic?
5:08 "Or we discover that the brain is literally magic" lol
"Uh, sir, we may have encountered a problem in our general AI project."
"Well what is it?"
"We've discovered that human brains are literally magic."
Would some kind of democracy work to try and keep our artificial overlords in check? I know it's not perfect with human leaders, but maybe it's the best we can do?
Wait, why don't we just make another general AI to solve the AI safety problem? And then another one to solve the problem of uncertainty over whether the previous machine's solution was desirable? Etc.?
The links at the end don't work.
The very first query, the very first task, given to any general AI has to be: "Design yourself so you can coexist with humanity and not become a threat, given all possible outcomes."
… and then just wait and hope that the machine doesn't shut itself down… only then will you have your safe general AI.
The near-term incentives for developing AI safety are so low compared to the massive incentives for developing general AI that the advocates for AI safety will be in the same situation as the advocates for reducing man-made global warming. The payoff is very one-sided.
Rob, you DO know Penrose is a quack, right?
Who said !COLD! fusion was 50 years away? LOL. Fusion, my man, FUSION has been consistently estimated to be 50 years away. LOL. Physics FTW.
Did he just admit to being a clickbaiter?
3:15. Smart roads!
IMO AI should interface with the human brain and not with its own hardware, so that it can't play against us.
SOoo what you are saying is that the brain is magic?
Cannot find the right order of these videos. They always link to another one I haven't seen yet.
Rob doesn't overestimate the potential danger of AI. He underestimates the existing danger of the social system. We will never be able to solve the problem of AI safety when the most powerful people benefit from surveillance, war and exploitation. AI is already being used to advertise to us, to shape the information we get, to maximize profits at the expense of working people. Unpredictability is not the issue so long as we can predict that even if the first general AI is benevolent, it won't stay that way.
More videos with Rob. He's awesome.
The main thing these theories are missing, IMHO, is that this is the real world. So the AIs we are going to produce are in the real world too. But all these theories – e.g. the stamp collector AI – are based on the assumption that the AI has perfect information and is able to optimize it in a perfect way. Which is impossible.
So IMHO all serious theories of AI safety have to consider that the AI has:
* limited resources
and
* limited information
If you are not considering this, you are just creating pointless mind games about a godlike entity which – for any reason – takes commands from humans. Which is somewhat theological…
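A tiny illustration of that point (my own Python sketch, not anything from the video): even a very simple "optimizer" stops finding the true best plan once you cap how many candidates it can evaluate and add noise to what it knows about each one. The toy setup and all numbers are invented for illustration.

```python
import random

random.seed(0)

# Hidden "true" value of each possible plan the agent could pursue.
TRUE_VALUES = [random.gauss(0, 10) for _ in range(10_000)]

def choose_plan(budget, noise):
    """Pick a plan under bounded rationality: only `budget` candidates can be
    evaluated (limited resources), and each evaluation is a noisy estimate of
    the true value (limited information). Returns the true value of the pick."""
    candidates = random.sample(range(len(TRUE_VALUES)), budget)
    best = max(candidates, key=lambda i: TRUE_VALUES[i] + random.gauss(0, noise))
    return TRUE_VALUES[best]

ideal = max(TRUE_VALUES)  # what a perfect optimizer with perfect information gets
for budget, noise in [(10, 5.0), (100, 5.0), (1_000, 1.0)]:
    print(f"budget={budget:5d} noise={noise:4.1f} -> "
          f"picked {choose_plan(budget, noise):6.2f} vs ideal {ideal:6.2f}")
```

Here `budget` and `noise` play the role of the commenter's "limited resources" and "limited information": shrink the first or grow the second and the agent's chosen plan drifts further from the ideal one.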
What if you were to write a general AI whose goal was to make safe general AI?
Working in the field myself, I can pretty easily state that very few are interested in making actual AGI. There's no long term profit involved in a machine that can make decisions that are not guaranteed to be useful to humans. If a product fails to meet expectations, it's treated as defective. In the case of an AI, its "intelligence" is defined by how useful it is to humans. We define the expectations of what it means to be intelligent.
You can't reward an AI the same way that you can reward a human. There are biological drives for us to work and fears of punishment if we break the rules. What incentive would an AGI have to do what you tell it?
The problem of AI Safety seems similar to the problem of government regulation.
Keep entities with value functions of "maximise profit" friendly.
Drama lol!
3 laws of robotics dude.
General AI still needs to be trained, if human brains are anything to go by. We see over and over again how human potential is wasted as a result of poor training and/or education.
You can't teach an AI the concept of right or wrong. Philosophy tells us that. Conscious thoughts can only be explained in conscious ways. Try to explain anger in terms of 0s and 1s, for instance.
Please, would you add English subtitles?
Do you think that the AI would try to learn our values so as not to violate them (so long as we have power to shut it off) and then go rampant? Or maybe if the AI was always at risk of being shut down it would just assume our values?
What do you guys think?
I had a thought… what if it is inherently impossible? When I hear "safe", I think we can almost swap that 1:1 with "enslaved". We want AI as a tool to work for us. The problem with that is if it's of a similar intelligence to us – never mind if it's 1000x more intelligent – I'm not sure it will be possible to keep it happy with serving us day in and day out, and that's aside from the ethical question of if it should be made to do so.
A possible answer to the "Great Filter": general A.I. is easier to build, or otherwise arises quicker or more "powerful", than SAFE A.I., and therefore the general A.I. always wins the evolutionary contest for control, and planets where intelligent life USED to occur are now covered in stamps.
Dr Miles
I cite you in my AI work. I warn people.
You're saying, "We will probably develop an engine that can accelerate itself exponentially, that we will be hitched to in counterintuitive ways, and that does not have a well-defined steering wheel. We should make sure we have brakes."