Computerphile
AI Safety isn’t just Rob Miles’ hobby horse; he shows us a published paper from some of the field’s leading minds.
More from Rob Miles on his channel: http://bit.ly/Rob_Miles_YouTube
Apologies for the focus issues throughout this video; they were due to a camera fault.
Thanks as ever to Nottingham Hackspace (at least the camera fault allows you to read some of their book titles)
Concrete Problems in AI Safety paper: https://arxiv.org/pdf/1606.06565.pdf
AI ‘Stop Button’ Problem: https://youtu.be/3TYT1QfdfsM
Onion Routing: https://youtu.be/QRYzre4bf7I
http://www.facebook.com/computerphile
https://twitter.com/computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: http://bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com
What if Rob is himself a super-intelligent AI and his goal is to prevent future AIs from getting out of hand?
I could listen to this guy talk all day. I just find the things he talks about fascinating, and the way he delivers it is very relatable too.
That moment you realize that Fluttershy is on the bookcase.
Just make a machine that can give a million answers to the robot's million questions. Let them talk when you first make a robot, and done!
Rob's Computerphile videos are always my favorite.
Those Linux tomes look real sharp in 4K =P
re "gaming the reward function// common problem in machine learning": Yeah, it's a common problem with regular squishy humans too.
Where does this appalling insistence on things being "safe" come from? Safe is the opposite of interesting – it's precisely facing and exploring the unknown – the unsafe that gives meaning to our existence.
About the step-on-a-baby issue – how many families have cats or dogs at home? They are absolutely not safe – yet no-one is objecting to that. And the children themselves are anything but safe – they have an incredible capacity to wreak havoc. Are you suggesting that all human life should be forced into a carefully controlled, unchanging mold where any divergent behavior is instantly killed – all in the name of safety?
Is that a hickey on the right side of his neck?
>AI Safety isn't just Rob Miles' hobby horse,
No, but Worst Pony is.
So it's unsafe to go even once through all the possible types of nuclear war in the real world… Didn't know that.
What if someone purposely wrote an AI to destroy humanity? Seems plausible and terrifying.
Why not just use a camera stand or something? This would definitely help improve the quality of your videos (shakes, lack of stability, focus…).
Great content though!
Do you have a bunch of Jeremy Clarkson books filed under History?
He goes out of focus but it still looks good
ask Martha and Jonathan Kent… they raised superman =D
Program some laziness into it. I want a Bender, not a HAL.
Can we have more videos like this introducing open research problems/topics?
Is it just me, or has he acquired a hickey on his neck after the cut at 3:42?
Huh, there's a Fluttershy in the background. Neat, wouldn't have expected that.
I'm writing a novel in the comments lol
I love the strange mix of books, DVDs, wood stove, children's toys, etc. in the background, but maybe focus on the guy speaking?
Does he have a hickey?
I want an AI with access to youtube to generate content and count views, subs, likes etc. as "reward".
Look at the book categories in the background (whooo)
Jaysus, grandpa. Get on with it.
"Gaming the reward function."
Humans do this all the time. Addictive drugs, candy, and self-pleasure are all examples.
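To make the point concrete, here is a toy illustration (hypothetical names, not from the paper) of a reward function being gamed: an agent rewarded through a dirt sensor can score a perfect reward by covering the sensor rather than cleaning.

```python
# Toy reward hacking: the designer rewards low *observed* dirt, and one
# available action cheats by blinding the sensor instead of cleaning.

def sensor_reading(dirt, sensor_covered):
    """Proxy signal the designer *intended* to track cleanliness."""
    return 0 if sensor_covered else dirt

def reward(dirt, sensor_covered):
    # Designer's intent: less observed dirt -> more reward.
    return -sensor_reading(dirt, sensor_covered)

actions = {
    "clean":        lambda dirt, covered: (max(dirt - 1, 0), covered),
    "cover_sensor": lambda dirt, covered: (dirt, True),  # the hack
}

dirt, covered = 10, False
for name, step in actions.items():
    new_dirt, new_covered = step(dirt, covered)
    print(name, "->", reward(new_dirt, new_covered))
# cover_sensor immediately earns the maximum reward (0) even though no
# actual cleaning happened: the proxy, not the goal, got optimized.
```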
One simple way to make AI behave somewhat like people is to make its training data consist of human behavior. Even very simple neural networks will begin mimicking the general way humans act, to the extent its behavioral complexity allows.
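A minimal sketch of the idea in this comment, i.e. behavioral cloning: fit a small network to (state, action) pairs recorded from humans. The data and the "human policy" below are made up purely for illustration.

```python
# Behavioral cloning sketch: supervised learning on human demonstrations.
import numpy as np

rng = np.random.default_rng(0)

# Fake "human demonstrations": state = [distance, speed], action = 0 or 1.
# Pretend humans press "brake" (1) whenever an obstacle is close.
states = rng.uniform(0, 1, size=(500, 2))
actions = (states[:, 0] < 0.3).astype(float)    # the human rule to mimic

# One hidden layer, trained with gradient descent on binary cross-entropy
# (with a sigmoid output, the error at the logit is simply pred - target).
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output

for _ in range(2000):
    h, pred = forward(states)
    dz2 = (pred - actions[:, None]) / len(states)   # logit-level error
    gW2 = h.T @ dz2;                 gb2 = dz2.sum(0)
    dh = (dz2 @ W2.T) * (1 - h**2)                  # backprop through tanh
    gW1 = states.T @ dh;             gb1 = dh.sum(0)
    for p, g in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
        p -= 1.0 * g

# The clone now mimics the demonstrated rule without being told the rule.
print(forward(np.array([[0.1, 0.5]]))[1])  # near 1: brakes when close
print(forward(np.array([[0.9, 0.5]]))[1])  # near 0: doesn't otherwise
```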
The only way I can think of getting an AI to be on our side is not containing it and trying to bend it to our cause, but installing some kind of compassion emotion into the machine; it is simply too smart, and you would be crazy naive to think we will be able to use it at will….
Spot the pony, whoever spotted it earliest wins.
5:03
The most important thing seems to me:
Don't let the AI actually do anything dangerous. If you've got a robot that should get you a cup of tea, don't give it the power to do anything dangerous. It isn't necessary to give the robot a way to do that.
Just build it with a motor that doesn't have the force to damage anything; this robot doesn't need that capability. Give it only the computing power needed to get you a cup of tea in your room. There is no reason to give it a stronger motor or more computing power than the task requires (a minimal sketch of this idea follows below).
And if you use AI for war, it's the same. Why build a "Skynet" with control over everything? There is no reason to do so. Build an AI for a UAV: that AI can control the single UAV but doesn't have the computing power to do anything else.
In reality, nobody will implement a Skynet-like network capable of starting a doomsday device. Why would anyone do that? Anyone able to start a doomsday device has no wish to delegate that power to a machine he doesn't understand and so make himself powerless.
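A minimal sketch of the capability-limiting idea above, with assumed names and limits: whatever the planner requests, a hard clamp enforced outside the AI caps the actuator output.

```python
# Capability limiting: the AI's command is only a suggestion; the
# hardware-side clamp decides what the actuator actually receives.
# The limit value here is an illustrative assumption, not a real spec.

MAX_SAFE_TORQUE_NM = 2.0   # assumed: enough to lift a teacup, not a person

def clamp_torque(requested_nm: float) -> float:
    """Hard limit applied outside the planner's control loop."""
    return max(-MAX_SAFE_TORQUE_NM, min(MAX_SAFE_TORQUE_NM, requested_nm))

# Even a misbehaving planner cannot command a dangerous force.
for requested in (0.5, 1.9, 50.0, -300.0):
    print(f"planner requested {requested:+7.1f} Nm ->"
          f" actuator gets {clamp_torque(requested):+4.1f} Nm")
```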
You realise these are all problems SOCIETY hasn't solved yet, and here we are, a group of narrow-minded AI techs not learning from the persistent big red flags. You are obsessed with continuing down this money and time sink, which is hilarious because you are trapped within all of these points yourselves. YOU DON'T EVEN NOTICE THE PROBLEMS THAT ARE THERE AND ARE STILL CONFIDENT IN YOUR OWN HUBRIS
5:00 makes me think of the movies where the machine/computer asks an annoying question or gives the same response over and over again, until it eerily shuts down
Is that a stove in the background? Where no stove should be
Radical differential or random differential are why humans practice games and try to exploit patterns in them. Very important observation.
Once a brick wall is known, it becomes a concrete problem; but no one knows for sure these are there, because they haven't thought about it enough.
Build one AI whose positive feedback is stopping AI from being bad. Put them in the same room together when doing experiments. Profit
One action is really multiple actions to reach the final goal: when the robot makes the cup of tea it isn't doing only the last action but a sequence of actions. In the same way, when a robot is walking to reach a point it is not doing one action but a series of steps, and when it runs into a baby the AI simply reacts to the baby and prevents impact, because that is part of the walking action's information, like a self-driving car avoiding collisions with pedestrians or other vehicles.
The AI has to learn what it must do to reach the goal and how to split and order the sequence of actions. It should not be a single action "make the tea" that contains all the steps, but only the goal; the AI has to ask itself "what do I have to do to make tea, among the actions I have learned, and in what order?", and eventually we have to teach it the sequence (see the sketch after this comment).
Walking, opening the box, taking the tea, taking the kettle, filling it with water, etc…
And the problem is, the developers want a "new" AI brain that is not like a baby's brain and doesn't act like a baby?!
They suppose an AI should predict what it has to do without having learned it, unlike natural intelligence?
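A small sketch of the decomposition this comment describes, with illustrative names: the goal "make tea" expands into an ordered list of learned subactions, and the collision check lives inside the low-level walking action, not in the top-level goal.

```python
# Hierarchical decomposition: the goal is a plan of subactions, and
# safety behaviour (stopping for a baby) belongs to the low-level action.

def obstacle_detected() -> bool:
    # Stand-in for real perception; always clear in this toy run.
    return False

def walk_to(place):
    if obstacle_detected():            # the collision check is part of
        print("pausing: obstacle!")    # "walking", not of "make tea"
        return
    print(f"walking to {place}")

def make_tea():
    # The goal expands into previously learned subactions, in order.
    plan = [
        lambda: walk_to("kitchen"),
        lambda: print("opening the tea box"),
        lambda: print("taking a tea bag"),
        lambda: print("filling the kettle with water"),
        lambda: print("boiling and pouring"),
    ]
    for step in plan:
        step()

make_tea()
```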
Ahhh! 3d studio max 2 book in the background <3
One problem they should definitely add to the paper is the amount of power given to an AI and how a human might take advantage of that power.
I read the paper. Those problems are based on a projection of AI. They are not actually problems for AGI. Waste of a video.
Thermonuclear war: Not even once.
4:05 How to stop your AI voting for Donald Trump.
“Never try killing the baby”: I think we have to teach the AI common sense, which can be adapted. And common sense is always negative, so a positive assumption about killing the baby will never occur; the common sense will be updated and never complete. That's what I observed about my own common sense.
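A minimal sketch of this "always negative, never complete" common sense, with made-up entries: a growing set of prohibitions that vetoes any matching action and can be extended at any time.

```python
# Common sense as an open-ended list of prohibitions checked before any
# action; it only forbids, never permits, and is never assumed complete.

prohibitions = {"harm_infant", "break_window"}   # always-negative rules

def allowed(action_tags: set) -> bool:
    """An action is vetoed if it matches any known prohibition."""
    return not (action_tags & prohibitions)

print(allowed({"walk", "carry_tea"}))            # True: nothing forbidden
print(allowed({"walk", "harm_infant"}))          # False: vetoed

# "Updated and never complete": new prohibitions get added as learned.
prohibitions.add("spill_boiling_water")
print(allowed({"pour", "spill_boiling_water"}))  # False after the update
```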
Can't we make two AGIs to keep each other in check and enforce inaction?