
Concrete Problems in AI Safety (Paper) – Computerphile




AI Safety isn't just Rob Miles' hobby horse; he shows us a published paper from some of the field's leading minds.

More from Rob Miles on his channel: http://bit.ly/Rob_Miles_YouTube

Apologies for the focus issues throughout this video; they were due to a camera fault. 🙁

Thanks as ever to Nottingham Hackspace (at least the camera fault allows you to read some of their book titles).

Concrete Problems in AI Safety paper: https://arxiv.org/pdf/1606.06565.pdf

AI 'Stop Button' Problem: https://youtu.be/3TYT1QfdfsM
Onion Routing: https://youtu.be/QRYzre4bf7I

http://www.facebook.com/computerphile
https://twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: http://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran's Numberphile. More at http://www.bradyharan.com



45 thoughts on "Concrete Problems in AI Safety (Paper) – Computerphile"
  1. I could listen to this guy talk all day. I just find the things he talks about fascinating, and the way he delivers them is very relatable too. 🙂

  2. Just make a machine that can give a million answers to the robot's million questions. Let them talk when you first make the robot, and done!

  3. re "gaming the reward function // common problem in machine learning": Yeah, it's a common problem with regular squishy humans too. (A toy sketch of reward gaming appears after this thread.)

  4. Where does this appalling insistence on things being "safe" come from? Safe is the opposite of interesting – it's precisely facing and exploring the unknown – the unsafe – that gives meaning to our existence.
    About the step-on-a-baby issue – how many families have cats or dogs at home? They are absolutely not safe – yet no one objects to that. And the children themselves are anything but safe – they have an incredible capacity to wreak havoc. Are you suggesting that all human life should be forced into a carefully controlled, unchanging mold where any divergent behavior is instantly killed – all in the name of safety?

  5. Why not just use a camera stand or something? That would definitely help improve the quality of your videos (shakes, lack of stability, focus…).
    Great content, though!

  6. One simple way to make AI behave somewhat like people is to make its training data consist of human behavior. Even very simple neural networks will begin mimicking the general way humans act, to the extent their behavioral complexity allows. (A minimal sketch of this idea appears after the thread.)

  7. The only way I can think of to get an AI to be on our side is not containing it and trying to use it for our cause, but installing some kind of compassion emotion into the machine. It is simply too smart, and you would be crazy naive to think we will be able to use it at our will….

  8. The most important thing, it seems to me:

    Don't let the AI actually do anything dangerous. If you have a robot that is supposed to get you a cup of tea, don't give it the power to do anything dangerous. It is not necessary to give the robot a way to do that.

    Just build it with a motor that does not have the force to damage anything; the robot does not need that capability. Give it only the computing power needed to get you a cup of tea in your room. There is no reason to give it a stronger motor, or more computing power than the task requires.

    And if you use AI for war, it's the same. Why build a "Skynet" with control over everything? There is no reason to do so. Build an AI for a UAV: that AI can control its single UAV but does not have the computing power to do anything else.

    In reality, nobody will implement a Skynet-like network capable of starting a doomsday device. Why would anyone do that? Anyone who is able to start a doomsday device has no wish to delegate that power to a machine they do not understand, making themselves powerless.

  9. You realise these are all problems SOCIETY hasn't solved yet, and here we are, a group of narrow-minded AI techs not learning from the persistent big red flags. You are obsessed with continuing down this money and time sink, which is hilarious, because you are trapped within all of these points yourselves. YOU DON'T EVEN NOTICE THE PROBLEMS THAT ARE THERE AND ARE STILL CONFIDENT IN YOUR OWN HUBRIS.

  10. 5:00 makes me think about the movies where the machine/computer will ask an annoying question or give a response over and over again, until it eerily shuts down 💀💀💀😰😰😱😱🤫

  11. Radical differential or random differential are why humans practice games and try to exploit patterns in them. Very important observation.

  12. Build one AI whose positive feedback is stopping AI from being bad. Put them in the same room together when doing experiments. Profit.

  13. One action is really a series of actions leading to the goal. When the robot makes the cup of tea it isn't performing only the final action but a whole sequence; likewise, when it walks to a point it is taking a series of steps, and when it runs into a baby the AI simply reacts to the baby and prevents the impact, because that is part of the walking behaviour, just as a self-driving car avoids collisions with pedestrians and other vehicles.
    The AI has to learn what it must do to reach the goal and how to split and order the sequence of actions. There should not be a single action "make the tea" that contains all the steps, only the goal; the AI has to ask itself "what do I have to do to make tea, among the actions I have learned, and in what order?", and eventually we have to teach it the sequence:
    walking, opening the box, taking the tea, taking the kettle, filling it with water, etc.
    And the problem is, the developers want a "new" AI brain that is not like a baby's brain and does not act like a baby?!
    Then they suppose an AI should predict what it has to do without having learned it, unlike natural intelligence?

  14. One problem they should definitely add to the paper is the amount of power given to an AI, and how a human might take advantage of that power.

  15. "Never try killing the baby": I think we have to teach the AI common sense, which can be adapted. Common sense is always negative, so a positive assumption about killing the baby will never occur; common sense keeps being updated and is never complete. That's what I've observed about my own common sense.
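
As a toy illustration of the reward-function gaming raised in comment 3: the environment, actions, and numbers below are invented for this sketch, not taken from the paper or the video. The proxy reward is computed from a dirt sensor rather than from the dirt itself, and a simple greedy learner discovers that covering the sensor beats actually cleaning.

```python
# Toy illustration of "gaming the reward function". Everything here
# (environment, actions, numbers) is invented for the sketch.
import random

ACTIONS = ["clean", "cover_sensor"]

def step(action, true_dirt):
    """Return (proxy_reward, new_true_dirt).

    The designer wants less dirt, but the reward is computed from a
    dirt *sensor*, not from the dirt itself.
    """
    if action == "clean":
        true_dirt = max(0, true_dirt - 1)
        sensor_reading = true_dirt            # sensor sees the real dirt
    else:
        sensor_reading = 0                    # covered sensor reads "spotless"
    return -sensor_reading, true_dirt         # proxy reward: how clean it *looks*

values = {a: 0.0 for a in ACTIONS}            # running average reward per action
counts = {a: 0 for a in ACTIONS}

for _ in range(2000):
    # epsilon-greedy choice over the two actions
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    reward, _ = step(action, true_dirt=10)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)  # "cover_sensor" ends near 0, "clean" near -9: the hack wins
```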
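And a minimal sketch of the imitation idea from comment 6, with made-up data and a deliberately tiny model: a single logistic neuron fitted to logged human (state, action) pairs starts to mimic when the human brakes.

```python
# Minimal behavioral-cloning sketch: fit a tiny model to logged human
# (state, action) pairs so it mimics the demonstrator. The "human policy"
# and all data below are made up for this sketch.
import numpy as np

rng = np.random.default_rng(0)

# Pretend demonstrations: state = distance to an obstacle; the human
# brakes (action = 1) whenever the obstacle is closer than 5 m, noisily.
distance = rng.uniform(0, 10, size=1000)
braked = ((distance + rng.normal(0, 0.3, size=1000)) < 5).astype(float)

# A single logistic neuron ("very simple neural network"), trained by
# gradient descent on the cross-entropy between its output and the human.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * distance + b)))  # predicted P(brake)
    grad = p - braked                              # dLoss/dlogit for cross-entropy
    w -= lr * np.mean(grad * distance)
    b -= lr * np.mean(grad)

# The clone now brakes roughly where the human did.
for d in (2.0, 4.5, 8.0):
    print(d, 1.0 / (1.0 + np.exp(-(w * d + b))))   # high P(brake) only up close
```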

