Videos

A.I. Apocalypse: More Myth Than Reality | Steven Pinker



Big Think

Steven Pinker believes there’s some interesting gender psychology at play when it comes to the robopocalypse. Could artificial intelligence become evil, or are alpha-male scientists just projecting?

Read more at BigThink.com: http://bigthink.com/videos/steven-pinker-on-artificial-intelligence-apocalypse

Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink

Transcript – I think that the arguments that once we have super intelligent computers and robots they will inevitably want to take over and do away with us come from Prometheus and Pandora myths. It’s based on confusing the idea of high intelligence with megalomaniacal goals. Now, I think it’s a projection of alpha-male psychology onto the very concept of intelligence. Intelligence is the ability to solve problems, to achieve goals under uncertainty. It doesn’t tell you what those goals are. And there’s no reason to think that just the concentrated analytic ability to achieve goals is going to mean that one of those goals is going to be to subjugate humanity or to achieve unlimited power. It just so happens that the intelligence that we’re most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process.

Which means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way. If we create intelligence, that’s intelligent design. I mean, it’s our intelligent design creating something, and unless we program it with a goal of subjugating less intelligent beings, there’s no reason to think that it will naturally evolve in that direction, particularly if, as with every gadget that we invent, we build in safeguards. I mean, when we have cars we also put in airbags, we also put in bumpers. As we develop smarter and smarter artificially intelligent systems, if there’s some danger that they will, through some oversight, shoot off in some direction that starts to work against our interests, then that’s a safeguard that we can build in. Read Full Transcript Here: http://goo.gl/pAoGMt.


25 thoughts on “A.I. Apocalypse: More Myth Than Reality | Steven Pinker”
  1. Humans are the ultimate nanobots: we can build, solve, evaluate and adapt to situations. Unfortunately (in some respects) we have emotions.

  2. That’s an extremely dumbed-down way of looking at this, which is very odd considering who this guy is.
    Most people don’t think our AIs will suddenly turn into Hitler and try to take over the world just because they want power. Most think that our AIs will take over the world for purposes other than power, mostly to preserve humankind. Kind of like what the AIs did in The Matrix, or what the robots did in I, Robot. Most of us already know that an AI will never be able to think about its own benefits unless its designer wants it to.
    And regarding that last comment: whether this guy likes it or not, females are still human. Humans experienced Darwinian evolution, not just males. Females still have a wish for power; they’re just not as competitive as males. And that is due to the way humans mate. Take the mating part away, and you’ll find that both genders are equally thirsty for power.

  3. The only thing I can agree with the speaker on is that (human) intelligence is a product of centuries of natural selection. Then everything went downhill after that point… smh

  4. An Unfriendly AI need not "want" to dominate us, but we can very easily get in the way of it optimizing its goals. It wouldn't want to dominate us any more than we seek to dominate an anthill by tearing it down to build skyscrapers.

    What has been true in the world is that beings with differing goals have competed for limited resources in achieving their goals, and intelligence is the most powerful tool in acquiring the most resources, so a being that is smarter than humanity would outcompete us in every way it deemed necessary.

  5. I would say that this is a video that can easily be misunderstood if you haven’t read the man’s work, and it doesn’t do justice to what he actually suggests in his books. He speaks from a very Darwinian perspective in which, in general, men are much more competitive and likely to kill each other over trivial things such as a parking spot, and profit-driven CEOs are not that interested in how much money they are making, but in how much MORE money they are making compared to the next guy.

    I have read some of Pinker’s books and he’s definitely not arguing for absolute differences between the sexes. He’s also a big critic of radical gender policies in universities, an advocate for free speech, and a critic of radical positions associated with post-modernism.

  6. I thought all the dislikes came from people worried that he’s not taking into account Nick Bostrom’s paperclip maximizer parable, which shows that malevolence isn’t a prerequisite for A.I. to be dangerous. Turns out it’s just butthurt men who can’t take a joke.

  7. I have a great deal of respect for Steven Pinker, but he’s dead wrong here. He is completely strawmanning, and considering his usual level of scientific rigor, it’s surprising that he would comment on something he is so obviously uninformed about. The argument isn’t that high intelligence is correlated with megalomaniacal goals, as he claims. There is actually a lot of concrete research into many areas of AI safety. He should actually read some of this research, and if he has any solutions to some of the big unsolved problems in AI safety, such as the control problem and the value alignment problem, then he should publish them.

  8. AI is just the next step of evolution. Imagine how much AI could accomplish in just half the time humanity has been around. The dinosaurs had their time, and now humans have had theirs. Now it’s our time.

  9. What a sexist. Women can dominate others just as well as men; they just use different mechanisms. The idea that women don’t have the drive or mental ability to become the dominant member of a group is ridiculous, patronising, and easily disproved by reality.

  10. Me, you, and Ned: we are all organic and have the ability to learn, the ability to form opinions, and free will/action. I am a law-abiding citizen for the most part, but I smoke pot; I am not violent. You obey the law to a T; you are not violent. Ned, in the same health as you and me, is a serial killer. We are all human and have the same mind on day 1.

    Now, thinking worst-case scenario, AI is targeted to be a duplicate of a human mind. How the hell do we know Sophia isn’t Ned?

  11. Sorry guys. You don't want political correctness, but you don't want to hear this. You can't have it both ways. I like that he is so blunt about the male dominance bullshit that has cost human civilization terribly for thousands of years. Deal with it. Btw I'm a man, but not a man-baby.

  12. Pinker overlooks four points:

    1. The danger of AI apocalypse doesn't require that all AI inevitably has megalomaniacal desires. It's enough if some AI has such desires.

    2. He mentions that humans are a product of evolution, which is why we have desires to dominate others. But there might be an evolutionary process with AI as well: we might set up a situation in which AIs generate other AIs, which starts an evolutionary process of its own (a toy sketch of such a selection loop follows this comment).

    3. He overlooks that having power over humans might be an instrumental good for a wide range of ultimate goals. For an extremely wide range of ultimate goals, humans are capable of interfering, unless they are thoroughly subjugated.

    4. He says AI won't have megalomaniacal desires unless we program it with such desires. This overlooks that modern programs are extremely complex, and many of the consequences of a program are unanticipated by the programmers.
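
To make point 2 above a bit more concrete, here is a minimal Python sketch of such a selection loop. It is purely an illustration, not anything Pinker or the commenter describes: the population size, the idea of a shared resource, and the task_score function are all invented, and the only assumption doing the work is that claiming more of a shared resource happens to correlate with scoring better on whatever benchmark the AIs are selected on.

    # Toy selection loop for comment 12, point 2. All names and numbers are
    # invented for illustration; the working assumption is that claiming more
    # of a shared resource correlates with a better benchmark score.
    import random

    random.seed(0)

    POP_SIZE = 50
    GENERATIONS = 40
    SHARED_RESOURCE = 100.0  # e.g. compute or data that other users also need


    def task_score(acquisitiveness: float) -> float:
        """Benchmark score; by assumption, more resources means a better score."""
        resources_taken = acquisitiveness * SHARED_RESOURCE
        return resources_taken + random.gauss(0, 5)  # noisy evaluation


    # Each "agent" is reduced to a single trait in [0, 1]: the fraction of the
    # shared resource it tries to claim. The initial population is modest.
    population = [random.uniform(0.0, 0.2) for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        # Selection: keep the half of the population with the best scores.
        ranked = sorted(population, key=task_score, reverse=True)
        survivors = ranked[: POP_SIZE // 2]
        # Reproduction with small mutations (the "AIs generating AIs" step).
        children = [min(1.0, max(0.0, s + random.gauss(0, 0.05))) for s in survivors]
        population = survivors + children

    average_claim = sum(population) / len(population)
    print(f"average fraction of the shared resource claimed: {average_claim:.2f}")

Under that assumption the average claim typically drifts from around 0.1 toward 1.0 within a few dozen generations, even though no agent was ever given "seek power" as a goal; the pressure comes entirely from the selection step, which is exactly the worry the comment raises.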
