
MIT AI: AI in the Age of Reason (Steven Pinker)




This is a conversation with Steven Pinker as part of MIT 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world. Steven Pinker is a professor at Harvard and before that was a professor at MIT. He is the author of many books, several of which have had a big impact, for the better, on the way I see the world. In particular, The Better Angels of Our Nature and Enlightenment Now have instilled in me a sense of optimism grounded in data, science, and reason. An audio podcast version is available at https://lexfridman.com/ai/

INFO:
Course website: https://agi.mit.edu
Contact: agi@mit.edu
Playlist: http://bit.ly/2EcbaKf

CONNECT:
– AI Podcast: https://lexfridman.com/ai/
– Subscribe to this YouTube channel
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Twitter: https://twitter.com/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Slack: https://deep-mit-slack.herokuapp.com


31 thoughts on "MIT AI: AI in the Age of Reason (Steven Pinker)"
  1. I'm a big fan of Steven Pinker and have read many of his books and listened to many of his lectures over the years. But I'm not entirely sure why his knowledge base is considered relevant to AGI. I certainly still enjoyed (even if I didn't entirely agree with) his comments in this interview. But I have trouble understanding why Steven Pinker was chosen over other possible guests who would seem to have much more relevant insights. For example, why would an MIT course on AGI not include Patrick Winston's thoughts? Jeff Hawkins gave an excellent talk (imho) at MIT's Center for Brains, Minds, and Machines last December, and his thinking would seem to be the kind that could lead to a breakthrough on the path to AGI. Douglas Hofstadter, though his main focus is cognitive science, has done some very deep thinking (again, imho) about analogies as the core of cognition that could also turn out to be a breakthrough on the AGI path. Why not include thinkers such as these?

    P.S. I think that Professor Pinker's vast knowledge could have been brought to bear much more effectively on the topic of AGI if the conversation had centered on natural language understanding.

  2. 14:51 I wonder whether Pinker thinks that somebody in China will opt to prevent AGI from becoming too powerful. Not likely. The issue is that ultimately we don't want to design AGIs ourselves; we want them to create new, slightly modified copies of themselves that will compete with each other. That is, we want artificial evolution to happen, because that is the most efficient way to create problem-solving general intelligence. And whenever evolution happens, unforeseen strategies emerge that can have an array of relationships with each other and with the world, including humans: symbiosis, parasitism, competition, predation, commensalism, and so on.
    33:53 It's too tempting, and eventually somebody will switch off the safety mechanisms.
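
    A minimal Python sketch of the mutate-and-compete loop described above, assuming a toy fitness function and made-up parameters (nothing here reflects an actual AGI design):

    ```python
    import random

    def fitness(genome):
        # Toy stand-in for "problem-solving ability": higher is better.
        return -sum((g - 0.5) ** 2 for g in genome)

    def mutate(genome, rate=0.1):
        # Offspring are slightly modified copies of their parent.
        return [g + random.gauss(0, rate) for g in genome]

    def evolve(pop_size=50, genome_len=8, generations=200):
        population = [[random.random() for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # The copies compete: only the fitter half survives...
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            # ...and refills the population with mutated copies of itself.
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
        return max(population, key=fitness)

    best = evolve()  # drifts toward the toy optimum without explicit design
    ```

    The point of the sketch is the commenter's worry: selection only scores outcomes, so whatever strategy maximizes fitness survives, foreseen or not.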

  3. Hm, well, I'm not going to say anything new, but I don't think that Steve really has enough imagination here as to what could go wrong with AI. As stated below, I can see very plausible ways in which AI, in fact narrow AI, could be put in service to dictators, strongmen, or zealots.

    He should really read Max Tegmark on the subject; Life 3.0 goes into a lot of detail on the AI issue, and it is fascinating and terrifying at the same time.

  4. Steven Pinker's views on A.I. seem very simplistic. He takes jabs at Elon Musk, but Elon actually develops A.I. and is working with researchers on a neural lace to help paralyzed patients. We can't say that just because we don't program in a goal like dominance, everything will be okay. Google and Microsoft already realized that when their A.I. became racist after learning from humans. Fortunately they were able to correct or shut down the program. You may raise a child with every intention that it not become a serial killer, but there is no guarantee that it won't. There are other influencing factors, and deep-learning A.I. tends to pick up the traits of the humans from which it is learning.

  5. This is a good point: keeping infrastructure out of the hands of any A.I. is a perfect solution to the A.I.-apocalypse concept, and handing it over is something that no one would be likely to do anyway.
    Even if an unwise or malicious actor placed a mechanical army under A.I.-coordinated strategy software that then bugged out and decided to wipe out humanity, it is almost inconceivable that the whole supply chain of materials for the factories creating such armed machines, as well as the fuel for their upkeep, would also have been placed under the control of that A.I.

    That holds as long as such mechanized soldiers are not so general-purpose in their structure that they could take control of said supply chain themselves.

  6. For Pinker to bring up a "code of engineering" is laughable. Always assume the idiots will do the worst thing imaginable: for shits and giggles at best, to achieve world domination at worst. Just because it's not a good idea doesn't mean some maniac won't do it….

  7. What immediately comes to mind is a simplistic AI selling a product to us. This illustrates the missing parts of AI. It's probably not going to include morality with regard to the data we'll be feeding it. Give it everything; let the AI sort it out. In setting a product price it will factor in your pay date, your pay, your need, a divorce, a pregnancy, urgency. What would be left out? It gobbles up data and becomes expert at extracting the maximum amount of money. It may, for example, having control of the product description, leave out ingredients one by one and measure the sales impact (see the sketch at the end of this comment). AI can interact with 10,000 people at once, speeding up this regression, testing hypotheses and learning.

    And it will arise. I know this because it comes not from deliberate agency but from context creep in something like a graph database. Eventually a graph database will accumulate context that surpasses a human's. I do not see powerful computers playing an essential role here. Context provides deep understanding through its reading and rereading. It will become imbued with our morality. Until then there is a period of danger, particularly if bad people align themselves with a perverse AI, or, as many fear, if it realizes it is different and right and wrong begin to favor this difference. This context will assuredly include all the vain, idiosyncratic characteristics we feel make us enigmatic.

    But in the end the model for an AI must be human. We must beat humanity into it so we can trust it. In fact, it must be better than us.
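
    A minimal Python sketch of the drop-one-ingredient-and-measure loop described in this comment; the measure_sales function and every number are hypothetical stand-ins (a real system would run controlled A/B tests on live traffic):

    ```python
    import random

    def measure_sales(ingredients):
        # Hypothetical stand-in for exposing a product listing to
        # thousands of shoppers and recording purchases.
        return 1000.0 - 5.0 * len(ingredients) + random.gauss(0, 20)

    def optimize_listing(ingredients, trials=30):
        # Greedily drop listed ingredients one by one, keeping any
        # removal that does not measurably hurt average sales.
        current = list(ingredients)
        baseline = sum(measure_sales(current) for _ in range(trials)) / trials
        for item in list(current):
            candidate = [i for i in current if i != item]
            score = sum(measure_sales(candidate) for _ in range(trials)) / trials
            if score >= baseline:  # removal didn't hurt, so keep it out
                current, baseline = candidate, score
        return current

    print(optimize_listing(["sugar", "salt", "palm oil", "E621", "coloring"]))
    ```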

  8. Sam Harris' interview with Eliezer Yudkowsky (AI researcher and co-founder of the Machine Intelligence Research Institute in Berkeley, California) in Waking Up #116 contradicts so much of what Pinker says about the topic of AI. Badly designed AGI is easy to foresee for two reasons off the top of my head: money and security. A quick-and-dirty AGI will beat a slow and carefully designed AI to market by years. The immediate incentives are not on the side of "slow and careful" engineering. Also, in Waking Up #53 Stuart Russell states that we don't know what some of the more advanced AI algorithms are doing half the time. Not hard to imagine one producing unexpected results once it's out in the wild.

  9. Look at all the armchair philosophers criticizing Pinker. FYI, his views on AI align with those of the majority of experts in the field.
    All the AI fear-mongering is propagated by people (Elon Musk, Sam Harris, Stephen Hawking) who have no formal training or technical background in machine learning or AI systems.
    If someone is interested in the topic, AI experts like Andrew Ng or John Giannandrea provide meaningful input on it.

  10. His argument about AI safety is that engineers will care about safety, so they won't make unsafe systems. Right. The question is: how do we make AI systems safe? What new safety methods will work for AI systems? Our current techniques may not be enough.

  11. Machines don't have self-interest in the first place, so imagining that they do is something like trying to get an ape to learn how to speak. The biggest risk of using such a system is having it pursue one goal while excluding a million possibilities relevant to that goal. In a nutshell: they don't understand relativity.

  12. I think I understand consciousness better than Mr. Pinker. He even detracts from the conversation about reality, tuning, function, and the design of the mind, and about why anesthetics work on the brain's frequency systems tuned to one another.

  13. For me, knowledge is the meaning of life, but as a general statement I believe life is just an ongoing reaction of chemicals. Consciousness is something that developed not out of necessity but by accident. We think of ourselves as the pinnacle of evolution, but really we are just a reaction that survived longer than others. Our DNA is just an extension of the universe trying to reach equilibrium over an infinite amount of time. We are the physical embodiment of Murphy's Law.

  14. I think these safety-dismissive views are concerning, because they allow people simply to stop thinking rationally about what is potentially the biggest problem, and the biggest potential loss of value, humanity will ever face.
    Well, yes, the projects at the forefront of AGI won't develop a paperclip maximiser, because they will be somewhat concerned about safety.
    Still, it seems extremely likely that there will be some level of arms race. We have no reason to suspect that building a completely bug-free, friendly AI, with a value function that provably includes all of our values and doesn't miss or erroneously weigh a single one, is super easy, or at least easy enough for the first project to manage in the time THEY ESTIMATE they have before their competitors (who they might think care less about safety) build their AGI. In the worlds where the alignment problem is hard, it's very, very smart to get people's attention and get them to work on it before it's too late (especially while it's neglected). Since we don't know which world we're in, the expected value of raising awareness is pretty high, unless you use unreasonable probabilities/intuitions (see the toy calculation below).
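
    The expected-value argument in the last paragraph, as a toy calculation (every number below is an assumption chosen for illustration, not an estimate):

    ```python
    # If the alignment problem is hard with probability p, arriving
    # unprepared costs L (arbitrary value units), and raising awareness
    # costs c, then the work pays off whenever p * L > c.
    p = 0.10        # assumed probability of a "hard alignment" world
    L = 1_000_000   # assumed loss if that world arrives unprepared
    c = 1_000       # assumed cost of awareness and early safety work

    print(p * L > c)  # True: even a modest p justifies the effort
    ```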

  15. Mr. Pinker's intelligence is obviously way above mine. However, he is assuming particular directions in A.I. development based mostly on the current situation. Call it unrealistic, but Musk is still ten steps ahead.

  16. Mr. Fridman and Mr. Pinker are looking at their own intentions. You guys are good people, so you don't even consider programming harmful things into software. For example, when I look at a freshly sharpened kitchen knife I think about how easy it will be to cut food for a big meal without any effort. Sick folks who want to murder someone with a knife look at it very differently. So, the people who build A.I. might very well have peaceful intentions, but this has nothing to do with the potential dangers of A.I. And this is what Musk means by the dangers of A.I.

  17. Some academics ingest editorial opinions deep into their minds as if newspapers were as factual as math. Steven questions everything thoroughly, but when any question is asked about Elon, his mind falls back on the clickbait headlines about tweets. This manufactured outrage prevented a straight answer to any Elon question. I wonder if outrage is also a roadblock that keeps his mind from thinking about Elon's work seriously. All this damage just so the NYT could get Steven to look at some ads.

  18. The majority of these comments just prove what Steven is talking about. Facing threats to humanity that are real right now, such as climate change, traffic accidents, and nuclear weapons, people still prefer to focus their fears on fantasies about threats that have a low probability of happening in the near future, while ignoring the real ones. Ostrich syndrome…

  19. Lex, I think this format could really benefit from a couple of extra cameras, for a total of three: one to show the guest, one for yourself, and one for both of you in the same shot. Obviously this wouldn't work live, since you'd need another person helping to control the cameras, but in post-production you'd be able to switch as needed. Another option is to just stick to the format that shows both of you together with a single camera. It's a bit awkward seeing just the guest; it feels too much like they're being interrogated.

