
Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense



Sean Carroll

Blog post with audio player, show notes, and transcript: https://www.preposterousuniverse.com/podcast/2019/10/14/68-melanie-mitchell-on-artificial-intelligence-and-the-challenge-of-common-sense/

Patreon: https://www.patreon.com/seanmcarroll

Artificial intelligence is better than humans at playing chess or go, but still has trouble holding a conversation or driving a car. A simple way to think about the discrepancy is through the lens of “common sense” — there are features of the world, from the fact that tables are solid to the prediction that a tree won’t walk across the street, that humans take for granted but that machines have difficulty learning. Melanie Mitchell is a computer scientist and complexity researcher who has written a new book about the prospects of modern AI. We talk about deep learning and other AI strategies, why they currently fall short at equipping computers with a functional “folk physics” understanding of the world, and how we might move forward.

Melanie Mitchell received her Ph.D. in computer science from the University of Michigan. She is currently a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. Her research focuses on genetic algorithms, cellular automata, and analogical reasoning. She is the author of An Introduction to Genetic Algorithms, Complexity: A Guided Tour, and most recently Artificial Intelligence: A Guide for Thinking Humans. She originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.



47 thoughts on “Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense”
  1. The longer I listen to this, the more I feel like GAI is a child with very specific sensory inputs and no concept of physical contact. We taught some of them to drive.

  2. This was a great conversation. I say this because it marries with my own thoughts on AI: ah yes, confirmation bias, human all too human. The real mystery is how the human brain does what it does on just 20W. The problem of replicating human intelligence may not be a software problem, but a hardware one. Penrose (vis-à-vis Gödel) may be right; human intelligence/consciousness is not computational.

  3. I find it very strange that, in a podcast about AI, the word cyborg didn't come up even once… which seems to me the most important field. Because even if we can't see how AI could match us, we can definitely find myriad ways for it to complement us. Like, why don't we use it for global decisions in farming and other planetary phenomena we all have a hand in?

  4. Mr. Carroll, it would be really great if you started to film your podcasts and upload them here. I like to listen, but I'd really love to watch you and your guests. Big fan here.

  5. @48:36: "We can give it a calculator… It won't feel like it is part of itself." What makes her believe that? A hunch?
    If you look at all the sensory input modalities and wide array of motor outputs that are all integrated into one human consciousness (which all feel like part of ourselves), I'd think it's very plausible that a calculator could be integrated into such a consciousness (either human or artificial) once we understand how it works.

    I'd go even further, I think computer-brain interfaces will be one of the next big revolutions of human technology, which will hopefully work alongside General AI. One need only look at the current primitive computer-brain interfaces to see the potential.

  6. Seems like there's a tendency to personify the potential of computers. There's no evidence computers will become alive and take on human-like or living-like motives.

    Obviously technology will continue to progress and there'll be some really cool innovations and a continuation of the technology we're building.

    But it seems like there's this magical thinking that somehow computer systems and networks will become analogous to living systems.

    There's no evidence for that. There's no evidence that computers will take on living motives.

    Some of it seems like hype and buzz just to get funding, and the same old human game of trying to grab people's attention. I mean, again, I'm a big fan of technology and the potential for smarter human communities with the aid of technology and computer help. And there's definitely great potential for help, as well as for problems caused by computer networks that are beyond our understanding.

    But again, I'm just trying to emphasize the point that I think a lot of people don't really understand what life is, nor do they really understand what computers are, and that creates a lot of miscommunication and ill-informed communication.

  7. Sean's 100% right about computers and chess and go. I'm not amazed that computers can do math better than a person, and chess and go are just calculations or algorithms; you would expect a computer to do them better than a person. What would be really cool is if a computer could give relationship advice on a level higher than a person, if it could understand social phenomena, if it could help us make decisions about our lives. In a simple rule-based game that is confined and based on calculations, it's a no-brainer that a computer will outperform a person!

    But it's in things like predicting behavior, understanding human behavior, applying behavior to novelty, and coming up with funny stories, interesting jokes, and interesting theoretical perspectives on life that it would be really surprising if computers could get better at those things than people.

  8. One thing AI can't do that humans do is have motivations or emotions. I don't mean modeling them, I mean having them. Motivations and emotions are physical things; they exist in a particular set of physical systems, biological systems. Silicon and circuitry are different sorts of physical systems, so we shouldn't expect them to have the same set of properties.

  9. Sean, one suggestion, if you were to use video in your interviews, I believe your audience would grow considerably. Give it a try. You’ll be glad you did. BTW great interview.

  10. That was a really interesting point about "sameness". Perhaps it's "same enough for our purposes". Going back to the car example, for its purpose, a snowman should not be the same as a person (which the car failed at). But for our sense of association and representation, it is the same. More tangibly, you could say two strings are the same if each character matches exactly, or you could say they are the same if the characters match without being case-sensitive (see the first sketch after the comments).

  11. Another fascinating podcast👍
    Would love to hear a debate between Sean and a proponent of one of the other theories regarding the wave/particle duality problem, e.g. hidden-variable or quantum-gravity explanations.

  12. Humans use different levels of description to think in abstract terms; this is exactly what Hofstadter says in "I Am a Strange Loop". They go up one or several levels to think "this thing is the same" or "this is beautiful". AI still can't do this. What's fascinating is that reality will be a very, very different thing to an AI than it is for a human. Many thanks for these uploads, Prof. Carroll; I particularly like the ones about physics, but they're all thought-provoking and very interesting.

  13. Love your stuff, Sean. Would you consider doing a show on what a workday is like for you as a theoretical physicist? I mean, what kinds of things do you do in any given day as a TP? For instance, is your day filled with lots of calculating, or do you think and write more, etc.?

  14. China is a concern regarding AI surveillance and human rights, while in the U.S. it's surveillance capitalism that's concerning. But people in both countries will be subjected to both types of surveillance.

  15. True intelligence (like that found in humans and other animals) is a byproduct of consciousness/awareness (NOT the other way around). AI is a misnomer; it's just computation on bulk inputs with auto-adjusted feedback in a loop, following empirically derived rules we generally don't understand well but arrive at through mass simulations (the "training" part; see the second sketch after the comments). All the doomsday AI scenarios and speculations are not possible before artificial self-awareness is implemented first (whether intentionally or by chance). Furthermore, awareness is tightly coupled to the available sensory apparatus, so unless it's heavily derived from the biological one found in humans, all the AI ethics talk is laughable; it's pure sophistry.
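As a minimal sketch of the two kinds of string "sameness" described in comment 10 (illustrative Python; the function names and example strings are ours, not from the episode): the strings themselves never change, only the equality predicate applied to them.

```python
# Two notions of "sameness" for the same pair of strings: strict,
# character-by-character equality versus case-insensitive equality.

def same_strict(a: str, b: str) -> bool:
    """Same only if every character matches exactly."""
    return a == b

def same_loose(a: str, b: str) -> bool:
    """Same if the characters match ignoring case; casefold()
    handles more edge cases than lower(), e.g. German ß."""
    return a.casefold() == b.casefold()

print(same_strict("Snowman", "snowman"))  # False
print(same_loose("Snowman", "snowman"))   # True
```

Which predicate counts as "the same" depends entirely on the purpose at hand, which is the commenter's point about the snowman.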
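And as a minimal sketch of the "auto-adjusted feedback in a loop" picture from comment 15 (again illustrative Python; the toy data, linear model, and learning rate are assumptions made for the example): gradient descent repeatedly nudges a single parameter against the error it produces, which is all the "training" part amounts to at this toy scale.

```python
# Toy "training" loop: fit y ≈ w*x to four noisy points by repeatedly
# adjusting w in proportion to the error it causes (gradient descent).

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs

w = 0.0    # the single adjustable parameter the loop tunes
lr = 0.01  # learning rate: how large each feedback nudge is

for step in range(1000):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the "auto-adjusted feedback": move w downhill

print(round(w, 2))  # ≈ 1.99, close to the true slope of about 2
```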

