
Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Artificial Intelligence (AI) Podcast



Lex Fridman

Gary Marcus is a professor emeritus at NYU and the founder of Robust.AI and Geometric Intelligence, the latter a machine learning company acquired by Uber in 2016. He is the author of several books on natural and artificial intelligence, including his new book Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and discussing the challenges the AI community must solve in order to achieve artificial general intelligence. This conversation is part of the Artificial Intelligence podcast.

INFO:
Podcast website:
https://lexfridman.com/ai
iTunes:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/category/ai/feed/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

EPISODE LINKS:
Gary website: http://garymarcus.com/
Rebooting AI book: https://amzn.to/2pAPcz3
Kluge book: https://amzn.to/2ncvR6o

OUTLINE:
0:00 – Introduction
1:37 – Singularity
5:48 – Physical and psychological knowledge
10:52 – Chess
14:32 – Language vs physical world
17:37 – What does AI look like 100 years from now
21:28 – Flaws of the human mind
25:27 – General intelligence
28:25 – Limits of deep learning
44:41 – Expert systems and symbol manipulation
48:37 – Knowledge representation
52:52 – Increasing compute power
56:27 – How human children learn
57:23 – Innate knowledge and learned knowledge
1:06:43 – Good test of intelligence
1:12:32 – Deep learning and symbol manipulation
1:23:35 – Guitar

CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman



42 thoughts on “Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Artificial Intelligence (AI) Podcast”
  1. I really enjoyed this conversation with Gary. Here's the outline:
    0:00 – Introduction
    1:37 – Singularity
    5:48 – Physical and psychological knowledge
    10:52 – Chess
    14:32 – Language vs physical world
    17:37 – What does AI look like 100 years from now
    21:28 – Flaws of the human mind
    25:27 – General intelligence
    28:25 – Limits of deep learning
    44:41 – Expert systems and symbol manipulation
    48:37 – Knowledge representation
    52:52 – Increasing compute power
    56:27 – How human children learn
    57:23 – Innate knowledge and learned knowledge
    1:06:43 – Good test of intelligence
    1:12:32 – Deep learning and symbol manipulation
    1:23:35 – Guitar

  2. Robots cannot "self-simulate" themselves. Self-simulation would solve some problems with the physical world, but I guess it would require wasting CPU cycles.

  3. This was a very informative discussion, and I think I gleaned more information here on the underlying way AI "thinks" than in any other discussion on this topic. I have wondered if AI needs a sort of baseline set of instincts/understanding that it can build on to be truly intelligent. It would seem that is what narrow AI might be accomplishing in its current state, and in the future maybe we will use a bunch of different narrow AIs as a way to program instincts into a system, such as using computer vision and physics as a precursor to understanding environments or actual intelligence.

  4. Nice and all. But saying that evolution finds an idea and runs with it, after procrastinating for a billion years or so, only shows a lack of knowledge about molecular biology.

    Really, there is nothing more impressive than the inner workings of a single cell. The rest is child's play.

    Start here:
    https://youtu.be/X_tYrnv_o6A

  5. I'm more in line with Gary's vision of the future of AI than Elon Musk's. I think we will have narrow AI with certain functions, but general AI will *never* happen.

  6. Lex, we all owe you a massive thank-you.

    I'm praying every day to see Hinton/Demis on here…keep up the good work.

  7. 40:07 "This is an astute question, but I think the answer is at least partly, 'no'." Epic burn, I love Gary Marcus's style. He doesn't mind being challenged because he knows it improves his lines of reasoning. Great interview.

  8. I pick up a lot from this podcast. Hey Lex, since you mention Sophia so often, I think you should invite Ben Goertzel to your podcast to talk about OpenCog and SingularityNET. Both seem really interesting.

  9. Gary Marcus's hybrid method of mixing symbolic logic and machine learning is the next door we have to open to bring AI to a whole new level.

  10. I really liked that you dared to push back against the person being interviewed. For example, when he questioned the effectiveness of deep learning and you asked back what successes came before it, he kind of came up with a really generic non-answer.

  11. 49:30 "Is there some database that we could download and stick into our machine? No." So the decades of work that went into Cyc and, I think, Elsa are of no value? That's a shame. Obviously "if we just teach a computer enough facts it will eventually be able to read by itself and then will achieve AGI!" didn't happen, but the facts are out there in OpenCYC. They need to be ingested into a better conceptual representation system, whatever that is.

  12. Free Association assumes that all variables have potentially infinite hidden variables associated with them, which is the basis of a complex number, or the basis of electricity.

  13. Seems like he is advocating something like programming a system with the notion of Platonic Forms. Humans have wrestled with this idea of what constitutes "the thing itself" for eons and have not reached a consensus. It will be interesting to see some attempts at explaining it to a machine…

  14. A Turing Olympics sounds like an amazing thing we could use for cultivating radical innovation. I would love to see "events" like the normal Olympics but with various AI and computer programming games. For example, a developer could use any tool they are aware of or can build to accomplish something, such as writing 10 files that do something. That would allow us to invent new ways to accomplish these benchmarks faster and faster each year. For example, people might invent hotkeys and code snippets and utility functions that gain efficiency, and we could use terms like "new world record in the 10-minute natural number counting application". The sky is the limit for "competitive event types". It would be interesting to see large teams as well, competing like the NFL or NHL to build things. Imagine 100 developers building something and fine-tuning each other's additions for an ultimate end product.

    It would be cool to have 100 developers against 100 developers, with an unlimited number of teams (per year), and then for the first half of the event they can't refer to any material, but in the second half they gain access to the repository of knowledge created over all the previous years (like Stack Overflow, with all the top-rated questions/answers and with real-time communication streams on every question/answer).

    That is a future I'd love to live in. Point the teams at the highest-value questions in bleeding-edge science and economic/societal objectives.

    Take the smartest human in the world. Put 10 teams of 100 onto the question that person has right now, today. I bet you they will make mind-blowing progress on it in the next 2-4 weeks; and by that, I mean aim low. What is the absolute next answer needed, possibly achievable in a couple/few steps from today?

    We need to treat something like a Turing Olympics as a "utility function" that helps humans innovate in currently invisible problem spaces. We are still operating at too low a level. We need to abstract one more dimension and start using organized events as methods of sudden knowledge creation. It could be just as useful to gain efficiency as it is to solve something that is currently unsolved. Those are the two fronts: efficiency and effectiveness. There is a particle/wave duality between them. Effectiveness is particle-like (0 or 1, off or on; isn't effective, is effective). Efficiency is analog (-100% to 100%, 20% correct, 10% better at this moment in time).

  15. For machines to understand the human thought process in regard to problem solving and learning, the machine needs to understand pain. This has been the motivating factor for humans, and without it progress would not have been possible.

  16. 53:33 Gotta say, on this one I'm with Gary until the Natural Language Understanding part; with Transformers we have consistently beaten most benchmarks and solved tricky dependencies, even ones that need real-world context, e.g. Winograd. Thanks for doing this podcast Lex, I enjoy it greatly.

  17. Precision talking from both of you, ready with answers right away, both supportive of fast, free thinking. Almost zero distractions with "you know"s or "like"s, just great communication.

  18. He assumes eventual human-level AGI, but there is a logical problem: no system can understand itself. Mapping the brain can never tell us how it works; the brain has traffic, not just a map.

