Lex Fridman
Gary Marcus is a professor emeritus at NYU and the founder of Robust.AI and Geometric Intelligence, the latter a machine learning company acquired by Uber in 2016. He is the author of several books on natural and artificial intelligence, including his new book Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and discussing the challenges the AI community must solve in order to achieve artificial general intelligence. This conversation is part of the Artificial Intelligence podcast.
INFO:
Podcast website:
https://lexfridman.com/ai
iTunes:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/category/ai/feed/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
EPISODE LINKS:
Gary website: http://garymarcus.com/
Rebooting AI book: https://amzn.to/2pAPcz3
Kluge book: https://amzn.to/2ncvR6o
OUTLINE:
0:00 – Introduction
1:37 – Singularity
5:48 – Physical and psychological knowledge
10:52 – Chess
14:32 – Language vs physical world
17:37 – What does AI look like 100 years from now
21:28 – Flaws of the human mind
25:27 – General intelligence
28:25 – Limits of deep learning
44:41 – Expert systems and symbol manipulation
48:37 – Knowledge representation
52:52 – Increasing compute power
56:27 – How human children learn
57:23 – Innate knowledge and learned knowledge
1:06:43 – Good test of intelligence
1:12:32 – Deep learning and symbol manipulation
1:23:35 – Guitar
CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman
I really enjoyed this conversation with Gary.
https://www.youtube.com/watch?v=yRkazlPdJ4A
Robots cannot "self-simulate" themselves. Self-simulation would solve some problems with the physical world, but I guess it would require wasting CPU cycles.
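In model-based control terms, that "wasted CPU" is more or less what planning looks like: roll candidate actions through an internal model and only then act. A toy Python sketch, with the dynamics model, cost function, and numbers all invented for illustration:

# Toy "self-simulation": try actions in an internal model before acting.
def internal_model(state, action):
    return state + action  # imagined next state, not the real world

def cost(state, goal=10.0):
    return abs(goal - state)  # how far the imagined state is from the goal

def plan(state, actions=(-1.0, 0.0, 1.0)):
    # Spend the CPU cycles simulating each action; keep the cheapest outcome.
    return min(actions, key=lambda a: cost(internal_model(state, a)))

print(plan(7.0))  # -> 1.0, the action whose imagined outcome lands closest to the goal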
Unhyping well done!
This was a very informative discussion, and I think I gleaned more information here on the underlying way AI "thinks" than in any other discussion on this topic. I have wondered if AI needs a sort of baseline set of instincts / understanding that it can build on to be truly intelligent. It would seem that is what narrow AI might be accomplishing in its current state, and in the future maybe we will use a bunch of different narrow AIs as a way to program instincts into a system, such as using computer vision and physics as a precursor to understanding environments or actual intelligence.
That's one hell of an opening question.
I'm not convinced that GPT2 doesn't have common sense.
Fascinating. Finally someone publicly vocalized what I think about AI (in a more coherent and professional way).
Innovations like variational autoencoders seem to be moving the abstractions learned by deep systems towards human comprehensibility.
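For readers unfamiliar with the idea, here is a minimal sketch of a VAE's latent bottleneck, assuming PyTorch; the architecture and layer sizes are illustrative, not from the episode. The small latent code z is the part people inspect for human-comprehensible abstractions, and the reparameterization trick is what keeps it trainable:

import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 128)
        self.mu = nn.Linear(128, z_dim)      # mean of the latent code
        self.logvar = nn.Linear(128, z_dim)  # log-variance of the latent code
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar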
Nice and all. But saying that evolution finds an idea and runs with it, after procrastinating for a billion years or so, only shows a lack of knowledge about molecular biology.
Really, there is nothing more impressive than the inner workings of a single cell. The rest is child's play.
Start here:
https://youtu.be/X_tYrnv_o6A
I'm more in line with Gary's future of AI than Elon Musk's. I think we will have narrow AI with certain functions, but general AI will *never* happen.
Lex, we all owe you a massive thank-you.
I'm praying every day to see Hinton/Demis on here… keep up the good work.
I liked that you asked a music question 🙂
40:07 "This is an astute question, but I think the answer is at least partly, 'no'." Epic burn, I love Gary Marcus's style. He doesn't mind being challenged because he knows it improves his lines of reasoning. Great interview.
I pick up a lot from this podcast. Hey Lex, since you mention Sophia so often, I think you should invite Ben Goertzel to your podcast to talk about OpenCog and SingularityNet. Both seem to be really interesting stuff.
Gary Marcus's hybrid method of mixing symbolic logic and machine learning is the next door we have to open to bring AI to another new level.
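A toy sketch of that hybrid idea in Python; the perception stub, symbols, and rules are all invented for illustration. A learned module emits discrete symbols, and hand-written symbolic rules reason over them:

def neural_perception(image):
    # Stand-in for a trained network; here we just stub its symbolic output.
    return {"object": "bottle", "state": "open", "upright": False}

RULES = [
    # (precondition over symbols, conclusion)
    (lambda s: s["object"] == "bottle" and not s["upright"], "contents may spill"),
    (lambda s: s["state"] == "open", "can be poured from"),
]

def reason(symbols):
    return [conclusion for condition, conclusion in RULES if condition(symbols)]

print(reason(neural_perception(None)))  # -> ['contents may spill', 'can be poured from']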
I really liked that you dared to push back on the person you're interviewing. For example, when he questions the effectiveness of deep learning and you ask back what successes there were before it, he kind of comes up with a really generic non-answer.
49:30 "Is there some database that we could download and stick into our machine? No." So the decades of work that went into Cyc and, I think, Elsa are of no value? That's a shame. Obviously "if we just teach a computer enough facts it will eventually be able to read by itself and then will achieve AGI!" didn't happen, but the facts are out there in OpenCyc. They need to be ingested into a better conceptual representation system, whatever that is.
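To make that concrete: OpenCyc-style content is roughly a large store of triples plus inference machinery. A toy Python sketch with invented predicates, showing the kind of one-step inference such a store supports:

# Toy fact store in the spirit of OpenCyc triples; predicates are invented.
FACTS = {("bottle", "isa", "container"),
         ("container", "can-hold", "liquid")}

def holds(subject, relation, obj, facts=FACTS):
    # Direct fact, or a one-step inference through an "isa" link.
    if (subject, relation, obj) in facts:
        return True
    parents = {o for (s, r, o) in facts if s == subject and r == "isa"}
    return any((parent, relation, obj) in facts for parent in parents)

print(holds("bottle", "can-hold", "liquid"))  # True: bottle isa container, container can-hold liquid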
Marcus comes across as a Mr. Know-it-all despite the counterfactual reality.
It is going to be huge! Buy AGI (SingularityNet) tokens!
I used this video in my post as a reference, here is the link to my post : https://medium.com/datadriveninvestor/relax-ai-will-not-take-over-the-world-de13a152af17?source=friends_link&sk=1b08f14cded3c42b65d243de34b2ad47
Free Association assumes that all variables have potentially infinite hidden variables associated with them, which is the basis of a complex number, or the basis of electricity.
Seems like he is advocating something like programming a system with the notion of Platonic Forms. Humans have wrestled with this idea of what constitutes "the thing itself" for eons and have not reached a consensus. It will be interesting to see some attempts at explaining it to a machine…
Great talk.
Lex. How did you remove Gary's burp and cough in the recorded video? Was it done by AI or manually?
Why does no one ever talk normally on this show? It is almost impossible to understand what he is saying.
Turing Olympics sounds like an amazing thing we could use for cultivating radical innovation. I would love to see "events" like the normal Olympics but with various AI and computer programming games. For example, a developer could use any tool they are aware of or can build to accomplish something, such as writing 10 files that do something. That would allow us to invent new ways to accomplish these benchmarks faster and faster each year. For example, people might invent hotkeys, code snippets, and utility functions that gain efficiency, and we could use terms like "new world record in the 10-minute natural number counting application". The sky is the limit for "competitive event types". It would be interesting to see large teams as well, competing like the NFL or NHL to build things. Imagine 100 developers building something and fine-tuning each other's additions for an ultimate end product.
It would be cool to have 100 developers against 100 developers, with an unlimited number of teams (per year), and then for the first half of the event they can't refer to any material, but in the second half they gain access to the repository of knowledge created over all the previous years (like Stack Overflow, with all the top-rated questions/answers and with real-time communication streams on every question/answer).
That is a future I'd love to live in. Point the teams at the highest value questions in bleeding edge science and economic/societal objectives.
Take the smartest human in the world. Put 10 teams of 100 onto the question that person has right now, today. I bet you they will make mind-blowing progress on it in the next 2-4 weeks; and by that, I mean aim low: what is the absolute next answer needed, possibly achievable in a couple/few steps from today?
We need to treat something like a Turing Olympics as a "utility function" that helps humans innovate in currently-invisible problem spaces. We are still operating at too low a level. We need to abstract one more dimension and start using organized events as methods of sudden knowledge creation. It could be just as useful to gain efficiency as it is to solve something that is currently unsolved. Those are the two fronts: efficiency and effectiveness. There is a particle/wave duality between them. Effectiveness is particle (0 or 1, off or on, isn't effective or is effective). Efficiency is analog (-100% to 100%, 20% correct, 10% better at this moment in time).
Great ideas!
Why no captions?
Dude handles his hiccups like a boss.
this guy has a fucking amazing voice
omg i agree so much more with this guy's view on modern neural nets than with deep learning fans like the host…
Never thought I'd listen to two people talking about bottles for an hour and a half :^)
For machines to understand the human thought process in regard to problem solving and learning, the machine needs to understand pain. Pain has been the motivating factor for humans, and without it progress would not have been possible.
Who else is here preparing for the debate?
53:33 Gotta say, on this one I'm with Gary until the Natural Language Understanding part; we have consistently beaten most benchmarks and solved tricky dependencies with Transformers, even ones that need real-world context, e.g. Winograd. Thanks for doing this podcast Lex, I enjoy it greatly.
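For context, a common way Transformers are applied to Winograd-style sentences is to score each candidate reading with a language model and keep the likelier one. A minimal sketch using the Hugging Face transformers library; the model choice and example sentences are illustrative:

# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_loss(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return mean cross-entropy per token.
        return model(ids, labels=ids).loss.item()

a = "The trophy didn't fit in the suitcase because the trophy was too big."
b = "The trophy didn't fit in the suitcase because the suitcase was too big."
print("model prefers:", "a" if avg_loss(a) < avg_loss(b) else "b")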
Good stuff. Any chance of ever having Terry Winograd on the show?
Gary Marcus seems intelligent, but also a bit arrogant.
Precision talking from both of you, ready with answers right away, both supportive of fast free thinking. Almost zero distractions with "you knows" or "likes", just great communication.
Watson won Jeopardy because of reading?
Gary has a really fresh, interesting and realistic perspective on current A.I. I really enjoyed this conversation. Thanks Lex!
Lex, your questions are great! Carry on, good man, carry on.
He assumes eventual human-level AGI, but there is a logical problem: no system can understand itself. Mapping the brain can never tell us how it works. It has traffic, not just a map.