Lex Fridman
Yann LeCun is one of the fathers of deep learning, the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data. He is a professor at New York University, Vice President and Chief AI Scientist at Facebook, and a co-recipient of the Turing Award for his work on deep learning. He is probably best known as the founding father of convolutional neural networks, in particular their early application to optical character recognition. This conversation is part of the Artificial Intelligence podcast.
INFO:
Podcast website: https://lexfridman.com/ai
Full episodes playlist: http://bit.ly/2EcbaKf
Clips playlist: http://bit.ly/2JYkbfZ
EPISODE LINKS:
Yann’s Facebook: https://www.facebook.com/yann.lecun
Yann’s Twitter: https://twitter.com/ylecun
Yann’s Website: http://yann.lecun.com
OUTLINE:
0:00 – Introduction
1:11 – HAL 9000 and 2001: A Space Odyssey
7:49 – The surprising thing about deep learning
10:40 – What is learning?
18:04 – Knowledge representation
20:55 – Causal inference
24:43 – Neural networks and AI in the 1990s
34:03 – AGI and reducing ideas to practice
44:48 – Unsupervised learning
51:34 – Active learning
56:34 – Learning from very few examples
1:00:26 – Elon Musk: deep learning and autonomous driving
1:03:00 – Next milestone for human-level intelligence
1:08:53 – Her
1:14:26 – Question for an AGI system
CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman
I really enjoyed this conversation with Yann.
What is high? What is higher? What is learn? What is learning?
What a great interview!!! I can never thank you enough, Lex!!
Are they trying to make a perfect intelligence system or 'simply' a human-like intelligence?
My favorite part was the section on unsupervised learning. It was fascinating ❤
"I don't know the cause, so, you know, God did it."
Thanks
Amazing work Lex! Nice handling of his "hater" attitude toward AGI and other things. Gosh, he seems to have decided that AGI must be human-like, the same way early airplanes were being designed with the same mechanics as birds… AGI could learn from humans but is not restricted to that kind of intelligence. How disappointing to see an ACM Turing Award winner be that much of a hater and narrow thinker, attacking so many useful innovations, creative ideas, and successes that he couldn't achieve himself…
I love this! It's interesting how much Yann LeCun's and Jeremy Howard's views differ on active learning and transfer learning. Would love to see them discuss their views with each other.
The recurrent updating of state is the core, I think: activation/suppression on an embedding space that updates itself recurrently.
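For what it's worth, here is a minimal sketch of what that comment describes, with all names and sizes assumed for illustration: a state vector living in an embedding space is recurrently updated from input, and a tanh nonlinearity activates (positive) or suppresses (negative) each dimension.

```python
# Minimal sketch of a recurrently updated state in an embedding space.
# All names and sizes are illustrative assumptions, not from the interview.
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 8                                            # hypothetical embedding size
W_state = rng.normal(size=(embed_dim, embed_dim)) * 0.1  # state-to-state weights
W_input = rng.normal(size=(embed_dim, embed_dim)) * 0.1  # input-to-state weights

state = np.zeros(embed_dim)              # initial state in the embedding space
for step in range(5):
    x = rng.normal(size=embed_dim)       # some input embedding
    pre = W_state @ state + W_input @ x  # combine previous state with input
    state = np.tanh(pre)                 # tanh activates (+) or suppresses (-)
    print(step, np.round(state, 2))
```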
Next Richard Sutton!! Thanks for all your work Lex.
Great interview! The only thing that I disagree with is the visual cortex example allegedly proving that human intelligence is not general. First off, we need to distinguish sensory perception from logical reasoning. Senses might indeed be specialized, and there are good evolutionary reasons for that. Even so, this specialization is premised on a great deal of abstraction and generalization: we ignore all unnecessary patterns and convolve multiple disparate 'pixels' into a coherent representation. So even the specialization of visual function is based on our ability to generalize. Secondly, the ability to recognize all parameters of a system state, as in the gas example, is not what makes intelligence general. On the contrary, it's the ability to infer physical laws from limited observations, or even better, deductively, without any training data at all.
The point Yann was making at around 54:00: while the machine may have had hundreds of years' worth of training, it could perhaps be argued that evolution has trained us for millions of years not to run off cliffs. We have inherited knowledge about the world that is millions of years old, and it also happens to be applicable to StarCraft 2 (because humans made the game). As soon as a neural net has figured out the perfect weights and biases, it could pass its configuration on to the next generation to improve on.
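A toy sketch of that inheritance idea, assuming nothing from the interview itself: a parent's trained weight vector seeds mutated offspring, and the fittest offspring becomes the next parent. The fitness function and all sizes here are invented for illustration.

```python
# Toy evolutionary inheritance: weights are passed to offspring with small
# mutations, and the best child seeds the next generation. All details
# (fitness function, sizes, rates) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])  # stand-in for "good" weights

def fitness(weights):
    # Higher is better: negative distance to an optimum unknown to the agent.
    return -np.sum((weights - target) ** 2)

parent = rng.normal(size=3)          # generation 0: random weights
for gen in range(20):
    offspring = parent + rng.normal(scale=0.1, size=(10, 3))  # mutated copies
    scores = [fitness(w) for w in offspring]
    parent = offspring[int(np.argmax(scores))]  # best child becomes next parent
print("evolved weights:", np.round(parent, 2))
```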
Captivated the world
21:00 causality
legit
Jeez, these deep neural net zealots are sometimes very annoying to listen to.
This guy really thinks that neural nets did not take off back then because Python libraries for them did not exist. Yeah, definitely not because they were wasting processing power just as inefficiently as they do today, and definitely not because mega-corporations only recently started dumping ungodly amounts of money into promoting the deep learning hype.
It's really annoying when deep learning fanatics pretend that existing neural nets are even a remote representation of how biological neural nets work.
All existing artificial nets use backprop as the very basis of their learning algorithm. There is literally nothing like that in a biological brain. At the most fundamental level, artificial nets are different from biological ones. They pretend that "they're just not there yet," when in reality there are literally no alternative proposals on the horizon. In the last 30 years there have been no advancements in artificial neural net training algorithms other than minor tweaks and ever more money and hardware thrown at the problem.
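For readers unfamiliar with the term, here is a minimal example of the backprop this comment refers to: the error at the output is propagated backwards through a tiny one-hidden-layer net to get gradients, which drive gradient descent. The toy task and sizes are made up; this is a sketch, not any particular library's implementation.

```python
# Minimal backprop through a one-hidden-layer net (toy task, assumed sizes).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))              # toy inputs
y = np.sin(X.sum(axis=1, keepdims=True))  # toy nonlinear targets

W1 = rng.normal(size=(3, 8)) * 0.5        # input -> hidden weights
W2 = rng.normal(size=(8, 1)) * 0.5        # hidden -> output weights

for step in range(1000):
    h = np.tanh(X @ W1)                   # forward: hidden activations
    pred = h @ W2                         # forward: output
    err = pred - y                        # dLoss/dpred (MSE, up to a constant)
    gW2 = h.T @ err / len(X)              # backward through the output layer
    dh = (err @ W2.T) * (1 - h ** 2)      # backward through tanh
    gW1 = X.T @ dh / len(X)               # backward through the input layer
    W2 -= 0.1 * gW2                       # gradient descent updates
    W1 -= 0.1 * gW1
print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)))
```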
Hey Lex,
I found out that in his interviews LeCun tends to make derogatory comments about Slavic people.
It is doubly disappointing: first because it is not nice for any civilized individual to have such an attitude, and second because it is especially not nice when you are a high-profile researcher.
You might keep this fact in mind if or when you invite him for another conversation.
I am not talking about some extreme far-right crap, but in the videos I saw, the posture of condescension and disrespect was indeed noticeable.
Of course, if you doubt my words, just let me know and I will send you the links.
Cheers, and your podcast is the best in the category of scientific conversations.
I'm half expecting a gorilla to walk into the hall behind Yann to test people's conscious awareness.
I've noticed quite a few people back there going back and forth in the hall.
What's the difference between an ANN (artificial neural network), a GAN (generative adversarial network), a convolutional neural network, deep learning, and machine learning? And can they be merged or hybridized? Or could you add lots of AI brain-part objects that interact with each other to make a human-like brain, using 100+ neurotransmitter/hormone AI objects?
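A rough sketch of how those terms relate, under assumed shapes and names (PyTorch): every model below is an ANN; a CNN is an ANN built from convolution layers; a GAN is a pair of ANNs (a generator and a discriminator) trained against each other, so in that sense networks are routinely "merged"; and deep learning is the practice of training such multi-layer nets, itself a subset of machine learning.

```python
# Illustrative only: a CNN discriminator and an MLP generator, which together
# would form a GAN. Shapes (28x28 images, 16-dim noise) are assumptions.
import torch
import torch.nn as nn

cnn_discriminator = nn.Sequential(   # a CNN: convolutional feature layers
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 1),       # real-vs-fake score
)
mlp_generator = nn.Sequential(       # a plain fully connected ANN
    nn.Linear(16, 28 * 28),
    nn.Tanh(),
    nn.Unflatten(1, (1, 28, 28)),
)

# The GAN is the combination: the generator maps noise to images, the
# discriminator scores them, and the two are trained adversarially.
z = torch.randn(4, 16)                   # batch of noise vectors
fake_images = mlp_generator(z)           # shape (4, 1, 28, 28)
scores = cnn_discriminator(fake_images)  # shape (4, 1)
print(fake_images.shape, scores.shape)
```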
I wonder how one would control an autonomous AI, give it instructions, and prevent disobedience by the AI. Most people have no use for an AI robot that can be hacked and made to stab someone while they sleep. How can an AI detect a hacker? It would be bad if a hacker hijacked a self-driving car. Cybersecurity is important for AIs; they seem to be vulnerable to hackers. Gizmodo.com has a 2017 article on AIs that were hacked and weaponized, including self-driving cars. I'd like to be able to protect my AI apps.
Mr LeCun is a great communicator. I enjoyed every second of this interview. Looking forward to seeing where his research will lead us all.
“You can be stupid in three different ways: you can be stupid because your model of the world is wrong, you can be stupid because your objective is not aligned with what you actually want to achieve […], or you are unable to figure out a course of action to optimize your objective given your model.” 1:08:08
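LeCun's three failure modes map neatly onto the three parts of a model-based agent: a world model, an objective, and an action optimizer. A tiny sketch, where everything (the model, the objective, the action set) is invented purely for illustration:

```python
# Toy model-based agent; each function marks one of the "ways to be stupid".

def world_model(state, action):
    # Way 1: this model of the world could be wrong.
    return state + action

def objective(state):
    # Way 2: this objective may not match what you actually want.
    return -abs(state - 10)  # the agent "wants" to reach 10

def plan(state, actions):
    # Way 3: the optimizer may fail to find a good course of action.
    return max(actions, key=lambda a: objective(world_model(state, a)))

print(plan(state=7, actions=[-1, 0, 1, 2, 3]))  # -> 3, which reaches 10
```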
Can you speak faster? Or just STFU.