Videos

Ilya Sutskever: Deep Learning | AI Podcast #94 with Lex Fridman



Lex Fridman

Ilya Sutskever is the co-founder of OpenAI, one of the most cited computer scientists in history with over 165,000 citations, and, to me, one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic. This conversation is part of the Artificial Intelligence podcast.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Ilya’s Twitter: https://twitter.com/ilyasut
Ilya’s Website: https://www.cs.toronto.edu/~ilya/

INFO:
Podcast website:
https://lexfridman.com/ai
Apple Podcasts:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/category/ai/feed/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 – Introduction
2:23 – AlexNet paper and the ImageNet moment
8:33 – Cost functions
13:39 – Recurrent neural networks
16:19 – Key ideas that led to success of deep learning
19:57 – What’s harder to solve: language or vision?
29:35 – We’re massively underestimating deep learning
36:04 – Deep double descent
41:20 – Backpropagation
42:42 – Can neural networks be made to reason?
50:35 – Long-term memory
56:37 – Language models
1:00:35 – GPT-2
1:07:14 – Active learning
1:08:52 – Staged release of AI systems
1:13:41 – How to build AGI?
1:25:00 – Question to AGI
1:32:07 – Meaning of life

CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman


40 thoughts on “Ilya Sutskever: Deep Learning | AI Podcast #94 with Lex Fridman”
  1. I really enjoyed this conversation with Ilya.

  2. I was stunned when you asked "what would you ask an AGI?" and Ilya said, "The first time?" I wish the camera had been on his face when he said that.
    "The first time?" There is so much in his voice, behind those words.

  3. How can vision be more basic than language? Look at a book. You know you see it if you can label it. The label is essentially language. I cannot do vision without language.
    I can do language without vision though. Anyway – cool interview. Thanks for that.

  4. No accident that anti-aircraft tracking spurred advances in computation. Hitting a moving target is much harder at distance because you have to anticipate where it will be in the future. Robots will have to interact in real time with a changing environment as well as learning systems.

  5. Just watching this video and being able to read the almost perfect English subtitles produced by AI is incredible… but humans quickly adapt… 10 years ago this wouldn't have been possible… it's a huge step for humankind.

  6. STDP… neuronal firing (depolarization) is all or none (an action potential driven by a specific mV threshold for a given neuron), if my education remains true. The “strength” of the outcome is determined by the quantitative aspect of neuronal depolarization. Are you saying that the timing/continuity of depolarization (in the synapse or the axon) determines its intensity? If the action potential has been reached, the neuron fires – “intensity independent.” If intensity is desired, don’t we need more congruent depolarization for such an outcome? I’m just trying to make sure I understand STDP (computer science terminology for neuronal depolarization through an action potential). (See the rough STDP sketch after these comments.) Thanks – Breaux

  7. The human brain has all sorts of sub-parts like the thalamus, amygdala, hippocampus, etc., whereas deep learning networks seem mostly homogeneous and monolithic. Maybe we need lots of deep networks interacting somehow?

  8. Apply a sense of awareness derived from multiple disciplines, drawn from human-experienced aspects such as satirical humour and self-expression – emotions from language, such as tone (implications of sense), rather than imagery, which can be a sub-domain. Think of the vision aspect as an after-cursor, or linker, to the deep understanding embedded in the language.

  9. About GANs and their cost function: in the analogy with evolution, a cost function exists at all times, whether it's a GAN or something else. The difference is only the location of the cost function. If it's a convolutional NN, then the cost function is internal, because the NN already knows what's right. A GAN's cost function, on the other hand, is external, and it will be punished later by selection (see the GAN sketch after these comments).
    Hope that makes sense.

  10. This is highly… what's the word… I don't know… I'm very uncertain about this; it's just a guess. I think the brain's form of backpropagation is open loops. For instance, the brain forms neural nets dynamically, on the fly. However, the most involved neurons are affected by neurons that are not immediately needed for the calculation/problem. So the calculation is done, but they still get adjusted later. The neurons doing the calculations communicate most with each other, but the other neurons are partially aware of what's going on and provide input too.

  11. I think that to make a neural-net language, what you need is to look at the various types of architectures we have so far and use the neural nets as a means to communicate with each other. For instance, for time-based tasks where you have to recognize actions, you're going to need LSTMs, so the compiler has a general-purpose LSTM that you use repeatedly. Also, think about what we learned with GANs – one neural net is used as a utility function for the other in a feedback loop. So we can use classification models to dynamically build utility functions, and we could have a more abstract language to do that.

  12. It seems like the GAN cost function is the equilibrium of the game, or the minimization of surprise when either the discriminator or the generator wins. Which one wins is a toss-up and not so informative; what matters is that both hold high-information representations.

  13. An outstanding interview – thank you to both interviewer and interviewee. The questions were as thoughtful as the answers. Super rich.

  14. Hi Lex & Ilya… I am struggling with a question. As I understand it, looking inward as a human, intelligence is not only about learning, since a human can make a decision without any prior knowledge – something we might call intuition or awareness.

    Why, for AI, are we always looking at learning algorithms and training? Do we need to look at it differently to understand artificial general intelligence?

  15. GPT-2 might change the course of spontaneous human thinking and lead toward a central implementation of governance, as human groups are naturally attracted to central leadership. Thank you for the enlightening podcast.

  16. I was really scared when both of them were silent during the discussion of the openness of AI development. Hopefully we can come together instead of racing into a miserable situation.

  17. Next step in podcasts: have two guests come on, and we see them disagree and agree on various topics. That would be so amazing to watch. I'm sure the guests would enjoy it as well.

  18. Lex thinks and speaks so painfully slowly and gets corrected by guests often. For example, after some deep thought and closing of his eyes, Lex's criterion for robust vision/language processing is whether the system impressed him. Sutskever instantly snapped back at how relative and useless that criterion is. Why Lex didn't self-censor such a stupid comment as he was forming it makes me think his ability is above average but not 99th percentile.
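
To ground the STDP question in comment 6: in the standard pair-based model, the spikes themselves are all-or-none; what the relative timing of pre- and post-synaptic spikes changes is the synaptic weight, not the spike amplitude. Below is a minimal, hedged sketch of that pair-based rule in Python; the parameter values and function names are illustrative assumptions, not anything stated in the conversation.

```python
import math

# Pair-based STDP: the weight change depends on the time difference
# delta_t = t_post - t_pre between a pre- and a post-synaptic spike.
# Spikes are all-or-none; only the synaptic weight is modulated.

A_PLUS = 0.01     # potentiation amplitude (illustrative value)
A_MINUS = 0.012   # depression amplitude (illustrative value)
TAU_PLUS = 20.0   # potentiation time constant in ms (illustrative)
TAU_MINUS = 20.0  # depression time constant in ms (illustrative)

def stdp_weight_change(t_pre: float, t_post: float) -> float:
    """Return the weight change for one pre/post spike pair (times in ms)."""
    delta_t = t_post - t_pre
    if delta_t > 0:
        # Pre fires before post: the synapse is strengthened (LTP).
        return A_PLUS * math.exp(-delta_t / TAU_PLUS)
    # Post fires before (or with) pre: the synapse is weakened (LTD).
    return -A_MINUS * math.exp(delta_t / TAU_MINUS)

if __name__ == "__main__":
    w = 0.5  # initial synaptic weight
    # A few hypothetical spike pairs (pre time, post time) in ms.
    for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 71.0)]:
        dw = stdp_weight_change(t_pre, t_post)
        w = min(max(w + dw, 0.0), 1.0)  # keep the weight in [0, 1]
        print(f"delta_t={t_post - t_pre:+.1f} ms -> dw={dw:+.4f}, w={w:.4f}")
```

The point for comment 6 is that firing stays intensity-independent; timing only determines how strongly the synapse will transmit future inputs.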

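Comments 9, 11, and 12 circle around the GAN cost function. For reference, here is a minimal sketch of the standard GAN objective on 1-D toy data, in which the discriminator acts as a learned, "external" cost function for the generator. It assumes PyTorch is available; the network sizes, data distribution, and hyperparameters are arbitrary choices for illustration only.

```python
import torch
import torch.nn as nn

# Generator maps noise z to a sample; discriminator scores how "real" a sample looks.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # outputs a logit

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def real_data(n):
    # Toy "real" distribution: Gaussian with mean 2.0 and std 0.5.
    return torch.randn(n, 1) * 0.5 + 2.0

for step in range(2000):
    # Discriminator step: tell real samples (label 1) from fakes (label 0).
    real = real_data(64)
    fake = G(torch.randn(64, 1)).detach()  # detach: don't update G on this step
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: the discriminator IS the generator's cost function.
    fake = G(torch.randn(64, 1))
    g_loss = bce(D(fake), torch.ones(64, 1))  # generator wants fakes scored as real
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 1)).mean().item())
```

The detail relevant to comment 9 is that the generator never sees a fixed, internal loss on the data itself: its gradient comes entirely through D, a learned cost function that keeps changing as D trains, which is what makes the cost "external" in the commenter's evolutionary analogy.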
