
MIT AI: Deep Learning (Yoshua Bengio)



Lex Fridman

Yoshua Bengio, along with Geoffrey Hinton and Yann LeCun, is considered one of the three people most responsible for the advancement of deep learning through the 1990s, 2000s, and today. Cited 139,000 times, he has been integral to some of the biggest breakthroughs in AI over the past three decades.

This conversation is part of MIT 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world. The audio podcast version is available at https://lexfridman.com/ai/

INFO:
Course website: https://agi.mit.edu
Contact: agi@mit.edu
Playlist: http://bit.ly/2EcbaKf

CONNECT:
– AI Podcast: https://lexfridman.com/ai/
– Subscribe to this YouTube channel
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Twitter: https://twitter.com/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Slack: https://deep-mit-slack.herokuapp.com


28 thoughts on “MIT AI: Deep Learning (Yoshua Bengio)”
  1. Yoshua's lecture here (https://www.youtube.com/watch?v=Yr1mOzC93xs) proved very helpful to me; I'm understanding this discussion more completely. And yes, this idea of creating a flat abstract space makes a lot of sense. It seems very much like a good way forward. I would still argue, however, that LSTMs which include attention are in effect concurrent neural nets, and that these are precisely the kind of architectural advances we need on a path to a universal cell, or group of cells, which can self-assemble ad hoc concurrent sub-modules. I see improvements like the current LSTM cell as highly specialized, and therefore our architectures will unfortunately become more and more complex. Looking at biology, it's the neuron that is a very complex unit unto itself. I'm sorry if I keep coming back to this, and I may simply be showing my own ignorance, but somehow, and perhaps foolishly, it rankles me. And although I think that otherwise we may be in complete agreement, as far as I can be, I truly feel that the answer will lie in the cell and not in changes to the network itself beyond size. A conscious prior, for example, could be stored in and retrieved from a database. Nevertheless, I intuitively "feel" that this kind of functionality should eventually be handled within a self-assembling sub-module of an end-to-end NN.

  2. Great talk! Some feedback on the format: an unedited version might be better. The topics can get quite theoretical at times, and when they do, it can take some time for the points to really set in, so the pauses can help with that. But I also don't mind re-watching.

  3. Great discussion, thank you Yoshua and Lex. A question about infants vs. machine learning:

    If we view the world as we see it, as a continual, light-speed-fast, changing piece of data, would it be safe to say that the infant uses huge datasets as well?

  4. Wow… totally agree with the points about those movies… that one individual creates and invents all things… no, it doesn't happen like that… Elon Musk has teams and engineers that solve problems and build more prototypes… Man can't be compared to and idolized as a God who creates things… Great interview Lex… Thanks… :) … bye

  5. 1) High-level semantic labels are a very, very strong baseline.
    2) Long-term credit assignment.
    3) Train neural nets differently so that they can focus on causal explanation.
    4) We need to have good world models in our neural nets.
    5) From passive observation of data to more active agents.
    6) Knowledge factorization is one of the current weaknesses of neural nets.
    7) Disentangle the high-level semantic variables and build high-level semantic relationships between the highest-level semantic variables in a neural net (similar to the rules in classical AI systems) to generalize better.

  6. It's time Google started manufacturing TPU AI chips and selling them for $500, but the AI chips should be made of carbon and should be able to handle and process exabytes of data per second.

  7. I generally find the time spent watching the videos for this course very worthwhile, including the interviews. But, if it were possible, I would vote for all of the presenters to deliver prepared lectures on the topic of "how might we create AGI?" (and possibly "should we be worried about AGI?") rather than being interviewed. I found the prepared lectures at the beginning of this course much more focused and worthwhile. But the interviews are still okay, if lectures are not possible.

  8. bakkaification
    Hey Lex, loved your interview with Joe Rogan! Great convo! I don’t know if you are familiar with Eckhart Tolle and his work, but I encourage you to read A New Earth; it’s possibly one of the most insightful books I’ve ever read. The concept of getting rid of the ego needs to be addressed before humans do something dumb and start another war over who’s got the bigger one… If you can contact him, I also think Joe would benefit greatly from this book as well, and he could perhaps convey its message to the masses. Hope all is well and your studies are going well!

  9. Deep Learning: Convolutional Neural Networks in Python

    > Understand how convolution can be applied to image effects.
    > Understand how convolution helps image classification.
    > Implement a convolutional neural network in Theano.
    > Implement a convolutional neural network in TensorFlow.

    http://bit.ly/2QrXyRx
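
    For readers curious what the TensorFlow side of that might look like, here is a minimal illustrative sketch (not the linked course's code): a small Keras CNN trained on MNIST as a stand-in image-classification dataset. The layer sizes and hyperparameters are assumptions chosen only for demonstration.

    # Minimal CNN sketch in TensorFlow/Keras (illustrative; not the course's code).
    import tensorflow as tf

    # MNIST as a stand-in dataset: 28x28 grayscale digits, 10 classes.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0  # add channel axis, scale to [0, 1]
    x_test = x_test[..., None] / 255.0

    # Two convolution + pooling stages, then a dense softmax classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))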

