Lex Fridman
Yoshua Bengio, along with Geoffrey Hinton and Yann LeCun, is considered one of the three people most responsible for the advancement of deep learning during the 1990s, 2000s, and now. Cited 139,000 times, he has been integral to some of the biggest breakthroughs in AI over the past three decades.
This conversation is part of MIT 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world. Audio podcast version is available on https://lexfridman.com/ai/
INFO:
Course website: https://agi.mit.edu
Contact: agi@mit.edu
Playlist: http://bit.ly/2EcbaKf
CONNECT:
– AI Podcast: https://lexfridman.com/ai/
– Subscribe to this YouTube channel
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Twitter: https://twitter.com/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Slack: https://deep-mit-slack.herokuapp.com
What do you people think of using AR for supervised/reinforcement learning? Will it be our next step for learning sophisticated tasks?
How about a person without hands and legs? They still have good intelligence.
Yoshua's lecture here (https://www.youtube.com/watch?v=Yr1mOzC93xs) proved very helpful to me; I'm understanding this discussion more completely. And yes, this idea of creating a flat abstract space makes a lot of sense. It seems very much like a good way forward. I would still argue, however, that LSTMs which include attention are in effect concurrent neural nets, and that these are precisely the kind of architectural advancements we need on a path to a universal cell, or group of cells, which can self-assemble ad hoc concurrent sub-modules. I see improvements like the current LSTM cell as highly specialized, and therefore our architectures will unfortunately become more and more complex. Looking at biology, it's the neuron that is a very complex unit unto itself. I'm sorry if I keep coming back to this, and I may simply be showing my own ignorance, but somehow, and perhaps foolishly, it rankles with me. And although I think that otherwise we may be in complete agreement, insofar as I can be, I truly feel that the answer will lie in the cell and not in changes to the network itself beyond size. A consciousness prior, for example, could be stored in and retrieved from a database. Nevertheless, I intuitively "feel" that this kind of functionality should eventually be handled within a self-assembling sub-module of an end-to-end NN.
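As a side note, here is a minimal, hypothetical sketch of the "LSTM with attention" idea mentioned above, written in Python with PyTorch; the class and parameter names (AttentiveLSTM, hidden_dim) are purely illustrative and are not from the interview or this comment.

```python
# Hypothetical sketch: an LSTM encoder with additive attention over its own
# hidden states. Names and sizes are illustrative only.
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)  # one attention score per timestep

    def forward(self, x):
        # x: (batch, time, input_dim)
        states, _ = self.lstm(x)                             # (batch, time, hidden_dim)
        weights = torch.softmax(self.score(states), dim=1)   # normalize over time
        context = (weights * states).sum(dim=1)              # attention-weighted summary
        return context, weights

# usage sketch
model = AttentiveLSTM(input_dim=16, hidden_dim=32)
context, weights = model(torch.randn(4, 10, 16))
print(context.shape, weights.shape)  # torch.Size([4, 32]) torch.Size([4, 10, 1])
```

The attention weights form a soft, learned read-out over the recurrent states, which is one way an "LSTM plus attention" can be seen as several concurrent views over the same sequence.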
Great talk! Some feedback on the format: an unedited version might be better. The topics can get quite theoretical at times, and when they do, it can take some time for the points to really set in, so the pauses can help with that. But I also don't mind re-watching.
Here is a guy who was born French but speaks English with little accent. Based on this alone you can tell he's good at what he does.
Great interview, thanks Lex. Yoshua seemed a bit angry.
Great interview! But I did not like that all the thinking moments were cut out. It makes the conversation unnaturally fast and takes away the time for me to think as well.
Great discussion, thank you Yoshua and Lex. A question about infants vs. machine learning:
If we view the world as we see it, a continual, light-speed-fast, changing stream of data, would it be safe to say that the infant uses huge datasets as well?
25:50 Google Duplex
For AGI, we need to rethink the medium of computation.
Siraj squad
Peculiarities and phenomena of human psychology == breadcrumbs to underlying structures
We already have so much common knowledge; we just have to use it.
Curiosity is a reward function (especially for newborns).
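To make "curiosity is a reward function" concrete, here is a tiny, hypothetical Python/NumPy sketch in the spirit of curiosity-driven exploration: the intrinsic reward is the prediction error of a small learned forward model, so surprising observations are rewarding. The linear model and the update rule are assumptions for illustration, not something described in the interview.

```python
# Hypothetical sketch: intrinsic reward = prediction error of a tiny forward model.
import numpy as np

class CuriosityModule:
    """Rewards the agent for observations its forward model predicts poorly."""
    def __init__(self, dim, lr=0.01, seed=0):
        self.W = np.random.default_rng(seed).normal(scale=0.1, size=(dim, dim))
        self.lr = lr

    def reward(self, obs, next_obs):
        error = next_obs - self.W @ obs            # how wrong the prediction was
        self.W += self.lr * np.outer(error, obs)   # improve the model a little
        return float(np.sum(error ** 2))           # surprise acts as intrinsic reward

# usage sketch
rng = np.random.default_rng(1)
curiosity = CuriosityModule(dim=4)
print(curiosity.reward(rng.normal(size=4), rng.normal(size=4)))
```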
Wow, I totally agree with the points about those movies, that one individual creates and invents all things. No, it doesn't happen like that. Elon Musk has teams and engineers that solve problems and build more prototypes. A man can't be compared to and idolized as a god who creates things. Great interview, Lex. Thanks. :) Bye.
1) High-level semantic labels are a very strong baseline.
2) Long-term credit assignment.
3) Train neural nets differently so that they can focus on causal explanation.
4) We need to have good world models in our neural nets.
5) From passive observation of data to more active agents.
6) Knowledge factorization is one of the current weaknesses of neural nets.
7) Disentangle the high-level semantic variables and build relationships between the highest-level semantic variables in a neural net (similar to the rules in classical AI systems) to generalize better.
It's time Google started manufacturing TPU AI chips and selling them for $500.00, but the AI chips should be made of carbon and should be able to handle and process exabytes of data a second.
Love how you trimmed the video, not a second of wasted time! Thanks.
Follow your intuition and update if new evidence appears.
I generally find the time spent watching the videos for this course very worthwhile, including the interviews. But, if it were possible, I would vote for all of the presenters to deliver prepared lectures on the topic of "how might we create AGI?" (and possibly "should we be worried about AGI?") rather than being interviewed. I found the prepared lectures at the beginning of this course much more focused and worthwhile. But the interviews are still okay, if lectures are not possible.
Nailed it! Excellent.
Lex, thank you for the chance to listen to Yoshua. But where's your impression? Where's the excitement?
bakkaification
Hey Lex, loved your interview with Joe Rogan! Great convo! I don't know if you are familiar with Eckhart Tolle and his work, but I encourage you to read A New Earth; it's possibly one of the most insightful books I've ever read. The concept of getting rid of the ego needs to be addressed before humans do something dumb and start another war over who's got the bigger one… If you can contact him, I also think Joe would benefit greatly from this book as well, and perhaps he could convey its message to the masses. Hope all is well and your studies are good!
cc.Lex Fridman:
cc.Yoshua Bengio:
A strong case can be made to support your theory on semantics.
Human intelligence comes from everyday natural learning.
Deep Learning: Convolutional Neural Networks in Python
> Understand how convolution can be applied to image effects.
> Understand how convolution helps image classification.
> Implement a convolutional neural network in Theano.
> Implement a convolutional neural network in TensorFlow (a minimal sketch follows below).
http://bit.ly/2QrXyRx
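For context, a minimal sketch of the kind of TensorFlow/Keras CNN classifier those bullets describe might look like the following; the layer sizes and the MNIST-style 28x28x1 input shape are assumptions, not taken from the linked course.

```python
# Hypothetical sketch of a small CNN image classifier in TensorFlow/Keras.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```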
AWS Machine Learning, AI, SageMaker – With Python
> Learn AWS Machine Learning algorithms, Predictive Quality assessment, Model Optimization
> Integrate predictive models with your application using simple and secure APIs
http://bit.ly/2IXhGZ2
This guy looks chill AF
Our pleasure too.