Lex Fridman
This is lecture 2 of course 6.S094: Deep Learning for Self-Driving Cars taught in Winter 2017. This lecture introduces types of machine learning, the neuron as a computational building block for neural nets, Q-learning, deep reinforcement learning, and the DeepTraffic simulation that utilizes deep reinforcement learning for the motion planning task.
INFO:
Slides: http://bit.ly/2H8Fs7g
Website: https://deeplearning.mit.edu
GitHub: https://github.com/lexfridman/mit-deep-learning
Playlist: https://goo.gl/SLCb1y
Links to individual lecture videos for the course:
Lecture 1: Introduction to Deep Learning and Self-Driving Cars
https://youtu.be/1L0TKZQcUtA
Lecture 2: Deep Reinforcement Learning for Motion Planning
https://youtu.be/QDzM8r3WgBw
Lecture 3: Convolutional Neural Networks for End-to-End Learning of the Driving Task
https://youtu.be/U1toUkZw6VI
Lecture 4: Recurrent Neural Networks for Steering through Time
https://youtu.be/nFTQ7kHQWtc
Lecture 5: Deep Learning for Human-Centered Semi-Autonomous Vehicles
https://youtu.be/ByZF8_-OJNI
CONNECT:
– If you enjoyed this video, please subscribe to this channel.
– AI Podcast: https://lexfridman.com/ai/
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Twitter: https://twitter.com/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Slack: https://deep-mit-slack.herokuapp.com
Hi, it seems the link to the slides of the lecture does not work. I am wondering if you could provide new links? Thanks
Is this guy high AND drunk?
Thanks Lex!
Thank you lex for uploading this awesome video here for free.
Thanks for sharing your lecture! Just a note: the website is not working.
Thank you!
Thanks, and delivering the material in a motivational-speech style is a good approach!
Very poor and hand-wavy explanation of some of the key concepts. It's not at all clear what the deep Q-learning loss function is, why it is chosen that way, or how it is evaluated. The instructor seems to assume you already have a pretty good basic understanding of the content covered in the lecture.
Am I the only one who finds the explanations quite cumbersome and not easily digestible? I'm having a hard time following some things: I have to pause, go back, rewatch segments, speculate on a lot of things, extrapolate on those speculations, then rewatch hoping to match my speculations against stated facts to confirm my understanding is correct. I'm not an expert in teaching, nor am I a genius, but when a lesson leaves so many loose ends and raises more questions than it answers, it might not be properly optimized for teaching. I do appreciate the effort, though, and acknowledge that it's a difficult subject. I'm a visual learner, and it's a pain in the ass to find material on this subject that suits me.
Can someone please explain what the input to the algorithm is? Is it just one snapshot of the game, multiple snapshots taken while humans are playing it, or a video of a human playing it?
Can someone tell me why we are doing this in the browser? Is the training happening in the cloud or on the local system? What is the logic of using the browser?
good stuff.. thanks
How did we decide the image size should be 28*28? Those 784 pixels are neither too many nor too few for the training model.
Great lesson
54:31 So basically you try to predict the result R' of performing A' on a past state S on which you did A and got result R, and then readjust your weights to make your prediction and the actual R' you got closer?
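The update this comment describes can be sketched as a tabular Q-learning step. A minimal sketch, assuming a made-up toy problem: the state/action counts, reward values, and hyperparameters below are all hypothetical, not from the lecture.

```python
import numpy as np

# Hypothetical tiny Q-table; states, actions, and rewards are made up.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))

alpha = 0.1   # learning rate
gamma = 0.9   # discount factor

def q_update(Q, s, a, r, s_next):
    """Nudge Q[s, a] toward the bootstrapped target r + gamma * max_a' Q[s', a']."""
    target = r + gamma * np.max(Q[s_next])   # prediction of future return
    Q[s, a] += alpha * (target - Q[s, a])    # move estimate toward target
    return Q

# One transition: in state 0 we took action 1, got reward 1.0, landed in state 2.
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])  # 0.1 after one update from an all-zero table
```

In deep Q-learning the table is replaced by a network, and the same target drives a gradient step on the weights instead of a direct table write.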
excellent lecture! Thank you!
At 36:56 it seems like you can reduce the reward to Q(t+1) – Q(t), or just the simple increase in the "value" of the state in time period t+1 over period t. Then the discount rate (y) can be applied to that gain to discount it back to time t. The learning rate (a) then becomes a "growth of future state" valuations. Then the most important thing is that y * a > 1, or your learning never overcomes the burden of the discount rate.
This is really similar to the dividend growth model of stock valuation:
D/(k-g)
D=dividend at time 0, k=discount rate, g=growth rate.
The strange similarity is that when the "Learning rate" (feels like this should be "Applied Learning Rate") is greater than the discount rate, there is "growth" in future states, otherwise there is contraction (think The Dark Ages). In the dividend discount model, whenever the growth rate is extrapolated into infinity as higher than the discount rate, the denominator goes to zero and below, and the valuation goes to infinity.
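The dividend growth formula quoted above, D/(k-g), can be checked with made-up numbers. A minimal sketch using the formula exactly as the comment states it; the dividend and rate values are hypothetical.

```python
# Gordon growth model as written in the comment: value = D / (k - g),
# D = dividend, k = discount rate, g = growth rate (all values made up).
def gordon_value(D, k, g):
    if g >= k:
        # As the comment notes: when g reaches k, the denominator hits zero
        # and the valuation blows up to infinity.
        raise ValueError("growth rate must be below discount rate")
    return D / (k - g)

print(gordon_value(D=2.0, k=0.08, g=0.03))  # 2.0 / 0.05, i.e. about 40
```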
Yeah, I like this guy's analogies, translating the bedrock of machine learning, etc., into fundamental life lessons.
Never stop learning… and then doing!
Thanks for sharing such a great lecture. But I'm stuck at 45:13 where, in the Atari game, we have 4 images to determine Q, each image has dimensions H * W, and each pixel takes one of 256 gray levels, so I'd expect the total size to be 256 * H * W * 4. How, then, are there 256^(H * W * 4) rows in the Q table?
Can anyone please explain?
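The counting argument behind the question above: the Q-table needs one row per distinct *state*, and a state is the whole stack of frames, so the pixel value counts multiply rather than add. A minimal sketch with tiny, hypothetical H and W:

```python
# Each of H*W*4 pixels independently takes one of 256 gray levels, so the
# number of distinct frame stacks (Q-table rows) is 256 ** (H * W * 4),
# not 256 * H * W * 4. Tiny H, W chosen purely for illustration.
H, W = 2, 2
levels, frames = 256, 4

n_pixels = H * W * frames          # 16 pixels in the stacked input
n_states = levels ** n_pixels      # one table row per distinct stack

print(n_pixels)                    # 16
print(n_states == 256 ** 16)       # True, already an astronomically large table
```

Even at 2x2 the table is intractable, which is exactly why the lecture replaces the table with a neural network that generalizes across states.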
I find the explanations of Q and deep Q a bit unclear
Kant knowlege & reason cross-pollination: https://youtu.be/FYkwLiHEEtY
This is so refreshing! Breaking down the human psyche into mathematical terms! Mind blown 🤯!! You nailed it!! When science and psychology come together this beautifully, it's an inspiring sight! You got my attention!
Nice talk.
First, you are a good human and a fantastic teacher,
because you share your knowledge with people who have not had the opportunity to study at a university.
Thanks for that, and God bless you.
28 people don't have email
Lex, why do you have a digital shadow? It's freaking me out, man.
I can see he has a lot of "Work" behind him.
Can anyone tell me a step-by-step roadmap for learning machine learning? I am a beginner; I have just completed Python programming and done some small projects. Please help me, I don't know where to start.
At 7:31 your slide shows a threshold activation function in the equation, but the animation shows a sigmoid activation. That might confuse some MIT folks.
Presentation style of the trainer is awesome.
Can you please use bikes instead of cars? Cars are a polluting, outdated, harmful means of transport.
Thank you!
Just started watching this series and realized the game is long taken down :'(
Learning about one of my favorite topics from Lex is just awesome. Thanks to this humble legend for sharing this!
Lex, you look like a kid here! Are you sure this was only 4 years ago?
I like Lex a lot, but the objective function for Q I think is wrong (32:49). Optimal Q-values are intended to maximize the cumulative future reward, not just reward at the next time step. One could easily imagine that the best action to take in one's current state delivers a loss at the next step, but in the long term achieves the greatest net gain in reward.
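The point this comment makes can be illustrated numerically: an action with the best immediate reward can still have a lower discounted cumulative return. A minimal sketch with entirely made-up reward sequences:

```python
# Toy illustration: greedy action wins on the next step, but the patient
# action wins on discounted cumulative return. Rewards are hypothetical.
gamma = 0.9

def discounted_return(rewards, gamma):
    """Sum of r_t * gamma^t over the reward sequence."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

greedy  = [1.0, 0.0, 0.0]   # best immediate reward, nothing afterwards
patient = [0.0, 2.0, 2.0]   # worse now, larger payoff later

print(discounted_return(greedy, gamma))   # 1.0
print(discounted_return(patient, gamma))  # 2*0.9 + 2*0.81, about 3.42
```

This is why the Q-learning target bootstraps with gamma * max Q(s', a'): that term carries the (estimated) cumulative future reward, not just the one-step reward.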
Thanks! Great lecture!!
Amazing!
Sorry, but you're a terrible teacher.
Sorry…
Yuuuummmm.
#sploosh.
#imnasty
It's amazing how technology allows us to access such high-quality educational content from anywhere in the world. Huge thanks to Lex for sharing these insightful and inspiring videos with us!
This lecture is gold
Awesome, man!