Alexander Amini
MIT Introduction to Deep Learning 6.S191: Lecture 1
*New 2020 Edition*
Foundations of Deep Learning
Lecturer: Alexander Amini
January 2020
For all lectures, slides, and lab materials: http://introtodeeplearning.com
Lecture Outline
0:00 – Introduction
4:14 – Course information
8:10 – Why deep learning?
11:01 – The perceptron
13:07 – Activation functions
15:32 – Perceptron example
18:54 – From perceptrons to neural networks
25:23 – Applying neural networks
28:16 – Loss functions
31:14 – Training and gradient descent
35:13 – Backpropagation
39:25 – Setting the learning rate
43:43 – Batched gradient descent
46:46 – Regularization: dropout and early stopping
51:58 – Summary
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!
Everyone watching these video lectures knows how high the quality is. It leaves me speechless; crystal clear, I can say. Thanks MIT and Prof. Alexander Amini 👌
What is wrong with the comments section? Just read more comments
Thank you for your professional lecture!
To be honest, my mom is an AI expert with years of dedicated experience in Machine Learning and Deep Learning. As a software engineer, I recently wanted to gain some knowledge of Deep Learning. One day I asked my mom via email, and she instantly recommended that I join your course on YouTube. Now watching your videos has become an important part of my daily life. I'm really happy to learn so much from your lectures.
great lecture
Should first-year students take this course, or not?
Imagine you tried to develop an interest in studying deep learning and dropped it at some point, but YouTube didn't forget that node; it somehow connected the dots and gave you a recommendation on a Friday night… Does that mean YT has analyzed my watch pattern deeply and learned from my behavior, how I move through its network to watch videos? This is some serious deep learning example in real time… Not sure whether the things I mentioned are even part of deep learning 🤔🤔
Sir, in this lecture the example given of a neural network that predicts whether students pass or not… it is a feed-forward network, right? …and the weights are updated using gradient descent, which uses backpropagation… then what is the difference between feed-forward networks and backpropagation networks?
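For anyone with the same question: these aren't two separate kinds of network. The feed-forward network defines the forward pass (inputs flow through the layers to a prediction), while backpropagation is just the procedure for computing the gradients of the loss with respect to the weights, which gradient descent then uses to update them. A minimal NumPy sketch in the spirit of the lecture's pass/fail example (the layer sizes and toy data here are made up for illustration, not from the slides):

```python
import numpy as np

# Made-up toy data: [hours on final project, lectures attended] -> pass (1) / fail (0)
X = np.array([[4.0, 5.0], [1.0, 2.0], [5.0, 8.0], [0.5, 1.0]])
y = np.array([[1.0], [0.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feed-forward network: 2 inputs -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

lr = 0.1
for step in range(1000):
    # Forward pass: this is the "feed-forward" part
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Backpropagation: apply the chain rule layer by layer to get the gradients
    d_out = (y_hat - y) / len(X)          # gradient at the output pre-activation (cross-entropy + sigmoid)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated back through the hidden layer
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient descent: step the weights against the gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The same weights appear in both phases: "feed-forward" describes the architecture, and "backpropagation" describes how it is trained.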
Thanks a lot for the best lectures on Deep Learning…
Wow, man! I was really surprised that I could grasp everything you explained!
He is only 24! This guy's gonna rock!
Thanks to Alexander Amini for this. Guys, if you want to learn Python from absolute basics to advanced, you can check my channel; I am making a Python series from absolute beginner to advanced.
This course is really fascinating; first time I'm hearing about it.
Thanks for sharing knowledge; it really helps to understand DL. Lots of love from INDIA.
One of the best Deep Learning lectures… wish we could have lab instructions too…
I like this class!! Thank you for sharing the knowledge, big up from France!
Couldn't be clearer; thanks a lot for sharing this knowledge.
What a time to be alive! Thanks MIT and thanks Alexander.
Thanks for uploading these, Alexander! These are amazing. Question for you – near the end (@50:52) you say that we need to stop training before the test and training curves start to diverge… is this also the case for a continuously growing training dataset? Would the hard stop be (theoretically) non-existent if the gradient is continuously changing?
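For context on what "stopping before the curves diverge" looks like in practice, early stopping is usually implemented by watching the loss on held-out data. A minimal sketch; the function names, callbacks, and the patience value are illustrative placeholders, not anything from the lecture:

```python
def train_with_early_stopping(model, train_step, val_loss_fn, max_epochs=100, patience=5):
    """Stop once the validation loss hasn't improved for `patience` epochs."""
    best_loss, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        train_step(model)                # one pass of gradient descent over the training set
        val_loss = val_loss_fn(model)    # loss on data the model never trains on
        if val_loss < best_loss:
            best_loss, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                    # training and validation curves are diverging
    return model
```

If the training set keeps growing, the validation loss can keep improving for longer, so the stopping point shifts; the criterion itself stays the same.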
What determines the number of hidden units in the hidden layers?
The cleanest and tightest deep learning intro lecture I've ever come across. Most others either get lost in the theory and math or in the coding. Skipping the coding using pseudo code and displaying the math along with the diagrams was really helpful.
Thanks Prof. Amini and MIT!
At 12:50, why did you write x and w as column vectors? Couldn't they be written either way, as row or column vectors?
I would say I'm surprised at how well Alexander taught this lesson. I'll definitely watch the rest of the course.
real GOLD
Can you please give us a plan to master machine and deep learning so we can do projects on our own, like start with course #1, then take this, this, and that?
It's gold!
15:15 Non-linearities
33:33 Loss Optimization
37:37 Backpropagation
I just can't understand how you could dislike this video. I mean, it is uploaded purely for the sake of education and it is one of the top lectures in its field, if not the best…
There is a typo at 23:18 where the z outputs need to have the nonlinearity applied before being summed into the output layer. The diagram shows this but the equation doesn't. 😀
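For reference, the corrected form the comment describes applies the activation g to each hidden pre-activation z_j before the weighted sum into the output layer; roughly (the exact index notation on the slide may differ):

\hat{y}_i = g\Big( w_{0,i}^{(2)} + \sum_j g\big(z_j\big)\, w_{j,i}^{(2)} \Big)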
Would love it if you guys wrote out the prerequisites…
One of the amazing things I have found on YouTube. It is free and we can never thank you enough for this. I am pleased to see everything I wished for is here, and I wish we could have this kind of education in our country, but this is great. Prof. Alexander Amini, you are great. Thank you so much.
You're more than universities.
Thanks a lot, Alex. Your lecture is so good. Especially how you explain backpropagation; it is very clear.