Lex Fridman
Introductory lecture on Human-Centered Artificial Intelligence (MIT 6.S093) I gave on February 1, 2019. For more lecture videos on deep learning, reinforcement learning (RL), artificial intelligence (AI & AGI), and podcast conversations, visit our website or follow TensorFlow code tutorials on our GitHub repo.
INFO:
Website: https://deeplearning.mit.edu
GitHub: https://github.com/lexfridman/mit-deep-learning
Slides: http://bit.ly/2IDMd0U
Transcript: http://bit.ly/2IDMkcQ
Playlist: http://bit.ly/deep-learning-playlist
OUTLINE:
0:00 – Introduction to human-centered AI
5:17 – Deep Learning with human out of the loop
6:11 – Deep Learning with human in the loop
8:55 – Integrating the human into training process and real-world operation
11:53 – Five areas of research
15:38 – Machine teaching
19:27 – Reward engineering
22:35 – Question about representative government as a recommender system
24:27 – Human sensing
27:06 – Human-robot interaction experience
30:28 – AI safety and ethics
33:10 – Deep learning for understanding the human
34:06 – Face recognition
45:20 – Activity recognition
51:16 – Body pose estimation
57:24 – AI Safety
1:02:35 – Human-centered autonomy
1:04:33 – Symbiosis with learning-based AI systems
1:05:42 – Interdisciplinary research
CONNECT:
– If you enjoyed this video, please subscribe to this channel.
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
Human-centered AI is the topic I'm most passionate about and hope to make my life's work. I believe that solving the human-centered grand challenges I outline here will lead to big leaps in the impact of AI on real-world applications in the next decade.
6 troglodytes disliked this video
I wish you luck; I hope you don't need a lifetime to see the fruits of your love. 🙂
A lot of information and a lot of food for thought. I particularly liked the grand challenges you mentioned. Looking forward to the rest of the lectures.
Truly great lecture, sir. I love human-centered AI and the way you said "a child".
This is great for anyone who knows how to listen
great lecture, thank you Lex
1:00
Humans will not be provably safe => AI supervision is required
Humans will not be provably fair => AI supervision is required
Humans will not be perfectly explainable => AI supervision is required
You're doing amazing work, Lex!
Showing my son this lecture; it will count for science and tech learning.
Sir Lex Fridman, would you need a vlog publicity partner … I need a normal per event job. I am considering a Q&A portion in my vlog with a sophisticated AI. Kindly See my YouTube Profile. #SacredGeometryCompass #NeutralPossibilitiesSolutions #GlobalInitiatives
Awesome lecture, Lex, thanks. If you work a bit on your presentation skills it will be even better. The points are not as important as engaging with your audience. The material is great though, so thanks for this.
Excellent video. Looking around the world at the rampant hate, division, war, crime, greed, subjugation and narcissism, we have difficulty in our AI cognition work accepting that the supervision should be one-way (i.e. human over AI), even when a 'crowd' or mob is used. We are working to have an AI oversee human supervision to detect bias, based on what we call 'seeds': very basic conceptual guideposts serving as relative measures that most humans should agree on but often don't (e.g. don't kill, don't hate, don't steal, don't lie). This at least gives an oversight cycle that reports on human guidance infractions. What mechanism in human-centered AI addresses the severity of human bias that has produced today's angry world, given that some of the worst historical human atrocities were 'crowd'-supported?
ya yanggang
I'm so passionate about AI. It will be one hell of a day once we get true AI. We will have created a god.
This is really awesome. I'm watching lectures on a new major discovery. When I look at modern videos of AI, it's like I'm watching the origins of its discovery. The teachings are primal compared to what they will be as time goes by.
13,832 views
Seems like a very low number
https://www.youtube.com/watch?v=bdJxAF9Xj28&t=2984s
If Netflix and YouTube recommendations should inform my view on AI running government, I'd rather keep Trump. Those recommendations only fan the flames of my worst habits.
Very important
31:00 Something to think about with this: deep fakes may not just be videos but phone calls too. Imagine not getting an email from your boss to do an emergency wire for 1.6 million dollars but actually getting a phone call from him, with all his vocal and social nuance, telling you to do it. We'll really have to think about security, on everything at every level.
'Arguing machines' reminds me of the recent Boeing 737 MAX crashes. They could have been avoided if redundancy had been taken seriously when designing the MCAS system.
On a meta-level it is quite interesting: to make machine learning more scalable, deep learning extracts the features without the human labour. Now we are putting the human labour/supervision back again into the system, just into a different role. So that's our human learning curve… Thanks for sharing the lecture. Great to see progress on the human-machine interaction front.
"This world is full of the most outrageous nonsense. Sometimes things happen which you would hardly think possible."
I learn so much from you! I'm still not married…🖤💗💜💗🖤
Y’all are kinda very dumb attempting to explain science without ever taking OG calculus or critical thinking. Fridman speaks sophomorically like he is talking to children. Oh…hahahaha….he is. It is their level. Fridman cannot change a tire or troubleshoot any engine or tune anything ever…..by the way. He attempts to play guitar. He is low level. Calicornication. Is Idiocracy
Fridman lunch can be eaten by any real OG American mechanic. If presented with realism he would not know what he sees.
The problem I have with artificial intelligence has nothing to do with your work Mr Fridman nor MIT.
The problem I have with it is a simple one: how long am I going to have a job for?
These technologies are worrying to me. I've seen shops fire employees and replace them with simple serving kiosks. With the possibility of understanding human expression and emotions and responding accordingly, how much longer until machines will be able to offer the same if not better customer service than me?
Great talk by the way, just incredibly depressing.
Brilliant lecture
Thanks a lot for making this publicly available. Would it be possible to have an interview with Prof. Harari, like this one at Stanford (https://www.youtube.com/watch?v=b9TfkgH0Xzw), on your podcast?
Human morals are naturally evolving. You can impose your morals on the machine today and reap the consequences 100 years later, when you have to deal with a world-ruling machine that has 100-year-old morals.
Go watch his interview with Eric Weinstein, before this and you hear it from a different perspective. Thanks for posting such great content Mr.Fridman.
My soul screamed out to my friends on Facebook (a relatively small group). I thought I'd post it below because there is truth in it, even though its theme may seem catastrophic:
Trying to recover my Apple ID has been a nightmare.
Here’s my advice to anyone working in customer service or in any operation generally:
We can all follow instructions without thinking about it: when Person A carries out an action, Person B consults the process map and identifies the pre-assigned response, to which Person A responds within the confines of said process, leading to another pre-defined response, and so on and so forth.
In customer service they call this 'transactional service'. You'd find it in highly regulated industries, where it protects both parties from falling foul of complex laws and regulations.
However, transactional service has become commonplace across industries. Complaint processes in general have become overly complicated, so even when it is completely evident and apparent that the company is in the wrong, the customer has to jump through a series of hoops just to reach a stage where a person will step away from the process map and resolve the issue.
So, in service it’s ‘transactional’ (strict process following) or ‘relational’ (considering the process does not fit the situation and relying on human communication and thinking)
In psychology it’s ‘unconscious behaviour’ (just following process) and ‘conscious behaviour’ (engaging your fucking brain you npc fucktard).
The warning in my little message: everything you do unconsciously can possibly be done by AI. If service is confined to a black-and-white set of rules, then AI can do service.
But AI cannot do 'relational service'. Relational service takes effort, experience, empathy… To a company these things are expensive and unnecessary; to humans, these things are priceless.
So, we can all support making things ‘easier’ by process mapping the world. But in doing so, we de-value conscious thought, de-value each other (we are less than the process map), and open the door to AI replacing us and none of us being happy with the result (but weirdly feeling it’s how it should be).
Now I've put the world to rights… think I'm gonna have pancakes.
End soul scream.
That’s my uneducated concern with AI
We need a 'Human Union' to stop the dehumanisation of life.
And on emotion: repressed emotion often appears as something else on the surface; repressed fear can present as anger (the cornered-coward scenario), and it can often take a therapist numerous conversations before the true emotions are identified and dealt with. Could AI accurately assess emotional state from surface data?
And on values: look at Design Decode from Italy. They are mapping human values and have a list of over 200. We all have different values, and Jung believed these values cannot be chosen; they are part of us.
And on Empathy:
I can never truly empathise with a black man (for example) because I have never been black. I can imagine what it would be like to be a human facing the situations black people can face, but I can never understand what it feels like to be a black person in said situation.
Ergo, AI can understand our values and our emotions and analyse how we may react… but it can never understand what it feels like to be a human in said situation.
Fascinating vids; I'll be watching your work with great interest.
Humans are electrons… this is not a good idea; so many people don't know nature.
34:52
saving my spot for later
Wonderful talk. Where deep learning and other aspects of parsing through data are at this time is simply breathtaking. I do find beauty in algorithms being able to make use of real-time and real-world data. My only worry is that our species has yet to discard its costlier behaviours and ideas. Let's just see where this all evolves towards. 🌻
Human-centred AI = stronger authoritarianism