Videos

MIT 6.S093: Introduction to Human-Centered Artificial Intelligence (AI)



Lex Fridman

An introductory lecture on Human-Centered Artificial Intelligence (MIT 6.S093) that I gave on February 1, 2019. For more lecture videos on deep learning, reinforcement learning (RL), artificial intelligence (AI & AGI), and podcast conversations, visit our website or follow the TensorFlow code tutorials in our GitHub repo.

INFO:
Website: https://deeplearning.mit.edu
GitHub: https://github.com/lexfridman/mit-deep-learning
Slides: http://bit.ly/2IDMd0U
Transcript: http://bit.ly/2IDMkcQ
Playlist: http://bit.ly/deep-learning-playlist

OUTLINE:
0:00 – Introduction to human-centered AI
5:17 – Deep Learning with human out of the loop
6:11 – Deep Learning with human in the loop
8:55 – Integrating the human into training process and real-world operation
11:53 – Five areas of research
15:38 – Machine teaching
19:27 – Reward engineering
22:35 – Question about representative government as a recommender system
24:27 – Human sensing
27:06 – Human-robot interaction experience
30:28 – AI safety and ethics
33:10 – Deep learning for understanding the human
34:06 – Face recognition
45:20 – Activity recognition
51:16 – Body pose estimation
57:24 – AI Safety
1:02:35 – Human-centered autonomy
1:04:33 – Symbiosis with learning-based AI systems
1:05:42 – Interdisciplinary research

CONNECT:
– If you enjoyed this video, please subscribe to this channel.
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman


38 thoughts on “MIT 6.S093: Introduction to Human-Centered Artificial Intelligence (AI)”
  1. Human-centered AI is the topic I'm most passionate about and hope to make my life's work. I believe that solving the human-centered grand challenges I outline will lead to big leaps in the impact of AI on real-world applications over the next decade. Here's the outline of the lecture:

    0:00 – Introduction to human-centered AI
    5:17 – Deep Learning with human out of the loop
    6:11 – Deep Learning with human in the loop
    8:55 – Integrating the human into training process and real-world operation
    11:53 – Five areas of research
    15:38 – Machine teaching
    19:27 – Reward engineering
    22:35 – Question about representative government as a recommender system
    24:27 – Human sensing
    27:06 – Human-robot interaction experience
    30:28 – AI safety and ethics
    33:10 – Deep learning for understanding the human
    34:06 – Face recognition
    45:20 – Activity recognition
    51:16 – Body pose estimation
    57:24 – AI Safety
    1:02:35 – Human-centered autonomy
    1:04:33 – Symbiosis with learning-based AI systems
    1:05:42 – Interdisciplinary research

  2. 1:00
    Humans will not be provably safe => AI supervision is required
    Humans will not be provably fair => AI supervision is required
    Humans will not be perfectly explainable => AI supervision is required

  3. Awesome lecture, Lex, thanks. If you work a bit on your presentation skills it will be even better. The points are not as important as engaging with your audience. The material is great, though. So thanks for this.

  4. Excellent video. Looking around the world at the rampant hate, division, war, crime, greed, subjugation and narcissism, in our AI cognition work we have difficulty accepting that the supervision should be one-way (i.e. human over AI), even when a 'crowd' or mob is used. We are working to have an AI oversee human supervision to detect bias, based on what we call 'seeds': very basic conceptual guideposts that act as a relative measure most humans should agree on but often don't (e.g. don't kill, don't hate, don't steal, don't lie). This at least gives an oversight cycle that reports on human guidance infractions. What mechanism in human-centered AI addresses the severity of human bias that has produced today's angry world, given that some of the worst historical human atrocities were 'crowd'-supported?

  5. This is really awesome. I'm watching lectures on a major new discovery. When I look at modern videos of AI, it's like I'm watching the origins of its discovery. The teachings are primal compared to what they will be as time goes by.

  6. If Netflix and YouTube recommendations should inform my view on AI running government, I'd rather keep Trump. Those recommendations only fan the flames of my worst habits.

  7. 31:00 – something to think about with this: deep fakes may not just be videos but phone calls too. Imagine not getting an email from your boss to do an emergency wire for 1.6 million dollars, but actually getting a phone call from him, with all his vocal and social nuance, telling you to do it. We'll really have to think about security, on everything, at every level.

  8. 'Arguing machines' reminds me of the recent Boeing 737 MAX crashes. They could have been avoided if redundancy had been taken seriously when designing the MCAS system. (A minimal sketch of this kind of redundancy appears after these comments.)

  9. On a meta-level it is quite interesting: to make machine learning more scalable, deep learning extracts the features without human labour. Now we are putting the human labour/supervision back into the system, just in a different role. So that's our human learning curve… Thanks for sharing the lecture. Great to see progress on the human-machine interaction front.

  10. "This world is full of the most outrageous nonsense. Sometimes things happen which you would hardly think possible."
    I learn so much from you! I'm still not married…🖤💗💜💗🖤

  11. Y’all are kinda very dumb attempting to explain science without ever taking OG calculus or critical thinking. Fridman speaks sophomorically like he is talking to children. Oh…hahahaha….he is. It is their level. Fridman cannot change a tire or troubleshoot any engine or tune anything ever…..by the way. He attempts to play guitar. He is low level. Calicornication. Is Idiocracy

  12. The problem I have with artificial intelligence has nothing to do with your work, Mr Fridman, nor with MIT.
    The problem I have with it is a simple one: how long am I going to have a job for?

    These technologies are worrying to me. I've seen shops fire employees and replace them with simple
    self-service kiosks; with the possibility of understanding human expression and emotions and responding accordingly, how much longer until machines will be able to offer the same if not better customer service than me?

    Great talk by the way, just incredibly depressing.

  13. Human morals naturally evolve. You can impose your morals on the machine today and reap the consequences 100 years later, when you have to deal with a world-ruling machine that has 100-year-old morals.

  14. Go watch his interview with Eric Weinstein before this and you'll hear it from a different perspective. Thanks for posting such great content, Mr. Fridman.

  15. My soul screamed out to my friends on Facebook (a relatively small group); I thought I’d post it below because there is truth in it, even though its theme may seem catastrophic:

    Trying to recover my Apple ID has been a nightmare.

    Here’s my advice to anyone working in customer service or in any operation generally:

    We can all follow instructions without thinking about it: when person A carries out an action, person B consults the process map and identifies the pre-assigned response, to which person A responds within the confines of said process, leading to another pre-defined response, and so on and so forth.

    In customer service they call this ‘transactional service’. You’d find it in highly regulated industries to protect both parties from falling foul of complex laws and regulations.

    However, transactional service has become commonplace across industries. Complaint processes in general have become overly complicated, so even when it is completely evident that the company is in the wrong, the customer has to jump through a series of hoops just to get to a stage where a person will step away from the process map and resolve the issue.

    So, in service it’s ‘transactional’ (strict process following) or ‘relational’ (recognising that the process does not fit the situation and relying on human communication and thinking).

    In psychology it’s ‘unconscious behaviour’ (just following process) and ‘conscious behaviour’ (engaging your fucking brain, you NPC fucktard).

    The warning in my little message: everything you do unconsciously can possibly be done by AI. If service is confined to a black-and-white set of rules, then AI can do service.

    But AI cannot do ‘relational service’. Relational service takes effort, experience, empathy… to a company these things are expensive and unnecessary; to humans, these things are priceless.

    So, we can all support making things ‘easier’ by process-mapping the world. But in doing so, we devalue conscious thought, devalue each other (we are less than the process map), and open the door to AI replacing us and none of us being happy with the result (but weirdly feeling it’s how it should be).

    Now I’ve put the world to rights… think I’m gonna have pancakes.

    End soul scream.

    That’s my uneducated concern with AI.

    We need a ‘Human Union’ to stop the dehumanisation of life.

  16. And on emotion: repressed emotion often appears as something else on the surface; repressed fear can present as anger (the cornered-coward scenario), and it can often take a therapist numerous conversations before the true emotions are identified and dealt with. Could AI accurately assess emotional state from surface data?

    And values: look at Design Decode from Italy. They are mapping human values and have a list of over 200. We all have different values, and Jung believed these values cannot be chosen; they are part of us.

  17. And on empathy:

    I can never truly empathise with a black man (for example) because I have never been black. I can imagine what it would be like to be a human and face the situations black people can face, but I can never understand what it feels like to be a black person in that situation.

    Ergo, AI can understand our values and our emotions and analyse how we may react… but it can never understand what it feels like to be a human in said situation.

    Fascinating vids; I’ll be watching your work with great interest.

  18. Wonderful talk. Where deep learning and other aspects of parsing through data are at this time is simply breathtaking. I do find beauty in algorithms being able to make use of real-time and real-world data. My only worry is that our species has yet to discard its costlier behaviours and ideas. Let's just see where this all evolves. 🌻
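
A minimal sketch of the 'arguing machines' redundancy mentioned in comment 8, which also puts the human back into the loop as comment 9 describes: two independent estimators make the same decision, and when they disagree beyond a tolerance the system defers to a human supervisor instead of acting on its own. The function names, toy models, and 0.1 tolerance below are illustrative assumptions, not details taken from the lecture.

# Sketch of an "arguing machines" style redundancy check (illustrative).
# Two independent estimators produce a decision; if they disagree beyond a
# tolerance, the decision is deferred to a human supervisor.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ArbitratedDecision:
    value: float      # the decision actually used
    deferred: bool    # True if a human had to arbitrate


def arbitrate(primary: Callable[[List[float]], float],
              secondary: Callable[[List[float]], float],
              ask_human: Callable[[List[float]], float],
              x: List[float],
              tolerance: float = 0.1) -> ArbitratedDecision:
    """Run two independent estimators; defer to a human when they disagree."""
    a = primary(x)
    b = secondary(x)
    if abs(a - b) <= tolerance:
        # The machines agree: average their outputs and act autonomously.
        return ArbitratedDecision(value=(a + b) / 2.0, deferred=False)
    # The machines disagree: hand the case to the human supervisor.
    return ArbitratedDecision(value=ask_human(x), deferred=True)


if __name__ == "__main__":
    # Toy example: the two estimators disagree on this input,
    # so the decision is routed to the (placeholder) human.
    mean_model = lambda xs: sum(xs) / len(xs)
    median_model = lambda xs: sorted(xs)[len(xs) // 2]
    human = lambda xs: 0.0
    print(arbitrate(mean_model, median_model, human, [0.1, 0.2, 0.9]))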
