Lex Fridman
Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book “Thinking, Fast and Slow,” which summarizes, in an accessible way, his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: “System 1” is fast, instinctive, and emotional; “System 2” is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking. This conversation is part of the Artificial Intelligence podcast.
This conversation was recorded in the summer of 2019.
This episode is presented by Cash App. Download it & use code “LexPodcast”:
Cash App (App Store): https://apple.co/2sPrUHe
Cash App (Google Play): https://bit.ly/2MlvP5w
INFO:
Podcast website:
https://lexfridman.com/ai
Apple Podcasts:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/category/ai/feed/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
EPISODE LINKS:
Thinking Fast and Slow (book): https://amzn.to/35UekjE
OUTLINE:
0:00 – Introduction
2:36 – Lessons about human behavior from WWII
8:19 – System 1 and system 2: thinking fast and slow
15:17 – Deep learning
30:01 – How hard is autonomous driving?
35:59 – Explainability in AI and humans
40:08 – Experiencing self and the remembering self
51:58 – Man’s Search for Meaning by Viktor Frankl
54:46 – How much of human behavior can we study in the lab?
57:57 – Collaboration
1:01:09 – Replication crisis in psychology
1:09:28 – Disagreements and controversies in psychology
1:13:01 – Test for AGI
1:16:17 – Meaning of life
CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman
I really enjoyed this conversation with Daniel.
Listening to a Gerry Mulligan record in the background and to this podcast as well, with my morning coffee.
Great start to the day. 🙂
It seems to me that neural networks are to System 1 thinking as rules and expert systems are to System 2. This might mean that AGI will need a blend of the two.
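A minimal, purely illustrative sketch of what such a blend might look like, in Python: a fast, learned pattern-matcher (standing in for a trained network) proposes an answer, and a slower, rule-based checker deliberates before accepting it. All names, scores, and rules here are hypothetical.

# "System 1": fast, intuitive pattern matching (stand-in for a neural net).
def system1_guess(image_features):
    # Hypothetical learned scores; a real system would run a forward pass.
    scores = {"stop_sign": 0.90, "yield_sign": 0.07, "billboard": 0.03}
    best = max(scores, key=scores.get)
    return best, scores[best]

# "System 2": slow, deliberate rule checking before the guess is accepted.
def system2_verify(label, confidence, context):
    if confidence < 0.8:
        return "defer"        # rule: don't act on weak intuitions
    if label == "stop_sign" and context.get("on_highway"):
        return "re-examine"   # rule: stop signs are rare on highways
    return label              # the intuition passes the explicit checks

label, conf = system1_guess(image_features={})  # features omitted in this toy
print(system2_verify(label, conf, context={"on_highway": False}))  # stop_sign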
Promising blog on deep learning below. Have a look!
https://sites.google.com/view/techmadeeasy/deep-learning-state-of-the-art
Low key shots fired against Kurzweil
Nice 😎
Amazing! Can we expect Yuval Noah Harari in an upcoming podcast?!
I'd like to comment, but I'd need to change and improve the way I think before I could formulate any helpful response. I doubt I even understood what was subtle.
This is the first time I've heard of or seen this man, but it is so glaringly obvious I've been missing out on some profound knowledge and perspective. Brilliant.
Amazing that both of you are Jewish.
I can't understand how there are people out there who want to kill both of you amazing people simply because of your ethnicity. Why?!
Regarding the appeal of social media: There is an observation that Tocqueville states, and repeats a few times, that I like. He does not break it down into parts or try to explain it, just asserts it. It seems similar to what many people like about today's social media. I have some ideas about what causes this, but for now I'll just put down Tocqueville's words. "[T]he impetus of […] passion is irresistible […] because it is felt and shared by millions of men at the same time." http://www.gutenberg.org/files/815/815-h/815-h.htm
1:16:49 Life
There is no System 1 or System 2… this is not science…
smart guy
Does anyone know what study Kahneman is talking about towards the end of the video (1:04:00)? The one analyzing 53 studies from behavioral economics. Thanks!
Thank you so much for these interviews. They are insanely intriguing and curiosity-driving, particularly in this format.
Excellent stuff brilliant 😋
I'd love to have that guy as a grandpa
"How far can we go with System 1/associative Learning?" – Lex. Such a great question..
Self-driving cars could have a system for signaling to a crossing pedestrian that the car is aware of them and is stopping… in effect "making eye contact". It could be a pointer with a light.
38:17 "..Shared fictions and the stories that we tell ourselves.." This nicely coincides with Jim Keller's words (at 1:23): "I imagine 99% of your thought process is protecting your self-conception, and 98% of that is wrong." I think Lex's thoughts on 'modeling pedestrians' are overblown. AI for autonomous driving only needs to emulate basic human driver behavior ('tell a convincing story' to pedestrians or other drivers, i.e., when to be assertive, aggressive, passive, or patient). I think this is all in the collected data and can be trained.
– That's my expert YouTube armchair AI design opinion.
Lex, thank you for having this guest. I bought “Thinking, Fast and Slow” a few years back and loved it. There is so much insight into how the mind works in that book. It seems like it would really help with AI concepts. (I believe I got turned on to it by reading something else by him and Tversky before that.) Daniel is such an interesting guy. Great interview!
The cash app? Never heard of it, I'm only familiar with the mother fucking cash app.
Fantastic channel. Bravo
Guests I’d be thrilled to see you have a discussion with:
-Douglas Hofstadter
-Robert Sapolsky
-George Lakoff
-Peter Thiel
-Alex “Sandy” Pentland
Such a humble and knowledgeable scientist. He says I don't know or I'm not sure so many times, and yet, he doesn't shy away from having an opinion based on intuition. You can feel his commitment and love towards reason and science. Thanks, Lex!
There is causality in RNNs; I do not see that as a real issue. How we are able to represent experiences and data and extrapolate to the unknown is amazing. Why some are so good at it, and what they have that the average person lacks, puzzles me more…
Kahneman is a personal hero of mine.
Summary: notes taken while watching
[05:21] «It is surprising that it was so extreme, but… one thing in human nature, I don't want to call it evil, the distinction between the in-group and the out-group, that is very basic, that's built in, the loyalty and affection towards in-group and the willingness to dehumanize the out-group, that's really human nature.» — Daniel Kahneman
Studies of the human mind and its limitations while engineering intelligent systems:
System I: fast; effortlessly and automatically generates ideas
System II: slow; verifies; conscious mental effort with a limited, single focus; manipulating ideas; imagination with conditional thinking
[11:14] Animals have a perceptual system, even if they cannot explain the world, they can anticipate what's going to happen, and that's the key form of understanding.
[12:47] We have to trust System I; without its speed we wouldn't survive.
[15:43] Deep learning as of today is more like a System I product: it matches patterns and anticipates, and it is highly predictive. Yet it doesn't have the System II ability to reason with causality and meaning.
[18:04] Current goal: make AI learn quickly from only a few examples. Maybe some assumptions or presuppositions need to be provided first to allow for faster learning, just as biology equipped us with genetically coded instructions, biases, expectations, and intuitions from birth and even before.
[21:37] «You get systems that translate and they do a very good job, …but they really don't know what they are talking about. For that, I am really quite surprised, you would need an AI that has sensation, an AI that is in touch with the world. Without grounding you get a machine that doesn't know what it's talking about, because it is talking about the world, ultimately.»
You have to be able to actively learn, play with the world, anticipate, interact, connect patterns in the world with patterns in your brain.
[31:28] Daniel: «It must be very difficult to program a recognition that you are in a problematic situation, without understanding the problem.» Lex replies that, in order to recognize the full scope of problematic situations, «you almost need to be smart enough to solve all those problems.»
[34:36] «Go is endlessly complicated but very constrained and in the real world there are far fewer constraints and many more potential surprises.»
[37:48] Some systems are superior, but as they can't explain their reasoning, humans may not want to trust them. «There is a very interesting aspect of that: humans think they can explain themselves. So, when you say something and I ask you ‹why do you believe that?›, then reasons will occur to you… but actually, in most cases the reasons had very little to do with why you believe what you believe. The reasons are a story that comes to your mind when you need to explain yourself. Human interaction depends on those shared fictions and on those stories that people tell themselves.» «The story doesn't necessarily need to reflect the truth, it might just need to be convincing.» «The objective of having an explanation is to tell a story that would be acceptable to people. And for it to be acceptable, …robustly acceptable, it has to have some elements of truth, but the objective is for people to accept it.»
[40:21] What distinguishes the experiencing self from the remembering self?
[41:45] «Basically, decision making and everything that we do is governed by our memories and not by what actually happened; it's governed by the story that we told ourselves or by the story that we are keeping. In stories, time doesn't matter. There's a sequence of events, and events matter, highlights, but time doesn't. It turns out that we go on vacations in large part to construct memories, not to have experiences. I abandoned happiness research because I couldn't solve that problem. If you do talk in terms of those two selves, then clearly what makes the remembering self happy and what makes the experiencing self happy are different things.
For contemplation: Suppose you are planning a vacation and you are told that at the end of the vacation you'll get an amnesic drug, you will remember nothing, and all your photos and so on will be destroyed as well, so there will be nothing. Would you still go on the same vacation?»
[45:02] Digital social media tremendously magnify the remembering self.
[46:32] In times of permanent fast internet access, it's much less important to know things.
[48:55] Existentialist philosophy: the benefits, such as contentment or maybe even happiness, of letting go of the things and procedures of the remembering self and instead highlighting the experiencing self, without evaluating, passing judgement, or keeping score.
[50:23] «My intuition was that the experiencing self, that's reality. But then it turns out that what people want for themselves is not experiences; they want memories and they want good stories about their life. And you cannot have a theory of happiness that does not correspond to what people want for themselves.»
[51:05] «So current AI systems are more like the experiencing self, in that they react to the environment, …there's some pattern formation like learning and so on, but you really don't construct memories, …except in reinforcement learning where you replay over and over.»
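The «replay over and over» mentioned above refers to experience replay in reinforcement learning. A minimal sketch of a replay buffer, in Python, purely illustrative (the class and method names are hypothetical): the agent stores transitions as they happen, and learning revisits random samples of them later.

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest memories fall out first

    def store(self, state, action, reward, next_state):
        # Record what just happened, one transition at a time.
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        # Revisit a random batch of past transitions to learn from them.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

buf = ReplayBuffer()
buf.store(state=0, action="right", reward=1.0, next_state=1)
print(buf.sample(batch_size=1))  # -> [(0, 'right', 1.0, 1)]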
[51:38] «You think it's a feature or a bug that we humans look back?» «Definitely a feature. I mean you have to look back in order to look forward.»
[54:53] Most of our controlled, scientific understanding of the world is the result of lab-based experiments. «Do you think we can understand the fundamentals of human behavior through controlled experiments in the lab? Because in driving simulators we don't capture true, honest human behavior in that particular domain.» «You can learn a lot. But your conclusions are basically limited to the situation of the controlled experiment. Then you have to make the big inductive leap to the real world in order to validate them.»
[56:54] «Very skilled intuition. I just had that experience: I had an idea that actually turned out to be a really good idea, days ago, and you have a sense of that building up. I couldn't exactly explain the idea or what's going on, but I knew this was going somewhere. You know, I've been playing that game for a very long time, and so you develop that anticipation, that ‹yes, this is worth following up!›. That's part of the skill. You can't reduce it to words of advice like describing a process. It's like trying to explain what it's like to drive… you've got to break it apart and then you lose the experience.»
[58:04] In order to find somebody you love to collaborate with, you have to be lucky: lucky to find a person whose responses you really like and who shares your curiosity. Then you'll magically finish each other's sentences, and it feels like you violate information-theoretic laws by exchanging more information than has actually been communicated.
[01:02:30] What is your theory regarding the replication crisis in experimental psychology? -> Between-subject experiments are much harder to predict (and replicate?) than within-subject experiments. Researchers think through their experiments from a within-subject perspective, seeing both conditions at once, even though the design they run is between-subject, so their intuitions mislead them.
[01:05:40] The focusing illusion: when you think about something it looks very important, more important than it really is.
[01:10:50] It's interesting how hard it is for people to change their minds. You build your system and live in it; other systems of ideas look foreign, and there is very little contact and very little mutual influence. «We hold the opinions that we have not because we know why we have them but because we trust some people. It's much less about evidence than it is about stories.»
[01:13:19] «What is a good test for intelligence in AI systems?» «AGI would be doing better at any task.»
// My classical answer: physical survival
// My spiritual answer: love of the self
[01:16:28] «What is the meaning of it all, the meaning of life?» «There is no answer that I can understand, and I am not actively looking for one. There is no answer that we can understand. I am not qualified to speak about what we cannot understand. But I know that we cannot understand reality. I mean, there's a lot of things we can do, …you know, …gravity waves, a big moment for humanity, when you imagine that ape being able to go back to the big bang. But the ‹why› is hopelessly bigger than us, really.»