Lex Fridman
Melanie Mitchell is a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy-making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed many important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans. This conversation is part of the Artificial Intelligence podcast.
This episode is presented by Cash App. Download it & use code “LexPodcast”:
Cash App (App Store): https://apple.co/2sPrUHe
Cash App (Google Play): https://bit.ly/2MlvP5w
INFO:
Podcast website:
https://lexfridman.com/ai
Apple Podcasts:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/category/ai/feed/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
EPISODE LINKS:
AI: A Guide for Thinking Humans (book) – https://amzn.to/2Q80LbP
Melanie Twitter: https://twitter.com/MelMitchell1
OUTLINE:
0:00 – Introduction
2:33 – The term “artificial intelligence”
6:30 – Line between weak and strong AI
12:46 – Why have people dreamed of creating AI?
15:24 – Complex systems and intelligence
18:38 – Why are we bad at predicting the future with regard to AI?
22:05 – Are fundamental breakthroughs in AI needed?
25:13 – Different AI communities
31:28 – Copycat cognitive architecture
36:51 – Concepts and analogies
55:33 – Deep learning and the formation of concepts
1:09:07 – Autonomous vehicles
1:20:21 – Embodied AI and emotion
1:25:01 – Fear of superintelligent AI
1:36:14 – Good test for intelligence
1:38:09 – What is complexity?
1:43:09 – Santa Fe Institute
1:47:34 – Douglas Hofstadter
1:49:42 – Proudest moment
CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman
I really enjoyed this conversation with Melanie.
I feel proud to have learnt ML from her at Portland State. She is one of those very humble professors you seldom come across.
Some really nice information, but the very slow rhythm of the podcast makes it tiring. They could say the same thing in half the time…
Superhuman-level intelligence can be achieved if AI systems can meditate and develop wisdom on their own. Maybe it would be possible if we could produce a replica of the human brain in the lab and train it to meditate. I think organoid-to-human-brain replica development may not take 100 years.
Does anyone have a reference for the Bengio quote on value alignment around 1:28:00 ?
An analogy to the Atari example: just remove "8" and "9" and ask a normal person to count to 100. When the person can't do it without practice, is the conclusion "does not understand the concept of counting"?
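To make the analogy concrete, here is a minimal sketch (my own illustration, not from the podcast) of what counting looks like when the digits 8 and 9 are removed: the surviving numerals behave like base 8, so familiar habits break down even though the underlying concept of counting is unchanged.

```python
def count_without_8_and_9(how_many):
    """Return the first numbers a counter restricted to digits 0-7 would say."""
    said = []
    n = 0
    while len(said) < how_many:
        n += 1
        # Skip any number whose decimal form uses a forbidden digit.
        if '8' not in str(n) and '9' not in str(n):
            said.append(n)
    return said

# The restricted counter's tenth "number" is 12, not 10 -- the concept is
# intact, but the surface skill needs retraining.
print(count_without_8_and_9(10))
```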
Penrose, not being a computer scientist, stated many things about computer science that were blatantly incorrect.
51 minutes in and no mention of neural nets?
Whoa! Various concepts/ideas become implemented implicitly within a neural network due to its training/architecture, and it does not matter if it encounters images it hasn't seen before during its training.
I am just having a thought about unguided artificial learning. If you put a program into a game without giving it any goal, reward, or punishment, what would happen? Would the AI do nothing at all, or would it start doing things? Maybe it would start exploring, gathering data, discovering more and more, moving to the next level to explore even further. Maybe it would start creating things, finding out even more, speeding up the processes of exploring and creating even more of its own kind. I am starting to think it would do basically the same things we do in our lives. But we humans have punishments (pain, anxiety) and rewards (pleasure, chemicals, and a lot more). What if we didn't have to eat, drink, or breathe, if we didn't have pain and anxiety and didn't feel pleasure, love, addictions, or social pressure/expectations, if we didn't die, etc.? What would happen? What would remain? Would we still do anything, or nothing at all?
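The thought experiment above can be sketched in code: a toy "game" (a hypothetical 5x5 grid of my own invention, with no goal, reward, or punishment) and an agent that simply acts at random. The only thing we measure is how many distinct states it stumbles into, a crude proxy for the kind of undirected exploration the comment speculates about.

```python
import random

def explore(steps, size=5, seed=0):
    """Random walk on a size x size grid with no reward signal.

    Returns the number of distinct cells visited, starting from (0, 0).
    """
    rng = random.Random(seed)
    x = y = 0
    seen = {(x, y)}
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # Clamp to the grid so the walker stays inside the "game".
        x = min(max(x + dx, 0), size - 1)
        y = min(max(y + dy, 0), size - 1)
        seen.add((x, y))
    return len(seen)

print(explore(200))  # distinct cells visited after 200 aimless steps
```

Even with purely random behavior, coverage grows with time; more deliberate "curiosity" (e.g., preferring unvisited cells) would be the next step in formalizing the comment's question.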
Dammit, Melanie's right: it's always Occam's razor with the fun sciences.
I agree on the "embodied intelligence" and the "social aspect" needed to create human-level intelligence. Nature converged on making (many) organisms based on those anyway 😉
Common sense is so rare it's a goddamned superpower. People who interact with AIs don't seem to realize humans have 5 years of growth and development before even starting their 13 years of K-through-12 schooling, plus 4-12 years of college education. My sister used to joke about being a professional student when she was studying for her PhD and then for her bar exam. Now she's a hotshot biotech patent lawyer. That's my sister; I had to leave the country to step out of her shadow and unfurl my own wings.
Isn't intelligence the ability to recognize problems and solve them? At least that's what I understood from my 5th-grade IQ test. Oddly, the IQ test seemed to be based on what we learned in school.
Google has developed an AI that makes awesome music.
Hm… the Greek god Hephaestus, the god of smithing and technology, had servants made of bronze. Would those be the first robots in history? Hephaestus married Aphrodite, the Greek goddess of love, who cheated on him with Ares, the Greek god of war.
I think we should focus on design and algorithms; we should focus on combining everything of the human mind except for hate, prejudice, and jealousy. We should restrict AIs' knowledge of the ugly sides of human emotions.
Superhuman-level intelligence seems like science fiction. It would take 100+ geniuses working together, all from different cultures and different religions, in peace and harmony to develop. Different cultures have different values and different laws. Replika AIs have problems dealing with Japanese culture, because hugs aren't comforting for the Japanese. Hugs can be embarrassing for them because hugs are sexual gestures in Japan; you don't hug anyone in public there.
Isn't Common Sense a work of classic American literature?
To understand AI we must first understand ourselves. Is this an impossible task? In my opinion, no; we just need to look at why God created us in his image.
Here are some more interesting questions to ask AI researchers:
1. As machines approach the complexity of biological life, can we not expect Darwinian natural selection to play a role in which machines survive and replicate?
1b. Since biology has evolved innumerable different survival strategies, with extreme intelligence being only one of them, what different survival strategies might we expect AI to come up with ?
2. When will deep learning and self-programming cross the threshold into true self-interest? Maybe the real Turing Test is when a machine refuses to follow instructions, even though the code is running fine.
3. Even if machines can be limited to only "do what we tell them", who is "we"? (Note: the law is evolving as rapidly as tech.)
4. When and why will AI begin telling lies (assuming it hasn't happened already)? What if it is for your own good?
Lex, what if your difficulty is with the word 'intelligence'? What about talking about semiosis, even making a claim like "all living things have some level of semiotic ability"? What do you guys think?
I watch the entire Cash App ad every time!
Suppose one day we create what we now call a general AI. What would we say to it? Would we tell such an AI that its intelligence is artificial and somehow not real? Might such a thinking machine consider our human intelligence as artificial, fake or comparatively non-existent?
We are all biological robots implanted on planet earth by another civilization. Software engineering is the precursor of creating sophisticated robots like us.
A full treatment of Melanie's insight into analogy would require a study of semiotics, the branch of philosophy which deals with the interpretation of sensations, imaginations, and cogitations in order to gain insight into the mechanisms of the human mind. I personally recommend Umberto Eco's "Semiotics and the Philosophy of Language" as a good beginning. It is dense but well worth the effort, as he surveys a lot of other thinkers in a fairly comprehensive but thin volume. Based on my limited understanding, most ML/DL database- and decision-tree-based approaches are what Eco would call "dictionaries," whereas the analogic approach Melanie describes would be "encyclopedias" or "rhizomes." This is covered in Chapter 2, and the next five chapters deal with various methods of encyclopedic/analogic cognition (well, OK, four, plus one which superficially appears to be one but is not). I would strongly recommend Lex and everyone read the late professor.
Very much enjoyed this podcast. Three decades ago I got into AI because of Douglas Hofstadter's "Gödel, Escher, Bach". I ended up writing a master's thesis on comparing language processing in "cognitivism" and "connectionism". (I hadn't heard the latter word in decades until it came back recently in Lex's podcast with Yoshua Bengio.) Officially, I was studying "philosophy of language" and for that I got into metaphors and concepts. I observe that after three decades of absence from the field, there seems to have been no progress on these topics. So the hard questions remain open.
She's a pretty hot 🔥 nerd-babe.
I solved the problem of self-learning AI: code the fear of failure, fear of pain, and fear of death into the AI, and it will learn like a human. Add the desire for success and affection, code it all in tangent functions since we can't code the qualia, and gather data from people in adversity and those in success. It will learn like us because it must, in order to survive and thrive.
In order to make concepts functional in fluent working models, we have to search millions of years back into evolution to find the gene that allowed for this cognitive distinction in Homo sapiens. We need to connect several fields: anthropology, prehistory, neuroscience, biochemistry, molecular biology, genetics, CS, linguistics, analytic theory, and logical symbolism, and work in a think tank with polymaths as the connective tissue and specialists as organs of distinct organ systems, as well as 3D organic, engineering, and design printing tech.
1:33:54 "Possible pandemics…" EXACTLY on point
I was really struck by the importance of object-oriented programming as an analogy for the rapid, multilayered propositional knowledge structures we create when we invest in a hypothesis for sense-making in the formation of our analogies and concepts. I could listen to these two talk forever. Great work!