Lex Fridman
François Chollet is the creator of Keras, an open-source deep learning library designed to enable fast, user-friendly experimentation with deep neural networks. It serves as an interface to several deep learning backends, the most popular of which is TensorFlow, and it was integrated into TensorFlow's main codebase a while back. Aside from creating an exceptionally useful and popular library, François is also a world-class AI researcher and software engineer at Google, and he is definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of artificial intelligence. This conversation is part of the Artificial Intelligence podcast.
INFO:
Podcast website: https://lexfridman.com/ai
Full episodes playlist: http://bit.ly/2EcbaKf
Clips playlist: http://bit.ly/2JYkbfZ
EPISODE LINKS:
François twitter: https://twitter.com/fchollet
François web: https://fchollet.com/
OUTLINE:
0:00 – Introduction
1:14 – Self-improving AGI
7:51 – What is intelligence?
15:23 – Science progress
26:57 – Fear of existential threats of AI
28:11 – Surprised by deep learning
30:38 – Keras and TensorFlow 2.0
42:28 – Software engineering on a large team
46:23 – Future of TensorFlow and Keras
47:53 – Current limits of deep learning
58:05 – Program synthesis
1:00:36 – Data and hand-crafting of architectures
1:08:37 – Concerns about short-term threats in AI
1:24:21 – Concerns about long-term existential threats from AI
1:29:11 – Feeling about creating AGI
1:33:49 – Does human-level intelligence need a body?
1:34:19 – Good test for intelligence
1:50:30 – AI winter
CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman
I really enjoyed this conversation with François.
He is a great developer; his social skills, not so much…
To avoid an AI winter, we need people in this field to have skin in the game.
Wow, this guy's systemic view is beyond anything I've heard.
How is the universe an information processing system?
Lex. C is a feature not necessarily emergent
Thanks
Musk: "we must work on Neuralink". LOL, sure buddy.
This guy is obsessed with limitations. How can he not understand that the human mind has no limitations? Every time we get to a point in technology where naysayers believe we are limited, we break through in a different substrate.
I used this video in my post as a reference, here is the URL to read it : https://medium.com/datadriveninvestor/relax-ai-will-not-take-over-the-world-de13a152af17?source=friends_link&sk=1b08f14cded3c42b65d243de34b2ad47
Hey Lex! Your podcast is really very cool! I think it would be really great if you could invite a few of these people on your podcast: Sergey Levine, Szegedy, Mnih, David Silver, Noam Brown, Chelsea Finn, Andrej Karpathy, and a few other people.
Love what François brought up at 1:16:31. We've been working on an interface for controlling our RL model on news recommendations. It currently looks like this: https://ibb.co/Ht3cjXV
Next, we're working on a slider to control the degree to which we add "random" news instead of the RL model's picks. "Random" meaning a draw from a list of top 40 publishers, where we filter to have one article per source and try to avoid duplicates.
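The blending this comment describes (mix RL-ranked picks with random articles from top publishers, one per source, no duplicates) could be sketched roughly like this. Everything here is hypothetical: `blend_feed`, the dict fields, and the `mix_ratio` slider parameter are illustrative names, not the commenter's actual system:

```python
import random

def blend_feed(rl_articles, random_pool, mix_ratio, feed_size=10):
    """Blend RL-ranked articles with random picks from top publishers.

    mix_ratio in [0, 1]: fraction of the feed drawn from the random pool
    (the slider value). The random pool is filtered to one article per source.
    """
    # Keep only the first article seen from each publisher.
    seen_sources = set()
    one_per_source = []
    for article in random_pool:
        if article["source"] not in seen_sources:
            seen_sources.add(article["source"])
            one_per_source.append(article)

    # Draw the "random" share of the feed from the filtered pool.
    n_random = round(feed_size * mix_ratio)
    picks = random.sample(one_per_source, min(n_random, len(one_per_source)))

    # Fill the remaining slots from the RL ranking, skipping duplicates.
    chosen_urls = {a["url"] for a in picks}
    for article in rl_articles:
        if len(picks) >= feed_size:
            break
        if article["url"] not in chosen_urls:
            chosen_urls.add(article["url"])
            picks.append(article)
    return picks
```

With `mix_ratio = 0` the feed is pure RL output; with `mix_ratio = 1` it is entirely random picks, which matches the slider behavior described.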
The news feed is currently down; we're having issues with the API provider. Hopefully it will be back soon.
Impossible to define intelligence void of environment. So true.
Would love to hear you question Rupert Sheldrake. The Past exchange would be quite fascinating!
Thank you Lex for such a great podcast with Francois. Really refreshing perspectives.
From what I understand, his notion of "skill" is more of a measure of "adaptivity to a new environment"? Isn't that just redefining words? What about the skill of being able to improve yourself indefinitely? It's not trivial, or else everyone would be a champion of any field, since you would just need to practice again and again to become better… but isn't that the hardest part of anything? Being efficient at something within a set of constraints is one part of the equation, but certainly not the best predictor of "skill" imho.
Isn't intelligence all priors? Intelligence is a form of knowledge, not an attribute that can be separated out from the other properties of the system/agent.
Lex, thanks for connecting nodes by making these stars of knowledge accessible: creating ecosystems of knowledge. Thank you!!!!
"Both the man of science and the man of action live always at the edge of mystery, surrounded by it." J. Robert Oppenheimer
Lex is the Joe Rogan of AI podcasts
WTF, "science is consciousness"… Lex, what are you even doing? That makes zero sense.
I submit that, generally speaking, news feeds and passive content consumption pander to impulsivity. Should we be more deliberate in our procurement of information?
This is wonderful, as always. Thank you very much (Lex for this series of podcasts, which I believe will be historic and of paramount cultural importance, and François for Keras **and** the sanity in a field where so many people go crazy about extrapolations). Can anybody tell me how it is possible to get the "dead silence" when no one is speaking? Is that achieved with Audacity?
Can someone share the link of the paper at 17:25? Can't seem to find it.
Again a very interesting talk, thank you Lex!
I am so glad there are such thoughtful and intelligent younger people. There is hope for the future.
25:37 "For many people AI is not just a subfield of Computer Science. It is more like a belief system."
(1:47:00) I don't think Chollet is right about gendered facial discrimination being definitively socialized. "That DNA doesn't change fast enough to cover our divergence from chimpanzees." Pre-existing facial-discrimination hardware could have been part of the cause of that change, and a small change to that hardware might explain the difference. Conversely, and additionally, failing to identify mates comes at a tremendous cost.
This guy is the most grounded researcher I've seen.
They say don't read the comment section, but that's definitely not good advice for this chap. You're doing an awesome job, Lex. I miss the technical in-the-know things, as I have no education in the field, but I love thinking about the principles discussed in your podcasts. The Thousand Brains Theory one was my favourite. Keep going at it! Thanks for your hard work. Peace.
This episode, in particular, is SUPERB! Lex's podcast has been instrumental in introducing me to a number of very influential thinkers who have inspired me in my research. See my most recent paper, "Robots, AI and Cognitive Training in an Era of Mass Age-Related Cognitive Decline," accepted for publication by IEEE Access.
https://www.alistairavogan.com/new-blog/2020/1/8/robots-ai-and-cognitive-training-in-an-era-of-mass-age-related-cognitive-decline-a-systematic-review
This podcast could be called “interview of James Bond villains”
Blah blah blah: a perfect example of the results of Common Core. Inarticulate chatter, unfocused, no premises. They sound like Joe Rogan.
44:32 "making design decisions is about satisfying a set of constraints. But also trying to do so in the simplest way possible" – elegantly put.
Brilliant discussion, gave me a lot of food for thought, thanks!
control is good with a fake comfort of self choice, control is bad if the good control says so.
This guy is full of crap. Don't waste your time