Lex Clips
Full episode with François Chollet (Aug 2020): https://www.youtube.com/watch?v=PUAdj3w3wO4
Clips channel (Lex Clips): https://www.youtube.com/lexclips
Main channel (Lex Fridman): https://www.youtube.com/lexfridman
(more links below)
Podcast full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Podcasts clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
Podcast website:
https://lexfridman.com/ai
Podcast on Apple Podcasts (iTunes):
https://apple.co/2lwqZIr
Podcast on Spotify:
https://spoti.fi/2nEwCF8
Podcast RSS:
https://lexfridman.com/category/ai/feed/
François Chollet is an AI researcher at Google and creator of Keras.
Subscribe to this YouTube channel or connect on:
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman
awesome
Always Interesting
What we are doing now is more "simulated intelligence": algorithms that simulate decisions made by an intelligence. We won't have true artificial intelligence until the algorithm can tell us "no" when we ask it to do something.
I'm pretty sure GPT-3 only saw a little less than half of its training data. Data-wise, I think they're still good for another 100x scale-up (10-20 trillion) if they continue with the GPT series. There's also the option of going multimodal with image/video data alongside text, which OpenAI has been rumored to be pursuing. Also, I'm not sure why he's so confident that scaling won't be enough for progress but human hand-crafted reasoning programs would be, when scaling has been beating out human-knowledge methods for a decade now. Maybe we should wait to see scaling empirically stop making progress before it's time to ponder alternative paradigms, especially paradigms that don't even have a good track record to begin with.
✨
Everyone in the comments was replaced by GPT-3 last year and nobody noticed.
You’re right about how little the public recognizes it. Something like a bell curve: it’s still just very few, very curious people. GPT-n will make almost all research obsolete.
I love the honest way you correct yourself so much. You’re such a wonderful person/role model. Thank you
Do you guys think OpenAI will release its API to the general public before 2021?
I thought they were talking about Grand Theft Auto 3
Pfft, 100x. They didn't even show it video. François, I love you, but there's a ton more data already assembled in this world of ours, and even more that we could generate.
Do your model selection with algorithmic information rather than Shannon information, and reasoning will fall out.
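To make this comment's idea concrete: algorithmic (Kolmogorov) complexity is uncomputable, so a common practical stand-in is a two-part minimum description length (MDL) score, bits to encode the model plus bits to encode the data given the model. A minimal sketch of that idea (all details below, including the Gaussian coding cost and the BIC-style parameter penalty, are my own illustrative assumptions, not from the video): an MDL score recovers the true low-degree polynomial, whereas raw fit error alone would always favor the most complex model.

```python
# Sketch: model selection by a two-part MDL score, a computable proxy
# for "algorithmic information". Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 2 * x**3 - x + rng.normal(0, 0.1, size=x.size)  # true model: cubic

def mdl_score(degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n = x.size
    # Data cost: bits to encode residuals under a Gaussian noise model.
    data_bits = 0.5 * n * np.log2(np.mean(resid**2) + 1e-12)
    # Model cost: bits to encode the parameters (BIC-style penalty).
    model_bits = 0.5 * (degree + 1) * np.log2(n)
    return data_bits + model_bits

best = min(range(10), key=mdl_score)
print(best)  # MDL picks a low degree (the true cubic); raw MSE would pick 9
```

The model-cost term is what distinguishes this from pure Shannon-style goodness of fit: a degree-9 polynomial compresses the residuals slightly better, but the extra bits needed to describe its parameters outweigh the gain.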
Having played around with AIDungeon, I disagree that GPT-3 is incapable of reasoning in novel scenarios. You can turn on the “Dragon” model for AIDungeon, which uses GPT-3, and try it for yourself. If you set the dialogue context right for the AI, it can reason quite well about certain scenarios.
What if GPT-3 makes non-factual statements on purpose? Humans lie and say nonsense too; why do you think it's a problem for the bot to lie? I think the goal should be true sentience, not a fact machine, because we already have fact machines.
Do you know that if you don't teach children language when they're young, after a certain age they'll never be able to learn it? I don't think there's such a thing as "true reasoning": if GPT-3 looks like it reasoned, then it did reason.
VAE-GANs are the way to go to generate that knowledge latent space he’s mentioning.
The bottleneck in datasets will be solved by Neuralink. When AI and human minds are able to directly connect, the AI will be able to use each human's brain as a robust dataset. 7 billion datasets, each more complex than the entire internet, should keep it busy, for a week or two anyway.