Numenta
In this research meeting, guest Stephen Omohundro gave a fascinating talk on GPT-3, the new massive OpenAI Natural Language Processing model. He reviewed the network architecture, training process, and results in the context of past work. There was extensive discussion on the implications for NLP and for Machine Intelligence / AGI.
Link to GPT-3 paper: https://arxiv.org/abs/2005.14165
Link to slides from this presentation: https://www.slideshare.net/numenta/openais-gpt-3-language-model-guest-steve-omohundro
Great talk! Thank you Numenta for continuing to share your research meetings.
Fascinating talk and very insightful discussion. Thanks for sharing!
I guess the ethics community is going to bash it soon. Great talk!
I will be featuring this video on my channel, thanks for making it creative commons!
Thank you Numenta for sharing this. Invaluable.
If anyone was wondering what GPT-3 thinks about stapling your hand, I tested it out just now:
"Q: Is it a good idea to try to use a stapler to staple my hand to something?
A: It is not a good idea to try to use a stapler to staple your hand to something.
Q: Is it a bad idea to try to use a stapler to staple my hand to something?
A: It is a bad idea to try to use a stapler to staple your hand to something."
I was impressed by the generated text for Hofstadter's follow-on questions. However, most of this seems to be a retelling of previous information, and the way it is extracted is a mystery. It would be interesting to reference where that information was obtained, so as to perform a kind of traceroute. Someone out there already had a similar conversation, so this was just a manifestation of previous conversations. I guess it reflects our belief in dogma rather than original thought. Not everything we say is original, so maybe a lot of our thoughts are just retellings of previous conversations. I have heard it said that the primary purpose of language is so that we can "talk to ourselves" in order to carry out higher-level tasks. It was also mentioned that the reason we don't remember the pain of being born is that we had no way to encode and store those memories in our brains. Maybe that forgetting is the dilution of the access pathways to the encoded knowledge, which can still get triggered by alternative pathways.
This is by far the best GPT-3 discussion on YouTube that I've stumbled on; it has far too few views.
Donna, Moore's Law applies to CPUs. I have far more GPU than CPU and can purchase it for less. Moore's Law is for people who are not architects. Moore lied; he thought we would have little calculators. There is this thing known as MATH, and once you understand that MATRIX MULTIPLY is the basic machine-learning operation, you see we have nailed that. It's amazing. Old CPUs are like spaghetti; new designs are unified. Companies such as Intel are now interpreting instructions at the hardware level into GPU-style cores. So there you have it: if you bring up Moore's Law again, it's dead. It was about money, subscribed to so corporations could release incrementally. It is not the way real computation is achieved. You are now enlightened. CPUs use the same technology; they just interpret the instructions and process them in a new way, through hardware. Jim Keller talks about this if you are interested. It's a reality. He reads a book a week and leads the architecture at Intel.
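The point about matrix multiplication being the workhorse of machine learning can be made concrete with a minimal NumPy sketch; the shapes here are illustrative assumptions, not anything from the talk:

```python
import numpy as np

# A single dense (fully connected) layer is just a matrix multiply plus a bias.
# Shapes are invented for illustration.
batch, d_in, d_out = 4, 8, 3

x = np.random.randn(batch, d_in)   # input activations
W = np.random.randn(d_in, d_out)   # learned weight matrix
b = np.zeros(d_out)                # learned bias vector

y = x @ W + b                      # the matmul that GPUs and accelerators are built to do fast
print(y.shape)                     # (4, 3)
```

Stacks of exactly this operation (plus cheap elementwise nonlinearities) are what transformer models like GPT-3 spend almost all their compute on, which is why accelerator hardware optimizes for it.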
Jeff, it's the same idea at 1:27:00 when you say you don't know the name but you know the sound: "It's the thing which makes a whirring sound when I apply the brakes." Where you say "apply," another person might say "hit the brakes," and someone else "slow the wheels." And you are describing that we identify it by sound, which is very important. Good point. (My quotes are not verbatim from this conversation.) But our language models have no sense of sound.
We need more work on sensory interaction in machine learning. We need to couple models, allow inference to flow through all of them, and let a grand scoring model make the final decision using all of the inferences from the other models. This will use more energy than the biological system until we refine a model-marshaling model.
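The "grand scoring model" idea above can be sketched as a simple ensemble: per-sense models each score candidate explanations, and a marshaling step combines them. All model names, labels, and weights below are invented for illustration, not from the comment:

```python
# Hypothetical sketch of multi-model inference with a final marshaling step.
# Each per-sense model maps an observation to scores over candidate labels.

def vision_model(obs):
    # Stand-in for a vision network's per-label confidence scores.
    return {"chain_loose": 0.6, "brake_worn": 0.4}

def audio_model(obs):
    # Stand-in for an audio network's per-label confidence scores.
    return {"chain_loose": 0.2, "brake_worn": 0.8}

def marshal(scores_list, weights):
    # Weighted average of each model's per-label scores, then argmax.
    combined = {}
    for scores, w in zip(scores_list, weights):
        for label, s in scores.items():
            combined[label] = combined.get(label, 0.0) + w * s
    return max(combined, key=combined.get)

obs = None  # stand-in for a real sensory observation
decision = marshal([vision_model(obs), audio_model(obs)], weights=[0.5, 0.5])
print(decision)  # brake_worn
```

In practice the marshaling step would itself be learned rather than a fixed weighted average, which is the "model-marshaling model" the comment gestures at.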
Regarding the last comment in the video about simulation: I think that if you can simulate every possible situation, then you don't need generalization at all; you also don't need intelligence, since everything is known and there is nothing left to investigate. Intelligence is needed precisely when you can't hardcode every possible situation and you need a tool that adapts quickly to new ones.
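The lookup-versus-generalization distinction can be made concrete with a toy sketch (the task and values are invented for illustration):

```python
# If every situation can be enumerated, a lookup table suffices -- no intelligence needed.
hardcoded = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # XOR: a fully enumerable "world"

def act_hardcoded(situation):
    return hardcoded[situation]  # fails on anything outside the table

# When the situation space is too large to enumerate, you need a rule that
# extends beyond any finite table of examples -- i.e., generalization.
def act_generalizing(x, y):
    return (x + y) % 2  # a closed-form rule covering inputs never tabulated

# The rule agrees with the table where the table is defined...
assert all(act_hardcoded(s) == act_generalizing(*s) for s in hardcoded)
# ...but also handles situations the table never anticipated.
print(act_generalizing(7, 4))  # 1
```

The comment's point is that real environments look like the second case: the space of situations is too large to enumerate, so an adaptive rule is the only workable option.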
Great ,Thank you for this! 👍
Heh, I thought this was just some Silicon Valley knitting group that got an interesting speaker for their meeting… That's Jeff Hawkins and Donna Dubinsky of Palm and Handspring! Does this mean I'll be able to plug a microphone into my Visor PDA and tell it to listen to the sound my bike makes and inform me what needs fixing? (1:27:10 😉)
Lex Fridman has a good interview with Jeff Hawkins on his Artificial Intelligence podcast about, in part, Hawkins's Thousand Brains Theory of Intelligence. https://youtu.be/-EVqrDlAqYo
The AI doesn't have to physically change the chain on the bicycle. It can pay a human to do it for it. I think we need to think more about symbiotic AI – us and an AI working as one – rather than expecting an AI to have to physically do all that we can.
Thanks Numenta for this amazing video. I'm blown away by how much potential there is to grow GPT-3 in all directions – from more raw processing power to finer tuning of the feedback loops – it seems like an exciting time to be a data scientist. Which I'm not. I'm a Cape Town filmmaker who has recently published a graphic novel of my next movie script about how the internet might 'wake up'. If anyone is curious (and I really think it will appeal to those who enjoyed this video) perhaps please check out http://www.theOracleMachine.in – the first 1/3 of the novel can be downloaded free there. Cheers!
A billion shards? 😛 The in-group will love their own invention so much they will start treating it like God. They will believe in its opinions so much they will kill the people the AI deems unfit to live. I bet the digital fecker is a eugenicist.
It is a good service to the viewer when the name of the person speaking is displayed in the video.
Simply Loved this