DecisionForest
Let’s talk about irrational excitement in AI. And OpenAI’s GPT-3 fits the bill perfectly.
If you don’t know what GPT-3 is, it’s a large neural network developed by OpenAI with 175 billion weights. That’s huge, right? It’s the biggest neural network so far, so surely it can do anything from creating apps to holding a nice conversation, and pretty much anything else you can think of. But the thing is, nobody knows how it works: what does it learn, and how do you evaluate those 175 billion weights?
🎁 1 MONTH FREE TRIAL! Financial and Alternative Datasets for today’s Data Analysts & Scientists:
https://www.decisionforest.com/accounts/signup/
📚 RECOMMENDED DATA SCIENCE BOOKS:
https://www.amazon.com/shop/decisionforest
✅ Subscribe and support us:
https://www.youtube.com/decisionforest?sub_confirmation=1
💻 Data Science resources I strongly recommend:
https://radufotolescu.com/#resources
🌐 Let’s connect:
https://radufotolescu.com/#contact
–
At DecisionForest we serve both retail and institutional investors by providing them with the data necessary to make better decisions:
https://www.decisionforest.com
#DecisionForest
The current state of AI sounds like it would do well in a political discussion:
it completely ignores the core points of the matter at hand, focusing instead on making the wittiest possible remark about the last sentence it heard.
The problem is that AI lacks any sort of context, and it's impossible for it to build one, since it has no way of storing information that could change its weights and biases (no real-time re-training or fine-tuning).
It also has no impulses, instincts or needs, so it never drives the conversation where it wants; it's merely led by the last sentence it hears.
I totally agree on the pointlessness of the brute-force approach. Then again, I don't even know whether the point was to make a human-like intelligence in the first place. Maybe they just wanted to make something commercially viable, a useful tool rather than a real AGI.
Anyway cheers and thanks for the video.
The simple lessons about physical reality learned by a 3-year-old are not yet of any commercial value. GPT-3 uses statistics to string together high-probability word associations. That is not the same as understanding that tipping over a full glass of milk results in a mess to clean up. Those words might often be found together, but they represent physical relationships that require some other type of learning than just word association.
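To make the "stringing together high-probability word associations" point concrete, here is a minimal sketch of a toy bigram predictor: it just counts which word follows which in a tiny made-up corpus and picks the most frequent follower. This is a hypothetical illustration of the statistical idea only; GPT-3 itself is a transformer trained over subword tokens, not a bigram counter, and the corpus below is invented for the example.

```python
# Toy bigram "language model": predicts the next word purely from
# co-occurrence counts, with no understanding of spilled milk or messes.
from collections import Counter, defaultdict

corpus = (
    "the glass of milk tipped over and the milk made a mess "
    "the glass of milk is full the mess is hard to clean up"
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("glass"))  # "of" - the only word seen after "glass"
```

The model happily continues "glass" with "of" because the pair is frequent, not because it knows anything about glasses — which is exactly the distinction the comment above is drawing.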
It's still incredibly impressive
If you took parts of your brain and separated their cognitive functions, you'd get a similar kind of thing to machine learning systems. We're building parts of an AGI, but they haven't been put together.
Very interesting. I’m new to AI, but what I like about OpenAI is that they consistently treat the NN as if it were a brain. Yes, we don’t understand all those weights, but do we understand the human brain? All the connections between the neurons? In evolution, brain development went from a primitive virus or bacterium, through worms, reptiles and mammals, to humans, just to mention a few pieces of this long chain. That’s basically what adding more weights to a NN model means: simple scaling. The funny part is that with these billions of parameters they’re probably not even close to the human brain. 😜
Btw Radu, can you make a video on how you research and learn about what’s new in the DS field? So much is happening, so how do you stay up to date? Reading particular sites, monitoring arXiv papers, etc.? You know what I mean.