Chris Hawkes
► SPONSOR ◄
Check now if your .TECH is available! Link – https://go.tech/chrishawkes
Use Code chris.tech at Checkout for a special 80% OFF on 1 & 5 Year .TECH Domains!
Linode Web Hosting ($20.00 CREDIT)
http://bit.ly/2HsnivM
🎓 If you’re learning web development, check out my latest courses on my website @ https://codehawke.com/ 🎓
— Why OpenAI GPT-3 Is Mostly Hype —
In this video I’m giving my thoughts on OpenAI’s new GPT-3 Natural Language Processing (NLP) API. Some of what it does is impressive, but I think we’re far from this type of product being able to take our jobs as programmers.
Article Cited:
https://openreview.net/pdf?id=GKTvAcb12b
Twitter – http://bit.ly/ChrisHawkesTwitter
LinkedIn – http://bit.ly/ChrisHawkesLinkedIn
GitHub – http://bit.ly/ChrisHawkesGitHub
To replace programmers with robots, clients will have to accurately describe what they want. We're safe.
You missed the point. When properly connected to an API, any generalist will be able to leverage and interconnect technologies at a speed like never before. Even if there is no innovation in the tech, everything that has an API nowadays can potentially be linked to a natural-language interface, and any idiot will be capable of prototyping at a speed never dreamed of.
In my opinion, this definitely won't replace programmers, but it will decrease demand for them.
Many programmers may be replaced by people who know how to accurately describe things to GPT-3 to perform the required task.
So demand for those people may increase compared to programmers, because they take less time to do the required task and they cost less.
It would be cool if machines could generate some algorithms for us, so we would be able to write code at a super high level.
Love your videos, so much value!
Loyal Subscriber from MOROCCO
It's marketing; a lot of the so-called influencers have investments in AI. This practice is well explained in the book Winners Take All.
Although GPT-3 is in theory only learning "form" not "content", the fact that it can fairly reliably do addition, which it was not specifically trained on, at a higher quality than you would expect from simply regurgitating from the dataset, indicates that it is doing some sort of learning beyond just learning statistical relations between words. Or perhaps learning statistical relationships between words at a high enough level necessitates some degree of understanding. Computerphile has a really good video on this: https://www.youtube.com/watch?v=_8yVOC4ciXc
I agree that a lot of machine learning is more statistics than actual learning, and seems far removed from what would be needed for general AI. The one exception to that, I think, is reinforcement learning, which has a very human feel to the way it learns. These models are also getting more and more general, such as playing a whole set of Atari games.
Having said that, these models still have a long way to go. For example, the efficiency in terms of skill divided by total games played is still orders of magnitude below what a human child can do. I don't think anyone, including OpenAI, is saying that general AI is right around the corner, but we're definitely on the way there. The brain is not magic.
Uh, it's not, lol.
ur voice is so calming
This video proves that in fact developers should be worried, and you are too 🙂 If product owners or stakeholders can mock up websites like the one shown in the video, then developers would mostly work on backends and APIs. Maybe not even that, in the age of AI-powered search engines producing UI tables. 🙂 I can totally see most small companies using generators like this and never talking to developers. Let's face it, times are changing. Focus on developing AI. If you can't fight them, join them.
application of distance learning for GPT and kids at 5:42
If you've managed as many synapses as a shrew and your project isn't yet intelligent, then you've already failed. I dunno where to draw the line, but I do know that all mammals are "intelligent," so when someone says "get to 100 trillion synapses" I kinda tune out. If you consider humans the very idea of intelligence, then in my opinion you don't understand what you are pursuing and consequently cannot achieve it.
Saying AI will replace developers is like saying AI will replace writers and painters and musicians and engineers in general. These things are extremely refined arts that require a lot of attention to detail and expertise and creativity and foresight.
GPT-3 is learning arithmetic by accident. That’s not hype.
Can you refer me to the video where Geoffrey Hinton says "Pump the brakes…"
Thanks.
"It's not AI, it's just pattern matching" … what do you think your brain does? That's also just pattern matching. Even Hinton recently co-authored a paper which supports the theory that the brain uses backprop for learning. IMO the missing piece is mostly in optimization (I would like to see some major online-learning improvements) rather than architectures or the overall direction of ML.
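The "learning by backprop" idea the comment refers to can be illustrated in a few lines. This is a toy sketch of gradient-based pattern learning on a single sigmoid neuron (hand-derived gradients, squared error), purely for intuition; it is not the mechanism proposed in Hinton's paper:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Learn the pattern "output 1 when input > 0.5" from four examples.
w, b, lr = 0.0, 0.0, 1.0
data = [(0.1, 0.0), (0.2, 0.0), (0.8, 1.0), (0.9, 1.0)]

for _ in range(2000):
    for x, target in data:
        y = sigmoid(w * x + b)
        # Backprop: gradient of squared error through the sigmoid.
        grad = (y - target) * y * (1.0 - y)
        w -= lr * grad * x
        b -= lr * grad

print(sigmoid(w * 0.9 + b) > 0.5)  # True
print(sigmoid(w * 0.1 + b) < 0.5)  # True
```

The same update rule, chained layer by layer, is all that "deep learning" adds; whether the brain does anything analogous is exactly the open question the cited paper explores.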
Hi Chris, thank you for your perspective on this topic. I wonder, is programming still worth learning? Is it still a good opportunity to earn good money? I've been learning the web stack (JS, React, CSS, HTML) for one year, and I'm wondering whether to change my career toward freelance direct-response copywriting or marketing (Facebook ads, Instagram ads, etc.). What do you think about it? You have long experience in tech and business; what is better for earning money nowadays?
GPT3 is dumber than the dumbest human. It cost 14 million dollars. GPT4 will be as dumb as a human. It will cost around 2 billion dollars. GPT4 will do your job better than you can. That is coming in the next 5 years.
Nokia said the same thing about smartphones before they lost their market. This is not just hype; this has to be stopped, or AI will take our jobs in the future. To me, people who invest their lives in this kind of research are dumb, because they are coding for the companies the thing that's going to replace them in the future.
This is kinda worrying though. When research finally gets this to one-shot learning with a much larger training set, theoretically you should be able to show a machine-learning model something it's never seen before and it could classify/identify/use it correctly. And in the real world, like a human, in some cases it should ask a question or two to clarify.
Y'all are just listing excuses and trying to deny GPT-3.
The argument that people cannot learn through form alone is not true. We learn things by abstracting information we gather about the world, be it written, visual, or tactile. Even if one way of getting information is not available (e.g., blindness), it is still possible to abstract information and understand things otherwise invisible. People didn't have to see black holes to understand exactly how they functioned; it was all abstracted through maths.
The bigger the model for AI learning, the less it relies on memorizing and the more it can abstract concepts and understand the rules behind complex problems. Take arithmetic, for example: when asked to solve 4-digit additions and subtractions, GPT-3 did much better than GPT-2 while not relying on identical previously seen calculations. This just shows that in order for abstraction to arise, the model simply needs to get bigger.
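A quick back-of-the-envelope sketch (Python, purely illustrative; the prompt format is my own assumption, not OpenAI's) of why memorizing 4-digit addition is implausible: the space of ordered pairs alone runs to tens of millions, far more than any text corpus could plausibly contain verbatim.

```python
import random

# 9000 four-digit numbers (1000-9999); ordered addition pairs
# alone number 9000**2 = 81,000,000.
PAIRS = 9000 ** 2

def make_probe(rng: random.Random) -> tuple[str, int]:
    """Build a random, almost certainly unseen 4-digit addition prompt."""
    a = rng.randint(1000, 9999)
    b = rng.randint(1000, 9999)
    return f"Q: What is {a} plus {b}? A:", a + b

rng = random.Random(42)
prompt, answer = make_probe(rng)
print(PAIRS)  # 81000000
print(prompt, answer)
```

The GPT-3 paper ran a similar check, searching the training data for the exact test calculations to rule out simple regurgitation.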
Currently, the complexity of GPT-3, which can be interpreted as the number of parameters it has (175 billion), is still too small compared to the human brain. What it has in compensation is processing speed, which allows the AI to gather experience orders of magnitude quicker than humans, letting it perform lower-scale abstraction in much less time. Once an AI model is as complex as, or more complex than, a human brain, its abstraction capabilities could be compared to a human's, with the added benefit of far faster processing.
I wouldn't discard the possibility of AI in the future replacing humans in almost every intellectual profession, be it artistic or academic. However, we are not that close. Maybe give it 2 to 3 decades.
After watching all of the other exciting videos this is upsetting…
As more content will be AI generated, how do we know what was AI made and do we have to exclude it from training data?
A lot of hype, I agree. Boiling it down to "just" better pattern matching by the octopus, it's so true 😉
Computer scientists have been writing pseudocode in their papers for a very long time; why not train a system to take a standardized version of this mathematical language and generate code? Now that would be something.
It is easy to trivialise what has been achieved when you don't understand the work done.
GPT-3 is only a sneak peek at what is possible and the curve hasn't flattened out yet.
It is only a matter of time before people learn to accept that AGI is already here.
Okay, but this is only the beginning.
I wanted to be a programmer, but this thing will evolve so much in the next 5 years that it will program everything by itself and upgrade itself, combining all the technologies and knowledge we have collected as humanity in the past 100 years. In short, AI is humanity's last invention.
It will be the end of many things when Starlink is completed … the fastest car with the best engine and fuel needs the Autobahn!
I still think this is going to be an incredible tool that will start a new era of accessibility, not a coding machine but some next level simplification of tasks by having the user describe what they are trying to do, and see if it can match that to a function in the program. Without this level of pattern matching it makes accessibility features so much harder to develop.
You sound very dismissive. Pattern matching IS intelligence. There isn't a magical point at which computers achieve 'intelligence', they already have. GPT-3 is just another step forward and an increase in intelligence. It's really not long before AI begins to surpass human brains, which are just wet computers anyway.
Wait until GPT 4
Brave man
Quantum Computing ?
Ah yes, the tech that was going to change the world, but ultimately was completely useless because of its design.
Is AI going to be the same? No. Because it isn't. AI finds practical use-cases everywhere, slowly but surely getting better.
But, yes. Today's ML algorithms are BAD, very bad. They can't imitate a human brain, hardly any kind of a brain.
Without a major change in the algorithms themselves, we aren't getting AGI anytime soon.
Artificial intelligence programs like deep learning neural networks may be able to beat humans at playing Go or chess, or doing arithmetic, or writing Navy Seal copypasta, but they will never be able to truly think for themselves, to have consciousness, to feel any of the richness and complexity of the world that we mere humans can feel. Mere, unenlightened humans might be impressed by the abilities of simple deep learning programs, but when looked at in a more holistic manner, it all adds up to… well, nothing. They still don’t exhibit any trace of consciousness. All of the available data support the notion that humans feel and experience the world differently than computers do. While a computer can beat a human master at chess or Go or some other game of structured rules, it will never be able to truly think outside of those rules, it will never be able to come up with its own new strategies on the fly, it will never be able to feel, to react, the way a human can. Artificial intelligence programs lack consciousness and self-awareness. They will never be able to have a sense of humor. They will never be able to appreciate art, or beauty, or love. They will never feel lonely. They will never have empathy for other people, for animals, for the environment. They will never enjoy music or fall in love, or cry at the drop of a hat. Merely by existing, mere, unenlightened humans are intellectually superior to computers, no matter how good our computers get at winning games like Go or Jeopardy. We don’t live by the rules of those games. Our minds are much, much bigger than that.
Written by GPT-3
5:45 So, what about blind people?
First, don't worry about programming jobs for now. (But there are more serious aspects to worry about.)
In some examples, you criticize that the results are not perfect. But that's beside the point; the point is that one piece of software can do many different things like this. Remember, it was not built to be able to write code. It just turned out it can. Nobody taught it. It's seriously spooky.
You do realize all these use cases were not made by OpenAI … the ideas were thrown together by random people invited to try the beta release, so your assessment of GPT-3 based on them is not valid; they're just opening musings on possible uses.