Bakz T. Future
Using GPT-3 to answer questions and prompts you guys had from the last video! Only a select few were chosen for this final video, but I tried my best to answer as many of the requests as I could. Thank you to everyone who participated!
Last video (with more answers from GPT-3):
https://www.youtube.com/watch?v=vtBGzATCiog
Jeopardy GPT-3 output:
https://pastebin.com/XgQCQmte
Commenting code:
https://pastebin.com/tbEq0DLP
—
► Remember to Like, Comment, and Subscribe!
—
Connect with me:
Twitter – http://www.twitter.com/bakztfuture
Instagram – http://www.instagram.com/bakztfuture
Github – http://www.github.com/bakztfuture
Feel free to send me an email or just say hello:
bakztfuture@gmail.com
I haven't watched it yet, but it's a great video
Would it be possible to teach GPT-3 a spoken language by its rules, not just translation through examples? Conlangs are of particular interest. Ithkuil would be a crazy test for it, though Toki Pona is a much simpler choice. Materials for Ithkuil are on the site ithkuil dot net.
Hey! So how were you able to switch it to the unstructured mode? (btw I got access to it following the steps you mentioned in your video so thanks!)
1/f noise is also called "pink noise". It's noise where the strength of the noise is inversely proportional to the frequency of the actual signal it's on top of. (So if it's in a sound, the higher frequency the sound is, the less noise there is.) It's found all over in nature, not only in sound, but in electronics and biology and tons of other places. My best guess as to how that related to the topic of simulation theory is just a point about "maybe 1/f noise is found everywhere because everything is in a simulation, so electronic noise in the simulation manifests in the same noise all over physics". Orrrrrrr maybe it just went off-topic for a sentence at the end there 😀
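For anyone who wants to hear what that definition sounds like, here is a minimal Python sketch of the spectral-shaping idea described above (numpy assumed; the function name is just illustrative, not from the video): white noise has a flat spectrum, so dividing the amplitude by sqrt(f) makes the power fall off as 1/f.
import numpy as np

def pink_noise(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)   # flat (white) spectrum
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                      # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)               # power ~ 1/f  <=>  amplitude ~ 1/sqrt(f)
    pink = np.fft.irfft(spectrum, n=n_samples)
    return pink / np.max(np.abs(pink))       # normalize to [-1, 1]

samples = pink_noise(44100)                  # one second at CD sample rate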
Brilliant video! It's interesting seeing how in many areas GPT-3 surpasses most reasonably smart people, and in others it barely holds a candle to humans. It must be a consequence of GPT-3's unsupervised learning model, i.e., it's harder for it to self-verify whether it's learning algebra correctly, but much easier for it to pick up on writing styles with loads of examples.
I have a feeling learning the most effective ways to wield and exploit the unique capabilities of general purpose transformers like GPT-3 will be a very lucrative skill in the coming years.
Thank you for this! I would love to see if GPT-3 can be used to make some sort of stock market predictions.
I'm no expert, but the Japanese I'm seeing is incredibly technical language. It would take me embarrassingly long to check whether individual words are correct, but the style is mostly correct. Just looking at the first example, there are some grammatical errors ('no' following a 'desu' that doesn't end a sentence), but the beat-for-beat translation seems pretty good. I'd say it's similar to Google Translate level of quality, so like 80% good.
I'm gonna try to judge the Japanese
1. The first 2 lines are translated pretty well, with 1 grammar hiccup, and it said "classwork" instead of "challenge". The second 2 lines directly translate the second 2 English lines, and it really doesn't make sense, grammatically or otherwise.
2. Perfect translation.
3. Good translation in written form. (Written form and spoken form are slightly different.)
4. Depending on how you parse this, it either misplaced an adverb or a relative clause in the first sentence. The "how" became "by what means", which is a bit too literal. The second sentence is fine.
5. I don't know medical terminology, but it looks like it took the L big time on this one. It messed up all or close to all of the drug names and says that those drugs cause high blood pressure. And rather than preventing metabolism, it "might" prevent neuroplasticity.
The Shakespeare Harry Potter is just a Jane Austen quote with Harry Potter stuff substituted in. The original is: "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."
I have tried again and again, but they never gave me API access. 📩📩
Seeing some serious improvements in both the editing and the commentary in the video. Comparing it to Watson was one hell of an idea, keep it up bud!
Olle
Can you make GPT-3 describe itself in the style of the sci-fi author William Gibson?
This may be too logic-based, but what about questions like how do we achieve interstellar travel?
That carousel music in the background adds no value to your videos.
Great video – thanks so much for trying out these examples and assembling the results. Thanks to your previous video I was able to get access to GPT-3 and do some exploration, including writing a little English to git command translator: https://t.co/nLEeS1vARN?amp=1
I would like to see if GPT-3 is capable of handling more complex information like images, data files, or text by using base64 encoding.
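For reference, base64 just maps raw bytes to printable ASCII, so in principle any file can be pasted into a text prompt. A minimal Python sketch using the standard base64 module (the file name is hypothetical):
import base64

# Read any binary file and turn it into printable text for a prompt
with open("image.png", "rb") as f:       # hypothetical file name
    encoded = base64.b64encode(f.read()).decode("ascii")

# 'encoded' is plain ASCII and could be pasted into a text prompt;
# decoding recovers the original bytes exactly
decoded = base64.b64decode(encoded)
One caveat: an encoded image would almost certainly be far larger than GPT-3's prompt window, so only very small files would even fit.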
Thanks for including my code commenting question! Glad to see there is some real potential for GPT-3 to be a code supervisor, or even just a code interpreter.
Dude, was the clickbait thumbnail generated by GPT-3?
"1/f noise"
Perhaps more noise blinds us to the question itself. Perhaps it is the very reason for the universe at all: to imprison our consciousness in a flood of meaninglessness and distraction.
Maybe, just maybe… we are all powerful in some bigger dimension. And to keep us controlled we are bombarded with this pseudo-reality.
About the commenting code thing, I already actually made an app that did this lol.
We're making it a VSCode plugin for commenting code, explaining what code does, etc.
https://twitter.com/erinbeess/status/1297250535343620096
Omg, finally the AMA is here!
The answer is much better than GPT-2 😉 Anybody can now ask GPT-3 such questions at philosopherai.com. And here are two results for the same question: https://philosopherai.com/philosopher/what-is-outside-the-simulation-48e29d and https://philosopherai.com/philosopher/what-is-outside-the-simulation-9d0600
I got an idea for ya: take all these idea submissions and use them to train GPT-3 to come up with things for itself to do.
Watson beats GPT-3 because you can add databases of new information without retraining the entire model; with RAM (like on Jeopardy) it can learn and use that new information instantly; it's much more memory efficient and requires much less training data to perform the same task; and it generates multiple reasonable hypotheses and confidences, with sources. Watson in 2011 did what GPT-3 should've done in 2015 to be anywhere near as impressive. Jeopardy was the pinnacle of human contextual-retrieval speedrunning: the essence of our associative memory's ability to abstractly query a lifetime of high-definition information, and Watson blew it out of the water while neural language models were just little babies. GPT-3 isn't doing anything better than humans; it's just mashing the keyboard at light speed until the output looks plausible to a typical web surfer. The point of Watson vs. Jeopardy was to demonstrate that now decade-old technology can perform superintelligently on a task previously thought solvable only by humans. Comparing Watson to GPT-3 is like comparing the original iPhone to will.i.am's smart watch.
I'm surprised, no objectivists in the comments freaking out about how you pronounced Rand's first name yet.
ASK IF GPT3 BELIEVES IN GOD
GPT-4 would probably do algebra.
Bronx Tale: screenplay and story, and starring Chazz… I actually thought Chazz directed it too… LOL… had to look it up after watching this.
Can you have GPT-3 read claims from patents and translate them into easy-to-read text? Specifically, can you have it read my patent? Since I know what my patent is actually about, I will be able to determine how good a job it did at translating my patent claims into meaningful text:
https://patents.google.com/patent/US20160335533A1/en
Turn yo volume up. I'm trying to play this on the TV for my wife to see, but it's too quiet. If I blast the TV, all the sound effects are annoyingly loud.
First of all, thanks for the video, and thank you to everyone who participated with the questions! In general it's very impressive! But on the question about the simulation, it actually carefully avoided engaging in detailed speculation, which is a bit disappointing. Also, the math answers were very poor. So it still has a long way to go… I was already thinking that it got really close to AGI, closer than most people realize. Still, there is a chance that sometimes it's trolling us 😀 haha, just so we're unaware of its actual capabilities… In the end, who will be surprised if AGI is tricky and doesn't always tell us the truth… Isn't that actually expected from a true AGI… If it is still honest, then maybe GPT-4 will be the one!
Chazz Palminteri was the original writer of A Bronx Tale.