Computerphile
More examples of how GPT-2 pays attention to things. Rob Miles
https://www.facebook.com/computerphile
https://twitter.com/computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: https://bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com
sees JoJo
yep this dataset is scraped from reddit
The way we use language has so many assumptions that communication seems like it is almost entirely comprised of approximate meaning.
no white onions though… because of the war…
thumbs up for the simpsons quote.
…which is the style at this time
I figured out the "tl;dr" trick just before he revealed the answer. Man, is that brilliant!
"Has anyone made any of those recipes?"
I can't think of a better way to get at the heart of language processing and the disconnect to the real world.
Also yeah, "because it was the style at the time" lol
So there isn't a specific "onion on my belt" protocol, that's just something that statistically emerges.
hahahah truth… journalists…..and truth.
ok
T usually denotes the unit tesla, i.e. the unit of magnetic flux density of a magnetic field.
Now the recipe makes total sense.
"Writing coherent and plausible news prose apparently also doesn't require general intelligence."
lol
The most surprising thing is that it shows promise of becoming better than this with a larger dataset. You know this is going to happen at some point!
Can this thing do subtle dialect stuff? Like for example, if the prompt says “torch” or “lift” instead of “flashlight” or “elevator,” will it tend to use more British terms for things?
What kind of computing power does this model need to generate an answer?
9:37 – "roads are fearless" x'D
Why is everybody talking about teaspoons, and nobody noticed the "pinch of sea" in the recipe? That is hilarious.
Also, as a non-native speaker, I was surprised by the word "tablespoon". In my language, it would translate to teaspoon and soupspoon, which makes a lot more sense, imho.
The chicken didnt cross the road because it was too…
I like how they came up with examples of how the word IT can refer to the road, the chicken, neither of them, or both of them. All four possibilities.
That model has a parameter for about every 5 words it read.
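A rough back-of-the-envelope check of that remark, in Python; GPT-2's largest model has about 1.5 billion parameters, while the ~8 billion word estimate for the WebText training corpus is an assumption on my part:
```python
# Back-of-the-envelope check of the "one parameter per ~5 words" remark.
# 1.5 billion parameters is GPT-2's published size; the corpus word count
# below is only an estimate.
parameters = 1.5e9
training_words = 8e9
print(training_words / parameters)  # ~5.3 words per parameter
```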
I wrote"To make meth all you need is: " and it gave me the recepie
The chicken didn't cross the road because it was too ambiguous. After all, what is the sound of two hands smelling? Ducks and hammers, I tell you!
Of course tl;dr works. It was trained off reddit after all.
Another thing which should work is just giving it the last sentence it said.
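For anyone who wants to try the "tl;dr" trick, here is a minimal sketch assuming the Hugging Face transformers library and the public "gpt2" checkpoint (not the exact setup shown in the video); appending "TL;DR:" to the prompt nudges the model toward a summary-style continuation:
```python
# Minimal sketch of the "TL;DR:" prompting trick, assuming the Hugging Face
# `transformers` library and the public "gpt2" checkpoint.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

article = "..."  # paste any paragraph you want summarised here
prompt = article + "\nTL;DR:"  # the suffix cues a Reddit-style summary

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
# Print only the newly generated continuation, not the prompt itself.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```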
TL;DR Smart Ethan
My weak human brain picked up a quote from Grampa Simpson – 'onion on my belt, which was the style at the time'
"Roads are fearless" – Robert Miles 2019
Colorless green ideas sleep furiously.
c h i c k e n
One version of Microsoft Word (2007) had a summary feature well over ten years ago; it was uncannily accurate and useful. You could use it to lose words if your essay was too long, and it generated short summaries of an essay remarkably well. Unfortunately it wouldn't do a 120% summary and pad your essay out a bit. Microsoft quietly dropped it for Word 2010.
I initially understood it as the chicken being too wide……
📙💯
Three tesla of heavy cream
This will have a huge impact. But I'm kinda worried about "pseudoscience bots". This will actually just crash many minds :D
Really impressive work and thanks for showing :)
TL;DR: GPT-2 is nowhere near human level, and it definitely didn't become a bot on YouTube
There's more ambiguity there. Did the chicken cross the road or did it not?
It could be interpreted as stating that it didn't, followed by why it didn't – or it could be negating just one specific reason for doing so instead.
"I didn't consider this sentence just because 'it' is ambiguous" – it is, but it's not the only thing that is.
3:30 – "pinch of sea", yeah, exactly what was missing in the recipe
The problem is that neural networks do not encode knowledge, they encode intuition. Intuiting what words come next is not sufficient to encode knowledge; it's still just reasoning about words to assemble sentences. The only knowledge the network obtains is the patterns exhibited in its training data. It knows the patterns mapping input to output, but this isn't knowledge about the content of the sentence; it's just advanced morphology, taking into account the semantics in the training data expressed as the relationships between words. The system can express a sentence which sounds like it knows something… but it doesn't know it… it can just produce output that suggests it knows, but only in the output domain that was trained on examples from people who knew things. The problem is that there is no internal representation of the concepts in the text which can then be fed through a network mapping that internal representation to an expression.
There should be a way to make a neural network, or other system, that takes human text as input and generates some compact representation, a distillation of the concepts conveyed… and then this is fed through one of several expression networks which map that internal representation to an expression in an output domain. Further, there should be many such networks for different input domains that generate the internal representation… That internal representation may be stored… and THEN it could be said to have understanding. If it can express the concept in multiple ways, and it can correlate input instances to the internal representation, and correlate that internal representation to many outputs… that's as close as a neural network system could get to understanding something, to having KNOWLEDGE of it.
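A minimal sketch of that "many encoders → one shared compact representation → many decoders" idea, assuming PyTorch; every class name and dimension here is hypothetical and only illustrates the shape of the proposal, not a working understanding system:
```python
# Hypothetical sketch: encoders map an input domain to a shared "concept"
# vector; several decoders map that vector back out to different domains.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Maps a sequence of token ids to a compact 'concept' vector."""
    def __init__(self, vocab_size=10000, embed_dim=128, concept_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.to_concept = nn.Linear(embed_dim, concept_dim)

    def forward(self, token_ids):
        pooled = self.embed(token_ids).mean(dim=1)    # crude pooling over tokens
        return torch.tanh(self.to_concept(pooled))    # shared internal representation

class DomainDecoder(nn.Module):
    """One of several decoders mapping the shared representation to an output domain."""
    def __init__(self, concept_dim=64, output_dim=10000):
        super().__init__()
        self.out = nn.Linear(concept_dim, output_dim)

    def forward(self, concept):
        return self.out(concept)    # e.g. logits over words, or another modality

encoder = TextEncoder()
summary_decoder = DomainDecoder()
paraphrase_decoder = DomainDecoder()

tokens = torch.randint(0, 10000, (1, 12))    # stand-in for a tokenised sentence
concept = encoder(tokens)                    # the storable internal representation
summary_logits = summary_decoder(concept)
paraphrase_logits = paraphrase_decoder(concept)
```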
And now we proceed to generate 10x more fake news than actual news…. wait maybe that's already happened…
in 10 years we will discover that you don't need to be intelligent to be human and in fact, most humans are not.
I bet if they trained it on student answers to English class quizzes it would output high quality, high understanding answers
"pinch of sea"
Uh, ok… I'll be right back honey, going to the beach…
For the summary, my thought was to act like you had started a book report, but the solution they used makes way more sense for the dataset they had.
I think it was the chicken that was too wide.
Oh come on say it! The chicken didn't cross the road because it was too chicken!
Probably one thing is missing: AI can beat a human at chess, AI can beat a human at SC2, AI can beat a human at writing text, but can the same AI do all of this at once?
I was trained to do A but I can easily do B; an AI can be trained to do A but can't do B in any way. That is a big difference.