Videos

More GPT-2, the 'writer' of Unicorn AI – Computerphile



Computerphile

More examples of how GPT-2 pays attention to things, with Rob Miles.

https://www.facebook.com/computerphile
https://twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: https://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com


43 thoughts on “More GPT-2, the 'writer' of Unicorn AI – Computerphile”
  1. The way we use language has so many assumptions that communication seems to consist almost entirely of approximate meaning.

  2. "Has anyone made any of those recipes?"

    I can't think of a better way to get at the heart of language processing and the disconnect from the real world.

    Also yeah, "because it was the style at the time" lol

  3. The most surprising thing is that it shows promise of becoming better than this with a larger dataset. You know this is going to happen at some point!

  4. Can this thing do subtle dialect stuff? Like for example, if the prompt says “torch” or “lift” instead of “flashlight” or “elevator,” will it tend to use more British terms for things?

  5. Why is everybody talking about teaspoons, and nobody noticed the "pinch of sea" in the recipe? That is hilarious.
    Also, as a non-native speaker, I was surprised by the word "tablespoon". In my language, it would translate to teaspoon and soup spoon, which makes a lot more sense, imho.

  6. The chicken didn't cross the road because it was too…
    I like how they came up with examples of how the word IT can refer to the road, the chicken, neither of them, or both of them. All four possibilities.

  7. The chicken didn't cross the road because it was too ambiguous. After all, what is the sound of two hands smelling? Ducks and hammers, I tell you!

  8. One version of Microsoft Word (2007) had a summary feature well over ten years ago; it was uncannily accurate and useful. You could use it to lose words if your essay was too long, and it generated short summaries of an essay really well. Unfortunately it wouldn't do a 120% summary and pad your essay out a bit. Microsoft quietly dropped it for Word 2010.

  9. This will have a huge impact. But I'm kinda worried about "pseudoscience bots". This will actually just crash many minds :D
    Really impressive work, and thanks for showing :)

  10. There's more ambiguity there than just that. Did the chicken cross the road or did it not?
    It could be interpreted as stating that it didn't, followed by why it didn't – it could also be negating just one specific reason for crossing instead.

    "I didn't consider this sentence just because 'it' is ambiguous" – it is, but it's not the only thing that is.

  11. The problem is that neural networks do not encode knowledge, they encode intuition. Intuiting what words come next is not sufficient to encode knowledge; it's still just reasoning about words to assemble sentences. The only knowledge the network obtains is the patterns exhibited in its training data: it knows the patterns mapping input to output, but this isn't knowledge about the content of the sentence. It's just advanced morphology, taking into account the semantics in the training data expressed as the relationship between words. The system can express a sentence which sounds like it knows something… but it doesn't know it. It can just produce output that suggests it knows, but only in the output domain that was trained on examples from people who knew things. The problem is that there is no internal representation of the concepts in the texts, which could then be fed through a network mapping that internal representation to an expression.

    There should be a way to make a neural network, or other system, that takes human text as input and generates some compact representation, a distillation of the concepts conveyed… and this would then be fed through one of several expression networks which map that internal representation to an expression in an output domain. Further, there should be many such networks for different input domains that generate the internal representation. That internal representation could be stored… and THEN the system could be said to have understanding. If it can express the concept in multiple ways, correlate input instances to the internal representation, and correlate that internal representation to many outputs, that's as close as a neural network system could get to understanding something, to having KNOWLEDGE of it (see the sketch below).
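
    What this comment describes is, roughly, a shared-latent-space encoder/decoder architecture. Below is a minimal, untrained numpy sketch of that idea – the layer sizes, domain names, and the `linear` helper are all illustrative assumptions, not anything shown in the video:

    ```python
    # Several encoders map different input domains into one shared latent
    # space; several decoders map that latent vector out to different output
    # domains. Weights are random here; a real system would learn them.
    import numpy as np

    rng = np.random.default_rng(0)

    def linear(in_dim, out_dim):
        # One untrained tanh layer (hypothetical helper for this sketch).
        W = rng.standard_normal((in_dim, out_dim)) * 0.1
        b = np.zeros(out_dim)
        return lambda x: np.tanh(x @ W + b)

    LATENT = 32  # size of the shared "concept" vector

    # One encoder per input domain (feature sizes are arbitrary choices).
    encoders = {
        "text":  linear(128, LATENT),
        "image": linear(256, LATENT),
    }

    # One decoder per output domain.
    decoders = {
        "summary": linear(LATENT, 64),
        "caption": linear(LATENT, 64),
    }

    # Any input domain -> shared latent -> any output domain.
    text_features = rng.standard_normal(128)
    z = encoders["text"](text_features)   # the stored "internal representation"
    summary_vec = decoders["summary"](z)  # one possible expression of it
    caption_vec = decoders["caption"](z)  # another expression of the same concept
    print(z.shape, summary_vec.shape, caption_vec.shape)
    ```

    The point of the sketch is only the wiring: because every encoder targets the same latent space, one representation can be expressed through many decoders, which is the "express the concept in multiple ways" property the comment asks for.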

  12. For the summary, my thought was to act like you had started a book report, but the solution they used makes way more sense for the dataset they had.

  13. Probably one thing is missing: AI can beat humans at chess, AI can beat humans at SC2, AI can beat humans at writing text, but can the same AI do all of this at once?
    I was trained to do A but can easily do B; an AI can be trained to do A but can't do B in any way. That's a big difference.

