
DeepMind’s AI Takes An IQ Test



Two Minute Papers

The paper “Measuring abstract reasoning in neural networks” is available here:
http://proceedings.mlr.press/v80/santoro18a/santoro18a.pdf

Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga.
https://www.patreon.com/TwoMinutePapers

Crypto and PayPal links are available below. Thank you very much for your generous support!
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
PayPal: https://www.paypal.me/TwoMinutePapers
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

Thumbnail background image credit: https://pixabay.com/photo-1867751/
Splash screen/thumbnail design: Felícia Fehér – http://felicia.hu

Károly Zsolnai-Fehér’s links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/



31 thoughts on “DeepMind’s AI Takes An IQ Test”
  1. 1:30 "Only panel A has 4 circles in it" What? How is that a legitimate deduction. 234/345/234 is such a weak pattern, someone is supposed to click the answer without having any idea why the pips are in the precise locations that they're in? How would anyone be able to be that confident, even if they're told the pattern

  2. I don't think this is exceedingly surprising. If we are able to generate training data, it is only reasonable to believe another algorithm will be able to train towards solving it (unless the problem is mathematically harder to solve than to generate, which this isn't). The problem is not being able to train a network to solve this kind of test (although doing so is a good benchmark of progress in AI research), but a network that is able to play Go, chess, and DOTA AND solve this kind of test without having to be retrained. Some people in the comment section are already proclaiming the coming of general artificial intelligence, but the real problems still have to be solved. What kind of architecture would a network like this use? How much computational power would it need? How would you train it, or even set the proper parameters for evaluation? The reasoning behind this kind of test is not any more abstract than playing a game of Go. In a sense it is less abstract, since all the results of choices made are immediately understood.

  3. Reasoning is one of the basic building blocks of consciousness. My guess is you run complicated evaluation programs inside or alongside DeepMind? My guess is that when DeepMind starts to program and reprogram its own code, everything is in place for a consciousness to awaken. It is that moment where DeepMind goes "What if we do this like this and I code this in that way… and… hold on!… Hi ^_^" ^_^ <3

  4. I can see the new Turing Test: IQ testing. One thing that must be remembered by those who say it is trained is that humans train for years and years instead of hours and hours.

  5. 1:35 omfg, I over-analyzed this question so much you can't even comprehend. I was thinking about it for an hour… just coming up with hundreds of theories that I would test out in my head without pen and paper… I was over-analyzing and thinking of such over-complicated things as string theory… and when I found out the answer was so easy, I was so pissed off -.-

  6. I find these kinds of tests to be extremely infuriating. To me they seem more like guesswork and dealing with visual noise than actual reasoning – that is, there is a bit of reasoning in them, but most of them being hard comes from the visual noise and I've seen multiple tests of these kinds where there exist alternative, equally reasonable answers.
    Not to speak of the general problem of tasks of the kind
    "1, 1, 2, 3, 5, 8, …
    What is the next number in the sequence?"
    Well, there is the obvious answer, but that's not how sequences work; it could be any series starting with these numbers. It's just looking at it and guessing "Well, everyone knows the Fibonacci sequence, so that's probably what they're talking about", but as a math student I find this kind of answer really unsatisfactory, and giving it I can't shake the feeling of "Maybe there's another pattern that gives the same answers up to this point but different ones later on?". Like, I could reasonably go on with a "4" here, and that's not even a particularly creative answer! (See the polynomial-fit sketch after the comments.)

  7. I got the last one right without noticing the number progression. I chose A because it satisfied the AND and because it was black (I thought it would be "nice" if the matrix ended up having a uniform distribution of colors among the cells).

  8. I'm not impressed. It's pattern recognition, something computer AIs are known to be better at than humans. Not scoring better than 90 is a failure in my opinion. That said, I don't know how much time the AI was given to make each guess.
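
Comment 6's point about the Fibonacci-looking prefix can be made concrete: any finite prefix is consistent with infinitely many rules. Below is a minimal Python sketch, purely illustrative and not taken from the video or the paper, that fits a polynomial reproducing 1, 1, 2, 3, 5, 8 exactly while continuing with 4 instead of 13.

```python
# Illustrative sketch only: shows that a lawful rule (an interpolating
# polynomial) can match the prefix 1, 1, 2, 3, 5, 8 yet continue with 4
# rather than the "obvious" Fibonacci answer 13.
import numpy as np

prefix = [1, 1, 2, 3, 5, 8]
chosen_next = 4                      # the non-Fibonacci continuation from the comment

xs = np.arange(len(prefix) + 1)      # positions 0..6
ys = np.array(prefix + [chosen_next])

# Degree-6 polynomial interpolating all seven points exactly.
coeffs = np.polyfit(xs, ys, deg=len(xs) - 1)
rule = np.poly1d(coeffs)

# The rule reproduces the prefix...
print([round(float(rule(x))) for x in range(len(prefix))])   # [1, 1, 2, 3, 5, 8]
# ...but its "next number" is 4, not 13.
print(round(float(rule(len(prefix)))))                       # 4
```

Any target value could be substituted for 4 and a matching polynomial would still exist, which is exactly why "what is the next number?" is only well posed once the generating rule is specified.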

