Allen Institute for Artificial Intelligence (AI2)
Distinguished Lecture: Gary Marcus
Artificial General Intelligence: Why Aren’t We There Yet?
All-purpose, all-powerful AI systems, capable of catering to our every intellectual need, have been promised for six decades, but have still not arrived. What will it take to bring AI to something like human-level intelligence? And why haven't we gotten there already? Scientist, author, and entrepreneur Gary Marcus (founder and CEO of Geometric Intelligence, recently acquired by Uber) explains why deep learning is overrated, and what we need to do next to achieve genuine artificial intelligence.
Source
This is a lie. The people who talk like this don't want conscious technology to happen; it is part of their media strategy. We are dealing with a worldwide conspiracy against real AI.
The first-ever simulation of a human brain is here: http://www.moravcik.info
Masterful presentation. Mirrors many of my own thoughts.
This video covers things I have been lecturing on for the past couple of years. The problem with Gary's lecture is that he discusses the problem through the deep learning (et al.) lens and the information-processor model. At PROME, we have thrown away the current AI paradigms and created a new AI using neuroscience and connectomics, which we call Biologic Intelligence. We had a difficult time figuring out ways to benchmark it, but found that the Raven's Progressive Matrices, used to measure general intelligence in children, work well as a test of our paradigm. We are continuing to develop it and believe that emulating animal nervous systems is a path to true AGI.
However, thanks for this lecture – I'm very glad to see people finally starting to speak up about the limits of narrow AI and how it will never get us to true machine intelligence.
With the vast amount of money and talent being thrown at various paths to developing AGI, one avenue is bound to pay off before long at this juncture.
Keghn’s Conscious AGI Machine:
https://groups.google.com/forum/#!topic/artificial-general-intelligence/f5yCbo3XALE
lol tough crowd. I will say, though, to Gary Marcus: brilliant talk for the most part, but as a robotics researcher, he is demonstrating his points inappropriately with the robot blooper reel, mostly because (coming from someone who worked on the DRC) ***none of those robots used learning for walking***. Everyone designed their walking procedure as sinusoidal gait control, a finite state machine, a zero-moment controller, or some combination. These are all highly structured, algebraic solutions, and from a deep learning researcher's perspective, more akin to what Gary is proposing than to what Yann is.
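The "sinusoidal gait control" mentioned above can be sketched in a few lines: joint targets are hand-designed, phase-shifted sine waves, with no learning anywhere in the loop. The frequency and amplitude values here are illustrative, not taken from any real DRC robot.

```python
import math

# Hedged sketch of open-loop sinusoidal gait control: hip joint targets
# follow sine waves, with the two legs 180 degrees out of phase.
def joint_targets(t, freq_hz=1.0, amp_rad=0.4):
    phase = 2 * math.pi * freq_hz * t
    left_hip = amp_rad * math.sin(phase)
    right_hip = amp_rad * math.sin(phase + math.pi)  # opposite leg, opposite phase
    return left_hip, right_hip

# At a quarter period the left hip is at full amplitude, the right at the opposite.
l, r = joint_targets(0.25)
print(round(l, 3), round(r, 3))  # → 0.4 -0.4
```

The point of the sketch is that every quantity is an explicit algebraic function of time, which is what makes such controllers "highly structured" rather than learned.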
Gary Marcus nearly said "AI winter" in this presentation. He backed off and softened it with "AI is in a local minimum"
great to debunk these DeepMud media-hyping scammers – originally set up to do some racketeering on the collapsing British health system, they were surprised when they found out they could extract much more from the NewEvils of this world, starting with GOOLAG (look at this Hassabis guy, who starts all his con-art lectures along the lines of "one day I decided to solve AI… and then everything else", while in reality they just took Schmidhuber–Hinton stuff and added a bit of engineering, not minding that they do just some mindless cat-finding and ad-targeting nonsense for GOOLAG – yes, well paid, but drug dealers and contract killers are well paid too… but one has to give them credit for the mania they created and the millions they managed to steal from the system… a new con artist conned a big old con artist)
Let's all sincerely hope these morons get stuck with their money-generating cat finding and completely miss out on the real stuff. Real AGI in the hands of the new totalitarians (presstitutes call them technology companies, while they are just soul-selling, dark-age-like scammers).
Points I agree with Gary on: CNN-based systems are short-lived, and local minima are the problem. Also, you need to have common sense built into your machine so that smaller past experiences can be combined to solve a bigger task.
However, I disagree with the comment that the world is too complicated to model. It is indeed not. Mother Nature believes in creating simple things. There is no such thing as high dimension: every image can be represented on its manifold, which is much simpler. A CNN tries to model everything in high dimension and therefore requires plenty of copies of the same image to get things going.
Coming to RL: it's wrong to solve a long-time-horizon task with a flat MDP. A long-horizon task needs to be solved using low-level skills.
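The hierarchical point above can be sketched with a toy corridor world: a flat policy must make 100 correct primitive decisions, while a high-level policy that composes reusable 10-step skills makes only 10. The world, skill length, and goal are all invented for illustration; this mirrors the options framework from hierarchical RL rather than any specific system.

```python
# Hedged sketch: compose short low-level "skills" instead of solving
# one flat long-horizon MDP. The environment is a corridor of length 100.

def primitive_step(state):
    return state + 1  # low-level action: move right one cell

def make_skill(length):
    # a skill is a fixed short sequence of primitive actions
    def skill(state):
        for _ in range(length):
            state = primitive_step(state)
        return state
    return skill

GOAL = 100
skill = make_skill(10)  # one reusable 10-step skill

# High-level "policy": invoke the skill repeatedly until the goal is reached.
state, high_level_decisions = 0, 0
while state < GOAL:
    state = skill(state)
    high_level_decisions += 1

print(high_level_decisions)  # → 10 decisions instead of 100
```

The credit-assignment horizon at the high level shrinks by the skill length, which is exactly why long-horizon tasks become tractable when decomposed this way.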
The audience clearly shows why everyone seems to think we are on the verge of this huge AGI explosion. They have little to no understanding of what it means to be human; they just think in terms of accuracy on some test data. Take the AlphaGo project: one thing the players said after playing against the system is that it behaved in a weird way, doing things that worked but that they had never seen any human player do. Machines just solve optimization problems, and unlike humans they don't think in terms of strategy, causality, or spatial inference. It's the same difference as between what it means to know that 1+1 = 2 for a human brain and for a pocket calculator.
Brilliant talk. Confirming a lot of my suspicions about what level learning is at.
I think common sense may come from context clues. So I think the answer may be to use context.
Example: sticker covered street sign.
We could teach it indoor versus outdoor objects, then ask it something like: have you ever seen a fridge outside? Obviously not perfect… but I think context is key in some way.
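The context idea above can be sketched as a plausibility check that down-weights detections clashing with where an object usually appears. The object names, contexts, and weights here are all invented for illustration; a real system would learn these priors from data.

```python
# Hedged toy sketch: use context priors to sanity-check a classifier's output.
TYPICAL_CONTEXT = {
    "fridge": "indoor",
    "street sign": "outdoor",
    "sofa": "indoor",
}

def plausibility(label, scene_context):
    # Down-weight detections that clash with the object's typical context.
    expected = TYPICAL_CONTEXT.get(label)
    if expected is None or expected == scene_context:
        return 1.0
    return 0.2  # context mismatch: treat the detection as suspect

print(plausibility("fridge", "outdoor"))       # → 0.2 (fridges are rarely outside)
print(plausibility("street sign", "outdoor"))  # → 1.0
```

Multiplying a detector's confidence by such a plausibility score is one simple way to make the sticker-covered street sign example fail gracefully.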
Finaaaally someone has said it, couldn't agree more with every point here. Best lecture I've seen in a long time.
I've predicted for some time there will be another AI winter before the thing is cracked, with just a few dozen pages of code. Mark my words.
Someone show this to George Hotz lol.
And thanks for calling out the Go BS scam.
*subscribed
https://thehackernews.com/2017/08/self-driving-car-hacking.html?m=1
One of the mistakes is thinking in evolutionary terms. It's nonsense. This individual refers to 'nativism', yet at the same time, using the ibex example, manages to say that over an evolutionary period the ibex learned. To that I say: there would be no such thing as an ibex if an ibex had to learn how to ibex. An ibex simply is.