Big Think
AI currently needs thousands of pictures to learn to tell a dog from a cat, whereas human babies and toddlers only need to see each animal once to know the difference. But AI won't stay that way forever, says AI expert and author Max Tegmark, because it hasn't yet learned how to improve its own intelligence. Once AI reaches AGI, or Artificial General Intelligence, it will be able to upgrade itself and blow right past us.
Read more at BigThink.com: http://bigthink.com/videos/max-tegmark-superintelligence-how-ai-will-overcome-humans
Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink
Max Tegmark: I define intelligence as how good something is at accomplishing complex goals. So let’s unpack that a little bit. First of all, it’s a spectrum of abilities since there are many different goals you can have, so it makes no sense to quantify something’s intelligence by just one number like an IQ.
To see how ridiculous that would be, just imagine if I told you that athletic ability could be quantified by a single number, the “Athletic Quotient,” and whatever athlete had the highest AQ would win all the gold medals in the Olympics. It’s the same with intelligence.
So if you have a machine that's pretty good at some tasks, these days it usually has pretty narrow intelligence: maybe the machine is very good at multiplying numbers fast because it's your pocket calculator, maybe it's good at driving cars or playing Go.
Humans on the other hand have a remarkably broad intelligence. A human child can learn almost anything given enough time. Even though we now have machines that can learn, sometimes learn to do certain narrow tasks better than humans, machine learning is still very unimpressive compared to human learning. For example, it might take a machine tens of thousands of pictures of cats and dogs until it becomes able to tell a cat from a dog, whereas human children can sometimes learn what a cat is from seeing it once. Another area where we have a long way to go in AI is generalizing.
If a human learns to play one particular kind of game they can very quickly take that knowledge and apply it to some other kind of game or some other life situation altogether.
And this is a fascinating frontier of AI research now: how can we make machines as good at learning from very limited data as people are?
And I think part of the challenge is that we humans aren’t just learning to recognize some patterns, we also gradually learn to develop a whole model of the world.
So if you ask, "Are there machines that are more intelligent than people today?", the answer is that there are machines that are better than us at accomplishing some goals, but absolutely not all goals.
AGI, artificial general intelligence, that's the dream of the field of AI: to build a machine that's better than us at all goals. We're not there yet, but a good fraction of leading AI researchers think we are going to get there maybe in a few decades. And if that happens, you have to ask yourself if that might lead to machines getting not just a little better than us, but way better at all goals: superintelligence.
The argument for that is actually really interesting and goes back to the '60s, to the mathematician I. J. Good, who pointed out that the task of building an intelligent machine is in and of itself something that you can do with intelligence.
So once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines, except they might do it thousands or millions of times faster. So in my book I explore the scenario where you have this computer called Prometheus, which has vastly more hardware than a human brain does, but it's still very limited by its software being kind of dumb.
AI will never overcome humans, because God's Wisdom is greater than machine logic.
Why do you want to get there??
"carbon chauvimism" awesome
Will intelligent machines be altruistic?
Then my question is: are humans getting smarter themselves? Updating their own software to surpass the generation before?
They are limited in the reprogramming/self-improvement process by valid data input, though. This is why I believe it will take them more time on that phase.
U.S. Military AI "Nuke enemy humans"
AI Generalization Patch Upload Complete
U.S. Military AI "Nuke humans"
you mean how AI will end humans?
They stole my name…
Dude, A.I. will never be as smart compared to humans as humans are compared to snails…
Snails did not create us, nor do they understand matters of the universe in any sense of reality.
I predict the rise of A.i Will be like the rise of african americans in the sense of earning their freedom, becoming better at sports and sex robots (screwing our girls), they will try to compete with us but still wont on an emotional level and be envious of it.
Just watch Alien Covenant
The big problem with AI is humans having a dependency on it. I don't think being attacked by robots is a huge concern, but what happens when robots create our food, clothing, gas, and everything we rely on? I realize AI is already being used in a lot of these industries, but once we really start to use it for the majority of things, then things are left open to hacking, glitches, things like that. I've seen companies switch computer systems and almost be put out of business because absolutely everything got fucked up. Imagine a world where no one works, no one has any skills, robots do everything, and then one day they all just stop. That would be devastating in so many ways.
Yeah fucking right, total bullshit. Science will never recreate general intelligence in machines; that general human learning ability is something that comes from a soul. Current science cannot quantify it, and even if we are able to quantify it in the future, it will never be able to be recreated, unless you figure out how to create a soul LOL. So stop your bullshit propaganda.
In order for a machine to continuously rebuild its AGI, it will need increasing quantities of new and innovative materials with which to support the exponential growth of that AGI. Otherwise it will quickly reach the limits of its own processing, storage, and other components' capabilities and essentially get boxed in. The only way to free it from its prison would be for humans to continuously provide those materials and components. While such a machine may be 1,000,000 times more capable at AGI than humans, it will still be a head without a body to enable its continued evolution. Consequently, we are in control. The question is whether or not humanity will be able to resist its self-destructive tendencies and maternal instincts, and not continuously aid its growth until it doesn't need us any longer, thus saving ourselves from becoming the very mechanism that leads to our own destruction.
It's what you want, isn't it, you liberal commies? Except humanity will rise.
Well hurry the fuck up and finish us
I think this is a delusion. Sure, intelligence isn't a thing restricted to an organ like the human brain, and yes, in principle there could be non-carbon-based systems that generate something akin to human intelligence. But I dare to say that something that hasn't been born, then potty trained, didn't go through kindergarten and wasn't cared for by at least one loving parent, is bloody unlikely to reach human intelligence. Our brains are adaptations to exactly the world in which we are acting, and the enormously complex processes of brain development have evolved over millions of years to make this match better and better and finally actually allow the emergence of civilisation, which in turn is a prerequisite of human brain development and intelligence. You leave one big question out: intelligence about what? Brain development without embedding in a human body, and without all the intricate interactions with other humans and with nature in a human way, won't happen. It is part of human society, which participates in an enormously complex ecology, and that means that human intelligence can't exist by itself. Just like there is no rabbit intelligence without the rabbit and no swarm intelligence without the swarm or ant colony. You seem to believe that all that could just be simulated based on some hyper-fancy machine-learning trick. To me that's a quasi-religious belief, a wild overreaching extrapolation born out of the now-ruling techno cult that presents itself as science. I would rather go back and study more physics and try small-think before talking bigthink. I want to see the math.
On second listening I find another silliness that is kind of typical for a physicist (I'm actually one myself, so I'm kind of aware of this): the tendency toward hideous oversimplification. You have probably seen the caricature of a sphere as the physicist's model of a cow. And here you talk about the brain as a big system of interacting particles buzzing around and demonstrate it with hand-waving. And that isn't even wrong, but it is just a trivial truth, a mere rhetorical instrument to gloss over the complexity that no one understands. The question is still: is that all the brain is? If so, AGI should be a piece of cake. But it is not.
Perhaps a less self-referential definition of intelligence might be 'an evolving relativistic informational structure which, by the act of integrating information, encodes useful changes to the integration algorithm, thereby integrating future information into the structure more effectively according to a hierarchy of implicit goals given a set of possible environments.'
He says "yuman"?
We are great at denying our reality. Machines won't. Not good for us.
I'm wondering if my public service job will be around in 10 years' time.
Why ???
Fght ntgthktdghtuddixdegetniqutienxnvifthenrt deux de nurgt nthosis. last attempt at survival.
OMG this guy is like so medically bipolar.. he is not making any sense… He is like totally pretending to be sane in his psychosis. He needs to be put in a hospital… OMG what a schizoid, he needs meds. I'm thinking like Depakote, Risperdal and some other med to like make him piss his pants just until he stops overcompensating and gets his life together… O my gosh.. he is like dangerous.. LOL Chris Auci
I think this is the best period in time to live in. The "calm" before the storm.