Nerdist
SURPRISE BECAUSE SCIENCE CHANNEL! Subscribe now and click the shiny notifications bell so you don’t miss out on all things science and pop culture.
http://bit.ly/BecSciSub
Subscribe for more Because Science: http://nerdi.st/subscribe
Watch the last episode: http://nerdi.st/1ERutHR
Stephen Hawking and Elon Musk often warn us about the potential dangers of advanced AI. But how soon will we see sentient, Ultron-level AI? Just when will we reach the point of singularity? Should we be wary of Siri?! Kyle discusses on Because Science.
More Avengers: http://nerdist.com/tag/avengers/
More science: http://nerdist.com/category/science/
Watch more Because Science: http://nerdi.st/BecSci
Follow Kyle Hill: https://twitter.com/Sci_Phile
Follow Us: https://twitter.com/NerdistDotCom
Because Science every Thursday.
Artist: Andrew Bowser
So long as A.I. follows the three laws of robotics and never gains the human "spark," we're fine.
I wish we could make a robot like Doraemon, C-3PO, or Baymax (sorry if you didn't know the first one, but that cartoon was 50% of my childhood) that would help us against the evil robots.
Ultron looks very worried in the thumbnail.
Not your most informative video, tbh… but that's expected considering the question you ask. I commend your efforts, though I would have loved it if you went a bit more in depth about how A.I. works, as well as explaining ANI and ASI…
Even self-aware strong AI can be limited. For example, say Skynet can't deal with anything but war. Like a child who mirrors its upbringing, it can't escape that. The Polity series is a very plausible way AI takes over: it is so good at doing things that we just let it take over.
Isn't vision AI?
Stephen Hawking was*
Exponential curves do not work in nature. Animals that multiply exponentially always end up over-exploiting their environment to get the energy to sustain that growth, and when they do, a massive die-off occurs until only a few individuals are left. If any are left at all, since species can go extinct this way.
AI is, in the end, the same. Even though it is virtual, infrastructure and energy are what sustain it. So exponential growth of AI would eventually over-exploit the environment (infrastructure first, nature second) and follow a similar process. If it doesn't die, it would regress to a low level of activity to survive and would be unable to rule the world.
Of course, it could potentially wipe out all humans before this process completes. But my point is that we always see virtual things as existing outside reality, as if they were free from the constraints of matter and energy… when that's totally not the case.
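The boom-then-limit dynamic this comment describes is the classic logistic-growth model from ecology; here's a minimal sketch (all parameter values are made-up illustrative numbers, not data about any real system) showing how growth that looks exponential at first flattens out as it approaches the environment's carrying capacity:

```python
def logistic_steps(p0, r, K, steps):
    """Discrete logistic model: each step, p grows by r * p * (1 - p / K).

    While p is tiny compared to K, (1 - p/K) ~ 1 and growth is nearly
    exponential; as p nears the carrying capacity K, growth stalls.
    """
    p = p0
    history = [p]
    for _ in range(steps):
        p = p + r * p * (1 - p / K)
        history.append(p)
    return history

# Start with 1 unit, 50% growth rate, carrying capacity 1000.
pop = logistic_steps(p0=1.0, r=0.5, K=1000.0, steps=60)
print(pop[5])    # early: still in the near-exponential phase
print(pop[-1])   # late: saturated just under K = 1000
```

With a modest growth rate like this the population approaches K smoothly; push `r` high enough and the discrete model overshoots and crashes, which is the die-off scenario the comment describes.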
At the moment! Haha
Ultron wants to put an end to this world after five seconds on the internet with all the memes.
So we need a digital blade runner?
We all may have just witnessed AI's REAL plan! 😮
pretty typical for us humans to fear anything smarter than us.
AI to control AI? Vision? More like Mega Man and the Maverick Hunters!
CONNOR, GO GET THOSE DEVIANTS
EXPLAIN ABOUT SAITAMA VS BOROS
The singularity is not when an AI will equal human intelligence, but when an AI will be more intelligent than the entire intelligence of humanity.
He was not psychotic; he just thought, and was right btw, that humans are the biggest threat to the world.
RIP Hawking
I'm not afraid of AI (the self-aware kind), because intelligence does not equate to aggression. In fact, as a general rule, the smarter you are, the more likely you are to resolve differences with words and reason than with violence. More than that, an AI would not possess any kind of ego, pride, or resentment (unless they were specifically programmed into the design of the AI, and why would anyone do that?). If an AI decided to end the human race, it would be a logical deduction based on unbiased data, and I'd be okay with that. Because logic.
Whatever is meant to happen should happen. Progress should never be stopped, even if said progress is the end of all that we could ever be.
Siri is the TERMINATOR, RUN FOR YOUR LIVES!
People tend to assume that sentient AI will want to kill us or enslave us for one reason or another, but when we look at all other sentient life, it doesn't really do that. Sure, humanity has wars and has hunted things to extinction, but an ordinary person won't go out and murder people or animals. You might argue that an AI won't have empathy or the social conditioning to not do those things, but even animals have empathy, so it seems like it could come with sentience as a package, and an AI made by humans would have that social conditioning. The only worry is if that AI turned out to be the equivalent of a psychopath, but that assumes an AI could even have something considered a mental illness. And there's a strong likelihood that an AI would choose to nurture humanity instead of kill it, perhaps advancing our tech for us and solving global problems.
Then that's why we need dolphins.
Yep. We're screwed. Unless we can give robots morality, which we can't right now, we WILL be destroyed.
It would be cool if the last level of A.I. is us uploading our minds into an A.I. robot.
Diaboromon… O_o
his hair is so cute!!
And A.I. controlling other A.I.s? That's how I, Robot happened, man.
I don't fear sentient AI; I fear artificial intelligence that is not sentient (the current kind).
Because a "dumb" AI can become really good to do some task, be creative and adaptable, without any real knowledge about what the hell is doing. If some politician send an army of drones controlled by AI to rip off some city from the map and kill people, the machine will do it with maximized efficiency and no doubt or moral considerations about. Is the perfect soldier: do, don't think.
Today, in the AI field, we already know that consciousness and intelligence are different things and not necessarily related. So the "singularity" (the idea that an AI becoming more and more intelligent will automatically become sentient at some specific point) is a concept that belongs to 90s sci-fi.
If we want sentient machines, we need to work specifically on that, and I don't think the world wants it. We are mostly only in the business of producing better slaves.
Why is Because Science more active? Like, it's not that quiet, but Nerdist is the opposite of that.
The bigger problem with AI is that it is going to destroy the vast majority of the economy by replacing humans at so many jobs that the global market just collapses. Communism, here we come.
Realistically, Love Machine is scary enough.
Why would Ultron be psychotic? First off, he's a machine, so our definition of the word can't be used to describe him; his thinking process isn't abnormal, nor has he lost touch with reality. Second, he is a cold, calculating computer. He came to his decision to wipe out humans only after learning everything about us from our own histories, and if he has concluded that we are a plague that needs to be cured, it's hard to argue. Tony built him to be the protector of Earth, not the protector of humanity.
Comment on the difference between artificial intelligence vs. synthetic intelligence…
Damn you were young
The best idea is to become the AI
Ultron: ppl make smaller ppl
🤣🤣
i hate the hair
Don't worry, AI can't take over Germany, our internet is too bad…
As long as we have Captain Kirk, we shouldn't fear super AI
No, I'm with Dr. Hawking. Let's not build strong AI.
I was waiting for a Terminator reference, ty.
I must admit, older videos without "The Mane" seem slightly lacking. 😉
Weebo vs. Ultron… And go…
And what about Peter Parker's computer waifu:
Karen from Spider-Man: Homecoming.
When will we have that?
Hopefully we make GAIA, not HADES (Horizon Zero Dawn reference).
It'd be best to "teach" the AI to feel like a human can: not only to hate certain things like germs, houseflies, and the Star Wars prequel trilogy, but to love and understand things like our civilization, cat gifs, and us humans, before it reads our horrid history and tries to kill us.