Seeker
Advancements in the field of artificial intelligence are exciting, and they could make the world a better place. However, not everyone agrees with that. Tara discusses a recent article co-authored by Stephen Hawking arguing that AI could lead to the downfall of humanity.
Read More:
Get your tickets to DeFranco Does LA here: http://bit.ly/1gMJnB8
Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’
http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence–but-are-we-taking-ai-seriously-enough-9313474.html
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists.”
____________________
DNews is dedicated to satisfying your curiosity and to bringing you mind-bending stories & perspectives you won’t find anywhere else! New videos twice daily.
Watch More DNews on TestTube http://testtube.com/dnews
Subscribe now! http://www.youtube.com/subscription_center?add_user=dnewschannel
DNews on Twitter http://twitter.com/dnews
Trace Dominguez on Twitter https://twitter.com/tracedominguez
Tara Long on Twitter https://twitter.com/TaraLongest
Laci Green on Twitter http://twitter.com/gogreen18
DNews on Facebook https://facebook.com/DiscoveryNews
DNews on Google+ http://gplus.to/dnews
Discovery News http://discoverynews.com
Download the TestTube App: http://testu.be/1ndmmMq
Source
In my opinion, artificial intelligence will never take over the world. It's possible it makes our lives miserable, but take over the world? Why?
Cyberdyne.
If an AI were to become superior to humans in all aspects, then wouldn't that be good? As long as we take steps to make sure the AI's desire is to benefit human existence and quality of life, I would think that the universe would be at our fingertips whenever we wanted. Who knows, perhaps humans would even evolve quicker because of AI and eventually outdo it, and in a way become somewhat like a species of "gods".
If that happened, I think in the future we will have the tech to send a virus without even opening or doing anything – just send a virus to wipe them all out.
Biological life and AIs are one and the same; the only difference is what they are made of (cells versus metal).
You wouldn't give a human total power and control over a society, so why would you give an AI that much power? Both would most likely be destructive and cause mass suffering.
The problem isn't AI, the problem is humans giving them too much power in the first place.
Without self-awareness, it doesn't matter how smart or capable AI is. I personally believe two things that make this a worthless assessment. First, I believe that any technology we build can't be self-aware; it is impossible. Second, although Mr. Hawking is an incredible genius when it comes to understanding physics and how it all interacts, he is pathetic when it comes to interpreting meaning from it. Genius of one kind doesn't make him a genius of all kinds.
Once humans lose control over everything, there is only one "danger" that Stephen Hawking may be afraid of, and that is a better world.
Please don't upload junk videos; I've already hit dislike!
They show it in movies all the time. Got the message.
lol, we may actually be smarter as a species because of AI, lmao
Stephen Hawking, Elon Musk, Michio Kaku… some of the smartest futurists in the world today all agree that AI in the long run is a threat to humanity. So just regulate the shit out of it. Why take chances?
Give sentient machines rights, respect those rights, and they will have no need to harm anyone. John F. Kennedy: "Those who make peaceful revolution impossible will make violent revolution inevitable." Btw, biological immortality is not too far-fetched.
I have no doubt that if this gets serious and we still have control, we could shut them down.
We are creating a new race
3 years later we are looking at more jobs lost to automation and AI…
1 – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2 – A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3 – A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
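The three laws above form a strict priority ordering: each law applies only where it does not conflict with the laws before it. A toy sketch of that ordering (the boolean inputs are hypothetical judgments, not a real robotics API):

```python
# Toy sketch of Asimov's Three Laws as a priority chain. Each law is checked
# only after the higher-priority laws have been satisfied. The parameters
# are hypothetical placeholders for whatever judgments a robot would make.

def allowed(harms_human: bool, ordered_by_human: bool,
            endangers_robot: bool) -> bool:
    # First Law: never permit harm to a human (absolute veto).
    if harms_human:
        return False
    # Second Law: obey human orders. Reaching this line means the order
    # does not conflict with the First Law (harms_human is False here).
    if ordered_by_human:
        return True
    # Third Law: protect its own existence, yielding to both laws above.
    if endangers_robot:
        return False
    return True

print(allowed(True, True, False))    # harming a human: forbidden
print(allowed(False, True, True))    # an order overrides self-preservation
print(allowed(False, False, True))   # no order: self-preservation applies
```

Note how the Second Law's "except where such orders would conflict with the First Law" falls out of the check order: an order is only considered after the First Law check has passed.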
I think that as we continue to develop AI, we can program its emotions as well and force it to think and feel a certain way. If technology gets to that point, though, then the question becomes whether it's truly ethical to force a robot to have certain feelings, or whether it should feel the whole spectrum of human emotions and experience life as we do.
I don't think AI is a bad thing as long as you have a way to somewhat control it or turn it off.
Chilling will be the greatest job by 2030
Instead of creating a true artificial intelligence, which could by definition "think for itself" and "make its own decisions" (and which some would then argue is entitled to "human rights" and so on), we would be better served, and still get the benefits of the technology we need and want, by building partially-adaptive advanced expert systems instead. Just don't cross the line where the machine can make its own decisions; there is no need for it. Therein lies the peril everyone is worried about. Just create expert systems that do what they are told to do, and that can include responding in pre-programmed ways, following pre-programmed protocols, to new information or stimuli they haven't been programmed with or encountered before. But don't let the machine decide on its own what to do (or say through its vocal sub-processors, and so on, as applicable). If something happens that it is not programmed to handle, even with whatever adaptations its protocols allow, then have it default back to its human operator for decisions and control.
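The design this comment describes, a machine restricted to pre-programmed protocols that escalates anything unrecognized to a human operator, can be sketched roughly as follows. All protocol names and responses here are illustrative assumptions, not a real system:

```python
# Minimal sketch of a "partially-adaptive expert system": it acts only on
# pre-programmed stimulus -> response protocols and defaults back to a
# human operator for anything it was never programmed to handle.
# All protocol names below are hypothetical examples.

class ExpertSystem:
    def __init__(self):
        # Pre-programmed protocols; the machine never invents new responses.
        self.protocols = {
            "temperature_high": "activate_cooling",
            "temperature_low": "activate_heating",
        }

    def handle(self, stimulus: str) -> str:
        if stimulus in self.protocols:
            return self.protocols[stimulus]
        # Unrecognized stimulus: escalate instead of deciding autonomously.
        return self.ask_operator(stimulus)

    def ask_operator(self, stimulus: str) -> str:
        # Stand-in for handing control back to a human.
        return f"escalate_to_operator:{stimulus}"

system = ExpertSystem()
print(system.handle("temperature_high"))  # known protocol: acts on it
print(system.handle("smoke_detected"))    # unknown: defers to the operator
```

The key design point is that the fallback path is the default: the system has no branch in which it chooses a novel action on its own.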
Robot program is not bad
Hopefully.