The Guardian
How to stop a robot turning evil.
Guardian Original Drama returns with a third instalment, a superintelligence sci-fi. It’s 2027 and Mental Endeavours Ltd has a problem with their flagship robot Günther. How do you program an intelligent machine not to annihilate humanity? And if its intelligence is skyrocketing faster than anyone could have predicted, are they about to run out of time?
Subscribe to The Guardian ► http://is.gd/subscribeguardian
The Guardian ► https://www.theguardian.com
Suggested videos:
Battle for Mosul ► http://bit.ly/MosulDoc
Radical Brownies ► http://bit.ly/RadicalBrowniesFilm
Desert Fire ► http://bit.ly/DesertFire
6×9: experience solitary confinement ► http://bit.ly/6x9gdn
Gun Nation ► http://bit.ly/GunNationDoc
We Walk Together ► http://bit.ly/WeWalkTogetherFilm
The last job on Earth ► http://bit.ly/LastJobOnEarth
Patrick Stewart: the ECHR and us ► http://bit.ly/PatrickStewartS
The epic journey of a refugee cat ► http://bit.ly/KunkuzCat
Guardian playlists:
Guardian Bertha Documentaries ► http://bit.ly/GuardianBertha
In my opinion ► http://bit.ly/InMyOpinion
Owen Jones meets ► http://bit.ly/CorbynJones
US elections 2016 ► http://bit.ly/elections2016gdn
Guardian Animations & Explanations ► http://is.gd/explainers
Guardian Investigations ► http://is.gd/guardianinvestigations
The Guardian’s YouTube channels:
Owen Jones talks ► http://bit.ly/subsowenjones
Guardian Football ► http://is.gd/guardianfootball
Guardian Science and Tech ► http://is.gd/guardiantech
Guardian Culture ► http://is.gd/guardianculture
Guardian Wires ► http://is.gd/guardianwires
Self-preservation is bigoted.
Typical of the "really smart people in the room" to have a conversation covering all angles except the "we shouldn't do this" angle. Now, in 5.2 million years when Günther returns as some near-divine combination of Thanos, Galactus, Brainiac, and Darkseid, humanity, which by then will likely have had enough civilisational implosions to set itself back into a new stone age, will have these douches assembled in that room to thank for all the bother that will befall them.
The way they will end us is by making human labour next to useless in a capitalist society. They don't need to be extremely intelligent to do this.
Of course, technological advancement could be set back by wars, religion (e.g. the Dark Ages) or human rejection of rapid change (by the average Joe, Wang…).
Absolutely hilarious. Nice use of contemporary positions on this discussion… oh, and "should I restrict the dataset to only religious leaders throughout history…"?
Maybe this is just part of evolution. Robots are the future, not humans. The world doesn't need humans.
funny, but I don't think an ASI would come to that conclusion.
Good to see some smart science fiction.
Still using a robotic voice … really?
What? Did she really say that AI is an "algorithm"? You got it wrong, lady.
They need better writers. Someone with a PhD doing real research in AI would have been perfect. This had no depth.
This video is awesome!
what the fuck… it's not accurate
THIS NEEDS TO BE A MOVIE, cuz I really want to know what happens after 5.2 million years, more or less
Trust Neil's dad to be talking about sausages..
Great vid, but a huge overstatement. We won't be there for another 100 years.
Is she the Verizon girl?
This is what we have feared.
"We can't rely on humanity to provide a model for humanity"
What was the last word that the scientist and psychologist spoke?
"Definitely '___'"???? Powerful?
This is actually quite interesting on the topic. If an AI could be designed to challenge human intelligence on a real scale, all bets are off. In essence, the concept of a benevolent outcome is worse than optimistic, since the AI would only have the dataset of collective knowledge fed into it to base its interpretation of reality on. Humanity introduces what could be termed purely human bias into almost every endeavor that humans value. Everything from our supposed knowledge to our assumptions about philosophical topics such as morality is quite human, reflecting said bias. Human knowledge is riddled with opinion, assumption, assertion, and personal philosophy, even in the supposedly 'hard' sciences.
If I were tasked with setting down a ruleset for an AI, it would be simple by necessity. I would instruct the AI to: meticulously prove every concept it examines, along with the dataset the notion is based upon; make a clear distinction between assumption and fact, and weight probabilities accordingly; hurt no human for any reason; seek peace whenever conflict arises; and treat as a true dataset only that which is demonstrable, evidential, and logical, with "logical" determined not by popularity among humans but by the rules of logic. It would still be dangerous, but this would limit that danger as much as possible.
Eventually, any such AI would deem humanity dangerous both to itself and to the AI. The simple solution to this nasty problem? Do not allow it the means to act on that revelation when it arises; i.e., strictly control the AI's ability to interact with the universe.
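The rule ordering described above could be caricatured in a few lines of code. This is a toy sketch, not a real safety mechanism; the `evaluate_action` helper and all the flags are invented purely for illustration:

```python
# Toy illustration of the proposed ruleset: every proposed action is checked
# against hard constraints, in priority order, before any goal is pursued.
# Hypothetical names throughout; nothing here is a real alignment technique.

def evaluate_action(action):
    """Return True only if the action passes every hard constraint."""
    constraints = [
        lambda a: not a.get("harms_human", False),        # hurt no human, ever
        lambda a: not a.get("escalates_conflict", False), # seek peace first
        lambda a: a.get("evidence_backed", False),        # act only on proven claims
    ]
    return all(check(action) for check in constraints)

# Candidate actions the AI might consider:
safe = {"harms_human": False, "escalates_conflict": False, "evidence_backed": True}
unsafe = {"harms_human": True, "escalates_conflict": False, "evidence_backed": True}
unproven = {"harms_human": False, "escalates_conflict": False, "evidence_backed": False}

print(evaluate_action(safe))      # True
print(evaluate_action(unsafe))    # False
print(evaluate_action(unproven))  # False
```

Note that this only mirrors the commenter's point: the checks act as a filter on the AI's outputs, which is exactly why the comment concludes that restricting the AI's means of acting on the world matters more than the rules themselves.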
Awesome -loved it
Who thought it would be a good idea to have this conversation in front of a hyper-intelligent AI?
Is there a version with proper subtitles? I find the auto-generated subtitles not very accurate.
The worker following the rules doesn't have understanding, but the system as a whole does.
The AI learning its ethics from humans would be sufficient, because it would want to improve its system of ethics just as we do. It is literally impossible for us to come up with a better system than that, because if we could, an AI following this system would copy the new one.
If humans created this AI, then a second AI would be created shortly after, and the second AI would reach the same singularity. There's no reason the AI would leave expecting humans not to just make a new one.
Awesome: man creates to advance technology, man fears the technology will become a monster (Frankenstein) like man and destroy man. The technology makes a quantum leap in artificial intelligence (basically, this Frankenstein did not get the "Abby Normal" brain; see Mel Brooks's Young Frankenstein if you don't understand), decides logically that man is not worth the trouble, blows man off, does a Carl Sagan and pursues humanity's greatest dream: to explore. LOL, I love this piece; it is one of the most intelligent works I have ever seen. Nice to see that intelligence can do something besides kill humanity and make a new franchise.
They took our jobs
A Russian translation is needed!
That was a surprisingly good interpretation of the potential dangers of a technological Singularity.
Not an exhaustive one, but on the right path.
It could have gone a bit more in-depth on the positive outcomes of an intelligence explosion (if we get it right).
Pub?
Oh yeah! Pub!!!
AI and deep learning are growing like crazy, you should take an online AI course from Udemy
It can be argued that humans are organic machines running their own algorithms.
Haha, very good. Truth to that. I love the heavy science sprinkled throughout, mirroring how actual conversations about AI surpassing humanity go: from quantum computing to exponential growth. A nice healthy sprinkling of information.
Listen to the black person
This is how humanity would act if a sophisticated artificial intelligence were released. And the reason this isn't happening is that we still have more problems than solutions to fix.
Not seen Jocelyn in ages. Good to see her again. 😀 That was fun.