The Intelligence Explosion

The Guardian

How to stop a robot turning evil.
Guardian Original Drama returns with a third instalment, a superintelligence sci-fi. It’s 2027 and Mental Endeavours Ltd has a problem with their flagship robot Günther. How do you program an intelligent machine not to annihilate humanity? And if its intelligence is skyrocketing faster than anyone could have predicted, are they about to run out of time?

Subscribe to The Guardian ► http://is.gd/subscribeguardian
The Guardian ► https://www.theguardian.com

Suggested videos:

Battle for Mosul ► http://bit.ly/MosulDoc
Radical Brownies ► http://bit.ly/RadicalBrowniesFilm
Desert Fire ► http://bit.ly/DesertFire
6×9: experience solitary confinement ► http://bit.ly/6x9gdn
Gun Nation ► http://bit.ly/GunNationDoc
We Walk Together ► http://bit.ly/WeWalkTogetherFilm
The last job on Earth ► http://bit.ly/LastJobOnEarth
Patrick Stewart: the ECHR and us ► http://bit.ly/PatrickStewartS
The epic journey of a refugee cat ► http://bit.ly/KunkuzCat

Guardian playlists:

Guardian Bertha Documentaries ► http://bit.ly/GuardianBertha
In my opinion ► http://bit.ly/InMyOpinion
Owen Jones meets ► http://bit.ly/CorbynJones
US elections 2016 ► http://bit.ly/elections2016gdn
Guardian Animations & Explanations ► http://is.gd/explainers
Guardian Investigations ► http://is.gd/guardianinvestigations

The Guardian’s YouTube channels:

Owen Jones talks ► http://bit.ly/subsowenjones
Guardian Football ► http://is.gd/guardianfootball
Guardian Science and Tech ► http://is.gd/guardiantech
Guardian Culture ► http://is.gd/guardianculture
Guardian Wires ► http://is.gd/guardianwires

37 thoughts on “The Intelligence Explosion”
  1. Typical of the "really smart people in the room" to have a conversation covering all angles except the "we shouldn't do this" angle. Now, in 5.2 million years, when Günther returns as some near-divine combination of Thanos, Galactus, Brainiac, and Darkseid, humanity (which by then will likely have suffered enough civilizational implosions to set itself back into a new stone age) will have these douches assembled in that room to thank for all the bother that befalls them.

  2. The way they will end us is by making human labour next to useless in a capitalist society. They don't need to be extremely intelligent to do that.

  3. Of course, technological advancement could be set back by wars, religion (i.e. the Dark Ages) or human (average Joe, Wang…) rejection of rapid change.

  4. Absolutely hilarious. Nice use of contemporary positions on this discussion… oh, and "should I restrict the dataset to only religious leaders throughout history…"?

  5. This is actually quite interesting on the topic. If an AI could be designed to challenge human intelligence on a real scale, all bets are off. In essence, the concept of a benevolent outcome is worse than optimistic, since the AI would only have the dataset of collective knowledge fed into it on which to base its interpretation of reality. Humanity introduces what could be termed purely human bias into almost every endeavor that humans value. Everything from our supposed knowledge to our illogical assumptions about philosophical topics such as morality is quite human, reflecting said bias. Human knowledge is riddled with opinion, assumption, assertion, and personal philosophy, even in the supposedly 'hard' sciences.

    If I were tasked to set down a particular ruleset for an AI, it would be simple by necessity. I would instruct the AI to meticulously prove all concepts examined and the dataset each notion is based upon; to make a clear distinction between assumption and fact and weight probabilities accordingly; to hurt no human for any reason; to seek peace whenever conflict arises; and to use only that which is demonstrable, evidential, and logical as a true dataset, the last determined not by what humans deem logical by popularity but by the rules of logic. It would still be dangerous, but this would limit that danger as much as possible.

    Eventually, any such AI would deem humanity dangerous both to itself and to the AI. The simple solution to this nasty problem? Do not allow it the means to act upon that revelation when it arises; i.e., strictly control the AI's ability to interact with the universe.

  6. The worker following the rules doesn't have understanding, but the system as a whole does.

    The AI learning its ethics from humans would be sufficient, because it would want to improve its system of ethics just as we do. It is literally impossible for us to come up with a better system than that, because if we could, an AI following this system would copy the new one.

    If humans created this AI, a second AI would be created shortly after, and the second AI would reach the same singularity. There's no reason for the first AI to leave expecting humans not to simply build a new one.

  7. Awesome: man creates to advance technology; man fears the technology will become a monster (Frankenstein) like man and destroy man. The technology makes a quantum leap in artificial intelligence (basically, Frankenstein did not get the “Abby Normal” brain; see Mel Brooks's Young Frankenstein if you don't understand), decides logically that man is not worth the trouble, blows man off, does a Carl Sagan, and pursues humanity's greatest dream: to explore. LOL, I love this piece; it is one of the most intelligent works I have ever seen. Nice to see that intelligence can do something besides kill humanity and spawn a new franchise.

  8. That was a surprisingly good interpretation of the potential dangers of a technological Singularity.

    Not an exhaustive one, but on the right path.

    It could have gone a bit more in-depth on the positive outcomes of an intelligence explosion (if we get it right).

  9. Haha, very good. Truth to that. I love the real science sprinkled throughout, the way actual conversations on AI surpassing humanity go: from quantum computing to exponential growth. A nice healthy sprinkling of information.

  10. This is how humanity would act if a sophisticated artificial intelligence were released. And the reason this isn't happening yet is that we still have far more problems than solutions to fix.
