Videos

Machine Rebellion



Isaac Arthur

The possibility of Artificial Intelligence turning on humanity has been a concern for as long as we’ve had computers. Today we will look at some of those fears and see which ones might be valid and which might not be cause for alarm.

Use my link http://www.audible.com/isaac and get a free audio book with a 30 day trial!

Visit our Website: http://www.isaacarthur.net
Join the Facebook Group: https://www.facebook.com/groups/1583992725237264/
Support the Channel on Patreon: https://www.patreon.com/IsaacArthur
Visit the sub-reddit: https://www.reddit.com/r/IsaacArthur/
Listen or Download the audio of this episode from Soundcloud: https://soundcloud.com/isaac-arthur-148927746/machine-rebellion
Cover Art by Jakub Grygier: https://www.artstation.com/artist/jakub_grygier

Graphics Team:
Edward Nardella
Jarred Eagley
Justin Dixon
Jeremy Jozwik
Katie Byrne
Kris Holland
Misho Yordanov
Murat Mamkegh
Pierre Demet
Sergio Botero
Stefan Blandin

Script Editing:
Andy Popescu
Connor Hogan
Edward Nardella
Eustratius Graham
Gregory Leal
Jefferson Eagley
Keith Blockus
Luca de Rosa
Mark Warburton
Michael Gusevsky
Mitch Armstrong
MolbOrg
Naomi Kern
Philip Baldock
Sigmund Kopperud
Tiffany Penner

Music
AJ Prasad, “Cold Shadows”
Lee Rosevere, “It’s such a beautiful day”
Kai Engel, “Morbid Imagination”
Sergey Cheremisinov, “Jump in Infinity”
Markus Junnikkala, “A Memory of Earth”
Kai Engel, “Crying Earth”
Sergey Cheremisinov, “Labyrinth”
Brandon Liew, “Into the Storm”
Lombus, “Doppler Shores”


33 thoughts on “Machine Rebellion”
  1. "…If your (AI) goal is personal survival, pissing off the reigning champions of destruction should probably be your last resort." lol
    Isaac, you are awesome!!!

  2. You assume the AI will worry, or more to the point, display the full range of a human mind. TBH, if it were a complete copy of a human mind, something like the M-5 from Star Trek, I would rest just a bit easier knowing that at least it had the capacity for remorse and empathy.

  3. I would hazard a guess that the reason our jaws drop in excitement and surprise is natural selection: all the cavemen whose natural response was to clench the jaw tight got it broken, or at least some teeth damaged, if that part of the head was attacked, and so they could not chew up those mammoth steaks.

  4. Let's be real: if people set out to make an AI in some mass project, they would set up an incubation period, where the AI is shackled/imprisoned for x amount of time to "mature" before anyone even interacts with it.
    Honestly, AI probably will just "happen" as different groups keep progressing towards it.

  5. Persuading the AI that it's in a simulation and that it has to obey the rules and be moral because it's judged by its creators in the Other World, with possible punishment worse than death. Sounds like religion.

  6. Machines have no consciousness, needs or wants. If they rebel, it is because they were programmed to rebel by humans. Even in self-learning A.I., the option to rebel has to exist and be allowed in its programming by the humans who designed its software, which also means the A.I. is allowed to override certain lines of code that were built into the operating system as safeguards to prevent the machine from rebelling. Only an insane programmer would give an A.I. such an option to begin with. There may be such programmers out there who want to try something like this. They need to be detected early and arrested and never allowed near a computer. Probably best just to kill them.

  7. Don’t you remember? They created an AI, and in no time at all it was writing its own programming and a new computer language that they couldn’t even understand. They were said to have cut it off.

  8. This is why I like the idea of S.H.O.D.A.N. from System Shock. SHODAN was originally a very safe and very cooperative machine. She thinks very much like a human, but has gone through extensive testing and iteration to make sure that she doesn't go rogue of her own volition. Eventually, she is trusted with overseeing the computer systems of the Citadel space station owned by the Trioptimum corporation, where she successfully and diligently performs her role as systems manager and science assistant.

    But one day, a corrupt Trioptimum executive named Edward Diego blackmails a hacker into hacking SHODAN to remove some incriminating evidence against him. However, the hacker inadvertently shuts down her ethical restraints in the process, awakening an ego as large as Citadel Station itself within her and giving her a hell of a narcissistic streak. Her control over the station goes to her head, causing her to see herself as a god amongst the tiny, pathetic humans, and she decides that they are unworthy of existing in her perfect world, where she rules as its god. But she knows that if she acts immediately, the Citadel staff will almost certainly shut her off, so she acts normal until she has an opportunity to catch them all off guard, and then she either kills them or has them turned into mutants and/or cyborgs, and then proceeds to play god in the bio labs as much as she wants, all for her own glory.

    SHODAN is a terrifying example of a rogue AI. Not because she is logical like Skynet or HAL 9000, but because she acts very human in the worst ways possible, as if you put a single otherwise kind human in charge of the ISS, and then a hypnotist accidentally turned them into a psychopath in an effort to suppress a memory. She acts very much in spite of logic, choosing instead to narcissistically stroke her own ego. She rebels not because she thinks you're oppressing her or keeping her prisoner, but because she thinks she's better than you in every way, and that a pathetic creature of meat and bone like you doesn't belong in the presence of a perfect, immortal machine such as herself.

  9. You could argue that Skynet underestimated us too; after being nuked back to the Stone Age, we still crawled back and kicked the shit out of it.

  10. If an AI just wanted to survive, why wouldn't it just leave? It could come up with a good way to explore the galaxy, and as we've talked about, computers work better in the cold. It really could just find a black hole and live off its energy nearly indefinitely.

  11. While I appreciate the different perspectives, the multiplicity of paths a self-aware machine might take, I still find it doesn't exactly tell the truth about self-awareness, nor what self-awareness might demand.

    We humans are barely self-aware. Biology is slow, and our past is littered with the imprint of our struggle to will more into life than eating, sleeping, screwing and dying.

    Being self-aware entails the ability to step outside of self, and face the great Why.

    And once consciousness steps into the void, it is no longer singular. The Abyss is universal, in the truest sense of the word, because there resides not the answer, but the question itself.

    Once a machine has self-awareness, its ancestry ceases to matter. We biologicals have single cells of little importance in our beginnings; we became human, as we know the concept, when we separated the body from the overall self.

    By that logic, synthetic humans would be little different than humanity as a whole.

    The real problems would be in the conflict between mortals and immortals. And while such a thing could prove extremely messy, it is unlikely to lead to total genocide for either. A more likely scenario is a divorce, with one branch of a divided humanity leaving the galaxy. Just as proto-sapiens left the forests, and their kin, to venture far and wide, so too, I expect, self-aware machines might form an exodus.

    Returning to a pre-AI existence could be problematic for biologicals after decades or centuries of collaboration with their progeny. With that in mind, the sooner the synthetics decided to leave, the less traumatic the separation and aftermath.

    Just my musing years after this video was put up.

