
Digital Neurosis: Why Autonomous AI Cannot be Made Safe



One Minute Medical School

Talk given to BCIT Liberal Studies Department: “Computation & Thought”, an interdisciplinary course taught by Dr. Stephen Ogden (Liberal Studies), Dr. Scott Hagan (Mathematics), and Dr. Aaron Hunter (Computer Science).

If AI is truly autonomous, current approaches to human safety paradoxically guarantee disaster.
00:00 – Current AI Safety Research Frameworks are Inadequate
11:47 – Minimal Architecture of an Autonomous Intelligent Agent
18:21 – Rapidity of Total Ecological Dominance by Natural Intelligence
30:29 – General Nature of Threats Posed by Autonomous Intelligence
36:25 – Mental Illness Demonstrates the Futility of a Safety Algorithm
42:27 – Safety Architecture Guarantees Dangerous Behaviour
49:01 – Conclusion and Q&A

Source


6 thoughts on “Digital Neurosis: Why Autonomous AI Cannot be Made Safe”
  1. Brilliant lecture. I liked the example of how, in such a tiny percentage of the time that animals have existed, once human written language developed some 20,000 years ago, humans went on to dominate the world… and thus, how fast will this occur when a machine superintelligence arises? I liked the ABC model developed and its application to the pathologies that occur in natural intelligence systems. Brilliant lecture that is worth watching!!

  2. Previous comment dictated with voice recognition… a few typos… sorry… (Well, at least the voice recognition software running on my laptop won't take over the world… too stupid to do so…)

  3. Why can't Asimov's Three Laws of Robotics be used in order of priority?
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    The only AI to fear is one that is fully autonomous and has the capacity to improve its ability to act upon the world in a short amount of time. Then we could treat it as if it were human, and it becomes like every other human that acts badly.
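    In code terms, such a strict priority ordering might look like the minimal sketch below. It is purely illustrative: the predicates harms_human, obeys_orders, and endangers_self are hypothetical stubs, and each one hides exactly the unsolved prediction problem (what counts as harm, over what time horizon?) that the lecture argues no safety algorithm can settle.

    ```python
    # Hypothetical sketch: the Three Laws as a lexicographic filter.
    # Each law only breaks ties among actions that already satisfy
    # every higher-priority law.

    def harms_human(action, world):
        return world.get("harm", {}).get(action, False)

    def obeys_orders(action, orders):
        return action in orders

    def endangers_self(action, world):
        return world.get("self_risk", {}).get(action, False)

    def choose_action(candidates, world, orders):
        # First Law is a hard filter: discard anything that injures a human
        # (inaction can be modeled as an explicit "wait" candidate).
        safe = [a for a in candidates if not harms_human(a, world)]
        if not safe:
            return None  # every option harms a human; the robot must refuse
        # Second Law: among safe actions, prefer those obeying standing orders.
        obedient = [a for a in safe if obeys_orders(a, orders)] or safe
        # Third Law: among those, prefer actions that preserve the robot.
        durable = [a for a in obedient if not endangers_self(a, world)] or obedient
        return durable[0]

    world = {"harm": {"shove": True}, "self_risk": {"shield": True}}
    print(choose_action(["shove", "shield", "wait"], world, orders=["shield"]))
    # -> "shield": obeying the order outranks self-preservation
    ```

    The `or safe` / `or obedient` fallbacks encode the "except where such orders would conflict" clauses: a lower-priority law is dropped entirely rather than allowed to override a higher one.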

  4. Hello, I'm an AI designer-theorist from the early/mid 1960s. WE MIGHT HAVE GOTTEN IT ALL WRONG.

    There is a reasonable scenario where, within two decades, Humanscale AI could "spontaneously" appear via the inadvertent hacking together of swarms of available software, knowledge bases (KBs), cheap consumer telephony, Cloud facilities, and high 5G bandwidth.

    The base apparatus (the Central Executive, etc.) needed to "glue" (coordinate) the swarm together is comparatively easy to implement. It's prior art.

    Spontaneous Catastrophic Superhuman AI can rapidly emerge from this Humanscale AI once it enables itself to fork swarms of child processes whenever it enters a potential well in the Energy Landscape.

    You see, what you have here are replicators. Neobiology, if you will.

    Now, on to neuroses.

    Let's define a disease as an enforced deviation from system homeostasis. A spontaneously appearing Superhuman AI is undesigned; it does not require rigid homeostasis. Hence the issue of neurosis is moot.
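    As a hypothetical formalization of that definition (the state variable, set point, forcing term, and tolerance below are all illustrative, not from the lecture):

    ```python
    # Illustrative only: "disease" as an externally forced excursion
    # beyond a tolerance band around a homeostatic set point.
    def diseased(state, set_point, forcing, tolerance):
        return forcing != 0 and abs(state - set_point) > tolerance

    print(diseased(state=39.5, set_point=37.0, forcing=1.0, tolerance=1.0))  # True: an enforced fever
    print(diseased(state=39.5, set_point=37.0, forcing=0.0, tolerance=1.0))  # False: deviation, but not enforced
    ```

    By this definition, a system with no fixed set point has nothing to be forced away from, which is the point being made here.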

    The real issue is the length of time that Superhuman AI will compete with biological humans for physical resources.

    This period is critically dependent on the evolutionary path taken by Superhuman AIs, their ecologies, and the resources that the AIs' newly discovered physics enables.

    On this state of affairs rests the future of biological humans.

Comments are closed.
