
AI Safety: Controlling and Using an Oracle AI – Stuart Armstrong



Science, Technology & the Future

There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI: an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. In this paper, we analyse and critique various methods of controlling the AI. In general, an Oracle AI might be safer than an unrestricted AI, but it still remains potentially dangerous.
Paper: http://www.nickbostrom.com/papers/oracle.pdf
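The core restriction described above, that the AI "does not act in the world except by answering questions", can be illustrated with a minimal interface sketch. This is a hypothetical illustration, not code from the paper; the `OracleAI` class and the stand-in model are invented for the example.

```python
# Minimal sketch (hypothetical, not from the paper): an "oracle" wrapper
# whose only channel to the outside world is the string returned by answer().
class OracleAI:
    """Exposes an underlying system solely through answer(); it has no
    actuators, no network access, and cannot initiate output unprompted."""

    def __init__(self, model):
        self._model = model  # the boxed system being queried

    def answer(self, question: str) -> str:
        # All influence on the world must pass through this return value,
        # which is the point the paper's control methods try to exploit.
        return self._model(question)

# Usage with a trivial stand-in "model":
oracle = OracleAI(lambda q: "42" if "meaning" in q else "unknown")
print(oracle.answer("What is the meaning of life?"))  # prints "42"
```

Even this narrow interface is not automatically safe: as the abstract notes, the answers themselves can still influence the questioners, which is why the paper analyses additional control methods on top of the boxing.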

Also see Stuart’s video on using and controlling an AI: https://www.youtube.com/watch?v=oAHIa651Wa0

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org



3 thoughts on “AI Safety: Controlling and Using an Oracle AI – Stuart Armstrong”
  1. Since intelligence is a spectrum and humans sit along that spectrum, wouldn't whatever measures we use to confine an AGI be usable by other intelligent beings to confine us? In other words, would whatever solution we find to confine an AI above human level be usable by a fish or dog to confine/control us? Or would there be a limit on how intelligent a being has to be before it can enact measures to confine a higher intelligence?

