
Thinking inside the box: using and controlling an Oracle AI




A talk by Stuart Armstrong of the Future of Humanity Institute, Oxford University. There is no strong reason to believe that human-level intelligence represents an upper limit on the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed goals or motivation system. Solving this issue in general has proven considerably harder than expected. This lecture looks at one particular approach, Oracle AI: an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges, and we analyse and critique various methods of control. In general, this limited form of AI may be safer than unrestricted AI, but it remains potentially dangerous.

Paper at: http://www.aleph.se/papers/oracleAI.pdf



2 thoughts on "Thinking inside the box: using and controlling an Oracle AI"
  1. This is something I'm considering devoting my life to. As weird as it sounds, we've never faced a more pressing problem than this. Artificial superintelligence is, in all likelihood, coming in the very near future. We need to do it right the first time, because there won't be a second chance.

