How to Keep Strong AI Safe and Beneficial



Future of Life Institute

Shane Legg, Nate Soares, Richard Mallah, Dario Amodei, Viktoriya Krakovna, Bas Steunebrink, and Stuart Russell explore technical research we can do now to maximize the chances of safe and beneficial AI.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

For more information on the BAI ‘17 Conference:

https://futureoflife.org/ai-principles/

https://futureoflife.org/bai-2017/

https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/

Source


9 thoughts on “How to Keep Strong AI Safe and Beneficial”
  1. You could also worry about a human hacking self-driving cars, drones, or robots and taking control via a simple joystick or keyboard. It's almost like the AI itself is not the problem, but rather the fact that you have a motor connected to a system capable of causing big damage, connected to the internet. If self-driving cars, drones, robotics, and the internet did not exist, would AI still be a threat? Maybe we don't need robots and self-driving cars. Super-intelligent AI could tell us how best to do things, but we ourselves would carry out the work using traditional means.

  2. A very interesting series of presentations from a variety of perspectives at the Beneficial AI conference. But I didn't see anyone address environmental issues specifically – so be sure to give me a call next time!

  3. Let's Go Team!
    Let's create Abundance 4 Everyone!

    Check out the videos on my channel and contact me if you want to support the optimistic vision of Abundance 4 Everyone.

    Please share what you think and feel.

    Let's take massive action together and help accelerate a world of abundance 4 everyone!

    Success is our only option guys and gals!

    We can do this!
    We will do this by working together and creating the future we want to see.

    Abundance 4 Everyone!

    Love, Love, Love.
    Compassion.

    Our Future.

    Looking forward to connecting with y'all.

  4. The very idea of these people trying to constrain "strong AI" is laughable, akin to expecting my grandma to beat Roger Federer in five straight Grand Slams.

  5. Mr. Amodei is right: the objective function, or paperclip, argument is something humanity has demonstrably been struggling with for thousands of years, already at the cost of many millions of lives. Nationalism, territoriality, religion, profitability, etc. Data and connectivity are a good way to balance a system's objective function, especially if end-users' consciousness and subjectivity are eventually how the AGI would have to evaluate the success of its approach. So long as the end-user base is fairly distributed, perhaps AI actually has a MUCH better chance at balancing the paperclip argument than any single human or group of humans.

  6. Viktoriya Krakovna talks like Guthrie Govan, or Noam Chomsky 😀 She must be talented as well 🙂 I might even say she is the daughter of Chomsky 😀

