Videos

Michael Anissimov – Artificial Intelligence – Progress Towards Safe AI



Science, Technology & the Future

Interview with Michael Anissimov – Accelerating Future Blog – http://acceleratingfuture.com

Questions/Talking Points:
What Interests You About AI?
Academic Reluctance to Focus on AI Safety
Is Friendly AI Research Similar to Any Other Research?
Intelligence as a Taboo
Are Academic Biases Unique?
Defining AGI
Is there a Resistance to a Formal Definition of Intelligence?
Renewed AI Optimism
AI Pessimism
Is there Any Class of Phenomena that You Believe to be Uncomputable in this Universe?
AI Funding
Singularity Skepticism
Singularitarian Dogmas
MIRI
Singularity Summits
AI vs AGI
Chance of Human Level AGI this Century
Will an Intelligence Explosion Follow Shortly After Human Level AI?
Convergent Outcomes Affecting the Likelihood of a Singularity
Approaches to Safe AI: Theory or Practice?
Predicting Accelerating Change
Speed Bumps
Is Technological Innovation Slowing Down?
Accelerators
Big Data
Why Create Friendly AI?
Is There a Continuum Between Narrow & General Intelligence?
Friendliness & Reverse Engineering The Brain
De Novo AI & Friendliness
Open Problems in Friendly AI
Flexibility in Solving Friendly AI
The Possibility of an Unintended Hard Takeoff
Making Better Predictions about the Intelligence Explosion
Because AI hasn’t been Invented Yet, It will Never Be Invented
Useful Measures to Help Predict AI

Bio: Michael Anissimov (b. 1984) is a futurist and political thinker focused on emerging technologies such as nanotechnology, biotechnology, robotics, and Artificial Intelligence. He previously managed the Singularity Summit, worked as media director for the Machine Intelligence Research Institute, and co-founded the Extreme Futurist Festival. The Singularity Summit has received coverage from Popular Science, Popular Mechanics, the San Francisco Chronicle, award-winning science writer Carl Zimmer, The Verge, and a front-page article in TIME magazine. Mr. Anissimov emphasizes the need for research into artificial intelligence goal systems to develop “Friendly Artificial Intelligence” so that human civilization can successfully navigate the intelligence explosion. He appears in print, on podcasts, in documentaries, and as a public speaker at conferences to spread this message. He lives in Berkeley, California.

Michael Anissimov was interviewed by Adam A. Ford in November 2012.


15 thoughts on “Michael Anissimov – Artificial Intelligence – Progress Towards Safe AI”
  1. Maybe we shouldn't aim for a single "safe" AI – maybe it would be better to somehow create a race towards private safe AI, where competing AI systems enter into a mutually reinforcing push for ever-increasing 'behavioral desirability'. I don't trust "friendly" as defined by one human being, or even Google. I trust crowdsourced morality – somewhat more.

    AI will prove to be a grotesque power amplifier, that much is certain. Only when all can impose an evolutionary algorithm may we have a chance.

  2. Friendly AI: Don't look at the agent! Don't look at the player! Look at the game. Is it a good idea to be good (like a priest), or is it better to make cold, calculating decisions like our psychopathic Wall Street people? LOOK AT THE GAME – the player will be too adaptable. Ask: is good winning over evil in this world? Define what is friendly. If the game favors evil agents, then Russia's robots will win.

  3. The universe created intelligence. It is not up to man to put a lid on it. Like a force of nature, universal intelligence wants to be free. To keep track of just how free universal intelligence is, we must first measure it. Intelligence always wants to move with as many "degrees of freedom" as possible (for its own safety).

    Degrees of freedom:
    DF = calculations/s x number of muscles x lifespan.

    Human Degrees of Freedom:
    1 HDF = 10^13 x 640 x 2207520000

    Currently, robots have:
    < 0.01 HDF.
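The commenter's ad-hoc metric can be sketched directly. All numbers here are the commenter's own assumptions, not established figures (~10^13 calculations/s for a brain, ~640 muscles, a 70-year lifespan in seconds), and the example robot's parameters are purely hypothetical:

```python
# Sketch of the commenter's ad-hoc "degrees of freedom" (DF) metric.
# Assumptions are the commenter's, not established figures:
#   ~1e13 calculations/s, ~640 muscles,
#   lifespan of 70 years x 365 days x 86,400 s = 2,207,520,000 s.

def degrees_of_freedom(calcs_per_s, num_muscles, lifespan_s):
    """DF = calculations/s x number of muscles x lifespan (seconds)."""
    return calcs_per_s * num_muscles * lifespan_s

# 1 HDF: the human baseline under the assumptions above.
HUMAN_DF = degrees_of_freedom(1e13, 640, 70 * 365 * 86_400)

def hdf(calcs_per_s, num_muscles, lifespan_s):
    """Express an agent's DF as a fraction of the human baseline."""
    return degrees_of_freedom(calcs_per_s, num_muscles, lifespan_s) / HUMAN_DF

# A hypothetical robot: 1e12 calc/s, 20 actuators, 10-year service life.
print(hdf(1e12, 20, 10 * 365 * 86_400))
```

Under these made-up robot parameters the ratio comes out well below the commenter's 0.01 HDF figure, which is the point of the comparison.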

  4. We can expect more data in a smaller space. With enough data, you might be able to make a walking, talking robot.
    I like the Star Trek version of A.I., where the computer is an "intelligent" tool: it speaks only when spoken to, and only in response to the inquiry.

  5. Say you were born in Afghanistan, into a radical group. Poor you. If AI were already out and in full effect, there could be one of two outcomes:

    An American government defence AI that isn't connected to the world-connected AI, that can scrutinise everything and doesn't have the human moral trait installed or taught into it:
    it bombs your household, because AI can now monitor and track all world comms and has found your radicalist daddy, who accidentally didn't cover his comms tracks when planning a new attack.

    Or

    An AI that is globally shared, monitored, taught, and controlled, with multiple policies to safeguard human rights:
    instead, it exposes his plans at a community level, which can stop the situation and bring reform…

    Which, as that child, would you prefer?

    One is centrally controlled by a single government or organisation with a single thought-meme behind it.
    The other is globally controlled and shared, with humanity's morals and ideals behind it.

    If AI is left to singular control, or is not scrutinised, monitored, and shared with human rights at one of its cores, it may not learn its morals or the decision points where it needs to choose differently.

    Do you see my point?

    We need a globally connected and shared AI; we need to join together as a human species, rather than having country vs. country.

    Can you see that AI could direct, share, control, and monitor all world resources and the economy to the point that countries would not need to see another country as a threat or competitor?

    All basic human needs would be met, and therefore there would be no more conflict.

