
Eliezer Yudkowsky – The Challenge of Friendly AI (1/3)



Fredrik Bränström

At the 2007 Singularity Summit, Yudkowsky described how shaping a very powerful and general AI poses a different challenge, one of greater moral and ethical depth, than programming a special-purpose, domain-specific AI. The danger of trying to impose our own values, eternally unchanged, upon the future can be seen through a thought experiment: imagine the ancient Greeks trying to do the same to us. Human civilizations over centuries, and individual human beings over their own lifespans, change their moral values in a consistent direction.

Eliezer Yudkowsky has two papers forthcoming in the edited volume Global Catastrophic Risks (Oxford University Press, 2008): "Cognitive Biases Potentially Affecting Judgment of Global Risks" and "Artificial Intelligence as a Positive and Negative Factor in Global Risk."

From http://www.singinst.org/media/singularitysummit2007
Transcript: http://www.acceleratingfuture.com/people-blog/?p=211

8 thoughts on “Eliezer Yudkowsky – The Challenge of Friendly AI (1/3)”
  1. @MathematicsInfinitus With the aid of software and conceptual tools (like the kind Yudkowsky is developing), we are demonstrably capable of creating constructs that transcend our own capacities. After all, by a significant number of criteria a jet fighter is many orders of magnitude more powerful (and more dangerous) than any of the humans who designed it, or indeed any of the machines that built it. Claiming otherwise would be like suggesting that, given only a dull knife, we could never use it to construct a sharper one.

  2. @MathematicsInfinitus Yes, arguing by analogy is a logical fallacy. I was merely using it to clarify my stance on the development of AI, not relying on it to make my case. In my next post I shall refrain from analogy.

  3. @MathematicsInfinitus Though a working strong AI would undoubtedly be massively complex, it would be senseless to assume that a single engineer must hold the entire design in mind at all times while working on it. Countless machine and software designs are developed by many engineers working in concert, each comprehensively grasping only their own portion of the design, with the project leads seeing only the broad outlines. Such projects are common. Why should strong AI be different?

  4. @MathematicsInfinitus I would certainly concede that the final design is comprehensible to the group as a whole. I also believe that the intelligence of that group – if considered *as a single (though heterogeneous) entity* – is greater than the intelligence of its members. That is, the (theoretically) verifiable 'intelligence' (whatever that means) of the group is in certain respects greater than what you would get by arithmetically summing the individual 'intelligences' of its members.

  5. @MathematicsInfinitus As for the gap, yours is an arbitrary estimate; the AI will by definition be as intelligent as we make it. The first real one may be less intelligent than an average human, the first 'super' AI only slightly more (as smart as a human genius, perhaps). And don't forget we'll be able to reliably replicate this genius-level intelligence. The AIs you are thinking of may indeed not be fathomable by humans, but they will be by their makers – the AI descendants of our first AIs.

