Videos

Eliezer Yudkowsky – The Challenge of Friendly AI (3/3)



Fredrik Bränström

At the 2007 Singularity Summit, Yudkowsky described how shaping a very powerful and general AI poses a different challenge, of greater moral and ethical depth, than programming a special-purpose, domain-specific AI. The danger of trying to impose our own values, eternally unchanged, upon the future can be seen through the thought experiment of imagining the ancient Greeks trying to do the same. Human civilizations over centuries, and individual human beings over their own lifespans, directionally change their moral values.

Eliezer Yudkowsky has two papers forthcoming in the edited volume Global Catastrophic Risks (Oxford, 2007): "Cognitive Biases Potentially Affecting Judgment of Global Risks" and "Artificial Intelligence as a Positive and Negative Factor in Global Risk."

From http://www.singinst.org/media/singularitysummit2007
Transcript: http://www.acceleratingfuture.com/people-blog/?p=211

3 thoughts on “Eliezer Yudkowsky – The Challenge of Friendly AI (3/3)”
  1. I imagine it's actually a bit more complex than that. We haven't even conceptualized the methodology behind how an AI thinks, rationalizes, or reasons at all, much less how it feels. Just saying 'we can make it feel something' means we are simply assigning a positive or negative value to sensory input – static values at that. How is it to know whether pain is bad or good, or which pleasure is bad or good? Do we then try to determine extents, so that it shuns torture yet encourages exercise, or encourages moderate sensory pleasure but not hedonism?

    It's complex enough for a human – too complex for a simple assignment of values. We know how HUMANS do it, to an extent, but I'm not entirely sure that just providing definitions for certain sensations gives an AI enough substance to deliberate at a satisfactory level, so as to mirror or solve human morality, which, as Yudkowsky says and I happen to agree, is a fluid, constant progression rather than a set of static values.

    Essentially, the answer is higher and higher reasoning/thought: we need an AI that thinks about the question in and of itself, and about the existential boundaries of that problem rather than just the problem in isolation – and even then that's probably not enough.
