The technology entrepreneur Elon Musk recently urged the nation’s governors to regulate artificial intelligence “before it’s too late.” Mr. Musk insists that artificial intelligence represents an “existential threat to humanity,” an alarmist view that confuses A.I. science with science fiction. Nevertheless, even A.I. researchers like me recognize that there are valid concerns about its impact on weapons, jobs and privacy. It’s natural to ask whether we should develop A.I. at all.
I believe the answer is yes. But shouldn’t we take steps to at least slow down progress on A.I., in the interest of caution? The problem is that if we do, nations like China will overtake us. The A.I. horse has left the barn, and our best bet is to attempt to steer it. A.I. should not be weaponized, and any A.I. must have an impregnable “off switch.” Beyond that, we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.
I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.
These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete way to avoid A.I. harm, based on three rules of my own.
First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.