Google has bet its future on artificial intelligence, and company executives believe the technology could have an impact comparable to the development of the internet.
Google promotes the benefits of artificial intelligence for tasks like early diagnosis of diseases and the reduction of spam in email. But it has also experienced some of the perils associated with A.I., including YouTube recommendations pushing users toward extremist videos and Google Photos image-recognition software categorizing black people as gorillas.
While most of Google’s new A.I. guidelines are unsurprising for a company that prides itself on altruistic goals, the company also included a noteworthy rule about how its technology could be shared outside the company.
“We will reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles,” the company said.
Like most of the top corporate A.I. labs, which are staffed largely by current and former academics, Google openly publishes much of its A.I. research. That means others can re-create and reuse many of its methods and ideas. But Google is joining other labs in saying it may hold back certain research if it believes others will misuse it.
DeepMind, a top A.I. lab owned by Google’s parent company, Alphabet, is considering whether it should refrain from publishing certain research because it may be dangerous. OpenAI, a lab founded by the Tesla chief executive Elon Musk and others, recently released a new charter indicating it could do much the same — even though it was founded on the principle that it would openly share all its research.
By Daisuke Wakabayashi and Cade Metz
https://www.nytimes.com/2018/06/07/technology/google-artificial-intelligence-weapons.html