
Kevin’s Week in Tech: Are Google’s A.I.-Powered Phone Calls Cool, Creepy, or Both?

Even some A.I. experts were taken aback by the Duplex demo, which showed off a kind of technology that scammers could one day use to make mass robocalls, conduct social engineering hacks and impersonate people on the phone. Google has said that the technology won’t be used for telemarketing. The company says it will be used to call businesses using publicly available numbers to gather information like a store’s hours. But it doesn’t take much imagination to see how this kind of technology could be used for all kinds of questionable or dangerous tasks. And there’s an obvious ethical question of whether a robot caller should be required to identify itself as a nonhuman.

Erik Brynjolfsson, a professor at M.I.T. who has written extensively about artificial intelligence, told The Washington Post that while Google’s Duplex demo was technologically “amazing,” it still raised ethical questions.

“I don’t think the main goal of A.I. should be to mimic humans,” he said.

Defenders of Google’s Duplex experiment have pointed out that an automated phone call service could be helpful for people with disabilities, or for freeing people from annoying customer service slogs. (“O.K. Google, cancel my cable subscription.”) But Google is now in a precarious position. It wants to keep pushing its A.I. development forward, but it needs to do so in ways that won’t scare people at a time when distrust of the tech industry is growing by the day.

Bloomberg reports that the backlash to Duplex caught Google by surprise. That, to me, is the most disturbing piece of this week’s news — that people inside the company thought that a demo of advanced A.I. fooling an unwitting human receptionist would be greeted with universal praise.

I keep thinking of a Twitter thread posted last November by Kumail Nanjiani, one of the stars of HBO’s “Silicon Valley.” He noted that while doing research, the show’s crew members often visited the offices of tech companies and were struck by how little thought engineers gave to the ethical implications of their products.

“Often we’ll see tech that is scary,” Mr. Nanjiani wrote. “And we’ll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech. They don’t even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions.”

Maybe Google should have invited a few more actors to its prelaunch product meetings.

Some other tech stories I found interesting this week:

■ My colleague Sheera Frenkel has a story about the mountains of Facebook data scraped by academic researchers, much of which is still changing hands in the open and potentially being misused. Cambridge Analytica is increasingly looking like the tip of the iceberg.

■ Speaking of Facebook, my colleague Cade Metz reports that the social network is opening up new A.I. labs in Seattle and Pittsburgh, mostly by raiding nearby research universities for talent. A professor at the University of Washington sums up the problem nicely: “If we lose all our faculty, it will be hard to keep preparing the next generation of researchers.”

By KEVIN ROOSE

https://www.nytimes.com/2018/05/11/technology/kevins-week-in-tech-are-googles-ai-powered-phone-calls-cool-creepy-or-both.html
