First proposed in the 1950s, neural networks are meant to mimic the web of neurons in the brain. But that is a rough analogy. These algorithms are really a series of mathematical operations, and each operation represents a neuron. Google’s new research aims to show — in a highly visual way — how these mathematical operations perform discrete tasks, like recognizing objects in photos.
Inside a neural network, each neuron works to identify a particular characteristic that might show up in a photo, like a line that curves from right to left at a certain angle or several lines that merge to form a larger shape. Google wants to provide tools that show what each neuron is trying to identify, which ones are successful and how their efforts combine to determine what is actually in the photo — perhaps a dog or a tuxedo or a bird.
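The idea of a neuron as a mathematical operation that fires on a particular visual characteristic can be sketched in a few lines. This is an illustrative toy, not Google’s tooling: the `neuron` function, the filter values, and the sample patches below are all invented for the example.

```python
# Minimal sketch (illustrative, not Google's code): one "neuron" is a
# weighted sum of its inputs plus a nonlinearity. With the right weights,
# it acts as a crude detector for a dark-to-light vertical edge.

def relu(x):
    """Standard rectified-linear nonlinearity: pass positives, zero negatives."""
    return max(0.0, x)

def neuron(patch, weights, bias=0.0):
    """Dot product of a flattened image patch with learned weights, then ReLU."""
    total = sum(p * w for p, w in zip(patch, weights))
    return relu(total + bias)

# A 3x3 filter (flattened row by row) that responds to a vertical edge.
edge_weights = [-1, 0, 1,
                -1, 0, 1,
                -1, 0, 1]

# A patch containing such an edge activates the neuron strongly...
edge_patch = [0, 0, 1,
              0, 0, 1,
              0, 0, 1]

# ...while a uniform patch produces no activation at all.
flat_patch = [1, 1, 1,
              1, 1, 1,
              1, 1, 1]

print(neuron(edge_patch, edge_weights))  # 3.0
print(neuron(flat_patch, edge_weights))  # 0.0
```

In a trained network, thousands of such neurons — with weights learned from data rather than written by hand — feed their activations into later layers, which combine edges into shapes and shapes into objects.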
The kind of technology Google is discussing could also help identify why a neural network is prone to mistakes and, in some cases, explain how it learned this behavior, Mr. Olah said. Other researchers, including Mr. Clune, believe they can also help minimize the threat of “adversarial examples” — where someone can potentially fool neural networks by, say, doctoring an image.
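The “doctored image” attack can be illustrated with a deliberately simplified model. The sketch below uses a toy linear classifier rather than a deep network, and every weight and input value is invented for the example; real adversarial attacks work on the same principle — tiny input changes chosen to push the model’s score across a decision boundary — but against far larger models.

```python
# Minimal sketch of an "adversarial example" against a toy linear
# classifier. Illustrative only: values are invented, and real attacks
# target deep neural networks, not a hand-built linear model.

def classify(x, w, b=0.0):
    """Return class 1 if the weighted sum is positive, else class 0."""
    score = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 if score > 0 else 0

w = [0.5, -0.3, 0.8]   # toy "learned" weights
x = [1.0, 2.0, 0.2]    # score = 0.5 - 0.6 + 0.16 = 0.06  -> class 1

# Fast-gradient-sign-style perturbation: nudge each input a tiny amount
# against the sign of its weight, which is the direction that most
# efficiently lowers the score.
eps = 0.1
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(classify(x, w))      # 1
print(classify(x_adv, w))  # 0 -- a barely visible change flips the label
```

The unsettling part, and the reason researchers like Mr. Clune care about interpretability tools, is that the perturbation is small enough to be imperceptible while still flipping the model’s answer.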
Researchers acknowledge that this work is still in its infancy. Jason Yosinski, who also works in Uber’s A.I. lab, which grew out of the company’s acquisition of a start-up called Geometric Intelligence, called Google’s technology idea “state of the art.” But he warned it may never be entirely easy to understand the computer mind.
“To a certain extent, as these networks get more complicated, it is going to be fundamentally difficult to understand why they make decisions,” he said. “It is kind of like trying to understand why humans make decisions.”
By CADE METZ
https://www.nytimes.com/2018/03/06/technology/google-artificial-intelligence.html