These algorithms can learn tasks on their own by analyzing vast amounts of data. “It used to be that a real smart Ph.D. sat in a cube for six months, and they would hand-code a detector” that spotted objects on the road, Mr. Urmson said during a recent interview at Aurora’s offices. “Now, you gather the right kind of data and feed it to an algorithm, and a day later, you have something that works as well as that six months of work from the Ph.D.”
The Google self-driving car project first used the technique to detect pedestrians. Since then, it has applied the same method to many other parts of the car, including systems that predict what will happen on the road and plan a route forward. Now, the industry as a whole is moving in the same direction.
But this shift raises questions. It is still unclear how regulators and lawyers — not to mention the general public — will view these methods. Because neural networks learn from such large amounts of data, relying on hours or even days of calculations, they operate in ways that their human designers cannot necessarily anticipate or understand. There is no means of determining exactly why a machine reaches a particular decision.
“This is a big transition,” said Noah Goodall, who explores regulatory and legal issues surrounding autonomous cars at the Virginia Transportation Research Council, an arm of the state’s Department of Transportation. “If you start using neural networks to control how a car moves and then it crashes, how do you explain why it crashed and why it won’t happen again?”
The seeds for this work were planted in 2012. Working with two other researchers at the University of Toronto, a graduate student named Alex Krizhevsky built a neural network that could recognize photos of everyday objects like flowers, dogs and cars. By analyzing thousands of flower photos, it could learn to recognize a flower in a matter of days. And it performed better than any system coded by hand.
Soon, Mr. Krizhevsky and his collaborators moved to Google, and over the next few years, Google and its internet rivals broke new ground in artificial intelligence, using these concepts to identify objects in photos, recognize commands spoken into smartphones, translate between languages and respond to internet search queries.
Over the holiday break at the end of 2013, another Google researcher, Anelia Angelova, asked for Mr. Krizhevsky’s help on the Google car project. Neither of them officially worked on the project. They were part of a separate A.I. lab called Google Brain. But they saw an opportunity.
Rather than trying to define for a computer what a pedestrian looked like, they created an algorithm that could allow a computer to learn what a pedestrian looked like. By analyzing thousands of street photos, their system could begin to identify the visual patterns that define a pedestrian, like the curve of a head or the bend of a leg. The method was so effective that Google began applying the technique to other parts of the project, including prediction and planning.
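To make the idea concrete, here is a rough sketch of that recipe in PyTorch: a small, hypothetical network (called PedestrianNet here) is trained on a made-up folder of labeled street-image patches. The network, the folder name and the settings are illustrative assumptions, not Google’s actual system.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A small convolutional network that learns to score image patches as
# "pedestrian" or "not pedestrian" from labeled examples, rather than
# relying on hand-coded rules about heads, legs and so on.
class PedestrianNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# "street_patches/" is a hypothetical folder of crops sorted into
# "pedestrian/" and "background/" subdirectories.
data = datasets.ImageFolder(
    "street_patches/",
    transform=transforms.Compose(
        [transforms.Resize((64, 64)), transforms.ToTensor()]
    ),
)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = PedestrianNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The "feed it to an algorithm" step: repeatedly show the network labeled
# examples and nudge its weights toward the right answers.
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```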
“It was a big turning point,” said Dmitri Dolgov, who was part of Google’s original self-driving car team and is now chief technology officer at Waymo, the new company that oversees the project. “2013 was pretty magical.”
Mr. Urmson described this shift in much the same way. He believes the continued progress of these and other machine learning methods will be essential to building cars that can match and even exceed the behavior of human drivers.
Mirroring the work at Waymo, Aurora is building algorithms that can recognize objects on the road and anticipate and react to what other vehicles and pedestrians will do next. As Mr. Urmson explained, the software can learn what happens when a driver turns the vehicle in a particular direction at a particular speed on a particular type of road.
Learning from human drivers in this way is an evolution of an old idea. In the early 1990s, researchers at Carnegie Mellon University built a car that learned relatively simple behavior. Last year, a team of researchers at Nvidia, the computer chip maker, published a paper showing how modern hardware can extend the idea to more complex behavior. But many researchers question whether carmakers can completely understand why neural networks make particular decisions and rule out unexpected behavior.
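That style of learning from human drivers is often called behavior cloning: a network is trained to map what the car’s camera sees directly to the steering command a human chose in the same situation. The sketch below is an illustrative toy version of that idea, with random tensors standing in for a real driving log; it is not the published Nvidia model.

```python
import torch
import torch.nn as nn

# Behavior cloning in miniature: a network maps a camera frame directly to a
# steering command and is trained to imitate what a human driver actually did.
# These tensors stand in for a (hypothetical) log of recorded drives.
frames = torch.randn(256, 3, 66, 200)   # camera images from human driving
steering = torch.randn(256, 1)          # the steering angle the human chose

policy = nn.Sequential(
    nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(36 * 14 * 47, 100),       # flattened size for a 66x200 input
    nn.ReLU(),
    nn.Linear(100, 1),                  # predicted steering angle
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
for step in range(200):
    pred = policy(frames)
    loss = nn.functional.mse_loss(pred, steering)  # match the human's action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```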
“For cars or flying aircraft, there is a lot of concern over neural networks doing crazy things,” said Mykel Kochenderfer, a robotics professor who oversees the Intelligent Systems Laboratory at Stanford University.
Some researchers, for instance, have shown that neural networks trained to identify objects can be fooled into seeing things that aren’t there — though many, including Mr. Kochenderfer, are working to develop ways of identifying and preventing unexpected behavior.
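The best-known form of that fooling is the adversarial example: an image altered by tiny, carefully chosen pixel changes that a person would never notice but that flip the network’s answer. Below is a minimal sketch of one standard recipe, the fast gradient sign method, written against any PyTorch image classifier; it is an illustration of the general attack, not any company’s test.

```python
import torch

# Nudge every pixel slightly in whichever direction most increases the
# classifier's error, producing an image that looks unchanged to a person
# but is misread by the network. `model` is any image classifier (for
# instance, the hypothetical PedestrianNet sketched earlier); `image` is a
# batch of images and `true_label` the correct class indices.
def fgsm_attack(model, image, true_label, epsilon=0.03):
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the gradient's sign, then keep
    # the result a valid image.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```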
Like Waymo, Toyota and others, Aurora says that its approach is more controlled than it might seem. The company layers cars with backup systems, so that if one system fails, another can offer a safety net. And rather than driving the car using a single neural network that learns all behavior from one vast pool of data — the method demonstrated by Nvidia — they break the task into smaller pieces.
One system detects traffic lights, for example. Another predicts what will happen next on the road in a particular kind of situation. A third chooses a response. And so on. The company can train and test and retrain each piece.
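In code, that modular layout might look something like the illustrative sketch below: separate components with narrow interfaces for detection, prediction and planning, each of which can be trained, tested and replaced on its own. The class and field names are hypothetical, not Aurora’s.

```python
from dataclasses import dataclass

@dataclass
class TrafficLight:
    color: str                     # "red", "yellow" or "green"

@dataclass
class Prediction:
    other_cars_will_stop: bool

class TrafficLightDetector:
    def detect(self, camera_frame) -> TrafficLight:
        # In practice a learned model; here a stand-in so the pipeline runs.
        return TrafficLight(color="red")

class BehaviorPredictor:
    def predict(self, light: TrafficLight) -> Prediction:
        return Prediction(other_cars_will_stop=(light.color == "red"))

class Planner:
    def choose_action(self, light: TrafficLight, prediction: Prediction) -> str:
        return "brake" if light.color == "red" else "proceed"

# Because each stage has a narrow, explicit interface, each can be unit-tested
# and retrained in isolation, rather than debugging one opaque end-to-end model.
def drive_one_step(frame, detector, predictor, planner):
    light = detector.detect(frame)
    prediction = predictor.predict(light)
    return planner.choose_action(light, prediction)
```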
“How do you get confidence that something works?” asked Drew Bagnell, a machine learning specialist who helped found Aurora after leaving the self-driving car program at Uber. “You test it.”
Mr. Goodall, the Virginia Department of Transportation researcher, said car designers must reassure both regulators and the public that these methods are reliable.
“The onus is on them,” he said.
An earlier version of this article misspelled the surname of a researcher at the Virginia Department of Transportation. He is Noah Goodall, not Goodhall.
By CADE METZ