CES

New supercomputer may speed arrival of fully autonomous cars

Back in the dark ages of autonomous-car development – about two years ago – it took months and months for engineers and software developers to program vehicles for scenarios they'd likely encounter on the road. Through advances in deep learning and artificial intelligence, vehicles can now accomplish that training in a matter of hours, according to Jen-Hsun Huang, CEO of Nvidia, the visual computing company that has become a leading supplier to automakers working on self-driving cars.

Compressing the development timeline will hasten the arrival of fully autonomous vehicles, and to speed that along, Nvidia unveiled a new supercomputer Monday night designed to handle the rapid real-time processing autonomous cars require across all operating conditions.

At CES in Las Vegas, Huang said the new Drive PX 2 supercomputing platform can process 24 trillion deep-learning operations per second, about ten times more than the company's first-generation platform in use by roughly 50 companies in the automotive world today. The new system tracks roughly 36,000 points in 3D space eight times per second, enabling cars to combine data streams from LiDAR, radar, and ultrasonic sensors, along with cameras, to discern a clearer picture of the 360-degree traffic environment around them. Perhaps as importantly for practical purposes, this lunchbox-sized computer fits snugly in the back of a car.

"It's now possible for us to train these incredibly complex networks on very large data sets on objects of all kinds," Huang said. "Deep learning is a huge breakthrough, as big as the internet or mobile computing. And all of the sudden, the race is on."

Volvo, one of the global automotive leaders in autonomous development, with a pilot program underway on Swedish roads, will be the first customer to use the Nvidia supercomputers in its cars, Huang said. He emphasized that the computers were already outperforming humans in recognizing images and that humans, responsible for roughly 94 percent of all car crashes, would continue to be the least-reliable part of the driving experience. Nvidia, of course, isn't the only company pursuing such technology in the automotive realm. Toyota, for one, announced a $1 billion investment in AI research last year and told CES attendees it had rounded out its research teams.

For both humans and autonomous automobiles, some of the most difficult conditions are found in cities, where an onslaught of unpredictable situations arises from the movements of other cars, bicyclists, pedestrians and more. "That perception problem is, at the core, a very difficult problem," Huang said. But the deep-learning advances pioneered at the University of Toronto in recent years, he said, turned into a "sha-zaam moment" for autonomous development.

With cars now capable of painting such a detailed 360-degree view of the world around them, "the rear-view mirror is history," Huang said.

