NVIDIA revealed its 2nd generation DRIVE PX at CES with so much processing power, the box needs to be water-cooled. NVIDIA calls PX 2 the first artificially intelligent supercomputer for self-driving cars.
In case you haven’t gotten the message: NVIDIA is now a systems company. At its CES press conference, the company’s CEO, Jen-Hsun Huang, revealed its latest development in advanced automotive processing, which includes a complete sensor processing system and a Deep Neural Net (DNN) cloud. NVIDIA described the system and cloud architecture before, at last year’s CES press conference and at the 2015 GPU Technology Conference: a powerful processing unit in the car (the NVIDIA DRIVE PX) that processes the car’s sensor data and makes control decisions in real time, backed by a cloud DNN that provides non-real-time image analysis. This year, the second generation of the NVIDIA DRIVE PX, the PX 2, was revealed with so much processing power that NVIDIA needed to water-cool the box. Huang called it the first artificially intelligent supercomputer for self-driving cars.
As part of the announcement, NVIDIA also revealed that Ford Motor Company is a partner and that Volvo has chosen the NVIDIA platform for its safe-driving program. They join Audi, BMW, and Daimler in developing on NVIDIA’s DRIVE PX platform.
Jen-Hsun Huang holding the NVIDIA DRIVE PX 2. (Photo credit: Kevin Krewell, Tirias Research)
But, true to NVIDIA’s heritage, the company couldn’t resist dropping an impressive list of specifications for the NVIDIA DRIVE PX 2: 12 CPU cores, two of the company’s next-generation Pascal GPUs capable of 4 TFLOPS each, and a total of 24 “Deep Learning” TOPS for the whole system. NVIDIA’s deep learning algorithms can use specialized mixed-precision instructions as low as 8-bit integer to deliver up to 24 trillion operations per second; the 8-bit integer operations are new in the Pascal GPU. All the processors in the PX 2, both the next-generation Tegra SoCs and the Pascal GPUs, are manufactured in TSMC’s 16nm FinFET process, making them the first chips NVIDIA has announced using FinFET transistors.
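To see why 8-bit integer math triples the headline throughput number, consider how reduced-precision inference works in principle. The sketch below is purely illustrative, not NVIDIA’s implementation or API: it quantizes floating-point values to signed 8-bit integers, does the multiply-accumulate entirely in integer arithmetic (the operation the new Pascal instructions accelerate), and rescales the result. The function names and scaling scheme are hypothetical.

```python
# Illustrative sketch of 8-bit integer quantization, the kind of
# reduced-precision arithmetic the Pascal GPU's new INT8 instructions
# accelerate. Names and the scaling scheme are hypothetical.

def quantize(values, num_bits=8):
    """Map a list of floats onto signed integers of the given width."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax or 1.0
    return [round(v / scale) for v in values], scale

def int8_dot(a, b):
    """Dot product done entirely in integer arithmetic, then rescaled."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc = sum(x * y for x, y in zip(qa, qb))  # integer multiply-accumulate
    return acc * sa * sb                      # dequantize the result

weights = [0.5, -1.2, 0.8]
inputs  = [1.0,  0.3, -0.6]
print(int8_dot(weights, inputs))  # close to the FP32 dot product, -0.34
```

The integer result lands within about 1% of the full-precision answer here, which is the trade-off that makes INT8 attractive for inference: narrower operands mean more operations per cycle with only a small loss of accuracy.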
The DRIVE PX 2 requires about 250W of power, as it contains two Tegra processors and two full GPUs. NVIDIA used water cooling to reduce the size of the PX 2 box and allow tighter thermal management. Many electric cars already have water-cooling systems for their batteries, and the PX 2 can be tied into that system. For cars without a water-cooling system, NVIDIA can also provide a heat exchanger attachment with fans. The first DRIVE PX 2 units will ship in 2Q16 to early customers, with more general availability in Q4.
While Huang did not give many specific details of the chips inside the PX 2, preferring to focus on the overall system development, the new Tegra processors have interesting features. Each Tegra SoC has a mix of two of NVIDIA’s own 64-bit ARM cores (Denver) and four ARM Cortex-A57 cores, a first for the company. The Tegra SoC will also get NVIDIA’s leading-edge Pascal GPU at about the same time as the more expensive discrete GPUs, which is also a new development for the company (the Tegra integrated GPUs usually lag the discrete products by about a year).
The water cooled NVIDIA DRIVE PX 2 unit. (Photo credit: Kevin Krewell, Tirias Research)
The amount of software and hardware development NVIDIA is putting into advanced driver assistance (ADAS) and autonomous driving shows a very big commitment to the market. Huang is convinced that GPUs are the key to providing sufficient computational horsepower for intelligent, self-driving cars. At CES, there is no shortage of fellow believers in autonomous cars, and NVIDIA is not alone: it will compete with Freescale, Mobileye, Qualcomm, and TI for this market. But NVIDIA is trying to stay ahead in the self-driving revolution. Bringing self-driving to cars has the power to change urban landscapes, replacing parking lots with parks, as fewer cars will be required when private ownership gives way to shared ownership and “transportation as a service.” That shift could eventually lead to a sharp drop in automotive demand. But there are also safety benefits to autonomous cars. As Huang described it: “humans are the least reliable part of the car.”
But self-driving is complex, unpredictable, and hazardous (weather, etc.). Perception with multiple sensors around the vehicle is the core of a very difficult problem, and deep learning is key to understanding the driving situation. It starts with fast image recognition: being able to recognize cars, trucks, pedestrians, and traffic signs is just the beginning, and with GPU acceleration, the DNN runs up to 40x faster than on a CPU alone. After that, the autonomous vehicle needs to keep learning and connect to the cloud to gather the latest DNN models. Production systems will also have to deal with school crossing guards, road construction, debris in the road, and many unexpected situations. It’s a complex systems and software design problem; delivering autonomous systems takes much more than raw compute power and a few drivers, which is why NVIDIA has invested in the DriveNet DNN learning platform along with the DIGITS DNN development tool. Autonomous driving is hard, but NVIDIA’s DRIVE PX 2 brings a lot of performance to bear on the problem. The rest of us can sit back and enjoy the ride.
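The recognition step described above boils down to scoring an image’s features against a set of object classes and picking the most likely one. NVIDIA has not published DriveNet’s internals, so the toy classifier below is only a sketch of the idea: the class list, weights, and feature values are all made up, and a real DNN learns millions of parameters rather than a dozen hand-picked ones.

```python
import math

# Toy illustration of the recognition step: score a feature vector
# against a few road-object classes and pick the most likely one.
# Classes, weights, and features are hypothetical; a real DNN such
# as NVIDIA's DriveNet learns its parameters from training data.

CLASSES = ["car", "truck", "pedestrian", "traffic sign"]

WEIGHTS = {  # one (made-up) weight vector per class
    "car":          [ 2.0, -1.0,  0.5],
    "truck":        [ 1.0,  2.0, -0.5],
    "pedestrian":   [-1.0,  0.5,  2.0],
    "traffic sign": [ 0.5, -0.5, -1.0],
}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    """Return the best class label and its probability."""
    scores = [sum(w * f for w, f in zip(WEIGHTS[c], features))
              for c in CLASSES]
    probs = softmax(scores)
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    return CLASSES[best], probs[best]

label, prob = classify([0.9, 0.1, 0.2])  # features leaning toward "car"
print(label)  # prints "car"
```

The GPU’s advantage, and the reason for the claimed 40x speedup, is that the dot products and per-class scoring here are exactly the kind of dense, parallel arithmetic GPUs execute across thousands of lanes at once.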
—Kevin Krewell is a principal analyst with Tirias Research and has a funny (and smart) Twitter feed. Follow him on Twitter.
Want to learn more? Attend DesignCon 2016, the premier conference for chip, board, and systems design engineers. Taking place January 19-21, 2016, at the Santa Clara Convention Center, DesignCon will feature technical paper sessions, tutorials, industry panels, product demos, and exhibits. Register here. DesignCon and EDN are owned by UBM Canon.