MADISON, Wis. — Nvidia pushed the computational performance battle for autonomous cars to a new level, unveiling a new member of its Drive PX family, codenamed Pegasus, at its own GPU Technology Conference in Munich.
Noting that Pegasus can compute 320 trillion operations per second (320 TOPS), CEO Jensen Huang boasted, “Our new DRIVE PX Pegasus AI computer — roughly the size of a license plate — can replace the entire trunk full of computing equipment used in today’s Level 5 autonomous prototypes… DRIVE PX Pegasus has the AI performance of a 100-server data center.”
Nvidia CEO Jensen Huang at GTC in Munich (Photo: Nvidia)
Nvidia’s Pegasus reportedly comes with a four-chip architecture featuring the equivalent of two Xavier units, plus two next-generation discrete GPUs.
Although computational power alone can’t solve all the challenges still posed by Level 5 autonomous cars, Nvidia appears to have edged ahead of its rivals.
Luca De Ambroggi, principal analyst for automotive electronics and semiconductors at IHS Markit, noted, “Processing power is a very important point and with Pegasus we are getting close to the POPS (Peta operations per second), which I expect to be the minimum requirement for L5 vehicle.”
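To put De Ambroggi's point in perspective, a quick back-of-the-envelope calculation shows how far Pegasus's stated 320 TOPS sits from a one-POPS mark. The sketch below assumes the standard unit definitions (1 TOPS = 10^12 operations per second, 1 POPS = 10^15) and treats the 1-POPS Level 5 threshold as De Ambroggi's stated expectation, not an established requirement.

```python
# Assumed units: 1 TOPS = 1e12 ops/s, 1 POPS = 1e15 ops/s.
# The 1-POPS L5 threshold is De Ambroggi's estimate, used here for illustration.
pegasus_tops = 320                    # Nvidia's stated Drive PX Pegasus throughput
l5_threshold_pops = 1.0               # hypothetical minimum for a Level 5 vehicle

pegasus_pops = pegasus_tops * 1e12 / 1e15
fraction = pegasus_pops / l5_threshold_pops

print(f"Pegasus: {pegasus_pops:.2f} POPS "
      f"({fraction:.0%} of a 1-POPS threshold)")
# → Pegasus: 0.32 POPS (32% of a 1-POPS threshold)
```

On those assumptions, Pegasus delivers roughly a third of a POPS — close enough to make geo-fenced robo-taxis plausible, but consistent with De Ambroggi's view that a few more silicon generations are needed for the mass market.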
In De Ambroggi's opinion, the Pegasus platform is likely to be ready for “Geo-fenced L5” self-driving cars — Robo-taxis — but not for the mass market. “We will probably see a few more generations of ICs (such as Nvidia's Xavier 3, 4, 5 and Intel/Mobileye's EyeQ 5, 6, 7)” to improve performance, De Ambroggi said.
Mike Demler, a senior analyst at the Linley Group, cautioned that Nvidia is “now pre-announcing chips more than one year before we see first samples.” But he, too, acknowledged that “the combination of Nvidia’s more open software platform and the GPU-compute architecture” position Nvidia well to address deep learning.
While industry analysts aren’t declaring Nvidia the sole winner of the autonomous car race yet, they aren’t disputing the clear leadership role Nvidia has seized.
Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), said, “Nvidia is developing and learning just like everyone else.” However, he added, Nvidia has “put a lot into developing automated vehicle technologies and their efforts are beginning to pay off. They have essentially democratized AI in automotive, which is to be commended considering the auto industry's position on AI up until now.”
No commercially available vehicle today exceeds Level 2 autonomy. The future of higher-level automation still hangs in the balance but, at least, Nvidia is “the first to offer a complete AV stack for L4/L5,” noted Magney.
Describing Pegasus as “a production ready platform to support L4/L5 automation,” Magney added, “It is very robust and has lots of redundancies and fall back methods. It will run on QNX which is an ASIL D embedded operating system.”
Other functional safety measures Nvidia has installed in Pegasus, according to Magney, include, “Decomposing the neural network components and validating the AI libraries.” He added, “Nvidia says they can examine the performance of the network layers by isolating certain tasks such as perception.”
Asked about sensor fusion on Pegasus, Magney, who was at Nvidia’s GTC in Munich this week, said, “Nothing different… Nvidia advocates fusing raw data despite the high capacity physical layer. Nvidia claims their architecture can handle massive amounts of data so no need to process outside of the domain controller.” Pegasus “supports more sensors — up to 16,” he added.
How does Pegasus stack up?
Before turning to competitive analysis, how does Pegasus stack up against Nvidia’s other Drive PX platforms?
The Linley Group’s Demler said Pegasus is not fundamentally different [from other Drive PX chips]. “In fact, architecturally Nvidia is actually reversing course from their initial positioning of Xavier,” he observed.