MADISON, Wis. – Car companies rarely discuss behind-the-scenes development work on their autonomous vehicles.
The notable exception to this rule is CPU giant Intel, which styles itself as a “data company” and sees the challenges of autonomous driving as problems of data flow.
In developing highly automated vehicles, Intel focuses on overseeing and analyzing how data, once captured by test vehicles, is ingested, sorted, used for training and simulated at data centers. Intel hopes that this sort of granular knowledge will help determine how best to process data inside a car.
Intel's Jack Weast
EE Times recently interviewed Jack Weast, Intel’s chief architect of Autonomous Driving Solutions. We discussed the exploding volume of data collected by autonomous cars, and how the data glut is actually being processed – at data centers and inside vehicles.
Recently, Intel has been warning about “the coming flood of data in autonomous vehicles.” It estimates that an autonomous car could generate as much as 4 terabytes per 1.5 hours on the road.
So, is Intel saying that every autonomous car roboting around in 2021 will be collecting, storing and uploading that much data?
Not quite. “Let us first explain autonomous vehicle data processing by separating what’s happening during development phase from deployment,” said Weast.
Intel clarified that four terabytes per 1.5 hours is an estimate of how much data a training test vehicle would collect during the development phase of autonomous driving.
Assuming the test vehicle drives four to five hours per day, it will end up collecting “tens of terabytes of data,” said Weast, “which is stored in a hard disk drive inside the test vehicle.”
At this point, no filters or processing are applied to the collected data. Everything captured is stored on the HDD for later analysis.
Data in terabyte chunks is far too large for wireless transmission to a data center. Instead, "We would physically take the HDD out of a vehicle and bring it over to a data center," said Weast.
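Weast's "tens of terabytes" figure follows directly from Intel's rate estimate. A quick back-of-the-envelope check (the 4-terabyte/1.5-hour rate and the 4-to-5-hour driving day come from the article; the arithmetic is the only thing added here):

```python
# Sanity-check Intel's test-vehicle data estimate.
TB_PER_WINDOW = 4.0   # terabytes generated per measurement window (Intel's estimate)
WINDOW_HOURS = 1.5    # length of that window in hours

rate_tb_per_hour = TB_PER_WINDOW / WINDOW_HOURS  # roughly 2.67 TB per hour

for hours in (4, 5):  # a typical test-driving day, per Weast
    daily_tb = rate_tb_per_hour * hours
    print(f"{hours} h/day -> {daily_tb:.1f} TB/day")
```

At four to five hours a day, that works out to roughly 10.7 to 13.3 TB daily, consistent with the "tens of terabytes" landing on the test vehicle's hard drive.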
Data center architecture matters
Once at the data center, the next step is important, he noted, “Because this is where the data center architecture matters.”
There are five different processing stages at a data center, according to Weast.
5 stages of data processing
The first is basic ingestion. Data stored on the HDD is transferred to the data center for archival purposes.
Second comes data analysis. This process organizes the stored data, making it easy for data scientists and application developers to search for specific information. “For example, data scientists may want to take a close look at a particular scenario: a vehicle is coming to a stop at a four-way stop. It’s raining and there are pedestrians at the corner,” said Weast. “The data must be organized and aggregated so that it’s searchable.”
Weast defined the third stage of processing as training data for artificial intelligence and machine learning. What matters most here is “how we can reduce the time to train,” noted Weast.
The fourth stage is making data accessible to application developers. As they develop valet parking apps for autonomous vehicles, for example, developers must have access to relevant data sets collected by test vehicles, including details such as latency and response time.
The last phase is simulation. At this stage, rigorous simulation is applied to data, helping improve algorithms to a level at which they can be put back into autonomous vehicles.
5 stages of autonomous car data processing at data centers (Source: Intel)
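The five stages Weast describes can be sketched as a simple pipeline. This is a hypothetical illustration only: the stage names follow the article, but the function bodies are placeholders, not Intel's implementation.

```python
# Hypothetical sketch of the five data-center stages described above.
# Stage names come from the article; bodies are illustrative placeholders.

def ingest(hdd_contents):
    """Stage 1: archive raw data pulled from the test vehicle's HDD."""
    return {"archive": hdd_contents}

def analyze(store):
    """Stage 2: organize and tag data so scenarios are searchable,
    e.g. 'four-way stop, rain, pedestrians at the corner'."""
    store["index"] = {"four_way_stop_rain_pedestrians": [0]}
    return store

def train(store):
    """Stage 3: feed the organized scenarios to AI/ML training."""
    store["model"] = "trained-on-%d-scenarios" % len(store["index"])
    return store

def expose(store):
    """Stage 4: publish searchable data sets to application developers."""
    store["api"] = sorted(store["index"])
    return store

def simulate(store):
    """Stage 5: rigorous simulation before algorithms go back into vehicles."""
    store["validated"] = True
    return store

store = simulate(expose(train(analyze(ingest(["raw frames"])))))
print(store["validated"])  # True once all five stages have run
```

The point of the sketch is the ordering: each stage consumes what the previous one produced, which is why, as Weast notes next, each stage can justify different hardware.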
Critical in these five stages of data processing, said Weast, is that “You will need different hardware best suited for each stage of data processing.” Hardware that works efficiently for neural network training is inevitably different from hardware for other stages of data processing.
Gone are the days of the generic racks of blades once common to data centers, Weast noted. “It’s time for a custom data center design that meets performance requirements.”
Intel, of course, takes pride in its breadth of hardware technologies, ranging from Xeon and Xeon E5 processors to FPGAs, custom accelerators and memory. Figuring out which are best suited to specific jobs affects the power and performance of a data center.
That’s just what must be done in the development phase of autonomous driving, Weast said. Data processing that takes place inside a vehicle has its own requirements.
Data processing during robo-car deployment
In the deployment phase, algorithms used in an autonomous vehicle are more focused on capturing unique anomalies, Weast explained.
For example, the autonomous car might encounter an object it hasn’t seen before. Although it looks like a human being, it’s moving too fast, which confuses the car. The object might still be a person, perhaps one riding a hoverboard. To solve this mystery, the car might first send a text-based description to the cloud, then follow with video samples, or both, explained Weast.
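The escalation Weast describes, a compact text report first with heavier video only when needed, might look like the following. This is a hypothetical sketch; the function names and the toy classifier are invented for illustration.

```python
# Hypothetical sketch of anomaly escalation: upload a small text
# description first, and send video samples only if the cloud
# cannot classify the object from the description alone.

def report_anomaly(description, video_clip, cloud_classify):
    """Return the list of payloads actually uploaded."""
    uploads = [("text", description)]           # bytes: cheap to send
    label = cloud_classify(description)
    if label is None:                           # cloud is unsure: escalate
        uploads.append(("video", video_clip))   # megabytes: sent on demand
    return uploads

# Toy classifier: it recognizes only descriptions it has seen before.
known = {"pedestrian at crosswalk"}
classify = lambda text: text if text in known else None

print(len(report_anomaly("pedestrian at crosswalk", b"...", classify)))            # 1 payload
print(len(report_anomaly("human-shaped object moving too fast", b"...", classify)))  # 2 payloads
```

The design keeps the uplink cheap in the common case, which is why deployment-phase traffic stays far below development-phase volumes.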
Either way, data sent to a data center in the cloud is much smaller than data transferred to the data center by testing vehicles.
In the near future, it’s foreseeable that 5G technology will be used for vehicle-to-vehicle communication, noted Weast. In one scenario, the last car in a platoon might want to see what the first car sees. Rather than uploading video to the cloud, the lead vehicle can wirelessly stream standard real-time video to the caboose via 5G, Weast explained.
Carmakers are also interested in gathering data related to the physical health of the autonomous car.
HD mapping companies also need to get updated data from vehicles deployed on the road. Autonomous vehicles will be sending such updates about “huge potholes or other road hazards,” explained Weast.
HD mapping companies like HERE can use Mobileye’s Roadbook, which identifies and maps drivable routes in real time using crowdsourced data. Roadbook will pair with HERE’s HD Live Map as an added layer on top of its existing data, thus enhancing the HD map, said Weast.
Aside from data that needs to be captured and sent to the cloud or to another vehicle, how can a robo-car fuse and process a heavy flow of data coming from the car’s many sensors?
For Level 2 autonomy, where only one or two sensors are deployed, sure, each module can process its own data, acknowledged Weast. But when a host of different sensors starts capturing gobs of data, “a high-performance compute cluster will come in,” he explained.
Fusing raw data centrally is not only more efficient, but increasingly necessary, he added. A highly automated vehicle’s central fusion unit, for example, can combine data from cameras, lidars and radars, then normalize gaps and overlaps, eventually creating a 360-degree view around the vehicle.
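A rough illustration of that central-fusion idea (a hypothetical sketch, not Intel's design): detections from each sensor, here simplified to a bearing and a range, are merged into one 360-degree object list, with overlapping reports of the same object collapsed into a single track.

```python
# Hypothetical central-fusion sketch: merge per-sensor detections
# (bearing in degrees, range in meters) into one 360-degree object list,
# collapsing overlapping reports of the same object from different sensors.

def fuse(detections, bearing_tol=5.0, range_tol=1.0):
    tracks = []
    for sensor, bearing, rng in detections:
        for track in tracks:
            if (abs(track["bearing"] - bearing) <= bearing_tol
                    and abs(track["range"] - rng) <= range_tol):
                track["sensors"].add(sensor)      # overlap: same object seen twice
                break
        else:
            tracks.append({"bearing": bearing, "range": rng,
                           "sensors": {sensor}})  # gap: new object on the map

    return tracks

detections = [
    ("camera", 10.0, 20.0),   # camera and radar report the same car ahead...
    ("radar",  12.0, 20.5),
    ("lidar", 180.0, 8.0),    # ...while lidar covers the rear, outside camera view
]
tracks = fuse(detections)
print(len(tracks))  # 2 fused objects from 3 raw detections
```

Even in this toy form, the payoff is visible: the fused list is smaller than the raw detections and covers the full circle around the vehicle, which is what the downstream planner actually needs.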
Where's Intel's purpose-built SoC?
Intel has yet to detail its own purpose-built automotive SoC – perhaps similar to Nvidia’s Xavier – designed for highly automated vehicles.
Xavier is a complete SoC integrating a new GPU architecture called Volta, a custom eight-core CPU and a new computer-vision accelerator. Likewise, Intel’s solutions could in theory integrate a variety of compute elements: CPUs, FPGAs, accelerators and others.
Previously, Kathy Winter, vice president and general manager of Intel’s Automated Driving Solutions, told EE Times in an interview that Intel, in preparation for the emergence of fully automated vehicles, is working on a new custom SoC. It features multiple Xeon cores and integrates hardware acceleration units. The design will deliver “high performance in computing while meeting the power budget,” said Winter at the time. The SoC will be automotive-grade with functional safety, she added.
Intel said last November that it would announce more details on this product closer to final production, sometime during 2017.
Intel isn't yet ready to unveil its purpose-built autonomous car SoC. Instead, the company shared a graphic suggesting that "autonomous car 'brains' need more than one kind of processor to get the job done." (Source: Intel)
Without offering specifics, Weast said, “Our goal is to offer efficient computation, high performance but at low power.” Weast added, “And we will offer scalable solutions on our platform.”
Noting that Intel’s vehicle compute platforms could contain any variety of CPUs, FPGAs, ASICs or custom SoCs, Intel’s spokeswoman pointed out that platforms now in Intel’s GO development system scale from “Atom to Xeon.”
— Junko Yoshida, Chief International Correspondent, EE Times