5 stages of data processing
The first is basic ingestion: data stored on HDDs in test vehicles is transferred to the data center for archiving.
Second comes data analysis. This process organizes the stored data, making it easy for data scientists and application developers to search for specific information. “For example, data scientists may want to take a close look at a particular scenario: a vehicle is coming to a stop at a four-way stop. It’s raining and there are pedestrians at the corner,” said Weast. “The data must be organized and aggregated so that it’s searchable.”
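One way to picture this second stage is a minimal scenario index. The segment fields, tags, and `search` function below are hypothetical illustrations of how drive logs might be annotated so that a query like Weast's four-way-stop example becomes a simple lookup; none of this reflects Intel's actual tooling.

```python
# Illustrative sketch: tag archived drive segments with scenario
# attributes, then query for "four-way stop, raining, pedestrians".
# All names and fields here are invented for this example.
from dataclasses import dataclass

@dataclass
class Segment:
    drive_id: str
    scenario: str      # e.g. "four_way_stop", "highway_merge"
    weather: str       # e.g. "rain", "clear"
    pedestrians: bool

# A toy archive of annotated segments.
archive = [
    Segment("drive-001", "four_way_stop", "rain", True),
    Segment("drive-002", "highway_merge", "clear", False),
    Segment("drive-003", "four_way_stop", "clear", True),
]

def search(scenario, weather=None, pedestrians=None):
    """Return archived segments matching the requested attributes."""
    hits = [s for s in archive if s.scenario == scenario]
    if weather is not None:
        hits = [s for s in hits if s.weather == weather]
    if pedestrians is not None:
        hits = [s for s in hits if s.pedestrians == pedestrians]
    return hits

matches = search("four_way_stop", weather="rain", pedestrians=True)
print([s.drive_id for s in matches])  # → ['drive-001']
```

In practice this indexing would live in a distributed metadata store rather than an in-memory list, but the principle is the same: aggregate first, so queries stay cheap.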
Weast defined the third stage of processing as training data for artificial intelligence and machine learning. What matters most here is “how we can reduce the time to train,” noted Weast.
The fourth stage is making data accessible to application developers. As they develop valet-parking apps for autonomous vehicles, for example, developers must have access to relevant data sets collected by test vehicles, including details such as latency and response time.
The last phase is simulation. At this stage, the data is run through rigorous simulation, helping improve algorithms to a level at which they can be put back into autonomous vehicles.
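Taken together, the five stages form a pipeline. The stub functions below are purely illustrative of that flow, not of Intel's actual software, and every name in them is invented:

```python
# The five stages sketched as a pipeline of hypothetical stubs.

def ingest(raw):            # 1. archive raw sensor logs
    return {"archived": raw}

def index(store):           # 2. organize/tag data so scenarios are searchable
    store["indexed"] = True
    return store

def train(store):           # 3. feed data into AI/ML training
    store["model"] = "v1"
    return store

def serve(store):           # 4. expose curated data sets to app developers
    store["api"] = "dataset-endpoint"
    return store

def simulate(store):        # 5. validate the improved algorithms in simulation
    store["validated"] = store.get("model") == "v1"
    return store

result = simulate(serve(train(index(ingest(["lidar", "camera"])))))
print(result["validated"])  # → True
```

The point Weast makes next follows directly from this shape: each stage has a distinct compute profile, so each benefits from different hardware.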
5 stages of autonomous car data processing at data centers (Source: Intel)
Critical in these five stages of data processing, said Weast, is that “You will need different hardware best suited for each stage of data processing.” Hardware that works efficiently for neural network training is inevitably different from hardware for other stages of data processing.
Gone are the days of the generic racks of blades once common to data centers, Weast noted. “It’s time for a custom data center design that meets performance requirements.”
Intel, of course, takes pride in its breadth of hardware technologies, ranging from Xeon and Xeon E5 processors to FPGAs, custom accelerators, and memory. Figuring out which are best suited to specific jobs affects the power and performance of a data center.
That’s just what must be done in the development phase of autonomous driving, Weast said. Data processing that takes place inside a vehicle has its own requirements.
Data processing during robo-car deployment
In the deployment phase, algorithms used in an autonomous vehicle are more focused on capturing unique anomalies, Weast explained.
For example, the autonomous car might encounter an object it hasn’t seen before. Although it looks like a human being, it’s moving too fast, which confuses the car. The object might still be a person, perhaps riding a hoverboard. To solve this mystery, the car might first send a text-based description to the cloud and then follow up with video samples, or send both, explained Weast.
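The tiered reporting Weast describes can be sketched as follows. The report format, the triage rule, and all function names are assumptions made for this example; the idea is simply that a compact text description goes up first, and the much larger video payload only follows when the cloud asks for it.

```python
# Hypothetical sketch of tiered anomaly reporting: text first,
# video only on request. The triage threshold is invented.

def describe_anomaly(obj):
    """Stage 1: a small, text-only report of the puzzling object."""
    return {"type": "unknown_object",
            "looks_like": obj["appearance"],
            "speed_mps": obj["speed_mps"]}

def cloud_wants_video(text_report):
    """Stage 2 (cloud side): decide whether video is worth uploading.
    Toy rule: an apparent human moving faster than 4 m/s is suspicious."""
    return (text_report["looks_like"] == "human"
            and text_report["speed_mps"] > 4.0)

def report(obj, video_frames):
    payload = {"text": describe_anomaly(obj)}
    if cloud_wants_video(payload["text"]):
        payload["video"] = video_frames   # escalate: far larger payload
    return payload

hoverboarder = {"appearance": "human", "speed_mps": 6.5}
payload = report(hoverboarder, video_frames=[b"frame0", b"frame1"])
print("video" in payload)  # → True: the fast "human" triggers video upload
```

The economics follow from this structure: most anomalies resolve at the cheap text tier, which is why deployed fleets send far less data than test fleets.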
Either way, the data sent to a data center in the cloud is much smaller than the data transferred to the data center by test vehicles.
In the near future, it’s foreseeable that 5G technology will be used for vehicle-to-vehicle communication, noted Weast. In one scenario, the last car in a platoon might want to see what the first car sees. Rather than uploading video to the cloud, the lead vehicle can wirelessly stream standard real-time video to the caboose via 5G, Weast explained.
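The platooning scenario amounts to direct peer-to-peer streaming rather than a round trip through the cloud. The toy below illustrates that shape only: a local socket pair stands in for the 5G link, and the 4-byte length-prefix framing is invented for this sketch, not any V2V standard.

```python
# Toy illustration of the platoon scenario: the lead vehicle streams
# video frames directly to the last car. A local socket pair stands in
# for the 5G link; the framing (4-byte length prefix) is invented.
import socket
import struct

lead, caboose = socket.socketpair()

def send_frame(sock, frame: bytes):
    """Send one frame, prefixed with its length."""
    sock.sendall(struct.pack("!I", len(frame)) + frame)

def recv_exact(sock, n: int) -> bytes:
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return buf

def recv_frame(sock) -> bytes:
    """Read one length-prefixed frame."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# The lead car's camera produces frames; the caboose receives them.
for i in range(3):
    send_frame(lead, f"frame-{i}".encode())

received = [recv_frame(caboose) for _ in range(3)]
print(received)  # → [b'frame-0', b'frame-1', b'frame-2']

lead.close()
caboose.close()
```

Nothing here touches a data center: the video goes car-to-car, which is exactly the bandwidth argument for V2V links in Weast's example.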
Carmakers are also interested in gathering data related to the physical health of the autonomous car.
HD mapping companies also need to get updated data from vehicles deployed on the road. Autonomous vehicles will be sending such updates about “huge potholes or other road hazards,” explained Weast.
HD mapping companies like HERE can use Mobileye’s Roadbook, which identifies and maps drivable routes in real time using crowdsourced data. It will pair with HERE’s HD Live Map as an added layer to its existing data, thus enhancing the HD map, said Weast.
Aside from data that needs to be captured and sent to the cloud or to another vehicle, how can a robo-car fuse and process a heavy flow of data coming from the car’s many sensors?
Next page: Where's Intel's purpose-built SoC?