Behind all the excitement of cars driving themselves down the freeway to Las Vegas for CES this week is a jockeying for position among semiconductor vendors and processor architectures.
Audi demonstrated a prototype A7 driving unassisted from San Francisco, a distance of 550 miles, using radar sensors for long-range and 360-degree views, along with LIDAR laser scanners at the front and back. It also used a new high-resolution 3D video camera for a wide-angle view out of the front, combining that data with feeds from four other cameras around the car.
All of this data is managed by the advanced driver assistance systems (ADAS) that are the beachhead for chip makers on their roadmaps to self-driving cars. ADASes initially provide functions such as lane departure warning, advanced cruise control, traffic sign recognition, pedestrian and object detection, forward collision warning and reversing prevention, but the underlying algorithms are all key building blocks for the self-driving car. Indeed, these emerging devices already support autonomous emergency braking, and are also being designed to the ISO26262 automotive safety standard.
Israeli chip designer Mobileye saw a billion-dollar IPO on the NASDAQ stock market last year as a result of this drive, growing from 160 customers at the end of 2014 to a projected 237 by 2017. Along with foundry partner STMicroelectronics, it intends to be a key element in self-driving cars that reach the market in 2022. Its EyeQ family of vision processors is based on the MIPS architecture: after using the MIPS 34K in the EyeQ2 processor in 2008 and the 1004K in the EyeQ3 in 2012, the company signed up with MIPS's new owner Imagination Technologies following the 2013 takeover to use the follow-on Aptiv and Warrior processor families. It was the lead partner for the definition of the M51xx family of cores with hardware virtualization, which will sit at the heart of the next-generation devices for customers such as General Motors, Volvo and Honda.
Audi's self-driving A7 on its way to CES 2015.
But the processing architecture is not necessarily fixed. Toshiba has a long history in image recognition systems that dates back to handwriting recognition for the Japanese post office in the 1960s, and launched its first automotive image processor in 2004 around the ARM architecture. With the fourth generation, that has changed. The hardware-based architecture of the TMPV7608XBG moves from ARM to 10 dedicated media processing engines (MPEs), supported by a VLIW coprocessor based on Toshiba's own MeP RISC architecture that runs applications such as traffic signal detection and lane departure warning. These engines are optimized for transferring large amounts of data.
The media processing engines are themselves supported by 14 hardware accelerators, and Toshiba supplies the API and libraries.
“We have to acknowledge that the applications are so complex, with eight cameras running in parallel, so it's important to understand what functions you want to do, and with that you need the hardware capability and make the most intelligent assignment,” Klaus Neuenhuskes, Automotive System LSI IC Product Marketing for Europe at Toshiba, told EE Times. “To build that up in a simple RTOS would be a challenging task, so we trust to the intelligence of the engineer to map the application to the processor, and this is done with our image experts.”
“The migration of the control code, rather than the image recognition algorithms, can be done on the MeP, and for engineers it's easy to convert the control tasks to the MeP as these are 32-bit RISC engines with a C compiler.”
There are few safety issues with such a port, he says. “As long as the safety is done in software, it's not that much of an issue to port to the MeP. We have implemented additional hardware to achieve a higher ISO level, such as ECC memory checking and monitors, but we recognize the requirement to achieve a certain ISO function and it is optimized for ISO26262.”