NEW YORK--Rapidly evolving imaging and embedded vision algorithms are opening a new battleground for DSP core IP companies: the high-performance, power-efficient imaging needs of mobile handsets, automotive and video products.
Following Ceva’s introduction a year ago of MM3101, a programmable, low-power imaging and vision platform, Tensilica Tuesday (Feb. 12) rolled out an imaging and video dataplane processor unit (DPU), called IVP.
The IVP DPU, a licensable semiconductor IP core, is designed to offload complex imaging features from the host processor. While the IVP IP core is available now for the general market, two unnamed Tensilica customers are deploying it in their system silicon, according to Chris Rowen, Tensilica’s founder and chief technology officer.
The IVP DPU, capable of 500 billion pixel operations per second per watt, is fabricated by TSMC in 28-nm process technology. It occupies less than 0.5 square millimeters per core, according to Tensilica, making it suitable for low-cost applications.
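A quick back-of-envelope calculation shows what that efficiency figure implies in practice. The workload numbers below (1080p at 30 frames/s, 100 operations per pixel) are illustrative assumptions, not figures from Tensilica:

```python
# 500 billion pixel operations per second per watt implies
# an energy cost of 1/500e9 joules per pixel operation.
ops_per_sec_per_watt = 500e9
joules_per_op = 1.0 / ops_per_sec_per_watt      # 2 picojoules per op

# Assumed illustrative workload: 1080p video at 30 frames/s,
# roughly 100 operations per pixel.
pixels_per_frame = 1920 * 1080
ops_per_sec = pixels_per_frame * 100 * 30       # ~6.2 billion ops/s

watts = ops_per_sec * joules_per_op
print(f"{joules_per_op * 1e12:.0f} pJ per pixel operation")
print(f"~{watts * 1000:.0f} mW for the assumed workload")
```

Under those assumptions the whole imaging pipeline runs in the low tens of milliwatts, orders of magnitude below the multi-watt CPU figure Rowen cites later in the article.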
Driving demand for an imaging/video processor core are new functions such as high-dynamic-range image capture and face recognition and tracking in mobile handsets and digital cameras; gesture control and video post-processing in DTVs; and front-collision warning, lane-departure warning and other features in advanced driver assistance systems.
These complex imaging/vision algorithms are evolving so rapidly that mobile handset and automotive companies expect to incorporate such new features in their systems “in weeks,” not in months, explained Gary Brown, director of imaging/video at Tensilica.
Options for system vendors looking for imaging/video processing solutions range from keeping it all in the CPU to offloading imaging to the GPU, or adding hardwired logic dedicated to imaging functions.
“If video processing--but nothing else--runs on quad cores of A8 at 1.5 GHz, for example, it can easily burn 3 watts of power,” Rowen said.
It’s especially tough for mobile handsets or digital cameras to rely on a CPU alone, when such consumer systems need to run algorithms such as high dynamic range “continuously while taking pictures,” Rowen explained.
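To see why continuous HDR capture is pixel-heavy, consider a toy exposure-fusion sketch: each pixel of every input frame is weighted by how well exposed it is, then the frames are blended. This is an illustrative stand-in for the HDR pipelines the article mentions, not Tensilica's actual algorithm:

```python
import numpy as np

def fuse_exposures(frames):
    """Naive exposure fusion: weight each frame's pixels by distance
    from over/under-exposure, then blend. Illustrative only."""
    frames = [f.astype(np.float64) for f in frames]
    # Well-exposedness weight: Gaussian centered on mid-gray (0.5).
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in frames]
    total = sum(weights) + 1e-12  # avoid division by zero
    return sum(w * f for w, f in zip(weights, frames)) / total

# Two synthetic grayscale "exposures" of the same flat scene:
dark = np.full((4, 4), 0.1)     # underexposed frame
bright = np.full((4, 4), 0.9)   # overexposed frame
out = fuse_exposures([dark, bright])  # blends toward mid-gray
```

Even this simplified version performs several multiplies, an exponential and a divide per pixel per frame; running it on every preview frame at 30 frames/s is exactly the kind of sustained load that strains a general-purpose CPU.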
(Figure: Tensilica's IVP processor core architecture.)
Hardwired logic can also enable dedicated functions such as face detection, video stabilization or object tracking. But with more and more high-end man-machine interface features coming downstream to consumer devices, new hardwired blocks might be needed just two months from now.
Offloading imaging to the GPU is yet another option. But noting that a GPU’s focus is floating point and 3-D graphics, Rowen cautioned that adapting one for imaging could cripple its efficiency and increase area. Besides, GPUs are hard to program, he added.
Jeff Bier, president of Berkeley Design Technology Inc., explains that processing real-time image and/or video data typically requires “10s of billions of operations per second.” This is because “we’re applying complex algorithms to the real-time data, and extracting meaning from pixels--which is the essence of embedded vision--is a hard problem.”
Further, that hard problem is “unsolved in a general sense,” Bier added. It means “algorithm development approaches tend to be very experimental and iterative.” That, in turn, demands imaging/embedded vision solutions that are programmable and easy to develop, he said.
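Bier’s “10s of billions of operations per second” figure is easy to sanity-check with assumed numbers. The per-pixel operation count below is an illustrative guess for a multi-stage filter-and-feature pipeline, not a figure from Bier:

```python
# Why real-time vision needs "10s of billions of operations per second".
width, height, fps = 1920, 1080, 30   # 1080p video at 30 frames/s
ops_per_pixel = 300                   # assumed: multi-stage filtering,
                                      # feature extraction, matching
ops_per_second = width * height * fps * ops_per_pixel
print(f"{ops_per_second / 1e9:.1f} billion ops/s")
```

At roughly 19 billion operations per second for a single 1080p stream, even modest per-pixel algorithm complexity lands squarely in the range Bier describes.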
The IVP, by contrast, is a processor core tailored to the imaging pipeline.
Just as Tensilica used its software profiling tools to create optimized cores for its HiFi family of audio DSPs, its engineers this time around "created a specialized DSP with an instruction set that reduces the cycle count of the key embedded vision algorithms," according to the Linley Group's Gardner.