NEW YORK--Rapidly evolving imaging and embedded vision algorithms are opening a new battleground for DSP core IP companies vying to meet the high-performance, power-efficient imaging needs of mobile handsets, automotive systems and video products.
Following Ceva’s introduction a year ago of the MM3101, a programmable, low-power imaging and vision platform, Tensilica on Tuesday (Feb. 12) rolled out an imaging and video dataplane processor unit (DPU) called IVP.
The IVP DPU, a licensable semiconductor IP core, is designed to offload complex imaging features from the host processor. While the IVP IP core is available now for the general market, two unnamed Tensilica customers are deploying it in their system silicon, according to Chris Rowen, Tensilica’s founder and chief technology officer.
The IVP DPU is capable of 500 billion pixel operations per second per watt in TSMC’s 28-nm process technology. It takes up less than 0.5 square millimeters per core, according to Tensilica, making it suitable for low-cost applications.
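To put the efficiency figure in perspective, a rough back-of-envelope sketch: dividing a workload’s operation rate by the quoted 500 billion pixel operations per second per watt yields an implied power draw. The workload below (1080p at 60 frames/s, ~200 operations per pixel) is an illustrative assumption, not a vendor benchmark.

```python
# Rough power implied by the quoted efficiency figure.
EFFICIENCY = 500e9  # pixel operations per second per watt (vendor figure)

# Illustrative workload assumption: 1080p at 60 fps, ~200 ops per pixel
workload = 1920 * 1080 * 60 * 200    # ~25 billion ops/s

power_watts = workload / EFFICIENCY  # roughly 0.05 W at this load
```

On these assumptions the imaging workload lands around 50 milliwatts, which is the kind of margin that separates a dedicated imaging DPU from running the same pipeline on application CPU cores.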
Driving demand for an imaging/video processor core are new functions such as high-dynamic-range image capture and face recognition and tracking in mobile handsets and digital cameras; gesture control and video post-processing in DTVs; and front-collision warning, lane-departure warning and other features for advanced driver assistance systems.
These complex imaging/vision algorithms are evolving so rapidly that mobile handset and automotive companies expect to incorporate new features in their systems “in weeks,” not months, explained Gary Brown, director of imaging/video at Tensilica.
Options for system vendors seeking imaging/video processing solutions range from keeping everything on the CPU to offloading imaging to the GPU, or adding hardwired logic dedicated to imaging functions.
"If video processing--but nothing else--runs on quad cores of A8 at 1.5GHz, for example, it can easily burn 3 watts of power,” Rowen said.
It’s especially tough for mobile handsets or digital cameras to do so on a CPU alone, when such consumer systems need to run algorithms such as high dynamic range “continuously while taking pictures,” Rowen explained.
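The HDR workload Rowen cites is compute-heavy because merging bracketed exposures requires per-pixel weighting and blending across several full frames, for every capture. A minimal sketch of one naive merge scheme (the weighting function and the `merge_exposures` helper are illustrative assumptions, not Tensilica’s or any camera vendor’s algorithm):

```python
import numpy as np

def merge_exposures(frames, exposures):
    """Naive HDR merge: weight each pixel by how well-exposed it is,
    then average the per-frame radiance estimates.

    frames: list of float arrays with values in [0, 1]
    exposures: matching list of shutter times (seconds)
    """
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(img - 0.5) * 2.0  # favor well-exposed mid-tones
        num += w * (img / t)               # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)     # weighted average radiance
```

Even this toy version touches every pixel of every bracketed frame several times per shot, which is why running it “continuously while taking pictures” on a general-purpose CPU is costly.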
[Figure: Tensilica's IVP processor core architecture.]
Hardwired logic can also enable dedicated functions such as face detection, video stabilization or object tracking. But as more high-end man-machine interface features trickle down to consumer devices, new hardwired blocks might be needed just months after the last ones were designed.
Offloading imaging to the GPU is yet another option. Noting that a GPU is designed for floating point and 3-D graphics, Rowen cautioned that this approach could cripple imaging efficiency and increase area. Besides, GPUs are hard to program, he added.
Jeff Bier, president of Berkeley Design Technology Inc., explains that processing real-time image and/or video data typically requires “tens of billions of operations per second.” This is because “we’re applying complex algorithms to the real-time data, and extracting meaning from pixels--which is the essence of embedded vision--is a hard problem.”
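Bier’s figure falls out of simple arithmetic: pixels per frame, times frame rate, times operations per pixel. The sketch below uses an assumed ~200 operations per pixel for a modest vision pipeline; the helper function and that per-pixel figure are illustrative, not from Bier.

```python
def ops_per_second(width, height, fps, ops_per_pixel):
    """Back-of-envelope compute load for a real-time video pipeline."""
    return width * height * fps * ops_per_pixel

# 1080p at 60 fps with an assumed ~200 operations per pixel already
# lands in the tens of billions of operations per second.
load = ops_per_second(1920, 1080, 60, 200)  # ~25 billion ops/s
```

Higher resolutions, higher frame rates, or heavier algorithms (optical flow, multi-scale feature detection) push the figure up quickly from there.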
Further, that hard problem is “unsolved in a general sense,” Bier added. That means “algorithm development approaches tend to be very experimental and iterative,” which in turn demands imaging/embedded vision solutions that are programmable and easy to develop, he said.