NEW YORK – In a move to cash in on the embedded vision markets, Analog Devices, Inc. is rolling out a family of 1-GHz, dual-core Blackfin processors integrated with a vision accelerator.
ADI’s DSP team developed a hardware accelerator called the pipelined vision processor (PVP), designed to work with the company’s Blackfin DSP cores. The PVP combined with dual DSP cores forms the ADSP-BF608 and ADSP-BF609, both optimized for embedded vision applications.
Before creating the PVP architecture, the team examined a variety of embedded vision software and designed a hardware accelerator to meet the needs of three major markets: automotive driver assistance, industrial machine vision and security/surveillance analytics.
Delivering over 25 billion mathematical operations per second, the PVP combined with two Blackfin cores “provides the basis for a very powerful and flexible processor,” according to Colin Duggan, director of product marketing for ADI’s processors and DSP core technology group.
ADI’s launch of this new family of embedded vision-enabled Blackfin processors matches a trend toward embedded processors that are expected to perform a number of different vision analytics at low power and low cost.
Recognizing the growing appetite for running multiple video analytics concurrently, “We first tried to solve the problem by throwing a lot of Blackfin processors at it,” ADI’s Duggan said. “But we quickly realized that we were violating the low power and low cost requirements demanded by embedded systems.”
ADI’s team studied a range of embedded vision applications, then focused on a set of software algorithms most commonly used in automotive, industrial and surveillance applications. The algorithms cover functions that include object detection, object tracking and object identification.
The team architected the hardware accelerator by building into the PVP the flexibility to reconfigure itself. The PVP, designed as a flexible image processing engine, can run functional blocks such as convolution, scalers and arithmetic. It’s optimized to save memory bandwidth.
The PVP offers a total of 12 high-performance, highly configurable signal processing blocks that support a variety of algorithms. These twelve functions can be assigned to either the memory pipe or the camera pipe, according to ADI.
Texas Instruments, one of ADI’s competitors, has taken a similar approach by developing a hardware accelerator for its OMAP DSP processors to address the embedded vision application market. TI’s accelerator, however, is more focused on running specific applications such as face detection.
ADI’s Duggan stressed: “Our vision accelerator is unique, because it’s not dedicated hardware. It is more generic and less custom.”
Jeff Bier, founder of the Embedded Vision Alliance, said that the biggest challenge for embedded vision processors is that they need to address “extremely diverse applications.” On one hand, embedded vision demands big, high-performance, computation-intensive processors. On the other, it calls for hardware that can go inside a $30 to $50, very low power embedded system.
“I give a lot of credit to ADI for staking out new territory with PVP,” said Bier. The PVP, when integrated with Blackfin, sits in the middle ground between high-powered general purpose processors and specialized, dedicated embedded vision chips. Calling ADI’s PVP a “reasonable and somewhat innovative approach,” Bier explained that ADI’s engineering team has studied embedded vision applications, understood them and built the new accelerator. How its actual performance fares against competitors’ solutions in each of the intended application markets, however, remains to be seen, he added.
Embedded vision applications’ demands
The embedded vision application for advanced driver assist systems (ADAS) is one of the fastest growing markets. Auto manufacturers are now asking not just for “forward collision warning/mitigation” and “pedestrian detection” functions, but also “intelligent high beam control,” “traffic sign recognition” and “lane departure warning” features, according to the company. And they want all those functions to run concurrently.
Certainly, that’s no easy task.
But the PVP with Blackfin cores is designed to meet the challenge – running up to five concurrent functions, claimed Duggan.
He noted a range of other requirements for safety-critical systems that ADI’s new Blackfin processors with PVP can meet: camera resolution up to HD (1280 x 960), real-time frame rates (30 frames per second), programmability (for adding secret-sauce analytic functions) and operating temperatures up to 105 degrees C. Because such a vision processor needs to be embedded “behind the rearview mirror” in a car, Duggan explained, the chip needs to be able to withstand the ambient temperature there.
According to Jon Cropley, principal analyst at IMS Research, the automotive segment is by far the fastest growing embedded vision market. Cropley noted that the market for intelligent automotive camera modules alone is estimated around $300 million in 2011 and is forecast to grow at an average annual rate of over 30% to 2015.
ADI’s embedded vision-enabled Blackfin cores also target industrial applications such as image processing for barcode decoding. The industrial machine vision segment is the biggest and the most established of all embedded vision markets. The market for such industrial machine vision hardware, including smart sensors, smart cameras, compact vision systems, and machine vision cameras, is estimated to have been worth around $1.5 billion in 2011 and is forecast to grow at an average annual rate of over 10% to 2015, according to IMS Research.
The third leg of ADI’s target market is surveillance. The Blackfin cores with PVP will enable security IP cameras not just to see objects, but to perform analytics co-processing. Examples of behavior analysis functions include activity detection and counting entries and exits over a given period. The processors can also support functions such as traffic pattern review and monitoring for license plate recognition.
IMS Research estimates the market for intelligent video surveillance devices (devices with embedded analytics) to have been worth around $250 million in 2011 and is forecast to grow at an average annual rate of over 20% to 2015.
ADI’s Blackfin BF60x processors are supported by CrossCore Embedded Studio, which supports both proprietary and open source tools and technologies, including Analog Devices’ C/C++ compiler, Micrium uC/OS-III, Linux and GCC. ADI has worked closely with Micrium to integrate its uC/OS-III real-time operating system, USB drivers and file system into CrossCore Embedded Studio.
Pricing for ADI’s BF60x series of processors starts at $15 in quantities of a thousand or more, according to ADI. Silicon samples are available today. CrossCore Embedded Studio, the BF609 EZ-KIT development board, and a family of in-circuit emulators are available now to assist designers in their development with these new Blackfin processors.
A few years back I worked on Blackfin, and I remember it was always pitched against OMAP. Technical marketing guys from TI and ADI always had slides with benchmark numbers comparing Blackfin and OMAP.
There is no doubt that demand for embedded vision is growing. The more advanced driver assist systems are installed in cars, the safer we will be. I am sure there are tons of researchers developing optimized algorithms for “forward collision warning/mitigation” and “pedestrian detection,” as well as “intelligent high beam control,” “traffic sign recognition” and “lane departure warning.” Soon, Google’s self-driving algorithms will be embedded into next-generation processors.
In real-time vision, the data transfer among camera, memory and processor is substantial. At a resolution of 1280 x 960 with 8-bit greyscale, a single frame is about 1.2MB. Capturing at 10 frames per second, the data rate is about 12MBps, i.e. roughly 100Mbps. And that is just the transfer from camera to memory, or memory to processor. If the vision algorithm includes three processing stages, the transfer between processor and memory triples, to about 300Mbps. I am really interested in learning how to reduce the memory bandwidth requirement. Any more info or research papers are welcome. ;)