Once upon a time, most signal processing applications were powered by single-processor chips. Today, though, there is an increasing trend to use complex, heterogeneous multiprocessor chips. One such chip is Texas Instruments' recently announced OMAP 2, which contains a microprocessor core, a DSP core and multiple application-specific coprocessors.
The primary motivation for this shift is not processing speed; the fastest uniprocessor DSPs and general-purpose microprocessors are fast enough for many performance-hungry signal-processing applications. Instead, multiprocessor chips are gaining popularity because they deliver strong computational speed along with prices suitable for high-volume applications and (at least in some cases) energy efficiency suitable for battery-powered systems.
These devices, of course, are more complex. Instead of a single processor architecture and a single tool chain, users have to contend with multiple architectures and tool chains. Many chip vendors try to ease this process by providing software and middleware so that the user doesn't have to develop all of the code from scratch. Nevertheless, users will likely need to develop, optimize and debug some software for each processor on the chip.
Perhaps even more daunting is partitioning an application across different processors. Unlike "channelized" applications, which often use multiple instances of the same processor to execute the same software in parallel, applications that use heterogeneous multicore chips run different portions of the application on different processors. So which portion goes where?
Ideally, you'd like to run each block of code on the processor that's most efficient for that block. But that may not be obvious until you've actually implemented and optimized each block on each processor, which is not a viable approach. In addition, you'll have to consider the overhead associated with interprocessor communication. That may mean that certain blocks are best mapped to a less-efficient processor to avoid excessive shuttling of data.
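To make the trade-off concrete, here is a toy sketch of the partitioning problem: each block has a (hypothetical) per-processor cost, and pulling data from a block placed on a different processor adds a transfer penalty. The block names, cost numbers and the greedy strategy are all illustrative assumptions, not a description of any vendor's actual tools.

```python
# Toy sketch of mapping application blocks to processors on a heterogeneous
# chip. All names and numbers are hypothetical, chosen only to illustrate
# how communication overhead can shift a block to a "less efficient" core.

def assign_blocks(blocks, compute_cost, transfer_cost, predecessors):
    """Greedily place each block (in dataflow order) on the processor that
    minimizes its compute cost plus the cost of fetching inputs from
    predecessors already placed on a different processor."""
    placement = {}
    for block in blocks:
        best_proc, best_cost = None, float("inf")
        for proc, cycles in compute_cost[block].items():
            cost = cycles
            for pred in predecessors.get(block, []):
                if placement[pred] != proc:
                    cost += transfer_cost[(pred, block)]
            if cost < best_cost:
                best_proc, best_cost = proc, cost
        placement[block] = best_proc
    return placement

# Hypothetical pipeline: filtering and an FFT feed a user-interface block.
compute = {
    "fir": {"dsp": 10, "mcu": 40},
    "fft": {"dsp": 15, "mcu": 60},
    "ui":  {"dsp": 50, "mcu": 12},
}
transfer = {("fir", "fft"): 5, ("fft", "ui"): 3}
preds = {"fft": ["fir"], "ui": ["fft"]}

print(assign_blocks(["fir", "fft", "ui"], compute, transfer, preds))
# {'fir': 'dsp', 'fft': 'dsp', 'ui': 'mcu'}
```

Even this greedy version shows the effect the column describes: the transfer penalty keeps the signal-processing blocks together on the DSP, while the UI block stays on the microprocessor core despite the data hand-off. A real tool would need a global optimizer, but the cost structure is the same.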
Given the advantages of heterogeneous multiprocessor chips, system developers will put up with some added complexity. But it is up to the chip vendors to provide the needed infrastructure to help make that complexity manageable.
Jeff Bier is the general manager of Berkeley Design Technology Inc. (www.BDTI.com), the DSP technology analysis and software development company. Jennifer Eyre of BDTI contributed to this column.