Back in 2001, I wrote a column about the merits of using heterogeneous designs for signal processing-oriented applications. My argument went like this: Signal-processing applications typically encompass diverse data rates, data types and algorithms, and it often makes sense to address these needs with a collection of similarly diverse processing engines rather than taking a one-size-fits-all approach. Back then, using a heterogeneous processing architecture usually meant using two or more chips.
Six years later, we're in the midst of a surge of interest in multi-core, massively parallel chips. These chips contain tens or even hundreds of processors, and sometimes FPGA-like elements as well. The question of whether to use homogeneous or heterogeneous processing elements is still a key design decision, though in this case it takes place at the chip level rather than at the system level.
The trade-offs involving heterogeneous vs. homogeneous designs for massively parallel chip vendors are similar to those made by system designers in the past: flexibility and ease of use versus efficiency. In general, homogeneous designs are simpler to use and are suitable for a wider range of applications, advantages that come at the cost of performance and efficiency. Heterogeneous solutions (which incorporate more specialized processing elements) give up some generality and ease of use in exchange for being more efficient in their target applications. From the chip vendors' perspective, the question is whether to design chips that cast a wide application net or chips that are focused on one or two target markets.
One solution is to start out with a fairly generic homogeneous approach and then, once the chip has found its niche or niches and the market has gelled, migrate to an application-specific heterogeneous approach. This is the trajectory massively parallel chip vendor picoChip has taken; its initial chip had arrays of highly similar (though not identical) processors, while its latest chip has moved towards application specificity by incorporating specialized co-processors. FPGA vendors Altera and Xilinx have evolved from offering generic "sea of gates" chips to offering chips that incorporate hardwired signal-processing elements, memories, processors and specialized I/O blocks.
I believe that, in embedded applications, there are plenty of opportunities for chips all along the homogeneous-heterogeneous spectrum. For massively parallel chip vendors, the key is matching the architecture to the market, and being mindful of the trade-offs.
Jeff Bier is the president of Berkeley Design Technology, Inc., a benchmarking and consulting firm focusing on digital signal processing technology. Jennifer Eyre White of BDTI contributed to this column.