For some time, it seemed as though digital signal processors (DSPs) and general-purpose processors (GPPs) were morphing into one another. In an effort to provide better DSP performance, general-purpose processors were incorporating increasingly powerful DSP-oriented features. Meanwhile, as digital signal processing applications got more complex, DSP processors were becoming more CPU-like to allow for efficient compilers and to support more elaborate operating systems. It was getting hard to tell DSPs and GPPs apart.
From a systems standpoint, this had some positive implications. As the two classes of processors and their software development environments became more similar, DSP software developers and non-DSP software developers were able to move more fluidly between the two worlds, understanding each other's concerns more clearly.
But the increasing focus on multicore chips may reverse that trend. When I discuss multicore processors with embedded software developers who are not DSP specialists, they inevitably talk about homogeneous symmetric multiprocessor (SMP) chips that integrate a handful of CPU cores. In contrast, when I discuss multicore processors with digital signal processing software developers, I often find them to be interested in "massively parallel" multicore architectures (having dozens or even hundreds of cores), many of which use heterogeneous assortments of cores.
Although both classes of chip are called "multicore," they differ radically from one another. They make different architectural trade-offs and support different programming models. After moving toward convergence for several years, why are DSP-oriented and non-DSP-oriented processors now heading in such divergent directions?
Fundamentally, signal-processing applications have always had demands and constraints that differ from those of most other kinds of embedded software. For one thing, DSP applications are often well-characterized ahead of time. It is known in advance (for example) whether H.264 encoding or WiMax baseband processing will be done; no one is going to suddenly load some other piece of software onto the machine. Furthermore, DSP applications are typically very computationally demanding. Because of these characteristics, it's often both more practical and more attractive to use massively parallel architectures in DSP applications than in other applications. The more predictable and stable the workload, the easier it is to partition it among many processing elements; and the more computationally demanding the workload, the more incentive there is to do just that.
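To make the partitioning argument concrete, here is a minimal sketch (not from the column, and the function names are hypothetical) of how a workload that is fixed and known at design time can be statically split into equal blocks, one per processing element:

```python
NUM_CORES = 4  # assumed core count for illustration

def kernel(block):
    """Stand-in for a per-core signal-processing kernel (here, a fixed 2x gain)."""
    return [2 * x for x in block]

def static_partition(samples, num_cores=NUM_CORES):
    """Split a known, fixed-size workload into equal blocks, one per core.

    Because the workload is predictable, the assignment of blocks to cores
    can be decided once, ahead of time; no run-time load balancing is needed.
    """
    n = len(samples) // num_cores
    blocks = [samples[i * n:(i + 1) * n] for i in range(num_cores)]
    # Each block would run on its own processing element; here we just
    # process them sequentially and recombine the results.
    processed = [kernel(b) for b in blocks]
    return [x for b in processed for x in b]

print(static_partition(list(range(8))))  # → [0, 2, 4, 6, 8, 10, 12, 14]
```

An unpredictable, general-purpose workload offers no such fixed block structure, which is one reason the same massively parallel approach is harder to apply there.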
GPP applications, on the other hand, are more diverse and less predictable. When the workload is less predictable, it's harder to farm it out to hundreds of processing elements. What's more, GPP software developers tend to place a premium on software compatibility, which allows them to more easily reuse legacy software. It is easier to accommodate this requirement in the multicore SMP paradigm, which allows processor designers to reuse an existing instruction-set architecture, simply replicating a few identical cores on a chip.
These same factors helped spur the original divergence between DSPs and GPPs in the early 1980s, and it looks as if history may be repeating itself as the two once again evolve in different directions.
About the author
Jeff Bier is the president of Berkeley Design Technology Inc. (www.BDTI.com), a benchmarking and consulting firm focusing on digital signal processing technology. Jennifer Eyre White of BDTI contributed to this column.