In the shift to digital processing of information, the performance of many electronic systems has become limited by the interfaces between physical analog signals and their digital representation. Aside from restrictions due to limited throughput, the growing trend toward portable and wearable devices has made the power efficiency of analog-to-digital converters one of the main factors that can limit overall system performance.
A prominent example in which the A/D converter is the ultimate bottleneck is the long-anticipated "software radio." On the receiver side of such a device, a fairly wide band of RF spectrum would be digitized and subsequently demodulated by a digital processor. With this approach, radios could adapt to virtually any standard and also intelligently manage interference and bandwidth allocation. While this looks good on paper, the bad news is that state-of-the-art A/Ds do not deliver the performance such an application demands. Realistic estimates suggest that even for modest bandwidth and dynamic-range requirements, the required A/D could easily consume power on the order of 10 watts.
So why is it that A/D converters seem to be so power-hungry compared with digital processors? The qualitative answer is simple: Unlike digital circuits, most A/Ds are constrained by accuracy requirements. While digital gates only distinguish two logic levels, A/Ds must resolve minute differences in voltage or current at high speed. Especially in high-resolution A/Ds, this is fundamentally difficult due to the presence of electronic noise, caused by random carrier motion and the discreteness of charge.
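To put a number on "minute differences," here is a back-of-the-envelope sketch, assuming an illustrative 1 V reference (not a figure from the text): the least-significant-bit step a 12-bit converter must resolve.

```python
# Hypothetical full-scale reference and resolution, for illustration only.
vref = 1.0          # volts
bits = 12

# One least-significant-bit step: the smallest voltage difference
# the converter must reliably distinguish.
lsb = vref / 2 ** bits   # about 244 microvolts
```

At 12 bits, the converter must tell apart voltages a few hundred microvolts apart, which is where electronic noise begins to matter.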
A trivial, but often unsuitable, approach for coping with electronic noise is to cool down the circuit. The next best solution is to increase the signal power with respect to the noise floor of the circuit, and this is precisely where most of the power in a high-resolution A/D goes.
Increasing the signal power necessitates storing more energy per conversion cycle. The power needed to repeatedly charge and discharge energy storage elements, such as capacitors, sets a fundamental limit on the attainable power efficiency in A/Ds.
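As a rough illustration of this limit, the sketch below estimates the smallest sampling capacitor whose thermal (kT/C) noise still meets a given signal-to-noise ratio, and the energy needed to charge it. The 1 V full-scale sinusoid, room temperature, and 74 dB target (roughly 12-bit quality) are illustrative assumptions, not figures from the text.

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed room temperature, K

def ktc_noise_vrms(C):
    """RMS thermal noise voltage sampled onto a capacitor C (kT/C noise)."""
    return math.sqrt(k * T / C)

def min_cap_for_snr(vpp, snr_db):
    """Smallest capacitor for which kT/C noise meets the target SNR,
    assuming a full-scale sinusoid of peak-to-peak amplitude vpp."""
    v_sig_rms = vpp / (2 * math.sqrt(2))            # RMS of a full-scale sine
    v_noise_max = v_sig_rms / 10 ** (snr_db / 20)   # allowed noise RMS
    return k * T / v_noise_max ** 2

# Example: 1 V peak-to-peak signal, 74 dB SNR (roughly 12-bit quality)
C = min_cap_for_snr(1.0, 74.0)       # on the order of a picofarad
E = 0.5 * C * 1.0 ** 2               # energy to charge it to 1 V (C*V^2/2)
```

Even this idealized estimate gives a capacitor of roughly a picofarad and a fraction of a picojoule per charging event; real converters charge many such elements, many times per conversion.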
In this context, it is important to note that the feature-size reduction driven by Moore's Law does not alter the above fundamental trade-off in A/Ds. By contrast, smaller transistors and reduced supply voltages have been key to improving the power efficiency of digital computing, at an astonishing rate of better than 2x every two years over the past several decades.
An obvious direction for improving the performance of A/Ds is to exploit what has transformed them into a bottleneck in the first place: powerful digital computing and signal processing. At first glance, this doesn't seem like a very new idea. Numerous digital correction and calibration algorithms are routinely used in today's products to mitigate the impact of analog circuit imperfections, such as offset and mismatch. What else can be done to leverage digital signal processing power?
Going back to the fundamental noise limit discussed above, it follows that we've left out one important piece of the puzzle, which I call "deterministic precision." While it is clear that we must process signals with a certain minimum amount of energy to overcome electronic noise, there is no fundamental restriction in terms of deterministic, or repeatable, errors that we make, for instance when charging a capacitor. Conceivably, a digital processor could measure and remove such errors, thereby boosting the precision of an intrinsically imprecise, and even nonlinear, analog operation.
For example, the accompanying figure (see page 59) illustrates a concept that was introduced recently at the IEEE International Solid-State Circuits Conference (Murmann, 2003; Grace, 2004). Here, a digital processor measures and removes errors due to nonlinear amplification in the signal path of a pipelined A/D. With this approach, the usual linear precision amplifiers can be replaced by simple, and weakly nonlinear, open-loop gain stages, whose complexity is ironically close to that of a digital logic gate.
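The flavor of such digital correction can be conveyed with a toy model; this is an illustrative sketch, not the published algorithm. A stage with hypothetical gain coefficients a1 and a3 exhibits weak cubic compression, and a digital post-processor inverts it. In a real converter the coefficients would be estimated by a background calibration algorithm rather than assumed known.

```python
# Toy model of a weakly nonlinear open-loop gain stage and its digital inverse.
# The coefficients a1, a3 are hypothetical placeholders.

def amplify(v, a1=8.0, a3=-1.5):
    """Open-loop stage: nominal gain a1 with a cubic compression term a3."""
    return a1 * v + a3 * v ** 3

def correct(y, a1=8.0, a3=-1.5, iters=5):
    """Digitally invert the nonlinearity by fixed-point iteration:
    v = (y - a3 * v^3) / a1, starting from the linear estimate y / a1."""
    v = y / a1
    for _ in range(iters):
        v = (y - a3 * v ** 3) / a1
    return v

v_in = 0.3                       # example stage input
y = amplify(v_in)                # distorted output of the simple amplifier
v_est = correct(y)               # digitally corrected estimate of v_in

raw_err = abs(y / 8.0 - v_in)    # error if the stage were assumed linear
corr_err = abs(v_est - v_in)     # residual error after correction
```

The corrected estimate recovers the input to far better accuracy than the naive linear assumption, which is the essence of trading analog precision for digital computation.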
Direct and obvious benefits of this approach are a significant reduction in amplifier power dissipation and a potential increase in attainable speed. In addition, the simplified amplifiers also generate less electronic noise, which indirectly helps lower their power dissipation further.
Concepts like the use of digitally compensated, nonlinear amplification in A/D converters are only the beginning of a promising avenue of research. After all, calculations show that current A/D converters are nowhere near the power-efficiency limits dictated by electronic noise. The use of complex, highly linear circuits in current implementations accounts for more than two orders of magnitude in power overhead.
With the exponential pace of performance growth in digital processing, analog designers are just beginning to understand the potential of an increased level of "digital assistance."
In current 0.13-micron CMOS technology, a logic gate requires roughly 5 femtojoules per operation. On the other hand, a state-of-the-art 12-bit A/D consumes roughly 5 nanojoules per conversion, which corresponds to the energy of 1 million logic-gate operations. With the expected future progress in technology scaling, these numbers will only become more favorable for approaches that use a few hundred thousand "free" gates for digital post-processing.
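The comparison above is simply a ratio of the two energy figures quoted in the text:

```python
gate_energy = 5e-15   # ~5 fJ per logic operation in 0.13-micron CMOS
adc_energy = 5e-9     # ~5 nJ per conversion for a 12-bit A/D

# Number of logic operations with the same energy budget as one conversion.
gates_per_conversion = adc_energy / gate_energy   # 1 million
```

In other words, every conversion "costs" as much energy as a million gate operations, so spending even a few hundred thousand of them on post-processing is nearly free.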
The most significant future improvements in the power efficiency of A/Ds are likely to come from an aggressive extension of the concept of nonlinear analog signal processing. For instance, using charge-pump-like structures with poor linearity and settling precision instead of conventional amplifiers may be one way to approach the ultimate power-efficiency limit in silicon technology.
On a higher level, it is imperative to abandon looking at A/Ds as "black boxes" within a system. For example, viewing a digitally assisted A/D as part of an equalizable, nonideal communication channel may result in highly efficient joint solutions for mapping radio waves into the digital domain.
Clearly, key to such innovation will be broad, multidisciplinary approaches that use digital processing capabilities as a driver, rather than an afterthought in trying to cope with imperfections in evolutionary analog circuits.
Boris Murmann (firstname.lastname@example.org) is assistant professor of electrical engineering at Stanford University (Stanford, Calif.).
See related chart