# Getting 12-bits of Resolution out of your 10-bit A/D


Emerging products in smart sensors and instrumentation are demanding that engineers look for more ways to increase performance while simultaneously squeezing out cost. This article demonstrates how engineers can accomplish this by implementing digital signal processing techniques in common 8/16-bit microcontrollers. The reader will be able to increase the effective resolution of their own A/D designs after achieving a practical understanding of the algorithms used and seeing them demonstrated using an Arrow Electronics signal processing analysis tool and a common microcontroller development platform.

**Example Applications**

In many cases, designers select an A/D by initially considering the full scale range and minimum resolution. For example, a designer measuring temperature may have a resolution requirement of 0.1 degrees Fahrenheit over a 100 degree full scale range. Taking the full scale range and dividing it by the desired resolution implies a dynamic range of 1000. The dynamic range can also be expressed in decibels by taking 20 times the log of 1000, which in this case is 60 dB. A good approximation for the number of required A/D bits is to take the dynamic range in dB, divide it by 6, and round up to the nearest integer, which implies that 10 bits are required for the A/D. However, with the implementation of some simple averaging algorithms in the microcontroller, the A/D requirement is easily reduced to 8 bits. Since many low-cost microcontrollers have 8-bit A/Ds in the peripheral set, the ability to fit your application into these solutions enables you to choose the lowest possible cost for your design.

The temperature measurement example focused on an application where the signal bandwidth (BW) was assumed to be very close to DC; however, this is not always the case. For example, a design that monitors 60Hz sine waves must maintain adequate bandwidth for that frequency in the data processing chain. If a minimum dynamic range of 70 dB is required, then an A/D with 12 bits would be chosen, since 70 divided by 6 rounds up to 12. However, using the oversampling and moving average techniques presented in this article, an A/D with fewer than 12 bits could provide 70 dB of dynamic range, or a 12-bit A/D already in the system could provide even better performance than 70 dB without sacrificing bandwidth.

This article addresses separately each of the previous examples for averaging and oversampling. The signal processing algorithms are presented in an overview manner along with the underlying assumptions that enable them to operate properly. Appendices are referenced for the reader who wants an explanation of how the presented equations, algorithms and demonstrations were developed.

**Part 1: Increasing A/D Converter Resolution by Averaging**

Recall from our temperature measurement example that we needed 10 bits for the requirement of 0.1 degree resolution over a 100 degree full scale range. One solution would be to purchase a 10-bit A/D and be done with it! However, in situations where circuit cost is critical, or when you're locked into a set number of bits because of legacy issues, considering some simple averaging techniques along with the hardware requirements achieves the best overall solution.

**Averaging Technique**

So how does this actually work? Remember that an A/D's noise (QN) due to quantization error is related to the minimum A/D resolution according to Equation 1,

QN = Q / √12 (Equation 1)

where the minimum resolution, or quantization level (Q), is calculated by using Equation 2, knowing the full scale range (FS) and the total number of bits (N) in the A/D converter:

Q = FS / 2^N (Equation 2)

Since quantization noise is directly proportional to the resolution of the A/D, processing A/D samples in a manner that reduces the noise, QN, can increase the A/D's effective resolution. Equation 3 describes what happens when you average a given number of samples (NS) from an A/D converter and then look at the quantization noise for that set of averaged samples (AQN). The effect is to reduce the quantization noise by the square root of the number of samples averaged:

AQN = QN / √NS (Equation 3)

For example, consider the algorithm defined by Equation 4, which is often called a moving average filter:

y(n) = [x(n) + x(n−1) + … + x(n−NS+1)] / NS (Equation 4)

If we let x(n) represent A/D samples and y(n) represent processed samples, then the noise associated with each A/D sample is QN and the noise associated with each averaged sample is AQN. By Equations 1 and 3, the averaged samples, y(n), will have lower noise and, as a result, a lower effective quantization level (better resolution) than the original A/D samples.

An enabling assumption for this technique to work is that the A/D's RMS input noise must be random, zero-mean, and greater than the quantization noise of the A/D. If this assumption isn't met, the averaged samples can "flat-line," causing an error in the mean values. This can be overcome with a summing amplifier that adds random noise to the input signal; sometimes this is a less expensive solution than increasing the number of bits in an external A/D. In many cases, the environment the final system operates in guarantees that the RMS noise is greater than QN. If the mean value of the noise is not exactly zero, a bias will be introduced into the data that can easily be calibrated out.

An equation can be derived that calculates the equivalent number of A/D bits (ENOB) for the averaged samples, y(n), as a function of the number of samples averaged (NS), the number of A/D bits (N), and the ratio of the A/D's RMS input noise to its quantization noise, QN. If we call this ratio the Input Noise Factor (INF), then the ENOB of the moving average output y(n) can be calculated using Equation 5:

ENOB = N + log2(√NS / INF) (Equation 5)

For example, if the actual RMS input noise to the A/D is equal to QN, then INF is equal to 1; if it is 1.5 times QN, then INF equals 1.5. Based on our previous assumptions, the value of INF must be greater than 1 for this technique to work; practically, INF should be 1.5 or greater. Assume that we have an 8-bit A/D with RMS input noise equal to 1.5 times the quantization noise. If we let the number of samples averaged, NS, equal 64, then Equation 5 predicts an ENOB of just over 10 bits.

Notice also that if we used a 10 bit A/D with INF still equal to 1.5, the new ENOB would be just over 12 bits. Please refer to Appendix A for a more detailed development of these equations and the assumptions used to derive them.

**Temperature Example**

Getting back to our original temperature example: let's assume that you scale your measurements into a 0 to 5V range and anticipate an RMS noise level on the order of 10mV. An 8-bit A/D will have a quantization level of 19.5mV and quantization noise of about 6mV RMS. Using Equation 5, the INF is 10/6, or 1.67, and the ENOB would be

ENOB = 8 + log2(√64 / 1.67) ≈ 10.3

or just over 10 bits, assuming we averaged 64 samples. All things being equal, it is best to average as many samples as possible if you have the available processing power and time to make the measurement. Assuming that your microcontroller has a 10-bit A/D and the INF remains near 1.67, Equation 5 would predict close to 12 bits of resolution for the same number of averaged samples.

**Demonstration**

Figure 1 plots averaged samples taken from an 8-bit A/D where the analog DC input was shifted by 1/4 and 1/2 of the A/D's quantization level (Q/4, Q/2). The data was processed using Equation 4 and formatted into a 16-bit word. Both the average value and the RMS noise of the averaged samples are compared with what would be predicted by Equation 3. The RMS noise is plotted on the left axis and the mean values are plotted on the right axis. Appendix A describes the problem of not averaging enough samples and identifies 32 as a good minimum number. Figure 1 demonstrates this with the agreement between real data and what is predicted by Equation 3 at 32 averages per sample. At 128 averages per sample, the Q/4 and Q/2 sub-quantization steps are very clearly defined. Appendix B identifies the actual development board hardware and software tool that was used to demonstrate this technique. The software that was created for the development board, as well as the software evaluation tool, can be obtained through your local Arrow FAE (or the Arrow Website).

**Figure 1 Demonstrated Increase in Resolution for an 8-bit A/D**

**Summary**

Part 1 presented a practical understanding of how you can increase the resolution of an A/D using a moving average filter. We assumed that the noise at our A/D input was random, greater than the quantization noise of the A/D, and equally distributed about the mean of the parameter that we were measuring. Finally, we assumed that the measured parameter would not change significantly over the sampling interval. The technique was demonstrated using a hardware demonstration board and the Arrow signal processing analysis tool. Appendices A and B are available for the reader who wishes to have more detail on the development of the equations.