One of the most common sources of errors in measurements is vertical noise, which can reduce the accuracy of signal measurement, particularly as signal frequency increases. You can use ENOB (effective-number-of-bits) testing to more accurately evaluate the performance of digitizing systems, including oscilloscopes. The ENOB figure summarizes the noise and frequency response of a system. Resolution typically degrades significantly as frequency increases, so ENOB versus frequency is a useful specification. Unfortunately, when manufacturers provide an ENOB specification, it often covers just one or two frequencies rather than the full frequency range.
In test and measurement, noise can make it difficult to make measurements on a signal in the millivolt range, such as in a radar transmission or a heart-rate monitor. Noise can make it challenging to find the true voltage of a signal, and it can increase jitter, making timing measurements less accurate. It also can cause waveforms to appear “fat” compared with the traces of analog oscilloscopes.
The ENOB concept
Digitizing performance is linked to resolution, but simply selecting a digitizer with the required number of bits, or quantizing levels, for the desired amplitude resolution can be misleading because dynamic digitizing performance, depending on the technology, can decrease markedly as signal speeds increase. An 8-bit digitizer can drop to 6, 4, or even fewer effective bits of performance well before reaching its specified bandwidth.
When designing or selecting an ADC, a digitizing instrument, or a test system, it is important to understand the various factors affecting digitizing performance and to have some means of evaluating overall performance. ENOB testing provides a means of establishing a figure of merit for dynamic digitizing performance. You can use it as an evaluation tool at various design stages and as a way to provide an overall system-performance specification. Because manufacturers don’t always specify ENOB for individual instruments or system components, you may need to do an ENOB evaluation for comparison. Essentially, ENOB is a means of specifying the ability of a digitizing device or instrument to represent signals of various frequencies (Figure 1).
The figure illustrates that effective digitizing accuracy falls off as the frequency of the digitized signal increases. In this case, an 8-bit digitizer provides 8 effective bits of accuracy only at dc and low frequencies. As the signal you are digitizing increases in frequency or speed, performance drops to lower and lower values of effective bits.
This decline in digitizer performance manifests itself as an increasing level of noise on the digitized signal. Noise in this case refers to any random or pseudorandom error between the input signal and the digitized output. You can express this noise on a digitized signal in terms of SNR (signal-to-noise ratio):

SNR = 20 × log10(rmsSIGNAL / rmsERROR),

where rmsSIGNAL is the root-mean-square value of the digitized signal and rmsERROR is the root-mean-square value of the noise error.
The following equation yields the relationship to effective bits:

EB = (SNR − 1.76 dB + 20 × log10(FS/A)) / 6.02,

where EB represents the effective bits, SNR is the signal-to-noise ratio in decibels, A is the peak-to-peak input amplitude of the digitized signal, and FS is the peak-to-peak full-scale range of the digitizer’s input.
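The SNR-to-effective-bits relationship above is easy to sketch in code. The following Python functions are illustrative (the names are not from the article); note the correction term for a test signal smaller than the digitizer’s full-scale range:

```python
import math

def snr_db(rms_signal, rms_error):
    """SNR in decibels from rms signal and rms error values."""
    return 20.0 * math.log10(rms_signal / rms_error)

def effective_bits(snr_db_value, amplitude, full_scale):
    """Effective bits from SNR (dB), corrected for a test signal
    whose peak-to-peak amplitude is less than full scale."""
    return (snr_db_value - 1.76
            + 20.0 * math.log10(full_scale / amplitude)) / 6.02

# A full-scale sine through an ideal 8-bit digitizer has an SNR of
# about 6.02*8 + 1.76 = 49.92 dB, so EB should come out near 8.
print(effective_bits(49.92, amplitude=1.0, full_scale=1.0))
```

For an ideal full-scale case the amplitude correction vanishes and the formula reduces to the familiar SNR = 6.02N + 1.76 dB relationship solved for N.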
Other commonly used equations include

EB = N − log2(rmsERROR / idealQuantizationError)

and

EB = −log2(rmsERROR × √12 / FS),

where N is the nominal, or static, resolution of the digitizer.
These equations employ a noise, or error, level that the digitizing process generates. In the second equation above for EB, the ideal quantization error term is the rms error in the ideal, N-bit digitizing of the input signal. The IEEE Standard for Digitizing Waveform Recorders (IEEE Standard 1057) defines the first two equations (Reference 1). An alternative form of the third equation assumes that the ideal quantization error is uniformly distributed over one LSB (least-significant bit) peak to peak. This assumption allows you to replace the ideal quantization error term with FS/(2^N × √12), where FS is the digitizer’s full-scale input range.
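A short numerical check shows that substituting FS/(2^N × √12) for the ideal quantization error makes the two EB forms equivalent. This Python sketch assumes illustrative values for the rms error and full-scale range:

```python
import math

def eb_from_ideal_error(rms_error, n_bits, full_scale):
    """EB = N - log2(rmsERROR / ideal quantization error), taking the
    ideal error as FS / (2**N * sqrt(12))."""
    ideal_error = full_scale / (2 ** n_bits * math.sqrt(12))
    return n_bits - math.log2(rms_error / ideal_error)

def eb_direct(rms_error, full_scale):
    """Equivalent closed form: EB = -log2(rmsERROR * sqrt(12) / FS)."""
    return -math.log2(rms_error * math.sqrt(12) / full_scale)

rms_error = 0.003   # assumed rms noise, in volts
full_scale = 1.0    # assumed full-scale input range, in volts
print(eb_from_ideal_error(rms_error, n_bits=8, full_scale=full_scale))
print(eb_direct(rms_error, full_scale))  # same value as the line above
```

The nominal resolution N cancels out of the substituted form, which is why the closed-form version needs only the rms error and the full-scale range.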
These equations assume full-scale signals. Actual testing may use test signals at less than full scale (50 or 90% of full scale, for example). A smaller test signal can yield a better ENOB figure, so comparisons of ENOB specifications or test results must account for both test-signal amplitude and frequency.
Noise or error relating to digitizing can come from a number of sources. Even in an ideal digitizer, quantizing causes a minimum noise or error level amounting to ±½ LSB. This error is an inherent part of digitizing (Figure 2).
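The FS/(2^N × √12) expression used earlier follows from this inherent ±½-LSB error: an error uniformly distributed over one LSB peak to peak has an rms value of LSB/√12. A quick simulation, assuming an ideal quantizer whose error really is uniform, confirms this:

```python
import math
import random

# Draw quantization errors uniformly over +/- 1/2 LSB and confirm
# that their rms value approaches LSB / sqrt(12).
random.seed(1)
lsb = 1.0 / 256  # one LSB of an 8-bit digitizer with a 1-V full-scale range
errors = [random.uniform(-lsb / 2, lsb / 2) for _ in range(200_000)]
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(rms, lsb / math.sqrt(12))  # the two values should nearly match
```

This is the error floor of an ideal digitizer; the real-life error sources listed below add to it.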
It is the resolution limit, or uncertainty, associated with ideal digitizing. A real-life digitizer adds further errors to this basic ideal error floor. These additional real-life errors can include dc offset; ac offset, or “pattern” errors, sometimes called fixed pattern distortion, associated with interleaved sampling methods; dc and ac gain error; analog nonlinearity; and digital nonmonotonicity. You must also consider phase errors; random noise; frequency-timebase inaccuracy; aperture uncertainty, or sample-time jitter; digital errors, such as data loss due to metastability, missing codes, and the like; and other error sources, such as trigger jitter.