Measuring the sampling instants
Accurate measurement of the sampling-clock delay requires a precise time reference common to both instruments. The trigger instant is the only time reference available for the waveform data from the digitizer itself. Unfortunately, many factors limit the precision of this reference. For example, the path of the input signal, and of the analog-to-digital converter (ADC) data once the signal is converted, differs from the path of the trigger signal on the digitizer board, implying different propagation delays. Moreover, the components on these paths have propagation delays that vary from part to part. The resulting offset is the trigger-vs.-channel delay.
The trigger comparator can also affect trigger-time precision. Its threshold is calibrated only to a finite resolution, and any noise on the signal entering the comparator causes trigger-time jitter.
Consequently, another time reference is needed. One possible solution is to use the input signal itself: if the same signal is delivered to each channel input connector, the samples from each waveform can be positioned relative to a reference instant in the signal.
Outlining the measurement method
It is possible to show that the sampling-clock delay between two channels of different digitizer modules is constant (when a common clock reference is used) by repeatedly measuring the sampling instants of the channels relative to each other, two by two.
The starting point is a time reference, which is needed to position the samples accurately on an absolute time scale. Because the uncertainty of the trigger instant is much larger than the precision required for this measurement, the measured signal itself serves as the absolute time reference. In that case, exactly the same signal must be delivered to the channel inputs. By choosing a sine signal and fitting the measured data to a sine function, the samples can be positioned in time relative to the signal (e.g., relative to the first positive-going zero crossing). The difference between the fitted phases of the waveforms from the two channels is the sought-after sampling-clock delay.
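The phase-comparison step above can be sketched in a few lines of Python. This is a minimal illustration, not the original measurement code: the signal frequency, sample rate, and simulated 40 ps delay are assumed values chosen for the example.

```python
# Sketch: estimate the sampling-clock delay between two channels by fitting
# each record to a sine and comparing the fitted phases.
# All numeric values (fs, f0, delay) are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, freq, phase, offset):
    return amp * np.sin(2 * np.pi * freq * t + phase) + offset

def fitted_phase(samples, fs, f0):
    """Fit a sine to one record; return the fitted phase and frequency."""
    t = np.arange(len(samples)) / fs
    guess = [np.std(samples) * np.sqrt(2), f0, 0.0, np.mean(samples)]
    popt, _ = curve_fit(sine, t, samples, p0=guess)
    return popt[2], popt[1]

# Simulate two channels digitizing the same 1 MHz sine at 1 GS/s,
# with channel 2 sampling 40 ps later than channel 1.
fs, f0, delay = 1e9, 1e6, 40e-12
t = np.arange(4096) / fs
ch1 = np.sin(2 * np.pi * f0 * t)
ch2 = np.sin(2 * np.pi * f0 * (t - delay))   # later sampling instant -> phase lag

p1, _ = fitted_phase(ch1, fs, f0)
p2, f_fit = fitted_phase(ch2, fs, f0)
# The phase difference, converted to time, is the sampling-clock delay.
measured = (p1 - p2) / (2 * np.pi * f_fit)
print(f"measured delay: {measured * 1e12:.1f} ps")
```

On noiseless simulated data the fit recovers the 40 ps offset essentially exactly; with real digitizer records the residual spread across acquisitions indicates the measurement jitter.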
This method uses the trigger-time information from the digitizers for only one purpose: to determine the starting samples of each waveform. The resulting measurement of the sampling-clock delay is therefore free of the errors and jitter introduced by the digitizer trigger systems.
The measured samples from each channel must be taken during the same period of the input signal. Therefore, the digitizers must be triggered at exactly the same time by an external trigger pulse. Note that this pulse is completely asynchronous to both the signal and the clocks. Even without using the trigger time reported by the digitizer, it is known that the first sample of each waveform is taken within (sampling interval + trigger-vs.-channel delay + trigger jitter) of the trigger instant.
The frequency of the input sine wave should be chosen appropriately. A higher frequency provides better timing accuracy, but the period must remain long compared to (sampling interval + trigger-vs.-channel delay + trigger jitter) to resolve the ambiguity due to period folding (see next section). In addition, more samples per period allow the sine fit to converge without the frequency having to be specified.
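The frequency bound implied by the folding ambiguity can be worked out numerically. The digitizer figures below are assumptions for illustration, as is the ×10 safety margin.

```python
# The first samples of the two records are known to lie within
# (sampling interval + trigger-vs.-channel delay + trigger jitter)
# of the trigger instant, so the sine period must comfortably exceed
# that window or a phase difference could be confused with a whole period.
# All values below are assumed for the example.
sampling_interval = 1e-9   # 1 GS/s digitizer (assumed)
trig_ch_delay = 2e-9       # assumed trigger-vs.-channel delay
trig_jitter = 0.5e-9       # assumed trigger jitter
uncertainty = sampling_interval + trig_ch_delay + trig_jitter

margin = 10                # keep the period >= 10x the uncertainty window
f_max = 1 / (margin * uncertainty)
print(f"maximum sine frequency: {f_max / 1e6:.1f} MHz")
```

With these assumed numbers the input sine should stay below roughly 28.6 MHz; tighter trigger specifications would permit a higher, and hence more accurate, test frequency.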
Finally, the sine-wave frequency is chosen such that the phase of the digitized samples differs from period to period over the complete acquisition window. Nonlinearity errors, such as those of the ADC, eventually translate into time errors; they can be eliminated by the averaging effect of the sine fit over many periods.
Fitting a sine over many periods averages out digitizer imperfections such as ADC nonlinearity and high-frequency phase noise in the sampling-clock generator. The only remaining errors come from low-frequency phase noise in the clock generators. The stability of the sampling-clock delay can be verified by repeating the measurement over many acquisitions.
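The stability check over repeated acquisitions amounts to collecting the per-acquisition delay estimates and summarizing their spread. The sketch below uses simulated results; the 40 ps delay and 2 ps rms low-frequency clock noise are assumed values, not figures from the text.

```python
# Sketch: verify the stability of the sampling-clock delay by repeating
# the measurement over many acquisitions and summarizing the spread.
# The delay value and noise level are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_delay = 40e-12
# Simulated per-acquisition results: a fixed delay plus low-frequency
# clock phase noise (assumed 2 ps rms).
delays = true_delay + rng.normal(0.0, 2e-12, size=100)

mean_delay = delays.mean()
stability = delays.std(ddof=1)
print(f"mean delay: {mean_delay * 1e12:.1f} ps, "
      f"spread: {stability * 1e12:.2f} ps")
```

A spread close to the single-acquisition fit uncertainty indicates a stable sampling-clock delay; a larger spread points to low-frequency phase noise in the clock generators.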