A clock and data recovery (CDR) circuit with advanced electronic dispersion compensation (EDC) technology is required to equalize multi-bit reflections on the shorter channels and to compensate for insertion loss on the longer channels. The EDC can be implemented using a number of equalization techniques.
Among these techniques, the two most common are FFE/DFE (feed-forward equalizer and decision feedback equalizer) and maximum likelihood sequence estimator (MLSE) equalization. Because of the latency requirements, the FFE/DFE technique is preferred for Fibre Channel applications. The typical latency of an FFE/DFE implementation is on the order of 1 ns, while the typical latency of a DSP-based EDC receiver, such as an MLSE solution, can be up to 500 ns.
The major building blocks of an FFE/DFE-based equalizer are depicted in Figure 4 below. The distorted signal is either attenuated or amplified by an automatic gain control (AGC) block so that its amplitude at the input of the FFE is optimal. The FFE is a tapped delay line linear filter with a tap spacing of half a bit period (T/2 ~ 58 ps at 8.5 Gbps).
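As an illustrative sketch (not the actual silicon implementation), a T/2-spaced FFE can be modeled as an FIR filter running over input samples taken at twice the bit rate, with the output decimated back to one value per bit. The function name and placeholder tap values below are ours, for illustration only:

```python
def ffe_t2(samples, taps):
    """Illustrative T/2-spaced FFE: an FIR filter over 2x-oversampled input.

    samples: input values at two samples per bit (T/2 spacing).
    taps:    FFE tap weights, one per T/2-spaced delay element.
    Returns one equalized value per bit (output decimated to T spacing).
    """
    out = []
    n = len(taps)
    # Slide the tap window over the T/2-spaced samples, stepping by 2
    # (one full bit period) so the output is at the symbol rate.
    for i in range(0, len(samples) - n + 1, 2):
        out.append(sum(t * s for t, s in zip(taps, samples[i:i + n])))
    return out

# Tap spacing at 8.5 Gbps: T = 1/8.5e9 s, so T/2 is about 58.8 ps.
half_bit_ps = 0.5 / 8.5e9 * 1e12
```

With a pass-through tap vector ([1, 0, 0]) the filter simply returns every other sample, which is a quick sanity check that the T/2-to-T decimation is wired correctly.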
The FFE is effective in reshaping the signal but may not perform well on channels with spectral nulls. The DFE (with T spacing) is a nonlinear filter that uses decisions on previous bits to eliminate the ISI on the current bit.
In other words, the distortion that previous bits caused on the current bit is subtracted. The advantage of the DFE is that its feedback filter, which removes additional ISI, operates on noiseless quantized decision levels; its output is therefore free of channel noise.
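A minimal sketch of this decision-feedback idea, assuming binary NRZ signaling with +/-1 decision levels (the function name and tap ordering are our own conventions):

```python
def dfe_slice(ffe_out, fb_taps):
    """Illustrative T-spaced DFE: subtract ISI estimated from past
    decisions, then slice to +/-1 (binary NRZ assumed).

    ffe_out: symbol-rate samples from the FFE output.
    fb_taps: feedback tap weights; fb_taps[k] multiplies the decision
             made k+1 bits ago.
    """
    decisions = []
    for x in ffe_out:
        # ISI estimated from previously decided (quantized, noise-free)
        # symbols; empty history contributes zero.
        recent = list(reversed(decisions[-len(fb_taps):]))
        isi = sum(w * d for w, d in zip(fb_taps, recent))
        y = x - isi
        decisions.append(1.0 if y >= 0 else -1.0)
    return decisions
```

For example, a channel that adds 0.4 of the previous bit to each received sample is fully cleaned up by a single feedback tap of 0.4, recovering the transmitted bit sequence exactly.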
Figure 4. Major building blocks of EDC.
The basic idea of a DFE is that if the values of previously detected symbols are known, the ISI contributed by those symbols can be canceled exactly at the output of the FFE filter by subtracting past symbol values with appropriate weighting. The forward and feedback tap weights can be adjusted simultaneously to minimize the mean square error (MSE).
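The simultaneous tap adjustment described above is commonly done with an LMS-style gradient update. The sketch below is one possible realization, not the vendor's algorithm: it assumes a training mode where known symbols stand in for decisions, uses T spacing for simplicity (real FFEs are often T/2-spaced), and all names and the step size are placeholders:

```python
import random

def lms_ffe_dfe(rx, tx, n_ff, n_fb, mu):
    """Joint LMS adaptation of FFE (feed-forward) and DFE (feedback)
    taps, minimizing the mean square error e = tx - y.
    Training mode assumed: known symbols tx serve as past decisions.
    """
    ff = [0.0] * n_ff
    ff[0] = 1.0                      # start from a pass-through FFE
    fb = [0.0] * n_fb
    for k in range(len(rx)):
        # Current FFE input window and past-decision history (zero-padded).
        x = [rx[k - i] if k - i >= 0 else 0.0 for i in range(n_ff)]
        d = [tx[k - 1 - i] if k - 1 - i >= 0 else 0.0 for i in range(n_fb)]
        # Equalizer output: forward filter minus feedback ISI estimate.
        y = sum(w * v for w, v in zip(ff, x)) - sum(w * v for w, v in zip(fb, d))
        e = tx[k] - y                # error against the known symbol
        # Stochastic gradient (LMS) updates on both filters.
        for i in range(n_ff):
            ff[i] += mu * e * x[i]
        for i in range(n_fb):
            fb[i] -= mu * e * d[i]
    return ff, fb
```

On a noise-free toy channel that adds 0.3 of the previous symbol, the adapted taps settle near the ideal values (forward tap near 1.0, feedback tap near 0.3), since those drive the MSE to zero.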
Regardless of the choice of equalization technique, one of the most challenging tasks for an EDC device is to achieve a stable and repeatable adaptive algorithm that corrects ISI and reflections while also tracking transmitter variations. The error channel, which computes the MSE and guides the FFE and DFE tap coefficients, must correlate with the actual bit error rate measured by a bit error rate tester (BERT). This ensures that the adaptive algorithm does not degrade the hardware performance and that it achieves stable and repeatable operation.
A typical system designer wants to ensure that the EDC adaptive algorithm converges within the Fibre Channel auto-speed negotiation requirements without degrading link performance. Although it is often assumed that an adaptive algorithm is unnecessary in backplane applications, it can play a key role in compensating for short- and long-term temperature variations of the transmitter as well as in ensuring seamless interoperability.
In addition to converging successfully and reliably, the algorithm must converge quickly. A typical convergence time of an FFE/DFE implementation in 10 Gigabit Ethernet applications is on the order of 1 second, while the EDC adaptive algorithm must converge within 154 ms to meet Fibre Channel auto-speed negotiation requirements.
It is therefore important to reduce the algorithm's convergence time by an order of magnitude without degrading performance. By studying the midplane and transmitter output characteristics, we were able to reduce the convergence time to less than 100 ms.