Several years back, bottlenecks in networking architectures most often occurred in moving data between individual nodes in a network. While these links still introduce some delay, they are no longer the key headache for designers. Rather, the interfaces linking subsystems (host processors, memory devices, network processors, etc.) have now emerged as a key thorn in networking system architectures.
When dealing with interfaces, skew and other impediments, such as lossy interconnects, are the main cause of problems. These impairments limit how fast an interface can be clocked, and therefore directly impact data transfer rates.
Any high-speed interface--whether it is XAUI, InfiniBand, SPI-4, SPI-5, or 3GIO--faces skew and other impairment problems. Through automatic skew compensation, adaptive equalization, and other design techniques, designers can take on these impairments head on.
Below we'll lay out the impact of skew and lossy interconnects on a system architecture. We'll also examine the impact automatic skew compensation and other techniques have on interface design.
Skew--The Main Culprit
In a parallel data link, multiple data bits (often an entire 8-bit, 16-bit, or 32-bit "word") are sent simultaneously, with one data bit transmitted on each of the wires (or wire pairs, if differential signaling is used) comprising the parallel link. Each data bit is transmitted at the same moment, and ideally each signal should arrive at the opposite end at exactly the same time as the others. When there are differences in the arrival times from line to line, the result is known as skew.
Figure 1 graphically depicts the skew problem. The transmit buffers of a parallel link are shown in red on the left-hand side of the figure, while the receive buffers (at the other end of the link) are shown in blue at right. Near the left side of the figure, three square-wave pulses (three "bits" of data) can be seen--one on each data line. Since each of these pulses is transmitted at exactly the same time as the others, the three pulses line up vertically.
Figure 1: Diagram of skew in a communication interface.
At the right-hand side of Figure 1, the three data pulses can be seen arriving at their respective receive buffers. Note that the uppermost data bit arrived ahead of the others; it experienced the least amount of delay.
The signal line second from the top had the most delay, while the delay for the bottom one falls somewhere between the others. The difference in time between the arrival of the "fastest" bit and the slowest can be considered the overall amount of skew for this link.
At the receiving end of a parallel data link, each of the incoming pulses is captured by a sample-and-hold circuit, and the data word can be read by taking the value of each sample-and-hold circuit. But if the data bits don't all arrive at the same time, just when should the sample-and-hold circuit be read?
The obvious way to deal with skew is to clock the circuit at a slow enough rate that the skew is irrelevant. Low clock rates mean that each data bit is stretched out in time, and if the data bits are wide enough, all of the arriving data bits will overlap in time. As long as the sampling time falls somewhere within this overlap period, the incoming data word can be read with confidence.
The problem, of course, is that skew sets hard and fast limits on the rates at which parallel links can be clocked. The only way around this is to find some way to reduce the amount of skew in the link.
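The speed limit that skew imposes can be made concrete with a small sketch (hypothetical function names and example numbers, not from the original article): given a bit period and each line's propagation delay, the valid sampling window is the interval during which every line still carries the same data bit.

```python
def sampling_window(bit_period_ns, line_delays_ns):
    """Return the (start, end) interval during which every line of a
    parallel link carries the same data bit, or None if line-to-line
    skew has closed the window entirely."""
    earliest = min(line_delays_ns)    # fastest line: its bit ends first
    latest = max(line_delays_ns)      # slowest line: its bit starts last
    start = latest                    # all bits are present from here...
    end = earliest + bit_period_ns    # ...until the fastest bit ends
    return (start, end) if end > start else None

# At 100 Mbps (10 ns bit period), 3 ns of skew still leaves a 7 ns
# overlap in which to sample the word with confidence.
print(sampling_window(10.0, [5.0, 8.0, 6.5]))   # → (8.0, 15.0)

# Shrink the bit period below the skew and the window vanishes:
print(sampling_window(1.0, [0.0, 2.0]))          # → None
```

The second call illustrates the hard limit described above: once the bit period drops below the total skew, no sampling instant captures a coherent word, regardless of receiver quality.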
The primary cause of interconnect skew is minute differences in the dielectric constants among the wires that make up the parallel interface. The dielectric constant affects the wave velocity (velocity of propagation); thus, differences in the dielectric constants will result in each wire having a slightly different velocity of propagation from the others.
Also, each wire comprising the parallel link--whether it be a wire within a cable or a copper trace on a PCB--should have exactly the same physical length as the others. Any line-to-line length mismatches will further contribute to skew. Additional skew is introduced by chip package and circuitry propagation delay differences.
Clearly, these issues do not get any better as the interconnect path grows in length and the system's operating speed increases.
Lossy Interconnect Effects
Lossy interconnects also cause headaches during the interface design process. Their effects increase the amount of eye pattern jitter present in the data streams.
Figure 2 shows the effects of jitter on the eye pattern. A good eye pattern is an "open" one, in which the oscilloscope traces closely (or at least loosely) approximate a series of square waves.
Figure 2: Jitter causes the eye of a signal to close, thus reducing overall throughput.
Jitter can be defined as random instantaneous timing variations in what should be a precisely periodic waveform. Jitter produces a "smearing" of the oscilloscope trace in the horizontal axis, as successive signal pulses arrive either earlier or later than they should. This smearing results in a filling-in of the eye pattern, which is undesirable, as it makes it more difficult for the signal discrimination circuitry at the receiving end of the link to extract the data from the incoming signal. As can be seen in Figure 2, this waveform has been seriously degraded.
As the amount of eye pattern jitter increases, the frequency of operation reaches a limit beyond which data can no longer be reliably transmitted and received. Data bit rate, bit stream randomness, interconnect length, line impedance, and dielectric material are some of the parameters that increase lossy cable effects. These factors impose a hard limit on the maximum operating distance and overall data throughput of the link.
Solving Skew Problems
Designers can counteract skew, lossy interconnect effects, and other impairments by taking a three-part approach to improving interface performance. The three parts are:
- Automatic skew compensation
- Active equalization
- Optional AC-encoded data
1. Automatic Skew Compensation
Line trimming has traditionally been used as a skew compensation solution for both copper and fiber optic links. However, this costly, labor-intensive solution is impractical for high volume production with data rates in excess of 100 Mbps.
A better method of compensating for interconnect skew is to equalize the total delay of each signal path by adding additional delay to those signal paths exhibiting the lower propagation delays, in order to match the delay of the slowest signal path. The additional amount of delay required to equalize or "pad" the delay of a signal path is introduced via an electronically adjustable delay element.
This method of interconnect skew compensation is performed at the destination node. It has the advantage of flexibility, as well as the possibility of being automatically applied to a link.
Figure 3 illustrates automatic skew compensation. The left-hand side of this figure shows the waveforms on three of the data lines making up a parallel interface. (The fourth waveform, at the bottom, is the clock signal.) It can be seen that skew has resulted in staggered arrival times for the individual pulses.
Figure 3: Example of automatic skew compensation
The right side of Figure 3 shows the result of adding skew compensation. Suitable amounts of delay have been added to the faster signal lines, in order to bring them into alignment with the slowest signal. Compensating for skew in this way eliminates the need to resort to a slower clock rate. In other words, it eliminates a troublesome speed limit that designers of high-speed buses and communications equipment would otherwise face.
The amount of additional delay added to each signal path is measured during a periodic "training" sequence. The training sequence is initiated when a special training packet is injected in the signal paths of the data and control signal streams at the transmitting node.
A preamble is used to identify the training packet as such, and also to restore disparity of the data and control signals to 0. (Over an arbitrary period of time, an equal number of 1s and 0s should be sent.)
Following the preamble, a phase-correction sequence generates isolated timing edges for each signal; these timing edges are guaranteed to be aligned at the output of the transmitting node.
When the training packet is received at the other end of the link, the receiving node measures the relative time at which each isolated timing edge is received. The arrival time of each timing edge represents the delay of that particular signal path, relative to the other signal paths. By measuring these relative arrival times of the isolated timing edges, the amount of delay required to "pad" that signal path can be determined.
There are two primary advantages to this method of interconnect skew compensation. First, training the receive node for skew compensation is very quick, requiring only a single packet time to fully de-skew the link. This allows for faster recovery in case of a link error. Second, this method is independent of the interconnect length or type, and only requires that the total skew observed at the destination node be less than or equal to the maximum allowable skew.
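The receiver-side calculation can be sketched in a few lines (hypothetical names and units, not code from the article): given the arrival time of each line's training edge, the pad required to align every line with the slowest is simply its difference from the latest arrival, subject to the link's skew budget.

```python
def deskew_from_training(edge_arrival_ns, max_skew_ns):
    """Given the per-line arrival times of the aligned training edges,
    return the delay to add ("pad") to each signal path so that all
    paths match the slowest one. Raises if the observed skew exceeds
    what the adjustable delay elements can absorb."""
    observed_skew = max(edge_arrival_ns) - min(edge_arrival_ns)
    if observed_skew > max_skew_ns:
        raise ValueError("observed skew exceeds the maximum allowable skew")
    latest = max(edge_arrival_ns)
    # Faster paths (earlier edges) get more added delay; the slowest gets 0.
    return [latest - t for t in edge_arrival_ns]

# Edges arriving at 5.0, 8.0, and 6.5 ns relative to packet start:
print(deskew_from_training([5.0, 8.0, 6.5], max_skew_ns=5.0))  # → [3.0, 0.0, 1.5]
```

Note how the budget check mirrors the constraint stated above: the method works for any interconnect length or type, provided the total skew at the destination stays within the maximum allowable skew.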
2. Loss Equalization
Loss equalization is employed to address the frequency-dependent loss effects of copper interconnects, which are primarily dielectric loss (the material's loss tangent) and skin effect. Equalization increases the usable bandwidth of the interconnect by providing a maximally flat response curve over the range of frequencies required to transmit data. For example, if the maximum data rate of a signal is 200 Mbps, the flat frequency bandwidth required is 100 MHz.
Equalization may be active or passive, but for the purposes of this paper, discussions will focus on active equalization. The less expensive passive equalization techniques achieve increased bandwidth at the cost of reduced signal amplitude seen at the receiver, which translates into increased power dissipation and a limit on overall distance of transmission.
A 3.125-Gbps data rate over a copper interconnect using differential signaling (one bit per wire pair) requires a minimum flat response bandwidth of 1.5625 GHz from the copper cable. Most copper interconnects with a differential impedance of 100 ohms have a -3 dB upper cut-off frequency well below 1 GHz, assuming fairly long interconnect lengths. These factors combine to limit the maximum attainable distance to less than 40 inches.
To overcome the limits of the copper interconnect, an active signal equalization circuit is implemented to compensate for signal attenuation. The active signal equalizer extends the operational frequency and distance of the signal by flattening the attenuation profile of the interconnect using amplification modulation techniques.
3. AC Encoded Data
Data stream randomness is one of the factors that will increase eye pattern (data dependent) jitter present in a data stream. AC encoding of a data stream ensures the signal will have one or more transitions every N bits--thus reducing its randomness. Hence, a data stream segment of all 1s or 0s is limited to less than N bits.
Reducing the randomness of the data stream narrows its dynamic frequency range. This lowers the amount of distortion introduced by the copper interconnect, since less high-frequency information is lost; eye pattern jitter is thus reduced.
One commonly used AC encoding algorithm is the 8B/10B coding scheme, chosen for its widespread implementation throughout the industry.
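The two properties an AC-coded stream must satisfy can be checked directly. The sketch below (illustrative only, not a real 8B/10B encoder) verifies a bounded run length, which is what limits segments of all 1s or 0s, and a running disparity near zero, which is what the training preamble restores; 8B/10B specifically limits runs to 5 identical bits.

```python
def max_run_length(bits):
    """Length of the longest run of identical bits in the stream."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def running_disparity(bits):
    """Count of 1s minus count of 0s. Over an arbitrary period an
    AC-coded stream keeps this near zero (a DC-balanced signal)."""
    return sum(1 if b else -1 for b in bits)

# A stream with runs no longer than 2 and a small disparity:
stream = [1, 0, 1, 1, 0, 0, 1, 0]
print(max_run_length(stream))     # → 2
print(running_disparity(stream))  # → 0
```

Guaranteeing a transition at least every N bits bounds the lowest frequency in the stream, which is exactly the narrowing of dynamic frequency range described above.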
Figure 4 shows the effect of the application of AC encoding and active equalization upon the eye pattern. The cleaner, well-defined, and more "open" eye pattern shown here makes it far easier for the receiver to reliably extract the data from this signal. This open eye pattern further raises the upper frequency limit--and thus, the maximum data rate--that can be obtained.
Figure 4: Diagram showing the impact of AC encoding and active equalization on an interface channel.
The Impact of Lower Impairments
Skew is a problem faced by any high-speed networking and communications interface. The higher the data rate, the more troublesome channel impairments become. Using the techniques described above, designers can reduce jitter problems and, in turn, increase overall throughput in a communication system architecture.
About the Authors
Hansel Collins is chief science officer at TriCN Inc. He holds an M.S. in Computer Engineering from Boston University and can be reached at email@example.com.
Ronald Nikel is CEO of TriCN Inc. He received his M. Eng. Electrical Engineering from Cornell University and can be reached at firstname.lastname@example.org.