By now, most digital system engineers know that when their signals reach high enough frequencies, they need to worry about losses in PCB traces and other conductors carrying their signals. At the lower frequencies typical of most designs in the 1990s (up to about 200 MHz), loss was irrelevant, or at most a minor effect that needed little attention. Other signal-integrity issues such as reflection, total delay, and crosstalk were of much greater concern. Analysis tools could give accurate answers by assuming "lossless" transmission lines, which simulate quickly and are easy to characterize.
But as design frequencies increased into the 300-400 MHz range, and
especially with the introduction of super-high-speed gigabit-per-second
serial signaling, loss suddenly became impossible to ignore. Oscilloscope
waveforms at receiver ICs showed serious signal degradation, with amplitudes
significantly attenuated and edges noticeably rounded and delayed.
Simulators were forced to respond by adding lossy transmission-line models.
Still, many designers are unsure when loss is really important, what causes
it, how it can be measured, and how to minimize it. This article examines
all of these issues.
The roots of signal loss
First of all, what exactly is meant by "loss?"
Figure 1 makes the answer very graphic. It shows a clock signal driven down 36 inches of stripline PCB trace, buried in typical FR-4 dielectric material. Note how radically different the signal looks at the far end (red waveform) from how it looks at the driver end of the trace (purple): its amplitude is badly attenuated and its rise/fall time is severely lengthened. In fact, the signal looks almost like a sine wave by the time it passes through the trace.
Why does the signal look so poor at the trace end? Obviously, it's lost a lot of energy as it propagated; the amplitude decrease alone shows that. But notice that its high-frequency components seem to have been particularly hard-hit. The sharp edges of the input signal (which result from high-frequency content) are almost completely gone in the output. But why?
Figure 1 A TDR signal driving down 36 inches of stripline PCB trace, in a typical FR-4 dielectric. Note the serious amplitude and edge-time degradation when the driver signal (in purple) reaches the far end of the trace (in red).
Some of the loss experienced by our signal is due to an energy-eating mechanism in the conductor, and some to another effect in the dielectric. The culprit in the conductor is just resistance; in the FR-4, the blame falls on "dielectric loss," which steals energy directly from the signal's field. The electric field distribution in the dielectric material is shown in Figure 2.
Figure 2 A cross section of Figure 1's PCB, showing conductor, two planes, dielectric, and the electromagnetic field lines that result when a signal travels on the conductor. Electric field lines are in blue; magnetic in red.
Any electrical designer expects a PCB trace to have some resistance. What might be surprising is that the resistance at high frequencies is much greater than at DC. The reason is "skin effect," the tendency of a high-frequency current to crowd to the edges of a conductor, rather than flow through the entire available cross section. The resistance seen by a high-frequency signal is much larger than one would expect, and it keeps increasing in proportion to the square root of frequency.
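The square-root relationship is easy to check numerically. The sketch below, a rough model rather than a field solver, estimates the per-meter resistance of a rectangular copper trace: at low frequencies the current uses the full cross section (DC resistance), while at high frequencies it crowds into a shell roughly one skin depth thick around the perimeter. The trace dimensions in the usage note are illustrative assumptions.

```python
import math

def skin_depth(f_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth in meters for a conductor (defaults: copper)."""
    mu = mu_r * 4e-7 * math.pi
    return math.sqrt(resistivity / (math.pi * f_hz * mu))

def ac_resistance_per_m(f_hz, width_m, thickness_m, resistivity=1.68e-8):
    """Rough per-meter resistance of a rectangular trace.

    Below the skin-effect corner, the DC value applies; above it,
    current crowds into a skin-depth-thick shell around the
    perimeter, so resistance grows as sqrt(frequency).
    """
    r_dc = resistivity / (width_m * thickness_m)
    delta = skin_depth(f_hz, resistivity)
    perimeter = 2 * (width_m + thickness_m)
    r_skin = resistivity / (delta * perimeter)
    return max(r_dc, r_skin)
```

For a hypothetical 4-mil-wide (about 0.1 mm), half-ounce (35 um) copper trace, the model puts the skin depth near 2 um at 1 GHz, and quadrupling the frequency doubles the resistance, as the square-root law predicts.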
Dielectric loss is related to the fact that all dielectrics contain polarized molecules that move in the presence of EM fields. High-frequency fields oscillate very quickly, and as the polar molecules move in sync with the field, they begin to heat the dielectric material. There's only one possible source for the heat: the energy of the signal itself. It turns out that dielectric loss increases relentlessly, in direct proportion to signal frequency.
Buried away in the preceding paragraphs is an ominous fact. Skin resistance scales as the square root of frequency, but dielectric loss scales directly with frequency, which means that at a high enough frequency, the attenuation from dielectric loss should overtake the attenuation from skin loss.
Figure 3 illustrates that this is exactly what happens. The plot shows, for the same conductor we've been discussing, the resistive loss (in red) and the dielectric loss (in green). Note how dielectric loss zooms past resistive loss at high frequencies. This means that for very-high-speed signaling, the total attenuation is more and more determined by the dielectric materials used.
Figure 3 For Figure 1's conductor, a plot of resistive (red) and dielectric loss (green), versus frequency (log scale). Note that at low frequencies, resistive loss dominates, but at high frequencies, dielectric loss overtakes resistive.
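The crossover can be located with the standard first-order attenuation formulas: dielectric attenuation (in nepers per meter) is pi * f * sqrt(eps_r) * tan_d / c, which is linear in frequency, while skin-effect attenuation is R_ac / (2 * Z0), which grows as sqrt(f). The sketch below finds where the two curves cross for assumed, representative values (FR-4-like eps_r = 4.3, tan_d = 0.02, 50-ohm trace with about 30 ohms/m of skin resistance at 1 GHz); these numbers are illustrative, not taken from Figure 3.

```python
import math

C = 3e8  # speed of light, m/s

def alpha_dielectric(f_hz, eps_r=4.3, tan_d=0.02):
    """Dielectric attenuation in Np/m: linear in frequency."""
    return math.pi * f_hz * math.sqrt(eps_r) * tan_d / C

def alpha_conductor(f_hz, r_per_m_at_1ghz=30.0, z0=50.0):
    """Skin-effect attenuation in Np/m: grows as sqrt(frequency)."""
    r = r_per_m_at_1ghz * math.sqrt(f_hz / 1e9)
    return r / (2 * z0)

def crossover_hz(lo=1e6, hi=1e12):
    """Bisect (on a log scale) for the frequency where dielectric
    loss overtakes skin-effect loss."""
    for _ in range(100):
        mid = math.sqrt(lo * hi)  # geometric midpoint
        if alpha_dielectric(mid) > alpha_conductor(mid):
            hi = mid
        else:
            lo = mid
    return lo
```

With these assumed values the crossover lands in the hundreds-of-MHz range, which squares with the article's observation that loss became unavoidable as design frequencies pushed past 300-400 MHz.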
This is where good old FR-4 becomes a big problem. FR-4 is basically a low-cost, time-honored, sloppy mixture of glass fibers and glue; a material chosen for anything but its loss properties and tight process control. Yet (mostly for reasons of cost), digital designers are still stubbornly trying to push high-frequency signals through it. As the preceding figures show, this will become very difficult and eventually impossible as data rates go higher and higher.
In the eye of the beholder
As bad as the signal in Figure 1 may look, its behavior in a modern high-speed system is even worse than one would expect. This only becomes obvious when we view the signal in a special way: the eye diagram, de rigueur these days in very-high-speed signaling.
For many years, digital designs relied on a steady diet of well-known techniques: wide parallel buses, synchronous clocks, simple setup-and-hold-type timing, and rail-to-rail switching. But as the quest for ever faster systems pushed on, eventually the old strategies became strained: crosstalk between signals became harder to avoid, skew control became difficult, and timing margins grew impossibly tight.
Finally, about five or six years ago came a push into new types of signaling that promised some relief, first in the form of LVDS (low-voltage differential signaling). These devices introduced two key changes: a lower signal-swing voltage, which (even though it decreased noise margins) allowed for slower slew rates; and differential signals, which greatly reduced crosstalk and radiated emissions and allowed for narrower, faster data paths.
The past two years have seen an explosive levering of LVDS-type signaling into a full-blown revolution: the introduction of very-high-speed serial buses, which push data rates (and signal frequencies) from the hundreds-of-MHz range to well above 1 GHz. Suddenly, wide parallel buses and global clocks (and in some cases, explicit clocks of any kind) have become passé; "in" are super-fast, very narrow, all-differential, low-voltage-swing standards, such as PCI Express, RapidIO, and XAUI.
Which brings us back to eye diagrams. In the new high-speed serial world, data rates are so high (to compensate for the narrower data paths, each "lane" has to carry more traffic) that the time between individual bits is often scarcely longer than the rise/fall time of the signals carrying the bits. Loss suddenly becomes a big issue because any increase in the signal's rise/fall time risks failure of the entire bit stream. Worse, with such little margin for error, a phenomenon called intersymbol interference (ISI) becomes dominant.
ISI means basically that the data rate is so high relative to the signal's rise time that a given bit's exact shape and timing depends on the previous bit history. This can occur because the driver IC itself can't guarantee perfectly regular timing between bits (a form of "jitter"), or because of reflections and other effects which die out more slowly than the bit interval, or because of small variations in voltage level due to the bit history. A long series of 1's, for example, causes the average voltage level to drift upward. Often, all of these effects apply simultaneously.
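The history dependence is easy to reproduce with even a crude channel model. The sketch below, a toy model rather than a real lossy-line simulation, drives a hypothetical first-order channel whose settling time is longer than one bit interval and samples the voltage at the end of each bit; the time constant and sample counts are arbitrary illustrative choices.

```python
def received_levels(bits, samples_per_bit=8, tau_bits=1.5):
    """Drive a slow first-order channel with a bit stream and return
    the voltage sampled at the end of each bit interval.

    tau_bits > 1 means the channel cannot fully settle within one
    bit time, so each sample depends on the preceding bit history:
    intersymbol interference in miniature.
    """
    tau = tau_bits * samples_per_bit
    v, out = 0.0, []
    for b in bits:
        target = 1.0 if b else 0.0
        for _ in range(samples_per_bit):
            v += (target - v) / tau  # discrete relaxation toward the driven level
        out.append(v)
    return out
```

Running the pattern 1,1,1,1,1,0,1 through this model shows exactly the drift the article describes: the 1 that follows a long run of 1s sits noticeably higher than the 1 that follows a 0, and the lone 0 never makes it all the way back to ground.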
In the "classic" days of signal-integrity analysis, it was possible to focus on a single rising or falling edge of a signal to judge its "goodness": whether it rang, overshot, needed termination, and so forth. But in high-speed-serial signaling, multiple bits have to be analyzed in sequence before any binding judgments can be made. And to make matters worse, it's never clear what bit history (what sequence, how long) will produce the worst-case behavior. The only recourse is to drive a data path under test with a long, randomized bit sequence and hope that any pathological behavior becomes obvious.
Figure 4 shows a portion of such a simulation. The trace is deliberately long to incorporate a healthy dose of high-frequency loss, which shows up as signal attenuation and degraded edge times similar in length to the bit interval. The intersymbol interference is easy to see; the voltage levels in particular are clearly dependent on the history of the previous bits. Yet analyzing a data path with results in this form would be tedious at best. Imagine scrolling through hundreds or even thousands of bits, trying to imagine when a certain condition occurs that would confuse a receiver IC.
Figure 4 Portion of a high-speed serial data-path simulation, driven with a PRBS (pseudo-random bit sequence) to exercise as much inter-symbol interference as possible. Note the clear dependency of each bit's shape and position on the previous bit history.
Enter the eye diagram, a simple technique that compresses the results of a long simulation like that shown in Figure 4 into a single, easy-to-digest-and-interpret picture. Conceptually, an eye diagram is easy to create; just chop Figure 4's waveform at regular intervals related to the bit time, place each chopped segment on top of the previous segments, and view all the data in one overlaid pile. Figure 5 shows the results of this process, for exactly the same data as displayed in Figure 4.
Figure 5 Same simulation as in Figure 4, but with the data overlaid into an eye diagram. Note the small opening in the middle of the data: a nearly closed eye, which spells disaster for this signal path. Interpreting an eye diagram is much faster and more certain than scrolling through a long sequence of bits in standard simulation output. In a glance, the "openness" of the eye tells whether the data stream is acceptable or not.
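The chop-and-overlay step is simple enough to sketch directly. The functions below (hypothetical names, assuming a uniformly sampled waveform with a known number of samples per bit) fold a long waveform into eye segments and then measure the vertical opening at one sample point; the 0.5 decision threshold is an illustrative assumption.

```python
def fold_into_eye(waveform, samples_per_bit):
    """Chop a long simulated waveform into bit-interval segments and
    stack them; each segment is one overlaid trace of the eye."""
    n = len(waveform) // samples_per_bit
    return [waveform[i * samples_per_bit:(i + 1) * samples_per_bit]
            for i in range(n)]

def eye_opening(eye, sample_index=None):
    """Vertical eye opening at one sample point: the gap between the
    lowest 'high' trace and the highest 'low' trace (threshold 0.5).
    A small or negative result means the eye is closing."""
    if sample_index is None:
        sample_index = len(eye[0]) // 2  # default: mid-bit sampling point
    col = [seg[sample_index] for seg in eye]
    highs = [v for v in col if v >= 0.5]
    lows = [v for v in col if v < 0.5]
    if not highs or not lows:
        return 0.0
    return min(highs) - max(lows)
```

A real eye-diagram tool also handles clock recovery and fractional bit alignment, but the core idea is exactly this fold-and-overlay.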
The eye in Figure 5 is basically a disaster. Without even knowing the details of the particular signaling technology, it's fairly obvious that a receiver IC would have trouble recovering all (or maybe any) of the data sent by the driver. The received bits clearly drift badly in both time and voltage.
If they didn't, this picture would look a lot like an "eye": a wide opening in the middle of upper and lower "lids" formed by high and low signal states. Unfortunately, here the eye is practically closed. This is a data path in serious trouble, and the eye diagram makes it obvious at a glance.
Eyes wide open?
The signal in Figure 5 was generated under the following conditions: 40-inch trace, 4-mil trace width, standard FR-4 dielectric with "loss tangent" (a widely used measure of dielectric loss) of 0.02. In this section, let's look at how reducing loss improves signal quality and how eye diagrams make it immediately obvious when and how much improvement we get.
Earlier, we noted that one component of loss is due to the conductor itself, in the form of skin resistance. It follows that if we can coax a high-frequency signal's current to flow in a larger cross section of metal, the resistance to the signal will decrease and so will the loss.
We know that high-frequency current tends to crowd to the edges of the conductor, near the perimeter. So if we increase the conductor's perimeter, we should decrease the loss. The easiest way to increase a trace's cross-sectional perimeter is to widen it. A 4-mil trace width is nice from a routing-density standpoint, but not ideal for minimizing loss. Let's try increasing our trace width from 4 mils to 8 mils. Of course, we'll increase the dielectric thickness as well to maintain the 50-ohm characteristic impedance.
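Since skin-effect resistance scales roughly as the inverse of the perimeter the current can crowd into, the benefit of widening is easy to estimate. The one-liner below assumes a hypothetical half-ounce (1.4-mil-thick) copper trace; the thickness is an illustrative assumption, not a value from the figures.

```python
def relative_skin_r(width_mil, thickness_mil=1.4):
    """Relative skin-effect resistance of a rectangular trace,
    taken as inversely proportional to its cross-sectional
    perimeter (valid when the skin depth is much thinner than
    the trace)."""
    return 1.0 / (2 * (width_mil + thickness_mil))

# Going from 4 to 8 mils grows the perimeter from 10.8 to 18.8 mils,
# cutting the skin-effect resistance to roughly 57% of its old value.
ratio = relative_skin_r(8.0) / relative_skin_r(4.0)
```

So doubling the width doesn't quite halve the resistance (the thickness contributes to the perimeter too), but it comes close, which is why the 8-mil eye in Figure 6 opens noticeably.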
Figure 6 shows the two resulting eye diagrams, one atop the other (red for the 4-mil trace, yellow for the 8 mil). Interestingly, this simple change a wider trace width has indeed reduced loss and improved signal quality, as shown by the wider-open eye for the 8-mil-wide trace.
Figure 6 Same simulation as in Figure 5, run once with a 4-mil-wide trace (red eye diagram) and once with 8-mil (yellow). The wider trace has less skin-effect loss, and therefore better signal quality as indicated by the more-open eye.
Let's make another change that we know will further reduce signal loss. We've already attacked skin resistance; now let's go after dielectric loss. The easiest way to decrease dielectric loss is to change materials, from the relatively lossy FR-4 used in Figures 4 through 6 to something with better behavior. It turns out that such dielectrics exist, although they're not as low in cost or widely available as FR-4. Among these better materials, loss tangents can be as low as one-tenth FR-4's; let's try one called GML3000 with a loss tangent of 0.004.
Figure 7 shows the before and after effect. Sure enough, the signal quality has improved again; the low-loss eye (purple) is opened wider than its partner. Unfortunately, a PCB built from such an exotic dielectric would be more expensive than one built from FR-4. Just as we had to decrease our routing density to drop skin resistance (wider traces), now we're facing higher manufacturing cost as we try to drop dielectric loss.
Figure 7 Same simulation as in Figure 6's better eye, run once with a typical FR-4 dielectric (loss tangent = 0.02; yellow eye diagram) and once with a low-loss dielectric (loss tangent = 0.004; purple). The low-loss material makes a substantial improvement to the eye opening, but would result in a much costlier PCB.
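Because dielectric attenuation is directly proportional to the loss tangent, the payoff of the material swap can be estimated in a few lines. The sketch below uses the standard first-order formula with an assumed eps_r of 4.3 for both materials and an illustrative 2.5-GHz signal content; in reality the low-loss laminate would also have a somewhat different dielectric constant, so treat the numbers as ballpark only.

```python
import math

def dielectric_loss_db(length_in, f_hz, tan_d, eps_r=4.3):
    """Dielectric attenuation in dB over a trace of the given length,
    from alpha = pi * f * sqrt(eps_r) * tan_d / c (Np/m)."""
    alpha_np_per_m = math.pi * f_hz * math.sqrt(eps_r) * tan_d / 3e8
    return 8.686 * alpha_np_per_m * length_in * 0.0254  # Np -> dB, inches -> m

fr4 = dielectric_loss_db(40, 2.5e9, 0.02)        # roughly 9-10 dB
low_loss = dielectric_loss_db(40, 2.5e9, 0.004)  # roughly 2 dB
```

Because the formula is linear in the loss tangent, a 5x reduction in tan-delta buys a 5x reduction in dielectric attenuation (in dB), which is why the purple eye in Figure 7 opens so much wider.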
We've explored the concept of signal loss, described what causes it, shown how its effect on high-speed signals is measured, and seen a few examples of how to minimize it. Clearly, loss is an increasingly serious problem in high-speed systems, particularly in the new world of very-high-speed serial signaling.
As data rates increase to the point where signal rise/fall times are on the same order as bit intervals, eye diagrams are an essential tool in judging whether a data stream is so severely affected by loss that a receiver IC can no longer reliably recover the data. Eye diagrams can also make it immediately obvious whether attempts to minimize loss actually improve signal quality, and by how much.
Steve Kaufer is director of engineering for high-speed tools at Mentor
Graphics. He held similar positions at Innoveda and PADS Software, and prior
to joining PADS was co-founder of HyperLynx, an early supplier of
signal-integrity software. He has experience in both hardware and software
engineering, and holds degrees in electrical engineering and physics from
Seattle University. He can be reached at firstname.lastname@example.org.
Eric Bogatin received his BS in Physics from MIT and his MS and PhD in Physics from
the University of Arizona in Tucson in 1980. For more than 20 years, he has been active
in the signal integrity and interconnect design field. He worked for many
years at AT&T Bell Labs, Raychem Corp, Sun Microsystems and Ansoft.
Recently, he merged his consulting company, Bogatin Enterprises, with
GigaTest Labs, where he is the CTO and teaches short courses on signal
integrity. He has written 3 books and over 100 papers on this topic. He can
be reached at email@example.com.