Current long-haul and metro SONET OC-192 optical links (10 Gbits/s) are limited to about 80 km of reach over single-mode fiber (SMF), mainly due to impairments in the fiber. Similarly, in data centers and building backbones, 10-Gigabit Ethernet (10-GbE) links are limited to less than 26 m when running over legacy multimode fiber (MMF, OM1 type) due to the effects of signal dispersion at these high rates. The bottom line: dispersion has a dominant effect on overall optical-link performance, forcing carriers and IT managers to deploy lower-speed OC-48 (2.488 Gbits/s) or 1-GbE links for most long- and short-haul networking applications.
As the demand for bandwidth increases, carriers and IT managers are looking to cost-effectively scale these networks up to native 10 Gbits/s using the existing infrastructure, without having to deploy expensive or bulky dispersion-compensating fiber (DCF). Currently, OC-48 and 1-GbE links can operate beyond 80 km and 220 m, respectively, without dispersion becoming an overwhelming factor in signal integrity. However, upgrading these same links to 10 Gbits/s will cause signal distortion that limits how far the signal can travel unless some form of dispersion compensation is used.
Electronic dispersion compensation (EDC), which forms the foundation of emerging standards from both the Optical Internetworking Forum (OIF) and the IEEE 802.3 working group, compensates for optical dispersion in the electrical domain. EDC was designed specifically to address the three main types of dispersion that lead to link impairment: chromatic, modal, and polarization-mode. These interference sources have risen to the forefront as engineers evolve links to native 10 Gbits/s: because dispersion effects scale with symbol rate, increasing the signal speed makes them more dominant.
To address the dispersion issue, standards are being developed to assist users in upgrading their networks. The OIF, in collaboration with the ITU, has created a long reach SMF EDC project for 10-Gbit/s SONET links operating over 145 km (120 km with worst case fiber), allowing a seamless upgrade from OC-48. The IEEE has also been working on a new standard to upgrade 1-GbE links to 10-GbE on existing MMF using EDC. With these standards, EDC products are being developed or are shipping from multiple suppliers to compensate for these known interference sources, substantially improving signal quality and overall link reliability (see the table).
Chromatic dispersion, a result of the fiber's material and waveguide properties, is a phenomenon in which a light pulse spreads as it travels over distance (Fig. 1). A laser's output has a finite spectrum composed of different wavelengths, and each wavelength travels through the fiber at a slightly different speed. The most commonly deployed SMF fibers have a dispersion parameter of about 17 ps/(nm·km) at 1550 nm, the operating wavelength of long-haul transmission systems.
1. A light pulse is composed of different wavelengths, and the greater the distance the light travels, the wider the pulse spreads. Chromatic dispersion results in energy from adjacent pulses interfering with each other, leading to inter-symbol interference.
When a pulse spreads, energy from adjacent pulses begins to interfere, leading to what is commonly referred to in the electrical domain as inter-symbol interference, or ISI. The problem with ISI is that as one symbol spills over into another, it shifts that symbol's level. This causes errors because the symbol is no longer at an ideal level where its value is easily distinguished by the receiver.
Chromatic dispersion is quantified by the fiber's dispersion parameter (for SMF-28 fiber at 1550 nm, 17 ps/(nm·km)). As an example, a pulse with a 1550-nm center wavelength traveling over 140 km accumulates a total chromatic dispersion of about 2400 ps/nm, which is the spec for the upcoming OIF/ITU SMF long-reach application code. Assuming a 0.1-nm optical bandwidth, that corresponds to roughly 240 ps of pulse spreading.
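This back-of-the-envelope dispersion budget can be sketched in a few lines. All constants come from the article (17 ps/(nm·km), 140 km, 0.1-nm bandwidth, 10-Gbit/s line rate); the variable names are illustrative, and this is not a real link-budget tool:

```python
# Illustrative chromatic-dispersion figures for the 140-km SMF link
# described in the text. Constants are taken from the article.

D_PS_PER_NM_KM = 17.0   # SMF-28 dispersion parameter at 1550 nm
LENGTH_KM = 140.0       # long-reach link length
SOURCE_BW_NM = 0.1      # assumed optical bandwidth of the source
BIT_RATE = 10.0e9       # OC-192 line rate, bits/s

accumulated = D_PS_PER_NM_KM * LENGTH_KM      # ps/nm (~2380, spec'd at 2400)
pulse_spread_ps = accumulated * SOURCE_BW_NM  # temporal spread, ps
ui_ps = 1e12 / BIT_RATE                       # one unit interval, ps
spread_in_ui = pulse_spread_ps / ui_ps

print(f"Accumulated dispersion: {accumulated:.0f} ps/nm")
print(f"Pulse spread: {pulse_spread_ps:.0f} ps ({spread_in_ui:.2f} UI)")
```

At 10 Gbits/s one UI is 100 ps, so a 238-ps spread is well past the roughly 0.5-UI limit of a conventional receiver.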
Modal dispersion is specific to the multimode fiber used in today's short-reach data centers and building backbones. It's the result of different modes of light arriving at the receiver at different times. Because the fiber isn't perfectly symmetrical, imperfections cause the various modes to propagate at different speeds as the light travels over distance. These imperfections cause the light to spread, or disperse, and adjacent pulses potentially overlap (Fig. 2).
2. Imperfections and degradation in fiber over time cause different modes to propagate through the fiber at different speeds, causing light to disperse and potentially overlap.
Depending on the fiber, these imperfections may result in pulse spreading across several unit intervals (UIs). In this case, a UI represents one symbol at the transmission baud rate. A dispersion of one UI means that a symbol begins to interfere with adjacent symbols in the same symbol stream. Because multiple light frequencies share the same fiber, dispersion can also spread energy across frequencies and further impact link performance in short-reach applications. Systems without EDC that use traditional receivers can recover an optical signal only if the dispersion is less than about 0.5 UI over the fiber. The new IEEE spec for running 10 GbE up to 220 m (with OM1-type 62.5-micron fiber) may result in over 4 UI of dispersion, which is why EDC is required in these systems.
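Converting a pulse spread into UIs is a one-line calculation. In this sketch, 10.3125 Gbaud is the standard serial 10-GbE signaling rate, while the 400-ps spread is a made-up example chosen to land near the article's roughly 4-UI figure, not a measured OM1 result:

```python
# Sketch: expressing a modal pulse spread in unit intervals (UI).

BAUD_RATE = 10.3125e9     # serial 10-GbE line rate, symbols/s
ui_ps = 1e12 / BAUD_RATE  # one unit interval in ps (~97 ps)

spread_ps = 400.0         # illustrative modal spread over legacy MMF
spread_ui = spread_ps / ui_ps

print(f"UI = {ui_ps:.1f} ps, spread = {spread_ui:.2f} UI")
# Beyond ~0.5 UI a traditional receiver fails; ~4 UI calls for EDC.
```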
Polarization-mode dispersion (PMD) is primarily a concern in single-mode fiber applications, where a single pulse launched into the fiber appears as multiple pulses at the far end (Fig. 3). This occurs because optical fibers support two perpendicular polarization planes. If the fiber were perfectly round and free of stress, the two polarization modes would traverse the fiber in the same time, resulting in one pulse at the receiver.
PMD causes phase shifting of the pulse itself, and its effects are statistical and complex to measure on the optical link. For SMF-28-like fiber, PMD is specified at 0.1 ps/√km, and for applications under 80 km, PMD is manageable using standard receivers. However, as link distances increase to 140 km or more, the EDC at the receiver can compensate for slowly varying PMD effects. PMD also tends to be exacerbated when fibers are damaged, such as by kinks or compression in the link itself. A kink in the fiber, for example, could cause one component of the light to travel at 90 degrees to another, much as if a mirror had reflected it. Thus, while PMD increases with fiber length, it's possible for a short fiber to have worse PMD because of kinks. PMD is measured as differential group delay (ps).
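Because mean PMD grows with the square root of fiber length, its scaling is easy to sketch. The 0.1-ps/√km coefficient is the SMF-28 figure from the text; the link lengths are illustrative:

```python
import math

# Sketch: mean differential group delay (DGD) scales as sqrt(length).
PMD_COEFF = 0.1  # ps/sqrt(km), SMF-28-like fiber

for length_km in (80, 140, 500):
    mean_dgd = PMD_COEFF * math.sqrt(length_km)
    print(f"{length_km:4d} km -> mean DGD {mean_dgd:.2f} ps")

# Note: instantaneous DGD is statistical and can far exceed the mean,
# and a kinked short fiber can be worse than a clean long one.
```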
3. PMD results from delayed arrival of the two perpendicular polarization modes in the fiber. PMD will cause a single pulse to spread on the far end.
EDC algorithmic options
There are various equalization algorithms upon which an effective EDC implementation can be based. Continuous-time filters (CTFs) are the simplest to implement in silicon and have the advantage of consuming little power. A CTF adjusts the analog bandwidth of the optical front end by boosting or band-limiting the frequency band of interest.
A CTF can benefit optical applications that are limited by optical signal-to-noise ratio (OSNR) by band-limiting the channel, and it may also compensate for chromatic dispersion through wave shaping. A CTF has limited benefit for a noise-loaded channel that requires high-frequency boosting, as the boost will degrade the SNR.
In terms of EDC implementations, the most common architecture is based on a feed-forward equalizer (FFE) combined with a decision-feedback equalizer (DFE), which together employ a more sophisticated approach to signal conditioning than a CTF. The FFE and DFE are typically multi-tap architectures and are effective at compensating for ISI. When there's only a single UI of interference, the FFE/DFE only has to determine whether a symbol has spread into the adjacent symbol and add or subtract the correction appropriately. When there's more than one UI of interference, each symbol can be distorted by several adjacent symbols instead of just one. The design's FFE portion focuses on removing distortion that precedes a symbol's main energy point (the pre-cursor area), while the DFE compensates for interference that follows it (the post-cursor area).
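The FFE/DFE split described above can be illustrated with a toy symbol-spaced equalizer. The tap values, the binary slicer, and the ±1 signaling alphabet are all illustrative assumptions for this sketch, not details of any EDC product or standard:

```python
import numpy as np

def ffe_dfe_equalize(samples, ffe_taps, dfe_taps):
    """Toy symbol-spaced FFE/DFE for a binary (+1/-1) signal.

    The FFE is a linear feed-forward filter acting on received samples
    (addressing pre-cursor ISI); the DFE subtracts ISI estimated from
    past hard decisions (addressing post-cursor ISI).
    """
    n_dfe = len(dfe_taps)
    decisions = []
    ffe_out = np.convolve(samples, ffe_taps, mode="same")  # feed-forward stage
    for y in ffe_out:
        # Feedback stage: the most recent decision pairs with dfe_taps[0].
        fb = sum(d * c for d, c in zip(reversed(decisions[-n_dfe:]), dfe_taps))
        z = y - fb
        decisions.append(1.0 if z >= 0 else -1.0)  # hard slicer
    return decisions

# Example: a channel with 0.5 of post-cursor ISI, cleaned up by a
# one-tap DFE (the FFE is left as a pass-through).
symbols = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0]
rx = np.convolve(symbols, [1.0, 0.5])[:len(symbols)]
print(ffe_dfe_equalize(rx, [1.0], [0.5]) == symbols)  # True
```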
The most common FFE implementation is based on an analog distributed amplifier in which delay elements are realized using on-chip transmission lines. DFE implementations require a bit-rate clock and use the sampled data to determine signal quality. DFE designs can be either analog or digital, depending on the architecture chosen. With analog designs, power dissipation tends to be lower than in digital implementations because the analog signal doesn't have to be converted to the digital domain using high-speed A/D converters and DSP.
Performance stability over operating corners is another trade-off that must be accounted for when comparing analog versus digital FFE/DFE implementations. More sophisticated equalization architectures also exist in the form of the maximum-likelihood sequence estimator (MLSE), which uses Viterbi decoding algorithms. An MLSE is generally a digital design and requires a more sophisticated DSP approach to filtering. An MLSE can achieve better performance than a DFE, but the DSP implementation of the filter is generally more complex and often consumes two to four times the power. As a consequence, an MLSE is often reserved for applications where performance is paramount, such as those that experience severe nonlinearity or target ultra-long-haul optical fiber.
EDC implementation issues
Ideally, an EDC implementation can dynamically adapt to any link. After all, each optical link has different characteristics based on its length, quality, condition of the fiber, and other differentiating factors. Currently, long-haul optical links are hand-tuned for reach and wavelength using a DCF or some other fixed means. If the EDC algorithm is adaptable, network technicians can simply insert new line cards without having to tune settings for the individual link to which each card interfaces, bringing installations one step closer to true plug-and-play. In addition, as the characteristics of the fiber degrade over time (for example, as more kinks are introduced into the fiber), the line card can regularly retune the connection without human intervention. An adaptive EDC algorithm can also facilitate the use of one board design across multiple applications.
To be self-adaptive, the EDC algorithm is often implemented using well-established least-mean-squares (LMS) algorithms, applying feedback mechanisms along with a means of estimating signal quality. This is accomplished by closing the loop and enabling the line card to calibrate itself, making small adjustments to gains and filters to home in on the optimal signal response. When the EDC equalizer is integrated directly on the transceiver device, dynamic self-adaptation is more easily implemented.
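A minimal decision-directed LMS loop of the kind described above might look like the following sketch. The tap count, step size, and ±1 signaling are illustrative assumptions, not values from any shipping EDC:

```python
import numpy as np

def lms_equalizer_train(rx, n_taps=5, mu=0.01):
    """Decision-directed LMS: nudge FFE taps in small steps so the
    equalizer output converges toward the sliced (hard) decision,
    minimizing mean-squared error without a training sequence."""
    taps = np.zeros(n_taps)
    taps[n_taps // 2] = 1.0                 # start as a pass-through filter
    for i in range(n_taps, len(rx)):
        window = rx[i - n_taps:i][::-1]     # most recent sample first
        y = float(np.dot(taps, window))
        decision = 1.0 if y >= 0 else -1.0  # hard slicer estimates the symbol
        error = decision - y                # error signal closes the loop
        taps += mu * error * window         # LMS tap update
    return taps
```

Run over a long enough sample stream, the taps settle at values that trade ISI removal against noise enhancement; a real EDC runs such a loop continuously, which is what lets the link retune itself as the fiber degrades.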
Modal dispersion tends to be more pronounced in MMF and can spill into several UI, rather than just one or two. Because of these factors, an EDC algorithm must provide more sophisticated equalization for short reach MMF than that required for SMF up to 145 km in length.
Another important element of an EDC design is the variable-gain amplifier (VGA). By the time the optical signal reaches the receiver, its amplitude has decreased significantly. The VGA adjusts its gain according to the input-signal level and delivers maximum dynamic range to the filter. A VGA holds the output level steady, over a specified dynamic range, regardless of the input signal.
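The VGA's behavior can be sketched as a simple automatic-gain-control loop. The target level, step size, and input amplitude here are arbitrary illustrative values:

```python
def vga_agc(samples, target=1.0, mu=0.05):
    """Toy AGC loop for a VGA: nudge the gain so the output magnitude
    tracks a target level regardless of the input amplitude."""
    gain = 1.0
    out = []
    for x in samples:
        y = gain * x
        out.append(y)
        gain += mu * (target - abs(y))  # raise gain if output is too small
    return out, gain

# Example: a weak 0.2-amplitude signal is gained up toward unit amplitude.
_, final_gain = vga_agc([0.2, -0.2] * 250)
print(round(final_gain, 1))  # 5.0
```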
EDC is a critical enough technology that the OIF, in collaboration with the ITU, is developing application codes for SONET long reach, while the IEEE is developing EDC-based standards for 10 GbE.
The OIF is in the process of defining an SMF EDC standard under ITU-T SG15, intended to address longer-reach applications where the minimum chromatic dispersion must be at least 2400 ps/nm, equivalent to about 140 km of nominal fiber. The goal of the standard is to enable existing OC-48 links to be upgraded to native 10-Gbit/s OC-192 without replacing the fiber or using a DCF. Carriers should then be able to swap out transponder modules (and appropriate back-end components such as framers), upgrading equipment without having to upgrade the links themselves. The standard is close to ratification, with interoperability testing between different suppliers currently underway, and no fundamental changes to the standard are expected.
The IEEE, with its 802.3aq standard based on EDC, is focused on running serial 10 GbE over legacy MMF (OM1) up to 220 m. Today's links primarily run 1-GbE applications, with some small deployments of 10 GbE using the parallel 10G-BaseLX4 PMD, which supports legacy OM1 fiber up to 300 m.
One of the factors driving the adoption of 802.3aq is that running native 10 GbE can be done with less-complicated modules (the promise of a lower-cost, smaller module based on the XFP form factor). The LX4 module has four wavelength-stable lasers, requires a complex optical mux, and entails detailed integration and testing to meet the 10-GbE standards. In contrast, the 802.3aq module supports only one wavelength of light, consumes less power, and can be deployed at nearly half the cost of existing 10G-BaseLX4 PMDs. The IEEE 802.3aq standard is currently in draft status and should be ratified by mid-2006.
About the authors
Michael Furlong is a senior product line manager with Broadcom Corp. He received his BSEE and MBA degrees from the Florida Institute of Technology. Furlong can be reached at Mfurlong@broadcom.com.
Dr. Ali Ghiasi is currently the chief architect for Broadcom's optical business line. He has a Ph.D. in Electrical Engineering from the University of Minnesota and an MS/BS from North Dakota State University. Dr. Ghiasi can be reached at Aghiasi@broadcom.com.