As with all applications reliant on semiconductor technology, the performance demands placed on data center, enterprise switching and public network communications systems continue to increase at breakneck speed. At the same time, such applications are under great pressure to reduce power, cost and size. Increases in data rates, driven by the need for higher bandwidth, mean that system-level and chip-to-chip interconnects increasingly challenge design and manufacturing capabilities. Networking equipment now commonly supports large numbers of 10Gbps connections, while support for 40Gbps and 100Gbps Ethernet is growing.
Traditionally, such design pressures have been relieved by higher and higher levels of semiconductor device integration, which has succeeded in combining more processing power with more high-speed interfaces on a single SoC. The continuing drive to smaller-geometry CMOS to support higher-density digital processing may, however, slow future levels of integration or even reverse the trend. For current and future high-performance systems, high-speed serial interfaces featuring equalization could well become the norm, and high levels of SoC integration may no longer be the best solution.
Higher bandwidth drivers
Networking speeds are growing to keep pace with ever-increasing demands for data. Developments in cloud computing, consumer access speeds (Fibre to the Home and LTE) and ubiquitous access (the 3G cell phone) mean that all parts of the communications network, from data center to core, have to deal with more data. The major management, switching and routing function blocks in any communication system are generally implemented in ASICs using advanced CMOS processes, today at 40nm with 32nm and 28nm on the horizon. These small geometries enable extremely high-density digital circuits and certainly address the need to realize digital chips of ever-increasing complexity within power and cost budgets. All this concentrated digital processing power naturally presents a huge data I/O requirement, as these digital cores must interface with other chips on the same card, on separate cards in the system or even in other systems.
While previous generations of ASICs generally employed serial interfaces running at a few Gbps per lane, the increased density of processing power is now leading to higher interconnect speeds. ASICs in today's data communications applications support interfaces with one or multiple 10Gbps lanes, such as XFI, SFI, XLAUI and CAUI, and will support protocols all the way up to 100GbE. Even higher lane speeds will be demanded by 12G SAS, 16G Fibre Channel and 25G InfiniBand. Such increased speeds mean interface technology has to become even more sophisticated.
Next generation communications and networking equipment is usually not designed from the ground up, but is instead built on existing components. To avoid the dreaded "forklift upgrade", system vendors aim to keep at least the chassis of a system in place, upgrading only the field-replaceable components. In reality this means that often only the backplane, power and cooling systems are kept, with all other functional components being replaced. As a consequence, a next generation upgrade (say from a high-density GbE switching function to one that supports 10GbE) has to be implemented within the power and thermal budget of the previous generation. This means that, while a system often has to achieve a ten-fold increase in performance compared to the previous generation, it has to do so with the same power consumption and heat dissipation.
For the digital processing functions Moore's Law still comes to the rescue, as smaller-geometry CMOS offers not only higher-density designs but also power savings. Power savings of 50% have been claimed for digital switching functions moving to the next process node. The problem is that the size, power and performance of the interface technology do not scale the same way.
Digital processing is getting faster and faster, while connections between chips and system elements often remain the same and become more and more of a bottleneck. To add to this problem, the total power budget for a system also remains the same and the latest process technology does not reduce the power for many interface technologies.
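A back-of-envelope calculation illustrates why flat interface power becomes a growing share of a fixed budget. The numbers below (an 80W/20W digital/SerDes split, a 50% digital saving per node, and the node names) are purely hypothetical assumptions for illustration, not figures from any real design:

```python
# Hypothetical budget: shows how the I/O share of total power grows when
# digital logic power halves per process node but SerDes power stays flat.
digital_w = 80.0   # assumed digital logic power at the starting node (W)
serdes_w = 20.0    # assumed SerDes/interface power, roughly constant (W)

for node in ["node N", "node N+1", "node N+2"]:
    total = digital_w + serdes_w
    print(f"{node}: total {total:.0f} W, interface share {serdes_w / total:.0%}")
    digital_w *= 0.5  # ~50% digital power saving per node, per the claim above
```

Under these assumptions the interface share climbs from 20% to 50% of the chip's power in just two node transitions, even though the interfaces themselves have not changed at all.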
Improving signal integrity
Legacy systems with interconnect speeds below a few Gbps can be supported with reasonably simple high-speed serial interface technology. These interfaces usually rely on encoding or scrambling the data so that it is guaranteed to have enough transitions to enable extraction of the clock signal at the receiver using a simple clock recovery function. This in turn enables error-free data recovery, even when the signal has been transmitted across a backplane with several connectors. An example of such a legacy interface is XAUI, which uses four lanes of 3.125Gbps 8b/10b-coded signals to transport 10GbE between chips or over a backplane. Figure 1 shows a XAUI signal entering and exiting a 24" backplane.
Fig 1. 3.125Gbps transmit and receive signal using a 24” FR4 backplane
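To illustrate how scrambling guarantees the transitions that clock recovery needs, here is a minimal sketch of a self-synchronizing (multiplicative) scrambler. The polynomial G(x) = 1 + x^39 + x^58 is the one used by 64b/66b encoding; XAUI itself uses 8b/10b block coding rather than scrambling, so this is a generic sketch of the technique, not XAUI's actual line code, and the seed value is an arbitrary choice for the demonstration:

```python
def scramble(bits, state):
    """Self-synchronizing scrambler, G(x) = 1 + x^39 + x^58 (64b/66b polynomial).

    Each output bit is fed back into the 58-bit shift register, so the
    receiver's register converges to the transmitter's from the data alone.
    """
    out = []
    for b in bits:
        s = b ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)  # taps at x^39, x^58
        out.append(s)
        state = ((state << 1) | s) & ((1 << 58) - 1)        # shift in the output
    return out

def descramble(bits, state):
    """Inverse operation: shifts the *received* bits into the register."""
    out = []
    for s in bits:
        out.append(s ^ ((state >> 38) & 1) ^ ((state >> 57) & 1))
        state = ((state << 1) | s) & ((1 << 58) - 1)
    return out

# Even a pathological all-zeros payload yields a transition-rich line signal.
seed = sum(1 << i for i in range(0, 58, 2))  # arbitrary nonzero seed
payload = [0] * 1000
line = scramble(payload, seed)
assert descramble(line, seed) == payload     # round trip recovers the data
assert 0 < sum(line) < len(line)             # line signal has both 0s and 1s
```

The self-synchronizing property is what makes this practical on a link: a receiver whose register starts in the wrong state produces at most 58 incorrect bits, after which its state consists entirely of received data and descrambling is correct from then on.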