For next-generation systems, designers increasingly face the problem of getting growing amounts of data on and off the processing chip, and of driving and receiving high data rate signals over channels composed of PCB traces, connectors, cables (copper and optical) and backplanes. This requires either more parallel connections or a higher bit rate per connection. When the interface is between two parts of a legacy system (for example, two line cards in an existing chassis), the only option is to run the existing traces at a higher data rate. One example is the upgrade of a blade system that supports 10GbE through a XAUI interface (four lanes at 3.125Gbps) to support 4 ports of 10GbE. Such an upgrade is only possible by transmitting 10Gbps data over each of the 4 lanes, which in most cases is not possible, as shown in Figure 2. This figure shows the 10Gbps signal at the input and the output of the same backplane as used in Figure 1.
Figure 2. 10.3Gbps transmit and receive signal using a 24” FR4 backplane
Error-free transmission of these signals over such a legacy backplane connection is only possible by means of signal conditioning, such as pre-emphasis and equalization. This is just one example that illustrates the requirement to push ever higher speeds through existing hardware, and it is the main reason the IEEE generated the 10GBASE-KR standard (defined in IEEE 802.3ap, Backplane Ethernet).
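To illustrate what transmit-side pre-emphasis does, the sketch below applies a two-tap FIR filter that boosts symbol transitions, pre-compensating for the channel's high-frequency loss. This is a minimal illustration, not any vendor's implementation; the function name and the tap weight are hypothetical.

```python
# Minimal 2-tap transmit pre-emphasis sketch (illustrative only).
# The post-cursor tap weight is a hypothetical example value; real
# designs tune it per channel.
def preemphasize(symbols, post_tap=-0.25):
    """FIR: y[n] = c0*x[n] + c1*x[n-1], normalised so |c0| + |c1| = 1."""
    c0 = 1.0 - abs(post_tap)
    out = []
    prev = 0.0
    for x in symbols:
        out.append(c0 * x + post_tap * prev)
        prev = x
    return out

nrz = [1, 1, 1, -1, -1, 1]        # NRZ symbols as +1/-1
print(preemphasize(nrz))          # transitions come out larger than runs
```

Note how repeated symbols are de-emphasized (0.5) while transitions get full or boosted amplitude, which is exactly the spectral shaping a lossy backplane trace needs.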
There are a number of ways equalization functions can be implemented, with the main distinction being the use of either analogue or digital architectures. Analogue equalization refers to methods such as continuous time equalizers and certain implementations of feed forward / decision feedback equalization. The analogue approach processes most of the data in an analogue fashion, where signal levels linearly represent the actual received signal.
Digital equalization refers to approaches where the incoming data is converted into the digital domain as soon as possible, using very high speed ADCs. While significant amounts of processing take place in the digital domain, it should be noted that these types of equalizers are not completely digital. The ADC, as well as the initial gain stages, are still analogue. The key metrics of the transmitters in any high-speed interface are also driven mainly by analogue properties.
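To make the decision feedback idea concrete, the following sketch shows a one-tap DFE: the estimated inter-symbol interference from the previous decision is subtracted before the slicer. The function name, tap weight and test channel are illustrative assumptions, not taken from any specific device.

```python
# One-tap decision-feedback equalizer (DFE) sketch, illustrative only.
# Assumes samples have already been digitized (e.g. by a high-speed ADC).
def dfe(samples, tap=0.3):
    """Cancel the post-cursor ISI of the previous decision, then slice."""
    decisions = []
    prev = 0.0
    for s in samples:
        corrected = s - tap * prev            # remove estimated ISI
        d = 1.0 if corrected >= 0 else -1.0   # slicer decision
        decisions.append(d)
        prev = d
    return decisions

# Hypothetical channel adding 0.3 of post-cursor ISI to [1, 1, -1, 1, -1]:
rx = [1.0, 1.3, -0.7, 0.7, -0.7]
print(dfe(rx))                    # recovers the transmitted symbols
```

Because the correction is driven by hard decisions rather than the noisy signal itself, a DFE cancels ISI without amplifying noise, which is why it features in both analogue and ADC-based receiver architectures.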
Problems with smaller geometries
The question then is how to implement this increasingly complex interface technology in small-geometry, digital-dominated SoCs. ASIC suppliers are continually upgrading and developing their I/O IP, but the relatively new CMOS process technologies, such as 40nm and below, are not best suited to implementing these high-performance signal conditioning functions. This is because of their analogue and mixed-signal content, which requires well-characterized analogue performance to reduce risk and power consumption.
The power and area savings in digital circuits achieved when moving to smaller geometry processes are simply not carried over directly to analogue functions. Analogue functions are often fixed in size, with metrics such as current density, voltage swing and matching dictating that structures cannot shrink as fast as process geometry. Additionally, analogue elements such as inductance and capacitance will not shrink at all, as these are not determined by the minimum feature size on a chip. Finally, analogue circuits need headroom, a property that is actually reduced by the lower power supply voltages needed to produce the power savings in the digital circuits. A summary of the key process properties for digital and analogue functionality is given in Table 1.
Table 1. Key process metrics for digital and analogue functionality
Driving complex loads also presents new problems, such as how to manage overload voltages. For example, to drive any significant length of twin-ax copper cable (as used in passive SFP+ cable assemblies), an output voltage swing of more than 1V may be required. With low-power 40nm CMOS chips operating from a 1V supply, it is hard to see how this could be achieved. Clearly the current handling capacity does not scale with process, and the narrow range of operating voltages makes the design of the driver stage inherently even more difficult. This has implications for long-term reliability and is clearly not what the process was optimized or characterized for.
This means that the gap between the digital and analogue portions of a design is growing: while the digital part is becoming more efficient, the analogue part is not, therefore taking relatively more of the power and area budget.
High-performance 10Gbps transceiver functionality in 90nm and 65nm CMOS technology currently consumes close to 300mW, while taking up in excess of 0.5mm² of chip real estate. Further design optimization may reduce this somewhat, but it is unreasonable to expect a complete interface to take less than 200mW. With increasing ASIC functionality, for example going from 10GbE switching to 100GbE switching, more and more of these interfaces are needed. Requirements for 50 to 100 10Gbps lanes are common now. Even at 200mW per lane, this will contribute up to 20W to the ASIC power consumption and heat dissipation. This would be a significant amount of the power and thermal budget for any chip, even very large BGAs featuring heat sinks! With more high-speed interface performance required, this figure will only increase further.
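The power arithmetic above can be reproduced directly from the figures quoted in the text (the helper function below is purely illustrative):

```python
# Back-of-envelope interface power budget using the per-lane figure
# quoted in the text (200 mW per 10Gbps lane, an optimistic estimate).
def interface_power_w(lanes, mw_per_lane=200):
    """Total interface power in watts for a given lane count."""
    return lanes * mw_per_lane / 1000

for lanes in (50, 100):
    print(f"{lanes} lanes at 200 mW/lane -> {interface_power_w(lanes)} W")
```

At 100 lanes this already reaches the 20W figure cited above, before any of the switching logic itself is accounted for.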