As the Internet explosion has continued unabated over the last five years, the IT industry has struggled to keep up with the relentless demand for higher bandwidth, 24/7/365 availability and lower cost. PCI has been the workhorse of the computer I/O systems that link server MIPS to terabytes of storage and gigabytes per second of Internet transmission bandwidth.
PCI has been one of the most widely adopted and most relied-upon technical standards in modern history. It's served us well over the last decade. However, its performance is up against a wall of diminishing returns and its manageability features and scalability won't meet the expectations of computing in the 21st century.
In response to this growing disparity, the primary players in the computer industry are developing the Infiniband architecture as the next generation of server I/O, with the intent of eventually replacing PCI. It is not a question of if but when the migration to Infiniband technology occurs, since among the seven steering members of the Infiniband Trade Association are the five largest server vendors, accounting for 70 percent of all server shipments.
This new I/O technology spells opportunity for companies in the computer business, both for system vendors and the providers of the embedded boards and processors that will be required to support the systems. It also presents significant challenges to the design community.
Those challenges to designers begin with Infiniband's physical signaling layer. The architecture enables data transfers at 2.5 Gbits/s, and because the clock is embedded in the data, the fastest alternating bit pattern gives the data stream a fundamental frequency of 1.25 GHz, half the bit rate. Moreover, because Infiniband devices can have switching times as fast as 100 ps, the frequency content of an Infiniband system can easily exceed 3 GHz, leading to design exercises that are truly microwave in nature. Many college textbooks on electromagnetics will be dusted off within the Infiniband design community as designers get reacquainted with concepts such as the frequency domain, s-parameters and intersymbol interference, to name just a few.
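The frequency figures above follow from simple arithmetic, and a short sketch makes the relationship explicit. The function name is hypothetical, and the 0.5/rise-time "knee" estimate is a common signal-integrity rule of thumb rather than anything from the Infiniband specification:

```python
def signal_bandwidth(bit_rate_hz, rise_time_s):
    """Rough spectral estimates for an NRZ serial stream.

    The fastest pattern (1010...) toggles once per two bit periods,
    so the fundamental is half the bit rate.  The "knee" frequency,
    a common rule of thumb (0.5 / rise time), estimates where the
    energy contributed by the edge rate begins to roll off.
    """
    fundamental_hz = bit_rate_hz / 2.0
    knee_hz = 0.5 / rise_time_s
    return fundamental_hz, knee_hz

# The numbers from the text: 2.5 Gbits/s with 100-ps switching times
fundamental, knee = signal_bandwidth(2.5e9, 100e-12)
```

For 2.5 Gbits/s and 100-ps edges this gives a 1.25-GHz fundamental and a knee near 5 GHz, consistent with frequency content that easily exceeds 3 GHz.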
At the physical layer, the primary design challenge is maintaining eye openings at the receiver so that data is transmitted at acceptably low bit-error rates. "Eye openings" in this context refers to the open central region of an eye diagram, the plot formed by overlaying many received bit intervals on a single time axis. The minimum eye opening at an Infiniband receiver will likely be less than 200 mV x 150 ps; meeting the spec will require very careful control of loss and jitter. Most designers will find the differential signaling used in Infiniband to be much more of a blessing than a curse, although for some it will be new territory that will also require new learning to be successful.
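As an illustration of what meeting such a spec means in practice, the sketch below flags receiver samples that fall inside a rectangular eye mask. The function and its interface are hypothetical; the 200 mV x 150 ps defaults simply reuse the minimum-opening figure cited above and are not a normative mask definition:

```python
def mask_violations(samples, eye_width_s=150e-12, eye_height_v=0.200):
    """Return the receiver samples that fall inside a rectangular eye
    mask centered on the eye (time offset 0 s, voltage 0 V).

    samples: (time_offset_s, voltage_v) pairs folded into one unit
    interval, e.g. from an oscilloscope capture.  An empty result
    means the eye is open by at least eye_width_s x eye_height_v.
    """
    half_w = eye_width_s / 2.0
    half_h = eye_height_v / 2.0
    return [(t, v) for t, v in samples
            if abs(t) < half_w and abs(v) < half_h]
```

A sample at the eye center, (0.0, 0.0), is a violation; a sample out near a transition edge, say (200e-12, 0.3), is not.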
Infiniband signaling has its clock signal embedded in data, and 8-bit/10-bit encoding is used to remove dc offset from the data stream and enable clock extraction. Additionally, byte striping is used to encode the packet stream across the multiple physical lanes in a by-4 or by-12 link. These techniques are common in high-speed serial-link design, but will likely be new to many of the designers developing Infiniband products.
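The ideas behind byte striping and the dc balance that 8-bit/10-bit encoding maintains can be sketched in a few lines. This is a conceptual model with hypothetical helper names, not the Infiniband link-layer algorithm itself:

```python
def stripe(byte_stream, lanes=4):
    """Distribute a packet's bytes round-robin across physical lanes,
    as in a by-4 or by-12 link."""
    out = [[] for _ in range(lanes)]
    for i, b in enumerate(byte_stream):
        out[i % lanes].append(b)
    return out

def running_disparity(bits):
    """Ones minus zeros in the encoded stream.  8b/10b code groups keep
    this near zero, so the stream carries no dc offset and the receiver
    can extract the embedded clock."""
    return sum(1 if b else -1 for b in bits)
```

For example, striping bytes 0 through 7 across a by-4 link puts bytes 0 and 4 on lane 0, bytes 1 and 5 on lane 1, and so on.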
The Infiniband architecture maps directly onto the bottom three layers of the Open Systems Interconnection seven-layer stack, with specification for the physical, link and network layers. It also defines services at the transport layer.
The layers above the physical layer will also present new hurdles to design engineers, or at least require the adoption of new mental models about the systems being designed.
Differences between Infiniband and a shared multidrop bus such as PCI are numerous. These include the send/receive of queued messages vs. a load/store model, exchanging messages vs. control signal handshaking, packets that contain data payloads vs. a dedicated data bus, concurrent data queues vs. sequential reads and writes, and an intelligent interconnect channel vs. an interconnect between intelligent devices.
Until recently there has been a deep chasm separating digital and RF designs. However, as digital buses within computers and communications equipment have pushed through several hundred megahertz over the last few years, the need to bridge the gap has been growing. With its 2.5-Gbit/s signaling rate and ultrafast rise and fall times, Infiniband technology will force the issue.
Managing the microwave analog effects of Infiniband signals begins in modeling and simulation. It will probably mean "upgrading" to the next modeling paradigm, such as trading in behavioral models of drivers for transistor-level models, lumped-circuit models for s-parameter models and so forth. The wavelength of the third harmonic of an Infiniband signal in FR4 (the material used in most printed-circuit boards) is roughly 40 mm. That means that Infiniband systems should generally be modeled and simulated using distributed models, with full electromagnetic (EM) models being used for geometrically complex structures or those approaching half a wavelength in size.
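The 40-mm figure, and the choice among lumped, distributed and full EM models, comes down to a wavelength calculation. A minimal sketch, assuming a typical FR4 effective permittivity of about 4.3 and using the common lambda/10 threshold for abandoning lumped models (both are rules of thumb, and the function names are hypothetical):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz, er_eff=4.3):
    """Wavelength in a dielectric with effective relative
    permittivity er_eff (assumed typical of FR4)."""
    return C / (freq_hz * math.sqrt(er_eff))

def model_class(structure_size_m, lam_m):
    """Rule-of-thumb model selection for a structure of a given size."""
    if structure_size_m >= lam_m / 2:
        return "full EM model"
    if structure_size_m >= lam_m / 10:
        return "distributed model"
    return "lumped model"

# Third harmonic of the 1.25-GHz fundamental: roughly 40 mm in FR4
lam = wavelength_m(3.75e9)
```

A 1-mm via barrel can still pass as lumped, a 10-mm trace segment needs a distributed model, and a 20-mm structure is approaching half a wavelength and calls for full EM treatment.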
At frequencies of several gigahertz transmission-line models may not adequately model effects such as frequency-dependent loss; s-parameter models may be needed. A simulation environment that can accommodate a variety of device description types, from Spice netlist fragments to s-parameters to 2.5-D or 3-D EM data, is recommended because such an environment enables the right trade-offs among modeling effort, simulation accuracy and run-time.
Spice simulators that use convolution are a good fit for Infiniband designs, since they allow direct use of s-parameter frequency-domain models within time-domain transient simulations. Some simulators also allow the use of 2.5-D and 3-D EM data directly; alternatively, s-parameters can be produced by the 2.5-D or 3-D simulator.
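The core of a convolution-based simulation can be illustrated in miniature: once the channel's s-parameters have been transformed into a time-domain impulse response (assumed precomputed here), the output waveform is the convolution of that response with the input. A bare-bones sketch, not a production algorithm:

```python
def convolve(x, h):
    """Direct time-domain convolution: y[n] = sum over k of h[k] * x[n-k].

    In a convolution-based simulator, h would be derived from the
    channel's s-parameter data (e.g. via an inverse FFT, windowed and
    made causal); x is the sampled input waveform."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y
```

A unit impulse through a two-tap response [0.5, 0.25] simply reproduces the taps, which is a quick sanity check on any such routine.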
A big advantage of a simulator that can use s-parameter models is that the models can come from a vector network analyzer that outputs s-parameters of a device under test. This measurement-based modeling methodology is increasingly being encouraged at seminars and conferences on high-speed digital design. Since Infiniband signaling technology is differential, a vector network analyzer with a four-port test set is recommended.
Board layout for Infiniband frequencies demands careful engineering. Reliable data transfer depends on keeping data eyes open, which relies on managing loss and jitter. For differential traces, symmetry is extremely important as both loss and jitter result from the mode conversions produced by asymmetries. Connector launches, strip-line or microstrip transmission lines, package breakouts and even vias should be carefully designed, simulated and measured with a TDR scope or network analyzer or both to ensure that these structures will meet the design goals for impedance matching, crosstalk and loss. For making TDR measurements on systems, a probe that interfaces directly to Infiniband cables greatly simplifies the procedure.
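The penalty for asymmetry can be seen with a toy decomposition of a differential pair's leg voltages into differential- and common-mode components. The helper is hypothetical, and sample-by-sample arithmetic stands in for a real circuit or field simulation:

```python
def diff_and_common(p_leg, n_leg):
    """Split a differential pair's leg voltages into differential and
    common-mode components, sample by sample."""
    vdiff = [p - n for p, n in zip(p_leg, n_leg)]
    vcomm = [(p + n) / 2.0 for p, n in zip(p_leg, n_leg)]
    return vdiff, vcomm

# A perfectly symmetric pair: the common mode stays flat.
_, vc_symmetric = diff_and_common([0, 1, 1, 0], [1, 0, 0, 1])

# The same data with the N leg skewed by one sample: common-mode
# energy appears -- the mode conversion that asymmetry produces.
_, vc_skewed = diff_and_common([0, 1, 1, 0], [1, 1, 0, 0])
```

With complementary legs the common-mode trace is constant; skewing one leg by a single sample makes it swing, which is exactly the loss and jitter mechanism the text warns about.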
Although it seems that even the smallest of stubs can affect transmission-line performance at Infiniband frequencies, appropriate test points should still be included on the board. Real-world measurements will undoubtedly be needed to troubleshoot, characterize and optimize the design. As with anything else, test points are a matter of making the right trade-offs.
Once Infiniband systems have been built, a variety of test tools will likely be needed. Very high bandwidth scopes and even bit-error-rate testers will be needed to characterize signaling and ensure reliable, repeatable data transmission. Logic analyzers and protocol analyzers will be needed to capture link traffic and track down both hardware and software bugs. Finally, Infiniband traffic generators will allow the system to be loaded down and driven into corner-case scenarios that are difficult to achieve any other way.
Designers considering an Infiniband design should start working now with vendors to ensure that they will have the tools necessary to be successful in this demanding design space. Beyond a variety of instruments and EDA tools, many vendors also provide support and consulting that may be just the ticket to help cross this chasm. In addition, many excellent seminars on high-speed serial design are being held at conferences such as the Intel Developer Forum, the Infiniband Developers Conference and DesignCon.
Engineering is simply the making of trade-offs to optimize systems around a set of prioritized objectives. For designing with Infiniband technology, the variables and their weighting factors within the trade-off space may have changed, but the engineering is the same. What may be needed are new design approaches, skills and tools to successfully bridge the crevasse between 133-MHz parallel bus designs and 2.5-Gbit/s high-speed serial Infiniband designs.