The first InfiniBand semiconductors have arrived, and the challenge now facing system developers is migrating from their existing Peripheral Component Interconnect (PCI) and CompactPCI (CPCI) solutions to realize the opportunities enabled by the new InfiniBand switch fabric.
These developers have invested heavily in PCI and CPCI-based adapter cards, passive backplane chassis, and software, and are reluctant to abandon that investment. At the same time, they are eager to obtain the benefits offered by the InfiniBand switch fabric, including improvements in RAS capabilities (reliability, availability, serviceability), scalability, fault tolerance, and performance. They must therefore chart a transition path from today's PCI-based hardware and software platforms to InfiniBand.
A wide variety of PCI-based solutions is available, supporting connections to traditional Ethernet and ATM networks and to Fibre Channel storage area networks, and serving as internal I/O for all manner of systems (printers, routers, storage devices, etc.). The telecommunications industry in particular has been quick to adopt CPCI as the basis for robust 3U and 6U chassis with hot-swappable, modular interface cards. CPCI supports a wide variety of single-board computers (SBCs) and interface cards. The market for CPCI chassis, SBCs, and I/O cards is experiencing strong growth this year and is projected to exceed $3 billion.
Making the switch
However, this growth is threatened by ever greater demand for bandwidth, RAS capabilities, scalability, and Quality of Service (QoS). In today's data center and telecommunications environments, RAS capabilities are perhaps even more important than bandwidth: hot-swap alone is no longer sufficient, and fault tolerance and fail-over support are required. At the same time, interfaces are moving from 100 Mbits/s to 1 Gbit/s and beyond. Beyond its bandwidth and RAS limitations, the PCI I/O infrastructure cannot distinguish the classes of traffic necessary to implement QoS. This matters, because customers are willing to pay for higher QoS levels.
The PCI architecture offers limited scalability through PCI-to-PCI (P2P) bridges, which create a hierarchy of PCI buses and so increase the number of possible I/O devices. Bridging does not, however, increase the total available bandwidth: each additional device must share the bus bandwidth with every other PCI device. Nor do conventional P2P bridges overcome the shared bus's distance constraints. Switch fabrics offer far greater scalability, with each new switch adding to overall system bandwidth, and their serial links overcome many physical constraints, spanning distances from 10 meters to thousands of meters.
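The scaling contrast above can be sketched numerically. The sketch below is illustrative only; the capacity figures are hypothetical assumptions, not numbers from this article.

```python
# Illustrative sketch: per-device bandwidth on a shared bus versus a
# switch fabric. All capacity numbers here are hypothetical.

def shared_bus_per_device(bus_capacity_mbps: float, num_devices: int) -> float:
    """On a shared bus (e.g. PCI, even with P2P bridges), all devices
    divide one fixed pool of bandwidth; adding bridges adds devices,
    not bandwidth."""
    return bus_capacity_mbps / num_devices

def fabric_per_device(link_capacity_mbps: float) -> float:
    """In a switch fabric, each device has its own link, so per-device
    bandwidth stays constant and aggregate bandwidth grows with each
    added switch and link."""
    return link_capacity_mbps

# Eight devices sharing a hypothetical 1,000 Mbits/s bus:
print(shared_bus_per_device(1000, 8))   # each device sees only 125.0

# Eight devices, each on its own hypothetical 1,000 Mbits/s fabric link:
print(fabric_per_device(1000))          # each device keeps 1000
```

The point of the toy model is that doubling the device count halves each device's share on the bus, while the fabric's per-device figure is unchanged and its aggregate doubles.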
For these reasons, system vendors have begun to look beyond PCI and CPCI and focus on switch fabrics, which inherently address these requirements better than a shared-bus solution. Various proprietary switch fabrics have been put forward, but industry leaders have not adopted any of them widely enough to create the environment needed to stimulate development of a broad range of standard components. Until now, that is.
Transitioning to InfiniBand
With support from all of the major server, storage, and telecommunications system vendors, InfiniBand has emerged as the switch fabric of choice to succeed the PCI bus. The architecture has broad industry backing and development is progressing on a range of silicon and system products as well as native OS support for InfiniBand.
Nonetheless, system developers are wise to adopt a transition strategy that allows for InfiniBand leadership, but does not require a wholesale re-architecture of every aspect of existing systems. Silicon devices are now available that provide the fundamental building blocks for system vendors to deploy InfiniBand infrastructure while leveraging their investment in PCI-based platforms and software. This has accelerated the timeline for the deployment of the InfiniBand architecture, with initial customer shipments of system-level products expected in the second half of this year and volume production starting in 2002.
The InfiniBand switch fabric has the broad support of industry leaders in silicon, software, and systems. Bridge and switch products offer dramatic performance, RAS, and scaling benefits while allowing system developers to keep using existing PCI-based network interface cards and software. Products such as these, which address the transition phase, are accelerating the adoption of the InfiniBand architecture in the enterprise data center.
Kevin Deierling is vice president of product marketing at Mellanox Technologies Inc., Santa Clara, Calif. Send comments to EBNletters@cmp.com.