With the convergence of telecom/datacom switching centers and traditional LAN and Internet-based data centers, a distributed server architecture based on "blades" is emerging.
Traditionally, a server is simply a repurposed desktop computer with the distinction of utilizing higher-grade components, additional memory and hard-drive capacity, and packaging that enables rack mounting. In the most general sense, a server blade is an industry-standard computer delivered on a single PC card that can be plugged as a module into a chassis. In practice, such a chassis may accept from eight to 24 such cards.
As defined, this server blade is not able to operate standalone but requires the chassis to provide power, cooling, rigidity and a disk drive. In this sense a server blade is simply a different form of packaging of a traditional 3U or 1U server (without a lot of the PC baggage).
A broad class of products satisfies this definition of "server blades," including single-board computers for VME and CompactPCI (CPCI) passive backplanes, as well as newer, proprietary blade form factors utilizing 100-Mbit/second and Gigabit Ethernet for connectivity. Increasingly, the Infiniband switched-fabric topology will play a big role.
To delineate the advantages of a switched-fabric architecture such as Infiniband in a blade configuration, it is necessary to extend the concept to what is called the "stateless server blade." This means that the server blade, rather than being a repurposed desktop computer, is reduced to its bare essence: CPU, memory and I/O. Such a blade is effectively a pure computational element, unencumbered by the components and connectivity needed to support a keyboard, video, mouse, hard drive and resident operating system, plus the myriad other functions required of more general-purpose desktop computers.
Such a blade is said to be "stateless" because the parameters that define its identity and application are not stored on the blade itself, but rather are determined intelligently when the blade is initialized. To be truly stateless, the functions of OS booting, network provisioning, loading drivers, and mounting root and application file systems must all be performed using remote services and storage.
The storage contains all of the OS and applications images that reside within the data center, and provisioning services automatically direct the blades to their booting resources and I/O interfaces based on the system administrator's input. Thus, a single system administrator can provision and manage thousands of stateless server blades from a single management console. Removing the hard drive from the server blade not only reduces both cost and power, but also greatly lowers operating and system-management costs. Giving simple computational server blade elements access to an application and operating-system software through a single, unified and coherent storage repository greatly simplifies application management.
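The provisioning flow described above can be sketched as a simple model. This is purely illustrative: the class and field names (`ProvisioningService`, `BootProfile`, the NFS paths) are hypothetical stand-ins, not a real provisioning API.

```python
# Hypothetical model of a stateless-blade provisioning service. The blade
# holds no identity of its own; it asks the central service at boot time.
from dataclasses import dataclass

@dataclass
class BootProfile:
    os_image: str   # OS image in the central storage repository
    root_fs: str    # remote root file system to mount
    app_fs: str     # remote application file system to mount

class ProvisioningService:
    """Maps blade slots to boot resources, per the administrator's input."""
    def __init__(self):
        self.profiles = {}

    def assign(self, blade_id, profile):
        # The administrator maps a blade slot to a boot profile.
        self.profiles[blade_id] = profile

    def boot_request(self, blade_id):
        # A freshly powered blade asks: "what am I, and where do I boot from?"
        return self.profiles[blade_id]

# One administrator provisions many identical blades from one console.
svc = ProvisioningService()
web = BootProfile("images/linux-web.img",
                  "nfs://store/roots/web",
                  "nfs://store/apps/web")
for slot in range(16):
    svc.assign(f"chassis0/blade{slot}", web)

print(svc.boot_request("chassis0/blade5").os_image)  # images/linux-web.img
```

Because every blade draws its identity from the same repository, repurposing a blade is a one-line change at the console rather than a visit to the rack.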
Clusters of stateless server blades can leverage economies of scale to deliver the same management cost benefits at a fraction of the cost of many-way servers.
The other key aspect of realizing the true benefits of a blade-based server architecture is delivering an I/O blade that utilizes the same form factor as the server blade. This gives deployment flexibility to the system administrators as they face trade-offs between I/O and compute power requirements. This flexibility makes it possible to independently deploy resources within the same chassis or across multiple chassis, and to use load balancing to meet the current computing needs.
Under Infiniband, the concept of the I/O fabric is extended to enable I/O sharing. In an Infiniband-based system it is possible to create a number of relationships between CPUs and I/O elements. For example, multiple server blades can share a single Fibre Channel blade, whereas previously I/O cards have always required a one-to-one relationship to a single computer. Because of the many-to-many feature of Infiniband, a shared I/O blade need be installed only once, into the storage pool, instead of for each server.
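The many-to-many relationship can be made concrete with a minimal sketch; the class and server names here are hypothetical, chosen only to contrast shared I/O with the traditional one-card-per-computer model.

```python
# Illustrative model of Infiniband-style I/O sharing: many server blades
# attach to a single Fibre Channel I/O blade over the fabric.
class FibreChannelBlade:
    def __init__(self, name):
        self.name = name
        self.attached_servers = set()

    def attach(self, server_id):
        # Under a one-to-one model each server would need its own FC card;
        # here the fabric lets any number of servers share this one blade.
        self.attached_servers.add(server_id)

fc = FibreChannelBlade("fc0")
for i in range(8):
    fc.attach(f"server{i}")

# Eight server blades, one shared I/O blade -- not eight dedicated cards.
print(len(fc.attached_servers))  # 8
```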
Another benefit of Infiniband in a server blade design is that it can be used as a backplane interconnect for multiple blades. The backplane connects the server blades, I/O blades and switch blades into a single unified chassis fabric. In such an architecture the backplane contains sixteen 1x Infiniband blade connectors that are redundantly connected to each of the two switch blade connectors. Each Infiniband switch can aggregate the 16 blades and connect the chassis to the Infiniband fabric within the data center.
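The backplane topology just described (16 blade connectors, each redundantly linked to two switch blades) can be enumerated directly. The switch names are placeholders; the counts come from the figures in the text.

```python
# Sketch of the chassis backplane: 16 blade slots, each with a redundant
# 1x Infiniband link to each of the two switch blades.
links = []
for slot in range(16):
    for switch in ("switchA", "switchB"):
        links.append((f"blade{slot}", switch))

# Every blade keeps a path to the fabric if either switch blade fails.
assert all(
    sum(1 for b, _ in links if b == f"blade{n}") == 2 for n in range(16)
)
print(len(links))  # 32 redundant 1x links across the backplane
```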
In an Infiniband-based blade chassis, not only does the chassis accept both server and I/O blades on a common backplane, it also connects to the fabric through switch blades with a virtually identical form factor. The Infiniband architecture also allows connections either through traces on a board or through copper or fiber-optic cabling. This lets the server, I/O and switch blades all interconnect in a chassis through a single Infiniband backplane to other chassis, storage devices, gateways and other Infiniband devices on the fabric.
Server blades within the chassis or between chassis can be clustered together for both high performance and failover. With up to 16 server blades per chassis and multiple chassis connected through the Infiniband fabric, clusters can be scaled to 64 or 128 nodes in practice. Theoretically, it should be possible to scale to thousands of server blades, though the practical limit will be determined by the latency requirements and number of switch hops the application can bear.
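The scaling arithmetic above follows directly from the per-chassis figures; a minimal back-of-the-envelope calculation, assuming 16 server blades per chassis as stated:

```python
# Cluster sizing from the figures in the text: 16 server blades per
# chassis, with chassis joined across the Infiniband fabric.
BLADES_PER_CHASSIS = 16

def cluster_size(num_chassis):
    """Total server-blade nodes across a set of fabric-connected chassis."""
    return num_chassis * BLADES_PER_CHASSIS

print(cluster_size(4))  # 64 nodes
print(cluster_size(8))  # 128 nodes
```

As the text notes, the practical ceiling is set not by this multiplication but by how many switch hops, and how much added latency, the application can tolerate.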
See related chart