PCI Express has been around for a decade, and until recently it was not viewed as a viable general-purpose fabric, but that is changing. I believe the ideal datacenter has PCI Express inside the rack and either Ethernet or InfiniBand connecting the racks together.
Ethernet and InfiniBand are both excellent solutions for connecting racks together in a datacenter, with the choice between them guided by the application's cost/performance requirements. Within the rack, any of the three interconnects will do the job, but even though 10G Ethernet and InfiniBand have established track records, neither is well suited to the task.
Ethernet is not ideal for the tightly knit connections within the rack. It tends to be expensive for the amount of bandwidth provided, doesn't scale easily or elegantly, and dissipates more power than it should for the bounded distances between blades inside the rack. Its feature set is also limited for the uses that matter most at the rack level, such as sharing I/O among the blades.
Normally, blades connected through Ethernet in a rack are treated as if each blade is a node on a network. This is convenient, but it allows only a very loosely coupled approach to sharing the I/O nodes in the system, and host-to-host clustering performance tends to suffer from the software overhead of copying data back and forth between buffers on the blades.
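To make that overhead concrete, consider the conventional sockets path one blade uses to push data to another. The C sketch below is illustrative only: the peer name "peer-blade" and port 9000 are hypothetical, and error handling is trimmed. The payload is staged in a user-space buffer and then handed to send(), which copies it again into kernel socket buffers before it reaches the wire; the receiving blade performs the mirror-image copies on recv(). A PCIe-based fabric that lets hosts share memory or DMA directly avoids those intermediate copies.

    /* Conventional sockets path between two blades (illustrative sketch).
     * "peer-blade" and port 9000 are hypothetical; error handling is trimmed.
     * Each send() copies the user-space buffer into kernel socket buffers,
     * which is the per-transfer software overhead described above. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("peer-blade", "9000", &hints, &res) != 0) {
            fprintf(stderr, "getaddrinfo failed\n");
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("connect");
            return 1;
        }

        char buf[4096];
        memset(buf, 'x', sizeof buf);                /* payload staged in a user-space buffer */

        ssize_t sent = send(fd, buf, sizeof buf, 0); /* copied into kernel socket buffers here */
        printf("sent %zd bytes\n", sent);

        freeaddrinfo(res);
        close(fd);
        return 0;
    }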
InfiniBand is even less appropriate for rack-level integration, despite its well-tuned performance and outstanding clustering capability. Very few components offer InfiniBand as their native connection point, so building a heterogeneous rack that includes processors, storage and communications is impractical, and sharing I/O among the processors generally means adding costly, power-hungry hardware and yet another interconnect protocol -- often Ethernet.
PCI Express (PCIe) is less established as a rack-level fabric, but it offers great value in this role. Almost every processor, storage device and I/O system has a native PCIe connection, and those components are available from multiple reputable vendors.
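To get a concrete sense of how pervasive those native connections are, the short Linux-only sketch below walks the kernel's sysfs view of the PCI/PCIe bus and prints the vendor and device ID of every function it finds; on a typical blade the list already includes the NICs, storage controllers and other I/O discussed here. The sysfs path and file names are standard Linux interfaces, but the program is an illustration of PCIe's ubiquity rather than anything tied to a particular vendor's parts.

    /* Enumerate the PCI/PCIe functions a Linux host exposes through sysfs.
     * Illustrative only: it shows how many components in a blade already
     * sit natively on the PCIe fabric. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    static void read_id(const char *slot, const char *file, char *out, size_t len)
    {
        char path[256];
        snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/%s", slot, file);
        FILE *f = fopen(path, "r");
        if (f && fgets(out, (int)len, f))
            out[strcspn(out, "\n")] = '\0';    /* strip trailing newline */
        else
            snprintf(out, len, "unknown");
        if (f)
            fclose(f);
    }

    int main(void)
    {
        DIR *d = opendir("/sys/bus/pci/devices");
        if (!d) {
            perror("opendir");
            return 1;
        }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;                      /* skip "." and ".." entries */
            char vendor[16], device[16];
            read_id(e->d_name, "vendor", vendor, sizeof vendor);
            read_id(e->d_name, "device", device, sizeof device);
            /* e->d_name is the bus address, e.g. 0000:03:00.0 */
            printf("%s  vendor=%s device=%s\n", e->d_name, vendor, device);
        }
        closedir(d);
        return 0;
    }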
True I/O sharing is already established for single hosts, and multi-host sharing has been deployed in production systems for a number of years. Low-latency clustering is relatively straightforward, and because the components all have PCIe as at least one of their connections, you remove the latency and power of additional bridging hardware.
The most significant drawback for PCIe is that the infrastructure is not yet in place to make it an attractive datacenter backbone. For that job, datacenter architects can turn to Ethernet if they seek the lowest-cost solution, or InfiniBand if they can pay a premium for lower latency and higher performance.
The ideal solution for connecting subsystems in a datacenter needs to deliver high performance at a reasonable cost and have a large ecosystem with ample solutions from many competing vendors. In today's datacenters, Ethernet, InfiniBand, and PCIe all have roles to play.
— Larry Chisvin is vice president of strategic initiatives at PLX Technology. He can be reached at firstname.lastname@example.org.