Extending PCIe beyond the traditional server role
Servicing a growing set of PCIe SSD options in multiple form factors (card, mezzanine, drive, module, embedded) and multiple link configurations (PCIe x1, x2, x4, x8, x16) with flexibility and scalability requires a flexible PCIe switch fabric connecting these high-speed SSD devices to host CPU and memory.
The need here goes beyond offering a non-blocking interconnection path (one in which the switch fabric can sustain full line speed on all ports), which is mandatory. The system must efficiently match the controller and the interconnect in terms of how wide each port needs to be for the different usage models, and focus on the sweet spots. Otherwise, the system becomes inefficient: capacity is wasted if the interconnect is underutilized, or, even worse, throughput is significantly impaired if the interconnect cannot handle the bandwidth of the controller.
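To make the matching problem concrete, the rough calculation below is a sketch assuming PCIe Gen3 at roughly 985 MB/s of usable bandwidth per lane and a hypothetical controller rated at 2,800 MB/s of sustained throughput; real figures depend on encoding and protocol overhead.

```python
# Rough bandwidth-matching sketch (assumed figures, not vendor specifications).
# PCIe Gen3 delivers roughly 985 MB/s of usable bandwidth per lane after
# 128b/130b encoding; real throughput also depends on packet overhead.
GEN3_MB_PER_LANE = 985

def link_bandwidth(lanes: int) -> int:
    """Approximate usable bandwidth (MB/s) of a PCIe Gen3 link."""
    return lanes * GEN3_MB_PER_LANE

# Hypothetical SSD controller capable of ~2,800 MB/s of sustained reads.
controller_mb_s = 2800

for lanes in (1, 2, 4, 8, 16):
    bw = link_bandwidth(lanes)
    if bw < controller_mb_s:
        verdict = "link limits the controller (impaired throughput)"
    elif bw > 2 * controller_mb_s:
        verdict = "link far exceeds the controller (wasted capability)"
    else:
        verdict = "reasonable match"
    print(f"x{lanes:<2} -> {bw:5d} MB/s: {verdict}")
```

For this hypothetical controller, a x4 link is the sweet spot: narrower links throttle the device, while wider links leave interconnect capacity idle.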
Finally, a key differentiating factor is availability and configurability. Enterprise platforms designed for high-performance business-critical applications need to be built with resiliency in mind from the start. Additionally, system resources need to be easy to manage for reliability, high availability, and cost efficiency. To address these requirements, the component vendors need to work together to extend PCIe beyond its traditional server role and use it as a high-speed, low-latency fabric (see figure 4). There are clear advantages to sharing diverse memory and I/O devices across multiple compute nodes.
Figure 4: An ideal PCI Express-based fabric extends PCIe beyond its traditional server role to act as a high-speed, low-latency fabric.
The first advantage is availability. By connecting multiple host compute nodes through a PCIe switch array to shared PCIe-based SSD storage, a system vendor can develop software that intelligently migrates storage ownership from one node to another, maintaining continuous availability through unplanned failures or scheduled maintenance. The second advantage is lower cost, because fewer storage devices are shared among multiple host nodes. Finally, using PCIe as a fabric provides configurability, allowing the system software to change device provisioning on a per-node basis over time to adapt to changing needs; for example, growing or shrinking the logical SSD volumes provisioned to different compute nodes.
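As an illustration of the availability and configurability points, the minimal sketch below uses hypothetical names (FabricManager, provision, and migrate_node are not an actual fabric-management API) to show how system software might track per-node ownership of SSD volumes behind the switch and move that ownership when a node is taken down.

```python
# Minimal fabric-management sketch (hypothetical, illustrative only):
# system software tracks which compute node owns each SSD volume behind
# the PCIe switch and can reassign ownership without moving any data.

class FabricManager:
    def __init__(self):
        self.volume_owner = {}  # volume id -> owning compute node id

    def provision(self, volume: str, node: str) -> None:
        """Assign a logical SSD volume to a compute node."""
        self.volume_owner[volume] = node

    def migrate_node(self, failed_node: str, standby_node: str) -> None:
        """On failure or planned maintenance, move every volume owned by
        failed_node to standby_node so the storage stays available."""
        for volume, owner in self.volume_owner.items():
            if owner == failed_node:
                self.volume_owner[volume] = standby_node

fabric = FabricManager()
fabric.provision("ssd-vol-0", "node-a")
fabric.provision("ssd-vol-1", "node-a")
fabric.provision("ssd-vol-2", "node-b")

fabric.migrate_node("node-a", "node-b")   # node-a taken down for maintenance
print(fabric.volume_owner)                # all volumes now owned by node-b
```

The point of the sketch is that ownership is a software attribute of the shared fabric, so failover and re-provisioning are bookkeeping operations rather than physical recabling or data copies.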
These advantages can be taken one step further by allowing the network interface card and HBA devices, which already have PCIe connections, to communicate directly with the storage subsystem without host involvement (see figure 4). This approach further reduces the latency between the source of the data (storage) and its destination (in this case, the I/O device), as well as the overhead in the host, effectively freeing more compute power for other tasks.
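The simplified comparison below (illustrative assumptions only, not measured data) contrasts the hops a block of data takes when it is staged through host memory with the direct peer-to-peer path through the switch.

```python
# Illustrative, simplified comparison of two data paths for moving a block
# from a PCIe SSD out through a NIC. Hop lists are assumptions for clarity.

HOST_MEDIATED_PATH = [
    "SSD -> PCIe switch",
    "PCIe switch -> host root complex",
    "host root complex -> host DRAM",      # staging copy in host memory
    "host DRAM -> host root complex",
    "host root complex -> PCIe switch",
    "PCIe switch -> NIC",
]

PEER_TO_PEER_PATH = [
    "SSD -> PCIe switch",
    "PCIe switch -> NIC",                  # switch routes directly between endpoints
]

print(f"host-mediated hops: {len(HOST_MEDIATED_PATH)}")
print(f"peer-to-peer hops:  {len(PEER_TO_PEER_PATH)}")
```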
PCIe is rapidly becoming the interconnect of choice for enterprise SSD-based systems. For optimal performance, it is critical that the vendors supplying the components of these high-performance, reliable, low-cost, flexible, and scalable systems work together to match the capabilities of the controller with those of the PCIe switch.
About the authors
Larry Chisvin is vice president of strategic initiatives at PLX Technology, a global supplier of high-speed connectivity solutions enabling emerging data center architectures. He can be reached at email@example.com
Shawn Kung is director of product marketing at Marvell, a worldwide supplier of integrated silicon solutions. He can be reached at firstname.lastname@example.org