For offloading networks, the message rate benchmark essentially measures the network's ability to create data packets and send them to the target. With an offloading network, the CPU is not involved in the data transfer and therefore remains free for the user applications.
For InfiniBand message rate testing in particular, there are two well-known benchmarks - InfiniBand message rate and MPI message rate. The InfiniBand message rate benchmark measures the number of InfiniBand packets (single-packet messages) that can be sent between two hosts. The MPI message rate benchmark measures the number of MPI messages that can be sent between two hosts; in the MPI test, several MPI messages can be accumulated within a single InfiniBand packet. Together, these two tests describe the range of message rates (lower and upper boundaries) that a given interconnect solution can support - from single-packet messages to multiple messages encapsulated within a single network packet. The message rate an application actually sees will fall within that range. If the application exhibits bursty behavior, i.e. tends to send bursts of small messages between nodes, the achieved message rate will be toward the upper limit of the interconnect.
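The structure of such a message rate test can be sketched in a few lines of MPI code. The sketch below is only an illustration of the measurement pattern, not the actual benchmark code; the message size, window depth and iteration count are arbitrary assumptions chosen for clarity.

    /* Minimal sketch of an MPI message-rate test (illustrative only, not the
     * actual benchmark code): rank 0 posts windows of small non-blocking
     * sends to rank 1 and counts how many messages complete per second. */
    #include <mpi.h>
    #include <stdio.h>

    #define MSG_SIZE   8      /* small message, fits in a single packet */
    #define WINDOW     64     /* messages in flight per window          */
    #define ITERATIONS 10000

    int main(int argc, char **argv)
    {
        static char sbuf[WINDOW][MSG_SIZE], rbuf[WINDOW][MSG_SIZE];
        MPI_Request req[WINDOW];
        char ack = 0;
        int rank, i, w;
        double start, elapsed;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();

        for (i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                for (w = 0; w < WINDOW; w++)
                    MPI_Isend(sbuf[w], MSG_SIZE, MPI_CHAR, 1, 0,
                              MPI_COMM_WORLD, &req[w]);
                MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
                /* wait for the receiver's ack before the next window */
                MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                for (w = 0; w < WINDOW; w++)
                    MPI_Irecv(rbuf[w], MSG_SIZE, MPI_CHAR, 0, 0,
                              MPI_COMM_WORLD, &req[w]);
                MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
                MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
            }
        }

        elapsed = MPI_Wtime() - start;
        if (rank == 0)
            printf("%.2f million messages/sec\n",
                   (double)ITERATIONS * WINDOW / elapsed / 1e6);

        MPI_Finalize();
        return 0;
    }

Run with 8-byte messages, this pattern approximates the single-packet (lower-boundary) case; an MPI implementation that coalesces several such messages into one InfiniBand packet would report rates toward the upper boundary.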
Figure 2 compares the message rate ranges of two InfiniBand solutions, one that includes full transport offload (Mellanox ConnectX-2 adapters in this case) and one that relies on the CPU for the transport layer - i.e. onloading (QLogic QLE7342 adapters in this case). As can be seen, the message rate range supported by the offloading solution spans from 22 million to 90 million messages per second, while the range supported by the onloading solution spans from less than 1 million to 23 million messages per second. Moreover, one needs to keep in mind that the onloading results consume CPU cycles to create the network packets, so in the presence of a real application the onloading message rate range is expected to shrink dramatically.
Interconnect latency, CPU overhead and message rate all influence the performance and productivity one can gain from applications. In order to demonstrate the performance difference, we tested two of the most commonly used applications in the high-performance computing space - FLUENT and LS-DYNA. The performance difference, including the percentage figures, is illustrated in Figures 3 and 4.
The application performance testing provides an apples-to-apples comparison. A single benchmarking platform was used: 8 nodes, each with dual Intel(r) Xeon(r) X5670 processors at 2.93 GHz. Mellanox ConnectX-2 adapters with an IS5000 switch and QLogic QLE7342 adapters with a 12200 switch were used as the interconnect solutions.
The results show that Mellanox InfiniBand (offloading) demonstrates up to 16% higher performance with FLUENT and up to 36% higher performance with LS-DYNA on the 8-node system. Moreover, the performance gap grows with system size, so it is expected to widen further on larger systems.
Scalability and Productivity
Scalability and productivity are the ultimate goals: to scale the system to meet the compute needs of today and tomorrow, and to maximize the return on investment - the productivity of the system. When one invests in the latest CPU technologies and a fast connection to host memory, it is critical to ensure that those resources can be fully utilized, which means connecting them via high-performance, offloaded networking solutions.
As noted earlier in this article, one example of a network offloading solution is the Mellanox ConnectX InfiniBand adapter. These adapters deliver full transport offload along with more sophisticated offloads, such as MPI collective offloads, data reduction and more. As such, the adapters are used in the world's leading supercomputers and data centers.
The ability of Mellanox ConnectX adapters to offload MPI collective communication is extremely important for HPC applications based on MPI. Collective communications, which have a crucial impact on an application's scalability, are frequently used by scientific simulation codes: broadcasts for distributing initial input data, reductions for consolidating data from multiple sources, and barriers for global synchronization. Every collective communication executes a global communication operation by coupling all processes in a given group, and it is this coupling that tends to have the most significant negative impact on the application's scalability.
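To make the role of these collectives concrete, the skeleton below shows how they typically appear in an MPI simulation code. The names and loop structure are hypothetical, chosen for illustration rather than taken from any particular application.

    /* Illustrative skeleton of collective usage in a simulation code
     * (names and structure are hypothetical). */
    #include <mpi.h>

    #define N 1000000

    int main(int argc, char **argv)
    {
        static double field[N];
        double local_residual, global_residual;
        int rank, step;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* broadcast: distribute the initial input data from rank 0 */
        MPI_Bcast(field, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        for (step = 0; step < 100; step++) {
            /* ... local computation on this rank's part of the problem ... */
            local_residual = 0.0;

            /* reduction: consolidate data from all ranks (here, a residual) */
            MPI_Allreduce(&local_residual, &global_residual, 1, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);

            if (global_residual < 1e-9)
                break;
        }

        /* barrier: global synchronization, e.g. before writing output */
        MPI_Barrier(MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }

Each of these calls completes only once every rank in the communicator has participated, so the slowest rank sets the pace for all of them - which is exactly why collectives dominate scalability behavior.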
In addition, the explicit and implicit communication coupling used in high-performance implementations of collective algorithms tends to magnify the effects of system noise on application performance, further hampering application scalability. Mellanox ConnectX adapters address the collective communication scalability problem by offloading a sequence of data-dependent communications to the Host Channel Adapter (HCA). This provides the mechanism needed to support computation and communication overlap, allowing the communications to progress asynchronously in hardware while computations are processed by the CPU. It also reduces the effect of system noise and application skew on application scalability. Needless to say, these capabilities cannot be provided with onloading solutions. Onloading solutions do the opposite: they eliminate any way to overlap computation and communication cycles, and thus magnify the effects of system noise and jitter on application performance.
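From the programmer's point of view, this kind of overlap is typically expressed with non-blocking collectives, which were later standardized in MPI-3 and are one common interface through which hardware collective offload can be exploited. The sketch below shows only the overlap pattern itself, under that assumption; it is not Mellanox's implementation.

    /* Sketch of computation/communication overlap with a non-blocking
     * collective (MPI-3 MPI_Iallreduce). With hardware collective offload,
     * the reduction can progress in the HCA while the CPU keeps computing. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        double local_sum = 1.0, global_sum = 0.0;
        MPI_Request req;

        MPI_Init(&argc, &argv);

        /* start the collective; the call returns immediately */
        MPI_Iallreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* ... independent computation that does not need global_sum ... */

        /* complete the collective only when the result is actually needed */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }

With an onloading design, the work between the start call and the wait still has to be done by the CPU, so little or no real overlap is gained.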
As the tests show, network offloading solutions are critical for high-performance system scalability, performance and productivity. Onloading solutions can negatively affect system efficiency and are therefore not recommended for systems with the above requirements. The main (and probably only) argument for onloading solutions could be their price. Surprisingly, according to public market surveys, there is no real price gap between onloading and offloading solutions in the InfiniBand market. Therefore, for a given system, the decision between offloading and onloading solutions should be straightforward. Where price gaps do exist, one should always review the entire system cost (i.e., taking into account both capital and operational expenses) and the desired return on investment when making the decision.
From the performance figures, one can see that offloading networks (in this case InfiniBand) provide the needed scalability for multiple system cores while ensuring maximum core performance for user applications. One can argue that the frequency of the NIC or adapter is not as high as the CPU's, but such speed is not required. Offloading adapters need to handle all incoming and outgoing data at wire speed, and - since this is done in a highly parallel way - they can maintain the needed scalability and high performance without running at CPU-like frequencies. As the number of cores grows, the adapters continue to deliver higher aggregate throughput. Thus, using adapters that can handle all network data at wire speed, as in a full offloading architecture, is the key to scalable systems.
About the Author:
Gilad Shainer is a Senior Director of HPC and Technical Computing at Mellanox Technologies and an HPC evangelist who focuses on high-performance computing, high-speed interconnects, leading-edge technologies and performance characterization. Mr. Shainer holds an M.Sc. degree (2001, Cum Laude) and a B.Sc. degree (1998, Cum Laude) in Electrical Engineering from the Technion - Israel Institute of Technology. He also holds patents in the field of high-speed networking.