TORONTO, Canada — Just as memory and storage continue to merge, memory is also becoming a key consideration in network architectures to improve application performance and reduce latency.
Startup A3Cube recently announced a new network interface card, dubbed RONNIEE Express, designed to eliminate the I/O performance gap between CPU power and data access performance for datacenters, big data, and high-performance computing applications. The company said that by turning PCI Express into an intelligent network fabric, it can outperform existing networking technologies such as Ethernet, InfiniBand, and Fibre Channel while reducing memory latency.
However, it is not meant to be a datacenter network or an Ethernet substitute; the company described RONNIEE as a data plane technology that differentiates itself from other interconnection networks through its hardware-based shared memory facilities. Although not required, A3Cube's ByOS operating system can leverage the in-memory RONNIEE network to create a parallel computing system; ByOS supports features such as deduplication, compression, and encryption.
As explained to me by A3Cube's CTO and founder Emilio Billi, RONNIEE Express uses A3Cube's In-Memory Network technology to share non-coherent global memory across the entire network. Just as compute resources have become virtualized, A3Cube's network fabric can improve communication with memory regardless of where it is physically located.
RONNIEE Express uses memory as the main communication paradigm at the protocol level, reported Billi. By creating a global shared memory container, the architecture allows for direct communication between local and remote CPUs, between memory regions, and between local and remote I/O.
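The idea of using memory reads and writes, rather than packet exchange, as the communication primitive can be illustrated with a minimal single-host sketch. The anonymous memory map below stands in for the globally shared memory window the fabric would expose; the function names are illustrative only and are not A3Cube's API.

```python
import mmap

# Generic sketch of the shared-memory communication model (not A3Cube's
# actual interface): instead of sending packets, a "sender" writes into
# a memory region that a "receiver" reads directly.
def make_region(size=4096):
    # An anonymous mmap plays the role of the shared memory container
    # that, on the real fabric, would span local and remote nodes.
    return mmap.mmap(-1, size)

def send(region, payload: bytes):
    # Write a 4-byte length header followed by the payload.
    region.seek(0)
    region.write(len(payload).to_bytes(4, "little") + payload)

def receive(region) -> bytes:
    # Read the length header, then exactly that many payload bytes.
    region.seek(0)
    length = int.from_bytes(region.read(4), "little")
    return region.read(length)

region = make_region()
send(region, b"direct memory-to-memory transfer")
print(receive(region))  # b'direct memory-to-memory transfer'
```

The point of the sketch is the absence of any network stack in the data path: the consumer sees the producer's bytes the moment they land in the shared region, which is the latency advantage Billi is describing.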
In a datacenter environment, this memory is likely to be an SSD, said Billi, but it could be any type of memory. He added that adoption of SSDs in the enterprise has shifted the storage I/O bottleneck from the storage device to the interconnection between storage and the CPU, highlighting the limitations of conventional PCI Express and other flash architectures.
Bob Laliberte, senior analyst with Enterprise Strategy Group, said A3Cube's network fabric is an example of a wider push to put memory and storage closer to the applications. As the cost of SSDs comes down and more flash is incorporated into storage arrays, said Laliberte, "the cost per IOPs is becoming a new measurement. There's more of a focus of driving more memory and storage closer to the applications that require it."
From a networking perspective, Laliberte said A3Cube's approach is not dissimilar to a 3D torus, a switchless interconnect topology often employed in high-performance computing environments.
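In a 3D torus, nodes connect directly to their neighbors along three axes, with links wrapping around each edge of the grid, so no central switch tier is needed. A small sketch of the neighbor addressing (illustrative only, not tied to A3Cube's implementation):

```python
# Neighbor addressing in a 3D torus: every node has six direct
# neighbors, and coordinates wrap around each axis, so even a node
# on the "edge" of the grid is fully connected without a switch.
def torus_neighbors(node, dims):
    x, y, z = node
    X, Y, Z = dims
    return [
        ((x + 1) % X, y, z), ((x - 1) % X, y, z),
        (x, (y + 1) % Y, z), (x, (y - 1) % Y, z),
        (x, y, (z + 1) % Z), (x, y, (z - 1) % Z),
    ]

# A corner node in a 4x4x4 torus still has six neighbors thanks to wraparound:
print(torus_neighbors((0, 0, 0), (4, 4, 4)))
```

The wraparound links are what let traffic take short direct hops between nodes instead of traversing tiers of switches, which is the parallel Laliberte draws with A3Cube's fabric.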
And just as SANs developed as a way to leverage underutilized storage resources, Laliberte pointed out, A3Cube's fabric allows for better sharing of memory by addressing the challenge of how to connect to it: "Right now, without using the fabric like they are proposing, you are talking about having to go down through a couple tiers of switching and back." This doesn't make sense, particularly in a high-performance environment, he noted.
A3Cube is not the only company to take an architectural approach to either improve communication with available memory or move applications closer to memory. Scale-out memory platform maker Violin Memory has an array that allows applications such as SQL Server, SharePoint, and Exchange, as well as Windows Server Hyper-V virtualization and Server Message Block (SMB) file services, to access persistent memory directly. Meanwhile, Diablo Technologies' Memory Channel Storage architecture connects NAND flash directly to the CPU through a server's memory bus, so that persistent memory is essentially attached to the host processors of a server or storage array. SanDisk has incorporated the MCS architecture into its ULLtraDIMM technology.