Developing a product is never easy, and doing so in a fast-paced market operating on "Internet time" is even more difficult. Doing so when the very definition of the market is changing makes things especially interesting. That was the case as we at Mellanox began developing our I/O products, at a time when the major vendors and suppliers in the server I/O market were debating which route to take to higher performance: first the rival Next Generation I/O and Future I/O standards, then the merged System I/O and now, thankfully, just Infiniband.
Before Infiniband began to surface as the best-of-class solution for the modern data center, Mellanox had committed to Next Generation I/O technology. Changing the company's focus to Infiniband would surely mean delays in getting product out. For a startup with an unproven track record, that was a critical decision to make.
The Infiniband architecture was strongly influenced by the requirements of storage, clustering and communications. To meet these different requirements, the Infiniband I/O fabric must be able to operate with significantly different characteristics, depending on the application.
The Infiniband architecture is gaining momentum with release of the 1.0 specification and the appearance of functional silicon. However, the development of the standard included both controversy and technical challenges.
Initially two architectures were contending to provide this I/O technology: NGIO and Future I/O. The NGIO architecture was introduced first: a 2.5-Gbit/second point-to-point switched fabric that resolved many of the problems of the shared bus. The switched nature of NGIO enabled extensive scalability through the addition of switches to the fabric. The architecture was based on a layered network model, with physical, link, network and transport layers supported in hardware. Features included end-to-end reliability, remote-DMA support and out-of-the-box connectivity.
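The layered model is easiest to picture as a packet that carries one header per hardware layer, with the transport layer providing the end-to-end sequencing and remote-DMA addressing. The sketch below is purely illustrative; the field names and sizes are assumptions, not the actual NGIO wire format.

```c
#include <stdint.h>

/* Illustrative only: one header per hardware layer of a layered fabric,
 * not the actual NGIO packet format. */
struct link_header {        /* link layer: moves the packet hop by hop */
    uint16_t dest_port;     /* destination fabric port */
    uint16_t src_port;      /* source fabric port */
};

struct transport_header {   /* transport layer: end-to-end reliability */
    uint32_t psn;           /* packet sequence number, acknowledged end to end */
    uint64_t rdma_vaddr;    /* remote virtual address for remote-DMA writes */
    uint32_t rdma_rkey;     /* remote memory key authorizing the access */
};

struct fabric_packet {
    struct link_header      lrh;
    struct transport_header tph;
    uint8_t                 payload[256];  /* base 256-byte packet payload */
    uint32_t                crc;           /* link-level integrity check */
};
```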
Soon thereafter, Future I/O was introduced to solve the same general class of interconnect issues, but the new spec came at it from a different angle. Future I/O was based on the principle of end-to-end data paths sharing the same physical media under a prioritization scheme, with features such as multispeed connections, virtual lanes to support quality of service, larger packet sizes, credit-based flow control and multicast capability.
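One way to picture several prioritized data paths sharing one physical link is a per-port arbiter that picks the next virtual lane allowed to transmit. The lane count, priority encoding and credit check below are assumptions chosen to illustrate the idea, not the Future I/O arbitration rules.

```c
#include <stdbool.h>

#define NUM_VLANES 4

/* Per-virtual-lane transmit state on one physical port (illustrative). */
struct vlane {
    int  priority;        /* smaller value = higher priority */
    bool has_packet;      /* a packet is queued on this lane */
    int  credits;         /* link-level credits granted by the receiver */
};

/* Pick the highest-priority lane that has both a packet and credit.
 * Returns the lane index, or -1 if nothing can be sent right now. */
int select_next_vlane(const struct vlane lanes[NUM_VLANES])
{
    int best = -1;
    for (int vl = 0; vl < NUM_VLANES; vl++) {
        if (!lanes[vl].has_packet || lanes[vl].credits <= 0)
            continue;
        if (best < 0 || lanes[vl].priority < lanes[best].priority)
            best = vl;
    }
    return best;
}
```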
Recognizing the need for new I/O interconnect devices, we began developing devices supporting the NGIO standard in early 1999. We were actually far along in the development process when we began to recognize the forces supporting Future I/O and many of the technical merits of this interconnect architecture.
Customers were being forced to choose between the two architectures, and we were faced with a dilemma on how to proceed with our development: Do we continue with our current path of NGIO only? Or do we change directions and work on a product for Future I/O as well?
QoS leads comms needs
We were working with a large developer of communications systems for which quality of service was vital. A switched I/O fabric that could support multiple, independent classes of traffic on the same physical fabric was extremely powerful, particularly because these "virtual lanes" used independent link-level credit-based flow control. This prevented congestion in one class of traffic from creating higher-order "head-of-line" blocking that could impact another class.
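The head-of-line point is easiest to see in the credit accounting itself: because credits are tracked per virtual lane, exhausting one lane's credits stalls only that lane. Below is a minimal sketch, assuming credits are counted in whole packets rather than whatever buffer granularity the hardware actually used.

```c
#include <stdbool.h>

#define NUM_VLANES 4

/* Independent link-level credit count per virtual lane (illustrative). */
static int credits[NUM_VLANES] = { 8, 8, 8, 8 };

/* A packet may be placed on the wire only if its own lane has credit;
 * other lanes are never consulted, so congestion on one lane cannot
 * block traffic queued on another. */
bool try_send(int vl)
{
    if (credits[vl] <= 0)
        return false;      /* only this lane stalls */
    credits[vl]--;         /* consume one credit for the packet sent */
    return true;
}

/* Called when the receiver returns credit for lane vl after freeing
 * buffer space, allowing that lane to resume. */
void credit_return(int vl, int granted)
{
    credits[vl] += granted;
}
```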
These I/O fabric characteristics proved ideal for developing multiprotocol systems. In addition, the ability of the fabric itself to perform traffic shaping created some novel opportunities. On the other hand, the simpler NGIO standard offered an attractive price/performance trade-off. Thus, both standards had strengths and weaknesses.
It seemed as if there were two possibilities for the future: Either both technologies would be deployed, leaving separate NGIO and Future I/O networks that would have to be bridged, or the two standards would merge into a unified interconnect architecture.
In either case, we knew we couldn't ignore either architecture and needed to become experts in both. As a result, we put a device on our road map that bridged the two technologies. This meant that our architecture and microarchitecture incorporated features that could support both standards.
Over the next few months, the engineering team continued core NGIO development and in parallel built expertise in the architecture and features of Future I/O. The team began to:
- Support multiple link speeds and negotiate from higher-speed links down to the base 1x link speed of NGIO;
- Accept traffic on multiple virtual lanes (prioritized logical data paths within a physical link);
- Accommodate maximum-transfer units beyond the base 256-byte packets of NGIO and support end-to-end flow control (a simplified negotiation sketch follows this list).
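A simplified view of that negotiation is each end advertising what it supports and the link operating at the highest setting both share, with 1x and 256 bytes as the guaranteed floor. The width multipliers and MTU values here are assumptions for illustration.

```c
#include <stdint.h>

/* Capabilities one end of a link advertises (illustrative values). */
struct link_caps {
    uint8_t  max_width;   /* link width multiplier: 1, 4, ... (1x is the floor) */
    uint16_t max_mtu;     /* largest packet payload supported, in bytes */
};

struct link_params {
    uint8_t  width;
    uint16_t mtu;
};

/* Both ends fall back to the highest setting they share; the base
 * 1x / 256-byte configuration is always a valid common ground. */
struct link_params negotiate(struct link_caps a, struct link_caps b)
{
    struct link_params p;
    p.width = (a.max_width < b.max_width) ? a.max_width : b.max_width;
    p.mtu   = (a.max_mtu   < b.max_mtu)   ? a.max_mtu   : b.max_mtu;
    if (p.width < 1)  p.width = 1;    /* never below the 1x base */
    if (p.mtu < 256)  p.mtu = 256;    /* never below the 256-byte base */
    return p;
}
```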
In August 1999, the supporters of the two standards decided to merge their efforts into what became the Infiniband architecture. The idea was to take the "best-of-class" features from both NGIO and Future I/O and combine them in the new architecture. Our initial NGIO product was nearly ready for tapeout at the time of the merger. Taping out the NGIO product would validate and prove our execution capabilities and technology; on the other hand, an NGIO device would consume resources and divert focus from our ultimate Infiniband products.
Ultimately the decision was made to drop the NGIO product and focus all engineering resources on Infiniband. In fact, because we had developed expertise in both technologies, we found ourselves in a good position to develop Infiniband devices. Our architecture incorporated features from both NGIO and Future I/O, making the move to Infiniband much simpler.