SAN JOSE, Calif. — Top computer and communications companies are working to define a standard for fast data transfers over Internet Protocol networks that could bust through performance bottlenecks in high-end Ethernet adapter cards. The work could shift the balance of power in the data center, where Infiniband, Ethernet and Fibre Channel, and the companies that back them, now vie for dominance.
Top architects from companies such as Cisco Systems, Compaq Computer, Hewlett-Packard, IBM and Microsoft will meet on Wednesday (Dec. 12) to define iWarp, a form of remote direct memory access (RDMA) that will be independent of switch fabrics and applications protocols. The group hopes to have a final standard available by the end of next year, with products following hard on its heels.
"My goal is to unify IP and Infiniband," said Michael Krause, who heads up interconnect work for Hewlett-Packard Co. and leads the iWarp effort, which started eight months ago. As many as 30 companies attended a first meeting on the proposed standard at an IBM Research facility in October.
"Everybody is trying to get to a 10-Gbit fabric and needs RDMA and the same software semantics to be used everywhere," said Krause. "It will take a couple years for this to shake out. But when it does, the customer should be able to decide whether he wants Infiniband or 10-Gbit Ethernet. Infiniband might become the clustering interconnect for low latency, and 10G Ethernet the link for access to the Internet and storage."
Just what role each interconnect will play is key in determining which companies will take the lead in building the anticipated computing utility of the future. Computer makers have invested heavily in recent years in defining and developing the Infiniband specification. Networking companies have history and a wealth of investment in old and new versions of Ethernet. And a rising group of storage companies has staked claims to links such as Fibre Channel.
The interconnects in some respects act like the arms of an octopus, extending the reach of one OEM directly into another's systems. However, most observers believe the triangle of server, storage and network OEMs that defines today's data centers will increasingly have to use one another's technologies flexibly for many years, until a winner shakes out.
Level playing field
The iWarp work could level the playing field by defining, in a general way for any interconnect, much-needed RDMA techniques that are now spelled out in ways specific to Infiniband and a handful of vendor-specific interconnects, such as Myricom's Myrinet. Because iWarp is independent of the data link layer, it also works across different application protocols such as the Direct Access File System Protocol and the American National Standards Institute's SCSI RDMA Protocol.
At the very least, iWarp will pave the way toward cheaper network interface cards (NICs) that deliver better performance. Current Ethernet NICs require significant memory to buffer incoming data, which often arrives out of order, until an application can reassemble and use it. That copying and reassembly work can eat up a big chunk of host processor and system I/O resources.
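To make that overhead concrete, here is a minimal C sketch of the conventional socket receive path the architects describe, using standard POSIX calls; the port, peer address, buffer size and loop are assumptions for illustration only. Each recv() is a trip into the kernel, and each one copies data out of kernel socket buffers into the application's buffer, which is the per-byte work that scales badly as link speeds climb.

    /* Conventional socket receive path: the kernel buffers and reorders TCP
     * segments, then copies them into the application buffer on each recv().
     * Illustrative sketch only; addresses and sizes are arbitrary assumptions. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5001);                       /* assumed test port */
        inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);  /* assumed peer address */

        if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
            perror("connect"); return 1;
        }

        char buf[64 * 1024];
        ssize_t n;
        size_t total = 0;

        /* Every iteration is a kernel-mode transition plus a memory copy
         * from kernel socket buffers into buf[]; at gigabit rates and above,
         * this per-segment work is what consumes the host CPU. */
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            total += (size_t)n;

        printf("received %zu bytes\n", total);
        close(fd);
        return 0;
    }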
"Gigabit NICs are starting to feel this problem pretty badly. At 10 Gbits, NICs may become pretty darned near intractable," said Paul Culley, a senior architect for Intel-based servers at Compaq Computer Corp., who has been involved in the iWarp effort.
The iWarp approach provides "a way to describe where the application wants data up front, so a NIC can plant it there in one go," Culley said. "And it does not require the operating system to go back and forth between kernel mode so often, which really eats up CPU performance today."
Culley estimates that these kernel-mode calls can chew up 20 to 60 percent of CPU performance in a server with a Gigabit Ethernet card today. "Without iWarp, 10-Gig Ethernet can be used, but you won't get 10 Gbits out of it."
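By contrast, the RDMA model Culley describes lets an application register a destination buffer once and hand its location to the adapter up front, so incoming data can be placed directly without per-segment copies or kernel crossings. Because the iWarp specification has not yet been written, the C sketch below uses hypothetical function names (rdma_register_buffer, rdma_post_recv, rdma_poll_completion) with trivial stand-in bodies; it shows only the shape of the programming model, not any real API.

    /* RDMA-style receive path (sketch). The rdma_* calls below are hypothetical
     * placeholders for a verbs-like interface that iWarp has not yet defined;
     * the stub bodies exist only so the sketch is self-contained. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct rdma_mr { void *addr; size_t len; } rdma_mr;

    /* Hypothetical: pin and register an application buffer with the adapter. */
    static rdma_mr *rdma_register_buffer(void *addr, size_t len)
    {
        rdma_mr *mr = malloc(sizeof *mr);
        if (mr) { mr->addr = addr; mr->len = len; }
        return mr;
    }

    /* Hypothetical: tell the NIC, up front, where incoming data should land. */
    static int rdma_post_recv(rdma_mr *mr) { (void)mr; return 0; }

    /* Hypothetical: check for a completion; here we pretend the buffer filled. */
    static int rdma_poll_completion(rdma_mr *mr, size_t *bytes)
    {
        *bytes = mr->len;
        return 0;
    }

    int main(void)
    {
        size_t len = 1 << 20;                 /* 1-Mbyte application buffer (assumed) */
        void *buf = malloc(len);
        if (!buf) return 1;

        /* One-time setup: register the buffer and advertise its location,
         * instead of copying into it segment by segment later. */
        rdma_mr *mr = rdma_register_buffer(buf, len);
        if (!mr || rdma_post_recv(mr) != 0) return 1;

        /* The NIC reassembles and places data directly into buf; the host
         * only waits for a completion, with no per-segment copy or system call. */
        size_t received = 0;
        while (rdma_poll_completion(mr, &received) == 0 && received == 0)
            ;

        printf("adapter placed %zu bytes directly into the buffer\n", received);
        free(mr);
        free(buf);
        return 0;
    }

In this model the kernel and the host CPU are involved only at setup and completion time, which is why proponents expect the per-byte processing cost to fall sharply at 10-Gbit rates.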
Cisco Systems Inc. hopes to use iWarp to power its efforts at delivering storage networks that link servers and large storage arrays over IP instead of the fast but more costly Fibre Channel links often used today. "We can get moderate-performance links with iSCSI over IP at 50 to 60 Mbytes/second today, but with iWarp we will get to very high-end servers that can handle hundreds of megabytes per second as much as a host CPU can throw at them, really," said Mark Bakke, a chief architect for storage networks, who joined Cisco as part of its acquisition of startup NuSpeed Internet Systems Inc.
Indeed, earlier this year, Mario Mazzola, chief development officer at Cisco, identified the company's work on storage IP networks with NuSpeed as a billion-dollar opportunity. He also said Cisco thinks Ethernet can handle any task new interconnects like Infiniband are targeting.
And Cisco believes 10-Gbit Ethernet will become available for storage networks before a 10-Gbit version of Fibre Channel, said Bakke. "My personal view is [that] Infiniband is going to be a server interconnect, and won't expand into broader networks," Bakke said. "Ethernet is king when it comes to networking. After all the development on new technologies is done, people usually end up with Ethernet."
Infiniband holds a performance edge, with data rates up to 3 Gbytes/s and latencies as low as 100 nanoseconds, while high-end Ethernet tops out at 10-Gbit/s throughput and 5-microsecond latency. However, even computer architects at Compaq, HP and IBM believe iWarp may help high-speed versions of Ethernet steal sockets from Infiniband in low-end servers and extend Ethernet's reach in the data center.
"For a large number of apps [Infiniband's performance edge] won't matter, and the well-understood and mature Ethernet technology will be sufficient," said Culley of Compaq. "With iWarp, I would expect the typical Internet service provider wouldn't think about moving to Infiniband, but will be very content to stay with Ethernet."
"Ethernet has a lot more going for it as a data center fabric than does Infiniband," said Renato Recio, an IBM server architect and liaison with IBM Microelectronics. However, tallying the costs of Ethernet is still difficult. Optical transceivers for the 10-Gbit version are still pricey, as much as $900. "That's a major problem for 10-Gig Ethernet," Recio said.
On the other hand, Infiniband may require end users to upgrade their system-management software and learn about new kinds of NICs. That means the total cost of ownership of moving to an Infiniband data center may still be higher than that of migrating to 10-Gbit Ethernet, he added.
"Compaq will use both [interconnects]. We expect the market will eventually settle out to one or the other, but it's impossible to see which one will dominate at this point," Culley said.
The advantage of iWarp is that it provides a single hardware/software interface that can be used on Infiniband, Ethernet or storage links like iSCSI, unifying the different camps, said IBM's Recio. "These worlds are all collapsing, and iWarp helps hedge your bets," he said.
IBM is replacing its proprietary I/O and cluster interconnects with Infiniband today. The company sees iSCSI as a good link for storage systems, although it will continue work on Fibre Channel for several years. And it believes Gigabit Ethernet can be used to link data centers within a building or across a campus.
HP's Krause optimistically hopes the iWarp standard might be finished by June, with products coming at the end of 2002. "I'm talking with several vendors who believe they can deliver Gigabit Ethernet-based products in this time frame and then 10G products in the second half of 2003," he said.
But others say the standard, which is being taken to the Internet Engineering Task Force, could take significantly longer. The iWarp group was fairly far down the road to writing specification documents when the IETF said it needed to backtrack to define a problem statement and get that ratified before creating a working group and beginning the spec effort in earnest.
"There's no question some products will ship," said Compaq's Culley. "We are keeping our fingers crossed the products won't ship before the standard is finished."
A number of companies, including Adaptec, Alacritech and IBM, are in various stages of delivering for Gigabit Ethernet what are called TCP offload engine (TOE) products, which accelerate processing of the Transmission Control Protocol. A second generation of those development efforts is expected to incorporate iWarp.
"Internally, IBM is working on TOE products, and we are looking at not just TCP offload but framing features and iWarp," said Recio. "Our intention is to go after that business by providing cores for it in our ASIC libraries."