SAN JOSE, Calif. -- Mellanox Technologies, the sole provider of merchant Infiniband silicon, is jumping on a growing bandwagon of companies fielding chips for 10 Gbit/second Ethernet. Overall, the Mellanox part falls in the middle of a pack of half a dozen competitors, in line or ahead in some respects but behind in others.
"We think Infiniband is a huge growth market, but we want to grow even faster than that market," said Dan Tuchler, director of product management for Mellanox.
"There's no doubt 10G Ethernet will be a huge market too, and certain things are coming together to suggest that it's about to take off. So we want to layer one growth market on top of another," Tuchler added.
Several factors are accelerating the move to 10G at the server: switches for 10G Ethernet are coming down in price, and affordable 10GBase-T copper links are at least on the distant horizon, he said. In addition, the latest quad-core processors are doing a good job of handling part of the 10G processing tasks, while virtualization technologies are ratcheting up the amount of I/O data centers need, he added.
The Mellanox chip is supporting per priority pause, a congestion management feature, as one step toward distinguishing its Ethernet chip. The feature is currently under discussion in an Ethernet IEEE standards group.
"We wanted to get this feature out there and help shake it out in the market," said Tuchler. "We think its one of the biggest and most significant pieces of congestion management," he added.
Congestion management is itself one part of a broader effort to advance Ethernet. It lays the foundation for running the Fibre Channel protocol over Ethernet, a concept that will be the center of discussion at the June meeting of the T11 committee that handles Fibre Channel standards. Mellanox will participate in the meeting.
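In rough terms, per priority pause extends the familiar Ethernet pause frame so that each of the eight 802.1p traffic classes can be throttled independently instead of halting the whole link. The sketch below is a simplified Python illustration of how such a frame might be packed, following the general shape of the proposal under IEEE discussion rather than a finished standard or Mellanox's hardware; the function name and exact field values are illustrative only.

    import struct

    PAUSE_OPCODE = 0x0101  # MAC control opcode for priority pause; value per later drafts, shown for illustration

    def build_per_priority_pause(pause_quanta):
        """Pack a simplified per-priority pause frame payload.

        pause_quanta maps a priority (0-7) to a pause time in quanta
        (one quantum = 512 bit times). Priorities not listed stay
        unpaused, so traffic in those classes keeps flowing.
        """
        enable_vector = 0
        times = [0] * 8
        for prio, quanta in pause_quanta.items():
            if not 0 <= prio <= 7:
                raise ValueError("priority must be 0-7")
            enable_vector |= 1 << prio   # flag this class as paused
            times[prio] = quanta         # how long to pause it
        # 16-bit opcode, 16-bit per-priority enable vector,
        # then eight 16-bit pause times, all in network byte order
        return struct.pack("!HH8H", PAUSE_OPCODE, enable_vector, *times)

    # Pause only the storage class (priority 3); other classes keep running
    frame = build_per_priority_pause({3: 0xFFFF})

The point of the per-priority scheme is visible in the last line: a congested storage flow can be backed off without stalling ordinary LAN traffic sharing the same link.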
Mellanox claims its dual-port ConnectX EN delivers up to 17.6 Gbits/second in bi-directional bandwidth and 6.9 microsecond latency on standard TCP/IP flows, besting most competitors. The 21 x 21 mm chip integrates CX4 and XFP drivers and consumes as little as 12 W on a CX4 link.
However, in three areas the Mellanox part offers less than some competitors. The ConnectX EN does not terminate TCP, does not yet enable remote direct memory access (RDMA) and does not run the OpenFabrics Alliance (OFA) software, a common stack meant to unite Infiniband and Ethernet devices.
Mellanox is following Intel Corp.'s lead, letting the host processor handle TCP processing, unlike competitors that offer partial or full TCP offload engines (TOEs). "Our observation is it makes more sense to handle the TCP processing on a 45-nm Intel core than on a third-party network card," said Tuchler.
Indeed, Intel has rolled out its I/O Acceleration Technology at Gbit rates. IOAT runs TCP termination on the CPU, handles memory placement in the chip set and lets Ethernet chips like the one from Mellanox handle a laundry list of so-called stateless offload chores.
Tuchler noted that several years ago everyone thought TOE would be a requirement for Gbit Ethernet, but that never panned out. Now, with the rise of quad-core processors, TOE will prove similarly unnecessary at 10G, he said.
More surprisingly, Mellanox is not yet enabling RDMA on the chip, even though this feature of advanced Ethernet has its heritage in the Infiniband standard Mellanox champions.
"We have some of the basic RDMA controls in the chip, but we need to do more development work to expose them to software," Tuchler said.
Likewise, Mellanox is running standard Windows and Linux Ethernet drivers, but not the OFA stack. "In the future, we are looking at running some of the OFA components," he said.
Mellanox has implemented the Receive Side Scaling capability Microsoft is putting into its Ethernet software stack in Windows. The capability lets the chip associate particular Ethernet flows, MAC addresses or virtual LANs with individual cores in a multi-core CPU. Mellanox also offers the feature in Linux and the VMware virtualization software.
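Conceptually, receive side scaling hashes the fields that identify a flow and uses the result to pick a receive queue, each serviced by a different core, so packets from one connection always land on the same core. The toy Python sketch below illustrates the idea only; real implementations, including the Windows stack, use a Toeplitz hash and an indirection table rather than the simple hash and modulo shown here, and the function name is made up for illustration.

    NUM_CORES = 4  # one receive queue per core, purely for illustration

    def select_core(src_ip, dst_ip, src_port, dst_port):
        """Map a TCP/IP flow to a core by hashing its 4-tuple.

        Toy stand-in for receive side scaling: real hardware computes a
        Toeplitz hash over these fields and looks it up in an indirection
        table, but the effect is the same -- every packet of a given flow
        is steered to the same queue and core, preserving cache locality.
        """
        flow_id = (src_ip, dst_ip, src_port, dst_port)
        return hash(flow_id) % NUM_CORES

    # All packets of this connection are steered to one queue and core
    core = select_core("10.0.0.1", "10.0.0.2", 45678, 80)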
Tuchler claimed that the Neptune Ethernet chip from Sun Microsystems only offers a subset of these functions. It cannot dynamically adjust flows based on MAC addresses and VLANs, he said.
By the end of the year, Mellanox will release a variant of its ConnectX EN that can handle either a 20 Gbit Infiniband link or two 10 Gbit Ethernet links. Early next year the company will ship an adapter that can handle 40 Gbit Infiniband links via four 10 Gbit serdes.
Mellanox developed its Ethernet technology in house. Adding an Ethernet framer and MAC to the existing serdes and other I/O components on an Infiniband part was not that difficult, Tuchler claimed.
"We have shipped more Infiniband chips than all the 10G Ethernet chips combined," he said.