SANTA CLARA, Calif. – Riding a wave of interest in Linux clusters, startup Mellanox Technologies Inc. rolls out Monday (Nov. 10) its first Infiniband switch capable of handling 30-Gbit/s data transfers. The InfiniScale III is aimed at port aggregation and at lowering the cost per port for the high-speed, low-latency interconnect.
The chip supports eight ports at 30 Gbit/s, 24 ports at 10 Gbit/s, or a mix of 10- and 30-Gbit/s ports within its 480-Gbit/s aggregate internal bandwidth. Mellanox has packaged it in the MTS2400, an enterprise-class 1U switch design. "Initially all the demand is at the 10G level. That will be enough for virtually all the current applications," said Dana Krelle, vice president of marketing for Mellanox.
Supports eight 30-Gbit/s ports or 24 10-Gbit/s ports.
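For readers checking the arithmetic, the quoted port counts line up with the 480-Gbit/s aggregate figure if that figure counts traffic in both directions. The short sketch below is our own back-of-the-envelope check under that assumption, not a Mellanox specification.

# Back-of-the-envelope check (ours, not Mellanox's) of how the quoted
# port configurations relate to the 480-Gbit/s aggregate figure,
# assuming that figure counts traffic in both directions (full duplex).
configs = {
    "24 x 10 Gbit/s": [(24, 10)],
    "8 x 30 Gbit/s": [(8, 30)],
    "12 x 10 + 4 x 30 Gbit/s": [(12, 10), (4, 30)],  # mix cited for the MTS2400
}
for name, ports in configs.items():
    one_way = sum(count * rate for count, rate in ports)  # Gbit/s, one direction
    print(f"{name}: {one_way} Gbit/s per direction, {2 * one_way} Gbit/s aggregate")

Each configuration works out to 240 Gbit/s per direction, or 480 Gbit/s counting both directions.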
Indeed, the startup does not plan volume shipments of a 30-Gbit/s host channel adapter until the second quarter of 2005. Mellanox has not yet decided whether that part will support PCI-X 2.0 or PCI Express; however, it is designing a 10-Gbit/s HCA for the Express interconnect that is due early next year.
Krelle said Infiniband's closest competition comes from efforts to layer Infiniband's remote direct memory access capabilities over Ethernet with TCP offload. A number of companies, including Intel, Hewlett-Packard and Microsoft, are backing that interconnect for broad use in future data centers. Krelle said it will be mid-2005 at the earliest before that interconnect is available at 10-Gbit/s data rates, and it may have higher costs and lower throughput than Infiniband.
Meanwhile, the InfiniScale III will pave the way for a 24-port, 1U Infiniband switch system that could cut the cost per 10-Gbit/s port from roughly $700 in a current 96-port, 7U system to about $400, Krelle said. The company has already crafted such a system as a reference design. The MTS2400 offers 24 10-Gbit/s ports, or 12 10-Gbit/s ports and four 30-Gbit/s ports for port aggregation to simplify data center cabling.
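As a rough illustration, our own multiplication of the per-port figures Krelle cited (not vendor list prices) puts the comparison this way:

# Rough cost comparison implied by the quoted per-port figures; the
# totals and percentage are our own arithmetic, not vendor pricing.
old_ports, old_per_port = 96, 700   # current 96-port, 7U system, ~$700 per 10G port
new_ports, new_per_port = 24, 400   # 24-port, 1U reference design, ~$400 per 10G port
print(f"96-port 7U system: ~${old_ports * old_per_port:,} total")
print(f"24-port 1U system: ~${new_ports * new_per_port:,} total")
saving = 100 * (old_per_port - new_per_port) / old_per_port
print(f"Per-port saving: ~${old_per_port - new_per_port} (about {saving:.0f}%)")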
The new chip and reference system will both be available by February. To date the part has been tested sending 30 Gbit/s signals over 10 meters of Infiniband cable. The spec calls for transfers up to 17 meters.
Infiniband is beginning to catch fire in large clusters as a high-bandwidth alternative to proprietary interconnects from Quadrics Ltd. (Bristol, England) and Myricom Inc. (Arcadia, Calif.). Virginia Polytechnic Institute recently announced that its Mellanox-based cluster of 1,100 Apple G5 systems now ranks as the world's third-fastest supercomputer.
Krelle said that, with the advent of native Infiniband support in Oracle and IBM DB2 databases, such clusters could begin to move into large commercial data centers next year. Systems startups such as InfiniCon, Topspin and Voltaire are already using Mellanox switch chips to address such markets.
On the competitive front, Agilent Technologies announced in June that it is developing Infiniband HCA chips and 10- and 30-Gbit/s Infiniband switches.
"I don't think in the near term Infiniband will become the universal I/O interface some people thought it would a few years ago, but it certainly is gaining momentum in the clustering world. IBM and Sun are pushing it," said Nathan Brookwood, principal of market watcher Insight64 (Saratoga, Calif.).
The InfiniScale III is made in Taiwan Semiconductor Manufacturing Co. Ltd.'s 130-nm process, comes in a 40 x 40-mm, 961-lead BGA package and will sell for $949 in 1,000-unit quantities. It consumes an average of 18 W. The MTS2400 system consumes an average of 50 W.
For more information, call marketing at 408-970-3400 x 304, or visit www.mellanox.com.