The chief executive of Mellanox is pushing server makers to adopt his 40 Gbit/s Ethernet controller this year, but I see him facing a steep hill and a big new competitor.
SAN JOSE, Calif. – Eyal Waldman is a pusher.
The chief executive of Mellanox Technologies wants server makers to adopt his 40 Gbit/s Ethernet controller, which also handles 56 Gbit/s Infiniband, this year. That’s a big leap over what most of the industry is planning.
This is the big year of the 10 Gbit Ethernet ramp. For a decade, engineers have been working to establish 10G as the baseline Ethernet speed for servers, but until now it has been too costly and power hungry.
Now a class of so-called 10GBase-T chips has emerged that can drive 10G over roughly 10 meters of copper cabling for about 2 W. Intel has built one called Twinville, reportedly using a third-party physical layer block from startup Aquantia as part of its secret sauce.
The Twinville chip will be soldered onto the motherboard of Intel’s Romley server platform, due to ship in March. Big server makers such as Dell, HP and IBM are said to have their own designs that use a slot into which they can plug a wide variety of 10GBase-T and other 10G cards.
Even this last big step on the 10G road has been difficult. The Romley boards and many of the 10GBase-T products need PCI Express Gen 3, and difficulties validating the fast PCIe 3.0 designs were one reason the Romley boards have been delayed, one Intel manager said.
For Eyal, that’s not fast enough. He claims that if servers make the leap to 40G, they will bust through a bandwidth bottleneck, letting end users pack more virtual machines on each server and reduce the total number of servers they need.
Mellanox claims it’s already getting traction with big Web 2.0 data centers and others that are adopting the approach. I’m skeptical.
I talked to a senior manager of one big data center that Eyal claims is a close partner. The data center manager said he is indeed kicking the tires on the Mellanox 40G chip in a few test deployments, but he has strong reservations about it.
The Mellanox scheme requires taking a cost hit up front. Eyal says he is selling his 40G chips for two to 2.5 times the price of his 10G chips; per gigabit of bandwidth, that’s a deal, but it’s still a cost hit.
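The "it's a deal" arithmetic is easy to check: four times the line rate for at most 2.5 times the price means the cost per gigabit actually falls. A minimal sketch of that math, assuming a made-up $100 baseline price (only the 2x-2.5x ratio and the 10G/40G line rates come from the column):

```python
# Back-of-the-envelope check of the Mellanox pricing pitch.
# The $100 baseline is a hypothetical placeholder; only the price
# multipliers and line rates are taken from the article.

TEN_G_PRICE = 100.0              # hypothetical price of a 10G controller
PRICE_MULTIPLIERS = (2.0, 2.5)   # Waldman's quoted range for the 40G part

per_gbit_10g = TEN_G_PRICE / 10  # cost per Gbit/s of bandwidth at 10G

for mult in PRICE_MULTIPLIERS:
    per_gbit_40g = (TEN_G_PRICE * mult) / 40
    ratio = per_gbit_40g / per_gbit_10g
    print(f"40G at {mult}x the price costs {ratio:.2f}x as much per Gbit/s")
```

Even at the top of the quoted range, the 40G part comes in well under the 10G cost per gigabit, which is the whole of Waldman's pitch; the catch, as the data center manager notes, is that the buyer pays the larger absolute price up front.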
The data center manager notes that installations like his tend to go for the lowest-cost, simplest, most widely available off-the-shelf gear they can buy in large volumes. That doesn’t describe 40G products. In addition, 40G adds an extra layer of management complexity to a network that most data center managers are trying to keep as simple as possible.
Mellanox is clearly on a tear. Its revenues have grown every year, even through the 2008-2009 downturn, hitting an astonishing $260 million in 2011, thanks in part to the company’s acquisition of Voltaire, an Infiniband switch maker.
Eyal notes that one research group credits Mellanox with having the biggest market share in 10G controllers at 24.6 percent in the fourth quarter of 2011. He says that’s because the data centers started adopting his approach in 2011.
I’m still skeptical and will wait to see how the 2012 numbers evolve. I suspect that, with Intel’s Ivy Bridge servers coming out, the numbers will look quite different at the end of 2012.
Mellanox has a bigger problem on its horizon. A few weeks ago, Intel bought the Infiniband operations of QLogic.
Although Mellanox’s latest chips can handle Infiniband or Ethernet, the company admits 90 percent of its sales go into Infiniband networks, typically deployed in large clusters of high-performance systems for specialized apps. After five years of trying to eke out a position in what was effectively a two-horse race, QLogic grabbed only about 15 percent of that market from Mellanox.
But with Intel as the new competitor, the dynamics are likely to shift. Intel can tie the fast QuickPath Interconnect on its processors to Infiniband and maybe even run some specialized QPI protocols through the IB fabric. In a market focused on performance, Intel will then have an edge Mellanox can’t match.
Another observer said the big motivation for Intel’s QLogic deal was access to QLogic’s Infiniband switches. Those relatively high-margin boxes represent another slice of the growing data center market Intel can grab.
Another interesting wrinkle is whether Intel will try to integrate Infiniband directly into its server SoCs. One Intel networking manager says network fabrics could be the next big thing for server chips, pointing to signals that archrival AMD is pursuing the same direction.
Eyal notes the QLogic IB products are a generation behind those of Mellanox. He estimates Intel has two years of hardware design work to catch up and two years of software integration work beyond that.
Maybe so. But four or five years out, Intel could start taking big chunks of the Infiniband market from Mellanox. By then, it could be about time for the real 40G ramp, and Intel and Broadcom will be ready for it.
In a world addicted to bandwidth, Mellanox clearly has the strongest drugs. But the server world is also addicted to low cost, simplicity and “good enough” technologies, many of them provided by Intel.
Eyal is a convincing pusher, but he is pushing up a steep hill.