Meanwhile, out at the network, things aren’t any easier. Intel sounded a fanfare for its X540 10 Gbit/s Ethernet controller, aka Twinville, which sits next to the new server processor on the Romley boards.
The chip is part of a new generation of controllers that can drive 10 Gbit/s Ethernet over copper at reasonable power levels. But look at the data sheets for this class of products and you will see that with this generation the trade-off is distance.
The old promise of the IEEE “Base-T” specs was that each generation made Ethernet run ten times faster over 100 meters of Category 5 copper cable, the most widely used cabling in today’s computer rooms and offices.
But the specs of the Intel X540 10GBase-T controller call for Augmented Category 6 (Cat 6A) or Cat 7 cabling to reach 100 meters. Ordinary Cat 6 cables get you 55 meters, and there’s no support for Cat 5 at all. That’s not specific to Intel; it’s true of every product based on this IEEE spec, thanks to the limits of physics.
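To make the trade-off concrete, here’s a minimal Python sketch of a cable-plant check. The reach figures (100 meters over Cat 6A or Cat 7, 55 meters over plain Cat 6, no Cat 5 support) come straight from the spec limits above; the function and dictionary names are purely hypothetical.

```python
# Hypothetical cable-plant check for 10GBase-T reach limits.
# Reach figures follow the spec limits cited above: Cat 6A and
# Cat 7 reach 100 m, plain Cat 6 only 55 m, Cat 5 not supported.

MAX_REACH_M = {
    "cat6a": 100,
    "cat7": 100,
    "cat6": 55,
    # Cat 5 is deliberately absent: 10GBase-T doesn't run over it.
}

def link_ok(cable_class: str, run_length_m: float) -> bool:
    """Return True if a 10GBase-T link of this length is within spec."""
    reach = MAX_REACH_M.get(cable_class.lower())
    return reach is not None and run_length_m <= reach

# A 70 m run that was fine for gigabit over Cat 6 falls outside
# the 55 m limit once you move to 10GBase-T.
print(link_ok("cat6", 70))   # False
print(link_ok("cat6a", 70))  # True
print(link_ok("cat5", 30))   # False at any length
```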
It’s taken nearly a decade to get 10G Ethernet onto a server motherboard, the place where volume sales start. That’s because until we got to 40 nm process technology, no one could build a physical-layer chip that didn’t burn more than a couple of watts per port, an unacceptable level for servers.
As Ethernet guru John D’Ambrosia notes, going forward we are going to have to do things a little differently. Indeed, I expect I/O issues will increasingly drive server and server CPU designs.
That’s why, in my humble opinion, AMD recently paid a whopping $330 million-plus for SeaMicro. The server startup has some novel system I/O that I expect AMD will integrate into a future server CPU to save a few watts and try to gain an edge on Intel.
I applaud Intel for its fine E5-2600 server CPU and X540 Ethernet controller designs. The chips are complex and well executed. But AMD and other companies have good processors and controllers on the market and in the works, too.
The real news today is that there’s a big I/O bottleneck dead ahead, a couple of them actually. You’d better watch out, or in the next few years you could get caught up in them.
This is a familiar story to embedded systems folks. CPU speeds have been growing much more quickly than interconnect speeds, be they for local memory, storage, or remote databases. At some point I think we will need to reexamine the system architecture of a computer itself to decentralize the compute elements across the data instead of building ever-faster roads for the data to get to the CPU.
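As a toy illustration of that last idea, here is a minimal Python sketch contrasting the two models: hauling every record across the interconnect to a central CPU versus shipping a small function out to where the data lives. The DataNode class and everything in it are hypothetical stand-ins for real storage nodes, not any particular product’s API.

```python
# Toy contrast between the two architectures. DataNode stands in
# for a storage node holding part of a dataset; all names here
# are hypothetical.

class DataNode:
    def __init__(self, records):
        self.records = records  # the data that "lives" on this node

    def pull_all(self):
        # Compute-centric model: every record crosses the wire
        # so the central CPU can do the work.
        return list(self.records)

    def run(self, func):
        # Data-centric model: a small function travels to the node,
        # and only a small result travels back.
        return func(self.records)

nodes = [DataNode(range(i * 1000, (i + 1) * 1000)) for i in range(4)]

# 1) Move the data to the compute: 4,000 records cross the interconnect.
pulled = sum(r for node in nodes for r in node.pull_all() if r % 7 == 0)

# 2) Move the compute to the data: one function goes out per node,
#    and four integers come back.
pushed = sum(node.run(lambda recs: sum(r for r in recs if r % 7 == 0))
             for node in nodes)

assert pulled == pushed  # same answer, very different traffic
print(pushed)
```

The interesting knob is what crosses the wire: in the first model it is the data, which keeps growing; in the second it is the function and its result, which stay small no matter how big the dataset gets.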