Keith Underwood, an HPC designer at Intel on the panel, declined to give any specifics about Intel’s plans. Instead he laid out a broad set of opportunities and challenges for anyone trying to integrate Ethernet or other high-performance interconnects into server CPUs.
Intel currently licenses the QuickPath Interconnect (QPI) it uses on its processors to a handful of companies. Computer makers such as SGI use the technology to build large clusters, but they have to stay flexible because Intel makes changes to QPI when and how it sees fit.
If Intel integrates networking and other interconnect features into future processors it will “close some doors in hardware [for OEMs and card makers], but it opens a lot of other doors in software—there’s plenty of fun to be had yet,” said Greg Thorson, a chief engineer at SGI, speaking on the panel.
Interconnect chip and card makers may be the most threatened by Intel’s plans. For example, Mellanox is now the only supplier of merchant InfiniBand chips.
“There were eight or ten 10 Gbit/s Ethernet vendors five or ten years ago, now there are only about three or four,” said panelist Christian Bell of Myricom, which used to design its own interconnect but now makes high-speed Ethernet products optimized for targeted markets.
The move to integrate networking and interconnects into server CPUs comes at the same time a host of non-volatile memory technologies and uses are emerging. Long term, server designers want to bring flash closer to DRAM in the memory hierarchy, a move new chip-to-chip interconnects could enable.
“We’d like to have better non-volatile interfaces than solid-state drives,” said McLaren of HP Labs.
For its part, IBM sees 3-D chip stacks of processor and memory die as a key competitive advantage for all future servers from Exascale supercomputers on down to its Power and x86 systems. The company is developing logic interface chips for such stacks as part of its work with Micron in the Hybrid Memory Cube Consortium.
Intel is expected to compete with such designs using future versions of the QPI interconnect on its Xeon chips. AMD’s acquisition of server maker SeaMicro in February may have been motivated in part by a desire to own the startup’s interconnect technology.
What remains to be seen is what approach ARM-based server SoCs from companies such as Applied Micro, Calxeda, Marvell, Nvidia and Samsung will adopt.
Just to clarify, the Xilinx parts are Cortex-A9, not ARM9. They are certainly not the fastest Cortex-A9 parts you can get on the market, but it is the latest ARM instruction set (until ARMv8 devices enter the market), so the latest and greatest software will run on these Zynq devices.
Any word on optical PCIe or other optical interconnects?
Altera and Avago demoed an FPGA with optical transceivers built in, but I have yet to hear any more about it.
Intel clearly has a plan for interconnect evolution given its recent acquisitions (IDF may give us more insight). We will very likely see Ethernet (40G, 100G, 400G) and InfiniBand continue to be the leaders. Both are a decade ahead on bandwidth, and latency is not an issue if you are 4 to 10 times the data rate (RapidIO, for example, is only 10G). Network adapters and switches will be integrated into the processor and/or chipset, and in many cases already are.