SANTA CLARA, Calif. – A new front has opened up in the war over server microprocessors between Intel and ARM—interconnect. The technology is key to a range of chip-to-chip uses from high-speed networking to future non-volatile memory interfaces, supercomputer clusters and 3-D chip stacks.
Backers of RapidIO have been quietly courting ARM as well as ARM SoC and server makers, encouraging them to adopt RapidIO as an open alternative to the proprietary technologies in Intel Xeon chips. RapidIO got its start as a fast, low-latency link among DSPs in cellular base stations and other high-end embedded systems, and its backers now see an opportunity in ARM servers.
“This is a big part of why I came back,” said Sam Fuller who recently returned as executive director of the RapidIO Trade Association.
Intel is widely expected to implement in future Xeon processors the interconnect technology gained from a spate of recent acquisitions. Server and adapter makers believe such a move could narrow their hardware options, and they are looking to ARM and AMD for alternatives.
“There’s an opportunity for someone to be the ARM of interconnect--it could enable a lot of interesting designs,” said Moray McLaren, a high performance computing specialist at HP Labs.
McLaren was speaking in a panel discussion at the Hot Interconnects conference here on the topic of interconnects being integrated into microprocessors. Panelists said they expect Intel to integrate more networking and clustering technologies into its Xeon chips following its string of acquisitions in the area.
In July Intel acquired Whamcloud, a developer of parallel distributed file systems for large clusters. In April it bought an interconnect group from Cray for $140 million. In January, it acquired the InfiniBand chip business of QLogic for $125 million.
Previously, Intel bought two high-end Ethernet chip makers: Fulcrum in July 2011 for an undisclosed sum, and NetEffect in 2008 at a bargain price of $8 million.
“Integration of the [network] adapter [into the server processor] is or will be a done deal, but the real question is what about the interconnect fabric--outside of high performance computing [clusters] there’s very little of that yet,” said Lloyd Dickman, vice president of architecture at Bay Storage Technology and previously CTO of InfiniBand products at QLogic, also speaking on the panel.
Moray McLaren of HP Labs speaks at Hot Interconnects panel.
Just to clarify, the Xilinx parts are Cortex-A9, not ARM9. They're certainly not the fastest Cortex-A9 parts you can get on the market, but it is the latest ARM instruction set (until ARMv8 devices enter the market), so the latest and greatest software will run on these Zynq devices.
Any word on optical PCIe or other optical interconnects?
Altera and Avago demoed an FPGA with the optical transceivers built in, but I've yet to hear any more about it.
Intel clearly has a plan for interconnect evolution (IDF may give us more insight). With its recent acquisitions, we will very likely see Ethernet (40G, 100G, 400G) and InfiniBand continue to be the leaders. Both are a decade ahead on bandwidth, and latency is not an issue if you're 4 to 10 times the data rate (i.e., RapidIO is only 10G). Network adapters and switches will be integrated into the processor and/or chipset, and in many cases already are.
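The commenter's data-rate argument can be sketched with back-of-the-envelope arithmetic: at higher link rates, the time to clock a packet onto the wire shrinks proportionally. The figures below are illustrative only (a 1 KB packet, nominal marketing rates, serialization delay alone, ignoring switch hops and protocol overhead) and are not from the article.

```python
# Serialization delay for one 1 KB packet at several nominal link rates.
# Illustrative numbers only; real end-to-end latency also includes
# switch hops, protocol processing, and propagation delay.

PACKET_BITS = 1024 * 8  # 1 KB payload in bits

# Nominal link rates in bits per second (marketing rates, not goodput)
links = {
    "RapidIO 10G": 10e9,
    "Ethernet 40G": 40e9,
    "Ethernet 100G": 100e9,
}

for name, rate_bps in links.items():
    delay_ns = PACKET_BITS / rate_bps * 1e9
    print(f"{name}: {delay_ns:.1f} ns to serialize 1 KB")
```

Running this shows a 4x-10x spread in per-packet serialization time (roughly 819 ns at 10G versus 82 ns at 100G), which is the sense in which a faster link "buys back" latency on a per-transfer basis.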