Users may be kicking the tires of different ARM server architectures for some time. Within the next 18 months, as many as nine companies, including AMD, Applied Micro, Cavium, Calxeda, Marvell, Nvidia, Qualcomm, TI and Samsung, are expected to field a variety of 64-bit ARM server SoCs.
In addition to the processor or storage cartridges, the new HP system also packs two networking modules supporting up to six 10 Gbit/s links, at least two power supplies and five hot-pluggable fans. The Ethernet cards support the OpenFlow protocol for software-defined networking.
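HP has not published programming details for those switch modules, but the basic unit of OpenFlow control is a flow rule pushed down from an external controller. The sketch below uses the open-source Ryu framework (my choice for illustration; HP names no controller) to install a rule on any connecting OpenFlow 1.3 switch that forwards IPv4 traffic bound for 10.0.0.0/24 out port 2. The addresses and port numbers are hypothetical.

    # Minimal Ryu controller app: push one flow rule when a switch connects.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class OneRule(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath
            parser = dp.ofproto_parser
            # Match IPv4 traffic to the (hypothetical) 10.0.0.0/24 subnet...
            match = parser.OFPMatch(eth_type=0x0800,
                                    ipv4_dst=('10.0.0.0', '255.255.255.0'))
            # ...and forward it out switch port 2.
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(
                dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=inst))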
The Intel-based cartridge shipping now includes a Broadcom 5720 dual-port Gbit Ethernet controller and a Marvell 9125 storage controller. Each cartridge includes either a hard drive or a solid-state drive with capacity ranging from 200 Gbytes to a terabyte.
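A quick back-of-the-envelope calculation (mine, not HP's) shows what those per-cartridge drives add up to across a fully loaded 45-cartridge chassis:

    # Aggregate storage for a full chassis, using the article's figures:
    # 45 cartridges, one drive each, 200 Gbytes to 1 Tbyte per drive.
    cartridges = 45
    print(f"min: {cartridges * 200 / 1000:.0f} Tbytes")   # 9 Tbytes
    print(f"max: {cartridges * 1000 / 1000:.0f} Tbytes")  # 45 Tbytes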
The new design is a logical follow-on to HP’s blade-based servers. However, it differs significantly from the design promoted by the Facebook-led Open Compute Project. Other big data center operators, including Google and Microsoft, specify their own server and rack designs, although they do not publish them.
For its part, Intel worked with four large data center operators in China as part of its Project Scorpio to define a rack server architecture. It is expected to follow up that work with another rack design targeting a broader set of customers.
Separately, Dell designed a prototype system for customer testing that uses Marvell’s 32-bit ARM server SoC, the Armada XP.
Pricing for HP’s first Project Moonshot system begins at $61,875 for the enclosure, 45 Intel Centerton server cartridges and an integrated switch. The system is available in the US now and will ship to the rest of the world within 30 days.
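Spreading that list price evenly across the 45 cartridges gives a rough ceiling on per-node cost; the real per-cartridge figure is lower, since this lumps the enclosure and integrated switch into the divide:

    # Rough per-node cost from the quoted list price.
    system_price_usd = 61_875
    cartridges = 45
    print(f"${system_price_usd / cartridges:,.0f} per node")  # $1,375 per node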
The cartridge form factor here is smaller than anything I have seen fit into a muscular server-targeted chassis from the VME and CompactPCI folks.
The VMEs of the world might sharpen their pencils over this one. Maybe they could break into the mainstream server world with another chassis.
I believe HP has done something denser than VME (45 processor boards in 4U), but probably without its mil/aero ruggedness (and cost), and more aligned with PC server standards such as PCI Express and Ethernet interconnects.
I'm not really getting the appeal here. 45 wimpy processors in 4U doesn't sound that great, given that anyone can get 16 conventional sockets in the same space: 512 cores, 64 DDR3-1600 channels!
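For what it's worth, the math behind that complaint: the shipping cartridges use dual-core Centerton Atoms (per Intel's S1200 specs), so a full chassis offers about 90 small cores against the commenter's claimed 512 big cores in the same 4U:

    # Core density per rack unit, taking both sides' numbers at face value.
    # Moonshot assumes dual-core Centerton Atoms; the conventional figure
    # is the commenter's 16 sockets x 32 cores claim.
    rack_units = 4
    moonshot_cores = 45 * 2
    conventional_cores = 512
    print(f"Moonshot:     {moonshot_cores / rack_units:.1f} cores/U")   # 22.5
    print(f"Conventional: {conventional_cores / rack_units:.0f} cores/U")  # 128

Raw core counts do ignore per-core performance and power draw, which is where HP aims its performance-per-watt pitch.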
maybe the story here is the backplane, which doesn't seem to have gotten much coverage. does it provide anything other than 2x Gbit Ethernet and power?