SANTA CLARA, Calif. – At the same event where Facebook managers opened the door for ARM server SoCs in the data center, they also made it clear that today’s chips fall short of what they seek. In addition, NAND flash vendors need to refocus their plans to fill exploding storage needs in the data center, they said.
In keynotes and interviews at the Open Compute Summit, Facebook executives said they want server SoCs that pack lots of cores and not much else. They also called for a spectrum of NAND flash products in between the extremes of today’s consumer chips in USB drives and the premium products in solid-state drives.
Facebook wants disaggregated servers, said Frank Frankovsky, the chairman of the Open Compute Foundation and vice president of hardware design and supply chain at the social networking giant. Such servers should accommodate upgraded CPUs when they arrive every year or so without needing to swap out memory, networking and I/O chips that might only change once every five years or so.
At least a half dozen companies are planning 64-bit ARM server SoCs geared to save power in large data centers. But all those with existing chips and road maps are focused on highly integrated parts that build into the SoC the Ethernet, proprietary fabrics, and other features Frankovsky doesn’t want.
“I can’t say anyone has come saying we will build it exactly the way you are asking for it,” said Frankovsky in an interview with EE Times after his keynote. “When they hear this disaggregation message, they can start changing their direction,” he said.
“Today is the first day that a lot of this has been discussed publicly, so it may take time for SoC providers to think differently about how much they integrate and have different sets of options,” he added.
He showed on stage a so-called Common Slot board based on a new Open Compute specification. It uses an x8 PCI Express connector to link any processor board to other server components. Intel x86 and Applied Micro ARM chips populated the boards shown on stage.
Facebook may deploy its first servers using such Common Slot boards before the end of the year, he said.
Many Pentium- or Athlon-class cores inside an SoC, without a processor interconnect built in.
A high-throughput computing (HTC) processor architecture is what Facebook is looking for: many-core processor chips that can handle 16/32/64/128 or more threads simultaneously on a single chip, which would make compute systems much more power efficient.
Data centers only require decent compute power per core, not power-hungry SoCs like the Core i7 or FX-xxxx chips.
It looks like Facebook wants processors with an infrastructure like in the Pentium days. The Pentium was a processor, not an SoC with a processor built in.
Without a high-speed interconnect between processor chips, how is Facebook planning to scale compute power?
At the very least, interconnects like InfiniBand or AMD's HyperTransport (HT) are needed for highly reliable, scalable compute systems. It has taken the computer industry more than 20 years to come up with reliable, scalable system architectures, and they have now been proven for over a decade — they are used in supercomputers, for example.
Both the InfiniBand and HyperTransport processor interconnect architectures are open standards.
Maybe they can standardize on the open interconnect architecture(s) so that processor SoC design companies can collaborate with Facebook much more efficiently.
Up-and-coming ARM multi-cores: maybe. A first-generation ARM quad-core server without the design advantage of system-on-chip and switch integration: not likely.
Open Compute's HUB processor connector can make data-center processor-plane upgrades easy. But who in their right mind wants to HUG with Intel processors in the plane, given Intel's advantage of a leading-process monopoly? Only the leading process architect can win.
HUG connectors, whether Open Compute, Q7, or Kontron MXM3, are a great idea for benchmarking, prototyping, embedded development, and initial production, taking into account the applications and workloads targeted by the design.
These standards also put Intel's interests head to head with all comers on core processing ability; Q7 has already shown what happens to ST and Freescale ARM parts fabricated on trailing nodes versus everyone else.
The Open Compute HUB/HUG connector is currently an Intel-only upgrade solution.
There's a ton of Atom surplus in Intel's stocks. The S12xx parts in stock are two 32nm Saltwell dice in a multichip package, meaning they are costly to produce. Avoton, now in production at 22nm with an integrated control hub, will quickly overtake them.
For ARM up against Intel in servers, only the best performance per watt and performance per TCO can win.
I think we need to take a step back. The computing resources available on servers, given the right architecture and the right software stack, are more than enough to handle the computing needs of the likes of Google and Facebook.
It was not a bunch of guys in a room with computers writing software. Let's get serious about this... if we want to seriously address this problem, company executives need to make proper statements: how many servers will Facebook buy? Google? The market for these products is several hundred billion dollars. Nobody is going to shrink their market by building lower- and lower-cost products targeted at customers who cannot pay.
If they do, we will go the way of telecom, where the likes of Nortel, Lucent, and Alcatel died trying to fight the commoditization of the market instead of seriously evolving it. Goodbye to more jobs at product companies.
Sounds like Facebook needs to contract a custom IC house to me. Who in their right mind would build a leading-edge SoC targeted at one customer, with literally no differentiation? I may be wrong if the whole industry goes this way for servers, but then it will really be a race to the bottom to start building ARM server chips with just the cores and no network subsystems or other "differentiation."