It once took Facebook at least a year to design a new server, but using Open Compute components it completed a recent design in six months. That’s still far behind the pace of software, which changes by the hour.
“There’s an impedance mismatch now between the speed at which software and hardware moves,” said Frankovsky in his keynote. “Everything is bound to a PCB and wrapped in sheet metal, and that doesn’t allow us good flexibility.”
Separately, Jay Parikh, vice president of infrastructure at Facebook, quipped in a keynote that if storage vendors were car dealers, they would be offering a choice between a sports car (NAND flash) and a van (hard drives). What he wants, he joked, is a Prius: large volumes of flash chips with relatively low write endurance at lower prices.
“It really puts a crimp on innovation when we have to shoehorn into things that are suboptimal,” he said. “The underlying storage systems are a big problem today.”
Parikh said low-cost NAND could open up many new uses for flash in the data center. He is working to define his specific requirements and share them publicly through Open Compute, he said in a brief interview with EE Times.
Creating such high-volume, lower-cost products could disrupt the current business models of flash vendors, he said. At the summit, Fusion-io announced it would supply its ioScale flash cards for as little as $3.89 per Gbyte. By contrast, hard disks provide storage for less than a dime per Gbyte.
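For a rough sense of the gap those two price points imply, here is a back-of-the-envelope comparison, treating the hard-disk figure as exactly $0.10 per Gbyte, the top of the “less than a dime” range:

# Rough cost comparison at the quoted price points.
# Assumes exactly $0.10/Gbyte for hard disks ("less than a dime" is an upper bound).
FLASH_PER_GB = 3.89   # Fusion-io ioScale, $/Gbyte
HDD_PER_GB = 0.10     # hard-disk upper bound, $/Gbyte

PB_IN_GB = 1_000_000  # one petabyte in decimal gigabytes

flash_cost = FLASH_PER_GB * PB_IN_GB
hdd_cost = HDD_PER_GB * PB_IN_GB

print(f"1 Pbyte on flash: ${flash_cost:,.0f}")              # $3,890,000
print(f"1 Pbyte on disk:  ${hdd_cost:,.0f}")                # $100,000
print(f"Flash premium: {FLASH_PER_GB / HDD_PER_GB:.0f}x")   # ~39x

Even at Fusion-io’s aggressive new price, flash still carries roughly a 39x premium per gigabyte over disk, which is why Parikh frames cheap, low-endurance NAND as a distinct product category rather than a replacement for hard drives.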
The disruption would help serve a rapidly expanding market. Users generated an estimated 2.8 zettabytes of new data in the last two years, including millions of pictures posted to Facebook, he said.
Parikh’s team is even contemplating the use of Blu-ray drives as a new storage medium to keep pace with the flood. He also described Cold Storage, a custom server Facebook recently designed that packs two petabytes of readily accessible information into a single rack.
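Facebook did not say how those two petabytes are physically laid out, but a quick sketch shows the scale involved; the 4-Tbyte drive size below is an assumption for illustration, not a Facebook figure:

# Back-of-the-envelope: what 2 Pbytes in one rack implies.
# The 4-Tbyte drive size is an illustrative assumption; the article
# does not say what Cold Storage actually uses.
RACK_CAPACITY_TB = 2_000   # 2 Pbytes, in decimal terabytes
DRIVE_SIZE_TB = 4          # assumed commodity drive size

drives = RACK_CAPACITY_TB / DRIVE_SIZE_TB
media_cost = 2_000_000 * 0.10   # 2 Pbytes in Gbytes x $0.10/Gbyte

print(f"Drives per rack: {drives:.0f}")          # 500
print(f"Raw media cost:  ${media_cost:,.0f}")    # $200,000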
“There is a huge data deluge, and we have to throw everything we have at it to innovate faster at the storage device layer,” he said.
Many simple Pentium- or Athlon-class cores inside an SoC, without a processor interconnect built in.
HTC, a high-throughput computing processor architecture, is what Facebook is looking for: many-core processor chips that can handle 16/32/64/128 or more threads simultaneously on a single chip, which would make compute systems much more power efficient.
Data centers only need decent compute power per core, not power-hungry SoCs like a Core i7 or FX-xxxx.
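A toy model of that argument, for throughput-bound workloads; every number below is a made-up illustration, not vendor data:

# Toy throughput-per-watt comparison: a few big cores vs. many small ones.
# All figures here are illustrative assumptions, not vendor specs.

def throughput_per_watt(cores, perf_per_core, tdp_watts):
    """Aggregate relative throughput divided by chip power."""
    return cores * perf_per_core / tdp_watts

big_chip = throughput_per_watt(cores=4, perf_per_core=1.0, tdp_watts=95)      # big-core desktop class
small_chip = throughput_per_watt(cores=64, perf_per_core=0.25, tdp_watts=40)  # many-thread SoC

print(f"big cores:   {big_chip:.3f} units/W")    # ~0.042
print(f"small cores: {small_chip:.3f} units/W")  # ~0.400, roughly 10x better

The catch is that this only holds when the workload actually parallelizes across all those threads, which serving web traffic largely does.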
It looks like Facebook wants processors structured the way they were in the Pentium days. The Pentium was more a processor than an SoC with a processor built in.
Without a high-speed interconnect between processor chips, how is Facebook going to scale compute power?
At minimum, interconnects like InfiniBand or AMD's HyperTransport (HT) are needed for highly reliable, scalable compute systems. It has taken the computer industry more than 20 years to arrive at scalable system architectures that are actually proven, and these have now been in widespread use for over a decade, e.g. in supercomputers.
Both the InfiniBand and HyperTransport processor interconnect architectures are open standards.
Maybe they could standardize on open interconnect architecture(s) so that processor SoC design companies can collaborate with Facebook much more efficiently.
ARM's up-and-coming multi-core parts: maybe. A first-generation quad-core ARM server without the design advantage of system-on-chip and switch integration: not likely.
Open Compute's "Group Hug" processor connector could make data-center processor plane upgrades easy. But who in their right mind wants to hug Intel processors on that plane, given Intel's leading-process monopoly advantage? Only the leading process architect can win.
Connectors like these, whether Open Compute's Group Hug, Q7, or Kontron's MXM3, are a great idea for benchmarking, prototyping, embedded development, and initial production, taking into account the applications and workloads a design targets.
These standards also represent Intel putting its interests head to head with all comers on core processing ability; Q7 has already shown what happens to ST and Freescale ARM parts fabricated on trailing nodes vs. everyone else.
The Open Compute Group Hug connector is currently an upgrade solution for Intel processors only.
There's a ton of Atom surplus in Intel's inventory. The S12xx parts in stock are two 32nm Saltwell dice in a multichip package, meaning they're costly to produce. Avoton, now in production at 22nm with an integrated control hub, will quickly overtake them.
For ARM up against Intel in servers, only the best performance per watt and performance per TCO dollar can win.
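As a minimal sketch of what a performance-per-TCO-dollar metric might look like (every input here is hypothetical; real TCO models also cover cooling, networking, and facility amortization):

# A minimal performance-per-TCO-dollar metric. All inputs are hypothetical.

def perf_per_tco_dollar(perf, server_cost, watts,
                        years=3.0, pue=1.2, dollars_per_kwh=0.07):
    """Relative performance divided by capex plus lifetime energy cost."""
    lifetime_kwh = watts * pue / 1000 * 24 * 365 * years
    return perf / (server_cost + lifetime_kwh * dollars_per_kwh)

# Hypothetical x86 vs. ARM server comparison:
x86 = perf_per_tco_dollar(perf=100, server_cost=3000, watts=250)
arm = perf_per_tco_dollar(perf=60, server_cost=1800, watts=90)
print(f"x86: {x86:.4f} perf/$   ARM: {arm:.4f} perf/$")

On numbers like these, a slower but cheaper, cooler ARM part can edge out x86 on perf per TCO dollar even while losing on raw performance, which is exactly the bet the ARM server vendors are making.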
I think we need to take a step back. The computing resources available on today's servers, given the right architecture and the right software stack, are more than enough to handle the computing needs of the likes of Google and Facebook.
It was not a bunch of guys in a room with computers writing software. Let's get serious about this: if we want to seriously address this problem, executives from these companies need to make proper statements. How many servers will Facebook buy? How many will Google buy? The market for these products is worth several hundred billion dollars. Nobody is going to shrink their own market by building lower- and lower-cost products targeted at customers who cannot pay.
If they do, we will go the way of telecom, where the likes of Nortel, Lucent, and Alcatel died trying to fight the commoditization of the market instead of seriously evolving it. Goodbye to more jobs at product companies.
Sounds to me like Facebook needs to contract a custom IC house. Who in their right mind would build a leading-edge SoC targeted at a single customer, with literally no differentiation? I may be wrong if the whole industry goes this way for servers, but then it will really become a race to the bottom: building ARM server chips with just the cores and no network subsystems or other "differentiation".