An HP representative said the company is shipping beta versions of Centerton systems to customers. The beta chip addresses a handful of applications that don’t require heavy CPU processing, including memcached, off-line analytics and Hadoop, he said.
HP will announce a new system, called Genesis, early next year. The company has said it will support a mix-and-match variety of Atom and ARM server SoCs in a single chassis.
A Facebook representative at the Intel event said wimpy cores like the Atom SoC can handle similar work at one-half to one-third the power of so-called brawny cores such as Intel’s Xeon server CPU. He did not say whether Facebook will use the new chips, instead speaking in general terms about supporting the new SoCs as long as they have 64-bit addressing and error-correction codes.
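The wimpy-vs.-brawny power claim above is easy to express as arithmetic. Here is a minimal sketch; the one-half to one-third ratio is from the Facebook representative's remark, while the Xeon-class wattage is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope check of the "wimpy vs. brawny" claim: if a wimpy
# core finishes similar work at 1/2 to 1/3 the power of a brawny core,
# the equivalent power band scales accordingly.
# BRAWNY_WATTS is an illustrative assumption, not a measured figure.

BRAWNY_WATTS = 95.0
WIMPY_RATIO_LOW, WIMPY_RATIO_HIGH = 1 / 3, 1 / 2

def wimpy_power_range(brawny_watts):
    """Power band a wimpy core would draw for similar work, per the claim."""
    return (brawny_watts * WIMPY_RATIO_LOW, brawny_watts * WIMPY_RATIO_HIGH)

low, high = wimpy_power_range(BRAWNY_WATTS)
print(f"Equivalent wimpy-core power: {low:.1f}-{high:.1f} W vs {BRAWNY_WATTS:.0f} W")
```

The point of the sketch is only that the claim is a ratio, so it holds at any absolute wattage you assume for the brawny core.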
A Microsoft data center chief architect said using x86-compatible CPUs for both wimpy and brawny cores helps lower complexity and cost. “Once again Intel and Microsoft are working together to supply best platform for our customers,” said Jeffrey Snover of Microsoft.
“Today there are no enterprise-class ARM servers…the comparison [of ARM SoCs to the S1200] is not apples to apples,” because the ARM chips lack 64-bit support, said Diane Bryant, general manager of Intel’s Datacenter and Connected Systems Group.
“We know investments are being made, and we have a good view into the alternative architecture,” said Bryant. “We believe we have a substantial performance and performance-per-watt advantage, and at the system level a compelling solution."
Intel’s gross margins for the Atom SoCs are good, she told a Wall Street analyst. “Because of the density of compute, our revenue with either Xeon or Atom is a wash--in fact, Atom is slightly greater, so it’s absolutely fine if Atom does well,” she said.
The S1200 is a 6-W part with two dual-threaded, in-order Atom cores. The SoC includes a memory controller supporting up to 8 GB of DDR3. The 64-bit chip also supports Intel’s virtualization technologies, eight lanes of PCI Express 2.0 and ECC.
It comes in three versions, with frequencies ranging from 1.6 to 2.0 GHz. Pricing starts at $54 in 1,000-unit quantities.
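The spec figures above support a crude performance-per-watt and performance-per-dollar estimate. A minimal sketch follows; the 6 W TDP, two cores and $54 base price come from the article, but pairing the $54 price with the 1.6 GHz SKU, and using GHz × cores as a throughput proxy, are assumptions for illustration only.

```python
# Rough perf-per-watt and perf-per-dollar for the S1200 line, using
# GHz x cores as a crude throughput proxy (not a real benchmark).
# Assumption: the $54 base price belongs to the 1.6 GHz SKU.

CORES = 2
TDP_W = 6.0
base_sku = {"freq_ghz": 1.6, "price_usd": 54.0}

proxy_perf = base_sku["freq_ghz"] * CORES  # GHz-cores
perf_per_watt = proxy_perf / TDP_W
perf_per_dollar = proxy_perf / base_sku["price_usd"]

print(f"Proxy perf/W: {perf_per_watt:.2f} GHz-cores per watt")
print(f"Proxy perf/$: {perf_per_dollar:.3f} GHz-cores per dollar")
```

A real comparison would of course use measured benchmark throughput rather than clock-times-cores, but the sketch shows how the 6 W envelope dominates such a calculation.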
Interesting. Though they might refuse to acknowledge it, because of ARM they are forced to innovate and bring low-cost solutions to the server market that are also performance competitive. This is going to cannibalize their Xeon share to an extent, which again they won't admit. With ARM it is not just raw performance that matters; it is that there is a good alternative for businesses that cannot afford an Intel SoC for their server requirements.
ARM is here to stay even if it will not win the war.
The S1200 has a significantly faster clock and more HW thread contexts than its ARM competition; the former almost guarantees that the S1200 will have a higher TDP than the slower clocked A9s.
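The clock-to-TDP link this comment leans on follows from the first-order dynamic power model for CMOS, P = C · V² · f: a higher clock, which usually also needs a higher supply voltage, pushes power up superlinearly. A minimal sketch, with all capacitance and voltage values purely illustrative:

```python
# First-order dynamic CMOS power model: P = C * V^2 * f.
# C, voltages and clocks below are illustrative, not vendor figures.

def dynamic_power(cap_farads, volts, freq_hz):
    """Classic first-order dynamic power estimate: P = C * V^2 * f."""
    return cap_farads * volts ** 2 * freq_hz

C = 1e-9  # effective switched capacitance (assumed)
p_slow = dynamic_power(C, 1.0, 1.3e9)  # A9-class clock at a lower voltage
p_fast = dynamic_power(C, 1.1, 2.0e9)  # S1200 top-SKU clock, slightly higher voltage
ratio = p_fast / p_slow
print(f"Power ratio fast/slow: {ratio:.2f}x")
```

Even a modest 10% voltage bump compounds quadratically, which is why the faster-clocked part is almost guaranteed the higher TDP, as the comment says.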
Can the ARM vendors describe their NEON implementation and how it stacks up to SSE3? Can they show us how their microserver cores perform on SPEC2000/2006 FP and INT? Can they tell us which toolchain developers can use that can match Intel's toolchain?
Please, really read the article. It's a comparison between the ECX-1000 and a crippled E3-1220L v2.
And by the way, one ECX-1000's performance is about 1/7 of the Xeon's. Only in the perfect-scaling case can seven ECX-1000s scale up linearly and match one E3-1220L v2. That can happen (if at all) only for certain workloads and situations (Apache, MapReduce?) and without any software overhead. And don't forget the cost and energy to connect seven CPUs is not negligible.
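The scaling caveat in this comment can be made concrete: seven nodes only match one Xeon under perfect linear scaling, and any sub-linear efficiency inflates the node count. A minimal sketch; the 7x figure is from the comment, while the efficiency values are illustrative assumptions.

```python
# If each added node delivers only `efficiency` of its ideal throughput,
# matching a target that ideally needs `ideal_nodes` requires
# ideal_nodes / efficiency nodes, rounded up.
# The 7-node figure is from the comment; efficiencies are assumptions.

import math

def nodes_needed(ideal_nodes, efficiency):
    """Nodes required at a given per-node scaling efficiency."""
    return math.ceil(ideal_nodes / efficiency)

for e in (1.0, 0.9, 0.8):
    print(f"efficiency {e:.0%}: need {nodes_needed(7, e)} nodes")
```

So even a 20% scaling loss turns the seven-node cluster into nine nodes, before counting the interconnect cost and energy the comment mentions.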
Think integrated value.
By the way, that's seven Calxeda quads to every Xeon 2620 hexa, which is not the crippled 1220L v2.
Considering El Reg, Calxeda, Apache benchmark.
No one in ARM silicon & systems wants to be compared within Intel sales paradigm.
ARM servers offer unique utility benefits that define the ARM server purchase requirement. That means no industry-standard Intel benchmarks, including Intel SPEC and Intel Hadoop.
ARM silicon in systems addresses a different paradigm that requires measures which truly document that environment's unique performance requirements; credit to whichever innovative benchmark author addresses this benchmark product void.
Calxeda has systems at Apache which I suspect are addressing the utility benefit of ARM server.
This analyst is aware of others validating in wholly unique & differentiated app environments.
On El Reg: yes, the 32-bit ARM use model is 1 GbE.
Adopters with 1 GbE networks, call an ARM server system provider today.
Adopters with a 10 GbE requirement, call an ARM silicon solution provider about designing your very own. System design producers are addressing the requirement, which can be advantageous for savvy innovator adopters. Don't forget there's Marvell and Applied Micro too.
On the Calxeda Energy Card: four 1.1 GHz quads, BSM, I/O, NIC and storage at 11.56 W vs. an E3-1240 quad at 3.3 GHz with 8 MB L3 at 56 W under partial load:
Calxeda operation at 1/3 frequency, cool.
4 times cache & 1/2 code density, yumm.
4 to 7x lower power advantage according to Intel, intriguing.
40% better 1 GbE performance v Sandy E3 1240 quad 3.3 GHz, 8 MB L3, 80w TDP at $250 each and that's just the processor.
500% better 10 GbE performance vs. the Ivy Bridge E3-1220L dual at 2.3 GHz, 3 MB L3, 17 W TDP at $189 each and 35 W system power, now discontinued. It never made the channel, but if you want a 1220L, Intel might be able to reserve some from the die bank on their way to the crusher.
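The wattages in the list above allow a simple per-core power comparison. A minimal sketch; the 11.56 W and 56 W figures are from the comment, and dividing watts by core count is plain arithmetic, not a performance claim, since the cores differ greatly in capability.

```python
# Per-core power from the comment's figures: a Calxeda Energy Card
# (four quad-core nodes) at 11.56 W vs. an E3-1240 quad at 56 W under
# partial load. Both wattages come from the comment above.

card_watts, card_cores = 11.56, 4 * 4  # four quad-core nodes
xeon_watts, xeon_cores = 56.0, 4

card_w_per_core = card_watts / card_cores
xeon_w_per_core = xeon_watts / xeon_cores

print(f"Calxeda: {card_w_per_core:.2f} W/core")
print(f"Xeon E3: {xeon_w_per_core:.2f} W/core")
print(f"Ratio:   {xeon_w_per_core / card_w_per_core:.1f}x")
```

The ratio looks dramatic precisely because it ignores per-core performance, which is the crux of the wimpy-vs.-brawny debate running through these comments.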
Does that mean the package costs extra?
Yes, ARM should be considered a high-margin business, especially when you take performance per dollar into consideration.
It seems it would take seven ECX-1000s to compete with the E3-1220L v2 (17 W, $189) in ApacheBench (the ideal case for microservers). So it seems it takes $400 of ARM server to achieve what a $189 Xeon server does on a microserver-friendly workload.
What a nice margin:)
There's no one in the connected community denying that 64-bit is a prerequisite for a commercial server.
Every ARM silicon and system design producer agrees with the 64 bit observation and you're aware those 64 bit developments are underway.
So how about an investigative report on ARM server progress at current 32 bit boot strap aimed for 64 bit growth?
The industry could sure use some independent design producer successes that enable unique and differentiated product utilities, supporting innovative use models adding margin values for the greater good of the business.
With Intel executives positioning to take out half the industry, isn't it time to support adoption of components and platform designs beyond a monopoly that now blatantly threatens to destroy the industry by squeezing out competitive innovation?
Surely some silicon, system, software, data center types could fill us in on the development chains perspective.
There is currently software systems integration work addressing 32-bit implementations for NAS, home, small-workgroup and slim workloads that Intel does not address.
This current quarter happens to be the E5 26xx volume peak, at approximately 20 million units, so why get 64 bit dumped on now?
Why is 64-bit a must for server apps? Do most server apps require more than 4 GB of memory?
I suspect that this "64-bit is a must for servers" is received wisdom we accept because, with Intel x86 processors, 64-bit doubles (or better) performance. I suspect a lot of that is the extra registers available in x86-64 mode. ARM doesn't suffer the register starvation of x86 in 32-bit mode, so the performance increase going to 64-bit will be less compelling than for x86.
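The 4 GB figure in the question above is just the 32-bit address space: 2^32 bytes. A quick check:

```python
# A 32-bit pointer can address 2**32 distinct bytes, which is the
# 4 GiB per-process ceiling the 64-bit-for-servers argument hinges on.

addressable_bytes_32 = 2 ** 32
gib = addressable_bytes_32 // 2 ** 30
print(gib, "GiB")  # -> 4 GiB
```

(Extensions such as ARM's LPAE can raise the physical memory a machine can hold, but each 32-bit process is still limited to a 4 GiB virtual address space.)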