Interesting, though they might refuse to acknowledge it: because of ARM they are forced to innovate and bring low-cost solutions to the server market that are also performance-competitive. This is going to cannibalize their Xeon share to an extent, which again they won't admit. With ARM it is not just raw performance that matters; it is that there is a good alternative for businesses who cannot afford an Intel SoC for their server requirements.
ARM is here to stay even if it will not win the war.
The S1200 has a significantly faster clock and more HW thread contexts than its ARM competition; the former almost guarantees that the S1200 will have a higher TDP than the slower clocked A9s.
Can the ARM vendors describe their NEON implementation and how it stacks up to SSE3? Can they show us how their microserver cores perform on SPEC2000/2006 FP and INT? Can they tell us which toolchain developers can use that matches Intel's?
Please, really read the article. It's a comparison between the EXC-1000 and a crippled E3-1220L v2.
And by the way, one EXC-1000's performance is about 1/7 of the Xeon's. Only in the perfect-scaling case can 7 EXC-1000s scale up linearly and match 1 E3-1220L v2. This can happen (if at all) only on certain workloads and in certain situations (Apache, MapReduce?) and without any software overhead. And don't forget that the cost and energy to connect 7 CPUs are not negligible.
Think integrated value.
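The 7:1 scaling claim above can be sketched as back-of-envelope arithmetic; the per-node ratio and the efficiency figure are assumptions taken from this thread, not measurements:

```python
# One EXC-1000 node is taken (per the comment above) to deliver
# roughly 1/7 the throughput of the E3-1220L v2 on apachebench.
xeon_throughput = 1.0                  # normalized Xeon throughput
node_throughput = xeon_throughput / 7  # assumed per-node ratio

def cluster_throughput(nodes, efficiency):
    """Aggregate throughput of `nodes` ARM nodes at a given
    scaling efficiency (1.0 = perfect linear scaling)."""
    return nodes * node_throughput * efficiency

# Only with perfect scaling do 7 nodes match one Xeon:
assert abs(cluster_throughput(7, 1.0) - xeon_throughput) < 1e-9
# With any fabric/software overhead (hypothetical 85% efficiency),
# the cluster falls short of the single Xeon:
assert cluster_throughput(7, 0.85) < xeon_throughput
```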
By the way, that's 7 Calxeda quads to every Xeon 2620 hexa, which is not a crippled 1220L v2.
Considering El Reg, Calxeda, Apache benchmark.
No one in ARM silicon & systems wants to be compared within Intel's sales paradigm.
An ARM server offers unique utility benefits that define the ARM server purchase requirement. That means no industry-standard Intel benchmarks, including Intel SPEC and Intel Hadoop.
ARM silicon in systems addresses a different paradigm that requires measures which truly document that environment's unique performance requirements; credit to whichever innovative benchmark author addresses this benchmark product void.
Calxeda has systems at Apache which I suspect are addressing the utility benefit of ARM server.
This analyst is aware of others validating in wholly unique & differentiated app environments.
On El Reg: yes, the 32-bit ARM use model is 1 GbE.
Adopters with 1 GbE networks: call an ARM server system provider today.
Adopters with a 10 GbE requirement: call an ARM silicon solution provider about designing your very own. System design producers are addressing the requirement, which can be advantageous for savvy innovator adopters. Don't forget there's Marvell and Applied Micro too.
On the Calxeda Energy Card: four 1.1 GHz quads, BSM, I/O, NIC, and storage at 11.56 W vs. an E3 1240 quad at 3.3 GHz with 8 MB L3 at 56 W under partial load:
Calxeda operation at 1/3 frequency, cool.
4 times cache & 1/2 code density, yumm.
4 to 7x lower power advantage according to Intel, intriguing.
40% better 1 GbE performance vs. a Sandy E3 1240 quad at 3.3 GHz, 8 MB L3, 80 W TDP, at $250 each, and that's just the processor.
500% better 10 GbE performance vs. an Ivy E3 1220L dual at 2.3 GHz, 3 MB L3, 17 W TDP, at $189 each and 35 W system power. That part is discontinued; it never made the channel, but if you want a 1220L, Intel might be able to reserve some from the die bank on their way to the crusher.
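A quick sanity check of the power claims quoted above; the figures are as stated in this comment, not independently verified:

```python
calxeda_card_w = 11.56  # Energy Card: four quads + BSM/I/O/NIC/storage, as quoted
xeon_partial_w = 56.0   # E3 1240 under partial load, as quoted

ratio = xeon_partial_w / calxeda_card_w
# ~4.8x, which lands inside the "4 to 7x lower power" range cited above
assert 4 < ratio < 7
```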
Does that mean the package costs extra?
Yes, ARM should be considered a high-margin business, especially when you take performance per dollar into consideration.
It seems it would take 7 EXC-1000s to compete with the E3-1220L v2 (17 W, $189) in apachebench (the ideal case for a microserver). So it seems to take $400 of ARM server to match a $189 Xeon server on a microserver-friendly workload.
What a nice margin:)
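The price/performance arithmetic behind that margin remark, using the thread's own figures ($400 for seven EXC-1000 nodes vs. $189 for the Xeon, with equal throughput assumed in the ideal scaling case):

```python
arm_cluster_cost = 400.0  # 7 EXC-1000 nodes, figure quoted in the thread
xeon_cost = 189.0         # E3-1220L v2 price, figure quoted in the thread

# With equal apachebench throughput assumed (the ideal case),
# the relative cost per unit of work is:
ratio = arm_cluster_cost / xeon_cost
# ratio is about 2.1: roughly twice the cost for the same work
assert 2.0 < ratio < 2.2
```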
There's no one in the connected community denying that 64 bit is a prerequisite for commercial servers.
Every ARM silicon and system design producer agrees with the 64 bit observation and you're aware those 64 bit developments are underway.
So how about an investigative report on ARM server progress at the current 32-bit bootstrap, aimed at 64-bit growth?
The industry could sure use some independent design producer successes that enable unique and differentiated product utilities, supporting innovative use models adding margin values for the greater good of the business.
With Intel executives positioning to take out half the industry, isn't it time to support adoption of components and platform designs beyond a monopoly that now blatantly threatens to destroy the industry by crowding out competitive innovation?
Surely some silicon, system, software, and data center types could fill us in on the development chain's perspective.
There is currently software-systems integration addressing 32-bit implementations for NAS, home, small-workgroup, and slim workloads that Intel does not address.
This current quarter happens to be the E5 26xx volume peak, at approximately 20 million units, so why get 64 bit dumped on now?
Why is 64 bit a must for server apps? Do most server apps require more than 4 GB of memory?
I suspect that this "64 bit is a must for servers" is received wisdom we accept because, with Intel x86 processors, 64 bit doubles (or better) performance. I suspect a lot of that is the extra registers available in x86-64 mode. ARMs don't suffer the register starvation of x86 in 32-bit mode, so the performance increase going to 64 bit will be less compelling than it was for x86.
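The register argument can be made concrete with the architectural general-purpose register counts (standard figures from the respective ISA manuals):

```python
# Architectural general-purpose register counts:
gprs = {
    "x86 (32-bit)": 8,       # eax..edi
    "x86-64": 16,            # rax..rdi plus r8..r15
    "ARMv7 (32-bit)": 16,    # r0..r15 (r13 is the SP, r15 the PC)
    "AArch64 (64-bit)": 31,  # x0..x30
}

# Moving x86 -> x86-64 doubles the register file; 32-bit ARM already
# has as many GPRs as x86-64, so the 64-bit jump brings proportionally
# less register relief on ARM than it did on x86.
assert gprs["x86-64"] / gprs["x86 (32-bit)"] == 2.0
assert gprs["ARMv7 (32-bit)"] == gprs["x86-64"]
```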