I agree Patrick, this is the main message, regardless of all these details about which processor was fabricated at which process node and similar noise...Intel could design a processor for supercomputing that is not x86-compatible, so why aren't they doing that? Is the market too small? Kris
Intel will always have the disadvantage of having to translate its vintage x86 CISC instructions into pipelinable micro-ops. This is something ARM does not have to do, since its RISC instructions are pipeline-ready.
Think about it, every x86 processor in the world sits there, continuously translating the same instructions, over and over, every second they are running. How inefficient! Someone might not care if they're running one processor in their PC, but someone designing a supercomputer that has thousands of processors in it will surely notice the difference in their energy bill, cooling requirements, etc.
I know the Cortex-M3 and M4 MCUs are fabbed at 90nm and they still have excellent power savings. I can only imagine what power efficiency they would have at 22nm, even with leakage becoming a more dominant factor.
What process are the ARM Cortex-A9 and the upcoming A15 being fabbed at, anyone?
gpus manage 1-2 Tf for about 300W, or ~3-6 Gf/W. dedicated HPC chips like those in the K machine or BG/Q are about the same (say 2-3 Gf/W). current x86 processors manage 0.5-1.5 Gf/W. (numbers are a bit fuzzy - chip vs system dissipation, etc.)
the recent Calxeda ARM chips seem to be about 3 Gf/W, too. (assuming 1.5W/core, 1.2 GHz and 4 flops/cycle. might be half that, can't tell.)
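the arithmetic behind those two estimates can be sketched as a few lines of Python (figures taken straight from the comments above; the 1.5 W/core, 1.2 GHz, and 4 flops/cycle Calxeda numbers are the same assumptions flagged there, not vendor specs):

```python
# back-of-envelope Gf/W estimates from the figures quoted above

def gflops_per_watt(gflops, watts):
    """efficiency in Gf/W: sustained Gflops divided by dissipated watts"""
    return gflops / watts

# gpu: 1-2 Tf (1000-2000 Gf) at roughly 300 W
gpu_lo = gflops_per_watt(1000, 300)   # ~3.3 Gf/W
gpu_hi = gflops_per_watt(2000, 300)   # ~6.7 Gf/W

# calxeda arm (assumed): 1.5 W/core, 1.2 GHz, 4 flops/cycle
calxeda_gflops = 1.2 * 4              # 4.8 Gf per core
calxeda = gflops_per_watt(calxeda_gflops, 1.5)  # ~3.2 Gf/W

print(f"gpu: {gpu_lo:.1f}-{gpu_hi:.1f} Gf/W, calxeda: ~{calxeda:.1f} Gf/W")
```

if the per-core figure is really 3 W rather than 1.5 W ("might be half that"), the Calxeda estimate drops to ~1.6 Gf/W, i.e. into x86 territory.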
in reality, the ISA has shifted with each generation. yes, adding to an ISA is messier than starting from scratch each time, but ARM is not pure and fresh, either. GPUs are probably the winner by this metric, since with, eg, cuda, apps are insulated by intermediate PTX code.
you're right: ARM is a fairly conventional ISA, though it's cleaner than x86. there must be some power savings in decode, but the processors have to eventually _do_ almost the same thing. (this argument doesn't hold as well comparing to GPUs, since their programming model restructures the code significantly.)
where did you get that idea? supercomputers are traditionally about _balance_, which tends to run against extreme core counts.
in fact, the push for many, lower-powered cores is precisely motivated by power considerations, and it works _against_ unpredictable workloads.
Obviously Intel has to maintain x86 backwards compatibility, which limits its ability to innovate going forward...so every non-x86 architecture has a chance to be better, but that is not guaranteed...in the case of ARM, I believe the market has spoken clearly; just check where ARM was 5 or 10 years ago...Kris