Does the "Relative performance" in the plot refer to computational performance/speed alone?
If so, this analysis kind of sidesteps the issue of power consumption. It was not the processor's computational speed that was in question; it was that the Intel CPU had more or less the same performance at HALF the current drain/power.
Scaling of the benchmark LEVELS is key here in my opinion.
What are the quantitative measures used to give qualitative performance "scores?"
For gaming people, only super fast will do at any power cost.
For most other people, battery life dominates in the high-end phone market, in my opinion. Long calls are common and days on the road are hectic, but a dead phone is very, very costly, so multi-day battery life under heavy use is very important.
Other "performance" metrics are unclear to non-techies so perhaps someone can make them clearer in terms like "time to open email" or texting delays, whatever.
Well, long calls affecting battery life is much more a function of the RF chipset efficiency and software control of transmit levels, etc. I don't see how it would fit into a comparison of digital SoCs.
I couldn't agree more about the performance/watt metric. No one would be shocked if you said a top-of-the-line Xeon (or POWER8) beat an ARM chip in performance. The race is completely about performance/watt.
As the editor has remarked, performance benchmarks are fraught with issues. Many of them are constructed to take advantage of the architecture of a specific core for the sole purpose of making that vendor's silicon look really fast, when in reality it's just really good marketing spin.
Also, what OS was running on each platform to carry out these tests, since a highly optimised OS can make these benchmark tests show amazing performance on a slow processor vs. poor results on a badly ported OS running on a considerably faster processor.
This is further compounded if the OS isn't utilising the multi-core architecture very effectively. Many OSes utilise the first two CPU cores really well, the next two not so much, and the rest are hardly used at all, making you wonder why some OSes were ported to many of these multi-core processors in the first place.
In terms of power consumption, that is nearly always governed by the quality of the silicon process. As you go down in silicon geometry, the leakage current from the gates increases significantly, which needs to be properly controlled in the manufacturing process.
Intel uses a low-leakage 32nm process for the Atom, and the Samsung part is made on a standard 28nm process. Therefore, the power consumption remark is somewhat flawed, since it's not an ARM vs. Intel CPU issue but a consequence of the choice of manufacturing process.
Note how often the word "current" is used. The ABI article is clearly more about current draw than about raw performance. So, while I agree that they could have done a better job by averaging multiple benchmarks, I think the point of the article is that Intel seems to have finally conquered what analysts have considered its "Achilles' heel": power consumption. The Intel part performs competitively compared to other SoCs on the market, and it does so with impressive power numbers.
As to another commenter's point about power being tightly correlated with process, and that Intel has an advantageous process -- I agree. But in the end it's the products on the market at the present time that matter -- not how the company achieved its success. If the purpose is a strict architecture comparison, then yeah, take the process out of the mix. However, the "ARM vs Intel" babble out there isn't really a geeky comparison of architectures specifically. It's just terminology used to compare "Intel-based" SoCs and "ARM-based" SoCs that are currently available.
Thank you all for the great discussion. First, the question about the OS: both the Galaxy S4 and the K900 run the same version of Android, so the comparison is as equal as possible.
Now, for the issues of power and process technology. You are all correct in that power is the most critical issue. Unfortunately, there is no really good way to measure the power of the SoC or parts of the SoC. And even if you could, it would not be a very good measurement, because it all comes down to the system implementation, as I indicated in the article. With that said, I have seen some comparisons of the two platforms with the SoCs tied to the same clock frequency to balance the comparison. Obviously, once you increase the clock frequency, the power goes up. In this case, the K900 is using a higher clock frequency (2.0GHz) than the S4 (1.8GHz) with the Exynos processor. In the studies I have seen, the ARM platform still had lower power consumption at the chip and platform level. I am leaving that comparison up to the entities that did those studies. But I do find it difficult to believe that a larger die, a more complex architecture, and a higher clock frequency would result in lower power.
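For intuition on why clock frequency drives power, the textbook first-order CMOS dynamic-power relation (a simplification that ignores leakage) is:

```latex
P_{\text{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f
```

where \(\alpha\) is the switching activity factor, \(C\) the switched capacitance, \(V_{dd}\) the supply voltage, and \(f\) the clock frequency. Since the supply voltage typically has to be raised to sustain a higher frequency, power in practice grows faster than linearly with \(f\), which is why the 2.0GHz vs. 1.8GHz difference is not a trivial one.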
Now, in terms of process technology. Intel is using a more efficient FinFET process, but as with all process improvements, not every part of a semiconductor design will benefit from the improvement. In addition, Intel and the other major foundries have different manufacturing methodologies. Intel optimizes the process for multiple fabs and multiple products. The foundries tend to tweak the process to optimize the performance and/or power of each product running on that process. The results can be drastically different. Intel's lead in process technology has benefited and will continue to benefit the company, but I'm not sure it will completely overcome the difference in microarchitecture complexity and die size; at least not yet.
My apologies, I confused the two process nodes. Intel will not be manufacturing the Atom product line on the FinFET process until the 22nm generation, starting with Bay Trail later this fall. The current generation is on the 32nm process node, similar to the 32/28nm node being used for the Exynos processor.
Jim, I am curious, where did you get the test results you mention in your article? Also, is Qualcomm paying you for this blog? I think you have an ethical obligation to be honest about that. Reading all these comments I get the impression that the only people commenting here are employed by either Qualcomm or Intel. :)
I also understand you were formerly a member of the now failed research firm INSTAT. How long have you had your own shingle hanging? Are you really credible? All this geek fuss must be a big boost for you, eh? Doesn't matter what they are saying about you, so long as they are talking about you, eh?
"Toad Sprockets," These are all fair questions. I am not employed by any of the companies mentioned in the articles. I did, however, talk to Intel, ARM, Qualcomm, and NVIDIA, and reached out to Samsung and AnTuTu, before even writing the first article.
The data was obtained from as many different sources as possible, including other publications like GSM Arena, which does an extensive job of publishing a wide variety of benchmarks for new platforms. I also worked with some independent sources to verify the numbers. Since no two tests ever result in the same numbers, I averaged the results for my final figures and tried to account for the revisions of each benchmark when possible.
In terms of my background, you are more than welcome to check it out on LinkedIn under TekStrategist. I was at In-Stat for 8 years before NPD decided to close the division. Having been in the industry for 30 years, I would hardly call In-Stat a failure, but it was subject to the politics of two large entities, Reed Elsevier and NPD, and I will leave it at that. After In-Stat was closed, I struck out on my own as TIRIAS Research, where I have continued to work with various clients and publications, including EE Times. Note, however, that EE Times is not paying me for my contributions.
Prior to being an analyst, I worked in a wide variety of roles at semiconductor and embedded systems companies, including Intel, General Dynamics, Motorola, ON Semiconductor, and STMicroelectronics. In total, I have over 25 years of engineering and business experience in the electronics industry. I have done everything from launching rockets to launching multi-billion dollar companies. So, my perspective is much more than that of a reporter, analyst, or blogger who has not worked in the industry. Now you be the judge. Am I a credible source?
And finally, I would argue that what I say and what is said about me does have an impact on my credibility. I have already established a reputation in the industry for being direct and unbiased, and I have no plans to deviate from my morals for self-promotion.
The RAM scores seem highly unusual. Is there some kind of "cheating" going on with AnTuTu? I agree with Jim -- using one benchmark to brag about a chip's capabilities is certainly wrong. But not only is the AnTuTu score "out of the norm", the subtest scores seem very strange. It doesn't even look like the results of a real benchmark compared to the scaling of other benchmarks!
The AnTuTu blog posted a complaint about Chinese manufacturers who diddled the benchmarks to make RAM performance appear to be double what it actually was. (See http://www.antutulabs.com/node/100) The fact that the manufacturers could diddle the benchmarks like that makes the value of the benchmarks questionable.
I didn't quite laugh when I read the article, but it was tempting. I've been watching the electronics marketplace for decades, and benchmarks are simply a tool in the marketing battle. Vendors will post the benchmark scores that favor their products, and ignore or attempt to denigrate the rest. They will further emphasize the individual parts of a benchmark suite where they excel, and ignore the less favorable ones. (And it's not like flat-out cheating is unheard of.) Most folks take benchmarks with sacks of salt, and the ones that get any attention at all are run by independent third-party testing groups who aren't tied to any of the vendors.
The two applicable questions are "Has Intel become competitive with ARM in power draw at the same performance level?", and "Will any vendors give Intel design wins in upcoming products based on these benchmarks?"
My personal suspicion is that the answers are No and No.
Everything I've seen about the mobile device market indicates battery life is the biggest factor users will be concerned about, and Intel is still playing catchup with ARM in that area. Since device manufacturers pretty much have to provide battery life estimates under normal usage conditions as part of their marketing, they'll look long and hard at the battery life using an ARM processor vs an Atom processor, and if the ARM processor uses significantly less power, it's likely to get the nod even if the Atom processor has somewhat better performance.
Agree. Benchmarks are a pure marketing tool. And I agree 100 percent that Intel will not get a single design win based on the benchmark. It did what it's basically designed to do: give Intel a PR boost.
And as a general comment on benchmarks, I note that the AnTuTu benchmark is proprietary. I can get it and run it. I can see what it purports to measure. I have no idea how it is doing it, unless I want to do things like disassemble the object code.
If I'm going to run benchmarks and take the results seriously, they better be open source. I want to get the code and see exactly what it's doing and how it is measuring what it purports to measure. I want code that is a standard, which everyone agrees is the way you do those measurements, with lots of developers to look at the code, see issues, and make fixes. I need to be confident that the benchmarks reflect reality, because I will be making critical decisions based on them.
I don't care whose benchmark code it is. It may be accurate and valid, but I'm not going to take it seriously unless I can see the code for myself, and make my own judgement on its validity.
These changes skew the results as shown in the article, making AnTuTu useless as an Android benchmark. As Exophase notes, it is interesting to see the AnTuTu website full of cheating complaints and now it turns out AnTuTu pulled off the biggest cheat themselves...
Note this also explains why ARM SoCs used more power in the original ABI Research article - they are doing significantly more work.
The main point is that Intel is closing the ARM gap quickly. Whether they are already better or just will be better in the next generation is not really important. They decided to be number one in the performance/power game for mobile processors and if history is any guide they will be. Whether in 6 months or 18 months matters little.
What makes you so sure Intel will be better than ARM on cost, performance or power? Given the large complexity, overhead and cost of x86 that is by no means certain even with a manufacturing advantage. We all know how bad Atom really is despite the marketing claims - even OEMs aren't fooled: Atom has just 0.2% mobile market share. Who knows, in 10 years Atom may well be remembered as yet another Itanium, iAPX 432 or i860...
Great article! I have always been surprised at the gullibility of many people when it comes to such sensational "benchmark" results. People should take stock and critically analyze the hypotheses, experimental set-up, etc. before coming to a conclusion. It does not take a genius to realise that the recent Intel claims are just marketing nonsense with no solid scientific foundation. It also does not take much to realise that Intel is fighting a losing battle as long as it sticks with x86. Superior fabrication technology will only take you so far (incremental linear gains at best). The much higher gains are at the architectural hardware and software levels. Indeed, anyone who has optimized software will tell you that a bit more care in how we code can often get you 10x performance gains. Try to get that gain at the transistor level. So even if Intel is ahead with FinFET etc., they will not be able to compensate for their inferior power management technology and inadequate software stack. They might reduce the gap a bit every now and then, but they cannot keep doing that forever. One thing that has really changed since the '80s is that the competition is strong and diverse. Consumers have a choice, and they will vote with their pockets.
I'm not convinced x86 is a losing battle. It depends on the direction device platforms go. I rather think Intel is attempting to apply what they experienced in the PC world: an increase in performance allows ever more complex and capable software, and once those apps are available nobody wants to go back. So just as with PC evolution, an increase in performance that allows for the next "killer" app will raise the performance bar for all players.
I see no reason to believe that the evolution of mobile devices will not closely follow the evolution of the PC. So while ARM is playing to their strong point and pressuring Intel to lower power usage, Intel will be pressuring ARM to increase performance. In a stagnant software world, ARM would surely win. But it is not a stagnant software world, so it is still unclear whether or not x86 will compete in the mobile market.
On my desktop, I don't care about power consumption. It's plugged into an outlet, and always on. There's no battery to drain.
On my laptop and notebook, I start to care, because they can be used without being plugged in. I don't care much, because I don't normally run them solely off of battery. If I travel, they get set up and plugged in once I'm at my destination. I don't normally use them when I'm actually in transit.
On my cell phone and PDA, I care a lot, because they are normally running off of battery, and I've made it a reflex to plug them in to a charger whenever I'm not actually out and about so I don't find myself suddenly running dry.
Yes, more powerful processors make possible more sophisticated applications, at the trade-off of increased power usage.
Power concerns are becoming relevant in the server market. ARM has a shot at the server market because data centers are increasingly larger with increasingly greater numbers of servers in racks, and power requirements are continually escalating. ARM's planned 64 bit processors stand to make significant design wins, because the power they use will be a lot lower, and power costs are a significant fraction of the cost of operation of the data center.
The CPU doesn't have to be the most powerful available - it just has to be powerful enough, especially as applications move to parallel processing, and multiple CPUs will be engaged on any particular task.
In the mobile market, the big challenges in processing power I see are in the GPU. The "killer apps" tend to be those that demand video performance, and we are seeing devices with screen resolution and 3D acceleration that used to be the province of the desktop and laptop. Intel is far behind in GPU performance. (I have Intel graphics on board in my machine. It's adequate for what I do, because I don't do things like serious gaming. If I did, I'd be looking at shifting from mobo graphics to a dedicated video card, or getting a whole new machine.)
The desktop market is shrinking, as tasks formerly performed on the desktop migrate to laptops, notebooks, and tablets. The mobile market and the server market are booming, as things increasingly move to the cloud.
Power consumption is the new critical factor, and Intel is playing catchup.
What's wrong with Intel getting ahead using better compiler technology? I understand if the gains are only on a single benchmark, but if a broad range of real workloads benefit then it's definitely legitimate. Good compilers are an essential part of any microprocessor platform. If ICC only supports x86/64 then it's one of Intel's strategic assets, just like they don't share their superior fabs with ARM. Software is really important. People need to better appreciate this. ARM needs to invest more in compiler technology.
Remember that when exploring the microarchitecture design space, simulations are done using benchmarks as input. Benchmark tuning also happens at the hardware level.
"What's wrong with Intel getting ahead using better compiler technology?"
Nothing, if we're talking about making real applications run faster.
But that's not what we're talking about here.
What we're talking about here is the compiler removing portions of the benchmark, contrary to the intent of the benchmark. As a result, the benchmark results become meaningless.
As Reinhold P. Weicker, co-author of Dhrystone, wrote in 1988:
...optimizing compilers should be prevented from removing significant statements. It has turned out in the past that optimizing compilers suppressed code generation for too many statements (by "dead code removal" or "dead variable elimination"). This has lead to the danger that benchmarking results obtained by a naive application of Dhrystone - without inspection of the code that was generated - could become meaningless. [http://bit.ly/1atTdWZ]
The issue is that no real workloads will ever benefit, not from these optimizations and not from ICC.
Android uses GCC as the default compiler, so ICC is irrelevant, even if it happened to be better than GCC. By secretly having AnTuTu replace GCC with ICC in this closed-source benchmark and adding specific optimizations which only speed up this particular code, Intel is cheating by manipulating the benchmark scores.
If changing compilers, settings and adding specific benchmark busting optimizations is OK, what if someone wrote a hand-compiled version of the benchmark - would that be legitimate too? After all, the best compiler is still a human.
You're quite right that software and compilers are important. Intel could, like ARM, invest more in GCC and speed up real Android workloads rather than showing off cheated ICC results while pretending they are in any way relevant for Android performance.
This is a great article, much more professional than the one that simply claimed Intel is better than ARM based on a single data point. Thanks to the author.
Personally, I am happy to see Intel closing in. It is much healthier to have two competing architectures in the mobile market. This is a battle between Intel and the rest of the semiconductor industry. Intel might be big, but the rest of the semiconductor business is much bigger. And I believe most system companies will remain in the ARM camp as well. Intel has a long history of manipulating CPU prices, which left PC companies with profits close to nothing. I had a conversation with an executive from a major PC manufacturer that is also becoming a major player on the mobile side. I'm sure they will not let this happen again. I think it is time for Intel to re-think their strategy.
Professional article? I don't think so. Check the comment from the author: "It has always been in the best interest of the technology vendors..." Where is the proof of this? Can the author give some real examples? If you blame someone, you need to prove it.
About the comment about the compiler: ARM code cannot be compiled the same way as x86 code, so what is suspicious about that? Both compilers can be set up to optimize code execution for each architecture. It is up to the platform vendors to tune these optimizations to enhance performance and thus make a better product. Also, it is well known that kernel code on the x86 architecture can be tuned to execute faster because of its CISC architecture and optimized code-caching instructions, whereas ARM can't (just ask any Linux kernel enthusiast, who will confirm this, or check Google; this is what I found: http://www.linuxjournal.com/article/7269?page=0,1)
"Both compilers can be setup to optimize the code execution for each architecture..."
You have overlooked an essential point here: What we're talking about here is the compiler removing portions of the benchmark, contrary to the intent of the benchmark. As a consequence, the benchmark results become meaningless. It's like comparing two runners based on a race where one runs a half-marathon and the other runs a full marathon.
In terms of my comment about technology vendors always wanting to show their products in the best possible light, I speak from experience. I have worked with and for most of the major players throughout the electronics ecosystem, and quite honestly, I don't blame them. It makes smart business sense. However, not pointing out issues can also backfire.
Please see my follow-up article on the topic at http://www.eetimes.com/author.asp?section_id=36&doc_id=1318894&. AnTuTu has revised its benchmark and it had a very negative impact on the scores of the Intel Atom processors. I only cited the K900 test results, but tests on RAZR i were very similar.
If even the benchmarking company admits a problem, then there IS a problem. In any case, this was a great discussion about benchmarks in general.
Jim McGregor (tekstrategist on twitter and LinkedIn)
I completely disagree. Many of the "professional" sites just repeat the same story without any analysis at all. That's why it was so easy for Intel, ABI Research, etc. to spread their false marketing. But Jim has done some actual analysis and debunked the claims, which ultimately led to AnTuTu fixing their benchmark. That's very rare nowadays; even AnandTech often posts literally what Intel says. And as Jim noted in his other article, are all these sites going to retract their articles now that the claims have been proven false? Any site that hasn't isn't professional.
"Also, it is well known that kernel code in x86 architecture can be tuned to execute code faster because it's CISC architecture and optimized code caching instructions, whereas ARM can't (just ask any Linux kernel enthusiast and will confirm this or check in google: this is what I found: http://www.linuxjournal.com/article/7269?page=0,1)"
This is not a RISC vs CISC debate. However in general ARM RISC instructions do as much work as x86 CISC instructions and are smaller as well. So in that sense x86 doesn't have an advantage.
Now it is true that Linux and GCC are more tuned to perform well on x86 rather than ARM, but this is due to x86 ports existing for far longer and having more people work on them. A few years ago GCC was a very bad ARM compiler, but it's much better today.
No idea what that eight-year-old link is supposed to show; we all know how to use GCC. But since AnTuTu compiles their benchmarks themselves, it is essential they use the same compiler and options to get a fair comparison across different architectures.
In regard to not being able to see the benchmark code, BDTI posted an article this morning indicating that the RAM benchmark skips some operations when run on the Intel platform (http://bit.ly/1b2U8gq). The issue appears to be linked to the use of the ICC compiler for the Intel platform, as indicated in an earlier comment. This makes the use of this compiler for just the Intel platform highly suspicious. AnTuTu indicated that there will be some changes in the benchmark coming out in August, but provided no reference to this issue.
I continue to be amazed at the unbelievable number of people who continue to cite and rely on the AnTuTu benchmark. Without knowing the implementation of this benchmark, can anyone vouch for its functionality? Can you guarantee that it performs the same workload on every device? The same can be asked of many of the other readily available Android benchmarks, including Vellamo - designed and built by a semiconductor vendor. At least with Vellamo, you have better insight into what code it actually runs, even if there is still no way to know it executes the same amount of work on all devices. While it lacks the popularity of many Android benchmarks (at least for now), AndEBench was defined, designed, and developed inside an industry consortium by most of the vendors selling application processors, including the producer of Vellamo. Although AndEBench results show the Lenovo K900 trailing the Samsung Galaxy S4, there are no secrets, and the benchmark source code is available to all EEMBC members and licensees. By the way, Intel is chairing EEMBC's AndEBench 2.0 working group to develop a next-generation benchmark suite, but most of the other apps processor vendors have been heavily and steadily engaged in this effort.