These changes skew the results as shown in the article, making AnTuTu useless as an Android benchmark. As Exophase notes, it is interesting to see the AnTuTu website full of cheating complaints and now it turns out AnTuTu pulled off the biggest cheat themselves...
Note this also explains why ARM SoCs used more power in the original ABI Research article - they are doing significantly more work.
My apologies, I confused the two process nodes. Intel will not be manufacturing the Atom product line on the FinFET process until the 22nm generation, starting with Bay Trail later this fall. The current generation is on the 32nm process node, similar to the 32/28nm node being used for the Exynos processor.
Thank you all for the great discussion. First, the question about the OS. Both the Galaxy S4 and the K900 run the same version of Android, so the comparison is as equal as possible.
Now, for the issues of power and process technology. You are all correct that power is the most critical issue. Unfortunately, there is no really good way to measure the power of the SoC or parts of the SoC. And even if you could, it would not be a very good measurement, because it all comes down to the system implementation, as I indicated in the article. With that said, I have seen some comparisons of the two platforms with the SoCs tied to the same clock frequency to balance the comparison. Obviously, once you increase the clock frequency, the power goes up. In this case, the K900 is using a higher clock frequency (2.0GHz) than the S4 (1.8GHz) with the Exynos processor. In the studies I have seen, the ARM platform still had lower power consumption at both the chip and platform level. I am leaving that comparison up to the entities that performed it. But I do find it difficult to believe that a larger die, a more complex architecture, and a higher clock frequency would result in lower power.
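The point about clock frequency and power can be sketched with the classic first-order CMOS dynamic power model, P = C * V^2 * f. The capacitance and voltage figures below are invented placeholders chosen only to show the shape of the scaling, not measurements of either chip:

```python
def dynamic_power(c_eff, voltage, freq_hz):
    """First-order CMOS dynamic power model: P = C_eff * V^2 * f."""
    return c_eff * voltage ** 2 * freq_hz

# Hypothetical numbers: a higher clock usually needs a slightly higher
# voltage, so power grows faster than linearly with frequency.
p_1_8ghz = dynamic_power(c_eff=1e-9, voltage=1.0, freq_hz=1.8e9)   # 1.8 W
p_2_0ghz = dynamic_power(c_eff=1e-9, voltage=1.05, freq_hz=2.0e9)  # 2.205 W

print(p_1_8ghz, p_2_0ghz)
```

With these made-up inputs, roughly 11% more clock costs about 22% more dynamic power because of the squared voltage term, which is why same-frequency comparisons are a fairer way to balance the two platforms.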
Now, in terms of process technology. Intel is using a more efficient FinFET process, but as with all process improvements, not every part of a semiconductor design will benefit from it. In addition, Intel and the other major foundries have different manufacturing methodologies. Intel optimizes the process for multiple fabs and multiple products. The foundries tend to tweak the process to optimize the performance and/or power of each product running on it. The results can be drastically different. Intel's lead in process technology has benefited the company and will continue to do so, but I'm not sure it will completely overcome the difference in microarchitecture complexity and die size; at least not yet.
I didn't quite laugh when I read the article, but it was tempting. I've been watching the electronics marketplace for decades, and benchmarks are simply a tool in the marketing battle. Vendors will post the benchmark scores that favor their products, and ignore or attempt to denigrate the rest. They will further emphasize the individual parts of a benchmark suite where they excel, and ignore the less favorable ones. (And it's not like flat-out cheating is unheard of.) Most folks take benchmarks with sacks of salt, and the ones that get any attention at all are run by independent third-party testing groups who aren't tied to any of the vendors.
The two applicable questions are "Has Intel become competitive with ARM in power draw at the same performance level?", and "Will any vendors give Intel design wins in upcoming products based on these benchmarks?"
My personal suspicion is that the answers are No and No.
Everything I've seen about the mobile device market indicates battery life is the biggest factor users will be concerned about, and Intel is still playing catchup with ARM in that area. Since device manufacturers pretty much have to provide battery life estimates under normal usage conditions as part of their marketing, they'll look long and hard at the battery life using an ARM processor vs an Atom processor, and if the ARM processor uses significantly less power, it's likely to get the nod even if the Atom processor has somewhat better performance.
The AnTuTu blog posted a complaint about Chinese manufacturers who diddled the benchmarks to make RAM performance appear to be double what it actually was. (See http://www.antutulabs.com/node/100) The fact that the manufacturers could diddle the benchmarks like that makes the value of the benchmarks questionable.
Note how often the word "current" is used. The ABI article is clearly more about current draw than about raw performance. So, while I agree that they could have done a better job by averaging multiple benchmarks, I think the point of the article is that Intel seems to have finally conquered what analysts have considered its "Achilles' heel": power consumption. The Intel part performs competitively compared to other SoCs on the market, and it does so with impressive power numbers.
As to another commenter's point about power being tightly correlated with process, and that Intel has an advantageous process -- I agree. But in the end it's the current products on the market "at the present time" that matter -- not how the company achieved its success. If the purpose is a strict architecture comparison, then yeah, take the process out of the mix. However, the "ARM vs Intel" babble out there isn't really a geeky comparison of architectures specifically. It's just terminology used to compare "Intel based" SoCs and "ARM based" SoCs that are currently available.
Well, long calls affecting battery life is much more a function of the RF chipset efficiency and software control of transmit levels, etc. I don't see how it would fit into a comparison of digital SoCs.
As the editor has remarked, performance benchmarks are fraught with issues. Many of them are constructed to take advantage of the architecture of a specific core for the sole purpose of making that vendor's silicon look really fast, when in reality it's just really good marketing spin.
Also, what OS was running on each platform to carry out these tests? A highly optimised OS can show amazing benchmark performance on a slow processor, while a badly ported OS running on a considerably faster processor can show poor results.
This is further compounded if the OS isn't utilising the multi-core architecture very effectively. Many OSes utilise one CPU core really well, two not so much, and the others hardly at all, making you wonder why some OSes were ported to many of the multi-core processors in the first place.
In terms of power consumption, that is nearly always governed by the quality of the silicon process. As you go down in silicon geometry, the leakage current from the gates increases significantly, and it needs to be properly controlled in the manufacturing process.
Intel uses a low-leakage 32nm process for the Atom, while the Samsung part is made on a standard 28nm process. Therefore, the power consumption remark is somewhat flawed, since it's not really an ARM vs Intel CPU issue but a choice of manufacturing process.
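The process tradeoff described here can be put in first-order terms: total chip power is roughly dynamic (switching) power plus static (leakage) power, and a low-leakage process trades some of one for the other. The wattage figures below are invented placeholders, not data for the Atom or Exynos parts:

```python
def total_power(dynamic_w, leakage_w):
    """Total chip power as the sum of switching and leakage power."""
    return dynamic_w + leakage_w

# Hypothetical split: a smaller "standard" node may switch more
# efficiently but leak more; a larger "low-leakage" node trades some
# dynamic efficiency for much less leakage.
standard_28nm = total_power(dynamic_w=1.4, leakage_w=0.6)
low_leakage_32nm = total_power(dynamic_w=1.6, leakage_w=0.2)

print(standard_28nm, low_leakage_32nm)
```

Under these made-up numbers the older, low-leakage node comes out ahead overall, which is why comparing "32nm vs 28nm" by geometry alone can mislead.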
I couldn't agree more about the performance/watt metric. No one would be shocked if you said a top-of-the-line Xeon (or POWER8) beat an ARM chip in performance. The race is completely about performance/watt.
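The metric itself is just a ratio, which is why a chip can lose on raw score and still win the race the comment describes. The scores and wattages below are invented placeholders, not real measurements of any product:

```python
def perf_per_watt(score, watts):
    """Benchmark score normalized by average power draw."""
    return score / watts

# Hypothetical figures: a big server chip vs a small mobile SoC.
server_chip = perf_per_watt(score=50000, watts=130)  # wins on raw score
mobile_soc = perf_per_watt(score=20000, watts=3)     # wins per watt

print(round(server_chip, 1), round(mobile_soc, 1))  # 384.6 6666.7
```

With these placeholder numbers the mobile part delivers well over ten times the score per watt despite a much lower absolute score, which is the whole argument for normalizing by power in this market.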
The RAM scores seem highly unusual. Is there some kind of "cheating" going on with AnTuTu? I agree with Jim -- using one benchmark to brag about a chip's capabilities is certainly wrong. But not only is the AnTuTu score "out of the norm", the subtest scores also seem very strange. They don't even look like the results of a real benchmark when compared to the scaling of other benchmarks!