"Both compilers can be setup to optimize the code execution for each architecture..."
You have overlooked an essential point: what we're talking about is the compiler removing portions of the benchmark, contrary to the intent of the benchmark. As a consequence, the benchmark results become meaningless. It's like comparing two runners in a race where one runs a half-marathon and the other runs a full marathon.
Professional article? I don't think so. Check this comment from the author: "It has always been in the best interest of the technology vendors"... where is the proof of this? Can the author give some real examples? If you blame someone, you need to prove it.
About the comment on the compiler: ARM code cannot be compiled the same way as x86 code. So what is suspicious about that? Both compilers can be set up to optimize code execution for each architecture. It is up to the platform vendors to tune these optimizations to enhance performance and thus make a better product. Also, it is well known that kernel code on the x86 architecture can be tuned to execute faster because it's a CISC architecture with optimized code-caching instructions, whereas ARM can't (just ask any Linux kernel enthusiast, who will confirm this, or check on Google; this is what I found: http://www.linuxjournal.com/article/7269?page=0,1)
I continue to be amazed at the unbelievable number of people who continue to cite and rely on the AnTuTu benchmark. Without knowing the implementation of this benchmark, how can anyone vouch for what it actually does? Can you guarantee that it performs the same workload on every device? The same can be asked of many of the other readily available Android benchmarks, including Vellamo, designed and built by a semiconductor vendor. At least with Vellamo you have better insight into what code it actually runs, even if there is still no way to know it executes the same amount of work on all devices. While it lacks the popularity of many Android benchmarks (at least for now), AndEBench was defined, designed, and developed inside an industry consortium by most of the vendors selling application processors, including the producer of Vellamo. Although AndEBench results show the Lenovo K900 trailing the Samsung Galaxy S4, there are no secrets: the benchmark source code is available to all EEMBC members and licensees. By the way, Intel is chairing EEMBC's AndEBench 2.0 working group to develop a next-generation benchmark suite, but most of the other apps-processor vendors have been heavily and steadily engaged in this effort as well.
In regard to not being able to see the benchmark code, BDTI posted an article this morning indicating that the RAM benchmark skips some operations when run on the Intel platform (http://bit.ly/1b2U8gq). The issue appears to be linked to the use of the ICC compiler for the Intel platform, as indicated in an earlier comment. This makes the use of this compiler for just the Intel platform highly suspicious. AnTuTu indicated that there will be some changes in the benchmark coming out in August, but provided no reference to this issue.
And as a general comment on benchmarks, I note that the AnTuTu benchmark is proprietary. I can get it and run it. I can see what it purports to measure. I have no idea how it is doing it, unless I want to do things like disassemble the object code.
If I'm going to run benchmarks and take the results seriously, they better be open source. I want to get the code and see exactly what it's doing and how it is measuring what it purports to measure. I want code that is a standard, which everyone agrees is the way you do those measurements, with lots of developers to look at the code, see issues, and make fixes. I need to be confident that the benchmarks reflect reality, because I will be making critical decisions based on them.
I don't care whose benchmark code it is. It may be accurate and valid, but I'm not going to take it seriously unless I can see the code for myself, and make my own judgement on its validity.
This is a great article. Much more professional than the one that simply claimed Intel is better than ARM based on a single data point. Thanks to the author.
Personally, I am happy to see Intel closing in. It is much healthier to have two competing architectures in the mobile market. This is a battle between Intel and the rest of the semiconductor industry. Intel might be big, but the rest of the semiconductor business is much bigger. And I believe most system companies will remain in the ARM camp as well. Intel has a long history of manipulating CPU prices, which has left PC companies with profits close to nothing. I had a conversation with an executive from a major PC manufacturer that is also becoming a major player on the mobile side. I'm sure they will not let this happen anymore. I think it is time for Intel to rethink its strategy.