Much of the article and comments is based on a high-performance 32-bit world view. In terms of CPU revenue that is probably "the majority view". However, in terms of number of design wins, and probably number of sockets, it isn't. There are a lot of products out there (besides smartphones and PCs) that work just fine on 16-bit processors--many on 8-bit processors. Neither Arm nor Intel is positioned to address those.
I design a lot of products using the DSTni series of 186 derivatives. I looked at Arm and MIPS among others when I made that decision. Part of it comes down to RISC versus CISC. What I learned was that RISC was capable of very high performance, but that CISC took less memory and a lower clock speed for the same effort. Since memory and clock speed both affect battery life, and clock speed also affects EMI, I opted to go with CISC.
In my early days RISC was all the rage because it could get more MIPS out of less silicon. That feature should be reconsidered now that transistor (or gate) count isn't as constrained as it was 30 years ago.
We have so much CPU power now that developers don't take runtime efficiency as seriously as we used to. In some ways that is a mistake, because we are migrating ever more toward battery power. While a 2:1 performance hit on a desktop with CPU cycles (and watts) to burn seems of little consequence, a 2:1 performance hit on my cell phone means the battery will last only half as long doing the same thing. That IS a big deal.
What planet are you on? There has been NO cross-pollination of architectural features between x86 and ARM. Name one! Further, x86 has never made any significant inroads into the mobile world. They couldn't be more different.
ARM may eventually be able to compete in the low-end server world where x86 is king, but that's a long way off at best. On the high-end side of things, neither is likely to ever be seen.
Wow, guys. I don't think I've ever seen an article so full of incorrect historical information. I mean, it's normal for the suits at Gartner to not have any clue about technology, but I'd expect the editors here to at least check it out. Not the sort of thing I'm accustomed to seeing in this publication!
The CPU architecture wars died in the late 90's when MOT drifted away from the 68K, which forced Apple to create a 68K emulator that performed dynamic recoding to move users from 68K to PowerPC. Then Apple did it again with Rosetta to move from PowerPC to x86. And while you all may not realize it, they did it again in the iOS move to ARM, because the roots of iOS are in OS X. So, if the CPU ISA no longer handcuffs a developer, then you have to ask the question: why did Apple choose ARM? It's not because of architecture but for some other reason, such as flexibility of customization, or some such motivator.
I don't get why the article treats ARM as just a 12-year-old kid on Intel's block. In fact the ARM1 was a 1984 project, so ARM is only about 12 years younger than its Intel mate. Their stories are related because the elder company had declined to license its processors to Acorn, which then designed its own for its (circa 1985) new line of computer workstations. In fact, the first application targeted for it was ARX (a Unix-like OS), which was never released due to time-to-market issues.
MIPS is history from the Lexra days and their bogus patent suits over unaligned instruction access. Also, HP is not a serious contender for tablets after their last hiccup of exiting the PC space and abandoning their tablet. PowerPC is not doing that well either, so why not adopt ARM? ARM is only the core, and that is licensed for 5 cents. TI, Freescale and others add significant peripherals, and at around 1GHz, they are a better bet than Atom for programming at the bare-metal level (i.e., not Windows legacy). ARM's annual turnover is chicken feed compared to Intel's, and if Intel wanted to snuff the ISA, it would be easy to move to something else. Look at how people moved away from MIPS with a 64-bit solution, and how easy it would be to buy them.
Over the years, ARM and x86 have learned from each other and tried to adopt the best of each other's features. Currently both of them are entering each other's territories: ARM into servers and complex computing, and x86 into the mobile domain. Over time, the differentiating factors of these architectures will become more blurred.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with a little developer and IT skill added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.