Intel made its money from the x86 architecture by ensuring backward compatibility of code. That model, while pragmatic, comes at a price: lost efficiency. For many years Intel countered that by staying at the cutting edge of process technology, but with Moore's Law slowing down, it can no longer rely solely on that. I cannot see Intel making serious inroads into the IP-licensing model, and hence into the embedded market. Its best bet is to move up the chain into systems and services.
Intel will be worried if they read this article. I'm completely convinced by comments such as "the basic architecture should be redone" to bring out processors whose power consumption is as low as ARM's. But is it really feasible for a company like Intel to re-evaluate its basic architecture at this point? No way. They should just sell what they have to the major OEMs. A better idea might be to buy a few product companies and promote their Atom SoCs through them.
I would like to see SoC and USB peripherals store their own access parameters so drivers could be more universal. This could be done on a simplified SoC architecture and then ported to larger systems.
I would like to see Intel make a quad-core part: two cores for IT (updates, government monitoring, etc.) and two dedicated RTOS cores, so the user never sees a delay and can observe all activity.
It's power consumption, plain and simple. If Intel were serious about this market, they would do a ground-up redesign of x86 with power consumption as the primary design focus.
Most engineers view Intel's entry into SoCs as a post-design marketing decision. That may be an unfair assessment, but that's the perception. Toes dangling in the water.
I don't feel the commitment from Intel that I feel with ARM.
Press and industry analysts have picked up on Atom being "not a whole product" compared with ARM's portfolio, tools, design ecosystem, and customer mass. And mass and leverage as a foundation strategy is right out of Intel's very own playbook.
However, Atom does have a following in industrial embedded, where a power supply is available. Look at all the engineering effort spent cobbling surplus Atoms into an industrial design's thermal envelope, which has upset some who paid the higher price for the industrial package.
So end customers beware.
I think the key issue with the Intel/TSMC fabrication deal is that TSMC is fully booked on its leading-edge process. Atom requires advanced lithography to be functional, power-efficient, and cost-efficient, and TSMC just doesn't have the wafer starts for such a low-margin product so early in an advanced process's economic life.
Atom cannot be efficiently produced on any fabrication process other than a fully depreciated one, in the wake of higher-value, higher-margin products. That goes especially for a foundry that schedules wafer starts based on their economic value relative to the specific process's life cycle.
The ARM cluster has experienced similar foundry hurdles, and for wafers worth how much? But ARM does offer constituent mass and leverage that a foundry can schedule onto an advanced process accordingly.
For Intel, the task of making Atom a whole product (IP library and design tools) falls back on Intel itself and whatever SoC design cluster can be cobbled together. Financially, Intel will be stuck fabricating Atom for an economic profit only at end of run, near the end of any process node's economic life.
Camp Marketing Consultancy
First, let me say I am a big fan of ARM and not at all a fan of Intel Architecture. However, ARM has evolved a great deal since the ARM2 days, and each new version of the architecture has added new complexity. It is no longer a simple RISC architecture; just enumerating all the possible instruction formats is daunting.
This raises an interesting challenge for selling ARM-based software: which version of the architecture do you compile for? Or do you sell multiple binaries, one for each architecture you want to support? And how do you validate your software on the many possible variants? Adding in the floating-point options makes things even more interesting.
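One way the multiple-binaries approach plays out in practice is a launcher that picks the build matching the machine's reported architecture. A minimal sketch, with hypothetical binary names (`myapp-*` is made up; the `uname -m` strings are the ones Linux typically reports):

```shell
#!/bin/sh
# Hypothetical dispatch script: the vendor ships one binary per ARM
# architecture version, and this wrapper selects the right one at run time.
arch=$(uname -m)   # e.g. armv5tel, armv7l, aarch64, x86_64

case "$arch" in
  armv5*)        bin=myapp-armv5 ;;    # older cores, soft-float
  armv6*|armv7*) bin=myapp-armv7 ;;    # VFP/NEON-capable cores
  aarch64)       bin=myapp-aarch64 ;;  # 64-bit ARM
  *)             bin=myapp-generic ;;  # fallback build
esac

echo "would launch: $bin"
```

Each `myapp-*` binary would come from a separate compile with the matching `-march`/`-mfloat-abi` flags, which is exactly the validation burden described above: every branch of that `case` is a configuration someone has to test.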
This is not an issue for embedded applications, since they run only one program: the manufacturer of the embedded system knows which architecture version to support, and the programmers can tune the software to match. It's also not an issue with open source, since you can rebuild any application from source code to match your platform. If it's a popular FLOSS application, you can take advantage of the myriad beta testers in the community to validate many platforms. This is one reason Linux runs well on ARM.
As far as cleanliness of architecture, my current favorite is PowerPC. It's much more regular than ARM, particularly loads and stores. But there are far more ARM chips available.
I like your comment about Intel Architecture having evolved from the 8080. It actually goes back further to the 8008 (1972).
I was at the original Intel-TSMC press event. It was believable--until Intel said that it would not license its high-k technology to TSMC. (Don't forget: Atom is based on a 45-nm process with high-k and metal gates.) What use is Atom without high-k? Very little. I believe TSMC wanted to get its hands on the high-k process and Intel balked--for good reason: It's the only high-k technology in production. I think TSMC is struggling with high-k and wanted the recipe. When Intel continued to balk, TSMC put little effort in bringing up Atom in its libraries. Without high-k, Atom is just another IP core.
Meanwhile, at the original Intel-TSMC event, it was great to see the execs: Intel EVP Sean Maloney, TSMC former CEO Rick Tsai, etc. But the event seemed to be more of a photo op than a real news blockbuster. A complete waste of time.
Assembly coding on x86? Yuuch! On ARM? Sweet.
The benefit of all that x86 programming knowledge is a red herring. Almost all of that programming was done on PCs. In an SoC implementation you won't have the BIOS interrupt calls, let alone all the Windows hooks. If you're not running Windows, why go with Atom?
My ARM experience was with the Archimedes computer, which used an ARM2. At that time (1988) the 8 MHz ARM2 ate the lunch of 30 MHz '386s, and it did so with roughly one ninth the transistors (30,000 vs. 275,000).
Intel has never had efficient, elegant processors, relying instead on leading silicon processes to stay competitive. If they're using TSMC as a foundry, then that only advantage is lost.
ARM is a beautiful 'clean-sheet' design from the mid 80's. x86 has baggage that goes back to the 8080 (1974).
If it weren't for DOS/Windows and the PC, Intel would be just a foundry.
Apart from military applications (probably the most politically contaminated industry in the world) and the Xbox (Microsoft being good buddies with Intel), I am aware of no other x86 application that doesn't run on Windows.
Thinking about the Microsoft/Intel coziness, I predict that Microsoft will stop developing Windows Mobile for ARM and move to the Atom platform, say late this year to mid-2011. Wanna bet?