In order to be competitive, your new core needs to leapfrog your competition's previous-generation cores by a good margin, especially when you already use the best process. TSMC's 20nm is around the corner, and so are several more 64-bit ARM cores (besides the Apple A7). That's what Silvermont will have to compete against next year, so let's see what your analysts say then.
ISA certainly does matter a lot, despite claims to the contrary. But even if you just consider the ecosystem, it's hard to deny that x86 is at a distinct disadvantage there, with virtually no presence in MCUs. Intel needs to try to compete in IoT of course (or again lose out, as with mobiles), but the question is whether it is possible for them to compete using something like Quark. I refer to my leapfrog argument again: why would anyone switch to x86 if there is no compelling reason to do so?
Currently, as a foundry, Intel has only one option: premium devices, meaning high-price, high-margin components, perhaps including their integration into subsystems.
To secure Intel's profit margin, these devices still pay for everything else. Certainly this could include 2.5D and 3D integration in the future, at their premium cost-to-price ratio.
High-margin candidates are obviously what Intel can do well: enterprise x86 processors and other margin leaders, including FPGAs, GPUs, and application-specific coprocessors that command a high price for their application's economic benefit.
Notably, Avoton/Rangeley qualifies, with a price-to-cost ratio above 5.4 and up to 9 for the octa-core part. Ironically, at the lower ratio (the even-grade split), Intel gives up $552,362,902 in revenue to core disablement, or die recovery, across 51% of total volume. Perhaps this says more about yield than about serving the customer's need for a part with fewer than eight cores? For Bay Trail, I suspect 10 million units this cycle might still fall short of their production run, meaning most sit in the die bank rather than populating tablets.
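The "revenue given up to core disablement" claim can be sanity-checked with back-of-the-envelope math. The sketch below only illustrates the shape of that calculation; the tray prices, total volume, and disabled share are hypothetical placeholders of my own, not the inputs behind the $552M figure.

```python
# Illustrative sketch of the "revenue given up to core disablement" math.
# All numbers here are assumptions for illustration, not Intel's actual figures.

octa_price = 171.0          # assumed tray price of the full eight-core part ($)
quad_price = 86.0           # assumed tray price of the core-disabled quad part ($)
total_volume = 10_000_000   # assumed total units shipped in the cycle
disabled_share = 0.51       # share of volume sold as core-disabled parts

disabled_units = total_volume * disabled_share
# Revenue forgone = price delta on every unit sold as a disabled part
revenue_given_up = disabled_units * (octa_price - quad_price)

print(f"Units sold with cores disabled: {disabled_units:,.0f}")
print(f"Revenue given up vs. selling all as octa-core: ${revenue_given_up:,.0f}")
```

With these placeholder inputs the forgone revenue comes out in the hundreds of millions, which at least shows how a number of the quoted magnitude could arise from core disablement across half the volume.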
Intel remains an expensive business to operate. It is working through a product-line overhaul in an attempt to see what sticks with the market. The mix and volume of Intel's product categories is evolving, not unlike the transition periods associated with Tanner and Cascade, and NetBurst to Core.
Intel's experimentation to deliver new product types throughout this cycle has added to its operating cost. The learning curve on PC microprocessors, including interposers and mixed-signal fabrication, has been expensive. Voltage regulation cost around $20 to fabricate into a dual-core mobile part before dropping to $8 at peak volume.
Now one broker reports 30 million E5-4650 four-way parts dumped into the channel before the official Intel introduction. One has to ask: whose white elephant? Perhaps the entire MP product category in a 2P world? The point is that Intel produces a lot of product that gets banked until sold, dumbed down, or sent to the crusher.
Is it possible Intel might aspire to the mantra of effective economic utilization that TSMC administers for its own sustainability and its customers' long-term welfare?
The Haswell run currently comes in at a best case of $0.30 per mm² of fully burdened design and manufacturing cost. The worst case, hit with lost sunk costs where marginal revenue equals marginal cost, requires Ivy Bridge-E price support; with that support, the figure is $0.62 per mm² of fully burdened design and manufacturing cost.
Average weighted price across product lines currently:
Haswell, the all-purpose quad, good for most anything: $283.79
Intel should let go of its ego. It is too late, and culturally Intel is not a fit for going into the mobile world with its x86 SoC. Once Intel gives up on that, mobile will become a money-making business instead of a money-losing one, by offering foundry services to Qualcomm and Apple.
@resistion, agreed. Any SoC vendor, especially in the mobile world, would have to think long and hard before having its chips fabricated by Intel. They should be "scared," as you point out. The competitor angle should not be underestimated.
Rick, I'd say Quark is exactly the sort of thing that convinces me that Intel doesn't have a clue. Who wants an ancient and slow 486 to replace their current fast and efficient RISC MCUs? If this is something the new CEO pushed for, then I believe things won't improve at all - when does Intel get the message that x86 everywhere is just a crazy idea?
Atom is another example. Why spend 5 years trying to shoehorn the same old, uncompetitive design into mobiles when you see all your competitors release new CPUs every 6 months or so? The disadvantage of the x86 ISA is such a millstone that even having the most advanced process cannot mitigate it. The new Silvermont can barely keep up with older 28nm ARM designs despite being made on the world's most advanced 22nm process...
If they keep repeating the same mistakes then it won't even be their choice - to survive, all they could become is a foundry.
@Rick Merritt: In general I find your opinions spot on, but not in this case. You have titled your article "... inflection point". That is a term which has crept into common usage from mathematics & then EE, but is perhaps not fully comprehended. Such points are characterized by very sudden & severe changes in gradient; they are not visible from 30,000 ft and require close monitoring. So what is needed "... inside" is someone at the helm who can call the shots based on hard personal knowledge of where there is weakness or delay, i.e. architecture and design, and not lose focus, as you have rightly pointed out, by dabbling in lower-margin foundry services, where the incumbent feels comfortable because of his process background. A person with strength in design would not even have considered that diversion seriously.
@Junko: Interesting parallel between Intel and the Japanese giants it once fought.
I think things will have to get pretty painful for Intel before it gives up the high-margin processor business for the lower-margin foundry business. It doesn't even have the skills for the high product/process mix that business requires.
@Chipmonk and resistion: I think Intel has plenty of good processor and circuit designers, but they are still far behind Qualcomm in mobile SoC blocks and expertise, despite years of effort.
Re the CEO (BK), I don't think it matters if the CEO is a process, design, or sales guy as long as he understands the 30K-foot issues in all three areas. I think BK's Quark move shows he understands those issues, and as an insider he knows how to work the levers at Intel to get stuff to happen fast.
That said, the mobile train left the station a couple of years ago, and Intel is still running behind it waving its hat and carrying its suitcase.