After all, it completely validates what the chip behemoth said at its investor day, in both Paul Otellini's and Brian Krzanich's presentations: that Intel is a good four years ahead of its closest foundry competition.
Indeed, as my colleague Peter Clarke notes in his article, the collaboration between the Taiwanese manufacturer and ARM isn't expected to yield even initial results much before the second half of 2015, and even that is probably an optimistic bet.
Furthermore, the announcement shows just how far behind ARM will be in terms of making any serious impact on the server market.
ARM has repeatedly said it would have a serious server market offering by 2014, and would be able to capture meaningful market share within two to three years, but this latest announcement throws a whole lot of cold water on that notion.
If ARM’s latest press release represents the current state of things, it means the 64-bit ARMv8 architecture on FinFET process technology is roughly three and a half years away at best, giving Intel ample time to work out its current kinks and remain far ahead of its competition.
The announcement also means TSMC may be resetting its strategy for FinFET (tri-gate) technology, after it had previously told the market it would target second-generation 20nm. Today’s press release reads as though the manufacturer has had to adjust the plan to sub-20nm, though whether TSMC jumped or was pushed is open to debate.
“There’s no doubt that based on current plans of record, Intel’s way ahead of any other company with regard to FinFET deployment,” said industry analyst Nathan Brookwood of Insight64.
“They’re shipping millions of devices with 3-D transistors today, but TSMC and Globalfoundries don’t plan to introduce the technology until their 14nm nodes, at least four years away with regard to volume production. I hear both companies want to pull that technology in, but that’s where things stand today.”
That being said, Brookwood was not convinced ARM had to wait for FinFETs to start its move to a 64-bit architecture for servers and clients. The British chip designer released its 64-bit architecture definition (ARMv8) last year, and Applied Micro is already shipping an FPGA-based ARMv8 implementation to early customers, with the expectation that it will follow with a 40nm ARMv8 SoC later this year and a 28nm version in late 2013.
“It wouldn’t surprise me to see other ARM architectural licensees (the list includes Apple, Qualcomm, Marvell and Nvidia) jump into the 64-bit fray with 28nm technology in a similar timeframe,” said Brookwood.
He added that in his opinion, the timing of the ARM and TSMC announcement doesn’t mean 64-bit ARM chips have to wait for FinFET technology, but rather reflects the fact that it takes a couple of years to develop physical IP, especially for processes with as many design constraints as the FinFET processes coming out of TSMC and Globalfoundries.
Pushing FinFET technology out to 14nm is hardly a surprising move for TSMC. After all, before a fab can do FinFET, it should first properly master high-k metal gate technology, something TSMC is still struggling with compared to Intel and Globalfoundries.
“As Intel pointed out in a talk at SemiCon West earlier this month, each new process generation builds on the technologies of prior generations,” said Brookwood.
True enough. And what’s more, it’s now abundantly clear that Intel will once again be several generations ahead in the FinFET game.
I think "articles" like this are a sign of the times. It looks like EETimes is no longer really editing and is just becoming a low-quality channel for press releases and FUD.
The purpose of having editorial staff is to maintain quality and screen out this sort of junk. Without that you're just cashing in a once-valuable brand. When the quality goes down, so will the advertising revenue.
Oh well, once ESD stopped printing this was bound to happen...
Read what I said above. On its own, transistor/process technology is not enough these days. ARM and Co. have low-power architectures, more efficient software tools, and a much larger ecosystem, which can more than compensate for a 30% loss in power or speed performance at the transistor level (if proven). Of course, they will always seek to catch up with the latest and best transistor/process technology, but that's not all they are about.
I repeat, system performance is not dictated by transistor/process performance alone. Intel wants you to think that way because it is an IDM that wants you locked into its way of doing things.
Intel's fins are like triangles of varying shape (as pointed out in earlier articles). The leakage variation impact is clearer here:
As for TSMC, or foundries in general, they will never adopt new technologies unless it is clear customers will go for them. That is why they are not as "self-driven" as Intel or Samsung.
That is why you need ARM or Qualcomm to sign up first. Even then, there is general reluctance. What if it is just one customer? Should other customers benefit so easily without putting in as much development time? And so on.
The Ivy Bridge advantage over Sandy Bridge is not 100% clear:
You mean benefits like 30% less power consumption on average vs 32nm?
I, for one, applaud Intel for marketing the technology behind these benefits rather than just telling consumers "It's faster, so buy it."
Join our online Radio Show on Friday 11th July starting at 2:00pm Eastern, when EETimes editor of all things fun and interesting, Max Maxfield, and embedded systems expert, Jack Ganssle, will debate just what is, and is not, an embedded system.