I'm curious to see how Intel responds to this article. From what Mr. Or-Bach says here, it looks like their announcements of SRAM cell size may not be consistent with each other. Intel keeps saying Moore's Law is alive and well, but is cost per transistor really scaling once every 2 years as it used to? Isn't there a 2.5-3 year gap between 22nm and 14nm? Looking forward to hearing Mark Bohr/Intel's response. Maybe EETimes can contact him for comment?
The new report from Intel detailing the long-awaited 14nm process shows an amazing transistor structure with 2 new features.
1. Taller fins: 42nm vs. 34nm for the 14nm vs. 22nm nodes
2. Non-tapered (vertical) fins, which is quite a deviation from the 22nm process.
I'm not sure what the competitors (Samsung, TSMC, GF) are planning for their fins' shape. It will be very interesting to see.
However, I have not seen any revelation with regards to the back end, especially the interconnect section of the process (BEOL). Looks like the transistors are getting better and better but the BEOL basically stays the same. In that case, are we seeing diminishing returns? Or will the solution be to keep adding more metal layers? Also, what about the first metal layers' CD: how narrow can we make them?
Actually the smallest cell published so far is the "10nm" cell shown at VLSI earlier this year. With a gate pitch of 64nm, metal pitch of 48nm, and the same fin pitch of 42nm, it was a bit smaller at 0.053 um2.
Also, a fair comparison for Intel vs. Intel would be the 0.108um2 cell in 22nm, which had the same 2-fin NFET design that was shown here. The 0.092um2 cell had only one fin per transistor. The 2-fin cell here has a width of 420nm. A single-fin cell would have had a width of (420-84)nm and an area of roughly 0.047um2, which would put it at 0.51X compared to the corresponding 1-fin cell in 22nm. However, a single-fin SRAM would have poor characteristics, because it does not have the right beta ratio (PD>PG>PU).
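The arithmetic above can be checked with a quick sketch. The 420nm width, 42nm fin pitch, and the 0.092um2 22nm reference cell come from the comment; the ~140nm cell height is my inference from the stated width and area figures, so treat it as an assumption.

```python
# Quick check of the single-fin SRAM cell estimate. Width (420nm), fin
# pitch (42nm), and the 22nm 1-fin reference (0.092 um^2) come from the
# comment; the 140nm cell height is inferred, not stated, so it is an
# assumption.

FIN_PITCH_NM = 42
CELL_HEIGHT_NM = 140  # assumed; consistent with 336nm x 140nm ~ 0.047 um^2

two_fin_width_nm = 420
one_fin_width_nm = two_fin_width_nm - 2 * FIN_PITCH_NM  # drop one fin per side

one_fin_area_um2 = one_fin_width_nm * CELL_HEIGHT_NM * 1e-6
ratio_vs_22nm = one_fin_area_um2 / 0.092  # vs. the 22nm 1-fin cell

print(f"1-fin width: {one_fin_width_nm} nm")        # 336 nm
print(f"1-fin area:  {one_fin_area_um2:.3f} um^2")  # ~0.047 um^2
print(f"scaling:     {ratio_vs_22nm:.2f}x")         # ~0.51x
```

The 0.51X figure matches the comment, which is exactly why the author notes it would be an unfair comparison against the 2-fin 22nm cell.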
Just to clarify, if you plot the data over multiple generations, SRAM area scaling has never been 0.5X per node. It has mostly been ~0.6X per node. 32nm was a little more aggressive and 22nm a little less, but the long-run trend has been about 0.6X per node.
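To put the 0.6X-versus-0.5X distinction in perspective, compounding a few generations shows how quickly the gap grows. This is a minimal sketch with a normalized starting area, not measured data:

```python
# Compound SRAM cell-area scaling over several nodes, starting from a
# normalized area of 1.0. The 0.6x-per-node figure is the long-run trend
# cited above; 0.5x is the idealized "full node" shrink.

def scaled_area(start_area, shrink_per_node, nodes):
    """Normalized cell area after `nodes` generations of shrinking."""
    return start_area * shrink_per_node ** nodes

for nodes in range(1, 6):
    trend = scaled_area(1.0, 0.6, nodes)
    ideal = scaled_area(1.0, 0.5, nodes)
    print(f"{nodes} nodes: 0.6x trend -> {trend:.3f}, 0.5x ideal -> {ideal:.3f}")
```

After four nodes the 0.6X trend leaves cells at about 0.13 of the starting area versus 0.0625 for ideal 0.5X scaling: roughly a 2X difference in density.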
@AKHO re: "the smallest cell published so far is the "10nm" cell shown at VLSI earlier this year. With a gate pitch of 64nm, metal pitch of 48nm, and the same fin pitch of 42nm, it was a bit smaller at 0.053 um2." -- That paper would be "A 10nm Platform Technology for Low Power and High Performance Application Featuring FINFET Devices with Multi Workfunction Gate Stack on Bulk and SOI," K.-I. Seo et al. (Samsung, IBM, ST, GF, UMC). While the paper covers 10nm FinFETs on both bulk and SOI, my understanding is that the SRAM work cited was indeed on SOI, right?
Some analysts have predicted rising cost per transistor due to multi-patterning, but not all have. In fact, a lot of the predictions are the work of one analyst repeated by many. My modeling shows a cost reduction at 20nm for TSMC, although not as big a reduction as we "typically" see, and from what I am hearing from early adopters, that is what is happening.
The costing of advanced nodes leaves very large room for creative accounting, as a large part of the cost is the capital depreciation.
But there is an additional aspect that is ignored in the general discussion yet has a huge impact on the industry's adoption trend: per-design development costs.
Design costs escalate rapidly with scaling, and the industry has responded accordingly. Quoting from our blog FPGAs as ASIC Alternatives: Past & Future: "In his last keynote presentation at the Synopsys user group (SNUG 2014), Aart de Geus, Synopsys CEO, presented multiple slides to illustrate the value of Synopsys' newer tools for improving older-node design effectiveness. ... One can easily see that the most popular current design node is 180nm."
So while we argue about 14nm vs. 28nm, most new designs are choosing 180nm!
> creative accounting as a large part of the cost is the capital depreciation.
True, that could be. But how can we tell?
> per design development costs.
On the face of it, it seems like a problem. But structured ASICs, which are a decent solution for that (with eASIC looking pretty great in that area), don't seem to be catching on, and are a relatively small niche of the semi biz, AFAIK. So maybe it's not such a big problem?
There are at least a few ways to get some indication; the one I like is to watch the large fabless companies' choice of foundry. Now that Intel positions itself as a TSMC competitor, this becomes simple. And let me quote Karim Arabi of Qualcomm, from our recent blog Qualcomm Calls for Monolithic 3D IC: "One of the biggest problems is cost. We are very cost sensitive. Moore's Law has been great. Now, although we are still scaling down, it's not cost-economic anymore. It's creating a big problem for us."
So if Intel does continue to reduce cost by 30%, we should see all the big fabless companies on Intel's doorstep. So far it has not happened!
"But structured ASIC , which are a decent solution for that(with eASIC looking pretty great in that area)"
Thanks, as the inventor and founder of eASIC I loved reading it. But for the time being those are less than 1% of the ASSP and ASIC market.
There are also process development costs; these are no longer small. They can be in the tens to hundreds of millions of dollars. Plus ramp-to-yield costs: with how late this node has been, Intel must have spent hundreds of millions on poor-yielding wafers. All of these, plus the design development costs already mentioned (libraries, characterization, qualification), have to be amortized over the chip volume produced. And I don't see Intel's volume greatly rising.
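The amortization point can be made concrete with a toy model. All numbers below are hypothetical, chosen only to illustrate why fixed costs dominate at low volume:

```python
# Toy amortization sketch (all dollar figures are hypothetical, not from
# the thread): fixed costs such as process development, ramp, and design
# NRE get divided over total unit volume, on top of the per-die cost.

def per_chip_cost(fixed_costs_usd, volume, die_cost_usd):
    """Effective cost per chip once fixed costs are amortized over volume."""
    return die_cost_usd + fixed_costs_usd / volume

# e.g. a hypothetical $500M of combined fixed cost and a $5 die cost:
for volume in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{volume:>13,} units -> ${per_chip_cost(500e6, volume, 5.0):.2f}/chip")
```

At 10M units the fixed costs add $50 per chip; at 1B units they add only $0.50, which is the commenter's point about needing volume growth to justify the node.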
Yes, at a given node eDRAM is 4-5 times smaller than SRAM. IBM has been using eDRAM on the same processor chips and some of the game chips for a few years (since 45nm), and that's how they get huge caches on the server chips. Intel's 22nm eDRAM, however, was a standalone chip packaged together with a processor. It is possible we will see more of these packaging solutions and extensions to 3D in the future.