I'm curious to see how Intel responds to this article. From what Mr. Or-Bach says here, it looks like their announcements of SRAM cell size may not be consistent with each other. Intel keeps saying Moore's Law is alive and well, but is cost per transistor really scaling once every 2 years as it used to? Isn't there a 2.5-3 year gap between 22nm and 14nm? Looking forward to hearing Mark Bohr/Intel's response. Maybe EETimes can contact him for comment?
The new report from Intel detailing the long-awaited 14nm process shows an amazing transistor structure with 2 new features:
1. Taller fins: 42nm at 14nm vs. 34nm at 22nm.
2. Non-tapered (vertical) fins, which is quite a deviation from the 22nm process.
I'm not sure what the competitors (Samsung, TSMC, GF) are planning for their fins' shape. It will be very interesting to see.
However, I have not seen any revelation with regard to the back end, and especially the interconnect section of the process (BEOL). It looks like the transistors are getting better and better, but the BEOL basically stays the same. In that case, are we seeing diminishing returns? Or will the solution be to keep adding yet more metal layers? Also, what about the first metal layers' CD: how narrow can we make them?
Actually, the smallest cell published so far is the "10nm" cell shown at VLSI earlier this year. With a gate pitch of 64nm, a metal pitch of 48nm, and the same fin pitch of 42nm, it was a bit smaller at 0.053 um2.
Also, a fair comparison for Intel vs. Intel would be the 0.108um2 cell in 22nm, which had the same 2-fin NFET design that was shown here. The 0.092um2 cell had only one fin per transistor. The 2-fin cell here has a width of 420nm. A single-fin cell would have had a width of 420-84 = 336nm and an area of roughly 0.047um2, which would put it at 0.51X compared to the corresponding 1-fin cell in 22nm. However, a single-fin SRAM would have poor characteristics, because it does not have the right beta ratio (PD>PG>PU).
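The arithmetic behind that 0.51X estimate can be sketched quickly. This assumes the cell height is unchanged when one fin per NFET is removed, so area scales with cell width; the 0.0588 um2 2-fin cell area is Intel's published 14nm figure, used here as an assumed input rather than a number quoted above.

```python
# Back-of-the-envelope check of the hypothetical single-fin 14nm cell.
# Assumption: cell height stays fixed, so area scales with cell width.
fin_pitch = 42           # nm, 14nm fin pitch
width_2fin = 420         # nm, published 2-fin cell width
area_2fin = 0.0588       # um^2, Intel's published 14nm 2-fin SRAM cell (assumed input)

width_1fin = width_2fin - 2 * fin_pitch          # one fin removed per NFET, two NFETs per cell
area_1fin = area_2fin * width_1fin / width_2fin  # area scales with width

area_22nm_1fin = 0.092   # um^2, Intel 22nm single-fin cell
ratio = area_1fin / area_22nm_1fin

print(width_1fin, round(area_1fin, 3), round(ratio, 2))  # 336 0.047 0.51
```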
Yes, at a given node eDRAM is 4-5 times smaller than SRAM. IBM has been using eDRAM on the same processor chips and some of the game chips for a few years (since 45nm), and that's how they build the huge caches on their server chips. Intel's 22nm eDRAM, however, was a standalone chip packaged together with a processor. We may see more of these packaging solutions, and extensions to 3D, in the future.
Some analysts have predicted rising cost per transistor due to multi-patterning, but not all have. In fact, a lot of the predictions are the work of one analyst repeated by many. My modeling shows a cost reduction at 20nm for TSMC, although not as big a reduction as we "typically" see, and from what I am hearing from early adopters, that is what is happening.
The costing of advanced nodes leaves very large room for creative accounting, as a large part of the cost is capital depreciation.
But there is an additional aspect that is ignored in the general discussion yet has a huge impact on the industry adoption trend: per-design development costs.
Design costs escalate rapidly with scaling, and the industry has responded accordingly. Quoting from our blog FPGAs as ASIC Alternatives: Past & Future - "In his last keynote presentation at the Synopsys user group (SNUG 2014), Aart de Geus, Synopsys CEO, presented multiple slides to illustrate the value of Synopsys' newer tools in improving older-node design effectiveness. .... One can easily see that the most popular current design node is at 180nm."
So while we argue about 14nm vs. 28nm, most new designs are choosing 180nm!
Just to clarify: if you plot the data over multiple generations, SRAM area scaling has never been 0.5X per node. It has mostly been ~0.6X per node. 32nm was a little more aggressive and 22nm was a little less aggressive, but the long-run trend has been about 0.6X per node.
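The gap between 0.6X and the ideal 0.5X compounds quickly over several node transitions. A minimal sketch with normalized (illustrative) starting area:

```python
# Compound SRAM cell-area scaling: ideal 0.5x per node vs. the ~0.6x historical trend.
def scaled_area(start_area, shrink_per_node, nodes):
    """Cell area after a given number of node transitions."""
    return start_area * shrink_per_node ** nodes

start = 1.0  # normalized starting area
for n in range(1, 6):
    ideal = scaled_area(start, 0.5, n)
    trend = scaled_area(start, 0.6, n)
    print(n, round(ideal, 3), round(trend, 3))
# After 5 nodes, ideal scaling gives ~0.031x while the ~0.6x trend gives ~0.078x,
# i.e. cells roughly 2.5x larger than ideal scaling would predict.
```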