@AKHO re: "the smallest cell published so far is the "10nm" cell shown at VLSI earlier this year. With a gate pitch of 64nm, metal pitch of 48nm, and the same fin pitch of 42nm, it was a bit smaller at 0.053 um2." -- That paper would be A 10nm Platform Technology for Low Power and High Performance Application Featuring FINFET Devices with Multi Workfunction Gate Stack on Bulk and SOI. K.-I. Seo et al. (Samsung, IBM, ST, GF, UMC) -- while this paper is 10nm FinFETs on both bulk and SOI, my understanding is that the SRAM work cited was indeed on SOI - right?
There are also process development costs... these are no longer small. They can run to tens or hundreds of millions of dollars. Plus, ramp-to-yield costs... with how late this node has been, Intel must have spent hundreds of millions of dollars on poor-yielding wafers. All of these, plus the design development costs already mentioned (libraries, characterization, qualification), have to be amortized over the chip volume produced. And I don't see Intel's volume greatly rising.
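The amortization argument above can be sketched with a few lines of arithmetic. All figures here are hypothetical, for illustration only; they are not Intel's actual costs:

```python
# Per-chip cost impact of amortizing fixed node costs over volume.
# All numbers below are made up, purely to illustrate the argument.
def amortized_cost_per_chip(fixed_costs_usd, wafer_cost_usd,
                            chips_per_wafer, total_chips):
    """Fixed costs (process development, ramp-to-yield losses,
    library/qualification work) spread over the total chip volume,
    plus the variable wafer cost per chip."""
    variable = wafer_cost_usd / chips_per_wafer
    amortized = fixed_costs_usd / total_chips
    return variable + amortized

# Hypothetical: $500M fixed costs, $8,000 wafers, 400 good chips/wafer.
low_volume = amortized_cost_per_chip(500e6, 8000, 400, 50e6)
high_volume = amortized_cost_per_chip(500e6, 8000, 400, 500e6)
print(f"low volume:  ${low_volume:.2f} per chip")   # $30.00
print(f"high volume: ${high_volume:.2f} per chip")  # $21.00
```

The point of the sketch: the variable wafer cost per chip is the same in both cases; only the volume over which the fixed costs are spread changes, which is why flat volume makes an expensive node hard to justify.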
There are at least a few ways to get some indication; the one I like is to watch the large fabless companies' choice of foundry. Now that Intel positions itself as a TSMC competitor, this becomes simple. And let me quote Karim Arabi of Qualcomm, from our recent blog Qualcomm Calls for Monolithic 3D IC: "One of the biggest problems is cost. We are very cost sensitive. Moore's Law has been great. Now, although we are still scaling down, it's not cost-economic anymore. It's creating a big problem for us."
So if Intel does continue to reduce cost by 30%, we should see all the big fabless companies on Intel's doorstep; so far, that has not happened!
"But structured ASICs, which are a decent solution for that (with eASIC looking pretty great in that area)"
Thanks; as the inventor and founder of eASIC, I loved reading it. But for the time being, those are less than 1% of the ASSP and ASIC market.
> creative accounting as a large part of the cost is the capital depreciation.
True. That could be. But how can we tell?
> per design development costs.
On the face of it, it seems like a problem. But structured ASICs, which are a decent solution for that (with eASIC looking pretty great in that area), don't seem to be catching on, and remain a relatively small niche of the semi business, AFAIK. So maybe it's not such a big problem?
Just to clarify: if you plot the data over multiple generations, SRAM area scaling has never been 0.5x per node. It has mostly been ~0.6x per node. 32nm was a little more aggressive and 22nm was less aggressive, but the long-run trend has been about 0.6x per node.
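The gap between ideal 0.5x scaling and the observed ~0.6x trend compounds quickly across generations. A minimal sketch, with an arbitrary starting cell area of 1.0 um2 (illustrative numbers only, not measured data):

```python
# Compare cumulative SRAM cell-area shrink for ideal 0.5x-per-node
# scaling versus the observed ~0.6x-per-node long-run trend.
start_area = 1.0  # arbitrary starting cell area, in um^2

for nodes in range(1, 6):
    ideal = start_area * 0.5 ** nodes      # "full" node shrink
    observed = start_area * 0.6 ** nodes   # long-run trend
    gap = observed / ideal                 # how far behind ideal
    print(f"after {nodes} node(s): ideal {ideal:.4f} um^2, "
          f"observed {observed:.4f} um^2, gap {gap:.2f}x")
```

After five generations the ~0.6x trend leaves the cell roughly 2.5x larger than ideal 0.5x scaling would predict, which is why the distinction between the two exponents matters over the long run.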
The costing of advanced nodes leaves very large room for creative accounting, as a large part of the cost is the capital depreciation.
But there is an additional aspect that is ignored in the general discussion yet has a huge impact on the industry adoption trend: per-design development costs.
Design costs escalate rapidly with scaling, and the industry has responded accordingly. Quoting from our blog FPGAs as ASIC Alternatives: Past & Future: "In his last keynote presentation at the Synopsys user group (SNUG 2014), Aart de Geus, Synopsys CEO, presented multiple slides to illustrate the value of Synopsys' newer tools in improving older-node design effectiveness. ... One can easily see that the most popular current design node is 180nm."
So while we argue about 14nm vs. 28nm, most new designs are choosing 180nm!
Some analysts have predicted rising cost per transistor due to multi-patterning, but not all have. In fact, a lot of the predictions are the work of one analyst, repeated by many. My modeling shows a cost reduction at 20nm for TSMC, although not as big a reduction as we "typically" see, and what I am hearing from early adopters is that this is what is happening.
Yes, at a given node eDRAM is 4-5 times smaller than SRAM. IBM has been using eDRAM on the same processor die, and on some game chips, for a few years (since 45nm), and that is how they fit huge caches on their server chips. Intel's 22nm eDRAM, however, was a standalone chip packaged together with a processor. We may see more of these packaging solutions, and extensions to 3D, in the future.