Thank you @DMcCunney... looking forward to seeing who is willing to pay for faster, smaller, and more expensive. Let's examine those in more detail: faster is probably only marginally faster, since chip delays are dominated by interconnect lines rather than intrinsic transistor speed; smaller doesn't really matter much unless the application is space-constrained, like a smartphone; and no one likes more expensive. So I am not sure who will be willing to pay for faster, smaller, and more expensive after all... Kris
@krisi: If the cost below 28nm goes up why would anyone go there???
For the same reason such decisions are made in any industry.
Because those going there have a use case that they believe requires it, and for which they believe their customers will be willing to pay enough to let them do it profitably.
The question going forward will be what applications will require going below 28nm, and whether customers will be willing to pay what below 28nm parts will cost.
Moore's Law has historically meant semiconductor components would be progressively smaller, faster, and cheaper. It's increasingly apparent that cheaper will no longer be part of the equation, and the new paradigm will be smaller, faster, and more expensive. We are all in the process of finding out what that will mean for the semiconductor business.
A. Below 28 nm, the cost benefit of 2× transistor scaling is neutralized by escalating lithography costs.
B. Below 28 nm, embedded SRAM bit cells scale very poorly. Just to keep up with the 2× density improvement per node, the bit-cell area at 14 nm should be less than 0.04 µm². Published data from TSMC and Samsung show ~0.07 µm². Accordingly, an SoC built at 14 nm would cost much more than one built at 28 nm.
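The bit-cell arithmetic above can be checked with a quick sketch. The 28 nm starting area below is an illustrative assumption for a high-density 6T cell, not a vendor-published figure; only the 0.04 µm² target and ~0.07 µm² published value come from the comment itself.

```python
# Illustrative check of the SRAM bit-cell scaling argument.
AREA_28NM_UM2 = 0.127  # assumed 6T bit-cell area at 28 nm (illustrative)

# Ideal Moore's-Law scaling: 2x density (half the area) per node.
# 28 nm -> 20 nm -> 14 nm is two node steps, so the area should quarter.
ideal_14nm = AREA_28NM_UM2 / 2 ** 2
print(f"Ideal 14 nm bit-cell area: {ideal_14nm:.3f} um^2")  # comes out under the 0.04 um^2 target

published_14nm = 0.07  # ~0.07 um^2, the published value cited above
shortfall = published_14nm / ideal_14nm
print(f"Published cell is {shortfall:.1f}x larger than ideal scaling")
```

With these numbers the published cell is roughly twice the ideally scaled area, which is why SRAM-heavy SoCs lose much of the expected density gain.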
The reason is that 28 nm uses a planar transistor, while 22 nm (20 nm) can't be planar; it has to be FinFET for many reasons. TSMC decided to jump to 16 nm (14 nm) with FinFET. There will be some 20 nm planar, but it doesn't offer an advantage.
You can still go below 28 nm even if the cost per transistor stays the same.
But you would see die sizes shrinking and total transistor counts staying the same to keep margins the same. Or they could increase prices (not likely!). Nvidia 20 nm and Intel 14 nm parts are coming soon. They will confirm whether 28 nm was the last cost-effective node.
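The margin argument above comes down to cost per transistor: wafer cost divided by (good dies per wafer × transistors per die). A minimal sketch, in which every number is a hypothetical assumption rather than vendor data, shows how a doubled wafer cost with less-than-ideal density gain can make the newer node more expensive per transistor:

```python
# Hypothetical cost-per-transistor comparison; all figures below are
# illustrative assumptions, not real foundry pricing.

def cost_per_mtransistor(wafer_cost_usd, good_dies_per_wafer, mtransistors_per_die):
    """Cost (USD) per million transistors for one process node."""
    return wafer_cost_usd / (good_dies_per_wafer * mtransistors_per_die)

# Assumed: the 14 nm wafer costs ~2x the 28 nm wafer (multi-patterning
# lithography) but delivers only ~1.8x the density instead of the ideal 4x.
c28 = cost_per_mtransistor(wafer_cost_usd=3000, good_dies_per_wafer=200,
                           mtransistors_per_die=1000)
c14 = cost_per_mtransistor(wafer_cost_usd=6000, good_dies_per_wafer=200,
                           mtransistors_per_die=1800)

print(f"28 nm: ${c28:.4f} per Mtransistor")
print(f"14 nm: ${c14:.4f} per Mtransistor")
# If c14 >= c28, shrinking no longer lowers cost per function: die sizes
# can shrink, but margins only hold if transistor counts stay flat.
```

Under these assumed inputs the 14 nm part is slightly more expensive per transistor, which is exactly the "shrink the die, keep the transistor count" scenario described above.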
@Max. I agree with you about monolithic 3D ICs becoming mainstream. It certainly seems like the logical choice, and the ~28 nm "wall," if you will, will drive this technology relatively quickly. My crystal ball indicates that Fab 42, currently on hold, will be completely retooled to eventually accommodate carbon nanotube technology (likely a hybrid). It won't happen in this decade, and maybe not until well into the next. Meanwhile, the industry is in for seismic changes. I'm glad to be nearing the twilight of my career in this industry. I see lots of pain ahead.