Unlike the foundries, we don't really know enough about Intel to analyze their transistor or SoC costs. We do know that for the foundries, 28nm is the last node of Moore's Law. As for Intel, it might help you reach your own conclusion if you read our blog: Intel vs. TSMC: an Update <http://www.monolithic3d.com/2/post/2014/01/intel-vs-tsmc-an-update.html>
It's a good question; it might be just to differentiate from Intel's 22 nm. But the cost of aggressive double patterning might be too high for customers to tolerate, so maybe they'll go for 16 nm to get the FinFET benefit (if any) as well.
EUV is now being considered for 7 nm insertion with double patterning (the 10 nm insertion decision point has passed). I am not sure that improves the cost curve. The source power target is not fixed; it has to keep moving up.
22nm vs. 28nm is just a marginal case because there is no need for double patterning yet (according to Intel). Why hasn't TSMC just migrated to 22nm without it? Or is it really that hard to make 22nm work with single patterning?
Broadwell is right around the corner. Let's see what their die sizes are at 14nm. Perhaps the SRAM amount will stay the same to compensate for poor SRAM scaling.
Intel is a good example to use since they don't need to take a margin hit in desktops and servers. If 16nm really costs the same as 22nm per transistor, they will have to reduce the die size by 50% just to keep the same margins.
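To make that arithmetic explicit, here is a back-of-the-envelope sketch; every number in it is an illustrative assumption, not an Intel or TSMC figure. If cost per transistor stays flat while transistor density roughly doubles, then cost per mm² roughly doubles, so the die has to shrink by about half just to keep the chip cost, and therefore the margin, where it was.

```python
# Back-of-the-envelope margin math (illustrative, assumed numbers only).
# Assumption: cost per transistor is flat node-to-node while density ~doubles,
# so cost per mm^2 of silicon roughly doubles.

cost_per_mm2_22nm = 1.0          # normalized cost of one mm^2 of 22nm silicon
cost_per_mm2_16nm = 2.0          # ~2x, since density doubles but $/transistor is flat

die_area_22nm = 100.0            # mm^2, hypothetical chip
chip_cost_22nm = die_area_22nm * cost_per_mm2_22nm

# To keep the same chip cost (and hence the same margin at the same price),
# the 16nm die would have to shrink to:
die_area_16nm = chip_cost_22nm / cost_per_mm2_16nm
print(die_area_16nm / die_area_22nm)   # 0.5 -> a ~50% die-size reduction
```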
The industry is in deep doo-doo -- 450mm (infinitely?) delayed, EUV (infinitely?) delayed, and even pure logic transistor cost is not dropping anymore. One can believe Intel's charts if one chooses to, but clearly their life is at stake over this question, so they must either be optimistic or short their own stock. Everyone else's data shows level cost at best, and somewhat rising cost realistically.
But this blog brings the other whammy to the table -- memory shrink ... which barely happens anymore. Which means that complex chip cost below 28nm will go up. Significantly.
No. The sky will not fall. But many companies' stocks will.
It would be nice to see TSMC or Intel showing better cost per transistor. What I have seen out there are density charts, which do not actually translate into reduced transistor cost because of the higher wafer cost. BUT this is well known by now; the real problem is the SoC cost! It would be nice if you actually read the blog before posting comments. Please look again at the SRAM density chart from the recent ISSCC 2014. If the effective density increase from 28nm to 16nm is only about 10%, then the SoC cost will be much higher at 16nm. Unless you can show data that says otherwise, you need to admit that 28nm will be the cheapest node for SoCs for a while. You are also advised to read a previous blog published by EE Times, "Why 450mm Will Be Pushed-Back Even Further" (http://www.eetimes.com/author.asp?section_id=36&doc_id=1321239&), which provides additional support for the problem with embedded SRAM scaling. So regardless of whether you like monolithic 3D or not, 28nm is the last node of Moore's Law.
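A toy area/cost model shows why poor SRAM scaling dominates the SoC cost argument. All the numbers below are assumptions chosen for illustration (a 50/50 logic/SRAM split, ~2x logic density, ~10% SRAM density gain, ~1.5x wafer cost per mm²), not ISSCC or foundry data; the point is only that when half the die barely shrinks, the wafer-cost increase is not recovered.

```python
# Toy SoC cost model (assumed numbers for illustration, not measured data).
# Split a hypothetical 100 mm^2 28nm SoC into logic and embedded SRAM,
# then scale each part to "16nm" with different shrink factors.

logic_area_28nm = 50.0    # mm^2 of logic on the 28nm die (assumed 50/50 split)
sram_area_28nm  = 50.0    # mm^2 of embedded SRAM

logic_shrink = 0.5        # assumed: logic area scales well, ~2x density
sram_shrink  = 0.9        # assumed: SRAM density improves only ~10%

area_16nm = logic_area_28nm * logic_shrink + sram_area_28nm * sram_shrink  # 70 mm^2

wafer_cost_ratio = 1.5    # assumed: 16nm silicon costs ~1.5x more per mm^2

cost_ratio = (area_16nm / 100.0) * wafer_cost_ratio
print(cost_ratio)         # ~1.05 -> the "shrunk" 16nm SoC costs MORE than the 28nm one
```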