One issue with the poor scaling claim is that 16nm is really just a marketing term (at least at TSMC) for a 20nm process with FinFETs instead of planar transistors. There's a small density improvement, but it's not really a true shrink.
Zeno Semiconductor's technology is now being ported to 28nm. As always, these things take longer than initially expected, so it is not yet available for commercial use. It does not require a process change, but it does require the use of one deep implant.
Intel was smart to bring up eDRAM at 22nm for its high-performance chips. With a 6F2-8F2 cell instead of a ~100F2 SRAM cell, they can scale to smaller nodes cost-effectively. Foundries are exploring 3D-stacked DRAM with TSVs to reduce the amount of on-die SRAM and allow scaling. FinFETs also improve SRAM cell size significantly. Between FinFETs and 3D-stacked TSV DRAM, scaling will hopefully continue for mobile - we all want a strong semiconductor industry.
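To put rough numbers on the cell-size comparison above, here is a minimal sketch. It uses the F2 figures quoted in the comment (F = the process's minimum feature size, so cell area scales as F squared); the 22nm node and the ~100F2 SRAM / 8F2 eDRAM values come from the comment, not from exact process data:

```python
# Rough cell-density comparison using the F^2 figures quoted above.
# Cell area = (area factor) * F^2, where F is the minimum feature size.

def cells_per_mm2(feature_nm: float, cell_area_f2: float) -> float:
    """Approximate cell density (cells per mm^2) at a given node."""
    f_mm = feature_nm * 1e-6           # convert nm to mm
    cell_mm2 = cell_area_f2 * f_mm ** 2
    return 1.0 / cell_mm2

node = 22                              # nm, Intel's eDRAM node
sram = cells_per_mm2(node, 100)        # ~100F^2 6T SRAM cell
edram = cells_per_mm2(node, 8)         # 8F^2 eDRAM cell (conservative end)

print(f"eDRAM density advantage: {edram / sram:.1f}x")  # 100/8 = 12.5x
```

Even at the conservative 8F2 end, the eDRAM cell gives roughly an order of magnitude more bits per mm2 than 6T SRAM, which is the whole cost argument.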
As they say, predictions are tough, especially about the future. Yet we do know now what Intel has decided regarding its Fab 42, and there is information out there that ASML has stopped its work on 450mm EUV. You are also welcome to check what Applied Materials' backlog is indicating. In Solid State Technology (Jan 2014) you can find both an article titled "New paradigm adjustments for capacity and equipment spending" and the 2014 outlook from Randhir Thakur, Executive Vice President and General Manager, Silicon Systems Group, Applied Materials, Inc.: "our foundry/logic and memory customers that manufacture semiconductors are migrating from lithography-enabled 2D transistors and 2D NAND to materials-enabled 3D transistors and 3D NAND".
I get the tie-in, but the article is really about shrinks slowing, and the 450mm connection is inferred at best. The title should state the main theme of the article, and 450mm isn't the main theme.
By the way, TSMC has said that based on what they are seeing, the 20nm node will be their fastest ramp ever! All the major foundries are also gearing up for high-volume FinFETs at the 16/14nm node late this year / early next year. There are cost issues at the leading edge, but they haven't slowed the adoption rate yet.
Quoting: "this will dampen even further the transition to advanced nodes such as 20nm or 16/14nm!" - so we now have a chicken-and-egg problem: without enough volume moving to advanced nodes, vendors can't justify the investment in 450mm. And no one will invest to develop and bring up 450mm for "old" nodes like 28nm. So yes, if there is enough volume, vendors might take on the risk and the learning-curve cost of bringing up 450mm so that later they could benefit from 450mm's potential 20%-30% cost reduction. But at this time it seems that more of the capex budget is allocated to older nodes - hence 450mm is being pushed back.
I don't see how the content of the article is related to the title. 450mm offers a 20% to 30% reduction in cost per unit area and would be more important when scaling slows and wafer costs rise. That isn't to say I don't think 450mm is going to be delayed, but the problem in my view is that Intel was the biggest driver and their fabs are relatively empty now.
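As a back-of-the-envelope check on the 450mm economics: usable wafer area scales with the square of the diameter, so a 450mm wafer has 2.25x the area of a 300mm wafer. The sketch below shows how the quoted 20%-30% cost-per-area reduction can fall out of that geometry; the 1.7x wafer-cost multiplier is a hypothetical number for illustration, not a figure from the article:

```python
import math

def wafer_area_mm2(diameter_mm: float) -> float:
    """Gross wafer area for a given diameter (ignoring edge exclusion)."""
    return math.pi * (diameter_mm / 2) ** 2

area_ratio = wafer_area_mm2(450) / wafer_area_mm2(300)
print(f"area ratio: {area_ratio:.2f}x")        # (450/300)^2 = 2.25x

# Hypothetical: if a processed 450mm wafer costs 1.7x a 300mm wafer,
# the cost per unit area drops by 1 - 1.7/2.25, i.e. about 24%,
# which lands inside the 20%-30% range quoted above.
cost_multiplier = 1.7
reduction = 1 - cost_multiplier / area_ratio
print(f"cost/area reduction: {reduction:.0%}")
```

The point is that the benefit depends entirely on how close the per-wafer cost multiplier stays below 2.25x; if tool costs for 450mm run too high, the advantage evaporates.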
The article is really about rising wafer costs and slowing scaling. 450mm would help with the first and is neutral on the second. You make some good points, but they are unrelated to 450mm.
A more appropriate title would be something about wafer cost issues and the slowing of scaling.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.