Peter, I think you have it right. Of course this is all about cost. The profit margins for commodity NAND chips simply are not high enough to justify the costs required to go to a smaller node right now. So the "more than Moore" design optimization was the best/only economical choice. At this point no one is counting on EUV litho coming to the rescue any time soon.
Well, for NAND, quadruple patterning down to 10 nm would not have meant more lithography exposures/masks, but it certainly would have meant more process steps. Hynix showed ~15 nm at IEDM two years ago. It would have made more sense to do this for both the 1Y and 1Z nodes, with both closer to 10 nm, to offset the potential doubling of cost of quadruple versus double patterning. With 1Y still at 19 nm, it doesn't make much sense. It's also possible that getting too close to 10 nm is too big a risk because of source-to-drain tunneling.
I think what we might want to call the 1M generation should be added to the list: M for Multi-Chip Package (MCP), where chip stacking with Through-Silicon Vias (TSVs) is used to achieve the required doubling of transistor/memory-device density, and which is likely to play a significant role at about the 19-20 nm node. In that way (take your pick, MCP or monolithic 3D), a constant chip footprint (area) will meet the prediction of Moore's Law independent of the lithographic node.
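The arithmetic behind that claim can be sketched quickly: at a fixed lithographic node, per-die cell density stays constant, so each added layer in the stack multiplies density per package footprint. The 19 nm node value and the idealized square-cell footprint below are assumptions for illustration only.

```python
# Illustrative sketch (assumed numbers): stacking at a fixed node.
node_nm = 19.0                      # assumed fixed lithographic node, nm
cell_area = node_nm * node_nm       # idealized square cell footprint, nm^2
per_die_density = 1.0 / cell_area   # cells per nm^2 of one die

# Density per package footprint grows linearly with the layer count,
# even though the node (and per-die density) never changes.
for layers in (1, 2, 4, 8):
    stacked_density = layers * per_die_density
    print(f"{layers} layer(s): {stacked_density / per_die_density:.0f}x density")
```

So each doubling of the layer count delivers a Moore's-Law doubling without touching the lithography.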
I think in the NAND market you are getting the first indications of working ReRAM. If vendors are moving to a completely new architecture/technology, how hard are they going to push on the current technology?
Well, the general consensus I have heard from the likes of IMEC, SanDisk, and Intel is that 3D NAND replaces 2D NAND and scales the technology in the vertical direction. Not so much a 1M-nm generation as a 4M-nm generation...
And when you hit a limit in the z direction, due to the difficulty of coating the depth of the high-aspect-ratio vertical channel holes, you return to lateral scaling with ReRAM... And by then a more complete understanding of the physics of these resistive systems may have been achieved.
"Instead SanDisk found a way to improve the memory cell through design — reducing the area by about 25 percent – and without the scaling the geometry." That statement is wrong: the memory-cell bitline pitch scales down from 26 nm to 19.5 nm, while the wordline pitch remains at 19 nm (see the press release: http://www.sandisk.com/about-sandisk/press-room/press-releases/2013/sandisk-advances-its-industry-leading-manufacturing-technology/ ). So the core cell-size reduction is 25%, and the total chip-size reduction is about 20% once the periphery is included, for a 64Gb 2-bit-per-cell chip.
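The 25% figure follows directly from the pitches quoted in the press release cited above: only the bitline pitch shrinks (26 nm to 19.5 nm, a 0.75x linear factor), so the cell area scales by the same factor. A quick check of the arithmetic:

```python
# Cell-area arithmetic from the pitches in SanDisk's 1Y announcement.
bl_old, bl_new = 26.0, 19.5   # bitline pitch, nm
wl_old, wl_new = 19.0, 19.0   # wordline pitch, nm (unchanged)

area_old = bl_old * wl_old    # old core cell area, nm^2
area_new = bl_new * wl_new    # new core cell area, nm^2
reduction = 1 - area_new / area_old
print(f"core cell area reduction: {reduction:.0%}")  # → 25%
```

The ~20% chip-size reduction is smaller because the peripheral circuitry does not shrink with the cell array.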
I'm not arguing about how long Moore's Law will hold in the future. 1Y could be considered half a node from 1X. Considering 1X just came out last year, the scaling trend is still pretty impressive. You can also consider 2X to 1Y a full generation node, and that took SanDisk/Toshiba less than three years to develop.
Why in the world would you call Moore's Law a self-fulfilling prophecy? It is no such thing; rather, it is a historical trend in how fast people manage to build new technologies, one that has had surprisingly significant predictive power. The first time physics and fabrication technology (perhaps) come into it is when people can't keep up the rate of innovation.