To me 10-12 years seems like a very short time to get a new technology in place when the profit margins on existing technologies will be under continued pressure. Where is the $ going to come from to fund significant new technology development, manufacturing plant and other infrastructure expenses... Wowzers!
No doubt a new technology would initially carry a cost premium, but mature technologies have had years of cost-reduction methods applied to them, and there is no reason not to build cost reduction into new technology development as well.
Treating 3D NAND as a "new" technology, when can it get its cost appreciably down? Or will it be accepted anyway after 1Z, even if it is more expensive?
It seems that NAND Flash has many more years of roadmap left. 3D NAND started mass production on a relatively old 40 nm process node with 24 layers. It could scale up to over 128 layers and down to 14/16 nm over time. These represent too many nodes, and accordingly too many years, to make any reasonable prediction at this time.
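To put some rough numbers on that headroom: a back-of-envelope sketch, assuming bit density scales with the inverse square of the lateral node and linearly with layer count (both simplifications; real density depends on cell design, string architecture, and bits per cell):

```python
# Rough bit-density headroom estimate for 3D NAND (illustrative only).
# Assumes density scales as 1/node^2 laterally and linearly with layers.

start_node_nm, start_layers = 40, 24   # early 3D NAND mass production
end_node_nm, end_layers = 15, 128      # hypothetical endpoint (~14/16 nm, 128+ layers)

lateral_gain = (start_node_nm / end_node_nm) ** 2  # lateral shrink contribution
vertical_gain = end_layers / start_layers          # layer-stacking contribution

print(f"lateral gain:      {lateral_gain:.1f}x")
print(f"vertical gain:     {vertical_gain:.1f}x")
print(f"combined headroom: {lateral_gain * vertical_gain:.0f}x")
```

Even with those crude assumptions, the combined headroom works out to tens of times today's density, which is why the roadmap spans so many nodes and years.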
Since they are stacking ONO layers laterally in 3D NAND instead of vertically on floating gate, the lateral scaling of 3D NAND is probably already at/near its limit, but certainly the number of vertical layers is supposed to increase as much as possible. But eventually, the vertical channel gets too long.
I think when the terms "new memory technology" or "alternative memory" are bandied around they should be qualified as silicon-dependent memory (SDM) or silicon-independent memory (SIM), with a further sub-qualification for SDM of monolithic or multi-chip packaging (MCP).
A silicon-independent memory will require more than just the emergence of a new memory technology; it would be something akin to the past transition when solid state replaced the vacuum tube. Although some sort of optical coupling, even optical processing, might be able to remove some of the silicon interface workload in MCP, it is difficult to see complete silicon independence for any new or emerging memory technology for a long time ahead.
Because of the existing investment in fabrication equipment, interface knowledge, designs and reliability data, SDM would appear to be the easiest and most likely route, but so far it has not provided an obvious near-term solution, e.g. PCM, CBRAM, ReRAM etc. in more than 10-12 years. It looks as though STT-MRAM might now be moving centre stage, with correlated-electron devices (Symetrix and 4DS Inc) offering an interesting latest addition to the list of possibles.
I think the best analogy is to consider Flash memory in the electronics industry as the internal combustion engine of the automotive industry. Along the way there have been a few variations (NAND and NOR for the former, gasoline and diesel for the latter), but each still requires silicon or four wheels, respectively, to be useful.
You're right on all of your points but one, and that's the manufacturing plant issue. It should only take about 2 years to build a manufacturing facility for a technology if you have the process all worked out. That could be done with technologies that are now in low-volume production: FRAM, PCM, and MRAM.
These technologies can't get cheaper than NAND flash, though, until they move ahead of NAND in process migration, and they aren't even close to that at the moment.
You hit on the most important point in memory: Nobody wants to pay more for superior performance unless there is absolutely no alternative. That's why the NOR and SRAM markets have shrunk. 3D will not replace 1Z until it's cheaper than 1Z. That could take a while.
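A toy model helps show why "3D wins only when it's cheaper" can take a while. This sketch assumes a simple Wright's-law learning curve, where cost per bit falls to a fixed fraction of its previous value with each doubling of cumulative volume; every number here (initial costs, volumes, the 70% learning rate) is hypothetical:

```python
# Toy learning-curve (Wright's law) model of cost crossover between a
# mature technology and a newer, initially more expensive one.
# All numbers are hypothetical, for illustration only.
import math

def cost_per_bit(initial_cost, cumulative_volume, learning_rate=0.7):
    """Cost per bit after growing cumulative volume from 1 unit.
    learning_rate=0.7 means cost drops to 70% with each volume doubling."""
    doublings = math.log2(cumulative_volume)
    return initial_cost * learning_rate ** doublings

# Mature planar (1Z-class) NAND: cheaper start, huge cumulative volume.
planar = cost_per_bit(initial_cost=100.0, cumulative_volume=2**20)
# 3D NAND: higher starting cost per bit, far less cumulative volume.
threed = cost_per_bit(initial_cost=300.0, cumulative_volume=2**10)

print(f"planar cost/bit: {planar:.3f}")
print(f"3D cost/bit:     {threed:.3f}")

# How much cumulative volume does 3D need before it undercuts planar?
v = 2**10
while cost_per_bit(300.0, v) > planar:
    v *= 2
print(f"3D undercuts planar after ~2^{int(math.log2(v))} cumulative units")
```

Under these made-up parameters, the newer technology needs many more volume doublings before it undercuts the incumbent, which is the dynamic behind "that could take a while."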