I'm very familiar with the various countermeasures for dealing with raw NAND flash limitations. My preference as a systems architect is to use integrated systems from suppliers who are major players in the IP arena for these algorithms; only a handful of companies control the vast bulk of that IP, and most have cross-licensing partnerships with the others. Regardless, none of these are bullet-proof, and each innovation in NAND flash density requires another layer or two of protection. Although the details of this latest die-shrink are not disclosed, I would imagine it entails both a geometry shrink AND level-splitting the MLC structure. That combination will require a major increase in controller complexity to maintain the same level of data and device reliability. IMO, even the present level of that reliability is marginal for highly sensitive applications (think medical devices, secure servers, etc.). Too many people view this technology as the "magic bullet" that side-steps all the limitations of electro-mechanical storage (HDDs), not realizing that even SSDs have to be used in redundant schemes (e.g. RAID or equivalent) to get the level of system availability needed.
I think many of us are aware that NAND flash chips "wear out" over many read/write cycles, so companies employ various software algorithms to try to mitigate the physics of NAND device breakdown. In some cases that means "bad" cells are excluded and no longer used, hence the usable memory capacity shrinks.
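To make the idea concrete, here is a deliberately minimal sketch of bad-block retirement with naive wear leveling. Everything here (the `ToyFlashController` class, the block counts, the endurance threshold) is invented for illustration; a real flash controller's firmware is vastly more sophisticated, but the effect on usable capacity is the same.

```python
# Toy model of bad-block exclusion: once a block exceeds its rated
# erase count, the controller retires it and usable capacity drops.
# All names and numbers are illustrative, not any vendor's firmware.

class ToyFlashController:
    def __init__(self, num_blocks, rated_pe_cycles):
        self.rated_pe_cycles = rated_pe_cycles
        self.erase_counts = [0] * num_blocks
        self.bad_blocks = set()

    def erase(self, block):
        """Erase a block; retire it once it reaches rated endurance."""
        if block in self.bad_blocks:
            raise ValueError("block already retired")
        self.erase_counts[block] += 1
        if self.erase_counts[block] >= self.rated_pe_cycles:
            self.bad_blocks.add(block)   # exclude from future use

    def pick_block(self):
        """Naive wear leveling: pick the least-worn good block."""
        good = [b for b in range(len(self.erase_counts))
                if b not in self.bad_blocks]
        if not good:
            raise RuntimeError("device worn out")
        return min(good, key=lambda b: self.erase_counts[b])

    def usable_blocks(self):
        return len(self.erase_counts) - len(self.bad_blocks)


ctl = ToyFlashController(num_blocks=4, rated_pe_cycles=3)
for _ in range(10):              # erases spread evenly across blocks
    ctl.erase(ctl.pick_block())
print(ctl.usable_blocks())       # → 2 (two blocks have been retired)
```

The point of the sketch: wear leveling spreads erases so no single block dies early, but once blocks do hit their endurance limit they are mapped out, and the device quietly shrinks.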
I'm curious as to what other performance parameters are impacted by this. The specifics important to me relate to data reliability and retention. How many R/E/W cycles at the individual cell level? Operating temperature range? Noise margins? The list goes on... I know the transition to MLC required substantial improvements in the SW/controller algorithms to deal with these. I suspect far too many users (and even design engineers) aren't aware of these limitations and their consequences (e.g. even USB memory sticks wear out eventually and shrink in capacity during their service life, and the same is true of SSDs).
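For anyone who hasn't done the arithmetic, here is a back-of-envelope endurance estimate showing why P/E cycle ratings matter. The 3,000-cycle figure, the write amplification factor, and the daily write volume are all illustrative assumptions, not any specific device's datasheet values.

```python
# Back-of-envelope SSD endurance estimate.
# All numbers are illustrative assumptions, not datasheet values.
capacity_gb = 256          # drive capacity in GB
pe_cycles = 3000           # assumed rated program/erase cycles per cell
write_amplification = 2.0  # assumed controller writes per host write

# Total host data the drive can absorb before cells reach rated wear,
# in terabytes written (TBW):
tbw = capacity_gb * pe_cycles / write_amplification / 1000
print(f"{tbw:.0f} TBW")        # → 384 TBW

# Lifetime at an assumed 20 GB of host writes per day:
years = (tbw * 1000) / 20 / 365
print(f"~{years:.0f} years")   # → ~53 years
```

Under light desktop workloads the numbers look comfortable, but halve the P/E rating (as a die-shrink or an extra bit per cell tends to do), raise the write amplification, and move to a write-heavy server workload, and the margin evaporates quickly.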
>> Interested whether it also represents a lead over Samsung and Hynix.
Samsung does not seem to lead in any of these innovations, yet they find ways to catch up and reshape an industry. The move to a smaller feature size is excellent, but over time that alone will not be a major advantage. Whoever figures that out will be the long-term leader.
The introduction of a flattened planar memory cell at 20-nm by Intel-Micron seems to be standing the companies in good stead for this move to 16-nm. Interested whether it also represents a lead over Samsung and Hynix.
These companies usually tend to compete hard, even with engineering announcements.
Samsung generally does not move first in some of these new areas. They wait for someone else to do all the hard work, then use their cash pile to jump in and take over the market with volume and aggressive pricing. Micron is a great company, but the NAND business may be very challenging right now.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.