Glen Hawk emphasized that this part is the smallest-die-size 128Gb NAND available far more than he emphasized the 16nm process technology itself. But I agree that, given the surprising move by SanDisk/Toshiba to stay at 19nm, Micron's migration down to 16nm is impressive and differentiating.
Samsung generally does not move first in some of these new areas. They wait for someone else to do all the hard work and then, with their cash pile, jump in and take over the market with volume and aggressive pricing. Micron is a great company, but the NAND business may be very challenging now.
Is the innovation in feature size or in crafting better circuit architectures? 16nm helps pack more memory into a small space, which is good, but sooner or later we will have to redesign the foundation of the transistor to make real progress. After 16nm, maybe 11nm, and then quantum mechanical effects cut us off from going any lower.
The introduction of a flattened planar memory cell at 20nm by Intel-Micron seems to be standing the companies in good stead for this move to 16nm. I'm interested in whether it also represents a lead over Samsung and Hynix.
These companies tend to compete hard even with engineering announcements.
>> I'm interested in whether it also represents a lead over Samsung and Hynix.
Samsung does not seem to lead in any of these innovations. Yet they find ways to catch up and reshape any industry they enter. The move to a smaller feature size is excellent, but over time that will not be a major advantage. Whoever figures that out will be the long-term leader.
Thanks guys for making me feel old! It seems like not that long ago that I was in the Boise Micron plant watching the boats of wafers float overhead and hearing the engineers brag that they were "sub micron".
But I am that old -- I can remember us repairing mask or design defects so we could test by "smushing" traces together or apart with a probe needle viewed through an optical microscope.
I'm curious what other performance parameters are impacted by this. The specifics important to me relate to data reliability and retention. How many R/E/W cycles at the individual cell level? Operating temperature range? Noise margins? The list goes on.... I know the transition to MLC required substantial improvements in the SW/controller algorithms to deal with these. I suspect far too many users (and even design engineers) aren't aware of these limitations, and the consequences (e.g. even USB memory sticks wear out eventually and shrink in capacity during their service life, and the same is true of SSDs).
I think that many of us are aware that NAND flash chips "wear out" over many read/write cycles. And so companies employ various software algorithms to try to mitigate the physics of NAND device breakdown. In some cases that means "bad" cells are excluded and not used, so the usable memory capacity drops.
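To make that concrete, here's a toy wear-leveling/bad-block sketch in Python. It's purely illustrative (the class, the 3,000-cycle endurance figure, and the block counts are made up for the example), not how any real controller is implemented:

```python
# Toy flash translation layer (FTL) sketch: track per-block erase counts,
# spread erases across the least-worn blocks, and retire blocks that hit an
# assumed endurance limit. Real controllers are far more sophisticated.

ENDURANCE_LIMIT = 3000  # assumed P/E cycle rating; real parts vary widely


class ToyFTL:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.retired = set()  # "bad" blocks taken out of service

    def pick_block(self):
        # Naive wear leveling: erase the least-worn block still in service.
        candidates = [b for b in range(len(self.erase_counts))
                      if b not in self.retired]
        return min(candidates, key=lambda b: self.erase_counts[b])

    def erase(self, block):
        self.erase_counts[block] += 1
        if self.erase_counts[block] >= ENDURANCE_LIMIT:
            self.retired.add(block)  # usable capacity shrinks by one block

    def usable_blocks(self):
        return len(self.erase_counts) - len(self.retired)


# Hammering one hot block kills it after only ENDURANCE_LIMIT erases...
hot = ToyFTL(num_blocks=64)
for _ in range(3000):
    hot.erase(0)
print(hot.usable_blocks())  # 63: one block retired

# ...while the same number of leveled erases barely wears anything.
leveled = ToyFTL(num_blocks=64)
for _ in range(3000):
    leveled.erase(leveled.pick_block())
print(leveled.usable_blocks())  # 64: wear spread across all blocks
```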
I'm very familiar with the various countermeasures for dealing with the raw NAND flash limitations. My preference as a systems architect is using integrated systems from suppliers who are major players in the IP arena of these algorithms; there are only a handful of companies that control the vast bulk of that IP, and most have partnerships with the others cross-licensing the IP. Regardless, none of these are bullet-proof, and each innovation in NAND flash density requires another layer or two of protection.

Although the details of this latest die-shrink are not disclosed, I would imagine that it entails both geometry shrink AND level-splitting the MLC structure. That combination will require a major increase in the controller complexity to maintain the same level of data and device reliability.

IMO, even the present level of that reliability is marginal for highly-sensitive applications (think medical devices, secure servers, etc.). Too many people view this technology as the "magic bullet" that side-steps all the limitations of electro-mechanical (HDD), not realizing that even these have to be used in redundant schemes (e.g. RAID or equivalent) to get the level of system availability needed.
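To put a rough number on that last point, here's a quick back-of-envelope calculation (Python; the 2% annual failure rate and one-day rebuild window are assumptions for illustration only) showing why mirroring still earns its keep even with nominally reliable devices:

```python
# Order-of-magnitude sketch of the redundancy argument above. The failure-rate
# and rebuild-time figures are assumptions for illustration, not measured data.

afr = 0.02              # assumed annual failure rate of a single SSD
rebuild_days = 1.0      # assumed time to rebuild onto a replacement

# Single device: annual data-loss probability is just its failure rate.
p_loss_single = afr

# Mirrored pair (RAID-1 style): data is lost only if the second device also
# fails during the rebuild window; failures assumed independent.
p_second_fails_during_rebuild = afr * (rebuild_days / 365.0)
p_loss_mirrored = afr * p_second_fails_during_rebuild

print(f"single SSD, annual data-loss probability:    {p_loss_single:.4f}")
print(f"mirrored pair, annual data-loss probability: {p_loss_mirrored:.2e}")
```

Crude as it is, the sketch shows the point: redundancy buys several orders of magnitude in availability that no amount of per-device reliability engineering replaces.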