All that talk about NAND becoming too small to work is about to change; 2D NAND cells have shrunk to the point where one can speak of counting the individual electrons they contain, with the resulting problems of slow programming, short life, and high error rates. (Source: Omar Barcena and John Picken on Flickr)
Really enjoyed the blend of humor and technical content in this one. Does anyone have any lessons to share on this:
"Choosing the wrong technology can eliminate the many advantages of flash. This is frequently seen when multiple SSDs are put in a standard disk drive shelf, where the SAS link to the server is filled by the input/output per second (IOPs) of two or three SSDs, so 90% of the potential performance in the shelf is never available."
In a NAND functional tester, I implemented an algorithm that detects, during Read/Verify, the actual ECC correction strength required per block and per device.
Repeatedly applying Erase/Program/Read/Verify cycles to the same block characterized its endurance (measured in the number of Erase/Program cycles) by tracking how the ECC requirement detected during Read/Verify evolved.
Recently a customer ran repeated Erase/Program/Read/Verify cycles on the same block of an ONFI 2.2 chip with an ECC unit of 1117 bytes and a spec-required minimum ECC of 40 bits. On the first cycle the tester detected an ECC requirement of 6 bits for that block; after 3K cycles it was 17, and after 5K cycles it was 29.
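For readers who want the shape of that test, here is a minimal, self-contained sketch of the Erase/Program/Read/Verify loop in C. The tester_* functions are hypothetical stand-ins for real tester hardware calls, stubbed here with a crude wear model loosely fitted to the 6/17/29 progression above:

    /* Sketch of the endurance-characterization loop described above.
     * The tester_* functions are hypothetical placeholders for tester
     * hardware calls, stubbed so the sketch runs standalone. */
    #include <stdio.h>

    #define ECC_LIMIT_BITS 40    /* spec minimum ECC per 1117-byte unit */
    #define MAX_CYCLES     8000

    static void tester_erase_block(int block)   { (void)block; }
    static void tester_program_block(int block) { (void)block; }

    /* Crude wear model: detected bit errors grow roughly linearly with
     * cycling, loosely matching the 6 -> 17 -> 29 progression reported. */
    static int tester_read_verify_bit_errors(int block, int cycle)
    {
        (void)block;
        return 6 + (cycle * 23) / 5000;
    }

    int main(void)
    {
        const int block = 0;

        for (int cycle = 1; cycle <= MAX_CYCLES; cycle++) {
            tester_erase_block(block);
            tester_program_block(block);

            int errs = tester_read_verify_bit_errors(block, cycle);

            if (cycle == 1 || cycle % 1000 == 0)
                printf("cycle %5d: %2d bit errors per ECC unit\n", cycle, errs);

            /* The block's endurance is the cycle count at which the
             * detected errors reach the ECC correction limit. */
            if (errs >= ECC_LIMIT_BITS) {
                printf("block %d reached the %d-bit ECC limit at cycle %d\n",
                       block, ECC_LIMIT_BITS, cycle);
                break;
            }
        }
        return 0;
    }

With this toy wear model the block reaches the 40-bit limit somewhere past 7K cycles; on real hardware the endurance is simply the cycle count at which the detected error count crosses the spec's correction limit.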