I know this sounds like sacrilege so early in the frenzied adoption of SSDs, but this format is a clumsy kludge driven by the industry's fixation on the fact that flash is persistent -- its data stays valid after the power has been removed. Chip-heads call this nonvolatility; to systems folks it's persistence.
Because it's nonvolatile, everyone assumes it must be treated as persistent storage and managed as a disk. Over the past five-plus decades, a considerable amount of effort has gone into managing storage, especially in multi-server virtualized environments where different servers share access to the same storage. Coherency is the issue here: you have to make sure the most current version of the data resides in a preordained location. Anything else produces inconsistencies that can cause irreversible data corruption.
If you're managing flash as storage, then convenience dictates that it be placed behind a disk interface and housed in a 2.5" box that can be swapped out easily from the front of the cabinet. In other words, it has to be exactly like an HDD, only faster. Because of this, everyone compares the price of SSDs to that of HDDs.
This thinking is all wrong. In reality, flash is memory. Compared to DRAM, flash has pretty horrible performance, but it costs about 1/20th as much. That makes it a reasonable memory layer between DRAM and an HDD, even if you don't take advantage of its persistence. In fact, ignoring flash's persistence brings compelling benefits. If you don't trust the flash to retain your data when the power is removed, then you will treat it like DRAM, and DRAM data management algorithms are well proven. There are no coherency questions, since DRAM data is not considered persistent until it's written to shared storage.
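The idea of treating flash as a volatile tier between DRAM and disk can be sketched as a simple two-level read cache with write-through to shared storage. Everything here -- the `TieredCache` class, its tier sizes, the plain dict standing in for a disk -- is illustrative, not any vendor's actual design:

```python
from collections import OrderedDict

class TieredCache:
    """Hypothetical sketch of the hierarchy described above: a small,
    fast DRAM tier backed by a larger flash tier, with shared storage
    as the only durable layer. Flash is managed exactly like DRAM --
    a volatile cache never trusted across power loss."""

    def __init__(self, dram_slots, flash_slots, storage):
        self.dram = OrderedDict()   # hot tier: smallest, fastest
        self.flash = OrderedDict()  # warm tier: larger, slower, ~1/20th the cost
        self.dram_slots = dram_slots
        self.flash_slots = flash_slots
        self.storage = storage      # durable tier: sole source of truth

    def read(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)   # refresh LRU position
            return self.dram[key]
        if key in self.flash:
            value = self.flash.pop(key)  # hit in flash: promote to DRAM
        else:
            value = self.storage[key]    # miss everywhere: fetch from disk
        self._install(key, value)
        return value

    def write(self, key, value):
        # Write-through: data isn't durable until it reaches storage,
        # so power loss can never leave flash holding the only copy.
        self.storage[key] = value
        self._install(key, value)

    def _install(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        while len(self.dram) > self.dram_slots:
            cold_key, cold_val = self.dram.popitem(last=False)
            self.flash[cold_key] = cold_val    # demote coldest DRAM entry
            while len(self.flash) > self.flash_slots:
                self.flash.popitem(last=False)  # evict; storage still has it
```

Because every write goes through to storage, the flash tier can be discarded at any moment -- exactly the "don't trust it to persist" discipline the paragraph above describes.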
Over time the industry’s attitudes about flash will change. Processor boards will have two kinds of memory buses, one kind for DRAM and the other for flash, and the flash bus will use a native flash interface. The management of these two technologies will be performed by the file system and the core logic chipset. Computer users who want to improve their systems' performance will now face more choices: Upgrade the DRAM? Upgrade the flash? Upgrade something else?
It is likely to take some years for such a change to occur, despite the fact that Intel nearly introduced this approach in early 2010 before changing direction, and IBM and SanDisk are now providing NAND flash in DDR3-compatible DIMMs. I estimate that NAND flash won’t be supported by major processor chipsets for at least another three years. But that’s a guess, since Intel already has the technology and could very well choose at the last minute to bolt it onto a chipset to be introduced as early as this year.
As DRAM interfaces migrate to more exotic technologies, it is very likely that computers will ship with non-upgradeable DRAM, and upgrades will be limited to adding flash to the motherboard. In this way, DRAM will start to resemble one of the processor's caches.
This makes perfect sense. In 2011, Objective Analysis published a report (How PC NAND Will Undermine DRAM) that found, through nearly 300 benchmarks, that a dollar's worth of flash yielded a bigger performance boost than a dollar's worth of DRAM once some minimum DRAM requirement was met. This minimum was actually relatively low -- between 1 and 2 GB, depending on the benchmark. Many data-center system administrators already realize this and add SSDs to their systems so they can scale back their DRAM requirements.
Meanwhile, shared storage and HDDs will remain with us, with HDDs offering the most affordable storage for our ballooning data requirements, and shared storage providing the coherent data repository that virtualized systems require.
Next week I’ll look at DRAM interfaces.