With advances in NAND flash technology, solid-state drives (SSDs) have found their way into a variety of enterprise and data center applications. It is important to recognize that not all SSDs are designed the same. Some devices are made for laptops, others for industrial use, still others for consumer desktops owned by gamers or enthusiasts, and others for enterprise data centers.
The challenge is to recognize that a product designed for one use or environment is not necessarily something you should use in a different environment. For example, it is not advisable to use an SSD designed for laptop computers in a data center for mission-critical applications. Why? Because laptop designs are tuned for the low power requirements of a battery-driven environment with low duty cycles, and the drives will be powered up and down regularly or shut off when not in use. Enterprise environments, on the other hand, expect drives to be powered on and moving data 24 hours a day, all year, for 5 or more years, so an SSD that can survive that demanding workload is required. The opposite is equally true: an SSD designed for the enterprise may not be the best choice for a laptop.
In this article, we look at SSD devices intended for demanding enterprise environments, focusing on the attributes, features and requirements needed to support data center access and reliability of user data. When selecting an SSD for these applications, IT professionals have a variety of choices, ranging from the type of NAND flash memory used in the drive to how intelligent the built-in SSD controller is.
The types of NAND flash memory used in SSDs include single-level cell (SLC), multi-level cell (MLC) and enhanced multi-level cell (eMLC). While SLC flash offers high endurance and fast write access times, MLC flash stores two bits per cell, offering twice the density of SLC flash at a more cost-effective price point. eMLC flash was designed to pair the higher densities of MLC with increased endurance; however, eMLC achieves this at the expense of doubling access time, making it unsuitable for true enterprise-class applications.
Table 1: A quick comparison of the NAND flash memory types.
The cost-effectiveness of MLC-based SSDs (enabling more functionality in the same footprint as SLC-based SSDs) has increased their acceptance and adoption in the data center. In its raw state, however, MLC flash technology has reliability and endurance challenges, especially for write-intensive and mission-critical applications. Flash memory wears out as its individual cells are written over time, and with the array of write-intensive and mission-critical applications in the enterprise (e-commerce, online transaction processing, social networking, cloud computing, etc.), MLC-based SSDs will wear out faster than SLC-based drives, increasing a data center’s total cost of ownership (TCO) unless the right tools and technologies are implemented.
By understanding how MLC NAND flash wears and identifying the different technology tools available for optimizing SSD performance and endurance, decision makers can select an MLC-based SSD that will accelerate data access in the most cost-effective and reliable manner.
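The endurance trade-off described above can be made concrete with a back-of-the-envelope lifetime calculation. The sketch below is illustrative only: the P/E cycle ratings, write-amplification factor and workload figures are assumed ballpark numbers, not specifications for any particular drive.

```python
# Rough SSD lifetime estimate from NAND endurance figures.
# All constants below are illustrative assumptions, not vendor specs.

def ssd_lifetime_years(capacity_gb, pe_cycles, write_amplification,
                       host_writes_gb_per_day):
    """Estimate drive lifetime in years.

    Total NAND writes the device can absorb = capacity * P/E cycles.
    Host data is multiplied by write amplification before it reaches
    the flash, so effective daily NAND writes exceed the host workload.
    """
    total_nand_writes_gb = capacity_gb * pe_cycles
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_nand_writes_gb / nand_writes_per_day / 365

# Assumed figures: SLC is often quoted around 100,000 P/E cycles,
# raw MLC in the low thousands; same 200 GB drive, same workload.
slc_years = ssd_lifetime_years(200, 100_000, 1.5, 2_000)
mlc_years = ssd_lifetime_years(200, 5_000, 1.5, 2_000)
print(f"SLC: {slc_years:.1f} years, MLC: {mlc_years:.1f} years")
```

Under these assumed numbers, the raw MLC drive wears out in under a year while the SLC drive lasts well past a typical 5-year deployment, which is why the wear-management tools discussed here matter so much for MLC.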
Where did this figure of 30 full-device writes per day come from? I'm sure there's a market for that, but it has to be fairly small. Obviously, most storage and computation is more consumer-like, with read-mostly loads and often much sparser duty cycles than 24x7. It's easy to find very cheap SSDs today that peak at 500 MB/s and 80k IOPS and still offer 3-5 year warranties. Commodity storage is cheap enough to simply use above-device redundancy to solve issues of reliability and permanence.
STEC's pitch seems to be pretty intensive engineering at the device level. Laudable, but do people buy these inherently more expensive (and apparently slower) devices and trust them without any above-device redundancy (RAID, etc.)?
Agreed about FRAM: love the idea, but I doubt density will reach levels high enough for use in computers as storage; maybe BIOS/UEFI.
I like the materials from STEC, but I'm not able to find any products identified as containing their technology, even searching all their links. Looks like vaporware to me.
This technology illustrates the pressure to create workarounds for the recent Moore's Law crunch that means smaller geometries are not appearing fast enough to meet demand.
Previous EDC/ECC and other flash-'nursing' initiatives failed because bigger chips appeared that allowed the protection to be implemented at a higher level, in software. Hardware was only necessary for custom high-integrity applications.
STEC have a window of opportunity to make MLC work for a wider range of applications before a memory breakthrough pushes the density up again cheaply enough to compete. But is that breakthrough in sight? I personally love FRAM but can it be made dense enough? I think not.
Production will also only be available when a big fab becomes surplus to DRAM or flash requirements. No one will build a fab for FRAM speculatively, I think.
Perhaps a slowdown will create spare fab capacity?