As organizations increasingly adopt data-intensive applications—data analytics, transactional databases, geospatial applications, virtual environments, cloud computing, and more—the amount of data that must be stored is exploding. Seagate estimates that the compound annual growth rate (CAGR) for digital data stored is 50%.
The use of these applications to manage vast amounts of data quickly and gain a competitive advantage is driving demand for high-performance solid state drive (SSD) technology in the data center. According to the Gartner Group, the enterprise SSD market grew from approximately 320,000 units and US$485 million in sales in 2009 to more than 1 million units and over US$1 billion in sales in 2010. Gartner forecasts that the enterprise SSD market will mushroom to 9.4 million units and US$4.2 billion in sales by 2015.
SSDs deliver the highest-performance storage in boot speed, random IOPS and other measures critical to the enterprise. For example, while the average enterprise-class 15K-RPM hard disk drive (HDD) achieves 350 to 400 random read/write IOPS, the average enterprise-class SSD can push 50,000 IOPS for random reads and 15,000 IOPS for random writes.
Yet the remarkable performance of SSDs has overshadowed their apparent shortcomings in the area of endurance. Unlike magnetic HDDs, which can undergo an essentially unlimited number of write/erase cycles, flash memory cells can be erased only a finite number of times. With use, flash cells wear out—and when they do, the drive can become unreliable.
Client Versus Enterprise SSD Endurance Requirements

SSDs were first used in consumer devices, including cameras and MP3 players, and later made their way into laptop computers. While data loss is a concern for laptop and other PC users, it has far greater implications for SSDs as they move into enterprise data centers that manage an organization's most important business information. In the enterprise, unreliable storage can undermine customer satisfaction—and, by extension, revenue generation through lost business—and even interrupt critical business processes.
Like HDDs, SSDs must be developed to meet the differing needs of client and enterprise workloads. Client SSDs, typically used in laptop systems serving one user no more than 8 to 10 hours per day, face far fewer rigors than data center SSDs, which operate 24x7 and must service random data patterns, highly complex reads/writes, and applications that are far more write-intensive than any client application. What's more, enterprise SSDs must operate at much higher temperatures than client SSDs and meet a more stringent (lower) Uncorrectable Bit Error Rate (UBER) requirement to ensure round-the-clock data integrity.
SSD devices from different vendors vary considerably in their ability to meet these demands. Until recently, it has been difficult to test vendor claims of SSD endurance in enterprise applications. In September 2010, the JEDEC Solid State Technology Association published two sets of standards for SSD endurance and reliability. JESD218A defines endurance verification requirements for both client and enterprise SSDs. JESD219 defines workload endurance requirements for enterprise SSDs only. These standards specify requirements for each application class, describe a test methodology, and create an SSD Endurance Rating that provides a standard comparison for SSD endurance based on application class, as shown in Table 1. The endurance rating is expressed as a Terabytes Written (TBW) value describing how much data can be written to a device over its lifetime. These standards make it easier to compare client and enterprise products.
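A TBW rating can be turned into a rough lifetime estimate once you know the daily write volume. The sketch below is a back-of-envelope illustration only—the rating, daily write volume, and write amplification factor are hypothetical numbers chosen for the example, not values drawn from JESD218A or any vendor datasheet:

```python
# Back-of-envelope lifetime estimate from a TBW endurance rating.
# All numeric inputs below are illustrative assumptions.

def lifetime_years(tbw_rating_tb, daily_writes_gb, write_amplification=1.0):
    """Estimate drive lifetime in years from its TBW rating.

    tbw_rating_tb: vendor-stated Terabytes Written rating
    daily_writes_gb: host writes per day, in GB
    write_amplification: extra NAND writes per host write (>= 1.0)
    """
    nand_writes_per_day_tb = daily_writes_gb / 1000 * write_amplification
    return tbw_rating_tb / nand_writes_per_day_tb / 365

# A drive rated for 3,000 TBW absorbing 500 GB/day of host writes,
# with an assumed write amplification factor of 2, wears 1 TB of
# NAND per day:
years = lifetime_years(3000, 500, write_amplification=2.0)
print(round(years, 1))  # → 8.2
```

Note how write amplification—internal garbage collection and wear leveling writing more to the NAND than the host requested—directly shortens the estimate, which is one reason identical TBW ratings can yield very different real-world lifetimes across workloads.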
(Source: JESD218A, Copyright JEDEC. Reproduced with permission from JEDEC.)
We agree with the author's assessment though we think in addition to controller technology, there are other innovations enabling MLC chips to be more attractive to the enterprise market.
Kaminario uses MLC flash in its K2 all solid-state storage devices. One of the ways we extend MLC life is through our Scale-out Performance Storage Architecture (SPEAR). SPEAR enables dynamic load balancing and automated data distribution across multiple DataNodes, reducing wear. SPEAR also offers intelligent parallel I/O processing: I/O requests are distributed across the K2 storage cluster, parallelizing all reads and writes to increase performance and prevent hot spots.
So yes, Ms. Worth is correct that there are now viable MLC flash SSD options for enterprises. To us, it is architecture and software that are helping to drive this change.
A memory cell doesn't just count to one million cycles and die. You don't know when failure is going to happen; cell lifetimes follow a Weibull distribution. A few cells fail quite early, although those errors would be corrected out.
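The scatter the commenter describes can be sketched with a Weibull model. The shape and scale parameters below are arbitrary illustration values, not measured NAND data—the point is only that a shape parameter above 1 gives a rising (wear-out) failure rate with a small early-failure tail:

```python
import random

# Illustrative sketch of cell wear-out scatter: program/erase cycles to
# failure drawn from a Weibull distribution. The shape (beta) and scale
# (eta, a characteristic life in P/E cycles) are made-up illustration
# values, not measured NAND parameters.
random.seed(42)
beta, eta = 3.0, 100_000   # shape > 1 models wear-out behavior
cycles_to_failure = [random.weibullvariate(eta, beta) for _ in range(10_000)]

# Fraction of cells that fail well before the characteristic life —
# the "few would fail quite early" tail the comment mentions.
early = sum(c < 50_000 for c in cycles_to_failure) / len(cycles_to_failure)
print(f"fraction failing before 50k cycles: {early:.3f}")
```

The analytic Weibull CDF puts that early fraction at 1 − exp(−(0.5)³) ≈ 0.118, which the simulated sample approximates; drives hide this tail from the user by correcting and retiring the early failures, as the comment notes.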
I am wondering if the drives have a mechanism to warn the user/system that they are running out of life? Or does the drive start getting flaky and that is the only indicator? I really like the robust nature (no more lost drives due to dropping!). The cost is coming down and the performance is very much worth it.