Similar to traditional hard disk drives (HDDs), solid state drives (SSDs) are manufactured today using numerous technologies and specifications that enable different degrees of performance, reliability, and endurance. While HDDs use standard methods for determining overall reliability ratings based on mean time between failure (MTBF) or annualized failure rates (AFR), SSDs are just now establishing a standardization process. Testing with a validated format is critical for any OEM provider of enterprise servers or client-based systems that will integrate SSDs.
The JEDEC Solid State Technology Association, working together with industry leaders, has established a set of standards for client and enterprise SSDs. These requirements and test methods are found within the JEDEC JESD218 and JESD219 standards developed by the JC-64.8 Subcommittee and are focused on SSD endurance.
The Need for Standards

The category of solid state storage (SSS) is one of the most promising and exciting technology areas discussed today. Over the past several years, numerous solid state makers have brought offerings to the marketplace, making strong performance, reliability and endurance claims. The bold positioning of these solutions seemed to indicate that they would become the de facto storage standard and quickly replace hard drive technology, which has been part of the computing industry for over 50 years.
Regardless of the solid state hype, the market is still small when compared to HDDs. According to IDC, solid state drive shipments reached 11 million units last year, less than two percent of the HDD industry shipments of 590 million units. So why haven’t SSDs yet taken off? Besides pricing and supply concerns, those within the storage industry also believe the answer is largely due to lack of standards.
A standard for SSD endurance validation in particular is a critical requirement for any OEM system maker. Developing standards for SSD endurance provides the confidence OEMs need to conduct qualification cycles for product integration. Ultimately, the adoption and communication of standards also builds both consumer and CIO confidence, helping to drive the market as well.
Certainly standards are important, but I don't think that the lack of these standards is a first-order, or even a second-order, factor in the limited level of adoption. I can buy a terabyte spinning-media HD for $60.00. The cheapest SSD I found (in my three-minute research project) was $60.00 for a 4GB device. On the same site, I found a 128GB SSD ranging in price from $239 to $469.
That's more than 30 times the cost per byte at best. When the SSDs are in the range of two or three times the cost per byte, then things like standards start to be a part of the decision making process.
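The commenter's "more than 30 times" figure is easy to check from the prices quoted above (the prices themselves are the commenter's, not official figures):

```python
# Back-of-the-envelope check of the cost-per-gigabyte gap, using the prices
# quoted in the comment above.
hdd_price, hdd_gb = 60.00, 1000      # 1 TB spinning disk for $60
ssd_price, ssd_gb = 239.00, 128      # cheapest 128 GB SSD quoted

hdd_cost_per_gb = hdd_price / hdd_gb   # $0.06 per GB
ssd_cost_per_gb = ssd_price / ssd_gb   # about $1.87 per GB

ratio = ssd_cost_per_gb / hdd_cost_per_gb
print(f"SSD is {ratio:.0f}x the HDD cost per gigabyte")  # ~31x
```

At the $469 end of the SSD range the ratio is roughly double that, so "more than 30 times at best" holds up.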
There are applications with very specific requirements that override the 30X cost factor, but I really doubt that standards factor into those decisions either.
Regarding failure mechanisms, the most common are shipping damage, random component infant mortality, and assembly defects. Such failures are not due to the endurance of the memory components. Regarding endurance, there are definitely two classes of SSDs. The workloads for enterprise and client are extremely different, and an SSD that is not specifically designed to handle the demands of an enterprise workload could be at risk of endurance (memory) failure if used in an enterprise application. From a client perspective, any properly designed SSD should provide many years of dependable service. For enterprise applications, be sure to get an enterprise SSD.
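The gap between the two classes can be made concrete with the standard back-of-the-envelope endurance estimate (total NAND write budget divided by daily host writes). The capacities, P/E-cycle counts, write-amplification factors, and workload rates below are illustrative assumptions for a hypothetical drive, not values from JESD218 or JESD219:

```python
def drive_lifetime_years(capacity_gb, pe_cycles, write_amplification,
                         host_writes_gb_per_day):
    """Rough endurance estimate: (capacity x P/E cycles / write amplification)
    gives the total host-write budget in GB; divide by daily writes.
    All inputs here are illustrative assumptions."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_host_writes_gb / host_writes_gb_per_day / 365

# Same hypothetical 128 GB MLC drive under a light client workload versus a
# heavy enterprise workload (random writes also push write amplification up):
client = drive_lifetime_years(128, pe_cycles=3000,
                              write_amplification=2, host_writes_gb_per_day=20)
enterprise = drive_lifetime_years(128, pe_cycles=3000,
                                  write_amplification=5, host_writes_gb_per_day=500)
print(f"client: ~{client:.1f} years, enterprise: ~{enterprise:.1f} years")
```

Under these assumed numbers the client workload wears the drive out in decades, while the enterprise workload exhausts it in well under a year, which is why a drive not designed for enterprise duty cycles is at risk of endurance failure there.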
I like the idea of standards being promoted for SSDs! I do wonder what the failure mechanism is for SSDs: is it device failure related to MTBF, or is it exceeding the write-cycle capacity of a section of memory, rendering it non-operable? Once the cost of SSDs comes down, I expect to see more adoption in consumer-market devices.