Solid-state drives constitute an increasingly important market for flash memory. To better understand the market forces, trends, and challenges, we sat down with Jim Ting, vice president of product marketing and management at SSD manufacturer STEC, to get his outlook on the market.
Kristin Lewotsky: What’s driving the shift toward SSDs in enterprise data storage? J.T.: A lot of people wonder what kind of application or workload needs SSD performance. One is simply a huge application with a whole lot of transactions. The other way to get there is through scaling. With physical CPUs increasing the number of cores and a number of virtual machines running on each core, you get a multiplier effect that can turn an average machine at the [virtualization] level into a large user at the physical CPU level. This trend is inevitable. There have always been cases where input/output operations per second (IOPS) have been a problem, but I think the percentage of customers that run into problems is starting to increase just because of the technology.
K.L.: That sounds a lot like big data. Do you see flash playing an important role there? J.T.: When Hadoop [a framework for distributed computing] was created, the concept was to use low-cost commodity servers on a massive scale to solve data-mining problems. That’s fine for companies like Google and Yahoo, but mere-mortal companies are much more limited by data center concerns like power and cooling, rack space, etc. A different way to solve the problem using the Hadoop paradigm is to use fewer servers but interconnect them on a high-speed network with faster I/O using SSD technology. That way you can do your analytics with a smaller number of servers and in a much smaller space.
Another driver for the enterprise SSD market is latency. Web 2.0 companies are always looking for better latency for the user experience. This measurement is also very important in the cloud space because there's an inherent network delay that people are using SSDs to compensate for, but SSDs also give people the overall throughput and service levels they expect. I think the cloud is another great opportunity for SSD technology; it's really a performance and latency story.
K.L.: What are the key challenges in the market? J.T.: The flash SSD industry gets to ride the advances that the flash vendors themselves are making, but it requires us to be a lot more clever about developing techniques to maintain data integrity as the geometries shrink. If you’re only putting a device through low-demand use in a consumer environment, that's one thing, but you can't just take off-the-shelf flash memory, put it into a drive form factor or add-in board, and expect it to work 24x7. What we've done with our CellCare technology is increase the endurance of commercial-grade NAND flash to make it work at the enterprise level. That has really opened up the applications, because of the capacities we’re now able to achieve with 2x-nm flash and, soon, 1x-nm technology.
K.L.: Even at best, SSDs appear unlikely to achieve cost parity with spinning-disk drives. How will that affect market penetration? J.T.: Spinning HDDs have an advantage over SSDs in dollars per gigabyte, but most people, when they look at flash, focus on a dollars-per-I/O or I/Os-per-dollar metric. There's always going to be that balance between what makes sense for a particular application or data center manager: Is the metric density in dollars per gigabyte, or is it response time or I/O, which lends itself to an SSD?
It’s important to remember that while we are making advances on the flash side, there continue to be advances on the HDD side. The gap between the two is decreasing, though. It wasn't too long ago that a 1 TB hard drive was a huge accomplishment and flash SSDs were quite small, but now we’re talking about terabyte flash SSDs. The big gap at large capacities is still there, but the ratio is decreasing.
K.L.: What are the key trends you’re seeing in this space right now? J.T.: We’re seeing orders-of-magnitude performance changes in a very short period of time. That doesn't happen very often. It's really exciting but also challenging. As the rate of change increases, there are a couple of things on the horizon that will start to impact enterprise users and others. There's a lot of interest in using the PCIe interface in different physical form factors.
Another aspect I think is important is how people start thinking about applications and taking advantage of hardware architectures that are more balanced among CPU, DRAM, and SSD storage in terms of latency and IOPS. A lot of applications in the past could take advantage of the access times inherent to spinning disks to do things in the background while waiting for the disk to provide data. With SSD technology, you shouldn't assume that there is a built-in latency, and you can do useful work more consistently. I think that has implications across the board as application developers leverage the balance and create opportunities for their products. It's a very exciting time.
David Patterson, known for his pioneering research that led to RAID, clusters and more, is part of a team at UC Berkeley that recently made its RISC-V processor architecture an open source hardware offering. We talk with Patterson and one of his colleagues behind the effort about the opportunities they see, what new kinds of designs they hope to enable and what it means for today’s commercial processor giants such as Intel, ARM and Imagination Technologies.