In the first part of this two-part series, we looked at the factors behind choosing solid-state drives for complete solutions. In part two, we look at how to determine the right storage solution for each aspect of a cloud architecture.
In 2011, it was estimated that enterprise capacity demand for public and private cloud services alone accounted for 23% of the total enterprise server and storage data capacity shipped, and that figure is expected to triple over the next 10 years. It goes without saying that such a transformation in where data is created and stored is not without complexity.
The overarching value proposition of the cloud is the reduction of complexity: simplification not only of the architecture on which "the cloud" is built, but also of the agility, economics, and scale the cloud delivers. Achieving that simplification is itself a complex undertaking. This is especially true when architecting the cloud to provide not only traditional data storage capabilities to the user (backup, archive, disaster recovery, file sharing), but also the applications and services the on-premise data center delivers today.
From transaction processing to business applications and databases, delivering such applications via the cloud while maintaining the value proposition of complexity reduction is easier said than done. If the cloud is to sustain its momentum, it must provide the same level of service for data creation as it does for data storage. Consider the wide range of applications running in a traditional enterprise data center today. The opportunity to move those applications to the cloud depends on a number of requirements, such as security, bandwidth, time, expertise, and how the application was architected in the first place.
Historically, applications were architected to run on dedicated servers within the data center. With the advent of server virtualization, and thus the cloud, we are seeing a shift from a hardware-driven data center to an application-driven data center. This dynamic shift in application design and development is what I call the "application of applications," and it is challenging traditional IT suppliers to adjust their thinking around delivering solutions that adhere to the cloud mantra of giving businesses increased agility through simplifying the distribution of IT resources.
One segment of the IT industry heavily impacted by this shift is storage. With the ever-increasing number of cloud-based applications and the promise of cloud providers to drive complexity out of the data center, storage suppliers must adopt an application- or software-centric approach to product development and feature sets, rather than the more traditional hardware-centric one. Doing this while delivering solutions that simplify the cloud data center is complex in and of itself. The storage suppliers that ultimately succeed in this space will be those that possess the greatest understanding of cloud application workloads, and that apply this knowledge to products and solutions that not only meet application workload requirements, but also deliver on the promise of simplifying the cloud data center architecture.