To be sure, such systems will be the storage platform of choice to handle ever-growing and increasingly critical workloads such as credit card processing, stock exchange transactions, manufacturing and order processing systems. Even app stores on the web are starting to use flash-only storage.
Attributes such as improved performance, reliability, and durability make flash systems desirable today and virtually mandatory for the future. That’s because the tsunami of Big Data shows no sign of receding. Researchers predict that the digital universe—all the digital information created around the world—will hit 8 zettabytes by 2015. That’s roughly 800 million times the data held in the U.S. Library of Congress.
Today we find ourselves, once again, at a tipping point in computer storage, where the challenge is no longer computation itself but rather storing and retrieving the information that computers generate and share.
Through our acquisition of Texas Memory Systems last fall, we have systems that can pack almost 24 terabytes of storage into a unit the size of a pizza box and provide access to data 100 times faster than mechanical storage. Stack 42 such pizza boxes in a rack and you get roughly 1 petabyte (42 × 24 TB ≈ 1,000 TB), which is more storage than any single operational application requires.
The industry is moving rapidly in the direction of all-flash storage for operational information. Such systems will not only help organizations respond to and exploit the challenges of Big Data today and tomorrow, but will once again change the future of computing, and possibly the world along with it.
Ambuj Goyal is general manager of IBM System Storage & Networking.
One thing abundantly clear to everyone is the explosion of data in the last five years. If IoT projects forge ahead and we end up with 50 billion networked nodes by 2050, we will need a storage environment whose round-trip latencies for accessing and fetching data hold up as demand grows exponentially, and that is possible only with solid-state drives. The question is how we can make nonvolatile storage faster and more reliable, over and above what is afforded by redundancy today.
At some point we'll have to stop hoarding data and do some old-fashioned spring cleaning. I see a horrible practice of multiple instances of the exact same files on the corporate servers here, basically because people are inherently lazy and nobody forces the issue. We could easily contain this data explosion if people would just discipline themselves to keep only what they need. I dare say 98% of the data is probably worthless (repeated, out-of-date, superseded, obsolete, etc.).
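For what it's worth, here is a minimal sketch of the kind of cleanup being described: group files by a content hash and flag any hash that maps to more than one path. The root directory ("/srv/shared") and the function name are placeholders for illustration, not anything from the original comment.

```python
# Illustrative sketch: find exact duplicate files by hashing their contents.
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files under `root` by the SHA-256 of their contents."""
    by_hash = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            try:
                with open(path, "rb") as f:
                    # Read in 1 MiB chunks so large files don't blow up memory.
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
            except OSError:
                continue  # skip unreadable files
            by_hash[h.hexdigest()].append(path)
    # Any hash with more than one path is an exact duplicate set.
    return {digest: paths for digest, paths in by_hash.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, paths in find_duplicates("/srv/shared").items():  # hypothetical path
        print(digest[:12], paths)
```

Nothing fancy; even a script like this run periodically would show how much of the growth is just the same bytes stored over and over.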
Valid point! Most storage area networks and storage services have 4X to 6X redundancy, and my point above is that this can get way out of hand given hardware limitations in the days ahead. Solid-state drives have to improve their reliability drastically from where they are now.
I love the fact that I have 2 TB of at-home storage! I also really like having alternate remote-site backups! It is a wonderful time to be on a machine with so many options for storage performance and capacity.
Not sure the few TBs of storage I have help me. I am at the point where I am not sure where everything is located, and I don't have the time to sort through 100+ GB of family pictures, not to mention other stuff. Help is needed. Start-up opportunity?
Computing could become Big Data-oriented if it were all based on big lookup tables. But accessing memory to carry this out would have to be much more parallel than existing multi-core threading to be appreciably fast, probably on the order of 1,000x at least.
Of course the author will plug the advantages of flash memory, since his company uses it.
But the Invisible Hand still plays a role. We are in the early stages of shifting to SSDs in PCs and the like, because old-fashioned hard drives are simply a lot *cheaper*. I don't see the industry as a whole shifting till costs drop by an order of magnitude. It's the reason an awful lot of backup is to tape: the cheapest cost per megabyte of storage.
Are we getting lost in this huge sea of humongous data? Is the storage capacity of our networks starting to become so much larger than our brains' ability to digest that data? The tipping point might be that soon nobody will know what they have stored, and where.