Many years ago, I was in charge of backing up the company's business records every night onto a tape drive. I made two copies. One went into the safe. The other went home with the president or vice president of the company. It was a big deal and, at the time, we thought a single tape held a vast amount of storage.
Fast forward 20 years and now I am reading about the world's first exabyte storage system. Now, I am pretty geeky and live with a geeky family. We throw around the term terabyte with some frequency here (really, we do). But exabyte? No. Never said that one before.
Late last month, Oracle's StorageTek division announced a tape drive that can scale to an exabyte. The new tape drive uses a FujiFilm cartridge coated with nanoscale barium ferrite (BaFe) particles, which extends the lifetime of the media.
Here's what Oracle has to say about the product:
World’s largest capacity at 5 TB (uncompressed), over 3x more than any other tape drive.
World’s fastest throughput at 240 MB/sec, 50% to 70% faster than other tape drives.
Up to 23% lower five-year TCO for a 20 PB system when compared to other enterprise tape solutions.
Protects your investment with media reuse and broad compatibility.
Integrated encryption support enhances security and ensures data protection.
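Oracle's figures invite a quick back-of-envelope check. A sketch of the arithmetic, using only the numbers quoted above (the calculation is my own, not from the announcement):

```python
# Back-of-envelope math on the quoted specs: 5 TB per cartridge,
# 240 MB/sec throughput, 20 PB system.
TB = 10**12
PB = 10**15
MB = 10**6

cartridge_capacity = 5 * TB   # 5 TB uncompressed per cartridge
throughput = 240 * MB         # 240 MB/sec sustained

# Cartridges needed to hold the quoted 20 PB system
cartridges = (20 * PB) // cartridge_capacity
print(cartridges)             # 4000

# Time to fill one cartridge at full speed
hours = cartridge_capacity / throughput / 3600
print(round(hours, 1))        # 5.8 hours
```

So a 20 PB library works out to 4,000 cartridges, each taking roughly six hours to fill at the rated throughput.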
What do you think? Is this a game changer? Will businesses choose tape over SSDs for data backup and recovery?
To provide something closer to random access, would something analogous to microfiche (relative to microfilm) be practical? For a robotic library, handling a much larger number of fiches than tape cassettes might not be a huge problem. The density might be significantly lower, requiring (thinner) per-sheet protection rather than (thicker) per-spool protection, but the faster access to arbitrary data might be a sufficient advantage.
A less related question: Would thermally assisted writing make archival storage more reliable by preventing magnetic fields at room temperature from corrupting the data?
Wow, that's great. With the growing needs of family and business responsibilities, I guess it won't be a surprise when people start demanding that. Of course, cost is the most decisive factor. What will the cost be, anyway?
I think the title of this article is misleading.
I've been unsuccessful finding any mention of exabyte (10^18 bytes) in the Oracle announcement.
Or in the text of this article in relation to this product.
5 TB is a long way from 1 EB.
Btw, at 240 MB/s it'll take on the order of 10^9 seconds (about 132 years) of continuous writing to fill a 1 EB tape.
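The commenter's estimate checks out. A minimal sketch of the same arithmetic:

```python
# Time to write 1 EB at the drive's rated 240 MB/sec.
EB = 10**18
MB = 10**6

seconds = EB / (240 * MB)              # ~4.17e9 seconds
years = seconds / (365.25 * 24 * 3600)
print(round(years))                    # 132
```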
Game changer? I doubt it.
The advantage of tape for backup is cost. Tape is cheap. The disadvantage is time and convenience. It takes a relatively long time to back up to tape, and restoring is a pain because it's a sequential medium and you must start at the beginning and advance to the spot where what you want is stored before you can restore it. And backup systems have catalogs that take space, so you may even be looking at regenerating the catalog first so you can find the file on the tape.
I was the backup admin at a former employer, and I was pushing hard for a two-stage strategy: back up to disk, then tape. Corporate standards required us to back up every night and send the tapes offsite in the morning. Guaranteed, a user would discover they had trashed a file and needed a restore just *after* the tapes had been sent off...
We also had a division that couldn't *do* a *full* nightly backup to tape, because there wasn't enough time in the backup window. They had to hope an incremental would suffice.
In my view, backup would go to disk, and from disk to tape. The disk backup would be there for the "Oops! I trashed X! Can you restore it from backup?" instances, and the tape would be long term archival storage.
I see the same approach applying when you are dealing with potentially exabyte-sized backups. They aren't your primary backup; they're a long-term archival solution. Oracle's announcement says it will be faster, easier, and cheaper to create such backups, but you still hope you'll never have to restore from one.