No, I don't think a consumer or average commercial customer wants the "cold flash". Mr. Taylor may not realize that low performance may also mean poor reliability, and heavy redundancy would be the likely result.
You missed the point entirely. RAID came into being purely to ameliorate the high cost of enterprise storage by replacing expensive drives with an array of cheap drives, plus algorithms to distribute data and parity, and silicon acceleration.
Early RAID used 'junk' rotating media to emulate enterprise class storage.
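The core trick the comment describes is parity: spread data across cheap drives and keep an XOR of the blocks, so any one failed drive can be rebuilt from the survivors. A minimal sketch (an illustration of RAID-4/5 style parity, not anyone's production code):

```python
def parity(blocks):
    """XOR a list of equal-length byte blocks into one parity block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three 'drives' worth of data plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Simulate losing drive 1: XOR the surviving blocks with the parity
# block to reconstruct the missing one.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Real implementations stripe these blocks across spindles and rotate which drive holds parity, but the rebuild math is exactly this XOR.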
When you upload anything to Facebook, there is more than one copy. It gets pushed around and indexed.
I believe, also, that they perform local RAID on the flash arrays.
So you have geographical spatial distribution of RAID protected data.
Technology really is a marvelous thing. For more information, look into background on what Amazon has done with the elastic cloud.
Wobbly: So for you the ultimate is just piles and piles of junk memory distributed all over the place, storing what are in effect piles of junk pictures. Somewhere it falls apart. The phrase "making silk purses from sows' ears" comes to mind. Future generations of memory devices that solve the impending SSD problems will evolve and move down the cost learning curve, and quality, performance, and reliability will prevail. Surely becoming known as the repository of junk cannot be good PR for any company.
It is like saying I can make an airliner in the shape of a square box (or even box of junk) and use computers to keep it airborne. Might be possible, but is not the best or most efficient way to go.
I think the whole thing depends on distributed storage. If you use distributed storage for network balancing and delivery caching, you end up with 'accidental' data redundancy as well.
So if your 'junk' storage fails in one location, you get a brief network pause while the data is fetched from somewhere else. Think in terms of spatially separated RAID, but with replicated full volumes as well as stripes.
With pricey high quality storage this gets expensive. With 'junk' storage, this is cheap.
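The 'accidental redundancy' idea above can be sketched in a few lines: write full copies of each volume to every location, and let a read fall through to any surviving copy. This is a toy model with hypothetical names, not how Facebook or Amazon actually structure it:

```python
class ReplicatedStore:
    """Toy model of replicated full volumes across cheap storage sites."""

    def __init__(self, locations):
        # Each location is a plain dict standing in for one site's storage.
        self.locations = locations

    def put(self, key, value):
        # Write a full copy to every site (the 'replicated volumes').
        for site in self.locations:
            site[key] = value

    def get(self, key):
        # Read from the first site that still holds the data; a failed
        # site just means a brief detour to another copy.
        for site in self.locations:
            if key in site:
                return site[key]
        raise KeyError(key)

east, west = {}, {}
store = ReplicatedStore([east, west])
store.put("photo-1", b"...jpeg bytes...")
east.clear()  # one location's cheap storage dies entirely
assert store.get("photo-1") == b"...jpeg bytes..."  # served from the other site
```

With pricey drives, paying for N full copies hurts; with 'junk' drives, N copies is the cheap way to buy reliability.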
chanj0...... a large RAM cache is needed to keep a database on hard drives within tolerable performance levels, particularly for write operations, because of the very slow random I/O speeds of hard drives. Because flash has much faster random I/O than a hard drive, it is possible to support larger databases with a given amount of RAM.
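A back-of-envelope way to see this (my own illustrative latency numbers, not figures from the article): for a target average read latency, the slower the device's random reads, the higher the cache hit rate, and hence the bigger the RAM cache, the database needs.

```python
def required_hit_rate(target_us, ram_us, device_us):
    """Hit rate h solving h*ram_us + (1-h)*device_us == target_us."""
    return (device_us - target_us) / (device_us - ram_us)

RAM_US = 0.1        # ~100 ns DRAM access (rough, assumed)
HDD_US = 10_000.0   # ~10 ms random seek on a hard drive (rough, assumed)
SSD_US = 100.0      # ~100 us random read on flash (rough, assumed)
TARGET_US = 200.0   # a tolerable average read latency (assumed)

print(required_hit_rate(TARGET_US, RAM_US, HDD_US))  # ~0.98: almost every read must hit RAM
print(required_hit_rate(TARGET_US, RAM_US, SSD_US))  # negative: flash alone already meets the target
```

With disks, roughly 98% of reads must be served from RAM to hit the target, so the cache must cover nearly the whole working set; with flash, no cache is strictly needed at all, which is why the same RAM supports a much larger database.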
Since the members get to use the service free of charge (advertisements notwithstanding), how can they complain if Facebook accidentally loses their pictures? Of course, some slimy lawyer will try to make a federal case out of it. I think Facebook got it right.
@Ron: I also wonder what the incentive is for flash vendors to make a cheaper product at a time when there is such demand for a relatively high-value version of flash. It must be easy to say you will get back to them when you aren't quite so busy.
Resistron: A touch of PCM déjà vu here. In the very early days of phase-change memory (PCM), it was called RMM, for Read Mostly Memory, mostly because of the limited write/erase life that could be guaranteed at that time. Maybe there is still hope! The problem with this request is that if you ask for junk, you get junk, and then even worse junk. In the end you finish up with what is effectively virtual memory: you just kid the users that they are putting the photographs they will never look at into memory, but there is nothing there. Especially nice if you can charge for it.