"Meanwhile MRAM and STT MRAM will start to replace SRAM and DRAM within the next few years and probably before RRAM replaces flash memory."
Why would I need both? That is, if DRAM/SRAM's replacement is persistent, why would a CPU system have any need of other persistent storage, i.e. Flash's future substitute? Or conversely, if Flash's future substitute has DRAM-quick write speeds, why would a CPU have any need of DRAM's future substitute? It seems like the future merges the two functions by solving the technical challenges that led to them being separate in the first place.
I've never understood the positioning of these new persistent memory technologies as a replacement for RAM. Isn't it good enough that they are a replacement for Flash? By saying that they can replace RAM, you are setting up a good technology for failure.
Here's why: RAM has to have infinite read/write capability. Not millions, not even billions, but infinite. RAM can be written to very quickly: with a 100 ns cycle you can write to one location 100 million times in just 10 seconds. Hammering one location like that may not be common, but all it takes is one workload (maybe some weird encryption, compression, or even Bitcoin algorithm that reuses one memory block over and over again) and the memory is dead in the water.
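To put numbers on that: a quick sketch of how fast a single constantly rewritten cell exhausts a finite endurance budget at RAM speed. The endurance figures are illustrative, not from any particular datasheet.

```python
# How long until one cell, rewritten every cycle, hits its endurance limit?
def seconds_to_wear_out(endurance_cycles: int, cycle_time_ns: float) -> float:
    """Worst case: the same location is written once per memory cycle."""
    return endurance_cycles * cycle_time_ns * 1e-9

# Flash-class endurance (~100k writes) at a DRAM-like 100 ns cycle:
print(seconds_to_wear_out(10**5, 100))           # 0.01 s -- dead almost instantly
# Even a trillion-write budget lasts only about 28 hours:
print(seconds_to_wear_out(10**12, 100) / 3600)   # ~27.8 hours
```

That is why "billions of cycles" sounds generous but still isn't good enough for a RAM replacement.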
Let's set the goalposts a little lower so these memories have a chance.
edit add: did some searching, and one MRAM company, Everspin, claims no wear-out mechanism, i.e. infinite read/write capability, so I guess at least MRAM can replace RAM.
Seems like the cost of new fab equipment might be a key element in the transition. Can you scale the equipment used for disks to memory in a cost effective manner? What is the ramp-up rate of making the new equipment? Maybe a 50% growth rate is just the number needed to encourage investment...
Samsung is pushing STT as a persistent replacement for DRAM.
The targeted application is only interested in cost per gigabyte, footprint density and power. Server manufacturers will be using "a lot more DRAM in new designs" due to a server architecture shift that's underway and is just now beginning to be reflected in DRAM usage trends.
Samsung has instituted a university research program to study STT. Not something you want to hear for a purported near term memory technology for tier 1, high margin applications.
Flash is a really cheap process. Very few layers. Hugely redundant, both due to sparing on the die (leading to nearly perfect yield) and strong ECC (allowing the limits of storage variability to be pushed). Compare buying decent-quality MLC at less than 50 cents per GB against a raw wafer cost of around $1 per sq cm, and you can see that the process is hugely efficient.
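A back-of-envelope check using just the two figures above (sell price and raw wafer cost, both taken from this post): how many GB per sq cm must a die deliver merely to cover the raw silicon, before processing, test, packaging, or margin?

```python
# Illustrative break-even density from the numbers quoted above.
price_per_gb = 0.50        # dollars per GB, decent-quality MLC
wafer_cost_per_cm2 = 1.00  # dollars per sq cm, raw wafer

min_density_gb_per_cm2 = wafer_cost_per_cm2 / price_per_gb
print(min_density_gb_per_cm2)  # 2.0 GB per sq cm just to break even on silicon
```

Since real dies sell profitably at that price after all the downstream costs, their effective density must be comfortably above that floor, which is the sense in which the process is "hugely efficient."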
DRAM may be more vulnerable to replacement. It consumes significant power, and its density has barely improved in years: it has hit the wall on stored charge per unit of capacitor surface, so much of the recent improvement has come from shrinking everything surrounding the capacitor. If you could build an equivalent density for less power and less cost, heck yes.
It could be useful to persist state with power off, although in practice, when you lose power and reboot a server you mostly want to run diagnostics and then catch up on the transaction backlog, so modern servers in a data center largely don't care about that. Even SSDs and HDDs are used with replication and consensus, so "keeps its memory state without power" is mostly interesting if it means the device uses less power to operate, not so much for durability.
TanjB, thank you for the reply. This answers my question well for the near term, but for the long term I still see a convergence once the challenges you cite have been met. How can the goal be anything but a single memory interface?