As someone who started out in vacuum-tube days, I have to laugh at all of the self-serving reports on the next candidate to become the "one true memory" solution. Magnetic-bubble memory (aptly named), ovonic memory, MRAM, FRAM, etc. have all been anointed by various pundits selling absurdly expensive "research reports." Not one of these ever gained the foothold needed to validate the claims to universality. The comments made by others above regarding the inherent costs of additional process steps and potential long-term reliability issues are all spot-on. DRAM of various types has managed to survive through numerous technologies (I reckon the next-gen DDR4 will be at least the 8th or 9th full generation), along with the "niche" non-volatile products: EEPROM and all the varieties based on flash technology. Even back in the days of the "revolutionary" 4Kbit DRAM, I recall the intense debate as to whether DRAM technology had reached its "natural limits" because of bit-flips induced by "cosmic rays" — later found to be radiation from the frit seal glass of the ceramic packaging. Easily fixed!
Why do you think it must be a single memory interface? There never has been yet. Each technology has its own characteristics, and attempts to make them indistinguishable (like paged virtual memory) are mediocre at best.
TanjB, thank you for the reply. This answers my question well for the near term, but for the long term I still see a convergence once the challenges you cite have been met. How can the goal be anything but a single memory interface?
Flash is a really cheap process. Very few layers. Hugely redundant both due to sparing on the die (leading to nearly perfect yield) and strong ECC (allowing the limits of storage variability to be pushed). Compare buying decent quality MLC at less than 50 cents per GB with raw wafer cost around $1 per sq cm, and you can see that the process is hugely efficient.
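The economics in that comparison can be made concrete with a quick back-of-the-envelope calculation (the $0.50/GB and $1/cm² figures come from the comment; everything else here is illustrative, not actual fab data):

```python
# Rough sketch of the cost argument above: numbers are from the comment,
# and the calculation ignores packaging, test, binning, and margin.
wafer_cost_per_cm2 = 1.00   # USD per square centimeter of processed wafer
price_per_gb = 0.50         # USD per GB of decent-quality MLC flash

# Break-even: each cm^2 of silicon must yield at least this many GB
# just to cover the raw processed-wafer cost.
breakeven_gb_per_cm2 = wafer_cost_per_cm2 / price_per_gb
print(breakeven_gb_per_cm2)  # 2.0 GB per cm^2
```

Since shipping MLC flash comfortably exceeds that density, the sale price leaves room for packaging and margin on top of wafer cost, which is what makes the process "hugely efficient" in the sense the comment describes.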
DRAM may be more vulnerable to replacement. It consumes significant power, and its density has barely improved in years: it hit a wall on charge per unit area, and much of the recent improvement has come from shrinking everything surrounding the capacitor. If you could build an equivalent density for less power and less cost, heck yes.
It could be useful to persist with power off, although in practice, when you lose power and reboot a server, you mostly want to run diagnostics and then catch up on the transaction backlog — so for the most part, modern servers in a data center don't care about that. Even SSDs and HDDs are used with replication and consensus, so "keeps its memory state without power" is mostly interesting if it means the device uses less power to operate, not so much for durability.
Samsung is pushing STT as a persistent replacement for DRAM.
The targeted application is only interested in cost per gigabyte, footprint density, and power. Server manufacturers will be using "a lot more DRAM in new designs" due to a server architecture shift that's underway and is just now beginning to show up in DRAM usage trends.
Samsung has instituted a university research program to study STT. Not something you want to hear for a purported near term memory technology for tier 1, high margin applications.
Seems like the cost of new fab equipment might be a key element in the transition. Can you scale the equipment used for disks to memory in a cost effective manner? What is the ramp-up rate of making the new equipment? Maybe a 50% growth rate is just the number needed to encourage investment...