LONDON – IBM expects to improve its server computers within a few years through the use of phase-change memory (PCM), according to two separate reports that quote different senior executives at the company. This is despite review papers and discussion within a patent application that seem to acknowledge problems with scaling PCM.
PCM is based on changing the material phase and the electrical resistance of a chalcogenide layer in each memory cell. It is an attractive technology because of its non-volatility, density and bit-alterability and has been touted as a possible replacement for both flash memory and DRAM. But the technology has proved difficult to commercialize and even as devices have made it to market using 90-nm process technology, questions have been asked about the ability to scale the technology beyond the levels already reached by flash memory.
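The resistance-based storage mechanism described above can be sketched in a few lines of code. This is an illustrative model only, not anything from IBM or a PCM vendor; the resistance values and read threshold are hypothetical, chosen simply to show how non-volatility and bit-alterability fall out of the cell design.

```python
# Hypothetical figures for illustration: a real chalcogenide cell's
# resistances depend on material, geometry, and process node.
AMORPHOUS_OHMS = 1_000_000    # high resistance -> logical 0 (RESET state)
CRYSTALLINE_OHMS = 10_000     # low resistance  -> logical 1 (SET state)
READ_THRESHOLD_OHMS = 100_000

class PCMCell:
    """Minimal model of a single phase-change memory cell."""

    def __init__(self):
        self.resistance = AMORPHOUS_OHMS  # assume cells start RESET

    def write(self, bit: int) -> None:
        # Bit-alterability: any cell can be rewritten in place, with no
        # block-erase step of the kind NAND flash requires.
        self.resistance = CRYSTALLINE_OHMS if bit else AMORPHOUS_OHMS

    def read(self) -> int:
        # Non-volatility: the material phase (and hence resistance)
        # persists without power, so a read is a threshold comparison.
        return 1 if self.resistance < READ_THRESHOLD_OHMS else 0

cell = PCMCell()
cell.write(1)
print(cell.read())  # -> 1
cell.write(0)       # rewritten directly, no erase cycle needed
print(cell.read())  # -> 0
```

The absence of an erase cycle in `write` is the property that makes PCM attractive as a DRAM replacement as well as a flash replacement.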
It is not clear whether IBM is looking to use PCM devices produced by third parties such as Micron, which recently bought PCM pioneer Numonyx, or whether it has proprietary PCM technology of its own that it could license out or manufacture internally.
Nonetheless Jai Menon, chief technology officer of IBM's Systems and Technology Group, believes that phase-change memory could replace existing DRAM and revolutionize the way servers are built, according to an article published last month by InfoWorld.
With PCM "you can design your file systems differently, you can design your databases differently, and it has the potential to reduce by three orders of magnitude the power consumed and the amount of space consumed by servers," the report quoted Menon as saying. It adds that he said that IBM is continuing to develop PCM and will incorporate it in servers, but did not reveal a date by when this would happen.
In an article published by the Technologizer website at about the same time in October, Alan Ganek, chief technology officer and vice president of strategy and technology in the IBM Software Group, and Rod Adkins, senior vice president within IBM's Systems and Technology Group, are also quoted as being enthusiastic about PCM.
PCMs have 30X or better write speeds than NOR (per Samsung's claim) and operate at low voltage (~2V vs. 10V+).
@Kris, The memory chip in this case is an industrial prototype, not university-made, thus it has already been tested to meet certain cycling specifications by the manufacturer (though as a prototype part, the study found write performance issues). This was not a test of the PCM, but rather of a new PCM _drive architecture_ and _software_ for speed benchmarking. I fail to see how extended cycling times would make a test of the architecture any different.
To @xprmntl, academic testing is very different from industrial testing...I worked as a prof at 5 universities around the globe in my career and most of my academic colleagues would test one device, make a few measurements, write a paper and move on...needless to say, in industry you test things thousands if not millions of times, very, very different, Kris
@inewski The academic claims are made from testing actual PCM devices (Micron NP8P128A13B1760E), not just what they think. Scaling could be an issue, so remains to be seen. PCM with a vertical access device would be really nice, though...
Memorywranglers link was bad, so try this: http://cseweb.ucsd.edu/users/swanson/papers/HotStorage2011-Onyx.pdf
The PCM devices are qualified for 85C, similar to current Flash; 150C is a high temperature, automotive spec.
I agree with Peter and would be very skeptical of any academic claims...there is a wide gap between what academics think and what actually works ;-)...I have been on both sides of this divide and have seen this difference of point of views thousands of times...Kris
This is an academic paper from UC San Diego and one of the observations therein is: "These results show that (assuming PCM scaling projections hold) PCM-based storage array architectures will be a competitive alternative to flash-based SSDs."
But does that assumption hold?
The original Moneta system used DRAM, but the version described in this article does real PCM -- 10GB of it, in fact.
You can read about it in detail here: http://cseweb.ucsd.edu/users/swanson/papers/papers/HotStorage2011-Onyx.pdf
The UCSD Moneta storage array DOES NOT USE ANY actual PCM, apparently. It uses actual DRAM but (fake) performance data for PCM to model the performance of the array. According to that fake input data, their imaginary array achieves 1.5 GBytes/second or so sustained write, which is not impressive, given that Fusion-IO's Duo ACTUAL Flash drive already has achieved the same speed using SLC NAND Flash.
Garbage-in, garbage-out, as they say.
Oh, and by the way, 64GB worth of PCM costs over $16,000 (compare to just $100 if MLC NAND Flash is used). Good luck with that!
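Taking the figures quoted in the comment above at face value, the cost gap works out as follows. This is a back-of-the-envelope calculation using the commenter's circa-2011 numbers, not current vendor pricing:

```python
# Figures from the comment above (not vendor data): 64 GB of PCM at
# $16,000 vs. the same capacity in MLC NAND flash at $100.
pcm_cost_usd = 16_000
flash_cost_usd = 100
capacity_gb = 64

pcm_per_gb = pcm_cost_usd / capacity_gb      # dollars per GB of PCM
flash_per_gb = flash_cost_usd / capacity_gb  # dollars per GB of flash

print(f"PCM:   ${pcm_per_gb:.2f}/GB")        # -> PCM:   $250.00/GB
print(f"Flash: ${flash_per_gb:.2f}/GB")      # -> Flash: $1.56/GB
print(f"Premium: {pcm_cost_usd / flash_cost_usd:.0f}x")  # -> Premium: 160x
```

A 160x price premium per gigabyte is the scale of the commercialization gap the commenter is pointing at.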
PCM certainly faces some significant technology challenges, but I think IBM is right to see it as a potentially transformative technology, especially for storage. If PCM follows even somewhat optimistic trends in terms of scaling and performance, it will dramatically speed up storage. A group at UC San Diego recently built a prototype storage array to explore the potential impacts (google "moneta UCSD"). The performance they achieve is impressive.