LONDON – Samsung Electronics Co. Ltd. is set to re-ignite debate about whether phase-change memory is commercially viable with the presentation of an 8-Gbit, 20-nm device at the 2012 International Solid-State Circuits Conference.
The development was not unexpected, as Samsung engineers are due to present a 20-nm phase-change random access memory cell at the International Electron Devices Meeting in Washington, D.C., Dec. 5 to 7, 2011.
Nonetheless, a 20-nm, 8-Gbit phase-change random access memory is a large jump forward from the previous state of the art. In February, at ISSCC 2011, Samsung engineers presented a 1-Gbit phase-change memory implemented in a 58-nm manufacturing process and equipped with a low-power double-data-rate nonvolatile memory (LPDDR2-N) interface.
Samsung – and Micron Technology Inc., through its acquisition of Numonyx B.V. – are the only two companies that have come close to offering non-volatile phase-change memory for commercial use, despite years of research and development. And even so, there are almost no reports of phase-change memories in the field.
Samsung is now set to present a larger device in a 20-nm process technology, operating at 1.8 V and with a 40-Mbyte/s programming bandwidth. This puts phase-change memory close to the same geometry and memory-cell density as NAND flash.
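For a sense of scale, the quoted capacity and programming bandwidth imply a full-chip write time on the order of half a minute. A back-of-the-envelope sketch (the binary-vs-decimal unit conventions are assumptions, since the article does not specify them):

```python
# Back-of-the-envelope: time to program the full array at the quoted
# bandwidth. Capacity and bandwidth figures are from the article; the
# binary (Gbit) and decimal (Mbyte/s) unit conventions are assumed.
capacity_bits = 8 * 2**30           # 8 Gbit, binary convention assumed
capacity_bytes = capacity_bits / 8  # -> 1 GiB
bandwidth = 40 * 10**6              # 40 Mbyte/s, decimal convention assumed
seconds = capacity_bytes / bandwidth
print(round(seconds, 1))            # ~26.8 s to fill the whole device
```

Under different unit assumptions the figure shifts slightly, but the order of magnitude (tens of seconds) holds.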
The ability of NAND flash to store and detect multiple bits per cell still gives flash a capacity advantage over PCM. Flash memory is also expected to move to forms that stack multiple memory cells vertically, providing further capacity scaling.
Phase-change memory works by detecting the change in resistance of a chalcogenide alloy as it moves between amorphous and crystalline states under the action of resistive heating. It has long been hoped that the technology could combine the scaling advantages of a cross-point memory with the non-volatility of flash memory while offering superior endurance and bit addressability.
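The read side of the mechanism described above can be illustrated with a toy model: a bit is recovered by comparing the cell's resistance against a threshold between the crystalline (low-resistance) and amorphous (high-resistance) regimes. All resistance values and names here are illustrative assumptions, not figures from any real device:

```python
# Toy model of a PCM read: the stored bit is inferred by comparing the
# cell's resistance to a threshold sitting between the crystalline
# (low-R, SET) and amorphous (high-R, RESET) regimes. All values are
# illustrative only, not taken from any real device.
R_CRYSTALLINE = 1e4   # ohms, low-resistance SET state (illustrative)
R_AMORPHOUS = 1e6     # ohms, high-resistance RESET state (illustrative)
R_THRESHOLD = 1e5     # ohms, read discrimination point (illustrative)

def read_bit(resistance_ohms: float) -> int:
    """Return 1 for the low-resistance (crystalline/SET) state, else 0."""
    return 1 if resistance_ohms < R_THRESHOLD else 0

print(read_bit(R_CRYSTALLINE))  # 1 (SET)
print(read_bit(R_AMORPHOUS))    # 0 (RESET)
```

Because each cell is read (and written) individually against such a threshold, PCM offers the bit addressability noted above, unlike NAND flash's block-erase model.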
However, PCM has hit a number of barriers to deployment, not least the difficulty of getting ahead of the fast scaling of NAND flash memory. Technical challenges remain over the ability of the heating effect to scale, both within the memory cell and because of thermal cross-talk to neighboring cells. There are also concerns over whether this sensitivity to temperature could prevent pre-programmed phase-change memories from being taken through printed-circuit-board production processes such as solder baths.
The rules of ISSCC have always been that papers are accepted only if researchers have made real devices and taken physical measurements – in other words, no simulation papers or papers based on design data. It is not necessary, however, that the devices work fully or as intended.
Mr. Rbtbob – With respect to terbium and mechanisms of operation, as I have written elsewhere in another comment, reduction to practice is necessary: reduction to practice in an environment that considers all the variables that will allow the device to be produced in array form. One thing is sure: for any new composition suggested as the active material for PCM or threshold switches, there is one question that must now be asked. Is it immune from the effects of element separation, whatever the cause – current density (J), electric field (E), or crystallization rejection processes?
I agree that the patent is for a design that uses a combination of components that are not quite perfected. It is interesting that it proposes moving away from the silicon wafer process.
I believe some PCM researchers are developing the concept that a PCM cell would benefit from a reduction in cell diameter: a diameter small enough to restrict the cell-material molecules to fixed pivot points as the conductive matrix forms and un-forms. I have a personal theory that this effect can be assisted by doping the cell material with terbium. Maybe the same effect can improve the threshold switches.
Mr. Rbtbob – I think the patent you mention is one of those that might be useful for promotion and VC activities, but little else. For me, reduction to practice is necessary before I would become a believer.
Any discussion of threshold switches requires an answer to a simple question: why has nobody been able to demonstrate a long-lived free-running oscillator using the negative resistance of a threshold switch – that is, an oscillator that runs for more than a few thousand cycles before device failure? When the reason for that is fully explained, or a demonstration provided, the relevance of the patent can be discussed further. I think the answer to the free-running-oscillator problem points to the fundamental mechanism of threshold switching. I would suggest a clue will be found in the composition analysis of the threshold switch after it has been used in oscillator experiments.
Eista and Resistion – Post the IBM work, I think most of those trying to understand the PCM element-separation (ES) problem now agree that once the chalcogenide becomes molten, the driving force for ES is the electric field; that is, it is an electrochemical process. During reset, before the chalcogenide becomes molten, conventional electromigration driven by current density is in play. Current density is important because it will drive electromigration in both the passive and active components of the matrix.
The reset current density can always be reduced, but that usually exacts a performance penalty in some other parameter. If you are Samsung, then it appears from their PRAM cell paper at VLSI 2010 that you can reduce write/erase (w/e) power in the same PCM structure (their figure 12) without consequences – ignoring the fact that at some point in that reduction the device will fail to operate and the lifetime will fall to zero. Even though their own data tends to indicate a maximum, they ignore the reality of a maximum lifetime as a function of reduced w/e power.
In that same paper they do not consider that at some reset levels (high current) the molten material will have metal as its electrodes, while at low reset current the electrodes in contact with the molten material will be crystalline chalcogenide. This means they are most likely extrapolating a single curve from what are, in effect, a series of different devices. I think we will need to look very carefully at the claims for PCM performance at both IEDM 2011 and ISSCC 2012. However, a serious and competitive product announcement (8 Gbit, or even 1 Gbit), with public data sheets, would kill much of the discussion.
The reset current density issue has drawn attention in the industry. It will be interesting to see the 2011 IEDM paper from Macronix, with a 30-µA reset current at a 39-nm contact size. The current density will be less than 10 MA/cm^2.
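A quick estimate shows how the quoted numbers relate. Whether the "39-nm contact size" is a diameter, a side length, or an effective electrical area is not stated, so a circular contact of 39-nm diameter is assumed here; the result is only a geometry-dependent sketch:

```python
import math

# Current density estimate for the quoted Macronix figures: 30 uA of
# reset current through a 39-nm contact. A circular contact of 39-nm
# diameter is an assumption; "contact size" could equally mean a side
# length or an effective electrical area.
i_reset = 30e-6                        # A
d_contact = 39e-7                      # cm (39 nm)
area = math.pi * (d_contact / 2) ** 2  # cm^2, circular contact assumed
j = i_reset / area                     # A/cm^2
print(f"{j / 1e6:.1f} MA/cm^2")        # ~2.5 MA/cm^2
```

Under this geometry the estimate does land well below the quoted 10-MA/cm^2 ceiling.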
Does Samsung have motivation to move to PCM mass production now, just imagining Samsung has the PCM technology ready? Probably not, since Samsung has the biggest DRAM/NAND memory market share and there is no other big player in PCM.
Mr. Lowrey is still at it, Mr. Bauer, and that is why you are still posting as "Volatile Memory." You don't like the fact that Lowrey has the Edison perspiration factor and has savored success. What is your claim to fame?
I quite agree.
And I presume that Samsung engineers/managers understand it as well.
But you should also consider that large teams were working on PCM development, so now they cannot simply admit that their bet has lost (and lose face doing it; it is an Asian company, after all).
So some activity will continue, at least until a major reshuffle in leadership happens and new managers can scrap the project (to everybody's relief).