This phase-change memory (PCM) progress report explores the use of PCM to emulate a component of the brain, the synapse, in an impressive piece of work from Stanford University.
While many have claimed that amorphous memories and threshold switches might be able to emulate functions of a neural network, this latest work from Stanford is a serious, device-based physical demonstration. The experimental evidence presented supports the authors' claim that, to their knowledge, "this is the first demonstration of a single element electronic synapse with the capability of the modulation of the time constant and the realization of the different STDP (Spike-timing Dependent Plasticity) kernels." If, going forward, the dreams of neural-network emulation are to be fully realized, the challenges to PCM device designers in terms of precision, discrimination and scaling will far exceed anything that has been accomplished to date.
Linking PCM and synapse
As illustrated in Figure 1, the synapse is a complex connector that controls the passage of neural messages from one neuron to the next. As part of the learning process, it functions by combining and remembering the timing of events (spikes) in the two neurons it links, called the pre-synaptic and post-synaptic neurons.
Also shown in Figure 1 is the PCM device structure that was chosen for this experimental work. Readers familiar with the trials and tribulations of the attempts at commercializing PCM will recognize this as the familiar "mushroom," "dome," or "barrel" structure, with or without a heater electrode. The PCM emulation devices used have a contact diameter of 75nm and, from the bottom up, a W-TiN-xtal GST-TiN structure.
In the brain, the synapse learning process is facilitated by STDP of the synapse. While the focus here will be on the PCM performance and achievements, it might be useful to relate the terms used in the study of brain activity with PCM electronics. These are shown in Table 1.
The synapse is an important component because, in the human brain, it is estimated there are some 10^15 synapses connecting some 10^11 neurons in a three-dimensional network — on average, roughly 10^4 synapses per neuron.
The form of the synapse characteristic that must be emulated is shown in Figure 2. The vertical axis is the learned weight of the connection, while the horizontal axis is the time by which a post-synaptic neuron spike precedes or lags a pre-synaptic neuron spike, as shown in the insets to Figure 2, producing the curves to the left or right of the vertical axis respectively.
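STDP curves of the form shown in Figure 2 are commonly modeled as a pair of exponentials in the spike-timing difference. As a rough illustration only — the amplitudes and time constants below are generic textbook assumptions, not values from the Stanford work — the weight change as a function of dt = t_post − t_pre can be sketched as:

```python
import math

def stdp_kernel(dt_ms, a_plus=1.0, a_minus=1.0, tau_plus=20.0, tau_minus=20.0):
    """Classic exponential STDP kernel (illustrative parameters).

    dt_ms = t_post - t_pre in milliseconds.
    Positive dt (pre-synaptic spike arrives first) -> potentiation (+);
    negative dt (post-synaptic spike arrives first) -> depression (-).
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

# Pre spike 10 ms before post spike: weight increases (~ +0.61)
print(stdp_kernel(10.0))
# Post spike 10 ms before pre spike: weight decreases (~ -0.61)
print(stdp_kernel(-10.0))
```

Varying tau_plus and tau_minus changes the width of the two lobes, which is the "modulation of the time constant" the authors claim their single PCM element can realize.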
The drift can be compensated with a reference, but each new data input needs a new reference, so the costs, power and delays add up that way.
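PCM resistance drift is usually described by a power law, R(t) = R0 * (t/t0)^nu, with a drift exponent nu typically in the 0.01–0.1 range. A minimal sketch under that model (the resistances and exponent below are illustrative assumptions, not measured values) shows why a reference cell helps: if the data and reference cells share the same drift exponent, their ratio stays constant even as both absolute resistances drift.

```python
def drifted_resistance(r0, t, t0=1.0, nu=0.05):
    """Power-law PCM drift model: R(t) = R0 * (t/t0)**nu."""
    return r0 * (t / t0) ** nu

# Two cells programmed to a 4:1 resistance ratio at t0 = 1 s (assumed values).
data_r0, ref_r0 = 4.0e5, 1.0e5

for t in (1.0, 1e3, 1e6):  # seconds after programming
    data_r = drifted_resistance(data_r0, t)
    ref_r = drifted_resistance(ref_r0, t)
    # Absolute resistances grow with time, but the ratio remains 4.0
    # because both cells drift with the same exponent.
    print(f"t={t:>9.0e}s  data={data_r:.3e}  ref={ref_r:.3e}  ratio={data_r/ref_r:.2f}")
```

The catch, as noted above, is that cells programmed at different times (or into different phases, with different nu) do not track one another, so each new data write needs its own matched reference.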
One hundred levels in flash is, I think, not doable; even eight levels (3 bits) is borderline. There are too few electrons per level to tell the difference against random variation.
Resistion — you could be right; I guess it is up to someone to prove the point experimentally. Arm-waving solutions and claims of brain functions, or of any other aspect of PCM development, are no longer acceptable.
The 100-level resolution (1%) for flash might result in some drift. In a modern scaled flash, how many electrons does that involve for each step? It is even possible that if all devices in an emulated synapse network drift together, the relative learned experience of the neural network is unchanged; others will have to answer that question. Drift tolerance is the feature that IBM uses in its MLC-PCM. Both IBM-MLC and PCM-Onyx were originally part of this Progress Report; I understand that material will now appear in the next PCM Progress Report, #5, in the near future.
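The electrons-per-step question above can be put in back-of-the-envelope terms. Assuming, purely for illustration, that a deeply scaled floating gate holds a few hundred electrons across its full threshold window (published estimates for scaled NAND run from tens to hundreds), dividing that window into 100 levels leaves only a handful of electrons separating adjacent levels:

```python
# Back-of-the-envelope: electrons separating adjacent levels in a
# multi-level flash cell. The total-electron figure is an assumed,
# illustrative number, not a measurement of any specific technology.
def electrons_per_level(total_electrons, levels):
    """Electrons per step if the full charge window is divided evenly."""
    return total_electrons / (levels - 1)

total = 500  # assumed full-window electron count (illustrative)
for levels in (2, 8, 100):
    print(f"{levels:>3} levels -> {electrons_per_level(total, levels):6.1f} electrons per step")
```

With 100 levels the step is only about five electrons under this assumption, which is why shot noise and random telegraph noise make 1% resolution look so doubtful.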
I asked Ron Neale to take a look at some of the current work being done in PCM and he took the time to analyze some work being done by Stanford University. Look for more on PCM, with a new progress report coming in a week or two.