PORTLAND, Ore.—By replicating the functions of neurons, synapses, dendrites and axons in the brain using special-purpose silicon circuitry, IBM claims to have developed the first custom cognitive computing cores that bring together digital spiking neurons with ultra-dense, on-chip, crossbar synapses and event-driven communication.
IBM's effort is the crowning achievement of "phase zero" and "phase one" contracts with the Defense Advanced Research Projects Agency (DARPA) to build Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE). IBM and its university partners (Columbia University, Cornell University, the University of California-Merced, and the University of Wisconsin-Madison) now enter "phase two," which extends their efforts for another 18 months with a new infusion of $21 million in funding. Including that sum, DARPA has funded the project with $41 million to date.
The eventual goal is to create a brain-like 10 billion neuron, 100 trillion synapse cognitive computer with comparable size and power consumption to the human brain.
"We want to extend and complement the traditional von Neumann computer for real-time uncertain environments," said Dharmendra Modha, project leader for IBM Research. "Cognitive computers must integrate the inputs from multiple sensors in a context-dependent fashion in order to close the real-time sensory-motor feedback loop."
Though IBM claims its custom cognitive computing cores are the first of their kind, a rival European program using conventional ARM cores, called SpiNNaker (for spiking neural network architecture), was announced last month.
Traditional von Neumann computers are ill-equipped to deal with the multiple simultaneous data streams coming in from today's sensors, but brains handle them easily by distributing processing and memory among their neural networks. In particular, sensors feed neurons down input lines called dendrites.
IBM's Cognitive Computing Chip, at about 3 mm wide, has demonstrated the ability to play (and win) against a human in the game "Pong" and can also recognize a handwritten numeral 7, even when written in various ways.
The neuron integrates over these inputs until a threshold is exceeded, at which point it fires a pulse down its output axon, which is weighted by the synapses connected to other neurons. Pattern recognition is accomplished by the synapses "learning" which connections are used most often, which causes them to grow stronger, while seldom used connections wither away. In this way, the neural network closes the sensory-motor feedback loop, since once a pattern is recognized from the sensor inputs, the output motor neurons mobilize a response.
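The integrate-and-fire behavior and the use-it-or-lose-it synapse strengthening described above can be sketched in a few lines. This is a generic, illustrative model, not IBM's actual circuit; all constants (threshold, leak, learning rate) are assumptions chosen for the sketch.

```python
# Minimal sketch of a leaky integrate-and-fire neuron with Hebbian-style
# weight updates: dendritic inputs accumulate on the membrane potential
# until a threshold is crossed, the neuron fires, and the synapses that
# carried the contributing inputs grow stronger while unused synapses
# slowly wither away. Not IBM's design; all parameters are illustrative.

def step(potential, inputs, weights, threshold=1.0, leak=0.9, lr=0.05):
    """Advance one time step.

    potential : current membrane potential
    inputs    : list of 0/1 spikes arriving on the dendrites
    weights   : synaptic weights, one per input line
    Returns (new_potential, fired, new_weights).
    """
    # Integrate the weighted dendritic input, with a leak term.
    potential = leak * potential + sum(w * x for w, x in zip(weights, inputs))
    fired = potential >= threshold
    if fired:
        potential = 0.0  # reset after the output spike
        # Strengthen synapses whose inputs led to this spike;
        # decay the synapses that stayed silent.
        weights = [w + lr if x else w * (1 - lr)
                   for w, x in zip(weights, inputs)]
    return potential, fired, weights
```

Driving the neuron repeatedly with the same input pattern strengthens exactly the synapses carrying that pattern, which is the mechanism the article credits for pattern recognition.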
IBM replicates the brain's architecture by using a crossbar array to hold the synapses, which then learn which sensory patterns correspond to which desired motor control outputs. The crossbar array connects the neurons to sensor inputs by integrating over a large fan-in of dendrites, then firing output pulses down axons which feed individual synaptic connections to the other neurons in the network.
"Synapses are realized with a crossbar array, in which the vertical lines are the input dendrites and horizontal lines are the output axons," said Modha. "Each neuron fires in order to communicate with the other neurons which fully integrates memory with processor, instead of separating them like von Neumann."
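A toy model of the crossbar Modha describes, with one digital weight stored at each axon/dendrite crossing, might look like the following. The class and its dimensions are hypothetical, for illustration only; the point is that the synaptic memory sits at the same location where the summation happens, rather than in a separate memory bank as in a von Neumann machine.

```python
# Illustrative digital crossbar: horizontal lines are output axons,
# vertical lines are input dendrites, and the value stored at each
# crossing is a synaptic weight. When an axon spikes, every dendrite
# it crosses picks up that axon's stored weight, so memory and
# processing are fused at the crossing. Sizes/values are made up.

class Crossbar:
    def __init__(self, n_axons, n_dendrites):
        # One digital weight per axon/dendrite crossing.
        self.w = [[0] * n_dendrites for _ in range(n_axons)]

    def set_weight(self, axon, dendrite, value):
        self.w[axon][dendrite] = value

    def propagate(self, axon_spikes):
        """Given a 0/1 spike per axon for this time step, return the
        summed synaptic input delivered to each dendrite."""
        n_dendrites = len(self.w[0])
        totals = [0] * n_dendrites
        for axon, spiked in enumerate(axon_spikes):
            if spiked:  # event-driven: only spiking axons contribute
                for d in range(n_dendrites):
                    totals[d] += self.w[axon][d]
        return totals
```

Because propagation is event-driven, silent axons cost nothing, which is one reason spiking architectures are pitched as power-efficient.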
I'm disappointed EE Times doesn't have something more technical about the architecture here. Are the weights stored in registers? The device is said to be all digital, so doesn't an update cycle require a lot of flops? Isn't a fan-out of 10,000 pretty hard to drive? Why is 10 Hz an adequate update rate? (It certainly doesn't match real neurons.)
If this thing is structured like an xbar, does that mean there's some kind of multiplier at each crossing? (The weights and summation are digital, right?)
A pure digital chip that is supposed to emulate neurons, synapses and dendrites? As mdkosloski said, they might as well just do the whole thing in software. Oh wait, they did.
So I guess the point of building the chip was just so they could eventually build a 10B neuron/100T synapse machine the size of a shoe box? They could simulate that in software too...it just takes more computers to do it.
Mr Rbtrob- There were two parts to the Stanford synapse experiment. The first was to demonstrate 100-level resolution. The second was to demonstrate that the synapse characteristics could be reproduced; for this they used 15 pulse trains. I reduced it to fewer for the purpose of my explanation and illustration. I think the use of epitaxial regrowth of the same crystals, as illustrated in one of my figures, is the way to obtain reproducibility and 100-level resolution. However, that path tends to lead to the conclusion that PCRAM might offer a superior solution.
I dunno, it's hard to tell what you've taught a neural circuit. It's easy teaching it the difference between a bruised and unbruised apple. I once heard the story of how they tried to teach a neural program to detect tanks on a field. They showed it fields with and without tanks. When it came to a field trial, it failed spectacularly. Going back over the input data, the best guess is that they had taught it how to tell a sunny day from a cloudy day, which turned out to be the difference between when the tanks were on the field and when they were not.
On reading your explanation of what Stanford had demonstrated, I was amazed that they could perform a resistance change using such a large number of pulses AND, if I understand your analysis, get the device to repeat the cycle enough to demonstrate a workable functionality.
Now the question is whether IBM is pursuing Stanford's scheme or some other? Also, could directional current pulses be used to enable both additive and subtractive resistance changes?
This is IBM's first generation device, intentionally created to transfer its supercomputer simulations to a hardware platform. As their simulations become more detailed, IBM will have to deal with all the mentioned issues going forward. (And yes, it is relatively easy to program a computer to play Pong or recognize a numeral, which is why these were good metrics for a very simple chip learning a task on its own.)