Even though the final cognitive computers will have billions of neurons, they will consume power only when a neuron fires, and neurons fire at the comparatively glacial rate of 10 Hz. As a result, an entire brain-sized cognitive computer could fit into a shoebox and consume less than a kilowatt.
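The power claim follows from event-driven operation: if energy is spent only on spike events, the power budget divides across the total spike rate. A back-of-envelope check, using round figures assumed for illustration (10 billion neurons, a 10 Hz mean firing rate, a 1 kW budget):

```python
# Back-of-envelope check of the event-driven power claim.
# All figures are illustrative round numbers, not IBM specifications.
neurons = 10_000_000_000   # brain-scale neuron count
rate_hz = 10               # mean firing rate per neuron
power_w = 1_000            # assumed power budget (~1 kW)

spikes_per_second = neurons * rate_hz           # total spike events per second
energy_per_spike_j = power_w / spikes_per_second

print(f"{spikes_per_second:.0e} spikes/s")              # 1e+11 spikes/s
print(f"{energy_per_spike_j * 1e9:.0f} nJ per spike")   # 10 nJ per spike
```

At roughly 10 nJ per spike event, the budget is plausible precisely because most of the machine is idle at any instant.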
IBM showed two working prototype chips, both completely digital, which it hopes will serve as the cores of future cognitive computers, in which thousands of such cores will be integrated on a single multi-core chip.
"A key intellectual step forward was that our chips are all digital, allowing us to simulate on a supercomputer and then implant the results on a silicon chip, resulting in predictable, deterministic behavior," said Modha.
The two prototypes each use a few million transistors to implement a single core housing just 256 neurons, occupying less than four square millimeters in IBM's 45-nanometer silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) process. The only difference between the two test cores was in their use of the interconnecting crossbar array, either as 256k pre-programmable synapses or as 64k learning synapses. The chips were fabricated at IBM's facility in Fishkill, N.Y., and are currently being tested at the T.J. Watson Research Center in Yorktown Heights, N.Y. and at IBM Research in San Jose, Calif.
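One way to picture such a core is as a crossbar of axon rows crossing neuron columns, with a synaptic weight at each crossing; on each discrete time step, every neuron integrates the weights of its active input rows and fires when it crosses a threshold. The sketch below assumes simple leaky integrate-and-fire dynamics with binary weights; the parameters and connectivity are illustrative, not IBM's actual design:

```python
import numpy as np

# Illustrative sketch (not IBM's design) of a digital crossbar core:
# N axon rows by N neuron columns, one synaptic weight per crossing.
N = 256
rng = np.random.default_rng(0)
weights = rng.integers(0, 2, size=(N, N))   # binary crossbar connectivity
leak, threshold = 1, 20                     # assumed dynamics parameters

potential = np.zeros(N, dtype=int)          # per-neuron membrane potential

def tick(active_axons):
    """One discrete time step: integrate active inputs, leak, fire, reset."""
    global potential
    potential += weights[active_axons].sum(axis=0)  # sum inputs per neuron
    potential -= leak                               # constant leak term
    fired = potential >= threshold                  # threshold crossing
    potential[fired] = 0                            # reset fired neurons
    return np.flatnonzero(fired)                    # indices of spiking neurons
```

Because the weights and summation are digital, the same update can be simulated bit-exactly on a supercomputer, which is the deterministic behavior Modha describes above.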
In operation, IBM's chips learn from experience after several learning parameters are set. For instance, one parameter is the threshold at which neurons fire after integrating over their multiple inputs, allowing faster but cruder operation when set low, or slower but more refined operation when set high. Then, as the neurons fire, the learning synapses adapt by changing their weights. IBM implements the Hebb rule (after Donald Hebb), whereby the more a synaptic connection from one neuron to another is used, the more conductive it becomes; in this scheme, higher conductivity corresponds to a lower synaptic weight. Seldom used pathways, on the other hand, drift toward higher weights that virtually prune them from the neural network.
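The adaptation rule described above can be sketched as a per-synapse update, keeping the article's convention that a lower weight means a more conductive synapse. The weight range and step size here are assumptions for illustration, not IBM's values:

```python
# Sketch of the Hebbian adaptation described above. Convention from the
# article: LOWER weight = MORE conductive synapse. Constants are assumed.
W_MIN, W_MAX = 0, 15     # hypothetical 4-bit weight range
LEARN_STEP = 1

def hebb_update(weight, pre_fired, post_fired):
    """Strengthen (lower) the weight when pre- and post-synaptic neurons
    fire together; otherwise let it drift up toward pruning."""
    if pre_fired and post_fired:
        return max(W_MIN, weight - LEARN_STEP)   # more conductive
    return min(W_MAX, weight + LEARN_STEP)       # toward virtual pruning

w = 8
for _ in range(3):
    w = hebb_update(w, True, True)
print(w)   # 5: repeated co-activation lowers (strengthens) the weight
```

A seldom-used synapse climbs toward `W_MAX`, at which point it contributes so little that it is effectively pruned from the network.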
IBM envisions its cognitive computers addressing a wide variety of applications in navigation, machine vision, pattern recognition, associative memory and classification. So far it has taught one chip to recognize a cursive "7" regardless of whose handwriting it appears in. The other has learned to play the game "Pong," and to win against human opponents.
I dunno, it's hard to tell what you've taught a neural circuit. It's easy teaching it the difference between a bruised and unbruised apple. I once heard the story of how they tried to teach a neural program to detect tanks on a field. They showed it fields with and without tanks. When it came time for a field trial, it failed spectacularly. Going back over the input data, the best guess is that they had taught it how to tell a sunny day from a cloudy day, which happened to be the difference between the photos with tanks on the field and those without.
Mr Rbtrob- There were two parts to the Stanford synapse experiment. The first was to demonstrate 100-level resolution. The second was to demonstrate that the synapse characteristics could be reproduced; for this they used 15 pulse trains. I reduced it to fewer for the purpose of my explanation and illustration. I think the use of epitaxial regrowth of the same crystals, as illustrated in one of my figures, is the way to obtain reproducibility and 100-level resolution. However, that path tends to lead to the conclusion that PCRAM might offer a superior solution.
A pure digital chip that is supposed to emulate neurons, synapses and dendrites? As mdkosloski said, they might as well just do the whole thing in software. Oh wait, they did.
So I guess the point of building the chip was just so they could eventually build a 10B neuron/100T synapse machine the size of a shoe box? They could simulate that in software too...it just takes more computers to do it.
I'm disappointed eetimes doesn't have something more technical about the architecture here. are the weights stored in registers? the device is said to be all digital, so doesn't an update cycle require a lot of flops? isn't a fan-out of 10,000 pretty hard to drive? why is 10Hz an adequate update rate (it certainly doesn't match real neurons.)
if this thing is structured like an xbar, does that mean there's some kind of multiplier at each crossing? (the weights and summation are digital, right?)