Naah, even the Apple II's 1 MHz processor could beat humans at "Pong" and recognize a handwritten numeral 7! This is just another scam on the taxpayer, and IBM should be ashamed! The so-called cognitive computer is nothing more than an underpowered curve-fitting device that gets stuck in local extrema, as IBM knows very well. A multi-core CPU with enough DRAM beats IBM's monstrosity at any task, any time.
You seem to be confused about the difference between programming a computer to beat people at Pong (which obviously happened when the Pong console came out) and having a computer learn to do the same thing. That's fine; being confused is fine. Matched with your arrogance, however, your confusion becomes tiresome.
Mr rbtbob - I am aware, and I tried to cover at least one application of a programmable resistance device, the PCM, in neural applications with the work I reported in:
Here is my quote from PCM PR#4 that I think is applicable in light of the present stagnant state of commercial PCM product development:
“If, going forward, the dreams of neural network emulation are to be fully realized, the challenges to PCM device designers in terms of precision, discrimination and scaling will exceed, by far, anything that has been accomplished to date.”
A quote that is also applicable to all programmable resistance devices, including CBRAM and ReRAM. Also, with respect, I think you should be reminded that for the synapse, the timing between pre- and post-synaptic pulses, as well as the conduction change as a function of usage, is important.
PCM brings nothing to the so-called cognitive chip. Even if the cognitive chip made sense (which it does not), its value would lie in the connectivity per square inch (i.e., the number of "synapses"), not in the storage/counting media.
On reading your explanation of what Stanford had demonstrated, I was amazed that they could perform a resistance change using such a large number of pulses AND, if I understand your analysis, get the device to repeat the cycle enough times to demonstrate workable functionality.
Now the question is whether IBM is pursuing Stanford's scheme or some other. Also, could directional current pulses be used to enable both additive and subtractive resistance changes?
I can't help but find it a bit odd that IBM is still trying to duplicate probabilistic, over-complete, non-orthogonal, impulse-integration systems using perfectly ordered and organized grids of binary devices. Might as well write the whole thing in software at that point. Biological neurons don't send signals along one or two defined routes; rather, signals travel in many directions, randomized from neuron to neuron, often including back to the neurons that originated them. It is interesting to note, though, that their learning algorithm does strengthen or "prune" pathways based on use.
Just to keep things straight: almost all neurons send OUT only one signal, along one axon. The axon branches at the end and connects to the dendrites of many other neurons. Neurons may have thousands of dendrites receiving signals from other neurons (or receptors). Axons and synapses are like PCM: there is no possible way they could actually work :-)
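The topology described above (many dendritic inputs integrated into a single axonal output) can be sketched with a toy leaky integrate-and-fire model. This is purely illustrative; the class name, threshold, and leak constants are invented for the example and are not IBM's actual design.

```python
# Toy sketch of the neuron topology described above: many "dendrites"
# in, one spike out along a single "axon". All constants are invented.

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold  # fire when potential crosses this
        self.leak = leak            # decay factor applied each time step

    def step(self, weighted_inputs):
        """Integrate many dendritic inputs; emit one axonal output."""
        self.potential = self.potential * self.leak + sum(weighted_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # the single spike, fanned out downstream
        return 0
```

The single return value is the point: however many inputs arrive, only one signal leaves, and it is the downstream wiring (the axon's branches) that distributes it.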
This is IBM's first generation device, intentionally created to transfer its supercomputer simulations to a hardware platform. As their simulations become more detailed, IBM will have to deal with all the mentioned issues going forward. (And yes, it is relatively easy to program a computer to play Pong or recognize a numeral, which is why these were good metrics for a very simple chip learning a task on its own.)
I dunno, it's hard to tell what you've taught a neural circuit. It's easy enough to teach it the difference between a bruised and an unbruised apple. I once heard the story of how they tried to teach a neural program to detect tanks in a field: they showed it fields with and without tanks. When it came time for a field trial, it failed spectacularly. Going back over the input data, the best guess was that they had taught it to tell a sunny day from a cloudy day, which happened to be the difference between the photos with and without tanks.
Mr Rbtrob - There were two parts to the Stanford synapse experiment. The first was to demonstrate 100-level resolution. The second was to demonstrate that the synapse characteristics could be reproduced; for this they used 15-pulse trains. I reduced it to fewer pulses for the purpose of my explanation and illustration. I think the use of epitaxial regrowth of the same crystals, as illustrated in one of my figures, is the way to obtain reproducibility and 100-level resolution. However, that path tend to lead to the conclusion that PCRAM might offer a superior solution.
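The multi-level programming described above (a train of small pulses nudging the cell toward a target resistance, with verification between pulses) can be sketched abstractly. The fixed per-pulse decrement and the numbers below are invented for illustration; real device responses are nonlinear and noisy, which is exactly why 100-level resolution is hard.

```python
# Abstract program-and-verify sketch of pulse-train programming.
# The idealized per-pulse response (a fixed multiplicative decrement)
# is an assumption for illustration, not a measured device model.

def program_and_verify(target, resistance=100.0, decrement=0.8,
                       tol=0.5, max_pulses=200):
    """Apply 'set' pulses until resistance is within tol of target.
    Returns (pulses_used, final_resistance)."""
    pulses = 0
    while resistance > target + tol and pulses < max_pulses:
        resistance *= decrement  # each pulse partially crystallizes the cell
        pulses += 1
    return pulses, resistance
```

Note the overshoot on the final pulse: with a coarse per-pulse step, the loop lands below the target band rather than in it, which is the precision/discrimination problem the quoted PR#4 passage is pointing at.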
A pure digital chip that is supposed to emulate neurons, synapses and dendrites? As mdkosloski said, they might as well just do the whole thing in software. Oh wait, they did.
So I guess the point of building the chip was just so they could eventually build a 10B neuron/100T synapse machine the size of a shoe box? They could simulate that in software too...it just takes more computers to do it.
I'm disappointed EE Times doesn't have something more technical about the architecture here. Are the weights stored in registers? The device is said to be all digital, so doesn't an update cycle require a lot of flops? Isn't a fan-out of 10,000 pretty hard to drive? And why is 10 Hz an adequate update rate? (It certainly doesn't match real neurons.)
If this thing is structured like a crossbar, does that mean there's some kind of multiplier at each crossing? (The weights and summation are digital, right?)
I have just been asked to explain the final sentence in my posting above; there was a typo. It should have read: "However, that path tends to lead to the conclusion that CBRAM might offer a superior solution." I meant the Conducting Bridge RAM, which offers, by electrochemical action, the precision to add and remove individual layers of atoms, without the melting, high temperatures, and high current densities associated with "reset" in conventional PCM. Apologies.