I'm disappointed EE Times doesn't have something more technical about the architecture here. Are the weights stored in registers? The device is said to be all digital, so doesn't an update cycle require a lot of FLOPs? Isn't a fan-out of 10,000 pretty hard to drive? And why is 10 Hz an adequate update rate? (It certainly doesn't match real neurons.)
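For concreteness, here's a back-of-envelope sketch of what an all-digital update cycle would cost, plugging in the fan-out and update rate from my questions above plus an assumed neuron count. All of these figures are my assumptions, not from the article:

```python
# Back-of-envelope cost of an all-digital update cycle.
# All figures below are assumptions for illustration, not from the article.

NEURONS = 1_000_000   # assumed neuron count for the chip
FAN_OUT = 10_000      # fan-out quoted above
UPDATE_HZ = 10        # update rate quoted above

# Each synapse event is roughly one multiply-accumulate (2 ops) if
# weights are multi-bit; binary spikes would reduce it to one add.
ops_per_cycle = NEURONS * FAN_OUT * 2
ops_per_second = ops_per_cycle * UPDATE_HZ

print(f"ops per update cycle: {ops_per_cycle:.2e}")   # 2.00e+10
print(f"ops per second:       {ops_per_second:.2e}")  # 2.00e+11
```

Even 2e11 ops/s is within reach of commodity hardware if every synapse is touched every cycle, which is exactly why I'd like to know what the dedicated silicon buys (event-driven designs that only pay for synapses that actually fire would change the math).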
If this thing is structured like a crossbar, does that mean there's some kind of multiplier at each crossing? (The weights and summation are digital, right?)
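My mental model, for what it's worth: with binary spikes, the "multiplier" at each crossing degenerates into a gated add, so the whole array is just a matrix-vector product. A minimal sketch, where the array size and weight width are made-up assumptions:

```python
import numpy as np

# Minimal sketch of a digital crossbar: rows are input axons, columns
# are output neurons. With binary (0/1) spikes, each "multiplier" at a
# crossing reduces to a gated add of the stored weight.
# Dimensions and weight width are made up for illustration.

rng = np.random.default_rng(0)

AXONS, NEURONS = 256, 256
weights = rng.integers(-8, 8, size=(AXONS, NEURONS))  # e.g. 4-bit signed weights
spikes = rng.integers(0, 2, size=AXONS)               # binary spike vector

# The full matrix-vector product...
dendritic_sum = spikes @ weights

# ...equals summing only the rows whose axon fired (no multiplies needed):
gated_sum = weights[spikes == 1].sum(axis=0)

assert np.array_equal(dendritic_sum, gated_sum)
print(dendritic_sum[:8])
```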
A pure digital chip that is supposed to emulate neurons, synapses and dendrites? As mdkosloski said, they might as well just do the whole thing in software. Oh wait, they did.
So I guess the point of building the chip was just so they could eventually build a 10B-neuron/100T-synapse machine the size of a shoebox? They could simulate that in software too; it just takes more computers to do it.
Mr. Rbtrob: There were two parts to the Stanford synapse experiment. The first was to demonstrate 100-level resolution; the second, to demonstrate that the synapse characteristics could be reproduced, for which they used 15-pulse trains. I reduced it to fewer pulses for the purpose of my explanation and illustration. I think epitaxial regrowth of the same crystals, as illustrated in one of my figures, is the way to obtain reproducibility and 100-level resolution. However, that path tends to lead to the conclusion that PCRAM might offer a superior solution.
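To make the pulse-train point concrete, here is the kind of toy model I have in mind. The level count matches the 100-level demonstration, but the one-level-per-pulse step is my simplification, not Stanford's measured behavior:

```python
# Toy model of programming a synaptic conductance with pulse trains.
# 100 discrete levels; each pulse nudges the device one level toward
# its bound. The step size is an assumption for illustration, not
# Stanford's measured device response.

LEVELS = 100

def apply_pulses(level, n_pulses, direction=+1):
    """Apply n identical pulses, each moving one level, clamped to [0, LEVELS-1]."""
    for _ in range(n_pulses):
        level = max(0, min(LEVELS - 1, level + direction))
    return level

level = 0
level = apply_pulses(level, 15)    # a 15-pulse train, as in the experiment
print(level)                       # 15: resolvable against 100 levels
level = apply_pulses(level, 200)   # overdriving just saturates at the bound
print(level)                       # 99
```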
I dunno, it's hard to tell what you've actually taught a neural circuit. It's easy enough to teach it the difference between a bruised and an unbruised apple. But I once heard the story of how they tried to teach a neural program to detect tanks in a field. They showed it fields with and without tanks. When it came to a field trial, it failed spectacularly. Going back over the input data, the best guess was that they had taught it to tell a sunny day from a cloudy day, which happened to be the difference between the photos taken with tanks on the field and those without. (A toy version of this failure is sketched below.)
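Here's the toy version, with synthetic data standing in for the photos; all the brightness numbers are invented purely to show the confound:

```python
import numpy as np

# Toy reproduction of the "tanks on a field" story: in training, tanks
# only appear in sunny photos, so mean brightness perfectly predicts
# the label and a lazy learner never looks at the tank itself.
# All numbers are synthetic, purely to illustrate the confound.

rng = np.random.default_rng(42)

def scenes(n, sunny, tank):
    base = rng.normal(0.8 if sunny else 0.3, 0.05, n)  # sky brightness dominates
    return base + (0.01 if tank else 0.0)              # tank barely moves the mean

# Training set: tanks photographed only on sunny days, empty fields on cloudy days.
train_x = np.concatenate([scenes(100, sunny=True, tank=True),
                          scenes(100, sunny=False, tank=False)])
train_y = np.array([1] * 100 + [0] * 100)

# "Learn" a brightness threshold (midpoint of the class means).
threshold = (train_x[train_y == 1].mean() + train_x[train_y == 0].mean()) / 2
print("train accuracy:", ((train_x > threshold) == train_y).mean())  # ~1.0

# Field trial: tanks on a cloudy day, empty field on a sunny day.
test_x = np.concatenate([scenes(100, sunny=False, tank=True),
                         scenes(100, sunny=True, tank=False)])
test_y = np.array([1] * 100 + [0] * 100)
print("test accuracy: ", ((test_x > threshold) == test_y).mean())    # ~0.0
```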
On reading your explanation of what Stanford demonstrated, I was amazed that they could effect a resistance change using such a large number of pulses and, if I understand your analysis, get the device to repeat the cycle often enough to demonstrate workable functionality.
Now the question is whether IBM is pursuing Stanford's scheme or some other approach. Also, could directional current pulses be used to enable both additive and subtractive resistance changes?
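To illustrate that second question: if pulse polarity maps to the sign of the resistance step, the programming rule is just a signed, clamped update. The device model below is hypothetical, for illustration only, not anything IBM or Stanford has published:

```python
# Sketch of bidirectional programming: positive-polarity pulses lower
# resistance (potentiation), negative-polarity pulses raise it
# (depression). The device model and values are hypothetical.

R_MIN, R_MAX, STEP = 1_000.0, 100_000.0, 1_000.0  # ohms; assumed values

def pulse(resistance, polarity):
    """One programming pulse; polarity is +1 (set) or -1 (reset)."""
    r = resistance - polarity * STEP
    return max(R_MIN, min(R_MAX, r))

r = 50_000.0
for _ in range(5):
    r = pulse(r, +1)   # five positive pulses: additive conductance change
print(r)               # 45000.0
for _ in range(8):
    r = pulse(r, -1)   # eight negative pulses walk it back up
print(r)               # 53000.0
```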
This is IBM's first-generation device, intentionally created to transfer its supercomputer simulations to a hardware platform. As the simulations become more detailed, IBM will have to deal with all of the issues mentioned here going forward. (And yes, it is relatively easy to program a computer to play Pong or recognize a numeral, which is exactly why these were good metrics for a very simple chip learning a task on its own.)