Yes, I think IBM's motivation was to implement a conventional neural network architecture, but one that minimized the necessary hardware, thus the analog summing nodes. IBM says its neuron can be implemented with just 1272 gates--pretty economical.
Interesting how similar this architecture is to the old familiar CPLD architectures. Inputs are connected to AND terms and then 'summed' via OR terms. Outputs are routed to a connection matrix. Really the only change is the switch from digital logic to an 'analog' summing neuron. Makes me think a digital version would be much easier to experiment with using an FPGA/CPLD architecture. Perhaps that was phase 0 of the project...
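To make the analogy concrete, here is a minimal sketch (my own illustration, not IBM's design) of the "digital version" idea: a summing neuron you could prototype in an FPGA/CPLD flow, where weighted inputs are accumulated and compared against a threshold instead of flowing through AND/OR product terms.

```python
# Hypothetical sketch: a digital summing neuron. Weighted inputs are
# accumulated and compared to a threshold, replacing the CPLD-style
# AND/OR terms with arithmetic.
def digital_neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three binary inputs with integer weights and a threshold of 4.
print(digital_neuron([1, 0, 1], [2, 3, 2], 4))  # prints 1 (2 + 0 + 2 >= 4)
print(digital_neuron([0, 0, 1], [2, 3, 2], 4))  # prints 0 (2 < 4)
```

In hardware this is just an adder tree and a comparator, which is why the gate count per neuron can stay so low.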
Corelets is talking about two languages: one that programs the hardware and another that programs the software. I think this thread will merge with multiprocessing and parallel-processing techniques, since Corelets is dealing with many decision-making elements--tiny processors, we could say.
Yes, neural networks dropped off our radar in the trade press, because all the startup companies either failed or were absorbed by larger corporations who only used their technology for special purposes. However, the International Joint Conference on Neural Networks has continued to make slow, steady progress--especially in the learning methods you mention, which have become quite sophisticated. And now that IBM is backing them, we should finally see the dream start to materialize. By the way, HRL's Center for Neural and Emergent Systems (CNES) is also in DARPA's SyNAPSE program. HRL is using memristors as its artificial synapses for learning.
Matt, thanks for your insightful roundup of neural network history and the remarkable opportunity they offer to answer some of the world's most profound questions about the brain and human consciousness. Of course, these questions will not be answered anytime soon, but at least there is now light at the end of the tunnel :)
I'm not sure that 'program' is the right word to use here. This type of neural network typically needs to be taught rather than programmed. In past efforts of this type, the emphasis was on methods for adjusting the reaction of specific simulated neurons to learning input, and on whether you have a separate learning phase or keep learning on while the net is in live operation.
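The taught-versus-programmed distinction can be shown with the classic perceptron learning rule, one of the earliest methods for adjusting a simulated neuron's response to examples (an illustration of the general idea, not IBM's training method):

```python
# Illustrative sketch: the classic perceptron learning rule. Each training
# example nudges the weights toward the desired response -- the neuron is
# taught its behavior rather than having it coded in.

def predict(weights, bias, x):
    """Threshold the weighted sum: fire (1) or stay quiet (0)."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

def train_step(weights, bias, x, target, lr=0.1):
    """One teaching step: shift weights in proportion to the error."""
    error = target - predict(weights, bias, x)
    weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    return weights, bias + lr * error

# Teach a neuron the AND function from examples, not from code.
weights, bias = [0.0, 0.0], 0.0
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):
    for x, target in examples:
        weights, bias = train_step(weights, bias, x, target)
```

Whether you run such updates once in a separate training phase or continuously during live operation is exactly the design choice you mention.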
The last time neural network research was in vogue there were some significant advances, but then it dropped out of the press. I remember the observation at the time that 'artificial intelligence' is what a technique is called only while it is unproven. Once it actually works, it just becomes an engineering design method.
Awesome! We started dreaming of analog neural architectures back in the late '80s ... but were severely limited by the processes, EDA tools, and lack of programming support. The first book I have is Analog VLSI Implementation of Neural Systems by Carver Mead and Mohammed Ismail. That ... and a couple of professors at Indiana University (Prof. Jonathan Mills - Lukasiewicz logic arrays, and Gregory Rawlins - genetic algorithms) inspired my first design and implementation of a simple modular architecture similar in concept to IBM's ... back in 1989. That inspired an IEEE best-selling book based on our varied implementations of controllers for a memory-wire-controlled 'stiquito' hexapod.

I had hypothesized that there must be a continuous path from the single-cell neuron controller to a cilium, expanding fractally for multi-segment articulated creatures ... via a natural evolution of neural loops. There was no other way - since the primitive creatures had no controller algorithm. What was amazing was that the pseudo-evolved neural loops resulted in a set of gaits which exactly matched those seen in nature! Expanding on that evolutionary path led to an overall architecture of the mind, based on said loops. However, at a point of complexity, the loops are not hard-coded in genes, but rather acquired ... with a little boosting from the core loops.

The idea of consciousness and qualia existing as a set of active loops, competing in the non-physical domain for the 'attention loop', was further supported by Alwyn Scott's soliton idea. There was a heated debate between him and the Stuart Hameroff team - who believe that consciousness resides in quantum-coherent states maintained by millions of microtubules in each neuron. Now ... we shall have the potential to prove Alwyn Scott right (although we can not prove him nor Stuart wrong). To prove Hameroff/Penrose right, we would need to add nano-structures capable of holding quantum states ...
which is also possible now (i.e., FinFET quantum tunneling). So - who will win? Zombie deterministic machines? Or ethereal universe-harmonizing super-Gödel pan-dimensional Schrödinger quantum spirit-minds? Unfortunately - both are capable of sentience. But the first one will kill you ... eventually. That is - it will lack the essence of free will, pure creativity, and any connection to God. imho.
Matt - Sr. Intel Engineer ... worrying about nano-level transistor effects.
Yes, neural networks have been knocking around for a couple decades. In fact, back then I wrote a book for John Wiley and Sons--"Cognizers: Neural Networks and Machines that Think" (which I revised recently). IBM is inching closer to the dream of cognitive computers--what I called cognizers--but it will still be many years before they mature enough to go mainstream.
This is my understanding of how this will work after a cursory look at IBM's documents.
Basically, they seem to have built a very efficient, simple, but very flexible and generic neural network. These are used in two ways.
One way, you use predefined IBM blocks emulated by the generic network. These include, according to the article, "scalar functions, algebraic, logical, and temporal functions, splitters, aggregators, multiplexers, linear filters, kernel convolution (1D, 2D and 3D data), finite-state machines, non-linear filters, recursive spatio-temporal filters, motion detection, optical flow, saliency detectors and attention circuits, color segmentation, a Discrete Fourier Transform, linear and non-linear classifiers, a Restricted Boltzmann Machine, a Liquid State Machine, and more."
I suspect that using these will be a bit like using some very advanced analog blocks, even though the underlying architecture is digital.
The other way of using the corelets is to define your own blocks, which I think will be for more advanced users.
All these modules can be connected into more complex functions.
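To sketch what wiring modules together might feel like (a toy illustration only -- this is not IBM's actual Corelet API, and the `Block` class and its `then` method are my own invention):

```python
# Toy sketch: named blocks wired together into larger functions, the way
# predefined corelets compose. NOT IBM's API -- just the composition idea.
class Block:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __call__(self, signal):
        return self.fn(signal)

    def then(self, other):
        """Wire this block's output into another block's input."""
        return Block(self.name + "->" + other.name,
                     lambda signal: other(self(signal)))

# Two simple "modules" composed into a larger one.
splitter = Block("splitter", lambda x: (x, x))     # duplicate a signal
adder    = Block("adder", lambda pair: sum(pair))  # merge two signals
doubler  = splitter.then(adder)

print(doubler.name)  # prints "splitter->adder"
print(doubler(21))   # prints 42
```

The appeal is that, as with analog block design, you reason about connections between prebuilt functions rather than about individual instructions.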
It is definitely a paradigm shift from normal digital design and programming, closer to an analog/digital FPGA capable of very complex functions. I've never used Matlab, but from the little I know, a Matlab user would be familiar with at least some of the modus operandi.
Interesting observation. This could be the story of computing: reinvention more than invention. Don't we see this over and again? Reusable code was renamed SOA! Social media bundled age-old technologies into a single platform! Just to name a couple.