More hype than substance, I'd say, at least when it comes to the claims about a neural network processing unit (NPU) in hardware.
Looking at the braincorp jobs page http://braincorporation.com/index.php/category/opportunities/ these guys are hosted inside Qualcomm and are focused on building robots, robotics algorithms and tools.
They appear to be leveraging Qualcomm's GPUs via OpenGL and OpenCL, along with C/C++, Python and Matlab, to do the NN processing, rather than some kind of specialised NN processor.
I'd say most of the NN work is being done in Matlab, with some kind of back-end generation of C/C++ or possibly OpenGL/OpenCL ES code. The smart approach would be low-level optimised libraries for the GPU, with identical libraries callable from Matlab or Python for rapid development, rather than designing an esoteric compiler.
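As a rough sketch of that layering (purely illustrative: the library name libnn_gpu.so and the entry point nn_dense_tanh below are hypothetical, not anything Qualcomm or Brain Corp have published), the idea is one API with two interchangeable backends:

```python
# One API, two backends: a NumPy reference implementation for rapid
# prototyping, with an optional optimised native (e.g. OpenCL-backed)
# library swapped in when present. Library and symbol names here are
# hypothetical.
import ctypes
import numpy as np

def _numpy_dense_layer(x, w, b):
    """Reference implementation: one fully connected layer with tanh."""
    return np.tanh(x @ w + b)

try:
    _lib = ctypes.CDLL("libnn_gpu.so")  # hypothetical optimised backend

    def dense_layer(x, w, b):
        y = np.empty(w.shape[1], dtype=np.float32)
        _lib.nn_dense_tanh(  # hypothetical C entry point
            x.ctypes.data_as(ctypes.c_void_p),
            w.ctypes.data_as(ctypes.c_void_p),
            b.ctypes.data_as(ctypes.c_void_p),
            y.ctypes.data_as(ctypes.c_void_p),
            ctypes.c_int(w.shape[0]),
            ctypes.c_int(w.shape[1]),
        )
        return y
except OSError:
    dense_layer = _numpy_dense_layer  # fall back to the reference backend

x = np.random.randn(64).astype(np.float32)
w = np.random.randn(64, 16).astype(np.float32)
b = np.zeros(16, dtype=np.float32)
print(dense_layer(x, w, b).shape)  # (16,) with either backend
```

The algorithm code stays identical whichever path it runs on, which is exactly what lets you prototype in Python or Matlab and deploy on the GPU without an esoteric compiler in between.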
I'd guess this may lead to adding NPU support to the ISA of future GPUs at some point, if they really need it.
"The brain is also very power efficient, he explained, consuming only about 20 watts at a cost of under a quarter of a cent per hour, whereas simulating the brain on a conventional von Neumann computer would take up to 50 times more power"
This statement is badly wrong: it is about 5-6 orders of magnitude off in both FLOPS and Watts!
This article reports an 83,000-processor supercomputer being able to deliver about 1% of the calculations performed by the brain, and such supercomputers typically dissipate megawatts!
According to the German supercomputing centre in Juelich, it will take an exaFLOPS machine to simulate the entire brain, in about 2020, with a power budget on the order of 20 MW.
Given that current supercomputers manage only a few GFLOPS/W at best, even this figure is in considerable doubt: it would require well over an order of magnitude improvement in FLOPS/W in the next 10 years, with Moore's law and supply voltages plateauing.
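To put a number on the target those Juelich figures imply: 10^18 FLOPS in a 20 MW envelope works out to 10^18 / (2 x 10^7) = 5 x 10^10 FLOPS/W, i.e. 50 GFLOPS/W.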
This is a long, long way from what Qualcomm claim they can fit in a handset, where 3-4 W is the total for the complete smartphone, including the PA, baseband, WiFi/Bluetooth/GPS etc., the display, the Android OS, and last but not least an applications budget of only 600-700 mW.
Next year Qualcomm will release its suite of software tools that work with an FPGA emulator for developers to use when creating applications for its NPU. Regarding automotive, Purdue University professor Eugenio Culurciello has already shown that recognition of roadside scenes can yield real-time classification into pedestrians, vehicles, buildings, etc., but I suspect it will take a year or two for developers to start making good use of this type of information in collision avoidance and similar automotive applications.
I agree; neural has been on the boards for decades. Motorola had one about 15-20 years ago, but no one ever seems to get it off the ground, because they try to "program" them instead of letting them learn. Power consumption has also been a problem, because they need to be essentially analogue and massively parallel to do real work (see our brain), and that doesn't translate well into silicon. I think some quantum derivative of a neuron is going to be the real solution.
One of the significant questions with neural networks is whether learning is continuous or not. This came up years ago when people started thinking about using them for control systems. If learning is disabled in an operational system, then its utility is certainly limited; but if it is enabled, then you run the risk of it learning something that causes it to give a wrong answer (in some ways these things are very much like people!). This may be less of an issue for an intelligent UI than for a direct control system, but one way or the other we need to understand how it will react. Some of the best Asimov stories revolved around ambiguity in interpreting the Three Laws, for good reason.
When I first saw this I thought, "Rats! Why can't all of the fantastic and very similar work that Eugenio Culurciello has been doing for years under Office of Naval Research (ONR) funding get this kind of press...?" This also shows why one should read figure captions. From said caption on the familiar-looking first figure I finally realized that this _is_ the next phase for Dr Culurciello's amazing chips! Since shutting down the government seems to be the theme-du-jour, it's worth pointing out just how huge a role federal funding plays in giving folks like Dr Culurciello a chance to move the ball on some wild new idea to the point where a large company like Qualcomm can see the potential and catch the pass. That's the kind of teaming where everyone benefits.
The Purdue University professor has obtained very remarkable results; it seems to be a genuinely working technology. The article does not explain much about how the individual NPUs work as neurons, though, and I suspect this technique will have many more roles to play beyond what the article and Qualcomm describe.
This may not be critically important in the overall scheme of things, but since others have called elements of this article into question it is worth noting that Asimov's Zeroth law is incorrectly described. The law that robots may not harm a human is captured in the first law. The Zeroth Law states that a robot may not harm humanity, which is quite a different thing. Anyone interested can get the lowdown at this Wikipedia page: http://en.wikipedia.org/wiki/Three_Laws_of_Robotics
The comments about various libraries, compilers, and other standard digital lore are well off the mark with respect to neural networks. The basic structure of neural function has been known for a long time, even if the details may await future generations. It is analog through and through, and no digital approach will work. John Hopfield and colleagues showed three decades ago how extraordinarily efficient a simple collection of analog processors could be at a test case: solving the traveling salesman problem. Operational amplifiers were all the electronics needed.
So it can certainly be done in CMOS, but it will not be digital, and it will not involve a lot of C++...
First, I'd like to mention that NN chips will be another processor type in a heterogeneous processor system. Like human brains, ANN chips will not be very fast at solving matrix equations and other deterministic problems. Instead they are good at learning to be decision makers; in other words, they can "solve" NP-complete (nondeterministic polynomial time) problems, like the travelling salesman problem you mention, in a much more efficient way than Harvard-architecture computers.
Must they be analog? A good question. Another is, must they be electronic at all? There is a need for the synaptic connections to be analog, or at least to carry a variable signal. Whether electronic or photonic, the signal energy is in fact quantized, and so perhaps there is no such thing as true analog and the only possible signal is a discrete one. As the trace sizes get smaller and smaller, so does the possible number of discrete signal levels. In any case, we don't know how many signal levels are necessary for a usable ANN. Perhaps a relatively small number of signal levels is sufficient, in which case digital synaptic connections are plausible.
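That question of how many levels are enough is easy to poke at numerically. A toy experiment (entirely my own construction, on a synthetic linearly separable task): train a small classifier at full precision, then snap its weights to n discrete levels and re-measure accuracy:

```python
# Toy experiment: how few discrete weight ("synapse") levels does a small
# network need? Train at full precision, then quantise and re-test.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)  # linearly separable labels

# Simple perceptron-style training at full precision.
w = np.zeros(8)
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = float(xi @ w > 0)
        w += 0.1 * (yi - pred) * xi

def quantise(w, levels):
    """Snap each weight to one of `levels` evenly spaced values."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

for levels in (2, 3, 5, 9, 17):
    wq = quantise(w, levels)
    acc = np.mean((X @ wq > 0) == y)
    print(f"{levels:2d} levels: accuracy {acc:.3f}")
```

In this toy setting accuracy typically degrades gracefully as levels are removed; whether that holds for a full-scale ANN is exactly the open question.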
The use of neural nets and "experience stores" allowing users to download expertise into their consumer products is a fascinating topic. I remember downloading neural net software for my MS-DOS computer in the early 1990s. It was the emerging breakthrough technology for solving image recognition problems. Obviously biological neural nets have served mankind very well (our brains). That said, there is an area of concern to me. The very fact that we don't know what cue is being used by the network to solve a problem may be the source of a serious issue. The software cannot be debugged or "certified" as correct. Imagine that we use a neural net algorithm to distinguish apples from oranges. Perhaps it does perfectly. Not knowing the "factor" it is utilizing, we might get very unexpected results under unexpected conditions. Is it using color? If so, did we remember to test with green and yellow apples as well as red ones? Is it looking for stems? Will it mistake round candles for apples? When the variables being used are understood, a simple traditional program can run with fewer resources (and potentially fewer errors under comprehensive testing) than the neural net. I would predict that downloading a variety of "experience" updates to a neural net may result in some unexpected results.
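That apples-and-oranges worry is easy to demonstrate with a toy classifier (all data below is synthetic, and the single "hue" feature is my own contrivance):

```python
# Toy demonstration on synthetic data: a classifier trained only on red
# apples vs oranges quietly learns hue as its cue, then misfires on the
# green apples it never saw during training.
import numpy as np

rng = np.random.default_rng(1)
# Single feature: hue in degrees (red ~10, orange ~30, green ~110).
red_apples   = rng.normal(10, 5, 100)
oranges      = rng.normal(30, 5, 100)
green_apples = rng.normal(110, 5, 100)

# Nearest-centroid "classifier": the learned cue is purely hue.
apple_centroid, orange_centroid = red_apples.mean(), oranges.mean()

def classify(hue):
    if abs(hue - apple_centroid) < abs(hue - orange_centroid):
        return "apple"
    return "orange"

# Test set the trainer never thought about:
results = [classify(h) for h in green_apples]
print("green apples called 'apple':", results.count("apple"), "/ 100")
# Prints 0 / 100: every green apple is labelled an orange, because the
# hidden cue was hue, and green is farther from red than from orange.
```

Nothing in the trained system announces that hue was the factor; you only find out when the unexpected input arrives.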
DrQuine: your points are well taken; after all, an artificial neural network (if successful) is going to have a lot of the same characteristics as a biological one (some of which are just the ones you list!). In the spirit of the mixed system mentioned above, I would say that the best way to use neural network processors is as a feed to a final digital system. Refer again to the TSP: Hopfield's op amps can get the best million (roughly) solutions to a 30-city tour in a few microseconds, but can't go beyond that. But a digital system can now evaluate a million tours very easily (unlike the 10^30 original possibilities). The same principle could be used for image recognition.
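A sketch of that two-stage pipeline in code (with cheap random 2-opt-style proposals standing in for the Hopfield analog stage, since op amps don't fit in a snippet; the city data is random):

```python
# Hybrid pipeline sketch: a cheap stochastic stage proposes many
# candidate tours (standing in for the analog front end); an exact
# digital stage evaluates and ranks them. Entirely illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_cities = 30
cities = rng.random((n_cities, 2))

def tour_length(order):
    """Exact digital evaluation of one closed tour."""
    pts = cities[order]
    return np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))

def candidate():
    """Random permutation with one 2-opt-style segment reversal."""
    order = rng.permutation(n_cities)
    i, j = sorted(rng.integers(0, n_cities, 2))
    order[i:j] = order[i:j][::-1]
    return order

# Stage 1: generate candidates cheaply.
candidates = [candidate() for _ in range(10_000)]

# Stage 2: exact evaluation of every candidate is trivial digitally.
best = min(candidates, key=tour_length)
print(f"best of 10,000 candidates: length {tour_length(best):.3f}")
```

Evaluating 10,000 (or a million) tours exactly is nothing for a digital machine; it is the 10^30-tour search space that needs the fast approximate front end.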