PORTLAND, Ore. — Processing images 1,000 times faster using brain-like neural-network chips is the four-year goal of a $5.7 million University of Michigan research project funded by the Defense Advanced Research Projects Agency (DARPA).
By using memristors as the neurons' synaptic memory -- memristors consume zero current when idle -- the image-processing neural network also aims to consume 10,000 times less power than today's conventional processors.
Adaptive neural networks learn the features in an image, rather than memorize its pixel values, allowing simpler representations in memory -- for instance, just two features, "round" and "red," might suffice to determine that a traffic light says stop.
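The storage savings from feature-based representation are easy to see in a toy sketch (the image size and feature names here are illustrative assumptions, not details of the Michigan design):

```python
import numpy as np

# A raw 100x100 RGB image: 30,000 pixel values to memorize.
image = np.zeros((100, 100, 3), dtype=np.uint8)
raw_size = image.size  # 30,000 values

# A feature-based description: just two learned features
# suffice to decide that a traffic light says "stop".
features = {"round": True, "red": True}

print(raw_size, "pixel values vs", len(features), "features")
```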
Professor Wei Lu's neural network image processor will connect artificial neurons using a crossbar (lower left) of memristors with migrating oxygen vacancies (upper right) in tungsten oxide to adaptively change its synaptic connection strengths.
To detect such features, neurons are arrayed to input all the pixels in an image at once, then process them in layers with variable synapses between them -- similar to the visual cortex of the brain. Learning an image proceeds by inputting it to the first layer, whereupon the middle layers self-organize an internal representation, with the last layer acting as an array of single feature detectors. In practice, the more an image feature is presented to the neural network during learning, the stronger the synaptic connections that detect that feature will become.
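The "more presentations, stronger connections" rule described above is essentially Hebbian learning. A minimal software sketch (the layer sizes, learning rate, and update rule are assumptions for illustration, not the Michigan team's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_features = 16, 4        # assumed toy sizes
weights = np.zeros((n_inputs, n_features))
learning_rate = 0.1                 # assumed value

def present(pattern, feature_idx):
    """Hebbian update: co-active input and feature neurons
    strengthen the synapse connecting them."""
    weights[:, feature_idx] += learning_rate * pattern

round_pattern = rng.random(n_inputs)

# Present the "round" feature many times, a second feature once:
# the frequently seen feature ends up with stronger synapses.
for _ in range(10):
    present(round_pattern, feature_idx=0)
present(rng.random(n_inputs), feature_idx=1)

print(np.linalg.norm(weights[:, 0]), np.linalg.norm(weights[:, 1]))
```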
To test slightly different architectures, the University of Michigan researchers, led by professor Wei Lu, are designing two prototypes. The simpler one uses memristors to store the values of its synapses, but uses conventional connections between layers. The more complex architecture mimics the brain more closely by using the memristors themselves to process voltage spikes sent between layers.
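The appeal of the crossbar layout is that it evaluates a whole layer's weighted sums in one analog step: input voltages drive the rows, memristor conductances act as the weights, and Kirchhoff's current law sums each column. A software analogue of that vector-matrix product (toy conductance values, not the actual hardware):

```python
import numpy as np

# Memristor conductance (siemens) at each crosspoint acts
# as a synaptic weight: 3 input rows x 2 output columns.
conductance = np.array([[1.0, 0.2],
                        [0.5, 0.8],
                        [0.1, 0.9]])   # illustrative values

# Input voltages applied to the crossbar rows.
voltages = np.array([0.3, 0.5, 0.2])

# Ohm's law at each crosspoint, Kirchhoff's current law per
# column: each output current is a weighted sum, so the crossbar
# computes the full vector-matrix product in a single step.
currents = voltages @ conductance

print(currents)
```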
University of Michigan professor Wei Lu is designing a neural network chip that processes images 1,000 times faster than conventional computers.
(Source: University of Michigan)
In an interview with EE Times, Lu said:
Basically there are two approaches we are developing. One uses small local memristors to store the weights that are calculated using well-known learning algorithms, with most of the computations performed in the neuron. The other approach is more dramatic because we use the memristor to do the learning directly in its synapses, which is a riskier approach because you need a large amount of memory and the algorithms are not well known.