BOSTON - Boston University researchers have discovered a way to shape the noise spectrum of analog signals for higher fidelity, achieving ear-rivaling dynamic range (DR) and virtually unbounded signal-to-noise ratios (SNRs) by using a variable-sized neural network. They claim that their neural network's DR is proportional to the number of its neurons, and SNR goes up exponentially with each added neuron. Thus, those two important sound-quality factors can be increased indefinitely, simply by adding neurons.
"We are using biologically inspired principles to do the opposite of recognizing signals . . . we want to represent analog signals with the highest perceived reality," said BU researcher Douglas Mar, who demonstrated the concept of a noise-shaping neural network with a notch-filter demonstration. The research, funded by Ray Stata, chairman of Analog Devices Inc. (Norwood, Mass.), was conducted in collaboration with Carson Chow (University of Pittsburgh), Wulfram Gerstner (Swiss Federal Institute of Technology) and Robert W. Adams (manager of audio development at Analog Devices), who first suggested the concept.
According to Mar, the brain represents analog signals as trains of digital pulses, with each neuron acting as a leaky integrate-and-fire amplifier. A population of such neurons, each sampling the same input signal, achieves a kind of massive "oversampling," since every neuron operates asynchronously. The effect is similar to that of the "1-bit" oversampling analog-to-digital converters available today. Each neuron outputs only an occasional digital spike, but so many are sampling the signal simultaneously that the population's overall firing rate accurately tracks the input signal.
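The population-sampling idea can be sketched in a few lines of Python. Everything below is illustrative: the neuron model is a textbook leaky integrate-and-fire unit with invented constants, not the BU team's actual circuit.

```python
# Illustrative sketch only: a textbook leaky integrate-and-fire (LIF)
# population with assumed constants, not the BU team's actual circuit.
import numpy as np

def lif_population(signal, n_neurons=50, dt=1e-3, tau=0.02,
                   threshold=1.0, seed=0):
    """Drive n_neurons LIF units with the same analog signal.

    Each unit leaks toward zero with time constant tau, integrates the
    input, and emits a digital spike (then resets) on crossing the
    threshold. Random initial potentials desynchronize the units,
    giving the asynchronous "oversampling" described in the article.
    """
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, threshold, n_neurons)    # random starting phases
    spikes = np.zeros((len(signal), n_neurons), dtype=np.int8)
    for step, x in enumerate(signal):
        v += dt * (-v / tau + x)                  # leaky integration
        fired = v >= threshold
        spikes[step, fired] = 1
        v[fired] = 0.0                            # reset after a spike
    return spikes

# The pooled spike count per time step tracks the input amplitude.
t = np.arange(0.0, 1.0, 1e-3)
x = 60.0 + 40.0 * np.sin(2 * np.pi * 2.0 * t)     # slowly varying input
rate = lif_population(x).sum(axis=1)              # population firing rate
```

Summing across the population each time step gives a rate signal that rises and falls with the input, even though any single neuron spikes only occasionally.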
The averaging effect of a population of parallel-acting neurons is augmented further by a negative-feedback architecture called inhibitory coupling. Working in the time domain, inhibitory coupling sends a negative feedback signal to the other neurons whenever one neuron fires. This acts as a traffic cop within the neural network, slightly delaying the firing of the other neurons until the current one is finished, thereby smoothing the flow of information.
Mar said, " . . . the clumps of information in the frequency spectrum are spread out into the empty spaces, a time-domain version of the on-center, off-surround architecture in the visual cortex." By shifting the flow of information around with negative feedback, every neuron gets a chance to fire on its own, without the loud neurons washing out the effects of the faint ones. In that way, weak signals can be represented simultaneously with strong signals, both contributing proportionately to the modulation of the network's overall firing rate.
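The inhibitory coupling described above can be added to a toy integrate-and-fire population in one line: whenever any neuron fires, push every other neuron's potential down slightly, delaying its own spike. The coupling strength and all other constants here are invented for the sketch, not taken from the BU work.

```python
# Illustrative sketch: leaky integrate-and-fire population with global
# inhibitory coupling. All constants are assumed for the example.
import numpy as np

def lif_population_inhibited(signal, n_neurons=50, dt=1e-3, tau=0.02,
                             threshold=1.0, inhibition=0.02, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, threshold, n_neurons)
    spikes = np.zeros((len(signal), n_neurons), dtype=np.int8)
    for step, x in enumerate(signal):
        v += dt * (-v / tau + x)                  # leaky integration
        fired = v >= threshold
        spikes[step, fired] = 1
        v[fired] = 0.0                            # reset fired units
        # Inhibitory coupling ("traffic cop"): every spike pushes the
        # rest of the population slightly away from threshold, delaying
        # their firing and spreading spikes out in time.
        v[~fired] -= inhibition * fired.sum()
        np.clip(v, 0.0, None, out=v)
    return spikes

# With a constant drive, each spike now delays its neighbors, the
# mechanism the article credits with generating anticorrelations.
drive = np.full(2000, 80.0)
counts = lif_population_inhibited(drive).sum(axis=1)  # spikes per step
```

Whether this toy version actually regularizes the pooled spike train depends on the assumed constants; the point it illustrates is the mechanism itself, each spike briefly holding back the rest of the population.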
"This is how you can pick out one voice and have a conversation in a whole crowd of voices at a cocktail party: You shift the noise, or unwanted information, from the bandwidth of interest to higher frequencies, where it's filtered out," said Mar. Since all neurons are simultaneously sampling the input, there is always some neuron on the verge of firing to represent the tiniest change in signal strength, even though that individual neuron might not fire again for many milliseconds.
Although neurons operate on a slower time scale, in the millisecond range, the massive oversampling compensates, so every nuance is accurately sampled. The negative feedback of inhibition keeps loud inputs from washing out subtle nuances.
The improved DR and SNR result from negative feedback that generates temporal anticorrelations, which shift the noise in the frequency domain. In a circuit without negative feedback, the noise would grow linearly with each added neuron; with inhibition, the noise spectrum is shifted to frequencies much higher than those of the signal, permitting the DR to grow linearly with each added neuron.
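The article's analogy to 1-bit oversampling converters can be made concrete with a first-order sigma-delta modulator, the standard textbook circuit in which negative feedback likewise pushes quantization noise above the signal band. This is a generic illustration of noise shaping, not the BU neural circuit:

```python
# First-order sigma-delta modulator: a standard illustration of noise
# shaping by negative feedback (not the BU neural circuit itself).
import numpy as np

def sigma_delta(signal):
    """1-bit quantization with error feedback: the integrator accumulates
    the difference between the input and the fed-back output, pushing
    quantization noise toward high frequencies."""
    integ, y = 0.0, 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        integ += x - y               # negative feedback of last output
        y = 1.0 if integ >= 0.0 else -1.0
        out[i] = y
    return out

def lowpass(signal, width=64):
    """Crude moving-average decimation filter."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# A low-frequency tone, heavily oversampled.
n = 8192
t = np.arange(n) / n
x = 0.5 * np.sin(2 * np.pi * 8 * t)

# After low-pass filtering, the sigma-delta bitstream reconstructs the
# tone far more accurately than direct 1-bit quantization does, because
# its quantization noise has been shifted out of the signal band.
err_sd = np.mean((lowpass(sigma_delta(x)) - x) ** 2)
err_q = np.mean((lowpass(np.sign(x)) - x) ** 2)
```

Here raising the oversampling ratio keeps improving the in-band error, which is the rough analog of the article's claim that adding neurons keeps improving DR.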
"Beyond a certain number of neurons, the noise saturates because it's proportional to the maximum firing rate of the overall network, but signal power continues to grow exponentially with each added neuron," said Mar. The team plans to develop analog chips that could be used in "brainy HiFis," but Mar is more interested in other brain-like principles he could build into next-generation chips: for instance, adaptation, fault tolerance and habituation.
"There are a lot of biological ideas, such as robustness against damage, that can be directly translated into fault-tolerant chips. There are also many subtle biological principles of adaptation, such as one akin to psychological habituation, that allow circuits to adapt to their inputs in beneficial ways," said Mar.