Nonlinear nets approach runway to wireless apps
ESNs have an additional advantage over a fixed set of basis functions: the output of the network can be fed back into the black-box network, which shapes the response of the nonlinear reservoir. The mechanism is similar to that of an infinite impulse response filter, and it has created some stability problems, he said.
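In reservoir-computing terms, that feedback path simply reenters the state update. The sketch below (NumPy; the network size, weight scalings, and tanh nonlinearity are illustrative assumptions, not Jaeger's exact formulation) shows the loop, and the usual guard against the instability it can introduce, which is keeping the reservoir's spectral radius below 1:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                                     # reservoir ("black box") size
W = rng.uniform(-0.5, 0.5, (N, N))          # fixed random internal weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 guards stability
W_in = rng.uniform(-1, 1, (N, 1))           # input weights
W_fb = rng.uniform(-1, 1, (N, 1))           # output-feedback weights
W_out = rng.uniform(-1, 1, (1, N))          # readout (the only part that is trained)

x = np.zeros((N, 1))                        # reservoir state
y = np.zeros((1, 1))                        # network output

def step(u):
    """One update: feeding y back makes the loop act like an IIR filter."""
    global x, y
    x = np.tanh(W @ x + W_in * u + W_fb @ y)
    y = W_out @ x
    return float(y)

for t in range(5):
    print(step(np.sin(0.1 * t)))
```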
"The internal basis signals inside the reservoir are automatically preshaped to look like modified versions of the target signal. For instance, if the target signal is periodic, all the basis functions will automatically take on the same period. This preshaping of the basis is another factor that leads to the superior approximation properties of ESNs.
In benchmark runs on predicting chaotic time series, Jaeger found that ESNs were more accurate than standard techniques by a factor of 2,400.
Most practical systems being designed with the ESN approach implement the networks digitally in field-programmable gate arrays. "A major design decision is whether one goes for an analog or digital realization of the ESN. The former would of course run much faster, conceivably in the high-frequency or very-high-frequency front end directly. Analog hardware is noisy, however, and some mathematical groundwork to make ESNs noise-resistant remains to be done. Digital chips could implement the algorithm in its current form," he said.
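On the digital side, the update reduces to integer multiply-accumulate operations plus a stored nonlinearity, which maps naturally onto an FPGA. Here is a hypothetical fixed-point sketch (Q3.12 format, table-based tanh, input injected uniformly for brevity; every parameter is an assumption about what such a design might use, not a description of any shipping implementation):

```python
import numpy as np

FRAC = 12                     # Q3.12 fixed point: 12 fractional bits
ONE = 1 << FRAC

def q(x):
    """Quantize floating-point values to fixed point."""
    return np.round(np.asarray(x) * ONE).astype(np.int32)

# tanh lookup table over [-4, 4], as a design might store in ROM.
TABLE = q(np.tanh(np.linspace(-4.0, 4.0, 257)))

def tanh_fixed(v):
    """Table-based tanh on fixed-point inputs."""
    idx = np.clip(((v + q(4.0)) * 256) // q(8.0), 0, 256)
    return TABLE[idx]

rng = np.random.default_rng(1)
N = 32
Wq = q(rng.uniform(-0.1, 0.1, (N, N)))      # quantized reservoir weights
xq = np.zeros(N, dtype=np.int32)

def step_fixed(u):
    """One reservoir step in pure integer arithmetic."""
    global xq
    acc = (Wq @ xq) >> FRAC                 # multiply-accumulate, then rescale
    xq = tanh_fixed(acc + q(u))
    return xq

step_fixed(0.5)
```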
"I am planning to start a research line of ESN applications in telecommunications," said Jaeger, who is working with a group at the Fraunhofer Society to offer consulting services to companies wishing to capitalize on the approach. "One typical application example would be the use of ESNs for dynamic-channel assignment in next-generation (fourth generation) high-data-rate cellular networks-up to 1 Gigabit per second-which will operate in an ad hoc and self-organizing fashion," he explained.
Work done with ESNs so far seems to confirm that they are as easy to work with as back-propagation-of-error networks while being much more powerful at predicting time series.
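The ease of use follows from the training procedure: unlike back-propagation, only the output weights are learned, and that reduces to a single linear least-squares fit over collected reservoir states. A minimal sketch, assuming a ridge-regularized fit and a toy next-sample prediction task:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 1000
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, N)

u = np.sin(0.2 * np.arange(T + 1))          # toy input series
target = u[1:]                              # task: predict the next sample

# Run the fixed reservoir and harvest its states.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Training is one regularized linear regression; no backpropagation needed.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)
print("train MSE:", np.mean((X @ W_out - target) ** 2))
```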
But do ESNs stand a better chance than earlier approaches of behaving the way neural networks do in biological systems? That question is being investigated by Maass at the Technical University of Graz (Graz, Austria).
Maass independently discovered ESNs while trying to mathematically model the behavior of feedback circuits in the brain. While Jaeger's black-box networks use only a highly simplified model of a neuron, Maass' model employs more realistic neurons that communicate using trains of voltage spikes. Maass called these nonlinear systems "liquid-state networks," comparing them with the surface of a liquid.
For example, when a sugar cube is dropped into a cup of coffee, the surface begins to undulate in a complex pattern that gradually diminishes in amplitude until the surface returns to its original resting state. A similar phenomenon occurs when a reservoir of feedback neural nets is presented with a single input. Given a time series of input events, the continual agitation of the "liquid" stores a running history of the input sequence, giving the network a built-in memory.
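The coffee-cup picture can be made concrete: perturb a reservoir once and the activity ripples and dies away, while the transient itself encodes the recent past. A small illustrative sketch (sizes and scalings assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))   # contractive, so activity fades

x = rng.normal(size=N)          # the "sugar cube": a single perturbation

for t in range(31):
    if t % 5 == 0:
        print(f"t={t:2d}  |state| = {np.linalg.norm(x):.4f}")
    x = np.tanh(W @ x)          # no further input; the ripple decays to rest
```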
Maass has formulated a general mathematical model called a liquid-state machine, similar to the universal Turing machine model of digital computers.
Such liquid-state computers can be shown to be universal for time-series prediction in the sense that they can implement any time-invariant filter with fading memory. Those specifications cover any nonlinear filter that could be designed for real-world time series.
Actual liquids are not good models to work with, however, when it comes to creating liquid-state computers. Although a convenient metaphor, a liquid is modeled as a collection of nearest neighbors (the water molecules) acting on one another. Neural networks offer more complex interactions, which are needed to mimic the neural activity actually seen in nature. To solve that problem, Maass substituted realistic models of neurons communicating with voltage spike trains, along with models of synaptic connections. The input to and output from the "liquid" network were also modeled as realistic spike trains. The goal was to see whether the whole system would exhibit the type of signal processing found in the brain.
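The neuron models in question are typically of the integrate-and-fire family. A minimal leaky integrate-and-fire sketch (generic textbook parameters, not Maass' exact model) shows how a unit turns continuous input into a voltage spike train:

```python
import numpy as np

def lif_run(input_current, dt=1e-3, tau=0.02,
            v_rest=0.0, v_thresh=1.0, v_reset=0.0, r_m=1.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates input current, and emits a spike on crossing threshold."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:
            spike_times.append(t * dt)   # record the spike
            v = v_reset                  # reset after firing
    return spike_times

# A constant drive above threshold yields a regular spike train.
print(lif_run(np.full(200, 1.5)))
```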
Experiments with quite simple neural nets revealed a wide variety of network activity. Maass found that a small reservoir of neurons could implement any finite-state machine, a computational approach used to process time series in engineered control systems. To verify the generality of these networks, he chose a series of randomly selected finite-state machines and found that each one's operation could be reproduced by adjusting only the weights of the output neurons.
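In simplified, rate-based form (Maass' actual experiments used spiking neurons), that readout-only training looks like this: drive a fixed random reservoir with a binary stream and fit only the output weights so the system reproduces a small finite-state machine, here a two-state detector that fires when the last two inputs were both 1. All details are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 150, 2000
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, N)

u = rng.integers(0, 2, T)                   # random binary input stream

# Target finite-state machine: output 1 exactly when the last two inputs
# were both 1 (a two-state pattern detector).
target = np.zeros(T)
target[1:] = (u[1:] == 1) & (u[:-1] == 1)

# Harvest reservoir states; the reservoir itself is never trained.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Adjust only the output weights, then threshold the linear readout.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)
pred = (X @ W_out) > 0.5
print("accuracy:", np.mean(pred == target.astype(bool)))
```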