Woodside, Calif. -- Carver Mead, frequently cited as the father of digital VLSI, still sees a future for analog. A new way to architect A/D and D/A converters is just the beginning. Twenty years from now we'll see fully adaptive measurement and evaluation circuits, with nodal intelligence propelled by neural net technology, Mead told EE Times in a recent interview here.
Such seemingly outlandish visions are nothing new for the Caltech professor. In the late 1960s, Mead postulated that the performance of transistors would actually improve as they shrank. That insight led to his vision of the million-transistor chip, which had researchers agog in the '70s and paved the way for SoC technology in the '90s.
As it turns out, the vision of nanometer-scale CMOS bit switches does not exclude analog transistors - scaled devices that amplify voltages and currents. Seattle-based Impinj, founded to commercialize re-sizing techniques developed by Mead and colleague Chris Diorio at the California Institute of Technology, will design and manufacture data converters and communication circuits with analog interfaces. The technique, which uses charge stored on a floating gate to alter the signal-processing capability of a CMOS transistor after it has been fabricated, will be used to make amplifiers and line drivers with precisely matched differential inputs. The scaling technique can be used to increase the sensitivity of low-noise amplifiers (LNAs) in RF circuits, or to size the bit-weighting transistors in high-resolution data converters.
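For the flavor of the idea, here is a minimal sketch modeling how stored floating-gate charge could trim a fabricated transistor. It assumes a textbook square-law MOSFET model; every name and constant is illustrative, not anything from Impinj's actual process:

```python
# Minimal sketch: trimming a CMOS transistor after fabrication by
# storing charge on its floating gate. Textbook square-law model;
# all values are illustrative assumptions.

def drain_current(v_gs, v_t, k=2e-4):
    """Saturation-region square-law model: Id = (k/2) * (Vgs - Vt)^2."""
    v_ov = v_gs - v_t                    # overdrive voltage
    return 0.5 * k * v_ov * v_ov if v_ov > 0 else 0.0

def effective_vt(v_t0, q_fg, c_fg=1e-15):
    """Charge q_fg on the floating gate shifts the apparent threshold
    by q_fg / c_fg (simple capacitive-divider view)."""
    return v_t0 - q_fg / c_fg

# Two "matched" transistors that came out of fab slightly different:
vt_a, vt_b = 0.50, 0.53                  # 30 mV of random mismatch
v_gs = 1.0
print(drain_current(v_gs, vt_a), drain_current(v_gs, vt_b))  # mismatched

# Inject just enough charge onto B's gate to cancel the offset:
q_trim = (vt_b - vt_a) * 1e-15           # q = dVt * Cfg
vt_b_trimmed = effective_vt(vt_b, q_trim)
print(drain_current(v_gs, vt_a), drain_current(v_gs, vt_b_trimmed))  # matched
```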
But this is just the beginning, Mead suggested. "That's the part we can do today," he insisted. "There are so many ways of doing computation - and a much bigger role for analog than anybody thought."
"An op amp can do a fabulous amount more than a logic gate," Mead explained. An op amp, more than digital logic, allows signal amplitudes to be correlated with a moment in time. And this correlation, Mead insists, is a necessary ingredient to what he calls "Neural Science."
The only reason analog variables like voltage are converted to digital ones and zeros, Mead suggests, is to feed them to a Turing machine.
"I confess to 'corrupting a lot of minors' - leading people down the digital path," Mead laughed, a strange admission from the professor who argued that every computable function was amenable to manipulation on a Turing machine. (A Turing machine uses a discrete number of time steps or cycles to process information, and requires a discrete number of memory locations. Increasing the precision of a Turing Machine forces the amount of information it processes to increase exponentially, and this makes computational tasks exponentially more complicated - even impossible. "The Turning machine vision," Mead laughed, "turned out to be mostly crap."
We've already built machine vision systems with silicon retinas, motion perception, stereo matching and pattern recognition, Mead had lectured at an Association for Computing Machinery (ACM) conference in 1997. We similarly have auditory processing systems with cochleae, auditory feature extraction and stereo localization, he said. We have machines capable of a certain amount of in situ learning, with floating silicon gates, autonomous on-chip operation and data weighting. Why, then, haven't we been able to assemble an autonomous electronic being? Mead asked. The future of computing, he claimed then, was limited by the abilities of a Turing machine.
The problem with Turing machines, Mead had argued before the ACM in 1997, was that the representation of information - a signal in digital format, for example - demanded an exponential increase in symbols (e.g., bits) to increase the precision with which the machine could understand it. The difference between a signal in 8-bit format (its amplitude divided into 256 levels) and a more precise 12-bit representation (4,096 levels), or between a 12-bit signal and a 16-bit one (65,536 levels), was never just four bits: each step multiplies the number of symbols sixteenfold.
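The arithmetic is easy to check. This small Python sketch tabulates how the symbol count grows with word length; the values are just the 2^n progression the article cites:

```python
# Each added bit doubles the number of amplitude levels, so going from
# 8 to 12 to 16 bits multiplies the symbol count sixteenfold per step.

for bits in (8, 12, 16):
    levels = 2 ** bits
    step = 1.0 / levels      # quantization step for a full scale of 1.0
    print(f"{bits:2d} bits -> {levels:6d} levels, step = {step:.2e} of full scale")

#  8 bits ->    256 levels
# 12 bits ->   4096 levels   (16x more symbols for 4 more bits)
# 16 bits ->  65536 levels   (another 16x)
```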
And while data converters could represent amplitudes, they had no digital representation for time - no continuous time sense, no sense of locality or continuity. Thus the computations could not be used to simulate continuous nonlinear systems, and were often dominated by aliasing artifacts (spurious components that appear when a continuous signal is sampled too coarsely to capture it).
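A toy example makes the aliasing point concrete; the sample rate and frequencies here are arbitrary illustrative choices:

```python
# Sampled at 1 kHz, an 1100 Hz tone produces exactly the same sample
# values as a 100 Hz tone, so the digital representation cannot tell
# them apart: the distinguishing information is gone.
import math

fs = 1000.0                       # sample rate in Hz
f_fast, f_slow = 1100.0, 100.0    # f_fast is above the 500 Hz Nyquist limit

for n in range(6):
    t = n / fs
    fast = math.sin(2 * math.pi * f_fast * t)
    slow = math.sin(2 * math.pi * f_slow * t)
    print(f"n={n}: 1100 Hz -> {fast:+.6f}   100 Hz -> {slow:+.6f}")
# Both columns print identical values: the 1100 Hz signal has aliased
# down to 100 Hz.
```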
"Once a signal becomes digital, it seems like it's 'off limits' for analog," Mead told EE Times in October. "But that puts a heavier signal-processing load on the analog that's left over." Automatic gain control (AGC) is a case in point, he said. It's adaptive: you examine certain segments of a signal, rather than the entire amplitude. "You don't go digitize the whole thing - then paw out the bits."
The vision of A/D and D/A converters feeding data to a Turing machine was the equivalent of dozens of sewing machines in a 19th-century clothing factory, all belt-driven by a single steam engine, Mead suggested. Replacing that huge steam engine with a single huge electric motor was the first step toward real progress, he said - but only a first step. The real goal would be to make those sewing machines autonomous, each with its own motor, he implied.
At ACM 97, Mead argued that the only way to displace a Turing machine was to build a computer whose computational ability outpaced the amount of information available to process - a machine whose computational ability increased exponentially. The most likely candidate for such a machine was a neural net topology, he argued then.
With neural computation, the ability to encode information is magnified. Signals transmitted over a distance can be represented not only as events with discrete amplitudes and time values, but also as a continuum on the receiving structures - as electrical potentials (voltages) evolving on a time scale, even as a chemical concentration. The decoding of the signals on a branched neural net allows for multiple variables, including continuously changing ones like amplitude. Adaptive control - what Mead called "neuromorphic VLSI systems" - would keep the structure stable.
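To make the encoding idea concrete, here is a minimal sketch of a textbook leaky integrate-and-fire neuron - a standard teaching model, not Mead's own circuits - showing how a continuous input amplitude becomes the timing of discrete events:

```python
# A leaky integrate-and-fire neuron turns a continuous input amplitude
# into the *timing* of discrete spikes. All constants are illustrative.

def spike_times(input_current, dt=0.001, t_end=0.200,
                tau=0.020, threshold=1.0):
    """Leaky integrator: dv/dt = (-v + I) / tau; fire and reset at threshold."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        v += dt * (-v + input_current) / tau
        if v >= threshold:
            spikes.append(round(t, 4))
            v = 0.0                      # reset after the event
        t += dt
    return spikes

# Stronger input -> earlier, more frequent spikes: amplitude is encoded
# as a pattern of events in continuous time, not as a binary word.
for current in (1.2, 2.0, 4.0):
    print(current, "->", spike_times(current)[:4])
```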
As long as we produced digital silicon for Turing machines, we would be contending with limited precision. But systems that adapt to their environment were naturally analog. "It's the difference between learning and programming," Mead said, then and now.