Fascinating story @rfindley...clearly I have a lot to catch up on in my understanding of how our brains work electrically...would you be interested in giving a talk on this topic at the emerging technologies conference in Vancouver in 2015? Details at www.cmosetr.com; please email me at firstname.lastname@example.org
> "out of curiosity where is the ADC in the brain?"
In a way, everywhere. The general principle of neural processing is competition, which naturally pushes away from gradients (i.e. analog) toward 'classification' of stimulus (ones and zeros). [Interestingly, the quality of analog transmission in the brain is so low that it provably cannot be the sole means of transmitting sensory information].
The ear is a great example. The first-layer neurons respond in an analog manner to the strength of the peaks of the standing waves captured in the cochlea. But the successive neuron layers are conditioned to compete for which one is most sensitive to the pattern of peaks associated with a specific frequency. The result is that, after a few layers of processing, a particular neuron can be ON or OFF based on the presence of a particular frequency.
Amplitude is processed in parallel with a different mechanism, but in a similar manner. Many neurons have a Gaussian response function, such that they respond only within a range of volume. Since neurons are generated stochastically, they each respond most strongly at a different volume level. So, for any given volume level, a particular pattern of neurons will fire strongly. Then, at the next layer, a single neuron can recognize the pattern for that volume level, and will turn ON or OFF accordingly.
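The two mechanisms above can be caricatured in a few lines of Python. This is a toy sketch, not a biophysical model; the population size (50 neurons), tuning width (0.1), and 0-to-1 volume scale are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer 1: a population of neurons, each stochastically tuned to a
# 'preferred' volume level (all numbers here are made up, not physiology).
preferred = rng.uniform(0.0, 1.0, size=50)  # random preferred volumes
width = 0.1                                 # tuning-curve width

def layer1_response(volume):
    """Gaussian tuning: each neuron fires most near its preferred volume."""
    return np.exp(-((volume - preferred) ** 2) / (2 * width ** 2))

def layer2_winner(volume):
    """Next layer 'classifies': the best-matching neuron wins (turns ON)."""
    return int(np.argmax(layer1_response(volume)))

print(layer2_winner(0.30), layer2_winner(0.31), layer2_winner(0.80))
```

Because the tuning curves are scattered stochastically, nearby volumes light up the same winner while distant volumes select a different one -- the 'classification' step that pushes an analog quantity toward ones and zeros.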
> "how many bits and what sampling rate?"
The technical answer is long, but it is possible to measure an effective bit resolution and sample rate for various senses. However, it varies by genetics, usage, and even across the range of a single sensory organ. For example, we have much higher bit resolution in the mid audio frequencies, because there are more neurons dedicated to that frequency range. Also, blind people can develop higher auditory bit resolution, because processing of audio expands into the unused visual areas, allowing more 'oversampling' of audio signals in the deeper layers of audio processing.
Also, you can consciously direct an increase in effective bit resolution for some things. It's why we get better with practice. Attentive processing causes neurons to configure more quickly to distinguish the attended stimuli, thus increasing effective bit resolution. Nifty, eh?
> "understanding what exactly a neuron does is the key"
Yes, exactly. Or more specifically, understanding what each neural microcircuit does, where a microcircuit is usually made up of a dozen to a few hundred neurons.
> "I am not sure how one goes about finding this out"
That's the billion dollar question, isn't it :-). I have my own ideas that I'm pursuing, but only time will tell who will be the first to the finish line :-).
thank you @rfindley for the quick and comprehensive answer...I am aware of the brain using timing between pulses; to me this is analog computation...digital is basically looking at whether there is a one or a zero...I entirely agree with your main point, understanding what exactly a neuron does is the key...I am not sure how one goes about finding this out...and out of curiosity, where is the ADC in the brain? how many bits and what sampling rate? ;-)...Kris
@krisi, The ADC example has nothing to do with power consumption. Nor is the point that the brain performs ADC operations (though it does, in a way). Rather, it simply illustrates that understanding a system allows you to optimize it for different circumstances.
Perhaps a better example: if I were to design a Playstation emulator on a PC, I might decide to build it as a virtual machine where each machine instruction on the Playstation processor is replaced with a similar instruction on the PC (this is a simplification, of course). But if I knew nothing about how a Playstation processor worked, I might be forced to simulate each individual transistor in that processor. Obviously, that would require massively more computation. This is essentially what is being done with the brain (and understandably so, given the general lack of understanding of the brain). But it doesn't have to be that way.
Researchers are realizing that neurons appear to do a lot more computational work (per neuron) than originally theorized. That is why some folks are increasing their estimates of how much processing would be required to implement a brain in silicon. But I think a significant part of that computation is specific to the grey-matter implementation. The equivalent functions (at the group-of-neurons level) can be implemented in silicon much more efficiently than by simply copying how neurons work to the nth degree.
On a side note, the brain really doesn't operate entirely in analog. It is somewhat of a digital-analog hybrid, plus some aspects that aren't described well by either analog or digital. The brain converts almost all of its input to a quasi-digital code -- quasi because it uses several coding tricks such as pulse-frequency modulation. It has even been shown experimentally that information may be encoded as serialized digital symbols in certain parts of the brain and transmitted in a repeating loop to minimize the number of parallel connections required across brain regions!
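To illustrate the pulse-frequency idea, here is a toy Python sketch of a rate code: an analog value is encoded as a stochastic spike train whose mean rate tracks the value, and recovered by counting spikes over a window. The rate, window, and time step are made-up parameters for illustration, not physiological figures:

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_rate(value, max_rate=100, window=1.0, dt=0.001):
    """Encode an analog value in [0, 1] as a spike train whose mean
    firing rate is proportional to the value (toy pulse-frequency code)."""
    p_spike = value * max_rate * dt          # spike probability per time step
    steps = int(window / dt)
    return rng.random(steps) < p_spike       # boolean spike train

def decode_rate(spikes, max_rate=100, window=1.0):
    """Decode by counting spikes over the window and normalizing."""
    return spikes.sum() / (max_rate * window)

value = 0.6
estimate = decode_rate(encode_rate(value))   # roughly recovers 0.6, within counting noise
print(estimate)
```

Each individual pulse is all-or-nothing (digital), yet the quantity carried is graded -- which is why "analog vs. digital" describes this coding only loosely. Averaging over a longer window (or over more neurons in parallel) trades time or space for precision.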
I am not sure I understand how the ADC plays a role in explaining brain power...the brain operates entirely in the analog domain, that is why it consumes 20W, not the 5MW that similar digital computation might require...there is no ADC in the brain! Kris
Lest we forget, the human brain is an analog computer. It takes some serious digital horsepower to do what a summing opamp/comparator can easily do with a few transistors. Emulating an analog system with digital hardware is a tremendous multiplier. Ask any neural net guy.
@_hm, I'm assuming you mean 'computational' power of the human brain, not 'energy' power.
Personally, I think the ever-increasing estimates of the computational ability of the brain are actually headed in the wrong direction. We are currently in a period of "Moore's Law" of brain knowledge (doubling every year or two), but not so with actual 'understanding' of what we observe. As a result, our computer-centric theories tend to guide us toward the (I think) false notion that we need more computing power to simulate every nook and cranny of the neuron.
If we apply a more stochastic perspective to brain computation, we can, in theory, actually decrease the estimated computational power of the brain, placing it squarely within reach of today's technology (given a proper understanding of how it works and, of course, a big enough budget).
To illustrate in terms more suitable to an EE: in some cases, an A/D converter design can benefit from using a low-resolution ADC and increasing its effective resolution by adding noise, oversampling, and averaging. The brain does something similar: (a) The noise is added via the stochastic nature of neuron formation and connection. (b) Oversampling occurs in space, rather than time, via an array of stochastically generated neurons clustered together. (c) Averaging occurs in both time and space via signal summation (though that is a simplification for the sake of brevity).
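Here is a minimal Python sketch of that dithering trick: a 4-bit quantizer gains effective resolution when uniform noise spanning one LSB is added, the input is 'oversampled' many times, and the results are averaged. The bit depth, input value, and sample count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(x, bits):
    """Ideal uniform quantizer on [0, 1) at the given bit depth."""
    levels = 2 ** bits
    return np.floor(x * levels) / levels

true_value = 0.3337            # sits between two 4-bit levels

# Plain 4-bit conversion: stuck at the nearest level below the input.
coarse = quantize(true_value, bits=4)

# Dithered + oversampled: add uniform noise of one LSB, quantize many
# independent samples, then average the quantized results.
lsb = 1 / 2 ** 4
noise = rng.uniform(0, lsb, size=10_000)   # 10,000x 'oversampling'
dithered = quantize(true_value + noise, bits=4).mean()

print(coarse, dithered)
```

A single 4-bit conversion is stuck at 0.3125, but the dithered average lands very close to the true 0.3337: extra resolution bought with noise plus redundancy, which is (roughly) what the stochastic neuron array does in space rather than time.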
The point of the A/D example is that, when you understand why it's doing all those computations, you might decide that you have the resources to simply use a higher-resolution ADC, thus saving yourself a lot of computation. Or maybe you can use a totally different sampling method that achieves the same goal. The same is true of the brain. Its design is well-suited to grey matter, but maybe we can make some different choices better suited to silicon, while achieving the same result.
When we look at neurons without understanding how they work together, we assume that we need to simulate every little bias and noise and geometry of a neuron -- much like simulating every eddy current, leakage, and charge distribution in a transistor. When you begin to understand the system on a larger scale, you realize such things are unnecessary, or at least can be minimized in the right context.
And most importantly: much of the brain is efficiently idling at any given instant. It is a sparse coding system consisting more of storage than computation (though this, too, is a simplification). So, we can use our smarts to figure out how to take advantage of that knowledge, resulting in less necessary computation.