That the slow, small human brain does so well shows how little we really know about its structure and operation
You've undoubtedly heard about the Jeopardy TV-show episodes pitting two top human contestants against the IBM Watson supercomputer array. The match has received a lot of attention and commentary, both in EE Times and in the general media, such as this article (and there are many more out there; just do a search).
No doubt about it, Watson's winning performance is an incredible accomplishment in information extraction and storage, contextual decoding, fast search and retrieval, and many other computer-science disciplines.
But I actually took away a very different lesson from the event, and it's this: for all our advanced research, we really don't have a clue as to how the human brain (or that of most other animals) really "works". How is it that the brain, with its slow central processing unit, can search through tens of thousands of mental data records and come up with an answer to most Jeopardy-type questions almost immediately? How can the brain sometimes take a few seconds, or even minutes, to retrieve other answers—or, even weirder, have an answer "pop" into your head hours later, when you have forgotten that you were trying to recall that obscure piece of information in the first place?
It gets even stranger. One of the biggest challenges for an autonomous vehicle is determining where the "road" it is supposed to travel on actually is. Doing so requires one or more video cameras feeding into advanced image-processing algorithms consuming lots of MFLOPs, often supplemented by sonar or radar sensors plus additional processing to integrate all the inputs. It's especially difficult when visual conditions are poor, or when there is image clutter from signs, trees, clouds, sharp turns in the road, or other less-than-ideal situations.
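To give a flavor of what even the very first stage of such a vision pipeline looks like, here is a minimal sketch of edge detection on a toy grayscale image: painted lane markings tend to show up as sharp left-to-right brightness changes. The function name, threshold value, and toy image are all illustrative assumptions; a real system would add perspective correction, line fitting, temporal tracking, and sensor fusion on top of this.

```python
# Minimal sketch (illustrative only): flag pixels where brightness
# changes sharply left-to-right -- crude candidates for lane-marking
# edges. Real road-finding pipelines are vastly more elaborate.

def lane_edge_candidates(image, threshold=50):
    """Return (row, col) positions where the horizontal brightness
    jump between neighboring pixels meets the threshold."""
    edges = []
    for r, row in enumerate(image):
        for c in range(1, len(row)):
            if abs(row[c] - row[c - 1]) >= threshold:
                edges.append((r, c))
    return edges

# Toy 4x6 "road": dark asphalt (20) with one bright painted stripe (200).
road = [
    [20, 20, 200, 200, 20, 20],
    [20, 20, 200, 200, 20, 20],
    [20, 20, 200, 200, 20, 20],
    [20, 20, 200, 200, 20, 20],
]

# Both sides of the stripe (columns 2 and 4) are flagged in every row.
print(lane_edge_candidates(road))
```

Even this trivial step illustrates the point: the machine must grind through every pixel arithmetically, while a human glances at the scene and simply "sees" the road.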
But for a human, knowing where the road is, in almost any condition except dense fog or darkness, is trivial: anyone can do it. How? We don’t know. And just about anyone can learn to run and catch a ball, which involves real-time image-capture and analysis, along with continuous control of a complex set of muscles.
Speaking of "don't know": how does the brain store information and data (and I use those terms very loosely, in the context of the brain)? What format(s) does it use for the storage, retrieval, and even playback of everything from relatively simple facts (names, numbers, dates) to complete, real-time audio and video streams? Again, no one actually knows.
Thus, while Watson's performance is impressive, it is achieved by applying lots of computing power: extremely fast processor systems executing very advanced algorithms on a huge database. In sharp contrast, it's not just that the brain is a "better" computer, with lots of memory storage, dynamic allocation and linkage, and extremely low-power operation. The reality is that the brain does what it does without executing any MFLOPs at all, and with an architecture for storage, search, and retrieval about which we know very, very little.
So my "hat's off" to IBM and Watson for their genuinely impressive accomplishment—but we still know surprisingly little about what's going on under that hat! ♦