That is a fascinating concept. As I think about it, the phrase "why not?" pops into my head. If you used the areas between the electrodes, not only would you reduce the amount of area required for cooling, but you could have localized power, dramatically reducing the need for supply lines running around the chip. That in itself would reduce the heat generated as well.
IBM pioneered many of the heat transfer methods for getting heat out of very fast chips (like their 1960s-70s ECL chips; bipolar, not CMOS): water and Freon cooling, thermal-conduction modules of flip chips on stacked ceramic substrates, and of course all the modeling tools with big computers to run them. Glad to see they are still thinking "out of the box." Hope all this data transfer is LOCAL. One nanosecond is one light-foot, so every foot of wiring adds about one nanosecond of delay (I have been told). Cray kept his Fortran engines small and liquid-cooled, with all interconnect lengths minimized. Optical or not, electrons or photons cannot exceed the speed of light. Right? Correct me if I am wrong, please.
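To put rough numbers on the light-foot claim: light in vacuum covers about 0.98 ft per nanosecond, and signals in real wiring are slower still, typically 60-70% of c. A minimal Python sketch (the velocity factor and the delay_ns name are illustrative assumptions, not from the comment):

    # Back-of-the-envelope propagation delay for a wire run.
    C_FT_PER_NS = 0.9836      # speed of light in vacuum, feet per nanosecond
    VELOCITY_FACTOR = 0.66    # assumed fraction of c in the medium; real values vary

    def delay_ns(length_ft, vf=VELOCITY_FACTOR):
        """One-way propagation delay in nanoseconds for length_ft feet of wire."""
        return length_ft / (C_FT_PER_NS * vf)

    print(delay_ns(1.0))           # ~1.54 ns per foot at vf = 0.66
    print(delay_ns(1.0, vf=1.0))   # ~1.02 ns for a light-foot in vacuum

So "one foot, one nanosecond" is the best case; a foot of real interconnect usually costs more, which only strengthens the point about keeping everything local.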
Maybe it is more accurate to define true "cognitive computing" as a non-precise, non-provable methodology. Just as the human brain is non-precise and non-provable, I don't think anyone would characterize it as non-useful. If you think of some of the applications IBM has been targeting, "Watson" for example, they are trying to get a system that is good at "guessing." It works for Jeopardy and for medical diagnosis. In fact, most of us depend on the Google search engine more than on anything else in our computers. Maybe the constraints of precision and provability that we have applied to computing systems are what is holding us back.
I like the approach to increasing circuit density. I really doubt that "cognitive computing" is mature enough for this yet. The weakness of "cognitive computing" is not that it requires a lot of gigaflops (although it does). Its weakness is that it isn't provably correct. If a cognitive computer translates a massive array of signals into an image, how do we know that the resulting image is accurate? Is that feature really there, or did the cognitive computer just "want" to put it there? Don't get me wrong, I am not trying to mystify this. The truth is, the more pattern recognition is part of an analysis, the greater the chance that an incorrectly recognized pattern will be a fundamental part of the result.
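To make that compounding concrete, here is a hypothetical Python sketch (the independence of stages and the accuracy figures are assumptions for illustration): if an analysis chains n recognition stages and each is correct with probability p, the chance that no mis-recognized pattern ends up in the result is p^n.

    # Hypothetical: how per-stage recognition accuracy compounds across a pipeline.
    # Assumes stages are independent, each correct with probability p.
    def chain_accuracy(p, n_stages):
        """Probability that every stage in an n-stage pipeline is correct."""
        return p ** n_stages

    for p in (0.99, 0.95, 0.90):
        print(f"p={p}: 10 stages -> {chain_accuracy(p, 10):.2%}")
    # p=0.99: 10 stages -> 90.44%
    # p=0.95: 10 stages -> 59.87%
    # p=0.90: 10 stages -> 34.87%

Even 95%-accurate stages leave a ten-stage analysis fully correct only about 60% of the time.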