PORTLAND, Ore.—To achieve the exascale speeds necessary for real-time analysis of the Big Data streaming in from the massive sensor networks of the future—such as the Square Kilometre Array (SKA) radio telescope to be completed in 2024—IBM is exploring the use of "electronic blood" not only to cool but also to power the cognitive computers of the future.
"We are taking a new approach inspired by the brain," said IBM Research scientist Bruno Michel. "Neurons are both cooled and powered by the blood, and by copying this packaging technique in the brain we hope to achieve a 5,000-times energy efficiency improvement by compacting the volume of our devices by several million times."
IBM announced Monday (March 11) that the South African government's National Research Foundation is joining it and the Netherlands Institute for Radio Astronomy (ASTRON) in the Dome collaborative research project to pioneer the technologies needed to perform real-time analytics on the exascale Big Data streams coming in from a massive array of radio telescope dishes being installed across a 1,824-mile swath of remote Australian desert.
The South African contribution to the project, called SKA South Africa, will be to host 64 prototype dishes there as well as to assist IBM and ASTRON in creating the computing infrastructure to analyze exabytes of streaming data in real time. Working at the newly established ASTRON and IBM Center for Exascale Technology (Drenthe, the Netherlands), the Dome researchers are aiming for cognitive computing technologies that not only learn and reason like the human brain, but also approach its energy efficiency.
"If you analyze a typical microchip, only one part per million is used for transistors that perform its functions, while 98 percent is used for cooling. But in the brain, 40 percent of its volume is performing functions, 50 percent is interconnections, and only 10 percent is used for cooling," said Michel. "We want to produce computers closer to this ratio."
The liquid coolant will also serve as the electrolyte of a flow battery that provides charged ions to the 3-D chips. The coolant/electrolyte will flow in channels between each stacked die, whose fins will also serve as the electrodes of the flow battery. After flowing through the 3-D chip stack, the fluid will return to a central reservoir where it is cooled down and recharged before flowing back into the chips.
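The loop described above can be sketched as a simple energy and charge balance. All of the parameters below (die power, flow rate, fluid properties, cell voltage and current, depletion rate) are hypothetical placeholders chosen to make the mechanism concrete; none of them are published Dome figures:

```python
# Minimal sketch of one pass of coolant/electrolyte through a 3-D chip
# stack, as described above. Every numeric parameter is an assumption.

DIES = 4                  # stacked dies in the 3-D chip (assumed)
POWER_PER_DIE_W = 25.0    # heat each die dumps into the fluid (assumed)
FLOW_RATE_KG_S = 0.005    # electrolyte mass flow through the stack (assumed)
SPECIFIC_HEAT = 3500.0    # J/(kg*K), assumed water-based electrolyte
CELL_VOLTAGE = 1.0        # V per flow-battery cell between die fins (assumed)
CELL_CURRENT = 5.0        # A drawn from each inter-die channel (assumed)

def pass_through_stack(inlet_temp_c, state_of_charge):
    """One pass through the stack: pick up heat, deliver electrical power."""
    temp = inlet_temp_c
    delivered_w = 0.0
    for _ in range(DIES):
        # Heat balance for the channel above this die: dT = P / (m_dot * c_p)
        temp += POWER_PER_DIE_W / (FLOW_RATE_KG_S * SPECIFIC_HEAT)
        # Electrical delivery from the redox couple in this channel.
        delivered_w += CELL_VOLTAGE * CELL_CURRENT
        state_of_charge -= 0.01  # crude per-channel depletion (assumed)
    return temp, state_of_charge, delivered_w

outlet, soc, power = pass_through_stack(inlet_temp_c=30.0, state_of_charge=1.0)
print(f"outlet {outlet:.1f} C, SoC {soc:.2f}, {power:.0f} W delivered")
# The fluid then returns to the reservoir to be cooled back down and
# electrochemically recharged before the next pass, as the article describes.
```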
The Dome team will prototype microservers using these liquid-cooled and -powered 3-D chip stacks to process signals from SKA's dishes, with the aim of producing images of unparalleled resolution that could let scientists peer back in time to the faint signals still propagating from the Big Bang. And since the amount of data streaming in from SKA will exceed the total traffic on the Internet, the project aims eventually to provide exascale processing power for future cognitive computers handling all sorts of business, financial and healthcare data worldwide.
That is a fascinating concept. As I think about it, the phrase "why not?" pops into my head. Provided the fluid in the areas between the electrodes can do double duty, not only would you reduce the amount of area required for cooling, but you could have localized power, dramatically reducing the need for supply lines running around the chip. That in itself would reduce the heat generated as well.
IBM pioneered many of the heat-transfer methods for getting heat out of very fast chips (like their 1960s-70s ECL chips: bipolar, not CMOS): water and Freon cooling, thermal-conduction modules of flip chips on stacked ceramic substrates, and of course all the modeling tools, with big computers to run them. Glad to see they are still thinking outside the box. Hope all this data transfer is LOCAL. One nanosecond is one light-foot, so every foot of wiring adds about a nanosecond of delay (or so I have been told). Cray kept his Fortran engines small and liquid cooled, with all interconnect lengths minimized. Optical or not, electrons or photons cannot exceed the speed of light. Right? Correct me if I am wrong, please.
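For what it's worth, the commenter's rule of thumb checks out as the vacuum best case; a quick sanity check in Python (the 0.7 velocity factor is a typical assumption for signals in copper, not a measured value for any particular link):

```python
# Light covers about one foot per nanosecond in vacuum; real signals in
# copper or fiber are slower by a velocity factor (0.7 assumed here).

C_FT_PER_NS = 0.9836  # speed of light, 299,792,458 m/s ~= 0.98 ft/ns

def wire_delay_ns(length_ft, velocity_factor=0.7):
    """One-way propagation delay for a signal path of the given length."""
    return length_ft / (C_FT_PER_NS * velocity_factor)

for feet in (1, 10, 100):
    print(f"{feet:>4} ft -> {wire_delay_ns(feet):6.1f} ns one way")
# 1 ft costs ~1.5 ns at 0.7c; "one light-foot per nanosecond" is the vacuum
# limit, which is why Cray minimized interconnect lengths.
```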
Maybe it is more accurate to define true "cognitive computing" as a non-precise, non-provable methodology. The human brain is non-precise and non-provable, yet I don't think anyone would characterize it as non-useful. If you think of some of the applications IBM has been targeting, "Watson" for example, they are trying to build a system that is good at "guessing." It works for Jeopardy and for medical diagnosis. In fact, most of us depend on the Google search engine more than anything else on our computers. Maybe the constraints of precision and provability that we have applied to computing systems are what hold us back.
I like the approach to increasing circuit density, but I really doubt that "cognitive computing" is mature enough for this yet. The weakness of "cognitive computing" is not that it requires a lot of gigaflops (although it does). Its weakness is that it isn't provably correct. If a cognitive computer translates a massive array of signals into an image, how do we know that the resulting image is accurate? Is that feature really there, or did the cognitive computer just "want" to put it there? Don't get me wrong, I am not mystifying this. The truth is, the more pattern recognition is part of an analysis, the greater the chance that an incorrectly recognized pattern will be a fundamental part of the result.
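That intuition can be made concrete with a toy calculation (the per-stage error rate and stage counts below are made up for illustration, and the independence assumption is a simplification):

```python
# If each pattern-recognition stage in a pipeline independently errs with
# probability p, the chance that at least one bad pattern gets baked into
# the final result grows quickly with the number of stages.

def p_any_error(p_stage, n_stages):
    """P(at least one stage errs), assuming independent stages."""
    return 1.0 - (1.0 - p_stage) ** n_stages

for n in (1, 5, 20, 100):
    print(f"{n:>3} stages at 1% each -> {p_any_error(0.01, n):.1%} chance of a flaw")
#   1 stage  ->  1.0%
#   5 stages ->  4.9%
#  20 stages -> 18.2%
# 100 stages -> 63.4%
```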