Luis, I agree with your idea and spirit! An iPhone 5 has tremendous computing power.
Now, if only we could get SW developers to push back when performance is traded away for development cost. Contemporary code is inefficient and sloppy compared to what was written 40 years ago. Your reference to the Apollo-capsule computers is right on target!
The whole Top500 thing is a silly focus of attention. A DARPA PM I knew used to talk about "macho-OPS." Most of these massively-parallel processing (MPP) systems fit that description.
MPP systems try to use COTS (commercial off-the-shelf) parts to "save" on development costs. GPUs provide a lot of numerical bang for the buck, but don't work worth a hoot on data-dependent computations.
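The data-dependent-computation problem can be sketched in a few lines. GPU lanes execute in lockstep, so a branch is typically handled by predication: both sides get computed for every element and the results are masked, which wastes work. A minimal NumPy illustration (the functions and values are hypothetical, purely to show the wasted work):

```python
import numpy as np

def branchy_scalar(x):
    # Per-element branch: a CPU core follows only ONE path per element.
    return np.array([v * 2 if v > 0 else v - 1 for v in x])

def branchy_predicated(x):
    # GPU/SIMD-style predication: BOTH paths are computed for every
    # lane, then combined with a mask -- the wasted work grows with
    # the complexity of the branches.
    taken     = x * 2   # "then" path, computed for all lanes
    not_taken = x - 1   # "else" path, also computed for all lanes
    return np.where(x > 0, taken, not_taken)

x = np.array([3, -1, 0, 7])
assert (branchy_scalar(x) == branchy_predicated(x)).all()
```

Same answer either way, but the predicated version pays for both branch arms on every element, which is why throughput collapses when the computation is heavily data-dependent.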
MPP systems have ridiculous power consumption. Megawatts? Yow.
MPP machines - particularly those employing GPUs - often support relatively narrow ranges of numerical applications. This ultimately ends up transforming serious problems - modeling real-time physical systems (brains, weather, etc.) - from computation-bound to storage-bound: real-time systems often can't be modeled in real time, requiring results to stack up somewhere so they can be displayed in a meaningful (non-glacial) way.
I can't help but wonder if we could realize better ROI by funding more research in configurable architectures. FPGAs are the obvious starting point, but ASICs are needed to meet density/power needs of real, deployable systems that go into end-user devices.
It is impressive to read about how supercomputers are built. That's a lot of computing power! But... I think supercomputers are already in the pockets of the layman. If we think of the processing power in the iPhone and compare it against the computers that were used in the Apollo missions to the moon, we can certainly amaze ourselves and say that we have a supercomputer within reach of our hands.
For $5,750 per compute node, you could have a 6-core processor (12 threads with Hyper-Threading active) running at 4.60 GHz per core, based on the i7-3960X.
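For scale, here is a back-of-envelope peak-throughput estimate for that part. Sandy Bridge-E cores with AVX can retire 8 double-precision FLOPs per cycle (a 4-wide add plus a 4-wide multiply), so at the quoted (overclocked) clock:

```python
cores = 6
clock_ghz = 4.60          # overclocked; the i7-3960X stock base clock is 3.3 GHz
dp_flops_per_cycle = 8    # AVX: 4-wide DP add + 4-wide DP mul per core per cycle

peak_gflops = cores * clock_ghz * dp_flops_per_cycle
print(round(peak_gflops, 1))  # 220.8 GFLOPS double precision, theoretical peak per node
```

That's a theoretical ceiling, of course; sustained LINPACK numbers on such a node would come in well below it.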
You might notice something similar if you compare HokieSpeed's compute node with the Monolith machine built by Liquid Nitrogen Overclocking.
Very good effort on supercomputer research! The goal is admirable: the scientists want to make supercomputing accessible to society. In most countries, access to HPC is generally limited to researchers. It will be a great day when society at large can have access to this kind of HPC.
wait, what is this "wider, younger audience" thing? lots of unis have clusters - are you saying that there's something new about the planned access to this cluster? it's not that uncommon for undergrads to have research-sponsored accounts on clusters...
it _is_ perfectly normal for universities to buy clusters. there are scores of companies that will set you up with the same configuration as this one, all off-the-shelf hardware. you can buy turnkey package deals, or do it yourself. for this config, figure about $7k/node.
I wonder what kind of computing power the average person might have ten years from now, and what new applications there might be for it. I noticed in ScienceDaily the other day that the smallest conductors ever fabricated in silicon, 1 atom tall and 4 atoms wide, are still governed by Ohm's Law. The article states that "For engineers it could provide a roadmap to future nanoscale computational devices where atomic sizes are at the end of Moore's law." Exciting stuff there. What does the future hold? What will a supercomputer be able to do at that time? Danny Dunn's Homework Machine might become a reality. ;)
Clustering existing technology has the advantage of enabling supercomputer performance from standard modules in a cost-effective manner. How does the performance of such systems compare with that achieved through external grid computing? Certainly the latency between nodes is much lower than with external computers on a grid. If computations depend on each other, having everything in one place probably improves performance significantly.
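The latency intuition is easy to put in numbers. Assume (hypothetically) 1 ms of compute per simulation step with one synchronizing exchange per step, roughly 2 µs node-to-node latency inside a cluster, versus roughly 50 ms round trips between grid sites over a WAN:

```python
def step_time(compute_s, latency_s):
    # One synchronizing exchange per step puts latency on the critical path.
    return compute_s + latency_s

compute = 1e-3                       # hypothetical: 1 ms of compute per step
cluster = step_time(compute, 2e-6)   # ~2 us InfiniBand-class interconnect
grid    = step_time(compute, 50e-3)  # ~50 ms WAN round trip between grid sites

print(round(grid / cluster))  # ~51: each step takes ~51x longer on the grid
```

Grid computing only wins when steps are embarrassingly parallel and rarely synchronize; for tightly coupled computations the cluster's low latency dominates.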