I would say that anything which brings supercomputing into mainstream education is absolutely cause for applause, yes. If it becomes normal for universities to have supercomputers, that's a huge advantage. In my opinion...
it _is_ perfectly normal for universities to buy clusters. there are scores of companies that will set you up with the same configuration as this one, all off-the-shelf hardware. you can buy turnkey package deals, or do it yourself. for this config, figure about $7k/node.
What a piece of crap article. This is 96th on the Top500 list! Talk to me about the top 5, not the 96th! Thanks for wasting my time. Is this what they call AdverNews? I wonder how much $$$ EET got? Not surprised with what's going on there.
Well, Help.fulguy, I'm terribly sorry this article wasn't to your taste. If you felt it was a waste of your time, however, you could have simply stopped reading after paragraph 2, when you realized it was 96th. Instead, you chose to spend more time writing a comment accusing EE Times of taking money for editorial. Is that really what you believe happened here? Because that is not how we do things at EE Times. Just wanted to clarify that for you. I wrote about it because I happen to be passionate about HPC, about education, and about making supercomputing available to a wider, younger audience. The fact that you are not passionate about that is fine, but I do hope you don't believe our news team writes ANYTHING for financial incentive. Ever. Have a great weekend.
wait, what is this "wider, younger audience" thing? lots of unis have clusters - are you saying that there's something new about the planned access to this cluster? it's not that uncommon for undergrads to have research-sponsored accounts on clusters...
Clustering existing technology has the advantage of enabling supercomputer performance from standard modules in a cost effective manner. How does the performance of such systems compare with that achieved through grid computing externally? Certainly the latency between nodes is much less than that achieved with external computers on the grid. If computations depend upon each other, having everything in one place probably significantly improves performance.
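To put some rough numbers on that intuition, here's a toy model I sketched (the latency figures are ballpark assumptions for an InfiniBand-class cluster vs. a wide-area grid, not measurements from any real system):

```python
# Toy model: a job of `work_secs` total core-seconds split across `nodes`
# nodes, with `sync_steps` dependent iterations that each pay one network
# round trip before the next step can begin.
def run_time(work_secs, nodes, sync_steps, latency_secs):
    """Ideal parallel compute time plus communication overhead (simplified)."""
    return work_secs / nodes + sync_steps * latency_secs

WORK = 3600.0    # one core-hour of total work
NODES = 64
STEPS = 100_000  # tightly coupled: many dependent iterations

# Assumed round-trip latencies: ~2 microseconds inside a cluster,
# ~50 milliseconds across a wide-area grid.
cluster = run_time(WORK, NODES, STEPS, 2e-6)
grid = run_time(WORK, NODES, STEPS, 50e-3)
print(f"cluster: {cluster:.1f} s, grid: {grid:.1f} s")
```

With these made-up but plausible numbers, the grid run is dominated entirely by latency (roughly 90x slower), which is exactly why dependent computations want everything in one room.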
I wonder what kind of computing power the average person might have ten years from now, and what new applications there might be for it. I noticed in ScienceDaily the other day that the smallest conductors ever developed in silicon, 1 atom tall and 4 atoms wide, are still governed by Ohm's Law. The article states that "For engineers it could provide a roadmap to future nanoscale computational devices where atomic sizes are at the end of Moore's law." Exciting stuff there. What does the future hold? What will a supercomputer be able to do at that time? Danny Dunn's Homework Machine might become a reality. ;)
A very good effort on supercomputer research, and the goal is a worthy one: the scientists want to make the supercomputer accessible to society. In most countries, access to HPC is limited to researchers, so it will be a great day when the wider public can have access to this kind of HPC.
For $5750 per compute node, you could have a 6-core i7-3960X (12 threads with Hyper-Threading active) running at 4.60 GHz per core.
You might notice something similar if you compare HokieSpeed's compute node with the Monolith computer built by Liquid Nitrogen Overclocking.
It is impressive to read about how supercomputers are built. That's a lot of computing power! But... I think supercomputers are already in the pockets of the layman. If we compare the processing power of an iPhone against the computers used in the Apollo missions to the moon, we can certainly amaze ourselves and say that we have a supercomputer within reach of our hands.
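Just for fun, a back-of-envelope version of that comparison. This only counts raw clock rate, using commonly cited figures (my assumptions, not measurements), so it badly understates the real gap:

```python
# Commonly cited clock rates (rough assumptions for illustration):
AGC_CLOCK_HZ = 2.048e6      # Apollo Guidance Computer, ~2 MHz
IPHONE5_CLOCK_HZ = 1.3e9    # Apple A6 in the iPhone 5, ~1.3 GHz per core

ratio = IPHONE5_CLOCK_HZ / AGC_CLOCK_HZ
print(f"Clock-for-clock, the iPhone 5 runs roughly {ratio:.0f}x faster")
# ...and that ignores the second core, wider words, caches, pipelining,
# and the on-chip GPU, all of which widen the gap by orders of magnitude.
```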
Luis, I agree with your idea and spirit! An iPhone 5 has tremendous computing power.
Now, if we could get SW developers to push back when performance/development-cost arguments are raised. Contemporary code is inefficient/sloppy when compared to what was written 40 years ago. Your reference to Apollo-capsule computers is right on target!
The whole Top500 thing is a silly focus of attention. A DARPA PM I knew used to talk about "macho-OPS." Most of these massively-parallel processing (MPP) systems fit that description.
MPP systems try to use COTS (commercial off-the-shelf) parts to "save" on development costs. GPUs provide a lot of numerical bang for the buck, but don't work worth a hoot on data-dependent computations.
MPP systems have ridiculous power consumption. Megawatts? Yow.
MPP machines - particularly those employing GPUs - often support relatively narrow ranges of numerical applications. This ultimately ends up transforming serious problems - modeling real-time physical systems (brains, weather, etc.) - from computation-bound to storage-bound: real-time systems often can't be modeled in real time, requiring results to stack up somewhere so they can be displayed in a meaningful (non-glacial) way.
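To make the "results stack up" point concrete, a toy backlog calculation (every figure here is a made-up assumption, just to show the shape of the problem):

```python
def stored_bytes(sim_rate, output_rate, wall_secs):
    """If the model advances `sim_rate` seconds of physical time per
    wall-clock second (< 1.0 means slower than real time) and emits
    `output_rate` bytes per *simulated* second, this is the data that
    must be parked in storage for later, non-glacial playback."""
    simulated_secs = sim_rate * wall_secs
    return output_rate * simulated_secs

# Assumption: a weather model running at 1/10 real time, emitting
# 100 MB per simulated second, after one day of wall-clock compute:
produced = stored_bytes(0.1, 100e6, 86_400)
print(f"{produced / 1e12:.2f} TB parked for later display")
```

The computation finishes eventually, but the machine has quietly turned a compute problem into a storage-and-playback problem.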
I can't help but wonder if we could realize better ROI by funding more research in configurable architectures. FPGAs are the obvious starting point, but ASICs are needed to meet density/power needs of real, deployable systems that go into end-user devices.