SAN FRANCISCO -- After the 2003 success of its System X supercomputer, Virginia Tech is again pushing the supercomputing envelope, announcing its new HokieSpeed machine, said to be 22 times faster than its predecessor.
At just one-quarter the size of System X, and boasting a single-precision peak of 455 teraflops and a double-precision peak of 240 teraflops, HokieSpeed debuts with enough performance to vault it into the 96th spot on the most recent Top500 list.
HokieSpeed is also energy efficient enough to place it at No. 11 in the world on the November 2011 Green500 List, making it the highest-ranked commodity supercomputer in the United States.
The $1.4 million supercomputer is made up of 209 separate computing nodes, interconnected across large metal racks, each roughly 6.5 feet tall. In all, the machine occupies half a row of racks, one-third the rack space of System X.
Each HokieSpeed node consists of two 2.40-gigahertz, six-core Intel Xeon E5645 CPUs and two 448-core Nvidia Tesla M2050/C2050 GPUs on a Supermicro 2026GT-TRF motherboard. That gives HokieSpeed more than 2,500 CPU cores and more than 185,000 GPU cores.
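As a rough sanity check on those figures, here is a back-of-the-envelope sketch in Python. The node and core counts come from the article; the per-GPU and per-CPU-core peak rates are assumed values typical of this hardware generation, not numbers quoted here.

```python
# Back-of-the-envelope check of HokieSpeed's core counts and peak performance.
# Node/core counts are from the article; per-device peak rates are assumptions.

nodes = 209
cpus_per_node, cores_per_cpu = 2, 6          # Intel Xeon E5645
gpus_per_node, cores_per_gpu = 2, 448        # Nvidia Tesla M2050/C2050

cpu_cores = nodes * cpus_per_node * cores_per_cpu    # 2,508 (more than 2,500)
gpu_cores = nodes * gpus_per_node * cores_per_gpu    # 187,264 (more than 185,000)

dp_tflops_per_gpu = 0.515                     # assumed ~515 GFLOPS for a Fermi-class Tesla
dp_tflops_per_cpu_core = 2.4e9 * 4 / 1e12     # assumed 2.4 GHz x 4 flops/cycle

dp_peak = nodes * (gpus_per_node * dp_tflops_per_gpu
                   + cpus_per_node * cores_per_cpu * dp_tflops_per_cpu_core)

print(f"CPU cores: {cpu_cores:,}  GPU cores: {gpu_cores:,}")
print(f"Estimated double-precision peak: ~{dp_peak:.0f} teraflops")
```

Under those assumed rates, the estimate lands within a couple of teraflops of the 240-teraflop double-precision peak cited above.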
To complement HokieSpeed’s sheer computational power, the supercomputer will also come with a visualization wall of eight 46-inch, 3-D Samsung high-definition flat-screen televisions, giving researchers a 14-foot-wide by 4-foot-tall display on which to render data.
The display is still under construction but, once finished, will be hooked up to special visualization nodes so researchers can see their computational experiments visualized in real time. In the past, it sometimes took weeks before all the data from a computational experiment could be generated and then rendered as a video for viewing and analysis.
Wu Feng, associate professor of computer science and electrical and computer engineering at Virginia Tech, said the supercomputer would allow scientists to routinely conduct ‘what-if’ scenarios. “It will facilitate the discovery process or ‘accelerate the time to discovery,’” he said.
Feng expects that once HokieSpeed has gone through its final stages of acceptance testing, it will become the university’s next scientific workhorse and make supercomputing accessible to a wider population.
“Look at what Apple has done with the smartphone and iPad. They have taken general-purpose computing and commoditized it and made it easy to use for the masses,” said Feng. “The next frontier is to take high-performance computing, in particular supercomputers such as HokieSpeed, and personalize it for the masses.”
The majority of funding for HokieSpeed came from a $2 million National Science Foundation Major Research Instrumentation grant, and it is hoped the supercomputer will attract more international research projects to Virginia Tech, adding to the College of Engineering’s income.
Luis, I agree with your idea and spirit! An iPhone5 has tremendous computing power.
Now, if only we could get software developers to push back when performance-versus-development-cost arguments are raised. Contemporary code is inefficient and sloppy compared with what was written 40 years ago. Your reference to the Apollo-capsule computers is right on target!
The whole Top500 thing is a silly focus of attention. A DARPA PM I knew used to talk about "macho-OPS." Most of these massively-parallel processing (MPP) systems fit that description.
MPP systems try to use COTS (commercial off-the-shelf) parts to "save" on development costs. GPUs provide a lot of numerical bang for the buck, but don't work worth a hoot on data-dependent computations.
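To illustrate the point about data-dependent computations, here is a toy model (my own sketch, not anything measured on HokieSpeed) of how branch divergence erodes a GPU's SIMD efficiency: threads in a warp that take different branches execute serially, so the more distinct code paths the data forces, the lower the effective utilization.

```python
# Toy model of warp divergence on a SIMD-style GPU (illustrative only).
# If data sends the 32 threads of a warp down k distinct paths, the paths
# run one after another, so roughly 1/k of the lanes do useful work at a time.

WARP_SIZE = 32

def warp_efficiency(distinct_paths: int) -> float:
    """Fraction of lanes doing useful work when a warp splits into
    `distinct_paths` divergent branches (equal-cost toy model)."""
    return 1.0 / max(1, min(distinct_paths, WARP_SIZE))

for paths in (1, 2, 4, 32):
    print(f"{paths:2d} divergent path(s) -> ~{warp_efficiency(paths):.0%} of peak")
```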
MPP systems have ridiculous power consumption. Megawatts? Yow.
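For context on the Green500 ranking mentioned in the article, that list rates machines by sustained LINPACK performance per watt. A minimal sketch of the metric, using placeholder numbers rather than HokieSpeed's actual figures:

```python
# Green500 metric: sustained LINPACK flops per watt of power drawn.
# Both numbers below are hypothetical placeholders, not HokieSpeed's actuals.
linpack_tflops = 100.0   # hypothetical sustained LINPACK performance
power_kw = 150.0         # hypothetical total power draw
mflops_per_watt = linpack_tflops * 1e6 / (power_kw * 1e3)
print(f"{mflops_per_watt:.0f} MFLOPS per watt")   # ~667 with these placeholders
```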
MPP machines - particularly those employing GPUs - often support relatively narrow ranges of numerical applications. This ultimately ends up transforming serious problems - modeling real-time physical systems (brains, weather, etc.) - from computation-bound to storage-bound: real-time systems often can't be modeled in real time, requiring results to stack up somewhere so they can be displayed in a meaningful (non-glacial) way.
I can't help but wonder if we could realize better ROI by funding more research in configurable architectures. FPGAs are the obvious starting point, but ASICs are needed to meet density/power needs of real, deployable systems that go into end-user devices.
It is impressive to read about how supercomputers are built. That's a lot of computing power! But... I think supercomputers are already in the pockets of the layman. If we think of the processing power in the iPhone and compare it against the computers used in the Apollo missions to the moon, we can certainly amaze ourselves and say that we have a supercomputer within reach of our hands.
For $5,750 per compute node, you could have a 6-core CPU (12 threads with Hyper-Threading active) running at 4.60 GHz per core with the Core i7-3960X architecture.
You might notice something similar if you look closely at HokieSpeed's compute node and at the Monolith computer built by Liquid Nitrogen Overclocking.
Very good effort on supercomputer research. The goal is a worthy one, since the scientists want to make the supercomputer accessible to society; in most countries, access to HPC is generally limited to researchers, so it will be a great day when society at large can get access to this kind of HPC.
wait, what is this "wider, younger audience" thing? lots of unis have clusters - are you saying that there's something new about the planned access to this cluster? it's not that uncommon for undergrads to have research-sponsored accounts on clusters...
it _is_ perfectly normal for universities to buy clusters. there are scores of companies that will set you up with the same configuration as this one, all off-the-shelf hardware. you can buy turnkey package deals, or do it yourself. for this config, figure about $7k/node.
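For what it's worth, the article's own numbers roughly agree with that estimate; a quick check using only the $1.4 million price and 209-node count quoted above:

```python
# Per-node cost check using the figures quoted in the article.
total_cost_dollars = 1_400_000
nodes = 209
print(f"~${total_cost_dollars / nodes:,.0f} per node")   # roughly $6,700, in line with ~$7k/node
```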
I wonder ten years from now what kind of computing power the average person might have and what new applications there might be for it. I noticed in ScienceDaily the other day that the smallest conductors ever developed in silicon, 1 atom tall and 4 atoms wide, are still governed by Ohm's Law. The article states that "For engineers it could provide a roadmap to future nanoscale computational devices where atomic sizes are at the end of Moore's law." Exciting stuff there. What does the future hold? What will a supercomputer be able to do by then? Danny Dunn's Homework Machine might become a reality. ;)
Clustering existing technology has the advantage of enabling supercomputer performance from standard modules in a cost effective manner. How does the performance of such systems compare with that achieved through grid computing externally? Certainly the latency between nodes is much less than that achieved with external computers on the grid. If computations depend upon each other, having everything in one place probably significantly improves performance.
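A crude way to see why that latency matters for tightly coupled work: model each iteration as local compute plus one synchronization round-trip. The microsecond and millisecond latencies below are illustrative assumptions, not measurements of HokieSpeed or any particular grid.

```python
# Crude model of tightly coupled parallel work: each iteration does some
# local compute, then exchanges data with neighbors before continuing.
# Latency figures are illustrative assumptions only.

def runtime(iterations: int, compute_s: float, latency_s: float) -> float:
    """Total time when every iteration pays one communication round-trip."""
    return iterations * (compute_s + latency_s)

iters = 100_000
compute = 50e-6           # 50 microseconds of local work per iteration (assumed)
cluster_latency = 2e-6    # ~2 us, InfiniBand-class interconnect (assumed)
grid_latency = 50e-3      # ~50 ms, wide-area "grid" hop (assumed)

print(f"cluster: {runtime(iters, compute, cluster_latency):8.1f} s")
print(f"grid:    {runtime(iters, compute, grid_latency):8.1f} s")
```

In this toy model the wide-area "grid" spends nearly all of its time waiting on the network, which is why embarrassingly parallel workloads suit grids while tightly coupled simulations favor a single, closely interconnected cluster.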