I agree with the point made in the article: benchmarking needs to address both numerical computation and the management and movement of large data sets. This point was made well at ICCAD, held in Silicon Valley about two weeks ago.
Dr. Mark Shephard (RPI), who spoke at ParCAD (after the keynote by Dr. Arvind of MIT), described his experiences with the Big Blue machine in his talk on Adaptive Simulations on Massively Parallel Computers. For an upcoming simulation that could involve solving a billion equations (yes, not a typo!), he mentioned that while Big Blue may crunch this out in a few hours, it may take a few days to write out the simulation data! To that end, he thought Linpack as a benchmark may not be truly representative of the cluster. Graph 500 may be the answer to truly assessing the capabilities of a supercomputer.
Dr. MP Divakar
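For readers unfamiliar with the distinction drawn above: Linpack ranks machines by floating-point rate, while Graph 500 ranks them by breadth-first-search throughput, measured in traversed edges per second (TEPS), which stresses memory and data movement rather than arithmetic. A minimal sketch of that kernel, on a toy adjacency-list graph (the real benchmark uses enormous synthetic Kronecker graphs, and this is only an illustration, not the reference implementation):

```python
import time
from collections import deque

def bfs_teps(adj, root):
    """Run BFS from `root`; return (parent map, traversed edges per second)."""
    parent = {root: root}
    frontier = deque([root])
    edges = 0
    start = time.perf_counter()
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            edges += 1                 # every scanned edge counts toward TEPS
            if v not in parent:
                parent[v] = u
                frontier.append(v)
    elapsed = time.perf_counter() - start
    return parent, edges / elapsed if elapsed > 0 else float("inf")

# Tiny undirected example graph, stored as adjacency lists
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
parent, teps = bfs_teps(adj, 0)
print(len(parent))  # number of vertices reached from the root
```

The work here is pointer chasing through irregular memory, so the score tracks how well the machine moves data, which is exactly the capability the comment argues Linpack fails to capture.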
The issue is broader than supercomputers. The lack of an industry-consensus compute unit prevents the cloud infrastructure-as-a-service category from gaining traction. Amazon created an abstraction it calls the ECU, and a check of 20 other providers will turn up 18 other metrics. The topic is addressed at http://cloudpricecalculator.com/blog
Very interesting. This is a much-needed change to the way things are done. Of course, there will be detractors, but that is always the case with anything intended to provide benchmarking: either it will not measure what someone wants measured, or it will do so in a way that someone feels is biased. In this case, I am sure that the "influence over how industry puts together supercomputers" will result in some bias toward certain design methodologies.