IBM technologists claim the accelerator approach won’t scale to the next big leap of exascale-class systems. Such systems will require new architectures that use 3-D stacking of processors and memory, they say.
Indeed, “power is a big issue,” said Jack Dongarra, of the University of Tennessee, another researcher behind the Top 500. “Today the Titan is at 2 Gflops/W, but we need 50 Gflops/W at exascale,” he said.
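Dongarra's two efficiency figures imply a large gap in sustained power draw at the exaflop level. A back-of-the-envelope sketch (the function name is ours, not from the article):

```python
# Power required to sustain 1 exaflop (1e18 flops) at a given
# efficiency: today's ~2 Gflops/W versus the 50 Gflops/W target.
EXAFLOP = 1e18  # flops
GFLOP = 1e9     # flops per Gflop

def watts_needed(target_flops, gflops_per_watt):
    """Sustained power draw (watts) required at a given efficiency."""
    return target_flops / (gflops_per_watt * GFLOP)

today_mw = watts_needed(EXAFLOP, 2) / 1e6    # 500 MW at 2 Gflops/W
target_mw = watts_needed(EXAFLOP, 50) / 1e6  # 20 MW at 50 Gflops/W
print(today_mw, target_mw)
```

At Titan-class efficiency an exaflop machine would burn roughly 500 MW; hitting the 50 Gflops/W target brings that down to about 20 MW, which is why efficiency, not raw flops, is the gating factor.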
InfiniBand surpassed Ethernet as the cluster interconnect of choice in the current Top 500. InfiniBand was used in 226 systems, up from 209 systems six months ago. Gigabit Ethernet was used in 188 systems, down from 207.
InfiniBand-based systems also account for more than twice as much performance as Ethernet-based ones at 52.7 petaflops compared to 20.3 petaflops.
The list does not track use of Gigabit versus 10G Ethernet. Dongarra said 10GE is now economical but provides less bang for the buck than 40G InfiniBand. He estimated networking for a small 16-node 10GE cluster costs about $17,500, compared to about $21,500 for 40G InfiniBand.
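Dividing Dongarra's estimates across his hypothetical 16-node cluster makes the per-node premium concrete (the figures are from the article; the helper is ours):

```python
# Per-node network cost for the 16-node example cluster.
def cost_per_node(total_network_cost, nodes):
    return total_network_cost / nodes

ten_ge = cost_per_node(17_500, 16)      # 10G Ethernet
infiniband = cost_per_node(21_500, 16)  # 40G InfiniBand
print(ten_ge, infiniband)  # 1093.75 vs 1343.75 per node
```

That works out to roughly a $250-per-node premium for 40G InfiniBand, which buyers weigh against its higher bandwidth and lower latency.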
Among other trends, Intel continues to supply most of the processors in the Top 500 (76 percent). AMD’s Opteron and IBM’s Power trail at 12 and 10.6 percent, respectively. CPUs with eight or more cores are on the rise, used in 46.2 percent of the systems.
Energy-efficient does not mean low power. It means a better quotient of MIPS or GFLOPS per unit of power used. Those numbers are published when the processors in question ran LINPACK or, for I/O, a similar test.
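The quotient is just sustained performance divided by power draw. A sketch using Titan's published LINPACK figures (roughly 17.59 Pflops Rmax at about 8,209 kW), which line up with the ~2 Gflops/W cited above:

```python
# Energy efficiency as a quotient: sustained Gflops per watt.
def gflops_per_watt(rmax_gflops, power_watts):
    return rmax_gflops / power_watts

# Titan: ~17.59 Pflops -> Gflops, ~8,209 kW -> watts
titan = gflops_per_watt(17.59e6, 8_209e3)
print(round(titan, 2))  # ~2.14
```

A lower-power chip that also delivers proportionally less LINPACK throughput scores no better on this metric than a hotter, faster one.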
The A57 is supposed to deliver 3x the performance of the A15. Of course, that means it would still need more cores than Xeon Phi. Interconnecting the cores also takes energy, so I think it remains to be seen which is better in terms of performance/W.