IBM technologists claim the accelerator approach won't scale to the next big leap, exascale-class systems. Such systems, they say, will require new architectures that use 3-D stacking of processors and memory.
Indeed, “power is a big issue,” said Jack Dongarra of the University of Tennessee, another researcher behind the Top 500. “Today the Titan is at 2 Gflops/W, but we need 50 Gflops/W at exascale,” he said.
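To put those efficiency numbers in perspective, a back-of-the-envelope calculation shows why the gap matters. The sketch below assumes the commonly cited target of roughly 20 megawatts for an exascale machine, a figure not given in this article, and simply converts Gflops/W into total power draw:

```python
# Back-of-the-envelope power math for an exaflop (10^18 flops) machine.
# The ~20 MW target is an assumed, commonly cited figure, not from this article.
EXAFLOP = 1e18  # sustained floating-point operations per second


def power_megawatts(gflops_per_watt):
    """Power draw in megawatts for an exaflop system at a given efficiency."""
    watts = EXAFLOP / (gflops_per_watt * 1e9)
    return watts / 1e6


print(power_megawatts(2))   # Titan-class efficiency (2 Gflops/W): ~500 MW
print(power_megawatts(50))  # Exascale target (50 Gflops/W):       ~20 MW
```

At today's 2 Gflops/W, an exaflop machine would draw on the order of 500 megawatts; hitting 50 Gflops/W brings that down to about 20 megawatts.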
InfiniBand surpassed Ethernet as the cluster interconnect of choice in the current Top 500. InfiniBand was used in 226 systems, up from 209 systems six months ago. Gigabit Ethernet was used in 188 systems, down from 207.
InfiniBand-based systems also account for more than twice as much performance as Ethernet-based ones, at 52.7 petaflops compared with 20.3 petaflops.
The list does not track use of Gigabit versus 10 Gigabit Ethernet. Dongarra said 10GE is now economical but provides less bang for the buck than 40G InfiniBand. He estimated networking for a small 16-node 10GE cluster costs about $17,500, compared with about $21,500 for 40G InfiniBand.
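As a rough sketch of that trade-off, dividing Dongarra's estimates by node count and by nominal link rate (using the raw 10 Gbit/s and 40 Gbit/s line rates as an assumption; effective application bandwidth will differ) gives per-node and per-gigabit costs:

```python
# Rough cost comparison based on Dongarra's 16-node cluster estimates.
# Link rates are nominal line rates (assumed), not measured throughput.
NODES = 16

options = {
    "10G Ethernet":   {"total_cost": 17_500, "gbps": 10},
    "40G InfiniBand": {"total_cost": 21_500, "gbps": 40},
}

for name, o in options.items():
    per_node = o["total_cost"] / NODES
    per_gbps = per_node / o["gbps"]
    print(f"{name}: ${per_node:,.0f} per node, ~${per_gbps:,.0f} per Gbit/s")
```

On those assumptions, 10GE comes in around $1,100 per node versus roughly $1,340 for 40G InfiniBand, but the InfiniBand option delivers about three times more nominal bandwidth per dollar.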
Among other trends, Intel continues to supply most of the processors in the Top 500 (76 percent). AMD's Opteron and IBM's Power trail at 12 and 10.6 percent, respectively. CPUs with eight or more cores are on the rise, appearing in 46.2 percent of the systems.