PORTLAND, Ore. — Building the world’s fastest supercomputers is the aim of the Department of Energy’s Coral (Collaboration of Oak Ridge, Argonne, and Livermore) program. Coral aims to supersede the labs’ current cluster supercomputers, Titan, Mira, and Sequoia, respectively, with 100+ petaflop machines built on pre-exascale architectures, maintaining the US leadership position with smarter supercomputer architectures one step beyond today’s “clusters.”
The DoE announced today that it has chosen IBM’s “data centric” architecture for Oak Ridge National Laboratory’s new “Summit” and Lawrence Livermore National Laboratory’s new “Sierra,” both to be completed by 2017. Argonne National Laboratory will choose a different, yet-to-be-announced architecture for its pre-exascale supercomputer. Altogether, the Coral program should add some 400 petaflops to the US supercomputer arsenal.

“Data-centric computing, or the idea of locating computing resources in all the places where data exists to minimize data movement, is a necessary and critically important architectural transition that IBM is announcing with the Coral project,” said Dave Turek, vice president of Technical Computing OpenPower for IBM, in a video, “because not only are the government labs in the US experiencing the dramatic impact of huge amounts of data, but so are industries around the world.”
In a nutshell, data-centric computing makes the storage devices “smart” by adding computing resources right there in the same box, so that many routine — and some not so routine — data manipulation algorithms can be run locally without having to send massive amounts of data across the InfiniBand fabric, thereby lowering power consumption, too. Mellanox Technologies will supply the InfiniBand while Nvidia will supply graphics processing units (GPUs) specialized for computing tasks rather than graphics.
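As a rough illustration of that idea — a minimal sketch, not IBM’s actual software stack; the class and function names below are invented — the Python snippet contrasts shipping an entire dataset across the fabric with shipping a small function to a hypothetical storage node and pulling back only the result.

```python
# Minimal sketch of "data-centric" processing, assuming a hypothetical
# storage node that can run code next to its disks. Nothing here reflects
# IBM's actual Summit/Sierra design; all names are invented.

class StorageNode:
    """Pretend storage box with local compute attached."""
    def __init__(self, records):
        self.records = records          # data that lives on this node

    def run_local(self, func):
        """Run a computation where the data sits; return only the result."""
        return func(self.records)

# Compute-centric: pull every record across the fabric, then reduce.
def compute_centric_sum(node):
    shipped = list(node.records)        # the entire dataset crosses the fabric
    return sum(shipped)

# Data-centric: ship the (tiny) function instead; only one number moves.
def data_centric_sum(node):
    return node.run_local(sum)

node = StorageNode(records=range(1_000_000))
assert compute_centric_sum(node) == data_centric_sum(node)
```

The same pattern scales from a one-line reduction up to the “not so routine” analytics the article mentions; the point is that the function is bytes long while the data is petabytes long.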
Said Turek:
- Data-centric computing is a phenomenon that the national labs and other science institutions already experience today. They are awash with data; they don’t know how to handle it. These new architectural paradigms were produced to deal with this problem, but this is a problem from which no one is immune. The issue of big data and its impact on the acceleration of insight is something that cuts across all market segments, all technologies, and eventually you’ll see it affecting even smart devices that consumers are using.
The immediate goal of Summit and Sierra is to increase the computing capacity of the current cluster supercomputers by five to 10 times while consuming one-fifth the power, and to accommodate a wider spectrum of analytics and big-data applications.
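Taken at face value, those two targets compound. Quick back-of-envelope arithmetic — mine, not the DoE’s — puts the implied gain in flops per watt at 25 to 50 times:

```python
# Implied energy efficiency from the stated goals; simple arithmetic only,
# not an official DoE or IBM figure.
perf_gain_low, perf_gain_high = 5, 10   # 5x to 10x the computing capacity
power_fraction = 1 / 5                  # one-fifth the power consumption

print(f"Flops per watt improve {perf_gain_low / power_fraction:.0f}x "
      f"to {perf_gain_high / power_fraction:.0f}x")
# -> Flops per watt improve 25x to 50x
```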
Addison Snell, CEO of Intersect360 Research in Mountain View, Calif., told EE Times:
- The movement of data across a system has become the gating factor. What IBM is proposing with its data-centric architecture is to intelligently locate computational elements throughout the system — integrated with storage components or networking components where the data resides — in order to optimize the workflow of a system on an ongoing basis.
Each system will house more than five petabytes of dynamic and flash memory and will be capable of moving data at a rate of more than 17 petabytes per second. Working closely with Nvidia, IBM will incorporate the high-speed NVLink interconnect technology, using Nvidia’s Volta architecture, which will enable IBM CPUs and Nvidia GPUs to communicate 12 times faster than they can today. IBM is also working with Mellanox to add smarts to the InfiniBand fabric that improve data handling.
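For a rough sense of what that 12x claim implies — my arithmetic; neither IBM nor Nvidia has published this exact comparison — if “today” means a PCIe 3.0 x16 link, the implied CPU-GPU bandwidth lands near 190 GB/s per direction:

```python
# Back-of-envelope only. Assumes "today" means PCIe 3.0 x16, which carries
# roughly 15.75 GB/s of usable bandwidth per direction.
pcie3_x16_gb_s = 15.75
nvlink_speedup = 12                     # the claimed CPU-GPU speedup

print(f"Implied NVLink bandwidth: ~{pcie3_x16_gb_s * nvlink_speedup:.0f} GB/s")
# -> Implied NVLink bandwidth: ~189 GB/s
```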
“Nvidia is a member of the Open Power Alliance,” said Snell. “These data-centric systems for Coral are going to make use of [Nvidia’s] GPU computing, which is very good at certain kinds of math-intensive calculations.”
Computational units will, in fact, be part of the storage infrastructure itself, so that for certain types of data manipulation the data never has to leave the storage device and be shipped over to the servers, thereby decongesting the interconnect fabric.
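The payoff is easiest to see with made-up numbers: if a query keeps only 0.1% of a one-petabyte dataset, filtering inside the storage box cuts fabric traffic a thousandfold. Both figures below are invented purely for illustration.

```python
# Hypothetical numbers to show why in-storage filtering decongests the
# fabric; dataset size and query selectivity are invented for the example.
dataset_bytes = 1e15                   # 1 PB resident on the storage nodes
selectivity   = 0.001                  # suppose 0.1% of records match a query

shipped_compute_centric = dataset_bytes                # ship everything
shipped_data_centric    = dataset_bytes * selectivity  # ship matches only

print(f"Fabric traffic reduced "
      f"{shipped_compute_centric / shipped_data_centric:.0f}x")
# -> Fabric traffic reduced 1000x
```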
Snell told us:
- Data-centric computing is one step beyond the traditional notion of a cluster. In the new era of exascale computing, the concept of commodity clusters goes away, and what we are really moving toward is an era of more specialization, because there are so many different component technologies evolving independently today.
What IBM is trying to do is look at the world not from a computer-centric perspective, but from a vision of how the data moves through all of your elements over the course of the entire workflow, and they are trying to optimize an architecture around that. The Coral systems are going to deliver some of the component technologies to realize that vision, but I think that there is never going to be a point where you can say this was the data-centric computer and that one was not — I think it’s going to evolve.
— R. Colin Johnson, Advanced Technology Editor, EE Times 
Related posts:
- Supercomputer Accelerator Comes to Datacenter
- Top 10 Supercomputers Worldwide
- Archer Supercomputer Is Fastest in UK
- Superconductivity Predicted by Supercomputer
- IBM Wins Gordon Bell Prize




Good report. Coral sounds like an effort at the next big leap from today's ~34 PFlops Tianhe-2 in China to 100 PFlops.
Pairing x86 CPUs and GPUs has been a big trend for at least a couple of years now. Sounds like IBM has its own twist on that with Power8 and the proprietary Nvidia NVLink on Volta (which I believe is a 2.5D chip with GPU and memory, right?).
I'm not sure what IBM means by the marketing term "data-centric" in terms of the actual architecture, but we shall see.