AUSTIN, Texas -- The OpenPOWER Foundation (OPF) is driving computer innovations that range from new kinds of supercomputers to new uses for flash memory, according to representatives from IBM who lead the group, which is built around IBM's Power processor architecture.
The OPF did fundamental work on CORAL, a US Department of Energy program for next-generation supercomputers. The group also aims to pioneer new flash-based memory capabilities for servers, while partners in China and elsewhere are gearing up other innovations.
The CORAL program will forge a new Power8-based supercomputer architecture in collaboration with the Oak Ridge, Argonne, and Livermore national labs by 2017. It will use accelerators such as graphics chips linked over Nvidia's NVLink, as well as other chips riding IBM's Coherent Accelerator Processor Interface (CAPI).
The government labs "need a bigger scale than most anyone else in the world," Brad McCredie, vice president of IBM Power Systems Development and OPF president, told EE Times. "That scale drives so much hardware and software interaction… reliability, efficiency, and systems clustering connectivity are the biggest challenges we're going to face."
IBM won the contract in part because it could offer a ready-made ecosystem with partners such as Nvidia, Mellanox, and the OPF, McCredie said.
"While [national labs] do need to drive the pace of technology innovation and need some really, really big systems with very, very large capabilities, they are very heavily focused on trying to keep needs in line with the market," McCredie said. "Like everyone else, they need ecosystems around them to draw on in building the technologies they want."
On the memory front, McCredie said the foundation's work in low-latency flash storage will be disruptive. At the chip level, it is helping define new ways to plug flash into DRAM and CAPI interconnects to enable new capabilities in big data analytics software.
At the software level, IBM has developed BigInsights, its own version of Hadoop for processing big data. Fadi Gebara, a senior manager at IBM's Austin Research Lab, said the group will also use Java on GPUs to help visualize analytics.
Working with the startup Diablo Technologies, IBM put flash on DRAM DIMM modules to deliver new capabilities. "It allows you to buffer memory right into DRAM and then execute into a flash storage module that sits on a memory bus," Gebara said.
The Diablo technology is well suited for applications such as high-frequency trading, where low latency is paramount, he said. IBM is also working on applications for flash memory riding its CAPI interconnect.