Despite the hurdles, IBM executives were optimistic that Power8 will gain traction as an open hardware alternative to x86.
"It's about opening the platform up in a way you've never seen in the server market before," Doug Balog, IBM general manager for Power Systems, said at the event. "We acknowledge that no one company can do it all these days. The target is moving too fast around mobile, social, analytics, the cloud… no one company should have the right to own innovation."
Through the OpenPOWER Foundation (OPF), IBM will make the Power8 hardware and software available for licensing. Ultimately, members could use Power8 to create custom open servers, components, and Linux code optimized for cloud datacenters.
IBM has already released technical specifications for Power8. The processor packs 12 cores, each capable of running eight threads, and is made in a 22nm process. It can analyze data 50 times faster than the latest x86-based systems, according to the company.
"This is the first truly disruptive advancement in high-end server technology in decades, with radical technology changes and the full support of an open server ecosystem that will seamlessly lead our clients into this world of massive data volumes and complexity," Tom Rosamilia, senior vice president of IBM's systems and technology group, said in a press release. "There no longer is a one-size-fits-all approach to scale out a datacenter."
Despite earlier layoffs and rumors that it might leave the semiconductor business, the company says it is at the forefront of design trends. Richard Talbot, director and project executive for IBM Power Systems, told us that an inclination toward openness in software and application development is now creeping into all layers of hardware.
One interesting detail in the news is IBM's use of NVLink, a high-speed interconnect between GPUs and CPUs co-developed with Nvidia. Brookwood said the high-performance computing community will try a different architecture if it gives them a performance advantage unavailable elsewhere.
"It's an interesting way to attach devices; it lowers latency between devices, instead of going through a multi-layered infrastructure," Talbot said. "The GPU is attached directly to the processor, so you get upwards of 40x better latency than using standard IO technologies."
— Jessica Lipsky, Associate Editor, EE Times