Analysts were skeptical about whether Intel could catch up in graphics co-processors, particularly against Nvidia.
“Nvidia’s Tesla GPUs have a parallel-compute head start with much broader commercial deployments over longer periods of time,” said Patrick Moorhead, principal of market watcher Moor Insights & Strategy. “As with all new brand announcements, this indicates that Intel is very serious about this space, but only time will tell the outcome,” he said.
“Intel's claim of teraflops performance with Knights Corner certainly puts them in the same class as AMD's and Nvidia's latest GPUs with regard to raw performance, but we'll have to wait for benchmark results based on real applications before concluding that Intel can be a serious player with Xeon Phi,” said Nathan Brookwood, principal of Insight64 (Saratoga, Calif.).
“I continue to be somewhat skeptical about Intel's claim that users can use standard programming tools on Phi, since most practitioners find it necessary to modify their codes in order to exploit massively parallel architectures,” Brookwood said. “Phi's use of the x86 instruction set doesn't give Intel any advantages in this regard, and may in fact be an impediment, since the GPU SIMD vector architectures used by AMD and Nvidia typically deliver superior flops per watt compared with the scalar-oriented x86,” he added.
The Cray/Intel partnership is a win-win. Cray has been struggling to survive as an independent HPC vendor for years. Supercomputers have moved from being high-margin products with purpose-built CPUs to lower-margin systems built from thousands of relatively standard PC server CPUs by the likes of Hewlett-Packard and Dell.
Cray’s Cascade system started its life in about 2002 as a proposal for the High Productivity Computing Systems (HPCS) program under the U.S. Defense Advanced Research Projects Agency. HPCS aimed to deliver world-class supercomputers that were easier to program thanks to advances in software languages and tools.
In 2006, DARPA gave Cray $250 million to prototype its Cascade concept. In 2008, Cray and Intel struck a broad technology partnership agreement.
The Cray/Intel deal likely paved the way for Cray to adopt Xeon Phi in Cascade. Intel may have gotten access to some of Cray’s proprietary interconnect technology as part of the deal.
Intel originally planned to attack the mainstream market for discrete graphics chips with its multicore x86 technology, code-named Larrabee. However, in 2009 Intel decided the move was not viable given the strength of Nvidia and AMD in that market.
That’s when Intel switched its target to the smaller but still significant HPC market, which was just beginning to adopt so-called general-purpose GPUs (GPGPUs). Even in this sector, Intel is far behind Nvidia, which has established its CUDA programming environment for its chips.
AMD is also playing catch-up here, embracing the OpenCL standard for GPGPUs. AMD is also developing industry partnerships with ARM and others to drive an ecosystem around OpenCL and its approach to using x86, ARM and graphics cores.
Intel has yet to provide full details of its programming model for Xeon Phi, except to say that it leverages the fact that the chip is based on x86 cores. That was the same argument Intel originally used for Larrabee in the mainstream PC graphics space.
The Brookwood quote is disappointing, since he seems to have missed the point: it's not about the x86 ISA, but rather the programming model we all know and love. Anyone who has spent time rewriting code to suit the moving target of GPUs will appreciate a relatively normal memory model, threads, caches, shared memory and message passing. The joules-per-flop argument is a good and interesting one, but it naively assumes that every workload is trivially malleable into the rather rigid model that GPUs provide.
The rash of large and relatively power-efficient Blue Gene machines also calls into question the claim that efficient computation requires the GPU model...