LONDON – C/C++ compilers from Portland Group have been updated to let Cuda developers target standard servers based on microprocessors from Intel Corp. and Advanced Micro Devices Inc.
The Portland Group (Portland, Ore.) is a wholly-owned subsidiary of European chip company STMicroelectronics NV. Cuda is a parallel computing architecture from Nvidia Corp. that harnesses Nvidia's graphics processing units (GPUs) to do general-purpose computing. The Cuda C/C++ for x86 compilers were developed by Portland in cooperation with Nvidia.
With the Cuda C/C++ compilers, developers can use the Cuda parallel programming model to optimize the performance of their code while targeting servers and clusters with or without Nvidia GPUs.
When run on x86-based systems, Portland Cuda C/C++ applications execute in parallel across multiple processor cores and by using Streaming SIMD Extensions (SSE), including the new AVX instructions available on the latest generation of x86-compatible CPUs from Intel and AMD.
Portland said it would roll out the x86 Cuda C/C++ compilers in three phases. Phase 1, available now, demonstrates the capabilities of the technology and allows developers to begin working with the compilers. Phase 2, scheduled for the fourth quarter of 2011, will include performance optimizations intended to extract maximum performance from Cuda programs running on the x86 target platform. Phase 3, planned for mid-2012, will include support for the Portland Unified Binary technology — the ability to run one executable on both CPUs and GPUs.
"It's another important element in our ongoing strategy of providing HPC programmers with a full range of options for optimizing compute-intensive applications and leveraging the latest technical innovations from AMD, Intel and Nvidia," said Douglas Miles, a director at The Portland Group, in a statement.
"Cuda is the world's pre-eminent parallel programming model, supporting a range of open standards, architectures and programming languages," said Sanford Russell, director of Cuda marketing at Nvidia, in the same statement. "Now for the first time, developers can run their Cuda apps on any x86 clusters."
A responsible journalist would avoid printing dreck like that closing quote from the Nvidia marketing weasel. Pre-eminent... open, blah blah.
Cuda is a closed, proprietary data-parallel programming model, and neither the first nor the most capable. But most importantly, it is in no way open.
Cool transformation of a graphics parallel-processing technique for use in general-purpose parallel processing. It opens a new thread of trying other parallel-processing techniques to improve server performance.