For its part, Nvidia is something of a “lone wolf” pursuing its own path, said Kevin Krewell, senior analyst with the Linley Group and moderator of the panel. Nvidia uses its proprietary CUDA language for general-purpose GPU computing and its Chimera API for computational photography.
Using in-house techniques helped the company get products out early, said Brian Cabral, a vice president of engineering at Nvidia and a member of the panel. “As the market matures, we will do the right thing by the market,” he said.
Cabral also noted CPU and GPU workloads “are vastly different” and call for different software models.
“There’s a tension between the ease of use that cache coherency gives you in sharing data, but at a cost of power and throughput,” Cabral said. “Up to now we have erred on the side of performance, because that’s what people are buying.
“There may be a way to solve the problem as we get smarter, but for the foreseeable future these are different workloads and should be managed differently,” he said.
Nevertheless, all sides on the panel agreed that having fewer programming models benefits the industry and the software development community. “We don’t want to write the same program 10 times,” Cabral said.