For its part, Nvidia is something of a “lone wolf” pursuing its own path, said Kevin Krewell, senior analyst with the Linley Group and moderator of the panel. Nvidia uses its proprietary CUDA language for general-purpose GPUs and its Chimera API for computational photography.
Using in-house techniques helped the company get products out early, said Brian Cabral, a vice president of engineering at Nvidia and a member of the panel. “As the market matures, we will do the right thing by the market,” he said.
Cabral also noted CPU and GPU workloads “are vastly different” and call for different software models.
“There’s a tension between the ease of use that cache coherency gives you in sharing data but at cost of power and throughput,” Cabral said. “Up to now we have erred on the side of performance because that’s what people are buying.
“There may be a way to solve the problem as we get smarter, but for the foreseeable future these are different workloads and should be managed differently,” he said.
Nevertheless, all sides on the panel agreed that having fewer programming models benefits the industry and the software development community. “We don’t want to write the same program 10 times,” Cabral said.
Qualcomm's use of, and motivations for adopting, LLVM are to me the most interesting aspects of the article.
It would seem that, in broad use environments, letting the front end help dictate the hardware resources, architecture, and even the available instructions would generally be useful.
Join our online Radio Show on Friday, July 11, starting at 2:00 p.m. Eastern, when EETimes editor of all things fun and interesting, Max Maxfield, and embedded systems expert, Jack Ganssle, will debate just what is, and is not, an embedded system.