For its part, Nvidia is something of a “lone wolf” pursuing its own path, said Kevin Krewell, senior analyst with the Linley Group and moderator of the panel. Nvidia uses its proprietary CUDA language for general-purpose GPU computing and its Chimera architecture for computational photography.
Using in-house techniques helped the company get products out early, said Brian Cabral, a vice president of engineering at Nvidia and a member of the panel. “As the market matures, we will do the right thing by the market,” he said.
Cabral also noted CPU and GPU workloads “are vastly different” and call for different software models.
“There’s a tension between the ease of use that cache coherency gives you in sharing data, but at a cost in power and throughput,” Cabral said. “Up to now we have erred on the side of performance because that’s what people are buying.
“There may be a way to solve the problem as we get smarter, but for the foreseeable future these are different workloads and should be managed differently,” he said.
Nevertheless, all sides on the panel agreed that having fewer programming models benefits the industry and the software development community. “We don’t want to write the same program 10 times,” Cabral said.
Qualcomm's use of, and motivations for adopting, LLVM are to me the most interesting aspects of the article.
It would seem that, in broadly used environments, the compiler front end could usefully dictate the hardware resources, the architecture, and even the instructions made available.