There are many reasons why design teams now favor licensing a complete integrated graphics processing unit (GPU) solution over designing one in-house. As Remi Pedersen, graphics product manager at ARM, explains, designers weighing whether to make or buy a GPU should consider the total cost of ownership of each option.
Designers are developing more advanced graphics processing to address increasing market demand for a better quality graphics experience. High-end displays are no longer restricted to just gaming and video devices. Larger screens and computer-like capabilities on mobile phones, multimedia players and GPS devices put the burden on design teams to deliver intuitive and engaging user interfaces, and high-quality video, graphics and audio to match users' desktop experiences.
Such is the desire for next-generation entertainment on mobile, automotive and infotainment platforms that leading mobile analyst Screen Digest expects the value of the mobile gaming, video and TV market to grow by 300 percent by 2013.
As designers look to integrate more advanced graphics processing into just about any product with a screen, some design teams may contemplate whether to produce a solution in-house or license an integrated solution. The 'make versus buy' decision is not only about figuring out the non-recurring engineering (NRE) effort required to create the hardware IP, write the software and undertake the considerable verification tasks involved. To evaluate total cost of ownership, you have to step back and consider the bigger picture and the longer-term needs of your product portfolio.
The rising cost of chip design is a hot topic across the entire industry. More than ever, design teams must decide exactly where they should focus their scarce resources to differentiate their designs and minimize development costs. Because advanced graphics processing is a key ingredient in more and more products, companies also have to consider the flexibility and scalability of any in-house investment that they make in GPU development. Can they reuse the GPU for future product derivatives, or share it across business units targeting other products that need graphics acceleration? If the answer is yes, the design specification must now satisfy a broader range of needs, and the cost of IP development and support will inevitably rise.
Flexible, scalable, configurable
Design teams thinking about GPU reuse must either produce a single IP that is flexible enough to meet all of their needs, or design in some level of configurability. They must also weigh power and performance: the needs of a mid-range mobile handset and an HDTV, for example, will not be the same. On the other hand, the cost of developing a part increases substantially if it must meet the needs of a broad spectrum of use cases with different performance requirements.
There is only one way to resolve these conflicting requirements and produce a part that is flexible, scalable and cost-effective while delivering high performance at low power: develop configurable parts that share the same architecture. A consistent architecture allows a design group to use the same IP in a mobile phone design and in a set-top box delivering, say, 1080p resolution. The software stacks would be binary-compatible, so for the same operating system these two GPUs would share the same graphics drivers, significantly reducing the cost of development, test and integration.
At ARM, the Media Processing Division has addressed the need for high performance with low power by taking a multicore approach with the ARM Mali GPU family. Scaling hardware performance by replicating the GPU core has several benefits over redesigning separate parts with higher-performance pipelines inside the core: development and verification costs fall, and a multicore design allows the use of streamlined software solutions. For example, moving to a higher-performance core with more pipeline stages may require a different software compiler to implement tasks such as shader thread scheduling. A single compiler, on the other hand, will suffice for a well-designed multicore solution consisting of 1-to-N identical cores and a top-level interconnect layer.
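As a rough illustration of why identical cores keep the software simple, the sketch below partitions a frame's rendering work evenly across any number of cores with one piece of code. This is a hypothetical example, not ARM's scheduler; the names (`core_batch`, `partition_tiles`) and the tile-based workload model are assumptions for illustration only.

```c
/* Hypothetical sketch: with 1-to-N identical cores, a single work
 * partitioner covers every configuration; no per-core-variant code. */
typedef struct {
    int first_tile;  /* index of the first tile assigned to this core */
    int tile_count;  /* number of tiles this core will render */
} core_batch;

/* Split `total_tiles` rendering tiles evenly across `core_count`
 * identical cores; earlier cores absorb any remainder. */
static void partition_tiles(int total_tiles, int core_count, core_batch *out)
{
    int base = total_tiles / core_count;
    int rem  = total_tiles % core_count;
    int next = 0;
    for (int c = 0; c < core_count; ++c) {
        out[c].first_tile = next;
        out[c].tile_count = base + (c < rem ? 1 : 0);
        next += out[c].tile_count;
    }
}
```

The same function serves a single-core handset part and a four-core set-top-box part, which is the software-cost argument made above: one binary-compatible stack scales with the core count.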
A customer-configurable multicore GPU IP solution also allows licensees to use the same core in products that have very different needs, rather than having to take licenses for different cores from the same family. Multicore solutions also allow you to power down complete cores, which supports very effective power management for a broad range of performance points.
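A minimal sketch of the power-management point, assuming a simple capacity model: given a per-frame workload and the throughput of one core, keep only as many identical cores powered as the workload requires and gate the rest. The function name and units here are hypothetical, not part of any ARM API.

```c
/* Hypothetical sketch: choose how many of `max_cores` identical GPU
 * cores to keep powered for a given per-frame `workload`; the
 * remaining cores can be power-gated entirely. */
static int cores_needed(int workload, int per_core_capacity, int max_cores)
{
    if (workload <= 0)
        return 0;  /* idle: every core can be gated */
    /* round up: a partially used core must still be powered */
    int n = (workload + per_core_capacity - 1) / per_core_capacity;
    return n > max_cores ? max_cores : n;
}
```

Because whole cores are switched off rather than throttled, the design hits a broad range of performance points while paying leakage and dynamic power only for the cores actually in use.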