A roundtable of EDA experts comments on the impact of the increasing amount of software that comes with IP cores in modern SoC design and needs to be integrated.
We are getting close to the end of the series of questions about IP that I asked the industry back in June. But don't fear: I did a whole roundtable on the subject at DAC and will be bringing that to you before long.
Last week I asked about IP theft and protection. One thing that has been happening in the IP market is that the average block size has been getting larger. In the early days of reuse, blocks were fairly small, but today many IP blocks are complete subsystems carrying an extensive amount of software. I asked about the ease with which that software can be integrated. Below are the answers I received.
John Koeter, VP marketing, solutions group at Synopsys
We believe that software will continue to be an important part of providing an IP solution. Whether it is reference drivers, porting OSes, working with partners to create full solutions (for example full stacks or codecs) or providing transaction-level models, addressing the software challenges our customers face is a key value-add for us as an IP provider.
Susan Peterson, product marketing group director, and Tom Hackett, senior product marketing manager, Cadence
It's not as easy as it should be. Typically, the software that's provided consists of examples: the hardware blocks come with software drivers, and other software components talk to the hardware through those drivers. People build upon and modify these examples.
Eran Briman, VP marketing, Ceva
At Ceva, we understand that the software is just as important as the hardware when providing processor IP. This is why we have invested significantly in our development environment, tools and software in recent years. We offer many pre-optimized software components for a range of end markets, including communications, audio, voice, imaging and vision. This software ranges from libraries supporting LTE, Wi-Fi 802.11ac, DTV demodulation, computer vision and so on, to fully optimized codecs for a wide variety of audio, voice and other applications, such as Dolby, DTS, AMR and WB-AMR. We also maintain a robust third-party ecosystem of more than 50 active partners for software, tools, manufacturing and more that complement our product offerings.
We have also recently invested significant effort in integrating our DSPs and software into the processor architecture, which simplifies software development and improves overall processor performance for target markets. Two examples of the software frameworks we offer customers are support for multi-core architectures (Ceva Must) and direct OS-level access to the DSP, allowing software developers to run code on the DSP and offload those tasks from the CPU.
Arvind Shanmugavel, director of applications engineering, Apache Design
One of the main advantages of using IP is the reduction in verification effort and software-integration work. IP is typically verified for functional accuracy by the IP provider, who also guarantees compatibility with the appropriate software. In general, SoC designers only need to worry about the higher levels of integration for IP.
So, a rather disappointing set of answers on this question. I had hoped to hear about ways in which the software could be integrated or adapted for customer needs, standardized interfaces into OSs, and other help with this aspect of integration. Perhaps it is because this is targeted at the software guys that our industry is just not really interested in it. What do you think? Can we, should we, be doing a better job with software IP?
IP software integration is being done. However, a customer is not likely to buy all their IP blocks from a single vendor, and they're likely to have a preexisting software base into which they need to integrate the software.
This is a really difficult problem for the IP vendors, because of budget and skill set. There are dozens of real-time operating systems, dozens of relevant Linux kernel versions, dozens of compilers, and every customer's final integration is different. It's not easy to manage, and it's hard to make this work match the IP vendor's business models. It's not reasonable to expect the IP business unit to develop much more than what they're doing today: all customers would have to pay for that effort, but only some of the customers would use it.
So instead the IP vendors rely on partners who have experience in the relevant technology. For example, IP vendors like Synopsys partner with MCCI to deliver USB 3.0 device and host support solutions. The IP vendor uses our stack when testing their IP, and we work closely with them for problem resolution and (if necessary) software work-arounds. Then MCCI works with the customers to integrate our software with the rest of their system. This model seems to work well for customers who are integrating IP, and want integration assistance.
I'm with Terry on this one. Tools suppliers can only take it so far, so it's up to specialized IP suppliers such as Ceva, mentioned here, to provide point solutions for application-ready IP plus software. But working with the tool vendors to close the gap is critical as IP blocks increase in size, functionality and software requirements. I too would like to know who's really enabling this.
My first reaction to reading this article was "What is he talking about?". Then I remembered back in the old days, when we used to have to cobble together protocol stacks and math libraries from different vendors and try to make it all work together. The closest I have come to that recently was integrating a proprietary 802.11 driver into a 2.6.x version of Linux. On the other hand, we just finished an upgrade based on Linux 3.6.8, which includes an open-source driver supporting the same chipset.
There are still some places where third-party software needs to be integrated. USB 3.0 support has been slow in coming, but seems to be on track for the near future. Some of the more exotic comm stacks may not yet be supported. From what I hear, Thunderbolt support is still shaky, for example. The bottom line, though, is that Linux and the various efforts around it have swallowed up most of what used to be the software IP market. Practically everything below the application layer (and even big chunks of that layer, in many cases) is readily available as very high-quality code. Even when the code is not up to snuff, the open-source model converges very quickly to improve it. If the software IP market really has disappeared, it is because it has been replaced by something that works better for system developers.
@LarryM99 I hear you, but at the lowest levels of the stack, where software meets hardware, someone has to provide that software, and it has to integrate into the rest of the stack. Given that the hardware piece is often very adaptable and capable of supporting multiple protocols, it can be a tricky problem. Companies such as Tensilica not only have to provide hardware IP and the tools to create it, but also tools to create the software that goes along with it. I know that in this case it means creating things such as compilers and linkers, but the problem seems analogous for other types of hardware as well.
Brian, I had to look up Tensilica, since it has been a while since I heard that name. I can see where their customers would be looking for IP for specific protocol stacks, but I have to wonder how much market share they are getting in new designs versus FPGA cores or other design options. It reminded me of a recent story here on EE Times about how cellular base stations were moving to standard PC hardware instead of expensive specialized hardware. It seems like the embedded world is at the beginning of a similar move, where the platforms are standardizing around Linux running on an ARM core on an SoC with an auxiliary FPGA. The more standardized that architecture becomes, the less need there is for specialized software IP.
That may be true in some design areas, but in the smartphone world FPGAs are too power-hungry to be viable. Smartphone makers also want maximum cost reduction, which means integration into as few packages as possible. I think we will see more consolidation in both hardware and software platforms over time, and this will help.