There was an interesting panel discussion at this week’s Design, Automation and Test in Europe (DATE) Conference that included representatives from various links in the SoC supply chain – EDA suppliers, IP providers, services companies and even a bona fide SoC developer. The panelists were asked for their view of where the EDA and IP industries are headed. Like the proverbial blind men asked to describe what is in front of them by touching different parts of an elephant, everyone brought a slightly different perspective to the table.
But not as different as you might think. Interestingly, five of the six panelists devoted the majority of their talks to IP. For good reason: IP, and more generally design reuse, is the crux of the SoC’s future and its value proposition. Everyone agrees that the ability to quickly and efficiently reuse silicon-proven functionality in new designs holds the key to addressing the technical complexity, time pressures and jaw-dropping economics of bringing new SoCs to market. There simply is no other way.
So it’s no wonder everyone is jumping on the IP bandwagon. With headlines like ARM’s market valuation reaching $12 billion, and Semico reporting that the third-party IP market grew by close to 22 percent last year, IP is a legitimate growth strategy for any company in the ecosystem. This generated considerable discussion on who is best qualified to provide IP over the long term.
What wasn’t so clear from this panel is how we are going to take advantage of the incredible potential of IP and design reuse. I mean, what is it really going to take to assemble and integrate all of it, deal with both hardware and software on an SoC platform, and ensure the result is ready to hand off to the implementation phase?
While the idea of plug-and-play sounds great on PowerPoint slides, there are a whole lot of methodology and tool capabilities that are still missing to make it as automated and efficient as other parts of the design process. This is where the rubber meets the road.
Like discussing how to build a house by starting with the bricks, most of the panelists talked about the need for quality IP and standards, how we can grow the market for innovative applications by developing more IP, and even where the IP business model might end up. Perhaps as part of a foundry offering? Merged more completely with traditional EDA suppliers such as Synopsys and Cadence? All legitimate issues, questions to be addressed and great panel fodder.
But there was very little discussion on the methodologies and tools required to manage, assemble and integrate all this great IP to realize SoCs. In my mind this is the real elephant in the room. Then again, maybe it’s the blind guy at the other end of the elephant.
Perhaps this was because no one wants a continuation of the current ‘EDA classic’ tool model into SoC Realization. I suspect the large EDA companies talked about IP almost exclusively because of the attraction of the IP business model. Can’t say I find fault in their desire to command a higher, IP-ish multiple.
The ‘EDA classic’ tool model goes back to the days when designers worked at the gate level, so the emphasis was on implementing logic with gates and optimizing wire length for timing. Those factors are not nearly as important today, so it’s time to focus on the strong points of modern technology. One is the speed and density of embedded memory. Another is that clocking many primitive blocks in parallel matters less than it used to. Implementing a function in a procedural language is more natural, although it carries a performance penalty. A high-level language can describe the function, which is then compiled to load small memories that operate in parallel to perform it. Where an HDL is more appropriate, it can be used the same way. Think of it as an extension of the lookup tables in FPGAs.
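To make the idea concrete, here is a minimal sketch in Python (my illustration, not the panelists’): a function written in a high-level language is evaluated once per input combination to fill a small memory, after which “executing” the function is a single memory read, much like an FPGA lookup table. The function chosen (4-bit population count) is a stand-in example.

```python
# Sketch: a procedural function "synthesized" into a small memory.
# The function itself is a hypothetical stand-in, not from the article.

def popcount4(x: int) -> int:
    """Reference behavior, written in a procedural language."""
    return bin(x & 0xF).count("1")

# Load step: evaluate the function once per input combination and
# store the results in a memory (here, a plain Python list).
LUT = [popcount4(x) for x in range(16)]

def popcount4_lut(x: int) -> int:
    """Run step: one memory read replaces the whole computation."""
    return LUT[x & 0xF]

# The memory-backed version matches the procedural reference.
assert all(popcount4_lut(x) == popcount4(x) for x in range(16))
```

In hardware terms, the “load step” happens once at configuration time, and many such memories can be read in parallel, which is where the performance penalty of the procedural description is recovered.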
The potential for using variable-size memory blocks is enormous. We have to get away from breaking everything down into primitives and take advantage of other means. Reuse of IP in object-oriented software is mature, and mapping Verilog modules into classes is a way to integrate the hardware function with the software during development. Running that code on embedded processors that do not require it to be broken back down into primitive instructions ties it all together. And those processors are cheap, fast, and run in parallel at a macro rather than a primitive level.
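Here is an equally minimal sketch of that module-to-class mapping, again my own illustration with hypothetical module and port names: ports become attributes and the combinational body becomes a method, so the software team can develop against the same object that models the hardware.

```python
# Sketch: a Verilog-style module mapped onto a class.
# Models: module adder8(input [7:0] a, b, output [8:0] sum);

class Adder8:
    """Class standing in for a Verilog module."""

    def __init__(self) -> None:
        self.a = 0    # input port, 8 bits
        self.b = 0    # input port, 8 bits
        self.sum = 0  # output port, 9 bits

    def evaluate(self) -> None:
        """Combinational body: re-run whenever an input changes."""
        self.sum = (self.a + self.b) & 0x1FF

# Software drives the "hardware" directly during development; later
# the same interface could be bound to the real silicon block.
dut = Adder8()
dut.a, dut.b = 200, 100
dut.evaluate()
assert dut.sum == 300
```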