Abstract
For years, design productivity has been assisted by increasing levels of reuse. Many system-on-chip (SoC) designs now contain tens or even hundreds of reused intellectual property (IP) blocks, which can constitute in excess of 90% of the gates in a chip. In addition, the size of IP blocks has risen from small peripherals to entire subsystems.
What has happened to total productivity over this same period? Productivity is being constrained by verification, but verification has not seen its reuse needs met by models and tools available on the market. As a result, verification continues to take a greater percentage of total time and budget, constraining product innovation that would otherwise be possible.
While some verification IP (VIP) is available, there is not enough and it does not provide the levels of reuse necessary. As SoC design moves to platform IP, where is the corresponding platform VIP? Where are the fully defined verification environments for platform IP that can be extended to add additional functions without having to understand the parts of the platform that are not being modified? Why is verification reuse so far behind design reuse?
These issues are explored in this article along with a way in which full IP-to-SoC verification reuse can be enabled through the use of scenario models.
Introduction
Over the past decade, designer productivity has been boosted by reuse. Systems are composed of intellectual property (IP) blocks, each having a well-defined function, and the implementation for these blocks can be used in multiple designs and shared between multiple companies.
This is the IP methodology used to fill a significant portion of the real estate in a chip. IP reuse has allowed companies to concentrate on the key parts of their design: the parts that add value and make products competitive. Over time, the size of IP blocks has grown, extending to what is often called platform-based design, where a processor, interconnect and a significant number of peripherals, accelerators and other functions are pre-integrated and pre-verified.
When IP is acquired from a third party, it is expected that a verification environment will be shipped with it. This environment was used to verify the block or platform in isolation. It enables users of that IP to ensure that the block has been integrated into their environment correctly and to see the extent of the verification that was performed. The environment may also come with some verification components that can be reused for system-level integration and verification, such as coverage models. The standard Universal Verification Methodology (UVM) enables the reuse of other verification components through sequences and virtual sequences.
While the numbers are different for every design house, verification consumes a significant amount of the total resources and, in many cases, more than half of the time, effort and cost. Therefore, verification productivity improvement should be as good as, if not better than, design productivity improvements; otherwise the total benefits will not be as significant as expected.
However, this is not the case. The situation was put into stark relief in a keynote given by Mike Muller of ARM at the 2012 Design Automation Conference (DAC). He said that while design productivity has roughly scaled with Moore's Law, verification productivity has gone down by a factor of 3,000,000 as measured by the amount of time it takes to build and run tests. This is not a good state of affairs.
Finally, the objective of system-level verification is not to verify the implementation of IP blocks. It is to verify that the system is capable of supporting the high-level requirements and delivering the necessary functionality and performance. This requires a different type of verification compared to verification conducted at the block level. The best way to capture these requirements is by scenario models that define expected end-to-end behavior of a system.
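To make the idea of a scenario model concrete, the sketch below takes a simple data-flow view: each action consumes and produces named data objects, and a legal end-to-end scenario is any ordering of actions in which every input is produced before it is used. This is a minimal illustration only, not any standard's representation; the names `Action` and `legal_schedules` and the capture/encode/transmit example are assumptions chosen for clarity.

```python
from itertools import permutations

class Action:
    """One system-level action with the data objects it consumes and produces."""
    def __init__(self, name, inputs=(), outputs=()):
        self.name = name
        self.inputs = set(inputs)
        self.outputs = set(outputs)

def legal_schedules(actions):
    """Enumerate orderings in which each action's inputs were already produced."""
    schedules = []
    for order in permutations(actions):
        produced = set()
        if all(a.inputs <= produced or not produced.update(a.outputs) and False
               for a in order):
            pass  # unreachable; kept simple below instead
    for order in permutations(actions):
        produced, ok = set(), True
        for a in order:
            if not a.inputs <= produced:
                ok = False
                break
            produced |= a.outputs
        if ok:
            schedules.append([a.name for a in order])
    return schedules

# A hypothetical end-to-end "capture, encode, transmit" scenario
# for a camera-like SoC.
capture  = Action("capture",  outputs={"raw"})
encode   = Action("encode",   inputs={"raw"}, outputs={"bitstream"})
transmit = Action("transmit", inputs={"bitstream"})

print(legal_schedules([capture, encode, transmit]))
# Only one ordering satisfies the data-flow dependencies.
```

A real scenario model would also capture resource constraints and expected results, letting a tool generate many legal schedules as system-level tests; the point here is only that end-to-end intent, not block-level implementation, is what gets described.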