Today's design paradigm is changing rapidly, or to be more accurate, it has already dramatically changed. Time-to-market pressures mean that most of today's SoC designs are reuse-based derivative designs. This paradigm shift has created entirely new challenges for both design and verification teams, especially on large projects developed across multiple sites. A significant part of the design cycle now involves managing and automating much of the SoC-level integration of blocks coming from those various sites. In order to address these issues, we must first recognize the following:
- IP blocks that form a part of the SoC could come from multiple sources spread over multiple geographies.
- Different IP blocks may be written in various languages, making the SoC a multi-language design.
- Each IP block carries its own verification IP, so the overall verification environment may also be multi-language and distributed.
All this poses significant challenges for the overall verification effort. SoC integration teams must ensure that they can verify the entire SoC in a timely manner and at the highest quality. To illustrate this, let's take a quick look at a simple SoC (see figure 1 below); notice that a large part of the design is created from pre-defined, and supposedly pre-verified, IP.
Figure 1. An example of the block architecture of a simple SoC.
In most cases, directed testing has traditionally been the primary verification methodology at the block level and even at the SoC level. However, with SoCs becoming much more complex in terms of size and functionality, directed testing just can't keep up. Verification teams can no longer rely solely on directed testing to fully verify any significant part of the SoC functionality. Because of this functional complexity, an unmanageable number of directed tests would be required to exercise even a fraction of the SoC's functionality. Furthermore, all of these directed tests would need to be created manually, which is far too time-consuming and expensive an approach.
To illustrate, let's touch on a recent experience I had with the overall verification process of a very large SoC. This chip was about 140M transistors, a pretty large design by any standard. It included nearly 85 large and small IP blocks; incredibly, so much IP was involved that only a few lines of the design's million lines of HDL code were written by the SoC team itself. For this design, the entire design and verification cycle was around seven months, of which four months were allocated to verification. The customer was regularly finding bugs during months two through four, and the environment scaled very nicely. Then at month four, the bug-trend chart dropped sharply! It appeared that the customer had found all the bugs in the design. But that was not the case at all. Surprisingly, the actual reason for the drop was that they had reached the ship date and stopped verification!
This example clearly illustrates a serious problem we are up against with directed testing and scaling the verification environment. Trade-offs are being made today based solely on schedule limitations, not on quality-of-verification metrics. Companies can no longer afford a failure caused by a bug that was missed for lack of verification; the risk is just too high.
Taking Advantage of Planning and Coverage-Driven Verification
So how should design and verification teams address this problem? How do we pack more verification into a given schedule and set of resources? And how do we decide the point at which we have achieved a reasonable amount of verification? First, we need a mechanism to quickly verify a large amount of the SoC functionality. The answer is to use random and constrained-random techniques at the SoC level to create tests that span the full scope of the functionality. But constrained-random techniques require a good closure mechanism to measure when the verification effort is complete.
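To make the closure idea concrete, here is a minimal sketch, written in Python purely for illustration, of constrained-random stimulus generation with a crude coverage-based stopping condition. The transaction fields, constraints, and goal bins are invented for this example; in a real flow the stimulus would live in a hardware verification language and the closure check would come from the coverage tool.

```python
import random

# Hypothetical bus transaction with a few constrained fields.
KINDS = ["READ", "WRITE", "BURST"]

def random_transaction():
    """Pick field values at random, but only within legal constraints."""
    kind = random.choice(KINDS)
    # Constraint: bursts are 4-16 words, single accesses are 1 word.
    length = random.randint(4, 16) if kind == "BURST" else 1
    # Constraint: addresses are word-aligned and inside a 64 KB window.
    addr = random.randrange(0, 0x10000, 4)
    return {"kind": kind, "length": length, "addr": addr}

def length_bucket(length):
    return "single" if length == 1 else ("short" if length <= 8 else "long")

# Crude closure mechanism: run until every (kind, length-bucket) goal bin is seen.
goal = {("READ", "single"), ("WRITE", "single"),
        ("BURST", "short"), ("BURST", "long")}
hit = set()
runs = 0
while hit < goal and runs < 10000:
    txn = random_transaction()
    hit.add((txn["kind"], length_bucket(txn["length"])))
    runs += 1

print(f"covered {len(hit)}/{len(goal)} goal bins in {runs} transactions")
```

The point of the sketch is the loop condition: random generation produces breadth cheaply, but only a measurable set of goals tells you when to stop.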
This is where the need for verification planning and a comprehensive coverage solution comes in. Given the verification requirements listed above, let us look at how they translate into requirements on a planning- and coverage-based system (let's call this Planning and Coverage-Driven Verification, or PCDV, for the sake of this article).
The PCDV solution must let users define verification plans in the manner that is most natural to them, and drive verification from those plans. This means that verification plans written in MS Word, FrameMaker, and XML formats must be supported. Moreover, since the SoC is created out of reused IP that may come with its own verification environments, the PCDV solution must be able to easily translate existing verification-plan information into the formats it supports. Also, since the components of an SoC, and the SoC itself, may be verified using many different technologies (simulation, formal, hardware-assisted verification), the PCDV solution must be able to integrate with different engines, as well as with any existing runner technologies that the customer might be using.
Verification planning should be done at the level of functional verification. Today, verification planning is often treated as analogous to test planning. This needs to change if the goal is to verify SoC-level designs. Users need to plan how they will verify the top-level functionality of the design, which in turn means that functional verification planning should start when the functional specification of the design is being drawn up. These concepts are very common in software development, and similar concepts now need to be applied in SoC design and verification as well.
As verification runs progress, the user needs to know and measure the status of verification against the original verification plan. The PCDV solution must be able to provide real-time annotation back to the verification plan so that progress can be thoroughly checked and monitored.
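As a rough illustration of what such back-annotation could look like, the following Python sketch assumes a hypothetical XML plan schema and a dictionary of per-section coverage scores arriving from nightly runs; real PCDV tools have their own plan formats and APIs, so treat the names and layout here as assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical vplan schema: <plan><section name="..." goal="..."/></plan>
PLAN_XML = """
<plan>
  <section name="dma_engine"  goal="90"/>
  <section name="pcie_bridge" goal="95"/>
  <section name="interrupts"  goal="100"/>
</plan>
"""

# Coverage scores per plan section, as they might arrive from the latest runs.
latest_scores = {"dma_engine": 87.5, "pcie_bridge": 96.2, "interrupts": 71.0}

def annotate(plan_xml, scores):
    """Write the current score and met/behind status back onto each plan section."""
    root = ET.fromstring(plan_xml)
    for section in root.iter("section"):
        name = section.get("name")
        goal = float(section.get("goal"))
        score = scores.get(name, 0.0)
        section.set("score", f"{score:.1f}")
        section.set("status", "met" if score >= goal else "behind")
    return ET.tostring(root, encoding="unicode")

print(annotate(PLAN_XML, latest_scores))
```

The value of automating this step is that the plan, rather than a pile of log files, becomes the single place where progress is read and decisions are made.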
Coverage Goal Alignment
An integral part of the verification plan will be the definition of coverage goals. These coverage goals span all levels of abstraction: code coverage goals, assertion coverage goals, and functional coverage goals. Since the SoC verification environment is usually composed of individual verification environments, this implies the following requirements on the coverage solution:
- It should support coverage goals defined in multiple languages, especially functional coverage goals, and provide one comprehensive view of coverage goals coming from any language.
- It should be scalable in performance and memory. As the design grows to SoC level, the volume of coverage data grows very rapidly. This requires the coverage solution not only to manage this large amount of data, but also to provide coverage analysis engines that can analyze it efficiently.
- Since the SoC and its associated verification tasks are most likely distributed geographically, it is also required that the coverage solution supports distributed data. The data should be easily relocatable, cross platform, and easily packaged and transferred.
- The coverage and verification run data should be stored in a manner that is hierarchically composable. This means that coverage data from subcomponents of the SoC can be easily assimilated into the overall SoC verification. However, as coverage is re-used from lower block levels at the full-chip SoC level, the solution should allow the user to re-use coverage data at the right level: the user should be able to identify and mark the block-level data that is critical at the SoC level, use only that data, and ignore the rest (see the sketch after this list).
- The coverage analysis and reporting capability should support the different roles within a large project team. This means that a project manager, verification lead, verification engineer, or design engineer should all be able to look at the coverage data and analyze it at the level of abstraction that suits their role.
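To illustrate the hierarchical-composition and role-based-view requirements above, here is a small Python sketch that merges hypothetical per-block coverage databases under an SoC hierarchy, promotes only the block-level bins judged relevant at the top level, and prints both a manager-level roll-up and an engineer-level listing. The data layout and bin names are assumptions for illustration, not the format of any particular tool.

```python
# Hypothetical per-block coverage databases, as exported by different site teams.
# Each maps a coverage bin to a hit count; bins could originate from any language.
usb_block = {"pkt_kind.setup": 12, "pkt_kind.bulk": 340, "err.crc": 0}
ddr_block = {"cmd.read": 900, "cmd.write": 845, "cmd.refresh": 55}

def promote(block_db, soc_path, keep=None):
    """Re-root block-level bins under an SoC hierarchy path.
    'keep' optionally selects only the bins judged relevant at SoC level."""
    return {f"{soc_path}.{bin_}": hits
            for bin_, hits in block_db.items()
            if keep is None or bin_ in keep}

def merge(*dbs):
    """Union coverage databases, summing hits for bins seen in more than one run."""
    merged = {}
    for db in dbs:
        for bin_, hits in db.items():
            merged[bin_] = merged.get(bin_, 0) + hits
    return merged

soc_db = merge(
    promote(usb_block, "soc.usb", keep={"pkt_kind.setup", "pkt_kind.bulk"}),
    promote(ddr_block, "soc.ddr"),
)

# Two views of the same data: a project-level roll-up and a per-bin listing.
covered = sum(1 for h in soc_db.values() if h > 0)
print(f"SoC roll-up: {covered}/{len(soc_db)} bins hit")
for bin_, hits in sorted(soc_db.items()):
    print(f"  {bin_:28s} {hits}")
```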
It is likely that directed testing will continue to be used for block-level verification, although some blocks may be large enough to require constrained-random techniques. As pointed out in this article, planning-based, coverage-driven solutions targeted at SoCs developed across multiple sites need to merge results from block-level verification, both directed-test and constrained-random, into top-level SoC results. Because coverage data collection and specifications might differ between the block level and the SoC level, a good planning and coverage-driven solution must allow coverage specifications to be merged and scaled as well.
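The scaling of coverage specifications can be sketched in the same spirit as the data merge above: the example below, again with invented bin names and goals, waives bins that cannot be exercised through the SoC interfaces and reduces hit-count goals that were sized for standalone block runs.

```python
# Hypothetical block-level coverage specification: bin name -> required hit count.
block_spec = {"fifo.full": 10, "fifo.empty": 10, "fifo.err_parity": 5}

def scale_spec(spec, waive=(), scale=1.0):
    """Adapt a block-level spec for SoC use: waive bins that are unreachable
    through the SoC interfaces and scale the remaining hit-count goals."""
    return {bin_: max(1, int(goal * scale))
            for bin_, goal in spec.items() if bin_ not in waive}

# At SoC level, parity-error injection may not be reachable, and fewer hits suffice.
soc_spec = scale_spec(block_spec, waive={"fifo.err_parity"}, scale=0.5)
print(soc_spec)  # {'fifo.full': 5, 'fifo.empty': 5}
```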
Today's SoC designs pose unprecedented challenges for verification. These challenges can be overcome only by having a good planning and coverage-driven verification solution that meets the requirements listed above. In the second part of this article, we will take a deeper look at the actual construction of these environments.
About the Author:
Apurva Kalia is Vice President of R&D for Incisive Simulation Products at Cadence. He has served on many standards and industry bodies. He works out of the Cadence office in Noida, India.