Functional verification is an art, or so we are told. New technologies emerge that inject a dose of science into the process, making it more predictable, increasing efficiency and lowering overall verification costs. This article exposes problems with the coverage metrics in use today and looks at some recent advances that can make them more objective.
There are two primary roles for coverage metrics: 1) to provide an indication of the degree of completeness of the verification task and 2) to help identify the weaknesses in the verification strategy. The measure of completeness, while often based on objective measures, has traditionally been treated as subjective since most of the metrics in use today can only identify when the task is not complete, rather than when it is complete. This article will explore the reasons for this and how those metrics can be improved.
To look for solutions to these issues, it is worth exploring some basics of verification.
The IEEE definition for verification is:
"Confirmation by examination and provisions of objective evidence that specified requirements have been fulfilled."
While not the easiest definition, it does contain the elements that define a good verification strategy. It stresses the need to be objective and that active examination is required. The verification process today is still somewhat subjective and verification closure is seen as obtaining the confidence necessary to release a product without significant errors, and with minimum cost.
Verification is fundamentally the comparison of two or more models, each developed independently, with the assumption that if they express the same behaviors, then there is a high degree of confidence that they represent the desired specification. The verification model thus provides an objective model that, when compared against the design, provides the evidence of fulfillment of the requirements. The IEEE definition does, however, expose a weakness because 'specified requirement' is subjective. How do you know that the requirements specified are complete, or that they actually cover the things that are the most important to you?
In order to satisfy the definition for verification, three things must happen:
Step 1 - Activate. The functionality associated with a specific requirement must be activated. This is a classic controllability issue and many of the existing coverage metrics are closely associated with this phase of verification.
Step 2 - Propagate. The effects of this functionality must be propagated to an observable point. If unintended side effects are not observed, this indicates a problem in detection, not propagation. Propagation requires a complete set of stimuli.
Step 3 - Detect. The propagated results must be compared against the correct behavior, as represented by the verification environment. Assertions can often be used to provide localized checking. Detection is a fundamental problem for many verification strategies that check only for specific indicators of success and, as a result, may fail to observe unintended side effects.
This three-step process of activation, propagation and detection is shown diagrammatically in Figure 1 and is the basis on which all verification environments are constructed, including formal techniques. It should be quite evident that any effort spent activating behavior whose effects are never propagated and detected is wasted, and that believing a behavior has been verified just because it has been activated provides a potentially false level of confidence. Thus failure to follow the three steps leads to inefficient verification and the potential for overly optimistic results, as the sketch following Figure 1 illustrates.
Figure 1. Verification consists of three steps.
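To make the three steps concrete, the following minimal SystemVerilog sketch uses a hypothetical module with invented signal names; it is an illustration, not a prescribed coding style. The subtract path contains a deliberate bug: a test can activate the faulty line, yet unless the enable is asserted the wrong value never reaches the output, and no checker watching the output can detect it.

```systemverilog
// Hypothetical example: a bug that can be activated without being
// propagated or detected.
module small_alu (
  input  logic       clk,
  input  logic       en,
  input  logic       op,      // 0 = add, 1 = subtract
  input  logic [7:0] a, b,
  output logic [7:0] y
);
  logic [7:0] result;

  always_comb begin
    if (op)
      result = a - b - 8'd1;  // BUG: off-by-one, yet still a "covered" line
    else
      result = a + b;
  end

  // The faulty value only reaches an observable point when en is high.
  always_ff @(posedge clk)
    if (en)
      y <= result;
endmodule
```

A test that drives op high while en stays low activates the buggy line (Step 1) but never propagates the error to y (Step 2), so even a perfect checker on y cannot detect it (Step 3). Line coverage would nevertheless report the subtract branch as exercised.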
Traditional Coverage Metrics
This article is not intended to be an exhaustive review of coverage metrics. Instead, it will look at some of the attributes of the different types that exist. For that purpose, coverage metrics have been divided into four categories: 1) Ad hoc, 2) Structural, 3) Functional and 4) Assertion coverage.
Ad Hoc Metrics
This category includes metrics such as bug discovery rates, bug densities and other measures that provide a general indication of the stability of the design. They provide no objective information about the quality of the design or the verification progress, but they have been a useful management indicator in the past and continue to be used today. While no one would release a chip based on these metrics, a deviation from the norm, or from experience, can highlight a project that is getting off track or hitting certain kinds of problems.
Structural Coverage
These metrics are related to the structure of the implementation. Most people probably know them as code coverage, although this is just one example of structural coverage. Within code coverage, there are many possible metrics such as line, branch and expression coverage. Almost all simulators on the market offer a number of different types of code coverage. These metrics suffer from a number of problems (two of which are sketched in the example at the end of this subsection), including:
- Missing functionality is not identified.
- Isolated observations: Just because a line of code was reached does not mean it was executed for the right reason or that it did the right thing. It is a metric based on "activation" only.
- Independent of data: Each metric is only associated with control aspects of the design.
- No prioritization: All lines of code are considered equal.
However, in all fairness, these metrics do have a number of advantages, including:
- Cheap and easy to instrument.
- Provides an absolute target. When you have 100%, you know that all of the lines of code, for example, were executed.
- Easy to locate coverage holes, although not always easy to work out how to fill them.
- Provides a good negative indicator. When code coverage is not complete, you know that verification is not complete.
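A minimal sketch, assuming a hypothetical decoder and an imagined specification that calls for a WRAP burst mode, illustrates two of the problems above: code coverage can reach 100% on this module even though a specified mode was never implemented, and a single execution of the assign statement satisfies line coverage while saying nothing about the operand combinations that expression coverage would demand.

```systemverilog
// Hypothetical decoder illustrating two code-coverage blind spots.
module burst_decoder (
  input  logic [1:0] mode,   // spec: 00 = FIXED, 01 = INCR, 10 = WRAP
  input  logic       a, b, c,
  output logic [3:0] len,
  output logic       flag
);
  always_comb begin
    case (mode)
      2'b00:   len = 4'd1;   // FIXED
      2'b01:   len = 4'd4;   // INCR
      default: len = 4'd0;   // the WRAP mode from the spec was never written
    endcase
  end

  // One execution marks this line as covered; expression coverage would
  // require several combinations of a, b and c.
  assign flag = (a & b) | c;
endmodule
```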
Functional Coverage
This is a much newer class of coverage metric that is badly named because it does not measure the coverage of functionality. Instead, it measures the coverage of particular observable events that indicate the corresponding functionality has been exercised. The list of coverage points is generally obtained from a specification or verification plan; a covergroup sketch at the end of this subsection shows what such coverage points typically look like. Problems include:
- Can be time consuming to define.
- Incomplete. There is no way to tell if all functionalities have associated coverage points.
- Isolated observations: Just because a coverage point was observed does not mean it was seen for the right reason or that the right thing happened as a result. It is a metric based on "activation" only.
However, in fairness, functional coverage also has a number of advantages, including:
- Can identify missing functionality, since coverage points can represent aspects of the specification that were not included in the design.
- Allows more focus in the verification process - higher-priority items are likely to be targeted first.
Given the completeness problem, many coverage vendors recommend that code coverage be used at the same time in order to help identify the weaknesses in the functional coverage model. This is a weak endorsement of the effectiveness of functional coverage.
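To show what such coverage points typically look like, here is a minimal sketch of a SystemVerilog covergroup for a hypothetical packet interface; the class, bin names and value ranges are invented for illustration. Each bin records only that an event was observed: it cannot show that the design responded correctly, and any plan item nobody thought to write down is simply absent from the model, which is exactly the completeness problem noted above.

```systemverilog
// Hypothetical packet interface: a small functional coverage model whose
// coverage points mirror items that might appear in a verification plan.
class packet;
  rand bit [1:0] kind;      // 0 = data, 1 = control, 2 = error
  rand bit [9:0] length;
endclass

covergroup packet_cg with function sample (packet p);
  cp_kind : coverpoint p.kind {
    bins data    = {0};
    bins control = {1};
    bins error   = {2};
  }
  cp_len : coverpoint p.length {
    bins short_pkt = {[1:64]};
    bins long_pkt  = {[65:1023]};
  }
  // Cross coverage for the corner cases the plan calls out explicitly.
  kind_x_len : cross cp_kind, cp_len;
endgroup
```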
Assertion Coverage
This term is somewhat problematic since it means different things to different people. Within the Accellera Unified Coverage Interoperability Standard group, four different meanings have been identified so far.
An assertion defines a set of functionality that corresponds to some, hopefully equivalent, functionality in an implementation. The assertion is thus a checker and an integral part of the verification environment. However, this is disconnected from using assertions as a coverage metric, where we are instead trying to define how much of the implementation falls within the domain of the assertions.
Assertions are built out of properties, which are the fundamental cornerstone of formal verification. It would thus be very interesting to be able to answer the question: when have I defined a complete set of properties that cover all specified requirements? That problem has not been solved today, but it does make properties a very interesting area for future development. Assertions do, however, have a number of disadvantages, including:
- Time-consuming to write and execute.
- Use a different language or set of language features.
- No way to ensure completeness.
- Inability to handle data transformation.
Today, assertions are normally used for verifying fairly localized functionality, such as an arbiter or a FIFO.
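As a concrete illustration of that kind of localized use, here is a minimal SystemVerilog Assertions sketch with hypothetical FIFO signal names. Consistent with the disadvantages listed above, the properties are written in a separate language layer and check control rules at a single point in the design rather than any data transformation.

```systemverilog
// Hypothetical FIFO protocol checks: localized assertions bound alongside
// the design. They verify control rules but say nothing about the data.
module fifo_checks (
  input logic clk,
  input logic rst_n,
  input logic push,
  input logic pop,
  input logic full,
  input logic empty
);
  // Never push into a full FIFO.
  property no_push_when_full;
    @(posedge clk) disable iff (!rst_n) full |-> !push;
  endproperty
  assert property (no_push_when_full)
    else $error("push asserted while FIFO is full");

  // Never pop from an empty FIFO.
  property no_pop_when_empty;
    @(posedge clk) disable iff (!rst_n) empty |-> !pop;
  endproperty
  assert property (no_pop_when_empty)
    else $error("pop asserted while FIFO is empty");
endmodule
```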