The increased deployment of silicon intellectual property (IP) blocks is vital to boosting productivity in the development of large, complex system-on-chip (SoC) designs. But the increase in SoC design productivity is not matched by as great an increase in SoC verification productivity. Managers and engineers still struggle with a persistent “verification productivity gap.” Why? Because there is a persistent IP verification quality gap, too. The resulting uncertainty about the original verification quality of individual IP blocks often requires time-consuming remedial verification by the SoC design team. The alternative is to risk SoC design failure because of inadequate IP verification, which ultimately delays the project even more.
This article outlines how the latest formal metric-driven verification (MDV) methodology and technologies can eliminate integration uncertainty through the automatic generation of Accellera-defined coverage metrics, without the assistance of simulation. This formal MDV methodology measures not only the usual control coverage, but also observation coverage — a serious missing link in many other MDV approaches. The methodology is easily integrated into existing MDV flows or can be used stand-alone.
The usual questions
Before integrating an IP block, designers need reliable answers to the usual questions regarding its quality: How thoroughly has the IP block’s register transfer level (RTL) functionality been verified? Under what constraints and conditions was it verified? How accurately do those constraints and conditions reflect the operating environment into which the IP block will be integrated? In addition, has the specification been completely implemented in RTL code? And, is the specification complete and correct in the first place?
The questions and uncertainties are exactly the same regardless of whether the IP is sourced from a third party or internally. And they are no less difficult to answer when a verified, working IP is modified for a different use. The answers are considerably more difficult when the IP is configurable for multiple, different uses in different chips. So, how can designers eliminate these uncertainties?
The missing link: observation coverage
Designers generally look to the IP’s verification plan and its simulation/testbench results to try to answer the usual questions. Functional coverage points can be used to ensure that functionality derived from the specification is triggered, so it’s possible to get some idea of the functional coverage. However, applying the coverage point approach in the comprehensive manner necessary to ensure reliable verification is both difficult and time consuming.
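As a minimal sketch of what such specification-derived coverage points look like, consider a hypothetical FIFO interface (the signal names here are assumptions for illustration, not from the original):

```systemverilog
// Hypothetical FIFO signals: clk, fifo_full, fifo_empty, wr_en.
// Each coverpoint records whether a specification-derived condition
// was ever triggered during simulation.
covergroup fifo_cg @(posedge clk);
  cp_full     : coverpoint fifo_full            { bins hit = {1}; } // FIFO driven full
  cp_empty    : coverpoint fifo_empty           { bins hit = {1}; } // FIFO drained empty
  cp_overflow : coverpoint (wr_en && fifo_full) { bins hit = {1}; } // write attempted while full
endgroup
```

Writing such points for every behavior implied by the specification — and stimulus to hit them all — is exactly the comprehensive effort the text above describes as difficult and time consuming.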
Consequently, to assess verification quality and progress, verification teams rely on a range of industry-standard structural coverage metrics, such as line coverage, statement coverage, block coverage, expression coverage, branch/condition coverage, and finite state machine (FSM) coverage.
However, standard structural coverage metrics measure only control coverage, that is, the degree to which stimuli exercise the RTL code. Control coverage indicates which parts of the code have been exercised and helps to answer the question, “Have I written enough stimuli?”
But when a stimulus has triggered a fault, how do we know whether the fault has been observed by a checker? Observation coverage identifies which parts of the code have been checked and which parts have not, and helps to answer the question, “Have I written enough checks?”
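The gap between the two can be made concrete with a toy example (the signals and counter here are hypothetical, chosen only to illustrate the point):

```systemverilog
// A stimulus that asserts parity_err exercises the branch below,
// so line and branch (control) coverage report it as covered.
always_ff @(posedge clk) begin
  if (parity_err)
    err_cnt <= err_cnt + 1;
  else
    err_cnt <= err_cnt;
end

// But if no checker observes err_cnt, a fault in the increment
// (say, err_cnt <= err_cnt) passes silently: control coverage is
// 100 percent, observation coverage is zero. A checker closes the gap:
assert property (@(posedge clk)
  parity_err |=> err_cnt == $past(err_cnt) + 1);
```

Observation coverage asks whether a checker like the assertion above exists for each part of the code, not merely whether stimulus reached it.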
Standard coverage metrics don’t measure observation coverage. Control coverage is necessary, but not sufficient, to ensure observation coverage, but thorough observation coverage analysis is necessary to ensure that there are enough of the right checkers in the right places. (Mutation analysis can mitigate the observation coverage problem, but it is not a complete solution because it requires pre-defined fault models and still relies on simulation.)
Consequently, even if verification has achieved 100-percent control coverage — for example, each line of RTL code has been exercised at least once — it hasn’t necessarily achieved the desired observation coverage, or even any at all. The inability of many MDV approaches to measure observation coverage is thus a serious source of uncertainty.
This is why dynamic verification famously suffers from uncertainty about precisely how much of the RTL description has actually been verified and about precisely which parts of the RTL description have yet to be verified. Nonetheless, engineers rely on a mix of these control coverage statistics, together with the variation in bug detection rates over time, to make a “verification done” decision. However, in the absence of any reliable measure of observation coverage, this decision is more judgment call than analytical determination.
Formal’s historical limitations
Historically, formal methods have played a lesser role in answering the usual questions. Why? After all, the exhaustive nature of formal property analysis should provide good answers to most of them. In particular, formal methods have the potential to verify and measure the degree of both control and observation coverage.
One problem was that ease-of-use issues and limited tool capacity made formal methods awkward to apply to large blocks, and harder to use for calculating functional coverage than testbenches. Consequently, formal was limited to verifying hard-to-reach code, corner cases, and specific domains such as communications protocols. Even this case-specific use of formally checked assertions can eliminate a great deal of iterative testbench enhancement and refinement: instead of writing directed tests or automatically generated random tests and piling on cover properties, we simply write an assertion that expresses the intended design behavior or specifies undesirable behavior. The behavior being verified is the same whether the assertion is simulated or formally proven. Nonetheless, formal's use as a situational add-on to simulation mitigates IP integration uncertainty only incrementally.
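For instance, a request/grant handshake requirement can be stated once as an assertion (the handshake signals and four-cycle bound here are assumptions for illustration):

```systemverilog
// Hypothetical handshake: every request must be granted within 4 cycles.
// A formal tool attempts to prove this for all legal inputs; in
// simulation, the same property simply acts as a checker.
assert property (@(posedge clk) disable iff (!rst_n)
  req |-> ##[1:4] gnt);

// Companion cover property: confirms the granted-request scenario
// is reachable at all under the current constraints.
cover property (@(posedge clk) req ##[1:4] gnt);
```

One property replaces the many directed or random tests that would otherwise be needed to probe the same behavior.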
In recent years, though, the capacity of formal technology has been significantly increased, and its ease of use has been improved by, for example, the establishment of standard assertion formats such as SystemVerilog Assertions (SVA).
But to apply formal methods to the verification of entire IP blocks and to answer the usual IP verification questions, formal technology must do much more than simply augment simulation. It must become a comprehensive methodology, capable of whole-block verification, both stand-alone and, where desired, in cooperation with dynamic verification. And, it must transform the "verification done" decision into an analytical decision by verifying and measuring coverage — both control and observation — accurately, consistently, and reliably.