High-resolution graphics displays are becoming a key part of automotive manufacturers' strategies to simultaneously differentiate from their competitors, reduce production cost, and increase customer satisfaction. Our group at Fujitsu develops IP blocks and SoCs to help customers realize these advantages.
One of our IP blocks is called Iris, a 2D graphics engine. This IP is composed of many reusable sub-components, which can be easily rearranged to create new derivatives of Iris that are then integrated into a range of products. All of these sub-components, of course, need to be verified in addition to the final product. For this purpose, we employ a metric-driven verification flow.
In the usual implementation of metric-driven verification, all stakeholders of the IP (software engineers, hardware designers, and verification engineers) define a verification plan that specifies what needs to be done before they can all agree to sign off the IP for tapeout. The plan contains a number of coverage items (what needs to happen; these target the stimuli and design inputs), a number of checkers (what needs to be checked; these target the design outputs), and possibly also a number of directed tests for corner cases internal to the design.
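Such a machine-readable verification plan can be pictured as a simple data structure. The following Python sketch is purely illustrative; the class and field names are assumptions for this article, not the format of any particular verification tool.

```python
from dataclasses import dataclass, field

@dataclass
class CoverageItem:
    """A planned stimulus condition that must be observed (targets design inputs)."""
    name: str
    description: str
    hits: int = 0  # later annotated from the regression's coverage database

@dataclass
class Checker:
    """A planned check on the design outputs."""
    name: str
    description: str
    failures: int = 0

@dataclass
class VerificationPlan:
    items: list = field(default_factory=list)           # coverage items (stimuli)
    checkers: list = field(default_factory=list)        # output checks
    directed_tests: list = field(default_factory=list)  # corner-case test names

# Hypothetical plan entries for a small bus interface
plan = VerificationPlan(
    items=[CoverageItem("burst_write", "back-to-back write bursts on the bus")],
    checkers=[Checker("out_protocol", "output stream obeys the display protocol")],
    directed_tests=["fifo_overflow_corner"],
)
```

At sign-off, every item must have a non-zero hit count and every checker a zero failure count.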
Figure 1 – Formal-driven code coverage hole analysis reduces the need for manual review
This verification plan is then handed to the verification engineer, who creates a verification environment, implements the checkers and coverage items, creates randomized sequences, and finally executes the verification in the form of a regression: a collection of all necessary tests (containers for sequences). The output of a regression is a functional coverage database, essentially a record of everything relevant that happened during the tests. This database is then annotated back onto the machine-readable verification plan, making it possible to measure the verification result against the planned items. The achieved coverage is usually expressed as a percentage between 0% and 100%, together with the number of failed checks.
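The annotation step boils down to matching planned items against the coverage database and reducing the result to the two sign-off numbers. The following Python sketch assumes, for illustration only, that the coverage database can be read as a mapping from item names to hit counts:

```python
def annotate_and_score(plan_items, checker_fails, coverage_db):
    """Annotate planned coverage items with hit counts from a regression's
    coverage database and reduce the result to the two sign-off metrics:
    the coverage percentage and the number of failed checks.

    plan_items:    list of planned coverage item names
    checker_fails: dict mapping checker name -> number of failures observed
    coverage_db:   dict mapping item name -> number of times it was hit
    """
    # Items missing from the database were never hit at all.
    hits = {item: coverage_db.get(item, 0) for item in plan_items}
    covered = sum(1 for n in hits.values() if n > 0)
    coverage_pct = 100.0 * covered / len(plan_items) if plan_items else 100.0
    failed_checks = sum(1 for n in checker_fails.values() if n > 0)
    return hits, coverage_pct, failed_checks

# Hypothetical regression result: two of three planned items were hit.
hits, pct, fails = annotate_and_score(
    ["burst_write", "single_read", "abort"],
    {"out_protocol": 0},
    {"burst_write": 12, "single_read": 3},
)
```

In this example the plan would report roughly 67% coverage and zero failed checks, so the `abort` item still needs additional stimuli before sign-off.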
Using this metric-driven verification approach has two main advantages.
- First, there is a defined point at which verification is finished: 100% coverage and zero checker failures. Without such a metric, it is hard to determine whether verification is complete.
- Second, a metric-driven verification plan provides an opportunity for verification quality to be reviewed from a higher abstraction level by a larger audience.
While this flow provides very good coverage of the (relevant) state space of the design, there is still some margin for error. For example, if a feature is added or changed late in the project, the verification plan might not be updated accordingly; or the verification engineer might make a mistake when implementing the functional coverage collection code or the checkers. It is therefore desirable to have a metric at hand that is more independent of human error than the functional coverage defined in the verification plan.
To ensure that the checkers are implemented completely, there are solutions available that insert errors into the design and check whether a regression detects them. To ensure stimuli completeness, code coverage analysis can be employed. This article focuses on code coverage analysis.
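The error-insertion idea can be illustrated with a deliberately tiny toy model in Python; the "design", "mutation", and "checkers" below are hypothetical stand-ins, not part of any real verification tool:

```python
def design_ok(a, b):
    """Toy 'design': an adder."""
    return a + b

def design_mutated(a, b):
    """The same design with an inserted error (wrong operator)."""
    return a - b

def regression(design, checker):
    """Drive a few stimuli through the design and count checker failures."""
    stimuli = [(1, 2), (3, 4), (0, 0)]
    return sum(0 if checker(a, b, design(a, b)) else 1 for a, b in stimuli)

# A complete checker compares against a reference model and catches the fault;
# an incomplete checker only verifies a weak property and lets it escape.
good_checker = lambda a, b, out: out == a + b
weak_checker = lambda a, b, out: isinstance(out, int)

assert regression(design_mutated, good_checker) > 0   # inserted error detected
assert regression(design_mutated, weak_checker) == 0  # inserted error escapes
```

An inserted error that no checker flags is exactly the signal that the checkers are incomplete, which is what these solutions report.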