If there were a bug in your design, could the verification environment find it? Functional qualification is the first technology to provide an objective answer to this fundamental question. It is an essential addition to the increasingly challenging task of delivering functionally correct silicon on time and on budget.
As depicted in Figure 1, functional qualification encapsulates functional verification, providing an automated and objective measure of the quality of the functional verification.
Figure 1: The relationship between design, functional verification, and functional qualification.
Functional verification is a quality control process, the objective of which is to ensure the design quality as efficiently as possible. The functional verification environment is an "instrument" that will measure the design quality. As with any measurement, it is essential to calibrate the instrument. Table 1 provides some examples of typical calibration techniques:
| Activity | Quality control | Calibration |
| IC manufacturing | IC wafer test equipment | Fault simulation of test vectors; correct by construction based on ATPG algorithms |

Table 1: Examples of typical calibration techniques
The need for quality measures
In the latter half of the twentieth century, many industries were revolutionized by the introduction of modern quality control processes (Figure 2). The economies of entire nations were catapulted to world-changing status through the adoption of rigorous quality improvement. Companies that were late in adopting the paradigm of "quality = profit" missed massive markets.
Figure 2: Quality means profits.
There is a fundamental requirement for quality control: there must be objective measurements. This is, to a certain degree, the difference between art and engineering. Functional qualification takes the art of verification and facilitates its transition to a rigorous engineering discipline.
Quality is a holistic concept incorporating the notion of acting locally and thinking globally. It is essential to align the modern enterprise so that local decisions, such as taping out an integrated circuit (IC) design, serve organizational objectives such as increasing market share. When objectives cannot be measured there are serious risks, including:
- Inefficiencies growing so that costs become unsupportable.
- Other measurable objectives taking priority and resulting in suboptimal solutions.
With design teams often spending more than 50% of their resources verifying the functionality of a design, it is tempting to think the industry has fallen into the first trap. However, given that most designs require re-spins and that, in most of these cases, the designs contain functional bugs, the industry is perhaps suffering both consequences.
Engineers strive to achieve great results and most CEOs recognize that "quality = profit," so where is the disconnect? One hypothesis is that, because IC design teams can measure cost and schedule accurately, these objectives become the priority. Since quality is controlled by allocating a fixed resource to the project, the assumption is being made that more resources will result in higher quality. However, as with traditional manufacturing industries in the first half of the 20th century, the lack of objective quality control introduces organizational inefficiencies. Functional qualification is about to introduce the same type of revolution to IC verification that total quality management (TQM) introduced to traditional manufacturing.
Functional qualification, which is applicable to both formal methods and dynamic simulation, provides a common metric allowing the complementary use of these two valuable techniques. The current generation of functional qualification tools focuses on measuring dynamic simulation of RTL designs, but support for formal methods, higher levels of abstraction such as SystemC, and embedded software are beacons on the technology roadmap.
Consider a design under verification (DUV) and a verification environment, both derived from a specification as shown in Figure 3. The verification environment generates stimuli to exercise the specified behavior and checks that the DUV's responses are compliant with the specification. The objective of functional verification is to find design bugs (represented by the circle in the DUV in Figure 3).
Figure 3: A typical verification flow.
Existing tools are able to partially measure the quality of the verification environment, usually through the use of code coverage and functional coverage. These techniques do not measure the ability of the verification environment to check for correct responses; instead, they can be thought of as different views of the stimuli in the context of the design. For example, errors in checkers or missing checkers are not reflected in coverage metrics.
Code coverage only partially measures the quality of the stimuli because it does not consider the ability of the stimuli to propagate bugs to outputs. Code coverage is only a summary of the design activity during simulation and most of the temporal behavior of the stimuli is lost in the accumulation of the coverage information during a given test execution. The verification would be of very poor quality if only the minimal set of tests for achieving complete statement coverage were developed.
Functional coverage only partially measures the quality of the stimuli because it is written by the user and is therefore not objective. As a result, it is likely to be incomplete and error-prone. Functional coverage also requires significant investment because the verifier must write the functional coverage code.
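To make this limitation concrete, the following is a minimal software-style sketch (in Python, using hypothetical names such as duv_alu and weak_checker, not taken from any real tool): the stimuli reach every statement of a toy design, so coverage reports look perfect, yet a seeded bug escapes because the checker never examines the affected output.

def duv_alu(op, a, b, covered):
    """Toy design under verification: a two-operation ALU with a seeded bug."""
    if op == "add":
        covered.add("add-branch")
        return a + b
    else:
        covered.add("sub-branch")
        return a + b          # seeded bug: should be a - b

def stimuli():
    """Stimulus generator: drives both operations, reaching every statement."""
    yield ("add", 3, 2)
    yield ("sub", 5, 2)

def weak_checker(op, a, b, result):
    """Checker with a gap: only the 'add' behaviour is verified."""
    if op == "add":
        assert result == a + b, "add mismatch"

covered = set()
for op, a, b in stimuli():
    weak_checker(op, a, b, duv_alu(op, a, b, covered))

print("statement coverage:", 100 * len(covered) // 2, "%")   # reports 100 %
# No assertion fired: the wrong 'sub' result (7 instead of 3) was never checked.

In this sketch the coverage metric is complete while the checking is not, which is exactly the blind spot described above.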
Functional qualification is a technology that can measure both the ability of the verification environment to generate sufficient stimuli and the ability of the verification environment to check for correct responses. This technology is derived from the mutation analysis field within the academic software research community. The seminal paper introducing mutation analysis was written by R. A. DeMillo, R. J. Lipton, and F. G. Sayward in 1978. Mutation analysis continues to be an active domain of software research; the most recent workshop was held at the ISSRE conference in 2006.
Brief history of mutation analysis
Mutation analysis is rumored to have been invented in the early 1970s, and its similarity to the fault simulation techniques used for the calibration of manufacturing test vectors is perhaps not a mere coincidence. It is fascinating to think that 30 years later it would be mutation analysis in the software domain that would inspire functional qualification for microelectronic designs!
The widely known "stuck-at-1" and "stuck-at-0" fault models are used in the application of fault simulation for the fault grading of manufacturing test vectors. These fault models were selected because they explicitly model potential physical defects caused during the manufacturing process for digital ICs. This has been an area of considerable EDA innovation. The introduction of Automatic Test Pattern Generation (ATPG) and the insertion of scan chains to simplify test vector generation have reduced the dependence of modern designs on fault simulation.
Mutation analysis introduced the completely new concept of analyzing small behavioral changes to a software program. Mutations are intended to find weaknesses in the functional testing of a program. An example of a mutation would be:
Original program code:
a = b or c;

Mutated program code:
a = b and c;
In this example the "and" operator has been substituted for the "or" operator. Every behavioral construct in the program can have many associated mutations.
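A few further operator classes commonly cited in the mutation analysis literature are sketched below in a software-style syntax (Python, purely illustrative functions): arithmetic operator replacement, relational operator replacement, and constant replacement. Each mutant differs from the original by a single construct, mirroring the or/and substitution above.

def original(a, b):
    return a + b if a < b else 0

def mutant_arithmetic(a, b):     # arithmetic operator replacement: + becomes -
    return a - b if a < b else 0

def mutant_relational(a, b):     # relational operator replacement: < becomes <=
    return a + b if a <= b else 0

def mutant_constant(a, b):       # constant replacement: 0 becomes 1
    return a + b if a < b else 1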
Mutation analysis measures the quality of the software testing by running the program's tests on the mutated program to see if the results of the mutated program are different from the original program. If no difference results, then the mutation is considered to be alive. If one of the tests causes different results, then the mutation is considered to be killed. Ideally, the software tests should kill all mutants; live mutants indicate weaknesses in the testing.
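This procedure can be summarized in a short sketch (Python, hypothetical names; the or/and mutation from the earlier example is reused): each mutant is run against the same tests as the original program, and any difference in results kills it.

def original(a, b):
    return a or b

def mutant(a, b):                 # the or/and mutation from the example above
    return a and b

def run_tests(program, tests):
    return [program(a, b) for a, b in tests]

def classify(reference_program, mutants, tests):
    """Split mutants into killed (detected) and live (undetected)."""
    reference = run_tests(reference_program, tests)
    killed = [m for m in mutants if run_tests(m, tests) != reference]
    live = [m for m in mutants if m not in killed]
    return killed, live

weak_tests = [(True, True)]                   # or and and agree on this input
strong_tests = [(True, True), (True, False)]  # (True, False) tells them apart

for name, tests in [("weak", weak_tests), ("strong", strong_tests)]:
    killed, live = classify(original, [mutant], tests)
    print(name, "tests -> killed:", len(killed), "live:", len(live))
# The weak tests leave the mutant alive; the strong tests kill it.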
The fundamental hypothesis of mutation analysis is that if the program contains live mutants then the program also could contain real bugs (or coding mistakes) that cannot be found by the existing tests. If the testing is improved so as to kill live mutants, then these same tests can expose the vast majority of previously unknown bugs in the original program.
Several mutation analysis systems were developed in the 1980s (e.g. Mothra for the FORTRAN language), and experiments confirmed this hypothesis to be valid. In fact, these experiments demonstrated mutation analysis to be the most comprehensive measurement of the quality of the tests for a given program. In the 1990s, researchers provided theoretical justifications explaining why mutation analysis is so effective.
Mutation analysis assumes the "competent programmer" hypothesis to be valid. The design is considered to be largely correct, i.e. the majority of the code is assumed to not contain bugs. This is important because the mutation analysis assesses the ability of the verification environment to measure the quality of the current design implementation. When mutations are introduced, they take the design slightly out of specification. Since mutation analysis is completely automated, the user does not need to learn a new language or describe the design behavior.
Mutation analysis is, however, a computationally expensive technique: in the worst case, every test must be run against every mutant, so the number of executions grows with the product of the number of mutants and the number of tests. Performance has been a fundamental obstacle to the adoption of this technology in industrial projects.
Mutation analysis has been applied successfully to languages such as C, FORTRAN, Java, and SQL. IC design entry has evolved from schematic entry to hardware description languages (HDLs) such as Verilog and VHDL. HDLs have programming constructs similar to those of software languages; Verilog resembles the C language and VHDL resembles the Ada programming language. HDLs therefore permit the application of certain software methods, such as assertions and mutation analysis, to the design of ICs.