Learn how design-for-test can minimize false positives or false negatives in design verification tests.
In Part 1, we introduced the concept of two tests: one to show the UUT (unit under test) to be faulty, the other to show it to be good. Because these two tests are not exact duals of each other, their results overlap. An indication that the test failed (T+) comes either from a truly bad UUT (True Positive or TP) or from a False Positive (FP) on a good UUT. Similarly, an indication that the test passed comes either from a truly good UUT (True Negative or TN) or from a missed fault, called a False Negative (FN).
Figure 1 illustrates that an axis at M equally divides the incorrect test results FP and FN.
Figure 1. This illustrates a set of tests where some faults go undetected (False Negative or FN) while some false alarms (False Positive or FP) are indicated for UUTs that are not faulty. Tests can be biased during the test planning stage to reduce FP by moving line M to line L or to reduce FN by moving line M to line R.
The "correctness" of fault-oriented tests is called Sensitivity and it follows the formula:

Sensitivity = TP / (TP + FN)
These are the areas under the curves on the negative y-axis of Figure 1.
Similarly, the correctness of tests demonstrating that the product meets specifications is appropriately called Specificity, and it is the area on the positive y-axis of Figure 1. It has the formula:

Specificity = TN / (TN + FP)
In a "perfect test" (FN = FP = 0), both Sensitivity and Specificity would be 100%. While that sounds ideal, for some tests we deliberately bias the test to concentrate our test resources on either Sensitivity or Specificity. By shifting the axis from M to L or to R in Figure 1, we can get a perfect Specificity = 1 or Sensitivity = 1, respectively. Such a bias often makes sense. For example, an IC vendor would rather discard a good IC that costs pennies to make than have a customer return a bad chip that caused a $700 smartphone to fail. That vendor will accept lots of FP (low Specificity) as long as it has no FN (Sensitivity = 1). More on this in Part 3 of this series.
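The threshold-shifting idea above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical measurement data (not from the article): good and faulty UUTs produce overlapping measurements, a UUT is flagged faulty when its measurement exceeds a pass/fail threshold, and moving that threshold trades Specificity against Sensitivity, just as moving the axis does in Figure 1.

```python
def classify(measurements, labels, threshold):
    """Count TP, FP, TN, FN for a pass/fail threshold.

    labels: True = truly faulty UUT, False = truly good UUT.
    A measurement above the threshold is flagged as faulty (T+).
    """
    tp = fp = tn = fn = 0
    for m, faulty in zip(measurements, labels):
        flagged = m > threshold            # test says "faulty"
        if flagged and faulty:
            tp += 1                        # True Positive: bad UUT caught
        elif flagged:
            fp += 1                        # False Positive: good UUT rejected
        elif faulty:
            fn += 1                        # False Negative: bad UUT missed
        else:
            tn += 1                        # True Negative: good UUT passed
    sensitivity = tp / (tp + fn)           # fault-detection correctness
    specificity = tn / (tn + fp)           # good-UUT correctness
    return sensitivity, specificity

# Hypothetical data: good UUTs cluster low, faulty UUTs high,
# with some overlap (the FP/FN region of Figure 1).
good = [1.0, 1.2, 1.4, 1.6, 2.1]           # truly good UUTs
faulty = [1.9, 2.3, 2.5, 2.7, 3.0]         # truly faulty UUTs
meas = good + faulty
labels = [False] * len(good) + [True] * len(faulty)

# Middle threshold: some FP and some FN on both sides.
print(classify(meas, labels, 2.0))   # → (0.8, 0.8)

# Aggressive (lower) threshold, the IC-vendor bias from the text:
# Sensitivity = 1 (no escapes), paid for with more FP.
print(classify(meas, labels, 1.5))   # → (1.0, 0.6)
```

With the lower threshold, every faulty UUT is caught (FN = 0) but two good UUTs are needlessly scrapped, which is exactly the trade the IC vendor in the example is happy to make.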
In this part, we want to find the most appropriate Sensitivity and Specificity for DVTs (design verification tests) and then use the most appropriate test and testability resources to help us achieve such tests.
First, let's clear up some confusion in terminology. Validation answers whether the right device was designed, and we will not be discussing it here. Verification answers whether the device was designed correctly.
Design Verification Testing (DVT)
DVT is an intensive testing program for verifying that the product was designed to meet its specifications and interface standards:
- Functional Testing (including usability)
- Performance Testing
- Environmental Testing
- Product Reliability Testing
- Product Regulatory Compliance Testing, such as Electromagnetic Compatibility (EMC)
We want DVT tests to pass primarily because the units are actually good (TN), not because faults escaped the test's coverage (FN). If we can reduce FN to 0, we can be confident that units passing the DVT are in fact free of failures.
To Page 2: What is a DVT Failure?