In the IC design flow, design-for-test is often an afterthought. First the design is coded, then simulated, then synthesized, and only after all that, usually months into the design cycle, is it handed over to a test team to ensure the design is testable. And often it isn't.
Most RTL designers have never had to insert test vectors into their own designs. They're not held accountable for the testability of their designs. They don't even understand that there is a lot they can do to make their designs more testable.
Why should RTL designers care? If testability rules are not applied at the initial design stage, the design can have poor test coverage, or even be untestable, until extensive changes are made. Testability fixes made later at the gate level are rarely reflected back into the RTL code, so applying proper design techniques up front not only saves valuable test engineering time but also results in code that is much more reusable.
For years, test engineers have argued that the designer should have implemented many of the test functions that must be added to a design. The designers, on the other hand, don't see the payoff and don't understand test requirements, so they have always pushed back. Until now.
Now it's possible for designers coding in Verilog or VHDL to estimate a reasonable upper bound for obtainable fault coverage from the RTL description. On top of that, designers can check the effect of design changes on fault coverage.
Until recently, there was no good way to check this early in the design cycle, but new techniques make it possible. Designers can now get solid estimates of the fault coverage that an ATPG tool such as TetraMax or FastScan will achieve, without running those tools and without creating complex testbenches.
Scan design, the most commonly used DFT method, generally requires testmode controls to allow shifting test vectors into the circuit flip-flops and shifting out test results so that a test machine can compare actual results with expected results. These testmode controls must ensure that the flip-flop clocks and asynchronous set and reset pins do not depend on circuit state.
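The shift-in, capture, shift-out protocol described above can be sketched as a toy Python model. The function names and the three-flip-flop chain are illustrative assumptions, not any tool's API; in hardware, shifting happens serially through a scan multiplexer on each flip-flop.

```python
def scan_shift_in(chain, vector):
    """Serially shift a test vector into the flip-flop chain."""
    for bit in vector:
        chain.pop()           # the last flip-flop's value falls off the chain
        chain.insert(0, bit)  # a new bit enters at the scan input
    return chain

def capture(chain, logic):
    """One functional clock: flip-flops capture the combinational response."""
    return logic(chain)

def scan_shift_out(chain):
    """Shift the captured response out for comparison on the tester."""
    return list(chain)  # in hardware this happens one bit per clock

# Example: a 3-flip-flop chain feeding logic that inverts every bit.
chain = [0, 0, 0]
chain = scan_shift_in(chain, [1, 0, 1])
chain = capture(chain, lambda ffs: [b ^ 1 for b in ffs])
result = scan_shift_out(chain)
print(result)
```

The tester compares `result` against the expected response; any mismatch flags a manufacturing defect.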
Gated clocks, for example, are commonly used in low power and in a variety of other applications as illustrated in the following code fragment:
assign clknet = clk & clken;
always @ (posedge clknet)
  q <= data;  // flip-flop clocked by the gated clock
Scan design requires that gated clocks have bypass logic so that the clock does not depend on any flip-flops controlling the enable logic. The following modified code ensures that, in test mode, the gated clock "clknet" follows "clk" directly, with no dependence on the system clock-enable logic.
assign tclken = clken | testmode;
assign clknet = clk & tclken;
always @ (posedge clknet)
  q <= data;  // when testmode is high, tclken is forced high and clknet follows clk
Derived set and reset signals also require test consideration: if a set or reset becomes active during a scan shift operation, the data being scanned in or out can be corrupted. The end result could be bad chips passing or good chips failing; either way, there is a significant problem. The code fragment below shows a simple example of a flip-flop that is reset by an internally generated signal.
always @ (posedge clk or negedge derived_reset)
  if (!derived_reset) q <= 1'b0;
  else q <= data;
From a test perspective, the internally generated reset should be disabled by a test mode signal as illustrated below:
assign tmreset = derived_reset | test_shift;
always @ (posedge clk or negedge tmreset)
  if (!tmreset) q <= 1'b0;
  else q <= data;
These two examples, while appearing on the surface to be easy changes for any RTL designer to make, are just the tip of the iceberg. There are so many rules that need to be remembered that even the most experienced design teams have trouble spotting them all. Therefore, many companies conduct lengthy RTL code reviews, allowing experienced designers to oversee all of the code for a project. Unfortunately, even code reviews are no guarantee that everything will be noticed.
The reality: hundreds of design files may make up a design, so the symptom of a problem (a flip-flop that does not qualify for scan replacement, for example) may be far removed from the actual cause, and from the best place to fix it.
With tough schedules and conflicting system requirements yet to be solved, the RTL designer is not motivated to tackle DFT issues, especially since the payoff for DFT changes is not so obvious. But a commonly used metric, test coverage, is now available at the RT level to give very graphic evidence of the effect of even small changes to a design.
The objective of a test coverage estimate is to provide a quick but sufficiently accurate estimate of the test coverage that commercial ATPG tools will eventually report much later in the design flow.
An ATPG tool targets a pin stuck at a specific value (0 or 1) and then attempts two tasks: exercising the fault and propagating its effect. If the tool completes both tasks for a particular stuck-at-0 or stuck-at-1 pin, so that the necessary conditions exist simultaneously, the stuck-at fault is declared detected.
The two fundamental ATPG tasks can be approximated by two equivalent testability analysis tasks: fault exercise is replaced with "controllability" and fault propagation is replaced with "observability." These two tasks are much simpler than ATPG and are well known within the realm of testability analysis.
Testability analysis determines whether or not a node in a gate-level circuit can be controlled from the input pins and observed at the output pins. In the past, a number of algorithms have been used on gate-level descriptions. The trick is to produce the gate-level netlist quickly while keeping it logically accurate.
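As a rough illustration of such gate-level testability analysis, the sketch below computes, for a tiny hand-written netlist, which logic values each node can take (controllability) and which nodes have a structural path to an output (observability). The netlist format and function names are assumptions for illustration, and this ignores the cost weighting that real SCOAP-style algorithms apply.

```python
# Netlist: name -> (gate_type, [fanin names]); listed in topological order.
netlist = {
    "a": ("input", []),
    "b": ("input", []),
    "tie0": ("const0", []),              # tied-low node: can never be a 1
    "n1": ("and", ["a", "b"]),
    "n2": ("or", ["n1", "tie0"]),
    "dangling": ("and", ["a", "tie0"]),  # no path to any output
}
outputs = ["n2"]

def controllability(netlist):
    """Forward pass: the set of values {0, 1} each node can be driven to."""
    ctrl = {}
    for name, (gtype, fanin) in netlist.items():
        if gtype == "input":
            ctrl[name] = {0, 1}
        elif gtype == "const0":
            ctrl[name] = {0}
        elif gtype == "and":
            ctrl[name] = {x & y for x in ctrl[fanin[0]] for y in ctrl[fanin[1]]}
        elif gtype == "or":
            ctrl[name] = {x | y for x in ctrl[fanin[0]] for y in ctrl[fanin[1]]}
    return ctrl

def observability(netlist, outputs):
    """Backward pass: nodes with a structural path to some output."""
    obs = set(outputs)
    for name in reversed(list(netlist)):  # reverse topological order
        if name in obs:
            obs.update(netlist[name][1])  # a node's fanins are observable through it
    return obs

ctrl = controllability(netlist)
obs = observability(netlist, outputs)
print(ctrl["tie0"], "dangling" in obs)
```

The tied-low node can never be controlled to 1, and the dangling node can never be observed; both facts surface immediately without any pattern generation.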
Such tasks are considerably simplified if design tools incorporating predictive analysis are used, especially if they include fast synthesis to create a flat gate-level representation. This allows structural analysis to be performed at the RTL design phase.
Essentially, this enables the tool to detect, at the RT level, complex design problems such as clock domain crossings, synchronization, tri-state bus decoding, combinational loops, and logic cone depth, as well as complex testability issues, and to report errors directly back to the original RTL file and line number. A graphical debugging environment and a schematic viewer help in quick problem isolation.
Predictive analysis-based DFT is especially useful here if it includes a highly optimized testability analyzer that determines whether or not each node in the circuit can be controlled to a 0 or a 1, and whether or not each node can be observed at the circuit or test outputs.
The justification for using testability analysis to estimate test coverage rests on two principles. First, if a fault can be exercised to a 1/0, then the node can be controlled to a 1/0. Second, if a fault effect can be propagated to an output, then the node can be observed at that output.
Predictive analysis-based DFT makes it possible to identify unused faults as well as untestable faults, and therefore to estimate test coverage.
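A minimal sketch of how the two per-node properties turn into a coverage estimate, using made-up node data: a stuck-at-0 fault is counted as detectable only if the node can be driven to 1 and observed, and symmetrically for stuck-at-1.

```python
nodes = {
    # name: (can_be_0, can_be_1, observable)
    "n1": (True, True, True),
    "n2": (True, True, True),
    "tie0": (True, False, True),     # tied low: stuck-at-0 cannot be exercised
    "dangling": (True, True, False), # unobservable: both faults untestable
}

def estimated_coverage(nodes):
    total = 2 * len(nodes)  # one stuck-at-0 and one stuck-at-1 fault per node
    detected = 0
    for can0, can1, observable in nodes.values():
        if can1 and observable:  # drive the node to 1 to exercise stuck-at-0
            detected += 1
        if can0 and observable:  # drive the node to 0 to exercise stuck-at-1
            detected += 1
    return detected / total

print(estimated_coverage(nodes))  # 5 of 8 faults -> 0.625
```

Because faults on uncontrollable or unobservable nodes are excluded up front, the estimate is an upper bound on what an ATPG tool can achieve.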
Controllability and observability of a node tell us nothing about whether these conditions can be satisfied simultaneously. Nevertheless, estimates based on controllability and observability can come surprisingly close to actual ATPG performance. Furthermore, the estimates can be used to compare alternative solutions to a problem; in that case, the exact value may be less important than the relative value.