By David Lammers
An effort is under way to develop a standard method for reporting test coverage, with the ambitious goal of having a reporting standard for stuck-at faults ready in time for the 2005 International Test Conference (ITC).
Rajesh Raina, a VLSI design manager at Freescale Semiconductor (formerly Motorola's Semiconductor Products Sector), is tackling the issue as head of a committee within the Austin-based Die Products Consortium.
The goal is to develop an IEEE standard that would help design teams understand the true level of fault coverage by providing an apples-to-apples comparison between automatic test pattern generation (ATPG) tools from the five major EDA (electronic design automation) test vendors.
Raina described the standards effort at the recent Design For Test Southwest Symposium, organized by SiliconAid Solutions, a DFT (design-for-test) consultancy.
Running Out Of Steam
The problem is that a chip design team may use one test tool that runs out of steam after reaching, say, 98.6% fault coverage. Prompted by a manufacturing team seeking a higher level of coverage, the design team may buy a second tool that reports 99.8% coverage, but with a completely different fault list.
"Often, there are enough differences between the tools to make coverage difficult, if not outright impossible, to judge," says Raina. "It becomes a very painstaking task to map the undetected faults from the pattern set of the original tool into the new tool so it can generate new patterns to go after these faults."
Working Group Tackles The Issue
A working group was recently formed within the Die Products Consortium (DPC). Participating companies include Agilent Technologies, Philips, Intel, Freescale, IBM, LSI Logic, and Texas Instruments.
Several companies already have an in-house method for predicting defect levels from reported fault coverage, which Raina calls a DPM (defects per million) predictor. "These three or four companies may compare algorithms and use that as a starting point," says Raina.
Stuck-At Faults First
Raina says he has spoken with several vendors of the most commonly used ATPG tools, which he says include Cadence Design Systems' Encounter series; LogicVision's BIST Architect series; and Mentor Graphics' FastScan, FlexTest, and TestKompress. Other tools include Synopsys' SoCBIST and TetraMAX, and SynTest Technologies' TurboScan and VirtualScan.
Raina says the three companies contacted by the committee, which he says should remain unidentified until the process is completed, are "in basic agreement" on the need for the reporting standard.
Initially, the DPC-based effort would develop a standard reporting method for the most common fault model: stuck-at faults. A prototype standard for stuck-at faults could be circulated at this year's ITC, then vetted by the test community and proposed to the IEEE and ITC by October of next year, in time for the 2005 ITC.
"The holy grail," notes Raina, "is some method or some process which takes the coverages you get from each different fault model and predicts the failure PPM [parts per million] model. That would be a standard DPM predictor, and the goal is that by the 20005 ITC, we should have some idea of what a DPM predictor would look like."
Raina says the committee first needs to define its terms for stuck-at fault models, and decide on the meanings of test coverage, fault coverage, test effectiveness, and redundant faults.
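Those terms are usually distinguished by what goes into the denominator. The sketch below shows one common convention, not the committee's eventual definitions: fault coverage divides detected faults by all modeled faults, while test coverage excludes faults proved untestable, redundant faults among them.

    def coverage_report(detected, redundant, other_untestable, total):
        # Fault coverage counts every modeled fault; test coverage excludes
        # faults proved untestable (redundant faults among them).
        untestable = redundant + other_untestable
        return {"fault_coverage": detected / total,
                "test_coverage": detected / (total - untestable)}

    # Example: 9,860 of 10,000 faults detected, 60 of them proved redundant.
    print(coverage_report(detected=9860, redundant=60,
                          other_untestable=0, total=10000))
    # {'fault_coverage': 0.986, 'test_coverage': 0.9919...}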
"Phase One is to have a stuck-at fault model ready by the end of this year. From that, we can transition to more complex fault models for IDDQ, path delays, and transition-fault models."
Longer Than Expected
Ben Bennetts, a DFT consultant in the United Kingdom, says the effort will likely take much longer than expected to yield an IEEE standard. The P1500 proposal for testing intellectual-property blocks has been seven years in the making. "It's likely to become an IEEE standard in the middle of this year," says Bennetts.
Ravi Apte, VP of Strategy at SynTest Technologies, says he hasn't yet spoken directly with the committee, but that SynTest would support the idea, though the effort may run up against technical difficulties.
Some companies report only definite fault detects, not probabilistic (potential) detects, while others include both. Some collapse equivalent faults into one fault and report a total of collapsed faults; others report the original, uncollapsed faults. "Potential detected faults and definite detected faults need to be defined," says Apte, "and a reporting standard would help in those areas."
"One tool can report 99 percent coverage of uncollapsed faults," adds Apte, "while another will report 98 percent for collapsed faults, and they may or may not be equivalent."
At a deeper level, however, the number of faults depends on the fault models, which in turn depend on how the primitive cells are modeled.
Different simulators and different ATPG tools view cells differently. Some tools, for example, recognize only two-input AND gates; when such a tool comes across a five-input AND gate, it may split it into several smaller cells.
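A quick count suggests how much that decomposition can matter. The arithmetic below is a simple illustration, assuming the common convention of two stuck-at faults (stuck-at-0 and stuck-at-1) per gate pin; actual counting conventions vary by tool.

    # Uncollapsed stuck-at faults at a gate's pins: sa0 and sa1 per pin.
    def pin_faults(num_inputs):
        return 2 * (num_inputs + 1)  # inputs plus one output

    print(pin_faults(5))      # one 5-input AND modeled directly: 12 faults
    # The same function decomposed into a tree of four 2-input ANDs:
    print(4 * pin_faults(2))  # 24 faults before collapsing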
"To really solve this problem," says Apte, "you have to look at the signals within the cells, and at what happens when the transistors switch inside those cells. And that could be a long-drawn-out affair."