Each new manufacturing process generation brings with it a whole new set of challenges. In an era of multimillion-gate complexity and increasing density of nanometer manufacturing defects, a key challenge today is creating the highest quality deep submicron (DSM) manufacturing tests in the most cost-effective manner possible.
In an effort to contain costs at the tester, designers have begun to embrace a young, relatively obscure design-for-test (DFT) methodology known as scan compression, which uses on-chip circuitry to compress the scan ATPG pattern set without compromising its fault coverage. Scan compression technology seems to have emerged at just the right time, offering designers the promise of reducing tester costs with only negligible impact on design performance, silicon overhead and the engineering resources needed to implement compression on-chip.
But how much test compression is enough? Is there a point beyond which additional compression is a waste of DFT resources? To answer these questions, it is useful to examine two distinct metrics related to generating high quality manufacturing tests: test application time and test data volume.
The first of these, test application time, is the time it takes to execute the test patterns on a tester on a per-die basis. Semiconductor firms that are sensitive to test execution costs may use compression to reduce test application time, since those costs depend on both the test application time and the cost per unit time of the tester.
The second metric, test data volume, is the amount of stored data the tests require. Companies want to reduce test data volume to the point that it fits within the dynamic memory of the tester, since this avoids the time-consuming, and therefore expensive, operation of halting the testing of parts to load more test data into memory before executing the remaining tests. Further reductions in test data volume are needed if these companies wish to raise test quality further by applying substantially more test patterns.
Most design teams in fact want compression solutions that provide both test application time reduction (TATR) and test data volume reduction (TDVR). But how much of each is really needed?
To find out, the benefits and limitations of increased TATR and TDVR are examined below for organizations seeking to add compression to their existing DFT design flows. The analysis that follows refers to a comprehensive test cost model, developed by Carnegie Mellon University researchers for use by Sematech consortium companies, which was augmented to take the effects of compression into consideration. The model breaks the total cost of testing ICs into several distinct categories: test execution, DFT implementation, silicon area overhead of DFT and imperfect test quality [1].
Firms wish to minimize tester costs by reducing the time it takes to apply tests to each die. Consider the case of implementing scan compression to reduce test application time relative to regular (uncompressed) scan. 20x compression cuts test time by 95 percent. However, increasing compression from 20x to 50x removes only another three percent of the original test time, and increasing it from 50x to 100x removes only one percent more (see Figure 1).
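The diminishing returns follow directly from the arithmetic: with a compression factor of x, per-die scan test time scales roughly as 1/x of the uncompressed time, so each further increase in x recovers a smaller slice of what remains. A minimal sketch in Python, assuming this ideal 1/x scaling (real designs typically fall somewhat short of it):

# Illustrative only: assumes scan test time scales as 1/x with compression factor x.
def time_reduction(compression):
    # Fraction of the baseline (uncompressed scan) test time eliminated.
    return 1.0 - 1.0 / compression

for x in (20, 50, 100):
    print(f"{x:>3}x compression -> {time_reduction(x):.0%} of baseline test time eliminated")

# Prints 95% at 20x, 98% at 50x and 99% at 100x: the step from 20x to 50x
# buys only three more points of the original test time, and 50x to 100x only one.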
Figure 1 Test application time reduction versus compression
Since test execution cost is just one of several cost categories, beyond 10-to-20x compression the incremental decrease in test execution cost translates into an even smaller percentage decrease in total test cost (see Figure 2). It is noteworthy that for all scan compression implementations the total cost of test actually rises beyond a certain compression level, because the DFT implementation and silicon area overhead costs grow with compression; these two cost categories must therefore be well contained.
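The shape of that trade-off can be illustrated with a toy cost curve: execution cost falls roughly as 1/x while implementation and area costs grow with x, so their sum reaches a minimum at a moderate compression level. A rough sketch in Python follows; the coefficients are illustrative assumptions, not values from the Carnegie Mellon/Sematech model:

# Toy total-cost curve. Execution cost falls as 1/x; DFT implementation and
# silicon-area costs grow with x. All coefficients are made-up illustrative
# assumptions, not parameters of the actual cost model cited above.
def total_cost(x, exec_base=100.0, overhead_per_x=0.3, fixed=20.0):
    execution = exec_base / x        # tester time shrinks with compression
    overhead = overhead_per_x * x    # implementation/area costs grow with compression
    return execution + overhead + fixed

costs = {x: total_cost(x) for x in (1, 10, 20, 50, 100, 200)}
for x, c in costs.items():
    print(f"{x:>3}x: total cost {c:6.1f}")
best = min(costs, key=costs.get)
print(f"Minimum of this toy curve is near {best}x; pushing compression further raises total cost.")

The exact minimum depends on the design and the tester, but the qualitative behavior matches Figure 2: past the 10-to-20x range the curve flattens and then turns upward.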
Figure 2 Costs of test versus compression