Robert Ruiz, Product Marketing Manager, Nanometer Analysis and Test Business Unit, Swaminathan Venkat, Group Marketing Manager, Verification Technology Group, Synopsys Inc., Mountain View, Calif.
Deep-submicron processes mean that more gates, and therefore more functionality, can reside on a single die. But verifying those functions and ensuring that silicon is adequately tested is no trivial matter. That extra functionality comes at a price: design engineers routinely complain that functional verification cycles are expanding to consume up to 70 percent of the design cycle.
The deep-submicron feature sizes that enable this complexity also cause test problems when silicon comes back from the fab. Test programs based on the traditional stuck-at fault model can no longer sufficiently exercise the millions of gates available with today's silicon processes. The good news is that EDA companies are working on technology to help engineers with these verification challenges. Design and test engineers facing tight deadlines and spiraling complexity say these solutions can't come soon enough.
Traditional functional verification tools haven't kept pace with advances in silicon processes and design tools, making it possible to create designs that cannot easily be verified. Some designers have been forced to resort to hardware description languages (HDLs) to create their verification code, a hopelessly inadequate method for quickly and effectively verifying complex ICs and systems-on-chip (SoC). HDLs were designed to model hardware, and they lack the high level of abstraction and the features necessary for the successful functional verification of complex systems.
To avoid some of the limitations of an HDL, designers often turn to general-purpose programming languages like C/C++ to create their own testbenches or verification tools.
But this approach has problems of its own, because general-purpose programming languages have no understanding of hardware. For example, C/C++ does not have any notion of timing, clocks, high impedance, unknown states or concurrency.
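As a rough illustration of the gap, consider what an HDL provides natively that a general-purpose language must emulate by hand. The sketch below (all names are illustrative, not any tool's API) models four-state logic values and multiple drivers on a shared net, concepts C/C++ simply does not have:

```python
# Minimal four-state logic sketch: the 0/1/x/z values and driver
# resolution that HDLs build in, but C/C++ must emulate by hand.

ZERO, ONE, X, Z = "0", "1", "x", "z"

def resolve(a, b):
    """Resolve two drivers on a shared net, as an HDL simulator would."""
    if a == Z:
        return b            # high-impedance driver yields to the other
    if b == Z:
        return a
    return a if a == b else X  # conflicting drivers produce unknown

def and_gate(a, b):
    """AND with unknown propagation: 0 dominates, otherwise X spreads."""
    if a == ZERO or b == ZERO:
        return ZERO
    if a == ONE and b == ONE:
        return ONE
    return X
```

Timing, clocks and concurrency impose similar burdens: every one of these notions must be layered on top of the language before any actual verification work can begin.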
Design engineers who are frustrated with the traditional HDL and C/C++ methodologies are turning to other languages and tools that are specifically designed for verification. When evaluating these languages and tools, engineers should look for several features. First of all, the verification language and methodology must enable development of reusable code. Such an environment will accelerate testbench development and decrease the overall design verification time, which shortens time-to-market. Reusing and sharing code will also bring down the cost of functional verification.
Increasing time-to-market pressures are also forcing design teams to concurrently develop software and hardware. Therefore, the ideal verification environment must accommodate easy, seamless integration of software and the hardware or register-transfer-level (RTL) code. Because concurrency is inherent to all digital systems, the ideal verification environment must also enable one to order, schedule and express concurrent events.
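The requirement to order and schedule concurrent events can be sketched with a minimal discrete-event kernel. This is an assumption-laden toy (the class and method names are invented for illustration), but it shows the core mechanism a verification environment must supply:

```python
import heapq

# A minimal discrete-event scheduler sketch. Events fire in time order;
# a sequence counter keeps same-time events in their scheduling order.

class Scheduler:
    def __init__(self):
        self.queue = []
        self.seq = 0      # tie-breaker for events at the same time
        self.now = 0

    def at(self, time, action):
        heapq.heappush(self.queue, (time, self.seq, action))
        self.seq += 1

    def run(self):
        while self.queue:
            self.now, _, action = heapq.heappop(self.queue)
            action()

log = []
sim = Scheduler()
sim.at(10, lambda: log.append(("drive_reset", sim.now)))
sim.at(5,  lambda: log.append(("clock_edge", sim.now)))
sim.at(5,  lambda: log.append(("sample_bus", sim.now)))
sim.run()
```

After `run()`, the two time-5 events precede the time-10 event, and their relative order is deterministic, which is exactly the kind of guarantee concurrent verification code depends on.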
To verify data-path-intensive designs, the verification environment must be able to exercise all relevant combinations of stimulus patterns. With the increased complexity of multimillion-gate designs, functionality is best tested by pseudorandom code generators. The verification environment should therefore allow easy generation of self-checking random code that can be predictably reproduced on demand.
A sound verification methodology must also allow for dynamic checking of simulation results. This is particularly essential when verifying multimillion-gate SoC designs, since their inherent complexity makes it nearly impossible to create dump files and observe the simulation waveforms for correct operation. Self-checking is also mandatory to be able to run pre-tapeout regressions on the design.
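These two requirements, reproducible random stimulus and dynamic self-checking, can be sketched together in a few lines. The "design under test" here is a stand-in 8-bit adder and the names are illustrative; the point is the fixed seed and the independent reference model:

```python
import random

# Self-checking random test sketch: a fixed seed makes the "random"
# stimulus reproducible on demand, and a reference model checks each
# result dynamically instead of relying on waveform dumps.

def dut_add(a, b):
    """Stand-in for the design under test: an 8-bit adder."""
    return (a + b) & 0xFF          # wraps on overflow

def run_test(seed, n=1000):
    rng = random.Random(seed)      # same seed -> same vector sequence
    for _ in range(n):
        a, b = rng.randrange(256), rng.randrange(256)
        expected = (a + b) % 256   # independent reference model
        actual = dut_add(a, b)
        assert actual == expected, f"mismatch at a={a}, b={b}"
    return n

run_test(seed=42)
```

If a regression fails, rerunning with the same seed replays the exact failing sequence, which is what makes random testing usable for pre-tapeout regressions.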
Tools are evolving to address these concerns and requirements.
But it is imperative that design and verification engineers understand the EDA tool evolution that the deep-submicron processes dictate. Only with a thorough understanding of complex silicon's verification requirements can engineers properly evaluate the appropriate technology for their designs.
Rising test costs
The same silicon complexity that is driving the evolution in the functional verification arena is also affecting device testing. The sheer number of gates residing in a complex SoC makes it harder to completely test a device. If enough vectors are created to do the job, it generally means more time is spent on expensive testers. According to some in the silicon industry, test costs are becoming an increasing portion of the overall capital requirement per transistor. As design complexity increases, manufacturing-test costs will eclipse other development costs for SoC designs until they are the most expensive part of the design cycle. This is clearly unacceptable.
Simply put, increased gate counts are making it obvious that traditional manufacturing-test methodologies and test programs are no longer adequate.
Many EDA users report high test coverage with their test programs as measured by the traditional single-stuck-at fault (SSF) model, but still get defective parts. This is because the SSF model breaks down at smaller silicon geometries. The model assumes that each input and output of every gate can be stuck at a high or a low voltage level (a logic one or zero, respectively).
But minute deep-submicron feature sizes introduce defects the SSF model does not represent, such as bridging defects, in which the function may still work but more slowly. The primary way to supplement coverage of such defects is to perform other tests, such as IDDQ testing and delay fault testing. To ensure adequate testing, many engineers are turning to structured design-for-test (DFT) methodologies, especially full-scan insertion. This, along with automatic test-pattern generation (ATPG), can help designers achieve high fault coverage and still conserve valuable ATE resources.
Nevertheless, given the explosive growth in design sizes and complexities, three fundamental concerns must be addressed for scan DFT to continue to be viable. First, scan methodologies must be transparently integrated into all implementation and verification flows, because design engineers do not have the time to learn and implement test technology. This means that synthesis tools must implement optimized, scan-testable logic and all the other tools in the high-level design flow must understand test logic.
This must be accomplished while still taking into account the function, timing, area and power requirements of the device. With old "over-the-wall" methods, the test structures inserted later in the design cycle could affect one or even all of these design parameters. As a result, costly and time-consuming iterations took place while the design bounced back and forth between the design and test engineers.
Second, ATPG must minimize the learning-curve impact on designers by becoming integrated with the high-level design flow so that relevant design and test attributes are automatically transferred. With present and future design complexities, manual setup of ATPG with that information is far too effort-intensive and error-prone. ATPG methodologies must be easy to use and must support efficient diagnostic and debugging efforts when there are design problems, which may occur, say, when designs have legacy or imported blocks. To address the need for supplemental testing techniques, ATPG tools must also offer a simple means of generating IDDQ tests and delay fault tests in addition to the standard SSF tests.
Finally, ATPG solutions must offer acceptable and scalable quantitative performance, including the speed and capacity to handle multimillion-gate designs without partitioning or other manual strategies. At the same time, ATPG solutions must deliver optimal vector quality of results: the highest possible fault coverage using the fewest possible vectors.
While ATPG must be fully integrated into high-level design flows to maintain designer and test-engineer productivity, the pattern generation process does impose special requirements on advanced ATPG products. Though DFT is now increasingly an intrinsic part of the high-level design flow, ATPG is commonly used only at the very end of the design process. For that reason, ATPG tools must be very easy to adopt and use, especially since designers use them so infrequently. When an ATPG tool does point out testability problems, it is critical for diagnostic information to be presented usefully. Localizing problems in multimillion-gate designs is extremely difficult, and the diagnostic and debugging capabilities in advanced ATPG products must be oriented toward helping the designer solve problems in addition to finding them.
ATPG is fundamentally a gate-level verification technology. For this reason and because designs are getting increasingly complex, ATPG technology must stay on the leading edge of performance. ATPG performance is measured by three interrelated metrics: vector quality of results (QoR), capacity and execution speed. Vector QoR is a metric that combines fault coverage with vector compaction. Optimal vector QoR is the highest possible fault coverage using the fewest possible vectors. Capacity simply refers to the largest possible design that can be loaded into a given workstation platform. Execution speed is the time necessary to achieve a given level of fault coverage and vector compaction.
ATPG performance is usually evaluated by the highest achievable vector QoR for a given design at realistic settings, that is, at the maximum tolerable run-time. Of course, execution speed is directly correlated with vector QoR: run-time increases exponentially for the most elusive faults and increases dramatically at higher levels of vector compaction. Evaluating ATPG capacity is somewhat binary: either a design fits on a workstation or it doesn't. One way to extrapolate capacity is to judge the number of bytes per gate that ATPG requires, though this is inexact.
Better evaluation methods are necessary to truly assess the quality of an ATPG tool, and these should be based on the best combination of vector QoR, capacity, execution speed and integration into the design flow. If evaluated together, all of those requirements spell out the features and capabilities that designers should look for when choosing an ATPG solution. If these requirements are not properly evaluated, the design either will not be adequately tested or will run up exorbitant test time on very expensive ATE.
Though deep-submicron complexity is responsible for the increased adoption of full-scan DFT and ATPG techniques, old habits die hard. There is still a great deal of wariness in the designer community over the presumed costs of implementing these techniques. While costs in design overhead have largely become irrelevant because of the vast improvements in silicon technology, there are still concerns about ease of use, design flow impact and suitability for ever-larger designs. These are valid concerns for back-end scan design and ATPG flows that do not address their impact during high-level design, or that only check for scan rule violations at the end of the design process.
Hierarchical DFT and ATPG tools that work in and fully support hierarchical synthesis flows are key to making scan DFT an integral part of the design process.