These days, IC design engineers have more functionality to implement in their designs than ever before, even though design schedules are shrinking. Although design-for-test (DFT) is absolutely necessary to enable thorough and cost-effective manufacturing test, it potentially makes the overall design process even more challenging.
The paradigm of the designer “throwing the design over the wall” to the test engineer was viable when DFT meant little more than adding scan chains to a design, and when gate delays dominated interconnect delays. But the old paradigm no longer applies in an era where advanced DFT methodologies are necessary to limit test cost, interconnect delays are dominant and power consumption is critical.
In fact, designers are confronted with two fundamentally different sets of goals that seem to be in direct conflict with each other. On the one hand, they must contend with increasingly aggressive design goals that stem from competitive pressures to squeeze as much functionality as possible into the smallest die size while meeting strict timing and power budgets. Yet the test logic added to a circuit to improve fault coverage and reduce test data volume itself impacts design timing, area, power and routing congestion. All these effects need to be accounted for during RTL synthesis to ensure that design goals are not compromised. Gate-level DFT tools that simply insert test logic into a post-synthesis netlist invariably violate design constraints by not considering the timing impact of the interconnect delays, the connections that cross voltage/power domains or the area impact. Extra time-consuming synthesis iterations are then needed to resolve all the violations, a process which lengthens the project schedule.
On the other hand, as if achieving the design goals on schedule were not challenging enough, designers are faced with equally aggressive test goals reflecting quality and cost considerations. Today’s designs incorporate complex clocking schemes, multiple memories, mixed-signal circuits and multiple voltage/power domains—all having constraints that affect the parameters and physical location of the test logic. In addition, increasing demand for more functionality in a smaller area and at lower cost, especially for mobile appliances, has led to tighter IC packaging constraints. As a result, the number of package pins that can be allocated for test has been reduced, affecting test coverage and compression results. A combination of design characteristics ranging from flop count to the type of power management cells used all influence DFT choices and test outcomes.
And so we have the dual causality that makes DFT so challenging: achieving the test goals can impact your design, but the design itself has an effect on the test logic and, ultimately, test outcomes. To accelerate convergence on both design and test goals, design implementation and test implementation need to be combined in RTL synthesis. With this approach, the test logic required to meet the test goals is synthesized simultaneously with the functional logic while observing timing, floorplan and power constraints affecting all the logic. Having advanced DFT capabilities “under-the-hood” allows designers to deal with the complexity of modern designs, freeing them to focus on achieving their design goals while also meeting test quality and cost requirements.
An approach that implements test during synthesis is very different from bolt-on methods that require connecting test RTL IP to the design RTL. Bolt-on methods demand dedicated focus on the test portion of the overall design task because test implementation happens outside synthesis, separate from design implementation. For example, to add scan compression to a design using a bolt-on solution, considerable manual effort is needed up-front to first estimate the compression parameters and then stitch the compression IP to the design RTL. If you make a mistake or encounter issues late in the flow, caused by the subtle impact of design characteristics on the test parameters, you have to go back, repeat these steps and iterate through the entire flow again. These iterations between design and test can involve significant time, effort and cost.
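To make the up-front estimation step concrete, here is a minimal sketch of the kind of back-of-the-envelope arithmetic a bolt-on flow forces on the designer before the compression IP can even be stitched in. The function name, the target chain length and the simple ratio formula are illustrative assumptions, not any tool's actual model.

```python
# Hypothetical sketch: estimating scan compression parameters up front,
# as a bolt-on flow requires. All names and formulas are illustrative.

def estimate_compression(num_flops, chain_length_target, test_channels):
    """Estimate internal chain count, compression ratio and shift depth."""
    # Number of internal scan chains needed to hit the target length
    # (ceiling division without importing math).
    internal_chains = -(-num_flops // chain_length_target)
    # Each external test channel feeds/observes many internal chains;
    # the effective compression ratio is roughly chains per channel.
    ratio = internal_chains / test_channels
    # Shift cycles per pattern scale with the longest internal chain.
    shift_cycles_per_pattern = chain_length_target
    return internal_chains, ratio, shift_cycles_per_pattern

chains, ratio, shift = estimate_compression(
    num_flops=1_000_000, chain_length_target=500, test_channels=8)
print(chains, ratio, shift)  # prints: 2000 250.0 500
```

The point of the sketch is the fragility the article describes: if the floorplan or pin budget later changes `test_channels`, or synthesis changes the flop count, these hand-computed parameters are stale and the stitching must be redone.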
In contrast, combining design and test in RTL synthesis does not assume that test experts are always on-hand to provide specialized guidance on DFT matters. This approach generates highly flexible test structures that are fully optimized based on the design characteristics and the design constraints. These structures work seamlessly with automatic test pattern generation (ATPG) to enable high defect coverage and high compression for a wide variety of complex designs, including extremely large systems-on-chip, and for test methodologies that allocate few test pins.
Comprehensive and consistent test design rule checks at both RTL and gate level let designers flag testability issues early in the design phase and then apply the same checks after synthesis to verify consistency between the RTL and the gate-level netlist. The most common violations, those related to controllability of clocks and asynchronous set/reset signals, can be resolved during synthesis with automatic “fixing” as needed. These fixes do not compromise design goals because the test logic is synthesized subject to the design constraints.
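The controllability checks described above can be sketched in a few lines. This is a toy model, not any tool's API: flops are represented as (name, clock source, async reset source) tuples, and a violation is flagged whenever a clock or reset is driven by internal logic rather than a top-level port. All instance and net names are made up for illustration.

```python
# Illustrative sketch of a minimal test design-rule check: flag flops
# whose clock or async reset is not directly controllable from a
# top-level port. Data model and names are hypothetical.

def check_controllability(flops, top_ports):
    """Return (flop, pin_type, driver) tuples for uncontrollable pins."""
    violations = []
    for name, clock_src, reset_src in flops:
        if clock_src not in top_ports:
            violations.append((name, "clock", clock_src))
        if reset_src is not None and reset_src not in top_ports:
            violations.append((name, "async_reset", reset_src))
    return violations

top_ports = {"clk", "rst_n", "scan_en"}
flops = [
    ("u_core/r0", "clk", "rst_n"),        # clean
    ("u_core/r1", "gated_clk", "rst_n"),  # clock from internal gating
    ("u_io/r2", "clk", "soft_rst"),       # reset from internal logic
]
print(check_controllability(flops, top_ports))
# prints: [('u_core/r1', 'clock', 'gated_clk'), ('u_io/r2', 'async_reset', 'soft_rst')]
```

Running the same rule set on the RTL and again on the post-synthesis netlist is what gives the RTL-to-gate consistency the article refers to.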
Power problems during test can delay the ramp-up to volume production and cut into profit margins. In a solution that combines test with design implementation, the test logic and the ATPG engine work in tandem to circumvent power issues, reducing power consumption during test to levels below your specified power budget. For low-power designs, the compression logic is transparently disabled during mission mode to minimize its power, and power management cells are inserted as needed, with their number minimized to reduce area impact.
When the test logic is synthesized, congestion optimizations reduce the occurrence and severity of routing congestion hot spots that can occur when a large number of internal scan chains connect to the compression logic. Synthesizing a design with less congestion minimizes iterations by making it easier to perform a “clean” route without design rule or design timing violations.
Although design and test goals may be fundamentally different, as stated at the outset, they need not be in direct conflict after all. Because test functionality has become increasingly entangled with design functionality, achieving better design-for-test now requires an approach based in synthesis to enable faster and more predictable results for both design and test.
Arif Samad is VP of Engineering in the Test Automation Group of Synopsys, Inc.
Information about Synopsys' activities at International Test Conference 2010 can be found on the Synopsys website.
Great comments from everyone. I work with Arif at Synopsys and we meet often with design teams from different companies. We are encouraged by the trend to consider test as part of the design (indeed, it is), and surprised to find that some teams still consider and execute it as a separate activity.
We agree that 'test' is essential, high value, and can be accommodated as an integral part of the design process starting with synthesis.
Hank, I totally agree that "test" is not something that is foisted onto the "design" -- it is an integral part of the design.
It is essential to get test engineering & design engineering talking to each other very early in the project, so there are no surprises or misunderstandings about test capability and cost. Designers also need to learn that the test engineer is their friend. When something is not working, you will be SO glad you have the fault coverage, the BIST, the test buses to bring out internal signals, the JTAG, etc.
I don't quite understand why the author so emphasized the point about inserting test logic at the RTL level, before synthesis. Haven't we all been doing this for at least the last 10 years? It's been a long time since I've seen anybody use "bolt-on methods" -- for exactly the reasons the author states.
While I agree with these comments, I think the slant is that "test" is something foisted onto the "design". As was said long ago, customers won't pay much if the chips aren't tested, and they won't pay if the manufacturing test cost is too high. So test is every bit as much a design constraint as functionality, area, power and delay. Design would be so much easier if we didn't have to worry about test, but it would be so much easier if we didn't have to worry about area, delay or power either.
The article does not mention the fact that the fundamental properties of DFT - controllability and observability - are also useful for debug, which is also a design requirement.
This is a thoughtful article concerning the increasing need for DFT in the author's design domain. It is part and parcel of a larger problem, though. That is the desire to perform hardware and software design simultaneously. Once called co-design, this activity was something of a chimera except in special cases. Modern design tool chains, however, can support co-design by providing a platform that can either simulate hardware or incorporate hardware in the loop for both the control electronics and the "plant", the hardware being controlled. This comes at a price: the tool chain isn't cheap; a fair amount of training is needed to use the tool set effectively; and there is no single tool chain that will bridge the spectrum from IC design through, say, turbine engine FA controllers. Still, tool chains do exist, and it seems increasingly important to employ them on large and even medium-size designs.
I agree with the overall picture of DFT and design being two orthogonal objectives that add complexity to the design cycle. Beyond the timeline or schedule, we also pay a price in overall chip functionality: for example, DFT complicates the clocking scheme, leading to suboptimal clock tree synthesis, which means more power while the phone is operating, or more heat-up when the processor is running.