Recently, DFT elements have begun to show up in more and more large complex SoC devices. The concept of scan no longer raises the objections of overhead to the extent it used to. Yet, customers and vendors of DFT technology are still trying to realize the full potential of a comprehensive, structure-oriented DFT strategy as a consistent engineering interface platform.
Systems companies traditionally have been vertically integrated and, thus, have been exposed to the individual difficulties and costs as well as the combined product life cycle effects of design, manufacturing, and product maintenance decisions. Being part of one company, systems house organizations are more motivated to develop and use a common methodology throughout the design, manufacturing, and product support chains than the different players in a disaggregated supply chain.
A comprehensive DFT methodology creates a common interface architecture and operational platform for many engineering purposes. Commonality creates economies of scale by enabling engineers to focus their expertise and know-how on one common platform, rather than being spread too thin over multiple inconsistent methodologies.
Systems houses for some time have established scan-based DFT methodologies as the primary platform of choice for rather sophisticated post-silicon system debug, diagnostics, and initialization/configuration applications, in addition to better manufacturing testability. However, only a fraction of this sophistication has been brought to bear on today's daunting SoC post-silicon debug and characterization problems.
Traditional system-level scan methodologies, to highlight just one simple example, tend to mandate serial access to embedded memories as a design element, enabling the reading and writing of each memory location for the purpose of functional debug.
To complement the DFT hardware infrastructure inside the products, systems companies have developed sophisticated middleware and application software systems that allow engineers to interact with the products via the DFT infrastructure. The resulting scan infrastructure and engineering tools give system engineers complete bit-level state control and observation, thus turning the real product into the ultimate register-level "simulation accelerator." It is telling that today's commonly used scan DFT and Built-In Self-Test (BIST) approaches, even if they use serial memory access interfaces under the covers for memory testing, generally do not provide for debug-oriented memory access. Instead, extra hardware may have to be grafted on top of the purely test-oriented DFT/BIST interfaces to enable serial memory access for debug.
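The serial, debug-oriented memory access described above can be sketched in a few lines. This is a minimal behavioral model, not any particular product's interface; the class and method names are invented for illustration. It models the familiar scan sequence: shift bits in serially, capture a memory word into the scan register (read), or update the memory from the scan register (write).

```python
# Hypothetical sketch: a scan register wrapped around an embedded memory,
# modeling the serial read/write path a debug tool would drive bit by bit.

class ScanMemoryAccess:
    """Serial (bit-level) debug access to a small embedded memory."""

    def __init__(self, words, width=8):
        self.width = width
        self.mem = [0] * words          # the embedded memory array
        self.shift_reg = [0] * width    # scan register between scan-in and scan-out

    def shift_in(self, bits):
        """Shift bits serially into the scan register (LSB first)."""
        for b in bits:
            self.shift_reg = self.shift_reg[1:] + [b]

    def capture(self, address):
        """Capture one memory word into the scan register (the 'read' step)."""
        word = self.mem[address]
        self.shift_reg = [(word >> i) & 1 for i in range(self.width)]

    def update(self, address):
        """Update a memory word from the scan register (the 'write' step)."""
        self.mem[address] = sum(b << i for i, b in enumerate(self.shift_reg))

# Debug-style write, then read-back, of location 3:
bus = ScanMemoryAccess(words=16)
bus.shift_in([(0xA5 >> i) & 1 for i in range(8)])
bus.update(3)
bus.capture(3)
readback = sum(b << i for i, b in enumerate(bus.shift_reg))
```

The point of the sketch is the bit-level granularity: the middleware mentioned above builds register- and memory-level abstractions on top of exactly this kind of primitive shift/capture/update interface.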
Although system-level debug, initialization, configuration, and maintenance advantages of a comprehensive scan-based DFT structure are worth an article in their own right, this article will discuss an equally powerful advantage on the other side of the fence: the world of manufacturing, failure analysis, and manufacturing yields.
Scan enables automated structural testing, and systems-house engineers have learned to appreciate the unique values of a purely structural test methodology. Structural test operates in terms of logic gates and nets rather than intended chip functions. This function-independence enables engineers in later stages of the design, manufacturing, and product engineering chains to effectively and efficiently solve test and yield problems without needing knowledge of the function.
Again, it is telling that scan-based test vectors generated by Automatic Test Pattern Generation (ATPG) tools in the "early" days used to be referred to as Fault Locating Tests (FLTs), signifying that the purpose of such tests is not merely to detect faults, but also to help locate and isolate the root causes of the faulty behavior. In today's language, ATPG vectors have been "demoted" to being just test vectors.
In line with the emphasis on the diagnostic capabilities of FLTs, some systems houses developed highly automated logic diagnostics software tools to rapidly locate the root cause of failures from scan test fail data collected from ATE on the manufacturing test floor or in the characterization lab, or even from fail data collected in the field. Understanding complex system functionality, like that of today's complex SoCs, requires intimate and highly specialized application- and architecture-specific knowledge. Such knowledge normally rests with the designers of the function blocks, but not with the back-end test and failure analysis engineers.
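One classic flavor of such automated diagnostics is dictionary-based matching, sketched below with invented fault names and fail data. Each candidate stuck-at fault carries a precomputed signature: the (pattern, output) pairs it would cause to fail. Observed tester fails are scored against the dictionary, and candidates are ranked by how well they explain the data. This is a deliberately simplified sketch of the technique, not a description of any specific vendor tool.

```python
# Hypothetical fault dictionary: candidate stuck-at faults on nets, each with
# the set of (pattern number, failing output) pairs it would produce.
FAULT_DICTIONARY = {
    "u12/nand2.A stuck-at-0": {(3, "q[0]"), (7, "q[0]"), (9, "q[2]")},
    "u47/inv.Y stuck-at-1":   {(3, "q[0]"), (5, "q[1]")},
    "u88/dff.D stuck-at-0":   {(9, "q[2]")},
}

def diagnose(observed_fails):
    """Rank candidate faults by how well their signature explains the fails."""
    scored = []
    for fault, signature in FAULT_DICTIONARY.items():
        explained = len(signature & observed_fails)      # predicted and seen
        mispredicted = len(signature - observed_fails)   # predicted, not seen
        scored.append((explained - mispredicted, fault))
    scored.sort(reverse=True)
    return [fault for score, fault in scored if score > 0]

# Fail data as logged on the tester: (pattern number, failing output pin)
fails = {(3, "q[0]"), (7, "q[0]"), (9, "q[2]")}
candidates = diagnose(fails)
```

Note that nothing in this procedure requires understanding the chip's function: the dictionary, the fail log, and the scoring all operate purely on gates, nets, patterns, and pins.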
The structure-oriented automated logic diagnostics tools and the engineers using these tools, by contrast, only have to understand the rather small set of simple logic gates that are common to all logic applications. It does not matter whether the chip functions are intended for space applications, medical applications, a video game, or anything else. Nor does it matter what the micro-architecture looks like or who designed the function. Gates are gates, and nets are nets, irrespective of the chip's functional purpose.
This unique function- and technology-independence is a grossly underrated, invaluable advantage of scan-based structural test methodologies when it comes to time-to-market, time-to-volume, and time-to-profit for complex semiconductor products. The ability to react quickly and on-the-spot to manufacturing problems associated with complex SoC devices is particularly significant for designs emerging from today's increasingly disaggregated design and manufacturing chains. Going back to the original source of some piece of design content to obtain functional understanding may be reasonably viable in a vertically integrated company, but definitely is much more challenging in the brave new world of third-party Intellectual Property (IP), global design teams, and manufacturing/test outsourcing.
Even if it is possible to go back to the original design team, getting them involved in the diagnostics process causes inevitable delays and logistical problems. The beauty of structure-oriented scan methodologies is that many aspects of failure analysis and yield learning have been automated and can be accomplished without having to go back to the original design organizations. Due to the universal commonality of the gate/net abstraction, the automation tools can be applied to all digital products alike.
A further organizational decision to use a common structure-oriented test methodology and integrated toolset, rather than a hodge-podge of different point tools, gives the diagnostics/yield engineers a better chance to develop deep expertise in that common methodology and toolset. The availability of highly automated diagnostics tools running on a common structural DFT methodology leads to demonstrably faster problem turn-around time and, in particular, to higher physical failure analysis success rates. The net result is faster feedback to manufacturing for process improvements, enabling faster technology ramping and better yields.
DFT-based automation technology has matured to the point that most of the ingredients needed for a successful implementation of automated logic diagnostics are available. However, DFT is practiced in design and DFT tools are primarily sold to design organizations, while diagnostics and failure analysis are practiced in manufacturing and product engineering.
Realizing the automation opportunity in manufacturing and failure analysis, hence, requires data and information links between design and manufacturing. Examples of successful design-to-manufacturing integration exist primarily in vertically integrated companies. One major challenge facing today's industry will be to port the successful models to the brave new disaggregated world where data security/accessibility and IP protection are at least as contentious as agreeing on the necessary data content and interchange formats.
The logic diagnostics tools typically operate at the gate/net abstraction level. Physical failure analysis needs x-y coordinates. One obvious integration requirement, hence, is to link diagnostic call-outs in terms of logic net names to identifiable physical shapes so that physical failure analysis equipment can successfully navigate to the likely physical location of the root cause defects. Equally obvious, a link is needed between fail data collection data formats on the production or characterization test equipment and the fail data input format expected by the logic diagnostics tools.
In addition, the tool needs to have access to the correct netlist and physical design data of the design, as well as the actual test patterns that were used for testing. Correlation with manufacturing data, like in-line process images, also is very useful.
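The net-name-to-coordinates link described above amounts to a join between the diagnostic call-out and the physical-design database. The sketch below illustrates the idea with invented net names and geometry; a real flow would pull the geometry from LEF/DEF or GDSII data rather than a hand-written table.

```python
# Hypothetical placement data: net name -> bounding box (x0, y0, x1, y1),
# in microns, as might be extracted from the physical design database.
NET_GEOMETRY = {
    "top/core/u12/n_1432": (142.6, 88.1, 143.9, 96.4),
    "top/core/u47/n_0077": (310.2, 40.5, 312.0, 41.1),
}

def callouts_to_coordinates(callouts):
    """Translate ranked net-name call-outs into FA navigation targets."""
    targets = []
    for net in callouts:
        box = NET_GEOMETRY.get(net)
        if box is None:
            # Net name lost in synthesis/layout renaming: a real hazard of
            # a poorly specified design-to-manufacturing hand-off.
            continue
        x0, y0, x1, y1 = box
        targets.append((net, ((x0 + x1) / 2, (y0 + y1) / 2)))  # box center
    return targets

targets = callouts_to_coordinates(
    ["top/core/u12/n_1432", "top/core/u99/renamed_net"]
)
```

The failure mode in the sketch, a call-out net with no matching physical shape, is precisely why the hand-off has to preserve consistent naming between the netlist the diagnostics tool sees and the layout the FA equipment navigates.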
What emerges from this brief review is not a call for inventing fancy new algorithms, but the need to formulate and codify a constructive design-to-manufacturing hand-off: one that makes the right kind of design information available and accessible from the manufacturing environment and, once it is there, links that design data to the right manufacturing data as well as to failure analysis equipment and equipment navigation tools.
The transition to the 130nm process node has been more painful than expected. All indications are that the upcoming transitions to 90nm and 65nm are equally scary propositions. While a comprehensive structure-oriented DFT and test strategy by itself may not solve all problems, it may yet be among the industry's better chances to help accelerate the ramp of the new technology nodes, as well as to give the various players in the increasingly disaggregated design and manufacturing chains a practical opportunity to productively collaborate on defect detection, defect analysis, and yield improvement.