Those who make money by developing and selling chips know that it is not enough to successfully design a chip that meets all performance specifications. To be a viable volume-production item, the chip must also exhibit sufficient yield throughout the manufacturing chain—fabrication, packaging and test. Just as chip designers must design their products such that they can be successfully tested (design for test, or DFT), they must also take into consideration factors that will maximize the chip's yield after all the relevant manufacturing operations. This is called design for manufacturing or DFM. With today's tight profit margins, particularly for chips that go into consumer products, achieving a high manufacturing yield can mark the difference between success and failure, and just a small increase in yield percentage can translate to additional millions of dollars of revenue.
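To make that last point concrete, here is a back-of-the-envelope sketch in Python; the wafer volumes, die count, and selling price are hypothetical numbers chosen only to illustrate the arithmetic, not figures from any real product.

```python
# Back-of-the-envelope estimate of the revenue gained from a small yield
# improvement. All numbers are hypothetical.

wafers_per_year = 50_000    # annual wafer starts
gross_die_per_wafer = 600   # candidate die per wafer
asp = 5.00                  # average selling price per good chip, in dollars

def annual_revenue(yield_fraction):
    good_die = wafers_per_year * gross_die_per_wafer * yield_fraction
    return good_die * asp

# A two-point yield improvement, from 85% to 87%:
gain = annual_revenue(0.87) - annual_revenue(0.85)
print(f"Added annual revenue: ${gain:,.0f}")   # $3,000,000
```

Even with these modest assumptions, two points of yield are worth $3 million a year, which is why DFM earns its place in the design flow.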
The need for DFM is driven by several factors, including more complex photolithography operations for deep-submicron designs, higher design complexity, shrinking device geometries and more devices per unit area on a chip, and more complex processing operations.
The recent Wescon conference and exhibition in Santa Clara, California (April 12-14) was the unlikely site for a very good technical track on DFM. I say unlikely because recent Wescon conferences have not been noteworthy for technical chip-level sessions. However, Wescon 2005 may have marked a turning point for these types of technical events.
As part of Wescon's Design & Analysis program, Si2, the Silicon Integration Initiative, put together a DFM track consisting of:
- A keynote talk on "Nanometer Era Design for Manufacturability"
- Several presentations covering various aspects of DFM, analog design for yield, optical lithography, and DFT
- A panel on "Innovative Approaches to Tackle the Challenges of DFM"
Three of the DFM presentations were particularly interesting: "DFY Fundamentals" by Mark Rencher of Pivotal Enterprises, "Redefining Test for the DFM Era" by David Abercrombie of Mentor Graphics, and "Design for Manufacturability in an Analog World" by James Lin of National Semiconductor. Each of the authors of these talks brought out some very important points.
Mark began by defining design for yield (DFY), which includes DFM and DFT, and explaining that DFY predicts chip yield at two points in the manufacturing flow, wafer probe and final test of the packaged chip, and identifies which defects result in yield loss. He then explained the three types of chip defects: random, systematic, and parametric.
Random defects arise from mechanisms that are not tied to a particular process step but can cause a chip either to fail outright or to miss a performance specification. Examples include a foreign particle on the wafer that shorts two interconnect lines, a short caused by a metal bridge between two lines, a contact that fails to open (leaving an open circuit), a break in an interconnect line, and a pinhole in a transistor's gate oxide.
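Although not part of the talk, a standard first-order way to reason about random-defect yield is the Poisson model, in which yield falls off exponentially with the product of die area and defect density; the defect density used below is hypothetical.

```python
import math

# Poisson model of random-defect-limited yield: Y = exp(-A * D0), where
# A is die area and D0 is the random defect density. D0 here is hypothetical.

D0 = 0.25   # random defects per square centimeter
for area_cm2 in (0.5, 1.0, 2.0):
    y = math.exp(-area_cm2 * D0)
    print(f"die area {area_cm2} cm^2 -> random-defect yield {y:.1%}")
```

The model also shows why larger chips with more devices per unit area, one of the DFM drivers noted earlier, suffer disproportionately from random defects.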
Design, processing, or test can all add systematic defects to a chip, since these problems can result from any mechanism that causes spatial or time-based variations on the chip. Systematic yield loss may be corrected with tighter controls during chip processing, but there is a definite tie between design and systematic yield at process geometries below 130nm, and the correlation increases with each process-node shrink. A common way of reducing systematic yield loss in chips processed at these leading-edge geometries is to use optical proximity correction (OPC) techniques to adjust geometries so that the features printed on the chip more closely resemble what was drawn during the chip's design (Figure 1).
Figure 1: Chip vendors use optical proximity correction (OPC) to add correction features to the drawn geometries on the chip. OPC helps ensure that the features printed on the chip closely resemble those drawn during the design process. (Courtesy "DFY Fundamentals," Pivotal Enterprises, Wescon 2005)
OPC is an effective way to deal with geometry distortion between design and chip; however, it comes at a price. First, there is the cost of the EDA tools needed to implement the OPC corrections. Second, the volume of data representing the chip's layout balloons, along with the time it takes to process this data and prepare it for photomask generation.
Parametric yield loss is caused by several factors that represent a chip's process and environmental variations from targeted, nominal values. Examples of this type of yield loss include:
- Statistical process variations
- Temperature and operating-voltage spreads
- Geometry variations on the chip that shift parameter values away from nominal, such as transistor delays (which depend on gate length and oxide thickness) and interconnect delays (which depend on metal width and spacing).
Modeling parametric yield loss is a complex process, consisting of exhaustive simulation of parameter variations and sensitivity analysis to determine how those variations affect critical design targets. Analog and mixed-signal chips are more sensitive to parametric yield loss than digital chips, because analog design parameters, such as phase margins for filters, input offset voltages for op amps, and hysteresis for Schmitt triggers, are more complex than those of digital control and datapath circuits.
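As a rough illustration of that kind of variation analysis, the Python sketch below runs a toy Monte Carlo loop over a made-up path-delay model; the parameter distributions, sensitivities, and spec limit are all invented for illustration and do not come from the presentation.

```python
import random

# Toy Monte Carlo estimate of parametric yield. The delay model, parameter
# spreads, and spec limit are hypothetical.

N = 100_000          # number of Monte Carlo samples
SPEC_LIMIT_PS = 500  # maximum allowed path delay (hypothetical spec)

def sample_path_delay():
    # Normalized parameter variations around nominal (illustrative sigmas):
    gate_length = random.gauss(1.0, 0.05)
    oxide_thick = random.gauss(1.0, 0.03)
    metal_width = random.gauss(1.0, 0.04)
    # Toy sensitivities: delay grows with gate length and oxide thickness,
    # and with narrower (more resistive) interconnect.
    gate_delay = 300 * gate_length * oxide_thick
    wire_delay = 180 / metal_width
    return gate_delay + wire_delay

passing = sum(sample_path_delay() <= SPEC_LIMIT_PS for _ in range(N))
print(f"Estimated parametric yield: {passing / N:.1%}")
```

A production flow would sweep many more parameters and add sensitivity analysis to identify which ones dominate the yield loss.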
Mark also brought up a very interesting point regarding the importance of the three yield-loss mechanisms for different process nodes. Figure 2 shows a graph of yield vs. process node for the Taiwan chip industry in 2004 (courtesy "DFY Fundamentals," Pivotal Enterprises, Wescon 2005). The figure shows that the different defect types—parametric, random and systematic—have different regions of influence. While parametric defects encompass all the process nodes on the graph, random defects are prevalent for 250nm down to 130nm and systematic defects affect chips processed at 90nm. More importantly, since Taiwan's wafer capacity peak was for chips processed between 130nm and 250nm, it would be logical to have most DFY work concentrate on reducing random defects at these process nodes.
Redefining Test for the DFM Era
With each new process node, geometries shrink and both chip processing and design become more complex. New materials are added to the processing flow, masking steps increase and become more intricate, and the design rules chip designers must follow become trickier. At today's mainstream process nodes and below, traditional "stuck-at" fault detection for digital circuits becomes inadequate for catching faulty chips. David discussed how the cost of testing good chips becomes a larger part of a chip's manufacturing cost as gate counts, test-data volume, and test times spiral upward (Figure 3). New defect types, such as resistive bridging, add complexity and cost to chip testing.
Figure 3: As process nodes shrink and chips comprise more logic gates, the amount of data needed to test these chips explodes, rapidly adding to chip test time and cost. (Courtesy "Redefining Test for the DFM Era," Mentor Graphics, Wescon 2005)
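For readers unfamiliar with the stuck-at model mentioned above, the toy sketch below enumerates and simulates stuck-at faults on a two-gate circuit; the netlist and test patterns are hypothetical, and production tools of course work on designs millions of gates larger.

```python
# Toy stuck-at fault simulation on a two-gate circuit (an AND feeding an OR).
# A stuck-at fault forces one node to a constant 0 or 1; a pattern detects
# the fault if the faulty output differs from the good output.

def circuit(a, b, c, stuck=None):
    def v(node, value):                       # apply the fault, if any
        return stuck[1] if stuck and stuck[0] == node else value
    n1 = v("n1", v("a", a) & v("b", b))       # n1 = a AND b
    return v("y", n1 | v("c", c))             # y  = n1 OR c

nodes = ["a", "b", "c", "n1", "y"]
faults = [(n, s) for n in nodes for s in (0, 1)]      # all stuck-at faults
patterns = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]

detected = {f for f in faults
            for p in patterns if circuit(*p, stuck=f) != circuit(*p)}
print(f"detected {len(detected)} of {len(faults)} stuck-at faults")
```

The model's weakness is exactly the point above: a resistive bridge may not force any node to a constant value, so a pattern set with perfect stuck-at coverage can still miss it.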
David proposed several methods of reducing test cost for deep-submicron chips:
- Transition from a strictly logic-based defect-detection model to one based on the chip's physical layout
- Employ statistical yield learning from production test
- Reduce total test time by using test-compression techniques
Traditional logic-based bridge testing looks at all the nets in a circuit and checks for shorts between every pair of nets. Bridge testing based on a chip's physical layout targets the highest-priority bridging faults first and skips net pairs that cannot physically bridge (Figure 4). The result is that more faults are caught earlier in the test process and total test time drops.
Figure 4: By basing test priority on the physical layout of a chip, more probable faults are tested earlier and total test time is reduced. (Courtesy "Redefining Test for the DFM Era," Mentor Graphics, Wescon 2005)
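A minimal sketch of the layout-based approach follows; the net names and extraction data are hypothetical, and a simple critical-area-style heuristic (parallel run length divided by spacing) stands in for whatever prioritization a production tool actually uses.

```python
from itertools import combinations

# Hypothetical extracted layout data:
# (net_a, net_b, parallel_run_um, spacing_um)
adjacent_pairs = [
    ("clk",   "data0", 120.0, 0.14),
    ("data0", "data1",  45.0, 0.20),
    ("vdd",   "reset",   8.0, 0.42),
]
all_nets = {"clk", "data0", "data1", "vdd", "reset", "scan_en"}

# Logic-based view: every net pair is a candidate, n*(n-1)/2 of them.
logic_candidates = list(combinations(sorted(all_nets), 2))

# Layout-based view: only physically adjacent pairs, riskiest first.
def bridge_weight(pair):
    _net_a, _net_b, run, spacing = pair
    return run / spacing   # long parallel runs at tight spacing bridge most

layout_candidates = sorted(adjacent_pairs, key=bridge_weight, reverse=True)

print(f"logic-based candidates:  {len(logic_candidates)}")   # 15
print(f"layout-based candidates: {len(layout_candidates)}")  # 3, ranked
for net_a, net_b, run, spacing in layout_candidates:
    print(f"  {net_a} <-> {net_b}: weight {run / spacing:.0f}")
```

Even in this six-net toy, the candidate list shrinks from 15 pairs to 3, and the pairs most likely to fail are tested first.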
Using test chips to characterize yield helps isolate yield-loss mechanisms and determine yield sensitivities to various design features. However, the approach is not cheap: there are costs associated with designing the test chip, making its masks, and processing wafers that generate no direct revenue. Moreover, defects on test chips are "out of context" compared with those on real functional chips, and test chips cannot screen for at-speed defects.
Statistical yield learning works by recording data from production test (DFT error logging and diagnosis) and statistically analyzing far larger samples of defective devices than a traditional failure-analysis flow can handle. This technique eliminates the costs of using a test chip to analyze yield-loss mechanisms and produces a more accurate determination of which design features are causing yield loss.
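A minimal sketch of this kind of yield learning, assuming hypothetical diagnosis records, is simply a Pareto analysis over the logged suspects:

```python
from collections import Counter

# Each record names the design feature that production-test diagnosis
# implicated for a failing die. Feature names are hypothetical.
diagnosis_logs = [
    "via_m2_m3_single", "bridge_min_spacing_m1", "via_m2_m3_single",
    "open_long_m4_route", "via_m2_m3_single", "bridge_min_spacing_m1",
]

total = len(diagnosis_logs)
print("Yield-loss Pareto (most common suspects first):")
for feature, count in Counter(diagnosis_logs).most_common():
    print(f"  {feature}: {count} failures ({count / total:.0%})")
```

Because the records come from every die that fails production test, the sample size dwarfs what a manual failure-analysis lab could ever process.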
To combat rising test-data volume, test compression reduces test data, test time, and test cost. "Smart" test compression should cause no loss of test coverage, support all fault and test-pattern types, leave the functional chip design untouched, have minimal impact on chip area, and require no change to the automatic test equipment (ATE) interface. By compressing both the test stimuli and the responses, David noted, the compressed data sets on the ATE can be two orders of magnitude smaller than uncompressed ones, significantly reducing total test time.
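The sketch below illustrates only the data-volume argument, not Mentor's actual compression scheme: scan test cubes are dominated by don't-care bits, so storing just the specified bits (and letting on-chip hardware fill in the rest) shrinks the stored data dramatically. The patterns are hypothetical.

```python
# Toy illustration of test compression. X = don't-care, 0/1 = specified bit.
test_cubes = [
    "XX1XXXX0XXXXXXX1",
    "X0XXXXXXXX1XXXXX",
    "XXXXX1XXXXXXX0XX",
]

raw_bits = sum(len(cube) for cube in test_cubes)

# Keep only the specified bits as (position, value) pairs; a real on-chip
# decompressor would expand these into full scan patterns.
compressed = [[(i, b) for i, b in enumerate(cube) if b != "X"]
              for cube in test_cubes]
stored_bits = sum(len(entries) for entries in compressed)

print(f"raw pattern bits:    {raw_bits}")     # 48
print(f"specified bits only: {stored_bits}")  # 7
print(f"reduction:           {raw_bits / stored_bits:.0f}x")
```

Real designs have far longer scan chains and far sparser care bits, which is how production schemes reach the hundredfold reductions cited above.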
Design for Manufacturability in an Analog World
In his presentation, James emphasized that DFM work to date has concentrated on digital designs and therefore cannot be applied directly by analog chip designers. He identified three analog-design areas that need DFM solutions: DFY, DFT, and DFR (design for reliability). According to James, there are significant differences between analog and digital design that affect DFY practices. These differences include:
- While digital signals switch between VDD and ground, analog circuits operate between these levels and therefore have performance that depends on precise voltage levels.
- Analog chip testing is often more expensive than digital chip testing.
- Analog designs have to work despite long-term parameter drifts after deployment in a system and usually have no built-in error-correction mechanisms.
- Analog designs have performance specifications that are more multi-dimensional than those of digital designs (including, for example, offset voltage, frequency response, gain linearity, supply rejection, and noise immunity).
- Analog devices are sensitive to both global and local process variations.
- Component matching is critical for analog designs, much more so than for digital designs.
Since analog circuits often have gain as one of their performance parameters, device mismatch is magnified, as shown in Figure 5 (courtesy "Design for Manufacturability in an Analog World," National Semiconductor, Wescon 2005).
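A quick numeric illustration of that magnification, with hypothetical values:

```python
# How gain magnifies device mismatch: a small input-referred offset from
# mismatched devices appears at the output multiplied by the stage gain.
# Both values are hypothetical.

input_offset_mv = 1.5   # input-referred offset caused by device mismatch
stage_gain = 100        # amplifier gain

output_error_mv = input_offset_mv * stage_gain
print(f"{input_offset_mv} mV of mismatch becomes {output_error_mv} mV "
      f"of error at the output of a gain-of-{stage_gain} stage")
```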
Unfortunately, DFM for analog circuits is still largely a manual job; design tools for the task lag far behind those for digital-circuit DFM, so successful analog DFM still depends heavily on the designer's expertise. Compounding the problem, analog circuits, like their digital brethren, are becoming more complex as technology nodes shrink and newer chips take on more functionality, making the analog DFM goal of raising packaged-chip yields that much harder.
Making DFY Part of the Design Flow
Design for yield has an interesting parallel to design for test. In the 1980s, DFT began to be inserted into a chip's design flow as part of the total design process. Initially, engineers resisted the inclusion of DFT: their primary task was to make sure that the chip met all of its design specifications, and learning about and employing DFT techniques added to their workload.
A similar situation is occurring with DFM: chip designers have to be convinced that it must be part of the design cycle. Just as a chip that is not designed for test cannot be adequately tested, a chip that is not designed with yield in mind cannot become an optimally profitable product. DFM is still in its infancy and is evolving as processes shrink to 90nm and beyond. Successful deployment of DFM requires the cooperation of everyone involved in chip design and manufacturing: designers, EDA tool vendors, ATE manufacturers, silicon foundries, and vendors of chip-processing and chip-test equipment. In other words, DFM is everyone's business.
About the Author
Jim Lipman is currently Vice President, Client Services for Cain Communications, specializing in the development and implementation of communication and marketing services programs for companies serving the semiconductor, silicon-IP, EDA, and other high-tech electronics-industry segments. Jim's experience includes chip-design R&D, marketing, marcom, consulting, technical editing, technology training, and on-line publishing of technical content for engineers. His email address is email@example.com