The move to advanced nanometer nodes and new process materials is eroding semiconductor designers’ ability to estimate and achieve device yields. Yield, traditionally limited mainly by defect density, is now heavily impacted by the interaction of process-related deviations with design elements.
In the past, random defects caused by particle contamination were the dominant reason for yield loss, and it was the foundries’ responsibility to control such defects through inspection and other techniques. Today, systematic variations, such as metal width and thickness variations or mask misalignment, are also major contributors to yield loss.
The impact of process variations on design parameters is growing as feature dimensions shrink and design complexity increases. Reducing yield loss now depends as much on the design as on improvements to the manufacturing process. Once an afterthought, yield is becoming a significant concern for designers.
The need for a yield metric
Various yield optimizations are available to designers today, such as via redundancy, metal fill, and adherence to design rules and recommended rules. Currently, design implementation tools assume that yield loss may occur anywhere in the layout, and they apply yield optimizations across the board. Designers have no ability to make informed trade-offs because, in contrast with other design targets, there is no metric today to assess whether these optimizations will even improve yield.
To enable trade-offs between yield optimization techniques, a metric for yield is now required. Choosing a yield model to use as the basis for this metric is an experimental procedure: it involves comparing yield-versus-die-size data from a specific process to the predictions of various yield models, and selecting the model that offers the best fit [1-2]. Figure 1 shows some commonly used yield models.
Figure 1 Commonly used yield models, where A is the die area and D is the foundry’s defect density.
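The models in figure 1 are not reproduced here, but the yield models most commonly fitted in this way are well known in the literature. As an illustrative sketch (the model names and formulas below are standard forms, not taken from the article’s figure), each expresses yield Y as a function of die area A and defect density D:

```python
import math

# First-order sketches of commonly used yield models, where A is the
# die area (cm^2) and D is the defect density (defects/cm^2).
# These are the standard textbook forms; a foundry's fitted model may
# differ in parameters and clustering assumptions.

def poisson_yield(A, D):
    """Poisson model: Y = exp(-A*D)."""
    return math.exp(-A * D)

def seeds_yield(A, D):
    """Seeds model: Y = 1 / (1 + A*D)."""
    return 1.0 / (1.0 + A * D)

def murphy_yield(A, D):
    """Murphy model: Y = ((1 - exp(-A*D)) / (A*D))^2."""
    ad = A * D
    if ad == 0.0:
        return 1.0  # zero area or zero defect density: no loss
    return ((1.0 - math.exp(-ad)) / ad) ** 2

def negative_binomial_yield(A, D, alpha=2.0):
    """Negative binomial model: Y = (1 + A*D/alpha)^(-alpha),
    where alpha is a defect-clustering parameter fitted per process."""
    return (1.0 + A * D / alpha) ** (-alpha)

# Example: compare the models for a 1 cm^2 die at D = 0.5 defects/cm^2.
# The spread between them is exactly why model selection must be fitted
# to real process data rather than chosen a priori.
for name, y in [("Poisson", poisson_yield(1.0, 0.5)),
                ("Seeds", seeds_yield(1.0, 0.5)),
                ("Murphy", murphy_yield(1.0, 0.5)),
                ("Neg. binomial", negative_binomial_yield(1.0, 0.5))]:
    print(f"{name:14s} {y:.3f}")
```

All four models agree that yield falls as the area-times-defect-density product grows; they differ mainly in how they account for defect clustering, which is why fitting against measured data is needed to pick one.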
For a designer lacking access to foundry information, choosing the right yield model is almost impossible. An alternative approach is to devise a metric that does not represent the exact yield value but correlates directly with yield. A first-order metric can clearly illustrate the layout’s sensitivity to yield loss mechanisms and can be used by designers to guide and verify yield optimizations during physical implementation.
Critical area: a metric for yield
Critical area analysis best fits designers’ need for a yield metric. Critical area is the key layout attribute that can measure a design’s sensitivity to yield loss. Primarily used for particle-related yield modeling [3-4], critical area can also provide an excellent proxy for other yield loss mechanisms, such as lithography hotspots.
What is critical area?
Critical area is the area in the design where circuit failures are most likely to occur. Random particles can cause two types of circuit failures:
- Shorts: a short occurs when a conductive defect creates an electrical connection between two neighboring wires.
- Opens: an open occurs when a non-conductive defect creates an electrical “break” or disconnect in a signal path.
A random particle will only cause a circuit failure if it lands in a location where it can produce an electrical short between two wires or break a wire. The sum of all such locations is called the critical area for that specific particle size, as shown in figure 2.
Figure 2 Critical area definition: the region where the center of the particle must fall to create a short or open circuit.