In recent discussions with customers around the world, we have been hearing a surprising new message—that, at 28 nm, they have to care about density at the cell design level “like never before.” It’s surprising because density has historically been a manufacturing issue that was handled post tape-out or during chip assembly. However, where and how density is handled in the design process has evolved significantly along with the process technologies (Figure 1). In this article, I’ll take a look at how density has evolved from a back-end manufacturing issue that was of little interest to designers to a design concern that affects the layout of standard cell libraries.
Figure 1. Responsibility for density checking and management has moved progressively up the production line as nodes decrease and designs become more compact.
In this discussion, density refers to the drawn area of polygons within a given region on a given layer of a layout. The size of the region over which the density is calculated, called the “window,” depends on the process effect that is driving the check. For instance, some process effects are very local (microns or less), while others have much larger interaction distances (up to hundreds of microns). In fact, a single process step, such as Chemical Mechanical Polishing (CMP), can have different interaction distances on different layers, resulting in a different window size for each layer.
In technologies before 130 nm, density was treated as a manufacturing issue, and the responsibility of the foundry. At that time, the window size for density checks was between 100µm and 300µm, depending on the foundry and layer. That window size was 300-600 times the size of the minimum pitch, meaning that, in these designs, the density was being averaged over a lot of layout. The cell designer could easily follow the simple width and spacing design rules without worrying about density, unless the design was one of a small set of special structures, like capacitors. When the chip was finished, it went to the foundry, and the foundry added extra metal shapes in the empty spaces between the drawn polygons to even out the density variations. This was called “dummy” fill because the extra shapes weren’t part of the circuit. In the beginning, the foundries did this without even telling their customers about it, since the process was assumed to have no impact on the electrical characteristics of the circuit.
However, with each new technology, the foundry has to solve a host of new process challenges. In solving those challenges, they often have to make decisions and compromises that add new constraints to the design rules. As you can see in Figure 2, not only does the number of design rules steadily increase with each new technology, but the same effect also occurs with the density requirements. For instance, there might need to be separate rules for density inside and outside of the memories, or a new technology or layer might be more sensitive to variation in the density, and thus need a new rule for allowable density gradient. Each new technology has made the requirements for density more stringent, while at the same time adding more restrictions on how the layout can be manipulated to meet those requirements. The phrase “between a rock and a hard place” comes to mind, since the layout is being constrained both at the very local level (design rules) and at the macro level (density over an area).
Figure 2. The growth in density rules and design rules over technology node advancements.
As density requirements increased, fill placement was also becoming more aggressive, with fill shapes moving closer to the signal lines, making it harder for the foundry to convince designers there was no impact on electrical behavior. This trend made more designers want to check the impact of the fill on the performance of their designs, and by around 65 nm, it became common for design teams to take control of the fill process themselves, using fill rule decks provided by the foundry. Although this shift in responsibility continued down to 45 nm, density was still a chip-level issue, to be solved at chip assembly or tape-out.
As Figure 2 shows, 28 nm brings a significant increase in both the number of design rules and the number of density rules. For the Poly layer alone, the number of density rules increases by 60%, while the number of width/space/area checks increases by a whopping 80%. However, while this jump does indeed make layout more complex, it doesn’t in and of itself move density from a “chip-level” concern to a “cell-level” one. What, then, is different at 28 nm?
Most of the new density requirements are applicable to front-end layers that are relevant to cell design. These new rules are not esoteric checks or lithography rules, but “typical” design constraints that define the ways a designer can achieve the required min/max density over a given area (i.e., minimum width and spacing in different configurations). The problem for the designer is that, with more and more constraints being applied to protect circuitry and ensure manufacturability, the steps that can be taken to achieve the desired density average are more limited. But that still doesn’t explain why, at 28 nm, fill now becomes the cell designer’s job.
Let’s go back to the basics of density checking: a square “window” is stepped across the design, with the step size being some fraction of the window dimension. At each position of the window, the density is calculated. The calculation can be as simple as the ratio of drawn polygon area to total window area, or something more complex that includes density effects from multiple layers. As the number and complexity of density checks has increased, both the window size and the step size have decreased. A smaller window size means that the effect of one cell on the average density in the window is now far more significant than before. For instance, at 130 nm, the ratio of window size to cell height was about 40x, meaning that one row of standard cells contributed, at most, 2.5% to the density calculation for a window. At 28 nm, the ratio of window size to cell height can be as small as 10x, so a single row can account for up to 10% of the window density.
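To put numbers on that, here is a small, purely illustrative Python sketch (not any tool’s actual algorithm) of how much a single fully dense cell row can contribute to one window, assuming a square window and a row that spans the full window width:

# Maximum contribution of one fully dense standard-cell row to a single square
# density window, assuming the row spans the full window width (illustrative only).
for node, window_to_cell_height in [("130 nm", 40), ("28 nm", 10)]:
    contribution = 1.0 / window_to_cell_height   # row height / window height
    print(f"{node}: one dense row can shift the window density by up to {contribution:.1%}")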
To illustrate how this seemingly small change in the window-size-to-cell-height ratio can affect the lowest level of design, let’s look at a simplistic example containing a 10x10 cell array (Figure 3). Each cell either does or does not contain a polygon, giving it a density of 1 or 0. Assume the upper limit on density is 60%, so any window density over that value constitutes a violation. Fifty of the 100 cells contain polygons, so with a 10x10 window the density of the array is 50%. No problem. If, however, we calculate the density using a 5x5 window on the same array, we get more variation across the windows (but still no density violations). If we further reduce the window size to 2x2, we get 9 windows containing violations. And if we go all the way down to 1x1, we get 50 violations. Thus, even though the individual cells don’t change, the smaller window size exposes more violations.
Figure 3. Density calculated with varying window sizes demonstrates how a smaller window size creates more violations at the “cell” level, even when the cells themselves remain the same.
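For readers who want to experiment, the short Python sketch below reproduces the spirit of Figure 3. The placement of the 50 filled cells here is random rather than the specific arrangement drawn in the figure, so the exact violation counts will differ, but the trend of more violations as the window shrinks is the same:

import random

random.seed(0)  # arbitrary seed; exact counts depend on the cell arrangement

# 10x10 array of "cells": 50 contain a polygon (density 1), 50 are empty (density 0).
cells = [1] * 50 + [0] * 50
random.shuffle(cells)
grid = [cells[r * 10:(r + 1) * 10] for r in range(10)]

LIMIT = 0.60  # assumed maximum allowed density

def count_violations(grid, window):
    """Tile the grid with window x window boxes and count boxes whose density exceeds LIMIT."""
    n = len(grid)
    violations = 0
    for top in range(0, n - window + 1, window):
        for left in range(0, n - window + 1, window):
            filled = sum(grid[r][c]
                         for r in range(top, top + window)
                         for c in range(left, left + window))
            if filled / (window * window) > LIMIT:
                violations += 1
    return violations

for w in (10, 5, 2, 1):
    print(f"{w}x{w} window: {count_violations(grid, w)} violation(s)")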
In concert with diminishing window sizes, the step size has also been decreasing. Step size matters because the window placement is arbitrary relative to the layout. The actual process sees the entire design on the wafer, so the density check must account for many possible placements of the density windows. The typical solution is to “step” the window across the layout. If the ratio of step size to window size is 1, each window position is adjacent to the previous one, much like the simple grid above. If the step size is half the window size, each position overlaps the previous one by 50%, roughly doubling the number of window positions in each direction and giving a much more continuous sampling of density across the layout.
For many years, the step size was usually about half of the window size. During that time, however, both the cell height and the window size were shrinking. Since the cell is the controllable element, let’s look at the trend of step size to cell height. Figure 4 shows this ratio over technology nodes for both the Poly and Active layers. Poly clearly has the lowest ratio, and is therefore the more significant case. Between 130 nm and 45 nm, the step size was roughly 4-7 times the cell height, meaning each new step of the window brought in 4-7 new rows of cells, and the density variation from step to step was therefore an average over 4-7 rows. At 28 nm, the ratio goes all the way down to 1! Each step of the window now brings in just one new row of cells, and if that row contains capacitors or other very dense cells, it can have a significant impact on the density average. Finally, we can see that, yes, the combination of smaller window sizes and smaller step sizes makes density a requirement that must be controlled at the cell level, not just the chip level.
Figure 4. Changes in window and step size increase the complexity of density checking.
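A similar sketch makes the step-size effect visible. The numbers below are invented for illustration: a column of standard-cell rows at roughly 45% density, a hypothetical three-row block of dense capacitor cells, a window five rows tall, and an assumed 70% density limit. Stepping the window by its full height averages the dense block away, while stepping one row at a time finds the window alignment where it violates:

# Per-row average densities for a toy column of 20 standard-cell rows.
row_density = [0.45] * 20
for r in (9, 10, 11):          # hypothetical block of dense capacitor/decap rows
    row_density[r] = 0.95

LIMIT = 0.70                   # assumed maximum allowed window density
WINDOW = 5                     # window height, in cell rows

def window_averages(rows, window, step):
    """Density seen by a window of `window` rows, advanced `step` rows at a time."""
    return [sum(rows[i:i + window]) / window
            for i in range(0, len(rows) - window + 1, step)]

for step in (WINDOW, 1):       # coarse stepping vs. one-cell-row stepping
    worst = max(window_averages(row_density, WINDOW, step))
    print(f"step = {step} row(s): worst window density = {worst:.2f}, "
          f"violation = {worst > LIMIT}")

With these made-up numbers, the coarse step reports a worst-case window of 65% and no violation, while the one-row step finds a 75% window and flags it.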
Okay, so the responsibility has shifted to the cell designers, but why is this a problem? First, there are few automated options for density management at the cell level. Second, place and route tools are not equipped to automatically evaluate and adjust densities during layout. What is needed is a way to help cell designers efficiently manage density at the cell level, while ensuring that those cell designs do not create significant density issues when placed in blocks or chips.
And yes, EDA vendors are working on solutions. “Smart” fill solutions are providing more sophisticated fill strategies, while density checking techniques are being enhanced to address the needs of the cell designers. Integrating verification engines with both custom design and place and route tools enables designers to check layouts earlier in the implementation process, eliminating costly redraws late in the flow. While 28 nm does indeed “raise the ante” for density analysis and debugging, the obstacles are not insurmountable.

Author biography
Joe Davis is currently the Product Manager for Calibre interactive and integration products at Mentor Graphics in Wilsonville, Oregon, USA. His career in the IC industry spans over 20 years at high-profile companies such as Analog Devices, Texas Instruments and PDF Solutions, and covers both sides of the EDA industry—designing ICs and developing tools for IC designers and manufacturers. Prior to joining Mentor, he was the senior product manager for yield simulation products at PDF Solutions, where he managed semiconductor process-design technologies and services, including yield simulation and analysis tools. Joe earned his BSEE, MSEE, and Ph.D. in Electrical and Computer Engineering from North Carolina State University. Contact him at email@example.com.