Analysis of 45-nm design and manufacturing processes has shown that effects seen at the 90-nm and 65-nm nodes are amplified, and that the design cycle must now account for many more factors. Considerations that were second- and third-order effects at earlier nodes are becoming the determining factors between silicon failure and silicon success.
One of the more obvious challenges is the size of system-on-chip (SoC) designs and the corresponding capacity and performance requirements demanded from the design tools. Due to a combination of factors, designs at the 45-nm node typically have many more gates than their predecessors. In the consumer markets for mobile communications (including cell phones and wireless networks), personal digital assistants (PDAs), personal audio and visual entertainment systems, digital cameras, and so forth, end users increasingly want more features in their products. Furthermore, bandwidth requirements are increasing and communications protocols are becoming ever more complex, thereby driving the need for massive computational resources. All of this is coupled with requirements for lower cost, smaller size, lighter weight and minimal power consumption, which typically are addressed by consolidating multiple functions (previously provided on separate devices) into a single chip.
A somewhat related challenge is that of design complexity. It is now the norm for designs to feature multiple general-purpose central processing unit (CPU) cores coupled with special-purpose digital signal processor (DSP) cores, hardware accelerators and peripherals. These designs employ advanced system architectures with tiered memory structures, multilevel memory caching and multilayered bus structures. In fact, some devices require a full network-on-chip (NoC) simply to allow the various functions to "talk" with each other.
Yet another major challenge is that designs at the 45-nm node exhibit extreme sensitivity to variation; small variations in the widths and thicknesses of wires, or in the sizes of the structures forming transistors, can make a huge difference in the performance and reliability of the device.
Process variation--whose impact was manageable at 90 nm and above--has a much more dramatic effect as process geometries shrink. Even if the absolute amount of variation remains the same as in previous generations, it accounts for a greater percentage of the feature size as geometries get smaller.
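A simple back-of-the-envelope calculation illustrates the point; the variation figure below is an assumed value for illustration, not a foundry-published specification:

```python
# Minimal sketch: the same absolute variation is a larger fraction of the
# feature size at each new node. The +/-4 nm figure is an assumed,
# illustrative value, not a foundry-published specification.
VARIATION_NM = 4.0

for node_nm in (90, 65, 45):
    relative = VARIATION_NM / node_nm * 100
    print(f"{node_nm}-nm node: +/-{VARIATION_NM:.0f} nm is {relative:.1f}% of the drawn feature")
```

The same hypothetical ±4 nm that is under 5% of a 90-nm feature approaches 9% at 45 nm, even though the manufacturing process has not become any less precise in absolute terms.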
In traditional design environments, variability is accounted for by introducing more corners to model different combinations of process and environmental variation over multiple analysis runs, coupled with gross guard bands. However, as the number of scenarios and corners increases, achieving design convergence strains resources, inflates costs and negatively impacts project schedules. Furthermore, guard-banding the design can result in excessive power consumption and performance being left "on the table," which is unacceptable in today's highly competitive market.
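The multiplicative growth in analysis runs is easy to see; the corner and mode lists below are hypothetical, but the arithmetic is representative:

```python
# Minimal sketch: corner-based signoff runs grow multiplicatively with each
# new source of variation. These corner/mode lists are hypothetical, not a
# specific foundry's signoff set.
from itertools import product

process = ["slow", "typical", "fast", "slow-nmos/fast-pmos", "fast-nmos/slow-pmos"]
voltage = ["0.9 V", "1.0 V", "1.1 V"]
temperature = ["-40 C", "25 C", "125 C"]
modes = ["functional", "test", "sleep"]

corners = list(product(process, voltage, temperature, modes))
print(f"{len(corners)} separate analysis runs required")  # 5 * 3 * 3 * 3 = 135
```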
The bottom line is that overly pessimistic analysis negates many of the benefits offered by smaller process geometries.
In order to take full advantage of the performance gains offered by smaller geometries, it is important to understand and quantify process variation to improve accuracy, reduce pessimism and allow informed decisions about yield and performance trade-offs. Statistical analysis is emerging as the most likely vehicle to carry the industry into the future. A statistical approach makes it possible to break beyond the barriers of corner-case analysis and holistically model the factors affecting process variation in a single analysis run. This enables designers to effectively model process and environmental variation, mitigate the effects of process variation, and meet the demands of cutting-edge electronic design for the foreseeable future.
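A Monte Carlo experiment captures the intuition behind the statistical approach. The Gaussian stage delays and the 3-sigma corner below are assumptions for illustration; a production statistical timing engine propagates distributions analytically rather than by sampling:

```python
# Minimal sketch of the statistical idea behind statistical timing analysis,
# via Monte Carlo: model each stage delay as a distribution instead of a
# single worst-case number. Gaussian, independent stage delays and the
# 3-sigma corner are illustrative assumptions.
import random

N_STAGES = 20
MEAN_PS, SIGMA_PS = 50.0, 5.0   # per-stage delay: mean and std dev (ps)
TRIALS = 100_000

# Corner-based view: every stage simultaneously at its 3-sigma worst case.
corner_delay = N_STAGES * (MEAN_PS + 3 * SIGMA_PS)

# Statistical view: independent stage variations partially cancel.
samples = sorted(
    sum(random.gauss(MEAN_PS, SIGMA_PS) for _ in range(N_STAGES))
    for _ in range(TRIALS)
)
stat_3sigma = samples[int(0.99865 * TRIALS)]  # ~3-sigma quantile

print(f"corner-based worst case  : {corner_delay:.0f} ps")
print(f"statistical 99.865% bound: {stat_3sigma:.0f} ps")
```

Because independent stage variations partially cancel along a path, the statistical bound comes in well below the corner-based number--precisely the pessimism that gross guard-banding leaves on the table.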
Figure: Approximately 20% of 2007 design starts are expected to be at the 65-nm and 45-nm nodes.
Another challenge is quantifying and managing thermal effects with regard to timing, signal integrity and power consumption. Many of today's analysis tools assume a constant temperature across the surface of a silicon chip. In reality, however, today's large, dense devices can exhibit significant thermal gradients across the surface of the die and the layers forming the die. Depending on the amount of switching activity at any particular time, different areas of a chip may vary by 40°C or more; similarly, there can be a thermal gradient between the device layers and the upper-layer metallization. These combined thermal effects can negatively impact all aspects of the design, including power consumption, power integrity, signal integrity and timing.
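Even a crude first-order model shows why a single chip-wide temperature is inadequate for timing. The linear derating coefficient and the temperatures below are assumed, illustrative values; real cell libraries characterize delay at multiple temperatures, and some low-voltage corners even exhibit the opposite trend (temperature inversion):

```python
# Minimal sketch: identical cells at different points on the same die see
# different delays when a thermal gradient is present. The 0.1%-per-degree-C
# derating coefficient and the temperatures are assumed illustrative values.
NOMINAL_DELAY_PS = 100.0
DERATE_PER_C = 0.001      # +0.1% delay per degree C above the reference
T_REFERENCE_C = 25.0

for t_local_c in (60.0, 100.0):   # a cool region vs. a hot spot, 40 C apart
    delay_ps = NOMINAL_DELAY_PS * (1 + DERATE_PER_C * (t_local_c - T_REFERENCE_C))
    print(f"cell at {t_local_c:.0f} C: {delay_ps:.1f} ps")
```

Two copies of the same cell end up roughly 4% apart in delay under these assumptions, enough to make or break a tight timing path.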
Attempting to "guard-band" against these types of effects means you end up giving a lot away in terms of performance and power. In order to address this, a thermal analysis engine that can properly model temperature gradients in the context of silicon, package and board effects needs to be part of the equation for prevention, analysis and correction during design implementation and optimization. Doing so will allow designers to dramatically improve the accuracy of existing analysis and optimizations for such things as power, voltage (IR) drop, electromigration and timing.
A silicon chip created at the 45-nm node may require 35 or more layers, each of which can involve the specification and creation of hundreds of millions of geometric shapes. The incredibly small feature sizes of the structures forming silicon chips at the 45-nm technology node--and the incredibly thin layers used to construct them--mean these designs are increasingly subject to variability in the manufacturing process. Some well-known areas of concern involve chemical-mechanical polishing (CMP), mechanical stress, etching and lithographic effects. Failing to address these issues can result in exposure to possible silicon failure or, at the very least, wide variations in timing, leakage power and signal integrity both across a wafer and across the surface of a single chip.
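CMP effects, for example, are typically screened by checking metal density in windows across the die, since dishing and erosion track local density. A minimal sketch of such a windowed density check follows; the window granularity and the 20-80% limits are assumed, illustrative values:

```python
# Minimal sketch of a windowed metal-density check, used to flag likely
# CMP dishing/erosion hot spots. The per-window densities and the 20-80%
# limits are assumed illustrative values.
def density_violations(density_map, min_d=0.20, max_d=0.80):
    """density_map: 2D list of per-window metal densities (0.0 to 1.0)."""
    return [
        (row, col, d)
        for row, cells in enumerate(density_map)
        for col, d in enumerate(cells)
        if not min_d <= d <= max_d
    ]

# Densities extracted from a hypothetical routed layout, one value per window.
layout = [
    [0.45, 0.52, 0.88],   # 0.88: over-dense window, erosion risk
    [0.41, 0.12, 0.55],   # 0.12: under-dense window, needs dummy metal fill
]
for row, col, d in density_violations(layout):
    print(f"window ({row},{col}): density {d:.2f} outside [0.20, 0.80]")
```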
Figure: Poly gate variation due to lithography.
At the 45-nm technology node and below, managing physical, manufacturing and electrical variability is paramount. Until now, traditional design tools have largely been of a class known as "rule-based": they were provided with a set of rules, and they analyzed the physical design coming out of the place-and-route engines to check that none of these rules were violated. The problem is that not only are these rules inherently pessimistic by definition, but their number is increasing exponentially, thereby straining the capacity and performance of the implementation engines. Rules that were merely "recommended" at the 65-nm node are mandatory at 45 nm; and, of particular concern, it simply isn't practical to specify rules that cover every possible situation and potential problem.
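In practice, a rule-based check reduces every geometric situation to a fixed pass/fail limit, with no knowledge of how a marginal shape will actually print. The spacing value below is an assumed, illustrative rule rather than an entry from a real design-rule deck:

```python
# Minimal sketch of a rule-based check: each geometry either passes or
# fails a fixed limit, with no model of how a marginal shape will print.
# The 65-nm minimum spacing is an assumed rule, not a real deck entry.
MIN_SPACING_NM = 65.0

def spacing_violations(edge_pairs):
    """edge_pairs: list of (label, measured spacing in nm) tuples."""
    return [(label, s) for label, s in edge_pairs if s < MIN_SPACING_NM]

measured = [("M1 netA/netB", 70.0), ("M1 netC/netD", 62.0)]
for label, s in spacing_violations(measured):
    print(f"VIOLATION: {label} spaced {s:.0f} nm < {MIN_SPACING_NM:.0f} nm minimum")
```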
One option is to move to a "model-based" approach, in which the physical implementation (place-and-route) and electrical analysis engines accurately model or simulate the way in which light will pass through the photomasks and lenses, how it will react with the chemicals on the surface of the silicon chip, what the resulting structures will look like and how they will perform in silicon. Using such a model-based approach during the design process means manufacturing optimizations can be made in the context of other, more traditional optimizations for timing, signal integrity, power, area, and so forth.
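To give a feel for what modeling "how light prints" means, the sketch below approximates the optical system as a Gaussian blur of the drawn mask and applies an assumed resist threshold to predict the printed linewidth. Production lithography engines use calibrated optical and resist models; every constant here is an illustrative assumption:

```python
# Minimal sketch of the model-based idea: treat the optics as a Gaussian
# blur of the drawn mask and apply an assumed resist threshold to predict
# the printed linewidth. All constants are illustrative assumptions.
import math

SIGMA_NM = 30.0      # assumed optical blur (point-spread width)
THRESHOLD = 0.5      # assumed resist exposure threshold

def intensity(x_nm: float, drawn_nm: float) -> float:
    """Aerial-image intensity at x for an isolated line of width drawn_nm."""
    s = math.sqrt(2) * SIGMA_NM
    half = drawn_nm / 2
    return 0.5 * (math.erf((x_nm + half) / s) - math.erf((x_nm - half) / s))

def printed_width(drawn_nm: float) -> float:
    """Bisect for the position where intensity crosses the resist threshold."""
    if intensity(0.0, drawn_nm) < THRESHOLD:
        return 0.0  # too narrow: the feature fails to print at this threshold
    lo, hi = 0.0, drawn_nm / 2  # the crossing lies inside the half-width
    for _ in range(50):
        mid = (lo + hi) / 2
        if intensity(mid, drawn_nm) >= THRESHOLD:
            lo = mid
        else:
            hi = mid
    return 2 * lo

for w in (45.0, 65.0, 90.0):
    print(f"drawn {w:.0f} nm -> prints ~{printed_width(w):.0f} nm")
```

Under these assumptions, narrow drawn features print substantially smaller than drawn (or fail to print at all), while wider ones print nearly true--exactly the kind of context-dependent behavior a rule deck cannot fully capture but a model can.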