If there was a message for chip designers from two recent industry conferences, it was this: Get used to uncertainty. The era of "design for variability" is here.
At both the International Symposium on Physical Design (ISPD) and the Electronic Design Processes (EDP) conferences, speakers noted the growing impact of process, temperature and voltage variations on IC designs. And they weren't just talking about design-for-manufacturability (DFM), which is the label that usually gets stuck on things when we start talking about variability.
The problem with the DFM moniker is that it implies designers are supposed to do something for the manufacturing people, such as put optical proximity correction (OPC) into chip layouts. Design-for-variability is certainly aimed at producing manufacturable designs, but it could also apply to the chip designer who simply wants to find out how gate width variations might affect leakage current (which, as it turns out, is a problem right now at 130 and 90 nanometers).
The other topic that usually comes up in connection with variability is statistical timing analysis, but that's just one of a number of approaches for dealing with variability. And it's not yet clear when the necessary statistical models will be available, who will need statistical analysis, and at what process nodes.
Sources of uncertainty, or variability, in chip designs are many. As noted at EDP, process variations that affect design include critical dimensions, channel width, interconnect and threshold voltages. Supply voltage and clock skew variations are also growing more significant. And what about temperature? Not many people think about that, but one paper at EDP showed how a thermal gradient of 10°C can change timing delays by 30 percent.
One way to think about design and process variability was outlined in an ISPD presentation. That talk distinguished between "operational" or global variability, which stems from the different operating modes in which a product might run, and "local" variability. The latter category includes interconnect variations in thickness and width, as well as cell variations according to process, temperature and voltage fluctuations. On-chip variation further compounds these cell-level effects.
On the manufacturing side, speakers distinguished between random variations, such as those caused by particle defects, and systematic variations, such as shorts or opens caused by printing misalignment. Although often lumped together, they require different approaches.
Chip designers can manage some of the variability with min/max corner analysis, while other concerns may require a statistical approach. Because their fluctuations are large, temperature and voltage will probably require corner-case analysis, one speaker said.
But the argument for looking at timing, and ultimately power, in terms of statistical probability distributions is strong. There are only so many corners one can model.
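The contrast between the two approaches can be sketched with a toy delay model. This is purely illustrative, not anything presented at the conferences: the nominal delay, the three variation sources and their sensitivities are invented numbers, and real statistical timing uses far richer models. It does show why corners explode (2^n combinations for n independent sources) while a statistical view yields a distribution instead of a single pessimistic worst case.

```python
import itertools
import random

# Toy linear delay model (all values are illustrative assumptions):
# nominal gate delay plus a sensitivity to each variation source.
NOMINAL_DELAY_PS = 100.0
SENSITIVITY_PS = {"process": 8.0, "voltage": 5.0, "temperature": 3.0}

def delay(deviations):
    """Delay for normalized deviations in the range -1..+1 per source."""
    return NOMINAL_DELAY_PS + sum(
        SENSITIVITY_PS[src] * dev for src, dev in deviations.items())

# Corner analysis: evaluate every min/max combination -- 2^n corners.
corners = [dict(zip(SENSITIVITY_PS, combo))
           for combo in itertools.product([-1.0, 1.0],
                                          repeat=len(SENSITIVITY_PS))]
worst_case = max(delay(c) for c in corners)  # all sources at +1 sigma

# Statistical view: sample each source from a distribution and examine
# the spread of delays rather than one all-worst-case corner.
random.seed(0)
samples = [delay({src: random.gauss(0.0, 1.0 / 3.0)
                  for src in SENSITIVITY_PS})
           for _ in range(10_000)]
mean_delay = sum(samples) / len(samples)

print(len(corners), worst_case, round(mean_delay, 1))
```

With three sources there are eight corners; with a dozen independent sources there would be 4,096, which is the practical argument for sampling a distribution instead of enumerating extremes.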
When chip design gets into the realm of the very small, we must accept uncertainty and start thinking in terms of probabilities. It's a new way of thinking that will reshape the nanometer IC design and manufacturing flow.
The Art of Design column offers opinions and perspectives on the technology, techniques and business of electronics design. Its subject matter covers design automation, test, packaging, boards and silicon. Suggestions or comments may be sent to email@example.com.