Since the move to sub-65-nanometer technologies, Pre-Defined Corners (PDC) verification has reached its limits. The number of corners to verify has become enormous, always carrying the risk of over-design. Worse, these corners cannot guarantee the design: the true worst case may fall inside the process parameter space, away from any corner, while some corners do not really need to be tested at all.
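To illustrate the explosion in corner count, consider full-factorial corners over a handful of independent parameters at three levels each: the count grows as 3^k, so just five parameters already give 3^5 = 243 corners. The snippet below is a minimal sketch with purely hypothetical parameter names:

```python
from itertools import product

# Hypothetical process/voltage/temperature parameters, three levels each.
levels = {
    "vth_n": ["min", "typ", "max"],
    "vth_p": ["min", "typ", "max"],
    "tox":   ["min", "typ", "max"],
    "vdd":   ["min", "typ", "max"],
    "temp":  ["min", "typ", "max"],
}

# Full-factorial corner enumeration: 3**5 = 243 corners for only 5 parameters.
corners = list(product(*levels.values()))
print(len(corners))  # 243
```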
In this context, the EDA industry has started looking for solutions to this new and key concern. Many solutions have been developed on the back-end side to reduce variability and to analyze or enhance yield. These solutions, although very useful, were not sufficient. Another effort has been made at the design level concerning parametric yield; this is thoroughly developed in this paper.
Today, the EDA market offers tools based on the following approaches:
- Manual sizing and PDC
- Manual sizing and simulator-based analysis
- Simulator-based sizing and yield optimization
- Model-based sizing and yield optimization
From these approaches, two main workflows emerge:
1. Analysis flow: the designer does the sizing manually and then verifies it by PDC or by extensive Monte Carlo analysis based on a simulator or a model (see the sketch after this list).
2. Optimization flow: the designer trusts the tool to do the yield estimation and optimization. In this case, simulator-based tools generally need a good starting design point in order to enhance the yield. By contrast, model-based tools can provide a global exploration of the design space and thus find the design point with the best yield. Nevertheless, modeling and the model-based approach form a new methodology that designers still have to be trained in.
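As an illustration of the simulator-based analysis flow, the sketch below estimates parametric yield by brute-force Monte Carlo. The Gaussian process parameters, the toy performance model standing in for a real circuit simulation, and the specification threshold are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # Monte Carlo sample count (the simulation budget)

# Hypothetical process variations: threshold-voltage and oxide-thickness
# deviations drawn from normal distributions (stand-ins for PDK statistics).
dvth = rng.normal(0.0, 0.015, N)  # V
dtox = rng.normal(0.0, 0.02, N)   # relative

# Toy performance model standing in for a full circuit simulation:
# amplifier gain degrades with both deviations.
gain_db = 40.0 - 80.0 * np.abs(dvth) - 15.0 * np.abs(dtox)

# Parametric yield = fraction of samples meeting the specification.
spec_ok = gain_db >= 38.0
print(f"estimated parametric yield: {spec_ok.mean():.3f}")
```

In a real flow the toy model would be replaced by a SPICE-level simulation per sample, which is exactly why the cost of this approach grows so quickly.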
In fact, optimization tools are a very old topic. However, analog designers have never seemed to have a real need for them, as they are used to sizing their designs manually based on their own knowledge of parameter influences and sensitivities.
All major EDA companies propose “simple” unconstrained optimizers linked to their simulators. Generally, these optimizers see little use.
Nevertheless, parametric yield analysis and optimization is certainly very important. Even if there is no guarantee about the final yield, it should be clear that the more you increase your parametric yield and robustness, the more you increase the final yield.
Parametric yield can be verified through the predefined corners still used by a majority of designers, through various more or less enhanced Monte Carlo (MC) methods, and through response surface modeling (RSM) techniques.
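To make the contrast with simulator-based MC concrete, the sketch below fits a simple polynomial response surface to a small number of "simulations" and then runs the large Monte Carlo on the cheap model instead. The quadratic model form and the toy simulator are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(x):
    """Toy stand-in for an expensive circuit simulation:
    gain versus two normalized process parameters."""
    return 40.0 - 3.0 * x[:, 0]**2 - 1.5 * x[:, 1]**2 + 0.5 * x[:, 0] * x[:, 1]

def quad_features(x):
    """Quadratic basis for the response surface model."""
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0]**2, x[:, 1]**2, x[:, 0] * x[:, 1]])

# Step 1: a few expensive "simulations" on a small sample.
X = rng.normal(0.0, 1.0, size=(50, 2))
y = simulate(X)

# Step 2: fit the response surface by least squares.
coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Step 3: cheap Monte Carlo on the model instead of the simulator.
Xmc = rng.normal(0.0, 1.0, size=(100_000, 2))
yield_est = (quad_features(Xmc) @ coef >= 36.0).mean()
print(f"model-based yield estimate: {yield_est:.3f}")
```

The design choice is clear: 50 expensive evaluations buy a model on which 100,000 yield samples cost essentially nothing, at the price of trusting the model's accuracy.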
Nevertheless, with the aggressive process spread of advanced technologies, the designer has to face a risk/cost dilemma: what risk on the yield estimate is acceptable, and what associated cost is the designer ready to pay?
The risk can be measured as the confidence in the yield estimate, whereas the cost can be measured as the waiting time before the simulations end or, for parallelized algorithms, as the cluster load or the number of available simulator licenses.
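One way to quantify this trade-off is the standard binomial confidence interval on an MC yield estimate: with N samples and estimated yield y, the standard error is sqrt(y(1-y)/N), so halving the uncertainty costs four times the simulations. A minimal sketch, using the normal approximation and assumed pass counts:

```python
import math

def yield_confidence(passes: int, n: int, z: float = 1.96):
    """95% confidence interval on a Monte Carlo yield estimate
    (normal approximation to the binomial)."""
    y = passes / n
    half = z * math.sqrt(y * (1.0 - y) / n)
    return y - half, y + half

# Assumed example: 950 of 1000 samples pass the specifications.
lo, hi = yield_confidence(950, 1000)
print(f"yield in [{lo:.3f}, {hi:.3f}] with 95% confidence")

# Quadrupling the simulation budget roughly halves the interval width.
lo4, hi4 = yield_confidence(3800, 4000)
print(f"yield in [{lo4:.3f}, {hi4:.3f}] with 95% confidence")
```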
However, one of the most important questions remains how to accurately estimate parametric yield and its robustness. Another is how to maximize the chance of reaching the design point with the best yield.
To try to answer these questions, we will go through the different techniques for verification and optimization.