Green requirements are no longer just a newspaper headline; they have become an important concern for logic designers. Ever-increasing power densities, combined with a global push to contain power consumption in electronic devices, have brought power conservation to the forefront of logic design. But design techniques to manage power must coexist with techniques to achieve speed and performance. To make it all work, designers must introduce more information about the electrical effects of the physical implementation into the early stages of the design flow.
Logic designers have long been aware of issues such as wireload model correlation and customization, shrinking geometries and timing closure iterations. In the past, designers dealt with these issues by overconstraining their designs by some margin based on previous experience or gross generalization. For example, early in the flow, a design team might globally overconstrain the clock cycle time by 30 percent in an attempt to ease physical timing closure. This results in a tremendous amount of excess power consumption, forcing design teams into painful trade-offs that could have been avoided or preempted.
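A hedged sketch of what such global overconstraint looks like in SDC (the clock name, port name and 10 ns period are illustrative, not taken from the article):

```tcl
# True target: 100 MHz, i.e. a 10 ns period.
# Globally overconstrained by 30 percent to "help" timing closure:
create_clock -name core_clk -period 7.0 [get_ports clk]

# Every path in the design now chases a 7 ns budget, so the optimizer
# upsizes gates and spends area and power on paths that would
# comfortably have met the real 10 ns target.
```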
Designers need guidelines for addressing design performance and schedule predictability through the use of physical information earlier in the design cycle.
One of the main motivations for physical predictability comes from data based on typical synthesis of a mainstream processor: as cost functions are added, the number of critical paths rises sharply, because power is often improved at the expense of reducing excessive positive slack. The problem worsens as additional cost functions are incorporated. And power is only the tip of the iceberg; yield and other issues loom on the horizon.
Accurate prediction of path delays becomes essential as more paths achieve near-zero slack. Physical information needs to be added to the synthesis process in order to achieve better predictability without sacrificing quality of silicon (QoS).
Do's and Don'ts in low-power design
The addition of physical data to the synthesis flow brings new challenges. The following guidelines should help users to make the most effective use of the technology.
* Validate your library collateral. The introduction of physical data means that additional collateral (LEF libraries, floor plan, etc.) is used. Making sure this collateral is up-to-date and correct is no less important than validating the original, nonphysical collateral.
* Use a high-level floor plan of the design when available. While logic cannot be placed before it is synthesized, a preliminary floor plan is often available that contains much of the information most useful during synthesis (macro placement, die area, aspect ratio, etc.).
* Use the same constraints for synthesis as for place and route. Constraints such as derating, DRCs and routing layer restrictions need to be reflected in synthesis the same way they will be reflected in place and route. If the constraints are missing or incorrect, correlation is futile.
* Use the correct libraries for power optimization. Traditional flows use the worst-case setup-time corner libraries for all optimization. These, in fact, are normally not the worst-case power libraries. The result is a design that is incorrectly optimized for power, resulting in protracted design closure. Different corners should be used for power optimization than for timing optimization. Ideally, this should be performed concurrently so that optimal trade-off analysis is performed.
* Match the capacity of your place-and-route flow. A synthesis tool limited to smaller block sizes will fail to recognize issues caused by wire modeling at higher levels of the hierarchy; conversely, synthesizing larger blocks is unlikely to help if place and route will be performed at lower levels anyway.
* Use the GUI to analyze the results. A logic designer with data flow knowledge can easily identify basic floor-planning issues.
* Correlate synthesis results with place and route results. Don't just look at the metrics; also examine the critical paths and make sure that the same path has similar metrics before and after place and route.
* Determine your cost function's upper and lower bounds. Analysis runs can be very valuable for determining what the tools can accomplish in best-case scenarios. Knowing the best-case power, area and timing that can be achieved enables design teams to make more sound trade-off decisions.
* Use process information (physical libraries, capacitance tables, etc.) instead of relying on wireload models or ignoring interconnect. Ultimately, that data is what will be used for signoff; hence you want to make sure you start using the data as early as possible. Physical layout estimation (PLE) is a good example of technology well suited for this purpose.
* Use multi-VT optimization. Access to all VT libraries enables additional implementation options and better QoS.
* Don't rush to gates. Just having physical information is no guarantee of QoS improvements. The first transformation from RTL to gates is where synthesis can make the biggest impact; any effort spent tuning this part of the flow is time well spent.
* Don't assume the provided or autogenerated floor plan is optimal. Launch the GUI and make sure the floor plan makes sense. While most logic designers are not floor-planning experts, a quick look at the floor plan in a graphical environment can save countless hours of less productive runs and their debug. Silicon virtual prototyping (SVP) is an ideal means to help provide a better starting floor plan with the appropriate granularity and minimal effort.
* Don't guardband excessively. Minor guardbanding (about 5 percent) is frequently needed to account for certain effects not accounted for in synthesis (PLL jitter, crosstalk, etc.), but excess guardbanding will result in longer timing closure and poor power performance.
* Don't constrain globally beyond guardbanding. If you identify paths that do not behave as expected in this flow, do not penalize the whole design. Use cost groups or adjust paths individually to address the paths in question.
* Don't forget shrink factors and scale factors, where applicable. Remember, you are dealing with additional constraints used by your physical collateral. Shrink factors account for optical shrinks of the process, whereas scale factors account for miscorrelation with final signoff R/C extraction.
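The guideline on keeping synthesis and place-and-route constraints identical can be sketched with a shared SDC fragment such as the one below. The derate and design-rule values are illustrative assumptions, not numbers from the article; routing-layer restrictions are set with tool-specific commands and are omitted here.

```tcl
# Shared constraints sourced by BOTH the synthesis and the
# place-and-route scripts, so neither flow sees a different
# picture of the design intent.

set_timing_derate -late 1.05    ;# pessimistic derating on late paths
set_timing_derate -early 0.95   ;# optimistic derating on early paths

# Design-rule (DRC) limits, applied design-wide:
set_max_transition  0.50 [current_design]
set_max_capacitance 0.10 [current_design]
```

If any of these values differ between the two flows, the critical paths synthesis optimizes will not be the paths place and route sees, and correlation is lost.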
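The two margin-related don'ts above can be contrasted in SDC: apply a small global guardband for effects synthesis cannot model, then handle misbehaving paths individually instead of tightening the whole design. The clock, pin names and values below are hypothetical examples.

```tcl
# Modest global guardband (~5 percent of a 10 ns clock) for effects
# such as PLL jitter and crosstalk:
create_clock -name core_clk -period 10.0 [get_ports clk]
set_clock_uncertainty 0.5 [get_clocks core_clk]

# A misbehaving region gets its own cost group so the optimizer
# focuses effort there without penalizing everything else...
group_path -name mem_paths -to [get_pins mem_ctrl/*]

# ...and a path known to have relaxed timing is adjusted individually:
set_multicycle_path 2 -from [get_pins cfg_reg*/CP] \
                      -to   [get_pins status_reg*/D]
```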
About the author
Diego Hammerschlag (firstname.lastname@example.org) is a senior technical leader at Cadence Design Systems (Chelmsford, Mass.).