The imperative for reducing power consumption now pervades application spaces ranging from mobile appliances with limited battery life to big-box electronics that consume large amounts of increasingly expensive power. Consequently, power reduction and management methods are now used extensively throughout the chip design flow from architectural design, through RTL implementation to physical design.
The design team must verify not only that the inserted power management circuitry functions correctly, but also that it does not corrupt the chip's functionality. Ideally, the team would have power estimates early enough in the design flow to deploy and verify the appropriate reduction techniques, minimizing or even completely avoiding late-stage redesign. In practice, however, accurate power estimates become available only at physical layout, where design changes, even small ones, ripple back through the design flow. Power-aware design therefore often requires iterative optimization up and down the flow. Moreover, because many optimizations are performed late in the flow, verification effort and risk increase, and debug becomes even more tedious and time-consuming. The net effect is that power-aware design can appreciably increase design and verification time and cost. The challenge is to achieve the target power consumption while limiting the cost of doing so.
Our ultimate objective is to verify not only the chip's functionality, but also that the power intent captured in Unified Power Format (UPF) or Common Power Format (CPF) descriptions has been completely and correctly implemented. This article concludes that an "apps" approach is the best way to apply formal methods to power-aware verification.
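To make the notion of power intent concrete, the sketch below shows what a minimal UPF description might look like for a single switchable power domain with output isolation. It is illustrative only: the domain, supply, and control-signal names (PD_VIDEO, VDD_VIDEO, video_pwr_en, video_iso_en) are hypothetical, and a production UPF file would also specify retention strategies, supply sets, and power states.

```tcl
# Hypothetical switchable power domain covering instance u_video
create_power_domain PD_VIDEO -elements {u_video}

# Supply network for the domain (names are illustrative)
create_supply_port VDD_VIDEO -domain PD_VIDEO
create_supply_net  VDD_VIDEO -domain PD_VIDEO
connect_supply_net VDD_VIDEO -ports VDD_VIDEO

# Power switch gating the domain supply from the always-on VDD,
# controlled by a (hypothetical) PMU signal video_pwr_en
create_power_switch sw_video -domain PD_VIDEO \
    -input_supply_port  {in VDD} \
    -output_supply_port {out VDD_VIDEO} \
    -control_port       {ctrl video_pwr_en} \
    -on_state           {on_state in {ctrl}}

# Clamp the domain's outputs to 0 while it is powered down,
# under control of an isolation-enable signal
set_isolation iso_video -domain PD_VIDEO \
    -applies_to outputs -clamp_value 0
set_isolation_control iso_video -domain PD_VIDEO \
    -isolation_signal video_iso_en -isolation_sense high
```

Verifying such a description means checking both that the intent is internally consistent (for example, every switchable domain's outputs are isolated) and that the RTL, once the intent is applied, still behaves correctly through power-down and power-up sequences.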
We first address the challenges of devising and verifying the power management scheme and the power optimizations needed to achieve this objective. How, then, is the power management scheme devised?
Power management scheme
Initially, the system architects devise an implementation-independent functional specification to meet the product requirements, and then partition the functionality across hardware and software. They may also devise an initial power management scheme at this architectural level. However, the architectural scheme defines only which functionality may or should be deactivated for any given use case, not how it should be deactivated.
How functionality should be deactivated is therefore decided at the RTL implementation stage by the RTL team, that is, by a different group of engineers. This team decides whether and how to use power reduction methods such as power domains and clock gating to achieve the requisite low-power characteristics, and these decisions must account for both hardware- and software-controlled circuit activity. The team then implements the functional hardware, making extensive use and reuse of pre-designed silicon intellectual property (IP) blocks alongside new RTL blocks, some of which are controlled by software stacks.
However, although the RTL team's decisions may be informed by a physical floorplan, they cannot anticipate the final partitioning at place and route (P&R), where accurate power estimates are first made. A common outcome is that the power management scheme must be modified and re-verified after P&R.
Clearly, the power management scheme is a moving target, and requires design optimization, verification and re-verification at every stage of the design flow — architecture, RTL implementation and physical design. However, implementing any scheme is often subject to significant constraints, such as the impact on IP use and re-use and on the use of design-for-test (DFT) circuitry.