So, what is driving this dynamically adjustable power supply requirement? From our experience, three factors dominate: improved performance, power savings, and increased wafer yield. In reality, the first two are still considered optional, though highly recommended; the last is the driving force behind making it a requirement.
One of the most critical factors driving the cost of semiconductors is the billions of dollars needed to build a state-of-the-art fab, along with the operational costs of keeping it running. Couple that with the sensitivity and complexity of 32 nm (and smaller) circuits, and you end up with the need for a finely tuned and tightly restricted core voltage.
These tight restrictions significantly impact the yield that manufacturers can achieve from each wafer. One way they have found to increase yields is to require a dynamically adjustable core voltage. Here is an example:
A DSP has a core voltage requirement of 1.0V and a limit of ±2% accuracy on the core voltage supply over all conditions. At the end of the manufacturing process, the manufacturer tests each chip at 1.0V, plus or minus a very small margin, to verify performance to specification. Any chip that operates outside that window is discarded as out of spec.
Discarding chips is expensive. However, just because a chip cannot meet the performance specification at 1.0V does not mean it cannot meet it at all. If you were to allow a core voltage of, say, 0.97V or perhaps 1.02V, that once-discarded chip may now fall within the performance specification and become usable.
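The idea can be sketched in a few lines of code. This is only an illustration, not any vendor's actual test flow: the pass/fail predicate standing in for the production performance test and the candidate voltage range are both hypothetical.

```python
def find_optimal_vcore(passes_test, candidates):
    """Return the lowest candidate core voltage (in volts) at which the
    chip passes its performance test, or None if no candidate passes.

    Searching from the lowest voltage up also minimizes power, since
    dynamic power scales with the square of the supply voltage."""
    for v in sorted(candidates):
        if passes_test(v):
            return v
    return None

# Hypothetical marginal chip that misses timing below 1.02 V: it would be
# discarded under a fixed 1.0 V test, but is usable with an adjustable supply.
def marginal_chip(v):
    return v >= 1.02

candidates = [round(0.95 + 0.01 * i, 2) for i in range(11)]  # 0.95 .. 1.05 V
print(find_optimal_vcore(marginal_chip, candidates))  # → 1.02
```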
Chip companies are therefore running an "integration algorithm" to determine the optimal core voltage, either at chip test or at board power-up. Once that is established, the optimal core voltage requirement is sent to the point-of-load controller over a digital communication bus, sometimes using proprietary commands.
In some instances, an additional MCU is used to translate the proprietary command into a standard PMBus command. After the controller receives the command to set the output voltage to "X", it runs at the new optimized voltage for the remainder of its service life. By doing this, vendors increase chip yields and drive costs down.
The effect of the PMBus+ specification
The AVSBus and adaptive voltage scaling have multiple applications for power savings, but the usage that seems most relevant and immediate is the yield scenario described above; it is this scenario that creates a “required” environment. The new PMBus+ specification with the AVSBus is expected to go into effect in March 2014. Once this occurs, we can expect the next wave of chips to require digital power not only to improve yields, but also to maximize performance and reduce power consumption.
Mark Adams is VP of Advanced Power Marketing at CUI.