The push toward ever-higher-performance computing has a direct impact on the power-management portion of the system, especially in portable PCs. To achieve higher operating frequencies and finer geometry shrinks in both microprocessor and memory circuits, voltage levels must decrease and current levels must increase. In most cases, greater functionality will require more power consumption, even with the most sophisticated power-management strategies.
The Semiconductor Industry Association Roadmap, revised in 1999, sets out the advances that will be required to keep up with Moore's Law and forecasts computing and memory trends for the next 15 years. In power management, several changes will be required in both portable and desktop applications. The operating voltage of portable computers is projected to decrease from 1.5 V in 1999 to 0.3 V in 2014. Furthermore, combinations of voltages will be required to maintain 1.4 W to 2.4 W of power consumption and to power the computer's various subsystems. All this will add design complexity and create opportunities for different power-management design strategies. For comparison, the decrease in desktop voltages from 1.8 V to 0.6 V is not as drastic as for portable PCs, but current levels will increase sixfold, from 50 A to 305 A.
One of the ongoing debates in power management is whether a distributed or an integrated solution is better, but there is no hard-and-fast rule for deciding. When a critical factor in the new product development plan is time-to-market or time-to-revenue, the most common approach is to use off-the-shelf parts to build a power-management solution, evaluate its performance vs. cost and migrate to an integrated solution later.
With the number of different solutions being developed for power management, an application-specific standard product (ASSP) can provide a reasonably good short- and long-term design solution. An example of the power-management strategy for extending battery life is the SpeedStep technology for Intel's newest Pentium III microprocessor. Intel's 0.18-micron dual-mode mobile Pentium III specification allows processors to switch between two performance modes to conserve power and maximize battery life. When users disconnect from ac wall outlets and battery operation is enabled, core CPU operating voltages and clock frequencies automatically drop from 1.6 V (600 MHz) to 1.35 V (500 MHz) in less than 100 ms.
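The battery-life benefit of a SpeedStep-style voltage and frequency shift follows directly from the standard CMOS dynamic-power relation, P ∝ C·V²·f. A minimal sketch, assuming switched capacitance C stays constant across the two modes (the voltage and frequency figures are the ones quoted above):

```python
# Rough CMOS dynamic-power scaling for the mode switch described above,
# using P ~ C * V^2 * f with capacitance C assumed constant.
def dynamic_power_ratio(v1, f1, v2, f2):
    """Ratio of dynamic power at (v2, f2) to dynamic power at (v1, f1)."""
    return (v2 / v1) ** 2 * (f2 / f1)

# 1.6 V / 600 MHz (ac mode) down to 1.35 V / 500 MHz (battery mode)
ratio = dynamic_power_ratio(1.6, 600e6, 1.35, 500e6)
print(f"Battery mode draws about {ratio:.0%} of ac-mode dynamic power")
```

Under these assumptions the shift cuts core dynamic power to roughly 59 percent of its ac-mode value, which is the payoff that justifies the extra regulator complexity.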
Maintaining the system's power efficiency and transient response becomes ever more challenging as operating currents increase and output voltages approach 1 V. Intel's efficiency target is 88 percent under ac operation and 90 percent under battery operation, with a three-cell lithium-ion battery pack providing between 7.5 V and 24 V. Until recently, the typical design methodology used a nonsynchronous buck converter to deliver step-down voltages. Here the typical 0.4-V drop across the freewheeling diode creates unwanted conduction losses. But technology advances have made it possible for synchronous buck converters to achieve higher efficiency in applications demanding low voltage and high current. Synchronous buck converters use active switching devices such as MOSFETs in place of diodes. With today's advances in MOSFET technology, the conducting resistance, RDS(on), can be as low as a few milliohms.
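The efficiency case for the synchronous rectifier is simple arithmetic: the diode's loss scales with its roughly fixed 0.4-V forward drop, while the MOSFET's loss scales with I²·RDS(on). A quick comparison, using illustrative component values that are assumptions rather than figures from the article:

```python
# Conduction loss in the rectifying element of a buck converter. The
# freewheeling element conducts during the (1 - duty) portion of each
# switching cycle. Load current, duty cycle and RDS(on) are assumed values.
def diode_loss(i_load, v_f, duty):
    """Loss in a freewheeling diode with forward drop v_f."""
    return v_f * i_load * (1 - duty)

def mosfet_loss(i_load, rds_on, duty):
    """Conduction loss in a synchronous MOSFET over the same interval."""
    return i_load ** 2 * rds_on * (1 - duty)

i, duty = 10.0, 0.15   # 10-A load at 15% duty (e.g. ~1.35 V from 9 V in)
print(diode_loss(i, 0.4, duty))      # ~3.4 W lost in a 0.4-V diode
print(mosfet_loss(i, 0.005, duty))   # ~0.43 W in a 5-milliohm MOSFET
```

At these example values the synchronous MOSFET dissipates roughly an eighth of the diode's loss, which is exactly the margin needed to meet efficiency targets near 90 percent.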
When using synchronous buck converters, current must be monitored to detect overload or short-circuit conditions. Upon detection of an overload, controller ICs typically disconnect the power source from the converter to protect the batteries, MOSFETs and inductors. Many buck converters use a current-sense resistor. This method provides an accurate current reading, but it introduces efficiency losses and additional cost. Also, the extra component and two dedicated pins on the control IC add board space.
An alternative approach is to use the RDS(on) of the MOSFET to sense the current. Although both the top and bottom MOSFETs can be used, sensing the voltage drop across the bottom MOSFET is often preferred, because the bottom MOSFET conducts for a greater portion of the switching cycle and thus gives the IC adequate time to sense the signal. Because it requires no extra components, this current-sensing scheme carries no efficiency penalty.
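In this scheme the controller simply samples the drain-source drop while the bottom switch conducts and applies Ohm's law. A minimal sketch of the idea; the RDS(on) value and overcurrent threshold below are illustrative assumptions, not values from any particular controller:

```python
# RDS(on)-based current sensing on the bottom MOSFET, as described above.
# The controller samples V_DS while the bottom switch is on and infers
# current as V_DS / RDS(on). Both constants are assumed example values.
RDS_ON = 0.007      # 7-milliohm bottom-MOSFET on-resistance (assumed)
I_LIMIT = 20.0      # overcurrent threshold in amps (assumed)

def sensed_current(v_ds):
    """Infer inductor current from the measured drain-source drop."""
    return v_ds / RDS_ON

def overcurrent(v_ds):
    """True if the inferred current exceeds the protection threshold."""
    return sensed_current(v_ds) > I_LIMIT

print(sensed_current(0.070))   # 70-mV drop -> about 10 A, within limits
print(overcurrent(0.180))      # 180-mV drop -> about 26 A -> True
```

The trade-off, worth noting, is that RDS(on) varies with temperature and from part to part, so this method exchanges the sense resistor's accuracy for zero added loss and component count.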
Power supply efficiency can also be improved by selecting the top and bottom MOSFETs based on their different requirements. A compromise between gate charge and RDS(on) has to be made to balance switching loss against conduction loss. Low gate charge is often the priority for the top MOSFET because it is prone to severe switching loss. For the bottom one, conduction loss will outweigh switching loss because of its long conduction time and near-ideal zero-voltage switching; therefore, a low-RDS(on) MOSFET is always preferred there. Yet another consideration is the dead time between top and bottom MOSFET activation: it is needed to prevent shoot-through, but it must be minimized to improve efficiency. In continuous conduction mode, the body diode of the bottom MOSFET conducts current during this nonoverlapping time. Paralleling a Schottky diode with the bottom MOSFET further reduces the power loss during the nonoverlapping time and improves efficiency by 1 to 2 percent.
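A first-order loss budget makes the asymmetry concrete: the hard-switched top MOSFET pays a switching loss roughly proportional to V·I·t_sw·f_sw plus a small conduction term, while the bottom MOSFET's loss is almost entirely conduction. A sketch under assumed component values (input voltage, currents, switching times and frequency are all illustrative):

```python
# First-order loss budget showing why the top MOSFET wants low gate
# charge (fast edges) and the bottom wants low RDS(on). All values
# below are assumed for illustration only.
def top_losses(v_in, i, rds_on, duty, t_sw, f_sw):
    """Return (conduction, switching) losses for the hard-switched top FET."""
    conduction = i ** 2 * rds_on * duty
    switching = 0.5 * v_in * i * t_sw * f_sw   # combined rise + fall time
    return conduction, switching

def bottom_losses(i, rds_on, duty):
    """Bottom FET switches near zero voltage, so conduction dominates."""
    return i ** 2 * rds_on * (1 - duty)

v_in, i, duty, f_sw = 9.0, 10.0, 0.15, 300e3
print(top_losses(v_in, i, 0.012, duty, 60e-9, f_sw))  # ~(0.18 W, 0.81 W)
print(bottom_losses(i, 0.005, duty))                  # ~0.43 W
```

At these numbers the top device's switching loss dwarfs its conduction loss, so shaving gate charge (shorter t_sw) buys more there than a lower RDS(on) would; the bottom device sees the opposite balance.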
A floating gate drive for the top MOSFET avoids the need for additional gate charge, which is otherwise inevitable during discontinuous conduction mode (DCM). In DCM, the body diode of the top MOSFET is forward-biased during the nonoverlapping time, bringing the source of the MOSFET to the input voltage. If its gate is connected to ground, a negative voltage (equal to the input voltage) is established between the gate and source. When the top MOSFET has to turn on at the end of the nonoverlapping time, the control IC must source extra gate charge to counteract the negative voltage. The floating gate drive turns off the top MOSFET by shorting the gate to the source instead of to ground. This implementation reduces the control IC's operating current in DCM and improves overall efficiency.
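The penalty the floating drive avoids can be estimated from the gate's input capacitance: starting from V_GS = -V_in rather than 0 V means the driver must deliver an extra charge of roughly C_iss·V_in every DCM cycle. A rough estimate, with all component values assumed for illustration:

```python
# Rough estimate of the extra gate-drive burden in DCM with a
# ground-referenced (non-floating) top-MOSFET driver. The gate starts
# each turn-on at about -V_in instead of 0 V, so the driver must supply
# extra charge C_iss * V_in per cycle. All values are assumed.
def extra_gate_charge(c_iss, v_in):
    """Additional charge needed when V_GS starts at -v_in instead of 0 V."""
    return c_iss * v_in

def extra_drive_power(c_iss, v_in, v_drive, f_sw):
    """Extra IC operating power: charge sourced at v_drive every cycle."""
    return extra_gate_charge(c_iss, v_in) * v_drive * f_sw

# 2-nF input capacitance, 9-V input, 5-V drive rail, 300-kHz switching
print(extra_gate_charge(2e-9, 9.0))            # ~18 nC per cycle
print(extra_drive_power(2e-9, 9.0, 5.0, 300e3))  # ~27 mW of drive power
```

The absolute power is small, but in DCM the converter is by definition lightly loaded, so tens of milliwatts of wasted IC operating current is a meaningful slice of the total.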
Due in part to the many complexities encountered when designing a CPU power supply, converter manufacturers have traditionally opted to deal independently with the CPU core voltage, I/O voltage and clock voltages. The objective of using distinct voltage input circuits was performance driven, but the solution required two or three separate ICs such as a combination of synchronous buck controllers and low dropout regulators. Since board space and solution cost are key design constraints when building a mobile CPU power supply, inevitably the solution must evolve to integrate power functionality. Integration technology has given us a better approach to handling VCORE, VIO and VCLOCK.
One highly integrated approach combines the synchronous buck controllers for the CPU's core voltage and 2.5-A I/O currents with a 150-mA low-dropout regulator to handle the digital CPU clock. The result is a single-IC implementation that incorporates two synchronous buck controllers to provide VCORE and VIO, an on-chip low-dropout linear regulator for VCLOCK, a 5-bit digital D/A converter input for setting VCORE and additional control functions for micropower shutdown and the voltage-monitoring output VGATE.
This circuit features an enhanced V2 control method that improves power supply performance, including line and load regulation, load transient response and on-the-fly D/A converter changes. It uses a ramp signal generated by the ESR of the output capacitors. V2 inherently compensates for variation in both line and load conditions since the ramp signal is generated from the output voltage.
This differs from traditional methods such as voltage-mode control, where an artificial ramp signal must be generated, or current-mode control, where a ramp is generated from inductor current. Such methods depend on the speed of the error-signal loop to meet load transient specifications. In contrast, the reaction time of the V2 loop to a load transient is not dependent on the bandwidth of the error-signal loop. The result is better transient response and fewer output capacitors. To effectively respond to on-the-fly D/A converter changes, the converter's output is added to that of the error amplifier. Therefore, the summed signal proactively sets the new regulation target. This approach again bypasses the error amplifier and minimizes transition time.
There are many different overcurrent protection schemes implemented by today's control ICs. The two most popular are pulse-by-pulse and hiccup overcurrent protection. The pulse-by-pulse approach limits the peak inductor current by turning off the top MOSFET upon triggering the threshold.
Although it is simple to implement, this scheme becomes less effective in a short-circuit condition, when the bottom MOSFET and inductor are exposed to a dc current level approaching the peak current threshold. The hiccup scheme solves this problem by completely turning off the top switch for a fixed time, usually programmable by the user. Then the circuit tries to power up again in a soft-start fashion. Care must be taken during a D/A converter shift, when the overcurrent threshold may be triggered by inrush current charging the output capacitors. Activation of hiccup operation turns off the top switch for an extended time and interrupts proper operation. Therefore, the hiccup current limit must be disabled for approximately 100 ms during a D/A converter shift. The low-dropout regulator has a built-in overcurrent limit that collapses the output voltage when the current exceeds the threshold.
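The two protection behaviors described above can be captured in a toy state machine: pulse-by-pulse action terminates the offending pulse, while hiccup action additionally forces the top switch off for a programmed number of cycles. A minimal sketch with assumed threshold and timing constants:

```python
# Toy state machine for hiccup overcurrent protection as described above.
# Threshold and off-time are assumed example values. The `enabled` flag
# models disabling the current limit during a D/A converter shift.
class HiccupProtection:
    def __init__(self, i_limit, off_cycles):
        self.i_limit = i_limit        # overcurrent threshold (A)
        self.off_cycles = off_cycles  # programmed hiccup off-time (cycles)
        self.cooldown = 0             # remaining off-time, 0 = normal
        self.enabled = True           # False during a D/A converter shift

    def top_switch_on(self, i_peak):
        """Return True if the top MOSFET may conduct this cycle."""
        if self.cooldown > 0:         # hiccup: sit out the off-time
            self.cooldown -= 1
            return False
        if self.enabled and i_peak > self.i_limit:
            self.cooldown = self.off_cycles
            return False              # terminate pulse, start hiccup
        return True                   # normal operation

ocp = HiccupProtection(i_limit=20.0, off_cycles=3)
print(ocp.top_switch_on(25.0))  # overload -> False, hiccup begins
print(ocp.top_switch_on(10.0))  # still in off-time -> False
```

After the off-cycles elapse, `top_switch_on` returns True again and the converter can retry in soft-start fashion; setting `enabled = False` for the duration of a D/A shift models the 100-ms blanking the text describes.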
Because the low-dropout regulator resides on-chip, a thermal shutdown circuit is also needed to turn off both it and the buck controllers so that the die does not overheat. If a temperature correlation between the control IC and the external power components can be established, this thermal shutdown feature can also protect components susceptible to thermal stress.
See related chart