As cellular phones become more complex, more power is consumed by both active and standby systems. Consequently, power-management design for portable wireless devices imposes new challenges in the areas of I/O interface, energy management and battery lifetime. Digital designers lead the industry in implementing microprocessors in ultra-deep-submicron (130-, 90- and 65-nanometer) processes, where they have found that the thinner oxides and smaller channel lengths yield fast transistors. Likewise, analog baseband and RF designers are following a path of integration to provide a single-chip wireless solution to their end customers. Voltage scaling has not kept up with oxide scaling, however, which has resulted in leaky system solutions, a definite drain on battery lifetime. Fortunately, some power-management techniques can be used to lower power losses in single-chip solutions.

There are three identifiable forms of power-supply drain: active current consumption, standby current consumption (sometimes referred to as sleep mode) and off-mode current leakage. In the active mode, the dissipated power is the sum of the static bias power dissipation and the average switched, or clocked (dynamic), power dissipation. Standby is a low-power state in which most, if not all, of the dynamic power dissipation is absent because the clocks have been gated or turned off. In this mode, the magnitude of the static quiescent current dictates battery lifetime. The third form, off-mode power dissipation, is a function of the subthreshold leakage that the chip's transistors exhibit when the chip is off but the input supply is still present. If ultra-deep-submicron CMOS processes were able to handle the higher voltages of the battery (4.3 V to 5.4 V), the off-mode leakage would be negligible because effective channel lengths would be longer and the gate oxides thicker.
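The active-mode components described above can be written as a quick numeric sketch. The capacitance, frequency and activity values below are hypothetical, chosen only to illustrate the quadratic supply-voltage dependence that makes voltage scaling matter:

```python
# Sketch of active-mode power: static bias power plus average dynamic
# (switched) power, P_dyn = a * C * Vdd^2 * f. All values hypothetical.

def dynamic_power(c_switched, v_dd, f_clk, activity):
    """Average clocked power for total switched capacitance c_switched."""
    return activity * c_switched * v_dd**2 * f_clk

def active_power(p_static, c_switched, v_dd, f_clk, activity=0.25):
    """Active mode = static bias dissipation + average dynamic dissipation."""
    return p_static + dynamic_power(c_switched, v_dd, f_clk, activity)

# A 1.3-V core versus the same logic run straight from a 4.3-V
# battery-level supply: the quadratic Vdd term is why logic switched
# at battery voltage would burn far more dynamic power per hertz.
p_core = dynamic_power(1e-9, 1.3, 100e6, 0.25)   # ~42 mW
p_batt = dynamic_power(1e-9, 4.3, 100e6, 0.25)   # ~462 mW
```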
Likewise, the active power-supply drain would be reduced, because such a process would be slow in terms of frequency, and dynamic power dissipation is a function of capacitance, frequency and input supply voltage. Thus, one needs to address the matter of direct battery hookup of the power-management circuits. The two circuits most commonly modified to accomplish this are the low-dropout regulator (LDO) and the dc/dc buck switching regulator.

LDO regulator

In a typical LDO, most transistors are exposed to the input supply voltage in some form, whether as the drain-to-source voltage (VDS), the gate-to-source voltage (VGS), the gate-to-drain voltage (VGD), the gate-to-bulk voltage (VGB) or some combination. Hence, for a simple design, the devices must be rated for voltages at least equal to the battery's. In 1.5-V CMOS, that would be 1.8 V maximum. Recent process developments have allowed the inclusion of a drain extension on regular core transistors without adding cost. That allows the VDS, and consequently the VGD, of a typical NMOS or PMOS core transistor to extend to a higher voltage. It does not extend VGS; therefore, in traditional designs, attempting battery connection requires careful device sizing and extensive use of clamps. Such a design does not reap the full shrink benefits of future ultra-deep-submicron process nodes, since the drain-extended transistor form factor does not shrink as much as the core transistors. A solution is to self-regulate the circuit around a pair of PMOS cascaded current mirrors. Most core circuits can be made battery-tolerant with this technique, provided there is negative feedback that either regulates or clamps the voltage at the input of the supplied circuit. For the PMOS LDO, this technique uses the LDO's inherent feedback to regulate the LDO error amp at the core voltage. The main dc-to-dc converter blocks that interface with the battery are the output driver and the level shifter/predriver.
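One way to picture the drain-extension tradeoff described above is a simple ratings check. The specific limits below are assumptions for illustration: the extension raises a core transistor's VDS (and hence VGD) limit, while the VGS limit stays at the core gate-oxide value, which is why direct battery connection still needs clamps or careful biasing.

```python
# Hypothetical ratings check. A drain extension raises a core
# transistor's VDS/VGD limit, but its VGS limit stays at the core
# (gate-oxide) value; the exact limits below are illustrative.

def within_ratings(vds, vgs, vds_max, vgs_max):
    """True if both terminal voltages sit at or below their ratings."""
    return abs(vds) <= vds_max and abs(vgs) <= vgs_max

VBAT = 4.3  # low end of the battery range cited in the article, volts

# Plain 1.8-V core device: the battery on the drain overstresses it.
core_ok = within_ratings(vds=VBAT, vgs=1.5, vds_max=1.8, vgs_max=1.8)

# Drain-extended device (assumed 7-V drain rating): the drain can take
# the battery, but a battery-level VGS would still break the gate, so
# the gate must be clamped or biased close to the supply.
de_drain_ok = within_ratings(vds=VBAT, vgs=1.5, vds_max=7.0, vgs_max=1.8)
de_gate_ok = within_ratings(vds=VBAT, vgs=VBAT, vds_max=7.0, vgs_max=1.8)
```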
The output driver of the switching regulator can use a cascaded drain-extended PMOS (DEPMOS) device along with a high-voltage-gate (HVG, 1.8-V) PMOS device for the high-side switch. The low-side switch, or synchronous rectifier, can use a cascaded drain-extended NMOS (DENMOS) device and a core (1.3- to 1.5-V) NMOS device. This structure allows high-voltage operation, has better leakage performance and has less gate-drain capacitance to switch than a single DEPMOS device. Since the battery connects to an HVG PMOS device whose maximum VGS is much less than VBAT, a protection scheme is required for the VGS of the two devices. A circuit is needed that generates a constant voltage, PBias, referenced from the battery. PBias is set such that VBAT - PBias is less than the maximum VGS of the transistors. The cascaded DEPMOS is biased with this PBias, and the level shifter/predriver swings between VBAT and VBAT - PBias while driving the HVG PMOS device. The level shifter/predriver can be designed in the same cascaded manner as the output FETs.

Low-dropout regulators

Integrating an external system preregulator in a high-performance ultra-deep-submicron CMOS and then splitting it into several smaller internal regulators can minimize the area penalty of such integration. The higher transistor drive currents per unit area reduce the size of the pass FET. Furthermore, some of the tougher analog and radio-frequency specification constraints apply to only one or two of the LDOs. For example, a 100-mA LDO can be split into a 50-mA digital LDO, a 10-mA RF LDO and a 40-mA analog LDO. For the digital LDO, power-supply rejection and accuracy are not critical, so the power FET can be reduced to the edge of operation in the linear region. The analog LDO at 40-mA load current becomes easier to compensate and can be designed for high power-supply rejection with its output pass FET at the edge of the linear region.
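The splitting example above amounts to a current-budget exercise; a quick sketch using the article's own load numbers:

```python
# The 100-mA external preregulator split into three internal LDOs,
# using the load currents given in the text.
ldo_split_ma = {"digital": 50, "rf": 10, "analog": 40}

# Each LDO carries only its own domain's specs (power-supply rejection,
# accuracy), but together they still cover the original 100-mA budget.
total_ma = sum(ldo_split_ma.values())   # 100
```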
With several LDOs, increased quiescent current in standby can drain the battery. For example, disabling the analog and RF LDOs in standby saves a fair amount of quiescent current. That leaves the digital LDO, which in external solutions can consume 50 to 250 microamps. A solution is an adaptively biased LDO design (see figure). The design positively feeds back a fraction of the output load current to the tail current in the differential pair of the LDO's error amp, thereby increasing the overall quiescent current only when the load current increases. Such an architecture can achieve standby currents of less than 10 microamps while still providing 50 mA of output current with good transient load regulation.

Dc-to-dc buck converters are typically used for higher-current applications (greater than 200 mA), where the LDO's inefficiency becomes a significant portion of the overall power budget. Buck converters can be up to 95 percent efficient at full load, which makes them attractive, but at the cost of more area and more external components. To maximize battery lifetime, the dc-to-dc converter must maintain high efficiency over a wide range of load conditions. Pulse-width modulation (PWM) is used for high current loads, whereas pulse-frequency modulation (PFM) is used for light loads. At high load currents, controlling the duty cycle of the PWM signal regulates the output voltage. In PWM mode, the converter operates at a fixed frequency, which can be filtered for noise-sensitive applications. In this mode the dominant losses are conduction and switching losses, which occur in the power switches of the converter. To maintain high efficiency at light loads, the switching frequency is reduced, as per PFM, and allowed to vary with the load, thereby reducing switching losses. The PFM mode also enables most circuits to be shut down to reduce the quiescent current.
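The two ideas in this section can be sketched as rough behavioral models. The component values below (feedback fraction, on-resistance, switching energy) are assumptions for illustration, not the authors' figures: the adaptively biased LDO's quiescent current grows only with load, and letting the buck's switching frequency fall at light load (PFM) trims the frequency-proportional loss term.

```python
# Rough behavioral sketches; k, r_on, e_sw and the operating points are
# hypothetical illustration values, not from the article.

def ldo_quiescent(i_load, i_q_standby=8e-6, k=0.001):
    """Adaptively biased LDO: a small fraction k of the load current is
    fed back into the error amp's tail current, so quiescent current
    stays near i_q_standby until the load actually demands more."""
    return i_q_standby + k * i_load

def buck_efficiency(i_load, v_out, f_sw, r_on=0.1, e_sw=5e-9):
    """Buck converter with I^2*R conduction loss and f*E switching loss
    (e_sw = assumed energy lost per switching cycle)."""
    p_out = v_out * i_load
    p_loss = i_load**2 * r_on + f_sw * e_sw
    return p_out / (p_out + p_loss)

# LDO: below 10 uA when idle; at a 50-mA load it spends more on bias,
# but only because the load justifies it.
iq_idle = ldo_quiescent(0.0)      # 8 uA
iq_full = ldo_quiescent(50e-3)    # 58 uA

# Buck at a light 1-mA load: fixed-frequency PWM loses most of the
# input power to switching; scaling the frequency down with the load
# (PFM-style) recovers much of the efficiency.
eff_pwm = buck_efficiency(1e-3, 1.3, f_sw=1e6)
eff_pfm = buck_efficiency(1e-3, 1.3, f_sw=50e3)
```

At a heavy 200-mA load the delivered output power dwarfs both loss terms in this toy model, which is consistent with the fixed-frequency PWM converter remaining the efficient choice there.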
Valerian Mayega (valerian@ti.com) and Byron Reed (bmr2@ti.com) are IC design engineers at Texas Instruments Inc. (Dallas). See related chart.