Efficient audio is essential for low-power 3G mobiles
Dudy Sinai (06/28/2004 9:00 AM EDT) URL: http://www.eetimes.com/showArticle.jhtml?articleID=22101863
Many of the design challenges in 3G mobile phones revolve around implementing additional functionality, such as videoconferencing, while prolonging or at least preserving battery life. On the digital side, that has led IC designers to use smaller process geometries and adopt such techniques as core voltage and clock scaling to reduce CPU power consumption.
However, since mixed-signal and analog functions — primarily audio — account for an increasingly large part of the total power consumed, low-power design in this area is just as important. This requires a different set of techniques because shrinking transistors or lowering supply voltages has an adverse effect on mixed-signal performance. Moreover, clock scaling cannot be used on purely analog circuitry where there is no clock.
Additionally, the pin count and the number of printed-circuit traces rise as functionality increases. Groups such as the Mobile Industry Processor Interface Alliance are working to unify some of the multimedia communication buses in 3G products to save power and board space. But a highly suitable solution to the problem already exists, in the form of the well-established AC'97 interface protocol.
Granular power management
Among the most power-hungry components are the loudspeakers. The small membranes and lightweight magnets of speakers used in mobile phones generally result in lower efficiency, which is difficult to improve without making the speaker unacceptably large, heavy or expensive. The audio codecs and amplifiers driving these speakers, however, offer considerable latitude for power savings.
The three output transducers (earpiece, loudspeaker and headset) are usually driven by three separate output stages, only one of which needs to be powered up at any given time. To avoid unnecessary supply current drain, smart-phone audio ICs feature increasingly granular power management — ideally with a separate control bit for each output driver, input preamplifier, digital-to-analog converter, analog-to-digital converter and analog mixer. Software designed with power efficiency in mind takes advantage of this controllability to disable any functions not currently needed.
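As an illustration of how driver software can exercise that granularity, the following minimal sketch assumes a hypothetical codec register map with one power-down bit per block; all register and bit names are illustrative, not taken from any real part.

```c
/* Minimal sketch of granular power management, assuming a hypothetical
 * codec with one power-down bit per analog/digital block. Register and
 * bit names are illustrative, not taken from any real device. */
#include <stdint.h>

#define PWR_CTRL_REG        0x0Cu           /* hypothetical power-control register */
#define PWR_EARPIECE_DRV    (1u << 0)
#define PWR_SPEAKER_DRV     (1u << 1)
#define PWR_HEADSET_DRV     (1u << 2)
#define PWR_MIC_PREAMP      (1u << 3)
#define PWR_DAC             (1u << 4)
#define PWR_ADC             (1u << 5)
#define PWR_ANALOG_MIXER    (1u << 6)

/* Provided by the platform's codec driver; writes one 16-bit control word. */
extern void codec_write(uint8_t reg, uint16_t value);

/* Enable only the blocks needed for handset voice playback: the earpiece
 * driver, DAC and mixer stay on; everything else is powered down. */
static void audio_route_earpiece_playback(void)
{
    uint16_t on = PWR_EARPIECE_DRV | PWR_DAC | PWR_ANALOG_MIXER;
    codec_write(PWR_CTRL_REG, (uint16_t)~on);   /* 1 = powered down in this sketch */
}
```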
Sample rate conversion
Another perennial problem is the requirement to record and play back audio files recorded at different sample rates. Although digital sample rate conversion can be used to make this possible, the method is computationally very expensive, slowing down the CPU and increasing its power consumption.
Two alternative methods avoid that drawback. The first is to let the audio digital signal processor (DSP) cores associated with D/A and A/D converters run at different speeds depending on the sample rate. Phase-locked loops (PLLs) provide a power-efficient way to generate the necessary clocks — usually by multiplying the word clock, whose frequency equals the sample rate, by a fixed number, such as 256.
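The arithmetic behind this scheme is simple; the short sketch below (multiplier and sample rates taken from the text, the function purely illustrative) shows the DSP clock that results from multiplying the word clock by 256 at a few common rates.

```c
/* Sample-rate-proportional clocking: the audio DSP clock comes from a PLL
 * that multiplies the word clock (= sample rate) by a fixed factor such as
 * 256. This is just the arithmetic, not a driver for any particular PLL. */
#include <stdio.h>

#define PLL_MULTIPLIER 256u

static unsigned long dsp_clock_hz(unsigned long sample_rate_hz)
{
    return sample_rate_hz * PLL_MULTIPLIER;
}

int main(void)
{
    printf("8 kHz voice  -> %lu Hz DSP clock\n", dsp_clock_hz(8000));    /* 2.048 MHz   */
    printf("44.1 kHz CD  -> %lu Hz DSP clock\n", dsp_clock_hz(44100));   /* 11.2896 MHz */
    printf("48 kHz hi-fi -> %lu Hz DSP clock\n", dsp_clock_hz(48000));   /* 12.288 MHz  */
    return 0;
}
```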
Another method is the variable-rate audio feature built into AC97-compliant codecs. This makes it possible to support all standard sample rates while keeping the master clock constant, at 24.576 MHz, and transferring data across the digital interface at 48 kHz irrespective of the sample rate.
When playing or recording audio at sample rates below 48 kHz, such as the commonly used 44.1-kHz rate, the codec skips sampling approximately one out of every 12 data frames and times the 11 remaining samples more evenly. Even simultaneous recording and playback at two sample rates can thus be achieved. The power consumption of the extra digital logic that implements variable-rate audio is comparable to that of one or two high-quality PLLs and is far lower than for the digital sample rate conversion performed by a CPU.
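The exact slot-request mechanism is left to the codec, but the effect can be modeled with a simple fractional accumulator. The sketch below is an illustrative model, not the actual AC'97 state machine: it marks which 48-kHz frames carry a valid 44.1-kHz sample, skipping roughly one frame in twelve and spacing the rest evenly.

```c
/* Illustrative model of variable-rate audio carrying a 44.1 kHz stream over
 * a fixed 48 kHz link: a fractional accumulator decides which frames contain
 * a valid sample. Over 48 frames, 44 carry samples and 4 are skipped. */
#include <stdio.h>

#define LINK_RATE   48000u
#define AUDIO_RATE  44100u

int main(void)
{
    unsigned acc = 0, valid = 0;

    for (unsigned frame = 0; frame < 48; frame++) {
        acc += AUDIO_RATE;
        if (acc >= LINK_RATE) {          /* this frame carries a sample */
            acc -= LINK_RATE;
            valid++;
        } else {
            printf("frame %u skipped\n", frame);
        }
    }
    printf("%u of 48 frames carried samples\n", valid);
    return 0;
}
```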
Serving two clock domains
Opportunities for power savings also exist in the codecs' digital core. Reflecting the origin of smart phones as hybrids between traditional mobile phones and PDAs, today's typical audio subsystem consists of a mono voice codec and a stereo, hi-fi codec, which may or may not reside on the same piece of silicon. Although they are interconnected in the analog domain, each codec has its own digital core and audio interface, and is slave to its own clock domain. The voice clock is synchronized to the communications processor; the hi-fi clock is derived from the applications processor.
Besides increasing chip size, this architecture also means that two cores run simultaneously whenever music, MP3 ring tones or other signal tones are played during a call, and again during phone call recording. While sample-rate conversion combined with digital mixing could unify the two audio streams for processing in a single codec, the power consumed in the process would far outweigh any savings achieved through the elimination of one codec.
There is, however, another approach, which requires only a single audio DSP core while preserving separate D/A converters and analog signal paths. AC97 codecs are particularly suited to this because, as discussed above, the variable-rate audio mechanism keeps their master clock constant at 24.576 MHz. With the hi-fi clock thus fixed at one frequency, the voice clock can be locked to it at a fixed ratio: dividing the 24.576-MHz hi-fi clock by six yields a 4.096-MHz voice clock (512 times the standard telephony sample rate of 8 kHz). By generating the clocks on-chip and operating as clock master on both audio interfaces, the audio subsystem eliminates any frequency mismatch that might otherwise result in unintentionally dropped samples and thus in audible distortion (see figure).
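The clock relationships quoted above can be written down as compile-time checks; the snippet below simply encodes the arithmetic from the text.

```c
/* Clock relationships from the text, encoded as compile-time checks:
 * dividing the fixed 24.576 MHz AC'97 master clock by six gives a
 * 4.096 MHz voice clock, which is 512 times the 8 kHz telephony rate. */
#include <assert.h>

#define HIFI_MCLK_HZ    24576000UL              /* AC'97 master clock          */
#define VOICE_MCLK_HZ   (HIFI_MCLK_HZ / 6)      /* divided-down voice clock    */
#define VOICE_FS_HZ     8000UL                  /* telephony sample rate       */

static_assert(VOICE_MCLK_HZ == 4096000UL, "voice clock must be 4.096 MHz");
static_assert(VOICE_MCLK_HZ == 512UL * VOICE_FS_HZ, "voice clock = 512 * fs");

int main(void) { return 0; }
```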
The single-core architecture saves power whenever both audio streams are active. Playback during a call is achieved using just one audio DSP core, operating at 24.576 MHz, whereas the dual-core architecture also requires a second core, running at 4.096 MHz, and thus consumes extra power.
For phone call recording, the AC97 interface is slowed to one-sixth its normal speed, so that it becomes synchronous with the voice interface. One A/D converter can then be used to digitize the microphone signal and feed the data to the baseband chip set, while another A/D, using the same clock, captures an analog mix of both sides of the phone conversation for recording through the AC97 interface under the control of the applications processor. The same scenario in the dual-codec architecture would have required a separate DSP core for the second A/D, running at the frequency dictated by the applications processor.
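The reason the slowed link is synchronous with the voice path is again simple arithmetic: dividing the 48,000-frame/s AC-link rate by six yields exactly one frame per 8-kHz voice sample. The check below encodes this, using the divider value given in the text.

```c
/* Slowing the AC-link by six makes its frame rate equal to the 8 kHz voice
 * sample rate, so the two interfaces stay sample-locked. Arithmetic only. */
#include <assert.h>

#define ACLINK_FRAME_RATE   48000UL
#define SLOWDOWN_FACTOR     6UL
#define VOICE_SAMPLE_RATE   8000UL

static_assert(ACLINK_FRAME_RATE / SLOWDOWN_FACTOR == VOICE_SAMPLE_RATE,
              "slowed AC-link frame rate matches the voice sample rate");

int main(void) { return 0; }
```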
The single-core implementation also requires less silicon: besides the gate-count reduction from removing one audio DSP core, an A/D converter is eliminated as well.
Touchscreen digitization
Many 3G phones feature touchscreen interfaces to allow for more intuitive user interaction. The digitization of the touchscreen signal needs only a relatively slow, low-resolution A/D converter, but some details of its implementation can significantly affect overall power consumption in the system.
To measure touch coordinates, a current must be driven through the resistive layers of the touchpanel that overlays the screen. This current far exceeds the A/D converter's supply current and should therefore be minimized. The factors affecting it are the frequency of measurements and the delay between driving the touchpanel and taking a measurement. Both also affect measurement accuracy, and a minimum number of samples is necessary to achieve a responsive user interface.
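One way to keep the panel drive window short is to gate the drive current around a brief settling delay and a small burst of averaged conversions. The sketch below assumes hypothetical HAL calls (touch_drive_axis, touch_adc_read, delay_us) and illustrative timing values, not any particular controller.

```c
/* Power-aware touch measurement: drive the panel only as long as needed,
 * wait a short settling delay before sampling, average a few conversions
 * for accuracy, then cut the panel drive current immediately. */
#include <stdint.h>

#define SETTLE_DELAY_US  20u    /* illustrative settling time          */
#define NUM_SAMPLES      4u     /* enough averaging for a stable reading */

typedef enum { AXIS_X, AXIS_Y, AXIS_OFF } touch_axis_t;

extern void     touch_drive_axis(touch_axis_t axis);  /* drives current through the panel */
extern uint16_t touch_adc_read(void);                 /* one low-resolution conversion    */
extern void     delay_us(uint32_t us);

static uint16_t touch_measure(touch_axis_t axis)
{
    uint32_t sum = 0;

    touch_drive_axis(axis);        /* panel drive current flows from here...            */
    delay_us(SETTLE_DELAY_US);     /* let the panel voltage settle before sampling      */

    for (uint32_t i = 0; i < NUM_SAMPLES; i++)
        sum += touch_adc_read();

    touch_drive_axis(AXIS_OFF);    /* ...to here: keep this window as short as possible */
    return (uint16_t)(sum / NUM_SAMPLES);
}
```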
Pen-down detection is another concern with touchscreen displays. Many touchscreen controllers rely on a polling routine whereby a CPU regularly checks whether the screen is being touched. Such polling prevents the CPU from remaining in sleep mode when there is no user input and thus increases the CPU's power consumption.
With touchscreen controllers capable of independently detecting pen-down and sending an interrupt, the CPU can spend more time in sleep mode. The same pen-down signal can also be used to control the touchpanel driver circuits and A/D converter, ensuring that they run only when necessary.
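A possible structure for such an interrupt-driven scheme is sketched below; all function names are hypothetical. The point is simply that the CPU sleeps until the pen-down interrupt fires, and the digitizer is powered only while coordinates are actually being measured.

```c
/* Sketch of interrupt-driven pen-down handling, assuming a controller that
 * asserts an IRQ on touch. The CPU stays in sleep mode until the interrupt
 * fires; only then are the panel drivers and touch A/D enabled. */
#include <stdbool.h>
#include <stdint.h>

extern void     cpu_sleep(void);                  /* low-power sleep until any IRQ   */
extern bool     pen_is_down(void);                /* level of the pen-down signal    */
extern void     touch_digitizer_enable(bool on);  /* gates panel drive + touch ADC   */
extern void     report_touch(uint16_t x, uint16_t y);
extern uint16_t touch_measure_x(void);
extern uint16_t touch_measure_y(void);

volatile bool pen_event;                          /* set by the pen-down ISR */

void pen_down_isr(void) { pen_event = true; }

void touch_task(void)
{
    for (;;) {
        while (!pen_event)
            cpu_sleep();                          /* no polling: sleep until touched */
        pen_event = false;

        touch_digitizer_enable(true);             /* power digitizer only while needed */
        while (pen_is_down())
            report_touch(touch_measure_x(), touch_measure_y());
        touch_digitizer_enable(false);
    }
}
```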
Flexibility in the interpretation of the AC97 interface specification makes it easy and efficient to control the touchscreen digitizer and transfer its data over the same four-wire ac link bus that carries the audio data and control, eliminating the need to connect a separate touch-digitizer chip.
The future
Because the AC97 interface can carry data other than audio or modem traffic, and because its 48,000-frame/s rate is sufficient for foreseeable needs, other functions within the 3G phone could conceivably use this bus to transfer control and system data. Once we break free of the limits of using AC97 only for audio, other benefits become apparent.
For example, power-management traffic could easily be carried over the excess bandwidth of the ac link. If this happens while the audio and touch functions are asleep, the ac link could run at very low speed, saving further power.
Further, the inherent sleep and wake-up protocols built into the ac link specification are well-suited to building very low-power multipurpose communication subsystems.
This extended interpretation of the usability of the AC97 interface opens the way to further beneficial reductions in system power consumption and board area, and should be considered as an attractive future component of 3G phone architectures.
Dudy Sinai is a senior new product definition engineer at Wolfson Microelectronics plc (Edinburgh, United Kingdom).