Memory Amnesia Could Hurt Low-Power Design
Today's wireless chip designers face myriad challenges in meeting the ever-expanding feature requirements of high-technology products while being constrained by the power limitations imposed by wireless and battery-operated devices. Nowhere is this more apparent than in wireless system-on-a-chip (SoC) design, where advanced processes allow for greater complexity than was previously achievable, yet these same technologies pose new power issues.

One key element of the modern SoC is the increasing portion of the die area devoted to embedded memory. The 2000 International Technology Roadmap for Semiconductors (ITRS) indicates a nearly exponential rise in the die area devoted to embedded memory over the next ten years. The roadmap further indicates that the crossover point (the point at which the percentage of die area devoted to embedded memory equals the percentage devoted to logic) occurs in the 2002-2003 timeframe.

Clearly, as memory begins to dominate communication SoC designs, engineers can no longer ignore or dismiss the contribution of memory power to the system's power budget. With the increasingly large amounts of memory deployed in low-power applications, it becomes critical to apply power-saving techniques to the memory in order to achieve system power goals. When looking at memory as a key area in which low-power considerations need to be applied, three elements stand out: the embedding of system memory into the SoC, memory technology options that affect both active and standby power, and dynamic power management techniques. Let's look at each of these three elements in more detail.

To Embed or Not: The Perennial Question

Dynamic RAM (DRAM) has traditionally dominated external memory due to its cost advantage over other memory technologies. Over time, DRAM pricing has been driven by PC main-memory requirements. As a result, medium-density synchronous DRAM (SDRAM) has been widely available at a reasonable cost. Recently, however, the PC industry has been transitioning to larger-density double data rate (DDR) DRAMs. With this transition, the price points of DRAM appropriate for embedded system applications have risen, making external memory less cost effective than before.

Embedding the system memory has significant system power implications compared with external memory solutions. Too often, power budgets are allocated on a per-chip basis without regard to total system power. By considering the entire power budget, proper partitioning can result in efficient power usage. The following example illustrates the power advantages of embedding memory versus an external solution.

Consider an embedded system with an SoC-based processor and 4-Mbit (64K x 32) memory (Figure 1). The memory interface consists of 32 data lines and 20 assorted address and control lines. Assuming that one-half of the signals are transitioning at any one time, a total of 26 signals need to be accounted for in terms of power, each with an effective loading of 8 to 10 pF. Power dissipation per toggling signal is calculated as 1/2 CV²f (the switching energy 1/2 CV² times the toggle frequency). Assuming an I/O voltage of 2.5 V and a memory operating at 100 MHz, the I/O power consumed in performing memory operations would equal approximately 81 mW. Clearly this is excessive when viewed from the standpoint of battery requirements.
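To make the arithmetic explicit, the short C snippet below reproduces the estimate; the 26 toggling signals, 2.5-V I/O swing, and 100-MHz operation come from the example above, while the 10-pF effective load is taken from the upper end of the stated 8-to-10-pF range.

#include <stdio.h>

/* Recalculation of the external-memory I/O power estimate from the
 * example above: 26 toggling signals, ~10 pF effective load per signal,
 * 2.5 V I/O swing, 100 MHz operation. Dynamic power per toggling signal
 * is 1/2 * C * V^2 * f.                                                  */
int main(void)
{
    const int    toggling_signals = 26;      /* half of the 52 interface pins */
    const double c_load = 10e-12;            /* effective load per signal (F) */
    const double v_io   = 2.5;               /* I/O supply voltage (V)        */
    const double f_clk  = 100e6;             /* memory clock (Hz)             */

    double p_signal = 0.5 * c_load * v_io * v_io * f_clk;   /* watts per signal */
    double p_total  = p_signal * toggling_signals;

    printf("Per-signal I/O power: %.2f mW\n", p_signal * 1e3);
    printf("Total I/O power:      %.2f mW\n", p_total  * 1e3);   /* ~81 mW */
    return 0;
}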
While cost considerations dominated decisions of whether to embed memory in the past, today's power requirements of wireless and battery-powered applications heavily favor the embedding of system memory.

Memory Options

6T memory is built around a latching memory cell that contains six transistors (Figure 2). Because of the latching action of the circuit, 6T memory is referred to as static RAM (SRAM), which implies that the memory cell can hold its stored value as long as power is present.
6T memory is available from a large number of vendors and runs on standard CMOS logic processes. While often viewed as "free" memory due to its availability through ASIC vendors and foundries, the memory does come with a price. The high transistor count translates to a large cell, resulting in memory that is roughly twice as large as its competitors. While power is a prime consideration, cost is a factor that cannot be ignored. Cost translates directly into silicon area: the smaller the memory, the more cost effective it is.
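Because cost tracks silicon area almost directly, the relative cell sizes can be turned into a rough macro-area estimate. The short C sketch below does this for a 4-Mbit array; the cell areas and the periphery overhead factor are purely illustrative assumptions (not figures from this article), chosen only to reflect a roughly 2:1 cell-size ratio between a 6T cell and a denser competing cell.

#include <stdio.h>

/* Rough area comparison of a 4-Mbit macro built from 6T-SRAM cells versus
 * a denser competing cell of roughly half the area. All numbers below are
 * illustrative assumptions, not vendor data; the point is only that cell
 * area, and therefore cost, scales roughly 2:1 at the macro level.        */
int main(void)
{
    const double bits            = 4.0 * 1024 * 1024;  /* 4-Mbit array                      */
    const double area_6t_cell    = 2.5;                /* um^2 per 6T cell, assumed          */
    const double area_dense_cell = 1.2;                /* um^2 per denser cell, assumed      */
    const double overhead        = 1.3;                /* periphery/decode factor, assumed   */

    printf("6T macro area:     %.1f mm^2\n", bits * area_6t_cell    * overhead / 1e6);
    printf("Denser macro area: %.1f mm^2\n", bits * area_dense_cell * overhead / 1e6);
    return 0;
}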
The Single Transistor Approach

Dynamic embedded memory (eDRAM) is significantly smaller than 6T-SRAM because its memory cell consists of only a single transistor and a single capacitor. The small area of the eDRAM cell results in memory arrays much denser than a corresponding 6T-SRAM array. As with most tradeoffs there is a downside: eDRAM requires a special process, which is not offered by most ASIC vendors and is considerably more costly than standard logic processes.

An alternative to both 6T-SRAM and eDRAM is the 1T-SRAM memory technology offered by a number of foundries. Based on a 1T1C bit cell, 1T-SRAM offers the density advantages of a DRAM cell but runs on standard logic processes.

Power Considerations

Traditionally in low-power or wireless systems, active power was considered of lesser importance in the power budget because of the relatively short time the device spent active compared to the time it spent in standby. Today's applications depend on many new features that keep the device in active mode a greater percentage of the time. For example, a 2G handset's functionality consisted mainly of the call and call-management functions associated with wireless communication; typically a 2-Mbit SRAM was sufficient for the protocol stack, menu system, and scratchpad. By contrast, today's 3G phones, in addition to voice services, support a wide variety of options such as data services, Web browsers, audio players, and MPEG-4 video. These handsets can easily require up to 16 Mbit of SRAM. The demand these functions place on active power increases the need for power-efficient memory.

The 6T memory cell, which is a latched structure, dissipates the highest active power because of the latch action and the inherent size of the cell. In addition, large 6T arrays typically contain long metal lines that create high node capacitance, further increasing the power draw. By contrast, 1T1C memories read and write data by charging or discharging the capacitor in the memory cell. The small size of the 1T1C cell results in shorter metal lines and lower node capacitance, which translate to lower power.

The Standby Power Benchmark

In past generations, standby power in memory was not given major consideration because of the performance of 6T-SRAM in standby mode. The latched action of the 6T cell, coupled with the thicker gate oxides of past processes, resulted in a memory cell that consumed little power in standby compared to other system elements. With the advent of very fine geometry processes (0.13 micron and finer), this picture has changed greatly. While benefiting from the speed and density afforded by ever-shrinking geometries and supply voltages, the industry is facing a power crisis brought about by these same processes.

The issue of leakage current, while always present in previous silicon generations, has become an overriding concern to the design industry. Leakage current is simply defined as the uncontrolled (parasitic) current flowing across regions of the semiconductor structure in which no current should be flowing. It can be composed of several elements: sub-threshold leakage current, gate direct-tunneling leakage current, and source/drain junction leakage current. The ITRS has published its low standby power (LSP) logic technology requirements for both the near and long term, which include leakage current requirements. It is generally acknowledged that these leakage requirements cannot be met with current methodologies.
In fact, it is estimated that leakage current will increase on average 7.5X with each chip generation. It is no longer valid to assume that gate leakage is an insignificant contributor to standby power in embedded memory.
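To make the trend concrete, the following C sketch compounds an assumed per-cell leakage figure by the 7.5X-per-generation estimate cited above and scales it to a 16-Mbit embedded 6T array. The starting per-cell leakage and the node labels are purely illustrative assumptions; only the growth rate comes from the text.

#include <stdio.h>

/* Illustrative projection of standby leakage for a 16-Mbit embedded 6T
 * array across successive process generations, assuming the roughly
 * 7.5X-per-generation growth cited above. The 1 pA/cell starting point
 * and the node labels are assumptions for illustration, not measured data. */
int main(void)
{
    const double bits    = 16.0 * 1024 * 1024;   /* 16-Mbit array, one cell per bit      */
    double leak_per_cell = 1e-12;                /* A per cell, assumed at the first node */
    const char *node[]   = { "0.18um", "0.13um", "90nm", "65nm" };

    for (int g = 0; g < 4; g++) {
        double i_standby = bits * leak_per_cell;          /* total array leakage (A) */
        printf("%-7s %.1e A/cell -> %.3f mA array standby leakage\n",
               node[g], leak_per_cell, i_standby * 1e3);
        leak_per_cell *= 7.5;                             /* next generation */
    }
    return 0;
}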
Approaching Standby Power

A 6T memory array at 0.13 micron and below exhibits significantly higher standby current than an equivalent 6T array at 0.18 micron or above. While circuit techniques are constantly being employed to improve 6T leakage, standby current will always suffer with a six-transistor design in advanced processes. 1T1C memory cells, such as embedded DRAM and the 1T-SRAM memory technology, do not suffer from leakage effects as severely as 6T does. The basic structure of the 1T1C cell contains only a single leakage path in standby mode (Figure 5). In addition, the smaller cell results in overall lower leakage.
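A minimal comparison under assumed numbers can make this structural argument tangible. In the C sketch below, the per-path leakage and the number of leakage paths per cell are illustrative assumptions that simply follow the description above (several paths per 6T cell, a single path per 1T1C cell); the 1T1C array's refresh current is deliberately not modeled here, since it is discussed next.

#include <stdio.h>

/* Minimal standby-leakage comparison of equal-capacity 6T and 1T1C arrays.
 * Per-path leakage and per-cell path counts are illustrative assumptions
 * that mirror the structural argument above; 1T1C refresh current is not
 * modeled here (it is covered separately in the text).                    */
int main(void)
{
    const double bits          = 16.0 * 1024 * 1024;  /* 16-Mbit array                       */
    const double leak_per_path = 5e-12;               /* A per leakage path, assumed          */
    const int    paths_6t      = 4;                   /* leakage paths per 6T cell, assumed   */
    const int    paths_1t1c    = 1;                   /* single leakage path per 1T1C cell    */

    printf("6T array standby leakage:   %.2f mA\n", bits * paths_6t   * leak_per_path * 1e3);
    printf("1T1C array standby leakage: %.2f mA\n", bits * paths_1t1c * leak_per_path * 1e3);
    return 0;
}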
While it is true that 1T1C cells require a refresh current to maintain memory state in standby, design techniques have lowered this current to the point where it is often significantly less than the leakage current of an equivalent 6T memory array.

Impact of Dynamic Power Control

Early implementations of power-saving design methods were mainly static in nature, such as a reduced voltage or frequency applied to the system at all times, resulting in a constant savings that was not dependent on true system activity or throughput requirements. While power savings were realized, the results were often not optimal. Recent advances such as dynamic clock control, adaptive voltage and frequency scaling, and selective sleep or shutdown are implemented as dynamic controls that allow the designer to maximize power savings in relation to system load and throughput. In other words, for maximum power savings, power-saving design techniques must be applied dynamically to compensate for system activity and throughput requirements.

Until recently, most low-power design techniques have targeted the power of the logic circuits. With the increased amount of embedded memory in low-power systems, these same design techniques must also be applied to the memory in order to achieve system power goals. A good example is the "sleep" mode employed by many systems. During periods of inactivity it is traditional to put the processor in a "sleep" or standby state to reduce power. This can be accomplished by software, clock control, or other methods. During standby, memory is assumed to be in a low-power state. With very large embedded memories in fine-geometry processes, this is not the case. In the case of 6T memory, leakage current can exceed the very logic current that one hopes to save by sleeping the logic. In the case of 1T1C memory, the refresh requirements are still present and will consume power. Clearly, in order to conserve power, the memory must be made aware of the "sleep" or standby condition, allowing it to operate in a "power-optimized" mode (a minimal sketch of such a memory-aware sleep sequence follows the wrap-up below). An example of this is the low-power standby mode of the 1T-SRAM memory technology, which reduces standby current by an order of magnitude.

Wrap Up

The application of dynamic power management, previously used for logic, to the memory area can significantly lower system power. A combination of the above techniques will help system designers realize their power goals.
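As promised above, here is a minimal C sketch of a memory-aware sleep sequence. The register names, addresses, and mode values are hypothetical, invented only for illustration and not taken from any real controller or from this article; the point is simply the ordering: place the embedded memory into its power-optimized retention or standby mode before halting the core, and restore it on wake.

#include <stdint.h>

/* Hypothetical memory-mapped registers; names, addresses, and bit
 * definitions are illustrative only, not from any real device.          */
#define MEMCTRL_MODE   (*(volatile uint32_t *)0x40001000u)
#define CLKCTRL_GATE   (*(volatile uint32_t *)0x40002000u)

#define MEM_MODE_ACTIVE     0x0u          /* normal operating mode          */
#define MEM_MODE_LOW_POWER  0x1u          /* data-retaining standby mode    */
#define CLK_GATE_CPU        (1u << 0)     /* gates the CPU clock            */

/* Architecture-specific wait-for-interrupt, shown here as a stub
 * (for example, a WFI instruction on an ARM core).                       */
static void cpu_wait_for_interrupt(void) { }

/* Memory-aware sleep: the embedded memory is told about the standby
 * condition so it can switch to its power-optimized mode instead of
 * leaking (6T) or refreshing at full rate (1T1C) while the logic sleeps. */
void system_sleep(void)
{
    MEMCTRL_MODE  = MEM_MODE_LOW_POWER;   /* memory into retention/standby  */
    CLKCTRL_GATE |= CLK_GATE_CPU;         /* gate the CPU clock             */
    cpu_wait_for_interrupt();             /* sleep until a wake event       */

    CLKCTRL_GATE &= ~CLK_GATE_CPU;        /* restore the CPU clock on wake  */
    MEMCTRL_MODE  = MEM_MODE_ACTIVE;      /* memory back to active mode     */
}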