Memory Amnesia Could Hurt Low-Power Design
By Jud Bond, MoSys, CommsDesign.com
July 30, 2003 (10:01 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030730S0018
Today's wireless chip designers face a myriad of challenges in meeting the ever-expanding feature requirements of high-technology products while being constrained by the power limitations of wireless and battery-operated devices. Nowhere is this more apparent than in wireless system-on-a-chip (SoC) design, where advanced processes allow for greater complexity than was previously achievable, yet these same technologies pose new power issues. One key element of the modern SoC is the increasing portion of the die area devoted to embedded memory. The 2000 International Technology Roadmap for Semiconductors (ITRS) indicates an almost exponential rise in the die area devoted to embedded memory over the next ten years. The roadmap further indicates that the crossover point (the time when the percentage of die area devoted to embedded memory equals the percentage devoted to logic) occurs in the 2002-2003 timeframe.
Clearly, as memory begins to dominate communication SoC designs, engineers can no longer ignore or dismiss the contribution of memory power to the system's power budget. With the increasingly large amounts of memory deployed in a low-power application, it becomes critical to apply power-saving techniques to the memory in order to achieve system power goals. When looking at memory as a key area in which low-power considerations need to be applied, three elements stand out: the embedding of system memory into the SoC, memory technology options that affect both active and standby power, and dynamic power management techniques. Let's look at each of these three elements in more detail.
To Embed or Not: The Perennial Question
One of the first questions that needs to be answered in terms of system architecture is whether to embed the system memory or leave it external to the SoC. In previous technologies, where power was not a major consideration, cost was the dominating factor in deciding whether to embed the memory.
Dynamic RAM (DRAM) has traditionally dominated external memory due to its cost advantage over other memory technologies. Over time, DRAM pricing has been driven by PC memory requirements. As a result, medium-density synchronous DRAM (SDRAM) has been widely available at a reasonable cost. Recently, however, the PC industry has been transitioning to larger-density double-data-rate (DDR) DRAMs. With this transition, the price points of DRAM appropriate for embedded system applications have risen, making external memory less cost effective than before.
Embedding the system memory has significant system power implications compared to external memory solutions. Too often, power budgets are allocated on a per-chip basis without regard to total system power. By considering the entire power budget, proper partitioning can result in efficient power usage. The following example illustrates the power advantages of embedding memory versus an external solution.
Consider an embedded system with an SoC-based processor and a 4-Mbit (64Kx32) external memory (Figure 1). The memory interface consists of 32 data lines and 20 assorted address and control lines. Assuming that one-half of the signals are transitioning at any one time, a total of 26 signals need to be accounted for in terms of power. Each of these signals has an effective loading of 8 to 10 pF.
Dynamic switching power is calculated as 1/2CV^2f per switching signal. Assuming that the I/O voltage is 2.5 V and that the memory is operating at 100 MHz, the I/O power consumed in performing memory operations equals approximately 81 mW. Clearly this is excessive when viewed from the standpoint of battery requirements.
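As a quick sanity check, the arithmetic behind that figure can be reproduced directly. The following C sketch simply assumes the values from the example above: 26 simultaneously switching signals, a 10-pF effective load per signal (the upper end of the quoted 8-to-10-pF range), a 2.5-V I/O supply, and a 100-MHz access rate.

#include <stdio.h>

/* Estimate of external-memory I/O switching power, P = n * 1/2 * C * V^2 * f.
 * Values follow the example in the text; the 10-pF load is the upper end
 * of the 8-to-10-pF range quoted above. */
int main(void)
{
    const double n_switching = 26.0;     /* half of the 52 interface signals */
    const double c_load      = 10e-12;   /* effective load per signal, farads */
    const double v_io        = 2.5;      /* I/O supply voltage, volts */
    const double f_access    = 100e6;    /* memory access rate, hertz */

    double power_w = n_switching * 0.5 * c_load * v_io * v_io * f_access;
    printf("External I/O switching power: %.1f mW\n", power_w * 1e3);  /* ~81 mW */
    return 0;
}

Running the same arithmetic with the 8-pF lower bound yields roughly 65 mW; either way, the external interface alone consumes a meaningful share of a handset's power budget.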
While cost considerations dominated decisions of whether to embed memory in the past, today's power requirements of wireless and battery-powered applications heavily favor the embedding of system memory.
Memory Options
When planning for low-power operation, it is important to examine the characteristics of the various memory options in both active and standby operation. Today's designer has a host of embedded memory technologies to choose from, including memories based on a six-transistor (6T) cell and memories based on a single-transistor, single-capacitor (1T1C) cell. It is necessary to weigh the merits of each technology in order to make an intelligent decision about which to embed.
6T memory is built around a latching memory cell that contains six transistors (Figure 2). Because of the latching action of the circuit, 6T memory is referred to as static RAM (SRAM), which implies that the memory cell can hold its stored value as long as power is present.
6T memory is available through a large number of vendors and is able to run on standard CMOS logic processes. While often viewed as "free" memory due to its availability through ASIC vendors and foundries, the memory does come with a price. The high transistor count translates to a large cell, resulting in memory that is roughly twice as large as its competitors. While power is a prime consideration, cost is a factor that cannot be ignored. Cost translates directly into silicon area -- in other words, the smaller the memory, the more cost effective it is.
The Single Transistor Approach
1T1C memories are built around a single-transistor, single-capacitor cell (Figure 3). The architecture of the 1T1C cell calls for the stored memory value to reside as a charge on the capacitor. The charge must be periodically refreshed in order to maintain its value, thus the 1T1C cell is referred to as a dynamic cell.
Dynamic embedded memory (eDRAM) is significantly smaller than 6T-SRAM due to the memory cell having only a single transistor and single capacitor. The small cell area of the eDRAM cell results in memory arrays much denser than a corresponding 6T-SRAM array. As with most tradeoffs there is a downside, and in the case of eDRAM, the technology requires a special process, which is not offered by most ASIC vendors and is considerably more costly than standard logic processes.
An alternative to both 6T-SRAM and eDRAM is the 1T-SRAM memory technology being offered by a number of foundries. Based on a 1T1C bit cell, the 1T-SRAM memory technology offers the density advantages of a DRAM cell but runs on standard logic processes.
Power Considerations
When considering power characteristics of the various memory technologies, one must look at both the active power and the standby power. Active power is the power consumed by the memory and its interface when being accessed for a read or a write. Standby power is the power consumed by the memory when the stored value is being retained but not accessed. Each technology (6T and 1T1C) has different characteristics concerning active and standby power.
Traditionally in low-power or wireless systems, active power was considered of lesser importance in the power budget due to the relatively short time the device was active compared to the time it spent in standby. Today's applications depend on many new features that require the device to spend a greater percentage of its time in active mode.
For example, a 2G handset's functionality consisted mainly of the call and call-management functions associated with wireless communication. Typically, a 2-Mbit SRAM was sufficient for the protocol stack, menu system, and scratchpad. By contrast, today's 3G phones, in addition to voice services, support a wide variety of options such as data services, Web browsers, audio players, and MPEG-4 video. These handsets can easily require up to 16 Mbit of SRAM. The demand these functions place on active power increases the need for power-efficient memory.
The 6T memory cell, which is a latched structure, dissipates the highest active power because of the latch action and the inherent size of the cell. In addition, large 6T arrays typically contain long metal lines that create high node capacitance, further increasing the power draw. By contrast, 1T1C memories read and write data by charging or discharging the capacitor in the memory cell. The small size of the 1T1C cell results in shorter metal lines and lower node capacitance, which translate to lower power.
The Standby Power Benchmark
Standby power is still the mark by which low-power wireless applications are measured. Battery life in a mobile design is directly related to the efficiency of the system's standby power.
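To see how directly standby current translates into battery life, consider a back-of-the-envelope estimate such as the C sketch below. All of the numbers (battery capacity, active and standby currents, duty cycle) are illustrative assumptions, not figures from this article.

#include <stdio.h>

/* Illustrative battery-life estimate showing how active and standby currents
 * combine into an average drain. All values are assumptions for illustration. */
int main(void)
{
    const double battery_mah = 800.0;   /* assumed battery capacity */
    const double active_ma   = 150.0;   /* assumed average current while active */
    const double standby_ma  = 2.0;     /* assumed average current in standby */
    const double active_frac = 0.02;    /* device assumed active 2% of the time */

    double avg_ma = active_frac * active_ma + (1.0 - active_frac) * standby_ma;
    printf("Average current: %.2f mA, battery life: %.0f hours\n",
           avg_ma, battery_mah / avg_ma);
    return 0;
}

With the assumed 2-percent duty cycle, standby current accounts for a large share of the average drain, so reducing memory standby current translates almost directly into longer battery life.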
In past generations, standby power in memory was not given major consideration due to the performance of 6T-SRAM in standby mode. The latched action of the 6T cell, coupled with the relatively thick gate oxides of past processes, resulted in a memory cell that consumed little power in standby mode compared to other system elements.
With the advent of very-fine geometry processes (0.13 micron and finer), this picture has changed greatly. While benefiting from the speed and density afforded by ever shrinking geometries and supply voltages, the industry is facing a power crisis brought about by these same processes. The issue of leakage current, while always present in previous silicon generations, has become an overriding concern to the design industry.
Leakage current is simply defined as the uncontrolled (parasitic) current flowing across regions of the semiconductor structure in which no current should be flowing. It can be composed of several elements: sub-threshold leakage current, gate direct-tunneling leakage current, and source/drain junction leakage current.
The ITRS has published its low standby power (LSP) logic technology requirements for both the near and long term, including leakage current requirements. It is generally acknowledged that these leakage requirements cannot be met with current methodologies. In fact, it is estimated that leakage current will increase on average 7.5x with each chip generation. It is no longer valid to assume that gate leakage is an insignificant contributor to standby power in embedded memory.
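The compounding effect of that estimate is worth spelling out. The small C sketch below simply applies the 7.5x-per-generation figure to a normalized starting point; the three-generation horizon is an arbitrary choice for illustration.

#include <stdio.h>

/* Projects relative leakage growth using the ~7.5x-per-generation estimate
 * cited above, normalized to the current generation. */
int main(void)
{
    double factor = 1.0;                 /* leakage relative to today */
    for (int gen = 1; gen <= 3; gen++) {
        factor *= 7.5;                   /* ~7.5x increase per generation */
        printf("Generation +%d: leakage ~%.0fx the starting level\n", gen, factor);
    }
    return 0;
}

After only three generations the normalized leakage has grown by more than two orders of magnitude, which is why it can no longer be treated as a second-order term.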
Approaching Standby Power
Each memory technology approaches standby power differently. 6T memory theoretically has the best standby power numbers because the latched memory consumes negligible power. However, because of the basic structure of the 6T cell, it is very susceptible to leakage. In fact, in its standby state, the 6T memory cell has four separate leakage paths (Figure 4) in which current flows.
6T leakage in 0.13 micron and below results in a significantly higher standby current than an equivalent 6T memory array in 0.18 micron or higher. While circuit techniques are constantly being employed to improve 6T leakage, standby current will always suffer with a six-transistor design in advanced processes.
1T1C memory cells, such as embedded DRAM and 1T-SRAM memory technology, do not suffer from leakage effects as severely as 6T does. The basic structure of the 1T1C cell contains only a single leakage path (Figure 5) in standby mode. In addition, the relatively smaller cell results in lower overall leakage.
While it is true that 1T1C cells require a refresh current to maintain memory state in standby, design techniques have lowered this current to the point where refresh current is often significantly less than the leakage current of an equivalent 6T memory array.
Impact of Dynamic Power Control
Traditional low-power designs have employed a wide variety of techniques to reduce power consumption. These techniques include reducing voltage and frequency, clock control, transition minimization and selective sleep or power down.
Early implementations of power-saving design methods were mainly static in nature, such as a reduced voltage or frequency constantly applied to the system, resulting in a constant savings that was not dependent on true system activity or throughput requirements. While power savings were realized, often the results were not optimal.
Recent advances such as dynamic clock control, adaptive voltage and frequency scaling, and selective sleep or shutdown are implemented as dynamic controls that allow the designer to maximize power savings in relation to system load and throughput. In other words, for maximum savings, power-saving techniques must be applied dynamically to track system activity and throughput requirements.
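As a concrete illustration of a dynamic control, the C sketch below adjusts a core's operating point from a periodically measured utilization figure. The hook functions (measure_utilization, set_core_voltage_mv, set_core_clock_mhz) and the operating points are hypothetical placeholders, not a real driver API.

#include <stdint.h>

/* Hypothetical hardware hooks -- placeholders for illustration only. */
extern uint32_t measure_utilization(void);        /* 0-100 percent over the last interval */
extern void     set_core_voltage_mv(uint32_t mv);
extern void     set_core_clock_mhz(uint32_t mhz);

/* Called periodically (e.g. from a timer tick) to pick an operating point that
 * tracks actual load rather than applying a fixed, static setting. A real
 * driver must also sequence voltage and clock changes safely (raise voltage
 * before frequency, lower frequency before voltage). */
void dvfs_policy_tick(void)
{
    uint32_t load = measure_utilization();

    if (load > 80) {            /* heavy load: full-performance point */
        set_core_voltage_mv(1200);
        set_core_clock_mhz(200);
    } else if (load > 20) {     /* moderate load: reduced point */
        set_core_voltage_mv(1000);
        set_core_clock_mhz(100);
    } else {                    /* near-idle: minimum operating point */
        set_core_voltage_mv(900);
        set_core_clock_mhz(25);
    }
}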
Until recently, most low-power design techniques have been targeted to reduce the power of the logic circuits. With the increased amount of embedded memory in low-power systems, these same design techniques must also be applied to the memory in order to achieve system power goals.
A good example of this is the "sleep" mode employed by many systems. During periods of inactivity it is traditional to put the processor in a "sleep" or standby state to reduce power. This can be accomplished by software, clock control, or other methods.
During standby, memory is assumed to be in a low-power state. With very large embedded memories in fine-geometry processes, this is no longer the case. For 6T memory, leakage current can exceed the very logic current that one hopes to save by putting the logic to sleep. For 1T1C memory, the refresh requirements are still present and will consume power.
Clearly, in order to conserve power, the memory must be made aware of the "sleep" or standby condition, allowing the memory to operate in a "power-optimized" mode. An example of this is the low-power standby mode of the 1T-SRAM memory technology, which reduces standby current by an order of magnitude.
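One way to make the memory aware of the sleep condition is to fold it into the system's sleep-entry path, as in the C sketch below. The memory-controller register, the standby bit, and cpu_wait_for_interrupt are hypothetical placeholders used only to illustrate the idea; actual low-power entry sequences are device specific.

#include <stdint.h>

/* Hypothetical memory-controller register and standby bit -- illustration only. */
#define MEMCTL_PWR_REG   (*(volatile uint32_t *)0x40001000u)  /* assumed address */
#define MEMCTL_STANDBY   (1u << 0)                            /* assumed standby bit */

extern void cpu_wait_for_interrupt(void);   /* architecture-specific sleep instruction */

void system_sleep(void)
{
    /* Tell the embedded memory to enter its power-optimized standby mode
     * before the processor sleeps, so leakage and refresh power are minimized. */
    MEMCTL_PWR_REG |= MEMCTL_STANDBY;

    cpu_wait_for_interrupt();               /* processor sleeps until a wakeup event */

    /* Restore normal memory operation before resuming execution. */
    MEMCTL_PWR_REG &= ~MEMCTL_STANDBY;
}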
Wrap Up
There are many considerations that go into a low-power design. As memories begin to dominate silicon area, special care must be given to the memory as a key element in the overall power budget. Embedding system memory, as opposed to keeping it external, has been shown to reduce overall system power by minimizing driver power requirements. By choosing an appropriate embedded memory technology, maximum power savings can be realized in both active and standby power.
Applying dynamic power management, previously used only for logic, to the memory can significantly lower system power. A combination of the above techniques will help the system designer realize the design's power goals.
About the Author
Jud Bond is the program manager for IP licensing at MoSys. Bond earned a bachelor's degree in applied physics from Brigham Young University and can be reached at jbond@mosys.com.