Keeping leakage current under control
By Dave Reed, Vice President of Marketing, Monterey Design Systems, Sunnyvale, Calif., EE Times
February 10, 2003 (10:29 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030207S0026
As process technologies progress from 0.18 microns to 130 nanometers, it is no longer sufficient to achieve closure on timing, IR drop, and signal integrity. At 130 nm, leakage current becomes a significant contributor to the overall power dissipation of the chip. With process technologies of 0.18 microns and above, the only significant source of power dissipation was the switching of transistors; at 130 nm and below, this is no longer the case. As processes shrink, so do supply voltages and threshold voltages, resulting in exponential increases in sub-threshold leakage current (roughly 10 times per technology node).

At 180 nm, with a supply voltage of 1.8-2.0 V, leakage current is negligible. At 130 nm, with a supply voltage of 1.2-1.3 V, leakage current represents 10-30% of active power. At 70 nm, with a supply voltage below 1.0 V, over 50% of a chip's power dissipation may be due to leakage current. Leading-edge microprocessors were the first chips to be affected, but leakage current now looms as a critical design parameter for all multimillion-gate, nanometer chips.

There are two complementary approaches to limiting leakage current: statically selected slow transistors (SSST) and dynamically deactivated fast transistors (DDFT). The static approach is design independent and may be implemented with multiple-threshold (Vt) libraries and design tools that support them. Most foundries today offer multiple-Vt libraries for processes of 130 nm and below that contain both fast cells (high leakage, low Vt) and logically equivalent slow cells (low leakage, high Vt). The dynamic approach requires the chip designer to employ techniques during the design process that dynamically deactivate parts of the chip during periods of inactivity, and is thus design dependent. The two approaches are complementary in that a chip can be designed using both. However, since the dynamic approach is design dependent, we will focus in this article on the static approach, and on how a combination of multiple-Vt libraries and advanced physical synthesis technology can effectively reduce leakage by as much as 30% on a 130-nm design.

Some of the static solutions available today that attempt to make use of multiple-Vt libraries are overly simplistic and ignore the physical implementation aspects of the chip. This approach involves initially synthesizing the design and mapping to high-leakage (low-Vt) cells, then replacing as many of the cells as possible with logically equivalent low-leakage (high-Vt) versions. Since this "leakage optimization" takes place immediately after synthesis, it completely ignores physical aspects of the design, such as cell placement, power routing, and clock trees, that are so critical to the closure of a multimillion-gate nanometer design.

Another approach offers more accuracy, but requires multiple iterations of the layout in order to achieve closure. This circuitous and time-consuming process requires completing the fully placed and routed layout, then analyzing the design to determine which parts of the chip are candidates for low-leakage cells. The designer would then be required to process an ECO swapping in the low-leakage cells, redo the layout, then re-extract parasitics and re-analyze timing and power. If problems are discovered, the entire process must be repeated. More than likely, the swapping in of low-leakage cells would be done in sections, rather than on the chip as a whole, thus requiring multiple iterations.
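Underlying both approaches is the device physics that makes dual-Vt libraries attractive in the first place: sub-threshold current depends exponentially on threshold voltage. The Python sketch below illustrates that first-order relationship, I_sub proportional to exp(-Vt / (n * kT/q)); the threshold values and slope factor are illustrative assumptions, not foundry data.

import math

# Thermal voltage kT/q at room temperature (~300 K), in volts.
V_THERMAL = 0.026
# Sub-threshold slope factor; ~1.4 is typical for bulk CMOS (assumed here).
N = 1.4

def relative_leakage(vth):
    """Relative off-state current for a transistor with threshold vth (volts).

    First-order device physics: I_sub ~ exp(-vth / (N * kT/q)), so every
    ~N * kT/q * ln(10), about 80-90 mV, of threshold reduction costs
    roughly one decade of leakage -- the exponential trend noted above.
    """
    return math.exp(-vth / (N * V_THERMAL))

# Illustrative thresholds for a dual-Vt library (hypothetical values).
low_vt, high_vt = 0.25, 0.35
ratio = relative_leakage(low_vt) / relative_leakage(high_vt)
print(f"The low-Vt cell leaks roughly {ratio:.0f}x more than the high-Vt cell")

With these assumed numbers, a 100-mV threshold difference yields roughly a 16x leakage gap between the fast and slow variants of the same cell, which is why selective use of the slow cells pays off so quickly.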
Swapping cells
A more effective approach consists of swapping in the low-leakage cells during physical synthesis and optimization. This requires that the physical synthesis tool identify those parts of the chip where the use of low-leakage cells will not result in timing violations, swap the cells, and then perform timing and power analysis to ensure that timing and power requirements are still being met. The latest generation of physical synthesis, prototyping, and implementation tools takes full advantage of multiple-Vt libraries by simultaneously optimizing the design for power consumption, timing, area, and signal integrity. As part of this process, slower low-leakage cells are instantiated wherever they do not introduce timing violations. Characterization tests run on a suite of 130-nm designs show a reduction in leakage current of up to 30%. In order to make effective use of multiple-Vt libraries, the physical synthesis and prototyping technologies being used must possess a number of key attributes.
First, they must be very accurate, so that the physical prototype correlates to within a few percentage points of the final physical implementation. The resulting prototype must include fully routed power networks, fully synthesized and placed clock trees, and an accurate and achievable global routing of all nets. Without this level of accuracy, it is impossible to obtain any useful measurement of the effect of using multiple-Vt cells on timing and power consumption.
Second, the physical prototype must be constructed by optimizing all design parameters (timing, power, clock, area, and signal integrity) simultaneously. Traditional flows, composed of physical synthesis, virtual prototyping, place-and-route, extraction, and analysis tools, tend to optimize individual design parameters sequentially and then attempt to repair violations of the other parameters through multiple iterations of the entire flow. Adding another design parameter, such as leakage current, would simply lengthen the overall flow and increase the number of iterations required to achieve closure.
One approach that lends itself well to the simultaneous optimization of multiple parameters (timing, area, clock delay and skew, IR drop, electromigration, and signal integrity), and to the addition of new design parameters (leakage current and inductance), is the use of an open cost function throughout the entire process: physical synthesis, prototyping, and implementation. The cost function can readily take into account factors such as leakage current at 130 nm and inductance at the 70-nm node.
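In skeletal form, such a cost function can be an extensible weighted sum of normalized violations. The sketch below is a hypothetical illustration, not Monterey's actual formulation, but it shows why adding a new parameter such as leakage (or, later, inductance) requires no change to the optimizer itself.

def open_cost(metrics, weights):
    """Open cost function sketch: a weighted sum of normalized violations.

    `metrics` maps each design parameter to its normalized violation
    (0.0 = requirement met; larger = worse). Adding a new parameter --
    leakage at 130 nm, inductance at 70 nm -- is just a new key.
    """
    return sum(weights.get(name, 1.0) * violation
               for name, violation in metrics.items())

# Illustrative numbers only; a real flow gets these from analysis engines.
metrics = {"timing": 0.02, "ir_drop": 0.00, "signal_integrity": 0.01,
           "area": 0.05, "leakage": 0.30}
weights = {"timing": 10.0, "leakage": 2.0}  # timing violations cost most
print(open_cost(metrics, weights))          # 0.86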
Full deployment
An open cost function, combined with a "progressive refinement" approach, starts with a coarse estimate of the physical implementation and progressively refines it until all design requirements are met. During this optimization process, the full complement of physical synthesis and prototyping functionality is deployed: techniques such as cell sizing, buffering, logic restructuring, cloning, technology remapping, and area recovery.
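Sketched as a loop, progressive refinement repeatedly applies those transforms and continues only while the open cost function improves. The interfaces below (a design object and a list of transform functions) are hypothetical stand-ins for what a real tool does internally.

def progressive_refinement(design, transforms, cost, min_gain=1e-6):
    """Coarse-to-fine optimization driven by a single cost function.

    `transforms` is an ordered list of functions (cell sizing, buffering,
    logic restructuring, cloning, technology remapping, area recovery),
    each taking and returning a design. Passes repeat until the cost
    stops improving by at least `min_gain`.
    """
    best = cost(design)
    while True:
        for transform in transforms:
            design = transform(design)
        current = cost(design)
        if best - current < min_gain:   # converged: no meaningful gain
            return design
        best = current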
It is towards the end of the optimization phase that the cells within non-critical paths are swapped out for lower-leakage cells, while timing, IR drop, and electromigration are constantly monitored by the built-in analysis engines. The open cost function enables all design parameters (timing, power, clock, area, and signal integrity) to be evaluated and optimized simultaneously. The end result: a fully placed, globally routed chip that includes fully routed power networks and synthesized clock trees with placed buffers.
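A minimal sketch of that final swapping pass follows, assuming a hypothetical slack query into the tool's incremental timing engine: a low-Vt cell is committed to its high-Vt equivalent only if the added delay still leaves positive slack.

from dataclasses import dataclass

@dataclass
class CellInstance:
    name: str
    is_low_vt: bool        # True = fast, high-leakage variant
    swap_delay: float      # extra delay (ns) of the high-Vt equivalent
    leakage_saving: float  # leakage saved (uA) by the swap

def swap_noncritical_cells(cells, slack_of):
    """Swap low-Vt cells on non-critical paths to high-Vt equivalents.

    `slack_of(cell)` is assumed to return the worst slack (ns) of any path
    through the cell. Least-critical cells are visited first, and a swap
    is accepted only if timing margin survives. A production flow would
    update timing incrementally after each swap and also re-verify IR
    drop and electromigration, as described above.
    """
    total_saved = 0.0
    for cell in sorted(cells, key=slack_of, reverse=True):
        if cell.is_low_vt and slack_of(cell) > cell.swap_delay:
            cell.is_low_vt = False      # commit the high-Vt equivalent
            total_saved += cell.leakage_saving
    return total_saved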
As process technologies continue to advance, physical implementation becomes progressively more difficult. At first, it was very simple; the only consideration was routability. Then came timing, power consumption, and signal integrity. Leakage current is but the latest in this series of design parameters; in the future, inductance will become a primary concern.
What is needed is an approach that uses an open objective function, starts with a coarse estimate of the physical implementation, and progressively refines it until the final implementation is achieved, while considering all design parameters simultaneously throughout the entire process. This approach provides the accuracy necessitated by nanometer processes and is highly extensible to handle the requirements of current and future process technologies.