Modeling challenges for 90 nm and below
By Vassilios Gerousis, EE Times
September 19, 2003 (12:17 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030919S0048
Based on a series of physical effects, CMOS is showing increasing design challenges at 90 nanometers and below. Newer modeling obstacles, with varying degrees of influence, are becoming critical to achieving silicon accuracy in both analysis and implementation. When CMOS devices are scaled down, the voltage level and oxide thickness must also be reduced. The electrical barriers within the device begin to lose their insulating properties because of thermal injection and quantum-mechanical tunneling, which results in higher leakage and reduced speed.

As we go to 50 nm and below, a fundamental modeling change will be required to overcome the nonlinearity of signals and the variation of process and mask imperfections. These changes will also require reexamining how analysis tools (for example, timing, power, voltage drop and signal integrity) and implementation tools deal with the new modeling and design challenges. As an example, timing analysis must take into account voltage drops, crosstalk delays and noise effects; assuming the worst case of all effects at once shrinks the design space or makes design impossible. Timing analysis must consider the chip and wafer map (voltage drop and temperature variation) for deterministic variations, since instances of the same cell or block may operate at different power supplies and temperatures. At the 50-nm technology node, we see the need for a statistical approach in static timing analysis for random effects, such as process variation; new timing analysis techniques must be developed and made available in time to support 50 nm.

The challenge of low power and low voltage (LPLV) is becoming one of the leading design obstacles at 90 nm. The LPLV goal is to optimize chip performance and reduce chip power, both active and standby; the trade-off range between performance and standby current is shrinking. This challenge has led to new design and library methodologies. The basis of LPLV is to develop accurate modeling, improved analysis and sign-off, and physical synthesis tools. This has affected every aspect of the design flow, cell modeling and library methodology as enhancements are added to tools and cell models to take advantage of LPLV techniques.

LPLV design challenges

With 90-nm technology, leakage current has increased by at least 100 times over the 180-nm technology node; if LPLV techniques are not used, the factor can exceed 1,000. Multiple techniques are used to reduce leakage currents while achieving target performance. Multi-Vdd is one of the design methodologies that has been applied with some degree of automation at 90 nm. The industry also must come up with further enhancements, especially in modeling and mixed analysis (power, timing and signal integrity), for 50 nm and below.

The multi-Vdd (power supply) methodology is focused on voltage-domain block design, where each block can have an independent power supply line, called Vddn. Each block is designed with at least two performance targets: high voltage with a high clock frequency, and low voltage with a low clock frequency (the sketch below illustrates this trade-off). A maximum of eight power domains, selected for optimal physical design to limit the area required by the additional power supply lines, can be used. Each domain can vary its power supply value, and each power supply line can be switched off or lowered for a specific operating case. When a level shifter is inserted, a second power line is required for the shifter, which is usually two or three cells high.
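As a rough illustration of the voltage/frequency trade-off behind multi-Vdd domains, consider a minimal Python sketch based on the Sakurai-Newton alpha-power delay law. It is not from the original article; the supply values, threshold voltage and exponent below are illustrative assumptions, not characterized 90-nm data.

```python
# Illustrative sketch (not from the article): alpha-power-law delay model
# showing the high-voltage/high-frequency vs. low-voltage/low-frequency
# trade-off behind multi-Vdd voltage domains. All parameter values
# (Vth, alpha, the two domain supplies) are hypothetical.

def gate_delay(vdd, vth=0.35, alpha=1.3, k=1.0):
    """Sakurai-Newton alpha-power law: delay ~ Vdd / (Vdd - Vth)^alpha."""
    return k * vdd / (vdd - vth) ** alpha

def active_power(vdd, freq, c_eff=1.0):
    """Dynamic power ~ C * Vdd^2 * f (normalized units)."""
    return c_eff * vdd ** 2 * freq

v_hi, v_lo = 1.2, 0.9                     # hypothetical domain supplies
d_hi, d_lo = gate_delay(v_hi), gate_delay(v_lo)
f_hi, f_lo = 1.0 / d_hi, 1.0 / d_lo       # max frequency scales as 1/delay

print(f"delay penalty at {v_lo} V: {d_lo / d_hi:.2f}x")
print(f"active power at {v_lo} V: "
      f"{active_power(v_lo, f_lo) / active_power(v_hi, f_hi):.2f}x")
```

Under these assumed numbers, the low-voltage domain is roughly 1.3 times slower but consumes less than half the active power, which is precisely the trade each voltage domain is designed to make.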
Level shifters require special placement and routing considerations, and physical synthesis tools can automatically insert them using predefined rules. Modeling of cells and macros (blocks) remains one of the main challenges in taking full advantage of multi-Vdd analysis. Current cell models do not capture power supply variation during cell switching, nor do they account for delay variation due to changes in supply voltage, even though both propagation delay and slew rate are functions of Vdd. Adding active-well cells to the design requires modeling both Vdd-n-well (Vdn) and Vdd-p-well (Vdp) variations; in fact, none of the existing timing, power and signal-integrity tools even considers these parameters and their variations.

There are usually three important stages that must happen to properly support the multi-Vdd methodology; commercial tools are currently at the first stage. To use current commercial tools, LPLV modeling must be achieved through the development of multiple libraries, which can explode the library count to more than 70, each characterized for a specific operating condition (as an example, Vdd value and Vdd bulk value).

Timing problems

Starting with 130-nm features, there have been large errors due to inadequate modeling of the input pin, which affects both timing and power analysis. We have quantified the effects on timing with respect to dynamic changes in pin-capacitance values, and we predict similar changes will also affect power modeling and analysis, as well as signal integrity. The state of the art in modeling usually treats the input pin capacitance of a cell as a fixed value, and all tools, even for the 90-nm technology node, currently neglect the effects of changes in input pin capacitance. The impact of pin-capacitance variation touches timing, power and signal-integrity analysis as well as implementation methodologies. During characterization, the dynamic pin capacitance is measured accurately and is available in the characterization database, but industry library formats support only one constant value.

The industry has focused on interconnect modeling and neglected advancement in cell delay modeling. In the wire-length distribution of a chip, more than 60 percent of the nets fall in the shortest-length bins; for those nets, the input pin capacitance of the cell accounts for most of the interconnect delay. In a block of 50,000 cells, our statistics show that using the wrong constant pin capacitance leads to a 30 percent error in the calculated cell delay (of the net driver) and a 25 percent error in transition time, and the error propagates to the following logic stages. Under the current timing methodology, multiple libraries are generated to provide the worst (or best) pin-capacitance value at each process, voltage and temperature corner. At least eight timing analysis runs are required for four corners with on-chip variation, and the amount of data generated by these runs is huge and very difficult to analyze and debug.

Traditionally, cell delay increases with rising temperature. But at the 90-nm node, measurement has shown that cell delay can decrease with rising temperature, a behavior called temperature inversion. Temperature inversion depends on process options, circuit type (bias voltage) and power supply values, and it has been observed at the 90-nm node. To properly model temperature inversion for 90 nm, it is necessary to specify the temperature dependency of every timing arc.
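The mechanism behind temperature inversion can be sketched with two competing effects: carrier mobility degrades as temperature rises, slowing the cell, while the threshold voltage drops, speeding it up. The following Python sketch uses textbook-style coefficients that are assumptions for illustration, not measured 90-nm data; at the assumed low supply, the threshold effect wins and the cell speeds up when hot.

```python
# Illustrative sketch of temperature inversion (hypothetical coefficients,
# not measured silicon). Mobility falls with temperature (cell slows down)
# while the threshold voltage also falls (cell speeds up); at low Vdd the
# threshold term wins and delay *decreases* as the die heats up.

def delay(vdd, t_c, vth0=0.35, dvth_dt=-1.5e-3, alpha=1.3, mu_exp=1.5, t0=25.0):
    vth = vth0 + dvth_dt * (t_c - t0)                       # Vth drops when hot
    mobility = ((t_c + 273.15) / (t0 + 273.15)) ** -mu_exp  # mu drops when hot
    return vdd / (mobility * (vdd - vth) ** alpha)          # alpha-power delay

for vdd in (1.2, 0.7):                   # nominal vs. low-voltage domain
    change = delay(vdd, 125.0) / delay(vdd, 25.0) - 1.0
    trend = "classic" if change > 0 else "inverted"
    print(f"Vdd={vdd:.1f} V: delay shift 25C -> 125C = {change:+.1%} ({trend})")
```

With these assumed coefficients, the 1.2-V domain slows by about 25 percent from 25°C to 125°C while the 0.7-V domain actually speeds up slightly; a single hot-corner library is therefore no longer safely pessimistic across voltage domains.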
Because timing-analysis tools lack support for these effects, the current methodology is to model and analyze designs using multiple libraries.

The linear slope model (LSM) has been the cornerstone of cell delay and slope calculation. When used in conjunction with a voltage-based extracted-cell-driver model, it can generate delay errors and cause timing failures, and at 90 nm the model is running out of steam. The conclusion is that LSM is inadequate for the 90- and 50-nm technologies; the sketch below illustrates how quickly a linear fit degrades toward the edges of the characterization space.
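To make the shortfall concrete, here is a purely illustrative Python sketch: a hypothetical nonlinear cell-characterization surface (the coefficients are invented for illustration), a linear slope model calibrated at the center of the slew/load grid, and the worst-case error across that grid.

```python
# Purely illustrative sketch (synthetic numbers, not silicon data): fit a
# linear slope model, delay = d0 + ks*slew + kc*cload, to a hypothetical
# nonlinear cell characterization and report the worst-case error.

import itertools

def characterized_delay(slew, cload):
    """Stand-in for a nonlinear characterization table (hypothetical)."""
    return 20.0 + 0.45 * slew + 2.0 * cload + 0.015 * slew * cload  # ps

slews  = [10, 40, 80, 160]   # input transition times, ps (hypothetical grid)
cloads = [2, 8, 16, 32]      # output loads, fF (hypothetical grid)

# Calibrate the linear model at the middle of the grid, as a simple LSM would.
s0, c0 = 40, 8
d0 = characterized_delay(s0, c0)
ks = (characterized_delay(80, c0) - d0) / (80 - s0)   # slope vs. slew
kc = (characterized_delay(s0, 16) - d0) / (16 - c0)   # slope vs. load

worst = max(
    abs((d0 + ks * (s - s0) + kc * (c - c0)) - characterized_delay(s, c))
    / characterized_delay(s, c)
    for s, c in itertools.product(slews, cloads)
)
print(f"worst-case linear-model error on this grid: {worst * 100:.1f}%")
```

Even this mildly nonlinear synthetic table pushes the linear fit to nearly 20 percent error at the grid corners, and the error grows with the range of slews and loads the model must cover.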
Timing (along with extraction, power and signal integrity) is, and will continue to be, one of the widening technology gaps between design and tool innovation. Physical synthesis has promised to close the gap; however, we see continued design iterations at 90 nm and below. Current modeling, analysis and implementation tools have proved inadequate in capturing and analyzing the 90-nm challenges. These and the anticipated new challenges must be solved for the 50-nm technologies.

Vassilios Gerousis is Chief Scientist at Infineon Technologies (Munich, Germany). The author acknowledges the contributions of the following people for helping to make this work successful: Alfred Lang, Pierrick Pedron, Hanspeter Bissig and Knut Just.