Executive Comment: The leap to 0.13-micron poses risks
By Peter Hutton, EBN
November 26, 2001 (12:54 p.m. EST)
URL: http://www.eetimes.com/story/OEG20011126S0042
For companies that have already undertaken the expense and challenge of migrating from 0.25-micron chip designs to 0.18-micron, another leap forward to 0.13-micron may seem daunting. And yet the move to the 0.13-micron generation is inevitable and critical for two main reasons: the smaller geometry gains more real estate on each chip for designers striving to match or surpass Moore's Law, and, at least on the analog side, the increased performance of the silicon enables more complex functions.

This next evolution in chip design will impose its own shakeout of weaker players. Given the expense of 0.13-micron tooling, the new technology may raise entry barriers to semiconductor design similar to those that arose in semiconductor fabrication when fabs moved from 0.25- to 0.18-micron. It may soon be the case that only large vertically integrated semiconductor companies and specialist design houses will be able to play in the 0.13-micron space. The cost of entry will now include a great deal of expensive retooling and vastly more verification specialists.

The migration to 0.13-micron is under way because 0.18-micron technology is at its limit for chip designers seeking faster clock frequencies, higher gate and memory counts, and lower power dissipation. The 0.13-micron technology allows reduced gate delays, smaller die areas, more routing layers, and the use of copper rather than aluminum for the interconnect layers. Because of increased physical side effects, however, this migration presents a number of serious technical challenges. Since the average mask set for a 0.13-micron design will cost around $750,000, it is imperative that the errors introduced by these physical side effects be found and fixed before the chip is sent to the foundry for fabrication.
The quest to place more computing functions on a shrinking piece of silicon is driving demand for 0.13-micron design engineers along a curve that tracks Moore's Law. This demand is being addressed on the front end of the design process by reusing designs via standard intellectual property (IP) blocks. On the back end of the process, or physical design side, however, demand is still growing. If new 0.13-micron chips kept gate counts similar to those of 0.18-micron designs, 10% to 15% more engineering time would still be required to implement the additional verification stages. Since gate counts are climbing higher, however, design teams will by necessity need to grow by 15% to 50% compared with those required at 0.18-micron.

Larger design teams and greater engineering time are not driven only by more complex chips. The higher cost of masks requires that more time be spent on analysis before a test mask set is produced.

All 0.13-micron designs will still need some form of physical synthesis in order to achieve timing closure. For 0.13-micron designs, RTL is the likely starting point for physical design because designs can be partitioned more quickly and timing closure can be achieved. This is a major change in approach from 0.18-micron designs, as it means the physical designer will be responsible for the synthesis of the chip. This approach requires an equivalence-checking tool that verifies the RTL against the netlist, and there are currently several proven ones on the market.

Physical verification for 0.13-micron designs should be the same as at 0.18-micron, but 0.13-micron design rules are much more complicated, which will make tool capacity an issue. While most EDA tools have 64-bit versions available today, they still run into capacity limits as chip sizes surpass 25 million gates.

Test chips are increasingly used as cost-effective vehicles to check libraries and IP blocks.
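The team-scaling figures above can be turned into a back-of-envelope estimate. The sketch below uses the article's 10%-15% and 15%-50% ranges (taken at their midpoints); the 40-engineer baseline team is a hypothetical assumption for illustration, not a figure from the article.

```python
# Back-of-envelope estimate of 0.13-micron design-team growth from a
# 0.18-micron baseline, using the ranges quoted in the article.

def team_estimate(baseline_engineers, extra_verification=0.125,
                  overall_growth=0.325):
    """Estimate 0.13-micron team sizes from a 0.18-micron baseline.

    extra_verification: ~10-15% more engineering effort even at equal
        gate counts, due to added verification stages (midpoint 12.5%).
    overall_growth: 15-50% larger teams once rising gate counts are
        included (midpoint 32.5%).
    Returns (team at equal gate counts, team with growing gate counts).
    """
    equal_gates = baseline_engineers * (1 + extra_verification)
    growing_gates = baseline_engineers * (1 + overall_growth)
    return equal_gates, growing_gates

# Hypothetical 40-engineer 0.18-micron team:
equal, growing = team_estimate(40)
print(round(equal), round(growing))  # 45 53
```

At the midpoints, a hypothetical 40-engineer 0.18-micron team becomes roughly 45 engineers at equal gate counts and 53 once gate-count growth is included; at the top of the 50% range it would reach 60.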
This is especially true for analog blocks such as a 3.125-GHz SerDes. Specific flow test chips are now being used to prove the entire 0.13-micron design flow and, in particular, to verify the accuracy of the extraction and signal integrity tools. The test chip also allows designers to assess the libraries from the various library vendors in real silicon and to include a wide range of complex digital IP blocks. Based on results from the test chip, designers can then adjust their design flow to accurately model real silicon.

Even for experienced players in the industry, the shift to 0.13-micron design will require a tremendous leap in risk, expense, and engineering capability because of the more complex design and verification stages. The retooling and larger teams required will greatly increase costs and raise the stakes if a design goes wrong. For these reasons, few organizations will be capable of working at these geometries, and the effect will only intensify at 0.10-micron and below.

Peter Hutton is group director at Tality Corp.'s Livingston, Scotland design center.