Improving yield in RTL-to-GDSII flows
Joe G. Xi (07/11/2005 6:01 AM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=165700979
These days, "Design for Manufacturing" (DFM) and "Design for Yield" (DFY) are frequently used terms in the EDA industry. It has been said that yield should be the "fourth design parameter" after area, timing and power. In the past, improving yield was indeed left mostly to the fab and had little direct impact on IC designers. It is well understood that this is no longer the case at the 90-nm and 65-nm nodes, due to the difficulties of lithography and manufacturing: designers and design tools must start optimizing designs for yield before the chip tapes out. Despite the buzz, designers are left with few answers to two basic questions:
1. How do I measure the yield of my design?

2. What can be done in my design to manage and improve yield?

Although the industry has invested heavily in addressing these questions, there are inherent difficulties in providing complete answers. While the fabless model contributed significantly to the growth of the semiconductor industry in the 1990s, the separation between design and manufacturing has not favored the yield improvement effort. Most people acknowledge the need for collaboration between the design and manufacturing sides, and for accounting for manufacturing effects early in the design flow. So far, however, designers have relied on "recommended rules" passed down from the fabs, while other DFM or DFY solutions, such as resolution enhancement techniques (RET), focus primarily on the manufacturing side and offer little help during the design stage.

Yet yield is something a design company cannot afford to ignore. Consider this example: a 90-nm fab using six-layer copper interconnect and 26 mask layers produces 30,000 300-mm logic wafers per month, running at 90 percent of capacity. If each of these wafers generates on the order of $5,000 in revenue, a small 1 percent yield improvement would contribute an additional $1,500,000 in working product per month. This is very meaningful to a design company's top line, since new devices typically command a price premium early in their product life cycle.

Like area, timing, signal integrity and power, yield has become an additional design objective, and designs should be optimized for yield continuously throughout the entire design flow. This paper explains how to measure and improve defect-limited yield in the RTL-to-GDSII flow, using defect data that are readily available from fabs. These data can either be scaled to relative values or encrypted to protect the fab's proprietary information. Figure 1 shows the yield optimization approach. We start with a description of how to measure the yield impact of cells, vias and wires. This is followed by a description of techniques in the RTL-to-GDSII flow for improving yield, including cell optimization in physical synthesis, wire-spreading, via minimization and the replacement of single-cut vias with double-cut vias during routing. Using real fab data, we then discuss how the flow affects the final yield. The results clearly show that the RTL-to-GDSII flow, although not intended to solve all yield problems, equips designers with significant power to improve their designs' yield.
Figure 1- Yield as another design objective, optimized continuously throughout the design cycle.
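The arithmetic behind that figure is worth a quick check. A back-of-the-envelope calculation in Python, using only the numbers from the example above:

wafers_per_month = 30_000   # 300-mm logic wafer starts per month
revenue_per_wafer = 5_000   # dollars of revenue per wafer (order of magnitude)
yield_gain = 0.01           # a 1 percent yield improvement

extra_revenue = wafers_per_month * revenue_per_wafer * yield_gain
print(f"Additional working product per month: ${extra_revenue:,.0f}")
# Additional working product per month: $1,500,000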
Measuring Yield

Many factors lead to yield losses. For a given design, the RTL-to-GDSII flow may focus on the yield impact of three factors: wires, vias and cells. Yield data from the fab must be used for the analysis. Fortunately, almost every fab monitors the failure rates, or probabilities of failure, of design features such as wires, vias and cells. If the probability of failure of each feature is provided, the defect-limited yield can be calculated. This provides a quantitative measure of defect-caused yield loss that optimization tools can use as a criterion for improving the design's yield. To protect the fab's proprietary information, the yield data can be scaled to relative values.

The probability of failure of a standard cell designed for a given process can be obtained by characterizing and quantifying the different failure mechanisms. A library for a given process may therefore have a "yield view" just like its timing view. The data may vary according to the time range in the fab's process ramp, the fab where wafers are actually processed, and other physical mechanisms that contribute to random and systematic yield failures.

Via failure rates depend primarily on the number of cuts in the via (single-cut or double-cut) and only weakly on the metal overhangs around the via cuts. The probability of via failure comes directly from fab characterization. Typically, failure rates for single- and double-cut vias are available, and in some cases failure rates for single-cut vias with extra metal overhang.

Critical area analysis can be used to obtain the yield impact of wire failures. As shown in Figure 2, random particle defects due to contamination lead to wire opens or shorts and are the major source of random yield loss. Wire segments are analyzed using critical area analysis together with the fab's random defect size distributions for shorts and opens. This produces the yield impact of wire open-defects based on wire width, and of wire short-defects based on the spacing between a wire and its nearest neighbor.
Figure 2- Critical area analysis for short and open defects of a given size.
This is shown graphically in Figure 3 below. For a given defect size r, the critical area CA(r) is the area in which the center of a defect of radius r must fall in order to cause a failure. Fabs can measure the defect size distribution DSD(r), the probability of occurrence of defects as a function of defect size r; the distribution typically falls off as 1/r³. Integrating the product of the two over all defect sizes then gives the failure rate due to defects:

failure rate = ∫ CA(r) · DSD(r) dr
Figure 3- Critical area, defect size distribution, and failures per defect size.
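The integral lends itself to a direct numerical sketch. The Python fragment below is illustrative only: the constants are placeholders rather than fab data, the parallel-wire critical-area expression is a textbook simplification, and the Poisson limited-yield model Y = exp(-lambda) is a common assumption rather than something stated above.

import math

R_MIN = 0.05   # smallest characterized defect size, microns (placeholder)
R_MAX = 2.0    # largest defect size integrated over (placeholder)
K = 1e-6       # normalization of the defect size distribution (placeholder)

def dsd(r):
    # Defect size distribution: expected defects per unit area and unit
    # size, falling off as 1/r^3 as described above.
    return K / r**3

def critical_area_short(r, spacing, length):
    # Critical area for a short between two parallel wires: a defect of
    # radius r bridges them when its center falls in a band of width
    # (2r - spacing) running along the wires.
    return max(0.0, 2.0 * r - spacing) * length

def failure_rate(spacing, length, steps=4000):
    # lambda = integral of CA(r) * DSD(r) dr, midpoint rule on [R_MIN, R_MAX].
    dr = (R_MAX - R_MIN) / steps
    return sum(critical_area_short(R_MIN + (i + 0.5) * dr, spacing, length)
               * dsd(R_MIN + (i + 0.5) * dr) * dr for i in range(steps))

# Poisson limited yield: Y = exp(-lambda). Wider spacing means higher yield.
for spacing in (0.14, 0.28):
    lam = failure_rate(spacing, length=1e5)
    print(f"spacing {spacing} um: lambda = {lam:.3f}, yield = {math.exp(-lam):.3f}")

Doubling the wire spacing roughly halves lambda in this toy setup, which is exactly the effect the wire-spreading step described later exploits.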
Chip Prototyping for Yield

The primary benefit of being able to measure yield is that yield information becomes available early in the design cycle. Chip prototyping has proved to be a powerful way to quickly achieve closure on area, timing and power. The early design decisions, which have the biggest impact on the final design, can be made with wire information that is close to the final layout. At the prototyping stage, trial routing can be performed and yield analyzed to evaluate designs, e.g. to compare one floorplan against another.

Optimizing Cells for Yield

As mentioned earlier, each cell layout may be characterized with a "yield view" or "yield cost" that captures its impact on yield. In general, cell designs with the same functionality may have different layouts and different yield costs; for example, the same cell may have a seven-track layout or a five-track layout. For a given process, a "higher-yield" library can be created by designing a higher-yield version of each cell in the library.

As shown in Figure 4, the physical synthesis tool may swap in high-yield cells for low-yield ones, much as it does for timing or power optimization. This process needs to be timing-driven, as the new cells should not worsen the timing of the design. As with timing optimization, different options should be provided for pre- and post-route optimization, with consideration given to utilizing empty spaces. Because this optimization adds no new nets or pins, it increases cell area utilization but not routing congestion.
Figure 4- Cell libraries will include a "yield view", just like .lib for timing. Physical synthesis may trade off yield against timing, area, power, etc.
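A minimal sketch of what such a timing-driven swap could look like. The two-variant library, the cell names, delays and yield costs are all hypothetical; a production tool evaluates timing across full paths and corners rather than against a single slack budget as done here.

from dataclasses import dataclass

@dataclass
class CellVariant:
    name: str
    delay_ps: float    # from the library's timing view
    yield_cost: float  # from the library's "yield view" (failure probability)

# Hypothetical library: each function has a dense baseline layout and a
# relaxed, slightly slower, higher-yield variant.
LIBRARY = {
    "NAND2": [CellVariant("NAND2_std", 28.0, 1.0e-7),
              CellVariant("NAND2_hy",  31.0, 0.4e-7)],
    "INV":   [CellVariant("INV_std", 15.0, 0.6e-7),
              CellVariant("INV_hy",  16.0, 0.3e-7)],
}

def swap_for_yield(path, slack_ps):
    # Timing-driven greedy substitution: accept a lower-yield-cost variant
    # only while the accumulated extra delay still fits in the path slack.
    chosen, budget = [], slack_ps
    for func, current in path:
        best = current
        for v in LIBRARY[func]:
            extra = v.delay_ps - current.delay_ps
            if v.yield_cost < best.yield_cost and extra <= budget:
                best = v
        budget -= best.delay_ps - current.delay_ps
        chosen.append(best)
    return chosen

path = [("NAND2", LIBRARY["NAND2"][0]), ("INV", LIBRARY["INV"][0])]
print([cell.name for cell in swap_for_yield(path, slack_ps=5.0)])
# ['NAND2_hy', 'INV_hy'] -- both upgrades fit within the 5 ps slack budget

Because the swap only substitutes alternative layouts of the same cell, no new nets or pins appear, matching the observation above that utilization rises but congestion does not.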
Optimizing Routing for Yield

Routing is a key design stage for yield improvement, and many techniques are carried out at this stage. They include:

- wire-spreading;
- via minimization;
- replacing single-cut vias with double-cut vias.
Even without accounting for yield, today's routers face tremendous challenges: handling highly utilized designs, optimizing timing and signal integrity, and supporting the exponentially increasing number of design rules required by nanometer processes. Adding yield as another objective burdens the router with additional complexity and constraints, and requires a routing architecture that is fundamentally intelligent and scalable. Figure 5 shows a routing architecture with a built-in infrastructure for concurrent analysis and optimization. Its superthreaded architecture can also scale linearly by taking advantage of multiple CPUs on the network.
Figure 5- (a) Intelligent routing infrastructure for concurrent analysis and optimization during routing. (b) Superthreaded architecture to handle the computational complexity.
Wire-spreading can be used both to reduce capacitance and to improve yield, by reducing the probability that a defect particle shorts two neighboring wires; both needs are served by the same underlying wire-spreading algorithm.

Vias are another common failure mechanism. Single-cut vias fail 10X-100X more often than double-cut vias, but a double-cut via takes more area and complicates routing, as shown in Figure 6. Routers use two techniques to swap in double-cut vias.
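Before turning to those two via techniques, a toy fragment makes the wire-spreading intuition concrete. This is not the router's actual algorithm, which works locally and honors timing and DRC constraints; it merely shows that equalizing gaps maximizes the minimum spacing, which is what shrinks the short-circuit critical area.

def spread_wires(xs, span):
    # Re-place the same wires at uniform pitch across the available span,
    # preserving order; bunched wires get pushed apart.
    pitch = span / (len(xs) - 1)
    return [round(i * pitch, 3) for i in range(len(xs))]

def min_gap(xs):
    return min(b - a for a, b in zip(xs, xs[1:]))

before = [0.0, 0.14, 0.28, 1.40]         # bunched track positions, microns
after = spread_wires(before, span=1.40)  # [0.0, 0.467, 0.933, 1.4]
print(f"min spacing: {min_gap(before):.3f} -> {min_gap(after):.3f} um")

Plugged into the earlier critical-area sketch, that extra spacing translates directly into a lower short failure rate, with no change in wire width and hence no penalty on open-defect yield.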
Figure 6- Via-opens are one of the dominant failures. Single-cut vias fail 10X-100X more often than double-cut vias. But a double-cut via takes more area and complicates routing.
As illustrated in Figure 7, the conventional approach inserts double-cut vias as a post-process after routing is complete. The goal of this operation is to replace each single-cut via, wherever possible, with a lower-yield-cost alternative: a double-cut via, or a single-cut via with extra metal overhang. The swap must not create new DRC or antenna violations and should have minimal impact on timing; the via origin and routing wires remain unchanged. This limitation keeps via swapping simple, but in some designs it hurts the swapping ratio because the solution space explored is limited.
Figure 7- A comparison of double-cut via insertion approaches. The conventional approach performs tradeoffs as post-processing after routing. The concurrent approach adds double-cut vias while performing routing.
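A toy model of the post-process approach is sketched below. The grid, the names and the legality check are invented for illustration: each via may gain a second cut in the adjacent cell if nothing else occupies it, while via origins and wires stay put, which is precisely the limitation that caps the swapping ratio.

def swap_vias_post_route(vias, occupied):
    # Post-route pass: upgrade single-cut vias to double-cut where the
    # second cut fits; via origins and routed wires are left untouched.
    result = {}
    for via in sorted(vias):
        x, y = via
        second_cut = (x + 1, y)        # candidate spot for the extra cut
        if second_cut not in occupied and second_cut not in vias:
            occupied.add(second_cut)   # claim the spot, keeping it DRC-clean
            result[via] = "double_cut"
        else:
            result[via] = "single_cut"
    return result

vias = {(0, 0), (1, 0), (5, 3)}
occupied = {(6, 3)}                    # existing routing uses this cell
for via, kind in sorted(swap_vias_post_route(vias, occupied).items()):
    print(via, kind)
# (0, 0) single_cut -- blocked by the neighboring via at (1, 0)
# (1, 0) double_cut -- (2, 0) is free
# (5, 3) single_cut -- blocked by existing routing at (6, 3)

Two of the three vias stay single-cut because the layout is already frozen. The concurrent approach described next avoids such blockages by giving the router freedom to move wires and vias while it still has it.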
A new approach performs double-cut via swapping concurrently during routing, with the router dynamically adjusting its via strategy. On many real designs, the concurrent approach has consistently improved double-cut via ratios by 15-25 percent over the post-process approach. It also incorporates single-cut via reduction to improve overall yield, since single-cut vias have much higher failure rates.
Figure 8- An example of concurrent routing and double-cut via insertion results.
Optimizing Overall Yield

The methods described above have been used to analyze and optimize yield for many different designs. The results validate that measuring yield using fab data, rather than relying on "recommended rules", is a much more scientific and effective way to address yield. The most important observations from these results are discussed in the conclusion below.
Conclusion

With the addition of yield as a new design objective, it is important to directly estimate the impact of different optimization techniques on yield. An accurate analysis tool is critical to making good decisions during optimization; this is shown by the fact that aggressive concurrent via-swapping does not always give a better result. Focusing only on the double-cut via swapping rate while ignoring the accompanying wire-length increase can actually reduce yield in some cases. Various optimization techniques throughout the RTL-to-GDSII flow can nonetheless improve yield by a significant amount, and providing yield measurement and optimization capabilities allows designers to determine which decisions in the design flow will give them the best yield results.

Joe Xi is the vice president of product marketing for digital IC products at Cadence Design Systems Inc.