Model-based approach allows design for yield
EE Times: Design News
Ara Markosian and Mark Rencher (04/18/2005 9:00 AM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=160900827
With the move to sub-100nm feature processes, predictions of initial yields are in the single digits. As a result, yield is moving up into the design flow. Most of the EDA, manufacturing-equipment and semiconductor intellectual-property companies are proposing a variety of visions and solutions for design for manufacturing (DFM) and are starting to talk about yield. These competing visions and solutions can be confusing to a designer or manufacturing engineer who is faced with the challenge of getting chips to yield. In this paper, we identify the drivers of DFM as well as its limitations and discuss the emergence of a new segment of EDA: design for yield (DFY). We examine the requirements for yield-aware EDA tools and flows, present a classification of defect and failure types, introduce the concept of a unified yield model, and discuss the application of models within a standard EDA design flow.

Industry drivers

Historically, yield was considered a manufacturing-only problem, and the yield awareness of traditional EDA tools was limited to geometric design-rule compliance. There is now a consensus in the industry that yield issues must become part of the design flow; what is still being debated is which approach or method will produce the best yield improvement. Quite recently a new term began appearing in publications: DFY. The emergence of this term not only signals the crystallization of a new segment of EDA, but also helps clarify what DFM really is.

Are DFM and DFY different?

Generally speaking, DFM and DFY are broad notions that comprise design methodologies, flows, and tools as well as business models. If the terms are defined without excessive generalization, however, significant differences emerge between the two. Consider the tools first. DFM tools can be defined as those that traditionally ensure manufacturability of a design by verifying that it adheres to rules defined by the fab. Design rule check (DRC) tools are a perfect example: they rely on fab information abstracted into a DRC rule deck and provide a binary "yes or no" answer on whether a design complies with feature-based design rules. This lets the designer confirm that the IC can be manufactured and can function. The approach becomes problematic, though, as feature sizes keep shrinking. Design rules that used to fit in a few pages now fill volumes of "hard rules" and "soft rules" that sometimes even conflict with each other. The reason is that the processes can no longer be adequately described by a set of design rules such as minimum spacing or minimum width. Today's spacing rules are conditional, and metal-fill rules are no longer simple percentages but window-based rules. The situation will only become more complicated, and it will soon be obvious that the right language for describing what happens in the fab is not a rule, but a model. Figure 1 presents a simple chart comparing rule-based and model-based approaches to failure analysis of a pair of wires.
Figure 1 — The probability of failure is described as a binary function with rules, whereas it can be more accurately described with a model-based methodology.
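To make the contrast concrete, the following Python sketch compares the two views of the same spacing check: a binary rule gives only pass or fail, while a model assigns a probability of failure that keeps improving as spacing grows. The exponential form, the 140nm rule value, and the s0_nm parameter are illustrative assumptions, not values from any real rule deck or yield model.

```python
# Illustrative sketch only: contrasts a binary DRC-style rule with a
# model-based probability of failure for the spacing between two wires.
# The exponential form and all numeric constants are assumptions chosen
# for illustration, not values from any real process model.

import math

MIN_SPACING_NM = 140.0   # hypothetical "hard" design rule


def rule_based_check(spacing_nm: float) -> bool:
    """DRC-style answer: pass/fail, with no notion of how risky a pass is."""
    return spacing_nm >= MIN_SPACING_NM


def model_based_fail_prob(spacing_nm: float, s0_nm: float = 60.0) -> float:
    """Toy model: probability of a short decays smoothly as spacing grows.

    s0_nm sets how quickly risk falls off; a real yield model would be
    calibrated from fab data rather than guessed like this.
    """
    return math.exp(-spacing_nm / s0_nm)


if __name__ == "__main__":
    for spacing in (120.0, 140.0, 160.0, 250.0):
        print(f"spacing {spacing:6.1f} nm | rule: "
              f"{'pass' if rule_based_check(spacing) else 'FAIL'} | "
              f"model P(fail) ~ {model_based_fail_prob(spacing):.4f}")
```

Note that the rule draws a hard line at 140nm, while the model still distinguishes a marginal 160nm spacing from a comfortable 250nm one.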
As opposed to rules, a yield model first of all captures each step of the IC manufacturing process in its full complexity. The model uses a simplified, abstracted form to describe physical, mechanical and/or chemical phenomena whose behavior differs depending on the features of the design. Along with this description, a model can provide a quantitative characterization of the phenomena in terms of the probability of failure of a design (or some part of the design) under certain conditions. From a design perspective, any manufacturability-related design alteration must also ensure that design intent is not altered.

EDA tools that can understand yield models can be classified as DFY tools. These will be a new generation of tools that not only ensure manufacturability of a design, but can also characterize the vulnerability of a specific design to process imperfections and defects that may be multi-variant and non-linear. But this is only one half of what DFY solutions must provide. Most importantly, DFY tools must be able to take advantage of the model-based characterization of the design and make changes to it so that the design becomes more tolerant of process imperfections and variations. Of course, an EDA tool cannot improve the fab or reduce the probability of defects and process variations, but it can administer an "immunity shot" to the design so that it is more tolerant of, and therefore more immune to, "infection" during the manufacturing process.

Another significant difference between DFM and DFY tools lies in the core technology on which they are built. Being "rule-based," or binary, DFM tools can provide constraint-based solutions; technology migration tools are one example. Being "model-based," DFY tools not only can adhere to the constraints derived from applying a model to the specifics of the design, but can also understand the yield implications and solve an optimization problem that maximizes the design's yield. In short, the DFM vs. DFY discussion can be summarized in two comparisons: "rule-based" vs. "model-based" tools, and "constraint-based" vs. "yield-optimized" solutions.

Prior to the 0.18-micron process generation, DFM tools were sufficient to provide satisfactory yields. At 0.13 micron and below, although DFM tools have modernized and become more sophisticated, they are no longer sufficient to solve the low-yield problem; this becomes especially critical at 65nm and below. DFY solutions are necessary at all steps of the design process, especially in the very back end. Besides the augmentation of existing DFM tools, it is not hard to guess that a new breed of tools is coming into existence. For example, much as physical optimization tools (which perform post-initial-placement logic optimization) were "invented" in the late 1990s to address timing closure, a new kind of tool is emerging to address manufacturability and "yieldability" at the post-routing design stage. The first tools of this kind appeared at last year's Design Automation Conference, and there will certainly be more of them in the next few years; a toy illustration of the kind of decision they make is sketched below.
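As a toy illustration of the "constraint-based" vs. "yield-optimized" distinction at the post-route stage, the sketch below picks the spacing of two parallel wires in a fixed-width routing channel, reusing the toy failure model from the earlier sketch. Wire spreading as the optimization move, the channel dimensions, and the failure model are assumptions made up for this sketch, not a description of any particular tool's algorithm.

```python
# Toy illustration of "constraint-based" vs. "yield-optimized" decisions for
# the spacing of two parallel wires inside a fixed-width routing channel.
# Wire spreading, the channel dimensions, and the exponential failure model
# are assumptions invented for this sketch, not any tool's actual algorithm.

import math

CHANNEL_WIDTH_NM = 600.0   # hypothetical free space across the channel
WIRE_WIDTH_NM = 100.0      # hypothetical wire width
MIN_SPACING_NM = 140.0     # hypothetical minimum-spacing design rule


def short_fail_prob(spacing_nm: float, s0_nm: float = 60.0) -> float:
    """Assumed model: probability of a short decays with spacing."""
    return math.exp(-spacing_nm / s0_nm)


def constraint_based_spacing() -> float:
    """A rule-driven tool only has to satisfy the constraint, so it may
    leave the wires at the minimum legal spacing."""
    return MIN_SPACING_NM


def yield_optimized_spacing() -> float:
    """A model-driven tool uses the available slack: with two wires in the
    channel, the largest achievable spacing is the channel width minus the
    two wire widths, which minimizes the modeled failure probability."""
    max_spacing = CHANNEL_WIDTH_NM - 2 * WIRE_WIDTH_NM
    return max(MIN_SPACING_NM, max_spacing)


if __name__ == "__main__":
    for name, spacing in (("constraint-based", constraint_based_spacing()),
                          ("yield-optimized", yield_optimized_spacing())):
        print(f"{name:>16}: spacing = {spacing:.0f} nm, "
              f"P(short) ~ {short_fail_prob(spacing):.4f}")
```

Both answers are DRC-clean; only the model-driven one knows that the extra slack is worth spending on yield.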
What kind of yield models will DFY tools need?

We described DFY tools as those that can understand yield models and apply them to a specific design. As with DFY itself, there is no commonly shared understanding of the term "yield model," and it is not easy to provide a generic definition of one. What makes the definition even harder is that there is no commonly accepted classification of yield problems and their types. The right classification, or taxonomy, of failure mechanisms, behaviors and types will help in defining yield models. When a manufactured IC is tested, there are two major types of failures: catastrophic failures, in which the chip does not function at all, and parametric failures or deficiencies, in which the chip is functionally correct but does not perform within its specified tolerance or performance range. Whatever process defects are encountered or failure mechanisms are at work, functionality failures and parametric failures can be of the following types:
Figure 2 — Taxonomy of yield loss
Defect phenomena are usually classified as random or systematic. Strictly speaking, every systematic defect has a random component, and conversely, every random defect has some systematic nature (random defect clustering, for example, is usually related to an "equipment signature"). For the sake of clarity, however, it is better to keep the distinction between random and systematic defects. Random particle defects (usually called extra/missing-material random defects, or short/open random defects) are caused by contamination. Systematic defects are conditioned by the specifics of the design layout or the equipment. Each defect phenomenon requires a substantial research and development effort before a satisfactory yield model can be developed; the individual defect phenomena and the state of their research deserve a separate discussion of their own.

What is a yield model?

We have already mentioned the components that a yield model of a defect phenomenon must comprise. First is the description of the phenomenon, be it physical, chemical or mechanical. Next, a quantitative characterization of the layout in relation to the phenomenon must be defined; these characteristics of the layout can be called the yield metrics of the design. Let's illustrate the notion of yield metrics with the example of random particle defects. The theory of random defects (now accepted as an industry standard) defines the yield metrics of the model as "critical region" and "critical area." The critical region for random extra/missing-material particles is the region of the layout where, if the center of a particle lands there, it creates a short/open failure of the design. The area of the critical region is called the critical area. Figure 3 illustrates critical regions for extra-material/short random defects.
Figure 3 — Graphical representation of yield metrics (critical area) for extra-material/short random defects.
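As a concrete, deliberately simplified reading of the critical-area definition, the sketch below computes the critical area for shorts between two parallel wires: a particle of size x can bridge wires separated by spacing s only if x exceeds s, and the band of layout in which its center can land then has width x - s along the parallel run. This first-order geometry and the numeric values are assumptions for illustration; real critical-area extraction works on full layouts.

```python
# Simplified critical-area sketch for shorts between two parallel wires.
# Assumption (first-order approximation): a defect of size x bridges wires
# separated by spacing s only if x > s, and the band of layout in which its
# center can land then has width (x - s) along a parallel run of length
# run_length. All numbers are made up for illustration.

def short_critical_area(defect_size: float, spacing: float,
                        run_length: float) -> float:
    """Critical area (length units squared) for extra-material/short
    defects of a given size, for one pair of parallel wires."""
    if defect_size <= spacing:
        return 0.0                      # too small to bridge the gap
    return run_length * (defect_size - spacing)


if __name__ == "__main__":
    spacing_um, run_um = 0.2, 100.0     # hypothetical geometry, in microns
    for x_um in (0.15, 0.25, 0.4, 0.8):
        area = short_critical_area(x_um, spacing_um, run_um)
        print(f"defect {x_um:.2f} um -> critical area {area:.1f} um^2")
```

Larger defects see a larger critical area, which is why the defect size distribution discussed next matters so much.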
Next, the probabilistic/variability aspect of a defect phenomenon must be captured, and the probability of design failure (catastrophic or parametric) must be defined for a given yield metric. This probabilistic aspect of the model can be completely random, as it is for random contamination, or it can describe the process-variation distribution for a given metric of a systematic defect phenomenon. Returning to our example of random particle defects, the variability function is the defect size distribution function: the number of defects larger than a given size per square centimeter (see Figure 4).
Figure 4 — The defect size distribution function captures the number of defects larger than a given size per square centimeter.
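Putting the pieces together, the sketch below combines the critical-area metric from the previous sketch with a defect size distribution and turns them into an average number of failures and a yield estimate. The 1/x^3 tail of the size distribution and the Poisson and Negative Binomial yield formulas are well-known forms from random-defect yield theory; every numeric parameter (d0, x0, alpha, the geometry) is an assumption chosen only for illustration.

```python
# Sketch: from critical area and a defect size distribution to a yield number.
# The 1/x**3 tail and the Poisson / Negative Binomial formulas are standard
# forms from random-defect yield theory; every numeric parameter below
# (d0, x0, alpha, geometry) is a made-up value for illustration only.

import math


def short_critical_area(defect_size: float, spacing: float,
                        run_length: float) -> float:
    """Same first-order critical-area model as in the previous sketch."""
    return max(0.0, run_length * (defect_size - spacing))


def defect_density(x: float, d0: float, x0: float) -> float:
    """Defects per unit area per unit size: flat up to x0, then ~1/x**3."""
    return d0 / x0 if x <= x0 else d0 * x0 ** 2 / x ** 3


def average_failures(spacing: float, run_length: float,
                     d0: float, x0: float, x_max: float = 10.0,
                     steps: int = 10_000) -> float:
    """Numerically integrate critical_area(x) * defect_density(x) over x."""
    dx = x_max / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += (short_critical_area(x, spacing, run_length)
                  * defect_density(x, d0, x0) * dx)
    return total


def poisson_yield(lam: float) -> float:
    """Poisson yield model: Y = exp(-lambda)."""
    return math.exp(-lam)


def negative_binomial_yield(lam: float, alpha: float) -> float:
    """Negative Binomial yield model: Y = (1 + lambda/alpha) ** -alpha."""
    return (1.0 + lam / alpha) ** (-alpha)


if __name__ == "__main__":
    # Hypothetical numbers: geometry in microns, d0 in defects/um^2 per um,
    # and only one wire pair; a full chip sums over the entire layout.
    lam = average_failures(spacing=0.2, run_length=100.0, d0=1e-4, x0=0.1)
    print(f"average failures lambda ~ {lam:.6f}")
    print(f"Poisson yield           ~ {poisson_yield(lam):.4f}")
    print(f"Negative Binomial yield ~ {negative_binomial_yield(lam, 2.0):.4f}")
```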
Finally, the model must provide the rule or formula by which the yield metrics and process-variability failures translate into an average number of failures, and from there into a yield number. In our random-defect example, the average number of failures can be derived from the metrics and the defect size distribution, and the yield can then be calculated with the Poisson formula, the Negative Binomial formula, or any other custom formula.

Do we need a standard?

Each entry of the "yield taxonomy" table needs a yield model that adequately describes all of the components mentioned above. The semiconductor industry needs to ask an important question: is there a common language that can be used to describe these yield models? Today the answer is no, but the benefits of developing and standardizing such a language would be tremendous. First, every DFY tool could use the same format to import the necessary yield models, so tool users would not have to translate models from one input format to another. Second, alongside a standard unified yield-modeling language, standard interfaces to DFY tools at different levels could be defined, allowing providers to develop open and interoperable DFY tools. Third, a common model would allow the behavior of manufacturing to be communicated without revealing the specific process details a foundry wants to keep proprietary. Finally, the first two benefits enable another that is very important for the business side of DFY: while the format of the yield-model description language can be standard and public, the yield model itself does not have to be. Someone who wants to keep a yield model as private IP can, for example, deliver a binary application that performs the yield-metric and failure-probability calculations and plugs into a DFY tool through a standard API.

Summary

The unprecedented decline of yield in IC production is directly related to increasingly complex processes. To solve the yield decline, the EDA flow must expand to consider phenomena that require a multidisciplinary approach, and start moving from rule-based to model-based approaches. We will see the industry adopt common yield models that a designer can place into his or her design flow. But this will only happen through the joint participation and collaboration of designers, process technologists, EDA developers, and material and equipment suppliers. With this joint effort, design teams will be able to use a design flow that yields results!

Mark Rencher is president of Pivotal Enterprises, Inc. He has more than 17 years of business development, strategy planning and marketing experience in the electronics and software industries. Ara Markosian is the CTO and a cofounder of Ponte Solutions, Inc. He has more than 16 years of experience in EDA and software management, and has held positions as director of engineering at Monterey Design Systems, Aristo Technology, and Compass Design Automation.
All material on this site Copyright © 2005 CMP Media LLC. All rights reserved.