Multiprocessing speeds IC physical verification
Mark Miller, Roland Ruehl, Eitan Cadouri and Christopher Clee (09/12/2005 9:00 AM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=170701629
As manufacturing processes transition to more advanced technologies at 90nm and below, design signoff requirements become increasingly rigorous and time-consuming. With each step to a more advanced process technology, designs comprise larger numbers of process layers and transistors per die, and require more design rule checks before manufacturing handoff. The resulting dramatic increase in physical verification runtimes and slow turnaround of design errors has led to protracted physical verification cycles, threatening signoff deadlines and even time to market. Designers correspondingly require more powerful physical verification capabilities that deliver high-accuracy results from complex manufacturing and yield-enhancing checks, with the throughput required to keep their projects on track.
Designers can no longer rely on conventional physical verification methods to deal effectively with the complex rule decks and massive data sets associated with emerging nanometer designs. Newer approaches to physical verification address this new wave of requirements, delivering capabilities that reduce time to error for today's increasingly large, "virtually flat" designs. These new verification methods scale to take full advantage of cost-effective distributed processing resources, while also implementing advanced feature sets that reduce rule complexity.
Still, accuracy alone is insufficient. Current verification methods force engineers to wait until long design verification runs complete before revealing design error information. Slow time to error directly delays the development cycle and forces additional iteration cycles, as engineers work through a series of design fixes and wait overnight (or potentially many days) for a new batch of results. Next-generation verification solutions deliver design error information directly into the design and debug environment concurrently with the physical verification run. This greatly accelerates time to error and reduces the number of rework cycles, since it is easier to spot and terminate erroneous runs. By boosting performance and shrinking time to error, next-generation physical verification capabilities reduce the number and duration of verification iterations, speeding development of nanometer designs.
Limitations of conventional approaches
Conventional verification methods face an increasing set of challenges to their ability to maintain speed and accuracy for emerging nanometer designs. Sub-wavelength lithography and the nanometer manufacturing effects associated with fine-line geometries and tightly packed interconnect conspire to fuel an explosion in the complexity of the design rules and the volume of data associated with physical design and verification. Indeed, process requirements have evolved more rapidly than the physical verification design rules that describe them.
With earlier process technologies, chip designers were able to rely on simple combinations of primitive commands to build comprehensive rule decks. Traditional rule sets for design-rule checking (DRC) fulfilled specific requirements for physical placement, spacing and enclosure, and designers could expect that designs passing DRC would achieve acceptable yield levels with little further concern for the detailed physical effects of the manufacturing process. Nanometer process requirements force designers to combine increasingly long and convoluted sequences of primitive commands to address complex, yet common, verification scenarios.
A single check might contribute hundreds of lines to the rule deck. The aggregation of these primitive rules dramatically increases the complexity of the rule deck, significantly complicating the tasks of writing and debugging verification decks. Worse, a sequence of primitive rules generally only approximates the designer's intention for more complex checks, eroding accuracy at a time when greater detail is necessary to uncover potential design faults.
In processing these rule decks, current-generation tools introduce additional bottlenecks to the verification process. Conventional tools distribute processes by rule and by layer, resulting in uneven task sizes. Invariably, the recurrence of a small number of commands on one or two layers (such as sizing commands on the lower metal layers) determines the minimum runtime for the entire deck, because conventional multiprocessing schemes cannot further decompose them. Even if all other tasks individually occupy only a few minutes of CPU time each, these "long pole" command sets that take many hours to complete delay delivery of the final results to waiting teams of engineers.
As an analogy, consider a car capable of traveling at the speed of light between two cities. If the car becomes stuck in traffic for 10 minutes as it nears its destination, the overall journey will take no less than 10 minutes, regardless of how quickly it is able to complete the rest of the trip.
Figure 1: Graph shows the theoretical performance limit using a conventional multiprocessing model for a large 90-nm design. There is little benefit to allocating more than 6 CPUs to this physical verification run.
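To see how a single long pole caps throughput, consider the following rough Python sketch (the task times and deck profile are hypothetical, not measurements from any tool). It models a rule deck as a set of indivisible per-rule tasks and computes the best possible wall-clock time on a given number of CPUs; the speedup saturates as soon as the largest task dominates, which is the behavior shown in Figure 1.

```python
# Illustrative sketch (not from the article): why one "long pole" rule caps
# the speedup of rule-level multiprocessing. Task times are hypothetical.

def best_case_runtime(task_hours, cpus):
    """Lower bound on wall-clock time when each rule/layer task is
    indivisible: the run can never finish faster than the longest single
    task, nor faster than the total work divided evenly across CPUs."""
    return max(max(task_hours), sum(task_hours) / cpus)

# Hypothetical rule-deck profile: many small checks plus one lower-metal
# sizing loop that alone takes 10 hours and cannot be decomposed further.
tasks = [0.2] * 200 + [10.0]   # 200 x 12-minute checks + one 10-hour task
total = sum(tasks)             # 50 hours of single-CPU work

for cpus in (1, 2, 4, 8, 16, 64):
    t = best_case_runtime(tasks, cpus)
    print(f"{cpus:3d} CPUs: >= {t:5.1f} h  (speedup <= {total / t:4.1f}x)")
# The speedup saturates once the 10-hour task dominates, no matter how many
# CPUs are added -- the "stuck in traffic" 10 minutes of the analogy above.
```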
These long poles typically arise when rules have an associated halo, or zone of influence, that encompasses neighboring structures. This interdependence between design elements invalidates the fundamental assumption of conventional parallel verification methods that design elements can be processed independently of each other.
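A rough calculation illustrates the cost of that interdependence (the tile and halo dimensions below are hypothetical). If a layout is tiled for parallel checking, any geometry within one halo of a tile boundary must also be loaded into that tile, so finer partitioning means a growing fraction of redundant work:

```python
# Illustrative sketch (hypothetical numbers): how a rule's halo forces
# overlapping work when a layout is tiled for parallel checking.

def halo_overhead(tile_um, halo_um):
    """Fraction of extra geometry each tile must load because shapes within
    one halo of the tile boundary can influence results inside the tile."""
    loaded = (tile_um + 2 * halo_um) ** 2   # tile plus its surrounding halo band
    useful = tile_um ** 2                   # area the tile actually reports on
    return loaded / useful - 1.0

halo = 5.0  # hypothetical 5-um zone of influence for a lithography-derived rule
for tile in (1000.0, 250.0, 50.0, 10.0):
    print(f"tile {tile:6.0f} um: {100 * halo_overhead(tile, halo):6.1f}% redundant work")
# Finer tiling (more parallelism) means each tile re-processes proportionally
# more of its neighbors' geometry, so the tiles are not truly independent.
```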
Nanometer processes require a greater proportion of checks derived from lithographic process limitations, so the halo effect has become more prevalent, affecting more structures and further limiting the efficiency of multiprocessor execution. With more rules carrying an associated halo, long poles become all but inevitable, limiting the opportunities to speed verification run times through conventional multiprocessing methods.
Even for common verification checks, designers face increased complication from the combination of the inefficiencies of primitive commands and growing halo effects. For example, conventional latchup checks are built from repetitive loops of primitive "SIZE" and "AND" commands. The "SIZE" command has its own inherent halo, so repeated use of this command results in another long pole for the overall run.
Performance challenge
Along with the halo effect, the challenging nature of nanometer physical design further erodes the effectiveness of other divide-and-conquer methods such as hierarchical processing. Hierarchical processing emerged in tools launched in the late 1990s as a means of improving performance. Today, however, designs comprise larger numbers of polygons connected by abutment on the lower metal layers. More cells and cell shapes mean a greater number of permutations and combinations of abutted cells. Cells connected by abutment cannot be partitioned hierarchically, which limits partitioning opportunities in the overall design. The result is a virtual flattening of large nanometer designs, returning designers to their headaches of the late 1990s as physical verification runs once again stretch beyond the acceptable limit of overnight turnaround. The solution to the present-day physical verification performance challenge clearly requires a paradigm shift of the same magnitude as the shift from flat to hierarchical processing.
Current-generation physical verification tools are unable to effectively exploit ubiquitous Linux-based server farms because their rule-based multiprocessing models cannot overcome the long poles in the rule deck. Consequently, the performance curve flattens dramatically after 6-8 CPUs, and designers face unacceptably long physical verification runs even on large and expensive multiprocessor systems. Worse, designers must currently wait until the end of one of these long runs to identify and begin fixing errors, because current-generation physical verification tools employ a sequential approach to reporting design rule violations. Designers are therefore forced to wait until a lengthy physical verification run completes before learning about multiple instances of a single error that have a simple solution (such as an error in a particular cell) but nevertheless require a complete rerun. The result is protracted development cycles and tapeout delays.
A new paradigm
Because the conventional hierarchical processing model is broken, effective solutions to nanometer design verification require an entirely new approach. Indeed, addressing the emerging requirements for nanometer physical verification requires a synergistic and cohesive set of solutions to the problems described earlier in this article, going beyond what any single-point tool can deliver.
The combination of increasing design size, nanometer challenges and rule complexity calls for an integrated solution that combines greater efficiency in processing complex rule decks, more effective distributed processing models, and more efficient data management.
Emerging physical verification solutions leverage new partitioning methods that remove the multiprocessing limitations inherent in current-generation tools. Here, optimizing compilers employ a combination of partitioning strategies to achieve a very high level of utilization across massively parallel compute resources. By adopting a compiled approach, it is possible to analyze the design and rule deck at compile time to determine the best allocation of tasks across available computing resources, even across heterogeneous networks. Each multi-strategy partitioning solution is based on the unique combination of rule deck requirements, design characteristics and available computing resources. The result is very efficient resource leveling across large compute farms, eliminating long poles and speeding runtimes. This approach exhibits near-linear performance improvement across distributed systems numbering potentially into many hundreds of CPUs.
Figure 2: Emerging physical verification solutions integrate massively parallel processing, compiler technology, high-performance dedicated engines and real-time results reporting.
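The effect of such resource leveling can be sketched with a simple scheduling model (again with hypothetical task times; the greedy longest-task-first assignment below merely stands in for whatever partitioning strategy a real compiler would select). Once the long-pole task is decomposed into data-parallel pieces, runtime continues to fall as CPUs are added:

```python
# Illustrative sketch (hypothetical task times): how splitting long-pole tasks
# into data-parallel pieces and then leveling work across CPUs restores
# near-linear scaling.
import heapq

def makespan(task_hours, cpus):
    """Assign tasks longest-first to the least-loaded CPU; return wall-clock time."""
    loads = [0.0] * cpus
    heapq.heapify(loads)
    for t in sorted(task_hours, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)

small_checks = [0.2] * 200          # same hypothetical deck as before
long_pole = [10.0]                  # one indivisible 10-hour sizing loop
split_pole = [10.0 / 40] * 40       # same work, split into 40 spatial chunks

for cpus in (8, 32, 128):
    rule_only = makespan(small_checks + long_pole, cpus)
    leveled = makespan(small_checks + split_pole, cpus)
    print(f"{cpus:4d} CPUs: rule-level {rule_only:5.2f} h   leveled {leveled:5.2f} h")
# Once no single piece dominates, the runtime keeps shrinking as CPUs are added.
```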
At the same time, newer verification approaches provide an alternative to the primitive-level operations used in conventional tools. For example, dedicated, high-level commands that directly implement complex checks (such as width-dependent spacing rules, latchup checks and density checks) replace large chains of primitive commands that otherwise complicate rule deck authoring and hinder processing throughput.
Designed to exploit massively parallel distributed processing techniques, these dedicated processing engines directly improve performance. Rule decks do not necessarily need to be rewritten to take advantage of dedicated engines: the optimizing compiler automatically detects opportunities to run them based on recognition of pre-determined patterns in the input rule deck. Furthermore, this approach improves accuracy, because each engine is designed for a specific verification task rather than relying on the approximations inherent in a sequence of primitive verification operations.
For example, in the latchup check mentioned above, tools using conventional primitive-level operations cannot detect situations where a notch in the well causes the shortest path between two points across the notch to be greater than the Manhattan distance between those two points. Higher-level commands implemented as dedicated processing engines handle this situation correctly. In addition, the use of higher-level commands means that one or two lines replace potentially hundreds of lines in a rule deck, dramatically improving rule deck development and maintenance.
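As a rough illustration of compile-time pattern recognition, the sketch below (with an entirely hypothetical rule-deck syntax and dedicated-engine command) scans a deck for a block of alternating SIZE/AND commands and substitutes a single latchup call; real rule-deck languages and detection logic will differ:

```python
# Illustrative sketch: how an optimizing compiler might recognize a chain of
# primitive commands and substitute a dedicated engine, so existing decks need
# not be rewritten. The deck syntax, layer names and "LATCHUP_CHECK" command
# are hypothetical.

def is_primitive_latchup_loop(lines):
    """Heuristic: a block that strictly alternates SIZE and AND commands is
    treated as a hand-rolled latchup loop built from primitives."""
    if len(lines) < 2 or len(lines) % 2:
        return False
    return all(
        line.split()[0].upper() == ("SIZE" if i % 2 == 0 else "AND")
        for i, line in enumerate(lines)
    )

def compile_deck(deck_text):
    """Pass blocks through unchanged unless they match a known primitive
    pattern, in which case emit a single dedicated-engine command instead."""
    compiled = []
    for block in deck_text.strip().split("\n\n"):   # blank-line-separated checks
        lines = [ln for ln in block.splitlines() if ln.strip()]
        if is_primitive_latchup_loop(lines):
            compiled.append("LATCHUP_CHECK nwell pdiff MAXDIST 15.0")  # dedicated engine
        else:
            compiled.append(block)
    return "\n\n".join(compiled)

deck = """SIZE nwell BY 0.5
AND nwell pdiff
SIZE nwell BY 0.5
AND nwell pdiff

EXT metal1 metal1 LT 0.14"""
print(compile_deck(deck))
# One dedicated command replaces the four-line loop; the spacing check passes
# through untouched.
```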
Figure 3: Dedicated latchup check delivers a faster and more accurate measurement than piecewise estimation using primitive commands.
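The accuracy point can be demonstrated with a small geometric example (hypothetical layout on a coarse unit grid). Measuring distance through the well material, as a dedicated engine would, yields a longer figure than the Manhattan distance across the notch:

```python
# Illustrative sketch (hypothetical layout, coarse 1-unit grid): a dedicated
# latchup-style check measures distance through the well, so a notch makes
# the true path longer than the straight Manhattan distance that chained
# primitive operations effectively approximate.
from collections import deque

# 1 = well material, 0 = notch cut out of the well.
WELL = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1],
]
A, B = (2, 1), (2, 5)   # two points on opposite sides of the notch

def path_length_through_well(grid, start, goal):
    """Shortest 4-connected path staying on well cells (BFS), in grid units."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return None  # unreachable: points lie in disconnected well regions

manhattan = abs(A[0] - B[0]) + abs(A[1] - B[1])
print("Manhattan distance:    ", manhattan)                              # 4
print("Distance through well: ", path_length_through_well(WELL, A, B))   # 8
```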
As lithography and manufacturing effects impose more constraints on upstream design decisions, early feedback becomes paramount. Consequently, any practical solution to emerging verification challenges must accommodate a growing need for data interchange among a broader array of tools in the design environment. Indeed, emerging physical verification solutions are turning to open architecture approaches such as the OpenAccess database for data management. Providing a data model optimized for semiconductor design, the OpenAccess architecture enables fluid data interchange and even helps shorten time to error. OpenAccess-based physical verification tools write errors to the design database as they are discovered, permitting early identification of design errors. If designers find serious errors as a verification run progresses, they can terminate the run and immediately begin working on a fix.
The ability to reduce time to error provides an important advantage in decreasing the number and magnitude of design iterations common in today's physical verification environments. Indeed, this combination of enhanced performance and open data architectures allows design teams to broaden the scope of physical verification checks to address subtle nanometer manufacturing issues that have a significant impact on yield. Rather than forcing designers to wait days for the minimum set of results that enables design signoff, such physical verification systems fully exploit the power of massively parallel processing environments and deliver an earlier, deeper, more accurate assessment of the impact of design decisions on yield in nanometer process technologies.
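A minimal sketch of this concurrent reporting model appears below (the check names, violation data and termination policy are hypothetical; a production flow would write into the OpenAccess design database rather than an in-memory queue). Checks stream each violation as soon as it is found, and a monitor can abort the run once a systematic error becomes obvious:

```python
# Illustrative sketch: streaming violations to the design/debug environment
# while checks are still running, instead of reporting everything at the end
# of the batch. A thread-safe queue stands in for the shared design database.
import queue
import threading
import time

errors = queue.Queue()          # stand-in for the shared design database
stop_event = threading.Event()  # lets the designer abort a doomed run early

def run_check(rule, violations, delay_s):
    """Hypothetical check: report each violation as soon as it is found."""
    for cell in violations:
        if stop_event.is_set():
            return
        time.sleep(delay_s)              # pretend to do real work
        errors.put((rule, cell))         # visible to debug tools immediately

checks = [
    threading.Thread(target=run_check, args=("metal1.spacing", ["ram_bitcell"] * 5, 0.01)),
    threading.Thread(target=run_check, args=("nwell.latchup", ["io_pad"] * 2, 0.05)),
]
for t in checks:
    t.start()

# Designer-side monitor: if one cell keeps producing the same violation, the
# fix is obvious and the rest of the run is wasted effort, so terminate early.
seen = {}
while any(t.is_alive() for t in checks) or not errors.empty():
    try:
        rule, cell = errors.get(timeout=0.1)
    except queue.Empty:
        continue
    seen[(rule, cell)] = seen.get((rule, cell), 0) + 1
    print(f"violation: {rule} in {cell} (x{seen[(rule, cell)]})")
    if seen[(rule, cell)] >= 3:
        print("systematic error in one cell; stopping run to fix it first")
        stop_event.set()
for t in checks:
    t.join()
```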
Mark Miller is the vice president of business development of Design-for-Manufacturing (DFM) at Cadence. Prior to joining Cadence, he held similar positions with Tera Systems and Synchronicity.
Roland Ruehl is the engineering group director of Physical Verification Products at Cadence. Previously, he was with PDF Solutions, where he managed the development of the DFM software tools that PDF employs in its Yield Improvement Services.
Eitan Cadouri is the vice president of Research and Development, Physical Verification Products, at Cadence. His responsibilities include managing R&D for the Assura, Dracula, Diva and Physical Verification System product lines. Prior to joining Cadence, he was co-founder and president of WaferYield, whose tool suite was designed to improve fab throughputs and yields.
Christopher Clee is the senior product marketing manager for Physical Verification Products at Cadence. He began his EDA career at Praxis Electronic Design, a pioneer in language-based design and logic synthesis, and has run the gamut of the industry, with technical and product marketing roles supporting ASIC, full-custom and PCB design flows.