Verification = IP = Verification = IP =... - Part 2
By Dr. Aart de Geus
In the second part of this two-part article, Aart de Geus, Chairman and CEO of Synopsys, considers the importance of convergence between verification and IP in tackling design quality and productivity.
Pressure Leads to Design Re-spins
One manifestation of this pressure is the number of advanced chip design re-spins that are due to the discovery of functional errors. A study by Collett International Research suggests that:
- 61 percent of all chip designs require at least one re-spin.
- Of these, 71 percent have functional or logic bugs.
With all the complex technology-related problems associated with submicron chip design, it is surprising to find that functional errors are still the number one cause of re-spins.
History Repeats Itself
As an industry, we have overcome major problems like this in the past. The reality is that design productivity has done a remarkable job of keeping up with design capacity. It is useful to consider for a moment how this has occurred.
Waves of productivity appear to be the norm. Initially, productivity increases with the successful adoption of a new approach. Incremental improvements in design tools and methodology drive productivity upwards, but then tend to reach a natural plateau as the tools and techniques reach their limits. Once productivity flattens out, we start hearing about the “design gap” again.
Meanwhile, EDA companies have continued to invest in the development of even newer tools and methodologies, but they often meet resistance from designers who are hesitant to change their methodology or adjust to a new tool. When design finally gets so difficult that everyone is ready to give up, the design community accepts that change is required, adopts the new tools and/or methodologies, and the next wave of productivity begins.
Figure 1. Productivity Waves: Design
It’s no accident that the productivity waves correspond to levels of design abstraction. Many years ago, we designed with polygons. These became transistors, which then became gates. Design using the hardware description languages Verilog and VHDL has been mainstream for about 15 years. Now, we are in the age of IP and complete design platforms.
These productivity waves must be supported by a corresponding set of tools to create the designs. Hence, along the way the primary EDA software products have shifted from pattern generation to place and route, synthesis, physical synthesis and a number of other key technologies.
IP Complexity Increases
One way to help close the design gap is to integrate existing semiconductor IP. If you examine any complex chip today, you will invariably find a collection of sophisticated IP blocks. They can come from the company fabricating the chip itself or from third-party sources such as EDA vendors or IP providers. There may be a processor core, a DSP core, many blocks of memory, interconnect modules, on-chip buses and so on. The critical question is: how quickly can you design something like this and produce an overall system that is functionally correct?
IP blocks themselves have evolved considerably in the last few years. ARM cores are a good example. During the last 10 years, their complexity has increased approximately 24x as new features have been added to successive instruction sets.
Even with interconnect blocks such as USB and PCI, we have seen a substantial increase in complexity with new versions of standards. For example, the USB 2.0 host specification represents a 23x complexity increase over the original. And it’s important to remember that design complexity is just one parameter; the increase in verification complexity is actually much higher.
Embedded in these interconnect IP blocks are highly sophisticated protocols that require equally sophisticated verification technology to ensure they are functionally correct – both in their own right and in relation to the chip containing them. In fact, sophisticated verification is a common theme for both IP blocks and the entire chip.
Verification Evolution
Creating a complex design requires writing many lines of Verilog or VHDL code. Because people write the code, by definition there will be some degree of human error. To trap such errors, verification teams are throwing more and more verification vectors at the design. Two years ago, some companies were reporting that their designs required about 200 billion simulation cycles for the purpose of verification. A conservative extrapolation would put that figure at one trillion cycles today. On its own, this approach to simulating complex designs is neither scalable nor sustainable. What we need is a new way to verify complex designs.
Looking back, verification productivity has evolved in waves in much the same way as design productivity. Our verification history began with SPICE, closely followed by a plethora of languages developed for gate-level simulation. While some of these did extremely well, it quickly became apparent that the combination of schematic capture and simulation was hitting the productivity barrier for more complex designs.
This barrier was broken by Verilog and VHDL, which were used to help verify gate-level designs even before these languages were commonly used to capture the designs themselves. As newer verification techniques such as pseudo-random test generation, coverage metrics and assertions emerged, new verification languages developed to help streamline the verification process.
Figure 2. Productivity Waves: Verification
Despite these advances, the verification productivity gap has been widening over the past four to five years. For all the reasons identified earlier, designs have become much harder to verify. In practical terms, schedules have slipped and costs have exceeded budgets as expensive mask sets have had to be repeated for each silicon re-spin. It is clearly time for a new advancement to close the design and verification gaps for the foreseeable future.
A Common Language for Design and Verification
Today, there is a great opportunity to advance both design and verification productivity. In the last year, many techniques have been assembled in a common language: SystemVerilog. The powerful language constructs of SystemVerilog allow designers to produce designs that are inherently more verifiable, and to do so more quickly than with traditional Verilog or VHDL. A SystemVerilog-based verification flow supports a wide range of effective verification techniques, both dynamic and static.
To cite a parallel example, about 10 years ago, there was a fundamental shift from testing chips after the design had been completed to an approach based on designing for test. In a similar vein, verification can no longer be thought of as something that happens only after the design work has been done; rather, it must be built into the design process so that bugs are discovered early enough to prevent schedule and cost catastrophes. SystemVerilog enables a true design-for-verification (DFV) approach.
The SystemVerilog momentum is growing rapidly, and we are seeing much customer interest, as well as related tool announcements and book publications. In fact, one book recently announced is being jointly authored by ARM and Synopsys and will describe a SystemVerilog-based reference verification methodology. This type of effort will enable more design and verification teams to adopt SystemVerilog quickly and easily.
There are several reasons why the industry should be excited about the move toward SystemVerilog. The language fosters more concise descriptions, resulting in fewer lines of code. SystemVerilog also provides backwards compatibility with Verilog, enabling effective design and verification reuse. Regarding VHDL compatibility, the onus is on the EDA community to ensure we support the integration of VHDL into the SystemVerilog flow.
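To give a feel for that conciseness, here is a minimal, hypothetical sketch (not drawn from any real design) of a small state machine using SystemVerilog's enumerated types and always_ff/always_comb blocks, constructs that replace the separate reg/wire declarations and hand-coded state parameters of traditional Verilog:

```systemverilog
// Hypothetical fragment: a small handshake state machine written with
// SystemVerilog shorthand (enum types, logic, always_ff/always_comb).
module req_ack_fsm (
  input  logic clk,
  input  logic rst_n,
  input  logic req,
  output logic ack
);
  // An enumerated state type replaces hand-coded parameters and makes
  // waveforms and coverage reports self-describing.
  typedef enum logic [1:0] {IDLE, BUSY, DONE} state_t;
  state_t state, next;

  // always_ff documents sequential intent and lets tools check it.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) state <= IDLE;
    else        state <= next;

  // always_comb flags incomplete branches or missing assignments.
  always_comb begin
    next = state;
    unique case (state)
      IDLE: if (req)  next = BUSY;
      BUSY:           next = DONE;
      DONE: if (!req) next = IDLE;
    endcase
  end

  assign ack = (state == DONE);
endmodule
```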
SystemVerilog Benefits for Verification
SystemVerilog offers a significant speed increase over current verification flows, primarily because it integrates testbench capabilities within the language and avoids a separate co-simulation approach. SystemVerilog also offers much more help for verification engineers by supporting advanced DFV and coverage-based approaches.
Easy specification of assertions (statements of design intent) is at the heart of DFV. The ability to capture design intent in a single point of specification is an extremely powerful asset. SystemVerilog provides very expressive assertion constructs that run efficiently in simulation and work with formal verification tools as well. By exhaustively exploring the state space of a design block, formal analysis can enable a more comprehensive verification by complementing traditional simulation-based approaches.
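As a simple illustration of such a single point of specification (the signal names and timing bounds here are hypothetical), a SystemVerilog assertion states a request/grant rule once, and the same property can then be checked in simulation or handed to a formal tool:

```systemverilog
// Hypothetical example: design intent captured once as SVA properties.
module arbiter_props (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // Intent: every request must be granted within 1 to 4 cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt : assert property (p_req_gets_gnt)
    else $error("req was not granted within 4 cycles");

  // Intent: a grant never appears without a request in the same cycle.
  a_no_spurious_gnt : assert property (
    @(posedge clk) disable iff (!rst_n) gnt |-> req
  );
endmodule
```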
Constrained-random verification is another advanced technique that has shown very positive results. With sophisticated chip designs, one of the key issues is how to take a complete chip and stimulate it with a set of inputs to cover as much of the state space as possible. In the past, a huge number of random tests were applied to the chip in the hope that any erroneous behavior would be caught. Using constraints to guide the generation of the random vectors significantly increases verification efficiency and thoroughness.
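A minimal sketch of what this looks like in SystemVerilog follows; the transaction fields, constraints, and the drive_txn task are hypothetical, chosen only to show how constraints legalize and bias the random stimulus:

```systemverilog
// Hypothetical bus transaction: random fields plus constraints that steer
// generation toward legal and interesting stimulus.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  len;
  rand bit        write;

  // Constraints encode legality and bias instead of hand-written vectors.
  constraint c_legal_addr { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
  constraint c_len        { len inside {[1:64]}; }
  constraint c_mostly_wr  { write dist {1 := 7, 0 := 3}; }  // favor writes
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (1000) begin
      if (!t.randomize()) $fatal(1, "randomization failed");
      // drive_txn(t);  // hypothetical task that applies t to the DUT
    end
  end
endmodule
```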
When running constrained-random verification, it is important to know which parts of the chip are being well exercised and which are not. Coverage metrics provide this information; the engineer specifies appropriate functional coverage points and the simulator tracks whether or not these points have been hit. This helps gauge verification completion and determine where to focus next if coverage is not sufficient. Since SystemVerilog is both a design and a verification language, it includes all the testbench constructs necessary for easy definition of coverage points and effective constrained-random verification.
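Continuing the hypothetical transaction above, a covergroup (sketched below, with made-up bins) tells the simulator which categories of stimulus to track, so the coverage report shows what has and has not been exercised:

```systemverilog
// Hypothetical functional coverage for the bus transactions: the simulator
// reports which bins have been hit, guiding where to focus next.
module coverage_example;
  covergroup bus_cov with function sample(bit [31:0] addr,
                                          bit [7:0]  len,
                                          bit        write);
    cp_kind : coverpoint write { bins rd = {0}; bins wr = {1}; }
    cp_len  : coverpoint len   { bins single      = {1};
                                 bins short_burst = {[2:8]};
                                 bins long_burst  = {[9:64]}; }
    // Cross coverage: have long bursts been seen for both reads and writes?
    kind_x_len : cross cp_kind, cp_len;
  endgroup

  bus_cov cov = new();
  // In a real testbench, cov.sample(t.addr, t.len, t.write) would be
  // called after each generated transaction.
endmodule
```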
Constrained-Random and Formal Verification Get Results
The quality of the assertions and constraints that are applied to the verification process will have an impact on which bugs are found. A whole new wave of optimization and state space search techniques can be applied to this problem. What has been astonishing to discover is that, through these techniques, we have developed the ability to find bugs that are buried 20,000 cycles deep in a design block. We have also been surprised to uncover issues in IP blocks that have been in production for a while – IP that was previously simulated very comprehensively using standard conformance vectors. There is no guarantee of being able to cover the entire state space, but with better engines and the exploitation of available compute power, we can be far more successful and efficient in tackling verification.
Traditionally, the design flow has defined individual functional blocks based on a paper specification, prior to implementation and integration into the main design. Only then can those blocks be verified in the context of the overall chip. Replacing the paper specification with assertions and constraints added to the functional descriptions of the blocks enables the designer's knowledge and intent to be captured for reuse along with the IP. Capturing this knowledge is a job for experts – people who really understand the intricacies of the latest standards and protocols.
The assertions and constraints constitute verification IP, which experienced chip designers know is an essential companion to design IP. For common types of design structures and standard interfaces, this verification knowledge can be contained within a verification IP library that is independent of the original design. Once captured, it is there for all to reuse in the future.
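As a small sketch of how such verification IP might be packaged (a hypothetical checker for a generic valid/ready handshake, not any particular standard), the assertions live in a module of their own, independent of any specific design that implements the protocol:

```systemverilog
// Hypothetical reusable protocol checker: verification IP kept separate
// from any particular design that implements the handshake.
module valid_ready_checker #(parameter int MAX_WAIT = 16) (
  input logic        clk,
  input logic        rst_n,
  input logic        valid,
  input logic        ready,
  input logic [31:0] data
);
  // Once valid is raised, it must stay high with stable data until ready.
  a_valid_held : assert property (
    @(posedge clk) disable iff (!rst_n)
      valid && !ready |=> valid && $stable(data)
  );

  // Every valid must be accepted within MAX_WAIT cycles.
  a_accepted : assert property (
    @(posedge clk) disable iff (!rst_n)
      valid |-> ##[0:MAX_WAIT] ready
  );
endmodule
```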
Figure 3. SystemVerilog + Verification IP = Coverage and Effectiveness
With this approach, the quality and efficiency of the process of verifying individual blocks are improved and can be scaled to the entire chip, as the same assertions and constraints are migrated to the chip level. To be effective, assertions and constraints must be easy to describe and must be automatically verifiable.
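One way that migration can be done in SystemVerilog is with the bind construct, which attaches the checker above to a design instance without touching the design source; the dma_engine module and its port names below are hypothetical:

```systemverilog
// Hypothetical bind: attach the reusable checker to every instance of a
// design module, first in block-level testbenches and later in the full chip.
bind dma_engine valid_ready_checker #(.MAX_WAIT(32)) u_out_chk (
  .clk   (clk),
  .rst_n (rst_n),
  .valid (out_valid),
  .ready (out_ready),
  .data  (out_data)
);
```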
IP at the Root of DFV
The design community can be absolutely instrumental in ensuring that DFV is established firmly and rapidly. When adopting any IP, whatever the source, the key question is “what verification methodology was applied to this IP?” It should be at least as good as the verification methodology that you use on your own designs. As for your own designs, you should be specifying assertions and constraints and using the most advanced verification techniques available to find as many bugs as possible, as quickly as possible.
The age of DFV is here, and capturing IP with verification assertions and constraints is absolutely central to ensuring success. IP is the root of design for verification. They really are two sides of the same coin.
Chairman and Chief Executive Officer Dr. Aart de Geus, co-founder of Synopsys in 1986, is considered one of the world's leading experts on logic simulation and logic synthesis, with more than 25 published papers on the subject. Dr. de Geus is a Fellow of the IEEE and the third-ever recipient of IEEE's Circuits and Systems (CAS) Society's prestigious Industrial Pioneer Award. He holds an MSEE from the Swiss Federal Institute of Technology and a Ph.D. in electrical engineering from Southern Methodist University. Dr. de Geus currently serves as the Chairman of the Board for the Silicon Valley Manufacturing Group, and he was named the 2002 CEO of the Year by Electronic Business.
©2004 Synopsys, Inc. Synopsys and the Synopsys logo are registered trademarks of Synopsys, Inc. All other company and product names mentioned herein may be trademarks or registered trademarks of their respective owners and should be treated as such.