IP quality requires verification focus
By Thomas L. Anderson, EE Times
July 18, 2003 (10:54 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030718S0020
When a system-on-chip project team considers using intellectual property, one of the hardest problems is assessing the quality of the available options. No aspect of quality is more important than verification. If the IP block, or virtual component (VC), has not been well verified in standalone mode and in previous system-on-chip uses, it may contain bugs that could be fatal to the project.
To assess verification quality, VC providers must take the right steps during verification and their customers must ask the right questions. The first step is for the VC provider to properly identify the verification "hot spots" in the design and to focus on them during development. Common examples of verification hot spots include:
- industry-standard buses and interfaces, which must be proven compliant;
- bus bridges between buses and interfaces, which have complex corner cases;
- arbiters, which are notoriously difficult to verify fully in simulation;
- FIFOs and memory buffers, which must not lose or corrupt their contents (a checker sketch follows this list); and
- any signals crossing between multiple, asynchronous clock domains.
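To make the FIFO hot spot concrete, the sketch below (in Python, purely for illustration; the class and method names are invented rather than taken from any tool) shows the kind of scoreboard check a black-box environment applies: every word written into the FIFO must come back out exactly once, in order and uncorrupted.

```python
from collections import deque

class FifoScoreboard:
    """Reference model: data written to the FIFO must be read back
    in the same order, with nothing lost, duplicated, or corrupted."""

    def __init__(self):
        self.expected = deque()   # golden copy of outstanding data
        self.errors = []

    def on_write(self, data):
        # Record every word accepted by the DUT's write port.
        self.expected.append(data)

    def on_read(self, data):
        # Each word leaving the read port must match the oldest
        # outstanding write (FIFO ordering), bit for bit.
        if not self.expected:
            self.errors.append(f"underflow: read {data!r} with nothing pending")
            return
        want = self.expected.popleft()
        if data != want:
            self.errors.append(f"corruption: expected {want!r}, got {data!r}")

    def on_end_of_test(self):
        # Anything still outstanding at the end of the test was silently dropped.
        if self.expected:
            self.errors.append(f"data loss: {len(self.expected)} words never read")
        return self.errors
```

The end-of-test check matters: silent data loss is one of the hardest FIFO failures to spot from the outputs alone.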
Hot spots can be effectively verified by a combination of black-box, assertion-based and formal techniques. The VC development team should develop a verification environment around the component and a robust plan to run a thorough series of black-box tests. It is critical to test every operating mode and to document the test process for prospective customers. For many types of VCs, it is also helpful to generate pseudorandom sequences of instructions or transactions that may hit some of the corner cases missed by the test plan.
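As a minimal sketch of such pseudorandom stimulus (in Python; the transaction fields, weights and ranges are invented for illustration rather than drawn from any real bus standard), a seeded generator keeps every failing run reproducible while still wandering into corner cases the directed test plan missed:

```python
import random

# Hypothetical bus transaction: field names and legal values are illustrative.
LEGAL_BURST_LENGTHS = (1, 4, 8, 16)

def random_transactions(seed, count):
    """Yield pseudorandom but legal transactions; the fixed seed makes
    any failure reproducible for debug."""
    rng = random.Random(seed)
    for _ in range(count):
        burst = rng.choice(LEGAL_BURST_LENGTHS)
        yield {
            "cmd":   rng.choices(("READ", "WRITE"), weights=(3, 1))[0],
            "addr":  rng.randrange(0, 1 << 32, 4),      # word-aligned address
            "burst": burst,
            "data":  [rng.getrandbits(32) for _ in range(burst)],
        }

# Example: drive 10,000 transactions at the DUT through a driver (not shown).
for txn in random_transactions(seed=2003, count=10_000):
    pass  # driver.send(txn)
```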
Black-box techniques stimulate only the inputs of the VC and check only the outputs. Assertion-based verification is a well-established way to supplement these tests. The combination detects bugs earlier, thereby potentially speeding diagnosis and repair, and also finds bugs that might not be detected at the VC outputs.
To achieve these results, VC designers should specify their assertions as they write their RTL code and develop high-quality protocol monitors for all interfaces. Any simulation, whether test case or pseudorandom, should run with the assertions and monitors turned on.
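The following sketch suggests what a simple protocol monitor looks like in executable form (Python is used here only for illustration; real monitors are written as assertions in the RTL or testbench language). It checks one common handshake convention, assumed for the example: once valid is asserted, the data must hold steady and valid must stay high until ready accepts the transfer.

```python
class HandshakeMonitor:
    """Checks a simple valid/ready handshake rule: once valid is raised,
    data must stay stable and valid must not drop until ready accepts it."""

    def __init__(self):
        self.pending = None      # data captured when valid was first seen
        self.violations = []

    def sample(self, cycle, valid, ready, data):
        if self.pending is not None:
            if not valid:
                self.violations.append(f"cycle {cycle}: valid dropped before ready")
                self.pending = None
                return
            if data != self.pending:
                self.violations.append(f"cycle {cycle}: data changed while waiting for ready")
                self.pending = data          # resynchronize to avoid repeated reports
        if valid and ready:
            self.pending = None              # transfer completed this cycle
        elif valid and self.pending is None:
            self.pending = data              # start waiting for ready
```

Because the monitor watches the interface cycle by cycle, it flags violations no matter which test, directed or pseudorandom, happens to be running.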
Certain errors can be detected by static RTL analysis using "automatic" assertions. Examples of such errors include combinational feedback loops, dead code, unreachable FSM states and poor synchronization of signals crossing clock domains. Using a static analysis tool improves the quality of the VC by eliminating those problems. Some tools can also identify areas of the design not well covered by assertions and even recommend appropriate assertions in certain cases.
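Under the hood, a check such as the combinational-loop rule reduces to graph analysis on the elaborated design. The toy sketch below (Python, with an invented netlist-as-dictionary representation) shows the essential step: a depth-first search that reports any cycle of combinational dependencies.

```python
def find_combinational_loops(netlist):
    """Depth-first search over a combinational dependency graph.
    `netlist` maps each signal to the signals it combinationally depends on
    (registers break the path and are simply omitted). Returns one path
    per loop found."""
    WHITE, GREY, BLACK = 0, 1, 2        # unvisited / on current path / done
    color = {sig: WHITE for sig in netlist}
    loops = []

    def dfs(sig, path):
        color[sig] = GREY
        for src in netlist.get(sig, ()):
            if color.get(src, WHITE) == GREY:            # back edge: a loop
                loops.append(path[path.index(src):] + [src])
            elif color.get(src, WHITE) == WHITE:
                dfs(src, path + [src])
        color[sig] = BLACK

    for sig in netlist:
        if color[sig] == WHITE:
            dfs(sig, [sig])
    return loops

# Toy example: 'a' feeds 'b' feeds 'a' with no register in between.
print(find_combinational_loops({"a": ["b"], "b": ["a"], "c": ["a"]}))
```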
Assertions are the cornerstone of formal verification, a technique especially appropriate for VC development. Not even the most thorough test plan and most extensive set of pseudorandom simulations can completely cover all behavior in a VC. Exhaustive by their very nature, formal tools analyze far more behavior than a lifetime of simulation. Formal analysis targets each assertion, trying either to prove that there is no possible way to violate it or to produce a counterexample that shows how the assertion can be violated, thereby revealing a design bug.
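That prove-or-refute behavior can be pictured as exhaustive reachability analysis: visit every state the design can reach from reset, check the assertion in each, and report a shortest trace to any violation. The toy explicit-state sketch below (Python; the two-counter "design" and the function names are invented, and production formal tools work symbolically rather than by enumeration) is meant only to convey that flow.

```python
from collections import deque

def check_assertion(initial_state, next_states, assertion):
    """Breadth-first reachability over a finite state space.
    Returns (True, None) if the assertion holds in every reachable state,
    or (False, trace) with a shortest counterexample trace otherwise."""
    parent = {initial_state: None}
    queue = deque([initial_state])
    while queue:
        state = queue.popleft()
        if not assertion(state):
            trace = []                       # rebuild the path back to reset
            while state is not None:
                trace.append(state)
                state = parent[state]
            return False, list(reversed(trace))
        for nxt in next_states(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None

# Toy "design": two 3-bit counters that should never both hold 7.
def next_states(s):
    a, b = s
    return {((a + 1) & 7, b), (a, (b + 1) & 7)}

ok, trace = check_assertion((0, 0), next_states, lambda s: s != (7, 7))
print(ok, trace)   # False, plus a shortest path from reset to the violation
```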
A VC with many configuration options and operating modes provides users a high degree of flexibility. But that same flexibility greatly complicates the verification process. When there can be millions of possible configurations, formal verification is the only way to address the complexity.
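To put a number on that flexibility: a hypothetical VC with just 20 independent two-way configuration options already has 2^20, more than a million, distinct configurations, far too many to simulate one by one.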
Any VC that implements a standard interface cannot be fully verified in isolation; it's critical that the VC provider's interpretation of the standard match the rest of the industry. Any available compliance mechanisms, such as checklists from a standards body or widely used protocol monitors, are helpful. Silicon testing in actual systems may also be required.

When evaluating a potential VC provider, the SoC team should also ask several key questions:
- Did the provider follow the process outlined above?
- What was the test plan, and how much pseudorandom simulation was run?
- Which blocks were verified with formal verification?
- Why were those blocks identified as verification hot spots?
- Has the VC been in silicon and, if so, in which applications?
VC customers should ask to see the results of RTL static analysis and coverage tools or should run those tools themselves as part of an evaluation or "incoming inspection" of the IP. Beyond code coverage, they should obtain metrics for the following:
- static structural coverage, to assess the density and quality of assertions;
- simulation structural coverage, to show how well the design was exercised;
- testbench automation tool functional coverage, including cross coverage (see the sketch after this list); and
- formal structural coverage, to measure stress testing beyond simulation.
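Cross coverage of the kind listed above amounts to counting which combinations of interesting values were actually exercised. The sketch below (Python, with invented coverpoint names and bins) shows a minimal collector a customer could run over logged stimulus during incoming inspection:

```python
from itertools import product

class CrossCoverage:
    """Counts hits for each (cmd, burst) combination and reports the
    crosses that were never exercised. Coverpoint names are illustrative."""

    def __init__(self, cmds, bursts):
        self.bins = {cross: 0 for cross in product(cmds, bursts)}

    def sample(self, cmd, burst):
        if (cmd, burst) in self.bins:
            self.bins[(cmd, burst)] += 1

    def report(self):
        missed = [cross for cross, hits in self.bins.items() if hits == 0]
        covered = 100.0 * (len(self.bins) - len(missed)) / len(self.bins)
        return covered, missed

# Example: score a log of transactions, e.g. from a generator like the earlier sketch.
cov = CrossCoverage(cmds=("READ", "WRITE"), bursts=(1, 4, 8, 16))
cov.sample("READ", 4)
cov.sample("WRITE", 16)
print(cov.report())   # percentage of crosses covered and the list of missed ones
```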
Finally, the SoC designers should receive any assertions and protocol monitors as part of the VC deliverables. Protocol monitors are critical for ensuring that the VC is being stimulated correctly after integration into the SoC, and they can also support formal verification of adjoining blocks. The assertions internal to the VC likewise have value in simulation and formal analysis, ensuring that the VC is always being used as the original designer intended.
Selecting a VC, and in the process trusting part of one's chip to someone else, is often a difficult decision for an SoC designer. While there are few guarantees in the IP business, the more informed the choice, the more likely the outcome will be successful. To minimize the chances of bugs in the VC or in the end product, VC providers should follow the guidelines discussed in this article, and VC customers should ask their providers informed questions.
Thomas L. Anderson, chairman of the VSIA's Functional Verification DWG, is technical marketing consultant for 0-In Design Automation (San Jose, Calif.).