Counterexamples and metrics drive assertion-based verification
Assertion-based verification (ABV) has recently emerged as the only functional verification methodology capable of keeping pace with increasingly complex integrated circuits. When using ABV, designers and verification engineers write assertions, or properties, describing how the design is intended to operate. A variety of verification algorithms and engines are then used to produce a verification result for each assertion. These engines include simulation, static formal verification and dynamic formal verification.

Assertions and verification engines provide the starting points for ABV, but successful users also require debugging support and metrics. This article describes the practical issues of using ABV to find design bugs and the requirements for success.

The assertion-based verification flow

In general, the ABV flow is: write assertions, verify the assertions, debug counterexamples, and determine what to verify next (see Figure 1). The objective is to find bugs, since every bug found with ABV is one less bug to be found in the lab. The result is good silicon, produced in less time and at less cost than with simulation alone.

Figure 1 - General ABV flow

From a design team viewpoint, one of the most critical steps in the ABV flow is diagnosing counterexamples. This is how design bugs are identified. Verification engines always produce one of three results for each assertion: a proof that the assertion can never be violated, a counterexample showing how it can be violated, or an indeterminate result when neither can be established within the engine's limits.
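To make the assertions themselves concrete, the following is a minimal sketch, in SystemVerilog Assertions, of the kind of property a designer might write for a simple request/grant interface. The module, the signal names (clk, rst_n, req, gnt) and the four-cycle bound are illustrative assumptions, not taken from the article.

    // Hypothetical checker for a request/grant handshake: every request
    // must be granted within one to four cycles.
    module req_gnt_checker (
      input logic clk,
      input logic rst_n,
      input logic req,
      input logic gnt
    );
      property p_req_gets_gnt;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:4] gnt;
      endproperty

      a_req_gets_gnt: assert property (p_req_gets_gnt)
        else $error("req was not granted within 4 cycles");
    endmodule

A checker like this can be exercised in simulation and handed unchanged to the static and dynamic formal engines described next.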
It is not enough to have verification engines prove assertions. A complete ABV methodology must employ engines that are thorough at finding counterexamples for complex assertions in large, complex designs. No single verification engine, whether it uses simulation, static formal verification or dynamic formal verification, can provide all results for all assertions in a complex design. Different algorithms can be optimized to find either proofs or counterexamples.

Deep counterexample (DCE) technology is a revolutionary static formal verification algorithm optimized to find counterexamples for complex properties in large designs. A DCE engine, such as 0-In Confirm, exhaustively searches for counterexamples starting from a single initial state of the design. Optimizing the algorithm to find only counterexamples in circuit descriptions allows DCE to achieve substantially deeper and more thorough analysis than previous formal algorithms. DCE complements traditional static model checking, which is better at finding proofs, and dynamic formal verification, which applies shallow exhaustive formal analysis starting from every cycle of a simulation test. Together, these engines provide the best and most efficient method to find proofs and counterexamples for the assertions.

A full ABV methodology needs an engine tuned for each goal. The 0-In ABV suite provides all three specialized formal verification engines: a static model checker to find proofs, a dynamic formal verification engine to find shallow counterexamples from a simulation test, and a static DCE engine for deep counterexamples.
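As an illustration of why deep analysis matters, consider an assertion whose only failing traces lie many cycles away from the initial state, for example one guarding a FIFO that must first fill completely before an overflow bug can appear. The sketch below is hypothetical; the module, parameter and signal names are assumptions for illustration.

    // Hypothetical checker: any counterexample must first fill the
    // 64-entry FIFO, so it lies dozens of cycles from reset and is
    // unlikely to be reached by shallow analysis or random simulation.
    module fifo_overflow_checker #(parameter DEPTH = 64) (
      input logic clk,
      input logic rst_n,
      input logic push,
      input logic [$clog2(DEPTH):0] count  // current FIFO occupancy
    );
      // The design must never accept a push when the FIFO is already full.
      a_no_overflow: assert property (
        @(posedge clk) disable iff (!rst_n)
          (count == DEPTH) |-> !push
      );
    endmodule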
I found a counterexample; did I find a bug?

Figure 2 - Debugging a counterexample

The best outcome of debugging is when the counterexample demonstrates a design bug. The waveform can then be shown to designers and RTL fixes can be implemented. Unfortunately, in practice, not all counterexamples turn out to be design bugs. Many times, counterexamples point to assertions that are wrong, either because of a simple coding error in the assertion or because of a misunderstanding about what the assertion should check. Although easy to fix, such debug iterations waste time and frustrate designers.

Assertion IP libraries, such as 0-In CheckerWare, can save time and alleviate frustration because the elements of these libraries are macros representing collections of assertions that together check a higher-level property. For example, an arbiter element may contain several dozen individual assertions encapsulated as one. Assertion IP libraries help designers because their assertions are pre-verified and reused. Many of the errors normally encountered while creating assertions from scratch are removed from the flow simply because someone else has already ironed them out of the assertion IP library.

Another cause of counterexamples often seen in practice is that the verification engines are not constrained sufficiently. The effect is that the reported counterexample uses input sequences that are illegal, or that the RTL was not designed to handle. When this happens, users must provide more constraints to the engines to block out illegal inputs. In practice, this iteration of adding constraints is one of the hardest steps in verification with formal tools. Success with ABV requires a good methodology to ensure the accuracy of constraints.

Getting constraints correct is vitally important for quality verification results. Under-constrained inputs lead to false counterexamples and wasted debugging time. However, a worse problem is over-constrained inputs. When inputs are over-constrained, formal verification engines, both static and dynamic, are prevented from using some legal input sequences to find counterexamples. This means that bugs in the design will be missed. Getting constraints just right, so that no bugs are missed and yet few false counterexamples are found, is a difficult and time-consuming process.

A constraint methodology that works in practice is to leverage other engines in the ABV flow to refine the input constraints. Simulation complements formal verification by exercising only prescribed behaviors. Simulation of directed tests can be used in combination with simulation coverage metrics to guide the creation of assertions on the interface of a design. The resulting assertions can then be leveraged as input constraints for the formal verification engines. Input constraints that simulate correctly with a test suite and achieve high coverage of interface corner cases are not over-constrained. To prevent under-constraining, an abundance of assertions is placed to monitor the interfaces.
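A minimal sketch of this approach is shown below, assuming a hypothetical bus interface where a new transaction may not legally start while the bus is busy; the module and signal names are invented for illustration. The same property is declared as an assumption for the formal engines and can also be run as an assertion against the simulation test suite to confirm it is not over-constrained.

    // Hypothetical interface constraint shared between simulation and formal.
    module bus_input_constraints (
      input logic clk,
      input logic rst_n,
      input logic start,
      input logic busy
    );
      // Legal stimulus: no new transaction may start while the bus is busy.
      property p_no_start_while_busy;
        @(posedge clk) disable iff (!rst_n)
          busy |-> !start;
      endproperty

      // For the formal engines this is a constraint on the inputs.
      m_no_start_while_busy: assume property (p_no_start_while_busy);

      // In simulation the same property can be checked as an assertion to
      // confirm the test suite never violates it, i.e. the constraint does
      // not rule out legal behavior the tests rely on:
      // a_no_start_while_busy: assert property (p_no_start_while_busy);
    endmodule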
What should I do next?

After debugging comes the question of what to verify next, and metrics provide the answer. The first metric addresses the question "do I have enough assertions?" In 0-In's experience, it is best measured as a combination of assertion density within the HDL and the complexity of the assertions. This means that there must be enough assertions to statically cover the HDL code constructs (typically one assertion per 100 lines of HDL code) and that the assertions should not be trivial checks. For example, an assertion that checks a higher-level function, such as arbitration or flow control, is superior to an assertion that a signal is driven to a logic 0/1 value.

The second metric of value stems from the fact that assertions monitor the functional behavior of the design. This metric can be used to measure the effectiveness of simulation at exercising corner cases. Assertion IP libraries are particularly useful in this regard since the elements can encapsulate functional coverage along with individual assertions, so-called structural coverage (a small sketch of such coverage points appears at the end of this section). By measuring structural coverage across a simulation regression suite, verification holes can be easily identified. The combination of structural coverage with assertion density provides a good method to measure simulation verification effectiveness. The goal should be to have high assertion density and to cover all the corner cases identified by structural coverage.

The third important ABV metric measures the thoroughness of formal verification. As noted earlier, most complex assertions in real designs cannot be completely proven because of inherent capacity limits of the underlying algorithms. The best approach to verifying these indeterminate assertions is to use the deepest algorithms, whether static formal or dynamic formal, to look for counterexamples. The more counterexamples found, the more complete the verification. However, some indeterminate assertions have no counterexamples, and yet a complete proof is impossible. For these assertions, the best verification result is a metric that measures verification confidence. For formal verification algorithms, this metric is the proof radius. The proof radius is simply a number that represents how many cycles of exhaustive analysis have been completed on the assertion (see Figure 3). A complete proof is equivalent to a proof radius of infinity, meaning the assertion has been verified for all time. When a complete proof is impossible, a higher proof radius provides more verification confidence. In addition, the proof radius provides actionable feedback, telling design teams which assertions require more analysis to achieve verification confidence.

Figure 3 - Proof radius
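As mentioned under the second metric, assertion IP elements can bundle coverage points alongside their checks. The following hedged sketch shows, for the hypothetical FIFO used earlier, the kind of corner-case coverage points such an element might contain; the names and events are assumptions for illustration, not taken from the CheckerWare library.

    // Hypothetical corner-case coverage points bundled with the FIFO checks.
    module fifo_coverage #(parameter DEPTH = 64) (
      input logic clk,
      input logic rst_n,
      input logic push,
      input logic pop,
      input logic [$clog2(DEPTH):0] count
    );
      // Did the regression suite ever drive the FIFO completely full?
      c_fifo_full: cover property (
        @(posedge clk) disable iff (!rst_n) count == DEPTH
      );

      // Did it exercise a simultaneous push and pop while nearly full?
      c_push_pop_near_full: cover property (
        @(posedge clk) disable iff (!rst_n)
          push && pop && (count == DEPTH - 1)
      );
    endmodule

Coverage points left unreached across the regression suite identify exactly the verification holes the article describes.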
For ABV, having all three metrics is important: density and complexity, structural coverage, and proof radius. The assertion density and complexity metric answers the question "do I have enough assertions?" The structural coverage metric answers "do my test plan and verification cover the functions of the design?" Finally, the proof radius metric addresses the question "are there any more bugs?" Taken together, they give users of ABV the confidence to tape out.

Summary

The iterative part of verification is looking for bugs and identifying how to improve verification. Easy-to-use, integrated tools addressing these issues provide users a path to success with assertion-based verification. Without them, users are still in an open-loop process.

Dr. Richard Ho is a co-founder of 0-In Design Automation.