Counterexamples and metrics drive assertion-based verification
By Richard Ho, EEdesign
February 28, 2003 (3:52 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030124S0042
Assertion-based verification (ABV) has recently emerged as the only functional verification methodology capable of keeping pace with increasingly complex integrated circuits. When using ABV, designers and verification engineers write assertions, or properties, describing how the design is intended to operate. Then a variety of verification algorithms and engines are used to provide a verification result for each assertion. These engines include simulation, static formal verification and dynamic formal verification. Assertions and verification engines provide the starting points for ABV, but successful users also require debugging support and metrics. This article describes the practical issues of using ABV to find design bugs and the requirements for success.

The assertion-based verification flow

In general, the ABV flow is: write assertions, verify the assertions, debug counterexamples, and determine what to verify next (see Figure 1). The objective is to find bugs, since every bug found with ABV is one less bug to be found in the lab. The result is good silicon, produced in less time and at less cost than simulation alone.
Assertions are statements about how a design is intended to behave at the RTL level. Assertion-based verification methodology allows multiple formal verification engines to be easily applied within an existing simulation-based verification flow. In ABV, the design team instruments the RTL code using simulatable assertions. The design team places assertions throughout the design -- on worry cases, inter-block interfaces and complex RTL structures -- wherever the protocols may be misused or the design intent may be incorrectly implemented.
Figure 1 - General ABV flow
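To make the idea of instrumenting a design with assertions concrete, here is a minimal sketch in Python rather than an HDL, purely for illustration. The two-client arbiter, its signal names and the stimulus are all invented for this example; the point is simply that assertions are checks evaluated on every cycle of simulation.

```python
# Toy instrumented design: a two-client arbiter with two embedded assertions
# checked on every cycle of a directed test (illustrative only, not RTL).

def arbiter(req0, req1, last_winner):
    """Grant at most one requester; alternate on contention."""
    if req0 and req1:
        winner = 1 - last_winner          # round-robin on contention
    elif req0:
        winner = 0
    elif req1:
        winner = 1
    else:
        return (0, 0), last_winner        # no request, no grant
    grants = (1, 0) if winner == 0 else (0, 1)
    return grants, winner

def check_assertions(req0, req1, gnt0, gnt1, cycle):
    # Assertion 1: grants are mutually exclusive.
    assert not (gnt0 and gnt1), f"cycle {cycle}: both grants asserted"
    # Assertion 2: a grant is only given to an active requester.
    assert (not gnt0 or req0) and (not gnt1 or req1), \
        f"cycle {cycle}: grant without request"

# Directed test: drive a few cycles of stimulus and check every cycle.
stimulus = [(1, 0), (1, 1), (1, 1), (0, 1), (0, 0)]
last = 1
for cycle, (r0, r1) in enumerate(stimulus):
    (g0, g1), last = arbiter(r0, r1, last)
    check_assertions(r0, r1, g0, g1, cycle)
print("directed test passed all assertions")
```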
From a design team viewpoint, one of the most critical steps in the ABV flow is diagnosing counterexamples. This is how design bugs are identified. Verification engines always produce one of three results for each assertion:
- The assertion is proven to be true for all time.
- A counterexample to the assertion is found.
- The assertion is indeterminate (or unknown) because the engine could not produce a result.
It is not enough to have verification engines prove assertions. A complete ABV methodology must employ engines that are thorough at finding counterexamples for complex assertions in large, complex designs. No single verification engine, whether it uses simulation, static formal verification or dynamic formal verification, can provide all results for all assertions in a complex design.
Different algorithms can be optimized to find either proofs or counterexamples. Deep counterexample (DCE) technology is a revolutionary static formal verification algorithm optimized to find counterexamples for complex properties in large designs. A DCE engine, such as 0-In Confirm, exhaustively searches for counterexamples starting from a single initial state of the design.
Optimizing the algorithm to find only counterexamples in circuit descriptions allows DCE to achieve substantially deeper and more thorough analysis than previous formal algorithms. DCE complements traditional static model checking, which is better at finding proofs, and dynamic formal verification, which applies shallow exhaustive formal analysis starting from every cycle of a simulation test. Together, these engines provide the best and most efficient method to find proofs and counterexamples of the assertions.
A full ABV methodology needs to have an engine tuned for each goal. The 0-In ABV suite provides all three specialized formal verification engines: a static model checker to find proofs, a dynamic formal verification engine to find shallow counterexamples from a simulation test and a static DCE engine for deep counterexamples.
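The sketch below is a rough, explicit-state illustration of a bounded search for counterexamples from a single initial state. It is not 0-In's algorithm (real static engines work symbolically on much larger designs); it only shows how the same search can end in one of the three outcomes listed above: a counterexample trace, a proof, or an indeterminate result. The toy design and property are invented.

```python
# Explicit-state breadth-first search from one reset state, reporting a
# counterexample trace, a proof, or an indeterminate result (toy sketch).

from collections import deque

def next_state(state, inp):
    """Toy design: a saturating counter that increments when inp = 1."""
    return min(state + inp, 7)

def assertion(state):
    """Property under check: the counter never reaches 6."""
    return state != 6

def search(initial, depth_limit):
    frontier = deque([(initial, [])])     # (state, input trace so far)
    seen = {initial}
    exhausted = True
    while frontier:
        state, trace = frontier.popleft()
        if not assertion(state):
            return "counterexample", trace
        if len(trace) >= depth_limit:
            exhausted = False             # gave up before covering everything
            continue
        for inp in (0, 1):
            nxt = next_state(state, inp)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [inp]))
    return ("proven", None) if exhausted else ("indeterminate", None)

print(search(initial=0, depth_limit=10))
# -> ('counterexample', [1, 1, 1, 1, 1, 1])
```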
I found a counterexample; did I find a bug?
The story does not end when an ABV tool finds a counterexample. A counterexample is simply an input sequence that causes one of the design assertions to be violated. The assertion might be violated for several reasons: the assertion could be wrong, the input sequence may not be constrained to legal values, or there really is a design bug. Once the counterexample is found, designers need an efficient debugging path to find the root cause.

Counterexamples can be presented in various forms. The simplest is a textual listing of input stimuli over a number of cycles. For some simple problems, this is sufficient. But for most assertions, users need to visualize the behavior using a waveform viewer, just as they would diagnose a bug detected in simulation. Figure 2 shows a typical counterexample waveform with debugging information.
Figure 2 - Debugging a counterexample
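As a trivial illustration of the simplest presentation form mentioned above, the snippet below prints a counterexample as a cycle-by-cycle textual listing. The signal names and values are made up for the example.

```python
# Render a counterexample trace as a cycle-by-cycle textual listing.

counterexample = [
    {"cycle": 0, "req": 1, "gnt": 0, "full": 0},
    {"cycle": 1, "req": 1, "gnt": 1, "full": 0},
    {"cycle": 2, "req": 1, "gnt": 1, "full": 1},   # assertion fires here
]

signals = ["cycle", "req", "gnt", "full"]
print("  ".join(f"{s:>5}" for s in signals))
for row in counterexample:
    print("  ".join(f"{row[s]:>5}" for s in signals))
```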
The best outcome of debugging is when the counterexample demonstrates a design bug. The waveform can then be shown to designers and RTL fixes can be implemented. Unfortunately, in practice, not all counterexamples turn out to be design bugs. Many times, counterexamples point to assertions that are wrong, either because of a simple coding error in the assertion or because of a misunderstanding about what the assertion should check for.
Although easy to fix, such debug iterations waste time and frustrate designers. Assertion IP libraries, such as 0-In CheckerWare, can save time and alleviate frustration because the elements of the libraries are macros representing collections of assertions that together check a higher-level behavior. For example, an arbiter element may contain several dozen individual assertions encapsulated as one.
Assertion IP libraries help designers because their assertions are pre-verified and reused. Many of the normal errors encountered while creating assertions from scratch are removed from the flow simply because someone else ironed them out of the assertion IP library.
Another cause of counterexamples often seen in practice is when the verification engines are not constrained sufficiently. The effect is that the reported counterexample uses input sequences that are illegal, or that the RTL was not designed to handle. When this happens, users must provide more constraints to the engines to block out illegal inputs. Practically, this iteration of adding constraints is one of the hardest steps in verification with formal tools. Success with ABV requires a good methodology to ensure the accuracy of constraints.
Getting constraints correct is vitally important for quality verification results. Under-constrained inputs lead to false counterexamples and wasted debugging time. However, a worse problem is over-constrained inputs. When inputs are over-constrained, formal verification engines -- both static and dynamic -- are prevented from using some legal input sequences to find counterexamples. This means that bugs in the design will be missed. Getting constraints just right, so that no bugs are missed and yet few false counterexamples are found, is a difficult and time-consuming process.
A constraint methodology that works in practice is to leverage other engines in the ABV flow to refine the input constraints. Simulation complements formal verification by exercising only prescribed behaviors. Simulation of directed tests can be used in combination with simulation coverage metrics to guide the creation of assertions on the interface of a design. The resulting assertions can then be leveraged as input constraints for formal verification engines. Input constraints that simulate correctly with a test suite and achieve high coverage of interface corner cases are not over-constrained. To prevent under-constrained inputs, assertions are placed liberally to monitor the interfaces.
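A rough sketch of this dual use, with an invented toy protocol: the same interface rule is checked as an assertion on the simulation stimulus and then applied as a constraint that prunes illegal inputs during an explicit-state formal search, so counterexamples use only legal input sequences.

```python
# One interface rule doing double duty: a monitor in simulation and an input
# constraint in a formal search (toy sketch, not any particular tool's flow).

from collections import deque

def legal_input(prev_inp, inp):
    """Interface rule for this toy protocol: no back-to-back start pulses."""
    return not (prev_inp == 1 and inp == 1)

def next_state(state, inp):
    return (state + inp) % 4              # toy design: modulo-4 counter

def assertion(state):
    return state != 3                     # property: state 3 is never reached

# 1) In simulation, the rule is checked against the driven stimulus.
stimulus = [1, 0, 1, 0, 1]
prev = 0
for inp in stimulus:
    assert legal_input(prev, inp), "testbench drove an illegal input"
    prev = inp

# 2) In formal search, the same rule constrains the inputs that are explored.
def constrained_search(depth_limit=8):
    frontier = deque([((0, 0), [])])      # ((state, prev_inp), trace)
    seen = {(0, 0)}
    while frontier:
        (state, prev_inp), trace = frontier.popleft()
        if not assertion(state):
            return "counterexample", trace
        if len(trace) >= depth_limit:
            continue
        for inp in (0, 1):
            if not legal_input(prev_inp, inp):
                continue                  # over-constrain here and bugs hide
            nxt = (next_state(state, inp), inp)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [inp]))
    return "no counterexample within bound", None

print(constrained_search())               # -> ('counterexample', [1, 0, 1, 0, 1])
```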
What should I do next?
A critical aspect of verification methodology is measuring progress. In order to answer the question "What should I do next?" it is necessary to know both what has already been done and what has yet to be done. In ABV, the important metrics address the following questions:
- Are there enough assertions in the design?
- Is the verification plan for simulation complete?
- How thorough is the formal verification analysis?
In 0-In's experience, the first question is best addressed as a combination of assertion density within the HDL and the complexity of the assertions. This means there must be enough assertions to statically cover the HDL code constructs (typically one assertion per 100 lines of HDL code), and the assertions should not be trivial checks. For example, an assertion that checks a higher-level function, such as arbitration or flow control, is superior to an assertion that merely checks a signal is driven to a logic 0 or 1 value.
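A back-of-the-envelope way to track the density half of this metric is sketched below. It assumes assertions can be recognized by a marker in the source; the "// assert:" tag and the snippet of RTL are invented for the example, and a real flow would count checker or assertion instances directly.

```python
# Compute a crude assertion-density metric over HDL source text.

def assertion_density(hdl_text):
    lines = [l for l in hdl_text.splitlines() if l.strip()]
    assertions = sum(1 for l in lines if "// assert:" in l)
    return assertions, len(lines), (assertions / len(lines)) if lines else 0.0

rtl = """\
module fifo(input clk, input push, input pop, output full);
  // assert: never push when full
  // assert: never pop when empty
  // ... RTL body elided ...
endmodule
"""
n_assert, n_lines, density = assertion_density(rtl)
print(f"{n_assert} assertions / {n_lines} lines = {density:.2%}")
# Rule of thumb from the article: roughly one assertion per 100 lines of HDL.
```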
The second metric of value stems from the fact that assertions monitor the functional behavior of the design. This metric can be used to measure the effectiveness of simulation at exercising corner cases. Assertion IP libraries are particularly useful in this regard, since their elements can encapsulate functional coverage points along with the individual assertions; this is known as structural coverage. By measuring structural coverage across a simulation regression suite, verification holes can be easily identified. The combination of structural coverage with assertion density provides a good method to measure simulation verification effectiveness. The goal should be to have high assertion density and to cover all the corner cases identified by structural coverage.
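The bookkeeping behind such a measurement might look like the following sketch, with invented corner-case and test names: each checker element reports the corner cases it observed during a run, and merging the per-test results across the regression exposes the holes.

```python
# Merge structural-coverage results across a regression and report holes.

# Corner cases that checker elements in this toy design define.
coverage_points = {"fifo_full", "fifo_empty", "simultaneous_push_pop",
                   "back_to_back_grants", "all_requests_active"}

# Per-test coverage as reported by the instrumented simulation runs.
regression_results = {
    "test_smoke":      {"fifo_empty"},
    "test_throughput": {"fifo_full", "simultaneous_push_pop"},
    "test_arbiter":    {"back_to_back_grants"},
}

hit = set().union(*regression_results.values())
holes = coverage_points - hit
print(f"covered {len(hit)}/{len(coverage_points)} corner cases")
print("holes:", sorted(holes))            # -> ['all_requests_active']
```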
The third important ABV metric measures the thoroughness of formal verification. As noted earlier, most complex assertions in real designs cannot be completely proven because of inherent capacity limits of the underlying algorithms. The best approach to verifying these indeterminate assertions is to use the deepest algorithms, whether they are static formal or dynamic formal, to look for counterexamples. The more counterexamples found, the more complete the verification.
However, some indeterminate assertions have no counterexamples, and yet a complete proof is impossible. For these assertions, the best verification result is a metric that measures the verification confidence. For formal verification algorithms, this metric is the proof radius. The proof radius measure is simply a number that represents how many cycles of exhaustive analysis have been completed on the assertion (see Figure 3). A complete proof is equivalent to a proof radius of infinity, meaning it has been verified for all time. When a complete proof is impossible, a higher proof radius provides more verification confidence. In addition, the proof radius provides actionable feedback, telling design teams which assertions require more analysis to achieve verification confidence.
Figure 3 - Proof radius
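The sketch below illustrates the proof-radius bookkeeping on a toy explicit-state search; the design and properties are invented, and real engines compute this symbolically. The radius recorded for each assertion is the number of cycles from the initial state that have been exhaustively analyzed without finding a counterexample.

```python
# Track a per-assertion proof radius during a level-by-level exhaustive search.

def next_states(state):
    """Toy design: 8-state wrap-around counter that can also hold."""
    return {state, (state + 1) % 8}

assertions = {
    "never_state_5": lambda s: s != 5,    # fails once depth reaches 5
    "never_state_9": lambda s: s != 9,    # can never fire in this design
}

def proof_radius(initial, cycle_budget):
    radius = {name: 0 for name in assertions}
    failed = set()
    level = {initial}
    for depth in range(1, cycle_budget + 1):
        level = set().union(*(next_states(s) for s in level))
        for name, prop in assertions.items():
            if name in failed:
                continue
            if all(prop(s) for s in level):
                radius[name] = depth       # exhaustively clean out to `depth`
            else:
                failed.add(name)           # a counterexample exists at `depth`
    return radius, failed

print(proof_radius(initial=0, cycle_budget=20))
# -> ({'never_state_5': 4, 'never_state_9': 20}, {'never_state_5'})
```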
For ABV, having all three metrics is important: density and complexity, structural coverage, and proof radius. The assertion density and complexity metric answers the question "do I have enough assertions?" The structural coverage metric provides the answer to "do my test plan and verification cover the functions of the design?" Finally, the proof radius metric addresses the question "are there any more bugs?" Taken together, they provide users of ABV the confidence to tape out.
Summary
To save time and lower bug rates on tapeout, successful users of ABV have not only good tools, but also a sound methodology. It is easy to focus on just the assertion language or just the formal verification engine when considering adoption of ABV. But equally important is a methodology that provides easy debugging of counterexamples and metrics to close the verification loop. When applied in real projects with real designs and real deadlines, these two parts of the methodology are where users spend the most time.
The iterative part of verification is looking for bugs and identifying how to improve verification. Easy-to-use, integrated tools addressing these issues provide users a path to success with assertion-based verification. Without them, users are still in an open-loop process.
Dr. Richard Ho is a co-founder of 0-In Design Automation.