Every design verification technique requires coverage metrics to gauge progress, assess effectiveness, and help determine when the design is robust enough for tapeout. At every step of the way, and with every bug-finding technology and tool, verification engineers assess coverage results and make critical decisions on what to do next. In fact, for the verification of large, complex system-on-chip (SoC) designs, coverage metrics and the responses to them guide the entire flow. The term "coverage-driven verification" describes a methodology built around coverage metrics as the primary way to manage verification.

Code coverage

Coverage-driven verification is made possible by the wide range of structural coverage information available in modern verification tools. The most traditional form, RTL code coverage, has migrated from specialized add-on tools directly into the more advanced simulators, providing much better performance and ease of use. Once limited to line coverage, today's code coverage metrics may also include toggle, condition, path, and finite-state-machine (FSM) coverage. These metrics can be gathered automatically in simulation, under user control to select or exclude specific metrics or portions of the RTL.

Code coverage is very helpful at identifying "holes" in verification: if a section of code has not been exercised, then it has not been verified. However, high code coverage metrics do not necessarily mean that a design is bug-free or that the verification effort is complete and thorough. Because code coverage is gathered automatically, it does not reflect any engineering insight into corner-case behaviors of the design. Although code coverage is valuable, it should be supplemented by the specification of functional coverage points that must be exercised for thorough verification.

Functional coverage

Traditional Verilog and VHDL have no built-in notion of functional coverage points.
However, both SystemVerilog and hardware verification languages such as OpenVera have explicit coverage constructs. These allow designers to specify corner cases based on their knowledge of the implementation. Verification engineers can specify additional functional coverage points based on their knowledge of the design requirements, especially on buses and other interfaces. For example, functional coverage might track whether the verification process has:

- Filled and emptied every FIFO in the design
- Transmitted all packet types across a particular channel of the design
- Transmitted packets across all channels in the design
- Transmitted all packet types across all channels (cross-coverage)
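The last item in the list above, cross-coverage of packet type against channel, could be expressed with SystemVerilog's covergroup construct roughly as follows. This is an illustrative sketch only: the module, signal, and bin names (packet_coverage, pkt_valid, pkt_type, chan, and the assumption of four channels) are hypothetical.

```systemverilog
// Hypothetical coverage monitor; all names and widths are illustrative.
module packet_coverage
  (input logic       clk,
   input logic       pkt_valid,   // sample only when a packet is sent
   input logic [1:0] pkt_type,    // e.g. data/control/status/error
   input logic [1:0] chan);       // four channels in this sketch

  covergroup packet_cg @(posedge clk iff pkt_valid);
    cp_type     : coverpoint pkt_type;      // all packet types transmitted
    cp_chan     : coverpoint chan;          // all channels exercised
    type_x_chan : cross cp_type, cp_chan;   // every type on every channel
  endgroup

  packet_cg cg = new();
endmodule
```

The cross item is the key point: even if every packet type and every channel has been seen individually, the cross reports which type/channel combinations remain unexercised.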
Knowing that such coverage points have been exercised builds confidence in the thoroughness of verification.

Verification IP (VIP) is an important part of SoC verification projects. For industry-standard interfaces, VIP provides the ability to verify a design against the requirements of a standard protocol. VIP should include functional coverage metrics to ensure that all corner cases of the protocol have been verified. For example, an AMBA VIP with coverage points enumerates all the different ways of completing a transaction. Covering these points ensures that important protocol functionality has been verified. If any of these points is not covered, verification is incomplete.

Assertion coverage

Assertions, an essential part of modern SoC verification, can also provide valuable coverage feedback. VHDL, SystemVerilog, and OpenVera all have assertion constructs that allow design and verification engineers to capture design intent. Knowing which assertions pass in simulation and which ones fail is one form of coverage. Any failing assertion indicates that a functional bug has been found. However, successful assertions do not provide any run-time feedback: they may have succeeded, or they may not have had the opportunity to execute at all. Assertion coverage, a metric similar to code coverage, reports which assertions were successful. Assertion coverage may also report which values within a range of acceptable values have been observed.

Some advanced verification tools can extract automatic coverage points from user-specified assertions. For example, consider the following assertion (expressed in natural-language form): if "READY" is asserted, and "READ" is asserted the following cycle, then valid read data must be returned on the "RDATA" bus within five cycles. If this assertion is passing in simulation, this might seem like good news to the verification engineers.
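In SystemVerilog Assertions (SVA), this requirement could be sketched as shown below, declared in some module or interface scope. The signal names (clk, ready, read, rdata_valid) are assumptions standing in for the "READY", "READ", and "RDATA" signals described above.

```systemverilog
// Hypothetical SVA rendering of the read-data requirement.
property p_read_data;
  @(posedge clk) ready ##1 read |-> ##[1:5] rdata_valid;
endproperty

a_read_data : assert property (p_read_data);

// A coverage point corresponding to the assertion's setup
// (triggering) condition: has a read ever actually been started?
c_read_setup : cover property (@(posedge clk) ready ##1 read);
```

Here `|->` is the implication operator: the consequent is checked only when the antecedent (`ready ##1 read`) occurs, which is exactly why a passing assertion alone proves little.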
However, it might be the case that the "READY" and "READ" signals are never asserted in successive cycles. A coverage point extracted from the assertion's setup (triggering) condition resolves the dilemma. If the assertion passes but the coverage point is not exercised, more verification work is needed to generate tests containing read operations on this interface. Once the assertion passes with the coverage point exercised, preferably many times, verification confidence increases.

Some vendors provide assertion-checker libraries to make assertion specification easier. As with user-written assertions, nothing less than 100 percent passing is acceptable. Some assertion-checker libraries contain built-in coverage points, providing additional coverage metrics with no extra effort on the part of the design and verification engineers. For example, a FIFO checker verifies that the FIFO is never written when full or read when empty, but it may also include coverage points for the FIFO becoming full and becoming empty.

Other coverage metrics

If formal property analysis is used to help verify the design, even more coverage information is available. Whenever formal methods find a way to cause an assertion to fail, the failure must be resolved as part of the verification process. On the other hand, formal analysis may be able to prove mathematically that an assertion can never fail. Proven assertions can be considered fully covered, increasing verification confidence. Some formal tools can generate bounded proofs for assertions that are too complex to prove completely; such assertions can be considered conditionally covered.

RTL checking tools may provide another form of coverage. These tools report any RTL coding or design-rule violations, but they should also report the number of times each rule was checked. For example, an RTL checking tool might report that it analyzed 20 Verilog sensitivity lists and found two to be incomplete.
As with assertions, nothing less than 100 percent of all relevant (enabled) rules passing should be considered acceptable. The same standard applies to advanced RTL checking tools that can detect more complex design errors such as unreachable statements or clock-domain-crossing violations.

The verification job is not done when the RTL is verified. Formal equivalence checkers are used to verify test-logic insertion, ensure that engineering change orders (ECOs) haven't broken functionality, and cross-check RTL with gates and transistors. These tools report any mismatches found; once again, a 100 percent metric is mandatory. Equivalence checkers for full-custom circuits such as memories typically employ symbolic simulation, and therefore can provide other forms of coverage metrics to help determine when this stage of verification is complete.

Conclusion

Figure 1 summarizes the different sources of coverage metrics used in a modern verification process. This wide range of coverage information helps verification teams assess progress and determine what to do next. In addition, the most advanced simulators and testbench-automation tools can assess coverage data while running and react automatically to improve the results. Reactive testbenches save precious engineering resources by eliminating manual iterations.

Figure 1 — Verification management requires a broad range of coverage metrics.

Managing verification requires knowing, at every step, which coverage results are acceptable, which need improvement, and which bug-finding technology should be used next to improve coverage. This requires a comprehensive coverage-driven methodology that links together the tools, methods, metrics, and results to guide verification from high-level functional models all the way to chip tapeout. Only with such a methodology in place can SoC teams produce chips that work on first silicon.

Thomas L. Anderson is director of technical marketing in the verification group at Synopsys, Inc.
He also serves as chair of the Virtual Socket Interface Alliance (VSIA) functional verification working group.