A Comparison of Assertion-Based Formal Verification with Coverage-Driven Constrained Random Simulation: Experience on a Legacy IP
Wipro Technologies
Abstract:
Formal tools used for functional verification claim an upper hand over traditional simulation based tools, given their exhaustive nature of property checking and a fast learning curve. Meanwhile, simulation based tools like Specman/SystemVerilog continue to have a stronger presence due to their proven ability to unearth complex corner case scenarios, and there is a natural bias from a verification community that has been using these techniques for at least a decade. Verification teams now face a big question: how much of the verification effort can be shifted to formal tools, and which design constructs are ideal for formal versus simulation based verification techniques? This paper discusses strategies to use during verification planning so that an optimum partition between formal analysis and simulation based functional verification is achieved.
Introduction
A coverage driven constrained random simulation has the following merits:
- It provides a reliable metric, functional coverage, as a key indicator of verification progress.
- Test bench development is much like any other conventional programming exercise.
- The constrained randomness virtually places the device under test in real-life situations, which gives much more confidence in the verification.
- It also allows for almost black-box testing of the DUT.
In contrast, the Assertion Based Formal Verification methodology seems to be a holistic solution to the challenges posed by simulation tools. It relieves one from tedious test bench development, and it is exhaustive, so the functional coverage definition need not be as elaborate as in simulation. The learning curve is fast. But closer observation reveals that this methodology has its own drawbacks. Formal verification necessitates a white-box strategy, which means the verification engineer must have good design knowledge. The high level of abstraction possible in simulation based tools is lost, and the thought process of the verification engineer can easily slip into something similar to a designer's rather than an application engineer's perspective. With the present state of the art, it is practically impossible to verify a system/subsystem as a whole; the verification engineer has to carefully partition the design into meaningful logic cones and constrain the formal tool's analysis to each logic cone.
We attempted both methodologies on a fairly simple IP that has all the relevant types of design constructs: registers, interrupt/DMA generation and state machines. In this paper we compare and contrast the merits and demerits of coverage driven random simulation and formal assertion based verification when verifying these design constructs.
The paper assumes the following situation. You are assigned a verification project for a legacy IP, i.e. one that has already been verified using conventional simulation technologies. Your job is to redefine (or redesign) the verification suite, in light of the fact that new formal verification tools are available and that the functional coverage of the existing verification suite needs some improvement.
In this situation, the verification engineer's first challenge is to formulate a strategy for the activity. The obvious goals are less rework, optimum use of engineering resources, filling the functional coverage holes and, of course, building a rock-solid verification suite using formal verification techniques. A strategy that can be used in this verification activity is represented pictorially below (Figure 1).
Figure 1
Being a legacy IP, the design should have a list of reported bugs. Using this data, along with the IP specification and the legacy verification environment's coverage information, the verification team consolidates the functionalities that are to be verified again and divides them between formal and simulation based verification. Figure 2 gives a typical functional partition.
This partition should be thoroughly reviewed; based on this document, the verification team can prepare one verification plan for formal and another for additional simulations. The actual assertion runs may prove that some functionalities can be comfortably shifted to simulation, and likewise some functionalities may shift to formal verification.
Figure 2
Functional Partitioning
In functional partitioning, the IP's functionalities are listed out as a table of functional coverage points. Depending on the nature of each functionality and the ease of implementation, it is selected for either formal based verification or simulation based verification. We list a set of guidelines that are useful in functional partitioning. The deciding factors are:
1) Nature of the Functionality
2) Coverage Holes
3) Amount of Additional Simulations Required
4) Effort in writing Assertions
We organize these guidelines according to the nature of the functionalities.
1) Register Implementations
The IP may have registers with features like protected/user mode access, read clear, write clear and default values for reserved bit locations. Most of the time the IP's simulation based verification suite covers almost all of these functional coverage points, and if a coverage hole exists, a little additional simulation can fill it. Nevertheless, the following kinds of functionalities are worth verifying again using formal tools:
a) Status bits that are controlled by state machines
b) Opposite status bits (for example, the FIFO empty and FIFO overflow flags)
c) Status bits with always-identical values (in some cases the same information may be present in two different registers)
Writing test vectors to cover all the scenarios related to these functionalities is difficult, whereas assertions can be easily written and proven.
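For illustration, here is a minimal PSL sketch (Verilog flavor) of such status bit checks; the design unit and signal names (fifo_unit, clk, fifo_empty, fifo_ovl, status_reg1, status_reg2) are hypothetical:

vunit status_bit_checks(fifo_unit) {
  default clock = (posedge clk);

  // Opposite status bits must never be asserted together
  assert always (!(fifo_empty && fifo_ovl));

  // Mirrored status information in two registers must always match
  assert always (status_reg1[0] == status_reg2[4]);
}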
2) Bus Interfaces
A majority of the simulations are done using Bus Functional Models (BFMs), so the data path is heavily verified. A good simulation environment will also have protocol checkers.
Using assertions to verify the data path of a bus interface unit is not straightforward, so it is a rational decision to leave data path verification to simulation based test cases.
Even though writing protocol related assertions seems simple, with formal tools the assertion/assumption development is often an iterative process: write an assertion -> run -> add assumptions -> run, and so on. Any new protocol checks should therefore be added to the existing simulation based checkers rather than developed as new assertions. This saves a lot of verification effort.
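As a hypothetical illustration of that iterative loop, the check below keeps producing spurious counterexamples until the environment is constrained with assumptions; the signal names (psel, penable, pready) are assumed, loosely following an APB-style interface:

vunit bus_protocol_checks(bus_if) {
  default clock = (posedge clk);

  // Iteration 1: the bare protocol rule
  assert always ((psel && penable) -> (eventually! pready));

  // Iterations 2..n: assumptions added after analysing counterexamples,
  // e.g. the master never drops psel in the middle of a transfer
  assume always ((psel && !pready) -> next (psel));
}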
3) State Machines
State machines are present in arbitration logic, FIFOs and controllers. A good simulation based verification must have reached all the possible states and exercised all the state transitions. However, the possible combinations of inputs at different states, sequences of inputs and sets of possible register configurations can be so extensive that simulation alone cannot cover all the possibilities.
Figure 3
The verification engineer can treat the state machine and the related logic separately. Considering the functional coverage points related to the state machine, the verification team can concentrate on the input/output boundary of the state machine. Using assumptions at the inputs and assertions on the outputs and state variables, this part of the design can be verified.
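A minimal sketch of this boundary approach, assuming a hypothetical controller FSM with state register state, states IDLE and RUN, input start and output busy:

vunit fsm_boundary_checks(ctrl_fsm) {
  default clock = (posedge clk);

  // Assumption on the input boundary: start is a single-cycle pulse
  assume always (start -> next (!start));

  // Assertions on the state variable and the output
  assert always ((state == IDLE && start) -> next (state == RUN));
  assert always ((state == RUN) -> busy);
}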
4) Muxes, Encoders and Decoders
These are simple logic blocks. However, in most designs verifying them requires tremendous effort: the inputs to the logic block (mux/encoder/decoder) may be driven by other complex logic, so the number of simulation vectors required to exercise all possible inputs is too high.
Imagine that the inputs to a mux are connected to a state machine and different configuration registers. Formal verification tools suit this scenario perfectly: the verification team has to write only a few assertions on the direct inputs/outputs of the logic, which safely leaves out the surrounding logic that is not actually necessary for the verification.
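A minimal sketch, assuming a hypothetical 2:1 mux with select sel (driven elsewhere by an FSM and configuration registers), data inputs a and b, and output y:

vunit mux_checks(data_mux) {
  default clock = (posedge clk);

  // Only the direct I/O of the mux is checked; the logic driving
  // sel, a and b is left unconstrained for the formal tool
  assert always ((sel == 1'b0) -> (y == a));
  assert always ((sel == 1'b1) -> (y == b));
}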
5) FIFOs
FIFO logic is implemented using a state machine, combinational logic and a RAM block. The verification effort usually fans out to:
a) FIFO state machine
b) FIFO status bits
c) FIFO data path
d) RAM interface
e) FIFO configuration
A good coverage driven random simulation can cover the RAM interface, FIFO data path and FIFO configuration; data path verification is usually no simpler in formal verification. The functionalities related to the FIFO status bits (in terms of the state machine's states), the FIFO state machine and the FIFO configuration can be verified more effectively using formal verification.
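A minimal sketch tying the status bits to the pointer/state logic; the signals (wr_ptr, rd_ptr, wrapped, empty, full) are hypothetical:

vunit fifo_status_checks(fifo_ctrl) {
  default clock = (posedge clk);

  // Equal pointers without a wrap-around mean the FIFO is empty
  assert always ((wr_ptr == rd_ptr && !wrapped) -> empty);

  // Opposite status bits must never hold simultaneously
  assert always (!(empty && full));
}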
6) Interrupt Generation, DMA
Interrupt and DMA generation is usually verified in simulation using the following method: a simulation vector brings the design to a state where an interrupt/DMA is expected, and then the verification environment checks that the expected interrupt/DMA occurred in that state.
Assertions can verify these in a much simpler manner by logically connecting the state and the expected value within the assertion.
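A minimal sketch for a timer-style interrupt, assuming hypothetical signals count, threshold, irq_en and irq, and assuming the interrupt asserts one cycle after the match:

vunit irq_checks(timer_unit) {
  default clock = (posedge clk);

  // When the counter matches the programmed threshold and interrupts
  // are enabled, the interrupt line must assert on the next cycle
  assert always ((count == threshold && irq_en) -> next (irq));
}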
7) Asynchronous reset
In some designs an output signal of an IP has to be reset asynchronously. This usually happens when the output of the IP has direct control over the design's IO pins. While writing test cases, the verification engineer has to consider the state of the design before reset, and there can be an enormous number of possible states when a reset arrives; this necessitates a large number of test cases to verify these features. Formal assertion based verification is obviously more helpful here, since the functionality can be verified using a single assertion per feature.
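A minimal sketch of one such assertion, assuming a hypothetical active-low reset rst_n and output pin pin_out; because the formal tool explores every reachable state, one property covers all pre-reset states:

vunit async_reset_checks(io_block) {
  default clock = (posedge clk);

  // Whenever reset is asserted, the output pin must be driven low,
  // regardless of the state the design was in beforehand
  assert always ((!rst_n) -> (pin_out == 1'b0));
}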
Case Study
As a case study, we used a simple IP that had already been verified using coverage driven constrained random simulation. The IP is basically a timer unit with programmable prescalers. The timer generates an interrupt when the counter value reaches a programmable threshold.
Constrained random simulation was used to verify the IP for all its functionalities. Later, assertion based formal verification was applied to all functionalities except the bus interface.
Nature of the Verified IP
This IP communicates with the core using a standard bus protocol for which eVCs are already available. It has a register set that contains protected mode registers, read only and write once kinds of fields, as well as status fields. It also has a timer counter, three prescale counters, edge detection features for counting internal/external events, period calculation for external events, and a programmable tolerance for fluctuations in the external event's period. The IP generates interrupts and DMA requests on specific conditions, and additionally implements a watchdog timer unit.
The Simulation Environment
The simulations were done using the Specman Elite tool with the eRM methodology. The stimuli were basically constraints applied to e sequences. The e sequences drive the bus functional model for read/write operations to memory mapped registers and control the event generators that simulate the external event sources for the IP.
Since the IP is inherently a timer, we used a Specman e based reference model of the IP that is cycle accurate with respect to the actual design. The monitors developed for the bus functional model and the external event sources pass on real-time information about the stimulus driven to the design under test, and the reference model uses this information to keep in sync with the design.
A set of e based checkers verifies the behaviour of interrupts, DMA, the design's status registers and general register read/write operations. These checkers rely on the reference model to determine the expected behaviour of the design.
The functional coverage of the simulation is measured using e coverage monitors that probe the e fields of the reference model and the bus functional model.
Assertion Based Formal Verification
The assertion based formal verification of the IP was done using the Incisive Formal Verifier (IFV) tool. The assertions and assumptions were written using the Property Specification Language (PSL); for constraints, IFV's Tcl interface was used.
A rule base document was prepared that describes each functionality to be verified, the rule that describes the functionality (cycle accurate), the assumptions and constraints that can be used with the rule, and finally the PSL assertion that validates the rule. This is a spreadsheet document; a sample entry is given below.
Functionality / Feature | Rule | Assumptions | Assertion
Counter: Overflow Bit | When the counter value is 0xFFF, an additional hardware event results in an overflow and the ovl status bit is set | 1) Counter enabled; 2) No reset | assert always ({c == 0xFFF ; event} |-> {ovl});
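Such a spreadsheet entry might translate into runnable PSL (Verilog flavor) roughly as follows; the counter width and signal names are assumptions, and the event signal is renamed hw_event because event is a Verilog keyword:

vunit counter_overflow_check(timer_unit) {
  default clock = (posedge clk);

  // Assumptions from the rule: counter enabled, no reset
  assume always (cnt_en);
  assume always (rst_n);

  // Rule: at 0xFFF, one more hardware event sets the ovl status bit
  assert always ({cnt == 12'hFFF ; hw_event} |=> {ovl});
}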
Initially we tried running the assertions by loading the entire design into IFV. This resulted in an "explored depth" status, meaning the tool was unable to complete full proofs on a design as large as ours. Hence the RTL design was partitioned meaningfully and the assertions were run on the different internal blocks in separate IFV sessions. This considerably sped up the assertion runs and avoided the explored depth problem.
Results
Prior to the Specman based verification, the IP had been verified using a considerably large number of directed test cases written in assembly language. Due to the extensive functional coverage achieved in the simulations, the design had been considered a stable IP; the bugs caught in the Specman based simulations were only corner cases.
Table 3 summarizes the results of both simulation based verification and assertion based formal verification. The issues caught in assertion based formal verification were new ones.
Verification Type | Number of Bugs Caught | Area of Bugs | Duration | Remarks
Constrained Random Simulation | 5 | Random logic, registers | 6 months (127 test cases) | Nil
Formal Assertion Based Verification | 2 (spec mismatches) | State machine | 5 weeks (66 assertions) | This activity was done after the constrained random simulation
The assertion based formal verification was completed within a very short span of time, and applying it helped us understand the design's behaviour more closely. This indirectly helped us create a more accurate specification document for the IP. We regenerated these scenarios in the simulation based test bench and found that the issues had been missed in simulation for the following reasons:
1) The coverage definition did not address these scenarios, since they are corner cases.
2) The reference model's behaviour was not well defined in these scenarios.
3) The test cases were random eRM sequences, and the probability of reaching these states is very low.
We added more assertions; however, the IP had become so stable by that point that the chances of finding additional bugs were small.
Figure 4
Conclusion
Our experience verifying this IP using two different methodologies leads us to the following conclusions. These are applicable only to thoroughly verified legacy IPs. For newer IPs, it has been reported that starting IP verification with assertion based formal verification yields a significant improvement in verification performance; however, this is subject to the complexity and nature of the design.
- The formal verification activity has to be supplementary for functional coverage points that are already covered in simulation.
- For the uncovered functionalities, the verification effort should be divided between formal and simulation based techniques.
- Formal techniques may be used for some already-covered functionality if the design seems more suitable for formal techniques.