Measurable Verification Methodology for Highly Configurable IP Cores
By Vishal Namshiker, Solutions Group, Synopsys India Pvt Ltd
Bangalore, India
Abstract
Quality measurement of IP cores with a large number of configurable options is a key challenge for IP developers. IP cores are developed with hard and soft configurable options to facilitate reuse in a wide variety of use-cases.
This paper describes a methodology based on functional coverage technology for measuring IP quality. In this methodology, regressions are run on RTL generated by selecting hard configuration parameters randomly. Constraints are defined so that illegal combinations of such parameters are avoided. Functional coverage is used for both soft configurable options (control registers) and hard configurable options (ifdef parameters). The paper also introduces the idea of using constrained random verification (CRV) to uncover any unknown requirements of the IP with respect to the order in which control registers are programmed during initialization. This enables IP developers to ensure that their IP is independent of the initialization flow that may be adopted by third-party software.
The paper also covers other aspects of our verification flow, such as the use of embedded assertions, linting, and our simulation strategy, that helped reduce verification cycle time and improve the quality of the IP.
1 Introduction
In today's highly competitive IP market, IP providers have to deliver low-cost, error-free IP with high operating frequencies and low power consumption. Creating such high-quality IP is not worthwhile unless it is reused a number of times. To facilitate reuse, the IP has to be highly configurable to suit a variety of end applications.
IP cores can be configured using hard (ifdef parameters) and soft (control registers) configuration options. Soft configuration options are provided by designing the IP with control registers that are software programmable; the values of these registers control the functionality and features of the IP. With hard configuration options, the RTL/netlist is generated according to the configuration parameter values selected by the user. While soft configuration options allow the flexibility to re-configure an ASIC through software changes, hard configuration options are more area-efficient. When an IP core is designed with a large number of configuration options, the design becomes more complex. [9] discusses the problems associated with quality measurement of such IP. It is widely believed that around 70% of the design cycle time is spent on verification, and it is logical to conclude that the greater the complexity of the design, the more time is spent on verification.
The verification methodology suggested by this paper attempts to provide a solution to this problem. In this methodology, the RTL is generated by selecting random values of the configuration parameters, and regressions are run on this randomly generated RTL. This approach eliminates the additional effort involved in manually selecting configurations for regression, generating the RTL, and running regressions on the selected configurations. It also benefits from the inherent advantages of constrained random verification described in the next section.
Section 2 of this paper briefly describes constrained random verification methodology and the importance of functional coverage. Section 3 presents the details of how CRV and functional coverage can be interfaced with coreConsultant [3] for selection of random values for hard configuration parameters. Section 4 introduces a technique for random-ordered initialization of the IP. Section 5 highlights techniques that helped reduce verification cycle time. Section 6 describes the different types of simulations we adopted in the verification flow and the benefits each of them provided for quality improvement.
2 Constrained Random Verification
Constrained random verification techniques are often used for verification of RTL with a large state space. In this approach, the stimulus is generated randomly, with constraints in place to ensure that the random stimulus is meaningful. Checkers and scoreboards react to this random stimulus and check the RTL's response on the fly. Functional coverage is used to track verification progress. With this approach, the major chunk of the verification effort is concentrated on building the testbench. This effort is further reduced if the Bus Functional Models are reused, as in the case of a verification environment built with Verification IP such as DesignWare VIPs [12]. The tests themselves are then developed with minimal effort.
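As a minimal illustration, the Vera sketch below shows a stimulus class with random properties and a constraint. The class, fields, and legal range are hypothetical and not taken from any particular IP.

class eth_frame {
    rand integer length;
    rand bit [47:0] dest_addr;

    // Keep randomly generated lengths within the legal Ethernet range.
    constraint legal_length {
        length in {64:1518};
    }
}

program gen_frames {
    eth_frame frame;
    integer i;
    frame = new;
    // Each randomize() call yields a new constrained-random stimulus.
    for (i = 0; i < 10; i++) {
        void = frame.randomize();
        printf("Generated frame of length %0d\n", frame.length);
    }
}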
Compute farms and grids can then be utilized effectively to run the tests with several seeds to generate many scenarios. Several of these scenarios are unexpected and were never thought of while writing the test plan; this is one of the biggest advantages of CRV over traditional verification methodologies that use directed tests. [4] is a good case study on how CRV compares with traditional verification approaches. [2] provides guidelines for coding a good reusable CRV testbench.
2.1 Functional Coverage
Functional coverage is used to get feedback on what stimulus has been exercised and what has not, and thus gives a measure of how the verification is progressing. Without functional coverage, constrained random verification is like shooting in the dark. The verification plan has to identify the functional coverage points, and the testbench should then be developed with an aim to hit these coverage points randomly. The state space of a fairly complex design is very large; if cross functional coverage is written for the entire state space, it will probably take an impractical amount of time to reach 100% functional coverage. Several of the combinations produced by simply crossing the individual parameters may also be unimportant, and at times incorrect. In such cases, you have to ask whether it is important to cover the entire state space. Would you write a directed test for every possible scenario, or only for a particular corner-case scenario? If you would write a directed testcase for a scenario, write a functional coverage point for it; if not, don't. Also, a large coverage space can be compressed into ranges, with the minimum number of hits set to more than one where required, as sketched below. Chapter 8 of [2] provides ample guidelines for functional coverage. [8] also gives a good idea of how to write functional coverage points effectively.
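For instance, a frame-length field with over a thousand legal values can be compressed into a few range bins, with a higher minimum hit count where a single hit is not convincing. The Vera sketch below assumes a signal frame_length and a clock clk; see the Vera User Guide [1] for the complete coverage syntax.

coverage_group frame_len_cov {
    sample_event = @(posedge clk);
    at_least = 5;                  // each bin must be hit at least 5 times
    sample len_cov (frame_length) {
        state SHORT  (64:127);     // ranges compress the coverage space
        state MEDIUM (128:1023);
        state LONG   (1024:1518);
    }
}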
3 Hard Configuration Options
IP providers often provide hard configurable options for their IP. The RTL in such cases is more area-efficient at the expense of flexibility. This is usually done with the help of ifdef parameters in the Verilog HDL code. For example, an asynchronous FIFO might be coded with parameterized width, parameterized depth, and other such options. But how do we track the verification of such parameterized code? Can we be certain that the asynchronous FIFO is verified for all interesting combinations of parameters? For example, is the FIFO verified for 1-bit width? These concerns are discussed in detail in [10].
At Synopsys, we use a tool called coreBuilder to develop IP with hard configuration options. The advantage of using coreBuilder and coreConsultant [3] is that the user gets the RTL code corresponding to the selected configuration. The `ifdef/`else/`endif structures that look complicated and make the RTL difficult to understand are collapsed, and only the code corresponding to the chosen configuration is generated for the user. The example code below shows a hard configuration parameter as defined in the constants file which serves as an input to coreBuilder.
// reuse-pragma attr Description Select the top-level System Interface configuration
// reuse-pragma attr DefaultValue 0
// reuse-pragma attr MinValue 0
// reuse-pragma attr MaxValue 3
// reuse-pragma attr SymbolicNames {GMAC-AHB GMAC-CORE GMAC-MTL GMAC-DMA}
// reuse-pragma attr Label System Interface configuration
// reuse-pragma attr Sequence 10
// reuse-pragma attr GroupName Features/System Interface/TOP-LEVEL CONFIG
`define GMAC 0

// reuse-pragma attr Description Provide APB, MCI (in GMAC-CORE/MTL/DMA config) or AHB (in GMAC-AHB config) for CSR Interface
// reuse-pragma attr Label Select CSR Interface
// reuse-pragma attr MinValue 0
// reuse-pragma attr MaxValue 2
// reuse-pragma attr DefaultValue {(@GMAC==0) ? 2 : 0}
// reuse-pragma attr SymbolicNames {"MCI Interface" "APB Interface" "AHB Interface"}
// reuse-pragma attr CheckExpr {(@GMAC==0) ? (@CSR_PORT!=0) : (@CSR_PORT!=2)}
// reuse-pragma attr CheckExprMessage "MCI Interface not valid for GMAC-AHB or AHB Interface valid only for GMAC-AHB"
// reuse-pragma attr Sequence 40
// reuse-pragma attr GroupName Features/System Interface/CSR PORT
`define CSR_PORT 0
In the above example, the check expression (CheckExpr) ensures that when GMAC is configured as GMAC-AHB, CSR_PORT cannot be MCI (MAC Control Interface). Similarly, CSR_PORT cannot be an AHB slave when GMAC is configured as anything other than GMAC-AHB.
3.1 Approach for Verification
The hard configuration parameters can be modeled as random properties of an HVL class. The example code below shows these parameters defined as random properties of a Vera class.
class gmac_ahb_dut_cfg {
    rand integer GMAC;      // top-level System Interface configuration
    rand integer CSR_PORT;  // CSR Interface selection
    ...
}
Constraints are put in place to eliminate combinations of parameters that are illegal or unsupported. Constraints can optionally be written in this class to ensure that customers' preferred configurations are favored when the parameters are randomized. The example code below shows how the limitations on parameter values for CSR_PORT are translated into constraints.
constraint dut_valid {
    solve GMAC before CSR_PORT;
    // Mirror the CheckExpr shown earlier: MCI is illegal in the
    // GMAC-AHB configuration; AHB is legal only in GMAC-AHB.
    if ( GMAC == 0 )
        CSR_PORT != 0;
    else
        CSR_PORT != 2;
}
Additionally, verification progress can be tracked by defining functional coverage on these random properties. The code below shows a functional coverage point defined on such a hard configuration parameter.
coverage_group hard_config {
    // Sample the randomized parameter value; the symbolic state
    // names match the SymbolicNames in the constants file.
    sample csr_port_cov (CSR_PORT) {
        state MCI (0);
        state APB (1);
        state AHB (2);
    }
    ...
}
The Vera stand-alone simulator (invoked by vera_cs) is used to generate the configuration parameter values and to track functional coverage on the hard configurable options. The parameter values generated in this manner are used to generate the RTL via a simple script. The code snippet below shows how Vera can dump the randomly generated parameters into a script for coreConsultant.
file_hndl = fopen("vera_output.tcl", "w", VERBOSE);
...
fprintf(file_hndl, "create_workspace -name cfg -installation gmac\n");
...
fprintf(file_hndl, "set_configuration_parameter GMAC %0d\n", GMAC);
fprintf(file_hndl, "set_configuration_parameter CSR_PORT %0d\n", CSR_PORT);
...
fprintf(file_hndl, "autocomplete_activity SpecifyConfiguration\n\n");
...
fprintf(file_hndl, "close_workspace\n\nquit\n");
fclose(file_hndl);
coreConsultant can then be invoked in shell mode to generate the RTL, after which the regression is started using scripts.
coreConsultant -shell -f vera_output.tcl
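For reference, the generated vera_output.tcl then looks roughly as follows; the parameter values shown are from one hypothetical seed.

create_workspace -name cfg -installation gmac
set_configuration_parameter GMAC 1
set_configuration_parameter CSR_PORT 0
autocomplete_activity SpecifyConfiguration
close_workspace
quit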
4 Soft Configuration Options
Often IP users may not know the exact environment of the end application and will prefer to configure the IP through software. For example, the duplex mode of an Ethernet GMAC IP should be a software-configurable option when the SoC could be used in either full-duplex or half-duplex mode. Software configurability is provided by control registers in the register map of the IP that are accessible to software.
4.1 Interdependencies of Control Registers
Since the IP is going to be used in a variety of end applications, it should function seamlessly with software written by programmers possibly located in different geographic locations. Hence, good documentation is an absolute must. RTL designers are expected to document the dependencies in the order of register programming (soft configuration). An example of such an inter-dependency is that the DMA register pointing to the start of the linked list of descriptors should be programmed before the DMA is started by programming the start bit of the DMA control register. However, on occasion certain dependencies may escape the attention of the RTL engineers, or some unwanted dependencies may creep in. This is especially true if the IP has numerous control registers or has evolved through several stages of bug-fixing and code change.
Traditional verification methods do not handle the order of register programming very well. Most often, testcases use the same order of DUT initialization, which adds no value. If a verification engineer does think of changing the order of initialization, it involves sizeable effort and time. Verification engineers also often have limited knowledge of software programming, due to which the order of register programming used in the testcases may be impractical in actual software. This is often true when the IP has to be used with software developed by a third party based on a protocol standard (e.g., USB EHCI).
4.2 How Can CRV Help?
With CRV, the order of register programming can be randomized. Hardware verification languages such as Vera allow constraints to be defined on the elements of arrays. Such an array can be used to store the indices of the registers to be programmed during initialization of the IP, and is declared as a random property. Any known dependencies are converted into constraints on the array elements. The array is then randomized and handed to the BFM, which simply picks up the elements of the array and converts them into register write transactions. Once a dependency on the order of register writes is detected, IP developers can take one of the following actions: if the dependency is acceptable, it is documented and a constraint is added on the array elements to handle it; if the dependency is not acceptable, this flaw in the RTL is addressed immediately.
The example code below shows how an array in Vera can be used to randomize the order of register programming.
class initialization_order {
    // arr[i] holds the index of the i-th register programmed
    // during initialization.
    rand integer arr[15];

    // Each element is unique: a register needs to be
    // programmed only once.
    constraint all_unique {
        foreach (arr, x) {
            arr[x] in {1:15};
            foreach (arr, j) {
                x != j => arr[x] != arr[j];
            }
        }
    }
    constraint requirement_1 {
        // Registers 10, 11, 12 & 13 are initialized
        // consecutively, in that order.
        foreach (arr, p) {
            p > 10 => arr[p] != 10;
            p > 11 => arr[p] != 11;
            p > 12 => arr[p] != 12;
            p > 13 => arr[p] != 13;
            p < 4 => arr[p] != 13;
            p < 3 => arr[p] != 12;
            p < 2 => arr[p] != 11;
            p < 1 => arr[p] != 10;
            foreach (arr, q) {
                (p == q+1 && arr[q] == 10)
                    => arr[p] == arr[q] + 1;
                (p == q+1 && arr[q] == 11)
                    => arr[p] == arr[q] + 1;
                (p == q+1 && arr[q] == 12)
                    => arr[p] == arr[q] + 1;
            }
        }
    }

    constraint requirement_2 {
        // Register 1 is programmed last.
        foreach (arr, i) {
            i == 14 => arr[i] == 1;
        }
    }

    task new() {
        printf("Starting randomize()...\n");
        void = randomize();
        display();
    }

    task display() {
        integer i;
        for (i = 0; i < arr.size(); i++) {
            printf("  arr[%0d] = %0d\n", i, arr[i]);
        }
    }
}
program test {
    initialization_order test_obj;
    test_obj = new;
}
Running the above code with different seeds (for example, seed=1 and seed=2) produces a different order of initialization on each run, and thus has the potential to bring unexpected dependencies to light.
5 Reducing Verification Cycle Time
This section briefly describes the methods we used to reduce verification cycle time.
5.1 Use of Embedded Assertions
In verification, the majority of the time is spent debugging failed simulations. Often the error message is printed only after the error is observed at the periphery of the IP, which on many occasions happens hundreds of cycles after the actual defect condition occurred deep inside the IP. Testbenches in general suffer from low observability.
Use of assertions embedded in the RTL code can reduce this problem. Assertions flag a failure immediately upon detection of incorrect behavior deep inside the IP and thus improve the observability of the verification environment. Assertions are especially useful when embedded in design hotspots such as arbiters, shared resources like memories or buses, FSMs, FIFOs, and interfaces. Use of assertions has been widely observed to reduce debug effort and time.
The SVA library available with the VCS simulator was utilized in our case for quick development of assertions. [6] and chapter 7 of [2] encourage design engineers to embed assertions in RTL to help verification.
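As an illustration, the SystemVerilog sketch below shows the kind of hand-written assertions one might embed next to a FIFO hotspot; the module and signal names are hypothetical, and in our flow the equivalent checks came from the SVA library.

module fifo_guard (input logic clk, rst_n, push, pop, full, empty);

    // Flag an overflow in the very cycle it occurs, deep inside the IP,
    // instead of waiting for corrupted data to reach the periphery.
    a_no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
        !(push && full && !pop))
      else $error("FIFO overflow: push while full");

    // Flag an underflow immediately as well.
    a_no_underflow: assert property (@(posedge clk) disable iff (!rst_n)
        !(pop && empty))
      else $error("FIFO underflow: pop while empty");

endmodule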
5.2 Running Linting Tools on Randomly Generated RTL
Using linting tools such as Leda from Synopsys can help detect many RTL coding defects at a very early stage. Certain defects, such as incorrect clock domain crossings, are also uncovered more effectively with linting than with simulation-based verification. We can utilize the random RTL generation method discussed in section 3 and then perform the linting checks on each generated configuration.
6 Simulation Strategy
Simulations performed for verification of the IP were categorized as RTL simulations, simulations for code coverage, and simulations with a metastability model. Each of these is described in the following subsections, along with the value-add it provided.
6.1 RTL Simulations
RTL simulations were performed throughout the development phase of the Synopsys GMAC-UNIV IP. All the testcases were run on several configurations of the IP. In case a testcase failed, it was debugged and the failure identified as an RTL defect or a testbench/testcase defect. Defects, if any, were fixed, and the failing testcase was run again on the new RTL with the same seed. All the previously passing testcases were also rerun on this new RTL. These iterations continued until all the identified testcases passed, code coverage and functional coverage were good enough, and no new bugs were being detected.
In the initial stages, RTL simulations were performed interactively. Once the testbench and DUT were found to be fairly stable, the simulations were performed in batch mode. Functional coverage reports were continuously tracked during this stage.
6.2 Simulations for Code Coverage
Once the DUT was found to be reasonably stable, simulations were also performed in batch mode with code coverage enabled. The code coverage reports indicate the quality of the test vectors. The matrix below was used for decision making.
Functional Coverage | Code Coverage | Indication
------------------- | ------------- | -------------------------------------
Low                 | Low           | Early in verification
Low                 | High          | Missing sequences and corner cases
High                | Low           | Need to improve test plan (functional coverage points)
High                | High          | Ready for release
6.3 Simulations with Metastability Modelling
We also performed what we call simulations with metastability modeled. The idea is to replace the dual flip-flop synchronizer with a model that replicates real-world metastable events. [11] is a good reference for understanding the effects of metastability in digital designs and how it causes failures. [11] also discusses the inability of simulators to replicate metastable behavior; these simulations are a close substitute that addresses this deficiency.
6.3.1 First Synchronizer Flip-Flop
Whenever a control signal crosses a clock domain, it has to be synchronized using a dual flip-flop synchronizer. The first of these flip-flops is bound to become metastable at some point in time, because there is no timing relationship between the destination clock domain and the source clock domain. The flip-flop attains a stable value before it is sampled by the second synchronizer flip-flop; however, the stable value reached may not be the same as the input at the flop.
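For reference, a conventional dual flip-flop synchronizer is sketched below in Verilog; the module and signal names are illustrative.

module dual_ff_sync (input clk_dst, input rst_n, input d_async, output reg q_sync);

    // First stage: samples the asynchronous input and may go metastable.
    reg meta_q;
    always @(posedge clk_dst or negedge rst_n)
        if (!rst_n) meta_q <= 1'b0;
        else        meta_q <= d_async;

    // Second stage: samples a value that has had a full cycle to settle.
    always @(posedge clk_dst or negedge rst_n)
        if (!rst_n) q_sync <= 1'b0;
        else        q_sync <= meta_q;

endmodule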
6.3.2 Metastable D Flip-Flop Model
Since the stable value at the output of the first synchronizer flip-flop has no relation to its input when it goes metastable, the first flip-flop of the synchronizer can be replaced with a model of a D flip-flop. This model provides a random value at the output in case the setup time, hold time, or reset recovery time is violated; otherwise, the output follows the input. The model contains the logic to determine whether the setup time, hold time, and reset recovery time conditions are satisfied.
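A minimal Verilog sketch of such a model is shown below. For brevity it checks only a setup window (hold and reset-recovery checks would be analogous); the module name and the SETUP parameter are assumptions, not taken from our actual flow.

module meta_dff (input clk, input rst_n, input d, output reg q);

    parameter SETUP = 1;            // assumed setup window, in time units
    time last_d_change = 0;

    // Record when the data input last changed.
    always @(d) last_d_change = $time;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            q <= 1'b0;
        else if (($time - last_d_change) < SETUP)
            q <= $random;           // setup violated: resolve to a random value
        else
            q <= d;                 // timing met: behave as a normal DFF
    end

endmodule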
6.3.3 Replacing 1st Synchronizer Flip-Flop with Model
Synchronizer flip-flops in the RTL code were identified by the design engineers. Once identified, they can be replaced with instantiations of the metastable D flip-flop model in the RTL, guarded by ifdef constructs.
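For example, the first-stage block of the dual flip-flop synchronizer sketched in section 6.3.1 could be swapped for the model as follows; the METASTABLE_SIM macro name is an assumption.

`ifdef METASTABLE_SIM
    // Simulation-only: first stage replaced by the metastability model.
    wire meta_q;
    meta_dff u_stage1 (.clk(clk_dst), .rst_n(rst_n), .d(d_async), .q(meta_q));
`else
    // Normal first-stage synchronizer flop.
    reg meta_q;
    always @(posedge clk_dst or negedge rst_n)
        if (!rst_n) meta_q <= 1'b0;
        else        meta_q <= d_async;
`endif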
7 Conclusion
Verification of highly configurable IP is a challenging task. Constrained random verification should be used to verify IP with a large number of configuration parameters, and the use of functional coverage to measure verification progress is recommended when adopting this approach. This approach has helped us develop the high-quality Synopsys Ethernet GMAC-UNIV IP. In the future, we plan to make use of the RVM Register Abstraction Layer (RAL) in the testbench. RVM RAL, with its built-in functional coverage and other pre-defined, environment-independent utility methods, should reduce testbench development time and further enhance the quality of the IP.
8 References
[1] Vera User Guide
[2] RVM User Guide
[3] Synopsys coreConsultant User Guide
[4] Chris Rosebrugh, "Using Vera and Constrained-Random Verification to Improve DesignWare Core Quality"
[5] Discussions on www.verificationguild.com dealing with problems associated with functional coverage driven verification methodology
[6] Don Mills and Stuart Sutherland, "SystemVerilog Assertions Are for Design Engineers Too!"
[7] Whitepaper, "Constrained-Random Test Generation and Functional Coverage with Vera"
[8] Whitepaper, "Spec-Based Verification"
[9] Whitepaper, "Functional Verification Automation for IP: Bridging the Gap Between IP Developers and IP Integrators"
[10] Sean Smith, "IP Reuse Requires a Verification Strategy"
[11] Philip Freidin, FPGA-FAQ, "Tell Me About Metastability"
[12] Synopsys DesignWare VIP webpage