Functional Coverage Analysis for IP Cores and an Approach to Scale Down Overall Simulation Time
Mohan Srikanth Sunkara, Raja Jagadeesan
Synopsys (India) Pvt. Ltd, Bangalore, India
Abstract
This paper presents functional coverage analysis automation and an approach to scale down overall simulation time. Functional verification of configurable IP cores is a genuinely challenging task in any digital design development, so new methodologies are needed to improve the quality of functional verification and to shorten regression convergence. A metric that measures functional coverage is specific to each design and depends on its functional requirements. We therefore propose a methodology, supported by any industry-standard simulator, that automates coverage analysis at the functional level. We use functional metrics as parameters in our tool and apply these metrics to an executable specification. Using our methodology, we can quantitatively evaluate the test suites developed to exercise the functionality defined in an executable specification. Applying these test suites to an RTL design improves quality, increases the degree of confidence, and reduces the overall simulation time. This approach was followed in our functional verification of configurable USB Host and Device IP controllers.
Introduction
Code coverage metrics such as line coverage, FSM coverage, expression coverage, block coverage, toggle coverage, and branch coverage are extracted automatically by the code coverage tool and give a picture of which sections of the DUT (Design Under Test) have been executed. Root-cause analysis can be done on the code coverage holes, and suitable test cases can be added to cover the DUT functionality. Code coverage has the drawback of not identifying missing features in the DUT: there is no automatic way to correlate the functionality to be tested with the implementation of that functionality, and a lot of manual effort is needed to establish this correlation.
Functional coverage is the measure of how much of the design's functionality has been exercised by the verification environment. It is user-defined coverage that maps every function to be tested (as defined in the test plan) to a coverage point. Whenever the functionality under test is hit in simulation, the corresponding functional coverage point is automatically updated. A functional coverage report can then be generated that summarizes how many coverage points were hit. Functional coverage metrics can be used as a feedback path to measure the progress of a verification effort.
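As a minimal sketch, the mapping from a test-plan item to a coverage point can be written in SystemVerilog as follows; all names are hypothetical and are not taken from the actual USB environment:

module usb_cov_example(input logic clk);
  // Hypothetical packet attributes sampled by the coverage model.
  typedef enum {PKT_BULK_IN, PKT_BULK_OUT, PKT_SETUP} pkt_type_e;
  typedef enum {SPEED_FS, SPEED_HS} speed_e;
  pkt_type_e pkt_type;
  speed_e    dev_speed;
  logic      pkt_done;

  // Each test-plan item maps to a bin; the bin count is updated
  // automatically whenever the covergroup samples at the end of a packet.
  covergroup usb_pkt_cg @(posedge clk iff pkt_done);
    cp_pkt_type : coverpoint pkt_type {
      bins bulk_in  = {PKT_BULK_IN};   // plan item: Bulk IN exercised
      bins bulk_out = {PKT_BULK_OUT};  // plan item: Bulk OUT exercised
      bins setup    = {PKT_SETUP};
    }
    cp_speed : coverpoint dev_speed;
  endgroup

  usb_pkt_cg cg = new();  // instantiate so that sampling is active
endmodule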
A common worry when using coverage metrics in tandem with random stimulus is the possibility of witnessing a significant drop in the coverage rating simply because different seeds were used.
To avoid this problem and help optimize the simulation cycles, coverage analysis tools can also grade the various coverage data sources. Grading helps determine which simulation or analysis contributes the most toward the current coverage rating, and it can be based on absolute contribution to the coverage rating or on the rate of contribution over time. These simulations and analyses, together with their initial seed values, are collected to create the regression suite. By running the most efficient simulations and analyses first, the same level of coverage rating can be reliably obtained in less time [9].
Methodology Used
Adding functional coverage to a verification environment involves the following steps; we take the specific example of verifying USB Host and Device controller IP cores.
- Identification of functional coverage points from the standard specification (directly maps to the test plan).
- Implementation of functional coverage cover groups and crosses.
- Run batch-mode simulations to collect the functional coverage metrics.
- Using the Unified Report Generator (URG), generate a unified functional coverage report.
- Back-annotate the functional coverage metrics into the plan.
In our case we used HVP (Hierarchical Verification Planner) [6] to achieve functional coverage. The methodology adopted is shown in figure 1 and explained below.
Figure 1: Functional Coverage methodology flow
1. Identification of functional coverage points
This is the planning phase, which involves identifying the functional coverage points to be covered and associating the appropriate cover points and crosses with them. The inputs are fed into an HVP plan, which contains the following details, as shown in figure 2 (a sketch of the plan's textual format follows figure 2).
- Annotation: performed by the URG (Unified Report Generator) [5] command; it contains the overall score for the functional coverage.
- Score: initially blank; populated after back-annotation using URG. It also contains individual scores for the specific cover groups and their associated crosses.
- Reference: Specification reference of the functionality to be covered.
- Feature: The specific feature to be covered; in our case, USB Device features such as Bulk IN and Bulk OUT.
- SubFeature: Description of the subfeature to be covered. In our case there are many subfeature columns, each describing a specific aspect; the attributes mentioned in the subfeature are:
- The packet transmitted by the USB device or host and the expected response for the same.
- The present state of the USB Device or Host (HALTED, NON_HALTED, etc.) and the pre-existing device state (e.g. NO_FLOW_CONTROL, POLLING, etc.)
- Detailed description of the specific feature to be covered.
- Name of the cross and covergroup to be covered.
- Comments if any.
Figure 2: HVP (hierarchical Verification Planner) example
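As an illustration, a plan entry in HVP's textual format looks roughly like the following (the plan/feature/measure structure follows the VMM Planner user guide [6]; all names are hypothetical, and this is a sketch rather than a reproduction of our plan):

plan usb_device_plan;
  feature bulk_out;
    // link the plan item to the covergroup instance that measures it
    measure Group bulk_out_grp;
      source = "top.usb_cov.bulk_out_cg";
    endmeasure
  endfeature
endplan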
2. Implementation of the defined cover groups and crosses
This is the second stage, where the covergroups and crosses defined in the HVP are implemented as required. In our specific case we were using the USB VIP, in which the required covergroups and crosses were already defined and implemented; a simplified sketch of such an implementation follows.
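The actual VIP covergroups are proprietary, so the following is only a simplified sketch of what a cover group with a cross can look like, with hypothetical names:

class usb_bulk_out_cov;
  // Attributes sampled per bulk OUT transaction.
  bit [1:0] response_kind;   // 0: ACK, 1: NAK, 2: no response
  bit       eop_error;

  covergroup bulk_out_cg;
    cp_response : coverpoint response_kind {
      bins ack         = {0};
      bins nak         = {1};
      bins no_response = {2};
    }
    cp_eop_err : coverpoint eop_error;
    // Cross: which responses occurred with and without an EOP error.
    x_resp_eop : cross cp_response, cp_eop_err;
  endgroup

  function new();
    bulk_out_cg = new();  // embedded covergroups are constructed in new()
  endfunction

  // Called from a monitor callback when a bulk OUT transaction completes.
  function void sample_txn(bit [1:0] resp, bit eop);
    response_kind = resp;
    eop_error     = eop;
    bulk_out_cg.sample();
  endfunction
endclass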
3. Creating test cases and running regressions
The next step is to create the specific test cases required to hit the crosses and cover points defined in the functional coverage model, run them with multiple seeds with functional coverage collection enabled in the tool, and thereby create the functional coverage database.
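For illustration, two seeds of the same constrained-random test might be run as follows (+ntb_random_seed, -cm_name, and -cm_dir are standard VCS/simv options, but the exact invocation depends on the environment and should be treated as a sketch):

./simv +ntb_random_seed=1 -cm_name bulk_out_seed1 -cm_dir ./regress.vdb
./simv +ntb_random_seed=2 -cm_name bulk_out_seed2 -cm_dir ./regress.vdb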
4. Generating the unified coverage report using the Unified Report Generator
URG generates combined reports for all types of coverage information. You can view these reports organized by design hierarchy, module lists, or coverage groups, and you can view an overall summary of the entire design and test bench on the dashboard. The reports consist of a set of HTML or text files. URG is a tool built into VCS that is used for functional coverage analysis.
The following steps are involved (illustrative commands are sketched after the list).
- Create a single merged functional coverage database from all the test runs in the regression.
- Back-annotate the merged functional coverage database against the HVP so that the coverage results appear in the plan with scores.
- Identify the bins hit and the holes.
- Create more scenarios or run with more seeds, and repeat the steps until all the cover points are hit.
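For illustration, the merge and back-annotation steps can be driven with URG commands along the following lines (-dbname and -plan are described in the URG and VMM Planner documentation [5][6]; the database and plan names here are hypothetical):

urg -dir test1.vdb -dir test2.vdb -dbname merged
urg -dir merged.vdb -plan usb_cov.hvp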
Methods to improve functional coverage
- Usage of filters.
- Usage of URG commands.
- Usage of grading.
1. Usage of Filters to improve scoring:
Sometimes the declared covergroup may have crosses that are not of interest in the functional coverage and that reduce the overall score. For example, if the USB device does not support low speed, the covergroups will still have the low-speed bins present, and the overall score will be lower because of them. In this case we disable the low-speed bins in all the cover points, which lets us obtain the true functional coverage number.
The other scenario is in the initial stages of functional coverage development, where the code for all the scenarios may not yet be available. In that case you can filter out the covergroups that have not yet been defined to get the numbers that are required.
Filtering can be done in two ways.
- Skipping an entire functional covergroup, as it may be inappropriate for the configuration being tested.
- Skipping specific bins of a cover point, as the configuration may not support those specific bins.
a. Skipping an entire functional cover group:
Skipping an entire cover group can be done by adding the keyword skip in the first column of the HVP. This causes the URG tool to ignore the cover group while calculating the final scores. This approach is useful in the initial development stages, when not all the cover groups are fully implemented.
b. Skipping specific bins of a cover point:
This can be done by creating an exclude filter file (extension .el) and including the specific cover points that you want to exclude. When invoking the back-annotation, an extra argument -el (exclude file name) has to be given so that the final report does not include these cover points in the final analysis.
Filter file Example:
The example below filters the BULK OUT token with EOP error bin out of the overall result.
covergroup $unit::svt_usb_protocol_20_host_def_cov_callbacks::host_usb_20_bulk_out
cover item bulk_out_invalid_out_token
bins {{invalid_out_token_no_response_pattern_sequence},{eop_error},{device_speed_fs}}
Using the -show tests option in URG [5]:
URG has a useful option called -show tests, which creates the back-annotated report with an additional column showing which test case covered which cover point. This is an extremely useful feature that can be used to tweak the test list used in regression. It can also be used to decide how many seeds each CRV (constrained-random verification) test case should be run with, based on the number of cover points each seed of the test case hits: if only a small number of cover points in a specific cover group are hit, we can increase the seeds of the specific test case that hits the cover point so that more bins are hit.
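For example, with a hypothetical merged database and plan:

urg -dir merged.vdb -plan usb_cov.hvp -show tests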
URG reports with the -show tests option enabled and disabled are shown in figures 3 and 4. From these figures we find that URG with tests enabled is better at identifying the specific test cases that hit the required cover bins.
Figure 3: URG (unified report generator) report with “show tests” option example
Figure 4: URG (unified report generator) report without “show tests” option example
Coverage Grading
Figure 5: Grade-based coverage analysis flow
Coverage grading is an option used to rank test cases based on the number of functional coverage points hit by each individual test case. The grading option can be used to analyze and remove redundant test cases that target the same functionality. This will help optimize the overall simulation time.
The command to generate functional coverage grading in VCS is:
urg -dir *.vdb -grade -metric group
The flow for grade-based coverage analysis is given in figure 5. When the -grade option is given to URG, it grades all of the tests that are provided as input. When grading is specified, the tests.html page lists the tests in three sections. The first section lists the tests in graded order, showing the incremental coverage score of each test with respect to the tests above it in the list. The second section is a simple list of the tests that were not needed to reach the grading goal (i.e., were not useful). In the third section, all tests are listed in descending order of overall coverage score.
In a random test environment, grading also helps identify the random seeds that provide maximum coverage. It is a good idea to go for functional coverage grading once the verification environment and test cases are frozen and suitable functional coverage numbers have been achieved. A grade report example is shown in figure 6.
Note: If the test bench is still in development and changing, using the same seed that gave good coverage before may not do so again, since the randomization may have been affected by the changes in the source code.
Figure 6: Grade report example (including standalone scores for tests that do not contribute to grading)
Results
With the help of the grading report, a one-time manual effort is required to change the regression test list, which helps reduce the number of seeds in the constrained-random test cases and also helps remove redundant directed test cases [4].
With the help of the grading technique we optimized our overall simulation time by about 50% (from 3.3 hrs to 1.8 hrs). From our analysis, if the design is small and the number of bins to be covered is small, the process can be done manually; but if the design is complex and the number of bins is large, updating the regression test list based on the grade report is time consuming and error prone. Based on this observation, we propose using an automated tool at this stage, which avoids human errors and saves the time spent searching and editing the regression test list, as shown in figure 7. This helps remove the redundant test cases and other test cases that do not contribute significantly to increasing the functional coverage numbers.
Figure 7: Proposed automation flow
Conclusion and future work
In this paper, we have presented an automated approach to scale down overall run time by integrating our scripts with existing functional coverage tools. The usage flow of the proposed automation is to update the existing regression test list with the scripts we developed, which is much faster and less error-prone than updating the test list manually. Developing this technique further and integrating it with the existing flow is a subject of our future work. We are also planning to use a tool that automatically generates test cases for uncovered bins and checks whether an uncovered bin can be covered by the test environment under the existing constraints.
We described the methodology used to implement functional coverage for a complex IP core. Going beyond the traditional functional coverage approach, we explored how a combination of tool-assisted intelligent grading of the test cases and elimination of scenario redundancy achieves better verification efficiency by reducing the simulation time.
References
[1] Chien-Nan Jimmy Liu, Chen-Yi Chang, Jing-Yang Jou, Ming-Chih Lai and Hsing-Ming Juan, "A Novel Approach for Functional Coverage Measurement in HDL", ISCAS 2000 - IEEE International Symposium on Circuits and Systems, May 28-31, 2000, Geneva, Switzerland.
[2] Nancy Pratt and Dwight Eddy, "Are We There Yet?", SNUG San Jose 2008.
[3] Oded Lachish, Eitan Marcus, Shmuel Ur and Avi Ziv, "Hole Analysis for Functional Coverage Data", IEEE/ACM Design Automation Conference (DAC), 2002.
[4] Coverage Technology Reference Manual, Synopsys, Version D-2010.06-SP1, December 2010.
[5] Coverage Technology User Guide, Synopsys, Version D-2010.06-SP1, December 2010.
[6] VMM Planner User Guide, Synopsys, Version D-2010.06-SP1, December 2010.
[7] VMM User Guide, Synopsys, Version D-2010.06-SP1, December 2010.
[8] VCS Coverage Navigator, Synopsys, Version D-2010.06-SP1, December 2010.
[9] Janick Bergeron, Eduard Cerny, Alan Hunter and Andrew Nightingale, "Verification Methodology Manual for SystemVerilog".
[10] James Young, Michael Sanders, Paul Graykowski and Vernon Lee, "Managing coverage grading in complex multicore microprocessor environments", EE Times Design Article.
Keywords: Functional Coverage, Grading, USB, IP, Connectivity, Verification, HVP, URG