Best Practices for a Reusable Verification Environment
Steve Ye, EETimes
(07/12/2004 9:00 AM EDT)
Verification reuse is critical to the productivity and efficiency of system-on-chip (SoC) verification. The foundation of this technique is well-designed verification code and components that implement reusability techniques. Before developing that code, however, it is essential for the designer to learn practical, real-world techniques for creating a highly reusable verification environment using an environment such as Specman e. Such a guide covers project management, testbench architecture, verification planning, test case creation and scripting.
Why verification reuse?
Verification reuse involves reusing existing verification environments, or components of them, that were developed for other designs or blocks. It includes verification code reuse (monitor, bus-functional model [BFM], scoreboard, data item), test case reuse, assertion reuse, simulation script reuse and coverage analysis reuse.
As design complexity grows, the complexity of the functional verification task rises exponentially. Considering that verification consumes 50 percent to 80 percent of the total development effort, verification reuse brings tremendous benefits to the verification team. Verification reuse can:
- Dramatically reduce the verification environment build effort.
- Reduce verification risk and improve product quality.
- Reduce the need for deep protocol expertise on the verification team.
For the reuse of verification components, a number of requirements must be met from the perspective of verification component users. They include:
- The ability to integrate with the design implementing the specific interface.
- The ability to integrate with other verification environments.
- Allowance for multiple instantiations.
- A user-friendly interface for writing tests.
- A clear interface for extensions.
- Complete and clear documentation.
Verification planning
The first step of verification planning is to separate the verification environment from the test cases; that is, to separate the test case-specific components from the test case-generic components. Among the test case-generic components, identify those that can be reused among different macro verifications, between macro- and chip-level verification, or in other projects. These must be summarized in the testbench specification.
A test plan is the document used to define each test case. It should be written before creating the test cases, since this document is used to identify the number of tests required to fully verify a specific design. Before creating the test plan, the following information should be collected and listed:
- All configuration attributes.
- All variations of every data item.
- All the important attributes of each data item that you would like to control, along with the range of values for each generated data item.
- All interesting sequences for every device-under-test (DUT) input port.
- All corner cases to be tested.
- All error conditions to be created and all erroneous inputs to be injected.
This information is used to identify the verification targets, or goals. Based on those goals, test cases are created and documented in the test plan. For reusability, it is desirable to separate the verification goals from the test case implementation. The same verification goal can be achieved at different levels, such as the block or chip level, and through different methods, such as directed tests or random tests. The goals are reusable, but the implementations of the test cases are not.
The tests should be categorized: white-box or black-box, directed or random, block-level or SoC-level, functional or interface, standard-compliance or implementation-specific. These categories make it easy to sort the tests by their reusability.
Implementation-specific tests are usually not reusable in other projects, while standard-compliance tests are usually reusable across levels and projects. In one regression, every random test is usually run multiple times with different seeds, while each directed test is run only once, so the directed tests should be separated from the random tests.
Designers know the internals of the DUT implementation and can help define interesting test cases, so they should be involved in the process of defining the test plan.
Verification environment
The verification environment consists of an automatic verification/regression control system driven by a series of test cases, which usually are a set of constraints in an aspect-oriented environment or a set of scripts containing calls to class member functions in an object-oriented environment. For reusability, the verification environment should be modeled with modularity, configurability and completeness in mind.
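In an aspect-oriented language such as e, for example, a test case can be nothing more than a constraint layer on top of a generic data item. The following minimal sketch illustrates the idea; the names (xyz_packet_s and its fields) are hypothetical, not taken from any particular eVC:

<'
-- A generic data item, defined once in the verification environment.
struct xyz_packet_s {
    length  : uint (bits: 8);
    payload : list of byte;
    keep payload.size() == length;
};

-- A test case is just a constraint layer extending the generic item:
extend xyz_packet_s {
    keep length in [1..16];      -- this test generates only short packets
    keep soft length == 4;       -- default, still overridable by other layers
};
'>

Because the test touches no procedural code, the same environment runs unmodified under any number of such constraint files.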
Verification IP prerequisites
Verification IP is a verification component, and an eVC is a verification component in the e language. It is a ready-to-use, configurable verification module, typically focusing on a specific protocol or architecture.
A verification component must:
- Be self-contained, so that it can be easily instantiated either alone or within an existing environment.
- Have the ability to specify a different configuration for each instance.
- Be easily configured at both the component and the element levels.
- Be reusable at different levels of DUT integration.
- Implement all protocol elements of the specific interface.
The component definitions are as follows:
- Bus-functional models. A BFM is the unit instance that interacts with the DUT by driving and/or sampling the DUT signals. A sequence driver passes it data items produced by a data-generation unit. A BFM should be self-contained, not dependent on other drivers. All stimulus interaction with the DUT should come from common drivers. This makes the verification IP more modular and reusable.
A BFM drives and samples only one interface. An interface is defined as a set of signals that implements a specific protocol. This keeps the design modular and the drivers reusable. BFMs should not check the interface protocol; protocol checking is handled by monitors.
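A minimal BFM sketch in e might look as follows. All names and signal paths are illustrative assumptions; the computed signal names built from sig_prefix are one common way to allow multiple instantiations, each bound to a different set of pins:

<'
-- Minimal BFM sketch: drives exactly one interface, does no checking.
unit xyz_bfm_u {
    sig_prefix : string;          -- per-instance signal path, e.g. "top.p0_"
    event clk is rise('(sig_prefix)clk') @sim;

    -- Drive one data item onto the interface; called by the sequence driver.
    drive_item(pkt : xyz_packet_s) @clk is {
        '(sig_prefix)valid' = 1;
        for each (b) in pkt.payload {
            '(sig_prefix)data' = b;
            wait cycle;
        };
        '(sig_prefix)valid' = 0;
        message(MEDIUM, "BFM drove packet of length ", pkt.length);
    };
};
'>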
- Monitors. Monitors are used to check and observe all transactions on the interface. Monitors should be self-contained, with each monitor handling only one interface, and should not drive any design signals.
A monitor verifies the protocol on the interface, but determining whether the received data is correct should be left to the scoreboard. As the IP or block is integrated into a multiple-unit or SoC environment, the monitors should remain reusable for checking violations on the interfaces of the IP or block.
Monitors should be capable of being enabled and disabled. This is important for reusing the verification IP in an SoC environment. The pads of an SoC are often multiplexed in order to provide multiple functions while reducing the package pin count. The functionality will be selected by primary pins and/or internal registers. Therefore, it should be possible to configure the monitor according to the SoC setup.
Temporal checking, which verifies the correctness of timing, sequences and relationships, is a task of the monitor. All exceptions or interrupts that happen during simulation should be recorded in association with the data items that were in process when they occurred.
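A passive monitor for the same hypothetical interface could be sketched in e as below. Again, the unit name, signal paths and the four-cycle rule are assumptions for illustration; note that the monitor samples but never drives:

<'
-- Minimal monitor sketch: samples one interface, checks protocol only.
unit xyz_monitor_u {
    checks_on : bool;             -- lets an SoC setup disable the checks
        keep soft checks_on == TRUE;

    event clk is rise('top.clk') @sim;
    event req is rise('top.req') @clk;
    event ack is rise('top.ack') @clk;

    -- Temporal check: every request must be acknowledged within 4 cycles.
    -- (In a full eVC this expect would sit in a when-subtype so that
    -- checks_on can really remove it.)
    expect req_ack is @req => {[..3]; @ack} @clk
        else dut_error("Protocol violation: req without ack within 4 cycles");
};
'>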
- Scoreboard. A scoreboard is the verification element that predicts, stores and compares data. It does not check the protocols; that is the task of the monitors. The separation of data checking from protocol checking makes the verification elements more reusable and less complicated to implement.
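The predict-store-compare behavior can be captured in a few lines of e. This sketch assumes an input-side monitor calls add_expected() and an output-side monitor calls compare(); the names are hypothetical:

<'
-- Minimal scoreboard sketch: predicts, stores and compares data items.
unit xyz_scoreboard_u {
    expected : list of xyz_packet_s;

    -- Called by the input monitor: store the predicted packet.
    add_expected(pkt : xyz_packet_s) is {
        expected.add(pkt);
    };

    -- Called by the output monitor: compare against the oldest prediction.
    compare(actual : xyz_packet_s) is {
        check that not expected.is_empty()
            else dut_error("Unexpected packet: nothing was predicted");
        var exp_pkt : xyz_packet_s = expected.pop0();
        var diffs   : list of string = deep_compare(exp_pkt, actual, 10);
        check that diffs.is_empty()
            else dut_error("Data mismatch: ", diffs[0]);
    };
};
'>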
- Functional-coverage collection. The functional-coverage collection element tells you which functionalities have been exercised. For reusability, functional coverage items should be separated into implementation-specific and implementation-nonspecific, and it should be possible to turn the implementation-specific items off when the component is reused.
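In e, coverage is defined per event, which makes the split straightforward. This sketch extends the hypothetical xyz_monitor_u from above; the FIFO-level signal is an invented example of an implementation-specific item:

<'
extend xyz_monitor_u {
    cur_pkt : xyz_packet_s;       -- filled as the monitor collects a packet
    event pkt_done;               -- emitted when a complete packet was seen

    cover pkt_done is {
        -- implementation-nonspecific: valid for any DUT of this protocol
        item length : uint (bits: 8) = cur_pkt.length;
        -- implementation-specific: internal FIFO level of this DUT only
        item fifo_level : uint (bits: 4) = 'top.dut.fifo_level';
    };
};
'>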
The specific verification guidelines are as follows:
- Messaging. Uniform messaging for errors, warnings and information is critical for script writing, readability and portability. A common routine should be used to display simulation messages. Using common routines ensures a uniform output format and simplifies both debug and script writing. A single display routine also allows a single point of maintenance for the log file names. All simulation messages should indicate the point of origin, the simulation time and the nature of the message. All output information should have several levels of detail, from completely silent to full information.
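In e, the built-in message() action already stamps each line with the simulation time and the emitting unit, and its verbosity argument provides the silent-to-full range, controlled at run time through the message logger. A small sketch, extending the hypothetical BFM from above:

<'
extend xyz_bfm_u {
    drive_item(pkt : xyz_packet_s) @clk is also {
        message(LOW,  "Drove a packet");                   -- headline info
        message(HIGH, "Packet length was ", pkt.length);   -- detailed info
        message(FULL, "Payload: ", pkt.payload);           -- full dump
    };
};
'>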
- Termination. All tests should be terminated by a standard mechanism. Error types should be defined to indicate the nature of simulation errors, and common routines or mechanisms should be used to report the errors. Predefining error types such as CRC error, time-out error and mis-comparison error simplifies the process of error collection and reporting.
All verification components should flag errors using the standard format. A time-out routine that stops the simulation after a predefined number of simulation cycles and/or a predefined number of executed items should be defined. All tests must eventually terminate; in the event of a deadlock (i.e., a test waiting for an event that will never occur), a means should be available to terminate that section of the test.
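A watchdog of that kind takes only a few lines in e. The cycle budget, unit name and error-kind type below are illustrative assumptions:

<'
-- A shared error-kind type lets scripts classify failures uniformly.
type xyz_error_kind_t : [CRC_ERROR, TIMEOUT_ERROR, MISCOMPARE_ERROR];

-- Watchdog sketch: forces termination of a deadlocked test.
unit xyz_watchdog_u {
    max_cycles : uint;
        keep soft max_cycles == 100000;
    event clk is rise('top.clk') @sim;

    watch() @clk is {
        wait [max_cycles] * cycle;
        dut_error("TIMEOUT_ERROR: test exceeded ", max_cycles, " cycles");
    };
    run() is also { start watch(); };
};
'>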
- Reset. All tests need a standard means of resetting and initializing the DUT and verification environment. The verification environment should check the DUT's response to assertion and de-assertion of the reset(s). This requires that the test environment be properly reset and able to drive and sample the signals of the DUT during the reset.
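A reset routine can live in the BFM so that every test starts from a known state; checking the DUT's outputs while reset is asserted covers the response check mentioned above. The signal names below are the same illustrative ones used earlier:

<'
extend xyz_bfm_u {
    reset_dut() @clk is {
        '(sig_prefix)rst_n' = 0;
        wait [10] * cycle;                    -- hold reset asserted
        check that '(sig_prefix)ack' == 0
            else dut_error("DUT drove ack while in reset");
        '(sig_prefix)rst_n' = 1;
        wait [2] * cycle;                     -- let the DUT leave reset
    };
};
'>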
- Clocking. A difference in clocking techniques is often a source of incompatibility when integrating modules into an SoC environment. Multiple clock environments and clock skew should be handled consistently. Clocks that have frequency dependency should be scalable via a constant or variable.
A multiple-clock environment should have a mechanism to allow period scaling of all clocks through a common constant or variable. Delay parameters should be specified as a fraction of the system-level clock. This allows time delays to be adapted to new simulation clocks simply by modifying the system clock period. Verification components should be able to work with DUT-supplied clocks; this enables multiple verification IPs to be combined in one verification environment.
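One way to get that scaling in e is to derive every generated clock and delay from a single variable, as in this sketch (sys_clk_period, the divider and the signal path are all assumptions):

<'
extend sys {
    sys_clk_period : uint;        -- one knob, in simulator time units
        keep soft sys_clk_period == 10;
};

-- Clock generator sketch: every clock is a ratio of the system clock.
unit xyz_clk_gen_u {
    divider : uint;               -- this clock runs at sys clock / divider
        keep soft divider == 1;

    gen_clk() @sys.any is {
        while TRUE {
            'top.clk' = 1;
            wait delay((sys.sys_clk_period * divider) / 2);
            'top.clk' = 0;
            wait delay((sys.sys_clk_period * divider) / 2);
        };
    };
    run() is also { start gen_clk(); };
};
'>

Retargeting the whole environment to a faster clock then means changing sys_clk_period in one place.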
- Scripts. Scripts should not contain any absolute paths; only relative paths should be used, and they should stay within the directory tree of the macro. When using CAD tools, an absolute path may be necessary; in that case, all such paths should be gathered in one file and documented.
Scripts should check that all data and CAD files needed exist; otherwise, the script should exit. This prevents the script from executing in the wrong directory.
The script should create any files and subdirectories that it needs and should not assume that they already exist. If it cannot create them, the script should exit.
[Related chart: an IP verification test plan. Source: Agere Systems Inc.]
Scripts should use variables instead of hard-coded data and hard-coded paths. The variables should be defined at the top of the script.
Attention should be paid to the effect of environment variables in the execution of scripts. If possible, a script should make every attempt to be independent of environment variables. For example, the environment variable $PATH may be different for different users. Some users may have a "." that implies the current directory is in the path; others may not. A script can avoid this issue by always referring to a file in the current directory by the full relative path name "./".
Project management
The eVC examples provided by the eVC reuse methodology could be used as coding templates for e-based verification components. They provide basic eVC structures and useful functions such as reset/clock generation. A complete set of document templates should be created and used throughout the project or company to speed the documentation process and improve document quality.
Directory and files
For completeness, all of the work-in-progress and final deliverables for the verification environment building blocks should be situated within one root location.
[Related chart: structure of the verification environment. Source: Agere Systems Inc.]
To reduce the likelihood of name collisions when different verification components are used concurrently, the verification element names, such as struct, unit and enumerated type names in the e language, should be unique. One way to ensure this is to prefix every name with unique company and component identifiers separated by underscores. This helps prevent collisions with other components of the verification environment.
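For instance (the "xyz" prefix below is an invented company/component identifier):

<'
-- Every global name carries the component's unique prefix.
type xyz_burst_kind_t : [SINGLE, INCR4, INCR8];

unit xyz_bus_monitor_u {
    kind : xyz_burst_kind_t;
};
'>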
I recommend starting with these practical guidelines for creating reusable verification components. Doing so will dramatically improve the reusability of the verification environment. We at Agere have implemented such a structure and process with success.
Steve Ye (qye@agere.com) is a senior engineer in the IP Reuse Design & Development group of Agere Systems (Allentown, Pa.).