A comprehensive approach for verification of OCP-based SoCs
Pisa, Italy
Abstract:
OCP is a standard for on-chip interconnect, widely used in wireless and multimedia applications: developers can design their IPs in a bus-independent way, allowing immediate and effective reuse. The intrinsic flexibility of OCP and the complexity of the systems using the protocol bring new challenges to SoC engineers, both in design and in verification. This paper describes the authors' experience in verifying Open Core Protocol based SoCs for state-of-the-art applications: the results show how a comprehensive approach can dramatically reduce the time needed for functional verification while reaching the best verification quality. The paper concludes with some comments on standardization and compliance, two key aspects to be taken into account when designing and verifying interfaces for specific communication protocols.
Introduction
Nowadays, the specificity of a product on one hand and the generality of the building blocks to be reused on the other represent a real paradox for design and verification methodologies. As an example, if even a minor change affects the functionality, and consequently the interface, of an IP, compatibility between the module and all the other elements in the system can be compromised. This means, for instance, that functional validation has to be completely redone, because test-benches need modifications, as do test programs and so on. The scenario would look different if the IP interface could remain unchanged and only the inner functionality required additional verification effort.
The Open Core Protocol (OCP) [1] is a configurable, completely bus-independent protocol set, well suited to any hardware communication behavior, both synchronous and asynchronous. From an architectural point of view, its main advantage is the decoupling of interconnect and interface. OCP interfaces are highly configurable and can replace proprietary bus interfaces, allowing easy IP reuse and the adoption of general abstract models: a single interface family can satisfy the needs of any peripheral or core. From a methodology point of view, OCP represents one protocol for the whole system, which drastically reduces the time needed for functional validation and benchmarking of any future release of the system: it is possible to define and develop just one set of transactions, independent of the system interconnect architecture, and reuse it for these tasks.
Verifying OCP-based SoCs: challenges and how to address them
The adoption of such OCP-based architectures can help functional validation, but there is no free lunch. Moving the centre of gravity of a system to the interface means that flexibility and configurability become a real challenge for verification. These intrinsic properties of OCP interfaces call for a corresponding complexity in the reference model, which should integrate every possible aspect or feature of the protocol, with the option to extend it or add extra features while design and validation are still ongoing.
The major challenges in verifying an OCP-based SoC can be summarized as follows:
- to implement a flexible verification environment;
- to cover multiple abstraction layers;
- to efficiently measure functional coverage and system performance;
- to check the compliance of the interface with the protocol;
- to implement a re-usable reference model;
- to be able to quickly update the verification environment when new releases of the specifications are issued.
In general, the most important requirement is to improve verification productivity: the challenges described above must therefore be addressed with a comprehensive approach.
Flexibility of the verification environment can be addressed by using a scalable and configurable architecture, one that also makes it possible to generate a wide, modular suite of test cases.
Object-oriented languages make it easy to range from top-level scenarios down to single transactions, covering all the abstraction layers.
Functional coverage and system performance must be addressed first by defining proper metrics and then by adopting state-of-the-art verification tools. Moreover, a strict methodology (e.g. an exhaustive table of checks) should be used to enable protocol compliance checking.
Finally, a well-defined methodology has to be used to ensure full reusability of the reference model at each level of the system.
All these concepts are described in the following sections, together with the results obtained from the authors' field experience in verifying OCP-based systems.
The Verification Architecture
A flexible and configurable architecture for the verification environment typically comprises:
- multiple instances of verification components, used for:
- interconnect verification;
- IP interface verification;
- protocol compliance;
- an effective methodology, i.e. a top-level verification controller managed by a state-of-the-art verification tool.
Figure 1 represents a possible architecture for the verification environment.
In this verification environment, OCP master and slave verification components are used to initiate and receive transactions at the different abstraction levels. An OCP checker is used to verify protocol compliance and to collect information for data logging and performance analysis. Other verification IPs can be used to verify the remaining IP interfaces. All the verification components are managed by a top-level sequence generator that creates and controls all the possible verification scenarios. The way these components interact with one another is very important in order to obtain the best verification performance and productivity.
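As an illustration of this structure, the following is a minimal sketch in the e language of a top-level environment instantiating the components described above. All unit and field names are purely illustrative assumptions, not the actual eVC code.

<'
-- Sketch only: hypothetical unit names for the environment described above.
unit ocp_verif_env_u like any_env {
    master     : ocp_master_agent_u is instance;     -- initiates OCP transactions
    slave      : ocp_slave_agent_u  is instance;     -- responds to OCP requests
    monitor    : ocp_bus_monitor_u  is instance;     -- protocol checks, logging, coverage
    top_driver : top_sequence_driver_u is instance;  -- top-level sequence generator
};
'>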
Quality, maintainability and extendibility of the verification environment are guaranteed by the use of Verisity's e language and of the Specman Elite© tool [2]. e is a truly object-oriented, fully extensible language, and it does not require any rework of existing source code when patches or improvements are needed. An IEEE initiative, IEEE P1647, was started in 2003 to standardize the language [3]. Both the e language and Specman Elite© provide many useful features for verification and interactive debugging, such as constrained-random and coverage-driven verification. Models described in the e language are called e Verification Components (eVCs) and can be considered extensible atomic blocks from which complex verification environments are built. Its time- and hardware-oriented concepts make the language particularly well suited to high-quality modeling.
Figure 1: architecture of the verification environment for an OCP-based SoC
The OCP Verification Component
The proposed verification architecture is based on the OCP 2.0 e Verification Component, whose architecture is schematically represented in Figure 2. The way this component operates is logically represented in Figure 3.
Figure 2: architecture of the OCP verification component
The OCP 2.0 eVC is composed of active and passive units: the former generate and inject transactions, or respond to transaction requests, according to the OCP specifications; the latter monitor and log traffic information to collect items for functional coverage and also check protocol consistency. Since the verification component is implemented in an object-oriented language, the basic structure is the transaction, and the appropriate units and methods are developed to handle this object.
The basic transaction instance of the OCP verification component comprises all the fields needed to model any kind of physical transaction on the OCP interface. Generally, when an instance of this transaction item is generated, almost all of its fields take random values. Nevertheless, during transaction generation and then during the test-program creation phase, the verification engineer can constrain all, some or just one of these fields to assume only specified values or to lie within a specific range.
This set of constraints turns the generic transaction into the desired transaction (a fully constrained item), a sequence of transactions (a series of fully constrained items), a specific kind of transaction (a partially constrained item) or a sequence of such transactions (a series of partially constrained items).
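A minimal sketch of such a transaction item in e is shown below; the type, field names and default ranges are assumptions made for illustration, since the real eVC defines many more fields and configuration options.

<'
-- Illustrative OCP transaction item; names and ranges are assumptions.
type ocp_cmd_t : [IDLE, WRITE, READ];      -- subset of the OCP MCmd commands

struct ocp_transaction like any_sequence_item {
    cmd       : ocp_cmd_t;
    addr      : uint (bits: 32);
    burst_len : uint;
    data      : list of uint (bits: 32);

    keep soft burst_len in [1..16];        -- overridable default burst range
    keep data.size() == burst_len;         -- one data word per burst beat
};

-- A test can then constrain any subset of the fields, for example:
extend ocp_transaction {
    keep soft cmd in [READ, WRITE];        -- bias generation towards reads and writes
};
'>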
Figure 3: functional description of the OCP verification component
The OCP basic transaction also contains functional parameters that are interpreted by the procedural part of the verification component to emulate delays, latencies and so on, and to obtain very specific scenarios (for example, forcing request and response to overlap).
When the verification component acts as a "master", it generates this structure following a PUSH-mode methodology: the main method of the verification component generates the transaction structures, whose fields can assume default, random or user-constrained values, and then hands to the configurable BFM (Bus Functional Model) the task of driving them onto the bus. The BFM part of the master implements the procedural part of the protocol and is responsible for driving all the signals with the proper dependencies. When the verification component acts as a "slave", a different BFM generates the basic structure, follows the flow of the signals on the bus and drives the responses accordingly, filling the structure step by step.
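The following sketch illustrates the PUSH-mode idea on the master side: the environment hands a generated transaction to the BFM, which owns the signal-level handshake. The port names, HDL paths and the simplified handshake shown here are assumptions, not the actual eVC implementation.

<'
-- Sketch of a master-side BFM in PUSH mode; all names are illustrative.
-- Port binding to the DUT is omitted for brevity.
unit ocp_master_bfm_u {
    clk_p        : in  simple_port of bit is instance;
    scmdaccept_p : in  simple_port of bit is instance;
    mcmd_p       : out simple_port of uint (bits: 3)  is instance;
    maddr_p      : out simple_port of uint (bits: 32) is instance;

    keep clk_p.hdl_path()        == "Clk";         -- assumed HDL signal names
    keep scmdaccept_p.hdl_path() == "SCmdAccept";
    keep mcmd_p.hdl_path()       == "MCmd";
    keep maddr_p.hdl_path()      == "MAddr";

    event clk is rise(clk_p$) @sim;

    -- Drive one generated transaction: present the request, hold it until
    -- the slave accepts it, then return MCmd to IDLE.
    drive_transaction(t : ocp_transaction) @clk is {
        mcmd_p$  = t.cmd.as_a(uint);      -- enum order chosen to match MCmd encoding
        maddr_p$ = t.addr;
        wait until true(scmdaccept_p$ == 1);
        mcmd_p$  = 0;                     -- MCmd = IDLE
    };
};
'>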
Reuse: from module to system level
A typical application of the verification component is when a module is developed from scratch, or when an enhancement is needed to make its interface OCP compliant, as represented for instance in Figure 4. Boundary conditions are generally design dependent and rarely match the requirements of the system in which the module is supposed to work. The verification component allows the user to constrain transaction generation and thus obtain a well-known scenario, without loss of generality. This means that even if the transaction fields can assume only specific values, there is always a random component in the BFM (for example, the delays between requests or responses), which allows the transactions driven on the bus to push the DUT into specific corner cases, like the ones it would reach when instantiated in the system.
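A module-level scenario of this kind can be expressed as a constrained sequence built on the eVC's sequence library, as in the sketch below; the sequence name, the address window and the command names are assumptions for illustration.

<'
-- Illustrative constrained scenario: a write followed by a read to the same
-- (assumed) register window, while BFM-level delays stay random.
extend ocp_sequence_kind : [WRITE_THEN_READ];

extend WRITE_THEN_READ ocp_sequence {
    body() @driver.clock is only {
        do ocp_transaction keeping {
            .cmd  == WRITE;
            .addr in [0x40000000..0x40000fff];   -- assumed DUT register window
        };
        do ocp_transaction keeping {
            .cmd  == READ;
            .addr in [0x40000000..0x40000fff];
        };
    };
};
'>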
The verification component includes simple and complex checking rules derived from the OCP specifications, as well as metric definitions for functional coverage. The embedded protocol checker is a runtime tool capable of applying the OCP rules to the current bus traffic. If a violation is detected during simulation, the checker notifies the user of the error and prints a message describing the violation. A link can also be used to browse all the information available for the offending transaction. Furthermore, these rules can be extended and customized by the user.
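A user-level extension of such a rule might look like the sketch below; the monitor event and the field it inspects are assumptions about the eVC's internals rather than its real interface.

<'
-- Sketch of an additional protocol rule layered on the (hypothetical) monitor.
extend ocp_bus_monitor_u {
    -- ocp_request_started is assumed to be emitted when a request phase begins,
    -- and cur_req is assumed to hold the transaction being reconstructed.
    on ocp_request_started {
        check that cur_req.burst_len in [1..16] else
            dut_error("OCP violation: illegal burst length ", cur_req.burst_len);
    };
};
'>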
Figure 4: module-level verification
Such a verification environment is valid not only for module-level validation but for every OCP-based module and OCP-based system. The architectures created to support defined sequences of transactions at module level are fully reusable, and they become the first building blocks of system-level verification environments.
The possibility of moving easily from module to system level is a direct consequence of the generality of Verisity's eRM methodology (the skeleton of this verification component) and of its object-oriented philosophy [4]. The eRM methodology guarantees that all the verification components instantiated in the verification environment work in the same way (even when different types of protocol need to be checked) and that similar structures can be generated and merged into sequences very easily, relying on compatibility and the absence of conflicts in resource sharing.
When several verification components are available, one for each interface of the DUT, they can generate independent transaction objects or structures and operate on different sides of the DUT. The common modeling language and the object-oriented properties make it possible to constrain the models' behavior and to emulate system-like activity around the module.
Verification component instances are the reference model for debugging the different interfaces and can act independently, while an upper layer can be created as a system-like configuration. An example of such an upper-level verification environment is represented in Figure 5 for an OCP-based sub-system: two instances of the OCP eVC are used together with other verification components to implement a register set and to verify the other ports of the sub-system. A coverage collector is also used.
Figure 5: example of a verification of an OCP sub-system
In the case of OCP-based systems, the functional coverage items also help the verification engineer to create additional test sequences and to rank the test programs. The information provided by the coverage suite is organized into functional items, so it is immediately evident when something has not been tested yet. The coverage from each OCP verification component instance gives a local evaluation of the effectiveness of the tests for that particular sub-module and contributes to the overall coverage collected over all the interfaces. The user can then extend the test programs to cover what has not yet been exercised, even in a large and complex design. When 100% functional coverage is reached, the verification engineer knows that every aspect of the internal interfaces captured by the coverage metric has been verified.
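As an example of how such items can be defined, the sketch below adds a small coverage group to the (hypothetical) bus monitor; the event name, the current_trans field and the bucket boundaries are all assumptions.

<'
-- Illustrative functional-coverage definition; names and buckets are assumed.
extend ocp_bus_monitor_u {
    -- ocp_transaction_done is assumed to be emitted when a transfer completes
    cover ocp_transaction_done is {
        item cmd       : ocp_cmd_t = current_trans.cmd;
        item burst_len : uint      = current_trans.burst_len using ranges = {
            range([1],     "single");
            range([2..4],  "short_burst");
            range([5..16], "long_burst");
        };
        cross cmd, burst_len;   -- which command/burst combinations were exercised
    };
};
'>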
As represented in Figure 6, the verification environment can be completed by including software (for instance, C programs) running while the verification components generate useful scenarios. The CVL (Co-Verification Link) available in the Specman methodology is used to synchronize the verification components with the running application.
Figure 6: a verification environment including interaction with SW
Verification results
Some results from the use of the proposed verification environment in real applications, such as a state-of-the-art wireless platform, are the following:
- average of 20 users per project;
- more than 30 modules fully verified using the OCP verification component;
- average of 20 bugs per module detected;
- 50 more bugs detected at system level;
- regression runs with more than 500 different test cases;
- more than 1,000,000 random seeds;
- impact on simulation time < 30%;
- average ramp-up time and test-case development of 2 to 4 weeks for a module-level environment and of 2 to 3 months for a system-level environment.
Conclusions
Functional verification also means taking on precise responsibilities in terms of standardization and compliance, i.e. making sure that all the interfaces speak the same language.
Standardization means consensually defining the precise rules with which systems have to be designed, and continuously maintaining them to cover new needs. Compliance means having a well-defined methodology that can mitigate the risks for interface designers. Leading the definition of such rules is the key role of organizations such as OCP-IP [5], while the main duty of design and verification IP companies is to participate in that definition and to commit to upgrading their products accordingly.
A compliance flow mainly needs three deliverables:
- the definition of all the sequences that must be exercised on a given interface, i.e. a set of coverage items;
- the set of rules that each transaction on the bus must fulfil, i.e. a table of checks; a protocol checker is the entity that applies these checks;
- the set of stimuli with which the interface must be exercised in order to cover 100% of the coverage items, i.e. a test suite, also called a "compliance coverage" suite.
When an interface is stimulated with the test vectors and no check is violated, it can be declared "compliant". Compliance verification is therefore really important to assure the quality of an OCP-based architecture. The combined use of advanced verification tools and a smart organisation of the verification component can dramatically simplify this task. For instance, the test suite can be generated to be fully random, plus a very few directed tests, and achieve 100% coverage over multiple runs. The use of coverage models together with various random sequences also makes it easier to deal with different DUT variants, or with engineers who are interested only in a subset of the coverage.
In conclusion, while the OCP protocol can be one of the best candidates to answer the new paradigm of SoC design, OCP verification components have to be designed to fulfil all the verification requirements needed to achieve a time-effective, high-quality functional validation. The proposed approach demonstrates that these challenges can be addressed successfully by making use of modern verification languages and tools, together with an interconnected and flexible verification architecture implemented on an efficient transaction-based reference model.
References
[1] Open Core Protocol (OCP) Specification, Release 2.0, OCP-IP, www.ocpip.org
[2] Verisity Design, Specman Elite, www.verisity.com
[3] IEEE P1647 working group, www.ieee1647.org
[4] e Reuse Methodology (eRM) Developer Manual, Version 4.2, Verisity Design, www.verisity.com
[5] OCP International Partnership (OCP-IP), www.ocpip.org