Embedded Software IP Verification
By Markus Winterholer, Cadence Design Systems
Abstract: One of the most difficult challenges in SoC verification today is making sure that the hardware and software work together at the SoC level. The commonly deployed process is to verify the hardware first and then execute as much software as possible before committing the design to fabrication. This paper presents a new methodology to improve the SoC verification process. The aim of this methodology is to produce higher-quality designs by exposing the hidden corner cases that are not otherwise being found. The goal is to move embedded software from execution and inspection to verification.

INTRODUCTION

The concept of embedded software verification is mostly non-existent in SoC projects today. The primary way to find out whether the software works with the hardware is to run it and watch what happens. The result is a commonly deployed three-step process for SoC verification.
Verification is the process of determining whether a design meets its requirements. In practice, it is the process of identifying and removing as many functional bugs in the hardware and software as possible. Today, manual techniques such as visual inspection have been replaced by automated verification plans containing a set of goal metrics used to measure progress toward verification completion. By the definition of the verification plan, if these metrics are achieved, the design is verified. In hardware verification, the process of planning to define the corner cases that need to be hit, and the use of automated, constrained-random stimulus to hit those corner cases, is known as Coverage Driven Verification (CDV). To perform CDV, the corner cases are converted into coverage points and the goal is to reach 100% coverage. Using random generation to reach coverage points also exposes new corner cases that engineers did not think of. Considering the wide adoption of CDV for hardware verification, it is logical that co-verification should have a new definition specific to this verification problem: co-verification is the use of automated, constrained-random stimulus and functional coverage metrics applied to the hardware design, the embedded software, and the combination of the two.

RELATED WORK

Many tools and techniques have been developed in the "hardware/software co-verification" space, yet current tools generally fall into two categories: co-simulation and co-debug. Co-simulation tools allow the engineer to run simulations containing both hardware models and software; they usually contain techniques or hardware specifically to accelerate the execution of the software. The second category is co-debug: these tools usually come along with the co-simulation tool and provide debug capabilities for both the hardware and the software.
These tools give the software engineer the ability to run and debug source code well before final silicon is available, thus shortening the overall design cycle, and they have proved extremely valuable. The first, still existing, commercial co-simulation tool specifically targeted at the hardware/software integration problem is Seamless from Mentor Graphics [MeSe, Kle96]. It provides instruction set simulators to execute the software. Other work in this area is specific to a given platform or prototyping environment [Pos04], or to a processor family [And05]. Several published works provide fast models [Hel05, Sem00] or use emulators to speed up the simulation of cycle-accurate microprocessor models [Ekl04, Nak04]. These implementations either execute existing applications or require the user to provide test applications, which means that interesting corner-case scenarios may be missed.

A GENERIC SOLUTION TO COMMUNICATION WITH EMBEDDED SOFTWARE

Several tools, such as Cadence's Specman Elite [CdnSp], and associated methodologies enable and optimize the application of the Coverage Driven Verification methodology. These tools generally have built-in mechanisms to interface with hardware models written in various languages, from HDLs to more abstract languages such as SystemC or C++. To apply this methodology to the corner-case scenarios that lie across the hardware/software boundary, we need the same interface capabilities with the software running on an embedded processor as we have with the differing abstractions of hardware models. In particular, we are interested in being able to stimulate the embedded software, i.e. call its routines and drive its variables, as well as monitor its state. The mechanism used to communicate with the software must be independent of the method of execution and of the abstraction level of the processor model.
For example, several different mechanisms are typically used for modelling embedded processors.
THE GENERIC SOFTWARE ADAPTER

The Generic Software Adapter (GSA) needs to communicate with the software running on the embedded processor, and it needs to do this in a model- and processor-independent manner. This is achieved through a mailbox located in the processor's memory map. This mailbox is written to and read from both by software running on the embedded processor and by the verification environment. The verification environment writes tasks to the mailbox so that an embedded software wrapper can notice these tasks and act upon them. This gives the verification environment the ability to indirectly control and observe activity in the embedded software. There are two prerequisites: first, a monitoring process must be added to the embedded software to notice the mailbox actions and act accordingly; second, the processor's memory, or at least the part that contains the mailbox, must be accessible to the verification environment, usually via some sort of backdoor access. The following scenario illustrates the process for a common method call. In this case we have a method that is part of the driver or hardware abstraction layer software and would normally be called by the application layer software. Let's assume that the method is called "transmit_init" and it takes two parameters: "address", which defines the location of the data to be transmitted, and "size", which defines the number of bytes to be sent. It also has a return value, an integer indicating pass/fail. So the prototype might look something like the following:

int transmit_init (int address, int size);

In order to call this method we need to set up mailboxes to carry the relevant information, as illustrated in Figure 1.
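As a rough illustration of the idea, the mailbox for the transmit_init call might be laid out as a simple C structure in shared memory. The field names, layout, and call identifier below are hypothetical assumptions for this sketch, not the actual GSA mailbox format:

```c
#include <stdint.h>

/* Hypothetical mailbox layout -- the real GSA format is tool-defined. */
typedef struct {
    volatile uint32_t activity;   /* 0 = idle, 1 = call requested */
    volatile uint32_t call_id;    /* which method to invoke, e.g. 1 = transmit_init */
    volatile int32_t  param[2];   /* address, size */
    volatile int32_t  retval;     /* pass/fail result written back */
} gsa_mailbox_t;

/* Shared region; in a real system this sits in the processor's memory
   map and is also backdoor-accessible by the verification environment. */
static gsa_mailbox_t mailbox;

/* The verification environment would, via backdoor writes, perform
   the equivalent of this to request a call to transmit_init(). */
void request_transmit_init(int address, int size) {
    mailbox.call_id  = 1;         /* hypothetical ID for transmit_init */
    mailbox.param[0] = address;
    mailbox.param[1] = size;
    mailbox.activity = 1;         /* signal the embedded monitor */
}
```

In practice the verification environment performs these writes through its backdoor memory interface rather than through C code on the target.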
Figure 1: Verification environment interactions with the mailbox for method calling

If we look at this from the software side, we see something like Figure 2. The first operation to be undertaken by the software is to initialize the mailbox. Once this has been completed, the software may start monitoring the mailbox to see if any activities are being requested.

Figure 2: Embedded software interactions with the mailbox for method calling

During the monitoring process we are looking for method calls to be placed in the mailbox. The polling mechanism consists of a large 'case' statement encompassing all of the methods that we might like to call; these are defined manually or automatically, as described later. The call mailbox indicates, via an integer, which method we want to call, and the case statement then makes the call to the relevant method, assigning parameters from the mailbox accordingly. On completion, the return value, if any, is written back to a mailbox, and completion is indicated via the activity mailbox. A similar process may be used to monitor software state variables and to provide a callback mechanism. The process described above may of course be built entirely by hand and maintained as the verification environment and the software change, but much of this work can be automated by analyzing the type information available in the debug information of the software. The user need only define the methods and variables required, and an automated process then builds the mailbox architecture as well as the processes that handle mailbox activity on both the verification environment and embedded software sides. Thus the mailbox mechanism, its architecture, and its handling may be completely hidden from the user. It is exactly this level of automation that GSA provides.
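The polling and dispatch just described can be sketched in C as follows. The mailbox variables, the method ID, and the transmit_init stub here are illustrative assumptions for the sketch, not the code that GSA actually generates:

```c
#include <stdint.h>

/* Hypothetical mailbox shared with the verification environment. */
static volatile uint32_t mb_activity;  /* 1 = call pending, 0 = idle */
static volatile uint32_t mb_call_id;   /* selects the method to run */
static volatile int32_t  mb_param[2];  /* call parameters */
static volatile int32_t  mb_retval;    /* return value written back */

enum { CALL_TRANSMIT_INIT = 1 };       /* assumed method ID */

/* Driver method under test (stubbed here for illustration). */
int transmit_init(int address, int size) {
    (void)address;
    return size > 0 ? 0 : -1;          /* 0 = pass */
}

/* One polling pass: check the mailbox and dispatch any pending call. */
void gsa_poll(void) {
    if (mb_activity != 1)
        return;                        /* nothing requested */
    switch (mb_call_id) {              /* the 'case' statement from the text */
    case CALL_TRANSMIT_INIT:
        mb_retval = transmit_init(mb_param[0], mb_param[1]);
        break;
    default:
        break;                         /* unknown call ID: ignore */
    }
    mb_activity = 0;                   /* indicate completion */
}
```

Calling gsa_poll() regularly from the software's main loop plays the role of the monitoring process described above.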
MODIFICATIONS TO THE SOFTWARE RUNNING ON THE EMBEDDED PROCESSOR (A MONITORING PROCESS)

As described above, the work of interacting with the mailbox on the embedded software side may be largely automated; in the GSA product this is done by analyzing the prototypes provided by the user and then auto-generating C code. The auto-generated files are termed 'stub files' and must be compiled and linked with the user's software to be run on the embedded processor. The C stub files contain two user interface methods: an initialization method called "sn_gsa_init()", which handles the initialization of the mailbox, and a run method called "sn_gsa_invoke()", which reads the mailbox and acts upon any activity requested through it. These methods need to be called appropriately. sn_gsa_init() should be called once during or soon after system start-up, and must be called before any other GSA activity is required. sn_gsa_invoke() must be called regularly to check for required activity (commands in the mailbox). Typically this is done in some form of loop, or on a system event such as an interrupt. The user needs to ensure that it is called frequently enough to meet verification requirements, as the time between calls defines the maximum lag between the verification environment requesting activity and that activity actually being executed. For some corner-case scenarios the verification plan may require close timing control between hardware and software activity, and a tight execution loop may be required to enable this. To verify hardware and software together, a verification engineer would traditionally have to write test code to execute on the embedded processor, i.e. a test wrapper, to call the various routines in the driver or hardware abstraction layer under test.
With the GSA mechanism this activity is completely controlled from the verification environment, and only a very limited C code wrapper is required. Something like the following is the minimum required:

#include "sn_gsa_stub.h"

main() {
    <other initialization code>
    sn_gsa_init();  // Initialize the GSA
    while(1) {
        <other regularly activated code>
        sn_gsa_invoke();
        <other regularly activated code>
    }
}

Figure 3: Typical software test harness

This shows all of the components required on the C side to make the GSA interface work: the use of the two interface methods and the inclusion of the automatically generated stub file that declares them. It demonstrates that the modifications on the C side are minimal, and in fact probably far smaller than would be required for a test harness developed solely in C.

VERIFICATION ENVIRONMENT INTERFACE

The verification environment needs to be able to call software methods, to monitor software variables, and to have its own methods called from the software. Specman and its associated 'e' language have predefined port syntax, semantics, and mechanisms to achieve this, and GSA uses this port mechanism to enable the definition of variables, method prototypes, and their locations. The following sections show how this is put into practice, looking at three scenarios: the verification environment calling an embedded method, the verification environment monitoring a state variable, and a verification environment method being called from the embedded software.

Verification environment calling a method in the embedded software: in this case we use an "out method port".
Let us assume we wish to connect to a method in the embedded software that looks as follows:

int transmit_init (int address, int size);

First we need to define a "method_type" that defines the prototype of the method we wish to call; this is derived directly from the prototype in our embedded software. In our 'e' code this will look something like:

method_type trnsmt_init_t(addr: int, sze: int);

As shown, the names of the method and the parameters are not required to match the declaration in our embedded C code, although usually they would. Now we can instantiate an interface to a specific method of this type in a relevant object in the verification environment. The software may have several method instances of the same type that we want to point to individually, or there may be different methods in the software with the same prototype declaration; in either case we may have multiple instances of a method port but only one method_type declaration. An example instantiation of a method port is shown below:

transmit_init: out method_port of trnsmt_init_t is instance;
keep bind(transmit_init, external);
keep transmit_init.hdl_path() == "my_sw_top->tx_interface.transmit_init";

The first line instantiates the method port, declaring its type to be that of the 'method_type' declaration. The second line is an external binding, which simply indicates to Specman that this port lies outside the scope of the verification environment. Finally, the constraint on hdl_path() indicates, despite its name, the path to the method from a globally accessible point; as can be seen, this path may contain pointer dereferencing. From this information the Generic Software Adapter is able to build all of the interface code controlling the mailbox mechanism automatically.
Now, to make a call to the software method, the user need only call the method port accordingly, as shown below:

initialize() is also {
    <other initialization code>
    transmit_init$(start_addr, block_size);
    <other initialization code>
};

Here a regular 'e' method makes a call to the C method as if it were a local method. This automatically causes the GSA to send the appropriate data to the mailbox to initiate the transmit_init() method in the embedded software. Similar mechanisms can be employed to provide access to variables in the embedded software and to allow callbacks from the embedded software into the verification environment.

BACKDOOR ACCESS TO THE MAILBOX FROM THE VERIFICATION ENVIRONMENT

In order for GSA to operate, both the verification environment and the embedded software must be able to access the mailbox. This is not usually a problem for the embedded software, as the mailbox is located in an area of system memory that lies within the processor's memory map; as far as the processor is concerned, we just need to ensure that there is sufficient memory in the system to accommodate the mailbox. In the case of the verification environment, we need a specific mechanism to communicate with the mailbox. Though not essential, this will usually be a backdoor mechanism, which does not cause simulation time to advance. The actual mechanism by which we communicate with memory depends on how the memory model is written, and many typical modelling methods exist.
Due to this potentially unlimited range of interface methods, the GSA must have a standard API of its own, and the user must provide the interface between the two APIs. In real terms this boils down to the user filling in a few methods that provide access, such as that shown below:

backdoor_read_byte(address: uint): uint is {
    var b: byte;
    mem_if.bd_read_byte(address, b);
    result = b;
    if trace_backdoor_access then {
        outf("Backdoor read byte <%d>: add %x value %x\n", sys.time, address, result);
    };
};

In this case the API method "backdoor_read_byte" is filled in by the user; here it calls a method in a sub-instance "mem_if", which makes the required interface to the memory. This is likely to make use of built-in capabilities for interfacing to models written in various HDL or software languages, as well as dedicated APIs such as Verilog PLI or socket-based mechanisms. In this way the user builds an interface for each of the different memory modelling methodologies in use.

SUMMARY

Verifying complex designs is an exponentially increasing problem, and in recent years the application of Coverage Driven Verification (CDV) techniques has made significant improvements to the process. This technique has provided higher-quality verification, shortened verification time, and improved predictability. The improvement has, however, been largely restricted to the hardware domain, and significant problems still exist, particularly on the hardware/software boundary. Hunting down and exposing the corner-case scenarios, and thereby the bugs, that span this boundary has traditionally been a hit-and-miss affair.
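The same idea can be expressed in C terms: a hedged sketch of a backdoor byte read against a simple array-based memory model. The array-backed memory and the function names are assumptions for illustration; a real flow would call into an HDL model, a PLI routine, or a socket interface instead:

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal array-based memory model standing in for the simulated
   system memory (hypothetical; real memory models vary widely). */
#define MEM_SIZE 0x10000
static uint8_t mem[MEM_SIZE];

static int trace_backdoor_access = 0;

/* Backdoor read: fetch a byte directly from the memory model,
   without advancing simulation time. */
uint8_t backdoor_read_byte(uint32_t address) {
    uint8_t value = mem[address % MEM_SIZE];
    if (trace_backdoor_access)
        printf("Backdoor read byte: addr %x value %x\n",
               (unsigned)address, (unsigned)value);
    return value;
}
```

The user would supply one such adapter per memory modelling style, keeping the GSA side of the API unchanged.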
GSA allows these same successful CDV techniques, as well as associated reuse architectures and methodologies such as 'eRM', to be applied to the low-level software running on the embedded processor. This software may thus be monitored for functional coverage (the hitting of corner-case scenarios) and for checking purposes, and the driver API may be called from the verification environment, allowing unprecedented control and coordination of stimulus, effective corner-case targeting, and productive hardware/software co-verification. The techniques described in this paper and the technology associated with GSA are applicable to SoC hardware and software development teams writing software to run on an embedded processor, and to IP developers who deliver software drivers alongside their hardware IP.

REFERENCES

[And05] J. Andrews: Co-Verification of Hardware and Software for ARM SoC Design. Elsevier, 2005.
[CdnSp] http://www.cadence.com/products/functional_ver/specman_elite
[Ekl04] B. Eklow et al.: Simulation Based System Level Fault Insertion Using Co-verification Tools. International Test Conference, 2004.
[Hel05] G. Hellestrand: Systems Architecture: The Empirical Way: Abstract Architectures to 'Optimal' Systems. ACM International Conference on Embedded Software, 2005.
[Kle96] R. Klein: Miami: A Hardware Software Co-Simulation Environment. IEEE International Workshop on Rapid System Prototyping, June 1996.
[MeSe] http://www.mentor.com/products/fv/hwsw_coverification/seamless/
[Nak04] Y. Nakamura et al.: A Fast Hw/Sw Co-verification Method for SoC by using a C/C++ Simulator and FPGA Emulator with Shared Register Communication. Design Automation Conference, 2004.
[Piz04] A. Piziali: Functional Verification Coverage Measurement and Analysis. Kluwer Academic Publishers, 2004.
[Pos04] G. Post et al.: A SystemC-Based Verification Methodology for Complex Wireless Software IP. Conference on Design, Automation and Test in Europe, 2004.
[Sem00] L. Séméria et al.: Methodology for Hw/Sw Co-verification in C/C++. Asia South Pacific Design Automation Conference, 2000.