Transaction-based methodology supports HW/SW co-verification
Verification efficiency is the latest topic being discussed among engineers and EDA vendors. Engineers are wondering how to leverage all of the point tools that have been developed to solve specific issues to create a single, cohesive methodology that some call "unified verification." This paper describes how engineers doing system-on-chip (SoC) verification can be more efficient by using a single, reconfigurable verification system, applications, and a unified methodology that allows engineers to execute hardware and software tests with a flexible mix of performance and debugging. New transaction-based verification techniques based on a "Co-Verification Debugger" are demonstrated for an ARM SoC design.

In order to work smarter, engineers can make improvements in one of the three areas that take up the majority of their time during the verification process: the verification platform, hardware verification, and the embedded system software.

The three components of verification

Verification platform

Four distinct methods for execution of hardware designs have been identified and are commonly used in SoC design. Throughout this paper, the following definitions will be used:

Software simulation refers to an event-based logic simulator that operates by propagating input changes through a design until a steady-state condition is reached. Software simulators run on workstations and use Verilog or VHDL as a simulation language to describe the design and the testbench.

Simulation acceleration refers to the process of mapping the synthesizable portion of the design into a hardware platform specifically designed to increase performance by evaluating the HDL constructs in parallel. The remaining portions of the simulation are not mapped into hardware, but run in a software simulator. The software simulator works in conjunction with the hardware platform to exchange simulation data. Removing most of the simulation events from the software simulator and evaluating them in hardware increases performance. The final performance is determined by the percentage of the simulation left running in software (a short worked example follows these definitions).

Emulation refers to the process of mapping an entire design into a hardware platform designed to increase performance. There is no constant connection to the workstation during execution, and the hardware platform receives no input from the workstation. By eliminating the connection to the workstation, the hardware platform runs at its full speed and does not need to wait for any communication.

In-circuit refers to the use of external hardware coupled to a hardware platform for the purpose of providing a more realistic environment for the design being simulated. This hardware commonly takes the form of circuit boards, sometimes called target boards or a target system, and test equipment cabled into the hardware platform. Emulation without the use of any target system is defined as targetless emulation.

Hardware prototype refers to the construction of custom hardware, or the use of reusable hardware (a breadboard), to build a hardware representation of the system. A prototype is a representation of the final system that can be constructed faster and is available sooner than the actual product. This is achieved by making tradeoffs in product requirements, such as performance and packaging. A common path to a prototype is to save time by substituting programmable logic for ASICs.
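As a rough illustration of that last point about simulation acceleration (standard Amdahl's-law style arithmetic, not a figure from any particular platform): if a fraction s of the simulation workload stays in the software simulator and the hardware-mapped portion is accelerated by a factor A, then

speedup = 1 / (s + (1 - s) / A), which approaches 1 / s as A grows.

For example, if 5 percent of the activity remains in software (s = 0.05), the overall acceleration is capped at roughly 20x no matter how fast the hardware platform is, which is why minimizing the portion left running in software matters so much.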
Hardware verification

2002 saw the widespread introduction of assertions as a way to document the designer's assumptions and the properties of the design. [1] Assertions are a powerful tool to crosscheck the design's actual versus intended behavior. They are also valuable to verification and system engineers to formally specify the intended behavior of the system and to make sure it is behaving according to specification. This year, growth in the languages used for design and verification will certainly occur with the evolution of SystemVerilog and SystemC.

Embedded system software

Including the embedded system software in the verification process, commonly called HW/SW co-verification, provides two major benefits:

1. Software engineers have much earlier access to the hardware design. This allows software designers to develop code and test it concurrently with hardware design and verification. Performing these activities in parallel shaves time from the project schedule, compared with the serial method of waiting for the prototype to begin software testing. Moreover, the early involvement of the software team results in a much better understanding of the underlying hardware operation.

2. Co-verification provides additional stimulus for the hardware design. In fact, it can provide the true stimulus that will occur in the embedded system. This improves hardware verification when compared to using a contrived testbench that may or may not represent real system conditions. Increased confidence in the hardware design is invaluable.

By running HW/SW co-verification, a wide range of problems can be found and fixed prior to silicon, such as register map discrepancies, problems in the boot code, errors in DMA controller programming, RTOS boot and configuration errors, bus pipelining problems, and cache coherency mishaps. Some of the errors will be software problems and some hardware-related. Addressing these issues must be done using a logical and well-conceived co-verification strategy.

Co-verification requires that accurate microprocessor models and software debugging tools be available to software engineers as early as possible. It also requires that the verification platform provide the best mix of performance and debugging for software engineers to work effectively with hardware engineers.

Five distinct types of embedded system software have been identified. The software content (that is, lines of code) increases with each successive step:

1. System initialization code and hardware abstraction layer (HAL)
2. Diagnostic suite
3. Real-time operating system (RTOS)
4. Device drivers
5. Application software

Matching the software with the platform

Figure 1 - Methodology confusion

System initialization and HAL

The hardware abstraction layer (HAL) is the next layer of software that works with the initialization code to provide a common interface for higher-level software to use for hardware-specific functionality after the system is initialized. The HAL abstracts the underlying hardware of a processor architecture and the platform to a level sufficient for the RTOS kernel to be ported to the platform.

Diagnostic suite

A comprehensive set of diagnostic tests should be developed to verify each subsystem and peripheral. This starts with the memory subsystem, progresses to interrupt testing, and then moves to other IP blocks, such as timers, DMA controllers, video controllers, MPEG decoders, and other specialty hardware. Most of these tests do not find their way into the final product, but they are very important because they build the case for a solid hardware design. Creating these programs gives software engineers a very good understanding of the hardware and provides an opportunity to learn about the hardware specifics in a more secluded environment.
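To make this concrete, here is a minimal sketch in C of the kind of early diagnostic such a suite might contain; the SRAM base address and size, and the hal_write32/hal_read32 helper names, are hypothetical and not taken from the paper.

#include <stdint.h>

/* Hypothetical memory map values for illustration only --
 * real addresses come from the system's memory map document. */
#define SRAM_BASE   0x40000000u
#define SRAM_WORDS  1024u

/* Minimal HAL-style accessors: all hardware touches go through
 * these so higher layers never embed raw pointer arithmetic. */
static inline void hal_write32(uint32_t addr, uint32_t data)
{
    *(volatile uint32_t *)addr = data;
}

static inline uint32_t hal_read32(uint32_t addr)
{
    return *(volatile uint32_t *)addr;
}

/* Walking-ones memory diagnostic: returns 0 on pass, or the failing
 * address at the first mismatch. Run early, long before the RTOS. */
uint32_t diag_sram_walking_ones(void)
{
    for (uint32_t i = 0; i < SRAM_WORDS; i++) {
        uint32_t addr = SRAM_BASE + 4u * i;
        for (uint32_t bit = 0; bit < 32; bit++) {
            uint32_t pattern = 1u << bit;
            hal_write32(addr, pattern);
            if (hal_read32(addr) != pattern)
                return addr;            /* report failing location */
        }
    }
    return 0;                           /* all locations passed */
}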
Real-time operating system (RTOS)

Device drivers and application software

Applications usually want to interface to real network traffic, see things on the screen, and use the pointer or mouse. During application development, hardware and lower-level software bugs are few and far between, and the software engineer is focused on providing robust applications with differentiating features for end users.

Before discussing the specifics of methodology and tools, it is important to recall that software engineers view the world very differently from hardware engineers. Here is a brief review of the different perspectives of software and hardware engineers.

Software engineer's view of the world

"The programming model is a model used to provide certain operations to the programming level above and requiring implementations on all of the architectures below."[2] More practically, the programming model for a microprocessor consists of the key attributes of the CPU that are necessary to abstract the processor for the purpose of software development. As an example of a programming model, consider the ARM9E-S CPU. The ARM9E-S implements the ARM v5TE instruction set, which includes the 32-bit ARM instruction set and the 16-bit Thumb instruction set.[3] The details of the instruction set are an important part of the programming model. Also covered by the programming model are details related to the operating modes of the CPU, memory format, data types, the general-purpose register set, status registers, and interrupts and exceptions. All of these microprocessor details are important to the software engineer.

Beyond the microprocessor, software engineers are interested in the memory map for the embedded system. For a 32-bit address space, 4 GB of physical memory addresses can be accessed. All embedded systems use only a subset of this physical address space, and the memory map defines the location in the address space of various types of memory and other hardware control registers. The memory map may also define what happens if addresses are accessed where no physical memory exists. Commonly found types of memory in an embedded system are ROM to hold the initial software to run on the CPU, flash memory, DRAM, SRAM for fast data storage, and memory-mapped peripherals. Peripherals can be any dedicated hardware that is programmable from software. These can range from small functions, such as a UART or timer, to more complex hardware, such as a JPEG encoder/decoder.

The combination of the microprocessor programming model, the memory map, and the individual hardware control registers forms the software engineer's view of the embedded system. This information becomes the ultimate authority for all software development and is available in the form of technical manuals on the microprocessor, combined with the system-specific memory map supplied by the hardware engineers.
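A minimal C sketch of how this view is typically captured for software development follows; the base addresses, the UART register layout, and the status bit are hypothetical, chosen only to illustrate the idea of a memory map and memory-mapped peripheral registers.

#include <stdint.h>

/* Hypothetical memory map for an ARM-based SoC -- illustrative only. */
#define ROM_BASE     0x00000000u   /* boot code                */
#define SRAM_BASE    0x40000000u   /* fast on-chip data RAM    */
#define SDRAM_BASE   0x80000000u   /* external DRAM            */
#define UART0_BASE   0xF0001000u   /* memory-mapped peripheral */
#define TIMER0_BASE  0xF0002000u

/* Register layout of the hypothetical UART, expressed as a struct
 * overlaid on the peripheral's base address. */
typedef struct {
    volatile uint32_t DATA;     /* offset 0x00: transmit/receive data  */
    volatile uint32_t STATUS;   /* offset 0x04: ready/error flags      */
    volatile uint32_t CONTROL;  /* offset 0x08: enable, baud selection */
} uart_regs_t;

#define UART0 ((uart_regs_t *)UART0_BASE)

/* Software sees the hardware only through addresses like these: */
static inline void uart0_send(uint8_t c)
{
    while ((UART0->STATUS & 0x1u) == 0)   /* wait for TX ready (bit 0, assumed) */
        ;
    UART0->DATA = c;
}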
Hardware engineer's view of the world

For the hardware design to work correctly, the logic connected to the microprocessor must obey all of the rules of the bus protocol. If the rules of the bus protocol are obeyed, the details of the software tasks are not important. To hardware engineers, the microprocessor is nothing more than a bus transaction generator. All microprocessors use some type of protocol to read and write memory, so to the hardware engineer the microprocessor is viewed as a series of memory reads and writes. These reads and writes are used for fetching instructions, accessing peripherals, doing DMA transfers, and many other things, but in the end, they are nothing more than a sequence of reads and writes on the bus. For years, hardware engineers have used a bus functional model (BFM) to abstract the microprocessor into a model of its bus. More recently, this approach has been described as transaction-based verification, since it views the microprocessor as a bus transaction generator.

Co-verification methodology

Since the co-verification solution needs to be used with logic simulation and later with acceleration and emulation, it cannot be constructed such that it becomes a bottleneck to overall acceleration and emulation performance. Software engineers require good CPU models and debugging tools. For each of the five different types of software, they will prefer either a software model of the ARM CPU or a hardware model of the ARM CPU. The three primary verification platform execution methods combined with the three representations of the ARM microprocessor form the matrix of nine modes of operation shown in Figure 2.
Figure 2 - Verification operating matrix

The next sections describe how each type of software can choose to be executed by either a software or hardware model of the ARM CPU using one or more of the platform's execution modes.

System initialization and HAL development

For the ARM SoC example, the ideal debugging solution for early development of system initialization and HAL code is one based on a cycle-accurate instruction set simulation model tightly coupled to a logic simulator containing the SoC hardware design. This provides interactive, graphical software debugging for the software engineer to single-step through the code and verify register and memory contents with excellent flexibility and control. Simulation performance is less important because the code must be verified line by line, and the number of lines of code is relatively small. This situation is labeled as box 2 in the matrix in Figure 2.

Diagnostics

The best solution uses simulation acceleration to increase the simulation performance over what is possible using an ordinary software simulator. A simulation environment running at 10 to 100 Hz is not fast enough for engineers to run and test the diagnostics. Moreover, the memory optimization techniques commonly used by co-verification tools are not useful because the main purpose of the diagnostics is hardware verification. A simulation acceleration system that runs at speeds of 1 to 10 kHz is the ideal platform for simulation performance and debugging. The use of simulation acceleration with the software model of the ARM is labeled as box 5 in the matrix in Figure 2.

RTOS and device drivers

Once the RTOS is booted and stable with the selected device drivers, as shown in box 8, future work can be done using a faster execution method, such as in-circuit emulation. The number of hardware bugs is very small, so the increased performance is well worth any tradeoff in hardware debugging. This shifts the focus of the software engineers from box 5 to boxes 6 and 9.

Application software

Testbench development

The co-verification methodology requires a BFM that runs well for all phases of verification, from the start of a project, as shown in box 1 in Figure 2, moving to acceleration and emulation for directed and random testing, as shown in boxes 4 and 7. To achieve this, a transaction-based interface to a synthesizable BFM for the CPU bus is ideal. By operating at the transaction level, communication between the testbench and the verification platform is minimized. Using a synthesizable BFM and a transaction-based interface to the verification platform optimizes performance, while simultaneously allowing for the use of C/C++ or HVLs to create testbenches. A BFM that works the same way from simulation to emulation and provides the required performance, while following the industry trend toward verification automation, is an important part of a unified verification methodology.
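As an illustration of what a transaction-level test written in C might look like, here is a small sketch; the ahb_write/ahb_read calls stand in for whatever transaction API a given BFM exposes, and the DMA controller addresses are hypothetical.

#include <stdint.h>
#include <assert.h>

/* Hypothetical transaction-level API to a synthesizable AHB BFM.
 * The real names and signatures depend on the tool being used;
 * these prototypes are placeholders for illustration. */
void     ahb_write(uint32_t addr, uint32_t data);
uint32_t ahb_read(uint32_t addr);

/* Hypothetical register addresses of a DMA controller under test. */
#define DMA_SRC    0xF0003000u
#define DMA_DST    0xF0003004u
#define DMA_LEN    0xF0003008u
#define DMA_CTRL   0xF000300Cu
#define DMA_STATUS 0xF0003010u

/* A directed test: program a DMA transfer through bus transactions
 * and poll for completion. The same C code can drive simulation,
 * acceleration, or emulation if the BFM behaves identically in each. */
void test_dma_single_transfer(void)
{
    ahb_write(DMA_SRC,  0x80000000u);       /* source buffer in SDRAM   */
    ahb_write(DMA_DST,  0x40000000u);       /* destination in SRAM      */
    ahb_write(DMA_LEN,  256u);              /* transfer length in bytes */
    ahb_write(DMA_CTRL, 0x1u);              /* start the transfer       */

    while ((ahb_read(DMA_STATUS) & 0x1u) == 0)
        ;                                   /* wait for the done bit    */

    /* Spot-check the first destination word. */
    assert(ahb_read(0x40000000u) == ahb_read(0x80000000u));
}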
Matrix coverage is not enough

Even if each box of the matrix were covered by a separate point tool, the methodology would not be as strong as it could be because the tools do not work together. More than matrix coverage, what is required is interoperability between the boxes of the matrix. The next sections describe two examples of why this interoperability is important.

Communication gap

Compounding the problem, software and hardware teams debug using different techniques and view the problem from different perspectives. The software team works with software models and debugs using software source-level tracing and memory and register viewing. The hardware team works with hardware design languages and debugs by viewing waveforms with history values associated with simulation times of read and write operations. As a result, when the software team detects a potential hardware problem, it cannot be described in hardware terms (time and signal value), nor is it easy to transfer an independent test case to the hardware engineers for further review. This will change with the adoption of a new approach called the Co-Verification Debugger.

Transaction-based verification

Hardware engineers are already familiar with transaction-based techniques for testbench development. They benefit from a transaction-based channel to simulation, acceleration, and emulation with a common interface to synthesizable models for the AHB protocol for ARM designs. This allows all types of tests, ranging from block-level tests to directed system-level tests to random tests, to be run without any changes to testbenches or models as performance requirements change. Verification and hardware engineers benefit from boxes 1, 3, and 5 on the matrix in Figure 2. Software engineers already have a good understanding of transactions and how register and DMA operations translate into bus transactions. A method is needed to transparently capture the transactions, to correlate lines of software with time values, and to package the transactions for use within the software or hardware view. The Co-Verification Debugger serves this purpose.

Figure 3 - Transactions link software and hardware
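To make the link in Figure 3 concrete, the sketch below (plain C, with hypothetical field names) shows the kind of record that serves both views: the hardware team keys off the simulation timestamp, while the software team keys off the program counter, which the debugger maps back to a source line.

#include <stdint.h>

/* One captured bus transaction, as both teams might see it.
 * Field names and widths are illustrative, not a tool format. */
typedef enum { XFER_READ, XFER_WRITE } xfer_kind_t;

typedef struct {
    uint64_t    sim_time_ns;   /* hardware view: simulation/emulation time   */
    xfer_kind_t kind;          /* read or write                              */
    uint32_t    addr;          /* bus address                                */
    uint32_t    data;          /* data value transferred                     */
    uint32_t    pc;            /* software view: program counter of the      */
                               /* instruction that caused the transaction;   */
                               /* the debugger maps this back to a source line */
} captured_xfer_t;

/* Given a capture buffer, report the simulation time of the first
 * transaction generated by a particular line of code (via its PC). */
int64_t time_of_first_access(const captured_xfer_t *log, int n, uint32_t pc)
{
    for (int i = 0; i < n; i++)
        if (log[i].pc == pc)
            return (int64_t)log[i].sim_time_ns;
    return -1;   /* no transaction from that instruction */
}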
Co-Verification Debugger and Transaction Instrument

By capturing transactions during software execution within an in-circuit emulation session, the software engineer can stop at a particular line of code, obtain the simulation time, and send a set of transactions to the hardware team to debug at the indicated emulation time when the error occurred. The Transaction Instrument captures bus transactions for use with Instant Replay -- the playback mechanism that displays the captured transactions within a self-contained environment, as shown in Figure 4. The Co-Verification Debugger takes the captured bus transactions from the Transaction Instrument and associates the emulation time value with the software line that caused each transaction.
Figure 4 - Transaction instrument

Instant replay of software execution

To address the problem of long simulation runs that make interactive debugging unproductive, the engineer can run a single simulation and save a compressed file that contains the bus transactions at the processor interface. The memory transactions, including address, data, and simulation timestamp, along with interrupt information, are "recorded" into this file. After the simulation is complete, software engineers can start the software model of the ARM CPU and the software debugger and "re-run" the software execution sequence. However, this time, instead of interacting with the hardware simulation, the results are read from the recorded file. This "playback" of the bus interface replicates the exact sequence of software execution, as shown in Figure 5.

Figure 5 - Instant replay for software execution
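A minimal sketch of the record-and-playback idea, written in C with a hypothetical record layout (the actual file format is tool-specific and compressed):

#include <stdio.h>
#include <stdint.h>

/* One recorded bus transaction; the real file format is tool-specific,
 * so this plain binary record is only an illustration. */
typedef struct {
    uint64_t time_ns;   /* simulation timestamp                      */
    uint8_t  is_write;  /* 1 = write, 0 = read                       */
    uint8_t  irq;       /* interrupt lines sampled with the transfer */
    uint32_t addr;
    uint32_t data;
} replay_rec_t;

/* Recording side: called by the co-simulation interface each time the
 * processor model completes a bus transfer. */
void record_xfer(FILE *log, const replay_rec_t *rec)
{
    fwrite(rec, sizeof *rec, 1, log);
}

/* Playback side: the ISS memory callback reads the next record instead
 * of talking to the hardware simulation, so reads return exactly the
 * data seen in the original run and interrupts fire at the same points. */
int playback_next(FILE *log, replay_rec_t *rec)
{
    return fread(rec, sizeof *rec, 1, log) == 1;  /* 0 at end of file */
}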
Because the simulation now runs at MHz speeds, software engineers can re-run the software as many times as needed to find the problem. The simulation timestamp is also provided at any time to help correlate software and hardware execution. This record-and-playback methodology is a good way to debug long simulation tests that make interactive debugging unproductive.

Instant replay of hardware execution

The recorded transactions can also be played back from the hardware side: the test becomes a set of AHB transactions instead of an ARM CPU model. This allows hardware engineers to work on the problem using familiar transaction-based verification techniques. For all of the diagnostic tests, a history of transaction-based stimulus files can be saved for use as a set of regression tests. These tests can even be modified and augmented to test "what-if" scenarios on the bus. Instant Replay for the hardware view is shown in Figure 6.

Figure 6 - Instant replay for hardware execution
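The hardware-side replay can be pictured as the small C loop below, reusing the same hypothetical record layout and ahb_write/ahb_read BFM calls from the earlier sketches; it illustrates the idea rather than any tool's actual interface.

#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint64_t time_ns;
    uint8_t  is_write;
    uint8_t  irq;
    uint32_t addr;
    uint32_t data;
} replay_rec_t;

void     ahb_write(uint32_t addr, uint32_t data);   /* hypothetical BFM API */
uint32_t ahb_read(uint32_t addr);

/* Drive a saved transaction stream into the design through the BFM.
 * Reads are checked against the recorded data, turning the captured
 * stream into a self-checking regression test. */
int replay_into_hardware(FILE *log)
{
    replay_rec_t rec;
    int mismatches = 0;

    while (fread(&rec, sizeof rec, 1, log) == 1) {
        if (rec.is_write) {
            ahb_write(rec.addr, rec.data);
        } else if (ahb_read(rec.addr) != rec.data) {
            mismatches++;   /* design no longer matches the recorded run */
        }
    }
    return mismatches;
}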
These examples demonstrate that interoperability between the boxes of the matrix in Figure 2 further enhances the unified methodology for SoC design.

Conclusion

Interoperability between the verification platform and applications such as transaction-based verification and HW/SW co-verification is essential to working smarter, not harder. Additionally, transactions and simulation timestamps are the communication method for hardware and software teams to effectively pinpoint the exact cause, time, and location of problems, as well as to provide a stand-alone test case to replicate the problem within each environment.

References

[1] Jason Andrews, "Assertion Processor," 2002.

Jason Andrews is currently a Product Manager at Axis Systems, working in the areas of hardware/software co-verification and testbench methodology for SoC design. His experience in EDA and the embedded marketplace includes software development and product management at Simpod, Summit Design, and Simulation Technologies.