Approaches to accelerated HW/SW co-verification
Ray Turner (06/25/2004 5:00 PM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=22102217
"Software is everywhere and it dominates our product development cycle."
What if your software were tested and working before first silicon came back from the foundry? What would that do to your product's delivery schedule? As more and more electronic products carry extensive software content, designers face serious project delays if they wait for first silicon to begin software debugging. Indeed, "first software" becomes the pacing milestone for product delivery. Increasingly, developers are turning to hardware/software co-verification — concurrently verifying the hardware and software components of the system design — to meet more demanding time-to-market requirements. Concurrent verification allows software verification and debugging to begin before silicon is available, often before the design is frozen, which can shave months off the software development schedule.

There are a variety of approaches to hardware/software co-verification (co-verification hereafter). Here I will focus on accelerated co-verification, since the complexity of software in most of today's electronic products precludes adequate testing of the ASIC at logic-simulator speeds alone. This article compares three approaches to co-verification and describes how to incorporate co-verification into your design environment.

Objectives of hardware/software co-verification

The traditional approach to software verification is to wait for (mostly) working silicon before beginning software debugging. This makes the hardware and software debugging tasks largely sequential and increases the product's development time. It also means that a serious system problem may not be found until after first silicon, requiring a costly respin and delaying the project by two to three months. The objective of co-verification is to make the hardware and software debugging tasks as concurrent as possible (Figure 1). At a minimum, this means starting software debug as soon as the IC is taped out, rather than waiting for good silicon. But even greater concurrency is possible.
Software debugging with the actual design can begin as soon as the hardware design achieves some level of correct functionality. Starting software debugging early can save from 2 to 6 months of product development time.
Figure 1 — Emulation allows concurrent chip, board, and software verification, accelerating time-to-market
There are additional benefits to be gained by starting software verification before the hardware design is frozen. If system problems or performance issues are found, designers can make intelligent tradeoffs in deciding whether to change the hardware or the software, possibly avoiding degraded product functionality, reduced performance, or increased product cost. Early software integration on "real" hardware provides enormous value.
In the most common case, a custom ASIC is being developed for use with a standard microprocessor (or a microprocessor core to be included in the ASIC), which runs the software being developed. In-circuit emulation provides the highest performance possible and includes a rich debug environment. Emulation also allows verification of the design in a "real world" environment with live data. Testing a design in the context of actual data, and with thousands of times the volume of test data, provides exceptionally high confidence in design correctness. If only an instruction-set simulator (ISS) model is available, acceleration of the ASIC is still possible, but overall performance will be limited by the speed of the ISS model and by simulator overhead, perhaps by as much as one or two orders of magnitude. Accuracy may also be reduced, hiding subtle bugs at the hardware/software or processor/ASIC interfaces.

With this background, let's explore three complete solutions to accelerated hardware/software co-verification, listed in order of increasing performance:
1) Use an instruction-set simulator (ISS) with the logic simulator and accelerator.
2) Use an RTL processor model and a processor-based emulator.
3) Use a physical model of the processor (bond-out core) and a processor-based emulator.

Co-verification using an ISS, simulator, and accelerator/emulator

In this approach to co-verification, the logic simulator and accelerator model the ASIC and all other hardware components except for the processor and memory. Processor simulation is done by an ISS, and memory is "modeled" by workstation memory. The industry-standard PLI interface connects the logic simulator to the ISS (Figure 2).
Figure 2 — Co-verification flow using an ISS, simulator, and accelerator
Alternatively, for transaction-based acceleration, the industry-standard SCE-MI interface is used. In operation, the system software is executed by the ISS on the workstation. The ISS can typically execute several thousand instructions per second. When an I/O instruction or a memory-mapped I/O access to the ASIC is performed, the ISS passes the I/O to the simulator, which handles any non-synthesizable code in the design and interfaces to the accelerator, which accelerates all the synthesizable code. Any resulting changes in ASIC outputs are passed back to the ISS. Overall performance of this approach depends greatly on the amount of I/O performed by the software, because of the overhead of communicating between the ISS and the accelerator.

Steps in using co-verification with an ISS, simulator, and accelerator:
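The hand-off just described can be sketched in miniature. The following is an illustrative Python model, not any real tool's API; the `AsicModel` and `Iss` classes and the address window are hypothetical. The point is the dispatch: ordinary loads and stores stay in fast workstation memory, while accesses falling in the ASIC's memory-mapped window cross over to the (slow) hardware model, which is why I/O-heavy software dominates overall performance.

```python
# Illustrative sketch of the ISS/accelerator split. All names are hypothetical.

ASIC_BASE = 0x4000_0000          # assumed base of the ASIC's register window
ASIC_SIZE = 0x1000

class AsicModel:
    """Stand-in for the simulator/accelerator holding the ASIC logic."""
    def __init__(self):
        self.regs = {}
    def write(self, addr, value):
        self.regs[addr] = value   # in reality: a PLI or SCE-MI call into the accelerator
    def read(self, addr):
        return self.regs.get(addr, 0)

class Iss:
    """Toy instruction-set simulator: runs fast until it touches the ASIC window."""
    def __init__(self, asic):
        self.mem = {}             # ordinary memory stays on the workstation
        self.asic = asic
    def store(self, addr, value):
        if ASIC_BASE <= addr < ASIC_BASE + ASIC_SIZE:
            self.asic.write(addr, value)   # slow path: cross into the hardware model
        else:
            self.mem[addr] = value         # fast path: plain workstation memory
    def load(self, addr):
        if ASIC_BASE <= addr < ASIC_BASE + ASIC_SIZE:
            return self.asic.read(addr)
        return self.mem.get(addr, 0)

iss = Iss(AsicModel())
iss.store(0x2000, 7)              # ordinary memory access: never leaves the ISS
iss.store(ASIC_BASE + 8, 0xA5)    # memory-mapped I/O: routed to the ASIC model
print(iss.load(ASIC_BASE + 8))    # -> 165 (0xA5), read back through the model
```

Every crossing of the `ASIC_BASE` boundary stands in for an expensive ISS-to-accelerator round trip, which is why the article notes that performance depends on how much I/O the software performs.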
Co-verification using an RTL processor model and emulator

In this approach, an RTL model of the processor is substituted for the ISS model, and the RTL processor model is mapped into the emulator along with the ASIC design. A processor-based emulator such as the Cadence Incisive Palladium is used. The memory in the emulator is loaded with the test code needed for verification. Thus, the entire system is modeled in the emulator and runs at full emulation speed — usually 10 to 100 times faster than the ISS approach. An integrated software debug environment (such as Motorola Metrowerks, WindRiver, TI Code Composer, Green Hills Probe, PowerTap, ARM Multi-ICE, or ARM RealView) can be connected to provide the user's familiar software debugging environment. Software and hardware engineers can each use the debug environment they are most familiar with, increasing debug productivity.

The software debugger is usually connected to the emulator via a JTAG port, either built into the processor model or external to it. Because the emulator runs JTAG at a reduced rate, some software debuggers require a vendor patch to support the slower JTAG speeds. Other software debug connections that have been used include RS-232 and Ethernet.

Since this approach can be run in-circuit and in-system, testing can take place with live data, in as near to a real-world environment as possible. Indeed, this approach is the only way to gain the high confidence that comes from testing a design in a real environment with real data. It is hard to overestimate the value of in-system testing. Over and over again, customers speak of finding bugs this way that they couldn't possibly have foreseen or tested for in a simulation environment.
The only substantial difference between testing with emulation and testing with first silicon is that, in emulation, the target environment must be slowed to emulation speeds and hence delivers lower performance than actual silicon — but with the advantage of complete visibility into the design and a comprehensive debugging environment that first silicon cannot offer. And emulation lets you start testing software several months before silicon is available. In addition to software verification, emulation provides a vehicle for evaluating hardware/software implementation tradeoffs early in the design cycle. Potential software performance bottlenecks can be uncovered while there is still time for a hardware-based solution.

Steps in using co-verification with an RTL or physical model and emulator:
Co-verification using a physical processor model and emulator

In this approach, the RTL model of the processor is replaced by a physical model — a bond-out core. Again, a processor-based emulator is used. Performance is similar to that of the RTL-model approach, but less emulator capacity is needed. Otherwise, operation and capabilities are the same. Hardware and software debugging tools can easily be cross-coupled for coordinated debugging when needed (Figure 3).
Figure 3 — Co-verification flow using RTL or physical processor model and emulator
In some target-based approaches, a real-time operating system (RTOS) or an MP-ICE debug environment may be running on the processor. These provide a communications path (RS-232, JTAG, or Ethernet) back to a workstation running the same software debugger the developer is used to working with. In both cases (MP-ICE and RTOS), the software and hardware debugging environments can be synchronized so that hardware/software interface issues can be debugged conveniently. The breakpoint/trigger systems of the emulator and the MP-ICE are cross-connected so that the emulator's logic-analyzer trigger is one of the MP-ICE breakpoint conditions, and the MP-ICE breakpoint trap signal is set as an emulator logic-analyzer trigger condition. Therefore, if a software breakpoint is reached, the emulator captures the state of the ASIC at that same moment; if an ASIC event triggers the logic analyzer, the software is stopped at that moment. This allows inspection of the hardware events that led to a software breakpoint, or of the ASIC operation resulting from executing a set of software instructions. This kind of coordinated debugging is extremely valuable for understanding subtle problems that occur at the hardware/software interface.

Comparison of approaches

Figure 4 summarizes the tradeoffs of the three acceleration approaches explained above and contrasts them with not using acceleration.
Figure 4 — Comparison of hardware/software co-verification approaches
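The cross-connected breakpoint/trigger scheme described above can be sketched as follows. This is a hypothetical Python model (the `Emulator` and `MpIce` classes are invented for illustration, not any vendor's API); it shows only the wiring idea: each tool's stop event arms the other's trigger input, so hardware and software halt on the same moment.

```python
# Hypothetical sketch of cross-coupled hardware/software debugging.

class Emulator:
    def __init__(self):
        self.trigger_fired = False
        self.on_trigger = []           # callbacks run when the logic analyzer fires

    def fire_trigger(self):            # an ASIC event matched the trigger condition
        self.trigger_fired = True
        for cb in self.on_trigger:
            cb()

class MpIce:
    def __init__(self):
        self.halted = False
        self.on_breakpoint = []        # callbacks run when a breakpoint traps

    def hit_breakpoint(self):          # software reached a breakpoint
        self.halted = True
        for cb in self.on_breakpoint:
            cb()

def cross_connect(emu, ice):
    """Wire each tool's stop event into the other's trigger input."""
    emu.on_trigger.append(lambda: setattr(ice, "halted", True))
    ice.on_breakpoint.append(lambda: setattr(emu, "trigger_fired", True))

emu, ice = Emulator(), MpIce()
cross_connect(emu, ice)
ice.hit_breakpoint()                   # software breakpoint reached...
print(emu.trigger_fired, ice.halted)   # -> True True: ASIC state captured at that moment
```

Running the scenario the other way (calling `fire_trigger` on a freshly cross-connected pair) halts the processor on a hardware event, mirroring the two cases described in the text.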
Determining the best approach for your project

It's important to think about the performance required to meet the objectives of a given project. Consider the following:

1) Are you going to begin software debug before or after tape-out?
A look at the cost of the verification solution versus the cost of making a mistake can be instructive. The costs of making a mistake include the cost of a respin of the IC and the cost of being three months late to market (the average time that can be saved by starting software debugging at or before tape-out). For rapidly changing consumer markets, the lost-opportunity cost can easily reach tens of millions of dollars.

There are additional benefits to using co-verification. For example, it is very helpful if the diagnostics are working when the IC comes back from the fab: they can be used for focused testing of specific parts of the design. Without working diagnostics, you end up doing ad hoc testing of the whole IC at once, a hit-or-miss proposition.

Incorporating co-verification into your design environment

One of the most significant factors in implementing hardware/software co-verification for a project is the corporate "culture" and organization of hardware and software developers. The ideal is a project team in which hardware and software engineers report to a single project leader and work together in a fully collaborative way to create an optimal hardware/software system.

It's important to make the entire software debug process as transparent and turnkey as possible. The ideal situation is one where a software developer asks for an emulated software debug environment, gets queued (via LSF), and receives the first available system, which then launches the same software debugger the developer is familiar with. From that point on, the developer operates as though connected to a prototype system. Several companies have established such environments and provide this level of support across their corporate WANs to developer teams around the globe on a 24/7 basis. Because developers use the same software debug environment throughout — whether with an ISS, an emulated model, a prototype, or the final system — a high level of productivity is maintained.
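The queued, turnkey access model described above can be sketched as a simple allocator. This is an assumed illustration (the `EmulationPool` class and system names are hypothetical; real sites would use LSF or a similar batch scheduler): a developer either receives a free emulator system immediately or waits in a FIFO queue until one is released.

```python
# Hypothetical sketch of queued access to a pool of shared emulator systems.
from collections import deque

class EmulationPool:
    def __init__(self, systems):
        self.free = list(systems)          # idle emulator systems
        self.waiting = deque()             # queued developer requests

    def request(self, developer):
        """Grant a system immediately, or queue the developer (LSF-style)."""
        if self.free:
            return developer, self.free.pop(0)
        self.waiting.append(developer)
        return developer, None             # queued; granted on a later release

    def release(self, system):
        """Return a system; hand it straight to the next queued developer."""
        if self.waiting:
            return self.waiting.popleft(), system
        self.free.append(system)
        return None

pool = EmulationPool(["palladium-1", "palladium-2"])
print(pool.request("alice"))        # -> ('alice', 'palladium-1')
print(pool.request("bob"))          # -> ('bob', 'palladium-2')
print(pool.request("carol"))        # -> ('carol', None)  queued until a release
print(pool.release("palladium-1"))  # -> ('carol', 'palladium-1')
```

From the developer's point of view the grant step would also launch the familiar debugger against the assigned system, making the whole flow transparent.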
Being able to share an emulator effectively in a multi-user environment is important. At one 3D-graphics chip company, the total capacity of a single Incisive Palladium emulator is shared among eight users for BIOS and driver software development (using a partial-chip model). Each user can operate his or her portion of the emulator independently, in any mode. Alternatively, the entire system capacity can be used when verifying the complete 35-million-gate design. With multiple systems, the company will be able to support up to 48 simultaneous software developers.

Some silicon vendors who make processors or DSPs provide an additional benefit to their customers to encourage design-ins and accelerate time-to-market (and hence their own processors' time-to-volume). By making the software development environments described above available to their customers, they give customers early access to new designs — prior to chip availability — for software testing. Processor models can be encrypted for IP security. By special arrangement, system-level hardware debugging environments can also be offered. Such capabilities build closer customer relationships with long-term mutual benefit.

Summary

The software content of electronic products is increasing exponentially and is most often the pacing item for product completion. Software simulation alone is not fast enough to test the volume of software being written for today's electronic products. Using acceleration and emulation for hardware/software co-verification leverages the investment made in the emulator for ASIC verification to speed software debugging, shortening product cycles by several months. Emulation as a vehicle for hardware/software co-verification provides by far the highest performance available for this critical task, along with real-world data for comprehensive system testing and a complete, familiar software debugging environment.
An ISS alone can be a useful tool for testing small amounts of software when nothing faster is available. But verifying hardware and software separately leaves the most difficult problems until the most critical part of the project — the end! It is therefore crucial to test hardware and software together — hardware/software co-verification — and to start as early in the project as possible. Developers doing this today find it saves several months in the product development cycle.

Ray Turner is the senior product line manager for Cadence's Incisive Palladium accelerator and in-circuit emulation systems, part of the Incisive Verification Platform. Before joining Cadence, he was the EDA marketing manager for P-CAD products for seven years. Overall, Ray has 18 years of experience in product management for EDA products. He also has 14 years of experience in hardware, software, and IC design in the telecommunications, aerospace, ATE, and microprocessor industries.