The challenge of verifying an HDL design before tape-out is a well-recognized one in the industry. The growing complexity of designs, combined with shrinking time-to-market schedules, has made testbench automation tools a common sight. In addition, verifying that the software and the hardware operate together as expected has become just as crucial. Therefore, a strong integration between an HDL verification environment and a software-debugging environment is called for. By verifying the combined hardware and software unit and allowing your testbench to access the software as fully and easily as the hardware, you get true verification of such HW/SW packages. Because the software effectively becomes part of the device under test (DUT), it must be verified during the verification phase to make sure that it works correctly in conjunction with the hardware at hand. A way to capture and verify the interdependencies between the software and the hardware is also required.

At the SOC Design Center, we design system LSI devices for use in a variety of Canon products, including copiers and cameras. Recently, we developed a core controller chip comprising 1.5 million gates, a RISC core, and an externally developed PCI interface. Our testbench consisted of the DUT, bus monitors, bus functional models (BFMs), utilities, and tests. Because of the system's complexity, we needed a way to do thorough functional verification of the hardware and software together (see Figure 1).
Testbench automation
Testbench automation tools automate key parts of the verification process and are able to find bugs that don't necessarily appear through manual evaluation. These bugs result from unexpected use of the target system. Specifically, Specman Elite from Verisity Design, Inc. (Mountain View, CA) is adept at generating stimuli into the device, creating self-checking tests (for both data checks and protocol checks), and indicating holes in the functional coverage where further tests should be directed.
With this tool, we have three main engines to automate the verification of the design: constraint-driven test generation, data and temporal checking, and functional coverage analysis. The constraint-driven generation engine produces stimuli for the DUT, ranging from fully random to fully directed inputs. This engine generates tests that reach corner cases in the design without a test writer having to force the design into a specific state. Given enough run time, the generator would cover the entire problem space of the design. The inputs can be generated pre-run, on the fly, or in any combination of the two. On-the-fly generation reaches even more interesting corner cases in the design by generating inputs based on the current state of the DUT (see Figure 2).
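To make this concrete, below is a minimal e sketch of constraint-driven and on-the-fly generation. The struct, field, and signal names (bus_tr, 'top.clk', 'top.fifo_full') are assumptions for illustration, not our actual testbench code: declarative keep constraints define the legal transaction space, and the gen ... keeping call reacts to the DUT's current state at run time.

<'
struct bus_tr {
    kind : [READ, WRITE];
    addr : uint(bits: 32);
    len  : uint;
    keep len in [1..16];                   -- only legal burst lengths are generated
    keep kind == WRITE => len <= 8;        -- an example protocol rule written as a constraint
};

unit bus_driver {
    event clk_rise is rise('top.clk') @sim;

    -- On-the-fly generation: each transaction is generated at run time,
    -- constrained by the DUT's current state.
    drive_one() @clk_rise is {
        var fifo_full: bool = ('top.fifo_full' == 1);
        var tr: bus_tr;
        gen tr keeping {
            fifo_full => it.kind == READ;  -- back off writes while the FIFO is full
        };
        -- ...drive tr onto the bus here...
    };
};
'>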
For the system LSI design, we wrote our testbench in Specman Elite's e verification language. We used temporal expressions for bus monitors that act both as protocol checkers and as timing checkers. The state machine statement is useful for coding BFMs, and keyed lists are valuable for the scoreboard checking approach. The e language has proven easy to use and effective both for constructing test architectures and for writing test scenarios. Further, the C interface provided by Specman Elite is useful for efficiently connecting reference models with the HDL.
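The fragment below is a hedged sketch of the two mechanisms just mentioned: a temporal expression used as a protocol/timing check in a bus monitor, and a keyed list used for scoreboard checking. All signal, struct, and field names are invented for illustration.

<'
unit bus_monitor {
    event clk is rise('top.clk') @sim;
    event req is rise('top.req') @sim;
    event ack is rise('top.ack') @sim;

    -- Protocol/timing rule: every request must be acknowledged within four clocks.
    expect req_then_ack is @req => {[0..3]; @ack} @clk
        else dut_error("ack did not follow req within 4 cycles");
};

struct expected_tr {
    addr: uint(bits: 32);
    data: uint(bits: 32);
};

unit scoreboard {
    -- Keyed list: expected results are looked up by address when DUT output appears.
    !expected: list (key: addr) of expected_tr;

    check_output(addr: uint(bits: 32), data: uint(bits: 32)) is {
        check that expected.key_exists(addr)
            else dut_error("unexpected output at address ", addr);
        check that expected.key(addr).data == data
            else dut_error("data mismatch at address ", addr);
        expected.delete(expected.key_index(addr));
    };
};
'>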
Specman Elite's random generation capability has proven essential. In our verification environment, a test commands each BFM to generate transactions. Each thread in a test concurrently generates various types of transactions to the DUT. Semaphores let threads share resources such as memory space and slave functions, or claim them exclusively. Moreover, it's easy to control threads running simultaneously.
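The sketch below shows the kind of concurrency this gives us: two stimulus threads run side by side inside one test. The two do_* TCMs are placeholders standing in for real BFM calls, and the signal name is assumed.

<'
extend sys {
    event clk is rise('top.clk') @sim;

    do_pci_write() @clk is {
        wait cycle;                        -- placeholder for a PCI BFM transaction
    };

    do_usb_transfer() @clk is {
        wait cycle;                        -- placeholder for a USB BFM transaction
    };

    main_test() @clk is {
        all of {                           -- both branches run as concurrent threads
            { for i from 1 to 20 do { do_pci_write(); }; };
            { for j from 1 to 20 do { do_usb_transfer(); }; };
        };
    };

    run() is also {
        start main_test();
    };
};
'>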
Lock and release
For example, we made extensive use of lock-and-release methods for each resource, such as memory space, registers, and even transactions. If we put a lock at the beginning of a transaction from a particular master and a release at the end, the other master can't disturb that transaction. This makes test writing easy: we can concentrate on writing just one transaction scenario, don't have to worry about the other master's behavior, and avoid unwieldy interactions between the masters.
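A minimal sketch of the lock-and-release pattern is shown below. Specman ships its own semaphore utilities, but to avoid assuming their exact API this version is hand-rolled; the struct and field names are ours for illustration.

<'
struct resource_lock {
    !busy: bool;
    event released;

    lock() @sys.any is {
        while busy { wait @released; };    -- queue up until the current holder releases
        busy = TRUE;
    };

    release() is {
        busy = FALSE;
        emit released;
    };
};

extend sys {
    mem_lock: resource_lock;               -- one lock per shared resource
};

-- Usage inside a master's transaction TCM (the BFM call is an invented name):
--   sys.mem_lock.lock();
--   -- ...issue the whole transaction without interference from the other master...
--   sys.mem_lock.release();
'>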
The checking engine allows tests to be self-checking and requires no manual intervention such as eyeballing waveforms or deciphering data dumps. The automatic checking supports both data checks and protocol (timing) checks, covering all the rules extracted from the specification of the design.
Figure 1 - Functional verification through the testbench
Finally, the functional coverage engine collects coverage information during test runs. This information shows which parts of the design functionality the tests have already covered, and which parts remain untested. This is valuable information both for deciding when verification is done and for saving simulation cycles by directing tests only to still-uncovered areas.
From the coverage information, we were able to see how much of the design had been tested and what hadn't been. Then, Specman Elite was able to generate directed tests for the uncovered portions of the design.
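As an illustration, the coverage group below extends the bus_tr struct sketched earlier. The items, ranges, and event name are invented and are not our real coverage plan; the point is that each item and cross corresponds to a test item we no longer have to write as a directed test.

<'
extend bus_tr {
    event tr_done;                         -- the BFM would emit this when a transaction completes

    cover tr_done is {
        item kind;                         -- which transaction kinds have been exercised
        item len using ranges = {range([1..4]); range([5..8]); range([9..16])};
        cross kind, len;                   -- kind/length combinations reached so far
    };
};
'>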
In the past, we wrote a lot of Verilog tests and ran many simulations based on scenarios in the testplan. We listed more than 300 test items and wrote more than 300 tests corresponding to each test item. This required a huge amount of engineering resources and proved extremely time consuming. Now, using Specman Elite, we're writing tests based on random-event generation and can specify coverage points corresponding to each test item. There are more than 300 coverage points, but we don't have to write out each test scenario. The tool generates various scenarios and we have succeeded in reducing the number of tests to one tenth of the original.
The e language is object-oriented and has useful capabilities as a programming language, while Verilog is a hardware description language and isn't at all suited for verification. Given the characteristics of e, we were able to reduce the code size per test to one third while still achieving equivalent functions. Combined with the tenfold reduction in the number of tests, we were able to reduce overall code size to one thirtieth of the original. Now, every simulation scenario we run has significant meaning and we are no longer spending time running unnecessary simulations.
Figure 2 - Generating verification checks
Specman Elite contains on-the-fly constraint-driven test generation, data and temporal checking, and functional coverage analysis that works with both VHDL and Verilog simulators.
Conserving resources
Since our system was so complex, we needed to put as many resources into verification as possible. One of the things we did was to maximize the number of engineers proficient in the e language. In addition, once the initial HDL was completed, we shifted resources away from the design and put them in verification. For the system LSI design, we had the design engineers and verification engineers begin working simultaneously. The design engineers focused on writing the design from the specifications while the verification engineers began creating the verification environment and writing the testplan.
After the initial check of the HDL code, we then shifted 60 percent of the design engineers to the verification team. We did this for several reasons. First, after the initial check of the code, the amount of remaining design work drops drastically, mainly because synthesis tools handle much of the remaining implementation detail automatically. Second, the complexity of the system requires additional verification resources. In the SOC Design Center at Canon, over 80 percent of our engineers are now fluent in Specman Elite and its e verification language.
We used the Specman Elite tool on the previous generation of the system LSI design and received very good results. With the new generation, however, the system contained another controller and we needed to verify the entire system. Therefore, we needed to incorporate a HW/SW co-verification methodology. On past projects, we had used Mentor Graphics' Seamless HW/SW co-verification tools and were very happy with the results. For the new generation of the system LSI design, we decided to use Specman Elite and Seamless together in our verification flow.
Figure 3 - Hardware/software verification
Seamless allows you to switch dynamically between detailed hardware verification and high-speed software execution.
Incorporating co-verification
HW/SW co-verification tools let you debug the software before the hardware portion of the design is complete. They load the software, run it while communicating with the HDL simulator, and keep the two synchronized. Such tools give you the ability to debug the software much earlier in the development process, before a physical prototype of the hardware is available. Seamless, with its different optimization modes and instruction set simulator, is well suited for this task and has proven so useful that we no longer tape out any chip without first validating the design using the tool (see Figure 3).
Because of its virtual prototyping functions, Seamless enables our software engineers to develop the software without silicon. In this manner, our engineers can run their code immediately after the hardware engineers complete RTL coding. This method ensures that the software is of high quality when the silicon comes back from fabrication. So far, our use of Seamless has been mainly for software debugging. In one instance, however, we did find a serious hardware bug that wouldn't have been found until after silicon without HW/SW co-verification. Without co-verification, it's very difficult to find a hardware bug that results from a specific behavior of the software.
Without HW/SW co-verification, we could only use a BFM as the processor model. Usually, though, a BFM doesn't support exception handlers; it can't behave as an actual processor would when it receives an interrupt. By using HW/SW co-verification, we were able to find a hardware bug that was directly related to multiple interrupts. In this case, the hardware wasn't able to get the correct information for the second interrupt while the software was handling the first interrupt. Also, since the BFM doesn't support internal cache and snoop capability, it's very difficult to debug hardware related to cache and snoop. With HW/SW co-verification, we have the ability to verify and debug chips in which hardware and software operate tightly together, and therefore reduce the possibility of having to re-spin the chip later on.
Simulation streamlined
Seamless provided our software engineers with full debugging capability, and we reduced our simulation time by more than 100x using the optimization capability. In the past, it was difficult for our software engineers to control the BFMs located around the DUT (for instance, USB, PCI, Ethernet, and others) because there was no communication between the software and those models. Instead of using those models, we used loop-back tests for software debugging, which added a direct connection between the output and the input of each interface. This meant that the testbench for the software had less capability than the one for hardware, and we had to create two individual testbenches.
Now, with the combination of tools, the model behavior can be controlled easily according to the software state. For instance, we issue a transaction from the USB host model when the software executes a certain instruction or enters a certain state. Another example is to generate an interrupt in exactly the same cycle in which the software executes a conditional branch instruction, a corner case that is very hard to reach otherwise. Furthermore, the integrated environment allows the software to control the BFMs directly.
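The sketch below gives the flavor of this, heavily hedged: the exact way the Specman/Seamless integration exposes software state is not shown here, so sw_entered_isr is a placeholder event that an assumed software monitor would emit, and the USB host call is only indicated in a comment.

<'
extend sys {
    !sw_in_isr: bool;                      -- mirrors "software is inside its interrupt handler"
    event sw_entered_isr;                  -- emitted by the (assumed) software monitor

    react_to_sw() @sys.any is {
        wait @sw_entered_isr;
        -- here the test would call the USB host BFM to issue a transfer, so the
        -- stimulus lands exactly while the software is in the handler
    };

    run() is also {
        start react_to_sw();
    };
};
'>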
The code of cooperation
Both of these tools can work with other third-party tools via standard interfaces and still generate beneficial results. The first thing we did, however, was to request that both Verisity and Mentor Graphics work on a tight integration between their tools. We wanted the tools to work together more smoothly than tools that only work via standard interfaces.
Verisity and Mentor Graphics honored our request and developed a deeper integration between Specman Elite and Seamless, giving the testbench tool the same access to the software as to the hardware. In our case, Specman Elite generated stimuli for both the hardware and the software, based on the combined states of both. It captured corner cases that we'd have missed otherwise.
In this instance, we were able to generate stimuli based on the execution of the software from a model outside the controller. One distinctive example: we found a software bug in an error condition by generating an Ethernet error while the software was accessing a certain address space. We found the problem by generating an error on the PCI bus as the software was executing an exception handler or raising an exception. It would be difficult to generate these error situations even on an actual board. HW/SW co-verification with BFMs may be the only method by which to debug situations such as this.
The ability of Specman Elite to "see" both the hardware and the software states at the time of generation enabled the tool to generate tests targeted at the interdependencies between the hardware and software much more efficiently. This visibility also gave us functional coverage information for both hardware and software, including cross-coverage of the two.
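A sketch of what such hardware/software cross-coverage can look like is shown below; hw_fifo_level and sw_mode are invented stand-ins for a hardware item and a mirrored software item, and the sampling event is assumed to be emitted by a monitor.

<'
type sw_mode_t: [SW_IDLE, SW_ISR, SW_DMA_SETUP];

extend sys {
    !hw_fifo_level: uint;
    !sw_mode: sw_mode_t;
    event hw_sw_sample;                    -- emitted by a monitor at interesting moments

    cover hw_sw_sample is {
        item hw_fifo_level using ranges = {range([0..3]); range([4..7])};
        item sw_mode;
        cross hw_fifo_level, sw_mode;      -- hardware/software cross-coverage
    };
};
'>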
Figure 4 - A tightly integrated verification environment
Specman Elite gains full visibility and controllability of software symbols for test generation, checking, coverage, and debugging, as well as access to the state of Seamless' coherent memory.
In addition, Specman Elite monitored the behavior of the software. By seeing into the software state, we were able to view what portion of the software was running and, therefore, see how the hardware and software interacted. And we had full debugging capabilities over both; we could view signal changes, software variable changes, and testbench variable changes in the waveform viewer, source code debugger, and other viewers.
For example, we were able to set a breakpoint at the line that generates an interrupt in the e code, thus preventing the e code from driving the HDL while allowing the software to continue running. This strategy let us single-step the software so we could debug the interrupt handler. Conversely, we also set a breakpoint at the enable-bit setting for the DMAC (direct memory access controller) in the software, which stopped the software and allowed the hardware to continue going forward. The hardware simulation can also be executed in single steps, a very useful capability in a complicated design where the software and hardware operate closely together (see Figure 4).
Working to the ideal
Incorporating a testbench automation tool and a HW/SW co-verification tool offers an ideal strategy for efficient, high-quality verification of SOCs. With this combination, we have simplified our efforts because we don't need to create separate testbenches for software and hardware. By having full visibility and control of the hardware and software, we can ensure that the testbench efficiently detects bugs resulting from the interdependence between the hardware and software, bugs that otherwise would only have become apparent after tape-out. The combination of the two tools can reduce the number of unnecessary prototype turns and thus help win market share by getting to market on, or ahead of, schedule.
Having realized the benefits of this methodology, we at Canon will no longer develop chips without the benefits provided by a combination of a testbench automation tool and a HW/SW co-verification tool, such as Specman Elite and Seamless. This methodology contributes significantly to bug-free chips and a successfully completed time-to-market schedule.
Hiroshi Nonoshita is the manager of SOC Development Dept. 13 at Canon and is responsible for design verification in the SOC Design Center in Kanagawa, Japan. He has worked for Canon in various ASIC design capacities for over 16 years, including desktop publishing and PowerPC hardware.