Building a Verification Pyramid
By Matthew Moniz, EE Times
October 21, 2002 (2:34 p.m. EST)
URL: http://www.eetimes.com/story/OEG20021021S0035
Developing a high-quality programmable DSP solution based on an entirely new architecture means more than providing quality silicon. Ensuring a customer's success also means providing a reliable toolset for developing applications, as well as an accurate simulation vehicle for debugging and performance evaluation of those applications.
Meeting these requirements in the confines of today's rapid development cycles requires an efficient, yet effective, verification effort. Through proper planning, the initial, more fundamental steps of the verification process can serve as a foundation for later, more complex verification methods. In a sense, this approach can be thought of as a "verification pyramid." The foundation of the pyramid consists of many small tests that can be deployed rapidly by keeping their complexity to a minimum. On this base is built a second, somewhat more complicated layer that draws on the fundamental aspects of the first. While the complexity of these tests is greater, their number is often smaller. Atop the second layer is built another smaller, yet more complex layer, and so on.
This approach was used in the verification of the CW4011, ChipWrights' first visual signal processor (ViSP). Because it was the company's first DSP solution, a fully functional assembler and instruction set simulator (ISS) had to be developed, in addition to reliable first-pass silicon.
The first step was to establish a suitable tool environment in which the testbench could be created. Due to schedule restrictions, it was imperative to create an environment in which all available verification resources could contribute to the testing effort with as small a learning curve as possible. It was also important to identify places where the tools themselves could contribute to the testing process, such as for industry-standard compliance checking. The desire to take advantage of these types of tools fueled our tool selection process.
The most fundamental decision was the selection of a testbench authoring tool. TestBuilder, from Cadence, had a number of advantages for ChipWrights over its competitors. First, it's an open-source product, which means we would not be license-limited by our testbench tool as we would have been by other commercial products. Additionally, TestBuilder allows the majority of the testbench to be written in C/C++, raising the level of abstraction of the testbench to that of a high-level programming language. Furthermore, C and C++ are programming languages widely known in the verification community, whereas many commercial tools require knowledge of a proprietary language that demands prior experience or a learning curve.
A Verification Pyramid (figure caption): Depicted in this diagram is a typical verification pyramid. The lowest layers of the pyramid contain large numbers of simple, easily deployed tests that verify the fundamental aspects of a design. As you climb the layers of the pyramid, the number of tests decreases as their complexity increases until you reach the pinnacle: a fully self-checking pseudo-random system exerciser.
That decision having been made, the other major tool choice was a modeling vendor. Because the CW4011 had to support industry-standard interfaces and devices such as SDRAM, EPROM, serial EPROM, and CompactFlash, we needed a modeling vendor capable of supplying multiple models of each type, from various manufacturers and in different configurations, while providing a common interface that would allow for minimal integration effort with the remainder of the testbench. Denali's Memory Modeler gave us the versatility of vendors and configurations we needed, as well as the ability to add a single Verilog variable to our testbench representing all errors flagged by Denali's assertion-checking system.
With these two powerful tools in place, it was time to start planning the first level of tests to be developed. The requirement for this first set of tests was that they be extremely narrow in focus and simple in approach. Since they would be used to verify not only the RTL but also the ISS and assembler, they would be utilized by a number of different engineering groups. By keeping the scope of the tests small, the hope was that they would have short run-times and be easier to debug. Furthermore, since different engineers would run the tests on different targets, they all had to be entirely self-checking.
Derived from a set of architecture and design specifications, the first level of tests for the CW4011 was targeted at specific areas of the design. The entire instruction set was verified one instruction at a time. Bypass and stalling logic was exercised by testing specific instruction combinations. All configuration fields inside the CW4011 were tested to ensure that they had the desired effects. Data and address pattern checking were performed on individual interfaces and busses. And simple transactions were performed on I/O interfaces.
In keeping with the test plan's complexity at this fundamental level, the testbench at this point in the verification process was also narrow in scope. Transactor functions were designed to provide a simple, yet effective, interface for test writers. Single functions were provided to backdoor-load internal and external memory models, cause bus transactions on external interfaces, check actual versus expected data, and perform other similar tasks. For example, loading application code from a file or array into SDRAM could be accomplished through a single function call, as could transferring a burst of data into the CW4011 through the high-speed external video interface.
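As an illustration of the kind of interface this implies, here is a minimal C++ sketch of a level-one directed test written against such single-call routines. The function names (backdoor_load_sdram, video_burst_in, check_data) are hypothetical stand-ins, not ChipWrights' actual TestBuilder API, and their bodies are stubs.

```cpp
// Sketch of a level-one directed test built on single-call transactor
// routines. Names and bodies are hypothetical stand-ins.
#include <cstdint>
#include <cstdio>
#include <vector>

// Backdoor-load an image of application code into the SDRAM model.
void backdoor_load_sdram(uint32_t base, const std::vector<uint32_t>& image) {
    std::printf("loading %zu words at 0x%08x\n", image.size(), base);
    // ...poke the memory model here...
}

// Drive a burst of pixels into the high-speed external video interface.
void video_burst_in(const std::vector<uint16_t>& pixels) {
    std::printf("driving %zu-pixel burst\n", pixels.size());
    // ...drive the video-port transactor here...
}

// Compare actual against expected data and count any mismatches.
int check_data(const std::vector<uint16_t>& actual,
               const std::vector<uint16_t>& expected) {
    int errors = 0;
    for (size_t i = 0; i < expected.size(); ++i)
        if (i >= actual.size() || actual[i] != expected[i]) ++errors;
    return errors;
}

// A directed test then reduces to a handful of these calls.
int main() {
    backdoor_load_sdram(0x00000000u, {0xDEADBEEF, 0xCAFEF00D});
    std::vector<uint16_t> burst(64, 0x5A5A);
    video_burst_in(burst);
    return check_data(/*actual=*/burst, /*expected=*/burst) ? 1 : 0;
}
```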
This minimalist approach to transactor development was intended to allow test writers to start executing the test plan as soon as possible by providing an intuitive testbench interface and short transactor development time. Also, the versatility of the functions allowed test writers to develop tests at an extremely rapid pace, facilitating faster debug of the fundamental functions of each test target.
The result of the first round of testing was a high degree of confidence that each individual aspect of the design behaved as expected. But since a DSP is rarely used one function at a time, the next step was to start testing areas of the design that would normally interact with each other. For example, scenarios needed to be created in which several system bus masters were attempting to access the internal AHB processor bus simultaneously. Or, while the primary memory crossbar had been tested to ensure that each individual requester could access the primary memory banks, the case where multiple requesters were accessing them at the same time had not been addressed. This type of testing defines the second level of the verification pyramid.
Until this time, each verification engineer developing tests for a single interface or design module could operate in relative isolation from his or her counterparts working on other aspects of the design. This allowed for parallelization of effort and was well suited to the requirements of level one of the pyramid.
Now, however, many different aspects of the testbench would be brought together in any single test. The complexity of the testbench itself stayed the same at this level of the pyramid; instead, the increase in complexity happened in the tests themselves. The same functions that were used to operate external interfaces or access internal state for a particular portion of the design were now combined into the same tests. Fortunately, at this point the quality of those functions had been well established through the debug process of the first set of directed tests, so a majority of the debug time for this second set of tests could be devoted to the design itself.
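To illustrate the shift, the sketch below reuses the same style of proven level-one routine but issues it from several requesters at once, with threads standing in for concurrent transactors, so that arbitration and crossbar contention are actually provoked. The requester routine and its internals are hypothetical.

```cpp
// Sketch of a level-two test: reuse proven level-one calls, but issue
// them from several bus masters at the same time so that arbitration
// and crossbar contention are exercised.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<int> errors{0};

// Stand-in for a proven level-one routine: one requester performs a
// burst to primary memory and self-checks the result.
void burst_and_check(int requester_id) {
    // ...start transaction, wait for completion, compare data...
    bool ok = true;                         // would come from the data check
    if (!ok) ++errors;
    std::printf("requester %d done\n", requester_id);
}

int main() {
    std::vector<std::thread> masters;
    for (int id = 0; id < 4; ++id)          // four simultaneous requesters
        masters.emplace_back(burst_and_check, id);
    for (auto& t : masters) t.join();
    return errors.load() ? 1 : 0;
}
```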
At this level of testing, the development time of each individual case becomes somewhat longer, as does its run-time, both increasing in proportion to the added complexity.
After the second level of testing, all of the individual test cases that can be specified and easily tested through directed methods have been exhausted. However, as in any verification project, there are always collisions and corner cases that are impractical to create in directed form. And then there are the hidden cases that haven't been thought of yet. This problem is solved through the third and higher levels of the pyramid: exercisers.
Developing an exerciser continues to build upon the transactor features created for the first two levels of testing: it often consists of wrapping lower-level functions into a top-level function that randomly selects which function to use and when. For example, at this point, routines are available and well proven to start, abort, flush, and check the results of a direct memory access (DMA) transfer. To create a DMA exerciser, then, is to take those methods and combine them in such a way that DMA transfers are randomly selected with random parameters. Those DMA transfers will sometimes be aborted, depending on the randomization. Sometimes entire DMA channels will be flushed of pending transfers as well. And the DMA exerciser should always know when to check whether the results of a DMA were correct; that is, checking the results of pending transfers that were flushed before they ever took place would certainly result in an unwanted error condition.
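A minimal sketch of such a random-selection loop appears below. The start_dma, abort_dma, flush_channel, and check_dma routines are hypothetical stand-ins for the proven transactor functions; the essential detail is that aborted or flushed transfers are dropped from the set of results awaiting a check.

```cpp
// Sketch of a DMA exerciser: randomly start, abort, flush, and check
// transfers using previously proven routines (names are hypothetical).
// Transfers that were aborted or flushed are never checked.
#include <cstdio>
#include <random>
#include <set>

static void start_dma(int id) { std::printf("start DMA %d\n", id); }
static void abort_dma(int id) { std::printf("abort DMA %d\n", id); }
static void flush_channel()   { std::printf("flush pending DMAs\n"); }
static bool check_dma(int id) { std::printf("check DMA %d\n", id); return true; }

int main() {
    std::mt19937 rng(12345);                        // reproducible seed
    std::uniform_int_distribution<int> action(0, 3);
    std::set<int> pending;                          // transfers awaiting a check
    int next_id = 0, errors = 0;

    for (int step = 0; step < 100; ++step) {
        switch (action(rng)) {
        case 0:                                     // launch a new transfer
            start_dma(next_id);
            pending.insert(next_id++);
            break;
        case 1:                                     // abort one in flight
            if (!pending.empty()) {
                int id = *pending.begin();
                abort_dma(id);
                pending.erase(id);                  // aborted: nothing to check
            }
            break;
        case 2:                                     // flush the whole channel
            flush_channel();
            pending.clear();                        // flushed: nothing to check
            break;
        case 3:                                     // verify a completed transfer
            if (!pending.empty()) {
                int id = *pending.begin();
                if (!check_dma(id)) ++errors;
                pending.erase(id);
            }
            break;
        }
    }
    return errors ? 1 : 0;
}
```

Keeping the seed fixed, as in the sketch, also makes a failing random sequence easy to reproduce during debug.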
In the case of the CW4011, two different exercisers were implemented. The first was used to tackle the complexity of the processor pipelines. A script was used to randomly generate sequences of instructions, with varying parameters, which would be executed on both the RTL and ISS models of the DSP. Since the two targets, the RTL and the ISS, were developed by different engineering teams from a common set of specifications, and the individual behavior of each instruction and transaction had been verified in the first level of testing, each model could now be used to verify the results of the other. An environment was created in which each register access, memory access, and transition of the program counter was compared from one simulator to the other. Cycle accuracy could also be determined within a small margin of error.
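That comparison environment can be sketched as a lockstep loop like the one below. The Step structure and the step callbacks are hypothetical; the idea is simply to advance both simulators one instruction at a time and compare the program counter and register writes after each step.

```cpp
// Sketch of lockstep RTL-vs-ISS comparison: step both models one
// instruction at a time and compare their visible state.
#include <cstdint>
#include <cstdio>
#include <functional>

struct Step {                 // visible state after retiring one instruction
    uint32_t pc;
    int      reg_index;       // -1 if no register was written
    uint32_t reg_value;
};

// Step both simulators in lockstep and flag the first divergence.
int compare(std::function<Step()> rtl_step,
            std::function<Step()> iss_step, int n) {
    for (int i = 0; i < n; ++i) {
        Step a = rtl_step(), b = iss_step();
        if (a.pc != b.pc || a.reg_index != b.reg_index ||
            (a.reg_index >= 0 && a.reg_value != b.reg_value)) {
            std::printf("mismatch at instruction %d: pc %08x vs %08x\n",
                        i, a.pc, b.pc);
            return 1;
        }
    }
    return 0;
}

int main() {
    // Stand-in models that just advance the program counter.
    uint32_t pc_rtl = 0, pc_iss = 0;
    auto rtl = [&pc_rtl] { return Step{pc_rtl += 4, -1, 0}; };
    auto iss = [&pc_iss] { return Step{pc_iss += 4, -1, 0}; };
    return compare(rtl, iss, 16);
}
```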
Given that there are over one thousand individual instructions in the CW4011 ViSP, it would be impossible to generate by hand even a small percentage of the potential combinations of instructions. Using an exerciser in this way allowed us to constantly run random permutations of instructions in the background, and even receive an e-mail when the exerciser had detected an error.
The second exerciser was an extension of the first. In addition to the random instruction set sequences running on the processor pipelines, random events were added to all external I/O interfaces and DMA channels. To avoid collisions, each interface or design module was allocated a certain range of space in shared resources, such as primary memory. Each transactor was responsible for spawning and verifying the results of each transaction associated with its corresponding area of the design, each operating with no knowledge of the others' activities.
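The bookkeeping behind that collision avoidance is straightforward. The sketch below assumes each transactor is handed a fixed, non-overlapping slice of primary memory and generates addresses only within its own slice; the sizes and helper names are illustrative.

```cpp
// Sketch of collision avoidance in the system exerciser: each transactor
// owns a non-overlapping slice of shared primary memory and randomizes
// addresses only within that slice. Sizes are illustrative.
#include <cstdint>
#include <cstdio>
#include <random>

struct Region { uint32_t base, size; };

// Carve a shared memory space into equal per-transactor regions.
Region region_for(int transactor_id, int num_transactors,
                  uint32_t mem_base, uint32_t mem_size) {
    uint32_t slice = mem_size / num_transactors;
    return {mem_base + transactor_id * slice, slice};
}

// Pick a random word-aligned address inside a transactor's own region.
uint32_t random_addr(const Region& r, std::mt19937& rng) {
    std::uniform_int_distribution<uint32_t> d(0, r.size / 4 - 1);
    return r.base + 4 * d(rng);
}

int main() {
    std::mt19937 rng(1);
    for (int id = 0; id < 4; ++id) {
        Region r = region_for(id, 4, 0x80000000u, 0x00100000u);  // 1 MB shared
        std::printf("transactor %d: 0x%08x..0x%08x, sample 0x%08x\n",
                    id, r.base, r.base + r.size - 1, random_addr(r, rng));
    }
    return 0;
}
```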
Once again, the complexity of the testbench and the exerciser tests themselves grew in this level of testing as the number of tests decreased. In all, over three hundred tests made up the first layer of testing for the CW4011, while the top layer consisted of only two.
The result of this approach to verification can be high utilization of available resources at the beginning of a project and a complete, sophisticated verification suite at the end. In the case of the CW4011, the end product was high-quality first-pass silicon, on schedule.
Matthew Moniz, principal engineer at ChipWrights Inc. (Newton, Mass.), can be reached at mmoniz@chipwrights.com.