FPGA Prototyping as a Verification Methodology
Abstract:
FPGA prototyping has become commonplace in the majority of SoC development programs. This paper is a brief discussion of four aspects of this approach. First, the forces behind this trend are examined and future needs are discussed. Second, the traditional verification methodologies of code coverage, assertion-based verification, and functional coverage are briefly reviewed. Third, how best to incorporate the FPGA verification approach into the overall chip development is examined. Last, a technique of FPGA instrumentation to measure coverage is introduced.
Introduction
As designs become more complex and design cycles grow ever shorter, wise engineers and managers look for ways to work smarter, not just harder. FPGA prototyping is one of those ways, but it presents a new challenge to the design and verification teams: how best to incorporate the effort that goes into an FPGA prototype into the overall verification initiative. This paper explores the forces behind the recent increase in FPGA prototyping, reviews traditional verification techniques, discusses how FPGA prototyping can be made part of the verification effort, and gives examples of how verification logic can be incorporated into the FPGA prototype.
The ideas presented in this paper are simple yet powerful in producing a high-quality design with as little time and effort as possible.
Forces behind FPGA prototyping
FPGA prototyping of an SoC design can take many forms: the entire device with all functionality represented, specific blocks only, or any combination in between. The FPGA may also be used as a proof-of-concept platform. This approach has become popular for a number of reasons; low barrier to entry, a high degree of reuse, software integration, system architecture validation, and performance monitoring are a few explored here.
Not only have FPGAs themselves become relatively inexpensive, but the abundance of FPGA development platforms has also driven costs down. Moreover, even if a custom board is developed, perhaps with custom analog circuitry, an experienced board house can produce it in a matter of weeks. To be most useful, these platforms must contain a number of things.
First, they must have an FPGA with ample logic capacity and I/O to contain both the design under test and the supporting test and debug features, such as debug ports, logic analyzer connections, and embedded trace hardware. There may also be a need for a platform with multiple FPGAs. This is especially useful when the design contains IP that needs to be protected; in that case, an encrypted bit stream can be generated for a single FPGA that is almost impossible to reverse engineer.
Second, the platform should have options for external memory. While this will not be required for all designs, it is desirable to minimize the number of platforms a design center must maintain, and the amount of memory is one of those things that never seems to be enough.
The platform should also have the most common physical interfaces implemented on the board: RS-232 and general-purpose TTL I/O as a minimum. USB, Ethernet, and FireWire are also useful.
Lastly, the platform should have a way of connecting a daughter card to the motherboard. This is a quick and inexpensive way to extend the features of the platform.
Figure 1: FPGA Verification Platform
Another aspect driving the use of FPGA platforms is the high degree of reuse. More complex chips and faster design cycles mandate that chip designers build their designs from proven sub-blocks rather than creating everything from scratch. It is then the designer's responsibility to understand the interfaces to those sub-blocks intimately. This can be done by reading the documentation and through simulation; actual prototyping on the FPGA is another way of gaining an understanding of the IP. Most designers would attest that actually using something is the best way to learn its details.
Software integration can be a lengthy, high-effort task. Giving the software designers a head start with an FPGA platform is a way to overlap the hardware and software efforts in a manner that minimizes the amount of throwaway code. The FPGA platform can provide the software designer with a bit-exact, clock-cycle-accurate development system that is very useful for developing the layer of software that interfaces directly to the hardware.
Once a certain amount of software is written, it becomes possible to evaluate the system architecture. This includes the sizes of FIFOs and buffers, bus bandwidths, and the amount of code space required. Changing the architecture late in development can have a drastic impact on the schedule, so identifying problems as early as possible is always desirable.
Last but not least, performance monitoring, especially with software involved, can be performed on the platform. For example, at this stage it is easy to evaluate how many processor instructions are required per message. This is useful for determining what operating speed the design must meet, or whether multiple processors or coprocessors are warranted. Addressing these factors early in the development cycle is a must for maintaining fast time to market.
A less tangible reason is that having a platform that actually performs a function that can be seen, heard, or felt is of immense value. All of the simulations in the world fall short of the impact of seeing something perform firsthand.
Review of Traditional Verification Methods
To understand how FPGAs can assist in verification, a brief discussion of verification methodologies is appropriate. Below, coverage-based verification is described in terms of statement, toggle, branch, and conditional coverage; assertion-based verification and functional coverage are then discussed.
Code coverage is probably the most widely used qualification for determining whether your verification effort is adequate. Statement coverage is simply a measure of how often each line, or statement, in your design is executed by the simulator. Toggle coverage looks at each net or register to determine whether the signal changes; it may also record whether the signal rose or fell. Branch coverage evaluates all the branches in the code and determines whether each branch is taken. Conditional coverage goes even further: it looks at which condition causes a branch to be taken. For example, in the statement "if (a or b) then c", conditional coverage will measure whether "c" is executed because "a" is true, because "b" is true, or because both are true.
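The distinction is easy to see in RTL. The following Verilog fragment, with purely illustrative signal names, marks what each coverage type would report on a compound condition:

module cond_cov_example (
  input  wire clk,
  input  wire a,
  input  wire b,
  output reg  c
);
  always @(posedge clk) begin
    if (a || b)        // branch coverage: was this branch ever taken?
      c <= 1'b1;       // conditional coverage: taken with a only, b only, or both?
    else
      c <= 1'b0;       // statement coverage: was this line ever executed?
  end
endmodule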
Code coverage is a good measure of how extensive the testbench is, but it says little about the correctness of the design. A verification engineer can achieve a high degree of code coverage without ever looking at the specification of what the design is supposed to do. It is also possible to meet your code coverage metric without even checking for pass/fail conditions. In other words, it is really a measure of your stimulus, but not necessarily a measure of your verification.
Functional coverage is a verification approach that has gained in popularity over the past few years. Here, the design is subjected to input that is constrained but more or less random. The checking occurs in one of two ways. First, assertions are placed in and around the design to flag errors if they occur; this is especially useful for protocol monitors. Second, the payload or output of the design is checked. The advantage of functional verification is that a verification engineer can run a large number of vectors with relatively little effort spent setting up the stimulus and the assertions. It is possible to do functional verification in native Verilog or VHDL, but several commercial products exist that save the verification engineer large amounts of time.
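As a minimal sketch of the first approach, the following SystemVerilog assertion checks a hypothetical request/acknowledge handshake. The signal names and the eight-cycle bound are assumptions for illustration, not taken from any particular bus standard:

module req_ack_checker (
  input wire clk,
  input wire rst_n,
  input wire req,
  input wire ack
);
  // Every request must be acknowledged within one to eight cycles.
  property p_req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:8] ack;
  endproperty

  assert property (p_req_gets_ack)
    else $error("Protocol violation: req not acknowledged in time");
endmodule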
When using functional verification techniques, the job of measuring coverage is not as straightforward as with code coverage. Code coverage metrics are well understood and are the same from one design to the next. Not so for functional coverage metrics: a design's functional coverage has to be defined for each and every project. The verification can then proceed to meet that definition, but there is a subjective element in determining what constitutes 100% coverage. For this reason, the quality of your verification is only as good as your functional coverage specification; this is one of the weaknesses of this type of verification. It is important for the verification engineer to realize that most obscure bugs happen as a result of different, unrelated things happening in parallel, for instance, an interrupt occurring in the middle of a locked bus transaction. It is critical for the engineer to have a healthy and robust imagination when specifying the functional coverage elements. Simulating the same packet over and over again may not reveal a hidden bug.
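A coverage point for the interrupt-during-locked-transaction scenario above might be sketched as a SystemVerilog covergroup with a cross. The signal names here are hypothetical; a real coverage model would be defined per project:

module lock_irq_coverage (
  input wire clk,
  input wire bus_locked,  // a locked bus transaction is in progress
  input wire irq_taken    // an interrupt is being serviced
);
  covergroup cg_lock_irq @(posedge clk);
    cp_lock : coverpoint bus_locked;
    cp_irq  : coverpoint irq_taken;
    // The easily missed bin is both conditions true at once.
    lock_x_irq : cross cp_lock, cp_irq;
  endgroup

  cg_lock_irq cov = new();
endmodule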
FPGA Verification Techniques
To make proper use of FPGA prototyping, the verification engineer must have a carefully thought-out verification plan. The plan should clearly state which parts of the design will be subjected to code or functional coverage and which parts will be exercised in the FPGA. For those areas where the FPGA is the main focus, a detailed description of how this will be accomplished is required. It may seem like the conservative approach to ignore the FPGA prototype when verifying your design, but when you are trying to reduce time to market it makes sense to approach verification holistically. Effort is not in unlimited supply; it is better to apply it to untested areas than to duplicate it verifying the same things in different ways.
Traditional directed-test verification should concentrate on the aspects of the design that cannot be accurately prototyped in the FPGA. Almost all of the design-for-test (DFT) features fall into this category. Also, the memories used in the design will probably differ from the embedded memories available in the FPGA, so the interfaces to these memories should be checked by traditional methods.
Any other IP that differs from or is not present in the FPGA should be fully evaluated in simulation. PLLs and DLLs fall into this category, as does most physical interface IP.
The table below summarizes how different aspects of the design should be verified.
| Aspect                      | Traditional Verification | FPGA Prototype |
| Payload                     |                          | X              |
| Software Compatibility      |                          | X              |
| Real-time Performance       |                          | X              |
| Embedded Memory             | X                        | X              |
| Built-in Self Test (BIST)   | X                        |                |
| Embedded IP Interface (PLL) | X                        |                |
| IO Pad Operation            | X                        |                |
| Reset/Power-up Sequence     | X                        | X              |
Implementing Verification IP
A higher level of sophistication is to actually implement verification IP in the FPGA. This becomes most useful when the FPGA will be used in some sort of live environment. Bus monitors that check protocol, counters that track the number of transactions, and assertion error checkers can be synthesized right into the FPGA. The system can be exercised for extended periods of time, and at the conclusion the checkers and counters can be examined. After the results are collected, specific tests can be performed to address any areas that were not covered. Following are several simple but detailed examples of how to implement verification IP in the FPGA prototype.
The first example is a PCI bus monitor that watches the PCI bus and logs whether certain types of accesses occur. It may keep track of how many accesses of each type occur as well. For instance, recording the maximum number of data transfers during a burst cycle would be useful for the verification engineer to know. If large bursts do not occur, he can set up a specific test or feed the information to the verification team so they exercise the case in simulation. A sketch of such a monitor follows the figure below.
Figure 2: PCI Bus Monitor Example
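The following synthesizable sketch tracks only the burst-length statistic described above. The PCI signaling is deliberately simplified (a real monitor must decode the full protocol, including DEVSEL# and the various termination cases), and the port names and counter widths are assumptions:

module pci_burst_monitor (
  input  wire       clk,
  input  wire       rst_n,
  input  wire       frame_n,     // active-low: transaction in progress
  input  wire       irdy_n,      // active-low: initiator ready
  input  wire       trdy_n,      // active-low: target ready
  output reg  [7:0] burst_len,   // transfers in the current burst
  output reg  [7:0] max_burst    // largest burst observed so far
);
  wire data_xfer = !irdy_n && !trdy_n;  // one data phase completes

  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      burst_len <= 8'd0;
      max_burst <= 8'd0;
    end else if (frame_n) begin
      // Transaction ended: record the maximum and reset the counter.
      if (burst_len > max_burst)
        max_burst <= burst_len;
      burst_len <= 8'd0;
    end else if (data_xfer) begin
      burst_len <= burst_len + 8'd1;
    end
  end
endmodule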
Another example of a useful FPGA verification technique is a FIFO depth monitor. In communications systems, the depth of the FIFOs is an important parameter when considering system performance. If they are too small, over-runs will occur, resulting in lost payload; if they are too large, silicon area, and therefore money, is wasted. It is even possible to log error conditions such as reading an empty FIFO or writing to a full one.
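A minimal sketch of such a monitor is shown below. It shadows the FIFO's occupancy from its write and read strobes and records a high-water mark plus any overrun or underrun events; the port names and widths are illustrative:

module fifo_depth_monitor #(
  parameter DEPTH_BITS = 10
) (
  input  wire                  clk,
  input  wire                  rst_n,
  input  wire                  wr_en,
  input  wire                  rd_en,
  input  wire                  full,
  input  wire                  empty,
  output reg [DEPTH_BITS-1:0]  max_depth,  // high-water mark
  output reg [7:0]             overruns,   // writes while full
  output reg [7:0]             underruns   // reads while empty
);
  reg [DEPTH_BITS-1:0] depth;

  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      depth     <= 0;
      max_depth <= 0;
      overruns  <= 0;
      underruns <= 0;
    end else begin
      // Track occupancy from the accepted write/read strobes.
      case ({wr_en && !full, rd_en && !empty})
        2'b10:   depth <= depth + 1;
        2'b01:   depth <= depth - 1;
        default: ;                   // both or neither: no net change
      endcase
      if (depth > max_depth) max_depth <= depth;
      if (wr_en && full)     overruns  <= overruns + 1;
      if (rd_en && empty)    underruns <= underruns + 1;
    end
  end
endmodule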
The last example to discuss is a processor real-time monitor. This requires software support but can prove to be a very useful mechanism. The idea is to place a command in the processor's idle routine that increments a hardware register every time the software passes through the idle loop. Obviously, the more time the software spends in the idle loop, the more the register will increment. Another counter, free-running in hardware, is unaffected by the software. When the free-running counter rolls over, the value in the software-incremented register is saved off, probably in memory, or possibly transmitted off the FPGA. A record of these values can be reviewed to determine a history of the processor's idle time. It would be fairly easy to implement an alarm condition if the idle value ever crossed a certain threshold. This can be measured under different traffic loads to give the system design team a very good understanding of how software and hardware interact.
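The hardware side of this monitor might look like the sketch below. The register address, counter widths, and window length are all assumptions; the software side is simply a write to IDLE_TICK_ADDR inside the idle loop:

module idle_monitor (
  input  wire        clk,
  input  wire        rst_n,
  input  wire        sw_write,      // CPU write strobe
  input  wire [7:0]  sw_addr,       // decoded CPU address
  output reg  [15:0] idle_snapshot  // idle count from the last window
);
  localparam IDLE_TICK_ADDR = 8'h10;  // hypothetical register address

  reg [15:0] idle_count;  // incremented by the software idle loop
  reg [23:0] window;      // free-running window counter

  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      idle_count    <= 16'd0;
      window        <= 24'd0;
      idle_snapshot <= 16'd0;
    end else begin
      window <= window + 24'd1;
      if (window == 24'hFFFFFF) begin
        // Window rolled over: snapshot and restart the idle count.
        idle_snapshot <= idle_count;
        idle_count    <= 16'd0;
      end else if (sw_write && sw_addr == IDLE_TICK_ADDR) begin
        idle_count <= idle_count + 16'd1;
      end
    end
  end
endmodule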
All of these techniques eventually require the data to be read out of the FPGA, which can be accomplished in a number of ways. If the system allows, the application can be stopped and the processor itself can read the internal registers and communicate the results through its debug port. This is a good approach because it allows the amount of time the sample is active to be controlled. Another way is to create a port using spare pins on the FPGA. These pins can be connected to a logic analyzer that stores the data as it is streamed out, or a debug port similar to a processor debug port can be constructed. In the latter case, care must be taken to maintain data integrity when data is available but the debug port is not serviced. A third way is to use the FPGA's embedded logic analyzer features to construct a combination of the first two: the counter registers or memory contents can be specified as probe points and hooked into the embedded logic analyzer. This way the user can query the internal registers at any time.
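As one possible shape for the spare-pin approach, the sketch below multiplexes the counters from the previous examples onto a small read-back port; the select encoding and widths are hypothetical:

module debug_readback (
  input  wire        clk,
  input  wire [1:0]  dbg_sel,        // driven from spare input pins
  input  wire [15:0] max_burst,
  input  wire [15:0] max_depth,
  input  wire [15:0] idle_snapshot,
  input  wire [15:0] error_count,
  output reg  [15:0] dbg_out         // routed to spare output pins
);
  // Register the selected value so the external tool samples clean data.
  always @(posedge clk) begin
    case (dbg_sel)
      2'b00: dbg_out <= max_burst;
      2'b01: dbg_out <= max_depth;
      2'b10: dbg_out <= idle_snapshot;
      2'b11: dbg_out <= error_count;
    endcase
  end
endmodule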
Conclusion
FPGA prototyping is an approach that can greatly assist the development team in producing a quality design in a timeframe that reduces the all-important time to market. This paper has sought to address the challenges as well as highlight the benefits of such an approach. Straightforward, easy-to-understand examples were presented to express the approach in a clear and efficient manner.