Hardware Solutions to the Challenges of Multimedia IP Functional Verification
by Marcin Rodzik and Adam Bitniok, Evatronix SA
ABSTRACT
This paper discusses the functional verification of IP cores and the problems that arise during their implementation in today’s advanced applications. First, the usual approach to functional verification is presented together with its common difficulties. The next part features, as an illustration of the paper’s thesis, an example of a hardware verification environment that was used to verify the Evatronix JPEG 2000 Encoder multimedia IP core. After a short description of the JPEG 2000 image compression algorithm, the structure of the environment is presented. Then the preparation of test cases is described, as well as the criteria used to determine whether a particular test has passed or failed. Finally, numerical results of the hardware verification experiment are presented, along with comments that conclude the paper.
FUNCTIONAL VERIFICATION
Importance of functional verification
Verification, understood as the assessment of a product’s functionality under all possible conditions, is a crucial part of every development effort. This especially applies to Intellectual Property (IP), which, if verified poorly, can cause serious problems during integration and increase the cost of chip development. Over the years, the importance of IP verification has grown, and many solutions have been developed to ease and accelerate the process.
In this paper, we will discuss functional verification of an IP core delivered as RTL code written in a formal language (usually one of the Hardware Description Languages). The goal of functional verification of an IP core (as distinct from formal verification, timing verification, etc.) is to ensure that the IP core is 100% correct in its functionality. Correct functionality is usually understood as the ability of the IP core to produce the correct response to given stimuli, where the correct response is defined by the design specification and/or an external industry standard. The importance of functional verification thus stems from its role not only in developing a high-quality product (an IP core in this case), but in securing its basic functionality in the first place.
Common approach to functional verification
Preparing a verification specification (as a standalone document or a part of the design specification) is generally a recommended practice when developing IP cores. This piece of documentation should describe the test strategy and the verification environment (including its block diagram). It should list testbench components as well as additional verification tools. The document should also specify what criteria will be used to determine whether the verification process has been completed successfully. Last but not least, the verification specification, or another separate document, should define a list of all the tests used to verify the functionality of the design [1].
Functional verification is usually realized in the form of functional simulation, which is performed in an HDL verification environment by means of software simulation tools, e.g. Mentor Graphics ModelSim or Cadence NCSim. Typically, the verification environment consists of two main parts: the reference software and the testbench. The reference software duplicates the functionality to be implemented in hardware and is used to prepare both the stimulus vectors and the correct responses to those stimuli, while the testbench is used for functional simulation. The testbench instantiates a synthesizable block of the IP core (the Design Under Test - DUT) and at least two verification components: the stimulator, which feeds the DUT with stimulus vectors, and the comparator, which gathers responses to those stimuli and compares them with the results obtained from the reference software. A test case ends with success if the sets of responses coming from the reference software and from the testbench simulation are identical.
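As an illustration of this pass/fail rule, the following minimal Python sketch (hypothetical file names and layout; not part of any specific verification environment) compares a response dumped by the testbench with the response precomputed by the reference software:

```python
# Hypothetical sketch of the response-identity check performed by a comparator:
# the response dumped during simulation is compared byte-for-byte with the
# response precomputed by the reference software.
from pathlib import Path

def test_passed(reference_dump: Path, simulation_dump: Path) -> bool:
    """Return True if the simulated DUT response matches the reference exactly."""
    return reference_dump.read_bytes() == simulation_dump.read_bytes()

if __name__ == "__main__":
    ref = Path("ref/testcase_001/response.bin")   # produced by the reference software
    sim = Path("sim/testcase_001/response.bin")   # dumped by the HDL testbench
    print("PASS" if test_passed(ref, sim) else "FAIL")
```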
Difficulties related to functional verification in multimedia applications
Simulation makes it possible to check whether the designed device will behave correctly already at the early stages of RTL coding, and it remains the key verification technology until all the design work is finished. This is mainly due to another advantage of simulation: the possibility to monitor the values of particular signals in the device and, thanks to this, to debug it extensively. Despite these benefits, functional simulation is a very slow process. Nowadays this serious drawback becomes critical, as design complexity keeps increasing and time-to-market pressure remains strong. Moreover, vast numbers of tests are usually needed to fully cover the functionality of a multimedia IP core and to call the product thoroughly verified. As a result, state-of-the-art multimedia IP cores cannot be simulated in a reasonable timeframe, and other verification methods are necessary.
There is also another problem with the verification of some multimedia applications, such as image or video encoders. Multimedia standards tend to define the decoding rather than the encoding process. They are normative in terms of codestream syntax and decoding procedures only, and they usually do not impose strict requirements on the encoder, so that implementers can experiment with optimizations and make their solutions suit different needs. As a result, such applications cannot be expected to produce one particular codestream for a given set of input data, because many correct compressed representations of the raw data are possible. For that reason, a simple response-identity criterion is not always useful in verification.
Hardware acceleration of functional verification
In the context of verifying today’s emerging applications, hardware acceleration is clearly a must. Commercial solutions have been available on the EDA market for some time; however, they still have limitations. They offer a considerable speed-up and a possibility of limited debugging, because they enable monitoring internal signals of the simulated hardware to a certain extent, but the simulation is usually run at a maximum frequency of a few megahertz, which is far slower than the pace at which a real circuit can work.
Full in-hardware functional verification
Today’s multimedia designs require thorough verification: hundreds of thousands of test cases (or even more) are usually needed to cover different settings and possible input data contexts. Under such circumstances, moving functional verification entirely into hardware is strongly advised in order to run as many test cases as possible and gain the highest level of confidence in the correctness of the design.
The price that has to be paid for the benefits of hardware verification is the impossibility of simultaneous application debugging: verification engineers can see only the output values of the DUT. In case of any error, they need to go back and run a standard functional simulation to determine the cause of the problem. Therefore, hardware verification is an extremely powerful aid in the final phase of the design, used to detect hidden errors which occur very rarely but can jeopardize the whole development effort. It is a kind of brute-force attack, with increased processing power, against bugs which may still exist in the design because not all corner cases can be covered by simulation-based verification in reasonable time.
Below, the environment developed for verification of the Evatronix JPEG 2000 Encoder IP core is presented as an example illustrating the points made above. The ways of dealing with the aforementioned problems and the results obtained are also described.
EXAMPLE - EVATRONIX JPEG 2000 ENCODER IP CORE VERIFICATION
JPEG 2000 still image compression standard
JPEG 2000 is the newest still image compression standard developed by the Joint Photographic Experts Group. Its superiority over previous compression standards results from improved image quality, especially at low bitrates, as well as from additional features: both lossy and lossless compression, progressive transmission by pixel accuracy and by resolution, Region-of-Interest coding, output rate control, and mechanisms increasing robustness to bit errors [2]. The high compression efficiency is achieved mainly through the Discrete Wavelet Transform (DWT) and a sophisticated bit-plane coder. The JPEG 2000 encoding process comprises the following stages [2]:
- Multiple Component Transformation, which converts an RGB image into a 3-component image containing one luma and two chroma components;
- Partitioning components into tiles and the Discrete Wavelet Transform, which is carried out on each tile of each component and uses one of the two defined filters: the reversible 5-3 filter or the irreversible 9-7 filter;
- Scalar quantization of DWT coefficients;
- Formation of bit-planes from groups of quantized coefficients (code‑blocks), and scanning each bit‑plane in three coding passes with possible truncation of some bits (the goal is to limit the output data volume at the cost of loss of some information);
- Entropy coding by means of a context-dependent binary arithmetic coder and the final code‑stream formation.
Each of these stages is configurable through numerous compression parameters (e.g. the number of DWT decomposition levels, the quantization style and the quantization step size), so that the encoding process can be adjusted to fulfill the requirements of a particular application in terms of the desired output bitrate as well as the image quality.
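To make the first of the stages listed above more concrete, the sketch below shows the reversible multiple component transform used on the lossless path of JPEG 2000. It is a plain-Python illustration written for this article, not an excerpt from the Evatronix implementation; integer arithmetic with floor division keeps the transform exactly invertible.

```python
# A minimal sketch of the reversible multiple component transform (RCT) used on
# the lossless path of JPEG 2000: integer-only arithmetic with floor division,
# so the transform can be inverted exactly. Written for illustration; it is not
# an excerpt from the Evatronix RTL or reference software.

def rct_forward(r: int, g: int, b: int):
    y = (r + 2 * g + b) // 4      # luma-like component (floor division)
    u = b - g                     # first chroma-like difference
    v = r - g                     # second chroma-like difference
    return y, u, v

def rct_inverse(y: int, u: int, v: int):
    g = y - (u + v) // 4          # floor division matches the forward transform
    r = v + g
    b = u + g
    return r, g, b

# Reversibility check on a single pixel (arbitrary 8-bit sample values).
assert rct_inverse(*rct_forward(200, 100, 50)) == (200, 100, 50)
```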
Verification environment for the Evatronix JPEG 2000 Encoder IP core
The figure below presents the verification environment for the Evatronix JPEG 2000 Encoder IP core. First, the reference software is run to calculate the expected DUT response, which can later be compared with the simulation results. In fact, not only are the final compressed images compared; the environment also enables comparison of intermediate results at different stages of the compression process. This eases debugging and helps localize an error in the HDL code if a test case fails to produce the expected results. After the simulation, an external software tool examines the obtained results and determines whether the test case has passed or failed.
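The per-stage comparison can be pictured with the following hypothetical sketch: the first stage whose simulated dump differs from the reference dump points to the part of the RTL that needs debugging. The stage names and file layout are illustrative assumptions, not the actual Evatronix tooling.

```python
# Hypothetical sketch of how intermediate dumps can localize a failure: the
# first stage whose simulated output differs from the reference output points
# to the part of the RTL that needs debugging. Stage names and file layout are
# illustrative assumptions, not the actual Evatronix tooling.
from pathlib import Path

STAGES = ["mct", "dwt", "quant", "bitplane", "codestream"]  # assumed dump order

def first_failing_stage(ref_dir: Path, sim_dir: Path):
    for stage in STAGES:
        ref = (ref_dir / f"{stage}.bin").read_bytes()
        sim = (sim_dir / f"{stage}.bin").read_bytes()
        if ref != sim:
            return stage          # earliest divergence localizes the error
    return None                   # all stages match: the test case passed

if __name__ == "__main__":
    failing = first_failing_stage(Path("ref/tc_042"), Path("sim/tc_042"))
    print("PASS" if failing is None else f"FAIL at stage: {failing}")
```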
Generation of test cases
With the help of the reference software, test cases are generated for different input images representing a variety of image data types. The test images include:
- natural images with different characteristics (portraits, landscapes, vibrant-colored pictures, images containing sharp as well as smooth edges)
- images typical for particular applications (maps, medical imagery, etc.)
- artificial images (text, geometric shapes and images with increased bandwidth such as multi‑colored fractals)
For each input image, tests with different values of the compression parameters are generated. Dealing with the full Cartesian product of all the combinations of parameter values is not feasible. For that reason, parameters related to different stages of coding are handled separately. For instance, tests for all the possible numbers of DWT levels (from 0 to 7) and for both DWT filter types (9-7 and 5-3) are generated while the rest of the parameters keep “typical” values. Afterwards, the DWT stage parameters are fixed (e.g. the 5-3 filter with 4 decomposition levels) and the other parameters are varied. During this variation, it is especially important to cover the entire range of applicable values of each compression parameter.
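The sweep strategy can be sketched as follows in plain Python; the parameter names, defaults, and value sets are examples chosen for illustration, not the exact interface of the Evatronix test generator.

```python
# Illustrative sketch of the per-stage parameter sweep described above: instead
# of the full Cartesian product, each coding stage is swept in turn while the
# remaining parameters stay at "typical" defaults. Parameter names and defaults
# are examples, not the actual Evatronix test-generator interface.
from itertools import product

DEFAULTS = {
    "dwt_levels": 4,
    "dwt_filter": "5-3",
    "quant_style": "derived",
    "codeblock_size": 64,
}

STAGE_SWEEPS = {
    "dwt":   {"dwt_levels": range(0, 8), "dwt_filter": ["5-3", "9-7"]},
    "quant": {"quant_style": ["none", "derived", "expounded"]},
    "coder": {"codeblock_size": [16, 32, 64]},
}

def generate_test_cases():
    for stage, sweep in STAGE_SWEEPS.items():
        keys = list(sweep)
        for values in product(*(sweep[k] for k in keys)):
            params = dict(DEFAULTS)           # all other stages keep defaults
            params.update(zip(keys, values))  # sweep only the current stage
            yield stage, params

for stage, params in generate_test_cases():
    print(stage, params)
```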
JPEG 2000 verification criteria
In order to overcome the issue of an undefined circuit response, relaxed verification criteria have been defined in the form of five separate verification modes, each with its own verification metric used to determine the test result. The verification modes are presented in the table below.
The “streams” mode is, in fact, the strictest one. In this mode, a test case is passed only if both codestreams (the one obtained in simulation or in the FPGA and the one obtained with the reference software) are identical. Such codestream identity can be achieved in the majority of cases when testing lossless compression. However, the codestream-equality requirement cannot be met for features whose hardware implementation may not fully match the reference software as a result of FPGA-targeted optimizations. In such situations, relaxed verification criteria should be used. In the “strict”, “slack”, and “lossy” verification modes, the codestream produced by the evaluated encoder is decoded and the reconstructed image is compared against the original one. The test case is considered passed if the calculated metric does not exceed a predefined limit. Metrics such as Peak Absolute Error, Mean Squared Error, and Peak Signal-to-Noise Ratio are used for this purpose. The “strict” and “slack” modes are appropriate for lossless compression, the latter being used when the reconstructed image contains some distortions (usually the result of inexactness during the reconstruction of samples from the DWT coefficients). The “lossy” and “delta” modes are appropriate for lossy compression. In the “delta” mode, the distortions in the compressed image are compared with those introduced by the reference software.
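A minimal sketch of these three metrics is given below (plain Python with NumPy, written for illustration; the per-mode pass/fail thresholds are project-specific and are not reproduced here):

```python
# Sketch of the three quality metrics named above, computed between the original
# image and the image decoded from the codestream under test (NumPy assumed).
# The pass/fail threshold applied to each metric depends on the verification
# mode and is project-specific, so no particular limits are shown here.
import numpy as np

def peak_absolute_error(original: np.ndarray, decoded: np.ndarray) -> int:
    diff = original.astype(np.int64) - decoded.astype(np.int64)
    return int(np.max(np.abs(diff)))

def mean_squared_error(original: np.ndarray, decoded: np.ndarray) -> float:
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, decoded: np.ndarray, max_value: int = 255) -> float:
    mse = mean_squared_error(original, decoded)
    return float("inf") if mse == 0.0 else 10.0 * float(np.log10(max_value ** 2 / mse))
```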
Numerical results of hardware-assisted verification
For the verification of the Evatronix JPEG 2000 Encoder IP core, a hardware verification environment has been built as well. It comprises an FPGA device on an evaluation board and a PC, connected with an Ethernet cable. A simple proprietary UDP-based protocol was used to send the input image data and the compression parameter values to the encoding platform, as well as to download the obtained codestream back to the PC. Dedicated software handles all the verification-related tasks on the computer.
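The following sketch illustrates the PC side of such an exchange in plain Python. It is only an approximation written for this article: the address, the chunking, and the end-of-transfer handling are assumptions, since the actual protocol is proprietary and not described here.

```python
# Heavily simplified sketch of the PC side of a UDP-based exchange with an FPGA
# board: the compression parameters and raw image are sent out, and the
# resulting codestream is read back. The framing (one datagram per chunk,
# timeout as end-of-transfer) is purely illustrative.
import socket

BOARD_ADDR = ("192.168.1.10", 5000)   # assumed address of the evaluation board
CHUNK = 1400                          # keep datagrams below a typical MTU

def run_test_case(params: bytes, image: bytes, timeout: float = 5.0) -> bytes:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(params, BOARD_ADDR)                 # parameter block first
        for offset in range(0, len(image), CHUNK):      # then the raw samples
            sock.sendto(image[offset:offset + CHUNK], BOARD_ADDR)
        codestream = bytearray()
        while True:
            try:
                data, _ = sock.recvfrom(65536)
            except socket.timeout:
                break                                   # assume end of codestream
            codestream.extend(data)
        return bytes(codestream)
    finally:
        sock.close()
```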
The table below presents the verification time results obtained for 6 natural images in three different resolutions, compressed losslessly with typical compression parameter values. The compression time ranges from about 30 ms for the smallest test images to half a second for the biggest ones. Functional simulation of all 6 test cases in ModelSim takes more than 6 hours; on average, simulation takes about 16,000 times longer than the real compression.
Verification of the same test cases takes 59 s in the designed hardware environment, which is an improvement of two orders of magnitude compared with standard simulation. In addition, it is worth highlighting that this value reflects not only the compression time but also the transmission of data to and from the evaluation board (including a large volume of raw image samples). Thanks to a system architecture that enables a form of pipelining, this transmission overhead can be lower when larger sets (more than 30) of test cases with the same compression parameters and input image resolution are processed in a row. In that case, the time needed per test case can be, statistically, about half as long as in the experiment presented here.
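Taking the figures above at face value (and treating the quoted 6 hours as exact), the reported speed-up can be cross-checked directly:

$$\frac{t_{\mathrm{simulation}}}{t_{\mathrm{hardware}}} \approx \frac{6 \times 3600\ \mathrm{s}}{59\ \mathrm{s}} \approx 366,$$

i.e. a factor of a few hundred, consistent with the two orders of magnitude stated above.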
These results can serve only as an example of the achievable speed-up. The time JPEG 2000 compression takes depends to a large extent on the image characteristics, so the numbers can differ considerably from case to case.
CONCLUSION
Today’s advanced HDL designs, especially implementations of multimedia algorithms, require thorough verification. However, simulation, which until now has been the primary technology for functional verification, is becoming the bottleneck of the entire design flow for contemporary, complex IP cores. This trend will not reverse, so a need for new verification methods emerges. In this paper, functional verification based to a significant extent on calculations performed in hardware has been proposed. As shown in a simple experiment, this can bring a significant speed-up of the verification process. Technologies such as simulation acceleration and fully in-hardware functional verification are becoming a necessity when working on implementations of sophisticated contemporary multimedia algorithms.
References
[1] M. Keating, P. Bricaud: Reuse Methodology Manual for System-on-a-Chip Designs, 3rd edition. Kluwer Academic Publishers, 2002.
[2] Ch. Christopoulos, A. Skodras, T. Ebrahimi: The JPEG 2000 Still Image Coding System – An Overview. IEEE Transactions on Consumer Electronics, Vol. 46, No. 4, pp. 1103-1127, November 2000.