Fresh test strategies needed for IP cores
By James Hakewill, Chief Architect, ARC Cores Ltd., Edgware, Middlesex, England, EE Times
June 15, 1999 (12:02 p.m. EST)
URL: http://www.eetimes.com/story/OEG19990615S0023
Embedded developers are now designing devices and systems in a world moving to third-generation intellectual property (IP), or configurable cores. Significantly different methods for test and verification will be required as this new IP becomes more widely used.

Third-generation intellectual property is a term used to describe IP blocks that have been specifically designed to solve the problems faced by ASIC or system-on-chip designers working in a world where hardware-description languages and logic synthesis are routine, and where designs include many pieces of externally sourced IP. Those designers need to narrow the gap between what is possible with fixed-function second-generation IP and what could be created if it were possible to design every part of the system from scratch. Every design has different requirements against which compromises are made, and each uses a different set of tools. A common feature of third-generation IP is the ability to set design parameters through a graphical configuration tool, which creates a design-specific HDL description of the block in question and builds synthesis scripts and testbenches to match.
The challenge for IP suppliers is to build a system that provides the ability to "open the box" of the component so that the user can configure, customize and understand it. They must also ensure that both the component and the larger system can be verified, debugged and manufactured.
Typically, the user puts together a package of reliable EDA tools for each stage of the design: simulation, synthesis, scan insertion, automatic test-pattern generation (ATPG), floor planning and layout. Ideally, a system is created where the intellectual property can merge seamlessly into the user's design and EDA tool flow, and can be treated in the same way as the remainder of the HDL code.
Delivery systems can be created to allow an HDL block to be configured differently for each user. Some blocks, such as the ARC processor, allow the user not only to configure the IP block, but also to extend its function and modify certain internal components.
The IP is intended to merge with the rest of the design and not cause any exceptions or special cases that will disturb the EDA tool flow. It is in the interests of both the customer and the IP vendor to minimize the amount of integration work. Hence it is unacceptable to impose rigid requirements that top-level chip I/Os be multiplexed to allow test access to a block, or to insist that a particular test methodology be followed, when the user is very likely to have a test methodology in place already.
The starting point for synthesized IP is somewhat different from that for a hard macro. When good-quality HDL source is the delivery method for the design, the IP user can have much more confidence that the gate-level and register-transfer-level simulation results will match. The source code is the same; the only differences that can be introduced will come from the synthesis tool. This shortens development time by either avoiding problems entirely or by making them apparent earlier.
For a third-generation IP block to meet these design goals, remaining technology-independent, synthesizable on a wide variety of processes and usable in any EDA flow, it must be defined in very high-quality HDL. The recently published Reuse Methodology Manual describes the factors that need to be considered when creating reusable HDL. This quality HDL helps a great deal in manufacturing test, as well as in synthesis and static timing analysis.
The challenge is to create a methodology that will allow the user to verify that each module in the design is correct, that the system is connected together properly and that any software will run successfully. Chip respins due to design flaws or oversights are unacceptable. Simulate everything!
Just as it is unhelpful to require the design of special structures for manufacturing test, the same applies to functional verification. A scheme is required in which the tests to be performed may be defined independently of the mechanism that applies them, so that a common set of tests can be used in many different system configurations.
If a system contains an embedded processor, this can be used as a powerful testing tool. Tests for the processor itself and the surrounding modules can be defined in assembly language. If unexpected results are received, the test program will terminate with an error code, which will cause the HDL test bench to stop the simulation with an error.
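A minimal sketch of such a self-checking test, written here in C for readability (the tests described are written in assembly language), might look like the following. The status address, the pass/fail codes and the register address are all assumptions, as is the convention that the testbench watches the status word to end the simulation:

    #include <stdint.h>

    /* Assumed memory-mapped status word watched by the HDL testbench:
     * the simulation ends when a nonzero value appears here, and any
     * value other than TEST_PASS is treated as an error. */
    #define STATUS_ADDR  0x0000FFFCu
    #define TEST_PASS    0x00000001u
    #define TEST_FAIL    0x00000002u

    static volatile uint32_t *const status = (volatile uint32_t *)STATUS_ADDR;

    int main(void)
    {
        /* Assumed peripheral register: walk a 1 across it and read back. */
        volatile uint32_t *const reg = (volatile uint32_t *)0x00010000u;

        for (int bit = 0; bit < 32; bit++) {
            uint32_t pattern = 1u << bit;
            *reg = pattern;
            if (*reg != pattern) {
                *status = TEST_FAIL | ((uint32_t)bit << 8); /* report failing bit */
                for (;;) ;                                  /* halt; testbench stops */
            }
        }
        *status = TEST_PASS;                                /* all checks passed */
        for (;;) ;
    }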
In the example system, test code is read into the system's memory model at the start of the simulation. The system has a debug communications module, which provides debugger access to all processor memory and registers. Outside the circuit under test, the host model reads a file that defines memory and register tests to be performed through the debug port and applies them using the appropriate protocol, in this case via a JTAG interface.
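A sketch of the host side of this scheme appears below; both the test-file format and the debug-port primitives are invented for illustration, with a small array standing in for target memory so the fragment runs standalone. Because the test list lives in a file rather than in the testbench, the same list can be replayed through any access mechanism:

    #include <stdio.h>
    #include <string.h>

    /* Assumed debug-port primitives; a real host model would drive these
     * through the JTAG protocol. A small array stands in for target
     * memory so the sketch runs standalone. */
    static unsigned int fake_target[64];
    static void jtag_write(unsigned int addr, unsigned int data)
    {
        fake_target[addr % 64] = data;
    }
    static unsigned int jtag_read(unsigned int addr)
    {
        return fake_target[addr % 64];
    }

    /* Apply a line-oriented test file. The format is invented:
     *   write <hex-addr> <hex-data>
     *   check <hex-addr> <hex-expected>  */
    static int run_test_file(const char *path)
    {
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return -1; }

        char op[16];
        unsigned int addr, data;
        int errors = 0, line = 0;

        while (fscanf(f, "%15s %x %x", op, &addr, &data) == 3) {
            line++;
            if (strcmp(op, "write") == 0) {
                jtag_write(addr, data);
            } else if (strcmp(op, "check") == 0) {
                unsigned int got = jtag_read(addr);
                if (got != data) {
                    printf("line %d: addr 0x%08x read 0x%08x, expected 0x%08x\n",
                           line, addr, got, data);
                    errors++;
                }
            }
        }
        fclose(f);
        return errors;
    }

    int main(void)
    {
        int errors = run_test_file("debug_tests.txt"); /* file name invented */
        return errors != 0;
    }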
Using this scheme for the ARC system, a powerful set of tests for each part of the design has been created and applied, combining assembly code and debug port tests. Different tests have been created for each module that may be included in the user's design, and different configurations are handled through the use of switches in the assembly source, or scripts to build test code. Since tests are defined independently of the testbench, it is possible to use the same tests for many different configurations.
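Conditional assembly driven by command-line defines is the usual form such switches take; a sketch of the mechanism using the C preprocessor follows, with all CFG_ macro names invented:

    #include <stdint.h>

    /* Assumed configuration switches, set on the compiler or assembler
     * command line to match the core as configured, for example:
     *   cc -DCFG_HAS_BARREL_SHIFTER -DCFG_ICACHE_SIZE=8192 test.c  */
    uint32_t shift_test(uint32_t x)
    {
    #ifdef CFG_HAS_BARREL_SHIFTER
        /* Exercise multi-bit shifts only when the option is configured in. */
        return x << 13;
    #else
        /* Core built without the option: use a single-bit shift instead. */
        return x << 1;
    #endif
    }

    #if defined(CFG_ICACHE_SIZE) && (CFG_ICACHE_SIZE > 0)
    /* The cache test is compiled only when an instruction cache is present. */
    void icache_test(void);
    #endif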
It is possible to link the host and memory models to the MetaWare SeeCode debugger through plug-in simulator modules. That allows full debugger access into the HDL simulation of the device, with the ability to view all registers and memory, including user-designed extensions to the processor. Software engineers thus can run fully functional C/C++ code on the system, including printf() and file IO on the debugger host machine. Software development can start using an entirely HDL-based simulation or a mix of HDL and cycle-accurate instruction-set simulation models.
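By way of illustration, a program as ordinary as the following could run unmodified on the HDL simulation, with its console output and the file it creates appearing on the debugger host machine; the file name is arbitrary:

    #include <stdio.h>

    /* Runs on the simulated processor; printf() output and the file this
     * program creates appear on the debugger host machine, not inside
     * the HDL simulation. */
    int main(void)
    {
        FILE *log = fopen("results.log", "w");   /* host-side file; name arbitrary */
        if (!log)
            return 1;

        for (int i = 0; i < 4; i++) {
            printf("iteration %d\n", i);         /* shown in the debugger console */
            fprintf(log, "iteration %d ok\n", i);
        }
        fclose(log);
        return 0;
    }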
The scheme also allows the hardware engineers to ensure that the debug channel is working correctly and that all vital registers required by the software engineers can be viewed in the debugger. It is, of course, possible to run code on HDL simulations without using this tool.
Some modules need additional testing to achieve full test coverage, and standalone testbenches must be created for the module in question. As the configuration tool creates each module, the testbench for that block can also be created or can be arranged to read the same configuration information. That approach is used to verify the function of individual modules, while the test code for the block is run in the system environment to verify the connections to the outside world.
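One plausible arrangement, sketched below as a toy generator, is to derive a single parameters file from the configuration data and have both the module's HDL and its standalone testbench include it, so the two can never disagree; both file formats here are invented:

    #include <stdio.h>

    /* Toy generator: read "name=value" configuration lines and emit a
     * parameters file for both the module's HDL and its testbench to
     * include. Both file formats are invented for illustration. */
    int main(void)
    {
        FILE *cfg = fopen("core.cfg", "r");      /* assumed configuration file */
        FILE *out = fopen("tb_params.vh", "w");  /* assumed output file */
        if (!cfg || !out)
            return 1;

        char name[64];
        long value;
        while (fscanf(cfg, "%63[^=]=%ld\n", name, &value) == 2)
            fprintf(out, "`define %s %ld\n", name, value);

        fclose(cfg);
        fclose(out);
        return 0;
    }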
Final puzzle piece
The final piece of the jigsaw is a set of passive bus monitors. These blocks are located within the hierarchy of the part itself but are hidden from synthesis tools using special metacomments. During RTL simulation, they perform the useful task of reporting activity on particular signals and can generate simulation errors if illegal states appear on the signals being monitored.
The ARC philosophy is that the designer will use the configuration tool to create the processor that most closely matches his requirements, selecting cache sizes, memory systems, and additional CPU and/or DSP instructions and functions to be included. Test code, testbenches, synthesis and simulation scripts are created along with the design data. If further design-specific acceleration is required, then the ARC architecture allows the user to design powerful additional functions and incorporate them into the ARC design by adding extension instructions, registers, condition codes and the like.