Synthesizable verification IP speeds design cycle
By David Murray, EE Times
April 4, 2003 (11:00 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030331S0061
Today's deep sub-micron process technology allows the creation of an incredible range of System-on-Chip (SoC)-based electronic products. These products, however, are sources of methodology and capability nightmares as product design teams strain to achieve the design mantra of "right first time". This is happening across all aspects of electronic design, but one problem is common to them all: verification. The system-level verification process is the SoC 'right first time' quagmire.
Various verification methodologies have evolved throughout the years that have eased the stress on this most complex part of the product design process. Block-level verification methodologies have matured to the point where they are more predictable, controllable and measurable. System-level verification methodologies, however, are strained by highly efficient SoC design capability: while an SoC can be designed in months, the system-level verification is still the high-risk, high-resource and high-stress area of the overall product design process. Today's system verification requirements are simply to get the system verification done more quickly, more safely and more smoothly. This is the requirement today but we need tomorrow's methodologies and tools to achieve it.
This paper presents an approach and case study in tackling the system-level verification problem head on. Leveraging the latest system-level verification tools of co-simulation and emulation, we utilize a modular synthesizable testbench that maximizes the advantages and minimizes the disadvantages of these methodologies. This encompasses real SoC verification IP that provides us with a reusable and innovative solution to SoC verification stress.
Synthesizable Verification IP
SoC validation is a big problem. Design teams have access to fast-maturing SoC design methodologies giving a certain amount of 'design-ease'. Not only can SoCs be architected relatively quickly, but by using IP blocks, SoC architects can factor in 'just in case' type scenarios: "Let's put in a USB interface just in case". The system soon becomes a spreadsheet of application configurations. In some ways the traditional pin-bound ASIC starts to look like a programmable-logic device, as there can be a multitude of system-level configuration options that require validation.
Validating a system such as this becomes a very complicated task. Sub-blocks may be 100 percent verified, but we need to ensure that they are integrated correctly. Simple connectivity tests can be followed by sub-block integration tests to ensure that each block is connected and alive.
If there are embedded processors in the SoC we need to ensure that the system memory map is validated, or we may require that an embedded operating system is validated. Once all sub-blocks are integrated and embedded processors are functioning correctly, we need to stress the whole system to ensure that the interconnect and system busses are performing correctly. Running actual applications on the system gives a good confidence factor and thus this should be included in the overall validation.
Moving from integration level to application level requires several orders of magnitude more processing power, but for a typical system such as that described above, combining all these levels of system validation is the only way to guarantee that your system works "right first time"!
Figure 1 summarizes these system validation levels.
Fig. 1 System Validation Levels
This shows us what is required for a right-first-time SoC validation. The problem is: how?
Clearly block-level verification methodologies and tools will not work to deliver these requirements. Standard RTL simulators are not capable of going much beyond the integration phase of the validation. Two methodologies, however, have matured to tackle these requirements. These are co-simulation and hardware emulation. While they dramatically boost the system validation process, they also have their own drawbacks.
Co-simulation typically provides a means to run, and more importantly debug, system software on an embedded processor core. This allows software integration to take place before the chip design is concluded. The main benefits of this methodology are:
Hardware emulation is the means of mapping the design onto hardware. This may be programmable-logic based or processor based and may be a dedicated emulation system or a simple FPGA prototyping system.
The main benefits of this methodology are:
Figure 2 shows the capability of different 'simulation' methodologies with respect to our system validation levels.
Fig. 2 Simulation Methodology Capability
Co-simulation and Emulation also have their disadvantages. Although co-simulation can be quicker than standard simulation, it can be too slow to run higher-level software--e.g. an operating system boot, a device driver validation or an application. Also, the co-simulation model of the processor and its system components is often optimized for execution speed, and is therefore an estimation of the functionality: typically a good estimation, but nonetheless an estimation.
Emulation, for all its speed, suffers from several drawbacks. Mapping from your design to an emulator can be fraught with difficulties and initial emulator integration can be difficult. Compared to the productivity gained in co-simulation and despite improvements in recent years, debugging on an emulator is still an arduous task. Also, creating tests to run on the emulator can be difficult and if real speed is required a fully synthesizable testbench is needed.
In some ways these two technologies are complementary: where one fails the other excels and vice versa. It's a battle of observability and speed. If we could bring these two technologies closer then we would benefit in many ways. Imagine if any test that was created in a co-simulation environment could be used immediately to validate your emulation mapping. Imagine creating and debugging a test case in a co-simulation environment and then running it at full speed on an emulator.
If we can combine these technologies in a reusable manner then we are creating true Verification IP and we increase our verification potential by an order of magnitude.
A Proposal
Our verification solution consists of a reusable synthesizable testbench that is used across all different methodologies and tools to facilitate a 'write once - run anywhere' methodology.
A synthesizable testbench produces synthesizable test cases that can be used on any platform including:
The main benefits of a modular synthesizable testbench are:
Of course synthesizable testbenches have their own problems:
Our solution tackles these problems to create the ultimate synthesizable testbench.
Modular Synthesizable Testbench
For SoC validation we use a single testbench architected out of Synthesizable Testbench Modules (STMs) as shown in Figure 3.
Fig. 3 Synthesizable Testbench
All testsuites are memory-based, providing concurrent control and data to the STMs.
STMs give the testbench a modular architecture and indeed the STMs are architected to resolve specific problems inherent in a synthesizable testbench. The whole methodology centers around a synthesizable component called the C3 (Concurrent Controller Core).
Testbenches typically are discarded because their inherent structure is not reusable. With the C3, however, the level of abstraction is raised and this promotes reusability.
The C3 is essentially core Verification IP from which other higher level blocks are easily constructed.
Figure 4 shows the structure of an STM.
Fig. 4 STM Architecture with C3
The STM is a reusable testbench component that is 100 percent synthesizable and can be considered the verification equivalent of a standard design IP Block e.g. UART, I2C, UTOPIA, LCD, PLCP etc.
These STMs are the main components of a plug-and-play testbench which can be quickly constructed from a component library to provide a concurrent, realistic platform for SoC validation or intensive block-level validation.
Each STM contains a C3 that provides a wide range of programmable verification functions. The STM also contains a Peripheral Interface (PI), a custom block which provides the specific STM functionality, e.g. UART. Using this methodology each STM can be independently programmed, a critical feature of an SoC validation environment. A further advantage of this generic approach is that the programming of all STMs is similar.
Synthesizable testbenches can also have non-C3-based STMs such as Flash Memories or SDRAM interfaces.
Concurrent Controller Core (C3)
The C3 is an advanced programmable controller designed specifically for complex verification environments. It provides independent, concurrent control of its host STM within a validation environment. The C3 has a generic instruction set for data path management, Master/Slave mode configuration and synchronization. This instruction set is typically extended to cover STM Specific configuration and control.
The following are the primary features of the C3:
Software tools are used to extend the functionality of the C3 so that new configuration or control mechanisms are seamless to the operator.
Programming the C3
The C3 is controlled through a simple scripting language which is automatically extended as more commands are added for specific STMs. At simulation start these scripts are translated into opcodes and loaded into specific memory locations. An example of a script is as follows:
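A minimal sketch of such a script, assuming a simple keyword-style syntax (uart_cfg, data_size, send_data and stop are the commands named in this article; the configuration parameters and values are illustrative):

    -- UART STM test script (illustrative syntax)
    uart_cfg baud 115200      -- STM-specific command: set UART baud rate (parameter and value assumed)
    uart_cfg parity none      -- STM-specific command: set UART parity (parameter and value assumed)
    data_size 64              -- generic C3 command: size of the data stream
    send_data                 -- generic C3 command: transmit the data stream from memory
    stop                      -- generic C3 command: end of the test sequence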
The 'uart_cfg' commands are specific to a UART STM, and the send_data and stop commands are generic C3 commands, i.e. the send_data command will work on any STM that has transmit capability.
Synthesizing the C3
The C3 is fully synthesizable and, in standard configurations, it is about 3.5K gates. It can be further optimised by reducing the feature set, e.g. removing redundant data paths and data widths.
Figure 5 shows a high-level diagram of how the C3 can be used as part of a Camera interface emulator.
Fig. 5 Sample CAMERA STM Application
In order to test and integrate the camera interface the following is required:
The C3 is used as a data transmitter, using its generic send_data command. The size of the data stream is selected with the C3's generic data_size configuration command.
The Camera PI interfaces to the C3 via a simple state machine. This PI acts as a simple bridge between memory and the Camera Protocol, transmitting memory data containing headers and image data.
Alternatively, the C3 and PI can be extended to have image size and mode etc. as parameters. Once transmission is indicated from the C3, the PI generates header information based on the input parameters. It takes pure Image data from memory and sends this on the camera interface.
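Using the same illustrative syntax as the UART example above, the extended camera STM might be programmed along the following lines; the cam_cfg command and its parameter names are assumptions modelled on uart_cfg, while data_size, send_data and stop are the generic C3 commands:

    -- Camera STM test script (illustrative syntax; cam_cfg and its parameters are assumed)
    cam_cfg image_size 640x480   -- extended-PI parameter: image dimensions
    cam_cfg mode rgb             -- extended-PI parameter: pixel format
    data_size 4096               -- generic C3 command: size of the raw image data in memory
    send_data                    -- generic C3 command: PI generates headers, then streams the image data
    stop                         -- generic C3 command: end of the test sequence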
The Camera Peripheral Interface is less than 500 gates in the first instance and less than 1K gates in the second, so the design of this block is trivial.
Note: These STMs can be re-used in any C3-based testbench, without any change.
System Level Stress Testing
STMs can be programmed to send or receive:
A stress-test scenario can be set up within a co-simulation environment and later run on an emulation environment. A typical stress test, as illustrated in Figure 6, includes setting up self-checking point-to-point data transactions that run continuously in the background. This is implemented by generating pseudo-random data from source C3s and checking the data stream with target C3s. Source and target require the same seed, together with some software routines in the core processor to transfer data from one peripheral to the other, e.g. from UWIRE to SPI. This type of transaction can be seen as continuous self-checking background noise.
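A sketch of how one such background transaction might be programmed, using the same illustrative syntax as the earlier examples (the seed, send_random and check_random command names are assumptions; only the requirement that source and target share the same seed comes from the description above):

    -- Background stress-test setup (illustrative syntax and command names)
    -- Source STM, e.g. the UWIRE STM:
    seed 0xACE1                  -- pseudo-random seed; must match the target STM
    send_random                  -- transmit a continuous pseudo-random data stream
    -- Target STM, e.g. the SPI STM:
    seed 0xACE1                  -- same seed as the source STM
    check_random                 -- check the received stream against the expected sequence

The software routines on the core processor that move the data from one peripheral to the other are set up separately from these STM scripts.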
Fig. 6 Stress Testing with a C3-based testbench
While these self-checking transactions are running in the background, real application test cases can be implemented e.g. taking raw data from the CAMERA, processing it and transmitting it to the LCD. This provides an efficient way to stress system interconnect while directed system behaviour is validated.
At this point I would like to summarize a pair of actual designs in which this methodology has been used successfully.
Case Study: Multimedia Platform Validation
Device: Dual Processor based SoC
Testbench: 15 STMs
Technologies: Co-simulation + HW Emulation
Highlights:
Case Study: 802.11a Platform Validation
Device: Internal IP core
Testbench: 4 STMs
Technologies: Matlab modelling, RTL simulation and FPGA prototyping.
Fig. 7 802.11a Emulation Environment
Figure 7 shows an 802.11a emulation environment. Its highlights include:
General Application
The C3-based testbench methodology has a wide range of applications. The C3's configurable and programmable nature means that it is highly adaptable. The following is a list of application areas for the C3:
Because the C3-based methodology is derived from VHDL, it can be used on any platform. It can be simulated, co-simulated, accelerated, emulated or prototyped.
The Future
The C3-based synthesizable testbench is enabling a 'virtual prototyping' type methodology. This means that if we have access to programmable logic, be it a custom emulation board with FPGAs or an off-the-shelf emulation system, then we can compile our complete design + testbench + test cases onto this and run real system tests. Alternatively, we can combine this methodology to boost our current prototyping environment by connecting the SoC to devices which do not exist on the prototyping board. Either way the modular synthesizable testbench has very far-reaching and very positive implications for SoC verification.
Figure 8 shows that any investment in the reusable testbench will ensure that future verification projects take a fraction of the development time instead of being the primary development effort.
Fig. 8 C3-Based System Verification
In conclusion, the C3-based Synthesizable Testbench methodology has many benefits. It provides a completely reusable testbench architecture which can be reused both across different methodologies and across different projects.
We have been able to write test suites within a co-simulation environment that have ported automatically to a target emulator. The validation team using the C3-based testbench has developed its own modules on dramatically reduced schedules. Software integration teams have a simple and generic view of how to program the testbench. As the testbench is modular, any STM developed can be reused instantly within another testbench environment.
The approach is standardized, the benefits are huge, the learning curve is shallow and reusability is ensured, leading to shorter SoC validation schedules and reduced overall verification stress.
David Murray is Systems Verification Specialist at design services vendor Duolog Technologies (Dublin, Ireland).