A Python-Based SoC Validation and Test Environment
Sophia Antipolis, France
Abstract
Validation of today's complex, mixed-signal Systems on Chip (SoC) creates new testbench development challenges. Standard Hardware Description Languages (HDLs) lack the abstraction features needed to achieve good coverage easily. Commercial tools are available to overcome this difficulty, but they are expensive to deploy, have a significant learning curve, and lack support for actual chip testing. In this paper, we present an efficient simulation environment we developed to address the challenges of both validating and testing a mixed-signal RF CMOS transceiver targeting the WLAN market (codename: Eagle).
This environment is based on the free, open-source Python language and relies on SystemC to interface with standard HDL simulators. Overall, this methodology shortened the schedules of both the validation and verification phases and allowed us to deliver a working chip at first tape-out.
I. INTRODUCTION
In order to address the validation of a mixed-signal RF CMOS chip for the 802.11 WLAN market, codenamed Eagle, we developed a simulation environment based on the free, open-source Python language [1]. This methodology allowed us:
- To write interactive test scenarios emulating the behavior of a software driver; compared to classic tests based on linear code, this automatically improves validation coverage,
- To reuse the same Python test scenarios both at the simulation stage and during chip verification in our RF lab,
- To prepare and debug these verification scenarios well before the actual chip came back from the foundry, thus greatly shortening the initial bring-up phase.
Overall, using this flow, we managed to speed up both the validation and verification phases, and to deliver a chip without any critical bugs at the first tape-out.
II. CONTEXT
Eagle, the chip we wanted to validate, is a mixed-signal chip, including both an RF CMOS front-end and digital signal processing [2]. It is an 802.11a/b/g transceiver operating in both the 2.4 GHz and 5 GHz bands. Most of the parameters of the RF sections are programmable through registers. This allows great flexibility and the ability to perform self-calibration loops under software control, which helps to improve yield by compensating for process-dependent variations.
Eagle connects to an 802.11-compliant baseband chip through a digital, high-speed serial interface, which carries the modulated signal to be transmitted over the air, the received down-converted I and Q signals, and the register programming messages [3].
Figure 1: Wild 802.11 WLAN solution
Eagle also performs packet detection (medium sensing), Automatic Gain Control (AGC), and some basic signal processing on the incoming signal (such as down-sampling, residual DC removal, and shape filtering).
Figure 2: Eagle block diagram
III. SIMULATION AND VERIFICATION FRAMEWORK
We had three main goals in mind when developing our simulation environment:
First, in order to improve functional coverage, we needed to write executable scenarios that could run interactively along with the chip RTL simulation. For a SoC integrating at least one CPU, this can be achieved by replacing the CPU's RTL description with an instruction set simulator (ISS) and its associated bus functional model (BFM); the ISS can execute C or assembly code, and thus react in real time to the hardware responses. But this mixed-signal RF chip has no CPU, so this interactive behavior had to be handled entirely by the simulation environment.
Second, we wanted to be able to run the whole test suite on multiple platforms without having to rewrite the tests. This meant supporting several HDL simulators (in order to easily adapt our IP delivery to each customer), but also being able to use the same tests both for chip simulation and, later, for actual chip verification in our RF laboratory. We therefore adopted a layered approach: test-case scripts were written against an interface (API) that was independent of the platform on which they ran. The translation into actual hardware stimuli was performed by lower Python layers, which could be swapped to interface either with an RTL simulator or with the actual hardware.
Third, we wanted to support higher data abstraction levels than those allowed by classic HDL simulators, which led us to an object-oriented (OO) scripting language. For example, we needed to represent a data packet as a single object, not merely as a sequence of bytes. This allowed us to generate packets under constrained randomization: the testbench could automatically generate a set of pseudo-random packets that were still compliant with the 802.11 specification (i.e. with a valid length and CRC). This feature allowed us to automatically generate test patterns falling in corner cases, and thus improved our functional validation coverage.
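As an illustration, a packet object supporting constrained randomization might look like the following sketch (written in present-day Python syntax); the class name, rate set, length bounds, and simplified CRC handling are illustrative placeholders rather than the actual framework code.

    # Illustrative sketch only: names and field ranges are hypothetical,
    # not the actual Eagle framework API.
    import random
    import zlib

    class Packet80211(object):
        """An 802.11 data packet modeled as a single object."""
        def __init__(self, rate=6, payload=b""):
            self.rate = rate
            self.payload = payload

        def randomize(self, min_len=1, max_len=2304):
            # Constrained randomization: length and rate stay within
            # ranges allowed by the 802.11 specification.
            length = random.randint(min_len, max_len)
            self.payload = bytes(random.randrange(256) for _ in range(length))
            self.rate = random.choice([6, 9, 12, 18, 24, 36, 48, 54])

        def to_bytes(self):
            # Append a 32-bit CRC so the generated packet is always well formed.
            crc = zlib.crc32(self.payload) & 0xFFFFFFFF
            return self.payload + crc.to_bytes(4, "little")

A test case can then call randomize() in a loop to produce a stream of legal but unpredictable packets, steering stimuli toward corner cases without hand-writing them.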
We chose to implement this simulation framework using the open-source Python scripting language [1]. It offers all the required features: object-oriented programming, control structures, a wide user community, a large number of available libraries, and a short learning curve. In addition, it is supported under both Linux and Windows, a prerequisite for using our environment both for simulation and for lab testing. The interface with either an RTL simulator or the actual Eagle chip is detailed in a later section.
IV. VALIDATION PHASE
A. Testbench description
To validate Eagle, we used Cadence's Incisive™ platform [4]. It offers a unified simulation environment, supporting the VHDL, Verilog, and SystemC languages in a single simulation kernel. As we will detail in the next section, we used the SystemC [5] support to interface with the Python interpreter.
Figure 3: Eagle Validation Testbench
As shown in Figure 3, the testbench is composed of the following elements:
- Eagle's digital part (RTL coded in VHDL), which is the part we wanted to validate,
- Simplified VHDL models of the analog/RF part: these models were used to check the connectivity of the interface with the digital control part, and also the behavior of all mixed-signal loops (such as the fractional-N sigma-delta PLL used in the synthesizer),
- Stimuli generators: these modules were responsible for feeding models of the incoming RF signals into Eagle,
- Serial interface model: this interface handled all communications with Eagle's digital part, namely register accesses and RX/TX digital sample transfers; in this test environment, it replaces the baseband chip,
- Signal tracers: these modules logged specific signal activity within Eagle, to be compared with golden references at the end of each test-case simulation; most of these references were generated by a Matlab™ implementation of the corresponding algorithm,
- An interface layer with the Python interpreter, written in SystemC.
The whole testbench was controlled by a high-level Python script. As we will detail in a later section, the registers are fully described in an independent XML file. This allowed us to access them by their explicit names, letting the script automatically find the corresponding addresses and field offsets.
As an example, an excerpt of a script performing reception tests is shown in the next figure. Eagle is instantiated as an object that includes high-level functions, each of them managing a specific Eagle module: for example, the doRX() function configures and starts the reception path. This hierarchical description allowed us to write test cases in a very clear manner, making them simple to understand and maintain. In addition, if a module implementation had to be reworked during the design cycle, we only had to change the corresponding Python function, without modifying all the test cases referring to it.
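A scenario of this kind might look like the following sketch; apart from doRX(), the object, register, and helper names are illustrative placeholders rather than the actual framework code.

    # Illustrative reception scenario; only doRX() comes from the text above,
    # every other name is a hypothetical placeholder.
    eagle = Eagle(testbench)                    # top-level chip object

    eagle.reset()
    eagle.write("RX_CTRL.BAND", "5GHz")         # register access by explicit name
    eagle.doRX(channel=36, gain="auto")         # configure and start the reception path

    generator.sendPacket(Packet80211(rate=24))  # inject a stimulus packet

    status = eagle.waitInterrupt("RX_DONE", timeout_us=500)
    print("RSSI =", eagle.read("AGC_STATUS.RSSI"))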
A simulation output example is given in the following figures: the first is a transcript of a reception test, and the second the actual simulator window. They clearly illustrate one benefit of this simulation environment: it is much easier to analyze the transcript, where parameters are printed in a human-readable format, than to monitor the waveforms generated by the simulator.
Figure 4: Testing reception: Python script and simulator output
B. Software architecture
The communication mechanism between the Python interpreter and the RTL simulator uses a shared-memory approach. We relied on the Unix/Linux inter-process communication (IPC) facilities to manage it. Synchronization between Python and the simulator is achieved through semaphores, one for each direction (Python to RTL simulator, and RTL simulator to Python).
One difficulty we encountered is that the RTL simulation runs several concurrent threads (since HDL languages are natively parallel), while, in the version we used, the Python implementation was single-threaded. However, since the test scripts drove the simulation from a high level, the number of concurrent processes to manage was limited (the Eagle serial interface driver, Eagle interrupts, signal generators, and signal analyzers).
Figure 5: Software architecture (validation phase)
On the RTL simulator side, we used the native SystemC support offered by Cadence Incisive™, which uses a common kernel to simulate modules written in VHDL, Verilog, or SystemC. By writing our communication layer in SystemC, we were able to access all RTL simulation signal states as well as the standard Unix shared-memory APIs, and thus avoided the need to write a complex simulator PLI (Programming Language Interface).
All simulated modules exchange data through a common shared structure. Both sides of the simulation are timed by a simulation clock. On the simulator side, a module that wants to exchange data with the other side sets an "updated" flag in this shared memory. At each clock tick, if any "updated" flag is set, synchronization takes place by releasing the semaphores; otherwise nothing happens until the next clock tick.
The synchronization process, which implements the semaphores, is executed in a single SystemC thread. This method reduces the risk of mutual deadlocks and improves simulation performance by grouping synchronization requests, but it cannot cope with truly asynchronous processes.
Figure 6: Synchronization mechanism
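The Python side of this handshake could be written along the following lines. This is only a sketch: the third-party sysv_ipc module is shown as one possible binding to the System V shared-memory and semaphore calls, and the key values and byte layout of the shared structure are assumptions.

    # Sketch of the Python side of the handshake; keys and the layout of the
    # shared structure are illustrative, not the actual implementation.
    import sysv_ipc

    SHM_KEY, PY2SIM_KEY, SIM2PY_KEY = 0x1234, 0x1235, 0x1236

    shm    = sysv_ipc.SharedMemory(SHM_KEY)   # segment created by the SystemC side
    py2sim = sysv_ipc.Semaphore(PY2SIM_KEY)   # released by Python, taken by SystemC
    sim2py = sysv_ipc.Semaphore(SIM2PY_KEY)   # released by SystemC, taken by Python

    def exchange(request_bytes):
        """Post a request and block until the simulator side has answered."""
        shm.write(request_bytes, 0)             # fill the shared structure
        py2sim.release()                        # tell the SystemC layer data is ready
        sim2py.acquire()                        # wait for the next simulation clock tick
        return shm.read(len(request_bytes), 0)  # read back the simulator's response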
On the Python interpreter side, we developed two categories of test cases. Some required only simple linear code: set up Eagle (a sequence of register accesses), launch the signal generators, log Eagle's response, and finally analyze it.
Others, like interrupt-checking scenarios, were more complex and required some parallelism. In that case the sequence was:
- Perform the Eagle setup,
- Set up the signal generators,
- Start an event loop, including an event dispatcher, to simulate parallelism (a minimal sketch is given after this list): here we waited for any event coming from the simulator (such as interrupts and incoming packet detection flags generated by Eagle, or end-of-signal-generation notifications from the generators), as well as "internal" events such as timeouts,
- Then, at the end of the simulation, analyze and report the results.
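Such an event loop might look like the following sketch; the event names, the dispatcher API, and the sim object are illustrative placeholders, not the actual framework code.

    # Minimal event-loop sketch; event names and the sim object are placeholders.
    def run_event_loop(sim, handlers, timeout_ticks):
        for _ in range(timeout_ticks):
            for event in sim.poll_events():        # events posted by the simulator side
                if event.name in handlers:
                    handlers[event.name](event)    # dispatch to the matching handler
                if event.name == "END_OF_TEST":
                    return
            sim.advance_clock()                    # let the RTL side run one more tick
        raise RuntimeError("timeout waiting for END_OF_TEST")

    handlers = {
        "RX_PACKET_DETECTED": lambda e: print("packet detected at", e.time),
        "EAGLE_IRQ":          lambda e: print("interrupt", e.name, "serviced"),
    }
    # run_event_loop(sim, handlers, timeout_ticks=100000)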
V. CHIP VERIFICATION PHASE
The main advantage of our approach is the possibility of reusing most of the Python scripts written during the validation phase directly for Eagle's chip verification in our RF lab. The scripts were able to control both Eagle and all the lab instruments. In addition to eliminating duplicated effort, this allowed us to prepare and test the verification scripts well before the actual chip came back from the foundry, thus eliminating the initial, time-consuming debug phase. And when Eagle was not performing as expected in our lab, we were able to simulate the faulty scenario with full visibility of all internal Eagle signals, thus speeding up the resolution of the issue.
The test setup is built around a Windows PC, which controls:
- An I/Q baseband arbitrary waveform generator, used to generate 802.11a/b data packets,
- A modulator, generating the incoming RF signal at 2.4 GHz or 5 GHz,
- A spectrum analyzer (with an 802.11 demodulation option), to analyze the RF signals generated by Eagle,
- A custom board based on a Xilinx Virtex™ FPGA, designed to emulate the behavior of the baseband part of our 802.11 solution; it communicates with Eagle through its high-speed serial interface, and manages all register accesses as well as the digital I/Q data stream transfers [6].
Figure 7: Eagle Verification setup
The RF instruments are driven over a GPIB bus [7], while our FPGA board is driven through the PC parallel port. Thanks to our layered software approach, only the lower layers of the Python environment had to be changed when the simulated RTL was replaced by the actual hardware (a sketch of this lower layer is given after the list below):
- The interface with the Eagle RTL simulator is replaced by a generic communication layer, which dispatches messages to/from the RF instrumentation or to/from our custom FPGA board,
- The serial interface simulation model is replaced by our custom FPGA board driver, which in turn interfaces with the PC parallel port driver,
- The signal generators that were instantiated in the testbench are replaced by drivers for the actual RF instruments,
- An extra layer is added to implement the GPIB and parallel port drivers.
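A dispatch layer of this kind might look like the following sketch; the class names, the board driver, the GPIB address, and the SCPI commands are hypothetical, and GPIB access is shown through the pyvisa library only as one possible binding, not the one described in the paper.

    # Sketch of the platform-dependent lower layer; all names are illustrative.
    import pyvisa

    class FpgaBoardDriver(object):
        """Placeholder for the parallel-port driver of the custom FPGA board."""
        def write(self, addr, value):
            pass  # the real driver forwards the access over the PC parallel port

    class LabBackend(object):
        """Dispatches high-level requests to the FPGA board or to GPIB instruments."""
        def __init__(self, gpib_address="GPIB0::18::INSTR"):
            self.board = FpgaBoardDriver()
            self.siggen = pyvisa.ResourceManager().open_resource(gpib_address)

        def write_register(self, addr, value):
            # Register traffic goes to Eagle's serial interface via the FPGA board.
            self.board.write(addr, value)

        def send_packet(self, packet):
            # Stimuli now come from real RF instruments instead of simulated generators.
            self.siggen.write("FREQ 5.18GHZ")   # illustrative SCPI commands
            self.siggen.write("OUTP ON")

Because the test-case scripts only see the platform-independent API, swapping this backend in place of the simulator interface leaves the scenarios themselves untouched.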
Figure 8: Software architecture (verification phase). The part shared with validation is highlighted
VI. REGISTER MAP DESCRIPTION
The register map, as defined in the chip specification, is used at several stages across the design cycle:
- It is mapped onto an RTL implementation of the register file, in HDL,
- Registers are extensively accessed to control the chip in all our test cases,
- Finally, it is translated into C header files, included in the low-level software driver running on the baseband chip.
Manual translation across all these design phases is a tedious and error-prone process. To avoid it, we chose a unified representation based on XML [8]. This allowed us to benefit from the many open-source libraries dedicated to XML parsing, editing, and translation.
XML is a universal language well suited to describing tree-like data structures, and a register map falls naturally into this category. With these considerations in mind, the hierarchy we selected is:
- Chip block (such as TX control or RX control),
- Registers,
- Register fields.
Figure 9: View of a register described in XML
Each register field has additional information attached, such as its data type (signed, unsigned), allowable range, and software access rights (read/write, read-only, etc.). This gives our simulation environment enough information to represent the data held in registers in a user-friendly way. Thus, there is no longer any need to manually pre-compute the binary operations needed to access registers (such as error-prone offsets and masks): they are performed automatically by our simulation framework, which also runs on-the-fly consistency checks for overflows and meaningless values.
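The mechanism can be sketched as follows; the XML tag and attribute names are illustrative, not the actual Eagle schema.

    # Sketch of deriving field offsets and masks from the XML description.
    # Tag and attribute names are illustrative, not the actual schema.
    import xml.etree.ElementTree as ET

    class RegisterMap(object):
        def __init__(self, xml_file):
            self.fields = {}
            for reg in ET.parse(xml_file).getroot().iter("register"):
                for field in reg.iter("field"):
                    name = reg.get("name") + "." + field.get("name")
                    self.fields[name] = (int(reg.get("address"), 0),
                                         int(field.get("offset")),
                                         int(field.get("width")))

        def encode(self, name, value, current=0):
            """Insert 'value' into the register word, with an overflow check."""
            addr, offset, width = self.fields[name]
            if value >= (1 << width):
                raise ValueError("value does not fit in field " + name)
            mask = ((1 << width) - 1) << offset
            return addr, (current & ~mask) | (value << offset)

A register write by name then reduces to looking up the field, checking the value against its width, and merging it into the current register contents.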
To ensure consistency across the whole design cycle, we developed automatic translators, which generate, from the single XML source:
- Python code for our simulation framework,
- C header files for the low-level software driver (a sketch of such a generator is given after this list),
- A VHDL implementation of the register map.
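Building on the RegisterMap sketch above, a C header generator can be a short Python function like the one below; the macro naming and output format are assumptions, since the paper does not show the real translators.

    # Illustrative generator producing C #defines from the same XML source;
    # macro names and layout are assumptions, not the actual output format.
    def generate_c_header(regmap, out_file):
        with open(out_file, "w") as f:
            for name, (addr, offset, width) in sorted(regmap.fields.items()):
                macro = name.replace(".", "_").upper()
                f.write("#define %s_ADDR   0x%04X\n" % (macro, addr))
                f.write("#define %s_OFFSET %d\n"     % (macro, offset))
                f.write("#define %s_MASK   0x%08X\n" % (macro, ((1 << width) - 1) << offset))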
VII. CONCLUSION
Using this methodology, we managed to validate a complex, mixed-signal chip and delivered it without major issues at first tape-out. In addition, we greatly accelerated the verification phase by reusing the same test cases and bringing in new debug features. Together with the XML-based register description, we believe these are the two main innovations of our approach. We plan to extend this concept to support the validation of more complex Systems on Chip, embedding multiple CPUs.
VIII. REFERENCES
[1] Wipro-Newlogic, Wild Reference IP Product Overview, Nov. 2005, www.newlogic.com
[2] G. van Rossum, Python Tutorial, Python Software Foundation, Sept. 2005, www.python.org
[3] Cadence Design Systems, Introduction to the Incisive Functional Verification Platform, Jan. 2006, www.cadence.com
[4] Open SystemC Initiative, SystemC 2.0.1 User's Guide, Oct. 2003, www.systemc.org
[5] W3C Consortium, Extensible Markup Language (XML) 1.0, Third Edition, W3C Recommendation, February 2004
[6] C. Bernard, N. Tribie, "A SystemC-HDL Cosimulation Framework", CEA/LETI, Proceedings of the 6th MEDEA Conference, Oct. 2002.