Transaction-Level Modelling and Debug of SoCs
Kamal Hashmi, SpiraTech, Ltd., UK
Abstract:
System-on-Chip (SoC) designs are becoming increasingly complex. Modelling, verification, and debug facilities at RTL have become quite inadequate in the face of rising design challenges. Transaction-Level Models (TLM), described at the top levels of design and/or extracted from the design implementation, promise not only to speed up verification but also to ease design understanding, evaluation, and analysis, thus alleviating the design burdens at the SoC level.
We present here our research and development efforts in multi-level adaptors and transformers, as well as analysis, visualization, and debug facilities, all of which revolve around TLM and the opportunities it affords for cooperative development among architecture and design teams.
INTRODUCTION
Feature-rich consumer electronics are becoming increasingly common. The shrinking feature sizes of nanometer design processes have enabled designs with many millions of gates. The trade-off, however, has been an exponential escalation in system design complexity. To make matters worse, large design sizes coupled with strict performance constraints have given rise to physically based synthesis and verification flows in which the RTL design is the key fulcrum for design analysis, optimization, and validation.
Figure 1 Typical SoC Design Flow
Figure 1 shows the typical SoC design flow. System-level architects work with application developers to define the target application and the design's functional requirements. From there the flow proceeds top to bottom: high-level modelling and test-plan specification, followed by design implementation and integration at the Register Transfer Level (RTL), and finally realization at the physical level.
RTL is the workhorse of design implementation because tools for design analysis, estimation, and backend realization, such as synthesis from RTL to gates, are quite mature. However, with the RTL abstraction at the centre of design and verification activities, and with characterization metrics such as performance and power consumption evaluated there, engineering productivity lags seriously. Too frequently, RTL is the pivot point for trade-off analysis, verification, and debug, resulting in long and seemingly endless design iterations. This is the so-called design and verification productivity gap, and it is responsible for the lengthening development schedules of SoC designs and the missed market opportunities of their applications.
It has also become quite evident that analysis at the RT level does not scale with the increased complexity of the design. The tremendous level of detail in RTL means exceedingly slow validation and extremely large trace data, making it almost impossible to perform effective analysis and evaluation of the system design. Large size and compound complexity also necessarily mean more corner cases, more latent bugs, and a dip in design quality. What is required is a new layer of abstraction on top of RTL where the system functionality and the design implementation (including estimates from the physical world) come together for trade-off analysis, verification, and debug.
In order to address rising design complexity, and the challenge of managing the increased detail and the ensuing verification slowdown, there has been growing interest in modelling, analyzing, and verifying system-level function and target architecture options at the level of system transactions. Transaction-Level Modelling (TLM) has recently been widely recognized as a useful paradigm for improving modelling efficiency and verification performance [4]. TLM is the best current practical incarnation of the long-sought articulation point for system design and verification: the place at which design specification and implementation, and the design team of architects and designers, can come together to make educated design choices as well as diagnose and correct errors [5]. This level, if connected to RTL [1], has been shown to be a tremendous boon for reducing RTL-based design iterations and validation runtimes. TLM can alleviate the burdens of the productivity gap as it is adopted into the methodology, and as EDA tools that exploit this abstraction become commonly available. We describe here our research and development efforts that leverage the TL abstraction for modelling, analysis, and debug.
TRANSACTION-LEVEL MODELLING AND ITS AUTOMATION
SoC architects typically do their thinking and design at the system functional level. Their main concerns are the operations driven by the software and the interactions among the design components, such as reads from and writes to memory. The problem many software/hardware teams encounter, however, is that the first complete system implementation model is only available once the RTL prototype exists. This means that architects and implementers have no alternative to RTL and must deal with all of its detail, detail that is often irrelevant to the specific concern at hand.
The concept of TLM, therefore, holds great promise as a practical way to raise the abstraction level of design processes. The grouping of event sequences into flexible transactions provides an easier way of viewing and understanding design operation. TLM allows architects and engineers to focus on the essence of the design instead of its signal-level manifestation. This not only achieves a performance boost for simulation, analysis, and debug; it also closes the comprehension gap between specification and implementation.
Transaction modelling actually forms a continuum that stretches from the algorithmic level down to the implementation. The primary modelling layers are: functional or Algorithmic (AL), functional Programmers' View (PV), timed functional (PVT), Cycle Count level (CC), Cycle Accurate (CA), RTL, and finally the gate level. These abstraction levels are shown in Figure 2, together with a visualization of the function and architecture description elements at each level and their refinement from partial orders to sequencing, to timed functional, and finally to bus and cycle accurate.
Figure 2 Transactions at Multiple Abstraction Levels
Transactions can be used to model the design starting at the abstract level, using high-level modelling languages such as SystemC. However, a purely top-down, model-to-implementation flow is not always feasible, nor is it always desirable. In some cases a pure high-level approach may require a total methodology change in which the architects, designers, and the rest of the engineering team move to a new method of design and verification. Moreover, when starting at such high abstraction levels, the Quality of Results (QoR) of the final design output is sometimes suspect, since the links to implementation in this space are not yet mature for designs with large gate counts. High-level modelling is therefore currently most commonly used for early functional evaluation and is then manually converted to RTL for implementation, preferably through incremental refinement in which detail is added gradually in order to control complexity and minimize divergence from design intent.
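To make the contrast with RTL concrete, the plain C++ sketch below illustrates what a transaction at roughly the Programmers' View (PV) level looks like: a whole read or write is a single function call, with no clocks or handshaking. The names (`mem_transaction`, `pv_memory`, `transport`) are purely illustrative and are not taken from SystemC or any particular TLM library; a SystemC PV model would expose essentially the same interface through a module method.

```cpp
#include <cstdint>
#include <map>
#include <iostream>

// Hypothetical PV-level transaction: one function call carries the whole
// transfer; there are no clocks, strobes, or handshake phases.
struct mem_transaction {
    bool     is_write;
    uint32_t address;
    uint32_t data;      // written on a write, filled in on a read
};

// Hypothetical PV-level memory model: executes a transaction atomically.
class pv_memory {
public:
    void transport(mem_transaction& trans) {
        if (trans.is_write)
            storage_[trans.address] = trans.data;
        else
            trans.data = storage_.count(trans.address) ? storage_[trans.address] : 0;
    }
private:
    std::map<uint32_t, uint32_t> storage_;
};

int main() {
    pv_memory mem;
    mem_transaction wr{true, 0x1000, 0xCAFE};
    mem.transport(wr);                         // whole write in one call

    mem_transaction rd{false, 0x1000, 0};
    mem.transport(rd);                         // whole read in one call
    std::cout << std::hex << rd.data << "\n";  // prints cafe
    return 0;
}
```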
The alternative to modelling at the transaction level from the start and refining down towards the implementation is to automate the modelling process starting from the RTL itself. This permits hardware and software designers, as well as verification engineers, to stick to their favourite best-practice methodology and toolsets, that is, to continue to use RTL where it works best: implementation. The final methodology outcome is a "meet-in-the-middle" tactic that leverages the best of both top-down educated flows and bottom-up, component-based (IP library) approaches to design.
Generators and Transactors
In order to achieve a performance boost from these verification techniques and technologies, a flexible model and test-scenario generation capability is desired, permitting the system to be tested at a multitude of abstraction levels [1]. Bi-directional transaction generators and checkers are crucial for rapid system design and architectural trade-off exploration. These facilities automatically perform the following (a minimal sketch appears below):
- transformation of data and control to and from the protocol interface specification,
- translation of the timing and sequencing in the protocol, and
- generation of appropriate checkers for protocol-rule enforcement.
Figure 4 Up and Down Transaction Generator and Checker
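As a rough illustration of the down-conversion and checking duties listed above, the following plain C++ sketch expands an abstract write into the cycle-by-cycle pin activity of a hypothetical two-phase (address then data) bus, and applies a trivially generated protocol check. The bus, its phases, and all names are assumptions made for illustration only; a real transactor for AMBA or PCI would handle many more signals, timing rules, and response cases.

```cpp
#include <cstdint>
#include <vector>
#include <stdexcept>

// Hypothetical pin-level view of one cycle on a simple two-phase
// (address phase, then data phase) write-only bus.
struct bus_cycle {
    bool     valid;       // master drives a transfer this cycle
    bool     addr_phase;  // true: address phase, false: data phase
    uint32_t addr;
    uint32_t wdata;
};

// Down-conversion: expand an abstract write into the cycle sequence that
// an RTL slave would see on its pins.
std::vector<bus_cycle> down_convert_write(uint32_t addr, uint32_t data) {
    return {
        {true,  true,  addr, 0u},    // address phase
        {true,  false, 0u,   data},  // data phase
        {false, false, 0u,   0u},    // idle cycle
    };
}

// Generated checker: enforce the (hypothetical) protocol rule that every
// address phase is immediately followed by a data phase.
void check_protocol(const std::vector<bus_cycle>& cycles) {
    for (size_t i = 0; i < cycles.size(); ++i) {
        if (cycles[i].valid && cycles[i].addr_phase) {
            if (i + 1 >= cycles.size() || !cycles[i + 1].valid || cycles[i + 1].addr_phase)
                throw std::runtime_error("protocol violation: address phase not followed by data phase");
        }
    }
}

int main() {
    auto cycles = down_convert_write(0x2000, 0xBEEF);
    check_protocol(cycles);   // passes for a well-formed sequence
    return 0;
}
```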
TLM automation tools are based on the fact that the transaction information exists within the RTL simulation; it only has to be recognized and abstracted. Transactors are the automation elements that can recognize transactions from the RTL behaviour. We believe the best place to capture transactions is the system bus through which all the system components communicate directly or indirectly.
Transactors are, therefore, typically embedded as bus monitors that recognize and match the signal-level activity on a known, pre-analyzed bus and abstract it into transactions. Buses can be proprietary or standard, such as AMBA, PCI, and the like. Transactors can also be used as adaptors or transformers: they perform down-conversion from abstract transactions to the signal level and up-conversion in the opposite direction. Such adaptors are crucial for enabling a mixed-level transaction and RTL evaluation and validation (e.g., simulation) approach, without which performance would suffer. The adaptors can recognize transactions on the system bus within any validation tool, such as an RTL simulator. The enhanced design and verification flow using transactors, outlining the up and down transformation and adaptation within the system level, is shown in Figure 6. Sitting between the simulator and the analysis, visualization, and debug system, the adaptors recognize and capture the transactions on the system bus, transform the design operation up and/or down as needed, and pass the data to the debug database for analysis and multi-level abstraction correlation.
Figure 6 Abstraction and Refinement in the Design and Verification Flow Using Transactors
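The complementary up-conversion can be pictured as follows: a monitor-style transactor samples the bus every cycle, recognizes the protocol phases, and stitches them back into whole transactions that keep links to the signal-level trace for debug. This is again a minimal plain C++ sketch over the same hypothetical two-phase bus as before; the names and the protocol are illustrative, not those of any actual bus or tool.

```cpp
#include <cstdint>
#include <vector>

// Same hypothetical two-phase bus cycle as in the previous sketch.
struct bus_cycle {
    bool     valid;
    bool     addr_phase;
    uint32_t addr;
    uint32_t wdata;
};

// Recovered abstract transaction, with links back to the cycle trace.
struct write_transaction {
    uint32_t addr;
    uint32_t data;
    unsigned start_cycle;
    unsigned end_cycle;
};

// Up-conversion: walk the cycle trace and stitch address/data phases
// back into whole transactions.
class bus_monitor {
public:
    void sample(unsigned cycle, const bus_cycle& c) {
        if (c.valid && c.addr_phase) {            // transaction begins
            pending_addr_  = c.addr;
            pending_start_ = cycle;
            in_flight_     = true;
        } else if (c.valid && in_flight_) {       // data phase completes it
            recognized_.push_back({pending_addr_, c.wdata, pending_start_, cycle});
            in_flight_ = false;
        }
    }
    const std::vector<write_transaction>& transactions() const { return recognized_; }
private:
    bool     in_flight_     = false;
    uint32_t pending_addr_  = 0;
    unsigned pending_start_ = 0;
    std::vector<write_transaction> recognized_;
};

int main() {
    bus_monitor mon;
    std::vector<bus_cycle> trace = {
        {true, true, 0x2000, 0}, {true, false, 0, 0xBEEF}, {false, false, 0, 0},
    };
    for (unsigned i = 0; i < trace.size(); ++i) mon.sample(i, trace[i]);
    // mon.transactions() now holds one write of 0xBEEF to 0x2000, cycles 0..1.
    return 0;
}
```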
SYSTEM-ON-CHIP DEBUG
SoC debug can be quite challenging given the colossal amount of data involved. It is virtually impossible for engineers to manually trace across a system, from specification to implementation to validation data, and mentally analyze the results. Recent advances in trace-based analysis and in automated, educated error location and diagnosis have made strides in relieving the burdens of RTL debug (see [2] for details). However, there is still a wealth of opportunity in high-level debug of the system [9].
No one technique will succeed in isolation. We therefore propose a coherent, layered debug system that combines all of these elements, as shown in Figure 8. The analysis and debug framework can then be a means through which the team as a whole cooperates, with some members working at different abstraction layers than others, but with all the layers tied together through the refinement (or abstraction) transformation details. The interface between high-level model debug and the implementation layers (RTL for hardware, and C or assembly code along with the operating system for software) is provided by the transaction-level modelling layers.
Figure 8 SoC Debug: From System to Implementation
Transactions, in particular, are ideal for analysis and debug of bus-based communications. The grouping together of event sequences into flexible transactions provides an engineer-friendly way of viewing and understanding design operation. In other words, they provide an abstraction of pin-level activity that is easier to comprehend, analyze, and debug.
Transactions, Verification, and Debug
As we mentioned earlier, enormous databases and a multitude of low-level trace data make verification turnaround (whether simulation or debug) very unsatisfactory given the increasingly short development time and rising system design complexity. Much effort is still spent analyzing and “reverse-engineering” waveforms to abstract their function.
The fact is that hundreds of unfathomable waveforms make it hard to perform meaningful design evaluation and analysis. Transactions, on the other hand, are ideal for analysis and debug: they allow debug to be performed at an abstract yet meaningful level. For example, one can debug a memory read or write transaction, already recognized as such, without having to go down to the signal activity and cross-reference a data book to identify the proper signal handshaking for said read or write operation. In mixed transaction and RTL simulations, adaptors correlate the design activity and can preserve the analysis in trace files for later deep analysis by behaviour debuggers. Transactions enable cross-team collaboration between architects and engineers, and between hardware and software teams. They form a system design level at which many concerns can be analyzed in a fruitful and efficient manner.
Debuggers use transactions as abstract data objects that group many signals, messages, and payloads, together referred to as attributes, allowing the user to identify a communication segment at a glance.
In addition, transactions can be grouped into so-called streams. These pull together similar transactions and build yet another abstraction. For example, a stream can group memory reads and writes into a single "memory interaction" collection.
Transactions can, of course, overlap in time. They can also be composed into, or decomposed from, sub- or nested transactions. Relationships and associations among transactions and signals, e.g. predecessor/successor or parent/child, assist the designer or verifier in analysis and debug. A sample transaction display for the AHB bus is shown in Figure 10; the bus transfers and modes are clearly visible. Mixing transaction-level and signal data in this way enables efficient, concurrent debug of the system and its implementation.
Figure 10 Debug with Transactions for AHB
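One way to picture the debug database behind such a display is sketched below in plain C++: a transaction record carries its kind, begin and end times, a bag of attributes, and links that express parent/child nesting and predecessor/successor ordering, while a stream simply collects related transactions. The structures and field names are hypothetical and for illustration only; they are not the schema of any particular debug tool.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical recorded transaction: a named kind, begin/end times, a bag of
// attributes (the signals, messages, and payload fields grouped under the
// transaction), and links to related transactions.
struct transaction_record {
    int         id;
    std::string kind;                                // e.g. "AHB write", "memory read"
    uint64_t    begin_time, end_time;                // transactions may overlap in time
    std::map<std::string, std::string> attributes;   // e.g. {"addr","0x2000"}, {"resp","OKAY"}
    int         parent = -1;                         // nested (parent/child) relationship
    std::vector<int> predecessors;                   // predecessor/successor relationships
};

// A stream pulls related transactions into one named collection,
// e.g. all memory reads and writes as a single "memory interaction" stream.
struct stream {
    std::string name;
    std::vector<int> transaction_ids;
};

int main() {
    transaction_record burst{0, "AHB burst write", 100, 180, {{"addr", "0x2000"}, {"len", "4"}}};
    transaction_record beat {1, "AHB beat",        100, 120, {{"data", "0xBEEF"}}};
    beat.parent = burst.id;   // the beat is nested inside the burst

    stream mem_stream{"memory interaction", {burst.id, beat.id}};
    (void)mem_stream;
    return 0;
}
```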
Analysis and Debug
Analysis engines also perform protocol verification and record and monitor output results. Architectural trade-off analysis is performed by computing several metrics and profiling the system: for example, bus statistics to assess functional coverage, bus utilization to gauge the speed matching between processor and bus, and memory-access tracking to evaluate the memory architecture.
These built-in analysis engines (see Figure 12), along with new visualization and animation capabilities, can address the unique requirements of the debug target (for example, proprietary and standard on-chip buses such as AMBA, PCI, and PCI-X) by permitting effective and accurate examination of chip-level communication, performance, and power consumption issues.
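The kind of trade-off metrics mentioned above can be computed directly from a recorded transaction trace. The short plain C++ sketch below derives a bus-utilization figure (busy cycles over total cycles) and a per-address memory-access count from a toy trace; the record layout and the numbers are illustrative assumptions, not output from any real analysis engine.

```cpp
#include <cstdint>
#include <map>
#include <vector>
#include <iostream>

// Minimal recorded transaction: kind, address, and the cycles it occupied.
struct txn {
    bool     is_memory_access;
    uint32_t address;
    uint64_t begin_cycle, end_cycle;
};

int main() {
    std::vector<txn> trace = {
        {true, 0x1000, 0, 3}, {true, 0x1004, 10, 13}, {true, 0x1000, 20, 23},
    };
    uint64_t total_cycles = 30;

    // Bus utilization: fraction of cycles during which a transfer was active.
    uint64_t busy = 0;
    for (const auto& t : trace) busy += t.end_cycle - t.begin_cycle + 1;
    std::cout << "bus utilization: " << 100.0 * busy / total_cycles << "%\n";

    // Memory-access tracking: accesses per address, to expose hot spots and
    // help evaluate the memory architecture.
    std::map<uint32_t, unsigned> hits;
    for (const auto& t : trace)
        if (t.is_memory_access) ++hits[t.address];
    for (const auto& [addr, n] : hits)
        std::cout << std::hex << addr << std::dec << ": " << n << " accesses\n";
    return 0;
}
```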
Debug from System to Implementation
Design and debug require rigorous automation and interaction with the engineers to ensure, first and foremost, that they are productive, and that the design operates correctly and can be released to the market with a high degree of confidence. Bus-based communication in target platforms with embedded processors, together with the accompanying software, requires a debugger that cuts across both the traditional abstraction boundaries and the hardware/software implementation boundaries.
As shown in Figure 12, a comprehensive debug system is required for analyzing and evaluating the system design operation using high-level models, such as SystemC [7] or SystemVerilog [8], as well as implementation HDL models. Transaction-level modelling and its analysis engines permit this integration and enable a truly unified framework for system-level debug.
Figure 12 Unified Comprehensive Debug System
CONCLUSION
Solving the challenges of designing large complex SoCs requires an understanding of the many design functions and their inter-relationships at higher levels of abstraction. It also demands an integrated design flow where specification, modelling, and implementation, as well as function and architecture, can come together for fast co-verification, educated trade-off analysis and integrated debug.
The emergence of new mixed-level modelling and debug technologies enables designers to fully embrace the move from RTL to higher abstraction levels, and to do so without disrupting, but rather by enhancing, current successful system design approaches. We believe the Transaction-Level Model (TLM) is an ideal articulation point for system-level design that is finding increasing acceptance in the design as well as the verification and debug communities [9].
Figure 14 Transformation, Adaptation, Visualization, and Debug
Our research and development efforts in SoC design and verification have culminated in multi-level adaptors and transformers as well as analysis, visualization and debug facilities that revolve around the TLM and the opportunities it provides for cooperative development among architects and designers [3][6]. The overall integration is shown in Figure 14. The combined flow and toolset automate the efforts currently performed manually by design and verification engineers.
AUTHORS
Dr. Bassam Tabbara is Architect for R&D at Novas. He received his doctoral degree from Berkeley in 2000. His research interests include Hardware/Software co-design and co-debug of (embedded) systems and IP assembly. He has authored numerous papers and two books on these topics.
Kamal Hashmi is Co-founder and VP of R&D at SpiraTech. He is recognized internationally as an expert in ESL design tools, languages, and interface-based design methodologies. He is a major contributor to the VSI System Level Design working group and the author of numerous papers on system-level design.
REFERENCES
[1] Kamal Hashmi, Chris Jones, "Curing Schizophrenic Tendencies in Multi-Level System Design", DVCon, 2003.
[2] Yu-Chin Hsu, Bassam Tabbara, Yirng-An Chen, Furshing Tsai, “Advanced Techniques for RTL Debugging”, DAC, 2003.
[3] Novas Software, Inc., www.novas.com, 2004.
[4] Sudeep Pasricha, Nikil Dutt, Mohamed Ben-Romdhane, “Extending the Transaction Level Modeling Approach for Fast Communication Architecture Exploration”, DAC, 2004.
[5] Alberto Sangiovanni-Vincentelli et al., "Benefits and Challenges for Platform Based Design", DAC, 2004.
[6] SpiraTech, www.spiratech.com, 2004.
[7] SystemC, www.systemc.org, 2004.
[8] SystemVerilog, www.systemverilog.org, 2004.
[9] Bassam Tabbara, George Bakewell, Dave Kelf, “Challenges and Opportunities for Debug at the System Level: Debugging SystemC Models”, SAME, October 2004.