Debugging SoC Designs with Transactions
San Jose, CA
ABSTRACT
High-level abstractions such as transaction-level modeling aid high-level design and debug. This paper provides ideas and techniques for deploying and organizing transactions for effective debugging and analysis of large SoC designs.
1. INTRODUCTION
Transactions are used to model high level behavior without details about lower level implementation. For example, two communicating blocks can exchange information without knowing the details of the underlying mechanism – it doesn’t matter what bus standard is employed, or even whether the blocks are actually processes running in software on a distributed computing machine.
Transactions represent temporal system behavior at a level that abstracts away implementation details. They can represent such things as abstracted data transfers and control directives, which can be reasoned about while ignoring those details. You can collect and analyze behavioral and statistical information at the higher abstraction level, and you can use transactions to enable a more efficient and productive debugging environment.
In this paper we first briefly define the notion of a transaction and explore relationships between them. Then we show how those relationships can be exploited for debugging complex systems.
2. TRANSACTION RELATIONSHIPS
A transaction nominally represents a chunk of system behavior that occurs over an interval of time. A transaction object contains a start time, an end time, and a set of attributes, each a simple name/value pair.
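For illustration, such a transaction object might be sketched as follows; the class and field names here are hypothetical, not taken from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    """A chunk of system behavior occurring over an interval of time."""
    name: str
    start: int    # start time, e.g. in simulation time units
    end: int      # end time
    attributes: dict = field(default_factory=dict)  # name/value pairs

    def duration(self) -> int:
        return self.end - self.start

# Example: a read transaction carrying address and data attributes.
rd = Transaction("read", start=100, end=140,
                 attributes={"addr": 0x1000, "data": 0xCAFE})
```

A debug tool can then display, filter, or measure such objects uniformly, regardless of what bus or protocol produced them.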
A sequence of non-overlapping transactions can be used to represent the operations of a finite state automaton as it executes over time.
A typical reactive or concurrent system may require many sequences on the same time axis to properly represent its behavior.
Figure 1 Example sequence of read and write transactions
There are many interesting relationships between transactions that may be captured by designers or verification engineers. Debugging is more successful when the proper relationships between transactions are modeled.
Two of the more common relationships are parent/child and successor/predecessor. Traditionally, children have been modeled as non-overlapping subordinate transactions. For this paper, we expand the definition of child to include both non-overlapping subordinates and overlapping subordinates.
2.1 Non-overlapping subordinate
A strict non-overlapping subordinate child model is useful on communication paths that represent a single point of control. There is usually one path of communication, and one communication control point – so overlapping transactions are not allowed. Figure 2 illustrates an example of a burst read “parent” with non-overlapping subordinate reads. Those subordinate reads may encapsulate lower level operations.
Figure 2 Non-overlapping subordinate
2.2 Overlapping subordinate
Overlapping subordinate children can only occur when there are multiple points of control, for example when there are multiple threads, or multiple finite state machines controlling multiple communication paths. Modern SoC interconnects allow communication to overlap; in fact, these techniques can often improve overall system performance.
Figure 3 Overlapping subordinate
The figure above illustrates an example of a burst read that is implemented by multiple parallel operations. Each of the subordinate read operations resides on a separate sequence.
2.3 Predecessor and Successor relationship
The “predecessor/successor” relationship models cause and effect: transaction t1 causes t2, so t2 is caused by t1. The two relationships are inverses of each other: the successor of t1 is t2, and the predecessor of t2 is t1.
successor(t1) -> t2
predecessor(t2) -> t1
Predecessor/successor is very useful to track cause and effect across interfaces, and across time. A read() operation initiated from block A may eventually cause a read() operation to be executed on block D. The read() on block D may occur many clock cycles after the initial read() from block A. Nonetheless, there is a distinct causality relationship that is useful to recognize.
Figure 4 Abstract communicating transactions across multiple blocks
In the figure above, block A communicates with block B across interface 1; and B with C across 2; and C with D across 3. Successor/predecessor can identify that a request from C to D on interface 3 was actually caused by an original request in functional block A.
The cause of the read() on D can be found by tracing the chain of successor/predecessor relationships back to A.
Figure 5 Abstract communicating transactions across multiple blocks
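As a sketch, assuming each transaction simply stores a link to its predecessor, walking those links backwards recovers the original request on block A (the transaction names here are hypothetical):

```python
class Txn:
    def __init__(self, name, predecessor=None):
        self.name = name
        self.predecessor = predecessor  # the transaction that caused this one

def root_cause(txn):
    """Walk predecessor links back to the original causing transaction."""
    while txn.predecessor is not None:
        txn = txn.predecessor
    return txn

# Hypothetical chain: read on A -> request on B -> request on C -> read on D.
a = Txn("read@A")
b = Txn("req@B", predecessor=a)
c = Txn("req@C", predecessor=b)
d = Txn("read@D", predecessor=c)
```

Calling `root_cause(d)` yields the read on block A, even though many clock cycles and several interfaces separate the two.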
2.4 Tagged Sets
As transactions execute, recognizable collections or patterns are generated over time. Certain patterns are created by the way blocks are interconnected and the way they communicate with each other. These patterns become a signature for correct communication, and can be used as part of the debugging environment to diagnose illegal or unoptimized behavior. We identify such patterns as tagged sets. All the transactions within a tagged set may be selected and operated on as a single entity; they become a meta-transaction, which can be reasoned about and further grouped with other transactions or tagged sets.
When multiple blocks are communicating across multiple interfaces, the patterns formed can be much more complex. Occurrences of a tagged set present a similar “shape”; this shape is the signature of the communication.
Figure 6 Shapes of transaction activity in tagged sets
Shapes of communication can be used to easily identify trouble spots. An elongated shape might indicate a stall or extra delay. Shapes that don’t overlap when they should might indicate incorrect locking behavior.
3. TRANSACTORS
Two communicating blocks may operate at the same or different abstraction levels. Blocks at different levels of abstraction may require a transactor to translate high-level communication into low-level communication; for example, stepping down from “read(addr, &data)” to signaling on “r/w”, “address”, “data” and other control signals.
Figure 7 Transactor – Read data from Block B.
Modeling with multiple abstraction levels
As stepwise refinement progresses through the design and verification cycle, lower level RTL or gate models may be simulated along with the high level models. These models will interface to the high level parts of the system via a transactor.
Using a transactor within a design debug environment can help identify transactions that are mis-modeled – for example excessive wait cycles, or extra clocks between transactions.
Figure 8 High Level function calls with lower level transactors
Figure 9 Stepping “read” down into constituent signaling
A transactor is necessary to manage the detailed timing of the control and data signals that allow information to be exchanged with the lower level abstraction model. Transactor construction can become complicated and is beyond the scope of this discussion.
4. DEBUGGING WITH TRANSACTIONS
In this section, we describe various situations where using transactions and the relationships between transactions can improve the effectiveness and productivity of debug and analysis of SOC designs.
4.1 $display() replacements
In Verilog models, $display() is commonly used to flag special conditions or mark interesting regions of behavior. These $display statements can be replaced by transaction recording calls. Transaction recording calls can be coded to contain all the same information as a $display() call, along with other information like start time, stop time and relationships. This can enhance debugging beyond simple output logging.
By replacing $display with transaction recording, messages can be visualized along with signals and variables.
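As a sketch of the idea, a recording call can capture everything the $display would have printed, plus the timing and attributes a viewer needs; the API name below is hypothetical:

```python
log = []  # recorded transactions, in lieu of a $display log file

def record_transaction(name, start, end, **attrs):
    """Record the same information a $display would print, plus start
    and end times and arbitrary attributes, so a debug tool can show
    the message alongside signals and variables."""
    log.append({"name": name, "start": start, "end": end, **attrs})

# Instead of: $display("read addr=%h data=%h", addr, data);
record_transaction("read", start=100, end=140, addr=0x1000, data=0xCAFE)
```

Nothing is lost relative to the log message, but the record is now time-bounded and machine-queryable.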
Transactions can be used as a simple way to collect attributes together that otherwise would need to be assembled from various lower level bus signals, or by time multiplexed signals. Transaction attributes as shown in Figure 9 are values that are displayed together in a concise way. In a read() transaction, for example, the data and the address are displayed as hexadecimal numbers.
In debug, the outstanding events are visible in relation to all other system activity. When combined with wave display of relevant design signal values, the complete picture is immediately available.
4.2 Parallel operations
Parallel operations are hard to visualize in a typical waveform environment. Modeling parallel operations as transactions on separate threads of control can highlight resource bottlenecks, conflicts and the resulting resource under-utilization.
Figure 10 Parallel operations
In the system described above there are two resources F() and G(), which are shared by THREAD1 and THREAD2. There is a scheduling conflict, which can be seen clearly from the transaction-level view.
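Such a conflict amounts to two threads' uses of the same resource overlapping in time, which is a simple interval check once the activity is recorded as transactions (the intervals below are hypothetical):

```python
def overlaps(t1, t2):
    """True if two (start, end) intervals overlap in time."""
    return t1[0] < t2[1] and t2[0] < t1[1]

# Hypothetical uses of shared resource F() by the two threads:
# THREAD2 requests F() while THREAD1 still holds it.
thread1_F = (10, 30)
thread2_F = (25, 45)

conflict = overlaps(thread1_F, thread2_F)
```

The same check applied across all pairs of uses of a shared resource flags every scheduling conflict automatically.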
4.3 Distant functional requests
SoC designs are built as large functional blocks communicating across multiple interfaces. Examples of such interfaces in use today include IBM CoreConnect [9], AMBA [8] and OCP [7].
The IBM CoreConnect system has a high-speed bus, the Processor Local Bus, and a slower-speed bus, the On-Chip Peripheral Bus. As the system operates, requests made across one interface can cause further requests across other interfaces. For debugging purposes, the “successor/predecessor” relationship can capture this causality, so that, for example, a problem that occurs during a cache flush can be traced back to the software routine that issued the original read request.
Figure 11 IBM CoreConnect Typical Bus interconnect
4.4 Errors in transaction sequences
The order of a sequence of transactions can be important for certain types of models. It’s much easier to understand the functionality of a system by viewing its operation as sequences of transactions rather than by decoding waveforms.
4.5 Dropped or missing transactions
Simple conceptual checks, like counting occurrences of specific sequences of events, are hard without transactions. Using transactions, a record can be made that 15 packets went into an interface, for example, and you can then observe whether the expectation that 15 packets come out the other side is met. If more or fewer than 15 packets come out, an error transaction can be generated.
The expected and actual transaction streams can be placed side by side in a display revealing the exact error in the sequence.
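A sketch of such a count check, emitting an error record (standing in for an error transaction) when the counts disagree:

```python
def check_packet_count(in_pkts, out_pkts):
    """Compare packets in vs. packets out; return an error record
    (standing in for an error transaction) when the counts disagree."""
    if len(in_pkts) != len(out_pkts):
        return {"name": "ERROR", "expected": len(in_pkts),
                "actual": len(out_pkts)}
    return None

# Hypothetical run: 15 packets in, only 14 observed coming out.
err = check_packet_count(in_pkts=list(range(15)), out_pkts=list(range(14)))
```

When the check passes, no error transaction is produced; when it fails, the error record pinpoints the discrepancy.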
4.6 Tracing relationships between requests
Checks between related transactions can guarantee that relationships are enforced – a transaction request can “expect” to receive 4 response “child” transactions. If there are not 4 responses, an error transaction can be generated.
A parent transaction may require N phases to complete, as in a multi-word write to a device that is broken into N single-word writes. In this case, the number of phase transactions is expected to reach the number N. By issuing phase transactions, the correct count is visually revealed and any error states can be recorded exactly where they occur in the sequence of phases.
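The phase-count expectation can be sketched the same way; the parent name and phase labels below are hypothetical:

```python
def check_phases(parent, phases, expected_n):
    """A parent transaction expects exactly expected_n phase (child)
    transactions; return an error record identifying the parent and
    the shortfall or excess, else None."""
    if len(phases) != expected_n:
        return {"name": "ERROR", "parent": parent,
                "expected": expected_n, "actual": len(phases)}
    return None

# A 4-word write broken into single-word writes, with one phase missing.
err = check_phases("write4", phases=["w0", "w1", "w2"], expected_n=4)
```

Recording the error exactly when the final phase fails to arrive places it at the right point in the sequence of phases.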
4.7 Comparison of multiple abstraction levels
Designs modeled at multiple abstraction levels must be verified against each other. With a transaction-level based model, the sequence of transactions generated by each abstraction level can be compared. These sequences of transactions may occur out-of-order with respect to each other, and timing details will be different, but over long periods of time, the same collection of transactions must occur at each abstraction level.
Each level – high-level software models, TLM, RTL and gates can be simulated, and the sequence of transactions generated can help establish consistency, and allow the comparison of behavior between abstractions.
For each model, the collection of transactions must be similar. The timing of the transactions will not be comparable; for example, the SW model may run in zero time, with all transactions occurring with no time delay, TLM models may introduce some timing, and RTL models may carry even more timing detail. For verification, the ordering of transactions in the models should be similar; orderings that differ greatly can themselves yield many insights.
Additionally, with transactions the performance of each model realization can be measured. Using the same analysis tools for all abstraction models, the transaction stream generated by each can be analyzed and measured. Each realization should exhibit similar performance; if not, there may be a problem.
Because the abstraction levels all record their activity in terms of transactions, many common tools can be used at each level. Bus bandwidth, throughput, resource utilization can all be measured and compared at any abstraction level.
The concept of a transaction is independent of the modeling language. As a design is refined and models are re-implemented as succeeding abstraction levels in different languages, it is possible to retain the original transaction output for consistency checks while expressing more detail through new transactions as appropriate.
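An order-insensitive comparison of two models' transaction streams can be sketched as a multiset equality, assuming each stream is first reduced to (name, addr, data) tuples with the timing dropped (the streams below are hypothetical):

```python
from collections import Counter

def same_transactions(stream_a, stream_b):
    """True when both streams contain the same collection of
    transactions, ignoring order and timing."""
    return Counter(stream_a) == Counter(stream_b)

# Hypothetical streams from a TLM model and an RTL model: the RTL
# reorders the transactions but produces the same collection.
tlm = [("read", 0x10, 0xA), ("write", 0x20, 0xB), ("read", 0x30, 0xC)]
rtl = [("write", 0x20, 0xB), ("read", 0x10, 0xA), ("read", 0x30, 0xC)]
```

Over a long run, `same_transactions` holding between every pair of abstraction levels is the consistency condition described above.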
4.8 Functional bugs masked as performance issues
Consider a hardware design with a perceived performance problem: the bandwidth was not as high as expected.
After analysis, the verification engineer determined that the low-level hardware model was dropping a packet when two separate clocks were coincident. This low-level hardware problem (a dropped packet) was reported to upper layers in the system. The higher-level system noted the dropped packet and simply resent it, recovering gracefully from an error but reducing overall system throughput.
With the addition of transactions, the dropped packet would have been visible, either as an error transaction or as a “resend” request. Visual and automatic tools could identify that this unexpected transaction occurred, and further debug could begin.
4.9 Performance Analysis
Analysis of system performance can be augmented with analysis of transactions, including measurement, summation, averaging and exporting to other analysis tools.
With analysis tools, the difficult part of the analysis is identifying the correct collection of transactions. In Figure 12 below, the transactions displayed are all related. One analysis might be “bytes per second for transaction A”. To calculate this value, the bytes transferred in the lowest leaf transactions (transaction “B”) must be summed, and the duration of “A” used to calculate the bandwidth.
Figure 12 Analyzing related transactions
Once a collection of transactions has been identified, various metrics, such as latency and bandwidth, can be applied to it.
Latency can be defined as the delay that exists between the start of an operation and some later desired effect. For a system modeled with transactions, latency can be measured between related transactions, or can be measured as the total length of the related transaction.
Bandwidth and throughput are two related statistics that can be measured by annotating the bytes for each related transaction, summing over all related transactions and dividing by the elapsed time for the related set.
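These two metrics can be sketched directly from the definitions above, taking latency as the total length of the related (parent) transaction and bandwidth as summed leaf bytes over that elapsed time (the burst below is hypothetical):

```python
def latency(parent):
    """Latency measured as the total length of the related transaction."""
    return parent["end"] - parent["start"]

def bandwidth(parent, children):
    """Sum the bytes annotated on each leaf transaction and divide by
    the elapsed time of the related set to get bytes per time unit."""
    total_bytes = sum(c["bytes"] for c in children)
    return total_bytes / latency(parent)

# Hypothetical burst "A" with four leaf reads "B" of 64 bytes each.
A = {"start": 0, "end": 100}
Bs = [{"bytes": 64}, {"bytes": 64}, {"bytes": 64}, {"bytes": 64}]
```

Because the inputs are just recorded transactions, the same calculation applies unchanged at any abstraction level.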
In Figure 13 below, analysis can be performed on each collection of transactions – the same analysis tools can be used despite the fact that one architecture is pipelined and the other is not.
Collections of transactions define interesting operations – like a bus transfer, or a network packet transfer. Transactions are used to identify these collections, and once identified, calculations can be performed.
Figure 13 Pipelined and Non-pipelined operations
Many alternate metrics can be used to view collections of transactions. Some measurements are of direct data – the value of the address or data in a transaction. Other measurements are of derived data – for example bandwidth is measured by dividing the number of bytes processed in a transaction by the transaction duration (end.time – start.time).
5. CONCLUSION
Transactions are useful abstractions that can be used to hide details and complexity, while providing a mechanism to collect and analyze performance and allow other reasoning – including debugging.
Used early in the design process, transaction analysis can allow measurement of performance under alternative architectures.
When relationships such as non-overlapping subordinate, overlapping subordinate and predecessor/successor are properly modeled, transaction debugging is made more efficient.
As transactions execute over time they produce a recognizable “signature” – patterns that are useful for quickly identifying bugs or unoptimized communication.
Taken together, the relationships and transaction signatures can be grouped into “tagged sets”, which can be abstracted further, and considered as a unit. Using tagged sets of transactions is a powerful mechanism for abstracting detail for large SOC communication.
As high level designs are refined from highly abstract down to an RTL or gate implementation, transactions can be used to span the abstractions – so that high level tests and models can be used with lower level models. These issues are beyond the scope of this paper, but when used, the techniques described here for debugging can be applied to them.
Using transactions allows visual or scripted debugging and analysis; increasing productivity and improving effectiveness for the design and verification teams.
6. REFERENCES
[1] Thorsten Grötker, Stan Liao, Grant Martin, Stuart Swan, System Design with SystemC.
[2] Bart Vanthournout, et al. Developing Transaction-level Models in SystemC, www.design-reuse.com/articles/?id=8523
[3] Adam Donlin, Transaction Level Modeling: Flows and Use Models. September 2004 Proceedings of the 2nd IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis
[4] Adam Rose, SystemC/TLM – www.systemc.org, Proposed TLM Standard.
[5] Testbuilder Reference Manual, Product Version 1.3-s8, August 2003, www.testbuilder.net
[6] Cadence Verification Extensions Reference Manual, Product Version 5.0, March 2003, www.testbuilder.net
[7] OCP, www.ocpip.org, Theory of Operation
[8] ARM AMBA Specification, www.arm.com
[9] CoreConnect™ Bus Architecture, www.chips.ibm.com/products/coreconnect
[10] Sudeep Pasricha, et al. Extending the Transaction Level Modeling Approach for Fast Communication Architecture Exploration, DAC 2004