The 'what' and 'why' of transaction level modeling
Bryan Bowyer, Mentor Graphics
02/27/2006 9:00 AM EST, EE Times
Advances in both the physical properties of chips and in design tools allow us to fit huge systems into “just a few” square millimeters. The problem is that modeling these systems at the register-transfer level (RTL) is labor-intensive, and simulation runtimes are so long they have become impractical. If this is a problem today, just imagine trying to design, integrate and verify the even more massive systems we will build 10 years from now.
Transaction level models (TLMs) can help with the design, integration and verification issues associated with large, complex systems. TLMs let designers model hardware at a higher level of abstraction, smoothing integration by providing fast simulation and simplifying debugging when the blocks are brought together.
Designers start with a variety of parts at different levels of abstraction, often including algorithmic models written in pure ANSI C++. These models are combined with a detailed specification of how they should be brought together into a system. Then the models are divided among several design teams for implementation into RTL. Other pieces — often the majority of the system — consist of existing blocks reused in the new design.
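As a concrete example of the kind of algorithmic starting point described above, the fragment below is a small, hypothetical FIR filter written in pure ANSI C++. It captures only the arithmetic; nothing in it says how data arrives, how fast the block runs, or what interface it presents.

    #include <cstddef>
    #include <vector>

    // Hypothetical algorithmic model: a simple FIR filter expressed as plain
    // ANSI C++, with no notion of clocks, buses or interface protocols.
    std::vector<double> fir_filter(const std::vector<double>& samples,
                                   const std::vector<double>& coeffs)
    {
        std::vector<double> out(samples.size(), 0.0);
        for (std::size_t n = 0; n < samples.size(); ++n)
            for (std::size_t k = 0; k < coeffs.size() && k <= n; ++k)
                out[n] += coeffs[k] * samples[n - k];
        return out;
    }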
Algorithmic synthesis tools help RTL designers quickly implement new, original content for various blocks. This allows a fast path from a collection of algorithms to a set of verified RTL blocks that need to be integrated. But any errors or misunderstandings in the specifications for the system or for the IP blocks will still lead to a system that doesn’t work.
Transaction level models could be used to simplify integration and testing, but where do the models come from? Attempts to create TLMs manually in SystemC by adding hardware details to the pure ANSI C++ source are often as error-prone and time-consuming as manually writing RTL.
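To make that manual effort concrete, here is a minimal sketch of the kind of wrapper a designer would write by hand around the fir_filter() routine shown earlier. It assumes the Accellera/OSCI SystemC TLM-2.0 blocking-transport interface; the module name, address handling and latency numbers are purely illustrative, not taken from any particular flow.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    // Forward declaration of the algorithmic model from the earlier sketch.
    std::vector<double> fir_filter(const std::vector<double>& samples,
                                   const std::vector<double>& coeffs);

    // Hypothetical hand-written TLM wrapper around fir_filter(). Everything in
    // this module -- the socket, the payload decoding, the latency estimate --
    // is hardware detail the designer adds and maintains by hand.
    struct FirTlm : sc_core::sc_module {
        tlm_utils::simple_target_socket<FirTlm> socket;
        std::vector<double> coeffs;

        SC_CTOR(FirTlm) : socket("socket"), coeffs{0.25, 0.5, 0.25} {
            socket.register_b_transport(this, &FirTlm::b_transport);
        }

        // One blocking call models an entire "filter this buffer" transaction,
        // instead of the cycle-by-cycle signal activity an RTL model would show.
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            double*     data = reinterpret_cast<double*>(trans.get_data_ptr());
            std::size_t n    = trans.get_data_length() / sizeof(double);

            std::vector<double> in(data, data + n);
            std::vector<double> out = fir_filter(in, coeffs);
            std::memcpy(data, out.data(), n * sizeof(double));

            // Hand-estimated latency: one sample per cycle at a 100 MHz clock.
            delay += sc_core::sc_time(10.0 * n, sc_core::SC_NS);
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

Writing and keeping this plumbing in sync with the algorithm is exactly the tedious, error-prone work the following paragraphs argue should be generated rather than maintained by hand.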
While this effort is certainly justified for reusable blocks, someone still has to maintain these models. For the original signal-processing content, however, the best approach is for the algorithmic synthesis tool to simply generate the TLMs as part of the design and verification flow.
An added benefit of this approach is that system modeling and integration can now be used to refine each block in your system. Information gathered during integration is fed back into the algorithmic synthesis flow, allowing blocks to be re-optimized based on the system.