Breathing life into hardware and software codesign
From theory to practice, this article comes from one who's done it all.

Hardware/software codesign is the goal of every embedded systems designer (well, most of them). To have the hardware and the software both spring forth from the same designer's pen is enough to make any manager glad. As consumers develop an insatiable desire for instant information, embedded systems are here to both satisfy the need and fuel the desire. These systems, from consumer gadgets to safety-critical technology, have permeated our lives almost to the point where we depend on them, directly or indirectly, not just for entertainment but for food, clothing, and shelter. The demand for embedding ever more hardware and software in multifaceted consumer products, coupled with rising design complexity and shrinking time-to-market, is renewing the emphasis on bringing together the various segments of the embedded system design industry. Codesign of hardware and software is back in vogue. At long last, codesign, now known as embedded system-level design, is starting to mature. This article describes the evolution of codesign, what went wrong with early codesign methods, and its revival and great hope: the transaction-level method.

Intro to codesign

Hardware/software codesign involves analysis and trade-offs: analyzing the hardware and software as they work together and discovering what adjustments or trade-offs you need to make to meet your design parameters. For example, any time you debug a software driver on the hardware (or a model of the hardware) and tweak the hardware or the software as a result, that's codesign. To put it more simply, any time you run a compiler you're doing codesign. The compiler massages the software code (to a degree that depends on your optimization flags) to make it fit the fixed processor hardware. In many cases engineers even use hardware hints in the programming language itself (for example, the register keyword in C) to suggest how the optimization, or codesign, should be done.

The field of codesign, however, has long been a victim of its own visionary breadth, torn between idealism and realism. Many practitioners have proclaimed it dead because its ideal of top-down embedded systems design starting from high abstraction layers has proven too unwieldy. The other camp, the codesign realists, says the practice is very much alive as bottom-up intellectual property (IP) component assembly. A version of codesign has now come along that may please both camps. To make sense of it, let's first look at how codesign evolved.

Rise and decline

Codesign rose to prominence in applications where the system control (the interaction of the controller with the RTOS and the peripherals) was typically designed first, forming a shell into which data-path components such as arithmetic logic units, multipliers, and shifters would fit. The software was typically used in such systems to add flexibility through programmability; it wasn't so much application "feature" software as a soft implementation of the required control functions. Industry awareness of the toll exacted by unreliable code and long development times led to academic interest in software modeling and automated synthesis. Academic research camps were polarized according to the application domain: a majority studied control-intensive applications while others worked on data-dominated applications.
Early on, researchers adapted methods for synthesizing hardware logic (for control) and data paths (for data) to design the hardware, the software, and their interfaces simultaneously. The motivating idea was to start from a single system-level specification and automatically generate both the hardware and the software, reducing design time and the time spent evaluating different implementation alternatives. At first, researchers limited their focus to analyzing the trade-offs of low-level hardware/software implementation and to finding the best methods for cosimulation, but complex target architectures made it increasingly hard to adequately analyze and optimize the system at that low hardware/software level. Newer 32-bit microprocessors and a variety of DSPs tailored for different applications came into greater use, and their advanced memory hierarchies with multilevel caches severely limited the accuracy of the abstract models built on function-control abstractions.

A solution at hand

Automated synthesis promises to increase engineering productivity and enhance system quality by rapidly generating the hardware, the software, and the necessary interfacing mechanisms (even a dedicated RTOS optimized for the application), all guided by the system-constraint metrics. Those metrics are primarily performance and size, but also power consumption; a simple partitioning sketch based on such metrics appears below. A typical architecture consisted of a few software partitions, such as one or more microprocessors with an RTOS to manage the multiple software tasks, and multiple hardware partitions. This design flow is shown in Figure 2. System architects would perform their codesign early, at the high level, where the greatest returns from function or architecture changes could be reaped, and then map the representation down to implementation after hardware/software partitioning. Hardware and software engineers would codesign and cosimulate at the implementation level.

Division of labor

Although research thrived on codesign modeling and analysis at the high level, the promise of a practical, highly productive design and verification process did not materialize for the industry at large. Codesign tools weren't widely used for system design because mapping the results of high-level algorithmic codesign onto a realistic architecture implementation remained elusive. Conversely, at the lower hardware/software layers, coverification was king. The gap between the system architects and application feature-software developers on one hand and the implementers of hardware, middleware, and firmware on the other persisted; it even grew wider as design complexity mounted. The high-level modeling just didn't deliver on the links to implementation, and the approaches that did have a path to implementation suffered from poor results. In practice, architects did the high-level modeling and codesign and then handed off the results to the developers, who manually implemented the system assisted by partially automated verification. This gap in the modeling spectrum causes design iterations whenever the specification and the implementation are not consistent. It can also cause miscommunication between software and hardware teams: the software engineers design for an abstract model of the hardware many months before a hardware prototype is available. The best practices at large system houses focused on cosimulation as early as possible, but the codesign process was just not in wide use. The field as a whole seemed doomed to fade out of existence.
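Before turning to how the field revived, the earlier idea of metric-guided partitioning can be made concrete with a minimal, hypothetical C++ sketch (it is not taken from any actual codesign tool). Each task carries estimated latency, area, and power figures for a software and a hardware implementation, and a greedy pass moves a task into hardware only when a weighted cost built from those metrics improves. All task names, numbers, and weights are invented for illustration; real synthesis and partitioning tools use far more sophisticated estimation and search.

    #include <cstdio>
    #include <vector>

    // Hypothetical per-task implementation estimates (all figures invented).
    struct TaskEstimate {
        const char* name;
        double swLatencyUs;   // execution time if left in software
        double hwLatencyUs;   // execution time if moved to a hardware block
        double hwAreaGates;   // silicon cost of the hardware block
        double hwPowerMw;     // extra power drawn by the hardware block
        bool   inHardware;    // current partition assignment
    };

    // Weighted system cost built from the constraint metrics:
    // performance and size first, power as a secondary concern.
    static double systemCost(const std::vector<TaskEstimate>& tasks) {
        double latency = 0, area = 0, power = 0;
        for (const TaskEstimate& t : tasks) {
            latency += t.inHardware ? t.hwLatencyUs : t.swLatencyUs;
            if (t.inHardware) { area += t.hwAreaGates; power += t.hwPowerMw; }
        }
        return 1.0 * latency + 0.01 * area + 0.1 * power;  // illustrative weights
    }

    int main() {
        std::vector<TaskEstimate> tasks = {
            {"fir_filter",   120.0,  8.0,  4000.0, 3.0, false},
            {"protocol_fsm",  15.0,  5.0,  9000.0, 6.0, false},
            {"ui_handler",    30.0, 25.0, 20000.0, 9.0, false},
        };
        // Greedy partitioning: move a task to hardware only if the cost drops.
        for (TaskEstimate& t : tasks) {
            double before = systemCost(tasks);
            t.inHardware = true;
            if (systemCost(tasks) >= before) t.inHardware = false;  // revert
        }
        for (const TaskEstimate& t : tasks)
            std::printf("%-12s -> %s\n", t.name,
                        t.inHardware ? "hardware" : "software");
        return 0;
    }

Even this toy version captures the essential trade-off: accelerating a task in hardware buys performance at the price of area and power, and the constraint metrics decide whether the move is worth it.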
Renewed vigor in codesign

The shortcoming of previous codesign methods is that a lot of work went into behavioral modeling, with the communication aspects hidden within the entrails of the model of computation. Transaction-level modeling (TLM) changes all that. Concurrency is a reality we're just now coming to terms with, and TLM is the vehicle through which we're revisiting these ideas, filling the breach between concurrent hardware and sequential software modeling.

Anatomy of a transaction

A transaction is an abstraction of a single communication event, such as a data transfer between two blocks over a bus. Related transactions are grouped together in streams. In its simplest form, a stream is one signal, and a transaction is a specific value on that signal. More typically a stream is an object that captures the bus transfer type, and transactions on it might be single read, idle, single write, burst transfer, and so on. Streams can have overlapped transactions to model split or concurrent activity. Transactions therefore consist of attributes, such as the subsignals, messages, or other variables that implement the bus handshake. Transactions can also be composed or decomposed to form aggregations and associations among varied transactions in one-to-one or one-to-many relations; such relationships include predecessor-successor, parent-child, and the like. These concepts are shown graphically in Figure 3, which displays a trivial example of a generic bus. Tools such as debuggers typically use such views to present the data in an understandable fashion that abstracts and encapsulates it for codesign and trade-off analysis.

Languages have not been a successful medium in which to analyze trade-offs because different application domains have different concerns and thus dissimilar modeling notations. Hardware/software codesign starting from a single all-encompassing language is just not realistic because systems are heterogeneous entities. System-level design languages such as SystemC and SystemVerilog, along with a host of other hardware-verification languages (such as Temporal e), have come to terms with this realization and identified the TLM abstraction as a suitable bridging notion. TLM is a description of the observable behavior (whether from a specification or from an actual implementation) that can serve as a cross-team conversation and as documentation of what the final behavior should be; in this sense it's a behavior-trace abstraction. High-level analysis techniques can view it as an abstract, (partially) ordered sequence of data and can use model refinement and composition to manipulate this trace abstraction for trade-off analysis. Low-level models can focus on the concrete manifestation of protocol communication, where validation tools can monitor and analyze such metrics as memory access, caching performance, and bus utilization.

Verification is undoubtedly a central part of design. The TLM abstraction is quite efficient for verifying the different component models: any one block can be swapped with a functional (timed or untimed) model or an implementation model (bus-functional model or RTL), as the sketch below illustrates. This capability can provide a lot of speedup in verification techniques such as hardware/software coverification. This is also where traditional hardware/software partitioning can be performed based on performance evaluation, moving a task from one partition to the other. Different cosimulation techniques are shown in Figure 4, with fast evaluation at the top and more accurate but slower evaluation at the bottom.
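As a concrete illustration of both points (transactions as bundles of attributes, and block models that can be swapped at will), here is a small, hypothetical C++ sketch. It uses plain C++ rather than the SystemC or SystemVerilog TLM libraries, and every type and function name is invented: a Transaction carries bus-level attributes, while an untimed functional memory model and a timed variant implement the same transport interface so the surrounding test code can exchange one for the other without change.

    #include <cstdint>
    #include <cstdio>
    #include <map>

    // A transaction: a bundle of attributes abstracting one bus transfer
    // (all names here are illustrative, not a standard API).
    enum class Command { Read, Write, Idle };

    struct Transaction {
        Command  cmd;
        uint32_t address;
        uint32_t data;
        uint64_t timeNs;   // filled in only by timed models
    };

    // Common transport interface: every abstraction level honors this contract,
    // so one model can be swapped for another without touching the test code.
    struct MemoryModel {
        virtual ~MemoryModel() {}
        virtual void transport(Transaction& tr) = 0;
    };

    // Untimed functional model: correct data behavior, no notion of time.
    struct FunctionalMemory : MemoryModel {
        std::map<uint32_t, uint32_t> storage;
        void transport(Transaction& tr) override {
            if (tr.cmd == Command::Write)     storage[tr.address] = tr.data;
            else if (tr.cmd == Command::Read) tr.data = storage[tr.address];
        }
    };

    // Timed model: same behavior plus a crude per-access bus latency.
    struct TimedMemory : MemoryModel {
        FunctionalMemory behavior;
        uint64_t now = 0;
        void transport(Transaction& tr) override {
            behavior.transport(tr);
            now += 10;             // pretend every access costs 10 ns
            tr.timeNs = now;
        }
    };

    // The "testbench" sees only the interface, not the abstraction level.
    static void runTraffic(MemoryModel& mem) {
        Transaction wr{Command::Write, 0x40, 0xCAFE, 0};
        mem.transport(wr);
        Transaction rd{Command::Read, 0x40, 0, 0};
        mem.transport(rd);
        std::printf("read 0x%X at t=%llu ns\n",
                    (unsigned)rd.data, (unsigned long long)rd.timeNs);
    }

    int main() {
        FunctionalMemory fast;    // quick, untimed evaluation
        TimedMemory accurate;     // slower, more detailed evaluation
        runTraffic(fast);
        runTraffic(accurate);
        return 0;
    }

Running the same traffic against the functional model gives a fast answer, while the timed model annotates each transaction with an approximate bus latency; that is the essence of trading simulation speed for accuracy illustrated in Figure 4.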
TLM therefore sidesteps the issue of an overall central modeling language and lets different domains use appropriate modeling constructs. TLM forms a central modeling concept that allows both architects and implementers to quickly explore various functional and architectural trade-offs and alternatives. Indeed, transaction-level modeling stems from representing the required communication among blocks, not from modeling the blocks' behavior or the interface or channel behavior; it's a presentation of the required function (specification) and the current operation (implementation) that focuses on demonstrating proper system operation.

The model itself is a continuum of several TLMs with varying levels of detail. Its three primary sublevels are the programmers' view (PV); the programmers' view with timing (PVT), which typically includes a bus-functional model of the hardware and an instruction-set simulator (ISS) abstraction of the software; and the cycle-accurate or cycle-callable level, which involves a mix of bus-functional and RTL model abstractions, as shown in Figure 5. The multitude of levels reduces the mapping effort from one level to the next and provides for a stepped trade-off analysis in which the models assist the optimization and mapping process in picking an optimal implementation. The model capitalizes on the fact that design is really a "meet-in-the-middle" process: not purely top-down or bottom-up but a mixed up/down process. Automation tools that can abstract the model upward and refine it downward are mushrooming in the context of TLM design and verification.

Codesign nirvana at last

Hardware and software are like ice and water: each has its own distinct characteristics, yet their essence is the same. Codesign enables us to see beyond a particular hardware and software incarnation of an embedded systems design and analyze it at its core. Codesign is no longer the sole purview of large system houses. Cooperative efforts aimed at defining key analysis and trade-off points that run across the hardware and software domains, such as TLM, are breathing renewed life into hardware and software codesign and its practice.

Dr. Bassam Tabbara is architect for research and development at Novas, where he leads the system-level debug and the assertion-based design and verification teams. He has a bachelor's in electrical engineering from UC Riverside and a master's and doctorate from UC Berkeley. His research interests include the optimization, synthesis, verification, debug, and codesign of embedded and hardware/software systems. You can reach him at bassam@novas.com.

Copyright 2005 © CMP Media LLC