SoC platform needs system approach
By Frank Schirrmeister, Director, Product Management, Co-Design Technology, Cadence Design Systems, San Jose, Calif., EE Times
November 20, 2000 (11:30 a.m. EST)
URL: http://www.eetimes.com/story/OEG20001120S0031
In most application fields, designers of electronic products are coping with significant changes and challenges. The complexity of telecom, multimedia and automotive applications has exploded over the past few years, and design teams are finding it difficult to sustain their productivity. In response, design teams are increasingly adopting system-on-chip (SoC) intellectual property (IP) reuse combined with platform-based design. Several SoC companies have announced platform concepts to manage the design complexity of future electronic systems. Among them are Philips Semiconductors with its Nexperia and Velocity platforms, Texas Instruments with the OMAP platform for third-generation systems and ST Microelectronics with its processor platforms.
We define an SoC integration platform as a high-productivity design environment that specifically targets a product application domain and is based on virtual component (VC) reuse and a mix-and-match design methodology. Another definition of a platform is a family of architectures satisfying a set of architectural constraints that allow the reuse of hardware and software components. A library of VCs, in both hardware and software, is the essential element that enables the configuration of SoC platforms. Designers start with foundation VCs, the central, fixed parts of the platform. Then they add items from the VC library to customize functionality and interfaces as required by the application being developed. Finally, they bring in the differentiating features of the design, the part of the application intended to capture end users. These differentiators represent the system integrator's core know-how, and their implementation details are typically kept as secret as possible.
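As a rough illustration of that layered composition, the sketch below expresses a platform configuration as foundation VCs, library VCs and differentiators. The structure and names are hypothetical, chosen only to make the idea concrete; they do not represent any Cadence data model.

    // Hypothetical sketch: a platform configuration composed of foundation
    // VCs, application-specific library VCs and proprietary differentiators.
    #include <string>
    #include <vector>

    struct VirtualComponent {
        std::string name;        // e.g. "RISC core" or "MPEG-2 decoder"
        bool        isSoftware;  // mapped to software or to hardware
    };

    struct PlatformConfiguration {
        std::vector<VirtualComponent> foundation;      // central, fixed parts
        std::vector<VirtualComponent> libraryOptions;  // added per application
        std::vector<VirtualComponent> differentiators; // integrator's know-how
    };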
The process of configuring SoC platforms goes far beyond simply reconfiguring a processor core to support additional instructions, or configuring a hardware core to support more communication modes. It includes the overall system interaction of functions running on different architectural implementation alternatives.
What configuration options might exist in a typical multimedia SoC integration platform? It will have a RISC processor to run the basic control software. This processor will be connected through a memory interface to external memories, and several peripherals will carry dedicated hardware acceleration functionality and provide the interface to the external world.
For speed, performance or energy consumption reasons, the bus might be partitioned using a bus bridge. Depending on the SoC design functionality, some configurations of the SoC platform might require two or more processors. Crossover bridges may be required between the local buses serving the processors; these can be peripherals talking to two or more buses. Some of the VCs, such as dedicated hardware accelerators, may require very high memory bandwidth. This leads to the introduction of a dedicated memory bus, which may make it necessary to use direct memory access and direct channels between peripheral components.
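The sketch below gathers a few of these structural choices into a single configuration record. The parameter names and the set of options are assumptions made for this illustration only, not a description of any particular platform.

    // Illustrative sketch of multimedia-platform structural options;
    // fields and names are assumptions made for this example only.
    struct BusSegment {
        unsigned widthBits;   // e.g. 32 or 64
        unsigned clockMHz;
    };

    struct SoCTopology {
        unsigned   processorCount;      // one RISC core, or two or more
        BusSegment systemBus;
        bool       useBusBridge;        // partition the bus for speed or power
        bool       useCrossoverBridge;  // peripherals talking to two or more buses
        bool       dedicatedMemoryBus;  // for high-bandwidth accelerators
        bool       useDMA;              // direct channels between peripherals
    };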
Even the small number of considerations mentioned above easily leads to 50 different configurations if we take into account different processor speeds, bus widths, RTOSes, scheduling and so on. Previously, those decisions were based on designer experience, ad hoc techniques and, often, gut feeling. The moment of truth was the actual "big bang" test: connecting the hardware and software implementation models using co-verification tools. To realize at this point that a function implemented in software really should have been implemented in hardware can have serious consequences for a project.
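To see how quickly the option count grows, it is enough to multiply a handful of independent choices. The numbers below are invented for illustration; they are not taken from any specific platform.

    // Illustrative only: a few independent choices multiply quickly.
    #include <iostream>

    int main() {
        const int processorSpeeds    = 3;  // e.g. three clock-speed grades
        const int busWidths          = 2;  // 32 or 64 bits
        const int rtosChoices        = 2;
        const int schedulingPolicies = 2;
        const int partitionings      = 3;  // which functions move to hardware
        std::cout << processorSpeeds * busWidths * rtosChoices
                     * schedulingPolicies * partitionings
                  << " candidate configurations\n";  // prints 72
        return 0;
    }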
Let's look at a real-life example. Three years ago, in an MPEG-2 design, the simulation of a single PAL frame at the RTL level took approximately 12 hours. Since then, increasing simulator efficiency has been paralleled by ever-more complex SoC designs that are now connected to an instruction set simulator (ISS) running 100,000 to 500,000 instructions per second. A possible synchronization problem between video and audio (to be dealt with by a central system control function) occurs only after many frames, say 40, which is at least 480 hours of simulation in the example above.
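The arithmetic behind that figure is straightforward; the fragment below simply restates it under the assumptions given in the example.

    // Back-of-the-envelope restatement of the RTL simulation-time estimate.
    #include <iostream>

    int main() {
        const double hoursPerFrameRTL = 12.0;  // one PAL frame at RTL
        const int    framesToBug      = 40;    // A/V sync problem shows up here
        std::cout << hoursPerFrameRTL * framesToBug
                  << " hours of RTL simulation to reach the bug\n";  // 480
        return 0;
    }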
Right abstraction level
Clearly, the RTL level is not the place to make partitioning and configuration decisions, which have to be based on a reasonable set of real-time data and have to consider more than 50 different configurations. To make these decisions based on simulation results requires models and simulation at the right level of abstraction.
Today, design teams must determine which level of abstraction should be used in simulation and which methodologies, technologies and tools are available to support it. Furthermore, the delivery of an SoC platform poses new challenges requiring the introduction of what we call next-generation design kits. That's because current ASIC design kits focus on gate- and RTL-level implementation only and cannot support early configuration of SoC platforms.
Design teams are now adopting the next level of abstraction as they turn to an approach we call function-architecture co-design.
Early transistor models represented an abstraction of the actual layout on silicon. Once digital designs became too complex for design at that level, clusters of transistors were abstracted into gates and cell libraries. Those, in turn, were superseded in synchronous digital designs by RTL models, which, together with digital synthesis, enabled the description of hardware using a textual representation and silicon compilation.
In all those steps, the process of abstraction went hand in hand with models representing the implementation characteristics of the abstracted elements. Transistor simulation and silicon measurements were used to characterize cell delays. Interconnect delays were characterized in successively more complex wire-load models, guiding synthesis until we reached the era of physically knowledgeable synthesis, when we were no longer trying to predict wire loads but instead combining synthesis with layout itself.
At the cell level, an ASIC design kit contains characterizations of cell delays and wire loads, plus schematic libraries for users of the silicon technology. At the RTL level, the actual textual representations in Verilog or VHDL are added, along with their appropriate synthesis scripts. Furthermore, verification IP, as in the Cadence Testbuilder environment, provides test benches for the modules delivered in a design kit, adding C++ transaction-based testing at the clocked RTL level.
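The fragment below is only a generic illustration of the transaction-based idea: a testbench issues transactions, and a driver turns each one into cycle-level activity on the RTL interface. It does not reproduce the Testbuilder class library; the types and calls are invented for this sketch.

    // Generic illustration of C++ transaction-based testing (not the
    // Testbuilder API): transactions are issued by the testbench, and a
    // driver translates them into cycle-level pin activity.
    #include <cstdint>
    #include <vector>

    struct BusTransaction {
        std::uint32_t address;
        std::uint32_t data;
        bool          isWrite;
    };

    class BusDriver {  // a concrete driver would toggle RTL signals each cycle
    public:
        virtual ~BusDriver() = default;
        virtual void execute(const BusTransaction& t) = 0;
    };

    void runTest(BusDriver& drv) {
        std::vector<BusTransaction> stimulus = {
            {0x1000, 0xCAFEBABE, true},   // write
            {0x1000, 0,          false},  // read back
        };
        for (const auto& t : stimulus)
            drv.execute(t);               // the driver hides the clocking details
    }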
The next logical step has been implemented in the Cadence Virtual Component Co-Design (VCC) Environment. Implementing a function-architecture co-design methodology, VCC lets the user separate the design process into the definition of virtual-component functionality, independent of the actual implementation, and the architecture topology on which those functions run.
With the technology provided in VCC, functions such as a video decoder can be equipped with performance models for a low-cost, low-performance software implementation and models for several hardware implementations, varying in energy consumption, gate count and performance. Clusters of RTL IP blocks are characterized using their function (independent of the implementation) and performance models to represent different implementation options. At this level, the SoC can no longer be analyzed in an isolated manner. The environment and the software running on on-chip processors must be characterized using the same techniques.
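One way to picture such a characterization is as a small table of implementation options per function. The structure and numbers below are hypothetical and serve only to show the kind of data involved; they are not VCC's internal model.

    // Hypothetical characterization of one function (e.g. a video decoder)
    // with several candidate implementations; all figures are invented.
    #include <string>
    #include <vector>

    struct ImplementationModel {
        std::string name;             // "software on RISC", "low-power hardware", ...
        double      throughputFPS;    // achievable frame rate
        double      energyMJPerFrame; // energy per frame, in millijoules
        unsigned    gateCount;        // zero for a pure software mapping
    };

    struct FunctionVC {
        std::string                      function;  // implementation-independent behavior
        std::vector<ImplementationModel> options;   // candidate mappings
    };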
For configuration of the software/hardware partitioning, VCC also provides, along with characterization techniques, software estimation technology that lets design teams profile software using high-level processor models.
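In principle, a first-order software estimate combines annotated instruction counts with a coarse processor model. The function below illustrates only that principle; it is not VCC's estimation algorithm.

    // First-order software timing estimate: instruction count times an
    // average cycles-per-instruction figure, divided by the clock rate.
    struct ProcessorModel {
        double clockMHz;
        double cyclesPerInstruction;  // coarse average for this core
    };

    double estimateSeconds(double instructionCount, const ProcessorModel& proc) {
        return instructionCount * proc.cyclesPerInstruction / (proc.clockMHz * 1e6);
    }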
Besides annotation of performance to IP blocks, VCC offers technology for characterization of the interconnect performance between IP blocks. For this purpose, design teams characterize the performance of architectural components such as the buses, RTOS and memories. Using communication patterns that define the path of a signal or token through the architecture, design teams can analyze the performance impact of different SoC platform configurations. Different bus widths, bus arbiter types, cache and memory hierarchies, and RTOS scheduling are analyzed using performance simulation. This makes it possible to test the system functionality on different target architecture configurations, and it enables high-productivity derivative designs once the approach is adopted and the initial designs are on the market.
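A communication pattern can be pictured as the delays a token accumulates as it crosses buses, arbiters and memories on its way through the architecture. The sketch below is a deliberately simplified illustration of that idea, not the analysis VCC actually performs.

    // Deliberately simplified: sum per-hop delays along the path a token
    // takes through the architecture (bus, arbiter, memory, ...).
    #include <vector>

    struct ArchitectureHop {
        double arbitrationCycles;  // waiting for the bus arbiter
        double transferCycles;     // moving the token across this resource
    };

    double pathLatencyCycles(const std::vector<ArchitectureHop>& path) {
        double total = 0.0;
        for (const auto& hop : path)
            total += hop.arbitrationCycles + hop.transferCycles;
        return total;
    }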
The function-architecture co-design methodology has a significant impact on the way SoC platforms will be delivered to system customers for evaluation and configuration.
For each of the blocks, the SoC provider uses the performance-modeling technologies provided in the Cadence VCC environment to create libraries of reusable blocks that have known, high-fidelity performance estimates and characterizations for different implementation options of the system functionality. For communication effects, users model the system architecture and the impact of bus delays, bus arbiters, RTOSes, caches and memories.
Those characterizations let users assess the impact of different architectural options on the system performance in the VCC environment.
While the configuration of SoC integration platforms at this high level of abstraction is possible as described above, it is equally important to ensure that the configured design can be exported to implementation-level flows. To link the system level seamlessly to implementation, the VCC environment employs technologies for hardware and software design assembly and export, together with export of testbenches and corner cases that must be analyzed using co-verification at the detailed level of hardware/software interaction.
While RTL signoff is becoming a reality, the described approach of SoC platform configuration combined with the methodology of integration platform-based design clearly raises the level of interaction between SoC providers and system houses.
It is conceivable that the industry will embark on a "system-level signoff" during which SoC providers deliver their platforms in a virtual fashion through SoC design kit capabilities as provided by VCC.
After the system house has defined the configuration parameters for the SoC platform, the platform can be either reconfigured through software or manufactured by the SoC provider according to the new requirements.