Co-Design for SOCs -> System models replace hard prototypes
By Graham Hellestrand, Chief Executive Officer, Vast Systems Technology Inc., Santa Clara, Calif., EE Times
June 15, 1999 (11:57 a.m. EST)
URL: http://www.eetimes.com/story/OEG19990615S0019
The advent of the virtual processor model (VPM) has led to dramatic improvements in the capabilities of modeling systems. In a future dominated by systems, there will be no room for the wasteful, inefficient and costly trial-and-error product engineering methods of physical prototyping. Those techniques must be replaced by rapid and accurate system modeling, which offers engineers the alternative of building soft prototypes as direct substitutes for physical components. Soft prototypes, in turn, allow engineers to use accurate models to develop, build and test complex systems. The result is a dramatic improvement in such critical factors as time-to-market, quality, engineering flexibility and component choice. The best systems-modeling approach should enable the deployment of soft-system prototypes that support a concurrent-engineering design flow, in which hardware and software systems are designed simultaneously. But an effective model of the processor in a soft-system prototype has been conspicuously absent from traditional modeling approaches.
Because it sits between the software and the hardware portions of the design and mediates communication across that boundary, the processor model largely determines the performance, accuracy, flexibility, adaptability and utility of the soft-system prototype. It is important, then, to consider the merits of the four processor-modeling approaches: hardware description, instruction-set interpretation, host software execution and architectural mapping.
The earliest processor-modeling technique employed a hardware description language (HDL) to describe internal processor details. The most significant advantage of the hardware-modeling approach is the timing accuracy of those internal details. The technique embodies levels of abstraction at which internal detail can be traded for simulation performance. However, the tie to event-driven/cycle-based simulators makes the model's execution very slow. The low speed limits the technique to the verification of some aspects of new processor architectures.
One interesting facet of this approach is that it integrates easily with hardware simulation, since the same process is used to model hardware. But software engineers and system architects do not typically employ such modeling techniques.
Instruction-set simulation (ISS) models the behavior of processors, rather than the internal details of their operation. To simulate a system involving hardware and software, a sophisticated linkage needs to be made between the ISS and the hardware-simulation environment. That interface limits the interactions between the hardware and software parts of the system model. In addition, the time bases of the hardware and software simulators are not the same.
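The heart of an ISS is an interpreter loop that fetches, decodes and executes target instructions one at a time, calling out to the hardware-simulation environment whenever an instruction touches the bus. The C++ sketch below illustrates that structure only; the three-instruction ISA and the HardwareSim linkage are invented for this example and do not correspond to any particular product.

```cpp
#include <cstdint>
#include <vector>

// Stand-in for the linkage to the hardware-simulation environment. In a real
// cosimulation, each call would synchronize with an event-driven/cycle-based
// HDL simulator; that synchronization is the source of the slowdown.
struct HardwareSim {
    uint32_t mem[1024] = {};  // placeholder for devices behind the bus
    uint32_t busRead(uint32_t addr) { return mem[addr % 1024]; }
    void busWrite(uint32_t addr, uint32_t v) { mem[addr % 1024] = v; }
};

// A toy three-instruction ISA, invented purely for illustration.
enum Opcode : uint8_t { ADD, LOAD, STORE };
struct Insn { Opcode op; uint8_t rd, rs; uint32_t addr; };

struct Iss {
    uint32_t regs[16] = {};
    HardwareSim* hw = nullptr;

    void run(const std::vector<Insn>& program) {
        for (const Insn& i : program) {           // fetch and decode
            switch (i.op) {                       // execute behaviorally
            case ADD:   regs[i.rd] += regs[i.rs];          break;
            case LOAD:  regs[i.rd] = hw->busRead(i.addr);  break;  // slow path:
            case STORE: hw->busWrite(i.addr, regs[i.rs]);  break;  // hardware model
            }
        }
    }
};

int main() {
    HardwareSim hw;
    Iss iss;
    iss.hw = &hw;
    iss.run({ {ADD, 0, 1, 0}, {STORE, 0, 0, 0x40}, {LOAD, 2, 0, 0x40} });
    return 0;
}
```

Note that ADD never leaves the interpreter, while LOAD and STORE must cross into the hardware simulator; the proportion of such bus-visible instructions largely determines how slowly a given workload simulates.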
ISSes are useful for hardware engineers writing and verifying device drivers and can be used by software engineers in much the same way as an in-circuit emulator, albeit with considerably less performance. ISSes execute between 2,000 and 500,000 assembly/object-level instructions per second, the lower figure when heavily interacting with the hardware model. For software engineers developing applications to run on embedded systems that execute hundreds of millions of instructions/s, the constraint is severe: at 100 million instructions/s, 60 seconds of software executing on the real target system represents 6 billion instructions, which will take between 12,000 seconds (just over 3 hours) and 3 million seconds (about 35 days) to model. Neither delay is acceptable to software engineers.
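The arithmetic behind those figures is easy to reproduce. A minimal check, assuming a 100-million-instructions/s target (the assumption consistent with the 12,000-second case above):

```cpp
#include <cstdio>

int main() {
    const double target_ips = 100e6;            // assumed target speed
    const double workload = 60.0 * target_ips;  // 60 s of target execution
    const double iss_slow = 2e3;  // ISS heavily coupled to the hardware model
    const double iss_fast = 5e5;  // ISS running largely standalone

    std::printf("worst case: %.0f s (about %.0f days)\n",
                workload / iss_slow, workload / iss_slow / 86400.0);
    std::printf("best case:  %.0f s (about %.1f hours)\n",
                workload / iss_fast, workload / iss_fast / 3600.0);
    return 0;
}
```

The output, 3,000,000 seconds at the slow end and 12,000 seconds at the fast end, is the gap software engineers face.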
As with all processor models, an ISS may model architectural elements such as cache, pipeline, pipeline hazards and virtual memory, but performance is degraded considerably by the incorporation of those elements. ISS modeling clearly creates a software execution "choke point" in the hardware/software cosimulation process.
The third processor-modeling technique, called "host software execution" (HSE), is perhaps the least attractive of the alternatives. The HSE approach takes the high-level-language (HLL) code that was written for the target processor and simply executes it on the host computer instead. Though easy to implement, this solution is inadequate. First, because the software is not executing on the target processor, the model can mimic only gross function and cannot provide timing for the simulation. Nor can it model the target pipeline, cache, virtual memory or primary memory.
However, such processor models can be made to communicate with hardware-system elements through bus-functional models described in an HDL. When the bus-functional model is executing, the transactions it performs are timing-accurate. The advantage of the HSE approach is that software executes at host-processor speeds, perhaps 250 million instructions/s. But since no timing or architectural-element modeling (cache and virtual memory) is possible, the technique is unacceptable as a systems-engineering tool.
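One common way to build such a linkage is to recompile the target HLL code for the host, with device accesses redirected through the bus-functional model. The sketch below is illustrative only; the macro scheme, the BusFunctionalModel interface and the UART register layout are assumptions made for this example, not a description of any particular tool.

```cpp
#include <cstdint>

// Illustrative stand-in for an HDL bus-functional model. In a real flow each
// call would drive a timing-accurate bus transaction in the hardware
// simulator; two fake registers keep this sketch self-contained.
struct BusFunctionalModel {
    uint32_t regs[2] = {1, 0};  // [0] status (ready), [1] tx data
    uint32_t read32(uint32_t addr) { return regs[(addr >> 2) & 1]; }
    void write32(uint32_t addr, uint32_t v) { regs[(addr >> 2) & 1] = v; }
};
BusFunctionalModel g_bus;

// On the target, device access is a volatile dereference of a memory-mapped
// address. Compiled with -DHOST_EXECUTION, the same application source routes
// those accesses to the bus-functional model; everything else runs natively
// at host speed.
#ifdef HOST_EXECUTION
#define MMIO_READ(addr)      g_bus.read32(addr)
#define MMIO_WRITE(addr, v)  g_bus.write32((addr), (v))
#else
#define MMIO_READ(addr)      (*(volatile uint32_t*)(uintptr_t)(addr))
#define MMIO_WRITE(addr, v)  (*(volatile uint32_t*)(uintptr_t)(addr) = (v))
#endif

// Unchanged application code: polls a hypothetical UART, then transmits.
enum { UART_STATUS = 0xFFFF0000u, UART_TX = 0xFFFF0004u, TX_READY = 1u };

void uart_putc(char c) {
    while ((MMIO_READ(UART_STATUS) & TX_READY) == 0) { /* spin */ }
    MMIO_WRITE(UART_TX, static_cast<uint32_t>(c));
}
```

Note what the sketch cannot capture: how long the polling loop actually runs on the target. That is exactly the timing information the HSE approach forfeits.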
That brings us to the fourth approach. In this method, the target processor's architecture is mapped into an executable model, the virtual processor model, which in turn executes the target code. The VPM is fast primarily because it has both a static and a dynamic component. The static component, which models instruction-execution behavior, is built by an analyzer that constructs a custom VPM from the target code, based on all or an elective subset of the architectural elements required in the processor. That static analysis is analogous to static timing analysis in circuit simulation, and the resulting model runs very fast. The code executed by a VPM might be HLL C/C++ code or assembly/object-level code.
The second portion of the VPM models the dynamic parts of the processor, those whose function cannot be determined prior to simulation. These include the parts of the processor that communicate with the hardware: cache, virtual memory, interrupts, bus signals and the like.
For obvious reasons, the simulation speed of that portion is limited by the level of detail modeled and, where communication with hardware occurs, by the speed of the hardware simulator during that communication.
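The division of labor can be pictured as generated host code in which every basic block carries a cycle cost computed once by the static analyzer, while anything the analyzer cannot resolve (here, a device read) traps out to the dynamic model. The sketch below is a simplified illustration of that idea only; the names and cycle figures are invented, not VaST's implementation.

```cpp
#include <cstdint>

// Dynamic portion: state that can only be resolved during simulation.
struct DynamicModel {
    uint64_t cycles = 0;
    uint32_t ioRead(uint32_t addr) {
        cycles += 20;   // assumed bus-transaction cost; in a real model this
        (void)addr;     // call crosses into the hardware simulator
        return 0;
    }
};

// Static portion: target basic blocks translated into host code, each
// annotated with a cycle count precomputed before simulation, analogous
// to static timing analysis in circuit design.
void block_17(DynamicModel& m, uint32_t& r0, uint32_t& r1) {
    m.cycles += 7;            // precomputed: issue slots, stalls, etc.
    r0 = r0 + r1;             // straight-line code runs at host speed
    r1 = r0 << 2;
}

void block_18(DynamicModel& m, uint32_t& r0) {
    m.cycles += 3;            // static cost of the block itself
    r0 = m.ioRead(0xFFFF0000u);  // dynamic: hand off to the hardware model
}
```

Because straight-line blocks never leave compiled host code, the common case runs at close to native speed; only the dynamic hand-offs pay the simulator's price.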
With a VPM, it is also possible to select the architectural elements and the level of detail modeled in both the dynamic and static portions of the design. In that way, processors can be customized for a particular use, or modeled as cores or selectable catalog components. That feature is especially helpful in addressing the different concerns of engineers. Typically, software engineers do not care about the details of bus transactions when building application code, but they do want to know if control bits have been set correctly in device and status registers. At the same time, real-time software engineers need extremely high timing accuracy, but care less about function. Sometimes the two come together, as when a complex, concurrent system will lose functional accuracy if the timing is flawed.
Hardware engineers, on the other hand, have a completely different perspective. They are rarely interested in running billions of instructions, but they do want to ensure that devices plugged into the bus behave as designed and communicate and synchronize using proper bus protocols.
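A natural way to expose that per-engineer choice is a configuration record elected per simulation run, sketched below with hypothetical field names; the article does not specify such an interface.

```cpp
// Hypothetical VPM configuration: which architectural elements to model,
// and at what level of detail, is chosen for each simulation run.
struct VpmConfig {
    bool model_cache        = false;  // off for application development
    bool model_pipeline     = false;  // on for cycle-level timing work
    bool model_virtual_mem  = false;
    bool timing_accurate_io = true;   // bus transactions visible to hardware
};

// An application-software engineer checking device and status registers might
// use the defaults above; a real-time engineer would enable the cache and
// pipeline models and accept the slower run; a hardware engineer might keep
// only timing_accurate_io, to exercise bus protocols.
```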
A VPM allows modeling flexibility for hardware, software and architectural engineers, providing accuracy where it is needed and trading detail for simulation speed at the election of the designers. A VPM with virtual-port (memory-mapped) input/output typically executes 150 million instructions/s on a 400-MHz host, without sacrificing functional or timing accuracy. The VPM is one of the fastest, most accurate and most malleable approaches to processor modeling. With the availability of flexible VPMs, system engineers can bring a great deal of innovative system-design technology to bear very early in the design cycle.
Modeling flexibility
VPMs add two things to that process. First, their speed and accuracy provide modeling flexibility, enabling changes as problems are discovered. Without having to commit to silicon, engineers can make critical decisions about hardware/software partitioning, processor configuration and choice of operating system quantitatively and much earlier in the design cycle, when trade-offs will not incur severe re-engineering penalties.
Second, VPMs provide a means of quantifying system decisions, which gives the system architect a precise mechanism for communicating engineering change decisions to the software and hardware groups. In contrast to the traditional manual system, this lends the design process a rigor it has lacked and eliminates costly errors.
Concurrent engineering is unavoidable as time-to-market shrinks and the amount of software embedded in systems explodes. Since the processor executes software and mediates the communication between the hardware and the software, concurrent engineering is only possible if the processor model is fast, accurate and flexible enough to permit a soft prototype to stand in for a hardware prototype. Only virtual processor models provide the features required for concurrent engineering via soft prototypes. To those working with leading-edge designs, such as systems-on-chip, VPMs will become a standard part of the modeling arsenal.
Related Articles
- Co-Design for SOCs -> Iterated co-design right for system ICs
- Co-Design for SOCs -> Executable models key to flexible design
- Co-Design for SOCs -> Software debug shifts to forefront of design cycle
- Co-Design for SOCs -> 'Ramping-up' for complex multiprocessing applications
- Co-Design for SOCs -> Designers face chip-level squeeze