Minimizing risks with co-verification

Today's processors offer a level of flexibility and performance unthinkable just a few years ago. But all of this advanced capability complicates the design task, substantially increasing the potential for serious, if not fatal, design mistakes on the first pass. Since today's new processors usually involve the adoption of new tools, new bus architectures and significantly more complex procedures for operations like initialization and reset, problems are bound to occur.

Take, for example, the initialization task. Because new processors can operate in so many different modes, they must be told which one to adopt when they start. The processor determines the proper mode by reading the appropriate configuration word or sequence as it comes out of initialization. If that information is not available, the system will not work.

Or consider the reset function. The specific sequence for reset can get quite involved, depending on the processor being used. The system has to know where the reset vector is stored, how many clock cycles it takes to jump to the reset vector, and so on. If anything is incorrect in the reset function, good luck trying to debug a dead system.

Another area that can fatally jeopardize the functionality of a system is the initialization of the memory controller. Some of the newer processors come out of reset with address zero of the memory subsystem mapped to ROM; once the boot code has been read, they move address zero out of the limited ROM space and remap it into RAM. Any mistake in this process and nothing whatsoever will work in the system.

Many processors also support an array of sophisticated peripherals that must be properly initialized to achieve functional system operation. Although flaws in peripheral setup will not prevent the system from booting, a design team can easily chew up weeks in system integration tracking down and debugging peripheral problems.

The bit and byte ordering between the processor and the other components on the bus is also getting quite complex, increasing the likelihood of an error that could require an ASIC or board spin.
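These boot-time operations are simple to express in code but unforgiving of mistakes. The fragment below is a minimal sketch, in C, of the kind of reset handling that co-verification is meant to exercise; every register name, address and bit field in it is hypothetical, and a real boot sequence follows the processor's and memory controller's data books exactly.

/* A purely illustrative boot-ROM reset handler.  Every address,
 * register name and bit field below is invented for this sketch; the
 * real values come from the processor and memory-controller data books. */

#include <stdint.h>

#define MEMCTRL_BASE      0xFFF00000u   /* hypothetical memory controller */
#define MEMCTRL_MODE      (*(volatile uint32_t *)(MEMCTRL_BASE + 0x00))
#define MEMCTRL_REMAP     (*(volatile uint32_t *)(MEMCTRL_BASE + 0x04))
#define REMAP_ZERO_TO_RAM (1u << 0)     /* swap address 0 from ROM to RAM */

#define ROM_BASE          0xFFF80000u   /* boot ROM, aliased at 0 until remap */
#define RAM_ALIAS         0x20000000u   /* RAM is reachable here before remap */
#define VECTOR_TABLE_SIZE 0x100u

/* Configuration word sampled as the processor leaves reset.  Getting its
 * width, location or byte ordering wrong is exactly the class of error
 * co-verification is meant to catch before a board exists. */
#define BOOT_CONFIG_WORD  (*(volatile uint32_t *)(ROM_BASE + 0x0FF0))

void boot_init(void)
{
    /* 1. Read the reset configuration and program the operating mode:
     *    bus width, wait states, refresh timing and so on. */
    uint32_t cfg = BOOT_CONFIG_WORD;
    MEMCTRL_MODE = cfg & 0x0000FFFFu;

    /* 2. Copy the exception vectors from ROM into RAM (through the RAM
     *    alias) so they are still reachable after the remap. */
    const volatile uint32_t *src = (const volatile uint32_t *)ROM_BASE;
    volatile uint32_t       *dst = (volatile uint32_t *)RAM_ALIAS;
    for (uint32_t i = 0; i < VECTOR_TABLE_SIZE / sizeof(uint32_t); i++)
        dst[i] = src[i];

    /* 3. Remap address zero out of the limited ROM space into RAM.  A
     *    mistake in any of the steps above leaves a dead system. */
    MEMCTRL_REMAP = REMAP_ZERO_TO_RAM;
}

Errors in any of these steps, such as the wrong configuration-word width or a remap performed before the vectors are in place, are nearly impossible to debug on a dead board but easy to observe in simulation.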
Studying the experiences of companies that have adopted a new processor graphically demonstrates the problems that can occur and how co-verification can help ferret them out before the prototype stage.

Ascend Communications, Inc. (Alameda, Calif.) was designing a multiprocessor control board intended to drive a frame relay switching system. The design contained five processors, one of which was new to the design team: Motorola's PowerPC 603e. It had to interface with several i960s from Intel, a processor Ascend had used extensively. The team wanted to reuse as much software and hardware as possible, but needed to create a sizable amount of new code for the PowerPC 603e. Anticipating the increased risk of incorporating a target that they had little experience with, the team decided to co-verify the design.

By the time the code checkout was complete, several design errors had been isolated and corrected. One problem in particular would have definitely required a board turn had it been discovered at the hardware prototype stage. It seems the manual on the 603e neglected to mention that after reset, the processor expected an 8-byte configuration word. The boot PROM interface provided a 1-byte buffer, a design practice carried over from the i960. Early in the co-verification process it became evident that the 603e was not coming out of reset and proceeding with code execution as expected. Close examination of the target's behavior and its data book identified the boot PROM interface as the problem. To remedy the situation, an 8-byte buffer was added to the design. This was easy enough to do since the board was still in layout; had the board already been prototyped, it would have been a time-consuming task.

One of the best ways to minimize such problems in an embedded design is to thoroughly simulate the interactions between the software and hardware prior to constructing a physical prototype. In the last few years, co-verification tools that support this type of comprehensive simulation have become available and have been widely used. These solutions have proven to be very useful in a variety of design situations, especially in cases where a team is adopting a new processor.

Co-verification gives a design team the means to fully evaluate hardware-specific code and perform an additional measure of hardware validation beyond what can be achieved with a logic simulator and a suite of testbenches. Checking out these fundamental operations before committing to a prototype can mean the difference between a viable prototype that boots and successfully runs hardware diagnostics and a prototype that is DOA. What's more, having operational boot-ROM code and hardware diagnostics ready when the first prototype hits the lab will shave weeks off system integration.
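The hardware diagnostics mentioned above can likewise be exercised against the simulated hardware. The fragment below is an illustrative sketch, in C, of a simple data-bus and address-bus memory test; the RAM window and test size are assumptions for this example rather than values from any particular board.

/* Illustrative memory diagnostics of the kind that can be checked out
 * in co-verification before the first prototype exists. */

#include <stdint.h>
#include <stddef.h>

/* Walking-ones test of the data bus at a single RAM location.
 * Returns 0 on success, or the pattern that failed. */
static uint32_t test_data_bus(volatile uint32_t *addr)
{
    for (uint32_t pattern = 1u; pattern != 0u; pattern <<= 1) {
        *addr = pattern;
        if (*addr != pattern)
            return pattern;            /* stuck or shorted data line */
    }
    return 0u;
}

/* Power-of-two address test: write a marker at the base, then touch every
 * base + (1 << n) offset and check that the marker survives.  Returns 0 on
 * success, or the offending offset. */
static uint32_t test_address_bus(volatile uint32_t *base, size_t words)
{
    const uint32_t marker = 0xAA55AA55u;

    base[0] = marker;
    for (size_t offset = 1; offset < words; offset <<= 1) {
        base[offset] = (uint32_t)offset;
        if (base[0] != marker)
            return (uint32_t)offset;   /* address line aliases back to 0 */
    }
    return 0u;
}

int run_memory_diagnostics(void)
{
    /* Hypothetical RAM window; real values come from the board's memory map. */
    volatile uint32_t *ram   = (volatile uint32_t *)0x20000000u;
    size_t             words = (256u * 1024u) / sizeof(uint32_t);

    if (test_data_bus(ram) != 0u)
        return -1;
    if (test_address_bus(ram, words) != 0u)
        return -2;
    return 0;                          /* memory subsystem looks sane */
}

Running such a test in co-verification validates both the diagnostic code and the memory subsystem design before either meets real silicon.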
Tool differences

Having decided to use co-verification with a new processor, the next task is to evaluate the different co-verification solutions. EDA vendors offer such tools but, as might be expected, the tools are not all created equal. Among the most important considerations are the accuracy and completeness of the simulation and the CPU models. To thoroughly analyze fundamental operations like reset, interrupt and memory subsystem functions, the CPU must be carefully modeled to create a realistic representation of the system activity. Moreover, all the peripheral devices should be modeled in detail to reduce the risk of problems in peripheral interactions, as these can compromise overall system performance during the prototype stage.

Co-verification tools tend to fall into one of two categories: those that support full hardware simulation of the reset and boot process, and those that simply fire selected bus cycles at the hardware, shielding the logic simulator from these operations. Why don't all the tools exercise these critical hardware boot functions? One reason is that the development of CPU models that fully support reset and initialization requires a significant investment that many vendors are unwilling to make. For example, few instruction-set simulators targeted at software developers include these hardware-specific functions. The vendor must therefore choose either to integrate the instruction-set simulator as is or to invest significant resources to enhance it. Faced with ever-growing demand for new processor models, most vendors choose to minimize the investment in any given device.

Performance is another reason for reducing the functions executed in the hardware simulator. Because logic simulation is extremely slow, minimizing hardware interactions speeds the co-verification process. But remember, the intent of these tools is to verify the interaction between hardware and software. Many of the case studies discussed here involve detailed execution of the reset, initialization and boot sequence. These functions must be co-verified to make sure that errors are not built into the hardware prototype.

The solution to the performance-versus-detailed-validation dilemma is to choose a co-verification tool that lets you switch effortlessly between these two modes of operation. Begin your session by simulating all functions in the hardware simulator. Then, as you build confidence in the design, you can begin to optimize functions and memory regions that have proved to be operational. As co-verification progresses, a growing percentage of transactions is shielded from the logic simulator, achieving acceptable performance without sacrificing the comprehensiveness of your co-verification session (a conceptual sketch of such a progressively optimized region map appears at the end of this article).

When evaluating co-verification solutions, make a careful assessment of which functions of your design are supported by the tool and its associated CPU model. At what level of hardware detail can these functions be simulated? What is required to shift between detailed hardware simulation and high-speed software execution? Will you be required to make changes to your embedded hardware or software? You want to co-verify your design, not some distant derivative of it. These factors, as well as how well the tool fits with your current design environment, will determine how much value you derive from the adoption of co-verification.

Incurring the risk of introducing a new processor into your project should be accompanied by an appropriate level of risk mitigation. Co-verification has proven to be an effective approach to comprehensive system simulation.
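To make the progressive-optimization strategy described above concrete, the sketch below shows one conceptual way a session's address map might be partitioned between detailed hardware simulation and optimized instruction-set-simulator execution. It is purely illustrative: the regions, names and the C structure itself are assumptions for this sketch and do not correspond to any particular vendor's tool or interface.

/* Conceptual region map for a co-verification session, written in C for
 * illustration only.  The point is how coverage can shift from detailed
 * hardware simulation toward optimized ISS execution as confidence grows. */

#include <stdint.h>
#include <stdio.h>

enum region_mode {
    DETAILED_HW_SIM,   /* every access drives the logic simulator */
    OPTIMIZED_ISS      /* accesses stay in the instruction-set simulator */
};

struct region {
    const char      *name;
    uint32_t         base;
    uint32_t         size;
    enum region_mode mode;
};

/* Early in the session everything is DETAILED_HW_SIM.  After boot, vector
 * remap and the memory diagnostics have been proven, code and data regions
 * can be optimized while the controller and peripheral windows stay detailed. */
static struct region session_map[] = {
    { "boot ROM",        0xFFF80000u, 0x00080000u, DETAILED_HW_SIM },
    { "vector RAM",      0x00000000u, 0x00000100u, DETAILED_HW_SIM },
    { "program RAM",     0x20000000u, 0x00040000u, OPTIMIZED_ISS   },
    { "memory ctrl",     0xFFF00000u, 0x00001000u, DETAILED_HW_SIM },
    { "peripheral regs", 0xFFE00000u, 0x00010000u, DETAILED_HW_SIM },
};

int main(void)
{
    for (size_t i = 0; i < sizeof(session_map) / sizeof(session_map[0]); i++)
        printf("%-16s %08X..%08X  %s\n",
               session_map[i].name,
               (unsigned)session_map[i].base,
               (unsigned)(session_map[i].base + session_map[i].size - 1u),
               session_map[i].mode == DETAILED_HW_SIM
                   ? "detailed hardware simulation"
                   : "optimized (ISS only)");
    return 0;
}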