Complexity alters verification strategy
By Yoshinori Eda, Manager of Engineering, NEC Corp., Tokyo, EE Times
September 26, 2001 (11:05 a.m. EST)
URL: http://www.eetimes.com/story/OEG20010913S0067
The development process for a system as complex as NEC's next-generation Disk Array Subsystem required a verification method different from traditional practices. Extrapolating simulation data from the previous project to the new system's 78 million gates projected unacceptably long simulation times.
As the system was too large to consider building an FPGA prototype, NEC's engineers faced a choice: forgo system-level simulation, or adopt hardware emulation or hardware/software co-verification.
The team determined that hardware emulation for a system of this size was cost prohibitive, and looked at several co-verification tools, including Seamless from Mentor Graphics Corp. (Wilsonville, Ore.). After a rigorous evaluation process, Seamless was judged the best fit for the project's needs: Mentor had a mature model for the PPC750 CPU, and the tool's usability was superior to that of its competitors.
Three phases
With the choice of a system verification tool completed, the team began setting a system verification strategy. The ASICs would be verified independently, using only a logic simulator. Then co-verification would be applied to progressively larger partitions of the design until the entire system was validated. The smallest partition, the control card, consisted of two PPC750s, two ASICs, 128 Mbytes of local memory and a handful of standard logic devices. The ASIC gate count of this partition was approximately 4.3 million gates. This phase of co-verification tested the PPC750's ability to access local memory and the ASIC registers.
The embedded software used to verify the system was essentially a version of the hardware diagnostics. At this point, since only a small part of the hardware was simulated, the diagnostic routines targeting the absent hardware were commented out. Phase one co-verification uncovered logic errors in the ASIC, incorrect connections between the ASIC and local memory, and several errors in the diagnostic program.
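For illustration only, the sketch below shows the general shape such a phase-one diagnostic might take in C. The register names, addresses and the FULL_SYSTEM build flag are assumptions rather than NEC's actual code: the PPC750 walks a pattern through local memory, exercises ASIC registers, and the routines aimed at hardware absent from the phase-one model are compiled out.

```c
/* Hypothetical phase-one diagnostic sketch (addresses and names assumed). */
#include <stdint.h>

#define LOCAL_MEM_TEST   0x00100000u                 /* assumed scratch window in local memory */
#define ASIC_REG_BASE    0xF0000000u                 /* assumed base of the ASIC register block */
#define ASIC_ID_REG      (ASIC_REG_BASE + 0x00u)
#define ASIC_SCRATCH_REG (ASIC_REG_BASE + 0x04u)

static volatile uint32_t *reg(uint32_t addr)
{
    return (volatile uint32_t *)(uintptr_t)addr;     /* memory-mapped access */
}

static int diag_local_memory(void)
{
    /* Walking-ones pattern over a small scratch region of local memory. */
    for (uint32_t bit = 0; bit < 32; bit++) {
        volatile uint32_t *p = reg(LOCAL_MEM_TEST + 4u * bit);
        *p = (1u << bit);
        if (*p != (1u << bit))
            return -1;                               /* data or connection error */
    }
    return 0;
}

static int diag_asic_registers(void)
{
    /* Read-only ID register, then a read/write scratch register. */
    if (*reg(ASIC_ID_REG) == 0xFFFFFFFFu)
        return -1;                                   /* bus error or missing device */
    *reg(ASIC_SCRATCH_REG) = 0xA5A55A5Au;
    return (*reg(ASIC_SCRATCH_REG) == 0xA5A55A5Au) ? 0 : -1;
}

#ifdef FULL_SYSTEM                                   /* hardware absent in phase one */
extern int diag_shared_memory(void);
extern int diag_host_interface(void);
#endif

int diag_main(void)
{
    int status = 0;
    status |= diag_local_memory();
    status |= diag_asic_registers();
#ifdef FULL_SYSTEM                                   /* compiled out for the phase-one model */
    status |= diag_shared_memory();
    status |= diag_host_interface();
#endif
    return status;
}
```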
Isolating and correcting these errors took very little time because of the excellent control and visibility afforded by co-verification. The design was stepped at the assembly-instruction level to the point where the discrepancy occurred. There, the full hardware user-interface features of the ModelSim logic simulator were used to debug ASIC and memory errors, and the Xray source-level debugger provided full visibility into the diagnostic program, including software variables and the PPC750's register set.
Phase two of the verification plan was to achieve module-level verification. This added 6 million gates to the simulation, for a total of 10 million, as well as 1 Gbyte of shared-memory disk cache. Phase two contained the same two PPC750 CPUs as phase one. The additional ASICs expanded the functionality of the verification target to include the shared-memory module.
At first, data read/write accesses from the PPC750 to the memory card and DMA data transfers between the host system and the memory card were tested individually. Then the PPC750 and the host system performed simultaneous accesses to the shared memory on the memory card to verify conflict conditions.
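A minimal C sketch of this kind of test follows; the DMA register map, addresses and routine names are hypothetical assumptions made for illustration. The idea is to first verify plain processor reads and writes to the shared-memory cache, then start a host-side DMA transfer so that it overlaps in time with continued processor accesses to the same card.

```c
/* Hypothetical phase-two conflict test sketch (register map assumed). */
#include <stdint.h>

#define SHARED_MEM_BASE  0x80000000u   /* assumed window into the shared-memory cache */
#define DMA_SRC_REG      0xF0001000u   /* assumed DMA controller registers */
#define DMA_DST_REG      0xF0001004u
#define DMA_LEN_REG      0xF0001008u
#define DMA_CTRL_REG     0xF000100Cu
#define DMA_START        0x1u
#define DMA_BUSY         0x2u

static volatile uint32_t *reg(uint32_t addr)
{
    return (volatile uint32_t *)(uintptr_t)addr;
}

static int diag_shared_mem_rw(uint32_t offset, uint32_t words)
{
    /* Processor read/write test over a slice of the shared memory. */
    volatile uint32_t *p = reg(SHARED_MEM_BASE + offset);
    for (uint32_t i = 0; i < words; i++)
        p[i] = i ^ 0xDEADBEEFu;
    for (uint32_t i = 0; i < words; i++)
        if (p[i] != (i ^ 0xDEADBEEFu))
            return -1;
    return 0;
}

int diag_shared_mem_conflict(void)
{
    /* Start a host-to-cache DMA transfer into the lower half of a test window... */
    *reg(DMA_SRC_REG)  = 0x0u;                 /* host-side buffer, modeled by the testbench */
    *reg(DMA_DST_REG)  = SHARED_MEM_BASE;
    *reg(DMA_LEN_REG)  = 0x1000u;
    *reg(DMA_CTRL_REG) = DMA_START;

    /* ...while the PPC750 reads and writes the upper half of the same window. */
    int status = diag_shared_mem_rw(0x1000u, 256u);

    /* Wait for the DMA engine to drain before reporting the result. */
    while (*reg(DMA_CTRL_REG) & DMA_BUSY)
        ;
    return status;
}
```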
Verification of this intermediate configuration proved to be a valuable component of the strategy. During the co-verification of phase two, the team discovered a data alignment problem, which would have been difficult to diagnose at the full system level.
The final and most ambitious goal was to simulate the maximum configuration of the S4100. This phase of the co-verification strategy included 24 PPC750 CPUs, each with 128 Mbytes of local memory, 78 million ASIC gates, and 4 Gbytes of shared memory. Because the simulation model was so large, the team spent some time tuning the hardware diagnostics in order to keep the co-verification run-times from becoming prohibitively long.
While Seamless did not require NEC to make any changes to the embedded code, the team chose to eliminate low-value operations performed by the diagnostics. Each modification fell into one of the following categories:
- Shorten timer settings.
- Reduce the count on software idle loops.
- Modify data transfer length.
These modifications reduced the simulation run-time by a factor of four, a significant gain given the minor reduction in the comprehensiveness of the hardware diagnostics.
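The fragment below illustrates the kind of tuning involved, assuming a hypothetical COSIM build flag and parameter names; the actual diagnostics and values were NEC's own. Timer settings, idle-loop counts and transfer lengths are scaled down only when the code is built for co-verification, so the hardware build is untouched.

```c
/* Hypothetical illustration of diagnostic tuning for co-verification
 * (flag, names and values are assumptions, not NEC's code). */

#ifdef COSIM                                 /* assumed build flag for simulation runs */
#define WATCHDOG_TIMEOUT_TICKS   100u        /* shortened timer setting      */
#define IDLE_LOOP_COUNT          16u         /* reduced software idle loop   */
#define DMA_TEST_LENGTH_BYTES    0x400u      /* shortened data transfer      */
#else
#define WATCHDOG_TIMEOUT_TICKS   1000000u    /* values used on real hardware */
#define IDLE_LOOP_COUNT          65536u
#define DMA_TEST_LENGTH_BYTES    0x100000u
#endif

void idle_wait(void)
{
    /* Busy-wait whose iteration count is cut sharply in simulation builds. */
    for (volatile unsigned int i = 0; i < IDLE_LOOP_COUNT; i++)
        ;
}
```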
Because system simulation took place entirely in a virtual environment, the external environment of host computers and disk drives had to be modeled. A testbench supplied the external stimulus and checked the system's expected responses.
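As a rough, self-contained illustration of the stimulus-and-check idea (the project's real testbench surrounded the simulated hardware rather than living in standalone C), the sketch below uses a plain array as a behavioral stand-in for the storage behind the host interface; all names here are hypothetical.

```c
/* Illustrative stimulus/response check (behavioral stand-ins, not NEC's testbench). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512u
#define NUM_BLOCKS 64u

/* Behavioral stand-in for the storage the modeled host computer sees. */
static uint8_t modeled_storage[NUM_BLOCKS][BLOCK_SIZE];

static void host_write(uint32_t lba, const uint8_t *data)   /* stimulus  */
{
    memcpy(modeled_storage[lba], data, BLOCK_SIZE);
}

static void host_read(uint32_t lba, uint8_t *data)          /* response  */
{
    memcpy(data, modeled_storage[lba], BLOCK_SIZE);
}

int main(void)
{
    uint8_t expected[BLOCK_SIZE], observed[BLOCK_SIZE];

    for (unsigned int i = 0; i < BLOCK_SIZE; i++)
        expected[i] = (uint8_t)i;

    /* Drive a host write, then read the same block back and compare. */
    host_write(7u, expected);
    host_read(7u, observed);

    if (memcmp(expected, observed, BLOCK_SIZE) != 0) {
        puts("FAIL: response does not match expected data");
        return 1;
    }
    puts("PASS");
    return 0;
}
```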
The fact that the NEC team was able to run a 24-processor, 78 million-gate design in Seamless is a significant accomplishment, but the real reward is what co-verification delivered compared with performing system-level verification on the physical system: the tool shaved six months off the project schedule.