Overcoming timing, power bottlenecks
By Jay Abraham, Product Marketing Manager, Silicon Metrics, San Jose, Calif., EE Times
April 28, 2003 (4:10 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030428S0087
In most SoC designs, embedded ROM, RAM or register file memories of various sizes consume up to 50 percent of die area. According to the Semiconductor Industry Association, that figure is expected to increase to 71 percent by 2005. And by 2014, the SIA has estimated that embedded memory will account for more than 90 percent of the area on a chip.
With the rising cost of silicon fabrication for high-performance designs and the increasing complexity of designs in nanometer processes, detailed timing and power analysis of embedded memories is required.
Generating high-quality SoC designs requires the ability to accurately model embedded-memory devices, and traditional methods for embedded-memory characterization and modeling for timing and power too often fall short in quality. These methods result in poor-quality models that, when used in design flows for nanometer technology processes, cause timing-closure issues and unpredictable power-analysis results.
The increased use of embedded memories in SoCs has made it difficult for design construction, optimization and analysis tools to perform efficiently.
Today SoC teams are forced to overdesign to accommodate timing inaccuracies, which translates into increased die area and increased manufacturing costs. Overdesigning to accommodate power inaccuracies directly influences chip pinout and packaging costs. Lack of information and inaccurate estimates can delay power-related decisions, negatively affecting design schedules.
This is especially troublesome in mixed-signal designs, where pinout can determine the success or failure of analog components. The implications are clear: Memory characterization and modeling are key concerns to SoC design teams, and accurate memory models are required in all phases of design.
Most design teams use timing and power memory models generated by memory compilers. Such compilers generate models for a specific memory size and aspect ratio by interpolating the performance from the data for a few specifically characterized configurations. It is likely that the actual memory size and aspect ratio used in the design have never been characterized. The interpolation and extrapolation used to create models may result in significant error or nonuniform guardband when comparing the model against Spice.
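To make the failure mode concrete, consider a minimal sketch (in Python, with invented numbers) of how a compiler-style model might bilinearly interpolate access time from a handful of characterized words-by-bits corners. Because real access time is not linear in array dimensions, the estimate for a never-characterized 4,096 x 32 instance can deviate from Spice; all names and values below are hypothetical.

# Illustrative sketch (hypothetical data): how a memory compiler might
# estimate access time for an uncharacterized configuration by
# interpolating between a few characterized (words, bits) corners.

# Characterized corners: (words, bits) -> access time in ns (made-up values)
characterized = {
    (1024, 16): 1.10, (1024, 64): 1.25,
    (8192, 16): 1.60, (8192, 64): 1.85,
}

def interpolated_access_time(words, bits):
    """Bilinear interpolation across the characterized grid."""
    w_lo, w_hi = 1024, 8192
    b_lo, b_hi = 16, 64
    fw = (words - w_lo) / (w_hi - w_lo)
    fb = (bits - b_lo) / (b_hi - b_lo)
    t_lo = characterized[(w_lo, b_lo)] * (1 - fb) + characterized[(w_lo, b_hi)] * fb
    t_hi = characterized[(w_hi, b_lo)] * (1 - fb) + characterized[(w_hi, b_hi)] * fb
    return t_lo * (1 - fw) + t_hi * fw

# A 4K x 32 instance was never characterized, so its model is an estimate.
# Real access time is not linear in words/bits (wordline/bitline RC, bank
# partitioning), so the interpolated value can deviate from Spice.
print(f"estimated: {interpolated_access_time(4096, 32):.3f} ns")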
Some design teams seek to generate better timing and power embedded-memory models with Spice simulation. But the teams find that it is impossible to simulate an entire memory for most configurations using Spice simulators.
Designers thus have resorted to manual cutting or user-directed pruning of the memory netlist. But these labor-intensive processes are time-consuming and error-prone.
Solutions for memory models must strive to improve model accuracy while providing an automated, high-throughput flow for model generation. Ideally, characterization and model generation should be a pushbutton process. One way to achieve high throughput is to leverage hierarchical simulators, whose array-reduction capabilities are well-suited to memory devices. Moreover, the variance from traditional Spice is often only 2 to 4 percent, yielding highly accurate timing and power models.
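As a rough illustration of the array-reduction idea, the sketch below lumps the loading of the inactive bit cells on a bitline into a single equivalent capacitor while one representative cell is solved explicitly. The per-cell capacitance, row count and lumping scheme are invented for illustration and do not represent any particular simulator's algorithm.

# Illustrative sketch of the array-reduction idea: identical, inactive bit
# cells on a bitline contribute loading but need not each be solved.
# A hierarchical simulator can exploit this regularity; here it is mimicked
# by replacing the inactive cells with one lumped capacitor.

CELL_DRAIN_CAP_FF = 0.8      # made-up per-cell bitline loading, in fF
ROWS = 1024                  # made-up array height

def lumped_bitline_load_ff(active_rows=1):
    """One cell solved explicitly; the rest lumped as (ROWS - active) * C."""
    return (ROWS - active_rows) * CELL_DRAIN_CAP_FF

# Attach this as a single capacitor in the reduced netlist instead of
# instantiating 1,023 identical inactive cells.
print(f"lumped bitline load: {lumped_bitline_load_ff():.1f} fF")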
The flexibility provided by hierarchical simulators enables designers to generate memory models that fit the unique needs of a particular design, including the actual operating points. Because the entire postlayout netlist is simulated, designers no longer need to compensate for estimation in the measurement process. The models generated are instance-specific for the SoC design in which they will be embedded. Because the memory model is applicable to a particular design, guardband is user-controllable. Most important, memory characterization utilizing hierarchical simulation no longer requires manual netlist cutting, netlist pruning or synthesis of the measurements of the individual memory components.
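A minimal sketch of what user-controllable guardband can look like in this instance-specific setting: a single derating factor applied uniformly to measured arc delays, rather than a nonuniform margin baked in by interpolation. The arc names, delays and derate values below are hypothetical.

# Illustrative sketch (hypothetical): with instance-specific delays measured
# from the full post-layout netlist, guardband can be one user-chosen derate
# rather than a nonuniform, interpolation-driven margin.

characterized_arcs_ns = {          # per-arc measured delays (made-up)
    "clk->dout[0]": 1.42,
    "clk->dout[1]": 1.47,
    "addr_setup":   0.31,
}

def apply_guardband(arcs, derate=1.03):
    """Scale every measured arc by a user-controlled derating factor."""
    return {arc: delay * derate for arc, delay in arcs.items()}

guardbanded = apply_guardband(characterized_arcs_ns, derate=1.02)  # 2% margin
for arc, delay in guardbanded.items():
    print(f"{arc}: {delay:.3f} ns")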
As important as hierarchical simulators are to memory characterization, their existence addresses only one aspect of this complex process. It is important to eliminate complexity from the memory characterization and modeling process to turn characterization and memory model generation into a pushbutton task. Requirements for automation include automatic stimulus generation, arc-based job distribution, automatic deck creation, archiving, and prepackaged timing and power methodologies that match the target model. High throughput for characterization is obtained with the use of hierarchical simulators, parallelized distribution of simulation jobs that are granularized at the per-arc level and intelligent Spice deck creation to utilize hierarchical processing.
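The per-arc granularity lends itself to straightforward parallel distribution. The sketch below treats each timing arc as an independent job dispatched to a worker pool; the arc list and the characterize_arc placeholder stand in for real stimulus generation, deck creation and hierarchical-simulator runs.

# Illustrative sketch: per-arc job distribution. Each timing arc becomes an
# independent simulation job (in practice, a generated Spice deck run on a
# hierarchical simulator); jobs run in parallel across a worker pool.
from concurrent.futures import ProcessPoolExecutor

ARCS = [("clk", "dout[%d]" % i) for i in range(8)] + [("we", "dout[0]")]

def characterize_arc(arc):
    """Placeholder for deck generation + simulator run for one arc."""
    src, dst = arc
    # ... write stimulus, invoke hierarchical simulator, parse delay ...
    return (f"{src}->{dst}", 1.0)  # dummy measured delay in ns

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = dict(pool.map(characterize_arc, ARCS))
    print(f"characterized {len(results)} arcs")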
This same solution solves power-accuracy issues by providing configurable power-acquisition capability. Complex analysis can be performed to determine average power consumption in design-specific configurations. Designers can account for typical utilization patterns, such as sequential vs. random access, and generate power numbers accordingly.
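As an illustration of such design-specific power analysis, the sketch below weights hypothetical per-operation energies by an assumed cycle mix; with energy in picojoules per cycle and frequency in megahertz, the product is average power in microwatts. All numbers are invented.

# Illustrative sketch (made-up numbers): design-specific average power from
# per-operation energies weighted by an expected utilization pattern.

energy_pj = {                # energy per access, from characterization
    "read_sequential": 12.0, # consecutive addresses reuse decode activity
    "read_random":     18.5, # random addresses toggle more decode logic
    "write":           21.0,
    "idle":             0.4, # leakage-dominated cycle
}

# Expected cycle mix for this design (fractions sum to 1.0)
utilization = {"read_sequential": 0.40, "read_random": 0.15,
               "write": 0.10, "idle": 0.35}

f_clk_mhz = 500
# pJ/cycle * MHz = uW
avg_power_uw = sum(energy_pj[op] * frac
                   for op, frac in utilization.items()) * f_clk_mhz
print(f"average power: {avg_power_uw:.1f} uW")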
Analysis of various modes, such as low-power or quiescent, is also possible. This provides the information needed to make important decisions earlier in the design cycle, such as those related to packaging, chip pinout, floor planning and power routing.
There are two applicable memory characterization flows for SoC designers. The first is a recharacterization flow in which designers take existing memory models and perform a recharacterization step to improve their accuracy. With this methodology, users can characterize for unique conditions.
The second flow is for characterization of custom memories, such as handcrafted memories or compiler-generated memories that have been modified. Here, no existing models are available; instead, the user creates a memory configuration file that is then used by the automated characterization system. When an existing compiler-generated memory model is of poor quality, this same methodology may also provide superior results to the recharacterization flow. A robust characterization solution must be able to address both situations.
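For the custom-memory flow, a memory configuration file might capture the netlist, port roles, operating points and desired measurements. The sketch below shows one hypothetical shape for such a description; every field name is invented for illustration, as real tools define their own schemas.

# Illustrative sketch: a hypothetical configuration a user might supply to
# an automated characterization system for a custom memory. All field names
# are invented for illustration.

memory_config = {
    "name": "cust_ram_4096x32",
    "netlist": "cust_ram_4096x32.spi",   # post-layout Spice netlist
    "ports": {
        "clock": ["clk"],
        "address": ["addr[11:0]"],
        "data_in": ["din[31:0]"],
        "data_out": ["dout[31:0]"],
        "control": ["we", "ce"],
    },
    "operating_points": [                 # actual conditions for this SoC
        {"process": "ss", "voltage": 1.08, "temp_c": 125},
        {"process": "ff", "voltage": 1.32, "temp_c": -40},
    ],
    "measurements": ["setup", "hold", "access", "power"],
}

def validate(cfg):
    """Minimal sanity check before launching characterization jobs."""
    required = {"name", "netlist", "ports", "operating_points", "measurements"}
    missing = required - cfg.keys()
    if missing:
        raise ValueError(f"missing config fields: {sorted(missing)}")

validate(memory_config)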