Physics dictates priority: design-for-test
By Chappell Brown, EE Times
October 18, 2000 (1:28 p.m. EST)
URL: http://www.eetimes.com/story/OEG20001016S0047
The bedrock problem of testing and verifying complex systems-on-chip, the subject of this week's Focus section on deep-submicron design, is colliding with the basic physical limits long predicted for CMOS technology.
At the system level, designers are struggling with the management of multimillion-gate projects, but even when that problem is solved, the resulting designs still need to be verified and tested. And at that point, the physics of semiconductors begins to intrude, adding a host of unwanted physical effects to an already complex system. As a result, the verification and test world is scrambling to develop new tools and techniques to meet the challenge of 0.18-micron and smaller processes.
Synplicity's Jeff Garrison has a solution for those struggling with PLD timing problems: physical optimization of a fixed wiring layout.
Take the case of advanced programmable-logic device (PLD) design. It is now possible to place several million gates on a single PLD chip, a system realized in a complex process with at least six layers of metal. In addition to the basic array of gates, such chips typically include memory blocks and intellectual-property (IP) cores.
As Jeff Garrison, director of PLD products at Synplicity Inc. (Sunnyvale, Calif.), points out in his Focus report, "Fine-line process technology that produces such high-density, high-speed devices has brought with it ASIC-like problems. In particular, timing delays, once dominated by logic, are now determined largely by the interconnections between logic."
This effect, which occurs even at 0.25-micron design rules, shifts critical timing parameters from the gate level to the interconnect. Even advanced design tools that can cope with highly complex logic circuits are not structured to deal with the problem.
The critical timing performance of a circuit therefore becomes an unknown quantity until the later stages of place and route. Failure at this stage triggers a rework of the entire design, creating a costly delay in the production of a finished design. Garrison and his colleagues at Synplicity are developing design tools targeted at the special interconnect architecture of PLDs in a joint project with PLD vendor Altera Corp. (San Jose, Calif.).
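For a rough sense of the shift Garrison describes, a first-order RC estimate is enough: wire delay grows with the square of route length, while gate delay stays roughly fixed. The sketch below uses assumed, round per-millimeter resistance and capacitance values for a 0.25/0.18-micron-class process, not figures from Synplicity or Altera.

```python
# Illustrative only: first-order comparison of gate delay vs. interconnect RC delay.
# The per-mm resistance/capacitance and the gate delay are assumed round numbers.

def wire_rc_delay(length_mm, r_per_mm=200.0, c_per_mm=0.2e-12):
    """Lumped-RC (Elmore-style) delay estimate for a wire: 0.5 * R * C."""
    r = r_per_mm * length_mm        # total wire resistance, ohms
    c = c_per_mm * length_mm        # total wire capacitance, farads
    return 0.5 * r * c              # seconds

gate_delay = 100e-12                # assume ~100 ps intrinsic gate delay

for length in (0.5, 2.0, 10.0):     # short, medium and long routes, in mm
    wire = wire_rc_delay(length)
    print(f"{length:5.1f} mm wire: {wire*1e12:7.1f} ps "
          f"({wire/gate_delay:.1f}x the gate delay)")
```

Because both R and C scale with length, the long route's delay dwarfs the gate delay, which is why timing cannot be pinned down until routes are known.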
But that is only one circuit type among the large variety of digital circuits and IP cores that system-on-chip (SoC) designs use. At the circuit-testing stage, covering all of those circuit types on one chip is turning the perennial problem of test coverage into a crisis.
The classic problem for test engineers revolves around the question of how many test patterns are required to ensure a given level of fault coverage. SoC design has pushed the coverage problem, already no picnic for simpler circuit types, up several notches.
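The coverage metric itself is simple arithmetic, the fraction of modeled faults that the pattern set detects; the hard part is generating enough patterns to push it toward 100 percent on a multimillion-gate chip. A minimal sketch, with assumed fault counts:

```python
# Hypothetical sketch: fault coverage is detected faults over total modeled faults.
# The fault counts below are illustrative, not taken from the article.

def fault_coverage(detected_faults, total_faults):
    """Fraction of modeled faults that at least one test pattern detects."""
    return detected_faults / total_faults

total = 4_000_000        # a multimillion-gate SoC can model millions of faults (assumed)
detected = 3_920_000
print(f"coverage = {fault_coverage(detected, total):.2%}")   # 98.00%
```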
Ron Press and Janusz Rajski of Mentor Graphics Corp.'s ATPG products group (Wilsonville, Ore.) give a comprehensive overview of how circuit complexity, and the additional quirks of submicron devices, are ratcheting up test and measurement problems.
To handle the sheer number of devices on the latest chips, test systems are moving to 64-bit processors and test vector compression schemes. Compression levels of up to 60 percent are now required.
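One reason test vectors compress well is that automatic test pattern generation leaves most bits unspecified (don't-cares). The toy run-length encoder below is only meant to illustrate that effect; it is not the scheme Mentor or any other vendor actually uses.

```python
# Toy illustration of test-vector compression: a run-length encoding that exploits
# long runs of don't-care bits. Not any vendor's real compression scheme.

def rle_compress(vector):
    """Run-length encode a test vector of '0', '1' and 'X' (don't-care) symbols."""
    runs = []
    prev, count = vector[0], 1
    for sym in vector[1:]:
        if sym == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = sym, 1
    runs.append((prev, count))
    return runs

vector = "XXXX1XXXXXXX0XXXXXXXXXXX1XXXXXXX"   # typical ATPG pattern: few specified bits
runs = rle_compress(vector)
ratio = 1 - (len(runs) * 2) / len(vector)      # rough size comparison: (symbol, length) pairs
print(runs)
print(f"~{ratio:.0%} smaller")                  # in the same spirit as the 60 percent figure above
```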
But simply coping with circuit size is not enough: Deep-submicron devices are creating a whole new class of faults related directly to timing problems. Test engineers are discovering that simply adding tests to existing schemes will not suffice. The only way to track down the new faults, according to Press and Rajski, is to perform at-speed testing.
Traditional test procedures center on locating "stuck-at" faults, transistors that fail to switch, which are not related to timing. At-speed testing involves a more complex approach: timing-related "launch and capture" events are modeled, in which a transition is launched and later captured to see whether it arrives within a specified timing window. Also, as Synplicity's Garrison has found, path delays have become a significant factor that can no longer be ignored.
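A small sketch makes the distinction concrete: a stuck-at fault changes the logic value and shows up even in a slow test, while a defect that only adds delay passes a static test and fails only when the result must be captured within the clock period. The gate model, delays and clock period below are invented for illustration.

```python
# Hedged sketch contrasting a stuck-at fault with a timing (transition) fault.
# The gate, its delays and the clock period are assumed values, not from the article.

def and_gate(a, b, stuck_at=None, extra_delay_ps=0):
    """Model one AND gate: returns (logic value, propagation delay in ps)."""
    value = (a and b) if stuck_at is None else stuck_at
    return value, 80 + extra_delay_ps          # assume 80 ps nominal gate delay

CLOCK_PERIOD_PS = 250

def stuck_at_test(gate):
    """Slow (static) test: only the final logic value matters."""
    value, _ = gate(1, 1)
    return "fail" if value != 1 else "pass"

def at_speed_test(gate):
    """Launch-and-capture test: the result must also arrive within one clock period."""
    value, delay = gate(1, 1)                  # launch a 0->1 transition
    on_time = delay <= CLOCK_PERIOD_PS         # capture at the next clock edge
    return "fail" if (value != 1 or not on_time) else "pass"

good      = lambda a, b: and_gate(a, b)
stuck     = lambda a, b: and_gate(a, b, stuck_at=0)
slow_path = lambda a, b: and_gate(a, b, extra_delay_ps=300)   # defect adds delay only

for name, g in [("good", good), ("stuck-at-0", stuck), ("slow-to-rise", slow_path)]:
    print(f"{name:12s} stuck-at test: {stuck_at_test(g):4s}  at-speed test: {at_speed_test(g)}")
```

Only the at-speed test flags the slow path, which is the class of fault Press and Rajski say existing schemes miss.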
Along with isolating transistor and interconnect timing problems, test and verification must now deal with interactive problems between adjacent wires. As interconnect becomes more dense, and the signals traveling along wires switch faster, the possibility of a logical value's being transferred from one wire to the next via capacitive coupling has opened up a whole new set of problems centering on the question of signal integrity.
Modeling that type of fault becomes complex because whole networks, not just individual devices and wires, are involved. The modeling process starts with identifying the "aggressor networks" that may be perturbing the behavior of "victim networks."
Lou Scheffer at Cadence Design Systems Inc. (San Jose, Calif.) details in his article the ramifications of this new area of signal integrity analysis. The new level of interaction among circuits introduces a complex, cycle-by-cycle variation in timing delay that becomes difficult to model.
As Scheffer points out, "Timing dependence on crosstalk is a subtle and complex issue because the timing on victim nets depends on the delay across the first gate, the interconnect delay and the behavior of other adjacent nets. Instead of a single delay value for interconnect, we now have minimum and maximum delays, which can vary from one cycle to the next."
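A crude way to see where those minimum and maximum delays come from is to scale the coupling capacitance by a Miller factor that depends on what the aggressor net is doing: quiet, switching the same way, or switching the opposite way. The resistances, capacitances and factors below are assumed values, not drawn from Cadence's tools.

```python
# Rough sketch of crosstalk-dependent delay bounds on a victim net.
# Coupling is folded in with a simple Miller factor; all values are assumed.

def victim_delay_ps(r_ohm, c_ground_ff, c_couple_ff, miller_factor):
    """Lumped-RC delay with the coupling cap scaled by the aggressor's behavior."""
    c_total_ff = c_ground_ff + miller_factor * c_couple_ff
    return 0.69 * r_ohm * c_total_ff * 1e-3    # 0.69*R*C; ohm * fF = 1e-3 ps

R, CG, CC = 500.0, 40.0, 30.0                  # ohms, fF to ground, fF of coupling

scenarios = {
    "aggressor quiet          (k=1)": 1.0,
    "aggressor same direction (k=0)": 0.0,     # coupling cap effectively vanishes
    "aggressor opposite dir.  (k=2)": 2.0,     # coupling cap effectively doubles
}
for label, k in scenarios.items():
    print(f"{label}: {victim_delay_ps(R, CG, CC, k):6.1f} ps")
```

The spread between the k=0 and k=2 cases is the cycle-by-cycle delay variation Scheffer describes.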
One response is to space wires farther apart, or to add shielding, to eliminate crosstalk. However, that raises costs and puts a roadblock on the path to denser circuits. As a result, fault analysis is becoming more sophisticated, extracting precise timing margins after the place-and-route stage of design.
Another interconnect-induced problem is the impact of distance on the voltage drop of power supply lines. As voltage margins shrink with submicron devices, the problem of length-induced voltage drop has become more acute and must now be factored into the test and verification procedure.
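A back-of-the-envelope estimate shows why rail length matters: resistance accumulates with every square of metal the current must traverse. The sheet resistance, rail width and current below are assumed figures, not measurements from any particular process.

```python
# Back-of-the-envelope IR-drop estimate for a power rail; all geometry and currents assumed.

def ir_drop_mv(length_mm, sheet_res_mohm_per_sq=50.0, width_um=10.0, current_ma=5.0):
    """Voltage drop of a current delivered through a rail of given length and width."""
    squares = (length_mm * 1000.0) / width_um          # length / width, in "squares"
    resistance_ohm = squares * sheet_res_mohm_per_sq * 1e-3
    return current_ma * resistance_ohm                 # mA * ohm = mV

for length in (1.0, 3.0, 6.0):
    print(f"{length:3.1f} mm rail: {ir_drop_mv(length):6.1f} mV drop")
```

Against the narrow noise margins of a 1.8-volt or lower supply, drops of this order can no longer be ignored at sign-off.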
Scheffer also discusses the problem of heading off faults that only show up in the field, after circuits have been tested. This class of headaches arises from wear-out mechanisms: hot-electron effects, wire fatigue and electromigration. It is probably no surprise that shrinking circuit geometries are aggravating the situation. In addition, new processes designed to solve other problems, such as low-k dielectrics, are making matters worse. The lower-dielectric-constant materials have become an issue because they are poor thermal conductors.
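Black's equation, the standard electromigration lifetime model (not one cited in the article), makes the thermal point concrete: because failure rate depends exponentially on temperature, a wire that runs only modestly hotter under a poorly conducting low-k dielectric loses a large fraction of its expected life. The activation energy, exponent and temperatures below are assumed, typical textbook values.

```python
# Black's-equation estimate of electromigration lifetime scaling (standard model,
# assumed parameters); it shows why poorer heat removal shortens interconnect life.
import math

K_BOLTZMANN_EV = 8.617e-5      # Boltzmann constant, eV/K

def mttf_relative(current_density_ratio, temp_k, n=2.0, ea_ev=0.7, ref_temp_k=358.0):
    """Median time to failure relative to a reference condition (Black's equation)."""
    arrhenius = math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k)
                         - ea_ev / (K_BOLTZMANN_EV * ref_temp_k))
    return (current_density_ratio ** -n) * arrhenius

# Same current density, but the wire runs 20 K hotter because the dielectric
# conducts heat poorly (assumed figure):
print(f"lifetime scaling at +20 K: {mttf_relative(1.0, 378.0):.2f}x")   # roughly 0.30x
```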
Heading off the design integrity problems requires a change in the entire design process, not just better testing. A chip could be correct according to test and verification when it leaves the fab but could still contain problems that only show up later in its life cycle.
The following articles detail a variety of new design and test tools that are being developed to head off this complex set of problems. Circuit designers and test engineers are betting on the new tools to help them get a grasp on SoC complexity.