Noise awareness catches timing flaws
By Jim McCanny, Vice President, Business Development, CadMOS Technology Inc., San Jose, Calif., EE Times
October 16, 2000 (3:51 p.m. EST)
URL: http://www.eetimes.com/story/OEG20001016S0054
In the move to ultradeep-submicron processes, at 0.25-micron design rules or below, system-on-chip designers are finding new challenges in achieving timing closure. Chief among these are timing problems that now appear in silicon despite apparently successful verification of timing performance by traditional electronic design automation (EDA) tools and techniques.
As designers push clock frequencies and shrink process technologies, traditional EDA approaches have begun to fail in their ability to address key electrical effects related to noise and signal integrity. Designers need to apply newer tools and techniques to catch noise and signal-integrity problems in ultradeep-submicron (UDSM) system-on-chip (SoC) designs.
One of those new approaches, "noise-aware" timing analysis, has already proven essential for catching timing problems and ensuring accurate timing convergence prior to silicon manufacturing.
In the final step before manufacturing, timing sign-off is intended to ensure that a design will operate at the required clock frequencies. Any deviation from performance objectives forces design changes and optimization to fix the reported timing problems. Designers aim to complete this timing-closure process in as few iterations as possible, to avoid further delays in product schedules. In the move to UDSM process technologies, however, designers typically find that they do need additional design iterations, because actual silicon performance differs markedly from the performance predicted by timing verification tools during timing sign-off.
Each new UDSM process shrinks feature sizes, wire widths and wire spacings. At the same time, as designers try to squeeze more function into SoCs, die sizes tend to remain relatively constant. Consequently, average wire length has remained relatively constant despite the decreasing pitch. With each reduction of wire width, total wire capacitance decreases, but the fraction of wire capacitance represented by lateral coupling increases dramatically. What's more, the continued demand for higher performance translates into requirements for higher clock frequencies with much faster switching signals, and the faster a signal switches, the more noise it couples onto neighboring lines. Consequently, in this environment of ultradense, high-speed SoC devices, timing verification with crosstalk analysis becomes essential to determining whether a design will function as expected at the required performance levels.
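A crude parallel-plate model illustrates the trend. The geometry numbers in the sketch below are purely hypothetical, and the model ignores fringing fields, so only the direction of the coupling-fraction trend is meaningful:

```python
# Toy parallel-plate estimate of per-unit-length wire capacitance.
# All geometry values are invented and fringing is ignored; only the
# trend in the lateral-coupling fraction is the point.
EPS = 1.0  # normalized dielectric permittivity

def wire_caps(width, spacing, thickness, height):
    """Return (cap to ground plane, lateral cap to both neighbors)."""
    c_ground = EPS * width / height           # area capacitance to substrate
    c_couple = 2 * EPS * thickness / spacing  # sidewall capacitance, two sides
    return c_ground, c_couple

# (width, spacing, thickness, dielectric height) in microns, hypothetical
nodes = {"0.35 um": (0.50, 0.50, 0.60, 0.80),
         "0.25 um": (0.35, 0.38, 0.55, 0.75),
         "0.18 um": (0.25, 0.30, 0.50, 0.70)}

for node, geom in nodes.items():
    cg, cc = wire_caps(*geom)
    print(f"{node}: coupling fraction = {cc / (cg + cc):.0%}")
    # prints roughly 79%, 86%, 90%: coupling dominates more at each shrink
```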
Over the years, static timing analysis has emerged as the preferred approach for timing verification. Yet conventional static timing analysis tools ignore crosstalk, a pervasive electrical effect in UDSM designs.
Crosstalk arises from the simultaneous switching of adjacent wires and can seriously degrade signal integrity. In fact, it is common for crosstalk to change the delay of a signal by more than 100 percent. If the affected signal forms part of a critical maximum-delay path, the extra delay due to crosstalk can result in a setup failure, where the signal arrives too late at a latch or a flip-flop.
Crosstalk can also decrease signal delays. For example, if an "aggressor" signal switches in the same direction as a "victim" signal that is part of a critical minimum-delay path, crosstalk can lead to hold violations, because data signals arrive too early at latch or flip-flop inputs. Worse, crosstalk-induced delay effects are not limited to the nets where the coupling occurs, but can ripple through a series of gates, dramatically altering the timing characteristics of critical paths. Setup failures are, of course, undesirable, but they can be removed by slowing down the clock, at the cost of lower overall SoC performance. Hold violations, by contrast, can only be fixed by silicon mask changes, at a tremendous cost in dollars and lost time.
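The arithmetic behind both failure modes is simple. A minimal sketch, with wholly hypothetical path delays and constraints, shows how a crosstalk-induced delay shift turns passing setup and hold checks into violations:

```python
# Hypothetical numbers: slack checks with and without crosstalk-induced
# delay shifts. None of these values come from the article.
clock_period = 4000      # ps
setup_time   = 150       # ps, flip-flop setup requirement
hold_time    = 100       # ps, flip-flop hold requirement

nominal_max_delay = 3700 # ps, longest path with aggressors quiet
nominal_min_delay = 180  # ps, shortest path with aggressors quiet
xtalk_slowdown    = 300  # ps added when aggressors switch against the victim
xtalk_speedup     = 120  # ps removed when aggressors switch with the victim

# Setup: data must arrive a setup time before the next clock edge.
setup_slack = clock_period - (nominal_max_delay + xtalk_slowdown) - setup_time
# Hold: data must not arrive before the hold time expires.
hold_slack = (nominal_min_delay - xtalk_speedup) - hold_time

print(f"setup slack: {setup_slack} ps")  # -150 ps -> setup failure
print(f"hold slack:  {hold_slack} ps")   # -40 ps  -> hold violation
```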
Accurate analysis of crosstalk effects on delay requires special techniques to deal with the volume of parasitic data present in UDSM designs, the non-monotonic waveform shapes that occur when aggressor signals switch, and the complex timing and logic relationships between aggressors and victims. The first step in calculating the impact of crosstalk on delay is the identification of the potential victim nets: those nets that have enough coupling capacitance to warrant investigation.
Using advanced interconnect analysis techniques, crosstalk analysis tools such as CadMOS' CeltIC handle this by analyzing nets to find those whose glitch noise exceeds a particular threshold. Next, CeltIC calculates the nominal delay for each victim net by simulating its drivers and receivers with all possible aggressor nets held quiet. CeltIC then switches the aggressor nets in the same direction as the victim net to calculate the decrease in minimum delay, and finally in the opposite direction to calculate the increase in maximum delay. The differences between the nominal delay and the minimum- and maximum-delay cases give a specific measure of crosstalk's impact on delay.
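A simplified model conveys the idea behind these three passes. The sketch below substitutes a lumped-RC, Miller-factor approximation for the circuit simulation a tool like CeltIC actually performs; the driver and parasitic values are invented, and scaling the coupling capacitance by 1, 0 or 2 stands in for aggressors held quiet, switching with the victim, or switching against it:

```python
# Three-pass delta-delay calculation, approximated with an Elmore-style
# delay and a Miller factor on the coupling capacitance. Illustrative
# stand-in only; not CeltIC's actual computation.
LN2 = 0.693  # RC step-response constant

def stage_delay(r_drive, c_ground, c_couple, miller):
    """Delay with coupling capacitance scaled by a Miller factor:
    1 = aggressors quiet, 0 = switching with the victim,
    2 = switching against the victim."""
    return LN2 * r_drive * (c_ground + miller * c_couple)

r, cg, cc = 1000.0, 0.20e-12, 0.15e-12  # ohms, farads (hypothetical)

nominal = stage_delay(r, cg, cc, miller=1)  # aggressors held quiet
min_dly = stage_delay(r, cg, cc, miller=0)  # same-direction switching
max_dly = stage_delay(r, cg, cc, miller=2)  # opposite-direction switching

print(f"nominal delay {nominal * 1e12:6.1f} ps")
print(f"speedup       {(nominal - min_dly) * 1e12:6.1f} ps")
print(f"slowdown      {(max_dly - nominal) * 1e12:6.1f} ps")
```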
Crosstalk calculations become even more complex in practice because of timing differences between victim and aggressor signals. For example, differences in the switching slew rates of victim and aggressor signals can dramatically affect the magnitude and extent of crosstalk-induced delay on particular nets. The issue becomes more complicated still when a victim net has multiple attackers. Typically, the worst-case crosstalk occurs when all attackers switch at the same time and in the same direction, so that their noise peaks align. Because of the independent arrival times of signals and the logical exclusivity between signals, however, that worst-case situation may never actually occur. Consequently, when grouping aggressor signals together in such cases, analysis tools such as CeltIC account for the windows of time during which signals are likely to switch, to avoid overestimating crosstalk effects.
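One plausible way to express the windowing idea is a simple interval-overlap test; this is an illustrative assumption, not CeltIC's actual algorithm. An aggressor whose switching window can never intersect the victim's is excluded from the worst-case grouping:

```python
# Window-based aggressor filtering (hypothetical sketch): only aggressors
# whose switching windows can overlap the victim's are combined.
def windows_overlap(w1, w2):
    """Each window is (earliest, latest) possible switching time in ps."""
    return max(w1[0], w2[0]) <= min(w1[1], w2[1])

victim_window = (1000, 1400)
aggressors = {"a1": (900, 1100),   # overlaps the victim: keep
              "a2": (2000, 2300),  # can never align: drop
              "a3": (1300, 1500)}  # overlaps the victim: keep

active = [name for name, w in aggressors.items()
          if windows_overlap(w, victim_window)]
print(active)  # ['a1', 'a3']
```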
Although tools can intelligently factor switching windows and logic relationships into calculations, it is still possible to overestimate noise-on-delay effects. For example, if a victim signal runs orthogonal to a multibit bus, a small amount of coupling capacitance occurs between the victim and the bus at each crossing point. Individually, each small coupling is insignificant, but in combination the effect can be significant and cannot simply be ignored. Here, the worst-case crosstalk occurs when all bits of the bus switch simultaneously, yet the probability of all bits switching together in the same direction is typically very low, unless, of course, all bits are designed to switch together. Any approach that simply assumed they would all switch together would therefore generate a very pessimistic noise-on-delay result. Instead, tools such as CeltIC account for the statistical probability of simultaneous switching when calculating noise-on-delay effects.
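A back-of-the-envelope calculation, with invented coupling and switching numbers, shows why the blanket worst-case assumption is so pessimistic at bus crossings:

```python
# Hypothetical illustration (not CeltIC's method): the worst-case sum of
# many small bus couplings versus the odds they all actually align.
n_bits   = 32
per_bit  = 5.0   # ps of delay push from one crossing, invented
p_switch = 0.25  # chance a given bit switches in the aligning direction

worst_case  = n_bits * per_bit      # all 32 bits align: 160 ps of push
expected    = n_bits * p_switch * per_bit  # statistical expectation: 40 ps
p_all_align = p_switch ** n_bits    # ~5e-20: essentially never happens

print(f"worst case {worst_case} ps, expected {expected} ps, "
      f"P(all align) ~ {p_all_align:.1e}")
```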
As described previously, crosstalk-delay analysis depends on signal arrival times and slew rates drawn from timing analysis results. In turn, correct analysis of a circuit's true timing behavior requires that crosstalk-delay effects be incorporated into timing analysis. To resolve this mutual dependency, designers need an iterative verification approach that lets them refine results by cycling through crosstalk and timing analyses.
This iterative approach starts with an assumption of worst-case conditions and refines the results through a few short iterations. Here, designers first find worst-case crosstalk effects on delay by assuming the fastest possible switching on all aggressor nets and ignoring timing relationships between aggressor and victim nets.
Next, they apply this worst-case noise result to the worst-case noise-free timing to obtain worst-case noise-with-timing. Designers can use these results in turn to generate more accurate switching windows for the next iteration of the noise calculation. The results are then fed back into the timing computations until the analysis converges, typically within only about three iterations.
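The control flow reduces to a small fixed-point loop. In the sketch below, the two analysis functions are toy stand-ins, chosen only so the numbers visibly converge; in practice each step would be a full crosstalk run and a full static timing run:

```python
# Skeleton of the iterative noise/timing refinement described above.
# Both analysis bodies are hypothetical contractions, not real tools.
def noise_analysis(window_span):
    """Pretend delay deltas grow with how loose the switching windows are."""
    return 0.2 * window_span  # ps of delta delay

def timing_analysis(delta):
    """Pretend refined windows tighten as the delay deltas shrink."""
    return 200.0 + 0.5 * delta  # ps of window span

span = 1000.0  # iteration 0: worst case, very loose switching windows
for i in range(10):
    delta = noise_analysis(span)      # crosstalk run with current windows
    new_span = timing_analysis(delta) # timing run with current deltas
    print(f"iter {i}: delta={delta:.1f} ps, window span={new_span:.1f} ps")
    if abs(new_span - span) < 1.0:    # fixed point: results stopped moving
        break
    span = new_span
# converges after about three refinements, matching the article's claim
```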
Crosstalk woes
Using this iterative approach, engineers have found that high-speed, high-density designs that otherwise met all timing goals nevertheless showed failing nets when analyzed for crosstalk-induced delays by CeltIC.
In one recent example, after static timing analysis showed the circuit met all timing goals, an initial iteration of noise analysis identified 10 percent of the nets as affected by noise, with delay deltas ranging from 25 picoseconds to 633 ps. When the incremental delay changes were fed to static timing analysis, 87 paths showed setup- and hold-time failures, with the worst path exhibiting a negative slack of 312 ps. After the second iteration of noise analysis, 7.5 percent of all nets had their delays affected by crosstalk. Feeding those delay changes to static timing analysis yielded 58 failing paths and a worst negative slack of 285 ps.