BIST versus ATPG -- separating myths from reality
By Stephen Pateras, EEdesign
November 27, 2002 (3:07 p.m. EST)
URL: http://www.eetimes.com/story/OEG20021127S0040
There is a rapidly growing interest in the use of structural techniques for testing random logic. In particular, much has been published on new techniques for on-chip compression of automatic test pattern generation (ATPG) patterns in order to reduce ever-growing pattern sizes. These techniques -- some oddly named using the term built-in self test (BIST) -- do indeed represent improvements on existing ATPG methodologies. However, they all fall short in addressing the real testing needs of today's complex designs. This article attempts to explain today's real testing requirements and how they are addressed using a truly embedded at-speed logic BIST methodology.
Common scan architecture
Virtually all structural logic test methodologies are based on a full scan infrastructure. That is, all storage elements (flip-flops or latches) are connected together into several scan chains so that in test mode, data can be serially scanned into and out of these storage elements. Applying a test pattern consists of scanning in the pattern data, applying one or more functional clock cycles, and then scanning out the captured response data.
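To make the scan-in, capture, scan-out sequence concrete, here is a minimal Python sketch of test application on a toy eight-flop scan chain. The chain model, the capture_logic function and all other names are illustrative assumptions, not part of any commercial flow.

def capture_logic(state):
    # Hypothetical combinational block standing in for the design logic:
    # each flop captures the XOR of its two neighbours.
    n = len(state)
    return [state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n)]

class ToyScanChain:
    def __init__(self, length=8):
        self.flops = [0] * length

    def shift(self, scan_in_bit):
        # One scan-shift cycle: a bit enters at the head of the chain and the
        # tail bit appears at the scan-out pin.
        scan_out_bit = self.flops[-1]
        self.flops = [scan_in_bit] + self.flops[:-1]
        return scan_out_bit

    def capture(self):
        # One functional clock cycle: the flops capture the logic's response.
        self.flops = capture_logic(self.flops)

def apply_pattern(chain, pattern):
    # Scan in the stimulus (the bits shifted out here belong to the previous
    # pattern's response), pulse the capture clock, then unload the response.
    for bit in reversed(pattern):
        chain.shift(bit)
    chain.capture()
    return [chain.shift(0) for _ in range(len(pattern))]  # unloaded tail-first

chain = ToyScanChain()
print("captured response:", apply_pattern(chain, [1, 0, 1, 1, 0, 0, 1, 0]))

In a real flow the scan-out of one pattern is overlapped with the scan-in of the next, but the three-step sequence is the same.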
In the traditional ATPG flow, deterministic test patterns are pre-generated using a gate-level representation of the design netlist. These patterns are then stored in tester memory and scanned into the circuit using a limited number of parallel scan chains. The number of scan chains is limited by such factors as available chip I/O, available tester channels and on-chip routing congestion.
Commercially available logic BIST solutions and the newly introduced ATPG-based compression approaches build upon the scan infrastructure by adding an on-chip pattern generator that feeds the scan chains, and an on-chip result compressor that compresses the scanned-out responses of all patterns into a final signature. This architecture allows for a much larger number of parallel scan chains, as they no longer have to be routed to chip pins.
Where similarities end
The ATPG compression approach uses an on-chip pattern generator as a decompressor. Pre-compressed deterministic patterns are stored in the tester. They are then sequentially loaded into the on-chip pattern generator, which simultaneously decompresses and scans the resulting pattern data into the parallel scan chains.
In comparison, the logic BIST approach uses an on-chip pseudo-random pattern generator (PRPG). Once initialized, the PRPG independently creates any number of random patterns to be scanned into the parallel scan chains. No data needs to be stored on the tester.
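The following Python sketch models the on-chip machinery just described: a PRPG built from a 16-bit LFSR that fills the scan chains with pseudo-random stimulus, and a simplified multiple-input signature register (MISR) that compacts the scanned-out responses into a single signature. The polynomial, register widths, seed and the stand-in "circuit response" are assumptions chosen for readability, not details of any particular product.

class PRPG:
    # Pseudo-random pattern generator modeled as a 16-bit Fibonacci LFSR,
    # using the widely quoted x^16 + x^14 + x^13 + x^11 + 1 example polynomial.
    def __init__(self, seed=0xACE1):
        self.state = seed

    def next_bit(self):
        bit = ((self.state >> 0) ^ (self.state >> 2) ^
               (self.state >> 3) ^ (self.state >> 5)) & 1
        self.state = (self.state >> 1) | (bit << 15)
        return bit

class MISR:
    # Simplified multiple-input signature register: shift with polynomial
    # feedback, then XOR in a word of parallel response bits (one per chain).
    POLY = 0x1021  # illustrative 16-bit feedback polynomial

    def __init__(self, width=16):
        self.width = width
        self.mask = (1 << width) - 1
        self.state = 0

    def compact(self, response_bits):
        carry = (self.state >> (self.width - 1)) & 1
        self.state = ((self.state << 1) & self.mask) ^ (self.POLY if carry else 0)
        self.state ^= sum((b & 1) << i for i, b in enumerate(response_bits)) & self.mask

def run_bist(num_patterns=1000, num_chains=16, chain_length=32):
    prpg, misr = PRPG(), MISR()
    for _ in range(num_patterns):
        # The PRPG fills every scan chain with pseudo-random stimulus.
        chains = [[prpg.next_bit() for _ in range(chain_length)]
                  for _ in range(num_chains)]
        # Stand-in for the circuit under test: for brevity each chain
        # contributes a single parity bit per pattern, whereas real hardware
        # feeds the MISR on every scan-out cycle.
        response = [sum(bits) & 1 for bits in chains]
        misr.compact(response)
    # On silicon, this final value is compared against a pre-computed golden signature.
    return misr.state

print(f"final signature: 0x{run_bist():04X}")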
There are a number of myths that have surprisingly developed on the advantages of using ATPG-related compression approaches versus leading logic BIST solutions. The most important of these are addressed next.
Myth #1: ATPG achieves better fault coverage than logic BIST
The argument behind this myth is that the use of random patterns leaves logic BIST unable to achieve the same level of stuck-at fault coverage as deterministic patterns. It is true that many designs require a large number of random patterns to achieve high stuck-at fault coverage. However, such designs can be modified by inserting scan-accessed test points to increase their random-pattern testability.
Empirical evidence shows that when roughly 1 test point is added per 1,000 gates (about 1 percent overhead), stuck-at fault coverages comparable to those achieved with deterministic ATPG can be obtained with a reasonable number of random patterns (typically in the 50K to 100K range). Test points also help reduce the number of required deterministic patterns.
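The effect of test points on random-pattern testability can be illustrated with a back-of-the-envelope calculation. A stuck-at fault buried behind a k-input AND cone is excited by a random pattern with probability of roughly 2^-k, so a control or observe test point that splits the cone raises the per-pattern detection probability dramatically. The cone sizes, confidence level and cone-halving assumption below are illustrative and are not figures from the article.

import math

def patterns_needed(p_detect, confidence=0.99):
    # Random patterns needed to detect a fault whose per-pattern detection
    # probability is p_detect, with the requested statistical confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_detect))

for k in (8, 12, 16, 20):
    without_tp = patterns_needed(2.0 ** -k)       # random-pattern-resistant fault
    with_tp = patterns_needed(2.0 ** -(k // 2))   # cone split in two by a test point
    print(f"{k:2d}-input cone: {without_tp:>10,} patterns -> {with_tp:>6,} with a test point")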
Despite the above comparisons, chip quality really depends on physical defect coverage and not simply coverage of stuck-at modeled faults. A number of studies have shown that true defect coverage is proportional to the number of times each modeled fault is detected. These studies show that the large number of random patterns used by logic BIST results in significantly greater defect coverage than that achieved by the limited number of deterministic patterns used by ATPG approaches (see figure 1).
Figure 1 - Defect coverage comparison
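The n-detect idea behind these studies can be sketched in a few lines of Python: tally how many times each modeled fault is detected by a given pattern set. The four-input toy circuit, its three internal fault sites and the hand-picked compact pattern set below are purely illustrative; the point is only that a compact deterministic set detects each fault a handful of times, while a large pseudo-random set detects most faults hundreds of times.

import random

def good_circuit(a, b, c, d):
    # Toy circuit: out = (a AND b) OR (c XOR d)
    return (a & b) | (c ^ d)

def faulty_circuit(a, b, c, d, fault):
    # Re-evaluate the circuit with one internal net stuck at 0 or 1.
    net, value = fault
    ab, cxd = a & b, c ^ d
    if net == "ab":
        ab = value
    if net == "cxd":
        cxd = value
    out = ab | cxd
    if net == "out":
        out = value
    return out

FAULTS = [(net, v) for net in ("ab", "cxd", "out") for v in (0, 1)]

def detection_counts(patterns):
    # n-detect profile: how many patterns in the set detect each fault.
    counts = {f: 0 for f in FAULTS}
    for a, b, c, d in patterns:
        good = good_circuit(a, b, c, d)
        for f in FAULTS:
            if faulty_circuit(a, b, c, d, f) != good:
                counts[f] += 1
    return counts

compact_set = [(1, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 0), (1, 1, 1, 0)]  # 100% stuck-at coverage
random.seed(1)
random_set = [tuple(random.randint(0, 1) for _ in range(4)) for _ in range(1000)]

compact_n, random_n = detection_counts(compact_set), detection_counts(random_set)
for f in FAULTS:
    print(f"stuck-at-{f[1]} on {f[0]:>3}: {compact_n[f]:2d} detections (compact) "
          f"vs {random_n[f]:4d} (random)")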
Myth #2: ATPG approaches support at-speed test
It is now generally accepted that to achieve desired quality levels in today's complex high-speed designs, good coverage of delay-related defects must be achieved. Delay defects are tested by applying sets of two consecutive patterns, where the circuit response from the second pattern is captured at system speed. There are two basic techniques to deliver at-speed tests using ATPG approaches.
The first is often referred to as the "launch-from-shift" technique. In this technique, after the first pattern is scanned in, the second pattern is obtained by performing one more shift; the scan-enable pin is then toggled and the clock pulsed to capture the results within one at-speed clock period. However, use of this technique requires that the scan-enable signal operate at full speed.
The tester must also be capable of providing very accurate pin-to-pin timing between the scan-enable pin and one or more clock pins (see figure 2).
Figure 2 - Timing requirements for "launch from shift" scan
The second at-speed test technique, often called "launch-from-capture" or "double-capture," removes the at-speed scan-enable requirement. In this technique, the second pattern is obtained from the circuit response to the first pattern. However, this requires sequential ATPG, which is not only much more CPU-intensive than combinational ATPG but also typically produces an unacceptably large number of test patterns.
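The difference between the two at-speed techniques can be seen in a small sketch: under launch-from-shift the second pattern is simply the chain contents shifted by one more position, whereas under launch-from-capture it is whatever the circuit itself produces in response to the first pattern. The eight-bit chain and the next_state function below are illustrative assumptions.

def next_state(state):
    # Hypothetical combinational logic feeding the flops (the capture result).
    n = len(state)
    return [state[i] ^ state[(i + 1) % n] for i in range(n)]

def launch_from_shift(first_pattern, scan_in_bit=0):
    # Second pattern = the chain shifted one more position. The ATPG can stay
    # combinational, but scan-enable must switch within one at-speed cycle.
    return [scan_in_bit] + first_pattern[:-1]

def launch_from_capture(first_pattern):
    # Second pattern = the circuit's own response to the first pattern. No
    # at-speed scan-enable is needed, but the ATPG becomes sequential.
    return next_state(first_pattern)

v1 = [1, 0, 1, 1, 0, 0, 1, 0]
print("first pattern        :", v1)
print("launch (from shift)  :", launch_from_shift(v1))
print("launch (from capture):", launch_from_capture(v1))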
In contrast, the leading commercially available logic BIST solution provides true at-speed testing. Field-proven techniques provide on-chip support for at-speed scan-enable signals and multiple asynchronous clocks. Other field-proven techniques provide on-chip support for testing logic at clock domain boundaries. No tester pin-to-pin timing accuracy of any kind is required.
Myth #3: ATPG approaches easily scale with growing chip sizes
To deal with growing chip sizes, most current design flows are hierarchical in nature. In particular, modules or cores are often reused at different levels within the design. Despite this, commercial ATPG tools typically operate on the fully flattened netlist. This results in ever-growing CPU requirements and test pattern volumes, and it has a significant impact on the design cycle, since a change to any part of the design requires a complete regeneration of the deterministic test patterns.
In some cases, cores can be dealt with separately by fully isolating them with scan cells. However, the resulting overhead is typically prohibitive. Even the pattern volume reductions obtainable with the new ATPG compression approaches represent only a one-time improvement. As design sizes grow exponentially, test pattern volumes will once again become a problem.
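A back-of-the-envelope calculation makes the one-time-improvement point concrete: if scan data volume roughly doubles with each design generation, a fixed 10x compression ratio is consumed within a few generations. The starting volume, growth rate and compression ratio below are assumed values chosen only for illustration.

base_volume_gb = 1.0          # assumed uncompressed scan data volume today (gigabits)
compression_ratio = 10.0      # assumed one-time ATPG compression gain
growth_per_generation = 2.0   # assumed growth in test data per design generation

volume = base_volume_gb / compression_ratio
for generation in range(6):
    note = "  <- back above today's uncompressed volume" if volume > base_volume_gb else ""
    print(f"generation {generation}: compressed volume ~ {volume:5.2f} Gb{note}")
    volume *= growth_per_generation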
With commercial logic BIST, hierarchical cores are made self-testable independently of other cores (see figure 3). Some patented techniques allow isolation of the core during test using little or no overhead. Design changes in one core do not affect the logic BIST capabilities inserted in other cores. In particular, a core with logic BIST can be reused "as-is" without any modifications to the existing logic BIST capabilities.
Figure 3 - BIST added to each core
Other advantages to logic BIST
In addition to dispelling the above myths, a number of other arguments can be made for the use of true logic BIST over ATPG and related compression approaches. Because logic BIST requires neither the storage of test pattern data nor external control of clocks, it can be reused during board- and system-level testing. This not only reduces board and system manufacturing test development costs, but also helps time-to-market through faster hardware debug. When a chip fails functionally in the system, it can be debugged more reliably by running BIST in situ -- without de-soldering.
Another major point is that logic BIST can also be used for dynamic burn-in. Parallel execution of logic BIST on all devices on a burn-in board can be achieved using only the low-speed IEEE 1149.1 interface for board-level access. Pre-burn-in tests can even be applied using the burn-in board, eliminating a test insertion.
Conclusion
ATPG-based flows continue to add techniques in an attempt to meet the testing challenges of today's complex designs. However, leading commercial logic BIST capabilities, originally developed to address these high-end test challenges, have matured over the past several years into field-hardened, field-proven solutions.
Stephen Pateras is director of engineering for manufacturing test software at LogicVision Inc. He has over 10 years of extensive background in system design, ASIC design, operating systems, DFT, BIST, and fault tolerance. Prior to LogicVision, Pateras worked for IBM as a team leader in the system technology and architecture divisions, responsible for the development and execution of test strategies for the CMOS mainframe design project.