Shifting from functional to structured techniques improves test quality
By Rohit Kapur, Principal Engineer, Mouli Chandramouli, Product Line Manager, Test Automation Products Group, Synopsys, Inc., Mountain View, Calif., EE Times
March 10, 2003 (5:43 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030228S0046
It is becoming clear that functional testing of integrated circuits, the oldest and most widely used method in the semiconductor industry, has reached the limits of its effectiveness.
Functional test methods rely on a set of pre-existing test patterns that were generated for functional verification of the design logic. But functional tests fail to test all parts of the IC evenly and offer poor fault coverage.
Structured test methods, which are replacing functional testing in the highest-density devices, offer a rational alternative. They use no more test patterns than functional test, but generate those patterns to detect targeted faults, achieving higher test quality and more efficient use of test resources, and thereby a significantly reduced cost of test.
For years, test technologists have studied the differences between the two approaches and have come to grips with the fact that structured test really works. In dealing with the different structured test approaches, one should not forget the basic reasons behind structured test itself.
The best test for any device is to test it exhaustively. The quality of such a test would be perfect, since any defective IC would be identified and the shipped ICs would all be defect-free. However, this is not possible in the real world due to a number of practical limitations, including limited tester memory, limited test application time, and limited time to generate the test suite.
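The infeasibility of exhaustive testing follows from simple arithmetic: the pattern count grows as two to the power of the number of inputs plus state bits. A minimal sketch, with hypothetical circuit sizes chosen only for illustration:

```python
def exhaustive_pattern_count(primary_inputs: int, flip_flops: int = 0) -> int:
    """Patterns needed to exercise every input/state combination exhaustively."""
    return 2 ** (primary_inputs + flip_flops)

# A tiny 10-input combinational block is still exhaustible...
print(exhaustive_pattern_count(10))        # 1024 patterns

# ...but a modest sequential design is not: 64 inputs plus 100 flip-flops.
print(exhaustive_pattern_count(64, 100))   # 2**164 patterns, far beyond any tester
```

At any practical tester rate, the second figure translates to test times longer than the age of the universe, which is why testing must instead target a tractable subset of likely failures.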
In the absence of truly exhaustive testing, the ideal alternative would be to first itemize all the defect mechanisms, and categorize them according to their probability of occurrence.
After sorting on the basis of their probability of occurrence, the defects would be partitioned, and a best-guess subset would be targeted for test-pattern creation and application. While such a scenario is quite logical, the requirement to list the defects is unrealistic.
Defects can manifest themselves in too many ways. Defects can affect a large part of the silicon or they can affect only small locations of the silicon. When they affect small locations, they are called spot defects. These spot defects are extra or missing material in the IC that may occur in any of the many steps in the IC manufacturing process.
Spot defects are the primary target for test, since they are much harder to detect than defects that affect a large part of the silicon. Instead of listing these defects, abstract representations of the defects, called faults, are used.
The faults guide the process of test creation and evaluation to areas in the design where the defects are located. Clearly, the quality of the fault-abstraction process is an important factor in the quality of the tests created. The most common fault models known today are the single stuck-at fault model, the bridging fault model, the transition fault model and stuck-open fault model.
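The single stuck-at model can be made concrete with a toy fault simulation. The sketch below uses a hypothetical two-gate netlist (c = a AND b, y = NOT c); the net names and structure are illustrative only. Each net is hypothesized stuck at 0 and stuck at 1, and a fault counts as detected when some pattern makes the faulty circuit's output differ from the good circuit's:

```python
from itertools import product

def simulate(a, b, fault=None):
    """Evaluate the toy netlist; `fault` is a (net_name, stuck_value) pair or None."""
    def net(name, val):
        return fault[1] if fault and fault[0] == name else val
    a, b = net("a", a), net("b", b)
    c = net("c", a & b)      # AND gate
    y = net("y", 1 - c)      # inverter
    return y

# Fault list: every net stuck-at-0 and stuck-at-1 (8 faults total).
faults = [(n, v) for n in ("a", "b", "c", "y") for v in (0, 1)]

def fault_coverage(patterns):
    detected = {f for f in faults
                for p in patterns
                if simulate(*p, fault=f) != simulate(*p)}
    return len(detected) / len(faults)

print(fault_coverage([(1, 1)]))                        # 0.5  (one pattern)
print(fault_coverage(list(product((0, 1), repeat=2))))  # 1.0  (all four patterns)
```

This is exactly the metric by which functional patterns score poorly: a pattern set chosen for verification, not fault detection, typically leaves many such faults unexercised.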
Once a set of faults is created or hypothesized, tests can be generated to differentiate the circuit with faults from the circuit without any faults. Functional tests are generated from a set of tests that already exist for functional verification of the design.
These test patterns, when evaluated against any fault model, perform poorly in terms of fault coverage, as they were never created for the purposes of detecting these faults. Functional tests are randomly created as far as testing of the structure is concerned, and randomly created tests are sloppy.
This means that the functional tests are not testing all parts of the IC evenly. And since the faults model the most probable defect mechanisms, tests that miss them are also inefficient.
Better quality tests
One must remember that with limited resources one cannot afford to be too sloppy in utilizing the resources. Structured test takes a cost-effective approach to test. That is, with the same number of test patterns, the quality of the test is expected to be better with structural tests, since these detect targeted faults.
Structured tests detect targeted faults because such faults are the only affordable abstract failure mechanisms that can be tested with limited resources.
It has been experimentally found that fault oriented testing does in fact effectively detect the defects in ICs. To assist the creation of these tests, typical designs are modified to include scan chains in the design.
These scan chains allow for arbitrary initialization of the design to any state, and arbitrary observation of internal states in the design. Scan chains inserted in the design are used by ATPG-based deterministic test and/or Logic BIST architectures.
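The scan load/capture/unload cycle can be sketched in a few lines. The model below is a simplification with a hypothetical 3-bit chain and made-up next-state logic; real scan architectures add multiplexed scan inputs, clock control, and compression, but the principle is the same: shift in an arbitrary state, pulse one functional clock, shift out the response.

```python
def shift_in(chain_length, pattern):
    """Serially load `pattern` into the scan chain, one shift clock per bit.
    The last bit shifted ends up at position 0 of the chain."""
    chain = [0] * chain_length
    for bit in pattern:
        chain = [bit] + chain[:-1]
    return chain

def capture(state, next_state_logic):
    """One functional clock: flip-flops latch the combinational response."""
    return next_state_logic(state)

# Hypothetical combinational logic: rotate the state and invert the wrapped bit.
logic = lambda s: [1 - s[-1]] + s[:-1]

loaded = shift_in(3, [1, 0, 1])
print(loaded)                    # [1, 0, 1] -- arbitrary initialization achieved
print(capture(loaded, logic))    # [0, 1, 0] -- internal response, now observable
```

The unload of one response is normally overlapped with the load of the next pattern, so each pattern costs roughly chain-length shift cycles plus one capture cycle.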
Once a state is initialized, any sequence of events can be applied as needed to detect faults. Then, the internal states of the design are observed to determine the outcome of the tests. The quality of the test, as measured by the different defect mechanisms covered, depends on the flexibility in the sequencing of the tests in any structural test method.
Structured test methods differ in quality, since the sloppiness of the test could stress limitations in resources such as test-data-volume and test-application-time.
The memory taken by the test patterns has a binary effect: once ATE memory is exceeded, the remaining patterns cannot be applied and the targeted fault coverage cannot be achieved. Unused memory on the ATE, on the other hand, affects neither the quality nor the efficiency of the test.
Test application time, by contrast, does not have as binary an effect on quality. The shorter the test application time, the less expensive the test. For this reason, it is important not to be sloppy in the creation of the test patterns that detect the faults.
Stored-pattern deterministic test methods tend to need fewer test patterns than random sources of stimulus. Random Logic BIST methods are sloppy in detecting faults and their associated defects, and hence work against some of the fundamental reasons for structured test in the first place.
Functional test cannot offer the quality and defect coverage available with today's structured test. Since the faults model the most probable defect mechanisms, tests that miss them are inefficient. Randomly created tests are sloppy, and therefore Random Logic BIST tests are not efficient in testing the IC. With limited test resources, one cannot afford to be sloppy in test resource allocation. Stored-pattern test takes a cost-effective approach: with the same number of test patterns, stored-pattern structured test effectively detects targeted faults, raising the quality of the test and making it a must-have for IC test today.