SoC testing becomes a challenge
By Ron Wilson, EE Times
March 27, 2003 (4:42 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030327S0025
From the beginning, test has been the poor stepchild of integrated circuit design, ranking somewhere below verification in status, attention and resources. In many organizations test is considered not a design function but rather a part of the manufacturing startup process. Still, just as increasing system-on-chip complexity has elevated verification to an importance rivaling design, fears, near-misses and dire warnings suggest that test is about to take its place as a vital, must-fund design issue that managers will underfund at their imminent peril.
As with verification, sheer complexity is part of the driving force behind this new urgency for SoC test. Conventional functional-test techniques (sufficient, say, for an IC of a few thousand gates, or for a highly regular IC of a few hundred thousand gates) simply crash and burn on a multimillion-gate SoC comprising a variety of different kinds of cores. Gate count is an issue. But the variety of functional modes, failure modes and test approaches has become an issue as well.
Along with complexity, process evolution has become a second major culprit. The kinds of failures that are most likely in a given process change from process to process, and they change as a given process matures. This means that the style of testing has to evolve with the changes in faults.
Traditionally, test has been concerned with just one kind of fault: the short or open circuit caused by a defect on the die. Until recently this was by far the most common type of fault. Initially, test coverage for these "stuck-at" faults followed the same methodology as brute-force design verification: feed enough test vectors into the chip's input pins to exercise all the operating modes, and see if anything breaks.
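The stuck-at model is easy to make concrete. The sketch below is a toy Python model, with the netlist, the fault site and the vectors all invented for illustration: a small gate-level circuit is simulated once fault-free and once with one net forced to a constant, and each applied vector either exposes the difference at the output or misses it.

# Toy fault simulation: evaluate a tiny gate-level netlist once fault-free and
# once with one net stuck at a constant, then see which vectors catch it.
# Netlist, fault site and vectors are all invented for this illustration.

def simulate(netlist, inputs, stuck_at=None):
    """Evaluate the netlist; optionally force one net ('name', 0 or 1)."""
    values = dict(inputs)
    if stuck_at:
        values[stuck_at[0]] = stuck_at[1]        # covers faults on input nets
    for out, (op, ins) in netlist:
        operands = [values[i] for i in ins]
        if op == "AND":
            values[out] = int(all(operands))
        elif op == "OR":
            values[out] = int(any(operands))
        elif op == "NOT":
            values[out] = 1 - operands[0]
        if stuck_at and out == stuck_at[0]:
            values[out] = stuck_at[1]            # fault overrides the gate
    return values

NETLIST = [("n1", ("AND", ["a", "b"])),          # y = (a AND b) OR (NOT c)
           ("n2", ("NOT", ["c"])),
           ("y",  ("OR",  ["n1", "n2"]))]

VECTORS = [{"a": 1, "b": 1, "c": 1},
           {"a": 0, "b": 1, "c": 0}]

FAULT = ("n1", 0)                                # net n1 stuck-at-0

for vec in VECTORS:
    good   = simulate(NETLIST, vec)["y"]
    faulty = simulate(NETLIST, vec, stuck_at=FAULT)["y"]
    print(vec, "-> detects fault" if good != faulty else "-> misses fault")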
A few years ago, as SoC complexity began to increase, functional test began to fall from favor. It became clear that there was so much state (and often, so many operating modes) in an SoC that functional vectors applied until doomsday would not fully exercise the chip. New techniques were applied using structural, rather than functional, vectors. In these techniques, software could deduce from the chip's netlist how the SoC should be partitioned into blocks, isolated by scan chains. Then the software derived minimal test patterns to force transitions in every net. Automatic test pattern generation (ATPG) triumphed.
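Production ATPG engines use structural algorithms such as the D-algorithm or PODEM; the brute-force sketch below only illustrates the goal. For each possible stuck-at fault in a tiny invented circuit, it searches the input space for a pattern whose faulty response differs from the good one.

# Brute-force illustration of test pattern generation for stuck-at faults.
# Real ATPG tools work structurally on the netlist; this sketch simply
# enumerates input patterns for an invented three-input circuit.
from itertools import product

INPUTS = ["a", "b", "c"]

def evaluate(pattern, fault=None):
    """Return the circuit output for a pattern, optionally with one net stuck."""
    v = dict(pattern)
    def force(net, val):
        return fault[1] if fault and fault[0] == net else val
    for name in INPUTS:
        v[name] = force(name, v[name])
    v["n1"] = force("n1", v["a"] & v["b"])       # n1 = a AND b
    v["n2"] = force("n2", v["n1"] | v["c"])      # n2 = n1 OR c (circuit output)
    return v["n2"]

nets = INPUTS + ["n1", "n2"]
for net, stuck in [(n, s) for n in nets for s in (0, 1)]:
    found = None
    for bits in product((0, 1), repeat=len(INPUTS)):
        pattern = dict(zip(INPUTS, bits))
        if evaluate(pattern) != evaluate(pattern, fault=(net, stuck)):
            found = pattern
            break
    print(f"{net} stuck-at-{stuck}:", found or "undetectable")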
Recently, an elaboration upon this idea emerged. For some time, memory circuits had been tested not by feeding test vectors in from outside, but by circuitry built around the memory array that generated its own test vectors, clocked the memory and checked its own results: built-in self-test (BIST). As SoC complexity continued to increase, BIST ideas were applied to logic circuits, generating either purely random test patterns (which often was not a good idea) or ATPG-directed pseudo-random patterns. So individual blocks of memory and logic began to test themselves.
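The heart of a logic-BIST scheme can be sketched in a few lines. In the toy model below, an LFSR stands in for the on-chip pattern generator and a small compactor stands in for the signature register; the feedback polynomial, register widths and circuit under test are arbitrary choices for illustration, not any vendor's implementation.

# Minimal logic-BIST sketch: pseudo-random patterns from an LFSR, responses
# folded into a signature that is compared against a known-good value.

def lfsr_patterns(seed, taps, count):
    """Galois-style 4-bit LFSR: yield pseudo-random test patterns."""
    state = seed
    for _ in range(count):
        yield state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps                        # feedback polynomial

def circuit_under_test(pattern):
    """Stand-in for the logic block being tested (invented function)."""
    return (pattern ^ (pattern >> 3)) & 0xF

def signature(responses, width=8):
    """MISR-like compaction: fold the response stream into one signature."""
    sig = 0
    for r in responses:
        sig = ((sig << 1) ^ r) & ((1 << width) - 1)
    return sig

patterns = list(lfsr_patterns(seed=0b1011, taps=0b1001, count=10))
expected = signature(circuit_under_test(p) for p in patterns)
print("patterns:", [f"{p:04b}" for p in patterns])
print("expected signature:", f"{expected:08b}")
# On silicon, the same generator and compactor run against the fabricated
# block; a mismatching signature flags a fault without hauling vectors in
# from the tester.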
Unfortunately, it has been found that in new, dual-damascene copper processes, stuck-at faults are not the problem. The more likely problems are tiny bridges or necks in the interconnect and irregularities in vias, which create high-value resistors either between nets or in series within a net. There is growing evidence that other problems, such as unpredicted coupling between signal lines, either capacitive or inductive, will also present serious failure problems.
These faults will appear not as a stuck signal but as an unexpected increase or decrease in propagation delay, or an unexpected shape in the arriving waveform. Ironically, structural testing, with its focus on connectivity and its execution at far below real-time speed, is not very useful in detecting these faults. So, many test engineers are beginning to look back to functional test and to the incorporation of the designers' original intent.
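A back-of-the-envelope RC estimate shows why such defects escape slow structural tests. With invented, order-of-magnitude values, the extra series resistance of a marginal via stretches a net's delay past the clock period, yet the signal still settles to the correct logic value if the tester waits long enough.

# Why a resistive open looks like a timing fault rather than a stuck-at fault:
# a rough RC estimate with invented, order-of-magnitude values.
import math

C_LOAD   = 50e-15      # assumed load capacitance: 50 fF
R_DRIVER = 2e3         # assumed driver resistance: 2 kOhm
R_DEFECT = 100e3       # assumed extra resistance from a marginal via: 100 kOhm
T_CLOCK  = 2e-9        # assumed clock period: 2 ns (500 MHz)

def t_50(r, c):
    """50% propagation delay of a single RC stage: t = ln(2) * R * C."""
    return math.log(2) * r * c

good      = t_50(R_DRIVER, C_LOAD)
defective = t_50(R_DRIVER + R_DEFECT, C_LOAD)

print(f"good net delay:      {good * 1e12:7.1f} ps")
print(f"defective net delay: {defective * 1e12:7.1f} ps")
print("fails at speed:", defective > T_CLOCK)
# A slow structural tester still sees the correct logic value, because the
# signal does settle eventually; only an at-speed test catches the defect.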
Finally, there is the issue of analog. Increasingly, SoCs have analog content. But, as one frustrated test manager put it, you don't test analog circuits, you characterize them. Analog circuits typically are subjected to calibrated analog input signals and to analog time- or frequency-domain analysis. It is yet another distinct style of testing.
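A small sketch makes the distinction concrete: rather than a pass/fail vector, an analog block is swept across a stimulus range and its response recorded at each point. The block here is a hypothetical first-order RC low-pass filter, with component values invented for illustration.

# Characterization rather than pass/fail: sweep frequency, record gain.
import math

R, C = 1e3, 159e-9     # hypothetical filter: corner frequency near 1 kHz

def gain_db(freq_hz):
    """Magnitude of H(jw) = 1 / (1 + jwRC), in dB."""
    h = 1 / (1 + 1j * 2 * math.pi * freq_hz * R * C)
    return 20 * math.log10(abs(h))

for f in (100, 1e3, 10e3, 100e3):
    print(f"{f:8.0f} Hz: {gain_db(f):6.1f} dB")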
The presence of reused IP in the design complicates everything. Given the variety of sources from which IP is drawn, even for a single project, and the rate at which test methodology has been changing, the test designer will almost certainly be confronted with a variety of IP blocks, each with its own test strategy and test implementation. Often, the block will be presented to the test designer as a black box. So the complex problem of devising a test strategy for a big, deep-submicron SoC is replaced by the far worse problem of stitching together the test strategies, devised in different times and places, of a dozen IP cores, some of which may not be documented in any detail. Just routing all the various test structures into a coherent test interface becomes a challenge; test times can explode, and the designer is lucky if the test suite can be executed on just one tester.
Compounding the problem is the lack of anything resembling standards. Some IP vendors are quite sophisticated at test design and have thought hard about the needs of 130-nm processes. Some have used an aging but robust BIST design and see no need to change it. Others are content simply to pass a subset of the verification suite to the unlucky licensee. Of course, there are no standards for coverage, deliverables or formats.
In many cases, by the time test engineering is brought into the project, the only alternative is to try to stitch together scan chains that encompass the various blocks of reused IP. If full boundary scan (or better) is not provided in the IP block, then the test designers must wrap the block with a boundary-scan chain. Then, depending on how much the design team knows about the internals of the IP, the test engineers must either generate new test vectors or, worst-case, simply employ whatever vectors came with the IP.
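The stitching itself can be pictured as concatenating each core's scan chain into one chip-level shift path. The toy model below (block names and chain lengths invented) shifts a full-chip pattern through three such chains in series.

# Sketch of stitching per-core scan chains into one chip-level chain:
# each block's chain is modeled as a shift register, concatenated end to end.

class ScanChain:
    def __init__(self, name, length):
        self.name = name
        self.cells = [0] * length

    def shift(self, bit_in):
        """Shift one bit in at the head, return the bit falling off the tail."""
        bit_out = self.cells[-1]
        self.cells = [bit_in] + self.cells[:-1]
        return bit_out

# Three reused cores, each delivered with its own internal chain length.
chains = [ScanChain("cpu_core", 5), ScanChain("dsp_core", 3), ScanChain("usb_ip", 4)]
total = sum(len(c.cells) for c in chains)

def shift_pattern(pattern):
    """Clock a full-chip pattern through the concatenated chains."""
    scan_out = []
    for bit in pattern:
        for chain in chains:
            bit = chain.shift(bit)      # tail of one chain feeds the next
        scan_out.append(bit)            # bits emerging at the chip scan-out
    return scan_out

pattern = [1, 0, 1, 1] * 3              # 12-bit pattern for the 12-cell chain
print(f"{total}-cell chain, bits shifted out while loading:", shift_pattern(pattern))
# The bits emerging first are just the chains' initial contents; after loading,
# each core's chain holds its slice of the chip-level pattern.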
"In practice there is very little reuse of tests," conceded Mouli Chandramouli, product line manager at Synopsys and chairman of the Virtual Socket Interface Alliance (VSIA) working group on testability. "There have not been agreed-upon definitions or standards that would support reuse of tests."
What has appeared lately is a handful of important papers on techniques for stitching together scan chains and controlling them. While there may not be a way to automate this process, there are at least guidelines and hints.
A far better approach would be to start with IP blocks that had all used the same methodology for test. To some degree this is possible, for instance if the design team works with an ASIC vendor that provides all the cores from its own library. This can reduce the problem to one of stitching the cores together and figuring out how much of the testing can be done concurrently.
The VSIA has a more general, but much more politically difficult, solution. The organization has proposed a series of standards that would make it much simpler to adapt a core to an SoC test methodology, without imposing a specific methodology on either the core or the designer. "It begins with a simple checklist," Chandramouli said. "The IP developer simply checks off the test structures, vectors and access methods included in the core. This helps greatly with test planning."
The next step, he said, is a standard test access arrangement that would apply to virtually all digital cores. "It took 68 people a year and a half to produce this," he said. "It defines a standard wrapper that can be instantiated around pretty much any core. The wrapper responds to standard commands and connects to scan chains. It provides transparency during normal circuit operation, isolation of the block during tests, scan and test-execution capability. The whole scheme is upward-compatible to IEEE 1149.1."
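The modes described (transparency in normal operation, isolation during test, and scan access) can be mimicked in a toy software model. The sketch below is only a Python illustration of those modes, not the VSIA- or IEEE-defined wrapper itself.

# Toy model of a test wrapper around a core: transparent in normal mode,
# isolating the core in test mode, and usable as a scan path.

class WrapperCell:
    """One boundary cell sitting between a core pin and the rest of the SoC."""
    def __init__(self):
        self.hold = 0

    def pass_through(self, value):       # normal operation: transparent
        return value

    def isolate(self, safe=0):           # test mode: drive a safe value
        return safe

    def shift(self, scan_in):            # scan mode: behave as a shift stage
        out, self.hold = self.hold, scan_in
        return out

class CoreWrapper:
    def __init__(self, n_pins):
        self.cells = [WrapperCell() for _ in range(n_pins)]
        self.mode = "normal"

    def apply(self, pin_values):
        if self.mode == "normal":
            return [c.pass_through(v) for c, v in zip(self.cells, pin_values)]
        return [c.isolate() for c in self.cells]

    def scan(self, bits):
        """Shift a pattern through the wrapper cells, returning scan-out bits."""
        out = []
        for b in bits:
            for c in self.cells:
                b = c.shift(b)
            out.append(b)
        return out

w = CoreWrapper(n_pins=4)
print("normal:  ", w.apply([1, 0, 1, 1]))   # transparent to the SoC
w.mode = "test"
print("isolated:", w.apply([1, 0, 1, 1]))   # core sees safe values
print("scan-out:", w.scan([1, 1, 0, 0]))    # load a test pattern serially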
For the future, there is hope for not just a standardized process but an automated one as well. The Common Test Language (CTL) is being created as a language for defining the test structures, needs and capabilities of a block. CTL code, as part of the deliverables of a reusable IP core, would convey to test developers, and even to test-equipment programmers, all they need to know about the core in order to perform thorough testing. The language is extensible, so that it can eventually carry information on protocols, fault models and even timing. So it could potentially meet the needs of even the next generation of test engineers.
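The snippet below is not CTL syntax. It is only a hypothetical Python structure, with invented values, suggesting the categories of information such a deliverable would carry for one reused core.

# Hypothetical illustration (all names and values invented): the kinds of
# facts a core's test-description deliverable is meant to hand to the
# test developer and the test-equipment programmer.
core_test_info = {
    "core":            "usb_ip",                          # invented core name
    "test_structures": ["internal scan", "memory BIST"],
    "access_method":   "wrapper, 1149.1-compatible",
    "fault_models":    ["stuck-at", "transition delay"],
    "vectors":         {"format": "assumed vector format", "count": 1200},
    "timing":          {"max_scan_clock_mhz": 50},
}

for key, value in core_test_info.items():
    print(f"{key:16} {value}")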
In the meantime, SoC test won't be easy.