SoC Test and Verification -> Modular ASICs ease test problems
By Bob Osann, Founder, Lightspeed Semiconductor, Sunnyvale, Calif., EE Times
December 13, 2001 (11:14 a.m. EST)
URL: http://www.eetimes.com/story/OEG20011213S0031
The largest FPGAs available today can implement close to 1 million (real ASIC) logic gates. At this level of integration, design reuse becomes necessary and these flexible devices finally enter the realm of true system-on-chip densities. However, unlike SoC designs implemented in conventional ASIC technologies, designs are normally placed in FPGAs with little or no regard for test concerns or rules, since FPGAs are tested at the factory.

Because FPGAs provide many benefits, system designers tend to use them as a matter of habit, regardless of whether they can achieve target performance and cost or complete development projects on schedule. Large FPGAs, in particular, carry severe performance limitations that can cripple a development project, and cost levels that can considerably reduce gross margins for system products. The largest FPGAs have enormous die sizes, often approaching the limit (22 mm) of the mask reticle used to pattern a wafer. This causes yields to be very low, sometimes resulting in only a handful of good dice per wafer and a price to the user of many thousands of dollars.

To overcome these cost and speed limitations, FPGA users must sometimes consider replacing large FPGAs with some form of mask-configurable ASIC device. Unfortunately, by the time this decision is made the design is usually complete, and ASIC-related test rules were never a consideration. Many FPGA designers have never heard the term "design for test." It is mostly those who have previous ASIC experience, or who work in an environment where ASICs are also designed, who know DFT rules even exist. As a result, an FPGA design that is migrated to ASIC technology rarely contains logic that adheres to commonly required test rules. Conventional ASIC-SoC technologies, such as standard cells and embedded arrays, require many DFT rules to provide sufficient fault coverage and acceptable device defect rates.
Modular array ASICs contain an array of function modules, but unlike with FPGAs there is no silicon between adjacent modules allocated to reprogrammable wiring. Instead, modules are laid out end-to-end with all interconnect done in the metal layers above. Only the top layer of metal is configured uniquely to implement a particular design, meaning only two custom masks are needed and NRE fees are lower. Contained in the lower layers of metal, where the pattern is common to all designs, is Lightspeed's patented AutoTest technology, which uses a different paradigm from conventional ASICs and so delivers 100 percent coverage of detectable stuck-at faults.
Since modular arrays don't sacrifice large amounts of silicon to reprogrammable wiring, they provide functional densities between five and 10 times that of FPGAs with the same process technology. In addition, by not adding the delays associated with reprogramming wiring, modular arrays exhibit up to two times the performance of FPGAs. To overcome the long and painful process of achieving timing convergence with large FPGAs, modular arrays contain an unusually high percentage of buffers that may be automatically used by proprietary software to enable timing convergence without resorting to iterations of the placement process. Lightspeed provides families of devices that are pin- and RAM-compatible with FPGAs and families that offer densities beyond FPGAs as alternatives to conventional ASICs.
While the performance, capacity and cost advantages of modular arrays relative to FPGAs are certainly attractive, it is the enabling AutoTest technology that separates modular arrays from conventional ASICs and makes test completely transparent to the user; that is, the user never has to think about it.
Most conventional ASIC technologies are tested using a test architecture known as "scan." The layout flow for conventional ASICs starts with a step known as "scan insertion." The first step in inserting structures for scan testing is to replace all flip-flops in the design with scan flops. The most common variety of scan flip-flop contains a multiplexer prior to the D input, so that test data can be shifted into the flip-flop during test mode, or alternately a normal logic signal can be stored during normal user operation.
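The muxed-D scan flop and the chain it forms can be sketched behaviorally. This is an illustrative Python model only; the class names (`ScanFlop`, `ScanChain`) are invented for the example and do not correspond to any vendor tool or API:

```python
class ScanFlop:
    """Flip-flop with a 2:1 mux before D: captures scan_in when
    scan_enable is asserted, otherwise the normal data input."""
    def __init__(self):
        self.q = 0

    def clock(self, data, scan_in, scan_enable):
        self.q = scan_in if scan_enable else data


class ScanChain:
    """Scan flops stitched head-to-tail so a test pattern can be
    shifted in serially from a single external pin."""
    def __init__(self, n):
        self.flops = [ScanFlop() for _ in range(n)]

    def shift_in(self, pattern):
        # One bit enters the chain per clock; every flop captures the
        # *previous* flop's old output, since all flops clock together.
        for bit in pattern:
            prev = [f.q for f in self.flops]
            self.flops[0].clock(None, bit, scan_enable=True)
            for i in range(1, len(self.flops)):
                self.flops[i].clock(None, prev[i - 1], scan_enable=True)
        return [f.q for f in self.flops]


chain = ScanChain(3)
print(chain.shift_in([1, 1, 0]))  # first bit shifted ends at the tail: [0, 1, 1]
```

After the pattern is shifted in, a real scan test would drop scan enable for one capture clock and then shift the captured response back out for comparison; the model above covers only the shift phase.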
To obtain reasonable coverage levels of detectable stuck-at faults, it is typically required that the design be fully synchronous. Hence, we have the first DFT rule. In general, not following DFT rules reduces fault coverage, and as fault coverage is lowered the defect rate for devices is increased. Historically, ASIC suppliers prefer to see fault coverages greater than 90 percent. However, for designs with 1 million gates on 0.25-micron design rules, 90 percent coverage would produce a defect rate of 0.35 percent, which is unacceptable considering these defects will not be found until devices have been assembled on printed-circuit boards.
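The link between fault coverage and shipped defect rate is commonly estimated with the Williams-Brown model, DL = 1 - Y^(1 - T), where Y is process yield and T is fault coverage. A minimal sketch of that calculation follows; note the article's exact 0.35 percent figure depends on yield assumptions it does not state, so the numbers below are only illustrative:

```python
def defect_level(process_yield, fault_coverage):
    """Williams-Brown estimate of the fraction of shipped parts that
    are defective despite passing test: DL = 1 - Y ** (1 - T)."""
    return 1.0 - process_yield ** (1.0 - fault_coverage)

# With an assumed 90% process yield and 90% fault coverage, roughly
# 1% of shipped devices would still be defective:
print(f"{defect_level(0.90, 0.90):.2%}")  # 1.05%
```

The model also shows why suppliers push coverage toward 100 percent: at T = 1.0 the estimated defect level drops to zero regardless of yield.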
Another common DFT rule requires the modification of logic driving asynchronous sets and resets on flip-flops so that they can be controlled from an external pin during test. The next DFT rule requires that redundant logic not appear in the design. Unfortunately, it is very common in FPGA designs to utilize redundant structures to reduce the loading on high-fanout nets. Experienced ASIC designers are all too familiar with the design flow for conventional ASICs. They know the degree to which DFT rules and test point insertion, as well as scan chain routing and timing, can force a highly iterative process in which the designer must continually rework the design. These iterations characteristically go on for three to six months or more before an acceptable level of test coverage has been achieved with both the user design and the test logic running at speed.
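Why redundant logic hurts fault coverage can be seen with a toy example: duplicate a gate to split fanout loading, and a stuck-at fault on one copy becomes undetectable because the other copy masks it at the reconvergence point. A hypothetical Python sketch of the gate-level behavior:

```python
from itertools import product

def fault_free(a, b):
    # Two identical AND branches share the load of a high-fanout
    # signal; their outputs reconverge at an OR gate.
    branch1 = a & b
    branch2 = a & b  # redundant duplicate added to reduce loading
    return branch1 | branch2

def with_fault(a, b):
    branch1 = 0      # stuck-at-0 fault on the redundant branch
    branch2 = a & b
    return branch1 | branch2

# No input pattern distinguishes the two circuits, so no test
# vector can ever detect this fault -- it caps fault coverage:
undetectable = all(fault_free(a, b) == with_fault(a, b)
                   for a, b in product((0, 1), repeat=2))
print(undetectable)  # True
```

This is why conventional scan-based flows ban redundancy, and why FPGA netlists that rely on it migrate so poorly.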
Some companies supplying conventional ASICs are positioning certain product lines as "FPGA conversion solutions," with claims of a simplified and "seamless" design flow. The problem is that these solutions are essentially scan-based conventional ASICs, usually based on embedded array technology, where the supplier is willing to take on more of the burden of the conversion process in order to get the business.
In the end, designs originally created for FPGAs will rarely meet all the DFT rules required to reach acceptable levels of fault coverage when migrated to a conventional ASIC solution. A different approach is required to make ASIC speed, capacity and cost available to FPGA users without requiring modifications to their existing designs.
Often, paradigm shifts start by ignoring past techniques and taking a fresh look at the problem from a new perspective. When Lightspeed created the modular array ASIC architecture, a new way of looking at test was part of the concept from the beginning. Unlike in conventional ASICs, the function modules within a modular array can both "control" and "observe" nets while also being capable of configuration as combinatorial logic, flip-flops or RAM. This means that all nets can be controlled regardless of whether they represent clocks or sets/resets, and whether or not they are part of redundant structures or combinatorial feedback loops.