Panel recommends radical changes for SoC test
SANTA CLARA, Calif. Rising chip complexity and the challenges of sub-130-nanometer silicon will force radical changes in the way system-on-chip (SoC) devices are tested, according to panelists at the DesignCon 2003 conference here Tuesday (Jan. 28). Today's functional tests and stuck-at fault models are running out of steam, and test structures will soon need to move on-chip, panelists said.

Moderator Gabe Moretti, technical editor at EDN magazine, began the panel by citing four ways in which SoC complexity is driving up test costs: increasing gate counts are causing longer test sequences; a mix of analog, digital, and memory circuitry requires different test strategies; higher operating frequencies make at-speed test difficult; and an increasing number of functional levels requires more complex test sequences. "You've got to plan for test, not just design for test, so you think about the requirements of testing up front," Moretti said.

With some devices today, test costs exceed silicon costs, said panelist William DeWilkins, senior manager for strategic test development at National Semiconductor Corp. "Designs are pushing the limits of technology and process capability, and testers can't keep up," he said. "They can't do all of the tests required, especially for analog and high-speed I/Os."

DeWilkins said that testing "smarts" need to move onto the chip itself, in the form of built-in self-test (BIST), embedded test engines, test buses, and controllers. In a statement that may disturb automated test equipment manufacturers, DeWilkins said National hopes to replace its $5 million testers with $250,000 testers that take advantage of on-chip test structures. DeWilkins also outlined some new test requirements, including the ability to test multiple cores on one chip, to reconfigure tests on the fly, to provide real-time statistical process control, and to perform repairs and calibrations when errors are found.

Increasing I/O counts, core performance, and memory content on SoCs are driving test costs much higher, said Subhakar Sabada, vice president for the design technology group at LSI Logic Corp. "If you go from 100,000 gates to 10 million gates, you need a hundredfold improvement in defect coverage," he said; to hold chip-level defect rates steady, the test-escape rate per gate must shrink roughly in proportion as the gate count grows.

Stuck-at faults alone "are not going to cut it," Sabada said. He noted that testers must also screen for timing defects and parametric faults, and take IDDQ measurements. And new defect screens are needed for intra-die process variations, he said. "We've got to move away from functional test," Sabada said. "The tester cost is too high, it's very resource-intensive, and you can't get the required levels of coverage."

Looming problems

James Sproch, senior director for R&D of test automation products at Synopsys Inc., cited some looming problems in SoC test. One is the prevalence of cores designed by different teams, resulting in different logical, physical, and test hierarchies on-chip. Other problems include increasing pin counts and high-speed I/Os, thin oxides and short channels, tester data overhead, and problems in handoff to manufacturing test, Sproch said. Echoing the view that stuck-at testing isn't sufficient, even when complemented with IDDQ, Sproch noted that process parameters such as dielectrics, conductors, channels, and vias introduce new failure sensitivities. There's a need for "defect-based test methods," he said, including delay tests that can look at timing sensitivities, bridging tests that can examine shorts between nodes, and "modified" IDDQ tests that can look at relative numbers.
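One way to read that call for IDDQ tests that "look at relative numbers" is what the test literature calls delta-IDDQ or current-signature screening: rather than comparing a die's quiescent supply current against a single absolute limit, which rising background leakage swamps on deep-submicron processes, the screen compares the current across test vectors and flags large vector-to-vector steps. The following is a minimal sketch of that idea only; the function name, measurements, and step threshold are invented for illustration, not taken from the panel.

    # Hypothetical delta-IDDQ screen: flag dies whose quiescent supply
    # current jumps sharply between test vectors, even if every single
    # reading would pass a generous absolute limit.
    def delta_iddq_outliers(iddq_ua, max_step_ua=5.0):
        """Return indices of vector-to-vector IDDQ steps above the limit.

        iddq_ua: quiescent current in microamps, one reading per vector.
        max_step_ua: invented threshold; a real screen would derive it
        from the population statistics of known-good dies.
        """
        steps = [abs(b - a) for a, b in zip(iddq_ua, iddq_ua[1:])]
        return [i for i, s in enumerate(steps) if s > max_step_ua]

    # A leaky-but-uniform die: high current everywhere, so a fixed
    # absolute limit would reject it, but the relative screen passes it.
    uniform_die = [42.0, 43.1, 41.8, 42.5]
    # A die with a state-dependent defect (say, a bridging short that
    # conducts only under certain input states) shows large jumps.
    defective_die = [12.0, 12.4, 37.9, 12.1]

    print(delta_iddq_outliers(uniform_die))    # [] -> passes
    print(delta_iddq_outliers(defective_die))  # [1, 2] -> flagged

The relative comparison preserves IDDQ's defect-finding power even when absolute leakage varies widely from die to die: the information is in the shape of the current signature, not its level.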
Rudy Garcia, strategic marketing manager and technical advisor for NPTest Inc. (formerly Schlumberger Semiconductor Solutions), reiterated the view that current fault models aren't good enough. "High-quality test is not high fault coverage, it is high defect coverage," he said. Garcia said there are "gnarly defects" at 130 nanometers and below that cannot be detected with stuck-at or IDDQ models alone. The mechanisms behind them include tighter pitches and taller aspect ratios, which lead to capacitive coupling; smaller vias, which create more resistive opens; voids in copper plugs, which create delay defects; and lower threshold voltages, which increase leakage.

Robert Aitken, senior architect of product technology at Artisan Components Inc., said the disaggregated supply chain is causing test problems. He noted that EDA vendors, library vendors, design houses, foundries, and ATE vendors must all work together. Otherwise, Aitken said, it's unclear who "owns" test costs and who can control test quality. "Dedicated on-chip test IP is needed to lower costs, and it needs to come from dedicated suppliers," he said.

Stuck on stuck-at

In a question-and-answer session, panelists noted that stuck-at fault testing is still needed. Aitken took aim at Garcia's suggestion that 70 percent stuck-at coverage with 60 percent delay coverage might be better than 98 percent stuck-at coverage alone. "You need high stuck-at coverage no matter what, and you also need high delay coverage," Aitken said. In response, Garcia noted that delay testing will in fact produce much higher stuck-at fault coverage as a side effect. "But don't spend all your tester time doing stuck-at faults," he said.

"There's no one silver bullet," LSI's Sabada concluded. "We need a combination of techniques. Little drops of water make a mighty ocean."
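For readers outside the test world, the coverage percentages the panelists traded have a mechanical definition: enumerate every net in the netlist stuck at logic 0 and stuck at logic 1, simulate each such fault against the test pattern set, and report the fraction whose effect reaches an output. The sketch below computes stuck-at coverage on an invented two-gate circuit; the netlist, patterns, and names are illustrative only, not from any panelist.

    # Toy combinational netlist: c = AND(a, b); out = OR(c, d).
    # Each entry is (output_net, evaluation_function, input_nets).
    NETLIST = [
        ("c", lambda v: v["a"] and v["b"], ("a", "b")),
        ("out", lambda v: v["c"] or v["d"], ("c", "d")),
    ]
    PRIMARY_INPUTS = ("a", "b", "d")

    def simulate(pattern, fault=None):
        """Evaluate the circuit; 'fault' is (net, stuck_value) or None."""
        values = dict(zip(PRIMARY_INPUTS, pattern))
        if fault and fault[0] in values:
            values[fault[0]] = fault[1]
        for net, fn, _ in NETLIST:
            values[net] = fault[1] if fault and fault[0] == net else fn(values)
        return values["out"]

    # The single stuck-at fault list: every net stuck at 0 and at 1.
    nets = list(PRIMARY_INPUTS) + [gate[0] for gate in NETLIST]
    faults = [(n, v) for n in nets for v in (0, 1)]

    patterns = [(1, 1, 0), (0, 1, 0), (0, 0, 1)]  # deliberately incomplete
    detected = {f for f in faults for p in patterns
                if simulate(p, fault=f) != simulate(p)}
    print(f"{len(detected)}/{len(faults)} faults detected "
          f"({100 * len(detected) // len(faults)}%)")  # 9/10 (90%)

Even on this two-gate circuit, three patterns leave one fault (b stuck at 1) undetected. At millions of gates, closing the last few percentage points of coverage is exactly what consumes the pattern counts and tester time the panelists want to move on-chip.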