Introduction

For the past five years, the cost of test has been the dominant topic in test engineering. During this period, automated test equipment (ATE) has made a dramatic move toward low-cost design-for-test (DFT) testers, and EDA solutions have implemented DFT methods that significantly reduce test data volume and test application time. Numerous papers have been published on the cost of test and test compression, each demonstrating marked reductions in test data volume and/or test application time through the introduction of DFT structures on the chip [Bar01], [Jas99], [Mitra04], [Oh02], [Rajski02], [Sit02], [Sit04], [Wei97]. Now, with numerous test compression technologies at our disposal, we are left with the arduous task of evaluating them and choosing the best one. In this paper, we consider the general DFT cost model proposed in [Wei97] and offer a calculated approach that simplifies the evaluation of these different test compression technologies. In this way, we can obtain good data on the relative merits of different compression techniques even if some of the parameters of the detailed model are unknown.

An overview of test compression methods

All test compression methods start from a baseline of scan technology. A typical scan test involves a scan operation in which the flip-flops of the design are controlled and observed, along with stimulus applied to inputs, measures taken on outputs, and capture events. Compression technologies focus on optimizing the scan operation to reduce the amount of data stored on the ATE and thereby decrease the time it takes to load or observe the scan chains. This compression and scan optimization is achieved by adding logic before and after the scan chains so that more scan chains can be controlled and observed through a small interface. The amount of DFT logic added for this purpose generally follows the trend shown in Figure 1: decreases in test application time and test data volume are achieved through higher DFT area.

Figure 1 — Three dimensions, or aspects, of test compression methods.

For example, one would expect that as more specialized DFT logic is added, additional decreases in test data volume and/or test application time should be realized. Evaluating the technologies in each separate dimension makes it easy to form a metric for comparison. However, evaluating the technologies in light of all dimensions taken together makes it difficult to decide which method is best.

Test economics — forming a basis of comparison

All test compression methods are devised to solve a simple economics problem: reducing the cost of test and maximizing the profit from the manufactured ICs. Therefore, to evaluate each test compression method, we must simply devise a way of calculating its economic impact. The key is to identify a common metric for the costs and benefits of compression-focused DFT and then translate the three dimensions into this common metric. We start with a general test economics model developed by Wei and coauthors [Wei97]. Since our goal is to compare test compression DFT methods against each other, any terms in the model that are uniform across all test methods can be factored out and ignored. In [Wei97], the cost of test (Ctest) is defined to be the sum of the cost of preparing a test (Cprep), the cost of executing a test (Cexec), the cost of silicon (Csilicon), and the cost of imperfect quality (Cquality), as shown in Figure 2.
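Written out (the figure itself is simply a restatement of the definition above), the relation is:

\[
C_{test} = C_{prep} + C_{exec} + C_{silicon} + C_{quality}
\]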
Figure 2 — Calculating the cost of test — the basis of comparison.

Test preparation

The cost of test preparation comprises the cost of test pattern generation, the cost of creating the tester programs, and the cost of designing the DFT technology. Since we are evaluating different test compression methods and not their implementations, we can assume that each method is implemented efficiently, so these parameters are effectively equal across methods. Thus, from the standpoint of comparison, the economic impact of the test preparation cost can be ignored.

Test execution

The cost of executing a test has two basic components: the cost of using hardware (such as a probe card) that is device-specific, and the cost of using tester hardware that is common across devices. We can assume that the device-specific hardware cost is the same for all test compression methods as long as the number of terminals that must be probed is held constant. Since this is the only fair way to compare the performance of different test compression methods, this aspect of the cost has no impact on relative comparisons.

The cost of the tester itself is amortized over the life of the equipment, and the cost attributed to a given device depends on the time that device occupies the tester. We can analyze the equation given in [Wei97] to characterize the relationship of the test time to the cost of the tester, as shown in Figure 3.

Figure 3 — Calculating the amortized cost of a tester over its lifetime.

In the cost equation of Figure 3, Ract is the rate of an active tester in $/second, βutil is the tester utilization, Rinact is the rate of an inactive tester, and Ttest is the average time (in seconds) to test a single IC. The amortized cost of the tester appears in the Ract and Rinact variables of the equation, which represent the cost of the capital and its depreciation rate per year. While the utilization of the tester is affected by the test time of a device, the change in utilization that results from the time it takes to test a single IC is only a small modulation of the rate. For all practical purposes, the comparison of different compression methods is built around similar tester requirements, and the tester cost attributed to a device can be treated as proportional to the test time Ttest.

DFT impact

We can also analyze the equations proposed in [Wei97] to characterize the relationship between the cost of extra silicon and the increase in IC area. Equations derived from [Wei97] are shown in Figure 4.

Figure 4 — Calculating the cost of testing increased silicon area.

In these equations, Qwafer is the unit-area cost of the wafer, Rwaf is the wafer radius, βwaf_die is the percentage of the wafer area that can be divided into dies, D is the defect density, ADFT (= (1 + αDFT)Ano_DFT) and YDFT are the area and yield of the die with DFT, and Ano_DFT and Yno_DFT are the corresponding parameters of the die without DFT. Without going into the details of every parameter, it can generally be seen that the additional cost of silicon is quadratic in the silicon added for the DFT (αDFT). Furthermore, we can separate the added silicon into that used for general DFT (αgeneralDFT) and that used specifically for compression DFT (αcompressDFT). Thus, αDFT = αgeneralDFT + αcompressDFT.
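Since Figure 4 itself is not reproduced here, the following is a plausible reconstruction from the parameters just defined, assuming the simple Poisson yield model Y = e^(-DA) that is commonly paired with this kind of analysis; the exact form in [Wei97] may differ:

\[
N_{die} = \frac{\beta_{waf\_die}\,\pi R_{waf}^{2}}{A_{DFT}}, \qquad
C_{die} = \frac{Q_{wafer}\,\pi R_{waf}^{2}}{N_{die}\,Y_{DFT}} = \frac{Q_{wafer}\,A_{DFT}}{\beta_{waf\_die}\,Y_{DFT}}, \qquad
Y_{DFT} = e^{-D\,A_{DFT}}
\]

\[
\Delta C_{silicon} = \frac{Q_{wafer}}{\beta_{waf\_die}}
\left( \frac{A_{DFT}}{Y_{DFT}} - \frac{A_{no\_DFT}}{Y_{no\_DFT}} \right)
\]

Substituting ADFT = (1 + αDFT)Ano_DFT and expanding the exponential as 1 + DA produces a term in αDFT squared, consistent with the quadratic behavior noted above.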
However, if we assume that the defect density is very small, so that the change in yield from the extra DFT area is negligible, we can ignore the quadratic term and consider the relationship between the additional cost of silicon and the additional silicon area (αDFT) to be linear. This is a reasonable assumption because cost becomes a significant issue chiefly in high-volume production, precisely when the yield is high and the defect density is low. Thus, we obtain the equation shown in Figure 5 for the additional cost of the silicon added for compression:

Figure 5 — Added cost from extra silicon for DFT is proportional to the extra area.

Imperfect test quality

Test quality is always limited by the imperfect nature of testing. Assuming that we strictly care about compression, and that each of the compression solutions under evaluation provides a more efficient means to apply the same test patterns, this cost has no impact on the relative comparison of test compression methods.

Comparing test compression methods

As shown above, the DFT area for compression and the test application time contribute additively to the overall cost of test. Furthermore, under our assumptions, the cost of test has a linear relationship with both the DFT area and the test application time. Thus, we can estimate the total cost of test for a chip with added compression DFT as:

Figure 6 — Total cost of test for design with DFT added for compression.

In this equation, k3 represents those contributors to the cost that we have assumed remain constant across different DFT compression methodologies. The relative importance of the area overhead in comparison to the test time is captured by the constants k1 and k2. Recall, however, the general trend that more DFT area buys better (reduced) test application time. Thus, we can describe the effect of the compression DFT (versus no compression) on the overall cost as:

Figure 7 — Calculating the change in the cost of test as a sum of the change in the DFT area (versus no compression) and the change in the test application time.

The cost equation does not explicitly include the test data volume reduction. However, significant differences in test data volume reduction will show up in the term for increased test time. All that remains at this point is to find appropriate estimates for the two constants in the model. We can accomplish this by using data from existing designs and solving a system of linear equations or applying linear regression analysis. Ideally, we would have historical test cost data from prior designs without additional compression DFT and from the same designs with various amounts and/or types of compression DFT. If we assume that knowledge of the overall cost, the test time, and the additional DFT area is available for each design instance, we can use this information to find the best estimates for the constants k1 and k2. Even if data from different iterations of the same design is not available, it is still possible to obtain information about the constants if different designs of similar character (such that our assumptions still hold) are used; a sketch of this estimation step follows.
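Reconstructed from the prose above (Figures 6 and 7 are not reproduced here), the simplified model and its difference form are:

\[
C_{test} \approx k_1\,\alpha_{compressDFT} + k_2\,T_{test} + k_3
\]

\[
\Delta C_{test} = k_1\,\Delta\alpha_{compressDFT} + k_2\,\Delta T_{test}
\]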
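As a concrete illustration of the estimation step, the following Python sketch fits k1, k2, and k3 by least squares and then ranks candidate compression methods by their predicted change in cost. The design data and the delta_cost helper are hypothetical, invented for illustration; they do not come from [Wei97] or from any published tool flow.

```python
import numpy as np

# Each row: (alpha_compressDFT, T_test in seconds, observed cost of test in $).
# Hypothetical historical data from prior design instances.
history = np.array([
    [0.00, 12.0, 0.48],   # baseline scan, no compression DFT
    [0.01,  4.0, 0.26],   # modest compression logic
    [0.02,  2.5, 0.23],   # more aggressive compression
    [0.04,  1.8, 0.25],   # heavy compression, area cost dominates
])

# Model: C_test ~= k1 * alpha + k2 * T_test + k3 (linear in the unknowns),
# so we solve the least-squares system [alpha, T, 1] @ [k1, k2, k3] = C.
A = np.column_stack([history[:, 0], history[:, 1], np.ones(len(history))])
(k1, k2, k3), *_ = np.linalg.lstsq(A, history[:, 2], rcond=None)

print(f"k1 (cost per unit area overhead): {k1:.3f}")
print(f"k2 (cost per second of test time): {k2:.4f}")
print(f"k3 (method-independent cost): {k3:.3f}")

def delta_cost(d_alpha, d_t_test):
    """Change in the cost of test per Figure 7: k1*d_alpha + k2*d_T."""
    return k1 * d_alpha + k2 * d_t_test

# Rank candidate methods by predicted cost change versus no compression.
print(f"Method A: {delta_cost(0.015, -9.0):+.3f} $/device")
print(f"Method B: {delta_cost(0.030, -10.0):+.3f} $/device")
```

With more design instances than unknowns, least squares also gives a sense of how well the linear model fits: large residuals would suggest that the assumptions (negligible yield impact, similar tester requirements) do not hold for the designs being compared.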
Conclusions

We have presented a method for making practical and useful comparisons of different methods of compression DFT. Starting with the cost model described in [Wei97], we have made assumptions appropriate to the case of DFT for test compression. The resulting model is simpler and contains only a few constants, which can be estimated from historical cost data on previous designs. The final results allow us to make reasonable and informed tradeoffs between the test time reduction and the silicon area overhead inherent in different compression techniques.

References

[Bar01] C. Barnhart, V. Brunkhorst, F. Distler, O. Farnsworth, B. Koenemann, and B. Keller, "OPMISR: The Foundation for Compressed ATPG Vectors," Proceedings of the International Test Conference, 2001, pp. 748-757.

[Jas99] A. Jas, K. Mohanram, and N. A. Touba, "An Embedded Core DFT Scheme to Obtain Highly Compressed Test Sets," Proceedings of the Asian Test Symposium, 1999, pp. 275-280.

[Mitra04] S. Mitra and K. S. Kim, "X-Compact: An Efficient Response Compaction Technique," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 23, no. 3, March 2004, pp. 421-432.

[Oh02] N. Oh, R. Kapur, and T. W. Williams, "Fast Seed Computation for Reseeding Shift Register in Test Pattern Compression," Proceedings of the International Conference on Computer-Aided Design, 2002.

[Rajski02] J. Rajski et al., "Embedded Deterministic Test for Low Cost Manufacturing Test," Proceedings of the International Test Conference, 2002, pp. 301-310.

[Sit02] N. Sitchinava, S. Samaranayake, R. Kapur, M. B. Amin, and T. W. Williams, "Dynamic Scan Chains," IEEE Computer, September 2002.

[Sit04] N. Sitchinava, S. Samaranayake, R. Kapur, F. Neuveux, E. Gidarski, and T. W. Williams, "Changing the Scan Enable during Shift," Proceedings of the VLSI Test Symposium, 2004.

[Wei97] S. Wei, P. K. Nag, R. D. Blanton, A. Gattiker, and W. Maly, "To DFT or Not to DFT?" Proceedings of the International Test Conference, 1997, pp. 557-566.

Rohit Kapur, Synopsys Scientist, guides the development of Synopsys design-for-test (DFT) solutions based on the Core Test Language (CTL) and other open standards. He is chair of the IEEE P1450.6 (Core Test Language) standards committee and was named an IEEE Fellow in January 2003 for his outstanding contributions to the field of IC test technology.

Thomas W. Williams is a Synopsys Fellow based in Boulder, Colorado. Formerly, Dr. Williams was with the IBM Microelectronics Division, where he served as manager of the VLSI Design for Testability group. Dr. Williams has received numerous best paper awards from the IEEE and ACM. He is the founder or co-founder of a number of workshops and conferences dealing with testing and was twice a Distinguished Visitor lecturer for the IEEE Computer Society.

Jennifer Dworak will be joining Brown University as an Assistant Professor in January 2005. She graduated in May 2004 with a Ph.D. in electrical engineering from Texas A&M University. Her research interests include digital circuit testing and automatic test pattern generation, defective part level modeling, and logic minimization. Dworak received a National Science Foundation Graduate Fellowship and co-authored a paper that won the Best Paper Award at the 1999 VLSI Test Symposium.

M. Ray Mercer is a Professor of Electrical Engineering at Texas A&M University, where he holds the Computer Engineering Chair. His research interests center on computer engineering and include the computer-aided design of digital systems and design verification.