How are you planning to verify all that DfT?
by Stylianos Diamantidis
Managing Director, Globetech Solutions

Abstract

As gate counts continue to swell at a rapid pace, modern systems-on-chip (SoCs) are integrating ever more design-for-testability (DfT) capability [1]. Test and diagnosis of complex integrated circuits (ICs) will soon become the next bottleneck, if in fact they have not already. With up to 30% of a project's cycle already spent debugging silicon, and typically 30-50% of total project costs spent on test [2], DfT is quickly becoming the next wild card. As daunting as reining in all the variables related to DfT infrastructure can seem, an enormous opportunity awaits those ready to take up the challenge.

And a challenge it is... Today, DfT is usually little more than a collection of ad hoc hardware put together by different people, using different tools, with neither a common strategy nor a vision of the end quality of result. The inability to deliver a reliable test infrastructure inevitably leads to missed market opportunities, increased manufacturing costs, or even a product that cannot be manufactured. In contrast, a carefully designed and verified DfT scheme that reflects coherent test intent across the board can be an excellent value differentiator throughout the lifetime of the product. This article discusses how to plan DfT verification against test intent, ensure compatibility with standards and functional correctness, and create a complete, methodical and fully automated path from specification to closure.

Planning for System-wide DfT Verification

The foundation for systematic DfT verification is a well-defined set of goals, supported by a methodology developed to provide integration-oriented test methods for chip-level DfT, to enable compatibility across different embedded cores, and to incorporate high levels of reuse. A DfT verification plan must satisfy three separate objectives:
Does the test infrastructure adhere to the test strategy and specification set forth by the design and test engineers? It must be verified that the global test intent is properly designed and implemented.
Does the test infrastructure comply with industry standards for interoperability and universal facilitation of access? This is crucial to ensure reuse of hardware and software.
Are there functional design issues with the DfT resources? Although such resources may appear to operate within the parameters of the first two points, there could still be logic bugs in the implementation.

Once the objectives are defined, the development of a complete system-level DfT verification plan should follow the general steps described below.
In order to build a successful DfT verification plan, one first needs to capture system-level test intent. During this step, test engineers need to work closely with verification engineers to ensure that the plan includes all the aspects of test that must be available in the final product. The global test access mechanism (TAM) is the primary element at this point; however, other elements can come into play, such as top-level DfT features, integration with board-level testability, or hardware/software interfacing and information sharing. Management must also be involved, so that managers understand the implications and trade-offs of building reliable DfT. This ensures total visibility and resolves contention for resources further down the road. The preferable way to deliver this description is in executable form: the global verification plan must leave no room for doubt or misinterpretation, and it needs to provide a solid basis for automating subsequent steps down to design closure.

Integrating Heterogeneous Core-level DfT Plans

Bridging the gap between the ad hoc world of disparate DfT resources and planned system-level DfT is not a trivial task. Individual intellectual property (IP) vendors' strategies for testability can vary significantly in terms of quality, coverage and/or support deliverables. In this phase of DfT verification planning, it is important to work closely with vendors to align DfT strategies as closely as possible and to enforce quality metrics. Optimally, vendors should work with their customers' test engineers to design pluggable DfT schemes and plans. By capturing core-level test intent in an executable plan and including it in the deliverables, IP vendors can provide new value-add to their customers. Such executable plans can then flow into the IC system-level test plan (see Figure 1). This is key to resolving the paradox of driving a uniform SoC-level test plan from heterogeneous core-level DfT schemes supplied by different vendors.

Finally, this methodology also needs to apply to internal engineering teams delivering design IP for integration. Such teams have different skills and management styles, and can operate in different geographies or business units. They, too, must understand the need to plan DfT verification and provide the necessary components to enable this methodology.

Providing a Completely Automated Path from Plan-to-Closure

Having a) captured the high-level test intent in an executable plan and b) integrated the separate core-level DfT schemes, verification and test engineers are now empowered to drive their processes more effectively. The result is a fully automated path from plan-to-closure for DfT verification, ensuring:

a) Completeness – the verification plan includes a section on every DfT feature and its specifics
b) Intent – the verification scope was defined early in the process by experts and with complete visibility
c) Uniformity – disparate test strategies can now be driven by a single process

During this stage, engineers should seek out and incorporate the various elements that will serve as building blocks for implementing the verification strategy according to the plan.
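To make the notion of an executable plan concrete, the sketch below shows one possible lightweight representation. It is a minimal illustration, assuming Python purely for readability; the section names, checks and weights are hypothetical examples, and real planning flows would typically rely on dedicated verification-planning tools and formats rather than ad hoc scripts.

```python
from dataclasses import dataclass, field

# One section of an executable DfT verification plan. Each section ties a DfT
# feature to the three objectives discussed above: test intent, standards
# compliance, and functional correctness.
@dataclass
class PlanSection:
    name: str                                               # e.g. "Top-level TAP controller"
    owner: str                                              # team or vendor delivering the resource
    intent_checks: list = field(default_factory=list)       # test-intent items
    compliance_checks: list = field(default_factory=list)   # standards items
    functional_checks: list = field(default_factory=list)   # logic-bug items
    weight: float = 1.0                                      # relative priority at closure

@dataclass
class DftVerificationPlan:
    sections: list = field(default_factory=list)

    def add(self, section: PlanSection) -> None:
        self.sections.append(section)

# Hypothetical example: capture system-level intent for the global TAM first,
# then let core-level plans delivered by IP vendors flow into the same structure.
plan = DftVerificationPlan()
plan.add(PlanSection(
    name="Global test access mechanism (TAM)",
    owner="SoC test engineering",
    intent_checks=["All embedded cores reachable through the TAM"],
    compliance_checks=["IEEE 1149.1 boundary-scan access rules"],
    functional_checks=["TAM arbitration has no deadlock scenarios"],
    weight=3.0,   # verify the access mechanism before anything behind it
))
```

Because core-level plans from different vendors can be expressed as additional sections of the same structure, the single chip-level plan can drive a uniform process over heterogeneous DfT schemes.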
Setting Quality of Result Metrics

But how is one to conclude that the DfT verification plan guarantees the necessary quality for reliable DfT? To address this inherent unpredictability, a strong element of planning needs to be introduced that sets expectations for quantifying results. Quality of result metrics are measurable targets that can be entered as verification plan attributes early in the planning phase. Such targets must result from collaboration among verification, design and test engineers, to ensure that all aspects of the task at hand are addressed. They can include functional coverage metrics such as the number of different instructions loaded in any given JTAG TAP, the seed patterns used in automatic test pattern generation (ATPG), or the isolation behavior of embedded core scan cells. Each of these metrics should be associated with a respective section of the executable test plan. It is also a good idea to assign priorities or weights to different test plan sections based on these metrics: for example, what is the purpose of exhaustively testing a built-in self-test (BIST) controller connected to a JTAG TAP if the TAP itself has not been thoroughly verified first?

Providing Total Progress Metrics

Quality of result metrics can guide the understanding and reporting of progress, and can also identify critical paths. Once a project is underway, it is difficult to track the progress of specific tasks and the implications of prioritization. Tracking quality of result progress provides a way of measuring real DfT verification progress while simultaneously enabling total visibility across the different teams. This way, test engineers know at all times how far DfT verification has progressed across the board and can use this information to drive other processes, such as test vector generation or early fault analysis. They can also use it to raise management awareness of issues that arise during the design process.

Integrating the DfT Plan into the IC Verification Plan

Finally, a methodical DfT verification plan needs to be integrated into the system-level IC verification plan. This way, DfT quality of result metrics can be factored into chip-level metrics for total quality management at closure. System-level planning should incorporate the processes and methodologies of effective DfT verification; this improves the allocation of necessary resources and ensures that expert knowledge is available. Furthermore, a DfT verification plan can help bridge the cultural gap that today divides test engineers from the rest of the design cycle, and can foster cooperation between the two main bottlenecks of today's SoC design: verification and test.
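As a rough illustration of how section weights and quality-of-result targets might roll up into a single chip-level progress figure, consider the sketch below. It is a minimal example, assuming a Python representation with invented section names, metrics and numbers; the article does not prescribe any particular tooling.

```python
# Hypothetical quality-of-result targets per plan section: each metric maps to
# (observed, target). Weights reflect priority, e.g. the JTAG TAP must be
# trusted before the BIST controller behind it.
sections = {
    "JTAG TAP":        {"weight": 3.0, "metrics": {"instructions_loaded": (10, 12)}},
    "BIST controller": {"weight": 1.0, "metrics": {"atpg_seed_patterns": (40, 100)}},
    "Core isolation":  {"weight": 2.0, "metrics": {"scan_cell_isolation": (8, 8)}},
}

def section_progress(metrics: dict) -> float:
    """Fraction of this section's quality-of-result targets that has been met."""
    ratios = [min(observed / target, 1.0) for observed, target in metrics.values()]
    return sum(ratios) / len(ratios)

def weighted_progress(sections: dict) -> float:
    """Weight-averaged DfT verification progress across all plan sections."""
    total_weight = sum(s["weight"] for s in sections.values())
    return sum(s["weight"] * section_progress(s["metrics"])
               for s in sections.values()) / total_weight

for name, s in sections.items():
    print(f"{name:16s} {section_progress(s['metrics']):6.1%}")
print(f"{'Overall':16s} {weighted_progress(sections):6.1%}")
```

Heavily weighted sections such as the TAP then dominate the overall figure, mirroring the prioritization argument above and giving one number that can be fed into the chip-level closure metrics.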
Benefits of DfT Verification Planning

There are a variety of motivating factors for planning and executing proper DfT verification. The investment made during the design cycle can be leveraged to reap a series of long-term benefits, described below.

Increased visibility into test intent across development teams results in better integration of the design and test engineering processes, skills and cultures. Methodical plans to verify test infrastructures create a well-defined process for incorporating input from test engineers into the development cycle: test engineers participate in creating the global test specification, help qualify vendors based on DfT quality metrics, and prioritize verification tasks against target results. This enhanced visibility also brings the reverse benefit of better communication and information feedback from manufacturing and test back to design, closing the design-for-manufacturability (DfM) loop.
As semiconductors move deeper into nanometer scales, the cost of fabrication and test is exploding. Fabrication facility costs at 65nm are expected to hit $4 billion, and if the current test-capital-per-transistor ratio persists (it has been flat for 20 years), within several years the overall cost of test will exceed the cost of fabrication [3]. Associated low yield also increases the number of test cycles required to determine the quality of silicon. DfT verification planning aims to provide a reliable path from test intent to quality of results. Adding quality and efficiency to test planning leads to better testing strategies aimed at locating real silicon faults while minimizing costly over-testing or excessive vector sets. Testing on advanced automated test equipment (ATE) at 90nm can exceed $0.10 per second per unit; for a batch of one million parts at 100% yield, that is $100,000 for every second of test time! Improving the DfT planning process can help companies make efficient use of this time, or even enable them to switch to cheaper ATE by partitioning more test resources on-chip.
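The leverage of test time on cost can be made explicit with a back-of-the-envelope calculation. The sketch below uses the $0.10-per-second-per-unit and one-million-unit figures quoted above; the 20-second test time and 80% yield are purely illustrative assumptions.

```python
# Back-of-the-envelope ATE cost model using the $0.10/second/unit figure from
# the article. Test time per unit and yield are illustrative assumptions.
ATE_COST_PER_UNIT_SECOND = 0.10   # USD, advanced 90nm ATE (from the article)
UNITS = 1_000_000                 # batch size (from the article)
TEST_TIME_S = 20.0                # assumed seconds of tester time per unit
YIELD = 0.80                      # assumed fraction of good devices

total_test_cost = ATE_COST_PER_UNIT_SECOND * TEST_TIME_S * UNITS
cost_per_good_die = total_test_cost / (UNITS * YIELD)

print(f"Batch tester cost: ${total_test_cost:,.0f}")
print(f"Cost per good die: ${cost_per_good_die:.2f}")
# Every second shaved off the per-unit test program saves 0.10 * 1e6 = $100,000
# on such a batch, which is the leverage DfT planning is meant to provide.
```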
In a demanding, consumer-driven electronics market, executing a product strategy leaves no room for error. Re-spins are simply not an option at 90nm and below. Lengthy silicon debug, manufacturing test time, low yield and a lack of diagnosability substantially shrink the time-to-market window. Proper planning for DfT verification results in greater design and test schedule predictability and repeatability, better process automation and enhanced efficiency, with a direct positive effect on time-to-market. Furthermore, designing verification IP that can be invoked directly and automatically from the plan yields additional, significant time savings. Such IP can include complete environments capable of generating test vectors, checking DfT state and measuring how thoroughly the test infrastructure has been exercised. It should be designed only once for standard components (e.g. JTAG) and then enriched with feature-specific libraries for customization. Time invested up-front results in overall project time savings by ensuring that DfT is designed and verified only once, and these savings compound from project to project through complete and calculated reuse.
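One way to picture such "design once, enrich per feature" verification IP is sketched below: a generic JTAG TAP driver written once for the standard component and extended with a chip-specific instruction library. The class layout, instruction names and opcodes are hypothetical illustrations, not a description of any particular product.

```python
# Generic JTAG TAP verification IP, written once for the standard component,
# then enriched with feature-specific instruction libraries per project.
class JtagTapDriver:
    """Minimal, reusable model of IEEE 1149.1 TAP instruction traffic."""

    # Illustrative opcodes; a real TAP's instruction register widths and codes
    # come from the device's BSDL description.
    STANDARD_INSTRUCTIONS = {"BYPASS": 0b1111, "EXTEST": 0b0000, "SAMPLE": 0b0001}

    def __init__(self) -> None:
        self.instructions = dict(self.STANDARD_INSTRUCTIONS)
        self.loaded = []              # record for "instructions loaded" coverage

    def load_instruction(self, name: str) -> int:
        opcode = self.instructions[name]
        self.loaded.append(name)      # feeds the functional-coverage metric
        return opcode

class MyChipTapDriver(JtagTapDriver):
    """Feature-specific library layered on the reusable base (hypothetical)."""
    def __init__(self) -> None:
        super().__init__()
        self.instructions.update({"MBIST_RUN": 0b1000, "SCAN_DUMP": 0b1001})

tap = MyChipTapDriver()
tap.load_instruction("BYPASS")
tap.load_instruction("MBIST_RUN")
print(f"Instructions exercised: {len(set(tap.loaded))}/{len(tap.instructions)}")
```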
With third-party IP playing such an integral role in today's SoCs, DfT verification planning can be used to increase levels of process integration and automation with strategic vendors. Furthermore, DfT quality metrics can be incorporated into new-vendor assessment by grading a vendor's test strategy, DfT implementation, DfT reuse and applicability targets. Qualification metrics give advanced vendors an incentive to provide complete, executable verification plans, IP and test information models for enhanced integration into their customers' test infrastructures.

Conclusions

Large and complex test infrastructures are a reality in today's dense SoCs, which comprise a multitude of diverse DfT resources. If companies are to meet their manufacturing cost and time-to-market demands, they will need to ensure that such test infrastructures are thoroughly verified for specification, compliance and functionality. At the foundation of the solution lies a detailed, executable plan which can be used to provide an automated path from specification to closure with predictable quality of results. How are you planning to verify all that DfT?

[1] Already accounting for as many as 10% of total gates in some integrated circuits.
[2] International Technology Roadmap for Semiconductors.
[3] Semiconductor Industry Association, 1997 Report.