Extending validation another level
Wally DuBois (10/04/2004 9:07 AM EDT) URL: http://www.eetimes.com/showArticle.jhtml?articleID=48800546
During the past 10 years, Intel Corp. has added more support for embedded applications using Intel architecture processors and is currently exploring what is required for another level of support over the next three to five years. This article details the increasing validation requirements and how they were met at Intel.
Initially, validation efforts for these types of applications consisted of simple adoption and regression testing to enable mainstream desktop processors to be used in the embedded arena. Three years ago, a second generation of validation support was added. In this second level, still used today, validation relies on heavy reuse of existing methods by applying them to more complicated mix-and-match scenarios in order to meet embedded customers' requirements.
A third generation of support, still under exploration, requires more complex test development efforts to support requirements that sharply diverge from mainstream desktop, mobile and enterprise product lines. Products will also likely require leaps of integration levels to meet customer needs in specific embedded areas, offering unique validation challenges that require solutions beyond today's techniques.
Products included under the first generation of validation support included Intel architecture mainstream processors and core logic chip sets targeted for the high-volume desktop market segment. These products were adopted into traditional embedded applications by leveraging existing validation, design-in and customer-sustaining support. As the customer demand in embedded space grew and as the needs of embedded customers started to diverge from mainstream product lines, it became clear that further investments were needed to offer more-tailored, more-competitive solutions.
A phenomenon affecting the second level of support is the shrinking envelope of power and space for embedded processor applications. A key figure of merit for embedded applications, expressed as Mips x watts x cubic centimeters and sometimes referred to as compute density, is in direct tension with the design priorities of mainstream Intel architecture CPUs. While the enterprise and desktop market segments are fueled by raw horsepower, the embedded segment requires a wider spectrum of performance. Overriding all of this is the need for density, especially in the backbone and data centers supporting communications and storage market segments, including the growing wireless market.
For instance, a common trade-off lowers the Mips part of the equation to reduce power, thereby allowing more units of computation per given area of board, rack or floor space. This brings with it additional validation requirements to support new user models.
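To make that trade-off concrete, the short Python sketch below works through the arithmetic with purely illustrative numbers (not Intel data), reading the figure of merit as Mips delivered per watt per cubic centimeter and assuming the rack's power budget is the limiting resource.

# Illustrative only: hypothetical numbers, not Intel data. Reads the figure
# of merit as MIPS delivered per watt per cubic centimeter and assumes the
# rack's power budget is the limiting resource.

def compute_density(mips, watts, volume_cm3):
    """MIPS per watt per cubic centimeter of board/rack volume."""
    return mips / (watts * volume_cm3)

def rack_throughput(mips_per_unit, watts_per_unit, rack_power_budget_w):
    """Aggregate MIPS a fixed-power rack can host."""
    units = rack_power_budget_w // watts_per_unit
    return units * mips_per_unit, units

# A hypothetical desktop-class part versus a derated embedded variant.
parts = {
    "desktop-class":    dict(mips=6000, watts=80, volume_cm3=400),
    "derated embedded": dict(mips=3500, watts=25, volume_cm3=250),
}
for name, p in parts.items():
    total, units = rack_throughput(p["mips"], p["watts"], rack_power_budget_w=2000)
    print(f"{name}: density={compute_density(**p):.3f} MIPS/(W*cm3), "
          f"{units} units per rack, {total} aggregate MIPS")

Under these assumed numbers, the derated part delivers fewer Mips per unit but markedly higher aggregate throughput for the same rack power, which is the user model the additional validation must cover.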
Some of the new feature sets calling for second-generation validation support include lower core voltages, mobile features mixed with enterprise chip set features, lower I/O ring voltages, lower processor bus speeds and a greater range of CPU cache sizes to match cost/performance requirements in emerging markets. A good example of a nonstandard feature combination demanded by embedded Intel architecture customers today is the power consumption of a mobile processor coupled with the memory density and error correction of an enterprise-level chip set.
In light of these changes, the decision was made three years ago to grow validation requirements in support of the diverging and growing embedded marketplace. To productize nonstandard feature combinations, additional validation and engineering support is needed.
Thus, the goal of the second generation of support is to offer embedded customers the latest and greatest Intel silicon building blocks at the same time they are available to mainstream mobile, desktop and enterprise market segments. Second-generation support is characterized by the customer requirements that drive new validation efforts. The maturing trend in validation to test silicon building blocks at a system or platform level also lends itself to good support through reuse at a higher level.
For Intel architecture processor support, the second generation of validation takes advantage of a system context that includes both pre- and post-silicon validation for the CPU and core logic. In presilicon validation, emulation and simulation are used to shake out defects.
The simulation environment can be broken into two major pieces: system-level simulation and full-chip simulation. Full-chip simulation tests clusters on the silicon from the pins inward; system-level simulation places the code representing the silicon in a platform environment.
Bus-functional models of the various interfaces representing the target platform are connected to run combinations of random and focused tests that exercise data flow as well as bus protocols. An example is having several PCI bus masters simultaneously driving and receiving data into the platform. Emulation synthesizes the silicon register-transfer-level (RTL) model into FPGAs and runs the chip set functionality on a real target platform as a gate to tapeout.
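As a rough illustration of the bus-functional-model approach, the Python sketch below stands in for that environment: several simplified PCI masters drive random write traffic into a toy platform model, each keeping its own scoreboard for checking. The class names and flat address map are hypothetical, not a description of Intel's actual simulation tools.

import random

class PlatformModel:
    """Stand-in for the silicon under test: a flat memory image."""
    def __init__(self):
        self.mem = {}
    def write(self, addr, data):
        self.mem[addr] = data
    def read(self, addr):
        return self.mem.get(addr, 0)

class PciMaster:
    """Bus-functional model of one PCI master issuing random write traffic."""
    def __init__(self, name, platform, base, seed):
        self.name, self.platform, self.base = name, platform, base
        self.rng = random.Random(seed)
        self.expected = {}                       # scoreboard of what this master wrote
    def random_transaction(self):
        # Each master owns a disjoint address window so scoreboards stay independent.
        addr = self.base + self.rng.randrange(0, 0x1000, 4)
        data = self.rng.getrandbits(32)
        self.platform.write(addr, data)
        self.expected[addr] = data
    def check(self):
        for addr, data in self.expected.items():
            assert self.platform.read(addr) == data, \
                f"{self.name}: data miscompare at {addr:#07x}"

platform = PlatformModel()
masters = [PciMaster(f"pci_master{i}", platform, base=i * 0x1000, seed=i)
           for i in range(3)]
for _ in range(1000):                            # interleave random traffic from all masters
    random.choice(masters).random_transaction()
for m in masters:
    m.check()
print("random-traffic scoreboard checks passed")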
The technologies used post-silicon to verify functionality are system validation (SV), analog validation (AV) and compatibility validation (CV). SV applies controlled, synthetic cycles at every interface in a platform environment. AV verifies signal integrity, and CV runs the platform under a real-life operating system with off-the-shelf peripherals and applications. Second-generation support fully reuses these methodologies to test the new combinations, an example of the Intel methodology called Copy Exact.
Third-generation support, however, may demand validating the marriage of a rich set of peripherals with an Intel architecture processor on one piece of silicon. This brings the customer a wealth of inexpensive tool chains, programming language support and scalability across vertical product offerings. It also brings testing challenges that cannot be met by reuse alone.
In presilicon validation, the challenges of supporting a large-gate-count CPU along with large-gate-count standard I/O functionality are magnified by the sheer size of the overall model. Currently, the entire silicon model cannot fit into one tool chain and must be processed in chunks across various tools. New technologies are coming online to support these huge presilicon testing environments, but they are immature, and cycle times at this megamodel level are still relatively slow.
In post-silicon validation, the biggest issue is likely to be the lack of visibility into the internal states of the silicon for debug during validation runs. In first- and second-generation support, a standalone three-chip platform with CPU, memory and I/O provides easy access to interconnects and allows spare pins for access to internal states. Access to these signals, including deep internal states of the silicon, is directly related to reducing debug and verification times and thereby reducing back-end validation costs.
Unfortunately, even with the low integration requirements of previous generations of validation support, design-for-validation (DFV) efforts lose out to other activities deemed more important in the heat of the moment. Future high-integration designs will also tend to be pin-limited, making it even more difficult to add capacity for debug and observation of internal states. This is why the design-for-test teams for presilicon validation and the DFV teams for post-silicon validation must be given the funding and mandate to establish the needed test mechanisms up front.
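The kind of observability DFV is meant to provide can be pictured with the following sketch, which assumes a hypothetical bank of internal-state observation registers reachable through a debug port; the register names and the DebugPort abstraction are illustrative, not an actual Intel interface.

OBSERVATION_REGS = {           # hypothetical internal-state observation points
    "fsb_arb_state":   0x00,   # front-side bus arbiter state machine
    "mem_queue_depth": 0x04,   # outstanding memory requests
    "io_bridge_state": 0x08,   # I/O bridge protocol state
}

class DebugPort:
    """Stand-in for a pin-level debug/observation path (spare pins, scan-out).
    Here it serves canned values so the example is self-contained."""
    def __init__(self, snapshot):
        self.snapshot = snapshot
    def read(self, offset):
        return self.snapshot.get(offset, 0)

def dump_internal_state(port):
    """Capture the observable internal state after a validation failure so the
    failing cycle can be debugged without rerunning the whole test."""
    return {name: port.read(offset) for name, offset in OBSERVATION_REGS.items()}

port = DebugPort({0x00: 0x3, 0x04: 0x1F, 0x08: 0x2})
for reg, value in dump_internal_state(port).items():
    print(f"{reg:16s} = {value:#x}")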
Three strategies are needed to enable third-generation support.
The first is more thorough presilicon validation. This has other benefits as well, such as fewer tapeouts in the back end, which equates to faster time-to-market and reduced project costs.
The second is to enhance the built-in self-test (BIST) functionality of the processor and core logic at a platform level. Current BIST capabilities tend to be narrowly focused on individual functions of a specific chip in isolation.
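The contrast can be sketched as follows, with made-up component names: per-chip BIST exercises each block's self-test in isolation, while a platform-level BIST pass would also exercise the links between blocks.

def cpu_bist():
    return {"core_logic_ok": True, "cache_ok": True}   # per-chip self-test, run in isolation today

def chipset_bist():
    return {"mem_ctrl_ok": True, "io_hub_ok": True}

def link_test(a, b):
    return True                                        # placeholder: drive traffic across the a<->b interface

def platform_bist():
    """Run block-level BIST plus the interface tests as one platform-level pass."""
    results = {"cpu": cpu_bist(), "chipset": chipset_bist(),
               "cpu<->chipset link": link_test("cpu", "chipset")}
    passed = all(v if isinstance(v, bool) else all(v.values())
                 for v in results.values())
    return passed, results

passed, results = platform_bist()
print("platform BIST", "passed" if passed else "FAILED", results)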
The third strategy is to use a high ratio of known-good building blocks to newly designed blocks of silicon for internal units, then connect them with an internal interblock communication bus. This enables the testing of each block in isolation, thereby reducing model complexity by relieving stress on the front-end tools and allowing use of higher-level formal verification methodologies.
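As a rough sketch of that strategy, the example below assumes a simple message-passing interblock bus: each block, whether known-good or newly designed, is first validated in isolation against the bus protocol, and only a thin integration check is run on the composed design. All block and bus names are illustrative.

class InterblockBus:
    """Minimal interconnect: routes (destination, payload) messages between blocks."""
    def __init__(self):
        self.blocks = {}
    def attach(self, name, handler):
        self.blocks[name] = handler
    def send(self, dest, payload):
        return self.blocks[dest](payload)

def known_good_memory_block(payload):        # previously validated block, reused as-is
    return {"status": "ok", "echo": payload}

def new_accelerator_block(payload):          # newly designed block under test
    return {"status": "ok", "result": payload * 2}

def test_block_in_isolation(handler):
    """Validate each block against the bus protocol on its own, so the full-chip
    model never has to be loaded into a single tool chain at once."""
    for stimulus in range(16):
        assert handler(stimulus)["status"] == "ok"

for block in (known_good_memory_block, new_accelerator_block):
    test_block_in_isolation(block)

bus = InterblockBus()                        # then compose and spot-check the integration
bus.attach("mem", known_good_memory_block)
bus.attach("accel", new_accelerator_block)
assert bus.send("accel", 21)["result"] == 42
print("block-isolation and integration checks passed")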
Wally DuBois (wally.dubois@intel.com) is manager of chip set and processor validation in the Infrastructure Processor Division of Intel Corp.'s Communication Infrastructure Group (Santa Clara, Calif.).