It's about time -- charting a course for unified verification
1 OVERVIEW

Engineers create almost everything from scratch at every verification stage, leaving the preceding stage to rot. The result is an expensive, slow, inefficient process that all too often allows critical bugs to reach silicon. Techniques such as automatic test generation and assertions during RTL simulation, sometimes known as "smart verification," apply to only a single verification stage and thus cannot even begin to address fragmentation.

This paper describes the requirements for high-speed, high-efficiency functional verification of nanometer-scale ICs. The paper first examines the primary verification drivers -- massive digital logic, massive embedded software, and critical on-chip analog circuitry. It then describes a unified verification methodology -- one that supports verification from system design to system design-in -- across all design domains. The unified methodology delivers unmatched speed and efficiency by eliminating fragmentation. It uses only proven technologies and techniques, and supports evolutionary migration from existing methodologies. The heart of the methodology is a functional virtual prototype (FVP), which begins as a transaction-level golden reference model of the design and verification environment, then serves as the unifying vehicle throughout the verification process. The methodology also addresses the two critical paths in IC design-to-volume -- embedded software development and system design-in.

The paper also describes the primary requirements for a verification platform that optimally supports the unified methodology. The first requirement is a heterogeneous single-kernel architecture with native support of Verilog, VHDL, SystemC, PSL/Sugar assertions, analog/mixed-signal (AMS), and algorithm development. This type of architecture is necessary to eliminate fragmentation. The second requirement is exceptional performance through full transaction-level support; high-speed, unified test generation; and hardware-based acceleration.

Functionally verifying nanometer-scale ICs requires incredible speed and efficiency. Utilizing a unified methodology based on a unified verification platform provides exactly that. It's about time.

2 FUNCTIONAL VERIFICATION DRIVERS

2.1 Massive digital logic

Figure 1 - Silicon gate capacity by process technology

From a theoretical point of view, completely verifying a digital design requires checking the entire state-space, that is, the entire set of possible internal states and transitions between them. The maximum state-space grows roughly as two to the power of the number of registers in a design, and register counts are growing exponentially. Even very simple designs by today's standards have many thousands of registers, giving them a state-space so large that it is impossible to simulate within one's lifetime. From a more pragmatic point of view, verification complexity is a function of the number and complexity of a design's interfaces and their protocols. Again, the sequential combinations of potential interface activity are astronomical and not possible to fully simulate. Given this, the verification team's responsibility is to cover as much of the most important behavior as possible within the target design cycle. In order to do so, the team must maximize speed and efficiency.

2.2 Massive embedded software

Systems on chip (SoCs) are integrated circuits with one or more processors, memory, and application-specific logic.
They include embedded software, from drivers and protocol stacks to operating systems and application software, all of which must work flawlessly with the hardware. The amount of software embedded in SoCs is growing rapidly. In fact, at 130 nanometers embedded software development costs equal the hardware development costs for SoCs, and by 90 nanometers fully 60% of all SoC development costs will be embedded software related. See Figure 2.
Figure 2 - Relative development costs by process technology

More important, embedded software has become the critical path in completing most SoC designs. Design teams must thoroughly verify the hardware-dependent software prior to silicon tapeout or risk re-spins. This hardware-software co-verification adds a new dimension to the already complex task of functional verification, and requires software and hardware teams to work closely together. Given the complexity of today's hardware and software, it is infeasible to verify their interaction at the register-transfer level. Verifying low-level, hardware-dependent software requires at least 100x faster execution. Verifying application-level hardware-software interactions generally requires very long sequential execution, making it impractical without emulation or prototyping. Moreover, SoCs increasingly contain more than one processor, such as a microprocessor with a digital signal processor (DSP), which makes software development and hardware-software co-verification all the more difficult.

2.3 Critical on-chip analog

Figure 3 shows the growth in the number of SoCs that have critical analog/mixed-signal content. According to industry analyst IBS Corporation, digital/mixed-signal (D/MS) SoCs accounted for approximately 20% of worldwide SoCs in 2001, and that percentage will rise to nearly 75% by 2006. In fact, more than half of all 90 nanometer designs will be D/MS ICs.
Figure 3 - SoCs with critical analog circuitry by process technology

Not surprisingly, adding analog circuitry to ICs requires additional functional verification. This is especially crucial in designs with tight interaction between the digital logic and analog circuitry. Approximately 50% of all D/MS design re-spins are due to the analog portion, and many of those are functional errors. Additionally, IC design teams over-design mixed-signal interfaces to avoid functional problems, thereby sacrificing performance and area.

2.4 Key issues
Table 1 - Critical functional verification issues

While each of these issues can be addressed separately, a fundamental issue underlies all of them -- fragmentation. Moving to a unified methodology substantially addresses all of these issues, resulting in far superior speed and efficiency.

3 TODAY'S FRAGMENTED VERIFICATION METHODOLOGY

Figure 4 - Typical SoC verification flow

3.1 Fragmentation within a project

Once a design stage is complete, the verification environment is left unmaintained while the team creates a whole new verification environment for the next design task. Design changes in later verification stages are never incorporated into earlier, completed verification stages. If and when the team needs to incorporate major late-stage changes, it can use only slow late-stage verification approaches because the earlier verification stages no longer match the design. Moreover, this also makes it impractical to reuse the verification IP and methodology for subsequent derivative designs.

Verification methodologies have grown in an ad hoc manner, reacting to the expanding design methodologies. So it is not surprising that today's verification process simply mirrors today's multi-stage design process. Verification tool vendors have followed suit, creating increasingly specialized tools that are useful during only one or a few verification stages. Thus the lack of a comprehensive verification platform has created an artificial barrier that keeps verification teams from creating a unified methodology. However, the fragmentation problem reaches well beyond any given project. There is substantial fragmentation between projects within companies.

3.2 Fragmentation between projects

IC design projects often use completely different verification methodologies. Only a small fraction of the differences are due to inherent design differences. Different projects often use completely different approaches with different tools that have different limitations and require different types of modeling. The projects rely on different types of metrics and set up different types of environments. Thus, while the design IP might be transportable between projects, the verification IP almost never is. Even the verification engineers generally require significant ramp-up time, specifically to learn the new verification environment, in order to transition from one project to another. Even when the top-level methodologies are the same, as is the case with derivative designs, the details are invariably different enough to require a substantially new effort. Figure 5 illustrates the same top-level methodologies, with different environments for each stage. With more and more derivative designs, directly addressing this fragmentation is critical.

Figure 5 - Fragmentation occurs even between derivative IC designs

For large companies, fragmentation between projects is incredibly expensive. It creates small pools of tools and infrastructure that cannot be shared readily, including verification engineers who have expertise in only specific verification environments. Fragmentation between projects greatly increases the cost of integrating and maintaining verification environments. It results in vast redundancy in developing various forms of verification IP, including models, monitors, and test suites. It even makes it difficult to evaluate the relative effectiveness of various approaches, identify best practices, and increase their use.
3.3 Fragmentation across a design chain

Given that the majority of overall design time is spent in functional verification, these forms of fragmentation have become major problems. There are signs that this is changing. Leading system companies are beginning to demand complete functional verification environments from their IC suppliers, and leading IC providers are beginning to provide them.

4 UNIFIED VERIFICATION METHODOLOGY

4.1 Guiding principles: speed and efficiency

Table 2 - Principles of verification speed

Each step up in abstraction -- from gate to RTL to transaction to behavioral -- can increase speed by one or more orders of magnitude, simply because there is less data, less computation, and less frequent computation. On the verification engine front, hardware-based verification can provide 100x to an incredible 100,000x speed-up versus RTL simulation. The unified methodology leverages these principles by enabling verification teams to work at the highest possible level of abstraction and to migrate to hardware-based engines as quickly as possible. The unified methodology is built upon the principles of verification efficiency in Table 3, where efficiency is defined as the return per unit of input.

Table 3 - Principles of verification efficiency

4.2 The functional virtual prototype -- the unifying vehicle

Unified methodologies are less advantageous for less complex designs, such as all-digital designs, simply because such designs have less fragmentation and less need for an early design representation. However, the vast majority of nanometer-scale ICs will be complex SoCs, for which a unified methodology is critical. Moreover, this approach is also valuable for any processor-based system -- the kind of system within which most all-digital designs, for example, must eventually be verified.

4.2.1 FVP overview

Figure 6 - Functional virtual prototype

The initial FVP uses a transaction level of abstraction for all design models. Creating transaction-level models takes a fraction of the time it takes to create the equivalent RTL models, and transaction-level models run approximately 100x faster than equivalent RTL. IC teams may trade off top-level architectural detail and accuracy to reduce the FVP development and maintenance effort. However, the FVP partitioning should match the partitioning of the intended implementation. The stimulus generators, response generators, and application checkers are also written at the transaction level, where creation and run times are very fast.

4.2.2 The critical role of the transaction-level FVP

Table 4 - Transaction-level FVP roles

4.3 Methodology overview

Figure 7 - Unified methodology: FVP decomposition and recomposition

Next, while designers are implementing and verifying their individual units, verification engineers create block-level test environments, using the transaction-level FVP block models as reference models and extending the FVP interface monitors to the signal level for that block. Verification engineers perform functional directed testing to bring up the block, then use a combination of extended testing in the block-level test environment and verification in the original transaction-level FVP environment to meet the required transaction and structural coverage. Structural coverage, to be clear, is implementation-specific coverage of the logic and of important logic-structure functions -- for instance, determining that FIFOs are filled.
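As a concrete illustration of one such structural coverage point, the following minimal sketch (plain C++ with hypothetical names, not part of the methodology itself) instruments a FIFO model with two coverage flags -- was the FIFO ever completely filled, and was it ever drained back to empty?

```cpp
// Minimal sketch, plain C++, hypothetical names: a FIFO model instrumented
// with two structural coverage points.
#include <cstddef>
#include <cstdio>
#include <queue>

class CoveredFifo {
public:
    explicit CoveredFifo(std::size_t depth) : depth_(depth) {}

    bool push(int v) {
        if (q_.size() == depth_) return false;       // refuse overflow
        q_.push(v);
        if (q_.size() == depth_) hit_full_ = true;   // coverage point: FIFO filled
        return true;
    }
    bool pop(int& v) {
        if (q_.empty()) return false;
        v = q_.front();
        q_.pop();
        if (q_.empty()) hit_empty_ = true;           // coverage point: drained to empty
        return true;
    }
    void report() const {
        std::printf("FIFO structural coverage: full=%s empty=%s\n",
                    hit_full_ ? "hit" : "MISSED",
                    hit_empty_ ? "hit" : "MISSED");
    }

private:
    std::size_t depth_;
    std::queue<int> q_;
    bool hit_full_ = false;
    bool hit_empty_ = false;
};

int main() {
    CoveredFifo fifo(4);
    for (int i = 0; i < 4; ++i) fifo.push(i);   // drive to full
    int v;
    while (fifo.pop(v)) {}                      // drain back to empty
    fifo.report();                              // both coverage points hit
    return 0;
}
```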
As soon as a set of blocks that provides meaningful top-level functionality is verified, the verification team populates the FVP with those blocks, using the original transaction-level models for the rest of the design. The team verifies these blocks together, adding each remaining verified block as it becomes available. When verification becomes performance- and capacity-limited, the team uses acceleration on the more thoroughly verified blocks. When the FVP is fully populated with the block-level implementations, the verification team focuses on completing the application and transaction coverage requirements. At this point the implementation-level FVP is ready for final software integration and final system design-in, generally using emulation or prototypes. Once a verification team has established the FVP and unified methodology for a given design, it can easily re-apply them to derivative designs and again reap the benefits of speed and efficiency.

4.4 Transaction-level FVP creation and verification

4.4.1 Definition

Having interface monitors on all key external and internal interfaces is critical. These monitors are sets of assertions that check the transaction-level protocol for each block's interfaces. They track interface-specific transaction coverage, including transaction sequences and combinations, and they form the framework for the signal-level interface monitors used in implementation testing. The remaining verification environment -- stimulus generator, response generator, and application checks -- is also at the transaction level. The stimulus generator supports directed, directed-random (constraint-based random), and random test generation. The response generator provides appropriate application-level responses. The application checker ideally would be a full behavioral model of the IC. However, as that is often impractical, it may be a collection of application-level checks.

4.4.2 Creation

It is important to create transaction-level models for pre-existing IP that appears at the top level. RTL models degrade performance to the point where they eliminate many of the transaction-level FVP benefits. Each model can be a behavioral core with a transaction-level shell, and need be only as detailed as necessary to support overall FVP utilization (see Figure 8).

Figure 8 - Transaction-level modeling of existing IP

It may be more efficient to develop some blocks, such as those for signal processing and analog/mixed-signal/RF circuitry, in specialized design and verification environments. The transaction-level FVP models for such blocks should contain behavioral cores with transaction-level shells that interface to the rest of the FVP. The team verifies these behavioral blocks as part of the transaction-level FVP.

4.4.3 Verification

In fact, verifying the FVP is very similar to the lab bring-up of a final system, making similar methodologies appropriate. Creating tests at this early stage accelerates later implementation-level block verification in the FVP. The verification team should define initial application and transaction coverage requirements for the transaction-level FVP, then create the tests necessary to achieve this coverage prior to signing off on the executable model.
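The directed, directed-random, and random generation styles called for throughout this methodology can coexist behind a single transaction-level stimulus interface. A minimal sketch in plain C++ (hypothetical transaction fields and address windows; illustrative only, not the paper's environment):

```cpp
// Minimal sketch: directed, directed-random, and fully random generation of
// transaction-level bus operations behind one stimulus-generator interface.
#include <cstdint>
#include <cstdio>
#include <random>

struct BusTxn {
    bool          is_write;
    std::uint32_t addr;
    std::uint32_t data;
};

class StimulusGen {
public:
    explicit StimulusGen(unsigned seed) : rng_(seed) {}

    // Directed: the test spells out the exact transaction.
    BusTxn directed(bool wr, std::uint32_t addr, std::uint32_t data) {
        return {wr, addr, data};
    }

    // Directed-random: randomized, but constrained to an address window so the
    // traffic stresses one block's interface.
    BusTxn directed_random(std::uint32_t lo, std::uint32_t hi) {
        std::uniform_int_distribution<std::uint32_t> addr(lo, hi);
        return {coin_(rng_) != 0, addr(rng_), word_(rng_)};
    }

    // Fully random: anything legal on the interface.
    BusTxn random_txn() {
        return {coin_(rng_) != 0, word_(rng_), word_(rng_)};
    }

private:
    std::mt19937 rng_;
    std::uniform_int_distribution<int>           coin_{0, 1};
    std::uniform_int_distribution<std::uint32_t> word_{0u, 0xFFFFFFFFu};
};

int main() {
    StimulusGen gen(1);
    BusTxn t = gen.directed_random(0x40000000u, 0x40000FFFu);  // hypothetical register window
    std::printf("%s addr=0x%08x data=0x%08x\n",
                t.is_write ? "WRITE" : "READ", t.addr, t.data);
    return 0;
}
```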
4.5 Block-level verification

Below the block level, design and verification teams often will choose to use bottom-up verification beginning at the individual unit level. Digital designers generally verify their own units. Most use an HDL as the verification language, although a growing number also use hardware verification languages (HVLs). Superior controllability makes deep verification easiest at this level. All designers should meet structural coverage criteria, including code coverage, for their units. Designers creating units in specialized environments, such as signal processing and analog/mixed-signal/RF, will perform implementation and verification within those environments. They will be able to use their block-level behavioral core from the transaction-level FVP as a starting point for implementation and as a reference model for implementation verification.

4.5.1 The block-level environment

Figure 9 - Block-level verification environment

The stimulus generator generates transaction-level test sequences. All tests are at the transaction level, where they are fast and easy to write, debug, maintain, and reuse. Transaction-level tests also enable directed-random testing -- verification using randomly generated inputs that are intelligently constrained to stress a certain area of logic. The master and slave transactors are transaction-to-signal and signal-to-transaction converters that bridge between transaction-level traffic generation and the block's signal-level interface. The interface monitors check all operations on the bus, translate data into transactions, record transactions, and report transactions to the response checker. These monitors consist of the transaction interface monitors from the FVP with signal-to-transaction converters to interpret the signal-level bus activity. The response checker contains the block model from the transaction-level FVP and a compare function for self-checking. Reusing the block model eliminates the redundant, error-prone, and high-maintenance practice of embedding checks in tests.

4.5.2 Hardening blocks in the block-level environment

The most efficient way to bring up blocks is to start with directed tests in the block-level environment. The team verifies one function at a time, then increasingly complex combinations of functions. Debug time is critical at this phase. Thus, while it is possible to generate higher coverage in less time with random or directed-random techniques early on, the additional debug time needed to decode the stimulus makes doing so much less efficient. Basic directed tests, unlike random tests, also provide a solid baseline of regression tests that are fairly insensitive to design changes.

Once all basic functions are verified, it is time to "harden" the block through extensive testing that meets all of the transaction and structural coverage requirements. The verification team accomplishes this through a combination of block-level stress testing and realistic top-level testing in the transaction-level FVP. The verification team uses random, directed-random, and directed tests in the block-level environment to stress-test the block. Random and directed-random tests pick up most of the remaining coverage holes, but the team is likely to have to create directed tests to verify the difficult-to-reach corner cases. Verification teams can clearly trade off running more random or directed-random cycles versus handcrafting directed tests. Using simulation farms or an accelerator, teams can run orders of magnitude more random or directed-random cycles to hit more of the corner cases, as well as cover more of the obscure state-space beyond the specified coverage metrics. If engineer time is at a premium, as it usually is, this can be an excellent tradeoff.
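All of this stress testing remains self-checking because the response checker replays each observed transaction through the block model reused from the FVP. A minimal sketch in plain C++ (hypothetical transaction fields and a placeholder reference behavior; illustrative only):

```cpp
// Minimal sketch: a self-checking response checker that compares the
// implementation's responses against a reused transaction-level reference model.
#include <cstdint>
#include <cstdio>

struct Txn {
    std::uint32_t addr;
    std::uint32_t data;
};

// Stand-in for the reused FVP block model: given a request, predict the response.
struct ReferenceModel {
    std::uint32_t predict(const Txn& req) const {
        return req.addr ^ req.data;   // placeholder behavior, not a real block
    }
};

class ResponseChecker {
public:
    explicit ResponseChecker(const ReferenceModel& ref) : ref_(ref) {}

    // Called by an interface monitor each time it reassembles a transaction
    // and captures the implementation's response.
    bool check(const Txn& req, std::uint32_t dut_response) {
        const std::uint32_t expected = ref_.predict(req);
        if (expected != dut_response) {
            std::printf("MISMATCH addr=0x%08x expected=0x%08x got=0x%08x\n",
                        req.addr, expected, dut_response);
            ++errors_;
            return false;
        }
        return true;
    }
    unsigned errors() const { return errors_; }

private:
    const ReferenceModel& ref_;
    unsigned errors_ = 0;
};

int main() {
    ReferenceModel  ref;
    ResponseChecker checker(ref);
    checker.check({0x10u, 0x3u}, 0x13u);   // matches the placeholder model
    std::printf("errors=%u\n", checker.errors());
    return 0;
}
```

Because the checks live in the reference model rather than in each test, every directed, directed-random, or random sequence gets the same scrutiny for free.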
It should be noted that formal and semi-formal techniques may also be useful to verify properties or conditions that are extremely difficult to simulate or even predict.

4.5.3 Verifying blocks in context

Figure 10 - Verifying a block in the context of the original FVP

4.6 The implementation-level FVP

As each new block becomes available, the team brings it up in the same controlled manner to facilitate easy debug and to create a clean baseline set of top-level regression tests. Top-level verification quickly becomes performance- and capacity-limited, so it is best to migrate blocks that are running clean into an accelerator. Doing so leaves only a small number of blocks running in RTL simulation; the rest of the implemented blocks run in an accelerator, and the non-implemented blocks run at the transaction level (see Figure 11). Thus, this approach maximizes overall speed.

Figure 11 - Integrating a new block into the implementation-level FVP

4.7 Full system verification

4.7.1 Emulation and prototyping
Emulation and prototyping enable teams to test the design at near real-time speed in the context of its actual physical environment. This enables verification of application software against the design. Generally, design teams emulate to test application software, increase application-level coverage, or produce design output for human sensory review -- such as video and audio.

4.7.2 Verification hub: the design chain connection

Verification hubs enable the semiconductor company to maintain full control and possession of the detailed implementation while giving its customers access to high-speed design-in, verification, and software development environments. They also provide high-quality, realistic application-level testing from actual customers prior to silicon, which can be valuable to the SoC design team, internal embedded software developers, applications engineers, and others involved in technical IC deployment.

5 UNIFIED VERIFICATION PLATFORM REQUIREMENTS

Unified verification methodologies such as the one described above require a unified verification platform. The platform should be based on a heterogeneous single-kernel simulation architecture in order to optimize speed and efficiency. While a step in the right direction, integrating different verification engines -- simulators, test generators, semi-formal tools, and hardware accelerators -- and their environments is no longer good enough. Without a single-kernel implementation, critical performance and capacity are lost, ambiguity is introduced, and, most important, fragmentation remains. The architecture must be heterogeneous, supporting all design domains -- embedded software, control, datapath, and analog/mixed-signal/RF circuitry -- and supporting everything from system design to system design-in. It must include a comprehensive set of high-performance verification engines and analysis capabilities. Lastly, the architecture must have a common user interface, common test generation, a common debug environment, common models, and common APIs, and must support all standard design and verification languages. Only a verification platform meeting these requirements will support a unified verification methodology such as that described in the preceding section. By doing so, it will dramatically increase verification speed and efficiency within projects, across projects, and even across design chains.

5.1 Single-kernel architecture

5.1.1 Native Verilog and VHDL

The platform should also include support for standardized verification extensions to these languages if and when they become viable. While adding verification extensions to current HDLs will give designers more powerful verification capabilities, the extensions will have limited applicability at the top level, or even the block level, for SoCs with significant software content. With the vast majority of embedded software being written in C/C++, standard C/C++-based languages will be a requirement.

5.1.2 Native SystemC

Leading system, semiconductor, and IP companies are rapidly adopting SystemC, both to natively support embedded software and hardware modeling in the same environment and to move off proprietary in-house C/C++ verification environments. This makes it likely that SystemC will become the standard exchange language for IC designs and IP. However, it is unlikely that SystemC will displace HDLs for unit-level design and verification in the foreseeable future; hardware designers strongly prefer HDLs, and only the HDLs have a rich design support infrastructure and environment in place.
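To illustrate the modeling style this implies -- not any particular vendor's flow -- the following minimal sketch (SystemC, with a hypothetical timer peripheral and driver routine that are not from the article) shows a transaction-level hardware model and driver-style "embedded software" code compiled and executed in the same C++ environment:

```cpp
// Minimal sketch, assuming a SystemC installation and a hypothetical timer block.
#include <systemc.h>
#include <cstdio>

// Transaction-level model of the timer: register accesses are plain function
// calls, with no pins or clock edges to simulate.
struct TimerModel : sc_core::sc_module {
    SC_CTOR(TimerModel) : count_(0) {}

    void write(unsigned addr, unsigned data) {
        if (addr == 0) count_ = data;          // hypothetical COUNT register
    }
    unsigned read(unsigned addr) const {
        return (addr == 0) ? count_ : 0u;
    }

private:
    unsigned count_;
};

// The "software" side: a driver smoke test written against the same model.
static void timer_driver_smoke_test(TimerModel& t) {
    t.write(0, 42u);
    std::printf("timer count = %u\n", t.read(0));
}

int sc_main(int, char*[]) {
    TimerModel timer("timer");
    timer_driver_smoke_test(timer);   // hardware model and driver co-execute
    return 0;
}
```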
5.1.3 Native Verilog-AMS and VHDL-AMS

5.1.4 Native PSL/Sugar

5.2 Performance, performance, performance

5.2.1 Transaction-level support

Transaction recording in particular should be easy to retrofit to the extensive set of existing IP that recognizes transactions. The platform must support transaction recording throughout the entire verification process, and it must be able to aggregate results from all verification runs, including those that the embedded software development team runs. Transaction visualization and transaction-level exploration are critical to assessing the design's performance, understanding its actual functionality, and debugging errors. From a debug perspective, the platform needs to support transaction analysis, for example the ability to identify specific transaction combinations, and transaction debug, such as within the waveform viewer.

5.2.2 High-speed, unified test generation

In recent years, many verification teams have moved to commercially available proprietary hardware verification languages (HVLs) that mix hardware and software constructs to provide a rich set of verification-centric capabilities. These systems have distinct advantages versus standard C/C++ and standard HDLs. However, they can be slow and expensive. Being proprietary, the environments are not readily transportable and often support only a small part of the overall verification process, generally simulation-based digital verification. Standardizing proprietary HVLs will certainly make them more attractive. However, many current HVL users have identified significant performance issues, especially at the top level, and are seeking alternatives.

For engineers interested in using C/C++-based verification environments, the SystemC verification standard provides an excellent alternative. It is very high-performance and has a rich set of standard capabilities in the same open-source C++ language that supports system modeling and even RTL specification. Just as important, it is a standard, ensuring that test environments are transportable and that any number of tool vendors can compete to create the best products. For designers and verification engineers interested in working in an HDL-oriented environment, new open, standard extensions to Verilog may provide a strong alternative. Unified verification platforms must support the SystemC verification standard, as well as new HDL-based language extensions as they become viable.

5.2.3 Hardware-based acceleration

Figure 13 - Hardware emulation and acceleration provide high-speed top-level performance

Historically, hardware-based systems have been prohibitively expensive for most verification teams. Moreover, it has taken weeks to months to get a design into a hardware-based system. Unified verification platforms must provide hardware-based acceleration that addresses both of these issues. Hardware-based systems have inherent design and manufacturing costs. To effectively reduce these costs, the platform should include the ability to share systems simultaneously among a number of users. This capability can give many verification engineers orders-of-magnitude performance advantages at the same time, at a fraction of the cost. In addition, the hardware accelerator should double as a multiuser in-circuit emulator so that it is valuable throughout the design cycle. In fact, in either mode the accelerator/emulator should have the ability to serve as a multiuser verification hub for internal or external software development and system integration teams.
Reducing time-to-acceleration is just as critical. Using the unified methodology can help here, while also increasing overall performance. For example, verification teams that know they are going to use acceleration should create their transactors so that the communication between the simulator and the accelerator is at the transaction level rather than the signal level. From a platform standpoint, early design policy checking and co-simulation with a common debugging environment is a good first step; however, ultimately hardware acceleration needs to be an extension of the single-kernel architecture.
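As an illustration of that transactor style -- a sketch under assumed names and a hypothetical valid/ready write bus, not any specific product's transactor library -- the following SystemC-flavored C++ shows a master transactor that expands one transaction-level write() call into pin-level activity:

```cpp
// Minimal sketch: a master transactor bridging transaction-level tests to a
// hypothetical valid/ready signal-level write interface.
#include <systemc.h>

SC_MODULE(MasterTransactor) {
    sc_in<bool>           clk;
    sc_out<bool>          valid;
    sc_out<sc_uint<32> >  addr;
    sc_out<sc_uint<32> >  wdata;
    sc_in<bool>           ready;

    SC_CTOR(MasterTransactor) {}

    // Transaction in, signal activity out: one call per bus write. Intended to
    // be called from a test running in an SC_THREAD process.
    void write(unsigned a, unsigned d) {
        addr.write(a);
        wdata.write(d);
        valid.write(true);
        do {
            wait(clk.posedge_event());   // hold the request until accepted
        } while (!ready.read());
        valid.write(false);
    }
};
```

Because the tests only ever call write() with whole transactions, moving the block from RTL simulation into an accelerator changes only this thin signal-level layer; the tests themselves, and the traffic crossing the simulator-accelerator boundary, stay at the transaction level.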