How to choose a verification methodology
Rangarajan (Sri) Purisai (07/09/2004 6:00 PM EDT) URL: http://www.eetimes.com/showArticle.jhtml?articleID=22104709
It is well known that functional verification consumes the lion's share of the design cycle. With so many new techniques available to alleviate this problem, which ones should we actually use? The answer, as it happens, is not straightforward, and getting it wrong is often confusing and costly. The tools and techniques to be used in a project must be decided early in the design cycle to extract the best value from these new verification methods. Companies often make costly mistakes by underestimating, or sometimes overestimating, the complexity of the design and the skill set required to deploy these new tools and techniques.

The higher the abstraction level, the easier it is to design; by the same token, the higher the abstraction level, the bigger the mistake that can be made. An architectural flaw can compromise the entire chip, whereas a misconnection of wires at the gate level can be fixed with a re-spin. Verilog, for example, makes it possible to design at a fairly abstract level, but it is easy to make mistakes if one does not know the nuances of the language. The same argument holds for the many verification techniques and languages available today.

This article gives the reader an overview of the prevalent verification techniques (formal verification, directed, random, constrained-random, assertions, property checking) and languages (SystemC, C/C++, SystemVerilog, OpenVera, e). It also examines where the various verification techniques fit, and when they should be used, in a traditional digital ASIC design flow.

1.1 Bottlenecks

1.1.1 Design bottleneck

Design time is a function of silicon complexity. This in turn gives rise to system complexity, which affects time to market, as shown in Figure 1.
Figure 1 Technology cycle
As the number of transistors in designs grew exponentially, a linear increase in compute time or in the number of engineers was not enough to contain design time. To solve this problem, the EDA industry introduced the concept of design abstraction through automation, starting with language-based solutions such as Verilog and VHDL. This catch-up game is still being played; the latest languages to be introduced and supported by the EDA world are SystemC and SystemVerilog. For current technology processes, design complexity is well understood, and the design bottleneck has been overcome to some extent thanks to the productivity gains from EDA tools. Having solved that first round of problems, the focus has shifted to their side effects, such as the verification bottleneck.

1.1.2 Verification bottleneck
Figure 2 Design and verification gaps
The verification bottleneck is a consequence of raising the design abstraction level, for the following reasons:
1) Using a higher level of abstraction for design, transformation, and eventual mapping to the end product is not free of information loss and misinterpretation. For instance, synthesis takes an HDL-level design and transforms it to the gate level. Verification is needed at this level to ensure that the transformation was indeed correct and that design intent was not lost. Raising the level of abstraction also raises the question of how the code that describes the design is interpreted during simulation (a classic illustration follows this list).
2) The requirement for higher system reliability forces verification to confirm that a chip-level function will perform satisfactorily in a system environment, especially because a chip-level defect has a multiplicative effect at the system level.
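As a small, hypothetical illustration of the interpretation issue in point 1, consider the classic incomplete sensitivity list. The RTL below is legal Verilog/SystemVerilog, but a simulator only re-evaluates the block when a changes, while synthesis produces a gate that responds to both inputs, so the pre- and post-synthesis behaviors can differ. Module and signal names are invented for this sketch:

```systemverilog
// Hypothetical example: simulation/synthesis mismatch caused by an
// incomplete sensitivity list. The simulator updates y only when 'a'
// changes; the synthesized AND gate responds to 'b' as well, so the
// gate-level netlist and the RTL can disagree until verified.
module and_mismatch (
  input  wire a,
  input  wire b,
  output reg  y
);
  always @(a)        // 'b' is missing from the sensitivity list
    y = a & b;
endmodule

// Intent-preserving version: always_comb (or always @*) makes the
// simulated behavior match the synthesized combinational logic.
module and_fixed (
  input  wire  a,
  input  wire  b,
  output logic y
);
  always_comb
    y = a & b;
endmodule
```

Gate-level or equivalence verification after synthesis is what catches this kind of divergence between what was simulated and what was actually built.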
As complexity continued to grow, new verification languages were created that could verify complex designs at various levels of abstraction, along with the technologies and tools that supported them. So what does all this mean for chip vendors? They have to evaluate new tools. Engineers have to be trained on these new tools and technologies. New tools and resources have to be included in the cost structure of R&D expenses. The company as a whole has to overcome a learning curve in a short time. Risk evaluation for these new tools needs to be performed, and the integration and interoperability of new tools with existing technologies needs to be considered.

1.2 Verification versus validation

In addition to the verification problem, chip companies are grappling with validation time. This section describes how "validation" differs from "verification" and sets the stage for the subsequent section on verification technologies. Kropf [3] defines "validation" as the "process of gaining confidence in the specification by examining the behavior of the implementation." Recently there was a discussion of "verification vs. validation" in the on-line Verification Guild, and many views were presented. One view was that "validation ensures it is the right design, while verification ensures that the design is right." Another was that "verification means pre-silicon testing (Verilog/VHDL simulations), while validation is post-silicon testing (testing silicon on boards in the lab)." Whether it is called validation or verification, two things need to happen to ensure that the silicon meets the specification:
Figure 3 Typical design flow
Depending on the complexity of the function being implemented, some of these steps may be skipped or more steps added. For example, if we know that a certain design is purely hardware-oriented and does not involve drivers or software, one can jump directly from abstraction level 3 to abstraction level 1 (there is no need for a hardware/software trade-off). A PLL (phase-locked loop) design is one such case. It is important to note that equivalence must always be maintained as we step down the levels of abstraction, to ensure that the lowest level of abstraction meets the requirements of the system specification. For example:
2) Equivalence may be established between a C-model of a specification and its HDL implementation by comparing the outputs of the C-model (now the reference, or "golden," model) and the HDL implementation for a given application. In the absence of a C-model, an "expected data model" (a behavioral model that has passed the functional equivalence test described in [1]) is used. Equivalence of this type is also considered functional (a minimal sketch of this comparison style follows Figure 4).
3) The HDL implementation (now the reference, or "golden," description) and the gate-level description (after synthesis) are established to be equivalent by using a logic equivalence check. At this point the equivalence is logical in nature, because the design is in the form of bare logic gates and functions can be expressed as logical expressions.

Figure 4 shows a snapshot of the various methods and technologies that are available to companies today.
Figure 4 Verification methodologies
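As a minimal sketch of the output-comparison style of equivalence described in item 2 above, the fragment below drives identical stimulus into an implementation under test and a behavioral reference ("golden") model and flags any mismatch. The module names (dut_adder, ref_adder) and the trivial adder function are invented for illustration:

```systemverilog
// Behavioral "golden" model (could equally be a C model reached via DPI).
module ref_adder (input logic [31:0] a, b, output logic [32:0] sum);
  assign sum = a + b;
endmodule

// Stand-in for the RTL implementation under test.
module dut_adder (input logic [31:0] a, b, output logic [32:0] sum);
  assign sum = a + b; // the real DUT would be the synthesizable design
endmodule

// Self-checking comparison: same stimulus to both, compare every sample.
module compare_tb;
  logic [31:0] a, b;
  logic [32:0] sum_dut, sum_ref;

  dut_adder u_dut    (.a(a), .b(b), .sum(sum_dut));
  ref_adder u_golden (.a(a), .b(b), .sum(sum_ref));

  initial begin
    repeat (1000) begin
      a = $urandom();
      b = $urandom();
      #1; // let combinational outputs settle
      if (sum_dut !== sum_ref)
        $error("Mismatch: a=%h b=%h dut=%h ref=%h", a, b, sum_dut, sum_ref);
    end
    $display("Comparison run complete.");
    $finish;
  end
endmodule
```

In practice the reference is often a C/C++ or SystemC model connected through DPI or a transaction-level interface, but the comparison principle is the same.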
1.3.1 Dynamic functional verification

A simulator is used to compute the values of all signals and to compare the specified expected values with the calculated ones. The industry currently offers a choice of two types of simulators:
A major drawback of dynamic simulation is that only typical behaviors, not all possible behaviors of a chip, can be verified in a time-bound simulation run. The main reason is that chips are tested over the known "test space" using directed tests, and even covering the known test space can take a very long time. For example, exhaustively verifying a simple adder that adds two 32-bit operands takes 2^32 x 2^32 (roughly 1.8 x 10^19) input combinations, one per clock. As the logic gets more complex, the verification space only grows.

This brings about random dynamic simulation, which applies random stimulus to the design in an effort to maximize the functional space that is covered. The problem with purely random testing is that, for very large and complex designs, it can be an unbounded problem. To address this, the EDA industry introduced higher-level verification languages such as OpenVera, e, and the SystemC Verification Library, which added concepts such as constrained-random stimulus, random stimulus distributions, and reactive testbenches. Beyond randomization, these languages and tools increased productivity by reducing the time spent building test-case scenarios for stimulus generation: test scenarios can be written at the highest level of abstraction and then "extended" to any lower level of abstraction using object-oriented constructs, as sketched below. When using dynamic verification, companies typically also want the functional space that has been covered to be captured in quantifiable terms.
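As a sketch of the constrained-random and "extension" ideas, assuming a hypothetical bus transaction with invented field names, the SystemVerilog below defines a randomizable stimulus class and then derives a more tightly constrained test from it without rewriting the original:

```systemverilog
// Hypothetical constrained-random stimulus item. Field names (kind, addr,
// len) are invented for illustration.
class bus_txn;
  typedef enum {READ, WRITE} kind_e;
  rand kind_e     kind;
  rand bit [31:0] addr;
  rand bit [7:0]  len;

  // Base-level constraints: legal address range and non-zero length.
  constraint c_legal {
    addr inside {[32'h0000_0000 : 32'h0000_FFFF]};
    len  > 0;
  }
endclass

// A lower-level, more specific test extends the base item and simply
// layers on additional constraints -- the base scenario is reused as-is.
class small_write_txn extends bus_txn;
  constraint c_small_write {
    kind == WRITE;
    len  <= 4;
  }
endclass

module tb;
  initial begin
    small_write_txn t = new();
    repeat (10) begin
      if (!t.randomize()) $error("randomize() failed");
      $display("kind=%s addr=%h len=%0d", t.kind.name(), t.addr, t.len);
    end
  end
endmodule
```

OpenVera and e express the same idea with their own syntax; the productivity gain comes from layering constraints rather than writing a new directed test for each scenario.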
Assertions

1.3.2 Hybrid functional verification

1.3.3 Static functional verification

Current tools target the static verification market in two ways:
In order to make sure that the gate-level representation is the same as the HDL implementation, an "equivalence check" is performed by selecting matching points and comparing the logic between those points. A data structure is generated and the output value patterns are compared for the same input patterns; if they differ, the two representations (in this case gate level and RTL) are not equivalent. Equivalence checking is sometimes also performed between two netlists (gate level) or two RTL implementations when one of the representations has gone through some type of transformation (Figure 3). Some practical reasons for the design representations to differ are as follows:
This section describes the trends and forces that are shaping the world of verification.

Technology perspective

Using static functional verification, each step of module development is exhaustively verified, which can yield radically better subsystem and system quality and reliability. It has been claimed that static/formal functional verification finds more bugs faster [3]. On the flip side, the drawbacks of using static functional verification are:
Static verification is doing to functional simulation what static timing analysis (STA) did to dynamic timing analysis (gate-level simulation). But gate-level simulation is not dead, and similarly, dynamic simulation will continue to dominate the functional verification space until formal verification tools provide a way to converge on results and, in general, mature further.

Language perspective

Until this trend is proven on real-world designs, companies will continue to rely on the current high-level design languages (Verilog/VHDL) and use either proprietary verification languages (OpenVera, e) or good old-fashioned Verilog/VHDL. SystemC and other high-level design description languages play an important role in design flows that involve hardware/software trade-offs and designs that have software running on hardware, as in SoCs (systems-on-chip). SystemC and other high-level design languages also continue to play an important role in architectural modeling and validation, and SystemC is used where architectural model components can be reused for verification through transaction-level models.

1.4 Criteria for choosing the right verification methodology

Engineers are grappling with extreme design complexity in an environment of shrinking time to market and tighter cost constraints. In such an environment, it may seem sufficient to fill in the holes in existing methodologies and to postpone spending time on new technologies. It has been shown, however, that companies that spend a higher percentage of R&D on new technologies make more efficient use of their R&D spending, enjoy faster time to market, grow faster, and are more profitable. That said, companies have to evaluate methodologies and technologies based on their individual needs and their core values. Before introducing new technologies into their tool flow, they should ask themselves the following questions and make the appropriate trade-offs.

Product perspective

Companies that make pure ASIC chips that do not run any software might not have to perform hardware/software trade-offs or run tests that capture software running on hardware. For example, a SERDES (serializer/deserializer) chip requires a different type of verification and modeling methodology than an SoC, which has both hardware and software. For large corporations with diversified product lines, the verification methodology has to encompass the varied requirements of the various products.

System perspective

System-level verification calls for architectural models, reference models, complex application-level tests, and elaborate testbench setups. For these challenges, companies can either spend a great deal of time and effort using existing languages, technologies, and methodologies, or embrace new technologies (such as Vera or Specman). In addition, chip vendors have to ensure that the reference models and verification suites they use are built in an environment compatible with those of their customers.

Methodology perspective

It might be easier to first run random simulations to catch many bugs in the initial stages of verification, and then constrain the random simulation to make sure that the test space specified in the test plan has been fully covered on the device. Constraint-driven verification should be considered to home in on the functional coverage metric, as sketched below.
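As a minimal sketch of what homing in on functional coverage can look like in SystemVerilog (the handshake signals and module name are invented for illustration), the fragment below samples a covergroup on every clock and also states a simple property; the same property can be checked in dynamic simulation or handed to a formal tool, which is what makes assertions attractive for reusable IP:

```systemverilog
// Hypothetical checker for a request/grant handshake.
module cov_and_assert_chk (
  input logic       clk,
  input logic       req,
  input logic       gnt,
  input logic [1:0] burst_len
);
  // Functional coverage: which burst lengths and request values have
  // actually been exercised by the stimulus so far.
  covergroup cg @(posedge clk);
    cp_burst : coverpoint burst_len;
    cp_req   : coverpoint req;
    x_req_burst : cross cp_burst, cp_req;
  endgroup
  cg cov_inst = new();

  // Property: every request must be granted within 1 to 8 cycles.
  // The same property can be simulated or proved formally.
  property p_req_gets_gnt;
    @(posedge clk) req |-> ##[1:8] gnt;
  endproperty
  assert property (p_req_gets_gnt)
    else $error("req was not granted within 8 cycles");
endmodule
```

In practice such a checker would be attached to the design (for example with a SystemVerilog bind) so that the coverage and the assertion travel with the block when it is reused as IP.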
The term "functional coverage" is used to describe a parameter that quantifies the functional space that has been covered, as opposed to code coverage that quantifies how much of the implemented design has been covered by a given test suites. Directed simulation can then be used to cover corner test space at the end of the verification cycle. Assertions and properties can be made to work in the background during static functional verification (at the module level) and can be reused in a dynamic simulation environment (at both module and system level). They are also useful if the module is going to be turned into IP because the assertions will constantly check the IP's properties when it is reused. 1.5 Conclusion With time, devices with smaller feature sizes (90nm and 65nm) will be in production. We are seeing design bottlenecks in products that use these technologies (as seen in Figure 1). Companies are trying to solve issues such as routing, cross-talk, and soft error rate this time around. Once these design bottlenecks are overcome, and as chip vendors pack more logic on chips with smaller features, the next round of verification problem is foreseeable. New tools and methods are constantly being introduced to increase design productivity. It is necessary to raise the level of abstraction for design and for verification to contain the growing complexities. Convergence of design and verification language is now being seen through the introduction of SystemVerilog. Validation continues to play an important role by increasing the product quality, and indirectly affecting the overall product time by enabling first-pass silicon. SystemC and modeling languages will continue to enable architecture validation of logic intensive products and ensure a "correct by design" methodology. Productivity is being further raised through the use of properties, assertions, and through the introduction of formal verification tools. Do we see dynamic simulation going away in the near future? No. Engineers still rely on and trust dynamic simulation because of its ability to verify large and highly complex designs. In the long run, we could see a trend where formal verification might take the front seat while dynamic simulations will be run for sanity checks. In this article we presented some trade-offs that companies could make in order to adopt best-in-class tools and technologies to achieve long-term success. New technologies might mean high short-terms costs and expenses, but could pay off in the long run by building confidence in the products that the companies build. These decisions could be easy for some companies and a hard choice for others. Unfortunately, a tool, language, or technology that meets every company's verification and design requirements is still a dream. References [1] SNUG (Synopsys Users Group) paper: "The Next Level of Abstraction: Evolution in the Life of an ASIC Design Engineer" Rangarajan (Sri) Purisai is a senior logic design engineer at Cypress Semiconductor's Network Processing Solutions Group. Currently, he is working on the architecture and design of high performance network search engines and co-processors. | |