Verification moves to a higher level
EE Times
Brian Bailey (10/03/2005 9:00 AM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=171201809
Functional verification often creates problems in the design flow. More than 50 percent of all chips fail the first time, with 70 percent of the problems due to functional verification. Simulation vendors respond by telling users to do more simulation, build bigger simulation farms and use emulation. Analysis of past verification crises demonstrates that these answers did not, and will not, properly address the problem.

Significant advances in verification productivity have occurred recently, particularly in testbench development, making it quicker and easier to develop new tests. These advances also reduce the maintenance time for existing tests, which can now absorb design changes without significant modification. However, they have also placed an even greater burden on already overworked simulators. Companies can now generate thousands of vector sets, but have little time or resources to run them. Hence, attempts are made to identify the "best" vector sets, differentiated by the coverage they provide.

Design has changed, but today's verification techniques are outdated. With the introduction of languages such as SystemC, and in particular transaction-level modeling, designers can think and work at higher levels of abstraction. Increased IP usage also means that more attention is paid to system-level effects, the interaction of blocks and the architecture of the system. Verification must reflect these changes in the design methodology, but this will require the convergence and maturation of three independent technologies, as described later in this article.

The verification continuum

We should view verification as a complete continuum that parallels the design flow [1], rather than as a singular function. Each task in that continuum verifies a single facet of the system, such as functionality, architecture, performance, timing or implementation. Consider the simplified design flow shown in Figure 1.

At the top of the design flow, the system is designed and algorithms are developed. Most companies do this on paper, although small pieces may be modeled in UML, Matlab or similar languages capable of the necessary levels of abstraction. No distinction is made between hardware and software, and no consideration is given to how the algorithms will be implemented. Assuming that models existed for everything at this level, you would want to verify that the algorithms are correct and that the interaction between the blocks produces the desired functions on the primary outputs. System simulators are nothing new, but because suitable models are rarely available, functional verification is performed later in the flow today.
Figure 1 Facets of design and verification
The next stage in the design process sets the solution's basic architecture by deciding which functionality to implement in hardware and which in software. Much of the intellectual property (IP) that will be used is selected, particularly any platforms and essential operating systems, giving a fairly accurate picture of system performance. The industry standard taxonomy [2] defines the model used here as an abstract-behavioral model, which describes the function and timing of a component without describing its implementation. The models' interfaces are token-passing in nature, but the tokens carry real data and accurate functionality is performed on them. The industry often calls these tokens "transactions." They enable exploration of load factors, congestion, resource utilization and other aspects of the system. While abstract hardware/software co-simulation has been available, few people currently perform this kind of performance verification, due to a lack of models, deferring it until later in the flow.

In the hardware space, the design process then considers micro-architectural decisions, such as the amount of parallelism to use, pipelining and resource sharing. These decisions affect the area, power, latency and throughput of the solution. Once this implementation detail is added to the refined abstract-behavioral model, an RTL model emerges. Here, most companies start the verification process, including system-level functional verification, performance verification and implementation verification. These models contain far more detail than these types of verification require, resulting in wasted effort. The introduction of languages such as SystemC [3] has enabled the development of behavioral models, allowing functional verification to be performed at a higher level of abstraction. However, if the industry is to accept and utilize this abstraction for functional verification, other changes are required.
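To make the notion of a transaction concrete, the sketch below shows a minimal, hypothetical SystemVerilog example: a transaction class and a stimulus loop that operate on whole read/write operations rather than on pins and clock edges. The names (bus_txn, send and so on) are invented for illustration and are not taken from any particular library or from the article.

// Hypothetical transaction-level stimulus: the test reasons about whole
// operations, not pin wiggles or individual clock cycles.
class bus_txn;
  rand bit        is_write;
  rand bit [31:0] addr;
  rand bit [31:0] data;
endclass

module tlm_style_test;
  // Stand-in for a channel into the design; a real environment would
  // convert each transaction into pin activity inside a driver or transactor.
  task automatic send(bus_txn t);
    $display("%s addr=0x%08h data=0x%08h",
             t.is_write ? "WRITE" : "READ ", t.addr, t.data);
  endtask

  initial begin
    bus_txn t;
    repeat (10) begin
      t = new();
      void'(t.randomize());  // constrained-random stimulus at the transaction level
      send(t);
    end
  end
endmodule

Because nothing below the transaction boundary is modeled, tests written this way execute orders of magnitude faster than pin-level RTL simulation, which is the kind of speedup cited later in this article.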
To avoid verification inefficiency, we must understand each of the stages in Figure 1. We do not need to know how an architecture will be implemented in order to verify it. We also do not need to concern ourselves with every clock in order to verify behavior, and verification of the RTL implementation does not need detailed timing. Successful verification separates these issues and relies on other tools to ensure the validity of the assumptions and simplifications made at each stage.

Moving up in abstraction

There are two levels of abstraction at which the emerging SystemC and SystemVerilog tools are directed. The first is the top-down, system-level tools that analyze algorithms and architectures and then optimize the mapping of one onto the other. These tools target the system design and partitioning stages shown in Figure 1. Many tools have recently emerged that enable fast simulation of preconfigured platforms, such as PrimeXsys [4] and OMAP [5], and they can handle user-created blocks. These systems, centered on a processor model, execute at multiple MIPS on a typical Pentium-based platform [6]. With these models, large quantities of functional verification can be performed, together with fairly accurate evaluation of architecture and partitioning choices. However, we cannot show that these models are correctly implemented at the RTL level, which is why most of these systems are based on available IP blocks.

The second path to abstraction is the continuation of a bottom-up progression to a more abstract design description, in which the micro-architectural details are inserted by a synthesis tool rather than defined in the description. This is the behavioral level shown in Figure 1. Micro-architectural details define how an algorithm is implemented. For example, there are many ways to implement a multiply operation, ranging from a single-cycle multiplier to more iterative "shift and add" sequences of operations requiring multiple clocks. Pipelines can be built that allow data to stream through multiple operators, and resources can be shared and scheduled to avoid conflicts. Tools exist that take RTL designs and produce simulation models that do not maintain the implementation details but can improve simulation speed by a factor of five or more [7]. Experiments conducted in the industry have shown that full transaction-level models run 100 to 1,000 times faster than the equivalent RTL models.

Key elements of a solution

With design sizes in the tens of millions of gates, the number of test vectors run represents a small fraction of the possible design behavior. Regression suites range from a few hundred to many thousands of vector sets, with execution times ranging from days to weeks, even on simulation farms. If these need to be run after every change, it becomes too expensive to consider changes late in the development cycle. There are three key pieces of technology that must come together if this verification crisis is to be resolved:
1. High-level models and simulation engines fast enough to carry the bulk of functional verification.
2. A means of ensuring that an implementation functionally matches the high-level models.
3. An effective means of defining and checking the temporal properties of a design against the specification.
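As an illustration of the third item, the fragment below is a minimal, hypothetical SystemVerilog assertion that checks a temporal property of a request/acknowledge handshake. The signal names and the four-cycle bound are assumptions made for the example, not requirements taken from the article.

// Hypothetical temporal property: every request must be acknowledged
// within one to four clock cycles, unless reset is active.
module handshake_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  property req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty

  assert property (req_gets_ack)
    else $error("ack did not follow req within 4 cycles");
endmodule

The same intent can be captured in PSL; either way, the timing requirement from the specification is expressed in a form that both simulators and formal tools can check against the implementation.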
Figure 2 Sequential differences
At the top of Figure 2, the system behavior is defined, but the clocking is not. In other words, it does not say @(posedge CLK); if it did, only the solution on the left of Figure 2 would be feasible. With the extra flexibility, the right-hand solution is equally valid, and it may allow the use of a faster overall clock or less silicon area. However, proving that these two are functionally equivalent is more difficult. Without knowing the starting state of the control flip-flop, it is difficult to know when the operation is meant to start or when the output is valid. These differences must be resolved if the solutions are to be declared functionally equivalent. In a simple example such as this it is easy to locate the problems, but for something like a complex processor pipeline, using simulation to exercise all possible behaviors is time-consuming and error prone. For complete verification, 2^(number of input bits + number of state bits) vectors are possible; a design with just 32 inputs and 64 state bits, for example, has on the order of 2^96 cases. While not all vectors are needed, the number required to ensure a good probability of functional equivalence is still high, which explains why most engineers are unwilling to consider significant changes to a micro-architecture once they have found one that works.
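A minimal sketch of the kind of difference Figure 2 describes is shown below, assuming the behavior is a pair of additions; the module and signal names are invented for illustration. The first version completes both additions in one cycle; the second registers the intermediate sum, shortening the critical path at the cost of a cycle of latency, which is exactly the sort of sequential difference that combinational equivalence checking cannot handle.

// Single-cycle version: both additions complete before the next clock edge.
module sum3_onecycle (
  input  logic       clk,
  input  logic [7:0] a, b, c,
  output logic [9:0] y
);
  always_ff @(posedge clk)
    y <= a + b + c;
endmodule

// Two-cycle version: the first sum is captured in a register, so the result
// appears one clock later but the longest combinational path is one adder.
// The two modules match only under a timing view that accounts for the extra
// cycle of latency (and for c being held stable, or delayed, by one cycle).
module sum3_twocycle (
  input  logic       clk,
  input  logic [7:0] a, b, c,
  output logic [9:0] y
);
  logic [8:0] partial;
  always_ff @(posedge clk) begin
    partial <= a + b;
    y       <= partial + c;
  end
endmodule

A sequential equivalence checker must recognize that partial corresponds only to a transient node in the single-cycle design and align the two designs at the points where y becomes valid, the "synchronization points" described in the next section.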
Introducing sequential equivalence checking

In combinational equivalence checking, the key was finding the corresponding state storage devices in each design, even though the encoding of those states may vary between the designs or be difficult to find. In sequential equivalence checking [8], the designs can be retimed in relation to each other, meaning that some state elements may exist in only one of the designs. In Figure 2, the design on the right holds an intermediate value in a register. The design on the left has no corresponding register; that value exists only transiently on the output of the first adder. A sequential equivalence checker needs to recognize corresponding values in the two designs and then ascertain that the same function is performed in both between the identified time points. Such a point is called a "synchronization point" [9]. Because similarities are expected between the two designs, identification of these compare points is simplified.

Looking into the crystal ball

The key to successfully projecting the future is to use the right data points and trends that remain valid. When implementation changes were made in a gate-level design, it had to be verified that the change did not disturb the rest of the system, and re-running the complete test suite was not the answer. Combinational equivalence checking solved this problem and enabled verification to migrate to a higher level of abstraction. Once functional verification could concentrate on the RTL level, longer and more robust tests were created. Similarly, sequential equivalence checking could eliminate the need for most functional verification at the RTL level. Temporal assertions are advancing with the standardization and adoption of PSL and SystemVerilog, meaning that a solution for ensuring that the timing of the implementation conforms to the specification is emerging. High-level simulators, which run much faster than the existing emulators operating at the RTL level, have already demonstrated the enormous amounts of functional verification they can provide. All three pieces of technology necessary to enable the transition to higher levels of abstraction are thus in place and maturing.

In the coming years, we expect to see significant changes in approaches to the verification problem, which will enable larger, more complex designs to be tackled and will also affect the design flow. Many designers are reluctant to build and use behavioral models because they cannot check for functional equivalence between them and the RTL designs. Sequential equivalence checking will increase that confidence, regardless of whether behavioral synthesis tools are adopted. Hopefully, these performance and quality improvements will continue until the next major roadblock in the flow is encountered, which will involve managing the necessary levels of concurrency in the design.

References

1. Bailey, B. (executive editor), The Functional Verification of Electronic Systems, International Engineering Consortium, 2005.

Brian Bailey is an independent functional verification consultant helping system designers improve their verification efficiency and providing guidance and technology services to small start-up companies and venture capital firms. He has spent over 20 years creating verification solutions at a number of EDA companies and is active in the standards community. He can be reached at brian_bailey@acm.org.