SoCs require a new verification approach

Al Czamara, director and principal consultant at Zaiq Technologies, provides some tips for system-on-chip verification in this contributed article. He suggests a "vertical slice" approach to intellectual property (IP) verification, along with the use of verification IP and static methods. Zaiq provides system-level design solutions, including verification software, reusable IP blocks and design services.

As system-on-chip (SoC) designs become a driving force in electronic systems, current verification techniques are falling behind at an increasing rate. A verification methodology that integrates separate but key technologies is needed to keep up with the explosive complexity of SoC designs.

An SoC verification methodology must address many more issues than were prevalent even a couple of years ago: in particular, the integration of purchased and in-house intellectual property (IP) into new designs, the coupling of embedded software into the design, and the verification flow from core to system. Several key concepts are important to understand, including the transition from core to system verification, the reuse and integration of cores from multiple sources, and the support needed to optimize core reuse.

By now, most engineers and managers have heard that, by recent estimates, design verification consumes between 50 and 80 percent of a design project. And with silicon capacity doubling every 18 months, according to Moore's Law, the verification problem, which grows roughly as the square of capacity, is doubling every six to nine months. Two major problem areas keep the verification bottleneck with us today: IP core verification, and the SoC verification methodology itself.

SoC IP integration

Consider an IP block such as a PCI-X core that has been exhaustively verified in isolation. All this block-level verification does not help a great deal at the system level, when the PCI-X core is connected to the rest of the system.
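One way to carry block-level checking forward to the system level is a reusable protocol monitor that watches the core's interface wherever the core is instantiated. The sketch below, in Python, is purely illustrative: the one-outstanding-request rule and the event names are invented for the example, not taken from any real PCI-X verification IP.

```python
# Minimal sketch of a bus-protocol monitor that can be attached at the
# block level and reused unchanged at the system level. The event names
# and the protocol rule are hypothetical illustrations.

class ProtocolMonitor:
    """Watches a stream of bus events and records protocol violations."""

    def __init__(self):
        self.outstanding = 0   # requests issued but not yet completed
        self.errors = []       # human-readable violation reports

    def observe(self, event, cycle):
        """Feed one bus event per cycle: 'req', 'ack', or 'idle'."""
        if event == "req":
            if self.outstanding >= 1:
                # Example rule: this bus allows only one outstanding request.
                self.errors.append(f"cycle {cycle}: overlapping request")
            self.outstanding += 1
        elif event == "ack":
            if self.outstanding == 0:
                self.errors.append(f"cycle {cycle}: ack with no request")
            else:
                self.outstanding -= 1


def check_trace(trace):
    """Run the monitor over a recorded trace; return the violation list."""
    mon = ProtocolMonitor()
    for cycle, event in enumerate(trace):
        mon.observe(event, cycle)
    return mon.errors
```

Because the monitor depends only on the core's interface events, the same object can observe the core in a standalone block testbench or buried inside the full SoC, which is exactly the reuse the block-to-system transition demands.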
How should the software team write drivers for this IP core? What if the core needs modification? Will architectural issues arise when it is too late to change the core? All of these questions make IP use, and reuse, challenging.

With today's SoCs integrating ever larger amounts of IP, understanding the requirements for effective use and reuse of internal or third-party IP cores is critical, as is understanding how IP use affects SoC verification. Several key issues stand out.

A complete block of IP, which includes the hardware, a software API, verification code, protocol checkers and monitors, and documentation, permits encapsulation of the IP block: a complete "vertical slice" of IP. Successful encapsulation of IP means that designers gain the most leverage from the IP block, and that the reliability and predictability of using that block increase significantly. The block becomes easier to understand, install, modify, and debug, and system-level architectural verification becomes much simpler as well.

SoC verification methodology

A successful SoC verification methodology must be able to integrate multiple in-house or third-party IP cores; effectively migrate testing from the block level to the system level, to maximally leverage testing; integrate software such as drivers and diagnostics; support debug; and provide for the adoption of HDL acceleration and formal verification.

It is important to understand that SoC verification does not imply a homogeneous environment. There are many tools and methods in use, and IP comes from many sources, internal and external. A solid verification methodology must be able to incorporate verification code from a number of sources, tools, languages, and methods. A successful methodology is a collection of tools integrated in an open, documented platform. The most effective way to improve the quality of results, shorten development time, and decrease costs is a careful, consistent verification methodology used throughout the project.
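The "vertical slice" encapsulation described above can be pictured as a single deliverable that bundles every facet of the block, so an incomplete slice is immediately visible. A minimal sketch in Python follows; the field names are illustrative assumptions, not a standard packaging format.

```python
from dataclasses import dataclass, field

# Sketch of a "vertical slice" IP deliverable: the hardware alone is not
# the product; the slice bundles everything an integrator needs.
# Field names are illustrative, not taken from any real standard.

@dataclass
class VerticalSliceIP:
    name: str
    rtl_files: list = field(default_factory=list)   # hardware (HDL sources)
    driver_api: list = field(default_factory=list)  # software API entry points
    tests: list = field(default_factory=list)       # verification code
    monitors: list = field(default_factory=list)    # protocol checkers/monitors
    docs: list = field(default_factory=list)        # documentation

    def missing_facets(self):
        """Report which facets are empty; an incomplete slice is a risk."""
        facets = {
            "driver": self.driver_api,
            "tests": self.tests,
            "monitors": self.monitors,
            "docs": self.docs,
            "rtl": self.rtl_files,
        }
        return sorted(k for k, v in facets.items() if not v)
```

A release gate that rejects any IP block whose `missing_facets()` list is non-empty is one simple way to enforce the encapsulation discipline across internal and third-party sources alike.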
One effective technique for SoC verification is to use pre-defined verification IP. These can take two forms:

Finally, it is important to recognize domain-based verification techniques. Verification in the telecommunications domain differs from verification in the computer domain, which differs again from ATE; each has its own special requirements, even though there are many similarities.

Verification tests are becoming more complex, taking significant engineering effort to develop. If tests are written using a good methodology, and from a domain focus, they can be used at multiple levels, from block to system, and from project to project. This type of test should be viewed as a form of IP, with all the reuse, management, and methodology issues that attend any other type of IP. A good piece of verification IP will encapsulate the domain knowledge. The moral: tailor the verification solution to the problem, not the other way around.

The Future

Enter static methods. Currently, formal verification is routinely used to verify various netlist transformations during a chip's physical design. RTL-to-gate comparisons are also done now by some companies to eliminate, or at least minimize, gate-level regressions, saving considerable verification time: weeks to months for some large designs. If we could eliminate or at least minimize RTL verification using static methods, it is quite possible we could eliminate the verification bottleneck.

The current problem with formal verification of RTL, which essentially means verifying the RTL against the design specification, is that the English language is not well suited to airtight specification of electronic designs. What is needed is a formal specification language. Two examples of work being done in this area are Intel's formal property language initiative and the SpecC initiative.

Here is how this could work. The SoC architect or designer would write a specification in SpecC, for example.
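The reuse of a domain-focused test from block to system level, as described above, usually comes down to writing the test against an abstract interface rather than concrete signals, so that only the driver underneath changes. A hypothetical sketch, with invented names throughout:

```python
# Sketch: a domain-focused test written against an abstract driver
# interface, so the same test runs at block level (direct model access)
# and at system level (transactions routed through a bus model).
# All class and method names here are hypothetical.

class LoopbackTest:
    """Writes patterns and checks they read back. The domain knowledge
    (which patterns are worth trying) lives in the test itself."""

    PATTERNS = [0x00, 0xFF, 0xA5, 0x5A]

    def run(self, driver):
        failures = []
        for addr, pattern in enumerate(self.PATTERNS):
            driver.write(addr, pattern)
            got = driver.read(addr)
            if got != pattern:
                failures.append((addr, pattern, got))
        return failures


class BlockLevelDriver:
    """Block level: talks to a simple model of the core directly."""
    def __init__(self):
        self.mem = {}

    def write(self, addr, data):
        self.mem[addr] = data

    def read(self, addr):
        return self.mem.get(addr, 0)
```

At system level only the driver is replaced, for example by one that issues the same reads and writes through the SoC's bus; `LoopbackTest` itself is reused as-is, which is what makes such a test a managed, reusable piece of verification IP.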
Using translation tools, the designer would generate an English-language specification for humans to read, HDL code for architectural simulations, and a netlist for physical design. The specification would truly be the reference from which the design is built. As with logic synthesis today, formal verification would be used to sanity-check the translations. All of this would be done without any design verification as we know it today.

In the near term, however, static methods will not eliminate the verification bottleneck. The source of the bottleneck is not tools, but the incomplete and self-inconsistent way that people create new architectures. Verification takes a long time because those inconsistencies must be found and fixed, and it is unlikely that a tool will be able to determine intent from a specification in the near future.
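The single-source-specification flow described above can be illustrated in miniature: one machine-readable description drives both the human-readable text and an executable consistency check, so the two can never drift apart. The toy Python sketch below invents a register layout for illustration; a real flow would use a formal language such as SpecC and generate HDL as well.

```python
# Toy illustration of a single-source specification: one structured
# description generates both the English documentation and an executable
# check. The register name, width, and fields are invented for the example.

SPEC = {
    "name": "CTRL",
    "width": 8,
    "fields": [
        {"name": "ENABLE", "bits": (0, 0)},   # (high bit, low bit)
        {"name": "MODE",   "bits": (2, 1)},
    ],
}

def to_english(spec):
    """Generate the human-readable description from the spec."""
    lines = [f"Register {spec['name']} ({spec['width']} bits):"]
    for f in spec["fields"]:
        hi, lo = f["bits"]
        lines.append(f"  bits [{hi}:{lo}] {f['name']}")
    return "\n".join(lines)

def fields_legal(spec):
    """Executable check: fields fit in the register and do not overlap.
    This is the kind of self-inconsistency a static flow catches early."""
    used = set()
    for f in spec["fields"]:
        hi, lo = f["bits"]
        bits = set(range(lo, hi + 1))
        if hi >= spec["width"] or bits & used:
            return False
        used |= bits
    return True
```

Because the documentation and the check are both derived from `SPEC`, fixing an inconsistency in one place fixes it everywhere, which is the essential promise of the specification-driven flow sketched in the article.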
All material on this site Copyright © 2017 Design And Reuse S.A. All rights reserved.