Verifying SoCs and IP in parallel
Leonard Drucker, Saverio Fazzari, Kevin Locker and Tim Lange (07/12/2004 9:00 AM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=22104441
We all know that verifying a system-on-chip (SoC) design can take up to 70 percent of the overall design cycle, and even the most basic environment presents many considerations to weigh. Add the need to consider design intellectual property (IP), and the complexity reaches a new level.

The use of IP in an SoC requires a more stringent verification process. Because those pieces of code are usually developed by a group or company outside the SoC design team, the code is generally not visible for debug, and the assumptions made, or the interpretation of the device's specification, are not always known to each party. Therefore, more explicit methods of testing must be used.

Evolving design IP along with the SoC brings further challenges. When a piece of IP has been used in many different designs, the issues encountered during its integration are well understood. When the IP is being developed and verified in parallel with the SoC, however, potential issues are unknown. To mitigate the uncertainty, a tight partnership among the SoC developer, the IP supplier and the EDA vendor is important; Cadence fosters this through its OpenChoice program.

Design team members in the Emerging Businesses Unit at the Semiconductor Division of Philips created a piece of PCI Express design IP that required tight integration of environments with their customer. From the beginning, they built into their strategy a testing method that would mimic the customer's environment. Specifically, they chose to verify the IP they were creating against the same verification IP their customer used: the Denali PCI Express verification component. The Philips team would need to enhance its IP verification methodology to work in the customer's environment. Most important, the team needed a configurable approach and a common testing methodology for all of its members.

Configurability was handled using a SystemC top-level module and makefiles. A PCI Express interface can take on several configurations. First, a PCI Express device can act as a root device or an endpoint device, so the environment needed to be configured three ways: with the Philips device acting as a root and the Denali verification IP acting as an endpoint; with the Philips device acting as an endpoint and the Denali verification IP acting as a root; and with two Philips devices, one root and one endpoint, driven by a generic test. Second, the connection between the root and the endpoint can be anywhere from one to 32 channels wide, so the configuration needed to change easily to test the different combinations of channels, including a mismatched number of channels (one channel to four, for example, or four channels to 16). A sketch of such a configurable top level appears below.
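To make this concrete, the following is a minimal sketch of such a configurable SystemC top level. It assumes makefile-controlled macros (PHILIPS_IS_ROOT, NUM_CHANNELS) and uses an empty stand-in module for both link partners; all names here are invented for illustration and are not the actual Philips or Denali code.

// top.cpp: hypothetical configurable SystemC top level.
// PHILIPS_IS_ROOT and NUM_CHANNELS stand in for settings that the
// makefiles would pass on the compiler command line.
#include <systemc.h>

#ifndef NUM_CHANNELS
#define NUM_CHANNELS 4        // link width: 1 to 32 channels
#endif
#ifndef PHILIPS_IS_ROOT
#define PHILIPS_IS_ROOT 1     // 1: Philips device is root; 0: endpoint
#endif

// Empty stand-in for either link partner; the real modules would
// implement the PCI Express stack behind these per-channel ports.
SC_MODULE(pcie_device) {
    sc_vector<sc_in<bool> >  rx;
    sc_vector<sc_out<bool> > tx;
    SC_CTOR(pcie_device) : rx("rx", NUM_CHANNELS), tx("tx", NUM_CHANNELS) {}
};

int sc_main(int argc, char* argv[]) {
    // One signal per channel and direction between the link partners.
    sc_vector<sc_signal<bool> > up("up", NUM_CHANNELS);
    sc_vector<sc_signal<bool> > down("down", NUM_CHANNELS);

#if PHILIPS_IS_ROOT
    pcie_device root("philips_root");    // device under test
    pcie_device ep("denali_endpoint");   // verification IP
#else
    pcie_device root("denali_root");     // verification IP
    pcie_device ep("philips_endpoint");  // device under test
#endif

    for (int i = 0; i < NUM_CHANNELS; ++i) {
        root.tx[i](down[i]); ep.rx[i](down[i]);   // root -> endpoint
        ep.tx[i](up[i]);     root.rx[i](up[i]);   // endpoint -> root
    }

    sc_start(sc_time(1, SC_US));  // run the (empty) simulation
    return 0;
}

A makefile target per configuration (root-versus-endpoint pairing and channel count) then amounts to little more than a different pair of -D flags on the compile line.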
Common APIs for test

There are many ways to create a configurable environment. Philips had been exploring the use of SystemC for abstract modeling and architectural development, using the OSCI reference simulator and the Cadence Incisive functional verification platform, so it decided to extend the use of SystemC to verification. Makefiles and the open-source GNU autotools, common in software development, were chosen to handle the configurability of the environment. The Philips team also worked with the customer to create a common test application programming interface (API) that could drive multiple environments.

Four test environments needed to be considered at the start of this project: the SystemC architectural development environment, which had been created by the Philips IP design team; the Denali verification environment, which was normally driven through HDL; the FPGA validation platform, which would be used to emulate the IP's RTL code; and the customer's verification environment, which was C-based. Maintaining four different sets of tests to satisfy each environment wasn't practical, so Philips decided early on to create a common test interface capable of driving all four.

The commonality among the four environments was C/C++, so the company developed a common set of APIs in C. The C functions, called from the test, would determine which environment the test was connected to and then call the appropriate C or SystemC functions to drive that environment, along the lines of the sketch below.
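The following sketch shows the dispatch pattern such a common API implies. The names (env_kind, pcie_send_packet and the per-environment back ends) are hypothetical, and the back ends are stubbed with prints so the example is self-contained; in the real environment they would call into the SystemC model, the Denali component, the FPGA platform or the customer's C code.

// common_api.cpp: hypothetical common C-style test API that
// dispatches to whichever environment the test is running in.
#include <cstdio>

enum env_kind { ENV_SYSTEMC, ENV_DENALI_HDL, ENV_FPGA, ENV_CUSTOMER_C };

// Selected at initialization, e.g. from a makefile-defined macro.
static env_kind g_env = ENV_SYSTEMC;

// Environment-specific back ends, stubbed for illustration.
static void sysc_send(const unsigned char* p, int n)   { std::printf("SystemC tx: %d bytes, first 0x%02x\n", n, p[0]); }
static void denali_send(const unsigned char* p, int n) { std::printf("Denali tx: %d bytes, first 0x%02x\n", n, p[0]); }
static void fpga_send(const unsigned char* p, int n)   { std::printf("FPGA tx: %d bytes, first 0x%02x\n", n, p[0]); }
static void cust_send(const unsigned char* p, int n)   { std::printf("customer tx: %d bytes, first 0x%02x\n", n, p[0]); }

// The common API seen by every test: one call, four environments.
extern "C" void pcie_send_packet(const unsigned char* payload, int nbytes) {
    switch (g_env) {
    case ENV_SYSTEMC:    sysc_send(payload, nbytes);   break;
    case ENV_DENALI_HDL: denali_send(payload, nbytes); break;
    case ENV_FPGA:       fpga_send(payload, nbytes);   break;
    case ENV_CUSTOMER_C: cust_send(payload, nbytes);   break;
    }
}

int main() {
    const unsigned char tlp[4] = { 0x00, 0x00, 0x00, 0x01 };
    pcie_send_packet(tlp, 4);  // the same test line runs unchanged in any environment
    return 0;
}

Because every test is written purely against calls like pcie_send_packet, moving a test from one environment to another becomes a relink rather than a rewrite.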
This environment was used extensively inside Philips to drive both its SystemC-based testing and its Denali-based testing, all with the same set of tests. While the testing strategy of using the same test to drive multiple environments was proven to work with the customer, the full benefits of this kind of test reuse have yet to be realized. Consider, for example, the advantage of taking a test that demonstrates some unexpected behavior in the customer's C-based SoC verification environment and running the same test directly in the SystemC IP verification environment, where debugging is more effective.

The PCI Express protocol is quite complex. It has a packet-based communication stack with transaction, data link and physical layers, and it can queue up multiple packets. There are buffering and flow-control mechanisms at each layer, and many transformations occur as data flows through the system. A typical HDL-based verification environment using directed testing would not have been adequate to exercise all those variations: the combination of variations at each layer of the protocol would have required thousands of tests, and even then the testing might not have been complete. The Philips team therefore chose a constrained-random approach, using the SystemC verification extensions (SCV), which are also supported by the Cadence Incisive platform; a small example follows.
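As a minimal illustration of constrained-random stimulus with SCV, the sketch below randomizes a hypothetical packet descriptor. The field names and constraint bounds are invented for the example and are not the Philips constraints.

// scv_packet.cpp: constrained-random generation with the SystemC
// Verification extensions (SCV). Fields and bounds are illustrative.
#include <scv.h>
#include <iostream>

// A randomizable descriptor for a PCI Express-like transaction.
struct packet_constraint : public scv_constraint_base {
    scv_smart_ptr<unsigned> payload_len;    // payload size in bytes
    scv_smart_ptr<unsigned> traffic_class;  // TC field

    SCV_CONSTRAINT_CTOR(packet_constraint) {
        // Keep the fields legal; tightening or loosening these ranges
        // is one way the random parameters can be adjusted per test.
        SCV_CONSTRAINT(payload_len() > 0 && payload_len() <= 4096);
        SCV_CONSTRAINT(traffic_class() < 8);
    }
};

int sc_main(int argc, char* argv[]) {
    packet_constraint pkt("pkt");
    for (int i = 0; i < 10; ++i) {
        pkt.next();  // solve the constraints, producing a new random packet
        std::cout << "len=" << *pkt.payload_len
                  << " tc=" << *pkt.traffic_class << std::endl;
    }
    return 0;
}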
Because constrained-random testing requires visibility into the activity that was actually executed, functional coverage became a necessary requirement at Philips. In fact, Philips took a coverage-based verification approach, in which functional coverage information helps drive the development of verification tests. Many people believe this type of testing requires reactivity, with coverage information fed directly back into the random generation of variables; while that is a goal of many verification environments and tools, it is generally very difficult. In this case, tables were created that described the functionality to be verified. The next step was to determine whether each function had been exercised, and that determination was programmed into PSL Sugar assertions. Finally, tests were created that exercised various aspects of the PCI Express interface with constrained-random settings. At the end of a test, coverage information was extracted; if the set of PSL Sugar assertions for a function met the test designer's coverage goal, that function was considered covered. The sketch below shows the bookkeeping this implies.
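The following toy sketch illustrates that bookkeeping only: each function to verify maps to one or more assertion labels with a hit goal, and a function counts as covered once every label meets its goal. In the real flow the hit counts came from the simulator's coverage data; here they are hard-coded, and all names are invented.

// coverage_tally.cpp: toy model of marking functions covered from
// per-assertion hit counts. Requires C++11.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct cover_point {
    std::string label;  // PSL assertion/cover label
    int hits;           // times it fired during the run
    int goal;           // hits required by the test designer
};

int main() {
    // Table of functionality to verify, keyed by function name.
    std::map<std::string, std::vector<cover_point> > table;
    table["flow_control_update"] = { {"fc_update_seen", 12, 10},
                                     {"fc_starvation",   0,  1} };
    table["replay_buffer"]       = { {"nak_then_replay", 3,  1} };

    for (const auto& entry : table) {
        bool covered = true;
        for (const cover_point& cp : entry.second)
            if (cp.hits < cp.goal) covered = false;
        std::cout << entry.first << ": "
                  << (covered ? "covered" : "NOT covered") << "\n";
    }
    return 0;
}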
By adjusting the random parameters, a larger number of functions could be covered with a smaller number of tests. This improved efficiency not only by reducing the number of tests but also by reducing simulation time.

IP complications

Verification of an SoC that includes IP introduces many new challenges but can bring significant rewards. Investing time and thought to assess the verification challenge early in the design process lets teams better understand those challenges and choose the verification techniques best suited to improving quality and reducing verification time.

For Philips, applying advanced testing techniques turned up several bugs that would likely have gone undetected with traditional techniques. In addition, the creation of a common testing environment across the design chain enabled a smooth handoff to the customer while cutting test development and debug time, since tests could be reused in multiple test environments. Implementing advanced verification techniques improved the overall quality of the design and kept verification time to a reasonable level. By using the Cadence Incisive platform for constrained-random testing and functional coverage, the team was able to handle late requirements for additional PCI Express configurations and testing in a day instead of a week or more.

Leonard Drucker (leonard@cadence.com) is verification architect and Saverio Fazzari is technical marketing director for the OpenChoice program at Cadence Design Systems Inc. Kevin Locker is lead verification engineer and Tim Lange is verification engineer at Philips Semiconductors.