In a perfect IP reuse world, one would simply connect a third-party intellectual-property block, validate the interface with the rest of the system and successfully produce fully functional silicon. Unfortunately, this rarely happens in practice. Mismatched expectations for deliverables, versioning issues and poorly documented interfaces and functions, not to mention incorrect functionality, are all common problems encountered when using third-party IP. How does one identify the potential reuse pitfalls before it is too late?

Clearly, not all IP is created equal. The process of evaluating and integrating the IP to maximize the benefits of reuse is as important as the decision to reuse itself. In most cases, the IP selected can directly affect the success or failure of the end product. As anyone who has shopped for IP has discovered, however, there are many IP suppliers, and more enter the fray daily. Functionality must obviously be the first selection criterion, but when two or more similar pieces of IP are available, the criteria multiply.

In the past, IP selection may have been based solely on purchase price. With the rising cost of respins, however, the risks of integrating IP chosen that way far outweigh any savings on the purchase price. Other approaches to evaluating IP exist, but they have met with varying degrees of success.

The most common method of evaluating third-party IP has been pure internal review. This is resource-intensive, often requiring man-months of internal effort. More often than not, legal counsel must be brought in to generate or evaluate nondisclosure agreements, adding to the total evaluation time.

Another method used in the past was the OpenMORE spreadsheet, based on the Reuse Methodology Manual. While that method was publicly available and a first step toward measuring and communicating IP quality, in practice much of the metric was open to interpretation, so it had limited success.

Finally, some companies have developed internal reuse standards and attempt to get their vendors to deliver IP that conforms to them. But such standards tend to be company-specific and to address a much broader space than third-party compliance could reasonably cover. They are truly geared to enabling internal reuse; they typically are more complex than necessary, and they tend to obscure, through sheer volume, the reuse measures that matter.

Relying on these approaches to IP selection has produced great variation in the actual experience of using the IP, depending on the provider, the end-application consumer and even the individual integrator. None of the methods provides a consistent mechanism for communicating an IP's fitness for purpose between vendors and end customers.

The VSI Alliance's Quality IP (QIP) metric addresses that lack. Its premise is that good designs are the product of good design practices, from IP conception all the way through customer delivery and support. Integrators can use the QIP metric to evaluate similar IP from multiple vendors in an apples-to-apples comparison. Providers can use it as a simple quality-control checklist of implementation details and deliverables that lets them objectively evaluate product quality and refine their designs and methods for successful reuse. Application of the metric facilitates feedback between users and IP providers, and fosters quality-process improvement.
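The mechanics of such an apples-to-apples comparison are simple enough to sketch in code. The Python fragment below is a minimal illustration only: the category names, weights and answers are hypothetical stand-ins, since the real QIP metric defines its own categories, questions and scoring rules.

```python
# Sketch of a QIP-style apples-to-apples comparison of two vendors' IP.
# Categories, weights and answers are hypothetical illustrations; the real
# QIP metric defines its own categories, questions and scoring rules.

def category_scores(qip):
    """Normalized per-category scores (0-100) for one completed metric.

    qip maps a category name to a list of (weight, answer) pairs, where
    each answer is a 0.0-1.0 compliance level for one question.
    """
    return {
        cat: 100.0 * sum(w * a for w, a in answers) / sum(w for w, _ in answers)
        for cat, answers in qip.items()
    }

vendor_a = {
    "vendor practices":         [(1.0, 1.0), (1.0, 0.5)],
    "integration deliverables": [(2.0, 1.0), (1.0, 0.0)],
    "development process":      [(1.0, 1.0)],
}
vendor_b = {
    "vendor practices":         [(1.0, 1.0), (1.0, 1.0)],
    "integration deliverables": [(2.0, 0.5), (1.0, 1.0)],
    "development process":      [(1.0, 0.5)],
}

# Identical categories line up side by side, so score differences point
# directly at the areas that need further investigation.
scores_a, scores_b = category_scores(vendor_a), category_scores(vendor_b)
for cat in scores_a:
    print(f"{cat:26s}  A: {scores_a[cat]:5.1f}  B: {scores_b[cat]:5.1f}")
```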
QIP offers a means of quickly, easily and consistently conveying information pertinent to the successful reuse and integration of third-party IP. The metric yields a score derived from the completed questions, but that number is not, by itself, an unequivocal indication of an IP's quality or its suitability for an application. The end user does gain some insight into relative quality, in that a significantly higher score is obviously better than a low one; the same conclusion cannot be drawn from a differential of a few points. QIP is primarily intended to give providers and consumers a common language and mechanism for communicating the high-order factors that pertain to quality IP, while highlighting the areas that may need more investigation, improvement or resources if the IP is to be integrated successfully into the system-on-chip and working silicon.

Many components, ranging from absolute showstoppers to "nice to haves," factor into the reusability and quality of IP. Business factors may be just as important as technical ones. The customer ideally would like to have confidence in the vendor and the vendor's standard procedures before engaging on a technical level. Once customers have that confidence, they may be more interested in continuing the technical investigation.

At this point, the customer is concerned with discovering the items most relevant to integrating the IP in the given application, including the IP's maturity and the deliverables provided. It is important at this stage that the customer uncover any potential hiccup that could hinder integration of the IP into the SoC and affect the planned tool flow or even system functionality.

If these two levels of discovery are satisfactory, the user may wish to delve more deeply into the IP. The customer may be interested in detailed aspects of how the IP was developed, which can provide insight into the usability and maintainability of the block.

Chances are that no single IP will be perfect for every application, but if its shortfalls for a given application are known, the end user may be able to develop a mitigation plan. This may include scheduling additional internal resources to address the identified gaps in the IP, or working with the vendor up front to satisfy the requirements.

The most important thing is to be able to determine, simply and consistently, the true state of the IP early in the evaluation process. The QIP metric provides this common baseline. It provides "at a glance" scoring summaries for the answered questions and displays the categorized quality measures with their associated subtopics. The supporting spreadsheets break down the quality-category measures in further detail, giving running totals for the associated questions, as well as color changes that immediately indicate the potential risk level implied by the individual answers. A red cell indicates that the answer suggests poor quality. Orange implies that quality could be better but is not unacceptable. A green cell indicates an acceptable standard for the attribute in question.

An integrator can view the completed QIP and in a few minutes have a high-level feel for the quality of the IP. In an hour or less, the user can investigate the details that factor into that quality and determine the risk that may be associated with its use.
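To make the running-total and color-coding scheme concrete, here is a minimal sketch in Python. The 60 and 85 percent thresholds, the category names and the point values are illustrative assumptions; the actual QIP spreadsheets define their own questions, weights and color rules.

```python
# Sketch of QIP-style risk flagging: running totals per quality category,
# with a color that signals the risk level implied by the answers.
# The 60/85 thresholds are illustrative assumptions, not the real QIP rules.

def risk_color(percent):
    """Map a category's running score to a traffic-light risk indicator."""
    if percent < 60.0:
        return "red"      # answers suggest poor quality
    if percent < 85.0:
        return "orange"   # could be better, but not unacceptable
    return "green"        # acceptable standard for the attribute

def summarize(answers_by_category):
    """answers_by_category: category -> list of (points_earned, points_possible)."""
    for cat, answers in answers_by_category.items():
        earned = sum(e for e, _ in answers)
        possible = sum(p for _, p in answers)
        pct = 100.0 * earned / possible
        print(f"{cat:24s} {earned:5.1f}/{possible:5.1f}  {pct:5.1f}%  {risk_color(pct)}")

summarize({
    "documentation":         [(4, 5), (5, 5)],  # 90% -> green
    "verification coverage": [(3, 5), (4, 5)],  # 70% -> orange
    "deliverables":          [(1, 5), (2, 5)],  # 30% -> red
})
```

Kathy Werner is reuse manager at Freescale Semiconductor Inc. (Austin, Texas).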