Risk versus Reward: Where Do Your IP Reuse Practices Fall?
Abstract:
The ideal IP block reuse scenario minimizes risk while maximizing reward. In a perfect world, one would simply connect a third-party IP block, validate its interface with the rest of the system, and successfully produce fully functional silicon. Unfortunately, this rarely happens in practice. Mismatched expectations for deliverables, versioning issues, poorly documented interfaces and functions, and even improper functionality are all common problems encountered when using third-party IP. Maintaining coherent views after a file is updated can also be problematic. The pressing question is: how does one identify the potential risks of using an IP while there is still time to develop a mitigation plan? This paper discusses the factors that influence the reusability of a block, the VSI QIP metric, and its practical usage.
Introduction
The changing business climate dictates a much different product development process than was in place as recently as five years ago. Moore's law has held true through the deep-submicron process technologies and shows no sign of changing in the near future. Advancements in EDA and process technologies have far outstripped the capabilities of a single engineer, while shortened product lifecycles often translate into a completely missed market opportunity if development schedules slip. These market pressures call for more efficient development cycles. One of the primary means of addressing these shortfalls is design reuse, either through third-party IP purchases or reuse of internally developed blocks.
Reuse Drivers
The electronics industry, in particular, is known for fast-moving markets with short product lifecycles. Studies have shown that in a fast-moving market, being just three months late can cost over a quarter of the potential product revenue. Many companies are also faced with difficult resource tradeoffs, trying to improve development cycles with limited resources; the resources used to develop and maintain commodity IP are better spent on differentiated block and product development. Furthermore, the proliferation and complexity of standards-based IP, such as PCI or USB, make it difficult to develop and maintain internal expertise. Both of these scenarios play into the decision to purchase third-party IP. The market share gained by being first to market may outweigh any premium paid for externally supplied IP and any royalties associated with the acquisition. However, how does one ensure that the block being reused will, in fact, reduce product development time and perform as expected?
IP Selection
Clearly, not all IP are created equal. The process of successfully evaluating and integrating the IP to maximize the benefits of reuse is as important as the decision to reuse. In most cases, the IP selected can directly affect the success or failure of the end product. However, as anyone who has shopped for IP has found, there are many IP suppliers in the marketplace, with more entering the fray every day. Simplistically, functionality must be the first selection criterion, but when two or more similar IPs are available, the selection quickly becomes more difficult. In the past, IP selection may have been based solely on purchase price. However, with the rising cost of respins, the risks of integrating IP chosen this way far outweigh any savings on the purchase price. There are many other approaches, with varying degrees of risk, that can be used to evaluate IP and improve the chances of integration success.
In the past, the OpenMORE spreadsheet, based on the RMM (Reuse Methodology Manual) [1], was used. Unfortunately, while this method was publicly available and an admirable first step, actual usage showed that much of the metric was open to interpretation. The spreadsheet focused on soft IP, and while it raised many valuable criteria, it tended to blur the lines between deliverables, documentation, methodology, code requirements, scripting, and tool issues. It also did not provide enough quantifiable checkpoints to ensure robust code that was reusable across tools and platforms.
Some companies have developed internal reuse standards and attempt to get their vendors to deliver IP that conforms to them. Unfortunately, these standards tend to be very company-specific and address a much broader space than third-party vendors could reasonably be expected to comply with. These standards are truly geared toward enabling internal reuse, are typically more complex than needed, and tend to obscure the important reuse measures through sheer volume.
The most common method for evaluating third-party IP has been pure internal review. This has been resource intensive, requiring man-months of internal effort. More often than not, legal counsel needed to be included in the process to generate or evaluate NDAs (Non-Disclosure Agreements), which added to the total evaluation time.
QIP
The approaches to IP selection described above resulted in great variation in the actual experience of using the IP, depending on the provider, the end application, and even the individual integrator. None of these methods provides a consistent mechanism for communicating an IP's suitability for purpose between vendors and end customers. The VSI Alliance's Quality Pillar developed and released to its membership the QIP (Quality IP) metric version 1.0 in 2004. This metric drew on industry expertise and incorporated donations from companies that pioneered attempts to measure quality, including the original OpenMORE. The goal of the QIP spreadsheet is to give the IP marketplace a common vocabulary and mechanism to communicate the pertinent IP quality information.
The QIP metric is used by the IP integrator to evaluate similar IP from multiple vendors using an apples-to-apples comparison. It is usable by the providers as a simple quality control checklist of implementation details and deliverables to objectively evaluate their product quality and to refine their designs and methods for successful reuse. It also facilitates feedback between the users and the IP providers. The QIP premise, verified by experience, is that good designs are a product of good design practices from IP conception all the way through customer delivery and support.
The metric has gained wide acceptance in the industry and is used by companies such as STMicroelectronics, Philips Semiconductor, Cadence, Mentor Graphics, and LSI Logic. However, while these companies and others see the value in QIP, several enhancement requests were made.
The original QIP metric was organized by IP type, but otherwise was relatively flat. As there are many aspects to measuring IP quality, this led to the perception of a complex metric that was hard to use. For these reasons, and to better communicate the components of high quality IP, the Quality Pillar updated the metric. One of the first areas that the pillar addressed was the relevance of the IP information in relation to the evaluation process.
Evaluating QIP Information
At the QIP's core are questions that probe the provider's capabilities and the reusability of the IP itself. The Quality Pillar examined many cases of reuse success and failure, derived the quality attributes and practices that separated them, weighted their values, and organized them in an easy-to-use spreadsheet format. All of the concepts in QIP 1.0 are present, organized in a more streamlined, logical format, and the questions have been refined for additional clarity.
The QIP provides a means to quickly, easily, and consistently convey the information pertinent to the successful reuse and integration of a third-party IP. While a score is derived from the completed questions in the metric, this number, in itself, is not an unequivocal indication of an IP's quality or its suitability for an application. The end user can gain some insight into the relative quality of the IP in that a significantly higher score is obviously better than a low score; however, the same conclusion cannot be reached for score differentials of a few points. The QIP is primarily intended to provide a common language and mechanism for communicating the high-order factors that pertain to quality IP between providers and consumers.
Quality Discovery
Many components factor into the reusability and quality of an IP, ranging from absolute show-stoppers to "nice-to-haves". Business factors may be just as important as technical issues. The customer ideally would like to have confidence in the vendor and the vendor's standard procedures before engaging on a technical level. This includes understanding the stability of the company, the internal quality assurance procedures, and the IP distribution, maintenance, and support procedures.
Once the customer has confidence in the vendor, he may be more interested in continuing the technical IP investigation. At this point, he is concerned with discovering the items most relevant for the integration of the IP into his application. This includes the maturity of the IP, and the deliverables that are provided. At this stage, it’s important for the customer to discover any potential integration hiccup, be it DFT, clock or reset handling, or something that may interfere with the planned tool usage.
If these two levels of discovery are satisfactory, the user may wish to delve even deeper into the IP. He may be interested in more detailed aspects of how the IP was developed which can provide insight into the usability and maintainability of the block. Measurable good workmanship in the development process may translate into faster turnaround time to deliver a fix should a bug be discovered when using the IP.
Chances are that no single IP will be perfect for every application, but if its application shortfalls are known, the end user may be able to develop a mitigation plan. This may include scheduling additional internal resources with the needed skills to address the identified gaps in the IP or working with the vendor up front to satisfy the requirements. The most important aspect is to be able to simply and consistently determine the true state of IP early in the evaluation process.
QIP Organization
The discovery levels previously discussed are presented on separate spreadsheets that directly relate to the progressive disclosure of information. The satisfactory completion of each level opens the door to further investigation. The overall summary sheet displays the QIP metric's assessment broken down into all of the measured areas. At a high level, this predicts how reliable the vendor will be in the relationship, how efficient the IP will be to integrate into the user's design, and how mature or stable the IP development has been.
Since the consumer may want to evaluate the quality factors differently depending on their needs, assessment metrics are also reported as their component measures. IP Integration and IP Development, for example, are logically broken up into IP Ease of Reuse and IP Design & Verification Quality. Both of these major headings are further broken down into major sub-categories that contribute to the overall evaluation of the IP.
Subsequently, these component measures, IP Integration and IP Development, are detailed on separate worksheet pages. These worksheets break down the quality category measures into further detail, giving running totals for their associated sets of questions. This can be especially useful for a consumer deciding among a field of strong candidates to determine which IP best conforms to their specific areas of interest. In addition, an IP vendor can review the questions to see how the metric supports good development and productization practices. The more QIP-identified quality practices a vendor follows, the higher the resulting vendor and IP quality assessment scores.
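To make the roll-up concrete, the per-category running totals might be modeled as a simple tree of question counts. The two major headings below come from the text; the sub-category names and question counts are invented for illustration and are not the actual QIP worksheet contents.

```python
# Hypothetical sketch of the worksheet roll-up described above.
# Each sub-category carries a (questions met, questions asked) pair.
categories = {
    "IP Ease of Reuse": {
        "Documentation": (12, 15),
        "Build Environment": (8, 10),
    },
    "IP Design & Verification Quality": {
        "Coding Style": (20, 25),
        "Verification": (14, 20),
    },
}

for heading, subcats in categories.items():
    met = sum(m for m, _ in subcats.values())
    asked = sum(a for _, a in subcats.values())
    print(f"{heading}: {met}/{asked}")          # category running total
    for name, (m, a) in subcats.items():
        print(f"  {name}: {m}/{a}")             # sub-category detail
```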
The detailed worksheet pages include Vendor Assessment, IP Integration, and IP Development. The vendor assessment in version 2.0 is much more detailed and identifies a broader range of internal quality processes. It covers not merely industry quality standards such as ISO 9001, but also the internal quality procedures that are consistently used. For example, it validates that a quality audit is performed to check IP compliance with the specified design, verification, and QA processes. Another area of high interest to the IP consumer is how revisions are controlled and communicated; one must ensure that the deliverables in use are in fact the correct revisions for the IP version. The customer is also interested in the available support mechanisms: technical support, an on-line knowledge base, or training.
The IP Integration detailed worksheet addresses the items that are of interest to the consumer, for example, whether the design has been realized in silicon and whether a reference customer is available. This in itself provides a strong indication that the IP will function as expected. The customer is also interested in factors that affect the ease of using the IP in his application, including the supplied documentation, the build and verification environments, and details of the IP block itself. These details give an indication of the factors that need to be addressed at the system level, such as reset handling, testability, and how the IP will fit into the system verification environment.
Finally, the IP Development sheet indicates the standard development procedures followed by the IP designers. These specify the internal design documentation associated with the IP and the methods used to ensure its portability and extensibility. Sections of this worksheet address items such as how the interfaces, parameterization, and coding style for downstream tool compatibility are handled. In addition to documentation and code development, the spreadsheet quantifies the IP verification, including the environment, supporting scripts, and formal methods. While the documentation detailed in this section may not be given to customers, proof of its creation should be available and indicates good internal quality development procedures.
Quality Indication
Every question in the Quality Metric was evaluated for its impact on the end user's reuse experience, and one of three priority levels was assigned to each item. These default priority levels provide the baseline by which all IP providers are measured. The first priority level, Imperative, is defined as an attribute that must be met; otherwise it may be impossible to use the IP within a user project. For example, failure to address metastability issues in a design that has asynchronous clock domains will result in a design that will not function reliably; hence it is an imperative.
The second priority level, Rule, covers attributes that should be met; failure to meet them may significantly impact the cost of using the IP within a user project. An example is the separation of positive- and negative-edge-triggered flip-flops. Haphazard use of clocks and edges results in a design for which it is more difficult to achieve predictable timing closure: there will be a greater difference between the analysis of an unplaced design with ideal clock distribution and the analysis of a final placed-and-routed design with real clock distribution. Thus, not meeting, or only partially meeting, this attribute adds risk to the project.
The final level, Guideline, concerns attributes that, if met, result in a general improvement in the usability and maintainability of the IP, or provide evidence that good practice was used in its development. For example, deleting unused code, as opposed to simply commenting it out, is an indication of good workmanship. A lack of good workmanship here suggests there may be a lack of good workmanship in other areas of the design; it could also indicate a lack of confidence in the design on the part of the component author. Another example is the ability to extend the architecture, either through a programmable register space or through building-block partitioning. Doing so indicates that maintainability was considered when architecting the component. If the component was designed to meet an immature specification, this provides evidence that the component supplier recognizes the fact and will be in a position to provide updates in line with specification changes.
The quality level for each question contained in the QIP is fixed and may not be changed by the IP provider. The provider simply answers the questions, which are crafted to be quantitative, most requiring a simple yes or no response. The QIP scores are calculated from the answered questions and their associated priority levels. While the QIP is designed to cover the most common reuse issues, the needs of individual end users may vary with respect to specific IP. The QIP score illustrates the compliance of the IP to a common quality measurement baseline, and is key in providing consistent evaluations of IP. However, the relative importance of the individual metrics may differ by application. For this reason, a mechanism is in place to allow IP consumers to modify the importance level of individual line items to accurately reflect their needs. In effect, this allows end users to customize the rules for their environment, improving the chances of finding an IP suitable for their needs.
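As a minimal sketch of such a scheme, the priority-weighted scoring and the consumer-side weight customization might look like the following. This is not the official QIP calculation; the weights, function name, and question texts are hypothetical, though the three priority levels and their example attributes mirror the descriptions above.

```python
# Hypothetical weights for the three priority levels; the real QIP
# spreadsheet defines its own formulas and values.
DEFAULT_WEIGHTS = {"imperative": 10, "rule": 5, "guideline": 1}

def qip_score(answers, weights=None):
    """answers: list of (question, priority, satisfied) tuples, where
    priority is 'imperative', 'rule', or 'guideline' and satisfied is
    True/False. Returns the percentage of weighted points earned."""
    weights = weights or DEFAULT_WEIGHTS
    earned = sum(weights[p] for _, p, ok in answers if ok)
    possible = sum(weights[p] for _, p, _ in answers)
    return 100.0 * earned / possible if possible else 0.0

answers = [
    ("Synchronizers on asynchronous clock-domain crossings", "imperative", True),
    ("Positive- and negative-edge flip-flops separated", "rule", False),
    ("Unused code deleted rather than commented out", "guideline", True),
]

# Score against the default baseline priorities.
print(f"Baseline: {qip_score(answers):.1f}%")

# A consumer who does not care about workmanship guidelines can
# zero that weight, customizing the rules for their environment.
custom = {"imperative": 10, "rule": 5, "guideline": 0}
print(f"Customized: {qip_score(answers, custom):.1f}%")
```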
Risk Mitigation
All of the details contained in the QIP metric provide a clear picture of the state of the IP. This information can be used to determine the risk of using an IP in the customer's end application and system environment. It allows the end user to determine what steps need to be taken to improve the chances of integration and silicon success. The issue may be as simple as an interface signal polarity inversion, or more complex, such as a potential tool incompatibility. If, for example, combinational feedback loops are inferred in the IP, they may interfere with simulation and formal verification. If this is known, adjustments can be made in the system verification process.
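For illustration, a combinational feedback loop is simply a cycle in the directed graph of combinational nets (flip-flop outputs break such paths), so screening for one reduces to cycle detection. The sketch below is a generic illustration under that assumption, not part of QIP or any particular tool; the function name and graph encoding are invented.

```python
# Hypothetical sketch: detect a combinational feedback loop as a cycle
# in a directed graph, using a three-color depth-first search.
def has_comb_loop(fanout):
    """fanout: dict mapping each net to the nets it drives combinationally.
    Returns True if any combinational cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in fanout}

    def dfs(n):
        color[n] = GRAY
        for m in fanout.get(n, ()):
            if color.get(m, WHITE) == GRAY:
                return True                    # back edge: a loop exists
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(fanout))

# A latch-like loop: net a drives b, and b drives a back.
print(has_comb_loop({"a": ["b"], "b": ["a"]}))   # True
print(has_comb_loop({"a": ["b"], "b": []}))      # False (acyclic)
```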
The state of the supplied verification environment may pose a larger issue. Complex IP may require verification components that monitor standard interface protocols. The data contained in QIP will indicate whether a verification component is supplied with the IP, and if so, how this component will fit into the verification environment. If the appropriate component is not supplied or is not complete, the customer may need to make additional investments. This may include applying internal resources to develop the supporting component, or purchasing it from a third party. If purchased, the information contained in the verification component IP’s associated QIP will indicate its suitability and quality.
In general, the QIP metric indicates the effort required of the system, verification, and DFT engineers when integrating the IP into the system-level test. Design-specific information of which integrators need to be aware, including configurability, resets, and clocking, is clearly conveyed in the QIP metric. Additionally, the metric delivers information about the build environment, supporting scripts, expected results, and relevant portability issues.
While all of this detailed information is readily available, it is categorized and summarized to give users a quick view into areas that may require additional investigation. A "traffic light" representation tallies the answer distribution across the three priority levels previously described. The number of unsatisfied imperatives and rules is displayed in red, indicating answers that imply poor quality and are likely to be considered unacceptable by an IP integrator. The number of unsatisfied guidelines is displayed in yellow, indicating quality that could be better but is not unacceptable. The green cell shows the total number of questions that were answered acceptably.
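Continuing the hypothetical sketch from the scoring example, the traffic-light summary could be tallied from the same kind of answer list. The bucket rules follow the text above; the answer data is invented.

```python
from collections import Counter

# Invented answers in the same (question, priority, satisfied) form.
answers = [
    ("Synchronizers on clock-domain crossings", "imperative", False),
    ("Single clock edge used throughout", "rule", True),
    ("Unused code deleted, not commented out", "guideline", False),
]

lights = Counter()
for _, priority, satisfied in answers:
    if satisfied:
        lights["green"] += 1                    # answered acceptably
    elif priority in ("imperative", "rule"):
        lights["red"] += 1                      # unmet imperative or rule
    else:
        lights["yellow"] += 1                   # unmet guideline
print(dict(lights))  # {'red': 1, 'green': 1, 'yellow': 1}
```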
Testing and Validation
An extensive beta program was undertaken to validate the assumptions and enhancements made in the Quality metric. IP providers were paired with their customers to evaluate QIP and determine whether the information it conveyed reflected the actual reuse experience. Both partners supplied feedback on the applicability of the questions to quality, the QIP's completeness, and the process of filling out and evaluating the information in the metric.
Summary
A means to effectively communicate IP quality information has long been missing in the design community. QIP has successfully addressed this need and garnered adoption support from some of the industry's major players. Based on usage feedback, the QIP is now even easier to use and more accurately reflects the information needed to make an informed IP selection decision. The QIP is a Quality Metric that will continue to evolve with the industry and technology. It provides an objective means to evaluate IP, reduces integration time for quality IP, and gives IP vendors a measurable method to guide continuous process improvement in the development of quality IP.
References
[1] M. Keating, P. Bricaud, “Reuse Methodology Manual for System-on-a-Chip Designs”, Kluwer Academic Publishers, 1999.