Networked teams tackle IP integration
Market globalization, exploding complexity, scarce resources and collapsing market windows are conspiring as never before to force chip developers to fundamentally change the way they design. A new model is emerging in which the design team acts as the system integrator, while outsourcing such roles as library development, incorporation of semiconductor intellectual property (IP) cores, design methodology and manufacturing. The developer must, in effect, meld all of these members of the development supply chain into a "virtual project team."

Recognizing this trend, ASIC houses, systems houses and integrated-device manufacturers have begun to devise in-house procedures for data management, including procedures for design reuse, and communications protocols that facilitate greater collaboration. One problem with these ad hoc systems is inconsistency between projects and between business units within the same company. Few offer any systematic procedures for collaboration among peer companies, let alone among members of the development supply chain for electronic products. Further, these systems, even at their best, are expensive to build and maintain, and they draw resources away from a company's product development teams. Because they were developed to handle in-house needs, they lack scalability, most often fail to take advantage of the Web's universal interface and, most importantly, do not permit the creation of a "knowledge base."

A knowledge base is a history available to all who need it, no matter how large and geographically dispersed a project becomes. It records how semiconductor IP was developed, how it broke, how it was fixed, who used it last, and where all the configurations and tests needed to reproduce a successful result reside. This is not only a massive time-saver in this era of design reuse; it is a hedge against an increasingly mobile work force.

A virtual project team requires much more than the ability to communicate by telephone and e-mail, or through ad hoc scripts. It requires revision control, file management, threaded communications archiving, usage tracking, bug and fix tracking, automatic notification of changes in state, and automatic triggering of tests or EDA tool sessions related to changes in state, all in a secure, access-controlled environment. And those are only the most pressing needs.

All of these functions would have been valuable to an earlier-style in-house design team, which was responsible for authoring every aspect of a semiconductor project. But in the emerging model, the design team functions like a general contractor that integrates, validates and refines the contributions of all members, including the groups that develop the in-house components embodying the company's differentiating technology.

In the new model, we need an expanded definition of collaboration. Under the earlier definition, collaborative software tools for functions such as revision control or data management were intended to maximize the productivity of individuals or of single teams. Under the expanded definition, in which market forces and complexity demand design reuse and outsourced services, tools must transparently unify teams, groups of teams, the company, many peer companies and entire industries. Even if we assume a company so large and diverse that it purchases no third-party components (an assumption that is becoming increasingly unrealistic), the need for an improved infrastructure that enables enterprise-level collaboration is clear.
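To make the notification-and-triggering requirement concrete, here is a minimal sketch, in Python, of how such an infrastructure might model component states, subscriber notification and automatic test triggering. The class and event names (ComponentTracker, "fix-logged" and so on) are illustrative assumptions, not features of any product discussed here.

```python
# Minimal sketch of state-change notification and triggering for IP components.
# All names are illustrative; a real design-management system would persist
# this data and launch real EDA tool sessions rather than in-memory callbacks.

from collections import defaultdict
from typing import Callable

class ComponentTracker:
    """Tracks the state of each IP component and notifies subscribers."""

    def __init__(self) -> None:
        self._state: dict[str, str] = {}  # component -> current state
        self._subscribers: dict[str, list[Callable[[str, str], None]]] = defaultdict(list)
        self._triggers: dict[tuple[str, str], list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, component: str, callback: Callable[[str, str], None]) -> None:
        """Register a team member to be notified when a component changes state."""
        self._subscribers[component].append(callback)

    def on_state(self, component: str, state: str, action: Callable[[str], None]) -> None:
        """Register an action (e.g., launch a regression) for a specific state."""
        self._triggers[(component, state)].append(action)

    def set_state(self, component: str, state: str) -> None:
        """Record a state change, notify subscribers, and fire any triggers."""
        self._state[component] = state
        for notify in self._subscribers[component]:
            notify(component, state)
        for action in self._triggers[(component, state)]:
            action(component)

# Usage: an engineer subscribes to a core, and a regression is triggered
# automatically as soon as a bug fix is logged, whether or not anyone is
# at a desk to read the notification.
tracker = ComponentTracker()
tracker.subscribe("dsp_core", lambda c, s: print(f"[notify] {c} is now {s}"))
tracker.on_state("dsp_core", "fix-logged", lambda c: print(f"[trigger] regression for {c}"))
tracker.set_state("dsp_core", "broken")
tracker.set_state("dsp_core", "fix-logged")
```

A production system would persist these records and drive real tool runs; the point here is only the shape of the subscribe/notify/trigger loop the article calls for.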
For an enterprise to manage its in-house repository effectively, far more rigorous data management standards, semiconductor IP usage tracking and threaded communications become mandatory. When we move up to peer-to-peer corporate collaboration, higher-level tools are clearly needed for "coopetition," but their usefulness goes much further. As the semiconductor market evolves, more companies are merging with or acquiring other companies. In this situation, it is easy to see the need for tools that shorten the time it takes to assimilate and integrate the assets of large design organizations. The "coopetition" scenario makes the need even more dramatic. StarCore, the Motorola/Lucent DSP joint venture, will not be the last experiment in combining teams from independent corporations for more effective competition. Here, the need for collaboration is tempered by the need for security: firewalls, built from access-authorization levels, must separate data that can be shared from data that must remain proprietary.

"Industry-wide" implies collaboration among members of the product-development supply chain both upstream and downstream of the chip designer. When third-party providers (EDA tool suppliers, library providers, IP vendors, consultants, outsourced service providers and foundries) are folded into a project team, an environment that contains consistent data management protocols can save the weeks or months previously spent cross-checking each other's work.

Consider a semiconductor IP provider shipping a component to a design house, at a time when IP is surely not yet plug-and-play. Without an automated data management system, the provider must manually assemble all the specific configurations of the needed files, and all the necessary test files. It's probably not telling tales out of school to note that some of our highest-profile IP vendors have performed this task literally by written checklist, a boring and error-prone activity. If the "packer" makes one mistake, say, sending all the right IP files but one wrong test, the customer will run that test and conclude that a broken component has been shipped. Locating and fixing the problem can be time-consuming. Had the provider been using an automated data management infrastructure, the error would never have occurred.

Now imagine that both the supplier and the consumer work within a unified infrastructure. Not only would the supplier have shipped the right files, sparing the consumer the labor of "unpacking" and verifying them, but there would be no need to run tests upon arrival. The user-defined processes within the infrastructure would guarantee that the proper tests had been run successfully, and the test data would be visible to the customer.

Checks and balances

For a virtual project team to integrate input from numerous sources, it needs a number of checks. A design project will have a checklist of required characteristics for incoming files, such as data sufficiency or test benches. If files arrive from a provider that is not already linked to the designer as described above, the project team needs a design management infrastructure that can accommodate user-defined automatic checks. These checks can also trigger user-defined scripts, such as tests for compliance with the VSIA On-Chip Bus standard or the Synopsys/Mentor Graphics Reuse Methodology Manual.
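As an illustration of what such user-defined automatic checks might look like in practice, here is a minimal sketch in Python. The manifest format, deliverable names and check registry are hypothetical assumptions; a real compliance script, such as a bus-standard lint, would be registered as just another callable.

```python
# Minimal sketch of user-defined automatic checks on an incoming IP package.
# The manifest format and deliverable list are hypothetical; project teams
# would register their own checks and compliance scripts.

from pathlib import Path
from typing import Callable

# Registry of user-defined checks: name -> function(package_dir) -> error or None
CHECKS: dict[str, Callable[[Path], str | None]] = {}

def check(name: str):
    """Decorator that registers a user-defined check under a name."""
    def register(fn: Callable[[Path], str | None]):
        CHECKS[name] = fn
        return fn
    return register

@check("data-sufficiency")
def has_required_deliverables(pkg: Path) -> str | None:
    # A project-specific checklist of deliverables every incoming core must carry.
    required = ["core.v", "synth.tcl", "testbench.v", "MANIFEST"]
    missing = [f for f in required if not (pkg / f).exists()]
    return f"missing deliverables: {missing}" if missing else None

@check("manifest-complete")
def manifest_lists_all_files(pkg: Path) -> str | None:
    # Every file shipped must appear in the manifest, so nothing arrives untracked.
    manifest = pkg / "MANIFEST"
    if not manifest.exists():
        return "no MANIFEST file"
    listed = set(manifest.read_text().split())
    shipped = {p.name for p in pkg.iterdir() if p.name != "MANIFEST"}
    extra = shipped - listed
    return f"files not in manifest: {sorted(extra)}" if extra else None

def run_incoming_checks(pkg: Path) -> bool:
    """Run every registered check; report failures and return overall pass/fail."""
    failures = {name: err for name, fn in CHECKS.items() if (err := fn(pkg))}
    for name, err in failures.items():
        print(f"[check failed] {name}: {err}")
    return not failures
```

A failed check could block acceptance of the package, or trigger the registered compliance scripts, before any engineer wastes time debugging a component that was simply mis-packed.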
To avoid slack time, an efficiently managed project, particularly one in which the team members are dispersed across multiple sites or even continents, needs ways to track the state of individual pieces of intellectual property. It is not enough that the design management infrastructure log bug-fix requests, bug fixes, test results, validation results and engineering change orders (ECOs). It must also notify "subscribers" to a component when that component is broken and when it is fixed. It should remind engineers that they have received a bug notification and have not yet logged a fix. To carry this idea to its conclusion, engineers could further reduce slack time by programming the system to trigger a test as soon as a bug fix is logged, whether or not the engineer is there to receive the notification. Finally, the infrastructure must correlate all historical data not only for each IP core, but also for each configuration of each core, and for all the versions of all the deliverables that make up an IP core, such as Verilog models, synthesis scripts and testbenches.

Inherent in this system is automated revision control. If further tweaking breaks the component, the engineer must always be able to return to the previous version. More importantly, automated revision control ensures that no one will grab the wrong file when it comes time for tapeout. Our anecdotal evidence suggests that a surprising percentage of companies have lost weeks or months, and hundreds of thousands of dollars, simply by shipping the wrong GDSII.

Tracking IP

Usage tracking, recording who has which version of a core, when it was used and what the results were, is generally seen as a tool for effecting design reuse and simplifying internal resource management. It is also crucial for assessing return on investment for expensive, externally acquired IP. And for business-to-business interactions, say, between an IP vendor and an IP consumer, usage tracking can also facilitate billing and speed the delivery of technical support. (A sketch of what such a usage record might look like appears at the end of this article.)

Security must be a top priority: security against intrusion by persons outside the project, and the ability to create information hierarchies within the project. Access to those hierarchies must be controlled by each team member's "need to know," relative to his or her role in the project. Design teams must accommodate the contributions of all members of the development supply chain securely, regardless of location or operating system, in an environment that is seamlessly transparent to the user. The Web is the only platform that fits the bill.

Without doubt, the development of complex electronic products is moving toward the model of a virtual project team. The question is: Who will be the market winners that capitalize on the enhanced productivity and shorter time-to-market afforded by this design practice?
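Finally, the usage-record sketch promised above: a minimal, hypothetical illustration in Python of the kind of append-only log that could answer the questions usage tracking poses, namely who has which version of a core, when it was used and what the results were. The field names and queries are assumptions for illustration only, not any vendor's actual schema.

```python
# Minimal sketch of IP usage tracking: who used which version of a core,
# when, and with what result. Field names are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class UsageEvent:
    core: str          # e.g., "dsp_core"
    version: str       # e.g., "1.4.2"
    user: str          # engineer or consuming company
    result: str        # e.g., "pass", "fail", "tapeout"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class UsageLog:
    """An append-only log supporting the queries the article describes."""

    def __init__(self) -> None:
        self._events: list[UsageEvent] = []

    def record(self, event: UsageEvent) -> None:
        self._events.append(event)

    def history(self, core: str) -> list[UsageEvent]:
        """Everything that ever happened to a core, for reuse and ROI review."""
        return [e for e in self._events if e.core == core]

    def consumers(self, core: str, version: str) -> set[str]:
        """Who holds a given version: useful for billing and support."""
        return {e.user for e in self._events if e.core == core and e.version == version}

# Usage: the vendor can see at once which consumers hold version 1.4.2
# and therefore who needs a fix notification, an invoice, or support.
log = UsageLog()
log.record(UsageEvent("dsp_core", "1.4.2", "acme_systems", "pass"))
log.record(UsageEvent("dsp_core", "1.4.2", "bravo_corp", "fail"))
print(log.consumers("dsp_core", "1.4.2"))
```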