How to improve verification planning
Steve Brown (06/27/2005 9:00 AM EDT) URL: http://www.eetimes.com/showArticle.jhtml?articleID=164902761
Do you have a plan? Project management is all about planning and execution. If everyone properly plans their verification project, why do quality problems and schedule slips persist?

It really comes down to the adage "Begin with the end in mind." A good plan contains detailed goals using measurable metrics, along with optimal resource usage and realistic schedule estimates. Yet when managers and their teams engage in planning, the results of that planning process typically do not address the common problems that cause schedule slips, resource productivity issues and poor product quality. Most verification plans focus solely on how tasks will be performed rather than on defining the verification problem independent of its solution. This almost guarantees gaps leading to bug escapes, schedule delays to fix those bugs, or significant resource strain and inefficiency. With a bit of improvement in planning we can do much better.

Why verification planning is incomplete
Most teams don't do complete verification planning. They leap to verification environment development before planning is complete because they are stretched extremely thin. The result is a static, inflexible plan that becomes nearly irrelevant because there is no easy way to maintain its relationship to the actual project. In other words, the verification plan isn't really a plan; it's merely a set of incomplete discussion notes that atrophy as the project moves forward and the team learns new things about what it must accomplish.

The solution is to make the verification plan an executable part of the verification process itself. A verification plan becomes executable when it is read by a verification process automation tool, used to organize and generate reports about project status, and made the basis for analyzing data to determine next steps. The plan's value is maximized when it serves as the genesis and touchstone of the verification process throughout the life of the project. Using the plan to automatically measure verification completeness directly increases the return on investment of developing and maintaining it. When changes in the project are necessary, those changes are introduced, tracked and measured through the updated executable verification plan. This makes the plan a continuously valuable part of the verification project, from specification to closure.
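To make this concrete, here is a minimal, purely illustrative sketch (in Python, with invented feature names and numbers; the article does not prescribe any particular plan format or tool) of a verification plan captured as data that an automation tool could read, annotate with measured coverage and roll up into a completeness report:

```python
# Hypothetical sketch: the verification plan as data rather than prose, so a tool
# can read it, attach measured coverage to each goal and report completeness.

verification_plan = {
    "dma_controller": {
        "priority": "must",        # must / should / nice-to-have
        "coverage_goal": 100.0,    # percent of the feature's coverage model to close
        "measured": 72.5,          # filled in from coverage results on each run
    },
    "pcie_error_injection": {
        "priority": "should",
        "coverage_goal": 90.0,
        "measured": 41.0,
    },
}

def completeness(plan, priority=None):
    """Roll up how close each goal is to its target, optionally filtered by priority."""
    goals = [g for g in plan.values() if priority in (None, g["priority"])]
    if not goals:
        return 100.0
    return 100.0 * sum(min(g["measured"] / g["coverage_goal"], 1.0) for g in goals) / len(goals)

print(f"Must-have features: {completeness(verification_plan, 'must'):.1f}% complete")
print(f"Overall:            {completeness(verification_plan):.1f}% complete")
```

Because the plan is data, the same report can be regenerated after every regression run, which is what turns the plan into the touchstone described above.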
Figure 1 — The verification productivity gap
Better planning, and use of the executable plan to measure project completion, leads to higher quality, increased schedule predictability and improved resource productivity. With a better plan, it's easier for the team to produce the desired results. Using the plan all through the project helps surface issues earlier, which is a key to staying on schedule. Tracking progress against the plan's metrics enables everyone on the team to self-manage and stay focused, which leads to improved productivity.

What is good planning?
Verification planning needs to change from a focus on "how" to a focus on "what." By starting with the important goals of what needs to be verified, the team can ensure the plan is complete and balanced. The plan in this case is more than an engineering specification of how the verification is to be accomplished. It is an agreed-upon statement of what the project must accomplish to be successful. It identifies success criteria in a prioritized fashion: for example, these features must work; these features should work in the first release; and these features are nice to have working in the first release. It also identifies the details of how the success criteria will be measured, indicating when the project has reached closure.

Basics of verification planning
From a high-level process perspective, verification planning is actually quite straightforward. The basic steps of verification planning are:
1. Analyze the device specifications
2. Scope the verification objectives
3. Identify the feature set of the design
4. Design detailed coverage models
5. Select aggregate metrics to track progress
6. Use historical metrics to estimate effort and schedules

Analyze the device specifications
All projects begin with several forms of specifications. Marketing contributes the customer requirements. Management defines the resource, schedule and build-versus-buy constraints of the project. The systems engineers, the hardware and software engineers and the verification team each provide verification implementation specifications that will guide the project. The biggest risk to the project is not anticipating the moving target. Some teams attempt to capture all their requirements and then move through the project sequentially, sometimes called the waterfall method. In reality the process is quite iterative. The risk comes from failing to anticipate that iteration and to manage the frequency and burden of each pass.

Scope the verification objectives
The most common failure at the scoping stage of the project is not requiring a comprehensive verification discussion with everyone involved. Scoping and documenting the verification objectives is the only way to discern whether the specifications were well conceived and understood. It is terribly inefficient, and unpredictable, to plan details into a verification project after it is underway. Management must allocate sufficient time upfront to get the verification goals accurate and complete. Propagating requirements changes is another matter and is discussed in a later section.

All stakeholders must participate in scoping the verification objectives: management, marketing, systems designers, hardware designers, software designers and the verification team. It is only through team dialog that the full project requirements emerge. Marketing requirements are never complete because they only address market-critical factors. Management resource and schedule constraints are usually aggressive, by definition, and must be relaxed as the project is fully scoped. Systems engineers bring the technical perspective closest to the marketing requirements. Hardware and software engineers provide valuable insight about the implications of implementation. And the verification team is the choreographer of the discussion, continuously challenging the team to see and fill the gaps and to identify unverifiable choices, a key design-for-verification objective. If any of these points of view is missing, gaps and risks will remain in the project, inevitably leading to quality concerns, cost overruns or schedule slips.
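As an illustration of what the output of scoping might look like, the hypothetical sketch below (invented features and stakeholders, not a prescribed format) records each objective with the stakeholder who raised it, its priority and the agreed way to measure it, and flags any objective with no agreed metric, which is exactly the kind of gap the scoping review is meant to expose:

```python
# Hypothetical scoped verification objectives: who asked for it, how important it is,
# and how it will be measured. An objective with no agreed metric is a scoping gap.

objectives = [
    {"feature": "low-power retention mode",  "source": "marketing",
     "priority": "must",   "measured_by": "functional coverage + power-state assertions"},
    {"feature": "legacy boot sequence",      "source": "software",
     "priority": "should", "measured_by": "directed firmware boot test"},
    {"feature": "PCIe lane-reversal corner", "source": "hardware",
     "priority": "must",   "measured_by": None},   # gap: no metric agreed yet
]

gaps = [o["feature"] for o in objectives if o["measured_by"] is None]
if gaps:
    print("Objectives with no agreed metric:", ", ".join(gaps))
```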
Figure 2 — Verification team planning
Identify the feature set of the design
Good measurement techniques have been an elusive part of verification. Metrics are the means by which verification progress is measured. Historically, engineers have designed tests, implemented checkers and used code coverage. Test lists have become obsolete as a metric because the tests have become too numerous to specify or implement. Although directed tests are sometimes easier to write, they simply don't scale to the size of today's systems on chip. Checkers aren't appropriate as metrics because they detect erroneous system behavior rather than record that good behavior has been observed. And code coverage is only loosely correlated with behavior because the device context of each line of code is not recorded.

Functional coverage has been identified by leading electronics teams as the most accurate verification metric. As part of a coverage-driven verification (CDV) methodology, a team is able to measure how much real verification it has completed, as opposed to how many (mostly redundant) simulation cycles it has executed. The specification language of functional coverage is designed to match the requirements specification captured during the scoping process. The assertion language in a coverage specification can also be used to capture implementation assumptions, part of assertion-based verification (ABV). Total coverage is the only way to get a complete picture of the verification status of a project: functional coverage correlates directly to the features; assertion coverage relates to functional coverage and to implementation integrity; and hardware and software code coverage tell us how well we have exercised the design.

Design detailed coverage models
The scoping process results in a list of the features to be verified. The specification must describe those features so that they can be measured. Coverage is used to define the metrics of verification, derived from the design features. Two types of metrics result from scoping: explicit specification metrics and explicit implementation metrics. Explicit specification metrics are chosen by the engineer from the specification. Explicit implementation metrics are chosen by the engineer from the RTL implementation, once the RTL is available.

The typical failure at this stage of planning is lack of review of the coverage models. They need to be complete enough to represent the full list of features to be verified, yet succinct enough that the tools and the team can actually accomplish the task within the desired timeframe. Judgment is crucial: it is far better to have verified 100% of the most critical functionality than to have critical functionality lost in a coverage model that is too detailed. For a more complete treatment of this topic, refer to the book Functional Verification Coverage Measurement and Analysis by Andrew Piziali (Springer, ISBN 1-4020-8025-5).

Select aggregate metrics to track progress
Management is all about planning and executing, and tracking progress is the only way to know your team is executing. The old management adage "you get the behavior that you measure" is apropos. The challenge is selecting the few metrics to track that are representative of progress and will expose problems and risks. Typically, teams define project milestones to delineate progress.
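The sketch below is a language-neutral illustration in Python (a real team would more likely express this as covergroups or assertions in a verification language; the protocol, bins and numbers are invented) of a small functional coverage model and of a milestone defined as the subset of that model that must be closed before early system integration:

```python
from itertools import product

# Hypothetical functional coverage model derived from the specification: the cross of
# packet kind and length bin is the set of behaviors that must be observed.
kinds = ["config", "data", "status"]
length_bins = ["min", "typical", "max"]
coverage_space = set(product(kinds, length_bins))

# Bins actually hit so far, as recorded from simulation by a coverage tool.
observed = {("config", "min"), ("config", "typical"), ("data", "typical")}

def percent_covered(space, hits):
    return 100.0 * len(space & hits) / len(space)

# A milestone is a named subset of the coverage space that must be closed by a date,
# for example everything needed before early system integration can start.
integration_milestone = {point for point in coverage_space if point[0] == "config"}

print(f"Total functional coverage: {percent_covered(coverage_space, observed):.0f}%")
print(f"Integration milestone:     {percent_covered(integration_milestone, observed):.0f}%")
```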
To maximize the effectiveness of the team, these milestones should be planned so that they pull forward development of the most time-critical or highest-risk areas of the project. But verifying a particular feature in isolation is no longer sufficient. Increasingly, teams define milestones that expose possible system integration issues: they deliver the right subset of block functionality so that early system verification is enabled. This requires a fine-grained definition of the features that must be working at each milestone. By using coverage to define those features, and by carefully defining and tracking milestones that collect a system-level subset of features, the project milestone becomes important for much more than simply marking a box in the schedule as complete. It truly reduces quality and schedule risk in the project.

Other manually tracked milestones include first working simulations and completion of the generation, checking and coverage models, together known as implementation tracking. Anticipating and achieving these two milestones often befuddles teams first adopting coverage-driven verification. An executable verification plan can specify the conditions for each, improving the predictability of these critical early stages of a project.

Use historical metrics to estimate schedules
Creating engineering schedules is an art: it reflects the experience of the team at project planning and at recalling how much time was spent on similar tasks in the past. Any act of engineering that jumps to the answer without a visible process is an art. As more metrics are used in functional verification, the art of schedule estimation is becoming more of an act of engineering, where the process is visible. The challenge is that few teams have captured metrics in the past, nor have they invested in creating a model by which a schedule can be quantified and estimated.

Some of our more sophisticated customers are tracking additional metrics, beyond coverage, so that their schedule estimates become more accurate. The metrics they're tracking measure the time for implementing, debugging and integrating each level and abstraction of IP in the design and verification environment, with data points for varying sizes of IP. They also estimate the impact of developing new IP versus reusing it, the impact of engineering experience and skill, and the trade-off of verification process automation versus manual verification management. They look at hardware and software, as well as block-, chip- and system-level scopes.

Without actual historical metrics, the team can estimate these metrics. Today's schedules are estimates of the tasks rather than of the metrics themselves. By breaking the estimates down to one more level of detail, the team discovers assumptions in its reasoning and creates detail against which each individual can track their own work. A model, or equation, can then be constructed that uses these historical metrics and the current project estimates to generate detailed project schedules. These estimates and the model itself are then debated for their accuracy, and a project schedule commitment can be made from the schedule estimate. With the right kind of automation tools, the overall project, task and detailed metrics can be compared against these estimates and captured for the benefit of the next project.
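As a sketch of what such a model might look like (all rates and factors below are invented placeholders rather than calibrated data), historical closure rates can be combined with per-block estimates to produce a first-cut schedule that the team can then debate and refine:

```python
# Hypothetical schedule-estimation model built from historical metrics rather than gut
# feel: past projects give a closure rate, adjusted for IP reuse and team experience.

HISTORICAL_RATE = 40  # coverage points closed per engineer-week, from past projects
REUSE_FACTOR = {"new": 1.0, "derivative": 0.6, "reused": 0.25}
EXPERIENCE_FACTOR = {"senior": 0.8, "mixed": 1.0, "junior": 1.3}

def estimate_weeks(coverage_points, engineers, ip_kind="new", team="mixed"):
    """Estimate calendar weeks to close a block's coverage model."""
    effort = coverage_points * REUSE_FACTOR[ip_kind] * EXPERIENCE_FACTOR[team]
    return effort / (HISTORICAL_RATE * engineers)

blocks = [("dma", 600, 2, "derivative"), ("pcie", 1500, 3, "new"), ("uart", 120, 1, "reused")]
for name, points, engineers, kind in blocks:
    print(f"{name:5s}: {estimate_weeks(points, engineers, kind):.1f} weeks")
```

Comparing the measured closure rate against these estimates as the project runs is what allows the model, and the next project's schedule, to improve.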
Changing requirements
Every engineer knows that requirements always change; delaying planning does not alter this reality. Many electronics products now serve consumer markets whose timing is tied more closely to the season than to the completion of a project. Managers have long sought to control the change process to make projects more predictable. The big question is, "How late can we introduce a change and still release a quality product on schedule?"
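To connect that question back to the executable plan, the small hypothetical sketch below (invented feature names and numbers) shows a late requirement being introduced through the plan itself, so that its verification cost appears in the very next completeness report instead of surfacing as a surprise near tape-out:

```python
# Hypothetical sketch: a late requirement enters the executable plan, not a side
# spreadsheet, so its impact is visible in the next status report.

plan = {
    "dma_controller":     {"priority": "must", "goal": 100.0, "measured": 95.0},
    "pcie_link_training": {"priority": "must", "goal": 100.0, "measured": 88.0},
}

def must_have_completeness(p):
    must = [f for f in p.values() if f["priority"] == "must"]
    return 100.0 * sum(min(f["measured"] / f["goal"], 1.0) for f in must) / len(must)

print(f"Before the change: {must_have_completeness(plan):.1f}%")

# Marketing adds a must-have feature late in the project.
plan["usb_role_swap"] = {"priority": "must", "goal": 100.0, "measured": 0.0}

print(f"After the change:  {must_have_completeness(plan):.1f}%")
```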
Figure 3 — Executable plan drives verification process
The executable verification plan is a crucial part of the automation mechanism for driving and managing verification from specification to closure; Figure 3 shows how the executable plan drives the verification process.
Improved product quality
It is easy to see that good plans result in good quality. Every customer we've helped improve verification planning shares a story of a bug they discovered by following this process. Even for projects well under way, the value of rigorous verification planning is clear. In some cases, the biggest value is creating a common process and nomenclature for planning.

Schedule predictability
Improved schedule predictability derives from more accurate work estimates and earlier visibility into deviations from the plan. Detailed milestones help surface crucial problems earlier, giving the team an opportunity to work through the challenges in the normal course of the project.

Team productivity
An executable verification plan helps the team communicate more efficiently as well as focus on critical issues more easily. Standard reporting increases the effectiveness of the team and management, keeping resources focused on the most crucial tasks.

Conclusions
With this picture, "good" verification planning can be defined as: using the input of all stakeholders to capture and review the verification plan; using historical metrics and a formulaic model to estimate schedules; and driving the whole verification project, from specification to closure, with an executable verification plan.

Steve Brown is director of Marketing, Enterprise Verification Process Automation, Cadence Design Systems.