Method for Library Analysis Automation
Praveen Kumar, Syed Shakir Iqbal, Shilpa Gupta (Freescale Semiconductors India Pvt. Ltd.)
1. Introduction
Technology scaling and process evolution are defining characteristics of the VLSI/ULSI industry. Following Moore's law, and driven by advances in both fabrication processes and characterization, the VLSI market offers a wide range of standard cell models and libraries for design implementation and signoff. Each of these alternatives, however, comes with its own vendor costs and design impact, leaving designers to weigh the available options and select the solution that reduces cost without compromising design quality. A qualitative and quantitative library comparison and analysis therefore plays a very important part in any design execution. In this article we briefly discuss the usefulness of this type of analysis under various circumstances and present a method to automate it independently of vendor and technology.
2. Standard Library: An Implementation Perspective
The standard library file, or liberty timing file, is a standard format used to convey the timing parameters and functionality associated with every cell in a particular semiconductor technology. The liberty file contains several attributes, such as cell area and leakage power, which are relevant during RTL synthesis for performance optimization and leakage reduction in SoC design. Multiple flavors of library are available for the same cell set based on the statistical characterization. These flavors cover not only the PVT variations but also the way the variations are modeled in a specific library cell: for example, the timing and power information of a 4-sigma library for the same cell and PVT reflects the influence of more device parameters during characterization than its 3-sigma counterpart. Designers, especially backend physical design teams, often have to perform thorough experiments and make predictions about which set of libraries to use for implementation and how many PVT corners to use for signoff. At the start of the design phase, it is usually not possible to check the impact of different libraries on every portion of the design itself, since a mature design is not yet available. Instead, designers typically have a rough gate-count estimate and, in some cases, rough power and area estimates derived from a different library or characterization setting. This creates the need for library comparison and prediction when translating a design from one set of libraries to another. As we move to lower technology nodes, the output transition of a cell becomes sharper, which means it can support a higher load than cells of the previous technology; each node therefore also brings different operating points (in terms of load, slew, voltage and temperature).
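For illustration, the snippet below is a minimal sketch of extracting such cell-level attributes from a liberty file. It assumes a simplified, flat .lib layout and hypothetical cell names; production liberty files generally require a full parser that handles nested groups and multi-line attributes.

```python
import re

# Minimal sketch: pull per-cell area and leakage from a liberty file.
# Assumes a simplified, flat .lib layout; real liberty files usually
# need a proper parser (nested groups, multi-line attributes, etc.).
CELL_RE = re.compile(r'cell\s*\(\s*(\w+)\s*\)')
ATTR_RE = re.compile(r'(area|cell_leakage_power)\s*:\s*([\d.eE+-]+)\s*;')

def scan_liberty(path):
    cells, current = {}, None
    with open(path) as fh:
        for line in fh:
            m = CELL_RE.search(line)
            if m:
                current = m.group(1)   # entered a new cell group
                cells[current] = {}
                continue
            m = ATTR_RE.search(line)
            if m and current:
                cells[current][m.group(1)] = float(m.group(2))
    return cells

# e.g. {'AND2X2_SVT': {'area': 1.2, 'cell_leakage_power': 0.035}, ...}
```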
3. Library Analysis Methodologies
In the earlier section we briefly discussed one scenario in which standard cell library comparison and analysis becomes an important part of the design cycle. In this section we discuss how this analysis is generally executed and what its current limitations are. Analyzing any cell in a library requires two things: (i) an input transition and (ii) an output load. Hence, to analyze a library for a particular technology, the first requirement is to know the input transition and output load operating points at which the analysis is to be made. We usually select a limited number of such transition-load combinations, primarily targeted to cover the worst, best and average operating points. Once we have these operating points, we can analyze the behaviour of each cell present in a particular library. For a new semiconductor technology we often have no idea of these operating points prior to implementation; in that case we take the maximum, minimum and mid values of input transition and output load from the new library. In addition, the variation of cell properties such as leakage power can be observed across the worst, typical and best cases irrespective of the operating point.
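A minimal sketch of deriving such operating points, assuming hypothetical slew and load ranges (in practice these would come from the characterization table indices of the new library):

```python
# Minimal sketch: derive {min, mid, max} operating points from the slew
# and load ranges of a library's characterization tables. The ranges and
# units here are purely illustrative.
def operating_points(slew_range, load_range):
    def min_mid_max(lo, hi):
        return (lo, (lo + hi) / 2.0, hi)
    slews = min_mid_max(*slew_range)
    loads = min_mid_max(*load_range)
    # Cross every slew with every load to cover best/typical/worst points.
    return [(s, l) for s in slews for l in loads]

points = operating_points(slew_range=(0.01, 0.5), load_range=(0.001, 0.2))
# 9 (input_slew, output_load) pairs, e.g. ns and pF for illustration.
```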
In most cases, tool-based scripts and utilities are limited to raw data extraction, and the abstraction of an analysis and comparison summary has to be done manually, eating up a lot of engineering bandwidth. In the next section, we discuss a technology- and vendor-independent automated library analysis toolkit dedicated to faster, result-oriented library analysis and comparison.
4. Automated Library Analysis Toolkit
Each technology comes with its own flavors of libraries, cell sets and nomenclature. This mostly leaves us with tool and runtime dependencies for a predictive implementation analysis. Moreover, internal tool scripts and utilities written for one technology, or even for one library within the same technology, may not be portable amongst themselves. At the same time, we cannot completely avoid using timing/implementation tools, as any other type of analysis would diverge from design-specific results. To resolve this, we implemented a tool-assisted utility that makes this analysis easier, portable and almost technology- and library-independent. The new automated library analyzer uses a two-pass flow:
- Data Extraction: Extract the library-related information from the timing or implementation tool into a standard format.
- Result Extraction: Process the output of the first step to provide a user-friendly, impact-based analysis (a sketch of the flow and its generic record format follows this list).
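A minimal sketch of this two-pass structure, with an illustrative per-arc record schema exchanged between the passes (the field names are assumptions, not the toolkit's actual schema):

```python
import csv
from dataclasses import dataclass, asdict, fields

# Minimal sketch of the two-pass flow's data hand-off. The field names
# below are illustrative, not the toolkit's actual schema.
@dataclass
class ArcRecord:
    library: str      # source library name
    cell: str         # generic function, e.g. AND2
    drive: str        # drive strength, e.g. X2
    vt: str           # VT/channel flavor, e.g. SVT
    arc: str          # timing arc, e.g. "A->Y rise"
    input_slew: float
    output_load: float
    delay: float
    slew: float
    leakage: float
    area: float

def pass1_dump(records, path):
    """Data extraction: write tool-queried per-arc records to a generic CSV."""
    with open(path, 'w', newline='') as fh:
        writer = csv.DictWriter(fh, fieldnames=[f.name for f in fields(ArcRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

def pass2_summarize(path):
    """Result extraction: read the generic CSV back for comparison/reporting."""
    with open(path) as fh:
        return list(csv.DictReader(fh))
```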
For data extraction, the designer invokes a signoff/implementation tool with the sets of libraries that need to be analyzed. Once the tool is invoked with all the necessary libraries loaded, the user launches the library analyzer. The library analyzer takes two inputs from the user. The first is the operating point descriptor, a standard template listing the derates, input slews and output load values for which the analysis is to be done. The second input is a technology descriptor. Upon initiation, the analyzer first calls the technology-to-generic cell mapper, which uses the technology descriptor to convert the vendor-specific library nomenclature into a generic format containing the following cell descriptors (a name-mapping sketch follows the list):
- Cell Func: AND2, XOR2, SDFF, etc.
- Cell Drive: X2, X7, X12
- Cell Track: 9-track or 12-track
- Cell VT-CH: SVT, HVT, low leakage, channel length, etc.
- Cell Delimiter: underscore character by default.
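The following is a minimal sketch of such a mapper for a hypothetical nomenclature like AND2X4_9T_SVT; a real technology descriptor would supply the pattern and delimiter per vendor:

```python
import re

# Minimal sketch of the technology-to-generic cell mapper. The pattern
# below describes a dummy nomenclature like "AND2X4_9T_SVT"; a real
# technology descriptor would supply this pattern per vendor/library.
DESCRIPTOR_RE = re.compile(
    r'(?P<func>[A-Z]+\d*)'      # cell function, e.g. AND2, SDFF
    r'X(?P<drive>\d+)'          # drive strength, e.g. X4
    r'_(?P<track>\d+)T'         # track height, e.g. 9T
    r'_(?P<vt>[A-Z]+)'          # VT/channel flavor, e.g. SVT
)

def map_cell(name):
    m = DESCRIPTOR_RE.fullmatch(name)
    return m.groupdict() if m else None

print(map_cell('AND2X4_9T_SVT'))
# {'func': 'AND2', 'drive': '4', 'track': '9', 'vt': 'SVT'}
```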
To change a technology or library, the user only needs to change the technology descriptor for the utility, which explains the relation of these descriptors to the actual library cell names. Tables 1 and 2 show sample descriptor templates for a dummy library that the user needs to provide to the utility for analysis. Only one technology descriptor is needed, as the utility is smart enough to map the other cells accordingly. Once the utility has been provided with the required descriptors, it starts dumping the library parameters corresponding to the concerned operating points into a generic data format for each and every cell in the library. The methodology involved in data and result extraction is shown in figure 1. To restrict the analysis to a selective cell set, the user can supply a standard don't-use file before executing the utility. The data file contains the cell descriptors, the concerned cell arcs, the cell library and all required implementation- or timing-specific attributes extractable from the library, such as delay, slew, power, max cap and area. All this data is dumped for all the specified operating conditions; once done, the utility hands the extracted database to a result extractor that executes the second pass of the flow. The second pass analyses the raw data sheets dumped earlier and extracts the design/library analysis summary and comparison. The summaries are automatically generated in user-friendly formats such as tables and graphs.
Figure 1: Methodology for automated library data analysis.
The extracted data contains both absolute and relative analyses of all the extracted library attributes. Note that the comparison is done on a cell-to-cell and arc-to-arc basis, which reduces statistical error. This is further enhanced by letting the user control, via the operating point descriptor, the cell types and cell count used for the analysis. In the next section we briefly discuss some of the data analysis performed using the proposed methodology.
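A minimal sketch of this arc-matched relative comparison, using illustrative arc keys and delay values:

```python
# Minimal sketch of the arc-to-arc relative comparison performed in the
# result-extraction pass. `ref` and `new` map an arc key (generic cell,
# drive, arc, operating point index) to its delay in each library.
def compare_delays(ref, new):
    deltas = {}
    for key, ref_delay in ref.items():
        if key in new and ref_delay:        # compare only matching arcs
            deltas[key] = (new[key] - ref_delay) / ref_delay * 100.0
    return deltas

ref = {('AND2', 'X4', 'A->Y rise', 0): 0.110}
new = {('AND2', 'X4', 'A->Y rise', 0): 0.100}
print(compare_delays(ref, new))   # ~-9.1%, i.e. faster on this arc
```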
5. Usability of Automated Library Analysis Toolkit
When comparing data such as power, area and timing for a library cell set and quickly predicting its impact on the overall design, a designer has two options. The first is to actually use the new cell set on a similar but smaller sub-design. This yields very good, accurate results, but the runtimes are design-dependent and hinge on tool efficiency. The second approach is to predict the design impact from the gross results of the pre-existing design and the statistical cell-set data, as done in the proposed automated library analysis. The second approach may not be 100% accurate, typically falling within an error of 5-7%, but it makes up the difference by providing a basic QoR view of the new library's cells before the actual synthesis, so that fewer runs need to be planned for the actual design-based analysis.
Table 3: Library timing comparison generated by proposed flow.
To demonstrate the application of this toolkit, let us compare the results obtained by both analysis approaches on a common design. In our test case we had to compare the relative gain in performance from changing the signoff voltage, and hence from using a library characterized at a new voltage of 1.0V (LIB1-1.0V). To do so, a database optimized with the present library operating at 0.9V (LIB1-0.9V) was first analyzed with the new libraries without any incremental optimization. After obtaining those results, the design was optimized using the new library and analyzed once again. At the same time, we also took the slack distribution data from the existing design and extrapolated it using the results produced by the library analysis toolkit.
Table 4: Slack improvement after swapping library for timing analysis generated by signoff tool.
As per the library analysis result in table 3, the new 1.0V library is approximately 9% faster. So if the concerned design has a majority of paths timed against a top frequency of 1GHz with the current 0.9V library, using the new library should yield roughly 1.09GHz. Now consider the results for the design with the swapped library in table 4. These show that the majority of paths gain on average 97ps of slack, which translates into 97MHz w.r.t. 1GHz; the library swap therefore suggests that performance can be raised to ~1.097GHz. Meanwhile, the results after optimization on the same database revealed that slack had improved on average by 112ps, putting the achievable performance at about 1.112GHz. Thus the library analysis provides a good approximation of the timing performance improvement.
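For reference, a short sketch of the back-of-envelope slack-to-frequency conversion used above; the first-order estimate matches the figures quoted, while an exact period-based recomputation lands slightly higher:

```python
# First-order slack-to-frequency conversion, valid while the slack gain
# is small compared to the clock period:
#   delta_f ~= f0**2 * delta_slack
f0 = 1.0e9            # current signoff frequency, Hz (1 GHz)
for gain_ps in (97, 112):
    dt = gain_ps * 1e-12
    approx = f0 + f0**2 * dt          # linear estimate used in the text
    exact = 1.0 / (1.0 / f0 - dt)     # exact period-based recomputation
    print(f'{gain_ps} ps -> ~{approx/1e9:.3f} GHz (exact {exact/1e9:.3f} GHz)')
# 97 ps -> ~1.097 GHz (exact 1.107 GHz)
# 112 ps -> ~1.112 GHz (exact 1.126 GHz)
```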
Let us consider another example. Suppose a designer needs to know the impact of VT swapping on power and timing using a newly characterized library, and thus compare the relative performance of the two libraries and estimate the extra or reduced implementation effort of either alternative. To do so, he can simply run the library analysis toolkit and consult the extracted summary data tables and graphs, as shown in figure 2, to assess the power and timing impact. By making use of this user-friendly data interpretation, the designer saves the bandwidth otherwise spent analyzing library data and extrapolating its impact, mainly for the power, area and timing cost groups.
Figure 2: Sample VT comparison generated by proposed flow.
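A minimal sketch of how such a VT-flavor summary (as in figure 2) might be assembled from the extracted records, reusing the illustrative ArcRecord schema sketched earlier:

```python
from collections import defaultdict

# Minimal sketch: group extracted records by VT flavor and average their
# leakage and delay, e.g. to weigh an SVT set against an HVT set. The
# records follow the illustrative ArcRecord schema sketched earlier.
def vt_summary(records):
    buckets = defaultdict(lambda: {'leakage': [], 'delay': []})
    for r in records:
        buckets[r.vt]['leakage'].append(r.leakage)
        buckets[r.vt]['delay'].append(r.delay)
    return {vt: {attr: sum(vals) / len(vals) for attr, vals in d.items()}
            for vt, d in buckets.items()}
# e.g. {'SVT': {'leakage': 0.04, 'delay': 0.11}, 'HVT': {...}}
```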
6. Conclusion
In this paper we have briefly discussed standard cell library analysis, its usefulness and how it can affect the overall design cycle. We also introduced a method that automates this analysis and makes it more designer-friendly, producing quick, quality results with minimal engineering bandwidth. In addition, we discussed scenarios and examples in which this methodology delivered actual design cycle reduction. The challenges for a designer are always escalating, but by introducing such result-oriented abstraction methodologies we can keep pace with ever-changing and evolving technology.