State of RTL based design - is it time to move beyond?
Karl Kaiser, Esencia Technologies Inc.
Abstract:
Synopsys is celebrating its 25th anniversary this year, which also means that commercially available logic synthesis is turning 25 years old. This paper looks at the history of logic synthesis and at how synthesis tools raised the design abstraction level and unlocked significant improvements in design productivity. Logic synthesis is also identified as a core technology that enabled IP reuse and the advent of the steadily growing IP business. The paper then looks at behavioral synthesis and analyzes the factors driving the incredible success of programmable processor cores in today's SoCs. It then suggests Core-Based Design as a way for design teams to further improve design productivity and mitigate the risks of deep-submicron tape-outs.
Introduction
Commercially available logic synthesis tools, together with Synopsys, are celebrating their 25th anniversary this year. In 1987, Synopsys introduced the first logic synthesis tool, "Design Compiler" [1], able to map functionality described in hardware description languages (HDLs) at the Register-Transfer Level (RTL). Since then, logic synthesis has enjoyed tremendous success and altered the way digital circuits are designed. However, with the maturity of logic synthesis, the question arises: What progress will we see in this space in the coming years, and how much runway does logic synthesis still have?
This paper looks at the history of logic synthesis technology and its impact on design productivity and design reuse. It then goes on to discuss a few technologies that have the potential to boost design productivity in a similar manner to the way logic synthesis has over the last 25 years. It suggests Core-Based Design methodology as a potential solution and identifies areas that need further work.
This paper’s target audience is EDA/design managers and engineers who are thinking about trends in logic synthesis and where the industry may head.
History of RTL Synthesis
Since its invention more than a quarter century ago, logic synthesis has become the standard method used to implement digital circuits. Logic synthesis was the enabling technology that initiated the transition from gate-level schematics to hardware description languages (HDLs). The move to logic synthesis yielded an incredible productivity boost for digital design teams. It was one of the main factors that allowed them to overcome the much-discussed "Productivity Gap" of the 1990s [2].
The productivity gains were mostly rooted in raising the design abstraction level from gate level to register transfer level (RTL) and letting the synthesizer do the tedious work of mapping to standard cells. A secondary effect of the introduction of logic synthesis was that it greatly simplified the reuse of blocks across different semiconductor foundry processes and standard cell libraries. Higher reuse not only further boosted design productivity and the ability to build more and more complex systems but also facilitated the growth of the semiconductor IP ecosystem. In fact, IP reuse is at the core of the ongoing significant design productivity gains. Mike Gianfagna, VP of Marketing at Atrenta, stated that large-scale SoCs reuse about 80% of their blocks [3]. A similar number was used by Gary Smith in his 2011 DAC talk [4].
In 1994, Synopsys introduced their first behavioral synthesis tool, "Behavioral Compiler" [5]. High-Level Synthesis (HLS) enabled designers to raise the design description above RTL. However, the tool never really made it into the mainstream digital design flow of semiconductor companies. In 2004, Synopsys announced the End-of-Life (EoL) of the "Behavioral Compiler" product [6]. A second generation of high-level synthesis tools followed. Several of these tools accepted not only hardware description languages like Verilog/SystemVerilog or VHDL but also C/SystemC as design description languages. However, to this day, this technology has not seen the wide acceptance that RTL logic synthesis has enjoyed.
What is the next step?
Looking at the current situation in semiconductors, the question arises: where will we get further improvements in design productivity, and what technology will fuel them? Can we continue to rely on logic synthesis and IP reuse alone? Given the ever-increasing complexity of today's Systems-on-a-Chip (SoCs) and the intense cost and schedule pressure, one can expect that additional design productivity improvements are required. From where will such improvements come?
Success of Programmable Cores
One trend that gives us some hints is the use of programmable cores. The use of embedded programmable cores is still growing rapidly, not only at the top end of the spectrum, with application processor product offerings from companies like ARM or MIPS, but also in the deeply embedded realm, where they are hardly visible from the outside of a chip product. According to a prediction made in a keynote speech at the Freescale Technology Forum in 2008 by Lisa Su, former chief technology officer of Freescale Semiconductor, Inc. (Austin, Texas), we should be well on track for 1,000 embedded devices per person by 2015 [7]. Another data point for the rapid growth of embedded cores is found in a study by Colin Barnden, Principal Analyst with Semicast Research. His report predicts that the number of ARM-based processors in operation will reach 17 billion by 2016, up from just 0.4 billion in 2000 [8].
A third published indicator of the increasing usage of programmable cores is the cumulative shipment of two billion cores, with a continuing run rate of 800 million instances a year, by licensees of Tensilica, a vendor of embedded processor IP [9].
Benefit of Programmable Cores
Why are programmable cores so popular in today's ASICs and ASSPs? For one, they are typically programmed in C or other high-level programming languages. The steps of mapping a sequential description of a control- or data-plane function into the register-transfer hardware that logic synthesis requires are time-consuming and demand special skills. A programmable core largely allows these steps to be skipped, improving design productivity and shortening project cycles and hence time to market.
The second essential benefit of using programmable cores is that they can be made re-programmable. This reduces the risk of having to re-spin a chip due to design issues. It also offers the flexibility of reusing the same hardware resources. For example, the same core can run an audio codec supporting all current standards, like ADPCM and MP3, as well as future ones.
Programmable Core-Based Design
So, what are the hurdles that slow down a faster adoption of Programmable Core-Based Design? One element is the choice of programmable cores. Today, most programmable cores are sold as IP blocks in a specific configuration. Programmable core vendors typically offer a finite number of cores with specific performance capabilities. To accommodate special design requirements, some product offerings come with limited configurability. In a Core-Based Design approach, however, programmable cores replace hard-coded RTL blocks. To make this cost- and performance-efficient, much greater flexibility is required. Cores must be scalable from a simple, plain-vanilla CPU engine to higher-performance cores that can execute many concurrent computations per clock. Flexibility is also required to feed the computational elements with data: loading and storing data in and out of the core must be scalable over a similar range, to accommodate the specific requirements of the algorithms.
A designer using Core-Based Design does not have to evaluate and pick from a discrete set of IP cores but rather uses a core generator tool. This is an essential paradigm shift from today's IP-centric approach to an EDA tool-based approach, very much like synthesis. Such a Core-Based Design tool generates programmable cores in the same manner that RTL synthesis today maps HDL to registers and gates. The designer supplies the functionality in the form of a high-level language description; the prime language candidates are C/C++, as they are widely used today to describe algorithms, though other popular languages like Java and Python could also become of interest. Besides the functional description of the algorithm, design constraints are supplied that capture the design goals. Similar to RTL synthesis, the tool provides feedback on the performance and cost of the generated cores.
Equipped with this data, the design engineer can freeze the constraints and generate the HDL for the programmable engines. The HDL code is then integrated into standard digital SoC design flow.
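To make the flow above concrete, here is a minimal sketch of what the input to such a core generator might look like. All names and the constraint syntax are hypothetical, illustrating the idea rather than any specific tool: the designer writes the algorithm as plain C, states the design goals as constraints, and leaves the hardware decisions (e.g., how many parallel multiply-accumulate units to generate) to the tool.

```c
#include <stdint.h>

/* Hypothetical design goals, as they might be expressed as tool
 * constraints (illustrative syntax only, not any real tool's format):
 *   set_throughput  -samples_per_cycle 1
 *   set_clock       -freq 500MHz
 *   set_area_budget -max 50kgates
 */

#define TAPS 4

/* Plain C description of the algorithm (a small FIR kernel). The
 * generator, not the designer, would decide how many parallel MAC
 * units to instantiate to meet the stated throughput constraint. */
int32_t fir(const int16_t coeff[TAPS], const int16_t sample[TAPS])
{
    int32_t acc = 0;
    for (int i = 0; i < TAPS; i++)   /* candidate loop for parallel MACs */
        acc += (int32_t)coeff[i] * sample[i];
    return acc;
}
```

The point of the sketch is the division of labor: the C function carries only the functional intent, while the constraints, not the code, determine the generated core's performance and area.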
Once the configuration of a core is frozen, a Core Instance Descriptor (CID) is saved, which captures the properties of the generated core. This descriptor is then loaded into the Core-Based Design platform, which then acts as a Software Development Kit (SDK), generating the binary program code for the generated cores (see Figure 1).
Figure 1
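As a sketch of what such a Core Instance Descriptor might capture, the record below lists the kind of frozen core properties an SDK would need to target the instance. The field names and values are purely illustrative assumptions, not any published CID format:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical Core Instance Descriptor: the frozen properties of a
 * generated core that the SDK reads to configure its compiler,
 * assembler, and debugger back-ends for that specific instance. */
typedef struct {
    char     name[32];        /* instance name within the SoC        */
    uint8_t  issue_width;     /* instructions issued per clock       */
    uint8_t  num_mac_units;   /* parallel multiply-accumulate units  */
    uint32_t local_mem_bytes; /* tightly coupled data memory size    */
    bool     has_fpu;         /* was floating-point HW generated?    */
} core_instance_descriptor;

/* Example instance: a small audio DSP core frozen by the generator. */
static const core_instance_descriptor audio_dsp = {
    .name            = "audio_dsp0",
    .issue_width     = 2,
    .num_mac_units   = 4,
    .local_mem_bytes = 64 * 1024,
    .has_fpu         = false,
};
```

Because the SDK derives its code generation from this descriptor rather than from a fixed core model, the same toolchain can target every instance the generator produces.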
It is important that each instantiated core is carefully optimized to its performance requirements, offering the right amount of performance without wasting silicon area on unused functionality.
In that sense, these programmable cores must fit Gary Smith's definition of Modifiable IP, which allows the addition or removal of blocks/units without affecting the verification scheme [10]. Cores that do also provide the benefit of tremendously reduced verification effort.
A second important element is the effort and complexity it takes to integrate these cores into systems. It must be easy to instantiate these cores as replacements for RTL blocks without spending time crafting bus subsystems and other interconnect infrastructure.
Needless to say, support for programming and debugging must be part of the easy integration strategy.
One important business aspect of the transition from an IP-centric programmable core model to an EDA-style, tool-based model is the licensing scheme. Today, most programmable core licensing models are quite restrictive in the flexibility of core features: unless customers buy themselves access to the entire buffet, cores are sold largely a la carte.
Conclusion
For 25 years, we have been using RTL-based design. The transition to RTL-based logic synthesis has improved design productivity and enabled IP reuse. IP reuse has further increased productivity, particularly over the last decade. However, logic synthesis technology has become mature, and the productivity gains from this technology are starting to level off.
Most modern devices use more and more programmable IP cores. Picking up on this trend, this paper described a high-level design tool that greatly simplifies designing SoCs with a Core-Based Design methodology, replacing hand-written RTL blocks with powerful reprogrammable cores. Moving to Core-Based Design yields time-to-market advantages, and the re-programmability greatly increases flexibility and reduces project risks. However, for the Core-Based Design methodology to gain wider acceptance, a few key issues have to be solved. Not only must license models become more flexible, but each instantiated core must fit its performance requirements, offering the right amount of performance without wasting silicon area on unused functionality. The flexibility to easily create programmable cores that fit an algorithm's needs and the ability to efficiently integrate the generated cores into the overall system are among the most prominent issues to address.
References
1. “Design Compiler Technology Backgrounder”, Synopsys Inc. Publication, 2006
2. “Next Generation EDA: Electronics Systems Design Automation”, Ron Collett, Dataquest, 1991
3. “The Evil Doctor”, Mike Gianfagna, Chip Design Magazine, July 28, 2011
4. “IP Reuse Trumps ESL Design Tools”, Gary Smith’s DAC Talk 2011, J. Blyler, August 12, 2011
5. “High-level synthesis – History”, Wikipedia
6. “Behavioral synthesis crossroad”, Richard Goering, EE Times Asia, June 1, 2004
7. “The future according to Freescale: 1,000 embedded devices per person”, R. Colin Johnson, EE Times, June 18, 2008
8. “ARM deployments outgrowing world’s population”, Phil Ling, EE Times, October 16, 2011
9. “Wow! Tensilica licensees have shipped 2 billion IP cores!”, Clive Maxfield, EE Times, October 10, 2012
10. “Gary Smith hails multi-platform design methodology”, Dylan McGrath, EE Times, June 4, 2012
Author
Karl Kaiser is the VP of Engineering at Esencia Technologies, Inc. in San Jose, California. He has 15+ years of experience managing product development groups designing complex portable wireless communication systems and devices. Prior to joining Esencia, Mr. Kaiser held senior management positions at the Silicon-Valley start-ups Altera and RF Micro Devices. Between 1993 and 2000, Mr. Kaiser worked for Philips Semiconductors in the U.S. and Europe in various roles.
Mr. Kaiser holds a combined B.S. & M.S. in Electrical Engineering from the Swiss Federal Institute of Technology, Zurich, Switzerland.