Synopsys 'ARMs' SystemVerilog
Clive Maxfield (04/05/2004 8:00 PM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=18900196
Based on the way in which my phone has been ringing off the hook over the last couple of weeks, all I can say is that the folks at Synopsys appear to have been working their little cotton socks off. Whichever way I turn, they seem to be announcing new "stuff" with gusto and abandon. One topic that particularly caught my eye was an announcement regarding Synopsys and ARM collaborating to create and publish a generic reference verification methodology based on SystemVerilog. Of equal interest was a "heads up" relating to recent enhancements enabling the Synopsys synthesis engine, Design Compiler, to address the FPGA space.

A guiding light through the verification maze

When it comes to designing today's huge and hairy digital integrated circuits, verification is a real hot area these days. It already consumes a sizable majority of a device's total development time, and things are only becoming more complex. For example, an increasing amount of verification IP (VIP) is becoming available, but taking VIP from multiple sources and persuading it all to work together can be interesting, to say the least. Then there's the increasing use of assertion-based verification, including assertion checking via simulation, pure (static) formal verification, and dynamic formal verification (in which a simulation run automatically invokes a formal verification engine at strategic points to verify corner cases). And of course there's the process of setting up the verification environment and/or creating testbenches, and the list goes on, and on.

Thus, Synopsys and ARM have teamed up to address this morass. The idea is that they are using their expert knowledge (and they have a LOT of this lying around) to define a standard methodology that everyone can use to their advantage. Perhaps not surprisingly, this methodology is to be based on SystemVerilog. In addition to providing all of the technical features one needs (system-level constructs, assertions, and so forth), the momentum behind SystemVerilog is growing rapidly. All of the big EDA vendors have announced comprehensive SystemVerilog support, and of the roughly 1,500 guys and gals who attended a recent series of seminars hosted by Synopsys, over 60% said in exit polls that they are moving to SystemVerilog.

One key aspect of this methodology is that it does not require the use of ARM design IP or Synopsys tools or VIP; instead, the concepts will be applicable to anyone involved in the verification arena. This is a huge undertaking, because it has to cover the relationship between system architects/designers working in the SystemC domain and the hardware designers in the trenches working in the traditional HDL domain (currently there is a huge duplication of effort across domains). It also has to address things like simulation-based verification in conjunction with static and dynamic formal techniques (once again, there can be a huge duplication of effort here, with separate testbench methodologies, separate assertion methodologies, and so forth).

One aspect of the Synopsys/ARM methodology will address the creation of VIP. The idea here is that it would be a real good thing for end users if we could purchase VIP from multiple vendors and have it all work together without our having to bang our heads against the wall (what a novel concept). Then there's all the different ways one can use a high-end verification environment to define constrained random test generation; this is one of those areas where guidelines provided by experts can dramatically impact the quality of the stimulus you generate (see the sketches below).
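To make the assertion-based verification discussion above a little more concrete, here is a minimal sketch of a SystemVerilog assertion (SVA); the arbiter signal names and the 8-cycle bound are hypothetical, invented purely for illustration, and nothing here is taken from the Synopsys/ARM methodology itself. The point is that one and the same property can be checked during simulation or handed off to a static or dynamic formal engine:

    module bus_arbiter_props (
      input logic clk, rst_n,
      input logic req, gnt
    );
      // Every request must be granted within 1 to 8 clock cycles.
      property p_req_gets_gnt;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:8] gnt;
      endproperty

      // In simulation this fires as a runtime check; a formal engine
      // can attempt to prove or disprove the same property exhaustively.
      assert property (p_req_gets_gnt)
        else $error("req was not granted within 8 cycles");
    endmodule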
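And to give a flavor of constrained random generation, here is a minimal sketch of a SystemVerilog class-based stimulus generator; the packet fields, address range, and burst limit are again hypothetical assumptions for illustration only. The constraint solver picks random values that satisfy every constraint, so the test writer describes what legal stimulus looks like rather than enumerating individual vectors:

    class bus_packet;
      rand bit [31:0] addr;
      rand bit [7:0]  len;
      rand bit        is_write;

      // Keep addresses inside a legal region and bursts short.
      constraint c_legal_addr  { addr inside {[32'h0000_1000 : 32'h0000_FFFF]}; }
      constraint c_short_burst { len > 0; len <= 16; }
    endclass

    module tb;
      initial begin
        bus_packet pkt = new();
        repeat (5) begin
          // randomize() returns 1 on success, 0 if the constraints conflict.
          if (!pkt.randomize()) $error("randomization failed");
          $display("addr=%h len=%0d write=%b", pkt.addr, pkt.len, pkt.is_write);
        end
      end
    endmodule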
In addition to being non-ARM- and non-Synopsys-centric, the guiding principles of this methodology are as follows:

- It is to be unified, scalable, and incrementally adoptable (try saying that quickly).
- It will cover everything from the system level through RTL to the hardware interface.
- It will unify simulation with static and dynamic formal techniques.
- It will address assertions, constrained random generation, and test coverage concepts.
- It will leverage advanced technologies and minimize custom code writing.
- It will promote interoperable VIP.
- Although based on SystemVerilog, it will be interoperable with Verilog, VHDL, and SystemC IP.

Actually, there are more points to ponder here than you can swing a stick at. The real big thing to be aware of is that Synopsys and ARM are going to put all of this information into a forthcoming book entitled "The SystemVerilog Verification Methodology Manual (VMM)." Apparently they will be showing a preview of this at DAC 2004 (the full-up "official" release will follow sometime later).

Bridging the ASIC/FPGA divide

When it comes to the ASIC arena, Synopsys obviously holds the high ground with regards to synthesis technology in the form of its Design Compiler offering (you can't argue with a 90%, give-or-take-a-few-points, market share). When it comes to the FPGA space, however, despite a number of forays over the years (such as FPGA Compiler circa 1992, FPGA Express around 1996, and FPGA Compiler II toward the end of the 1990s), Synopsys has experienced relatively little success. But all this may be about to change with the recent announcement of Design Compiler FPGA.

Of course, there are already a number of FPGA-specific synthesis engines available on the market, so why would one wish to take a look at Design Compiler FPGA? The answer is that around 40% of high-end ASIC designs are prototyped using FPGAs (in 2003, Dataquest's Gary Smith reported 41%, while a Synopsys survey of its customers showed 42%). Creating an FPGA-based prototype has a number of advantages, such as facilitating early software development, accelerating verification, and allowing testing in the field to avoid costly ASIC re-spins.

But of course, there is always a downside. If you are part of an ASIC design team that is prototyping using FPGAs, then your current flow is probably based largely on pain and suffering. In one typical scenario, you commence by creating your RTL representation, which, try as you might, often requires an FPGA-centric micro-architecture, contains FPGA-specific instantiations, and uses FPGA-based IP. Coupled with this, you create your FPGA-specific constraints and scripts, and then you get to play around with clock enabling and suchlike (a sketch of the clock enabling/gating issue follows below). Once you've finally got your design to work in the FPGA(s), you almost invariably have to modify your RTL, re-write your constraints and scripts, change your clock enabling/gating implementation, swap out your design IP, and so it goes. The bottom line is that using an FPGA to prototype an ASIC is often like designing two completely different devices. So you are in for a time-consuming, resource-intensive effort, with the added disadvantage that, at the end of the day, it's difficult to be 100% confident that the two designs are functionally identical.
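As a concrete (and purely hypothetical) illustration of the clock enabling/gating issue, an ASIC flow typically gates the clock to save power, while the FPGA prototype of the very same register is usually re-coded with a synchronous clock enable that maps onto the dedicated enable pin of an FPGA flip-flop. A tool that performs this conversion automatically removes one source of divergence between the two designs:

    module clock_style_examples (
      input  logic clk, en, d,
      output logic q_asic, q_fpga
    );
      logic gated_clk;

      // ASIC style: the clock itself is gated. (Real ASIC flows use an
      // integrated clock-gating cell to avoid glitches; this AND gate
      // is purely conceptual.)
      assign gated_clk = clk & en;

      always_ff @(posedge gated_clk)
        q_asic <= d;

      // FPGA style: the clock is left untouched and the register is
      // qualified with a synchronous enable instead.
      always_ff @(posedge clk)
        if (en)
          q_fpga <= d;
    endmodule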
So one really cool thing about ASIC and FPGA design flows based on Design Compiler and Design Compiler FPGA, respectively, is that they use the same RTL and design IP; they use the same constraints and scripts; and any clocking scheme modifications are performed automatically (while you aren't looking).

But wait, there's more, because Design Compiler FPGA has some additional tricks up its metaphorical sleeves. Generally speaking, a synthesis tool has a grab bag of algorithms to play with; let's call them algorithms A, B, C, and D. Depending on the type of design, some of these algorithms will do better and some will do worse, but the synthesis tool itself doesn't really understand this concept. Instead, for every design it trudges along performing algorithm A, followed by B, then C, and so forth. By comparison, Design Compiler FPGA uses something called "Adaptive Optimization Technology." This means that it first analyzes the design to see which algorithms are most appropriate, then it dynamically "tunes" these algorithms for this specific design, and then it executes the algorithms in whatever order it deems to be most efficacious. The end result is a reduction in synthesis run times (sometimes as much as 50%) and better design performance measured in terms of maximum frequency (typical performance improvements are on the order of 15%, but one user cites an improvement of 23%, which is more than interesting whichever way you look at it).

When I spoke to them, the folks at Synopsys said that they had 40+ pre-launch customers actually using Design Compiler FPGA, and that these customers already had 20+ completed prototypes, so I think we can assume that this technology is more than smoke and mirrors. (Also, I think we can expect to see these adaptive optimization concepts leap onto center stage in the ASIC arena with the next release of Design Compiler, but don't tell anyone I told you.)

So things really seem to be buzzing at Synopsys at the moment, and I have no hesitation in awarding them an official "Cool Beans" from me. Until next time, have a good one!

Clive (Max) Maxfield is president of Techbites Interactive, a marketing consultancy firm specializing in high-tech. Author of Bebop to the Boolean Boogie (An Unconventional Guide to Electronics) and co-author of EDA: Where Electronics Begins, Max was once referred to as a "semiconductor design expert" by someone famous who wasn't prompted, coerced, or remunerated in any way.