Issues lurk behind formal equivalence checking
By Ron Wilson, EE Times
February 13, 2003 (2:57 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030213S0056
SAN MATEO, Calif. - Formal equivalence checking is one of those silver bullets that come along every so often in design flow evolution. It gives you the ability to transform a design from one level of abstraction to another and then formally prove that the new representation is logically equivalent to the old one.

At least that's the theory. In practice, the technology got off to a slower start. Initial tools were oriented more toward the formal-logic specialists who devised them than toward engineers trying to get something done. And even today, with significantly improved user interfaces and much wider acceptance of formal equivalence checking as part of standard design flows, the tools are still not trivial to use. If you don't understand how the algorithms work at a basic level, or if you aren't clear on the structure of the data sets you are feeding into the tool, the tool is likely to stall or break.

A lot of the problem is in the way formal equivalence checking operates. As authors Robert Vallelunga and Osman Eralp of Synopsys point out, to use equivalence checking successfully you can't treat it as a black box: You have to know a little bit about how it works.

Symbolic logic

The whole idea behind formal equivalence tests is an elegant one. Instead of trying to prove by exhaustive testing that two expressions are equivalent, a hopeless task for any reasonably large circuit, you can let machine reasoning prove or disprove equivalence by symbolic means. Symbolic logic works with expressions that represent the network over the whole range of inputs, so it is by definition exhaustive: once you've proven equivalence, you've proven it for every input and output.

But the algorithms are complex enough that they do not scale to very large networks, so it is necessary to segment the design in some way to bound the problem. And therein lies the trick.

The usual segmentation scheme is to break a network into what are called logic cones. The Synopsys authors define a cone nicely in the article as a network of logic bounded on all sides by registers, ports or black boxes. The term comes from the tendency to select such boundaries so that the network has multiple inputs but only one output; hence, it is cone-shaped.

The formal tools are just great at proving that one logic cone is equivalent to another. The problem comes in matching up the cones in one input file with the cones in the other. If you can't figure out which cones are supposed to be equivalent, the tool can't help you.

Normally this is done by matching up signal names, which the formal tools do automatically. But some tools, notably those for synthesis, alter signal names. In addition, some tools make alterations to the network that change the borders of cones, so that there is no direct correspondence between cones in one file and cones in the other. This forces the formal tool to fall back on a more tedious process of matching cones by the topology of the network, a time-consuming and not necessarily successful process that may require human intervention.

The authors suggest another approach, now being implemented by Synopsys, in which the tools that change the netlist annotate how names or structures have changed. That gives the formal equivalence tool a program, as it were, for matching up the cones, significantly improving both speed and results. The same technique could be implemented by a design team, one suspects, with equally useful results.
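For readers who want to see the symbolic idea in concrete form, here is a minimal sketch in Python of the canonical-form trick that makes such proofs possible. It uses reduced ordered binary decision diagrams (ROBDDs), one classic representation behind equivalence checkers; because the node table is hash-consed, the structure is canonical, and two cones compute the same function exactly when they reduce to the same node. The class, the operators and the example cones are illustrative assumptions, not any vendor's implementation.

```python
class BDD:
    """Minimal hash-consed ROBDD manager over a fixed variable order."""
    def __init__(self, var_order):
        self.rank = {v: i for i, v in enumerate(var_order)}
        self.unique = {}                  # (var, lo, hi) -> node id
        self.triple = {0: None, 1: None}  # node id -> (var, lo, hi); 0/1 are terminals
        self.memo = {}

    def var(self, name):
        return self._mk(name, 0, 1)

    def _mk(self, v, lo, hi):
        if lo == hi:                      # redundant test: both branches agree
            return lo
        key = (v, lo, hi)
        if key not in self.unique:
            nid = len(self.triple)        # terminals 0 and 1 are reserved
            self.unique[key] = nid
            self.triple[nid] = key
        return self.unique[key]

    def _cofactors(self, f, v):
        if f > 1 and self.triple[f][0] == v:
            _, lo, hi = self.triple[f]
            return lo, hi
        return f, f                       # f does not depend on v

    def apply(self, op, f, g):
        """Combine two BDDs with a boolean operator via Shannon expansion."""
        key = (op, f, g)
        if key in self.memo:
            return self.memo[key]
        if f in (0, 1) and g in (0, 1):
            res = op(f, g)
        else:
            fv = self.triple[f][0] if f > 1 else None
            gv = self.triple[g][0] if g > 1 else None
            # split on the variable that comes earliest in the order
            if gv is None or (fv is not None and self.rank[fv] <= self.rank[gv]):
                v = fv
            else:
                v = gv
            f0, f1 = self._cofactors(f, v)
            g0, g1 = self._cofactors(g, v)
            res = self._mk(v, self.apply(op, f0, g0), self.apply(op, f1, g1))
        self.memo[key] = res
        return res

AND = lambda a, b: a & b
OR  = lambda a, b: a | b

bdd = BDD(["a", "b", "c"])
a, b, c = bdd.var("a"), bdd.var("b"), bdd.var("c")

# RTL cone:         f = a & (b | c)
# Synthesized cone: g = (a & b) | (a & c)   -- same function, distributed form
f = bdd.apply(AND, a, bdd.apply(OR, b, c))
g = bdd.apply(OR, bdd.apply(AND, a, b), bdd.apply(AND, a, c))

print("equivalent:", f == g)  # canonical form: same node id <=> same function
```

No input vector is ever enumerated: the proof holds over all 2^n assignments because the diagram itself represents the full function, which is the sense in which the article calls symbolic reasoning "by definition exhaustive."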
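The cone-extraction step the authors describe can be sketched the same way. Assuming a toy netlist format invented here for illustration (a dict mapping each signal to its driver kind and fan-in list), a backward traversal from a register or output port collects the combinational logic and stops at the boundaries the article names: registers, ports and black boxes.

```python
from collections import deque

CONE_BOUNDARIES = {"register", "port", "blackbox"}

def extract_cone(netlist, output_signal):
    """Collect the combinational logic feeding one cone output."""
    gates, inputs, seen = set(), set(), set()
    work = deque(netlist[output_signal][1])   # start at the output's fan-in
    while work:
        sig = work.popleft()
        if sig in seen:
            continue
        seen.add(sig)
        kind, fanin = netlist.get(sig, ("port", []))
        if kind in CONE_BOUNDARIES:
            inputs.add(sig)                   # stop: this edge bounds the cone
        else:
            gates.add(sig)
            work.extend(fanin)                # keep walking through the logic
    return {"output": output_signal, "inputs": inputs, "gates": gates}

# Hypothetical netlist: q_reg's D input is fed by AND/OR logic from ports.
netlist = {
    "q_reg": ("register", ["n2"]),
    "n2":    ("and",      ["n1", "en"]),
    "n1":    ("or",       ["a", "b"]),
    "a":     ("port", []), "b": ("port", []), "en": ("port", []),
}
print(extract_cone(netlist, "q_reg"))
# -> inputs {'a', 'b', 'en'}, gates {'n1', 'n2'}, one output: cone-shaped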
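Finally, the matching problem itself. Below is a minimal sketch, assuming a hypothetical rename map of the kind the authors propose synthesis tools could emit: cones pair by signal name first, then through the annotations, and whatever is left over would fall to the slow topological comparison the article warns about.

```python
def match_cones(rtl_cones, gate_cones, renames=None):
    """Pair cones by name, then by tool-supplied rename annotations."""
    renames = renames or {}
    matched, unmatched = {}, []
    for name in rtl_cones:
        if name in gate_cones:                  # direct name match
            matched[name] = name
        elif renames.get(name) in gate_cones:   # follow the tool's annotation
            matched[name] = renames[name]
        else:
            unmatched.append(name)              # left for topological matching
    return matched, unmatched

# Hypothetical example: synthesis rewrote "state_0" to "state_reg_0_".
rtl   = {"q_reg": ..., "state_0": ...}
gates = {"q_reg": ..., "state_reg_0_": ...}
pairs, leftovers = match_cones(rtl, gates, renames={"state_0": "state_reg_0_"})
print(pairs)      # {'q_reg': 'q_reg', 'state_0': 'state_reg_0_'}
print(leftovers)  # [] -- nothing falls through to topological matching
```

The annotation map plays the role of the "program" the authors describe: it turns a fuzzy structural search into a direct lookup, which is where the speed and quality gains come from.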