Introduction
As digital systems become increasingly complex, traditional simulation-based verification is straining under the weight of exhaustive verification demands. While simulation remains a fundamental tool for verifying design functionality, it is inherently limited in its ability to explore all possible states and corner cases. This leaves teams exposed to late-cycle bugs, uncertain sign-off decisions, and long debug iterations.
Formal verification complements simulation and is often better suited to proving corner-case correctness. It uses mathematical methods to prove or disprove the correctness of a design with respect to a formal specification, expressed as properties that describe the behavior and intent of the design. These properties are processed by formal engines that use mathematical algorithms to exhaustively explore the entire state space and either prove or disprove that each property holds.
In dynamic simulation, we verify the design by defining stimulus, and our confidence that we have done a thorough job is only as good as that stimulus. Formal verification takes the opposite approach: it works backwards from the assertion (property) targets, traversing the design's state space as shown in Figure 1. Instead of defining what stimulus to drive, we tell the formal tool what stimulus to avoid by defining assumptions (constraints). When formal proves a target, we know that all valid input combinations have been covered and that the design meets its requirements.
Figure 1: State Space Comparison
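As a minimal sketch of this contrast (signal names are illustrative, not taken from any particular design), an assumption rules out stimulus the environment will never drive, while an assertion is the proof target the formal engine works backwards from:

    // Assumption (constraint): tell formal which stimulus to avoid --
    // here, the environment never raises a request during reset.
    asm_no_req_in_reset: assume property (@(posedge clk)
      !rst_n |-> !req);

    // Assertion (proof target): under all remaining legal stimulus,
    // every request is granted within four cycles.
    ast_req_gets_gnt: assert property (@(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt);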
Overcoming State Space Explosion
At the heart of formal verification is the concept of proving that a design satisfies a set of properties across all possible states and inputs. However, this leads to the "state space explosion" problem, where the number of possible states grows exponentially, especially for designs with deep memories or wide buses. Attempting to compute a full (unbounded) proof across this vast state space is often computationally infeasible.
To make formal practical, modern tools can return a bounded proof, which guarantees that the property holds within the first N clock cycles after reset. If all interesting behaviors of a design can be observed within that window, a bounded proof is as meaningful as a full proof. The key is selecting a bound N that exceeds the design's longest transactional latency (e.g., FIFO depth or pipeline stages) to ensure meaningful verification.
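For example (names and numbers are illustrative), if a pipeline responds at most five cycles after a request, the latency property below exhibits all of its interesting behavior within that window, so a bounded proof with N comfortably above PIPE_LATENCY plus the reset sequence is effectively as strong as an unbounded one:

    localparam int PIPE_LATENCY = 5;  // longest request-to-response latency

    // A bound N > PIPE_LATENCY (plus reset cycles) already covers the
    // full transactional window of this check.
    ast_resp_latency: assert property (@(posedge clk) disable iff (!rst_n)
      req |-> ##[1:PIPE_LATENCY] resp);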
Key techniques to accelerate formal convergence include:
- Abstracting deep counters and memories into smaller versions, as shown in Figure 2 (a sketch of this technique follows the figure)
- Using cut-points (logical isolation boundaries) and blackboxes to isolate complex logic
- Modeling post-reset steady states as initial states to skip startup cycles
- Adding meaningful assumptions to prune unrealistic behaviors
Figure 2: State Space before and after Abstraction
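As a minimal sketch of the counter-abstraction idea (module and signal names are hypothetical), the deep counter is first freed from its implementation, typically with a tool-side cut-point, and an assumption then pins it near the value of interest so the engine does not have to step through millions of increment cycles:

    // Bound into the design; relies on a tool-side cut-point freeing timer_cnt.
    module counter_abstraction (input logic clk, rst_n,
                                input logic [31:0] timer_cnt);
      // Constrain the freed counter to sit near expiry right after reset,
      // skipping the millions of cycles a real count-up would need.
      asm_near_expiry: assume property (@(posedge clk) disable iff (!rst_n)
        timer_cnt >= 32'hFFFF_FFF0);
    endmodule

    bind timer_unit counter_abstraction u_cnt_abs (.clk(clk), .rst_n(rst_n), .timer_cnt(cnt_q));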
Common Applications for Formal Verification
Formal verification offers many applications during RTL development. For example, linting and X-propagation checks can be deployed automatically; these apps auto-generate their properties, reducing manual effort. Once the RTL is ready, we can run other formal apps such as connectivity checking and register checking, but these require a spreadsheet describing the intended connectivity or register map so that the app can generate properties and compare the specification against the RTL implementation. The real effort lies in formal property checking, where engineers must write SystemVerilog Assertions (SVAs) and apply abstraction techniques to manage complexity.
Taking a system on chip (SoC) as an example, we can use formal to verify that the connectivity across different IPs is correct from a spreadsheet description. In simulation this is tedious work, requiring dedicated checkers for every connection; formal, on the other hand, can generate properties that prove all connections are correct, as in the sketch below.
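The generated properties are simple point-to-point equivalences. A hand-written equivalent of one spreadsheet row, with a hypothetical hierarchy, would look like this:

    // Spreadsheet row: u_uart0.irq_o drives u_intc.irq_in[3]
    ast_conn_uart_irq: assert property (@(posedge clk)
      soc_top.u_uart0.irq_o == soc_top.u_intc.irq_in[3]);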
Another critical application, for SoCs containing encryption engines and key storage elements, is security: ensuring the key storage is accessible only by the encryption engine. Formal is ideally suited to this job because it can exhaustively verify that the encryption engine is the only block able to access the key storage, a property that is hard to verify thoroughly in simulation.
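A sketch of such a security property (bus and ID signals are illustrative assumptions) asserts that any access reaching the key storage must carry the encryption engine's master ID:

    // Only the AES engine's master ID may ever read or write the key registers.
    ast_key_access_only_aes: assert property (@(posedge clk) disable iff (!rst_n)
      (key_store_sel && (bus_rd || bus_wr)) |-> (bus_master_id == AES_MASTER_ID));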
Measuring Confidence with Formal Coverage
Formal verification can meaningfully complement simulation and assist in achieving code coverage closure during sign-off. Simulation can quickly reach 80-90% code coverage, but then closure hits a plateau. The remaining coverage is often:
- Hard-to-reach corner cases, which a formal coverage app excels at reaching
- Dead code, which simulation cannot detect effectively but a formal coverage app identifies easily
Additional formal coverage metrics for sign-off include:
- Cone of Influence (COI): Measures how much of the RTL each assertion touches
- Proof Core: Identifies the exact logic the formal engine used to prove the assertion
- Mutation Coverage: Injects faults to check if assertions detect them
At Si-Vision, applying formal coverage in parallel with simulation on our Wireless IPs led to ~50% faster code coverage closure.
AI in Formal Verification
Generative AI (Gen AI) refers to a type of artificial intelligence that can create new content, such as text and images, that resembles human-generated content by learning patterns from existing data. There are currently efforts to enhance the experience of formal verification tool users using generative AI.
Gen AI applications under current consideration include:
- Enhancing the debug experience whenever a bug is detected
- Streamlining tool setup and configuration
- Assisting engineers in writing complex SVAs, especially those less familiar with formal syntax
Infineon Technologies has introduced a fully autonomous Gen AI formal verification solution that reads the design specification, generates SVAs, runs the formal analysis using Cadence Jasper, analyzes the counterexamples, and assesses formal coverage, all in a fully automated environment.
Synopsys, meanwhile, is applying AI in VC Formal by learning from previously processed properties and their proofs, and leveraging that data in subsequent formal runs.
While promising, Gen AI faces key challenges in achieving these goals:
- Different users work in different environments, with different RTL and different verification goals, which makes leveraging good training data difficult; the output of a data-driven application is only as good as the data it is fed
- Security constraints, since IP protection cannot be compromised to feed the model with high-quality data
These challenges are reflected in Infineon's reported results, where the overall efficacy of their solution is around 40%.
Best Practices for Adopting Formal in Your Flow
Formal verification is best suited to control logic, but can also be applied to arithmetic, security, connectivity, and even data paths (with appropriate techniques). I recommend starting small by targeting control logic (e.g., arbiters, FSMs). Begin with the automatic formal apps (e.g., linting), then progress to spreadsheet-driven apps (e.g., register checks), and once you are comfortable with formal setup and debugging, adopt formal property checking, where the real effort and debug work take place.
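A natural first hand-written target for such control logic is an arbiter mutual-exclusion check, sketched below with illustrative signal names:

    // At most one grant may be active in any cycle (one-hot or idle).
    ast_gnt_onehot0: assert property (@(posedge clk) disable iff (!rst_n)
      $onehot0(gnt));

    // A grant is only ever given to a requester that actually asked for it.
    ast_gnt_implies_req: assert property (@(posedge clk) disable iff (!rst_n)
      (gnt & ~req) == '0);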
Conclusion
Formal verification is no longer a niche technology reserved for specialists. With the right mindset and tooling, it can be seamlessly integrated into mainstream verification flows, complementing simulation, improving bug detection, and enabling earlier, more confident sign-off. New users should begin with automatic apps (e.g., linting, X-checks) to build confidence, then graduate to property checking with targeted assertions. Whether proving safety properties or catching corner cases missed by simulation, formal is a practical path toward verification closure and time-to-market advantage.
About the author
Karim Waseem is a Digital Verification Team Leader for the Wireless IPs at Si-Vision, with more than seven years of industry experience.
Prior to Si-Vision, he worked at Siemens EDA as a Senior Hardware Modeling Engineer. He completed his master's degree in the Computer and Systems Engineering Department at Ain Shams University, focusing on FPGA architectures for machine learning applications.