Scaling EDA in the Cloud
Breakfast Bytes - Paul McLellan, Mar. 28, 2019
Last year at DAC, we announced Cadence Cloud (for details see my post cleverly titled Cadence Cloud). Of course, one aspect of the cloud is that it allows you to have as much of everything as you need: if you want to run 100 SystemVerilog simulations or characterize a library at dozens of corners, you can bring a lot of compute power to bear fairly simply. But the real promise of the cloud is to bring a lot of compute power to bear on a single big task. Writing EDA tools for this environment is not straightforward. In particular, you can't usually just take the code written for a single workstation and immediately have it scale up to lots of servers. There are a number of reasons for this.
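As a rough illustration of the first, embarrassingly parallel case, the sketch below fans out independent jobs (one per corner) across local worker processes. The corner names, the run_job helper, and the placeholder command line are all hypothetical; a real cloud flow would submit these jobs to a scheduler rather than a local process pool.

```python
# Illustrative sketch only: independent jobs (e.g. one characterization
# run per corner) scale out trivially because they share no state.
from concurrent.futures import ProcessPoolExecutor
import subprocess

CORNERS = ["ss_0p72v_125c", "tt_0p80v_25c", "ff_0p88v_m40c"]  # assumed corner names

def run_job(corner: str) -> int:
    """Run one independent job; return its exit code."""
    # Placeholder command line; substitute the actual tool invocation.
    return subprocess.call(["echo", f"characterize {corner}"])

if __name__ == "__main__":
    # Each corner is independent, so adding workers adds throughput.
    with ProcessPoolExecutor(max_workers=len(CORNERS)) as pool:
        results = list(pool.map(run_job, CORNERS))
    print(f"{results.count(0)}/{len(results)} jobs succeeded")
```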
Some tasks can be scaled fairly easily. For example, consider design rule checking (DRC). There are a number of obvious ways to use lots of servers. One is to check different rules on different servers, since many rules (is there a metal0 spacing violation?) are independent of others (is there a metal1 spacing violation?). Another is to divide the chip up into different tiles and check them independently. This requires a lot of care when handling the edges of the tiles where they overlap, but the fact that design rules are inherently local means that the overlap doesn't need to be all that large. Circuit extraction is similar: we worry about the capacitance between a conductor and other conductors in the vicinity, but not about a conductor halfway across the chip.
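To make the tiling idea concrete, here is a minimal sketch, not any production DRC engine, of checking a single minimum-spacing rule tile by tile: each tile is expanded by a small halo so violations that straddle a tile boundary are still caught, and duplicates found in overlapping halos collapse in the final set. The Rect representation, the check_tile helper, and the numeric values are all illustrative assumptions.

```python
# Minimal tile-based spacing check: shapes are axis-aligned rectangles,
# each tile is checked independently over a halo-expanded window.
from itertools import combinations

Rect = tuple  # (x0, y0, x1, y1)

def spacing(a: Rect, b: Rect) -> float:
    """Edge-to-edge distance between two rectangles (0 if they touch/overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def check_tile(shapes, tile: Rect, halo: float, min_space: float):
    """Check the spacing rule inside one tile expanded by the halo."""
    x0, y0, x1, y1 = tile
    window = (x0 - halo, y0 - halo, x1 + halo, y1 + halo)
    local = [s for s in shapes if not (s[2] < window[0] or s[0] > window[2]
                                       or s[3] < window[1] or s[1] > window[3])]
    return {(a, b) for a, b in combinations(local, 2)
            if 0 < spacing(a, b) < min_space}

# Because design rules are local, a halo of roughly the rule distance is
# enough; each tile could run on a different server, and the per-tile
# result sets are simply unioned (deduplicating overlap finds).
shapes = [(0, 0, 10, 2), (10.5, 0, 20, 2), (30, 0, 40, 2)]
tiles = [(0, 0, 25, 5), (25, 0, 50, 5)]
violations = set().union(*(check_tile(shapes, t, halo=2.0, min_space=1.0)
                           for t in tiles))
print(violations)  # the first two shapes are only 0.5 apart
```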