Simplifying SoC IO timing closure
Amol Agarwal and Prateek Gupta (Freescale Semiconductor)
EDN (August 05, 2014)
Modern SoCs are getting more complex with every passing day. To offer users multiple connectivity options, designers pack an SoC with multiple IO protocols. Several factors have contributed to the present-day complexity of SoC IO timing closure for embedded systems used in various applications. To list a few:
- High degrees of pin multiplexing on the same die to support multiple applications and multiple packages. There is little or no scope for keeping pads dedicated to a particular IO interface.
- A significant percentage of the floorplan area is consumed by analog blocks and memory stacks. MCUs used in graphics applications, for example, typically require large on-chip SRAM. This results in a considerable distance between the host IP and its ports. In devices with large on-chip flash, the problem is even more acute.
- Because of multiple power domains, some pads are kept powered in low-power modes. If these pads are shared with a high-speed interface, the propagation delay to the pads is increased by isolation-cell delays.
- Different IO interfaces support different slew and load conditions, which adds to the number of timing modes to be analyzed for timing signoff. Each IO peripheral can also be run in multiple configurations.
- Non-linear variation in cell and net delays across process, voltage, and temperature (PVT) corners limits the maximum frequency of an IO interface.
- Pessimism added for noise, jitter, on-chip variation, and clock skew shrinks the data valid eye on the input side and widens the data invalid eye on the output side (an illustrative budget follows this list).
- Synchronous IO protocols with parallel interfaces, such as DDR or SDR DRAM address and data, display data, and debug data from Nexus or Trace, need their ports placed adjacent to each other. Maintaining data-path symmetry across all the bit lanes is a physically challenging task. The problem is compounded if the parallel interface must be brought out at two or more locations on the chip to suit different customer or application needs.
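To make the last two points concrete, here is a simplified input-path budget; the symbols and numbers are illustrative assumptions, not figures from the article. The setup margin at the SoC's capture flop is approximately

\[
t_{margin} \approx T_{clk} - t_{co,max}^{ext} - t_{board} - t_{su}^{int} - t_{jitter} - t_{skew} - t_{OCV}
\]

With $T_{clk}$ = 10 ns, $t_{co,max}^{ext}$ = 5 ns, $t_{board}$ = 1 ns, and $t_{su}^{int}$ = 2 ns, the ideal margin is 2 ns; 0.2 ns of jitter, 0.3 ns of skew, and a 0.5 ns OCV derate then consume half of it. Because each term must be taken at its worst PVT corner, this same sum bounds the maximum frequency at which the interface can be signed off.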
However, the story is not all bad: designers have come up with solutions that reduce IO timing closure complexity to a considerable extent. This article provides a brief description of these solutions:
- Smarter IO controller architectures.
- Better planning of pin multiplexing, not only from a functional perspective but also from a timing perspective.
- Manipulation of setup and hold timing specifications while keeping the data valid window the same for the external device (see the sketch after this list).
- Physical design strategies for mitigating IO timing closure issues.
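As a sketch of how the setup/hold re-budgeting in the third item can work (the numbers are illustrative assumptions, not taken from the article): at an SoC input pin, the external device must keep data valid from $t_{su}$ before to $t_{h}$ after the sampling clock edge, i.e., it must supply a valid window of

\[
t_{window} = t_{su} + t_{h}
\]

If internal closure is failing on, say, hold, the published specification can be shifted from $t_{su}$ = 2 ns / $t_{h}$ = 2 ns to $t_{su}$ = 3 ns / $t_{h}$ = 1 ns (for instance by delaying the data path or advancing the capture clock near the pad): the external device still provides the same 4 ns window, only the sampling point moves within it. The same trade applies on outputs, where the pair $(t_{ov,max}, t_{oh,min})$ can be re-split as long as $(T_{clk} + t_{oh,min}) - t_{ov,max}$, the valid window delivered to the external device, is unchanged.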