Infrastructure ASICs drive high-performance memory decisions
By Robert Landers, Design Manager, Internet Infrastructure, Bryan Sheffield, Design Manager, ASP Memories, Texas Instruments, Dallas, Texas, EE Times
May 1, 2003 (3:37 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030428S0103
The current generation of networking infrastructure ASICs at the 130-nm process node provides a set of memory challenges that go well beyond the processor-centric designs of the past. Large bit counts, extremely high bandwidth, and system requirements that hinder traditional approaches are driving providers in new directions.
Networking applications have memory requirements that do not always align with typical CPU needs. For example, large multi-port RAMs, well beyond the 5-10 ports of a typical microprocessor register file, are a common feature of switches. These large numbers of ports (20, 30, 100 or more) are assembled into a custom design using building blocks from the ASIC designer's library. These standard blocks include very fast or very high-capacity single-port memories; single-port content-addressable memories (CAMs); or small, multi-port blocks of 2-5 ports.
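As an illustration of one common building-block approach (not a description of any particular vendor's library), a RAM with one write port and many read ports can be assembled by replicating a single-port block once per read port and broadcasting every write to all copies. A minimal behavioral sketch in Python:

```python
class SinglePortRAM:
    """Behavioral model of a single-port RAM building block."""
    def __init__(self, depth):
        self.mem = [0] * depth

    def write(self, addr, data):
        self.mem[addr] = data

    def read(self, addr):
        return self.mem[addr]


class MultiReadRAM:
    """One write port, N read ports, built from single-port blocks:
    every write is broadcast to all copies, and each read port has
    its own copy, so all reads can proceed in the same cycle."""
    def __init__(self, depth, read_ports):
        self.banks = [SinglePortRAM(depth) for _ in range(read_ports)]

    def write(self, addr, data):
        for bank in self.banks:
            bank.write(addr, data)

    def read(self, port, addr):
        return self.banks[port].read(addr)
```

Replication trades area for read ports; supporting multiple write ports takes further tricks (banking, time multiplexing), which is why very high port counts end up as custom assemblies rather than single compiled blocks.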
The sheer magnitude of the bandwidth required for large switches can also be a significant challenge. Packet switches providing hundreds of gigabits per second of throughput, coupled with the excess bandwidth needed for port arbitration, raise the bar even further. A high-port-count OC48 TDM (time-division multiplexing) switch can drive individual RAM bandwidth up to 300 Gbps, the equivalent of 100 high-performance PC memory modules operating in parallel. This can result in the use of very wide (512-1024 bits) or very fast (>500 MHz) RAMs, or both.
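As a sanity check on figures like these, peak RAM bandwidth is just port width times clock rate. The sketch below shows that a 512-bit RAM at roughly 600 MHz, or a 1024-bit RAM at 300 MHz, lands near the 300-Gbps figure cited above (the specific width/frequency pairings are illustrative, taken from the ranges in the text):

```python
def ram_bandwidth_gbps(width_bits, freq_mhz):
    """Peak bandwidth of a RAM port: one full-width access per cycle."""
    return width_bits * freq_mhz * 1e6 / 1e9

# Two configurations in the wide/fast range discussed above:
print(ram_bandwidth_gbps(512, 600))   # 307.2 Gbps
print(ram_bandwidth_gbps(1024, 300))  # 307.2 Gbps
```

The same arithmetic shows why both knobs get turned at once: doubling the width halves the frequency needed to hit a bandwidth target.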
To meet these requirements with a custom memory design, a number of issues must be overcome in silicon, including supply noise management of hundreds of high-activity RAMs, proactive crosstalk management, and achieving testable, high-yield wafers using aggressive design rules.
Managing the supply noise created by large arrays of RAMs switching within a tight timing window (<500 picoseconds) is often solved with nearby decoupling capacitors. Using decoupling capacitors to manage noise is well understood; however, a successful ASIC design flow requires semi-automatic insertion of large numbers of variously sized capacitors in a way that makes the best use of the available free silicon.
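First-order decap sizing is simple charge accounting: the charge drawn during the switching window must come from local capacitance without the supply drooping past its budget, so C = I·Δt/ΔV. A minimal sketch (the current and droop figures are hypothetical; only the 500-ps window comes from the text above):

```python
def decap_farads(switching_current_a, window_s, allowed_droop_v):
    """First-order decap sizing: the charge I*dt drawn during the
    switching window must be supplied locally, so C = I*dt/dV."""
    return switching_current_a * window_s / allowed_droop_v

# Hypothetical example: 2 A of simultaneous RAM switching current
# inside a 500-ps window, with 50 mV of allowed supply droop:
c = decap_farads(2.0, 500e-12, 0.05)
print(c)  # 2e-08 F, i.e. 20 nF of nearby decoupling
```

With hundreds of high-activity RAMs, totals on this order explain why capacitor insertion has to be automated rather than hand-placed.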
Driving wide buses
Capacitive and inductive coupling are real concerns when driving very-wide (1000+)-bit buses over relatively long distances (many mm). Ironically, these issues can be made worse by the highly structured, switched-core architectures used to achieve the most efficient layout and adequate timing margins. To this end, logic structures must proactively manage crosstalk-induced glitching and delay variation to avoid multiple design loops. This can be accomplished by increasing the space between signal lines, inserting signal repeaters, increasing the number of inductive return lines and adjusting the relative timing of bits to avoid glitching.
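The delay-variation half of this problem is often reasoned about with a simple Miller approximation: the capacitance a victim line sees depends on whether its neighbor is quiet, switching the same way, or switching the opposite way. A toy model (the 0/1/2 Miller factors are the textbook first-order values, and the femtofarad figures are hypothetical, not extracted numbers):

```python
def effective_cap(c_ground, c_couple, aggressor="quiet"):
    """First-order Miller approximation for a victim wire: a quiet
    neighbor contributes its full coupling capacitance, one switching
    in the same direction contributes none, and one switching in the
    opposite direction counts double."""
    factor = {"quiet": 1.0, "same": 0.0, "opposite": 2.0}[aggressor]
    return c_ground + factor * c_couple

# With a hypothetical 100 fF to ground and 50 fF of coupling, the
# victim's load swings 2:1 with the neighbor's switching direction:
worst = effective_cap(100e-15, 50e-15, "opposite")  # 200 fF
best = effective_cap(100e-15, 50e-15, "same")       # 100 fF
```

Since wire delay scales roughly with this load, the countermeasures listed above (spacing, repeaters, return lines, staggered bit timing) all work by reducing or pinning down the coupling term.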
Once a customized development path is chosen, the ASIC provider also comes face-to-face with challenges inherent to the 130-nm node. Increased process variation from shrinking geometries can hinder reaching the highest possible speed and making full use of the available silicon. In addition, design tools for modeling statistical variation and for accurate parasitic extraction of deep-submicron processes still lag.
After weighing these challenges, the ASIC provider must decide whether to address them in-house or outsource some or all of the memory development. This is one of the most critical decisions made during the ASIC design cycle and is ultimately determined by the capabilities of the ASIC supplier. If no core competency in memory design exists in-house, the memory design is left entirely in the hands of an outside company.
Among other downsides to this approach, the third party's profit margin is paid by the ASIC provider, and the usual back-and-forth between two different engineering companies can negatively impact the design schedule. Another option is to do all memory development in-house. This in-house model can pay large dividends if the number of ongoing designs is sufficient to justify the engineering resources needed to thoroughly staff and maintain a team. Not having any outsourcing relationship, however, can leave an ASIC team scrambling for help if too many projects need attention at once or an unusually complex problem comes up.
The third option is a hybrid that provides the greatest flexibility. In this model, the ASIC supplier has an established core competency in memory design and a library it can leverage to do complete custom designs, supplemented by a strong relationship with a third-party design company that can assist with a project as needed. Additional benefits accrue when the supplier owns its own fabs and can work closely with them to understand how process technology will affect its designs.
The ASIC market continues to be one of the most challenging as design starts have steadily decreased. To compete, providers are constantly searching for the quickest, most profitable path to solve the embedded memory equation. Only by starting with the customer's requirements and then applying a strong in-house library and engineering team, supplemented by a close third party as needed, can the memory challenges of the largest and fastest ASICs be addressed.