Embedded memories multiply in SoCs
By Frank R. Ramsay, Director of Technology and Business Strategy, System LSI Group, Toshiba America Electronic Components, Inc. (TAEC), San Jose, Calif., EE Times
April 28, 2003 (4:24 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030428S0089
Developments in embedded memory technology have made large DRAMs and SRAMs commonplace in today's SoCs. Tradeoffs between large and small memories make all sizes practical, enabling SoCs to resemble board-level systems more than ever. The latest embedded memories are even bringing the added benefits of low-power operation to handheld systems.
Large embedded memories give an SoC benefits such as improved bandwidth and considerable power savings that can only be achieved through the use of embedded technologies. The practicality and success of including embedded DRAM and/or large SRAM blocks in your SoC depends mainly on manufacturability. Highly manufacturable memory structures resolve issues of cost, time to market and risk that affect all SoC designs.
SRAMs have long been an SoC mainstay, but both the size of the SRAM blocks and the number of them in a single SoC have begun to explode in the past year or so. It is not uncommon to see chips that have as many as 150 SRAM blocks, with some cores ranging from 1 Mbit to 8 Mbits.
At the same time, improvements in DRAM manufacturability have caused a boom in the use of large DRAM blocks. Even ASICs for commodity products such as game machines and camcorders include DRAM cores. In Toshiba's case, the embedded-DRAM systems are frequently the early adopters of new-generation fabrication technologies. The number and size of the embedded DRAM cores in an SoC have tended to increase as chips move down the technology curve. At 180 nm, system ASICs typically used two blocks of DRAM and up to about 64 Mbits of total memory capacity. Now, at 130 and 90 nm, typical systems use four or more blocks with upwards of 120 Mbits of DRAM.
From a fabrication point of view, large memory blocks have to be just as manufacturable as small ones (more on manufacturability later). Tradeoffs between large and small memories can have some effect on performance and die size, however. With small memories, you pay a somewhat higher area penalty for overhead circuitry such as sense amps. On the other hand, large memories have beneficial overhead in the form of redundancy to ensure manufacturability. The tradeoffs are not simple, so if you have a choice between a few large memory blocks or many small ones, consult your semiconductor vendor's application engineers.
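To make the fixed-overhead side of that tradeoff concrete, the minimal sketch below compares the area of one large block against the same capacity split into many small blocks. The per-bit cell area and per-block overhead figures are illustrative assumptions, not process data, and the model ignores the redundancy benefit of large blocks; real numbers come from your vendor's memory IP.

    #include <stdio.h>

    /* Hypothetical figures for illustration only: an assumed per-bit cell
     * area and an assumed fixed per-block overhead (sense amps, decoders,
     * BIST hooks). Real values depend on the process and the memory IP. */
    #define CELL_AREA_UM2      0.5     /* assumed area per bit, um^2        */
    #define BLOCK_OVERHEAD_UM2 50000.0 /* assumed overhead per block, um^2  */

    static double total_area(int blocks, double mbits_per_block)
    {
        double bits = mbits_per_block * 1024.0 * 1024.0;
        return blocks * (bits * CELL_AREA_UM2 + BLOCK_OVERHEAD_UM2);
    }

    int main(void)
    {
        /* Same 8 Mbits of total capacity, partitioned two ways. */
        double few_large  = total_area(1, 8.0); /* one 8-Mbit block    */
        double many_small = total_area(8, 1.0); /* eight 1-Mbit blocks */

        printf("1 x 8 Mbit : %.0f um^2\n", few_large);
        printf("8 x 1 Mbit : %.0f um^2 (area penalty %.1f%%)\n",
               many_small,
               100.0 * (many_small - few_large) / few_large);
        return 0;
    }

With these assumed numbers the many-small partition pays roughly an 8 percent area penalty; the point is only that per-block overhead accumulates, not the specific figure.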
Even before reaching fabrication, large blocks have to work well with back-end layout requirements. The ability to route over the top of big memory blocks has now made them more friendly to the layout environment.
Test schemes have also become friendlier with features such as common BIST blocks. Today you can choose among a variety of test schemes for embedded memories, some requiring wafer-level memory testers and some relying heavily on BIST structures. Choosing the best test scheme for a given design requires detailed discussions with your silicon vendor.
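For readers unfamiliar with what a memory BIST engine actually exercises, the sketch below models a March C- pass in software over a small array. March C- is one commonly used memory-test algorithm, chosen here as a representative example; it is not a description of any particular vendor's BIST hardware, which implements this kind of sequence in dedicated logic at speed.

    #include <stdint.h>
    #include <stdio.h>

    /* Software model of a March C- test pass over a small word array,
     * the style of algorithm a hardware memory BIST engine typically runs.
     * Returns the first failing word index, or -1 if the memory passes. */
    #define WORDS 1024

    static uint32_t mem[WORDS];

    static int march_c_minus(void)
    {
        int i;
        for (i = 0; i < WORDS; i++)            /* up: write 0            */
            mem[i] = 0x00000000u;
        for (i = 0; i < WORDS; i++) {          /* up: read 0, write 1    */
            if (mem[i] != 0x00000000u) return i;
            mem[i] = 0xFFFFFFFFu;
        }
        for (i = 0; i < WORDS; i++) {          /* up: read 1, write 0    */
            if (mem[i] != 0xFFFFFFFFu) return i;
            mem[i] = 0x00000000u;
        }
        for (i = WORDS - 1; i >= 0; i--) {     /* down: read 0, write 1  */
            if (mem[i] != 0x00000000u) return i;
            mem[i] = 0xFFFFFFFFu;
        }
        for (i = WORDS - 1; i >= 0; i--) {     /* down: read 1, write 0  */
            if (mem[i] != 0xFFFFFFFFu) return i;
            mem[i] = 0x00000000u;
        }
        for (i = WORDS - 1; i >= 0; i--)       /* down: read 0           */
            if (mem[i] != 0x00000000u) return i;
        return -1;
    }

    int main(void)
    {
        int fail = march_c_minus();
        if (fail < 0)
            printf("memory passed\n");
        else
            printf("fail at word %d\n", fail);
        return 0;
    }

In silicon, the failing addresses collected by such a pass feed the redundancy-repair logic mentioned earlier, which is one reason BIST and repairable memory blocks go hand in hand.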
Another way in which large DRAM blocks have become more friendly involves their power consumption. The transition from the 180-nm generation to 130-nm is changing the power picture dramatically. At 130-nm, a DRAM in page write mode consumes just 34 percent of the power required at 180-nm. Standby power has dropped to 24 percent of the 180-nm value, and standby power-down has plunged to 12 percent of the 180-nm value. These power reductions have helped push large embedded DRAMs into SoCs for camcorders and cell phones.
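A short worked example makes those scaling factors concrete. The 180-nm baseline power numbers below are hypothetical placeholders chosen only to show the arithmetic; only the 34/24/12 percent ratios come from the figures above.

    #include <stdio.h>

    /* Scaling factors quoted above for 180-nm -> 130-nm embedded DRAM. */
    #define PAGE_WRITE_SCALE 0.34
    #define STANDBY_SCALE    0.24
    #define STANDBY_PD_SCALE 0.12

    int main(void)
    {
        double page_write_180 = 100.0; /* assumed mW at 180 nm */
        double standby_180    = 10.0;  /* assumed mW at 180 nm */
        double standby_pd_180 = 1.0;   /* assumed mW at 180 nm */

        printf("page write: %.1f mW -> %.1f mW\n",
               page_write_180, page_write_180 * PAGE_WRITE_SCALE);
        printf("standby:    %.2f mW -> %.2f mW\n",
               standby_180, standby_180 * STANDBY_SCALE);
        printf("standby PD: %.3f mW -> %.3f mW\n",
               standby_pd_180, standby_pd_180 * STANDBY_PD_SCALE);
        return 0;
    }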
The boom in embedded memories is all due to the success of new integrated process technologies that have been developed from the beginning with large memories in mind. The process steps have been refined generation after generation to ensure high yields on every chip that contains SRAM or DRAM. Only with this level of predictability can an ASIC vendor offer embedded memories at a cost-effective price.
For this reason, you can expect SRAM and embedded-trench-based DRAM to remain the prevalent SoC memories going forward. In addition to reducing costs, the dependability of these memories minimizes your time to market and design risk. To get these benefits, you need to use your process vendor's memory IP, because the fab process has been tuned to work with specific memory structures.
For nonvolatile requirements, Toshiba has found that combining an off-the-shelf flash die with an SoC in a stacked-die package works well and costs less than integrating flash into the fabrication process. Otherwise, SRAMs are ideal for small, fast SoC memories, and embedded-trench-based DRAMs handle needs for large memory blocks.