Designer finds teamwork boosts SoC yields
LAS VEGAS -- Only close teamwork between a design team, a foundry and a provider of intellectual property can achieve acceptable yields in memory-heavy system-on-chip designs, according to representatives from a chip designer, a foundry and a memory IP vendor in a Wednesday (June 20) panel at the 38th Design Automation Conference.

The problem of achieving acceptable SoC yields is subtle and involves more than die size, according to Rajesh Verma, director of IC development at graphics chip maker ATI Technologies Inc. (Unionville, Ontario). "In every generation of graphics chips, the percentage of die area devoted to memory increases, and the number of instances of memory increases," Verma said. "In current designs, we are approaching 40 percent of the total die being memory cells, and 140 separate instances of memory arrays."

Another panel member, Narbeh Derhacobian, director of device technology engineering at memory IP vendor Virage Logic, sized up the problem. "Memory is a defect magnet," Derhacobian said. "Inside a memory array you push the design rules and the electrical parameters of the process as far as you safely can. And memory is inherently sensitive: the level of leakage current that would be perfectly acceptable in a logic gate will lead to failures if it occurs in the pass gates of an SRAM."

Derhacobian further pointed out that in most designs, a single out-of-bounds transistor among the 20 million devoted to SRAM on an ambitious SoC would doom the die to failure. Hence, the larger the area of memory on an SoC, the lower the yield.
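That arithmetic follows directly from standard die-yield statistics. As a minimal sketch, assuming the classic Poisson yield model Y = exp(-D0 * A) and treating memory area as several times more defect-prone than logic (the defect density, die area and sensitivity factor below are illustrative assumptions, not figures from the panel), the trend looks like this:

```python
import math

def poisson_yield(defect_density_per_cm2: float, area_cm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-D0 * A).

    One killer defect anywhere in the area fails the whole die,
    mirroring the panel's point that a single out-of-bounds
    transistor among millions of SRAM devices dooms the chip.
    """
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Illustrative numbers (assumed, not panel-supplied): a 1 cm^2 die,
# an effective logic defect density of 0.5 defects/cm^2, and memory
# area treated as 3x more defect-prone because it pushes the design
# rules harder than logic does.
DIE_AREA = 1.0           # cm^2, assumed
LOGIC_D0 = 0.5           # defects/cm^2, assumed
MEM_SENSITIVITY = 3.0    # memory defect-density multiplier, assumed

for mem_fraction in (0.1, 0.2, 0.4, 0.6):
    effective_d0 = LOGIC_D0 * ((1 - mem_fraction) + MEM_SENSITIVITY * mem_fraction)
    y = poisson_yield(effective_d0, DIE_AREA)
    print(f"memory = {mem_fraction:4.0%} of die -> modeled yield ~ {y:5.1%}")
```

With these assumed numbers, growing memory from 10 percent to 60 percent of the die drops modeled yield from roughly 55 percent to 33 percent, reproducing the qualitative trend the panelists described.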
No silver bullet

ATI's experience provides an example of the potential difficulties. It is almost an axiom of SoC architecture that memory increases as a percentage of die area as SoCs become more complex, and Virage provided extrapolated data suggesting that in about three years SoC yields will fall to single digits. Nor is there a silver-bullet solution to the problem, according to the panelists.

Verma reviewed an actual ATI design in Taiwan Semiconductor Manufacturing Co. Ltd.'s 0.18-micron process in which initial yields were completely unacceptable. No obvious problem, such as a logic error or design-rule violation, could be found. Working together, ATI, Virage and TSMC convened a team to isolate the problem, which was eventually traced to excessive leakage in a non-six-transistor memory circuit. No design rules had been violated, but a combination of circuit design, layout style and process sensitivities had come together to create an unanticipated corner condition. With corrections, ATI's yield increased by a factor of fifteen.

The panel discussed two ongoing efforts to prevent such problems. One, described by TSMC director of marketing Kurt Wolf, is a formal multicompany partnership that begins when TSMC moves a new process toward qualification. The foundry works with memory-cell vendors such as Virage to assess and optimize bit-cell designs and verify them in silicon with prototype and split-lot runs. Finally, before the cell is fully qualified, TSMC and the vendor verify it in actual memory arrays, and then in a real customer design. ATI's chip was the customer verification vehicle for the 0.18-micron SRAM qualification.

Derhacobian described the second major effort: the introduction of redundancy and failure-correction techniques into SRAM arrays, now offered as options on Virage's memory products. At some point, the best joint efforts of foundry, cell designer and customer can't improve yields enough; there is simply too much memory on the chip. At that point, according to Derhacobian's data, a redundancy scheme that allows the substitution of working cells or columns can make an enormous improvement in effective yield.

In practice, Virage consults with customers to determine the best strategy for using redundancy. For instance, if a memory array approaches 1.5 Mbits, the designer should seriously consider adding redundant features to it, Derhacobian said. But the number and size of memory instances can make the decision more complex. If a large portion of the die is devoted to memory but no single array is particularly large, redundancy may be necessary yet will bring an undesirable amount of overhead, and reorganizing the memory architecture may be in order. The sketch below illustrates why a few spare columns can change effective yield so dramatically.
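As a minimal sketch, assume independent column failures in a binomial repair model: an array survives if no more columns fail than it has spares. The column count and per-column failure probability below are illustrative assumptions, not figures from Derhacobian's data.

```python
import math

def repairable_yield(n_columns: int, p_col_fail: float, spares: int) -> float:
    """Probability that an array is repairable: at most `spares` of
    the n_columns + spares physical columns contain a killer defect
    (independent column failures, binomial model)."""
    total = n_columns + spares
    return sum(
        math.comb(total, k) * p_col_fail**k * (1 - p_col_fail)**(total - k)
        for k in range(spares + 1)
    )

# Illustrative numbers (assumed): an array organized as 1024 columns,
# each with a 0.2 percent chance of containing a killer defect.
COLUMNS, P_FAIL = 1024, 0.002

for spares in (0, 2, 4, 8):
    y = repairable_yield(COLUMNS, P_FAIL, spares)
    print(f"{spares} spare columns -> effective yield ~ {y:6.1%}")
```

Under these assumptions, effective yield climbs from about 13 percent with no spares to above 90 percent with four spare columns, the kind of enormous improvement Derhacobian described; the cost is the extra area and repair logic that make redundancy unattractive when a die holds many small arrays.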
Ron Wilson is editorial director of Integrated System Design, a sister publication of EE Times.