Innovation to drive chip performance curve
By Mike Clendenin, EE Times
January 18, 2004 (2:17 p.m. EST)
URL: http://www.eetimes.com/story/OEG20040116S0044
TAIPEI, Taiwan - When a guy like Bernard S. Meyerson, chief technology officer of IBM Microelectronics, tells a roomful of Taiwanese designers and process engineers that traditional CMOS scaling is dead, they take it in stride.

"There is a paradigm shift here, and it is a very important one," Meyerson said at the Semico Impact Conference here recently. "The diminishing returns you get from scaling mean that innovation, the harder thing, actually has to happen faster and faster just to stay on the expected performance line. And scheduling innovation is something that makes engineers very nervous."

The audience listened keenly as Meyerson told how a dramatic rise in power density, brought about by the traditional brute scaling of process technology dictated by Moore's Law, has already yielded silicon hot enough to iron a pair of pants and is on a curve heading toward supernova. In other words, physics is getting ugly.

The days of relatively generous gate oxides, on the order of 30 angstroms, have given way to the angst of dealing with less than 10 angstroms, making brute scaling nearly impossible. That is why some in the industry, Meyerson included, believe that classical CMOS scaling is no longer possible.

Perhaps the reason for the sense of optimism about the 90-nm, and subsequent 65- and 45-nm, nodes is that people have seen this shift coming. In some cases, engineers have experienced the problems already, at 130 nm, where low-k became a synonym for low yield, resolution enhancement crept into more masking layers and the industry made the uncomfortable shift from "design rules" to "design guidelines," meaning that designers really would have to talk to process engineers. The apparent scaling roadblock for bulk CMOS also mirrors earlier experience with bipolar processes, another casualty of increasing power density.
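Meyerson's power-density curve follows from simple arithmetic: switching power density scales roughly as capacitance per area times supply voltage squared times frequency, and once supply voltage stops shrinking with the node, each generation's higher device density and clock rate compound. The sketch below uses invented, rounded node figures purely to show the shape of the trend, not measured data.

```python
# Illustrative only: how dynamic power density climbs once supply voltage
# stops scaling with feature size. All node numbers below are assumed,
# rounded figures for the sake of the trend, not measurements.

def dynamic_power_density(cap_per_area, vdd, freq):
    """Relative switching power per unit area: C/A * Vdd^2 * f."""
    return cap_per_area * vdd ** 2 * freq

# Hypothetical nodes: (name, relative capacitance/area, Vdd in V, clock in Hz)
nodes = [
    ("180 nm", 1.0, 1.8, 0.5e9),
    ("130 nm", 1.9, 1.3, 1.0e9),
    ("90 nm",  3.7, 1.1, 2.0e9),
]

base = dynamic_power_density(1.0, 1.8, 0.5e9)  # 180-nm reference point
for name, c, v, f in nodes:
    ratio = dynamic_power_density(c, v, f) / base
    print(f"{name}: ~{ratio:.1f}x the 180 nm switching power density")
```

With these assumed numbers the 90-nm node lands at several times the 180-nm power density, and that is before standby leakage, which the article notes becomes the dominant worry below 130 nm, is even counted.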
Today, standby leakage in CMOS is gaining importance over active power during the transition from 130 nm to 90 nm, just as interconnect replaced transistor-level performance as the leading problem during the run-up to 130 nm. Interconnect delay is still a big problem, and feature defects, not just particle defects, loom as another specter.

One trait of the submicron era may be unprecedented levels of collaboration across many sectors of the chip industry. "The things that must be done to make nanometer design successful will bring together the design group and the manufacturing group in a level of cooperation not seen yet, or else they won't succeed," said Wally Rhines, chairman and chief executive officer of EDA tool vendor Mentor Graphics Corp.

That seems a tall order in an atmosphere where the schedule to tapeout, not time-to-volume, is usually a designer's main concern. But in the post-130-nm world, the efforts following physical verification eat up roughly 30 percent of design time, according to PDF Solutions, a process technology consultancy.

Many EDA tool vendors and foundry executives have talked about design complexity outpacing productivity. But methodologies that encourage collaboration and make yield a concern for everybody have not yet arisen to bridge the natural division between design and manufacturing.

One of the key challenges for EDA tool vendors will be quantifying design flows so that process engineers can go back to designers and suggest changes. A built-in function that offers specific recommendations based on previous yield data would make it easier for process engineers to justify changes to a designer, said Mentor's Rhines. It would also create a more systematic approach to applying design guidelines, suggests John Kibarian, president and CEO of PDF Solutions. "You don't know that adding another redundant via is always a good thing," he said.
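The yield-data-driven guideline Rhines and Kibarian describe can be caricatured in a few lines: recommend a redundant via only when the estimated yield gained by doubling vias outweighs the assumed yield cost of extra mechanical stress on a low-k film. Everything here, the function, the failure probabilities, the stress penalty, is a hypothetical sketch of the idea, not any vendor's actual model.

```python
# Hypothetical sketch of a yield-driven design guideline: double each via
# only if the redundancy gain beats the assumed low-k stress penalty.
# All probabilities and penalties below are invented for illustration.

def recommend_redundant_via(via_fail_prob, via_count, stress_yield_penalty):
    """Compare via-limited yield with and without doubling every via."""
    # Single vias: every via site must be good.
    y_single = (1 - via_fail_prob) ** via_count
    # Doubled vias: a site fails only if both vias fail, but the extra
    # metal is assumed to cost a fixed slice of yield via film stress.
    y_double = (1 - via_fail_prob ** 2) ** via_count * (1 - stress_yield_penalty)
    decision = "add redundant vias" if y_double > y_single else "leave as drawn"
    return decision, y_single, y_double

# Marginal via process: redundancy pays for itself.
print(recommend_redundant_via(1e-6, 5_000_000, 0.01))
# Robust via process: the stress penalty dominates, as Kibarian warns.
print(recommend_redundant_via(1e-9, 5_000_000, 0.01))
```

The second call is the case Kibarian describes: when via yield is already high enough, the redundant via only stresses the low-k film and makes no economic sense.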
"It could needlessly stress a low-k film in a design where the yield will already be high enough. So it wouldn't make economic sense." Another problem that's starting to see some poten tial solutions is the data flow between designers and mask-making shops. With design rules mushrooming, the amount of data that runs through verification tools is having a serious impact on the turnaround time for mask sets. Short of buying supercomputers to process tomorrow's designs, one trick the EDA industry is looking into involves using GDSII-based data flows to speed up the mask data preparation. Rhines argued that such an approach would preserve the hierarchy of designs so that blocks of data are not repeatedly tested, as is done in today's more linear methodology. Complementing this new method would be the optimization of EDA platforms for parallel processing, as well as multithreading techniques that would chew through information faster. 'Golden benchmark' "If verification tests every geometry every time, you will never get through the process," said Rhines. "You need to maintain the hierarchy and distribute it across many computers so you can take another order of magnitude o ut of processing time and still keep the golden benchmark of overnight processing per level." To duplicate the rapid advances of the past, the industry is also looking to rely more heavily on reusable intellectual property. Until now, processor cores have been at the heart of the IP industry, and that will probably not change. But I/O and memory IP, as well as embedded test and repair cells, are increasing in importance. So is the relationship among IP vendors, foundries and designers. Adam Kablanian, president and CEO of IP vendor Virage Logic Corp., believes in a shared responsibility during a product ramp, in which the customer, IP supplier and foundry will collaborate on yield optimization. "At 90 nm, it will take three to tango," he said. 
Not everyone need charge into the brave new world of sub-130-nm design, though. At this point, foundry executives concede that 90-nm processes on 300-mm wafers still only make sense for large-die, high-density, high-margin and high-volume products; in many cases, the same is true at 130 nm. Indeed, many companies may find that their performance and cost-savings criteria are met by 180-nm and 130-nm processes, said Ben Lee, Asia-Pacific managing director of FPGA maker Altera Corp. "The list of products at 90 nm is shorter and shorter. Many may never go," he said. "And for those products that do go, timing will be very critical. If you go too early, the costs will really hurt you."

One trend gaining prominence in the scaling debate is the role of packaging. Clearly, it is a consideration from the get-go in complex chip designs such as FPGAs. "I/O now determines performance more than it ever did," said Ivo Bolsens, chief technology officer of Xilinx Inc. "We start by designing the package. It's a bit of the world upside down."

Ever since the advent of low-k dielectrics, foundries have dedicated more internal resources to packaging to reduce the risk of die/package stress cracking the delicate dielectric layer. Both Taiwan Semiconductor Manufacturing Co. and United Microelectronics Corp. have forged closer relationships with packaging houses in an effort to preempt, or at least lessen, that possibility.

Packaging is also emerging as the stealth route to system-on-chip (SoC) designs. Using a system-level diagram of a third-generation cell phone to illustrate the complexity of merging disparate technologies, Jackson Hu, CEO of foundry UMC, said, "SoC has probably been oversold. . . . It's not easy to put all these processes into a single die." That may push more and more designers to consider a system-in-package (SiP) approach. But the SiP has its own inherent difficulties.
If one chip in a three-, four- or five-dice package fails during testing, the whole package is worthless. The same goes for a module-level approach. Of course, Hu noted that this fact will create "opportunities" for those companies that can nail down a solution.

IBM's Meyerson said he started to notice links among these myriad issues about two years ago and concluded that novel, complementary techniques such as strained silicon, silicon-on-insulator and FinFETs would gain prominence. Not too many people were listening back then, but they are now. "It's good for the industry as a whole when this sort of consensus takes hold," Meyerson said. "At least companies are finally coming to grips with the new challenges at hand."
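The SiP risk Hu describes is a simple compounding effect: the package works only if every die in it works, so package yield is the product of the individual post-assembly die yields. A back-of-the-envelope calculation, with a purely illustrative 95 percent per-die yield, shows how quickly the economics erode as dice are added.

```python
# Back-of-the-envelope SiP yield: the package is good only if every die
# in it is good, so package yield is the product of per-die yields.
# The 95% per-die figure below is illustrative, not from the article.

def package_yield(die_yields):
    """Multiply the individual die yields to get the package yield."""
    y = 1.0
    for dy in die_yields:
        y *= dy
    return y

for n in (3, 4, 5):
    print(f"{n} dice at 95% each -> {package_yield([0.95] * n):.1%} of packages good")
```

At three dice roughly one package in seven is scrap; at five dice nearly a quarter are, which is why known-good-die testing before assembly is the "opportunity" Hu alludes to.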