Adapting an architecture to fit 130 nm
By Peter Claydon, Founder and COO, picoChip Designs Ltd., Bath, England , EE Times
January 13, 2003 (12:32 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030110S0032
Designing picoChip's new communications processor chip, the PC101, which arrived in first silicon in November, taught us that using 0.13-micron processes is a lot more difficult than our engineers expected, though no one thought it would be easy. The chip contains 430 processors linked by a fast, deterministic fabric and is manufactured in an eight-layer-metal 0.13-micron CMOS process. There were both technical and commercial challenges, and although many of the technical challenges are being overcome, the commercial realities are here to stay.

Technically, 0.13 micron was the node at which the big switch to copper was made and the big switch to low-k dielectrics did not happen. The lack of low-k exacerbated a situation that has been getting worse: the metal tracks used for interconnect now look like pretty good parallel-plate capacitors, which slow signals and cause them to interfere with each other. Significant gains can be made by developing new silicon architectures that are optimized for the new world of fast transistors and slow wires. By using hundreds of small processors to perform the job of tens of larger ones, our engineers avoided most of the signal-integrity hurdles faced by system-on-chip (SoC) designers using 0.13-micron processes, and also achieved a significant power reduction.

Copper has completely changed the back-end design process, and the repercussions of this are still being felt. During the months leading up to the tape-out of the PC101, we had to contend with design-rule changes at least once a month, many of them relating to obtaining even coverage on layers to prevent "bowing" effects during chemical-mechanical polishing from compromising yields. One major problem was getting all the third-party intellectual property (IP) onto the same version of the design rules at the same time, before the foundry stopped accepting layout on that version.
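The "fast transistors, slow wires" point can be made with a back-of-the-envelope calculation: the delay of a distributed RC wire grows with the square of its length, so many short hops between small processors beat one long global wire. The sketch below uses generic textbook resistance and capacitance figures, not picoChip's process data.

```python
# Illustrative model of why long on-chip wires dominate delay at 0.13 micron.
# r_per_mm and c_per_mm are assumed textbook values, not real process data.

def wire_rc_delay(length_mm, r_per_mm=50.0, c_per_mm=0.2e-12):
    """Elmore delay (~0.38 * R * C) of an unbuffered distributed RC wire.
    r_per_mm: ohms per mm; c_per_mm: farads per mm (both assumptions)."""
    r = r_per_mm * length_mm
    c = c_per_mm * length_mm
    return 0.38 * r * c  # seconds; note this grows with length squared

# One 10 mm global wire vs. ten 1 mm local hops covering the same distance:
long_wire = wire_rc_delay(10.0)
short_hops = 10 * wire_rc_delay(1.0)
print(f"10 mm global wire: {long_wire * 1e12:.0f} ps")
print(f"10 x 1 mm hops   : {short_hops * 1e12:.0f} ps")
```

With these assumed numbers the single long wire is ten times slower than the chain of short hops, which is the quadratic-scaling argument behind tiling many small processors instead of a few large ones.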
In this respect, we were greatly assisted by the fact that we used very little third-party IP: although our chip is relatively large, it consists almost entirely of four basic blocks that are tiled to form an array.

One issue with 0.13-micron processes that has been somewhat overlooked is yield. With increased process complexity, yields may never be as good as in previous generations; not only that, but it is taking a lot longer than expected for yields to reach anything like acceptable levels. With 430 processors on a chip, picoChip's team developed a redundancy technique that allows software to be remapped onto different processors in the field, so that devices with some failed processors can still be used. This has the potential to more than double the effective yield for little more than 5 percent area overhead, and it also lets us design larger chips that make full use of the increased silicon real estate available as we move to 300-mm wafers.

The major commercial challenges are often quoted as being high mask costs ($500,000 to $700,000 for eight metal layers at 0.13 micron) and the time taken to design chips with over a hundred million transistors. Just working out what to do with that many transistors is hard enough. Fab cycle times for prototype lots have also stretched from under three weeks at 0.35 micron to at least eight weeks at 0.13 micron, greatly increasing the penalty for not being right the first time. Until recently, extensive use of silicon IP was championed as a solution to the design-time problem, but in reality a huge amount of the design cost is in verification, and verification can often take longer rather than shorter if IP is used extensively. These factors are leading many systems companies to question the viability of the SoC approach, looking instead to larger FPGAs and to suppliers of ASSPs. However, FPGAs are not only expensive; they do not address the problems of design and verification time.
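The "more than double the effective yield" claim can be illustrated with a standard Poisson defect model: without redundancy every circuit must work, whereas with spare processors the chip survives a handful of defective ones. The defect density, die area, and processor counts below are assumptions chosen for the sketch, not picoChip's actual figures.

```python
# Illustrative yield model for processor-level redundancy.
# All numeric inputs (defect density, die area, spare count) are assumed.
from math import comb, exp

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    return exp(-area_cm2 * defects_per_cm2)

def redundant_yield(n_needed, n_total, p_proc_good, p_rest_good):
    """Chip works if the shared (non-redundant) logic is good AND at
    least n_needed of n_total identical processors are defect-free."""
    p_enough = sum(
        comb(n_total, k) * p_proc_good**k * (1 - p_proc_good)**(n_total - k)
        for k in range(n_needed, n_total + 1)
    )
    return p_rest_good * p_enough

# Assumptions: 1 cm^2 die, 0.9 defects/cm^2, 430 tiled processors of
# which 410 must work, and 5% of the area in shared logic with no spare.
D0 = 0.9
p_proc = poisson_yield(0.95 / 430, D0)   # one processor tile
p_rest = poisson_yield(0.05, D0)         # shared, non-redundant logic

baseline = poisson_yield(1.0, D0)        # no redundancy: all circuits work
with_spares = redundant_yield(410, 430, p_proc, p_rest)
print(f"baseline yield      : {baseline:.1%}")
print(f"yield with remapping: {with_spares:.1%}")
```

Under these assumed numbers the remapping scheme lifts yield from roughly 40 percent to over 90 percent, consistent with the article's "more than double" figure: the expected number of failed processors is well under one, so nearly every die with defects confined to the processor array remains usable.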
ASSPs are inflexible and address only limited high-volume markets. It is precisely these issues that our products address. Designers can place software onto groups of processors in the same way as they would like to be able to place hardware IP blocks onto silicon in a SoC design, thus creating a software system-on-chip (SSoC).
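The SSoC idea of placing software blocks onto groups of processors, by analogy with floorplanning hard IP, can be sketched as a simple mapping. The block names, sizes, and greedy placement policy below are purely hypothetical illustrations of the concept, not picoChip's tool flow.

```python
# Hypothetical sketch of "software system-on-chip" placement: functional
# blocks declare how many processors they need and are assigned groups of
# processors in the array, as hard IP blocks would be placed on silicon.
# Block names, sizes, and the greedy policy are illustrative assumptions.

def place_blocks(blocks, total_processors):
    """Greedily assign each named block a contiguous range of processors.
    blocks: dict mapping block name -> number of processors required."""
    placement, next_free = {}, 0
    for name, needed in blocks.items():
        if next_free + needed > total_processors:
            raise ValueError(f"not enough processors left for {name!r}")
        placement[name] = range(next_free, next_free + needed)
        next_free += needed
    return placement

# An example "design" built from software blocks on a 430-processor array:
design = {"filter_bank": 160, "fec_decoder": 200, "control": 40}
layout = place_blocks(design, total_processors=430)
for name, procs in layout.items():
    print(f"{name:12s} -> processors {procs.start}..{procs.stop - 1}")
```

The leftover processors in this toy layout are exactly the headroom the redundancy scheme exploits: remapping a block means re-running the placement while excluding processors known to have failed.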