Adding net functions to GHz chips
By Lyle Adams, Vice President of Engineering, Palmchip Corp., San Jose, Calif., EE Times
February 10, 2003 (10:15 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030207S0020
With the push to smaller geometries, chip functionality continues to increase. Clock frequencies of 100 MHz at 0.18- and 0.13-micron manufacturing technologies are quite common. With smaller geometries, system frequencies of 1 GHz will be readily achievable.
As transistors get smaller, they get significantly faster. But because they are smaller, they are not able to drive large loads as quickly. Thus, transistors that sit close to each other will be able to switch at gigahertz frequencies, but transistors that sit far from each other will only be able to switch at megahertz rates because of the large loading of the long interconnect wires.
Most systems rely on getting data from one portion of a chip to another and will suffer system-wide performance degradation if the system interconnects cannot operate at a speed comparable to the system components. Because this is a fundamental limitation of the technology, engineers cannot rely on tools or manufacturing improvements to circumvent the problem; it must be addressed in the design itself.
With high speeds and large chip sizes, intercore communication will be analogous to interdevice communication, where cores communicate with each other via a network. Today's networks are designed as general-purpose networks and include overhead that would be considered excessive if applied directly on-chip. But as chips get larger and faster, some features of networks are advantageous to incorporate into chip designs. These include a network topology, latency tolerance and error detection. For early SoC designs, chip designers used bus topologies, mirroring the system topologies with which they were familiar. Many transistors were linked to the same wire, and drivers would take turns sending data between blocks over the bus.
As chips became more complex, the amount of on-chip wiring available grew by orders of magnitude. During this stage of SoC design, designers moved to point-to-multipoint topologies, wherein a single transistor drives a dedicated wire to many other transistors. Many wires would be used for all blocks to communicate with each other.
As transistors got smaller and faster, their ability to drive long wires decreased, so designers moved to point-to-point topologies, where each transistor communicates with a small number of other transistors.
Chips that need to be fast are usually feature-rich, and as a result they require a large amount of silicon area. The size of the chip works against the need for high-speed operation as wires get relatively longer, because the length of the wire complicates its routing from one end of the chip to the other. It now becomes advantageous to reduce the number of long wires.
A networking topology allows such a shift by enabling many signals to be switched among fewer wires. The result is that the available place-and-route design tools and techniques can be used to achieve gigahertz speeds.
Pipelining trade-offs
System interconnects will be most affected by the move to higher frequencies because of their inherently long wires. The standard method of driving long wires is to rebuffer them with intermediate transistors, so that each transistor only needs to drive a relatively short length of wire. That prevents the signal's propagation delay from growing quadratically with distance. But as clock rates increase to the gigahertz range, the time available becomes so small that redriving the wire is too slow. That situation is exacerbated by the larger impact of clock driver delays.
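The rebuffering argument can be made concrete with a first-order Elmore RC model. The sketch below is illustrative only: the resistance, capacitance and buffer-delay numbers are assumed, not taken from the article, and real extraction is far more detailed.

```python
# First-order (Elmore) RC delay sketch: an unbuffered wire's delay grows
# with the square of its length; splitting it into N rebuffered segments
# makes the total roughly linear in length (plus N buffer delays).

def unbuffered_delay(r_per_mm, c_per_mm, length_mm):
    """Elmore delay of a single distributed RC wire segment: 0.5 * R * C."""
    return 0.5 * (r_per_mm * length_mm) * (c_per_mm * length_mm)

def buffered_delay(r_per_mm, c_per_mm, length_mm, n_buffers, t_buf):
    """Same wire split into n equal segments, each redriven by a buffer."""
    seg = length_mm / n_buffers
    return n_buffers * (unbuffered_delay(r_per_mm, c_per_mm, seg) + t_buf)

# Hypothetical 0.13-micron-era values: 200 ohm/mm, 0.2 pF/mm, 50 ps/buffer.
R, C, TBUF = 200.0, 0.2e-12, 50e-12
print(unbuffered_delay(R, C, 10))         # 10 mm unbuffered: 2.0 ns
print(buffered_delay(R, C, 10, 5, TBUF))  # 5 buffers: 0.65 ns
```

Even the buffered 0.65 ns is already most of a 1-GHz cycle, which is why the article turns to pipelining next.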
Currently the best solution is to increase the amount of time available for signal propagation by inserting pipeline stages. A pipeline stage is a clocked element, such as a flip-flop or register. When such an element is placed in the middle of a signal's path, the signal will require two clocks to reach its final destination but will have less distance to travel in each of the two clock cycles.
Pipelining lets signals traverse the length of the chip without degrading the system clock rate, but it may affect overall performance because signals require more clock cycles to reach their final targets. For gigahertz operation, designers will always need to trade off clock frequency for clock latency (the number of clock cycles needed for signals to propagate).
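The frequency-versus-latency trade can be sketched behaviorally. This is a minimal software model of a clocked element, not a hardware description; the class and function names are hypothetical.

```python
# Minimal behavioral sketch of a pipeline stage: output this cycle is
# whatever was sampled at the input on the previous cycle. Each inserted
# stage shortens the wire a signal must cross per cycle, at the cost of
# one extra clock of latency.

class PipelineStage:
    """A clocked element (flip-flop/register)."""
    def __init__(self):
        self.state = None
    def clock(self, din):
        dout, self.state = self.state, din
        return dout

def send(values, n_stages):
    """Drive a word per cycle through n_stages registers in series."""
    regs = [PipelineStage() for _ in range(n_stages)]
    outputs = []
    for v in values:
        for r in regs:
            v = r.clock(v)
        outputs.append(v)
    return outputs

# One inserted stage: data arrives one cycle later, but once the pipe is
# full the link still delivers one word per clock.
print(send(["a", "b", "c", "d"], 1))  # [None, 'a', 'b', 'c']
```

Throughput is preserved; only the latency (the leading `None` cycles) grows with the number of stages.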
This will be true especially for system interconnects, because while most current bus protocols allow for some pipelining via flow control, they have as part of the protocol at least one signal that cannot be pipelined: the flow control signal itself. Designers of high-speed chips will need to design in the needed amount of pipelining or will have to incorporate a protocol that allows for arbitrary pipelining. They will also have to take into account signal latency due to pipelining and its effect on system performance.
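One protocol technique that permits arbitrary pipelining is credit-based flow control, where the sender stalls on a local counter instead of a combinational ready signal. The article does not specify Palmchip's mechanism; the sketch below is a generic, hypothetical illustration of the idea.

```python
# Credit-based flow control sketch: the sender holds one credit per free
# slot in the receiver's FIFO. Because the stall decision is purely local,
# both the data path and the credit-return path can be pipelined by any
# number of stages; returned credits simply arrive a few cycles later.

from collections import deque

class CreditSender:
    def __init__(self, credits):
        self.credits = credits        # free slots known to exist downstream
    def try_send(self, word, link):
        if self.credits == 0:
            return False              # stall locally; no end-to-end ready wire
        self.credits -= 1
        link.append(word)
        return True
    def return_credit(self):
        self.credits += 1             # may arrive via a multi-cycle path

link, fifo_depth = deque(), 2
tx = CreditSender(fifo_depth)
assert tx.try_send("pkt0", link)
assert tx.try_send("pkt1", link)
assert not tx.try_send("pkt2", link)  # out of credits: sender stalls
link.popleft()                        # receiver consumes one word...
tx.return_credit()                    # ...and its credit comes back later
assert tx.try_send("pkt2", link)      # sender resumes
```

The latency of the credit-return path only affects how deep the receiver FIFO must be to sustain full throughput, not correctness.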
Because tomorrow's gigahertz chips will be more complex, and because of the networking features that will be built into the chips, error detection and handling will have to become an integral part of the chip design. Network packets can be routed from source to destination via any of several routes, and all components along the route need to be able to distinguish the data source, destination and packet number. Similarly, on-chip networks will need to identify each packet in some way so that intermediate system blocks can forward the data, or so that the system engineer can track data movement. This will be particularly crucial during initial chip development.

Fast transistors, particularly dense transistor arrays such as RAMs, will be more susceptible to errors. Because many storage elements may be used within the chip, on-chip error detection will become important for very large chips that transfer large amounts of data. The designer will no longer be able to assume that all parts of the chip are robust enough to be immune to data errors. System interconnects will need to be able to detect and report errors and to prevent data errors from propagating.
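Both requirements, routable identification and error detection, can be seen in a toy packet format. The field names and the simple XOR parity below are hypothetical choices for illustration; a real on-chip protocol would define its own header layout and likely a stronger code.

```python
# Toy on-chip packet sketch: source, destination and sequence tags let
# intermediate blocks forward the data (and let engineers trace it), while
# a parity word lets the receiver detect a corrupted payload.

def parity(words):
    p = 0
    for w in words:
        p ^= w                        # XOR-fold the payload into one word
    return p

def make_packet(src, dst, seq, payload):
    return {"src": src, "dst": dst, "seq": seq,
            "payload": list(payload), "parity": parity(payload)}

def check_packet(pkt):
    """True if the payload still matches the parity computed at the source."""
    return parity(pkt["payload"]) == pkt["parity"]

pkt = make_packet(src=1, dst=4, seq=7, payload=[0x12, 0x34, 0x56])
assert check_packet(pkt)
pkt["payload"][1] ^= 0x04             # single-bit error in flight
assert not check_packet(pkt)          # interconnect can detect and report it
```

Detection is the floor; with the sequence tag, a receiver can also request a retry rather than let the error propagate.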
Chips that operate at gigahertz speeds are being manufactured today, but their development and manufacturing remain too complex. Within a couple of generations of silicon technology, gigahertz operation will be available to a wider range of chip designs. But the vast majority of chip designers will not be able to change their tools or design methodologies completely in order to ensure operation of their designs at such speeds. Thus, design changes that take advantage of the speed will have to build on today's methodologies, adding only a small amount of time and cost to the chip development process or even removing some time and cost.
Design methodologies that simplify timing analysis will be critical. With large high-speed chips, much of the time spent in verifying the chip will be in ensuring that it will run properly at the intended speed. In order to avoid adding to the development effort, design changes adopted must be transparent to the timing analysis methodologies.
We see the need to bridge SoC designs operating in the hundreds of megahertz and those operating in the gigahertz range. Because the system interconnect is a weak point of high-speed operation, Palmchip's engineers have created a technology based on the company's current CoreFrame integration technology.
Signal optimization
The new interconnect incorporates network topology, pipelining and tagging features in a straightforward buslike protocol that allows chips created with today's silicon technologies to be ported to future technologies with minimal rework. Essentially, the interconnect protocol includes facilities for arbitrary pipelining, allowing for late retiming of interconnects that span the chip, data tagging and error handling. It also includes flexible topologies for optimization of high-speed signal routing.
Expect future technologies to incorporate more features of general-purpose networks such as packetization, error detecting and correcting codes, and negotiation. Future chips may include software protocol managers or on-chip networks.
On-chip bus systems that incorporate the required networking elements, while adding little to, or even reducing, the cost and effort of chip design, will bridge current technologies with future ones.