Royalty-free HyperTransport makes good on chip-to-chip bandwidth
By Gabriele Sartori, President, HyperTransport Technology Consortium, Sunnyvale, Calif., EE Times
January 27, 2003 (11:33 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030124S0032
As next generation processors rev up to multi-GHz clock rates, and outside-the-box I/O technologies exceed 10 Gbit/second data rates, the last thing a designer needs inside the box is a myriad of slow, wide and costly system, processor and I/O buses. HyperTransport I/O technology was designed to simplify the design of high-performance personal computers, servers, network equipment and embedded systems by providing up to 12.8 Gbyte/second of bandwidth in a cost-effective, easy-to-deploy chip-to-chip communications technology. With the support of leading participants in the PC, server, processor, network equipment, communications, software, silicon IP and FPGA markets, royalty-free HyperTransport technology is poised to become a universal solution for processor-to-processor, processor-to-I/O and processor-to-memory communications.
Basically, HyperTransport addresses three sets of problems caused by old-style processor and memory buses: limited bandwidth; fixed bus sizes and speeds with limited scalability and burdensome system design overhead; and the need to support legacy I/O. The widely used CPU-oriented bus structures and their extensions simply do not have the bandwidth to support modern GHz-plus processors, especially the processor clusters that are possible with next generation 64-bit multiprocessors. In addition, traditional parallel bus structures create a great deal of board-level design overhead in the form of additional power and ground signals, complex signal routing and extra passive components. At the same time, the widespread use of PCI makes it necessary to remain compatible with PCI components and the broad PCI-aware infrastructure.
HyperTransport technology neatly solves these problems with a point-to-point, enhanced low-power LVDS signaling architecture supporting a packet-based protocol that provides very high bandwidth (up to 12.8 Gbyte/second aggregate throughput), is easily scalable (2-, 4-, 8-, 16- or 32-bit-wide channels) and is PCI compatible (completely software transparent at the device and OS level). Because of its low-power unidirectional signals, it is easily deployed without costly signal routing or extra power and ground signals, and with almost no passive components required for signal integrity, even at high data rates.
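The arithmetic behind the headline figure is straightforward. The short sketch below is a back-of-the-envelope check rather than anything from the specification; it assumes the 800-MHz link clock and double-data-rate sampling that yield the 1.6 gigatransfer/second rate cited later in this article, and tallies peak bandwidth for each defined link width:

    # Back-of-the-envelope check of the bandwidth figures (assumes an
    # 800-MHz link clock sampled on both edges -- the 1.6 gigatransfer/s
    # rate cited later in the article).
    LINK_CLOCK_HZ = 800e6          # assumed HyperTransport link clock
    TRANSFERS_PER_CLOCK = 2        # double data rate: both clock edges

    def link_bandwidth_gbytes(width_bits):
        """Peak bandwidth of one unidirectional link, in Gbyte/s."""
        transfers_per_s = LINK_CLOCK_HZ * TRANSFERS_PER_CLOCK
        return transfers_per_s * width_bits / 8 / 1e9

    for width in (2, 4, 8, 16, 32):
        one_way = link_bandwidth_gbytes(width)
        # A HyperTransport connection pairs two unidirectional links,
        # so the aggregate figure is twice the one-way number.
        print("%2d-bit link: %4.1f Gbyte/s each way, %4.1f Gbyte/s aggregate"
              % (width, one_way, 2 * one_way))

Under those assumptions a 32-bit link works out to 6.4 Gbyte/second in each direction, or 12.8 Gbyte/second aggregate across the two unidirectional links.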
Unlike other emerging I/O protocols such as RapidIO and PCI Express, which attempt to define both on-the-board and board-to-board specifications, HyperTransport is narrowly focused on providing a universal chip-to-chip communications solution for on-the-board applications.
It is processor agnostic, defines no board-to-board connectors or protocols and is suitable for connecting on-board resources to a large number of existing and emerging I/O protocols. Best of all, it is widely available now in general-purpose silicon IP format, in FPGA IP blocks, and in commercially available x86 processors, peripheral chipsets, graphics engines, 64-bit MIPS processors, and PCI and PCI-X bridge devices. It has been deployed in systems as varied as high-volume consumer-oriented game consoles, high-performance servers and high-end network equipment.
The technology provides significant benefits for PCs, servers, routers and embedded system applications. For example, in next generation PCs, it eliminates complex North Bridge and South Bridge structures and their associated bottlenecks with a streamlined but high-data-rate HyperTransport link from processor to I/O subsystems.
For years the personal computer motherboard has been the spawning ground of processor, system, I/O and specialty bus structures. Legacy buses such as ISA, VL-Bus, AGP, LPC, PCI-32/33 and PCI-X have been used to support processors, memories, graphics engines, and a wide assortment of I/O devices and subsystems. Now that processor designers and developers have taken advantage of new silicon design and manufacturing technologies to deliver GHz-plus clock rate processors and have combined them with double-data-rate (DDR) memories, these legacy buses have become the choke point of the personal computer and workstation motherboard.
HyperTransport technology eliminates those chokepoints and adds support for larger, faster 64-bit processors and for multiprocessing architectures. HyperTransport links are point-to-point unidirectional interconnects that exchange data on both rising and falling clock edges. Enhanced 1.2-V LVDS signaling simplifies board-level design, and a packetized data protocol enables the use of asymmetric links. This means the designer can use both narrow and wide HyperTransport channels throughout the system, and thus has the flexibility to apply just the right amount of bandwidth to a given system node. For processor-to-processor links, a full 32-bit link may be deployed; to connect to slow legacy PCI subsystems, an 8-bit link may be used.
Compared with traditional parallel, multi-drop buses, HyperTransport uses far fewer signal lines and provides greater bandwidth with better signal integrity. Because of the packet-based data protocol, any amount of data payload can be transported over as few as two signal lines or as many as 32 data pairs. At 1.6 gigatransfers/second, HyperTransport can deliver adequate bandwidth to many applications using far fewer signal lines. In addition, the packet format eliminates many command and control signals, further simplifying printed circuit board design.
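To make that point concrete, the sketch below estimates how many link transfers (bit-times) one transaction occupies at various link widths. The 8-byte header and 64-byte payload are illustrative round numbers, not an exact transcription of the HyperTransport packet formats:

    # How a packetized protocol spreads one transfer across whatever link
    # width is available.  The 8-byte request header and 64-byte payload
    # are illustrative values, not the exact HyperTransport packet formats.
    def bit_times(payload_bytes, width_bits, header_bytes=8):
        """Link transfers (bit-times) needed to move one packet."""
        total_bits = (header_bytes + payload_bytes) * 8
        return total_bits // width_bits

    for width in (2, 8, 32):
        beats = bit_times(64, width)
        nanoseconds = beats / 1.6      # one transfer every 0.625 ns at 1.6 GT/s
        print("%2d-bit link: %3d bit-times (~%5.1f ns)"
              % (width, beats, nanoseconds))

The same packet simply takes more bit-times on a narrower link, which is what lets the designer trade pins for time on low-bandwidth nodes.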
Straight edge
A technique called "naturally compensating trace length matching" is defined in the HyperTransport specification. It simplifies pc board design by allowing straight line placement of interconnect signals, eliminating the complex snaking of traces needed in alternative high-speed technologies.
Since the HyperTransport protocol encompasses the PCI enumeration and configuration protocols, existing operating systems need no modifications to take advantage of the greater bandwidth and integration made possible by HyperTransport technology.
This is of supreme importance in the personal computer marketplace, where deploying new technologies usually forces system manufacturers to delay market introduction until software drivers can be rewritten, tested and made available to the consumer. HyperTransport requires no such effort, making its deployment a seamless transition to far greater internal system performance.
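The practical consequence is that a HyperTransport device behind a compatible host bridge is discovered by the same PCI enumeration code the operating system already runs. As a rough illustration, the Linux-hosted sketch below walks the PCI configuration space the kernel exposes through sysfs; nothing in it is HyperTransport-specific, which is exactly the point:

    # Ordinary PCI enumeration sees HyperTransport devices without any
    # HyperTransport-specific code.  This sketch reads the vendor and
    # device IDs (config-space offsets 0-3) that the Linux kernel exposes
    # for every PCI function under sysfs.
    import glob
    import struct

    for path in sorted(glob.glob("/sys/bus/pci/devices/*/config")):
        with open(path, "rb") as f:
            header = f.read(4)                  # vendor ID, then device ID
        vendor_id, device_id = struct.unpack("<HH", header)
        bdf = path.split("/")[-2]               # domain:bus:device.function
        print("%s: vendor 0x%04x, device 0x%04x" % (bdf, vendor_id, device_id))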
Because of the low-power, LVDS signaling, the unidirectional data paths, and naturally compensating trace length matching, low-cost four layer motherboard technology can be employed, even in systems that sport high speed 64-bit processor technology. This is a critical advantage in the cost-conscious PC market.
Another important advantage: as additional HyperTransport-enabled processors are added to increase system performance, there is an automatic increase in I/O bandwidth as well. This sidesteps the traditional problems of adding multiple processors on wide, parallel multi-drop buses. In those systems, as additional high-speed bus masters are added, total system throughput degrades even though the maximum theoretical bus throughput remains constant: when one master is accessing the bus, the others must wait.
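The contrast can be sketched numerically. In the toy model below, the 1.0-Gbyte/second shared-bus figure and the 6.4-Gbyte/second per-link figure are assumptions chosen only to show the shape of the two curves, not measurements of any particular system:

    # Toy comparison of a shared multi-drop bus against point-to-point
    # links as bus masters are added.  Both bandwidth figures below are
    # assumptions chosen only to show the shape of the curves.
    SHARED_BUS_GBYTES = 1.0      # total bus bandwidth, divided among masters
    HT_LINK_GBYTES = 6.4         # assumed per-link bandwidth, one per master

    for masters in (1, 2, 4, 8):
        shared_each = SHARED_BUS_GBYTES / masters   # others wait for the bus
        ht_total = HT_LINK_GBYTES * masters         # each master has its own link
        print("%d masters: shared bus %4.2f Gbyte/s each; "
              "point-to-point %5.1f Gbyte/s aggregate"
              % (masters, shared_each, ht_total))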
Even in the personal computer market, thanks to the ability of HyperTransport-enabled processors to connect easily, there will be a growing market penetration of multiprocessor personal computers. As a result of HyperTransport technology, there is negligible additional overhead in designing a system where either one or two processors can be used. Since adding a second processor then becomes merely a matter of fitting the second processor into the socket and adding memory, there is little system design and manufacturing overhead when doubling system performance and extending product life. This lowers overall manufacturing costs and greatly improves revenue potential from a fixed product development investment.
But it is in the server and network equipment markets, where system performance is critical, that this capability will add the most significant benefits.
In next generation servers, the main processor will be a 64-bit processor with a multiprocessing architecture. Unlike older-generation designs, no multilevel bus structure such as the south bridge/north bridge arrangement is required in these systems. Instead, HyperTransport links connect the processor to advanced HyperTransport-based I/O devices, or link via bridges to either legacy PCI I/O subsystems or emerging high-speed I/O protocols such as Gigabit Ethernet or 10 Gigabit Ethernet links. If a second processor subsystem is added to boost performance, the same HyperTransport technology can be used to link the processor subsystems as well. Because HyperTransport technology is scalable, the link between processors can be wider, and thus higher bandwidth, than those used between processor and legacy I/O devices.
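A hypothetical topology of this kind can be modeled as a short chain of links, each sized to the traffic it must carry. The device names and widths in the sketch below are illustrative, not drawn from any shipping chipset:

    # A hypothetical HyperTransport topology with each hop sized to the
    # traffic it carries.  Device names and widths are illustrative.
    from collections import namedtuple

    Link = namedtuple("Link", "upstream downstream width_bits")

    chain = [
        Link("CPU 0",      "CPU 1",      32),   # processor-to-processor
        Link("CPU 0",      "I/O tunnel", 16),   # processor to I/O hub
        Link("I/O tunnel", "PCI bridge",  8),   # legacy PCI subsystem
    ]

    def one_way_gbytes(width_bits, transfers_per_s=1.6e9):
        return transfers_per_s * width_bits / 8 / 1e9

    for link in chain:
        print("%-10s -> %-10s %2d-bit, %3.1f Gbyte/s each way"
              % (link.upstream, link.downstream, link.width_bits,
                 one_way_gbytes(link.width_bits)))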
Bandwidth flexibility
This ability to apply just the right amount of bandwidth necessary in the system is a key advantage of the HyperTransport architecture. Slower, narrower links can be used to support slow, low bandwidth PCI components, while faster, wider links can be used to speed data flow between processors and between processors and faster I/O subsystems using more advanced I/O technologies such as 10GbE, InfiniBand or SPI-4.
Unlike traditional computing designs, when this type of architecture is expanded to include multiple processors, such as dual or quad 64-bit processors sharing memory and I/O, there is no degradation in I/O bandwidth. In fact, as more HyperTransport-enabled processors are added, the aggregate I/O bandwidth increases linearly with processing power.
In addition to requiring high bandwidth, scalability and low cost of implementation, network equipment such as routers has more advanced needs for its I/O subsystems. Because HyperTransport technology was designed to be processor agnostic, it is available in a number of processor architectures, including those in the x86 and MIPS families, in both 32- and 64-bit implementations.
Due to the participation of such industry leaders as Cisco, Sun Microsystems, Broadcom Corporation and PMC-Sierra, there are a number of network features in the HyperTransport technology specification. These include a packet-based data payload protocol that simplifies the integration of HyperTransport data streams with other packet-based network protocols, a message-passing protocol that supports streaming of packets, 16 streaming point-to-point flow-controlled virtual channels, enhanced data recovery features, peer-to-peer transfers, 64-bit addressing for very large memory support and support for concurrent host transactions.
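One of these features, flow-controlled virtual channels, can be illustrated with a generic credit-based scheme: a sender transmits on a channel only while it holds buffer credits for that channel, so a stalled channel is back-pressured without blocking the others. The sketch below is a conceptual model, not the specification's exact buffer-release mechanism:

    # Generic credit-based flow control on independent virtual channels:
    # a sender may transmit on a channel only while it holds buffer
    # credits for it, and a credit returns when the receiver drains a
    # buffer.  Conceptual sketch only, not the HyperTransport spec's
    # exact buffer-release mechanism.
    NUM_CHANNELS = 16
    CREDITS_PER_CHANNEL = 4            # assumed receive-buffer depth

    credits = [CREDITS_PER_CHANNEL] * NUM_CHANNELS

    def try_send(channel):
        """Send one packet on `channel` if a buffer credit is available."""
        if credits[channel] > 0:
            credits[channel] -= 1      # packet occupies a receiver buffer
            return True
        return False                   # channel back-pressured; others unaffected

    def receiver_drained(channel):
        """Receiver freed a buffer; the credit flows back to the sender."""
        credits[channel] += 1

    for _ in range(5):                 # channel 3 stalls once its credits run out
        print("send on ch3:", try_send(3))
    receiver_drained(3)
    print("after drain:", try_send(3))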
Because of these features, along with the basic HyperTransport technology advantages in bandwidth, ease of implementation, and scalability, HyperTransport is being implemented in high-end routers. In these applications, the control plane and data plane processing elements use HyperTransport-enabled processors and I/O fabrics to move large amounts of packet data between the forwarding and addressing units and the packet processing units.