Making the best interconnect choice: weighing the pros and cons
By Warren Miller, Vice President of Marketing, Avnet Design Services, Phoenix, Ariz., Warren.miller@avnet.com, EE Times
October 28, 2002 (10:16 a.m. EST)
URL: http://www.eetimes.com/story/OEG20021023S0012
System performance bottlenecks have gradually moved from the processing portion of high-speed applications to the input/output section. Parallel shared-bus architectures have failed to provide data at the rates needed in modern multiprocessor systems, so the industry is migrating to switch-based architectures. Say goodbye to bus-based VME, ISA, PCI and PCI-X, and say hello to the next generation of switch-based standards: HyperTransport, RapidIO, InfiniBand and PCI Express.
But just as in bus-based systems, not all switch-based standards will prove to be successes in the marketplace. And this short list of standards is only the most visible of the few dozen standards being proposed. Understanding the underlying advantages and disadvantages of each standard may prove to be the best insurance that a design choice made today will have the best chance of success tomorrow.
Past parallel bus structures typically used synchronous, single-ended signaling based on standard 5-V and 3.3-V TTL levels. Signals were difficult to switch quickly, trace lengths were long, noise was prevalent and arbitration for the shared bus introduced latency. The good news was that a parallel bus required no significant logic resources to implement.
The new interconnects generally use low-voltage differential signaling over high-speed point-to-point connections. The resulting increase in signal bandwidth into the Gbit/second range required a point-to-point interconnect and thus a switch-based architecture. Signals from each of the architecture's elements are collected at a central switch, instead of being spread across a parallel bus, and are routed to the desired destination through the internal switching fabric. The resulting architecture is more hardware-intensive than a parallel bus, but it can deliver bandwidth increases of multiple orders of magnitude when multiple processors can operate without interfering with one another.
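To see where that scaling advantage comes from, a rough back-of-the-envelope comparison helps. The Python sketch below contrasts the fixed aggregate throughput of a shared, arbitrated bus with that of a non-blocking switch that gives every node its own link; the link rates are placeholder assumptions chosen only to illustrate the scaling, not figures from any particular standard.

```python
# Illustrative arithmetic only -- the link rates are placeholder assumptions.

def shared_bus_aggregate(bus_gbps: float, nodes: int) -> float:
    """A shared parallel bus is one arbitrated pipe: aggregate throughput
    is capped at the bus bandwidth no matter how many nodes contend for it."""
    return bus_gbps

def switch_aggregate(link_gbps: float, nodes: int) -> float:
    """A non-blocking switch gives each node its own point-to-point link,
    so aggregate throughput grows with the number of simultaneous flows."""
    return link_gbps * nodes

for n in (2, 4, 8, 16):
    print(f"{n:2d} nodes: shared bus {shared_bus_aggregate(1.0, n):5.1f} Gbit/s, "
          f"switch fabric {switch_aggregate(2.0, n):5.1f} Gbit/s")
```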
HyperTransport has opted for a source-synchronous, double-data-rate (DDR) physical layer with a wider style of interconnect. This makes it better suited to chip-to-chip implementations, while still supporting card-to-card interconnect in a narrower configuration if a lower pin density is required between cards. HyperTransport may end up winning in chip-to-chip interconnect applications, but it may miss out on the card-to-card applications, leaving the next-generation replacement for PCI to one of the other standards.
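As a rough illustration of how that wide, DDR-style physical layer translates into throughput, the sketch below computes per-direction link bandwidth as width x clock x 2, since data moves on both clock edges. The 800-MHz clock and the 32-bit and 2-bit widths are assumptions reflecting the first-generation HyperTransport options; treat the results as approximate.

```python
def ht_link_bandwidth_gbytes(width_bits: int, clock_mhz: float) -> float:
    """Approximate per-direction bandwidth in Gbytes/s for a HyperTransport-style
    link: DDR signaling transfers data on both clock edges, so the transfer rate
    is twice the clock; divide by 8 to convert bits to bytes."""
    transfers_per_sec = clock_mhz * 1e6 * 2           # two transfers per clock (DDR)
    return width_bits * transfers_per_sec / 8 / 1e9   # bits -> bytes -> Gbytes/s

# A wide 32-bit link (chip-to-chip) vs. a narrow 2-bit link (lower pin count)
print(ht_link_bandwidth_gbytes(32, 800))  # ~6.4 Gbytes/s per direction
print(ht_link_bandwidth_gbytes(2, 800))   # ~0.4 Gbytes/s per direction
```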
InfiniBand is the standard focused on server interconnect, and it appears to be the incumbent in those applications. The other standards seem to be leaving this application behind and are targeting the more ambitious goal of replacing PCI.
PCI Express and RapidIO both provide additional support for PCI legacy applications, PCI Express somewhat more so since it is the newer standard. RapidIO has the implementation lead, however, and designs that use the parallel version of the standard can fairly easily migrate to the serial version.
PCI Express may end up the winner in the high-performance segment due to its high aggregate data rate. But if a lower-pin-count card-to-card format emerges that can't take advantage of the wider PCI Express options, the higher speed per link of RapidIO may turn the tide.
When selecting between RapidIO and PCI Express for an inter-card application, a designer should probably look at the bandwidth requirement of the interconnect and the number of signals available. If the bandwidth need is high but the number of interconnects is constrained, the higher per-interconnect performance of RapidIO may be the best solution. If a large number of interconnects is available, and the bandwidth required by the application is greater than the aggregate bandwidth offered by RapidIO, PCI Express might be the best choice.
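One way to make that trade-off concrete is the small decision sketch below. The per-lane payload rates and maximum widths are assumptions drawn from the first-generation figures (roughly 2.0 Gbit/s per PCI Express lane after 8b/10b coding, up to 32 lanes; roughly 2.5 Gbit/s per serial RapidIO lane at 3.125 Gbaud, up to four lanes); substitute the numbers for the actual devices under consideration.

```python
import math

# Assumed first-generation per-lane payload rates and maximum link widths;
# swap in the figures for the parts actually being evaluated.
CANDIDATES = {
    "Serial RapidIO": {"gbps_per_lane": 2.5, "max_lanes": 4},
    "PCI Express":    {"gbps_per_lane": 2.0, "max_lanes": 32},
}

def pick_interconnect(required_gbps: float, lanes_available: int) -> str:
    """Pick the candidate that meets the bandwidth target with the fewest
    lanes; if none can meet it, pick the one with the highest achievable rate."""
    best_name, best_key = None, None
    for name, c in CANDIDATES.items():
        lane_cap = min(lanes_available, c["max_lanes"])
        achievable = lane_cap * c["gbps_per_lane"]
        if achievable >= required_gbps:
            lanes_needed = math.ceil(required_gbps / c["gbps_per_lane"])
            key = (0, lanes_needed)    # meets the target: fewer lanes is better
        else:
            key = (1, -achievable)     # misses the target: more bandwidth is better
        if best_key is None or key < best_key:
            best_name, best_key = name, key
    return best_name

print(pick_interconnect(required_gbps=9.0, lanes_available=4))    # pin-constrained case
print(pick_interconnect(required_gbps=20.0, lanes_available=16))  # bandwidth-hungry case
```

With pins scarce, the higher per-lane rate wins; once lanes are plentiful and the bandwidth target exceeds what a narrow link can deliver, the wider configurations take over.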
The bandwidth 'sweet spot' will determine the winner at the end of the day.
The final victors in the upcoming standards battle are still difficult to predict, but if you match your design requirements to the standard that is targeted most closely to your application you will reduce the chance that you will need to switch interconnect horses in the middle of the data stream.