Do the Math: Reduce Cost and Get the Right Communications System I/O Connectivity
Dec 05, 2007 (12:32 PM) -- commsdesign.com
As PCI Express (PCIe)-native systems become more prevalent, many commonly used communications system endpoint solutions are being redesigned for PCIe connectivity. These include system motherboards, network interface cards (NICs), network security processors (NSPs), storage adapters, and an array of other I/O functions previously available only with PCI and/or PCI-X interfaces. However, many endpoint chips have not been redesigned as PCIe-native ICs, and, in fact, for many there is no plan to ever do so.
This article examines the trade-offs associated with deploying PCIe endpoint solutions using PCIe-native silicon vs. using PCI (or PCI-X) silicon and adding a PCIe-to-PCI (or -PCI-X) bridge. These bridges have been on the market for some time; they give endpoint designers a quick migration path to PCIe and also allow PCI (or PCI-X) slots to be created on PCIe-native system boards. It was assumed from the outset that these bridges would have a market only until all endpoint solutions were available in PCIe-native silicon. However, even with PCIe-based systems on the market, there are still several endpoint solutions that have not been redesigned with native PCIe interfaces.
There are several trade-offs to factor in when deciding whether to migrate an endpoint to PCIe-native silicon. They include the cost of a PCIe-native silicon implementation relative to the size of the market opportunity, the performance requirements of the endpoint, and the availability and compatibility of the PCIe IP needed to build a PCIe-native solution. When you do the math, you find that in many cases it costs significantly less to deploy a bridge to create the PCIe solution than to implement a PCIe-native solution. This is because the cost of a new ASIC, plus the market share lost to any delay in time to market, is rarely recouped by eliminating the added cost of the bridge chip on the board.
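As an illustrative sketch of that math, the comparison can be framed as bridge cost (per-unit bridge price plus board rework) against native cost (ASIC NRE, any per-unit BOM change, and margin lost during the extra time to market). Every figure and function name below is a hypothetical assumption for illustration, not data from this article; the crossover point will differ for each design.

```python
# Hypothetical cost comparison: keep PCI/PCI-X silicon and add a PCIe bridge,
# vs. respin the endpoint as PCIe-native silicon. All numbers are assumptions.

def bridge_solution_cost(units, bridge_unit_cost, board_rework_nre):
    """Total cost of retaining the PCI/PCI-X chip and adding a PCIe bridge."""
    return board_rework_nre + units * bridge_unit_cost

def native_solution_cost(units, asic_nre, unit_cost_delta,
                         delay_months, monthly_margin_at_risk):
    """Total cost of a PCIe-native respin, including margin lost to the
    additional time to market."""
    return (asic_nre
            + units * unit_cost_delta
            + delay_months * monthly_margin_at_risk)

if __name__ == "__main__":
    units = 200_000  # assumed lifetime volume for the endpoint
    bridge = bridge_solution_cost(units,
                                  bridge_unit_cost=8.00,    # assumed bridge ASSP price
                                  board_rework_nre=150_000) # assumed board redesign cost
    native = native_solution_cost(units,
                                  asic_nre=3_000_000,       # assumed mask/IP/verification NRE
                                  unit_cost_delta=-1.50,    # assumed per-unit BOM saving
                                  delay_months=9,           # assumed extra time to market
                                  monthly_margin_at_risk=250_000)
    print(f"Bridge-based solution: ${bridge:,.0f}")
    print(f"PCIe-native respin:    ${native:,.0f}")
    print("Bridge wins" if bridge < native else "Native wins")
```

With these assumed figures the bridge path totals roughly $1.75M against nearly $5M for the respin, which is the shape of result the article describes for lower-volume endpoints; high-volume parts, where the per-unit bridge cost dominates, tip the other way.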