Robust designs cut vendor hype
By Michael Krause, Senior Interconnect Architect, Hewlett-Packard Co., Palo Alto, Calif., EE Times
October 28, 2002 (10:18 a.m. EST)
URL: http://www.eetimes.com/story/OEG20021024S0011
Over the years, the industry and its customers have been yanked through various vendor-driven interconnect adventures only to be disappointed as reality displaced hype. To get past the hype, developers should ask four critical questions. By applying the answers to the different kinds of systems they build, they can set a practical course for which interconnects to use.
- Does the technology deliver strong customer-visible value?
- What are the technology-transition problems and costs?
- When is the right time to deploy the technology and under what criteria?
- Are there existing technologies that can evolve to effectively solve the problem while minimizing customer impact?
For standalone systems such as notebooks and desktops, it does not take a crystal ball to see that Ethernet and USB will continue to dominate and evolve to meet performance and cost requirements. Therefore, let's focus on PCI and the Accelerated Graphics Port (AGP), the two interconnects under intense marketing pressure to be replaced by PCI Express.
Beneath the PCI Express hype about bandwidth and quality of service, vendor cost and design flexibility are the primary reasons to use Express in standalone systems. Economic conditions, combined with the need for greater component and board integration, are what drive vendors to push for Express.
The current Express specifications provide the starting point for developing silicon, software drivers and a PCI-compatible form factor. But from a solution perspective, there is still more to be developed, such as Express-to-PCI bridges and firmware. Given that this is a new, complex technology for all vendors, combined with ongoing specification development, it becomes clear that credible solutions will not ship until the second half of 2004.
Aggressive signal rates
A first use for Express is graphics. If Express adopts a signaling rate aggressive enough to match the historic AGP curve, we should see it double graphics bandwidth every three years through the rest of the decade. For I/O, the answer is not as clear. The Express PCI-compatible form factor is convenient, but it yields no customer-visible value while increasing vendor and channel transition costs, as well as customer confusion.
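To put rough numbers on that AGP-matching curve, here is a minimal sketch in Python. The baseline of roughly 2.1 Gbytes/second for AGP 8x in 2002 and the three-year doubling cadence are illustrative assumptions drawn from the trend described above, not specification values.

```python
# Illustrative projection: graphics bandwidth doubling every three years.
# Baseline assumption: AGP 8x at roughly 2.1 Gbytes/s peak, shipping in 2002.
BASE_YEAR = 2002
BASE_GBYTES_PER_S = 2.1
DOUBLING_PERIOD = 3  # years

for year in range(BASE_YEAR, 2011, DOUBLING_PERIOD):
    doublings = (year - BASE_YEAR) // DOUBLING_PERIOD
    print(f"{year}: ~{BASE_GBYTES_PER_S * 2 ** doublings:.1f} Gbytes/s")
```

Run as written, this projects roughly 4.2 Gbytes/s by 2005 and 8.4 Gbytes/s by 2008, which is the shape of the curve Express would have to track.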
What is required is a new module form factor, akin to a videogame cartridge, that fundamentally changes the way vendors develop platforms and customers buy them. While today's platforms are not going away, a new modular platform could alleviate frustration and breathe life into this rather stagnant and homogeneous systems space. The module form factor for PCI Express is being developed and should be completed in time for second-half 2004 solution delivery.
Systems in the data center have their own set of dynamics. For computer servers, appliances and I/O modules, three interconnects warrant examination: PCI-X, PCI Express and InfiniBand. Given the recent announcements concerning InfiniBand, we will focus on PCI-X and PCI Express.
Data center computers span a wide range of offerings, from $500 systems to $1 million platforms. Customer demands for quality, performance and stability lead to complex trade-offs and a more conservative interconnect-transition strategy. Thus, Hewlett-Packard and others helped create PCI-X and its latest incarnation, PCI-X 2.0. Given the ability to scale PCI-X 2.0 to 8 Gbytes/second and beyond, it is clear that PCI-X solutions will be around for many years to come.
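The scaling claim is simple arithmetic: peak bandwidth is bus width times transfer rate. A sketch follows; the 1066-MT/s entry is our assumed future step, the one that would carry a 64-bit PCI-X bus past 8 Gbytes/second.

```python
# Peak-bandwidth arithmetic for a 64-bit PCI-X bus: bytes per transfer
# times transfers per second. The 1066 MT/s row is an assumed future
# scaling step, not a shipping speed grade.
BUS_WIDTH_BYTES = 64 // 8  # 64-bit parallel bus

transfer_rates_mts = {
    "PCI-X 133": 133,
    "PCI-X 266 (DDR)": 266,
    "PCI-X 533 (QDR)": 533,
    "PCI-X 1066 (assumed)": 1066,
}

for name, mts in transfer_rates_mts.items():
    gbytes = BUS_WIDTH_BYTES * mts * 1e6 / 1e9
    print(f"{name}: {gbytes:.2f} Gbytes/s peak")
```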
The PCI-compatible form factor of Express, an interconnect that HP engineers helped define, fails the test of the four critical questions on multiple counts: it provides no customer-visible value, and its transition costs and impacts are unreasonable. It is, therefore, effectively dead on arrival.
Again, what is required is a new module form factor that will bring improved hot-plug management and the potential adaptation of standalone-focused form factors to new small-footprint solutions such as server blades. Work is already under way to create this form factor and it is expected to be completed in the first half of 2003.
Thus, we believe that PCI Express should ship in the second half of 2004 as a chip-to-chip interconnect that bridges to PCI-X 2.0 with only a handful of on-board native I/O devices. Once the module work is completed and vendors can migrate a sufficient number of I/O devices to Express, expect to see native Express I/O slots by, say, 2006. Then designers will face a tough set of choices on how many PCI-X 2.0 and PCI Express slots to provide at a given design point.
Ethernet will continue to be the primary link for the switches, routers and other devices used to interconnect systems in a data center. But Ethernet must evolve.
First, remote direct memory access (RDMA) over TCP/IP will fundamentally change application development and solution delivery when it arrives in products in late 2003. To make effective use of RDMA for both clustering and storage, Ethernet switch providers must modify their implementations to provide low-latency switching of 100 to 300 ns and true quality of service. Latency improvements can be derived from design and process advancements, but QoS is another matter.
Differentiating 802.1p-tagged packets in switches and adapters requires dedicated buffer resources. It also requires standardized 802.1p arbitration algorithms to reduce head-of-line blocking in switches and adapters, and standardized switch ingress-to-egress port scheduling per priority set, to manage fabric bandwidth and latency on a given path between end nodes.
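To make the buffering idea concrete, here is a minimal sketch, not any standard's algorithm, of dedicated per-priority queues behind a simple strict-priority arbiter. The dedicated queues are what keep one priority's traffic from head-of-line blocking another's.

```python
# A minimal sketch of per-priority buffering at an egress port.
from collections import deque

NUM_PRIORITIES = 8  # 802.1p defines priority values 0 (lowest) to 7 (highest)

class EgressPort:
    def __init__(self):
        # One dedicated queue per priority instead of a single shared FIFO,
        # so a stalled low-priority frame cannot block high-priority traffic.
        self.queues = [deque() for _ in range(NUM_PRIORITIES)]

    def enqueue(self, frame, priority):
        self.queues[priority].append(frame)

    def dequeue(self):
        # Strict-priority arbitration: serve the highest non-empty queue.
        # A real, standardized algorithm would need weighted scheduling
        # to keep low priorities from starving.
        for priority in reversed(range(NUM_PRIORITIES)):
            if self.queues[priority]:
                return self.queues[priority].popleft()
        return None

port = EgressPort()
port.enqueue("bulk-backup frame", priority=1)
port.enqueue("cluster-RDMA frame", priority=6)
print(port.dequeue())  # cluster-RDMA frame: not blocked behind bulk traffic
```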
These QoS improvements are already on the way for other interconnects, along with new OS and middleware-management software. The creation of standard methods and algorithms for Ethernet can build upon these efforts and create a more robust end-to-end QoS solution.
Let's talk about interconnects for storage systems. Today, block-oriented storage systems are dominated by Fibre Channel storage-area networks. The question on everyone's mind is whether Fibre Channel can withstand the pending onslaught of iSCSI, which carries SCSI protocols over Ethernet. The quick answer is yes: Fibre Channel will survive and continue to evolve to a 10-Gbit/s link. But over five to seven years, iSCSI will replace Fibre Channel for the majority of customers.
To date, iSCSI deployment has stalled because of current economic conditions; those same conditions are driving customers to demand, and vendors to create, a credible converged-fabric solution. Indeed, computer clusters and storage links must converge on a single interconnect and protocol suite: Ethernet and TCP/IP.
To create a converged solution, the RDMA Consortium and the Internet Engineering Task Force are developing RDMA-based wire protocols and upper-layer protocol-mapping specifications, such as the Sockets Direct Protocol and iSCSI over RDMA, that will allow these protocols to operate over well-defined standard hardware and software interfaces.
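The payoff of a mapping such as the Sockets Direct Protocol is that applications keep the ordinary sockets API while the transport underneath moves to RDMA. A minimal sketch of that idea follows; the code is plain sockets, and the point is that none of it would need to change when SDP maps the same calls onto RDMA-capable hardware.

```python
# Ordinary sockets code: under SDP, the same API calls would ride an
# RDMA-capable NIC, with no change to the application.
import socket

# A connected pair stands in for a client/server link across the fabric.
server, client = socket.socketpair()

client.sendall(b"block read request")   # looks like TCP to the application
print(server.recv(64))                  # b'block read request'

server.close()
client.close()
```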
Customers will be able to deploy this single, converged fabric when it is combined with new operating systems, drivers and data-center-management software and the QoS improvements suggested above for Ethernet. Initial solutions will appear in late 2003 or early 2004.
Given the dominance of Ethernet and TCP/IP in delivering file-oriented storage systems, expect to see these systems also ride the technology wave based on RDMA and iSCSI.