Servers gas up with 4-Gbyte/s PCI-X 2.0 spec
By Rick Merritt, EE Times
November 21, 2001 (2:48 p.m. EST)
URL: http://www.eetimes.com/story/OEG20011121S0099
SAN JOSE, Calif. - The Peripheral Component Interconnect interface is about to get a hefty boost via PCI-X version 2.0. Some companies say they will support the revision with products by summer, on the heels of the arrival of first-generation PCI-X. The new spec will use double- and quad-data-rate techniques to forge links with 2 to 4 Gbytes/second of data throughput, outstripping even the nascent Infiniband interconnect and driving Intel-based servers deeper into data center computing.

The interface appears poised to get broad support as the internal bus of choice for Intel-based servers. But some OEMs warn that the industry must carefully position what's becoming an embarrassment of fast-I/O riches if it hopes to avoid market confusion.

PCI-X is seen as an easy-to-implement enabler for supporting 10-Gigabit Ethernet and other fast interfaces in next-generation servers, and some backers believe the bus will dominate PC server design for the foreseeable future. The interconnect is not expected to find much of a foothold in desktop or notebook PCs, however. Nor is it expected to supplant the role of Infiniband in linking multiple systems or subsystems in a data center.

"We're expecting the PCI road map to continue a number of years into the future with this technology," said Dwight Riley, PCI-X 2.0 chairman and a server architect at Compaq Computer Corp. (Houston). "The DDR [double-data-rate] version of PCI-X itself will take us out to 2004. And Compaq will use PCI-X across its entire server line."

"I think the committee did a good job of implementing DDR with minimal impact on the silicon," said Dave Pulling, vice president of sales and marketing at chip set maker ServerWorks, a division of Broadcom Corp. (Irvine, Calif.). ServerWorks plans to support PCI-X 2.0 in chip sets for two- and four-way Intel servers that it will launch at the end of next summer, Pulling said. The company's first PCI-X-based chip sets will roll in the first quarter, and it will begin support of Infiniband with a 4X version late next year.

Not everyone is convinced PCI-X 2.0 will become the dominant interconnect in PC servers in the long term. "Compaq has no plan for using 3GIO [a serial version of PCI slated to debut in late 2003] in its servers, but I have a different take," said Michael Krause, who heads up interconnect technology for Hewlett-Packard Co.'s server group. "I think 3GIO makes a lot of sense come 2004. The best chip sets I have seen targeted for 2004 and beyond are using 3GIO."

Significant savings

The serial nature of 3GIO will also result in lower pin counts on chip sets. That creates significant cost savings at the chip and board level for companies prepared to move to the new interconnect, Krause said. Compaq's PCI-X 2.0 drive is largely an attempt to forestall a change of I/O technology for its existing users, he said. HP plans to use 3GIO as a replacement for its existing proprietary serial I/O server technology, called Ropes.

The PCI-X 2.0 work group of the PCI Special Interest Group (SIG) has brought under its umbrella work on double-data-rate and quad-data-rate (DDR and QDR) versions of PCI-X as well as work on bringing error-correction code to PCI. It expects to put its work out for member review in the first quarter of next year. The SIG typically takes specs to a final vote 30 to 60 days after a successful review.

The double-data-rate version of PCI-X uses source-synchronous signaling to capture data on both the rising and falling edges of a clock.
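The 2- to 4-Gbyte/s figures quoted above follow directly from the bus parameters. Here is a minimal back-of-the-envelope sketch, assuming a 64-bit data path and a 133-MHz base clock, with data-rate multipliers of 1, 2 and 4 standing in for conventional PCI-X 133, the DDR mode and the QDR mode; the results are peak burst rates and ignore protocol overhead.

```python
# Rough peak-throughput arithmetic for PCI-X and PCI-X 2.0 modes.
# Assumptions: 64-bit-wide bus, 133-MHz base clock, no protocol overhead.

BUS_WIDTH_BYTES = 8        # 64-bit data path
BASE_CLOCK_HZ = 133e6      # 133-MHz PCI-X base clock

def peak_throughput_gbytes(multiplier: int) -> float:
    """Peak burst rate in Gbytes/s for a given data-rate multiplier."""
    return BUS_WIDTH_BYTES * BASE_CLOCK_HZ * multiplier / 1e9

for name, mult in [("PCI-X 133 (single rate)", 1),
                   ("PCI-X 2.0 DDR", 2),
                   ("PCI-X 2.0 QDR", 4)]:
    print(f"{name}: ~{peak_throughput_gbytes(mult):.1f} Gbytes/s")
```

Run as-is, the sketch prints roughly 1.1, 2.1 and 4.3 Gbytes/s, which is where the article's 2- to 4-Gbyte/s range comes from.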
The DDR mode depends on I/O signaling in the range of 750 to 800 millivolts, although PCI-X core silicon is still expected to run at existing voltages, generally 3.3 V. A typical PCI-X implementation using phase-locked loops with a flip-chip package would need no change in pinouts to support DDR. Implementations using ball-grid array packages might need a thorough analysis of I/O voltages.

"QDR is a no-brainer after that," Riley said. "But do systems need the QDR bandwidth today? Probably not."

Roger Tipley, chairman of the PCI SIG, said the new spec arrives just as designers are looking for ways to bring high-speed interfaces like 10-Gigabit Ethernet elegantly into servers. "At some point we may want to move to dual 10-Gigabit Ethernet links, and that's where QDR comes in. Right now that might seem like overkill, but in a few years it won't seem so strange," Tipley said.

Cause for delay

PCI's fast-forward leap stands in marked contrast to the rather sedate pace of development of the popular bus technology just a few years back. A falling-out between the PCI SIG and a key Intel interconnect designer who spearheaded development of the Accelerated Graphics Port caused Intel to pull out of the initial PCI-X effort. "That's why PCI-X took two years to take off," said Cary Snyder, a senior analyst with Microprocessor Report.

At the same time, a deep division over the route to a serial, message-passing interface split computer makers into competing camps, diverting attention from PCI-X until the groups reunited on the Infiniband spec.

"Today there are a number of interconnect technologies available (PCI, PCI-X, DDR PCI-X and Infiniband) and we want to position these clearly," said Tom Bradicich, director of PC server technology and architecture at IBM Corp. "I don't think you will see one win out over another. There will be overlapping bands, but adapter-card and bridge-chip makers share some concerns that there could be confusion."

Indeed, a broad group of computer makers drawn from the ranks of the PCI SIG and the Infiniband Trade Association is said to be hammering out usage scenarios for the various technologies. The group may also try to influence the still-fluid work on 3GIO, which aims to become the PCI 3.0 standard, so that further overlap among I/O specs is minimized.

Riley said 3GIO offers low pin counts for desktop and notebook designers needing a fast internal chip-to-chip interconnect. It could be useful as those systems migrate toward 1-Gigabit Ethernet connections.

Infiniband is also a serial interconnect, but it uses a message-passing approach aimed at letting it support links between systems equipped with their own processors and operating systems. By contrast, PCI and its follow-ons are memory-mapped buses geared for direct attachments between devices that share a common host and a single operating system (a simplified illustration of the contrast appears at the end of this story). Infiniband aims to replace a number of expensive, proprietary system interconnects like Giganet, Myrinet and ServerNet, which are used to cluster systems or link subsystems. "There are four or five versions and they can cost $1,000 per card," Tipley said.

The emergence of fast I/O links is helping systems designers drive PC architectures deeper into corporate and Internet data centers. At Comdex, IBM detailed its plans for its Enterprise X Architecture, a set of scalable Intel-based servers that can be configured with as many as 16 processors, 256 Gbytes of RAM and 48 PCI-X I/O slots.
The new IBM systems will come in versions for Intel's 32-bit Foster and 64-bit McKinley processors, with the switch made simply by swapping out a processor interface chip. The servers leverage what IBM describes as mainframe-class capabilities, including hot-swap memory, up to 64 Mbytes of Level-4 cache and a 3.2-Gbyte/s coherent scalability port for linking multiple processors, even across separate chassis.
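To make the memory-mapped versus message-passing distinction drawn above a little more concrete, the following is a deliberately simplified sketch; the class names, register offset and message fields are purely illustrative and do not correspond to any real PCI-X or Infiniband programming interface.

```python
# Illustrative contrast only; all names, addresses and fields are hypothetical.

# Memory-mapped model (PCI/PCI-X style): the host and the device share one
# address space, and the driver talks to the device by reading and writing
# registers at fixed offsets.
class MemoryMappedNic:
    DOORBELL_OFFSET = 0x10        # hypothetical register offset

    def __init__(self, bar0: bytearray):
        self.bar0 = bar0          # device registers mapped into host memory

    def kick(self, descriptor_index: int) -> None:
        # A plain store to a mapped register starts the transfer.
        self.bar0[self.DOORBELL_OFFSET] = descriptor_index & 0xFF

# Message-passing model (Infiniband style): each endpoint has its own
# processor and address space; work is expressed as self-describing messages
# posted to a channel between systems.
class MessageChannel:
    def __init__(self):
        self.queue = []

    def post(self, dest_node: str, payload: bytes) -> None:
        # The fabric, not a shared bus address, routes the message.
        self.queue.append({"dest": dest_node, "payload": payload})

# Usage: the memory-mapped device is driven by the single host that owns it,
# while the message channel can link independent systems in a cluster.
nic = MemoryMappedNic(bytearray(256))
nic.kick(3)

channel = MessageChannel()
channel.post("storage-node-7", b"read block 42")
```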