Security coprocessor ties to PCI Express
By John Beaton, EE Times
November 25, 2003 (7:39 p.m. EST)
URL: http://www.eetimes.com/story/OEG20031125S0044
Security is becoming increasingly important to networking, generating much recent interest in the internal architecture and operation of specialized security acceleration hardware. Unfortunately, there has been little discussion of the interconnect from security hardware to other devices, primarily because this interconnect has been based on one or two open standards that are broadly supported and not subject to differentiation. However, the high level of churn and uncertainty in the semiconductor industry surrounding interconnect technologies makes a discussion of how the security accelerator interconnect will evolve over the next two to three years quite timely. Specifically, we'll discuss how and why the security accelerator interconnect will evolve from the dominant PCI and PCI-X of today to PCI Express technology tomorrow.

There are two scenarios in which security accelerators are typically used: in-line and coprocessor. The in-line scenario places the accelerator in the fast data path, in line with the network processor. Logically, its data path interfaces should match those used by the network processor and other silicon, such as framers and traffic managers. These interfaces are defined by two existing, well-supported standards bodies: the Optical Internetworking Forum and the Network Processor Forum. The evolution of these interfaces is well-established, so it is assumed for now that in-line security accelerators will continue down this path. Note that this scenario still requires an interface to the host processor for control and management functions.

The coprocessor scenario attaches the security accelerator to a general-purpose or host CPU. Security coprocessors have typically used the same PCI interconnect as other types of coprocessors. With high-performance host processors, this configuration can handle gigabit data rates today. We'll take a look at the coprocessor scenario because it represents the largest market and, therefore, plays the dominant role in determining the security accelerator interconnect.

There are two approaches to discussing the evolution of this security accelerator interface. The first approach is the simpler one: it is pragmatic and constraint-based. It simply recognizes that in the coprocessor scenario, the ubiquitous interface used today by security accelerators is PCI (for simplicity, PCI here also includes PCI-X). It leverages the following advantages of PCI: adequate performance and features; ubiquitous support in hardware and software; broad availability of expertise and tools for development and manufacturing; low cost; stability; and a broad range of complementary specifications.

Those advantages have been adequate to support the security coprocessor model and have given designers tremendous flexibility in implementing many different kinds of security equipment. Additional capabilities will be needed, however, as network bandwidth grows, the need for security becomes more pervasive and the demand for service availability increases. These capabilities include higher bandwidth, better scalability, lower cost and more robust reliability, availability, serviceability and manageability (RASM) features. In fact, a large enough improvement in any one of those features could be reason enough to migrate. The migration path from PCI is clear: The PCI SIG has elected PCI Express technology as the successor to the PCI and PCI-X standards.
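To put rough numbers on the bandwidth involved, the sketch below (not from the article) compares the peak theoretical bandwidth of common PCI and PCI-X configurations with a first-generation PCI Express link. The figures are textbook bus maximums (clock times width for the parallel buses, 2.5 GT/s with 8b/10b encoding per PCI Express lane) and ignore protocol overhead, arbitration and real-world transfer efficiency; they simply illustrate why PCI can carry gigabit-class security traffic today while leaving limited headroom for growth.

/*
 * Illustrative peak-bandwidth comparison only; figures are theoretical
 * maximums and ignore protocol overhead and real-world efficiency.
 */
#include <stdio.h>

int main(void)
{
    /* Shared parallel buses: clock (MHz) x width (bytes) x 8 bits, one direction at a time. */
    double pci_32_33   = 33.0  * 4 * 8 / 1000.0;   /* ~1.06 Gbit/s shared */
    double pcix_64_133 = 133.0 * 8 * 8 / 1000.0;   /* ~8.5 Gbit/s shared  */

    /* PCI Express Gen1: 2.5 GT/s per lane, 8b/10b encoding, per direction. */
    double pcie_lane = 2.5 * 8.0 / 10.0;           /* 2 Gbit/s per lane, per direction */

    printf("PCI 32-bit/33 MHz   : %.2f Gbit/s shared\n", pci_32_33);
    printf("PCI-X 64-bit/133 MHz: %.2f Gbit/s shared\n", pcix_64_133);
    printf("PCIe x1 (Gen1)      : %.2f Gbit/s each direction\n", pcie_lane);
    printf("PCIe x4 (Gen1)      : %.2f Gbit/s each direction\n", 4 * pcie_lane);
    return 0;
}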
The communications and compute industries are adopting this new standard because its value was designed from the outset to match their requirements. A variety of factors will determine the timing of the migration. These include the need for advanced features, vendor product development schedules and the need for vendors to recoup their costs. Additional factors include end users' life-cycle expectations, system prequalification costs and the deployment of processors and chip sets with PCI Express support. This timing can be different for each vendor and customer. PCI Express products will begin to sample toward the end of 2003 and vendors will begin their migration in 2004. Initial PCI Express deployments will be supported by PCI-to-PCI Express bridge chips. As a result, security coprocessors will not be forced to make an artificially early migration. Instead, the migration can be driven by cost and features that are of value to the end user. As more and more processors and chip sets implement PCI Express natively, security and other coprocessors will do likewise.

As for timing, we can learn from the migration of ISA to PCI. Applications determined the rate at which this migration took place. For applications that needed the extra performance or features of PCI, the migration happened relatively quickly. Applications that valued the stability of the hardware platform above all else are still not quite finished migrating away from ISA. We can expect to see a similar pattern in the migration from PCI to PCI Express.

Modularity sparks acceptance

The factor accelerating this migration is modular platforms. The modularity of computer platforms drove the ubiquity of PCI in compute platforms and, therefore, its adoption as the security accelerator interface of choice. This modularity required a foundation of standard interconnects, and PCI was the interconnect of choice for I/O devices, peripherals and coprocessors.

Because PCI Express is based upon a similar foundation of modular platforms and a standards-based interconnect, communications equipment is moving toward that technology in the same manner.

This brings us to the second approach, which is driven by the fundamental requirements of security-equipped platforms. Security accelerator requirements can be placed in four simple categories: compatibility, bandwidth and scalability, RASM and cost. PCI Express was designed to address all of these issues.

One hundred percent backward compatibility of PCI Express with PCI simplifies migration. No changes are required to software drivers; the effort to migrate to PCI Express is no greater than migrating to a new chip set.
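To make the driver-compatibility point concrete, here is a minimal sketch (not from the article) of a Linux-style PCI driver skeleton for a security coprocessor. Because PCI Express reuses the PCI configuration-space and driver model, the same probe code binds to the device whether it sits behind conventional PCI, PCI-X or a PCI Express link. The vendor ID, device ID and driver name are placeholders.

/* Minimal, illustrative PCI driver skeleton; vendor/device IDs are hypothetical. */
#include <linux/module.h>
#include <linux/pci.h>

#define SECACC_VENDOR_ID 0x1234   /* placeholder vendor ID */
#define SECACC_DEVICE_ID 0x5678   /* placeholder device ID */

static const struct pci_device_id secacc_ids[] = {
    { PCI_DEVICE(SECACC_VENDOR_ID, SECACC_DEVICE_ID) },
    { 0, }
};
MODULE_DEVICE_TABLE(pci, secacc_ids);

static int secacc_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    /* Identical calls work whether the device is on PCI, PCI-X or PCI Express. */
    int err = pci_enable_device(pdev);
    if (err)
        return err;
    pci_set_master(pdev);   /* enable bus-master DMA for the accelerator */
    /* ... map BARs, set up command rings, register with the crypto stack ... */
    return 0;
}

static void secacc_remove(struct pci_dev *pdev)
{
    pci_disable_device(pdev);
}

static struct pci_driver secacc_driver = {
    .name     = "secacc",
    .id_table = secacc_ids,
    .probe    = secacc_probe,
    .remove   = secacc_remove,
};

static int __init secacc_init(void)
{
    return pci_register_driver(&secacc_driver);
}

static void __exit secacc_exit(void)
{
    pci_unregister_driver(&secacc_driver);
}

module_init(secacc_init);
module_exit(secacc_exit);
MODULE_LICENSE("GPL");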
The narrowest PCI Express link (x1) provides 2 Gbits/second of bandwidth in each direction simultaneously. Links can be scaled in small increments: x2, x4, x8, x12, x16 and x32. First-generation chip sets will support x4 and x8 links. The anticipated second-generation physical layer will at least double the bandwidth per lane, based on a conservative assessment of current PHY developments.

PCI Express provides greatly expanded RASM features. These include a fundamentally reliable link layer, graceful degradation in the presence of lane failures, and logging and management of faults, among others.

Finally, PCI Express enables significant cost improvements over PCI. All other things being equal, PCI Express' serial architecture provides much lower inherent costs than PCI's parallel-bus construction in terms of the pin count and die area needed to achieve the same peak bandwidth. This includes the power and ground pins necessary to make the port work properly.

John Beaton is interconnect program manager for the Embedded Intel Architecture Division at Intel Corp. (Fremont, Calif.).