Systems and Integration: What's next in CompactPCI?
By Philippe Chevallier, Senior System Architect, Motorola Computer Group, Phoenix, EE Times
January 6, 2002 (5:55 p.m. EST)
URL: http://www.eetimes.com/story/OEG20011022S0020
Compact Peripheral Component Interconnect platforms have a long history as host-controller topologies that use PCI and H.110 buses for communication in Internet telephony platforms. This was a good solution as long as digital signal processing cards lived in a relatively low-port-density environment. However, it is now necessary to design with other backplane topologies that offer higher bandwidth and a better scalability scheme.
By now, "converged network" is probably the most used term within the telecommunications community. However, this overused but simple term presents one of the most complex technical problems the telecom industry has had to face since the deployment of the first public-switched telephone network. The following key criteria have to be considered and met:
- Carrier grade--NEBS compliant
- Highly reliable (low or no downtime)
- Highest possible number of subscribers
- Scalable
- Serviceable
- Guaranteed service-level agreement to subscribers
- Long-term investment (no forklift upgrade as technology evolves)
Since it first found its way into this space, CompactPCI has rapidly encountered some roadblocks because of its fairly simple host-centric, PCI-based architecture.
Before describing further what seems to be the next trend in CompactPCI, let's take a look at the three different topologies that will be the basis of the discussion.
Bused topologies use a "multidrop" configuration to connect a variety of resources. Buses are usually wider and slower. By using width to gain bandwidth, buses require numerous pins to interconnect. This high pin count makes them impractical to connect individually to lots of resources, so the bus is shared. Sharing forces the bus to run slower, which in turn forces it to be wider still, and you rapidly reach a point of diminishing returns. Shared infrastructure, like a bus, also makes reliability an issue: any resource on the bus can compromise the integrity of the whole system.
Star topologies use a point-to-point configuration where each device uses a dedicated link to send and receive data from a central resource. This resource provides the data distribution for the system. Ethernet networks are all a hierarchy of star networks. Each subnetwork is a leg on a star of the next layer in the hierarchy.
Star topologies require redundancy to provide reliability. Reliance on a single central resource can cause a loss of all elements below the failure point. For example, the topology of the PICMG2.16 specification, also known as CPSB, is a dual-star configuration.
Mesh topologies are a superset of star topologies. They use point-to-point connections as well. As you add interconnect to eliminate "dead branches" in a star network, you reach a point where all nodes have connections to all other nodes. At this point, the hierarchy disappears. Each node can be an end point, a router, or both.
Mesh networks offer more resilience than star networks. Each node manages its own traffic. There is no dependence on a central resource. In addition, mesh networks are more scalable. Once the mesh network interconnect is in the system, capacity is added with each card. With a star network, the central resource has to have full system capacity, even if it is not immediately used.
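To make the scalability tradeoff concrete, the sketch below counts the physical links each topology needs for N boards: a bus shares one medium among N attachments, a single star needs N links into the central switch, and a full mesh needs N(N-1)/2 point-to-point links. The node counts are arbitrary examples, not figures from the article.

```c
/* Illustrative link-count comparison for bus, star and full-mesh topologies.
 * Node counts are arbitrary examples, not figures from the article. */
#include <stdio.h>

int main(void)
{
    const int nodes[] = { 4, 8, 16, 21 };   /* 21-slot chassis as an upper bound */
    const int count = sizeof nodes / sizeof nodes[0];

    printf("%6s %12s %12s %12s\n", "nodes", "bus links", "star links", "mesh links");
    for (int i = 0; i < count; i++) {
        int n = nodes[i];
        int bus  = 1;                 /* one shared medium, n attachments */
        int star = n;                 /* one dedicated link per node to the switch */
        int mesh = n * (n - 1) / 2;   /* every node connected to every other node */
        printf("%6d %12d %12d %12d\n", n, bus, star, mesh);
    }
    return 0;
}
```

The growth of the mesh column is the price of eliminating the central resource; the star column shows why that central resource must be provisioned for full system capacity from day one.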
Historically, CompactPCI cards have been built around bused topologies, specifically the H.110 bus for time-division multiplexed (TDM) traffic and the CompactPCI bus for packet payload. To transcode data or voice channels in the case of a media-gateway platform, three types of CompactPCI cards were necessary: a line interface card, a vocoder card (voice-over-Internet Protocol/digital signal processing), and a packet-processing card that was usually co-located with the host controller.
While this topology has served well, new technologies and network connectivity are reaching some of the physical limitations imposed by the two buses. First, the H.110 bus is limited to 2,048 voice channels, which represents a single OC-3 or STM-1 link. There are some new initiatives based on StarGen (PICMG 2.17) that are meant to extend the life of a PCI/H.110 solution. But while this approach definitely has some potential, some PCI bus issues remain concerning the hardening of each PCI driver.
Second, despite its theoretically high bandwidth, the CompactPCI bus presents some serious latency issues when traffic is coming from too many voice-over-Internet Protocol cards. In addition, unless a dual-host-controller architecture is used, high availability can only be reached using a 2N topology where every component is duplicated, which results in a fairly expensive solution, both in cost per channel and in real estate.
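As a rough sanity check on the first limitation, 2,048 voice channels at the nominal 64-kbit/s DS0 rate amount to about 131 Mbit/s of payload, which is on the order of a single 155.52-Mbit/s OC-3/STM-1 link. The sketch below works through that arithmetic, ignoring framing overhead.

```c
/* Back-of-the-envelope check: 2,048 H.110 voice channels versus one OC-3/STM-1.
 * Uses the nominal 64-kbit/s DS0 rate and ignores framing overhead. */
#include <stdio.h>

int main(void)
{
    const double ds0_kbps      = 64.0;        /* one PCM voice channel */
    const int    h110_channels = 2048;        /* H.110 bus limit cited above */
    const double oc3_mbps      = 155.52;      /* OC-3 / STM-1 line rate */

    double payload_mbps = h110_channels * ds0_kbps / 1000.0;

    printf("H.110 payload: %.1f Mbit/s for %d channels\n", payload_mbps, h110_channels);
    printf("OC-3/STM-1 line rate: %.2f Mbit/s\n", oc3_mbps);
    printf("=> roughly one OC-3/STM-1 worth of voice traffic\n");
    return 0;
}
```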
Nowadays, a new CompactPCI platform topology seems to be surfacing. It is based on a more distributed architecture, where each card no longer needs to be connected to either bus. Each blade offers a complete CPU, memory, I/O and even a disk drive. Each blade can be customized depending upon its final function, but the means of communication remains the same: Ethernet, an open-standard communication protocol.
In this scenario, each blade can now be connected to every other blade using a socket-based protocol over Ethernet. For redundancy, two Ethernet ports are provisioned on each CompactPCI blade, with automatic fail-over between them.
There are currently no existing standards to specify such a fail-over between two Ethernet ports, but some CompactPCI switch vendors have initiated development efforts to overcome this problem. Their proposed solution consists of providing a transparent switch-over mechanism at the socket level without any interruption of communication. All fault detection and fail-over are achieved through the Ethernet driver.
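The article places fault detection and fail-over inside the Ethernet driver, below the socket. As a rough application-level analogue only, the hedged sketch below tries a blade's primary address and falls back to a secondary one; the addresses and port are hypothetical, and a real driver-level switch-over would be invisible to code like this.

```c
/* Application-level analogue of dual-port fail-over: try the primary address,
 * fall back to the secondary. Addresses and port are hypothetical examples. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int connect_to(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    /* Hypothetical addresses: one per Ethernet port of the same blade. */
    const char *primary   = "192.168.1.10";
    const char *secondary = "192.168.2.10";

    int fd = connect_to(primary, 5000);
    if (fd < 0) {
        fprintf(stderr, "primary link down, failing over\n");
        fd = connect_to(secondary, 5000);
    }
    if (fd < 0) {
        fprintf(stderr, "both links down\n");
        return 1;
    }
    printf("connected\n");
    close(fd);
    return 0;
}
```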
To connect to the external world, an Internet Protocol (IP) switch card also offers Gigabit Ethernet links with link aggregation as defined by IEEE 802.3ad. Future implementations of this architecture will look at offering 10 Gbits/second externally, with 1 Gbit/s between resource cards.
This connectivity has proven significant advantages over the CompactPCI bus. Because traffic is carried over Ethernet, cards can be unplugged without worrying about causing any major software problems. In addition, there is no need to harden a PCI driver, as all of these functions are already built into the Ethernet drivers available from today's off-the-shelf components.
For IP connectivity, two IP switch CompactPCI cards (IPSBs) are inserted into two pre-identified slots. Each IPSB is connected to each CompactPCI processor card (also called resource cards) via Ethernet as specified by PICMG 2.16, therefore allowing each card to be "visible" to the entire network.
Given this dual Ethernet port connectivity, two possible configurations can be considered:
- All cards have an independent IP address within the same network class and are visible throughout the entire IP core network; or
- Each card has a local IP address, with only a single IP address visible to the outside world.
There are advantages and disadvantages to each configuration. The desired solution will be driven by the overall network architecture and end-to-end network availability.
In the first configuration, each CompactPCI card is visible from the entire network and has to conform to all the standard protocols to which it will be connected. For instance, for a media-gateway type of CompactPCI card, the entire H.248 and Simple Network Management Protocol (SNMP) stacks have to be provided. Each card is directly accessed from the rest of the network using its unique IP address. The IP switch card becomes a simple Ethernet hub.
In the second configuration, each CompactPCI card has a local IP address visible only from the IPSB, which advertises a single IP address to the rest of the network. This is also known as IP masquerading. Traffic is routed to each individual resource card based on other decision criteria: either a UDP port number or any other information carried within the payload. This is often the case in General Packet Radio Service networks, where traffic is routed to a specific resource card based on subscriber identification.
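As a hedged illustration of that decision criterion, the table-driven sketch below maps a UDP destination port to a resource card's local address. The port ranges and addresses are invented for the example; a real GPRS gateway would key on subscriber identity carried in the payload instead.

```c
/* Toy dispatch table for the IP-masquerading configuration: route an incoming
 * UDP destination port to a resource card's local address. All ports and
 * addresses are hypothetical. */
#include <stdio.h>

struct route {
    unsigned short port_lo;   /* inclusive range of UDP destination ports */
    unsigned short port_hi;
    const char    *card_addr; /* local address of the resource card */
};

static const struct route table[] = {
    { 10000, 19999, "10.0.0.3" },   /* DSP card, slot 3 */
    { 20000, 29999, "10.0.0.4" },   /* DSP card, slot 4 */
    { 30000, 39999, "10.0.0.5" },   /* DSP card, slot 5 */
};

static const char *lookup(unsigned short dst_port)
{
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        if (dst_port >= table[i].port_lo && dst_port <= table[i].port_hi)
            return table[i].card_addr;
    return NULL;                     /* no matching resource card */
}

int main(void)
{
    unsigned short ports[] = { 10500, 25000, 40000 };
    for (unsigned i = 0; i < 3; i++) {
        const char *card = lookup(ports[i]);
        printf("UDP port %u -> %s\n", ports[i], card ? card : "dropped");
    }
    return 0;
}
```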
This dual-star Ethernet backplane architecture presents some limitations as well. Despite its good fit for an all-IP-based network, there are some issues related to availability. TDM links such as T3 or Sonet now come directly into the DSP cards, which become tightly coupled with the voice trunks. Each trunk has to be duplicated, with switchover of the entire traffic in case of a deficiency at any level. This results in a 2N configuration and an external electronic switch to select between active and standby links. This configuration in turn can introduce additional reliability problems and complexity to the overall architecture.
Another way of accomplishing the same scenario is to separate the resource card from the line interface card and provide a means of interconnection between the two. Since we don't necessarily want to be limited by this interconnection scheme, every slot must be connected to every other slot. This connection is achieved by using a mesh topology on the J4 connector, since H.110 is no longer present in this chassis.
In the case of a media gateway with Sonet as access points, a pair of Sonet links would come into two independent line cards, each of which is connected to every DSP card. Automatic protection switching is implemented between these two line cards, along with an N+M redundancy topology between the DSP resource cards.
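To see why the N+M arrangement matters economically, the small sketch below compares the number of DSP cards needed under a 2N scheme against an N+M scheme for the same active capacity. The values of N and M are arbitrary examples, not figures from the article.

```c
/* Illustrative card-count comparison: 2N versus N+M redundancy for DSP cards.
 * N and M are arbitrary example values, not figures from the article. */
#include <stdio.h>

int main(void)
{
    const int n = 10;   /* active DSP cards needed for the offered load */
    const int m = 2;    /* spare cards in the N+M scheme */

    int cards_2n = 2 * n;
    int cards_nm = n + m;

    printf("2N scheme : %d cards (%d%% overhead)\n", cards_2n, 100 * (cards_2n - n) / n);
    printf("N+M scheme: %d cards (%d%% overhead)\n", cards_nm, 100 * (cards_nm - n) / n);
    return 0;
}
```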
In terms of serviceability, CompactPCI platforms have relied on the host-processor strategy, where all faults and configurations were stored. For instance, in order to detect when a board was being hot-swapped, the host processor had to watch the ENUM# interrupt and carry this information to the application space for decision making.
However, to gain the maximum reliability of this system, a dual-host-processor topology had to be implemented with PCI bus-bridging techniques as defined by PICMG 2.13.
Since then, a PCI-less topology--where none of the resource cards were using the PCI bus at all, other than for power and ground--has surfaced to bypass this issue. System slot management is achieved via IPMI as specified under PICMG 2.9.
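As a hedged illustration of the IPMI-style slot management that PICMG 2.9 relies on, the sketch below assembles an IPMB "Get Device ID" request frame (network function 0x06, command 0x01) and computes the two's-complement checksums the IPMB format requires. The slave addresses are hypothetical, and a real system would send the frame through the chassis's IPMB controller rather than print it.

```c
/* Sketch of IPMB request framing for an IPMI "Get Device ID" command
 * (NetFn 0x06, Cmd 0x01). Slave addresses are hypothetical; nothing is
 * actually transmitted, the frame is only printed. */
#include <stdio.h>
#include <stdint.h>

/* Two's-complement checksum: sum of covered bytes plus checksum == 0 mod 256. */
static uint8_t ipmb_checksum(const uint8_t *buf, int len)
{
    uint8_t sum = 0;
    for (int i = 0; i < len; i++)
        sum += buf[i];
    return (uint8_t)(-sum);
}

int main(void)
{
    uint8_t frame[8];
    int i = 0;

    frame[i++] = 0x20;          /* rsSA: responder slave address (hypothetical) */
    frame[i++] = 0x06 << 2;     /* NetFn 0x06 (App), responder LUN 0 */
    frame[i++] = ipmb_checksum(frame, 2);          /* header checksum */
    frame[i++] = 0x82;          /* rqSA: requester slave address (hypothetical) */
    frame[i++] = 0x01 << 2;     /* rqSeq 1, requester LUN 0 */
    frame[i++] = 0x01;          /* Cmd 0x01: Get Device ID */
    frame[i]   = ipmb_checksum(&frame[3], i - 3);  /* data checksum */
    i++;

    printf("IPMB frame:");
    for (int k = 0; k < i; k++)
        printf(" %02X", frame[k]);
    printf("\n");
    return 0;
}
```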
There is still a need for chassis-management software, but such elements can reside anywhere within or outside of the chassis, as long as the element can communicate with the chassis manager via its redundant Ethernet port.
Network management
For network-management purposes, SNMP is the protocol of choice to communicate between the OAM&P system and the media gateway, for instance. This may or may not be the same card as the one used for platform management, and it can be located within the same chassis or in an external chassis. It communicates via Ethernet with every piece of the network element, such as the DSP and line-interface cards and the chassis manager, using a distributed MIB subagent topology.
Until the telecommunication infrastructure becomes an all-IP network, asynchronous transfer mode (ATM) remains a major player, and a CPSB chassis has some difficulty providing solutions for it.
Indeed, it would not be very efficient to use an IP network in between. This would require the link-aggregation card to translate voice traffic from an IP format to a VoATM (AAL2) format. The additional latency and quality-of-service management between these two protocols would make for a very complex architecture. Instead, each DSP card produces VoATM cells, which are routed to a pair of ATM switch cards for aggregation and protection switching.
The dual-star Ethernet backplane is used only for Megaco and serviceability via SNMP. The media-gateway function is achieved by a pair of redundant processor cards connected to Ethernet via CPSB. To control and manage the VoATM traffic from the Sonet link to its corresponding AAL2 channel, the active media-gateway card sends information to each individual card using a private local-area network inside the chassis. That same architecture could easily be extended across multiple chassis, allowing a highly scalable solution.
The CompactPCI chassis, which used to be a host-centric architecture, is moving to a PCI-busless environment with higher throughput and scalability. There is no need to harden any PCI driver and no need to provide complex host-processor fail-over, meaning the two basic physical limitations imposed by the H.110 and PCI buses are eliminated. Higher throughput and higher reliability for the entire platform, with 100k-channel support per rack, is now a definite possibility within a CompactPCI price range.