Build the next generation of telecom systems with open interfaces (Part 1)
Ajay Kamalvanshi and Timo Jokiaho, Nokia Corp., and members of the Service Availability Forum
May 19, 2005 (5:00 AM)
A glance at wireless service providers' advertisements in the local newspapers confirms that reliable services such as voice, web browsing, messaging, and video voice mail are now offered at low monthly rates. This competition to provide low-cost, reliable services is placing new demands on Telecom Equipment Manufacturers (TEMs): they must deliver reliable systems at competitive prices, with the flexibility to incorporate new features rapidly. Note that TEMs have already brought equipment costs down by up to 40% in the last year. The TEMs, in turn, pass end-user demands on to hardware equipment makers and software vendors. In addition, the intense competition has shortened the traditional time-to-market window for developing new telecom applications.
Fortunately, the TEMs anticipated such needs a few years ago and started working on interfaces for modular, robust, low-cost building blocks, in hardware, software, and applications, that would interoperate with one another. Open architectures have evolved since then, and various complementary hardware and software efforts now provide the interfaces required to build robust telecom systems.
At the highest level, a telecom system has three operational planes: user, control, and management. The user (or data) plane handles user packet processing, such as forwarding or switching based on fixed parameters, at high speed. The control plane executes signaling, routing, and other protocols, handling connection setup, session setup, routing-table maintenance, and tunnel establishment. The management plane provides an administrative interface to the telecom system and handles housekeeping tasks, such as provisioning and management of node statistics. For example, in a Gateway GPRS Support Node (GGSN), the user plane processes the user-plane GPRS Tunneling Protocol (GTP-U). The control plane executes signaling protocols, such as the control-plane GPRS Tunneling Protocol (GTP-C), and routing protocols, such as Open Shortest Path First (OSPF). The management plane provides the operation, administration, and maintenance (OAM) interface.
Next-generation architectures must address not only scalability and modularity of the operational planes, but also service availability when a hardware or software component fails. Consequently, future network equipment, from chassis to software, is designed to support fault tolerance.
Chassis
A typical telecom system consists of one or more shelves in a cabinet. Each shelf has many Field Replaceable Units (FRUs), such as power entry modules (PEMs), fan modules, computing blades, switchblades, special processing blades, and management blades. The computing blades perform operations such as packet processing, forwarding, or filtering. A computing blade can host additional modular cards, such as Advanced Mezzanine Cards (AMCs) and PCI Mezzanine Cards (PMCs), for line-card functionality. The management blades perform housekeeping operations and control the computing blades. The switchblades host switches or fabric interconnect controllers to provide data transport within the shelf.
The TEMs traditionally built these systems in-house or subcontracted them to hardware equipment vendors, who built the systems to the TEMs' specifications. Consequently, the hardware components were proprietary and seldom interoperable. To standardize the chassis, the PCI Industrial Computer Manufacturers Group (PICMG) defined the PICMG 3.0 specification: the Advanced Telecom Computing Architecture (ATCA). Although originally targeted at network equipment providers (NEPs), the spec also addresses many of the TEMs' needs for modular components that can be quickly integrated to deploy high-performance, carrier-grade service solutions. It covers mechanical details, system management, power distribution, power connectors, data transport, thermal dissipation, and regulatory guidelines.
The specification standardizes the physical dimensions of all modular components, such as the front board, rear board, backplane, and shelf. Many vendors offer compute blades and switchblades as front boards (Fig. 1). For hardware compatibility, three zone connector interfaces are defined. Zone 1 defines the interface between the backplane and the front board for dual redundant power, shelf management, and hardware addressing. Zone 2 defines the interfaces for data transport: the base interface, fabric interface, update channel interface, and synchronization clock interface. Zone 3 is left undefined, allowing equipment providers to use it for proprietary extensions.
Fig. 1. A logical diagram of an ATCA shelf.
Redundancy is a primary focus of the specification. All active components can be configured to be redundant. An ATCA chassis with two shelf-management controllers and a dual redundant management bus ensures fault tolerance of low-level system management. Furthermore, multiple PEMs, fans, and switchblades provide additional redundancy. The update channels between front boards facilitate the synchronization required for redundancy implementations.
Management plane
The management plane has also evolved over time. Instead of relying on ad hoc firmware or software, dedicated management ICs now manage the chassis and its components, discovering, controlling, and monitoring the shelf's hardware. The Intelligent Platform Management Interface (IPMI) has clearly emerged as the standard interface for shelf management, and the ATCA specification has adopted it as the default low-level shelf-management interface. The ATCA shelf-management tasks include:
- Monitor, control, and assure proper operation of boards and shelf components.
- Retrieve inventory information and sensor readings.
- Receive events and failure notifications.
- Perform basic recovery operations, such as power cycling or resetting managed entities.
- Manage power, cooling, and interconnect resources of a shelf.
- Detect mismatches in backplane interconnects (and avoid possible damage).
A simple realization consists of an IPM controller on each FRU, a bus interconnect, and a shelf manager (Fig. 2). The IPM controller on each FRU connects to a redundant Intelligent Platform Management Bus (IPMB-0), based on the Inter-Integrated Circuit (I2C) bus. The ATCA specification supports both radial and bused IPMB topologies; however, due to cost constraints, a bused implementation is preferred in telecom systems. The IPM controller's functions include supporting commands to retrieve information from the FRU, powering and cooling the FRU, managing the backplane interconnects, and generating and logging events. The shelf manager, attached to IPMB-0 through a variant IPM controller called the Shelf Management Controller (ShMC), provides an interface for the shelf's system manager. It also tracks the managed devices and reports abnormal conditions to the system manager.
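To make the bus traffic concrete, below is a minimal sketch, not vendor code, of how an IPMB request frame is assembled for the standard IPMI Get Device ID command (NetFn 0x06, command 0x01). The slave addresses and sequence number are caller-supplied placeholders, and actually placing the frame on the I2C bus is left to a platform-specific driver.

```c
#include <stddef.h>
#include <stdint.h>

/* IPMB checksum: two's complement of the byte sum, chosen so that the
 * sum of the covered bytes plus the checksum equals zero modulo 256. */
static uint8_t ipmb_checksum(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    while (len--)
        sum += *buf++;
    return (uint8_t)(0 - sum);
}

/* Fills 'frame' (at least 7 bytes) and returns the frame length. */
size_t ipmb_build_get_device_id(uint8_t frame[7], uint8_t rs_sa,
                                uint8_t rq_sa, uint8_t rq_seq)
{
    frame[0] = rs_sa;                        /* responder slave address */
    frame[1] = (uint8_t)(0x06 << 2) | 0x0;   /* NetFn = App, rsLUN = 0  */
    frame[2] = ipmb_checksum(frame, 2);      /* header checksum         */
    frame[3] = rq_sa;                        /* requester slave address */
    frame[4] = (uint8_t)(rq_seq << 2) | 0x0; /* rqSeq, rqLUN = 0        */
    frame[5] = 0x01;                         /* Cmd = Get Device ID     */
    frame[6] = ipmb_checksum(&frame[3], 3);  /* payload checksum        */
    return 7;
}
```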
Hardware Platform Interface
Because IPMI sits very close to the hardware implementation, a more generic platform-management interface is needed to reduce the integration effort with management software. The Service Availability Forum's (SAF) Hardware Platform Interface (HPI) is defined primarily as a platform-agnostic shelf-management interface. The spec defines a set of C-language APIs for dynamically discovering the equipment's hardware components (known as entities); monitoring and controlling those entities; managing hot-swap capabilities; and reporting entity failures.
Physical entities, such as PEMs and blades, are mapped to a logical model. The HPI model is hierarchical, consisting of domains, resources, and management instruments. A system consists of one or more domains, each a collection of resources. A domain can represent a logical partition of equipment owned by different tenants, and it forms a logical unit for event and alarm generation. A resource, such as an FRU, represents a component in the system (again, an entity). A resource is represented by a set of management instruments, each of which defines a management capability. The types of management instruments are sensors, controls, watchdog timers, inventory data repositories, and annunciators (Fig. 3).
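As a minimal sketch of this model in use, the following code assumes an HPI B.01.01 library and its standard SaHpi.h header are available (for example, from a shelf vendor's SDK or the open-source OpenHPI project). It opens a session on the default domain, triggers discovery, and walks the domain's resource presence table (RPT), printing each resource's tag.

```c
#include <stdio.h>
#include <SaHpi.h>

int main(void)
{
    SaHpiSessionIdT session;
    SaHpiEntryIdT   entry = SAHPI_FIRST_ENTRY;

    /* Open a session on the default domain. */
    if (saHpiSessionOpen(SAHPI_UNSPECIFIED_DOMAIN_ID, &session, NULL)
        != SA_OK)
        return 1;

    /* Ask the library to discover the shelf's resources. */
    saHpiDiscover(session);

    /* One RPT entry per resource (blade, PEM, fan tray, and so on). */
    while (entry != SAHPI_LAST_ENTRY) {
        SaHpiRptEntryT rpt;
        SaHpiEntryIdT  next;

        if (saHpiRptEntryGet(session, entry, &next, &rpt) != SA_OK)
            break;

        printf("resource %u: %.*s\n", (unsigned)rpt.ResourceId,
               rpt.ResourceTag.DataLength,
               (const char *)rpt.ResourceTag.Data);
        entry = next;
    }

    saHpiSessionClose(session);
    return 0;
}
```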
The SAF's HPI working group is also mapping ATCA's modular components to HPI logical entities to help platform vendors support a uniform model. Platform vendors will provide a C library with the standard HPI header file that upper-layer system-management agents can use, enabling the TEMs to adopt new hardware with few modifications. In addition, it will reduce integration and verification costs.
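Once such a library is in place, reading a management instrument is a short call sequence. The helper below is an illustrative sketch, not part of the HPI spec: it assumes a session and resource ID obtained from a discovery loop like the one above, plus a sensor number taken from the resource's data records.

```c
#include <stdio.h>
#include <SaHpi.h>

/* Hypothetical helper: print one sensor's reading, if the sensor
 * supplies a floating-point value. */
void print_sensor(SaHpiSessionIdT session, SaHpiResourceIdT resource,
                  SaHpiSensorNumT sensor_num)
{
    SaHpiSensorReadingT reading;
    SaHpiEventStateT    state;

    if (saHpiSensorReadingGet(session, resource, sensor_num,
                              &reading, &state) != SA_OK)
        return;

    /* B.01.01 readings carry a type tag and a union of values. */
    if (reading.IsSupported &&
        reading.Type == SAHPI_SENSOR_READING_TYPE_FLOAT64)
        printf("sensor %u: %f\n", (unsigned)sensor_num,
               reading.Value.SensorFloat64);
}
```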
Data plane
The data (or user) plane is responsible for processing user or service data packets at high speed. This plane includes the line interfaces that connect to external equipment, the switch fabric, and specialized ICs or processors. The line-card interfaces are supported using PMCs or AMCs; thanks to their better manageability and hot-swap capability, AMCs are becoming the preferred choice. Data packets are switched internally within the telecom equipment through the switch fabric. The ATCA specification is flexible enough to support various topologies and physical layers; however, a dual-star topology based on 1-Gbit/s or faster Ethernet has evolved as the interface of choice.
The datapath also includes specialized hardware designed for processing user packets at high speed. This class of hardware includes ASICs, communication and network processors, and, more recently, application-specific standard products (ASSPs). Telecom equipment providers use blades with network processors for various specialized processing, from simple tasks like forwarding based on packet headers to complex tasks like TCP and security offloading and content analysis. The requirements for such specialized hardware are so diverse that equipment vendors have little common ground for standardization. However, the Network Processing Forum has defined interfaces so that network-processing elements from different vendors can interoperate.
Common modular TEM profile
We've noted that the ATCA specification is quite comprehensive and can be readily adopted for telecom equipment. However, only a few hardware vendors offer equipment that the TEMs can use. This scarcity arises primarily because some key areas of the specification are deliberately left open to support multiple implementations. In addition, the spec's feature list is exhaustive, and vendors need guidance from the telecom industry to prioritize those features for phased development. A common modular profile, consisting of the ten constituents below, can fit the needs of many TEMs.
1. Shelf: A telecom shelf typically serves more than 100,000 users and up to a million logical connections. To provide such capacity, any commercially useful shelf should have at least 14 or 16 slots for front boards. In addition, the operator should be able to populate more boards as its customer base grows, or remove boards when it shrinks. This means the boards should be interchangeable across all slots in the shelf; more specifically, boards should not be restricted to fixed slots, with the exception of the switch-card slots. To prevent failure due to excessive heat in one region, the switch slots should be the first and last slots rather than the middle ones. The chassis' appearance, including color and LED placement, should be customizable so that NEPs can adapt it to their company requirements; such flexibility helps them maintain consistency with their other equipment.
2. Regulatory: The electromagnetic compatibility (EMC) standards in North America and Europe classify equipment into two categories, Class A and Class B. Class A compliance is the de facto requirement for all industrial and central-office equipment, while Class B applies to home and some central-office equipment. All equipment for the telecom market is required to be Class A compliant.
3. Power distribution: A redundant −48 or −60 V DC power feed to each frame or cabinet must come from one or two power plants. A frame (or cabinet) contains up to three shelves, so −48 or −60 V DC is distributed to all active shelf components. Each shelf is provided 3.4 kW through battery-plant wiring, and each front board in a shelf can dissipate up to 200 W, with an additional 20 W for an optional rear transition module (RTM).
4. Platform management interface: Redundant IPMB buses must be used for hardware platform management.
5. Hot-swap support: Hot-swap support for front boards is sufficient in the first phase of system development.
6. Shelf management: To provide flexibility, the ATCA spec doesn't restrict the location of the shelf manager and its associated ShMC. For most telecom vendors, it's advantageous to keep shelf management separate from the switchblades. In addition, the shelf vendor must provide a library for the SAF's HPI B.01.01 or later for integration with system-management software (see the hot-swap sketch after this list).
7. Data transport: The base interface for control traffic should be Gigabit Ethernet in a dual-star topology. Furthermore, a 1-Gbit/s Ethernet update channel between adjacent cards is provided to realize 1:1 redundancy, also known as 2N redundancy. In the long term, the Gigabit Ethernet base interface could be upgraded to a higher bandwidth, such as 10-Gbit/s Ethernet.
For the fabric interface supporting data-plane traffic, a dual-star topology of at least 1 Gbit/s is required. The switch should support 10 Gbit/s to ensure 1-Gbit/s transfers between disjoint nodes. The 1-Gbit/s links must also handle jumbo frames of up to 9 kbytes. In the future, this can grow to higher point-to-point bandwidths, such as 10 Gbit/s.
8. Synchronization clocks: For the first phase of implementation, two clock pairs are sufficient. One pair, CLK1A and CLK1B, carries the 8-kHz system clock signals required for voice sampling in telephony. The second pair, CLK2A and CLK2B, provides 19.44-MHz system clock signals, useful for SONET/SDH networks.
9. Storage: Persistent storage is required for software packages, run-time images, configuration, and provisioning information. This storage is provided over Fibre Channel or over Ethernet using protocols such as iSCSI. In the case of Fibre Channel, a controller is needed on the switch card.
10. External or inter-shelf connectivity: Shelves are connected within or across cabinets to provide single-system functionality. For such connectivity, two or more 1-Gbit/s Ethernet ports are used on the switchblade.
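As mentioned in item 6, here is a minimal sketch, under the same HPI B.01.01 assumptions as the earlier examples, of how system-management software might react to a board's hot-swap state once discovery has yielded its resource ID. The helper function is hypothetical, not part of the spec.

```c
#include <SaHpi.h>

/* Hypothetical helper: returns nonzero if the board's extraction has
 * been requested (for instance, its handle was opened), so that
 * management software can migrate traffic away before the board is
 * pulled. */
int extraction_pending(SaHpiSessionIdT session, SaHpiResourceIdT resource)
{
    SaHpiHsStateT state;

    if (saHpiHotSwapStateGet(session, resource, &state) != SA_OK)
        return 0;

    return state == SAHPI_HS_STATE_EXTRACTION_PENDING;
}
```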
On to Part 2
Part 2 of this article will look at the relevant software standardization efforts, focusing on how those efforts have converged to realize a standards-based telecom platform.
About the authors
Ajay Kamalvanshi is a technology manager in the system technologies group of the networks division at Nokia Corp. He holds a master's degree in computer science and automation from the Indian Institute of Science, Bangalore, India, and can be contacted at ajay.kamalvanshi@nokia.com. Timo Jokiaho is the director of technology at Nokia, working with carrier-grade platforms. He holds a master's degree in computer science from the University of Helsinki, Finland, and can be contacted at timo.jokiaho@nokia.com.