Build the next generation of telecom systems with open interfaces (Part 2)

Ajay Kamalvanshi and Timo Jokiaho, Nokia Corp., and members of the Service Availability Forum

Telecom equipment manufacturers' (TEMs) need for low-cost, modular building blocks for next-generation telecom systems has led to the adoption of standards-based hardware and software interfaces. In the past, software architectures were often tied to proprietary hardware and applications. In fact, TEMs' product names frequently identified a specific hardware platform, processor architecture, ASICs, operating system (OS), and middleware rather than functions. Moving to a newer hardware platform mandated a partial or complete overhaul of the software architecture. Developing a new product meant developing new hardware, adopting a new OS, and porting middleware and applications, so a product took two to five years to develop. With a growing market that demands new services quickly, a robust, modular software architecture is needed. A unified telecom software architecture addresses this need (Fig. 1). The architecture consists of three layers with well-defined interfaces. Because the modules in each layer expose well-defined interfaces, a module can be replaced with minimal impact on the other layers, or swapped for a more competitive, better-performing building block at low integration and verification cost.
1. Common software architecture for telecom systems.

The lowest layer is the hardware platform, representing telecom hardware such as an ATCA chassis or a blade server, with its particular processor architecture and hardware dependencies. The middle layer (system platform software) consists of several software building blocks with standard interfaces. The carrier-grade OS provides basic services such as scheduling, memory protection, file-system services, and low-level interprocess communication, such as shared-memory support. In addition, the carrier-grade OS supports the serviceability, scalability, and availability features described later. The platform drivers, such as an HPI driver, provide an interface for managing the hardware components of the equipment, such as PEMs, fan trays, and blades. The network-processing framework handles user-plane packet processing and encapsulates the functionality of network-processing elements, such as network and communication processors. The cluster middleware provides distributed fault-tolerance features such as monitoring software failures, handling failover, and defining redundancy models. The system-management modules provide a generic administrative interface for configuring and provisioning the overall system. The top (application) layer consists of management applications providing specific interfaces, such as CLI and SNMP; distributed telecom applications, like GGSN and SGSN; and standard routing applications, such as OSPF and BGP.

Carrier-grade OSs

The Open Source Development Labs' Carrier Grade Linux working group was established to collect requirements from network and telecom equipment providers and independent software vendors specifically for Linux. The working group published the carrier-grade Linux requirements to help Linux distributors demonstrate readiness for adoption in telecom systems. The latest requirements, version 3.0, are a superset of version 2.0.2. The requirements are split into seven documents, each describing one area of carrier-grade OS functionality. The main requirements are support for the following features:
Linux distributors have started including these features in their carrier-grade editions, although many still support version 2.0.2. This support indicates that Linux will soon be a viable alternative for next-generation telecom systems as cost pressures continue. The cost of an OS includes training, support, and licensing, all of which are much lower with Linux.

Cluster middleware

The Service Availability Forum (SAF), a consortium of TEMs, middleware vendors, and application software vendors, addresses this challenge by defining two open interface specifications between applications and the cluster middleware: the Application Interface Specification (AIS) and the Hardware Platform Interface (HPI). The AIS focuses on middleware interfaces for applications and on defining redundancy models, while the HPI focuses on providing a uniform interface for system-management applications. The AIS logical model is designed to support applications ranging from web servers to telecom applications (like GGSN). In this model, the application software provides a service: a set of actions that satisfies a request from a user or another system. A collection of components implements a service. For example, a web server may have one component that handles the HTTP protocol and another that handles transactions. A component can be mapped to a Unix process, although the spec doesn't enforce this mapping. The collection of components for a service on a node forms a service unit. The logical model also defines the component service instance and the service instance as abstractions for the workload handled by components and service units, respectively. Service units are associated to form a service group, which provides protection against the failure of a service unit. Component service instances are similarly associated to form a protection group. The set of cluster interfaces that the SAF's AIS defines are:
Making a telecom application highly available using the AIS spec is not complex. An application like GGSN can be made highly available by:
A Gateway GPRS Support Node (GGSN) application can be implemented as a single process acting as one component, which forms a service unit (Fig. 2). A service group is an association of the two service units, and a protection group is an association of the two processes running on different nodes. The service instances and component service instances represent the workload and are managed by the availability management framework (AMF).

2. A sample mapping of a 2N redundancy model telecom application.
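The sketch below illustrates what such a component looks like in code. It is a minimal sketch, assuming the SAF AIS C bindings of the B-series specification (saAmf.h); exact callback sets and signatures vary between AIS releases, and the ggsn_start_traffic()/ggsn_stop_traffic() hooks are hypothetical application code, not part of the specification.

/* Minimal sketch of a GGSN-like component registering with the SAF AIS
 * Availability Management Framework (AMF), based on the B-series C
 * bindings; signatures differ slightly between AIS releases. */
#include <stdio.h>
#include <saAmf.h>

static SaAmfHandleT amf_handle;
static SaNameT      comp_name;

/* Hypothetical application hooks (stubs for illustration only). */
static void ggsn_start_traffic(void) { puts("GGSN: active, handling sessions"); }
static void ggsn_stop_traffic(void)  { puts("GGSN: standby"); }

/* The AMF assigns a component service instance and tells the component
 * whether it is active or standby within the 2N service group. */
static void csi_set_cb(SaInvocationT invocation, const SaNameT *name,
                       SaAmfHAStateT ha_state, SaAmfCSIDescriptorT csi_desc)
{
    if (ha_state == SA_AMF_HA_ACTIVE)
        ggsn_start_traffic();
    else
        ggsn_stop_traffic();          /* quiescing handling omitted for brevity */
    saAmfResponse(amf_handle, invocation, SA_AIS_OK);
}

static void csi_remove_cb(SaInvocationT invocation, const SaNameT *name,
                          const SaNameT *csi_name, SaAmfCSIFlagsT flags)
{
    ggsn_stop_traffic();
    saAmfResponse(amf_handle, invocation, SA_AIS_OK);
}

static void terminate_cb(SaInvocationT invocation, const SaNameT *name)
{
    saAmfResponse(amf_handle, invocation, SA_AIS_OK);
}

int main(void)
{
    SaAmfCallbacksT cb  = { 0 };
    SaVersionT      ver = { 'B', 1, 1 };

    cb.saAmfCSISetCallback             = csi_set_cb;
    cb.saAmfCSIRemoveCallback          = csi_remove_cb;
    cb.saAmfComponentTerminateCallback = terminate_cb;

    /* Attach to the AMF, learn the configured component name, and register. */
    if (saAmfInitialize(&amf_handle, &cb, &ver) != SA_AIS_OK)
        return 1;
    saAmfComponentNameGet(amf_handle, &comp_name);
    saAmfComponentRegister(amf_handle, &comp_name, NULL);

    /* From here on the AMF drives the component through the callbacks. */
    for (;;)
        saAmfDispatch(amf_handle, SA_DISPATCH_BLOCKING);
}

In a 2N service group, the AMF instantiates one such component on each of the two nodes and assigns the HA states so that exactly one is active at a time; when the active component or its node fails, the AMF reassigns the active state to the standby.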
Data-plane software interface

The Network Processing Forum (NPF) architecture uses a layered approach (Fig. 3). The first layer above the network-processing elements is the interconnect layer, such as ForCES, which provides the low-level messaging services used to send packets across the various network processors. The upper layers use this service to provide location transparency. Above the interconnect layer are three important functional constituents: the NPF functional APIs (NFAPIs), the NPF service APIs (NSAPIs), and the operational APIs.
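The following is a purely illustrative sketch of this layering; none of the identifiers (nsapi_ipv4_route_add, nfapi_fe_prefix_add, and so on) are taken from the NPF specifications. It only shows how a service-level call from a routing stack might fan out to per-element functional API calls that the interconnect layer would carry to each forwarding element, which is what gives the control plane its location transparency.

/* Hypothetical sketch of the NPF-style layering; these identifiers are
 * illustrative only and do not appear in the NPF specifications. */
#include <stdint.h>
#include <stdio.h>

#define MAX_FES 4                      /* forwarding elements in the system */

typedef struct {
    uint32_t prefix;                   /* IPv4 prefix, host byte order */
    uint8_t  prefix_len;
    uint32_t next_hop;
} ipv4_route_t;

/* Functional-API level: one call per network-processing element.  In a
 * real system this would be marshalled over the interconnect layer
 * (e.g., a ForCES-like protocol) to the target element. */
static int nfapi_fe_prefix_add(int fe_id, const ipv4_route_t *rt)
{
    printf("FE %d: add %08x/%u -> %08x\n",
           fe_id, rt->prefix, (unsigned)rt->prefix_len, rt->next_hop);
    return 0;
}

/* Service-API level: what a routing stack such as OSPF or BGP calls.
 * The service layer hides how many NPEs exist and where they are,
 * fanning the update out to every forwarding element. */
static int nsapi_ipv4_route_add(const ipv4_route_t *rt)
{
    for (int fe = 0; fe < MAX_FES; fe++)
        if (nfapi_fe_prefix_add(fe, rt) != 0)
            return -1;
    return 0;
}

int main(void)
{
    ipv4_route_t rt = { 0x0a000000u, 8, 0xc0a80101u };  /* 10.0.0.0/8 via 192.168.1.1 */
    return nsapi_ipv4_route_add(&rt);
}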
3. The layered NPF architecture.

The NSAPIs expose a vendor-independent interface to control-plane applications, such as routing, and encapsulate the underlying distribution of the NPEs. These APIs control service-specific functionality of the network processors, such as IP routing and MPLS, and are used by protocol-stack and software vendors. The NFAPIs are vendor-independent interfaces to the individual NPEs and define the different stages of packet processing, known as logical functional blocks (LFBs). For example, NFAPIs are defined for IP forwarding, classification, and QoS. A packet-processing path may be established by chaining LFBs in a specific order to achieve the desired processing. The application discovers the available LFBs through these APIs and then selects building blocks, based on the NPF functional and service APIs defined for the selected LFBs, to realize the required system behavior. As network processors become increasingly application specific, the NPF plans to define interfaces for service-specific protocols, such as TCP and stream offloading. In addition, application-specific interfaces, like firewall and SSL, may soon evolve.

While the NPF's primary focus is to provide interfaces for network-processing elements, high availability is also a key consideration. The NP Forum is aligned with the SAF and uses its complementary services, such as the availability management framework. The NFAPIs could be viewed as service areas in addition to the SAF's services, like event, lock, and membership. It is recommended that NFAPI and NSAPI providers use the SAF's services to make their interfaces highly available.

System management

These management agents define interfaces for the entire equipment, including hardware, cluster middleware, and applications. For example, MIBs are defined for the SAF's HPI, the AIS availability management framework, and the AIS service areas. The last system-management layer consists of management applications, such as command-line interfaces or network-management adaptors. Newer interfaces, such as the Systems Management Architecture for Server Hardware (SMASH), may also be used in the future, particularly for managing multiple heterogeneous clusters.

An important aspect of system management for highly available systems is software management. In particular, in-service software upgrades are becoming a necessity rather than a "nice-to-have" feature. Many middleware vendors support this feature, but so far the area remains complex. The SAF is exploring a standards-based approach to software upgrades.

Editor's Note: In Part 1, we looked at standardization efforts in equipment hardware and low-level system management. Here in Part 2, we discuss the data plane and cover details of a common modular profile for telecom equipment vendors.