Modular design framework allows network processor software reuse
By Alexander Shoykhet, Senior Software Product Manager, Network Processor Division, Intel Corp., Tempe, Ariz., EE Times
August 5, 2002 (10:31 a.m. EST)
URL: http://www.eetimes.com/story/OEG20020802S0039
As applied to network processing architectures used in today's switches and routers, the challenge of sharing and reusing complex software involves several key issues. Solutions are available to help customers write reusable code and protect their software investments. These include the use of modular frameworks, data-plane libraries, and the separation of the control plane from the data plane.

To facilitate the creation and integration of highly reusable code, it is necessary to start with an overall framework. The primary characteristic of this framework must be modularity: it should be organized so that each module can be reused or replaced independently of the others. The framework would provide a strict programming methodology and a set of libraries and APIs to help customers develop software based on it.

The framework must define how application code should be partitioned into independent functional blocks, and how these blocks can be integrated to form a packet processing pipeline. To minimize the amount of state shared between blocks and maximize performance, the blocks must be coarse-grained, each performing a logical set of functions on a packet, such as IPv4 forwarding or MPLS label insertion. Functional blocks typically maintain state and expose management and configuration APIs through which their behavior can be controlled.

An important aspect of the framework is the mechanism by which context is passed between functional blocks. This context includes per-packet information, referred to as packet meta-data, such as the packet size, the offset to the packet data in memory, and the destination port. Another example is packet headers that are read and modified by different blocks in the pipeline. The framework must include APIs for passing this context from one block to another. The functions that implement these APIs would employ the most efficient means of passing the data, depending on how the blocks were assigned to hardware resources and what hardware mechanisms exist for communication between those resources.
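As a concrete illustration, the sketch below shows how such per-packet meta-data and a uniform block interface might be expressed in C. The structure layout, field names, and function signatures are illustrative assumptions for this article, not an actual framework API.

    /* Hypothetical sketch: per-packet meta-data plus a uniform
       functional-block interface. All names are illustrative. */
    #include <stdint.h>

    typedef struct packet_meta {
        uint16_t packet_size;    /* total packet length in bytes     */
        uint32_t buffer_offset;  /* offset to packet data in memory  */
        uint16_t input_port;     /* port the packet arrived on       */
        uint16_t output_port;    /* destination port, set downstream */
    } packet_meta_t;

    /* Each block exposes a per-packet entry point plus a
       management/configuration hook through which its behavior
       can be controlled. */
    typedef int (*block_process_fn)(packet_meta_t *meta, uint8_t *hdr);
    typedef int (*block_config_fn)(const void *cfg, unsigned cfg_len);

    typedef struct functional_block {
        const char       *name;
        block_process_fn  process;    /* data plane work          */
        block_config_fn   configure;  /* control plane facing API */
    } functional_block_t;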
For example, local registers may be used to link functional blocks that end up on the same Processing Element (PE) within a network processor. Network processors usually contain multiple PEs to exploit parallel processing. If the functional blocks reside on different PEs, other kinds of resources, such as memory or message queues, may be used for passing the context. This, however, should be completely transparent to the developer of these blocks. Such a model ensures that functional blocks can be instantiated in any combination without having to change the implementation of a block.
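A minimal sketch, assuming a build-time placement decision, shows how the framework could hide the transport behind one pair of calls: the same ctx_send/ctx_recv interface is backed by registers or local storage when the blocks share a PE, and by a message queue when they do not. The macro and function names are assumptions, and packet_meta_t is the type sketched above.

    #if defined(BLOCKS_ON_SAME_PE)
    /* Same PE: context can stay in registers or local memory;
       a static variable stands in for that storage here. */
    static packet_meta_t ctx_slot;
    static inline void ctx_send(const packet_meta_t *m) { ctx_slot = *m; }
    static inline void ctx_recv(packet_meta_t *m)       { *m = ctx_slot; }
    #else
    /* Different PEs: context travels through a queue; these stubs
       stand in for a hardware ring or scratch-memory queue. */
    void queue_enqueue(const packet_meta_t *m);
    void queue_dequeue(packet_meta_t *m);
    static inline void ctx_send(const packet_meta_t *m) { queue_enqueue(m); }
    static inline void ctx_recv(packet_meta_t *m)       { queue_dequeue(m); }
    #endif

Because the blocks themselves call only ctx_send and ctx_recv, reassigning them to different PEs changes the transport without changing the block code.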
Furthermore, the logic that determines the flow of packets through the system must be implemented in the application code that binds all blocks together, and not in any specific block. To preserve its reusability across applications, a functional block must not assume that a specific block will be run before or after it. Each block only indicates the logical result of its operation (for example, packet classification) and it is then up to the application to determine which block should be run next.
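In code, that dispatch logic might look like the sketch below: the application's binding code, not the classifier, maps each logical result to the next block. The result codes and block functions are hypothetical, again using the packet_meta_t type sketched earlier.

    /* Logical results a classification block might report. */
    enum block_result { RESULT_IPV4, RESULT_MPLS, RESULT_DROP };

    int  classify(packet_meta_t *m, uint8_t *hdr);  /* returns a block_result */
    int  ipv4_forward(packet_meta_t *m, uint8_t *hdr);
    int  mpls_insert(packet_meta_t *m, uint8_t *hdr);
    void drop_packet(packet_meta_t *m);

    /* The application, not any block, decides what runs next. */
    void run_pipeline(packet_meta_t *m, uint8_t *hdr)
    {
        switch (classify(m, hdr)) {
        case RESULT_IPV4: ipv4_forward(m, hdr); break;
        case RESULT_MPLS: mpls_insert(m, hdr);  break;
        default:          drop_packet(m);       break;
        }
    }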
A modular framework enables easy, virtually seamless integration of software components coming from different sources such as network processor vendors, third-party vendors and internal development groups. This type of interoperability acts as a catalyst for the creation of a rich ecosystem of software and hardware vendors around the network processors, thereby enabling customers to select best-in-class components from multiple vendors.
Such a framework also simplifies code reuse between projects and between groups within a company, with component libraries developed and used throughout the organization. This helps foster well-integrated modular designs and alleviates integration challenges.
Portability of software is another important way to preserve a company's software investment. Several types of portability are open for consideration. One is portability of code across generations of platforms and hardware products. These products may include new interfaces and media devices, as well as new functionality and support for new protocols and networking standards. A modular framework can help address these issues.
There are generally two types of blocks found in any networking application. One type is hardware- or network-interface-specific blocks, such as receive and transmit blocks for the different media interfaces (POS, ATM, Ethernet, etc.) and the switch fabric (CSIX). The other is packet processing blocks that are protocol-specific: for example, IPv4/v6 forwarding, NAT, and layer-2 bridging. It is good design practice to decouple these two types of functionality and implement them in separate functional blocks, so that each block can be replaced independently of the other.
Thus, when new media devices are added, the blocks responsible for receiving and transmitting packets on those devices can be substituted for the old ones without affecting the packet processing modules. Conversely, if new protocols or standards are introduced, it is possible to update the packet processing blocks that implement those standards without changing the other blocks.
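One way to picture this decoupling is a pipeline descriptor whose media-facing slots can be repopulated without touching the protocol slot. The sketch below, built on the hypothetical functional_block_t type from earlier, is an illustration rather than a prescribed design.

    /* Media-specific and protocol-specific blocks occupy separate
       slots, so either kind can be replaced independently. */
    typedef struct pipeline {
        functional_block_t *rx;       /* media-specific: POS, ATM, Ethernet */
        functional_block_t *process;  /* protocol-specific: IPv4, bridging  */
        functional_block_t *tx;       /* media- or fabric-specific (CSIX)   */
    } pipeline_t;

    extern functional_block_t pos_rx, eth_rx, ipv4_fwd, csix_tx;

    /* Moving from a POS board to an Ethernet board touches only
       the receive slot; the IPv4 block is reused unchanged. */
    pipeline_t pos_board = { &pos_rx, &ipv4_fwd, &csix_tx };
    pipeline_t eth_board = { &eth_rx, &ipv4_fwd, &csix_tx };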
Updating code
New platforms may also utilize new generations of network processors that provide speed advantages and address various new requirements seen in the industry. As such, the new generations of network processors may introduce new hardware features, such as enhanced instructions (ALU and bit-field operations), new processing units that perform specific functions, or new types of internal memory and registers. While customers should expect backward compatibility for software from their network processor vendors, some changes to the code may be required in order to utilize the new features of these processors to the full extent. These code changes can be minimized by the use of high-level languages, such as C. These languages provide programmers a higher level of abstraction and isolate them from hardware specifics, thus absorbing the impact when new generations of network processors are introduced and the underlying hardware architecture changes.
Intel has adopted the C language and developed a compiler for use on the packet processing elements (microengines), as we find it the most widely used and most suitable language for the embedded and networking industries. The level of abstraction that this language provides protects the programmer from changes in the underlying instruction set and the types of data storage available on the microengines. The new version of the microengine (MEv2) found on Intel's IXP2400 and IXP2800 network processors has many new types of data storage, such as local memory and specialized registers used for optimizing data access and passing data between microengines. Code written in Intel's version of C (Microengine C) is not required to specify which types of storage should be allocated for variables. Instead, the compiler can automatically assign variables to the most efficient types of storage based on how those variables are used. Thus, code written for the earlier version of the microengine (MEv1) will not have to change when compiled for MEv2, and the compiler will automatically make use of the new storage types available on the MEv2 architecture, maintaining optimal efficiency.
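The principle can be shown with a plain-C sketch; it deliberately does not reproduce Microengine C syntax, but it illustrates how leaving variables unqualified lets a retargeting compiler choose the storage.

    #include <stdint.h>

    unsigned int checksum_words(const uint16_t *hdr, int words)
    {
        /* 'sum' and 'i' carry no storage qualifier, so a compiler
           retargeted to a new microengine generation is free to
           place them in registers, local memory, or any new store
           that generation provides. */
        unsigned int sum = 0;
        for (int i = 0; i < words; i++)
            sum += hdr[i];
        return sum;
    }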
Another layer of protection against hardware changes is provided by data plane libraries that abstract a Processing Element's instruction set, provide OS-like services, and implement common data plane functions. The libraries should have generic APIs so that they can be used on multiple generations and flavors of network processors with no changes to the calling code. Their implementation, on the other hand, must be optimized for each specific network processor, generating code that fully utilizes all hardware capabilities of that processor.
The libraries are especially important in cases where the Processing Elements implement extensions to the standard RISC architecture designed specifically for processing packets, interfacing with media devices, and facilitating multi-threading processing.
Some of these extensions cannot be expressed well through the syntax of general-purpose, high-level languages such as C, and cannot be abstracted by these languages. In these cases, the data plane libraries provide the only layer of isolation from the hardware, and hence are critical for software portability.
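A data plane library boundary of this kind might be sketched as follows: a generic API that stays constant across processor generations, with the implementation selected per target at build time. The function names and target macros are assumptions for illustration.

    /* dpl_ring.h -- generic API, identical on every target. */
    void dpl_enqueue(unsigned ring_id, const packet_meta_t *m);
    int  dpl_dequeue(unsigned ring_id, packet_meta_t *m);

    /* dpl_ring.c -- implementation chosen per network processor,
       free to use hardware features that C syntax cannot express. */
    #if defined(TARGET_IXP2800)
    /* ...use this generation's hardware ring and queue support... */
    #elif defined(TARGET_IXP1200)
    /* ...manage rings in scratch memory in software... */
    #endif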
Up to now we have been talking about software that processes packets at wire speed, also known as data plane processing. While data plane software is an important component of the overall system and its reuse is crucial for protecting a company's investment, it is not the only piece in the puzzle. Data plane processing is controlled and managed by control plane software and stacks. For example, control plane software sets up and updates various data structures, like routing tables, used by the data plane.
Control plane stacks typically reside on a separate control-plane processor and exchange protocol packets and control messages with the data plane software over a control bus or a backplane. With the appearance of fully programmable network processors, some or even all of that functionality may be moved to the network processor itself. It is important, however, that the control plane be logically separated from the data plane. This allows independent development and evolution of these planes and a higher degree of freedom when integrating these classes of software.
If the two planes are separated by a set of standard APIs and protocols, equipment manufacturers would be able to choose and upgrade control stacks independently of the network processors and data plane software. These stacks may come from any vendor who supports these APIs. Upgrades of network processors and migration to newer generations will not affect the control stacks and will not require changes to control plane software.
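To make that boundary concrete, the sketch below shows one hypothetical form such a standard API could take: the control stack updates forwarding state in the data plane only through a fixed message format, leaving either side free to be replaced behind it. The message layout and function name are assumptions, not a published interface.

    #include <stdint.h>

    /* One control-to-data-plane message: add or update a route. */
    typedef struct route_update_msg {
        uint32_t prefix;         /* IPv4 destination prefix    */
        uint8_t  prefix_len;     /* significant bits in prefix */
        uint16_t next_hop_port;  /* egress port for this route */
    } route_update_msg_t;

    /* Carried over whatever joins the planes: a control bus, a
       backplane, or shared memory on a single network processor. */
    int ctrl_send_route_update(const route_update_msg_t *msg);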