SoCs Let Designers Re-Architect Next-Gen Transport Equipment

Partha Srinivasan and Rajashree Mungi, Parama Networks Inc.

Telecommunication networks have evolved dramatically over the last few years, but the architectures used to build the underlying network devices haven't changed much. VLSI technology and its associated packaging have come a long way. Five years ago, 0.18-micron, 4-million-gate chips in a 35x35mm package with 1.27mm pitch were considered state-of-the-art. Today, commercially available technology has reached 90nm geometries with gate counts exceeding 10 million, package sizes have advanced from 40x40mm to 50x50mm, and ball pitch has come down from 1.27mm to 0.5mm. In short, the technology advances needed to enable dense new devices exist today.

As technology advances, the traditional integration strategy of packing neighboring components onto a single die yields denser devices, but still a non-optimal level of integration, because the basic system partitioning has not changed. A complete re-architecture of the system, on the other hand, delivers maximal integration and leverages the new VLSI technology to its fullest extent.

Advantages of a Centralized Approach

SONET/SDH equipment prices suffered a precipitous decline in the 2000-2002 time frame, caused by a combination of eroding margins and a rush of highly integrated components brought to market during that period. Those price declines have now leveled off, and most of the savings in equipment cost due to higher integration have already been realized. The initial cost of an OC-48/STM-16 ADM (add-drop multiplexer) appears to remain at the $10,000 level or above, even with further component price reductions. To reduce the cost of this equipment to the $5000 target sought by international OEMs, while enabling new multi-protocol services within the systems, we have to revisit the entire architecture and partitioning of the traditional ADM (Figure 1).
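To put the packaging numbers cited above in perspective, the short program below estimates the maximum ball count of each package generation. It is a back-of-the-envelope sketch of ours, assuming a full-grid BGA with a ball at every pitch position, which overstates usable I/O but shows the trend; the figures do not describe any specific device.

    /* Rough BGA I/O-density comparison for the package generations
     * cited above.  Assumes a full-grid array (a ball at every pitch
     * position), which overstates usable I/O but shows the trend. */
    #include <stdio.h>

    static int balls_per_side(double body_mm, double pitch_mm)
    {
        return (int)(body_mm / pitch_mm) + 1;   /* positions along one edge */
    }

    int main(void)
    {
        int old_side = balls_per_side(35.0, 1.27);  /* roughly a 28 x 28 grid  */
        int new_side = balls_per_side(50.0, 0.5);   /* roughly a 101 x 101 grid */

        printf("35x35 mm @ 1.27 mm pitch: ~%d balls\n", old_side * old_side);
        printf("50x50 mm @ 0.50 mm pitch: ~%d balls\n", new_side * new_side);
        return 0;
    }

The move from a 35x35mm, 1.27mm-pitch package to a 50x50mm, 0.5mm-pitch package raises the potential ball count from a few hundred to roughly ten thousand, which is what makes much wider on-chip integration of system functions practical.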
Figure 1: Diagram showing a traditional ADM architecture.

The requirements of equipment redundancy and survivable network topologies have been the primary drivers of ADM architectures. Traditional ADM designs use a centralized, redundant switch matrix, with traffic ports on individual interface cards. The interface cards perform section/line termination and pass the SPE data on to the central redundant switch cards for grooming. This scheme is shown in Figure 1, with each traffic card containing a framing device, some overhead-handling capability, a backplane interface device, and clock-recovery circuitry (PHY).

The traditional ADM architecture distributes the data path between the interface cards and the central switch card. The distributed data path in turn forces a distributed control path, with software required on each interface card as well as on the switch cards. This contributes to a higher part count and a more complex control path, and the software is harder to implement because the transport overhead is terminated in several places. All APS (automatic protection switching) software and DCC (data communications channel) software must keep multiple distributed processors in sync.

The application of VLSI integration has traditionally reduced the part count on individual interface cards. This approach lowers part count and cost, but it does not address the architectural complexity or the duplication of effort. The engineering development cost of the resulting system is not substantially altered by an integrated framer, PHY and backplane device. Thus, merely applying VLSI integration, without appropriate re-partitioning, does not truly reduce the cost structure of the ADM.

New Partitioning Needed

Centralization involves pooling on one card all of the elements of the SONET/SDH data path, including framing, overhead processing and switching. This process is shown in Figure 2.
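To make the synchronization burden of the distributed design concrete, consider the K1/K2 line-overhead bytes that carry APS signaling. The fragment below is a minimal sketch, under our own assumptions, of the kind of state an interface-card processor would have to push to both switch-card processors; the structure and function names are hypothetical, not taken from any real system.

    /* Sketch of the per-line APS state a distributed ADM has to keep
     * consistent between interface-card and switch-card processors.
     * Field and function names are illustrative, not from any product. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct aps_line_state {
        uint8_t  k1;          /* APS request/channel (line-overhead byte K1) */
        uint8_t  k2;          /* APS bridge/mode     (line-overhead byte K2) */
        uint8_t  signal_fail; /* local defect summary reported by the framer */
        uint32_t sequence;    /* detects stale updates on the control channel */
    };

    /* In the distributed architecture, every interface card packs messages
     * like this for both switch cards, and the switch cards must agree
     * before acting on a protection request.  In the centralized design
     * the same K1/K2 bytes are terminated and acted on by one processor,
     * so this messaging layer and the software that keeps it coherent
     * disappear. */
    static size_t pack_aps_update(const struct aps_line_state *st,
                                  uint8_t *buf, size_t len)
    {
        if (len < 3 + sizeof st->sequence)
            return 0;
        buf[0] = st->k1;
        buf[1] = st->k2;
        buf[2] = st->signal_fail;
        memcpy(&buf[3], &st->sequence, sizeof st->sequence);
        return 3 + sizeof st->sequence;
    }

    int main(void)
    {
        struct aps_line_state st = { 0x00, 0x05, 0, 1 };   /* example values */
        uint8_t msg[16];
        printf("update message: %zu bytes\n",
               pack_aps_update(&st, msg, sizeof msg));
        return 0;
    }

Multiply this per-line bookkeeping by every interface card in the shelf and by every failover scenario, and the software cost of the distributed control path becomes clear; the centralized architecture described next removes it.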
Figure 2: Diagram showing a centralized ADM architecture.

An ADM built around this centralized architecture reduces part count and cost, consolidates the data and control paths onto a single pair of redundant cards, and greatly simplifies the APS and DCC software, because the transport overhead is terminated in one place rather than across multiple distributed processors.
Extending the Centralized Architecture to MSPPs

MSPPs (multiservice provisioning platforms) are enabled by the same approach: a high degree of SoC integration combined with a centralized architecture, extended to handle packet as well as TDM services.
To explore the applications of the combination of an SoC and a centralized architecture, we look at two specific emerging MSPP applications. The first is customer-located equipment (CLE); the second is the packet ADM, a variation of the MSPP. CLE is gaining favor with carriers, which are looking for low-cost equipment they can install at a customer site to enable new services, connect legacy services and serve as a demarcation point in place of the traditional CSU/DSU. The version of CLE we will look at has recently been dubbed the micro-MSPP (Figure 3). It is typically a one-rack-unit (1U) box with a variety of service interfaces such as Ethernet, private line, and frame relay. The uplinks for the micro-MSPP are typically OC-3/12, but in certain cases will be Ethernet. In either case, the key functions of combining new and old services and providing a convenient demarcation point for carriers will drive the feature set.
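That feature set can be pictured as a simple port table. The sketch below is a hypothetical configuration structure for such a 1U box; the type names, the eight-port limit and the example service mix are our own assumptions, not details of any particular product.

    /* Hypothetical port table for a 1U micro-MSPP of the kind described
     * above; purely illustrative, not an actual product interface. */
    enum service_type { SVC_ETHERNET, SVC_PRIVATE_LINE, SVC_FRAME_RELAY };
    enum uplink_type  { UPLINK_OC3, UPLINK_OC12, UPLINK_ETHERNET };

    struct client_port {
        int               port_id;
        enum service_type service;
        int               committed_mbps;   /* rate offered at the demarc */
    };

    struct micro_mspp_config {
        enum uplink_type   uplink;          /* OC-3/12 or Ethernet uplink */
        int                num_ports;
        struct client_port ports[8];        /* a small fixed count suits a 1U box */
    };

    /* Example: an OC-12 uplink carrying one Ethernet service and one
     * legacy private-line service toward the carrier network. */
    static const struct micro_mspp_config example = {
        UPLINK_OC12, 2,
        { { 1, SVC_ETHERNET,     100 },
          { 2, SVC_PRIVATE_LINE,   2 } }
    };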
Figure 3: Next-gen centralized architecture for a micro-MSPP.

The micro-MSPP is a natural home for the centralized architecture. Cost pressures at this end of the network are extreme, with target selling prices in the $1000 range, which demands high levels of integration and reduced complexity. The single-card implementation and the moderate sophistication of the protocol processing required further argue for such an approach, in which a large share of the system functionality can be furnished by a single piece of silicon.

As the system grows in complexity, the centralized SoC architecture brings even more leverage to the tasks of reducing cost, power consumption, and overall system complexity. Our second example, the packet ADM shown in Figure 4, illustrates this. The packet ADM is a multiple-card system with I/O cards and centralized function cards.
Figure 4: Diagram showing a next-gen packet ADM.

In the packet ADM, the data-manipulation functions have all been centralized onto a redundant set of processing cards. The SoC implementation is paired with a network processor via a SPI-4.2 interface. Notably, the system has two backplane connections: one for TDM traffic and the other for data traffic. This construct allows a high degree of flexibility in the service-handling capability of the system. The alternative design, with a TDM-only backplane and Ethernet mapping cards in I/O slots, limits the ability of existing ADM elements to scale up Ethernet or other data services; that limitation is what leads MSPP designers to implement separate packet and TDM backplanes. I/O interfaces are routed to the central packet/TDM switch card over two independent sets of traces on the backplane and are directed to one of two devices according to the type of the I/O interface, packet or TDM.

The MSPP-on-a-chip (MoC) provides a SPI interface so that some of the packet traffic can be mapped into SONET. With a high-capacity channel between the MoC and the packet processor, the system gains a large degree of flexibility: the incoming traffic can be any mix, from mostly TDM to mostly data, and the service mix is controlled by the mix of low-cost I/O cards. In our example, the packet ADM uses an OC-n uplink to the carrier, with data traffic encapsulated in GFP/VCAT.
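As a concrete illustration of uplink sizing under GFP/VCAT, the sketch below computes how many virtual-concatenation members an Ethernet client needs. The member payload rates are the standard SONET figures; the program itself, and the decision to ignore GFP framing overhead, are simplifications of ours rather than details of the system described here.

    /* Rough sizing of a GFP/VCAT uplink for a packet ADM of the kind
     * described above.  Member payload rates are the standard SONET/SDH
     * figures; the client list is illustrative. */
    #include <math.h>
    #include <stdio.h>

    #define STS1_PAYLOAD_MBPS   48.384   /* STS-1-Xv  (VC-3-Xv) member  */
    #define STS3C_PAYLOAD_MBPS 149.760   /* STS-3c-Xv (VC-4-Xv) member  */

    static int members_needed(double client_mbps, double member_mbps)
    {
        return (int)ceil(client_mbps / member_mbps);
    }

    int main(void)
    {
        /* Gigabit Ethernet client carried over the OC-n uplink */
        printf("GbE over STS-1-Xv : X = %d\n",
               members_needed(1000.0, STS1_PAYLOAD_MBPS));   /* 21 */
        printf("GbE over STS-3c-Xv: X = %d\n",
               members_needed(1000.0, STS3C_PAYLOAD_MBPS));   /*  7 */

        /* Fast Ethernet client */
        printf("FE  over STS-1-Xv : X = %d\n",
               members_needed(100.0, STS1_PAYLOAD_MBPS));     /*  3 */
        return 0;
    }

Under these standard rates, a Gigabit Ethernet client fits in STS-1-21v or STS-3c-7v of the OC-n uplink, which is the kind of mix-and-match between packet capacity and TDM capacity that the MoC's SPI channel to the packet processor is meant to support.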
Wrap Up

Re-partitioning the ADM around a centralized SoC, rather than simply integrating neighboring components onto denser devices, is what allows next-generation transport equipment to reach aggressive cost targets while taking on new multi-protocol services.

About the Authors

Rajashree Mungi is a principal architect at Parama Networks. She holds an M.S. degree in Electrical Engineering from Arizona State University and can be reached at rmungi@paramanet.com.