Programmable NPUs 'edge' out ASICs
By David Husak, Board Member, Network Processing Forum, San Jose, Calif., EE Times
August 5, 2002 (10:37 a.m. EST)
URL: http://www.eetimes.com/story/OEG20020802S0035
As the cores of the Internet and private networks alike are built out to satisfy current raw-bandwidth demands, aggregation has become the key element in the design of large networks. Metro-area networks; public and private enterprise networks built and managed by companies like Cable & Wireless, MCI and Sprint; multinational corporate networks; and the Internet itself are all shifting the burden of smart networking from the core, where speed rules, to the edge and access layers, where flexibility is key.
The reason? Network cores these days are composed of a relatively small number of high-bandwidth systems that do one thing very well: They process packets and cells at blinding speed. But to gain speed, network core elements have become one-trick ponies. Their functionality is pared down and, as a result, they're not very good at delivering network services that require intelligence, such as quality of service (QoS) and traffic prioritization.
Just as freeways move well enough once you're on them (major metro-area traffic chaos notwithstanding), the challenge is managing access, which is where most congestion occurs. Many communities have partially solved this problem by installing metering lights at on ramps to control the flow of merging traffic. In that case, intelligence has moved to the on ramp, or the edge. Similarly, large-scale networks require intelligence at the places where decisions are made on admitting traffic to the net: creating billing records, monitoring activity, and providing QoS and virtual private network capabilities.
The challenge for edge systems, or aggregation points, is to deliver speed, performance and functionality in this dynamic environment, so that access to the high-speed core network is efficient, effective and secure. In other words, this equipment must provide programmable bandwidth: the ability to add a multitude of networking services and capabilities while still addressing speeds to OC-48 and beyond. Network system designs based on software-programmable network processors can do the job far more quickly (delivering products to market) and efficiently (adding new protocols and specs to networking products) than conventional hardware-based designs, especially those requiring custom ASICs.
Access is all about designing and deploying systems for the smaller businesses and individual users who still plod along at the relatively low speeds provided by dial-up modems, fractional T1/E1, DSL or cable modems. Edge systems must intelligently aggregate this multitude of disparate access pipes while simultaneously dealing with high-speed core uplinks. This situation, in turn, leads to an increase in standard and proprietary networking services and features, and in the protocols that enable them.
The requirement for increased intelligence at the edge is, at least in part, economically driven. Competition for revenue among Internet service providers is now more intense than ever. Today's "differentiate-or-die" competitive scenario becomes critical as service providers look for ways to add revenue-enhancing services.
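To make the earlier point about QoS and traffic prioritization concrete, the listing below is a minimal sketch, in plain C, of a DSCP-based classifier that steers packets into priority queues. The header layout, queue names and code-point ranges are illustrative assumptions, not taken from any vendor's NPU software; a real edge system would classify on many more fields and would also enforce policing, billing and VPN policy.

/*
 * Minimal sketch of software traffic prioritization at an edge
 * aggregation point. Header layout, queue names and DSCP ranges
 * are illustrative assumptions, not a vendor API.
 */
#include <stdint.h>
#include <stdio.h>

enum queue_id { Q_EXPEDITED, Q_ASSURED, Q_BEST_EFFORT };

struct ipv4_hdr {
    uint8_t  ver_ihl;   /* version (high nibble) + header length         */
    uint8_t  tos;       /* type of service; DSCP occupies the top 6 bits */
    uint16_t total_len;
    /* remaining IPv4 fields omitted for brevity */
};

/* Map a packet's DSCP code point to one of three forwarding queues. */
static enum queue_id classify(const struct ipv4_hdr *ip)
{
    uint8_t dscp = ip->tos >> 2;

    if (dscp == 46)                 /* EF: expedited forwarding (voice) */
        return Q_EXPEDITED;
    if (dscp >= 10 && dscp <= 38)   /* AF classes: assured forwarding   */
        return Q_ASSURED;
    return Q_BEST_EFFORT;           /* everything else                  */
}

int main(void)
{
    struct ipv4_hdr voice = { .ver_ihl = 0x45, .tos = 46 << 2 };
    struct ipv4_hdr bulk  = { .ver_ihl = 0x45, .tos = 0 };

    printf("voice packet -> queue %d\n", classify(&voice));
    printf("bulk  packet -> queue %d\n", classify(&bulk));
    return 0;
}

Because the policy lives in software, changing which code points map to which queues is an edit and a rebuild, not a board spin.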
The canonical service provider dollar-less-a-month, penny-less-a-minute strategy has proved to be a go-out-of-business plan for many. As a result, it will no longer be adequate for system vendors to simply offer standard protocol sets. They must now become responsive to their customers' requirements.
The demand for adding such programmable-bandwidth functionality means that network processors become the only practical way that networking equipment can meet, adapt to and grow with the ever-changing demands of the access and edge market. In particular, all these issues indicate that the conventional approach of using fixed-function hardware and ASICs as basic building blocks of network systems must be updated.
Widespread recognition of these demands has, in fact, resulted in the organization of a new breed of industry consortia, exemplified by the Network Processing Forum. The objective: to create a platform of portable and reusable building blocks that can be interconnected, configured and programmed to address the broadest possible range of system requirements.
By the late 1990s at network systems companies everywhere, next-generation ASICs were being developed to power next-generation equipment. Larger companies typically ran dozens of parallel ASIC programs. But attempts to predict the market carry with them significant elements of risk. The investment is large; it can cost millions of dollars to build an ASIC. If the design is faulty, fixing it may be just as costly.
There is, however, another branch on the network systems architecture genealogy tree. Way back when the first routers on Arpanet were built, they were simply mainframes with multiple communication controllers. As Internet protocols were codified and problems discovered, programmers would load new software, reboot the systems and go.
Performance was terrible, particularly by today's standards, because the computers weren't optimized for I/O. This led the way to specialized hardware acceleration and to complete hardware-based systems, leaving behind the flexibility and adaptability afforded by software-based systems.
Speed is still high and growing, but functional demands are getting more complex and disparate: packet protocols, ATM protocols, voice/video/data convergence and interoperability. Simply put, a network processor is a programmable chip that can implement those kinds of functions as efficiently as ASICs, at a fraction of the development time and investment, and at far less risk of premature obsolescence. Network processors incorporate the best features of hardware and software while eliminating the disadvantages of each. Unlike general-purpose processors like the Pentium and PowerPC, network processor architectures are specifically optimized to handle the demanding high-speed I/O and protocol-processing requirements of network systems.
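The listing below sketches the kind of per-packet fast path such a device executes: receive, parse the headers, look up a forwarding decision, rewrite the header, transmit. The frame layout, lookup table and function names are simplified assumptions for illustration, not any real NPU instruction set or vendor API.

/*
 * Illustrative per-packet fast path of the kind a network processor
 * runs. Layout and lookup are simplified stand-ins, not a vendor API.
 */
#include <stdint.h>
#include <stdio.h>

#define ETH_HLEN      14   /* Ethernet II header length                 */
#define IP_TTL_OFFSET 8    /* offset of TTL within the IPv4 header      */

struct pkt {
    uint8_t  data[1518];
    uint16_t len;
};

/* Toy next-hop table keyed on the first octet of the destination. */
static int lookup_next_hop(uint8_t first_octet)
{
    static const int port_by_prefix[256] = { [10] = 1, [172] = 2, [192] = 3 };
    return port_by_prefix[first_octet];   /* 0 means "no route: drop"   */
}

/* One pass through the fast path: returns egress port, or -1 to drop.
 * (IP checksum update after the TTL decrement is omitted for brevity.) */
static int process_packet(struct pkt *p)
{
    if (p->len < ETH_HLEN + 20)
        return -1;                                 /* runt frame        */

    uint8_t *ip  = p->data + ETH_HLEN;
    uint8_t  ttl = ip[IP_TTL_OFFSET];
    if (ttl <= 1)
        return -1;                                 /* TTL expired       */
    ip[IP_TTL_OFFSET] = (uint8_t)(ttl - 1);        /* decrement TTL     */

    uint8_t first_octet = ip[16];                  /* dst addr, byte 0  */
    int port = lookup_next_hop(first_octet);
    return port > 0 ? port : -1;
}

int main(void)
{
    struct pkt p = { .len = 64 };                  /* zero-filled frame */
    p.data[ETH_HLEN + IP_TTL_OFFSET] = 64;         /* TTL = 64          */
    p.data[ETH_HLEN + 16] = 10;                    /* dst 10.x.x.x      */

    printf("egress port: %d\n", process_packet(&p));
    return 0;
}

On a network processor the same parse-lookup-modify loop is spread across many hardware threads or microengines so that wire rate is sustained, but the logic remains software and therefore remains changeable.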
Software-programmable network processors carry obvious advantages. Designers can buy them off the shelf today; many big-name processor vendors like Motorola, Intel and AMCC have their software-programmable NPUs in production. There's no need to launch a protracted ASIC development cycle. Network processors can be programmed using powerful state-of-the-art software tools, coded in high-level language, debugged and performance-verified, all on a desktop.
Furthermore, developers don't have to predict feature set requirements far down the road; devices using network processors can be delivered in six to 12 months, giving vendors faster time-to-market and letting them define feature sets more accurately. If features and standards change, vendors can simply update their programs and load new firmware; new hardware isn't required.
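One way to picture why a firmware load can stand in for a hardware respin: if packet handling is dispatched through a table of protocol handlers, supporting a new protocol amounts to registering one more entry in the next software image. The protocol numbers and handler names below are illustrative assumptions, not any particular NPU SDK.

/*
 * Sketch of feature addition by firmware update: handlers are looked up
 * in a dispatch table, so a new protocol needs a new entry, not new
 * silicon. Names and numbers here are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

typedef void (*proto_handler)(const uint8_t *payload, uint16_t len);

static void handle_tcp(const uint8_t *payload, uint16_t len)
{
    (void)payload;
    printf("TCP segment, %u bytes\n", (unsigned)len);
}

static void handle_udp(const uint8_t *payload, uint16_t len)
{
    (void)payload;
    printf("UDP datagram, %u bytes\n", (unsigned)len);
}

/* Dispatch table indexed by IP protocol number (6 = TCP, 17 = UDP). */
static proto_handler handlers[256] = {
    [6]  = handle_tcp,
    [17] = handle_udp,
};

/* A later firmware image can add, say, SCTP (protocol 132) without any
 * change to the underlying hardware. */
static void handle_sctp(const uint8_t *payload, uint16_t len)
{
    (void)payload;
    printf("SCTP packet, %u bytes\n", (unsigned)len);
}

static void register_handler(uint8_t proto, proto_handler fn)
{
    handlers[proto] = fn;
}

static void dispatch(uint8_t proto, const uint8_t *payload, uint16_t len)
{
    if (handlers[proto])
        handlers[proto](payload, len);
    else
        printf("protocol %u: no handler, dropped\n", (unsigned)proto);
}

int main(void)
{
    uint8_t dummy[32] = { 0 };

    dispatch(132, dummy, sizeof dummy);     /* not yet supported          */
    register_handler(132, handle_sctp);     /* the "firmware update"      */
    dispatch(132, dummy, sizeof dummy);     /* now handled in software    */
    return 0;
}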
And if there are bugs in the software or the processor itself, programmers can code around the problems.