Scale network processors to 40 Gbps and beyond
Jun 28, 2006 (10:46 AM), CommsDesign
The networking industry has witnessed countless advances since the 1960s. Yet despite myriad changes in applications, protocols and technologies over nearly half a century, one thing has not changed--the ever-increasing need for speed.
The "benchmark bandwidth" required of networking equipment increases by approximately an order of magnitude every decade or so. In the 1960s, 10 kbps was sufficient to connect terminals to mainframes. With the debut of distributed client/server computing in the 1980s, typical data rates increased to the 10-Mbps range with Ethernet and Token Ring LANs. Today's local- and wide-area networks now demand multiple Gigabits-per-second of throughput. And with the advent of IPTV and other bandwidth-hungry applications, tomorrow's networks will require substantially more capacity.
Over the years, the technologies employed to keep pace with these bandwidth and performance requirements have also evolved. Ordinary off-the-shelf processors worked well enough for a while. Then came custom-designed application-specific integrated circuits (ASICs), needed to process critical protocols at very high data rates. However, as protocols continued to proliferate, the use of specialized processors and architectures made development projects considerably more complex.
Throughout this period, the industry has pursued a worthy goal--using general-purpose, programmable network processors to lower development costs and accelerate time-to-market for new products and features. This article explores why fulfilling the promise of the network processor has remained so difficult, and outlines how a pipelined architecture can achieve this elusive goal.