FPGAs: Embedded Apps: Telecom puts new spin on programmable logic
By Ron Wilson, EE Times
July 1, 2002 (4:46 p.m. EST)
URL: http://www.eetimes.com/story/OEG20020628S0095
The relationship between programmable logic and the communications industry reaches back to the days of PALs. But beginning with the Internet explosion of the late 1990s, that relationship took on a special meaning, and eventually a path of its own. Today, it is fair to say that neither the programmable logic business nor the communications/networking infrastructure is as it would have been had the two not been joined in common cause. And the rest of the electronics industry stands as a beneficiary of those changes.
In the beginning, communications and networking engineers perceived the advent of FPGAs and complex PLDs in pretty much the same light as everyone else. The devices were handy ways of implementing glue logic in prototypes and low-volume applications. If there was anything unique about the relationship, it was that this application area tended to have a plethora of interfaces, all of them not quite too fast to implement in existing technology. That tended to mean that the FPGA was often used with external transceivers, and that designs often devolved into joint developments between networking engineers and FPGA customer engineers, both struggling to get a little more throughput from a recalcitrant logic fabric. Those trends would become a hallmark.
This quiet evolution would have been left to follow Moore's Law in peace except for a coincidence of several powerful forces in the mid-1990s. First, of course, was the explosive build-out of the global network, both in bandwidth and in nodes. Second was a fundamental change in network switch architecture. And third was the gradual loss of financial rationality in the industry.
The buildout created not only more demand, but demand for accelerating evolution. New data rates appeared in months instead of decades. Protocols appeared, mutated and in some cases disappeared with such frequency as to undermine the whole concept of a standard. As a consequence, time to market--and the related figure, time required to adapt to a change in the market--became more important than manufacturing cost.
This, combined with the naturally growing capacity and speed of FPGAs, created a new opportunity. The parts were still hopelessly behind ASICs in speed, capacity and energy consumption. But their ability to slash design time--and, more important, time to early production--made them allies of design teams fighting to stay on the current generation of switching products.
At about the same time, the rapid increase in data rates was having another impact on switch architectures. Traditionally, switching and routing under complex protocols had been handled as a general-purpose computing task. Arriving data, once separated from its transport packaging, was read into main memory, classified and sent to an output port by an embedded CPU under program control, and with little hardware assistance. But the data began arriving too fast. This forced architects to break the system into two layers: one for system control and for tasks that could be performed with greater latency, and one that could perform inspection, classification and routing at wire speed: a control plane and a data plane. FPGAs, with their wondrous flexibility but limited speed, remained in the control plane. And FPGA vendors, with their unbounded taste for growth, set about to change that.
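To make the split concrete, here is a minimal sketch in C of the two-layer architecture: a fast path that matches each arriving packet against a forwarding table, and a slow-path punt to the control-plane CPU for anything the fast path cannot handle. The names, rule table and port numbers are hypothetical illustrations, not any vendor's design, and a real data plane would use a TCAM or hashed trie rather than a linear scan to meet its clock budget.

```c
/* Hypothetical sketch of the control-plane/data-plane split described above. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t dst_prefix;  /* destination address prefix       */
    uint32_t mask;        /* prefix mask                      */
    int      out_port;    /* output port for matching packets */
} route_rule;

/* Illustrative forwarding table; in hardware this would live in
 * on-chip RAM or an external CAM. */
static const route_rule table[] = {
    { 0x0A000000u, 0xFF000000u, 1 },  /* 10.0.0.0/8     -> port 1 */
    { 0xC0A80000u, 0xFFFF0000u, 2 },  /* 192.168.0.0/16 -> port 2 */
};

/* Data plane: classify one packet header at wire speed. */
static int classify(uint32_t dst_addr)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if ((dst_addr & table[i].mask) == table[i].dst_prefix)
            return table[i].out_port;
    return -1;  /* no match: punt to the control plane */
}

int main(void)
{
    const uint32_t packets[] = { 0x0A010203u, 0xC0A80101u, 0x08080808u };
    for (size_t i = 0; i < sizeof packets / sizeof packets[0]; i++) {
        int port = classify(packets[i]);
        if (port < 0)  /* control plane: slow path, greater latency */
            printf("punt 0x%08" PRIX32 " to embedded CPU\n", packets[i]);
        else           /* data plane: forward at wire speed */
            printf("forward 0x%08" PRIX32 " -> port %d\n", packets[i], port);
    }
    return 0;
}
```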
In order to serve in the data plane, FPGAs would require the high-speed, low-swing differential interfaces that were increasingly used between chips there. They would need significant and flexible on-chip memory to implement the dual-port RAMs and FIFOs that were the essential glue of data paths. In addition, they would need much more speed. In some applications, that speed would be needed not just for shuffling packets, but for the signal processing algorithms necessary to recover data at the baseband.
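The FIFO role is easy to picture in software. The sketch below models, under assumed names and sizes, the behavior a dual-port block RAM provides in hardware: a ring buffer whose producer and consumer advance independent pointers over shared storage.

```c
/* Software model of the FIFO "glue" of data paths; names and depth
 * are illustrative assumptions, not a hardware primitive's API. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FIFO_DEPTH 16u  /* power of two, so a mask replaces a modulo */

typedef struct {
    uint32_t buf[FIFO_DEPTH];
    unsigned head;  /* write pointer (producer side, "port A") */
    unsigned tail;  /* read pointer  (consumer side, "port B") */
} fifo;

static bool fifo_push(fifo *f, uint32_t word)
{
    if (f->head - f->tail == FIFO_DEPTH)
        return false;                         /* full: producer must stall */
    f->buf[f->head++ & (FIFO_DEPTH - 1)] = word;
    return true;
}

static bool fifo_pop(fifo *f, uint32_t *word)
{
    if (f->head == f->tail)
        return false;                         /* empty: consumer waits */
    *word = f->buf[f->tail++ & (FIFO_DEPTH - 1)];
    return true;
}

int main(void)
{
    fifo f = { .head = 0, .tail = 0 };
    uint32_t w;
    fifo_push(&f, 42);     /* producer writes */
    if (fifo_pop(&f, &w))  /* consumer reads */
        printf("popped %u\n", (unsigned)w);
    return 0;
}
```

The full and empty tests above are essentially the flag logic a hardware FIFO derives from its two pointers; the unsigned wraparound stands in for the pointer counters.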
Vendors responded with a significant change in direction. They began to take a strong interest in such technologies as LVDS and Utopia. They moved away from using logic cells as tiny embedded RAMs, and began to provide moderate and then large blocks of fast, configurable RAM diffused on the FPGA die. And in some cases they began to add diffused signal processing elements--integer multipliers or multiply-accumulate blocks--to their fabric. These new FPGAs could approach the I/O throughput, on-chip memory and signal-processing throughput of moderately advanced ASICs. And they won a role on the data plane.
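As a rough illustration of the work those diffused multiply-accumulate blocks take on, the hypothetical C fragment below computes one output sample of a short FIR filter, a staple of the baseband recovery mentioned above. The tap count and Q15 coefficients are placeholders; in an FPGA, each tap's multiply-accumulate would map onto one hardwired MAC element.

```c
/* Placeholder FIR example: one output sample via multiply-accumulate. */
#include <stdint.h>
#include <stdio.h>

#define TAPS 4

/* Placeholder fixed-point (Q15) coefficients. */
static const int16_t coeff[TAPS] = { 4096, 8192, 8192, 4096 };

/* One FIR output: sum of coeff[k] * x[n-k]. A 32-bit accumulator is
 * plenty for 4 taps; more taps would need extra guard bits. */
static int32_t fir_sample(const int16_t x[TAPS])
{
    int32_t acc = 0;
    for (int k = 0; k < TAPS; k++)
        acc += (int32_t)coeff[k] * x[k];  /* one MAC per tap */
    return acc >> 15;                     /* scale back from Q15 */
}

int main(void)
{
    const int16_t window[TAPS] = { 1000, 2000, 3000, 4000 };  /* x[n]..x[n-3] */
    printf("y[n] = %d\n", (int)fir_sample(window));
    return 0;
}
```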
Insanity and recovery
Meanwhile, the dot-com bubble had ignited demand for network capacity far beyond reason. Carriers appeared to be willing to pay any price whatsoever to the first vendor to roll out a new feature set or performance level. And so the traditional economic constraint that reserved FPGAs for prototyping was dissolved. Network equipment vendors found that they could ship production systems stuffed with multi-hundred-dollar FPGAs and easily recover the staggering bill-of-materials cost. FPGAs became, as the chip vendors had dreamed, a production technology.
But of course it was not sustainable. The huge inflows of capital that had made carriers insensitive to cost evaporated. The demand for new features slowed, and then ceased. Finished goods began to gather dust on the shelves. And FPGAs, despite their remarkable new capabilities, appeared headed back to the prototype shop.
Yet sometimes the fine print that taketh away granteth a boon as well. The same absence of capital that ended the madness also ended the blank checks that design teams had been receiving to develop new networking ASICs using COT design flows. Given the need for continuing development and the limited production forecasts, vendors began looking at FPGAs as a means of keeping the engine running during the bitter cold of recession.
Another idea began to dawn as well. What if there wasn't going to be another boom? What if the industry could only afford to develop a few platform ASICs instead of one $50 million chip design for each new idea?
Such a platform chip would be widely shared among system designs and among vendors. It would have the highest-speed, hardest bits done in standard-cell logic. But it would have to be sufficiently adaptable to permit several vendors to differentiate themselves. It might even have enough flexibility to operate in several different modes, and to respond to changes in the still-unstable protocols.
The solution, conceptually, was to embed pieces of programmable fabric inside a conventional SoC. And that is about where we stand today. Large, highly featured FPGAs, some even with embedded RISC cores, are still used on the data plane, in prototyping and in what production is still going forward. SoC design activity is substantially reduced. And a generation of new ideas--approaches to programmable fabric embedded within an SoC--is moving from research to early deployment. The first fruits of this effort should be visible within a year.
The design teams who wrote articles for this week's In Focus highlight several applications of current FPGAs in production. They justify their use on the basis of flexibility, but also because the parts are so feature-rich that they have become cost-effective in moderate volumes. Surprisingly, even moderate-density, fast CPLDs are participating in this trend. And that is a hint of what is to come.
Related Articles
- FPGAs: Embedded Apps : Building mesh-based distributed switch fabrics with programmable logic
- FPGAs: Embedded Apps : OC-48 SONET receiver consumes significantly less logic in FPGA
- Embedded Systems: Programmable Logic -> FPGAs don remote reprogram habits
- Programmable Logic: FPGAs get flexible for PCI Express
- Programmable logic/customizable CPU cores adapt hardware to apps