No size fits all for signal processing on FPGA (RF Engines)
EE Times: Latest News
Steve Matthews (11/08/2004 9:00 AM EST)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=51202117
Dramatic advances in FPGA technology have resulted in devices that can be used in very high-performance signal-processing applications, with the latest "platform" devices supporting processing power that is hundreds or thousands of times greater than that of traditional programmable DSPs. But harnessing this performance while maintaining an efficient design can be time-consuming and difficult, requiring an in-depth knowledge of signal-processing algorithms and an understanding of the nuances of FPGA implementation.
Numerous intellectual-property solutions are becoming available that allow such signal-processing blocks as fast Fourier transforms, FIR filters and other common functions to be rapidly integrated. It is nonetheless apparent that traditional approaches to IP development are resulting in only the most-generalized functional blocks, which do not meet the needs of even moderately demanding applications and do not permit the full capabilities of the FPGA to be realized.
The reason for these limitations relates to the numerous engineering trade-offs that must be considered when creating a design for an FPGA. The FFT algorithm serves as a useful example. When developing an FFT for a programmable DSP, it is fairly straightforward to develop a single library function that will be broadly optimal for the vast majority of requirements. Configuration parameters, such as FFT length, can be selected at run-time, and data memory can be allocated dynamically to suit.
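The run-time flexibility of a DSP library FFT can be sketched as follows — a minimal recursive radix-2 implementation in Python (chosen here purely for illustration; a real DSP library would be optimized C or assembly), where the transform length is simply whatever the caller passes in at run-time:

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT.

    The length (any power of two) is determined at run-time from the
    input, with working storage allocated dynamically to suit -- the
    one-size-fits-all model available to the programmable-DSP designer.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# The same routine serves any power-of-two length selected at run-time:
for n in (8, 64, 1024):
    spectrum = fft([1.0] + [0.0] * (n - 1))  # impulse in, flat spectrum out
```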
This one-size-fits-all approach is not open to the FPGA designer, however. The resources required by the FFT must be fixed at design time and will depend heavily on the longest length of transform that must be supported. Further, dramatically different architectural approaches must be considered in order to achieve the required performance while maintaining a silicon-efficient design. For example, parallel implementations can be used to support sample rates many times faster than the FPGA clock — but at the expense of significantly more FPGA resources.
Fitting the design within the smallest possible device is a common aim, since even in moderate production quantities this can yield significant cost savings. Fine-tuning of such aspects as the number of bits used to represent integers can often save enough silicon to achieve the smaller device. In other situations, the designer may be more concerned about power usage and may flex the architecture to achieve a power-efficient combination of FPGA clock rates and silicon usage.
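The effect of bit-width tuning can be quantified with a small fixed-point model. The sketch below (hypothetical helper names, stdlib Python only) quantizes a test waveform to a signed fixed-point format and measures the worst-case rounding error, showing why each bit trimmed from a data path is a direct trade against precision:

```python
import math

def quantize(x, bits):
    """Round x (assumed in [-1, 1)) to a signed fixed-point value
    with the given total bit width."""
    scale = 2 ** (bits - 1)
    q = max(-scale, min(scale - 1, round(x * scale)))
    return q / scale

def worst_error(bits, samples=1000):
    """Worst-case rounding error over one cycle of a test sine wave."""
    err = 0.0
    for i in range(samples):
        x = 0.9 * math.sin(2 * math.pi * i / samples)
        err = max(err, abs(x - quantize(x, bits)))
    return err

# Each extra bit roughly halves the worst-case quantization error,
# so trimming unneeded bits saves silicon at a predictable cost:
e12 = worst_error(12)
e16 = worst_error(16)
```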
The downside of this kind of customization is that the resulting implementations are often heavily tied to one particular application. Porting the design to a different set of requirements can be time-consuming, and starting again is often the most efficient approach. This clearly contradicts the reuse paradigm.
To address this need for highly customized IP, companies are shifting their emphasis from the development of standard IP blocks to a more architectural focus. Under this model, the IP vendor focuses its effort on thoroughly understanding the signal-processing algorithm in question and develops a number of architectures for different performance points. In many cases, like that of the FFT, the algorithm can be broken down into a number of smaller building blocks.
The requirement, then, is to be able to quickly configure and integrate the various low-level building blocks to meet particular requirements. By implementing the architectures in parameterized VHDL code, producing the final implementation becomes simply a matter of selecting the appropriate algorithmic building blocks and synthesizing the design. It is increasingly practical to automate this final step.
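The selection step can itself be captured in code. The sketch below is a hypothetical architecture chooser (the selection rule and parameter names are assumptions, not RF Engines' actual tooling): given a required sample rate and an achievable fabric clock, it picks the smallest parallel configuration that meets throughput, the kind of decision an automated generation flow would make before emitting parameterized VHDL:

```python
def choose_architecture(sample_rate_msps, fpga_clock_mhz):
    """Pick the smallest power-of-two parallelism that meets throughput.

    Hypothetical selection rule for illustration: a serial pipeline
    handles one sample per clock, so the number of parallel data paths
    must satisfy paths * f_clock >= f_sample.
    """
    paths = 1
    while paths * fpga_clock_mhz < sample_rate_msps:
        paths *= 2
    return {
        "parallel_paths": paths,
        "architecture": "serial" if paths == 1 else f"parallel-{paths}",
    }

# A 1-Gsample/s requirement on a 250-MHz fabric clock needs 4 paths:
cfg = choose_architecture(1000, 250)  # {'parallel_paths': 4, ...}
```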
Bit-true models in a high-level simulation language, such as Matlab, can provide a reference for the design as it progresses through each stage of the synthesis. This, too, can be built into the automated process.
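The essence of a bit-true reference is that every intermediate value is rounded exactly as the hardware would round it, so the model and the synthesized design agree bit for bit. The sketch below (in Python rather than Matlab, purely for illustration; the rounding scheme is an assumption, not a specific product's) models a single radix-2 butterfly both ways and compares them against a floating-point reference:

```python
import cmath

def butterfly_float(a, b, tw):
    """Floating-point reference for a radix-2 butterfly."""
    t = b * tw
    return a + t, a - t

def to_fixed(z, bits=16):
    """Bit-true rounding of a complex value to fixed point.

    Assumed scheme: round-to-nearest with two integer bits of
    headroom for butterfly growth (an illustrative choice).
    """
    scale = 2 ** (bits - 2)
    return complex(round(z.real * scale), round(z.imag * scale)) / scale

def butterfly_fixed(a, b, tw, bits=16):
    """Bit-true model: every intermediate rounded as the model's
    fixed-point scheme would round it."""
    t = to_fixed(b * to_fixed(tw, bits), bits)
    return to_fixed(a + t, bits), to_fixed(a - t, bits)

# Compare the bit-true model against the reference at one twiddle angle;
# with 16-bit data the discrepancy stays within a few LSBs:
tw = cmath.exp(-2j * cmath.pi / 8)
ref = butterfly_float(0.5 + 0.25j, 0.25 - 0.5j, tw)
fix = butterfly_fixed(0.5 + 0.25j, 0.25 - 0.5j, tw)
```

In a real flow the same comparison would be run over full input vectors at each stage of the synthesis, as the article describes.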
With these techniques, it is possible to rapidly produce signal-processing IP for FPGAs that's highly optimized for its target application. For the system developer, the approach offers an expedient, low-risk route to product.
Steve Matthews (steve.matthews@rfel.com) is technical sales executive for RF Engines Ltd. (Isle of Wight, U.K.).
All material on this site Copyright © 2005 CMP Media LLC. All rights reserved.