Hardware-based floating-point design flow
Michael Parker, Altera Corporation
EETimes (1/17/2011 3:22 PM EST)
Floating-point processing is widely used in computing for many different applications. In most software languages, floating-point variables are denoted as "float" or "double." Integer variables are used for what is known as fixed-point processing.
Floating-point processing uses a format defined in IEEE 754 and is supported by microprocessor architectures. However, the IEEE 754 format is inefficient to implement in hardware, and floating-point processing is not supported in VHDL or Verilog. Newer languages, such as SystemVerilog, allow floating-point variables, but industry-standard synthesis tools do not synthesize floating-point operations.
In embedded computing, fixed-point or integer-based representation is often used due to the simpler circuitry and lower power needed to implement fixed-point processing compared to floating-point processing. Many embedded computing or processing operations must be implemented in hardware—either in an ASIC or an FPGA.
However, due to technology limitations, hardware-based processing is virtually always done as fixed-point processing, and this limitation forces a fixed-point implementation even where floating point would be the better fit. Applications in wireless communications, radar, medical imaging, and motor control could all benefit from the high dynamic range that floating-point processing affords.
Before discussing a new approach that enables floating-point implementation in hardware with performance similar to that of fixed-point processing, it is first necessary to discuss the reason why floating-point processing has not been very practical up to this point. This paper focuses on FPGAs as the hardware-processing devices, although most of the methods discussed can be applied to any hardware architecture.
After a discussion of the challenges of implementing floating-point processing, a new approach used to overcome these issues is presented. Next, some of the key applications for floating-point processing, involving linear algebra, are discussed, along with the additional features needed to support these types of designs in hardware. Performance benchmarks of FPGA floating-point processing examples are also provided.