Tutorial: Floating-point arithmetic on FPGAs
December 13, 2006 -- dspdesignline.com
This article explains the basics of floating-point arithmetic, how floating-point units (FPUs) work, and how to use FPGAs for easy, low-cost floating-point processing.
Inside microprocessors, numbers are represented as integers—one or several bytes stringed together. A four-byte value comprising 32 bits can hold a relatively large range of numbers: 232, to be specific. The 32 bits can represent the numbers 0 to 4,294,967,295 or, alternatively, -2,147,483,648 to +2,147,483,647. A 32-bit processor is architected such that basic arithmetic operations on 32-bit integer numbers can be completed in just a few clock cycles, and with some performance overhead a 32-bit CPU can also support operations on 64-bit numbers. The largest value that can be represented by 64 bits is really astronomical: 18,446,744,073,709,551,615. In fact, if a Pentium processor could count 64-bit values at a frequency of 2.4 GHz, it would take it 243 years to count from zero to the maximum 64-bit integer.
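As a quick sanity check on these figures, here is a short, self-contained C sketch (not part of the original article) that prints the 32-bit and 64-bit integer ranges and reproduces the roughly 243-year counting estimate:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* Range of a 32-bit integer: 2^32 distinct values. */
    uint32_t u32_max = UINT32_MAX;   /* 4,294,967,295 */
    int32_t  i32_min = INT32_MIN;    /* -2,147,483,648 */
    int32_t  i32_max = INT32_MAX;    /* +2,147,483,647 */

    /* Largest 64-bit unsigned value. */
    uint64_t u64_max = UINT64_MAX;   /* 18,446,744,073,709,551,615 */

    /* Time to count every 64-bit value at one count per cycle at 2.4 GHz. */
    double seconds = (double)u64_max / 2.4e9;
    double years   = seconds / (365.25 * 24.0 * 3600.0);

    printf("32-bit unsigned max: %" PRIu32 "\n", u32_max);
    printf("32-bit signed range: %" PRId32 " to %" PRId32 "\n", i32_min, i32_max);
    printf("64-bit unsigned max: %" PRIu64 "\n", u64_max);
    printf("Counting to 2^64 at 2.4 GHz takes about %.1f years\n", years);
    return 0;
}

Compiling and running this prints a counting time of roughly 243.6 years, matching the estimate above.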