Implementing floating-point algorithms in FPGAs or ASICs
Kiran Kintali, MathWorks
embedded.com (June 19, 2018)
Floating point is the preferred data type for high-accuracy calculations in algorithm modeling and simulation. Traditionally, when you want to deploy such floating-point algorithms to FPGA or ASIC hardware, your only choice has been to convert every data type in the algorithm to fixed point in order to conserve hardware resources and speed up calculations. Converting to fixed point reduces mathematical precision, and it can be challenging to strike the right balance between data type word lengths and mathematical accuracy during conversion. For calculations that require high dynamic range or high precision (for example, designs that have feedback loops), fixed-point conversion can consume weeks or months of engineering time, and achieving numerical accuracy forces the designer to use large fixed-point word lengths.
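To see why feedback loops are particularly troublesome, consider a first-order IIR section whose state is quantized on every pass through the loop. The C sketch below is illustrative only (it is not from the article, and the coefficient, word length, and input are arbitrary choices): because the loop recirculates its own quantization error, a short word length can make the per-step update round to zero, so the output stalls short of the true steady-state value that a double-precision reference reaches.

#include <stdio.h>
#include <math.h>

/* Quantize x to a signed 16-bit fixed-point code with 15 fractional
   bits (range roughly [-1, 1)), rounding to the nearest code. */
static double q16(double x)
{
    return round(x * 32768.0) / 32768.0;
}

int main(void)
{
    const double a = 0.999;   /* pole close to the unit circle */
    double y_ref = 0.0;       /* double-precision reference state */
    double y_fix = 0.0;       /* state re-quantized on every step */

    /* Step input of 0.5; the true steady-state output is 0.5. */
    for (int n = 0; n < 20000; n++) {
        y_ref = a * y_ref + (1.0 - a) * 0.5;
        y_fix = q16(a * y_fix + (1.0 - a) * 0.5);
    }
    printf("double reference : %.6f\n", y_ref);
    printf("16-bit fixed loop: %.6f\n", y_fix);
    return 0;
}

Widening the word length shrinks the stall error but costs hardware; finding the narrowest type that still meets the accuracy budget is exactly the iteration that makes fixed-point conversion of feedback loops so time-consuming.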
In this article, we will introduce the MathWorks Native Floating-Point workflow for ASIC/FPGA design, using an IIR filter as an illustration. We will review the challenges of fixed-point conversion and compare the area and frequency tradeoffs of single-precision floating point versus fixed point. We will also show how combining floating point and fixed point can deliver much higher accuracy while reducing conversion and implementation time in real-world designs. You will see why modeling directly in floating point matters and how, contrary to the popular belief that fixed point is always more efficient, it can significantly reduce area and improve speed in real-world designs with high dynamic range requirements.
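As a rough illustration of the dynamic range argument (again a sketch, not from the article; the magnitude and resolution figures are made up), the C snippet below counts the bits a single fixed-point type would need to span a given range at a given resolution, and compares that with the fixed 32 bits of an IEEE 754 single-precision float:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double max_mag  = 1.0e6;    /* largest magnitude to represent */
    double min_step = 1.0e-6;   /* finest resolution required     */

    /* Sign bit + integer bits + fractional bits for one fixed-point
       type that covers the whole range at the requested resolution. */
    int int_bits  = (int)ceil(log2(max_mag));
    int frac_bits = (int)ceil(-log2(min_step));
    printf("fixed point: 1 + %d + %d = %d bits\n",
           int_bits, frac_bits, 1 + int_bits + frac_bits);
    printf("float32    : 32 bits, ~24-bit significand\n");
    return 0;
}

For these figures the fixed-point type needs 41 bits, wider than a 32-bit float, which hints at why single-precision hardware can come out smaller and faster when the dynamic range requirement is high.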