HD video line buffering in FPGAs
By Suhel Dhanani, Senior Manager, DSP, Altera Corporation
(03/28/08, 02:00:00 AM EDT) -- Video Imaging DesignLine
The move to high definition (HD) video -- in a wide variety of applications including broadcast equipment, displays, surveillance cameras, and medical/military imaging systems -- means that these systems must now process roughly 4x to 6x as much data as standard definition (SD): a 1920x1080 HD frame carries six times the pixels of a 720x480 SD frame. This proliferation of HD video has pushed the implementation of many video processing algorithms onto the parallel architectures of FPGAs.
Complex video processing algorithms -- encoding, motion estimation, and scaling -- are commonly implemented in FPGAs, and they need access to many pixel values within a single frame, or even across multiple frames (when doing motion estimation, for example). The FPGA manipulates and processes these pixel values in parallel to implement the algorithm and meet the required performance specification.
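To make the access pattern concrete, the following sketch computes one output pixel of a 3x3 filter. It is an illustrative software model, not code from a real design; the function name apply3x3 and the power-of-two normalization shift are assumptions. The point is that each output depends on nine neighboring input pixels, all of which must be available to the processing logic at the same time for a fully parallel FPGA implementation.

#include <array>
#include <cstdint>
#include <vector>

// Hypothetical helper: computes one output pixel of a 3x3 filter.
// Each output depends on nine neighboring input pixels, which an
// FPGA implementation would read and combine in parallel.
// Caller must pass an interior pixel (row, col at least 1 from each edge).
std::uint8_t apply3x3(const std::vector<std::vector<std::uint8_t>>& frame,
                      const std::array<std::array<int, 3>, 3>& kernel,
                      int row, int col, int shift)
{
    int acc = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc)
            acc += kernel[dr + 1][dc + 1] * frame[row + dr][col + dc];
    // A power-of-two normalization maps to a simple wire shift in hardware.
    int result = acc >> shift;
    if (result < 0)   result = 0;    // clamp to the 8-bit pixel range
    if (result > 255) result = 255;
    return static_cast<std::uint8_t>(result);
}

Normalizing with a shift rather than a divide is the kind of choice that maps naturally onto FPGA fabric.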
Storing an entire video frame (or multiple frames, when performing temporal encoding) is an inefficient use of on-chip memory resources, so generally only a few selected lines of a given frame are stored inside the FPGA fabric. Because an output pixel is typically calculated from a kernel, block, or set of lines of pixel values surrounding the pixel of interest, multiple lines often have to be stored at once. Implementing these video line buffers within an FPGA is what makes video applications memory intensive.
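A minimal sketch of how such a line buffer behaves, again as a hypothetical software model: it assumes a 3x3 window, a 1920-pixel (1080p) active line, and 8-bit samples. Only the two most recent lines are stored, never the whole frame, and a small register window supplies the full 3x3 neighborhood for every incoming pixel.

#include <array>
#include <cstddef>
#include <cstdint>

// Software model of the line buffers an FPGA holds in embedded RAM
// for a 3x3 window. LINE_WIDTH assumes a 1080p active line.
constexpr std::size_t LINE_WIDTH = 1920;

struct LineBuffer3x3 {
    std::array<std::uint8_t, LINE_WIDTH> line1{};  // pixels from two lines ago
    std::array<std::uint8_t, LINE_WIDTH> line2{};  // pixels from the previous line
    std::array<std::array<std::uint8_t, 3>, 3> window{};  // 3x3 register window

    // Push one incoming pixel at column 'col'. After the call the
    // window holds the 3x3 neighborhood whose right-hand column is
    // at 'col', spanning the current line and the two stored lines.
    void push(std::uint8_t pixel, std::size_t col) {
        for (int r = 0; r < 3; ++r) {      // shift the window left one column
            window[r][0] = window[r][1];
            window[r][1] = window[r][2];
        }
        window[0][2] = line1[col];         // oldest stored line
        window[1][2] = line2[col];         // previous line
        window[2][2] = pixel;              // live pixel from the stream
        line1[col] = line2[col];           // age the stored lines at this column
        line2[col] = pixel;
    }
};

In hardware, the two stored lines would map to embedded memory blocks and the window to registers, so all nine taps are available in a single clock cycle.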
An FPGA platform rich in embedded memory, and flexible in how that memory can be configured, helps fit the design into the smallest possible device and achieve optimum signal processing performance.
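The arithmetic behind this point, under the assumed figures of 1080p resolution and 8-bit luma samples: two line buffers for a 3x3 window occupy about 30 Kbits, while a whole frame occupies roughly 16.6 Mbits -- more than the total embedded memory of many FPGAs.

#include <cstdio>

// Back-of-the-envelope memory budget for 1080p, 8-bit luma samples
// (assumed figures for illustration).
int main() {
    const long width          = 1920;
    const long height         = 1080;
    const long bits_per_pixel = 8;     // assumed luma-only processing

    const long line_buffer_bits = 2 * width * bits_per_pixel;      // 30,720 bits
    const long full_frame_bits  = width * height * bits_per_pixel; // 16,588,800 bits

    std::printf("two line buffers: %ld bits\n", line_buffer_bits);
    std::printf("full frame:       %ld bits\n", full_frame_bits);
    return 0;
}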