The move to high definition (HD) -- for a wide variety of applications including broadcast equipment, displays, surveillance cameras, and medical/military imaging systems -- means that these systems must now process roughly 4x to 6x more data than standard definition (SD) required. This proliferation of HD video has pushed the implementation of various video processing algorithms onto the parallel architectures of FPGAs.
Complex video processing algorithms -- encoding, motion estimation, and scaling -- that are generally implemented using FPGAs need access to various pixel values within a single frame, or even across multiple frames (when performing motion estimation, for example). The FPGA manipulates and processes these pixel values in parallel to implement the algorithm and meet the required performance specification.
Storing an entire video frame (or multiple frames, when performing temporal encoding) is not an efficient use of on-chip memory resources, so generally only a few selected lines from a given frame are stored inside the FPGA fabric. Pixel values are often calculated as a function of kernels, blocks, or lines of pixel values surrounding the pixel of interest, and in many cases multiple lines must be stored. Implementing these video line buffers within an FPGA is what makes video applications memory intensive.
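This few-lines-at-a-time pattern can be sketched in software as a sliding window over a row stream. The following is a minimal illustrative model (not vendor IP): a buffer holds only the N most recent lines, and each new line displaces the oldest one, exposing an N-line-tall window to the kernel logic.

```python
from collections import deque

def line_buffer(rows, n_lines):
    """Yield sliding windows of the n_lines most recent rows.

    Models the FPGA pattern of keeping only a few lines of a frame
    on-chip: each incoming row displaces the oldest buffered row,
    and downstream logic sees an n_lines-tall window for kernel
    (e.g. vertical filter) processing.
    """
    buf = deque(maxlen=n_lines)  # fixed-depth storage, like embedded RAM
    for row in rows:
        buf.append(row)          # oldest row is dropped automatically
        if len(buf) == n_lines:  # window is valid once the buffer fills
            yield list(buf)

# Example: stream a tiny 5-line, 4-pixel "frame" through a 3-line buffer.
frame = [[r * 10 + c for c in range(4)] for r in range(5)]
windows = list(line_buffer(frame, 3))
# Three windows are produced: rows 0-2, then 1-3, then 2-4.
```

In hardware the same idea is typically realized as one embedded memory block per buffered line, with the window rotating through them rather than data physically moving.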
An FPGA platform that is rich in embedded memory and offers flexible memory configuration helps fit a design into the smallest possible device and/or achieve optimum signal processing performance.
Let's take a closer look at some common memory requirements when implementing video line buffers inside the FPGA fabric using embedded memory blocks. Both the total number of memory bits and the available memory configuration options can affect how video applications are implemented on an FPGA-based platform.
Figure 1: A 1080p HD frame has 1920 pixels in each of the 1080 lines
Video line buffer size
A video frame comprises many lines of pixels. Figure 1 shows a progressive HD video frame with 1080 lines of 1920 pixels each.
Lines of video frames are often stored in FPGA memory. For example, a bicubic scaling algorithm buffers 4 lines of pixels, while high-quality vertical downscaling algorithms can require 16 lines of buffering.
Next: Video line buffer configurations