Several contrasting approaches can be used for electronic image stabilization. The first, and most popular, uses prominent image features to generate frame-to-frame flow vectors. Typically, this approach involves detecting features and then tracking them between frames. If the frame-to-frame movement is assumed to be small and high detection thresholds are used, the implementation can be relatively simple. However, performance and robustness can be poor when operating with diverse imagery.
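To make the idea concrete, the simplest form of frame-to-frame motion estimation — when movement is assumed small — is a direct search for the translation that best aligns two frames. The sketch below is purely illustrative (it is not RFEL's design, and the function names and synthetic texture are invented for the example): it performs exhaustive block matching, picking the shift that minimises the mean absolute difference (MAD) over the frames' overlapping region.

```python
# Illustrative sketch only (not RFEL's implementation): estimate a
# global frame-to-frame translation by exhaustive block matching,
# minimising the mean absolute difference (MAD) over the overlap.

def mad(ref, cur, dx, dy):
    """Mean absolute difference between ref and cur offset by (dx, dy)."""
    h, w = len(ref), len(ref[0])
    total = count = 0
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            total += abs(ref[y][x] - cur[y + dy][x + dx])
            count += 1
    return total / count

def estimate_shift(ref, cur, search=4):
    """Return the (dx, dy) within +/-search pixels that minimises MAD."""
    best = min((mad(ref, cur, dx, dy), dx, dy)
               for dy in range(-search, search + 1)
               for dx in range(-search, search + 1))
    return best[1], best[2]

# Synthetic 16x16 texture and a copy of it shifted by (2, 1).
def texture(x, y):
    return (3 * x * x + 5 * y * y + x * y) % 97

ref = [[texture(x, y) for x in range(16)] for y in range(16)]
cur = [[texture(x - 2, y - 1) for x in range(16)] for y in range(16)]
print(estimate_shift(ref, cur))  # -> (2, 1)
```

The exhaustive search grows quadratically with the search window, which is why a method like this only stays cheap under the small-motion assumption the paragraph describes.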
Figure 2: Stabilized image set. The rotation correction can be readily gauged from the edges of the image frame.
In terms of the derived requirements, the RFEL stabilization function was specified to deliver a stable image under the most demanding of applications, covering driving aids for military vehicles, diverse airborne platforms, targeting systems and remote border-security cameras. Furthermore, the algorithm design was required to stabilize images subjected to two-dimensional translation and rotation from both static and moving platforms. It is envisaged that the stabilization function will be used to support many different physical equipment installations. As such, the center of rotation could lie within the camera or external to it, and the stabilization algorithm must be able to cope with either case. The stabilization function was required to provide real-time correction at frame rates of up to 150 Hz for various imaging devices and for resolutions of up to 1080p, including both daylight and infrared cameras. For example, a 1080p color camera operating at 8 bits and with a frame rate of 60 Hz necessitates operation with an input data rate of about 1 Gbit/s.

An FPGA-based hardware implementation provides the computational resources needed to process input data rates of several gigabits per second with the selected spatial-frequency stabilization method. However, even with the inherent processing power of an FPGA, the implementation has to be carefully tailored to satisfy the stringent latency and power-consumption constraints. Given that the stabilization function is likely to be only one component of a larger processing suite, it was also necessary to minimize the number of gates and external memory accesses used.
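The ~1 Gbit/s figure quoted above follows directly from resolution × bit depth × frame rate. A quick check, taking the article's "8 bits" as 8 bits per pixel:

```python
# Sanity check of the quoted input data rate for a 1080p camera
# delivering 8 bits per pixel at 60 frames per second.
width, height = 1920, 1080
bits_per_pixel = 8          # per the article's example
frame_rate_hz = 60

bits_per_second = width * height * bits_per_pixel * frame_rate_hz
print(f"{bits_per_second / 1e9:.2f} Gbit/s")  # -> 1.00 Gbit/s
```

Note that a camera delivering 8 bits per color channel (24 bits per pixel) would triple this figure, which is consistent with the requirement to sustain input rates of several gigabits per second.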
The level of stabilization accuracy achieved under a very diverse and demanding range of evaluation test data was typically within ±1 pixel, even when subjected to random frame-to-frame displacements of up to ±25 pixels in the x and y directions and frame-to-frame rotational variation of up to ±5°. The performance of the stabilization function is illustrated in figures 1 and 2, using a small number of frames from a daylight camera.
The performance of the stabilization design is shown using five consecutive frames from a sequence subjected to a random frame-to-frame rotation of up to ±1° about the center of the image. The design has proven to be extremely flexible and can be used on both static and moving camera platforms. Although this capability can be delivered through an FPGA-only implementation, further capability and performance can be achieved through additional software functions hosted on the ARM multicore processors embedded in the latest FPGAs.
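The article does not detail RFEL's spatial-frequency stabilization method, but phase correlation is a widely used frequency-domain shift estimator and gives a feel for how such methods work. The sketch below is a hedged, illustrative 1-D version using a naive DFT for clarity; a real-time FPGA design would instead use pipelined 2-D FFTs, and the signal here is a synthetic example invented for the demo.

```python
import cmath

# Illustrative 1-D phase correlation (a common spatial-frequency shift
# estimator; not necessarily RFEL's method). Naive DFT used for clarity.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def phase_correlate(ref, cur):
    """Estimate the circular shift of cur relative to ref."""
    R, C = dft(ref), dft(cur)
    # Normalised cross-power spectrum: discard magnitude, keep phase.
    cross = []
    for r, c in zip(R, C):
        p = c * r.conjugate()
        cross.append(p / abs(p) if abs(p) > 1e-12 else 0j)
    corr = idft(cross)
    # The inverse transform peaks at the relative shift.
    return max(range(len(corr)), key=lambda t: corr[t].real)

n = 32
ref = [0.0] * n
ref[3], ref[9] = 1.0, 0.5            # two distinctive "features"
cur = [ref[(t - 5) % n] for t in range(n)]   # ref shifted by 5 samples
print(phase_correlate(ref, cur))  # -> 5
```

In two dimensions the same idea recovers (x, y) translation from whole frames at once, which is one reason frequency-domain methods tend to be more robust on diverse imagery than sparse feature tracking; rotation can additionally be recovered with extensions such as log-polar resampling of the spectrum, though that is beyond this sketch.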
The stabilization design was originally implemented on a development platform for Xilinx’s Zynq-7000 All Programmable SoC, which hosts a dual-core ARM Cortex-A9 MPCore processor. This development board allowed early revisions of the design to be matured against the target device’s resource and processing constraints. The processing was accelerated by exploiting RFEL’s existing IP Core components, which reside in the fabric of the FPGA and have been optimized and tested over the last 10 years. A specific hardware design was also undertaken that provides the stabilization IP Core, together with other video processing functions, in a fully integrated custom hardware system-on-module. This module can interface with many different standards, such as analog video, Camera Link and GigE-based protocols such as GigE Vision.
RFEL’s video image stabilization processing capability is now available as an IP Core, optimized for FPGAs. The fully integrated hardware system-on-module that incorporates the stabilization function will be available in the second quarter of 2013 and may be ruggedized to military standards. This stabilization system offers exemplary performance even when the camera is subjected to extreme unwanted shifts and rotations. Coupled with a low-power, low-latency implementation, the design is highly suited to military and security applications, as well as more demanding commercial applications. In addition, the IP Core can be readily integrated with existing processor hardware with negligible impact on size and weight.
About the author
Dr Steve Parker is Principal Digital Systems Engineer and Technical Project Lead at RF Engines Ltd – www.RFEL.com.
He can be reached at Steve.firstname.lastname@example.org
Wayne Cranwell is Technical Sales Engineer and Project Manager at RF Engines Ltd.
He can be reached at Wayne.email@example.com
Courtesy of EETimes Europe