Digital video stabilization techniques correct imagery using information from within the video stream itself, accounting for camera movement, atmospheric effects, and motion within the scene. The approach offers a potentially significant performance gain with minimal impact on power, weight, and size. However, realizing these benefits requires complex stabilization algorithms, which translates into a high computational load. Although electronic stabilization can be achieved with a low-cost CPU architecture, the limited processing bandwidth restricts the maximum input image size and frame rate; consequently, the capability of the stabilization algorithms must be compromised to achieve real-time operation. GPU architectures overcome many of the limitations of CPU-only devices by providing the higher processing bandwidth needed for more complex processing. However, GPU implementations consume more power, and designs based on commercial off-the-shelf products often still need an additional host system.
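The core idea behind electronic stabilization can be sketched in a few lines: estimate the global inter-frame motion from the video stream itself, then shift each frame to cancel the unwanted camera movement. The sketch below is purely illustrative (it is not RFEL's algorithm); it uses a brute-force sum-of-absolute-differences (SAD) search over integer translations, whereas real systems use sub-pixel, multi-scale estimators and must separate camera jitter from genuine scene motion.

```python
def estimate_shift(prev, curr, max_shift=3):
    """Brute-force search for the (dy, dx) translation that best maps
    prev onto curr, scored by SAD over the overlapping region.
    (Real implementations normalize by overlap area; with a clean
    match the true shift scores zero, so this sketch skips that.)"""
    h, w = len(prev), len(prev[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad = 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(prev[y][x] - curr[y + dy][x + dx])
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

def shift_frame(frame, dy, dx, fill=0):
    """Return frame translated so out[y][x] = frame[y+dy][x+dx];
    applying the estimated shift cancels the camera motion."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if 0 <= y + dy < h and 0 <= x + dx < w:
                out[y][x] = frame[y + dy][x + dx]
    return out

# Synthetic demo: a bright square, then the same frame jittered
# down-right by (1, 2) pixels as a stand-in for camera shake.
ref = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        ref[y][x] = 255
jittered = shift_frame(ref, -1, -2)       # simulate jitter of (+1, +2)
dy, dx = estimate_shift(ref, jittered)    # recovers (1, 2)
stabilized = shift_frame(jittered, dy, dx)
```

Even this toy version makes the computational load obvious: the SAD search alone is O(shifts × pixels) per frame, which is why high-resolution, high-frame-rate stabilization strains low-cost CPUs.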
The approach taken by RFEL has been to specify a high-performance stabilization system that can readily support high input resolutions and frame rates, while maintaining low latency and power consumption. The solution was also required to be compatible with cameras operating over different spectral bands, with support for multiple camera interfaces.
Physically, a flexible and compact hardware implementation was required that supports both stand-alone and networked applications. Furthermore, the stabilization solution should allow rapid integration into third-party hardware, including retro-fitting into in-service equipment.
To meet these challenging requirements, RFEL elected to base the implementation on the latest FPGA architectures, which have embedded ARM processors. The primary drawback of this choice is the engineering development time, which is significantly higher than for a CPU or GPU software module implementation.
Fortunately, RFEL has been developing advanced signal and video processing modules for many years, which allowed substantial re-use of pre-existing functions and development tools. Initially, functional requirements were captured by liaising with major customers in the military and security markets.
The system was then designed and developed using RFEL's proven methodology of floating- and fixed-point modeling in MATLAB, which allows efficient performance testing and rapid debugging, and substantially de-risks all aspects of system implementation.
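The value of pairing a floating-point reference model with a fixed-point model is that the quantization error introduced by limited-precision FPGA arithmetic can be bounded before any hardware is built. The following is a minimal sketch of that idea in Python rather than MATLAB, using an assumed Q1.15 format and a simple FIR smoothing filter; none of the specifics reflect RFEL's actual models.

```python
# Hypothetical float-vs-fixed comparison: quantize coefficients and
# samples to Q1.15, run both models on the same data, and measure
# the worst-case mismatch before committing the design to hardware.
FRAC_BITS = 15
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Round a float in (-1, 1) to the nearest Q1.15 integer code."""
    return int(round(x * SCALE))

def fir_float(coeffs, samples):
    """Floating-point reference model of a direct-form FIR filter."""
    return [sum(c * samples[n - k] for k, c in enumerate(coeffs))
            for n in range(len(coeffs) - 1, len(samples))]

def fir_fixed(coeffs, samples):
    """Bit-accurate fixed-point model: integer multiply-accumulate,
    with the 2*FRAC_BITS fractional product rescaled at the output."""
    ic = [to_fixed(c) for c in coeffs]
    xs = [to_fixed(s) for s in samples]
    out = []
    for n in range(len(coeffs) - 1, len(samples)):
        acc = sum(c * xs[n - k] for k, c in enumerate(ic))
        out.append(acc / (SCALE * SCALE))  # back to float for comparison
    return out

coeffs = [0.25, 0.5, 0.25]  # simple smoothing filter
samples = [0.0, 0.1, 0.4, 0.9, 0.5, -0.2, -0.6, -0.1]
ref_out = fir_float(coeffs, samples)
fix_out = fir_fixed(coeffs, samples)
max_err = max(abs(a - b) for a, b in zip(ref_out, fix_out))
```

If `max_err` exceeds the system's accuracy budget, word lengths can be adjusted in the model, which is far cheaper than discovering precision problems in FPGA fabric.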
A fundamental challenge in the development of any video processing product is the complexity and diversity of the imagery that must be processed across the large range of applications. Experience has shown that developing a video processing function with testing confined to only a limited data set can introduce significant program risk, as discovery of 'corner case problems' late in the development may necessitate substantial rework. Consequently, RFEL performed a series of trials using various cameras and platforms, with imagery gathered at different times of the day and under various weather conditions. The data gathered was sufficiently diverse to give confidence that the stabilization design would be fit for purpose for land, maritime, and airborne applications.