The software was structured to allow some customization of the
solution to accommodate specific camera module hardware and custom
user interfaces. At a minimum, the following application components
can be modified by the customer or a third-party developer: graphic
overlays, addition of a custom logo, look-up tables (LUTs) for custom
views, and the sensor driver/settings. A sketch of how these
customization points might be exposed appears after the figure below.
The image below shows one UI implementation with parking guides and
a marker highlighting the location of the nearest detected object.

Figure 4: Alternate UI for Smart Back-Up
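One plausible way to expose the customization points listed above is a small interface that customer code implements and the toolkit build links against. The structure below is an illustrative sketch; the names and layout are assumptions, not CogniVue's actual API.

```cpp
// Hypothetical customization interface: names and signatures are
// illustrative assumptions, not CogniVue's actual API.
#include <cstdint>

// One entry of a view-mapping LUT: for each output pixel, the source
// coordinates to sample in the raw sensor image.
struct LutEntry {
    uint16_t src_x;
    uint16_t src_y;
};

// Components a customer or third-party developer may replace without
// touching the core algorithm binary.
struct CustomizationHooks {
    // Blend parking guides, warning markers, etc. onto the output frame.
    void (*draw_overlay)(uint8_t* rgb_frame, int width, int height);

    // Optional logo bitmap composited at start-up (null if unused).
    const uint8_t* logo_rgba;
    int logo_width, logo_height;

    // Per-view LUTs (e.g., wide, top-down) generated offline.
    const LutEntry* view_luts[4];

    // Sensor bring-up: program registers for the chosen camera module.
    int (*sensor_init)(int i2c_bus);
};
```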
Lessons Learned and Applied
Performing object detection and distance estimation using a
single-sensor 1" cube camera presented significant challenges, but in
the end a successful implementation was achieved. Many lessons were
learned along the way; the most noteworthy are shared below in an
effort to ease the development path for future embedded vision
projects.
- Acquire a comprehensive 'golden' image validation database and
  automate the test process. This enables testing the algorithm
  during every phase of development and quickly determining whether
  a change has degraded performance (see the harness sketch after
  this list).
- Be mindful of the embedded platform architecture, its properties
  and its limitations, and develop a set of guidelines that both
  platform and algorithm developers must follow, so that PC-based
  algorithms need fewer rewrites for the target.
- Develop a vision-centric software framework. Algorithm development
  typically ignores data movement, and a software framework can
  manage the complexities of moving vision data. The framework can
  ensure data is always available for processing, reducing pipeline
  stalls and cache misses (see the double-buffering sketch after
  this list).
- Develop specialized vision libraries to simplify and speed porting
  onto the target embedded platform. Library functions for
  high-level, complex processing "primitives," optimized for the
  specific scalar and vector processor architecture, can be reused in
  follow-on embedded vision application development (an example
  interface follows this list).
- Develop a tool to generate camera calibration and look-up-table
  (LUT) coefficients instead of producing them manually. Generating
  LUTs for different sensor/lens combinations is tedious and
  time-consuming; we developed a PC-based tool that automatically
  generates LUTs for creating custom views specific to the selected
  camera lens (a sketch of the core computation follows this list).
- Structure the code so that specific components of the final
  solution can be customized. For example, consider offering the
  application code as a toolkit in which elements can be modified and
  a new build generated by simply linking in the lower-level
  algorithm binary.
- Take all necessary design precautions to ensure the image
  processing device acquires a clean signal from the image sensor. A
  noisy signal from the sensor will degrade image quality and, with
  it, the accuracy of detection and distance estimation.
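For the golden-database point above, a minimal regression harness might look like the sketch below. Here detect_objects() stands in for the algorithm under test (assumed to be supplied by the algorithm library at link time), and the golden-file format and 0.10 m tolerance are illustrative assumptions.

```cpp
// Regression harness sketch: compares detector output against 'golden'
// annotations. detect_objects() is assumed to be provided by the
// algorithm library; the file format and tolerance are illustrative.
#include <cmath>
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

struct Detection { float x, y, distance_m; };

// Assumed entry point, linked in from the algorithm library under test.
std::vector<Detection> detect_objects(const std::string& image_path);

// Golden annotations: one "x y distance" triple per line.
static std::vector<Detection> load_golden(const std::string& path) {
    std::vector<Detection> v;
    std::ifstream in(path);
    Detection d;
    while (in >> d.x >> d.y >> d.distance_m) v.push_back(d);
    return v;
}

int main() {
    const std::string cases[] = { "golden/scene_001", "golden/scene_002" };
    int failures = 0;
    for (const std::string& base : cases) {
        auto got      = detect_objects(base + ".raw");
        auto expected = load_golden(base + ".txt");
        if (got.size() != expected.size()) { ++failures; continue; }
        for (size_t i = 0; i < got.size(); ++i)
            if (std::fabs(got[i].distance_m - expected[i].distance_m) > 0.10f)
                ++failures;  // distance estimate drifted beyond tolerance
    }
    std::printf("%d failure(s)\n", failures);
    return failures == 0 ? 0 : 1;
}
```

Run after every algorithm change, a harness like this catches regressions immediately rather than late in integration.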
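For the software-framework point, one common pattern such a framework can implement is double buffering: while the processor works on one tile, the framework transfers the next tile in the background, hiding data movement behind computation. dma_start_read() and dma_wait() below are placeholders for a platform's actual DMA API, not a real interface.

```cpp
// Double-buffered tile processing sketch. dma_start_read()/dma_wait()
// stand in for a platform-specific DMA API; names are assumptions.
#include <cstdint>

constexpr int TILE_BYTES = 64 * 64;

void dma_start_read(uint8_t* dst, int tile_index);  // async copy from frame memory
void dma_wait(uint8_t* dst);                        // block until that copy completes
void process_tile(const uint8_t* tile);             // vision kernel on local data

void process_frame(int num_tiles) {
    static uint8_t buf[2][TILE_BYTES];   // ping-pong buffers in fast local memory
    dma_start_read(buf[0], 0);           // prefetch the first tile
    for (int t = 0; t < num_tiles; ++t) {
        uint8_t* cur = buf[t & 1];
        dma_wait(cur);                                // data for tile t is ready
        if (t + 1 < num_tiles)
            dma_start_read(buf[(t + 1) & 1], t + 1);  // overlap next transfer...
        process_tile(cur);                            // ...with this computation
    }
}
```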
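For the vision-library point, the value is in the interface: algorithm code calls coarse-grained primitives, and each primitive is hand-optimized once per target. The function names below are illustrative, not an actual CogniVue library.

```cpp
// Sketch of a vision-primitive library interface; names are assumptions.
#include <cstdint>

// Coarse-grained primitives, each hand-optimized for the target's
// scalar and vector units. Algorithm code calls these rather than
// writing its own inner loops.
void vp_sobel3x3(const uint8_t* src, int16_t* grad, int w, int h);
void vp_sad16x16(const uint8_t* a, const uint8_t* b, int stride, uint32_t* sad);
void vp_histogram256(const uint8_t* src, int n, uint32_t* bins);
```

Porting then consists of swapping a PC implementation of each primitive for the target-optimized one, leaving the algorithm's structure unchanged.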
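For the LUT-tool point, the sketch below shows the kind of computation such a PC-side tool performs: mapping each output pixel of a rectified view back to source coordinates in the fisheye image. It assumes a simple equidistant fisheye model (r = f * theta); a real tool would use the calibrated model for the selected lens.

```cpp
// PC-side LUT generation sketch using an equidistant fisheye model
// (r = f * theta). The model and parameters are illustrative; a real
// tool would use calibrated lens data.
#include <cmath>
#include <cstdint>
#include <vector>

struct LutEntry { uint16_t src_x, src_y; };

std::vector<LutEntry> make_rectify_lut(int out_w, int out_h,
                                       float focal_px,     // focal length, pixels
                                       float cx, float cy) // fisheye optical center
{
    std::vector<LutEntry> lut(out_w * out_h);
    for (int y = 0; y < out_h; ++y) {
        for (int x = 0; x < out_w; ++x) {
            // Ray through the output (pinhole) pixel.
            float dx = x - out_w / 2.0f, dy = y - out_h / 2.0f;
            float r_pin = std::sqrt(dx * dx + dy * dy);
            float theta = std::atan2(r_pin, focal_px);  // angle from optical axis
            float r_fish = focal_px * theta;            // equidistant projection
            float s = (r_pin > 0.0f) ? r_fish / r_pin : 0.0f;
            lut[y * out_w + x] = { (uint16_t)(cx + dx * s),
                                   (uint16_t)(cy + dy * s) };
        }
    }
    return lut;
}
```

The embedded side then reduces custom-view generation to a per-pixel table lookup, with a new table generated offline for each sensor/lens combination.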
About the author
Tom Wilson is vice president of business development at CogniVue
Corp., which makes image cognition processors and software for
embedded vision systems. CogniVue is a founding member of the
Embedded Vision Alliance.