VANCOUVER, British Columbia -- In an attempt to offer machine-vision developers a high-level software-development environment, BrainTech Inc. is beta testing a new "wrapper" application called Odysee Development Studio. Slated for introduction in June, the system acts as a container for intelligent machine-vision algorithms, simplifying their development and standardizing their extension to new application areas.
Building intelligence into machine-vision applications is a highly customized activity. Once basic image elements have been extracted, higher-level software must be developed to give the computer clues as to how various image features relate to one another. "We want to open up machine-vision application development to the researchers actually doing the applications. With Odysee, there is no need to collaborate with a programmer in order to create a machine-vision application," said Charles Hooge, vice president of research and development at BrainTech (Vancouver, British Columbia).
Machine-vision applications share a common architecture, according to BrainTech, a fact the company used to craft its Odysee application-development environment. Four common architectural blocks characterize all machine-vision applications: data acquisition; data preprocessing; recognition/classification; and global control/feedback. Consequently, BrainTech has built a four-part plug-in architecture capable of supporting any type of intelligent processing module.
"Each of our modules is open-ended, permitting developers to choose from any of the prebuilt modules we have for various input devices and processing routines, or to easily integrate their own or others' algorithms into Odysee. So instead of spending 10 percent of your time on the application and 90 percent implementing it, you can leverage preexisting work [and] spend 90 percent of your time working on your application," said Hooge.
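The four-block structure described above can be sketched as a set of abstract interfaces wired into a container, so that replacing any one plug-in leaves the others untouched. This is a minimal illustration of the idea, not Odysee's actual plug-in API; all class and function names here are invented, and the trivial stand-in modules exist only to show the wiring.

```cpp
// Hypothetical sketch of the four-block plug-in pipeline.
// Names and signatures are illustrative, not Odysee's real API.
#include <cassert>
#include <memory>
#include <vector>

using Frame = std::vector<double>;  // one rasterized input frame

// Each architectural block is an abstract interface a plug-in implements.
struct Acquisition  { virtual Frame grab() = 0;               virtual ~Acquisition() = default; };
struct Preprocessor { virtual Frame filter(const Frame&) = 0; virtual ~Preprocessor() = default; };
struct Classifier   { virtual int classify(const Frame&) = 0; virtual ~Classifier() = default; };
struct Controller   { virtual void feedback(int label) = 0;   virtual ~Controller() = default; };

// The container wires the four blocks together; swapping one plug-in
// does not touch the code of the other three.
class Pipeline {
public:
    Pipeline(std::unique_ptr<Acquisition> a, std::unique_ptr<Preprocessor> p,
             std::unique_ptr<Classifier> c, std::unique_ptr<Controller> k)
        : acq_(std::move(a)), pre_(std::move(p)), cls_(std::move(c)), ctl_(std::move(k)) {}
    int step() {
        int label = cls_->classify(pre_->filter(acq_->grab()));
        ctl_->feedback(label);
        return label;
    }
private:
    std::unique_ptr<Acquisition> acq_;
    std::unique_ptr<Preprocessor> pre_;
    std::unique_ptr<Classifier> cls_;
    std::unique_ptr<Controller> ctl_;
};

// Minimal stand-in modules, just to exercise the wiring.
struct ConstSource : Acquisition { Frame grab() override { return {0.2, 0.9, 0.4}; } };
struct Threshold : Preprocessor {
    Frame filter(const Frame& f) override {
        Frame out;
        for (double v : f) out.push_back(v > 0.5 ? 1.0 : 0.0);  // binarize
        return out;
    }
};
struct CountOnes : Classifier {
    int classify(const Frame& f) override {
        int n = 0;
        for (double v : f) if (v == 1.0) ++n;  // toy "class" = count of set pixels
        return n;
    }
};
struct NullCtl : Controller { void feedback(int) override {} };
```

Because every module sees only the abstract interface, trying a different preprocessor or classifier means constructing a different `Pipeline`, nothing more.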
The data-acquisition module, for instance, works with a variety of off-the-shelf frame grabbers. Input can come from nearly any rasterized source, from video cameras to radar arrays to infrared sensors. For video, Odysee users can choose premade acquisition modules complete with all necessary device drivers. For specialized devices, an easy-to-write translation layer permits quick integration into Odysee.
Input plugs in
"With Odysee, your application is not tied to the specialized command set of a particular frame-grabber manufacturer. You can switch input devices without having to touch the code in any of the processing modules," said Hooge.
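A translation layer of this kind is essentially an adapter: one small class per device family that hides the vendor's command set behind a generic frame-source interface. The sketch below is illustrative only; the `vendorx` namespace is an invented stand-in for a real frame-grabber SDK, not any actual driver.

```cpp
// Illustrative "translation layer" isolating a vendor-specific
// frame-grabber API behind a generic interface. The vendor calls
// below are invented stand-ins, not a real driver SDK.
#include <cassert>
#include <vector>

using Frame = std::vector<unsigned char>;

// Generic source every downstream processing module sees.
struct FrameSource {
    virtual Frame next_frame() = 0;
    virtual ~FrameSource() = default;
};

// --- pretend vendor SDK (hypothetical) ---
namespace vendorx {
    // Fills buf with n pixels; returns the number of pixels written.
    inline int vx_capture(unsigned char* buf, int n) {
        for (int i = 0; i < n; ++i) buf[i] = static_cast<unsigned char>(i % 256);
        return n;
    }
}

// The adapter: the only place vendor-specific calls appear.
class VendorXSource : public FrameSource {
public:
    explicit VendorXSource(int width) : width_(width) {}
    Frame next_frame() override {
        Frame f(width_);
        vendorx::vx_capture(f.data(), width_);
        return f;
    }
private:
    int width_;
};
```

Switching frame grabbers then means writing one new adapter class; the processing modules, which depend only on `FrameSource`, never change.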
The ability to quickly plug in different input devices and processing modules gives machine-vision developers, according to BrainTech, the ability to "what if" for the first time. Instead of making long-term commitments to particular setups just because the time and effort to switch is uneconomical, developers can spend their time comparing different setups to work out an optimal scheme for real application data.
Module interchangeability in an Odysee application is facilitated by an object-oriented interface structure, including standardized input-data and parameter formats, plus standardized output-data and control-signal formats. This interchangeability is helpful when different preprocessing and classification algorithms need to be tested.
For instance, application optimization can be achieved by comparing different off-the-shelf data-filtering routines to home-brewed versions in the preprocessing module, while different adaptive-learning routines are simultaneously tried out in the classification module.
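A crude version of that comparison can be written as a harness that runs the same frame through each candidate preprocessing routine and scores the results. The two filters and the scoring metric below are hypothetical examples chosen for brevity, not shipped Odysee modules.

```cpp
// Sketch of "what if" experimentation: run one frame through
// interchangeable preprocessing routines and pick a winner under an
// application-defined metric. Filters and metric are hypothetical.
#include <algorithm>
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

using Frame  = std::vector<double>;
using Filter = std::function<Frame(const Frame&)>;

Frame box_smooth(const Frame& f) {  // 3-tap moving average (edges clamped)
    Frame out(f.size());
    for (size_t i = 0; i < f.size(); ++i) {
        double lo = f[i ? i - 1 : i];
        double hi = f[i + 1 < f.size() ? i + 1 : i];
        out[i] = (lo + f[i] + hi) / 3.0;
    }
    return out;
}

Frame clip(const Frame& f) {        // hard limiter to [0, 1]
    Frame out(f);
    for (double& v : out) v = std::min(1.0, std::max(0.0, v));
    return out;
}

// Try every candidate on the same data; keep whichever scores best
// (here: lowest peak value, standing in for a real quality metric).
std::string best_filter(const std::vector<std::pair<std::string, Filter>>& cands,
                        const Frame& data) {
    std::string best;
    double best_peak = 1e300;
    for (const auto& [name, fn] : cands) {
        Frame out = fn(data);
        double peak = *std::max_element(out.begin(), out.end());
        if (peak < best_peak) { best_peak = peak; best = name; }
    }
    return best;
}
```

Because both filters share one signature, adding a home-brewed routine to the comparison is a one-line change to the candidate list.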
Users can plug in classifiers based on neural networks, fuzzy logic, wavelets, fractals, statistics or any other technology, provided it is programmed in C or C++. BrainTech's own classifier, BrainTron, is native to Odysee. BrainTron features an adaptive neural-network learning capability optimized for machine vision, and can extend its learning while classifying in real time.
BrainTron also can be used as a framework for data fusion, which offers the ability to more accurately classify objects based on diverse input streams. For instance, BrainTron can analyze data streams from video inputs, power spectra, radar signatures and thermal sensors to form a separate feature vector for each. The advantage is that when one source of data is unavailable, say the loss of video during night operation, the other data streams can still classify the object. That kind of data fusion can greatly increase a classification system's overall reliability and immunity to false alarms.
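One simple way to realize the graceful-degradation property described above is to have each sensor stream produce its own per-class score vector and to fuse by averaging over whichever streams are currently present. The scheme below is a hedged sketch of that idea, not BrainTron's actual fusion algorithm.

```cpp
// Illustrative data fusion: average per-class scores across the
// streams that delivered data, so losing one sensor (e.g. video at
// night) still leaves a usable classification. Not BrainTron's
// actual algorithm; the scheme is an assumption for illustration.
#include <algorithm>
#include <cassert>
#include <optional>
#include <vector>

using Scores = std::vector<double>;  // one confidence value per class

// Returns the winning class index, or -1 if no stream produced data.
int fuse(const std::vector<std::optional<Scores>>& streams) {
    Scores sum;
    int present = 0;
    for (const auto& s : streams) {
        if (!s) continue;                       // this sensor dropped out
        if (sum.empty()) sum.assign(s->size(), 0.0);
        for (size_t k = 0; k < s->size(); ++k) sum[k] += (*s)[k];
        ++present;
    }
    if (present == 0) return -1;                // nothing to fuse
    return static_cast<int>(std::max_element(sum.begin(), sum.end()) - sum.begin());
}
```

With video absent, the radar and thermal scores alone still determine the class; a false alarm from one stream is also diluted by the others, which is the reliability benefit the article describes.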
The icon-based drag-and-drop environment permits Odysee systems to be created without explicit programming. However, once a system is defined, it can be immediately compiled and tested in machine code, even though the user never sees the source code. Subsystems that have been tried and debugged in this manner can then be filed away for reuse in future projects.
A hardware accelerator board is also available to speed up deployed systems. The Odysee board is a PCI-bus card with three on-board microprocessors, each with its own separate memory subsystem. The main RISC processor runs at 500 MIPS to perform rapid association matching between video input streams and stored templates. Multiple PCI cards can be used together to further accelerate a system, and a 64-bit accelerator chip is also available for adding hardware-associative abilities to the user's own boards.