The most advanced surveillance systems include analytics that track individuals of interest as they move through the security network, even as they leave one camera's field of view, pass through a blind spot and then enter the field of view of another camera in the network. Designers have programmed some of these systems even to detect unusual or suspicious movements. "Analytics is the biggest trend in the surveillance market today," said Mark Timmons, system architect in Xilinx's Industrial, Scientific and Medical (ISM) group. "It can account for human error and even take away the need for diligent human viewing and decision making. As you can imagine, surveillance in crowded environments such as train stations and sporting events can become extremely difficult, so having analytics that can spot dangerous overcrowding conditions or track individuals displaying suspicious behavior, perhaps radical movements, is very advantageous."
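As a rough illustration of the camera-to-camera handoff described above, the sketch below re-acquires a target in a new camera's view by matching appearance descriptors. It is a minimal pure-Python sketch: the descriptor values, the `reacquire` helper and the 0.9 similarity threshold are all hypothetical, and production trackers use far richer features plus motion models.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two appearance descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def reacquire(target, candidates, threshold=0.9):
    """Return the id of the candidate whose descriptor best matches the
    target, or None if no candidate clears the similarity threshold."""
    best_id, best_score = None, threshold
    for cand_id, desc in candidates.items():
        score = cosine_similarity(target, desc)
        if score > best_score:
            best_id, best_score = cand_id, score
    return best_id

# The target left camera 1; camera 2 now sees two people.
target = [0.8, 0.1, 0.6]
seen = {"person_a": [0.79, 0.12, 0.61], "person_b": [0.1, 0.9, 0.2]}
print(reacquire(target, seen))  # person_a
```

A real system would refresh the target descriptor as lighting and pose change; this sketch only shows the matching step.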
To further enhance this analysis and increase the effectiveness of these systems, surveillance and many other markets leveraging smarter vision are increasingly using "fusion" architectures that combine cameras with other sensing technologies such as thermal vision, radar, sonar and LIDAR (Light/Laser Detection and Ranging). In this way, the systems can enable night vision; detect thermal/heat signatures; or pick up objects not captured by or visible to the camera alone. This capability drastically reduces false detections and in turn allows for much more precise analytics. Needless to say, the added complexity of fusing the technologies and then analyzing that data requires ever more analytic-processing horsepower.
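One simple way to picture how fusion cuts false detections is a weighted combination of per-sensor confidences: a detection is accepted only when the fused score clears a threshold, so a single fooled sensor cannot raise an alarm on its own. This is a minimal sketch, not any vendor's algorithm; the weights, the threshold and the `fused_detection` name are illustrative assumptions.

```python
def fused_detection(camera_conf, thermal_conf,
                    w_cam=0.5, w_thermal=0.5, threshold=0.6):
    """Accept a detection only when the weighted sum of the camera
    and thermal confidences clears the fusion threshold."""
    score = w_cam * camera_conf + w_thermal * thermal_conf
    return score >= threshold

# A shadow fools the camera but has no heat signature: rejected.
print(fused_detection(0.9, 0.1))   # False
# A person at night: weak camera return, strong thermal: accepted.
print(fused_detection(0.4, 0.95))  # True
```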
Timmons said that another megatrend in this market is products that perform all these forms of complex analysis "at the edge" of a surveillance system network – that is, within each camera – rather than having each camera transmit its data to a central mainframe system, which then performs a more refined analysis from these multiple feeds. Localized analytics adds resilience to the overall security system, makes each point in the system much faster and more accurate in detection, and thus can warn security operators sooner if indeed a camera spots a valid threat.
Localized analytics means that each unit not only requires greater processing horsepower to enhance and analyze what it is seeing, but must also pack highly integrated electronics into a compact housing. And because each unit must communicate reliably with the rest of the network, it must also integrate electronic communication capabilities, adding further compute complexity. Increasingly, these surveillance units are connected via a wireless network as part of a larger surveillance system. And increasingly, these surveillance systems are becoming part of larger enterprise networks or even larger, global networks, like the U.S. military's Global Information Grid (see cover story, Xcell Journal issue 69).
This high degree of sophistication is being employed in the military-and-defense market in everything from foot soldier helmets to defense satellites networked to central command centers. What's perhaps more remarkable is how fast smarter vision technology is moving into other markets to enhance quality of life and safety.
Smarter vision for the perfect apple
Take, for example, an apple. Ever wonder how an apple makes it to your grocery store in such good condition? Giulio Corradi, an architect in Xilinx's ISM group, said that food companies are using ever-smarter vision systems in food inspection lines to, for example, sort the bad apples from the good ones. Corradi said first-generation embedded vision systems deployed on high-speed food inspection lines typically used a camera or perhaps several cameras to spot surface defects in apples or other produce. If the embedded vision system spotted an unusual color, the apple would be marked/sorted for further inspection or thrown away.
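A first-generation color check of the kind described can be sketched as a simple test on an apple's hue histogram: too many pixels outside the expected color band means the apple gets pulled for inspection. The band of 20 to 45 and the 10 percent outlier budget are hypothetical values chosen for illustration only.

```python
def flag_surface_defect(hue_histogram,
                        expected_range=(20, 45),
                        max_outlier_frac=0.10):
    """Flag an apple whose hue histogram (hue value -> pixel count)
    has too large a fraction of pixels outside the expected band."""
    lo, hi = expected_range
    total = sum(hue_histogram.values())
    outliers = sum(count for hue, count in hue_histogram.items()
                   if not lo <= hue <= hi)
    return outliers / total > max_outlier_frac

# A clean apple: 5% off-color pixels, under budget.
print(flag_surface_defect({30: 950, 50: 50}))   # False
# A discolored apple: 20% off-color pixels, flagged.
print(flag_surface_defect({30: 800, 60: 200}))  # True
```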
Beneath the skin
But what happens if, at some point before that, the fruit was dropped but the damage wasn't visible? "In some cases, damage that resulted from a drop may not be easily spotted by a camera, let alone by the human eye," said Corradi. "The damage may actually be in the flesh of the apple. So some smarter vision systems fuse an infrared sensor with the cameras to detect the damage beneath the surface of the apple's skin. Finding a bruised fruit triggers a mechanical sorter to pull the apple off the line before it gets packed for the grocery store." If the damaged apple had passed by without the smarter fusion vision system, the damage would likely become apparent by the time it was displayed on the grocery store shelves; the fruit would probably have to be thrown away. One rotten apple can, of course, spoil the bunch.
Analytics can also help a food company determine whether a bruised apple is in good enough condition to divert to a new line, where another smarter vision system can judge whether it is suitable for some other purpose – making applesauce or dried fruit – or, if it is too far gone, best suited for composting.
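The grading step just described might be pictured as a routing rule over fused damage scores from the visible and infrared sensors. Every threshold and the `route_apple` name are invented here for illustration; real graders apply far more elaborate criteria.

```python
def route_apple(surface_score, ir_bruise_score):
    """Route an apple based on fused visible + infrared damage
    scores (0.0 = pristine, 1.0 = severely damaged).
    Thresholds are illustrative, not from any real grader."""
    damage = max(surface_score, ir_bruise_score)
    if damage < 0.2:
        return "pack"        # good enough for the grocery shelf
    if damage < 0.5:
        return "applesauce"  # cosmetic or shallow damage
    if damage < 0.8:
        return "dried fruit"
    return "compost"         # too far gone

print(route_apple(0.1, 0.05))  # pack
print(route_apple(0.1, 0.4))   # applesauce
print(route_apple(0.0, 0.95))  # compost
```

Note that taking the maximum of the two scores mirrors the fusion idea above: hidden sub-surface bruising caught only by the infrared sensor still diverts the apple.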
Factory floors are another site for smarter vision, Corradi said. A growing number use robotic-assisted technologies or completely automated robotic lines that manufacturers can retool for different tasks. The traditional safety cages around the robots are too restrictive (or too small) to accommodate the range of movement required to manufacture changing product lines.
So to protect workers while not restricting the range of motion of automated factory lines, companies are employing smarter vision to create safety systems. Cameras and lasers erect "virtual fences or barriers" that audibly warn workers (and safety monitor personnel) if someone is getting too close to the factory line given the product being manufactured. Some installations include a multiphase virtual barrier system that will send an audible warning as someone crosses an outer barrier, and shut down the entire line automatically if the individual crosses a second barrier that is closer to the robot, preventing injury. Bier of the Embedded Vision Alliance notes that this type of virtual barrier technology has wide applicability. "It can have a tremendous impact in reducing the number of accidents in factories, but why not also have virtual barriers in amusement parks, or at our homes around swimming pools or on cars?" said Bier. "I think we'll see a lot more virtual barrier systems in our daily lives very soon."
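The multiphase barrier described above reduces, at its core, to a pair of distance thresholds around the robot. This sketch assumes the vision system can already estimate a worker's distance to the cell; the radii and the `barrier_state` name are hypothetical.

```python
def barrier_state(distance_m, warn_radius=3.0, stop_radius=1.5):
    """Two-phase virtual barrier around a robot cell: crossing the
    outer radius triggers an audible warning, crossing the inner
    radius shuts the line down. Radii are illustrative."""
    if distance_m <= stop_radius:
        return "shutdown"
    if distance_m <= warn_radius:
        return "warn"
    return "clear"

print(barrier_state(5.0))  # clear
print(barrier_state(2.0))  # warn
print(barrier_state(1.0))  # shutdown
```

Because the "fence" is just data, retooling the line for a new product only means updating the radii, not moving a physical cage.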
I recently got OpenCV up and running on my Zynq-7000-based ZedBoard. The performance of the OpenCV samples I ran was very good, even without using the FPGA fabric. I didn't get my OpenCV from Xilinx, but rather downloaded and built it from the generic ARM Linux source.
I really wanted to see how compatible the Zynq is with the rest of the ARM CPU/SoC world. The Linux distribution I'm using (Xillinux) is made for the Zynq, but pretty much everything else I used was generic. I built CMake, mjpg-streamer and OpenCV from non-targeted source files. The mjpg-streamer source I used was made for the Raspberry Pi; the others were generic ARM Ubuntu sources. I also downloaded and used a large number of generic source libraries – all of that without touching a single line of code.
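For readers who want to try the same thing, a generic out-of-tree CMake build of OpenCV on an ARM Linux board might look roughly like the following. This is a sketch of the usual upstream build procedure, not the exact steps from the comment above; the install prefix and job count are illustrative assumptions.

```shell
# Fetch the upstream OpenCV sources (no Xilinx-specific tree needed).
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build

# Configure an out-of-tree release build with CMake.
cmake -D CMAKE_BUILD_TYPE=Release \
      -D CMAKE_INSTALL_PREFIX=/usr/local ..

# Build on the board; -j2 suits the Zynq's dual Cortex-A9 cores.
make -j2
sudo make install
```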