According to MIT neuroscientist Mriganka Sur, half of the human brain is devoted to vision. Why? I'm no neuroscientist, but I imagine there are two reasons. First, vision is extremely valuable: humans use it constantly, for an endless variety of tasks, from reading to navigation to creating all manner of objects. Second, vision is a hard problem, considering all of the things we're able to discern visually, under widely varying and often very challenging conditions such as glare and low light.
Computer vision enables machines to understand the world through visual inputs, sometimes even exceeding the capabilities of human vision. For decades, computer vision was a niche technology, because the equipment it required was large, expensive, and complex to use. But recently, products like the Microsoft Kinect and vision-based automotive safety systems have demonstrated that computer vision can now be deployed even in cost-sensitive applications, and in ways that are easy for non-specialists to use. The term "embedded vision" refers to this incorporation of visual intelligence into a wide range of systems.
On April 25, 2013 at the San Jose Convention Center, the Embedded Vision Alliance will host the Embedded Vision Summit, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. The Summit will include presentations on the key technologies that are enabling the widespread use of computer vision, including vision sensors, algorithms, processors and development tools. The Summit will also showcase over 20 demonstrations of state-of-the-art embedded vision technologies. This image gallery highlights some of the technologies and applications that will be presented at the Summit.
For more information about the Embedded Vision Summit, or to register, visit www.embedded-vision.com/embedded-vision-summit.
Car and driver
(Source: Jeff Bier)
In cars, embedded vision is being deployed to reduce accidents.
The demo pictured here is a Xilinx reference design that uses four cameras mounted on the exterior of the car. Using the video feeds from these cameras, an embedded vision system can perform numerous safety functions. For example, it can provide the driver with a bird's-eye view of the car and its surroundings to aid in safe parking. And it can detect and read road signs and warn the driver if they're exceeding the speed limit. At the Embedded Vision Summit, Paul Zoratti, Xilinx Automotive Driver Assistance System Architect, will present some of the key challenges and techniques of automotive vision systems. Registration for the Summit is free for qualified engineers, but space is limited.
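As an aside for the technically curious: bird's-eye parking views like the one in this demo are commonly built by warping each camera's image onto a common ground plane with a perspective (homography) transform, then stitching the warped views together. The sketch below shows just the core step, estimating a homography from four point correspondences with the direct linear transform. The calibration points are made up for illustration, not taken from the Xilinx design, and a production system would use proper camera calibration rather than hand-picked points.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points,
    using the direct linear transform on four correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of this system (smallest
    # singular vector), reshaped to 3x3 and normalized.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(pt, H):
    """Apply a homography to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical calibration: corners of a rectangle painted on the road,
# as seen by a forward-facing camera (a trapezoid due to perspective)...
src = [(200, 480), (440, 480), (380, 300), (260, 300)]
# ...and where those corners should land in the top-down composite image.
dst = [(200, 480), (440, 480), (440, 0), (200, 0)]

H = homography(src, dst)
```

In a four-camera system, one such homography is computed per camera, and the four warped ground-plane images are blended into a single composite around a graphic of the car.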