At the Embedded Vision Summit, all of these issues will be addressed in keynote addresses, seminars, technical presentations, and over 20 hands-on demonstrations. Designers will get a leg up on what types of pattern-recognition enhancements are available to embedded systems, as well as a comprehensive overview of how to add embedded vision capabilities to their applications.
New embedded-vision development tools will also be unveiled at the Summit, where experts from the 33 member companies of the Embedded Vision Alliance will be available for one-on-one discussions regarding how to add embedded vision to any application.
Professor Pieter Abbeel of the University of California, Berkeley's Department of Electrical Engineering and Computer Sciences delivers the keynote "Artificial Intelligence for Robotic Butlers and Surgeons" on April 25th. Abbeel has been at the forefront of designing perception into robots, including research into how robots can better handle uncertainty.
"Significant improvements in robotic perception are critical to advance robotic capabilities in unstructured environments," writes Abbeel on his research page. "I believe that rather than following the current common practice of trying to build a system that can detect chairs, for example, from having seen a relatively small number of example images of chairs, it is more fruitful for robotic perception to work on instance recognition." Click here for examples of his work -- such as robots folding clothes (something at which you would want your personal robot valet to excel).
The rest of the first day consists of sessions from two tracks. When signing up, attendees may mix the tracks to make the program that best fits their needs. To see embedded vision tracks, click here.
Embedded Vision Summit technical speakers include Jose Alvarez, Mario Bergeron, Ning Bi, Jeff Bier, Goksel Dedeoglu, Eric Gregori, Tim Jones, Gershom Kutliroff, Simon Morris, Michael Tusch, Markus Wloka, and Paul Zoratti. You can find their bios here.
Participants who want hands-on training may attend Friday's Blackfin Embedded Vision Starter Kit Hands-on Workshop, April 26, 8:30 am to 1:30 pm at the San Jose Convention Center. You can sign up for the workshop here. The hands-on session offers developers a chance to explore a wide range of embedded vision applications through a tutorial using the Avnet Embedded Vision Starter Kit. Experts from BDTI, Analog Devices, and Avnet will lead the training.
What is exciting is the growth of both processing capacity and sensor resolution. Vision requires huge processing resources, and with the proliferation of quad- and eight-core processors there is now enough horsepower available to process live images in real time. Couple that with the advent of cheap, high-resolution cameras and you have a nexus of opportunity to provide "real time" live vision processing for the masses. Very exciting times indeed.
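To make the point concrete, here is a minimal sketch (my own illustration, not from the article) of spreading live-frame processing across cores. It assumes OpenCV (cv2) is installed and a webcam is available at index 0; Canny edge detection stands in for a heavier vision workload.

```python
# Sketch: fan per-frame vision work out to multiple cores while reading a live camera.
import concurrent.futures
import cv2

def process(frame):
    # OpenCV releases the GIL inside its native routines, so threads can
    # genuinely run this per-frame work on separate cores.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 100, 200)

def main():
    cap = cv2.VideoCapture(0)  # default camera (assumed present)
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)
    pending = []
    try:
        for _ in range(100):   # process a short burst of frames
            ok, frame = cap.read()
            if not ok:
                break
            pending.append(pool.submit(process, frame))
        results = [f.result() for f in pending]
        print(f"processed {len(results)} frames")
    finally:
        cap.release()
        pool.shutdown()

if __name__ == "__main__":
    main()
```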
The application of vision is indeed becoming more prevalent in embedded systems. It is fascinating to see how vision is being applied to problems in the industrial and consumer spaces that have traditionally been handled by other, more cumbersome solutions. The replacement of gaming controllers with stereo-vision-based solutions is one example among many. An example of one approach to adding custom embedded vision to a product is outlined in the following TI white paper: http://www.ti.com/lit/wp/spry232/spry232.pdf
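As a rough illustration of the stereo vision idea mentioned above (my own sketch, not taken from the TI paper): two rectified views of the same scene yield a disparity map, from which per-pixel depth can be estimated. The file names "left.png" and "right.png" and the matcher parameters are placeholders to be tuned per camera rig.

```python
# Sketch: compute a disparity map from a rectified stereo pair with OpenCV block matching.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point disparities (scaled by 16)

# Larger disparity means a closer object; depth = focal_length * baseline / disparity.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```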