I particularly like the eyelid detection to improve road safety.
There is no doubt that vision as a UI will be the next big topic to explore; there is still a lot we don't know. In terms of application areas, I believe vision, together with other sensors, will certainly improve road safety by helping the driver stay in the lane and providing warnings in emergency situations. Another application area would be gaming: maybe, in the very near future, gamers will be able to simply look at the desired target instead of using buttons to toggle between enemies.
I have seen the Livescribe pen in action: very impressive, although the digital paper is a hassle. I would like to see vision processing provide the same type of interface as the pen, but without the special paper. I remember, years (and years) ago, a "white board" with a built-in scanner, so that anything drawn on the 3' x 4' screen (which scrolled for new/fresh screens) could be copied and sent electronically. That was a big help for free-flowing conversations!
I see potential for at least two abuses here:
1. Put on a vision interface headset, and advertisers have *really exclusive* command of your eyeballs;
2. Governments could easily eavesdrop on conversations using lip-reading hardware/software.
That said, vision interfaces are just another tool, and they are probably coming no matter what. How they're used depends on us.
Correlating voice with lip reading in speech-to-text for the hearing-impaired would be a wonderful application.
@prabhakar_deosthali: the criteria for applying embedded vision obviously need to steer clear of situations where it distracts, hinders, or obstructs end-user activities. Not all applications will be embraced by the user community. I also suspect government or industry regulations will pop up once ergonomic requirements are established.
I do believe there are plenty of industrial apps for embedded vision. Take video surveillance, for example: the processing of acquired images for facial recognition is making big advances in embedded-vision algorithms. Last week there was an announcement of recognition against 36 million faces per second (not sure what the compute resources behind that number are!).
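At its core, that kind of large-scale face matching is typically a nearest-neighbor search over face embeddings. A minimal sketch, assuming faces have already been converted to fixed-length embedding vectors (the 128-dimensional size, the `best_match` helper, and the toy random gallery here are all illustrative assumptions, not the announced system's actual method):

```python
import numpy as np

def best_match(query, gallery):
    """Return (index, score) of the gallery embedding most similar
    to the query embedding, using cosine similarity."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q          # one dot product per gallery face
    idx = int(np.argmax(scores))
    return idx, float(scores[idx])

# Toy gallery of 5 face embeddings (random stand-ins for real ones).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))

# A query that is a slightly noisy copy of face 3 should match face 3.
query = gallery[3] + 0.01 * rng.normal(size=128)
idx, score = best_match(query, gallery)
```

Real systems at that scale would replace the brute-force dot products with an approximate nearest-neighbor index, but the matching principle is the same.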
It actually works very well.
But of course, as every reporter knows, it's more important to understand what your interviewee is saying right on the spot, to ask for clarifications, and to take good notes than to depend on your recording or scribbles!
Replay available now: A handful of emerging network technologies are competing to be the preferred wide-area connection for the Internet of Things. All claim lower costs and power use than cellular, but none has wide deployment yet. Listen in as proponents of the leading contenders make their case to be the metro or national IoT network of the future. Rick Merritt, EE Times Silicon Valley Bureau Chief, moderates this discussion. Join in and ask his guests questions.