As DrQuine already noted, "gesture recognition" for many user communities brings to mind large-scale hand gestures such as pointing at a location or football referee signals. While the ultrasonic approach described in this article is interesting, a title such as "Wolfson to Use Bat-Like Sonar to Replace Contact Keyboards" might have captured the scope better.
It is an interesting and open question whether more complex bat-inspired methods (chirping comes to mind) could outperform passive and active optical approaches in key performance areas such as power consumption and resolution. Passive optical faces the power-draining need for real-time image parsing, while active structured-light methods (e.g. Kinect and Leap) incur illumination costs. Bats, meanwhile, use ultrasound for high-precision, real-time location of both their environment (e.g. caves) and tiny prey (insects), and do so at considerable distances. So biology certainly offers an existence proof for long-distance ultrasonic scene and gesture analysis.
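To make the chirping idea concrete, here is a minimal sketch of the pulse-compression trick bats exploit: a linear FM chirp plus a matched filter gives range resolution set by the sweep bandwidth (roughly v/2B) rather than by pulse length. The sweep frequencies and target distance below are my own illustrative assumptions, not figures from the article.

```python
import numpy as np

fs = 1_000_000          # sample rate, Hz
v = 343.0               # speed of sound in air, m/s
T = 0.005               # chirp duration, s
f0, f1 = 30e3, 80e3     # sweep 30 kHz to 80 kHz (bandwidth B = 50 kHz)

# Transmitted linear FM chirp
t = np.arange(int(T * fs)) / fs
chirp = np.cos(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))

# Simulated echo from a target 0.5 m away (round-trip delay = 2d / v)
d = 0.5
delay = int(round(2 * d / v * fs))
echo = np.zeros(delay + len(chirp))
echo[delay:] += chirp

# Matched filter: correlate the echo against the transmitted chirp;
# the correlation peak marks the round-trip delay
corr = np.correlate(echo, chirp, mode="valid")
est = np.argmax(np.abs(corr)) / fs * v / 2
print(f"estimated range: {est:.3f} m")
print(f"theoretical range resolution ~ {v / (2 * (f1 - f0)) * 100:.2f} cm")
```

Whether that signal-processing budget can beat an optical pipeline on power is exactly the open question.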
However, my suspicion is that physics will intervene to make long-distance ultrasonic spatial perception a weak competitor in small devices.
For one thing, ultrasound would almost certainly require rapid-cycle flash illumination of large areas at power-draining high ultrasonic frequencies. But perhaps even more limiting is the antenna issue. Wavelengths are not a big issue for compact optical receivers, but for ultrasound even a 33 kHz echo has a wavelength of about a centimeter. Thus, to provide adequate resolution, an ultrasound receiver will require single- or multi-point antennas that span scales not much different from small mobile devices themselves. Or to put that last point more colloquially while picking unfairly on singular antennas: would anyone other than an avid DC Comics fan really want a bat-eared mobile phone?
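To put numbers on that, here is a quick back-of-envelope sketch (the frequencies above 33 kHz and the 1-degree resolution target are my own illustrative assumptions):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def wavelength_m(freq_hz: float) -> float:
    """Acoustic wavelength in air: lambda = v / f."""
    return SPEED_OF_SOUND / freq_hz

def aperture_for_resolution_m(freq_hz: float, angle_rad: float) -> float:
    """Diffraction-limit estimate: angular resolution theta ~ lambda / D,
    so the receiver aperture D must be ~ lambda / theta."""
    return wavelength_m(freq_hz) / angle_rad

for f in (33e3, 100e3, 300e3):
    lam = wavelength_m(f)
    dia = aperture_for_resolution_m(f, math.radians(1.0))
    print(f"{f / 1e3:5.0f} kHz: wavelength = {lam * 100:5.2f} cm, "
          f"aperture for ~1-degree resolution = {dia * 100:5.1f} cm")
```

At 33 kHz the diffraction limit asks for an aperture of tens of centimeters, which is precisely the bat-ear problem.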
The referenced EETimes Europe article included an image to set expectations for the gesture recognition system: it shows a hand selecting one of several large blocks on a screen. That should be easy to manage with sonar (ultrasound), and it offers the significant benefit that no contact need be made with the screen, so the screen will not get covered with fingerprints or dirt (especially in an industrial environment).
When I think of gestures, I think of positional signals made with the hands (like sign language). That would be a much more difficult task to manage with ultrasound, especially if the user were at any distance. Likewise, fine motions (such as selecting a single character in a string to edit) will be difficult for a sonar system.
This is the fourth semiconductor company, that I've seen anyway, getting into the gesture sensor business. So far, Maxim, Vishay, and recently Microchip (with what may very well have been an SMSC product) have visited my company. And all have different ways of going about it.
Anyway, it's interesting. Companies generally don't make technology investments on the come. Something big must be about to break in this area.