Cadillac of gesture recognition
As much as hard-core gamers might value precision, it matters less for gameplay itself than for game development and special-effects film animation. For animation pros, the Cadillac of gesture recognition is Xsens Technologies’ MEMS-studded bodysuit. The accelerometers and gyros on the suit enable previsualization of animation sequences in real time.
Xsens uses Analog Devices’ high-precision three-axis accelerometers, gyroscopes and magnetometers for detailed motion tracking. Used by the pros who created the effects for the movie “Iron Man 2” and the PS3 game “Killzone 2,” for example, the Xsens technology offers a motion capture solution that can be used anywhere, without the need for a complex infrastructure. Eventually, Xsens predicts, the technology will be cost-reduced for consumer applications, enabling a Kinect-like experience but with far higher fidelity and with no limit on the number of players.
“Microsoft’s Kinect is an elegant solution, since it does not require any sensors on the body, but as a result it is slower and sometimes sluggish in tracking human gestures,” said Casper Peeters, CEO of Xsens (Los Angeles). “Our technology is much more flexible in terms of where you can use it, and it achieves higher fidelity in tracking the wearer’s precise movements. But motion-based game controllers and phone interfaces are just beginning to emerge. Xsens operates at the other end of the spectrum, enabling high-end motion capture for precise character animation, with many more interesting applications emerging in the future.”
Microsoft, also with an eye on the future, plans to harness the 3-D tracking technology it obtained when it acquired 3DV Systems and Canesta in 2009 and 2010, respectively. Those companies have virtually cornered the market in time-of-flight gesture-recognition patents, especially for mobile devices.
Time-of-flight sensors measure the time it takes an infrared beam to bounce off objects and return to a dedicated CMOS sensor, yielding an accurate per-pixel 3-D depth map of a scene that is largely independent of ambient lighting. Time-of-flight depth map technology also dovetails nicely with the 3-D camera-based gesture recognition algorithms Microsoft developed through its GestureTek license.
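The underlying arithmetic is simple: a pixel's depth is half the round-trip distance the pulse travels at the speed of light. A minimal sketch, with sensor details (resolution, timing precision) assumed for illustration:

```python
# Time-of-flight depth estimation: each pixel measures the round-trip
# time of an infrared pulse; depth is half the round-trip distance.
# Illustrative sketch only -- real sensors work with modulated light
# and phase differences rather than raw timestamps.

C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Convert a measured round-trip time to distance in meters."""
    return C * t_seconds / 2.0

def depth_map(round_trip_times):
    """Convert a 2-D grid of per-pixel round-trip times to meters."""
    return [[depth_from_round_trip(t) for t in row]
            for row in round_trip_times]

# A pulse returning after ~6.67 ns corresponds to roughly 1 m.
times = [[6.67e-9, 1.33e-8],
         [2.00e-8, 6.67e-9]]
print(depth_map(times))
```

The nanosecond scale of the round trip is why these sensors need specialized CMOS pixels rather than ordinary image sensors.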
TriDiCam GmbH (Duisburg, Germany) and a few others claim to have time-of-flight sensor capability. But thus far only Canesta has proved the concept, using a CMOS image sensor to create a precise 3-D image map of hands hovering just inches above a mobile device, even outside in bright sunlight.
Companies such as Silicon Labs Inc. (Austin, Texas), meanwhile, have inexpensive infrared and ambient-light sensors for recognizing application-specific gestures, such as turning on a display or adjusting a volume level by drawing a line in the air with a finger.
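The application-specific gestures described above can be decoded with very little logic. The sketch below detects a swipe from two infrared proximity channels by checking which channel reflects first; the channel names and threshold are assumptions for illustration, not any vendor's actual API:

```python
# Illustrative swipe detection from two IR proximity channels
# (left and right). A passing finger raises the reflection on the
# channel it crosses first. Threshold and sample format are assumed.

def detect_swipe(left_samples, right_samples, threshold=0.5):
    """Return 'left-to-right', 'right-to-left', or None, based on
    which channel first rises above the reflection threshold."""
    def first_crossing(samples):
        for i, value in enumerate(samples):
            if value > threshold:
                return i
        return None

    l = first_crossing(left_samples)
    r = first_crossing(right_samples)
    if l is None or r is None or l == r:
        return None
    return "left-to-right" if l < r else "right-to-left"

# A finger moving left to right peaks on the left channel first.
left  = [0.1, 0.7, 0.9, 0.4, 0.1, 0.0]
right = [0.0, 0.1, 0.4, 0.8, 0.9, 0.2]
print(detect_swipe(left, right))  # left-to-right
```

Mapping the detected direction to an action (raise or lower the volume, wake the display) is then a one-line lookup in the application.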
With OEM algorithm support, Microsoft's Surface can read 'hovering' gestures. The redesigned panel can hang on a wall.