PORTLAND, Ore. — Startup Aquifi Inc., of Palo Alto, Calif., claims its "Fluid Experience Technology" is poised to render obsolete the 3D gesture interfaces built on custom sensors -- including Microsoft's Kinect, Apple's PrimeSense, Leap Motion's standalone controller, and Google's Flutter. By combining computer vision, machine learning, and cloud services, Aquifi says it has developed a superior software-only gesture recognition system that uses the high-definition cameras already in smartphones, tablets, PCs, and smart TVs. The system can also be embedded in screenless devices fitted with dual HD cameras, such as Google's Nest thermostat, cars, and wearables.
Tony Zuccarino, vice president for sales and marketing at Aquifi, told EE Times:
Our main purpose was to create a user interface that understands the environment and the user so that it can react to you seamlessly, instinctively, and fluidly. We have blended together computer vision and machine learning in the local device, for adapting to a specific user, then access smart apps in the cloud where it accumulates knowledge from all users then pushes that knowledge back down into every user's device. In that sense, it is constantly learning from our user base, so the apps get smarter the more users we get -- sort of how Google Voice gets better over time.
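Aquifi has not published implementation details, but the loop Zuccarino describes -- devices adapting locally, a cloud service pooling what they learn and pushing the result back down -- can be sketched as follows. All class and parameter names here are hypothetical, chosen only to illustrate the pattern:

```python
# Illustrative sketch only -- Aquifi has not disclosed its architecture.
# Each device adapts a local model to its user; the cloud averages what
# all devices learned and pushes the merged model back to every device.

class Device:
    def __init__(self, global_params):
        self.params = dict(global_params)  # start from the shared model

    def adapt(self, user_samples):
        # Local learning: nudge each parameter toward this user's data.
        for key, value in user_samples.items():
            self.params[key] = 0.9 * self.params[key] + 0.1 * value


class Cloud:
    def aggregate(self, devices):
        # Pool every device's locally adapted parameters into one model...
        keys = devices[0].params
        merged = {k: sum(d.params[k] for d in devices) / len(devices)
                  for k in keys}
        # ...then push the merged knowledge back down to each device.
        for d in devices:
            d.params = dict(merged)
        return merged
```

In this sketch, every new device benefits from the pooled model even before its own user has produced any data -- the "apps get smarter the more users we get" effect Zuccarino compares to Google Voice.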
Instead of requiring touch to recenter a GPS map display while driving, Fluid Experience Technology tracks where the user is looking on the screen.
Aquifi was founded by the creators of Canesta (now owned by Microsoft), inventors of the time-of-flight 3D sensor at the heart of Microsoft's Xbox Kinect. It was funded, starting in 2011, with $9 million from Benchmark Capital and private investors including Blake Krikorian (founder of Sling Media) and Mike Farmwald (co-founder of Rambus).
The vision that Aquifi’s founders saw was the ability to do 3D tracking and gesture recognition that adapts to the user with nothing more than software and the HD cameras already in the user's device. While Aquifi's Fluid Experience Technology can perform some functions with a single HD camera, its 3D tracking capabilities require that devices have two HD cameras.
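The dual-camera requirement follows from how stereo vision recovers depth: the same point appears shifted between the two images, and that pixel disparity, together with the cameras' focal length and separation, gives the distance via triangulation (Z = f · B / d). Aquifi has not disclosed its math, so this is only the standard textbook relationship, sketched minimally:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: Z = f * B / d.

    focal_px     -- camera focal length, in pixels
    baseline_m   -- separation between the two cameras, in metres
    disparity_px -- horizontal pixel shift of the same point
                    between the left and right images
    """
    if disparity_px <= 0:
        raise ValueError("point must be visible in both views")
    return focal_px * baseline_m / disparity_px

# A fingertip seen 20 px apart by two cameras 6 cm apart,
# with a 1000 px focal length, sits 3 m away.
print(depth_from_disparity(1000, 0.06, 20))  # 3.0
```

A single camera provides only the 2D direction to a hand, which is why Aquifi's full 3D tracking needs two HD cameras while some simpler functions work with one.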
Aquifi's software runs on the user's device to track his or her face, hand, and fingers, then accesses smart gesture recognition apps in the cloud.
Zuccarino told us:
The crux of our vision is using existing commodity HD cameras for a human interface that adapts to the user, rather than making the user adapt to a machine's interface. Gesture interfaces today are very inflexible, built with custom ICs, which makes them expensive, and the gestures they recognize are static and can only be used in specialized applications. But today's HD cameras provide a full-color, high-resolution image of the user and their environment. By making our solution 100 percent software using the data from already existing sensors, we hope to eventually obsolete all the custom hardware solutions -- in terms of capability and certainly in terms of cost, form factor, and power consumption.
Aquifi's software can interpret user movements over a wide area, rather than requiring the user to stand directly in front of the device as custom sensor solutions do. Not only does it locate the user's head, hands, and fingers, it also tracks hand gestures and body positions, identifies whose face it is viewing, and determines the direction the user's eyes are looking.
"The user does not have to be centered in front of the device, because our software tracks where the user is located and adapts to the user, so they can control their devices from their current position, using real-time machine learning to locate the user's head, hand, and fingers," says Zuccarino.
The company claims to have multiple major original equipment manufacturers (OEMs) on board and will begin public demonstrations later this year, with the first commercial Aquifi-enabled devices expected in the first half of 2015.
Aquifi has filed more than 35 patent applications covering its Fluid Experience Technology, four of which have already been granted.
— R. Colin Johnson, Advanced Technology Editor, EE Times