SAN JOSE, Calif. – Microsoft confirmed it is developing hardware to bring its Kinect gesture interface to Windows. The company also launched a competition that will award $20,000 each to the ten startups with the best ideas for using Kinect for Windows.
The news comes three weeks after Microsoft said it would extend the Kinect gesture interface from its Xbox 360 console to PCs in 2012. At that time it rolled out a beta software developer's kit for Kinect for Windows and launched a Web site for the project.
Kinect was widely hailed as one of the fastest-selling consumer products ever. It breathed new life into the Xbox 360 and Microsoft generally, which had fallen behind Apple and its Android rivals, the pioneers of touch-screen interfaces in smartphones and tablets.
In a recent blog post, Microsoft's general manager for Kinect, Craig Eisler, said "we have optimized certain hardware components and made firmware adjustments which better enable PC-centric scenarios."
Specifically, Microsoft has enabled a "near mode" for Kinect, allowing "the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters, one of the most requested [Kinect] features," he said.
Presumably such a mode could open the door to alternatives to keyboards and mice for navigating PCs, tablets and smartphones. For years, startups such as Canesta designed projection keyboards and other interfaces, creating interesting demos that failed to gain market traction.
In addition, Eisler said Microsoft has shortened "the USB cable [for Kinect] to ensure reliability across a broad range of computers and [included] a small dongle to improve coexistence with other USB peripherals."
Vendors who license Microsoft's Kinect for Windows program will get access to Microsoft's ongoing updates in both speech and human tracking, he added.
Microsoft has worked on speech recognition for years, one of the pet projects of co-founder Bill Gates, but to date has not had much to show for its work. Apple recently released speech recognition capabilities for its iPhone 4S based on its acquisition of startup Siri.
The new Kinect Accelerator program "will give 10 tech-oriented companies using Kinect (on either Windows or Xbox 360) an investment of $20,000 each, plus a number of other great perks," said Eisler.
Applications are being accepted through January 25. Microsoft is planning an event that will let winners demo their products for potential investors.
Microsoft said more than 200 companies are already participating in its Kinect for Windows program. In a recent blog post, Microsoft co-founder Gates reiterated his long-held view that natural user interfaces such as gestures will be crucial to the future of computing.
"Kinect + Windows will give [developers] the tools they need to develop novel solutions for everything from training employees to visualizing data, from configuring a car to managing an assembly line," said Eisler.
Meanwhile, chip makers Nvidia and Qualcomm have been competing to establish the still-emerging area of gesture and augmented reality interfaces for smartphones and tablets.
In July, Qualcomm acquired GestureTek and has since demonstrated gesture recognition using its ultrasound technology.
In a recent interview with EE Times, Phil Carmack, vice president of Nvidia's mobile group, said Nvidia is committed to working on augmented reality and gesture recognition.
"I like gesture recognition, and I hate getting smudgies all over my smartphone screen," he said. "Taking [mobile interfaces] to the next level will involve cameras and voice and touch to make a better experience," he said.