@chanj, as to how it's done, you can refer to this Embedded Vision write-up:
OmniVision's OV4682 captures both video and IR information. At the core of Google's Project Tango sits a Movidius vision-processing chip.
When I was reading this I couldn't help but compare it to the Kinect sensor package. When that came out as a consumer device, it was hacked into service as a general-purpose sensor in almost no time, which tells me there is real interest in this. How does Tango compare as a sensor package? It looks like it is, if nothing else, a much more accessible set of sensors, but I don't have a good feel for the comparable quality of those sensors.
chanj0: about your question [How the depth map is being sensed?], I think the Google Tango tablet uses the OmniVision OV4682 3D sensor to record RGB and IR information (and the OV7251 for motion tracking).
The OV4682 derives the depth map from infrared (IR). Somebody please check whether I am on the right track: I think Google Tango projects a grid of IR beams, which shows up on objects (e.g., a person walking on a staircase) as a pattern of dots. The OV4682 then captures this pattern and processes it for depth, where the dots appear denser on near objects and sparser on distant ones. This approach (dot pattern) is known as structured light.
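In case it helps to picture how depth falls out of a projected dot pattern, here's a toy triangulation sketch. The idea (standard structured-light/stereo geometry, not anything Tango-specific; all numbers are made up) is that a dot's horizontal shift from its reference position, together with the projector-camera baseline and the focal length, gives depth:

```python
# Hypothetical structured-light triangulation. An IR projector and IR camera
# sit a known baseline apart; a dot's disparity (pixel shift from its
# reference position) yields depth via the classic relation depth = f*b/d.
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return depth in meters for one projected dot."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: a 500 px focal length, 7.5 cm baseline, 25 px disparity.
print(depth_from_disparity(500.0, 0.075, 25.0))  # → 1.5 (meters)
```

Note the inverse relationship: nearby objects produce large disparities (and, for a diverging projector, tighter dot spacing), while far objects produce small ones, which is why depth resolution degrades with distance.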
Another depth-sensing approach is time-of-flight (ToF), which measures the time it takes light to strike the object and return. ToF can be more robust than structured light, since structured light may fail to illuminate some surfaces and often requires calibration. If I were to make a wild guess, Google Tango may start with structured light to get the ball rolling, then switch to ToF.
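The ToF math itself is just the round-trip travel time of light; a minimal sketch (generic physics, not any particular sensor's pipeline):

```python
# Time-of-flight depth: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a measured round-trip pulse time (seconds) to distance (m)."""
    return C * round_trip_s / 2.0

# A pulse returning after ~6.67 nanoseconds traveled from roughly 1 m away.
print(round(tof_distance_m(6.67e-9), 2))  # → 1.0
```

The tiny timescales involved (nanoseconds per meter) are why practical ToF sensors measure phase shift of modulated light rather than timing individual pulses directly.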
chanj0 - How is the depth map being sensed? Laser or stereo vision?
By chance I was looking at some old papers on 3D collected in the book 3D Model Recognition from Stereoscopic Cues, Mayhew and Frisby et al.
3D can also be recovered by changing the depth of field in the image, which effectively sweeps the plane of focus through the scene. This is mostly done in medical imaging: focusing down through the slide and building an image stack.
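This depth-from-focus idea can be sketched in a few lines: for each pixel, find the slice of the focus stack where local contrast peaks, and that slice index maps to depth. The toy below uses 1-D "images" and a crude neighbor-difference sharpness measure (my own simplification, not production code):

```python
# Toy depth-from-focus: pick, per pixel, the focus-stack slice where local
# contrast is highest; the winning slice index serves as a depth estimate.
def sharpness(row, i):
    # Crude contrast measure: absolute differences with immediate neighbors.
    left = abs(row[i] - row[i - 1]) if i > 0 else 0
    right = abs(row[i] - row[i + 1]) if i < len(row) - 1 else 0
    return left + right

def depth_index_map(stack):
    """stack: list of equal-length 1-D images, one per focus setting."""
    width = len(stack[0])
    return [max(range(len(stack)), key=lambda s: sharpness(stack[s], i))
            for i in range(width)]

# Made-up stack: the bright feature is in focus at a different slice
# depending on position (ties go to the earlier slice).
stack = [[0, 9, 0, 0], [0, 0, 9, 0], [0, 0, 0, 9]]
print(depth_index_map(stack))  # → [0, 0, 1, 1]
```

Real implementations use a 2-D sharpness operator (e.g. Laplacian variance) and interpolate between slices, but the principle is the same.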
3D photogrammetry has been a passion of mine for some time. I use it to capture images of pipe organs and other items (museum objects like the Jaquet-Droz dolls and the Antikythera mechanism), then build virtual models of the structure.
3D has faded in and out on a decade-or-so cycle for the last 230 or so years. The problem is that only about 80 percent of the population can see it efficiently. I have 3D photos on my iPhone and website, and I usually carry the glasses with me. I've been doing this for close to 40 years.
There are ways of generating 3D without aids, like lenticular screens. Those are a patent nightmare, as too many people have attempted to patent and control the obvious, which in turn makes them too expensive.
A tablet is a device with more room for innovation, and it's also very handy for the user. You don't have to carry along a bulky device like a notebook or laptop, and you don't have to limit yourself to the smaller screen of a smartphone. But tablets need more innovation/improvement on the camera side and in phone communication, and of course in ease of holding. Battery power is another thing that needs more innovation. Memory or storage capacity is also something that worries me.
It's good that Google is keeping it open for developers. It's futuristic.
Going by product success in 3D, people have yet to warm up to the technology. I think the major hurdle is making 3D more interactive with little or no gadgetry attached to the person. If we can achieve that, it will be a revolution in how we consume information.
NASA's Orion Flight Software Production Systems Manager Darrel G. Raines joins Planet Analog Editor Steve Taranovich and Embedded.com Editor Max Maxfield to talk about embedded flight software used in Orion Spacecraft, part of NASA's Mars mission. Live radio show and live chat. Get your questions ready.