With two independent outputs (amplitude and distance), it is now possible to remove the ambiguity between certain types of gestures. At its simplest, consider a single sensor and two gestures. When a user moves a hand sideways through the field of view of a conventional amplitude-based optical sensor, the signal varies from very low (as the hand begins to reflect light back to the sensor), to very high (as the hand passes over the middle of the sensor and light reflects back from all parts of the illuminated target), then back to very low as the hand exits the field of view. Exactly the same waveform appears if a hand comes down vertically: a low signal when it is far away, a high signal when it gets close to the sensor, and a low signal again when the hand exits the field of view, whether vertically or horizontally. The sameness of the sensor response makes these two gestures impossible to differentiate.
However, once distance data is added, the two gestures become clearly distinguishable, as shown in figure 3.
Figure 3: With time-of-flight measurements, multiple outputs eliminate ambiguity for gesture detection.
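To make the idea concrete, here is a minimal sketch of how the extra distance channel resolves the ambiguity. The function, field layout, and threshold are illustrative assumptions, not an actual STMicroelectronics driver API:

```python
# Hypothetical sketch: classifying a gesture from paired amplitude and
# distance samples returned by a single ToF sensor. Names and thresholds
# are illustrative, not taken from a real sensor driver.

def classify_single_sensor(samples):
    """samples: list of (amplitude, distance_mm) tuples over one gesture.

    Both a sideways swipe and a vertical approach produce the same
    low-high-low amplitude bell curve, so amplitude alone is ambiguous.
    The distance channel separates them: a sideways swipe stays at a
    roughly constant range, while a vertical approach shows the range
    dipping toward the sensor and rising again.
    """
    distances = [d for _, d in samples]
    distance_swing = max(distances) - min(distances)

    DISTANCE_SWING_MM = 40  # illustrative threshold

    if distance_swing < DISTANCE_SWING_MM:
        return "sideways swipe"    # hand stayed at ~constant range
    return "vertical approach"     # range dipped toward the sensor
```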
Building on this, we envisage a system with multiple ToF sensors spread around a screen or an interface, from which a very low-resolution depth map of the scene in front of the device can be built. A swipe and a flip can then be differentiated, as shown in figure 4 and in the sketch that follows it. Even though both gestures move in the same direction, a flip of the hand contains much more Z movement than a swipe; this movement cannot be detected by conventional optical sensors but is readily detected by a ToF sensor.
Figure 4: Measured sensor responses for swipe and flip gestures.
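A rough sketch of that multi-sensor logic is below. The sensor layout (a fixed left-to-right row of ToF sensors) and the thresholds are assumptions for illustration only:

```python
# Hypothetical sketch: several ToF sensors around a screen yield a very
# low resolution depth map per frame. A swipe and a flip move the same
# way laterally, but a flip carries far more Z (range) movement.

def classify_multi_sensor(frames):
    """frames: list of per-frame depth maps, each a list of distances (mm),
    one entry per sensor, in a fixed left-to-right order."""
    # Lateral motion: track which sensor sees the nearest object each frame.
    nearest = [min(range(len(f)), key=lambda i: f[i]) for f in frames]
    lateral_travel = abs(nearest[-1] - nearest[0])

    # Z motion: how much the closest range varies over the gesture.
    closest = [min(f) for f in frames]
    z_swing = max(closest) - min(closest)

    Z_SWING_MM = 60  # illustrative: a flip rotates toward/away from sensors

    if z_swing >= Z_SWING_MM:
        return "flip"     # strong Z component that amplitude-only sensing misses
    if lateral_travel >= 1:
        return "swipe"    # lateral movement with little range change
    return "unknown"
```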
About the authors:
Marc Drader is principal technologist for imaging systems for the Imaging Division of STMicroelectronics.
Laurent Plaza is business development manager for the Imaging Division of STMicroelectronics.
Courtesy of EETimes Europe