The world of user interfaces is rapidly expanding: no longer limited to touch, it now encompasses a wide variety of gestures and speech assistants. What can we expect from these advancements?
“Alexa, play the motion picture soundtrack to Moana.” Just two years after Amazon announced Amazon Echo, “Alexa” is part of the common lexicon. “Okay, Google,” “Siri” and “Hey, Cortana” have joined Alexa in the panoply of speech assistants as consumers migrate toward touchless user interfaces – which now include gesture as well as voice. What can we expect from these new user interfaces – and how will MEMS and sensors suppliers help us to get there?
First, let’s not disparage touch completely. It makes good sense for personal computing devices, among other keyboard-based applications, and will remain dominant for some time to come. Still, there are clear instances when voice is more convenient. And for me, the bellwether is always what my teenage girls are doing – typing/swiping or using voice. For now, they still grip their smartphones tightly, but I do spot them touching the microphone icon to use speech-to-text at times.
Voice lends itself to selecting music, TV shows, and movies. That’s because there is a “massive unstructured database of content that is hard to navigate through hierarchical windows,” says Matt Crowley, CEO of Vesper. Crowley adds that use cases that involve simple tasks such as setting a timer, asking for weather forecasts, or opening a door are well-suited to voice.
Always-listening devices such as smart speakers and smart earbuds are also ideal for voice. The catch, however, is that the always-listening MEMS microphones that are essential to such devices must be power-efficient. MEMS microphone makers Vesper and InvenSense approach the power-consumption issue differently.
Vesper offers always-listening piezoelectric MEMS microphones, which use sound energy itself to wake devices from sleep while consuming nearly zero power. InvenSense advocates higher levels of integration – for example, building the analog-to-digital converter (ADC) into the microphone itself, saving the power that a separate ADC would otherwise consume.
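A piezoelectric microphone performs this wake decision in the analog domain, with essentially no digital logic running. As a rough illustration of the equivalent behavior, here is a minimal Python sketch of energy-based wake gating: the system "sleeps" until a frame's acoustic energy crosses a threshold. The threshold value and frame size are illustrative assumptions, not figures from any vendor's part.

```python
import math

WAKE_THRESHOLD = 0.1  # illustrative RMS level; real parts set this in analog circuitry


def rms(frame):
    """Root-mean-square level of one audio frame (samples in [-1.0, 1.0])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))


def should_wake(frame, threshold=WAKE_THRESHOLD):
    """Return True when acoustic energy is high enough to wake the host processor."""
    return rms(frame) >= threshold


silence = [0.0] * 256
speech = [0.5 if i % 2 else -0.5 for i in range(256)]

print(should_wake(silence))  # False: stay asleep, near-zero power draw
print(should_wake(speech))   # True: wake the host for keyword detection
```

The power saving comes from where this decision is made: in a piezoelectric design the sound energy itself trips the threshold, so nothing downstream needs to run until there is something worth hearing.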
Voice isn’t the whole story, though: new gesture technologies are coming to market – and designers are racing to incorporate them.
MEMS-based ultrasound time-of-flight sensors enable consumer devices – including virtual reality (VR)/augmented reality (AR) systems – to sense motion, depth, and position of objects in three-dimensional space. Because most existing 3D sensing technologies are based on light, either visible or infrared, they have trouble with sunlight, which tends to overload optical receivers. They also have a hard time detecting dark-colored or optically transparent surfaces (like glass windows). In comparison, ultrasound works in any lighting condition, isn’t sensitive to object color, sees all solid objects, and is very low-power because there is little background noise at ultrasonic frequencies. “The challenge for ultrasound is that it’s a new technology for consumer electronics, so many customers aren’t familiar with its capabilities,” said David Horsley, CTO, Chirp Microsystems.
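The ranging principle behind these sensors is simple: emit an ultrasonic pulse, time how long the echo takes to return, and halve the round trip. A minimal sketch of that arithmetic follows; the speed-of-sound constant assumes dry air at roughly room temperature, and real sensors compensate for temperature and other effects.

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at about 20 °C; varies with temperature


def distance_m(round_trip_s):
    """Range (meters) from a pulse-echo time-of-flight measurement."""
    # Halve the round trip: the pulse travels out to the object and back.
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0


# A round trip of about 5.83 ms corresponds to roughly 1 m of range
print(round(distance_m(0.00583), 2))
```

The millisecond-scale round trips are part of why ultrasound is so power-friendly: the transducer fires briefly, listens briefly, and can stay idle the rest of the time.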
In scenarios where a touch interface demands too much attention – such as in the car, where hunting for the right spot to touch distracts the driver – Horsley says it’s appealing to imagine making adjustments with a wave of your hand rather than fussing with a little screen. He adds that ultrasonic sensing for VR/AR can support “inside-out tracking” of controllers or input devices with six degrees of freedom, allowing users to interact with the VR/AR environment without being tethered to a base station or confined to a prescribed space.
Machine learning is taking user interfaces one step further, particularly in applications such as AR/VR. “The evolution of user interfaces has taken us from pounding at keyboards to pawing at slabs of glass. What’s next?” asks David Allen, president of Virtuix. “The answer is mixed reality – a blend of the virtual and the real. Today, you can point your smartphone camera at something real and see a crude virtual overlay on it – as in Snapchat Lenses or Pokémon Go. Tomorrow, our devices will project onto the real world a crisp user interface. The enabling technology, made possible by MEMS and sensors, is simultaneous localization and mapping or ‘SLAM.’” Expect to see more from SLAM-based applications in the future, including mobile robotics and autonomous vehicles.
Nicolas Sauvage, senior director of ecosystem at TDK, made me question whether I can tell the difference between my dog and a breakfast food. Sauvage cited an example of machine learning from a recent presentation by Andreessen Horowitz, a venture capital firm. “With machine learning, someone can point their smartphone at a person, their dog or an object, and he or she will see information or funny filters/animations on top of that focal point.” Sauvage points out that machine learning is much better at recognizing what is a dog and what is, say, a muffin. In fact, according to the firm’s presentation, the best algorithms that programmers hand-coded could tell the difference only 72% of the time, while machine learning got the right answer 93% of the time!
Where’s the dog and where’s the muffin? Machine learning gets the correct answer far more often than hand-coded algorithms. (Source: ImageNet)
With consumers eager to interact with the digital world more freely and naturally, we can expect more alternative user interfaces that use MEMS and sensors to make devices smarter and more environmentally aware. We can also expect MEMS and sensors to drive higher-quality data back to electronic devices for machine learning or artificial intelligence, providing more salient information, and at times, more humor, as we interface with the world around us in ever-changing ways.
Despite the gains we have already made with new user interfaces, what is less clear is how the technology industry will handle the potential privacy issues of electronic devices that are always on. This is such a rich topic that I will dedicate my next blog post to it.
-- Karen Lightman is vice president, MEMS & Sensors Industry Group, SEMI.