Some of you may be familiar with Apple's Siri voice commands, which let you operate an iPhone or iPad and search online, organizing your life through natural speech rather than a touch interface.
The Google Glass user interface likewise relies partly on voice commands, alongside a fairly basic built-in touchpad, including when users perform Internet searches.
Recently, e-commerce giant Amazon unveiled the Dash, a microphone-enabled barcode reader that helps users add items to their shopping lists. The device was given away free of charge to select members of Amazon's Prime Fresh loyalty program, who can now order groceries either by voice or by quickly scanning a product's container (a milk bottle, for example). Just say it and get it delivered to your home. "Never forget an item again -- Dash remembers so you don't have to," says AmazonFresh's website.
More broadly, the data you generate as you speak into these voice-activated devices is scrutinized well beyond its superficial user-interface function. Most of it transits from the application to the cloud, where more powerful data analytics can be performed, either to better match what you're looking for or simply to sharpen consumer profiles and prompt you with timely spending reminders.
Apple has already admitted that it keeps such data far longer than you might think reasonable for a mere user interface (up to two years). In effect, all these voice controls double as always-on eavesdroppers, adding another layer of intrusion into, and control over, your life.
Of course, at the hardware level, the audio processing engines and DSPs that support these voice commands are only enablers. But these chips are getting more powerful year on year while becoming more energy-efficient, and are thus finding their way into ever more electronic devices.