PARIS — The Versatile Extra-Sensory Transducer (VEST) under development at Rice University and Baylor College of Medicine represents a new form of wearable haptics, one that could find many applications beyond its initial goal of enabling the profoundly deaf to “feel” and understand speech through their body.
Led by PhD student Scott Novich under the supervision of David Eagleman, director of the Laboratory for Perception and Action at Baylor College of Medicine and an adjunct assistant professor of electrical and computer engineering at Rice University, the VEST is the result of a clever mix of electronics and neuroscience. Eagleman's research focuses on how the brain constructs perception, and one of his research topics is sensory substitution, whereby sensory data is fed through unusual sensory channels.
Decades of research have shown that, regardless of the sensory channel, the brain is able to adapt and learn to extract meaningful information from unobvious sensory inputs (consider reading braille through touch), and the VEST expands on that.
In a poster titled "VEST: A Vibrotactile Sensory Substitution Device for the Deaf" at last year's IEEE Haptics Symposium in Houston, Novich presented a wearable vest laden with a network of small mass motors distributed throughout the fabric, 24 in total.
As incoming speech is captured by an Android smartphone, the recording is compressed and streamed as 20ms audio frames to the vest over Bluetooth. The audio frequencies from 0 to 4kHz are split into 24 bands, each assigned to a particular vibration motor, in effect mapping each audio signal into unique and complex patterns of vibration on the wearer's torso.
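The published paper details the actual transcoding algorithm; the following is only a minimal illustrative sketch of the idea as described, assuming a simple FFT band-energy split into 24 equal-width bands and a 3-bit quantization per motor (the band boundaries, energy measure, and normalization here are this sketch's own assumptions, not the authors' method):

```python
import numpy as np

SAMPLE_RATE = 8_000          # Hz: covers the 0-4 kHz range described in the article
FRAME_MS = 20                # one 20 ms audio frame streamed over Bluetooth
N_MOTORS = 24                # one vibration motor per frequency band
N_LEVELS = 8                 # 8 discrete vibration levels = 3 bits per motor

def frame_to_motor_levels(frame: np.ndarray) -> np.ndarray:
    """Map one audio frame to 24 discrete vibration levels (0-7).

    Illustrative only: splits the 0-4 kHz spectrum into 24 equal-width
    bands, takes the log energy of each, and quantizes it to 3 bits.
    """
    spectrum = np.abs(np.fft.rfft(frame))        # magnitude spectrum, 0..4 kHz
    bands = np.array_split(spectrum, N_MOTORS)   # 24 contiguous frequency bands
    energy = np.array([np.log1p(np.sum(b ** 2)) for b in bands])
    # Normalize to [0, 1], then quantize to the 8 vibration levels.
    span = energy.max() - energy.min()
    norm = (energy - energy.min()) / span if span > 0 else np.zeros_like(energy)
    return np.minimum((norm * N_LEVELS).astype(int), N_LEVELS - 1)

# Example: a pure 1 kHz tone mostly excites the band containing 1 kHz.
t = np.arange(int(SAMPLE_RATE * FRAME_MS / 1000)) / SAMPLE_RATE
levels = frame_to_motor_levels(np.sin(2 * np.pi * 1000 * t))
print(levels.shape, levels.max())  # 24 motor levels, each in 0..7
```

Each 20ms frame thus yields one 24-element pattern of vibration intensities, and the stream of successive patterns is what the wearer's torso feels.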
Intuition would suggest that these haptic patterns were designed to mimic actual sound waves as we might feel them on our torso in front of a very loud speaker. Not so: the physical layout of the mass motors across the torso doesn't even matter, and the overall haptic effect created is something completely new and unexplored, a pattern described as too complicated to translate consciously.
Yet, initial trials with a completely deaf participant showed that within a few hours of training, distinguishable vibration patterns were consistently associated with discrete words.
“There is no underlying rule for designing the network of mass motors across the VEST,” confirmed Novich during a phone interview with EE Times Europe. “Because we have sensory receptors all over the skin, it doesn’t matter where we put the actuators as long as they are separated far enough to be resolved individually, so they can't just be packed close to each other. It may still be the case that a logical layout (i.e., tonotopic) makes training much easier or faster. But we transcode the sound in such a way that all the information reaches the brain.”
This last bit is the most important one, as the brain is then able to figure out, through sensory substitution, the meaning of all this. For now, each of the 24 frequency-specific mass motors receives a 3-bit input (8 discrete levels of vibration), not much, but enough to carry all the discernible speech information (mainly retaining pitch).
“If you do the maths, over 10 to 20ms windows, that’s about 3,000 bits per second, or the equivalent of a low-bitrate speech codec,” Novich told us by way of comparison. As for the number of actuators, the researchers settled on 24 as a practical experimental setup that would still yield exploitable results.
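The back-of-envelope arithmetic behind that figure is easy to reproduce: 24 motors carrying 3 bits each gives 72 bits per frame, and at the 10 to 20ms frame rates cited, the channel lands in the same low-thousands-of-bits-per-second ballpark as Novich's estimate:

```python
# Back-of-envelope information rate of the vest's haptic channel.
N_MOTORS = 24        # frequency bands / vibration motors
BITS_PER_MOTOR = 3   # 8 discrete vibration levels per motor

bits_per_frame = N_MOTORS * BITS_PER_MOTOR       # 72 bits per audio frame

for frame_ms in (10, 20):                        # the 10-20 ms windows cited
    frames_per_s = 1000 / frame_ms
    rate = bits_per_frame * frames_per_s
    print(f"{frame_ms} ms frames -> {rate:.0f} bits/s")
# 10 ms frames -> 7200 bits/s
# 20 ms frames -> 3600 bits/s
```

A few thousand bits per second is indeed the territory of low-bitrate speech codecs.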
“The more you can fit the better, that’s more information and less compression, but then you have to figure out how to wire the thing up and you want to strike the right compromise for energy and weight too,” added Novich.
Such a vest could find its way into many other applications, transcoding virtually any type of data, not just the sensory data our bodies are natively tuned to.
In fact, wearing this haptic vest, a person could subconsciously pick up on any sort of machine data and make sense of it according to patterns he or she has been trained to identify or interpret, making the VEST a truly universal data-to-brain transcoding machine.
“Instead of focusing visually on many different elements one at a time, a pilot could receive the entire flight data from the cockpit to feel the entire system in his body and get a general intuition of the flight conditions,” said Novich.
With the help of a team of electrical and computer engineering undergraduates at Rice, as part of their senior design project, the PhD candidate is looking at using up to 40 piezoelectric actuators for a lighter, less obtrusive design that could be worn inconspicuously by deaf people. The VEST would hold a strong price advantage over today's cochlear implants while benefiting many other applications.
“Our prototypes are already fairly close to something we could hand out to people to use,” Novich said, adding that the lab has just spun out a company, Neosensory, and is currently raising funds to finalize development and bring a product to market.
—Julien Happich is editor in chief of EE Times Europe.
Article originally posted as "Feeling augmented" on EE Times Europe.