SAN FRANCISCO--Audio processor maker Audience is reverse engineering the human hearing system to make shouting down the phone in a noisy nightclub a thing of the past.
Audience’s earSmart voice processor uses two microphones that work like human ears, letting the chip analyze sound in much the same way as the human brain, which when processing audio decides what to remove, what to keep and what to enhance.
Using computational auditory scene analysis (CASA), the processor manages the characterization, grouping and processing of complex mixtures of sound, which means it can home in on the human voice and filter out background sounds, even if those sounds are overpoweringly loud. The system on a chip also suppresses echoes and other irritating audio effects that degrade call quality.
The chip then automatically equalizes and adjusts voice volume so a person can hear and talk naturally, whatever environment they’re in.
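Audience has not published the details of its algorithms, but the basic idea of using a second, noise-facing microphone to estimate and subtract background sound can be sketched with a crude spectral-subtraction filter. Everything below (frame size, the `over_sub` and `floor` parameters, the function name) is an illustrative assumption, not a description of the earSmart chip:

```python
import numpy as np

def suppress_noise(primary, reference, frame=256, over_sub=1.5, floor=0.05):
    """Crude two-microphone spectral subtraction (illustrative only).

    The reference mic, facing away from the talker, supplies an estimate
    of the background noise spectrum, which is subtracted from the
    primary mic's spectrum frame by frame.
    """
    out = np.zeros_like(primary, dtype=float)
    win = np.hanning(frame)
    for start in range(0, len(primary) - frame, frame // 2):
        p = np.fft.rfft(primary[start:start + frame] * win)
        r = np.fft.rfft(reference[start:start + frame] * win)
        mag = np.abs(p) - over_sub * np.abs(r)       # subtract noise estimate
        mag = np.maximum(mag, floor * np.abs(p))     # spectral floor limits artifacts
        cleaned = np.fft.irfft(mag * np.exp(1j * np.angle(p)), frame)
        out[start:start + frame] += cleaned * win    # windowed overlap-add
    return out
```

A dedicated chip does far more (voice grouping, echo cancellation, automatic gain control), but this captures why the two-microphone arrangement matters: the noise estimate comes from a sensor that hears mostly the environment, not the talker.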
The technology is currently integrated in over 60 mobile handsets, including Huawei’s new quad-core Ascend D, but the firm, which is in the process of going public, is also involved in almost every new device architecture, form factor, operating system (OS), and network.
We demoed the system a few weeks back at Mobile World Congress in Barcelona. Check out the video here:
One more step remains in incorporating lessons from the human hearing system into mobile phones. Cell phones offer very poor volume feedback to the user, something conventional phones had 50 years ago. This is a root cause of so many people shouting into their cell phones: when speakers can’t hear their own voice in the earphone, they naturally speak louder.
I think this technology probably already exists in many telephones used in conference rooms. Anyway, it doesn’t make any difference to the person in the noisy environment; they will still need to listen over the surrounding noise.
If such a technology already exists that can reduce environmental noise and send/receive clear voice, I’m surprised that smartphone companies didn’t jump to add this feature. It’s a truly essential feature.
Polycom has the best conference telephone on the market, with similar capability. Plantronics headsets have long used dual microphones to reduce near-end noise for the far end. My five-year-old Bluetooth earpiece does it pretty well as far as I can tell. It seems like this earSmart tech goes beyond what both Polycom and Plantronics have done. Any further information is welcome.