Audio latency is the total delay that the audio stream experiences. Applications that involve a video component, and multi-channel sound systems, are sensitive to latency, so latency must be considered an element of audio quality (see table below).
[Table: generally acceptable latency by application, including multi-channel audio rear speakers]
There are several possible sources of latency in a wireless audio solution. Two key sources are a) audio compression and b) audio buffering.
Lossy audio compression requires enough audio data to be buffered to allow frequency-domain processing, which can add as much as 40 ms of latency. Solutions that do not perform lossy audio compression, such as SMSC's Kleer technology, can therefore achieve lower latency.
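The compression delay described above can be estimated from the codec's analysis window. The sketch below is illustrative: the frame and lookahead sizes are assumptions in the style of common MDCT codecs, not figures for any specific product.

```python
# Latency introduced by frame-based lossy compression: the encoder must
# buffer a full frame (plus any lookahead) before frequency-domain
# processing can begin.

def codec_delay_ms(frame_samples: int, lookahead_samples: int,
                   sample_rate_hz: int) -> float:
    """Algorithmic delay contributed by the encoder's analysis window."""
    return (frame_samples + lookahead_samples) / sample_rate_hz * 1000.0

# A hypothetical codec with 1024-sample frames and 576 samples of
# lookahead at 44.1 kHz already contributes roughly 36 ms of delay:
print(f"{codec_delay_ms(1024, 576, 44100):.1f} ms")
```

Decoder-side buffering and bit-reservoir effects add further delay on top of this encoder-side figure, which is why frame-based lossy codecs sit at the high end of the latency budget.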
Audio buffers are required to hold the incoming audio at the source, and to continue the flow of audio at the sink, while the radio is experiencing interference. The size of the audio buffer (and thus the amount of latency) required to deliver interference robustness in a wireless audio solution depends on how quickly the system can detect a bad channel and restore the connection, for example by switching to another radio channel.
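The relationship between recovery speed and buffer size can be sketched directly. All timing figures below are illustrative assumptions chosen to show the trade-off, not measured values for any particular radio.

```python
# How interference-recovery time drives audio buffer size, and hence latency.

def min_buffer_ms(detect_ms: float, switch_ms: float,
                  margin_ms: float = 5.0) -> float:
    """Audio the sink must hold to ride out a channel change without a dropout."""
    return detect_ms + switch_ms + margin_ms

def buffer_bytes(buffer_ms: float, sample_rate_hz: int = 44100,
                 channels: int = 2, bytes_per_sample: int = 2) -> int:
    """Memory needed at the sink for that much uncompressed PCM audio."""
    return int(buffer_ms / 1000.0 * sample_rate_hz * channels * bytes_per_sample)

# A radio that detects a bad channel in 10 ms and retunes in 5 ms needs far
# less buffering (and therefore less latency) than one that takes 100 ms to
# detect and 20 ms to adapt:
print(min_buffer_ms(10, 5), min_buffer_ms(100, 20))
```

The buffer is pure latency under normal operation, so a system that recovers quickly can run with a small buffer and still ride out interference.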
Solutions based on Wi-Fi RF front-ends may have trouble finding available spectrum because of their larger spectral footprints, or may not support channel switching at all; in that case it may not be possible to restore the connection before the audio buffer runs out unless a large buffer is used. Bluetooth uses Adaptive Frequency Hopping (AFH), dropping channels from its FHSS hopping sequence to avoid interference. This process takes time, which means a larger buffer is required, and Bluetooth must still find at least 20 MHz of usable spectrum.
Digital audio must be converted back to analog audio prior to driving a speaker. This process requires an exact replication of the sampling rate that was used to create the digital audio, with very low jitter. Generally this involves a frequency- or phase-locked loop at the receiver that synchronizes to a reference that is transmitted over the radio.
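The cost of imperfect clock recovery can be quantified with a back-of-the-envelope calculation. The clock offset used here is an assumed figure, typical of inexpensive crystal oscillators:

```python
# Why exact sample-rate recovery matters: any frequency offset between the
# source and sink clocks makes the sink's buffer slowly fill or drain.

def drift_samples_per_minute(sample_rate_hz: float, offset_ppm: float) -> float:
    """Samples of buffer slip per minute for a given clock offset in ppm."""
    return sample_rate_hz * (offset_ppm / 1e6) * 60.0

# A 100 ppm mismatch at 44.1 kHz slips about 265 samples every minute;
# without clock recovery the sink must eventually add or drop samples.
print(round(drift_samples_per_minute(44100, 100)))
```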
If the solution cannot recreate the sampling rate, it must synchronize the audio by other means, such as sample rate conversion, or by very crude means such as adding or dropping samples. Sample rate conversion can work well, but care must be taken in the implementation to avoid degrading the audio quality.
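As a minimal sketch of the "other means" of synchronization, the function below resamples by linear interpolation. This naive approach is exactly the kind of implementation that can degrade quality: real converters use polyphase filters to suppress the aliasing that linear interpolation introduces.

```python
# A minimal linear-interpolation sample-rate converter (illustrative only).

def resample_linear(samples: list[float], ratio: float) -> list[float]:
    """Resample by `ratio` (output rate / input rate) via linear interpolation."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Weighted blend of the two input samples bracketing the output instant.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += 1.0 / ratio
    return out

ramp = [0.0, 1.0, 2.0, 3.0]
print(resample_linear(ramp, 2.0))  # twice as many points on the same ramp
```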
Wireless streaming of high-quality (at least CD-quality) audio requires lossless digital transmission of 16-bit, 44.1 kS/s (or better) digital stereo audio. The radio channel requires sufficient throughput to transmit this audio losslessly while having enough additional throughput to allow fast retransmission of buffered audio data.
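The raw throughput requirement follows directly from those parameters. In the sketch below, the 2x retransmission headroom factor is an illustrative assumption; the PCM arithmetic itself is exact.

```python
# Required raw throughput for lossless CD-quality stereo audio.

def stream_bit_rate(sample_rate_hz: int = 44100, bits_per_sample: int = 16,
                    channels: int = 2) -> int:
    """Uncompressed PCM bit rate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

pcm = stream_bit_rate()
print(f"{pcm / 1e6:.4f} Mb/s audio payload")        # 16-bit stereo at 44.1 kS/s
print(f"{2 * pcm / 1e6:.4f} Mb/s with 2x headroom") # room for retransmission
```

CD-quality stereo works out to about 1.41 Mb/s of payload, so the radio link must sustain well above that to leave room for retransmitting lost packets from the buffer.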
The right balance between peak bit rate and spectral footprint must be found to minimize the opportunity to be affected by interference. The ability to avoid other interference-causing radios by quickly switching to another radio frequency is also critical to minimize dependence on audio buffering, which causes undesirable latency.
About the author:
Brent Allen is Director of Product Marketing for SMSC's Kleer product line. Acquired by SMSC in February 2010, Kleer's industry-leading low-power wireless audio technology is used in portable, home and automotive products including wireless headphones, earbuds and speakers.