Audio processing is essential to many consumer electronic applications such as mobile phones, MP3 players, and a host of other products. While size and power consumption are often critical SoC design requirements, the market demands high-fidelity (Hi-Fi) audio capabilities. To meet this consumer demand, designers are now embedding audio codecs into their next-generation, advanced SoCs.
The audio codec forms the interface between the digital host processor and the audio transducers, such as microphones and speakers. It also handles several routine audio functions, alleviating the workload on the host processor.
The clocks required by the data converters in an audio codec depend on the sampling rates of the audio material as well as on the clocks available in the host application and SoC. The combinations are quite complex due to the multitude of audio sample rate options and available host clocks. To further complicate matters, in audio-video (A/V) applications the audio clocks must also be synchronized with the video clocks required by the video data converters. Designers are therefore confronted with complex trade-offs when trying to minimize system costs related to clock generation and to interfacing a multitude of sample rates.
The digital filters play an important role in synchronizing the different clocks because they process the digital samples between the digital audio interface and the audio data converters, and therefore can perform sampling rate conversions. This article will review the functions of digital filters in audio codecs and will provide several examples to illustrate how they can interface to a range of sample rates and clock environments.
The audio codec is composed of two types of data converters: a digital-to-analog converter (DAC) for playback and an analog-to-digital converter (ADC) for recording.
On the digital side, there are multiple blocks. The most important are the digital audio filters, which convert the data rate to the oversampled clocks of the data converters and remove high-frequency noise outside the audio band. Also important is a clock management block, which keeps all multi-rate blocks synchronized with each other and supports all the required sampling rate combinations.
Today, data converters in audio codecs operate at highly oversampled frequencies, which means that their conversion frequency is much higher than the audio band, often by a factor of 100 or more. For example, a Red Book CD player has an audio data rate of 44.1 kSamples per second (kS/s); with a typical oversampling ratio of 128X, the DAC's conversion rate is 5.6448 MSamples per second (MS/s).
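As a quick sanity check of this arithmetic, the conversion rate is simply the audio sample rate multiplied by the oversampling ratio. A minimal Python sketch (the variable names are illustrative, not part of any codec API):

```python
# Conversion rate of an oversampled DAC: the audio sample rate times
# the oversampling ratio, using the Red Book CD figures from the text.
audio_rate_hz = 44_100        # 44.1 kS/s audio data rate
oversampling_ratio = 128      # typical 128X oversampling

conversion_rate_hz = audio_rate_hz * oversampling_ratio
print(conversion_rate_hz)     # 5644800, i.e. 5.6448 MS/s
```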
Why are Digital Audio Filters Required on an Audio Codec?
Figure 1: Audio signal sampled at FS and its spectrum replicas at 2FS, 3FS, and so on (in orange)
The main reason filters are required on an audio codec is to remove the aliasing or imaging bands. These are replicas of the signal band around the multiples of the audio sampling rate (FS) and are a result of the multi-rate operation. For example, an audio stream at 44.1 kS/s up-sampled to 5.6448 MS/s has spectral replicas around 88.2 kHz, 132.3 kHz, and other multiples of 44.1 kHz. This is a consequence of the Nyquist sampling theorem, as illustrated in Figure 1.
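The locations of these replicas are easy to enumerate, since they sit at integer multiples of the original sampling rate. A small illustrative Python sketch (names are ours, not from the article):

```python
# Spectral replicas of a signal sampled at FS appear around every
# integer multiple of FS after up-sampling to a much higher rate.
fs_hz = 44_100  # original audio sampling rate (44.1 kS/s)

# Centers of the first few image bands:
replicas_hz = [n * fs_hz for n in range(1, 4)]
print(replicas_hz)  # [44100, 88200, 132300]
```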
On a DAC, the image bands cause a stair-like waveform, as shown in Figure 2. The filter smooths the waveform and reduces the high-frequency energy. If this high-frequency energy were not removed, it would waste power and cause intermodulation distortion in the output drivers, making the loudspeakers generate audible noise.
Figure 2: The digital filter up-samples and smooths the signal waveform before it is applied to the DAC
On an ADC, the filter removes any out-of-band noise picked up at the input or generated within the ADC, as shown in Figure 3. If this noise is not removed before the signal is re-sampled at the standard audio rate, it is folded down in-band by aliasing and becomes audible.
Figure 3: On an ADC, any out-of-band noise (red signal in the left diagram) is folded down into the signal band when the signal is re-sampled to the standard audio rate at the output (right diagram)
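The fold-down effect can be sketched numerically: a tone above the Nyquist frequency (FS/2) reappears below it after re-sampling. A minimal Python illustration (the helper function is hypothetical, not part of any codec software):

```python
# Aliasing: a tone above the Nyquist frequency (FS/2) folds back into
# the audio band when the signal is re-sampled at FS without filtering.
def aliased_frequency(f_hz, fs_hz):
    """Apparent frequency of a tone at f_hz after sampling at fs_hz."""
    f = f_hz % fs_hz
    return fs_hz - f if f > fs_hz / 2 else f

fs = 48_000
print(aliased_frequency(30_000, fs))  # 18000: a 30 kHz tone aliases to 18 kHz
print(aliased_frequency(10_000, fs))  # 10000: in-band tones are unaffected
```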
Clocks and Sampling Rates
Digital audio signals are sampled at standard frequencies. As a legacy of the old Red Book CD, many audio recordings use the standard 44.1 kS/s rate. This unconventional number is derived from an early practice of reusing PAL videotape equipment for audio recordings. Modern audio systems, such as DVDs, use 48 kS/s and its multiples, 96 kS/s and 192 kS/s.
Voice applications, such as those in cell phones, use 8 kS/s and its multiples, 16 kS/s and 32 kS/s. Some applications may also use multiples of 44.1 kS/s, namely 88.2 kS/s and 176.4 kS/s. Since the data converters must operate at oversampled frequencies, typically 128X or 256X, the master clock frequencies required to drive them fall in the range of roughly 5 to 12 MHz.
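The 5-to-12 MHz figure follows directly from multiplying the Hi-Fi base rates by the typical oversampling ratios; a quick check in Python (an illustration only, not codec firmware):

```python
# Master clocks for the 44.1 kS/s and 48 kS/s rate families at the
# typical oversampling ratios. Values are stored in Hz.
master_clocks_hz = {
    (rate, osr): rate * osr
    for rate in (44_100, 48_000)   # Hi-Fi base sample rates
    for osr in (128, 256)          # typical oversampling ratios
}
for (rate, osr), mclk in master_clocks_hz.items():
    print(f"{rate / 1000:g} kS/s x {osr}X = {mclk / 1e6:.4f} MHz")
```

The resulting clocks (5.6448, 11.2896, 6.144, and 12.288 MHz) all fall in the quoted range.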
An audio codec must therefore support a wide variety of audio sample rates and accommodate a range of master clock frequencies to operate in a given application. This is not a straightforward objective, due to the multitude of combinations and the restrictions on the possible clock frequency ratios. For this reason, the digital filters must support programmable sample rate conversion.
For example, let's consider a practical case with an audio rate of 48 kS/s and a converter sampling frequency of 12.288 MS/s. The resulting sample rate conversion is 256X. To support 96 kS/s, the filters are reconfigured for a sample rate conversion of 128X, and to support 192 kS/s, for a sample rate conversion of 64X. The sampling frequency of the data converters stays the same at 12.288 MS/s because the audio band limit is fixed at 20 kHz. For the 44.1-kS/s audio rate family, the corresponding master clock would be 11.2896 MHz.
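The reconfiguration described above amounts to dividing the fixed converter clock by the audio rate. A short Python sketch of the arithmetic (illustrative only):

```python
# With a fixed converter sampling frequency, the filters' sample rate
# conversion ratio is reprogrammed for each supported audio rate.
converter_rate_hz = 12_288_000   # 12.288 MS/s, fixed by the 20 kHz audio band

ratios = {rate: converter_rate_hz // rate
          for rate in (48_000, 96_000, 192_000)}
print(ratios)  # {48000: 256, 96000: 128, 192000: 64}
```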