Audio Peripherals – Enabling Connections to Real-World Devices
Audio subsystems require digital audio interfaces to provide multi-channel inputs and outputs. S/PDIF (Sony/Philips Digital Interconnect Format) is used as the connection between consumer devices such as media players, game consoles and home cinema systems.
Typically, SoCs for consumer devices integrate one or more S/PDIF inputs/outputs for off-chip connections. S/PDIF also provides a high-bandwidth on-chip audio link for HDMI transmitters and receivers: instead of using DMA (Direct Memory Access) to transport data from the audio processor through system memory to the on-chip HDMI controller, a direct link provides a plug-and-play solution.
I2S (Inter-IC Sound or Integrated Interchip Sound) interfaces are used for off-chip audio connections or to integrate analog interfaces; a good example of such analog interfaces is described in this Audio DesignLine article. Integrating these interfaces also requires provisioning all the required clock signals (for either master or slave mode) as well as the software drivers.
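To make the master/slave clock provisioning concrete, the sketch below configures a hypothetical memory-mapped I2S controller in C. The base address, register names and bit fields are invented for illustration and will differ on any real SoC; only the clock arithmetic (bit clock = sample rate × channels × word length, driven by the controller in master mode and by the external codec in slave mode) carries over.

```c
#include <stdint.h>

/* Hypothetical memory-mapped I2S controller registers; the base address,
 * register offsets and bit fields below are illustrative only. */
#define I2S_BASE        0x40012000u
#define I2S_CTRL        (*(volatile uint32_t *)(I2S_BASE + 0x00))
#define I2S_CLKDIV      (*(volatile uint32_t *)(I2S_BASE + 0x04))

#define I2S_CTRL_ENABLE   (1u << 0)
#define I2S_CTRL_MASTER   (1u << 1)  /* 1 = master (drive clocks), 0 = slave */
#define I2S_CTRL_WORD_16  (0u << 2)
#define I2S_CTRL_WORD_24  (1u << 2)

/* Configure the interface for a given sample rate. In master mode the
 * controller derives the bit clock (BCLK) and word select (LRCLK) from an
 * audio reference clock; in slave mode the external codec drives both, so
 * no divider is programmed. */
static void i2s_init(uint32_t audio_clk_hz, uint32_t sample_rate_hz,
                     int master, int word_bits)
{
    uint32_t ctrl = (word_bits == 24) ? I2S_CTRL_WORD_24 : I2S_CTRL_WORD_16;

    if (master) {
        /* BCLK = sample rate * 2 channels * word length */
        uint32_t bclk_hz = sample_rate_hz * 2u * (uint32_t)word_bits;
        I2S_CLKDIV = audio_clk_hz / bclk_hz;
        ctrl |= I2S_CTRL_MASTER;
    }
    I2S_CTRL = ctrl | I2S_CTRL_ENABLE;
}
```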
In an effective subsystem, all audio peripherals are local to the audio processor to provide tighter integration. Traditionally, audio data would be streamed to and from the audio peripherals by system-level DMA functions under control of the host processor. A smart local interconnect instead allows the audio processor to stream audio (e.g., MP3 music) directly to, for example, an analog output (e.g., a speaker). An implementation based on a 'FlexFifo', a local memory buffer for audio data, eliminates the need for a separate buffer at every peripheral.
Flexible allocation of memory buffers is more area efficient and also eliminates the need for a separate DMA unit, because the audio processor can directly control the data flow to and from the peripherals. This not only reduces area but, more importantly, simplifies the software interface: rather than a series of DMA function calls, only a single instruction (e.g., stream to S/PDIF) is needed on the audio processor.
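The contrast can be sketched as follows. Every driver call and identifier here is a hypothetical stand-in (no real SoC exposes exactly this API); the point is the reduction from a multi-step DMA sequence to one streaming command.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical driver API, declared here only so the sketch is
 * self-contained; these calls do not correspond to a real SoC SDK. */
typedef struct dma_channel dma_channel_t;
enum { DMA_DEV_SPDIF_TX, AUDIO_SINK_SPDIF };
dma_channel_t *dma_alloc_channel(int device);
void dma_configure(dma_channel_t *ch, int width_bits, int burst_len);
void dma_submit(dma_channel_t *ch, const void *buf, size_t bytes);
void dma_wait_complete(dma_channel_t *ch);
void dma_free_channel(dma_channel_t *ch);
void audio_stream_to(int sink, const void *buf, size_t samples);

/* Host-driven flow: a chain of DMA calls moves each buffer from system
 * memory out to the peripheral. */
void play_via_dma(const int16_t *pcm, size_t samples)
{
    dma_channel_t *ch = dma_alloc_channel(DMA_DEV_SPDIF_TX);
    dma_configure(ch, 16, 8);                  /* sample width, burst length */
    dma_submit(ch, pcm, samples * sizeof *pcm);
    dma_wait_complete(ch);
    dma_free_channel(ch);
}

/* Local-interconnect flow: the audio processor streams through the shared
 * FlexFifo directly to the peripheral with a single command. */
void play_via_local_stream(const int16_t *pcm, size_t samples)
{
    audio_stream_to(AUDIO_SINK_SPDIF, pcm, samples);
}
```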
Figure 2: Efficient Audio Subsystem hardware architecture includes audio peripherals that are local to the processor.
Audio Software Functions – Supporting Popular Audio Formats
The software stack for an audio subsystem should include decoders and encoders that support the latest, most popular multi-channel formats and allow for future extensions. Well-known audio formats include, but are not limited to, those from Dolby Laboratories (e.g. Dolby Digital Plus, Dolby Pro Logic IIz), DTS (e.g. DTS-HD Master Audio and DTS Neo:6), SRS Labs (e.g. TruSurround, TruVolume) and Microsoft (e.g. WMA 10Pro), as well as open standards like MP3, FLAC, Ogg/Vorbis, AAC LC and aacPlus v2. Other components include post-processing features such as equalizers, bass and treble management, and volume control.
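As a flavor of what such a post-processing stage looks like, here is a minimal, stand-alone volume-control loop in C, using the Q15 fixed-point arithmetic common on audio DSPs. This is an illustrative sketch, not code from any particular audio stack.

```c
#include <stddef.h>
#include <stdint.h>

/* Apply a volume gain to 16-bit PCM in Q15 fixed point (gain_q15 = 32768
 * is unity gain). Saturating instead of wrapping on overflow avoids the
 * harsh artifacts that integer wrap-around would produce. */
void apply_volume(int16_t *pcm, size_t samples, int32_t gain_q15)
{
    for (size_t i = 0; i < samples; i++) {
        int32_t v = ((int32_t)pcm[i] * gain_q15) >> 15;
        if (v >  32767) v =  32767;   /* clamp to int16_t range */
        if (v < -32768) v = -32768;
        pcm[i] = (int16_t)v;
    }
}
```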
Typically, SoC integrators or their customers (OEMs) also add their own audio software and unique sound effects, enabling them to further differentiate in the market. A Media Streaming Framework is an excellent solution that enables easy integration of all the audio software into the subsystem.
Media Streaming Framework – Providing Easy Instantiation of Software into the Subsystem
By combining a subset of all the software functions, designers can build their 'use-cases.' However, integrating a combination of software IP from different vendors, each using different coding standards and interfaces, into a single design can be challenging. A very effective way to solve this challenge is to use a Media Streaming Framework (MSF) with pre-defined application programming interfaces (APIs).
A Media Streaming Framework allows designers to easily drop in all the available software functions and add, modify and re-order them as desired. As a result, creating a complete Blu-ray Disc or set-top-box application becomes much simpler. Using standard APIs also allows the re-use of functions from one design to the next, making the subcontracting of software development more manageable and reducing the risk of design errors caused by ambiguity in the interface specification.
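The MSF API itself is not shown in this article, so the following C sketch only illustrates the composition idea; every type and function name in it is hypothetical.

```c
#include <stddef.h>

/* Hypothetical MSF types and calls, declared here only to keep the
 * sketch self-contained; a real framework defines its own API. */
typedef struct msf_pipeline  msf_pipeline_t;
typedef struct msf_component msf_component_t;
msf_pipeline_t  *msf_pipeline_create(void);
msf_component_t *msf_component_load(msf_pipeline_t *p, const char *name);
void             msf_connect(msf_component_t *src, msf_component_t *dst);
void             msf_pipeline_start(msf_pipeline_t *p);

/* Build a simple playback use-case: decoder -> post-processing -> output.
 * Because every component sits behind the same pre-defined interface,
 * stages can be added, swapped or re-ordered without touching the rest
 * of the design. */
void build_playback_use_case(void)
{
    msf_pipeline_t  *p   = msf_pipeline_create();
    msf_component_t *dec = msf_component_load(p, "aac-decoder");
    msf_component_t *eq  = msf_component_load(p, "equalizer");
    msf_component_t *out = msf_component_load(p, "i2s-sink");

    msf_connect(dec, eq);   /* decoded PCM into the equalizer    */
    msf_connect(eq, out);   /* equalized PCM out to the I2S sink */
    msf_pipeline_start(p);
}
```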
GStreamer Application Software Plug-In – A Media Framework Standard
Because the application software runs on the host processor, it needs to be able to leverage all the features and functions available in the software stack of the audio subsystem. GStreamer has become the industry-standard media framework for Linux- and Android-based designs such as tablets, digital TVs and set-top-boxes. Its plug-in architecture allows designers to easily build complete, modular systems. A GStreamer audio plug-in makes the complete library of audio functions present in the audio subsystem readily available to the application software on the host processor through simple function calls.
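Once such a plug-in is installed, the application uses it like any other GStreamer element. In the sketch below, the element name soundwave_audiodec and the file name are placeholders (the actual element name ships with the vendor plug-in); the surrounding calls are standard GStreamer C API.

```c
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* "soundwave_audiodec" is a placeholder for the subsystem's decoder
     * element; audioconvert and autoaudiosink are stock GStreamer elements. */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=music.mp3 ! soundwave_audiodec ! "
        "audioconvert ! autoaudiosink", &err);
    if (!pipeline) {
        g_printerr("Pipeline creation failed: %s\n", err->message);
        g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until end-of-stream or an error is posted on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        GST_MESSAGE_EOS | GST_MESSAGE_ERROR);

    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```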
Software infrastructures offering integrated mechanisms such as Remote Procedure Calls (RPC) and Inter-Processor Communication (IPC) are used to implement communication between the host and the audio processor. They make the location where the software actually executes (on the audio processor) transparent to the application running on the host processor. Using RPC and IPC technology, a GStreamer audio plug-in on the host provides access to all features available within the audio subsystem.
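Conceptually, each plug-in call is a thin host-side stub that marshals its arguments into an IPC message and blocks until the reply arrives. The message layout and the ipc_send/ipc_recv primitives below are hypothetical; they show only the RPC pattern, not the actual host/firmware interface.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical IPC primitives and message layout; a real subsystem
 * defines these in its host/firmware interface specification. */
void ipc_send(const void *msg, size_t len);
void ipc_recv(void *msg, size_t len);

enum { CMD_DECODER_OPEN = 1 };

struct rpc_msg {
    uint32_t cmd;      /* remote function selector             */
    uint32_t arg;      /* marshalled argument, e.g. a codec ID */
    int32_t  result;   /* filled in by the audio processor     */
};

/* Host-side stub: looks like a local function call to the application,
 * but the actual work executes on the audio processor. */
int32_t remote_decoder_open(uint32_t codec_id)
{
    struct rpc_msg m = { .cmd = CMD_DECODER_OPEN, .arg = codec_id };
    ipc_send(&m, sizeof m);   /* ship the request over the IPC channel */
    ipc_recv(&m, sizeof m);   /* block until the reply comes back      */
    return m.result;
}
```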
Figure 3: GStreamer Audio Plug-in and Media Streaming Framework as available in Synopsys' SoundWave Audio Subsystem enable quick integration into the application software.