Digital signal processors offer outstanding multimedia performance. Typically, they require just 40 percent to 50 percent as many cycles as a general-purpose processor (GPP) core to run a codec (encoder/decoder). They also offer far greater flexibility and reconfigurability than ASICs. Until now, however, programmers have had to learn proprietary languages to take advantage of the benefits of DSPs in digital video applications. The emergence of application programming interfaces is eliminating the need to learn these proprietary DSP languages: APIs make it easy to leverage DSPs from applications running on the GPP.
Open-source multimedia frameworks, which typically run under the Linux operating system on the GPP, are an ideal target for these APIs. By leveraging the APIs, which abstract many of the complexities of DSP programming, developers can offload the computational burden of video codecs to the DSP. This approach requires programmers to have only basic knowledge of the DSP and eliminates the need to write code to stitch together DSP functions with those that run on the GPP. Those advantages, plus the ability to utilize the many capabilities offered by free open-source plug-ins and frameworks, can substantially reduce time-to-market for new video products.
Developers have several alternatives when it comes to selecting hardware platforms to run the codec algorithms that compress a digital stream for transmission or storage, and decompress it for viewing and editing. ASICs offer high performance and low power consumption in digital video applications because the hardware is designed specifically for the application. The disadvantage of an ASIC is that nonrecurring engineering (NRE) expenses are high. In addition, it can be very expensive to implement changes in ASICs, such as those needed to track evolving codec standards.
GPP cores, on the other hand, have comparatively low NRE and can be fairly easily reprogrammed to accommodate change. But their performance is low for digital video because they are relatively inefficient at computationally intensive signal processing. For example, many GPPs accomplish multiplication by a series of shift-and-add operations, each of which takes one or more clock cycles.
DSPs have the potential to provide the best of both worlds. In contrast to a GPP, a DSP is optimized for the computationally intensive signal processing found in digital video applications. DSPs have single-cycle multipliers or multiply-accumulate units that speed up codec execution. Higher-performance DSPs have several independent execution units that operate in parallel, enabling them to carry out several operations per instruction. Yet the DSP also provides full software programmability, including field-reprogramming capabilities. This enables a user, for example, to roll out an MPEG-2 product and later upgrade it to an H.264 video codec. The primary limitation of DSPs in digital video applications is that they are typically programmed using proprietary languages, and programmers who are familiar with DSPs are much less common than those who are familiar with popular GPP architectures.
Developers of digital video systems also face integration challenges. Digital video systems are made up of multiple encoders, decoders, codecs, algorithms and other software components, which must all be integrated into an executable image long before any content can run on the system. Stitching all these elements together and making sure they function cohesively can be a difficult task. Some systems will require distinct video, imaging, speech, audio and other multimedia modules. Developers who manually integrate each software module or algorithm are distracted from working on value-added functionality, such as adding innovative features.
Many digital video developers have taken the open-source path to building software. A common approach is to obtain significant parts of the software from open-source projects and leverage in-house expertise in the areas of usability and hardware integration. Developers often participate in open-source projects to develop technology that fulfills specific needs, then integrate the open-source code with internally developed code to create a product.
Addressing all of these issues, Texas Instruments has developed an API that allows DSPs to be leveraged from open-source multimedia frameworks such as GStreamer. The API enables multimedia programmers to leverage the DSP codec engine from within a familiar environment. The interface frees digital video programmers from dealing with the complexity of programming DSPs, making it easy for ARM/Linux developers to exploit the power of DSP codec acceleration without needing knowledge of the hardware. The interface also automatically and efficiently partitions work between the ARM and the DSP, eliminating the need to write code to interface between functions that run on the DSP and those that run on GPP cores. The interface takes the form of a GStreamer plug-in, developed by TI in accordance with open-source community standards.
GStreamer is a media-processing library that provides an abstract model of transformation based on a pipeline metaphor, in which media flows in a defined direction from input to output. It has gained wide popularity in the digital video programming community through its ability to abstract the manipulation of different media types in a way that simplifies the programming process. GStreamer makes it possible to write a general video or music player that can support many different formats and networks. Most operations are performed not by the GStreamer core but by plug-ins. GStreamer's base functionality is primarily concerned with registering and loading plug-ins and providing the base classes that define the fundamental capabilities of pipeline elements.
Source filters present the raw multimedia data for processing. They may get it from a file on a hard disk (such as the file source filter), from a CD or DVD drive, or from a "live" source such as a television receiver card or a network. Some source filters simply pass the raw data on to a parser or splitter filter, while others also perform the parsing step themselves. Transform filters accept either raw or partially processed data and process it further before passing it on.