Wireless multimedia relies on complex video software and server technology, which in turn demands substantial processing to generate streaming video and audio. One product category that will clearly become wireless multimedia-capable is the personal digital assistant. PDAs contain microprocessors of various performance levels, some capable of processing low-resolution, low-frame-rate video streams in software. Creating a PDA-based system capable of good-quality two-way video communication, however, will require enhancements to the system's computing capabilities.
One approach to delivering this additional performance is to take advantage of ubiquitous small-form-factor interface standards such as the Personal Computer Memory Card International Association (PCMCIA) formats and Compact Flash, which the latest generation of PDAs supports. Advanced video and wireless processing systems can be developed on PCMCIA and Compact Flash circuit cards.
A big challenge in delivering a compelling product is that these interfaces cannot sustain the data bandwidth required by high-resolution motion video. Advances in video compression and decompression, however, promise to alleviate this problem partly or completely.
Unfortunately, video codecs such as MPEG-4 were developed for processors that cannot deliver the combination of quality, cost and performance demanded by a mass-market product. The challenge therefore shifts to accelerating these codecs on more sophisticated processors.
Why is delivery of multimedia services on a wirelessly connected Internet-centric computing appliance such a difficult problem? And what can be done to solve it?
While a wirelessly connected PDA has a screen for viewing images, and a speaker and microphone for reproducing and capturing audio, it does not come with video-input capture capability. Also, since these devices are designed to be cost-effective, they use the most economical processor available. The problem is that two-way streaming video at market-acceptable quality requires both processing power and compression/decompression hardware and software.
When MediaWorks looked at ways to solve the problem of MPEG-4 video capture, transmission and playback on current PDA devices, our first step was to develop the missing video-input capture capability.
This was accomplished by developing a PCMCIA-based VGA-resolution camera. We chose a sensor that provided sufficient frame rate and image size. Although a PDA cannot display a full VGA image, the user may want to send the image wirelessly over the Internet to a PC user who can view it. We chose the PCMCIA format simply for ease of development; Compact Flash is another target. The initial architecture passed the full image data over the PCMCIA bus to the PDA, where software encoded outgoing images and decoded incoming ones.
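For a sense of the data rates this architecture must move, the sketch below tallies the raw bandwidth involved. The YUV 4:2:2 capture format (2 bytes/pixel) and the frame-rate figures are our illustrative assumptions, not measured numbers.

/* Back-of-the-envelope raw video bandwidth.  Assumes YUV 4:2:2
 * capture (2 bytes/pixel); the frame rates are illustrative. */
#include <stdio.h>

static double raw_rate_mbytes(int w, int h, double fps)
{
    const double bytes_per_pixel = 2.0;          /* YUV 4:2:2 */
    return w * h * bytes_per_pixel * fps / 1e6;  /* Mbytes/s  */
}

int main(void)
{
    printf("QCIF 176x144 @ 12 fps: %.2f Mbytes/s\n",
           raw_rate_mbytes(176, 144, 12));   /* ~0.61 */
    printf("VGA  640x480 @ 20 fps: %.2f Mbytes/s\n",
           raw_rate_mbytes(640, 480, 20));   /* ~12.3 */
    return 0;
}

QCIF at modest frame rates fits comfortably across the card interface, but raw VGA at usable frame rates runs to roughly 12 Mbytes/s, far beyond what a 16-bit PC Card interface typically sustains in I/O mode.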
The initial results provided acceptable 7- to 12-frame/second QCIF (176 x 144) images: a view of what the local user's camera sees and a view of the incoming video. These results were acceptable as a proof of concept, but the image size needed to increase to satisfy transmission to a PC, and the frame rate had to rise to make the video more fluid.
The next generation needed architectural changes to address these issues. Since we are targeting PDAs without video capability, we still needed to provide a PCMCIA- or Compact Flash-based VGA-capable camera. To increase performance, however, both the bus bottleneck and the compute limitations had to be overcome.
This was accomplished by moving the encoder to the camera side of the interface. Depending on the image sequence, the encoded stream can be less than one-tenth the data rate of the original. Encoding on the camera side of the bus makes it possible to transmit a larger image to the PDA, as well as more frames per second.
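Applying the one-tenth figure cited above to the raw rates computed earlier shows why this matters; again, the YUV 4:2:2 assumption is ours.

#include <stdio.h>

int main(void)
{
    /* Raw VGA at 20 fps, YUV 4:2:2 (2 bytes/pixel): our assumption. */
    double raw = 640 * 480 * 2.0 * 20 / 1e6;   /* ~12.3 Mbytes/s    */
    double enc = raw / 10.0;                   /* article's <=1/10  */
    printf("raw %.1f Mbytes/s -> encoded %.1f Mbytes/s\n", raw, enc);
    return 0;
}

At roughly 1.2 Mbytes/s, the encoded VGA stream drops back within the card interface's reach, which is precisely what makes the larger image size and higher frame rate feasible.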
Encoding is the more computationally intensive task, since an encoder must also partially decode each frame it compresses, reconstructing the reference frame used for motion estimation. Our solution was to develop an encoder for the camera by leveraging configurable-processor technology. This architecture also lends itself to the next generation of products, which will add wireless capability to the camera itself. Decoding will remain in software on the PDA. With this approach, we believe that CIF-resolution images at 30 frames/s, and VGA images at greater than 20 frames/s, are possible on a wireless PDA.
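To make "computationally intensive" concrete, consider the inner loop of block-based motion estimation, which dominates most MPEG-4 encoders. The textbook sum-of-absolute-differences (SAD) kernel below is a generic sketch, not MediaWorks' code; an encoder evaluates it for many candidate positions per macroblock.

#include <stdlib.h>

/* SAD between a 16x16 macroblock in the current frame and a
 * candidate block in the reference frame.  The encoder runs this
 * for dozens of candidate positions per macroblock, per frame,
 * which is why encoding dwarfs decoding. */
static unsigned sad_16x16(const unsigned char *cur,
                          const unsigned char *ref,
                          int stride)
{
    unsigned sad = 0;
    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 16; x++)
            sad += abs(cur[x] - ref[x]);
        cur += stride;
        ref += stride;
    }
    return sad;
}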
An important part of the solution we are developing is the use of a configurable processor such as Altera's Nios or Tensilica's Xtensa. These architectures make it possible to tailor the processor exactly to the task at hand, in contrast with the usual practice of choosing an off-the-shelf processor that is merely close.
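A configurable-processor flow lets the designer define an application-specific instruction and expose it to C as an intrinsic. The sketch below shows the kind of transformation involved for the SAD kernel above; sad8() is an invented name, not a real Tensilica or Altera intrinsic, and is modeled here in plain C so the sketch is self-contained.

#include <stdlib.h>

/* Hypothetical custom instruction: sad8(a, b) computes the SAD of
 * eight packed pixels in a single cycle.  On a real Xtensa or Nios
 * flow it would be defined in the vendor's instruction-extension
 * tools; this plain-C version models its behavior. */
static unsigned sad8(const unsigned char *a, const unsigned char *b)
{
    unsigned s = 0;
    for (int i = 0; i < 8; i++)
        s += abs(a[i] - b[i]);
    return s;
}

static unsigned sad_16x16_custom(const unsigned char *cur,
                                 const unsigned char *ref,
                                 int stride)
{
    unsigned sad = 0;
    for (int y = 0; y < 16; y++) {
        /* 32 custom-instruction issues replace 256 scalar
         * subtract/abs/accumulate iterations. */
        sad += sad8(cur, ref) + sad8(cur + 8, ref + 8);
        cur += stride;
        ref += stride;
    }
    return sad;
}

The trade is silicon for cycles: a few gates of datapath buy back an order of magnitude in the encoder's hottest loop.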
The process we used to design with a configurable processor as the main engine for a wireless multimedia-capable PDA had the following steps. First, we profiled the initial software and hardware configuration on an Altera Apex 20KE FPGA as well as on an instruction-set simulator, then analyzed the profile results to identify performance bottlenecks.
This allowed us to examine the design iteratively: identify a bottleneck, propose an improvement and estimate the benefit it would deliver. If the benefit justified the effort, we implemented the proposed change and profiled the system again; the new profile then confirmed (or refuted) the estimated performance improvement.
This cycle of profiling, developing a solution and confirming it through further profiling continues until the hardware/software mix meets the performance goals. Some bottlenecks may prove unavoidable, and the process eventually reaches a point of diminishing returns.
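In practice the profile-and-confirm loop can be as simple as bracketing a suspect routine with cycle-counter reads. In the sketch below, read_cycle_count() stands in for whatever counter the FPGA board or instruction-set simulator exposes; the name is our placeholder, not a vendor API.

#include <stdio.h>

/* Placeholder: maps to the platform's cycle counter (an FPGA
 * timer, or the ISS's cycle report) on a real target. */
extern unsigned long read_cycle_count(void);

/* Bracket a routine under test and report its cost.  Re-run after
 * each hardware/software change; the measured delta confirms or
 * refutes the estimated gain. */
void profile_hotspot(const char *name, void (*fn)(void))
{
    unsigned long start = read_cycle_count();
    fn();
    printf("%s: %lu cycles\n", name, read_cycle_count() - start);
}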
Preliminary results for the design approaches investigated show significant improvements in cycle counts. Work continues on additional optimizations.
Once we've identified the final solution set, the goal is cost reduction via migration to a fixed-silicon solution, such as an ASIC.