User habits and data structures have changed significantly, not only in the consumer sector but also in embedded applications: the market demands innovative, intuitive multi-touch user interfaces that increasingly rely on high-quality (3D) graphics and visualization. Naturally, these developments affect the required processing capacity. Despite having to process more data in less time, modern embedded applications must be more compact, use less power and be more cost effective. This has opened a gap in the market that cannot be filled by conventional x86 architectures alone. Computer-on-Modules (COMs), such as the conga-TFS COM Express basic module from congatec, are designed to fill this gap.
Traditional approaches are not enough
In the past, measures such as increasing the clock rate or adding more x86 cores solved most of the performance problems that arose. However, when it comes to the efficient processing of massive parallel data streams, such as that required in industrial automation and medical imaging applications, these measures do not help. The reason is that x86 processors are designed for multifunctional performance with serial processing.
When it comes to processing large amounts of data in parallel with the same algorithms, for example when applying an image filter to each pixel of an image, their scalar and serial structure is a disadvantage. This is where the advantage of GPUs becomes evident. Over time, GPUs have developed into powerful, programmable vector processors with up to several hundred identical cores. This makes them highly scalable and provides an excellent platform for demanding parallel tasks using GPGPU (general purpose GPU) computation.
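The contrast can be sketched in a few lines of NumPy. This is an illustrative example, not code from the article: the function names and the 3×3 box filter are chosen for the sketch. The first version visits each pixel one at a time, the way a scalar, serial CPU core would; the second expresses the same filter as one uniform whole-array operation, which is exactly the kind of identical per-pixel work a GPU's many cores (or a vector unit) can execute concurrently.

```python
import numpy as np

def blur_serial(img):
    """Scalar approach: loop over interior pixels one at a time (CPU-style)."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # average the 3x3 neighbourhood around (y, x)
            out[y, x] = img[y - 1:y + 2, x - 1:x + 2].mean()
    return out

def blur_parallel(img):
    """Data-parallel approach: one whole-array expression over shifted views,
    the same operation applied uniformly to every interior pixel at once."""
    img = img.astype(np.float64)
    out = img.copy()
    h, w = img.shape
    # sum the nine shifted copies of the interior, then divide by nine
    out[1:-1, 1:-1] = sum(
        img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return out
```

Both functions compute the same result; the difference is that the second formulation exposes the parallelism explicitly, so a data-parallel back end (a GPU via GPGPU frameworks such as OpenCL, or SIMD units on the CPU) can process all pixels simultaneously instead of sequentially.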
Together we are strong
Even more advantageous is to combine the performance of the CPU and the GPU, creating an extremely powerful processor that can handle virtually any task. AMD has done just that with the introduction of the Accelerated Processing Unit (APU), so named because it integrates a CPU and a discrete-class programmable GPU on a single silicon die. This makes it possible to process scalar workloads efficiently on the x86 cores and vector workloads on the GPU, achieving better overall performance and greater power savings.
Accelerated processing units
AMD introduced the first APU with the launch of its Embedded G-Series platform. It is designed for compact, low-power devices and is used, for example, across congatec's extremely compact Qseven module range. In June 2012, AMD unveiled the Embedded R-Series platform, a processor platform aimed specifically at mid-range to high-performance systems. Like the AMD G-Series platform, the AMD R-Series platform is a space-saving two-chip solution consisting of an APU and a matching controller hub that handles all other peripheral connections – see figure 1.
Figure 1: The AMD Embedded R-Series APU and controller hub at a glance
The new AMD Embedded R-Series APUs come in eight different performance classes with dual- and quad-core processors and a range of AMD Radeon graphics units from the 7000 family. They scale from 17 watts TDP up to 35 watts for the top-of-the-range AMD R-464L APU, seamlessly extending the performance of the AMD G-Series APUs.