Introduction: The proliferation of multicore processors and the desire to consolidate applications and functionality will push the embedded industry into embracing virtualization in much the same way it has been embraced in the server and compute-centric markets. However, there are many paths to virtualization for embedded systems. After a tour of those options and their pros and cons, Freescale Semiconductor’s Syed Shah shows why the bare metal hypervisor-based approach, coupled with hardware virtualization assists in the core, the memory subsystem and the I/O, offers the best performance.
Virtualization already has impacted the server and IT industries in a significant way. IT organizations are using it to reduce power consumption and building space, provide high availability for critical applications, and streamline application deployment and migration. Adoption in the server space is also driven by the desire to support multiple OSes and to consolidate services on a single server by defining multiple virtual machines (VMs). Each VM operates as a standalone device. Since multiple VMs can run on a single server, provided the server has enough processing capacity, IT gains the advantages of reduced server inventory and better server utilization.
Although not yet mainstream, similar trends are trickling down into the embedded space as well. The concept of a sea of processors whose processing capacity is sliced and diced among applications and processes is not science fiction anymore. The challenge of extracting higher utilization from the processors, together with the consolidation triggered by cost reduction, is driving the adoption of virtualization in embedded systems.
A case in point is the merging of control- and data-plane processing onto the same system-on-chip (SoC). Previous approaches used separate discrete devices for these functions. With multicore SoCs, given enough processing capacity and virtualization, control-plane and data-plane applications can run without one impacting the other. In most cases, data-plane and control-plane applications will be mapped to different cores in the multicore SoC, as shown in Figure 1.
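At its simplest, this kind of core-to-partition mapping can be expressed as a static table that pins each partition to a set of physical cores. The sketch below is a minimal, hypothetical illustration; the partition names, the `core_mask` encoding, and the `partition_for_core` helper are assumptions for this example, not any vendor's hypervisor API:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical partition descriptor: each partition owns a fixed
 * set of physical cores, expressed as a bit mask. */
struct partition {
    const char *name;
    uint32_t    core_mask;  /* bit n set => core n belongs to this partition */
};

/* Example layout for an 8-core SoC: cores 0-1 run the control
 * plane, cores 2-7 run the data plane. */
static const struct partition partitions[] = {
    { "control-plane", 0x03 },  /* cores 0 and 1 */
    { "data-plane",    0xFC },  /* cores 2 through 7 */
};

/* Resolve which partition a physical core is bound to.
 * Returns NULL if the core is unassigned. */
static const struct partition *partition_for_core(unsigned core)
{
    for (size_t i = 0; i < sizeof partitions / sizeof partitions[0]; i++)
        if (partitions[i].core_mask & (1u << core))
            return &partitions[i];
    return NULL;
}
```

A static mapping like this reflects the common embedded practice of fixing partition boundaries at configuration time rather than migrating workloads dynamically.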
Fig. 1: Control- and data-plane application consolidation in a virtualized multicore SoC
Control- and data-plane consolidation is not the only application-level consolidation that will occur. Virtualization and partitioning will allow OEMs to enable their customers to customize service offerings by adding their own applications and OSes to the base system on the same SoC, rather than requiring another discrete processor to handle them. Data or control traffic that is relevant to the customized application and OS can be directed to the appropriate virtualized core without impacting or compromising the rest of the system.
Another example of consolidation of functions is generally called board-level consolidation. Functions that were previously implemented on different boards now can be consolidated onto a single card and a single multicore SoC. Virtualization can present different virtual SoCs to the applications. With increasing SoC and application complexity, the probability of failures due to software bugs and SoC mis-configuration is greater than that of purely hardware-based failures. In such a paradigm, it may make sense to consolidate application-level fault tolerance onto a single multicore SoC, where a fraction of the cores are set aside in hot standby mode. While such a scheme saves the cost of developing a standby board, or at the very least another SoC, it requires the SoC to be able to virtualize not only the core complex but also the I/Os.
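The hot-standby arrangement described above reduces to a small state machine: the active partition posts a periodic heartbeat, and a monitor promotes the standby partition once too many consecutive heartbeats are missed. The following is an illustrative sketch only; the miss threshold, the `failover` structure, and the promotion step are assumptions for this example, not a description of any specific hypervisor:

```c
#include <stdbool.h>

#define MISSED_HEARTBEAT_LIMIT 3  /* assumed policy: fail over after 3 misses */

/* Minimal failover state for one active/standby partition pair. */
struct failover {
    int active;   /* index of the partition currently serving traffic */
    int standby;  /* index of the hot-standby partition */
    int missed;   /* consecutive heartbeats missed by the active side */
};

/* Called on every monitoring tick. 'heartbeat_seen' reports whether the
 * active partition checked in during the last interval. Returns true if
 * a failover (standby promoted to active) occurred on this tick. */
static bool monitor_tick(struct failover *f, bool heartbeat_seen)
{
    if (heartbeat_seen) {
        f->missed = 0;
        return false;
    }
    if (++f->missed < MISSED_HEARTBEAT_LIMIT)
        return false;

    /* Promote the standby: swap roles and reset the miss counter.
     * A real system would also re-route I/O to the new active partition,
     * which is why I/O virtualization matters for this scheme. */
    int old_active = f->active;
    f->active  = f->standby;
    f->standby = old_active;
    f->missed  = 0;
    return true;
}
```

The last comment is the crux of the paragraph above: promoting a standby core is only useful if the virtualized I/O can follow it.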
Virtualization also can help telecom service providers perform in-service software upgrades. In traditional systems, software upgrades either completely bring down the embedded device (switch, router, set-top box) or at the very least slow its performance to a level where the degradation becomes very noticeable. With live traffic running through the device, software upgrades can disrupt traffic flows and may cause revenue loss for the service provider. With virtualization enabled, the multicore SoC can be partitioned so that one partition continues to service traffic while the other partition is upgraded. Once the upgrade is complete, traffic can be moved from the current partition over to the upgraded partition.
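The upgrade sequence just described can be sketched in a few lines: two partitions each hold a software version, new traffic always follows a single "serving" index, so the idle partition can be upgraded in place and then made live with one flip. The structure and function names below are illustrative assumptions, not a real upgrade API:

```c
/* Two partitions, each loaded with some software version. Newly arriving
 * traffic is always dispatched to partitions[serving]. */
struct issu {
    int partitions[2];  /* software version loaded in each partition */
    int serving;        /* which partition currently receives traffic */
};

/* Load a new software image into the partition that is NOT serving
 * traffic. The serving partition keeps forwarding, undisturbed. */
static void upgrade_idle(struct issu *s, int new_version)
{
    s->partitions[1 - s->serving] = new_version;
}

/* Switch over: direct all new traffic to the freshly upgraded partition.
 * In a real system this flip would be an atomic store, and existing
 * flows would be drained from the old partition before it is retired. */
static void switch_over(struct issu *s)
{
    s->serving = 1 - s->serving;
}

/* Version seen by newly arriving traffic. */
static int serving_version(const struct issu *s)
{
    return s->partitions[s->serving];
}
```

The design choice worth noting is that the upgrade itself never touches the serving partition; only the final switchover is visible to traffic, which is what keeps the device in service throughout.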
Although virtualization has its advantages, it brings new challenges and considerations, including partitioning, fair sharing, and protection of resources among multiple, competing applications and OSes. In the sections to follow, we will discuss virtualization technology and how it addresses these challenges.