Defining virtualization for embedded
Virtualization is a combination of software and hardware features that creates virtual CPUs (vCPUs) or virtual SoCs (vSoCs). These vCPUs or vSoCs are generally referred to as virtual machines (VMs). Each VM is an abstraction of the physical SoC, complete with its own view of the interfaces, memory, and other resources within the physical SoC. Virtualization provides the level of isolation and partitioning of resources required to support each VM, so that each VM is protected from interference by another VM. The virtualization layer is generally called the virtual machine monitor (VMM).
Partitioning can be defined as sub-dividing the resources of an SoC in a manner that allows the partitioned resources to operate independently of one another. Partitioned resources can be mapped either explicitly to the actual hardware or to virtualized hardware. Note that a system can be partitioned without being virtualized. For example, in an SoC that supports partitioning but not virtualization, each Ethernet interface can be assigned to a partition, but a single Ethernet interface cannot be assigned to two different partitions at the same time. However, if the SoC also provides virtualization capabilities, a single Ethernet interface can be virtualized and each virtual Ethernet interface can be presented to a different partition.
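As a toy illustration of this difference, the sketch below models exclusive assignment under partitioning versus sharing under virtualization. All class, interface, and partition names here are invented for the example; this is not a real hypervisor API:

```python
class EthernetInterface:
    """A physical Ethernet interface on the SoC (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.owner = None  # at most one partition may own a physical interface

    def assign(self, partition):
        # Partitioning alone: ownership is exclusive.
        if self.owner is not None:
            raise ValueError(f"{self.name} is already owned by {self.owner}")
        self.owner = partition

    def virtualize(self, count):
        """With virtualization, one physical interface can back several
        virtual interfaces, each assignable to a different partition."""
        return [EthernetInterface(f"{self.name}.v{i}") for i in range(count)]

eth0 = EthernetInterface("eth0")
eth0.assign("partition-A")
try:
    eth0.assign("partition-B")   # a second partition cannot share eth0
except ValueError:
    pass                         # rejected, as expected

# With virtualization: each partition gets its own virtual view of eth0.
veth0, veth1 = EthernetInterface("eth0").virtualize(2)
veth0.assign("partition-A")
veth1.assign("partition-B")
```

The exclusivity check in `assign` captures the partitioning-only rule, while `virtualize` captures the one-to-many mapping that virtualization adds on top.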
A hypervisor is a software program that works in concert with hardware virtualization features to present the VM to the guest OS. It is the hypervisor that creates the virtualization layer.
There are two broad architectural approaches when it comes to virtualizing the system:
1) OS-hosted hypervisor
2) Bare-metal hypervisor.
Each approach has its pros and cons, and the choice depends on the applications and market segments being targeted. These approaches are shown in Figure 2.
Fig. 2: Virtualization approaches
The OS-hosted approach installs and runs the hypervisor like any other application on the host OS. The hypervisor schedules the guest OSes according to its scheduling policies. While the OS-hosted approach is the more flexible of the two options, it is only as secure and reliable as the host OS: if the host OS fails or must be rebooted, the guest OSes running on the VMM must be rebooted as well.
Because the VMM and guest OSes run on top of the host OS, their scheduling is subject to the host OS's scheduling policies. Besides being less efficient than the bare-metal approach, the hosted approach cannot guarantee service-level agreements for applications running on the hypervisor.
The bare-metal hypervisor, by contrast, does not depend on a host OS; it is a low-level software program that runs directly on the physical hardware (bare metal) and works in concert with hardware virtualization features to present the VM to the guest OS. Because the hypervisor fully controls the SoC, it can provide quality-of-service guarantees to the guest OSes.
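One common mechanism behind such quality-of-service guarantees is static time partitioning: each VM receives a fixed, pre-programmed window of every scheduling cycle (major frame), so no guest can starve another. The sketch below models the idea; the frame length, VM names, and budgets are invented for illustration, and a real bare-metal hypervisor would enforce the windows with hardware timers rather than a Python loop:

```python
MAJOR_FRAME_US = 10_000  # one scheduling cycle, in microseconds (illustrative)

# Each VM is guaranteed a fixed time window per major frame.
schedule = [("vm0", 5_000), ("vm1", 3_000), ("vm2", 2_000)]

def run_major_frame(schedule):
    """Lay the per-VM budgets out as (vm, start, end) windows in one frame."""
    timeline = []
    now = 0
    for vm, budget_us in schedule:
        timeline.append((vm, now, now + budget_us))
        now += budget_us
    # The windows must exactly fill the frame: the budget is a hard guarantee.
    assert now == MAJOR_FRAME_US
    return timeline

timeline = run_major_frame(schedule)
for vm, start, end in timeline:
    print(f"{vm}: {start}-{end} us")
```

Because the windows are fixed at configuration time, each guest's CPU share is independent of what the other guests do, which is exactly the property a hosted hypervisor cannot promise.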
Orthogonal to the choice of hypervisor architecture, but equally important, is how the hypervisor handles I/O. Three different I/O-handling approaches can be used:
1) Fully virtualized I/O
2) Dedicated I/O
3) Paravirtualized I/O.
In the fully virtualized approach, the hypervisor virtualizes the I/O by emulating the devices in software, but the software overhead of this emulation reduces the efficiency of the system. In the paravirtualized approach, the I/O interfaces are not fully virtualized: device drivers may reside in the hypervisor, in the guest OS, or in a separate partition. The key difference between full I/O virtualization and paravirtualization is that not all device functions are emulated in paravirtualization, which reduces the software overhead at the cost of guest OS portability, since the guest must carry paravirtualized drivers.
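The paravirtualized split can be sketched as a front-end driver in the guest posting requests to a shared queue that a back-end driver (in the hypervisor, another guest, or a driver partition) drains. This is a conceptual toy, loosely in the spirit of the split-driver model used by virtio; all names are invented, and real systems use descriptor rings in shared memory rather than a Python deque:

```python
from collections import deque

shared_ring = deque()  # stands in for a shared-memory request ring

def frontend_submit(request):
    """Guest-side front-end: enqueue a request instead of touching the device."""
    shared_ring.append(request)

def backend_service():
    """Back-end: drain the ring and drive the real device on the guest's behalf."""
    completed = []
    while shared_ring:
        req = shared_ring.popleft()
        completed.append(f"done:{req}")
    return completed

frontend_submit("tx pkt1")
frontend_submit("tx pkt2")
print(backend_service())  # ['done:tx pkt1', 'done:tx pkt2']
```

The guest batches work into the queue and notifies the back-end once per batch, instead of trapping into the hypervisor on every device register access as full emulation requires.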
In the dedicated approach, each VM is assigned a dedicated I/O device in its own partition and, once set up, does not have to go through the hypervisor for I/O transactions, resulting in the lowest software overhead.
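The relative software overhead of the three approaches can be caricatured as the number of times the hypervisor must intervene ("exits") per I/O transaction. The counts below are invented purely for illustration, not measurements of any real system:

```python
def hypervisor_exits(approach, n_transactions):
    """Toy model: how often must the hypervisor intervene for n transactions?"""
    if approach == "fully-virtualized":
        # Every emulated device-register access traps to the hypervisor;
        # assume ~4 register accesses per transaction (illustrative number).
        return n_transactions * 4
    if approach == "paravirtualized":
        # The guest batches work in a shared ring and notifies the host once.
        return n_transactions * 1
    if approach == "dedicated":
        # After setup, the VM drives its own device directly: no exits.
        return 0
    raise ValueError(f"unknown approach: {approach}")

for a in ("fully-virtualized", "paravirtualized", "dedicated"):
    print(a, hypervisor_exits(a, 1000))
```

The ordering, not the exact numbers, is the point: emulation pays on every access, paravirtualization amortizes the cost, and dedicated assignment removes the hypervisor from the data path entirely.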
In summary, the OS-hosted approach offers the greatest application and guest OS portability, while the bare-metal hypervisor approach offers the best performance and the lowest virtualization overhead.
Unlike servers or compute-centric systems, one key design metric for embedded systems is performance per watt of power dissipation. That is, the system should be optimized to extract the best possible performance within a given power budget. The power budget of embedded systems is usually more constrained than that of servers or compute-centric systems.
While portability and flexibility are important, they are often not the number-one concern. As such, the bare-metal hypervisor approach offers the best virtualization solution for embedded systems, and we will focus on it in the rest of this document.