The concept of virtualization originated on the IBM mainframes of the late 1960s, where a single computer could execute multiple different guest operating systems at the same time.
The virtualization layer in these mainframes provided a virtual environment that had the exact same properties as the bare hardware. Different operating systems could be executed in each of these virtual environments without being aware of one another.
The next groups to take on virtualization were the major UNIX vendors, such as Sun Microsystems, Hewlett-Packard, IBM, and SGI, initially on their own proprietary (and expensive) hardware.
VMware picked up the virtualization torch in the late 1990s, bringing the technology to x86 hardware and server farms. Virtualizing the x86 architecture is no easy feat, and early virtual environments carried a hefty performance penalty.
This is why the processor vendors (Intel and AMD) added virtualization hardware assist (VT-x and AMD-V, respectively) to their server-class processors. Many other companies provide virtualization technology as well, among them Microsoft, Citrix, Red Hat, and Parallels, making this a genuinely hot topic.
The main principle in virtualization is that there is a virtual machine monitor (VMM, often called the hypervisor) that has ultimate control of the hardware. This hypervisor will need to arbitrate requests from the operating systems executing in the virtual environments (often referred to as guest operating systems or guests).
In full virtualization, the guest operating system is unmodified and unaware that it is running in a virtual environment. Certain actions by the guest (privileged operations, such as reconfiguring the memory management unit, or MMU) are intercepted and arbitrated by the hypervisor.
With hardware assist, the processor will detect the privileged operation by the guest and will context-switch to the hypervisor so that the hypervisor can perform the necessary actions to arbitrate the request.
Without hardware assist, the hypervisor has to inspect the guest's code before it executes (a technique known as binary translation), which is expensive and carries a significant performance penalty. In many server and information technology environments, full virtualization is a requirement.
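To make the mechanism concrete, here is a minimal sketch of a trap-and-emulate loop with hardware assist. All names (struct guest, run_guest, emulate_mmu_update, and so on) are hypothetical, not from any real hypervisor interface; vendor-specific mechanisms such as Intel's VT-x structures differ in detail, but the control flow follows the same pattern: the guest runs natively until a privileged operation traps back into the hypervisor.

    /* Hypothetical sketch of a trap-and-emulate loop; all names are
     * illustrative only. */
    struct guest;                       /* opaque per-guest state   */

    typedef enum {
        EXIT_MMU_WRITE,                 /* guest touched the MMU    */
        EXIT_IO_ACCESS,                 /* guest touched a device   */
        EXIT_HALT                       /* guest shut itself down   */
    } exit_reason_t;

    /* Illustrative prototypes only. */
    exit_reason_t run_guest(struct guest *g);
    void emulate_mmu_update(struct guest *g);
    void emulate_device_access(struct guest *g);

    void hypervisor_loop(struct guest *g)
    {
        for (;;) {
            /* Enter the guest; the CPU executes guest code directly
             * until a privileged operation forces a context switch
             * back to the hypervisor. */
            exit_reason_t reason = run_guest(g);

            switch (reason) {
            case EXIT_MMU_WRITE:
                /* Arbitrate the guest's MMU reconfiguration, e.g. by
                 * updating shadow or nested page tables on its behalf. */
                emulate_mmu_update(g);
                break;
            case EXIT_IO_ACCESS:
                /* Emulate or forward access to a (virtual) device. */
                emulate_device_access(g);
                break;
            case EXIT_HALT:
                return;
            }
        }
    }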
Virtualization in the IT market has enabled companies to deliver more services at a lower price by using fewer servers, while also providing better capabilities such as fault tolerance, load balancing across sites, disaster recovery, and so forth.
The concepts used in virtualization for embedded designs are the same as in the server industry, but the implementation varies widely. The chips used in the embedded industry do not always have the hardware assist to facilitate virtualization; they have less horsepower, and memory is limited.
Embedded applications also typically require real-time response to events, often in the microsecond range. This means embedded hypervisors need to be small and fast, adding minimal overhead to interrupt latency and computation (ideally no overhead at all).
Running unmodified operating systems under full virtualization can be costly, especially when hardware assist is not available. In the embedded world, performance matters more than full virtualization support, and source code is available for many embedded operating systems, opening the door to a technique called "paravirtualization."
In paravirtualization, the guest operating system is modified to collaborate with the hypervisor. The privileged operations are removed from the guest operating system and replaced with explicit requests into the hypervisor (essentially system calls, often named hypercalls).
Paravirtualization provides efficient virtualization on top of any processor—Intel, PowerPC, ARM, MIPS, and so on—bringing the benefits of virtualization to the embedded world.
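The sketch below contrasts the two approaches for one privileged operation, a page-table update. The hypercall name and signature are invented for illustration; real interfaces (Xen's hypercalls, for instance) differ in detail.

    /* Full virtualization: the unmodified guest writes the page-table
     * entry directly; the hardware assist (or binary translation) must
     * catch this privileged operation and trap into the hypervisor. */
    void set_pte_native(unsigned long *pte, unsigned long val)
    {
        *pte = val;                       /* privileged operation */
    }

    /* Paravirtualization: the modified guest asks the hypervisor to
     * perform the update on its behalf, avoiding the cost of trapping
     * and decoding the offending instruction. The hypercall below is
     * a hypothetical interface. */
    extern long hypercall_mmu_update(unsigned long *pte, unsigned long val);

    void set_pte_paravirt(unsigned long *pte, unsigned long val)
    {
        hypercall_mmu_update(pte, val);   /* essentially a system call
                                             into the hypervisor */
    }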
Multicore and virtualization
Virtualization is often mentioned in the same sentence as multicore, and the reason is evident: Virtualization provides the capability to partition a powerful multicore processor into multiple separate virtual environments. Networking was one of the first embedded industries to go multicore, but industrial control, automotive, and many others are following.
Multicore simply delivers more instructions per second for less power and with less heat. The question is how to harness that power, and virtualization is the answer: it provides the capability to create low-overhead, individual environments in which to run legacy code or develop new code.
Modern processors have anywhere from two to 32 cores, with more on the horizon, and designers can run these processors with a single operating system or partition them and run multiple operating systems on them, some single core, some multicore.
Primary virtualization drivers
There are many reasons why device developers are looking to multicore and virtualization for embedded devices, but there are two main ones: performance and consolidation.
The performance driver is easy to understand. A project looking to build more powerful devices will need to look towards multicore, as the race for higher-megahertz processors is over. Higher frequencies consume disproportionately more power, so multiple cores running at lower frequencies are the way to go.
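To see why, consider the classic approximation for CMOS dynamic power, P ≈ C·V²·f: power grows linearly with clock frequency f but with the square of supply voltage V, and lower frequencies permit lower voltages. Two cores at half the frequency (and a correspondingly reduced voltage) can therefore match the throughput of one full-speed core at a fraction of the power; the exact savings depend on the silicon process and the workload.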
The consolidation driver is also easy to understand. Many devices currently have multiple processors, especially in automotive, transportation, industrial control, medical, printers, and so forth, and multicore provides an opportunity to combine these separate processors into a single device, saving space, cost, and power.
The hypervisor runs on the bare-metal multicore or single-core hardware; partitions the memory, processing cores, and devices; creates the virtual environments; and then gets out of the way. It allows a control partition to reset virtual environments and update them on the fly.
The hypervisor also provides the ability to lock each virtual environment to its own core or to time-slice a single physical core between multiple virtual environments with an efficient real-time scheduler.
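As an illustration, the hypothetical static configuration below pins a real-time control guest to its own core while time-slicing a Linux GUI guest and a small management partition on another core. The structure and field names are invented for this sketch, not taken from any particular hypervisor.

    /* Hypothetical static partition table for an embedded hypervisor;
     * all types and field names are invented for illustration. */
    typedef enum { SCHED_DEDICATED_CORE, SCHED_TIME_SLICED } sched_policy_t;

    struct partition_config {
        const char    *name;
        unsigned long  mem_base;    /* start of the partition's RAM   */
        unsigned long  mem_size;    /* bytes of RAM assigned          */
        unsigned int   core_mask;   /* physical cores it may run on   */
        sched_policy_t policy;
        unsigned int   slice_us;    /* time slice, if time-sliced     */
    };

    static const struct partition_config partitions[] = {
        /* Safety-critical RTOS guest: its own core, its own memory. */
        { "rtos_control", 0x80000000, 64u << 20, 0x1,
          SCHED_DEDICATED_CORE, 0 },

        /* A Linux GUI guest and a small management partition share
         * core 1 under the hypervisor's real-time scheduler. */
        { "linux_gui",    0x84000000, 192u << 20, 0x2,
          SCHED_TIME_SLICED, 5000 },
        { "management",   0x90000000, 16u << 20,  0x2,
          SCHED_TIME_SLICED, 1000 },
    };

The same static assignment is what enforces the separation discussed below: the control guest's core and memory are simply never visible to the other partitions.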
Multicore is here to stay, and a product that uses a single core today may go dual-core in the next release and quad-core or more five years from now. Virtualization provides the flexibility to adapt to those changes without large architectural changes in the software for these devices.
One of the reasons why many current embedded systems have multiple different processors is to keep critical code separate from non-critical code. For example, the control of large machinery runs on a separate processor from the software that provides the graphical user interface to the user.
These types of separations can often be found in embedded systems for security, safety, or legal reasons. An embedded hypervisor is a small layer of software underneath the virtual environments, so it is the ideal arbitrator, ensuring that the virtual environments do not impact each other. Often these arbitrators need to be certified, which is feasible due to the small size of the virtualization layer.
Separation is not a barrier to virtualization, but it is something that should be taken into account when designing a virtualized embedded system.
Hypervisors drive innovation
A slight twist on the consolidation driver is the situation where an existing device has a certain amount of functionality: for example, a medical device, a multifunction printer, or a set-top box. These devices often run a real-time operating system to satisfy responsiveness requirements.
The emergence of Linux has opened up possibilities to add new and groundbreaking functionality to these devices, but the cost of adding a second processor is often prohibitive. Virtualization removes that barrier, enabling the designer to add Linux to the existing processor (provided enough processing cycles are available) and so bring innovative new functionality to the device.
Hypervisor technology has been around longer than the now ubiquitous personal computer. Virtualization has taken off in the IT market in recent years and is now making its way into the embedded market. But embedded systems are more sensitive to available resources than IT systems; hence the hypervisors themselves are smaller and focused on real-time behaviour.
Using hypervisors to refresh existing designs gives the embedded designer new ways to add features and security, both through open source and in-house development. Certification artefacts help developers through the standards approval process more effectively, and hypervisors assist in separating the safety-critical elements of the design from less critical areas.
Because embedded designers are far more constrained by available resources than enterprise system developers, the new generation of hypervisors for industrial, transportation, and medical designs must keep a tight focus on memory footprint and processor cycles.
This gives the designer maximum flexibility in code development, providing new features that improve both the safety and the functionality of next-generation systems. Embedded systems are getting faster, multicore is being used more and more, and designers are looking for the next level of flexibility and scalability when building embedded systems.
Mark Hermeling is senior product manager for multicore and virtualization at Wind River.