Though many devices get virtualized, two of the main ones are the block device and the network device. You don't score any points for guessing why. These are the devices that form the backbone of server farms.
How do virtual machines running on hypervisors communicate? The basic requirements of networking in a virtualized environment are exactly the same as those of a regular physical network. Since there are no physical network interface cards (NICs) in the virtual machines -- and possibly only a single physical NIC in the whole system -- the hypervisor emulates a NIC for each VM's guest OS and provides it with MAC and IP addresses. These emulated network devices require network drivers to be present in the guest OSs.
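To make the MAC assignment concrete, here is a minimal sketch of how a hypervisor might stamp each emulated NIC with a unique address. All structure and function names are hypothetical, not taken from any particular hypervisor; the one real convention used is that setting the 0x02 bit in the first octet marks a MAC address as locally administered.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-VM emulated NIC state kept by the hypervisor. */
struct emulated_nic {
    uint8_t  mac[6];   /* MAC address presented to the guest OS     */
    uint32_t vm_id;    /* which VM this emulated device belongs to  */
};

/* Derive a locally administered MAC from the VM ID. Setting bit 0x02
 * of the first octet marks the address as locally administered, so it
 * cannot collide with vendor-assigned (globally unique) MACs. */
void assign_vm_mac(struct emulated_nic *nic, uint32_t vm_id)
{
    nic->vm_id  = vm_id;
    nic->mac[0] = 0x02;                  /* locally administered, unicast */
    nic->mac[1] = 0x00;
    nic->mac[2] = (uint8_t)(vm_id >> 24);
    nic->mac[3] = (uint8_t)(vm_id >> 16);
    nic->mac[4] = (uint8_t)(vm_id >> 8);
    nic->mac[5] = (uint8_t)(vm_id);
}

int main(void)
{
    struct emulated_nic nic;
    assign_vm_mac(&nic, 7);
    printf("VM %u MAC %02x:%02x:%02x:%02x:%02x:%02x\n",
           (unsigned)nic.vm_id,
           nic.mac[0], nic.mac[1], nic.mac[2],
           nic.mac[3], nic.mac[4], nic.mac[5]);
    return 0;
}
```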
Just as physical networks need switches or hubs, virtual networks require a Layer 2 virtual switch connected to the VMs' virtual ports, as illustrated in Figure 3.
Figure 3: A virtual network implemented using Type 1 virtualization.
The network card driver can run in any of the following contexts:
- In the context of a guest OS running on a VM
- In the context of an OS running on a VM that handles all the device accesses (including the NIC). Although this OS also runs on a VM, it is not considered a guest OS, for several reasons: it may run with higher privileges, and its sole responsibility is handling device access on behalf of the other VMs (see the sketch after this list)
- In the context of the hypervisor itself
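The second scenario is essentially a split-driver model, loosely in the spirit of Xen's netfront/netback design: the guest runs a thin frontend that places requests on a shared ring, and the driver OS runs a backend that drains the ring and talks to the physical NIC. The sketch below is heavily simplified -- the structures, names, and notification scheme are assumptions for illustration, not any hypervisor's real API.

```c
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 256                /* must be a power of two */

/* One transmit request placed by the guest-side (frontend) driver. */
struct tx_request {
    uint64_t guest_frame_addr;       /* where the packet lives in guest memory */
    uint32_t len;                    /* payload length in bytes                */
};

/* A single-producer/single-consumer ring shared between the guest
 * (producer) and the driver OS (consumer). */
struct shared_ring {
    volatile uint32_t prod;          /* next free slot, written by frontend  */
    volatile uint32_t cons;          /* next unread slot, written by backend */
    struct tx_request req[RING_SIZE];
};

/* Frontend: queue a packet for the backend to push to the real NIC. */
bool frontend_send(struct shared_ring *r, uint64_t addr, uint32_t len)
{
    if (r->prod - r->cons == RING_SIZE)
        return false;                /* ring full; caller retries later */
    r->req[r->prod % RING_SIZE] = (struct tx_request){ addr, len };
    __sync_synchronize();            /* publish the request before the index */
    r->prod++;
    /* A real implementation would now notify the backend (event
     * channel, interrupt, or hypercall). */
    return true;
}

/* Backend: drain requests and hand them to the physical NIC driver. */
void backend_poll(struct shared_ring *r)
{
    while (r->cons != r->prod) {
        struct tx_request *t = &r->req[r->cons % RING_SIZE];
        /* ...map t->guest_frame_addr, DMA t->len bytes to the NIC... */
        (void)t;
        __sync_synchronize();
        r->cons++;
    }
}
```

The same ring structure also fits the third scenario; only the consumer moves from the driver OS into the hypervisor itself.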
Virtual networking issues are similar to those in the physical network world. One area of concern is performance: in a virtual network, the virtual switch is one of the main limiting factors. That's not surprising when we consider the time it takes for the virtual switch to serve each virtual port and copy the network payload data to and from each VM.
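The following deliberately naive sketch shows where that time goes. Everything here is hypothetical (single-slot ports, a linear forwarding-table lookup), but it makes the two costs visible: the loop that must visit every virtual port, and the memcpy of the payload for every forwarded frame.

```c
#include <stdint.h>
#include <string.h>

#define NUM_PORTS  8
#define MAX_FRAME  1514

/* One virtual port: a single-slot mailbox a VM writes frames into.
 * (A real vswitch would use descriptor rings per port.) */
struct vport {
    uint8_t  frame[MAX_FRAME];
    uint32_t len;                    /* 0 means the slot is empty       */
    uint8_t  mac[6];                 /* MAC of the VM behind this port  */
};

struct vswitch {
    struct vport port[NUM_PORTS];
};

/* Look up the destination MAC in a trivial forwarding table. */
int lookup_port(struct vswitch *sw, const uint8_t *dst_mac)
{
    for (int i = 0; i < NUM_PORTS; i++)
        if (memcmp(sw->port[i].mac, dst_mac, 6) == 0)
            return i;
    return -1;                       /* unknown: a real switch would flood */
}

/* One scheduling pass: visit every port and forward any pending frame.
 * This per-port service loop and the payload copy are exactly the
 * costs that make the virtual switch the bottleneck. */
void vswitch_run_once(struct vswitch *sw)
{
    for (int in = 0; in < NUM_PORTS; in++) {
        struct vport *src = &sw->port[in];
        if (src->len == 0)
            continue;
        int out = lookup_port(sw, src->frame); /* dst MAC = first 6 bytes */
        if (out >= 0 && out != in) {
            struct vport *dst = &sw->port[out];
            memcpy(dst->frame, src->frame, src->len); /* the copy cost */
            dst->len = src->len;
        }
        src->len = 0;                /* frame consumed */
    }
}
```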
In the world of virtual networks in server farms, the problems mentioned above are mitigated by employing single-root I/O virtualization (SR-IOV) technology provided by the peripheral component interconnect (PCI) subsystem. SR-IOV is a technology specified by the PCI-SIG, in which a device exposes one physical function and multiple virtual functions. Each guest OS uses a virtual function for data transfer, whereas the hypervisor uses the physical function for data transfer as well as device configuration and control.
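As a rough illustration of the mechanism, the sketch below enables a number of virtual functions on a physical function by programming the SR-IOV extended capability in PCI configuration space. The register offsets and control bits follow the PCI-SIG SR-IOV specification, but the config-space accessors and the surrounding code are assumptions about a hypothetical hypervisor's PCI layer, not a real API.

```c
#include <stdint.h>

/* Offsets within the PCIe SR-IOV extended capability (per the
 * PCI-SIG SR-IOV specification). */
#define SRIOV_CTRL       0x08   /* SR-IOV Control register         */
#define SRIOV_TOTAL_VFS  0x0E   /* max VFs the device can expose   */
#define SRIOV_NUM_VFS    0x10   /* how many VFs to actually enable */

#define SRIOV_CTRL_VF_ENABLE  (1u << 0)
#define SRIOV_CTRL_VF_MSE     (1u << 3)   /* VF memory space enable */

/* Hypothetical config-space accessors supplied by the hypervisor's
 * PCI layer; 'cap' is the base offset of the SR-IOV capability found
 * by walking the extended capability list (capability ID 0x0010). */
uint16_t pci_cfg_read16(void *pf, uint16_t off);
void     pci_cfg_write16(void *pf, uint16_t off, uint16_t val);

/* Enable up to 'wanted' virtual functions on a physical function.
 * Each enabled VF then appears as its own PCI function that can be
 * handed to a guest OS for direct data transfer. */
int sriov_enable_vfs(void *pf, uint16_t cap, uint16_t wanted)
{
    uint16_t total = pci_cfg_read16(pf, cap + SRIOV_TOTAL_VFS);
    if (wanted > total)
        wanted = total;

    pci_cfg_write16(pf, cap + SRIOV_NUM_VFS, wanted);
    pci_cfg_write16(pf, cap + SRIOV_CTRL,
                    SRIOV_CTRL_VF_ENABLE | SRIOV_CTRL_VF_MSE);
    /* The spec requires system software to wait (up to 100 ms) after
     * setting VF Enable before the VFs become usable. */
    return wanted;
}
```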
Since PCI is not yet deeply entrenched in the embedded world, network performance is still an issue there. Players like Intel would certainly like PCI to percolate through the system as they push their SoCs into the embedded space.
The question that arises is which approach to take in order to improve network performance. Should we:
- Embrace PCI and hence SR-IOV technology?
- Implement virtual switch functionality in hardware (as part of the NIC)?
- Develop the equivalent of SR-IOV technology for a system bus?
- Do something else?
Do you have any thoughts on this topic? If so, please share them with the rest of us in the comments below.