
Xen is an open-source bare-metal hypervisor which allows you to run different operating systems in parallel on a single host machine. Because it runs directly on the hardware, it is normally referred to as a type 1 hypervisor in the virtualization world.

Xen is used as a basis for server virtualization, desktop virtualization, Infrastructure as a Service (IaaS) and embedded/hardware appliances. The ability of a physical host system to run multiple guest VMs can vastly improve the utilization of the underlying hardware.

Cutting-edge features of Xen hypervisor

 Xen is operating system agnostic – the main control stack (domain 0) can be Linux, NetBSD, OpenSolaris etc.
 Driver isolation – Xen can allow the main device driver for a system to run inside a virtual machine. That VM can be rebooted in case of a driver failure or crash without affecting the rest of the system.
 Paravirtualization support – fully paravirtualized guests can run much faster than fully virtualized guests that rely on hardware virtualization extensions.
 Small footprint and interface – the Xen hypervisor uses a microkernel design, resulting in a footprint of around 1 MB. This small memory footprint and limited interface to the guests make Xen more robust and secure than other hypervisors.

Xen Project packages

Xen Project packages consist of:

 A Xen Project-enabled Linux kernel
 The Xen hypervisor itself
 A modified version of QEMU – provides HVM support
 A set of userland tools

Xen Components

The Xen Project hypervisor runs directly on the hardware and is responsible for handling CPU, memory, and interrupts. It is the first program to run after the bootloader exits. A domain (or guest) is a running instance of a virtual machine.

Below is a list of Xen Project Components:

1. Xen Project hypervisor: Runs directly on the hardware and is responsible for managing memory, CPU and interrupts. It has no knowledge of I/O functions such as networking and storage.
2. The Control Domain (Domain 0): Domain 0 is a special domain which contains the drivers for all the devices in the host system, as well as the control stack used to manage the virtual machine life cycle – creation, destruction, and configuration.
3. Guest Domains/Virtual Machines: A guest is an operating system running in a virtualized environment. Xen supports two modes of virtualization:
o Paravirtualization (PV)
o Hardware-assisted or Full Virtualization (HVM)
Both guest types can be used at the same time on a single hypervisor. Paravirtualization techniques can also be used in an HVM guest (PV on HVM) – essentially creating a continuum between PV and HVM.

Guest VMs are called unprivileged domains (DomU) since they have no privileged access to hardware or I/O functionality. In other words, they are totally isolated from the hardware.

4. Toolstack and Console: The toolstack is a control stack within Domain 0 which allows a user to manage virtual machine creation, configuration and destruction. It exposes an interface that can be driven from a command-line console, from a graphical interface, or by a cloud orchestration stack such as OpenStack or CloudStack. The console is the interface to the outside world.
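For illustration, the default xl toolstack that ships with the Xen Project can drive the whole life cycle from the Domain 0 command line. The guest name and config file path below are made-up examples:

```
# xl list                        # show running domains (Domain-0 plus any guests)
# xl create /etc/xen/myguest.cfg # start a guest from its config file
# xl console myguest             # attach to the guest console (Ctrl-] to detach)
# xl shutdown myguest            # request a clean shutdown
# xl destroy myguest             # hard power-off
```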

Paravirtualization (PV)

 Efficient and lightweight virtualization technique originally introduced by the Xen Project.
 The hypervisor provides an API used by the guest VM's OS.
 The guest OS needs to be modified to use this API.
 Does not require virtualization extensions from the host CPU.
 PV guests and control domains require a PV-enabled kernel and PV drivers, so the guests are aware of the hypervisor and can run efficiently without emulation or emulated hardware.

Functionalities implemented by Paravirtualization include:

 Interrupts and timers
 Disk and network drivers
 Emulated motherboard and legacy boot
 Privileged instructions and page tables
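As a sketch of what this looks like in practice, a minimal PV guest definition for the xl toolstack could resemble the following; the guest name, kernel/initrd paths and disk image are all made-up examples:

```
# /etc/xen/pv-guest.cfg -- hypothetical PV guest example
name    = "pv-guest"
kernel  = "/boot/vmlinuz-guest"        # PV-enabled guest kernel (example path)
ramdisk = "/boot/initrd-guest.img"     # matching initrd (example path)
memory  = 1024
vcpus   = 1
disk    = ['file:/var/lib/xen/images/pv-guest.img,xvda,w']
vif     = ['bridge=xenbr0']
```

Note the absence of any emulated hardware: the guest boots its PV-enabled kernel directly and talks to the hypervisor through the PV interfaces listed above.
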
Hardware-assisted Virtualization (HVM) – Full Virtualization

 Uses virtualization extensions of the host CPU to handle guest requests.
 Requires Intel VT or AMD-V hardware extensions.
 Fully virtualized guests do not require any kernel support, hence operating systems such as Windows can be used as Xen Project HVM guests.
 The Xen Project software uses QEMU to emulate PC hardware, including BIOS, IDE disk controller, VGA graphics adapter, USB controller, network adapter etc.
 Performance of the emulation is boosted using the hardware extensions.
 In terms of performance, fully virtualized guests are usually slower than paravirtualized guests because of the required emulation.
 Note that it is possible to use PV drivers for I/O to speed up HVM guests.
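For comparison, here is a sketch of an HVM guest definition; the builder line is what requests full, hardware-assisted virtualization, and all names and paths are made-up examples:

```
# /etc/xen/hvm-guest.cfg -- hypothetical HVM guest example
name    = "hvm-guest"
builder = "hvm"                        # use hardware-assisted virtualization
memory  = 2048
vcpus   = 2
disk    = ['file:/var/lib/xen/images/hvm-guest.img,hda,w']
vif     = ['bridge=xenbr0']
vnc     = 1                            # console on the emulated VGA adapter via VNC
```

Unlike the PV case, no kernel/ramdisk lines are needed: the guest boots through the emulated BIOS like a physical machine.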

PVHVM – PV-on-HVM drivers

 PVHVM combines elements of HVM and PV
 Allows hardware-virtualized guests to use PV disk and network drivers
 No modifications to the guest OS are required
 HVM guests use optimized PV drivers to boost performance – this bypasses the emulation for disk and network I/O, resulting in better performance on HVM systems.
 Gives optimal performance on guest operating systems such as Windows.
 PVHVM drivers are only required for HVM (fully virtualized) guest VMs.
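Inside a Linux HVM guest you can check whether the PV front-end drivers are actually in use; the standard front-end module names are xen_blkfront (disk) and xen_netfront (network):

```
# lsmod | egrep 'xen_(blkfront|netfront)'
```

Non-empty output means disk and network I/O are going through the PV drivers rather than the emulated devices.
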
Installing Xen on CentOS 7.x

Follow these steps to install Xen Hypervisor environment:

1. Enable CentOS Xen Repository

# yum -y install centos-release-xen

2. Update the kernel and install Xen:

# yum -y update kernel && yum -y install xen

3. Configure GRUB to start Xen Project

Because the hypervisor starts before your operating system, we need to change how the system boot process is set up:

# vim /etc/default/grub

Set the Xen command line, changing the Domain 0 memory amount (dom0_mem) to match the memory you want to allocate to it, for example:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M cpuinfo com1=115200,8n1 console=com1,tty loglvl=all"
4. Run the grub-bootxen.sh script to make sure /boot/grub2/grub.cfg is updated:

bash `which grub-bootxen.sh`

Confirm the values have been modified:

grep dom0_mem /boot/grub2/grub.cfg

5. Reboot your server

# systemctl reboot
6. Once you reboot, verify that the new kernel is running with:

# uname -r
7. Verify that Xen is running:
# xl info

host                   : xen.example.com
release                : 3.18.21-17.el7.x86_64
machine                : x86_64
nr_cpus                : 6
max_cpu_id             : 5
nr_nodes               : 1
cores_per_socket       : 1
threads_per_core       : 1


Deploy first VM

At this point you should be ready to bring up your first VM. In this demo, I’ll
use virt-install to deploy a VM on Xen.

# yum --enablerepo=centos-virt-xen -y install libvirt libvirt-daemon-xen virt-install

# systemctl enable libvirtd

# systemctl start libvirtd

The host OS install in Xen is known as Dom0. Virtual machines (VMs) running under Xen are known as DomUs.

virt-install -d \
--connect xen:/// \
--name testvm \
--os-type linux \
--os-variant rhel7 \
--vcpus=1 \
--paravirt \
--ram 1024 \
--disk /var/lib/libvirt/images/testvm.img,size=10 \
--nographics \
-l "" \
--extra-args="text console=com1 utf8 console=hvc0"
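
Once the installer finishes, you can confirm that the new DomU is running with either toolstack; testvm is the name given above:

```
# xl list
# virsh --connect xen:/// list
```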

If you would like to manage DomU VMs using a graphical application, consider installing virt-manager:

# yum -y install virt-manager