
Study of secure isolation of virtual machines and

their exposure to hosts in a virtual environment.


Gavin Fitzpatrick
School of Computing
Dublin City University
Dublin, Ireland
Email: gavin.fitzpatrick7@mail.dcu.ie

Abstract—In this paper we look at the fundamentals of virtualization and how it is defined. The paper also discusses isolation of virtual machines within a single-host environment and the types of isolation that are present. Type 1 and Type 2 hypervisors are described in detail, highlighting the differences between their architectures. The paper then describes a test platform, the testing tools and the experiments used to test how well isolation is enforced within five chosen platforms. Results are discussed on a test-by-test basis and also on a platform-by-platform basis. Related work is discussed at the end of the paper.

I. INTRODUCTION
Virtualization has been around since the 1970s; however, x86 virtualization [21] only came about in the 1990s. As defined by Popek and Goldberg [?], an x86 Virtual Machine Monitor (VMM) is characterised by three primary characteristics:

1) Fidelity: a VMM must provide an environment for programs identical to that of the original machine.
2) Performance: programs running within the VMM must only suffer minor decreases in performance; the majority of guest CPU instructions should be executed in hardware without intervention from the VMM.
3) Safety: the VMM must have complete control of the system resources; guests or Virtual Machines (VMs) should not be able to access any resource not allocated to them.

The safety characteristic can be further refined as discussed in [45], where isolation is divided into two dimensions:

1) Resource Isolation: refers to a VMM's ability to isolate the resource consumption of one VM from that of another VM using appropriate algorithms [20]. Using appropriate scheduling and allocation of machine resources, a VMM can enforce strong resource isolation between VMs competing for the same resources.
2) Namespace Isolation: states how a VMM limits access to its file-system, processes, memory addresses, user ids etc. It also affects two aspects of application programs:
i Configuration Independence: file names of one VM do not conflict with those of another VM.
ii Security: one VM cannot modify data belonging to another VM stored on the same host.

Namespace and Resource Isolation [45] may not be such a major risk within a private enterprise where all infrastructure is physically and network secure. However, with the emergence of cloud computing [31] and infrastructure as a service (IaaS), companies can now rent infrastructure directly from the cloud, for example EC2 from Amazon [1]. This allows a single host to contain multiple VMs from many different organizations. As a result the isolation landscape discussed in [45] becomes very important, especially if there are any misbehaving VMs running on the platform in question. This paper will look at different hypervisor architectures and discuss resource isolation within these hypervisors. Section II details the testing environment, including both the physical and virtual aspects of the environment. Section III discusses the Type 1 hypervisor architectures used in this paper and Section IV discusses the Type 2 hypervisor architectures used in this paper. Sections III and IV also discuss how secure isolation is achieved within each architecture where possible. Section V discusses the testing tools used for each experiment. Section VI discusses each experiment and the reasons behind it. Sections VII and VIII discuss the results on a test-tool and hypervisor basis. Section IX discusses related work in the field, and finally Section X closes with the conclusion of the paper.
II. TEST ENVIRONMENT

A. Physical Environment

All testing was conducted on a Dell Dimension 530 with a VT-enabled [6] Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz, 3GB of 677MHz RAM, a single 100Mbps NIC and 3 x SATA 7200rpm hard disk drives, all of which were connected individually:
• Disk 1 contains the 64bit operating systems: Ubuntu 10.4, Windows 7 and Server 2008 R2.
i Disk 1 contained a multiboot grub loader which allowed Ubuntu 10.4 and Windows Server 2008 R2 to boot. Windows 7 was loaded into the boot loader within Win2008 using a tool called EasyBCD [?].
• Disk 2 contains Citrix XenServer 5.6.0 [2].
• Disk 3 contained ESXi 4.1.0 [25].
B. Virtual Machine Environment

As the host contains 3GB of memory, the number of VMs is restricted by the amount of memory in the system. Some hypervisors do offer memory overcommit techniques [25] [2]; however, to maintain consistency, 4 VMs are loaded on the host machine. Each VM is allocated 1 vCPU, 512MB of RAM and a bridged virtual network card, so it is visible on the physical network through the host's physical network card. The following four VMs are installed and configured on each hypervisor:
• VM1 contains a guest O.S Windows XP SP3.
• VM2 contains a guest O.S Windows 2003 Server.
• VM3 contains a guest O.S Ubuntu 10.4.
• VM4 contains a guest O.S Ubuntu 10.4 - this guest is also responsible for running the benchmark tests against the host.
There are also 2 additional zombie VMs running on 2 laptops cabled onto the same network. These VMs run on VirtualBox [22] within each laptop. Each is installed with mausezahn [9], a network traffic generation tool used for network testing. All guest OSs are 32bit. For the remainder of this paper each guest stated above will be referred to as VMX, with X referring to the number of the VM, and the zombies will be referred to as VMZ.
III. TYPE 1 HYPERVISOR - NATIVE OR BARE-METAL

x86 architectures are designed around 4 rings of privilege [30]:
• Ring 3: executes user mode - has no direct access to the underlying hardware.
• Ring 2: not used by modern operating systems.
• Ring 1: not used by modern operating systems.
• Ring 0: has full access to the underlying hardware within the host system.
Type 1 hypervisors usually run at Ring 0, or in Rooted Mode on hardware-assisted systems [21], and all access to the underlying hardware resources is controlled by the hypervisor. Although in the past Para Virtualization, Binary Virtualization and Hardware Assisted Virtualization represented different VMM architectures, today, with Intel's VT-x [6] and AMD's AMD-V [28] featuring in all new systems, the current Type 1 hypervisors all take advantage of Hardware Virtualization; both VT-x and AMD-V operate in Ring -1 mode. However, as discussed in [30], there are some notable performance differences:
• Software outperforms hardware for workloads that perform I/O, create processes or rapidly switch contexts between guest context and host context.
• Hardware outperforms software for workloads rich in system calls.
As stated in [24], Type 1 hypervisors are designed with greater levels of isolation in mind: by maintaining separate memory partitions for each guest, they allow user programs in Ring 3 to execute natively on the CPU.

A. XenServer

The first Type 1 hypervisor is Citrix XenServer 5.6 [2], which is a paravirtualized hypervisor based on the opensource Xen project [13]. The VMM automatically loads a secure Linux O.S as Domain 0 within XenServer. All guest interactions with the hypervisor are managed via Domain 0, which is itself a privileged guest sitting on top of the host [3]. XenServer schedules CPU time slices to guest domains using the Borrowed Virtual Time algorithm [34]. XenServer also offers I/O rings for data transfer from guests to the Xen hypervisor [42]. Network access from guests is controlled via VIFs [42]; each VIF has 2 I/O rings associated with it for send and receive data, serviced using a round robin algorithm. However, paravirtualization requires modification of guest O.S's in order to perform correctly; [42] discusses how many additional lines of code are required to allow an O.S to perform safely within this environment, as stated in Table 2 of that paper. For testing purposes there were no configuration changes made to the XenServer environment; all guest domains are stored on local disk 2.
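As a rough illustration of the round-robin servicing of per-VIF I/O rings just described, the Python sketch below shows a backend loop that visits each guest's transmit ring in turn and takes at most a fixed quantum of requests per visit, so no single guest can monopolise the shared network path. This is a conceptual sketch only; the function names and the quantum value are assumptions and it is not Xen's implementation.

    from collections import deque

    def transmit(packet):
        pass  # placeholder for handing the buffer to the physical NIC driver in dom0

    def service_rings(rings, quantum=8):
        # rings: mapping of VIF name -> deque of pending transmit requests
        while any(rings.values()):
            for vif, ring in rings.items():
                # take at most `quantum` requests from this guest before moving on
                for _ in range(min(quantum, len(ring))):
                    transmit(ring.popleft())

    service_rings({"vif1.0": deque(["pkt"] * 20), "vif2.0": deque(["pkt"] * 5)})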
B. Hyper-V R2

Microsoft's Hyper-V [44] is also a paravirtualized hypervisor which follows the same architecture as XenServer; however, Microsoft uses partitions instead of domains, and therefore a secure version of Windows 2008 R2 is loaded as the root partition, i.e. Domain 0 [26] [3]. Hyper-V's architecture is described in detail by Russinovich [44], who states that the kernel runs at Ring 0 while the VMM runs at Ring -1, which allows full control of the execution of the kernel code. The Hyper-V process, Hvix64.exe [6] for VT-x or Hvax64.exe for AMD-V [28], is loaded as part of the Win2008 boot process in order to launch itself into Ring -1. Each child partition is represented by a Vmwp.exe process which manages the state of that child partition; the Vmms.exe process is used to create the Vmwp.exe processes. Hyper-V offers two features which allow greater native performance from its guests:
1) Enlightenments: the latest Microsoft operating systems can directly request services from the hypervisor using the Microsoft hyper-call API, allowing near native performance of guest executed code on the hypervisor. Hyper-calls can also be used to immediately schedule another virtual processor to access the CPUs, reducing the use of spin-locks on multiprocessors.
2) Host Integration Services: available to Microsoft and Linux guests [8]; when installed it allows near native access to hardware devices and consists of 3 components:
• Virtual Service Clients (VSC): reside at Ring 0 in the child partition, replace guest device drivers and communicate via the VMB with the hypervisor.
• Virtual Machine Bus Driver (VMB): presents a communication channel through which guests within child partitions can communicate with the root partition.
• Virtual Service Providers (VSP): reside at Ring 0 in the root partition and initiate requests via the root's device drivers on behalf of guests running within the child partitions.
For testing purposes there were no configuration changes made to the Hyper-V environment or the root partition Win2008 R2; all guest domains are stored on local disk 1 within the Windows 2008 partition.
C. ESXi

VMWare's ESXi [25] is considered a bare-metal hypervisor which uses both binary and hardware assisted virtualization [21]. Unlike XenServer and Hyper-V, ESXi does not use a preloaded Domain 0 guest for host management; all access and control of the physical resources is handled by the hypervisor itself. ESXi schedules CPU time slices to the guest O.S using a proportional share based algorithm [20] which arbitrates access to the CPU between the different guests. All access is defined by shares, which can be customized by the administrator of the host, thereby allowing higher priority VMs to have more CPU time (shares). Storage I/O is controlled in the same way for guests using a Storage I/O controller [14], meaning guests have access to I/O within the host via shares which again can be controlled by an administrator. Network I/O is controlled via Network I/O Control [11], meaning guests will have pre-assigned shares for network access.
For testing purposes there were no configuration changes made to the ESXi 4.1 [25] environment; all guests are stored on local disk 2 within the vmfs partition and all guests have equal share access to the CPU, I/O and Memory.
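To make the share-based allocation concrete, the sketch below computes the CPU entitlement each VM would receive in proportion to its configured shares. It is a conceptual illustration under assumed share values (equal shares, as used in the experiments), not VMware's scheduler.

    def cpu_entitlement_mhz(shares, capacity_mhz):
        # Proportional share: each VM is entitled to capacity * (its shares / total shares).
        total = sum(shares.values())
        return {vm: capacity_mhz * s / total for vm, s in shares.items()}

    # Four equal-share VMs on the quad-core 2.40 GHz test host used in this paper.
    print(cpu_entitlement_mhz({"VM1": 1000, "VM2": 1000, "VM3": 1000, "VM4": 1000},
                              capacity_mhz=4 * 2400))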
IV. TYPE 2 HYPERVISOR - HOSTED

Type 2 hypervisors are loaded into memory within the host of a non-virtualized OS via Ring 0 drivers [22] [27]. The hypervisor exists as a number of processes within the host OS and is therefore dependent on CPU scheduling within the host OS.

A. VirtualBox

VirtualBox [22] claims to offer native virtualization [23], as guest code can run unmodified directly on the host computer; however, the VirtualBox hypervisor resides within the host O.S and does not require VT-x [6] or AMD-V [28] to operate, so it will be considered a Type 2 hypervisor for the remainder of the paper, as its architecture is similar to that of VMWare's Workstation [47]. VirtualBox runs two processes:
1) Vboxsvc.exe: manages and tracks all VirtualBox processes running within the hosted environment.
2) Virtualbox.exe: this process, running within the host operating system, is responsible for the following functions:
i Contains a complete guest operating system including all guest processes and drivers.
ii Contains a Ring 0 driver which sits inside the host O.S and is responsible for the following:
• allocating physical memory for the virtual machine
• switching between the host Ring 3 and guest context
VirtualBox's argument regarding native virtualization [23] is that the CPU can run in one of four states while guests are running:
1) execute host ring-3 code (other host processes) or host ring 0 code;
2) emulate guest code - an emulator steps in to translate this code into usable ring 3 code;
3) execute guest ring 3 code natively;
4) execute guest ring 0 code natively - if VT-x or AMD-V is enabled this is executed at ring 0, however if VT-x or AMD-V is disabled or not present the guest is fooled into running at ring 1.
VirtualBox 3.2.6 running on a Windows 7 host on Disk 1, with the default configuration, was used in this paper.
B. Workstation

VMWare's Workstation is another hosted hypervisor used to allow multiple guests, depending on hardware resources, to run concurrently on top of the host O.S. It also uses processes [47] within the host O.S to control and manage its guests. Although there is no official documentation for the Workstation version 6 architecture [?], I have found an article on Workstation's processes [27] and section 3.2 of [18] which state that Workstation and VirtualBox follow a similar architecture:
• vmware.exe - also known as the VMApp, resides in Ring 3 and handles I/O requests from the guests via system calls.
• vmware-vmx.exe - also known as the VMX driver, resides at Ring 0 within the host O.S; guests communicate with the host via the VMX driver.
• VMM - unknown to the host O.S, gets loaded into the kernel at Ring 0 when the VMX driver is executed (i.e. when a guest VM starts up).
Workstation 6.5 was used within the experiments with default settings in place.

V. TESTING TOOLS

There are a number of benchmarking tools which can be used for looking at performance metrics within virtual machines, such as VMware's VMmark [?] and the Isolation Benchmark Suite [7]. There are 4 key resources which are shared across all guests within a virtual environment: CPU, Memory, Disk I/O and Network I/O. The VMM or hypervisor is responsible for sharing/scheduling out these resources to each VM in a fair manner. However, if 1 VM is misbehaving, as demonstrated in Section VI, the remaining guests may not receive their fair share of resources via the VMM. As a result of this I have chosen the following testing tools to look at each resource during each experiment and compare findings.

A. RAMspeed

I used RAMspeed [17] for measuring RAM read/write performance within the test VM. The RAMspeed test performs 4 sub-test operations:
• Copy (A=B): transfers data from B to A.
• Scale (A=m*B): modifies B before writing to A.
• Add (A=B+C): reads in B and C, adds these values then writes to A.
• Triad (A=m*B+C): a combination of the Scale and Add operations.
10 rounds are performed for both Integer and Floating point calculations. Each sub-test is averaged within each round; an overall average is then taken across the 10 rounds to give an accurate reading.
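For illustration, the four sub-tests correspond to the following simple memory kernels. This is a rough Python sketch of the operations being timed (the array size and the scale factor m are arbitrary assumptions), not the RAMspeed source; RAMspeed itself reports the MB/s achieved by each kernel.

    m = 3.0
    B = [1.0] * 1_000_000
    C = [2.0] * 1_000_000

    A = B[:]                               # Copy:  A = B
    A = [m * b for b in B]                 # Scale: A = m*B
    A = [b + c for b, c in zip(B, C)]      # Add:   A = B + C
    A = [m * b + c for b, c in zip(B, C)]  # Triad: A = m*B + C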
B. System Stability Tester

I used systester [?] to benchmark and test CPU performance, using 2 different algorithms to calculate Pi to 512K digits:
• the Borwein [15] quadratic convergence algorithm, which runs for five consecutive rounds;
• the Gauss-Legendre [32] algorithm, which is greatly more efficient at calculating Pi than Borwein; I therefore decided to run this algorithm ten times for each test to keep the two comparable (a sketch of the iteration follows below).
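As an illustration of the Gauss-Legendre iteration, the following sketch computes Pi to a modest number of digits; each round roughly doubles the number of correct digits (quadratic convergence). The digit count and guard precision are illustrative assumptions; the benchmark runs the same idea out to 512K digits.

    from decimal import Decimal, getcontext

    def gauss_legendre_pi(digits):
        getcontext().prec = digits + 10       # working precision with a few guard digits
        a, p = Decimal(1), Decimal(1)
        b = Decimal(1) / Decimal(2).sqrt()
        t = Decimal("0.25")
        for _ in range(digits.bit_length()):  # each round roughly doubles the correct digits
            a_next = (a + b) / 2
            b = (a * b).sqrt()
            t -= p * (a - a_next) ** 2
            a, p = a_next, 2 * p
        return (a + b) ** 2 / (4 * t)

    print(gauss_legendre_pi(50))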
C. FIO I/O Tool

This tool was used to benchmark I/O [4] to the disk subsystem within VM4. Ten 32MB files were written directly to the disk within the host using the libaio engine; each file received random writes of 32KB blocks, recording IOPS or the maximum average bandwidth.
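The sketch below reproduces the shape of that workload in plain Python: 32KB blocks written at random block-aligned offsets within a 32MB file, with IOPS derived from the elapsed time. Unlike the fio job it goes through the page cache rather than libaio with direct I/O, so it is only a conceptual illustration; the file name is a placeholder.

    import os, random, time

    BLOCK = 32 * 1024            # 32 KB blocks, as in the fio job described above
    FILE_SIZE = 32 * 1024 ** 2   # one 32 MB file

    def random_write_pass(path):
        fd = os.open(path, os.O_CREAT | os.O_WRONLY)
        os.ftruncate(fd, FILE_SIZE)
        buf = os.urandom(BLOCK)
        offsets = [i * BLOCK for i in range(FILE_SIZE // BLOCK)]
        random.shuffle(offsets)                       # random rather than sequential writes
        start = time.perf_counter()
        for off in offsets:
            os.pwrite(fd, buf, off)
        os.fsync(fd)
        os.close(fd)
        return len(offsets) / (time.perf_counter() - start)

    print("%.0f IOPS" % random_write_pass("fio_sketch.bin"))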
D. Ping tests

This test examines Network I/O. One hundred ICMP [?] packets are sent from a virtual network card inside the guest VM4 to three locations:
1) VM2 within the same host;
2) the host's physical network interface;
3) the physical gateway.
These tests measure how efficient the internal virtual networking was during the experiments carried out in this paper. The average response time over the hundred ICMP packet requests is calculated for further review for each test.
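A small sketch of how such an average can be collected is shown below; it shells out to the Linux ping utility and parses the avg field of its summary line. The destination addresses are placeholders for VM2, the host NIC and the gateway, and a run with heavy packet loss (as in Experiments 4a/4b) would need the non-zero exit status handled.

    import subprocess

    def avg_rtt_ms(host, count=100):
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if "min/avg/max" in line:                  # rtt min/avg/max/mdev = a/b/c/d ms
                return float(line.split("=")[1].split("/")[1])
        raise RuntimeError("no RTT summary found")

    for dest in ["192.168.1.12", "192.168.1.10", "192.168.1.1"]:  # placeholder addresses
        print(dest, avg_rtt_ms(dest), "ms")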
E. Geekbench

Geekbench [5] is a proprietary benchmarking tool for processor and memory performance; tests are scored based on the following factors:
• Integer: Blowfish, Text Compress/Decompress
• Floating Point: Primality test, Dot Product
• Memory: Read/Write Sequential, Stdlib Copy/Write
• Stream: Copy, Scale, Add, Triad - similar to the RAMspeed tests
The scores are combined and an average is taken to represent a single score across all four factors; the higher the value, the better the score.

VI. EXPERIMENTS

There were in total 10 tests performed on all 5 platforms for each experiment, as described within this section. All tests are performed from VM4 running the Ubuntu 10.4 operating system.

A. System Idle / Control

For reference, a control experiment performed the above tests on an idle platform with the 4 VMs running. Only the testing VM was active during this time, performing the tests described in Section V. For all other experiments, 1 or more of the remaining VMs would misbehave depending on the experiment performed.

B. Experiment 1 (Exp1) - Crashme

As discussed in [41], crashme [33] subjects an O.S to a stress test by continually attempting to execute random byte sequences until a failure occurs. Three parallel tests are performed on the misbehaving guest VM1; each test uses one of three random number generators: RAND from the C library, the Mersenne Twister [10], and VNSQ (a variation of the middle square method). These tests are executed as follows: "+1000 666 50 00:30:00"
• +1000: specifies the size of the random data string in bytes; the + sign states that the storage for these bytes is malloc'ed each time.
• 666: input seed into the random number generator.
• 50: how many times to loop before exiting the sub-process normally.
• 00:30:00: all tests run for a maximum of 1800 seconds, or 30 minutes.
During this period VM1 did not crash, therefore a summary of the exit codes for each execution is collected in a log file for further use. Crashme causes the misbehaving VM to run at 100% CPU.
C. Experiment 2 (Exp2) - Fuzz testing

Fuzz [36] is a random input testing tool used on applications; it subjects them to streams of completely random input messages and can be considered a form of application error checking tool. Ormandy also uses this testing approach in [41]; however, unlike crashme, which executes its own processes against the CPU, fuzz sends random messages via the message queue of the target application thread. As a result this causes the target application to misbehave. An article by Symantec [35] investigates how different types of hypervisors can be detected inside a virtual machine; this is due to additional functionality made available via a private guest-to-host channel allowing instructions from the host to pass through to the guest, such as reboot, shutdown and clipboard information. VMware Tools [25] and XenServer Tools [2] are examples of tools which must be installed within the guest for this functionality to exist. However, it is not possible to run the fuzz application against these processes on all platforms, therefore I looked at an existing common application which existed in each VM1. Fuzz was run against calc.exe with the following command:
• fork -ws -a c:/windows/system32/calc.exe -e 78139 - which resulted in a crash of the calc application and 100% CPU usage.

D. Experiment 3a (Exp3a) - Forkbomb on 1 VM

Fork bombs are a well known technique, used by many benchmark tools such as [7], to create a well known denial of service attack resulting in a misbehaving guest. A forkbomb is a parent process which forks into new child processes until all resources are exhausted. All allocated memory is consumed by these child processes within the misbehaving VM. This experiment tests what pressure or additional load is placed on the MMU within the VMM of the system.
The first experiment runs a fork bomb on 1 misbehaving guest, VM1; along with VM4 this causes up to 33% of the host memory to be active in the guests, causing a low to medium load on the MMU.
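A minimal sketch of the fork-and-allocate pattern used in Experiments 3a-3c is shown below, assuming an arbitrary allocation size per loop; it is purely illustrative and should obviously never be run outside a disposable guest, as it exhausts memory and process slots.

    import os

    def forkbomb(chunk_mb=10):
        # WARNING: exhausts memory and PIDs in the guest it runs in.
        allocations = []
        while True:
            os.fork()                                              # parent and child both keep looping
            allocations.append(bytearray(chunk_mb * 1024 * 1024))  # consume guest memory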
E. Experiment 3b (Exp3b) - Forkbomb on 2 VMs

A forkbomb is executed in 2 misbehaving guests, VM1 and VM2. This means that, along with the workload carried out by VM4, the 2 misbehaving guests cause up to 33% of the host memory to be active in the guests, causing a medium load on the MMU.
F. Experiment 3c (Exp3c) - Forkbomb on 3 VMs

The third forkbomb test runs a forkbomb in 3 misbehaving guests, VM1, VM2 and VM3; these, in addition to VM4, cause up to 66% of the host memory to be active in the guests, causing a high load on the MMU.

G. Experiment 4 - DoS attacks

Two zombie machines (VMZ) attack VM2 using the Mausezahn [9] network packet generator. This allows 2 attack scenarios to take place against a host and VM2, which would react as a misbehaving guest from a VMM aspect.
Experiment 4a (Exp4a) - DoS attack on port 80: both VMZ send SYN requests to port 80 of an IIS webserver running within VM2; each SYN packet comes from a random source address, meaning the webserver gets overloaded with SYN requests. VM2's CPU usage jumps to, and holds at, 85%.
Experiment 4b (Exp4b) - DoS attack against all ports, using SYN requests with 1k byte padding: both VMZ send SYN requests to VM2, completely saturating the physical and virtual networks with received traffic of 12000KB/sec from both VMZ.
VII. RESULTS BY TESTING TOOL

This section discusses the results of each experiment performed using each testing tool; note that all platforms are summed and averaged into a single value for each experiment.

A. Ramspeed

Fig. 1. Ramspeed with Integers
Fig. 2. Ramspeed with Floating Point Numbers

Figs. 1 and 2 show how the Ramspeed tests performed across all platforms for each experiment. There is a clear deterioration in access to memory via the MMU during experiments Exp3b and Exp3c, which involve forkbombs that attack 33% to 50% of the physical memory within the system. Exp4a and Exp4b, which involve attacks on the network and subsequently the I/O subsystem within each platform, also see a large performance drop in RAM access.

B. System Test Suite

Fig. 3. Calculate Pi using Gauss-Legendre
Fig. 4. Calculate Pi using Borwein

The System Test suite is a CPU benchmark that calculates Pi up to 512k digits in length, for 10 rounds using [32] in Fig. 3 and for 5 rounds using [15] in Fig. 4. Observations:
• Both tests clearly illustrate the same loss in CPU performance during Exp4a; however, there is an improvement in Exp4b over all platforms tested.
• Gauss [32] shows high CPU isolation over Exp1-3b.
• Borwein [15] shows a decrease in CPU performance during Exp1, which runs crashme operations against the CPU.
• Both tests show a decrease in performance during the control test, when all VMs are idle apart from VM4.
C. Geekbench

Fig. 5. Geekbench CPU and Memory tests

Geekbench is a proprietary CPU and memory benchmark suite which runs many CPU and memory tests from VM4 on the host. Fig. 5 illustrates the scores from all platforms over each experiment. Observations:
• The control experiment, which has no misbehaving guests, returns the highest score; however, there is a gradual decline from Exp1 to Exp3b and Exp4b, during which numerous CPU and memory tests take place.
• Exp3c (50% of the host's physical memory is tested) and Exp4a (VM2 suffers from high CPU and network I/O) return the lowest scores across all platforms.
Geekbench reinforces the two previous tests in A and B, which show a clear degradation of CPU and memory performance in Exp3c and Exp4a.
D. FIO - I/O

Fig. 6. Random Writes to disk

FIO [4] is an open source I/O benchmarking tool which can perform a variety of operations on a disk subsystem. Observations of Fig. 6:
• There is a clear degradation in performance during Exp3c across all hypervisors.

E. Ping Tests

Fig. 7. Pings to Gateway from VM4
Fig. 8. Pings to Host from VM4
Fig. 9. Pings to VM2 from VM4

Ping tests are performed from VM4 to the destinations shown in Figs. 7, 8 and 9. Ping observations:
• Ping Gateway (Fig. 7): Exp1 shows a 30% increase in ping responses across all platforms; however, all other experiments up to 3c remain close to 1 second. Exp4a and 4b return ping response times greater than 10 seconds for replies, with packet loss greater than 50%. There are several factors for this: as the host network card is being bombarded with network requests, it may not be able to send or receive ICMP packets to the gateway on the physical network.
• Ping Host (Fig. 8): all experiments returned ping responses between .3 and .4 seconds; however, Exp2 and Exp3b exhibited the lowest response times.
• Ping VM (Fig. 9): the control experiment across all platforms shows a high ping response time; this could be due to the low level of context switches [30] caused by the majority of VMs running idle.
• Exp3b shows an increase in response times due to a forkbomb being launched on VM2.
• Exp4a and Exp4b were not included due to the failure of some platforms to register any response times from the VM.
VIII. RESULTS BY HYPERVISOR

Each hypervisor is compared against the average score across all platforms for each test, as shown in Figs. 1-9.
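The comparison that follows is simply each platform's score expressed as a percentage deviation from the mean of the five platforms for the same test, as in the following sketch (the scores shown are made-up illustrative numbers, not measured results).

    def percent_vs_average(scores):
        avg = sum(scores.values()) / len(scores)
        return {name: 100.0 * (value - avg) / avg for name, value in scores.items()}

    # Hypothetical scores for one test across the five platforms.
    print(percent_vs_average({"VirtualBox": 1450, "Workstation": 1500,
                              "XenServer": 1530, "Hyper-V": 1480, "ESXi": 1560}))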
A. Virtualbox

1) Memory Fig. 1,2,5:
i Geekbench: the Geekbench tests show a 2.4% below average score across all experiments; it is interesting to point out that Exp1 scores very poorly.
ii Ramspeed: follows the average trend across all experiments, however the initial control experiment returns poor results for both Integer and floating point values.
2) CPU Fig. 3,4: consistently performs 5.5% below the average trend across all experiments.
3) Disk Fig. 6: performs 22% below the average trend across all experiments, however there are spikes in performance during Exp3b and Exp4a.
4) Network: ping response times are consistently slower than average for all experiments apart from Exp4a and 4b.
As noted under the Geekbench test, Exp1 causes a high performance penalty, which would suggest that crashme is treated as emulated guest code and must be translated into usable ring 3 code. Also, VirtualBox operates at 5.5% below the CPU average across all experiments, which would suggest that a greater amount of Ring 3 code must be translated before it can be processed by the hypervisor. Lower disk I/O is common across all non-paravirtualized platforms.

B. Workstation

1) Memory Fig. 1,2,5:
i Geekbench: tests show a 1.1% below average score, however there are big drop-offs in Exp3c, Exp4a and Exp4b.
ii Ramspeed: follows the average trend across all platforms for all experiments, however Integer values recorded a 3.3% increase over average, while Floating Point values recorded a 6% increase over the average trend.
2) CPU Fig. 3,4: consistently performs 1.2% below average in all experiments in calculating Pi to 512K digits [32], [15].
3) Disk Fig. 6: consistently records 19% below average scores for all experiments, keeping in line with the average trend.
4) Network Fig. 7-9: ping response times are faster than average for the Host and VM tests.
Workstation does not seem to suffer from the same resource isolation issues as VirtualBox: although its I/O performance is 18% below average, its memory performance is the highest across all platforms when running the 8 experiments presented in this paper. CPU is also slightly below the average; however, as this system is hosted, the hypervisor must also contend with the host operating system for access to the CPU.
C. XenServer

1) Memory Fig. 1,2,5:
i Geekbench: follows the average trend, however Exp3c is below average; all other experiments are slightly above average.
ii Ramspeed: consistent with the average for all experiments apart from Exp3b to Exp4b, where MB/s trends below average; memory is 4.5% slower than the average scores.
2) CPU Fig. 3,4: both algorithms score 3% faster than the average times, as shown in the graphs.
3) Disk Fig. 6: consistently performs much higher than average, as shown in Fig. 6 - up to 41% greater performance.
4) Network Fig. 7-9: Host and Gateway ping response times are better than average apart from Exp4a and 4b; VM ping responses are about average, as shown in the graphs.
Although ESXi consistently outperforms all hypervisors tested, XenServer illustrates excellent disk I/O and good network I/O performance, which can be traced back to its I/O ring architecture [42]: its round-robin based algorithm means all guest domains get equal access to I/O. Also, XenServer's CPU scheduling [34] can be seen to offer good resource isolation, as both Memory and CPU closely track the average trends.
D. Hyper-V

1) Memory Fig. 1,2,5:
i Geekbench: overall Hyper-V follows the average scores for Geekbench, however Exp3b, 3c, 4a and 4b report higher scores than average; all other experiments report lower than average scores.
ii Ramspeed: overall memory access is 3.4% slower than the average trend. All experiments up to Exp3b show below average scores.
2) CPU Fig. 3,4: of the Pi algorithms, Gauss [32] is identical to the average across all platforms; however, Borwein [15] is 2.5% slower than the average.
3) Disk Fig. 6: disk write access is 18% faster than the average, however Exp3b and Exp3c show a marked loss in bandwidth to the disk subsystem.
4) Network Fig. 7-9: ping responses to both the Host and VM2 are slower than average across all experiments.
Hyper-V and XenServer offer greater I/O performance for all experiments presented in this paper. This performance gain may be explained through the Host Integration Services [44]. Although no attacks were made directly against the I/O subsystem, forkbombs running within Exp3a, 3b and 3c would cause page file access [16] in order to reduce the level of RAM usage within one of the two Microsoft child partitions. This would cause increased demand on the disk subsystem; as a result, disk performance is reduced on both XenServer and Hyper-V during these experiments. Hyper-V performs below average for memory and CPU isolation.
E. ESXi

1) Memory Fig. 1,2,5:
i Geekbench: looking at the Geekbench tests across all platforms and experiments, ESXi performs 2.2% above average.
ii Ramspeed: ESXi scored above average for experiments 1, 2 and 4, however it falls slightly below average for the 3 forkbomb experiments; overall ESXi performs 2.5% better than average.
2) CPU Fig. 3,4: consistently outperforms all other platforms in all experiments for both Pi algorithms [32], [15], resulting in a 5% above average performance.
3) Disk Fig. 6: consistently recorded slightly below average scores for all experiments; however, in experiment 3c there is a very big hit on disk write access, and this trend results in an 18% below average performance for disk write access to the disk subsystem.
4) Network Fig. 7-9: ping response times are consistently better than average for all experiments apart from Exp4a and 4b.
Clearly VMWare's work on [20] [11] has improved resource isolation within the hypervisor, as CPU and network consistently outperform the other platforms; however, more work regarding isolation is required around the MMU [18] to improve memory access during high load times. Also, more work is required to improve I/O access to the disk subsystem [14].

IX. RELATED WORK

There has been quite a lot of related work in this area, due to isolation of virtual machines becoming a hot topic as part of the current move towards IaaS [31] within the Cloud. Similar stress tests were performed in [40] and [39], which look at misbehaving virtual machines in XenServer, VMWare, OpenVZ [12] and Solaris [19] Containers. Similar benchmarking work was also performed in [37], where a number of stress tests are performed and analyzed for further study. Another interesting piece of research is [38], which involved stress testing of applications and analysis of the performance of virtual environments. Finally, at a data centre level, Intel [46] have been researching performance benchmarking of virtual machines within a multiple host environment.

X. CONCLUSION

Based on the testing tools and experiments covered in this paper, it is clear to see that paravirtualization [21] using the Hyper-V [44] and XenServer [42] architectures offers higher I/O disk subsystem throughput, using I/O rings [42] within XenServer and Integration Services [8] within Hyper-V, which results in higher I/O resource isolation [45] based on the experiments undertaken. However, VMWare's ESXi hypervisor [25] offers higher resource isolation of CPU resources using [20] and better network isolation based on its Network I/O Control technology [11]. VMWare have also released Storage I/O Control [14], which was tested as part of this paper but proved inefficient based on the experiments undertaken. A new player to the market is KVM [29], which is RedHat's hypervisor replacement for Xen [13] and offers comparable performance, as shown in [43]. Although both VirtualBox and Workstation are considered Type 2 hypervisors [18], both install Ring 0 drivers into the host O.S [22] [27]; as a result both allow guest context Ring 3 code to run at host context Ring 3 with very low levels of emulation or code translation to native Ring 3. Both hypervisors performed
well during the resource isolation experiments, as both were close to the average trends across all experiments.

REFERENCES

[1] Amazon Elastic Compute Cloud (Amazon EC2). http://aws.amazon.com/ec2/.
[2] Citrix XenServer. http://www.citrix.com/English/ps2/products/product.asp?contentID=683148.
[3] Comparing VMware ESXi/ESX and Windows Server 2008 with Hyper-V. http://www.citrix.com/site/resources/dynamic/salesdocs/Citrix_XenServer_Vs_VMware.pdf.
[4] fio I/O benchmark tool.
[5] Geekbench Benchmark Suite. http://www.primatelabs.ca/geekbench/.
[6] Intel Virtualization Technology Specification for the IA-32 Intel Architecture. http://www.intel.com/technology/itj/2006/v10i3/1-hardware/6-vt-x-vt-i-solutions.htm.
[7] Isolation Benchmark Suite. http://web2.clarkson.edu/class/cs644/isolation/.
[8] Linux Integration Services RC Released. http://blogs.technet.com/b/iftekhar/archive/2009/07/22/microsoft-hyper-v-r2-linux-integration-services-rc-released.aspx.
[9] Mausezahn. http://packages.ubuntu.com/karmic/mz.
[10] Mersenne Random Number Generator. http://www-personal.umich.edu/~wagnerr/MersenneTwister.html.
[11] Network I/O Control: Architecture, Performance and Best Practices. http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf.
[12] OpenVZ - container-based virtualization for Linux. http://wiki.openvz.org/Main_Page.
[13] OSS - Xen. http://www.xen.org.
[14] Performance Implications of Storage I/O Control in vSphere Environments with Shared Storage. http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_SIOC.pdf.
[15] Quadratic Convergence of Borwein. http://www.pi314.net/eng/borwein.php.
[16] RAM, virtual memory, pagefile and all that stuff. http://support.microsoft.com/kb/2267427.
[17] RAMspeed, a cache and memory benchmarking tool.
[18] H. Douglas and C. Gehrmann. Secure virtualization and multicore platforms state-of-the-art report.
[19] Solaris Containers. http://www.sun.com/software/solaris/ds/containers.jsp.
[20] The CPU Scheduler within VMware ESX 4. Whitepaper.
[21] Understanding full virtualization, paravirtualization and hardware assist. http://www.vmware.com/files/pdf/VMware_paravirtualization.pdf.
[22] VirtualBox Architecture. http://www.virtualbox.org/wiki/VirtualBox_architecture.
[23] VirtualBox Native Virtualization. http://www.virtualbox.org/wiki/Virtualization.
[24] Virtualbox Virtual Machines. http://www.xen.org/files/Marketing/HypervisorTypeComparison.pdf.
[25] VMware vSphere Hypervisor (ESXi). http://www.vmware.com/products/vsphere-hypervisor/index.html.
[26] Windows 2008 R2. http://www.microsoft.com/windowsserver2008/en/us/default.aspx.
[27] Workstation Processes. http://www.extremetech.com/article2/0,2845,1156611,00.asp.
[28] AMD64 Virtualization Codenamed "Pacifica" Technology: Secure Virtual Machine Architecture Reference Manual, May 2005.
[29] A. Kivity, Y. Kamay, D. Laor, U. Lublin, and A. Liguori. kvm: the Linux virtual machine monitor. 2007.
[30] K. Adams and O. Agesen. A Comparison of Software and Hardware Techniques for x86 Virtualization. 2006. http://www.vmware.com/pdf/asplos235_adams.pdf.
[31] M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. Above the Clouds: A Berkeley View of Cloud Computing, Feb 2009. http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf.
[32] L. Berggren, J. M. Borwein, and P. B. Borwein. Gauss-Legendre algorithm.
[33] G. Carrette. Crashme: Random input testing. http://people.delphiforums.com/gjc/crashme.html.
[34] K. J. Duda and D. R. Cheriton. Borrowed-Virtual-Time (BVT) scheduling: supporting latency-sensitive threads in a general-purpose scheduler. Pages 261-276, December 1999.
[35] P. Ferrie. Attacks on Virtual Machine Emulators. www.symantec.com/avcenter/reference/Virtual_Machine_Threats.pdf.
[36] J. E. Forrester and B. P. Miller. An Empirical Study of the Robustness of Windows NT Applications Using Random Testing. Seattle, 2000.
[37] J. Griffin and P. Doyle. Desktop virtualization scaling experiments with VirtualBox. 9th IT&T Conference, 2009. http://arrow.dit.ie/ittpapnin/4.
[38] Y. Koh, R. Knauerhase, P. Brett, Z. Wen, and C. Pu. An analysis of performance interference effects in virtual environments, 2007.
[39] J. Matthews, T. Deshane, W. Hu, J. Owens, M. McCabe, D. Dimatos, and M. Hapuarachchi. Quantifying the performance isolation properties of virtualization systems.
[40] J. Matthews, T. Deshane, W. Hu, J. Owens, M. McCabe, D. Dimatos, and M. Hapuarachchi. Performance isolation of a misbehaving virtual machine with Xen, VMware and Solaris Containers. 2007.
[41] T. Ormandy. An empirical study into the security exposure to hosts of hostile virtualized environments.
[42] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield. Xen and the Art of Virtualization. 2003.
[43] T. Deshane, Z. Shepherd, J. Matthews, M. Ben-Yehuda, A. Shah, and B. Rao. Quantitative Comparison of Xen and KVM. June 2008.
[44] M. Russinovich. Inside Windows Server 2008 Kernel Changes. http://technet.microsoft.com/en-us/magazine/2008.03.kernel.aspx.
[45] S. Soltesz, M. Fiuczynski, L. Peterson, M. McCabe, and J. Matthews. Virtual Doppelgänger: On the Performance, Isolation and Scalability of Para- and Paene-Virtualized Systems. http://www.cs.princeton.edu/~mef/research/paenevirtualization.pdf.
[46] O. Tickoo, R. Iyer, R. Illikal, and D. Newell. Modeling virtual machine performance: Challenges and approaches. http://www.sigmetrics.org/sigmetrics/workshops/papers_hotmetrics/session3_1.pdf.
[47] Workstation 6 User Manual. http://www.vmware.com/pdf/ws6_manual.pdf.
