Abstract—Virtualization is the abstraction of computer resources: the ability to run multiple operating systems simultaneously on a single physical machine. In this paper we evaluate the performance of an operating system level virtualization solution, OpenVZ. The evaluation is based on Web hosting: we measure the performance of Web servers on two different machines (dubbed Baru and Lama) and observe the behavior of OpenVZ and its scalability for Web servers on different architectures (32-bit and 64-bit). The study evaluates CPU usage, memory usage, and throughput under different load conditions.

Index Terms—OpenVZ, performance, server consolidation, virtualization.

I. INTRODUCTION

"Server consolidation is an approach to the efficient usage of computer server resources in order to reduce the total number of servers or server locations that an organization requires" [1]. Simply put, server consolidation is the combination of several servers (e.g. Web server, mail server, Java server) into one physical machine in order to save cost and make better use of computer resources.

One way to achieve server consolidation is to use virtualization. The word "virtualization" does not appear in most English dictionaries. The closest entry in the Concise Oxford English Dictionary is "virtual", which means "Computing: not physically existing as such but made by software to appear to do so" [2]. "Virtual" is, however, not a new term in computer jargon; it has long been used in expressions such as virtual memory and virtual reality.

Virtualization in our context is defined, following [3], as "the abstraction of computer resources", where "abstraction is the process of separating hardware functionality from the underlying hardware." In other words, virtualization is the sharing and partitioning of the resources of computer hardware amongst more than one operating system running simultaneously on a single physical machine. Each of the running operating systems is called a Virtual Machine (VM). Virtualization thus offers the ability to run multiple operating systems (Linux, Windows, Mac OS X) on one physical machine at the same time.

Currently, virtualization is grabbing the attention of data centers as well as small and medium enterprises due to its enormous advantages. This has led the major processor manufacturers, Intel and AMD, to develop processor support for virtualization, namely Intel VT and AMD-V respectively.

Indeed, virtualization brings huge economic value to organizations. Mainly, it reduces the Total Cost of Ownership (TCO) by consolidating a group of physical servers into one physical server. For example, 10 Web servers can share one physical machine instead of using 10 physical machines. Besides the cost of the machines themselves, this cuts costs such as electricity bills, floor space, and air conditioning. In addition, organizations can couple virtualization with open source software solutions (e.g. OpenVZ, Xen) to reduce TCO further and avoid licensing issues. This is handy when multiple virtual machines or virtual machine backups are kept online for redundancy, restoration, or disaster recovery purposes [4].

Virtualization can be implemented with several techniques, namely hardware emulation, full virtualization, paravirtualization, operating system level virtualization, application virtualization, and desktop virtualization. A discussion of the differences amongst them is beyond the scope of this paper, but a brief overview of operating system level virtualization is provided in the OpenVZ section.

Besides cost savings, virtualization in general has great advantages for organizations because the virtualization layer isolates and separates the VMs. No VM is aware of the existence of the other VMs on the same machine; to each other they appear to be running on different physical machines. This isolation provides higher availability and security. If one of the VMs crashes or goes down, it has no effect on the other VMs. Moreover, a system administrator can fire up a VM in less than a minute. As for security, running different services on different VMs means that if an intruder manages to hack one server, only that particular server goes down while the others keep running as if nothing happened.

Live migration of VMs is yet another great advantage of virtualization. If one of the VMs is exhausting the hardware resources, the administrator can migrate that server to another physical machine in less than one minute. Likewise, if the physical machine must go down for upgrade or maintenance, its VMs can be migrated to another machine. Migration therefore boosts the availability of the running servers.

Software developers and testers use virtualization to run several platforms on which to develop and test their products (e.g. Linux, Windows, Mac OS X). It reduces the time spent installing the platforms as well as increasing the number of platforms
[1] The corresponding author.
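The introduction notes that an administrator can fire up a VM in less than a minute and live-migrate a running VM between physical hosts. With OpenVZ these operations map onto its standard command-line tools; the sketch below uses real vzctl/vzmigrate subcommands, but the container ID, template name, and destination host are illustrative values, not taken from the paper.

```shell
# Create a container (VE) from a CentOS 5 OS template, start it, and
# launch a web server inside it. CTID 101, the template name, and the
# destination host below are illustrative.
vzctl create 101 --ostemplate centos-5-x86
vzctl start 101
vzctl exec 101 service httpd start

# Live-migrate the running container to another physical machine.
vzmigrate --online baru.example.com 101
```

These commands require a host running an OpenVZ-patched kernel and root privileges.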
The following configuration of autobench was used to measure performance. When we benchmarked the whole machine, the total number of connections sent to the machine was set to 20,000 and the number of requests per connection to 10, with a client timeout of 7 seconds. When we benchmarked the containers, 5,000 connections were sent to each container in each test; the remaining parameters were the same as above.

OpenVZ kernel version 2.6.18 and CentOS 5 templates from the OpenVZ website were used, with Apache 2.2.3 as the Web server. The HTTP queries were generated using httperf 0.9.0 and autobench 2.1.2. We conducted various tests by scaling the number of containers from 4 to 8 and finally to 12.

The Apache configuration was left at its defaults except for "KeepAlive", which we changed from "Off" to "On" in order to allow multiple requests in one connection.

V. DISCUSSION

A. OpenVZ

There are some general observations about OpenVZ that we would like to discuss in this section.

It is easy to fire up a new container using OpenVZ; it takes about one minute. Moreover, a container occupies only a small amount of memory, although this depends on the size of the distribution as well as the running services. In our case, we fired up only the Web server and the default services of the CentOS 5 template from the OpenVZ website. We found that each VE takes around 25 MB of RAM and about 400 MB of hard disk. Accordingly, 20 containers can run Web servers using about 450 MB of RAM and about 8 GB of hard disk.

We ran 55 containers on Baru and 23 on Lama at the same time, which almost filled the memory of each machine. Moreover, we ran the containers with no application overhead inside them.
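Scaling from 4 to 8 to 12 identical containers, as in the tests above, reduces to a short loop over vzctl. The subcommands below are standard vzctl usage; the CTID range, IP addresses, and template name are illustrative, not from the paper.

```shell
# Fire up 12 identical web-server containers from one OS template.
# CTIDs 101-112, the 192.168.0.x addresses, and the template name
# are illustrative values.
for i in $(seq 101 112); do
    vzctl create "$i" --ostemplate centos-5-x86
    vzctl set "$i" --ipadd "192.168.0.$i" --save
    vzctl start "$i"
    vzctl exec "$i" service httpd start
done
vzlist    # confirm the containers are running
```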
Fig. 6. The Test Bed architecture

The configuration files of all the containers were the same; a sample is shown in Fig 7.

ONBOOT="yes"
# UBC parameters (in form of barrier:limit)
KMEMSIZE="11055923:11377049"
LOCKEDPAGES="256:256"
PRIVVMPAGES="65536:69632"
SHMPAGES="21504:21504"
NUMPROC="240:240"
PHYSPAGES="0:2147483647"
VMGUARPAGES="33792:2147483647"
OOMGUARPAGES="26112:2147483647"
NUMTCPSOCK="360:360"
NUMFLOCK="188:206"
NUMPTY="16:16"
NUMSIGINFO="256:256"
TCPSNDBUF="1720320:2703360"
TCPRCVBUF="1720320:2703360"
OTHERSOCKBUF="1126080:2097152"
DGRAMRCVBUF="262144:262144"
NUMOTHERSOCK="360:360"
DCACHESIZE="3409920:3624960"
NUMFILE="9312:9312"
AVNUMPROC="180:180"
NUMIPTENT="128:128"
# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="1048576:1153024"
DISKINODES="200000:220000"
QUOTATIME="0"
# CPU fair scheduler parameter
CPUUNITS="1000"

Fig. 7. Configuration of a typical container

B. The host machine (Baru)

After running autobench (from 1,000 to 7,000 requests per second), the machine managed to handle about 6,000 requests per second with no errors. This level of concurrent requests saturated the machine's CPUs; the network throughput, however, was not saturated, owing to the size of the files (~80 kilobytes).

Figure 8 shows the results of autobench, which uses httperf to benchmark the machines. In Fig 8 the X-axis represents the number of requests per second that httperf sends in each test, and the Y-axis represents different values according to the figure's key, as follows. "Sent requests per second" is the number of requests sent to the machine each second. "Average reply rate per second" is the average number of replies from the server per second. "Response time" is the reply time of a request in milliseconds. "Throughput" is the bandwidth used during each test in kilobits per second. Finally, we observed the percentage of errors in each test as packet loss.

As Fig 8 shows, the first test sent 1,000 requests per second and the replies were all intact; the next test, at 1,500 requests per second, was also error-free. Autobench conducted a series of tests starting at 1,000 requests per second and ending at 7,000, increasing the rate by 500 in each test. Up to 6,000 requests per second everything was fine and no errors occurred. Beyond 6,000 requests per second there is a rather steep drop in the number of requests sent to the server, and the other metrics drop with it. The drop occurred because the CPUs were saturated at 100% utilization.

Fig 9 shows CPU usage over time. The X-axis represents the timeline of the test; we captured a CPU reading each second for the whole duration of the tests. The Y-axis shows the percentage of idle CPU. Why idle? Because the idle percentage shows the free portion of the CPU, so we can see whether the whole system, including the Web server, is utilizing the CPU or not.
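The per-second idle readings described above can be captured with a one-line sampler. The paper does not name its capture tool, so this is only one plausible way to do it (GNU awk is assumed for strftime; the "id" column is field 15 in classic vmstat output).

```shell
# Sample the idle-CPU percentage (vmstat's "id" column) once per second,
# tagged with a Unix timestamp, for the duration of a benchmark run.
vmstat 1 | awk 'NR > 2 { print strftime("%s"), $15; fflush() }' > cpu_idle.log
```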
Fig 9 shows the CPU idle percentage. We observe that at the beginning of the test the CPU usage was about 10% (i.e. about 90% idle). The idle percentage fluctuates from second to second and keeps falling until it reaches 0%, which means the CPU is fully used. In this test, that point was reached at approximately 210 seconds, and error occurrences increased from that point onward (i.e. 6,500 requests per second and beyond).

Fig 10 shows the RAM usage over the whole duration of this test.

The results indicate that the Baru machine can handle about 6,000 requests per second, at which point the CPU is saturated. In practice, however, benchmarking is usually carried out only up to 80% CPU usage.

In other words, all the containers receive the same load at the same time. Fig 11 shows the autobench results for one of the containers (the other containers produced almost identical graphs, so we present only one here). As the graph shows, autobench starts at 200 requests per second and ends at 2,000. Errors started to occur at about 1,600 requests per second, which means the machine as a whole was receiving about 6,400 requests per second. Note that this is about the same as the performance of the Baru machine itself in the previous section. Interestingly, the CPU usage (Fig 12) was almost the same as before, saturating at the point where errors begin to occur. The RAM usage (Fig 13) was likewise the same as before.
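For reference, a per-container run with the parameters given in the setup section (5,000 connections per test, 10 requests per connection, 7 s client timeout, rates from 200 to 2,000 requests per second) would look roughly like this. The option names are autobench's own; the host name, URI, and rate step are illustrative, as the paper does not state them.

```shell
# Benchmark a single container. Rate bounds and connection parameters
# are from the paper; host, URI, and rate step are illustrative.
autobench --single_host --host1 ve101.example.com --uri1 /index.html \
          --low_rate 200 --high_rate 2000 --rate_step 100 \
          --num_conn 5000 --num_call 10 --timeout 7 \
          --file container101.tsv
```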
TABLE 1
Performance of Baru and its containers

Machine         Number of servers   Requests per second   Total requests per second
Baru            1                   ~6000                 ~6000
4 containers    4                   ~1600                 ~6400
8 containers    8                   ~750                  ~6000
12 containers   12                  ~500                  ~6000

TABLE 2
Performance of Lama and its containers

Machine         Number of servers   Requests per second   Total requests per second
Lama            1                   ~3300                 ~3300
4 containers    4                   ~700                  ~3200
8 containers    8                   ~400                  ~3200

As for Lama, we mentioned before that it handled about 3,300 requests per second. When we ran 4 containers on it, we found that each container handled about 700 requests per second, i.e. roughly 2,800 requests per second in aggregate, which is broadly in line with the rate handled by Lama alone. Note that we did not manage to run 12 containers on Lama due to RAM limitations.
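As a sanity check on Tables 1 and 2, multiplying each per-container rate by the number of containers recovers, to within roughly 15-20%, the rate the bare machine sustains:

```python
# Aggregate request rates implied by Tables 1 and 2:
# total = per-container rate x number of containers.
baru = {1: 6000, 4: 1600, 8: 750, 12: 500}  # containers -> req/s per server
lama = {1: 3300, 4: 700, 8: 400}

for name, table, baseline in (("Baru", baru, 6000), ("Lama", lama, 3300)):
    for n, rate in table.items():
        total = n * rate
        print(f"{name}: {n:2d} x {rate:4d} = {total} req/s "
              f"({100 * total / baseline:.0f}% of bare-machine rate)")
        # Every configuration stays within ~20% of the baseline.
        assert abs(total - baseline) <= 0.20 * baseline
```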
Fig. 11. Performance of one of the containers

VI. CONCLUSION

We found that OpenVZ imposes almost no overhead on the computer's resources and shares them fairly amongst the running containers. Moreover, we found that the two machines (Baru and Lama) can handle about 6,000 and 3,300 requests per second respectively, and so do the OpenVZ containers when all of them run concurrently, on both Baru and Lama. This means that OpenVZ's performance does not change with the architecture (32-bit or 64-bit). We can safely say that if we want to run 15 Web servers using OpenVZ containers, each VE can handle 400 requests per second at the same time (on Baru). WebVZ is a very useful tool for administering OpenVZ containers both locally and remotely.
Fig. 12. CPU usage of 4 containers

REFERENCES

[1] Information Technology @ Johns Hopkins, Glossary of Technical Terms and Acronyms. Available: http://it.jhu.edu/glossary/pqrs.html
[2] Concise Oxford English Dictionary, Eleventh Edition. Computer software.
[3] C. Wolf and E. M. Halter, Virtualization: From the Desktop to the Enterprise. Apress, 2005, p. 1.
[4] G. Shields, The Shortcut Guide to Selecting the Right Virtualization Solution, electronic book. Available: http://nexus.realtimepublishers.com/SGSRVS.htm
[5] OS Virtualization. Available: http://www.parallels.com/en/products/virtuozzo/os/
[6] OpenVZ Wiki. Available: http://wiki.openvz.org/Category:Definitions
[7] P. Padala, X. Zhu, Z. Wang, S. Singhal, and K. Shin, "Performance Evaluation of Virtualization Technologies for Server Consolidation," HP Laboratories Palo Alto, Apr. 11, 2007. Available: http://www.hpl.hp.com/techreports/2007/HPL-2007-59.pdf
[8] WebVZ. Available: http://webvz.sourceforge.net
[9] The Linux HTTP Benchmarking HOWTO. Available: http://www.xenoclast.org/doc/benchmark/HTTP-benchmarking-HOWTO/