
Understanding Performance Interference of I/O Workload in Virtualized Cloud Environments

Summarized by: Michael Riera
9/17/2011
University of Central Florida CDA5532

Agenda

Purpose
Xen I/O Architecture
Experiment Setup
Experiment Results
Base Case Measurement
Interference Analysis
Throughput Interference

Conclusion

Purpose
The purpose of this paper is to quantify the interference between multiple VMs on the throughput of an I/O-intensive workload, measured across a series of SPEC benchmark file sizes: 1 KB, 4 KB, 10 KB, 30 KB, 50 KB, 70 KB, and 100 KB.
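As a rough illustration of this style of measurement (not the authors' actual harness), below is a minimal Python sketch that fetches files of each benchmark size from a web server and reports per-size throughput. The server hostname and the size-based file naming scheme are assumptions for illustration only.

```python
# Minimal throughput probe. The server address and file naming
# (e.g. /file_1KB) are hypothetical placeholders, not from the paper.
import time
import urllib.request

SERVER = "http://vm1.example.com"   # placeholder for the VM under test
SIZES = ["1KB", "4KB", "10KB", "30KB", "50KB", "70KB", "100KB"]
REQUESTS = 1000                     # requests issued per file size

for size in SIZES:
    url = f"{SERVER}/file_{size}"
    total_bytes = 0
    start = time.monotonic()
    for _ in range(REQUESTS):
        with urllib.request.urlopen(url) as resp:
            total_bytes += len(resp.read())
    elapsed = time.monotonic() - start
    print(f"{size}: {REQUESTS / elapsed:.1f} req/s, "
          f"{total_bytes / elapsed / 1e6:.2f} MB/s")
```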

Xen Architecture / Description

Xen is a popular open-source x86 virtual machine monitor (VMM) based on virtualization technologies
Supports para-virtualization
Guests issue hypercalls directly to the VMM
Lower overhead
PCI passthrough

Xen Architecture / Description

Example: the network receive path

NIC: receives the packet from the wire
Event Channel (Xen VMM): decides whether the driver has access to the hardware
Physical Interface: the interface to the network card
Bridge: removes the packet from the NIC, demultiplexes it, and delivers it to the appropriate VM
Backend (netback): raises a hypercall to the VMM requesting space on the guest
I/O Channel: netback and netfront exchange the page descriptor via a page-remapping mechanism over the I/O descriptor ring
Frontend (netfront): receives the packet as if it came from a NIC
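To make the split-driver flow above concrete, here is a toy Python model of the receive path: a dom0 bridge demultiplexes packets to per-VM I/O channels, and each channel is a bounded descriptor ring between netback and netfront. This is a conceptual sketch only, not actual Xen code; all names are illustrative.

```python
# Toy model of Xen's split-driver receive path (conceptual, not Xen code).
from collections import deque

RING_SIZE = 8                  # bounded I/O descriptor ring per VM

class IOChannel:
    """Descriptor ring between netback (driver domain) and netfront (guest)."""
    def __init__(self):
        self.ring = deque()
    def push(self, page):      # netback: remap a page into the guest
        if len(self.ring) >= RING_SIZE:
            return False       # ring full -> backpressure / drop
        self.ring.append(page)
        return True
    def pop(self):             # netfront: consume as if from a NIC
        return self.ring.popleft() if self.ring else None

def bridge_demux(packet, channels):
    """Bridge in dom0: deliver the packet to the destination VM's backend."""
    dest = packet["dest_vm"]
    ok = channels[dest].push(packet["page"])
    print(f"packet for {dest}: {'delivered' if ok else 'dropped (ring full)'}")

channels = {"VM1": IOChannel(), "VM2": IOChannel()}
bridge_demux({"dest_vm": "VM1", "page": "pg_0"}, channels)
bridge_demux({"dest_vm": "VM2", "page": "pg_1"}, channels)
print("VM1 frontend received:", channels["VM1"].pop())
```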

Experiment Setup
IBM ThinkCentre A52 workstation
3.2 GHz Intel Pentium 4
16 KB L1 cache
2 MB L2 cache
2 GB 400 MHz DDR memory
Seagate 250 GB disk
Intel PRO/100 NIC (e100)

Client machines
Connected via 1 Gb/s links

Experiment Setup
I/O-intensive workload
Running two isolated guests (VM1, VM2)
The VMM shares the physical hardware between them
Each VM has equal resources
CPU scheduling: SMP Credit scheduler
VM1 runs an Apache HTTP server
VM2 provides the web services
Data is served from the cache buffer (no disk reads are involved); a load-generation sketch follows
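A hedged sketch of how such a competing load can be generated: two threads drive VM1 and VM2 concurrently so that their network I/O contends in the shared driver domain. Hostnames, file paths, and the duration are placeholders, not values from the paper.

```python
# Sketch of the interference run: load both VMs at once so their I/O
# paths contend in dom0. All URLs and timings are placeholders.
import threading
import time
import urllib.request

def load(url, results, key, duration=10.0):
    count = 0
    end = time.monotonic() + duration
    while time.monotonic() < end:
        with urllib.request.urlopen(url) as resp:
            resp.read()
        count += 1
    results[key] = count / duration   # requests per second for this VM

results = {}
t1 = threading.Thread(target=load,
                      args=("http://vm1.example.com/file_1KB", results, "VM1"))
t2 = threading.Thread(target=load,
                      args=("http://vm2.example.com/file_100KB", results, "VM2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # per-VM throughput under co-located load
```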

Experiment Results
CPU utilization

Measured the average CPU utilization of each VM, including the CPU usage of Dom0, VM1, and VM2 (a collection sketch follows this list)

VMM events per second

The VMM adopts an asynchronous hypercall mechanism to notify the VMs of system events

VM switches

The number of times the VMM needs to context-switch between VMs

I/O count

Pages exchanged: I/O counts per execution period and page exchanges per execution period

VM state

Execution state (using the CPU), runnable state (waiting for the CPU), blocked state (waiting for I/O)
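As a rough illustration of collecting the per-domain CPU-utilization metric, the sketch below samples xentop in batch mode and parses the CPU(%) column. The column layout is an assumption based on common xentop output and should be checked against the Xen version in use.

```python
# Sketch: sample per-domain CPU utilization via xentop batch mode.
# Assumes the usual column order (NAME, STATE, CPU(sec), CPU(%), ...);
# verify field positions against your Xen version. Requires root.
import subprocess

def domain_cpu_percent():
    out = subprocess.run(["xentop", "-b", "-i", "1"],
                         capture_output=True, text=True, check=True).stdout
    usage = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) < 4 or fields[0] == "NAME":
            continue                  # skip header / malformed lines
        try:
            usage[fields[0]] = float(fields[3])   # CPU(%) column
        except ValueError:
            pass                      # e.g. 'n/a' for a paused domain
    return usage

print(domain_cpu_percent())  # e.g. {'Domain-0': 12.3, 'VM1': 40.1, 'VM2': 38.7}
```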

Conclusion
Competing network-intensive loads cause:
High cache misses
High wait times

Less competition for resources (the 1 KB / 100 KB pairing) is optimal

Interference is highly sensitive to running multiple VMs
The Credit scheduler is to blame for the large wait times seen with the larger files compared to the smaller ones
