Abstract
Network virtualization is an essential technique for data center operators to provide traffic isolation, differentiated service, and security enforcement for multi-tenant services. However, traditional protocols used in local area networks may not be applicable to data center networks due to differences in network topology. Recent research suggests that layer-2-in-layer-3 tunneling protocols may be the solution to these challenges. In this article, we find via testbed experiments that directly applying these tunneling protocols to network virtualization results in poor performance due to scalability problems. Specifically, we observe that the bottlenecks actually reside inside the servers. We then propose a CPU offloading mechanism that exploits a packet steering function to balance packet processing among available CPU threads, thus greatly improving network performance. Compared to a virtualized network created with VXLAN, our scheme improves bandwidth by up to almost 300 percent on a 10 Gb/s link between a pair of tunnel endpoints.
Table 1. Comparison of the encapsulation protocols.

                                  NVGRE                        VXLAN                   STT
Format of encapsulated packet     GRE                          UDP/IP                  Modified TCP/IP
Per-flow equal-cost multipath     Yes; Virtual Subnet ID and   Yes; UDP port number    Yes; TCP port number
                                  FlowID are used to           is hashed.              is hashed.
                                  calculate the hash.
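The per-flow equal-cost multipath row of the table can be made concrete with a small sketch. A tunnel endpoint computes a hash over the inner flow's 5-tuple and exposes it in the outer header (e.g., the outer UDP source port for VXLAN), so that standard ECMP hashing in the fabric spreads tenant flows across paths. The function below is an illustration, not code from the article; the choice of CRC32 and the use of the dynamic port range 49152-65535 (as suggested by RFC 7348) are assumptions:

```python
import zlib

def outer_udp_src_port(src_ip, dst_ip, proto, src_port, dst_port):
    """Derive a stable outer UDP source port from the inner 5-tuple,
    so packets of one flow always take the same ECMP path while
    different flows are spread across paths."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    flow_hash = zlib.crc32(key)  # any stable hash works; CRC32 is assumed here
    # Keep the result inside the dynamic/private port range 49152-65535.
    return 49152 + flow_hash % (65536 - 49152)

port = outer_udp_src_port("10.0.0.1", "10.0.0.2", 6, 12345, 80)
assert 49152 <= port <= 65535
```

Because the hash is deterministic, all packets of a flow carry the same outer port and are never reordered by multipath routing.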
endpoint that attaches to such virtualized network functions, especially on 10 Gb/s networks.
The rest of the article is organized as follows. The following section provides an overview of the most popular tunneling protocols. After that we describe the experimental setup of the tunneling environment at 1 Gb/s and 10 Gb/s. Then we report on the analysis of the experimental results and identify the performance bottlenecks. An optimized framework is then described to address these bottlenecks. The final section summarizes the lessons learned and discusses potential future directions.

Overview of Encapsulation Protocols
A number of layer-2-in-layer-3 protocols for network encapsulation have emerged within industry and academia [15]. Examples include Generic Routing Encapsulation (GRE) [7], Network Virtualization Using Generic Routing Encapsulation (NVGRE) [8], Virtual Extensible LAN (VXLAN) [9], and Stateless Transport Tunneling (STT) [10]. These protocols encapsulate layer 2 frames within layer 3 packets and deliver these packets through an IP network. By decoupling the tenant-defined packet header from the operator-defined packet header, operators can modify the configurations of underlay networks or servers without disrupting tenants' traffic, and can provide larger virtualized networks spanning multiple routers. Moreover, these protocols invariably offer a large address space for each tenant.

Generic Routing Encapsulation
GRE [6] is a point-to-point protocol that encapsulates a tenant packet within another IP packet. A GRE-encapsulated packet is routed through the data center network based on the destination IP address. When the packet reaches the destination, the outer IP and GRE headers are stripped, and the remaining inner packet is delivered to the tenant's application. The tenant is totally unaware of this encapsulation and decapsulation process, and presumes it is the only user in this virtualized network.
More recently, a new encapsulation framework, NVGRE [7], was proposed to better support multi-tenant environments. Specifically, this framework encapsulates a tenant's layer 2 frame. The Virtual Subnet ID (VSID) and FlowID fields are used to identify tenants' flows. This 24-bit VSID can support up to 16 million tenants and virtual LANs.

Virtual Extensible Local Area Network
In contrast to GRE and NVGRE, VXLAN [9] encapsulates each user's data frame into a new UDP/IP datagram, and extracts the data frame based on the Virtual Tunnel End Point (VTEP) protocol. A VTEP can be in the form of software residing at each server endpoint or a daemon in the edge switch. It learns other VTEP addresses from the outer and inner headers of the packets received. VXLAN also provides a 24-bit address space for tenant IDs. Furthermore, VXLAN achieves per-flow load balancing in the network fabric by hashing on the outer UDP source port number.

Stateless Transport Tunneling
STT [10] encapsulates the layer 2 frames of each tenant into modified TCP/IP packets. The sequence number field in the outer TCP header is redefined as the STT frame length and STT fragment offset fields, and the acknowledgment number is used similarly to the IPv4 identification for packet assembly. Thus, STT does not maintain a state machine for retransmission. This is acceptable in data center networking because the TCP stacks of the tenants' operating systems handle the retransmission task.
NVGRE, VXLAN, and STT are the three most widely used tunneling protocols for creating overlay networks in data centers, as listed in Table 1. By decoupling tenants' virtual networks from the infrastructure network, overlay network technologies provide the key to accommodating a large number of tenants in a cloud infrastructure. First, they create an isolated virtual network for each tenant. Tenants can set up their own medium access control (MAC)/IP addresses without interfering with the MAC/IP addresses of other tenants. From the network administrators' point of view, these tunneling protocols provide a large IP address space for the virtual networks of their tenants with simultaneous traffic isolation and security enforcement. Seamless VM migration can also be achieved, as the headers of decapsulated packets remain the same.

Evaluation Methodology and Setup
In these experiments, we compare the effective bandwidth between a pair of bare-metal servers without tunneling and that of going through a tunnel using GRE or VXLAN. All of these configurations include pairs of vSwitches [11] residing in identical 2.4 GHz Intel Xeon servers connected by a 10 Gb/s top-of-rack (ToR) switch, as shown in Fig. 1. The vSwitches inside both servers are software implementations of virtual network switches that support tunneling protocols such as GRE and VXLAN, and thus they can function as tunneling endpoints. By deploying vSwitches across servers, these endpoints form a dedicated overlay network for a tenant. The most important feature of an overlay network is that it makes the underlay fabric completely transparent and thus substantially improves flexibility. The responsibility of the vSwitches is to classify and deliver packets from applications to the underlay IP network and vice versa by pushing or popping tunneling headers.
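For concreteness, tunnel endpoints of this kind can be created with Open vSwitch roughly as follows. This is only a sketch: the bridge name, tunnel key, and peer address below are placeholders, not the actual configuration used in the experiments.

```shell
# Create a bridge to act as the vSwitch (names and addresses are placeholders).
ovs-vsctl add-br br0

# Add a VXLAN tunnel port; remote_ip is the peer endpoint's address and
# options:key carries the tenant/network identifier.
ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 \
    type=vxlan options:remote_ip=192.0.2.2 options:key=5001

# Alternatively, a GRE tunnel port to the same peer.
ovs-vsctl add-port br0 gre0 -- set interface gre0 \
    type=gre options:remote_ip=192.0.2.2
```

Attaching a tenant's virtual interface to the bridge then makes the vSwitch push or pop the tunneling headers transparently, exactly as described above.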
Figure 1. The two servers are interconnected by a 10 Gb/s rack switch (not shown here) and form a 10 Gb/s Ethernet link (black solid line). The virtualized networks implemented by VXLAN (purple dashed line) and GRE (green dashed line) run over this physical 10 Gb/s link. The solid lines show how packets actually travel from the source to the destination. The operating system is Ubuntu 12.04.3 Long Term Support, and Open vSwitch is version 1.11.
[Figure: Bandwidth performance of the overlay network powered by VXLAN. Y-axis: aggregated bandwidth (Gb/s).]

Tunnel name   Default MTU   Adjusted MTU
GRE           1500          1462
VXLAN         1500          1450
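The adjusted MTUs of 1462 (GRE) and 1450 (VXLAN) follow directly from the per-packet encapsulation overhead: each tunneled frame carries the inner Ethernet header (14 bytes) plus an outer IPv4 header (20 bytes), and then either a basic GRE header (4 bytes) or a UDP header (8 bytes) plus a VXLAN header (8 bytes). A quick arithmetic check, where the constants are standard header sizes (assuming IPv4 without options and a GRE header without optional fields), not values given in the article:

```python
# Standard header sizes in bytes (assumed: IPv4 outer header, basic GRE header).
INNER_ETH = 14   # inner Ethernet header carried inside the tunnel
OUTER_IP = 20    # outer IPv4 header
GRE_HDR = 4      # GRE header without optional fields
UDP_HDR = 8      # outer UDP header (VXLAN only)
VXLAN_HDR = 8    # VXLAN header

def adjusted_mtu(link_mtu, overhead):
    """MTU the tenant interface must use so encapsulated frames still fit
    into a single physical-link frame."""
    return link_mtu - overhead

gre_mtu = adjusted_mtu(1500, INNER_ETH + OUTER_IP + GRE_HDR)                 # 1462
vxlan_mtu = adjusted_mtu(1500, INNER_ETH + OUTER_IP + UDP_HDR + VXLAN_HDR)   # 1450
print(gre_mtu, vxlan_mtu)
```

Reducing the tenant-side MTU this way avoids fragmentation of the encapsulated packets on the 1500-byte physical link.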