
IEEE ICC 2015 - Next Generation Networking Symposium

Load Balancing for Multicast Traffic in SDN using Real-Time Link Cost Modification

Alexander Craig, Biswajit Nandy, Ioannis Lambadaris and Peter Ashwood-Smith
Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada, K1S 5B6
Email: {alexander.craig, bnandy, ioannis}@carleton.ca
Huawei Canada, Ottawa, Canada, K2K 3J1
Email: peter.ashwoodsmith@huawei.com

Abstract: In this paper we propose an approach for applying traffic load balancing to multicast traffic through real-time link cost modification in a software defined network (SDN) controller. We present an SDN controller architecture supporting traffic monitoring, group management, and multicast traffic routing. An implemented prototype is described, and this prototype is used to implement shortest path multicast routing techniques which make use of the real-time state of traffic flows in the network. This prototype is evaluated through experimentation in Mininet emulated wide area networks. Evaluation is presented in terms of the resulting network performance metrics, focusing on the distribution of traffic flows. Our results demonstrate that real-time modification of link costs produces statistically significant improvements in traffic distribution metrics, with an average improvement of up to 52.8% in traffic concentration relative to shortest-path routing. This indicates that SDN enables the use of real-time modification of link cost functions as an effective technique for implementing traffic load balancing for multicast traffic.

I. INTRODUCTION

A wide range of applications have emerged in today's Internet which require the real-time transmission of multimedia data from one or more traffic sources to a group of
receivers. Examples of these applications include Internet Protocol TV (IPTV), video and audio conferencing, multi-player
games, and Virtual Local Area Networks (VLANs). While
unicast delivery can be used for these applications, this results
in unnecessary duplication of packets at the traffic source,
and inefficient usage of network resources as these duplicate
packets are carried through the network. Multicast delivery
improves the efficiency of these applications by allowing the
network forwarding elements to optimize delivery such that
packets are only duplicated within the network when strictly
necessary to reach all receivers.
Software Defined Networking (SDN) is an emerging network paradigm in which the control plane of the network is
logically centralized. In traditional networks both the forwarding plane and the control plane of the network are logically
distributed throughout all forwarding elements in the network.
In the SDN paradigm forwarding elements remain responsible
for implementing the forwarding plane of the network, but the
control plane of the network is implemented using a logically
centralized controller which typically runs on commodity
server hardware. To support the operation of SDN, network forwarding elements must support a protocol which allows for the remote configuration of forwarding tables, as well as the reporting of flow statistics to a remote controller. OpenFlow [1] has emerged as a de facto standard for this protocol.
SDN has the potential to simplify multicast traffic engineering by leveraging the centralized nature of the network
control plane. The main contribution of this paper is the
presentation of an SDN controller architecture which integrates
traffic monitoring, group management, and multicast routing
to implement traffic load balancing using real-time link cost
modification. An implemented prototype is described, and this
prototype is used to implement shortest path multicast routing
techniques which make use of the real-time state of traffic
flows in the network. This prototype is evaluated through
experimentation with emulated SDN wide area networks.
Evaluation is presented in terms of network performance metrics
focusing on the distribution of traffic flows and resulting traffic
volume. Our results demonstrate that these techniques are
effective at reducing localized congestion in the network.
The remainder of this paper is organized as follows. Section
II will further discuss the motivation of this work. Section III
will discuss related work in the field of SDN multicast. Section
IV will precisely describe the scope and problem definition of
the paper, and Section V will describe the implemented controller architecture. Section VI will present the experimental
setup used to evaluate the implemented prototype, and Section
VII will present the results of performance evaluation. Section
VIII will present concluding statements and related areas for
future work.
II. MOTIVATION

The primary motivation of this paper is to demonstrate that the properties of SDN enable the real-time adjustment
of link costs used for routing calculations with the end goal
of implementing traffic engineering. The logically centralized
control plane of the SDN paradigm is ideally positioned to
overcome several limitations of real-time link cost adjustment
in traditional network deployments.
In a traditional network deployment, protocols such as
Protocol Independent Multicast Sparse Mode (PIM-SM) or
Multicast Open Shortest Path First (MOSPF) are used to
implement multicast routing. These protocols are implemented
in a distributed fashion where each router maintains a separate
local view of the network topology for routing calculations,
which is updated through the dissemination of link state
updates. While traffic state information can be encapsulated
in link state updates, this method is not commonly used, as effective traffic engineering through link cost modification
requires an accurate, global view of the current traffic state.
The convergence delays entailed by the dissemination of link
state updates imply that network elements will often have differing views of the current network traffic state, and therefore
network elements are not well positioned to independently
implement traffic engineering goals by modifying link costs.
The majority of networks which implement traffic engineering
through link weight adjustment use an approach similar to
the proposed approach in [2], in which an external network
manager with a global view of the network state is used to
calculate changes to link costs. This external network manager
can use a variety of techniques to measure the network traffic
state, including polling of management information bases in
network elements, calculation based on packet or flow level
measurements at the network edge, network tomography, or
packet sampling. Modifications to link costs by this network
manager are considered as a significant change to the network
which is performed over a coarse timescale. This is because
each router must be individually updated with new link costs,
and this process incurs convergence delays before all routers
agree on a new, global set of link costs.
In the SDN paradigm the network controller is ideally
positioned to fulfill the role of a centralized network monitor,
as the network controller is tightly integrated with routing in
the network. In the presented SDN controller architecture, all
multicast routing calculations work off a single view of the network state maintained by the controller. The controller learns
the network state through a combination of LLDP polling
(for topology discovery), and OpenFlow statistic polling (for
traffic state discovery). This unified network view may still
contain stale network state due to network delays, but routing
calculations will always be performed using a consistent view
of the network state, and no convergence delays are incurred
by traffic state or topology changes. Additionally, all routing
calculations work off of a centralized view of link costs in
the network controller, and updates to this internal view of
link costs do not incur convergence delays. It is our belief
that these benefits of the SDN paradigm will enable real-time
modification of link costs based on the current traffic state as
a viable technique for traffic load balancing.
III. RELATED WORK

This paper draws inspiration from several recent works in the domain of handling multicast traffic with SDN. In CastFlow [3] the authors propose a clean slate approach to SDN multicast, and evaluate an implemented controller prototype. The evaluation of CastFlow focuses primarily on the processing
requirements and delays incurred by the controller, and the
authors propose a complete replacement of the IGMP [4]
protocol for group management as a means to improve these
metrics. The proposed replacement is not well defined, and
the evaluation of the implemented prototype relies on group
membership state being provided by the test orchestration
scripts. In Multiflow [5] the authors independently propose
another clean slate approach to SDN multicast, and evaluate an
implemented controller prototype titled OpenMcast. This work
relies on the IGMP protocol for both group management and
the determination of multicast routes. In this work, IGMP join
messages are propagated through the network to the multicast group source, and the path selected for the IGMP join messages is reversed to determine the path of multicast packets.
While our work makes use of IGMP, our work is distinct
from Multiflow in that IGMP packets are not directly used
to calculate multicast routes, and IGMP is implemented in a
centralized manner to reduce unnecessary forwarding of IGMP
packets. In [6] the authors focus on improving the reliability
of SDN multicast delivery through the implementation of fast
tree switching. This paper describes a technique in which
multiple backup multicast trees are calculated (using Dijkstra's algorithm) and cached in the network controller for each multicast group. While our work does not implement redundant tree calculation, it is similar to this related work in that our work also uses Dijkstra's algorithm for the calculation of shortest path trees. These related works do not implement real-time bandwidth utilization tracking, nor do they implement
routing based on current traffic state.
IV. PROBLEM DESCRIPTION

This paper primarily aims to demonstrate that traffic load balancing through real-time link cost adjustments can be
feasibly implemented in an SDN controller, and that this traffic
load balancing technique provides significant benefits in terms
of reducing localized congestion in the network. This work
assumes that the network topology will remain static after
the network is initialized. This is not a strict requirement,
as the implemented prototype uses Link Layer Discovery
Protocol (LLDP) polling to dynamically learn the network
topology, but the implemented controller prototype is not
optimized to minimize route recalculation when the network
topology changes. This work also assumes that the number and
location of multicast senders in the network remains static.
This assumption does not apply to multicast receivers, and
in our evaluation multicast receivers randomly join and leave
multicast groups as a stochastic process.
With these assumptions in place, load balancing is implemented through the real-time modification of link costs used for multicast tree calculation with Dijkstra's shortest path algorithm [7]. Two link cost functions of the linear and
inverse proportional types are evaluated, and compared against
shortest-path routing based only on the network topology.
These link cost functions are inspired by [8], but are slightly
modified to account for the lack of assumed a priori bandwidth
reservations. These particular link cost functions were chosen
as they provide an intuitive means of directing traffic away
from congested links, without applying more complex optimization-based approaches. Let $C_i$ denote the cost function of link $i$, let $U_i$ denote the bandwidth utilization of link $i$ (in Mbps), and let $M_i$ denote the maximum bandwidth capacity of link $i$ (in Mbps). Let $\epsilon$ denote the floating point minimum value of the controller platform, and let $\Omega$ denote the floating point maximum value of the controller platform. In the linear link cost scheme, the link cost function is defined as:

$$C_i = \begin{cases} \epsilon & : U_i = 0 \\ U_i / M_i & : U_i > 0 \end{cases} \qquad (1)$$

In the inverse proportional link cost scheme, the link cost function is defined as:

$$C_i = \begin{cases} \epsilon & : U_i = 0 \\ \dfrac{1}{1 - (U_i / M_i)} & : 0 < U_i < M_i \\ \Omega & : U_i \geq M_i \end{cases} \qquad (2)$$

Shortest-path routing is implemented by uniformly setting $C_i$ for all links in the network to a constant value of 1. While these cost function definitions allow the possibility of $U_i = 0$, in practice $U_i$ is always greater than 0, as the LLDP queries periodically generated by the controller ensure that all network links will carry some small amount of traffic.
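For illustration, the three cost schemes above can be expressed as simple Python functions. This is a minimal sketch rather than the GroupFlow source; the use of sys.float_info for the platform minimum and maximum values is an assumption.

    import sys

    EPSILON = sys.float_info.min  # platform floating point minimum (cost of an idle link)
    OMEGA = sys.float_info.max    # platform floating point maximum (cost of a saturated link)

    def linear_link_cost(util_mbps, max_mbps):
        # Equation (1): cost grows in proportion to the measured utilization.
        if util_mbps == 0:
            return EPSILON
        return util_mbps / max_mbps

    def inverse_proportional_link_cost(util_mbps, max_mbps):
        # Equation (2): cost grows sharply as the link approaches saturation.
        if util_mbps == 0:
            return EPSILON
        if util_mbps >= max_mbps:
            return OMEGA
        return 1.0 / (1.0 - (util_mbps / max_mbps))

    def shortest_path_link_cost(util_mbps, max_mbps):
        # Baseline: a uniform cost of 1 reduces Dijkstra's algorithm to hop-count routing.
        return 1.0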
V. CONTROLLER IMPLEMENTATION

Fig. 1. Deployment diagram detailing the key components of the implemented controller prototype.

The evaluated controller prototype is implemented as a number of software modules for the POX network controller
[9]. The controller implementation, along with all scripts used
to automate the evaluation presented in this work, is freely
available from the GroupFlow GitHub repository [10]. A component diagram of the implemented prototype is provided in
Figure 1. A detailed description of each component proposed
by this paper follows (topology discovery in SDN is a well
studied area, and will not be explored in detail here):
Multicast Group Membership Management: The IGMP
Manager module implements IGMPv3 [4] support on all
network switches. Rather than implement IGMPv3 exactly
for all switches in the network, the controller implements
IGMPv3 in a semi-centralized manner. This is performed by
treating all routers in the network as a single IGMPv3 router,
whose IGMP enabled ports are formed by the union of all
ports in the network which do not connect two switches
in the same control domain. This optimization is based on
the insight that flooding IGMP messages within the same
control domain is redundant when all IGMP messages will be
processed by the same controller. The IGMP Manager module
is dependent only on the POX provided discovery module,
which implements topology discovery through LLDP polling.
A reception state map is maintained, which consists of lists
of requested multicast group IP addresses and port numbers
on which delivery has been requested. The IGMP Manager
produces events (which include a copy of the reception state map) to be consumed by the GroupFlow module whenever the
network topology changes, or whenever the multicast reception
state for a particular switch changes (i.e. a multicast receiver
joins/leaves a group).
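As an illustration of the reception state handled by this module, the map might be organized as sketched below. This is a hypothetical sketch; the container layout and the raise_reception_state_event hook are illustrative assumptions and not taken from the IGMP Manager source.

    # Reception state: for each switch DPID, the multicast group IP addresses requested on
    # each port that does not connect two switches in the same control domain.
    reception_state = {
        1: {"239.1.1.1": {3, 4}},   # switch DPID 1: group 239.1.1.1 requested on ports 3 and 4
        2: {"239.1.1.1": {2}},      # switch DPID 2: the same group requested on port 2
    }

    def on_membership_change(dpid, group_ip, port, joined):
        # Update the map on an IGMP join/leave, then raise an event carrying a copy of it
        # to be consumed by the GroupFlow module.
        ports = reception_state.setdefault(dpid, {}).setdefault(group_ip, set())
        if joined:
            ports.add(port)
        else:
            ports.discard(port)
        raise_reception_state_event(dict(reception_state))  # assumed event hook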
Real-Time Traffic Measurement: The FlowTracker module implements real-time bandwidth usage tracking, and reports link bandwidth utilizations and capacities (i.e. $U_i$ and $M_i$, as defined in Section IV) to the GroupFlow module whenever a routing calculation is performed. Traffic estimation is implemented through periodic polling of all switches in the network with OpenFlow ofp_flow_stats queries (as defined in the OpenFlow v1.0 specification [11]), similar to the approaches presented in [12], [13]. Unlike these related works, the
FlowTracker module has not been optimized to minimize the
rate and scope of network polling, and the module will query
all flows on all switches at a fixed periodic interval. This query
interval determines the frequency at which link cost functions
are modified. Query times for each switch are randomly
staggered, based on the time at which the switch first connected
to the controller. This approach has known drawbacks, as continually polling the network introduces potentially unnecessary
overhead in the control plane. This implementation could be
improved by adopting a passive utilization tracking approach,
such as the approach presented in [14]. The FlowTracker
module maintains a map of average link utilizations keyed by
switch DPID and port number, which the GroupFlow module
queries when calculating link cost functions. When calculating
normalized link utilizations, it is assumed that all tracked links
in the network have a uniform maximum bandwidth (which is
provided as a command line argument to the module). This
assumption is made because OpenFlow 1.0 does not provide a
mechanism for querying the maximum bandwidth supported
by a particular port. This mechanism is provided in later
versions of the OpenFlow specification, and future works can
easily relax this assumption.
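The staggered polling and per-port bookkeeping described above could be sketched as follows. This is an illustrative sketch only; send_flow_stats_request and the byte-counter handling are assumptions and do not correspond to the actual POX or FlowTracker APIs.

    import threading
    import time

    POLL_INTERVAL = 1.0  # seconds between ofp_flow_stats queries for each switch

    class UtilizationTracker(object):
        def __init__(self):
            # Average link utilization in Mbps, keyed by (switch DPID, port number).
            self.link_util_mbps = {}

        def start_polling(self, dpid, connect_time):
            # Stagger query times based on when the switch first connected to the controller.
            offset = connect_time % POLL_INTERVAL

            def poll_loop():
                time.sleep(offset)
                while True:
                    send_flow_stats_request(dpid)  # assumed helper issuing an ofp_flow_stats query
                    time.sleep(POLL_INTERVAL)

            worker = threading.Thread(target=poll_loop)
            worker.daemon = True
            worker.start()

        def record_sample(self, dpid, port, delta_bytes, delta_seconds):
            # Convert the byte counter delta from a stats reply into Mbps for this port.
            self.link_util_mbps[(dpid, port)] = (delta_bytes * 8.0) / (delta_seconds * 1e6)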
Multicast Routing Model: The GroupFlow module implements detection of multicast senders, multicast tree calculation,
and installation/removal of OpenFlow rules to direct multicast
traffic. Shortest path tree calculation is performed for each
combination of multicast sender IP address and destination IP
address using Dijkstra's algorithm, using the network topology
view provided by the POX discovery module. Multicast tree
calculation and flow installation is performed whenever a
multicast receiver joins or leaves a multicast group for which
a sender has been identified. Routing is considered separately
for each multicast group, and no flow aggregation between
separate multicast groups is implemented. Pseudocode for the
calculation of shortest path multicast trees is provided in Figure
2.
VI. EXPERIMENTAL SETUP

The implemented prototype was evaluated through network emulation of representative topologies and workloads using
Mininet. Unlike traditional network simulators such as NS-2/3
or OPNET which implement discrete event simulation, Mininet
is a network emulation platform. All switches in the network
are implemented using real instances of Open vSwitch v1.4.6,
and links are implemented as paired virtual interfaces which
have their bandwidth and delay characteristics set through the
Linux tc (Traffic Control) utility. Real network applications are used to generate and receive traffic in the network. From the perspective of the network controller a Mininet emulated network is indistinguishable from a physical SDN network. All results presented here were produced by parsing the output logs of the network controller, and as such the same evaluation techniques could be applied to a physical SDN network.

    Inputs:
        full_topo_graph: list of all edge tuples corresponding to inter-switch links
            (learned from the Discovery module)
        adjacency_map: 2D map of port numbers, keyed by egress node and ingress node
            (learned from the Discovery module)
        desired_reception_state: list of tuples of form <recv_node, recv_node_output_port>
            (derived from IGMPManager events)
        installed_flow_nodes: list of nodes on which rules are already installed for this
            sender/destination address
        tree_src_node: node on which the sender for this sender/destination address is connected

    weighted_topo_graph ← empty set
    for all edge in full_topo_graph do
        egress_port ← adjacency_map[edge.egress_node][edge.ingress_node]
        max_link_util_mbps ← flow_tracker.get_max_util(edge.egress_node, egress_port)
        link_util_mbps ← flow_tracker.get_curr_util(edge.egress_node, egress_port)
        link_cost ← CalculateLinkCost(link_util_mbps, max_link_util_mbps)
        weighted_topo_graph ← weighted_topo_graph ∪ {<link_cost, edge.egress_node, edge.ingress_node>}
    end for

    # CalculateLinkCost: implements the cost function calculation as defined in Section IV.
    # CalculatePathTreeDijkstras: runs Dijkstra's algorithm on the supplied topology graph with
    # the specified source node. Returns a map of lists where path_tree_map[dst_node] = set of
    # edges from the source node to the destination node.
    path_tree_map ← CalculatePathTreeDijkstras(weighted_topo_graph, tree_src_node)

    edges_to_install ← empty set
    for all receiver in desired_reception_state do
        edges_to_install ← edges_to_install ∪ path_tree_map[receiver.recv_node]
    end for

    # InstallOpenflowRules: generates and sends flow installation and removal messages to the
    # network based on the set of edges to install and the set of nodes on which rules are
    # currently installed. The reception state list is passed so that packet output actions can
    # be generated on the terminal nodes. Returns the set of nodes on which flows have been installed.
    installed_flow_nodes ← InstallOpenflowRules(edges_to_install, desired_reception_state, installed_flow_nodes)

Fig. 2: Pseudocode for calculation of a multicast tree for a single combination of sender and multicast destination address by the GroupFlow module. Nodes correspond to switches, which are identified by their DPID in integer format. Edges correspond to links, and each edge is stored as a tuple of <egress node DPID, ingress node DPID>.
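For readers who prefer runnable code, the core of the calculation in Figure 2 could be sketched in Python as follows. This is an illustrative sketch under the assumption that the networkx library is available; it merges per-receiver weighted shortest paths rather than reproducing the GroupFlow module's single Dijkstra tree computation, and the input containers are hypothetical.

    import networkx as nx

    def calculate_multicast_tree(edges, link_costs, tree_src_node, receiver_nodes):
        # edges: iterable of (egress_node, ingress_node) DPID pairs for inter-switch links.
        # link_costs: dict mapping each (egress_node, ingress_node) pair to its current cost.
        graph = nx.DiGraph()
        for egress, ingress in edges:
            graph.add_edge(egress, ingress, weight=link_costs[(egress, ingress)])

        edges_to_install = set()
        for recv_node in receiver_nodes:
            # Weighted shortest path (Dijkstra) from the sender's switch to each receiver's switch.
            path = nx.shortest_path(graph, tree_src_node, recv_node, weight="weight")
            edges_to_install.update(zip(path[:-1], path[1:]))
        return edges_to_install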
Random fully connected topologies with an average node
degree of 4 were generated using the Waxman generator
provided by BRITE [15]. This topology configuration was
selected due to the work in [16] which indicates that random
2-connected topologies (i.e. topologies in which each pair of
nodes can be connected by at least 2 node-disjoint paths) are
most representative of real multicast performance in WAN
networks. Networks with a size of 20 and 40 nodes covering
a 1500 km × 4000 km area (roughly the size of the continental
United States) were generated and evaluated. Both topology
configurations presented similar trends in resulting metrics, and
as such traffic performance metrics will only be presented for
the 40 node network. All links in the network core (i.e. between switches) were uniformly configured with a bandwidth
of 20 Mb/s. To emulate edge networks, a single emulated host
was connected to each switch by a 1 Gb/s link, so that multicast
flows would only be bottlenecked by links in the network
core. Accordingly, the FlowTracker module was configured to only track traffic state on core network links. The FlowTracker module was configured to poll each connected switch at a 1 second interval, resulting in an average of one ofp_flow_stats
query per 25 milliseconds in the 40 node network. Link cost
functions are updated at the same rate. POX was run inside
the Mininet VM, and all switches connected to the controller
using out-of-band control through a direct connection over the
loop-back interface of the emulation host platform.
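As a point of reference, link bandwidth shaping of this kind is configured in Mininet through the TCLink class. The snippet below is a minimal sketch (node names and the controller address are illustrative), not the evaluation scripts from the GroupFlow repository.

    from mininet.net import Mininet
    from mininet.node import RemoteController
    from mininet.link import TCLink

    # Two switches joined by a 20 Mb/s core link, with a host attached by a 1 Gb/s edge link.
    net = Mininet(controller=RemoteController, link=TCLink)
    net.addController('c0', ip='127.0.0.1', port=6633)
    s1, s2 = net.addSwitch('s1'), net.addSwitch('s2')
    h1 = net.addHost('h1')
    net.addLink(s1, s2, bw=20)     # core link, bandwidth shaped via the Linux tc utility
    net.addLink(h1, s1, bw=1000)   # edge link
    net.start()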
The network workload consisted of streaming sessions of
720p video in variable bit-rate MPEG4 format (generated using
VLC [17]), with an average bit-rate of 1.64 Mbps. Custom
Python applications were used as the receivers of the multicast
streams. In each trial run, the number of multicast groups
was set to a fixed value. Each group was initialized with
a single sender, chosen with a uniform random distribution
from all hosts in the network. The launching of these sender
applications was performed before statistic collection began
for each trial run, and the sender application launch times
were uniformly randomized over the duration of the test
media in order to eliminate statistic correlations between trial
runs caused the variable bit-rate of the test media. Within
each multicast group receivers were generated as a Poisson
process, and their reception duration was generated using an exponential distribution with a mean of 60 seconds (i.e. the $\mu$ parameter of the exponential distribution was set to $1/60$). For each receiver arrival event the receiving host was chosen uniformly at random from all hosts in the network. For the purpose of calculating the mean occupancy of each group, this configuration can be considered as an M/M/$\infty$ queuing system, as all receivers enter service (i.e. begin receiving the media stream) as soon as they are initialized, without queuing delay. The mean occupancy of an M/M/$\infty$ queue is calculated as $\lambda / \mu$, where $\lambda$ is the parameter of the exponential distribution used to determine inter-arrival times. In all evaluation shown here, $\lambda$ was set to $5/60$, resulting in a mean inter-arrival time of 12 seconds, and a mean occupancy of 5 receivers per multicast group. Accordingly, 5 active receivers were generated for each multicast group prior to the start of each trial run. This configuration allows for realistic churn of multicast receivers, while ensuring that the occupancy of each group averages 5 receivers in steady state. Statistic collection began after all senders and all initial receivers were initialized, and statistics were collected for a period of 3 minutes for each trial run. Trials were run with the number of active groups varied in 5 group increments from 10 groups to 50 groups.
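Written out, the steady-state occupancy quoted above follows directly from the M/M/$\infty$ result:

$$E[N] = \frac{\lambda}{\mu} = \frac{5/60}{1/60} = 5 \text{ receivers per group,}$$

where $\lambda$ is the receiver arrival rate per group and $1/\mu = 60$ s is the mean reception duration.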

VII. RESULTS

All data points presented in this section are averaged over 50 trial runs, and the plotted error bars denote 95% confidence intervals calculated using a Student's t-distribution.

A. Effectiveness of Traffic Load Balancing

Fig. 3: Traffic concentration (ratio between the peak link utilization and the average link utilization in the network), plotted against the number of active multicast groups.

Fig. 4: Average standard deviation of link utilizations across all links in the network, plotted against the number of multicast groups running in the network.

Figures 3 and 4 present metrics which are used to evaluate the effectiveness of load balancing in the network. Figure 3 presents the traffic concentration of the network, which is defined as the ratio between the peak link utilization in the network and the average link utilization among all links in the network [18]. Figure 4 presents the link utilization standard deviation among all links in the network. Both of these metrics capture similar information, in that a lower value of the metric indicates that traffic is more evenly distributed among links in the network. Reducing the traffic concentration in the network is desirable, as doing so increases the reliability of multicast delivery by avoiding packet loss due to localized congestion. In a network with high traffic concentration multicast receivers may experience packet loss due to congestion on a small number of links, while the majority of links in the network remain uncongested. A low traffic concentration indicates that traffic is distributed more evenly among network links, and multicast receivers should not experience packet loss due to congestion unless the majority of links in the network are near congestion. Our results indicate that both the linear and inverse proportional cost function schemes provide a statistically significant improvement in load balancing metrics over shortest-path routing. On average, the linear type cost function results in a 48.6% lower traffic concentration and a 36.1% lower link utilization standard deviation than shortest-path routing, while the inverse proportional type cost function results in a 52.8% lower traffic concentration and a 37.6% lower link utilization standard deviation than shortest-path routing.
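Formally, writing $U_i$ for the measured utilization of link $i$, $\bar{U}$ for the mean utilization, and $N$ for the number of tracked links, the two metrics plotted in Figures 3 and 4 can be expressed as follows (population form of the standard deviation assumed):

$$\text{Traffic Concentration} = \frac{\max_i U_i}{\bar{U}}, \qquad \sigma_U = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(U_i - \bar{U}\right)^2}.$$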

B. Impact on Traffic Volume

TABLE I. AVERAGE NUMBER OF EDGES PER MULTICAST TREE

    Cost Function Type    Number of Edges
    Shortest-Path         9.30
    Linear                10.85
    Inv. Proportional     11.13

While the previous metrics demonstrate that these techniques are effective at balancing traffic load in the network, it is also important to quantify the extent to which these techniques cause multicast traffic to deviate from the minimum hop count paths, and thus the extent to which total traffic volume is increased relative to shortest-path routing. Figure 5 presents the average link utilization among all links in the network. The 40 node network contains 80 bidirectional links, which the FlowTracker measures as 160 unidirectional links. Therefore, the total volume of traffic in the network can be obtained by multiplying the average link utilization by 160. Our results demonstrate the expected result, namely that the load balancing techniques produce an increased total volume of traffic in the network. On average, the linear type cost function results in a 13.5% greater average link utilization than shortest-path routing, while the inverse proportional type cost function results in a 16.6% greater average link utilization than shortest-path routing. The average link utilization scales linearly with the number of groups active in the network, regardless of the link cost function chosen, as all multicast groups have a uniform mean occupancy and mean bit-rate.

Fig. 5: Average link utilization across all links in the network, plotted against the number of active multicast groups.

Fig. 6: Total number of flow entries installed by the GroupFlow module across all network switches, plotted against the number of active multicast groups.

Figure 6 presents the total number of multicast flow table entries installed across all switches in the network. The total number of flows in the network is strongly correlated with the average link utilization, as the total volume of traffic in the network is directly dependent on the number of edges in each multicast tree (each of which corresponds to a single packet output flow table entry). The average number of edges per multicast tree is presented in Table I. Similar to the average link utilization, the number of installed flow table entries scales linearly with the number of groups active in the network. This raises scalability concerns for large networks with many multicast groups, as the limited flow table size of forwarding elements is a significant bottleneck in SDN [19]. The proposed traffic load balancing approach exacerbates this scalability issue, as the number of required flow entries is increased relative to shortest-path routing (by 12.9% for the linear type cost function, and 15.9% for the inverse proportional type cost function). Potential methods for addressing this scalability issue are left as an area for future work.

VIII. CONCLUSIONS AND FUTURE WORK

In this paper we present an approach for implementing multicast traffic engineering using real-time link cost function
modification with an SDN controller. We demonstrate a controller architecture in which multicast routing is implemented through four logically decoupled modules
which handle topology discovery, group membership, traffic
measurement, and multicast tree calculation. We evaluate an
implemented controller prototype using network emulation,
and present metrics indicating that the developed techniques
are effective at implementing traffic load balancing, at the
cost of increased total traffic volume. Our results demonstrate
that real-time modification of link costs produces statistically
significant improvements in traffic distribution metrics (up to
a 52.8% improvement in traffic concentration, and a 37.6%
improvement in link utilization standard deviation relative to
shortest-path routing). Our evaluation indicates that real-time
modification of link cost functions can be feasibly implemented in an SDN controller, and that the technique is effective
in reducing localized congestion of multicast traffic.
Due to limitations in Mininet, this work uses out-of-band control over the emulation host platform's loop-back interface
to implement switch management. This configuration emulates
a network where the controller is simultaneously running
locally on every switch, and this control deployment is not
implementable on a real WAN. Due to this limitation this
work does not enable the realistic evaluation of propagation,
switching, and queuing delays associated with network control
traffic. The greatest impact of these delays would be a reduction in the accuracy of real-time bandwidth estimation, and the introduction of inconsistencies in the controller's learned network
state. However, control traffic delays are expected to be small
relative to the inter-query interval of the FlowTracker module
(1 second in our work). For example, if the controller in our
40 node network is placed such that the average propagation
delay among all network switches is minimized, the average
and maximum round trip propagation delay to the controller
are found to be 6.10 ms and 11.91 ms respectively. Future
work could examine more realistic control traffic modeling to
determine the impact of variable control delays and control
channel congestion / connectivity loss on the load balancing
technique presented here. Control traffic could be modeled
using in-band control in the emulated data network, or out-of-band control with a separately emulated control network (either
using OpenFlow with in-band control in a user inaccessible
control network, or a traditional IP control network).

A promising direction for future work is the investigation of branch forwarding with unicast tunneling. While the method
presented in this work is effective at reducing localized congestion, it does so at the cost of increasing the amount of multicast
state which must be stored in the network forwarding elements.
A promising approach for addressing this issue is branch aware
forwarding [20] [21]. Under this approach multicast forwarding state is only stored on forwarding elements which form
branch nodes in a multicast tree (i.e. nodes with at least three
incident edges), while forwarding over unbranched nodes is
implemented using unicast tunneling. A potential drawback of
this approach is that the use of unicast tunneling may prevent
the network controller from estimating bandwidth utilization
on a per flow basis. However, the approach presented in this
work only relies on bandwidth estimations on a per link basis,
and therefore this approach should be compatible with branch
aware forwarding.

REFERENCES

[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69-74, 2008.
[2] B. Fortz, J. Rexford, and M. Thorup, "Traffic engineering with traditional IP routing protocols," IEEE Communications Magazine, vol. 40, no. 10, pp. 118-124, Oct. 2002.
[3] C. Marcondes, T. Santos, A. Godoy, C. Viel, and C. Teixeira, "CastFlow: Clean-slate multicast approach using in-advance path processing in programmable networks," in Proc. of 2012 IEEE Symposium on Computers and Communications (ISCC), 2012, pp. 94-101.
[4] B. Cain, S. Deering, I. Kouvelas, B. Fenner, and A. Thyagarajan, "Internet Group Management Protocol, Version 3," RFC 3376 (Proposed Standard), Internet Engineering Task Force, Oct. 2002, updated by RFC 4604. [Online]. Available: http://www.ietf.org/rfc/rfc3376.txt
[5] L. Bondan, L. F. Müller, and M. Kist, "Multiflow: Multicast clean-slate with anticipated route calculation on OpenFlow programmable networks," Journal of Applied Computing Research, vol. 2, no. 2, pp. 68-74, 2013.
[6] D. Kotani, K. Suzuki, and H. Shimonishi, "A design and implementation of OpenFlow controller handling IP multicast with fast tree switching," in Proc. of 2012 IEEE/IPSJ 12th International Symposium on Applications and the Internet (SAINT), Jul. 2012, pp. 60-67.
[7] E. W. Dijkstra, "A note on two problems in connexion with graphs," Numerische Mathematik, vol. 1, no. 1, pp. 269-271, 1959.
[8] I. Matta and L. Guo, "On routing real-time multicast connections," in Proc. of IEEE International Symposium on Computers and Communications, 1999, pp. 65-71.
[9] "About POX," NoxRepo. [Online]. Available: http://www.noxrepo.org/pox/about-pox/ (Accessed: 22 February 2014)
[10] "GroupFlow." [Online]. Available: https://github.com/alexcraig/GroupFlow (Accessed: 18 March 2014)
[11] "OpenFlow Switch Specification v1.0.0." [Online]. Available: https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.0.0.pdf (Accessed: 22 February 2014)
[12] A. Tootoonchian, M. Ghobadi, and Y. Ganjali, "OpenTM: traffic matrix estimator for OpenFlow networks," in Proc. of 11th International Conference, PAM 2010. Springer, 2010, pp. 201-210.
[13] L. Jose, M. Yu, and J. Rexford, "Online measurement of large traffic aggregates on commodity switches," in Proc. of USENIX Hot-ICE, 2011.
[14] C. Yu, C. Lumezanu, Y. Zhang, V. Singh, G. Jiang, and H. V. Madhyastha, "FlowSense: Monitoring network utilization with zero measurement cost," in Proc. of 14th International Conference, PAM 2013. Springer, 2013, pp. 31-41.
[15] A. Medina, A. Lakhina, I. Matta, and J. Byers, "BRITE: an approach to universal topology generation," in Proc. of Ninth International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, 2001, pp. 346-353.
[16] C. A. Noronha and F. A. Tobagi, "Evaluation of multicast routing algorithms for multimedia streams," Computer Systems Laboratory, Stanford University, 1994.
[17] "VideoLAN Media Player." [Online]. Available: http://www.videolan.org/vlc/index.html (Accessed: 22 February 2014)
[18] K. Kaur and M. Sachdeva, "Performance metrics for evaluation of multicast routing protocols," in Proc. of 2012 International Conference on Advances in Engineering, Science and Management (ICAESM), 2012, pp. 582-587.
[19] Y. Kanizo, D. Hay, and I. Keslassy, "Palette: Distributing tables in software-defined networks," in Proc. of IEEE INFOCOM, Apr. 2013, pp. 545-549.
[20] J. Tian and G. Neufeld, "Forwarding state reduction for sparse mode multicast communication," in Proc. of IEEE INFOCOM '98, Seventeenth Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 2, 1998, pp. 711-719.
[21] D.-N. Yang and W. Liao, "Protocol design for scalable and adaptive multicast for group communications," in Proc. of IEEE International Conference on Network Protocols, 2008, pp. 33-42.
