
Computer Networks 137 (2018) 160–172


Join and split TCP for SDN networks: Architecture, implementation, and evaluation
Wei Guo a, V. Mahendran b,∗, Sridhar Radhakrishnan a
a School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA
b Department of Computer Science and Engineering, Indian Institute of Technology Tirupati, Tirupati, Andhra Pradesh, 517506, India

Article history:
Received 16 July 2017
Revised 9 March 2018
Accepted 21 March 2018
Available online 21 March 2018

Keywords:
TCP
SDN
Flow aggregation and splitting
Implementation framework
Performance evaluation study

Abstract

With the advent of SDN technologies, fine-granular steering and dynamic control of network flows become possible. Such features can be appropriately utilized to enhance network performance. In this paper, we present a composite system-level implementation framework for the aggregation and splitting of TCP flows. Our framework works in an end-user-agnostic manner that requires no intervention or modification from the end-user.

While most existing works focus on either aggregation-only or split-only frameworks, we propose a more practical 'join-and-split' working prototype using SDN. To the best of our knowledge, we are the first to implement a composite join-and-split TCP architecture that uses efficient mechanisms to synchronize the different join and split network points. We use a 'linked-ACK' mechanism to preserve the end-to-end semantics of flows that are separated by the join and split network points. Through an extensive performance evaluation study, we show the improved performance achieved by our framework in both wired and wireless environments. We describe the implementation of the aggregation/splitting of TCP flows intuitively through a state diagram. We are hopeful that this paper serves as a system-level guide for the research community.
© 2018 Elsevier B.V. All rights reserved.

1. Introduction

Today's Internet is constantly growing with the addition of new sets of devices and services. A network increasing in scale should also renew and reinvent its core functional capabilities, and adapt to new design paradigms, in order to provide improved delivery performance. Software-Defined Networking (SDN) [1] is one such paradigm that provides effective network management and dynamic flow steering capabilities to enable engineers to build efficient network services.

While traditional switches and routers perform simple packet routing and forwarding, the SDN-enabled switches of today can intelligently forward as well as dynamically steer network traffic. Appropriate steering and management of flows in the network can help improve the delivery performance of the associated flows. In this work, we utilize SDN technologies and develop efficient TCP-based frameworks. To this end, we present an SDN-based, end-user-agnostic 'join-and-split' TCP framework with features that preserve end-to-end flow semantics. Maintaining end-to-end semantics means the following: an acknowledgement for a data segment received by an intermediate device is sent to a client only after the original destination host receives this data segment (rather than sending the ACK at the time of the actual receipt of the data segment by the intermediate device) [2].

In the framework, all TCP flows that share a common path are aggregated at the beginning of this shared path. In a typical network, this may happen at a node at the network edge. For instance, in a cellular network, smart grid meters (connected through a common base station) send their data to a data center hosting servers that perform activities such as monitoring and power load balancing [3]; the base station then becomes the strategic point for joining the TCP flows. Another strategic point on the server side of the access network can become the other end-point of this shared path. Between these two join and split points, a single long TCP flow can steadily transfer the aggregated network traffic by exploiting the common congestion control mechanism of this TCP. The significant task of this framework lies in effectively synchronizing the necessary information between the join and split network points. To this end, we provide control plane functionality to the join and split network points for enabling synchronized information transfer. Moreover, this framework needs to function in a seamless manner without user interference.

∗ Corresponding author.
E-mail addresses: wguo@ou.edu (W. Guo), mahendran@iittp.ac.in (V. Mahendran), sridhar@ou.edu (S. Radhakrishnan).
https://doi.org/10.1016/j.comnet.2018.03.022
1389-1286/© 2018 Elsevier B.V. All rights reserved.

The aggregated flow must be routed between the join and the split points of the network in a user-agnostic manner.

Our framework maintains synchronized TCP session states between the clients-to-join-point part of the flows and the associated split-point-to-server part of the flows. TCP connections carry state information in the form of TCP options that are negotiated at connection setup time [4], such as enabling selective ACK, enabling timestamps, and, in the case of multipath TCP, enabling MPTCP [5]. These options (when not suitably taken care of) are lost when the TCP flows are modified during aggregation or splitting. In this paper, we enable the SDN controller to perform Deep Packet Inspection (DPI) [6] on each SYN segment from the clients, parse the TCP options, and synchronize this learnt information to the split TCP point (proxy) node. This split-proxy node then establishes a (fake) TCP connection to the server with the same options as required by the client.

The two proxy points in the network divide the TCP flows into three non-overlapping independent flows. These separated flows will have different throughputs, and unless properly handled would break the end-to-end flow semantics [7]. In this work, we propose a new concept called Linked-ACK, wherein the ACKs of the server-side flows are sent along the path from the server to the client. Therefore, the clients receive ACKs only when the packet is received at the server. The Linked-ACK mechanism can therefore help in maintaining the end-to-end flow semantics. In addition, the Linked-ACK limits the total buffered data to be proportional to the maximum (sender) congestion window size of the corresponding flows. In this manner, the buffers of the join and split proxy nodes are protected from potential overflow.

While a number of recent works have attempted to improve TCP performance by tuning the server-side TCP parameters, our framework gives designers more control to fine-tune the TCP flow and achieve better performance. Thanks to SDN technologies, with the help of the SDN controller we can provide a global view and control of the network information, which can be appropriately utilized to improve the network performance. For instance, to support Multi-Path TCP (MPTCP), end-users are required to upgrade to a compatible kernel. On the other hand, our framework facilitates the 'join' proxy to be used as an MPTCP proxy point, and helps the end-users benefit from the advantages of MPTCP without modifying end-user-side kernel code. We believe this provides a flexible and scalable solution for the legacy systems in the network.

In summary, the main contributions of this work are as follows:

1. Develop and implement a novel 'join and split' TCP framework based on SDN that seamlessly joins and splits TCP flows to achieve better performance.
2. Propose and implement the Linked-ACK concept to maintain the TCP end-to-end semantics, and also control the buffer usage of the proxy network points.
3. Provide a platform to offload TCP fine-tuning from clients and servers to the 'join and split' proxy points, for better controllability.

The remainder of this paper is organized as follows. Section 2 discusses some of the related works and positions our work with respect to the state-of-the-art. In Section 3, we provide the system model that we use throughout the paper. In Section 4, we provide our SDN-based join and split framework with Linked-ACK. Section 5 discusses our performance evaluation results on different application scenarios such as wireless clients and MPTCP applications. In that section, we also describe the working of the fairness-integrated framework that is used to enable fairness among the network flows. Finally, we conclude our work with future directions in Section 7.

2. Related work

A few recent studies have focused on improving TCP throughput performance through the technique of combining different flows in the network. In [8], the authors, with the help of simulation experiments, improve smart grid meters' traffic through a flow-aggregation framework. Along similar lines, in an LTE-based wireless smart grid scenario [9], we showed that with appropriate TCP aggregation and scheduling, fairness among the TCP flows can be achieved (in addition to obtaining improved throughput performance).

In a different work [10], we proposed an integration of IoT-based MQTT messages at the edge switches (known as fog nodes) for achieving improved delivery performance. The aforementioned works in common support the logic of simple flow-aggregation frameworks that cannot be non-trivially extended and applied to generic network scenarios involving both flow joining and flow splitting. In another work [11], we proposed a split-only framework that separates each flow into a chain of two flows, with the first flow providing a congestion-free wireless transport, and the second (part of the) wired-network flow providing regular congestion-based TCP transport.

Unlike the existing works that independently address either aggregation-only [9] or split-only [11] TCP frameworks, in this paper we propose a unified join-and-split TCP framework that provides effective information synchronization capabilities between the join and fork proxy points in the network. This feature enables the aggregation and split functionality to work at different points of the same network.

Works such as [12–14] exploit the idea of preserving the end-to-end semantics of flows by caching the ACK segments with the help of proxy nodes. However, they do not focus on aggregating multiple TCP flows. On the other hand, an MPTCP flow-based 'proxy' framework has been proposed in [15]. However, this work did not provide the details of preserving end-to-end semantics in the MPTCP proxy. In this paper, we integrate the MPTCP proxy into our 'join-and-split' framework along with the Linked-ACK concept to maintain the TCP end-to-end semantics.

Flow aggregation techniques as such in SDN are addressed in the literature from a different, flow management perspective. To reduce the installation of flow rules in the switches, studies have encouraged the use of flow aggregation. To name a few, Zhang et al. [16], Mizuyama et al. [17], and Kosugiyama et al. [18] have encouraged flow aggregation for efficient SDN operation. The work of Zhang et al. [16] identifies flows affected by link failures in the network, aggregates them, and sends the aggregated flow through a local reroute path so as to reduce the flow management operations in the SDN.

The SDN control information increases linearly with the number of flows, which severely limits the performance in a wireless network scenario. To combat this, the authors in [17] use flow aggregation to reduce the SDN control traffic. The authors in [18] propose a heuristic algorithm for the composition and aggregation of flows in order to minimize the end-to-end delay of flows. None of these works shows how an aggregation is performed in a real experiment, and they also do not provide the necessary information for splitting the flows at the server side of the network. Moreover, in these works the aggregation is performed only for the benefit of SDN operations. In contrast, we demonstrate the benefit of flow aggregation and splitting for the benefit of the clients (by ensuring throughput fairness and improved throughput performance).

3. System design and implementation

Fig. 1 shows the system model used throughout the paper. Each switch is considered to support SDN's OpenFlow protocol.

Fig. 1. System model. Three clients connect through switch1 and switch2 to a server over the SDN data plane; a proxy node (Proxy1, Proxy2) is attached to each switch, and an SDN controller manages the switches and the proxies over the SDN control plane.

A proxy computing node is attached to each of the switches. An SDN controller is connected to these OpenFlow switches and proxy nodes. The controller interacts with the switches through the OpenFlow protocol, and sends commands to the proxy nodes through a custom-designed application-layer protocol running over the TCP network stack. Without loss of generality, three clients share a same path from Switch-1 to Switch-2.

4. SDN-based TCP join and split framework

In our framework, the TCP flows from each of the clients are combined (at the join-proxy node) to form a long TCP flow along the shared path of the network. The SDN controller generates a Unique IDentification (UID) number, and associates it to each of the flows to be aggregated. The join-proxy attaches the UID to each received data segment, and sends it to the split-proxy node. The split-proxy can then split the received data back into the proper flows with the help of the UID. As shown in Fig. 1, Proxy1 joins the TCP flows originated from the three different clients. Proxy2 then splits out the flow from each client, and sends them to the server.

The clients and the server need not learn any information from the network. To make the join-proxy and split-proxy transparent to both the clients and the server, the SDN controller is programmed to set up flow tables that create fake TCP connections between the clients and the join-proxy node, and subsequently between the split-proxy and the server. While the clients assume that they are transmitting data to the server, the data is actually sent to the join-proxy node. We presented a part of this simple join-and-split framework in [19].

In this join-and-split framework, we need to ensure that the end-to-end semantics are maintained. Each TCP connection from every client to the server is divided into three individual TCP connections by the join and split proxy points. Each of these individual TCP connections adjusts the throughput in its own path, and maintains its own TCP states. Typically, when the incoming flow rate is higher than the output rate at the proxy node, it creates an unstable queue. Therefore, it is necessary to maintain the end-to-end semantics, and also maintain queue stability. In this paper, we propose a "Linked-ACK" to link the ACKs of the three individual TCP flows through running a custom-defined OpenFlow protocol in the switches.

4.1. TCP join and split framework using SDN

SDN supports both proactive and reactive ways of flow routing. The proactive way populates the flow tables ahead of the traffic coming from the switch. The reactive way, on the other hand, handles the flows 'on the fly' depending on the information provided by the incoming flows. In our work, we consider the reactive way of routing the flows. Without loss of generality, we consider that the routing table entries directing flows from, and to, the (join and split) proxy nodes are preinstalled. The first segment of the incoming TCP flow from each client is forwarded to the SDN controller for analysis. The sequence of steps in the flow table setup process is shown in Fig. 2. Switch-1 forwards the first packet (the TCP SYN packet) from client1 (destined toward the server) to the SDN controller.

Fig. 2. SDN join and split control plane architecture. When new SYN segments arrive at switch1, the SDN control plane performs steps 1 through 6: (1) switch1 sends the first segment of the C1-to-S flow to the controller; (2) the controller generates a new Unique ID (UID) and a new source transport port (NSrcPort) for the C1-to-S flow; (3, 4) the controller sends the source IP, source port, UID, and NSrcPort to Proxy1 and Proxy2; (5) switch1 installs a flow table entry turning the C1-to-S flow into a fake TCP connection from C1 to P1; (6) switch2 installs a flow table entry turning the P2-to-S flow into a fake TCP connection appearing as C1 to S.

The SDN controller analyses the received segment, extracts the client's information, and distributes this information to the split and the join proxy points. The SDN controller then generates a UID, and creates a new TCP flow transport (namely, NSrcPort) between the split-proxy point and the server, as shown in step 2 in Fig. 2. As a special case, the subflows of an MPTCP flow share a same UID. The UID with the associated flow information is the key component used in the splitting and joining of flows. While a standard (out-of-the-box) SDN controller lacks any control of the proxy, we extend the SDN control plane functionality to allow the controller to manage the proxies attached to each OpenFlow switch with a custom application protocol developed over TCP.

After analyzing the SYN packet, the controller computes the routing path and creates a fake TCP connection between the client and the join-proxy node. To this end, a flow table entry substitutes the server information with the join-proxy point's information in the specific fields of the data link, network, and transport layers of the incoming flow from the client.

Fig. 3. SDN join and split data flow initiation using TCP SYN segments. (1, 2) A segment from c1 to s is redirected to Proxy1 over the fake connection; (3) Proxy1 obtains the UID by matching the source IP and port, and constructs a new packet of the form UID+length+data; (4) the packet is carried from Proxy1 to Proxy2 over the aggregated flow; (5) Proxy2 parses the UID, length, and data, looks up the socket by UID (creating one with NSrcPort if none exists), and writes the data to it; (6, 7) the data is delivered from Proxy2 to s.
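The per-segment relaying of Fig. 3 can be pictured as a small framing layer over the aggregated flow. The sketch below uses a 2-byte UID and a 2-byte total length, matching the field sizes mentioned in the text; the exact on-wire layout is our assumption for illustration, not the paper's wire format.

```python
import struct

HEADER = struct.Struct("!HH")          # 2-byte UID + 2-byte total length (assumed layout)

def frame(uid: int, data: bytes) -> bytes:
    """Join-proxy side: prepend UID and total packet length to a client's segment."""
    return HEADER.pack(uid, HEADER.size + len(data)) + data

def deframe(stream: bytes):
    """Split-proxy side: recover (uid, data) records from the aggregated byte stream."""
    while stream:
        uid, total = HEADER.unpack_from(stream)
        yield uid, stream[HEADER.size:total]
        stream = stream[total:]

# Two clients' segments multiplexed over the single long TCP flow.
aggregated = frame(1, b"meter-reading-A") + frame(2, b"meter-reading-B")
print(list(deframe(aggregated)))   # [(1, b'meter-reading-A'), (2, b'meter-reading-B')]
```

Because the length field delimits each record, the split-proxy can demultiplex records even when the long TCP flow delivers several of them back-to-back in one read.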

This modified TCP flow can be accepted by the TCP server of the join-proxy, and vice versa. In a similar way, another fake TCP connection is established between the split-proxy and the server. This TCP connection fakes the TCP client information (which typically comes with an arbitrary source port number assigned by the client's OS). It is worthwhile to note that a TCP flow from the split-proxy to the server with a random source port cannot be associated with the client information in the SDN controller. Therefore, the split-proxy starts the TCP connection with a given source port, NSrcPort, for the SDN controller to retrieve the client information.

After the flow tables are configured, the join-proxy receives data from the clients and retrieves the UID and the client's information (i.e., source IP address and port number). Then it constructs a new packet containing the following application-layer information: "UID+length+data", where 'length' is the total length of the constructed packet. The split-proxy point retrieves the UID and data, and pushes the data to a pre-established socket, as shown in Fig. 3.

We consider a 2-Byte UID, thereby supporting a maximum of 65,532 concurrent TCP flows, and use the 'source IP and port number' information to identify these flows. The TCP flows that come with different TCP options should be handled appropriately at the split-proxy node. With the help of SDN technologies, we can perform deep packet inspection on these packets, and forward the necessary TCP header configuration information to the split-proxy. The split-proxy point can then establish a TCP flow with the same configuration, as if it were the original client exchanging packets with the server (hence the name 'proxy' points). The SDN controller is capable of inspecting the application-layer information over TCP. Therefore, by extending the application-layer functionality, our system can (in the future) also support application-layer join and split frameworks.

4.2. Preserving end-to-end flow semantics with 'Linked-ACK'

In this section, we describe the "Linked-ACK" framework that we have developed to maintain the end-to-end semantics of the flows. As the TCP flow is split into three independent TCP flows, each of the resultant split flows will have its own TCP congestion state and throughput rate. Assuming the client is sending data at a constant rate, the join-proxy maintains a buffer to store the received data from the client. While a larger buffer size is expensive, a small buffer size, on the other hand, negatively impacts the TCP throughput.

Our proposed 'Linked-ACK' provides a better solution, and bounds the buffer to a finite size. For brevity, let us represent the different ACK messages along a server-client network path as follows: the ACK message from 'server to split-proxy' is represented as ACK_split, the ACK message from 'split-proxy to join-proxy' is represented as ACK_join, and the ACK message from 'join-proxy to client' is represented as ACK_client. Our 'Linked-ACK' framework operates in a lock-step fashion wherein an ACK_join is not released until its associated ACK_split packet is released. In a similar way, an ACK_client is released only upon the release of its associated ACK_join. We customize the standard OpenFlow protocol, and add the following four actions: cache ACK_join, cache ACK_client, release ACK_join, and release ACK_client. The modified flow table entries for routing the ACK packets are shown in steps 1 and 2 in Fig. 4. The ACK_client is cached (instead of being released) at switch-1 after the fake TCP connection is established; this fake TCP connection directly connects to the join-proxy node. In a similar way, ACK_join is cached at switch-2.

The ACK_join and ACK_client messages are stored in a FIFO queue data structure, with an exception for the SYN+ACK segment, which is released immediately to complete the three-way handshake. To guarantee that no ACK segments are lost, the length of the FIFO queue is set to a size larger than the sender's total congestion window (CWND) size. Even when the sender-side CWND is large, our approach works due to TCP's 'cumulative ACK' mechanism, wherein one ACK message represents the summary of the received in-order bytes.

Algorithm 1 describes the release of the ACK_join segment from the split-proxy node to the join-proxy node. We maintain an aggregated ACK (namely, aggregateACK) over all ACK_split segments. This is done because a delayed ACK combines several ACK responses together into a single response.
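The lock-step, cumulative-ACK release just described can be sketched in a few lines. In this minimal Python sketch (the class and method names are ours, not the implementation's), cumulative ACK_join numbers wait in a FIFO queue and are released only as far as the server-side ACK_split has advanced:

```python
from collections import deque

class LinkedAckGate:
    """Release cached cumulative ACK_join numbers only as ACK_split advances.

    A hedged sketch of the lock-step idea: a cached ACK is released only once
    the downstream (server-side) flow has acknowledged at least as many bytes.
    """

    def __init__(self):
        self.cached = deque()      # cumulative ACK numbers of ACK_join, FIFO order
        self.acked_split = 0       # highest cumulative ACK seen from the server side

    def cache_ack_join(self, ack_no: int):
        self.cached.append(ack_no)

    def on_ack_split(self, ack_no: int):
        """A new ACK from the server arrived: release every eligible ACK_join."""
        self.acked_split = max(self.acked_split, ack_no)
        released = []
        while self.cached and self.cached[0] <= self.acked_split:
            released.append(self.cached.popleft())
        return released

gate = LinkedAckGate()
for ack in (1000, 2000, 3000):
    gate.cache_ack_join(ack)
print(gate.on_ack_split(2500))   # [1000, 2000]; 3000 stays cached
print(gate.on_ack_split(3000))   # [3000]
```

Note how one large ACK_split (cumulative up to byte 2500) releases two cached ACK_join messages at once, which is exactly why the release must be driven by the acknowledged byte count rather than by counting ACK segments.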
Fig. 4. Linked-ACK: the ACK collection, caching, and distribution with the relevant installed flow tables. At switch1, ACKs from Proxy1 to the clients are cached (over the fake TCP connection), and an ACK from Proxy2 to Proxy1 triggers the release of a cached ACK from Proxy1 to the clients. At switch2, ACKs from Proxy1 to Proxy2 are cached, and an ACK from the server to the clients triggers the release of a cached ACK from Proxy2 to Proxy1. SYN segments skip the caches, and the extra ACK added for the UID+length header is accounted for during release.

Algorithm 1 Release ACK function that runs on the split-proxy node.
1: increasedACK_join += IncreasedACKValue(ACK_split)
2: aggregateACK += extraACK(ACK_split)
3: while true do
4:     if increasedACK_join > aggregateACK then
5:         return
6:     end if
7:     aggregateACK -= increasedACK_join
8:     pop and send ACK_join
9: end while

As TCP uses cumulative ACKs, our 'Linked-ACK' releases ACKs based on the increased ACK value, not on the number of ACK segments. To this end, the IncreasedACKValue() function returns the increased ACK value by comparing the ACK value with the previous ACKs. As a corner case, this function returns 0 if the ACK number is 0, which is caused by a SYN or RST flag.

The extra information, such as the UID and length attached to each data message, has to be taken into account in the byte count of ACK_join. Therefore, a function called extraACK() adds an extra ACK value to the aggregated ACK. Subsequently, using a loop, the algorithm keeps checking the increased ACK value of the ACK_join messages stored in the queue. If an ACK_join has a smaller increased ACK value than the aggregated ACK segment, the ACK_join is dequeued and sent to the join-proxy.

The release ACK_client function, which runs on the join-proxy nodes, works in a similar fashion. This function releases the ACK_client messages to the clients, however with the following difference: the added extra ACK has to be removed, to tally the extra (UID+length) data cached in the join-proxy node.

4.3. Linked-ACK framework based TCP state machine

Fig. 5 shows the extended TCP state diagram with Linked-ACK implemented on the join-proxy node. The traditional TCP state diagram [20] describes the different states of a TCP sender/receiver. Our extended state diagram in Fig. 5, on the other hand, describes the TCP sender and TCP receiver inside the join-proxy node. In the join-proxy node, the TCP receiver (server) receives TCP segments from the end hosts (clients). An application aggregates the TCP flows, and buffers the received segments and the generated ACK messages in the respective data and ACK buffers. A TCP sender in the join-proxy node maintains a connection with the TCP split-proxy node. Fig. 5 shows the exchange of data and ACK segments between the senders and receivers. In addition to the traditional states, the following extra states are used: i) PROXY RECEIVER ESTABLISHED, ii) PROXY SENDER ESTABLISHED, iii) DATA_BUFFERED, iv) RECEIVER_RCVD, v) SENDER_SENT, and vi) ACK_BUFFERED.

In the PROXY RECEIVER ESTABLISHED state, the TCP server accepts TCP connections from the end-host clients. In the RECEIVER_RCVD state, the data received from the clients is pushed into a buffer, and the TCP server transitions to the DATA_BUFFERED state. A TCP sender in the DATA_BUFFERED state reads data from the buffer fed by the receivers. In the PROXY RECEIVER ESTABLISHED state, when an ACK_split message is received from the split-proxy node, the join-proxy node's TCP receiver transitions to the ACK_BUFFERED state and releases the buffered ACK_join messages to the appropriate receivers. To guarantee that no data segments are lost in the buffer, the minimum buffer size is set to the maximum receiver CWND size.
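The buffer-sizing argument above can be illustrated with a small sketch: if the join-proxy never advertises a receive window larger than its free buffer space, a conforming TCP sender cannot overflow the buffer. The class below is a hypothetical illustration (names and numbers are ours), not the framework's code.

```python
class ProxyBuffer:
    """Data buffer inside the join-proxy, bounded by the window it advertises.

    Sketch of the zero-loss argument in the text: the advertised receive
    window never exceeds the free buffer space, so a well-behaved TCP sender
    can never push more bytes than the proxy can store.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity   # sized to the maximum receiver window
        self.buffered = 0

    def advertised_window(self) -> int:
        return self.capacity - self.buffered   # flow control caps the sender

    def receive(self, nbytes: int):
        assert nbytes <= self.advertised_window(), "sender exceeded advertised window"
        self.buffered += nbytes

    def drain(self, nbytes: int):
        """The proxy's TCP sender forwarded bytes toward the split-proxy."""
        self.buffered -= min(nbytes, self.buffered)

buf = ProxyBuffer(capacity=65535)
buf.receive(40000)
print(buf.advertised_window())   # 25535
buf.drain(40000)
print(buf.advertised_window())   # 65535
```

The window shrinks while data is waiting to be forwarded and re-opens as the sender drains the buffer, mirroring how the proxy throttles the client's congestion window through ordinary TCP flow control.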

Fig. 5. TCP state diagram in the Linked-ACK framework. In addition to the traditional TCP states (LISTEN, SYN_SENT, SYN_RCVD, CLOSE_WAIT, LAST_ACK, FIN_WAIT_1, FIN_WAIT_2, and so on), the diagram shows the transitions for the join-proxy node's TCP client toward the split-proxy node, and the transitions for the join-proxy node's TCP server serving the end-host TCP clients ('appl': action performed by the application; 'send': segment is sent; 'rcvd': segment is received).
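The extra proxy states of Fig. 5 can be approximated by a small table-driven state machine. The transition table below reflects our reading of the data and ACK paths of the diagram; the event names are hypothetical labels, not identifiers from the implementation.

```python
# Transition table for the join-proxy's extra states (data and ACK paths only).
# Our interpretation of Fig. 5; event names are illustrative.
TRANSITIONS = {
    ("PROXY_RECEIVER_ESTABLISHED", "rcvd_data"): "RECEIVER_RCVD",
    ("RECEIVER_RCVD",              "buffered"):  "DATA_BUFFERED",   # data pushed to buffer
    ("DATA_BUFFERED",              "sent"):      "SENDER_SENT",     # sender reads and sends
    ("SENDER_SENT",                "rcvd_ack"):  "DATA_BUFFERED",   # back to draining the buffer
    ("PROXY_RECEIVER_ESTABLISHED", "ack_split"): "ACK_BUFFERED",    # new ACK_split received
    ("ACK_BUFFERED",               "released"):  "PROXY_RECEIVER_ESTABLISHED",
}

def step(state: str, event: str) -> str:
    """Advance the proxy state machine by one event."""
    return TRANSITIONS[(state, event)]

# ACK path: an ACK_split arrives, the cached ACK_join is released.
s = "PROXY_RECEIVER_ESTABLISHED"
for ev in ("ack_split", "released"):
    s = step(s, ev)
print(s)   # PROXY_RECEIVER_ESTABLISHED
```

A table-driven form like this makes it easy to check that every extra state has a way back to an established state, which is the property the buffered data/ACK design depends on.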


Normalized goodput performance

5. Performance evaluation and results


1
The network topology shown in Fig. 1 is used for our perfor-
mance study. The considered network is emulated in a Mininet
network environment [21]. Hosts in the Mininet run in differ- 0.8
ent network namespaces with their own set of network inter-
faces, IP and routing tables. Switches of Mininet support Open-
Flow to enable SDN functionalities. Links in the Mininet emu- 0.6
late bandwidth, delay, and packet loss probability. The popular
Floodlight [22] open-source SDN controller is used for managing
flow tables in our experiments. All the TCP flows use the default 0.4
Linux kernel configuration, but the MultiPath TCPs are installed
from [23].
0.2 Regular TCP flows
5.1. Aggregated TCP goodput performance Aggregated TCP flow
0
In this section, we show that the aggregated TCP flow can sub- 0 200 400 600 800
stantially improve the TCP goodput. Using the topology shown in
Fig. 1, we simulated up to a maximum of 800 concurrent TCP Number of flows
flows. The bottleneck link from switch-2 to the server is set to
Fig. 6. Goodput comparison of aggregated TCP flows vs. equivalent regular TCP
1.5 Mbps, and all other links are configured to 1 Gbps. The switch flows.
to proxy links are considered to have unrestricted bandwidth. The
average TCP goodput performance of aggregated TCP, and its non-
aggregated TCP counterpart is shown in Fig. 6. The goodput per- yields better TCP throughput even as the number of TCP flows in-
formance is shown as values normalized using the total link band- creases.
width. We used 95% confidence. Each test lasted for about 180 s.
From Fig. 6, it is clear that our approach is potentially scalable as 5.2. Linked-ACK throughput performance
the goodput remained consistently higher with an increase in the
number of flows. On the other hand, the regular TCP goodput suf- With our ‘Linked-ACK’ framework implementation, the through-
fered throughput degradation with an increase in the number of put performance in the sub-path between ‘split-proxy and the
flows. Therefore, we can conclude that, for the aggregated TCP flows, the throughput along the 'split-proxy to server' path gets synchronized with the throughput along the sub-path between 'client and join-proxy'. Fig. 7 shows the TCP throughput performance of a long TCP flow from a single client to the server with the 'Linked-ACK' framework. It is clear that both flows have about the same throughput performance. Fig. 8 shows the total received bytes of the flows on the respective sub-paths of client to join-proxy, and split-proxy to server. It is clear that these flows have almost the same number of total received bytes, which indicates that they are well synchronized.

From an individual flow perspective, it is to be noted that the aggregate ACK mechanism incurs a slightly higher delay, as each flow travels an extra length to the join and split proxy nodes. However, this lock-step ACK mechanism provides synchronized buffer usage at the intermediate nodes, which can help to improve the performance of interactive flows.

Fig. 7. Linked-ACK throughput of the client to the join-proxy (shown as client), and the split-proxy to the server (shown as server). These two flows are synchronized by our 'Linked-ACK' framework.

Fig. 8. Linked-ACK total received bytes of the client to join-proxy flow (shown as client), and the split-proxy to server flow (shown as server).

5.3. Proxy buffer analysis

Fig. 9 shows a time plot of a client's congestion window size and the buffer sizes of the join-proxy and split-proxy nodes. The client's congestion window size is usually higher than the buffer size of each proxy. By exploiting TCP's flow control mechanism, the proxy nodes can limit the maximum receiver window size in order to control the sender client's congestion window size. Our framework guarantees zero packet loss at the proxy application layer when the minimum buffer size of each TCP flow is set to the maximum receiver window size. While the buffer size in the proxy is small for most of the time, it only shoots up when the TCP congestion window size reduces (due to packet loss). Also, the total buffer size of the join-proxy and split-proxy is typically not larger than the TCP congestion window size.

Fig. 9. Linked-ACK client congestion window size and the two proxies' queue sizes. The queue size of each proxy is always smaller than the congestion window size.

5.4. Fairness application

In this section, with the help of a weighted round robin scheduler in our flow joining framework, we show that a better TCP fairness among different flows can be achieved. However, this application in its native form doesn't preserve the TCP end-to-end semantics, and it also requires an unbounded buffer size, making the implementation less practical. We hence integrated this fairness framework with our linked-ACK to solve the unbounded buffer size problem and preserve end-to-end semantics. Figs. 10 and 11 respectively show the flow of data and ACK segments in our proposed aggregation framework with Weighted Fair Queuing (WFQ). Fig. 12 compares the throughput of 3 long TCP flows in the absence of the fairness framework. We consider the same round trip time (RTT) for all of the flows. Fig. 13 shows the throughput of 3 long flows with the integrated fairness application. We use Jain's fairness index to study the throughput fairness of flows in our experiments. Fig. 14 shows the Jain's fairness index values of the 3 TCP flows in the traditional setup of Fig. 12, and of the 3 TCP flows in the setup integrated with our proposed fairness framework of Fig. 13. The Jain's fairness index F is shown in Eq. (1):

F(x_1, x_2, \ldots, x_n) = \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n \sum_{i=1}^{n} x_i^2}   (1)

where x_i represents the throughput of flow i, n is the total number of flows, and F = 1 stands for complete fairness, where each flow gets an equal share of the bandwidth.
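Jain's index in Eq. (1) is straightforward to compute; the helper below is a small illustrative sketch (not part of the paper's codebase):

```python
def jain_fairness_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).

    Returns 1.0 for a perfectly fair allocation and 1/n when a
    single flow receives all of the bandwidth.
    """
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sum_sq)

# Three flows with equal throughput are perfectly fair.
print(jain_fairness_index([1.0, 1.0, 1.0]))  # 1.0
# One flow starving the other two gives the minimum value, 1/3.
print(jain_fairness_index([3.0, 0.0, 0.0]))
```

For n = 3 flows the index thus ranges from 1/3 (one flow takes everything) to 1 (equal shares), matching the range shown in Fig. 14.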

Fig. 10. Schematic diagram showing the flow of data segments between clients and server in our proposed network framework with flow aggregation using Weighted Fair Queue (WFQ).

Fig. 11. Schematic diagram showing the flow of ACK segments between clients and server in our proposed network framework with flow aggregation using Weighted Fair Queue (WFQ).

Fig. 12. Throughput performance of three TCP clients' flows in a traditional framework with no aggregation and split proxy nodes.

Fig. 13. Throughput performance of three TCP flows in a framework with a Weighted Round Robin data structure based proxy aggregation node.
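As a rough illustration of the weighted round-robin aggregation described in Section 5.4, the sketch below serves per-client segment queues into a single aggregate order. The queue contents, client names, weights, and budget are illustrative; this is a simplification, not the paper's implementation:

```python
from collections import deque

def wrr_aggregate(queues, weights, budget):
    """Weighted round robin over per-client segment queues.

    queues  -- dict: client -> deque of pending data segments
    weights -- dict: client -> positive integer (segments served per round)
    budget  -- total number of segments to emit into the aggregate flow
    Returns the aggregated service order as (client, segment) pairs.
    Clients are visited in dict insertion order within each round.
    """
    order = []
    while budget > 0 and any(queues.values()):
        for client, weight in weights.items():
            q = queues[client]
            # Serve at most `weight` segments from this client per round.
            for _ in range(min(weight, len(q))):
                if budget == 0:
                    break
                order.append((client, q.popleft()))
                budget -= 1
    return order

queues = {
    "client1": deque(["a1", "a2", "a3", "a4"]),
    "client2": deque(["b1", "b2"]),
}
# client1 gets twice the service share of client2 per round:
# a1, a2, b1, a3, a4, b2
print(wrr_aggregate(queues, {"client1": 2, "client2": 1}, budget=6))
```

A deficit counter per queue (as in deficit round robin) would extend this sketch to variable-size segments.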

Fig. 14. Jain's fairness index ranges from 1/3 to 1. The Weighted Round Robin (WRR) application provides better fairness than the regular counterpart.

Fig. 15. Throughput performance of three TCP flows in the presence of a UDP background flow in the traditional no-aggregation experiment setup. Background traffic is added at time T = 15 s.

Fig. 16. Throughput performance of three TCP flows in the presence of a UDP background flow using our proposed flow aggregation and split framework. Background traffic is added at time T = 15 s.

Fig. 17. Throughput fairness comparison of the proposed aggregation and split framework vs. the traditional framework in the presence of UDP background traffic.

The average fairness indices of Figs. 12 and 13 are 0.8914 and 0.9993, respectively. The integrated fairness application provides close-to-1 fairness, which is 12.1% better than the original TCP flows.

We also evaluated the performance of our framework in the presence of background traffic. Without loss of generality, the presence of background traffic will create additional flows competing for bandwidth. However, this will not affect the fairness of the aggregated flow. The plots in Figs. 15-17 show the throughput and fairness performance of the different flows in the presence of a single UDP background flow. It is evident that, other than the initial transient period during the joining of the UDP flow, better fairness is achieved by our aggregation and split framework. The peak in the initial transient period in Fig. 17 is caused by bandwidth competition between the added UDP traffic and the aggregated TCP flow. After this transient period, the aggregated flow performance was found to be stable. In a scenario with a large number of UDP background flows dynamically joining the network and competing with the aggregated TCP flow, the UDP flows would naturally overpower the TCP flows and break the fairness. However, in such scenarios, there are two possible solutions. One solution is reserving exclusive bandwidth for the aggregated TCP flow. In this isolated transport channel, the aggregated TCP can be controlled to achieve fairness along similar lines as our framework. Another possible solution is to aggregate all background traffic (if permissible) into a separate aggregated TCP flow.

In addition to static flows, our framework also supports dynamic addition of flows that arrive at different points in time. With a small customization, our framework can support dynamic arrival of flows and aggregate them regardless of their arrival times. To enable this, the join-proxy/split-proxy nodes were added with a customized action that appropriately adjusts the sequence numbers of the joining flows. The throughput performance of dynamic addition of flows is shown in Fig. 18. In the absence of our proposed framework, the performance would be similar to that of the traditional TCPs. In other words, this performance would be along similar lines to the plot shown in Fig. 12.
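The sequence-number adjustment that lets a late-arriving flow join an ongoing aggregate can be pictured as a per-flow offset map. The sketch below is a simplification (the class and field names are our own, not from the paper's code) and ignores retransmissions and the reverse rewriting of ACKs:

```python
class SeqRewriter:
    """Maps each joining flow's sequence space onto the aggregate flow.

    When a flow joins, the proxy records the difference between the
    flow's initial sequence number and the aggregate flow's next
    sequence number; every later segment is shifted by that offset.
    """
    MOD = 2 ** 32  # TCP sequence numbers wrap at 2^32 (RFC 793)

    def __init__(self, aggregate_next_seq):
        self.next_seq = aggregate_next_seq
        self.offsets = {}

    def join(self, flow_id, flow_isn):
        # Offset that maps flow_isn onto the aggregate's next sequence number.
        self.offsets[flow_id] = (self.next_seq - flow_isn) % self.MOD

    def rewrite(self, flow_id, seq, payload_len):
        out = (seq + self.offsets[flow_id]) % self.MOD
        self.next_seq = (out + payload_len) % self.MOD
        return out

rw = SeqRewriter(aggregate_next_seq=5000)
rw.join("client2", flow_isn=100)          # client2 arrives at T = 20 s
print(rw.rewrite("client2", 100, 1460))   # 5000
print(rw.rewrite("client2", 1560, 1460))  # 6460
```

The modular arithmetic keeps the mapping valid across the 2^32 sequence-number wrap-around.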

Fig. 18. Throughput performance of dynamic addition/removal of TCP flows in our proposed flow aggregation and split framework. The client2 flow arrives at time T = 20 s, and client3 arrives at time T = 40 s. Client1 and client2 leave the network at respective times of T = 100 s and T = 130 s.

Fig. 19. TCP throughput performance in a wireless network environment.
5.5. Wireless application

In this section, we study the performance of the network in the presence of wirelessly connected clients. In this network, each TCP flow experiences packet loss from both the wireless and wired parts of the network. The join-proxy node placed at the boundary of the wireless and wired networks prevents the influential factors of the wireless TCP from being carried over to the wired part of the network, and vice versa. In our simulation, the wireless network link loss rate is set to p_1 = 1%. We also assume the background traffic to cause a packet loss rate of p_2 = 1%.

The closed-form expression [24] of the TCP throughput T is shown in Eq. (2):

T = \frac{MSS \times C}{RTT \times \sqrt{p}}   (2)

where MSS is the Maximum Segment Size, C is a constant value, and p is the link loss rate. The queue loss is factored out by setting the queue size to a large value. Without the proxy-based framework, the throughput T_1 is given by

T_1 = \frac{MSS \times C}{(t_1 + t_2 + \delta_1) \times \sqrt{p_1 + (1 - p_1) p_2}} \approx \frac{MSS \times C}{(t_1 + t_2 + \delta_1) \times \sqrt{p_1 + p_2}}   (3)

where p_1 and p_2 are the link loss rates on the respective links of client1-switch1 and switch2-server, t_1 and t_2 are the propagation delays on the respective links of client1-switch1 and switch2-server, and \delta_1 is the total queuing delay along the path client1-switch1-switch2-server. In our simulation, unless otherwise specified, the p_1 and p_2 values are both set to 1%. The propagation delays t_1 and t_2 are both set to 40 ms.

On the other hand, the TCP throughput with our proposed proxy-based framework is given by

T_2 = \min\left( \frac{MSS \times C}{(t_1 + t_2 + \delta_2) \times \sqrt{p_1}}, \; \frac{MSS \times C}{(t_2 + \delta_3) \times \sqrt{p_2}} \right)   (4)

where \delta_2 is the total queuing delay along the path client1-switch1-proxy1-switch1-switch2-proxy2-switch2-server, and \delta_3 is the total queuing delay along the path proxy2-switch2-server. As shown in Eq. (4), T_2 is computed as the minimum throughput of the following two paths: between the client and the join-proxy node, and from the split-proxy node to the server. Though the proxy node separates the respective packet losses of the wired and wireless counterparts, the delay is influenced by the entire network. Therefore,

t_1 + t_2 + \delta_2 > t_2 + \delta_3.   (5)

The ratio of the throughputs from the respective frameworks is given in Eq. (6):

\frac{T_1}{T_2} = \frac{(t_1 + t_2 + \delta_2) \times \sqrt{p_1}}{(t_1 + t_2 + \delta_1) \times \sqrt{p_1 + p_2}}.   (6)

Fig. 19 shows the respective throughputs T_1 and T_2; their average values were found to be 0.915 Mbps and 1.12 Mbps, respectively. Fig. 20 shows the respective total delay values of t_1 + t_2 + \delta_1 and t_1 + t_2 + \delta_2. By substituting the values into the throughput ratio T_1/T_2, the simulation value 0.915/1.12 = 0.8169 matches the right side of Eq. (6), which is 0.8145.

Fig. 20. TCP delay performance in a wireless network environment.

Hence the matching TCP throughput model validates our proposed 'linked-ACK' based flow aggregation framework, and also proves that the TCP performance is improved.
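The model in Eqs. (2)-(6) can be checked numerically. In the sketch below, the MSS, loss rates, and propagation delays follow the values stated above, while the queuing delays d1, d2, d3 are assumed placeholders (the measured values behind Figs. 19 and 20 are not reproduced here):

```python
from math import sqrt

MSS = 1460       # bytes
C = sqrt(3 / 2)  # constant from the Padhye et al. model [24]

def throughput(rtt, p):
    """Eq. (2): T = MSS * C / (RTT * sqrt(p))."""
    return MSS * C / (rtt * sqrt(p))

# Loss rates and propagation delays as in the simulation (p1 = p2 = 1%,
# t1 = t2 = 40 ms); the queuing delays d1, d2, d3 are assumed values.
p1, p2 = 0.01, 0.01
t1, t2 = 0.040, 0.040
d1, d2, d3 = 0.010, 0.012, 0.006

# Eq. (3): no proxy -- one end-to-end path sees both loss rates.
T1 = throughput(t1 + t2 + d1, p1 + (1 - p1) * p2)
# Eq. (4): with proxies -- the slower of the two decoupled sub-paths.
T2 = min(throughput(t1 + t2 + d2, p1), throughput(t2 + d3, p2))
# Eq. (6): ratio of the two frameworks.
ratio = ((t1 + t2 + d2) * sqrt(p1)) / ((t1 + t2 + d1) * sqrt(p1 + p2))

# T1/T2 agrees with Eq. (6) up to the approximation made in Eq. (3).
print(T1 / T2, ratio)
```

With any delay values satisfying Eq. (5), the first sub-path is the bottleneck in Eq. (4), which is what makes the ratio in Eq. (6) valid.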

The extra delay caused by the large queue size in the proxy (shown in Fig. 20) can be reduced by assigning a smaller maximum receiver window size value on the proxy side.

5.6. MPTCP application

MPTCP was proposed as an extension to TCP in order to enable multipath forwarding. It provides the ability to simultaneously use multiple paths between peers to improve robust data transport and throughput [5]. MPTCP is feasible on devices with two or more network interfaces, such as the latest smart phones and tablets that come with WiFi and cellular radios. With MPTCP it is also possible that the different subflows from the wireless network are forwarded onto a same path towards the server on the wired network. In our framework, the linked-ACK based join-proxy node placed at the start of the overlapped path converts MPTCP to regular TCP. In this setting, MPTCP typically has to be installed on both client and server.

By using our 'linked-ACK' based proxy node framework in this setting, we can have multiple benefits. For instance, the server doesn't need to have MPTCP installed, as the proxy node aggregates the subflows into one native TCP flow. In this manner, the drawback of MPTCP sub-flows propagating on the overlapped path can be avoided. In the simulation, the topology of MPTCP is slightly different from the one in Fig. 1. Instead of having one path from client1 to switch1, client1 has two interfaces connecting two non-overlapped paths to switch1. Client1 establishes the MPTCP connection with the join-proxy node, and two subflows transfer data over two different paths. The join-proxy only marks the source IP address of the first subflow to the socket. All data received from the MPTCP socket are attached with the flow ID of the first subflow. Therefore, the split-proxy establishes one new TCP flow with the server by using the first subflow's IP address. The flows between join-proxy and split-proxy, and from split-proxy to server, run on regular TCP with the CUBIC congestion algorithm. Fig. 21 shows that the throughput received at the server matches well with the total MPTCP throughput performance, and the throughput drop on regular TCP reflects well on all MPTCP subflows.

Fig. 21. Multipath TCP throughput with proxy. Two subflows of MPTCP are successfully converted to conventional TCP. The linked-ACK preserves the end-to-end semantics and synchronizes well with the MPTCP subflows.

6. Proposed framework in the presence of security solutions

Our proposed framework is a flow-level aggregation/splitting mechanism which directly relates to the application services that generate and consume the flow data. At this granularity, flow (or application) level security services that lie above the transport layer (such as SSL) can be more suitable. IPsec based tunnelling, on the other hand, is a broadly used network layer security mechanism (independent of any specific application or flow). In this case, we expect the aggregation and split proxies to be a part of the IPsec VPN (running IPsec on each of them). The SDN controller is also expected to have the network view and control through IPsec connections and forwarding rules, wherein the controller takes over tunnel management from the traditional gateways, similar to the work proposed in [25].

7. Conclusion

In this work, we have proposed and implemented a generic join-and-split SDN framework for aggregating and splitting TCP flows, with a 'linked-ACK' mechanism to preserve end-to-end semantics. The framework developed is implemented in a user-agnostic manner so as to make it more practical. With extensive simulation experiments, we have demonstrated the efficacy of our proposed framework. We have shown the following benefits achieved by our proposed framework: i) an improved TCP goodput performance, ii) improved buffer usage at the respective split and join nodes, iii) fairness among different client flows, iv) improved wireless network throughput, and v) an integrated MPTCP based proxy node which provides a hybrid implementation supporting MPTCP nodes alongside traditional TCP flows. In future, we plan to extend this work to support application-specific optimizations by exploiting our flow aggregation and splitting framework. In addition, a generic system to support multiple sets of aggregation and split points involves many non-trivial cases to consider. To name a few, case (i): an already aggregated flow meets another aggregator node; case (ii): a split flow meets another split node. Synchronizing flow information in such use-cases in a multiple aggregation and split point scenario requires a radical change in our design framework. We plan to develop such a framework in future. We also plan to study the performance of the proposed network in large networks to investigate the feasibility of similar trends in a scaled manner.

References

[1] SDN Architecture, Last accessed: 13 July 2017 (https://www.opennetworking.org).
[2] K. Ratnam, I. Matta, WTCP: an efficient mechanism for improving wireless access to TCP services, Int. J. Commun. Syst. 16 (1) (2003) 47–62.
[3] P. Siano, Demand response and smart grids–a survey, Renewable Sustainable Energy Rev. 30 (2014) 461–478.
[4] RFC 6182 - Architectural guidelines for multipath TCP development, 2011 (https://tools.ietf.org/html/rfc6182).
[5] D. Wischik, C. Raiciu, A. Greenhalgh, M. Handley, Design, implementation and evaluation of congestion control for multipath TCP, in: NSDI '11: Proceedings of the 8th USENIX Conference on Networked Systems Design and Implementation, 2011, pp. 99–112.
[6] M. Jarschel, F. Wamser, T. Hohn, T. Zinner, P. Tran-Gia, SDN-based application-aware networking on the example of YouTube video streaming, in: EWSDN '13: Proceedings of the Second European Workshop on Software Defined Networks, 2013, pp. 87–92.
[7] G. Hampel, A. Rana, T. Klein, Seamless TCP mobility using lightweight MPTCP proxy, in: MobiWac '13: Proceedings of the 11th ACM International Symposium on Mobility Management and Wireless Access, 2013, pp. 139–146.
[8] T. Khalifa, A. Abdrabou, K. Naik, M. Alsabaan, A. Nayak, N. Goel, Split- and aggregated-transmission control protocol (SA-TCP) for smart power grid, IEEE Trans. Smart Grid 5 (1) (2014) 381–391.
[9] W. Guo, V. Mahendran, S. Radhakrishnan, Achieving throughput fairness in smart grid using SDN-based flow aggregation and scheduling, in: STWiMob '16: Proceedings of the IEEE WiMob Workshop on Selected Topics in Wireless and Mobile Computing, 2016, pp. 1–7.

[10] Y. Xu, V. Mahendran, S. Radhakrishnan, Towards SDN-based fog computing: MQTT broker virtualization for effective and reliable delivery, in: COMSNETS '16: Proceedings of the 8th International Conference on Communication Systems and Networks, 2016, pp. 1–6.
[11] W. Guo, V. Mahendran, S. Radhakrishnan, Improved video throughput and reduced gaming delay in WLAN through seamless SDN-based traffic steering, in: CCNC '17: Proceedings of the IEEE Consumer Communications and Networking Conference, 2017, pp. 1–4.
[12] I. Moiseenko, D. Oran, TCP/ICN: carrying TCP over content centric and named data networks, in: ACM-ICN '16: Proceedings of the 3rd ACM Conference on Information-Centric Networking, 2016, pp. 112–121.
[13] G. Hasegawa, M. Nakata, H. Nakano, Receiver-based ACK splitting mechanism for TCP over wired/wireless heterogeneous networks, IEICE Trans. Commun. E90-B (5) (2007) 1132–1141.
[14] J. Navarro-Ortiz, P. Ameigeiras, J.J. Ramos-Munoz, J. Lopez-Soler, Removing redundant TCP functionalities in wired-cum-wireless networks with IEEE 802.11e HCCA support, Int. J. Commun. Syst. 27 (11) (2014) 3352–3367.
[15] T. Klein, H. Georg, MPTCP proxies and anchors, 2012 (https://tools.ietf.org/html/draft-hampel-mptcp-proxies-anchors-00).
[16] X. Zhang, Z. Cheng, R. Lin, L. He, S. Yu, H. Luo, Local fast reroute with flow aggregation in software defined networks, IEEE Commun. Lett. 21 (4) (2017) 785–788.
[17] K. Mizuyama, Y. Taenaka, K. Tsukamoto, Estimation based adaptable flow aggregation method for reducing control traffic on software defined wireless networks, in: PerCom Workshops '17: Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops, 2017, pp. 363–368.
[18] T. Kosugiyama, K. Tanabe, H. Nakayama, T. Hayashi, K. Yamaoka, A flow aggregation method based on end-to-end delay in SDN, in: ICC '17: Proceedings of the IEEE International Conference on Communications, 2017, pp. 1–6.
[19] W. Guo, V. Mahendran, S. Radhakrishnan, End-user agnostic join and fork framework for TCP flows in SDN, in: CCNC Demo '17: Proceedings of the IEEE Consumer Communications and Networking Conference, 2017, pp. 616–617.
[20] RFC 793 - Transmission control protocol - protocol specification, 1981 (https://tools.ietf.org/html/rfc793).
[21] Mininet virtual network emulation environment, Last accessed: 7 March 2018 (http://mininet.org).
[22] Project Floodlight, Last accessed: 7 March 2018 (http://projectfloodlight.org/floodlight/).
[23] MultiPath TCP Project, Last accessed: 7 March 2018 (http://multipath-tcp.org).
[24] J. Padhye, V. Firoiu, D. Towsley, J. Kurose, Modeling TCP throughput: a simple model and its empirical validation, SIGCOMM Comput. Commun. Rev. 28 (4) (1998) 303–314.
[25] W. Li, F. Lin, G. Sun, SDIG: toward software-defined IPsec gateway, in: ICNP '16: Proceedings of the International Conference on Network Protocols, 2016, pp. 1–8.

Wei Guo received his Ph.D. degree in Computer Science from the University of Oklahoma, USA in 2017. He received his Master's degree in Computer Science from Beijing University of Posts and Telecommunications (BUPT), Beijing, China in 2013. His research interests include Software-Defined Networks (SDNs), data center networks, and network security.

V. Mahendran is an Assistant Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Tirupati, India. He obtained his Ph.D. degree in Computer Science and Engineering from the Indian Institute of Technology Madras in 2013. He received his B.E. degree in Computer Science and Engineering from Periyar University, India, in 2002 and his M.E. degree in Embedded System Technologies from the College of Engineering, Guindy (CEG), Anna University, India, in 2007. His research interests include Software-Defined Networks, RFID Systems, Delay-Tolerant Networks, and Mobile Ad-hoc Networks.

Sridhar Radhakrishnan is a Professor in the School of Computer Science at the University of Oklahoma, which he joined in 1990. He received
his Ph.D. in Computer Science from Louisiana State University in 1990. He received his undergraduate degrees from Vivekananda College, Chennai,
India and from University of South Alabama, Mobile, Alabama. His research interest is in the design of protocols for broadband, wireless and mobile
networks. He has published many research articles in journals, conference proceedings, and book chapters.
