In Partial Fulfillment
Of the Requirements for the Degree
Master of Science
By
Ram Gandhi Arumugaperumal
008851466
ram.gandhi@sjsu.edu
512-788-1579
669-222-9542
05/04/2015
Department
Approval
______________________________________
_________________
Dr. Greg Bernstein
Date
Project Advisor
______________________________________
_________________
Dr. Nader Mir
Date
Project Co-Advisor
______________________________________
_________________
Dr. Thuy T. Le
Date
Graduate Advisor
ACKNOWLEDGEMENT
We would like to express our deepest appreciation to all those who made it possible for us to complete this report. We are grateful to our project advisors, Dr. Greg Bernstein and Dr. Nader Mir, whose stimulating suggestions and encouragement helped us to coordinate this project. Dr. Bernstein gave us the opportunity to explore this area of networking and shared his expertise through his guidance. We express our gratitude to the Electrical Engineering department for providing the resources and regular guidelines needed to complete the report. We would like to thank our family members for their support and understanding.
Finally, we thank all the engineers whose contributions make society better.
ABSTRACT
This project aims to create a secure, robust, and proactive network environment that can counteract ever-evolving cyber threats more efficiently than current methods. To achieve this goal, we utilize Software Defined Networking (SDN) technology by adding a security control application on top of the SDN network controller. Our SDN security application takes evasive actions (reconfiguration) on its own rather than placing all responsibility for security actions on the user.
The main attacks taken into consideration are DoS, DDoS, and other prevalent cybersecurity threats. A key part of our security controller application is that reconfiguration takes place automatically: various parameters of the network have been defined which, when they exceed their threshold values, force the controller to reconfigure the network to make it more secure.
This project exploits the ability to create queues dynamically and change the policy of the switch/router in the SDN network to route matching flows via QoS-scheduled queues, in order to achieve security against attacks that abuse bandwidth, such as DoS.
TABLE OF CONTENTS
TABLE OF FIGURES
1. INTRODUCTION
   1.1 BACKGROUND
2. PROJECT IMPLEMENTATION
3. Schedule
4. CONCLUSION
5. REFERENCE
TABLE OF FIGURES
1. INTRODUCTION
This project focuses on building an intelligent network that senses an incoming security attack and adjusts the SDN policies on the fly so that the network remains stable and functional.
There are many different kinds of attacks possible on a computer network, and the motive of a network attack differs with its type. Some attacks aim to steal the data flowing between a particular source and destination, and sometimes also to modify it, e.g., a man-in-the-middle attack. Other attacks focus on convincing the components of the network that the attacker is someone else, so as to gain access to restricted areas, e.g., IP spoofing. The last and most common type of attack makes a resource unavailable so that it cannot provide its service to its intended users, e.g., DoS. The project's prime focus is to identify attacks that aim to max out a network resource and deny its service to those who ask for it, and to apply appropriate measures so that service is still granted to the legitimate clients residing in the network. This is achievable using bandwidth monitoring and control at the flow level. More about this is discussed in the later sections.
Our algorithm includes a method to calculate the hardware resources available, monitor the traffic bandwidth per flow, and fix a threshold bandwidth per flow above which a particular traffic flow is considered possibly malicious. When the threshold is breached, we create queues for each flow, and each flow is then directed to its respective queue. On the whole, our hypothesis is that a pro-active network monitor can analyze the incoming traffic and optimize the amount of traffic that the network actually handles without creating congestion. The project uses an SDN architecture on which the security algorithm works.
1.1 BACKGROUND:
Software Defined Networking is a rapidly advancing concept that involves decoupling the data plane from the control plane. It is this separation of functional planes that makes the network more robust and pro-active. This layout gives us the opportunity to have a centralized control plane, which results in more dynamic control over the whole network [2]; the functionalities of each device can also be customized according to the needs of the particular situation the network faces [1]. SDN provides us with the ease of controlling all the components in the network with a centralized controller rather than individually controlling each and every device present in the network [3].
SDN Controller:
The SDN controller is a centralized controller that possesses a global view of the complete network. It contains all the software entities, since the controller is the brain of an SDN network. As the controller knows all the paths, it optimizes each flow by choosing the ideal route, which in turn helps in optimizing the hardware of the network. Controllers these days have graphical interfaces that give users a pictorial representation of the entire network the administrator possesses.
The SDN architecture consists of three layers [4]: at the bottom is the switching layer, which is connected to the hardware; the middle layer consists of the controller; and the layer at the top contains the applications. These applications trigger and produce the spark for the flow of packets in the network.
Southbound API:
The Southbound API, or Southbound Protocol, is essentially a set of APIs and protocols that mediate between the lowest level and the middle level of the SDN layers [5]. The main focus of the Southbound API is to provide an effective communication channel through which the controller at the middle level installs control-plane decisions, thereby controlling the data plane.
Northbound API:
Northbound API is a bundle of APIs and protocols to facilitate the interaction between the
Applications (top layer of the SDN) and the middle layer possessing the controller. It is with the
Northbound API, that the applications send a required command to the controller [6], which adjusts
the various parameters of the network required to service the request by the specific application.
Governments and organizations, in their quest to reduce the costs of building and maintaining a network, have always looked for innovative ways to get the advantages of a high-powered network without owning one. One of the most promising innovations in this direction is the use of Software Defined Networking (SDN). Since the SDN controller is event-based, we can program it to listen for many events and execute certain functions as handlers [7]. The capability we exploit in this project is listening for specific events and creating queues in the switches, thereby directing specific flows to specific queues. This dynamic queue creation, once the router or switch is alive in the network, is not possible in a traditional network.
One way SDN helps our project is in large topologies: with only one SDN switch reporting, the controller could look up all the routers sending packets to the congested interface and perform bandwidth control (akin to end-to-end congestion control); the way it is done now, each SDN switch reports the congestion, so control is applied across the whole path.
One of the major concerns while maintaining the sanctity of a network is that most existing attack strategies are based on creating congestion in the network and making networking components such as switches, routers, and firewalls vulnerable. Another issue with these types of attacks is that, in a scenario where there is a huge load or heavy congestion on a particular path carrying legitimate traffic, the existing security measures will not differentiate between the different flows [8], which in turn makes the congestion more severe. The network components themselves cannot be relied upon in case of an attack; for example, a Layer 2 switch can easily be subjected to a MAC flooding attack, which essentially turns the switch into a hub, broadcasting traffic on all its ports rather than on a single one as mandated for an L2 switch [9]. This can be easily overcome by making use of SDN, where the traffic flow can be adjusted by the controller and customized according to the needs of a network manager. Since we use SDN, we can overcome this issue by asking the controller to re-direct flows that are identified as malicious [10].
[Figure: test-bed topology — a Windows host running VMware Workstation; an Ubuntu virtual machine containing the POX controller (POX port 6635, interface eth0); an Ubuntu VM as Host 2; virtual ports vPort1 and vPort2]
To create the required SDN topology, we depend heavily on virtual machines (VMs). We create two virtual machines. The first is an Ubuntu virtual machine containing the POX controller, which takes care of all the bandwidth calculations and the routing mechanisms. The interface eth0 on this VM is set to bind to port 6635 for the POX controller to send the control messages [14]. In addition, we also use two other ports, 1234 and 2345, to send and receive additional control messages that are not OpenFlow supported.
Since the interface (eth0) of the virtual machine was proving difficult to access from outside, we decided to create a separate Linux Mint virtual machine running the Open vSwitch, on which we simulate two live hosts. Again, the traffic on this machine primarily uses the two ports 1234 and 2345.
The threshold value above which traffic is termed rogue is determined in the controller. This value is measured by the controller depending on the total load the controller is expected to handle at any given time. It also depends on the resources at hand, such as the number and capacity of the switches. This is a sensitive task, since a wrong input may completely defeat the purpose of the project. A large value may allow all the traffic to pass through the network, and ultimately the project will not make any significant impact [15]. A low value will cause the controller to consider all traffic, including legitimate traffic, as malicious; as a result, all traffic will have a lower processing rate, which is particularly dangerous when the network is handling time-sensitive data.
2. PROJECT IMPLEMENTATION
2.1 Working
The project uses a bandwidth control algorithm that allows for the detection of the good and bad flows in the network, after which certain resource restrictions are placed on the different flows, thereby controlling the illegitimate traffic [16]. This section gives an explanation of the architecture, the different resources used for the algorithm to work, and finally the working of the algorithm and therefore of the project.
2.2 Project Architecture
This project was developed on a Software Defined Networking (SDN) based network architecture. The hosts in the SDN were virtual machines running Ubuntu distros. The controller in use was a POX controller, which is based on Python. An Open vSwitch is also used. The whole setup is virtual-machine based, i.e. separate VMs were used to host the different components of the project on top of a Windows machine. An Ubuntu VM, created using VMware Workstation, is used to host the POX controller. Another VM running Linux Mint is used to host the Open vSwitch. The Open vSwitch creates a virtual bridge [17] to which the eth0 internet interface is connected, along with two other virtual interfaces, vport1 and vport2. These vports are tun/tap ports that were created to provide interface points for connecting the VMs. Inside this Linux Mint VM, two other VMs running Ubuntu are created and connected using the bridged architecture.
The POX controller running on the VM is also made to run the forwarding.l2_learning module. This module helps in detecting any OpenFlow switches in the network.
The code snippet above shows how the ConnectionUp event, triggered whenever a switch comes up, is handled. This is what enables the controller to learn of any switches that come up in the SDN network. It also floods the network to find any switches in the network.
The bandwidth control algorithm works on both the controller and the Open vSwitch ends. On the controller side, the Python module DRoN Algorithm is used. This module integrates with the POX controller and implements the needed functionality. Its counterpart algorithm runs on the Open vSwitch to send and receive notifications and messages from the controller side and act accordingly.
2.2.1 In the POX controller
DRoN Algorithm is the Python module that runs along with the POX controller. It knows the IP address of the controller and uses two ports to create two sockets. Note that these are not the usual ports defined in the OpenFlow standards: the standard OpenFlow port 6633 is used by the controller to send and receive notifications and messages from the switches in the network, while these two ports are for exchanging data, notifications, and other stats between the controller, the switch, and the other hosts. These ports are used only by the DRoN Algorithm and its counterpart in the vSwitch. The port used by the DRoN Algorithm to send messages to the switch is 1234, and the port used by the switch to send messages to the DRoN Algorithm is 2345. The socket APIs are used for retrieving the IP address and binding it to the ports.
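A minimal sketch of this socket setup follows. The port numbers come from the report; the transport protocol is not stated, so UDP is assumed here, and the controller address is a stand-in:

```python
import socket

SEND_PORT = 1234   # DRoN Algorithm -> switch messages (per the report)
RECV_PORT = 2345   # switch -> DRoN Algorithm messages (per the report)

def make_recv_socket(ip, port):
    """Bind the socket on which the DRoN module listens for switch messages."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((ip, port))
    return sock

def make_send_socket():
    """Socket used to push control messages toward the switch on SEND_PORT."""
    return socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
```

In deployment, the controller's own IP (retrieved via the socket API, as the report describes) would be passed to make_recv_socket together with RECV_PORT.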
2.2.2 In the Open vSwitch
The Open vSwitch provides methods to create different virtual bridges and to assign the
existing ports as well as virtual ports. The following is the architecture that is created and deployed
in the virtual switch.
A virtual bridge is created using the ovs-vsctl commands. eth0 is the default interface on a Linux machine, through which the machine's network stack communicates with the outside world. This eth0 is added as one of the interfaces on the vBridge. In the topology the project is implemented on, there are VMs created inside the VM in which the Open vSwitch is running, so in order for these level-2 VMs to get connected, two tun/tap ports [19] are created. Tun/tap ports are software-only ports; in other words, they are ports known only to the kernel and do not exist physically. After these tun/tap ports are created, they too are added to the vBridge. Once all these interfaces are added to the vBridge, the IP address of the eth0 port is flushed and the vBridge is configured as a DHCP client, thereby getting an IP address. This is done using the dhclient vBridge command. After the vBridge gets an IP, the controller IP and port must be configured for that vBridge, also accomplished using the ovs-vsctl commands; the Open vSwitch does not otherwise know how to find its controller and communicate with it. On the other hand, the POX controller running the l2_learning module has the ability to send BPDU packets and learn of any switches in the network.
The commands to configure the Open vSwitch are as follows:
#Create a bridge and attach the physical interface
ovs-vsctl add-br eth0br
ovs-vsctl add-port eth0br eth0
#Point the bridge at the POX controller (address is deployment-specific)
ovs-vsctl set-controller eth0br tcp:<controller-ip>:6633
Now that the vSwitch and the controller have been configured, the DRoN Algorithm runs and controls the bandwidth accordingly.
The SDN router indicated here is the Open vSwitch; likewise, the SDN controller is the POX controller running the DRoN Algorithm module. In addition, the DRoN-switch module is run alongside the OVS (Open vSwitch). In the OVS, we first detect its capacity, specifically the bandwidth, using the popular iperf tool. An iperf server is run on a separate machine that is not in the SDN architecture; in our case the Windows machine is used for this purpose. An iperf client is run on the Open vSwitch terminal. The bandwidth obtained from the iperf test is provided to the OVS DRoN-switch module.
After the resources have been measured, these values are stored as the capacity of the switch. This data is hardcoded in the OVS DRoN-switch module.
After this step, all the modules are started: first the POX controller, along with the DRoN Python module and the forwarding.l2_learning module. Once the controller and its DRoN module are up and running, we go to the OVS and run the OVS-Switch module in a similar way. This is where the flows and other bandwidth measurements are made.
Now that all the modules are running, the OVS-Switch module monitors the bandwidth of its vBridge interfaces. This is done by exploiting the /proc/net/dev file [20]. /proc/ is a kernel-based filesystem that provides information to user-space processes; /proc/net/dev is a file containing vital information about the configured devices. The OVS-Switch module takes a sample of the information available in /proc/net/dev and uses it for monitoring. A sample of the file is shown below; it gives vital information about all the physical as well as virtual interfaces, such as the bandwidth (transmit bytes) currently used.
This information is stored in an array along with the interface name. The command below is used to grep out each interface name and check its transmit bytes; using the awk command, the output is reduced to the 10th column and appended to the sample array.
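The grep/awk pipeline itself did not survive extraction; an equivalent sample in Python, using the standard /proc/net/dev layout in which the 10th whitespace column is the transmit-byte counter, might look like:

```python
def tx_bytes(proc_net_dev_text, iface):
    """Return the transmit-byte counter for one interface from /proc/net/dev text."""
    for line in proc_net_dev_text.splitlines():
        if ":" not in line:
            continue  # skip the two header lines
        name, stats = line.split(":", 1)
        if name.strip() == iface:
            # 8 receive fields precede the transmit-bytes field, so it is
            # stats index 8 (the 10th column of the line overall)
            return int(stats.split()[8])
    raise KeyError(iface)

# In the monitor loop this would be called roughly as:
#   sample = tx_bytes(open("/proc/net/dev").read(), "eth0")
```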
After the transmit-byte information of all the interfaces has been sampled and saved, the OVS Switch-DRoN module checks whether congestion is happening on these interfaces. This is done by comparing the bandwidth obtained from iperf and the data obtained from /proc/net/dev: while the iperf data provides the maximum bandwidth possible, the /proc/ data provides the bandwidth currently in use.
The threshold is initially set to the upper limit of the particular interface. A flag variable, is_congested, is set to 1 when congestion is detected. A time window is also defined so that samples are captured within that time frame; this window is reset after a calculation has been completed. When monitoring happens, the algorithm checks whether the average bandwidth of the interface has exceeded the threshold and whether the flag is not set. If these conditions are satisfied, the module has detected congestion. A congestion notification is then sent to the DRoN module running with the POX controller using port 1234, the is_congested flag is set to true, and the threshold is set to the lower limit so as to implement a penalty [13].
Now that we know how congestion is detected, we also need to know when a detected congestion has stopped and the traffic is back to normal. When we sample and monitor all the interfaces in a loop, if the average bandwidth has become less than the lower limit and the is_congested flag is set to true, the module sends a congestion-stopped notification to the DRoN module; the is_congested flag is set to false, and the threshold again points to the upper limit of that interface. The notifications are sent using the congestion_detected and congestion_stopped methods.
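The detection logic described in the last two paragraphs can be sketched as a small state machine. Only threshold, is_congested, and the upper/lower limits come from the report; the class shape and method names are assumptions:

```python
class CongestionMonitor:
    """Per-interface congestion detector with a penalty threshold."""

    def __init__(self, upper_limit, lower_limit):
        self.upper = upper_limit      # interface capacity, e.g. from iperf
        self.lower = lower_limit      # penalty threshold
        self.threshold = upper_limit  # initially points at the upper limit
        self.is_congested = False

    def update(self, avg_bandwidth):
        """Process one sampling window; return 'detected', 'stopped', or None."""
        if avg_bandwidth > self.threshold and not self.is_congested:
            self.is_congested = True
            self.threshold = self.lower   # apply the penalty
            return "detected"
        if avg_bandwidth < self.lower and self.is_congested:
            self.is_congested = False
            self.threshold = self.upper   # recover to the upper limit
            return "stopped"
        return None
```

The return values stand in for the congestion_detected / congestion_stopped notifications sent to the DRoN module over port 1234.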
This congestion notification is received by the DRoN module on port 1234. After the notification has been received, a stats request is sent to the OVS. The stats request is a standard request defined in the OpenFlow 1.0 specification, used to request the statistics of a network component such as the OVS (Open vSwitch) in this case; the reply contains details about the flow stats. Since this is an OpenFlow 1.0 standard, after this method is used to request the flow stats, the switch replies with the stats back to the controller.
Now that the good and bad flows have been classified, the next step is to assign a computed, modified bandwidth to the flows in order for the security to work. A method is written to check the bandwidth of the good flows and the bad flows. A good flow can be provided a maximum bandwidth of 900000000 bits per second, and a bad flow 30000000 bits per second. If any good or bad flow exceeds its respective limit, its upper limit is assigned as its computed bandwidth; otherwise its current bandwidth is assigned as the computed new bandwidth. A variable, tot_comp_band, is used to store the total computed bandwidth; it is incremented with the computed new bandwidth each time a flow is assigned its bandwidth. If the bandwidth stored by this variable is less than the total maximum bandwidth that the interface can support (obtained from iperf), the remaining bandwidth is divided by the total number of good flows and the extra bandwidth per good flow is assigned to each good flow.
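The allocation step above can be sketched as follows. The 900 Mbit/s and 30 Mbit/s caps and the tot_comp_band variable come from the report; the function shape and the flow representation are assumptions:

```python
GOOD_CAP = 900_000_000  # max bandwidth for a good flow (bits/s, per the report)
BAD_CAP = 30_000_000    # max bandwidth for a bad flow (bits/s, per the report)

def allocate(flows, max_bandwidth):
    """flows: list of (flow_id, current_bw, is_good). Returns {flow_id: bw}."""
    alloc = {}
    tot_comp_band = 0
    for fid, bw, is_good in flows:
        cap = GOOD_CAP if is_good else BAD_CAP
        comp = cap if bw > cap else bw   # clamp to the per-class cap
        alloc[fid] = comp
        tot_comp_band += comp
    # Any capacity left over (versus the iperf-measured maximum) is split
    # evenly among the good flows only.
    spare = max_bandwidth - tot_comp_band
    good = [fid for fid, _, g in flows if g]
    if spare > 0 and good:
        extra = spare // len(good)
        for fid in good:
            alloc[fid] += extra
    return alloc
```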
The flows are then directed to their respective queues. A Python dictionary is used to store the variables of the queue; this dict is then changed into JSON format and sent to the OVS. After this message is received by the OVS Switch-DRoN module, the queues are created per flow. The queue creation is done using the ovs-vsctl commands. The snippet above shows that the command has a max rate of 900000000 bits per second, and the bandwidth assigned to each particular queue is obtained from the queue dict sent by the controller (DRoN). After the queues are created, the queue-ready method is used to notify the DRoN controller of the completion of the queue creation. To send the notification back, the same process is repeated as was followed to send the queue-creation message: all the needed data, such as the dpid, the maximum bandwidth, and the queues-created notification, are put in a dict, converted to JSON format, and sent to the DRoN controller.
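A sketch of this dict-to-JSON exchange follows; the field names are assumptions, since the report only states that a Python dict holding the queue variables is converted to JSON and sent to the OVS:

```python
import json

def build_queue_message(dpid, max_rate, queue_rates):
    """Serialize the queue-setup (or queues-created) data for the socket channel."""
    msg = {
        "dpid": dpid,            # which switch the queues belong to
        "max_rate": max_rate,    # upper bound for the QoS (bits/s)
        "queues": queue_rates,   # mapping of queue id -> assigned bandwidth
    }
    return json.dumps(msg)

def parse_queue_message(raw):
    """Decode a message received on the DRoN control sockets."""
    return json.loads(raw)
```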
Now the controller i.e. the DRoN running along with the POX has got the notification indicating
that the queues have been created with the specific bandwidth restriction for each queue. But only the
queues have been created, but the traffic is still flowing through no queues and just through the default
way(no queues and no qos) to reach their destination. So the next step is to send a flow modification
command that would make the traffic in specific flows go through their respective queues. This is done
with the help of flow_mod packet. This is an openflow standard that is usually sent by the controller to
the switch asking it to explicitly change its flow table in accordance with the data that the flow_mod.
Since this is an open flow standard, this message could be sent by the controller through the openflow
port of 6633 itself.
A foreach loop is run across all the flows, and for each flow (detected from the network source and
destination address) the traffic of that particular flow is subjected to the queue conditions.
The ofp_action_enqeue method from the openflow protocol help us in achieving this.
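As a sketch, the per-flow enqueue step in POX looks roughly like this (the match fields and function shape are illustrative, not the project's code):

```python
# Illustrative POX fragment; requires the POX runtime and its OpenFlow library.
import pox.openflow.libopenflow_01 as of

def enqueue_flow(connection, nw_src, nw_dst, out_port, queue_id):
    """Install a flow entry that sends matching IPv4 traffic through a queue."""
    msg = of.ofp_flow_mod()
    msg.match.dl_type = 0x0800      # match IPv4
    msg.match.nw_src = nw_src       # flow's source address
    msg.match.nw_dst = nw_dst       # flow's destination address
    # ofp_action_enqueue forwards matching packets via the given port's queue
    msg.actions.append(of.ofp_action_enqueue(port=out_port, queue_id=queue_id))
    connection.send(msg)
```

The foreach loop described above would call such a function once per classified flow, passing the queue id chosen during queue creation.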
Now that the queues have been implemented and the flows directed to their respective queues, the bandwidth has been controlled.
The testing of the algorithm was done using a series of methods, the most important being iperf and HTTPerf. An iperf server was hosted on a separate server outside the current network. The iperf clients are run in the level-2 host VMs. When the test is run, the time for each measurement is given as 1 second; that is, the time window is 1 second, and the test is run for 40 seconds, giving a bandwidth measurement with a data point for each second. The data sent exceeds the congestion threshold after a few seconds of iperf testing. This is more pronounced when more than 3 nodes are being tested, that is, more than 3 level-2 VMs running in the Open vSwitch.
So as soon as the congestion is triggered, and the notification is sent to the controller, the
appropriate actions take place to regulate the bandwidth.
ovs-vsctl -- set Port $1 qos=@DRoNqos \
  -- --id=@DRoNqos create QoS type=linux-htb other-config:max-rate=10000000 \
     queues:0=@queue0 queues:1=@queue1 \
  -- --id=@queue0 create Queue other-config:min-rate=1000000 other-config:max-rate=1000000 \
  -- --id=@queue1 create Queue other-config:min-rate=8000000 other-config:max-rate=8000000
The above is an example command that the Open vSwitch executes to set the queue bandwidths and assign the QoS. The output does not shed much information when run with a 2-node setup, but when the setup is extended to 4 or 7 nodes, a considerable difference is seen in the bandwidth set.
Below is the output when the DRoN Algorithm is running. Note that without the DRoN Algorithm running, the bandwidth for legitimate (well-behaved) traffic is very low compared to when the DRoN Algorithm module is running.
The below is a summarized output for different number of nodes used.
Two nodes
Bandwidth with defense = [56.25, 28.05, 30.06]
Bandwidth without defense = [20.62, 19.44, 19.76]
Four nodes
Bandwidth with defense = [20.23, 21.68, 19.94]
Bandwidth without defense = [0.99, 1.21, 0.95]
Ten nodes
Bandwidth with defense = [14.52, 18.52, 13.39]
Bandwidth without defense = [1.24, 1.98, 2.2]
A graph is plotted using the median values above for the bandwidth obtained by hosts with and without DRoN, while an iperf is simultaneously running to create congestion. The bandwidth used by attackers tends to force the other (good) flows to utilize less bandwidth. A graph indicating the bandwidth both with and without the DRoN Algorithm is below. The x-axis indicates the different test results with different numbers of nodes, and the y-axis shows the bandwidth obtained. The iperf bandwidth output at every one-second interval, both with and without the DRoN module, is used to plot this graph. We can see that when the DRoN module is running, the normal users get increased bandwidth, and when the DRoN module is not used, the bandwidth per flow decreases. This situation is best seen when the number of nodes in the network increases. The values are taken from the average of the test results.
[Figure: bar chart of bandwidth (Mbit/s, y-axis 0-50) with and without defense for 2, 4, and 10 nodes]
Reply rate [replies/s]: min 9.8 avg 9.8 max 9.8 stddev 0.0 (2 samples)
Reply time [ms]: response 4.9 transfer 0.0
Reply size [B]: header 255.0 content 11104.0 footer 0.0 (total 11359.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0
CPU time [s]: user 3.14 system 9.64 (user 24.5% system 75.3% total 99.7%)
Net I/O: 87.2 KB/s (0.7*10^6 bps)
httperf --hog --timeout=7 --client=0/1 --server=192.168.146.139 --port=80 --uri=http://192.168.146.139/index.html --rate=10 --send-buffer=4096 --recv-buffer=16384 --num-conns=100 --num-calls=1
Maximum connect burst length: 1
Reply rate [replies/s]: min 10.0 avg 10.0 max 10.0 stddev 0.0 (1 samples)
Reply time [ms]: response 21.9 transfer 0.0
Reply size [B]: header 255.0 content 11104.0 footer 0.0 (total 11359.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0
CPU time [s]: user 2.46 system 7.43 (user 24.8% system 75.0% total 99.8%)
The connection block and the reply block of the output shed light on the statistics that vary when the DRoN Algorithm module is used in the POX controller. A graph similar to the TCP iperf one is given for HTTPerf, with reply time on the y-axis and the number of nodes in the network on the x-axis. HTTPerf is run with the --hog option to use as many TCP connections to the HTTP server as possible, so as to test the response time under maximum load. Note that HTTPerf is run at the same time as an iperf, so as to exhaust the bandwidth of that particular interface and get queues created with QoS. From the single-node test result above, we can see that the response time is 4.9 milliseconds with DRoN and about 21 milliseconds without DRoN. Results from the outputs for 2 and 4 nodes were also used for this graph.
Following are the results obtained from HTTPerf when running with and without DRoN:

Number of Nodes | Response time with defense (ms) | Response time without defense (ms)
1               | 4.9                             | 19.7
2               | 5.3                             | 23
4               | 13                              | 40
[Figure: bar chart of HTTPerf response time (ms, y-axis 0-35) versus number of nodes, with and without defense]
In traditional networks, if an illegitimate packet flow is identified, that particular flow is dropped. For example, suppose a DoS attack using SYN-packet flooding is taking place on an interface, and that there are also legitimate users trying to open a TCP session to the server through that interface. In a traditional system, after detection of the SYN flood, the most practical solution is either to scale the server (in the case of huge corporations like Google and Facebook, where all the legitimate users together will look like a SYN flood) or to drop the packets (in all other cases). In a closed network, where the number of users varies infrequently, the optimal solution is to drop the packets [11]. The problem is that the attacker's malicious traffic blankets all the legitimate traffic coming into the network, which will most probably be dropped owing to the overpowering [12] of the network. In this project, we don't drop the packets. Instead, we detect the bad and good flows according to the percentage of the bandwidth they are using, and a bad flow, instead of being dropped, is allocated a lesser QoS, in other words a smaller portion of the maximum bandwidth available. So the SYN flood won't make the server run out of resources and crash, and the legitimate users amidst the illegitimate users/attackers are also served their requests.
Tools used: VMware Workstation, POX controller, Open vSwitch, PyCharm, JMeter.
3. Schedule

Week | Date           | Tasks
1    | Oct 10 - 17    | Basic study about SDN and its features; study about the mininet installation
2    | Oct 20 - 24    |
3    | Oct 26 - 31    |
4    | Nov 3 - 7      |
5    | Nov 10 - 14    |
6    | Nov 17 - 21    |
7    | Nov 24 - 28    |
8    | Dec 1 - 5      |
9    | Dec 8 - 12     | features
10   | Dec 15 - 19    |
11   | Dec 22 - 26    |
12   | Dec 29 - Jan 2 |
13   | Jan 5 - 9      |
14   | Jan 12 - 16    |
15   | Jan 19 - 23    |
16   | Jan 26 - 30    |
17   | Feb 2 - 6      | Figure out all the possible routes and come up with the
18   | Feb 9 - 13     |
19   | Feb 16 - 20    |
20   | Feb 23 - 27    |
21   | Mar 2 - 6      |
22   | Mar 9 - 13     |
23   | Mar 16 - 20    |
24   | Mar 23 - 27    |
25   | Mar 30 - Apr 3 |
26   | Apr 6 - 10     |
27   | Apr 13 - 17    |
28   | Apr 20 - 24    |
29   | Apr 27 - May 1 |
30   | May 4 - May 8  |
4. CONCLUSION:
SDN systems have gradually started to find applications in many fields where sensitive data is handled. The need of the hour is a stable and versatile security system that has a brain of its own, with the ability to sense an attack if one is taking place and configure itself in such a way that the attacker will not gain unauthorized access to the system. In this project, we detect congestion and malicious flows without even having an IDS. After detection, appropriate bandwidth control mechanisms are put in place to avert the attack. The pro of the project is that it works for many attacks that involve bandwidth abuse; in the case of DoS, the attack is typically SYN flooding, which is again bandwidth abuse by that particular flow, so this solution is useful in major and popular attack scenarios. The con of this algorithm is that it works perfectly only if there is just one switch in the network. If more switches are used, there must be a way to track the stats of each switch, which will lead to resource exhaustion in the controller.
5. REFERENCES
[1] Taejin Kim; Taekkyeun Lee; Ki-Hyung Kim; Honjin Yeh; Manpyo Hong, "An efficient packet processing protocol based on exchanging messages between switches and controller in OpenFlow networks," Emerging Technologies for a Smarter World (CEWIT).
[2] Akbar, M.S.; Khaliq, K.A.; Bin Rais, R.N.; Qayyum, A., "Information-Centric Networks: Categorizations, challenges, and classifications," International Conference.
[4] Wun-Yuan Huang; Ta-Yuan Chou; Jen-Wei Hu; Te-Lung Liu, "Automatical End to End Topology Discovery and Flow Viewer on SDN," Advanced Information Networking and Applications Workshops (WAINA), 2014.
[5] Cisco white paper: http://www.cisco.com/web/strategy/docs/gov/cis13090_sdn_sled_white_paper.pdf
[6] Gates, C.; Becknel, D., "Host anomalies from network data," Information Assurance Workshop, 2005. IAW '05. Proceedings from the Sixth Annual IEEE SMC.
[7] Thomas D. Nadeau, SDN: Software Defined Networks (book).
[8] Thanasegaran, S.; Tateiwa, Y.; Katayama, Y.; Takahashi, N., "Simultaneous Analysis of Time and Space for Conflict Detection in Time-Based Firewall Policies," Computer and Information Technology (CIT), 2010 IEEE 10th International Conference on, pp. 1015-1021, June 29 2010 - July 1 2010.
[9] Chun-Jen Chung; Khatkar, P.; Tianyi Xing; Jeongkeun Lee; Dijiang Huang, "NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems," IEEE Transactions on Dependable and Secure Computing, vol. 10, no. 4, pp. 198-211, July-Aug. 2013.
[10] Smith, P.; Schaeffer-Filho, A.; Hutchison, D.; Mauthe, A., "Management patterns: SDN-enabled network resilience management," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1-9, 5-9 May 2014.
[11] Kai Wang; Yaxuan Qi; Baohua Yang; Yibo Xue; Jun Li, "LiveSec: Towards Effective Security Management in Large-Scale Production Networks," Distributed Computing Systems Workshops (ICDCSW), 2012 32nd International Conference on, pp. 451-460, 18-21 June 2012.
[12] Vaughan-Nichols, S.J., "OpenFlow: The Next Generation of the Network?" Computer, vol. 44, no. 8, pp. 13-15, 2011.
[13] Miki Yamamoto, "A Survey of Active Network," Trans., vol. J84-B, no. 8, pp. 1401-1412, Aug. 2001.
[14] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: towards an operating system for networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 3, pp. 105-110, Jul. 2008.
[15] S. Gutz, A. Story, C. Schlesinger, and N. Foster, "Splendid isolation: a slice abstraction for software-defined networks," in Proceedings of the First Workshop on Hot Topics in Software Defined Networks (HotSDN '12), New York, NY, USA: ACM, 2012.
[16] P. Ayres, H. Sun, H. Chao, and W. Lau, "ALPi: A DDoS defense system for high-speed networks," IEEE Journal on Selected Areas in Communications, vol. 24, pp. 1864-1876, 2006.
[17] S. Hartman, M. Wasserman, D. Zhang, "Security requirements in the software defined networking model," April 2013. [Online].
[18] Wentao Liu, "Research on DoS Attack and Detection Programming," Intelligent Information Technology Application, 2009. IITA 2009. Third International Symposium on, vol. 1, pp. 207-210, 21-22 Nov. 2009.
[19] Backreference tutorial on TUN/TAP software interfaces: http://backreference.org/2010/03/26/tuntap-interface-tutorial/
[20] Linux Dev Center reference: http://www.onlamp.com/pub/a/linux/2000/11/16/LinuxAdmin.html
[21] Picture reference: http://en.wikipedia.org/wiki/Software-defined_networking