
Sub: Wireless Sensor Network

(Elective- IV)(2015 Course)


BE Electronics Engineering

Unit V

Data Aggregation & Security


By
Prof. S. A. Shaikh
Associate Professor, Dept. of Electronics Engineering
Pravara Rural Engineering College, Loni
Syllabus
• Clustering Techniques,
• In-Network Processing and Data Aggregation,
• Compressive Sampling,
• Security Issues in Wireless Sensor Networks,
• Attacks,
• Defensive Measures,
• Security requirements and threat model.
Clustering Techniques in WSN
What are Clustering Techniques?
• The WSN base station typically needs to deliver an aggregated value to the end
users, and aggregating the data to be forwarded also helps reduce the
transmission overhead and the energy consumption.
• To support data aggregation in the network, the nodes can be organized into
small groups called Clusters.
• Clustering can be defined as the division of the nodes into groups on the basis
of some mechanism. Clustering has been shown to improve network lifetime, a
primary metric for evaluating the performance of a sensor network.
• Clustering is done to achieve energy efficiency and scalability of the
network. Formation of a cluster also involves assigning a role to each node
on the basis of its parameters.
• The coordinator of the cluster, which is responsible for the processing, aggregation,
and transmission of the data to the base station, is called the Cluster Head (CH) or
the leader, whereas the other nodes, which are responsible for sensing and
forwarding the collected data to the CH, are called the Member Nodes. The figure
represents the basic hierarchy of clustering:

Figure: Clustering in WSN


• In clustering, a 2-tier hierarchy is adopted: in the first phase the member nodes
sense the data and forward it to the CH, and in the second phase the CH aggregates and
processes the data to deliver it to the Base Station. The CH node loses more energy than
the MNs because it performs fusion on the entire collected data and
sends the aggregated report to the BS located far from the cluster location. In a cluster
organization, both intra-cluster and inter-cluster communication take place.
• Clustering in WSNs involves grouping nodes into clusters and electing a CH such that:
• The members of a cluster can communicate with their CH directly.
• A CH can forward the aggregated data to the central base Station through other
CHs.
• Parameters of clustering
 Cluster count/Number of clusters
 Cluster size uniformity
 Inter-clustering routing
 Intra-clustering routing
• 1. Cluster count: On the basis of cluster count, the network can be divided into two
categories: fixed and variable. With a fixed cluster count, the number of clusters
in the network is fixed, whereas in a variable-count network the number of clusters is not
fixed.
• 2. Cluster size uniformity: Cluster size uniformity deals with the size of the clusters. It is of
two types: even and uneven. With even cluster sizes, the number of nodes is the same in all the
clusters of the network, whereas with uneven uniformity the cluster sizes differ.
• 3. Inter-cluster Routing: Inter-cluster routing describes the communication mode between
different clusters. It can be of two types: single hop and multi hop. Single hop is that type
in which the CH communicates with the BS directly. In multi hop clustering, the CH
communicates with the BS through various intermediate CHs.
• 4. Intra-cluster Routing: It describes the mode of communication between the member
nodes and the CH. It can be of two types: single hop and multi hop. In single hop, the
MNs communicate directly with the CH, whereas in multi hop the MNs do not deal with
the CH directly but reach it through intermediate nodes.
Advantages of Clustering in WSN
a) Scalability: As the network is divided into various levels, it is easy
to add new nodes to a cluster.
b) Data aggregation: Data aggregation helps in reducing the redundant data
collected from member nodes.
c) Lower load: Transmitting only aggregated data reduces the load of data
transmission from the CH to the BS.
d) Less energy: Less energy is used when only non-redundant, aggregated
data has to be transferred.
e) Collision Avoidance: Dividing the resources orthogonally among the clusters
can lead to collision-free data transmission.
f) Load Balancing: Equal-sized clusters help prolong the network lifetime by
balancing the load and preventing premature energy exhaustion.
g) Fault tolerance: Whenever a node suffers from energy depletion the
reclustering can be done.
h) QoS: Clustering protocols help in delivering the required quality of service.
Design challenges of clustering:
WSNs become more challenging when they are actually implemented. The design
goals of a WSN are harder to meet than those of wired networks. The WSN is
divided into groups called clusters in order to prolong the life of the network.
Some of the design considerations in designing Clustering algorithm are:
a) Storage: The storage in sensors is very limited and hence it is required to satisfy the
storage constraints and query requirement.
b) Security: WSNs are very vulnerable to security threats. Hence, security
measures must be added while designing clustering protocols.
c) Communication: Communication over the entire region can increase reliability
and also ensure the coverage of the network, so that true reports are generated.
d) Limited Energy: Sensor networks are limited by energy. Clustering can reduce
the energy consumption as compared to direct communication.
e) Network Lifetime: The limited energy can lead to the reduction in lifetime.
Clustering can reduce the energy consumptions by implementing Intracluster
communication and multi hop routing schemes.
f) QoS: Clustering usually focuses on energy efficiency and pays little attention
to quality. Hence, quality of service must also be built into the clustering
algorithm.
Clustering Techniques In WSN
• Clustering was one of the first techniques applied to very large networks. The main idea is to
organize the network into smaller sub-networks, so that data can be collected and analyzed
in a location-restricted way, meaning only some important or aggregated data comes to the
final network-wide sink. The idea is depicted in following figure
• In clustering, we differentiate between the following roles of individual nodes:
• Cluster members are simple nodes, which sense some phenomena. They send
their data to their cluster heads. A cluster member belongs to a single cluster
head.
• Cluster heads are local sinks, which gather all of the information of their cluster
members. Several options are possible here and depend on the application: send
everything through a more powerful link to a final data storage place; aggregate or
compress the data then send it through the sensor network to a global sink; or
analyze the data and either report some event or not.
• The sink, sometimes called the global sink, is a single node in the network, which
gathers all information from all nodes in the network. It is not necessary that it
exists at all.
• The main objective of clustering is to save energy and to avoid data congestion in
the network. The underlying assumption is that if you try to route all of the data of
all nodes to a single sink, then at least some of the nodes will be completely
congested (the ones close to the sink) and that their batteries will be drained too
fast. Furthermore, it is important which are the cluster heads and how they are
selected and managed. Some concrete examples are discussed next.
Random Clustering
• The simplest approach to clustering is to randomly select cluster heads
from all available nodes. A protocol like this is LEACH by Heinzelman, which
was the first clustering protocol. In fact, it was the first routing protocol
as well. Before LEACH, the assumption was that all sensor nodes communicate
directly with the sink in a single hop.
• This was the beginning of wireless sensor networks research and at the
time, the main new concept was tiny size and resources. LEACH works as
follows: Every node rolls a die and decides whether to become a
cluster head or not with some probability 𝜖. If it becomes a cluster head,
it advertises itself to all nodes around.
• Non-cluster head nodes decide which cluster head to join by selecting
the best communication link, which also offers the possibility to select a
lower transmission power. Nodes start sending their data to their cluster
heads (one hop directly) and cluster heads send it to the sink (again one
hop).
• In order to not drain the batteries of the cluster heads too fast, cluster
heads change over time. Thus, the procedure is repeated regularly.
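The election and join steps described above can be sketched in a few lines of Python. This is a simplified illustration, not the full LEACH protocol: the probability value, the seeded random generator, and the `link_quality` callback are assumptions made for the example.

```python
import random

def elect_cluster_heads(node_ids, p=0.1, rng=None):
    """LEACH-style election: each node independently becomes a
    cluster head with probability p (the node "rolls a die")."""
    rng = rng or random.Random()
    return {n for n in node_ids if rng.random() < p}

def join_clusters(node_ids, heads, link_quality):
    """Each non-head node joins the advertised head with the best link."""
    assignment = {}
    for n in node_ids:
        if n in heads or not heads:
            continue  # heads do not join; no heads heard -> re-roll next round
        assignment[n] = max(heads, key=lambda h: link_quality(n, h))
    return assignment
```

In a real deployment the election would be repeated every round with fresh randomness, so that the cluster-head role (and its energy cost) rotates among the nodes.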
• The advantage of LEACH and its subsequent protocols and optimizations is that
it is simple and flexible. It can be easily extended to also work with fixed
transmission power and several hops to the cluster heads or from cluster heads
to the sink. However, it also has several disadvantages.
• First of all, the random selection of cluster heads cannot guarantee any uniform
distribution of the cluster heads in the field. Thus, some areas will have many
cluster heads but some will have none. This results in very unbalanced
transmission costs in the network (Figure ). Another problem comes from the
fact that in really large networks, even with maximum transmission power, the
nodes cannot reach all other nodes
• Thus, the original version of the algorithm might leave some areas without
cluster heads at all and also without data reporting for long periods of time. A
remedy was introduced, where a node rolls the die again if it does not hear
about a cluster head.
• However, this results in too many cluster heads and again in higher energy
expenditure. Another disadvantage of the approach is its “jumpy” behavior. To
save energy on the cluster heads, the protocol selects new ones.
• As discussed, you need the multi-hop version to ensure that all nodes are
connected to the global sink through cluster heads. Thus, every time the cluster
head changes, you need to restart the routing procedure, as all existing routes
become unusable.
Nearest Sink Clustering
• Another possibility to tackle communication problems in large networks is to install more
than one sink. This means more than one node in the network serves as a gateway to the
final storage place. Imagine that all cluster heads in Figure 6.1 have an upload link to the
database and are thus preselected as cluster heads. Exactly as for the global sink, these
multiple sinks have bigger batteries and thus less energy problems.
• The task of the clustering algorithm is how to select the cluster head for each individual
node. The solution is relatively simple. Each cluster head announces itself as a sink, similar to
how you found the sink in routing algorithms. This information is updated as normal routing
information at every hop (e.g. with ETX count, hop count, etc.).
• In other words, you run the sink discovery procedure for each of the sinks individually. Every
node receives several sink announcements from various neighbors. It simply selects the sink
which looks best (depending on the used routing metric) and starts using that one. It can
also keep a secondary choice, in case something happens with its first choice.
• This implementation has the disadvantage that the cluster heads are actually fixed and need
some special hardware. This renders the network more expensive than using only normal
nodes. It can be also seen as several networks installed next to each other.
• On the one hand, as the beginning of this chapter noted, this increases the installation and
maintenance costs and lowers the flexibility of the network. On the other hand, this is the
preferred solution by practitioners. It offers redundancy for the sink (if one fails, the others
can take over its duties, even if at higher communication costs). Overall, it is simple, stable,
and well balanced.
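A node's choice among several announced sinks can be sketched as follows. The tuple format and the lower-is-better metric (e.g. hop count or ETX) are assumptions made for illustration:

```python
def choose_sink(announcements):
    """Pick primary and backup sinks from (sink_id, metric) pairs;
    a lower metric (e.g. ETX or hop count) is better."""
    ranked = sorted(announcements, key=lambda a: a[1])
    primary = ranked[0][0] if ranked else None
    backup = ranked[1][0] if len(ranked) > 1 else None  # fallback choice
    return primary, backup
```

Keeping the second-best sink as a backup gives exactly the redundancy described above: if the primary sink fails, the node switches over at a known (higher) communication cost.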
Geographic Clustering
• The preceding examples show that clusters are geographically organized, meaning one
cluster is built by geographically close nodes. This leads to the idea that you can also
use direct geographic information to build the clusters. An example of such a protocol is
Clique. The protocol itself is quite complex, as it relies also on reinforcement learning to
select the best cluster head in geographic clusters. Here, the discussion presents a
simplified version of it.
• First, the network is cut into clusters according to geographic coordinates. Following
figure presents a sample network, where the cluster size is set to be 100 × 100 meters.
The remaining challenge is to select the cluster heads inside the geographic clusters.
You could, for example, select them randomly. Exactly as with LEACH, each node rolls
a die and decides whether to become a cluster head or not. If a node does not hear
about a cluster head in its own cluster, it retries the procedure.
• Another possibility is to exchange the IDs of all nodes in the cluster and to select the
lowest ID as the first cluster head, then the second-lowest as the next cluster head, etc.
Also the geographic location can be used as a criterion for selecting the cluster head, e.g.,
the node closest to the sink.
• The problem of selecting a cluster head in a geographically restricted area is simpler
than in a large network. Both random selection and deterministic selection work well,
as the communication area is small and there is no need to balance the energy. This
renders geographic clustering an efficient way to cluster a large network into smaller,
independent pieces.
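A minimal sketch of geographic clustering with 100 × 100 m cells and the deterministic lowest-ID head election mentioned above. The dictionary layout and function names are illustrative assumptions:

```python
def grid_clusters(nodes, cell=100.0):
    """nodes: dict node_id -> (x, y) position in meters.
    Returns dict cell_index -> sorted list of node ids in that cell."""
    clusters = {}
    for nid, (x, y) in nodes.items():
        clusters.setdefault((int(x // cell), int(y // cell)), []).append(nid)
    return {c: sorted(ids) for c, ids in clusters.items()}

def lowest_id_heads(clusters):
    """Deterministic head election: the lowest node id in each cell."""
    return {c: ids[0] for c, ids in clusters.items()}
```

Rotating the head to the next-lowest ID in later rounds, as the text suggests, only requires remembering the previous head per cell.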
In-network Processing And Data Aggregation
• There are mainly two concepts for in-network aggregation and processing:
compression and aggregation. In compression, data remains as it is, but its
resource usage is minimized. For example, 5 packets with 5 bytes data in each
could be easily combined in a single packet and this would save the communication
overhead of sending 5 different packets. In aggregation, the data is somehow
processed and only part of it continues to the sink. For example, only the maximum
sensed value is allowed to travel to the sink and all lower values get discarded on
their way. The following sections discuss some examples.
• Compression
• The first choice for compression is combining data from several packets into a
single one. This concept can always be used and delivers very good results.
However, it also more or less delays the arrival of packets at the sink and thus
cannot be used for real-time applications.
• Let us explore the example in Following figure. On the left, you see a sample
network, where data is routed with the help of the CTP routing protocol. Without
compression, nodes 4, 7, and 10 would need to forward 2 packets every sampling
period—their own packet and one from a child node. Node 5 would need to
forward 3 packets and node 3 even 4 packets. This sums up to a total of 17
transmissions.
• If we simply combine the packets at each hop into a single one, each node would
have to transfer only 1 packet, which sums up to only 9 transmissions, exactly as
many as nodes exist in the network. Recall that transmissions are costly because
each of them needs time to obtain the wireless medium and requires not only the
data to be transferred, but also various protocol headers, control packets,
acknowledgements, etc. By combining the data into single packets, you would
save 47% of the transmissions.
• The only tricky part of this measure, which is otherwise beautifully simple, is how
to synchronize the data packets so that data packets are not delayed too much.
• The problem occurs most often when clocks of individual nodes start to drift away
from each other. Probably the most trivial method to implement this is to define a
waiting period exactly as long as the sampling period.
• Thus, every node sends a single packet exactly after it has sampled its own
sensors. Everything, which has arrived between its last sampling period and now,
gets into the same single packet and out.
• This means that in the worst-case scenario, packets from the previous sampling
round from other nodes will be forwarded only in the next sampling round. This delay
might be large, but it is bounded by one sampling period.
• Huffman Codes:-
• There is also another potential problem. Sometimes the data is so large
that several pieces of it do not fit into a single packet. Then, you need to
introduce some real compression techniques to minimize its size and to fit
more of it into a single packet.
• Here, data compression algorithms come into play such as Huffman codes.
A Huffman code assigns a variable-length code to each character so that, on
average, the encoding is shorter than the original fixed-length encoding. Let us
explore an example.
• Assume that you have an alphabet with 5 characters only. To encode those
with bits, you need at least 3 bits.
• The codes for these characters could be 000, 001, 010, 011, and 100. The
Huffman code calculates first the probability of how often individual
characters occur.
• For example, after analyzing some sample text in this alphabet, you can
identify the probabilities for your characters as 0.1, 0.3, 0.2, 0.15, and 0.25.
This is depicted in Following figure. Now, you can create the Huffman tree.
• For this, the characters are ordered by their probability and then the two lowest ones
are connected with a next level node, until the root is reached. Now, you can start
assigning codes to the individual nodes. Assign a 0 to the node with the character b (the
one with the highest probability) and to everything else a 1. Next, assign a 0 and 1 to
the next level of nodes and you get 10 for the character e and 11 for everything else,
and so on.
• The codes are now clear. However, how do you decode them? If you look at the
encoded example from Figure 6.5, you do not have empty spaces between the
characters (they are introduced in the figure just for better human readability). Instead,
you have the continuous code of 1111001101110000101010. So, if you start with the
first character, you get a 1. It cannot be a b, because b has the code of 0. Thus, it is
something else. If you continue, the code is now 11. This still does not correspond to
any of your codes, so you just continue like this until you get something you know. In
this case, the code 1111, which is an a. Now you are ready with the first character so
you can continue on.
• The real beauty of the Huffman code lies in the used probabilities to create codes. If you
look carefully at the created tree, you see that the characters used most (with the
highest probability) get the shortest codes, like b and e. The characters used rarely, like
a and d, get the longest codes. Thus, this code is optimal for this alphabet (and the text
used for calculating the probabilities).
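The prefix code derived above (b → 0, e → 10, c → 110, d → 1110, a → 1111) can be exercised with a short encoder/decoder sketch; the helper names are illustrative:

```python
# Prefix codebook following the assignment described in the text:
# the most frequent character (b) gets the shortest code.
CODES = {"b": "0", "e": "10", "c": "110", "d": "1110", "a": "1111"}

def encode(text):
    return "".join(CODES[ch] for ch in text)

def decode(bits):
    inverse = {code: ch for ch, code in CODES.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:  # prefix-free: the first match is a whole symbol
            out.append(inverse[buf])
            buf = ""
    if buf:
        raise ValueError("dangling bits: " + buf)
    return "".join(out)
```

Because no code word is a prefix of another, the decoder needs no separators between characters, exactly as in the continuous bit string discussed above.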
Differential Compression and Model Prediction
• Cutting unnecessary data leads to the topic of differential compression, which extracts the
new data compared to something already existing. This technique is widely used for
transferring large files, synchronization services such as remote storage, etc. In sensor
networks, it is simpler. A node can transfer first all of its available data to the sink, e.g., its
co-ordinates, current temperature, current humidity.
• Then, at every sampling period, it normally samples its sensors, but it only reports to the
sink if something, maybe even significantly, changed. For example, a common concept is to
have one sampling period (e.g., 1 minute) and one minimum reporting period (e.g., 30
minutes).
• Then, the node reports only if the sampled data is different than the last reported one or
when the minimum reporting period expires. In this way, the sink receives only few data
items, but it knows that the node is still alive. This type of compression is often used in
sensor networks, for example to disseminate new application code to all nodes.
• Model prediction is very similar to differential compression, but uses a more general
concept of what “same data” means. Here, a model is built for the data, which mirrors its
normal fluctuations and changes.
• For example, a model of temperature data in a house mirrors the fact that it is normal for
the temperature to fluctuate between 21 and 23◦C, with small steps of 0.5◦C over the span
of 5 minutes. If the measured data differs from the model, the data needs to be sent to the
sinks, e.g., when the temperature rise is too fast or when the minimum or maximum values
are exceeded.
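The report-on-change rule with a minimum reporting period can be sketched as below; the threshold and period values are illustrative assumptions, not prescribed by the text:

```python
def should_report(sample, last_reported, minutes_since_report,
                  threshold=0.5, max_silence=30):
    """Differential reporting: send only when the value changed by more
    than `threshold`, or when the minimum reporting period expired
    (so the sink still learns the node is alive)."""
    if last_reported is None:  # nothing reported yet: always send
        return True
    if abs(sample - last_reported) > threshold:
        return True
    return minutes_since_report >= max_silence
```

Model prediction generalizes this by replacing the fixed threshold test with a comparison against a model of the data's normal fluctuations.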
Compression Summary
This discussion shows that
compression in sensor networks requires the following:
• Good understanding of the data. This enables more
memory-efficient storage and elimination of repeated
data.
• Correct formatting:- The normally used formatting, such
as floating point numbers for GPS coordinates, is typically
the worst choice. Instead, smarter and more efficient
formats can be found, which also allow for compression.
• Simple compression:- A sensor node cannot afford a
complex compression algorithm and it is usually not
needed (unless for video or audio streams, etc.).
Statistical Techniques
• Compression is very useful and should be considered first when
communication in the network is heavy. However, there are also other
techniques. One of them is to evaluate the data in a statistical way somewhere
on the way from its source to the destination.
• This can be done on the cluster head, which also offers the advantage of
having a location-restricted data set. Consequently, instead of sending all the
data to the sink, only the newly calculated values are sent, which are typically
much smaller than the original data. This can also be performed the whole way
to the sink, e.g., on each hop.
• Statistical evaluation could be the computation of some statistical values of the
complete data set such as minimum, maximum, and mean. It can also be a
voting mechanism, in which the median of the data set is calculated.
• Or it can be a consensus approach, in which individual nodes vote whether
some event has occurred or not. While such an evaluation might seem very
practical and efficient at first glance, it needs to be carefully evaluated before
usage.
• Once statistically evaluated, the original data cannot be recovered any more.
In the following figure, a single cluster of nodes with their cluster head is shown.

• The temperature is sensed at each node and transmitted to the cluster head. The
cluster head can calculate various statistics over the data such as the minimum,
maximum, mean, and median temperatures sensed. It can also report only events,
e.g., the fact that it is really hot at node 9. Or it can define a voting algorithm in which
at least two nodes have to report a “hot” or “burn” event. In all these cases, the
original data is lost and cannot be recovered at the sink any more.
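A cluster head's statistical evaluation, including the two-vote "hot" event rule mentioned above, might look like the following sketch (the threshold value and data layout are assumptions for illustration):

```python
import statistics

def aggregate(readings, hot_threshold=40.0, votes_needed=2):
    """readings: dict node_id -> temperature at the cluster head.
    Returns summary statistics plus a voted 'hot' event flag."""
    values = list(readings.values())
    hot_voters = [n for n, v in readings.items() if v >= hot_threshold]
    return {
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "hot_event": len(hot_voters) >= votes_needed,  # consensus vote
    }
```

Note how the single very hot reading of node 9 shows up in the maximum but does not trigger the event flag on its own, which is precisely the ambiguity discussed next.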
• What does the “hot” event at node 9 mean? Perhaps it is faulty? Or
maybe the other nodes do not report a “hot” event because they are
only slightly below the threshold for it? Perhaps node 9 is almost burning
already? If too many statistics are transferred to the sink, there will be no
real communication savings. If too few statistics are transferred, the
question remains whether you can use the data at all.
• However, this does not mean that in-network aggregation is never good.
There are certain applications, which precisely define their events and
what “normal” behavior is. For example, in structural monitoring, mechanical
stress is measured through quantities such as acceleration (vibration).
High-frequency data like acceleration is gathered at the nodes and needs to be
processed and compared to various thresholds.
• Then, only the final event is transferred or, if the event is considered
important, the full data can be sent to the sink as an exception. In such
high-frequency data domains, you have only two options: either
drastically reduce the data volume after sensing or sense less.
Compressive Sampling
• Instead of first sensing a large amount of data then trying to reduce it, it is much better to
sense only as much as you can transfer. However, how do you know when something
interesting is happening such as an earthquake?
• This is the task of compressive sensing—to decide when to sense data so that nothing
interesting is missed. It is not only smarter to do it this way but sometimes, it is simply
impossible to fit all of the data on the sensor node or the sensing drains too much power
on the node.
• Compressive sampling or sensing is a research topic within the area of signal processing.
This discussion does not enter into the mathematical details of compressive sampling, but the
interested reader can turn to the tutorial of Candes and Wakin.
• The Randomized Timing Vector (RTV) algorithm by Rubin and Camp, which uses
compressive sensing, is easier to understand and follow. The main idea is that instead of
continuously sensing a signal, such as an ECG signal or vibration signal at some base
frequency, you can sample it only from time to time and still preserve the information
underlying the signal.
• Most signals carry some information and some noise. You are actually interested only in
the information. Thus, you want to take as many measurements as needed to get the
information but ignore the noise. An example is given in Fig.
• A signal (a repeated pulse) is compressively sensed (the red dots) while its information
properties are preserved. In this case, the information is the repeated pulse itself with
its frequency and strength.
• RTV works as follows: It first decides on a base sampling frequency, e.g., 500 Hz (once
every 2 milliseconds). Then, it decides on a sampling period, e.g., 1 second. Then, it
creates a random vector whose length equals the number of samples that fit into a
sampling period. In this case, you need 500 samples. Each of these numbers is either 0
or 1. Now, a timer at the base frequency is initialized and every time it fires, the
application looks at the next number in the random vector. If it is a 1, then it takes the
sample. If it is a 0, it goes back to sleep. Once the sampling period is over, the random
vector is recalculated and the resulting samples’ vector is sent to the sink. This vector is
much smaller than the full sample vector and its length depends on how many 1s you
have in the random vector.
• At the sink, the sampled vector can be analyzed and research has shown that the
underlying information is preserved. Thus, with this type of compression you cannot
directly reconstruct the original signal, but its properties are preserved and the
resulting smaller signal can instead be analyzed.
• Compressive sampling is a powerful approach to reduce the amount of produced data
at its source. It is quite similar to compression techniques but works before the data has
been sampled.
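One RTV round as described above can be sketched as follows. This is a simplified illustration: the sampling probability and the `sense` callback are assumptions, and a real implementation would drive the timing vector from a hardware timer.

```python
import random

def rtv_round(sense, n_ticks=500, p_sample=0.2, rng=None):
    """One RTV sampling period: a random 0/1 timing vector, one entry
    per tick of the base-frequency timer, decides whether to sense.
    Returns the (tick_index, value) pairs actually sampled."""
    rng = rng or random.Random()
    timing = [1 if rng.random() < p_sample else 0 for _ in range(n_ticks)]
    # Sense only on the ticks marked 1; sleep on the ticks marked 0.
    return [(i, sense(i)) for i, bit in enumerate(timing) if bit]
```

The resulting vector is roughly `p_sample` times the size of full-rate sampling, and the random vector is re-drawn every period, as the algorithm requires.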
Security Issues/Constraints in Wireless Sensor
Networks:-
• A WSN consists of a large number of sensor nodes that are inherently resource-
constrained devices. These nodes have limited processing capability, very low
storage capacity, and constrained communication bandwidth. These constraints
are due to limited energy and physical size of the sensor nodes. Due to these
constraints, it is difficult to directly employ the conventional security
mechanisms in WSNs. In order to optimize the conventional security algorithms
for WSNs, it is necessary to be aware about the constraints of sensor nodes.
Some of the major constraints of a WSN are listed below.
• 1. Energy constraints: Energy is the biggest constraint for a WSN. In general,
energy consumption in sensor nodes can be categorized in three parts: (i)
energy for the sensor transducer, (ii) energy for communication among sensor
nodes, and (iii) energy for microprocessor computation. It is found that each bit
transmitted in WSNs consumes about as much power as executing 800 to 1000
instructions. Thus, communication is more costly than computation in WSNs.
Any message expansion caused by security mechanisms comes at a significant
cost. Further, higher security levels in WSNs usually correspond to more energy
consumption for cryptographic functions. Thus, WSNs could be divided into
different security levels depending on energy cost.
• 2. Memory limitations: A sensor is a tiny device with only a small amount of memory
and storage space. Memory in a sensor node usually includes flash memory and RAM.
Flash memory is used for storing downloaded application code and RAM is used for
storing application programs, sensor data, and intermediate results of computations.
There is usually not enough space to run complicated algorithms after loading the OS
and application code. In the Smart Dust project, for example, TinyOS consumes about
4K bytes of instructions, leaving only 4500 bytes for running security algorithms and
applications. A common sensor type, TelosB, has a 16-bit, 8 MHz RISC CPU with only
10K RAM, 48K program memory, and 1024K flash storage. The current security
algorithms are therefore infeasible in these sensors.
• 3. Unreliable communication: Unreliable communication is another serious threat to
sensor security. Normally the packet-based routing of sensor networks is based on
connectionless protocols and thus inherently unreliable. Packets may get damaged due
to channel errors or may get dropped at highly congested nodes. Furthermore, the
unreliable wireless communication channel may also lead to damaged or corrupted
packets. Higher error rate also mandates robust error handling schemes to be
implemented, leading to higher overhead. In certain situations, even if the channel is
reliable, the communication may not be. This is due to the broadcast nature of
wireless communication, as the packets may collide in transit and may need
retransmission.
• 4. Higher latency in communication:
• In a WSN, multi-hop routing, network congestion and processing in
the intermediate nodes may lead to higher latency in packet
transmission. This makes synchronization very difficult to achieve.
The synchronization issues may sometimes be very critical in
security as some security mechanisms may rely on critical event
reports and cryptographic key distribution.
• 5. Unattended operation of networks:
• In most cases, the nodes in a WSN are deployed in remote regions
and are left unattended. The likelihood that a sensor encounters a
physical attack in such an environment is therefore, very high.
Remote management of a WSN makes it virtually impossible to
detect physical tampering. This makes security in WSNs a
particularly difficult task.
• 6. Deployment and immense scale
• WSNs exhibit a high degree of dynamics due to node mobility, node failures, and
environmental obstructions, which lead to frequent topology changes and network
partitions. The ability to deploy sensor nodes over large areas is one of the most
attractive characteristics of WSNs: thousands or millions of nodes can be deployed
without any prior knowledge of their positions, which makes the structure of the
network complicated. Efficient security schemes are therefore required to operate
within this dynamic environment. Networking tens to hundreds or thousands of nodes
is a substantial task, and implementing security over such a network is equally
challenging. More robust security techniques are needed to cope with the
ever-changing nature of sensor networks. At the same time, changes in the network
membership need to be supported in an equally efficient and secure manner: nodes
joining or leaving the network should be handled transparently, and only a minimum
amount of information should have to be reconfigured.
• 7. High risk of physical attack:
• Once deployed, the nodes of a WSN are typically left without control or monitoring.
This makes it easy for an attacker to reach the nodes, capture one or more of them,
and inject malicious code into the captured nodes, increasing the number of attacks
on the network.
Security Requirements in WSN:-
• This section briefly overviews the security goals of a sensor network. The
requirements of WSNs include both the typical network requirements and unique
requirements suited solely to WSNs.
• Data Confidentiality
• Securing messages is an important issue that must be addressed by any network with
security concerns, since it has the potential of hiding messages from passive
attackers. The following facts are considered for confidentiality in sensor networks:
• Sensor readings must not be leaked to neighboring nodes.
• It is important to build a secure channel, as the sensor nodes may communicate highly
sensitive data such as secret keys during key distribution.
• To add protection against the traffic analysis attack, the sensor's identities and public keys
should also be encrypted to an extent.
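As an illustration of symmetric confidentiality, the sketch below encrypts a sensor reading with a toy XOR stream cipher whose keystream is derived from SHA-256. The key, nonce and reading are hypothetical, and this construction is for illustration only; a real deployment would use a vetted cipher (e.g. AES in a link-layer security suite) rather than this one.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter blocks."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; decryption is the same operation."""
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# Hypothetical cluster key, per-message nonce and sensor reading.
key = b"shared-cluster-key"
nonce = b"\x00\x01"                 # must be unique per message
reading = b"temp=23.5C node=17"
ciphertext = encrypt(key, nonce, reading)
recovered = encrypt(key, nonce, ciphertext)  # XOR twice restores the plaintext
```

Because encryption and decryption are the same XOR, the member node and the cluster head only need the shared key and the message nonce.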
• Data Integrity and Authentication
• Integrity refers to the ability to confirm that a message has not been tampered with
while it was on the network. Modification of a data packet is not the only harm an
adversary can inflict: by injecting additional packets, it can also change the whole
packet stream. Therefore, it is important to ensure that the data has originated from
the correct source before it is used in the decision-making process.
• Data authentication allows the receiver to verify that the data was really sent by
the claimed sender. In the case of two-party communication, data authentication can
be achieved through a purely symmetric mechanism.
Security Requirements in WSN:-
• Data Freshness
• Ensuring the freshness of each message is also important, alongside the
goals mentioned in the previous section. Data freshness gives an idea
as to whether the data is recent and it also makes sure that the data has
not been replayed. When there are shared-key strategies applied in the
design and they need to be changed over time, this requirement is
especially important.
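One common way to enforce freshness is a per-sender, monotonically increasing counter included in each authenticated packet; the sketch below shows the receiver side, with illustrative sender IDs and counter values:

```python
class FreshnessFilter:
    """Reject packets whose counter is not strictly greater than the last accepted one."""

    def __init__(self):
        self.last_seen = {}  # sender id -> highest counter accepted so far

    def accept(self, sender: str, counter: int) -> bool:
        if counter <= self.last_seen.get(sender, -1):
            return False     # replayed or stale packet: drop it
        self.last_seen[sender] = counter
        return True

f = FreshnessFilter()
f.accept("node-3", 1)   # fresh: accepted
f.accept("node-3", 2)   # fresh: accepted
f.accept("node-3", 2)   # replay of counter 2: rejected
```

The counter only defeats replay if it is covered by the packet's MAC, so an adversary cannot simply bump the counter on a captured packet.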
• Availability
• Availability ensures that the network's services and resources can be used when
required, and that the network remains available for messages to move through it.
• Self-Organization
• To work on an ad hoc basis in varying environments, every sensor node in a WSN
needs to be independent and flexible enough to be self-organizing and
self-healing. In a sensor network, no fixed
infrastructure is available for the purpose of network management. The
WSNs security is also challenged by this inherent feature.
Security Requirements in WSN:-
• Time Synchronization
• Most sensor network applications rely on some form of time synchronization. An
individual sensor's radio may be turned off for periods of time to conserve
power. Sensors may also want to compute the end-to-end delay of a packet as it
travels between two nodes. A more collaborative sensor network may require group
synchronization for applications such as tracking.
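Assuming a simple two-way timestamp exchange (in the style of NTP, with the usual assumption of symmetric link delays), the one-way delay and clock offset between two sensors can be estimated as sketched below; the timestamps are hypothetical:

```python
def round_trip(t1: float, t2: float, t3: float, t4: float):
    """t1: request sent, t2: request received by the peer,
    t3: reply sent by the peer, t4: reply received back."""
    delay = (t4 - t1) - (t3 - t2)           # total time spent on the wire
    offset = ((t2 - t1) + (t3 - t4)) / 2    # estimated clock offset of the peer
    return delay / 2, offset                # one-way delay assumes symmetric links

# Hypothetical timestamps (peer clock runs 3.5 units ahead in this example).
one_way, offset = round_trip(t1=100.0, t2=105.0, t3=106.0, t4=104.0)
# one_way == 1.5, offset == 3.5
```

If the forward and return paths have different delays, the offset estimate is biased by half the asymmetry, which is why group synchronization protocols average over many exchanges.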
• Secure Localization
• The usefulness of a sensor network depends on its ability to accurately and
automatically locate each sensor node in the network. To pinpoint the location
of a fault within the entire network, the sensor network requires accurate
location information in order to work effectively.
• Non-Repudiation: Non-repudiation denotes that a node cannot deny having sent a
message it has previously sent.
Attacks in WSN:-
• The Wireless Sensor Network attacks can be viewed from different levels like
attacks against security mechanisms and against basic mechanisms like routing. We
have broadly categorized security attacks of WSN into two categories viz.
• Goal Oriented Attacks and Layer Oriented Attacks.
• Goal Oriented Attacks: Goal oriented attacks are further divided into two
categories – active and passive.
• Active attacks: In active attacks, the attacker takes active measures to achieve
control over the network: it modifies messages or the real data stream, or
generates false data in the communication. Such attacks include DoS, replay,
selective forwarding, wormhole, Sybil, masquerade, node replication, rushing,
sinkhole, and modification of messages.
• Passive attacks: In passive attacks, the attacker monitors unencrypted traffic,
looking for sensitive information that can be used in other types of attacks. Such
attacks include decrypting encrypted traffic, traffic analysis, capturing
authentication information, and monitoring communications. Passive attacks violate
the goal of data confidentiality because they result in the disclosure of data to
an attacker without the knowledge or consent of the user. These attacks are
essentially preparations made before the actual active attacks; they merely listen
to the communication.
Layer Oriented Attacks:-
A. Physical layer attacks
• Jamming – It is caused by interference with the radio frequencies of the
network's devices and is an attack on the availability of the sensor network.
It differs from normal radio propagation in that it is unwanted and disruptive,
resulting in denial-of-service conditions.
• Tampering – Also called node capturing, in which a node is compromised; it is
easy to perform and quite harmful. Tampering means physically modifying or
destroying sensor nodes.
B. Link layer attacks
• Collision – It occurs in the link layer, which handles neighbor-to-neighbor
communication along with channel arbitration. An entire packet can be disrupted
if an adversary is able to generate a collision in even part of a transmission;
a single bit error can cause a CRC mismatch and possibly require retransmission.
• Exhaustion – Exhaustion of a network's battery power can be induced by an
interrogation attack. A compromised node could repeatedly send requests, thus
consuming more battery power than required.
Layer Oriented Attacks:-
• C. Network layer attacks
• Hello flood attack – It occurs when an attacker with high transmission power sends
or replays hello packets, which are used for neighbor discovery. In this way, the
attacker creates an illusion of being a neighbor to other nodes; the underlying
routing protocol can be disrupted, facilitating further types of attacks.
• Wormhole attack – It is caused by the formation of a low-latency link through which
packets can travel from one end to the other faster than they normally would via a
multi-hop route. The wormhole attack is a threat against the routing protocol and is
challenging to detect and prevent. In this type of attack, an adversary can convince
distant nodes that they are only one or two hops away through the wormhole, causing
confusion in the network routing mechanisms.
• Sybil attack – It occurs when an attacker uses a malicious device to create a large
number of entities in order to gain influence over the network traffic. The IDs of
these malicious entities may result from fake network additions or from duplication
of existing legitimate identities. The Sybil attack usually targets fault-tolerant
schemes such as distributed storage, topology maintenance, and multi-hop routing.
• Sinkhole attack – It occurs when an attacker prevents the base station of the
network from obtaining complete and accurate sensing data, resulting in a serious
threat to higher-layer applications. Through a sinkhole attack, the attacker can
attract nearly all the traffic from a specific area. Sinkhole attacks work by making
a malicious node look especially attractive to the surrounding nodes with respect to
the underlying routing algorithm.
Layer Oriented Attacks:-
• D. Transport layer attacks:
• Flooding attack – It is a Denial of Service (DoS) attack designed to bring a network
or service down by flooding it with large amounts of traffic. Flood attacks occur
when a network or service becomes weighed down with so many incomplete connection
requests that it can no longer process genuine connection requests. By flooding a
server with connections that cannot be completed, the attack eventually fills the
server's memory buffer; once this buffer is full, no further connections can be
made, resulting in a denial of service.
• E. Application layer attacks
• Denial-of-Service (DoS) – This attack usually refers to a deliberate attempt by
an opponent to destroy or disrupt the sensor network. A DoS attack may limit or
eliminate the sensor network's expected functionality, and may occur at any of
the OSI layers of a WSN. DoS degrades the efficiency of targeted networks by
affecting their associated protocols: consuming resources, destroying or
altering the infrastructure configuration, and physically destroying network
components.
Layer Oriented Attacks:-
• Cloning attack – Adversaries may easily capture and compromise sensor nodes
and deploy an unlimited number of clones of the compromised nodes in the
sensor network. As these clones have legitimate access to the sensor network
(i.e. legitimate IDs, keys, and other security credentials), they can
participate in the sensor network operations in the same way as a legitimate
node, enabling a large variety of insider attacks or even a takeover of the
entire network. If these clones are left undetected, the sensor network is
unshielded against attackers and thus extremely vulnerable; this is why clone
attacks are severely destructive. Effective and efficient solutions for clone
attack detection are required to limit their damage.
Defensive Measures for Attacks in WSN:
• Attacks based on the protocol layer:
• (1) Physical layer: This layer includes the following attacks:
• • Jamming: The attacker transmits a signal at the same frequency as the legitimate
transmitter, disrupting radio communication and causing radio interference in the
network.
• Defensive measures: A prominent measure to mitigate jamming is the use of spread
spectrum communication, e.g., frequency hopping spread spectrum (FHSS), which
forwards the given data by hopping the carrier among different frequency channels.
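A minimal sketch of how two nodes sharing a secret seed could derive the same pseudo-random hop sequence, so that both switch channels in lockstep while a jammer without the seed cannot predict the next channel; the seed and channel count are illustrative assumptions:

```python
import hashlib

def hop_channel(shared_seed: bytes, slot: int, n_channels: int = 16) -> int:
    """Both endpoints derive the same channel index for a given time slot."""
    digest = hashlib.sha256(shared_seed + slot.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % n_channels

# Hypothetical pairwise seed shared during key establishment.
seed = b"pairwise-hop-seed"
schedule = [hop_channel(seed, slot) for slot in range(8)]
# Sender and receiver compute identical schedules slot by slot.
```

Real FHSS radios hop at the physical layer on a fixed timetable; the point of the sketch is only that a shared secret plus the slot number is enough to keep both ends synchronized.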
• • Tampering: The attacker tries to access hardware components such as chips. It
involves handling motes and deriving secret information from captured nodes.
• Defensive measures: Without protection, an attacker can listen in on the secret
data that travels between the external memory chip and the microcontroller (a
process called eavesdropping). Tamper-resistant packaging, and erasing
cryptographic material when tampering is detected, can mitigate this attack.
• (2) Data link layer: This layer includes the following attacks:
• • Collision: When an attacker hears a node transmitting a message, it sends its own
signals to create interference. Collision occurs when multiple nodes transmit data
with the same frequency and data rate; it can alter the data, causing the data
packet to be treated as invalid.
• Defensive measures: Measures applied to jamming attacks can be applied to this attack.
• • Exhaustion: The attacker continuously sends data or requests over the channel,
which leads to starvation. The attack may originate from a PC or laptop.
• Defensive measures: The MAC admission rate can be reduced in order to ignore
excessive requests from the sensor network. This prevents loss of energy and
allows sensor nodes to transmit data in a shorter time, so that nodes do not
remain attached to the MAC channel for long periods.
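Such MAC-level rate limiting can be sketched as a per-neighbor token bucket, where requests beyond the admitted rate are simply dropped; the rate, burst and neighbor IDs below are illustrative assumptions:

```python
class RequestLimiter:
    """Token bucket: admit at most `rate` requests/second per neighbor,
    with short bursts of up to `burst` requests."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.state = {}  # neighbor -> (remaining tokens, last timestamp)

    def allow(self, neighbor: str, now: float) -> bool:
        tokens, last = self.state.get(neighbor, (self.burst, now))
        # Refill tokens for the time elapsed since the last admitted request.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            return False  # drop: likely an interrogation/exhaustion attack
        self.state[neighbor] = (tokens - 1.0, now)
        return True

lim = RequestLimiter(rate=1.0, burst=2.0)
# Four rapid-fire requests from the same neighbor: the burst is exhausted.
results = [lim.allow("node-9", t) for t in (0.0, 0.1, 0.2, 0.3)]
# results == [True, True, False, False]
```

Dropping at admission time costs almost no energy, which is the point: the radio and CPU are not spent serving the attacker's requests.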
Defensive Measures for Attacks in WSN:
(3) Network layer: This layer consists of the following
attacks:
• Selective forwarding: A malicious node drops certain packets while forwarding
most of the other messages.
Defensive measures: To counter the attack, multipath routing can be used, which
reduces the probability of a successful attack by the adversary. A watchdog can
be used to supervise the system.
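A watchdog can be sketched as a counter of packets handed to a neighbor for forwarding versus packets later overheard being forwarded by that neighbor; the suspicion threshold and node IDs below are illustrative assumptions:

```python
class Watchdog:
    """Overhear a neighbor's transmissions and flag it if it forwards
    too small a fraction of the packets handed to it."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.sent = {}       # neighbor -> packets handed over for forwarding
        self.forwarded = {}  # neighbor -> packets overheard being forwarded

    def handed_over(self, neighbor: str):
        self.sent[neighbor] = self.sent.get(neighbor, 0) + 1

    def overheard(self, neighbor: str):
        self.forwarded[neighbor] = self.forwarded.get(neighbor, 0) + 1

    def is_suspect(self, neighbor: str) -> bool:
        sent = self.sent.get(neighbor, 0)
        if sent == 0:
            return False
        ratio = self.forwarded.get(neighbor, 0) / sent
        return ratio < self.threshold  # flags selective forwarding

wd = Watchdog()
for _ in range(10):
    wd.handed_over("node-5")
for _ in range(3):
    wd.overheard("node-5")  # node-5 forwarded only 3 of 10 packets
suspect = wd.is_suspect("node-5")  # True: below the 0.5 threshold
```

In practice the counts are noisy (collisions, power saving), so watchdog schemes tolerate some loss before rerouting around a suspect node.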
• Acknowledge spoofing: The attacker may spoof link-layer acknowledgements and
generate false error messages, creating routing loops. As a result, end-to-end
latency increases and the network is partitioned.
Defensive measures: To counter the attack, all packets must be encrypted.
• Blackhole attack: The attacker draws the available traffic of the network to a
particular node, called a black hole, created at the centre; a metaphorical
sinkhole is created, and all the traffic is directed to this fake sinkhole.
Defensive measures: A scheme must be implemented so that the nodes in the
network can detect and reject the corrupt information produced by invalid
nodes. Cryptographic methods can be used.
Defensive Measures for Attacks in WSN:
(3) Network layer Attack Cont…
Wormhole attack: The attacker receives packets at one position in the network,
tunnels them to a different position, and then replays them into the network
from that point. The main aim of the attacker is to defeat the cryptographic
protection.
Defensive measure: To counter the attack, a four-way handshaking message
exchange mechanism is used. A private channel can also be used for protection.
Sybil: A single malicious node presents multiple identities, keeping the node
present in multiple locations at once. These forged identities cause problems
in distributed storage access, multipath routing and topology maintenance.
Defensive measures: To counter the attack, an identity-validation technique
must be used.
Hello flood: The attacker broadcasts hello packets and advertises cheap routes,
which leads other nodes to forward their messages through the attacker.
Defensive measure: A hello flood can be countered by using an authentication
protocol that verifies each neighbor's identity.
Threat model in WSN:
• Attacks on a Wireless Sensor Network can be categorized on the basis of several criteria. In the
first category, the attacker can be mote class or laptop class.
• A mote-class attacker has access to only a few sensor nodes, with capabilities similar to those of
the deployed nodes, whereas a laptop-class attacker has access to more powerful devices such as
laptops or PDAs. A laptop-class adversary can therefore produce far more damage than a mote-class
one: a mote-class adversary can affect only a small geographical area, while a laptop-class
adversary could affect, or even block, the entire sensor network.
• Another classification of attacks on a Wireless Sensor Network is insider versus outsider attack.
In an insider attack, the attacker has access to a node that holds all the secret keys and is
capable of participating in all the communications. In an outsider attack, the attacker has no
access to the Wireless Sensor Network; the attack is carried out by an unauthorized node that
eavesdrops on the packets exchanged between the sensor nodes during their communication.
• The next classification of attacks is based on the network layer involved: attacks at the
physical layer, at the data link layer and at the network layer.
• At the physical layer, the attacker mainly exhausts the available resources by transmitting
radio signals on the wireless channel.
• At the data link layer, the attacker violates the predefined protocols of the link layer. This
kind of attack can also lead to a Denial of Service attack.
• At the network layer, the attacker threatens the sensor applications and services. Secure
localization and aggregation are used to guard against such attacks.
• Passive versus active attacks: passive attacks include eavesdropping on or monitoring packets
exchanged within a WSN; active attacks involve some modification of the data stream or the
creation of a false stream.
Threat model in WSN:
Based on the characteristics and goals of the attacks and attackers, the
threat model of a WSN can be presented by comparing the attacks on the most
important classes. The threat model of WSNs is characterized by attributes
such as the damage level caused, the attacker's location, the network
functionality affected and the attacker's strength.
Threat model in WSN:
Evaluation of Threats and Attacks
In this section, the attacks that exist in the networking layers of WSN are
evaluated in the context of security class, attack threat and threat model.
These parameters have been chosen keeping in mind the vulnerabilities that
exist in the various layers of the network. Table 8 briefly categorizes the
attacks and threats of the layered architecture of WSN and provides a quick
glance with respect to the underlying threat model. The threat model has been
evaluated under various parameters. Brief descriptions of the parameters used
in the threat model are as under:
A. Security Classes – Interruption, Interception, Fabrication, Modification
B. Attack Threat – Availability, Authentication, Integrity and Confidentiality
C. Threat Model – four levels of the threat model have been considered:
(1) Attacks based on damage/access level (active/passive type)
(2) Attacks based on the attacker's location (internal or external)
(3) Attacks based on the attacker's functional capabilities and resource access
(laptop class or mote class)
(4) Attacks based on function (operation) – secrecy, availability, stealth
Thank You
?