

Peer-to-peer
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between
peers. Peers are equally privileged, equipotent participants in the application. They are said to form a peer-to-peer network of
nodes. Peers make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available
to other network participants, without the need for central coordination by servers or stable hosts.[1] Peers are both suppliers and
consumers of resources, in contrast to the traditional client–server model where only servers supply, and clients consume. The
peer-to-peer application structure was popularized by file sharing systems like Napster. The concept has inspired new structures
and philosophies in many areas of human interaction. Peer-to-peer networking is not restricted to technology; it also covers social processes with a peer-to-peer dynamic. In that context, social peer-to-peer processes are currently emerging throughout society.

Architecture of P2P systems


Peer-to-peer systems often implement an abstract overlay network, built at the application layer, on top of the native or physical network topology. Such overlays are used for indexing and peer discovery and make the P2P system independent of the physical network topology. Content is typically exchanged directly over the underlying Internet Protocol (IP) network. Anonymous peer-to-peer systems are an exception, and implement extra routing layers to obscure the identity of the source or destination of queries.

In structured peer-to-peer networks, peers (and, sometimes, resources) are organized following specific criteria and algorithms, which lead to overlays with specific topologies and properties. They typically use distributed hash table (DHT) based indexing, as in the Chord system (MIT).[2]

Unstructured peer-to-peer networks do not impose any algorithm for organization or optimization of network connections.[citation needed] Three models of unstructured architecture are commonly defined. In pure peer-to-peer systems the entire network consists solely of equipotent peers; there is only one routing layer, as there are no preferred nodes with any special infrastructure function. Hybrid peer-to-peer systems allow such infrastructure nodes to exist, often called supernodes.[3] In centralized peer-to-peer systems, a central server is used for indexing functions and to bootstrap the entire system.[citation needed] Although this has similarities with a structured architecture, the connections between peers are not determined by any algorithm. The first prominent and popular peer-to-peer file sharing system, Napster, was an example of the centralized model; Gnutella and Freenet are examples of the decentralized model, and Kazaa is an example of the hybrid model.

P2P networks are typically used for connecting nodes via largely ad hoc connections.[citation needed] Data, including digital formats such as audio files, and real-time data such as telephony traffic, is passed using P2P technology.

A pure P2P network does not have the notion of clients or servers, but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client–server model, where communication is usually to and from a central server. A typical example of a file transfer that does not use the P2P model is the File Transfer Protocol (FTP) service, in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests.

The P2P overlay network consists of all the participating peers as network nodes. There are links between any two nodes that know each other: i.e., if a participating peer knows the location of another peer in the P2P network, then there is a directed edge from the former node to the latter in the overlay network. Based on how the nodes in the overlay network are linked to each other, we can classify P2P networks as unstructured or structured.
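
To make the directed-edge notion concrete, here is a minimal sketch (in Python, with invented names, not part of any real P2P library) of an overlay represented as an adjacency map: a peer that learns another peer's location gains an outgoing edge to it.

```python
# Minimal sketch of a P2P overlay as a directed graph.
# All class and method names here are illustrative.

class Overlay:
    def __init__(self):
        self.edges = {}  # peer id -> set of peer ids it knows about

    def add_peer(self, peer):
        self.edges.setdefault(peer, set())

    def learn(self, knower, known):
        # "knower" discovers the location of "known": a directed edge appears.
        self.add_peer(knower)
        self.add_peer(known)
        self.edges[knower].add(known)

overlay = Overlay()
overlay.learn("alice", "bob")   # alice can contact bob...
overlay.learn("bob", "carol")   # ...but bob cannot necessarily contact alice.
print(overlay.edges)  # {'alice': {'bob'}, 'bob': {'carol'}, 'carol': set()}
```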

Structured systems
Structured P2P networks employ a globally consistent protocol to ensure that any node can efficiently route a search to some peer that has the desired file, even if the file is extremely rare. Such a guarantee necessitates a more structured pattern of overlay links. By far the most common type of structured P2P network is the distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer, in a way analogous to a traditional hash table's assignment of each key to a particular array slot.

Distributed hash tables (DHTs) are a class of decentralized distributed systems that provide a lookup service similar to a hash table: (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key. Responsibility for maintaining the mapping from keys to values is distributed among the nodes, in such a way that a change in the set of participants causes a minimal amount of disruption. This allows DHTs to scale to extremely large numbers of nodes and to handle continual node arrivals, departures, and failures.
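
As a rough illustration of how consistent hashing assigns keys to peers, here is a toy sketch (assumed names; a simplification, not Chord's actual protocol): each peer and each key is hashed onto an identifier ring, and a key belongs to the first peer at or after its position.

```python
import hashlib
from bisect import bisect_right

def ring_hash(name, bits=16):
    # Hash a string onto a small identifier ring (2**bits positions).
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

class ToyDHT:
    def __init__(self, peers):
        # Sort peers by their position on the ring.
        self.ring = sorted((ring_hash(p), p) for p in peers)

    def owner(self, key):
        # The key's owner is the first peer clockwise from the key's position.
        pos = ring_hash(key)
        idx = bisect_right(self.ring, (pos, "")) % len(self.ring)
        return self.ring[idx][1]

dht = ToyDHT(["peer-a", "peer-b", "peer-c", "peer-d"])
print(dht.owner("some-file.mp3"))  # deterministic: every node computes the same owner
```

Because only the keys between a departing peer and its predecessor change owners, churn causes minimal disruption, which is the property described above.
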
DHTs form an infrastructure that can be used to build peer-to-peer networks. Notable distributed networks that use DHTs include BitTorrent's distributed tracker, the Kad network, the Storm botnet, YaCy, and the Coral Content Distribution Network. Some prominent research projects include the Chord project, the PAST storage utility, P-Grid (a self-organized and emerging overlay network), and the CoopNet content distribution system (see below for external links related to these projects).

DHT-based networks have been widely utilized for accomplishing efficient resource discovery[4][5] in grid computing systems, as they aid in resource management and the scheduling of applications. Resource discovery involves searching for the resource types that match the user's application requirements. Recent advances in decentralized resource discovery have been based on extending existing DHTs with the capability of multi-dimensional data organization and query routing. The majority of these efforts have looked at embedding spatial database indices, such as space-filling curves (SFCs, including Hilbert curves and Z-curves), k-d trees, MX-CIF quadtrees, and R*-trees, for managing, routing, and indexing complex grid resource query objects over DHT networks. Spatial indices are well suited to handling the complexity of grid resource queries. Although some spatial indices have issues with routing load balance on skewed data sets, spatial indices in general scale well in terms of the number of hops traversed and messages generated while searching and routing grid resource queries.
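
To suggest how a multi-dimensional resource description can be mapped onto a one-dimensional DHT key space, here is a toy Z-order (Morton) encoding. This is a generic illustration of the SFC idea under assumed inputs, not the scheme of any particular grid system.

```python
def z_order(x, y, bits=8):
    # Interleave the bits of two coordinates into one Morton code,
    # so points close in 2-D tend to stay close on the 1-D key line.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

# Example: map a hypothetical (cpu_count, memory_gb) resource description
# to a single integer key that a DHT can index.
print(z_order(4, 16))  # 528
print(z_order(4, 17))  # 530: nearby resources often get nearby keys
```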

Unstructured systems
An unstructured P2P network is formed when the overlay links are established arbitrarily. Such networks can be easily constructed, because a new peer that wants to join the network can copy the existing links of another node and then form its own links over time. In an unstructured P2P network, if a peer wants to find a desired piece of data in the network, the query has to be flooded through the network to find as many peers as possible that share the data. The main disadvantage of such networks is that queries may not always be resolved. Popular content is likely to be available at several peers, and any peer searching for it is likely to find it. But if a peer is looking for rare data shared by only a few other peers, the search is highly unlikely to be successful. Since there is no correlation between a peer and the content it manages, there is no guarantee that flooding will find a peer that has the desired data. Flooding also causes a high amount of signaling traffic in the network, and hence such networks typically have very poor search efficiency. Many of the popular P2P networks are unstructured.

In pure P2P networks, peers act as equals, merging the roles of clients and servers. In such networks there is no central server managing the network, nor is there a central router. Some examples of pure P2P application-layer networks designed for peer-to-peer file sharing are Gnutella (pre v0.4) and Freenet.

There also exist hybrid P2P systems, which distribute their clients into two groups: client nodes and overlay nodes. Typically, each client is able to act according to the momentary need of the network and can become part of the respective overlay network used to coordinate the P2P structure. This division between normal and "better" nodes is done in order to address the scaling problems of early pure P2P networks. Examples of such networks are Gnutella (after v0.4) and G2.
Another type of hybrid P2P network uses central server(s) or bootstrapping mechanisms on the one hand and P2P data transfers on the other. These networks are in general called "centralized networks" because of their inability to work without their central server(s). An example of such a network is the eDonkey network (eD2k).
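
A hedged sketch of the flooding search described in the unstructured-network discussion above: each query carries a time-to-live (TTL) and is forwarded to all neighbours until it expires, which is why rare content is often missed and signalling traffic is high. All names here are illustrative.

```python
from collections import deque

def flood_search(neighbours, shared, start, wanted, ttl=3):
    # neighbours: peer -> list of peers; shared: peer -> set of file names.
    # Breadth-first flooding with a hop limit, as in unstructured P2P search.
    seen, hits = {start}, []
    queue = deque([(start, ttl)])
    while queue:
        peer, hops = queue.popleft()
        if wanted in shared.get(peer, set()):
            hits.append(peer)
        if hops == 0:
            continue  # TTL exhausted: rare files beyond this radius are missed
        for nxt in neighbours.get(peer, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops - 1))
    return hits

neighbours = {"a": ["b", "c"], "b": ["d"], "c": [], "d": ["e"], "e": []}
shared = {"d": {"song.mp3"}, "e": {"rare.flac"}}
print(flood_search(neighbours, shared, "a", "song.mp3"))          # ['d']
print(flood_search(neighbours, shared, "a", "rare.flac", ttl=2))  # [] (too far)
```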

Indexing and resource discovery


Older peer-to-peer networks duplicate resources across each node in the network configured to carry that type of information. This allows local searching, but generates much traffic.

Modern networks use central coordinating servers and directed search requests. Central servers are typically used for listing potential peers (Tor), coordinating their activities (Folding@home), and searching (Napster, eMule). Decentralized searching was first done by flooding search requests out across peers. More efficient directed search strategies, including supernodes and distributed hash tables, are now used. Many P2P systems use stronger peers (super-peers, super-nodes) as servers, with client-peers connected in a star-like fashion to a single super-peer.
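
The directed-search idea can be sketched as a simple index maintained by a supernode or central server (illustrative names only, not any real system's API): peers register what they share, and queries go to the index instead of being flooded.

```python
from collections import defaultdict

class SuperNodeIndex:
    # Toy central/supernode index: maps file names to the peers sharing them.
    def __init__(self):
        self.index = defaultdict(set)

    def register(self, peer, files):
        for f in files:
            self.index[f].add(peer)

    def lookup(self, wanted):
        # One round trip to the index replaces a network-wide flood.
        return sorted(self.index.get(wanted, set()))

idx = SuperNodeIndex()
idx.register("leaf-1", ["song.mp3", "talk.ogg"])
idx.register("leaf-2", ["song.mp3"])
print(idx.lookup("song.mp3"))  # ['leaf-1', 'leaf-2']; the transfer itself stays peer-to-peer
```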

Peer-to-peer-like systems
In modern definitions of peer-to-peer technology, the term implies the general architectural concepts outlined in this article. However, the basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1.[6]

A distributed messaging system often described as an early peer-to-peer architecture is the USENET network news system, which is in principle a client–server model from the perspective of users who read or post news articles. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email, in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of e-mail clients and their direct connections is strictly a client–server relationship. Tim Berners-Lee's vision for the World Wide Web, as evidenced by his WorldWideWeb editor/browser, was close to a peer-to-peer design in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked web of links. This contrasts with the broadcasting-like structure of the web as it has developed over the years.

Advantages and weaknesses


In P2P networks, clients provide resources, which may include bandwidth, storage space, and computing power. As nodes arrive and demand on the system increases, the total capacity of the system also increases. In contrast, in a typical client–server architecture, clients share only their demands with the system, not their resources. In this case, as more clients join the system, fewer resources are available to serve each client. The decentralized nature of P2P networks also increases robustness, because it removes the single point of failure that is inherent in a client–server based system.[7]

As with most network systems, insecure and unsigned code may allow remote access to files on a victim's computer or even compromise the entire network.[citation needed] In the past this happened, for example, to the FastTrack network, when anti-P2P companies managed to introduce faked chunks into downloads, leaving downloaded files (mostly MP3 files) unusable or even carrying malicious code.[citation needed] Consequently, today's P2P networks have seen an enormous increase in their security and file-verification mechanisms. Modern hashing, chunk verification, and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.

Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high bandwidth usage.[8] Compared to web browsing, e-mail, or many other uses of the Internet, where data is transferred only in short intervals and in relatively small quantities, P2P file sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as BitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random.[9] Another possible solution is P2P caching, where an ISP stores the parts of files most accessed by P2P clients in order to reduce traffic to the wider Internet.
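
To illustrate the general idea of making traffic "look random", here is a generic XOR-keystream sketch. This is a toy construction under assumed names, not BitTorrent's actual Message Stream Encryption: a pseudorandom keystream removes the deterministic byte sequences that throttling equipment matches on.

```python
import hashlib

def keystream(secret, n):
    # Derive n pseudorandom bytes from a shared secret (toy construction).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def obfuscate(payload, secret=b"demo-secret"):
    # XOR with the keystream: a recognizable protocol header becomes
    # statistically random-looking bytes; XOR again to recover it.
    ks = keystream(secret, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))

msg = b"\x13BitTorrent protocol"     # a well-known, easily fingerprinted header
wire = obfuscate(msg)                # what an observer on the wire sees
assert obfuscate(wire) == msg        # symmetric: the receiving peer reverses it
print(wire.hex())
```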

Mesh networking

Mesh networking is a type of network topology in which each node must not only capture and disseminate its own data, but also serve as a relay for other nodes; that is, nodes must collaborate to propagate data through the network.

A mesh network can be designed using a flooding technique or a routing technique. When using a routing technique, the message propagates along a path by hopping from node to node until the destination is reached. To ensure the availability of all its paths, a routing network must allow for continuous connections and reconfiguration around broken or blocked paths, using self-healing algorithms. A mesh network whose nodes are all connected to each other is a fully connected network. Mesh networks can be seen as one type of ad hoc network; mobile ad hoc networks (MANETs) and mesh networks are therefore closely related, but MANETs also have to deal with the problems introduced by the mobility of the nodes.

The self-healing capability enables a routing-based network to operate when a node breaks down or a connection goes bad. As a result, the network is typically quite reliable, as there is often more than one path between a source and a destination. Although mostly used in wireless scenarios, this concept is also applicable to wired networks and software interaction.
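
A minimal sketch of the self-healing behaviour (illustrative only, not any specific mesh protocol): when a node fails, the path is simply recomputed over the surviving topology, which works whenever redundant paths exist.

```python
from collections import deque

def shortest_path(links, src, dst, dead=frozenset()):
    # Breadth-first route computation that ignores failed nodes.
    queue, prev = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in links.get(node, []):
            if nxt not in prev and nxt not in dead:
                prev[nxt] = node
                queue.append(nxt)
    return None  # no surviving path

mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(shortest_path(mesh, "A", "D"))              # ['A', 'B', 'D']
print(shortest_path(mesh, "A", "D", dead={"B"}))  # ['A', 'C', 'D'] (self-healed)
```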

Wireless mesh networks


Wireless mesh networks were originally developed for military applications and are typical of mesh architectures. Over the past
decade the size, cost, and power requirements of radios have declined, enabling more radios to be included within each device
acting as a mesh node. The additional radios within each node enable it to support multiple functions such as client access,
backhaul service, and scanning (required for high speed handover in mobile applications). Additionally, the reduction in radio size,
cost, and power has enabled the mesh nodes to become more modular—one node or device now can contain multiple radio cards
or modules, allowing the nodes to be customized to handle a unique set of functions and frequency bands.

Work in this field has been aided by the use of game-theoretic methods to analyze strategies for the allocation of resources and routing of packets.[1]

In early 2007, the US-based firm Meraki launched a mini wireless mesh router.[2] This is an example of a wireless mesh network (with a claimed speed of up to 50 megabits per second). The 802.11 radio within the Meraki Mini has been optimized for long-distance communication, providing coverage over 250 metres. This is an example of a single-radio mesh network being used within a community, as opposed to multi-radio long-range mesh networks like BelAir[3] or MeshDynamics[4] that provide multifunctional infrastructure.

The Naval Postgraduate School, Monterey CA, demonstrated a wireless mesh network for border security.[5] In a pilot system, aerial cameras kept aloft by balloons relayed real-time high-resolution video to ground personnel via a mesh network.

An MIT Media Lab project has developed the XO-1 laptop of the One Laptop per Child (OLPC) program, which is intended for disadvantaged schools in developing nations and uses mesh networking (based on the IEEE 802.11s standard) to create a robust and inexpensive infrastructure.[6] The instantaneous connections made by the laptops are claimed by the project to reduce the need for an external infrastructure such as the Internet to reach all areas, because a connected node can share its connection with nodes nearby. A similar concept has also been implemented by Greenpacket with its application called SONbuddy.[7]

In Cambridge, UK, on 3 June 2006, mesh networking was used at the "Strawberry Fair" to run mobile live television, radio, and Internet services for an estimated 80,000 people.[8]

The Champaign-Urbana Community Wireless Network (CUWiN) project is developing mesh networking software based on open source implementations of the Hazy-Sighted Link State Routing Protocol and the Expected Transmission Count metric. Additionally, the Wireless Networking Group[9] at the University of Illinois at Urbana-Champaign is developing a multichannel, multi-radio wireless mesh testbed called Net-X as a proof-of-concept implementation of some of the multichannel protocols being developed in that group. The implementations are based on an architecture that allows some of the radios to switch channels to maintain network connectivity, and include protocols for channel allocation and routing.[10]

SMesh is an 802.11 multi-hop wireless mesh network developed by the Distributed Systems and Networks Lab at Johns Hopkins University.[11] A fast handoff scheme allows mobile clients to roam in the network without interruption in connectivity, a feature suitable for real-time applications such as VoIP.

Many mesh networks operate across multiple radio bands. For example, Firetide and Wave Relay mesh networks have the option to communicate node-to-node on 5.2 GHz or 5.8 GHz, but node-to-client on 2.4 GHz (802.11). This is accomplished using software-defined radio (SDR).

The SolarMESH project examined the potential of powering 802.11-based mesh networks using solar power and rechargeable batteries.[12] Legacy 802.11 access points were found to be inadequate due to the requirement that they be continuously powered.[13] The IEEE 802.11s standardization efforts are considering power-save options, but solar-powered applications might involve single-radio nodes where relay-link power saving will be inapplicable.

The WING project[14] (sponsored by the Italian Ministry of University and Research and led by CREATE-NET and Technion) developed a set of novel algorithms and protocols for enabling wireless mesh networks as the standard access architecture for next-generation Internet. Particular focus has been given to interference- and traffic-aware channel assignment, multi-radio/multi-interface support, and opportunistic scheduling and traffic aggregation in highly volatile environments.

Recent standards for wired communications have also incorporated concepts from mesh networking. An example is ITU-T G.hn, a standard that specifies a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines, and coaxial cables). In noisy environments such as power lines (where signals can be heavily attenuated and corrupted by noise), it is common that mutual visibility between devices in a network is not complete. In those situations, one of the nodes has to act as a relay and forward messages between the nodes that cannot communicate directly, effectively creating a mesh network. In G.hn, relaying is performed at the data link layer.

Wireless sensor network

A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion, or pollutants,[1][2] and cooperatively pass their data through the network to a main location. The more modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring,[3] environment and habitat monitoring, healthcare applications, home automation, and traffic control.[2][4]

A WSN is built of "nodes" – from a few to several hundred or even thousands – where each node is connected to one (or sometimes several) sensors. Each such sensor network node typically has several parts: a radio transceiver with an internal antenna or connection to an external antenna, a microcontroller, an electronic circuit for interfacing with the sensors, and an energy source, usually a battery or an embedded form of energy harvesting. A sensor node might vary in size from that of a shoebox down to the size of a grain of dust, although functioning "motes" of genuine microscopic dimensions have yet to be created. The cost of sensor nodes is similarly variable, ranging from hundreds of dollars to a few pennies, depending on the complexity of the individual sensor nodes. Size and cost constraints on sensor nodes result in corresponding constraints on resources such as energy, memory, computational speed, and communications bandwidth.[2] The topology of a WSN can vary from a simple star network to an advanced multi-hop wireless mesh network. The propagation technique between the hops of the network can be routing or flooding.[5][6]

In computer science and telecommunications, wireless sensor networks are an active research area with numerous workshops and conferences arranged each year.

Applications
Area monitoring

Area monitoring is a common application of WSNs. In area monitoring, the WSN is deployed over a region where some phenomenon is to be monitored. A military example is the use of sensors to detect enemy intrusion; a civilian example is the geo-fencing of gas or oil pipelines. When the sensors detect the event being monitored (heat, pressure), the event is reported to one of the base stations, which then takes appropriate action (e.g., sends a message over the Internet or to a satellite). Similarly, wireless sensor networks can use a range of sensors to detect the presence of vehicles ranging from motorcycles to train cars.
Air pollution monitoring

Wireless sensor networks have been deployed in several cities (e.g., Stockholm, London, and Brisbane) to monitor the concentration of gases that are dangerous to citizens.
Forest fire detection

A network of sensor nodes can be installed in a forest to detect when a fire has started. The nodes are equipped with sensors to measure the temperature, humidity, and gases produced by fire in the trees or vegetation.[7] Early detection is crucial for a successful response by firefighters; thanks to wireless sensor networks, the fire brigade is able to know when a fire has started and how it is spreading.
Greenhouse monitoring

Wireless sensor networks are also used to control the temperature and humidity levels inside commercial greenhouses. When the temperature and humidity drop below specific levels, the greenhouse manager can be notified via e-mail or cell-phone text message, or host systems can trigger misting systems, open vents, turn on fans, or control a wide variety of other system responses.
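
A toy version of the control loop just described (all sensor names and thresholds invented for illustration): readings are compared to limits, and the manager is notified or actuators are triggered.

```python
# Illustrative greenhouse rule loop; thresholds and actions are made up.
LIMITS = {"temperature_c": (18.0, 30.0), "humidity_pct": (60.0, 85.0)}

def check(readings):
    actions = []
    for sensor, value in readings.items():
        low, high = LIMITS[sensor]
        if value < low:
            actions.append(f"notify manager: {sensor} low ({value})")
        elif value > high:
            actions.append(f"trigger actuator: vent/fan for {sensor} ({value})")
    return actions

print(check({"temperature_c": 16.5, "humidity_pct": 90.0}))
```
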
Landslide detection

A landslide detection system makes use of a wireless sensor network to detect the slight movements of soil and changes in various parameters that may occur before or during a landslide. Through the gathered data, it may be possible to foresee the occurrence of a landslide long before it actually happens.

Industrial monitoring

Machine health monitoring

Wireless sensor networks have been developed for machinery condition-based maintenance (CBM), as they offer significant cost savings and enable new functionality. In wired systems, the installation of enough sensors is often limited by the cost of wiring. Previously inaccessible locations, rotating machinery, hazardous or restricted areas, and mobile assets can now be reached with wireless sensors.

Water/wastewater monitoring
There are many opportunities for using wireless sensor networks within the water/wastewater industries. Facilities not wired for
power or data transmission can be monitored using industrial wireless I/O devices and sensors powered using solar panels or
battery packs.
Landfill ground well level monitoring and pump counter

Wireless sensor networks can be used to measure and monitor the water levels within all ground wells in a landfill site and to monitor leachate accumulation and removal. A wireless device and submersible pressure transmitter monitor the leachate level. The sensor information is wirelessly transmitted to a central data-logging system to store the level data, perform calculations, or notify personnel when a service vehicle is needed at a specific well.
Agriculture

Using wireless sensor networks within the agricultural industry is increasingly common; using a wireless network frees the farmer
from the maintenance of wiring in a difficult environment. Gravity feed water systems can be monitored using pressure
transmitters to monitor water tank levels, pumps can be controlled using wireless I/O devices and water use can be measured and
wirelessly transmitted back to a central control center for billing. Irrigation automation enables more efficient water use and
reduces waste.

Structural monitoring

Wireless sensors can be used to monitor movement within buildings and infrastructure such as bridges, flyovers, embankments, and tunnels, enabling engineering practices to monitor assets remotely without the need for costly site visits. Wireless monitoring also has the advantage of providing daily data, whereas traditionally such data was collected weekly or monthly through physical site visits, in some cases involving road or rail closures. It is also far more accurate than any visual inspection that would otherwise be carried out.

Characteristics
The main characteristics of a WSN include:

- Power consumption constraints for nodes using batteries or energy harvesting
- Ability to cope with node failures
- Mobility of nodes
- Dynamic network topology
- Communication failures
- Heterogeneity of nodes
- Scalability to large scales of deployment
- Ability to withstand harsh environmental conditions
- Ease of use
- Unattended operation
Sensor nodes can be imagined as small computers, extremely basic in terms of their interfaces and their components. They usually consist of a processing unit with limited computational power and limited memory, sensors or MEMS (including specific conditioning circuitry), a communication device (usually a radio transceiver or, alternatively, an optical device), and a power source, usually in the form of a battery. Other possible inclusions are energy-harvesting modules, secondary ASICs, and possibly secondary communication devices (e.g., RS-232 or USB).

The base stations are one or more distinguished components of the WSN with much greater computational, energy, and communication resources. They act as a gateway between the sensor nodes and the end user, as they typically forward data from the WSN to a server. Other special components in routing-based networks are routers, designed to compute and distribute the routing tables. Many techniques are used to connect to the outside world, including mobile phone networks, satellite phones, radio modems, and high-power Wi-Fi links.

Platforms

Standards and specifications


Several standards are currently either ratified or under development for wireless sensor networks. There are a number of standardization bodies in the field of WSNs. The IEEE focuses on the physical and MAC layers; the Internet Engineering Task Force works on layers 3 and above. In addition, bodies such as the International Society of Automation provide vertical solutions covering all protocol layers. Finally, there are also several non-standard, proprietary mechanisms and specifications.

Standards are used far less in WSNs than in other computing systems. However, predominant standards commonly used in WSN communications include:

- WirelessHART
- ISA100
- IEEE 1451
- ZigBee / 802.15.4
- IEEE 802.11

Hardware

Main article: sensor node

The main challenge in a WSN is to produce low-cost, tiny sensor nodes. There is an increasing number of small companies producing WSN hardware, and the commercial situation can be compared to home computing in the 1970s. Many of the nodes are still in the research and development stage, particularly their software. Also inherent to sensor network adoption is the use of very low-power methods for data acquisition.

Software

Energy is the scarcest resource of WSN nodes, and it determines the lifetime of WSNs. WSNs are meant to be deployed in large numbers in various environments, including remote and hostile regions, with ad hoc communication as a key feature. For this reason, algorithms and protocols need to address the following issues:

- Lifetime maximization
- Robustness and fault tolerance
- Self-configuration

Some of the important topics in WSN software research are:

- Operating systems
- Security
- Mobility
- Usability – human interfaces for deployment and management, debugging, and end-user control
- Middleware – the design of middle-level primitives between high-level software and the systems

Operating systems

Operating systems for wireless sensor network nodes are typically less complex than general-purpose operating systems. They more strongly resemble embedded systems, for two reasons. First, wireless sensor networks are typically deployed with a particular application in mind, rather than as a general platform. Second, the need for low cost and low power leads most wireless sensor nodes to have low-power microcontrollers, making mechanisms such as virtual memory either unnecessary or too expensive to implement. It is therefore possible to use embedded operating systems such as eCos or uC/OS for sensor networks. However, such operating systems are often designed with real-time properties.

TinyOS[8] is perhaps the first[9] operating system specifically designed for wireless sensor networks. TinyOS is based on an event-driven programming model instead of multithreading. TinyOS programs are composed of event handlers and tasks with run-to-completion semantics. When an external event occurs, such as an incoming data packet or a sensor reading, TinyOS signals the appropriate event handler to handle the event. Event handlers can post tasks that are scheduled by the TinyOS kernel some time later. LiteOS is a newly developed OS for wireless sensor networks which provides UNIX-like abstractions and support for the C programming language. Contiki is an OS which uses a simpler programming style in C while providing advances such as 6LoWPAN and protothreads.
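
To convey the event-driven, run-to-completion model in miniature, here is a sketch in plain Python (not nesC, and with invented names): event handlers run immediately and may post tasks, which a simple kernel loop later runs to completion, one at a time.

```python
from collections import deque

task_queue = deque()

def post(task):
    # Handlers stay short; longer work is deferred as a task.
    task_queue.append(task)

def on_packet(data):
    # Event handler: invoked when a "packet arrives"; posts follow-up work.
    print("handler: packet received")
    post(lambda: print(f"task: processing {data!r} to completion"))

def run_kernel():
    # Run-to-completion scheduler: each task runs fully before the next starts.
    while task_queue:
        task_queue.popleft()()

on_packet(b"\x01\x02")  # simulate the external event
run_kernel()
```
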
Algorithms

The algorithmic approach to modelling, simulating and analyzing WSNs differentiates itself from the protocol approach by the fact
that the idealised mathematical models used are more general and easier to analyze. However, they are sometimes less realistic
than the models used for protocol design, since an algorithmic approach often neglects timing issues, protocol overhead, the
routing initiation phase and sometimes distributed implementation of the algorithms.[citation needed]

Simulation of WSNs

In general, there are two ways to develop simulations of WSNs: use an existing simulation platform, or develop a custom simulator of one's own.

Simulators

At present, agent-based modeling and simulation is the only paradigm which allows the simulation of even complex behavior in the environments of wireless sensors (such as flocking).[10] Network simulators like QualNet, OPNET Modeler, NetSim, and NS2 can be used to simulate a wireless sensor network. Other simulators, like IDEA1 – based on SystemC – have hardware-level libraries that permit system-level simulations by taking low-level constraints into account.

Agent-based simulation of WSNs

Agent-based simulation of wireless sensor and ad hoc networks is a relatively new paradigm. Agent-based modeling (ABM) was originally based on social simulation. A recent article on agent-based simulation published in IEEE Communications Magazine gives examples and tutorials on how to develop custom agent-based simulation models for wireless sensors, mobile robots, and P2P networks in a short period of time (a few hours).

Other concepts

Distributed sensor network

If a centralised architecture is used in a sensor network and the central node fails, the entire network will collapse; however, the reliability of the sensor network can be increased by using a distributed architecture. A distributed architecture is used in WSNs for the following reasons:

1. Sensor nodes are prone to failure.
2. It allows better collection of data.
3. It provides nodes with backup in case the central node fails.

Care must also be taken to handle nodes that sense redundant information and forward data that is of no use. In addition, there is no centralised body to allocate resources, so the nodes have to be self-organised.

Data visualization

The data gathered from wireless sensor networks is usually saved in the form of numerical data at a central base station. Additionally, the Open Geospatial Consortium (OGC) is specifying standards for interoperability interfaces and metadata encodings that enable real-time integration of heterogeneous sensor webs into the Internet, allowing any individual to monitor or control wireless sensor networks through a web browser.

Information fusion

In wireless sensor networks, information fusion, also called data fusion, has been developed for processing sensor data by filtering, aggregating, and making inferences about the gathered data. Information fusion deals with the combination of multiple sources to obtain improved information: cheaper, of greater quality, or of greater relevance.[11] Within the wireless sensor network domain, simple aggregation techniques such as maximum, minimum, and average have been developed for reducing the overall data traffic to save energy.[12]
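
A minimal sketch of the in-network aggregation idea (illustrative names only): instead of forwarding every reading, an intermediate node forwards one summary tuple, cutting the data traffic that drains node batteries.

```python
def aggregate(readings):
    # Collapse many raw readings into one (min, max, mean) summary,
    # so a relay node transmits 3 numbers instead of len(readings).
    return min(readings), max(readings), sum(readings) / len(readings)

child_readings = [21.3, 21.7, 22.1, 21.9, 35.0]  # e.g., temperatures from leaf nodes
lo, hi, avg = aggregate(child_readings)
print(f"forwarding summary: min={lo} max={hi} avg={avg:.2f}")
```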
