
Network Devices (Hub, Repeater, Bridge, Switch, Router, Gateways and Brouter)

1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal over the same network
before the signal becomes too weak or corrupted so as to extend the length to which the signal can be
transmitted over the same network. An important point to be noted about repeaters is that they do not amplify
the signal. When the signal becomes weak, they copy the signal bit by bit and regenerate it at the original
strength. It is a 2 port device.

2. Hub – A hub is basically a multiport repeater. A hub connects multiple wires coming from different
branches, for example, the connector in a star topology which connects different stations. Hubs cannot filter data,
so data packets are sent to all connected devices. In other words, the collision domain of all hosts connected
through a hub remains one. Also, hubs do not have the intelligence to find the best path for data packets, which
leads to inefficiency and wasted bandwidth.

Types of Hub
 Active Hub: These are hubs which have their own power supply and can clean, boost and
relay the signal along the network. An active hub serves both as a repeater and as a wiring centre. These are
used to extend the maximum distance between nodes.
 Passive Hub: These are hubs which collect wiring from the nodes and draw power from an active
hub. They relay signals onto the network without cleaning or boosting them and cannot be used to extend
the distance between nodes.

3. Bridge – A bridge operates at the data link layer. A bridge is a repeater with the added functionality of filtering
content by reading the MAC addresses of source and destination. It is also used for interconnecting two LANs
working on the same protocol. It has a single input and a single output port, thus making it a 2 port device.
Types of Bridges
 Transparent Bridges: These are bridges in which the stations are completely unaware of the
bridge’s existence, i.e. whether or not a bridge is added to or deleted from the network, reconfiguration of
the stations is unnecessary. These bridges make use of two processes: bridge forwarding and bridge learning.
 Source Routing Bridges: In these bridges, the routing operation is performed by the source station and
the frame specifies which route to follow. The host can discover the route by sending a special frame called a
discovery frame, which spreads through the entire network using all possible paths to the destination.

4. Switch – A switch is a multiport bridge with a buffer and a design that boosts its efficiency (a large
number of ports implies less traffic per port) and performance. A switch is a data link layer device. The switch can
perform error checking before forwarding data, which makes it very efficient: it does not forward packets that
have errors, and it forwards good packets selectively to the correct port only. In other words, a switch divides the
collision domain of hosts, but the broadcast domain remains the same.

5. Routers – A router is a device like a switch that routes data packets based on their IP addresses. A router is
mainly a network layer device. Routers normally connect LANs and WANs together and have a dynamically
updating routing table based on which they make decisions on routing the data packets. Routers divide the
broadcast domains of hosts connected through them.
6. Gateway – A gateway, as the name suggests, is a passage to connect two networks together that may work
on different networking models. Gateways basically work as messenger agents that take data from one system,
interpret it, and transfer it to another system. Gateways are also called protocol converters and can operate at
any layer of the networking model. A gateway is generally more complex than a switch or a router.

7. Brouter – Also known as a bridging router, this is a device which combines the features of both a bridge and a router.
It can work either at the data link layer or at the network layer. Working as a router, it is capable of routing packets
across networks; working as a bridge, it is capable of filtering local area network traffic.

Introduction of Internetworking

Internetworking is a combination of two words, inter and networking, which implies a connection between totally
different nodes or segments. This connection is established through intermediary devices such as routers
or gateways. The original term for an internetwork was catenet. This interconnection is often among or
between public, private, commercial, industrial, or governmental networks. Thus, an internetwork
is a collection of individual networks, connected by intermediate networking devices, that functions as
one large network. Internetworking refers to the industry, products, and procedures that meet the challenge of
creating and administering internetworks.
To enable communication, every individual network node or segment is designed with a similar protocol or
communication logic, that is, Transmission Control Protocol (TCP) or Internet Protocol (IP). Once a network
communicates with another network having the same communication procedures, it is called internetworking.
Internetworking was designed to solve the problem of delivering a packet of information across many links.
There is a minute difference between extending a network and internetworking. Merely using a
switch or a hub to connect two local area networks is an extension of the LAN, whereas connecting them via a router
is an example of internetworking. Internetworking is implemented in Layer 3 (the Network Layer) of the
OSI-ISO model. The foremost notable example of internetworking is the Internet.
There are chiefly three units of internetworking:
1. Extranet
2. Intranet
3. Internet
Intranets and extranets may or may not have connections to the Internet. If there is a connection to the Internet, the
intranet or extranet is usually shielded from being accessed from the Internet without
authorization. The Internet is not considered to be a part of the intranet or extranet, although it may
serve as a portal for access to parts of an extranet.
1. Extranet – It is a network of the internetwork that is limited in scope to a single organization or
entity but which also has limited connections to the networks of one or more other, usually but not
necessarily trusted, organizations or entities. It is the lowest level of internetworking, often implemented in a
private area. An extranet may also be categorized as a MAN, WAN, or other type of
network, but it cannot consist of a single local area network, i.e. it must have at least one connection to
an external network.
2. Intranet – An intranet is a set of interconnected networks, using the Internet Protocol and
IP-based tools such as web browsers and FTP tools, that is under the control of a single administrative
entity. That administrative entity closes the intranet to the rest of the world and allows only specific users.
Most commonly, this network is the internal network of a corporation or other enterprise. A large intranet
will usually have its own web server to provide users with browsable information.
3. Internet – A specific internetworking, consisting of a worldwide interconnection of
governmental, academic, public, and private networks based upon the Advanced Research Projects
Agency Network (ARPANET) developed by ARPA of the U.S. Department of Defense. It is also home to
the World Wide Web (WWW) and is referred to as the ‘Internet’ to distinguish it from all other generic
internetworks. Participants in the Internet, or their service providers, use IP addresses obtained from address
registries that control assignments.
Internetworking has evolved as an answer to three key problems: isolated LANs, duplication of resources, and
a lack of network management. Isolated LANs created transmission problems between different
offices or departments. Duplication of resources meant that the same hardware and software had to be provided to
every office or department, as did a separate support staff. The lack of network management meant
that no centralized method of managing and troubleshooting networks existed.
Another form of interconnection of networks often happens within enterprises at the Link Layer of the
networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical interfaces. Such
interconnection is accomplished through network bridges and network switches. This is sometimes
incorrectly termed internetworking; however, the resulting system is simply a larger, single subnetwork, and no
internetworking protocol, such as the Internet Protocol, is needed to traverse these devices.
However, a single network may be converted into an internetwork by dividing the network into
segments and logically dividing the segment traffic with routers. The Internet Protocol is designed to provide an
unreliable packet service across the network. The architecture avoids intermediate network
elements maintaining any state of the network. Instead, this task is allocated to the endpoints of each
communication session. To transfer data reliably, applications must use an
appropriate Transport Layer protocol, such as the Transmission Control Protocol (TCP), which provides a reliable
stream. Some applications use a simpler, connectionless transport protocol, the User Datagram Protocol
(UDP), for tasks that do not need reliable delivery of data or that require real-time service, such as
video streaming or voice chat.
Internetwork Addressing –
Internetwork addresses identify devices individually or as members of a group. Addressing schemes differ based
on the protocol family and the OSI layer. Three kinds of internetwork addresses are commonly
used: data-link layer addresses, Media Access Control (MAC) addresses, and network-layer addresses.
1. Data Link Layer Addresses: A data-link layer address uniquely identifies every physical
network connection of a network device. Data-link addresses are often referred to as physical or hardware
addresses. Data-link addresses usually exist within a flat address space and have a pre-established and
usually fixed relationship to a specific device. End systems generally have just one physical network
connection, and therefore have just one data-link address. Routers and other internetworking devices usually
have multiple physical network connections and so have multiple data-link addresses.
2. MAC Addresses: Media Access Control (MAC) addresses are a subset of data-link
layer addresses. MAC addresses identify network entities in LANs that implement the IEEE MAC addresses
of the data-link layer. MAC addresses are unique for every local area network interface.
MAC addresses are forty-eight bits long and are expressed as twelve hexadecimal digits. The first
six hexadecimal digits, which are administered by the IEEE, identify the manufacturer or
vendor and therefore comprise the Organizational Unique Identifier (OUI). The last six hexadecimal
digits comprise the interface serial number or another value administered by the particular vendor (a minimal
parsing sketch follows this list). MAC addresses are often referred to as burned-in addresses (BIAs) because they
are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the interface card initializes.
3. Network-Layer Addresses: Network addresses usually exist within a hierarchical address space
and are often referred to as virtual or logical addresses. The relationship between a network address and
a device is logical and unfixed; it usually relies either on physical network characteristics or on groupings that
have no physical basis. End systems need one network-layer address for every network-layer protocol
they support. Routers and other internetworking devices need one network-layer address per physical
network connection for every network-layer protocol supported.
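As referenced in the MAC address item above, here is a minimal Python sketch that splits a 48-bit MAC address into its OUI and vendor-assigned parts (the sample address is made up for illustration):

    # Minimal sketch: split a 48-bit MAC address into OUI and device-specific parts.
    # The sample address below is hypothetical.
    def split_mac(mac: str):
        octets = mac.replace("-", ":").split(":")
        assert len(octets) == 6, "a MAC address has six octets (48 bits)"
        oui = ":".join(octets[:3])        # first 24 bits: Organizational Unique Identifier
        device = ":".join(octets[3:])     # last 24 bits: vendor-assigned serial number/value
        return oui, device

    print(split_mac("00:1A:2B:3C:4D:5E"))   # -> ('00:1A:2B', '3C:4D:5E')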

Challenges to Internetworking –

Implementing a functional internetwork is not a certainty. There are several challenging areas, particularly
reliability, connectivity, network management, and flexibility, and each area is
essential in establishing an efficient and effective internetwork. A few of them are:
 The initial challenge lies in trying to connect numerous systems to support
communication between disparate technologies. For example, different sites may use different kinds
of media, or they may operate at different speeds.
 Another essential consideration is the reliable service that must be maintained in an internetwork.
Individual users and whole organizations depend upon consistent, reliable access to network resources.
 Network management must provide centralized support and troubleshooting
capabilities in an internetwork. Configuration, security, performance, and other issues must be
adequately addressed for the internetwork to perform smoothly.
 Flexibility, the final concern, is important for network expansion and for new applications and
services, among other factors.

Analog to Analog Conversion (Modulation)


Analog Signal: An analog signal is any continuous signal for which the time-varying feature of the signal is a
representation of some other time-varying quantity, i.e., it is analogous to another time-varying signal.
Analog to Analog Conversion –
Analog-to-analog conversion, or modulation, is the representation of analog information by an analog signal. It
is a process by which a characteristic of the carrier wave is varied according to the instantaneous
amplitude of the modulating signal. This modulation is generally needed when a bandpass channel is required.
Bandpass refers to a range of frequencies transmitted through a bandpass filter, i.e. a filter that allows
specific frequencies to pass while blocking signals at unwanted frequencies.

Analog to Analog conversion can be done in three ways:


1. Amplitude Modulation
2. Frequency Modulation
3. Phase Modulation
1. AMPLITUDE MODULATION:
Amplitude modulation is the modulation in which the amplitude of the carrier wave is varied according to the
instantaneous amplitude of the modulating signal, keeping the phase and frequency constant. The figure below
shows the concept of amplitude modulation:

AM is normally implemented by using a simple multiplier because the amplitude of the carrier signal needs to
be changed according to the amplitude of the modulating signal.
AM bandwidth:
The modulation creates a bandwidth that is twice the bandwidth of the modulating signal and covers a range
centered on the carrier frequency.
Bandwidth = 2 × fm, where fm is the bandwidth of the modulating signal.
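A minimal numerical sketch of amplitude modulation in Python (all parameters below are assumed for illustration, not taken from the text): the carrier amplitude is scaled by the instantaneous value of the modulating signal, and the occupied bandwidth is twice the message frequency.

    import numpy as np

    # Illustrative parameters (assumptions, not from the text)
    fc, fm, ka = 10_000.0, 500.0, 0.5       # carrier freq (Hz), message freq (Hz), modulation depth
    t = np.linspace(0, 0.01, 10_000)

    message = np.cos(2 * np.pi * fm * t)    # modulating (baseband) signal
    carrier = np.cos(2 * np.pi * fc * t)    # carrier signal
    am = (1 + ka * message) * carrier       # AM: carrier amplitude follows the message

    bandwidth = 2 * fm                      # AM bandwidth = twice the message bandwidth
    print(f"AM bandwidth ~= {bandwidth} Hz, centered on {fc} Hz")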
2. FREQUENCY MODULATION –
Frequency modulation is the modulation in which the frequency of the carrier wave is varied according to the
instantaneous amplitude of the modulating signal, keeping the phase and amplitude constant. The figure below
shows the concept of frequency modulation:
FM is normally implemented by using a voltage-controlled oscillator, as with FSK. The frequency of the
oscillator changes according to the input voltage, which is the amplitude of the modulating signal.

FM bandwidth:
1. The bandwidth of a frequency modulated signal varies with both the deviation and the modulating
frequency. If the modulation index (mf) is less than 0.5, the signal is a narrow band FM signal; if it is
greater than 0.5, it is a wide band FM signal.
2. For a narrow band FM signal, the bandwidth required is twice the maximum frequency of the
modulation; however, for a wide band FM signal the required bandwidth can be very much larger, with
detectable sidebands spreading out over large amounts of the frequency spectrum.
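For wideband FM, the required bandwidth is commonly estimated with Carson's rule, BW ≈ 2(Δf + fm). A small Python sketch with assumed, broadcast-FM-like numbers (these figures are illustrative, not from the text):

    # Carson's rule sketch (values below are assumptions, typical of broadcast FM)
    delta_f = 75_000.0                 # peak frequency deviation in Hz
    fm = 15_000.0                      # highest modulating frequency in Hz

    mf = delta_f / fm                  # modulation index
    bw = 2 * (delta_f + fm)            # Carson's rule: 2 * (deviation + message frequency)
    print(f"modulation index = {mf:.1f}, approximate FM bandwidth = {bw/1000:.0f} kHz")
    # -> modulation index = 5.0, approximate FM bandwidth = 180 kHz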
3. PHASE MODULATION:
Phase modulation is the modulation in which the phase of the carrier wave is varied according to the instantaneous
amplitude of the modulating signal, keeping the amplitude and frequency constant. The figure below shows the
concept of phase modulation:

Phase modulation is practically similar to frequency modulation, but in phase modulation the frequency of the
carrier signal is not directly varied. It is normally implemented by using a voltage-controlled oscillator along with a
differentiator. The frequency of the oscillator changes according to the derivative of the input voltage, which is the
amplitude of the modulating signal.
PM bandwidth:
1. For small amplitude signals, PM is similar to amplitude modulation (AM) and exhibits its
unfortunate doubling of baseband bandwidth and poor efficiency.
2. For a single large sinusoidal signal, PM is similar to FM, and its bandwidth is approximately
2(h + 1) × fm, where h is the modulation index.
Thus, Modulation allows us to send a signal over a bandpass frequency range. If every signal gets its own
frequency range, then we can transmit multiple signals simultaneously over a single channel, all using different
frequency ranges.

Classification of Routing Algorithms

Prerequisite – Fixed and Flooding Routing algorithms

Routing is the process of establishing the routes that data packets must follow to reach the destination. In this
process, a routing table is created which contains information regarding the routes which data packets follow.
Various routing algorithms are used for the purpose of deciding which route an incoming data packet needs to be
transmitted on to reach the destination efficiently.
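To make the idea of a routing table concrete, here is a minimal longest-prefix-match lookup in Python; the prefixes and next hops are hypothetical:

    import ipaddress

    # Hypothetical routing table: destination prefix -> next hop
    routing_table = {
        "10.0.0.0/8":  "192.168.1.1",
        "10.1.0.0/16": "192.168.1.2",
        "0.0.0.0/0":   "192.168.1.254",   # default route
    }

    def next_hop(destination: str) -> str:
        dest = ipaddress.ip_address(destination)
        matches = [ipaddress.ip_network(p) for p in routing_table
                   if dest in ipaddress.ip_network(p)]
        best = max(matches, key=lambda n: n.prefixlen)   # longest prefix wins
        return routing_table[str(best)]

    print(next_hop("10.1.5.7"))   # -> 192.168.1.2 (more specific /16 beats /8)
    print(next_hop("8.8.8.8"))    # -> 192.168.1.254 (default route)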
Classification of Routing Algorithms: The routing algorithms can be classified as follows:
1. Adaptive Algorithms –
These are the algorithms which change their routing decisions whenever the network topology or traffic load
changes. The changes in routing decisions are reflected in the topology as well as the traffic of the network. Also
known as dynamic routing, these algorithms make use of dynamic information such as current topology, load and delay to
select routes. Optimization parameters include distance, number of hops and estimated transit time.

Further these are classified as follows:


 (a) Isolated – In this method, each node makes its routing decisions using the information it
has, without seeking information from other nodes. The sending node does not have information about the status of
a particular link. The disadvantage is that a packet may be sent through a congested part of the network, which may result in delay.
Examples: hot potato routing, backward learning.
 (b) Centralized – In this method, a centralized node has the entire information about the network and
makes all the routing decisions. The advantage is that only one node is required to keep the information about the entire
network; the disadvantage is that if the central node goes down, routing for the entire network fails.
 (c) Distributed – In this method, a node receives information from its neighbours and then takes
the decision about routing the packets. The disadvantage is that a packet may be delayed if the network changes
during the interval between receiving the information and sending the packet.
2. Non-Adaptive Algorithms –
These are the algorithms which do not change their routing decisions once they have been selected. This is also
known as static routing, as the route to be taken is computed in advance and downloaded to the routers when they
are booted.
Further these are classified as follows:
 (a) Flooding – This uses the technique in which every incoming packet is sent out on every
outgoing line except the one it arrived on. One problem with this is that packets may travel in loops, and as a result
a node may receive duplicate packets. These problems can be overcome with the help of sequence
numbers, hop counts and spanning trees (a minimal sketch follows this list).
 (b) Random walk – In this method, packets are sent host by host or node by node to one of their
neighbours at random. This is a highly robust method which is usually implemented by sending packets onto the
link which is least queued.
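As referenced in the flooding item above, here is a minimal Python sketch of flooding with duplicate suppression via sequence numbers and a hop-count limit (the topology and parameters are made up for illustration):

    # Flooding sketch: forward on every link except the incoming one; suppress
    # duplicates with (node, source, sequence) IDs and a hop-count limit.
    # The topology below is hypothetical.
    topology = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "D"], "D": ["B", "C"]}
    seen = set()

    def flood(node, came_from, src, seq, hops_left):
        if hops_left == 0 or (node, src, seq) in seen:
            return                                  # drop duplicates / expired packets
        seen.add((node, src, seq))
        print(f"{node} handles packet ({src}, seq={seq})")
        for neighbor in topology[node]:
            if neighbor != came_from:               # every outgoing line except the arrival line
                flood(neighbor, node, src, seq, hops_left - 1)

    flood("A", came_from=None, src="A", seq=1, hops_left=3)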
Introduction of Firewall in Computer Network

A firewall is a network security device, either hardware or software based, which monitors all incoming and
outgoing traffic and, based on a defined set of security rules, accepts, rejects or drops that specific traffic.
Accept : allow the traffic
Reject : block the traffic but reply with an “unreachable error”
Drop : block the traffic with no reply
A firewall establishes a barrier between secured internal networks and untrusted outside networks, such as the
Internet.

History and Need for Firewall


Before firewalls, network security was performed by Access Control Lists (ACLs) residing on routers. ACLs
are rules that determine whether network access should be granted or denied to a specific IP address.
But ACLs cannot determine the nature of the packet they are blocking. Also, an ACL alone does not have the capacity to
keep threats out of the network. Hence, the firewall was introduced.
Connectivity to the Internet is no longer optional for organizations. However, while accessing the Internet provides
benefits to an organization, it also enables the outside world to interact with the internal network of the
organization. This creates a threat to the organization. In order to secure the internal network from
unauthorized traffic, we need a firewall.
How Firewall Works
A firewall matches the network traffic against the rule set defined in its table. Once a rule is matched, the associated
action is applied to the network traffic. For example, one rule may state that no employee from the HR
department can access data from the code server, while another rule states that the system
administrator can access data from both the HR and technical departments. Rules can be defined on the firewall
based on the necessity and security policies of the organization.
From the perspective of a server, network traffic can be either outgoing or incoming. A firewall maintains a
distinct set of rules for each case. Most outgoing traffic, originating from the server itself, is allowed to
pass. Still, setting rules on outgoing traffic is always better in order to achieve more security and prevent
unwanted communication.
Incoming traffic is treated differently. Most traffic which reaches the firewall uses one of three major
protocols: TCP, UDP or ICMP. All these types have a source address and a destination
address. TCP and UDP also have port numbers; ICMP uses a type code instead of a port number, which
identifies the purpose of the packet.
Default policy: It is very difficult to explicitly cover every possible rule on a firewall. For this reason, a firewall
must always have a default policy. A default policy consists only of an action (accept, reject or drop).
Suppose no rule is defined about SSH connections to the server on the firewall. Then the firewall will follow the
default policy. If the default policy on the firewall is set to accept, then any computer outside your office can
establish an SSH connection to the server. Therefore, setting the default policy to drop (or reject) is always good practice.
Generation of Firewall
Firewalls can be categorized based on their generation.
First Generation – Packet Filtering Firewall: A packet filtering firewall is used to control network access by
monitoring outgoing and incoming packets and allowing them to pass or be stopped based on source and destination
IP addresses, protocols and ports. It analyses traffic at the transport protocol layer (but mainly uses the first three
layers).
Packet filtering firewalls treat each packet in isolation. They have no ability to tell whether a packet is part of an
existing stream of traffic. They can only allow or deny packets based on individual packet headers.

A packet filtering firewall maintains a filtering table which decides whether the packet will be forwarded or
discarded. From the given filtering table, the packets will be filtered according to the following rules (a small
rule-matching sketch follows the list):

1. Incoming packets from network 192.168.21.0 are blocked.


2. Incoming packets destined for internal TELNET server (port 23) are blocked.
3. Incoming packets destined for host 192.168.21.3 are blocked.
4. All well-known services to the network 192.168.21.0 are allowed.
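A minimal Python sketch of how a packet-filtering firewall might evaluate such a table with a default drop policy; the rules loosely mirror the illustrative ones above and do not represent any real firewall's API:

    # Packet-filter sketch: the first matching rule wins; unmatched traffic
    # falls back to the default policy. All rules and addresses are illustrative.
    rules = [
        ("src_prefix", "192.168.21.",  "drop"),    # block packets from network 192.168.21.0
        ("dst_port",   23,             "drop"),    # block incoming TELNET (port 23)
        ("dst_ip",     "192.168.21.3", "drop"),    # block packets destined for one internal host
        ("dst_port",   80,             "accept"),  # allow a well-known service (HTTP)
    ]
    DEFAULT_POLICY = "drop"

    def filter_packet(src_ip, dst_ip, dst_port):
        for field, value, action in rules:
            if field == "src_prefix" and src_ip.startswith(value):
                return action
            if field == "dst_ip" and dst_ip == value:
                return action
            if field == "dst_port" and dst_port == value:
                return action
        return DEFAULT_POLICY

    print(filter_packet("203.0.113.5", "10.0.0.9", 80))   # -> accept
    print(filter_packet("203.0.113.5", "10.0.0.9", 22))   # -> drop (default policy)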
Second Generation – Stateful Inspection Firewall: Stateful firewalls (which perform stateful packet inspection)
are able to determine the connection state of a packet, unlike packet filtering firewalls, which makes them more
efficient. They keep track of the state of network connections travelling across them, such as TCP streams. So the
filtering decisions are based not only on the defined rules, but also on the packet’s history in the state table.

Third Generation – Application Layer Firewall: An application layer firewall can inspect and filter packets
at any OSI layer, up to the application layer. It has the ability to block specific content and to recognize when certain
applications and protocols (like HTTP or FTP) are being misused.
In other words, application layer firewalls are hosts that run proxy servers. A proxy firewall prevents direct
connections between the two sides of the firewall; each packet has to pass through the proxy. It can allow or block
traffic based on predefined rules.
Note: Application layer firewalls can also be used as Network Address Translators (NAT).
Next Generation Firewalls (NGFW): Next generation firewalls are being deployed these days to stop
modern security breaches like advanced malware attacks and application layer attacks. An NGFW includes deep
packet inspection, application inspection, SSL/SSH inspection and many other functionalities to protect the network
from these modern threats.
Types of Firewall
Firewalls are generally of two types: Host-based and Network-based.
Host-based Firewalls: A host-based firewall is installed on each network node and controls each
incoming and outgoing packet. It is a software application or suite of applications, often included as part of the operating
system. Host-based firewalls are needed because network firewalls cannot provide protection inside a trusted
network. A host firewall protects each host from attacks and unauthorized access.
Network-based Firewalls: A network firewall functions at the network level. In other words, these firewalls filter
all incoming and outgoing traffic across the network. They protect the internal network by filtering the traffic
using rules defined on the firewall. A network firewall might have two or more network interface cards (NICs).
A network-based firewall is usually a dedicated system with proprietary software installed.
Both types of firewall have their own advantages.
Bootstrap Protocol
The Bootstrap Protocol (BOOTP) is a computer networking protocol used in Internet
Protocol networks to automatically assign an IP address to network devices from a configuration
server. BOOTP was originally defined in RFC 951.
When a computer that is connected to a network is powered up and boots its operating system,
the system software broadcasts BOOTP messages onto the network to request an IP address
assignment. A BOOTP configuration server assigns an IP address based on the request from a
pool of addresses configured by an administrator.
BOOTP is implemented using the User Datagram Protocol (UDP) as its transport protocol: port
number 67 is used by the (DHCP) server to receive client requests and port number 68 is used by
the client to receive (DHCP) server responses. BOOTP operates only on IPv4 networks.
Historically, BOOTP has also been used for Unix-like diskless workstations to obtain the
network location of their boot image, in addition to the IP address assignment. Enterprises used it
to roll out a pre-configured client (e.g., Windows) installation to newly installed PCs.
Originally requiring the use of a boot floppy disk to establish the initial network connection,
manufacturers of network cards later embedded the protocol in the BIOS of the interface cards as
well as system boards with on-board network adapters, thus allowing direct network booting.
While some parts of BOOTP have been effectively superseded by the Dynamic Host
Configuration Protocol (DHCP), which adds the feature of leases, parts of BOOTP are used to
provide service to the DHCP protocol. DHCP servers also provide the legacy BOOTP
functionality.
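As a rough illustration of the port usage described above, the Python sketch below broadcasts a placeholder request from the client port (68) to the server port (67) over UDP. The payload is a stand-in, not a correctly formed BOOTP message, and binding to port 68 normally requires administrator privileges.

    import socket

    BOOTP_SERVER_PORT = 67   # server listens here for client requests
    BOOTP_CLIENT_PORT = 68   # client listens here for server replies

    # Sketch only: the payload below is a placeholder, not a real BOOTP packet.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind(("", BOOTP_CLIENT_PORT))           # usually needs elevated privileges
    sock.sendto(b"bootp-request-placeholder", ("255.255.255.255", BOOTP_SERVER_PORT))
    sock.close()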

Network Capacity Planning


Network capacity planning is the process of planning a network for utilization, bandwidth,
operations, availability and other network capacity constraints.
It is a type of network or IT management process that assists network administrators in planning
network infrastructure and operations in line with current and future requirements.
Network capacity planning is generally done to identify shortcomings or parameters that can
affect the network’s performance or availability within a predictable future time, usually in
years. Typically, network capacity planning requires information about:

 Current network traffic volumes


 Network utilization
 Type of traffic
 Capacity of current infrastructure
This analysis helps network administrators understand the maximum capability of current
resources and the amount of new resources needed to cater to future requirements. In addition to
technical network infrastructure, network capacity planning may also include planning for
human resources that will manage and/or monitor the network.

Network Growth Planning


Growth potential should be assessed in regard to the following areas:
 Bandwidth requirements.
 Disk/Storage capacity requirements.
 Network segmentation requirements (based on performance and security requirements).
 Performance requirements.
 Security requirements.
 Management/administrative requirements.
 Availability requirements.

Assessing bandwidth and storage capacity requirements


Future bandwidth requirements can be estimated based on the following (a rough worked example follows the list):
 Increases in number of personnel. The more people you have using the network
simultaneously, the more network bandwidth will be required to support them.
 Enhanced productivity. As existing personnel become more efficient and more experienced
and thus able to turn out more work in less time, bandwidth needs may increase to handle the
increased work product being stored on and transferred across the network.
 New, more sophisticated network applications. As your company grows, the need for high
bandwidth applications such as video conferencing will become more likely.
 Geographic expansion. As the company expands to multiple geographic locations, bandwidth
capacity must support communications with distant sites.
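As a rough, hypothetical worked example of turning these factors into a number (every figure below is an assumption for illustration, not a recommendation):

    # Hypothetical capacity estimate: all figures are assumptions for illustration.
    users_future = 150                       # expected headcount after growth
    avg_rate_per_user_mbps = 0.5             # average sustained demand per user
    peak_factor = 3.0                        # allowance for simultaneous peak usage
    video_conf_overhead_mbps = 20.0          # new high-bandwidth application

    needed_mbps = users_future * avg_rate_per_user_mbps * peak_factor + video_conf_overhead_mbps
    print(f"Estimated future bandwidth requirement: {needed_mbps:.0f} Mbps")   # -> 245 Mbps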
Unit-5

Network Troubleshooting
Network troubleshooting is the collective measures and processes used to identify, diagnose and
resolve problems and issues within a computer network.
It is a systematic process that aims to resolve problems and restore normal network operations
within the network.
Network troubleshooting is primarily done by network engineers or administrators to repair or
optimize a network. It is generally done to recover and establish network or Internet connections
on end nodes/devices.

Some of the processes within network troubleshooting include but are not limited to:
 Finding and resolving problems and establishing the Internet/network connection of a
computer/device/node
 Configuring a router, switch or any network management device
 Installing cables or Wi-Fi devices
 Updating firmware on routers and switches
 Removing viruses
 Adding, configuring and reinstalling a network printer

Troubleshooting tools:

Network troubleshooting can be a manual or automated task. When using automated tools, network management
can be done using network diagnostic software. A list of the best network diagnostics and troubleshooting tools
(a small scripted check follows the list):

1. SolarWinds Port Scanner – Free tool to check the ports on your network devices to
ensure that you don’t have unattended ports open.
2. Paessler Network Troubleshooting with PRTG – Infrastructure management system that
includes port monitoring.
3. Ping – Simple command line utility that checks whether a destination is reachable and measures the round-trip time of the connection.
4. Tracert – Free command line utility that lists the probable hops to a network or internet
destination address.
5. Ipconfig – This command line tool reports the IPv4 and IPv6 addresses, subnets, and
default gateways for all network adapters on a PC.
6. Netstat – This tool displays active connections on your computer.
7. Nslookup – Available for Windows, Unix, Linux, and Mac OS, this tool gives you DNS
server diagnostics.
8. Speed and up/down test sites – A list of websites that will test your internet connections.
9. Sysinternals – Set of Microsoft tools for Windows that help troubleshoot and configure
Active Directory.
10. Wireshark – Free packet sniffer that will help you analyze traffic flows.
11. Nmap – Network security and monitoring tool that needs a companion utility, Zenmap, as
a user interface.
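Several of these checks can also be scripted. The sketch below (the host name and port are placeholders) uses Python's standard library to test DNS resolution and basic TCP reachability, roughly what nslookup plus a connection test would report:

    import socket

    # Sketch: the host and port below are placeholders.
    host, port = "example.com", 443

    try:
        ip = socket.gethostbyname(host)              # DNS resolution (what nslookup checks)
        print(f"{host} resolves to {ip}")
        with socket.create_connection((ip, port), timeout=3):
            print(f"TCP connection to {ip}:{port} succeeded")   # basic reachability test
    except OSError as err:
        print(f"Connectivity problem: {err}")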
Network Segmentation
Network segmentation separates different types of traffic logically and prevents characteristics
(normal or abnormal) in one segmented network from affecting another network. Network
segmentation has many beneficial applications in the world of enterprise IT and the Internet
today, but will be a safety-critical function first and foremost in the IoT.
Segmentation also prevents an infected, defective, or malicious device or entity from attacking
other devices and entities in adjacent systems over the network. This applies to both the IoT data
traffic and the IoT applications and systems making use of that data traffic. This is sometimes
referred to as network slicing. There will be two different forms of segmentation, at least. We
can call them north-south isolation and segmentation and east-west micro segmentation.

Advantages of segmentation

 Reduced congestion: Improved performance is achieved because, on a segmented network,
there are fewer hosts per subnetwork, thus minimizing local traffic
 Improved security:
o Broadcasts will be contained to the local network. The internal network structure will not be
visible from outside.
o There is a reduced attack surface available to pivot in if one of the hosts on the network
segment is compromised. Common attack vectors such
as LLMNR and NetBIOS poisoning can be partially alleviated by proper network
segmentation, as they only work on the local network. For this reason it is recommended
to segment the various areas of a network by usage. A basic example would be to split up
web servers, database servers and standard user machines each into their own segment (a small subnetting sketch follows this list).
o By creating network segments containing only the resources specific to the consumers
that you authorise access to, you are creating an environment of least privilege[1][2]
 Containing network problems: Limiting the effect of local failures on other parts of the
network
 Controlling visitor access: Visitor access to the network can be controlled by implementing
VLANs to segregate the network
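As referenced above, here is a small Python sketch of carving one address block into per-role segments with the standard ipaddress module (the block and the role names are hypothetical):

    import ipaddress

    # Hypothetical: split one /24 block into four /26 segments, one per role.
    block = ipaddress.ip_network("10.10.20.0/24")
    roles = ["web servers", "database servers", "user machines", "visitors (VLAN)"]

    for role, segment in zip(roles, block.subnets(new_prefix=26)):
        print(f"{role:20s} -> {segment}")
    # e.g. web servers -> 10.10.20.0/26, database servers -> 10.10.20.64/26, ...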

Windows Internet Naming Service (WINS)


The Windows Internet Naming Service (WINS) converts NetBIOS host names into IP addresses.
It allows Windows machines on a given LAN segment to recognize Windows machines on other
LAN segments.
WINS is similar to DNS in that they both provide name resolution on TCP/IP networks, but
WINS resolves NetBIOS names while DNS resolves domain names. A network will require a
WINS server when there are older applications that use NetBIOS. Windows XP and Windows
2000 started using DNS names instead of (or in addition to) NetBIOS names, and so DNS
emerged at this point as Microsoft's solution to network name resolution.
If a WINS server does not exist, resolution of NetBIOS host names can be done by the
LMHOSTS file, which is a static file on each workstation.
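For reference, an LMHOSTS entry maps an IP address to a NetBIOS name on a single line; the hypothetical entries below illustrate the format (the #PRE keyword preloads an entry into the NetBIOS name cache):

    192.168.1.10    FILESRV01    #PRE
    192.168.1.11    PRINTSRV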

Network threats

There are numerous network threats that can have an adverse impact on a communication network:

1. Viruses and Worms


2. Trojan Horses
3. SPAM
4. Phishing
5. Packet Sniffers
6. Maliciously Coded Websites
7. Password Attacks
8. Hardware Loss and Residual Data Fragments
9. Shared Computers
10. Zombie Computers and Botnets.

A virus is a "program or piece of code that is loaded onto a computer without the user's knowledge and
runs against his wishes." Viruses can cause huge damage to computers. With respect to a network, if a
virus is downloaded then all the computers in the network could be affected, because the virus
would make copies of itself and spread across the network. A worm is similar to a virus, but a
worm can run by itself whereas a virus needs a host program to run. To protect against worms, it is
necessary to install a security suite, such as Kaspersky Total Protection, that protects the
computer against threats such as viruses and worms.
A Trojan Horse is "a program in which malicious or harmful code is contained inside apparently
harmless programming or data in such a way that it can get control and do its chosen form of
damage, such as ruining the file allocation table on the user's hard disk." In a network, if a Trojan
Horse is installed on a computer and interferes with the file allocation table, it could cause
enormous damage to all computers of that network. To be protected, security
professionals must have security suites, such as Norton Internet Security, that will prevent
Trojan Horses from being downloaded.
SPAM is "flooding the Internet with many copies of the same message, in an attempt to force the
message on people who would not otherwise choose to receive it." SPAM filters are an effective
way to stop SPAM; these filters come with most of the online e-mail providers. You can also
buy a variety of SPAM filters that work efficiently.
Phishing is also a security threat that misuses a user's valuable information. Phishing is
an e-mail fraud method in which the perpetrator sends out legitimate-looking emails in an
attempt to gather personal and financial information from recipients. Phishing is one of the worst
security threats over a network because many people that use computers linked to a network
are amateurs and would be very susceptible to giving out information that could lead to situations
such as theft of money or identity theft. It is recommended to use phishing filters to filter out this
unwanted mail and to prevent the threat.
A packet sniffer is a device or program that allows snooping on traffic travelling between
networked computers. The packet sniffer will capture data that is addressed to other machines,
saving it for later analysis. On a network, a packet sniffer can filter out personal information and
this can lead to problems such as identity theft, so it is a major security threat to a network. When
strong encryption is used, all packets are unreadable to anyone except the destination address.
This makes packet sniffers ineffective. So, to protect against sniffing, it is important to use strong
encryption.
Some websites across the net contain code that is malicious.
Malicious code is "programming code that is capable of causing harm to availability, integrity of
code or data, or confidentiality in a computer system." AVG reports that "300,000 infected sites
appear per day" (PC Advisor, 2009). To protect the system, it is advised to use a security suite,
such as AVG, which can detect infected sites and try to prevent the user from entering the site.
Password attacks are attacks by hackers who are able to determine or crack the passwords to
different protected electronic areas. Many systems on a network are password protected and
hence it would be easy for such a hacker to break into the systems and steal data. This may be the
easiest way to get private information, because people are able to get software online that obtains
the password. Currently, there is no software that fully prevents password attacks.
Hardware loss and residual data fragments are also major security threats for companies and
governments. For example, if a number of laptops holding client details are stolen from a bank,
this would enable the robbers to get personal information about clients and perhaps steal the
clients' identities. This is an increasing concern, and at present the only solution is to keep data
and hardware under strict surveillance.
Shared computers also pose a threat. Shared computing involves sharing a computer with one or
more people. There are a number of suggestions when using shared computers, which include:

o Do not check the "Remember my ID on this computer" box.


o Never leave a computer unattended while signed-in.
o Always sign out completely.
o Clear the browser's cache.
o Keep an eye out for "shoulder surfers".
o Avoid confidential transactions.
o Be wary of spyware.
o Never save passwords.
o Change passwords often.
A zombie computer, or "drone", is a computer that has been secretly compromised by hacking
tools which allow a third party to control the computer and its resources remotely. A hacker
could hack into a computer, control it and obtain data. Antivirus software can help prevent this
type of threat by keeping computers from becoming zombies.
A botnet is a number of Internet computers that, although their owners are unaware of it, have
been set up to forward transmissions (including spam or viruses) to other computers on the
Internet. This is a major security threat on a network because the network, unknown to anyone,
could be acting as a hub that forwards malicious files to other computers. To protect from this
threat, Network Intrusion Prevention (NIP) systems must be installed.
Network security is a wide arena, and managing it is a complicated task for the network security manager. There
are still threats, such as password attacks, that have no complete deterrence.
To summarize, internal security organisations in India and around the globe have to face
unparalleled challenges, such as the need to tackle crime, address the increasing challenge of
transnational criminal networks and the ongoing threat of international and domestic terrorism,
cybercrime, money laundering, narcoterrorism and human trafficking. Over many years, India's
internal security landscape has seen dramatic changes. The Ministry of Home Affairs has
already taken effective measures to strengthen the national security apparatus and
communication and information management systems. All internal security activities should be
underpinned by vigorous information management to safeguard the effective use of resources
and data assets. Nevertheless, security agencies face challenges at every stage of information
management, such as creation, collection, storage, and communication. To deal with such
challenges, security agencies must develop robust and automated information management and
install various protective measures to protect against cyber threats.
