Short view of
Network Management & Information Security
Edited by Prof. Kaushal Borisagar

Chapter 1
Explain the CIA Model

Confidentiality
Confidentiality is about protecting your most sensitive information from
unauthorized access. Roughly synonymous with privacy as a security
concern, it is the first component of the CIA Triad.
Protecting confidentiality hinges upon defining and enforcing appropriate access levels
for information. Doing so often involves separating information into discrete collections
organized by who should have access to it and how sensitive it is (i.e., how much and
what type of damage you would suffer if confidentiality was breached).
Some of the most commonly used means of managing confidentiality on individual
systems include traditional Unix file permissions, access control lists, and both file and
volume encryption.
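As a small illustration of the first of these mechanisms, the following Python sketch (the file name is a placeholder) uses traditional Unix file permissions to restrict a sensitive file to its owner:

    import os
    import stat

    # Give the owner read/write access and remove all group and other
    # permissions: equivalent to mode 0o600 on the command line.
    os.chmod("secrets.txt", stat.S_IRUSR | stat.S_IWUSR)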
Integrity
The I in CIA stands for Integrity, specifically data integrity. The key to this
component of the CIA Triad is protecting data from modification or deletion by
unauthorized parties, and ensuring that when authorized people make changes that
should not have been made, the damage can be undone.
Some data should not be inappropriately modifiable at all, such as user account controls,
because even a momentary change can lead to significant service interruptions and
confidentiality breaches. Other data, such as user files, must be much more available for
modification than such strict control would allow, but changes should be reversible as much as
reasonably possible in case of changes that may later be regretted (as in the case of
accidentally deleting the wrong files). For circumstances where changes should be easy
for authorized personnel, but easily undone, version control systems and more traditional
backups are among the most common measures used to ensure integrity. Traditional Unix
file permissions, and even more limited file permissions systems like the read-only file
flag in MS Windows 98, can also be an important factor in single system measures for
protecting data integrity.
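Checksums are another common single-system integrity measure. As a minimal sketch in Python (the file name is hypothetical), record a SHA-256 digest of a file at a known-good point, then recompute it later to detect unauthorized modification:

    import hashlib

    def sha256_of(path):
        # Hash the file in chunks so large files do not exhaust memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    baseline = sha256_of("important.dat")   # record at a known-good point
    # ... later ...
    if sha256_of("important.dat") != baseline:
        print("Integrity check failed: the file has been modified.")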
Availability
The last component in the CIA Triad refers to the Availability of your data. Systems,
access channels, and authentication mechanisms must all be working properly for the
information they provide and protect to be available when needed.
High Availability systems are those computing resources whose architectures are
specifically oriented toward improving availability. Depending on the specific HA
system design, it might be built to ride out power outages, upgrades, and hardware
failures; it might manage multiple network connections to route around network
outages; or it might be designed to deal with potential availability problems such as
Denial of Service attacks.
Many approaches to availability improvement exist, such as HA clusters, failover
redundancy systems, and rapid disaster recovery capabilities, as in the case of
image-based network boot systems. If your business model or other needs require
maximum effective uptime, such options should be investigated in depth.
What are Threats & Vulnerabilities?
Threats
A significant security problem for networked systems is hostile, or at least unwanted,
trespass by users or software. User trespass can take the form of unauthorized logon to a
machine or, in the case of an authorized user, acquisition of privileges or performance of
actions beyond those that have been authorized. Software trespass can take the form of a
virus, worm, or Trojan horse.
All these attacks relate to network security because system entry can be achieved by
means of a network. However, trespass is not confined to network-based attacks. A
user with access to a local terminal may attempt trespass without using an intermediate
network. System trespass is an area in which the concerns of network security and
computer security overlap.
One of the most publicized threats to security is the intruder, generally referred to as a
hacker or cracker. The other is viruses. The intruder can be a masquerader: an
individual who is not authorized to use the computer and who penetrates a system's
access controls to exploit a legitimate user's account. The misfeasor is a legitimate user
who accesses data, programs, or resources for which such access is not authorized. A
clandestine user is one who seizes supervisory control of the system and uses this control to
evade auditing and access controls or to suppress audit collection. The masquerader is
likely to be an outsider; the misfeasor generally is an insider; and the clandestine user can
be either an outsider or an insider.
The intruder needs information in the form of a user password. Then, he can log in to a
system and exercise all the privileges accorded to the legitimate user. Some effort is
needed for a potential intruder to learn passwords. Among the techniques, we have:
Try default passwords used with standard accounts that are shipped with the system
Exhaustively try all short passwords
Collect information about users, such as full names, names of spouses and children,
books in the office, etc.
Try users' phone numbers, social security numbers, and room numbers
Try license plate numbers
Use a Trojan horse
Guessing attacks are feasible, and indeed highly effective, when a large number of
guesses can be attempted, and each guess verified, without the guess process being
detectable. The professional intruder is unlikely to try those crude guessing methods.
The front line of defense against intruders is the password system. The password serves
to authenticate the ID of the individual logging on to the system. The ID also determines
the privileges accorded to the user. The routine used to encrypt stored passwords is
designed to discourage guessing attacks, but password length is only part of the
problem. Many people, when
permitted to choose their own password, pick a password that is guessable, such as their
own name, their street name, etc.
The goal for prevention is then to eliminate guessable passwords while allowing the user
to select a password that is memorable. Four basic techniques are in use:
User education
Computer-generated password
Reactive password checking
Proactive password checking
One rule should be enforced: all passwords must be at least eight characters long and
must include at least one each of uppercase letters, lowercase letters, numeric digits,
and punctuation marks. A minimal checker for this rule is sketched below.
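The following sketch (Python, purely illustrative of the rule above, not a complete proactive checker) rejects any password shorter than eight characters or missing one of the four character classes:

    import string

    def acceptable(password):
        # Enforce: length >= 8 with at least one uppercase letter,
        # one lowercase letter, one digit, and one punctuation mark.
        return (len(password) >= 8
                and any(c.isupper() for c in password)
                and any(c.islower() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password))

    print(acceptable("Tr0ub&dor"))   # True
    print(acceptable("password"))    # False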

Vulnerabilities
Network vulnerabilities are present in every system. Network technology advances so
rapidly that it can be very difficult to eradicate vulnerabilities altogether; the best one
can hope for, in many cases, is simply to minimize them. Networks are vulnerable to
slowdowns due to both internal and external factors. Internally, networks can be
affected by overextension and bottlenecks; externally, threats include DoS/DDoS
attacks and network data interception. The execution of arbitrary commands can lead
to system malfunction, slowed performance, and even failure. Indeed, total system
failure is the largest threat posed by a compromised system, so understanding possible
vulnerabilities is critical for administrators.
Internal network vulnerabilities result from overextension of bandwidth (user needs
exceeding total resources) and bottlenecks (user needs exceeding resources in specific
network sectors). These problems can be addressed by network management systems
and utilities such as traceroute, which allow administrators to pinpoint the location of
network slowdowns. Traffic can then be rerouted within the network architecture to
increase speed and functionality.
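Such checks can be scripted. The hedged sketch below simply wraps the system traceroute utility (assumed to be installed on a Unix-like host; Windows uses tracert instead) so an administrator can capture the hop-by-hop path toward a slow destination; the target address is a placeholder:

    import subprocess

    def trace(host):
        # Invoke the OS traceroute and capture its hop-by-hop output;
        # a hop with sharply higher latency suggests where the slowdown is.
        result = subprocess.run(["traceroute", host],
                                capture_output=True, text=True, timeout=120)
        return result.stdout

    print(trace("192.0.2.1"))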
External Network Vulnerabilities
DoS and DDoS attacks are external attacks mounted by a single attacker or a number
of coordinated attackers, respectively. Designed to slow down or disable networks
altogether, these attacks are among the most serious threats that networks face.
Administrators must use tools to monitor network performance in order to catch these
threats as soon as possible. Many monitoring systems are configured to send alarms
or alerts to administrators when such attacks occur, allowing for network access by
intruders to be quickly terminated.
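One simple idea behind such monitoring is counting requests per source address over a sliding time window. The Python sketch below is illustrative only; the window and threshold values are arbitrary assumptions to be tuned for real traffic:

    import time
    from collections import defaultdict, deque

    WINDOW = 10      # seconds in the sliding window
    THRESHOLD = 100  # requests allowed per source per window

    hits = defaultdict(deque)

    def record(src_ip):
        # Log one request; report whether this source now looks like a flood.
        now = time.time()
        q = hits[src_ip]
        q.append(now)
        while q and now - q[0] > WINDOW:   # drop timestamps outside the window
            q.popleft()
        return len(q) > THRESHOLD

An alerting system would call record() for every incoming request and raise an alarm whenever it returns True.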
Data interception is another of the most common network vulnerabilities, for both
LANs and WLANs. Hackers within range of a WLAN workstation can infiltrate a
secure session, and monitor or change the network data for the purpose of accessing
sensitive information or altering the operation of the network. User authentication
systems are used to keep such interception from occurring. Firewalls can keep
unauthorized users from accessing the network in the first place, while base station
discovery scans allow for the rooting out of intruders on a given network.
Unauthorized access

Unauthorized Access is when a person who does not have permission to connect
to or use a system gains entry in a manner unintended by the system owner. The
popular term for this is hacking.

How did this happen?

The specifics are different for each individual event but it could happen in any
number of ways. Usually access is gained via unpatched software or other known
vulnerabilities.
Impersonation
->Impersonation is the act of assuming a different identity on a temporary basis so
that a different security context or set of credentials can be used to access a resource.
->False routing advertisements can redirect traffic away from the intended
destination and instead direct it to a site that masquerades as the destination
service. This form of masquerading is used to gather otherwise confidential
information from users of the original service.
->An act whereby one entity assumes the identity and privileges of another entity
without restrictions and without any indication visible to the recipients of the
impersonator's calls that delegation has taken place. Impersonation is a case of simple
delegation.
->Attempting to impersonate any person, using forged headers or other identifying
information, is prohibited. The use of anonymous remailers and nicknames does not
constitute impersonation.
->Impersonating any person or entity, falsely stating or otherwise misrepresenting
your affiliation with any person or entity (including ExReviews), or forging headers
or otherwise manipulating identifiers in order to disguise the origin of any
submissions to us or through the Site is likewise prohibited.

Denial of Service
What is a denial-of-service (DoS) attack?
In a denial-of-service (DoS) attack, an attacker attempts to prevent legitimate users from
accessing information or services. By targeting your computer and its network
connection, or the computers and network of the sites you are trying to use, an attacker
may be able to prevent you from accessing email, websites, online accounts (banking,
etc.), or other services that rely on the affected computer.
The most common and obvious type of DoS attack occurs when an attacker "floods" a
network with information. When you type a URL for a particular website into your
browser, you are sending a request to that site's computer server to view the page. The
server can only process a certain number of requests at once, so if an attacker overloads
the server with requests, it can't process your request. This is a "denial of service"
because you can't access that site.
An attacker can use spam email messages to launch a similar attack on your email
account. Whether you have an email account supplied by your employer or one available
through a free service such as Yahoo or Hotmail, you are assigned a specific quota,
which limits the amount of data you can have in your account at any given time. By
sending many, or large, email messages to the account, an attacker can consume your
quota, preventing you from receiving legitimate messages.
What is a distributed denial-of-service (DDoS) attack?
In a distributed denial-of-service (DDoS) attack, an attacker may use your computer to
attack another computer. By taking advantage of security vulnerabilities or weaknesses,
an attacker could take control of your computer. He or she could then force your
computer to send huge amounts of data to a website or send spam to particular email
addresses. The attack is "distributed" because the attacker is using multiple computers,
including yours, to launch the denial-of-service attack.
How do you avoid being part of the problem?
Unfortunately, there are no effective ways to prevent being the victim of a DoS or DDoS
attack, but there are steps you can take to reduce the likelihood that an attacker will use
your computer to attack other computers:

Install and maintain anti-virus software (see Understanding Anti-Virus Software for
more information).
Install a firewall, and configure it to restrict traffic coming into and leaving your
computer (see Understanding Firewalls for more information).
Follow good security practices for distributing your email address (see Reducing
Spam for more information). Applying email filters may help you manage
unwanted traffic.

How do you know if an attack is happening?


Not all disruptions to service are the result of a denial-of-service attack. There may be
technical problems with a particular network, or system administrators may be
performing maintenance. However, the following symptoms could indicate a DoS or
DDoS attack:

unusually slow network performance (opening files or accessing websites)
unavailability of a particular website
inability to access any website
dramatic increase in the amount of spam you receive in your account

What do you do if you think you are experiencing an attack?
Even if you do correctly identify a DoS or DDoS attack, it is unlikely that you will be
able to determine the actual target or source of the attack. Contact the appropriate
technical professionals for assistance.
If you notice that you cannot access your own files or reach any external websites
from your work computer, contact your network administrators. This may indicate
that your computer or your organization's network is being attacked.
If you are having a similar experience on your home computer, consider
contacting your internet service provider (ISP). If there is a problem, the ISP
might be able to advise you of an appropriate course of action.

What is malicious software?
Malicious software (malware) is any software that gives partial to full control of your
computer to do whatever the malware creator wants. Malware can be a virus, worm,
trojan, adware, spyware, rootkit, etc. The damage done can vary from something as
slight as changing the author's name on a document to full control of your machine
without your ability to easily find out. Most malware requires the user to initiate its
operation. Some vectors of attack include attachments in e-mails, browsing a malicious
website that installs software after the user clicks OK on a pop-up, and vulnerabilities
in the operating system or programs. Malware is not limited to one operating system.
Malware types can be categorized as follows: viruses, worms, trojans, and backdoors
seek to infect and spread themselves to create more havoc. Adware and spyware seek to
embed themselves to watch what the user does and act upon that data. Rootkits seek to
give the attacker full access to your machine to do what they want.
What is a trap door?
Points to remember:
Method of bypassing normal authentication methods
Remains hidden to casual inspection
Can be a new program to be installed
Can modify an existing program
Also known as a Back Door

A trap door is a means of access to a computer program that bypasses security
mechanisms. A programmer may sometimes install a back door so that the program can
be accessed for troubleshooting or other purposes. However, attackers often use back
doors that they detect or install themselves, as part of an exploit. In some cases, a worm is
designed to take advantage of a back door created by an earlier attack. For
example, Nimda gained entrance through a back door left by Code Red.
Whether installed as an administrative tool or a means of attack, a back door is a security
risk, because there are always crackers out there looking for any vulnerability to exploit.
In her article "Who gets your trust?" security consultant Carole Fennelly uses an analogy
to illustrate the situation: "Think of approaching a building with an elaborate security
system that does bio scans, background checks, the works. Someone who doesn't have
time to go through all that might just rig up a back exit so they can step out for a smoke,
and then hope no one finds out about it."
What is a logic bomb?
Points to remember:
Piece of code that executes itself when pre-defined conditions are met
Logic Bombs that execute on certain days are known as Time Bombs
The code performs some payload not expected by the user
Shareware that deactivates itself is not a logic bomb
Some of the very first viruses had logic bombs
The Friday the 13th Virus duplicated itself every Friday and on the 13th of the
month, causing slowdowns on networks
The Michelangelo Virus, one of the first viruses to get news coverage, executed
itself on March 6th and tried to damage hard disks
A logic bomb could also be programmed to wait for a certain message from the
programmer. The logic bomb could, for example, check a web site once a week for a
certain message. When the logic bomb sees that message, or when the logic
bomb stops seeing that message, it activates and executes its code.
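To make the trigger mechanism concrete, here is a deliberately harmless Python skeleton of a date-triggered logic bomb; the destructive payload is replaced with a print statement, and the trigger date echoes the Michelangelo example above:

    import datetime

    def check_trigger():
        # The code lies dormant until its pre-defined condition is met.
        today = datetime.date.today()
        if today.month == 3 and today.day == 6:   # March 6, as with Michelangelo
            # A real logic bomb would run a destructive payload here.
            print("Trigger condition met.")

    check_trigger()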
A logic bomb can also be programmed to activate on a wide variety of other variables,
such as when a database grows past a certain size or a user's home directory is deleted.
The most dangerous form of the logic bomb is one that activates when
something doesn't happen. Imagine a suspicious and unethical system administrator who
creates a logic bomb which deletes all of the data on a server if he doesn't log in for a
month. The system administrator programs the logic bomb with this logic because he
knows that if he is fired, he won't be able to get back into the system to set his logic
bomb. One day on his way to work, our suspicious and unethical system administrator is
hit by a bus. Three weeks later, his logic bomb goes off and the server is wiped clean.
The system administrator meant for the logic bomb to explode if he was fired; he did not
foresee that he would be hit by a bus.
Because a logic bomb does not replicate itself, it is very easy to write a logic bomb
program. This also means that a logic bomb will not spread to unintended victims. In
some ways, a logic bomb is the most civilized programmed threat, because a logic bomb
must be targeted against a specific victim.
The classic use for a logic bomb is to ensure payment for software. If payment is not
made by a certain date, the logic bomb activates and the software automatically deletes
itself. A more malicious form of that logic bomb would also delete other data on the
system.
What is a Trojan Horse?
A destructive program that masquerades as a benign application. Unlike viruses, Trojan
horses do not replicate themselves, but they can be just as destructive. One of the most
insidious types of Trojan horse is a program that claims to rid your computer of viruses
but instead introduces viruses onto your computer. Trojan horses are classified
based on how they breach systems and the damage they cause. The seven
main types of Trojan horses are: Remote Access Trojans, Data Sending Trojans,
Destructive Trojans, Proxy Trojans, FTP Trojans, Security Software Disabler Trojans,
and Denial-of-Service (DoS) Trojans.
In computers, a Trojan horse is a program in which malicious or harmful code is
contained inside apparently harmless programming or data in such a way that it can get
control and do its chosen form of damage, such as ruining the file allocation table on
your hard disk. In one celebrated case, a Trojan horse was a program that was supposed
to find and destroy computer viruses. A Trojan horse may be widely redistributed as part
of a computer virus.
The term comes from Greek mythology about the Trojan War, as told in the Aeneid by
Virgil and mentioned in the Odyssey by Homer. According to legend, the Greeks
presented the citizens of Troy with a large wooden horse in which they had secretly
hidden their warriors. During the night, the warriors emerged from the wooden horse and
overran the city.
What is Virus?
In computers, a virus is a program or programming code that replicates by being copied
or initiating its copying to another program, computer boot sector or document. Viruses
can be transmitted as attachments to an e-mail note or in a downloaded file, or be present
on a diskette or CD. The immediate source of the e-mail note, downloaded file, or
diskette you've received is usually unaware that it contains a virus. Some viruses wreak
their effect as soon as their code is executed; other viruses lie dormant until
circumstances cause their code to be executed by the computer. Some viruses are benign
or playful in intent and effect ("Happy Birthday, Ludwig!") and some can be quite
harmful, erasing data or causing your hard disk to require reformatting. A virus that
replicates itself by resending itself as an e-mail attachment or as part of a network
message is known as a worm.
Generally, there are three main classes of viruses:
File infectors. Some file infector viruses attach themselves to program files, usually
selected .COM or .EXE files. Some can infect any program for which execution is
requested, including .SYS, .OVL, .PRG, and .MNU files. When the program is loaded,
the virus is loaded as well. Other file infector viruses arrive as wholly-contained
programs or scripts sent as an attachment to an e-mail note.
System or boot-record infectors. These viruses infect executable code found in certain
system areas on a disk. They attach to the DOS boot sector on diskettes or the Master
Boot Record on hard disks. A typical scenario (familiar to the author) is to receive a
diskette from an innocent source that contains a boot disk virus. When your operating
system is running, files on the diskette can be read without triggering the boot disk virus.
However, if you leave the diskette in the drive, and then turn the computer off or reload
the operating system, the computer will look first in your A drive, find the diskette with
its boot disk virus, load it, and make it temporarily impossible to use your hard disk.
(Allow several days for recovery.) This is why you should make sure you have a bootable
floppy.
Macro viruses. These are among the most common viruses, and they tend to do the least
damage. Macro viruses infect your Microsoft Word application and typically insert
unwanted words or phrases.
The best protection against a virus is to know the origin of each program or file you load
into your computer or open from your e-mail program. Since this is difficult, you can
buy anti-virus software that can screen e-mail attachments and also check all of your files
periodically and remove any viruses that are found. From time to time, you may get an
e-mail message warning of a new virus. Unless the warning is from a source you
recognize, chances are good that the warning is a virus hoax.
The computer virus, of course, gets its name from the biological virus. The word itself
comes from a Latin word meaning slimy liquid or poison.
What is a Worm?
In a computer, a worm is a self-replicating virus that does not alter files but resides in
active memory and duplicates itself. Worms use parts of an operating system that are
automatic and usually invisible to the user. It is common for worms to be noticed only
when their uncontrolled replication consumes system resources, slowing or halting other
tasks.
This term is not to be confused with WORM (write once, read many).
A computer worm is a self-replicating malware computer program, which uses
a computer network to send copies of itself to other nodes (computers on the network)
and it may do so without any user intervention. This is due to security shortcomings
on the target computer. Unlike a computer virus, it does not need to attach itself to an
existing program. Worms almost always cause at least some harm to the network,
even if only by consuming bandwidth, whereas viruses almost always corrupt or
modify files on a targeted computer.
Security Strategies & Processes
The security methodology described in this document is designed to help security
professionals develop a strategy to protect the availability, integrity, and
confidentiality of data in an organization's information technology (IT) system. It will be
of interest to information resource managers, computer security officials, and
administrators, and of particular value to those trying to establish computer security
policies. The methodology offers a systematic approach to this important task and, as a
final precaution, also involves establishing contingency plans in case of a disaster.
Data in an IT system is at risk from various sources: user errors and malicious and
non-malicious attacks. Accidents can occur, and attackers can gain access to the system
and disrupt services, render systems useless, or alter, delete, or steal information.
An IT system may need protection for one or more of the following aspects of data:

Confidentiality. The system contains information that requires protection from
unauthorized disclosure. Examples: timed dissemination information (for
example, crop report information), personal information, and proprietary business
information.
Integrity. The system contains information that must be protected from
unauthorized, unanticipated, or unintentional modification. Examples: census
information, economic indicators, or financial transactions systems.
Availability. The system contains information or provides services that must be
available on a timely basis to meet mission requirements or to avoid substantial
losses. Examples: systems critical to safety, life support, and hurricane
forecasting.

Security administrators need to decide how much time, money, and effort needs to be
spent in order to develop the appropriate security policies and controls. Each organization
should analyze its specific needs and determine its resource and scheduling requirements
and constraints. Computer systems, environments, and organizational policies are
different, making each organization's computer security services and strategy unique.
However, the principles of good security remain the same, and this document focuses
on those principles.
Although a security strategy can save the organization valuable time and provide
important reminders of what needs to be done, security is not a one-time activity. It is an
integral part of the system lifecycle. The activities described in this document generally
require either periodic updating or appropriate revision. These changes are made when
configurations and other conditions and circumstances change significantly or when
organizational regulations and policies require changes. This is an iterative process. It is
never finished and should be revised and tested periodically.
Overview of How to Compile a Security Strategy
Reviewing Current Policies
Establishing an effective set of security policies and controls requires using a strategy to
determine the vulnerabilities that exist in an organization's computer systems and in the current
security policies and controls that guard them. The current status of computer security
policies can be determined by reviewing the list of documentation that follows. The
review should take notice of areas where policies are lacking as well as examine
documents that exist:
Physical computer security policies such as physical access controls.
Network security policies (for example, e-mail and Internet policies).
Data security policies (access control and integrity controls).
Contingency and disaster recovery plans and tests.
Computer security awareness and training.
Computer security management and coordination policies.
Other documents that contain sensitive information, such as:
Computer BIOS passwords.
Router configuration passwords.
Access control documents.
Other device management passwords.

Identifying Assets and Vulnerabilities to Known Threats
Assessing an organization's security needs also includes determining its vulnerabilities to
known threats. This assessment entails recognizing the types of assets that an
organization has, which will suggest the types of threats it needs to protect itself against.
Following are examples of some typical asset/threat situations:

The security administrator of a bank knows that the integrity of the bank's
information is a critical asset and that fraud, accomplished by compromising this
integrity, is a major threat. Fraud can be attempted by inside or outside attackers.
The security administrator of a Web site knows that supplying information
reliably (data availability) is the site's principal asset. The threat to this
information service is a denial of service attack, which is likely to come from an
outside attacker.
A law firm security administrator knows that the confidentiality of its information
is an important asset. The threat to confidentiality is intrusion attacks, which
might be launched by inside or outside attackers.
A security administrator in any organization knows that the integrity of
information on the system could be threatened by a virus attack. A virus could be
introduced by an employee copying games to his work computer or by an outsider
in a deliberate attempt to disrupt business functions.

Identifying Likely Attack Methods, Tools, and Techniques


Listing the threats (and most organizations will have several) helps the security
administrator to identify the various methods, tools, and techniques that can be used in an
attack. Methods can range from viruses and worms to password and e-mail cracking. It is
important that administrators update their knowledge of this area on a continual basis,
because new methods, tools, and techniques for circumventing security measures are
constantly being devised.
Establishing Proactive and Reactive Strategies


For each method, the security plan should include a proactive strategy as well as
a reactive strategy.
The proactive or pre-attack strategy is a set of steps that helps to minimize existing
security policy vulnerabilities and develop contingency plans. Determining the damage
that an attack will cause on a system and the weaknesses and vulnerabilities exploited
during this attack helps in developing the proactive strategy.
The reactive strategy or post-attack strategy helps security personnel to assess the
damage caused by the attack, repair the damage or implement the contingency plan
developed in the proactive strategy, document and learn from the experience, and get
business functions running as soon as possible.
Testing
The last element of a security strategy, testing and reviewing the test outcomes, is carried
out after the reactive and proactive strategies have been put into place. Performing
simulation attacks on a test or lab system makes it possible to assess where the various
vulnerabilities exist and adjust security policies and controls accordingly.
These tests should not be performed on a live production system because the outcome
could be disastrous. Yet, the absence of labs and test computers due to budget restrictions
might preclude simulating attacks. In order to secure the necessary funds for testing, it is
important to make management aware of the risks and consequences of an attack as well
as the security measures that can be taken to protect the system, including testing
procedures. If possible, all attack scenarios should be physically tested and documented
to determine the best possible security policies and controls to be implemented.
Certain attacks, such as natural disasters like floods and lightning, cannot be tested,
although a simulation will help. For example, simulate a fire in the server room that has
resulted in all the servers being damaged and lost. This scenario can be useful for testing
the responsiveness of administrators and security personnel, and for ascertaining how
long it will take to get the organization functional again.
Testing and adjusting security policies and controls based on the test results is an iterative
process. It is never finished and should be evaluated and revised periodically so that
improvements can be implemented.
The Incident Response Team
Good practice calls for forming an incident response team. The incident response team
should be involved in the proactive efforts of the security professional. These include:

Developing incident handling guidelines.
Identifying software tools for responding to incidents/events.
Researching and developing other computer security tools.
Conducting training and awareness activities.
Performing research on viruses.
Conducting system attack studies.
These efforts will provide knowledge that the organization can use, and information to
issue, before and during incidents.
After the security administrator and incident response team have completed these
proactive functions, the administrator should hand over the responsibility for handling
incidents to the incident response team. This does not mean that the security
administrator should not continue to be involved or be part of the team, but the
administrator may not always be available and the team should be able to handle
incidents on its own. The team will be responsible for responding to incidents such as
viruses, worms, or other malicious code; intrusions; hoaxes; natural disasters; and insider
attacks. The team should also be involved in analyzing any unusual event that may
involve computer or network security.

Importance of Security Policies and Audit
The word "audit" can send shivers down the spine of the most battle-hardened executive.
It means that an outside organization is going to conduct a formal written examination of
one or more crucial components of the organization. Financial audits are the most
common examinations a business manager encounters. This is a familiar area for most
executives: they know that financial auditors are going to examine the financial records
and how those records are used. They may even be familiar with physical security audits.
However, they are unlikely to be acquainted with information security audits; that is, an
audit of how the confidentiality, availability and integrity of an organization's
information is assured. They should be. An information security audit is one of the best
ways to determine the security of an organization's information without incurring the cost
and other associated damages of a security incident.
What is a Security Audit?
You may see the phrase "penetration test" used interchangeably with the phrase
"computer security audit". They are not the same thing. A penetration test (also known as
a pen-test) is a very narrowly focused attempt to look for security holes in a critical
resource, such as a firewall or Web server. Penetration testers may only be looking at one
service on a network resource. They usually operate from outside the firewall with
minimal inside information in order to more realistically simulate the means by which a
hacker would attack the site.
On the other hand, a computer security audit is a systematic, measurable technical
assessment of how the organization's security policy is employed at a specific site.
Computer security auditors work with the full knowledge of the organization, at times
with considerable inside information, in order to understand the resources to be audited.
Security audits do not take place in a vacuum; they are part of the on-going process of
defining and maintaining effective security policies. This is not just a conference room
activity. It involves everyone who uses any computer resources throughout the
organization. Given the dynamic nature of computer configurations and information
storage, some managers may wonder if there is truly any way to check the security
ledgers, so to speak. Security audits provide such a tool: a fair and measurable way to
examine how secure a site really is.
Computer security auditors perform their work through personal interviews, vulnerability
scans, examination of operating system settings, analyses of network shares, and
historical data. They are concerned primarily with how security policies - the foundation
of any effective organizational security strategy - are actually used. There are a number
of key questions that security audits should attempt to answer:

Are passwords difficult to crack?
Are there access control lists (ACLs) in place on network devices to control who
has access to shared data?
Are there audit logs to record who accesses data?
Are the audit logs reviewed?
Are the security settings for operating systems in accordance with accepted
industry security practices?
Have all unnecessary applications and computer services been eliminated for each
system?
Are these operating systems and commercial applications patched to current
levels?
How is backup media stored? Who has access to it? Is it up-to-date?
Is there a disaster recovery plan? Have the participants and stakeholders ever
rehearsed the disaster recovery plan?
Are there adequate cryptographic tools in place to govern data encryption, and
have these tools been properly configured?
Have custom-built applications been written with security in mind?
How have these custom applications been tested for security flaws?
How are configuration and code changes documented at every level? How are
these records reviewed, and who conducts the review?
These are just a few of the kinds of questions that can and should be assessed in a security
audit. By answering these questions honestly and rigorously, an organization can
realistically assess how secure its vital information is.
Security Policy Defined
As stated, a security audit is essentially an assessment of how effectively the
organization's security policy is being implemented. Of course, this assumes that the
organization has a security policy in place which, unfortunately, is not always the case.
Even today, it is possible to find a number of organizations where a written security
policy does not exist. Security policies are a means of standardizing security practices by
having them codified (in writing) and agreed to by employees who read them and sign off
on them. When security practices are unwritten or informal, they may not be generally
understood and practiced by all employees in the organization. Furthermore, until all
employees have read and signed off on the security policy, compliance with the policy
cannot be enforced. Written security policies are not about questioning the integrity and
competency of employees; rather, they ensure that everyone at every level understands
how to protect company data and agrees to fulfill their obligations in order to do so.
Natural tensions frequently exist between workplace culture and security policy. Even
with the best of intentions, employees often choose convenience over security. For
example, users may know that they should choose difficult-to-guess passwords, but they
may also want those passwords to be close at hand. So every fledgling auditor knows to
check for sticky notes on the monitor and to pick up the keyboard and look under it for
passwords. IT staff may know that every local administrator account should have a
password; yet, in the haste to build a system, they may just bypass that step, intending to
set the password later, and therefore place an insecure system on the network.
The security audit should seek to measure security policy compliance and recommend
solutions to deficiencies in compliance. The policy should also be subject to scrutiny. Is it
a living document, accurately reflecting how the organization protects IT assets on a daily
basis? Does the policy reflect industry standards for the type of IT resources in use
throughout the organization?

Chapter 2
What is Network Configuration Management?
Network configuration management allows you to control changes to the configuration of
your network devices, like switches and routers. Using configuration management tools
you can make changes to the configuration of a router, then roll the changes back to a
previous configuration if the changes weren't successful. Contrast that situation, with a
network configuration management system in place, against the situation without one.
You would make the changes, hopefully remembering to document what you changed.
If the changes weren't successful you would, at best, then have to undo the changes you
made manually from your documentation. At worst, you would be left with trying to
remember what was changed and why.
Network configuration management really comes into its own when multiple engineers
make changes to network equipment. How does engineer A know what engineer B has
been up to? Even if they communicate well together the chances are that crucial details
will be missed out. The changes made by engineer A may only fail when engineer B is on
duty. How is engineer B expected to troubleshoot the network when he/she doesn't know
exactly what changes engineer A made?
Another area in which network configuration management comes into its own is policy
enforcement. Say you want to roll out the same configuration (with minor differences like
the IP address) to multiple devices. Performing manual configuration updates can take a
long time, especially if you have a lot of network devices. With the help of intelligent
network configuration management, the more error-prone tasks can be automated,
saving time and reducing the risk of errors.
Why do I need Network Configuration Management?
Networks of any size are in a constant state of flux. Any of the engineers responsible for
the network can change the configuration of the switches and routers at any time.
Configuration changes to live equipment can have devastating effects on the reliability of
the network and the services provided by it. Network configuration management is
designed to allow you to take control of network changes.
Network configuration management is intended to simplify the job of managing medium
to large networks. The aim of network configuration management is to save you time &
reduce errors on your network due to misconfiguration of network devices. Even if your
network changes do cause errors, network configuration management allows you to fix
the errors faster.
I've often been in the situation where I wished that I could move a system's configuration
back in time to when the device worked properly. Having a network configuration
management system means that you can change a device's configuration with a much
reduced risk of downtime. If you make a mistake, simply roll back your changes to a
version of the configuration that you know works.
When a fault has been identified on your network it can be invaluable to have an audit
trail of all configuration changes of your network devices. Not only will you quickly be
able to identify which devices have changed, but also what the changes were, when the
changes were made, and by whom.
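A minimal sketch of such an audit trail, assuming each device's configuration is saved to a dated text file (the file names below are hypothetical), is to diff successive snapshots with Python's difflib:

    import difflib
    from pathlib import Path

    def diff_configs(old_path, new_path):
        # Produce a unified diff showing exactly what changed between snapshots.
        old = Path(old_path).read_text().splitlines()
        new = Path(new_path).read_text().splitlines()
        return "\n".join(difflib.unified_diff(old, new,
                                              fromfile=old_path,
                                              tofile=new_path,
                                              lineterm=""))

    print(diff_configs("router1_2024-01-01.cfg", "router1_2024-01-02.cfg"))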
What is FCAPS?
FCAPS (fault-management, configuration, accounting, performance, and security) is an
acronym for a categorical model of the working objectives of network management.
There are five levels, called the fault-management level (F), the configuration level (C),
the accounting level (A), the performance level (P), and the security level (S).
At the F level, network problems are found and corrected. Potential future problems are
identified, and steps are taken to prevent them from occurring or recurring. In this way,
the network is kept operational, and downtime is minimized.
At the C level, network operation is monitored and controlled. Hardware and
programming changes, including the addition of new equipment and programs,
modification of existing systems, and removal of obsolete systems and programs, are
coordinated. An inventory of equipment and programs is kept and updated regularly.
The A level, which might also be called the allocation level, is devoted to distributing
resources optimally and fairly among network subscribers. This makes the most effective
use of the systems available, minimizing the cost of operation. This level is also
responsible for ensuring that users are billed appropriately.
The P level is involved with managing the overall performance of the network.
Throughput is maximized, bottlenecks are avoided, and potential problems are
identified. A major part of the effort is to identify which improvements will yield the
greatest overall performance enhancement.
At the S level, the network is protected against hackers, unauthorized users, and physical
or electronic sabotage. Confidentiality of user information is maintained where necessary
or warranted. The security systems also allow network administrators to control what
each individual authorized user can (and cannot) do with the system.

SNMP, MIBs and OIDs - an Overview


SNMP is one of the most commonly used technologies when it comes to network
monitoring. Bandwidth Monitoring programs like PRTG Traffic Grapher use it. But how
does SNMP work? What are MIBs and OIDs? Read this short introduction into the world
of SNMP!
SNMP Basics
SNMP stands for Simple Network Management Protocol and consists of three key
components: managed devices, agents, and network-management systems (NMSs). A
managed device is a node that has an SNMP agent and resides on a managed network.
These devices can be routers and access servers, switches and bridges, hubs, computer
hosts, or printers. An agent is a software module residing within a device. This agent
translates information into a format compatible with SNMP. An NMS runs monitoring
applications. They provide the bulk of processing and memory resources required for
network management.
MIBs, OIDs etc.
MIB stands for Management Information Base and is a collection of information
organized hierarchically. These are accessed using a protocol such as SNMP. There are
two types of MIBs: scalar and tabular. Scalar objects define a single object instance
whereas tabular objects define multiple related object instances grouped in MIB tables.
OIDs or Object Identifiers uniquely identify managed objects in a MIB hierarchy. This
can be depicted as a tree, the levels of which are assigned by different organizations. Top
level MIB object IDs (OIDs) belong to different standard organizations. Vendors define
private branches including managed objects for their own products.
SNMP version 1, which is the SNMP standard supported by PRTG Traffic Grapher, was
the initial development of the SNMP protocol. A description can be found in Request for
Comments (RFC) 1157 and it functions within the specification of the Structure of
Management Information (SMI). It operates over User Datagram Protocol (UDP),
Internet Protocol (IP), OSI Connectionless Network Services (CLNS), AppleTalk
Datagram Delivery Protocol (DDP), and Novell Internet Packet Exchange (IPX). SNMP
v1 is considered the de facto network management protocol in the Internet community.
SNMP works on the basis that network management systems send out a request and the
managed devices return a response. This is implemented using one of four operations:
Get, GetNext, Set, and Trap. SNMP messages consist of a header and a PDU (protocol
data units). The headers consist of the SNMP version number and the community name.
The community name is used as a form of security in SNMP. The PDU depends on the
type of message that is being sent. The Get, GetNext, and Set, as well as the response
PDU, consist of PDU type, Request ID, Error status, Error index, and Object/variable
fields. The Trap consists of Enterprise, Agent address, Generic trap type, Specific
trap code, Timestamp, and Object/Value fields.
MIBs are a collection of definitions which define the properties of the managed objects
within the device to be managed (such as a router, switch, etc.). Each managed device
keeps a database of values for each of the definitions written in the MIB. As such, the
MIB is not actually a database itself but is implementation dependent. Each vendor of
SNMP equipment has an exclusive section of the MIB tree structure under their control.
In order for all of this to be properly organized, all of the manageable features of all
products (from each vendor) are arranged in this tree. Each 'branch' of this tree has a
number and a name, and the complete path from the top of the tree down to the point of
interest forms the name of that point. This is the OID. Nodes near the top of the tree are
extremely general in nature. For example, to get to the Internet node, one has to traverse
down to the fourth tier. As one moves further down, the names get more and more
specific, until one gets to the bottom, where each node represents a particular feature on
a specific device (or agent).
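As a hedged illustration of the Get operation described above, the sketch below uses the third-party pysnmp library (an assumption; it must be installed separately) to read sysDescr.0, OID 1.3.6.1.2.1.1.1.0, from a managed device; the address and community string are placeholders:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # mpModel=0 selects SNMP v1; 'public' is the conventional read community.
    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData('public', mpModel=0),
               UdpTransportTarget(('192.0.2.1', 161)),   # managed device
               ContextData(),
               ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0'))))  # sysDescr.0

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(name, '=', value)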

Chapter 3
What is OSI Layer?
The OSI Reference Model is based on a proposal developed by the International
Organization for Standardization (ISO). The model is known as the ISO OSI (Open
Systems Interconnection) Reference Model because it deals with connecting open
systems, that is, systems that are open for communication with other systems.
The OSI Model is a set of protocols that attempt to identify and standardize data
communication practices. The OSI Model has the support of most computer and network
vendors, many large customers, and most governments, including the United States.
The OSI Model is a model that illustrates how data communications should take place. It
segregates the process into seven groups, called layers. Into these layers are integrated the
protocol standards developed by the ISO and other standards organization, including the
Institute of Electrical and Electronics Engineers (IEEE), American National Standards
Institute (ANSI), and the International Telecommunications Union (ITU), formerly
known as the CCITT (Comite Consultatif Internationale de Telegraphique et Telephone).
The OSI Model specifies which protocols and standards should be used at each layer. It is
modular: each layer of the OSI Model functions with the one above and below it.
The short form used to memorize the layer names of the OSI Model is
All People Seem To Need Data Processing. The lower two layers are normally
implemented with hardware and software. The remaining five layers are implemented
only with software.
The layered approach to network communications offers the following advantages:
reduced complexity, easier teaching and learning, modular engineering, accelerated
development, interoperable technology, and standard interfaces.

The Seven Layers of the OSI Model
The seven layers of the OSI model are:
Layer 7 - Application
Layer 6 - Presentation
Layer 5 - Session
Layer 4 - Transport
Layer 3 - Network
Layer 2 - Data Link
Layer 1 - Physical

The easiest way to remember the layers of the OSI model is to use the handy mnemonic
All People Seem To Need Data Processing:
Layer 7 - Application - All
Layer 6 - Presentation - People
Layer 5 - Session - Seem
Layer 4 - Transport - To
Layer 3 - Network - Need
Layer 2 - Data Link - Data
Layer 1 - Physical - Processing

Functions of Each Layer of the OSI Model


Layer Seven
The Application Layer of the OSI model is responsible for providing end-user services,
such as file transfers, electronic messaging, e-mail, virtual terminal access, and network
management. This is the layer with which the user interacts.
Layer Six
The Presentation Layer of the OSI model is responsible for defining the syntax which two
network hosts use to communicate. Encryption and compression should be Presentation
Layer functions.
Layer Five
The Session Layer of the OSI model is responsible for establishing process-to-process
communications between networked hosts.
Layer Four
The Transport Layer of the OSI model is responsible for delivering messages between
networked hosts. The Transport Layer should be responsible for fragmentation and
reassembly.
Edited by
Prof. Kaushal Borisagar

Short view of
Network Management & Information Security

22

Layer Three
The Network Layer of the OSI model is responsible for establishing paths for data
transfer through the network. Routers operate at the Network Layer.
Layer Two
The Data Link Layer of the OSI model is responsible for communications between
adjacent network nodes. Switches and bridges operate at the Data Link Layer; hubs
operate at the Physical Layer.
Layer One
The Physical Layer of the OSI model is responsible for bit-level transmission between
network nodes. The Physical Layer defines items such as: connector types, cable types,
voltages, and pin-outs.
What is MTU (Maximum transfer Unit)?
A maximum transmission unit (MTU) is the largest size packet or frame, specified
in octets (eight-bit bytes), that can be sent in a packet- or frame-based network such as
the Internet. The Internet's Transmission Control Protocol uses the MTU to determine the
maximum size of each packet in any transmission. Too large an MTU size may mean
retransmissions if the packet encounters a router that can't handle that large a packet. Too
small an MTU size means relatively more header overhead and more acknowledgements
that have to be sent and handled. Most computer operating systems provide
a default MTU value that is suitable for most users. In general, Internet users should
follow the advice of their Internet service provider (ISP) about whether to change the
default value and what to change it to.
In Windows 95, the default MTU was 1500 octets (eight-bit bytes), partly because this is
the Ethernet standard MTU. The Internet de facto standard MTU is 576, but ISPs often
suggest using 1500. If you frequently access Web sites that encounter routers with an
MTU size of 576, you may want to change to that size. (Apparently some users find that
changing the setting to 576 improves performance and others do not find any
improvement.) The minimum value that an MTU can be set to is 68.
For more recent Windows systems, the operating system is able to sense whether your
connection should use 1500 or 576 and select the appropriate MTU for the connection.
For protocols other than TCP, different MTU sizes may apply.
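On a Linux host, the interface MTU configured by the operating system can be read directly from sysfs; in this short sketch the interface name eth0 is a placeholder:

    from pathlib import Path

    def interface_mtu(ifname="eth0"):
        # Linux exposes each interface's MTU as a plain text file in sysfs.
        return int(Path("/sys/class/net/" + ifname + "/mtu").read_text())

    print(interface_mtu())   # typically 1500 on Ethernet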
IP
The Internet Protocol (IP) is the method or protocol by which data is sent from one
computer to another on the Internet. Each computer (known as a host) on the Internet has
at least one IP address that uniquely identifies it from all other computers on the Internet.
When you send or receive data (for example, an e-mail note or a Web page), the message
gets divided into little chunks called packets. Each of these packets contains both the
sender's Internet address and the receiver's address. Any packet is sent first to
a gateway computer that understands a small part of the Internet. The gateway computer
reads the destination address and forwards the packet to an adjacent gateway that in turn
reads the destination address and so forth across the Internet until one gateway recognizes
the packet as belonging to a computer within its immediate neighborhood or domain.
That gateway then forwards the packet directly to the computer whose address is
specified.
Because a message is divided into a number of packets, each packet can, if necessary, be
sent by a different route across the Internet. Packets can arrive in a different order than
the order they were sent in. The Internet Protocol just delivers them. It's up to another
protocol, the Transmission Control Protocol (TCP) to put them back in the right order.
IP is a connectionless protocol, which means that there is no continuing connection
between the end points that are communicating. Each packet that travels through the
Internet is treated as an independent unit of data without any relation to any other unit of
data. (The reason the packets do get put in the right order is because of TCP, the
connection-oriented protocol that keeps track of the packet sequence in a message.) In the
Open Systems Interconnection (OSI) communication model, IP is in layer 3, the
Network Layer.
The most widely used version of IP today is Internet Protocol Version 4 (IPv4). However,
IP Version 6 (IPv6) is also beginning to be supported. IPv6 provides for much longer
addresses and therefore for the possibility of many more Internet users. IPv6 includes the
capabilities of IPv4 and any server that can support IPv6 packets can also support IPv4
packets.
UDP
UDP (User Datagram Protocol) is a communications protocol that offers a limited
amount of service when messages are exchanged between computers in a network that
uses the Internet Protocol (IP). UDP is an alternative to the Transmission Control
Protocol (TCP) and, together with IP, is sometimes referred to as UDP/IP. Like the
Transmission Control Protocol, UDP uses the Internet Protocol to actually get a data unit
(called a datagram) from one computer to another. Unlike TCP, however, UDP does not
provide the service of dividing a message into packets (datagrams) and reassembling it at
the other end. Specifically, UDP doesn't provide sequencing of the packets that the data
arrives in. This means that the application program that uses UDP must be able to make
sure that the entire message has arrived and is in the right order. Network applications
that want to save processing time because they have very small data units to exchange
(and therefore very little message reassembling to do) may prefer UDP to TCP. The
Trivial File Transfer Protocol (TFTP) uses UDP instead of TCP.
UDP provides two services not provided by the IP layer. It provides port numbers to help
distinguish different user requests and, optionally, a checksum capability to verify that the
data arrived intact.
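A minimal sketch of this fire-and-forget behaviour using Python's standard socket
module (the address and port below are placeholders, not real services):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # a UDP socket

    # Each sendto() ships one independent datagram; UDP adds only a source/
    # destination port and an optional checksum, never sequencing or ACKs.
    for i in range(3):
        sock.sendto(f"datagram {i}".encode(), ("192.0.2.10", 9999))

    # The receiver may see these datagrams reordered, duplicated, or not at
    # all; detecting that is entirely the application's job.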
In the Open Systems Interconnection (OSI) communication model, UDP, like TCP, is in
layer 4, the Transport Layer.
ICMP
ICMP is the Internet Control Message Protocol.

ICMP is a complementary protocol to IP (Internet Protocol). Like IP, ICMP resides on
the Network Layer of the OSI Model.
ICMP is designed for sending control and test messages across IP networks.
Unlike the Transport Layer protocols TCP (Transmission Control Protocol) and UDP
(User Datagram Protocol), which operate on top of IP, ICMP exists alongside IP.
The ability to understand ICMP is a requirement for any IP-compatible network device.
However, many security devices such as firewalls block or disable all or part of ICMP
functionality for security purposes.
ARP
Short for Address Resolution Protocol, ARP is a network layer protocol used to convert
an IP address into a physical address (called a DLC address), such as an Ethernet address.
A host wishing to obtain a physical address broadcasts an ARP request onto the TCP/IP
network. The host on the network that has the IP address in the request then replies with
its physical hardware address.
There is also Reverse ARP (RARP) which can be used by a host to discover its IP address.
In this case, the host broadcasts its physical address and a RARP server replies with the
host's IP address.
RARP
RARP (Reverse Address Resolution Protocol) is a protocol by which a physical machine
in a local area network can request to learn its IP address from a gateway server's Address
Resolution Protocol (ARP) table or cache. A network administrator creates a table in a
local area network's gateway router that maps the physical machine addresses (Media
Access Control, or MAC, addresses) to corresponding Internet Protocol addresses. When a
new machine is set up, its RARP client program requests from the RARP server on the
router to be sent its IP address. Assuming that an entry has been set up in the router table,
the RARP server will return the IP address to the machine which can store it for future
use.
RARP is available for Ethernet, Fiber Distributed-Data Interface, and token ring LANs.
DNS
The domain name system (DNS) is the way that Internet domain names are located and
translated into Internet Protocol addresses. A domain name is a meaningful and
easy-to-remember "handle" for an Internet address.
Because maintaining a central list of domain name/IP address correspondences would be
impractical, the lists of domain names and IP addresses are distributed throughout the
Internet in a hierarchy of authority. There is probably a DNS server within close
geographic proximity to your access provider that maps the domain names in your
Internet requests or forwards them to other servers in the Internet.
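A quick way to watch this translation happen is to ask the local resolver from Python's
standard library (the names are only examples; the addresses returned will depend on
your resolver):

    import socket

    # Forward lookup: domain name to IP address.
    print(socket.gethostbyname("example.com"))

    # Reverse lookup: IP address back to a name, where a PTR record exists.
    print(socket.gethostbyaddr("8.8.8.8")[0])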

PING
Ping is a basic Internet program that allows a user to verify that a particular IP
address exists and can accept requests.
Ping is used diagnostically to ensure that a host computer the user is trying to reach is
actually operating. Ping works by sending an Internet Control Message Protocol (ICMP)
Echo Request to a specified interface on the network and waiting for a reply. Ping can be
used for troubleshooting to test connectivity and determine response time.
As a verb, ping means "to get the attention of" or "to check for the presence of" another
party online. The computer acronym (for Packet Internet or Inter-Network Groper) was
contrived to match the submariners' term for the sound of a returned sonar pulse.
Tip: To find out the dot address (such as 205.245.172.72) for a given domain name,
Windows users can go to their command prompt screen (Start/Run/cmd) and enter ping
xxxxx.yyy (where xxxxx is the second-level domain name like "whatis" and yyy is the
top-level domain name like "com").
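Ping can also be driven from a program. A small sketch that simply shells out to the
operating system's own ping command (the host name is just an example):

    import platform
    import subprocess

    def ping(host):
        # Windows counts echo requests with -n; Unix-like systems use -c.
        flag = "-n" if platform.system() == "Windows" else "-c"
        result = subprocess.run(["ping", flag, "1", host],
                                stdout=subprocess.DEVNULL)
        return result.returncode == 0  # 0 means an echo reply came back

    print(ping("example.com"))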
Message confidentiality
Public-key cryptography refers to a widely used set of methods for transforming a
written message into a form that can be read only by the intended recipient.
This cryptographic approach involves the use of asymmetric key algorithms that is,
the non-message information (the public key) needed to transform the message to a
secure form is different from the information needed to reverse the process (the private
key). The person who anticipates receiving messages first creates both a public key and
an associated private key, and publishes the public key. When someone wants to send a
secure message to the creator of these keys, the sender encrypts it (transforms it to secure
form) using the intended recipient's public key; to decrypt the message, the recipient uses
the private key.
Thus, unlike symmetric key algorithms, a public key algorithm does not require
a secure initial exchange of one or more secret keys between the sender and receiver. The
particular algorithm used for encrypting and decrypting was designed in such a way that,
while it is easy for the intended recipient to generate the public and private keys and to
decrypt the message using the private key, and while it is easy for the sender to encrypt
the message using the public key, it is extremely difficult for anyone to figure out the
private key based on their knowledge of the public key.
The use of these keys also allows protection of the authenticity of a message by creating
a digital signature of a message using the private key, which can be verified using the
public key.
Public key cryptography is a fundamental and widely used technology around the world.
It is the approach which is employed by many cryptographic algorithms
and cryptosystems. It underpins such Internet standards as Transport Layer Security
(TLS, the successor to SSL), PGP, and GPG.
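The encrypt-with-public-key, decrypt-with-private-key flow described above can be
demonstrated with the third-party Python cryptography package (a minimal sketch; the
key size and padding choices here are illustrative):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The recipient creates the key pair and publishes only the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender: encrypt with the recipient's public key.
    ciphertext = public_key.encrypt(b"meet at noon", oaep)

    # Recipient: decrypt with the private key.
    assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"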

Symmetric vs. asymmetric algorithms


When using symmetric algorithms, both parties share the same key for encryption and
decryption. To provide privacy, this key needs to be kept secret; once somebody else
gets to know the key, the traffic is no longer safe. Symmetric algorithms have the
advantage of not consuming too much computing power. A few well-known examples are
DES, Triple-DES (3DES), IDEA, CAST5, Blowfish, and Twofish.
Asymmetric algorithms use pairs of keys: one is used for encryption and the other one
for decryption. The decryption key is typically kept secret, and is therefore called the
"private key" or "secret key", while the encryption key is spread to all who might want to
send encrypted messages, and is therefore called the "public key". Everybody having the
public key is able to send encrypted messages to the owner of the secret key. The secret
key cannot be reconstructed from the public key. The idea of asymmetric algorithms was
first published in 1976 by Diffie and Hellman.
Asymmetric algorithms seem ideally suited for real-world use: as the secret key does
not have to be shared, the risk of it becoming known is much smaller. Every user only
needs to keep one secret key in secrecy and a collection of public keys, which only need
to be protected against being changed. With symmetric keys, every pair of users would
need its own shared secret key. Well-known asymmetric algorithms are RSA, DSA, and
ElGamal.
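A quick count shows why this matters for key management: n users who all want pairwise
symmetric secrecy need n(n-1)/2 shared keys, so 100 users would need 100 x 99 / 2 = 4,950
secret keys, whereas the asymmetric approach needs only one key pair per user, i.e. 100
key pairs for the same population.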
However, asymmetric algorithms are much slower than symmetric ones. Therefore, in
many applications, a combination of both is used. The asymmetric keys are used
for authentication, and after this has been done successfully, one or more symmetric keys
are generated and exchanged using the asymmetric encryption. This way the advantages
of both algorithms can be used.
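A minimal sketch of this hybrid pattern, again assuming the third-party Python
cryptography package (the message is a placeholder; real protocols such as TLS negotiate
their symmetric keys rather than wrapping them this directly):

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Receiver's long-term asymmetric key pair.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender: bulk-encrypt with a fresh, fast symmetric key, then wrap that
    # small key with the receiver's slow asymmetric key.
    session_key = Fernet.generate_key()
    bulk_ciphertext = Fernet(session_key).encrypt(b"a long message" * 1000)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Receiver: unwrap the symmetric key, then decrypt the bulk data quickly.
    session_key2 = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(session_key2).decrypt(bulk_ciphertext) == b"a long message" * 1000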

Chapter -4
"Buffer overflow" (sometimes called buffer overrun) attacks are
designed to trigger arbitrary code execution by a program by sending
it more data than it is supposed to receive.
Programs that accept parameterized input data temporarily store
them in a region of memory called abuffer). But some read functions,
such as strcpy() functions from the C language, cannot manage this
type of overflow and cause the application to crash, which can lead to
arbitrary code execution and open access to the system.
The implementation of this type of attack is extremely complicated, as it requires in-depth
knowledge of program and processor architecture. However, there are
various exploits capable of automating this type of attack and making it accessible to
quasi-novices.
Operating principle
The operating principle of a buffer overflow is closely related to the architecture of the
processor on which the vulnerable application is executed.
Data entered in an application are stored in random access memory in a region called
a buffer. A correctly designed program should stipulate a maximum size for input data
and make sure the input data do not exceed this value.
The instructions and data of a running program are temporarily stored adjacently in
memory in a region called a stack. The data located after the buffer contain a return
address (called an instruction pointer) that lets the program continue its run-time. If the
size of the data is greater than the size of the buffer, the return address is overwritten and
the program will read an invalid memory address, generating a segmentation fault in the
application.
A hacker with strong technical knowledge can make sure the overwritten memory
address corresponds to an actual address, for example one located in the buffer itself. As
such, by writing instructions in the buffer (arbitrary code), the attacker can have them
executed. It is therefore possible to include instructions in the buffer that open a command
interpreter (a shell) and make it possible for the hacker to take control of the system. This
arbitrary code that makes it possible to execute the shell is called a shellcode.
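Python itself bounds-checks its strings, but its ctypes module can imitate the unchecked C
copy that causes the problem. The sketch below is illustrative only; run it in a throwaway
interpreter, as it deliberately corrupts memory and may crash with exactly the
segmentation fault described above:

    import ctypes

    buf = ctypes.create_string_buffer(16)  # fixed 16-byte buffer, like char buf[16]
    payload = b"A" * 128                   # far more data than the buffer holds

    # ctypes.memmove copies blindly with no bounds check, like C's strcpy();
    # the excess bytes overwrite whatever lies next to the buffer in memory.
    ctypes.memmove(buf, payload, len(payload))

Note that this overruns a heap buffer rather than a stack frame, but the root cause, an
unchecked copy into a fixed-size buffer, is the same.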

IP spoofing:
This is a means of changing the information in the headers of a packet to forge the source
IP address. Spoofing is used to impersonate a different machine from the one that
actually sent the data. This can be done to avoid detection and/or to target the machine to
which the spoofed address belongs for a deluge of responses, as done in several types of
DoS attacks. By spoofing an address that belongs to a trusted host, the attacker can get
packets through a firewall that would otherwise filter them out.
Port scanning has a legitimate purpose: network administrators use it to test the security
of their own systems. Popular diagnostic utilities such as the Security Administrator's Tool
for Analyzing Networks (SATAN) include scanning capabilities, and there are a number
of freeware scanning programs.
Intrusion types
Ways of intruding into your network to do damage include the following:
Source routing attack: This is a protocol exploit that is used by hackers to reach
private IP addresses on an internal network by routing traffic through another machine
that can be reached from both the Internet and the local network. Source routing is
supported by TCP/IP to allow those sending network data to route the packets through a
specific network point for better performance. It is also used by administrators to map
their networks or to troubleshoot routing problems.
Trojan attacks: Trojans are programs that masquerade as something else and
allow hackers to take control of your machine, browse your drives, upload or download
data, etc. For example, in 1999, a Trojan program file called Picture.exe was designed to
collect personal data from the hard disk of an infiltrated computer and send it to a specific
e-mail address. So-called Trojan ports are popular avenues of attack for these programs.
A list of these hostile ports and the types of attacks that use them is located on
the DoShelp site.
Registry attack: In this type of attack, a remote user connects to a Windows
machine's registry and changes the registry settings. To prevent such an attack, configure
permissions so that the Everyone group does not have access.
Password hijacking attacks: The easiest way to gain unauthorized access to a
protected system is to find a legitimate password. This can be done via social engineering
(getting authorized users to divulge their passwords via persuasion, intimidation, or
trickery) or using brute force, that is, trying one possible password after another until
one works. Password cracker programs automate this guessing process.

TCP Sweep
A TCP sweep is a reconnaissance technique in which an attacker sends TCP packets
(typically SYN probes) to the same port across a whole range of host addresses in order
to discover which hosts are live. Like port scanning, it is usually a prelude to further
attacks: once live hosts are identified, the attacker can probe their TCP services or
attempt to interfere with their sessions.

What is port scanning? It is similar to a thief going through your neighborhood and
checking every door and window on each house to see which ones are open and which
ones are locked.
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two of the
protocols that make up the TCP/IP protocol suite which is used universally to
communicate on the Internet. Each of these has ports 0 through 65535 available so
essentially there are more than 65,000 doors to lock.
The first 1024 TCP ports are called the Well-Known Ports and are associated with
standard services such as FTP, HTTP, SMTP or DNS. Some of the ports above 1023
also have commonly associated services, but the majority of these ports are not associated
with any service and are available for a program or application to use to communicate on.
Port scanning software, in its most basic state, simply sends out a request to connect to
the target computer on each port sequentially and makes a note of which ports responded
or seem open to more in-depth probing.
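In Python, such a basic sequential connect scan is only a few lines (a sketch; the target
address is a placeholder, and you should only scan hosts you are authorized to test):

    import socket

    target = "192.0.2.10"            # hypothetical target address
    for port in range(1, 1025):      # the well-known ports
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP connection succeeds,
        # i.e. something is listening on that port.
        if s.connect_ex((target, port)) == 0:
            print("port", port, "appears open")
        s.close()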
If the port scan is being done with malicious intent, the intruder would generally prefer to
go undetected. Network security applications can be configured to alert administrators if
they detect connection requests across a broad range of ports from a single host. To get
around this the intruder can do the port scan in strobe or stealth mode. Strobing limits the
ports to a smaller target set rather than blanket scanning all 65536 ports. Stealth scanning
uses techniques such as slowing the scan. By scanning the ports over a much longer
period of time you reduce the chance that the target will trigger an alert.
By setting different TCP flags or sending different types of TCP packets the port scan can
generate different results or locate open ports in different ways. A SYN scan will tell the
port scanner which ports are listening and which are not depending on the type of
response generated. A FIN scan will generate a response from closed ports- but ports that
are open and listening will not send a response, so the port scanner will be able to
determine which ports are open and which are not.
There are a number of different methods to perform the actual port scans, as well as tricks
to hide the true source of a port scan. You can read more about some of these in articles
such as "Port Scanning" or "Network Probes Explained".
It is possible to monitor your network for port scans. The trick, as with most things in
information security, is to find the right balance between network performance and
network safety. You could monitor for SYN scans by logging any attempt to send a SYN
packet to a port that isn't open or listening. However, rather than being alerted every time
a single attempt occurs, and possibly being awakened in the middle of the night for an
otherwise innocent mistake, you should decide on thresholds to trigger the alert. For
instance, you might say that if there are more than 10 SYN packet attempts to
non-listening ports in a given minute, an alert should be triggered. You could design filters
and traps to detect a variety of port scan methods, watching for a spike in FIN packets or

just an anomalous number of connection attempts to a variety of ports and/or IP
addresses from a single IP source.
To help ensure that your network is protected and secure you may wish to perform your
own port scans. A MAJOR caveat here is to ensure you have the approval of all the
powers that be before embarking on this project lest you find yourself on the wrong side
of the law. To get accurate results it may be best to perform the port scan from a remote
location using non-company equipment and a different ISP. Using software such
as NMap you can scan a range of IP addresses and ports and find out what an attacker
would see if they were to port scan your network. NMap in particular allows you to
control almost every aspect of the scan and perform various types of port scans to fit your
needs.
Once you find out which ports respond as being open by port scanning your own network,
you can begin to work on determining whether it's actually necessary for those ports to be
accessible from outside your network. If they're not necessary, you should shut them
down or block them. If they are necessary, you can begin to research what sorts of
vulnerabilities and exploits your network is open to by having these ports accessible and
work to apply the appropriate patches or mitigation to protect your network as much as
possible.

SYN Flood
A SYN flood is a form of denial-of-service attack in which an attacker sends a succession
of SYN requests to a target's system. Some systems can misdetect a SYN flood when
being scanned for open proxies, as commonly done by IRC servers and services. These
are not SYN floods, merely an automated system designed to check the connecting IP.

[Figure: A normal connection between a user (Alice) and a server. The three-way
handshake is correctly performed.]

When a client attempts to start a TCP connection to a server, the client and server
exchange a series of messages which normally runs like this:

1. The client requests a connection by sending a SYN (synchronize) message to the
server.
2. The server acknowledges this request by sending SYN-ACK back to the client.
3. The client responds with an ACK, and the connection is established.
This is called the TCP three-way handshake, and is the foundation for every connection
established using the TCP protocol.
The SYN flood is a well known type of attack and is generally not effective against
modern networks. It works if a server allocates resources after receiving a SYN, but
before it has received the ACK.
[Figure: SYN flood. The attacker (Mallory) sends several packets but does not send the
ACK back to the server. The connections are hence half-open and consume server
resources. Alice, a legitimate user, tries to connect, but the server refuses to open a
connection, resulting in a denial of service.]

There are two methods, but both involve the server not receiving the ACK. A malicious
client can simply skip sending this last ACK message. Or, by spoofing the source IP
address in the SYN, it can make the server send the SYN-ACK to the falsified IP address,
which will never send the ACK. In both cases the server will wait for the
acknowledgement for some time, as simple network congestion could also be the cause of
the missing ACK.
If these half-open connections bind resources on the server, it may be possible to take up
all these resources by flooding the server with SYN messages. Once all resources set aside
for half-open connections are reserved, no new connections (legitimate or not) can be
made, resulting in denial of service. Some systems may malfunction badly or even crash
if other operating system functions are starved of resources this way.
The technology often used in 1996 for allocating resources for half-open TCP
connections involved a queue which was often very short, with each entry of the queue
being removed upon a completed connection, or upon expiry (e.g., after 3 minutes).
When the queue was full, further connections failed. With the examples above, all further
connections would be prevented for 3 minutes by sending a total of 8 packets. A
well-timed 8 packets every 3 minutes would prevent all further TCP connections from
completing. This allowed for a denial of service attack with very minimal traffic.
SYN cookies provide protection against the SYN flood by eliminating the resources
allocated on the target host.
Limiting new connections per source per timeframe is not a general solution since the
attacker can spoof the packets to have multiple sources.
Reflector routers can also be used as attackers, instead of client machines.


Teardrop attack
A denial of service (DoS) attack is an incident in which a user or organization is deprived
of the services of a resource they would normally expect to have. In a distributed
denial-of-service attack, large numbers of compromised systems (sometimes called a
botnet) attack a single target.
DoS attack:
A Denial of Service (DoS) attack is an attack which attempts to prevent the victim from
being able to use all or part of their network connection.
A denial of service attack may target a user, to prevent them from making outgoing
connections on the network. A denial of service attack may also target an entire
organization, to either prevent outgoing traffic or to prevent incoming traffic to certain
network services, such as the organization's web page.
Denial of service attacks are much easier to accomplish than remotely gaining
administrative access to a target system. Because of this, denial of service attacks have
become very common on the Internet.
NOTE
The Teardrop Attack uses IP's packet fragmentation algorithm to send corrupted packets
to the victim machine. This confuses the victim machine and may hang it.

Description of Teardrop
This DoS attack affects Windows 3.1, 95 and NT machines. It also affects Linux versions
previous to 2.0.32 and 2.1.63.
Teardrop is a program that sends IP fragments to a machine connected to the Internet or
a network. Teardrop exploits an overlapping IP fragment bug present in Windows 95,
Windows NT and Windows 3.1 machines. The bug causes the TCP/IP fragmentation
re-assembly code to improperly handle overlapping IP fragments. This attack has not been
shown to cause any significant damage to systems, and a simple reboot is the preferred
remedy. It should be noted, though, that while this attack is considered to be
non-destructive, it could cause problems if there is unsaved data in open applications at
the time the machine is attacked. The primary problem in that case is a loss of the
unsaved data.
Symptoms of Attack
When a Teardrop attack is run against a machine, it will crash (on Windows machines, a
user will likely experience the Blue Screen of Death) or reboot. If you have protected
yourself from the winnuke and ssping DoS attacks and you still crash, then the mode of
attack is probably teardrop or land. If you are using IRC, and your machine becomes
disconnected from the network or Internet, but does not crash, the mode of attack is
probably click.
LAND
A LAND (Local Area Network Denial) attack is a DoS (Denial of Service) attack that
consists of sending a special poison spoofed packet to a computer, causing it to lock up.
The security flaw was actually first discovered in 1997 by someone using the alias
"m3lt", and has resurfaced many years later in operating systems such as Windows
Server 2003 and Windows XP SP2.
How it works
The attack involves sending a spoofed TCP SYN packet (connection initiation) with the
target host's IP address and an open port as both source and destination.
The reason a LAND attack works is because it causes the machine to reply to itself
continuously.
Definition: "A LAND attack involves IP packets where the source and destination address
are set to address the same device.
Other LAND attack variants have since been found in services like SNMP and Windows
port 88/tcp (Kerberos), caused by design flaws in which devices accepted requests on the
wire that appeared to come from themselves, and so generated replies repeatedly.
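What such a packet looks like can be sketched with the third-party Scapy library
(illustrative only, with a placeholder address; craft packets like this only against lab
machines you are authorized to test):

    from scapy.all import IP, TCP, send  # third-party: pip install scapy

    target = "192.0.2.10"  # hypothetical lab machine

    # A LAND packet: source and destination address (and port) are identical.
    # Sending requires raw-socket (administrator/root) privileges.
    packet = IP(src=target, dst=target) / TCP(sport=139, dport=139, flags="S")
    send(packet)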

What is a "smurf attack"?

smurf is a simple yet effective DDoS attack technique that takes advantage of the
ICMP (Internet Control Message Protocol). ICMP is normally used on the internet
for error handling and for passing control messages. One of its capabilities is to
contact a host to see if it is "up" by sending an "echo request" packet. The
common "ping" program uses this functionality. smurf is installed on a computer
using a stolen account, and then continuously "pings" one or more networks of
computers using a forged source address. This causes all the computers to respond
to a different computer than actually sent the packet. The forged source address,
which is the actual target of the attack, is then overwhelmed by response traffic.
The computer networks that respond to the forged ("spoofed") packet serve as
unwitting accomplices to the attack. The basic characteristics and defense
strategies against smurf follow. Further information is available from CERT.
Attack Platforms: In order for smurf to work, it must find attack platforms that
have IP broadcast functionality enabled on their routers. This functionality
allows smurf to send a single forged ping packet and have it broadcast to an entire
network of computers. To prevent your system from being used as a smurf attack
platform, disable IP-directed broadcast functionality on all routers. Generally
speaking, this functionality will not be missed.
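On Cisco IOS routers, for example, this is typically done with the per-interface command
no ip directed-broadcast, and dropping directed broadcasts has in fact been the default
behaviour since IOS release 12.0.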
o The attacker may still be able to launch a smurf attack from inside your
LAN, in which case disabling IP broadcast functionality at the router will
have no effect. To protect against such an attack, many operating systems
provide settings to prevent computers from responding to IP-directed
broadcast requests.
o In order for the attacker to successfully take advantage of you as an attack
platform, your routers must allow packets to exit the network with source
addresses that do not originate from your internal network. It is possible to
configure your routers to filter out packets which do not originate from
your internal network. This is known as network egress filtering.
o ISP's should employ network ingress filtering, which drops packets which
do not originate from a known range of IP addresses.
Targets: the easiest way to frustrate a smurf attack is to filter for echo reply
packets at the border routers and drop them. This will prevent the packets from
hitting the web server and the internal network.
o Dropping all echo reply packets will prevent flooding of your network, but
it will not prevent traffic jams in the pipe from your upstream provider.
If you are the target of an attack, ask your ISP to also filter out and
drop echo reply packets.
o If you do not want to completely disable echo reply, then you can
selectively drop echo reply packets that are addressed to your high-profile,
public web servers.

VPN (Virtual Private Network)


The landscape of VPN products and services offered by a wide variety of vendors
continues to evolve. This has caused companies whose networks need protection to
become confused about what is and is not a VPN, and the features of the different VPN
systems that are being offered to them. The descriptions and definitions in this white
paper should help to reduce the confusion for VPN customers, as well as to aid VPN
vendors in describing their offerings in a useful fashion.
VPN Terminology
A virtual private network (VPN) is a private data network that makes use of the public
telecommunication infrastructure, maintaining privacy through the use of a tunneling
protocol and security procedures. A virtual private network can be contrasted with a
system of owned or leased lines that can only be used by one company. The main
purpose of a VPN is to give the company the same capabilities as private leased lines at
much lower cost by using the shared public infrastructure. Phone companies have
provided private shared resources for voice messages for over a decade. A virtual private
network makes it possible to have the same protected sharing of public resources for data.
Companies today are looking at using virtual private networks for both extranets and
wide-area intranets.

This document describes three important VPN technologies: trusted VPNs, secure VPNs,
and hybrid VPNs. It is important to note that secure VPNs and trusted VPNs are not
technically related, and can co-exist in a single service package.
Before the Internet became nearly-universal, a virtual private network consisted of one or
more circuits leased from a communications provider. Each leased circuit acted like a
single wire in a network that was controlled by the customer. The communications vendor
would sometimes also help manage the customer's network, but the basic idea was that a
customer could use these leased circuits in the same way that they used physical cables in
their local network.
The privacy afforded by these legacy VPNs was only that the communications provider
assured the customer that no one else would use the same circuit. This allowed customers
to have their own IP addressing and their own security policies. A leased circuit ran
through one or more communications switches, any of which could be compromised by
someone wanting to observe the network traffic. The VPN customer trusted the VPN
provider to maintain the integrity of the circuits and to use the best available business
practices to avoid snooping of the network traffic. Thus, these are called trusted VPNs.
As the Internet became more popular as a corporate communications medium, security
became much more of a pressing issue for both customers and providers. Seeing that
trusted VPNs offered no real security, vendors started to create protocols that would
allow traffic to be encrypted at the edge of one network or at the originating computer,
moved over the Internet like any other data, and then decrypted when it reached the
corporate network or a receiving computer. This encrypted traffic acts like it is in a tunnel
between the two networks: even if an attacker can see the traffic, they cannot read it, and
they cannot change the traffic without the changes being seen by the receiving party and
therefore rejected. Networks that are constructed using encryption are called secure
VPNs.
More recently, service providers have begun to offer a new type of trusted VPNs, this
time using the Internet instead of the raw telephone system as the substrate for
communications. These new trusted VPNs still do not offer security, but they give
customers a way to easily create network segments for wide area networks (WANs). In
addition, trusted VPN segments can be controlled from a single place, and often come
with guaranteed quality-of-service (QoS) from the provider.
A secure VPN can be run as part of a trusted VPN, creating a third type of VPN that is
very new on the market: hybrid VPNs. The secure parts of a hybrid VPN might be
controlled by the customer (such as by using secure VPN equipment on their sites) or by
the same provider that provides the trusted part of the hybrid VPN. Sometimes an entire
hybrid VPN is secured with the secure VPN, but more commonly, only a part of a hybrid
VPN is secure.

VPN Tunneling
Virtual private network technology is based on the idea of tunneling. VPN tunneling
involves establishing and maintaining a logical network connection (that may
contain intermediate hops). On this connection, packets constructed in a specific VPN
protocol format are encapsulated within some other base or carrier protocol, then
transmitted between VPN client and server, and finally de-encapsulated on the receiving
side.
For Internet-based VPNs, packets in one of several VPN protocols are encapsulated
within Internet Protocol (IP) packets. VPN protocols also support authentication and
encryption to keep the tunnels secure.
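The layering can be made visible with the third-party Scapy library. The sketch below
uses GRE, a simple carrier protocol, purely to illustrate encapsulation (the addresses are
documentation examples; real VPN protocols add authentication and encryption around
the inner packet):

    from scapy.all import GRE, ICMP, IP  # third-party: pip install scapy

    # An inner (private) IP packet carried as the payload of an outer
    # (public) IP packet; the far end strips the outer layers off again.
    inner = IP(dst="10.0.0.5") / ICMP()
    tunneled = IP(dst="198.51.100.1") / GRE() / inner
    tunneled.show()  # prints the outer IP, GRE, inner IP, ICMP layers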
IPSEC
A secure network starts with a strong security policy that defines the freedom of access to
information and dictates the deployment of security in the network. Cisco Systems offers
many technology solutions for building a custom security solution for Internet, extranet,
intranet, and remote access networks. These scalable solutions seamlessly interoperate to
deploy enterprise-wide network security. Cisco System's IPsec delivers a key technology
component for providing a total security solution. Cisco's IPsec offering provides
privacy, integrity, and authenticity for transmitting sensitive information over the
Internet.
Cisco's end-to-end offering allows customers to implement IPsec transparently into the
network infrastructure without affecting individual workstations or PCs. Cisco IPsec
technology is available across the entire range of computing infrastructure: Windows 95,
Windows NT 4.0, and Cisco IOS software.
IPsec is a framework of open standards for ensuring secure private communications over
the Internet. Based on standards developed by the Internet Engineering Task Force
(IETF), IPsec ensures confidentiality, integrity, and authenticity of data communications
across a public network. IPsec provides a necessary component of a standards-based,
flexible solution for deploying a network-wide security policy.
IPsec's method of protecting IP datagrams takes the following forms:
 Data origin authentication
 Connectionless data integrity authentication
 Data content confidentiality
 Anti-replay protection
 Limited traffic flow confidentiality
IPsec protects IP datagrams by defining a method of specifying the traffic to protect, how
that traffic is to be protected, and to whom the traffic is sent.

Protocols
There are a number of VPN protocols in use that secure the transport of data traffic over a
public network infrastructure. Each protocol varies slightly in the way that data is kept
secure.
IP security (IPSec) is used to secure communications over the Internet. IPSec traffic can
use either transport mode or tunneling to encrypt data traffic in a VPN. The difference
between the two modes is that transport mode encrypts only the message within the data
packet (also known as the payload) while tunneling encrypts the entire data packet. IPSec
is often referred to as a "security overlay" because of its use as a security layer for other
protocols.
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) use cryptography to
secure communications over the Internet. Both protocols use a "handshake" method of
authentication that involves a negotiation of network parameters between the client and
server machines. To successfully initiate a connection, an authentication process
involving certificates is used. Certificates are cryptographic keys that are stored on both
the server and client.
Point-To-Point Tunneling Protocol (PPTP) is another tunneling protocol used to connect
a remote client to a private server over the Internet. PPTP is one of the most widely used
VPN protocols because of its straightforward configuration and maintenance and also
because it is included with the Windows operating system.
Layer 2 Tunneling Protocol (L2TP) is a protocol used to tunnel data communications
traffic between two sites over the Internet. L2TP is often used in tandem with IPSec
(which acts as a security layer) to secure the transfer of L2TP data packets over the
Internet. Unlike PPTP, a VPN implementation using L2TP/IPSec requires a shared key or
the use of certificates.
VPN technology employs sophisticated encryption to ensure security and prevent any
unintentional interception of data between private sites. All traffic over a VPN is
encrypted using algorithms to secure data integrity and privacy. VPN architecture is
governed by a strict set of rules and standards to ensure a private communication channel
between sites. Corporate network administrators are responsible for deciding the scope of
a VPN, implementing and deploying a VPN, and ongoing monitoring of network traffic
across the network firewall. A VPN requires administrators to be continually aware of
the overall architecture and scope of the VPN to ensure communications are kept private.
Advantages & Disadvantages
A VPN is an inexpensive, effective way of building a private network. The use of the
Internet as the main communications channel between sites is a cost effective alternative
to expensive leased private lines. The costs to a corporation include the network
authentication hardware and software used to authenticate users and any additional
mechanisms such as authentication tokens or other secure devices. The relative ease,
speed, and flexibility of VPN provisioning in comparison to leased lines makes VPNs an
ideal choice for corporations who require flexibility. For example, a company can adjust
the number of sites in the VPN according to changing requirements.
There are several potential disadvantages with VPN use. The lack of Quality of Service
(QoS) management over the Internet can cause packet loss and other performance issues.
Adverse network conditions that occur outside of the private network are beyond the
control of the VPN administrator. For this reason, many large corporations pay for the
use of trusted VPNs that use a private network to guarantee QoS. Vendor interoperability
is another potential disadvantage as VPN technologies from one vendor may not be
compatible with VPN technologies from another vendor. Neither of these disadvantages
has prevented the widespread acceptance and deployment of VPN technology.

AH header (Authentication Header)

 0              8              16                             31
+--------------+--------------+------------------------------+
|  Next Header |  Payload Len |           Reserved           |
+--------------+--------------+------------------------------+
|                Security Parameters Index (SPI)             |
+------------------------------------------------------------+
|                      Sequence Number                       |
+------------------------------------------------------------+
|              Authentication Data (variable) :::            |
+------------------------------------------------------------+
Next header: Identifies the protocol of the payload that follows the AH (for example, 6
for TCP or 17 for UDP), so the receiver knows how to process the protected packet.
Length (Payload Len): Contains the size of the Authentication Header itself.
Security parameter index (SPI): Together with the destination address, identifies the
security association for the packet, and therefore the keys, algorithms, and security level
that apply to it.
Sequence number: A counter incremented for every packet sent under the security
association; it lets the receiver detect and discard replayed packets.
Authentication data: Carries the integrity check value computed over the packet with the
key agreed for the security association; the receiver recomputes this value to verify both
integrity and data origin.
IPSec Key Exchange (IKE)


IPSec, like many secure networking protocol sets, is based on the concept of a shared
secret. Two devices that want to send information securely encode and decode it using a
piece of information that only they know. Anyone who isn't in on the secret is able to
intercept the information but is prevented either from reading it (if ESP is used to encrypt
the payload) or from tampering with it undetected (if AH is used). Before either AH or
ESP can be used, however, it is necessary for the two devices to exchange the secret
that the security protocols themselves will use. The primary support protocol used for this
purpose in IPSec is called Internet Key Exchange (IKE).
IKE is defined in RFC 2409, and is one of the more complicated of the IPSec protocols to
comprehend. In fact, it is simply impossible to truly understand more than a rough
simplification of its operation without significant background in cryptography. I don't
have a background in cryptography and I must assume that you, my reader, do not either.
So rather than fill this topic with baffling acronyms and unexplained concepts, I will just
provide a brief outline of IKE and how it is used.
IKE Overview and Relationship to Other Key Exchange Methods
The purpose of IKE is to allow devices to exchange information required for secure
communication. As the title suggests, this includes cryptographic keys used for
encoding authentication information and performing payload encryption. IKE works by
allowing IPSec-capable devices to exchange security associations (SAs), to populate their
security association databases (SADs). These are then used for the actual exchange of
secured datagrams with the AH and ESP protocols.
IKE is considered a hybrid protocol because it combines (and supplements) the
functions of three other protocols. The first of these is the Internet Security Association
and Key Management Protocol (ISAKMP). This protocol provides a framework for
exchanging encryption keys and security association information. It operates by allowing
security associations to be negotiated through a series of phases.

ISAKMP is a generic protocol that supports many different key exchange methods. In
IKE, the ISAKMP framework is used as the basis for a specific key exchange method
that combines features from two key exchange protocols:
o OAKLEY: Describes a specific mechanism for exchanging keys through the
definition of various key exchange modes. Most of the IKE key exchange
process is based on OAKLEY.
o SKEME: Describes a different key exchange mechanism than OAKLEY. IKE
uses some features from SKEME, including its method of public key
encryption and its fast re-keying feature.
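The OAKLEY modes are built around Diffie-Hellman exchange. A toy sketch with
deliberately tiny numbers (real IKE uses groups with moduli of 1024 bits and up):

    # Public parameters: modulus p and generator g.
    p, g = 353, 3

    a, b = 97, 233        # private values chosen by each side
    A = pow(g, a, p)      # one side sends g^a mod p
    B = pow(g, b, p)      # the other sends g^b mod p

    # Each side combines its own private value with the other's public value
    # and arrives at the same shared secret, never sent over the wire.
    assert pow(B, a, p) == pow(A, b, p)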

PPTP (Point to Point tunneling protocol)


Point-to-Point Tunneling Protocol (PPTP) is a protocol (set of communication rules) that
allows corporations to extend their own corporate network through private "tunnels" over
the public Internet. Effectively, a corporation uses a wide-area network as a single large
local area network. A company no longer needs to lease its own lines for wide-area
communication but can securely use the public networks. This kind of interconnection is
known as a virtual private network (VPN).
PPTP, a proposed standard sponsored by Microsoft and other companies, and Layer 2
Tunneling Protocol, proposed by Cisco Systems, are among the most likely proposals as
the basis for a new Internet Engineering Task Force (IETF) standard. With PPTP, which
is an extension of the Internet's Point-to-Point Protocol (PPP), any user of a PC with PPP
client support is able to use an independent service provider (ISP) to connect securely to
a server elsewhere in the user's company.
PPTP is a protocol or technology that supports the use of VPNs. Using PPTP, remote
users can access their corporate networks securely, using the Microsoft Windows
platforms and other PPP (Point-to-Point Protocol) enabled systems. This is
achieved with remote users dialing in to their local Internet service providers to connect
securely to their networks via the Internet. PPP is used by PPTP
to provide the encryption and authentication on data packets. The main use of PPTP is to
provide a tunnel for PPP, as PPP is not routable over the Internet.
PPTP is a tunneling protocol that was developed by various vendor companies, including
Microsoft and US Robotics. PPTP has its issues and is considered a weak security
protocol according to many experts, although Microsoft continues to improve the use of
PPTP and claims that issues within PPTP have now been corrected. PPTP is not as secure
as IPsec and cannot secure two networks: PPTP can only secure one IP address with one
other IP address or with a network. PPTP is now often replaced by L2TP, which provides
security using IPsec, and PPTP has largely been made obsolete by L2TP with IPsec.
Lastly, another limitation PPTP has compared to L2TP is that it cannot route over
networks other than IP.
Although PPTP is easier to use and configure than IPsec, IPsec outweighs PPTP in other
areas, such as being a more secure and robust protocol.
L2TP
Layer Two Tunneling Protocol (L2TP) is an extension of the Point-to-Point Tunneling
Protocol (PPTP) used by an Internet service provider (ISP) to enable the operation of a
virtual private network (VPN) over the Internet. L2TP merges the best features of two
other tunneling protocols: PPTP from Microsoft and L2F from Cisco Systems. The two
main components that make up L2TP are the L2TP Access Concentrator (LAC), which is
the device that physically terminates a call and the L2TP Network Server (LNS), which is
the device that terminates and possibly authenticates the PPP stream.
PPP defines a means of encapsulation to transmit multiprotocol packets over layer two
(L2) point-to-point links. Generally, a user connects to a network access server (NAS)
through ISDN, ADSL, dialup POTS or other service and runs PPP over that connection.
In this configuration, the L2 and PPP session endpoints are both on the same NAS.
L2TP uses packet-switched network connections to make it possible for the endpoints to
be located on different machines. The user has an L2 connection to an access
concentrator, which then tunnels individual PPP frames to the NAS, so that the packets
can be processed separately from the location of the circuit termination. This means that
the connection can terminate at a local circuit concentrator, eliminating possible
long-distance charges, among other benefits. From the user's point of view, there is no
difference in the operation.

Chapter -5
Two-factor authentication
Two-factor authentication (TFA or 2FA) means using two independent means of
evidence to assert an entity's identity to another entity. Two-factor authentication is
commonly found in electronic computer authentication, where basic authentication is the
process of a requesting entity presenting some evidence of its identity to a second entity.
Two-factor authentication seeks to decrease the probability that the requestor is
presenting false evidence of its identity. The number of factors is important as it implies a
higher probability that the bearer of the identity evidence indeed holds that identity in
another realm (i.e., computer system vs. real life). In reality there are more variables to
consider when establishing the relative assurance of truthfulness in an identity assertion,
than simply how many "factors" are used.
Two-factor authentication is often confused with other forms of authentication. Two
factor authentication implies the use of two independent means of evidence to assert an
entity, rather than two iterations of the same means. "Something one has", "something
one knows", and "something one is" are useful simple summaries of three independent
factors. In detail these factors are:
 what the requestor individually knows as a secret, such as a password or
a Personal Identification Number (PIN)
 what the requesting owner uniquely has, such as a passport, physical token, or an
ID-card
 what the requesting bearer individually is, such as biometric data, like a
fingerprint or the face geometry.

It is generally accepted that any independent two of these authentication methods (e.g.
password + value from a physical token) is two-factor authentication. The accepting
identity may use these facts (among other criteria) as a truth upon which to grant or deny
the requestor's access to a sensitive data set or physical area. The requestor may be a
person or computer system agent acting on behalf of a person.
Another independent means that is becoming more practiced in computer systems is
"how one behaves", although it is more often used as a decision point for transactions or
to de-authenticate an entity than to establish initial truth in identity.
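A common concrete realization of the "something one has" factor is a time-based
one-time password generator. A minimal sketch using the third-party pyotp library:

    import pyotp  # third-party: pip install pyotp

    # Enrollment: the server generates a secret and shares it once with the
    # user's token or authenticator app (the "something one has").
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Login: the user supplies a password (something known) plus the current
    # six-digit code from the device (something held).
    code = totp.now()
    print(totp.verify(code))  # True while the 30-second window is open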
Brute Force
A brute force attack, or exhaustive key search, is a strategy that can in theory be used
against any encrypted data by an attacker who is unable to take advantage of any
weakness in an encryption system that would otherwise make his/her task easier. It
involves systematically checking all possible keys until the correct key is found. In the
worst case, this would involve traversing the entire search space.
The key length used in the encryption determines the practical feasibility of performing a
brute force attack, with longer keys exponentially more difficult to crack than shorter
ones. Brute force attacks can be made less effective by obfuscating the data to be
encoded, something that makes it more difficult for an attacker to recognise when he/she
has cracked the code. One of the measures of the strength of an encryption system is how
long it would theoretically take an attacker to mount a successful brute force attack
against it.
Brute-force attacks are an application of brute-force search, the general problem-solving
technique of enumerating all candidates and checking each one.
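As an illustration of exhaustive search, the toy sketch below enumerates every lowercase
candidate up to four characters; the equality test stands in for "this key decrypts the data
correctly":

    import itertools
    import string

    def brute_force(target):
        for length in range(1, 5):
            for combo in itertools.product(string.ascii_lowercase, repeat=length):
                candidate = "".join(combo)
                if candidate == target:   # stand-in for a decryption check
                    return candidate
        return None

    # Worst case tries 26 + 26**2 + 26**3 + 26**4 = 475,254 candidates.
    print(brute_force("code"))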
Dictionary attack
A dictionary attack consists of trying every word in the dictionary as a possible
password for an encrypted message.
A dictionary attack is generally more efficient than a brute force attack, because users
typically choose poor passwords.
Dictionary attacks are generally far less successful against systems that use passphrases
instead of passwords.

Improving Dictionary Attacks


There are two methods of improving the success of a dictionary attack.
The first method of improving the success of a dictionary attack is to use a larger
dictionary, or more dictionaries. Technical dictionaries and foreign language dictionaries
will increase the overall chance of discovering the correct password.
The second method of improving the success of a dictionary attack is to perform string
manipulation on the dictionary. For example, the dictionary may have the word
"password" in it. Common string manipulation techniques will try the word backwards
(drowssap), with common number-letter replacements (p4ssw0rd), or with different
capitalization (Password).
Of course, very small dictionaries may lead to the fastest success, if one or more of the
targets is encrypted with a very weak password. A short list of girls' names can yield
amazing results.
A dictionary of potential passwords is more accurately known as a wordlist.
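The manipulations described above are easy to generate mechanically; a small sketch:

    def variants(word):
        # Common manipulations applied to one dictionary word.
        subs = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"})
        yield word                   # password
        yield word[::-1]             # drowssap
        yield word.translate(subs)   # p455w0rd
        yield word.capitalize()      # Password
        yield word + "123"           # common numeric suffix

    print(list(variants("password")))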
If the dictionary attack fails
If an extensive dictionary attack fails, it may be worthwhile to resort to a brute force
attack. A brute force attack is more certain to achieve results eventually than a dictionary
attack.

Single Sign-On (SSO) is a name for a collection of technologies that allows network users
to provide a single set of credentials for all network services.
This Single Sign-On solution provides the following centralized services:
 Authentication: Verification that a user or server is who they claim to be, and
a mechanism for passing this information throughout the network. MIT
Kerberos 5 is used.
 Account Information: Information about the user; name and group membership
are two important pieces of information. OpenLDAP is used.
 Shared File Systems: This solution provides a shared file system using pam_mount
and sshfs.
 (Limited) Authorization: Authorization information is a combination of group
membership information held in the LDAP directory and local file system
permissions.

This guide is divided in to several sections that describe installation of required server
software, testing, and installation of software on the client.
Single sign-on (SSO) is a mechanism whereby a single action of user authentication and
authorization can permit a user to access all computers and systems where he has access
permission, without the need to enter multiple passwords. Single sign-on reduces human
error, a major component of systems failure and is therefore highly desirable but difficult
to implement.
The Open Group does not prematurely or inappropriately attempt to standardize
high-level product functionality, but instead endorses consolidated user administration
systems which achieve "openness" by adhering to an LDAP-based meta-directory model, and
which support a defined set of schemas registered by TOG, with associated test suites.
(LDAP : Lightweight Directory Access Protocol)
To this end, an LDAP Profile Specification Working Group has been created and is
tracking and contributing to the Internet Engineering Task Force (IETF) LDAP work.
If you are a member of The Open Group, you can follow Single sign-on issues either in
the Security Group Minutes or the Management Group Minutes of the quarterly
Members' Meetings.

Password Policy
A password policy is a set of rules designed to enhance computer security by
encouraging users to employ strong passwords and use them properly. A password policy
is often part of an organization's official regulations and may be taught as part of security
awareness training. The password policy may either be advisory or mandated by
technical means.
Many policies require a minimum password length, typically 8 characters. Some systems
impose a maximum length for compatibility with legacy systems.
Some policies suggest or impose requirements on what type of password a user can
choose, such as (a minimal checker sketch follows the list):
- the use of both upper- and lower-case letters (case sensitivity)
- inclusion of one or more numerical digits
- inclusion of special characters, e.g. @, #, $ etc.
- prohibition of words found in a dictionary or the user's personal information
- prohibition of passwords that match the format of calendar dates, license plate
  numbers, telephone numbers, or other common numbers
- prohibition of use of the company name or its abbreviation
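As a minimal sketch of how rules of this kind might be enforced by technical means, the
following Python function checks a candidate password against an assumed rule set (the
length threshold, special-character list and tiny dictionary are illustrative, not a
standard):

    import re

    BAD_WORDS = {"password", "qwerty"}      # stand-in for a real dictionary

    def check_password(pw):
        # Return a list of policy violations; an empty list means acceptable.
        problems = []
        if len(pw) < 8:
            problems.append("shorter than 8 characters")
        if not (re.search(r"[a-z]", pw) and re.search(r"[A-Z]", pw)):
            problems.append("must mix upper- and lower-case letters")
        if not re.search(r"[0-9]", pw):
            problems.append("must include a numerical digit")
        if not re.search(r"[@#$%^&*!]", pw):
            problems.append("must include a special character")
        if pw.lower() in BAD_WORDS:
            problems.append("is a dictionary word")
        return problems

    print(check_password("Password1"))   # fails only the special-character rule
    print(check_password("P@ssw0rd!x"))  # passes these particular rules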

As of October 2005, employees of the UK Government are advised to use passwords of the
following form: consonant, vowel, consonant, consonant, vowel, consonant, number,
number (for example pinray45). This form is called an Environ password and is
case-insensitive. Unfortunately, since the form of this 8-character password is known to
potential attackers, the number of possibilities that need to be tested is actually fewer
than for a 6-character password of no fixed form (486,202,500 vs 2,176,782,336).
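The counts follow directly from keyspace arithmetic: with 21 consonants, 5 vowels and 10
digits, the Environ form allows 21 × 5 × 21 × 21 × 5 × 21 × 10 × 10 = 21^4 × 5^2 × 10^2
= 486,202,500 passwords, whereas an unconstrained, case-insensitive 6-character
alphanumeric password allows 36^6 = 2,176,782,336.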


Other systems create the password for the users or let the user select one of a limited
number of displayed choices.
Password duration
Some policies require users to change passwords periodically, e.g. every 90 or 180 days.
Systems that implement such policies sometimes prevent users from picking a password
too close to a previous selection.
This policy can often backfire. Since it's hard to come up with 'good' passwords that are
also easy to remember, if people are required to come up with many passwords because
they have to change them often, they end up using much weaker passwords; the policy
also encourages users to write passwords down. Also, if the policy prevents a user from
repeating a recent password, this means that there is a database in existence of everyone's
recent passwords (or their hashes) instead of having the old ones erased from memory.
Requiring a very strong password, and not requiring that it be changed, is often better.
However, it does have a major drawback: if someone acquires a password and it is not
changed, they may have long-term access.
It is necessary to weigh these factors: the likelihood of someone guessing a password
because it is weak, versus the likelihood of someone managing to steal, or otherwise
acquire without guessing, a password.
Common password practice
Password policies often include advice on proper password management, such as:
- never sharing a computer account
- never using the same password for more than one account
- never telling a password to anyone, including people who claim to be from
  customer service or security
- never writing down a password
- never communicating a password by telephone, e-mail or instant messaging
- being careful to log off before leaving a computer unattended
- changing passwords whenever there is suspicion they may have been compromised
- using different passwords for the operating system and for applications
- making passwords alpha-numeric
- making passwords completely random but easy for you to remember


Sanctions
Password policies may include progressive sanctions beginning with warnings and
ending with possible loss of computer privileges or job termination. Where
confidentiality is mandated by law, e.g. with classified information, a violation of
password policy could be a criminal offense. Some consider a convincing explanation of
the importance of security to be more effective than threats of sanctions.


Selection process
The level of password strength required depends, in part, on how easy it is for an attacker
to submit multiple guesses. Some systems limit the number of times a user can enter an
incorrect password before some delay is imposed or the account is frozen. At the other
extreme, some systems make available a specially hashed version of the password so
anyone can check its validity. When this is done, an attacker can try passwords very
rapidly and much stronger passwords are necessary for reasonable security.
(See password cracking and password length equation.) Stricter requirements are also
appropriate for accounts with higher privileges, such as root or system administrator
accounts.
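The delay-and-freeze idea can be sketched in a few lines of Python; the attempt limit and
the doubling delay schedule are assumptions for illustration, not a recommended standard:

    import time

    MAX_ATTEMPTS = 5     # failed attempts allowed before the account is frozen
    failures = {}        # username -> consecutive failed attempts

    def record_failure(user):
        failures[user] = failures.get(user, 0) + 1
        if failures[user] >= MAX_ATTEMPTS:
            raise RuntimeError("account frozen: contact the administrator")
        # Exponential delay (1s, 2s, 4s, ...) slows down online guessing.
        time.sleep(2 ** (failures[user] - 1))

    def record_success(user):
        failures[user] = 0   # reset the counter on a correct password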

Usability considerations
Password policies are usually a tradeoff between theoretical security and the practicalities
of human behavior. For example:

- Requiring excessively complex passwords and forcing them to be changed frequently
  can cause users to write passwords down in places that are easy for an intruder to
  find, such as a Rolodex or post-it note near the computer.
- Users often have dozens of passwords to manage. It may be more realistic to
  recommend a single password be used for all low-security applications, such as
  reading on-line newspapers and accessing entertainment web sites.
- Similarly, demanding that users never write down their passwords may be unrealistic
  and lead users to choose weak ones. An alternative is to suggest keeping written
  passwords in a secure place, such as a safe or an encrypted master file. The
  validity of this approach depends on what the most likely threat is deemed to be.
  While writing down a password may be problematic if potential attackers have access
  to the secure store, if the threat is primarily remote attackers who do not have
  access to the store, it can be a very secure method.
- Inclusion of special characters can be a problem if a user has to log on to a
  computer in a different country. Some special characters may be difficult or
  impossible to find on keyboards designed for another language.
- Some identity management systems allow Self Service Password Reset, where users can
  bypass password security by supplying an answer to one or more security questions
  such as "Where were you born?" or "What's your favorite movie?". Often the answers
  to these questions can easily be obtained by social engineering, phishing or simple
  research.

Other approaches are available that are generally considered to be more secure than
simple passwords. These include use of a security token or one-time password system,
such as S/Key.

Enforcing a Policy
The more complex a password policy is, the harder it may be to enforce, due to user
difficulty in remembering or choosing a suitable password.
Most companies will require users to familiarise themselves with any password policy,
much in the same way a company would require employees to be aware of Health & Safety
regulations or building fire exits; however, it is often difficult to ensure that the
relevant policies are actually being followed.
Types of Biometrics
1) Fingerprint Recognition
Fingerprint biometrics are probably the most common form of biometrics available
today. This form of biometric identification has evolved out of the use of fingerprints
for identification purposes over the last several decades. By having an individual scan
their fingerprint electronically to decode information, the transmitter of the data can
be certain that the intended recipient is the receiver of the data. When scanned
electronically, fingerprints provide a higher level of detail, and greater accuracy can
be achieved than with manual systems.
Some other strengths associated with fingerprint biometrics are that giving fingerprints is
more widely accepted, convenient and reliable than other forms of physical identification,
especially when using technology. In fact, studies have shown that fingerprint
identification is currently thought to be the least intrusive of all biometric techniques.
One concern with fingerprint biometrics is that latent prints left on the glass will
register the prior user; however, there already exist units that will not scan unless a
"live" finger is on the glass, and these will only register the later imprint.
Furthermore, the error rate experienced with this form of identification is
approximately one in one hundred thousand scans.
Lastly, one of the most important features of fingerprint biometrics is its cost.
Scanners are already available fairly cheaply, and as the technology becomes more common this cost
should only decrease. In fact, in anticipation of widespread use of this technology in the
future, some "mouse" manufacturers are developing their products with fingerprint
scanner technology built right into the "mouse" itself.
Closely associated with fingerprint biometrics is another biometric that registers the
imprint left by the palm of the hand. These types of scanners measure the geometry of
the hand rather than the fine skin patterns as found in the finger tip. Hand scanners have
been used in apartment buildings, nurseries, and even the 1996 Olympic Village in
Atlanta to control access to restricted areas. These units are more commonly found in
areas where dirt or debris on hands may make fingerprint identification difficult such as
on shop floors in manufacturing plants. Although palm print scanners nearly match
fingerprint scanners in reliability, the units are much larger in size and cost than
fingerprint scanners. An average palm scanner can cost over $2,000.


2) Optical Recognition
There are two common types of optical biometrics: retinal and iris. Retinal and iris
biometric devices are more accurate than fingerprint and hand biometric devices because
both the retina and iris have more characteristics to identify and match than those found
on the hand. These types of devices have come a long way in recent years, allowing the
individual to be scanned even through glasses or contact lenses. The error rate for the
typical retina or iris scanner is about one in two million attempts, which further
demonstrates the reliability of this technology. Two drawbacks to these devices,
however, are that they have difficulty reading the images of people who are blind or
have cataracts, and that they are currently cumbersome to use.
There are several industries which are particularly interested in this type of technology,
but one of those which is most interested is the banking industry. Citibank has signed a
licensing agreement with Sensar, Inc. for use of their iris scanning systems, which the
company will most likely incorporate into its ATMs. The investment totaled three
million dollars, which demonstrates the amount of faith the company places in this form
of biometrics for the future. One concern of the banking industry is that, due to the
current cumbersomeness of these units, it may leave their customers vulnerable when
conducting transactions at ATMs. Some prisons are using this technology today to
identify inmates and guards.
The cost of these systems makes them somewhat unattractive for network users, with the
typical cost averaging $6,500, but as this technology becomes more standardized and
accepted the cost should fall and become less of a factor in the decision-making
process.

3) Facial Recognition
This type of technology has been popularized in many action movies as a means of
identifying villains as they enter a building. Facial biometrics can function from either
short distances or over greater distances. This form of biometric, however, is often less
reliable than more common forms such as fingerprints and iris scans. The interpretative
functions the computer must perform to find a match are much more subjective using this
technology. An image is examined for overall facial structure, which works well over
short distances but progressively loses accuracy the greater the distance between the
individual and the scanner. Changes in lighting can also increase the error rate in these
devices.
This type of technology is in place in several airport terminals and at many border
crossings to help determine the identities of individuals at a distance who may be
involved in criminal activities without alerting the individual that they are being
monitored.


One of the more attractive features of these types of products is their cost. Units can
typically be purchased for as little as $150. At this price, this type of technology might
lend itself to electronic commerce, but the units can be cumbersome to use and are still
not as reliable as other forms of biometrics for authentication purposes.
4) Voice Recognition
There are several distinct advantages that voice recognition has for use in encryption
technology. Not only are voice biometrics perfect for telecommunication applications,
most of the modern personal computers already possess the necessary hardware to utilize
the applications. Even if they don't, sound cards can be purchased for as little as $50 and
condenser microphones can be purchased for as little as $10. Therefore, for less than
$100 individuals can possess the technology needed to have fairly reliable biometric
encryption technology for use over the Internet.
This type of biometric is not as accurate, however, as some other forms. The error rate
for this technology ranges between two and five percent; however, it lends itself well
to voice verification over the public telephone system and is more secure than PINs.
Some drawbacks to this technology are that voiceprints can vary over the course of the
day, and one's health, such as a cold or laryngitis, can affect verification of the user
by the system.
5) Signature Recognition
Signing documents is something almost every adult is familiar with. In our personal
lives we sign everything from personal checks to birthday cards. In the business world
we sign things such as expense accounts and other official documents. This makes
signature recognition a natural means of biometric verification in electronic
commerce. This type of signature identification is different, however, from the normal
signature recognition operates in a three-dimensional environment where, not only is the
height and width of pen strokes measured, but also the amount of pressure applied in the
pen stroke to measure the depth that would occur as if the stroke was made in the air.
This helps to reduce the risk of forgery that can occur in two-dimensional signatures.
One drawback to this form of verification is that people do not always sign documents in
exactly the same manner. The angle at which they sign may be different due to seating
position or due to hand placement on the writing surface. Therefore, even though it is
three dimensional which adds to its ability to discern impostors, it is not as accurate as
other forms of biometric verification.
These types of systems are not as expensive as some of the higher end systems such as
iris scanners, and they are priced more in the range of voice and fingerprint scanners
which makes them quite affordable for network use.

6) Keystroke Recognition
This type of technology is not as mundane as it sounds. The concept is based on the
current password or PIN system, but adds an extra dimension: keystroke dynamics.
Not only must an intruder know the correct password under this technology, but they
must also be able to replicate the rate of typing and the intervals between letters to
gain access to the information. It is most likely that, even if an unauthorized person
is able to guess the correct password, they will not be able to type it with the proper
rhythm unless they have had the ability to hear and memorize the correct user's keystrokes.
This is most likely one of the least secure of the new biometric technologies that have
evolved in recent years; however, it is also probably one of the cheapest and easiest to
implement. It will most likely not gain much attention for use in electronic commerce
since other systems can be purchased for about the same amount and offer far more
reliability.

False rejection
False rejection, also called a type I error, is a mistake occasionally made
by biometric security systems. In an instance of false rejection, the system fails to
recognize an authorized person and rejects that person as an impostor.
One of the most important specifications in any biometric system is the false rejection
rate (FRR). The FRR is defined as the percentage of identification instances in which
false rejection occurs. This can be expressed as a probability. For example, if the FRR is
0.05 percent, it means that on the average, one out of every 2000 authorized persons
attempting to access the system will not be recognized by that system.
False Acceptance
False acceptance, also called a type II error, is a mistake occasionally made
by biometric security systems. In an instance of false acceptance, an unauthorized person
is identified as an authorized person.
Obviously, false acceptance is an undesirable event. One of the most important
specifications in any biometric system is the false acceptance rate (FAR). The FAR is
defined as the percentage of identification instances in which false acceptance occurs.
This can be expressed as a probability. For example, if the FAR is 0.1 percent, it means
that, on average, one out of every 1000 impostors attempting to breach the system will
be successful. Stated another way, it means that the probability of an unauthorized
person being identified as an authorized person is 0.1 percent.
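Both rates are simple ratios, so they can be estimated directly from trial counts. A
short Python sketch, using made-up test figures that reproduce the two examples above:

    # Estimating FRR and FAR from trial counts (invented figures).
    genuine_attempts, genuine_rejected = 40_000, 20
    impostor_attempts, impostor_accepted = 50_000, 50

    frr = genuine_rejected / genuine_attempts      # 0.0005 -> 0.05 percent
    far = impostor_accepted / impostor_attempts    # 0.001  -> 0.1 percent

    print(f"FRR = {frr:.2%}, about 1 in {round(1 / frr)} authorized users rejected")
    print(f"FAR = {far:.2%}, about 1 in {round(1 / far)} impostors accepted")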


CHARACTERISTICS OF BIOMETRIC SYSTEMS
These are the important factors necessary for any effective biometric system: accuracy,
speed and throughput rate, acceptability to users, uniqueness of the biometric organ and
action, resistance to counterfeiting, reliability, data storage requirements, enrollment
time, intrusiveness of data collection, and subject and system contact requirements.
Accuracy
Accuracy is the most critical characteristic of a biometric identifying verification system.
If the system cannot accurately separate authentic persons from impostors, it should not
even be termed a biometric identification system.
False Reject Rate
The rate, generally stated as a percentage, at which authentic, enrolled persons are
rejected as unidentified or unverified persons by a biometric system is termed the false
reject rate. False rejection is sometimes called a Type I error. In access control, if the
requirement is to keep the bad guys out, false rejection is considered the least important
error. However, in other biometric applications, it may be the most important error. When
used by a bank or retail store to authenticate customer identity and account balance, false
rejection means that the transaction or sale (and associated profit) is lost, and the
customer becomes upset. Most bankers and retailers are willing to allow a few false
accepts as long as there are no false rejects.
False rejections also have a negative effect on throughput and unimpeded operations, and
frustrate users, because they cause unnecessary delays in personnel movement. An
associated problem that is sometimes incorrectly attributed to false rejection is failure to
acquire. Failure to acquire occurs when the biometric sensor is not presented with
sufficient usable data to make an authentic or impostor decision. Examples include
smudged prints on a fingerprint system, improper hand positioning on a hand geometry
system, improper alignment on a retina or iris system, or mumbling on a voice system.
Subjects cause failure to acquire problems, either accidentally or on purpose.
False Accept Rate
The rate, generally stated as a percentage, at which unenrolled or impostor persons are
accepted as authentic, enrolled persons by a biometric system is termed the false accept
rate. False acceptance is sometimes called a Type II error. This is usually considered to
be the most important error for a biometric access control system.
Crossover Error Rate (CER)
This is also called the equal error rate and is the point, generally stated as a percentage, at
which the false rejection rate and the false acceptance rate are equal. This has become the
most important measure of biometric system accuracy.

All biometric systems have sensitivity adjustment capability. If false acceptance is not
desired, the system can be set to require (nearly) perfect matches of enrollment data and
input data. If tested in this configuration, the system can truthfully be stated to achieve a
(near) zero false accept rate. If false rejection is not desired, this system can be readjusted
to accept input data that only approximate a match with enrollment data. If tested in this
configuration, the system can be truthfully stated to achieve a (near) zero false rejection
rate. However, the reality is that biometric systems can operate on only one sensitivity
setting at a time.
The reality is also that when system sensitivity is set to minimize false acceptance,
closely matching data will be spurned, and the false rejection rate will go up significantly.
Conversely, when system sensitivity is set to minimize false rejects, the false acceptance
rate will go up notably. Thus, the published (i.e., truthful) data tell only part of the story.
Actual system accuracy in field operations may even be less than acceptable. This is the
situation that created the need for a single measure of biometric system accuracy.
The crossover error rate (CER) provides a single measurement that is fair and impartial in
comparing the performance of the various systems. In general, the sensitivity setting that
produces the equal error will be close to the setting that will be optimal for field operation
of the system. A biometric system that delivers a CER of 2% will be more accurate than a
system with a CER of 5%.
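The crossover point can be located by sweeping the sensitivity threshold and finding
where the two error rates meet. A minimal Python sketch; the match scores below are
invented, where a real evaluation would use large sets of genuine and impostor scores:

    genuine_scores  = [0.91, 0.85, 0.78, 0.95, 0.70, 0.88, 0.82, 0.60]
    impostor_scores = [0.30, 0.45, 0.55, 0.20, 0.65, 0.40, 0.35, 0.50]

    best = None
    for t in (i / 100 for i in range(101)):
        # Raising the threshold t rejects more genuine users (FRR up)
        # and accepts fewer impostors (FAR down).
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        gap = abs(frr - far)
        if best is None or gap < best[0]:
            best = (gap, t, frr, far)

    _, t, frr, far = best
    print(f"threshold {t:.2f}: FRR {frr:.1%}, FAR {far:.1%} (approximate CER)")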
Speed and Throughput Rate
The speed and throughput rate are also among the most important biometric system characteristics.
Speed is often related to the data processing capability of the system and is stated as how
fast the accept or reject decision is annunciated. In actuality, it relates to the entire
authentication procedure: stepping up to the system; inputting the card or PIN (if a
verification system); input of the physical data by inserting a hand or finger, aligning an
eye, speaking access words, or signing a name; processing and matching of data files;
annunciation of the accept or reject decision; and, if a portal system, movement through
and closing the door.
Generally accepted standards include a system speed of 5 seconds from startup through
decision annunciation. Another standard is a portal throughput rate of 6 to 10/minute,
which equates to 6 to 10 seconds/person through the door. Only in recent years have
biometric systems become capable of meeting these speed standards, and, even today,
some marketed systems do not maintain this rapidity. Slow speed and the resultant
waiting lines and movement delays have frequently caused the removal of biometric
systems and even the failure of biometric companies.
Acceptability to Users
System acceptability to the people who must use it has been a little noticed but
increasingly important factor in biometric identification operations. Initially, when there
were few systems, most were of high security and the few users had a high incentive to
use the systems; user acceptance was of little interest. In addition, little user threat was
seen in fingerprint and hand systems.
Biometric system acceptance occurs when all of the parties involved (those who must use
the system, organizational managers, and any union present) agree that there are assets
that need protection, that the biometric system effectively controls access to these
assets, that system usage is not hazardous to the health of the users, that system usage
does not inordinately impede personnel movement and cause production delays, and that
the system does not enable management to collect personal or health information about
the users. Any of the parties can effect system success or removal. Uncooperative users
will overtly or covertly compromise, damage, or sabotage system equipment. Union demands
for inclusion of the biometric system in their contracts may prove too costly. Moreover,
management has the final decision on whether the biometric system's benefits outweigh
its liabilities.

Chapter-6
What is cryptography?
Cryptography is often seen as a 'black art': something OTHERS understand but YOU
need. This of course need not be the case at all. Yes, there are some complex concepts
to embrace, but a basic understanding need not be a trial.
This section is designed to help you understand the basics of cryptography, presenting
the main ideas in simple language. It also points to a series of resources to help
you apply, and implement, cryptographic solutions.
It will hopefully prove invaluable to all who use it - both beginners and seasoned
professionals.
The Fundamental Idea of Cryptography:
It is possible to transform or encipher a message or plaintext into "an intermediate form"
or ciphertext in which the original information is present but hidden. Then we can release
the transformed message (the ciphertext) without exposing the information it represents.
By using different transformations, we can create many different ciphertexts for the exact
same message. So if we select a particular transformation "at random," we can hope that
anyone wishing to expose the message ("break" the cipher) can do no better than simply
trying all available transformations one by one (on average, half of them). This is a
brute force attack.
The difference between intermediate forms is the interpretation of the ciphertext data.
Different ciphers and different keys will produce different interpretations (different
plaintexts) for the exact same ciphertext. The uncertainty of how to interpret any
particular ciphertext is how information is "hidden."
Naturally, the intended recipient needs to know how to transform or decipher the
intermediate form back into the original message, and this is the key distribution
problem.
By itself, ciphertext is literally meaningless, in the sense of having no one clear
interpretation. In so-called perfect ciphers, any ciphertext (of appropriate size) can be
interpreted as any message, just by selecting an appropriate key. In fact, any number
of different messages can produce exactly the same ciphertext, by using the appropriate
keys. In other ciphers, this may not always be possible, but it must always be considered.
To attack and break a cipher, it is necessary to somehow confirm that the message we
generate from ciphertext is the exact particular message which was sent.
What Cryptography Can Do
Potentially, cryptography can hide information while it is in transit or storage. In general,
cryptography can:

- Provide secrecy.
- Authenticate that a message has not changed in transit.
- Implicitly authenticate the sender.

Cryptography hides words: At most, it can only hide talking about contraband or illegal
actions. But in a country with "freedom of speech," we normally expect crimes to be
more than just "talk."
Cryptography can kill in the sense that boots can kill; that is, as a part of some other
process, but that does not make cryptography like a rifle or a tank. Cryptography is
defensive, and can protect ordinary commerce and ordinary people. Cryptography may
be to our private information as our home is to our private property, and our home is our
"castle."
Potentially, cryptography can hide secrets, either from others, or during communication.
There are many good and non-criminal reasons to have secrets: Certainly, those engaged
in commercial research and development (R&D) have "secrets" they must keep. Business
often needs secrecy from competitors while plans are laid and executed, and the need for
secrecy often continues as long as there are business operations. Professors and writers
may want to keep their work private, until an appropriate time. Negotiations for new jobs
are generally secret, and romance often is as well, or at least we might prefer that detailed
discussions not be exposed. And health information is often kept secret for good reason.
One possible application for cryptography is to secure on-line communications between
work and home, perhaps leading to a society-wide reduction in driving, something we
could all appreciate.

A Simple Cipher
On a piece of lined paper, write the alphabet in order, one character per line:
A
B
C
...
Then, on each line, we write another character to the right. In this second column, we also
want to use each alphabetic character exactly once, but we want to place them in some
different order.
A F
B W
C A
...
When we have done this, we can take any message and encipher it letter-by-letter.
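A minimal Python sketch of this substitution cipher; here the second column is produced
by a random shuffle rather than written out by hand:

    import random
    import string

    # Build the two columns: the alphabet, and a shuffled copy of it.
    plain = string.ascii_uppercase
    shuffled = list(plain)
    random.shuffle(shuffled)
    table = dict(zip(plain, shuffled))          # e.g. {'A': 'F', 'B': 'W', ...}
    inverse = {v: k for k, v in table.items()}  # for deciphering

    def encipher(message):
        return "".join(table.get(c, c) for c in message.upper())

    def decipher(ciphertext):
        return "".join(inverse.get(c, c) for c in ciphertext)

    ct = encipher("ATTACK AT DAWN")
    print(ct, "->", decipher(ct))

The shuffled column is the key: anyone holding the same table can decipher, and there
are 26! possible tables.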
Naive Ciphers
Suppose we want to hide a name: We might think to innovate a different rule for each
letter. We might say: "First we have 'T', but 't' is the 3rd letter in 'bottle' so we write '3.'"
We can continue this way, and such a cipher could be very difficult to break. So why is
this sort of thing not done? There are several reasons:
1. First, any cipher construction must be decipherable, and it is all too easy, when
choosing rules at random, to make a rule that depends upon plaintext, which will
of course not be present until after the ciphertext is deciphered.
2. The next problem is remembering the rules, since the rules constitute the key. If
we choose from among many rules, in no pattern at all, we may have
a strong cipher, but be unable to remember the key. And if we write the key down,
all someone has to do is read that and properly interpret it (which may be another
encryption issue). So we might choose among few rules, in some pattern, which
will make a weaker cipher.
3. Another problem is the question of what we do for longer messages. This sort of
scheme seems to want a different key, or perhaps just more key, for a longer
message, which is certainly inconvenient. What often happens in practice is that
the key is re-used repeatedly, and that will be very, very weak.
4. Yet another problem is the observation that describing the rule selection may take
more information than the message itself. To send the message to someone else,
we must somehow transport the key securely to the other end. But if
we can transfer this amount of data securely in the first place, we wonder why we
cannot securely transfer the smaller message itself.
Modern ciphering is about constructions which attempt to solve these problems. A
modern cipher has a large keyspace, which might well be controlled by
a hashing computation on a language phrase we can remember. A modern cipher system can
handle a wide range of message sizes, with exactly the same key, and normally provides a
way to securely re-use keys. And the key can be much, much smaller than a long message.
Moreover, in a modern cipher, we expect the key not to be exposed, even
if the opponent has both the plaintext and the associated ciphertext for many messages
(a known-plaintext attack). In fact, we normally assume that the opponent knows the full
construction of the cipher, and has lots of known plaintext, and still cannot find the key.
Such designs are not trivial.
USE / Basic Requirements of Cryptography
Rule - Only store sensitive data that you need
Many eCommerce businesses use payment providers that store the credit card for
recurring billing. This offloads the burden of keeping credit card numbers safe.
Only use strong cryptographic algorithms
A good default is AES together with one of the authenticated encryption modes that
provide both confidentiality and authenticity (i.e., data origin authentication), such as
CCM or EAX. For Java, if you are using SunJCE, no such mode is available: the cipher
modes supported in JDK 1.5 and later are CBC, CFB, CFBx, CTR, CTS, ECB, OFB, OFBx and
PCBC, and none of these is an authenticated encryption mode. If you are using an
alternative JCE provider such as Bouncy Castle, RSA JSafe or IAIK, then one of the
authenticated encryption modes those providers supply should be preferred.
Ensure that random numbers are cryptographically strong
Ensure that all random numbers, random file names, random GUIDs, and random strings
are generated in a cryptographically strong fashion. Also ensure that random algorithms
are seeded with sufficient entropy.
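In Python, for example, the standard-library secrets module (Python 3.6 and later) draws
from the operating system's cryptographically strong generator, unlike the
general-purpose random module:

    import secrets

    session_token = secrets.token_hex(16)       # 32 hex chars from 16 random bytes
    temp_filename = f"upload-{secrets.token_urlsafe(8)}.tmp"
    raw_key_bytes = secrets.token_bytes(32)     # e.g. material for a 256-bit key

    print(session_token, temp_filename, len(raw_key_bytes))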
Only use widely accepted implementations of cryptographic algorithms
Do not implement an existing cryptographic algorithm on your own, no matter how easy
it appears. Use widely accepted algorithms and widely accepted implementations only.
Ensure that an implementation has, at least, had some cryptography experts involved in
its creation.
Prefer authenticated encryption modes
If an authenticated encryption mode is not available, then the best option is to consult
with a professional cryptographer. Ask the cryptographer to review and customize the
system to counter padding oracle attacks. This customization will typically involve the
use of a message authentication code (MAC) that must be carefully employed in order to
be effective.

In the worst case, where neither an AE mode nor a professional cryptographer is
available, the cryptography is definitely vulnerable to a padding oracle attack. The
best that can be done in this situation is to design the overall system so that the
decryption function is only given ciphertext retrieved directly from a canonical source
which must itself be protected by other integrity controls. While this technique will help
reduce the risk of a padding oracle attack, it does not eliminate the risk, and it should be
treated as a temporary workaround until an AE-based solution can be implemented.
Store the hashed and salted value of passwords
Store the salted hash of the password, using a different random salt for each password
hash. Never store the clear-text password or an encrypted version of the password.
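A minimal sketch using only the Python standard library; PBKDF2 is one widely accepted
password-hashing construction, and the iteration count here is an illustrative choice:

    import hashlib
    import hmac
    import os

    def hash_password(password):
        salt = os.urandom(16)   # fresh random salt for each password
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest     # store both; never the password itself

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False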
Ensure that the cryptographic protection remains secure even if access controls fail
This rule supports the principle of defense in depth. Access controls (usernames,
passwords, privileges, etc.) are one layer of protection. Storage encryption should add an
additional layer of protection that will continue protecting the data even if an attacker
subverts the database access control layer.
Ensure that any secret key is protected from unauthorized access
Define a key lifecycle
The key lifecycle details the various states that a key will move through during its life.
The lifecycle will specify when a key should no longer be used for encryption, when a
key should no longer be used for decryption (these are not necessarily coincident),
whether data must be rekeyed when a new key is introduced, and when a key should be
removed from use all together.
Store unencrypted keys away from the encrypted data
If the keys are stored with the data then any compromise of the data will easily
compromise the keys as well. Unencrypted keys should never reside on the same machine
or cluster as the data.
Use independent keys when multiple keys are required
Ensure that key material is independent. That is, do not choose a second key which is
easily related to the first (or any preceding) keys; a derivation sketch follows below.
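One common way to obtain independent keys is to derive each one from a master secret
with a keyed hash and a distinct label; a sketch using only the Python standard library,
with illustrative label names:

    import hashlib
    import hmac
    import os

    master = os.urandom(32)   # master secret, e.g. retrieved from a key vault

    def derive_key(master, label):
        # Distinct labels yield unrelated-looking subkeys from one master secret.
        return hmac.new(master, label, hashlib.sha256).digest()

    enc_key = derive_key(master, b"encryption")
    mac_key = derive_key(master, b"authentication")
    print(enc_key != mac_key)   # True: independent for practical purposes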
Protect keys in a key vault
Keys should remain in a protected key vault at all times. In particular, ensure that there is
a gap between the threat vectors with direct access to the data and the threat vectors with
direct access to the keys. This implies that keys should not be stored on the application or
web server (assuming that application attackers are part of the relevant threat model).

Document concrete procedures for managing keys through the lifecycle
These procedures must be written down, and the key custodians must be adequately
trained.
Build support for changing keys periodically
Key rotation is a must, as all good keys do come to an end, either through expiration or
revocation. A developer will have to deal with rotating keys at some point, so it is
better to have a system in place now rather than scrambling later.
Data encryption and decryption technique
DES (the Data Encryption Standard) is a symmetric block cipher developed by IBM. The
algorithm uses a 56-bit key to encipher/decipher a 64-bit block of data. The key is always
presented as a 64-bit block, every 8th bit of which is ignored. However, it is usual to set
each 8th bit so that each group of 8 bits has an odd number of bits set to 1.
The algorithm is best suited to implementation in hardware, probably to discourage
implementations in software, which tend to be slow by comparison. However, modern
computers are so fast that satisfactory software implementations are readily available.
DES is the most widely used symmetric algorithm in the world, despite claims that the
key length is too short. Ever since DES was first announced, controversy has raged about
whether 56 bits is long enough to guarantee security.
The key length argument goes like this. Assuming that the only feasible attack on DES is
to try each key in turn until the right one is found, then 1,000,000 machines each capable
of testing 1,000,000 keys per second would find (on average) one key every 12 hours.
Most reasonable people might find this rather comforting and a good measure of the
strength of the algorithm.
Those who consider the exhaustive key-search attack to be a real possibility (and to be
fair the technology to do such a search is becoming a reality) can overcome the problem
by using double or triple length keys. In fact, double length keys have been recommended
for the financial industry for many years.
Use of multiple length keys leads us to the Triple-DES algorithm, in which DES is
applied three times. If we consider a triple length key to consist of three 56-bit keys K1,
K2, K3 then encryption is as follows:
Encrypt with K1
Decrypt with K2
Encrypt with K3
Decryption is the reverse process:

Decrypt with K3
Encrypt with K2
Decrypt with K1
Setting K3 equal to K1 in these processes gives us a double length key K1, K2.
Setting K1, K2 and K3 all equal to K has the same effect as using a single-length
(56-bit) key. Thus it is possible for a system using triple-DES to be compatible with a system
using single-DES.
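The encrypt-decrypt-encrypt (EDE) composition and its single-DES compatibility can be
sketched abstractly in Python. The XOR stand-ins below are not DES; they merely model a
block cipher whose decryption inverts its encryption:

    def des_encrypt(key, block):
        # Placeholder for a real single-DES encryption of one 8-byte block.
        return bytes(b ^ k for b, k in zip(block, key))

    def des_decrypt(key, block):
        return des_encrypt(key, block)   # XOR is its own inverse

    def triple_des_encrypt(k1, k2, k3, block):
        return des_encrypt(k3, des_decrypt(k2, des_encrypt(k1, block)))

    def triple_des_decrypt(k1, k2, k3, block):
        return des_decrypt(k1, des_encrypt(k2, des_decrypt(k3, block)))

    k1, k2, k3 = b"A" * 8, b"B" * 8, b"C" * 8   # three 8-byte (64-bit) keys
    pt = b"8bytes!!"
    ct = triple_des_encrypt(k1, k2, k3, pt)
    assert triple_des_decrypt(k1, k2, k3, ct) == pt
    # With all three keys equal, EDE collapses to single DES:
    assert triple_des_encrypt(k1, k1, k1, pt) == des_encrypt(k1, pt)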
RSA
RSA is a public key algorithm invented by Rivest, Shamir and Adleman. The key used
for encryption is different from (but related to) the key used for decryption.
The algorithm is based on modular exponentiation. Numbers e, d and N are chosen with
the property that if A is a number less than N, then (A^e mod N)^d mod N = A.
This means that you can encrypt A with e and decrypt using d. Conversely, you can
encrypt using d and decrypt using e (though doing it this way round is usually referred
to as signing and verification).
The pair of numbers (e,N) is known as the public key and can be published.
The pair of numbers (d,N) is known as the private key and must be kept secret.
The number e is known as the public exponent, the number d is known as the private
exponent, and N is known as the modulus. When talking of key lengths in connection
with RSA, what is meant is the modulus length.
An algorithm that uses different keys for encryption and decryption is said to be
asymmetric.
Anybody knowing the public key can use it to create encrypted messages, but only the
owner of the secret key can decrypt them.
Conversely the owner of the secret key can encrypt messages that can be decrypted by
anybody with the public key. Anybody successfully decrypting such messages can be
sure that only the owner of the secret key could have encrypted them. This fact is the
basis of the digital signature technique.
Without going into detail about how e, d and N are related, d can be deduced from e and
N if the factors of N can be determined. Therefore the security of RSA depends on the
difficulty of factorizing N. Because factorization is believed to be a hard problem, the
longer N is, the more secure the cryptosystem. Given the power of modern computers, a
length of 768 bits is considered reasonably safe, but for serious commercial use 1024 bits
is recommended.
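The encrypt-then-decrypt property can be checked directly with the classic textbook
parameters p = 61 and q = 53 (a toy key, far too small for real use):

    # Toy RSA, for illustration only (Python 3.8+ for pow(e, -1, phi)).
    p, q = 61, 53
    N = p * q                   # modulus: 3233
    phi = (p - 1) * (q - 1)     # 3120
    e = 17                      # public exponent, coprime to phi
    d = pow(e, -1, phi)         # private exponent: 2753

    A = 65                      # a message, as a number less than N
    ciphertext = pow(A, e, N)           # A^e mod N = 2790
    recovered = pow(ciphertext, d, N)   # (A^e mod N)^d mod N = 65
    print(ciphertext, recovered)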

The problem with choosing long keys is that RSA is very slow compared with a
symmetric block cipher such as DES, and the longer the key the slower it is. The best
solution is to use RSA for digital signatures and for protecting DES keys. Bulk data
encryption should be done using DES.
Symmetric / Asymmetric key
An encryption system in which the sender and receiver of a message share a single,
common key that is used to encrypt and decrypt the message. Contrast this with
public-key cryptography, which utilizes two keys - a public key to encrypt messages and
a private key to decrypt them.
Symmetric-key systems are simpler and faster, but their main drawback is that the two
parties must somehow exchange the key in a secure way. Public-key encryption avoids
this problem because the public key can be distributed in a non-secure way, and the
private key is never transmitted.
Symmetric-key cryptography is sometimes called secret-key cryptography. The most
popular symmetric-key system is the Data Encryption Standard (DES).
Symmetric-key algorithms are a class of algorithms for cryptography that use trivially
related, often identical, cryptographic keys for both decryption and encryption.
The encryption key is trivially related to the decryption key, in that they may be identical
or there is a simple transformation to go between the two keys. The keys, in practice,
represent a shared secret between two or more parties that can be used to maintain a
private information link.
Other terms for symmetric-key encryption are secret-key, single-key, shared-key,
one-key, and private-key encryption. Use of the last and first terms can create
ambiguity with similar terminology used in public-key cryptography.
Symmetric Key Encryption (Personal Key)
Encryption algorithms that use the same key for encrypting and for decrypting
information are called symmetric-key algorithms. The symmetric key is also called a
secret key because it is kept as a shared secret between the sender and receiver of
information. Otherwise, the confidentiality of the encrypted information is compromised.
The figure below shows basic symmetric key encryption and decryption.


[Figure: Encryption and Decryption with a Symmetric Key]
Symmetric key encryption is much faster than public key encryption, often by 100 to
1,000 times. Because public key encryption places a much heavier computational load on
computer processors than symmetric key encryption, symmetric key technology is
generally used to provide secrecy for the bulk encryption and decryption of information.
Symmetric keys are commonly used by security protocols as session keys for confidential
online communications. For example, the Transport Layer Security (TLS) and Internet
Protocol security (IPSec) protocols use symmetric session keys with standard encryption
algorithms to encrypt and decrypt confidential communications between parties.
Different session keys are used for each confidential communication session and session
keys are sometimes renewed at specified intervals.
Symmetric keys also are commonly used by technologies that provide bulk encryption of
persistent data, such as e-mail messages and document files. For example,
Secure/Multipurpose Internet Mail Extensions (S/MIME) uses symmetric keys to encrypt
messages for confidential mail, and Encrypting File System (EFS) uses symmetric keys
to encrypt files for confidentiality.
Cryptography-based security technologies use a variety of symmetric key encryption
algorithms to provide confidentiality. For more information about the specific encryption
algorithms that are used by security technologies, see the applicable documentation for
each technology. For more information about how the various symmetric key algorithms
differ, see the cryptography literature that is referenced under "Additional Resources" at
the end of this chapter.
Asymmetric key (Public Key Encryption)
Encryption algorithms that use different keys for encrypting and decrypting information
are most often called public-key algorithms but are sometimes also called asymmetric key
algorithms. Public key encryption requires the use of both a private key (a key that is
known only to its owner) and a public key (a key that is available to and known to other
entities on the network). A user's public key, for example, can be published in the
directory so that it is accessible to other people in the organization. The two keys are

different but complementary in function. Information that is encrypted with the public
key can be decrypted only with the corresponding private key of the set. The figure
below shows basic encryption and decryption with asymmetric keys.

[Figure: Encryption and Decryption with Asymmetric Keys]

Secret-key
For symmetric key cryptography to work for online communications, the secret key must
be securely shared with authorized communicating parties and protected from discovery
and use by unauthorized parties. Public key cryptography can be used to provide a secure
method for exchanging secret keys online. Two of the most common key exchange
algorithms are the following:

- Diffie-Hellman Key Agreement algorithm
- RSA key exchange process

Both methods provide for highly secure key exchange between communicating parties.
An intruder who intercepts network communications cannot easily guess or decode the
secret key that is required to decrypt communications. The exact mechanisms and
algorithms that are used for key exchange vary for each security technology. In general,
the Diffie-Hellman Key Agreement algorithm provides better performance than the RSA
key exchange algorithm.
Diffie-Hellman Key Agreement
Public key cryptography was first publicly proposed in 1976 by Stanford University
researchers Whitfield Diffie and Martin Hellman to provide a secure solution for
confidentially exchanging information online. The figure below shows the basic
Diffie-Hellman Key Agreement process.


[Figure: Diffie-Hellman Key Agreement]


Diffie-Hellman key agreement is not based on encryption and decryption, but instead
relies on mathematical functions that enable two parties to generate a shared secret key
for exchanging information confidentially online. Essentially, each party agrees on a
public value g and a large prime number p. Next, one party chooses a secret value x and
the other party chooses a secret value y. Both parties use their secret values to derive
public values, g^x mod p and g^y mod p, and they exchange the public values. Each party
then uses the other party's public value to calculate the shared secret key that is used
by both parties for confidential communications. A third party cannot derive the shared
secret key because they do not know either of the secret values, x or y.
For example, Alice chooses secret value x and sends the public value g^x mod p to Bob.
Bob chooses secret value y and sends the public value g^y mod p to Alice. Alice uses the
value g^(xy) mod p = (g^y)^x mod p as her secret key for confidential communications
with Bob. Bob uses the value g^(yx) mod p = (g^x)^y mod p as his secret key. Because
g^(xy) mod p equals g^(yx) mod p, Alice and Bob can use their secret keys with a
symmetric key algorithm to conduct confidential online communications. The use of the
modulo function ensures that both parties can calculate the same secret key value, but
an eavesdropper cannot. An eavesdropper can intercept the values of g and p, but because
of the extremely difficult mathematical problem created by the use of a large prime
number in mod p, the eavesdropper cannot feasibly calculate either secret value x or
secret value y. The secret key is known only to each party and is never visible on the
network.
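The exchange can be traced with deliberately tiny numbers; this is a toy example, where
real deployments use primes that are hundreds or thousands of bits long:

    # Toy Diffie-Hellman exchange (tiny numbers, illustration only).
    g, p = 5, 23               # public values agreed by both parties

    x = 6                      # Alice's secret value
    y = 15                     # Bob's secret value

    A = pow(g, x, p)           # Alice sends g^x mod p = 8
    B = pow(g, y, p)           # Bob sends g^y mod p = 19

    alice_key = pow(B, x, p)   # (g^y)^x mod p
    bob_key   = pow(A, y, p)   # (g^x)^y mod p
    print(alice_key, bob_key)  # both print 2, the shared secret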
Diffie-Hellman key exchange is widely used with varying technical details by Internet
security technologies, such as IPSec and TLS, to provide secret key exchange for
confidential online communications. For technical discussions about Diffie-Hellman key
agreement and how it is implemented in security technologies, see the cryptography
literature that is referenced under "Additional Resources" at the end of this chapter.


RSA Key Exchange
The Rivest-Shamir-Adleman (RSA) algorithms, available from RSA Data Security, Inc.,
are the most widely used public key cryptography algorithms. For RSA key exchange,
secret keys are exchanged securely online by encrypting the secret key with the intended
recipient's public key. Only the intended recipient can decrypt the secret key because it
requires the use of the recipient's private key. Therefore, a third party who intercepts the
encrypted, shared secret key cannot decrypt and use it. The figure below illustrates the
basic RSA key exchange process.

[Figure: Basic RSA Key Exchange]


The RSA key exchange process is used by some security technologies to protect
encryption keys. For example, EFS uses the RSA key exchange process to protect the
bulk encryption keys that are used to encrypt and decrypt files.
Chapter 7
proxy server
In an enterprise that uses the Internet, a proxy server is a server that acts as an
intermediary between a workstation user and the Internet so that the enterprise can ensure
security, administrative control, and caching service. A proxy server is associated with or
part of a gateway server that separates the enterprise network from the outside network
and a firewall server that protects the enterprise network from outside intrusion.
A proxy server receives a request for an Internet service (such as a Web page request)
from a user. If it passes filtering requirements, the proxy server, assuming it is also
a cache server, looks in its local cache of previously downloaded Web pages. If it finds
the page, it returns it to the user without needing to forward the request to the Internet. If
the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses
one of its own IP addresses to request the page from the server out on the Internet. When
the page is returned, the proxy server relates it to the original request and forwards it on
to the user.


To the user, the proxy server is invisible; all Internet requests and returned responses
appear to be directly with the addressed Internet server. (The proxy is not quite invisible;
its IP address has to be specified as a configuration option to the browser or other
protocol program.)
An advantage of a proxy server is that its cache can serve all users. If one or more
Internet sites are frequently requested, these are likely to be in the proxy's cache, which
will improve user response time. In fact, there are special servers called cache servers. A
proxy can also do logging.
The functions of proxy, firewall, and caching can be in separate server programs or
combined in a single package. Different server programs can be in different computers.
For example, a proxy server may be in the same machine with a firewall server, or it may be
on a separate server and forward requests through the firewall.
FIREWALL
A firewall is a set of related programs, located at a network gateway server, that protects
the resources of a private network from users from other networks. (The term also
implies the security policy that is used with the programs.) An enterprise with
an intranet that allows its workers access to the wider Internet installs a firewall to
prevent outsiders from accessing its own private data resources and for controlling what
outside resources its own users have access to.
Basically, a firewall, working closely with a router program, examines each
network packet to determine whether to forward it toward its destination. A firewall also
includes or works with a proxy server that makes network requests on behalf of
workstation users. A firewall is often installed in a specially designated computer
separate from the rest of the network so that no incoming request can get directly at
private network resources.
There are a number of firewall screening methods. A simple one is to screen requests to
make sure they come from acceptable (previously identified) domain name and Internet
Protocol addresses. For mobile users, firewalls allow remote access into the private
network by the use of secure logon procedures and authentication certificates.
A number of companies make firewall products. Features include logging and reporting,
automatic alarms at given thresholds of attack, and a graphical user interface for
controlling the firewall.
Computer security borrows this term from firefighting, where it originated. In
firefighting, a firewall is a barrier established to prevent the spread of fire.


Smurf Attack
The smurf attack, named after its exploit program, is a denial-of-service attack which
uses spoofed broadcast ping messages to flood a target system. An attacker sends forged
ICMP echo packets to broadcast addresses of vulnerable networks with forged source
address pointing to the target (victim) of the attack. All the systems on these networks
reply to the victim with ICMP echo replies. This rapidly exhausts the bandwidth available
to the target.
There is not much the victim can do, because there is no connectivity to the outside:
the incoming link is overloaded with ICMP packets. However, the victim can determine the
subnet number used as the amplifier and contact the owner to ask them to turn off
amplification (i.e. enable filtering of ICMP Echoes).
IRC servers are the primary victims of smurf attacks. Script-kiddies run programs that
scan the Internet looking for "amplifiers" (i.e. subnets that will respond). They compile
lists of these amplifiers and exchange them with their friends. Thus, when a victim is
flooded with responses, they will appear to come from all over the Internet. On IRCs,
hackers will use bots (automated programs) that connect to IRC servers and collect IP
addresses. The bots then send the forged packets to the amplifiers to inundate the victim.
Several years ago, most IP networks could lend themselves thus to smurf attacks -- in the
lingo, they were "smurfable". Today, thanks largely to the ease with which administrators
can make a network immune to this abuse, very few networks remain smurfable. To
secure a network with a Cisco router from taking part in a smurf attack, it suffices to
issue the router command:
no ip directed-broadcast
Smurf attacks use a combination of IP Address Spoofing and ICMP flooding to
overwhelmingly saturate a target network with traffic to such an extent that all normal
traffic is effectively drowned out thereby causing a Denial of Service (DoS) attack.
Smurf attacks consist of three separate elements: the source site, the bounce site and
the target site.
- First of all, an attacker will select a bounce site. This is usually a very large
  network.
- The attacker then modifies a PING packet to contain the address of the target site
  as the PING packet's source address.
- Next, the attacker sends the spoofed PING packet to the broadcast address of the
  bounce site.
- This will result in the bounce site broadcasting the spoofed packet to all devices
  configured to receive messages from that broadcast address, which by default will be
  all devices on that Local Area Network (LAN), or on that subnet segment if the
  network has been configured into a number of smaller subnets for administrative
  purposes.
• The devices on the bounce site network have no way of knowing that the request is
forged, so they automatically respond with an echo reply sent to the site that is the
intended target of the smurf attack.
• This results in the target site being overwhelmed by a huge number of erroneous
replies that it neither requested nor knows anything about.
• The outcome of this oversaturation is that the target becomes unable to process the
requests, often because its buffers overflow, and it may hang or reboot.
In many cases the effect of this type of attack is so overwhelming that the target
appears simply to grind to a halt while attempting to process the flood of incoming
reply PINGs from the bounce site.
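A rough back-of-the-envelope calculation shows why the amplification is so damaging (all numbers here are assumed, illustrative values):

# Illustrative smurf amplification arithmetic (assumed numbers)
HOSTS_ON_BOUNCE_SUBNET = 254  # a fully populated /24 that answers broadcast pings
REPLY_SIZE_BYTES = 84         # minimal ICMP echo reply including the IP header
ATTACKER_RATE_PPS = 1000      # spoofed echo requests sent per second

replies_per_second = ATTACKER_RATE_PPS * HOSTS_ON_BOUNCE_SUBNET
victim_load_bps = replies_per_second * REPLY_SIZE_BYTES * 8

print(f"{replies_per_second} replies/s, about {victim_load_bps / 1e6:.0f} Mbit/s at the victim")
# -> 254000 replies/s, about 171 Mbit/s at the victim

In other words, a modest stream of forged requests from the attacker is multiplied by every responding host on the bounce subnet.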
Another consequence can be that the target machine's CPU simply cannot cope with the
flood: its processing queue, internal counters, out-of-order processing units, and caches
are overwhelmed, the CPU registers processing-queue errors, and it continually flushes
its processing pipeline and buffers. As a result the CPU suddenly appears to be running
at 100% and stays there until it overheats and becomes an unusable lump of silicon.
Fortunately, modern CPUs have thermal regulation mechanisms that usually prevent total
destruction of the CPU under this kind of processing strain, but many older systems, and
those with thermal throttling turned off, will often die.
Smurf Attack Countermeasures
Countering a smurf attack is not as hard as one might expect. A correctly configured
stateful firewall will know that the massive influx of ICMP ping replies was never
requested by any device behind it and, if configured to do so, will simply drop these
packets. This protects the devices inside the firewall.
Configuring your firewall to deny all external ICMP traffic into your internal network
works just as effectively. Again, this may make remote administration and connectivity
testing a little more difficult than would otherwise be the case, but that is a small
price to pay for a respectable degree of immunity to this type of attack.
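As a sketch of what such a policy might look like on a Linux-based firewall using iptables (chain names, rule order, and default policies must be adapted to the existing configuration):

# Accept ICMP that belongs to exchanges initiated from inside (stateful match)
iptables -A INPUT -p icmp -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Drop unsolicited echo replies -- the traffic a smurf flood consists of
iptables -A INPUT -p icmp --icmp-type echo-reply -j DROP
# Optionally refuse inbound pings entirely
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP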
However, if the attacker is able to send enough spoofed ICMP PING packets and the
exploited bounce site is large enough, the volume of ICMP replies the bounce network is
triggered to send your way may be large enough to overwhelm your modem, router, or
firewall device(s). The effect of this inundation of arriving PING replies may be that
all of the firewall device's resources are consumed in dealing with the smurf attack flood.
As a result it may be unable to service legitimate network requests, so a denial of
service will be experienced by all internal network devices requesting external access
(to the Internet or a branch office network), as well as by all external requests for
access to internal network resources, such as your publicly accessible website.
Failover Redundancy
This is one reason why having redundant failover backup device(s) and extra live IP
addresses is such a good idea. However, while a redundant failover system may allow your
internal network to retain external access, it is much harder to provide similar
redundancy for external requests for access to internal resources.
ISP Involvement
One thing you can do is ask your ISP to block all ICMP traffic at their end. This should
get you back up and running, but be warned: it will take some time unless you have already
entered into an agreement that specifically states the actions to be taken by both you
and your ISP in the event of a smurf denial-of-service (DoS) attack.
Such an arrangement falls into the category of preventative countermeasures, since the
processes and procedures are already in place, waiting only for a trigger event to swing
them into action. Your ISP will be able to drop the unsolicited PING replies while
rerouting legitimate traffic to your spare/alternate IP address.
Attacker Identification
The ultimate objective is to identify the network being used as the bounce site and to
stop its inadvertent broadcasting of the spoofed ICMP PING requests. It may also be
possible to identify the source of the spoofed broadcasts (the attacker) by backtracking
the path along which the requests arrived.