Unit-1
1. Explain the process of risk management?
ANS:
Risk assessment is a key component of a holistic, organization-wide risk management
process as defined in NIST Special Publication 800-39, Managing Information Security Risk:
Organization, Mission, and Information System View.
Risk management processes include: (i) framing risk; (ii) assessing risk; (iii) responding to
risk; and (iv) monitoring risk. Figure 1 illustrates the four steps in the risk management
process—including the risk assessment step and the information and communications flows
necessary to make the process work effectively.
1. The first component of risk management addresses how organizations frame risk or
establish a risk context—that is, describing the environment in which risk-based decisions are
made. The purpose of the risk framing component is to produce a risk management strategy
that addresses how organizations intend to assess risk, respond to risk, and monitor risk—
making explicit and transparent the risk perceptions that organizations routinely use in
making both investment and operational decisions. The risk management strategy establishes
a foundation for managing risk and delineates the boundaries for risk-based decisions within
organizations.
2. The second component of risk management addresses how organizations assess risk within
the context of the organizational risk frame. The purpose of the risk assessment component is
to identify: (i) threats to organizations (i.e., operations, assets, or individuals) or threats
directed through organizations against other organizations or the Nation; (ii) vulnerabilities
internal and external to organizations; (iii) the harm (i.e., adverse impact) that may occur
given the potential for threats exploiting vulnerabilities; and (iv) the likelihood that harm will
occur. The end result is a determination of risk (i.e., typically a function of the degree of
harm and likelihood of harm occurring).
3. The third component of risk management addresses how organizations respond to risk once
that risk is determined based on the results of a risk assessment. The purpose of the risk
response component is to provide a consistent, organization-wide response to risk in
accordance with the organizational risk frame by: (i) developing alternative courses of action
for responding to risk; (ii) evaluating the alternative courses of action; (iii) determining
appropriate courses of action consistent with organizational risk tolerance; and (iv)
implementing risk responses based on selected courses of action.
Compiled by Shetty College and UPG College M.S.C.I.T. students 2015 batch
Prof. Sohrab Vakharia Information Security Management
(only for private circulation; notes compiled from various resources)
4. The fourth component of risk management addresses how organizations monitor risk over
time. The purpose of the risk monitoring component is to: (i) determine the ongoing
effectiveness of risk responses (consistent with the organizational risk frame); (ii) identify
risk-impacting changes to organizational information systems and the environments in which
the systems operate; and (iii) verify that planned risk responses are implemented and
information security requirements derived from and traceable to organizational
missions/business functions, federal legislation, directives, regulations, policies, standards,
and guidelines are satisfied.
IDENTIFY PURPOSE
TASK 1-1: Identify the purpose of the risk assessment in terms of the information that
the assessment is intended to produce and the decisions the assessment is intended to
support.
IDENTIFY SCOPE
TASK 1-2: Identify the scope of the risk assessment in terms of organizational
applicability, time frame supported, and architectural/technology considerations.
The scope of the risk assessment determines what will be considered in the assessment.
Risk assessment scope affects the range of information available to make risk-based
decisions and is determined by the organizational official requesting the assessment and
the risk management strategy. Establishing the scope of the risk assessment helps
organizations to determine:
(i) what tiers are addressed in the assessment;
(ii) what parts of organizations are affected by the assessment and how they are
affected;
(iii) what decisions the assessment results support;
(iv) how long assessment results are relevant;
(v) what influences the need to update the assessment.
Organizational Applicability:
Organizational applicability describes which parts of the organization or suborganizations
are affected by the risk assessment and the risk-based decisions resulting from the
assessment (including the parts of the organization or suborganizations responsible for
implementing the activities and tasks related to the decisions).
Effectiveness Time Frame:
Organizations determine how long the results of particular risk assessments can be used
to legitimately inform risk-based decisions. The time frame is usually related to the
purpose of the assessment. For example, a risk assessment to inform Tier 1 policy-related
decisions needs to be relevant for an extended period of time since the governance
process for policy changes can be time-consuming in many organizations.
Architectural/Technology Considerations:
Organizations use architectural and technology considerations to clarify the scope of the
risk assessment. For example, at Tier 3, the scope of the risk assessment can be an
organizational information system in its environment of operations. This entails placing
the information system in its architectural context, so that vulnerabilities in inherited
controls can be taken into consideration.
IDENTIFY ASSUMPTIONS AND CONSTRAINTS
TASK 1-3: Identify the specific assumptions and constraints under which the risk
assessment is conducted.
Threat Sources:
Organizations determine which types of threat sources are to be considered during risk
assessments. Organizations make explicit the process used to identify threats and any
assumptions related to the threat identification process. If such information is identified
during the risk framing step and included as part of the organizational risk management
strategy, the information need not be repeated in each individual risk assessment.
Threat Events:
Organizations determine which types of threat events are to be considered during risk
assessments and the level of detail needed to describe such events. Descriptions of threat
events can be expressed in highly general terms (e.g., phishing, distributed denial-of-
service), in more descriptive terms using tactics, techniques, and procedures, or in highly
specific terms (e.g., the names of specific information systems, technologies,
organizations, roles, or locations). In addition, organizations consider:
(i) what representative set of threat events can serve as a starting point for the
identification of the specific threat events in the risk assessment; and
(ii) what degree of confirmation is needed for threat events to be considered relevant for
purposes of the risk assessment.
IDENTIFY INFORMATION SOURCES
TASK 1-4: Identify the sources of descriptive, threat, vulnerability, and impact information
to be used in the risk assessment.
Descriptive information enables organizations to be able to determine the relevance of threat
and vulnerability information. At Tier 1, descriptive information can include, for example,
the type of risk management and information security governance structures in place within
organizations and how the organization identifies and prioritizes critical missions/business
functions. At Tier 2, descriptive information can include, for example, information about: (i)
organizational mission/business processes, functional management processes, and
information flows; (ii) enterprise architecture, information security architecture, and the
technical/process flow architectures of the systems, common infrastructures, and shared
services that fall within the scope of the risk assessment; and (iii) the external environments
in which organizations operate including, for example, the relationships and dependencies
with external providers.
IDENTIFY RISK MODEL AND ANALYTIC APPROACH
TASK 1-5: Identify the risk model and analytic approach to be used in the risk assessment.
Organizations define one or more risk models for use in conducting risk assessments (see
Section 2.3.1) and identify which model is to be used for the risk assessment. To facilitate
reciprocity of assessment results, organization-specific risk models include, or can be
translated into, the risk factors (i.e., threat, vulnerability, impact, likelihood, and predisposing
condition) defined in the appendices. Organizations also identify the specific analytic
approach to be used for the risk assessment including the assessment approach (i.e.,
quantitative, qualitative, semi-quantitative) and the analysis approach (i.e., threat-oriented,
asset/impact-oriented, vulnerability-oriented). For each assessable risk factor, the appendices
include three assessment scales (one qualitative and two semi-quantitative scales) with
correspondingly different representations.
4. What are the different risk assessment approaches?
ANS:
Risk, and its contributing factors, can be assessed in a variety of ways, including
quantitatively, qualitatively, or semi-quantitatively. Each risk assessment approach
considered by organizations has advantages and disadvantages.
A preferred approach (or situation-specific set of approaches) can be selected based on
organizational culture and, in particular, attitudes toward the concepts of uncertainty and risk
communication.
(1) Quantitative assessments typically employ a set of methods, principles, or rules for
assessing risk based on the use of numbers—where the meanings and proportionality of
values are maintained inside and outside the context of the assessment.
This type of assessment most effectively supports cost-benefit analyses of alternative risk
responses or courses of action. However, the meaning of the quantitative results may not
always be clear and may require interpretation and explanation—particularly to explain the
assumptions and constraints on using the results.
For example, organizations may typically ask if the numbers or results obtained in the risk
assessments are reliable or if the differences in the obtained values are meaningful or
insignificant. Additionally, the rigor of quantification is significantly lessened when
subjective determinations are buried within the quantitative assessments, or when significant
uncertainty surrounds the determination of values. The benefits of quantitative assessments
(in terms of the rigor, repeatability, and reproducibility of assessment results) can, in some
cases, be outweighed by the costs (in terms of the expert time and effort and the possible
deployment and use of tools required to make such assessments).
(2) In contrast to quantitative assessments, qualitative assessments typically employ a set of
methods, principles, or rules for assessing risk based on nonnumerical categories or levels
(e.g., very low, low, moderate, high, very high). This type of assessment supports
communicating risk results to decision makers.
However, the range of values in qualitative assessments is comparatively small in most cases,
making the relative prioritization or comparison within the set of reported risks difficult.
Additionally, unless each value is very clearly defined or is characterized by meaningful
examples, different experts relying on their individual experiences could produce
significantly different assessment results.
The repeatability and reproducibility of qualitative assessments are increased by the
annotation of assessed values (e.g., this value is high because of the following reasons) and
by the use of tables or other well-defined functions to combine qualitative values.
(3) Finally, semi-quantitative assessments typically employ a set of methods, principles, or
rules for assessing risk that uses bins, scales, or representative numbers whose values and
meanings are not maintained in other contexts.
This type of assessment can provide the benefits of quantitative and qualitative assessments.
The bins (e.g., 0-15, 16-35, 36-70, 71-85, 86-100) or scales (e.g., 1-10) translate easily into
qualitative terms that support risk communications for decision makers (e.g., a score of 95
can be interpreted as very high), while also allowing relative comparisons between values in
different bins or even within the same bin (e.g., the difference between risks scored 70 and 71
respectively is relatively insignificant, while the difference between risks scored 36 and 70 is
relatively significant).
The role of expert judgment in assigning values is more evident than in a purely quantitative
approach. Moreover, if the scales or sets of bins provide sufficient granularity, relative
prioritization among results is better supported than in a purely qualitative approach.
As in a quantitative approach, rigor is significantly lessened when subjective determinations
are buried within assessments, or when significant uncertainty surrounds a determination of
value. As with the nonnumeric categories or levels used in a well-founded qualitative
approach, each bin or range of values needs to be clearly defined and/or characterized by
meaningful examples.
Independent of the type of value scale selected, assessments make explicit the temporal
element of risk factors. For example, organizations can associate a specific time period with
assessments of likelihood of occurrence and assessments of impact severity.
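The bin-to-label translation in the semi-quantitative example above can be sketched in code. This is an illustrative aid only, not part of the NIST guidance; the bin boundaries and labels are taken directly from the example bins in the text (0-15, 16-35, 36-70, 71-85, 86-100).

```python
# Illustrative sketch: translating a 0-100 semi-quantitative risk score
# into a qualitative term, using the example bins from the text.

BINS = [
    (0, 15, "very low"),
    (16, 35, "low"),
    (36, 70, "moderate"),
    (71, 85, "high"),
    (86, 100, "very high"),
]

def qualitative_level(score: int) -> str:
    """Translate a semi-quantitative score into a qualitative term."""
    for low, high, label in BINS:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} outside the 0-100 assessment scale")

print(qualitative_level(95))  # very high
print(qualitative_level(70))  # moderate
print(qualitative_level(71))  # high
```

Note how scores of 70 and 71 land in adjacent bins even though the underlying values barely differ, which is exactly the within-bin versus across-bin comparison issue the text describes.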
5. What are the different risk analysis approaches?
ANS:
Analysis approaches differ with respect to the orientation or starting point of the risk
assessment, level of detail in the assessment, and how risks due to similar threat scenarios are
treated. An analysis approach can be:
(i) Threat-oriented;
(ii) Asset/impact-oriented; or
(iii) Vulnerability-oriented.
(1) A threat-oriented approach starts with the identification of threat sources and threat
events, and focuses on the development of threat scenarios; vulnerabilities are identified
in the context of threats, and for adversarial threats, impacts are identified based on
adversary intent.
(2) An asset/impact-oriented approach starts with the identification of impacts or
consequences of concern and critical assets, possibly using the results of mission or
business impact analyses, and identifying threat events that could lead to those impacts
or consequences and/or threat sources that could seek them.
(3) A vulnerability-oriented approach starts with a set of predisposing conditions or
exploitable weaknesses/deficiencies in organizational information systems or the
environments in which the systems operate, and identifies threat events that could
exercise those vulnerabilities together with the possible consequences of vulnerabilities
being exercised.
Each analysis approach takes into consideration the same risk factors, and thus entails
the same set of risk assessment activities, albeit in a different order.
Differences in the starting point of the risk assessment can potentially bias the results,
causing some risks not to be identified. Therefore, identification of risks from a second
orientation (e.g., complementing a threat-oriented analysis approach with an asset/impact-
oriented analysis approach) can improve the rigor and effectiveness of the analysis.
In addition to the orientation of the analysis approach, organizations can apply more rigorous
analysis techniques (e.g., graph-based analyses) to provide an effective way to account for the
many-to-many relationships between:
(i) threat sources and threat events (i.e., a single threat event can be caused by
multiple threat sources and a single threat source can cause multiple threat
events);
(ii) threat events and vulnerabilities (i.e., a single threat event can exploit multiple
vulnerabilities and a single vulnerability can be exploited by multiple threat
events); and
(iii) threat events and impacts/assets (i.e., a single threat event can affect multiple
assets or have multiple impacts, and a single asset can be affected by multiple
threat events).
Rigorous analysis approaches also provide a way to account for whether, in the time
frame for which risks are assessed, a specific adverse impact could occur (or a specific
asset could be harmed) at most once, or perhaps repeatedly, depending on the nature of
the impacts and on how organizations (including mission/business processes or
information systems) recover from such adverse impacts.
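The many-to-many relationships listed above can be sketched as simple adjacency maps. This is an illustrative aid, not part of NIST SP 800-30, and all of the threat-source, threat-event, and vulnerability names below are hypothetical examples.

```python
# Illustrative sketch: many-to-many relationships among threat sources,
# threat events, and vulnerabilities, as adjacency maps of sets.
# All names are hypothetical examples.

source_to_events = {
    "hacktivist": {"phishing", "ddos"},
    "insider": {"phishing"},  # one threat event, multiple threat sources
}
event_to_vulns = {
    "phishing": {"untrained-staff", "weak-mail-filter"},
    "ddos": {"single-point-gateway"},
}

def vulns_reachable_from(source: str) -> set:
    """Vulnerabilities a given threat source could exploit via its events."""
    vulns = set()
    for event in source_to_events.get(source, set()):
        vulns |= event_to_vulns.get(event, set())
    return vulns

print(sorted(vulns_reachable_from("hacktivist")))
```

A graph-based analysis of the kind the text mentions generalizes this idea: traversing the source-to-event and event-to-vulnerability edges surfaces exposure paths that a one-to-one threat-vulnerability pairing would miss.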
(1) Threats
A threat is any circumstance or event with the potential to adversely impact organizational
operations and assets, individuals, other organizations, or the Nation through an information
system via unauthorized access, destruction, disclosure, or modification of information,
and/or denial of service. Threat events are caused by threat sources. A threat source is
characterized as: (i) the intent and method targeted at the exploitation of a vulnerability; or
(ii) a situation and method that may accidentally exploit a vulnerability.
Risk models differ in the degree of detail and complexity with which threat events are
identified. When threat events are identified with great specificity, threat scenarios can be
modeled, developed, and analyzed. Threat events for cyber or physical attacks are
characterized by the tactics, techniques, and procedures (TTPs) employed by adversaries.
Understanding adversary-based threat events gives organizations insights into the capabilities
associated with certain threat sources. In addition, having greater knowledge about who is
carrying out the attacks gives organizations a better understanding of what adversaries desire
to gain by the attacks. Knowing the intent and targeting aspects of a potential attack helps
organizations narrow the set of threat events that are most relevant to consider.
Threat shifting is the response of adversaries to perceived safeguards and/or countermeasures
(i.e., security controls), in which adversaries change some characteristic of their
intent/targeting in order to avoid and/or overcome those safeguards/countermeasures. Threat
shifting can occur in one or more domains including:
(i) the time domain (e.g., a delay in an attack or illegal entry to conduct additional
surveillance);
(ii) the target domain (e.g., selecting a different target that is not as well protected);
(iii) the resource domain (e.g., adding resources to the attack in order to reduce
uncertainty or overcome safeguards and/or countermeasures); or
(iv) the attack planning/attack method domain (e.g., changing the attack weapon or
attack path).
Threat shifting is a natural consequence of a dynamic set of interactions between threat
sources and the types of organizational assets targeted. With more sophisticated threat
sources, it also tends to default to the path of least resistance to exploit particular
vulnerabilities, and the responses are not always predictable. In addition to the
safeguards and/or countermeasures implemented and the impact of a successful exploit of
an organizational vulnerability, another influence on threat shifting is the benefit to the
attacker. That perceived benefit on the attacker's side can also influence how much and
when threat shifting occurs.
(2) Vulnerabilities
Vulnerabilities are not identified only within information systems. Viewing information
systems in a broader context, vulnerabilities can be found in organizational governance
structures (e.g., the lack of effective risk management strategies and adequate risk framing,
poor intra-agency communications, inconsistent decisions about relative priorities of
missions/business functions, or misalignment of enterprise architecture to support
mission/business activities). Vulnerabilities can also be found in external relationships (e.g.,
dependencies on particular energy sources, supply chains, information technologies, and
telecommunications providers), mission/business processes (e.g., poorly defined processes or
processes that are not risk-aware), and enterprise/information security architectures (e.g.,
poor architectural decisions resulting in lack of diversity or resiliency in organizational
information systems).
In general, risks materialize as a result of a series of threat events, each of which takes
advantage of one or more vulnerabilities. Organizations define threat scenarios to describe
how the events caused by a threat source can contribute to or cause harm. Development of
threat scenarios is analytically useful, since some vulnerabilities may not be exposed to
exploitation unless and until other vulnerabilities have been exploited. Analysis that
illuminates how a set of vulnerabilities, taken together, could be exploited by one or more
threat events is therefore more useful than the analysis of individual vulnerabilities. In
addition, a threat scenario tells a story, and hence is useful for risk communication as well as
for analysis.
In addition to the vulnerabilities described above, organizations also consider
predisposing conditions.
(3) Likelihood
The likelihood of occurrence is a weighted risk factor based on an analysis of the probability
that a given threat is capable of exploiting a given vulnerability (or set of vulnerabilities). The
likelihood risk factor combines an estimate of the likelihood that the threat event will be
initiated with an estimate of the likelihood of impact (i.e., the likelihood that the threat event
results in adverse impacts). For adversarial threats, an assessment of likelihood of occurrence
is typically based on: (i) adversary intent; (ii) adversary capability; and (iii) adversary
targeting. For other than adversarial threat events, the likelihood of occurrence is estimated
using historical evidence, empirical data, or other factors. Note that the likelihood that a
threat event will be initiated or will occur is assessed with respect to a specific time frame
(e.g., the next six months, the next year, or the period until a specified milestone is reached).
If a threat event is almost certain to be initiated or occur in the (specified or implicit) time
frame, the risk assessment may take into consideration the estimated frequency of the event.
The likelihood of threat occurrence can also be based on the state of the organization
(including for example, its core mission/business processes, enterprise architecture,
information security architecture, information systems, and environments in which those
systems operate)—taking into consideration predisposing conditions and the presence and
effectiveness of deployed security controls to protect against unauthorized/undesirable
behavior, detect and limit damage, and/or maintain or restore mission/business capabilities.
The likelihood of impact addresses the probability (or possibility) that the threat event will
result in an adverse impact, regardless of the magnitude of harm that can be expected.
Organizations typically employ a three-step process to determine the overall likelihood of
threat events. First, organizations assess the likelihood that threat events will be initiated (for
adversarial threat events) or will occur (for non-adversarial threat events). Second,
organizations assess the likelihood that the threat events once initiated or occurring, will
result in adverse impacts or harm to organizational operations and assets, individuals, other
organizations, or the Nation. Finally, organizations assess the overall likelihood as a
combination of likelihood of initiation/occurrence and likelihood of resulting in adverse
impact.
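The three-step process above can be sketched using qualitative levels. This is an illustrative aid only; the combination rule shown (the overall likelihood is bounded by the weaker of the two estimates, since a threat event must both occur and result in harm) is one simple convention, and NIST SP 800-30 leaves the exact combination function to the organization.

```python
# Illustrative sketch of the three-step likelihood determination:
# (1) likelihood of initiation/occurrence, (2) likelihood of impact,
# (3) overall likelihood as a combination of the two.

LEVELS = ["very low", "low", "moderate", "high", "very high"]

def overall_likelihood(initiation: str, impact: str) -> str:
    """Combine likelihood of initiation/occurrence with likelihood of
    impact; here the overall value is the lower of the two (an assumed
    convention, not a NIST-mandated rule)."""
    return LEVELS[min(LEVELS.index(initiation), LEVELS.index(impact))]

print(overall_likelihood("high", "moderate"))   # moderate
print(overall_likelihood("low", "very high"))   # low
```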
Threat-vulnerability pairing (i.e., establishing a one-to-one relationship between threats and
vulnerabilities) may be undesirable when assessing likelihood at the mission/business
function level, and in many cases, can be problematic even at the information system level
due to the potentially large number of threats and vulnerabilities. This approach typically
drives the level of detail in identifying threat events and vulnerabilities, rather than allowing
organizations to make effective use of threat information and/or to identify threats at a level
of detail that is meaningful. Depending on the level of detail in threat specification, a given
threat event could exploit multiple vulnerabilities. In assessing likelihoods, organizations
examine vulnerabilities that threat events could exploit and also the mission/business function
susceptibility to events for which no security controls or viable implementations of security
controls exist (e.g., due to functional dependencies, particularly external dependencies). In
certain situations, the most effective way to reduce mission/business risk attributable to
information security risk is to redesign the mission/business processes so there are viable
work-arounds when information systems are compromised. Using the concept of threat
scenarios described above may help organizations overcome some of the limitations of
threat-vulnerability pairing.
(4) Impact
The level of impact from a threat event is the magnitude of harm that can be expected to
result from the consequences of unauthorized disclosure of information, unauthorized
modification of information, unauthorized destruction of information, or loss of information
or information system availability. Such harm can be experienced by a variety of
organizational and non-organizational stakeholders including, for example, heads of agencies,
mission and business owners, information owners/stewards, mission/business process
owners, information system owners, or individuals/groups in the public or private sectors
relying on the organization—in essence, anyone with a vested interest in the organization’s
operations, assets, or individuals, including other organizations in partnership with the
organization, or the Nation. Organizations make explicit: (i) the process used to conduct
impact determinations; (ii) assumptions related to impact determinations; (iii) sources and
methods for obtaining impact information; and (iv) the rationale for conclusions reached with
regard to impact determinations.
Organizations may explicitly define how established priorities and values guide the
identification of high-value assets and the potential adverse impacts to organizational
stakeholders. If such information is not defined, priorities and values related to identifying
targets of threat sources and associated organizational impacts can typically be derived from
strategic planning and policies. For example, security categorization levels indicate the
organizational impacts of compromising different types of information. Privacy Impact
Assessments and criticality levels (when defined as part of contingency planning or
Mission/Business Impact Analysis) indicate the adverse impacts of destruction, corruption, or
loss of accountability for information resources to organizations.
Strategic plans and policies also assert or imply the relative priorities of immediate or near-
term mission/business function accomplishment and long-term organizational viability
(which can be undermined by the loss of reputation or by sanctions resulting from the
compromise of sensitive information). Organizations can also consider the range of effects of
threat events including the relative size of the set of resources affected, when making final
impact determinations. Risk tolerance assumptions may state that threat events with an
impact below a specific value do not warrant further analysis.
(5) Risk
Figure 3 illustrates an example of a risk model including the key risk factors discussed above
and the relationship among the factors. Each of the risk factors is used in the risk assessment
process in Chapter Three.
As noted above, risk is a function of the likelihood of a threat event’s occurrence and
potential adverse impact should the event occur. This definition accommodates many types of
adverse impacts at all tiers in the risk management hierarchy described in Special Publication
800-39 (e.g., damage to image or reputation of the organization or financial loss at Tier 1;
inability to successfully execute a specific mission/business process at Tier 2; or the
resources expended in responding to an information system incident at Tier 3). It also
accommodates relationships among impacts (e.g., loss of current or future mission/business
effectiveness due to the loss of data confidentiality; loss of confidence in critical information
due to loss of data or system integrity; or unavailability or degradation of information or
information systems). This broad definition also allows risk to be represented as a single
value or as a vector (i.e., multiple values), in which different types of impacts are assessed
separately. For purposes of risk communication, risk is generally grouped according to the
types of adverse impacts (and possibly the time frames in which those impacts are likely to be
experienced).
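The definition of risk as a function of likelihood and impact, and its representation as either a single value or a vector, can be sketched with a small lookup table. This is an illustrative aid only: the table values below are an assumption for illustration, since each organization defines its own risk model.

```python
# Illustrative sketch: risk as a function of likelihood and impact,
# via a 3x3 qualitative lookup table (assumed values, not NIST's).

RISK_TABLE = {
    ("low", "low"): "low",        ("low", "moderate"): "low",
    ("low", "high"): "moderate",  ("moderate", "low"): "low",
    ("moderate", "moderate"): "moderate", ("moderate", "high"): "high",
    ("high", "low"): "moderate",  ("high", "moderate"): "high",
    ("high", "high"): "high",
}

def risk_level(likelihood: str, impact: str) -> str:
    """Look up risk from (likelihood, impact)."""
    return RISK_TABLE[(likelihood, impact)]

# Risk as a single value:
print(risk_level("high", "moderate"))  # high

# Risk as a vector, with different impact types assessed separately:
impacts = {"reputation": "moderate", "financial": "high"}
risk_vector = {kind: risk_level("high", sev) for kind, sev in impacts.items()}
print(risk_vector)
```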
(6) Aggregation
Organizations may use risk aggregation to roll up several discrete or lower-level risks into a
more general or higher-level risk. Organizations may also use risk aggregation to efficiently
manage the scope and scale of risk assessments involving multiple information systems and
multiple mission/business processes with specified relationships and dependencies among
those systems and processes. Risk aggregation, conducted primarily at Tiers 1 and 2 and
occasionally at Tier 3, assesses the overall risk to organizational operations, assets, and
individuals given the set of discrete risks.
When aggregating risk, organizations consider the relationship among various discrete risks.
For example, there may be a cause and effect relationship in that if one risk materializes,
another risk is more or less likely to materialize. If there is a direct or inverse relationship
among discrete risks, then the risks can be coupled (in a qualitative sense) or correlated (in a
quantitative sense) either in a positive or negative manner. Risk coupling or correlation (i.e.,
finding relationships among risks that increase or decrease the likelihood of any specific risk
materializing) can be done at Tiers 1, 2, or 3.
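The roll-up and coupling ideas above can be sketched in a few lines of Python. The likelihood and impact values, the 0-100 impact scale, and the expected-impact roll-up are all illustrative assumptions made for this example, not a method prescribed by the source:

```python
# Illustrative sketch of rolling up discrete risks and coupling related ones.
# All numbers here are invented for demonstration.

def adjust_likelihood(base, coupling):
    """Raise or lower a likelihood when a related risk materializes.
    coupling > 0 models a direct relationship, coupling < 0 an inverse one."""
    return max(0.0, min(1.0, base * (1.0 + coupling)))

def aggregate(risks):
    """Roll discrete (likelihood, impact) risks up into one overall score."""
    return sum(likelihood * impact for likelihood, impact in risks)

# Three discrete risks as (likelihood, impact on a 0-100 scale):
risks = [(0.2, 80), (0.5, 40), (0.1, 95)]
overall = aggregate(risks)               # about 45.5

# If risk 1 materializes and is directly coupled to risk 2 (+30%):
coupled = adjust_likelihood(0.5, 0.30)   # about 0.65
```

A negative `coupling` value would model the inverse relationship the text mentions, lowering the related risk's likelihood instead.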
(7) Uncertainty
Uncertainty is inherent in the evaluation of risk, due to such considerations as:
(i) limitations on the extent to which the future will resemble the past;
(ii) imperfect or incomplete knowledge of the threat (e.g., characteristics of
adversaries including tactics, techniques, and procedures);
(iii) undiscovered vulnerabilities in technologies or products; and
(iv) unrecognized dependencies, which can lead to unforeseen impacts.
Uncertainty about the value of specific risk factors can also be due to the step in the RMF or
phase in the system development life cycle at which a risk assessment is
performed. For example, at early phases in the system development life cycle, the
presence and effectiveness of security controls may be unknown, while at later
phases in the life cycle, the cost of evaluating control effectiveness may outweigh
the benefits in terms of more fully informed decision making. Finally, uncertainty
can be due to incomplete knowledge of the risks associated with other information
systems, mission/ business processes, services, common infrastructures, and/or
organizations. The degree of uncertainty in risk assessment results, due to these
different reasons, can be communicated in the form of the results (e.g., by
expressing results qualitatively, by providing ranges of values rather than single
values for identified risks, or by using visual representations of fuzzy regions
rather than points).
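As a small illustration of expressing uncertainty through ranges rather than point values, the sketch below combines an uncertain likelihood with an uncertain impact by multiplying interval endpoints (valid here because all values are non-negative). The interval bounds are invented:

```python
# Sketch of communicating assessment uncertainty as ranges, not points.
# The bounds below are illustrative assumptions.

def combine(likelihood, impact):
    """Multiply two (low, high) intervals to get a risk range.
    Endpoint multiplication suffices because all values are non-negative."""
    return (likelihood[0] * impact[0], likelihood[1] * impact[1])

likelihood = (0.1, 0.4)   # the analyst can only bound the likelihood
impact = (50, 90)         # impact on a 0-100 scale, also uncertain

risk_range = combine(likelihood, impact)   # roughly (5.0, 36.0)
```

Reporting the pair `(5.0, 36.0)` instead of a single number makes the degree of uncertainty explicit to decision makers.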
Introduction to the OCTAVE Approach
Using OCTAVE, an organization can:
• consider the relationships among critical assets, the threats to those assets, and
vulnerabilities (both organizational and technological) that can expose assets to threats
• evaluate risks in an operational context - how assets are used to conduct an organization's
business and how those assets are at risk due to security threats
• create a practice-based protection strategy for organizational improvement as well as risk
mitigation plans to reduce the risk to the organization's critical assets
A deep examination of incident response is beyond the scope of this guide, but following
these six steps when you respond to security incidents can help you manage them quickly and
efficiently:
Protect human life and people's safety. This should always be your first priority. For
example, if affected computers include life support systems, shutting them off may not be an
option; perhaps you could logically isolate the systems on the network by reconfiguring
routers and switches without disrupting their ability to help patients.
Contain the damage. Containing the harm that the attack caused helps to limit additional
damage. Protect important data, software, and hardware quickly. Minimizing disruption of
computing resources is an important consideration, but keeping systems up during an attack
may result in greater and more widespread problems in the long run. For example, if you
contract a worm in your environment, you could try to limit the damage by disconnecting
servers from the network. However, sometimes disconnecting servers can cause more harm
than good. Use your best judgment and your knowledge of your own network and systems to
make this determination. If you determine that there will be no adverse effects, or that any
adverse effects would be outweighed by the benefits of acting, containment should begin as
quickly as possible during a security incident by disconnecting from the network the systems
known to be affected. If you cannot contain the damage by isolating the servers, ensure that
you actively monitor the attacker’s actions in order to be able to remedy the damage as soon
as possible. And in any event, ensure that all log files are saved before shutting off any
server, in order to preserve the information contained in those files as evidence if you (or
your lawyers) need it later.
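The log-preservation advice above can be sketched as a small Python routine that archives log files and records their SHA-256 hashes so the copies can later be shown to be unaltered. The directory layout and the `*.log` glob are hypothetical; a real evidence-handling procedure should follow your organization's forensic policy:

```python
# Hedged sketch of "save all log files before shutting off any server".
# Paths and file patterns are assumptions; adapt to your environment.
import hashlib
import tarfile
from pathlib import Path

def preserve_logs(log_dir, evidence_dir):
    """Archive log files and write a SHA-256 manifest so the preserved
    copies can later be verified as unaltered (evidence integrity)."""
    log_dir, evidence_dir = Path(log_dir), Path(evidence_dir)
    evidence_dir.mkdir(parents=True, exist_ok=True)
    archive = evidence_dir / "logs.tar.gz"
    hashes = {}
    with tarfile.open(archive, "w:gz") as tar:
        for log in sorted(log_dir.glob("*.log")):
            tar.add(log, arcname=log.name)
            hashes[log.name] = hashlib.sha256(log.read_bytes()).hexdigest()
    # Write the hash manifest alongside the archive.
    manifest = evidence_dir / "SHA256SUMS"
    manifest.write_text("\n".join(f"{h}  {n}" for n, h in hashes.items()))
    return archive, hashes
```

Hashing at collection time matters because it establishes what the logs contained before any later handling.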
Assess the damage. Immediately make a duplicate of the hard disks in any servers that were
attacked and put those aside for forensic use later. Then assess the damage. You should begin
to determine the extent of the damage that the attack caused as soon as possible, right after
you contain the situation and duplicate the hard disks. This is important so that you can
restore the organization's operations as soon as possible while preserving a copy of the hard
disks for investigative purposes. If it is not possible to assess the damage in a timely manner,
you should implement a contingency plan so that normal business operations and productivity
can continue.
It is at this point that organizations may want to engage law enforcement regarding the
incident; however, you should establish and maintain working relationships with law
enforcement agencies that have jurisdiction over your organization's business before an
incident occurs so that when a serious problem arises you know whom to contact and how to
work with them. You should also advise your company’s legal department immediately, so
that they can determine whether a civil lawsuit can be brought against anyone as a result of
the damage.
Determine the cause of the damage. In order to ascertain the origin of the assault, it is
necessary to understand the resources at which the attack was aimed and what vulnerabilities
were exploited to gain access or disrupt services. Review the system configuration, patch
level, system logs, audit logs, and audit trails on both the systems that were directly affected
as well as network devices that route traffic to them. These reviews often help you to
discover where the attack originated in the system and what other resources were affected.
You should conduct this activity on the computer systems in place and not on the backed up
drives created in step 3. Those drives must be preserved intact for forensic purposes so that
law enforcement or your lawyers can use them to trace the perpetrators of the attack and
bring them to justice. If you need to create a backup for testing purposes to determine the
cause of the damage, create a second backup from your original system and leave the drives
created in step 3 unused.
Repair the damage. In most cases, it is very important that the damage be repaired as
quickly as possible to restore normal business operations and recover data lost during the
attack. The organization's business continuity plans and procedures should cover the
restoration strategy. The incident response team should also be available to handle the restore
and recovery process or to provide guidance on the process to the responsible team. During
recovery, contingency procedures are executed to limit the spread of the damage and isolate
it. Before returning repaired systems to service be careful that they are not reinfected
immediately by ensuring that you have mitigated whatever vulnerabilities were exploited
during the incident.
Review response and update policies. After the documentation and recovery phases are
complete, you should review the process thoroughly. Determine with your team the steps that
were executed successfully and what mistakes were made. In almost all cases, you will find
that your processes need to be modified to allow you to handle incidents better in the future.
You will inevitably find weaknesses in your incident response plan. This is the point of this
after-the-fact exercise—you are looking for opportunities for improvement. Any flaws should
prompt another round of the incident-response planning process so that you can handle future
incidents more smoothly.
This methodology is illustrated in the following diagram (Figure: Incident Response Process).
9. Explain proactive approach to risk management. What are the benefits over reactive
approach?
ANS:
Many organizations want an alternative to this reactive approach, one that seeks to reduce
the probability that security incidents will occur in the first place. Organizations that
effectively manage risk evolve toward a more proactive approach, but a proactive approach is
only part of the solution.
For an organization looking to understand its information security needs, OCTAVE is a
risk-based strategic assessment and planning technique for security. OCTAVE is self-directed,
meaning that people from the organization assume responsibility for setting the organization's
security strategy. The technique leverages people's knowledge of their organization's
security-related practices and processes to capture the current state of security practice
within the organization. Risks to the most critical assets are used to prioritize areas of
improvement and set the security strategy for the organization.
Unlike the typical technology-focused assessment, which is targeted at technological risk and
focused on tactical issues, OCTAVE is targeted at organizational risk and focused on
strategic, practice-related issues. It is a flexible evaluation that can be tailored for most
organizations.
When applying OCTAVE, a small team of people from the operational (or business) units
and the information technology (IT) department work together to address the security needs
of the organization, balancing the three key aspects illustrated in Figure 1: operational risk,
security practices, and technology.
The OCTAVE approach is driven by two of the aspects: operational risk and security
practices.
Technology is examined only in relation to security practices, enabling an organization to
refine the view of its current security practices. By using the OCTAVE approach, an
organization makes information-protection decisions based on risks to the confidentiality,
integrity, and availability of critical information-related assets. All aspects of risk (assets,
threats, vulnerabilities, and organizational impact) are factored into decision making,
enabling an organization to match a practice-based protection strategy to its security risks.
11. What are the various domains & corresponding processes of COBIT?
ANS: COBIT stands for “Control Objectives for Information and related Technology”.
COBIT is one of several frameworks from ISACA (Information Systems Audit and Control
Association), an international professional association affiliated with the International
Federation of Accountants (IFAC) and the IT Governance Institute (ITGI). Founded in 1969,
ISACA has more than 86,000 members in 160 countries and is a recognized worldwide leader
in IT governance, control, security and assurance.
COBIT is an IT governance framework and supporting toolset that allows managers to bridge
the gap between control requirements, technical issues and business risks. COBIT enables
clear policy development and good practice for IT control throughout organizations. COBIT
emphasizes regulatory compliance, helps organizations to increase the value attained from IT,
enables alignment and simplifies implementation of the COBIT framework.
COBIT uses a maturity model as a means of assessing the maturity of the processes described
in the domains. The model encompasses the following levels:
0) Non-existent
1) Initial / ad hoc
2) Repeatable but intuitive
3) Defined process
4) Managed and measurable
5) Optimized
COBIT is made up of a number of 'domains', 'processes' and 'activities'. Here they are:
COBIT 4.1 groups its processes into four domains: Plan and Organise (PO), Acquire and
Implement (AI), Deliver and Support (DS), and Monitor and Evaluate (ME). The processes of
the Deliver and Support domain, with their ITIL counterparts, are:
DS1 Define and Manage Service Levels (ITIL related: Service Level Management)
DS2 Manage Third-party Services
DS3 Manage Performance and Capacity (ITIL related: Capacity Management)
DS4 Ensure Continuous Service (ITIL related: IT Service Continuity Management)
DS5 Ensure Systems Security (ITIL related: Security Management)
DS6 Identify and Allocate Costs (ITIL related: Financial Management for IT Services)
DS7 Educate and Train Users
DS8 Manage Service Desk and Incidents (ITIL related: Incident Management)
DS9 Manage the Configuration (ITIL related: Configuration Management)
DS10 Manage Problems (ITIL related: Problem Management)
DS11 Manage Data (ITIL related: Availability Management)
COBIT also identifies four categories of IT resources to which these processes apply:
1) People
2) Applications
3) Information
4) Infrastructure
Risk, and its contributing factors, can be assessed in a variety of ways, including
quantitatively, qualitatively, or semi-quantitatively. Each risk assessment approach
considered by organizations has advantages and disadvantages.
The OCTAVE Method has been designed for large organizations that have a multi-layered
hierarchy and maintain their own computing infrastructure.
• Phase 3: Develop security strategy and mitigation plans (strategy and plan
development)—The analysis team establishes risks to the organization’s critical assets based
on analysis of the information gathered and decides what to do about them. The team creates
a protection strategy for the organisation and mitigation plans to address identified risks. The
team also determines the ‘next steps’ required for implementation and gains senior
management’s approval on the outcome of the whole process.
ANS: OCTAVE Allegro is focused on risk assessment in an organisational context, but offers
an alternative approach and attempts to improve an organisation’s ability to perform risk
assessment in a more efficient and effective manner. One of the insights acquired from earlier
experiences has been the need to move to a more information-centric risk assessment.
One of the guiding philosophies of Allegro has been that when information assets are the
focus of the security risk assessment, all other related assets are considered ‘information
containers’, storing, processing or transporting the information assets. Information containers
can be people (since people access information and gain knowledge), objects (e.g., a piece of
paper) or technology (e.g., a database). Thus, threats to information assets are analysed by considering
where they live and effectively limiting the number and types of assets brought into the
process.
Several key drivers, including the move toward a more information-centric assessment described above, led the SEI to formulate this new methodology.
The OCTAVE Allegro approach comprises eight processes and is organised into four phases:
• Phase 1: Establish drivers—The organisation develops risk measurement criteria consistent
with its organisational drivers.
• Phase 2: Profile assets—Information assets that are determined to be critical are identified
and profiled. This profiling process establishes clear boundaries for the asset; identifies its
security requirements; and identifies all of the locations where the asset is stored, transported
or processed.
• Phase 3: Identify threats—Threats to each information asset are identified in the context of
the containers where it is stored, transported or processed.
• Phase 4: Identify and mitigate risks—Risks to information assets are identified and
analysed and the development of mitigation approaches commences.
15. What are the various risk framing components & explain relationship among them?
Ans:
Risk is a measure of the extent to which an entity is threatened by a potential circumstance or
event, and is typically a function of: (i) the adverse impacts that would arise if the
circumstance or event occurs; and (ii) the likelihood of occurrence. Information security risks
are those risks that arise from the loss of confidentiality, integrity, or availability of
information or information systems and reflect the potential adverse impacts to organizational
operations (i.e., mission, functions, image, or reputation), organizational assets, individuals,
other organizations, and the Nation. Risk assessment is the process of identifying, estimating,
and prioritizing information security risks.
Assessing risk requires the careful analysis of threat and vulnerability information to
determine the extent to which circumstances or events could adversely impact an
organization and the likelihood that such circumstances or events will occur.
A risk assessment methodology typically includes: a risk assessment process, an explicit risk
model, defining key terms and assessable risk factors and the relationships among the factors;
an assessment approach (e.g., quantitative, qualitative, or semi-quantitative), specifying the
range of values those risk factors can assume during the risk assessment and how
combinations of risk factors are identified/analyzed so that values of those factors can be
functionally combined to evaluate risk; and an analysis approach (e.g., threat oriented,
asset/impact-oriented, or vulnerability-oriented), describing how combinations of risk factors
are identified/analyzed to ensure adequate coverage of the problem space at a consistent level
of detail. Risk assessment methodologies are defined by organizations and are a component
of the risk management strategy developed during the risk framing step of the risk
management process.
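A toy semi-quantitative scheme, for illustration only, maps qualitative levels to representative numbers, combines them, and maps the result back to a rating. The scale values and thresholds below are invented, not drawn from any standard:

```python
# Illustrative semi-quantitative risk rating. The numeric bins and
# thresholds are assumptions made for this sketch.

LIKELIHOOD = {"low": 0.1, "moderate": 0.5, "high": 0.9}
IMPACT = {"low": 10, "moderate": 50, "high": 90}   # 0-100 scale

def rate(likelihood, impact):
    """Combine qualitative levels numerically, then bin back to a rating."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score < 5:
        return "low"
    if score < 40:
        return "moderate"
    return "high"

rate("high", "high")   # "high"
rate("low", "low")     # "low"
```

The attraction of the semi-quantitative style is that analysts still speak in qualitative terms while the combination step stays reproducible.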
Figure 2 illustrates the fundamental components in organizational risk frames and the
relationships among those components.
16. How are the values of asset derived in quantitative risk assessment approach?
Ans:
In quantitative risk assessments, the goal is to try to calculate objective numeric values for
each of the components gathered during the risk assessment and cost-benefit analysis. For
example, you estimate the true value of each business asset in terms of what it would cost to
replace it, what it would cost in terms of lost productivity, what it would cost in terms of
brand reputation, and other direct and indirect business values. You endeavor to use the same
objectivity when computing asset exposure, cost of controls, and all of the other values that
you identify during the risk management process.
Valuing Assets
Determining the monetary value of an asset is an important part of security risk management.
Business managers often rely on the value of an asset to guide them in determining how
much money and time they should spend securing it. Many organizations maintain a list of
asset values (AVs) as part of their business continuity plans. Note, though, that the numbers
calculated are actually subjective estimates: no objective tools or methods for determining
the value of an asset exist. To assign a value to an asset, calculate the following three
primary factors:
(1) The overall value of the asset to your organization:
Calculate or estimate the asset’s value in direct financial terms. Consider a simplified
example of the impact of temporary disruption of an e-commerce Web site that normally runs
seven days a week, 24 hours a day, generating an average of $2,000 per hour in revenue from
customer orders. You can state with confidence that the annual value of the Web site in terms
of sales revenue is $17,520,000.
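The arithmetic behind that figure, plus a hypothetical extension for costing an outage of a given length, can be checked in a few lines. The `downtime_cost` helper is an illustrative addition, not part of the worked example:

```python
# Reproducing the worked example: a site earning an average of $2,000
# per hour, 24 hours a day, 365 days a year.

HOURLY_REVENUE = 2_000
annual_value = HOURLY_REVENUE * 24 * 365   # $17,520,000 per year

def downtime_cost(hours_down):
    """Direct revenue lost during an outage. Ignores indirect losses
    (reputation, permanently lost users) that the text notes are much
    harder to quantify."""
    return HOURLY_REVENUE * hours_down

downtime_cost(8)   # an 8-hour outage costs $16,000 in direct revenue
```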
In practice, hourly revenue is rarely constant; it varies with the time of day, seasonality,
marketing campaigns, and other factors. Additionally, some customers may find an
alternative Web site that they prefer to the original, so the Web site may have some
permanent loss of users. Calculating the revenue loss is actually quite complex if you want to
be precise and consider all potential types of loss.
UNIT: 2
1. What are the various uses of IDPS technologies?
Ans:
Organizations should consider using multiple types of IDPS technologies to achieve more
comprehensive and accurate detection and prevention of malicious activity.
The four primary types of IDPS technologies—network-based, wireless, NBA, and
host-based—each offer fundamentally different information gathering, logging, detection, and
prevention capabilities. Each technology type offers benefits over the others, such as
detecting some events that the others cannot and detecting some events with significantly
greater accuracy than the other technologies.
In many environments, a robust IDPS solution cannot be achieved without using multiple
types of IDPS technologies. For most environments, a combination of network-based and
host-based IDPS technologies is needed for an effective IDPS solution. Wireless IDPS
technologies may also be needed if the organization determines that its wireless networks
need additional monitoring or if the organization wants to ensure that rogue wireless
networks are not in use in the organization’s facilities. NBA technologies can also be
deployed if organizations desire additional detection capabilities for denial of service attacks,
worms, and other threats that NBAs are particularly well-suited to detecting. Organizations
should consider the different capabilities of each technology type along with other
cost-benefit information when selecting IDPS technologies.
SIEM software complements IDPS technologies in several ways, including correlating events
logged by different technologies, displaying data from many event sources, and providing
supporting information from other sources to help users verify the accuracy of IDPS alerts.
1) Signature-Based Detection
A signature is a pattern that corresponds to a known threat. Signature-based detection is the
process of comparing signatures against observed events to identify possible incidents. It is
very effective at detecting known threats but largely ineffective at detecting previously
unknown threats.
2) Anomaly-Based Detection
Anomaly-based detection is the process of comparing definitions of what activity is
considered normal against observed events to identify significant deviations. An IDPS using
anomaly-based detection has profiles that represent the normal behavior of such things as
users, hosts, network connections, or applications.
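The two detection methods can be contrasted in a toy sketch. The signature strings, the traffic samples, and the anomaly threshold are all invented for illustration; real IDPS signatures and baselines are far richer:

```python
# Toy contrast of signature-based vs anomaly-based detection.
# Signatures, sample payloads, and the threshold are assumptions.

SIGNATURES = ["../../etc/passwd", "' OR 1=1"]   # patterns for known attacks

def signature_detect(payload):
    """Flag payloads that contain a known-bad pattern (known threats only)."""
    return any(sig in payload for sig in SIGNATURES)

def anomaly_detect(request_rate, baseline_rate, threshold=3.0):
    """Flag activity that deviates far from the learned normal profile."""
    return request_rate > baseline_rate * threshold

signature_detect("GET /download?file=../../etc/passwd")   # True (known threat)
signature_detect("GET /download?file=report.pdf")         # False
anomaly_detect(request_rate=900, baseline_rate=100)       # True (9x normal)
```

The sketch also shows the trade-off: the signature matcher cannot flag a brand-new attack string, while the anomaly check can misfire on unusual but benign load.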
There are many types of IDPS technologies. For the purposes of this document, they are
divided into the following four groups based on the type of events that they monitor and the
ways in which they are deployed:
Network-Based, which monitors network traffic for particular network segments or devices
and analyzes the network and application protocol activity to identify suspicious activity. It
can identify many different types of events of interest. It is most commonly deployed at a
boundary between networks, such as in proximity to border firewalls or routers, virtual
private network (VPN) servers, remote access servers, and wireless networks. Section 4
contains extensive information on network-based IDPS technologies.
Wireless, which monitors wireless network traffic and analyzes its wireless networking
protocols to identify suspicious activity involving the protocols themselves. It cannot identify
suspicious activity in the application or higher-layer network protocols (e.g., TCP, UDP) that
the wireless network traffic is transferring. It is most commonly deployed within range of an
organization’s wireless network to monitor it, but can also be deployed to locations where
unauthorized wireless networking could be occurring.
Network Behavior Analysis (NBA), which examines network traffic to identify threats that
generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain
forms of malware (e.g., worms, backdoors), and policy violations (e.g., a client system
providing network services to other systems). NBA systems are most often deployed to
monitor flows on an organization’s internal networks, and are also sometimes deployed
where they can monitor flows between an organization’s networks and external networks
(e.g., the Internet, business partners’ networks).
Host-Based, which monitors the characteristics of a single host and the events occurring
within that host for suspicious activity. Examples of the types of characteristics a host-based
IDPS might monitor are network traffic (only for that host), system logs, running processes,
application activity, file access and modification, and system and application configuration
changes. Host-based IDPSs are most commonly deployed on critical hosts such as publicly
accessible servers and servers containing sensitive information.
Sensor or Agent. Sensors and agents monitor and analyze activity. The term sensor is
typically used for IDPSs that monitor networks, including network-based, wireless, and
network behavior analysis technologies. The term agent is typically used for host-based IDPS
technologies.
Management Server. A management server is a centralized device that receives information
from the sensors or agents and manages them. Some management servers perform analysis on
the event information that the sensors or agents provide and can identify events that the
individual sensors or agents cannot. Matching event information from multiple sensors or
agents, such as finding events triggered by the same IP address, is known as correlation.
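The correlation idea can be sketched as grouping events from different sensors by source IP address. The event-record format and sensor names are invented for illustration:

```python
# Minimal sketch of event correlation: IPs reported by more than one
# sensor are correlation candidates. Records below are invented.
from collections import defaultdict

def correlate_by_ip(events):
    """Group events by source IP; keep IPs seen by multiple sensors."""
    by_ip = defaultdict(list)
    for event in events:
        by_ip[event["src_ip"]].append(event["sensor"])
    return {ip: sensors for ip, sensors in by_ip.items()
            if len(set(sensors)) > 1}

events = [
    {"sensor": "net-1", "src_ip": "203.0.113.9"},
    {"sensor": "host-7", "src_ip": "203.0.113.9"},
    {"sensor": "net-1", "src_ip": "198.51.100.2"},
]
correlate_by_ip(events)   # {"203.0.113.9": ["net-1", "host-7"]}
```

An address that trips both a network sensor and a host agent, as here, is a much stronger signal than either alert alone.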
Management servers are available as both appliance and software-only products. Some small
IDPS deployments do not use any management servers, but most IDPS deployments do. In
larger IDPS deployments, there are often multiple management servers, and in some cases
there are two tiers of management servers.
Console. A console is a program that provides an interface for the IDPS’s users and
administrators. Console software is typically installed onto standard desktop or laptop
computers. Some consoles are used for IDPS administration only, such as configuring sensors
or agents and applying software updates, while other consoles are used strictly for monitoring
and analysis. Some IDPS consoles provide both administration and monitoring capabilities.
Software Only. Some vendors sell sensor software without an appliance. Administrators can
install the software onto hosts that meet certain specifications. The sensor software might
include a customized OS, or it might be installed onto a standard OS just as any other
application would.
Most IDPS technologies can provide a wide variety of security capabilities. The common
security capabilities are divided into four categories: information gathering, logging,
detection, and prevention.
2. Logging Capabilities
IDPSs typically perform extensive logging of data related to detected events. This data can
be used to confirm the validity of alerts, investigate incidents, and correlate events between
the IDPS and other logging sources.
Data fields commonly used by IDPSs include event date and time, event type, importance
rating (e.g., priority, severity, impact, confidence), and prevention action performed (if any).
Specific types of IDPSs log additional data fields, such as network-based IDPSs performing
packet captures and host-based IDPSs recording user IDs.
IDPS technologies typically permit administrators to store logs locally and send copies of
logs to centralized logging servers (e.g., syslog, security information and event management
software). Generally, logs should be stored both locally and centrally to support the integrity
and availability of the data. Also, IDPSs should have their clocks synchronized using the
Network Time Protocol (NTP) or through frequent manual adjustments so that their log
entries have accurate timestamps.
3. Detection Capabilities
IDPS technologies typically offer extensive, broad detection capabilities. Most products use
a combination of detection techniques, which generally supports more accurate detection and
more flexibility in tuning and customization. Technologies vary widely in their tuning and
customization capabilities.
Typically, the more powerful a product’s tuning and customization capabilities are, the more
its detection accuracy can be improved from the default configuration. Organizations should
carefully consider the tuning and customization capabilities of IDPS technologies when
evaluating products.
(2) Blacklists and Whitelists: A blacklist is a list of discrete entities, such as hosts or
applications, that have been previously determined to be associated with malicious activity.
A whitelist is a list of discrete entities that are known to be benign. Whitelists are typically
used on a granular basis to reduce or ignore false positives involving known benign activity
from trusted hosts. Whitelists and blacklists are most commonly used in signature-based
detection and stateful protocol analysis.
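Whitelist handling might be sketched like this, with invented host addresses and alert records; a real IDPS applies whitelists per signature or per alert type rather than globally:

```python
# Hedged sketch of whitelist filtering: alerts from known benign hosts
# are suppressed to reduce false positives. Addresses are illustrative.

WHITELIST = {"192.0.2.10", "192.0.2.11"}   # e.g., trusted internal scanners

def filter_alerts(alerts):
    """Drop alerts whose source is a whitelisted (known benign) host."""
    return [a for a in alerts if a["src_ip"] not in WHITELIST]

alerts = [
    {"src_ip": "192.0.2.10", "sig": "port-scan"},   # benign vulnerability scanner
    {"src_ip": "203.0.113.5", "sig": "port-scan"},  # unknown external host
]
filter_alerts(alerts)   # only the 203.0.113.5 alert remains
```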
(3) Alert Settings: Most IDPS technologies allow administrators to customize each alert
type. Examples of actions that can be performed on an alert type include the following:
– Toggling it on or off
– Setting a default priority or severity level
– Specifying what information should be recorded and what notification methods (e.g.,
e-mail, pager) should be used
– Specifying which prevention capabilities should be used.
Some products also suppress alerts if an attacker generates many alerts in a short period of
time, and may also temporarily ignore all future traffic from the attacker. This is to prevent
the IDPS from being overwhelmed by alerts.
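A minimal sketch of such flood suppression might track recent alerts per source and drop further ones once a threshold is exceeded within a time window. The threshold and window values are made-up tuning parameters:

```python
# Illustrative alert-flood suppression. Threshold and window are
# invented tuning parameters, not defaults of any real product.
from collections import defaultdict

class AlertSuppressor:
    def __init__(self, threshold=5, window=60):
        self.threshold = threshold           # max alerts per window per source
        self.window = window                 # window length in seconds
        self.seen = defaultdict(list)        # source -> recent alert times

    def allow(self, src_ip, now):
        """Return True to raise the alert, False to suppress it."""
        recent = [t for t in self.seen[src_ip] if now - t < self.window]
        self.seen[src_ip] = recent
        if len(recent) >= self.threshold:
            return False
        recent.append(now)
        return True

s = AlertSuppressor(threshold=3, window=60)
[s.allow("198.51.100.7", t) for t in range(5)]
# [True, True, True, False, False]
```

Once the window passes, alerts from the same source are raised again, matching the "temporarily ignore" behaviour described above.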
(4) Code Viewing and Editing: Some IDPS technologies permit administrators to see some or
all of the detection-related code. Viewing the code can help analysts to determine why
particular alerts were generated, and thereby help to validate alerts and identify false
positives. The ability to edit all detection-related code and write new code (e.g., new
signatures) is necessary to fully customize certain types of detection capabilities.
Editing the code requires programming and intrusion detection skills; also, some IDPSs use
proprietary programming languages, which would necessitate the programmer learning a new
language. Bugs introduced into the code during the customization process could cause the
IDPS to function incorrectly or fail altogether, so administrators should treat code
customization as they would any other alteration of production systems’ code.
Administrators should review tuning and customizations periodically to ensure that they are
still accurate. For example, whitelists and blacklists should be checked regularly and all
entries validated to ensure that they are still accurate and necessary. Thresholds and alert
settings might need to be adjusted periodically to compensate for changes in the environment
and in threats. Edits to detection code might need to be replicated whenever the product is
updated (e.g., patched, upgraded). Administrators should also ensure that any products
collecting baselines for anomaly-based detection have their baselines rebuilt periodically as
needed to support accurate detection.
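A minimal sketch of what "rebuilding a baseline" means for anomaly-based detection, assuming a single numeric metric such as packets per second; the function names and the 3-sigma threshold are illustrative:

```python
import statistics

def build_baseline(samples):
    """Baseline = mean and standard deviation of an observed metric.
    Rebuilding the baseline periodically just means recomputing this
    over fresh observations of normal activity."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag values more than k standard deviations from the baseline mean."""
    mean, stdev = baseline
    return abs(value - mean) > k * stdev

old = build_baseline([100, 110, 90, 105, 95])   # e.g. last period's traffic rates
print(is_anomalous(180, old))   # True: far outside the old baseline
print(is_anomalous(102, old))   # False: within normal variation
```

If the environment changes (say, typical traffic doubles), detection stays accurate only after `build_baseline` is run again on recent data, which is exactly the periodic rebuild the text recommends.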
4. Prevention Capabilities
Most IDPSs offer multiple prevention capabilities; the specific capabilities vary by IDPS
technology type. IDPSs usually allow administrators to specify the prevention capability
configuration for each type of alert. This usually includes enabling or disabling prevention,
as well as specifying which type of prevention capability should be used. Some IDPS sensors
have a learning or simulation mode that suppresses all prevention actions and instead
indicates when a prevention action would have been performed. This allows administrators
to monitor and fine-tune the configuration of the prevention capabilities before enabling
prevention actions, which reduces the risk of inadvertently blocking benign activity.
Compiled by Shetty College and UPG college M.S.C.I.T. students 2015 batch
Prof. Sohrab Vakharia Information Security Management
(only for private circulation; notes compiled from various resources)
8. What are the various types of sensors used in network based IDPS System?
ANS :
Sensors for network-based IDPSs are available in two formats:
1. Appliance: An appliance-based sensor consists of specialized hardware and sensor
software. The hardware is typically optimized for sensor use, including specialized NICs
and NIC drivers for efficient capture of packets, as well as processors or other hardware
components that assist in analysis.
2. Software Only: Some vendors sell sensor software without an appliance. Administrators
can install the software onto hosts that meet certain specifications. The sensor software
might include a customized OS, or it might be installed onto a standard OS just as any other
application would.
Firewalls are devices or programs that control the flow of network traffic between networks
or hosts that employ differing security postures. A firewall is a security system that controls
incoming and outgoing network traffic based on a set of rules, and is used to prevent
unauthorized users from accessing private networks connected to the Internet.
The most basic feature of a firewall is the packet filter. Older firewalls that were only packet
filters were essentially routing devices that provided access control functionality for host
addresses and communication sessions.
These devices, also known as stateless inspection firewalls, do not keep track of the state of
each flow of traffic that passes through the firewall; this means, for example, that they cannot
associate multiple requests within a single session with each other.
Packet filtering is at the core of most modern firewalls, but there are few firewalls sold today
that only do stateless packet filtering. Unlike more advanced filters, packet filters are not
concerned about the content of packets. Their access control functionality is governed by a
set of directives referred to as a ruleset.
Packet filtering capabilities are built into most operating systems and devices capable of
routing; the most common example of a pure packet filtering device is a network router that
employs access control lists.
In their most basic form, firewalls with packet filters operate at the network layer. This
provides network access control based on several pieces of information contained in a packet,
including:
(1) The packet’s source IP address—the address of the host from which the packet originated
(such as 192.168.1.1)
(2) The packet’s destination address—the address of the host the packet is trying to reach
(e.g., 192.168.2.1)
(3) The network or transport protocol being used to communicate between source and
destination hosts, such as TCP, UDP, or ICMP
(4) Possibly some characteristics of the transport layer communications sessions, such as
session source and destination ports (e.g., TCP 80 for the destination port belonging to a
web server, TCP 1320 for the source port belonging to a personal computer accessing the
server)
(5) The interface being traversed by the packet, and its direction (inbound or outbound).
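The five pieces of information above can be combined into a toy packet-filter ruleset. This is only a sketch, not any vendor's rule syntax; the addresses, ports, and first-match-wins behavior with a default deny are illustrative assumptions:

```python
from ipaddress import ip_address, ip_network

# A ruleset is an ordered list of directives; the first match wins,
# and anything unmatched falls through to a default deny.
RULES = [
    # (source net,        destination net,    proto, dst port, action)
    ("any",               "192.168.2.0/24",   "tcp",  80,      "permit"),  # web server
    ("192.168.1.0/24",    "any",              "udp",  53,      "permit"),  # DNS out
]

def match(field, value, kind):
    if field == "any":
        return True
    if kind == "net":
        return ip_address(value) in ip_network(field)
    return field == value

def filter_packet(src, dst, proto, dport):
    for rsrc, rdst, rproto, rport, action in RULES:
        if (match(rsrc, src, "net") and match(rdst, dst, "net")
                and rproto == proto and rport == dport):
            return action
    return "deny"   # default deny

print(filter_packet("10.0.0.5", "192.168.2.1", "tcp", 80))   # permit
print(filter_packet("10.0.0.5", "192.168.2.1", "tcp", 23))   # deny
```

Note that nothing here looks inside the packet payload, which is exactly the limitation of stateless filtering the text goes on to describe.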
Filtering inbound traffic is known as ingress filtering. Outgoing traffic can also be filtered, a
process referred to as egress filtering. Here, organizations can implement restrictions on their
internal traffic, such as blocking the use of external file transfer protocol (FTP) servers.
Organizations should only permit outbound traffic that uses the source IP addresses in use by
the organization—a process that helps block traffic with spoofed addresses from leaking onto
other networks.
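The egress anti-spoofing rule described above can be sketched with Python's ipaddress module; the organization's prefixes shown are placeholders:

```python
from ipaddress import ip_address, ip_network

# The organization's own address ranges (illustrative). Outbound packets
# whose source address is not in these ranges are likely spoofed.
ORG_PREFIXES = [ip_network("192.168.0.0/16"), ip_network("203.0.113.0/24")]

def egress_allowed(src_ip):
    """Egress filter: only permit outbound traffic that uses a source
    IP address actually in use by the organization."""
    return any(ip_address(src_ip) in net for net in ORG_PREFIXES)

print(egress_allowed("192.168.4.7"))   # True: our address space
print(egress_allowed("8.8.8.8"))       # False: spoofed source, blocked
```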
Stateless packet filters are generally vulnerable to attacks and exploits that take advantage of
problems within the TCP/IP specification and protocol stack. For example, many packet
filters are unable to detect when a packet’s network layer addressing information has been
spoofed or otherwise altered.
Spoofing attacks, such as using incorrect addresses in the packet headers, are generally
employed by intruders to bypass the security controls implemented in a firewall platform.
Firewalls that operate at higher layers can thwart some spoofing attacks by verifying that a
session is established, or by authenticating users before allowing traffic to pass. Because of
this, most firewalls that use packet filters also maintain some state information for the packets
that traverse the firewall.
Some packet filters can specifically filter packets that are fragmented.
Packet fragmentation is allowed by the TCP/IP specifications and is encouraged in situations
where it is needed. However, packet fragmentation has been used to make some attacks
harder to detect (by placing them within fragmented packets), and unusual fragmentation has
also been used as a form of attack. For example, some network based attacks have used
packets that should not exist in normal communications, such as sending some fragments of a
packet but not the first fragment, or sending packet fragments that overlap each other. To
prevent the use of fragmented packets in attacks, some firewalls have been configured to
block fragmented packets.
Some firewalls can reassemble fragments before passing them to the inside network, although
this requires additional firewall resources, particularly memory.
Firewalls that have this reassembly feature must implement it carefully; otherwise someone
can readily mount a denial-of-service attack. Choosing whether to block, reassemble, or pass
fragmented packets is a tradeoff between overall network interoperability and full system
security.
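The two fragmentation anomalies mentioned above, a stream with no first fragment and fragments that overlap each other, can be sketched as a simple check. The tuple representation of a fragment is an assumption made for illustration:

```python
def fragments_suspicious(fragments):
    """fragments: list of (offset, length, more_flag) tuples describing
    the fragments seen for one datagram. Flags the two anomalies from
    the text: no first fragment (offset 0 missing) and overlapping
    fragments."""
    frags = sorted(fragments)
    if not frags or frags[0][0] != 0:
        return True                      # later fragments but no first one
    end = 0
    for offset, length, _more in frags:
        if offset < end:
            return True                  # this fragment overlaps the previous
        end = offset + length
    return False

print(fragments_suspicious([(8, 8, False)]))                # True: first fragment missing
print(fragments_suspicious([(0, 8, True), (4, 8, False)]))  # True: overlapping fragments
print(fragments_suspicious([(0, 8, True), (8, 8, False)]))  # False: normal fragmentation
```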
Stateful inspection improves on the functions of packet filters by tracking the state of
connections and blocking packets that deviate from the expected state. This is accomplished
by incorporating greater awareness of the transport layer.
As with packet filtering, stateful inspection intercepts packets at the network layer and
inspects them to see if they are permitted by an existing firewall rule, but unlike packet
filtering, stateful inspection keeps track of each connection in a state table. While the details
of state table entries vary by firewall product, they typically include source IP address,
destination IP address, port numbers, and connection state information.
Three major states exist for TCP traffic—connection establishment, usage, and termination
(which refers to both an endpoint requesting that a connection be closed and a connection
with a long period of inactivity.)
Stateful inspection in a firewall examines certain values in the TCP headers to monitor the
state of each connection. Each new packet is compared by the firewall to the firewall’s state
table to determine if the packet’s state contradicts its expected state.
For example, an attacker could generate a packet with a header indicating it is part of an
established connection, in hopes it will pass through a firewall. If the firewall uses stateful
inspection, it will first verify that the packet is part of an established connection listed in the
state table.
In the simplest case, a firewall will allow through any packet that seems to be part of an open
connection (or even a connection that is not yet fully established). However, many firewalls
are more cognizant of the state machines for protocols such as TCP and UDP, and they will
block packets that do not adhere strictly to the appropriate state machine.
For example, it is common for firewalls to check attributes such as TCP sequence numbers
and reject packets that are out of sequence.
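A toy state table illustrating the idea: a packet claiming to belong to a connection the firewall has never seen is dropped, while a proper SYN opens a new entry. The states and flag handling here are heavily simplified compared to a real TCP state machine:

```python
# Sketch of a stateful inspection state table for TCP. Each connection
# is keyed by its addresses and ports and moves through the three major
# states the text mentions. Real firewalls also track sequence numbers,
# timeouts, and more.
NEW, ESTABLISHED, CLOSING = "establishment", "usage", "termination"

state_table = {}   # (src, sport, dst, dport) -> state

def inspect(src, sport, dst, dport, flags):
    key = (src, sport, dst, dport)
    if key not in state_table:
        if flags == "SYN":                 # legitimate connection start
            state_table[key] = NEW
            return "allow"
        return "drop"   # claims to be part of a connection we never saw
    if flags == "ACK" and state_table[key] == NEW:
        state_table[key] = ESTABLISHED
    if flags in ("FIN", "RST"):
        state_table[key] = CLOSING
    return "allow"

# A bare ACK with no prior SYN contradicts the expected state: dropped.
print(inspect("1.2.3.4", 1320, "5.6.7.8", 80, "ACK"))   # drop
print(inspect("1.2.3.4", 1320, "5.6.7.8", 80, "SYN"))   # allow
```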
Application firewalls add basic intrusion detection capability to stateful inspection: an
inspection engine that analyzes protocols at the application layer to compare vendor-
developed profiles of benign protocol activity against observed events to identify deviations.
This allows a firewall to allow or deny access based on how an application is running over
the network. For instance, an application firewall can determine if an email message contains
a type of attachment that the organization does not permit (such as an executable file), or if
instant messaging (IM) is being used over port 80 (typically used for HTTP). Another feature
is that it can block connections over which specific actions are being performed (e.g., users
could be prevented from using the FTP “put” command, which allows users to write files to
the FTP server). This feature can also be used to allow or deny web pages that contain
particular types of active content, such as Java or ActiveX, or that have SSL certificates
signed by a particular certificate authority (CA), such as a compromised or revoked CA.
Application firewalls can enable the identification of unexpected sequences of commands,
such as issuing the same command repeatedly or issuing a command that was not preceded
by another command on which it is dependent. These suspicious commands often originate
from buffer overflow attacks, DoS attacks, malware, and other forms of attack carried out
within application protocols such as HTTP. Another common feature is input validation for
individual commands, such as minimum and maximum lengths for arguments. For example,
a username argument with a length of 1000 characters is suspicious—even more so if it
contains binary data. Application firewalls are available for many common protocols
including HTTP, database (such as SQL), email (SMTP, Post Office Protocol [POP], and
Internet Message Access Protocol [IMAP])3, voice over IP (VoIP), and Extensible Markup
Language (XML).
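The input-validation feature can be sketched for the username example in the text; the length limit and printable-ASCII rule are illustrative assumptions, not from any product:

```python
def valid_username_arg(arg, max_len=64):
    """Input validation as an application firewall might apply it to a
    single command argument: enforce a length limit and reject binary
    data."""
    if len(arg) > max_len:
        return False                      # e.g. a 1000-character username
    return all(32 <= ord(c) < 127 for c in arg)   # printable ASCII only

print(valid_username_arg("alice"))          # True
print(valid_username_arg("A" * 1000))       # False: overlong, likely an overflow attempt
print(valid_username_arg("bob\x00\x90"))    # False: binary data in a username
```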
Another feature found in some application firewalls involves enforcing application state
machines, which are essentially checks on the traffic’s compliance to the standard for the
protocol in question. This compliance checking, sometimes called “RFC compliance” because
most protocols are defined in RFCs issued by the Internet Engineering Task Force (IETF),
can be a mixed blessing. Many products implement protocols in ways that almost, but not
completely, match the specification, so it is usually necessary to let such implementations
communicate across the firewall. Compliance checking is only useful when it detects and
blocks communication that can be harmful to protected systems.
Firewalls with both stateful inspection and stateful protocol analysis capabilities are not full-
fledged intrusion detection and prevention systems (IDPS), which usually offer much more
extensive attack detection and prevention capabilities. For example, IDPSs also use
signature-based and/or anomaly-based analysis to detect additional problems within network
traffic.
12. Write short note on Application-Proxy Gateways & Dedicated Proxy Servers.
Application-Proxy Gateways
An application-proxy gateway is a feature of advanced firewalls that combines lower-layer
access control with upper-layer functionality. These firewalls contain a proxy agent that acts
as an intermediary between two hosts that wish to communicate with each other, and never
allows a direct connection between them. Each successful connection attempt actually results
in the creation of two separate connections—one between the client and the proxy server, and
another between the proxy server and the true destination. The proxy is meant to be
transparent to the two hosts—from their perspectives there is a direct connection. Because
external hosts only communicate with the proxy agent, internal IP addresses are not visible to
the outside world. The proxy agent interfaces directly with the firewall ruleset to determine
whether a given instance of network traffic should be allowed to transit the firewall.
In addition to the ruleset, some proxy agents have the ability to require authentication of each
individual network user. This authentication can take many forms, including user ID and
password, hardware or software token, source address, and biometrics.
Like application firewalls, the proxy gateway operates at the application layer and can inspect
the actual content of the traffic. These gateways also perform the TCP handshake with the
source system and are able to protect against exploitations at each step of a communication.
In addition, gateways can make decisions to permit or deny traffic based on information in
the application protocol headers or payloads. Once the gateway determines that data should
be permitted, it is forwarded to the destination host.
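The two-connection behavior can be demonstrated with a minimal TCP proxy in Python: the client never connects directly to the destination, and the proxy consults a (here, trivial) ruleset before opening the second connection. All names, and the port-set standing in for a ruleset, are illustrative assumptions:

```python
import socket
import threading

def run_echo_server(host="127.0.0.1"):
    """Tiny upstream server that echoes one message back; it stands in
    for the true destination host."""
    srv = socket.socket()
    srv.bind((host, 0))
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))
        conn.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def run_proxy(dest_port, allowed_ports, host="127.0.0.1"):
    """Proxy agent: accepts the client connection, then opens a SECOND
    connection to the destination. Two separate TCP connections, as the
    text describes. The 'ruleset' here is just a set of permitted ports."""
    prx = socket.socket()
    prx.bind((host, 0))
    prx.listen(1)
    def serve():
        client, _ = prx.accept()
        if dest_port not in allowed_ports:   # consult the ruleset
            client.close()
            return
        upstream = socket.socket()
        upstream.connect((host, dest_port))  # the second connection
        upstream.sendall(client.recv(1024))
        client.sendall(upstream.recv(1024))
        upstream.close()
        client.close()
    threading.Thread(target=serve, daemon=True).start()
    return prx.getsockname()[1]

# Demo: the client only ever talks to the proxy.
dest = run_echo_server()
proxy_port = run_proxy(dest, allowed_ports={dest})
client = socket.socket()
client.connect(("127.0.0.1", proxy_port))
client.sendall(b"hello")
print(client.recv(1024))   # relayed through both connections
client.close()
```

From the client's perspective the exchange looks direct, yet the destination only ever sees the proxy's address, which is how internal addresses stay hidden.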
Application-proxy gateways are quite different from application firewalls. First, an
application-proxy gateway can offer a higher level of security for some applications because
it prevents direct connections between two hosts and it inspects traffic content to identify
policy violations. Another potential advantage is that some application-proxy gateways have
the ability to decrypt packets (e.g., SSL-protected payloads), examine them, and re-encrypt
them before sending them on to the destination host. Data that the gateway cannot decrypt is
passed directly through to the application. When choosing the type of firewall to deploy, it is
important to decide whether the firewall actually needs to act as an application proxy so that
it can match the specific policies needed by the organization.
Firewalls with application-proxy gateways can also have several disadvantages when
compared to packet filtering and stateful inspection. First, because of the “full packet
awareness” of application-proxy gateways, the firewall spends much more time reading and
interpreting each packet. Because of this, some of these gateways are poorly suited to high-
bandwidth or real-time applications, although application-proxy gateways rated for high
bandwidth are available. To reduce the load on the firewall, a dedicated proxy server can be
used to secure less time-sensitive services such as email and most web traffic. Another
disadvantage is that application-proxy gateways tend to be limited in terms of support for
new network applications and protocols—an individual, application-specific proxy agent is
required for each type of network traffic that needs to transit a firewall. Many application-
proxy gateway firewall vendors provide generic proxy agents to support undefined network
protocols or applications. Those generic agents tend to negate many of the strengths of the
application-proxy gateway architecture because they simply allow traffic to “tunnel” through
the firewall.
Dedicated Proxy Servers
Dedicated proxy servers differ from application-proxy gateways in that while dedicated
proxy servers retain proxy control of traffic, they usually have much more limited firewalling
capabilities. They are described here because of their close relationship to application-proxy
gateway firewalls.
13. Write short note on Web Application Firewalls & Firewalls for Virtual
Infrastructures.
In all these cases, the firewall’s rules will determine what to do with traffic it does not (or, in
the case of encrypted traffic, cannot) understand. An organization should have policies about
how to handle traffic in such cases, such as either permitting or blocking encrypted traffic
that is not authorized to be encrypted.
15. Write short note on VPN ?
ANS :
A virtual private network (VPN) extends a private network across a public network, such as
the Internet. It enables users to send and receive data across shared or public networks as if
their computing devices were directly connected to the private network, and thus are
benefiting from the functionality, security and management policies of the private network. A
VPN is created by establishing a virtual point-to-point connection through the use of
dedicated connections, virtual tunneling protocols, or traffic encryption.
A VPN spanning the Internet is similar to a wide area network (WAN). From a user
perspective, the extended network resources are accessed in the same way as resources
available within the private network. Traditional VPNs are characterized by a point-to-point
topology, and they do not tend to support or connect broadcast domains. Therefore,
communication, software, and networking, which are based on OSI layer 2 and broadcast
packets, such as NetBIOS used in Windows networking, may not be fully supported or work
exactly as they would on a local area network (LAN). VPN variants, such as Virtual Private
LAN Service (VPLS), and layer 2 tunneling protocols, are designed to overcome this
limitation.
VPNs allow employees to securely access the corporate intranet while traveling outside the
office. Similarly, VPNs securely connect geographically separated offices of an organization,
creating one cohesive network. VPN technology is also used by individual Internet users to
secure their wireless transactions, to circumvent geo restrictions and censorship, and to
connect to proxy servers for the purpose of protecting personal identity and location
Types of VPN
There are various kinds of VPNs available.
1.PPTP VPN
Point-to-Point Tunneling Protocol was developed by a consortium founded by Microsoft for
creating VPN over dialup networks, and as such has long been the standard protocol for
internal business VPN. It is a VPN protocol only, and relies on various authentication
methods to provide security (with MS-CHAP v2 being the most common). Available as
standard on just about every VPN capable platform and device, and thus being easy to set up
without the need to install additional software, it remains a popular choice both for businesses
and VPN providers. It also has the advantage of requiring a low computational overhead to
implement (i.e. it’s quick).
However, although now usually only found using 128-bit encryption keys, in the years since
it was first bundled with Windows 95 OSR2 back in 1996, a number of security
vulnerabilities have come to light, the most serious of which is the possibility of
unencapsulated MS-CHAP v2 authentication.
2. Site-to-Site VPN
A site-to-site VPN is much the same as PPTP except that there is no dedicated line in use. It
allows different sites of the same organization, each with its own local network, to connect
to one another to form the VPN. Unlike PPTP, the routing, encryption, and decryption are
done by routers at both ends, which can be hardware-based or software-based.
3. L2TP VPN
L2TP, or Layer 2 Tunneling Protocol, is similar to PPTP in that it also does not provide
encryption by itself and relies on the PPP protocol to do so. The main difference between
PPTP and L2TP is that the latter provides not only data confidentiality but also data
integrity. L2TP was developed by Microsoft and Cisco.
4. IPsec
IPsec is a tried and trusted protocol that creates a tunnel from the remote site to the central
site. As the name indicates, it is designed for IP traffic. IPsec requires expensive,
time-consuming client installations, which can be regarded as a significant drawback.
5. SSL
SSL, or Secure Socket Layer, is a VPN accessed via HTTPS through a web browser. SSL
creates a secure session from the PC browser to the application server being accessed. Its
main advantage is that it does not require any client software to be installed, because it uses
the web browser as the client application.
6. MPLS VPN
MPLS (Multi-Protocol Label Switching) VPNs are not well suited to remote access for
individual users, but for site-to-site connectivity they are the most flexible and scalable
option. These systems are essentially ISP-tuned VPNs, where several sites are connected to
form a VPN using the same ISP. An MPLS network is not as easy to set up or extend as the
others, and is therefore likely to cost more.
7. Hybrid VPN
Several companies have managed to combine features of SSL and IPsec as well as other
VPN types. Hybrid VPN servers can accept connections from multiple types of VPN
clients. They offer greater flexibility at both the client and server levels, but tend to be
expensive.
8.SSTP
Secure Socket Tunneling Protocol was introduced by Microsoft in Windows Vista SP1, and
although it is now available for Linux, RouterOS and SEIL, it is still largely a Windows-only
platform (and there is a snowball’s chance in hell of it ever appearing on an Apple device!).
SSTP uses SSL v3, and therefore offers similar advantages to OpenVPN (such as the ability
to use TCP port 443 to avoid NAT firewall issues), and because it is integrated into
Windows it may be easier to use and more stable.
However unlike OpenVPN, SSTP is a proprietary standard owned by Microsoft. This means
that the code is not open to public scrutiny, and Microsoft’s history of co-operating with the
NSA, and on-going speculation about possible backdoors built-in to the Windows operating
system, do not inspire us with confidence in the standard.
ANS :
Figure 3-1 shows a typical network layout with a hardware firewall device acting as a router.
The unprotected side of the firewall connects to the single path labeled “WAN,” and the
protected side connects to three paths labeled “LAN1,” “LAN2,” and “LAN3.” The firewall
acts as a router for traffic between the wide area network (WAN) path and the LAN paths. In
the figure, one of the LAN paths also has a router; some organizations prefer to use multiple
layers of routers due to legacy routing policies within the network.
Many hardware firewall devices have a feature called DMZ, an acronym related to the
demilitarized zones that are sometimes set up between warring countries. While no single
technical definition exists for firewall DMZs, they are usually interfaces on a routing firewall
that are similar to the interfaces found on the firewall’s protected side. The major difference
is that traffic moving between the DMZ and other interfaces on the protected side of the
firewall still goes through the firewall and can have firewall protection policies applied.
DMZs are sometimes useful for organizations that have hosts that need to have all traffic
destined for the host bypass some of the firewall’s policies (for example, because the DMZ
hosts are sufficiently hardened), but traffic coming from the hosts to other systems on the
organization’s network need to go through the firewall. It is common to put public-facing
servers, such as web and email servers, on the DMZ. An example of this is shown in Figure
3-2, a simple network layout of a firewall with a DMZ. Traffic from the Internet goes into the
firewall and is routed to systems on the firewall’s protected side or to systems on the DMZ.
Traffic between systems on the DMZ and systems on the protected network goes through the
firewall, and can have firewall policies applied.
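The routing behavior in Figures 3-1 and 3-2 amounts to a zone-to-zone policy table, which can be sketched as follows. The zone names and actions are illustrative, not a real product's configuration:

```python
# Zone-to-zone policy like the DMZ layout described above: traffic
# between any pair of interfaces still traverses the firewall, so a
# policy can be applied to each direction.
POLICY = {
    ("WAN", "DMZ"): "permit",    # Internet may reach the public-facing servers
    ("DMZ", "LAN"): "inspect",   # DMZ hosts reaching inside get full inspection
    ("LAN", "WAN"): "permit",    # internal users browsing out
    ("WAN", "LAN"): "deny",      # no direct Internet access to the inside
}

def zone_action(src_zone, dst_zone):
    # Any pair not listed falls through to a default deny.
    return POLICY.get((src_zone, dst_zone), "deny")

print(zone_action("WAN", "DMZ"))   # permit
print(zone_action("WAN", "LAN"))   # deny
```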
Most network architectures are hierarchical, meaning that a single path from an outside
network splits into multiple paths on the inside network—and it is generally most efficient to
place a firewall at the node where the paths split. This has the advantage of positioning the
firewall where there is no question as to what is “outside” and what is “inside.” However,
there may be reasons to have additional firewalls on the inside of the network, such as to
protect one set of computers from another. If a network’s architecture is not hierarchical, the
same firewall policies should be used on all ingresses to the network. In many organizations,
there is only supposed to be one ingress to the network, but other ingresses are set up on an
ad-hoc basis, often in ways that are not allowed by overall policy. In these situations, if a
properly configured firewall is not placed at each entry point, malicious traffic that would
normally be blocked by the main ingress can enter the network by other means.
The diagrams in Figures 3-1 and 3-2 each show a single firewall; however, many
implementations use multiple firewalls. Some vendors sell high-availability (HA) firewalls,
which allow one firewall to take over for another if the first firewall fails or is taken offline
for maintenance. HA firewalls are deployed in pairs at the same spot in the network topology
so that they both have the same external and internal connections. While HA firewalls can
increase reliability, they can also introduce some problems, such as the need to combine logs
between the paired firewalls and possible confusion by administrators when configuring the
firewalls (for example, knowing which firewall is pushing its policy changes to the other
firewall). HA functionality may be provided through a variety of vendor-specific techniques.
ANS :
Firewall policies should only permit appropriate source and destination IP addresses to be
used. Specific recommendations for IP addresses include:
Organizations should also block the following types of traffic at the perimeter:
Some other types of traffic, such as broadcast or multicast IP, may or may not be
appropriate for blocking at an organization’s firewall. Multicast and broadcast networking is
seldom used in normal networking environments, but when it is used both inside and
outside of the organization, it should be allowed through firewalls.
Firewalls at the network perimeter should block all incoming traffic to networks and hosts
that should not be accessible from external networks. These firewalls should also block all
outgoing traffic from the organization’s networks and hosts that should not be permitted to
access external networks. Deciding which addresses should be blocked is often one of the
most time-consuming aspects of developing firewall IP policies. It is also one of the most
error-prone, because the IP address associated with an undesired entity often changes over
time.
IPv6
IPv6 is a new version of IP that is increasingly being deployed. Although IPv6’s internal
format and address length differ from those of IPv4, many other features remain the same—
and some of these are relevant to firewalls. For the features that are the same between IPv4
and IPv6, firewalls should work the same. For example, blocking all inbound and outbound
traffic that has not been expressly permitted by the firewall policy should be done regardless
of whether or not the traffic has an IPv4 or IPv6 address.
As of this writing, some firewalls cannot handle IPv6 traffic at all; others are able to handle it
but have limited abilities to filter IPv6 traffic; and still others can filter IPv6 traffic to
approximately the same extent as IPv4 traffic. Every organization, whether or not it allows
IPv6 traffic to enter its internal network, needs a firewall that is capable of filtering this
traffic. These firewalls should have the following capabilities:
– The firewall should be able to use IPv6 addresses in all filtering rules that use IPv4
addresses.
– The administrative interface should allow administrators to clone IPv4 rules to IPv6
addresses to make administration easier.
– The firewall needs to be able to filter ICMPv6, as specified in RFC 4890,
Recommendations for Filtering ICMPv6 Messages in Firewalls.
– The firewall should be able to block IPv6-related protocols such as 6-to-4 and 4-to-6
tunneling, Teredo, and Intra-site Automatic Tunnel Addressing Protocol (ISATAP) if
they are not required.
– Many sites tunnel IPv6 packets in IPv4 packets. This is particularly common for sites
experimenting with IPv6, because it is currently easier to obtain IPv6 transit from a
tunnel broker through a v6-to-v4 tunnel than to get native IPv6 transit from an
Internet service provider (ISP). A number of ways exist to do this, and standards for
tunneling are still evolving. If the firewall is able to inspect the contents of IPv4
packets, it needs to know how to inspect traffic for any tunneling method used by the
organization. A corollary to this is that if an organization is using a firewall to
prohibit IPv6 coming into or going out of its network, that firewall needs to recognize
and block all forms of v6-to-v4 tunneling.
Note that the above list is short and not all the rules are security-specific. Because IPv6
deployment is still in its early stages, there is not yet widespread agreement in the IPv6
operations community about what an IPv6 firewall should do that is different from IPv4
firewalls.
For firewalls that permit IPv6 use, traffic with invalid source or destination IPv6 addresses
should always be blocked—this is similar to blocking traffic with invalid IPv4 addresses.
Since much more effort has been spent on making lists of invalid IPv4 addresses than on IPv6
addresses, finding lists of invalid IPv6 addresses can be difficult. Also, IPv6 allows network
administrators to allocate addresses in their assigned ranges in different ways. This means
that in a particular address range assigned to an organization, there can literally be trillions of
invalid IPv6 addresses and only a few that are valid. By necessity, listing which IPv6
addresses are invalid will have to be less fine-grained than listing invalid IPv4 addresses, and
the firewall rules that use these lists will be less effective than their IPv4 counterparts.
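The coarse-grained invalid-address (bogon) filtering described above can be sketched with Python's standard ipaddress module. The block list here is illustrative and deliberately incomplete, which is exactly the coarseness the text warns about:

```python
import ipaddress

# Hypothetical, non-exhaustive bogon list: IPv6 blocks that should not
# appear as source addresses on the public Internet.
IPV6_BOGONS = [
    ipaddress.ip_network("::/128"),        # unspecified address
    ipaddress.ip_network("::1/128"),       # loopback
    ipaddress.ip_network("fc00::/7"),      # unique local addresses
    ipaddress.ip_network("fe80::/10"),     # link-local
    ipaddress.ip_network("2001:db8::/32"), # documentation prefix
]

def is_bogon(addr: str) -> bool:
    """Return True if the address falls in a known-invalid range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in IPV6_BOGONS)

print(is_bogon("fe80::1"))          # link-local source on the wire: drop
print(is_bogon("2001:4860::8888"))  # globally routable: passes this check
```

Note that a rule built this way can only reject addresses known to be invalid; it cannot confirm that an address inside an organization's allocation is actually in use.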
Organizations that do not yet use IPv6 should block all native and tunneled IPv6 traffic at
their firewalls. Note that such blocking limits testing and evaluation of IPv6 and IPv6
tunneling technologies for future deployment. To permit such use, the firewall administrator
can selectively unblock IPv6 or the specific tunneling technologies of interest for use by the
authorized testers.
ANS :
Firewall policies should only allow necessary IP protocols through. Examples of commonly
used IP protocols, with their IP protocol numbers are ICMP (1), TCP (6), and UDP (17).
Other IP protocols, such as IPsec components Encapsulating Security Payload (ESP) (50) and
Authentication Header (AH) (51) and routing protocols, may also need to pass through
firewalls. These necessary protocols should be restricted whenever possible to the specific
hosts and networks within the organization with a need to use them. By permitting only
necessary protocols, all unnecessary IP protocols are denied by default.
Some IP protocols are rarely passed between an outside network and an organization’s LAN,
and therefore can simply be blocked in both directions at the firewall. For example, IGMP is
a protocol used to control multicast networks, but multicast is rarely used, and when it is, it is
often not used across the Internet. Therefore, blocking all IGMP traffic in both directions is
feasible if multicast is not used.
Application protocols can use TCP, UDP, or both, depending on the design of the protocol.
An application server typically listens on one or more fixed TCP or UDP ports. Some
applications use a single port, but many applications use multiple ports. For example,
although SMTP uses TCP port 25 for sending mail, it uses TCP port 587 for mail submission.
Similarly, FTP uses at least two ports, one of which can be unpredictable, and while most
web servers use only TCP port 80, it is common to have web sites that also use additional
ports such as TCP port 8080. Some applications use both TCP and UDP; for example, DNS
lookups can occur on UDP port 53 or TCP port 53. Application clients typically use any of a
wide range of ports.
Complied by Shetty College and UPG college M.S.C.I.T. students 2015 batch
Prof. Sohrab Vakharia Information Security Management
(only for private circulation; notes compiled from various resources)
As with other aspects of firewall rulesets, deny by default policies should be used for
incoming TCP and UDP traffic. Less stringent policies are generally used for outgoing TCP
and UDP traffic because most organizations permit their users to access a wide range of
external applications located on millions of external hosts.
In addition to allowing and blocking UDP and TCP traffic, many firewalls are also able to
report or block malformed UDP and TCP traffic directed towards the firewall or to hosts
protected by the firewall. This traffic is frequently used to scan for hosts, and may also be
used in certain types of attacks. The firewall can help block such activity—or at least report
when such activity is happening.
ICMP
Attackers can use various ICMP types and codes to perform reconnaissance or manipulate the
flow of network traffic. However, ICMP is needed for many useful things, such as getting
reasonable performance across the Internet. Some firewall policies block all ICMP traffic, but
this often leads to problems with diagnostics and performance. Other common policies allow
all outgoing ICMP traffic, but limit incoming ICMP to those types and codes needed for Path
Maximum Transmission Unit (PMTU) discovery (ICMP type 3) and destination reachability.
To prevent malicious activity, firewalls at the network perimeter should deny all incoming
and outgoing ICMP traffic except for those types and codes specifically permitted by the
organization. For ICMP in IPv4, ICMP type 3 messages should not be filtered because they
are used for important network diagnostics. The ping command (ICMP type 8) is an
important network diagnostic, but incoming pings are often blocked by firewall policies to
prevent attackers from learning more about the internal topology of the organization’s
network. For ICMP in IPv6, many types of messages must be allowed in specific
circumstances to enable various IPv6 features. See RFC 4890, Recommendations for
Filtering ICMPv6 Messages in Firewalls, for detailed information on selecting which
ICMPv6 types to allow or disallow for a particular firewall type.
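The perimeter ICMP policy described above can be sketched as a default-deny check keyed on ICMP type and direction. The allowed sets here are illustrative examples matching the text (type 3 permitted inbound, echo request permitted outbound only), not a complete recommendation:

```python
# Sketch of a perimeter ICMPv4 policy: default deny, with destination
# unreachable (type 3) permitted inbound for PMTU discovery and echo
# request (type 8) permitted outbound only, so outsiders cannot ping in.
INBOUND_ALLOWED = {3}
OUTBOUND_ALLOWED = {3, 8}

def permit_icmp(icmp_type: int, inbound: bool) -> bool:
    allowed = INBOUND_ALLOWED if inbound else OUTBOUND_ALLOWED
    return icmp_type in allowed

print(permit_icmp(3, inbound=True))   # fragmentation-needed gets in
print(permit_icmp(8, inbound=True))   # incoming ping is blocked
print(permit_icmp(8, inbound=False))  # outgoing ping is allowed
```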
ICMP is often used by low-level networking protocols to increase the speed and reliability of
networking. Therefore, ICMP within an organization’s network generally should not be
blocked by firewalls that are not at the perimeter of the network, unless security needs
outweigh network operational needs. Similarly, if an organization has more than one
network, ICMP that comes from or goes to other networks within the organization should not
be blocked.
IPsec Protocols
An organization needs to have a policy on whether or not to allow IPsec VPNs that start or end
inside its network perimeter. The ESP and AH protocols are used for IPsec VPNs, and a
firewall that blocks these protocols will not allow IPsec VPNs to pass. While blocking ESP
can hinder the use of encryption to protect sensitive data, it can also force users who would
normally encrypt their data with ESP to allow it to be inspected—for example, by a stateful
inspection firewall or an application-proxy gateway.
Complied by Shetty College and UPG college M.S.C.I.T. students 2015 batch
Prof. Sohrab Vakharia Information Security Management
(only for private circulation; notes compiled from various resources)
Organizations that allow IPsec VPNs should block ESP and AH except to and from specific
addresses on the internal network—those addresses belong to IPsec gateways that are allowed
to be VPN endpoints. Enforcing this policy will require people inside the organization to
obtain the appropriate policy approval to open ESP and/or AH access to their IPsec routers.
This will also reduce the amount of encrypted traffic coming from inside the network that
cannot be examined by network security controls.
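The policy above (ESP and AH allowed only to and from approved internal IPsec gateways) can be sketched as follows; the gateway address is a hypothetical example:

```python
import ipaddress

# Illustrative rule: ESP (50) and AH (51) pass only when the internal
# endpoint is an approved IPsec gateway; any other internal host drops.
VPN_GATEWAYS = {ipaddress.ip_address("192.0.2.10")}  # hypothetical gateway

def permit_ipsec(proto: int, internal_addr: str) -> bool:
    if proto not in (50, 51):
        return False  # this rule only concerns ESP/AH traffic
    return ipaddress.ip_address(internal_addr) in VPN_GATEWAYS

print(permit_ipsec(50, "192.0.2.10"))  # approved gateway: allow
print(permit_ipsec(50, "192.0.2.55"))  # arbitrary internal host: block
```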
19. What are the various policies based on applications, user identity, and network activity?
Policies Based on Applications
Most early firewall work involved simply blocking unwanted or suspicious traffic at the
network boundary. Inbound application firewalls or application proxies take a different
approach—they let traffic destined for a particular server into the network, but capture that
traffic in a server that processes it like a port-based firewall. The application-based approach
provides an additional layer of security for incoming traffic by validating some of the traffic
before it reaches the desired server. The theory is that the inbound application firewall’s or
proxy’s additional security layer can protect the server better than the server can protect
itself—and can also remove malicious traffic before it reaches the server to help reduce
server load. In some cases, an application firewall or proxy can remove traffic that the server
might not be able to remove on its own because it has greater filtering capabilities. An
application firewall or proxy also prevents the server from having direct access to the outside
network.
If possible, inbound application firewalls and proxies should be used in front of any server
that does not have sufficient security features to protect it from application-specific attacks.
The main considerations when deciding whether or not to use an inbound application
firewall or proxy are:
this puts more of the address-specific policy in a single place—the main firewall—and
reduces the amount of traffic seen by the application firewall or proxy, freeing more power to
filter content. Of course, if the perimeter firewall is also the application firewall and an
internal application proxy is not used, no such rules are needed. Outbound application proxies
are useful for detecting systems that are making inappropriate or dangerous connections from
inside the protected network. By far the most common type of outbound proxy is for HTTP.
Outbound HTTP proxies allow an organization to filter dangerous content before it reaches
the requesting PC. They also help an organization better understand and log web traffic from
its users, and to detect activity that is being tunneled over HTTP. When an HTTP proxy
filters content, it can alert the web user that the site being visited sent the filtered content. The
most prominent non-security benefit of HTTP
proxies is caching web pages for increased speed and decreased bandwidth use. Most
organizations should employ HTTP proxies.
Many firewalls allow the administrator to block established connections after a certain period
of inactivity. For example, if a user on the outside of a firewall has logged into a file server
but has not made any requests during the past 15 minutes, the policy might be to block any
further traffic on that connection. Time-based policies are useful in thwarting attacks caused
by a logged-in user walking away from a computer and someone else sitting down and using
the established connections (and therefore the logged-in user’s credentials). However, these
policies can also be bothersome for users who make connections but do not use them
frequently. For instance, a user might connect to a file server to read a file and then spend a
long time editing the file. If the user does not save the file back to the file server before the
firewall-mandated timeout, the timeout could cause the changes to the file to be lost. Some
organizations have mandates about when firewalls should block connections that are
considered to be inactive, when applications should disconnect sessions if there is no activity,
etc. A firewall used by such an organization should be able to set policies that match the
mandates while being specific enough to match the security objective of the mandates.
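The idle-timeout behaviour described above can be sketched as a toy connection-state table; the 15-minute limit matches the example in the text:

```python
import time

IDLE_TIMEOUT = 15 * 60  # seconds; matches the 15-minute example above

class ConnectionTable:
    """Toy state table that blocks connections idle past the timeout."""
    def __init__(self):
        self.last_seen = {}

    def record(self, conn_id, now=None):
        """Record activity on a connection."""
        self.last_seen[conn_id] = now if now is not None else time.time()

    def is_blocked(self, conn_id, now=None):
        """A connection is blocked if unknown or idle too long."""
        now = now if now is not None else time.time()
        seen = self.last_seen.get(conn_id)
        return seen is None or (now - seen) > IDLE_TIMEOUT

table = ConnectionTable()
table.record("client42", now=0)
print(table.is_blocked("client42", now=600))   # 10 min idle: still open
print(table.is_blocked("client42", now=1200))  # 20 min idle: blocked
```

This also illustrates the drawback discussed in the text: the table has no way to distinguish a user who walked away from one who is quietly editing a file locally.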
A different type of firewall policy based on network activity is one that throttles or redirects
traffic if the rate of traffic matching the policy rule is too high. For example, a firewall might
redirect the connections made to a particular inside address to a slower route if the rate of
connections is above a certain threshold. Another policy might be to drop incoming ICMP
packets if the rate is too high. Crafting such policies is quite difficult because throttling and
redirecting can cause desired traffic to be lost or to suffer difficult-to-diagnose transient failures.
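A rate-based policy of this kind can be sketched as a sliding-window counter; the window and threshold here are hypothetical values:

```python
from collections import deque

WINDOW = 1.0      # seconds; hypothetical measurement window
THRESHOLD = 100   # hypothetical packet limit per window

class RateLimiter:
    """Toy sliding-window rate check: above THRESHOLD packets per
    WINDOW seconds, further packets are dropped (or rerouted)."""
    def __init__(self):
        self.arrivals = deque()

    def permit(self, now: float) -> bool:
        # Discard arrivals that have fallen out of the window.
        while self.arrivals and now - self.arrivals[0] > WINDOW:
            self.arrivals.popleft()
        if len(self.arrivals) >= THRESHOLD:
            return False
        self.arrivals.append(now)
        return True

limiter = RateLimiter()
results = [limiter.permit(0.0) for _ in range(150)]
print(results.count(True))   # first 100 pass, the remaining 50 drop
```

The difficulty noted in the text shows up even in this sketch: legitimate bursts above the threshold are dropped exactly like attack traffic.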
In the planning stages of a Web server, the following items should be considered [Alle00]:
Ability to disable unnecessary network services that may be built into the OS or server
software
Ability to control access to various forms of executable programs, such as Common
Gateway Interface (CGI) scripts and server plug-ins in the case of Web servers
Ability to log appropriate server activities to detect intrusions and attempted intrusions
Provision of a host-based firewall capability.
In addition, organizations should consider the availability of trained, experienced staff to
administer the server and server products. Many organizations have learned the difficult
lesson that a capable and experienced administrator for one type of operating environment is
not automatically as effective for another.
Although many Web servers do not host sensitive information, most Web servers should be
considered sensitive because of the damage to the organization’s reputation that could occur
if the servers’ integrity is compromised. In such cases, it is critical that the Web servers are
located in areas that provide secure physical environments. When planning the location of a
Web server, the following issues should be considered:
Are the appropriate physical security protection mechanisms in place? Examples include—
Locks
Card reader access
Security guards
Physical IDSs (e.g., motion sensors, cameras).
Are there appropriate environmental controls so that the necessary humidity and
temperature are maintained?
Is there a backup power source? For how long will it provide power?
If high availability is required, are there redundant Internet connections from at least two
different Internet service providers (ISPs)?
If the location is subject to known natural disasters, is it hardened against those disasters
and/or is there a contingency site outside the potential disaster area?
22. What are the steps for securely installing a web server?
ANS-In many respects, the secure installation and configuration of the Web server
application mirrors the OS process discussed in Section 4. The overarching principle, as
before, is to install only the services required for the Web server and to eliminate any known
vulnerabilities through patches or upgrades. Any unnecessary applications, services, or
scripts that are installed should be removed immediately once the installation process is
complete. During the installation of the Web server, the following steps should be performed:
Install the Web server software either on a dedicated host or on a dedicated guest OS if
virtualization is being employed.
Apply any patches or upgrades to correct for known vulnerabilities.
Create a dedicated physical disk or logical partition (separate from OS and Web server
application) for Web content.
Remove or disable all services installed by the Web server application but not required
(e.g., gopher, FTP, remote administration).
Remove or disable all unneeded default login accounts created by the Web server
installation.
Remove all manufacturers’ documentation from the server.
Remove all example or test files from the server, including scripts and executable code.
Apply appropriate security template or hardening script to server.
Reconfigure HTTP service banner (and others as required) not to report Web server and
OS type and version (this may not be possible with all Web servers).
Organizations should consider installing the Web server with non-standard directory names,
directory locations, and filenames. Many Web server attack tools and worms targeting Web
servers only look for files and directories in their default locations. While this will not stop
determined attackers, it will force them to work harder to compromise the server, and it also
increases the likelihood of attack detection because of the failed attempts to access the default
filenames and directories and the additional time needed to perform an attack.
24. State the IEEE 802.11 network components and explain its architectural models.
Ans: IEEE 802.11 has two fundamental architectural components, as follows:
Station (STA). A STA is a wireless endpoint device. Typical examples of STAs are laptop
computers, personal digital assistants (PDA), mobile phones, and other consumer electronic
devices with IEEE 802.11 capabilities.
Access Point (AP). An AP logically connects STAs with a distribution system (DS), which
is typically an organization’s wired infrastructure. APs can also logically connect wireless STAs
with each other without accessing a distribution system.
The IEEE 802.11 standard also defines the following two WLAN design structures or
configurations:
Ad Hoc Mode. The ad hoc mode does not use APs. Ad hoc mode is sometimes referred to
as infrastructureless because only peer-to-peer STAs are involved in the communications.
Today, a STA is most often thought of as a simple laptop with an inexpensive network interface
card (NIC) that provides wireless connectivity; however, many other types of devices could also
be STAs. In Figure 24.1, the STAs in the IBSS (Independent Basic Service Set) are a mobile phone, a laptop, and a PDA. IEEE
802.11 and its variants continue to increase in popularity; scanners, printers, digital cameras and
other portable devices can also be STAs. The circular shape in Figure 2-1 depicts the IBSS. It is
helpful to consider this as the radio frequency coverage area within which the stations can remain
in communication. A fundamental property of IBSS is that it defines no routing or forwarding, so,
based on the bare IEEE 802.11i spec, all the devices must be within radio range of one another.
An ad hoc network can be created for many reasons, such as allowing the sharing of files or the
rapid exchange of e-mail. However, an ad hoc WLAN cannot communicate with external
networks. A further complication is that an ad hoc network can interfere with the operation of an
AP-based infrastructure mode network (see next section) that exists within the same wireless
space.
Infrastructure Mode.
In infrastructure mode, an IEEE 802.11 WLAN comprises one or more Basic Service Sets (BSS),
the basic building blocks of a WLAN. A BSS includes an AP and one or more STAs. The AP in a
BSS connects the STAs to the DS. The DS is the means by which STAs can communicate with the
organization’s wired LANs and external networks such as the Internet. The IEEE 802.11
infrastructure mode is depicted in Figure 2-2.
The DS and use of multiple BSSs and their associated APs allow for the creation of wireless
networks of arbitrary size and complexity. In the IEEE 802.11 specification, this type of multi-
BSS network is referred to as an extended service set (ESS). Figure 24.3 conceptually depicts a
network with both wired and wireless capabilities. It shows three APs with their corresponding
BSSs, which comprise an ESS; the ESS is attached to the wired infrastructure. In turn, the wired
infrastructure is connected through a perimeter firewall to the Internet. This architecture could
permit various STAs, such as laptops and PDAs, to provide Internet connectivity for their users.
25. What are the various types of authentication methods implemented in IEEE 802.11 security?
Ans: Authentication methods implemented in IEEE 802.11 security are explained below:
Service Set Identifier (SSID) for the AP. The SSID is a name assigned to a WLAN; it
allows STAs to distinguish one WLAN from another. SSIDs are broadcast in plaintext in wireless
communications, so an eavesdropper can easily learn the SSID for a WLAN. However, the SSID
is not an access control feature, and was never intended to be used for that purpose.
Media Access Control (MAC) address for the STA. A MAC address is a (hopefully)
unique 48-bit value that is permanently assigned to a particular wireless network interface. Many
implementations of IEEE 802.11 allow administrators to specify a list of authorized MAC
addresses; the AP will permit only devices with those MAC addresses to use the WLAN. This is
known as MAC address filtering. However, since the MAC address is not encrypted, it is simple to
intercept traffic and identify MAC addresses that are allowed past the MAC filter. Unfortunately,
almost all WLAN adapters allow applications to set the MAC address, so it is relatively trivial to
spoof a MAC address, meaning attackers can gain unauthorized access easily.
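The weakness of MAC address filtering can be seen in a toy model: the filter's only input is the address the device claims, so a spoofed address from the allow list is indistinguishable from the legitimate device. The addresses below are hypothetical:

```python
# Toy MAC address filter. 802.11 frames carry the MAC address in the
# clear, and adapters let software set it, so an attacker can simply
# claim an address from the allow list; the filter cannot tell.
ALLOWED_MACS = {"00:11:22:33:44:55"}  # hypothetical authorized STA

def ap_admits(claimed_mac: str) -> bool:
    """The AP only sees the claimed address, not who really sent it."""
    return claimed_mac.lower() in ALLOWED_MACS

print(ap_admits("66:77:88:99:aa:bb"))  # unknown adapter: rejected
print(ap_admits("00:11:22:33:44:55"))  # spoofed-but-listed MAC: admitted
```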
2. Encryption
The WEP protocol, part of the IEEE 802.11 standard, uses the RC4 stream cipher algorithm to
encrypt wireless communications, which protects their contents from disclosure to eavesdroppers.
The standard for WEP specifies support for a 40-bit WEP key only; however, many vendors offer
non-standard extensions to WEP that support key lengths of up to 128 or even 256 bits. WEP also
uses a 24-bit value known as an initialization vector (IV) as a seed value for initializing the
cryptographic key stream. For example, a 104-bit WEP key with a 24-bit IV becomes a 128-bit
RC4 key. Ideally, larger key sizes translate to stronger protection, but the cryptographic technique
used by WEP has known flaws that are not mitigated by longer keys.
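WEP's per-frame key construction can be sketched directly: the 24-bit IV is prepended to the shared secret to seed RC4, so the IV is the only per-frame variation and it travels in cleartext. The RC4 routine below is the standard textbook algorithm; the IV and key values are demo placeholders:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Standard RC4: key-scheduling then pseudo-random generation."""
    s = list(range(256))
    j = 0
    for i in range(256):                     # key-scheduling algorithm
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = [], 0, 0
    for _ in range(n):                       # keystream generation
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

iv = bytes([0x00, 0x00, 0x01])               # 24-bit IV, sent in the clear
secret = bytes(13)                           # 104-bit shared key (demo value)
seed = iv + secret                           # 128-bit RC4 seed
ciphertext = bytes(p ^ k
                   for p, k in zip(b"payload", rc4_keystream(seed, 7)))
print(len(seed) * 8)                         # 128
```

Because the seed differs only in its first three bytes from frame to frame, an eavesdropper who learns the IV already knows part of every per-frame key, which is what the attacks described next exploit.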
Most attacks against WEP encryption have been based on IV-related vulnerabilities. For example,
the IV portion of the RC4 key is sent in cleartext, which allows an eavesdropper that monitors and
analyzes a relatively small amount of network traffic to recover the key by taking advantage of the
IV value knowledge, the relatively small 24-bit IV key space, and a weakness in the way WEP
implements the RC4 algorithm. Also, WEP does not specify precisely how the IVs should be set or
changed; some products use a static, well-known IV value or reset the IV to zero. If two messages have
the same IV, and the plaintext of either message is known, it is relatively trivial for an attacker to
determine the plaintext of the second message. In particular, because many messages contain
common protocol headers or other easily guessable contents, it is often possible to identify the
original plaintext contents with minimal effort.
Even traffic from products that use sequentially increasing IV values is still susceptible to
attack. There are fewer than 17 million possible IV values; on a busy WLAN, the entire IV
space may be exhausted in a few hours. When the IV is chosen randomly, which represents
the best possible generic IV selection algorithm, by the birthday paradox two IVs already
have a 50% chance of colliding after about 2^12 frames.
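The birthday-paradox estimate can be checked numerically. With a 24-bit IV space, the usual approximation puts the 50% collision point at roughly 4,800 frames, a little over 2^12:

```python
import math

IV_SPACE = 2 ** 24  # number of distinct 24-bit IVs

def collision_prob(n: int) -> float:
    """Birthday-bound approximation: probability of at least one
    repeated IV among n randomly chosen frames."""
    return 1 - math.exp(-n * (n - 1) / (2 * IV_SPACE))

for n in (2 ** 11, 2 ** 12, 2 ** 13):
    print(n, round(collision_prob(n), 2))
```

The probability rises quickly: a few thousand frames (seconds of traffic on a busy WLAN) already make an IV reuse more likely than not.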
3. Data Integrity
WEP performs data integrity checking for messages transmitted between STAs and APs. WEP is
designed to reject any messages that have been changed in transit, such as by a man-in-the-middle
attack.
WEP data integrity is based on a simple encrypted checksum—a 32-bit cyclic redundancy check
(CRC-32) computed on each payload prior to transmission. The payload and checksum are
encrypted using the RC4 key stream and transmitted. The receiver decrypts them, recomputes the
checksum on the received payload, and compares it with the transmitted checksum. If the
checksums are not the same, the transmitted data frame has been altered in transit, and the frame is
discarded.
Unfortunately, CRC-32 is subject to bit flipping attacks, which means that an attacker knows
which CRC-32 bits will change when message bits are altered. WEP attempts to counter this
problem by encrypting the CRC-32 to produce an integrity check value (ICV). The creators of
WEP believed that an enciphered CRC-32 would be less subject to tampering. However, they did
not realize that a property of stream ciphers such as WEP’s RC4 is that bit flipping survives the
encryption process—the same bits flip whether or not encryption is used. Therefore, the WEP ICV
offers no additional protection against bit flipping.
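The "bit flipping survives encryption" property is easy to demonstrate: XORing a delta into a stream cipher's output flips exactly the same plaintext bits after decryption. The keystream here is random bytes standing in for RC4 output, and the message is a made-up example:

```python
import os

plaintext = b"PAY 0010 DOLLARS"
keystream = os.urandom(len(plaintext))           # stand-in for RC4 output
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# Attacker flips chosen bits in the ciphertext without knowing the key.
delta = bytearray(len(plaintext))
delta[6] = ord("1") ^ ord("9")                   # turn the '1' into a '9'
tampered = bytes(c ^ d for c, d in zip(ciphertext, delta))

recovered = bytes(c ^ k for c, k in zip(tampered, keystream))
print(recovered.decode())                        # PAY 0090 DOLLARS
```

Since the same delta appears in the plaintext, an attacker who also knows how the checksum changes under that delta can forge a valid frame, which is precisely the CRC-32 weakness described above.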
Integrity should be provided by a cryptographic checksum rather than a CRC. Also known as
keyed hashes or message authentication codes (MAC), cryptographic checksums prevent bit
flipping attacks because they are designed so that any change to the original message results in
significant and unpredictable changes to the resulting checksum. CRCs are generally more
efficient computationally than cryptographic checksums, but are only designed to protect against
random bit errors, not intentional forgeries, so they do not provide the same level of integrity
protection.
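The contrast can be shown concretely: CRC-32 is (up to a length-dependent constant) linear, so the checksum of a tampered message is predictable from the original checksum and the bit-flip delta, whereas an HMAC changes unpredictably without the key. The key and messages below are demo values:

```python
import hashlib
import hmac
import zlib

key = b"secret-demo-key"        # hypothetical shared key
msg = b"amount=0010"
tampered = b"amount=0090"

# CRC-32 linearity: crc(a ^ b) relates crc(a), crc(b), and crc(zeros),
# so an attacker can compute the tampered checksum without seeing msg.
delta = bytes(a ^ b for a, b in zip(msg, tampered))
crc_delta = zlib.crc32(delta) ^ zlib.crc32(bytes(len(msg)))
print(zlib.crc32(tampered) == zlib.crc32(msg) ^ crc_delta)  # True

# An HMAC changes unpredictably; without the key the attacker cannot
# produce the new tag.
print(hmac.new(key, msg, hashlib.sha256).hexdigest()
      == hmac.new(key, tampered, hashlib.sha256).hexdigest())  # False
```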
4. Replay Protection
WEP's cryptographic implementation provides no protection against replay attacks because it does
not include features such as an incrementing counter, timestamp, or other temporal data that would
make replayed traffic easily detectable.
5. Availability
Individuals who do not have physical access to the WLAN infrastructure can cause a denial of
service for the WLAN. One threat is known as jamming, which involves a device that emits
electromagnetic energy on the WLAN’s frequencies. The energy makes the frequencies unusable
by the WLAN, causing a denial of service. Jamming can be performed intentionally by an attacker
or unintentionally by a non-WLAN device transmitting on the same frequency. Another threat
against availability is flooding, which involves an attacker sending large numbers of messages to
an AP at such a high rate that the AP cannot process them, or other STAs cannot access the
channel, causing a partial or total denial of service. These threats are difficult to counter in any
radio-based communications; thus, the IEEE 802.11 standard does not provide any defense against
jamming or flooding. Also, as described in Section 3.2.1, attackers can establish rogue APs; if
STAs mistakenly attach to a rogue AP instead of a legitimate one, this could make the legitimate
WLAN effectively unavailable to users. Although 802.11i protects data frames, it does not offer
protection to control or management frames. An attacker can exploit the fact that management
frames are not authenticated to deauthenticate a client or to disassociate a client from the network.
The IEEE 802.11i standard is the sixth amendment to the baseline IEEE 802.11 standards. It
includes many security enhancements that leverage mature and proven security technologies. For
example, IEEE 802.11i references the Extensible Authentication Protocol (EAP) standard, which
is a means for providing mutual authentication between STAs and the WLAN infrastructure, as
well as performing automatic cryptographic key distribution. Section 6 describes EAP in depth;
EAP is a standard developed by the Internet Engineering Task Force (IETF). IEEE 802.11i
employs accepted cryptographic practices, such as generating cryptographic checksums through
hash message authentication codes (HMAC).
The IEEE 802.11i specification introduces the concept of a Robust Security Network (RSN). An
RSN is defined as a wireless security network that only allows the creation of Robust Security
Network Associations (RSNA). An RSNA is a logical connection between communicating IEEE
802.11 entities established through the IEEE 802.11i key management scheme, called the 4-Way
Handshake, which is a protocol that validates that both entities share a pairwise master key
(PMK), synchronizes the installation of temporal keys, and confirms the selection and
configuration of data confidentiality and integrity protocols. The entities obtain the PMK in one of
two ways—either the PMK is already configured on each device, in which case it is called a pre-
shared key (PSK), or it is distributed as a side effect of a successful EAP authentication instance,
which is a component of IEEE 802.1X port-based access control. The PMK serves as the basis for
the IEEE 802.11i data confidentiality and integrity protocols that provide enhanced security over
the flawed WEP. Most large enterprise deployments of RSN technology will use IEEE 802.1X and
EAP rather than PSKs because of the difficulty of managing PSKs on numerous devices. WLAN
connections employing ad hoc mode, which typically involve only a few STAs, are more likely to
use PSKs.
This section provides a brief introduction to the IEEE 802.1X standard, which is specified by the
IEEE 802.11i amendment. Two components defined in IEEE 802.1X are relied upon for the
establishment of RSNAs: authentication servers and IEEE 802.1X port-based access control. The
IEEE 802.1X standard provides a framework for access control that leverages EAP to provide
centralized, mutual authentication. IEEE 802.1X was originally developed for wired LANs to
prevent unauthorized use in open environments such as university campuses, but it has been used
by IEEE 802.11i for WLANs as well. The IEEE 802.1X framework provides the means to block
user access until authentication is successful, thereby controlling access to WLAN resources.
The IEEE 802.1X standard defines several terms related to authentication. The authenticator is an
entity at one end of a point-to-point LAN segment that facilitates authentication of the entity
attached to the other end of that link. For example, the AP in Figure 25.2 serves as an
authenticator. The supplicant is the entity being authenticated. The STA may be viewed as a
supplicant. The authentication server (AS) is an entity that provides an authentication service to
an authenticator. This service determines from the credentials provided by the supplicant whether
the supplicant is authorized to access the services provided by the authenticator. The AS provides
these authentication services and delivers session keys to each AP in the wireless network; each
STA either receives session keys from the AS or derives the session keys itself. The AS either
authenticates the STA and AP itself, or provides information to the STA and AP so that they may
authenticate each other. The AS typically lies inside the DS, as depicted in Figure 25.2. When
employing a solution based on the IEEE 802.11i standard, the AS most often used for
authentication is an Authentication, Authorization, and Accounting (AAA) server that uses the
Remote Authentication Dial In User Service (RADIUS) or Diameter protocol to transport
authentication related traffic. This is discussed further in Section 4. The supplicant/authenticator
model is intrinsically a unilateral rather than mutual authentication model: the supplicant
authenticates to the network. IEEE 802.11i combats this bias by requiring that the EAP method
used provides mutual authentication.
Figure 25.3 provides a simple conceptual view of IEEE 802.1X that depicts all the fundamental
IEEE 802.11i components: STAs, an AP, and an AS. In this example, the STAs are the
supplicants, and the AP is the authenticator. Until successful authentication occurs between a STA
and the AS, the STA’s communications are blocked by the AP. Because the AP sits at the
boundary between the wireless and wired networks, this prevents the unauthenticated STA from
reaching the wired network. The technique used to block the communications is known as port-
based access control. IEEE 802.1X can control data flows by distinguishing between EAP and
non-EAP frames, then passing EAP frames through an uncontrolled port and non-EAP frames
through a controlled port, which can block access. IEEE 802.11i extends this to block the AP’s
communication until keys are in place as well. Thus, the IEEE 802.11i extensions prevent a rogue
access point from exchanging anything but EAP traffic with the STA’s host.
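As a rough illustration of the port-based blocking described above, the sketch below (hypothetical class and return names, not taken from the standard) shows how an authenticator passes only EAP frames until authentication succeeds:

```python
# Minimal sketch of IEEE 802.1X port-based access control: before
# authentication, only EAP frames pass (uncontrolled port); all other
# traffic is blocked until the supplicant completes authentication.

class Authenticator:
    """Models an AP acting as an 802.1X authenticator."""

    def __init__(self):
        self.authenticated = set()  # STAs that completed EAP authentication

    def forward(self, sta, frame_type):
        if frame_type == "EAP":
            return "uncontrolled-port"   # EAP always passes, for authentication
        if sta in self.authenticated:
            return "controlled-port"     # data flows once authentication succeeds
        return "blocked"                 # everything else is dropped

    def eap_success(self, sta):
        self.authenticated.add(sta)      # the AS reported success for this STA

ap = Authenticator()
print(ap.forward("sta1", "DATA"))   # blocked before authentication
print(ap.forward("sta1", "EAP"))    # uncontrolled-port
ap.eap_success("sta1")
print(ap.forward("sta1", "DATA"))   # controlled-port
```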
Backup requirements are shaped by factors such as:
• Applicable laws and regulations (Federal, state, and international)
• Litigation requirements
• Contractual requirements
• Accepted practices
• Criticality of data to the organization
Although each organization’s Web server backup policy will be different to reflect its
particular environment, it should address the following issues:
• Frequency of backups
• How to respond to information requests, legal investigations, and other such requests
• Responsibilities of the data backup team, if one exists
Differential backups reduce the number of backup sets that must be accessed to restore a
configuration by backing up all data changed since the last full backup. However, the size of
each differential backup grows as time elapses from the last full backup, requiring more
processing time and storage than an incremental backup would. Generally, full backups are
performed less frequently (weekly to monthly or when a significant change occurs), and
incremental or differential backups are performed more frequently (daily to weekly).
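The restore trade-off described above can be illustrated with a small sketch (the scheme names and weekly-full/daily schedule are illustrative):

```python
# Hedged sketch: which backup sets must be accessed to restore under each
# scheme. Assumes a full backup plus daily incremental or differential runs.

def restore_sets(scheme, days_since_full):
    """Return the ordered list of backup sets needed for a restore."""
    if scheme == "differential":
        # Full backup plus only the most recent differential.
        return ["full"] + (["diff"] if days_since_full else [])
    if scheme == "incremental":
        # Full backup plus every incremental made since it.
        return ["full"] + [f"inc-{d}" for d in range(1, days_since_full + 1)]
    raise ValueError(scheme)

print(restore_sets("differential", 5))  # ['full', 'diff']
print(restore_sets("incremental", 5))   # ['full', 'inc-1', 'inc-2', 'inc-3', 'inc-4', 'inc-5']
```

Differential restores touch fewer sets, at the cost of each differential growing as time passes since the last full backup.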
The frequency of backups will be determined by several factors:
• Static Web content (less frequent backups)
• Dynamic Web content (more frequent backups)
• E-commerce/e-government (very frequent backups)
• Effort required for data reconstruction without a data backup
• Other data backup or redundancy features of the Web server (e.g., Redundant Array of Inexpensive Disks [RAID])
All organizations should maintain an authoritative (i.e., verified and trusted) copy of their
public Web sites on a host that is inaccessible to the Internet. This is a supplement to, but not a
replacement for, an appropriate backup policy (see Section 9.2.1). For simple, relatively static
Web sites, this could be as simple as a copy of the Web site on a read-only medium (e.g.,
Compact Disc-Recordable [CD-R]).
However, for most organizations, the authoritative copy of the Web site is maintained on a
secure host.
This host is usually located behind the organization’s firewall on the internal network and not
on the DMZ (see Section 8.1.2). The purpose of the authoritative copy is to provide a means
of restoring information on the public Web server if it is compromised as a result of an
accident or malicious action.
This authoritative copy of the Web site allows an organization to rapidly recover from Web
site integrity breaches (e.g., defacement).
To successfully accomplish the goal of providing and protecting an authoritative copy of the
Web server content, the following requirements must be met:
• Use write-once media (appropriate for relatively static Web sites).
• Locate the host with the authoritative copy behind a firewall, and ensure there is no outside access to the host.
• Minimize the number of users with authorized access to the host.
• Control user access in as granular a manner as possible.
• Employ strong user authentication.
• Employ appropriate logging and monitoring procedures.
• Consider additional authoritative copies at different physical locations for further protection.
• Update the authoritative copy first (any testing of code should occur before updating the authoritative copy).
• Establish policies and procedures for who can authorize updates, who can perform updates, when updates can occur, etc.
• Physically transfer data using secure physical media (e.g., encrypted and/or write-once media, such as CD-Rs).
• Use a secure protocol (e.g., SSH) for network transfers.
• Incorporate use of the authoritative copy into incident response procedures (see Section 9.3).
• Consider copying the authoritative copy to the public Web server periodically (e.g., every 15 minutes, hourly, or daily), because this will overwrite a Web site defacement automatically.
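One way the defacement detection and restoration described above might be automated is sketched below (paths, page content, and function names are invented for illustration):

```python
# Illustrative sketch: detect defacement by comparing cryptographic hashes
# of served pages against the authoritative copy, then restore any page
# that differs.

import hashlib

def digest(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def check_and_restore(live: dict, authoritative: dict) -> list:
    """Restore live pages whose hash differs from the authoritative copy."""
    restored = []
    for path, good_content in authoritative.items():
        if digest(live.get(path, b"")) != digest(good_content):
            live[path] = good_content      # overwrite the defaced page
            restored.append(path)
    return restored

authoritative = {"/index.html": b"<h1>Welcome</h1>"}
live = {"/index.html": b"<h1>Hacked!</h1>"}
print(check_and_restore(live, authoritative))  # ['/index.html']
```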
Most organizations eventually face a successful compromise of one or more hosts on their
network. The first step in recovering from a compromise is to create and document the
required policies and procedures for responding to successful intrusions before an
intrusion occurs. The response procedures should outline the actions that are required to respond
to a successful compromise of the Web server and the appropriate sequence of these actions
(sequence can be critical). Most organizations already have a dedicated incident response
team in place, which should be contacted immediately when there is suspicion or
confirmation of a compromise. In addition, the organization may wish to ensure that some of
its staff are knowledgeable in the fields of computer and network forensics.
A Web server administrator should follow the organization’s policies and procedures for
incident handling, and the incident response team should be contacted for guidance before the
organization takes any action after a suspected or confirmed security compromise. Examples
of steps commonly performed after discovering a successful compromise are as follows:
• Isolate the compromised systems or take other steps to contain the attack so that additional information can be collected.
• Investigation of whether the compromised server was used as a launching point for other attacks
• Analysis of available evidence (e.g., log files and intrusion detection reports)
• Determination of the extent to which the system was compromised
• Results of consultation with management and legal counsel.
The lower the level of access gained by the intruder and the more the Web server
administrator understands about the attacker’s actions, the less risk there is in restoring from
a backup and patching the vulnerability. For incidents in which there is less known about the
attacker’s actions and/or in which the attacker gains high-level access, it is recommended that
the OS and applications be reinstalled from the manufacturer’s original distribution media
and that the Web server data be restored from a known good backup.
If legal action is pursued, system administrators need to be aware of the guidelines for
handling a host after a compromise. Consult legal counsel and relevant law enforcement
authorities as appropriate.
However, vulnerability scanners have some significant weaknesses. Generally, they identify
only surface vulnerabilities and are unable to address the overall risk level of a scanned Web
server. Although the scan process itself is highly automated, vulnerability scanners can have
a high false positive error rate (reporting vulnerabilities when none exist). This means an
individual with expertise in Web server security and administration must interpret the results.
Furthermore, vulnerability scanners cannot generally identify vulnerabilities in custom code
or applications.
Vulnerability scanners rely on periodic updating of the vulnerability database to recognize the
latest vulnerabilities. Before running any scanner, Web server administrators should install
the latest updates to its vulnerability database. Some databases are updated more regularly
than others (the frequency of updates should be a major consideration when choosing a
vulnerability scanner).
Vulnerability scanners are often better at detecting well-known vulnerabilities than more
esoteric ones because it is impossible for any one scanning product to incorporate all known
vulnerabilities in a timely manner. In addition, manufacturers want to keep the speed of their
scanners high (the more vulnerabilities detected, the more tests required, which slows the
overall scanning process). Therefore, vulnerability scanners may be less useful to Web server
administrators operating less popular Web servers, OSs, or custom-coded applications.
Vulnerability scanners can also assist in testing compliance with host application usage/security policies.
Organizations should conduct vulnerability scanning to validate that OSs and Web server
applications are up-to-date on security patches and software versions. Vulnerability scanning
is a labor-intensive activity that requires a high degree of human involvement to interpret the
results. It may also be disruptive to operations by taking up network bandwidth, slowing
network response times, and potentially affecting the availability of the scanned server or its
applications. However, vulnerability scanning is extremely important for ensuring that
vulnerabilities are mitigated as soon as possible, before they are discovered and exploited by
adversaries. Vulnerability scanning should be conducted on a weekly to monthly basis.
Many organizations also run a vulnerability scan whenever a new vulnerability database is
released for the organization’s scanner application. Vulnerability scanning results should be
documented and discovered deficiencies should be corrected.
Organizations should also consider running more than one vulnerability scanner. As
previously discussed, no scanner is able to detect all known vulnerabilities; however, using
two scanners generally increases the number of vulnerabilities detected. A common practice
is to use one commercial and one freeware scanner. Network-based and host-based
vulnerability scanners are available for free or for a fee.
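The benefit of combining two scanners can be shown with a toy example (the scanner categories and CVE identifiers below are made up for illustration):

```python
# Small sketch of why running two scanners helps: the union of their
# findings covers vulnerabilities that either one alone would miss.

commercial = {"CVE-2015-0001", "CVE-2015-0002"}  # illustrative findings
freeware   = {"CVE-2015-0002", "CVE-2015-0003"}

combined = commercial | freeware                 # union of both reports
missed_by_commercial = combined - commercial     # caught only by scanner 2

print(sorted(combined))              # ['CVE-2015-0001', 'CVE-2015-0002', 'CVE-2015-0003']
print(sorted(missed_by_commercial))  # ['CVE-2015-0003']
```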
2. Penetration Testing
Penetration testing is “security testing in which evaluators attempt to circumvent the security
features of a system based on their understanding of the system design and implementation”.
The purpose of penetration testing is to exercise system protections (particularly human
response to attack indications) by using common tools and techniques developed by
attackers. This testing is highly recommended for complex or critical servers.
Penetration testing goes beyond surface vulnerabilities and demonstrates how these
vulnerabilities can be exploited iteratively to gain greater access.
It also demonstrates that vulnerabilities are not purely theoretical and allows testing of the
susceptibility of staff to social engineering.
Computer systems recognize people based on the authentication data the systems receive.
Authentication presents several challenges: collecting authentication data, transmitting the
data securely, and knowing whether the person who was originally authenticated is still the
person using the computer system. For example, a user may walk away from a terminal while
still logged on, and another person may start using it.
There are three means of authenticating a user's identity which can be used alone or in
combination:
• Something the individual knows (a secret, e.g., a password, Personal Identification Number
(PIN), or cryptographic key);
• Something the individual possesses (a token, e.g., an ATM card or a smart card); and
• Something the individual is (a biometric, e.g., characteristics such as a voice pattern,
handwriting dynamics, or a fingerprint).
While it may appear that any of these means could provide strong authentication, there are
problems associated with each. If people want to pretend to be someone else on a computer
system, they can guess or learn that individual's password; they can also steal or fabricate
tokens. Each method also has drawbacks for legitimate users and system administrators:
users forget passwords and may lose tokens, and administrative overhead for keeping track
of I&A data and tokens can be substantial. Biometric systems have significant technical,
user acceptance, and cost problems as well.
For most applications, trade-offs will have to be made among security, ease of use, and
ease of administration, especially in modern networked environments.
1.1 Passwords
In general, password systems work by requiring the user to enter a user ID and
password (or passphrase or personal identification number). The system compares the
password to a previously stored password for that user ID. If there is a match, the user
is authenticated and granted access.
1. Guessing or finding passwords. If users select their own passwords, they tend to
make them easy to remember. That often makes them easy to guess. The names of
people's children, pets, or favorite sports teams are common examples. On the other
hand, assigned passwords may be difficult to remember, so users are more likely to
write them down. Many computer systems are shipped with administrative accounts
that have preset passwords. Because these passwords are standard, they are easily
"guessed."
2. Giving passwords away. Users may share their passwords. They may give their
password to a co-worker in order to share files. In addition, people can be tricked into
divulging their passwords. This process is referred to as social engineering.
3. Electronic monitoring. When passwords are transmitted to a computer system,
they can be electronically monitored. This can happen on the network used to transmit
the password or on the computer system itself. Simple encryption of a password that
will be used again does not solve this problem because encrypting the same password
will create the same ciphertext; the ciphertext becomes the password.
4. Accessing the password file. If the password file is not protected by strong access
controls, the file can be downloaded. Password files are often protected with one-way
encryption so that plain-text passwords are not available to system administrators
or hackers (if they successfully bypass access controls). Even if the file is encrypted,
brute force can be used to learn passwords if the file is downloaded (e.g., by
encrypting English words and comparing them to the file).
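One countermeasure to the replay problem described in item 3 above is a challenge-response exchange. The sketch below is a simplified, hypothetical design (not a specific protocol): the server sends a fresh random nonce, and the client proves knowledge of the shared secret by returning an HMAC over that nonce, so a captured response cannot be replayed against a new challenge.

```python
# Hedged sketch of nonce-based challenge-response authentication.
# Because the nonce changes every attempt, eavesdropping on one exchange
# does not yield a reusable credential.

import hashlib, hmac, secrets

SHARED_SECRET = b"correct horse battery staple"  # illustrative secret

def make_challenge() -> bytes:
    return secrets.token_bytes(16)               # fresh per login attempt

def respond(secret: bytes, nonce: bytes) -> str:
    return hmac.new(secret, nonce, hashlib.sha256).hexdigest()

def verify(secret: bytes, nonce: bytes, response: str) -> bool:
    return hmac.compare_digest(respond(secret, nonce), response)

nonce = make_challenge()
resp = respond(SHARED_SECRET, nonce)
print(verify(SHARED_SECRET, nonce, resp))  # True: correct secret, same nonce
# Replaying the captured response against a fresh challenge fails
# (with overwhelming probability, since the nonce differs):
print(verify(SHARED_SECRET, make_challenge(), resp))
```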
Passwords Used as Access Control. Some mainframe operating systems and many
PC applications use passwords as a means of restricting access to specific resources
within a system. Instead of using mechanisms such as access control lists (see Chapter
17), access is granted by entering a password.
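The password-file risk described in item 4 above can be illustrated with a short sketch of salted one-way hashing and a dictionary attack. SHA-256 is used here only to keep the example dependency-free; in practice a deliberately slow password hash (e.g., bcrypt or scrypt) should be used.

```python
# Illustrative sketch: passwords are stored as salted one-way hashes, yet a
# downloaded password file is still at risk because an attacker can hash
# candidate words and compare them to the stored values.

import hashlib, os

def store(password: str):
    """Return (salt, hash) for a new password entry."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).hexdigest()

def check(password: str, salt: bytes, stored_hash: str) -> bool:
    return hashlib.sha256(salt + password.encode()).hexdigest() == stored_hash

salt, h = store("fluffy")                # a guessable pet-name password
dictionary = ["secret", "fluffy", "password"]
guessed = [w for w in dictionary if check(w, salt, h)]
print(guessed)                            # ['fluffy'] — weak passwords fall quickly
```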
Although the authentication derived from the knowledge of a cryptographic key may
be based entirely on something the user knows, it is necessary for the user to also
possess (or have access to) something that can perform the cryptographic
computations, such as a PC or a smart card .
Although some techniques are based solely on something the user possesses, most of
the techniques described in this section are combined with something the user knows.
This combination can provide significantly stronger security than either something the
user knows or possesses alone.
Objects that a user possesses for the purpose of I&A are called tokens. This section
divides tokens into two categories: memory tokens and smart tokens.
The most common example of a memory token is the automatic teller machine (ATM)
card. This uses a combination of something the user
possesses (the card) with something the user knows (the PIN).
Biometric systems can provide an increased level of security for computer systems,
but the technology is still less mature than that of memory tokens or smart tokens.
Imperfections in biometric authentication devices arise from technical difficulties in
measuring and profiling physical attributes as well as from the somewhat variable
nature of physical attributes. These may change, depending on various conditions. For
example, a person's speech pattern may change under stressful conditions or when
suffering from a sore throat or cold.
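The variability described above is why biometric systems compare a similarity score against a threshold, trading false rejects (a legitimate user under stress) against false accepts. A toy sketch (the scores and threshold are invented):

```python
# Hedged sketch of threshold-based biometric matching: readings vary, so
# the decision is a score comparison, not an exact match.

THRESHOLD = 0.80  # illustrative operating point

def decide(similarity: float) -> str:
    """Accept the claimant if the match score clears the threshold."""
    return "accept" if similarity >= THRESHOLD else "reject"

print(decide(0.93))  # normal voice sample: accept
print(decide(0.71))  # same speaker with a sore throat: reject (a false reject)
```

Raising the threshold reduces false accepts but increases false rejects, and vice versa.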
30. List & explain the important implementation issues for I&A systems.
ANS:
A: Some of the important implementation issues for I&A systems include administration,
maintaining authentication, and single log-in.
2. Maintaining Authentication
It is also possible for someone to use a legitimate user's account after log-in. Many computer
systems handle this problem by logging a user out or locking their display or session after a
certain period of inactivity. However, these methods can affect productivity and can make the
computer less user-friendly.
3. Single Log-in
From an efficiency viewpoint, it is desirable for users to authenticate themselves only once
and then to be able to access a wide variety of applications and data available on local and
remote systems, even if those systems require users to authenticate themselves. This is known
as single log-in. If the access is within the same host computer, then the use of a modern
access control system (such as an access control list) should allow for a single log-in. If the
access is across multiple platforms, then the issue is more complicated, as discussed below.
There are three main techniques that can provide single log-in across multiple computers:
host-to-host authentication, authentication servers, and user-to-host authentication.
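A minimal sketch of the authentication-server approach to single log-in follows. This is a simplified, hypothetical design (real systems such as Kerberos are far more elaborate): the user authenticates once, receives a signed ticket, and presents that ticket to other hosts instead of re-entering a password on each one.

```python
# Hedged sketch: an authentication server issues a MAC-signed ticket after
# one log-in; any participating host verifies the ticket with a key shared
# with the AS, so no further password entry is needed.

import hashlib, hmac

AS_KEY = b"key shared between AS and hosts"  # illustrative shared key

def issue_ticket(user: str) -> str:
    """The AS signs the user name after a single successful log-in."""
    return user + ":" + hmac.new(AS_KEY, user.encode(), hashlib.sha256).hexdigest()

def host_accepts(ticket: str) -> bool:
    """A participating host verifies the AS signature on the ticket."""
    user, _, mac = ticket.partition(":")
    expected = hmac.new(AS_KEY, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

ticket = issue_ticket("alice")       # one log-in at the AS
print(host_accepts(ticket))          # True on any participating host
print(host_accepts("alice:forged"))  # False: a forged ticket is rejected
```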
31. What are various criteria used by the system to determine if a request for access will
be granted?
A: The system uses various criteria to determine if a request for access will be granted. Many
of the advantages and complexities involved in implementing and managing access control
are related to the different kinds of user accesses supported.
1. Identity
It is probably fair to say that the majority of access controls are based upon the identity of the
user which is usually obtained through identification and authentication (I&A). The identity
is usually unique, to support individual accountability, but can be a group identification or
can even be anonymous. For example, public information dissemination systems may serve a
large group called "researchers" in which the individual researchers are not known.
2. Roles
Access to information may also be controlled by the job assignment or function (i.e., the role)
of the user who is seeking access. Examples of roles include data entry clerk, purchase
officer, project leader, programmer, and technical editor. Access rights are grouped by role
name, and the use of resources is restricted to individuals authorized to assume the associated
role. An individual may be authorized for more than one role, but may be required to act in
only a single role at a time. Changing roles may require logging out and then in again, or
entering a role-changing command. Note that use of roles is not the same as shared-use
accounts. An individual may be assigned a standard set of rights of a shipping department data
entry clerk, for example, but the account would still be tied to that individual's identity to
allow for auditing.
The use of roles can be a very effective way of providing access control. The process of
defining roles should be based on a thorough analysis of how an organization operates and
should include input from a wide spectrum of users in an organization.
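The role-based scheme described above can be sketched as follows (the role and permission names are invented for illustration; a user authorized for several roles acts in one role at a time):

```python
# Hedged sketch of role-based access control: rights are grouped by role
# name, and the access decision checks both the user's authorization for
# the active role and the role's grant of the requested action.

ROLE_PERMS = {
    "data_entry_clerk": {"create_record"},
    "project_leader":   {"create_record", "approve_record"},
}

USER_ROLES = {"asha": {"data_entry_clerk", "project_leader"}}

def allowed(user: str, active_role: str, action: str) -> bool:
    # The user must hold the role AND the role must grant the action.
    return (active_role in USER_ROLES.get(user, set())
            and action in ROLE_PERMS.get(active_role, set()))

print(allowed("asha", "data_entry_clerk", "approve_record"))  # False in this role
print(allowed("asha", "project_leader", "approve_record"))    # True after a role change
```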
3. Location
Access to particular system resources may also be based upon physical or logical location.
For example, in a prison, all users in areas to which prisoners are physically permitted may be
limited to read-only access. Changing or deleting is limited to areas to which prisoners are
denied physical access. The same authorized users (e.g., prison guards) would operate under
significantly different logical access controls, depending upon their physical location.
Similarly, users can be restricted based upon network addresses (e.g., users from sites within
a given organization may be permitted greater access than those from outside).
4. Time
Time-of-day or day-of-week restrictions are common limitations on access. For example, use
of confidential personnel files may be allowed only during normal working hours - and
maybe denied before 8:00 a.m. and after 6:00 p.m. and all day during weekends and holidays.
5. Transaction
Another approach to access control can be used by organizations handling transactions (e.g.,
account inquiries). Phone calls may first be answered by a computer that requests that callers
key in their account number and perhaps a PIN. Some routine transactions can then be made
directly, but more complex ones may require human intervention. In such cases, the
computer, which already knows the account number, can grant a clerk, for example, access to
a particular account for the duration of the transaction. When completed, the access
authorization is terminated. This means that users have no choice in which accounts they
have access to, and can reduce the potential for mischief. It also eliminates employee
browsing of accounts and can thereby heighten privacy.
6. Service Constraints
Service constraints refer to those restrictions that depend upon the parameters that may arise
during use of the application or that are preestablished by the resource owner/manager. For
example, a particular software package may only be licensed by the organization for five
users at a time. Access would be denied for a sixth user, even if the user were otherwise
authorized to use the application. Another type of service constraint is based upon application
content or numerical thresholds. For example, an ATM may restrict transfers of
money between accounts to certain dollar limits or may limit maximum ATM withdrawals to
$500 per day. Access may also be selectively permitted based on the type of service
requested. For example, users of computers on a network may be permitted to exchange
electronic mail but may not be allowed to log in to each others' computers.
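The time-of-day and service-constraint checks described above can be sketched together as follows (the hours, dollar limit, and parameter names are illustrative):

```python
# Hedged sketch of two access-control criteria: time restrictions on a
# resource and a numerical service constraint (a daily withdrawal cap).

def time_allowed(hour: int, weekday: bool) -> bool:
    """Personnel files: weekdays only, 8:00 a.m. to 6:00 p.m."""
    return weekday and 8 <= hour < 18

def withdrawal_allowed(amount: float, withdrawn_today: float,
                       daily_limit: float = 500.0) -> bool:
    """ATM service constraint: total daily withdrawals capped at $500."""
    return withdrawn_today + amount <= daily_limit

print(time_allowed(10, weekday=True))                # True: working hours
print(time_allowed(20, weekday=True))                # False: after 6:00 p.m.
print(withdrawal_allowed(200, withdrawn_today=400))  # False: exceeds $500/day
```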
Four distinct functional nodes are identified for the generation, distribution, and management
of cryptographic keys: a central oversight authority, key processing facility(ies), service
agents, and client nodes.
1. Central Oversight Authority
The KMI’s central oversight authority is the entity that provides overall KMI data
synchronization and system security oversight for an organization or set of organizations. The
central oversight authority 1) coordinates protection policy and practices (procedures)
documentation, 2) may function as a holder of data provided by service agents, and 3) serves
as the source for common and system level information required by service agents (e.g.,
keying material and registration information, directory data, system policy specifications, and
system-wide key compromise and certificate revocation information).
3. Service Agents
Service agents support organizations’ KMIs as single points of access for other KMI nodes.
All transactions initiated by client nodes are either processed by a service agent or forwarded
to other nodes for processing. Service agents direct service requests from client nodes to key
processing facilities, and when services are required from multiple processing facilities,
coordinate services among the processing facilities to which they are connected. Service
agents are employed by users to order keying material and services, retrieve keying material
and services, and manage cryptographic material and public key certificates. A service agent
may provide cryptographic material and/or certificates by utilizing specific key processing
facilities for key and/or certificate generation.
Service agents may provide registration, directory, and support for data recovery services (i.e.
key recovery), as well as provide access to relevant documentation, such as policy statements
and infrastructure devices. Service agents may also process requests for keying material (e.g.,
user identification credentials), and assign and manage KMI user roles and privileges. A
service agent may also provide interactive help desk services as required.
4. Client Nodes
Client nodes are interfaces for managers, devices, and applications to access KMI functions,
including the requesting of certificates and other keying material. They may include
cryptographic modules, software, and procedures necessary to provide user access to the
KMI. Client nodes interact with service agents to obtain cryptographic key services. Client
nodes provide interfaces to end user entities (e.g., encryption devices) for the distribution of
keying material, for the generation of requests for keying material, for the receipt and
forwarding (as appropriate) of compromised key lists (CKLs) and/or certificate revocation
lists (CRLs), for the receipt of audit requests, and for the delivery of audit responses. Client
nodes typically initiate requests for keying material in order to synchronize new or existing
user entities with the current key structure, and receive encrypted keying material for
distribution to end-user cryptographic devices.
A client node can be a FIPS 140-2 compliant workstation executing KMI security software or
a FIPS 140-2 compliant special purpose device. Actual interactions between a client node and
a service agent depend on whether the client node is a device, a manager, or a functional
security application.
(c) The Federal Information Processing Standard 199 (FIPS 199) impact level which is
determined by the consequences of a compromise of the protected information and/or
processes (including sensitivity and perishability of the information);
(d) The cryptographic protection mechanisms to be employed (e.g., message authentication,
digital signature, encryption);
(e) Protection requirements for cryptographic processes and keying material (e.g., tamper-
resistant processes, confidentiality of keying material); and
(f) Applicable statutes, and executive directives and guidance to which the KMI and its
supporting documentation shall conform.
B. Organizational Responsibilities
The following classes of organizational responsibilities should be identified:
(a) Identification of the Keying Material Manager – Since the security of all material that
is cryptographically protected depends on the security of the keying material
employed, the ultimate responsibility for key management should reside at the
executive level. The keying material manager should report directly to the
organization’s Chief Information Officer (CIO)
(b) Identification of Infrastructure Entities and Roles - The key management policy
document should identify organizational responsibilities for key KMI roles. The
following roles should be assigned:
(1) Central Oversight Authority (may be the Keying Material Manager)
(2) Certification Authorities (CAs)
(3) Registration Authorities (RAs)
(4) Overseers of operations (e.g., Key Processing Facility(ies), Service Agents)
(c) Basis for and Identification of Essential Key Management Roles – The KMP should
also identify responsible organization(s), organization (not individual) contact
information, and any relevant statutory or administrative requirements for many
functions.
UNIT 3
Q58. What are the various components of PKI?
PKI COMPONENTS
Functional elements of a public key infrastructure include certification authorities, registration
authorities, repositories, and archives. The users of the PKI fall into two categories: certificate holders
and relying parties. An attribute authority is an optional component.
A certification authority (CA) is similar to a notary. The CA confirms the identities of parties
sending and receiving electronic payments or other communications. Authentication is a necessary
element of many formal communications between parties, including payment transactions. In most
check-cashing transactions, a driver’s license with a picture is sufficient authentication. A personal
identification number (PIN) provides electronic authentication for transactions at a bank automated
teller machine (ATM).
A registration authority (RA) is an entity that is trusted by the CA to register or vouch for the
identity of users to a CA.
A repository is a database of active digital certificates for a CA system. The main business of the
repository is to provide data that allows users to confirm the status of digital certificates for
individuals and businesses that receive digitally signed messages. These message recipients are called
relying parties. CAs post certificates and CRLs to repositories.
An archive is a database of information to be used in settling future disputes. The business of the
archive is to store and protect sufficient information to determine if a digital signature on an “old”
document should be trusted.
The CA issues a public key certificate for each identity, confirming that the identity has the
appropriate credentials. A digital certificate typically includes the public key, information about the
identity of the party holding the corresponding private key, the operational period for the certificate,
and the CA’s own digital signature. In addition, the certificate may contain other information about
the signing party or information about the recommended uses for the public key. A subscriber is an
individual or business entity that has contracted with a CA to receive a digital certificate verifying an
identity for digitally signing electronic messages.
CAs must also issue and process certificate revocation lists (CRLs), which are lists of certificates
that have been revoked. The list is usually signed by the same entity that issued the certificates.
Certificates may be revoked, for example, if the owner’s private key has been lost; the owner leaves
the company or agency; or the owner’s name changes. CRLs also document the historical revocation
status of certificates. That is, a dated signature may be presumed to be valid if the signature date was
within the validity period of the certificate, and the current CRL of the issuing CA at that date did not
show the certificate to be revoked.
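The dated-signature rule above can be sketched as a small check. This is an illustrative model only: `SimpleCRL` and `presumed_valid` are hypothetical names, and a real CRL carries far more structure than a date and a set of serial numbers.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SimpleCRL:
    this_update: date                          # date this CRL was issued
    revoked: set = field(default_factory=set)  # serials revoked as of this_update

def presumed_valid(serial: int, sig_date: date,
                   not_before: date, not_after: date,
                   crl_at_sig_date: SimpleCRL) -> bool:
    # Rule from the text: the signature date must fall within the
    # certificate's validity period, and the issuing CA's CRL current
    # at that date must not show the certificate as revoked.
    in_validity = not_before <= sig_date <= not_after
    not_revoked = serial not in crl_at_sig_date.revoked
    return in_validity and not_revoked

# Certificate 42, valid 2014-2016, not yet revoked in the mid-2015 CRL.
crl_2015 = SimpleCRL(date(2015, 6, 1))
print(presumed_valid(42, date(2015, 5, 1),
                     date(2014, 1, 1), date(2016, 1, 1), crl_2015))  # True
```

A later revocation does not invalidate the older signature; only the CRL current at the signature date matters.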
PKI users are organizations or individuals that use the PKI, but do not issue certificates. They rely on
the other components of the PKI to obtain certificates, and to verify the certificates of other entities
that they do business with. End entities include the relying party, who relies on the certificate to
know, with certainty, the public key of another entity; and the certificate holder, who is issued a
certificate and can sign digital documents. Note that an individual or organization may be both a
relying party and a certificate holder for various applications.
3.1.1 Certification Authorities
The certification authority, or CA, is the basic building block of the PKI. The CA is a collection of
computer hardware, software, and the people who operate it. The CA is known by two attributes: its
name and its public key. The CA performs four basic PKI functions: issues certificates (i.e., creates
and signs them); maintains certificate status information and issues CRLs; publishes its current (e.g.,
unexpired) certificates and CRLs, so users can obtain the information they need to implement security
services; and maintains archives of status information about the expired certificates that it issued.
These requirements may be difficult to satisfy simultaneously. To fulfill these requirements, the CA
may delegate certain functions to the other components of the infrastructure.
A CA may issue certificates to users, to other CAs, or both. When a CA issues a certificate, it is
asserting that the subject (the entity named in the certificate) has the private key that corresponds to
the public key contained in the certificate. If the CA includes additional information in the certificate,
the CA is asserting that information corresponds to the subject as well. This additional information
might be contact information (e.g., an electronic mail address), or policy information (e.g., the types
of applications that can be performed with this public key.)
When the subject of the certificate is another CA, the issuer is asserting that the certificates issued by
the other CA are trustworthy.
The CA inserts its name in every certificate (and CRL) it generates, and signs them with its private
key. Once users establish that they trust a CA (directly, or through a certification path), they can trust
certificates issued by that CA. Users can identify certificates issued by that CA by comparing the
issuer name. To ensure the certificate is genuine, they verify the signature using the CA’s public key. As
a result, it is important that the CA provide adequate protection for its own private key. Federal
government CAs should always use cryptographic modules that have been validated against FIPS
140.
As CA operation is central to the security services provided by a PKI, this topic is explored in
additional detail in Section 5, CA System Operation.
3.1.2 Registration Authorities
An RA is designed to verify certificate contents for the CA. Certificate contents may reflect
information presented by the entity requesting the certificate, such as a driver’s license or a recent pay
stub. They may also reflect information provided by a third party. For example, the credit limit
assigned to a credit card reflects information obtained from credit bureaus. A certificate may reflect
data from the company’s Human Resources department, or a letter from a designated company
official. For example, Bob’s certificate could indicate that he has signature authority for small
contracts. The RA aggregates these inputs and provides this information to the CA.
Like the CA, the RA is a collection of computer hardware, software, and the person or people who
operate it. Unlike a CA, an RA will often be operated by a single person. Each CA will maintain a list
of accredited RAs; that is, a list of RAs determined to be trustworthy. An RA is known to the CA by a
name and a public key. By verifying the RA’s signature on a message, the CA can be sure an
accredited RA provided the information, and it can be trusted. As a result, it is important that the RA
provide adequate protection for its own private key. Federal government RAs should always use
cryptographic modules that have been validated against FIPS 140.
3.1.3 PKI Repositories
PKI applications are heavily dependent on an underlying directory service for the distribution of
certificates and certificate status information. The directory provides a means of storing and
distributing certificates, and managing updates to certificates. Directory servers are typically
implementations of the X.500 standard or a subset of this standard. X.500 consists of a series of
recommendations and the specification itself references several ISO standards. It was designed for
directory services that could work across system, corporate, and international boundaries. A suite of
protocols is specified for operations such as chaining, shadowing, and referral for server-to-server
communication, and the Directory Access Protocol (DAP) for client to server communication. The
Lightweight Directory Access Protocol (LDAP) was later developed as an alternative to DAP. Most
directory servers and clients support LDAP, though not all support DAP.
To be useful for PKI applications, directory servers need to be interoperable; without such
interoperability, a relying party will not be able to retrieve the needed certificates and CRLs from
remote sites for signature verification. To promote interoperability among Federal agency
directories and thus PKI deployments, the Federal PKI Technical Working Group is developing a
Federal PKI Directory Profile [Chang] to assist agencies interested in participating in the FBCA
demonstration effort. It is recommended that agencies refer to this document for the minimum
interoperability requirements before standing up their agency directories.
3.1.4 Archives
An archive accepts the responsibility for long term storage of archival information on behalf of the
CA. An archive asserts that the information was good at the time it was received, and has not been
modified while in the archive. The information provided by the CA to the archive must be sufficient
to determine if a certificate was actually issued by the CA as specified in the certificate, and valid at
that time. The archive protects that information through technical mechanisms and appropriate
procedures while in its care. If a dispute arises at a later date, the information can be used to verify
that the private key associated with the certificate was used to sign a document. This permits the
verification of signatures on old documents (such as wills) at a later date.
In Figure 2, the Bridge CA has established relationships with three enterprise PKIs. The first is
Bob’s and Alice’s CA, the second is Carol’s hierarchical PKI, and the third is Doug’s mesh PKI.
None of the users trusts the Bridge CA directly. Alice and Bob trust the CA that issued their
certificates; they trust the Bridge CA because the Fox CA issued a certificate to it. Carol’s trust point
is the root CA of her hierarchy; she trusts the Bridge CA because the root CA issued a certificate to it.
Doug trusts the CA in the mesh that issued his certificate; he trusts the Bridge CA because there is a
valid certification path from the CA that issued him a certificate to the Bridge CA. Alice (or Bob) can
use the bridge of trust that exists through the Bridge CA to establish relationships with Carol and
Doug.
There are ten common fields: six mandatory and four optional. The mandatory fields are: the serial
number, the certificate signature algorithm identifier, the certificate issuer name, the certificate
validity period, the public key, and the subject name. The subject is the party that controls the
corresponding private key. There are four optional fields: the version number, two unique identifiers,
and the extensions. These optional fields appear only in version 2 and 3 certificates.
Version. The version field describes the syntax of the certificate. When the version field is omitted,
the certificate is encoded in the original, version 1, syntax. Version 1 certificates do not include the
unique identifiers or extensions. When the certificate includes unique identifiers but not extensions,
the version field indicates version 2. When the certificate includes extensions, as almost all modern
certificates do, the version field indicates version 3.
Serial number. The serial number is an integer assigned by the certificate issuer to each certificate.
The serial number must be unique for each certificate generated by a particular issuer. The
combination of the issuer name and serial number uniquely identifies any certificate.
Signature. The signature field indicates which digital signature algorithm (e.g., DSA with SHA-1 or
RSA with MD5) was used to protect the certificate.
Issuer. The issuer field contains the X.500 distinguished name of the TTP that generated the
certificate.
Validity. The validity field indicates the dates on which the certificate becomes valid and the date on
which the certificate expires.
Subject. The subject field contains the distinguished name of the holder of the private key
corresponding to the public key in this certificate. The subject may be a CA, a RA, or an end entity.
End entities can be human users, hardware devices, or anything else that might make use of the
private key.
Subject public key information. The subject public key information field contains the subject’s
public key, optional parameters, and algorithm identifier. The public key in this field, along with the
optional algorithm parameters, is used to verify digital signatures or perform key management. If the
certificate subject is a CA, then the public key is used to verify the digital signature on a certificate.
Issuer unique ID and subject unique ID. These fields contain identifiers, and appear only in version
2 or version 3 certificates. The subject and issuer unique identifiers are intended to handle the reuse of
subject names or issuer names over time. However, this mechanism has proven to be an unsatisfactory
solution. The Internet Certificate and CRL Profile [HOUS99] recommends against including these
fields.
Extensions. This optional field only appears in version 3 certificates. If present, this field contains
one or more certificate extensions. Each extension includes an extension identifier, a criticality flag,
and an extension value. Common certificate extensions have been defined by ISO and ANSI to
answer questions that are not satisfied by the common fields.
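The ten common fields can be pictured as a simple record. The sketch below is an illustrative model of the certificate content only, not an actual ASN.1/X.509 implementation; all names and values are assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class CertificateFields:
    # Six mandatory fields
    serial_number: int            # unique per issuer
    signature_algorithm: str      # e.g. "sha256WithRSAEncryption"
    issuer: str                   # X.500 DN of the TTP that generated the certificate
    not_before: datetime          # validity period start
    not_after: datetime           # validity period end
    subject: str                  # DN of the holder of the corresponding private key
    subject_public_key: bytes
    # Four optional fields, present only in version 2 and 3 certificates
    version: int = 1
    issuer_unique_id: Optional[bytes] = None
    subject_unique_id: Optional[bytes] = None
    extensions: Optional[dict] = None

def unique_key(cert: CertificateFields):
    # The combination of issuer name and serial number uniquely
    # identifies any certificate.
    return (cert.issuer, cert.serial_number)

cert = CertificateFields(
    serial_number=1001,
    signature_algorithm="sha256WithRSAEncryption",
    issuer="c=US; o=Example CA",
    not_before=datetime(2015, 1, 1),
    not_after=datetime(2016, 1, 1),
    subject="c=US; o=Example Org; cn=Alice Adams",
    subject_public_key=b"illustrative-key-bytes",
    version=3,
    extensions={"basicConstraints": "CA:FALSE"},
)
print(unique_key(cert))  # ('c=US; o=Example CA', 1001)
```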
Subject type. This field indicates whether a subject is a CA or an end entity.
Names and identity information. This field aids in resolving questions about a user’s identity, e.g.,
are “alice@gsa.gov” and “c=US; o=U.S. Government; ou=GSA; cn=Alice Adams” the same person?
Key attributes. This field specifies relevant attributes of public keys, e.g., whether it can be
used for key transport, or be used to verify a digital signature.
Policy information. This field helps users determine if another user’s certificate can be trusted,
whether it is appropriate for large transactions, and other conditions that vary with organizational
policies.
Certificate extensions allow the CA to include information not supported by the basic certificate
content. Any organization may define a private extension to meet its particular business requirements.
However, most requirements can be satisfied using standard extensions. Standard extensions are
widely supported by commercial products. Standard extensions offer improved interoperability, and
they are more cost effective than private extensions. Extensions have three components: extension
identifier, a criticality flag, and extension value. The extension identifier indicates the format and
semantics of the extension value. The criticality flag indicates the importance of the extension. When
the criticality flag is set, the information is essential to certificate use. Therefore, if an unrecognized
critical extension is encountered, the certificate must not be used. Alternatively, an unrecognized non-
critical extension may be ignored. The subject of a certificate could be an end user or another CA. The
basic certificate fields do not differentiate between these types of users. The basic constraints
extension appears in CA certificates, indicating this certificate may be used to build certification
paths.
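The criticality rule described above (reject on an unrecognized critical extension, ignore an unrecognized non-critical one) can be sketched as follows; the extension identifiers and the recognized set are purely illustrative.

```python
def certificate_usable(extensions, recognized):
    """Apply the criticality rule: if an unrecognized critical extension
    is encountered, the certificate must not be used; unrecognized
    non-critical extensions may simply be ignored."""
    for ext_id, critical, _value in extensions:
        if critical and ext_id not in recognized:
            return False
    return True

recognized = {"basicConstraints", "keyUsage"}
exts = [
    ("basicConstraints", True, "CA:TRUE"),
    ("mysteryExtension", False, "ignored"),  # unrecognized but non-critical: OK
]
print(certificate_usable(exts, recognized))                                   # True
print(certificate_usable(exts + [("mysteryCritical", True, "x")], recognized))  # False
```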
The key usage extension indicates the types of security services that this public key can be used to
implement. These may be generic services (e.g., non-repudiation or data encryption) or PKI specific
services (e.g., verifying signatures on certificates or CRLs).
The subject field contains a directory name, but that may not be the type of name that is used by a
particular application. The subject alternative name extension is used to provide other name forms
for the owner of the private key, such as DNS names or email addresses. For example, the email
address alice@gsa.gov could appear in this field.
CAs may have multiple key pairs. The authority key identifier extension helps users select the right
public key to verify the signature on this certificate.
Users may also have multiple key pairs, or multiple certificates for the same key. The subject key
identifier extension is used to identify the appropriate public key.
Organizations may support a broad range of applications using PKI. Some certificates may be more
trustworthy than others, based on the procedures used to issue them or the type of user cryptographic
module. The certificate policies extension contains a globally unique identifier that specifies the
certificate policy that applies to this certificate.
Different organizations (e.g., different companies or government agencies) will use different
certificate policies. Users will not recognize policies from other organizations. The policy mappings
extension converts policy information from other organizations into locally useful policies. This
extension appears only in CA certificates.
The CRL distribution points extension contains a pointer to the X.509 CRL where status information
for this certificate may be found. (X.509 CRLs are described in the following section.)
When a CA issues a certificate to another CA, it is asserting that the other CA's certificates are
trustworthy. Sometimes, the issuer would like to assert that a subset of the certificates should be
trusted. There are three basic ways to specify that a subset of certificates should be trusted:
The basic constraints extension (described above) has a second role, indicating whether this CA is
trusted to issue CA certificates, or just user certificates.
The name constraints extension can be used to describe a subset of certificates based on the names in
either the subject or subject alternative name fields. This extension can be used to define the set of
acceptable names, or the set of unacceptable names. That is, the CA could assert “names in the NIST
directory space are acceptable” or “names in the NIST directory space are not acceptable.”
The policy constraints extension can be used to describe a subset of certificates based on the contents
of the policy extension. If policy constraints are implemented, users will reject certificates without a
policy extension, or where the specified policies are unrecognized.
3.3.2 Certificate Revocation Lists (CRLs)
Certificates contain an expiration date. Unfortunately, the data in a certificate may become unreliable
before the expiration date arrives. Certificate issuers need a mechanism to provide a status update for
the certificates they have issued. One mechanism is the X.509 certificate revocation list (CRL).
CRLs are the PKI analog of the credit card hot list that store clerks review before accepting large
credit card transactions. The CRL is protected by a digital signature of the CRL issuer. If the signature
can be verified, CRL users know the contents have not been tampered with since the signature was
generated. CRLs contain a set of common fields, and may also include an optional set of extensions.
The CRL contains the following fields:
Version. The optional version field describes the syntax of the CRL. (In general, the version will be
two.)
Signature. The signature field contains the algorithm identifier for the digital signature algorithm
used by the CRL issuer to sign the CRL.
Issuer. The issuer field contains the X.500 distinguished name of the CRL issuer.
This update. The this-update field indicates the issue date of this CRL.
Next update. The next-update field indicates the date by which the next CRL will be issued.
Revoked certificates. The revoked certificates structure lists the revoked certificates. The entry for
each revoked certificate contains the certificate serial number, time of revocation, and optional CRL
entry extensions.
The CRL entry extensions field is used to provide additional information about this particular revoked
certificate. This field may only appear if the version is v2.
CRL Extensions. The CRL extensions field is used to provide additional information about the whole
CRL. Again, this field may only appear if the version is v2.
ITU-T and ANSI X9 have defined several CRL extensions for X.509 v2 CRLs. They are specified in
[X509 97] and [X955]. Each extension in a CRL may be designated as critical or non-critical. A CRL
validation fails if an unrecognized critical extension is encountered.
However, unrecognized non-critical extensions may be ignored. The X.509 v2 CRL format allows
communities to define private extensions to carry information unique to those communities.
Communities are encouraged to define non-critical private extensions so that their CRLs can be
readily validated by all implementations.
The most commonly used CRL extensions include the following:
The CRL number extension is essentially a counter. In general, this extension is provided so that
users are informed if an emergency CRL was issued.
As noted in the previous section, CAs may have multiple key pairs. When appearing in a CRL, the
authority key identifier extension helps users select the right public key to verify the signature on
this CRL.
The issuer field contains a directory name, but that may not be the type of name that is used by a
particular application. The issuer alternative name extension is used to provide other name forms for
the owner of the private key, such as DNS names or email addresses. For example, the email address
CA1@nist.gov could appear in this field.
The issuing distribution points extension is used in conjunction with the CRL distribution points
extension in certificates. This extension is used to confirm that this particular CRL is the one
described by the CRL distribution points extension and contains status information for the certificate
in question. This extension is required when the CRL does not cover all certificates issued by a CA,
since the CRL may be distributed on an insecure network.
The extensions described above apply to the entire CRL. There are also extensions that apply to a
particular revoked certificate.
Certificates may be revoked for a number of different reasons. The user’s crypto module may have
been stolen, for example, or the module may simply have been broken. The reason code extension
describes why a particular certificate was revoked. The relying party may use this information to
decide if a previously generated signature may be accepted.
Sometimes a CA does not wish to issue its own CRLs. It may delegate this task to another CA.
The CA that issues a CRL may include the status of certificates issued by a number of different CAs
in the same CRL. The certificate issuer extension is used to specify which CA issued a particular
certificate, or set of certificates, on a CRL.
directory that contains only the public keys or certificates, and to locate this directory at the border of
the organization - this directory is referred to as a border directory. A likely location for the
directory would be outside the organization’s firewall or perhaps on a protected DMZ segment of its
network so that it is still available to the public but better protected from attack. Figure 3 illustrates a
typical arrangement of PKI-related systems.
The main directory server located within the organization's protected network would periodically
refresh the border directory with new certificates or updates to the existing certificates. Users within
the organization would use the main directory server, whereas other systems and organizations would
access only the border directory. When a user in organization A wishes to send encrypted e-mail to a
user in organization B, user A would then retrieve user B's certificate from organization B's border
directory, and then use the public key in that certificate to encrypt the e-mail.
– Client requests and server responses, which can be very helpful in reconstructing sequences
of events and determining their apparent outcome. If the application logs successful user
authentications, it is usually possible to determine which user made each request. Some
applications can perform highly detailed logging, such as e-mail servers recording the sender,
recipients, subject name, and attachment names for each e-mail; Web servers recording each
URL requested and the type of response provided by the server; and business applications
recording which financial records were accessed by each user. This information can be used
to identify or investigate incidents and to monitor application usage for compliance and
auditing purposes.
– Account information such as successful and failed authentication attempts, account changes
(e.g., account creation and deletion, account privilege assignment), and use of privileges. In
addition to identifying security events such as brute force password guessing and escalation of
privileges, it can be used to identify who has used the application and when each person has
used it.
– Usage information such as the number of transactions occurring in a certain period (e.g.,
minute, hour) and the size of transactions (e.g., e-mail message size, file transfer size). This
can be useful for certain types of security monitoring (e.g., a ten-fold increase in e-mail
activity might indicate a new e-mail–borne malware threat; an unusually large outbound
e-mail message might indicate inappropriate release of information).
– Significant operational actions such as application startup and shutdown, application failures,
and major application configuration changes. This can be used to identify security
compromises and operational failures.
Q 64. State & explain the common log management infrastructure functions.
Log management infrastructures typically perform several functions that assist in the storage,
analysis, and disposal of log data. These functions are normally performed in such a way that they do
not alter the original logs. The following items describe common log management infrastructure
functions:
General
– Log parsing is extracting data from a log so that the parsed values can be used as input for another
logging process. A simple example of parsing is reading a text-based log file that contains 10 comma-
separated values per line and extracting the 10 values from each line. Parsing is performed as part of
many other logging functions, such as log conversion and log viewing.
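The ten-value example above can be sketched with the standard library's `csv` module; the field layout here is hypothetical, since real logs define their own schemas.

```python
import csv
import io

# A text-based log where each line holds 10 comma-separated values
# (the field layout is invented for illustration).
raw = "2015-06-01,14:34:56,host1,sshd,auth,failure,alice,10.0.0.5,22,bad password\n"

for row in csv.reader(io.StringIO(raw)):
    # Parsing extracts the 10 values so later logging processes can use them.
    assert len(row) == 10
    log_date, log_time, host, outcome = row[0], row[1], row[2], row[5]
    print(host, outcome)  # host1 failure
```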
– Event filtering is the suppression of log entries from analysis, reporting, or long-term storage
because their characteristics indicate that they are unlikely to contain information of interest. For
example, duplicate entries and standard informational entries might be filtered because they do not
provide useful information to log analysts. Typically, filtering does not affect the generation or short-
term storage of events because it does not alter the original log files.
– In event aggregation, similar entries are consolidated into a single entry containing a count of the
number of occurrences of the event. For example, a thousand entries that each record part of a scan
could be aggregated into a single entry that indicates how many hosts were scanned. Aggregation is
often performed as logs are originally generated (the generator counts similar related events and
periodically writes a log entry containing the count), and it can also be performed as part of log
reduction or event correlation processes, which are described below.
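Filtering and aggregation as described above can be sketched together in a few lines; the entry dictionaries and field names are illustrative, not a real log format.

```python
# Sample entries: one standard informational record and several per-host
# records from a scan (structure invented for illustration).
entries = [
    {"type": "info", "msg": "heartbeat"},
    {"type": "scan", "target": "10.0.0.1"},
    {"type": "scan", "target": "10.0.0.2"},
    {"type": "scan", "target": "10.0.0.2"},  # duplicate
]

# Event filtering: suppress informational entries from analysis. The
# original list (the "log") is left untouched; we filter a working copy.
interesting = [e for e in entries if e["type"] != "info"]

# Event aggregation: consolidate the similar scan entries into a single
# entry carrying a count of distinct hosts scanned.
hosts = {e["target"] for e in interesting if e["type"] == "scan"}
aggregate = {"type": "scan-summary", "hosts_scanned": len(hosts)}
print(aggregate)  # {'type': 'scan-summary', 'hosts_scanned': 2}
```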
Storage
– Log rotation is closing a log file and opening a new log file when the first file is considered to be
complete. Log rotation is typically performed according to a schedule (e.g., hourly, daily, weekly) or
when a log file reaches a certain size. The primary benefits of log rotation are preserving log entries
and keeping the size of log files manageable. When a log file is rotated, the preserved log file can be
compressed to save space. Also, during log rotation, scripts are often run that act on the archived log.
For example, a script might analyze the old log to identify malicious activity, or might perform
filtering that causes only log entries meeting certain characteristics to be preserved. Many log
generators offer log rotation capabilities; many log files can also be rotated through simple scripts or
third-party utilities, which in some cases offer features not provided by the log generators.
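Size-based rotation as described above is built into Python's standard library: `RotatingFileHandler` closes the active file when it reaches `maxBytes`, renames it, and opens a new one, keeping `backupCount` preserved files. The sizes below are artificially small so rotation happens quickly.

```python
import logging
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
path = os.path.join(logdir, "app.log")

# Rotate whenever the file exceeds 200 bytes; preserve at most 3 old files
# (app.log.1, app.log.2, app.log.3), keeping total size manageable.
handler = logging.handlers.RotatingFileHandler(path, maxBytes=200, backupCount=3)
logger = logging.getLogger("rotation-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(50):
    logger.info("event number %d", i)

print(sorted(os.listdir(logdir)))  # app.log plus the preserved rotated files
```

Time-based schedules (hourly, daily) are covered by the sibling `TimedRotatingFileHandler` class.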
– Log archival is retaining logs for an extended period of time, typically on removable media, a
storage area network (SAN), or a specialized log archival appliance or server. Logs often need to be
preserved to meet legal or regulatory requirements. Section 4.2 provides additional information on log
archival. There are two types of log archival: retention and preservation. Log retention is archiving
logs on a regular basis as part of standard operational activities. Log preservation is keeping logs that
normally would be discarded, because they contain records of activity of particular interest. Log
preservation is typically performed in support of incident handling or investigations.
– Log compression is storing a log file in a way that reduces the amount of storage space needed for
the file without altering the meaning of its contents. Log compression is often performed when logs
are rotated or archived.
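Compressing a rotated log with the standard library's `gzip` module illustrates the point that compression changes only the on-disk representation, never the meaning: the bytes round-trip exactly.

```python
import gzip
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
log = os.path.join(d, "app.log.1")  # a hypothetical rotated log file
with open(log, "w") as f:
    f.write("same informational line repeated\n" * 1000)

# Compress the rotated file; log text is highly repetitive, so it shrinks a lot.
with open(log, "rb") as src, gzip.open(log + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Decompressing yields the original contents bit for bit.
with gzip.open(log + ".gz", "rb") as f, open(log, "rb") as orig:
    assert f.read() == orig.read()

print(os.path.getsize(log), ">", os.path.getsize(log + ".gz"))
```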
– Log reduction is removing unneeded entries from a log to create a new log that is smaller. A similar
process is event reduction, which removes unneeded data fields from all log entries. Log and event
reduction are often performed in conjunction with log archival so that only the log entries and data
fields of interest are placed into long-term storage.
– Log conversion is parsing a log in one format and storing its entries in a second format. For
example, conversion could take data from a log stored in a database and save it in an XML format in a
text file. Many log generators can convert their own logs to another format; third-party conversion
utilities are also available. Log conversion sometimes includes actions such as filtering, aggregation,
and normalization.
– In log normalization, each log data field is converted to a particular data representation and
categorized consistently. One of the most common uses of normalization is storing dates and times in
a single format. For example, one log generator might store the event time in a twelve-hour format
(2:34:56 P.M. EDT) categorized as Timestamp, while another log generator might store it in a
twenty-four-hour format (14:34) categorized as Event Time, with the time zone stored in a different
notation (-0400) in a different field categorized as Time Zone. Normalizing the data makes analysis and reporting
much easier when multiple log formats are in use. However, normalization can be very resource-
intensive, especially for complex log entries (e.g., typical intrusion detection logs).
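The timestamp example above can be sketched with the standard library's `datetime` module: two generators record the same instant in different formats, and normalizing both to UTC ISO 8601 makes them directly comparable. (The input format strings are illustrative.)

```python
from datetime import datetime, timezone

# Generator A: twelve-hour time with a numeric UTC offset.
a = datetime.strptime("2015-06-01 02:34:56 PM -0400", "%Y-%m-%d %I:%M:%S %p %z")
# Generator B: twenty-four-hour time, offset in a separate token.
b = datetime.strptime("2015-06-01 14:34:56 -0400", "%Y-%m-%d %H:%M:%S %z")

# Normalize both to a single representation: UTC, ISO 8601.
norm_a = a.astimezone(timezone.utc).isoformat()
norm_b = b.astimezone(timezone.utc).isoformat()
print(norm_a)  # 2015-06-01T18:34:56+00:00
```

Parsing named zones such as "EDT" is the genuinely resource-intensive part the text warns about; numeric offsets like `-0400` are the easy case.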
– Log file integrity checking involves calculating a message digest for each file and storing the
message digest securely to ensure that changes to archived logs are detected. A message digest is a
cryptographic hash value that uniquely identifies data and has the property that changing a single bit
in the data causes a completely different message digest to be generated. The most commonly used
message digest algorithms are MD5 and Secure Hash Algorithm 1 (SHA-1). If the log file is modified
and its message digest is recalculated, it will not match the original message digest, indicating that
the file has been altered. The original message digests should be protected from alteration through
FIPS-approved encryption algorithms, storage on read-only media, or other suitable means.
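A minimal integrity check with the standard library's `hashlib` looks as follows. SHA-256 is used here rather than the MD5/SHA-1 mentioned above, since both of those are now considered weak for integrity purposes; the file names are illustrative.

```python
import hashlib
import os
import tempfile

def file_digest(path, algorithm="sha256"):
    # Hash the file in chunks so large archived logs need not fit in memory.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

d = tempfile.mkdtemp()
log = os.path.join(d, "audit.log")
with open(log, "w") as f:
    f.write("original entry\n")
baseline = file_digest(log)  # store this digest securely

# Simulate tampering: even a small change yields a completely different digest.
with open(log, "a") as f:
    f.write("injected entry\n")
print(file_digest(log) != baseline)  # True
```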
Analysis
– Event correlation is finding relationships between two or more log entries. The most common form
of event correlation is rule-based correlation, which matches multiple log entries from a single source
or multiple sources based on logged values, such as timestamps, IP addresses, and event types. Event
correlation can also be performed in other ways, such as using statistical methods or visualization
tools. If correlation is performed through automated methods, generally the result of successful
correlation is a new log entry that brings together the pieces of information into a single place.
Depending on the nature of that information, the infrastructure might also generate an alert to indicate
that the identified event needs further investigation.
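A rule-based correlation of the kind described above can be sketched as follows. The field names (ts, src_ip, source, event) and the 60-second window are illustrative assumptions, not a standard schema:

```python
from collections import defaultdict

def correlate(entries, window=60):
    """Rule: merge entries from different sources that share a source IP
    within `window` seconds. Returns one combined entry per matched IP."""
    by_ip = defaultdict(list)
    for e in entries:
        by_ip[e["src_ip"]].append(e)

    correlated = []
    for ip, group in by_ip.items():
        group.sort(key=lambda e: e["ts"])
        sources = {e["source"] for e in group}
        # Require 2+ distinct sources inside the time window
        if len(sources) > 1 and group[-1]["ts"] - group[0]["ts"] <= window:
            correlated.append({
                "src_ip": ip,
                "sources": sorted(sources),
                "events": [e["event"] for e in group],
            })
    return correlated
```

Each result is the "new log entry that brings together the pieces of information into a single place"; an alerting layer could then flag those combined entries for investigation.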
– Log viewing is displaying log entries in a human-readable format. Most log generators provide
some sort of log viewing capability; third-party log viewing utilities are also available. Some log
viewers provide filtering and aggregation capabilities.
– Log reporting is displaying the results of log analysis. Log reporting is often performed to
summarize significant activity over a particular period of time or to record detailed information
related to a particular event or series of events.
Disposal
– Log clearing is removing all entries from a log that precede a certain date and time. Log clearing is
often performed to remove old log data that is no longer needed on a system because it is not of
importance or it has been archived.
A log management infrastructure usually encompasses most or all of the functions described in this
section. Section 3.1 describes the components and architectures of log management infrastructures.
The placement of some of the log functions among the three tiers of the log management
infrastructure depends primarily on the type of log management software used. Log management
infrastructures are typically based on one of the two major categories of log management software:
syslog-based centralized logging software and security information and event management software.
Sections 3.3 and 3.4 describe these types of software. Section 3.5 describes additional types of
software that may be valuable within a log management infrastructure.
65. What are the various types of network-based and host-based security software?
Most organizations use several types of network-based and host-based security software to detect
malicious activity, protect systems and data, and support incident response efforts. Accordingly,
security software is a major source of computer security log data. Common types of network-based
and host-based security software include the following:
– Antimalware software, such as antivirus and antispyware software
– Intrusion detection and intrusion prevention systems
– Remote access software, such as virtual private network (VPN) gateways
– Web proxies
– Vulnerability management software
– Authentication servers
– Routers and firewalls
– Network quarantine servers
Most organizations face similar log management-related challenges, which have the same underlying
problem: effectively balancing a limited amount of log management resources with an ever-increasing
supply of log data. This section discusses the most common types of challenges, divided into three
groups. First, there are several potential problems with the initial generation of logs because of their
variety and prevalence. Second, the confidentiality, integrity, and availability of generated logs could
be breached inadvertently or intentionally. Finally, the people responsible for performing log analysis
are often inadequately prepared and supported. The three categories of log challenges:
In a typical organization, many hosts’ OSs, security software, and other applications generate and
store logs. This complicates log management in the following ways:
Many Log Sources. Logs are located on many hosts throughout the organization, necessitating
log management to be performed throughout the organization. Also, a single log source can
generate multiple logs—for example, an application storing authentication attempts in one log
and network activity in another log.
Inconsistent Log Content. Each log source records certain pieces of information in its log
entries, such as host IP addresses and usernames. For efficiency, log sources often record only
the pieces of information that they consider most important. This can make it difficult to link
events recorded by different log sources because they may not have any common values
recorded (e.g., source 1 records the source IP address but not the username, and source 2
records the username but not the source IP address).
Inconsistent Timestamps. Each host that generates logs typically references its internal clock
when setting a timestamp for each log entry. If a host’s clock is inaccurate, the timestamps in its
logs will also be inaccurate. This can make analysis of logs more difficult, particularly when logs
from multiple hosts are being analyzed. For example, timestamps might indicate that event A
happened 45 seconds before event B, when event A actually happened two minutes after event B.
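If per-host clock offsets can be measured (for example, against an NTP reference), analysis tools can compensate before merging logs. A sketch, with offsets and timestamps invented to mirror the example above (hostB's clock runs 165 seconds fast, so event B appears 45 s after A but truly occurred two minutes before it):

```python
# Measured clock offset per host, in seconds ahead of the reference clock
# (hypothetical values for illustration).
CLOCK_OFFSET = {"hostA": 0, "hostB": 165}

def corrected_time(host: str, ts: float) -> float:
    """Subtract the host's known offset to recover the true (reference) time."""
    return ts - CLOCK_OFFSET.get(host, 0)

def merge_sorted(entries):
    """Merge entries from multiple hosts in true chronological order."""
    return sorted(entries, key=lambda e: corrected_time(e["host"], e["ts"]))
```

Ordering by corrected rather than raw timestamps prevents the "event A before event B" misreading described in the text.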
Inconsistent Log Formats. Many of the log source types use different formats for their logs,
such as comma-separated or tab-separated text files, databases, syslog, Simple Network
Management Protocol (SNMP), Extensible Markup Language (XML), and binary files. Some logs
are designed for humans to read, while others are not; some logs use standard formats, while
others use proprietary formats. Some logs are created not for local storage in a file, but for
transmission to another system for processing; a common example of this is SNMP traps. For
some output formats, particularly text files, there are many possibilities for the sequence of the
values in each log entry and the delimiters between the values (e.g., comma-separated values, tab-
delimited values, XML).
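A common mitigation for inconsistent formats is a per-source parser that maps each format into one normalized record. The two line formats and their field orders below are hypothetical:

```python
import csv
import io

def parse_csv_line(line: str) -> dict:
    """Assumed comma-separated field order: ts,src_ip,user,action."""
    ts, src_ip, user, action = next(csv.reader(io.StringIO(line)))
    return {"ts": ts, "src_ip": src_ip, "user": user, "action": action}

def parse_tsv_line(line: str) -> dict:
    """Assumed tab-separated field order: ts,src_ip,user,action."""
    ts, src_ip, user, action = line.rstrip("\n").split("\t")
    return {"ts": ts, "src_ip": src_ip, "user": user, "action": action}

# One parser registered per log source; every parser emits the same schema.
PARSERS = {"app1": parse_csv_line, "app2": parse_tsv_line}
```

Because every parser emits the same dictionary keys, downstream filtering, correlation, and reporting code only ever sees one format.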
Because logs contain records of system and network security, they need to be protected from breaches
of their confidentiality and integrity. For example, logs might intentionally or inadvertently capture
sensitive information such as users’ passwords and the content of e-mails. This raises security and
privacy concerns involving both the individuals that review the logs and others that might be able to
access the logs through authorized or unauthorized means. Logs that are secured improperly in storage
or in transit might also be susceptible to intentional and unintentional alteration and destruction. This
could cause a variety of impacts, including allowing malicious activities to go unnoticed and
manipulating evidence to conceal the identity of a malicious party. For example, many rootkits are
specifically designed to alter logs to remove any evidence of the rootkits’ installation or execution.
Organizations also need to protect the availability of their logs.
cannot easily see, such as correlating entries from multiple logs that relate to the same event. Another
problem is that many administrators consider log analysis to be boring and to provide little benefit for
the amount of time required. Log analysis is often treated as reactive—something to be done after a
problem has been identified through other means—rather than proactive, to identify ongoing activity
and look for signs of impending problems. Traditionally, most logs have not been analyzed in a real-
time or near-real-time manner. Without sound processes for analyzing logs, the value of the logs is
significantly reduced.
Log Generation. The first tier contains the hosts that generate the log data. Some hosts run logging
client applications or services that make their log data available through networks to log servers in the
second tier. Other hosts make their logs available through other means, such as allowing the servers to
authenticate to them and retrieve copies of the log files.
Log Analysis and Storage. The second tier is composed of one or more log servers that receive log
data or copies of log data from the hosts in the first tier. The data is transferred to the servers either in
a real-time or near-real-time manner, or in occasional batches based on a schedule or the amount of
log data waiting to be transferred. Servers that receive log data from multiple log generators are
sometimes called collectors or aggregators. Log data may be stored on the log servers themselves or
on separate database servers.
Log Monitoring. The third tier contains consoles that may be used to monitor and review log data
and the results of automated analysis. Log monitoring consoles can also be used to generate reports.
In some log management infrastructures, consoles can also be used to provide management for the log
servers and clients. Also, console user privileges sometimes can be limited to only the necessary
functions and data sources for each user.
The second tier—log analysis and storage—can vary greatly in complexity and structure. The
simplest arrangement is a single log server that handles all log analysis and storage functions.
Examples of more complex second tier arrangements are as follows:
- Multiple log servers that each perform a specialized function, such as one server performing log
collection, analysis, and short-term log storage, and another server performing long-term storage.
- Multiple log servers that each perform analysis and/or storage for certain log generators. This can
also provide some redundancy. A log generator can switch to a backup log server if its primary log
server becomes unavailable. Also, log servers can be configured to share log data with each other,
which also supports redundancy.
- Two levels of log servers, with the first level of distributed log servers receiving logs from the log
generators and forwarding some or all of the log data they receive to a second level of more
centralized log servers.
68. What are the various functions of log management infrastructure?
Same as Q.64
Syslog was developed at a time when the security of logs was not a major consideration. Accordingly,
it did not support the use of basic security controls that would preserve the confidentiality, integrity,
and availability of logs. For example, most syslog implementations use the connectionless, unreliable
User Datagram Protocol (UDP) to transfer logs between hosts. UDP provides no assurance that log
entries are received successfully or in the correct sequence. Also, most syslog implementations do not
perform any access control, so any host can send messages to a syslog server unless other security
measures have been implemented to prevent this, such as using a physically separate logging network
for communications with the syslog server, or implementing access control lists on network devices to
restrict which hosts can send messages to the syslog server. Attackers can take advantage of this by
flooding syslog servers with bogus log data, which can cause important log entries to go unnoticed or
even potentially cause a denial of service. Another shortcoming of most syslog implementations is
that they cannot use encryption to protect the integrity or confidentiality of logs in transit. Attackers
on the network might monitor syslog messages containing sensitive information regarding system
configurations and security weaknesses; attackers might also be able to perform man-in-the-middle
attacks such as modifying or destroying syslog messages in transit.
As the security of logs has become a greater concern, several implementations of syslog have been
created that place a greater emphasis on security. Most have been based on a proposed standard, RFC
3195, which was designed specifically to improve the security of syslog. Implementations based on
RFC 3195 can support log confidentiality, integrity, and availability through several features,
including the following:
Reliable Log Delivery. Several syslog implementations support the use of Transmission
Control Protocol (TCP) in addition to UDP. TCP is a connection-oriented protocol that
attempts to ensure the reliable delivery of information across networks. Using TCP helps to
ensure that log entries reach their destination. Having this reliability requires the use of more
network bandwidth; also, it typically takes more time for log entries to reach their destination.
Some syslog implementations use log caching servers.
Transmission Confidentiality Protection. RFC 3195 recommends the use of the Transport
Layer Security (TLS) protocol to protect the confidentiality of transmitted syslog messages.
TLS can protect the messages during their entire transit between hosts. TLS can only protect
the payloads of packets, not their IP headers, which means that an observer on the network
can identify the source and destination of transmitted syslog messages, possibly revealing the
IP addresses of the syslog servers and log sources. Some syslog implementations use other
means to encrypt network traffic, such as passing syslog messages through secure shell (SSH)
tunnels. Protecting syslog transmissions can require additional network bandwidth and
increase the time needed for log entries to reach their destination.
Transmission Integrity Protection and Authentication. RFC 3195 recommends that, if
integrity protection and authentication are desired, a message digest algorithm be used.
RFC 3195 recommends the use of MD5; proposed revisions to RFC 3195 mention the use of
SHA-1. Because SHA is a FIPS-approved algorithm and MD5 is not, Federal agencies should
use SHA instead of MD5 for message digests whenever feasible.
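To make the syslog discussion concrete: every syslog message begins with a priority value computed as facility × 8 + severity. The sketch below composes a minimal message and a SHA-1 digest a receiver could verify, in the spirit of the integrity recommendation above; it is illustrative only, not a full RFC 3195 implementation (and per the guidance above, a FIPS-approved SHA algorithm is preferred over MD5).

```python
import hashlib

def syslog_pri(facility: int, severity: int) -> int:
    """Syslog priority value: facility * 8 + severity."""
    return facility * 8 + severity

def build_message(facility: int, severity: int, host: str, text: str):
    """Compose a minimal syslog-style message plus a SHA-1 digest that a
    receiver could recompute to detect in-transit modification."""
    msg = f"<{syslog_pri(facility, severity)}>{host} {text}"
    digest = hashlib.sha1(msg.encode()).hexdigest()
    return msg, digest
```

Note that a bare digest only detects accidental corruption; detecting deliberate tampering additionally requires a shared secret (e.g., an HMAC) or transport protection such as TLS, as the preceding bullets describe.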
Q.70 Explain the Need for Log Management
Log management can benefit an organization in many ways. It helps to ensure that computer security
records are stored in sufficient detail for an appropriate period of time. Routine log reviews and
analysis are beneficial for identifying security incidents, policy violations, fraudulent activity, and
operational problems shortly after they have occurred, and for providing information useful for
resolving such problems. Logs can also be useful for performing auditing and forensic analysis,
supporting the organization’s internal investigations, establishing baselines, and identifying
operational trends and long-term problems.
Besides the inherent benefits of log management, a number of laws and regulations further compel
organizations to store and review certain logs. The following is a listing of key regulations, standards,
and guidelines that help define organizations’ needs for log management:
Federal Information Security Management Act of 2002 (FISMA). FISMA emphasizes the
need for each Federal agency to develop, document, and implement an organization-wide
program to provide information security for the information systems that support its
operations and assets. NIST SP 800-53, Recommended Security Controls for Federal
Information Systems, was developed in support of FISMA. NIST SP 800-53 is the primary
source of recommended security controls for Federal agencies. It describes several controls
related to log management, including the generation, review, protection, and retention of audit
records, as well as the actions to be taken because of audit failure.
Gramm-Leach-Bliley Act (GLBA). GLBA requires financial institutions to protect their
customers’ information against security threats. Log management can be helpful in
identifying possible security violations and resolving them effectively.
Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA includes
security standards for certain health information. NIST SP 800-66, An Introductory Resource
Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA)
Security Rule, lists HIPAA-related log management needs. For example, Section 4.1 of NIST
SP 800-66 describes the need to perform regular reviews of audit logs and access reports.
Also,
Q 71. List & Explain the classic categories of malware.
Malware has become the greatest external threat to most hosts, causing damage and requiring
extensive recovery efforts within most organizations. The following are the classic categories of
malware:
Viruses. A virus self-replicates by inserting copies of itself into host programs or data files. Viruses
are often triggered through user interaction, such as opening a file or running a program. Viruses can
be divided into the following two subcategories:
– Compiled Viruses. A compiled virus is executed by an operating system. Types of compiled
viruses include file infector viruses, which attach themselves to executable programs; boot sector
viruses, which infect the master boot records of hard drives or the boot sectors of removable media;
and multipartite viruses, which combine the characteristics of file infector and boot sector viruses.
– Interpreted Viruses. Interpreted viruses are executed by an application. Within this subcategory,
macro viruses take advantage of the capabilities of applications’ macro programming language to
infect application documents and document templates, while scripting viruses infect scripts that are
understood by scripting languages processed by services on the OS.
Worms. A worm is a self-replicating, self-contained program that usually executes itself without
user intervention. Worms are divided into two categories:
– Network Service Worms. A network service worm takes advantage of a vulnerability in a network
service to propagate itself and infect other hosts.
– Mass Mailing Worms. A mass mailing worm is similar to an email-borne virus but is self-
contained, rather than infecting an existing file.
Trojan Horses. A Trojan horse is a self-contained, non-replicating program that, while appearing
to be benign, actually has a hidden malicious purpose. Trojan horses either replace existing files with
malicious versions or add new malicious files to hosts. They often deliver other attacker tools to hosts.
Malicious Mobile Code. Malicious mobile code is software with malicious intent that is
transmitted from a remote host to a local host and then executed on the local host, typically without
the user’s explicit instruction. Popular languages for malicious mobile code include Java, ActiveX,
JavaScript, and VBScript.
Blended Attacks. A blended attack uses multiple infection or transmission methods. For example,
a blended attack could combine the propagation methods of viruses and worms.
– Monitoring the behavior of common applications, such as email clients, web browsers, and instant
messaging software. Antivirus software should monitor activity involving the applications most likely
to be used to infect hosts or spread malware to other hosts.
– Scanning files for known malware. Antivirus software should be configured to scan all
hard drives regularly to identify any file system infections and, optionally, depending on organization
security needs, to scan removable media inserted into the host before allowing its use. Users should
also be able to launch a scan manually as needed, which is known as on-demand scanning.
Antivirus software typically supports disinfecting files, which refers to removing malware from within a
file, and quarantining files, which means that files containing malware are stored in isolation for future
disinfection or examination. Disinfecting a file is generally preferable to quarantining it because the malware is
removed and the original file restored; however, many infected files cannot be disinfected.
Accordingly, antivirus software should be configured to attempt to disinfect infected files and to
either quarantine or delete files that cannot be disinfected.
should take to prevent future incidents and to prepare more effectively to handle incidents that do
occur.
4.1 Preparation
Organizations should perform preparatory measures to ensure that they are capable of responding
effectively to malware incidents. Sections 4.1.1 through 4.1.3 describe several recommended
preparatory measures, including building and maintaining malware-related skills within the incident
response team, facilitating communication and coordination throughout the organization, and
acquiring necessary tools and resources.
4.1.1 Building and Maintaining Malware-Related Skills
All malware incident handlers should have a solid understanding of how each major category of
malware infects hosts and spreads.
4.1.2 Facilitating Communication and Coordination
One of the most common problems during malware incident handling is poor communication and
coordination. To improve communication and coordination, an organization should designate in
advance a few individuals or a small team to be responsible for coordinating the organization’s
responses to malware incidents.
4.1.3 Acquiring Tools and Resources
Organizations should also ensure that they have the necessary tools (hardware and software) and
resources to assist in malware incident handling.
4.3 Containment
Containment of malware has two major components: stopping the spread of the malware and
preventing further damage to hosts. Nearly every malware incident requires containment actions. In
addressing an incident, it is important for an organization to decide which methods of containment to
employ initially, early in the response.
4.3.1 Containment Through User Participation
At one time, user participation was a valuable part of containment efforts, particularly during large-
scale incidents in non-managed environments. Users were provided with instructions on how to
identify infections and what measures to take if a host was infected, such as calling the help desk,
disconnecting the host from the network, or powering off the host.
4.3.2 Containment Through Automated Detection
Many malware incidents can be contained primarily through the use of automated technologies.
These technologies include antivirus software, content filtering, and intrusion prevention software.
Because antivirus software on hosts can detect and remove infections, it is often the preferred
automated detection method for assisting in containment.
4.3.3 Containment Through Disabling Services
Some malware incidents necessitate more drastic and potentially disruptive measures for containment.
These incidents make extensive use of a particular service. Containing such an incident quickly and
effectively might be accomplished through a loss of services, such as shutting down a service used by
malware, blocking a certain service at the network perimeter, or disabling portions of a service (e.g.,
large mailing lists).
4.3.4 Containment Through Disabling Connectivity
Containing incidents by placing temporary restrictions on network connectivity can be very effective.
For example, if infected hosts attempt to establish connections with an external host to download
rootkits, handlers should consider blocking all access to the external host (by IP address or domain
name, as appropriate).
4.3.5 Containment Recommendations
Containment can be performed through many methods in the four categories described above (users,
automated detection, loss of services, and loss of connectivity). Because no single malware
containment category or individual method is appropriate or effective in every situation, incident
handlers should select a combination of containment methods that is likely to be effective in
containing the current incident while limiting damage to hosts and reducing the impact that
containment methods might have on other hosts. For example, shutting down all network access
might be very effective at stopping the spread of malware, but it would also allow infections on hosts
to continue damaging files and would disrupt many important functions of the organization.
4.4 Eradication
Although the primary goal of eradication is to remove malware from infected hosts, eradication is
typically more involved than that. If an infection was successful because of host vulnerability or other
security weakness, such as an unsecured file share, then eradication includes the elimination or
mitigation of that weakness, which should prevent the host from becoming reinfected or becoming
infected by another instance of malware or a variant of the original threat. Eradication actions are
often consolidated with containment efforts.
In general, organizations should rebuild any host that has any of the following incident characteristics,
instead of performing typical eradication actions (disinfection):
– One or more attackers gained administrator-level access to the host.
– Unauthorized administrator-level access to the host was available to anyone through a backdoor, an
unprotected share created by a worm, or other means.
– The host is unstable or does not function properly after the malware has been eradicated by
antivirus software or other programs or techniques. This indicates that either the malware has not been
eradicated completely or that it has caused damage to important system or application files or settings.
– There is doubt about the nature of and extent of the infection or any unauthorized access gained
because of the infection.
4.5 Recovery
The two main aspects of recovery from malware incidents are restoring the functionality and data of
infected hosts and removing temporary containment measures. Additional actions to restore hosts are
not necessary for most malware incidents that cause limited host damage (for example, an infection
that simply altered a few data files and was completely removable with antivirus software). As
discussed in Section 4.4, for malware incidents that are far more damaging, such as Trojan horses,
rootkits, or backdoors, corrupting thousands of system and data files, or wiping out hard drives, it is
often best to first rebuild the host, then secure the host so that it is no longer vulnerable to the malware
threat. Organizations should carefully consider possible worst-case scenarios, such as a new malware
threat that necessitates rebuilding a large percentage of the organization’s workstations, and determine
how the hosts would be recovered in these cases. This should include identifying who would perform
the recovery tasks, estimating how many hours of labor would be needed and how much calendar time
would elapse, and determining how the recovery efforts should be prioritized.
In containing a malware incident, it is also important to understand that stopping the spread of
malware does not necessarily prevent further damage to hosts. Malware on a host might continue to
exfiltrate sensitive data, replace OS files, or cause other damage. In addition, some instances of
malware are designed to cause additional damage when network connectivity is lost or other
containment measures are performed. For example, an infected host might run a malicious process
that contacts another host periodically. If that connectivity is lost because the infected host is
disconnected from the network, the malware might overwrite all the data on the host’s hard drive. For
these reasons, handlers should not assume that just because a host has been disconnected from the
network, further damage to the host has been prevented, and in many cases, should begin eradication
efforts as soon as possible to prevent more damage.
Organizations should have strategies and procedures in place for making containment-related
decisions that reflect the level of risk acceptable to the organization. For example, an organization
might decide that infected hosts performing critical functions should not be disconnected from
networks or shut down if the likely damage to the organization from those functions being unavailable
would be greater than the security risks posed by not isolating or shutting down the host. Containment
strategies should support incident handlers in selecting the appropriate combination of containment
methods based on the characteristics of a particular situation.
Containment methods can be divided into four basic categories: relying on user participation,
performing automated detection, temporarily halting services, and blocking certain types of network
connectivity. Sections 4.3.1 through 4.3.4 describe each category in detail.
4.3.1 Containment Through User Participation
At one time, user participation was a valuable part of containment efforts, particularly during large-
scale incidents in non-managed environments. Users were provided with instructions on how to
identify infections and what measures to take if a host was infected, such as calling the help desk,
disconnecting the host from the network, or powering off the host. The instructions might also cover
malware eradication, such as updating antivirus signatures and performing a host scan, or obtaining
and running a specialized malware eradication utility. As hosts have increasingly become managed,
user participation in containment has sharply decreased. However, having users perform containment
actions is still helpful in non-managed environments and other situations in which use of fully
automated containment methods is not feasible.
4.3.2 Containment Through Automated Detection
Many malware incidents can be contained primarily through the use of the automated technologies
described in Section 3.4 for preventing and detecting infections. These technologies include antivirus
software, content filtering, and intrusion prevention software. Because antivirus software on hosts can
detect and remove infections, it is often the preferred automated detection method for assisting in
containment. However, as previously discussed, many of today’s malware threats are novel, so
antivirus software and other technologies often fail to recognize them as being malicious. Also,
malware that compromises the OS may disable security controls such as antivirus software,
particularly in unmanaged environments where users have greater control over their hosts.
Containment through antivirus software is not as robust and effective as it used to be.
Examples of automated detection methods other than antivirus software are as follows:
– Content Filtering.
– Network-Based IPS Software.
– Executable Blacklisting.
Containing incidents by placing temporary restrictions on network connectivity can be very effective.
For example, if infected hosts attempt to establish connections with an external host to download
rootkits, handlers should consider blocking all access to the external host (by IP address or domain
name, as appropriate). Similarly, if infected hosts within the organization attempt to spread their
malware, the organization might block network traffic from the hosts’ IP addresses to control the
situation while the infected hosts are physically located and disinfected. An alternative to blocking
network access for particular IP addresses is to disconnect the infected hosts from the network, which
could be accomplished by reconfiguring network devices to deny network access or physically
disconnecting network cables from infected hosts.
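The blocking options described above can be sketched as a small helper that turns lists of infected internal hosts and malicious external hosts into deny rules. The rule format and function name are illustrative assumptions, not the syntax of any particular firewall product:

```python
# Sketch: turn containment decisions into deny rules.
# The rule strings are illustrative, not real firewall syntax.

def containment_rules(infected_ips, malicious_external_hosts):
    """Build deny rules: block all access to known-malicious external
    hosts, and block traffic from infected internal hosts."""
    rules = []
    for host in malicious_external_hosts:
        rules.append(f"deny any -> {host}")   # stop rootkit downloads
    for ip in infected_ips:
        rules.append(f"deny {ip} -> any")     # stop internal spreading
    return rules

rules = containment_rules(["10.0.0.5"], ["203.0.113.9"])
for r in rules:
    print(r)
```

In practice these decisions feed a firewall or router configuration; the sketch only captures the decision logic.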
4.3.5 Containment Recommendations
Containment can be performed through many methods in the four categories described above (users,
automated detection, loss of services, and loss of connectivity). Because no single malware
containment category or individual method is appropriate or effective in every situation, incident
handlers should select a combination of containment methods that is likely to be effective in
containing the current incident while limiting damage to hosts and reducing the impact that
containment methods might have on other hosts.
77. Explain the three main categories of patch and vulnerability metrics.
There are three main categories of patch and vulnerability metrics: susceptibility to attack, mitigation
response time, and cost. This section provides example metrics in each category.
The more vulnerabilities, unapplied patches, and exposed network services that exist on a system, the greater the chance that the system will
be penetrated. Large systems consisting of many computers are thus inherently less secure than
smaller similarly configured systems. This does not mean that the large systems are necessarily
secured with less rigor than the smaller systems. To avoid such implications, ratios should be used
when comparing the effectiveness of the security programs of multiple systems. Ratios (e.g., number
of unapplied patches per computer) allow effective comparison between systems. Both raw results
and ratios should be measured and published for each system, as appropriate, since they are both
useful and serve different purposes.
Number of Patches
Number of Vulnerabilities
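The ratio idea above can be sketched with hypothetical system names and counts: the raw count makes the large system look worse, while the per-computer ratio shows that the smaller system's program is actually less effective:

```python
# Sketch: raw counts vs. ratios for comparing two systems.
# System sizes and patch counts are invented for illustration.

def patch_ratio(unapplied_patches, computers):
    """Unapplied patches per computer -- a size-independent metric."""
    return unapplied_patches / computers

large = {"computers": 500, "unapplied": 1000}
small = {"computers": 20, "unapplied": 60}

# Raw counts suggest the large system is worse off (1000 vs 60),
# but the ratio shows the small system's program is less effective.
print(patch_ratio(large["unapplied"], large["computers"]))  # 2.0
print(patch_ratio(small["unapplied"], small["computers"]))  # 3.0
```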
It is also important to measure how quickly an organization can identify, classify, and respond to a
new vulnerability and mitigate the potential impact within the organization. Response time has
become increasingly important, because the average time between a vulnerability announcement and
an exploit being released has decreased dramatically in the last few years. There are three primary
response time measurements that can be taken: vulnerability and patch identification, patch
application, and emergency security configuration changes.
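The response-time measurements above reduce to timestamp deltas between well-defined events; the dates and variable names below are invented for illustration:

```python
# Sketch: response-time measurements as timestamp deltas.
# All dates are hypothetical.
from datetime import datetime

def hours_between(start, end):
    return (end - start).total_seconds() / 3600

announced  = datetime(2015, 3, 1, 9, 0)   # vendor announces vulnerability
identified = datetime(2015, 3, 1, 17, 0)  # PVG identifies and classifies it
patched    = datetime(2015, 3, 4, 17, 0)  # patch applied enterprise-wide

identification_time = hours_between(announced, identified)
application_time = hours_between(identified, patched)
print(identification_time, application_time)  # 8.0 72.0
```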
Measuring the cost of patch and vulnerability management is difficult because the actions are often
split between many different personnel and groups. In the simplest case, there will be a dedicated
centralized PVG that deploys patches and security configurations directly. However, most
organizations will have the patch and vulnerability functions split between multiple groups and
allocated to a variety of full-time and part-time personnel. There are four main cost measurements that
should be taken: the PVG, system administrator support, enterprise patch and vulnerability
management tools, and incidents that occurred due to failures in the patch and vulnerability
management program.
78. What is The Patch and Vulnerability Group & what are their duties?
The PVG should be specially tasked to implement the patch and vulnerability management program
throughout the organization. The PVG is the central point for vulnerability remediation efforts, such
as OS and application patching and configuration changes. Since the PVG needs to work actively with
local administrators, large organizations may need to have several PVGs; they could work together or
be structured hierarchically with an authoritative top-level PVG. The duties of a PVG should include
the following:
2. Monitor security sources for vulnerability announcements, patch and non-patch remediations,
and emerging threats that correspond to the software within the PVG’s system inventory.
5. Conduct testing of patches and non-patch remediations on IT devices that use standardized
configurations.
10. Verify vulnerability remediation through network and host vulnerability scanning.
79. What are the primary methods of remediation that can be applied to an affected system?
Organizations should deploy vulnerability remediations to all systems that have the vulnerability,
even for systems that are not at immediate risk of exploitation. Vulnerability remediations should also
be incorporated into the organization’s standard builds and configurations for hosts. There are three
primary methods of remediation that can be applied to an affected system: the installation of a
software patch, the adjustment of a configuration setting, and the removal of the affected software.
+ Security Patch Installation. Applying a security patch (also called a “fix” or “hotfix”) repairs
the vulnerability, since patches contain code that modifies the software application to address
and eliminate the problem. Patches downloaded from vendor Web sites are typically the most
up-to-date and are likely free of malicious code.
+ Configuration Adjustment. Adjusting how an application or security control is configured can
effectively block attack vectors and reduce the threat of exploitation. Common configuration
adjustments include disabling services and modifying privileges, as well as changing firewall
rules and modifying router access controls. Settings of vulnerable software applications can
be modified by adjusting file attributes or registry settings.
+ Software Removal. Removing or uninstalling the affected software or vulnerable service
eliminates the vulnerability and any associated threat. This is a practical solution when an
application is not needed on a system. Determining how the system is used, removing
unnecessary software and services, and running only what is essential for the system’s
purpose is a recommended security practice.
80. Who are involved in log management planning? Explain their responsibilities.
As part of the log management planning process, an organization should define the roles and
responsibilities of individuals and teams who are expected to be involved in log management. Teams
and individual roles often involved in log management include the following:
System and network administrators, who are usually responsible for configuring logging on
individual systems and network devices, analyzing those logs periodically, reporting on the results of
log management activities, and performing regular maintenance of the logs and logging software
Security administrators, who are usually responsible for managing and monitoring the log
management infrastructures, configuring logging on security devices (e.g., firewalls, network-based
intrusion detection systems, antivirus servers), reporting on the results of log management activities,
and assisting others with configuring logging and performing log analysis
Computer security incident response teams, who use log data when handling some incidents
Application developers, who may need to design or customize applications so that they perform
logging in accordance with the logging requirements and recommendations
Information security officers, who may oversee the log management infrastructures
Chief information officers (CIO), who oversee the IT resources that generate, transmit, and store the
logs
Auditors, who may use log data when performing audits
Individuals involved in the procurement of software that should or can generate computer security
log data.
81. What are the steps included in developing logging policies?
An organization should define its requirements and goals for performing logging and monitoring logs.
The requirements should include all applicable laws, regulations, and existing organizational policies,
such as data retention policies. The goals should be based on balancing the organization’s reduction of
risk with the time and resources needed to perform log management functions. The requirements and
goals should then be used as the basis for establishing an organization-wide log management
capability and prioritizing log management appropriately throughout the enterprise.
Organizations should develop policies that clearly define mandatory requirements and suggested
recommendations for several aspects of log management, including the following:
Log generation
– Which types of hosts must or should perform logging
– Which host components must or should perform logging (e.g., OS, service, application)
– Which types of events each component must or should log (e.g., security events, network
connections, authentication attempts)
– Which data characteristics must or should be logged for each type of event (e.g., username
and source IP address for authentication attempts)
– How frequently each type of event must or should be logged (e.g., every occurrence, once
for all instances in x minutes, once for every x instances, every instance after x instances)
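The frequency policies above (e.g., "once for all instances in x minutes") can be sketched as a small throttling check; the event name and window length are illustrative assumptions:

```python
# Sketch: enforce a "log once for all instances in x minutes" policy.
# Timestamps are in seconds; the event type and window are hypothetical.

def should_log(event_type, timestamp, last_logged, window_minutes=5):
    """Return True if this occurrence should be written to the log
    under a once-per-window policy; updates last_logged in place."""
    prev = last_logged.get(event_type)
    if prev is None or timestamp - prev >= window_minutes * 60:
        last_logged[event_type] = timestamp
        return True
    return False

state = {}
print(should_log("auth_failure", 0, state))    # True  (first instance)
print(should_log("auth_failure", 120, state))  # False (inside 5-min window)
print(should_log("auth_failure", 400, state))  # True  (window elapsed)
```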
Log transmission
– Which types of hosts must or should transfer logs to a log management infrastructure
– Which types of entries and data characteristics must or should be transferred from
individual hosts to a log management infrastructure
– How log data must or should be transferred (e.g., which protocols are permissible),
including out-of-band methods where appropriate (e.g., for standalone systems)
– How frequently log data should be transferred from individual hosts to a log management
infrastructure (e.g., real-time, every 5 minutes, every hour)
– How the confidentiality, integrity, and availability of each type of log data must or should
be protected while in transit, including whether a separate logging network should be
used
Log storage and disposal
– How often logs should be rotated
– How the confidentiality, integrity, and availability of each type of log data must or should
be protected while in storage (at both the system level and the infrastructure level)
– How long each type of log data must or should be preserved (at both the system level and
the infrastructure level)
– How unneeded log data must or should be disposed of (at both the system level and the
infrastructure level)
– How much log storage space must or should be available (at both the system level and the
infrastructure level)
– How log preservation requests, such as a legal requirement to prevent the alteration and
destruction of particular log records, must be handled (e.g., how the impacted logs must
be marked, stored, and protected)
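The rotation, retention, and disposal rules above can be sketched as a classifier over log ages; the cutoff values are illustrative assumptions, not recommended settings:

```python
# Sketch: apply a retention policy -- rotate logs older than a cutoff
# and dispose of logs past the retention period. Ages in days are
# hypothetical.

def classify_logs(log_ages_days, rotate_after=7, retain_for=90):
    active, archived, disposed = [], [], []
    for age in log_ages_days:
        if age > retain_for:
            disposed.append(age)   # past retention: dispose securely
        elif age > rotate_after:
            archived.append(age)   # rotated into long-term storage
        else:
            active.append(age)
    return active, archived, disposed

print(classify_logs([1, 10, 120]))  # ([1], [10], [120])
```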
Log analysis
– How often each type of log data must or should be analyzed (at both the system level and
the infrastructure level)
– Who must or should be able to access the log data (at both the system level and the
infrastructure level), and how such accesses should be logged
– What must or should be done when suspicious activity or an anomaly is identified
– How the confidentiality, integrity, and availability of the results of log analysis (e.g., alerts,
reports) must or should be protected while in storage (at both the system level and the
infrastructure level) and in transit
– How inadvertent disclosures of sensitive information recorded in logs, such as passwords or
the contents of e-mails, should be handled.
Unit 4
If a technical problem occurs (e.g., the corruption of a data file), audit trails can aid in the recovery process (e.g., by
using the record of changes made to reconstruct the file).
1.3 Intrusion Detection:
Intrusion detection refers to the process of identifying attempts to penetrate a system
and gain unauthorized access.
If audit trails have been designed and implemented to record appropriate access information,
they can assist in intrusion detection. Although intrusion detection is normally thought of as
a real-time effort, intrusions can be detected in real time, by examining audit records as they
are created (or through the use of other kinds of warning flags/notices), or after the fact
(e.g., by examining audit records in a batch process). Real-time intrusion detection is primarily aimed at outsiders
attempting to gain unauthorized access to the system. It may also be used to detect changes in
the system's performance indicative of, for example, a virus or worm attack. There may be
difficulties in implementing real-time auditing, including unacceptable system performance.
After-the-fact identification may indicate that unauthorized access was attempted (or was
successful). Attention can then be given to damage assessment or reviewing controls that
were attacked.
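After-the-fact detection by batch-examining audit records can be sketched as a scan for repeated failed logins; the record layout and the threshold are illustrative assumptions:

```python
# Sketch: batch (after-the-fact) scan of audit records for repeated
# failed logins. Field names and threshold are hypothetical.
from collections import Counter

def failed_login_suspects(records, threshold=3):
    """Return user IDs with at least `threshold` failed login events."""
    failures = Counter(r["user"] for r in records
                       if r["event"] == "login" and r["result"] == "fail")
    return sorted(u for u, n in failures.items() if n >= threshold)

records = [
    {"user": "alice", "event": "login", "result": "ok"},
    {"user": "mallory", "event": "login", "result": "fail"},
    {"user": "mallory", "event": "login", "result": "fail"},
    {"user": "mallory", "event": "login", "result": "fail"},
]
print(failed_login_suspects(records))  # ['mallory']
```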
ANS:
General principles of an audit:
The auditor should comply with the Code of Ethics for Members issued by the
International Federation of Accountants.
Ethical principles governing the auditor's professional responsibilities are:
a) Independence;
b) Integrity;
c) Objectivity;
d) Professional competence and due care;
e) Confidentiality;
f) Professional behaviour; and
g) Technical standards
The auditor should conduct an audit in accordance with International Standards on
Auditing (ISAs).
These contain basic principles and essential procedures together with related guidance
in the form of explanatory and other materials.
The auditor should plan and perform an audit with an attitude of professional
scepticism recognizing that circumstances may exist that cause the financial
statements to be materially misstated. An attitude of professional scepticism means
the auditor makes a critical assessment, with a questioning mind, of the validity of
audit evidence obtained and is alert to audit evidence that contradicts or brings into
question the reliability of documents or management representations. For example,
an attitude of professional scepticism is necessary throughout the audit process for the
auditor to reduce the risk of overlooking suspicious circumstances, of over
generalizing when drawing conclusions from audit observations, and of using faulty
assumptions in determining the nature, timing and extent of the audit procedures and
evaluating the results thereof.
In planning and performing an audit, the auditor neither assumes that management is
dishonest nor assumes unquestioned honesty. Accordingly, representations from
management are not a substitute for obtaining sufficient appropriate audit evidence to
be able to draw reasonable conclusions on which to base the audit opinion.
Operational assurance and auditing systems are closely linked; in some cases, they may even be the same thing. In most
cases, the analysis of audit trail data is a critical part of maintaining operational assurance.
Identification and Authentication:
Audit trails are tools often used to help hold users accountable for their actions. To be held
accountable, the users must be known to the system (usually accomplished through the
identification and authentication process). However, as mentioned earlier, audit trails record
events and associate them with the perceived user (i.e., the user ID). If a user is impersonated,
the audit trail will establish events but not the identity of the user.
Logical Access Control:
Logical access controls restrict the use of system resources to authorized users. Audit trails
complement this activity in two ways. First, they may be used to identify breakdowns in
logical access controls or to verify that access control restrictions are behaving as expected,
for example, if a particular user is erroneously included in a group permitted access to a file.
Second, audit trails are used to audit use of resources by those who have legitimate access.
Additionally, to protect audit trail files, access controls are used to ensure that audit trails are
not modified.
Contingency Planning:
Audit trails assist in contingency planning by leaving a record of activities performed on the
system or within a specific application. In the event of a technical malfunction, this log can
be used to help reconstruct the state of the system (or specific files).
Incident Response:
If a security incident occurs, such as hacking, audit records and other intrusion detection
methods can be used to help determine the extent of the incident. For example, was just one
file browsed, or was a Trojan horse planted to collect passwords?
Cryptography:
Digital signatures can be used to protect audit trails from undetected modification. (This does
not prevent deletion or modification of the audit trail, but will provide an alert that the audit
trail has been altered.) Digital signatures can also be used in conjunction with adding secure
time stamps to audit records. Encryption can be used if confidentiality of audit trail
information is important.
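The digital-signature idea above can be sketched with an HMAC, a keyed hash standing in here for a full digital signature; the key and record format are illustrative assumptions:

```python
# Sketch: tamper-evident audit records using an HMAC (a keyed hash,
# used here in place of a full digital signature). Key is hypothetical.
import hmac, hashlib

KEY = b"audit-signing-key"  # in practice, carefully protected key material

def sign(record: str) -> str:
    return hmac.new(KEY, record.encode(), hashlib.sha256).hexdigest()

def verify(record: str, tag: str) -> bool:
    return hmac.compare_digest(sign(record), tag)

entry = "2015-03-01T09:00Z uid=alice cmd=rm result=ok"
tag = sign(entry)
print(verify(entry, tag))                        # True
print(verify(entry.replace("ok", "fail"), tag))  # False -- alteration detected
```

As the text notes, this does not prevent deletion of records, but any modification of a signed record becomes detectable.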
Analysis costs can be reduced by using tools to perform most of the analysis. Many simple analyzers can be constructed
quickly (and cheaply) from system utilities, but they are limited to audit reduction and
identifying particularly sensitive events. More complex tools that identify trends or
sequences of events are slowly becoming available as off-the-shelf software.
(If complex tools are not available for a system, development may be prohibitively
expensive. Some intrusion detection systems, for example, have taken years to develop.)
The final cost of audit trails is the cost of investigating anomalous events. If the system is
identifying too many events as suspicious, administrators may spend undue time
reconstructing events and questioning personnel.
7. Explain Audit Trails. What are the two types of audit records explain in detail?
An audit trail (also called audit log) is a security-relevant chronological record, set of
records, and/or destination and source of records that provide documentary evidence of
the sequence of activities that have affected at any time a specific operation, procedure, or
event. Audit records typically result from activities such as financial transactions,
scientific research and health care data transactions, or communications by individual
people, systems, accounts, or other entities.
A system can maintain several different audit trails concurrently. There are typically two
kinds of audit records, (1) an event-oriented log and (2) a record of every keystroke, often
called keystroke monitoring. Event-based logs usually contain records describing system
events, application events, or user events. An audit trail should include sufficient
information to establish what events occurred and who (or what) caused them. In general,
an event record should specify when the event occurred, the user ID associated with the
event, the program or command used to initiate the event, and the result. Date and time
can help determine if the user was a masquerader or the actual person specified.
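The event-record fields named above can be sketched as a simple record type; the field names and example values are illustrative assumptions:

```python
# Sketch: an event-oriented audit record carrying the fields the text
# names -- when, user ID, initiating program, and result.
from dataclasses import dataclass

@dataclass
class AuditEvent:
    timestamp: str   # when the event occurred
    user_id: str     # user ID associated with the event
    program: str     # program or command used to initiate the event
    result: str      # outcome of the event

ev = AuditEvent("2015-03-01T02:00Z", "alice", "/bin/login", "success")
# A 2 a.m. login may suggest a masquerader if the real user works days.
print(ev.user_id, ev.result)  # alice success
```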
1. Keystroke Monitoring:-
Keystroke monitoring is the process used to view or record both the keystrokes entered by
a computer user and the computer's response during an interactive session. Keystroke
monitoring is usually considered a special case of audit trails. Examples of keystroke
monitoring would include viewing characters as they are typed by users, reading users'
electronic mail, and viewing other recorded information typed by users. Some forms of
routine system maintenance may record user keystrokes. This could constitute keystroke
monitoring if the keystrokes are preserved along with the user identification so that an
administrator could determine the keystrokes entered by specific users. Keystroke
monitoring is conducted in an effort to protect systems and data from intruders who
access the systems without authority or in excess of their assigned authority. Monitoring
keystrokes typed by intruders can help administrators assess and repair damage caused by
intruders.
2. Audit Events :-
The system itself enforces certain aspects of policy (particularly system-specific policy)
such as access to files and access to the system itself. Monitoring the alteration of systems
configuration files that implement the policy is important. If special accesses (e.g.,
security administrator access) have to be used to alter configuration files, the system
should generate audit records whenever these accesses are used. Flexibility is a critical
feature of audit trails. Ideally (from a security point of view), a system administrator
would have the ability to monitor all system and user activity, but could choose to log
only certain functions at the system level, and within certain applications. The decision of
how much to log and how much to review should be a function of application/data
sensitivity and should be decided by each functional manager/application owner with
guidance from the system administrator and the computer security manager/officer,
weighing the costs and benefits of the logging.
• Audit reduction tools are preprocessors designed to reduce the volume of audit
records to facilitate manual review. Before a security review, these tools can remove
many audit records known to have little security significance.
• Trends/variance-detection tools look for anomalies in user or system behavior. It is
possible to construct more sophisticated processors that monitor usage trends and detect
major variations.
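The two tool types above can be sketched together; the significance labels, counts, and variance threshold are illustrative assumptions:

```python
# Sketch: audit reduction (drop low-significance records) and
# variance detection (flag users whose activity departs from the norm).

def reduce_audit(records):
    """Audit reduction: keep only security-significant records."""
    return [r for r in records if r["significant"]]

def variance_suspects(daily_counts, factor=2):
    """Flag users whose event count exceeds `factor` x the average."""
    avg = sum(daily_counts.values()) / len(daily_counts)
    return sorted(u for u, n in daily_counts.items() if n > factor * avg)

records = [{"event": "login", "significant": True},
           {"event": "screensaver", "significant": False}]
print(len(reduce_audit(records)))                              # 1
print(variance_suspects({"alice": 10, "bob": 12, "eve": 90}))  # ['eve']
```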
4. Interdependencies:-
The ability to audit supports many of the controls presented in this handbook. The
following paragraphs describe some of the most important interdependencies.
Policy: The most fundamental interdependency of audit trails is with policy. Policy
dictates who is authorized access to what system resources.
Assurance: System auditing is an important aspect of operational assurance. The data
recorded into an audit trail is used to support a system audit.
Contingency Planning: Audit trails assist in contingency planning by leaving a record of
activities performed on the system or within a specific application.
Cryptography: Digital signatures can be used to protect audit trails from undetected
modification. (This does not prevent deletion or modification of the audit trail, but will
provide an alert that the audit trail has been altered.) Digital signatures can also be used in
conjunction with adding secure time stamps to audit records. Encryption can be used if
confidentiality of audit trail information is important.
5. Cost Considerations:-
Audit trails involve many costs. First, some system overhead is incurred recording the
audit trail. Additional system overhead will be incurred storing and processing the
records. The more detailed the records, the more overhead is required. Another cost
involves human and machine time required to do the analysis. The final cost of audit trails
is the cost of investigating anomalous events. If the system is identifying too many events
as suspicious, administrators may spend undue time reconstructing events and
questioning personnel.
Business continuity planning follows a phased process, which includes steps that, when followed, provide the foundation of any good plan. Let's
take a look at the five phases.
Phase 1: Identify the risks
The first phase is to conduct a risk assessment, identifying any potential hazards that
could disrupt your business. Consider any type of risk your team can imagine, including
natural threats, human threats and technical threats.
Phase 2: Analyze the risks you face
Next, you’ll perform a business impact analysis (BIA) to gauge the impact of each
potential risk. For each risk, determine how severe the impact would be and how long
your business could survive without those processes running. Consider what is absolutely
necessary for recovery, how quickly it needs to happen, what your minimum
operating resources are, and any dependencies, either internal or external.
Phase 3: Design your strategy
Now it’s time to figure out strategies to mitigate interruptions and to quickly recover from
them. Consider everything you'll need to protect your people, your assets, and your
functions. Start by comparing your current recovery capabilities to your business
requirements and plan how you will fill that gap.
UNIT 5
Log Monitoring. Various tools and techniques can assist in log monitoring, such as
analyzing log entries and correlating log entries across multiple systems. This can assist
in incident handling, identifying policy violations, auditing, and other efforts.
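Correlating log entries across multiple systems can be sketched as grouping entries that share a source IP within a short time window; the entry layout and window length are illustrative assumptions:

```python
# Sketch: correlate log entries from different systems that share a
# source IP within a time window. Fields and values are hypothetical.

def correlate(entries, window=60):
    """Group entries from different systems that share a source IP
    and fall within `window` seconds of each other."""
    groups = []
    by_ip = {}
    for e in sorted(entries, key=lambda e: e["time"]):
        by_ip.setdefault(e["ip"], []).append(e)
    for ip, es in by_ip.items():
        if len({e["system"] for e in es}) > 1 and \
           es[-1]["time"] - es[0]["time"] <= window:
            groups.append((ip, [e["system"] for e in es]))
    return groups

entries = [
    {"system": "firewall", "ip": "198.51.100.7", "time": 0},
    {"system": "webserver", "ip": "198.51.100.7", "time": 30},
    {"system": "webserver", "ip": "10.0.0.2", "time": 5},
]
print(correlate(entries))  # [('198.51.100.7', ['firewall', 'webserver'])]
```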
Data Recovery. There are dozens of tools that can recover lost data from systems,
including data that has been accidentally or purposely deleted or otherwise modified. The
amount of data that can be recovered varies on a case-by-case basis.
Data Acquisition. Some organizations use forensics tools to acquire data from hosts that
are being redeployed or retired. For example, when a user leaves an organization, the data
from the user's workstation can be acquired and stored in case it is needed in the future.
The workstation's media can then be sanitized to remove all of the original user's data.
Due Diligence/Regulatory Compliance. Existing and emerging regulations require many
organizations to protect sensitive information and maintain certain records for audit
purposes. Also, when protected information is exposed to other parties, organizations may
be required to notify other agencies or impacted individuals. Forensics can help
organizations exercise due diligence and comply with such requirements.
2. Who are the primary users of forensic tools and techniques? Also state the various
factors to be considered when selecting an external or internal party?
Answer:-
Practically every organization needs to have some capability to perform computer
and network forensics. Without such a capability, an organization will have
difficulty determining what events have occurred within its systems and networks, such
as exposures of protected, sensitive data. Although the extent of this need varies, the
primary users of forensic tools and techniques within an organization usually can be
divided into the following three groups.
Investigators.
Investigators within an organization are most often from the Office of Inspector
General (OIG), and they are responsible for investigating allegations of misconduct. For
some organizations, the OIG immediately takes over the investigation of any event that is
suspected to involve criminal activity. The OIG typically uses many forensic techniques
and tools. Other investigators within an organization might include legal advisors and
members of the human resources department. Law enforcement officials and others
outside the organization that might perform criminal investigations are not considered
part of an organization's internal group of investigators.
IT Professionals.
This group includes technical support staff and system, network, and security
administrators. They use a small number of forensic techniques and tools specific to their
area of expertise during their routine work (e.g., monitoring, troubleshooting, data
recovery).
Incident Handlers.
This group responds to a variety of computer security incidents, such as unauthorized
data access, inappropriate system usage, malicious code infections, and denial of service
attacks. Incident handlers typically use a wide variety of forensic techniques and tools
during their investigations.
3. What are the different groups in which primary users of forensic tools and
techniques within an organization usually can be divided into?
Answer:-
Investigators.
Investigators within an organization are most often from the Office of Inspector
General (OIG), and they are responsible for investigating allegations of misconduct. For
some organizations, the OIG immediately takes over the investigation of any event that is
suspected to involve criminal activity. The OIG typically uses many forensic techniques
and tools. Other investigators within an organization might include legal advisors and
members of the human resources department.
IT Professionals.
This group includes technical support staff and system, network, and security
administrators. They use a small number of forensic techniques and tools specific to their
area of expertise during their routine work (e.g., monitoring, troubleshooting, data
recovery).
Incident Handlers.
This group responds to a variety of computer security incidents, such as unauthorized
data access, inappropriate system usage, malicious code infections, and denial of service
attacks. Incident handlers typically use a wide variety of forensic techniques and tools
during their investigations.
Cost.
There are many potential costs. Software, hardware, and equipment used to collect
and examine data may carry significant costs (e.g., purchase price, software updates and
upgrades, maintenance), and may also require additional physical security measures to
safeguard them from tampering.
Response Time.
Personnel located on-site might be able to initiate computer forensic activity more
quickly than could off-site personnel. For organizations with geographically dispersed
physical locations, off-site outsourcers located near distant facilities might be able to
respond more quickly than personnel located at the organization's headquarters.
Data Sensitivity.
Because of data sensitivity and privacy concerns, some organizations might be
reluctant to allow external parties to image hard drives and perform other actions that
provide access to data.
For example: health care information and financial records.
Answer:-
Organizations should determine which parties should handle each aspect of forensics.
Most organizations rely on a combination of their own staff and external parties to
perform forensic tasks. Organizations should decide which parties should take care of
which tasks based on skills and abilities, cost, response time, and data sensitivity.
Teams that may provide assistance in these efforts include IT professionals, management,
legal advisors, human resources personnel, auditors, and physical security staff. Members
of these teams should understand their roles and responsibilities in forensics, receive
training and education on forensic-related policies, guidelines, and procedures, and be
prepared to cooperate with and assist others on forensic actions.
Organizations should create and maintain guidelines and procedures for performing
forensic tasks.
The guidelines should include general methodologies for investigating an incident
using forensic techniques, and step-by-step procedures should explain how to perform
routine tasks. The guidelines and procedures should support the admissibility of evidence
into legal proceedings. Because electronic logs and other records can be altered or
otherwise manipulated, organizations should be prepared, through their policies,
guidelines, and procedures, to demonstrate the reliability and integrity of such records.
The guidelines and procedures should also be reviewed regularly and maintained so that
they are accurate.
Data Collection
The first step in the forensic process is to identify potential sources of data and
acquire data from them. Section 3.1.1 describes the variety of data sources available and
discusses actions that organizations can take to support the ongoing collection of data
for forensic purposes. Section 3.1.2 describes the recommended steps for collecting data,
including additional actions necessary to support legal or internal disciplinary
proceedings. Section 3.1.3 discusses incident response considerations, emphasizing
the need to weigh the value of collected data against the costs and impact to the
organization of the collection process.
Identifying Possible Sources of Data
The increasingly widespread use of digital technology for both professional and
personal purposes has led to an abundance of data sources. The most obvious and
common sources of data are desktop computers, servers, network storage devices, and
laptops.
A- Develop a plan to acquire the data. Developing a plan is an important first step in most
cases because there are multiple potential data sources. The analyst should create a plan
that prioritizes the sources, establishing the order in which the data should be acquired.
Important factors for prioritization include the following:
Likely Value.
Based on the analyst's understanding of the situation and previous experience in
similar situations, the analyst should be able to estimate the relative likely value of
each potential data source.
Volatility. Volatile data refers to data on a live system that is lost after a computer is
powered down or due to the passage of time.
Amount of Effort Required.
The amount of effort required to acquire different data sources may vary widely.
For example, acquiring data from a network router would probably require much less
effort than acquiring data from an ISP.
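The prioritization factors above can be sketched as a simple scoring scheme. The weights, the 1-5 scales, and the example sources below are all illustrative assumptions, not part of any standard methodology:

```python
# Sketch: rank candidate data sources for acquisition (hypothetical scores).
# Higher likely value and volatility raise priority; higher effort lowers it.

def acquisition_priority(likely_value, volatility, effort):
    """Return a priority score; the weights are illustrative, not prescriptive."""
    return 2 * likely_value + 2 * volatility - effort

sources = [
    # (name, likely_value, volatility, effort) on an assumed 1-5 scale
    ("RAM of live host", 4, 5, 3),
    ("Server hard drive", 5, 1, 2),
    ("ISP flow records", 3, 2, 5),
    ("Network router logs", 2, 3, 1),
]

plan = sorted(sources, key=lambda s: acquisition_priority(*s[1:]), reverse=True)
for name, *_ in plan:
    print(name)  # volatile RAM ranks first; the costly ISP request ranks last
```

In practice the ordering would also reflect legal constraints and the containment strategy, which a one-line formula cannot capture.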
Examination
After data has been collected, the next phase is to examine the data, which involves
assessing and extracting the relevant pieces of information from the collected data. This
phase may also involve bypassing or mitigating OS or application features that obscure
data and code, such as data compression, encryption, and access control mechanisms. An
acquired hard drive may contain hundreds of thousands of data files; identifying the data
files that contain information of interest, including information concealed through file
compression and access control, can be a daunting task. In addition, data files of interest
may contain extraneous information that should be filtered
Analysis
Once the relevant information has been extracted, the analyst should study and
analyze the data to draw conclusions from it. The foundation of forensics is using a
methodical approach to reach appropriate conclusions based on the available data or
determine that no conclusion can yet be drawn. The analysis should include identifying
people, places, items, and events, and determining how these elements are related so that
a conclusion can be reached. Often, this effort will include correlating data among
multiple sources. Tools such as centralized logging and security event management
software can facilitate this process by automatically gathering and correlating the data.
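As a rough illustration of correlating data among multiple sources, the sketch below groups events from two hypothetical logs by source IP address; real centralized logging and security event management products do this automatically and at much larger scale:

```python
# Sketch: correlate events from two hypothetical log sources by source IP,
# so related activity seen by different sensors lines up on one timeline.
from collections import defaultdict

firewall_log = [
    {"time": "10:01", "ip": "203.0.113.9", "event": "blocked port 445"},
    {"time": "10:03", "ip": "198.51.100.4", "event": "allowed port 80"},
]
ids_log = [
    {"time": "10:02", "ip": "203.0.113.9", "event": "SMB exploit signature"},
]

# Group all events by IP across both sources.
timeline = defaultdict(list)
for record in firewall_log + ids_log:
    timeline[record["ip"]].append((record["time"], record["event"]))

for ip, events in timeline.items():
    events.sort()  # chronological order within each IP
```

Here the firewall block and the IDS alert for 203.0.113.9 end up adjacent, which is exactly the kind of relationship the analysis phase tries to surface.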
Reporting
The final phase is reporting, which is the process of preparing and presenting the
information resulting from the analysis phase. Many factors affect reporting, including
the following:
Alternative Explanations.
When the information regarding an event is incomplete, it may not be possible to
arrive at a definitive explanation of what happened. When an event has two or more
plausible explanations, each should be given due consideration in the reporting process.
Analysts should use a methodical approach to attempt to prove or disprove each possible
explanation that is proposed.
Audience Consideration.
Knowing the audience to which the data or information will be shown is important.
An incident requiring law enforcement involvement requires highly detailed reports of all
information gathered, and may also require copies of all evidentiary data obtained. A
system administrator might want to see network traffic and related statistics in great
detail. Senior management might simply want a high-level overview of what happened,
such as a simplified visual representation of how the attack occurred, and what should be
done to prevent similar incidents.
Actionable Information.
Reporting also includes identifying actionable information gained from data that may
allow an analyst to collect new sources of information. For example, a list of contacts
may be developed from the data that might lead to additional information about an
incident or crime. Also, information might be obtained that could prevent future events,
such as a backdoor on a system that could be used for future attacks, a crime that is being
planned, a worm scheduled to start spreading at a certain time, or a vulnerability that
could be exploited.
particular file format, the original source application should be used; if this is not
available, then it may be necessary to research the file's format and manually extract the
data from the file.
Uncompressing Files.
Compressed files may contain files with useful information, as well as other compressed
files. Therefore, it is important that the analyst locate and extract compressed files.
Uncompressing files should be performed early in the forensic process to ensure that the
contents of compressed files are included in searches and other actions. Compression
bombs can cause examination tools to fail or consume considerable resources; they might
also contain malware and other malicious payloads.
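A minimal guard against compression bombs is to check an archive's declared uncompressed size before extracting it. The sketch below uses Python's standard zipfile module; the 100 MB threshold is an arbitrary assumption:

```python
# Sketch: inspect a ZIP archive's declared sizes before extracting, as a
# simple guard against compression bombs (the threshold is an assumption).
import io
import zipfile

MAX_TOTAL_UNCOMPRESSED = 100 * 1024 * 1024  # 100 MB cap, illustrative only

def safe_to_extract(zip_bytes):
    """Return True if the archive's declared uncompressed total is under the cap."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        total = sum(info.file_size for info in zf.infolist())
        return total <= MAX_TOTAL_UNCOMPRESSED

# Build a small archive in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("notes.txt", "evidence " * 100)

print(safe_to_extract(buf.getvalue()))  # a small archive passes the check
```

Note that file_size is only the size the archive *claims*; a robust tool would also enforce limits while actually decompressing.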
File metadata provides details about any given file. For example, collecting the
metadata on a graphic file might provide the graphic's creation date, copyright
information, description, and the creator's identity. Metadata for graphics
generated by a digital camera might include the make and model of the digital camera
used to take the image, as well as F-stop, flash, and aperture settings. For word processing
files, metadata could specify the author, the organization that licensed the software, when
and by whom edits were last performed, and user-defined comments.
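At the filesystem level, some basic metadata can be read directly. A minimal sketch using Python's os.stat follows; embedded metadata such as EXIF fields or document properties requires format-specific parsers, which are not shown here:

```python
# Sketch: read basic filesystem-level metadata for a file with os.stat.
# (Embedded metadata such as EXIF or word-processor properties needs
#  format-specific parsers; this shows only what the filesystem records.)
import datetime
import os
import tempfile

# Create a throwaway file to inspect.
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as tmp:
    tmp.write(b"sample evidence file")
    path = tmp.name

st = os.stat(path)
print("size (bytes):", st.st_size)
print("last modified:", datetime.datetime.fromtimestamp(st.st_mtime))
os.unlink(path)  # clean up the temporary file
```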
A data file (also called a file) is a collection of information logically grouped into a single
entity and referenced by a unique name, such as a filename.
After a logical backup or bit stream imaging has been performed, the backup or image may
have to be restored to another media before the data can be examined. This is dependent on
the forensic tools that will be used to perform the analysis. Some tools can analyze data
directly from an image file, whereas others require that the backup or image be restored to a
medium first. Regardless of whether an image file or a restored image is used in the
examination, the data should be accessed only as read-only to ensure that the data being
examined is not modified and that it will provide consistent results on successive runs.
This section describes the processes involved in examining files and data, as well as
techniques that can expedite examination.
The first step in the examination is to locate the files. A disk image can capture many
gigabytes of slack space and free space, which could contain thousands of files and file
fragments. Manually extracting data from unused space can be a time-consuming and
difficult process, because it requires knowledge of the underlying filesystem format.
Fortunately, several tools are available that can automate the process of extracting data
from unused space and saving it to data files, as well as recovering deleted files and files
within a recycling bin. Analysts can also display the contents of slack space with hex
editors or special slack recovery tools
The rest of the examination process involves extracting data from some or all of the
files. To make sense of the contents of a file, an analyst needs to know what type of
data the file contains. The intended purpose of file extensions is to denote the nature
of the file's contents; for example, a jpg extension indicates a graphic file, and an mp3
extension indicates a music file. However, users can assign any file extension to any
type of file, such as naming a text file mysong.mp3 or omitting a file extension. In
addition, some file extensions might be hidden or unsupported on other OSs.
Therefore, analysts should not assume that file extensions are accurate.
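One way to avoid trusting extensions is to check a file's leading "magic" bytes against known signatures. The sketch below covers only a handful of well-known signatures and is far from exhaustive; real tools carry much larger signature databases:

```python
# Sketch: identify a file's type from its leading "magic" bytes instead of
# trusting its extension. These signatures are well known, but the list is
# illustrative only.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"PK\x03\x04": "zip",
    b"%PDF": "pdf",
}

def identify(data):
    """Return a type name based on the file's first bytes, or 'unknown'."""
    for magic, kind in SIGNATURES.items():
        if data.startswith(magic):
            return kind
    return "unknown"

# A file named "mysong.mp3" that actually starts with a PNG header:
mislabeled = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
print(identify(mislabeled))  # reports png despite the misleading .mp3 name
```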
Analysts should have access to various tools that enable them to perform
examinations and analysis of data, as well as some collection activities. Many forensic
products allow the analyst to perform a wide range of processes to analyze files and
applications, as well as collecting files, reading disk images, and extracting data from
files. Most analysis products also offer the ability to generate reports and to log all
errors that occurred during the analysis.
8. Explain the two different techniques used for copying files from media.
Logical Backup. A logical backup copies the directories and files of a logical
volume. It does not capture other data that may be present on the media, such as
deleted files or residual data stored in slack space.
Bit Stream Imaging. Also known as disk imaging, bit stream imaging generates a
bit-for-bit copy of the original media, including free space and slack space. Bit
stream images require more storage space and take longer to perform than logical
backups.
If evidence may be needed for prosecution or disciplinary actions, the analyst should get a bit
stream image of the original media, label the original media, and store it securely as evidence.
All subsequent analysis should be performed using the copied media to ensure that the
original media is not modified and that a copy of the original media can always be recreated
if necessary. All steps that were taken to create the image copy should be documented. Doing
so should allow any analyst to produce an exact duplicate of the original media using the
same procedures.
When a bit stream image is executed, either a disk-to-disk or a disk-to-file copy can be
performed. A disk-to-disk copy, as its name suggests, copies the contents of the media
directly to another media. A disk-to-file copy copies the contents of the media to a single
logical data file. A disk-to-disk copy is useful since the copied media can be connected
directly to a computer and its contents readily viewed. However, a disk-to-disk copy requires
a second media similar to the original media. A disk-to-file copy allows the data file image to
be moved and backed up easily.
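The verification step behind bit stream imaging can be illustrated with cryptographic hashes: if the SHA-256 digest of the image matches that of the original media, the copy is bit-for-bit identical. The sketch below simulates the media with an in-memory buffer rather than a real device:

```python
# Sketch: verify a disk-to-file copy of (simulated) media with SHA-256,
# so the copy can be shown to match the original exactly.
import hashlib
import io

def sha256_of(stream, chunk_size=4096):
    """Hash a stream in chunks, as a real tool would for large media."""
    h = hashlib.sha256()
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

# Simulated raw media contents; a real acquisition would read a device.
original = io.BytesIO(b"\x00\x01raw sector data" * 1000)
image = io.BytesIO(original.getvalue())  # the "disk-to-file" copy

original.seek(0)
image.seek(0)
match = sha256_of(original) == sha256_of(image)
print("image verified:", match)
```

Recording the digests alongside the documented imaging steps lets any later analyst confirm that the working copy still matches the original evidence.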
Numerous hardware and software tools can perform bit stream imaging and logical backups.
Hardware tools are generally portable, provide bit-by-bit images, connect directly to the drive
or computer to be imaged, and have built-in hash functions. Hardware tools can acquire data
from drives that use common types of controllers, such as Integrated Drive Electronics (IDE)
and Small Computer System Interface (SCSI). Software solutions generally consist of a
startup diskette, CD, or installed programs that run on a workstation to which the media to be
imaged is attached. Some software solutions create logical copies of files or partitions and
may ignore free or unallocated drive space, whereas others create a bit-by-bit image copy of
the media.
In a 2005 survey, Tenable Network Security estimated that Nessus was used by over
75,000 organizations worldwide.
Nessus allows scans for the following types of vulnerabilities:
Vulnerabilities that allow a remote hacker to control or access sensitive data on a system.
Misconfiguration (e.g. open mail relay, missing patches, etc.).
Default passwords, a few common passwords, and blank/absent passwords on some
system accounts. Nessus can also call Hydra (an external tool) to launch a dictionary
attack.
Denials of service against the TCP/IP stack by using malformed packets
Preparation for PCI DSS audits
Initially, Nessus consisted of two main components; nessusd, the Nessus daemon,
which does the scanning, and nessus, the client, which controls scans and presents the
vulnerability results to the user. Later versions of Nessus (4 and greater) utilize a web
server which provides the same functionality as the client.
In typical operation, Nessus begins by doing a port scan with one of its four internal
portscanners (or it can optionally use Amap or Nmap) to determine which ports are
open on the target and then tries various exploits on the open ports. The vulnerability
tests, available as subscriptions, are written in NASL (Nessus Attack Scripting
Language), a scripting language optimized for custom network interaction.
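The initial port-scanning step can be illustrated with a minimal TCP connect scan. This is not Nessus code, just a sketch of the underlying idea, demonstrated against a listener the script itself opens:

```python
# Sketch: a minimal TCP connect scan, illustrating the port-scanning step a
# scanner like Nessus performs before running vulnerability tests against
# the open ports it finds. (Nessus uses its own portscanners or Nmap.)
import socket

def port_is_open(host, port, timeout=0.5):
    """Attempt a TCP connection; connect_ex returns 0 on success."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demonstrate against a listener we control on an ephemeral port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", open_port))  # True: something is listening
listener.close()
```

Only scan hosts you are authorized to test; even a connect scan can trip intrusion detection systems.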
Tenable produces several dozen new vulnerability checks (called plugins) each week,
usually on a daily basis. These checks are available for free to the general public;
commercial customers are no longer allowed to use this Home Feed. The Professional
Feed (which is not free) also gives access to support and additional capabilities
(e.g., audit files, compliance tests, additional vulnerability detection plugins).
Optionally, the results of the scan can be reported in various formats, such as plain
text, XML, HTML and LaTeX. The results can also be saved in a knowledge base for
debugging. On UNIX, scanning can be automated through the use of a command-line
client. There exist many different commercial, free and open source tools for both
UNIX and Windows to manage individual or distributed Nessus scanners.
If the user chooses to do so (by disabling the option 'safe checks'), some of Nessus'
vulnerability tests may try to cause vulnerable services or operating systems to crash.
This lets a user test the resistance of a device before putting it in production.
Nessus provides additional functionality beyond testing for known network
vulnerabilities. For instance, it can use Windows credentials to examine patch levels
on computers running the Windows operating system, and can perform password
auditing using dictionary and brute force methods. Nessus 3 and later can also audit
systems to make sure they have been configured per a specific policy, such as
the NSA's guide for hardening Windows servers. This functionality utilizes Tenable's
proprietary audit files or Security Content Automation Protocol (SCAP) content.
The following is a list of the 11 clauses, in no particular order of importance. Each
main security category has a 'control objective' that states what the control is to
achieve, and one or more controls that can be applied to achieve that objective.
Security Policy.
Organizing Information Security.
Asset Management.
Human Resources Security.
Physical and Environmental Security.
Communications and Operations Management.
Access Control.
Information Systems Acquisition, Development and Maintenance.
Information Security Incident Management.
Business Continuity Management.
Compliance.
Host discovery – Identifying hosts on a network. For example, listing the hosts that
respond to TCP and/or ICMP requests or have a particular port open.
Port scanning – Enumerating the open ports on target hosts.
Version detection – Interrogating network services on remote devices to determine
application name and version number.
OS detection – Determining the operating system and hardware characteristics of
network devices.
Scriptable interaction with the target – using the Nmap Scripting Engine (NSE) and
the Lua programming language.
Nmap can provide further information on targets, including reverse DNS names, device
types, and MAC addresses.
14. What are the basic phases of the forensic process? Give a brief overview of
them.
Ans:-The basic phases of the forensic process: collection, examination, analysis, and reporting.
During collection, data related to a specific event is identified, labeled, recorded, and
collected, and its integrity is preserved. In the second phase, examination, forensic tools and
techniques appropriate to the types of data that were collected are executed to identify and
extract the relevant information from the collected data while protecting its integrity.
Examination may use a combination of automated tools and manual processes. The next
phase, analysis, involves analyzing the results of the examination to derive useful information
that addresses the questions that were the impetus for performing the collection and
examination. The final phase involves reporting the results of the analysis, which may
include describing the actions performed, determining what other actions need to be
performed, and recommending improvements to policies, guidelines, procedures, tools, and
other aspects of the forensic process.
1) Data Collection
The first step in the forensic process is to identify potential sources of data and acquire data from
them.
The increasingly widespread use of digital technology for both professional and personal purposes has
led to an abundance of data sources. The most obvious and common sources of data are desktop
computers, servers, network storage devices, and laptops. These systems typically have internal drives
that accept media, such as CDs and DVDs, and also have several types of ports (e.g., Universal Serial
Bus [USB], Firewire, Personal Computer Memory Card International Association [PCMCIA]) to
which external data storage media and devices can be attached.
After identifying potential data sources, the analyst needs to acquire the data from the sources. Data
acquisition should be performed using a three-step process: developing a plan to acquire the data,
acquiring the data, and verifying the integrity of the acquired data.
When performing forensics during incident response, an important consideration is how and
when the incident should be contained. Isolating the pertinent systems from external
influences may be necessary to prevent further damage to the system and its data or to
preserve evidence. In many cases, the analyst should work with the incident response team to
make a containment decision (e.g., disconnecting network cables, unplugging power,
increasing physical security measures, gracefully shutting down a host). This decision should
be based on existing policies and procedures regarding incident containment, as well as the
team's assessment of the risk posed by the incident, so that the chosen containment strategy
or combination of strategies sufficiently mitigates risk while maintaining the integrity of
potential evidence whenever possible.
Ans:-
Filesystems
Before media can be used to store files, the media must usually be partitioned and formatted
into logical volumes. Partitioning is the act of logically dividing a media into portions that
function as physically separate units. A logical volume is a partition or a collection of
partitions acting as a single entity that has been formatted with a filesystem. Some media
types, such as floppy disks, can contain at most one partition (and consequently, one logical
volume). The format of the logical volumes is determined by the selected filesystem.
A filesystem defines the way that files are named, stored, organized, and accessed on logical
volumes. Many different filesystems exist, each providing unique features and data
structures. However, all filesystems share some common traits. First, they use the concepts of
directories and files to organize and store data. Directories are organizational structures that
are used to group files together. In addition to files, directories may contain other directories
called subdirectories. Second, filesystems use some data structure to point to the location of
files on media. In addition, they store each data file written to media in one or more file
allocation units. These are referred to as clusters by some filesystems (e.g., File Allocation
Table [FAT], NT File System [NTFS]) and as blocks by other filesystems (e.g., UNIX and
Linux). A file allocation unit is simply a group of sectors, which are the smallest units that
can be accessed on media.
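The relationship between file size, allocation units, and slack space can be worked through with a short calculation; the 512-byte sectors and 8-sector clusters below are common example values, not universal:

```python
# Sketch: compute how many file allocation units (clusters) a file occupies
# and how much slack space remains in its last cluster. Sizes are examples.
import math

SECTOR_SIZE = 512          # bytes; the smallest addressable unit on the media
SECTORS_PER_CLUSTER = 8    # so each cluster here is 4096 bytes
CLUSTER_SIZE = SECTOR_SIZE * SECTORS_PER_CLUSTER

def allocation(file_size):
    """Return (clusters allocated, slack bytes in the final cluster)."""
    clusters = max(1, math.ceil(file_size / CLUSTER_SIZE))
    slack = clusters * CLUSTER_SIZE - file_size
    return clusters, slack

clusters, slack = allocation(10_000)  # a 10,000-byte file
print(clusters, slack)  # 3 clusters allocated, 2288 bytes of slack
```

That slack region is why forensic tools examine unused space: it can retain fragments of whatever data previously occupied the cluster.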
Some commonly used filesystems are as follows:
FAT12. FAT12 is used only on floppy disks and FAT volumes smaller than 16 MB.
FAT12 uses a 12-bit file allocation table entry to address an entry in the filesystem.
NTFS. Windows NT/2000/XP and Windows Server 2003 support NTFS natively. NTFS
is a recoverable filesystem, which means that it can automatically restore the
consistency of the filesystem when errors occur.
During backups and imaging, the integrity of the original media should be maintained. To
ensure that the backup or imaging process does not alter data on the original media, analysts
can use a write-blocker while backing up or imaging the media. A write-blocker is a
hardware or software-based tool that prevents a computer from writing to computer storage
media connected to it. Hardware write-blockers are physically connected to the computer and
the storage media being processed to prevent any writes to that media. Software write-
blockers are installed on the analyst's forensic system and currently are available only for MS-
DOS and Windows systems. (Some OSs [e.g., Mac OS X, Linux] may not require software
write-blockers because they can be set to boot with secondary devices not mounted.
However, attaching a hardware write-blocking device will ensure that integrity is maintained)
MS-DOS-based software write-blockers work by trapping Interrupt 13 and extended
Interrupt 13 disk writes. Windows-based software write-blockers use filters to sort
interrupts sent to devices to prevent any writes to storage media.
In general, when using a hardware write-blocker, the media or device used to read the media
should be connected directly to the write-blocker, and the write-blocker should be connected
to the computer or device used to perform the backup or imaging. When using a software
write-blocker, the software should be loaded onto a computer before the media or device used
to read the media is connected to the computer. Write-blockers may also allow write-blocking
to be toggled on or off for a particular device.