
Prof. Sohrab Vakharia
Information Security Management
Compiled by Shetty College and UPG College M.Sc.I.T. students, 2015 batch
(only for private circulation; notes compiled from various resources)

Unit-1
1. Explain the process of risk management?

ANS:
Risk assessment is a key component of a holistic, organization-wide risk management
process as defined in NIST Special Publication 800-39, Managing Information Security Risk:
Organization, Mission, and Information System View.
Risk management processes include: (i) framing risk; (ii) assessing risk; (iii) responding to
risk; and (iv) monitoring risk. Figure 1 illustrates the four steps in the risk management
process—including the risk assessment step and the information and communications flows
necessary to make the process work effectively.

1. The first component of risk management addresses how organizations frame risk or
establish a risk context—that is, describing the environment in which risk-based decisions are
made. The purpose of the risk framing component is to produce a risk management strategy
that addresses how organizations intend to assess risk, respond to risk, and monitor risk—
making explicit and transparent the risk perceptions that organizations routinely use in
making both investment and operational decisions. The risk management strategy establishes
a foundation for managing risk and delineates the boundaries for risk-based decisions within
organizations.

2. The second component of risk management addresses how organizations assess risk within
the context of the organizational risk frame. The purpose of the risk assessment component is
to identify: (i) threats to organizations (i.e., operations, assets, or individuals) or threats
directed through organizations against other organizations or the Nation; (ii) vulnerabilities
internal and external to organizations; (iii) the harm (i.e., adverse impact) that may occur
given the potential for threats exploiting vulnerabilities; and (iv) the likelihood that harm will
occur. The end result is a determination of risk (i.e., typically a function of the degree of
harm and likelihood of harm occurring).

3. The third component of risk management addresses how organizations respond to risk once
that risk is determined based on the results of a risk assessment. The purpose of the risk
response component is to provide a consistent, organization-wide response to risk in
accordance with the organizational risk frame by: (i) developing alternative courses of action
for responding to risk; (ii) evaluating the alternative courses of action; (iii) determining
appropriate courses of action consistent with organizational risk tolerance; and (iv)
implementing risk responses based on selected courses of action.

4. The fourth component of risk management addresses how organizations monitor risk over
time. The purpose of the risk monitoring component is to: (i) determine the ongoing
effectiveness of risk responses (consistent with the organizational risk frame); (ii) identify
risk-impacting changes to organizational information systems and the environments in which
the systems operate; and (iii) verify that planned risk responses are implemented and
information security requirements derived from and traceable to organizational
missions/business functions, federal legislation, directives, regulations, policies, standards,
and guidelines are satisfied.
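
As an illustration only (not part of the NIST guidance), the four components can be pictured as a simple feedback loop. In the Python sketch below the function names, return values, and the loop itself are assumptions made for these notes; in practice these are organizational processes, not code.

```python
# Illustrative sketch of the four risk management components as a cycle.
# The functions are placeholders for organizational processes.

def frame_risk():
    """(i) Framing: establish the risk context and the risk management strategy."""
    return {"risk_tolerance": "moderate", "assessment_approach": "qualitative"}

def assess_risk(strategy):
    """(ii) Assessing: identify threats, vulnerabilities, likelihood, and impact."""
    return [{"threat": "phishing campaign", "likelihood": "high", "impact": "moderate"}]

def respond_to_risk(risks, strategy):
    """(iii) Responding: develop, evaluate, and implement courses of action."""
    return [{"risk": r, "course_of_action": "mitigate"} for r in risks]

def monitor_risk(responses):
    """(iv) Monitoring: verify responses stay effective and detect relevant changes."""
    return {"changes_detected": False, "responses_effective": True}

strategy = frame_risk()
risks = assess_risk(strategy)
responses = respond_to_risk(risks, strategy)
status = monitor_risk(responses)
if status["changes_detected"] or not status["responses_effective"]:
    risks = assess_risk(strategy)  # monitoring feeds back into a new assessment
print(status)
```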

2. What are the steps for risk assessment?


ANS:
The risk assessment component of risk management provides a step-by-step process for
organizations on: (i) how to prepare for risk assessments;
(ii) how to conduct risk assessments;
(iii) how to communicate risk assessment results to key organizational personnel; and
(iv) how to maintain the risk assessments over time.
Risk assessments are not simply one-time activities that provide permanent and definitive
information for decision makers to guide and inform responses to information security risks.
Rather, organizations employ risk assessments on an ongoing basis throughout the system
development life cycle and across all of the tiers in the risk management hierarchy—with the
frequency of the risk assessments and the resources applied during the assessments,
commensurate with the expressly defined purpose and scope of the assessments.
Risk assessments address the potential adverse impacts to organizational operations and
assets, individuals, other organizations, and the economic and national security interests of
the United States, arising from the operation and use of information systems and the
information processed, stored, and transmitted by those systems. Organizations conduct risk
assessments to determine risks that are common to the organization’s core missions/business
functions, mission/business processes, mission/business segments, common
infrastructure/support services, or information systems. Risk assessments can support a wide
variety of risk-based decisions and activities by organizational officials across all three tiers
in the risk management hierarchy including, but not limited to, the following:
• Development of information security architecture;
• Definition of interconnection requirements for information systems (including systems
supporting mission/business processes and common infrastructure/support services);
• Design of security solutions for information systems and environments of operation
including selection of security controls, information technology products, suppliers/supply
chain, and contractors
• Authorization (or denial of authorization) to operate information systems or to use security
controls inherited by those systems (i.e., common controls);
• Modification of missions/business functions and/or mission/business processes
permanently, or for a specific time frame (e.g., until a newly discovered threat or
vulnerability is addressed, until a compensating control is replaced);
• Implementation of
security solutions (e.g., whether specific information technology products or configurations
for those products meet established requirements); and
• Operation and maintenance of security solutions (e.g., continuous monitoring strategies and
programs, ongoing authorizations).
Because organizational missions and business functions, supporting mission/business
processes, information systems, threats, and environments of operation tend to change over
time, the validity and usefulness of any risk assessment are bounded in time.

3. What are the steps to prepare for a risk assessment?


ANS:
The first step in the risk assessment process is to prepare for the assessment. The objective of
this step is to establish a context for the risk assessment. This context is established and
informed by the results from the risk framing step of the risk management process. Risk
framing identifies, for example, organizational information regarding policies and
requirements for conducting risk assessments, specific assessment methodologies to be
employed, procedures for selecting risk factors to be considered, scope of the assessments,
rigor of analyses, degree of formality, and requirements that facilitate consistent and
repeatable risk determinations across the organization. Organizations use the risk
management strategy to the extent practicable to obtain information to prepare for the risk
assessment. Preparing for a risk assessment includes the following tasks:
• Identify the purpose of the assessment;
• Identify the scope of the assessment;
• Identify the assumptions and constraints associated with the assessment;
• Identify the sources of information to be used as inputs to the assessment; and
• Identify the risk model and analytic approaches (i.e., assessment and analysis
approaches) to be employed during the assessment.
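
Purely as an illustration, the outputs of these preparation tasks can be collected into a single record. The field names and example values in the sketch below are assumptions chosen for these notes, not terminology mandated by the guidance.

```python
# Hypothetical record capturing the outputs of the preparation step (Tasks 1-1 to 1-5).
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssessmentContext:
    purpose: str                                            # Task 1-1: decisions supported
    scope: str                                              # Task 1-2: applicability, time frame, architecture
    assumptions: List[str] = field(default_factory=list)    # Task 1-3
    constraints: List[str] = field(default_factory=list)    # Task 1-3
    information_sources: List[str] = field(default_factory=list)  # Task 1-4
    risk_model: str = "generic"                              # Task 1-5: risk model
    assessment_approach: str = "qualitative"                 # quantitative / qualitative / semi-quantitative
    analysis_approach: str = "threat-oriented"               # threat-, asset/impact-, or vulnerability-oriented

context = AssessmentContext(
    purpose="Support the authorization decision for System X",
    scope="Tier 3, single information system, next 12 months",
    assumptions=["Adversarial threat sources limited to external actors"],
    constraints=["No penetration testing permitted"],
    information_sources=["audit logs", "vulnerability scan reports"],
)
print(context.analysis_approach)
```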

PREPARE FOR THE ASSESSMENT:


• IDENTIFY PURPOSE:

TASK 1-1: Identify the purpose of the risk assessment in terms of the information that
the assessment is intended to produce and the decisions the assessment is intended to
support.


• IDENTIFY SCOPE

TASK 1-2: Identify the scope of the risk assessment in terms of organizational
applicability, time frame supported, and architectural/technology considerations.
The scope of the risk assessment determines what will be considered in the assessment.
Risk assessment scope affects the range of information available to make risk-based
decisions and is determined by the organizational official requesting the assessment and
the risk management strategy. Establishing the scope of the risk assessment helps
organizations to determine:
(i) what tiers are addressed in the assessment;
(ii) what parts of organizations are affected by the assessment and how they are
affected;
(iii) what decisions the assessment results support;
(iv) how long assessment results are relevant; and
(v) what influences the need to update the assessment.

Organizational Applicability:
Organizational applicability describes which parts of the organization or suborganizations
are affected by the risk assessment and the risk-based decisions resulting
from the assessment (including the parts of the organization or suborganizations
responsible for implementing the activities and tasks related to the decisions).
Effectiveness Time Frame:
Organizations determine how long the results of particular risk assessments can be used
to legitimately inform risk-based decisions. The time frame is usually related to the
purpose of the assessment. For example, a risk assessment to inform Tier 1 policy-related
decisions needs to be relevant for an extended period of time since the governance
process for policy changes can be time-consuming in many organizations.
Architectural/Technology Considerations:
Organizations use architectural and technology considerations to clarify the scope of the
risk assessment. For example, at Tier 3, the scope of the risk assessment can be an
organizational information system in its environment of operations. This entails placing
the information system in its architectural context, so that vulnerabilities in inherited
controls can be taken into consideration.
• IDENTIFY ASSUMPTIONS AND CONSTRAINTS

TASK 1-3: Identify the specific assumptions and constraints under which the risk
assessment is conducted.
Threat Sources:
Organizations determine which types of threat sources are to be considered during risk
assessments. Organizations make explicit the process used to identify threats and any
assumptions related to the threat identification process. If such information is identified
during the risk framing step and included as part of the organizational risk management
strategy, the information need not be repeated in each individual risk assessment.
Threat Events:
Organizations determine which types of threat events are to be considered during risk
assessments and the level of detail needed to describe such events. Descriptions of threat
events can be expressed in highly general terms (e.g., phishing, distributed denial-of-
service), in more descriptive terms using tactics, techniques, and procedures, or in highly
specific terms (e.g., the names of specific information systems, technologies,
organizations, roles, or locations). In addition, organizations consider
(i) what representative set of threat events can serve as a starting point for the
identification of the specific threat events in the risk assessment; and

(ii) what degree of confirmation is needed for threat events to be considered relevant for
purposes of the risk assessment.

Vulnerabilities and Predisposing Conditions:


Organizations determine the types of vulnerabilities that are to be considered during risk
assessments and the level of detail provided in the vulnerability descriptions. Organizations
make explicit the process used to identify vulnerabilities and any assumptions related to the
vulnerability identification process. If such information is identified during the risk framing
step and included as part of the organizational risk management strategy, the information
need not be repeated in each individual risk assessment.
Likelihood:
Organizations make explicit the process used to conduct likelihood determinations and any
assumptions related to the likelihood determination process. If such information is identified
during the risk framing step and included as part of the organizational risk management
strategy, the information need not be repeated in each individual risk assessment.
• IDENTIFY INFORMATION SOURCES

TASK 1-4: Identify the sources of descriptive, threat, vulnerability, and impact information
to be used in the risk assessment.
Descriptive information enables organizations to be able to determine the relevance of threat
and vulnerability information. At Tier 1, descriptive information can include, for example,
the type of risk management and information security governance structures in place within
organizations and how the organization identifies and prioritizes critical missions/business
functions. At Tier 2, descriptive information can include, for example, information about: (i)
organizational mission/business processes, functional management processes, and
information flows; (ii) enterprise architecture, information security architecture, and the
technical/process flow architectures of the systems, common infrastructures, and shared
services that fall within the scope of the risk assessment; and (iii) the external environments
in which organizations operate including, for example, the relationships and dependencies
with external providers.
• IDENTIFY RISK MODEL AND ANALYTIC APPROACH

TASK 1-5: Identify the risk model and analytic approach to be used in the risk assessment.
Organizations define one or more risk models for use in conducting risk assessments (see
Section 2.3.1) and identify which model is to be used for the risk assessment. To facilitate
reciprocity of assessment results, organization-specific risk models include, or can be
translated into, the risk factors (i.e., threat, vulnerability, impact, likelihood, and predisposing
condition) defined in the appendices. Organizations also identify the specific analytic
approach to be used for the risk assessment including the assessment approach (i.e.,
quantitative, qualitative, semi-quantitative) and the analysis approach (i.e., threat-oriented,
asset/impact-oriented, vulnerability-oriented). For each assessable risk factor, the appendices
include three assessment scales (one qualitative and two semi-quantitative scales) with
correspondingly different representations.

4. What are the different risk assessment approaches?
ANS:
Risk, and its contributing factors, can be assessed in a variety of ways, including
quantitatively, qualitatively, or semi-quantitatively. Each risk assessment approach
considered by organizations has advantages and disadvantages.
A preferred approach (or situation-specific set of approaches) can be selected based on
organizational culture and, in particular, attitudes toward the concepts of uncertainty and risk
communication.
(1) Quantitative assessments typically employ a set of methods, principles, or rules for
assessing risk based on the use of numbers—where the meanings and proportionality of
values are maintained inside and outside the context of the assessment.
This type of assessment most effectively supports cost-benefit analyses of alternative risk
responses or courses of action. However, the meaning of the quantitative results may not
always be clear and may require interpretation and explanation—particularly to explain the
assumptions and constraints on using the results.
For example, organizations may typically ask if the numbers or results obtained in the risk
assessments are reliable or if the differences in the obtained values are meaningful or
insignificant. Additionally, the rigor of quantification is significantly lessened when
subjective determinations are buried within the quantitative assessments, or when significant
uncertainty surrounds the determination of values. The benefits of quantitative assessments
(in terms of the rigor, repeatability, and reproducibility of assessment results) can, in some
cases, be outweighed by the costs (in terms of the expert time and effort and the possible
deployment and use of tools required to make such assessments).
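
To make the cost-benefit point concrete, the sketch below compares two hypothetical courses of action in monetary terms; the loss figure, incident probabilities, and control cost are invented values, not data from the publication.

```python
# Illustrative quantitative cost-benefit comparison of two risk responses.
# All figures below are assumptions made for illustration.
EXPECTED_LOSS_PER_INCIDENT = 200_000  # assumed impact in currency units

responses = {
    "accept the risk":  {"annual_control_cost": 0,      "annual_incident_probability": 0.30},
    "deploy control A": {"annual_control_cost": 15_000, "annual_incident_probability": 0.05},
}

for name, r in responses.items():
    expected_annual_loss = r["annual_incident_probability"] * EXPECTED_LOSS_PER_INCIDENT
    total_annual_cost = r["annual_control_cost"] + expected_annual_loss
    print(f"{name}: expected annual cost = {total_annual_cost:,.0f}")

# Because quantitative values keep their meaning outside the assessment,
# the two courses of action can be compared directly in monetary terms.
```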
(2) In contrast to quantitative assessments, qualitative assessments typically employ a set of
methods, principles, or rules for assessing risk based on nonnumerical categories or levels
(e.g., very low, low, moderate, high, very high). This type of assessment supports
communicating risk results to decision makers.
However, the range of values in qualitative assessments is comparatively small in most cases,
making the relative prioritization or comparison within the set of reported risks difficult.
Additionally, unless each value is very clearly defined or is characterized by meaningful
examples, different experts relying on their individual experiences could produce
significantly different assessment results.
The repeatability and reproducibility of qualitative assessments are increased by the
annotation of assessed values (e.g., this value is high because of the following reasons) and
by the use of tables or other well-defined functions to combine qualitative values.
(3) Finally, semi-quantitative assessments typically employ a set of methods, principles, or
rules for assessing risk that uses bins, scales, or representative numbers whose values and
meanings are not maintained in other contexts.
This type of assessment can provide the benefits of quantitative and qualitative assessments.
The bins (e.g., 0-15, 16-35, 36-70, 71-85, 86-100) or scales (e.g., 1-10) translate easily into
qualitative terms that support risk communications for decision makers (e.g., a score of 95
can be interpreted as very high), while also allowing relative comparisons between values in
different bins or even within the same bin (e.g., the difference between risks scored 70 and 71
respectively is relatively insignificant, while the difference between risks scored 36 and 70 is
relatively significant).
The role of expert judgment in assigning values is more evident than in a purely quantitative
approach. Moreover, if the scales or sets of bins provide sufficient granularity, relative
prioritization among results is better supported than in a purely qualitative approach.
As in a quantitative approach, rigor is significantly lessened when subjective determinations
are buried within assessments, or when significant uncertainty surrounds a determination of
value. As with the nonnumeric categories or levels used in a well-founded qualitative
approach, each bin or range of values needs to be clearly defined and/or characterized by
meaningful examples.
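
The bin-to-qualitative translation can be made concrete with a small sketch that reuses the example bins quoted above (0-15 through 86-100); the five labels mirror the very low to very high scale used elsewhere in these notes, and the exact cut-offs are illustrative only.

```python
# Map a semi-quantitative score (0-100) onto the example bins and their
# qualitative equivalents. Bin boundaries follow the example in the text.
BINS = [
    (0, 15, "very low"),
    (16, 35, "low"),
    (36, 70, "moderate"),
    (71, 85, "high"),
    (86, 100, "very high"),
]

def qualitative_level(score: int) -> str:
    """Translate a semi-quantitative score into a qualitative risk level."""
    for low, high, label in BINS:
        if low <= score <= high:
            return label
    raise ValueError("score must be between 0 and 100")

# A score of 95 reads as 'very high'. Scores of 70 and 71 land in adjacent bins
# even though they differ little numerically, while 36 and 70 share a bin yet
# differ substantially.
print(qualitative_level(95), qualitative_level(70), qualitative_level(71))
```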
Independent of the type of value scale selected, assessments make explicit the temporal
element of risk factors. For example, organizations can associate a specific time period with
assessments of likelihood of occurrence and assessments of impact severity.

5. What are the different risk analysis approaches?
ANS:
Analysis approaches differ with respect to the orientation or starting point of the risk
assessment, level of detail in the assessment, and how risks due to similar threat scenarios are
treated. An analysis approach can be:
(i) Threat-oriented;
(ii) Asset/impact-oriented; or
(iii) Vulnerability-oriented.


(1) A threat-oriented approach starts with the identification of threat sources and threat
events, and focuses on the development of threat scenarios; vulnerabilities are identified
in the context of threats, and for adversarial threats, impacts are identified based on
adversary intent.
(2) An asset/impact-oriented approach starts with the identification of impacts or
consequences of concern and critical assets, possibly using the results of mission or
business impact analyses, and identifying threat events that could lead to and/or threat
sources that could seek those impacts or consequences.
(3) A vulnerability-oriented approach starts with a set of predisposing conditions or
exploitable weaknesses/deficiencies in organizational information systems or the
environments in which the systems operate, and identifies threat events that could
exercise those vulnerabilities together with possible consequences of vulnerabilities being
exercised. Each analysis approach takes into consideration the same risk factors, and thus
entails the same set of risk assessment activities, albeit in a different order.
Differences in the starting point of the risk assessment can potentially bias the results,
causing some risks not to be identified. Therefore, identification of risks from a second
orientation (e.g., complementing a threat-oriented analysis approach with an asset/impact-
oriented analysis approach) can improve the rigor and effectiveness of the analysis.

In addition to the orientation of the analysis approach, organizations can apply more rigorous
analysis techniques (e.g., graph-based analyses) to provide an effective way to account for the
many-to-many relationships between:
(i) threat sources and threat events (i.e., a single threat event can be caused by
multiple threat sources and a single threat source can cause multiple threat
events);
(ii) threat events and vulnerabilities (i.e., a single threat event can exploit multiple
vulnerabilities and a single vulnerability can be exploited by multiple threat
events); and
(iii) threat events and impacts/assets (i.e., a single threat event can affect multiple
assets or have multiple impacts, and a single asset can be affected by multiple
threat events).
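
The many-to-many relationships listed above lend themselves to a graph-style representation. The sketch below is illustrative only; the specific threat sources, threat events, vulnerabilities, and impacts are invented examples (loosely based on the provisioning-server example discussed later in these notes).

```python
# Hypothetical many-to-many mappings between risk factors, represented as
# adjacency dictionaries (a very small graph-based view).
threat_sources_to_events = {
    "denial-of-service attack": ["provisioning server offline"],
    "malicious administrator": ["provisioning server offline"],
    "power failure": ["provisioning server offline"],
}
threat_events_to_vulnerabilities = {
    "provisioning server offline": ["single point of failure", "missing failover"],
}
threat_events_to_impacts = {
    "provisioning server offline": ["mission/business process disrupted",
                                    "recovery cost incurred"],
}

# One threat event can be caused by several sources and exploit several
# vulnerabilities; traversing the mappings yields candidate threat scenarios.
for event, vulns in threat_events_to_vulnerabilities.items():
    sources = [s for s, evts in threat_sources_to_events.items() if event in evts]
    print(event, "<-", sources, "| exploits:", vulns,
          "| impacts:", threat_events_to_impacts.get(event, []))
```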

Rigorous analysis approaches also provide a way to account for whether, in the time
frame for which risks are assessed, a specific adverse impact could occur (or a specific
asset could be harmed) at most once, or perhaps repeatedly, depending on the nature of
the impacts and on how organizations (including mission/business processes or
information systems) recover from such adverse impacts.


6. Explain the generic risk model in detail.

ANS:
Risk Models: Risk models define the risk factors to be assessed and the relationships among
those factors. Risk factors are characteristics used in risk models as inputs to determining
levels of risk in risk assessments. Risk factors are also used extensively in risk
communications to highlight what strongly affects the levels of risk in particular situations,
circumstances, or contexts.
Typical risk factors include threat, vulnerability, impact, likelihood, and predisposing
condition. Risk factors can be decomposed into more detailed characteristics (e.g., threats
decomposed into threat sources and threat events). These definitions are important for
organizations to document prior to conducting risk assessments because the assessments rely
upon well-defined attributes of threats, vulnerabilities, impact, and other risk factors to
effectively determine risk.

(1) Threats
A threat is any circumstance or event with the potential to adversely impact organizational
operations and assets, individuals, other organizations, or the Nation through an information
system via unauthorized access, destruction, disclosure, or modification of information,
and/or denial of service. Threat events are caused by threat sources. A threat source is
characterized as:

(i) the intent and method targeted at the exploitation of a vulnerability; or


(ii) a situation and method that may accidentally exploit a vulnerability.
In general, types of threat sources include:
(i) hostile cyber or physical attacks;
(ii) human errors of omission or commission;
(iii) structural failures of organization-controlled resources (e.g., hardware, software,
environmental controls); and
(iv) natural and man-made disasters, accidents, and failures beyond the control of the
organization. Various taxonomies of threat sources have been developed. Some
taxonomies of threat sources use the type of adverse impacts as an organizing
principle. Multiple threat sources can initiate or cause the same threat event—for
example, a provisioning server can be taken off-line by a denial-of-service attack,
a deliberate act by a malicious system administrator, an administrative error, a
hardware fault, or a power failure.

Risk models differ in the degree of detail and complexity with which threat events are
identified. When threat events are identified with great specificity, threat scenarios can be
modeled, developed, and analyzed. Threat events for cyber or physical attacks are
characterized by the tactics, techniques, and procedures (TTPs) employed by adversaries.
Understanding adversary-based threat events gives organizations insights into the capabilities
associated with certain threat sources. In addition, having greater knowledge about who is
carrying out the attacks gives organizations a better understanding of what adversaries desire
to gain by the attacks. Knowing the intent and targeting aspects of a potential attack helps
organizations narrow the set of threat events that are most relevant to consider.
Threat shifting is the response of adversaries to perceived safeguards and/or countermeasures
(i.e., security controls), in which adversaries change some characteristic of their
intent/targeting in order to avoid and/or overcome those safeguards/countermeasures. Threat
shifting can occur in one or more domains including:


(i) the time domain (e.g., a delay in an attack or illegal entry to conduct additional
surveillance);
(ii) the target domain (e.g., selecting a different target that is not as well protected);
(iii) the resource domain (e.g., adding resources to the attack in order to reduce
uncertainty or overcome safeguards and/or countermeasures); or
(iv) the attack planning/attack method domain (e.g., changing the attack weapon or
attack path). Threat shifting is a natural consequence of a dynamic set of interactions
between threat sources and types of organizational assets targeted. With more
sophisticated threat sources, it also tends to default to the path of least resistance to
exploit particular vulnerabilities, and the responses are not always predictable. In
addition to the safeguards and/or countermeasures implemented and the impact of a
successful exploit of an organizational vulnerability, another influence on threat
shifting is the benefit to the attacker. That perceived benefit on the attacker side can
also influence how much and when threat shifting occurs.

(2) Vulnerabilities and Predisposing Conditions


A vulnerability is a weakness in an information system, system security procedures, internal
controls, or implementation that could be exploited by a threat source. Most information
system vulnerabilities can be associated with security controls that either have not been
applied (either intentionally or unintentionally), or have been applied, but retain some
weakness. However, it is also important to allow for the possibility of emergent
vulnerabilities that can arise naturally over time as organizational missions/business functions
evolve, environments of operation change, new technologies proliferate, and new threats
emerge. In the context of such change, existing security controls may become inadequate and
may need to be reassessed for effectiveness. The tendency for security controls to potentially
degrade in effectiveness over time reinforces the need to maintain risk assessments during the
entire system development life cycle and also the importance of continuous monitoring
programs to obtain ongoing situational awareness of the organizational security posture.

Vulnerabilities are not identified only within information systems. Viewing information
systems in a broader context, vulnerabilities can be found in organizational governance
structures (e.g., the lack of effective risk management strategies and adequate risk framing,
poor intra-agency communications, inconsistent decisions about relative priorities of
missions/business functions, or misalignment of enterprise architecture to support
mission/business activities). Vulnerabilities can also be found in external relationships (e.g.,
dependencies on particular energy sources, supply chains, information technologies, and
telecommunications providers), mission/business processes (e.g., poorly defined processes or
processes that are not risk-aware), and enterprise/information security architectures (e.g.,
poor architectural decisions resulting in lack of diversity or resiliency in organizational
information systems).
In general, risks materialize as a result of a series of threat events, each of which takes
advantage of one or more vulnerabilities. Organizations define threat scenarios to describe
how the events caused by a threat source can contribute to or cause harm. Development of
threat scenarios is analytically useful, since some vulnerabilities may not be exposed to
exploitation unless and until other vulnerabilities have been exploited. Analysis that
illuminates how a set of vulnerabilities, taken together, could be exploited by one or more
threat events is therefore more useful than the analysis of individual vulnerabilities. In
addition, a threat scenario tells a story, and hence is useful for risk communication as well as
for analysis. In addition to vulnerabilities as described above, organizations also consider
predisposing conditions.

A predisposing condition is a condition that exists within an organization, a mission or
business process, enterprise architecture, information system, or environment of operation,
which affects (i.e., increases or decreases) the likelihood that threat events, once initiated,
result in adverse impacts to organizational operations and assets, individuals, other
organizations, or the Nation. Predisposing conditions include, for example, the location of a
facility in a hurricane- or flood-prone region (increasing the likelihood of exposure to
hurricanes or floods) or a stand-alone information system with no external network
connectivity (decreasing the likelihood of exposure to a network-based cyber attack).
Vulnerabilities resulting from predisposing conditions that cannot be easily corrected could
include, for example, gaps in contingency plans, use of outdated technologies, or
weaknesses/deficiencies in information system backup and failover mechanisms. In all cases,
these types of vulnerabilities create a predisposition toward threat events having adverse
impacts on organizations. Vulnerabilities (including those attributed to predisposing
conditions) are part of the overall security posture of organizational information systems and
environments of operation that can affect the likelihood of occurrence of a threat event.

(3) Likelihood
The likelihood of occurrence is a weighted risk factor based on an analysis of the probability
that a given threat is capable of exploiting a given vulnerability (or set of vulnerabilities). The
likelihood risk factor combines an estimate of the likelihood that the threat event will be
initiated with an estimate of the likelihood of impact (i.e., the likelihood that the threat event
results in adverse impacts). For adversarial threats, an assessment of likelihood of occurrence
is typically based on: (i) adversary intent; (ii) adversary capability; and (iii) adversary
targeting. For other than adversarial threat events, the likelihood of occurrence is estimated
using historical evidence, empirical data, or other factors. Note that the likelihood that a
threat event will be initiated or will occur is assessed with respect to a specific time frame
(e.g., the next six months, the next year, or the period until a specified milestone is reached).
If a threat event is almost certain to be initiated or occur in the (specified or implicit) time
frame, the risk assessment may take into consideration the estimated frequency of the event.
The likelihood of threat occurrence can also be based on the state of the organization
(including for example, its core mission/business processes, enterprise architecture,
information security architecture, information systems, and environments in which those
systems operate)—taking into consideration predisposing conditions and the presence and
effectiveness of deployed security controls to protect against unauthorized/undesirable
behavior, detect and limit damage, and/or maintain or restore mission/business capabilities.
The likelihood of impact addresses the probability (or possibility) that the threat event will
result in an adverse impact, regardless of the magnitude of harm that can be expected.
Organizations typically employ a three-step process to determine the overall likelihood of
threat events. First, organizations assess the likelihood that threat events will be initiated (for
adversarial threat events) or will occur (for non-adversarial threat events). Second,
organizations assess the likelihood that the threat events once initiated or occurring, will
result in adverse impacts or harm to organizational operations and assets, individuals, other
organizations, or the Nation. Finally, organizations assess the overall likelihood as a
combination of likelihood of initiation/occurrence and likelihood of resulting in adverse
impact.
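
A minimal sketch of this three-step combination is shown below, assuming a five-level qualitative scale. The rule used to combine the two likelihoods (taking the lower of the two) is an assumption made for illustration; the actual assessment scales and combination guidance are given in the appendices of NIST SP 800-30.

```python
# Illustrative combination of likelihood of initiation/occurrence with
# likelihood of impact to obtain an overall likelihood.
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def overall_likelihood(initiation: str, impact: str) -> str:
    """Step 3: combine the two likelihoods; here simply the lower of the two."""
    i, j = LEVELS.index(initiation), LEVELS.index(impact)
    return LEVELS[min(i, j)]

# Step 1: likelihood that the threat event is initiated (adversarial) or occurs.
initiation = "high"
# Step 2: likelihood that, once initiated, the event results in adverse impact.
impact_likelihood = "moderate"
# Step 3: overall likelihood.
print(overall_likelihood(initiation, impact_likelihood))  # -> moderate
```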
Threat-vulnerability pairing (i.e., establishing a one-to-one relationship between threats and
vulnerabilities) may be undesirable when assessing likelihood at the mission/business
function level, and in many cases, can be problematic even at the information system level
due to the potentially large number of threats and vulnerabilities. This approach typically
drives the level of detail in identifying threat events and vulnerabilities, rather than allowing
organizations to make effective use of threat information and/or to identify threats at a level
of detail that is meaningful. Depending on the level of detail in threat specification, a given
threat event could exploit multiple vulnerabilities. In assessing likelihoods, organizations
examine vulnerabilities that threat events could exploit and also the mission/business function
susceptibility to events for which no security controls or viable implementations of security
controls exist (e.g., due to functional dependencies, particularly external dependencies). In
certain situations, the most effective way to reduce mission/business risk attributable to
information security risk is to redesign the mission/business processes so there are viable
work-arounds when information systems are compromised. Using the concept of threat
scenarios described above may help organizations overcome some of the limitations of
threat-vulnerability pairing.

(4) Impact
The level of impact from a threat event is the magnitude of harm that can be expected to
result from the consequences of unauthorized disclosure of information, unauthorized
modification of information, unauthorized destruction of information, or loss of information
or information system availability. Such harm can be experienced by a variety of
organizational and non-organizational stakeholders including, for example, heads of agencies,
mission and business owners, information owners/stewards, mission/business process
owners, information system owners, or individuals/groups in the public or private sectors
relying on the organization—in essence, anyone with a vested interest in the organization’s
operations, assets, or individuals, including other organizations in partnership with the
organization, or the Nation. Organizations make explicit: (i) the process used to conduct
impact determinations; (ii) assumptions related to impact determinations; (iii) sources and
methods for obtaining impact information; and (iv) the rationale for conclusions reached with
regard to impact determinations.

Organizations may explicitly define how established priorities and values guide the
identification of high-value assets and the potential adverse impacts to organizational
stakeholders. If such information is not defined, priorities and values related to identifying
targets of threat sources and associated organizational impacts can typically be derived from
strategic planning and policies. For example, security categorization levels indicate the
organizational impacts of compromising different types of information. Privacy Impact
Assessments and criticality levels (when defined as part of contingency planning or
Mission/Business Impact Analysis) indicate the adverse impacts of destruction, corruption, or
loss of accountability for information resources to organizations.
Strategic plans and policies also assert or imply the relative priorities of immediate or near-
term mission/business function accomplishment and long-term organizational viability
(which can be undermined by the loss of reputation or by sanctions resulting from the
compromise of sensitive information). Organizations can also consider the range of effects of
threat events including the relative size of the set of resources affected, when making final
impact determinations. Risk tolerance assumptions may state that threat events with an
impact below a specific value do not warrant further analysis.

(5) Risk
Figure 3 illustrates an example of a risk model including the key risk factors discussed above
and the relationship among the factors. Each of the risk factors is used in the risk assessment
process described in Chapter Three of NIST Special Publication 800-30.


FIGURE 3: GENERIC RISK MODEL WITH KEY RISK FACTORS

As noted above, risk is a function of the likelihood of a threat event’s occurrence and
potential adverse impact should the event occur. This definition accommodates many types of
adverse impacts at all tiers in the risk management hierarchy described in Special Publication
800-39 (e.g., damage to image or reputation of the organization or financial loss at Tier 1;
inability to successfully execute a specific mission/business process at Tier 2; or the
resources expended in responding to an information system incident at Tier 3). It also
accommodates relationships among impacts (e.g., loss of current or future mission/business
effectiveness due to the loss of data confidentiality; loss of confidence in critical information
due to loss of data or system integrity; or unavailability or degradation of information or
information systems). This broad definition also allows risk to be represented as a single
value or as a vector (i.e., multiple values), in which different types of impacts are assessed
separately. For purposes of risk communication, risk is generally grouped according to the
types of adverse impacts (and possibly the time frames in which those impacts are likely to be
experienced).
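
In qualitative and semi-quantitative assessments, the statement that risk is a function of likelihood and impact is often operationalized as a lookup matrix. The matrix values below are purely illustrative assumptions and do not reproduce the assessment scales in the publication's appendices.

```python
# Illustrative qualitative risk matrix: risk = f(likelihood of occurrence,
# level of impact). Row/column order is very low .. very high.
LEVELS = ["very low", "low", "moderate", "high", "very high"]

RISK_MATRIX = [
    # impact:  very low    low         moderate    high        very high
    ["very low", "very low", "low",      "low",      "low"      ],  # likelihood very low
    ["very low", "low",      "low",      "moderate", "moderate" ],  # low
    ["low",      "low",      "moderate", "moderate", "high"     ],  # moderate
    ["low",      "moderate", "moderate", "high",     "very high"],  # high
    ["low",      "moderate", "high",     "very high","very high"],  # very high
]

def risk_level(likelihood: str, impact: str) -> str:
    """Look up the qualitative risk level for a likelihood/impact pair."""
    return RISK_MATRIX[LEVELS.index(likelihood)][LEVELS.index(impact)]

print(risk_level("high", "moderate"))   # -> moderate
print(risk_level("very high", "high"))  # -> very high
```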

(6) Aggregation
Organizations may use risk aggregation to roll up several discrete or lower-level risks into a
more general or higher-level risk. Organizations may also use risk aggregation to efficiently
manage the scope and scale of risk assessments involving multiple information systems and
multiple mission/business processes with specified relationships and dependencies among
those systems and processes. Risk aggregation, conducted primarily at Tiers 1 and 2 and
occasionally at Tier 3, assesses the overall risk to organizational operations, assets, and
individuals given the set of discrete risks. In general, for discrete risks (e.g., the risk
associated with a single information system supporting a well-defined mission/business
process), the worst-case impact establishes an upper bound for the overall risk to
organizational operations, assets, and individuals. One issue for risk aggregation is that this
upper bound for risk may fail to apply. For example, it may be advantageous for
organizations to assess risk at the organization level when multiple risks materialize
concurrently or when the same risk materializes repeatedly over a period of time. In such
situations, there is the possibility that the amount of overall risk incurred is beyond the risk
capacity of the organization, and therefore the overall impact to organizational operations and
assets (i.e., mission/business impact) goes beyond that which was originally assessed for each
specific risk.

When aggregating risk, organizations consider the relationship among various discrete risks.
For example, there may be a cause and effect relationship in that if one risk materializes,
another risk is more or less likely to materialize. If there is a direct or inverse relationship
among discrete risks, then the risks can be coupled (in a qualitative sense) or correlated (in a
quantitative sense) either in a positive or negative manner. Risk coupling or correlation (i.e.,
finding relationships among risks that increase or decrease the likelihood of any specific risk
materializing) can be done at Tiers 1, 2, or 3.
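
A minimal sketch of aggregation over semi-quantitative scores is shown below: the worst-case score serves as the upper bound, and a crude upward adjustment is applied when risks are flagged as positively coupled. The scores and the adjustment rule are invented for illustration.

```python
# Hypothetical roll-up of discrete, semi-quantitative risk scores (0-100).
discrete_risks = {
    "system A compromise": 62,
    "system B outage": 55,
    "supplier data breach": 70,
}

def aggregate(risks: dict, positively_coupled: bool = False) -> int:
    """Worst case bounds the aggregate; coupled risks may exceed that bound."""
    upper_bound = max(risks.values())
    if positively_coupled:
        # If risks tend to materialize together, the combined impact can
        # exceed what was assessed for any single risk (capped at 100 here).
        return min(100, upper_bound + 10)
    return upper_bound

print(aggregate(discrete_risks))                           # -> 70
print(aggregate(discrete_risks, positively_coupled=True))  # -> 80
```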

(7) Uncertainty
Uncertainty is inherent in the evaluation of risk, due to such considerations as:
(i) limitations on the extent to which the future will resemble the past;
(ii) imperfect or incomplete knowledge of the threat (e.g., characteristics of
adversaries including tactics, techniques, and procedures);
(iii) undiscovered vulnerabilities in technologies or products; and
(iv) unrecognized dependencies, which can lead to unforeseen impacts. Uncertainty
about the value of specific risk factors can also be due to the step in the RMF or
phase in the system development life cycle at which a risk assessment is
performed. For example, at early phases in the system development life cycle, the
presence and effectiveness of security controls may be unknown, while at later
phases in the life cycle, the cost of evaluating control effectiveness may outweigh
the benefits in terms of more fully informed decision making. Finally, uncertainty
can be due to incomplete knowledge of the risks associated with other information
systems, mission/ business processes, services, common infrastructures, and/or
organizations. The degree of uncertainty in risk assessment results, due to these
different reasons, can be communicated in the form of the results (e.g., by
expressing results qualitatively, by providing ranges of values rather than single
values for identified risks, or by using visual representations of fuzzy regions
rather than points).

7. What are the key characteristics of the OCTAVE approach?


ANS: Key Characteristics of the OCTAVE Approach
OCTAVE is self-directed, requiring an organization to manage the evaluation process and
make information-protection decisions. An interdisciplinary team, called the analysis team,
leads the evaluation. The team includes people from both the business units and the IT
department, because both perspectives are important when characterizing the global,
organizational view of information security risk.
OCTAVE is an asset-driven evaluation approach. Analysis teams
• identify information-related assets (e.g., information and systems) that are important to the organization
• focus risk analysis activities on those assets judged to be most critical to the organization

• consider the relationships among critical assets, the threats to those assets, and
vulnerabilities (both organizational and technological) that can expose assets to threats.
• evaluate risks in an operational context - how they are used to conduct an organization’s
business and how those assets are at risk due to security threats
• create a practice-based protection strategy for organizational improvement as well as risk
mitigation plans to reduce the risk to the organization’s critical assets

The organizational, technological, and analysis aspects of an information security risk
evaluation are complemented by a three-phased approach. OCTAVE is organized around
these three basic aspects (illustrated in Figure 2), enabling organizational personnel to
assemble a comprehensive picture of the organization’s information security needs.

The phases are:


(1) Phase 1: Build Asset-Based Threat Profiles – This is an organizational evaluation. The
analysis team determines what is important to the organization (information-related assets)
and what is currently being done to protect those assets. The team then selects those assets
that are most important to the organization (critical assets) and describes security
requirements for each critical asset. Finally, it identifies threats to each critical asset, creating
a threat profile for that asset.
(2) Phase 2: Identify Infrastructure Vulnerabilities – This is an evaluation of the information
infrastructure. The analysis team examines network access paths, identifying classes of
information technology components related to each critical asset. The team then determines
the extent to which each class of component is resistant to network attacks.
(3) Phase 3: Develop Security Strategy and Plans – During this part of the evaluation, the
analysis team identifies risks to the organization’s critical assets and decides what to do about
them. The team creates a protection strategy for the organization and mitigation plans to
address the risks to the critical assets, based upon an analysis of the information gathered.
Figure 2: OCTAVE Phases
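
As an illustration only, the main outputs of Phase 1 for a single critical asset (its security requirements and threat profile) could be recorded in a structure like the one below; the field names and example values are assumptions made for these notes, not part of the OCTAVE method definition.

```python
# Hypothetical record of an OCTAVE Phase 1 output for one critical asset.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatProfile:
    asset: str
    security_requirements: List[str]          # confidentiality/integrity/availability needs
    threats: List[str] = field(default_factory=list)
    current_protection: List[str] = field(default_factory=list)

profile = ThreatProfile(
    asset="patient records database",
    security_requirements=["confidentiality", "availability"],
    threats=["insider misuse of access", "network intrusion via exposed service"],
    current_protection=["role-based access control", "nightly backups"],
)
print(profile.asset, "->", profile.threats)
```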


8. Explain the reactive approach to risk management with a proper diagram.


ANS:
The Reactive Approach: Today, many information technology (IT) professionals feel
tremendous pressure to complete their tasks quickly with as little inconvenience to users as
possible. When a security event occurs, many IT professionals feel like the only things they
have time to do are to contain the situation, figure out what happened, and fix the affected
systems as quickly as possible. Some may try to identify the root cause, but even that might
seem like a luxury for those under extreme resource constraints.
While a reactive approach can be an effective tactical response to security risks that have
been exploited and turned into security incidents, imposing a small degree of rigor to the
reactive approach can help organizations of all types to better use their resources.
Recent security incidents may help an organization to predict and prepare for future
problems. This means that an organization that takes time to respond to security incidents in a
calm and rational manner while determining the underlying reasons that allowed the incident
to transpire will be better able to both protect itself from similar problems in the future and
respond more quickly to other issues that may arise.

A deep examination into incident response is beyond the scope of this guide, but following
six steps when you respond to security incidents can help you manage them quickly and
efficiently:
1. Protect human life and people’s safety. This should always be your first priority. For
example, if affected computers include life support systems, shutting them off may not be an
option; perhaps you could logically isolate the systems on the network by reconfiguring
routers and switches without disrupting their ability to help patients.
2. Contain the damage. Containing the harm that the attack caused helps to limit additional
damage. Protect important data, software, and hardware quickly. Minimizing disruption of
computing resources is an important consideration, but keeping systems up during an attack
may result in greater and more widespread problems in the long run. For example, if you
contract a worm in your environment, you could try to limit the damage by disconnecting
servers from the network. However, sometimes disconnecting servers can cause more harm
than good. Use your best judgment and your knowledge of your own network and systems to
make this determination. If you determine that there will be no adverse effects, or that they
would be outweighed by the positive benefits of activity, containment should begin as
quickly as possible during a security incident by disconnecting from the network the systems
known to be affected. If you cannot contain the damage by isolating the servers, ensure that
you actively monitor the attacker’s actions in order to be able to remedy the damage as soon
as possible. And in any event, ensure that all log files are saved before shutting off any
server, in order to preserve the information contained in those files as evidence if you (or
your lawyers) need it later.
3. Assess the damage. Immediately make a duplicate of the hard disks in any servers that were
attacked and put those aside for forensic use later. Then assess the damage. You should begin
to determine the extent of the damage that the attack caused as soon as possible, right after
you contain the situation and duplicate the hard disks. This is important so that you can
restore the organization's operations as soon as possible while preserving a copy of the hard
disks for investigative purposes. If it is not possible to assess the damage in a timely manner,
you should implement a contingency plan so that normal business operations and productivity
can continue.
It is at this point that organizations may want to engage law enforcement regarding the
incident; however, you should establish and maintain working relationships with law
enforcement agencies that have jurisdiction over your organization’s business before an
incident occurs so that when a serious problem arises you know whom to contact and how to
work with them. You should also advise your company’s legal department immediately, so
that they can determine whether a civil lawsuit can be brought against anyone as a result of
the damage.
4. Determine the cause of the damage. In order to ascertain the origin of the assault, it is
necessary to understand the resources at which the attack was aimed and what vulnerabilities
were exploited to gain access or disrupt services. Review the system configuration, patch
level, system logs, audit logs, and audit trails on both the systems that were directly affected
as well as network devices that route traffic to them. These reviews often help you to
discover where the attack originated in the system and what other resources were affected.
You should conduct this activity on the computer systems in place and not on the backed up
drives created in step 3. Those drives must be preserved intact for forensic purposes so that
law enforcement or your lawyers can use them to trace the perpetrators of the attack and
bring them to justice. If you need to create a backup for testing purposes to determine the
cause of the damage, create a second backup from your original system and leave the drives
created in step 3 unused.
5. Repair the damage. In most cases, it is very important that the damage be repaired as
quickly as possible to restore normal business operations and recover data lost during the
attack. The organization's business continuity plans and procedures should cover the
restoration strategy. The incident response team should also be available to handle the restore
and recovery process or to provide guidance on the process to the responsible team. During
recovery, contingency procedures are executed to limit the spread of the damage and isolate
it. Before returning repaired systems to service be careful that they are not reinfected
immediately by ensuring that you have mitigated whatever vulnerabilities were exploited
during the incident.
6. Review response and update policies. After the documentation and recovery phases are
complete, you should review the process thoroughly. Determine with your team the steps that
were executed successfully and what mistakes were made. In almost all cases, you will find
that your processes need to be modified to allow you to handle incidents better in the future.
You will inevitably find weaknesses in your incident response plan. This is the point of this
after-the-fact exercise—you are looking for opportunities for improvement. Any flaws should
prompt another round of the incident-response planning process so that you can handle future
incidents more smoothly.
This methodology is illustrated in the following diagram:
Figure: Incident Response Process


9. Explain the proactive approach to risk management. What are its benefits over the reactive
approach?
ANS:
Many organizations want an alternative to the reactive approach, one that seeks to reduce the
probability that security incidents will occur in the first place. Organizations that effectively
manage risk evolve toward a more proactive approach but, as discussed below, a proactive
approach is only part of the solution.

The Proactive Approach:


Proactive security risk management has many advantages over a reactive approach. Instead of
waiting for bad things to happen and then responding to them afterwards, you minimize the
possibility of the bad things ever occurring in the first place.
You make plans to protect your organization's important assets by implementing controls that
reduce the risk of vulnerabilities being exploited by malicious software, attackers, or
accidental misuse.
An analogy may help to illustrate this idea. Influenza is a deadly respiratory disease that
infects millions of people in the United States alone each year. Of those, over 100,000 must
be treated in hospitals, and about 36,000 die. You could choose to deal with the threat of the
disease by waiting to see if you get infected and then taking medicine to treat the symptoms if
you do become ill.
Alternatively, you could choose to get vaccinated before the influenza season begins.
Organizations should not, of course, completely forsake incident response.
An effective proactive approach can help organizations to significantly reduce the number of
security incidents that arise in the future, but it is not likely that such problems will
completely disappear.
Therefore, organizations should continue to improve their incident response processes while
simultaneously developing long-term proactive approaches.

10. Write a short note on OCTAVE.


ANS:
The Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE®)
approach is one such framework that enables organizations to understand, assess and address
their information security risks from the organization’s perspective. OCTAVE is not a
product, rather it is a process driven methodology to identify, prioritize and manage
information security risks.

It is intended to help organizations:


• Develop qualitative risk evaluation criteria based on operational risk tolerances
• Identify assets that are critical to the mission of the organization.
• Identify vulnerabilities and threats to the critical assets
• Determine and evaluate potential consequences to the organization if threats are realized
• Initiate corrective actions to mitigate risks and create practice-based protection strategy

For an organization looking to understand its information security needs, OCTAVE is a risk
based strategic assessment and planning technique for security. OCTAVE is self-directed,
meaning that people from an organization assume responsibility for setting the organization's
security strategy. The technique leverages people’s knowledge of their organization’s
security-related practices and processes to capture the current state of security practice
within the organization. Risks to the most critical assets are used to prioritize areas of
improvement and set the security strategy for the organization.

Unlike the typical technology-focused assessment, which is targeted at technological risk and
focused on tactical issues, OCTAVE is targeted at organizational risk and focused on
strategic, practice-related issues. It is a flexible evaluation that can be tailored for most
organizations.

When applying OCTAVE, a small team of people from the operational (or business) units
and the information technology (IT) department work together to address the security needs
of the organization, balancing the three key aspects illustrated in Figure 1: operational risk,
security practices, and technology.

The OCTAVE approach is driven by two of the aspects: operational risk and security
practices.
Technology is examined only in relation to security practices, enabling an organization to
refine the view of its current security practices. By using the OCTAVE approach, an
organization makes information-protection decisions based on risks to the confidentiality,
integrity, and availability of critical information-related assets. All aspects of risk (assets,
threats, vulnerabilities, and organizational impact) are factored into decision making,
enabling an organization to match a practice-based protection strategy to its security risks.

11. What are the various domains & corresponding processes of COBIT?

ANS: COBIT stands for “Control Objectives for Information and related Technology”.
COBIT is just one of the frameworks from ISACA (Information Systems Audit and Control
Association), an international professional association, affiliated member of (IFAC)
International Federation of Accountants and (ITGI) IT Governance Institute. ISACA has
more than 86,000 members in 160 countries and is a recognized worldwide leader in IT
governance, control, security and assurance which was founded back in 1969.
COBIT is an IT governance framework and supporting toolset that allows managers to bridge
the gap between control requirements, technical issues and business risks. COBIT enables
clear policy development and good practice for IT control throughout organizations. COBIT
emphasizes regulatory compliance, helps organizations to increase the value attained from IT,
enables alignment and simplifies implementation of the COBIT framework.
COBIT uses a maturity model as a means of assessing the maturity of the processes described
in the domains. The model encompasses the following levels:
0) Non-existent
1) Initial / ad hoc
2) Repeatable but intuitive
3) Defined process
4) Managed and measurable
5) Optimized
COBIT is made up of a number of 'domains', 'processes' and 'activities'. Here they are:
DOMAIN 1) Plan & Organize (PO)
PROCESSES:
PO1 Define a Strategic IT Plan and Direction
PO2 Define the Information Architecture
PO3 Determine Technological Direction
PO4 Define the IT Processes, Organization and Relationships
PO5 Manage the IT Investment (ITIL related: Financial Management for IT Services)
PO6 Communicate Management Aims and Direction
PO7 Manage IT Human Resources
PO8 Manage Quality
PO9 Assess and Manage IT Risks
PO10 Manage Projects
DOMAIN 2) Acquire & Implement (AI)
PROCESSES:
AI1 Identify Automated Solutions
AI2 Acquire and Maintain Application Software
AI3 Acquire and Maintain Technology Infrastructure
AI4 Enable Operation and Use
AI5 Procure IT Resources
AI6 Manage Changes (ITIL related: Change Management)
AI7 Install and Accredit Solutions and Changes (ITIL related: Release Management)
DOMAIN 3) Deliver & Support (DS)
PROCESSES:
DS1 Define and Manage Service Levels (ITIL related: Service Level Management)
DS2 Manage Third-party Services
DS3 Manage Performance and Capacity (ITIL related: Capacity Management)
DS4 Ensure Continuous Service (ITIL related: IT Service Continuity Management)
DS5 Ensure Systems Security (ITIL related: Security Management)
DS6 Identify and Allocate Costs (ITIL related: Financial Management for IT Services)
DS7 Educate and Train Users
DS8 Manage Service Desk and Incidents (ITIL related: Incident Management)
DS9 Manage the Configuration (ITIL related: Configuration Management)
DS10 Manage Problems (ITIL related: Problem Management)
DS11 Manage Data (ITIL related: Availability Management)
DS12 Manage the Physical Environment
DS13 Manage Operations

DOMAIN 4) Monitor & Evaluate (ME)
PROCESSES:
ME1 Monitor and Evaluate IT Processes
ME2 Monitor and Evaluate Internal Control
ME3 Ensure Regulatory Compliance
ME4 Provide IT Governance

COBIT identifies four classes of IT resources:

1) People
2) Applications
3) Information
4) Infrastructure.

12. Explain any 2 methods of quantitative risk assessment?


ANS: Risk assessment is the process of identifying, estimating, and prioritizing information
security risks. Assessing risk requires the careful analysis of threat and vulnerability
information to determine the extent to which circumstances or events could adversely impact
an organization and the likelihood that such circumstances or events will occur.

A risk assessment methodology typically includes:


(i) A risk assessment process;
(ii) An explicit risk model, defining key terms and assessable risk factors and the
relationships among the factors;
(iii) An assessment approach (e.g., quantitative, qualitative, or semi-qualitative), specifying
the range of values those risk factors can assume during the risk assessment and how
combinations of risk factors are identified/analyzed so that values of those factors can be
functionally combined to evaluate risk; and
(iv) An analysis approach (e.g., threat-oriented, asset/impact-oriented, or vulnerability-
oriented), describing how combinations of risk factors are identified/analyzed to ensure
adequate coverage of the problem space at a consistent level of detail.
Risk assessment methodologies are defined by organizations and are a component of the risk
management strategy developed during the risk framing step of the risk management process.

ASSESSMENT APPROACH (QUANTITATIVE):

Risk, and its contributing factors, can be assessed in a variety of ways, including
quantitatively, qualitatively, or semi-quantitatively. Each risk assessment approach
considered by organizations has advantages and disadvantages. A preferred approach (or
situation-specific set of approaches) can be selected based on organizational culture and, in
particular, attitudes toward the concepts of uncertainty and risk communication.
Quantitative assessments typically employ a set of methods, principles, or rules for
assessing risk based on the use of numbers—where the meanings and proportionality of
values are maintained inside and outside the context of the assessment. This type of
assessment most effectively supports cost-benefit analyses of alternative risk responses or
courses of action. However, the meaning of the quantitative results may not always be clear
and may require interpretation and explanation—particularly to explain the assumptions and
constraints on using the results.
For example, organizations may typically ask if the numbers or results obtained in the risk
assessments are reliable or if the differences in the obtained values are meaningful or
insignificant.
Additionally, the rigor of quantification is significantly lessened when subjective
determinations are buried within the quantitative assessments, or when significant uncertainty
surrounds the determination of values. The benefits of quantitative assessments (in terms of
the rigor, repeatability, and reproducibility of assessment results) can, in some cases, be
outweighed by the costs (in terms of the expert time and effort and the possible deployment
and use of tools required to make such assessments).
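
To make the quantitative approach concrete, a widely used formulation (not prescribed by these
notes) expresses risk as an annualized loss expectancy: single loss expectancy (SLE) = asset
value (AV) x exposure factor (EF), and annualized loss expectancy (ALE) = SLE x annualized rate
of occurrence (ARO). The sketch below uses purely illustrative figures and only shows the
arithmetic, not a definitive method:

    # Minimal quantitative risk sketch (all values are illustrative assumptions).
    def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
        sle = asset_value * exposure_factor   # single loss expectancy per incident
        ale = sle * annual_rate               # expected loss per year
        return sle, ale

    # Hypothetical asset: a customer database valued at $500,000, with 40% of its
    # value lost per incident and an estimated 0.5 incidents per year.
    sle, ale = annualized_loss_expectancy(500_000, 0.40, 0.5)
    print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")   # SLE = $200,000, ALE = $100,000

On this simple model, a control whose annual cost is lower than the ALE reduction it delivers
supports the cost-benefit analyses mentioned above; the caveat about subjective inputs applies
equally to the outputs.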

13. Explain with diagram OCTAVE method.


ANS: OCTAVE is a risk-based strategic assessment and planning technique for information
security. It is self-directed, meaning that people from within the organisation assume
responsibility for setting the organisation’s security strategy.
The approach leverages people’s knowledge of their organisation’s security related practices
and processes to capture the current state of security practice within the organisation.
Risks to the most critical assets are used to prioritise areas of improvement and set the
security strategy for the organisation.
Unlike the typical technology focused assessment that is targeted at technological risks and
focused on tactical issues, OCTAVE is targeted at organisational risk and focused on
strategic, practice-related issues. It is a flexible evaluation that can be tailored for most
organisations.

The OCTAVE Method

The OCTAVE Method is designed for large organizations that have a multi-layered hierarchy and
maintain their own computing infrastructure.

The organizational, technological and analysis aspects of an information security risk
evaluation are undertaken by a three-phased approach with eight processes.
• Phase 1: Build asset-based threat profiles (organizational evaluation)—The analysis team
determines critical assets and what is currently being done to protect them. The security
requirements for each critical asset are then identified. Finally, the organizational
vulnerabilities in the existing practices and the threat profile for each critical asset are
established.

• Phase 2: Identify infrastructure vulnerabilities (technological evaluation)—The analysis
team identifies network access paths and the classes of IT components related to each critical
asset. The team then determines the extent to which each class of component is resistant to
network attacks and establishes the technological vulnerabilities that expose the critical
assets.

• Phase 3: Develop security strategy and mitigation plans (strategy and plan
development)—The analysis team establishes risks to the organization’s critical assets based
on analysis of the information gathered and decides what to do about them. The team creates
a protection strategy for the organisation and mitigation plans to address identified risks. The
team also determines the ‘next steps’ required for implementation and gains senior
management’s approval on the outcome of the whole process.
14. Explain with diagram OCTAVE allegro

ANS: OCTAVE Allegro is focused on risk assessment in an organisational context, but offers
an alternative approach and attempts to improve an organisation’s ability to perform risk
assessment in a more efficient and effective manner. One of the insights acquired from earlier
experiences has been the need to move to a more information-centric risk assessment.

One of the guiding philosophies of Allegro has been that when information assets are the
focus of the security risk assessment, all other related assets are considered ‘information
containers’, storing, processing or transporting the information assets. Information containers
can be people (since people access information and gain knowledge), objects (piece of paper)
or technology (database). Thus, threats to information assets are analysed by considering
where they live and effectively limiting the number and types of assets brought into the
process.

Some key drivers that led SEI to formulating this new methodology include:

• Improving ease of use


• Refining the definition of assessment scope by introducing the container concept
• Streamlining data collection and threat identification processes
• Reducing training and knowledge requirements
• Improving institutionalisation and repeatability
• Reducing the technology view

The OCTAVE Allegro approach comprises eight processes and is organised into four phases:

• Phase 1: Establish drivers—The organisation develops risk measurement criteria consistent
with organisational drivers.

• Phase 2: Profile assets—Information assets that are determined to be critical are identified
and profiled. This profiling process establishes clear boundaries for the asset; identifies its
security requirements; and identifies all of the locations where the asset is stored, transported
or processed

• Phase 3: Identify threats—Threats to critical information assets are identified in the
context of the locations where the asset is stored, transported or processed.

• Phase 4: Identify and mitigate risks—Risks to information assets are identified and
analysed and the development of mitigation approaches commences.
15. What are the various risk framing components & explain relationship among them?
Ans:
Risk is a measure of the extent to which an entity is threatened by a potential circumstance or
event, and is typically a function of: (i) the adverse impacts that would arise if the
circumstance or event occurs; and (ii) the likelihood of occurrence. Information security risks
are those risks that arise from the loss of confidentiality, integrity, or availability of
information or information systems and reflect the potential adverse impacts to organizational
operations (i.e., mission, functions, image, or reputation), organizational assets, individuals,
other organizations, and the Nation. Risk assessment is the process of identifying, estimating,
and prioritizing information security risks.

Assessing risk requires the careful analysis of threat and vulnerability information to
determine the extent to which circumstances or events could adversely impact an
organization and the likelihood that such circumstances or events will occur.

A risk assessment methodology typically includes: a risk assessment process, an explicit risk
model, defining key terms and assessable risk factors and the relationships among the factors;
an assessment approach (e.g., quantitative, qualitative, or semi-qualitative), specifying the
range of values those risk factors can assume during the risk assessment and how
combinations of risk factors are identified/analyzed so that values of those factors can be
functionally combined to evaluate risk; and an analysis approach (e.g., threat oriented,
asset/impact-oriented, or vulnerability-oriented), describing how combinations of risk factors
are identified/analyzed to ensure adequate coverage of the problem space at a consistent level
of detail. Risk assessment methodologies are defined by organizations and are a component
of the risk management strategy developed during the risk framing step of the risk
management process.
Figure 2 illustrates the fundamental components in organizational risk frames and the
relationships among those components.

16. How are the values of asset derived in quantitative risk assessment approach?
Ans:
In quantitative risk assessments, the goal is to try to calculate objective numeric values for
each of the components gathered during the risk assessment and cost-benefit analysis. For
example, you estimate the true value of each business asset in terms of what it would cost to
replace it, what it would cost in terms of lost productivity, what it would cost in terms of
brand reputation, and other direct and indirect business values. You endeavor to use the same
objectivity when computing asset exposure, cost of controls, and all of the other values that
you identify during the risk management process.
Valuing Assets
Determining the monetary value of an asset is an important part of security risk management.
Business managers often rely on the value of an asset to guide them in determining how
much money and time they should spend securing it. Many organizations maintain a list of
asset values (AVs) as part of their business continuity plans. Note how the numbers
calculated are actually subjective estimates, though: No objective tools or methods for
determining the value of an asset exist. To assign a value to an asset, calculate the following
three primary factors:
(1) The overall value of the asset to your organization:
Calculate or estimate the asset’s value in direct financial terms. Consider a simplified
example of the impact of temporary disruption of an e-commerce Web site that normally runs
seven days a week, 24 hours a day, generating an average of $2,000 per hour in revenue from
customer orders. You can state with confidence that the annual value of the Web site in terms
of sales revenue is $17,520,000.

(2) The immediate financial impact of losing the asset:
If you deliberately simplify the example and assume that the Web site generates a constant
rate per hour, and the same Web site becomes unavailable for six hours, the calculated
exposure is .000685 or .0685 percent per year. By multiplying this exposure percentage by
the annual value of the asset, you can predict that the directly attributable losses in this case
would be approximately $12,000. In reality, most e-commerce Web sites generate revenue at
a wide range of rates depending upon the time of day, the day of the week, the season,
marketing campaigns, and other factors. Additionally, some customers may find an
alternative Web site that they prefer to the original, so the Web site may have some
permanent loss of users. Calculating the revenue loss is actually quite complex if you want to
be precise and consider all potential types of loss.

(3) The indirect business impact of losing the asset:
In this example, the company estimates that it would spend $10,000 on advertising to
counteract the negative publicity from such an incident. Additionally, the company also
estimates a loss of .01 or 1 percent of annual sales, or $175,200. By combining the extra
advertising expenses and the loss in annual sales revenue, you can predict a total of $185,200
in indirect losses in this case.
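
The arithmetic behind the three factors can be checked with a few lines of code. The sketch
below simply reproduces the illustrative figures from the example above (it is not a general
valuation tool):

    # Reproduces the e-commerce Web site example (illustrative figures from the text).
    hourly_revenue = 2_000                      # average revenue per hour ($)
    annual_value = hourly_revenue * 24 * 365    # (1) overall asset value: $17,520,000

    outage_hours = 6
    exposure = outage_hours / (24 * 365)        # fraction of a year lost: ~0.000685
    direct_loss = annual_value * exposure       # (2) immediate impact: ~$12,000

    advertising = 10_000                        # spend to counteract negative publicity
    lost_sales = 0.01 * annual_value            # 1 percent of annual sales: $175,200
    indirect_loss = advertising + lost_sales    # (3) indirect impact: $185,200

    print(annual_value, round(direct_loss), round(indirect_loss))
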
UNIT: 2
1. What are the various uses of IDPS technologies?
Ans:
Organizations should consider using multiple types of IDPS technologies to achieve more
comprehensive and accurate detection and prevention of malicious activity.
The four primary types of IDPS technologies—network-based, wireless, NBA, and host-
based—each offer fundamentally different information gathering, logging, detection, and
prevention capabilities. Each technology type offers benefits over the others, such as
detecting some events that the others cannot and detecting some events with significantly
greater accuracy than the other technologies.
In many environments, a robust IDPS solution cannot be achieved without using multiple
types of IDPS technologies. For most environments, a combination of network-based and
host-based IDPS technologies is needed for an effective IDPS solution. Wireless IDPS
technologies may also be needed if the organization determines that its wireless networks
need additional monitoring or if the organization wants to ensure that rogue wireless
networks are not in use in the organization’s facilities. NBA technologies can also be
deployed if organizations desire additional detection capabilities for denial of service attacks,
worms, and other threats that NBAs are particularly well-suited to detecting. Organizations
should consider the different capabilities of each technology type along with other cost-
benefit information when selecting IDPS technologies.

The four types of IDPS technologies are: ·


Network-Based, which monitors network traffic for particular network segments or devices
and analyzes the network and application protocol activity to identify suspicious activity.
Wireless, which monitors wireless network traffic and analyzes it to identify suspicious
activity involving the wireless networking protocols themselves.
Network Behavior Analysis (NBA), which examines network traffic to identify threats that
generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain
forms of malware, and policy violations (e.g., a client system providing network services to
other systems).
Host-Based, which monitors the characteristics of a single host and the events occurring
within that host for suspicious activity.
Organizations planning to use multiple types of IDPS technologies or multiple products of
the same
IDPS technology type should consider whether or not the IDPSs should be integrated.
Direct IDPS integration most often occurs when an organization uses multiple IDPS products
from a single vendor, by having a single console that can be used to manage and monitor the
multiple products.
Some products can also mutually share data, which can speed the analysis process and help
users to better prioritize threats. A more limited form of direct IDPS integration is having one
IDPS product provide data for another IDPS product (but no data sharing in the opposite
direction). Indirect IDPS integration is usually performed with security information and event
management (SIEM) software, which is designed to import information from various
security-related logs and correlate events among them.

SIEM software complements IDPS technologies in several ways, including correlating events
logged by different technologies, displaying data from many event sources, and providing
supporting information from other sources to help users verify the accuracy of IDPS alerts.
2. What are the various functions of IDPS technologies?


Ans:
Intrusion detection is the process of monitoring the events occurring in a computer system or
network and analyzing them for signs of possible incidents, which are violations or imminent
threats of violation of computer security policies, acceptable use policies, or standard security
practices. Intrusion prevention is the process of performing intrusion detection and attempting
to stop detected possible incidents. An intrusion detection system (IDS) is software that
automates the intrusion detection process. An intrusion prevention system (IPS) is software
that has all the capabilities of an intrusion detection system and can also attempt to stop
possible incidents. Intrusion detection and prevention systems (IDPS) are primarily focused
on identifying possible incidents, logging information about them, attempting to stop them,
and reporting them to security administrators. In addition, organizations use IDPSs for other
purposes, such as identifying problems with security policies, documenting existing threats,
and deterring individuals from violating security policies.
The four primary types of IDPS technologies—network-based, wireless, NBA, and host-
based—each offer fundamentally different information gathering, logging, detection, and
prevention capabilities. Each technology type offers benefits over the others, such as
detecting some events that the others cannot and detecting some events with significantly
greater accuracy than the other technologies. In many environments, a robust IDPS solution
cannot be achieved without using multiple types of IDPS technologies.
For most environments, a combination of network-based and host-based IDPS technologies is
needed for an effective IDPS solution. Wireless IDPS technologies may also be needed if the
organization determines that its wireless networks need additional monitoring or if the
organization wants to ensure that rogue wireless networks are not in use in the organization’s
facilities. NBA technologies can also be deployed if organizations desire additional detection
capabilities for denial of service attacks, worms, and other threats that NBAs are particularly
well-suited to detecting. Organizations should consider the different capabilities of each
technology type along with other cost-benefit information when selecting IDPS technologies.
The four types of IDPS technologies are: ·
Network-Based, which monitors network traffic for particular network segments or devices
and analyzes the network and application protocol activity to identify suspicious activity.
Wireless, which monitors wireless network traffic and analyzes it to identify suspicious
activity involving the wireless networking protocols themselves.
Network Behavior Analysis (NBA), which examines network traffic to identify threats that
generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain
forms of malware, and policy violations (e.g., a client system providing network services to
other systems).
Host-Based, which monitors the characteristics of a single host and the events occurring
within that host for suspicious activity.

3. What are the common detection methodologies of IDPS?


IDPS technologies use many methodologies to detect incidents. Most IDPS technologies use
multiple detection methodologies, either separately or integrated, to provide broader and more
accurate detection. For example, an e-mail with a subject of “Free pictures!” and an
attachment filename of “freepics.exe” has characteristics of a known form of malware.
1) Signature-Based Detection
A signature is a pattern that corresponds to a known threat. Signature-based detection is the
process of comparing signatures against observed events to identify possible incidents.
Signature-based detection is very effective at detecting known threats but largely ineffective
at detecting previously unknown threats.
For example, if an attacker modified the malware in the previous example to use a filename
of “freepics2.exe”, a signature looking for “freepics.exe” would not match it.
Signature-based detection is the simplest detection method because it just compares the
current unit of activity, such as a packet or a log entry, to a list of signatures using string
comparison operations.
Signature-based detection technologies have little understanding of many network or
application protocols and cannot track and understand the state of complex communications.
They also lack the ability to remember previous requests when processing the current request.
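
As a minimal sketch of this idea (the signature and event fields below are invented for
illustration; real products use much richer signature languages):

    # Toy signature-based detection: compare each observed event against a list
    # of known-bad patterns using simple string comparisons.
    SIGNATURES = [
        {"name": "freepics malware", "subject": "Free pictures!", "attachment": "freepics.exe"},
    ]

    def match_signatures(event):
        return [sig["name"] for sig in SIGNATURES
                if event.get("subject") == sig["subject"]
                and event.get("attachment") == sig["attachment"]]

    email = {"subject": "Free pictures!", "attachment": "freepics.exe"}
    print(match_signatures(email))   # ['freepics malware']
    # Renaming the attachment to "freepics2.exe" defeats the signature, which is
    # why this method misses previously unknown variants.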

2) Anomaly-Based Detection

Anomaly-based detection is the process of comparing definitions of what activity is considered
normal against observed events to identify significant deviations. An IDPS using
anomaly-based detection has profiles that represent the normal behavior of such things as
users, hosts, network connections, or applications. The profiles are developed by monitoring
the characteristics of typical activity over a period of time. Profiles can be developed for
many behavioral attributes, such as the number of e-mails sent by a user, the number of failed
login attempts for a host, and the level of processor usage for a host in a given period of time.
The major benefit of anomaly-based detection methods is that they can be very effective at
detecting previously unknown threats. An initial profile is generated over a period of time
(typically days, sometimes weeks) sometimes called a training period.
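
A rough illustration of a profile and a deviation check (the attribute, training data, and
three-standard-deviation threshold are assumptions made for the example):

    # Toy anomaly-based detection: profile failed logins per hour from a training
    # period, then flag observations that deviate significantly from the profile.
    from statistics import mean, stdev

    training = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]           # failed logins/hour during training
    profile_mean, profile_std = mean(training), stdev(training)

    def is_anomalous(observed, n_std=3):
        return abs(observed - profile_mean) > n_std * profile_std

    print(is_anomalous(4))    # False - within normal variation
    print(is_anomalous(40))   # True  - significant deviation, raise an alert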

3) Stateful Protocol Analysis

Stateful protocol analysis is the process of comparing predetermined profiles of generally
accepted definitions of benign protocol activity for each protocol state against observed
events to identify deviations. Stateful protocol analysis relies on vendor-developed universal
profiles that specify how particular protocols should and should not be used. The “stateful” in
stateful protocol analysis means that the IDPS is capable of understanding and tracking the
state of network, transport, and application protocols that have a notion of state. For example,
when a user starts a File Transfer Protocol (FTP) session, the session is initially in the
unauthenticated state. Unauthenticated users should only perform a few commands in this
state, such as viewing help information or providing usernames and passwords. Once the user
has authenticated successfully, the session is in the authenticated state, and users are expected
to perform any of several dozen commands. Performing most of these commands while in the
unauthenticated state would be considered suspicious, but in the authenticated state
performing most of them is considered benign.
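
A stripped-down sketch of the FTP example (the command sets and states are heavily simplified):

    # Toy stateful protocol analysis: which FTP commands are benign depends on
    # the session state; out-of-state commands are treated as suspicious.
    ALLOWED = {
        "unauthenticated": {"USER", "PASS", "HELP", "QUIT"},
        "authenticated":   {"LIST", "RETR", "STOR", "CWD", "PWD", "QUIT"},
    }

    def check(state, command):
        return "ok" if command in ALLOWED[state] else "suspicious"

    print(check("unauthenticated", "USER"))   # ok
    print(check("unauthenticated", "RETR"))   # suspicious - retrieval before login
    print(check("authenticated", "RETR"))     # ok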

4. What are the various types of IDPS technologies?
There are many types of IDPS technologies. For the purposes of this document, they are
divided into the following four groups based on the type of events that they monitor and the
ways in which they are deployed:

Network-Based, which monitors network traffic for particular network segments or devices
and analyzes the network and application protocol activity to identify suspicious activity. It
can identify many different types of events of interest. It is most commonly deployed at a
boundary between networks, such as in proximity to border firewalls or routers, virtual
private network (VPN) servers, remote access servers, and wireless networks. Section 4
contains extensive information on network-based IDPS technologies.

Wireless, which monitors wireless network traffic and analyzes its wireless networking
protocols to identify suspicious activity involving the protocols themselves. It cannot identify
suspicious activity in the application or higher-layer network protocols (e.g., TCP, UDP) that
the wireless network traffic is transferring. It is most commonly deployed within range of an
organization’s wireless network to monitor it, but can also be deployed to locations where
unauthorized wireless networking could be occurring.

Network Behavior Analysis (NBA), which examines network traffic to identify threats that
generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain
forms of malware (e.g., worms, backdoors), and policy violations (e.g., a client system
providing network services to other systems). NBA systems are most often deployed to
monitor flows on an organization’s internal networks, and are also sometimes deployed
where they can monitor flows between an organization’s networks and external networks
(e.g., the Internet, business partners’ networks).

Host-Based, which monitors the characteristics of a single host and the events occurring
within that host for suspicious activity. Examples of the types of characteristics a host-based
IDPS might monitor are network traffic (only for that host), system logs, running processes,
application activity, file access and modification, and system and application configuration
changes. Host-based IDPSs are most commonly deployed on critical hosts such as publicly
accessible servers and servers containing sensitive information.

5. What are the typical components of IDPS System?

The typical components in an IDPS solution are as follows:

Sensor or Agent. Sensors and agents monitor and analyze activity. The term sensor is
typically used for IDPSs that monitor networks, including network-based, wireless, and
network behavior analysis technologies. The term agent is typically used for host-based IDPS
technologies.

Management Server. A management server is a centralized device that receives information
from the sensors or agents and manages them. Some management servers perform analysis on
the event information that the sensors or agents provide and can identify events that the
individual sensors or agents cannot. Matching event information from multiple sensors or
agents, such as finding events triggered by the same IP address, is known as correlation.
Management servers are available as both appliance and software-only products. Some small
IDPS deployments do not use any management servers, but most IDPS deployments do. In
larger IDPS deployments, there are often multiple management servers, and in some cases
there are two tiers of management servers.

Database Server. A database server is a repository for event information recorded by
sensors, agents, and/or management servers. Many IDPSs provide support for database
servers.

Console. A console is a program that provides an interface for the IDPS’s users and
administrators. Console software is typically installed onto standard desktop or laptop
computers. Some consoles are used for IDPS administration only, such as configuring sensors
or agents and applying software updates, while other consoles are used strictly for monitoring
and analysis. Some IDPS consoles provide both administration and monitoring capabilities.

6. What are the typical components of network based IDPS System?

A typical network-based IDPS is composed of sensors, one or more management servers,
multiple consoles, and optionally one or more database servers (if the network-based IDPS
supports their use). All of these components are similar to other types of IDPS technologies,
except for the sensors. A network based IDPS sensor monitors and analyzes network activity
on one or more network segments. The network interface cards that will be performing
monitoring are placed into promiscuous mode, which means that they will accept all
incoming packets that they see, regardless of their intended destinations.
Most IDPS deployments use multiple sensors, with large deployments having hundreds of
sensors. Sensors are available in two formats:

Appliance. An appliance-based sensor is comprised of specialized hardware and sensor
software. The hardware is typically optimized for sensor use, including specialized NICs and
NIC drivers for efficient capture of packets, and specialized processors or other hardware
components that assist in analysis. Parts or all of the IDPS software might reside in firmware
for increased efficiency. Appliances often use a customized, hardened operating system (OS)
that administrators are not intended to access directly.

Software Only. Some vendors sell sensor software without an appliance. Administrators can
install the software onto hosts that meet certain specifications. The sensor software might
include a customized OS, or it might be installed onto a standard OS just as any other
application would.

7. List & explain the various security capabilities of IDPS technologies.

ANS: - SECURITY CAPABILITIES

Most IDPS technologies can provide a wide variety of security capabilities. The common
security capabilities are divided into four categories: information gathering, logging,
detection, and prevention.
1. Information Gathering Capabilities


Some IDPS technologies offer information gathering capabilities, such as collecting
information on hosts or networks from observed activity. Examples include identifying hosts
and the operating systems and applications that they use, and identifying general
characteristics of the network.

2. Logging Capabilities
IDPSs typically perform extensive logging of data related to detected events. This data can
be used to confirm the validity of alerts, investigate incidents, and correlate events between
the IDPS and other logging sources.
Data fields commonly used by IDPSs include event date and time, event type, importance
rating (e.g., priority, severity, impact, confidence), and prevention action performed (if any).
Specific types of IDPSs log additional data fields, such as network-based IDPSs performing
packet captures and host-based IDPSs recording user IDs.
IDPS technologies typically permit administrators to store logs locally and send copies of
logs to centralized logging servers (e.g., syslog, security information and event management
software). Generally, logs should be stored both locally and centrally to support the integrity
and availability of the data. Also, IDPSs should have their clocks synchronized using the
Network Time Protocol (NTP) or through frequent manual adjustments so that their log
entries have accurate timestamps.
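
Purely for illustration, a single logged event built from the fields listed above might look
like the record below (the field names are invented; every product defines its own schema):

    # Illustrative IDPS log record using commonly logged fields.
    event = {
        "timestamp": "2015-03-14T10:22:31Z",    # accurate because clocks are NTP-synchronized
        "event_type": "signature_match",
        "severity": "high",
        "confidence": 0.9,
        "source_ip": "192.0.2.71",
        "destination_ip": "192.168.1.100",
        "prevention_action": "session_terminated",
    }
    # Stored locally and also forwarded to a central syslog/SIEM server.
    print(event["event_type"], event["severity"])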

3. Detection Capabilities
IDPS technologies typically offer extensive, broad detection capabilities. Most products use
a combination of detection techniques, which generally supports more accurate detection and
more flexibility in tuning and customization. Technologies vary widely in their tuning and
customization capabilities.
Typically, the more powerful a product’s tuning and customization capabilities are, the more
its detection accuracy can be improved from the default configuration. Organizations should
carefully consider the tuning and customization capabilities of IDPS technologies when
evaluating products.

Examples of such capabilities are as follows:


(1) Thresholds:
A threshold is a value that sets the limit between normal and abnormal behavior. Thresholds
usually specify a maximum acceptable level, such as x failed connection attempts in 60
seconds, or x characters for a filename length. Thresholds are most often used for anomaly-
based detection and stateful protocol analysis.
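
A minimal sketch of a threshold check along the lines of the failed-connection example (the
limits are illustrative):

    # Toy threshold: alert when more than MAX_FAILURES failed connection attempts
    # occur within a 60-second window.
    MAX_FAILURES, WINDOW = 5, 60

    def over_threshold(failure_times, now):
        recent = [t for t in failure_times if now - t <= WINDOW]
        return len(recent) > MAX_FAILURES

    attempts = [100, 105, 110, 115, 120, 125, 130]    # seconds at which failures occurred
    print(over_threshold(attempts, now=130))           # True - 7 failures in the last 60 seconds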

(2) Blacklists and Whitelists:


A blacklist is a list of discrete entities, such as hosts, TCP or UDP port numbers, ICMP types
and codes, applications, usernames, URLs, filenames, or file extensions, that have been
previously determined to be associated with malicious activity.
Blacklists, also known as hot lists, are typically used to allow IDPSs to recognize and block
activity that is highly likely to be malicious, and may also be used to assign a higher priority
to alerts that match entries on the blacklists.
Some IDPSs generate dynamic blacklists that are used to temporarily block recently detected
threats (e.g., activity from an attacker’s IP address).
A whitelist is a list of discrete entities that are known to be benign. Whitelists are typically
used on a granular basis to reduce or ignore false positives involving known benign activity
from trusted hosts. Whitelists and blacklists are most commonly used in signature-based
detection and stateful protocol analysis.
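
A minimal sketch of how the two lists are typically consulted (the addresses come from
documentation ranges and are not real threat data):

    # Toy blacklist/whitelist triage: whitelisted sources are ignored to suppress
    # false positives; blacklisted sources are blocked and alerted at high priority.
    WHITELIST = {"192.168.1.10"}     # trusted internal scanner, known benign
    BLACKLIST = {"203.0.113.50"}     # previously observed attacker address

    def triage(source_ip):
        if source_ip in WHITELIST:
            return "ignore"
        if source_ip in BLACKLIST:
            return "block and raise high-priority alert"
        return "normal analysis"

    print(triage("203.0.113.50"))
    print(triage("192.168.1.10"))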

(3) Alert Settings: Most IDPS technologies allow administrators to customize each alert
type. Examples of actions that can be performed on an alert type include the following:
– Toggling it on or off
– Setting a default priority or severity level
– Specifying what information should be recorded and what notification methods (e.g., e-
mail, pager) should be used
– Specifying which prevention capabilities should be used.

Some products also suppress alerts if an attacker generates many alerts in a short period of
time, and may also temporarily ignore all future traffic from the attacker. This is to prevent
the IDPS from being overwhelmed by alerts.

(4)Code Viewing and Editing: Some IDPS technologies permit administrators to see some or
all of the detection-related code. Viewing the code can help analysts to determine why
particular alerts were generated, and thereby help to validate alerts and identify false
positives. The ability to edit all detection-related code and write new code (e.g., new
signatures) is necessary to fully customize certain types of detection capabilities.

Editing the code requires programming and intrusion detection skills; also, some IDPSs use
proprietary programming languages, which would necessitate the programmer learning a new
language. Bugs introduced into the code during the customization process could cause the
IDPS to function incorrectly or fail altogether, so administrators should treat code
customization as they would any other alteration of production systems’ code.

Administrators should review tuning and customizations periodically to ensure that they are
still accurate. For example, whitelists and blacklists should be checked regularly and all
entries validated to ensure that they are still accurate and necessary. Thresholds and alert
settings might need to be adjusted periodically to compensate for changes in the environment
and in threats. Edits to detection code might need to be replicated whenever the product is
updated (e.g., patched, upgraded). Administrators should also ensure that any products
collecting baselines for anomaly-based detection have their baselines rebuilt periodically as
needed to support accurate detection.

4. Prevention Capabilities
Most IDPSs offer multiple prevention capabilities; the specific capabilities vary by IDPS
technology type. IDPSs usually allow administrators to specify the prevention capability
configuration for each type of alert. This usually includes enabling or disabling prevention,
as well as specifying which type of prevention capability should be used. Some IDPS sensors
have a learning or simulation mode that suppresses all prevention actions and instead
indicates when a prevention action would have been performed. This allows administrators
to monitor and fine-tune the configuration of the prevention capabilities before enabling
prevention actions, which reduces the risk of inadvertently blocking benign activity.
8. What are the various types of sensors used in network based IDPS System?

ANS: - TYPES OF SENSORS

A typical network-based IDPS is composed of sensors, one or more management servers,
multiple consoles, and optionally one or more database servers. All of these components are
similar to other types of IDPS technologies, except for the sensors.
A network based IDPS sensor monitors and analyzes network activity on one or more
network segments.
The network interface cards that will be performing monitoring are placed into promiscuous
mode, which means that they will accept all incoming packets that they see, regardless of
their intended destinations.
Most IDPS deployments use multiple sensors, with large deployments having hundreds of
sensors.

Sensors are available in two formats:


1. Appliance: An appliance-based sensor is comprised of specialized hardware and sensor
software. The hardware is typically optimized for sensor use, including specialized NICs and
NIC drivers for efficient capture of packets, and specialized processors or other hardware
components that assist in analysis. Parts or all of the IDPS software might reside in firmware
for increased efficiency.
Appliances often use a customized, hardened operating system (OS) that administrators are
not intended to access directly.

2. Software Only: Some vendors sell sensor software without an appliance. Administrators
can install the software onto hosts that meet certain specifications. The sensor software
might include a customized OS, or it might be installed onto a standard OS just as any other
application would.

9. Explain packet filtering firewall technology.

ANS: - PACKET FILTERING

Firewalls are devices or programs that control the flow of network traffic between networks
or hosts that employ differing security postures. A firewall is a security system that controls
incoming and outgoing network traffic based on a set of rules. Firewall is used to prevent
unauthorized users from accessing private networks connected to the Internet.

The most basic feature of a firewall is the packet filter. Older firewalls that were only packet
filters were essentially routing devices that provided access control functionality for host
addresses and communication sessions.
These devices, also known as stateless inspection firewalls, do not keep track of the state of
each flow of traffic that passes through the firewall; this means, for example, that they cannot
associate multiple requests within a single session to each other.

Packet filtering is at the core of most modern firewalls, but there are few firewalls sold today
that only do stateless packet filtering. Unlike more advanced filters, packet filters are not
concerned about the content of packets. Their access control functionality is governed by a
set of directives referred to as a ruleset.

Packet filtering capabilities are built into most operating systems and devices capable of
routing; the most common example of a pure packet filtering device is a network router that
employs access control lists.

In their most basic form, firewalls with packet filters operate at the network layer. This
provides network access control based on several pieces of information contained in a packet,
including:
(1) The packet’s source IP address—the address of the host from which the packet originated
(such as 192.168.1.1)

(2) The packet’s destination address—the address of the host the packet is trying to reach
(e.g., 192.168.2.1)

(3) The network or transport protocol being used to communicate between source and
destination hosts, such as TCP, UDP, or ICMP

(4) Possibly some characteristics of the transport layer communications sessions, such as
session source and destination ports (e.g., TCP 80 for the destination port belonging to a
web server, TCP 1320 for the source port belonging to a personal computer accessing the
server)

(5) The interface being traversed by the packet, and its direction (inbound or outbound).

Filtering inbound traffic is known as ingress filtering. Outgoing traffic can also be filtered, a
process referred to as egress filtering. Here, organizations can implement restrictions on their
internal traffic, such as blocking the use of external file transfer protocol (FTP) servers.
Organizations should only permit outbound traffic that uses the source IP addresses in use by
the organization—a process that helps block traffic with spoofed addresses from leaking onto
other networks.
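
A minimal sketch of a stateless ruleset evaluated top-down against the header fields listed
above (the rules and addresses are illustrative, and real rulesets are far richer):

    # Toy stateless packet filter: the first matching rule wins; the default is deny.
    import ipaddress

    RULES = [
        # (source prefix, destination prefix, protocol, destination port, action)
        ("0.0.0.0/0",      "192.168.2.1", "TCP", 80, "allow"),   # inbound web traffic
        ("192.168.0.0/16", "0.0.0.0/0",   "TCP", 21, "deny"),    # egress: block external FTP
    ]

    def filter_packet(src, dst, proto, dport):
        for r_src, r_dst, r_proto, r_port, action in RULES:
            if (ipaddress.ip_address(src) in ipaddress.ip_network(r_src)
                    and ipaddress.ip_address(dst) in ipaddress.ip_network(r_dst)
                    and proto == r_proto and dport == r_port):
                return action
        return "deny"   # default-deny posture

    print(filter_packet("198.51.100.7", "192.168.2.1", "TCP", 80))   # allow
    print(filter_packet("192.168.1.5", "203.0.113.9", "TCP", 21))    # deny (egress filtering)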

Stateless packet filters are generally vulnerable to attacks and exploits that take advantage of
problems within the TCP/IP specification and protocol stack. For example, many packet
filters are unable to detect when a packet’s network layer addressing information has been
spoofed or otherwise altered.
Spoofing attacks, such as using incorrect addresses in the packet headers, are generally
employed by intruders to bypass the security controls implemented in a firewall platform.
Firewalls that operate at higher layers can thwart some spoofing attacks by verifying that a
session is established, or by authenticating users before allowing traffic to pass. Because of
this, most firewalls that use packet filters also maintain some state information for the packets
that traverse the firewall.

Some packet filters can specifically filter packets that are fragmented.
Packet fragmentation is allowed by the TCP/IP specifications and is encouraged in situations
where it is needed. However, packet fragmentation has been used to make some attacks
harder to detect (by placing them within fragmented packets), and unusual fragmentation has
also been used as a form of attack. For example, some network based attacks have used
packets that should not exist in normal communications, such as sending some fragments of a
packet but not the first fragment, or sending packet fragments that overlap each other. To
prevent the use of fragmented packets in attacks, some firewalls have been configured to
block fragmented packets.

Some firewalls can reassemble fragments before passing them to the inside network, although
this requires additional firewall resources, particularly memory.
Firewalls that have this reassembly feature must implement it carefully; otherwise someone
can readily mount a denial-of-service attack. Choosing whether to block, reassemble, or pass
fragmented packets is a tradeoff between overall network interoperability and full system
security.

10. Explain stateful inspection

ANS: - STATEFUL INSPECTION

Stateful inspection improves on the functions of packet filters by tracking the state of
connections and blocking packets that deviate from the expected state. This is accomplished
by incorporating greater awareness of the transport layer.
As with packet filtering, stateful inspection intercepts packets at the network layer and
inspects them to see if they are permitted by an existing firewall rule, but unlike packet
filtering, stateful inspection keeps track of each connection in a state table. While the details
of state table entries vary by firewall product, they typically include source IP address,
destination IP address, port numbers, and connection state information.

Three major states exist for TCP traffic—connection establishment, usage, and termination
(which refers to both an endpoint requesting that a connection be closed and a connection
with a long period of inactivity.)

Stateful inspection in a firewall examines certain values in the TCP headers to monitor the
state of each connection. Each new packet is compared by the firewall to the firewall’s state
table to determine if the packet’s state contradicts its expected state.

For example, an attacker could generate a packet with a header indicating it is part of an
established connection, in hopes it will pass through a firewall. If the firewall uses stateful
inspection, it will first verify that the packet is part of an established connection listed in the
state table.

In the simplest case, a firewall will allow through any packet that seems to be part of an open
connection (or even a connection that is not yet fully established). However, many firewalls
are more cognizant of the state machines for protocols such as TCP and UDP, and they will
block packets that do not adhere strictly to the appropriate state machine.
For example, it is common for firewalls to check attributes such as TCP sequence numbers
and reject packets that are out of sequence.

Table 2-1 provides an example of a state table.


If a device on the internal network (shown here as 192.168.1.100) attempts to connect to a
device outside the firewall (192.0.2.71), the connection attempt is first checked to see if it is
permitted by the firewall ruleset.
If it is permitted, an entry is added to the state table that indicates a new session is being
initiated, as shown in the first entry under “Connection State”.
If 192.0.2.71 and 192.168.1.100 complete the three-way TCP handshake, the connection state
will change to “Established” and all subsequent traffic matching the entry will be allowed to
pass through the firewall.
Because some protocols, most notably UDP, are connectionless and do not have a formal
process for initializing, establishing, and terminating a connection, their state cannot be
established at the transport layer as it is for TCP.
For these protocols, most firewalls with stateful inspection are only able to track the source
and destination IP addresses and ports.
UDP packets must still match an entry in the state table based on source and destination IP
address and port information to be permitted to pass—a DNS response from an external
source would be permitted to pass only if the firewall had previously seen a corresponding
DNS query from an internal source.
Since the firewall is unable to determine when a session has ended, the entry is removed from
the state table after a preconfigured timeout value is reached. Application-level firewalls that
are able to recognize DNS over UDP will terminate a session after a DNS response is
received, and may act similarly with the Network Time Protocol (NTP).
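
A very small sketch of the state-table behaviour described above (the tuple key and states are
simplified, the port numbers are invented, and real firewalls also track attributes such as
TCP sequence numbers):

    # Toy stateful inspection: track connections in a state table and only admit
    # return traffic that matches an existing entry.
    state_table = {}

    def outbound_connection(src, sport, dst, dport):
        # Connection attempt permitted by the ruleset: record a new session.
        state_table[(src, sport, dst, dport)] = "Initiated"

    def inbound_packet(src, sport, dst, dport):
        key = (dst, dport, src, sport)        # reverse direction of the recorded session
        if key in state_table:
            state_table[key] = "Established"
            return "allow"
        return "drop"                         # no matching session in the state table

    outbound_connection("192.168.1.100", 1030, "192.0.2.71", 80)
    print(inbound_packet("192.0.2.71", 80, "192.168.1.100", 1030))    # allow
    print(inbound_packet("198.51.100.9", 80, "192.168.1.100", 1030))  # drop
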
11. Write short note on application firewalls.
Application Firewalls
A newer trend in stateful inspection is the addition of a stateful protocol analysis capability,
referred to by some vendors as deep packet inspection. Stateful protocol analysis improves
upon standard stateful inspection by adding basic intrusion detection technology—an
inspection engine that analyzes protocols at the application layer to compare vendor-
developed profiles of benign protocol activity against observed events to identify deviations.
This allows a firewall to allow or deny access based on how an application is running over
the network. For instance, an application firewall can determine if an email message contains
a type of attachment that the organization does not permit (such as an executable file), or if
instant messaging (IM) is being used over port 80 (typically used for HTTP). Another feature
is that it can block connections over which specific actions are being performed (e.g., users
could be prevented from using the FTP “put” command, which allows users to write files to
the FTP server). This feature can also be used to allow or deny web pages that contain
particular types of active content, such as Java or ActiveX, or that have SSL certificates
signed by a particular certificate authority (CA), such as a compromised or revoked CA.
Application firewalls can enable the identification of unexpected sequences of commands,
such as issuing the same command repeatedly or issuing a command that was not preceded
by another command on which it is dependent. These suspicious commands often originate
from buffer overflow attacks, DoS attacks, malware, and other forms of attack carried out
within application protocols such as HTTP. Another common feature is input validation for
individual commands, such as minimum and maximum lengths for arguments. For example,
a username argument with a length of 1000 characters is suspicious—even more so if it
contains binary data. Application firewalls are available for many common protocols
including HTTP, database (such as SQL), email (SMTP, Post Office Protocol [POP], and
Internet Message Access Protocol [IMAP]), voice over IP (VoIP), and Extensible Markup
Language (XML).
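The input-validation idea can be sketched as follows; the length limits, the command name, and the printable-ASCII check are illustrative assumptions rather than values taken from any real product.

    def validate_argument(command, value, min_len=1, max_len=64):
        """Reject arguments that are too short/long or contain non-printable characters."""
        if not (min_len <= len(value) <= max_len):
            return False, f"{command}: argument length {len(value)} outside {min_len}-{max_len}"
        if any(ord(c) < 0x20 or ord(c) > 0x7E for c in value):
            return False, f"{command}: argument contains binary data"
        return True, "ok"

    # A 1000-character username is flagged as suspicious:
    print(validate_argument("USER", "A" * 1000))
    # (False, 'USER: argument length 1000 outside 1-64')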
Another feature found in some application firewalls involves enforcing application state
machines, which are essentially checks on the traffic’s compliance to the standard for the
protocol in question. This compliance checking, sometimes called “RFC compliance” because
most protocols are defined in RFCs issued by the Internet Engineering Task Force (IETF),
can be a mixed blessing. Many products implement protocols in ways that almost, but not
completely, match the specification, so it is usually necessary to let such implementations
communicate across the firewall. Compliance checking is only useful when it detects and
blocks communication that can be harmful to protected systems.
Firewalls with both stateful inspection and stateful protocol analysis capabilities are not full-
fledged intrusion detection and prevention systems (IDPS), which usually offer much more
extensive attack detection and prevention capabilities. For example, IDPSs also use
signature-based and/or anomaly-based analysis to detect additional problems within network
traffic.

12. Write short note on Application-Proxy Gateways & Dedicated Proxy Servers.
Application-Proxy Gateways
An application-proxy gateway is a feature of advanced firewalls that combines lower-layer
access control with upper-layer functionality. These firewalls contain a proxy agent that acts
as an intermediary between two hosts that wish to communicate with each other, and never

allows a direct connection between them. Each successful connection attempt actually results
in the creation of two separate connections—one between the client and the proxy server, and
another between the proxy server and the true destination. The proxy is meant to be
transparent to the two hosts—from their perspectives there is a direct connection. Because
external hosts only communicate with the proxy agent, internal IP addresses are not visible to
the outside world. The proxy agent interfaces directly with the firewall ruleset to determine
whether a given instance of network traffic should be allowed to transit the firewall.
In addition to the ruleset, some proxy agents have the ability to require authentication of each
individual network user. This authentication can take many forms, including user ID and
password, hardware or software token, source address, and biometrics.
Like application firewalls, the proxy gateway operates at the application layer and can inspect
the actual content of the traffic. These gateways also perform the TCP handshake with the
source system and are able to protect against exploitations at each step of a communication.
In addition, gateways can make decisions to permit or deny traffic based on information in
the application protocol headers or payloads. Once the gateway determines that data should
be permitted, it is forwarded to the destination host.
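The "two separate connections" behaviour can be illustrated with a toy TCP relay in Python; the listening port and destination host below are hypothetical, and a real application-proxy gateway would additionally parse the application protocol, consult the ruleset, and possibly authenticate the user before relaying anything.

    import socket
    import threading

    def pipe(src, dst):
        # Copy bytes one way; a real proxy would inspect and filter the payload here.
        while data := src.recv(4096):
            dst.sendall(data)
        dst.close()

    def proxy(listen_port, server_host, server_port):
        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", listen_port))
        listener.listen()
        while True:
            client, _ = listener.accept()                                    # connection 1: client <-> proxy
            upstream = socket.create_connection((server_host, server_port))  # connection 2: proxy <-> server
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    # proxy(8080, "internal-web.example.org", 80)   # hypothetical listener and destination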
Application-proxy gateways are quite different from application firewalls. First, an
application-proxy gateway can offer a higher level of security for some applications because
it prevents direct connections between two hosts and it inspects traffic content to identify
policy violations. Another potential advantage is that some application-proxy gateways have
the ability to decrypt packets (e.g., SSL-protected payloads), examine them, and re-encrypt
them before sending them on to the destination host. Data that the gateway cannot decrypt is
passed directly through to the application. When choosing the type of firewall to deploy, it is
important to decide whether the firewall actually needs to act as an application proxy so that
it can match the specific policies needed by the organization.
Firewalls with application-proxy gateways can also have several disadvantages when
compared to packet filtering and stateful inspection. First, because of the “full packet
awareness” of application-proxy gateways, the firewall spends much more time reading and
interpreting each packet. Because of this, some of these gateways are poorly suited to high-
bandwidth or real-time applications—but application-proxy gateways rated for high
bandwidth are available. To reduce the load on the firewall, a dedicated proxy server can be
used to secure less time-sensitive services such as email and most web traffic. Another
disadvantage is that application-proxy gateways tend to be limited in terms of support for
new network applications and protocols—an individual, application-specific proxy agent is
required for each type of network traffic that needs to transit a firewall. Many application-
proxy gateway firewall vendors provide generic proxy agents to support undefined network
protocols or applications. Those generic agents tend to negate many of the strengths of the
application-proxy gateway architecture because they simply allow traffic to “tunnel” through
the firewall.
Dedicated Proxy Servers
Dedicated proxy servers differ from application-proxy gateways in that while dedicated
proxy servers retain proxy control of traffic, they usually have much more limited firewalling
capabilities. They are described in this section because of their close relationship to

application-proxy gateway firewalls. Many dedicated proxy servers are application-specific,


and some actually perform analysis and validation of common application protocols such as
HTTP. Because these servers have limited firewalling capabilities, such as simply blocking
traffic based on its source or destination, they are typically deployed behind traditional
firewall platforms. Typically, a main firewall could accept inbound traffic, determine which
application is being targeted, and hand off traffic to the appropriate proxy server (e.g., email
proxy). This server would perform filtering or logging operations on the traffic, then forward
it to internal systems.
A proxy server could also accept outbound traffic directly from internal systems, filter or log
the traffic, and pass it to the firewall for outbound delivery. An example of this is an HTTP
proxy deployed behind the firewall—users would need to connect to this proxy en route to
connecting to external web servers. Dedicated proxy servers are generally used to decrease
firewall workload and conduct specialized filtering and logging that might be difficult to
perform on the firewall itself.
In recent years, the use of inbound proxy servers has decreased dramatically. This is because
an inbound proxy server must mimic the capabilities of the real server it is protecting, which
becomes nearly impossible when protecting a server with many features. Using a proxy
server with fewer capabilities than the server it is protecting renders the non-matched
capabilities unusable. Additionally, the essential features that inbound proxy servers should
have (logging, access control, etc.) are usually built into the real servers. Most proxy servers
now in use are outbound proxy servers, with the most common being HTTP proxies.
Figure 2-2 shows a sample diagram of a network employing a dedicated HTTP proxy server
that has been placed behind another firewall system. The HTTP proxy would handle
outbound connections to external web servers and possibly filter for active content. Requests
from users first go to the proxy, and the proxy then sends the request (possibly changed) to
the outside web server. The response from that web server then comes back to the proxy,
which relays it to the user. Many organizations enable caching of frequently used web pages
on the proxy to reduce network traffic and improve response times.
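To show the outbound case from the client side, the snippet below directs web requests through a dedicated HTTP proxy; the proxy hostname and port are placeholders, not values from any particular deployment.

    import urllib.request

    # Route outbound requests through the dedicated proxy (hypothetical address and port).
    proxy_handler = urllib.request.ProxyHandler({
        "http": "http://proxy.example.org:3128",
        "https": "http://proxy.example.org:3128",
    })
    opener = urllib.request.build_opener(proxy_handler)
    response = opener.open("http://www.example.com/")
    print(response.status)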

13. Write short note on Web Application Firewalls & Firewalls for Virtual
Infrastructures.

Web Application Firewalls


The HTTP protocol used in web servers has been exploited by attackers in many ways, such
as to place malicious software on the computer of someone browsing the web, or to fool a
person into revealing private information that they might not have otherwise. Many of these
exploits can be detected by specialized application firewalls called web application firewalls
that reside in front of the web server.
Web application firewalls are a relatively new technology, as compared to other firewall
technologies, and the types of threats that they mitigate are still changing frequently. Because
they are put in front of web servers to prevent attacks on the server, they are often considered
to be very different from traditional firewalls.
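A very small sketch of the kind of pattern matching a web application firewall might apply to request parameters is shown below; the two signatures are illustrative only and nothing like a complete rule set.

    import re

    # Two illustrative signatures; real WAF rule sets are far larger and tuned per application.
    SIGNATURES = [
        (re.compile(r"(?i)(union\s+select|\bor\s+1=1)"), "possible SQL injection"),
        (re.compile(r"(?i)<script\b"), "possible cross-site scripting"),
    ]

    def inspect_request(params):
        """Return (parameter, reason) pairs for request parameters that look suspicious."""
        findings = []
        for name, value in params.items():
            for pattern, reason in SIGNATURES:
                if pattern.search(value):
                    findings.append((name, reason))
        return findings

    print(inspect_request({"id": "1 OR 1=1", "q": "hello"}))
    # [('id', 'possible SQL injection')]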
Firewalls for Virtual Infrastructures
Many virtualization solutions allow more than one operating system to run on a single
computer simultaneously, each appearing as if it were a real computer. This has become
popular recently because it allows organizations to make more efficient use of computer
hardware. Most of these types of virtualization systems include virtualized networking, which
allows the multiple operating systems to communicate as if they were on a standard Ethernet,
even though there is no actual networking hardware.
Network activity that passes directly between virtualized operating systems within a host
cannot be monitored by an external firewall. However, some virtualization systems offer
built-in firewalls or allow third-party software firewalls to be added as plug-ins. Using
firewalls to monitor virtualized networking is a relatively new area of firewall technology,
and it is likely to change significantly as virtualization usage continues to increase.

14. State the Limitations of Firewall Inspection.


Limitations of Firewall Inspection
Firewalls can only work effectively on traffic that they can inspect. Regardless of the firewall
technology chosen, a firewall that cannot understand the traffic flowing through it will not
handle that traffic properly—for example, allowing traffic that should be blocked. Many
network protocols use cryptography to hide the contents of the traffic. other encrypting
protocols include Secure Shell (SSH) and Secure Real-time Transport Protocol (SRTP).
Firewalls also cannot read application data that is encrypted, such as email that is encrypted
using the S/MIME or OpenPGP protocols, or files that are manually encrypted. Another
limitation faced by some firewalls is understanding traffic that is tunneled, even if it is not
encrypted. For example, IPv6 traffic can be tunneled in IPv4 in many different ways. The
content may still be unencrypted, but if the firewall does not understand the particular
tunneling mechanism used, the traffic cannot be interpreted.

In all these cases, the firewall’s rules will determine what to do with traffic it does not (or, in
the case of encrypted traffic, cannot) understand. An organization should have policies about
how to handle traffic in such cases, such as either permitting or blocking encrypted traffic
that is not authorized to be encrypted.
15. Write short note on VPN ?
ANS :
A virtual private network (VPN) extends a private network across a public network, such as
the Internet. It enables users to send and receive data across shared or public networks as if
their computing devices were directly connected to the private network, and thus are
benefiting from the functionality, security and management policies of the private network. A
VPN is created by establishing a virtual point-to-point connection through the use of
dedicated connections, virtual tunneling protocols, or traffic encryption.
A VPN spanning the Internet is similar to a wide area network (WAN). From a user
perspective, the extended network resources are accessed in the same way as resources
available within the private network. Traditional VPNs are characterized by a point-to-point
topology, and they do not tend to support or connect broadcast domains. Therefore,
communication, software, and networking, which are based on OSI layer 2 and broadcast
packets, such as NetBIOS used in Windows networking, may not be fully supported or work
exactly as they would on a local area network (LAN). VPN variants, such as Virtual Private
LAN Service (VPLS), and layer 2 tunneling protocols, are designed to overcome this
limitation.
VPNs allow employees to securely access the corporate intranet while traveling outside the
office. Similarly, VPNs securely connect geographically separated offices of an organization,
creating one cohesive network. VPN technology is also used by individual Internet users to
secure their wireless transactions, to circumvent geo restrictions and censorship, and to
connect to proxy servers for the purpose of protecting personal identity and location
Types of VPN
There are various kinds of VPNs available.
1. PPTP VPN
Point-to-Point Tunneling Protocol was developed by a consortium founded by Microsoft for
creating VPN over dialup networks, and as such has long been the standard protocol for
internal business VPN. It is a VPN protocol only, and relies on various authentication
methods to provide security (with MS-CHAP v2 being the most common). Available as
standard on just about every VPN capable platform and device, and thus being easy to set up
without the need to install additional software, it remains a popular choice both for businesses
and VPN providers. It also has the advantage of requiring a low computational overhead to
implement (i.e. it’s quick).
However, although now usually only found using 128-bit encryption keys, in the years since
it was first bundled with Windows 95 OSR2, a number of security
vulnerabilities have come to light, the most serious of which is the possibility of
unencapsulated MS-CHAP v2 Authentication.

2. Site-to-Site VPN
A site-to-site VPN works much like PPTP except that no dedicated line is used. It allows
different sites of the same organization, each with its own real network, to connect together
to form the VPN. In contrast to PPTP, the routing, encryption and decryption are handled by
the routers at both ends, which may be hardware-based or software-based.
3. L2TP VPN
L2TP, or Layer 2 Tunneling Protocol, is similar to PPTP in that it does not provide
encryption by itself and relies on the PPP protocol to do so. The main difference between
PPTP and L2TP is that the latter provides not only data confidentiality but also data
integrity. L2TP was developed by Microsoft and Cisco.
4. IPsec
A tried and trusted protocol which sets up a tunnel from the remote site to the central site.
As the name indicates, it is designed for IP traffic. IPsec requires expensive, time-consuming
client installations, which can be regarded as an important drawback.
5. SSL
SSL, or Secure Sockets Layer, is a VPN accessed via HTTPS through a web browser. SSL
creates a secure session from the PC's browser to the application server being accessed. The
main benefit of SSL is that it does not need any special client software installed, since it
uses the web browser as the client.
6. MPLS VPN
MPLS (Multi-Protocol Label Switching) VPNs are not well suited to remote access for
individual users, but for site-to-site connectivity they are the most flexible and scalable
option. These are essentially ISP-provisioned VPNs, where several sites are connected to
form the VPN using the same ISP. An MPLS network is not as easy to set up or extend as
the others, and is therefore likely to cost more.
7. Hybrid VPN
Several vendors have managed to combine features of SSL and IPsec as well as other VPN
types. Hybrid VPN servers can accept connections from several kinds of VPN clients. They
provide greater flexibility at both the client and server levels, but tend to be costly.

8. SSTP

Secure Socket Tunneling Protocol was introduced by Microsoft in Windows Vista SP1, and
although it is now available for Linux, RouterOS and SEIL, it is still largely a Windows-only
platform (and there is a snowball’s chance in hell of it ever appearing on an Apple device!).
SSTP uses SSL v3, and therefore offers similar advantages to OpenVPN (such as the ability
to use TCP port 443 to avoid NAT firewall issues), and because it is integrated into
Windows may be easier to use and more stable.
However unlike OpenVPN, SSTP is a proprietary standard owned by Microsoft. This means
that the code is not open to public scrutiny, and Microsoft’s history of co-operating with the
NSA, and on-going speculation about possible backdoors built-in to the Windows operating
system, do not inspire us with confidence in the standard.

16.Explain various network layout with Firewall Implementation ?

ANS :

Network Layouts with Firewalls

Figure 3-1 shows a typical network layout with a hardware firewall device acting as a router.
The unprotected side of the firewall connects to the single path labeled “WAN,” and the
protected side connects to three paths labeled “LAN1,” “LAN2,” and “LAN3.” The firewall
acts as a router for traffic between the wide area network (WAN) path and the LAN paths. In
the figure, one of the LAN paths also has a router; some organizations prefer to use multiple
layers of routers due to legacy routing policies within the network.

Many hardware firewall devices have a feature called DMZ, an acronym related to the
demilitarized zones that are sometimes set up between warring countries. While no single
technical definition exists for firewall DMZs, they are usually interfaces on a routing firewall
that are similar to the interfaces found on the firewall’s protected side. The major difference
is that traffic moving between the DMZ and other interfaces on the protected side of the
firewall still goes through the firewall and can have firewall protection policies applied.
DMZs are sometimes useful for organizations that have hosts that need to have all traffic

destined for the host bypass some of the firewall’s policies (for example, because the DMZ
hosts are sufficiently hardened), but traffic coming from the hosts to other systems on the
organization’s network need to go through the firewall. It is common to put public-facing
servers, such as web and email servers, on the DMZ. An example of this is shown in Figure
3-2, a simple network layout of a firewall with a DMZ. Traffic from the Internet goes into the
firewall and is routed to systems on the firewall’s protected side or to systems on the DMZ.
Traffic between systems on the DMZ and systems on the protected network goes through the
firewall, and can have firewall policies applied.

Most network architectures are hierarchical, meaning that a single path from an outside
network splits into multiple paths on the inside network—and it is generally most efficient to
place a firewall at the node where the paths split. This has the advantage of positioning the
firewall where there is no question as to what is “outside” and what is “inside.” However,
there may be reasons to have additional firewalls on the inside of the network, such as to
protect one set of computers from another. If a network’s architecture is not hierarchical, the
same firewall policies should be used on all ingresses to the network. In many organizations,
there is only supposed to be one ingress to the network, but other ingresses are set up on an
ad-hoc basis, often in ways that are not allowed by overall policy. In these situations, if a
properly configured firewall is not placed at each entry point, malicious traffic that would
normally be blocked by the main ingress can enter the network by other means.
The diagrams in Figures 3-1 and 3-2 each show a single firewall; however, many
implementations use multiple firewalls. Some vendors sell high-availability (HA) firewalls,
which allow one firewall to take over for another if the first firewall fails or is taken offline
for maintenance. HA firewalls are deployed in pairs at the same spot in the network topology
so that they both have the same external and internal connections. While HA firewalls can
increase reliability, they can also introduce some problems, such as the need to combine logs
between the paired firewalls and possible confusion by administrators when configuring the
firewalls (for example, knowing which firewall is pushing its policy changes to the other
firewall). HA functionality may be provided through a variety of vendor-specific techniques.

17. What are various policies based on ip address ?

ANS :

IP Addresses and Other IP Characteristics

Firewall policies should only permit appropriate source and destination IP addresses to be
used. Specific recommendations for IP addresses include:

 Traffic with invalid source or destination addresses should always be blocked,


regardless of the firewall location. Examples of relatively common invalid IPv4
addresses are 127.0.0.0 to 127.255.255.255 (also known as the localhost addresses)
and 0.0.0.0 (interpreted by some operating systems as a localhost or a broadcast
address). These have no legitimate use on a network. Also, traffic using link-local
addresses (169.254.0.0 to 169.254.255.255) should be blocked.
 Traffic with an invalid source address for incoming traffic or destination address for
outgoing traffic (an invalid “external” address) should be blocked at the network
perimeter. This traffic is often caused by malware, spoofing, denial of service attacks,
or misconfigured equipment. The most common type of invalid external addresses is
an IPv4 address within the ranges in RFC 1918, Address Allocation for Private
Internets, that are reserved for private networks. These ranges are 10.0.0.0 to
10.255.255.255 (10.0.0.0/8 in Classless Inter-Domain Routing [CIDR] notation),
172.16.0.0 to 172.31.255.255 (172.16.0.0/12), and 192.168.0.0 to 192.168.255.255
(192.168.0.0/16).
 Traffic with a private destination address for incoming traffic or source address for
outgoing traffic (an “internal” address) should be blocked at the network perimeter.
Perimeter devices can perform address translation services to permit internal hosts
with private addresses to communicate through the perimeter, but private addresses
should not be passed through the network perimeter.
 Outbound traffic with invalid source addresses should be blocked (this is often called
egress filtering). Systems that have been compromised by attackers can be used to
attack other systems on the Internet; using invalid source addresses makes these kinds
of attacks more difficult to stop. Blocking this type of traffic at an organization’s
firewall helps reduce the effectiveness of these attacks.
 Incoming traffic with a destination address of the firewall itself should be blocked
unless the firewall is offering services for incoming traffic that require direct
connections—for example, if the firewall is acting as an application proxy.

Organizations should also block the following types of traffic at the perimeter:

 Traffic containing IP source routing information, which allows a system to specify


the routes that packets will employ while traveling from source to destination. This
could potentially permit an attacker to construct a packet that bypasses network
security controls. IP source routing is rarely used on modern networks, and valid
applications are even less common on the Internet.
 Traffic from outside the network containing broadcast addresses that is directed to
inside the network. Any system that responds to the directed broadcast will then send
its response to the system specified by the source, rather than to the source system
itself. These packets can be used to create huge “storms” of network traffic for denial
of service attacks. Regular broadcast addresses, as well as addresses used for

multicast IP, may or may not be appropriate for blocking at an organization’s firewall.
Multicast and broadcast networking is seldom used in normal networking
environments, but when it is used both inside and outside of the organization, it
should be allowed through firewalls.

Firewalls at the network perimeter should block all incoming traffic to networks and hosts
that should not be accessible from external networks. These firewalls should also block all
outgoing traffic from the organization’s networks and hosts that should not be permitted to
access external networks. Deciding which addresses should be blocked is often one of the
most time-consuming aspects of developing firewall IP policies. It is also one of the most
error-prone, because the IP address associated with an undesired entity often changes over
time.
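A minimal sketch of how the address checks recommended above might be expressed is shown below, using Python's ipaddress module; the list of blocked ranges is abbreviated and the function covers only the inbound-source case.

    import ipaddress

    # Ranges that should never appear as the source of traffic entering the perimeter:
    # loopback, "this network", link-local, and the RFC 1918 private ranges.
    BLOCKED_SOURCES = [ipaddress.ip_network(n) for n in (
        "127.0.0.0/8", "0.0.0.0/8", "169.254.0.0/16",
        "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def block_inbound_source(src):
        addr = ipaddress.ip_address(src)
        return any(addr in net for net in BLOCKED_SOURCES)

    print(block_inbound_source("192.168.1.7"))    # True  - private source arriving from outside
    print(block_inbound_source("198.51.100.9"))   # False - ordinary public address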

 IPv6

IPv6 is a new version of IP that is increasingly being deployed. Although IPv6’s internal
format and address length differ from those of IPv4, many other features remain the same—
and some of these are relevant to firewalls. For the features that are the same between IPv4
and IPv6, firewalls should work the same. For example, blocking all inbound and outbound
traffic that has not been expressly permitted by the firewall policy should be done regardless
of whether or not the traffic has an IPv4 or IPv6 address.
As of this writing, some firewalls cannot handle IPv6 traffic at all; others are able to handle it
but have limited abilities to filter IPv6 traffic; and still others can filter IPv6 traffic to
approximately the same extent as IPv4 traffic. Every organization, whether or not it allows
IPv6 traffic to enter its internal network, needs a firewall that is capable of filtering this
traffic. These firewalls should have the following capabilities:

 The firewall should be able to use IPv6 addresses in all filtering rules that use IPv4
addresses.
 The administrative interface should allow administrators to clone IPv4 rules to IPv6
addresses to make administration easier.
 The firewall needs to be able to filter ICMPv6, as specified in RFC 4890,
Recommendations for Filtering ICMPv6 Messages in Firewalls.
 The firewall should be able to block IPv6-related protocols such as 6-to-4 and 4-to-6
tunneling, Teredo, and Intra-site Automatic Tunnel Addressing Protocol (ISATAP) if
they are not required.
 Many sites tunnel IPv6 packets in IPv4 packets. This is particularly common for sites
experimenting with IPv6, because it is currently easier to obtain IPv6 transit from a
tunnel broker through a v6-to-v4 tunnel than to get native IPv6 transit from an
Internet service provider (ISP). A number of ways exist to do this, and standards for
tunneling are still evolving. If the firewall is able to inspect the contents of IPv4
packets, it needs to know how to inspect traffic for any tunneling method used by the
organization. A corollary to this is that if an organization is using a firewall to
prohibit IPv6 coming into or going out of its network, that firewall needs to recognize
and block all forms of v6-to-v4 tunneling.

Note that the above list is short and not all the rules are security-specific. Because IPv6
deployment is still in its early stages, there is not yet widespread agreement in the IPv6
operations community about what an IPv6 firewall should do that is different from IPv4
firewalls.

For firewalls that permit IPv6 use, traffic with invalid source or destination IPv6 addresses
should always be blocked—this is similar to blocking traffic with invalid IPv4 addresses.
Since much more effort has been spent on making lists of invalid IPv4 addresses than on IPv6
addresses, finding lists of invalid IPv6 addresses can be difficult. Also, IPv6 allows network
administrators to allocate addresses in their assigned ranges in different ways. This means
that in a particular address range assigned to an organization, there can literally be trillions of
invalid IPv6 addresses and only a few that are valid. By necessity, listing which IPv6
addresses are invalid will have to be less fine-grained than listing invalid IPv4 addresses, and
the firewall rules that use these lists will be less effective than their IPv4 counterparts.

Organizations that do not yet use IPv6 should block all native and tunneled IPv6 traffic at
their firewalls. Note that such blocking limits testing and evaluation of IPv6 and IPv6
tunneling technologies for future deployment. To permit such use, the firewall administrator
can selectively unblock IPv6 or the specific tunneling technologies of interest for use by the
authorized testers.

18. What are various policies based on protocol?

ANS :

Policies Based on Protocols

Firewall policies should only allow necessary IP protocols through. Examples of commonly
used IP protocols, with their IP protocol numbers are ICMP (1), TCP (6), and UDP (17).
Other IP protocols, such as IPsec components Encapsulating Security Payload (ESP) (50) and
Authentication Header (AH) (51) and routing protocols, may also need to pass through
firewalls. These necessary protocols should be restricted whenever possible to the specific
hosts and networks within the organization with a need to use them. By permitting only
necessary protocols, all unnecessary IP protocols are denied by default.
Some IP protocols are rarely passed between an outside network and an organization’s LAN,
and therefore can simply be blocked in both directions at the firewall. For example, IGMP is
a protocol used to control multicast networks, but multicast is rarely used, and when it is, it is
often not used across the Internet. Therefore, blocking all IGMP traffic in both directions is
feasible if multicast is not used.

 TCP and UDP

Application protocols can use TCP, UDP, or both, depending on the design of the protocol.
An application server typically listens on one or more fixed TCP or UDP ports. Some
applications use a single port, but many applications use multiple ports. For example,
although SMTP uses TCP port 25 for sending mail, it uses TCP port 587 for mail submission.
Similarly, FTP uses at least two ports, one of which can be unpredictable, and while most
web servers use only TCP port 80, it is common to have web sites that also use additional
ports such as TCP port 8080. Some applications use both TCP and UDP; for example, DNS
lookups can occur on UDP port 53 or TCP port 53. Application clients typically use any of a
wide range of ports.

As with other aspects of firewall rulesets, deny by default policies should be used for
incoming TCP and UDP traffic. Less stringent policies are generally used for outgoing TCP
and UDP traffic because most organizations permit their users to access a wide range of
external applications located on millions of external hosts.

In addition to allowing and blocking UDP and TCP traffic, many firewalls are also able to
report or block malformed UDP and TCP traffic directed towards the firewall or to hosts
protected by the firewall. This traffic is frequently used to scan for hosts, and may also be
used in certain types of attacks. The firewall can help block such activity—or at least report
when such activity is happening.
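The deny-by-default approach for incoming TCP and UDP traffic can be sketched as follows; the three permitted services are made-up examples, not a recommended policy.

    # Deny by default: only explicitly listed (protocol, destination port) pairs are accepted inbound.
    ALLOWED_INBOUND = {
        ("TCP", 25),    # SMTP to the mail gateway (example)
        ("TCP", 443),   # HTTPS to the public web server (example)
        ("UDP", 53),    # DNS to the external resolver (example)
    }

    def permit_inbound(proto, dst_port):
        return (proto, dst_port) in ALLOWED_INBOUND

    print(permit_inbound("TCP", 443))   # True
    print(permit_inbound("TCP", 23))    # False - telnet is not on the allow list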

 ICMP

Attackers can use various ICMP types and codes to perform reconnaissance or manipulate the
flow of network traffic. However, ICMP is needed for many useful things, such as getting
reasonable performance across the Internet. Some firewall policies block all ICMP traffic, but
this often leads to problems with diagnostics and performance. Other common policies allow
all outgoing ICMP traffic, but limit incoming ICMP to those types and codes needed for Path
Maximum Transmission Unit (PMTU) discovery (ICMP type 3, code 4) and destination reachability.

To prevent malicious activity, firewalls at the network perimeter should deny all incoming
and outgoing ICMP traffic except for those types and codes specifically permitted by the
organization. For ICMP in IPv4, ICMP type 3 messages should not be filtered because they
are used for important network diagnostics. The ping command (ICMP type 8, echo request) is an
important network diagnostic, but incoming pings are often blocked by firewall policies to
prevent attackers from learning more about the internal topology of the organization’s
network. For ICMP in IPv6, many types of messages must be allowed in specific
circumstances to enable various IPv6 features. See RFC 4890, Recommendations for
Filtering ICMPv6 Messages in Firewalls, for detailed information on selecting which
ICMPv6 types to allow or disallow for a particular firewall type.

ICMP is often used by low-level networking protocols to increase the speed and reliability of
networking. Therefore, ICMP within an organization’s network generally should not be
blocked by firewalls that are not at the perimeter of the network, unless security needs
outweigh network operational needs. Similarly, if an organization has more than one
network, ICMP that comes from or goes to other networks within the organization should not
be blocked.

 IPsec Protocols

An organization needs to have a policy on whether or not to allow IPsec VPNs that start or end
inside its network perimeter. The ESP and AH protocols are used for IPsec VPNs, and a
firewall that blocks these protocols will not allow IPsec VPNs to pass. While blocking ESP
can hinder the use of encryption to protect sensitive data, it can also force users who would
normally encrypt their data with ESP to allow it to be inspected—for example, by a stateful
inspection firewall or an application-proxy gateway.

Organizations that allow IPsec VPNs should block ESP and AH except to and from specific
addresses on the internal network—those addresses belong to IPsec gateways that are allowed
to be VPN endpoints. Enforcing this policy will require people inside the organization to
obtain the appropriate policy approval to open ESP and/or AH access to their IPsec routers.
This will also reduce the amount of encrypted traffic coming from inside the network that
cannot be examined by network security controls.

19. What are the various policies based on applications, user identity & Network
Activity.
Policies Based on Applications
Most early firewall work involved simply blocking unwanted or suspicious traffic at the
network boundary. Inbound application firewalls or application proxies take a different
approach—they let traffic destined for a particular server into the network, but capture that
traffic in a server that processes it like a port-based firewall. The application-based approach
provides an additional layer of security for incoming traffic by validating some of the traffic
before it reaches the desired server. The theory is that the inbound application firewall’s or
proxy’s additional security layer can protect the server better than the server can protect
itself—and can also remove malicious traffic before it reaches the server to help reduce
server load. In some cases, an application firewall or proxy can remove traffic that the server
might not be able to remove on its own because it has greater filtering capabilities. An
application firewall or proxy also prevents the server from having direct access to the outside
network.
If possible, inbound application firewalls and proxies should be used in front of any server
that does not have sufficient security features to protect it from application-specific attacks.
The main considerations when deciding whether or not to use an inbound application
firewall or proxy are:

- Is a suitable application firewall available? Or, if appropriate, is a suitable application proxy available?
- Is the server already sufficiently protected by existing firewalls?
- Can the main server remove malicious content as effectively as the application firewall or proxy?
- Is the latency caused by an application proxy acceptable for the application?
- How easy is it to update the filtering rules on the main server and the application firewall or proxy to handle newly developed threats?
Application proxies can introduce problems if they are not highly capable. Unless an
application proxy is significantly more robust than the server and easy to keep updated, it is
usually best to stay with the application server alone. Application firewalls can also introduce
problems if they are not fast enough to handle the traffic destined for the server. However, it
is also important to consider the server’s resources—if the server does not have sufficient
resources to withstand attacks, the application firewall or proxy could be used as a shield.
When an inbound application firewall or proxy is behind a perimeter firewall or in the
firewall’s DMZ, the perimeter firewall should be blocking based on IP addresses, as
described earlier in this section, to reduce the load on the application firewall or proxy. Doing

this puts more of the address-specific policy in a single place—the main firewall—and
reduces the amount of traffic seen by the application firewall or proxy, freeing more power to
filter content. Of course, if the perimeter firewall is also the application firewall and an
internal application proxy is not used, no such rules are needed. Outbound application proxies
are useful for detecting systems that are making inappropriate or dangerous connections from
inside the protected network. By far the most common type of outbound proxy is for HTTP.
Outbound HTTP proxies allow an organization to filter dangerous content before it reaches
the requesting PC. They also help an organization better understand and log web traffic from
its users, and to detect activity that is being tunneled over HTTP. When an HTTP proxy
filters content, it can alert the web user that the site being visited sent the filtered content. The
most prominent non-security benefit of HTTP
proxies is caching web pages for increased speed and decreased bandwidth use. Most
organizations should employ HTTP proxies.

Policies Based on User Identity


Traditional packet filtering does not see the identities of the users who are communicating in
the traffic traversing the firewall, so firewall technologies without more advanced capabilities
cannot have policies that allow or deny access based on those identities. However, many
other firewall technologies can see these identities and therefore enact policies based on user
authentication. One of the most common ways to enforce user identity policy at a firewall is
by using a VPN. Both IPsec VPNs and SSL VPNs have many ways to authenticate users,
such as with secrets that are provisioned on a user-by-user basis, with multi-factor
authentication (e.g., time-based cryptographic tokens protected with PINs), or with digital
certificates controlled by each user. Network access control (NAC) has also become a popular method for firewalls to
allow or deny users access to particular network resources. In addition, application firewalls
and proxies can allow or deny access to users based on the user authentication within the
applications themselves.
Firewalls that enforce policies based on user identity should be able to reflect these policies
in their logs. That is, it is probably not useful to only log the IP address from which a
particular user connected if the user was allowed in by a user-specific policy; it is also
important to log the user’s identity as well.

Policies Based on Network Activity

Many firewalls allow the administrator to block established connections after a certain period
of inactivity. For example, if a user on the outside of a firewall has logged into a file server
but has not made any requests during the past 15 minutes, the policy might be to block any
further traffic on that connection. Time-based policies are useful in thwarting attacks caused
by a logged-in user walking away from a computer and someone else sitting down and using
the established connections (and therefore the logged-in user’s credentials). However, these
policies can also be bothersome for users who make connections but do not use them
frequently. For instance, a user might connect to a file server to read a file and then spend a
long time editing the file. If the user does not save the file back to the file server before the
firewall-mandated timeout, the timeout could cause the changes to the file to be lost. Some
organizations have mandates about when firewalls should block connections that are
considered to be inactive, when applications should disconnect sessions if there is no activity,
etc. A firewall used by such an organization should be able to set policies that match the
mandates while being specific enough to match the security objective of the mandates.

A different type of firewall policy based on network activity is one that throttles or redirects
traffic if the rate of traffic matching the policy rule is too high. For example, a firewall might
redirect the connections made to a particular inside address to a slower route if the rate of
connections is above a certain threshold. Another policy might be to drop incoming ICMP
packets if the rate is too high. Crafting such policies is quite difficult because throttling and
redirecting can cause desired traffic to be lost or have difficult-to-diagnose transient failures.
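Both ideas, an inactivity timeout on established connections and a simple rate threshold, can be sketched as follows; the 15-minute timeout and the 100-packets-per-second limit are illustrative values only, chosen to match the examples above.

    import time
    from collections import deque

    IDLE_TIMEOUT = 15 * 60       # seconds of inactivity before a connection is blocked (illustrative)
    RATE_LIMIT = 100             # packets per second matching the rule (illustrative)

    last_activity = {}           # connection id -> time of the last packet seen
    recent_packets = deque()     # timestamps of recent packets matching the rate rule

    def connection_still_allowed(conn_id, now=None):
        now = now if now is not None else time.time()
        last = last_activity.get(conn_id)
        if last is not None and now - last > IDLE_TIMEOUT:
            return False                     # block further traffic on the idle connection
        last_activity[conn_id] = now
        return True

    def under_rate_limit(now=None):
        now = now if now is not None else time.time()
        recent_packets.append(now)
        while recent_packets and now - recent_packets[0] > 1.0:   # keep a one-second window
            recent_packets.popleft()
        return len(recent_packets) <= RATE_LIMIT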

20. Explain with diagram IT security requirements.


ANS:

21. What should be considered in the planning stages of a Web server?


ANS-Security should be considered from the initial planning stage at the beginning of the
systems development life cycle to maximize security and minimize costs. It is much more
difficult and expensive to address security after deployment and implementation.
Organizations are more likely to make decisions about configuring hosts appropriately and
consistently if they begin by developing and using a detailed, well-designed deployment plan.
Developing such a plan enables organizations to make informed trade-off decisions among
usability, performance, and risk. A deployment plan allows organizations to maintain
secure configurations and aids in identifying security vulnerabilities, which often manifest
themselves as deviations from the plan.

In the planning stages of a Web server, the following items should be considered [Alle00]:

- Identify the purpose(s) of the Web server.

- What information categories will be stored on the Web server?
- What information categories will be processed on or transmitted through the Web server?
- What are the security requirements for this information?
- Will any information be retrieved from or stored on another host (e.g., back-end database, mail server)?
- What are the security requirements for any other hosts involved (e.g., back-end database, directory server, mail server, proxy server)?
- What other service(s) will be provided by the Web server (in general, dedicating the host to being only a Web server is the most secure option)?
- What are the security requirements for these additional services?
- What are the requirements for continuity of services provided by Web servers, such as those specified in continuity of operations plans and disaster recovery plans?
- Where on the network will the Web server be located (see Section 8)?
- Identify the network services that will be provided on the Web server, such as those supplied through the following protocols:
  - HTTP
  - HTTPS
  - Internet Caching Protocol (ICP)
  - Hyper Text Caching Protocol (HTCP)
  - Web Cache Coordination Protocol (WCCP)
  - SOCKS
  - Database services (e.g., Open Database Connectivity [ODBC]).
- Identify any network service software, both client and server, to be installed on the Web server and any other support servers.
- Identify the users or categories of users of the Web server and any support hosts.
- Determine the privileges that each category of user will have on the Web server and support hosts.
- Determine how the Web server will be managed (e.g., locally, remotely from the internal network, remotely from external networks).
- Decide if and how users will be authenticated and how authentication data will be protected.
- Determine how appropriate access to information resources will be enforced.
- Determine which Web server applications meet the organization’s requirements. Consider servers that may offer greater security, albeit with less functionality in some instances. Some issues to consider include:
  - Cost
  - Compatibility with existing infrastructure
  - Knowledge of existing employees
  - Existing manufacturer relationship
  - Past vulnerability history
  - Functionality.
- Work closely with manufacturer(s) in the planning stage.
The choice of Web server application may determine the choice of OS. However, to the
degree possible, Web server administrators should choose an OS that provides the following
[Alle00]:
- Ability to restrict administrative or root level activities to authorized users only
- Ability to control access to data on the server

- Ability to disable unnecessary network services that may be built into the OS or server software
- Ability to control access to various forms of executable programs, such as Common Gateway Interface (CGI) scripts and server plug-ins in the case of Web servers
- Ability to log appropriate server activities to detect intrusions and attempted intrusions
- Provision of a host-based firewall capability.
In addition, organizations should consider the availability of trained, experienced staff to
administer the server and server products. Many organizations have learned the difficult
lesson that a capable and experienced administrator for one type of operating environment is
not automatically as effective for another.
Although many Web servers do not host sensitive information, most Web servers should be
considered sensitive because of the damage to the organization’s reputation that could occur
if the servers’ integrity is compromised. In such cases, it is critical that the Web servers are
located in areas that provide secure physical environments. When planning the location of a
Web server, the following issues should be considered:

- Are the appropriate physical security protection mechanisms in place? Examples include:
  - Locks
  - Card reader access
  - Security guards
  - Physical IDSs (e.g., motion sensors, cameras).
- Are there appropriate environmental controls so that the necessary humidity and temperature are maintained?
- Is there a backup power source? For how long will it provide power?
- If high availability is required, are there redundant Internet connections from at least two different Internet service providers (ISP)?
- If the location is subject to known natural disasters, is it hardened against those disasters and/or is there a contingency site outside the potential disaster area?

22. What are the steps for securely installing a Web server?
ANS-In many respects, the secure installation and configuration of the Web server
application mirrors the OS process discussed in Section 4. The overarching principle, as
before, is to install only the services required for the Web server and to eliminate any known
vulnerabilities through patches or upgrades. Any unnecessary applications, services, or
scripts that are installed should be removed immediately once the installation process is
complete. During the installation of the Web server, the following steps should be performed:

- Install the Web server software either on a dedicated host or on a dedicated guest OS if virtualization is being employed.
- Apply any patches or upgrades to correct for known vulnerabilities.
- Create a dedicated physical disk or logical partition (separate from OS and Web server application) for Web content.
- Remove or disable all services installed by the Web server application but not required (e.g., gopher, FTP, remote administration).

- Remove or disable all unneeded default login accounts created by the Web server installation.
- Remove all manufacturers’ documentation from the server.
- Remove all example or test files from the server, including scripts and executable code.
- Apply appropriate security template or hardening script to server.
- Reconfigure HTTP service banner (and others as required) not to report Web server and OS type and version (this may not be possible with all Web servers).
Organizations should consider installing the Web server with non-standard directory names,
directory locations, and filenames. Many Web server attack tools and worms targeting Web
servers only look for files and directories in their default locations. While this will not stop
determined attackers, it will force them to work harder to compromise the server, and it also
increases the likelihood of attack detection because of the failed attempts to access the default
filenames and directories and the additional time needed to perform an attack.

23. State and explain any 4 Wireless Standards.


Ans The need for interoperability among different brands of WLAN products led to several
organizations developing wireless networking standards.

The following are four standards from the IEEE 802.11 family of wireless standards:

Table 23.1 Summary of IEEE 802.11 Standards
- IEEE 802.11a: operates in the 5 GHz band with a maximum data rate of 54 Mbps (OFDM).
- IEEE 802.11b: operates in the 2.4 GHz band with a maximum data rate of 11 Mbps (DSSS).
- IEEE 802.11g: operates in the 2.4 GHz band with a maximum data rate of 54 Mbps (OFDM); backward compatible with 802.11b.
- IEEE 802.11n: operates in the 2.4 GHz and 5 GHz bands and uses MIMO to achieve data rates well above 100 Mbps.

24 State IEEE 802.11 Network Components and explain its Architectural Models.
Ans IEEE 802.11 has two fundamental architectural components, as follows:

 Station (STA). A STA is a wireless endpoint device. Typical examples of STAs are laptop
computers, personal digital assistants (PDA), mobile phones, and other consumer electronic
devices with IEEE 802.11 capabilities.

 Access Point (AP). An AP logically connects STAs with a distribution system (DS), which

is typically an organization’s wired infrastructure. APs can also logically connect wireless STAs
with each other without accessing a distribution system.

The IEEE 802.11 standard also defines the following two WLAN design structures or
configurations :-

 Ad Hoc Mode. The ad hoc mode does not use APs. Ad hoc mode is sometimes referred to
as infrastructure-less because only peer-to-peer STAs are involved in the communications.

Figure 24.1. IEEE 802.11 Ad Hoc Mode


The ad hoc mode (or topology) is depicted conceptually in Figure 24.1. This mode of operation,
also known as peer-to-peer mode, is possible when two or more STAs are able to communicate
directly to one another. Figure 24.1 shows three devices communicating with each other in a peer-
to-peer fashion without any infrastructure. A set of STAs configured in this ad hoc manner is
known as an independent basic service set (IBSS).

Today, a STA is most often thought of as a simple laptop with an inexpensive network interface
card (NIC) that provides wireless connectivity; however, many other types of devices could also
be STAs. In Figure 24.1, the STAs in the IBSS are a mobile phone, a laptop, and a PDA. IEEE
802.11 and its variants continue to increase in popularity; scanners, printers, digital cameras and
other portable devices can also be STAs. The circular shape in Figure 2-1 depicts the IBSS. It is
helpful to consider this as the radio frequency coverage area within which the stations can remain
in communication. A fundamental property of IBSS is that it defines no routing or forwarding, so,
based on the bare IEEE 802.11i spec, all the devices must be within radio range of one another.
An ad hoc network can be created for many reasons, such as allowing the sharing of files or the
rapid exchange of e-mail. However, an ad hoc WLAN cannot communicate with external
networks. A further complication is that an ad hoc network can interfere with the operation of an
AP-based infrastructure mode network (see next section) that exists within the same wireless
space.

 Infrastructure Mode.

In infrastructure mode, an AP connects wireless STAs to each other or to a distribution system, typically a wired network. Infrastructure mode is the most commonly used mode for WLANs.

Figure 24.2 IEEE 802.11 Infrastructure Mode

In infrastructure mode, an IEEE 802.11 WLAN comprises one or more Basic Service Sets (BSS),
the basic building blocks of a WLAN. A BSS includes an AP and one or more STAs. The AP in a
BSS connects the STAs to the DS. The DS is the means by which STAs can communicate with the
organization’s wired LANs and external networks such as the Internet. The IEEE 802.11
infrastructure mode is depicted in Figure 24.2.

The DS and use of multiple BSSs and their associated APs allow for the creation of wireless
networks of arbitrary size and complexity. In the IEEE 802.11 specification, this type of multi-
BSS network is referred to as an extended service set (ESS). Figure 24.3 conceptually depicts a
network with both wired and wireless capabilities. It shows three APs with their corresponding
BSSs, which comprise an ESS; the ESS is attached to the wired infrastructure. In turn, the wired
infrastructure is connected through a perimeter firewall to the Internet. This architecture could
permit various STAs, such as laptops and PDAs, to provide Internet connectivity for their users.


Figure 24.3 Extended Service Set in an Enterprise

25. What are the various security methods implemented in IEEE 802.11?
Ans The security methods implemented in IEEE 802.11 are explained below :-

1. Access Control and Authentication


The original IEEE 802.11 specification defines two means to validate the identities of wireless
devices attempting to gain access to a WLAN, open system authentication and shared key
authentication; neither of these alternatives is secure. IEEE 802.11 implementations are required to
support open system authentication; shared key authentication support is optional. Open system
authentication is effectively a null authentication mechanism that does not provide true identity
verification. In practice, a STA is authenticated to an AP simply by providing the following
information:

• Service Set Identifier (SSID) for the AP. The SSID is a name assigned to a WLAN; it
allows STAs to distinguish one WLAN from another. SSIDs are broadcast in plaintext in wireless
communications, so an eavesdropper can easily learn the SSID for a WLAN. However, the SSID
is not an access control feature, and was never intended to be used for that purpose.

• Media Access Control (MAC) address for the STA. A MAC address is a (hopefully)
unique 48-bit value that is permanently assigned to a particular wireless network interface. Many
implementations of IEEE 802.11 allow administrators to specify a list of authorized MAC
addresses; the AP will permit only devices with those MAC addresses to use the WLAN. This is
known as MAC address filtering. However, since the MAC address is not encrypted, it is simple to
intercept traffic and identify MAC addresses that are allowed past the MAC filter. Unfortunately,
almost all WLAN adapters allow applications to set the MAC address, so it is relatively trivial to
spoof a MAC address, meaning attackers can gain unauthorized access easily.


Figure 25.1. Shared Key Authentication Message Flow

2. Encryption

The WEP protocol, part of the IEEE 802.11 standard, uses the RC4 stream cipher algorithm to
encrypt wireless communications, which protects their contents from disclosure to eavesdroppers.
The standard for WEP specifies support for a 40-bit WEP key only; however, many vendors offer
non-standard extensions to WEP that support key lengths of up to 128 or even 256 bits. WEP also
uses a 24-bit value known as an initialization vector (IV) as a seed value for initializing the
cryptographic key stream. For example, a 104-bit WEP key with a 24-bit IV becomes a 128-bit
RC4 key. Ideally, larger key sizes translate to stronger protection, but the cryptographic technique
used by WEP has known flaws that are not mitigated by longer keys.
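As a rough, hedged illustration of the construction just described (not any vendor's actual implementation), the sketch below forms the per-frame RC4 key by concatenating a 24-bit IV with the shared secret, computes the CRC-32 ICV, and XORs the keystream over the payload plus ICV. The IV, key, and frame contents are invented example values.

```python
# Illustrative sketch of WEP-style encryption: RC4(IV || secret) XORed over
# (payload || CRC-32 ICV). All values below are examples only.
import zlib

def rc4_keystream(key: bytes, length: int) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                          # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for _ in range(length):                       # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, secret: bytes, payload: bytes) -> bytes:
    icv = zlib.crc32(payload).to_bytes(4, "little")        # CRC-32 integrity check value
    keystream = rc4_keystream(iv + secret, len(payload) + len(icv))
    return bytes(a ^ b for a, b in zip(payload + icv, keystream))

iv = b"\x01\x02\x03"          # 24-bit IV, transmitted in cleartext alongside the frame
secret = b"\x0a" * 13         # 104-bit shared key (example value), giving a 128-bit RC4 key
ciphertext = wep_encrypt(iv, secret, b"GET /index.html")
```

Because the keystream depends only on IV || key, any frame that reuses an IV with the same key reuses the same keystream, which is the root of the attacks discussed next.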

Most attacks against WEP encryption have been based on IV-related vulnerabilities. For example,
the IV portion of the RC4 key is sent in cleartext, which allows an eavesdropper that monitors and
analyzes a relatively small amount of network traffic to recover the key by taking advantage of the
IV value knowledge, the relatively small 24-bit IV key space, and a weakness in the way WEP
implements the RC4 algorithm. Also, WEP does not specify precisely how the IVs should be set or
changed; some products use a static, well-known IV value or reset to zero. If two messages have
the same IV, and the plaintext of either message is known, it is relatively trivial for an attacker to
determine the plaintext of the second message. In particular, because many messages contain
common protocol headers or other easily guessable contents, it is often possible to identify the
original plaintext contents with minimal effort.

Even traffic from products that use sequentially increasing IV values is still susceptible to attack.
There are fewer than 17 million possible IV values; on a busy WLAN, the entire IV space may be
exhausted in a few hours. When the IV is chosen randomly, which represents the best possible
generic IV selection algorithm, by the birthday paradox two IVs already have a 50% chance of
colliding after fewer than 5,000 frames.
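The 50 percent figure quoted above can be checked with a few lines of arithmetic: the loop below multiplies out the probability that n randomly chosen 24-bit IVs are all distinct and stops when that probability first drops below one half.

```python
# Birthday-paradox check for a 24-bit IV space.
N = 2 ** 24                         # number of possible IV values (16,777,216)
p_all_distinct = 1.0
n = 0
while p_all_distinct > 0.5:
    p_all_distinct *= (N - n) / N   # probability the (n+1)-th IV avoids the first n
    n += 1
print(n)                            # roughly 4,800 frames give a 50% chance of an IV collision
```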

3. Data Integrity

WEP performs data integrity checking for messages transmitted between STAs and APs. WEP is


designed to reject any messages that have been changed in transit, such as by a man-in-the-middle
attack.

WEP data integrity is based on a simple encrypted checksum—a 32-bit cyclic redundancy check
(CRC- 32) computed on each payload prior to transmission. The payload and checksum are
encrypted using the RC4 key stream and transmitted. The receiver decrypts them, recomputes the
checksum on the received payload, and compares it with the transmitted checksum. If the
checksums are not the same, the transmitted data frame has been altered in transit, and the frame is
discarded.

Unfortunately, CRC-32 is subject to bit flipping attacks, which means that an attacker knows
which CRC- 32 bits will change when message bits are altered. WEP attempts to counter this
problem by encrypting the CRC-32 to produce an integrity check value (ICV). The creators of
WEP believed that an enciphered CRC-32 would be less subject to tampering. However, they did
not realize that a property of stream ciphers such as WEP’s RC4 is that bit flipping survives the
encryption process—the same bits flip whether or not encryption is used. Therefore, the WEP ICV
offers no additional protection against bit flipping.

Integrity should be provided by a cryptographic checksum rather than a CRC. Also known as
keyed hashes or message authentication codes (MAC), cryptographic checksums prevent bit
flipping attacks because they are designed so that any change to the original message results in
significant and unpredictable changes to the resulting checksum. CRCs are generally more
efficient computationally than cryptographic checksums, but are only designed to protect against
random bit errors, not intentional forgeries, so they do not provide the same level of integrity
protection.
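The difference is easy to demonstrate with standard-library calls. In the sketch below (the key and the two equal-length messages are arbitrary examples), flipping the same bit in either message changes its CRC-32 by an identical, message-independent amount, which is exactly the property a forger exploits, while the corresponding HMAC-SHA256 change is unpredictable without the key.

```python
# CRC-32 changes predictably under bit flipping; a keyed cryptographic checksum does not.
import hashlib, hmac, zlib

def flip_first_bit(data: bytes) -> bytes:
    return bytes([data[0] ^ 0x01]) + data[1:]

m1 = b"Pay Alice $100.00"      # two different messages of equal length
m2 = b"Meet Bob at noon!"

# The XOR difference in the CRC caused by the flip is the same for both messages.
d1 = zlib.crc32(m1) ^ zlib.crc32(flip_first_bit(m1))
d2 = zlib.crc32(m2) ^ zlib.crc32(flip_first_bit(m2))
print(d1 == d2)                # True: predictable, so an attacker can fix up the ICV

# An HMAC changes unpredictably without knowledge of the key.
key = b"example-shared-secret"
t1 = hmac.new(key, m1, hashlib.sha256).hexdigest()
t2 = hmac.new(key, flip_first_bit(m1), hashlib.sha256).hexdigest()
print(t1 == t2)                # False, and the new value cannot be computed by the attacker
```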

4. Replay Protection

WEP provides no protection against replay attacks because its cryptographic implementation does
not include features such as an incrementing counter, timestamp, or other temporal data that would
make replayed traffic easily detectable.

5. Availability

Individuals who do not have physical access to the WLAN infrastructure can cause a denial of
service for the WLAN. One threat is known as jamming, which involves a device that emits
electromagnetic energy on the WLAN’s frequencies. The energy makes the frequencies unusable
by the WLAN, causing a denial of service. Jamming can be performed intentionally by an attacker
or unintentionally by a non-WLAN device transmitting on the same frequency. Another threat
against availability is flooding, which involves an attacker sending large numbers of messages to
an AP at such a high rate that the AP cannot process them, or other STAs cannot access the
channel, causing a partial or total denial of service. These threats are difficult to counter in any
radio-based communications; thus, the IEEE 802.11 standard does not provide any defense against
jamming or flooding. Also, as described in Section 3.2.1, attackers can establish rogue APs; if
STAs mistakenly attach to a rogue AP instead of a legitimate one, this could make the legitimate
WLAN effectively unavailable to users. Although 802.11i protects data frames, it does not offer
protection to control or management frames. An attacker can exploit the fact that management
frames are not authenticated to deauthenticate a client or to disassociate a client from the network.

6. IEEE 802.11i Security


The IEEE 802.11i standard is the sixth amendment to the baseline IEEE 802.11 standards. It
includes many security enhancements that leverage mature and proven security technologies. For
example, IEEE 802.11i references the Extensible Authentication Protocol (EAP) standard, which
is a means for providing mutual authentication between STAs and the WLAN infrastructure, as
well as performing automatic cryptographic key distribution. Section 6 describes EAP in depth;
EAP is a standard developed by the Internet Engineering Task Force (IETF). IEEE 802.11i
employs accepted cryptographic practices, such as generating cryptographic checksums through
hash message authentication codes (HMAC).

26. Write short note on IEEE 802.11i security.


Ans The IEEE 802.11i standard is the sixth amendment to the baseline IEEE 802.11 standards. It
includes many security enhancements that leverage mature and proven security technologies. For
example, IEEE 802.11i references the Extensible Authentication Protocol (EAP) standard, which
is a means for providing mutual authentication between STAs and the WLAN infrastructure, as
well as performing automatic cryptographic key distribution. EAP is a standard developed by the
Internet Engineering Task Force (IETF). IEEE 802.11i employs accepted cryptographic practices,
such as generating cryptographic checksums through hash message authentication codes (HMAC).

The IEEE 802.11i specification introduces the concept of a Robust Security Network (RSN). An
RSN is defined as a wireless security network that only allows the creation of Robust Security
Network Associations (RSNA). An RSNA is a logical connection between communicating IEEE
802.11 entities established through the IEEE 802.11i key management scheme, called the 4-Way
Handshake, which is a protocol that validates that both entities share a pairwise master key
(PMK), synchronizes the installation of temporal keys, and confirms the selection and
configuration of data confidentiality and integrity protocols. The entities obtain the PMK in one of
two ways—either the PMK is already configured on each device, in which case it is called a pre-
shared key (PSK), or it is distributed as a side effect of a successful EAP authentication instance,
which is a component of IEEE 802.1X port-based access control. The PMK serves as the basis for
the IEEE 802.11i data confidentiality and integrity protocols that provide enhanced security over
the flawed WEP. Most large enterprise deployments of RSN technology will use IEEE 802.1X and
EAP rather than PSKs because of the difficulty of managing PSKs on numerous devices. WLAN
connections employing ad hoc mode, which typically involve only a few STAs, are more likely to
use PSKs.
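For the pre-shared key case, the passphrase-to-PSK mapping commonly used with IEEE 802.11i (WPA2-Personal) derives the 256-bit PMK from the passphrase and the SSID with PBKDF2-HMAC-SHA1 over 4,096 iterations. The sketch below shows that derivation; the passphrase and SSID are example values only, not taken from any real deployment.

```python
# Passphrase-to-PSK derivation as commonly used for WPA2-Personal (example values only).
import hashlib

def derive_psk(passphrase: str, ssid: str) -> bytes:
    """Map an ASCII passphrase and SSID to a 256-bit pre-shared key / PMK."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode("ascii"),
                               ssid.encode("ascii"), 4096, dklen=32)

psk = derive_psk("correct horse battery staple", "ExampleWLAN")   # assumed values
print(psk.hex())   # 64 hex digits; this value acts as the PMK in the 4-Way Handshake
```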

This section provides a brief introduction to the IEEE 802.1X standard, which is specified by the
IEEE 802.11i amendment. Two components defined in IEEE 802.1X are relied upon for the
establishment of RSNAs: authentication servers and IEEE 802.1X port-based access control. The
IEEE 802.1X standard provides a framework for access control that leverages EAP to provide
centralized, mutual authentication. IEEE 802.1X was originally developed for wired LANs to
prevent unauthorized use in open environments such as university campuses, but it has been used
by IEEE 802.11i for WLANs as well. The IEEE 802.1X framework provides the means to block
user access until authentication is successful, thereby controlling access to WLAN resources.

The IEEE 802.1X standard defines several terms related to authentication. The authenticator is an
entity at one end of a point-to-point LAN segment that facilitates authentication of the entity
attached to the other end of that link. For example, the AP in Figure 25.2 serves as an
authenticator. The supplicant is the entity being authenticated. The STA may be viewed as a
supplicant. The authentication server (AS) is an entity that provides an authentication service to
an authenticator. This service determines from the credentials provided by the supplicant whether


the supplicant is authorized to access the services provided by the authenticator. The AS provides
these authentication services and delivers session keys to each AP in the wireless network; each
STA either receives session keys from the AS or derives the session keys itself. The AS either
authenticates the STA and AP itself, or provides information to the STA and AP so that they may
authenticate each other. The AS typically lies inside the DS, as depicted in Figure 25.2. When
employing a solution based on the IEEE 802.11i standard, the AS most often used for
authentication is an Authentication, Authorization, and Accounting (AAA) server that uses the
Remote Authentication Dial In User Service (RADIUS) or Diameter protocol to transport
authentication related traffic. This is discussed further in Section 4. The supplicant/authenticator
model is intrinsically a unilateral rather than mutual authentication model: the supplicant
authenticates to the network. IEEE 802.11i combats this bias by requiring that the EAP method
used provides mutual authentication.

Figure 25.2. Conceptual View of Authentication Server in a Network

Figure 25.3 provides a simple conceptual view of IEEE 802.1X that depicts all the fundamental
IEEE 802.11i components: STAs, an AP, and an AS. In this example, the STAs are the
supplicants, and the AP is the authenticator. Until successful authentication occurs between a STA
and the AS, the STA’s communications are blocked by the AP. Because the AP sits at the
boundary between the wireless and wired networks, this prevents the unauthenticated STA from
reaching the wired network. The technique used to block the communications is known as port-
based access control. IEEE 802.1X can control data flows by distinguishing between EAP and
non-EAP frames, then passing EAP frames through an uncontrolled port and non-EAP frames
through a controlled port, which can block access. IEEE 802.11i extends this to block the AP’s
communication until keys are in place as well. Thus, the IEEE 802.11i extensions prevent a rogue
access point from exchanging anything but EAP traffic with the STA’s host.
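The controlled/uncontrolled port behaviour can be pictured with a deliberately simplified model. The class below is a conceptual sketch, not the IEEE 802.1X state machine: EAPOL frames, identified here by the 0x888E EtherType, always pass toward the authentication server, while all other traffic from a STA is dropped until that STA has authenticated.

```python
# Conceptual model of IEEE 802.1X port-based access control at an AP.
from dataclasses import dataclass

EAPOL_ETHERTYPE = 0x888E          # EtherType used for EAP over LAN (EAPOL) frames

@dataclass
class Frame:
    src_mac: str
    ethertype: int
    payload: bytes

class AccessPointPort:
    def __init__(self) -> None:
        self.authenticated: set = set()           # STAs that completed EAP authentication

    def handle(self, frame: Frame) -> str:
        if frame.ethertype == EAPOL_ETHERTYPE:
            return "forward to authentication server (uncontrolled port)"
        if frame.src_mac in self.authenticated:
            return "forward to distribution system (controlled port open)"
        return "drop (controlled port blocked until authentication succeeds)"

port = AccessPointPort()
print(port.handle(Frame("aa:bb:cc:dd:ee:ff", 0x0800, b"HTTP request")))    # dropped
print(port.handle(Frame("aa:bb:cc:dd:ee:ff", EAPOL_ETHERTYPE, b"EAP")))   # passed to AS
port.authenticated.add("aa:bb:cc:dd:ee:ff")    # after successful EAP and 4-Way Handshake
print(port.handle(Frame("aa:bb:cc:dd:ee:ff", 0x0800, b"HTTP request")))   # now forwarded
```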


Figure 25.3. IEEE 802.1X Port-Based Access Control

27. Write short note on the following:


1. Web Server Backup Procedures
One of the most important functions of a Web server administrator is to maintain the integrity
of the data on the Web server. This is important because Web servers are often some of the
most exposed and vital servers on an organization’s network. There are two principal
components to backing up data on a Web server: regular backup of the data and OS on the
Web server, and maintenance of a separate protected authoritative copy of the organization’s
Web content.

1. Web Server Backup Policies and Strategies


The Web server administrator needs to perform backups of the Web server on a regular basis
for several reasons. A Web server could fail as a result of a malicious or unintentional act or a
hardware or software failure. In addition, Federal agencies and many other organizations are
governed by regulations on the backup and archiving of Web server data. Web server data
should also be backed up regularly for legal and financial reasons.
All organizations need to create a Web server data backup policy. Three main factors
influence the contents of this policy:



• Legal requirements: applicable laws and regulations (Federal, state, and international) and litigation requirements
• Mission requirements: contractual obligations, accepted practices, and the criticality of the data to the organization
• Organizational guidelines and policies.

Although each organization's Web server backup policy will be different to reflect its
particular environment, it should address the following issues:

• The required frequency of backups
• The handling of data retrieval requests, legal investigations, and other such requests
• The roles and responsibilities of the data backup team, if one exists.

Three primary types of backups exist: full, incremental, and differential.


Full backups include the OS, applications, and data stored on the Web server (i.e., an image of
every piece of data stored on the Web server hard drives). The advantage of a full backup is
that it is easy to restore the entire Web server to the state (e.g., configuration, patch level,
data) it was in when the backup was performed. The disadvantage of full backups is that they
take considerable time and resources to perform. Incremental backups reduce the impact of
backups by backing up only data that has changed since the previous backup (either full
or incremental).

Differential backups reduce the number of backup sets that must be accessed to restore a
configuration by backing up all changed data since the last full backup. However, each
differential backup grows in size as time passes since the last full backup, requiring more
processing time and storage than would an incremental backup. Generally, full backups are
performed less frequently (weekly to monthly or when a significant change occurs), and
incremental or differential backups are performed more frequently (daily to weekly).
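A minimal sketch of the full versus incremental distinction is shown below; the paths and schedule are placeholder assumptions, and a real backup would also capture the OS and server configuration, not just the web content.

```python
# Full backup archives everything; incremental archives only files changed since 'since'.
import tarfile, time
from pathlib import Path

WEB_ROOT = Path("/var/www")          # assumed location of the data to protect
BACKUP_DIR = Path("/backups")        # assumed backup destination (must already exist)

def backup(kind: str, since: float = 0.0) -> Path:
    archive = BACKUP_DIR / f"{kind}-{int(time.time())}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for path in WEB_ROOT.rglob("*"):
            if path.is_file() and (kind == "full" or path.stat().st_mtime > since):
                tar.add(path, arcname=str(path.relative_to(WEB_ROOT)))
    return archive

# Example schedule: a weekly full backup, then daily incrementals relative to it.
last_full_time = time.time()
backup("full")
backup("incremental", since=last_full_time)
```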
The frequency of backups will be determined by several factors:


• Static Web content (less frequent backups)
• Dynamic Web content (more frequent backups)
• E-commerce/e-government (very frequent backups)








• Time required for data reconstruction without a data backup
• Other data protection mechanisms in place (e.g., Redundant Array of Inexpensive Disks [RAID]).

2. Maintain a Test Web Server


Most organizations will probably wish to maintain a test or development Web server. Ideally,
this server should have hardware and software identical to the production or live Web server
and be located on an internal network segment (intranet) where it can be fully protected by
the organization’s perimeter network defenses. Although the cost of maintaining an
additional Web server is not inconsequential, having a test Web server offers numerous
advantages:

• It provides a platform to test new patches and service packs prior to their application on the
production Web server.
• It provides a platform on which to develop and test new content and applications.
• It provides a platform to test configuration settings before they are applied to production
Web servers.
• Software critical for development and testing but that might represent an unacceptable
security risk on the production server can be installed on the development server (e.g.,
software compilers, administrative tool kits, remote access software).
The test Web server should be separate from the server that maintains an authoritative copy
of the content on the production Web server (see Section 9.2.3).

3. Maintain an Authoritative Copy of Organizational Web Content

All organizations should maintain an authoritative (i.e., verified and trusted) copy of their
public Web sites on a host that is inaccessible to the Internet. This is a supplement to, but not
replacement for, an appropriate backup policy (see Section 9.2.1). For simple, relatively static
Web sites, this could be as simple as a copy of the Web site on a read-only medium (e.g.,
Compact Disc-Recordable [CD-R]).
However, for most organizations, the authoritative copy of the Web site is maintained on a
secure host.
This host is usually located behind the organization’s firewall on the internal network and not
on the DMZ (see Section 8.1.2). The purpose of the authoritative copy is to provide a means
of restoring information on the public Web server if it is compromised as a result of an
accident or malicious action.
This authoritative copy of the Web site allows an organization to rapidly recover from Web
site integrity breaches (e.g., defacement).

To successfully accomplish the goal of providing and protecting an authoritative copy of the
Web server content, the following requirements must be met:

• Use write-once media (appropriate for relatively static Web sites).


• Locate the host with the authoritative copy behind a firewall, and ensure there is no outside
access to the host.
• Minimize the number of users with authorized access to the host.
• Control user access in as granular a manner as possible.
• Employ strong user authentication.
• Employ appropriate logging and monitoring procedures.
• Consider keeping additional authoritative copies at different physical locations for further
protection.
• Update the authoritative copy first (any testing of code should occur before updating the
authoritative copy).
• Establish policies and procedures for who can authorize updates, who can perform updates,
when updates can occur, etc.
• Physically transfer data using secure physical media (e.g., encrypted and/or write-once
media, such as CD-Rs).
• Use a secure protocol (e.g., SSH) for network transfers.
• Incorporate the authoritative copy into the organization's incident response procedures (see Section 9.3).
• Consider copying the authoritative version to the production Web server periodically (e.g.,
every 15 minutes, hourly, or daily) because this will overwrite a Web site defacement
automatically; a sketch of such a resynchronization job follows this list.
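The last requirement above can be illustrated with a small, hypothetical resynchronization job: it compares file digests and overwrites any production file that no longer matches the authoritative copy. The paths and interval are placeholder assumptions, and in practice the authoritative copy would reach the production host over a secure channel such as SSH.

```python
# Periodically restore the production web root from the authoritative copy so that
# unauthorized changes (e.g., a defacement) are overwritten automatically.
import hashlib, shutil, time
from pathlib import Path

AUTHORITATIVE = Path("/srv/authoritative_copy")   # protected internal copy (assumed path)
PRODUCTION = Path("/var/www/html")                # public web root (assumed path)
INTERVAL_SECONDS = 15 * 60                        # e.g., resynchronize every 15 minutes

def file_digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def resync() -> None:
    for src in AUTHORITATIVE.rglob("*"):
        if src.is_file():
            dst = PRODUCTION / src.relative_to(AUTHORITATIVE)
            if not dst.exists() or file_digest(dst) != file_digest(src):
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)            # restore content and timestamps

if __name__ == "__main__":
    while True:
        resync()
        time.sleep(INTERVAL_SECONDS)
```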

2. Recovering From a Security Compromise



Most organizations eventually face a successful compromise of one or more hosts on their
network. The first step in recovering from a compromise is to create and document the
required policies and procedures for responding to successful intrusions before an
intrusion occurs. The response procedures should outline the actions that are required to respond
to a successful compromise of the Web server and the appropriate sequence of these actions
(sequence can be critical). Most organizations already have a dedicated incident response
team in place, which should be contacted immediately when there is suspicion or
confirmation of a compromise. In addition, the organization may wish to ensure that some of
its staff are knowledgeable in the fields of computer and network forensics.

A Web server administrator should follow the organization’s policies and procedures for
incident handling, and the incident response team should be contacted for guidance before the
organization takes any action after a suspected or confirmed security compromise. Examples
of steps commonly performed after discovering a successful compromise are as follows:


• Isolate the compromised systems or take other steps to contain the attack so that
additional information can be collected.


• Consult, as appropriate, with management, legal counsel, and law enforcement.
• Investigate similar hosts to determine whether the attacker has also compromised other
systems.
• Analyze the intrusion, including:
  – The current state of the server, starting with the most ephemeral data (e.g., current network
    connections, memory dump, file time stamps, logged-in users)
  – Modifications made to the system's software and configuration
  – Modifications made to the data
  – Tools or data left behind by the attacker
  – System, intrusion detection, and firewall log files.
• Restore the system:
  – Either install a clean version of the OS, applications, necessary patches, and Web content,
    or restore the system from backups (this option can be more risky because the backups may
    have been made after the compromise, and restoring from a compromised backup may still
    allow the attacker access to the system).
  – Disable unnecessary services.
  – Apply all patches.
  – Change all passwords (including on uncompromised hosts, if their passwords are believed
    to have been seen by the compromised host, or if the same passwords are used on other hosts).
  – Reconfigure network security elements (e.g., firewall, router, IDPS) to provide additional
    protection and notification.
• Test the system to ensure security.
• Reconnect the system to the network.
• Monitor the system and network for signs that the attacker is attempting to access the system
or network again.

Based on the organization’s policy and procedures, system administrators should decide
whether to reinstall the OS of a compromised system or restore it from a backup. Factors that
are often considered include the following:




• The level of access gained by the intruder
• The purpose of the compromise (e.g., Web page defacement, illegal software repository, platform
for other attacks)
• The actions of the attacker during and after the compromise (e.g., as recorded in log files and intrusion
detection reports)
• The extent of the compromise on the network (e.g., the number of hosts compromised)
• Results of consultation with management and legal counsel.

The lower the level of access gained by the intruder and the more the Web server
administrator understands about the attacker’s actions, the less risk there is in restoring from
a backup and patching the vulnerability. For incidents in which there is less known about the
attacker’s actions and/or in which the attacker gains high-level access, it is recommended that
the OS and applications be reinstalled from the manufacturer’s original distribution media
and that the Web server data be restored from a known good backup.


If legal action is pursued, system administrators need to be aware of the guidelines for
handling a host after a compromise. Consult legal counsel and relevant law enforcement
authorities as appropriate.

3. Security Testing Web Servers
Periodic security testing of public Web servers is critical. Without periodic testing, there is no
assurance that current protective measures are working or that the security patch applied by
the Web server administrator is functioning as advertised. Although a variety of security
testing techniques exists, vulnerability scanning is the most common. Vulnerability scanning
assists a Web server administrator in identifying vulnerabilities and verifying whether the
existing security measures are effective. Penetration testing is also used, but it is used less
frequently and usually only as part of an overall penetration test of the organization’s
network.
1. Vulnerability Scanning
Vulnerability scanners are automated tools that are used to identify vulnerabilities and
misconfigurations of hosts. Many vulnerability scanners also provide information about
mitigating discovered
vulnerabilities.

Vulnerability scanners attempt to identify vulnerabilities in the hosts scanned. Vulnerability
scanners can help identify out-of-date software versions, missing patches, or system
upgrades, and they can validate compliance with or deviations from the organization’s
security policy. To accomplish this, vulnerability scanners identify OSs and major software
applications running on hosts and match them with known vulnerabilities in their
vulnerability databases.

However, vulnerability scanners have some significant weaknesses. Generally, they identify
only surface vulnerabilities and are unable to address the overall risk level of a scanned Web
server. Although the scan process itself is highly automated, vulnerability scanners can have
a high false positive error rate (reporting vulnerabilities when none exist). This means an
individual with expertise in Web server security and administration must interpret the results.
Furthermore, vulnerability scanners cannot generally identify vulnerabilities in custom code
or applications.
Vulnerability scanners rely on periodic updating of the vulnerability database to recognize the
latest vulnerabilities. Before running any scanner, Web server administrators should install
the latest updates to its vulnerability database. Some databases are updated more regularly
than others (the frequency of updates should be a major consideration when choosing a
vulnerability scanner).

Vulnerability scanners are often better at detecting well-known vulnerabilities than more
esoteric ones because it is impossible for any one scanning product to incorporate all known
vulnerabilities in a timely manner. In addition, manufacturers want to keep the speed of their
scanners high (the more vulnerabilities detected, the more tests required, which slows the
overall scanning process). Therefore, vulnerability scanners may be less useful to Web server
administrators operating less popular Web servers, OSs, or custom-coded applications.

Vulnerability scanners provide the following capabilities:


• Identifying active hosts on a network






• Identifying active and vulnerable services (ports) on hosts
• Identifying applications and operating systems running on hosts
• Identifying vulnerabilities associated with the discovered operating systems and applications
• Testing compliance with host application usage/security policies.

Organizations should conduct vulnerability scanning to validate that OSs and Web server
applications are up-to-date on security patches and software versions. Vulnerability scanning
is a labor-intensive activity that requires a high degree of human involvement to interpret the
results. It may also be disruptive to operations by taking up network bandwidth, slowing
network response times, and potentially affecting the availability of the scanned server or its
applications. However, vulnerability scanning is extremely important for ensuring that
vulnerabilities are mitigated as soon as possible, before they are discovered and exploited by
adversaries. Vulnerability scanning should be conducted on a weekly to monthly basis.
Many organizations also run a vulnerability scan whenever a new vulnerability database is
released for the organization’s scanner application. Vulnerability scanning results should be
documented and discovered deficiencies should be corrected.
Organizations should also consider running more than one vulnerability scanner. As
previously discussed, no scanner is able to detect all known vulnerabilities; however, using
two scanners generally increases the number of vulnerabilities detected. A common practice
is to use one commercial and one freeware scanner. Network-based and host-based
vulnerability scanners are available for free or for a fee.
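To make the idea concrete, the sketch below performs one very small piece of what a scanner does: it grabs a web server's Server response header and compares it against a tiny, assumed list of vulnerable versions. The host name and the version list are illustrative placeholders; real scanners rely on large, regularly updated vulnerability databases.

```python
# Minimal banner-grab check; this is an illustration, not a real vulnerability scanner.
import socket

KNOWN_VULNERABLE = {"Apache/2.2.8", "nginx/1.4.0"}    # assumed example entries

def grab_server_banner(host: str, port: int = 80, timeout: float = 5.0) -> str:
    request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request.encode("ascii"))
        response = sock.recv(4096).decode("latin-1", errors="replace")
    for line in response.split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return ""

banner = grab_server_banner("www.example.org")        # placeholder host
verdict = "possibly vulnerable" if banner in KNOWN_VULNERABLE else "not in local list"
print(f"Server banner: {banner or '(none)'} -> {verdict}")
```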

2. Penetration Testing

Penetration testing is “security testing in which evaluators attempt to circumvent the security
features of a system based on their understanding of the system design and implementation”.
The purpose of penetration testing is to exercise system protections (particularly human
response to attack indications) by using common tools and techniques developed by
attackers. This testing is highly recommended for complex or critical servers.

Penetration testing can be an invaluable technique to any organization's information security
program. However, it is a very labor-intensive activity and requires great expertise to
minimize the risk to targeted systems. At a minimum, it may slow the organization's network
response time because of network mapping and vulnerability scanning. Furthermore, the
possibility exists that systems may be damaged or rendered inoperable in the course of
penetration testing. Although this risk is mitigated by the use of experienced penetration
testers, it can never be fully eliminated.

Penetration testing does offer the following benefits:


• Goes beyond surface vulnerabilities and demonstrates how these vulnerabilities can be
exploited iteratively to gain greater access


• Tests the network using the same methodologies and tools employed by attackers
• Demonstrates that vulnerabilities are not purely theoretical
• Allows for testing of procedures and the susceptibility of the human element to social
engineering.

28. What is penetration testing?


Penetration Testing

Penetration testing is “security testing in which evaluators attempt to circumvent the security
features of a system based on their understanding of the system design and implementation”.
The purpose of penetration testing is to exercise system protections (particularly human
response to attack indications) by using common tools and techniques developed by
attackers. This testing is highly recommended for complex or critical servers.

Penetration testing can be an invaluable technique to any organization's information security
program. However, it is a very labor-intensive activity and requires great expertise to
minimize the risk to targeted systems. At a minimum, it may slow the organization's network
response time because of network mapping and vulnerability scanning. Furthermore, the
possibility exists that systems may be damaged or rendered inoperable in the course of
penetration testing. Although this risk is mitigated by the use of experienced penetration
testers, it can never be fully eliminated.

Penetration testing does offer the following benefits:


• Tests the network using the same methodologies and tools employed by attackers
• Goes beyond surface vulnerabilities and demonstrates how these vulnerabilities can be
exploited iteratively to gain greater access
• Demonstrates that vulnerabilities are not purely theoretical
• Allows for testing of procedures and the susceptibility of the human element to social
engineering.

29. Write a note on Identification & Authentication Technologies.


I&A is a critical building block of computer security since it is the basis for most types of
access control and for establishing user accountability. Access control often requires that the
system be able to identify and differentiate among users. For example, access control is often
based on least privilege, which refers to the granting to users of only those accesses required
to perform their duties. User accountability requires the linking of activities on a computer
system to specific individuals and, therefore, requires the system to identify users.
Identification is the means by which a user provides a claimed identity to the system.
Authentication is the means of establishing the validity of this claim.
This chapter discusses the basic means of identification and authentication, the current
technology used to provide I&A, and some important implementation issues.


Computer systems recognize people based on the authentication data the systems receive.
Authentication presents several challenges: collecting authentication data, transmitting the
data securely, and knowing whether the person who was originally authenticated is still the
person using the computer system. For example, a user may walk away from a terminal while
still logged on, and another person may start using it.
There are three means of authenticating a user's identity which can be used alone or in
combination:
• Something the individual knows (a secret - e.g., a password, Personal Identification Number
(PIN), or cryptographic key);
• Something the individual possesses (a token - e.g., an ATM card or a smart card); and
• Something the individual is (a biometric - e.g., such characteristics as a voice
pattern, handwriting dynamics, or a fingerprint).

While it may appear that any of these means could provide strong authentication, there are
problems associated with each. If people wanted to pretend to be someone else on a computer
system, they can guess or learn that individual's password; they can also steal or fabricate
tokens. Each method also has drawbacks for legitimate users and system administrators: users
forget passwords and may lose tokens, and administrative overhead for keeping track of I&A
data and tokens can be substantial. Biometric systems have significant technical, user
acceptance, and cost problems as well. For most applications, trade-offs will have to be made
among security, ease of use, and ease of administration, especially in modern networked
environments.

1 I&A Based on Something the User Knows


The most common form of I&A is a user ID coupled with a password. This technique
is based solely on something the user knows. There are other techniques besides
conventional passwords that are based on knowledge, such as knowledge of a
cryptographic key.

1.1 Passwords

In general, password systems work by requiring the user to enter a user ID and
password (or passphrase or personal identification number). The system compares the
password to a previously stored password for that user ID. If there is a match, the user
is authenticated and granted access.
1. Guessing or finding passwords. If users select their own passwords, they tend to
make them easy to remember. That often makes them easy to guess. The names of
people's children, pets, or favorite sports teams are common examples. On the other
hand, assigned passwords may be difficult to remember, so users are more likely to
write them down. Many computer systems are shipped with administrative accounts
that have preset passwords. Because these passwords are standard, they are easily
"guessed."


2. Giving passwords away. Users may share their passwords. They may give their
password to a co-worker in order to share files. In addition, people can be tricked into
divulging their passwords. This process is referred to as social engineering.
3. Electronic monitoring. When passwords are transmitted to a computer system,
they can be electronically monitored. This can happen on the network used to transmit
the password or on the computer system itself. Simple encryption of a password that
will be used again does not solve this problem because encrypting the same password
will create the same ciphertext; the ciphertext becomes the password.
4. Accessing the password file. If the password file is not protected by strong access
controls, the file can be downloaded. Password files are often protected with one-way
encryption so that plain-text passwords are not available to system administrators
or hackers (if they successfully bypass access controls). Even if the file is encrypted,
brute force can be used to learn passwords if the file is downloaded (e.g., by
encrypting English words and comparing them to the file).
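The brute-force risk just described can be sketched in a few lines: hash a list of candidate words with the same one-way function assumed to protect the file and look for matches. The file format, the choice of unsalted SHA-256, and the word list are assumptions made only for illustration; real systems should use salted, deliberately slow password hashes.

```python
# Dictionary attack against a downloaded file of one-way "encrypted" passwords.
import hashlib

def sha256_hex(word: str) -> str:
    return hashlib.sha256(word.encode("utf-8")).hexdigest()

def dictionary_attack(hashed_passwords: dict, wordlist: list) -> dict:
    """Return {user: recovered_password} for every stored hash that matches a candidate."""
    lookup = {sha256_hex(word): word for word in wordlist}
    return {user: lookup[digest] for user, digest in hashed_passwords.items()
            if digest in lookup}

stolen_file = {"alice": sha256_hex("sunshine"), "bob": sha256_hex("Xr7!pq0z")}
common_words = ["password", "letmein", "sunshine", "dragon"]
print(dictionary_attack(stolen_file, common_words))   # recovers only alice's weak password
```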

Passwords Used as Access Control. Some mainframe operating systems and many
PC applications use passwords as a means of restricting access to specific resources
within a system. Instead of using mechanisms such as access control lists (see Chapter
17), access is granted by entering a password.

1.2 Cryptographic Keys

Although the authentication derived from the knowledge of a cryptographic key may
be based entirely on something the user knows, it is necessary for the user to also
possess (or have access to) something that can perform the cryptographic
computations, such as a PC or a smart card .

2 I&A Based on Something the User Possesses

Although some techniques are based solely on something the user possesses, most of
the techniques described in this section are combined with something the user knows.
This combination can provide significantly stronger security than either something the
user knows or possesses alone.

Objects that a user possesses for the purpose of I&A are called tokens. This section
divides tokens into two categories: memory tokens and smart tokens.

2.1 Memory Tokens


Memory tokens store, but do not process, information. Special reader/writer devices
control the writing and reading of data to and from the tokens. The most common type
of memory token is a magnetic striped card, in which a thin stripe of magnetic
material is affixed to the surface of a card (e.g., as on the back of credit cards). A
common application of memory tokens for authentication to computer systems is the


automatic teller machine (ATM) card. This uses a combination of something the user
possesses (the card) with something the user knows (the PIN).

2.2 Smart Tokens


A smart token expands the functionality of a memory token by incorporating one or
more integrated circuits into the token itself. When used for authentication, a smart
token is another example of authentication based on something a user possesses (i.e.,
the token itself). A smart token typically requires a user also to provide something the
user knows (i.e., a PIN or password) in order to "unlock" the smart token for use.

3 I&A Based on Something the User Is


Biometric authentication technologies use the unique characteristics (or attributes) of
an individual to authenticate that person's identity. These include physiological
attributes (such as fingerprints, hand geometry, or retina patterns) or behavioral
attributes (such as voice patterns and hand-written signatures). Biometric
authentication technologies based upon these attributes have been developed for
computer log-in applications.

Biometric authentication is technically complex and expensive, and user acceptance


can be difficult. However, advances continue to be made to make the technology
more reliable, less costly, and more user-friendly.

Biometric authentication generally operates in the following manner: Before any


authentication attempts, a user is "enrolled" by creating a reference profile (or
template) based on the desired physical attribute. The resulting template is associated
with the identity of the user and stored for later use.

Biometric systems can provide an increased level of security for computer systems,
but the technology is still less mature than that of memory tokens or smart tokens.
Imperfections in biometric authentication devices arise from technical difficulties in
measuring and profiling physical attributes as well as from the somewhat variable
nature of physical attributes. These may change, depending on various conditions. For
example, a person's speech pattern may change under stressful conditions or when
suffering from a sore throat or cold.


30. List & explain the important implementation issues for I&A systems.
ANS:
A: Some of the important implementation issues for I&A systems include administration,
maintaining authentication, and single log-in.
1. Administration

Administration of authentication data is a critical element for all types of authentication


systems. The administrative overhead associated with I&A can be significant. I&A systems
need to create, distribute, and store authentication data. For passwords, this includes creating
passwords, issuing them to users, and maintaining a password file. Token systems involve the
creation and distribution of tokens/PINs and data that tell the computer how to recognize
valid tokens/PINs. For biometric systems, this includes creating and storing profiles. The
administrative tasks of creating and distributing authentication data and tokens can be
substantial. Identification data has to be kept current by adding new users and deleting former
users. If the distribution of passwords or tokens is not controlled, system administrators will
not know if they have been given to someone other than the legitimate user. It is critical that
the distribution system ensure that authentication data is firmly linked with a given
individual.
2. Maintaining Authentication

It is also possible for someone to use a legitimate user's account after log-in. Many computer
systems handle this problem by logging a user out or locking their display or session after a
certain period of inactivity. However, these methods can affect productivity and can make the
computer less user-friendly.
3. Single Log-in

From an efficiency viewpoint, it is desirable for users to authenticate themselves only once
and then to be able to access a wide variety of applications and data available on local and
remote systems, even if those systems require users to authenticate themselves. This is known
as single log-in. If the access is within the same host computer, then the use of a modern
access control system (such as an access control list) should allow for a single log-in. If the
access is across multiple platforms, then the issue is more complicated, as discussed below.
There are three main techniques that can provide single log-in across multiple computers:
host-to-host authentication, authentication servers, and user-to-host authentication.


31. What are various criteria used by the system to determine if a request for access will
be granted?
A: The system uses various criteria to determine if a request for access will be granted. Many
of the advantages and complexities involved in implementing and managing access control
are related to the different kinds of user accesses supported.
1. Identity

It is probably fair to say that the majority of access controls are based upon the identity of the
user which is usually obtained through identification and authentication (I&A). The identity
is usually unique, to support individual accountability, but can be a group identification or
can even be anonymous. For example, public information dissemination systems may serve a
large group called "researchers" in which the individual researchers are not known.
2. Roles

Access to information may also be controlled by the job assignment or function (i.e., the role)
of the user who is seeking access. Examples of roles include data entry clerk, purchase
officer, project leader, programmer, and technical editor. Access rights are grouped by role
name, and the use of resources is restricted to individuals authorized to assume the associated
role. An individual may be authorized for more than one role, but may be required to act in
only a single role at a time. Changing roles may require logging out and then in again, or
entering a role-changing command. Note that use of roles is not the same as shared-use
accounts. An individual may be assigned a standard set of rights of a shipping department data
entry clerk, for example, but the account would still be tied to that individual's identity to
allow for auditing.
The use of roles can be a very effective way of providing access control. The process of
defining roles should be based on a thorough analysis of how an organization operates and
should include input from a wide spectrum of users in an organization.
3. Location

Access to particular system resources may also be based upon physical or logical location.
For example, in a prison, all users in areas to which prisoners are physically permitted may be
limited to read-only access. Changing or deleting is limited to areas to which prisoners are
denied physical access. The same authorized users (e.g., prison guards) would operate under
significantly different logical access controls, depending upon their physical location.
Similarly, users can be restricted based upon network addresses (e.g., users from sites within
a given organization may be permitted greater access than those from outside).
4. Time

Time-of-day or day-of-week restrictions are common limitations on access. For example, use
of confidential personnel files may be allowed only during normal working hours - and
maybe denied before 8:00 a.m. and after 6:00 p.m. and all day during weekends and holidays.


5. Transaction

Another approach to access control can be used by organizations handling transactions (e.g.,
account inquiries). Phone calls may first be answered by a computer that requests that callers
key in their account number and perhaps a PIN. Some routine transactions can then be made
directly, but more complex ones may require human intervention. In such cases, the
computer, which already knows the account number, can grant a clerk, for example, access to
a particular account for the duration of the transaction. When completed, the access
authorization is terminated. This means that users have no choice in which accounts they
have access to, and can reduce the potential for mischief. It also eliminates employee
browsing of accounts and can thereby heighten privacy.
6. Service Constraints

Service constraints refer to those restrictions that depend upon the parameters that may arise
during use of the application or that are preestablished by the resource owner/manager. For
example, a particular software package may only be licensed by the organization for five
users at a time. Access would be denied for a sixth user, even if the user were otherwise
authorized to use the application. Another type of service constraint is based upon application
content or numerical thresholds. For example, an ATM machine may restrict transfers of
money between accounts to certain dollar limits or may limit maximum ATM withdrawals to
$500 per day. Access may also be selectively permitted based on the type of service
requested. For example, users of computers on a network may be permitted to exchange
electronic mail but may not be allowed to log in to each others' computers.
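A hedged sketch of how several of these criteria can be combined into a single access decision is shown below; the role table, internal network range, working hours, and session limit are invented policy values used only for illustration.

```python
# One access decision combining identity/role, location, time, and a service constraint.
from datetime import datetime
from ipaddress import ip_address, ip_network

INTERNAL_NET = ip_network("10.0.0.0/8")                         # assumed internal range
ROLE_PERMISSIONS = {"data_entry_clerk": {"read", "write"}, "auditor": {"read"}}
WORKING_HOURS = range(8, 18)                                    # 08:00 to 17:59
MAX_CONCURRENT_SESSIONS = 5                                     # example service constraint

def access_granted(role: str, action: str, source_ip: str,
                   when: datetime, active_sessions: int) -> bool:
    if action not in ROLE_PERMISSIONS.get(role, set()):         # role-based check
        return False
    if ip_address(source_ip) not in INTERNAL_NET:               # location check
        return False
    if when.hour not in WORKING_HOURS or when.weekday() >= 5:   # time-of-day / weekday check
        return False
    if active_sessions >= MAX_CONCURRENT_SESSIONS:              # service constraint
        return False
    return True

print(access_granted("auditor", "read", "10.1.2.3",
                     datetime(2015, 6, 1, 10, 30), 2))          # True (a weekday morning)
```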


32. List & explain the KMI components in detail


ANS:

Four distinct functional nodes are identified for the generation, distribution, and management
of cryptographic keys: a central oversight authority, key processing facility(ies), service
agents, and client nodes.
1. Central Oversight Authority

The KMI’s central oversight authority is the entity that provides overall KMI data
synchronization and system security oversight for an organization or set of organizations. The
central oversight authority 1) coordinates protection policy and practices (procedures)
documentation, 2) may function as a holder of data provided by service agents, and 3) serves
as the source for common and system level information required by service agents (e.g.,
keying material and registration information, directory data, system policy specifications, and
system-wide key compromise and certificate revocation information).
2. Key Processing Facility(ies)

Key processing services typically include one or more of the following:


• Acquisition or generation of public key certificates (where applicable),
• Initial generation and distribution of keying material,
• Maintenance of a database that maps user entities to an organization’s certificate/key
structure,
• Maintenance and distribution of compromise key lists (CKLs) and/or certificate revocation
lists (CRLs), and
• Generation of audit requests and the processing of audit responses as necessary for the
prevention of undetected compromises.
3. Service Agents

Service agents support organizations’ KMIs as single points of access for other KMI nodes.
All transactions initiated by client nodes are either processed by a service agent or forwarded
to other nodes for processing. Service agents direct service requests from client nodes to key
processing facilities, and when services are required from multiple processing facilities,
coordinate services among the processing facilities to which they are connected. Service
agents are employed by users to order keying material and services, retrieve keying material
and services, and manage cryptographic material and public key certificates. A service agent
may provide cryptographic material and/or certificates by utilizing specific key processing
facilities for key and/or certificate generation.
Service agents may provide registration, directory, and support for data recovery services (i.e.
key recovery), as well as provide access to relevant documentation, such as policy statements
and infrastructure devices. Service agents may also process requests for keying material (e.g.,


user identification credentials), and assign and manage KMI user roles and privileges. A
service agent may also provide interactive help desk services as required.
4. Client Nodes

Client nodes are interfaces for managers, devices, and applications to access KMI functions,
including the requesting of certificates and other keying material. They may include
cryptographic modules, software, and procedures necessary to provide user access to the
KMI. Client nodes interact with service agents to obtain cryptographic key services. Client
nodes provide interfaces to end user entities (e.g., encryption devices) for the distribution of
keying material, for the generation of requests for keying material, for the receipt and
forwarding (as appropriate) of compromised key lists (CKLs) and/or certificate revocation
lists (CRLs), for the receipt of audit requests, and for the delivery of audit responses. Client
nodes typically initiate requests for keying material in order to synchronize new or existing
user entities with the current key structure, and receive encrypted keying material for
distribution to end-user cryptographic devices.
A client node can be a FIPS 140-2 compliant workstation executing KMI security software or
a FIPS 140-2 compliant special purpose device. Actual interactions between a client node and
a service agent depend on whether the client node is a device, a manager, or a functional
security application.


33. Write a short note on Key Management Policy.


ANS:
Each U.S. Government organization that manages cryptographic systems that are intended to
protect sensitive information should base the management of those systems on an
organizational policy statement. The KMP is a high-level document that describes
authorization and protection objectives and constraints that apply to the generation,
distribution, accounting, storage, use, and destruction of cryptographic keying material.
Policy Content
The Key Management Policy (KMP) is a high-level statement of organizational key
management policies that includes authorization and protection objectives, and constraints
that apply to the generation, distribution, accounting, storage, use, and destruction of
cryptographic keying material. The policy document or documents that comprise the KMP
will include high-level key management structure and responsibilities, governing standards
and guidelines, organizational dependencies and other relationships, and security objectives.
The KMP is used for a number of different purposes. It guides the development of a Key Management Practices Statement (KMPS) for each PKI CA or symmetric key management group that operates
under its provisions. CAs from other organizations’ PKIs may review the KMP before cross-
certification, and managers of symmetric key KMIs may review the KMP before joining new
or existing multiple center groups. Auditors and accreditors will use the KMP as the basis for
their reviews of PKI CA and/or symmetric key KMI operations. Application owners that are
considering a PKI certificate source should review a KMP/CP to determine whether its
certificates are appropriate for their applications.
General Policy Content Requirements
Although detailed formats are specified for some environments, the policy documents into
which key management information is inserted may vary from organization to organization.
In general, the information should appear in a top-level organizational information systems
policies and practices document. The policy need not always be elaborate. A degree of
flexibility may be desirable with respect to actual organizational assignments and operations
procedures in order to accommodate organizational and information infrastructure changes
over time. However, the KMP needs to establish a policy foundation for the full set of key
management functions.
A. Security Objectives

The security objectives should include the identification of:


(a) The nature of the information to be protected (e.g., financial transactions, confidential
information, critical process data);
(b) The classes of threats against which protection is required (e.g., the unauthorized
modification of data, replay of communications, fraudulent repudiation of transactions,
disclosure of information to unauthorized parties);

(c) The Federal Information Processing Standard 199 (FIPS 199) impact level, which is determined by the consequences of a compromise of the protected information and/or processes (including the sensitivity and perishability of the information);
(d) The cryptographic protection mechanisms to be employed (e.g., message authentication,
digital signature, encryption);
(e) Protection requirements for cryptographic processes and keying material (e.g., tamper-
resistant processes, confidentiality of keying material); and
(f) Applicable statutes, and executive directives and guidance to which the KMI and its
supporting documentation shall conform.
B. Organizational Responsibilities
The following classes of organizational responsibilities should be identified:
(a) Identification of the Keying Material Manager – Since the security of all material that
is cryptographically protected depends on the security of the keying material
employed, the ultimate responsibility for key management should reside at the
executive level. The keying material manager should report directly to the
organization’s Chief Information Officer (CIO).
(b) Identification of Infrastructure Entities and Roles - The key management policy
document should identify organizational responsibilities for key KMI roles. The
following roles should be assigned:
(1) Central Oversight Authority (may be the Keying Material Manager)
(2) Certification Authorities (CAs)
(3) Registration Authorities (RAs)
(4) Overseers of operations (e.g., Key Processing Facility(ies), Service Agents)
(c) Basis for and Identification of Essential Key Management Roles – The KMP should also identify the responsible organization(s), organization (not individual) contact information, and any relevant statutory or administrative requirements for the essential key management functions.

UNIT 3
Q58. What are the various components of PKI?
PKI COMPONENTS
Functional elements of a public key infrastructure include certification authorities, registration
authorities, repositories, and archives. The users of the PKI come in two flavors: certificate holders
and relying parties. An attribute authority is an optional component.
A certification authority (CA) is similar to a notary. The CA confirms the identities of parties
sending and receiving electronic payments or other communications. Authentication is a necessary
element of many formal communications between parties, including payment transactions. In most
check-cashing transactions, a driver’s license with a picture is sufficient authentication. A personal
identification number (PIN) provides electronic authentication for transactions at a bank automated
teller machine (ATM).
A registration authority (RA) is an entity that is trusted by the CA to register or vouch for the
identity of users to a CA.
A repository is a database of active digital certificates for a CA system. The main business of the
repository is to provide data that allows users to confirm the status of digital certificates for
individuals and businesses that receive digitally signed messages. These message recipients are called
relying parties. CAs post certificates and CRLs to repositories.
An archive is a database of information to be used in settling future disputes. The business of the
archive is to store and protect sufficient information to determine if a digital signature on an “old”
document should be trusted.
The CA issues a public key certificate for each identity, confirming that the identity has the
appropriate credentials. A digital certificate typically includes the public key, information about the
identity of the party holding the corresponding private key, the operational period for the certificate,
and the CA’s own digital signature. In addition, the certificate may contain other information about
the signing party or information about the recommended uses for the public key. A subscriber is an
individual or business entity that has contracted with a CA to receive a digital certificate verifying an
identity for digitally signing electronic messages.
CAs must also issue and process certificate revocation lists (CRLs), which are lists of certificates
that have been revoked. The list is usually signed by the same entity that issued the certificates.
Certificates may be revoked, for example, if the owner’s private key has been lost; the owner leaves
the company or agency; or the owner’s name changes. CRLs also document the historical revocation
status of certificates. That is, a dated signature may be presumed to be valid if the signature date was
within the validity period of the certificate, and the current CRL of the issuing CA at that date did not
show the certificate to be revoked.
PKI users are organizations or individuals that use the PKI, but do not issue certificates. They rely on
the other components of the PKI to obtain certificates, and to verify the certificates of other entities
that they do business with. End entities include the relying party, who relies on the certificate to
know, with certainty, the public key of another entity; and the certificate holder, who is issued a
certificate and can sign digital documents. Note that an individual or organization may be both a
relying party and a certificate holder for various applications.
3.1.1 Certification Authorities
The certification authority, or CA, is the basic building block of the PKI. The CA is a collection of
computer hardware, software, and the people who operate it. The CA is known by two attributes: its
name and its public key. The CA performs four basic PKI functions: issues certificates (i.e., creates
and signs them); maintains certificate status information and issues CRLs; publishes its current (e.g.,
unexpired) certificates and CRLs, so users can obtain the information they need to implement security
services; and maintains archives of status information about the expired certificates that it issued.
These requirements may be difficult to satisfy simultaneously. To fulfill these requirements, the CA
may delegate certain functions to the other components of the infrastructure.
A CA may issue certificates to users, to other CAs, or both. When a CA issues a certificate, it is
asserting that the subject (the entity named in the certificate) has the private key that corresponds to
the public key contained in the certificate. If the CA includes additional information in the certificate,
the CA is asserting that information corresponds to the subject as well. This additional information
might be contact information (e.g., an electronic mail address), or policy information (e.g., the types
of applications that can be performed with this public key.)
When the subject of the certificate is another CA, the issuer is asserting that the certificates issued by
the other CA are trustworthy.
The CA inserts its name in every certificate (and CRL) it generates, and signs them with its private
key. Once users establish that they trust a CA (directly, or through a certification path) they can trust
certificates issued by that CA. Users can easily identify certificates issued by that CA by comparing
its name. To ensure the certificate is genuine, they verify the signature using the CA’s public key. As
a result, it is important that the CA provide adequate protection for its own private key. Federal
government CAs should always use cryptographic modules that have been validated against FIPS
140.
As CA operation is central to the security services provided by a PKI, this topic is explored in
additional detail in Section 5, CA System Operation.
3.1.2 Registration Authorities
An RA is designed to verify certificate contents for the CA. Certificate contents may reflect
information presented by the entity requesting the certificate, such as a driver's license or recent pay
stub. They may also reflect information provided by a third party. For example, the credit limit
assigned to a credit card reflects information obtained from credit bureaus. A certificate may reflect
data from the company’s Human Resources department, or a letter from a designated company
official. For example, Bob’s certificate could indicate that he has signature authority for small
contracts. The RA aggregates these inputs and provides this information to the CA.
Like the CA, the RA is a collection of computer hardware, software, and the person or people who
operate it. Unlike a CA, an RA will often be operated by a single person. Each CA will maintain a list
of accredited RAs; that is, a list of RAs determined to be trustworthy. An RA is known to the CA by a
name and a public key. By verifying the RA’s signature on a message, the CA can be sure an
accredited RA provided the information, and it can be trusted. As a result, it is important that the RA
provide adequate protection for its own private key. Federal government RAs should always use
cryptographic modules that have been validated against FIPS 140.
3.1.3 PKI Repositories
PKI applications are heavily dependent on an underlying directory service for the distribution of
certificates and certificate status information. The directory provides a means of storing and
distributing certificates, and managing updates to certificates. Directory servers are typically
implementations of the X.500 standard or subset of this standard. X.500 consists of a series of
recommendations and the specification itself references several ISO standards. It was designed for
directory services that could work across system, corporate, and international boundaries. A suite of
protocols is specified for operations such as chaining, shadowing, and referral for server-to-server
communication, and the Directory Access Protocol (DAP) for client to server communication. The
Lightweight Directory Access Protocol (LDAP) was later developed as an alternative to DAP. Most
directory servers and clients support LDAP, though not all support DAP.
To be useful for the PKI applications, directory servers need to be interoperable; without such
interoperability, a relying party will not be able to retrieve the needed certificates and CRLs from
remote sites for signature verifications. To promote interoperability among Federal agency
directories and thus PKI deployments, the Federal PKI Technical Working Group is developing a
Federal PKI Directory Profile [Chang] to assist agencies interested in participating in the FBCA
demonstration effort. It is recommended that agencies refer to this document for the minimum
interoperability requirements before standing up their agency directories.
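Because repositories are usually reachable over LDAP, a relying party can often retrieve another user's certificate with a single directory query. The sketch below uses the third-party ldap3 Python library; the server name, search base, and the assumption that certificates are published in the userCertificate;binary attribute are illustrative only and would differ from one directory deployment to another.

```python
from ldap3 import ALL, Connection, Server   # third-party library: pip install ldap3

# Hypothetical border directory and search base.
server = Server("ldap.example.gov", get_info=ALL)
conn = Connection(server, auto_bind=True)   # anonymous bind to a public border directory

conn.search(
    search_base="ou=GSA,o=U.S. Government,c=US",
    search_filter="(cn=Alice Adams)",
    attributes=["userCertificate;binary"],  # some directories publish plain "userCertificate"
)

for entry in conn.response:
    certs = entry.get("attributes", {}).get("userCertificate;binary", [])
    for der_cert in certs:
        print(f"retrieved {len(der_cert)} bytes of DER-encoded certificate data")
```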
3.1.4 Archives
An archive accepts the responsibility for long term storage of archival information on behalf of the
CA. An archive asserts that the information was good at the time it was received, and has not been
modified while in the archive. The information provided by the CA to the archive must be sufficient
to determine if a certificate was actually issued by the CA as specified in the certificate, and valid at
that time. The archive protects that information through technical mechanisms and appropriate
procedures while in its care. If a dispute arises at a later date, the information can be used to verify
that the private key associated with the certificate was used to sign a document. This permits the
verification of signatures on old documents (such as wills) at a later date.

3.1.5 PKI users


PKI Users are organizations or individuals that use the PKI, but do not issue certificates. They rely on
the other components of the PKI to obtain certificates, and to verify the certificates of other entities
that they do business with. End entities include the relying party, who relies on the certificate to
know, with certainty, the public key of another entity; and the certificate holder, who is issued a
certificate and can sign digital documents. Note that an individual or organization may be both a
relying party and a certificate holder for various applications.

59. Explain mesh and hierarchical PKI structure.


CAs may be linked in a number of ways. Most enterprises that deploy a PKI will choose either a
“mesh” or a “hierarchical” architecture:
• Hierarchical: Authorities are arranged hierarchically under a “root” CA that issues certificates to
subordinate CAs. These CAs may issue certificates to CAs below them in the hierarchy, or to users. In
a hierarchical PKI, every relying party knows the public key of the root CA. Any certificate may be
verified by verifying the certification path of certificates from the root CA. Alice verifies Bob’s
certificate, issued by CA 4, then CA 4’s certificate, issued by CA 2, and then CA 2’s certificate issued
by CA 1, the root, whose public key she knows.
• Mesh: Independent CAs cross-certify each other (that is, issue certificates to each other), resulting in a general mesh of trust relationships between peer CAs. Figure 1 (b) illustrates a mesh of authorities. A relying party knows the public key of a CA “near” himself, generally the one that issued his certificate. The relying party verifies a certificate by verifying a certification path of certificates that leads from that trusted CA. CAs cross-certify with each other, that is, they issue certificates to each other, and combine the two certificates in a crossCertificatePair. So, for example, Alice knows the public key of CA 3, while Bob knows the public key of CA 4. There are several certification paths that lead from Bob to Alice. The shortest requires Alice to verify Bob’s certificate, issued by CA 4, then CA 4’s certificate, issued by CA 5, and finally CA 5’s certificate, issued by CA 3. CA 3 is Alice’s CA; she trusts CA 3 and knows its public key.
Figure 1 illustrates these two basic PKI architectures.

Figure 1. Traditional PKI Architectures
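In both architectures, a relying party validates a certificate by checking each certificate's signature with the public key of the certificate above it until a trusted key is reached (the root CA in a hierarchy, or the relying party's own CA in a mesh). The following is a minimal sketch of that signature-by-signature check using the Python cryptography package, assuming RSA-signed certificates; a complete path validation would also check validity periods, revocation status, and name and policy constraints.

```python
from typing import List

from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def verify_path(path: List[x509.Certificate], trusted_root: x509.Certificate) -> bool:
    """path[0] is the end-entity certificate; path[-1] was issued by trusted_root."""
    chain = path + [trusted_root]
    for cert, issuer in zip(chain, chain[1:]):
        try:
            # Assumes RSA-signed certificates; other key types need different verify() calls.
            issuer.public_key().verify(
                cert.signature,
                cert.tbs_certificate_bytes,
                padding.PKCS1v15(),
                cert.signature_hash_algorithm,
            )
        except InvalidSignature:
            return False
    return True
```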


60. Explain bridge PKI architecture.
Bridge PKI Architecture
The Bridge CA architecture was designed to connect enterprise PKIs regardless of the architecture.
This is accomplished by introducing a new CA, called a Bridge CA, whose sole purpose is to establish
relationships with enterprise PKIs.
Unlike a mesh CA, the Bridge CA does not issue certificates directly to users. Unlike a root CA in a
hierarchy, the Bridge CA is not intended for use as a trust point. All PKI users consider the Bridge CA
an intermediary. The Bridge CA establishes peer-to-peer relationships with different enterprise PKIs.
These relationships can be combined to form a bridge of trust connecting the users from the different
PKIs.
If the trust domain is implemented as a hierarchical PKI, the Bridge CA will establish a relationship
with the root CA. If the domain is implemented as a mesh PKI, the bridge will
establish a relationship with only one of its CAs. In either case, the CA that enters into a trust
relationship with the Bridge is termed a principal CA.

In Figure 2, the Bridge CA has established relationships with three enterprise PKIs. The first is
Bob’s and Alice’s CA, the second is Carol’s hierarchical PKI, and the third is Doug’s mesh PKI.
None of the users trusts the Bridge CA directly. Alice and Bob trust the CA that issued their
certificates; they trust the Bridge CA because the Fox CA issued a certificate to it. Carol’s trust point
is the root CA of her hierarchy; she trusts the Bridge CA because the root CA issued a certificate to it.
Doug trusts the CA in the mesh that issued his certificate; he trusts the Bridge CA because there is a
valid certification path from the CA that issued him a certificate to the Bridge CA. Alice (or Bob) can
use the bridge of trust that exists through the Bridge CA to establish relationships with Carol and
Doug.

Figure 2. Bridge CA and Enterprise PKIs


61. Explain the two basic data structures used in PKIs.
PKI DATA STRUCTURES
Two basic data structures are used in PKIs: the public key certificate and the certificate revocation list (CRL). A third data structure, the attribute certificate, may be used as an addendum to the public key certificate.
3.3.1 X.509 Public Key Certificates
The X.509 public key certificate format [IETF 01] has evolved into a flexible and powerful
mechanism. It may be used to convey a wide variety of information. Much of that information is
optional, and the contents of mandatory fields may vary as well. It is important for PKI implementers
to understand the choices they face, and their consequences. Unwise choices may hinder
interoperability or prevent support for critical applications.
The X.509 public key certificate is protected by a digital signature of the issuer. Certificate users
know the contents have not been tampered with since the signature was generated if the signature can
be verified. Certificates contain a set of common fields, and may also include an optional set of
extensions.

There are ten common fields: six mandatory and four optional. The mandatory fields are: the serial
number, the certificate signature algorithm identifier, the certificate issuer name, the certificate
validity period, the public key, and the subject name. The subject is the party that controls the
corresponding private key. There are four optional fields: the version number, two unique identifiers,
and the extensions. These optional fields appear only in version 2 and 3 certificates.
Version. The version field describes the syntax of the certificate. When the version field is omitted,
the certificate is encoded in the original, version 1, syntax. Version 1 certificates do not include the
unique identifiers or extensions. When the certificate includes unique identifiers but not extensions,
the version field indicates version 2. When the certificate includes extensions, as almost all modern
certificates do, the version field indicates version 3.
Serial number. The serial number is an integer assigned by the certificate issuer to each certificate.
The serial number must be unique for each certificate generated by a particular issuer. The
combination of the issuer name and serial number uniquely identifies any certificate.
Signature. The signature field indicates which digital signature algorithm (e.g., DSA with SHA-1 or
RSA with MD5) was used to protect the certificate.
Issuer. The issuer field contains the X.500 distinguished name of the TTP that generated the
certificate.
Validity. The validity field indicates the dates on which the certificate becomes valid and the date on
which the certificate expires.
Subject. The subject field contains the distinguished name of the holder of the private key
corresponding to the public key in this certificate. The subject may be a CA, a RA, or an end entity.
End entities can be human users, hardware devices, or anything else that might make use of the
private key.
Subject public key information. The subject public key information field contains the subject’s
public key, optional parameters, and algorithm identifier. The public key in this field, along with the
optional algorithm parameters, is used to verify digital signatures or perform key management. If the
certificate subject is a CA, then the public key is used to verify the digital signature on a certificate.
Issuer unique ID and subject unique ID. These fields contain identifiers, and only appear in version 2 or version 3 certificates. The subject and issuer unique identifiers are intended to handle the reuse of subject names or issuer names over time. However, this mechanism has proven to be an unsatisfactory solution. The Internet Certificate and CRL profile [HOUS99] does not recommend inclusion of these fields.
Extensions. This optional field only appears in version 3 certificates. If present, this field contains
one or more certificate extensions. Each extension includes an extension identifier, a criticality flag,
and an extension value. Common certificate extensions have been defined by ISO and ANSI to
answer questions that are not satisfied by the common fields.
Subject type. This field indicates whether a subject is a CA or an end entity.
Names and identity information. This field aids in resolving questions about a user’s identity, e.g.,
are “alice@gsa.gov” and “c=US; o=U.S. Government; ou=GSA; cn=Alice Adams” the same person?
Key attributes. This field specifies relevant attributes of public keys, e.g., whether it can be
used for key transport, or be used to verify a digital signature.
Policy information. This field helps users determine if another user’s certificate can be trusted,
whether it is appropriate for large transactions, and other conditions that vary with organizational
policies.
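As an illustration, the common fields described above can be read directly from an encoded certificate. A minimal sketch using the Python cryptography package (the file name is an assumption):

```python
from cryptography import x509

with open("alice.pem", "rb") as f:            # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

print("version:       ", cert.version)                   # v1, v2, or v3
print("serial number: ", cert.serial_number)             # unique per issuer
print("signature alg: ", cert.signature_algorithm_oid)   # algorithm used to sign the certificate
print("issuer:        ", cert.issuer.rfc4514_string())   # distinguished name of the issuing CA
print("validity:      ", cert.not_valid_before, "to", cert.not_valid_after)
print("subject:       ", cert.subject.rfc4514_string())  # holder of the corresponding private key
print("public key:    ", type(cert.public_key()).__name__)
```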
Certificate extensions allow the CA to include information not supported by the basic certificate
content. Any organization may define a private extension to meet its particular business requirements.
However, most requirements can be satisfied using standard extensions. Standard extensions are
widely supported by commercial products. Standard extensions offer improved interoperability, and
they are more cost effective than private extensions. Extensions have three components: extension
identifier, a criticality flag, and extension value. The extension identifier indicates the format and
semantics of the extension value. The criticality flag indicates the importance of the extension. When
the criticality flag is set, the information is essential to certificate use. Therefore, if an unrecognized
critical extension is encountered, the certificate must not be used. Alternatively, an unrecognized non-
critical extension may be ignored. The subject of a certificate could be an end user or another CA. The
basic certificate fields do not differentiate between these types of users. The basic constraints
extension appears in CA certificates, indicating this certificate may be used to build certification
paths.
The key usage extension indicates the types of security services that this public key can be used to
implement. These may be generic services (e.g., non-repudiation or data encryption) or PKI specific
services (e.g., verifying signatures on certificates or CRLs).
The subject field contains a directory name, but that may not be the type of name that is used by a
particular application. The subject alternative name extension is used to provide other name forms
for the owner of the private key, such as DNS names or email addresses. For example, the email
address alice@gsa.gov could appear in this field.
CAs may have multiple key pairs. The authority key identifier extension helps users select the right
public key to verify the signature on this certificate.
Users may also have multiple key pairs, or multiple certificates for the same key. The subject key
identifier extension is used to identify the appropriate public key.
Organizations may support a broad range of applications using PKI. Some certificates may be more
trustworthy than others, based on the procedures used to issue them or the type of user cryptographic
module. The certificate policies extension contains a globally unique identifier that specifies the
certificate policy that applies to this certificate.
Different organizations (e.g., different companies or government agencies) will use different
certificate policies. Users will not recognize policies from other organizations. The policy mappings
extension converts policy information from other organizations into locally useful policies. This
extension appears only in CA certificates.
The CRL distribution points extension contains a pointer to the X.509 CRL where status information
for this certificate may be found. (X.509 CRLs are described in the following section.)
When a CA issues a certificate to another CA, it is asserting that the other CA's certificates are
trustworthy. Sometimes, the issuer would like to assert that a subset of the certificates should be
trusted. There are three basic ways to specify that a subset of certificates should be trusted:
The basic constraints extension (described above) has a second role, indicating whether this CA is
trusted to issue CA certificates, or just user certificates.
The name constraints extension can be used to describe a subset of certificates based on the names in
either the subject or subject alternative name fields. This extension can be used to define the set of
acceptable names, or the set of unacceptable names. That is, the CA could assert “names in the NIST
directory space are acceptable” or “names in the NIST directory space are not acceptable.”
The policy constraints extension can be used to describe a subset of certificates based on the contents
of the policy extension. If policy constraints are implemented, users will reject certificates without a
policy extension, or where the specified policies are unrecognized.
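Extensions can be inspected in the same way. The sketch below, again using the Python cryptography package, reads the basic constraints and key usage extensions and shows the criticality flag of every extension present; which extensions actually appear depends on the issuing CA.

```python
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

def summarize_extensions(cert: x509.Certificate) -> None:
    # Basic constraints: is the subject a CA, and may it issue further CA certificates?
    try:
        bc = cert.extensions.get_extension_for_oid(ExtensionOID.BASIC_CONSTRAINTS)
        print("CA certificate:", bc.value.ca, "path length:", bc.value.path_length)
    except x509.ExtensionNotFound:
        print("no basic constraints extension (end-entity certificate assumed)")

    # Key usage: which security services this public key may support.
    try:
        ku = cert.extensions.get_extension_for_oid(ExtensionOID.KEY_USAGE)
        print("digitalSignature:", ku.value.digital_signature,
              "keyCertSign:", ku.value.key_cert_sign)
    except x509.ExtensionNotFound:
        pass

    # A relying party must reject the certificate if it cannot process a critical extension.
    for ext in cert.extensions:
        print("extension", ext.oid.dotted_string, "critical:", ext.critical)
```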
3.3.2 Certificate Revocation Lists (CRLs)
Certificates contain an expiration date. Unfortunately, the data in a certificate may become unreliable
before the expiration date arrives. Certificate issuers need a mechanism to provide a status update for
the certificates they have issued. One mechanism is the X.509 certificate revocation list (CRL).
CRLs are the PKI analog of the credit card hot list that store clerks review before accepting large
credit card transactions. The CRL is protected by a digital signature of the CRL issuer. If the signature
can be verified, CRL users know the contents have not been tampered with since the signature was
generated. CRLs contain a set of common fields, and may also include an optional set of extensions.
The CRL contains the following fields:
Version. The optional version field describes the syntax of the CRL. (In general, the version will be
two.)
Signature. The signature field contains the algorithm identifier for the digital signature algorithm
used by the CRL issuer to sign the CRL.
Issuer. The issuer field contains the X.500 distinguished name of the CRL issuer.
This update. The this-update field indicates the issue date of this CRL.
Next update. The next-update field indicates the date by which the next CRL will be issued.
Revoked certificates. The revoked certificates structure lists the revoked certificates. The entry for
each revoked certificate contains the certificate serial number, time of revocation, and optional CRL
entry extensions.

The CRL entry extensions field is used to provide additional information about this particular revoked
certificate. This field may only appear if the version is v2.
CRL Extensions. The CRL extensions field is used to provide additional information about the whole
CRL. Again, this field may only appear if the version is v2.
ITU-T and ANSI X9 have defined several CRL extensions for X.509 v2 CRLs. They are specified in
[X509 97] and [X955]. Each extension in a CRL may be designated as critical or non-critical. A CRL
validation fails if an unrecognized critical extension is encountered.
However, unrecognized non-critical extensions may be ignored. The X.509 v2 CRL format allows
communities to define private extensions to carry information unique to those communities.
Communities are encouraged to define non-critical private extensions so that their CRLs can be
readily validated by all implementations.
The most commonly used CRL extensions include the following:
The CRL number extension is essentially a counter. In general, this extension is provided so that
users are informed if an emergency CRL was issued.
As noted in the previous section, CAs may have multiple key pairs. When appearing in a CRL, the
authority key identifier extension helps users select the right public key to verify the signature on
this CRL.
The issuer field contains a directory name, but that may not be the type of name that is used by a
particular application. The issuer alternative name extension is used to provide other name forms for
the owner of the private key, such as DNS names or email addresses. For example, the email address
CA1@nist.gov could appear in this field.
The issuing distribution points extension is used in conjunction with the CRL distribution points
extension in certificates. This extension is used to confirm that this particular CRL is the one
described by the CRL distribution points extension and contains status information for the certificate in
question. This extension is required when the CRL does not cover all certificates issued by a CA,
since the CRL may be distributed on an insecure network.
The extensions described above apply to the entire CRL. There are also extensions that apply to a
particular revoked certificate.
Certificates may be revoked for a number of different reasons. The user’s crypto module may have
been stolen, for example, or the module may simply have been broken. The reason code extension
describes why a particular certificate was revoked. The relying party may use this information to
decide if a previously generated signature may be accepted.
Sometimes a CA does not wish to issue its own CRLs. It may delegate this task to another CA.
The CA that issues a CRL may include the status of certificates issued by a number of different CAs
in the same CRL. The certificate issuer extension is used to specify which CA issued a particular
certificate, or set of certificates, on a CRL.
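As a sketch of how a relying party consumes a CRL, the following loads a CRL with the Python cryptography package and looks up a certificate's serial number. The file name is illustrative, and verification of the CRL issuer's signature (which must precede any trust in the list) is omitted for brevity.

```python
from cryptography import x509

with open("issuing-ca.crl", "rb") as f:        # hypothetical CRL file
    crl = x509.load_pem_x509_crl(f.read())

print("CRL issuer:  ", crl.issuer.rfc4514_string())
print("this update: ", crl.last_update)
print("next update: ", crl.next_update)

def is_revoked(crl: x509.CertificateRevocationList, serial_number: int) -> bool:
    """Return True if the given certificate serial number appears on the CRL."""
    entry = crl.get_revoked_certificate_by_serial_number(serial_number)
    if entry is None:
        return False
    print("revoked on", entry.revocation_date)
    return True
```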

62. Write a note on physical architecture of PKI.


Physical Architecture
There are numerous ways in which a PKI can be designed physically. It is highly recommended that
the major PKI components be implemented on separate systems, that is, the CA on one system, the
RA on a different system, and directory servers on other systems. Because the systems contain
sensitive data, they should be located behind an organization's Internet firewall. The CA system is
especially important because a compromise to that system could potentially disrupt the entire
operations of the PKI and necessitate starting over with new certificates.
Consequently, placing the CA system behind an additional organizational firewall is recommended so
that it is protected both from the Internet and from systems in the organization itself. Of course, the
organizational firewall would permit communications between the CA and the RA as well as other
appropriate systems.
If distinct organizations wish to access certificates from each other, their directories will need to be
made available to each other and possibly to other organizations on the Internet. However, some
organizations will use the directory server for much more than simply a repository for certificates.
The directory server may contain other data considered sensitive to the organization and thus the
directory may be too sensitive to be made publicly available. A typical solution would be to create a
directory that contains only the public keys or certificates, and to locate this directory at the border of
the organization - this directory is referred to as a border directory. A likely location for the
directory would be outside the organization’s firewall or perhaps on a protected DMZ segment of its
network so that it is still available to the public but better protected from attack. Figure 3 illustrates a
typical arrangement of PKI-related systems.
The main directory server located within the organization's protected network would periodically
refresh the border directory with new certificates or updates to the existing certificates. Users within
the organization would use the main directory server, whereas other systems and organizations would
access only the border directory. When a user in organization A wishes to send encrypted e-mail to a
user in organization B, user A would then retrieve user B's certificate from organization B's border
directory, and then use the public key in that certificate to encrypt the e-mail.

Figure 3. PKI Physical Topology


63. List the most commonly logged types of information and their potential benefits.
The following lists some of the most commonly logged types of information and the potential benefits of each (a short logging sketch follows the list):

• Client requests and server responses, which can be very helpful in reconstructing sequences
of events and determining their apparent outcome. If the application logs successful user
authentications, it is usually possible to determine which user made each request. Some
applications can perform highly detailed logging, such as e-mail servers recording the sender,
recipients, subject name, and attachment names for each e-mail; Web servers recording each
URL requested and the type of response provided by the server; and business applications
recording which financial records were accessed by each user. This information can be used
to identify or investigate incidents and to monitor application usage for compliance and
auditing purposes.
• Account information such as successful and failed authentication attempts, account changes
(e.g., account creation and deletion, account privilege assignment), and use of privileges. In
addition to identifying security events such as brute force password guessing and escalation of
privileges, it can be used to identify who has used the application and when each person has
used it.
• Usage information such as the number of transactions occurring in a certain period (e.g.,
minute, hour) and the size of transactions (e.g., e-mail message size, file transfer size). This
can be useful for certain types of security monitoring (e.g., a ten-fold increase in e-mail
activity might indicate a new e-mail–borne malware threat; an unusually large outbound e-
mail message might indicate inappropriate release of information).
• Significant operational actions such as application startup and shutdown, application failures,
and major application configuration changes. This can be used to identify security
compromises and operational failures.
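The sketch referenced above shows how an application might record the first two categories (requests/responses and authentication attempts) using Python's standard logging module; the exact fields and format are an assumption, not a prescribed convention.

```python
import logging

logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def record_authentication(user: str, source_ip: str, success: bool) -> None:
    # Account information: successful and failed authentication attempts.
    outcome = "success" if success else "failure"
    logging.info("auth user=%s src=%s outcome=%s", user, source_ip, outcome)

def record_request(user: str, url: str, status: int) -> None:
    # Client request and server response, attributable to an authenticated user.
    logging.info("request user=%s url=%s status=%d", user, url, status)

record_authentication("alice", "192.0.2.10", True)
record_request("alice", "/reports/q3", 200)
```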
Q 64. State & explain the common log management infrastructure functions.

Log management infrastructures typically perform several functions that assist in the storage,
analysis, and disposal of log data. These functions are normally performed in such a way that they do
not alter the original logs. The following items describe common log management infrastructure
functions:

• General
– Log parsing is extracting data from a log so that the parsed values can be used as input for another logging process. A simple example of parsing is reading a text-based log file that contains 10 comma-separated values per line and extracting the 10 values from each line. Parsing is performed as part of many other logging functions, such as log conversion and log viewing. (A short Python sketch at the end of this list illustrates parsing, normalization, and log file integrity checking.)
– Event filtering is the suppression of log entries from analysis, reporting, or long-term storage
because their characteristics indicate that they are unlikely to contain information of interest. For
example, duplicate entries and standard informational entries might be filtered because they do not
provide useful information to log analysts. Typically, filtering does not affect the generation or short-
term storage of events because it does not alter the original log files.
– In event aggregation, similar entries are consolidated into a single entry containing a count of the
number of occurrences of the event. For example, a thousand entries that each record part of a scan
could be aggregated into a single entry that indicates how many hosts were scanned. Aggregation is
often performed as logs are originally generated (the generator counts similar related events and
periodically writes a log entry containing the count), and it can also be performed as part of log
reduction or event correlation processes, which are described below.
• Storage
– Log rotation is closing a log file and opening a new log file when the first file is considered to be
complete. Log rotation is typically performed according to a schedule (e.g., hourly, daily, weekly) or
when a log file reaches a certain size. The primary benefits of log rotation are preserving log entries
and keeping the size of log files manageable. When a log file is rotated, the preserved log file can be
compressed to save space. Also, during log rotation, scripts are often run that act on the archived log.
For example, a script might analyze the old log to identify malicious activity, or might perform
filtering that causes only log entries meeting certain characteristics to be preserved. Many log
generators offer log rotation capabilities; many log files can also be rotated through simple scripts or
third-party utilities, which in some cases offer features not provided by the log generators.
– Log archival is retaining logs for an extended period of time, typically on removable media, a
storage area network (SAN), or a specialized log archival appliance or server. Logs often need to be
preserved to meet legal or regulatory requirements. Section 4.2 provides additional information on log
archival. There are two types of log archival: retention and preservation. Log retention is archiving
logs on a regular basis as part of standard operational activities. Log preservation is keeping logs that
normally would be discarded, because they contain records of activity of particular interest. Log
preservation is typically performed in support of incident handling or investigations.
– Log compression is storing a log file in a way that reduces the amount of storage space needed for
the file without altering the meaning of its contents. Log compression is often performed when logs
are rotated or archived.
– Log reduction is removing unneeded entries from a log to create a new log that is smaller. A similar
process is event reduction, which removes unneeded data fields from all log entries. Log and event
reduction are often performed in conjunction with log archival so that only the log entries and data
fields of interest are placed into long-term storage.
– Log conversion is parsing a log in one format and storing its entries in a second format. For
example, conversion could take data from a log stored in a database and save it in an XML format in a
text file. Many log generators can convert their own logs to another format; third-party conversion
utilities are also available. Log conversion sometimes includes actions such as filtering, aggregation,
and normalization.
– In log normalization, each log data field is converted to a particular data representation and categorized consistently. One of the most common uses of normalization is storing dates and times in a single format. For example, one log generator might store the event time in a twelve-hour format (2:34:56 P.M. EDT) categorized as Timestamp, while another log generator might store it in a twenty-four-hour format (14:34) categorized as Event Time, with the time zone stored in a different notation (-0400) in a different field categorized as Time Zone. Normalizing the data makes analysis and reporting much easier when multiple log formats are in use. However, normalization can be very resource-intensive, especially for complex log entries (e.g., typical intrusion detection logs).
– Log file integrity checking involves calculating a message digest for each file and storing the message digest securely to ensure that changes to archived logs are detected. A message digest is a cryptographic hash value that uniquely identifies data and has the property that changing a single bit in the data causes a completely different message digest to be generated. The most commonly used message digest algorithms are MD5 and Secure Hash Algorithm 1 (SHA-1). If the log file is modified and its message digest is recalculated, it will not match the original message digest, indicating that the file has been altered. The original message digests should be protected from alteration through FIPS-approved encryption algorithms, storage on read-only media, or other suitable means.
• Analysis
– Event correlation is finding relationships between two or more log entries. The most common form
of event correlation is rule-based correlation, which matches multiple log entries from a single source
or multiple sources based on logged values, such as timestamps, IP addresses, and event types. Event
correlation can also be performed in other ways, such as using statistical methods or visualization
tools. If correlation is performed through automated methods, generally the result of successful
correlation is a new log entry that brings together the pieces of information into a single place.
Depending on the nature of that information, the infrastructure might also generate an alert to indicate
that the identified event needs further investigation.
– Log viewing is displaying log entries in a human-readable format. Most log generators provide
some sort of log viewing capability; third-party log viewing utilities are also available. Some log
viewers provide filtering and aggregation capabilities.
– Log reporting is displaying the results of log analysis. Log reporting is often performed to
summarize significant activity over a particular period of time or to record detailed information
related to a particular event or series of events.
• Disposal
– Log clearing is removing all entries from a log that precede a certain date and time. Log clearing is
often performed to remove old log data that is no longer needed on a system because it is not of
importance or it has been archived.
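The sketch referenced in the log parsing item above ties several of these functions together: it parses comma-separated log entries, normalizes their timestamps to a single UTC representation, and computes a SHA-256 message digest for integrity checking of an archived log file. The field layout, timestamp formats, and file name are assumptions for illustration only.

```python
import csv
import hashlib
from datetime import datetime, timezone

def parse_log(path: str):
    """Log parsing: read comma-separated entries (timestamp, host, user, event)."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) == 4:                      # skip malformed lines
                yield {"time": row[0], "host": row[1], "user": row[2], "event": row[3]}

def normalize_timestamp(raw: str) -> str:
    """Log normalization: accept two assumed source formats, emit one ISO 8601 UTC form."""
    for fmt in ("%m/%d/%Y %I:%M:%S %p %z", "%Y-%m-%dT%H:%M:%S%z"):
        try:
            return datetime.strptime(raw, fmt).astimezone(timezone.utc).isoformat()
        except ValueError:
            continue
    return raw                                     # leave unrecognized values untouched

def file_digest(path: str) -> str:
    """Log file integrity checking: SHA-256 digest of an archived (rotated) log."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    for entry in parse_log("auth.log.csv"):        # hypothetical rotated log file
        entry["time"] = normalize_timestamp(entry["time"])
        print(entry)
    print("archived log digest:", file_digest("auth.log.csv"))
```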
A log management infrastructure usually encompasses most or all of the functions described in this
section. Section 3.1 describes the components and architectures of log management infrastructures.
The placement of some of the log functions among the three tiers of the log management
infrastructure depends primarily on the type of log management software used. Log management
infrastructures are typically based on one of the two major categories of log management software:
syslog-based centralized logging software and security information and event management software.
Sections 3.3 and 3.4 describe these types of software. Section 3.5 describes additional types of
software that may be valuable within a log management infrastructure.

65. What are the various types of network-based & host-based security software?

Most organizations use several types of network-based and host-based security software to detect
malicious activity, protect systems and data, and support incident response efforts. Accordingly,
security software is a major source of computer security log data. Common types of network-based
and host-based security software include the following:

• Antimalware Software. The most common form of antimalware software is antivirus software, which typically records all instances of detected malware, file and system disinfection attempts, and file quarantines. Additionally, antivirus software might also record when malware scans were performed and when antivirus signature or software updates occurred. Antispyware software and other types of antimalware software (e.g., rootkit detectors) are also common sources of security information.
• Intrusion Detection and Intrusion Prevention Systems. Intrusion detection and intrusion prevention systems record detailed information on suspicious behavior and detected attacks, as well as any actions intrusion prevention systems performed to stop malicious activity in progress. Some intrusion detection systems, such as file integrity checking software, run periodically instead of continuously, so they generate log entries in batches instead of on an ongoing basis.
• Remote Access Software. Remote access is often granted and secured through virtual private
networking (VPN). VPN systems typically log successful and failed login attempts, as well as
the dates and times each user connected and disconnected, and the amount of data sent and
received in each user session. VPN systems that support granular access control, such as
many Secure Sockets Layer (SSL) VPNs, may log detailed information about the use of
resources.
• Web Proxies. Web proxies are intermediate hosts through which Web sites are accessed. Web
proxies make Web page requests on behalf of users, and they cache copies of retrieved Web
pages to make additional accesses to those pages more efficient. Web proxies can also be
used to restrict Web access and to add a layer of protection between Web clients and Web
servers. Web proxies often keep a record of all URLs accessed through them.
• Vulnerability Management Software. Vulnerability management software, which includes
patch management software and vulnerability assessment software, typically logs the patch
installation history and vulnerability status of each host, which includes known vulnerabilities
and missing software updates. Vulnerability management software may also record additional
information about hosts’ configurations. Vulnerability management software typically runs
occasionally, not continuously, and is likely to generate large batches of log entries.
• Authentication Servers. Authentication servers, including directory servers and single sign-on
servers, typically log each authentication attempt, including its origin, username, success or
failure, and date and time.
66. What are the challenges in log management?

2.3 The Challenges in Log Management

Most organizations face similar log management-related challenges, which have the same underlying
problem: effectively balancing a limited amount of log management resources with an ever-increasing
supply of log data. This section discusses the most common types of challenges, divided into three
groups. First, there are several potential problems with the initial generation of logs because of their
variety and prevalence. Second, the confidentiality, integrity, and availability of generated logs could
be breached inadvertently or intentionally. Finally, the people responsible for performing log analysis
are often inadequately prepared and supported. These three categories of challenges are discussed below.

2.3.1 Log Generation and Storage

In a typical organization, many hosts’ OSs, security software, and other applications generate and
store logs. This complicates log management in the following ways:

• Many Log Sources. Logs are located on many hosts throughout the organization, necessitating
log management to be performed throughout the organization. Also, a single log source can
generate multiple logs—for example, an application storing authentication attempts in one log
and network activity in another log.
• Inconsistent Log Content. Each log source records certain pieces of information in its log
entries, such as host IP addresses and usernames. For efficiency, log sources often record only
the pieces of information that they consider most important. This can make it difficult to link
events recorded by different log sources because they may not have any common values
recorded (e.g., source 1 records the source IP address but not the username, and source 2
records the username but not the source IP address).
• Inconsistent Timestamps. Each host that generates logs typically references its internal clock
when setting a timestamp for each log entry. If a host’s clock is inaccurate, the timestamps in its
logs will also be inaccurate. This can make analysis of logs more difficult, particularly when logs
from multiple hosts are being analyzed. For example, timestamps might indicate that event A
happened 45 seconds before event B, when event A actually happened two minutes after event B.
• Inconsistent Log Formats. Many of the log source types use different formats for their logs,
such as comma-separated or tab-separated text files, databases, syslog, Simple Network
Management Protocol (SNMP), Extensible Markup Language (XML), and binary files. Some logs
are designed for humans to read, while others are not; some logs use standard formats, while
others use proprietary formats. Some logs are created not for local storage in a file, but for
transmission to another system for processing; a common example of this is SNMP traps. For
some output formats, particularly text files, there are many possibilities for the sequence of the
values in each log entry and the delimiters between the values (e.g., comma-separated values, tab-
delimited values, XML).

2.3.2 Log Protection

Because logs contain records of system and network security, they need to be protected from breaches
of their confidentiality and integrity. For example, logs might intentionally or inadvertently capture
sensitive information such as users’ passwords and the content of e-mails. This raises security and
privacy concerns involving both the individuals that review the logs and others that might be able to
access the logs through authorized or unauthorized means. Logs that are secured improperly in storage
or in transit might also be susceptible to intentional and unintentional alteration and destruction. This
could cause a variety of impacts, including allowing malicious activities to go unnoticed and
manipulating evidence to conceal the identity of a malicious party. For example, many rootkits are
specifically designed to alter logs to remove any evidence of the rootkits’ installation or execution.
Organizations also need to protect the availability of their logs.

2.3.3 Log Analysis


Within most organizations, network and system administrators have traditionally been responsible for
performing log analysis—studying log entries to identify events of interest. It has often been treated
as a low-priority task by administrators and management because other duties of administrators, such
as handling operational problems and resolving security vulnerabilities, necessitate rapid responses.
Administrators who are responsible for performing log analysis often receive no training on doing it
efficiently and effectively, particularly on prioritization. Also, administrators often do not receive
tools that are effective at automating much of the analysis process, such as scripts and security
software tools (e.g., host-based intrusion detection products, security information and event
management software). Many of these tools are particularly helpful in finding patterns that humans
cannot easily see, such as correlating entries from multiple logs that relate to the same event. Another
problem is that many administrators consider log analysis to be boring and to provide little benefit for
the amount of time required. Log analysis is often treated as reactive—something to be done after a
problem has been identified through other means—rather than proactive, to identify ongoing activity
and look for signs of impending problems. Traditionally, most logs have not been analyzed in a real-
time or near-real-time manner. Without sound processes for analyzing logs, the value of the logs is
significantly reduced.

67. Explain log management infrastructure.


A log management infrastructure consists of the hardware, software, networks, and media used to
generate, transmit, store, analyze, and dispose of log data. Most organizations have one or more log
management infrastructures. This section describes the typical architecture of a log management
infrastructure and how its components interact with each other. It then describes the basic functions
performed within a log management infrastructure. Next, it examines the two major categories of log
management software: syslog-based centralized logging software and security information and event
management software. The section also describes additional types of software that may be useful
within a log management infrastructure.
A log management infrastructure typically comprises the following three tiers:

• Log Generation. The first tier contains the hosts that generate the log data. Some hosts run logging client applications or services that make their log data available through networks to log servers in the second tier. Other hosts make their logs available through other means, such as allowing the servers to authenticate to them and retrieve copies of the log files. (A short forwarding sketch appears at the end of this answer.)
• Log Analysis and Storage. The second tier is composed of one or more log servers that receive log
data or copies of log data from the hosts in the first tier. The data is transferred to the servers either in
a real-time or near-real-time manner, or in occasional batches based on a schedule or the amount of
log data waiting to be transferred. Servers that receive log data from multiple log generators are
sometimes called collectors or aggregators. Log data may be stored on the log servers themselves or
on separate database servers.
• Log Monitoring. The third tier contains consoles that may be used to monitor and review log data
and the results of automated analysis. Log monitoring consoles can also be used to generate reports.
In some log management infrastructures, consoles can also be used to provide management for the log
servers and clients. Also, console user privileges sometimes can be limited to only the necessary
functions and data sources for each user.
The second tier—log analysis and storage—can vary greatly in complexity and structure. The
simplest arrangement is a single log server that handles all log analysis and storage functions.
Examples of more complex second tier arrangements are as follows:

- Multiple log servers that each perform a specialized function, such as one server performing log
collection, analysis, and short-term log storage, and another server performing long-term storage.
- Multiple log servers that each perform analysis and/or storage for certain log generators. This can
also provide some redundancy. A log generator can switch to a backup log server if its primary log
server becomes unavailable. Also, log servers can be configured to share log data with each other,
which also supports redundancy.
- Two levels of log servers, with the first level of distributed log servers receiving logs from the log
generators and forwarding some or all of the log data they receive to a second level of more
centralized log servers.
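
To make the second tier more concrete, the sketch below shows a deliberately simplified log collector: it receives syslog messages over UDP and appends them, with the sender's address, to local storage. The listening port and file name are assumptions; production collectors add parsing, access control, and rotation.

    #!/usr/bin/env python3
    # Simplified second-tier collector: receive syslog messages over UDP and append
    # them, tagged with the sending host's address, to a local file for later analysis.
    import socket

    LISTEN_ADDR = ("0.0.0.0", 5514)   # assumed unprivileged port; standard syslog uses 514/udp

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)

    with open("collected-syslog.log", "a") as store:
        while True:
            data, (src_ip, _src_port) = sock.recvfrom(8192)
            entry = data.decode("utf-8", errors="replace").rstrip()
            store.write(f"{src_ip} {entry}\n")
            store.flush()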
68. What are the various functions of log management infrastructure?
Same as Q.64

69. Write short note on Syslog Security


Syslog was developed at a time when the security of logs was not a major consideration. Accordingly,
it did not support the use of basic security controls that would preserve the confidentiality, integrity,
and availability of logs. For example, most syslog implementations use the connectionless, unreliable
User Datagram Protocol (UDP) to transfer logs between hosts. UDP provides no assurance that log
entries are received successfully or in the correct sequence. Also, most syslog implementations do not
perform any access control, so any host can send messages to a syslog server unless other security
measures have been implemented to prevent this, such as using a physically separate logging network
for communications with the syslog server, or implementing access control lists on network devices to
restrict which hosts can send messages to the syslog server. Attackers can take advantage of this by
flooding syslog servers with bogus log data, which can cause important log entries to go unnoticed or
even potentially cause a denial of service. Another shortcoming of most syslog implementations is
that they cannot use encryption to protect the integrity or confidentiality of logs in transit. Attackers
on the network might monitor syslog messages containing sensitive information regarding system
configurations and security weaknesses; attackers might also be able to perform man-in-the-middle
attacks such as modifying or destroying syslog messages in transit.

As the security of logs has become a greater concern, several implementations of syslog have been
created that place a greater emphasis on security. Most have been based on a proposed standard, RFC
3195, which was designed specifically to improve the security of syslog. Implementations based on
RFC 3195 can support log confidentiality, integrity, and availability through several features,
including the following:

- Reliable Log Delivery. Several syslog implementations support the use of Transmission
Control Protocol (TCP) in addition to UDP. TCP is a connection-oriented protocol that
attempts to ensure the reliable delivery of information across networks. Using TCP helps to
ensure that log entries reach their destination. Having this reliability requires the use of more
network bandwidth; also, it typically takes more time for log entries to reach their destination.
Some syslog implementations use log caching servers.
- Transmission Confidentiality Protection. RFC 3195 recommends the use of the Transport
Layer Security (TLS) protocol to protect the confidentiality of transmitted syslog messages.
TLS can protect the messages during their entire transit between hosts. TLS can only protect
the payloads of packets, not their IP headers, which means that an observer on the network
can identify the source and destination of transmitted syslog messages, possibly revealing the
IP addresses of the syslog servers and log sources. Some syslog implementations use other
means to encrypt network traffic, such as passing syslog messages through secure shell (SSH)
tunnels. Protecting syslog transmissions can require additional network bandwidth and
increase the time needed for log entries to reach their destination.
- Transmission Integrity Protection and Authentication. RFC 3195 recommends that a message
digest algorithm be used if integrity protection and authentication are desired.
RFC 3195 recommends the use of MD5; proposed revisions to RFC 3195 mention the use of
SHA-1. Because SHA is a FIPS-approved algorithm and MD5 is not, Federal agencies should
use SHA instead of MD5 for message digests whenever feasible.
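
As a rough sketch of these ideas (not an RFC 3195 implementation), the example below sends a syslog message over TCP for reliable delivery using Python's standard SysLogHandler and attaches a SHA-256 digest of the message to illustrate using a FIPS-approved hash instead of MD5. The collector's name and port are assumptions, and confidentiality in transit would still require a separate mechanism such as TLS tunnelling.

    #!/usr/bin/env python3
    # Sketch: send a syslog message over TCP (reliable delivery) rather than UDP,
    # and include a SHA-256 digest of the message as a simple integrity check.
    import hashlib
    import logging
    import logging.handlers
    import socket

    handler = logging.handlers.SysLogHandler(
        address=("logserver.example.org", 514),   # assumed collector accepting TCP syslog
        socktype=socket.SOCK_STREAM,              # TCP instead of the default UDP
    )
    logger = logging.getLogger("reliable-syslog-demo")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    message = "user alice authenticated from 192.0.2.10"
    digest = hashlib.sha256(message.encode()).hexdigest()   # SHA rather than MD5
    logger.info("%s sha256=%s", message, digest)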
70. Explain the Need for Log Management
Log management can benefit an organization in many ways. It helps to ensure that computer security
records are stored in sufficient detail for an appropriate period of time. Routine log reviews and
analysis are beneficial for identifying security incidents, policy violations, fraudulent activity, and
operational problems shortly after they have occurred, and for providing information useful for
resolving such problems. Logs can also be useful for performing auditing and forensic analysis,
supporting the organization’s internal investigations, establishing baselines, and identifying
operational trends and long-term problems.


Besides the inherent benefits of log management, a number of laws and regulations further compel
organizations to store and review certain logs. The following is a listing of key regulations, standards,
and guidelines that help define organizations’ needs for log management:

- Federal Information Security Management Act of 2002 (FISMA). FISMA emphasizes the
need for each Federal agency to develop, document, and implement an organization-wide
program to provide information security for the information systems that support its
operations and assets. NIST SP 800-53, Recommended Security Controls for Federal
Information Systems, was developed in support of FISMA. NIST SP 800-53 is the primary
source of recommended security controls for Federal agencies. It describes several controls
related to log management, including the generation, review, protection, and retention of audit
records, as well as the actions to be taken because of audit failure.
- Gramm-Leach-Bliley Act (GLBA). GLBA requires financial institutions to protect their
customers’ information against security threats. Log management can be helpful in
identifying possible security violations and resolving them effectively.
- Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA includes
security standards for certain health information. NIST SP 800-66, An Introductory Resource
Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA)
Security Rule, lists HIPAA-related log management needs. For example, Section 4.1 of NIST
SP 800-66 describes the need to perform regular reviews of audit logs and access reports.
71. List and explain the classic categories of malware.
Malware has become the greatest external threat to most hosts, causing damage and requiring
extensive recovery efforts within most organizations. The following are the classic categories of
malware:
Viruses. A virus self-replicates by inserting copies of itself into host programs or data files. Viruses
are often triggered through user interaction, such as opening a file or running a program. Viruses can
be divided into the following two subcategories:
– Compiled Viruses. A compiled virus is executed by an operating system. Types of compiled
viruses include file infector viruses, which attach themselves to executable programs; boot sector
viruses, which infect the master boot records of hard drives or the boot sectors of removable media;
and multipartite viruses, which combine the characteristics of file infector and boot sector viruses.
– Interpreted Viruses. Interpreted viruses are executed by an application. Within this subcategory,
macro viruses take advantage of the capabilities of applications’ macro programming language to
infect application documents and document templates, while scripting viruses infect scripts that are
understood by scripting languages processed by services on the OS.
Worms. A worm is a self-replicating, self-contained program that usually executes itself without
user intervention. Worms are divided into two categories:
– Network Service Worms. A network service worm takes advantage of a vulnerability in a network
service to propagate itself and infect other hosts.
– Mass Mailing Worms. A mass mailing worm is similar to an email-borne virus but is self-
contained, rather than infecting an existing file.
Trojan Horses. A Trojan horse is a self-contained, non-replicating program that, while appearing
to be benign, actually has a hidden malicious purpose. Trojan horses either replace existing files with
malicious versions or add new malicious files to hosts. They often deliver other attacker tools to hosts.

Malicious Mobile Code. Malicious mobile code is software with malicious intent that is
transmitted from a remote host to a local host and then executed on the local host, typically without
the user’s explicit instruction. Popular languages for malicious mobile code include Java, ActiveX,
JavaScript, and VBScript.


Blended Attacks. A blended attack uses multiple infection or transmission methods. For example,
a blended attack could combine the propagation methods of viruses and worms.

72. List and explain the popular attacker tools.


Various types of attacker tools might be delivered to a host by malware. These tools allow attackers to
have unauthorized access to or use of infected hosts and their data, or to launch additional attacks.
Popular types of attacker tools are as follows:
Backdoors. A backdoor is a malicious program that listens for commands on a certain TCP or
UDP port. Most backdoors allow an attacker to perform a certain set of actions on a host, such as
acquiring passwords or executing arbitrary commands. Types of backdoors include zombies (better
known as bots), which are installed on a host to cause it to attack other hosts, and remote
administration tools, which are installed on a host to enable a remote attacker to gain access to the
host’s functions and data as needed.
Keystroke Loggers. A keystroke logger monitors and records keyboard use. Some require the
attacker to retrieve the data from the host, whereas other loggers actively transfer the data to another
host through email, file transfer, or other means.
Rootkits. A rootkit is a collection of files that is installed on a host to alter its standard
functionality in a malicious and stealthy way. A rootkit typically makes many changes to a host to
hide the rootkit’s existence, making it very difficult to determine that the rootkit is present and to
identify what the rootkit has changed.
Web Browser Plug-Ins. A web browser plug-in provides a way for certain types of content to be
displayed or executed through a web browser. Malicious web browser plug-ins can monitor all use of
a browser.
E-Mail Generators. An email generating program can be used to create and send large quantities
of email, such as malware and spam, to other hosts without the user’s permission or knowledge.
Attacker Toolkits. Many attackers use toolkits containing several different types of utilities and
scripts that can be used to probe and attack hosts, such as packet sniffers, port scanners, vulnerability
scanners, password crackers, and attack programs and scripts.
Because attacker tools can be detected by antivirus software, some people think of them as forms of
malware. However, attacker tools have no infection capability on their own; they rely on malware or
other attack mechanisms to install them onto target hosts. Strictly speaking, attacker tools are not
malware, but because they are so closely tied to malware and often detected and removed using the
same tools, attacker tools will be covered where appropriate throughout this publication.

73. What are the recommended capabilities of antivirus software?


Antivirus software is the most commonly used technical control for malware threat mitigation. There
are many brands of antivirus software, with most providing similar protection through the following
recommended capabilities:

- Watching real-time activities on hosts to check for suspicious activity; a common example is
scanning all email attachments for known malware as emails are sent and received. Antivirus software
should be configured to perform real-time scans of each file as it is downloaded, opened, or executed,
which is known as on-access scanning.
- Monitoring the behavior of common applications, such as email clients, web browsers, and instant
messaging software. Antivirus software should monitor activity involving the applications most likely
to be used to infect hosts or spread malware to other hosts.
- Scanning files for known malware. Antivirus software on hosts should be configured to scan all
hard drives regularly to identify any file system infections and, optionally, depending on organization
security needs, to scan removable media inserted into the host before allowing its use. Users should
also be able to launch a scan manually as needed, which is known as on-demand scanning.


- Disinfecting files, which refers to removing malware from within a file, and quarantining files,
which means that files containing malware are stored in isolation for future disinfection or
examination. Disinfecting a file is generally preferable to quarantining it because the malware is
removed and the original file restored; however, many infected files cannot be disinfected.
Accordingly, antivirus software should be configured to attempt to disinfect infected files and to
either quarantine or delete files that cannot be disinfected.
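
The following sketch imitates only a tiny slice of this behavior: an on-demand scan that hashes files, compares them to a list of known-bad SHA-256 values, and quarantines matches in an isolated directory. The hash list, directories, and matching logic are illustrative assumptions; real antivirus products rely on signatures, heuristics, and disinfection well beyond this.

    #!/usr/bin/env python3
    # Toy "on-demand scan": hash each file under a directory, compare against a list
    # of known-bad SHA-256 values, and move any matches into a quarantine directory.
    import hashlib
    import pathlib
    import shutil

    KNOWN_BAD_HASHES = {
        # assumed placeholder values; real ones would come from vendor signature data
        "0000000000000000000000000000000000000000000000000000000000000000",
    }
    QUARANTINE = pathlib.Path("/var/quarantine")   # assumed isolated, access-controlled location

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def on_demand_scan(directory):
        QUARANTINE.mkdir(parents=True, exist_ok=True)
        for path in pathlib.Path(directory).rglob("*"):
            if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
                print(f"Quarantining {path}")
                shutil.move(str(path), QUARANTINE / path.name)

    on_demand_scan("/home")   # a scan launched manually by the user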

74. Write a short note on sandboxing


Sandboxing refers to a security model where applications are run within a sandbox—a controlled
environment that restricts what operations the applications can perform and that isolates them from
other applications running on the same host. In a sandbox security model, typically only authorized
“safe” operations may be performed within the sandbox; the sandbox prohibits applications within the
sandbox from performing any other operations. The sandbox also restricts access to system resources,
such as memory and the file system, to keep the sandbox’s applications isolated from the host’s other
applications.
Sandboxing provides several benefits in terms of malware incident prevention and handling. By
limiting the operations available, it can prevent malware from performing some or all of the malicious
actions it is attempting to execute; this could prevent the malware from succeeding or reduce the
damage it causes. And the sandboxing environment—the isolation—can further reduce the impact of
the malware by restricting what information and functions the malware can access. Another benefit of
sandboxing is that the sandbox itself can be reset to a known good state every time it is initialized.
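
A minimal sketch of the idea, assuming a Unix host: run an untrusted program with limited CPU time and memory and an isolated scratch directory. A real sandbox (seccomp filters, containers, or virtual machines) restricts file system, network, and system-call access far more thoroughly.

    #!/usr/bin/env python3
    # Sketch: run an untrusted program with limited CPU time and memory inside a
    # temporary working directory. This only illustrates the sandbox idea; production
    # sandboxes isolate file system, network, and system calls as well.
    import resource
    import subprocess
    import tempfile

    def limit_resources():
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                     # at most 5 s of CPU
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # at most 256 MB of memory

    with tempfile.TemporaryDirectory() as workdir:
        result = subprocess.run(
            ["/usr/bin/python3", "untrusted_script.py"],   # assumed program under examination
            cwd=workdir,                                   # isolated scratch directory
            preexec_fn=limit_resources,                    # apply limits in the child before exec
            capture_output=True,
            timeout=10,
        )
        print(result.returncode, result.stdout[:200])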

75. Explain malware incident response life cycle in detail.


The initial phase of malware incident response involves performing preparatory activities, such as
developing malware-specific incident handling procedures and training programs for incident
response teams. As described in Section 3, the preparation phase also involves using policy,
awareness activities, vulnerability mitigation, and security tools to reduce the number of malware
incidents. Despite these measures, residual risk will inevitably persist, and no solution is foolproof.
Detection of malware infections is thus necessary to alert the organization whenever incidents occur.
Early detection is particularly important for malware incidents because they are more likely than other
types of incidents to increase their impact over time, so faster detection and handling can help reduce
the number of infected hosts and the damage done.

Figure 4-1. Incident Response Life Cycle


For each incident, the organization should act appropriately, based on the severity of the incident, to
mitigate its impact by containing it, eradicating infections, and ultimately recovering from the
incident. The organization may need to jump back to the detection and analysis phase during
containment, eradication, and recovery—for example, to check for additional infections that have
occurred since the original detection was done. After an incident has been handled, the organization
should issue a report that details the cause and cost of the incident and the steps the organization
should take to prevent future incidents and to prepare more effectively to handle incidents that do
occur.

4.1 Preparation
Organizations should perform preparatory measures to ensure that they are capable of responding
effectively to malware incidents. Sections 4.1.1 through 4.1.3 describe several recommended
preparatory measures, including building and maintaining malware-related skills within the incident
response team, facilitating communication and coordination throughout the organization, and
acquiring necessary tools and resources.
4.1.1 Building and Maintaining Malware-Related Skills
All malware incident handlers should have a solid understanding of how each major category of
malware infects hosts and spreads.
4.1.2 Facilitating Communication and Coordination
One of the most common problems during malware incident handling is poor communication and
coordination. To improve communication and coordination, an organization should designate in
advance a few individuals or a small team to be responsible for coordinating the organization’s
responses to malware incidents.
4.1.3 Acquiring Tools and Resources
Organizations should also ensure that they have the necessary tools (hardware and software) and
resources to assist in malware incident handling.

4.2 Detection and Analysis


Organizations should strive to detect and validate malware incidents rapidly to minimize the number
of infected hosts and the amount of damage the organization sustains. Because malware can take
many forms and be distributed through many means, there are many possible signs of a malware
incident and many locations within an organization where the signs might be recorded or observed.
4.2.1 Identifying Malware Incident Characteristics
Because no indicator is completely reliable—even antivirus software might miscategorize benign
activity as malicious—incident handlers need to analyze any suspected malware incident and validate
that malware is the cause. In some cases, such as a massive, organization-wide infection, validation
may be unnecessary because the nature of the incident is obvious.
4.2.2 Identifying Infected Hosts
Identifying hosts that are infected by malware is part of every malware incident. Once identified,
infected hosts can undergo the appropriate containment, eradication, and recovery actions.
Unfortunately, identifying all infected hosts is often complicated by the dynamic nature of computing.
For instance, people shut hosts down, disconnect them from networks, or move them from place to
place, making it extremely difficult to identify which hosts are currently infected.
4.2.3 Prioritizing Incident Response
Once a malware incident has been validated, the next activity is to prioritize its handling. Certain
forms of malware, such as worms, tend to spread very quickly and can cause a substantial impact in
minutes or hours, so they often necessitate a high-priority response.
4.2.4 Malware Analysis
Incident handlers can study the behavior of malware by analyzing it either actively (executing the
malware) or forensically (examining the infected host for evidence of malware). Forensic approaches
are safer to perform on an infected host because they can examine the host without allowing the
malware to continue executing.

4.3 Containment
Containment of malware has two major components: stopping the spread of the malware and
preventing further damage to hosts. Nearly every malware incident requires containment actions. In
addressing an incident, it is important for an organization to decide which methods of containment to
employ initially, early in the response.
4.3.1 Containment Through User Participation
At one time, user participation was a valuable part of containment efforts, particularly during large-
scale incidents in non-managed environments. Users were provided with instructions on how to
identify infections and what measures to take if a host was infected, such as calling the help desk,
disconnecting the host from the network, or powering off the host.
4.3.2 Containment Through Automated Detection
Many malware incidents can be contained primarily through the use of automated technologies.
These technologies include antivirus software, content filtering, and intrusion prevention software.
Because antivirus software on hosts can detect and remove infections, it is often the preferred
automated detection method for assisting in containment.
4.3.3 Containment Through Disabling Services
Some malware incidents necessitate more drastic and potentially disruptive measures for containment.
These incidents make extensive use of a particular service. Containing such an incident quickly and
effectively might be accomplished through a loss of services, such as shutting down a service used by
malware, blocking a certain service at the network perimeter, or disabling portions of a service (e.g.,
large mailing lists).
4.3.4 Containment Through Disabling Connectivity
Containing incidents by placing temporary restrictions on network connectivity can be very effective.
For example, if infected hosts attempt to establish connections with an external host to download
rootkits, handlers should consider blocking all access to the external host (by IP address or domain
name, as appropriate).
4.3.5 Containment Recommendations
Containment can be performed through many methods in the four categories described above (users,
automated detection, loss of services, and loss of connectivity). Because no single malware
containment category or individual method is appropriate or effective in every situation, incident
handlers should select a combination of containment methods that is likely to be effective in
containing the current incident while limiting damage to hosts and reducing the impact that
containment methods might have on other hosts. For example, shutting down all network access
might be very effective at stopping the spread of malware, but it would also allow infections on hosts
to continue damaging files and would disrupt many important functions of the organization.

4.4 Eradication
Although the primary goal of eradication is to remove malware from infected hosts, eradication is
typically more involved than that. If an infection was successful because of host vulnerability or other
security weakness, such as an unsecured file share, then eradication includes the elimination or
mitigation of that weakness, which should prevent the host from becoming reinfected or becoming
infected by another instance of malware or a variant of the original threat. Eradication actions are
often consolidated with containment efforts.
In general, organizations should rebuild any host that has any of the following incident characteristics,
instead of performing typical eradication actions (disinfection):
- One or more attackers gained administrator-level access to the host.
- Unauthorized administrator-level access to the host was available to anyone through a backdoor, an
unprotected share created by a worm, or other means.
- The host is unstable or does not function properly after the malware has been eradicated by
antivirus software or other programs or techniques. This indicates that either the malware has not been
eradicated completely or that it has caused damage to important system or application files or settings.
- There is doubt about the nature of and extent of the infection or any unauthorized access gained
because of the infection.

4.5 Recovery
The two main aspects of recovery from malware incidents are restoring the functionality and data of
infected hosts and removing temporary containment measures. Additional actions to restore hosts are
not necessary for most malware incidents that cause limited host damage (for example, an infection
that simply altered a few data files and was completely removable with antivirus software). As
discussed in Section 4.4, for malware incidents that are far more damaging, such as Trojan horses,
rootkits, or backdoors, corrupting thousands of system and data files, or wiping out hard drives, it is
often best to first rebuild the host, then secure the host so that it is no longer vulnerable to the malware
threat. Organizations should carefully consider possible worst-case scenarios, such as a new malware
threat that necessitates rebuilding a large percentage of the organization’s workstations, and determine
how the hosts would be recovered in these cases. This should include identifying who would perform
the recovery tasks, estimating how many hours of labor would be needed and how much calendar time
would elapse, and determining how the recovery efforts should be prioritized.

4.6 Lessons Learned


When a major malware incident occurs, the primary individuals performing the response usually work
intensively for days or weeks. As the major handling efforts end, the key people are usually mentally
and physically fatigued, and are behind in performing other tasks that were pending during the
incident handling period. Consequently, the lessons learned phase of incident response might be
significantly delayed or skipped altogether for major malware incidents. However, because major
malware incidents can be extremely expensive to handle, it is particularly important for organizations
to conduct robust lessons learned activities for major malware incidents. Although it is reasonable to
give handlers and other key people a few days to catch up on other tasks, review meetings and other
efforts should occur expeditiously, while the incident is still fresh in everyone’s minds. Examples of
possible outcomes of lessons learned activities for malware incidents are as follows:
Security Policy Changes. Security policies might be modified to prevent similar incidents. For
example, if connecting personally owned mobile devices to organization laptops caused a serious
infection, modifying the organization’s policies to secure, restrict, or prohibit such device connections
might be advisable.
Awareness Program Changes. Security awareness training for users might be changed to reduce
the number of infections or to improve users’ actions in reporting incidents and assisting with
handling incidents on their own hosts.
Software Reconfiguration. OS or application settings might need to be changed to support
security policy changes or to achieve compliance with existing policy.
Malware Detection Software Deployment. If hosts were infected through a transmission
mechanism that was unprotected by antivirus software or other malware detection tools, an incident
might provide sufficient justification to purchase and deploy additional software.
Malware Detection Software Reconfiguration. Detection software might need to be reconfigured
in various ways, such as the following:
– Increasing the frequency of software and signature updates
– Improving the accuracy of detection (e.g., fewer false positives, fewer false negatives)
– Increasing the scope of monitoring (e.g., monitoring additional transmission mechanisms,
monitoring additional files or file systems)
– Changing the action automatically performed in response to detected malware
– Improving the efficiency of update distribution.

76. List and explain the major components of containment of malware.


Containment of malware has two major components: stopping the spread of the malware and
preventing further damage to hosts. Nearly every malware incident requires containment actions. In
addressing an incident, it is important for an organization to decide which methods of containment to
employ initially, early in the response. Containment of isolated incidents and incidents involving
noninfectious forms of malware is generally straightforward, involving such actions as disconnecting
the affected hosts from networks or shutting down the hosts. For more widespread malware incidents,
such as fast-spreading worms, organizations should use a strategy that contains the incident for most
hosts as quickly as possible; this should limit the number of machines that are infected, the amount of
damage that is done, and the amount of time that it will take to fully recover all data and services.


In containing a malware incident, it is also important to understand that stopping the spread of
malware does not necessarily prevent further damage to hosts. Malware on a host might continue to
exfiltrate sensitive data, replace OS files, or cause other damage. In addition, some instances of
malware are designed to cause additional damage when network connectivity is lost or other
containment measures are performed. For example, an infected host might run a malicious process
that contacts another host periodically. If that connectivity is lost because the infected host is
disconnected from the network, the malware might overwrite all the data on the host’s hard drive. For
these reasons, handlers should not assume that just because a host has been disconnected from the
network, further damage to the host has been prevented, and in many cases, should begin eradication
efforts as soon as possible to prevent more damage.
Organizations should have strategies and procedures in place for making containment-related
decisions that reflect the level of risk acceptable to the organization. For example, an organization
might decide that infected hosts performing critical functions should not be disconnected from
networks or shut down if the likely damage to the organization from those functions being unavailable
would be greater than the security risks posed by not isolating or shutting down the host. Containment
strategies should support incident handlers in selecting the appropriate combination of containment
methods based on the characteristics of a particular situation.
Containment methods can be divided into four basic categories: relying on user participation,
performing automated detection, temporarily halting services, and blocking certain types of network
connectivity. Sections 4.3.1 through 4.3.4 describe each category in detail.
4.3.1 Containment Through User Participation
At one time, user participation was a valuable part of containment efforts, particularly during large-
scale incidents in non-managed environments. Users were provided with instructions on how to
identify infections and what measures to take if a host was infected, such as calling the help desk,
disconnecting the host from the network, or powering off the host. The instructions might also cover
malware eradication, such as updating antivirus signatures and performing a host scan, or obtaining
and running a specialized malware eradication utility. As hosts have increasingly become managed,
user participation in containment has sharply decreased. However, having users perform containment
actions is still helpful in non-managed environments and other situations in which use of fully
automated containment methods is not feasible.
4.3.2 Containment Through Automated Detection
Many malware incidents can be contained primarily through the use of the automated technologies
described in Section 3.4 for preventing and detecting infections. These technologies include antivirus
software, content filtering, and intrusion prevention software. Because antivirus software on hosts can
detect and remove infections, it is often the preferred automated detection method for assisting in
containment. However, as previously discussed, many of today’s malware threats are novel, so
antivirus software and other technologies often fail to recognize them as being malicious. Also,
malware that compromises the OS may disable security controls such as antivirus software,
particularly in unmanaged environments where users have greater control over their hosts.
Containment through antivirus software is not as robust and effective as it used to be.
Examples of automated detection methods other than antivirus software are as follows:
Content Filtering.
Network-Based IPS Software.
Executable Blacklisting.

4.3.3 Containment Through Disabling Services


Some malware incidents necessitate more drastic and potentially disruptive measures for containment.
These incidents make extensive use of a particular service. Containing such an incident quickly and
effectively might be accomplished through a loss of services, such as shutting down a service used by
malware, blocking a certain service at the network perimeter, or disabling portions of a service (e.g.,
large mailing lists).

4.3.4 Containment Through Disabling Connectivity


Containing incidents by placing temporary restrictions on network connectivity can be very effective.
For example, if infected hosts attempt to establish connections with an external host to download
rootkits, handlers should consider blocking all access to the external host (by IP address or domain
name, as appropriate). Similarly, if infected hosts within the organization attempt to spread their
malware, the organization might block network traffic from the hosts’ IP addresses to control the
situation while the infected hosts are physically located and disinfected. An alternative to blocking
network access for particular IP addresses is to disconnect the infected hosts from the network, which
could be accomplished by reconfiguring network devices to deny network access or physically
disconnecting network cables from infected hosts.
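
As an illustration, the sketch below only generates (and prints for review) the kind of temporary firewall rules such a restriction might use on a Linux perimeter device; the addresses are placeholders, and any real change would go through the organization's normal change and containment procedures.

    #!/usr/bin/env python3
    # Sketch: generate temporary firewall rules that block traffic from infected hosts
    # and to a malicious external host. Rules are printed for review, not executed.
    infected_hosts = ["192.0.2.15", "192.0.2.27"]   # assumed hosts identified during the incident
    rootkit_server = "203.0.113.80"                 # assumed external host serving rootkits

    rules = [f"iptables -I FORWARD -s {ip} -j DROP" for ip in infected_hosts]
    rules.append(f"iptables -I FORWARD -d {rootkit_server} -j DROP")

    for rule in rules:
        print(rule)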
4.3.5 Containment Recommendations
Containment can be performed through many methods in the four categories described above (users,
automated detection, loss of services, and loss of connectivity). Because no single malware
containment category or individual method is appropriate or effective in every situation, incident
handlers should select a combination of containment methods that is likely to be effective in
containing the current incident while limiting damage to hosts and reducing the impact that
containment methods might have on other hosts.

Q.76 List and explain the major components of containment of malware.


Malware incident containment has two major components: stopping the spread of malware and
preventing further damage to hosts. Nearly every malware incident requires containment actions. In
addressing an incident, it is important for an organization to decide which methods of containment to
employ initially, early in the response. Organizations should have strategies and procedures in place
for making containment-related decisions that reflect the level of risk acceptable to the organization.
Containment strategies should support incident handlers in selecting the appropriate combination of
containment methods based on the characteristics of a particular situation. Specific containment-
related recommendations include the following:
– It can be helpful to provide users with instructions on how to identify infections and what measures
to take if a host is infected; however, organizations should not rely primarily on users for containing
malware incidents.
– If malware cannot be identified and contained by updated antivirus software, organizations should
be prepared to use other security tools to contain it. Organizations should also be prepared to submit
copies of unknown malware to their security software vendors for analysis, as well as contacting
trusted parties such as incident response organizations and antivirus vendors when guidance is needed
on handling new threats.
– Organizations should be prepared to shut down or block services used by malware to contain an
incident and should understand the consequences of doing so. Organizations should also be prepared
to respond to problems caused by other organizations disabling their own services in response to a
malware incident.
– Organizations should be prepared to place additional temporary restrictions on network connectivity
to contain a malware incident, such as suspending Internet access or physically disconnecting hosts
from networks, recognizing the impact that the restrictions might have on organizational functions.

77. Explain the three main categories of patch and vulnerability metrics.
There are three main categories of patch and vulnerability metrics: susceptibility to attack, mitigation
response time, and cost. This section provides example metrics in each category.

Measuring a System’s Susceptibility to Attack

An organization’s susceptibility to attack can be approximated by several measurements. An
organization can measure the number of patches needed, the number of vulnerabilities, and the
number of network services running on a per system basis. These measurements should be taken
individually for each computer within the system, and the results then aggregated to determine the
system-wide result. Both raw results and ratios (e.g., number of vulnerabilities per computer) are
important. The raw results help reveal the overall risk a system faces because the more vulnerabilities,
unapplied patches, and exposed network services that exist, the greater the chance that the system will
be penetrated. Large systems consisting of many computers are thus inherently less secure than
smaller similarly configured systems. This does not mean that the large systems are necessarily
secured with less rigor than the smaller systems. To avoid such implications, ratios should be used
when comparing the effectiveness of the security programs of multiple systems. Ratios (e.g., number
of unapplied patches per computer) allow effective comparison between systems. Both raw results
and ratios should be measured and published for each system, as appropriate, since they are both
useful and serve different purposes.

Number of Patches

Number of Vulnerabilities

Number of Network Services
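
A small sketch of the aggregation described above, using hypothetical per-computer counts for one system: it reports both the raw totals and the per-computer ratios.

    #!/usr/bin/env python3
    # Sketch: aggregate per-computer measurements into system-wide raw totals and
    # per-computer ratios (e.g., unapplied patches per computer). Data is hypothetical.
    hosts = {
        "web01": {"unapplied_patches": 4, "vulnerabilities": 7, "network_services": 5},
        "web02": {"unapplied_patches": 2, "vulnerabilities": 3, "network_services": 5},
        "db01":  {"unapplied_patches": 1, "vulnerabilities": 2, "network_services": 3},
    }

    metrics = ("unapplied_patches", "vulnerabilities", "network_services")
    totals = {m: sum(h[m] for h in hosts.values()) for m in metrics}

    print("Raw totals:", totals)
    for metric, total in totals.items():
        print(f"{metric} per computer: {total / len(hosts):.2f}")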


Mitigation Response Time

It is also important to measure how quickly an organization can identify, classify, and respond to a
new vulnerability and mitigate the potential impact within the organization. Response time has
become increasingly important, because the average time between a vulnerability announcement and
an exploit being released has decreased dramatically in the last few years. There are three primary
response time measurements that can be taken: vulnerability and patch identification, patch
application, and emergency security configuration changes.

Response Time for Vulnerability and Patch Identification

Response Time for Patch Application

Response Time for Emergency Configuration Changes
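
For illustration, the three response times can be computed as simple differences between recorded timestamps, as in the sketch below; the dates are invented.

    #!/usr/bin/env python3
    # Sketch: compute the three response time measurements for one vulnerability
    # from hypothetical timestamps.
    from datetime import datetime

    announced      = datetime(2015, 3, 2, 9, 0)    # vendor or CERT announcement
    identified     = datetime(2015, 3, 2, 15, 30)  # PVG identified and classified it
    config_changed = datetime(2015, 3, 3, 10, 0)   # emergency configuration change applied
    patch_applied  = datetime(2015, 3, 6, 11, 0)   # patch deployed to affected hosts

    print("Identification response time:", identified - announced)
    print("Emergency configuration change response time:", config_changed - announced)
    print("Patch application response time:", patch_applied - announced)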


Cost

Measuring the cost of patch and vulnerability management is difficult because the actions are often
split between many different personnel and groups. In the simplest case, there will be a dedicated
centralized PVG that deploys patches and security configurations directly. However, most
organizations will have the patch and vulnerability functions split between multiple groups and
allocated to a variety of full-time and part-time personnel. There are four main cost measurements that
should be taken: the PVG, system administrator support, enterprise patch and vulnerability
management tools, and incidents that occurred due to failures in the patch and vulnerability
management program.

Cost of the Patch and Vulnerability Group

Cost of System Administrator Support

Cost of Enterprise Patch and Vulnerability Management Tools

Cost of Program Failures

78. What is The Patch and Vulnerability Group & what are their duties?
The PVG should be specially tasked to implement the patch and vulnerability management program
throughout the organization. The PVG is the central point for vulnerability remediation efforts, such
as OS and application patching and configuration changes. Since the PVG needs to work actively with
local administrators, large organizations may need to have several PVGs; they could work together or
be structured hierarchically with an authoritative top-level PVG. The duties of a PVG should include
the following:


1. Inventory the organization’s IT resources to determine which hardware equipment, operating
systems, and software applications are used within the organization.

2. Monitor security sources for vulnerability announcements, patch and non-patch remediations,
and emerging threats that correspond to the software within the PVG’s system inventory.

3. Prioritize the order in which the organization addresses remediating vulnerabilities.

4. Create a database of remediations that need to be applied to the organization.

5. Conduct testing of patches and non-patch remediations on IT devices that use standardized
configurations.

6. Oversee vulnerability remediation.

7. Distribute vulnerability and remediation information to local administrators.

8. Perform automated deployment of patches to IT devices using enterprise patch management
tools.

9. Configure automatic update of applications whenever possible and appropriate.

10. Verify vulnerability remediation through network and host vulnerability scanning.

11. Train administrators on how to apply vulnerability remediations.

79. What are the primary methods of remediation that can be applied to an affected system?
Organizations should deploy vulnerability remediations to all systems that have the vulnerability,
even for systems that are not at immediate risk of exploitation. Vulnerability remediations should also
be incorporated into the organization’s standard builds and configurations for hosts. There are three
primary methods of remediation that can be applied to an affected system: the installation of a
software patch, the adjustment of a configuration setting, and the removal of the affected software.

+ Security Patch Installation. Applying a security patch (also called a “fix” or “hotfix”) repairs
the vulnerability, since patches contain code that modifies the software application to address
and eliminate the problem. Patches downloaded from vendor Web sites are typically the most
up-to-date and are likely free of malicious code.
+ Configuration Adjustment. Adjusting how an application or security control is configured can
effectively block attack vectors and reduce the threat of exploitation. Common configuration
adjustments include disabling services and modifying privileges, as well as changing firewall
rules and modifying router access controls. Settings of vulnerable software applications can
be modified by adjusting file attributes or registry settings.
+ Software Removal. Removing or uninstalling the affected software or vulnerable service
eliminates the vulnerability and any associated threat. This is a practical solution when an
application is not needed on a system. Determining how the system is used, removing
unnecessary software and services, and running only what is essential for the system’s
purpose is a recommended security practice.
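
A hedged sketch of how these three options might be reported for an inventory: the installed versions and advisory data below are invented, and the version comparison is deliberately naive.

    #!/usr/bin/env python3
    # Sketch: compare installed software versions against advisory data and report
    # which remediation method applies. All package names, versions, and flags are
    # hypothetical, and the version comparison is naive (illustration only).
    installed = {"openssl": "1.1.1k", "httpd": "2.4.41", "telnetd": "0.17"}
    advisories = {
        "openssl": {"fixed_in": "1.1.1n", "needed": True},
        "httpd":   {"fixed_in": "2.4.52", "needed": True},
        "telnetd": {"fixed_in": None,     "needed": False},   # service not required on this host
    }

    def version_tuple(version):
        return tuple(version.replace("-", ".").split("."))

    for package, advisory in advisories.items():
        current = installed.get(package)
        if current is None:
            continue
        if not advisory["needed"]:
            print(f"{package}: software removal recommended (not needed for the system's purpose)")
        elif advisory["fixed_in"] and version_tuple(current) < version_tuple(advisory["fixed_in"]):
            print(f"{package}: security patch installation needed ({current} -> {advisory['fixed_in']})")
        else:
            print(f"{package}: no patch pending; consider configuration adjustment if still exposed")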
80. Who is involved in log management planning? Explain their responsibilities.
As part of the log management planning process, an organization should define the roles and
responsibilities of individuals and teams who are expected to be involved in log management. Teams
and individual roles often involved in log management include the following:


- System and network administrators, who are usually responsible for configuring logging on
individual systems and network devices, analyzing those logs periodically, reporting on the results of
log management activities, and performing regular maintenance of the logs and logging software
- Security administrators, who are usually responsible for managing and monitoring the log
management infrastructures, configuring logging on security devices (e.g., firewalls, network-based
intrusion detection systems, antivirus servers), reporting on the results of log management activities,
and assisting others with configuring logging and performing log analysis
- Computer security incident response teams, who use log data when handling some incidents
- Application developers, who may need to design or customize applications so that they perform
logging in accordance with the logging requirements and recommendations
- Information security officers, who may oversee the log management infrastructures
- Chief information officers (CIO), who oversee the IT resources that generate, transmit, and store the
logs
- Auditors, who may use log data when performing audits
- Individuals involved in the procurement of software that should or can generate computer security
log data.
81. What are the steps included in developing logging policies?
An organization should define its requirements and goals for performing logging and monitoring logs.
The requirements should include all applicable laws, regulations, and existing organizational policies,
such as data retention policies. The goals should be based on balancing the organization’s reduction of
risk with the time and resources needed to perform log management functions. The requirements and
goals should then be used as the basis for establishing an organization-wide log management
capability and prioritizing log management appropriately throughout the enterprise.

Organizations should develop policies that clearly define mandatory requirements and suggested
recommendations for several aspects of log management, including the following:

- Log generation
– Which types of hosts must or should perform logging
– Which host components must or should perform logging (e.g., OS, service, application)
– Which types of events each component must or should log (e.g., security events, network
connections, authentication attempts)
– Which data characteristics must or should be logged for each type of event (e.g., username
and source IP address for authentication attempts)
– How frequently each type of event must or should be logged (e.g., every occurrence, once
for all instances in x minutes, once for every x instances, every instance after x instances)
- Log transmission
– Which types of hosts must or should transfer logs to a log management infrastructure
– Which types of entries and data characteristics must or should be transferred from
individual hosts to a log management infrastructure
– How log data must or should be transferred (e.g., which protocols are permissible),
including out-of-band methods where appropriate (e.g., for standalone systems)


– How frequently log data should be transferred from individual hosts to a log management
infrastructure (e.g., real-time, every 5 minutes, every hour)
– How the confidentiality, integrity, and availability of each type of log data must or should
be protected while in transit, including whether a separate logging network should be
used
- Log storage and disposal
– How often logs should be rotated
– How the confidentiality, integrity, and availability of each type of log data must or should
be protected while in storage (at both the system level and the infrastructure level)
– How long each type of log data must or should be preserved (at both the system level and
the infrastructure level)
– How unneeded log data must or should be disposed of (at both the system level and the
infrastructure level)
– How much log storage space must or should be available (at both the system level and the
infrastructure level)
– How log preservation requests, such as a legal requirement to prevent the alteration and
destruction of particular log records, must be handled (e.g., how the impacted logs must
be marked, stored, and protected)
- Log analysis
– How often each type of log data must or should be analyzed (at both the system level and
the infrastructure level)
– Who must or should be able to access the log data (at both the system level and the
infrastructure level), and how such accesses should be logged
– What must or should be done when suspicious activity or an anomaly is identified
– How the confidentiality, integrity, and availability of the results of log analysis (e.g., alerts,
reports) must or should be protected while in storage (at both the system level and the
infrastructure level) and in transit
– How inadvertent disclosures of sensitive information recorded in logs, such as passwords or
the contents of e-mails, should be handled.
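
As one concrete example of turning a storage-and-disposal policy choice into a host-level setting, the sketch below uses Python's standard rotating log handler to rotate an application log daily and keep 30 days of history; the file name, interval, and retention count are policy assumptions.

    #!/usr/bin/env python3
    # Sketch: enforce a rotation-and-retention policy choice at the host level with a
    # timed rotating handler: rotate daily at midnight and keep 30 days of history.
    import logging
    from logging.handlers import TimedRotatingFileHandler

    handler = TimedRotatingFileHandler(
        "app-security.log",   # assumed local log file covered by the policy
        when="midnight",      # rotate once per day
        backupCount=30,       # older rotated files are deleted (system-level disposal)
    )
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

    logger = logging.getLogger("app.security")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("authentication attempt user=%s src=%s result=%s", "alice", "192.0.2.10", "success")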


Unit 4

1. State the benefits & objectives of information security audit.

ANS: Benefits and Objectives:


Audit trails can provide a means to help accomplish several security-related
objectives, including individual accountability, reconstruction of events, intrusion detection,
and problem analysis.
1.1 Individual Accountability:
Audit trails are a technical mechanism that help managers maintain individual accountability.
By advising users that they are personally accountable for their actions, which are tracked by
an audit trail that logs user activities, managers can help promote proper user behavior.
Users are less likely to attempt to circumvent security policy if they know that their actions
will be recorded in an audit log. For example, audit trails can be used in concert with access
controls to identify and provide information about users suspected of improper modification
of data (e.g., introducing errors into a database). An audit trail may record "before" and
"after" versions of records. (Depending upon the size of the file and the capabilities of the
audit logging tools, this may be very resource intensive.) Comparisons can then be made
between the actual changes made to records and what was expected. This can help
management determine if errors were made by the user, by the system or application
software, or by some other source. Audit trails work in concert with logical access controls,
which restrict use of system resources. Granting users access to particular resources usually
means that they need that access to accomplish their job. Authorized access, of course, can be
misused, which is where audit trail analysis is useful. While users cannot be prevented from
using resources to which they have legitimate access authorization, audit trail analysis is used
to examine their actions. For example, consider a personnel office in which users have access
to those personnel records for which they are responsible. Audit trails can reveal that an
individual is printing far more records than the average user, which could indicate the selling
of personal data. Another example may be an engineer who is using a computer for the
design of a new product. Audit trail analysis could reveal that an outgoing modem was used
extensively by the engineer the week before quitting. This could be used to investigate
whether proprietary data files were sent to an unauthorized party.
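
The printing example above can be approximated with a very small script: flag any user whose recorded activity is far above the average. The counts and the threshold are invented; in practice the figures would be parsed from the audit trail itself.

    #!/usr/bin/env python3
    # Sketch: flag users whose audit-trail activity (records printed) is far above the
    # average for the office. Counts and the threshold are hypothetical.
    from statistics import mean

    records_printed = {"asmith": 42, "bjones": 38, "clee": 51, "dpatel": 640}

    average = mean(records_printed.values())
    for user, count in records_printed.items():
        if count > 3 * average:    # assumed threshold: three times the average
            print(f"Review required: {user} printed {count} records (average {average:.0f})")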
1.2 Reconstruction of Events:
Audit trails can also be used to reconstruct events after a problem has occurred. Damage can
be more easily assessed by reviewing audit trails of system activity to pinpoint how, when,
and why normal operations ceased. Audit trail analysis can often distinguish between
operator-induced errors (during which the system may have performed exactly as instructed)
or system-created errors (e.g., arising from a poorly tested piece of replacement code). If, for
example, a system fails or the integrity of a file (either program or data) is questioned, an
analysis of the audit trail can reconstruct the series of steps taken by the system, the users,
and the application. Knowledge of the conditions that existed at the time of, for example, a
system crash, can be useful in avoiding future outages. Additionally, if a technical problem
occurs (e.g., the corruption of a data file) audit trails can aid in the recovery process (e.g., by
using the record of changes made to reconstruct the file).
1.3 Intrusion Detection:
Intrusion detection refers to the process of identifying attempts to penetrate a system
and gain unauthorized access.

If audit trails have been designed and implemented to record appropriate access information,
they can assist in intrusion detection. Although normally thought of as a real-time effort,
intrusions can be detected in real time, by examining audit records as they are created (or
through the use of other kinds of warning flags/notices), or after the fact (e.g., by examining
audit records in a batch process). Real-time intrusion detection is primarily aimed at outsiders
attempting to gain unauthorized access to the system. It may also be used to detect changes in
the system's performance indicative of, for example, a virus or worm attack. There may be
difficulties in implementing real-time auditing, including unacceptable system performance.
After-the-fact identification may indicate that unauthorized access was attempted (or was
successful). Attention can then be given to damage assessment or reviewing controls that
were attacked.

1.4 Problem Analysis:


Audit trails may also be used as on-line tools to help identify problems other than intrusions
as they occur. This is often referred to as real-time auditing or monitoring. If a system or
application is deemed to be critical to an organization's business or mission, real-time
auditing may be implemented to monitor the status of these processes (although, as noted
above, there can be difficulties with real-time analysis). An analysis of the audit trails may be
able to verify that the system operated normally (i.e., that an error may have resulted from
operator error, as opposed to a system-originated error). Such use of audit trails may be
complemented by system performance logs. For example, a significant increase in the use of
system resources (e.g., disk file space or outgoing modem use) could indicate a security
problem.
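As a small illustration of real-time monitoring complementing audit trails, the sketch below
checks disk file space usage against a threshold and raises an alert that would prompt a review
of the audit trail; the path and threshold are illustrative assumptions.

import shutil

def check_disk_usage(path="/", alert_percent=90.0):
    """Real-time style check: warn when disk file space usage crosses a threshold,
    since a sudden jump in resource use could indicate a security problem."""
    usage = shutil.disk_usage(path)
    percent_used = usage.used / usage.total * 100
    if percent_used >= alert_percent:
        return "ALERT: %s is %.1f%% full -- review the audit trail" % (path, percent_used)
    return "OK: %s is %.1f%% full" % (path, percent_used)

if __name__ == "__main__":
    print(check_disk_usage("/"))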

2. List the principles of Auditing.

ANS:
General principles of an audit:
The auditor should comply with the Code of Ethics for Members issued by the
International Federation of Accountants.
Ethical principles governing the auditor's professional responsibilities are:
a) Independence;
b) Integrity;


c) Objectivity;
d) Professional competence and due care;
e) Confidentiality;
f) Professional behaviour; and
g) Technical standards
The auditor should conduct an audit in accordance with the International Standards on
Auditing (ISAs).
These contain basic principles and essential procedures together with related guidance
in the form of explanatory and other materials.
The auditor should plan and perform an audit with an attitude of professional
scepticism recognizing that circumstances may exist that cause the financial
statements to be materially misstated. An attitude of professional scepticism means
the auditor makes a critical assessment, with a questioning mind, of the validity of
audit evidence obtained and is alert to audit evidence that contradicts or brings into
question the reliability of documents or management representations. For example,
an attitude of professional scepticism is necessary throughout the audit process for the
auditor to reduce the risk of overlooking suspicious circumstances, of over
generalizing when drawing conclusions from audit observations, and of using faulty
assumptions in determining the nature, timing and extent of the audit procedures and
evaluating the results thereof.
In planning and performing an audit, the auditor neither assumes that management is
dishonest nor assumes unquestioned honesty. Accordingly, representations from
management are not a substitute for obtaining sufficient appropriate audit evidence to
be able to draw reasonable conclusions on which to base the audit opinion.

4. State & explain any 4 interdependencies of audit trails.


ANS:
Interdependencies:
The ability to audit supports many of the controls presented in this handbook. The following
paragraphs describe some of the most important interdependencies.
Policy:
The most fundamental interdependency of audit trails is with policy. Policy dictates who is
authorized access to what system resources. Therefore it specifies, directly or indirectly, what
violations of policy should be identified through audit trails.
Assurance:
System auditing is an important aspect of operational assurance. The data recorded into an
audit trail is used to support a system audit. The analysis of audit trail data and the process of


auditing systems are closely linked; in some cases, they may even be the same thing. In most
cases, the analysis of audit trail data is a critical part of maintaining operational assurance.
Identification and Authentication:
Audit trails are tools often used to help hold users accountable for their actions. To be held
accountable, the users must be known to the system (usually accomplished through the
identification and authentication process). However, as mentioned earlier, audit trails record
events and associate them with the perceived user (i.e., the user ID). If a user is impersonated,
the audit trail will establish events but not the identity of the user.
Logical Access Control:
Logical access controls restrict the use of system resources to authorized users. Audit trails
complement this activity in two ways. First, they may be used to identify breakdowns in
logical access controls or to verify that access control restrictions are behaving as expected,
for example, if a particular user is erroneously included in a group permitted access to a file.
Second, audit trails are used to audit use of resources by those who have legitimate access.
Additionally, to protect audit trail files, access controls are used to ensure that audit trails are
not modified.
Contingency Planning:
Audit trails assist in contingency planning by leaving a record of activities performed on the
system or within a specific application. In the event of a technical malfunction, this log can
be used to help reconstruct the state of the system (or specific files).
Incident Response:
If a security incident occurs, such as hacking, audit records and other intrusion detection
methods can be used to help determine the extent of the incident. For example, was just one
file browsed, or was a Trojan horse planted to collect passwords?
Cryptography:
Digital signatures can be used to protect audit trails from undetected modification. (This does
not prevent deletion or modification of the audit trail, but will provide an alert that the audit
trail has been altered.) Digital signatures can also be used in conjunction with adding secure
time stamps to audit records. Encryption can be used if confidentiality of audit trail
information is important.
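The paragraph above describes digital signatures; a related, simpler technique is sketched below
using a keyed hash (HMAC) over each audit record, which gives a similar tamper-evidence
property under the assumption that the signing key is kept away from anyone able to alter the
log. The record format and key are illustrative only, not a prescribed mechanism.

import hashlib
import hmac

SECRET_KEY = b"example-key-kept-off-the-audited-system"  # illustrative only

def seal_record(record):
    """Append a keyed hash to an audit record so later modification can be detected."""
    tag = hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()
    return record + "|" + tag

def verify_record(sealed):
    """Recompute the tag and compare; a mismatch signals that the record was altered."""
    record, _, tag = sealed.rpartition("|")
    expected = hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

entry = seal_record("2015-03-01T09:00:07Z user=mallory event=LOGON result=SUCCESS")
print(verify_record(entry))                               # True
print(verify_record(entry.replace("mallory", "alice")))   # False -- tampering detected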

5. Write a note on cost considerations in audit trails


ANS:
Cost Considerations :
Audit trails involve many costs. First, some system overhead is incurred recording the
audit trail. Additional system overhead will be incurred storing and processing the
records. The more detailed the records, the more overhead is required. Another cost
involves human and machine time required to do the analysis. This can be minimized by


using tools to perform most of the analysis. Many simple analyzers can be constructed
quickly (and cheaply) from system utilities, but they are limited to audit reduction and
identifying particularly sensitive events. More complex tools that identify trends or
sequences of events are slowly becoming available as off-the-shelf software.
(If complex tools are not available for a system, development may be prohibitively
expensive. Some intrusion detection systems, for example, have taken years to develop.)
The final cost of audit trails is the cost of investigating anomalous events. If the system is
identifying too many events as suspicious, administrators may spend undue time
reconstructing events and questioning personnel.

7. Explain Audit Trails. What are the two types of audit records explain in detail?
An audit trail (also called audit log) is a security-relevant chronological record, set of
records, and/or destination and source of records that provide documentary evidence of
the sequence of activities that have affected at any time a specific operation, procedure, or
event. Audit records typically result from activities such as financial transactions,
scientific research and health care data transactions, or communications by individual
people, systems, accounts, or other entities.
A system can maintain several different audit trails concurrently. There are typically two
kinds of audit records, (1) an event-oriented log and (2) a record of every keystroke, often
called keystroke monitoring. Event-based logs usually contain records describing system
events, application events, or user events. An audit trail should include sufficient
information to establish what events occurred and who (or what) caused them. In general,
an event record should specify when the event occurred, the user ID associated with the
event, the program or command used to initiate the event, and the result. Date and time
can help determine if the user was a masquerader or the actual person specified.
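A minimal sketch of writing such an event-oriented audit record, capturing the fields listed
above (time of the event, user ID, command, and result); the JSON layout and file name are
illustrative assumptions rather than a prescribed format.

import json
from datetime import datetime, timezone

def write_audit_event(log_path, user_id, command, result):
    """Append one event-oriented audit record: when the event occurred, the user ID
    associated with it, the command used to initiate it, and the result."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "command": command,
        "result": result,
    }
    with open(log_path, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")

write_audit_event("audit.log", "alice", "rm personnel.dat", "DENIED")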
1. Keystroke Monitoring:-
Keystroke monitoring is the process used to view or record both the keystrokes entered by
a computer user and the computer's response during an interactive session. Keystroke
monitoring is usually considered a special case of audit trails. Examples of keystroke
monitoring would include viewing characters as they are typed by users, reading users'
electronic mail, and viewing other recorded information typed by users. Some forms of
routine system maintenance may record user keystrokes. This could constitute keystroke
monitoring if the keystrokes are preserved along with the user identification so that an
administrator could determine the keystrokes entered by specific users. Keystroke
monitoring is conducted in an effort to protect systems and data from intruders who
access the systems without authority or in excess of their assigned authority. Monitoring
keystrokes typed by intruders can help administrators assess and repair damage caused by
intruders.

2. Audit Events :-


The system itself enforces certain aspects of policy (particularly system-specific policy)
such as access to files and access to the system itself. Monitoring the alteration of systems
configuration files that implement the policy is important. If special accesses (e.g.,
security administrator access) have to be used to alter configuration files, the system
should generate audit records whenever these accesses are used. Flexibility is a critical
feature of audit trails. Ideally (from a security point of view), a system administrator
would have the ability to monitor all system and user activity, but could choose to log
only certain functions at the system level, and within certain applications. The decision of
how much to log and how much to review should be a function of application/data
sensitivity and should be decided by each functional manager/application owner with
guidance from the system administrator and the computer security manager/officer,
weighing the costs and benefits of the logging.

2.1 System-Level Audit Trails:-


If a system-level audit capability exists, the audit trail should capture, at a minimum, any
attempt to log on (successful or unsuccessful), the log-on ID, date and time of each log-on
attempt, date and time of each log-off, the devices used, and the function(s) performed
once logged on (e.g., the applications that the user tried, successfully or unsuccessfully, to
invoke). System-level logging also typically includes information that is not specifically
security-related, such as system operations, cost-accounting charges, and network
performance.
2.2 Application-Level Audit Trails:-
System-level audit trails may not be able to track and log events within applications, or
may not be able to provide the level of detail needed by application or data owners, the
system administrator, or the computer security manager. In general, application-level
audit trails monitor and log user activities, including data files opened and closed, specific
actions, such as reading, editing, and deleting records or fields, and printing reports. Some
applications may be sensitive enough from a data availability, confidentiality, and/or
integrity perspective that a "before" and "after" picture of each modified record (or the
data element(s) changed within a record) should be captured by the audit trail.
2.3 User Audit Trails:-
User audit trails can usually log:
• all commands directly initiated by the user;
• all identification and authentication attempts; and
• files and resources accessed.
It is most useful if options and parameters are also recorded from commands. It is much
more useful to know that a user tried to delete a log file (e.g., to hide unauthorized
actions) than to know the user merely issued the delete command, possibly for a personal
data file.


9. What are the implementations issues regarding Audit Trail?


Implementation Issues:-
Audit trail data requires protection, since the data should be available for use when
needed and is not useful if it is not accurate. Also, the best planned and implemented
audit trail is of limited value without timely review of the logged data. Audit trails may be
reviewed periodically, as needed (often triggered by occurrence of a security event),
automatically in realtime, or in some combination of these. System managers and
administrators, with guidance from computer security personnel, should determine how
long audit trail data will be maintained - either on the system or in archive files.
Following are examples of implementation issues that may have to be addressed when
using audit trails.
1. Protecting Audit Trail Data:-
Access to on-line audit logs should be strictly controlled. Computer security managers
and system administrators or managers should have access for review purposes; however,
security and/or administration personnel who maintain logical access functions may have
no need for access to audit logs. It is particularly important to ensure the integrity of audit
trail data against modification. The audit trail files need to be protected since, for
example, intruders may try to "cover their tracks" by modifying audit trail records. Audit
trail records should be protected by strong access controls to help prevent unauthorized
access.

2. Review of Audit Trails:-


Audit trails can be used to review what occurred after an event, for periodic reviews, and
for real-time analysis. Reviewers should know what to look for to be effective in spotting
unusual activity. They need to understand what normal activity looks like. Audit trail
review can be easier if the audit trail function can be queried by user ID, terminal ID,
application name, date and time, or some other set of parameters to run reports of selected
information.

3. Tools for Audit Trail Analysis:-


Many types of tools have been developed to help to reduce the amount of information
contained in audit records, as well as to distill useful information from the raw data.
Especially on larger systems, audit trail software can create very large files, which can be
extremely difficult to analyze manually. The use of automated tools is likely to be the
difference between unused audit trail data and a robust program. Some of the types of
tools include:


• Audit reduction tools are preprocessors designed to reduce the volume of audit
records to facilitate manual review. Before a security review, these tools can remove
many audit records known to have little security significance.
• Trends/variance-detection tools look for anomalies in user or system behavior. It is
possible to construct more sophisticated processors that monitor usage trends and detect
major variations.
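A minimal sketch of an audit reduction preprocessor of the kind described in the first bullet:
it simply filters out record types assumed (for illustration only) to have little security
significance before manual review.

# Event types assumed, for illustration only, to have little security significance.
ROUTINE_EVENTS = {"HEARTBEAT", "PAGE_FAULT", "CRON_TICK"}

def reduce_audit_trail(records):
    """Audit reduction: drop routine records so manual review can focus on the rest."""
    return [record for record in records if record["event"] not in ROUTINE_EVENTS]

raw_records = [
    {"event": "HEARTBEAT", "user": "system"},
    {"event": "LOGON_FAILURE", "user": "mallory"},
    {"event": "CRON_TICK", "user": "system"},
    {"event": "FILE_DELETE", "user": "bob"},
]
print(reduce_audit_trail(raw_records))  # only the log-on failure and file deletion remain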

4. Interdependencies:-
The ability to audit supports many of the controls presented in this handbook. The
following paragraphs describe some of the most important interdependencies.
Policy: The most fundamental interdependency of audit trails is with policy. Policy
dictates who is authorized access to what system resources.
Assurance: System auditing is an important aspect of operational assurance. The data
recorded into an audit trail is used to support a system audit.
Contingency Planning: Audit trails assist in contingency planning by leaving a record of
activities performed on the system or within a specific application.
Cryptography: Digital signatures can be used to protect audit trails from undetected
modification. (This does not prevent deletion or modification of the audit trail, but will
provide an alert that the audit trail has been altered.) Digital signatures can also be used in
conjunction with adding secure time stamps to audit records. Encryption can be used if
confidentiality of audit trail information is important.

5. Cost Considerations:-
Audit trails involve many costs. First, some system overhead is incurred recording the
audit trail. Additional system overhead will be incurred storing and processing the
records. The more detailed the records, the more overhead is required. Another cost
involves human and machine time required to do the analysis. The final cost of audit trails
is the cost of investigating anomalous events. If the system is identifying too many events
as suspicious, administrators may spend undue time reconstructing events and
questioning personnel.

10. Write a note on interdependences in Audit Trial.


The ability to audit supports many of the controls presented in this handbook. The
following paragraphs describe some of the most important interdependencies.
Policy:- The most fundamental interdependency of audit trails is with policy. Policy
dictates who is authorized access to what system resources. Therefore it specifies, directly
or indirectly, what violations of policy should be identified through audit trails.


Assurance:- System auditing is an important aspect of operational assurance. The data


recorded into an audit trail is used to support a system audit. The analysis of audit trail
data and the process of auditing systems are closely linked; in some cases, they may even
be the same thing. In most cases, the analysis of audit trail data is a critical part of
maintaining operational assurance.
Identification and Authentication:-Audit trails are tools often used to help hold users
accountable for their actions. To be held accountable, the users must be known to the
system (usually accomplished through the identification and authentication process).
However, as mentioned earlier, audit trails record events and associate them with the
perceived user (i.e., the user ID). If a user is impersonated, the audit trail will establish
events but not the identity of the user.
Logical Access Control:-Logical access controls restrict the use of system resources to
authorized users. Audit trails complement this activity in two ways. First, they may be
used to identify breakdowns in logical access controls or to verify that access control
restrictions are behaving as expected, for example, if a particular user is erroneously
included in a group permitted access to a file. Second, audit trails are used to audit use of
resources by those who have legitimate access. Additionally, to protect audit trail files,
access controls are used to ensure that audit trails are not modified.
Contingency Planning:-Audit trails assist in contingency planning by leaving a record of
activities performed on the system or within a specific application. In the event of a
technical malfunction, this log can be used to help reconstruct the state of the system (or
specific files).
Incident Response:-If a security incident occurs, such as hacking, audit records and other
intrusion detection methods can be used to help determine the extent of the incident. For
example, was just one file browsed, or was a Trojan horse planted to collect passwords?
Cryptography:-Digital signatures can be used to protect audit trails from undetected
modification. (This does not prevent deletion or modification of the audit trail, but will
provide an alert that the audit trail has been altered.) Digital signatures can also be used in
conjunction with adding secure time stamps to audit records. Encryption can be used if
confidentiality of audit trail information is important.
11. Explain the concept of Business Continuity Planning with its different phases.
Ans. Business continuity is the process of creating systems of prevention and recovery to
deal with potential threats to a company.
A business continuity plan is a plan to continue operations if a place of business is
affected by different levels of disaster which can be localized short term disasters, to days
long building wide problems, to a permanent loss of a building. Such a plan typically
explains how the business would recover its operations or move operations to another
location after damage by events like natural disasters, theft, or flooding. For example, if a
fire destroys an office building or data center, the people and business or data center
operations would relocate to a recovery site.
Business continuity planning can seem intimidating, leaving you to ask, “Where do I
begin?” Fortunately, business continuity planning falls neatly into five phases, each of


which includes steps that, when followed, provide the foundation of any good plan. Let’s
take a look at the five phases.
Phase 1: Identify the risks
The first phase is to conduct a risk assessment, identifying any potential hazards that
could disrupt your business. Consider any type of risk your team can imagine, including
natural threats, human threats and technical threats.
Phase 2: Analyze the risks you face
Next, you’ll perform a business impact analysis (BIA) to gauge the impact of each
potential risk. For each risk, determine how severe the impact would be and how long
your business could survive without those processes running. Consider what is absolutely
necessary for recovery, how quickly it needs to happen, what your minimum
operating resources are, and any dependencies, either internal or external.
Phase 3: Design your strategy
Now it’s time to figure out strategies to mitigate interruptions and to quickly recover from
them. Consider everything you'll need to protect your people, your assets and your
functions. Start by comparing your current recovery capabilities to your business
requirements and plan how you will fill that gap.

Phase 4: Plan development and execution


Finally, it's time to create a concise, well organized and easy-to-follow document or set of
documents. Consider everyone that may use the plan, and document it in a way that will
be most useful when your business is suffering an interruption. Then publish the plan,
socialize it and train your staff on how to use it.
Phase 5: Measure your success by testing
A plan isn't truly a plan until it has been thoroughly tested. There are a variety of tests
you should perform, with each providing different information on how to improve your
plan. Tests can range from a checklist test, to a walk-through performed by your team as
if there were an actual event, to emergency evacuation drills and, when ready, a full
recovery simulation test, which is more complex and involves your team simulating an
emergency and using the actual equipment, facilities and supplies just as in a real disaster.
After each test, you can make any necessary modifications to your plan to keep it current.
Once you’ve completed testing, the cycle is complete and begins again. Periodically
reassess risks, impacts and strategies, make corrections as necessary, and re-test
frequently to ensure the most effective plan.
12. Explain the concept of Business Continuity Planning and Recovery Plan in
industry.
13. Explain the various backup & recovery techniques for applications.
14. Write a short note on logical security audit.


UNIT 5

1. What is forensic science? What is the need of it?


Ans. Forensics is the term given to an investigation of a crime using scientific means. It is
also used as the name of the application of scientific knowledge to legal matters.
History:
Forensic science has developed over the past 300 years or so, and its processes continue
to improve and evolve today as science and technology find better and more accurate
techniques. In 1929 the first American forensic lab was created in Los Angeles by the
police department.
Over the last decade, the number of crimes that involve computers has grown, spurring an
increase in companies and products that aim to assist law enforcement in using computer-
based evidence to determine the who, what, where, when, and how for crimes.
As a result, computer and network forensics has evolved to assure proper presentation of
computer crime evidentiary data into court. Forensic tools and techniques are most often
thought of in the context of criminal investigations and computer security incident
handling used to respond to an event by investigating suspect systems, gathering and
preserving evidence, reconstructing events, and assessing the current state of an event.
However, forensic tools and techniques are also useful for many other types of tasks, such
as the following:
Operational Troubleshooting. Many forensic tools and techniques can be applied to
troubleshooting operational issues, such as finding the virtual and physical location of a
host with an incorrect network configuration, resolving a functional problem with an
application, and recording and reviewing the current OS and application configuration
settings for a host.

Log Monitoring. Various tools and techniques can assist in log monitoring, such as
analyzing log entries and correlating log entries across multiple systems. This can assist
in incident handling, identifying policy violations, auditing, and other efforts.
Data Recovery. There are dozens of tools that can recover lost data from systems,
including data that has been accidentally or purposely deleted or otherwise modified. The
amount of data that can be recovered varies on a case-by-case basis.
Data Acquisition. Some organizations use forensics tools to acquire data from hosts that
are being redeployed or retired. For example, when a user leaves an organization, the data


from the user's workstation can be acquired and stored in case it is needed in the future.
The workstation's media can then be sanitized to remove all of the original user's data.
Due Diligence/Regulatory Compliance. Existing and emerging regulations require many
organizations to protect sensitive information and maintain certain records for audit
purposes. Also, when protected information is exposed to other parties, organizations may
be required to notify other agencies or impacted individuals. Forensics can help
organizations exercise due diligence and comply with such requirements.

2. Who are the primary users of forensic tools and techniques? Also state the various
factors to be considered when selecting an external or internal party?

Answer:-
Practically every organization needs to have some capability to perform computer
and network forensics. Without such a capability, an organization will have
difficulty determining what events have occurred within its systems and networks, such
as exposures of protected, sensitive data. Although the extent of this need varies, the
primary users of forensic tools and techniques within an organization usually can be
divided into the following three groups.

Investigators.
Investigators within an organization are most often from the Office of Inspector
General (OIG), and they are responsible for investigating allegations of misconduct. For
some organizations, the OIG immediately takes over the investigation of any event that is
suspected to involve criminal activity. The OIG typically uses many forensic techniques
and tools. Other investigators within an organization might include legal advisors and
members of the human resources department. Law enforcement officials and others
outside the organization that might perform criminal investigations are not considered
part of an organization's internal group of investigators.
IT Professionals.
This group includes technical support staff and system, network, and security
administrators. They use a small number of forensic techniques and tools specific to their
area of expertise during their routine work (e.g., monitoring, troubleshooting, data
recovery).
Incident Handlers.
This group responds to a variety of computer security incidents, such as unauthorized
data access, inappropriate system usage, malicious code infections, and denial of service
attacks. Incident handlers typically use a wide variety of forensic techniques and tools
during their investigations.


3. What are the different groups in which primary users of forensic tools and
techniques within an organization usually can be divided into ?

Answer:-

Investigators.
Investigators within an organization are most often from the Office of Inspector
General (OIG), and they are responsible for investigating allegations of misconduct. For
some organizations, the OIG immediately takes over the investigation of any event that is
suspected to involve criminal activity. The OIG typically uses many forensic techniques
and tools. Other investigators within an organization might include legal advisors and
members of the human resources department.
IT Professionals.
This group includes technical support staff and system, network, and security
administrators. They use a small number of forensic techniques and tools specific to their
area of expertise during their routine work (e.g., monitoring, troubleshooting, data
recovery).

Incident Handlers.
This group responds to a variety of computer security incidents, such as unauthorized
data access, inappropriate system usage, malicious code infections, and denial of service
attacks. Incident handlers typically use a wide variety of forensic techniques and tools
during their investigations.

Cost.
There are many potential costs. Software, hardware, and equipment used to collect
and examine data may carry significant costs (e.g., purchase price, software updates and
upgrades, maintenance), and may also require additional physical security measures to
safeguard them from tampering.

Response Time.
Personnel located on-site might be able to initiate computer forensic activity more
quickly than could off-site personnel. For organizations with geographically dispersed
physical locations, off-site outsourcers located near distant facilities might be able to
respond more quickly than personnel located at the organization's headquarters.


Data Sensitivity
Because of data sensitivity and privacy concerns, some organizations might be
reluctant to allow external parties to image hard drives and perform other actions that
provide access to data.
For example -health care information, financial records.

4. What are the key recommendations of establishing and organizing a forensic


capability?

Answer:-

Organizations should have a capability to perform computer and network forensics.


Forensics is needed for various tasks within an organization, including investigating
crimes and inappropriate behavior, reconstructing computer security incidents,
troubleshooting operational problems, supporting due diligence for audit record
maintenance, and recovering from accidental system damage. Without such a capability,
an organization will have difficulty determining what events have occurred within its
systems and networks, such as exposures of protected, sensitive data. Also, handling
evidence in a forensically sound manner puts decision makers in a position where they
can confidently take the necessary actions.

Organizations should determine which parties should handle each aspect of forensics.
Most organizations rely on a combination of their own staff and external parties to
perform forensic tasks. Organizations should decide which parties should take care of
which tasks based on skills and abilities, cost, response time, and data sensitivity.

Incident handling teams should have robust forensic capabilities.


More than one team member should be able to perform each typical forensic activity.
Hands-on exercises and IT and forensic training courses can be helpful in building and
maintaining skills, as can demonstrations of new tools and technologies.

Many teams within an organization should participate in forensics.


Individuals performing forensic actions should be able to reach out to other teams and
individuals within an organization, as needed, for additional assistance. Examples of


teams that may provide assistance in these efforts include IT professionals, management,
legal advisors, human resources personnel, auditors, and physical security staff. Members
of these teams should understand their roles and responsibilities in forensics, receive
training and education on forensic-related policies, guidelines, and procedures, and be
prepared to cooperate with and assist others on forensic actions.

Forensic considerations should be clearly addressed in policies.


At a high level, policies should allow authorized personnel to monitor systems and
networks and perform investigations for legitimate reasons under appropriate
circumstances. Organizations may also have a separate forensic policy for incident
handlers and others with forensic roles that provides more detailed rules for appropriate
behavior. Everyone who may be called upon to assist with any forensic efforts should be
familiar with and understand the forensic policy.

Organizations should create and maintain guidelines and procedures for performing
forensic tasks.
The guidelines should include general methodologies for investigating an incident
using forensic techniques, and step-by-step procedures should explain how to perform
routine tasks. The guidelines and procedures should support the admissibility of evidence
into legal proceedings. Because electronic logs and other records can be altered or
otherwise manipulated, organizations should be prepared, through their policies,
guidelines, and procedures, to demonstrate the reliability and integrity of such records.
The guidelines and procedures should also be reviewed regularly and maintained so that
they are accurate.

5. Write a note on forensic process.


Answer:-
The most common goal of performing forensics is to gain a better understanding of an
event of interest by finding and analyzing the facts related to that event. As described in
Section 2.1, forensics may be needed in many different situations, such as evidence
collection for legal proceedings and internal disciplinary actions, and handling of
malware incidents and unusual operational problems. Regardless of the need, forensics
should be performed using the four-phase process shown in Figure 3-1. The exact details
of these steps may vary based on the specific need for forensics; the organization's
policies, guidelines, and procedures should indicate any variations from the standard
procedure.


Data Collection
The first step in the forensic process is to identify potential sources of data and
acquire data from them. Section 3.1.1 describes the variety of data sources available and
discusses actions that organizations can take to support the ongoing collection of data
for forensic purposes. Section 3.1.2 describes the recommended steps for collecting data,
including additional actions necessary to support legal or internal disciplinary
proceedings. Section 3.1.3 discusses incident response considerations, emphasizing
the need to weigh the value of collected data against the costs and impact to the
organization of the collection process.
Identifying Possible Sources of Data
The increasingly widespread use of digital technology for both professional and
personal purposes has led to an abundance of data sources. The most obvious and
common sources of data are desktop computers, servers, network storage devices, and
laptops.

Acquiring the Data


After identifying potential data sources, the analyst needs to acquire the data from
the sources. Data acquisition should be performed using a three-step process: developing
a plan to acquire the data, acquiring the data, and verifying the integrity of the acquired
data.

1. Develop a plan to acquire the data. Developing a plan is an important first step in most
cases because there are multiple potential data sources. The analyst should create a plan
that prioritizes the sources, establishing the order in which the data should be acquired.
Important factors for prioritization include the following:
Likely Value.
Based on the analyst's understanding of the situation and previous experience in
similar situations, the analyst should be able to estimate the relative likely value of each
potential data source.
Volatility. Volatile data refers to data on a live system that is lost after a computer is
powered down or due to the passage of time.
Amount of Effort Required.
The amount of effort required to acquire different data sources may vary widely.
For example, acquiring data from a network router would probably require much less
effort than acquiring data from an ISP.


2. Acquire the data.


If the data has not already been acquired by security tools, analysis tools, or other
means, the general process for acquiring data involves using forensic tools to collect
volatile data, duplicating non-volatile data sources to collect their data, and securing the
original non-volatile data sources. Data acquisition can be performed either locally or
over a network.
3. Verify the integrity of the data. After the data has been acquired, its integrity should be
verified. It is particularly important for an analyst to prove that the data has not been
tampered with if it might be needed for legal reasons.
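A minimal sketch of the integrity-verification step, assuming the acquired image is an ordinary
file that can be read back: both the original source and the image are hashed and the digests
compared. SHA-256 is used here for illustration; in practice the algorithm and procedure would
follow the organization's forensic guidelines.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file (or acquired image) in chunks so large acquisitions fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as source:
        for chunk in iter(lambda: source.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def integrity_verified(original_path, image_path):
    """The acquired image is trusted only if its digest matches that of the original."""
    return sha256_of(original_path) == sha256_of(image_path)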

Examination
After data has been collected, the next phase is to examine the data, which involves
assessing and extracting the relevant pieces of information from the collected data. This
phase may also involve bypassing or mitigating OS or application features that obscure
data and code, such as data compression, encryption, and access control mechanisms. An
acquired hard drive may contain hundreds of thousands of data files; identifying the data
files that contain information of interest, including information concealed through file
compression and access control, can be a daunting task. In addition, data files of interest
may contain extraneous information that should be filtered.

Analysis
Once the relevant information has been extracted, the analyst should study and
analyze the data to draw conclusions from it. The foundation of forensics is using a
methodical approach to reach appropriate conclusions based on the available data or
determine that no conclusion can yet be drawn. The analysis should include identifying
people, places, items, and events, and determining how these elements are related so that
a conclusion can be reached. Often, this effort will include correlating data among
multiple sources. Tools such as centralized logging and security event management
software can facilitate this process by automatically gathering and correlating the data.
Reporting
The final phase is reporting, which is the process of preparing and presenting the
information resulting from the analysis phase. Many factors affect reporting, including
the following:

Alternative Explanations.
When the information regarding an event is incomplete, it may not be possible to
arrive at a definitive explanation of what happened. When an event has two or more
plausible explanations, each should be given due consideration in the reporting process.


Analysts should use a methodical approach to attempt to prove or disprove each possible
explanation that is proposed.

Audience Consideration.
Knowing the audience to which the data or information will be shown is important.
An incident requiring law enforcement involvement requires highly detailed reports of all
information gathered, and may also require copies of all evidentiary data obtained. A
system administrator might want to see network traffic and related statistics in great
detail. Senior management might simply want a high-level overview of what happened,
such as a simplified visual representation of how the attack occurred, and what should be
done to prevent similar incidents.

Actionable Information.
Reporting also includes identifying actionable information gained from data that may
allow an analyst to collect new sources of information. For example, a list of contacts
may be developed from the data that might lead to additional information about an
incident or crime. Also, information might be obtained that could prevent future events,
such as a backdoor on a system that could be used for future attacks, a crime that is being
planned, a worm scheduled to start spreading at a certain time, or a vulnerability that
could be exploited.

6. Write a note on forensic toolkit.


Answer:-
Analysts should have access to various tools that enable them to perform examinations
and analysis of data, as well as some collection activities. Many forensic products allow
the analyst to perform a wide range of processes to analyze files and applications, as well
as collecting files, reading disk images, and extracting data from files. The forensic
toolkit should contain applications that can accomplish data examination and analysis in
many ways and can be run quickly and efficiently from floppy disks, CDs, or a forensic
workstation. The following processes are among those that an analyst should be able to
perform with a variety of tools:

Using File Viewers.


Using viewers instead of the original source applications to display the contents of
certain types of files is an important technique for scanning or previewing data, and is
more efficient because the analyst does not need native applications for viewing each type
of file. Various tools are available for viewing common types of files, and there are also
specialized tools solely for viewing graphics. If available file viewers do not support a


particular file format, the original source application should be used; if this is not
available, then it may be necessary to research the file's format and manually extract the
data from the file.

Uncompressing Files.

Compressed files may contain files with useful information, as well as other compressed
files. Therefore, it is important that the analyst locate and extract compressed files.
Uncompressing files should be performed early in the forensic process to ensure that the
contents of compressed files are included in searches and other actions. Compression
bombs can cause examination tools to fail or consume considerable resources; they might
also contain malware and other malicious payloads.

Graphically Displaying Directory Structures.


This practice makes it easier and faster for analysts to gather general information
about the contents of media, such as the type of software installed and the likely technical
aptitude of the user(s) who created the data. Most products can display Windows, Linux,
and UNIX directory structures, whereas other products are specific to Macintosh
directory structures.

Identifying Known Files.


The benefit of finding files of interest is obvious, but it is also often beneficial to
eliminate unimportant files, such as known good OS and application files, from
consideration. Analysts should use validated hash sets, such as those created by NIST's
National Software Reference Library (NSRL) project or personally created hash sets
that have been validated, as a basis for identifying known benign and malicious files.
Hash sets typically use the SHA-1 and MD5 algorithms to establish message digest
values for each known file.
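A minimal sketch of identifying known files against a hash set. The hash set here is a stand-in;
in practice it would be loaded from a validated source such as the NSRL rather than hard-coded,
and SHA-1 is used only because the text mentions it.

import hashlib
from pathlib import Path

# Illustrative hash set; in practice this would be loaded from a validated source such as
# the NSRL rather than hard-coded here.
KNOWN_BENIGN_SHA1 = {
    "da39a3ee5e6b4b0d3255bfef95601890afd80709",  # SHA-1 of an empty file, as a sample entry
}

def classify_files(directory):
    """Split files into 'known' (can be eliminated from review) and 'unknown' (worth examining)."""
    known, unknown = [], []
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        sha1 = hashlib.sha1(path.read_bytes()).hexdigest()
        (known if sha1 in KNOWN_BENIGN_SHA1 else unknown).append(path)
    return known, unknown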

Performing String Searches and Pattern Matches.


String searches aid in perusing large amounts of data to find key words or strings.
Various searching tools are available that can use Boolean, fuzzy logic, synonyms and
concepts, stemming, and other search methods. Examples of common searches include
searching for multiple words in a single file and searching for misspelled versions of
certain words. Developing concise sets of search terms for common situations can help
the analyst reduce the volume of information to review.
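A minimal sketch of a simple string search across text files, assuming the data of interest has
already been extracted to a directory; the search terms, file pattern, and case-insensitive
matching are illustrative choices rather than part of any particular search tool.

import re
from pathlib import Path

def search_files(directory, terms):
    """Simple string search: report which text files contain any of the key words."""
    pattern = re.compile("|".join(re.escape(term) for term in terms), re.IGNORECASE)
    hits = {}
    for path in Path(directory).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        matches = pattern.findall(text)
        if matches:
            hits[str(path)] = sorted({match.lower() for match in matches})
    return hits

# Example: look for a few case-insensitive key words in text files under ./evidence
print(search_files("./evidence", ["password", "confidential", "wire transfer"]))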

Accessing File Metadata.


File metadata provides details about any given file. For example, collecting the
metadata on a graphic file might provide the graphic's creation date, copyright
information, and description, and the creator's identity. Metadata for graphics
generated by a digital camera might include the make and model of the digital camera
used to take the image, as well as F-stop, flash, and aperture settings. For word processing
files, metadata could specify the author, the organization that licensed the software, when
and by whom edits were last performed, and user-defined comments.
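A minimal sketch of collecting filesystem-level metadata for a file (size, owner, and timestamps)
on a POSIX system; the file name is hypothetical. Application-level metadata such as camera
settings or document authorship requires format-specific tools and is not shown here.

import os
import pwd  # POSIX only; maps the owner's numeric UID to a user name
from datetime import datetime, timezone

def filesystem_metadata(path):
    """Collect basic filesystem metadata for a file: size, owner, and timestamps."""
    stat_result = os.stat(path)
    return {
        "size_bytes": stat_result.st_size,
        "owner": pwd.getpwuid(stat_result.st_uid).pw_name,
        "modified": datetime.fromtimestamp(stat_result.st_mtime, timezone.utc).isoformat(),
        "accessed": datetime.fromtimestamp(stat_result.st_atime, timezone.utc).isoformat(),
    }

print(filesystem_metadata("report.docx"))  # the file name is hypothetical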

7. Write a note on Examining data files.

A data file (also called a file) is a collection of information logically grouped into a single
entity and referenced by a unique name, such as a filename.

After a logical backup or bit stream imaging has been performed, the backup or image may
have to be restored to another media before the data can be examined. This is dependent on
the forensic tools that will be used to perform the analysis. Some tools can analyze data
directly from an image file, whereas others require that the backup or image be restored to a
medium first. Regardless of whether an image file or a restored image is used in the
examination, the data should be accessed only as read-only to ensure that the data being
examined is not modified and that it will provide consistent results on successive runs.
This section describes the processes involved in examining files and data, as well as
techniques that can expedite examination.

 Locating the Files

The first step in the examination is to locate the files. A disk image can capture many
gigabytes of slack space and free space, which could contain thousands of files and file
fragments. Manually extracting data from unused space can be a time-consuming and
difficult process, because it requires knowledge of the underlying filesystem format.
Fortunately, several tools are available that can automate the process of extracting data
from unused space and saving it to data files, as well as recovering deleted files and files
within a recycling bin. Analysts can also display the contents of slack space with hex
editors or special slack recovery tools.

 Extracting the Data

The rest of the examination process involves extracting data from some or all of the
files. To make sense of the contents of a file, an analyst needs to know what type of
data the file contains. The intended purpose of file extensions is to denote the nature
of the file's contents; for example, a jpg extension indicates a graphic file, and an mp3
extension indicates a music file. However, users can assign any file extension to any
type of file, such as naming a text file mysong.mp3 or omitting a file extension. In
addition, some file extensions might be hidden or unsupported on other OSs.
Therefore, analysts should not assume that file extensions are accurate.
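A minimal sketch of checking a file's leading bytes ("magic numbers") instead of trusting its
extension; the signature table is illustrative and far from exhaustive.

# A few well-known file signatures ("magic bytes"); this table is illustrative, not exhaustive.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg image",
    b"\x89PNG\r\n\x1a\n": "png image",
    b"%PDF": "pdf document",
    b"PK\x03\x04": "zip archive (also used by docx/xlsx)",
}

def sniff_type(path):
    """Identify a file by its leading bytes instead of trusting its extension."""
    with open(path, "rb") as source:
        head = source.read(16)
    for magic, description in SIGNATURES.items():
        if head.startswith(magic):
            return description
    return "unknown"

# A file named mysong.mp3 that is actually a JPEG would still be reported as 'jpeg image'.
print(sniff_type("mysong.mp3"))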

 Using a Forensic Toolkit


Analysts should have access to various tools that enable them to perform
examinations and analysis of data, as well as some collection activities. Many forensic
products allow the analyst to perform a wide range of processes to analyze files and
applications, as well as collecting files, reading disk images, and extracting data from
files. Most analysis products also offer the ability to generate reports and to log all
errors that occurred during the analysis.

 Using File Viewers.


Using viewers instead of the original source applications to display the contents of
certain types of files is an important technique for scanning or previewing data, and is
more efficient because the analyst does not need native applications for viewing each
type of file

 Uncompressing Files.
Compressed files may contain files with useful information, as well as other
compressed files.

 Graphically Displaying Directory Structures.


This practice makes it easier and faster for analysts to gather general information
about the contents of media, such as the type of software installed and the likely
technical aptitude of the user(s) who created the data.

 Identifying Known Files.


The benefit of finding files of interest is obvious, but it is also often beneficial to
eliminate unimportant files, such as known good OS and application files, from
consideration.

 Performing String Searches and Pattern Matches.


String searches aid in perusing large amounts of data to find key words or strings.

 Accessing File Metadata.


File metadata provides details about any given file. For example, collecting the
metadata on a graphic file might provide the graphic's creation date, copyright
information, and description, and the creator's identity.

8. Explain the two different techniques used for copying files from media.


Files can be copied from media using two different techniques:

 Logical Backup. A logical backup copies the directories and files of a logical
volume. It does not capture other data that may be present on the media, such as
deleted files or residual data stored in slack space.
 Bit Stream Imaging. Also known as disk imaging, bit stream imaging generates a
bit-for-bit copy of the original media, including free space and slack space. Bit
stream images require more storage space and take longer to perform than logical
backups.
If evidence may be needed for prosecution or disciplinary actions, the analyst should get a bit
stream image of the original media, label the original media, and store it securely as evidence.
All subsequent analysis should be performed using the copied media to ensure that the
original media is not modified and that a copy of the original media can always be recreated
if necessary. All steps that were taken to create the image copy should be documented. Doing
so should allow any analyst to produce an exact duplicate of the original media using the
same procedures.
When a bit stream image is executed, either a disk-to-disk or a disk-to-file copy can be
performed. A disk-to-disk copy, as its name suggests, copies the contents of the media
directly to another media. A disk-to-file copy copies the contents of the media to a single
logical data file. A disk-to-disk copy is useful since the copied media can be connected
directly to a computer and its contents readily viewed. However, a disk-to-disk copy requires
a second media similar to the original media. A disk-to-file copy allows the data file image to
be moved and backed up easily.
Numerous hardware and software tools can perform bit stream imaging and logical backups.
Hardware tools are generally portable, provide bit-by-bit images, connect directly to the drive
or computer to be imaged, and have built-in hash functions. Hardware tools can acquire data
from drives that use common types of controllers, such as Integrated Drive Electronics (IDE)
and Small Computer System Interface (SCSI). Software solutions generally consist of a
startup diskette, CD, or installed programs that run on a workstation to which the media to be
imaged is attached. Some software solutions create logical copies of files or partitions and
may ignore free or unallocated drive space, whereas others create a bit-by-bit image copy of
the media.
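A minimal sketch of a disk-to-file copy with an on-the-fly digest, so the image can later be
re-verified. This is only an illustration of the idea: a real acquisition would use a hardware
write blocker and a validated imaging tool, and the device path shown is hypothetical.

import hashlib

def image_to_file(source_device, image_path, chunk_size=1 << 20):
    """Disk-to-file copy: read the source bit for bit into a single image file,
    computing a digest on the fly so the copy can later be re-verified."""
    digest = hashlib.sha256()
    with open(source_device, "rb") as source, open(image_path, "wb") as image:
        for chunk in iter(lambda: source.read(chunk_size), b""):
            image.write(chunk)
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical invocation; the device path depends on the platform and the media attached.
# print(image_to_file("/dev/sdb", "evidence.img"))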

9. What is NESSUS? Why is it considered as the most popular vulnerability


scanner?
NESSUS

Nessus is a proprietary comprehensive vulnerability scanner developed by Tenable Network
Security. It is free of charge for personal use in a non-enterprise environment.
According to surveys done in 2009 by sectools.org, Nessus is the world's most popular
vulnerability scanner, taking first place in the 2000, 2003 and 2006 security tools
survey. Tenable Network Security estimated in 2005 that it was used by over 75,000

survey.Tenable Network Security estimated year 2005 that it was used by over 75,000
organizations worldwide.
Nessus allows scans for the following types of vulnerabilities:

 Vulnerabilities that allow a remote hacker to control or access sensitive data on a system.
 Misconfiguration (e.g. open mail relay, missing patches, etc.).
 Default passwords, a few common passwords, and blank/absent passwords on some
system accounts. Nessus can also call Hydra (an external tool) to launch a dictionary
attack.
 Denials of service against the TCP/IP stack by using malformed packets
 Preparation for PCI DSS audits
 Initially, Nessus consisted of two main components; nessusd, the Nessus daemon,
which does the scanning, and nessus, the client, which controls scans and presents the
vulnerability results to the user. Later versions of Nessus (4 and greater) utilize a web
server which provides the same functionality as the client.
 In typical operation, Nessus begins by doing a port scan with one of its four internal
portscanners (or it can optionally use Amap or Nmap) to determine which ports are
open on the target and then tries various exploits on the open ports. The vulnerability
tests, available as subscriptions, are written in NASL (Nessus Attack Scripting
Language), a scripting language optimized for custom network interaction.
 Tenable produces several dozen new vulnerability checks (called plugins) each week, usually
on a daily basis. These checks are available for free to the general public; commercial
customers are not allowed to use this Home Feed any more. The Professional Feed
(which is not free) also gives access to support and additional capabilities (e.g. audit
files, compliance tests, additional vulnerability detection plugins).
 Optionally, the results of the scan can be reported in various formats, such as plain
text, XML, HTML and LaTeX. The results can also be saved in a knowledge base for
debugging. On UNIX, scanning can be automated through the use of a command-line
client. There exist many different commercial, free and open source tools for both
UNIX and Windows to manage individual or distributed Nessus scanners.
 If the user chooses to do so (by disabling the option 'safe checks'), some of Nessus'
vulnerability tests may try to cause vulnerable services or operating systems to crash.
This lets a user test the resistance of a device before putting it in production.
 Nessus provides additional functionality beyond testing for known network
vulnerabilities. For instance, it can use Windows credentials to examine patch levels
on computers running the Windows operating system, and can perform password
auditing using dictionary and brute force methods. Nessus 3 and later can also audit
systems to make sure they have been configured per a specific policy, such as
the NSA's guide for hardening Windows servers. This functionality utilizes Tenable's
proprietary audit files or Security Content Automation Protocol (SCAP) content.
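As a rough illustration of driving a scan from the web-server interface mentioned above, the sketch below authenticates against a Nessus server's REST API and lists the configured scans. The host, credentials, and the /session and /scans endpoints are assumptions based on Tenable's published API and should be checked against the documentation for the Nessus version in use; verify=False is only appropriate for a lab setup with a self-signed certificate.

```python
import requests

NESSUS_URL = "https://localhost:8834"  # assumed address of a local Nessus server

def list_scans(username, password):
    """Authenticate to a Nessus server and return its scan list (illustrative sketch)."""
    # POST /session is assumed to return a JSON body containing a session token.
    resp = requests.post(f"{NESSUS_URL}/session",
                         json={"username": username, "password": password},
                         verify=False)  # lab-only: self-signed certificate
    resp.raise_for_status()
    token = resp.json()["token"]

    # Subsequent calls pass the token back in the X-Cookie header.
    headers = {"X-Cookie": f"token={token}"}
    scans = requests.get(f"{NESSUS_URL}/scans", headers=headers, verify=False)
    scans.raise_for_status()
    return scans.json()

# Hypothetical usage:
# print(list_scans("admin", "password"))
```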
10. What types of vulnerabilities are scanned by NESSUS?

Nessus allows scans for the following types of vulnerabilities:
 Vulnerabilities that allow a remote hacker to control or access sensitive data on a system.
 Misconfiguration (e.g. open mail relay, missing patches, etc.).
 Default passwords, a few common passwords, and blank/absent passwords on some
system accounts. Nessus can also call Hydra (an external tool) to launch a dictionary
attack.
 Denials of service against the TCP/IP stack by using malformed packets
 Preparation for PCI DSS audit

11. What are the control objectives of ISO 17799 standard?


According to the ISO, ISO 17799 ‘establishes guidelines and general principles for
initiating, implementing, maintaining and improving information security management in an
organization.’ As mentioned, the standard simply offers guidelines; it does not contain in-depth
information on how information security should be implemented and maintained.
The security controls, the means of managing risk, mentioned in this standard should not all
be implemented. The appropriate controls should be selected after an in-depth risk assessment
has been completed. Only then should controls be selected to meet the specific needs of the
organization. Each organization is unique; therefore each will face different threats and
vulnerabilities. It is also important to understand that the controls mentioned in the standard
are not organized or prioritized according to any specific criteria. Each control should be
given equal importance and considered at the systems and projects requirements specification
and design stage. Failure to do this will result in less cost-effective measures or even failure
in achieving adequate security. The last point that should be highlighted about the standard is
the ISO warning that no set of controls will achieve complete security. The ISO encourages
additional intervention from management to monitor, evaluate and improve the effectiveness
of security controls to support the business objectives of the organization.

The following is a list of the 11 clauses, in no order of importance. Each clause contains one or
more main security categories. Each main security category has a ‘control objective’, which
states what the control is to achieve, and one or more controls that can be applied to achieve
that objective.
 Security Policy.
 Organizing Information Security.
 Asset Management.
 Human Resources Security.
 Physical and Environmental Security.
 Communications and Operations Management.
 Access Control.
 Information Systems Acquisition, Development and Maintenance.
 Information Security Incident Management.
 Business Continuity Management.
 Compliance.

12) What is the functionality of the NMAP tool?


Ans:- Nmap (Network Mapper) is a security scanner originally written by Gordon Lyon (also
known by his pseudonym Fyodor Vaskovich) used to discover hosts and services on a
computer network, thus creating a "map" of the network. To accomplish its goal, Nmap sends
specially crafted packets to the target host and then analyzes the responses.
The software provides a number of features for probing computer networks, including host
discovery and service and operating system detection. These features are extensible by scripts
that provide more advanced service detection, vulnerability detection, and other features.
Nmap is also capable of adapting to network conditions including latency and congestion
during a scan. Nmap is under development and refinement by its user community.

Nmap features include:

 Host discovery – Identifying hosts on a network. For example, listing the hosts that
respond to TCP and/or ICMP requests or have a particular port open.
 Port scanning – Enumerating the open ports on target hosts.
 Version detection – Interrogating network services on remote devices to determine
application name and version number.
 OS detection – Determining the operating system and hardware characteristics of
network devices.
 Scriptable interaction with the target – using Nmap Scripting Engine (NSE) and Lua
programming language.

Nmap can provide further information on targets, including reverse DNS names, device
types, and MAC addresses.
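A common way to script Nmap is simply to invoke the command-line tool and parse its XML report. The sketch below runs a version-detection scan of a target's common ports using Python's subprocess module; the target address and port range are placeholders, and OS detection (-O) would additionally require root privileges.

```python
import subprocess
import xml.etree.ElementTree as ET

def scan_host(target, ports="1-1024"):
    """Run an Nmap version-detection scan and return (port, state, service) tuples."""
    # -sV: service/version detection, -p: port range, -oX -: XML report on stdout.
    result = subprocess.run(
        ["nmap", "-sV", "-p", ports, "-oX", "-", target],
        capture_output=True, text=True, check=True)
    root = ET.fromstring(result.stdout)
    findings = []
    for port in root.iter("port"):
        state = port.find("state").get("state")
        service = port.find("service")
        name = service.get("name") if service is not None else "unknown"
        findings.append((port.get("portid"), state, name))
    return findings

# Hypothetical usage against a host you are authorized to scan:
# print(scan_host("192.168.1.10"))
```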

Typical uses of Nmap:

 Auditing the security of a device or firewall by identifying the network connections which can be made to, or through it.
 Identifying open ports on a target host in preparation for auditing
 Network inventory, network mapping, maintenance and asset management.
 Auditing the security of a network by identifying new servers.
 Generating traffic to hosts on a network.
 Finding and exploiting vulnerabilities in a network.
14. What are the basic phases of forensic process? Give a brief overview of
it.
Ans:-The basic phases of the forensic process: collection, examination, analysis, and reporting.
During collection, data related to a specific event is identified, labeled, recorded, and
collected, and its integrity is preserved. In the second phase, examination, forensic tools and
techniques appropriate to the types of data that were collected are executed to identify and
extract the relevant information from the collected data while protecting its integrity.
Examination may use a combination of automated tools and manual processes. The next
phase, analysis, involves analyzing the results of the examination to derive useful information
that addresses the questions that were the impetus for performing the collection and
examination. The final phase involves reporting the results of the analysis, which may
include describing the actions performed, determining what other actions need to be
performed, and recommending improvements to policies, guidelines, procedures, tools, and
other aspects of the forensic process.

1) Data Collection

The first step in the forensic process is to identify potential sources of data and acquire data from
them.

a) Identifying Possible Sources of Data

The increasingly widespread use of digital technology for both professional and personal purposes has
led to an abundance of data sources. The most obvious and common sources of data are desktop
computers, servers, network storage devices, and laptops. These systems typically have internal drives
that accept media, such as CDs and DVDs, and also have several types of ports (e.g., Universal Serial
Bus [USB], Firewire, Personal Computer Memory Card International Association [PCMCIA]) to
which external data storage media and devices can be attached.

b) Acquiring the Data
After identifying potential data sources, the analyst needs to acquire the data from the sources. Data
acquisition should be performed using a three-step process: developing a plan to acquire the data,
acquiring the data, and verifying the integrity of the acquired data.

c) Incident Response Considerations

When performing forensics during incident response, an important consideration is how and
when the incident should be contained. Isolating the pertinent systems from external
influences may be necessary to prevent further damage to the system and its data or to
preserve evidence. In many cases, the analyst should work with the incident response team to
make a containment decision (e.g., disconnecting network cables, unplugging power,
increasing physical security measures, gracefully shutting down a host). This decision should
be based on existing policies and procedures regarding incident containment, as well as the
team's assessment of the risk posed by the incident, so that the chosen containment strategy
or combination of strategies sufficiently mitigates risk while maintaining the integrity of
potential evidence whenever possible.

15. Short note on File Systems.

Ans:-
Filesystems

Before media can be used to store files, the media must usually be partitioned and formatted
into logical volumes. Partitioning is the act of logically dividing a media into portions that
function as physically separate units. A logical volume is a partition or a collection of
partitions acting as a single entity that has been formatted with a filesystem. Some media
types, such as floppy disks, can contain at most one partition (and consequently, one logical
volume). The format of the logical volumes is determined by the selected filesystem.

A filesystem defines the way that files are named, stored, organized, and accessed on logical
volumes. Many different filesystems exist, each providing unique features and data
structures. However, all filesystems share some common traits. First, they use the concepts of
directories and files to organize and store data. Directories are organizational structures that
are used to group files together. In addition to files, directories may contain other directories
called subdirectories. Second, filesystems use some data structure to point to the location of
files on media. In addition, they store each data file written to media in one or more file
allocation units. These are referred to as clusters by some filesystems (e.g., File Allocation
Table [FAT], NT File System [NTFS]) and as blocks by other filesystems (e.g., UNIX and
Linux). A file allocation unit is simply a group of sectors, which are the smallest units that
can be accessed on media.
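As a small worked example of the relationship between sectors, allocation units, and files, the sketch below computes how many clusters a file occupies and how much slack space remains in its last cluster. The 512-byte sector size is the common case, and the 8-sectors-per-cluster figure is just an illustrative assumption.

```python
import math

def allocation_for_file(file_size, sectors_per_cluster=8, sector_size=512):
    """Return (clusters_used, slack_bytes) for a file of file_size bytes."""
    cluster_size = sectors_per_cluster * sector_size   # e.g. 8 * 512 = 4096 bytes
    clusters = math.ceil(file_size / cluster_size)     # whole allocation units used
    slack = clusters * cluster_size - file_size        # unused space in the last cluster
    return clusters, slack

# A 10,000-byte file on 4 KB clusters occupies 3 clusters with 2,288 bytes of slack.
# print(allocation_for_file(10_000))
```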
Some commonly used filesystems are as follows:

! FAT12. FAT12 is used only on floppy disks and FAT volumes smaller than 16 MB.
FAT12 uses a 12-bit file allocation table entry to address an entry in the filesystem.
! FAT16. MS-DOS, Windows 95/98/NT/2000/XP, Windows Server 2003, and some UNIX OSs support FAT16 natively. FAT16 is also commonly used for multimedia
devices such as digital cameras and audio players. FAT16 uses a 16-bit file allocation
table entry to address an entry in the filesystem. FAT16 volumes are limited to a
maximum size of 2 GB in MS-DOS and Windows 95/98. Windows NT and newer
OSs increase the maximum volume size for FAT16 to 4 GB.
! FAT32. Windows 95 Original Equipment Manufacturer (OEM) Service Release 2
(OSR2), Windows 98/2000/XP, and Windows Server 2003 support FAT32 natively,
as do some multimedia devices. FAT32 uses a 32-bit file allocation table entry to
address an entry in the filesystem. The maximum FAT32 volume size is 2 terabytes
(TB).

! NTFS. Windows NT/2000/XP and Windows Server 2003 support NTFS natively. NTFS
is a recoverable filesystem, which means that it can automatically restore the
consistency of the filesystem when errors occur.
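As a quick check of the FAT16 limits quoted above, the arithmetic below multiplies the number of addressable file allocation table entries (2^16 for a 16-bit entry) by the cluster size: 32 KB clusters give the 2 GB limit, and the 64 KB clusters allowed from Windows NT onward give 4 GB. This is a simplified calculation that ignores the handful of reserved table entries.

```python
def max_fat_volume(table_entry_bits, cluster_size_kb):
    """Approximate maximum volume size in GB for a FAT filesystem.

    Simplified: treats every table entry as a usable cluster and ignores
    the few reserved entries at the start of the file allocation table.
    """
    clusters = 2 ** table_entry_bits              # addressable allocation units
    bytes_total = clusters * cluster_size_kb * 1024
    return bytes_total / (1024 ** 3)              # convert to GB

print(max_fat_volume(16, 32))   # FAT16, 32 KB clusters -> 2.0 GB
print(max_fat_volume(16, 64))   # FAT16, 64 KB clusters (Windows NT and newer) -> 4.0 GB
```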

16. How is the collection of files done in forensic science?


Ans:- During data collection, the analyst should make multiple copies of the relevant files or
filesystems, typically a master copy and a working copy. The analyst can then use the
working copy without affecting the original files or the master copy.

a) Copying Files from Media

Files can be copied from media using two different techniques:


! Logical Backup. A logical backup copies the directories and files of a logical volume. It
does not capture other data that may be present on the media, such as deleted files or residual
data stored in slack space.
! Bit Stream Imaging. Also known as disk imaging, bit stream imaging generates a bit-for-bit
copy of the original media, including free space and slack space. Bit stream images require
more storage space and take longer to perform than logical backups.
When a bit stream image is executed, either a disk-to-disk or a disk-to-file copy can be
performed. A disk-to-disk copy, as its name suggests, copies the contents of the media
directly to another media. A disk-to-file copy copies the contents of the media to a single
logical data file. A disk-to-disk copy is useful since the copied media can be connected
directly to a computer and its contents readily viewed. However, a disk-to-disk copy requires
a second media similar to the original media.

b) Data File Integrity

During backups and imaging, the integrity of the original media should be maintained. To
ensure that the backup or imaging process does not alter data on the original media, analysts
can use a write-blocker while backing up or imaging the media. A write-blocker is a
hardware or software-based tool that prevents a computer from writing to computer storage
media connected to it. Hardware write-blockers are physically connected to the computer and
the storage media being processed to prevent any writes to that media. Software write-
blockers are installed on the analyst's forensic system and currently are available only for MS-
DOS and Windows systems. (Some OSs [e.g., Mac OS X, Linux] may not require software
write-blockers because they can be set to boot with secondary devices not mounted.
However, attaching a hardware write-blocking device will ensure that integrity is maintained)
MS-DOS-based software write-blockers work by trapping Interrupt 13 and extended
Interrupt 13 disk writes. Windows-based software write-blockers use filters to sort interrupts
sent to devices to prevent any writes to storage media.
In general, when using a hardware write-blocker, the media or device used to read the media
should be connected directly to the write-blocker, and the write-blocker should be connected
to the computer or device used to perform the backup or imaging. When using a software
write-blocker, the software should be loaded onto a computer before the media or device used
to read the media is connected to the computer. Write-blockers may also allow write-blocking
to be toggled on or off for a particular device.
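To confirm that a working copy still matches the master copy after the backup or imaging step, analysts typically compare message digests of the two. The sketch below hashes two image files and reports whether they match; SHA-256 is used here as an example digest algorithm, and the file names are placeholders.

```python
import hashlib

def file_digest(path, chunk_size=1024 * 1024):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(master_path, working_path):
    """Return True if the working copy's digest matches the master copy's."""
    return file_digest(master_path) == file_digest(working_path)

# Hypothetical usage with placeholder file names:
# print(verify_copy("master.img", "working.img"))
```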