
UNIT 5


Security Assurance Approaches


Today's world requires that digital data be accessible, dependable and protected from misuse. Unfortunately, this need for accessible data also exposes organisations to a variety of new threats that can affect their information. Organisations often invest huge resources trying to protect their IT infrastructure without assessing the risks to their critical information. These organisations fail to realise that the primary objective is to protect mission-critical information rather than the IT infrastructure.
Organisations deploy established information security control frameworks as business needs and regulatory requirements arise. Most of these frameworks have evolved from industry best practices and recommend an information security risk assessment, aligned to the organisation's risk management framework, as one of the control objectives. The challenge enterprises face today is in adopting a robust, process-oriented information security risk assessment framework to comply with this control objective.

1. The Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE)

The OCTAVE approach is one such framework that enables organisations to understand, assess and address their information security risks from the organisation's perspective. OCTAVE is not a product; rather, it is a process-driven methodology to identify, prioritise and manage information security risks. It is intended to help organisations:

Develop qualitative risk evaluation criteria based on operational risk tolerances

Identify assets that are critical to the mission of the organisation

Identify vulnerabilities and threats to the critical assets

Determine and evaluate potential consequences to the organisation if threats are realised

Initiate corrective actions to mitigate risks and create a practice-based protection strategy

The OCTAVE approach was developed by the Software Engineering Institute (SEI) at Carnegie Mellon
University to address the information security compliance challenges faced by the US Department of Defense
(DoD). SEI is a US federally funded research and development centre sponsored by the DoD.

The OCTAVE Method


The OCTAVE Method is designed for large organisations that have a multi-layered hierarchy and maintain their own computing infrastructure. The organisational, technological and analysis aspects of an information security risk evaluation are undertaken in a three-phased approach with eight processes (figure 3):

Phase 1: Build asset-based threat profiles (organisational evaluation). The analysis team determines critical assets and what is currently being done to protect them. The security requirements for each critical asset are then identified. Finally, the organisational vulnerabilities in the existing practices and the threat profile for each critical asset are established.

Phase 2: Identify infrastructure vulnerabilities (technological evaluation). The analysis team identifies network access paths and the classes of IT components related to each critical asset. The team then determines the extent to which each class of component is resistant to network attacks and establishes the technological vulnerabilities that expose the critical assets.

Phase 3: Develop security strategy and mitigation plans (strategy and plan development). The analysis team establishes risks to the organisation's critical assets based on analysis of the information gathered and decides what to do about them. The team creates a protection strategy for the organisation and mitigation plans to address identified risks. The team also determines the next steps required for implementation and gains senior management's approval on the outcome of the whole process.
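OCTAVE is a process, not a tool, but a small sketch can make the Phase 1 outputs more concrete. The following Python fragment, purely illustrative and not part of any official OCTAVE worksheet, models a critical asset, its security requirements, a simple threat profile and a qualitative impact rating; every class name, asset and rating shown is hypothetical.

```python
# A minimal, illustrative sketch (not an official OCTAVE artifact) of how a
# Phase 1 asset-based threat profile and qualitative evaluation criteria
# might be recorded. All names, assets and ratings are hypothetical.
from dataclasses import dataclass, field

IMPACT_SCALE = {"low": 1, "medium": 2, "high": 3}   # qualitative criteria

@dataclass
class Threat:
    actor: str          # who or what could act against the asset
    outcome: str        # disclosure, modification, loss, interruption
    impact: str         # qualitative impact: low / medium / high

@dataclass
class CriticalAsset:
    name: str
    security_requirements: list = field(default_factory=list)
    threats: list = field(default_factory=list)

    def worst_case_impact(self) -> str:
        """Return the highest qualitative impact across the threat profile."""
        if not self.threats:
            return "low"
        return max((t.impact for t in self.threats), key=IMPACT_SCALE.get)

# Example usage with hypothetical data
customer_db = CriticalAsset(
    name="Customer database",
    security_requirements=["confidentiality", "integrity"],
    threats=[
        Threat("external attacker", "disclosure", "high"),
        Threat("insider error", "modification", "medium"),
    ],
)
print(customer_db.worst_case_impact())   # -> "high"
```

In a real evaluation the analysis team captures far richer information (areas of concern, threat trees, current protection practices), but the idea is the same: qualitative criteria defined by the organisation are applied to each critical asset's threat profile.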

2. COBIT (Control Objectives for Information and Related Technologies)


It is a methodology for evaluating a company's IT department that was published in 1996 by the IT Governance Institute and ISACA (Information Systems Audit and Control Association), represented in France by the AFAI (French Association of Audit and IT Advice).
This approach is based on a process benchmark, key goal indicators (KGIs) and key performance indicators (KPIs) that are used to monitor the processes in order to collect data that the company can use to reach its goals.
The COBIT approach puts forward 34 processes, organised into 4 larger functional areas that cover 318 goals:

Deliver & Support

Monitor

Planning & Organisation

Acquire & Implement

SECURITY OF IT SYSTEMS
Used in computer security, intrusion detection refers to the process of monitoring computer and network
activities and analyzing those events to look for signs of intrusion in your system. The point of looking for unauthorized intrusions is to alert IT professionals and system administrators within your organization to
potential system or network security threats and weaknesses.

IDS: A Passive Security Solution

An intrusion detection system (IDS) is designed to monitor all inbound and outbound network activity and
identify any suspicious patterns that may indicate a network or system attack from someone attempting to
break into or compromise a system. An IDS is considered a passive-monitoring system, since the main function of an IDS product is to warn you of suspicious activity taking place, not to prevent it. An IDS essentially reviews your network traffic and data and will identify probes, attacks, exploits and other vulnerabilities. IDSs can respond to the suspicious event in one of several ways, which include displaying an alert, logging the event or even paging an administrator. In some cases the IDS may be prompted to reconfigure the network to reduce the effects of the suspicious intrusion.
An IDS specifically looks for suspicious activity and events that might be the result of a virus, worm or hacker.
This is done by looking for known intrusion signatures or attack signatures that characterize different worms or
viruses and by tracking general variances which differ from regular system activity. The IDS is able to provide
notification of only known attacks.
The term IDS actually covers a large variety of products, all of which produce the end result of detecting intrusions. An IDS solution can range from cheaper shareware or freely distributed open source programs to much more expensive and secure vendor software solutions. Additionally, some IDSs consist of both software applications and hardware appliances and sensor devices which are installed at different points along your network.

Misuse Detection vs. Anomaly Detection


In misuse detection, the IDS analyzes the information it gathers and compares it to large databases of attack
signatures. Essentially, the IDS looks for a specific attack that has already been documented. Like a virus
detection system, detection software is only as good as the database of intrusion signatures that it uses to
compare packets against. In anomaly detection, the system administrator defines the baseline, or normal, state
of the network's traffic load, breakdown, protocol, and typical packet size. The anomaly detector monitors
network segments to compare their state to the normal baseline and look for anomalies.
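To make the contrast concrete, the toy Python sketch below applies both principles: misuse detection matches traffic against a small, hypothetical database of attack signatures, while anomaly detection flags traffic that deviates strongly from an administrator-defined baseline. Real IDS engines are vastly more sophisticated; this only illustrates the two detection ideas.

```python
# Illustrative only: a toy contrast between misuse (signature) detection
# and anomaly detection. The signatures and baseline figures are hypothetical.

ATTACK_SIGNATURES = [b"/etc/passwd", b"' OR '1'='1", b"\x90\x90\x90\x90"]

BASELINE_PACKET_SIZE = 512      # bytes, "normal" size defined by the admin
ANOMALY_TOLERANCE = 4.0         # flag packets far outside the baseline

def misuse_detect(payload: bytes) -> bool:
    """Misuse detection: compare traffic against known attack signatures."""
    return any(sig in payload for sig in ATTACK_SIGNATURES)

def anomaly_detect(packet_size: int) -> bool:
    """Anomaly detection: flag traffic that deviates from the baseline."""
    return packet_size > BASELINE_PACKET_SIZE * ANOMALY_TOLERANCE

packet = b"GET /../../etc/passwd HTTP/1.1"
if misuse_detect(packet):
    print("ALERT: known attack signature matched")
if anomaly_detect(len(packet) * 100):       # pretend this packet is huge
    print("ALERT: traffic deviates from the normal baseline")
```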
Passive vs. Reactive Systems
In a passive system, the IDS detects a potential security breach, logs the information and signals an alert. In a
reactive system, the IDS responds to the suspicious activity by logging off a user or by reprogramming the
firewall to block network traffic from the suspected malicious source.
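As a rough illustration of the difference, the sketch below handles the same alert passively (log and warn) and reactively (also block the suspected source). The iptables command is only one example of a reactive action a system might take; commercial products use their own mechanisms, and the addresses shown are hypothetical.

```python
# A minimal sketch of passive vs. reactive responses to the same alert.
# The iptables call is only an example of a reactive action; real systems
# use vendor-specific mechanisms and far more careful logic.
import logging
import subprocess

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ids")

def passive_response(source_ip: str, reason: str) -> None:
    """Passive IDS: record the event and raise an alert, nothing more."""
    log.warning("Possible intrusion from %s: %s", source_ip, reason)

def reactive_response(source_ip: str, reason: str) -> None:
    """Reactive IDS/IPS: also block further traffic from the source."""
    passive_response(source_ip, reason)
    # Example only: drop all further packets from the suspected source.
    subprocess.run(
        ["iptables", "-A", "INPUT", "-s", source_ip, "-j", "DROP"],
        check=False,
    )

passive_response("203.0.113.7", "signature match")
# reactive_response("203.0.113.7", "signature match")  # requires root privileges
```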
Network-based vs. Host-based IDS

Intrusion detection systems are network- or host-based solutions. Network-based IDS systems (NIDS) are often standalone hardware appliances that include network intrusion detection capabilities. A NIDS will usually consist of hardware sensors located at various points along the network, or of software installed on computers connected to your network, which analyzes data packets entering and leaving the network. Host-based IDS systems (HIDS) do not offer true real-time detection, but if configured correctly are close to it.

IPS: AN ACTIVE SECURITY SOLUTION

An IPS, or intrusion prevention system, is the next level of security technology, with the capability to provide security at all system levels, from the operating system kernel to network data packets. It provides policies and rules for network traffic along with an IDS for alerting system or network administrators to suspicious traffic, but allows the administrator to specify the action to take upon being alerted. Where an IDS informs of a potential attack, an IPS attempts to stop it. Another huge leap over IDS is that an IPS can prevent not only known intrusion signatures but also some unknown attacks, thanks to its database of generic attack behaviors. Thought of as a combination of an IDS and an application-layer firewall for protection, IPS is generally considered to be the "next generation" of IDS.
Currently, there are two types of IPSs that are similar in nature to IDS. They consist of host-based intrusion
prevention systems (HIPS) products and network-based intrusion prevention systems (NIPS).

Network-based vs. Host-based IPS


Host-based intrusion prevention systems are used to protect both servers and workstations through software
that runs between your system's applications and OS kernel. The software is preconfigured to determine the
protection rules based on intrusion and attack signatures. The HIPS will catch suspicious activity on the system
and then, depending on the predefined rules, it will either block or allow the event to happen. HIPS monitors
activities such as application or data requests, network connection attempts, and read or write attempts to name
a few.
Network-based intrusion prevention systems (often called inline prevention systems) are a solution for network-based security. A NIPS will intercept all network traffic and monitor it for suspicious activity and events, either blocking the requests or passing them along if the traffic is deemed legitimate. Network-based IPSs work in several ways. Usually product- or software-specific features determine how a specific NIPS solution works, but generally you can expect it to scan for intrusion signatures, search for protocol anomalies, detect commands not normally executed on the network, and more.

FIREWALLS
A firewall is a system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in both hardware and software, or a combination of both.

How are Firewalls Used?


Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.

Hardware and Software Firewalls


Firewalls can be either hardware or software but the ideal firewall configuration will consist of both. In
addition to limiting access to your computer and network, a firewall is also useful for allowing remote access
to a private network through secure authentication certificates and logins.
Hardware firewalls can be purchased as a stand-alone product but are also typically found in broadband
routers, and should be considered an important part of your system and network set-up. Most hardware
firewalls will have a minimum of four network ports to connect other computers, but for larger networks,
business networking firewall solutions are available.
Software firewalls are installed on your computer (like any software) and can be customised, allowing you some control over their function and protection features. A software firewall will protect your computer from outside attempts to control it or gain access to it.

COMMON FIREWALL TECHNIQUES


Firewalls are used to protect both home and corporate networks. A typical firewall program or hardware device
filters all information coming through the Internet to your network or computer system. There are several types
of firewall techniques that will prevent potentially harmful information from getting through:

Packet Filter
Looks at each packet entering or leaving the network and accepts or rejects it based on user-defined rules.
Packet filtering is fairly effective and transparent to users, but it is difficult to configure. In addition, it is
susceptible to IP spoofing.
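The sketch below shows the core idea of packet filtering with a handful of hypothetical, user-defined rules: each packet is checked against the rules in order and the first match decides whether it is accepted or rejected, with a deny-by-default fallback. Production packet filters (in routers or host firewalls) evaluate many more fields and are far more efficient.

```python
# A toy packet filter: each packet is checked against user-defined rules
# in order, and the first matching rule decides its fate. The rules and
# addresses here are hypothetical examples.
from ipaddress import ip_address, ip_network

RULES = [
    # (source network, destination port, action)
    (ip_network("10.0.0.0/8"), 22, "accept"),      # SSH from internal hosts
    (ip_network("0.0.0.0/0"), 23, "reject"),       # Telnet from anywhere
    (ip_network("0.0.0.0/0"), 80, "accept"),       # HTTP from anywhere
]
DEFAULT_ACTION = "reject"                           # deny by default

def filter_packet(src_ip: str, dst_port: int) -> str:
    for network, port, action in RULES:
        if ip_address(src_ip) in network and dst_port == port:
            return action
    return DEFAULT_ACTION

print(filter_packet("10.1.2.3", 22))       # accept
print(filter_packet("198.51.100.9", 23))   # reject
print(filter_packet("198.51.100.9", 443))  # reject (default)
```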

Application Gateway
Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective,
but can impose a performance degradation.

Circuit-level Gateway
Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been
made, packets can flow between the hosts without further checking.

Proxy Server
Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network
addresses.
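A toy forwarding proxy can illustrate how the true network addresses are hidden: the proxy accepts the client's connection, opens its own connection to the destination, and relays bytes in both directions, so the destination only ever sees the proxy's address. The listener and destination addresses below are hypothetical, and a real proxy would add filtering, logging, concurrency and error handling.

```python
# A toy forwarding proxy, for illustration only. It relays data between a
# client and a (hypothetical) destination, so the destination sees the
# proxy's address rather than the client's.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)
DEST_ADDR = ("intranet-server.example", 80)   # hypothetical destination

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until the connection closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def serve_once() -> None:
    """Handle a single client connection and relay it to the destination."""
    with socket.create_server(LISTEN_ADDR) as server:
        client, _ = server.accept()
        upstream = socket.create_connection(DEST_ADDR)  # uses the proxy's own address
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        pipe(upstream, client)

if __name__ == "__main__":
    serve_once()
```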
In practice, many firewalls use two or more of these techniques in concert. A firewall is considered a first line
of defense in protecting private information. For greater security, data can be encrypted.

WORLD-WIDE-WEB SECURITY
There are many security issues related to the WWW. Within the scope of this section, we will only discuss the communications security aspect, both at the network and the application level, and the payment security aspect.
1. COMMUNICATIONS SECURITY

The communication between a web browser and a web server is secured by the SSL/TLS protocol.
Historically, Secure Sockets Layer (SSL) was an initiative of Netscape Communications. SSL 2.0 contains a
number of security flaws which are solved in SSL 3.0. SSL 3.0 was adopted by the IETF Transport Layer
Security (TLS) working group, which made some small improvements and published the TLS 1.0 [9] standard.
The term SSL/TLS is used here, as SSL is an acronym most people are familiar with; however, the use of TLS in applications is certainly preferred to the use of the SSL protocols. Within the protocol stack, SSL/TLS is situated underneath the application layer. It can in principle be used to secure the communication of any application, not only that between a web browser and a server. SSL/TLS provides entity authentication, data
authentication, and data confidentiality. In short, SSL/TLS works as follows: public-key cryptography is used
to authenticate the participating entities, and to establish cryptographic keys; symmetric key cryptography is
used for encrypting the communication and adding Message Authentication Codes (MACs), to provide data
confidentiality and data authentication respectively. Thus, SSL/TLS depends on a Public Key Infrastructure.
Participating entities (usually only the server) should have a public/private key pair and a certificate. Root certificates (the certification authorities' certificates that are needed to verify the entities' certificates) should be securely distributed in advance (e.g., they are shipped with the browsers). Private keys should be properly protected. Note that these two elements, i.e., the distribution of root certificates in browsers and the protection of private keys, are actually among the weak and exploited points with respect to WWW security. More detailed information on SSL/TLS, the security flaws in SSL 2.0, and the differences between SSL 3.0 and TLS 1.0 can be found in Rescorla.
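As a small client-side example of the mechanics described above, the Python sketch below opens a TLS connection, verifies the server's certificate against the root certificates available on the platform, and prints the negotiated protocol version and cipher suite. The host name is only an example; any TLS-enabled server would do.

```python
# A minimal TLS client sketch using Python's standard library.
# It authenticates the server against the platform's trusted root
# certificates and relies on the negotiated symmetric keys for encryption
# and MACs, as described above. The host name is only an example.
import socket
import ssl

HOST, PORT = "www.example.com", 443

context = ssl.create_default_context()   # loads trusted root certificates
# context.check_hostname and context.verify_mode default to strict checking.

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher())
        # The server's certificate chain has been verified at this point.
        print("Server certificate subject:", tls_sock.getpeercert()["subject"])
```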
2. PAYMENT SECURITY

Although numerous different electronic payment systems have been proposed that can be or are used on the
WWW, including micro-payment systems and cash-like systems, most transactions on the web are paid using
credit cards. Mostly, customers just have to send their credit card number to the merchant's web server. This is normally done securely over SSL/TLS, but some serious problems can still be identified. Users have to disclose their credit card number to each merchant. This contradicts the fact that the credit card number is actually the secret on which the whole payment system is based (note that there is no electronic equivalent of the additional security mechanisms present in real-world credit card transactions, such as face-to-face interaction, physical cards and handwritten signatures). Even if the merchant is trusted and honest, this is risky, as one can obtain huge lists of credit card numbers by hacking into (trustworthy, but less protected) merchants' web servers. Moreover, it is possible to generate fake but valid credit card numbers, which is of great concern for on-line merchants. Thus, merchants bear risk in card-not-present transactions.

WIRELESS SECURITY
GSM and WAP are currently probably the two most popular and widely used wireless technologies. They are
briefly presented in the following paragraphs. Thereafter, some other systems and initiatives in the wireless
world are discussed.
GSM
GSM, the Global System for Mobile communications, is a currently very popular digital cellular telecommunications system specified by the European Telecommunications Standards Institute (ETSI). In short, GSM intends to provide three security services: temporary identities, for the confidentiality of the user identity; entity authentication, that is, verifying the identity of the user; and encryption, for the confidentiality of user-related data (note that data can be carried in a traffic channel, e.g., voice, or a signaling channel, e.g., SMS messages).
WAP
The Wireless Application Protocol (WAP) is a protocol stack for wireless communication networks. WAP is bearer independent; the most common bearer is currently GSM.

INFORMATION SECURITY AUDIT


An information technology audit, or information systems audit, is an examination of the management
controls within an Information technology (IT) infrastructure. The evaluation of obtained evidence determines
if the information systems are safeguarding assets, maintaining data integrity, and operating effectively to
achieve the organization's goals or objectives. These reviews may be performed in conjunction with a financial
statement audit, internal audit, or other form of attestation engagement.
IT audits are also known as "automated data processing (ADP) audits" and "computer audits". They were
formerly called "electronic data processing (EDP) audits".

An IT audit is different from a financial statement audit. While a financial audit's purpose is to evaluate
whether an organization is adhering to standard accounting practices, the purposes of an IT audit are to
evaluate the system's internal control design and effectiveness. This includes, but is not limited to, efficiency
and security protocols, development processes, and IT governance or oversight. Installing controls is necessary but not sufficient to provide adequate security. People responsible for security must consider whether the controls are installed as intended, whether they are effective, whether any breach in security has occurred and, if so, what actions can be taken to prevent future breaches. These inquiries must be answered by independent and unbiased
observers. These observers are performing the task of information systems auditing. In an Information Systems
(IS) environment, an audit is an examination of information systems, their inputs, outputs, and processing.
The primary function of an IT audit is to evaluate the systems that are in place to guard an organization's information. Specifically, information technology audits are used to evaluate the organization's ability to protect its information assets and to properly dispense information to authorized parties.

Types of IT audits
Various authorities have created differing taxonomies to distinguish the various types of IT audits. Goodman &
Lawless state that there are three specific systematic approaches to carry out an IT audit:

Technological innovation process audit. This audit constructs a risk profile for existing and new projects. The audit assesses the length and depth of the company's experience in its chosen technologies, as well as its presence in relevant markets, the organization of each project, and the structure of the portion of the industry that deals with this project or product.

Innovative comparison audit. This audit is an analysis of the innovative abilities of the company being audited, in comparison to its competitors. This requires examination of the company's research and development facilities, as well as its track record in actually producing new products.

Technological position audit. This audit reviews the technologies that the business currently has and that it needs to add. Technologies are characterized as being either "base", "key", "pacing" or "emerging".

IT AUDIT PROCESS
The following are basic steps in performing the Information Technology Audit Process:[4]
1. Planning
2. Studying and Evaluating Controls
3. Testing and Evaluating Controls
4. Reporting

5. Follow-up
6. Reports

OVERVIEW OF SECURITY STANDARDS: ISO 17799 STANDARD


ISO/IEC 17799:2005 establishes guidelines and general principles for initiating, implementing, maintaining,
and improving information security management in an organization. The objectives outlined provide general
guidance on the commonly accepted goals of information security management. ISO/IEC 17799:2005 contains
best practices of control objectives and controls in the following areas of information security management:

security policy;

organization of information security;

asset management;

human resources security;

physical and environmental security;

communications and operations management;

access control;

information systems acquisition, development and maintenance;

information security incident management;

business continuity management;

compliance.
The control objectives and controls in ISO/IEC 17799:2005 are intended to be implemented to meet the
requirements identified by a risk assessment. ISO/IEC 17799:2005 is intended as a common basis and practical
guideline for developing organizational security standards and effective security management practices, and to
help build confidence in inter-organizational activities.

The standard contains 12 sections: risk assessment and treatment; security policy; organization of information
security; asset management; access control; information security incident management; human resources
security; physical and environmental security; communications and operations management; information
systems acquisition, development and maintenance; business continuity management; and compliance.

Within each section, information security control objectives are specified and a range of controls are outlined
that are generally regarded as best practices. For each control, implementation guidance is provided. Each
organization is expected to perform an information security risk assessment prior to implementing controls.

Introduction and PCI Data Security Standard Overview


The Payment Card Industry (PCI) Data Security Standard (DSS) was developed to encourage and enhance
cardholder data security and facilitate the broad adoption of consistent data security measures globally. PCI
DSS provides a baseline of technical and operational requirements designed to protect cardholder data. PCI
DSS applies to all entities involved in payment card processing including merchants, processors, acquirers,
issuers, and service providers, as well as all other entities that store, process or transmit cardholder data. PCI
DSS comprises a minimum set of requirements for protecting cardholder data, and may be enhanced by
additional controls and practices to further mitigate risks. Below is a high-level overview of the 12 PCI DSS
requirements.

PCI DSS originally began as five different programs: Visa's Cardholder Information Security
Program, MasterCard's Site Data Protection, American Express' Data Security Operating Policy, Discover's
Information Security and Compliance, and JCB's Data Security Program. Each company's intentions were roughly similar: to create an additional level of protection for card issuers by ensuring that merchants meet minimum levels of security when they store, process and transmit cardholder data. On December 15, 2004, these companies aligned their individual policies and released version 1.0 of the Payment Card Industry Data Security Standard (PCI DSS); the Payment Card Industry Security Standards Council (PCI SSC) was subsequently formed to manage the standard.
In September 2006, the PCI standard was updated to version 1.1 to provide clarification and minor revisions to
version 1.0.
Version 1.2 was released on October 1, 2008. Version 1.1 "sunsetted" on December 31, 2008. Version 1.2 did
not change requirements, only enhanced clarity, improved flexibility, and addressed evolving risks and threats.
In August 2009 the PCI SSC announced the move from version 1.2 to version 1.2.1 for the purpose of making minor corrections designed to create more clarity and consistency among the standards and supporting documents.
Version 2.0 was released in October 2010 and is active for merchants and service providers from January 1,
2011 to December 31, 2014.
Version 3.0 was released in November 2013 and is active from January 1, 2014 to December 31, 2017.
Version 3.1 was released in April 2015.
