
Introduction to Computer Security

Computer Security / Information Security:

Information security means protecting information and information systems from

unauthorized access, use, modification, or destruction. The terms information security,
computer security and information assurance are frequently used interchangeably. These
fields are interrelated and share the common goals of protecting the confidentiality,
integrity and availability of information.

With the introduction of the computer, the need for automated tools for protecting the
files and other information stored on the computer became evident. This is especially the
case for a shared system such as the Internet. Thus, computer security is the generic name for
the collection of tools designed to protect data and to prevent hackers.

Computer Security rests on confidentiality, integrity and availability.


Confidentiality is the concealment of information or resources. Cryptography is a good choice for maintaining the privacy of information; traditionally it has been used to protect secret messages. Similarly, the privacy of resources, i.e. resource hiding, can be maintained by using proper firewalls. Confidentiality is sometimes called secrecy or privacy.
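As an illustration of cryptography protecting confidentiality, here is a minimal one-time-pad sketch in Python. The message and variable names are hypothetical, and real systems use vetted ciphers such as AES rather than hand-rolled code; this only shows the principle that the ciphertext is useless without the key.

```python
import secrets

def encrypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the corresponding key byte (a one-time pad).
    Applying the same operation again with the same key decrypts."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # random key as long as the message

ciphertext = encrypt(message, key)
recovered = encrypt(ciphertext, key)      # the authorized key holder decrypts

assert recovered == message               # confidentiality preserved for the key holder
```

Decryption is the same XOR pass, which is why a single function serves both roles in this sketch.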


Integrity ensures the correctness as well as trustworthiness of data or resources. For

example, if we say that we have preserved the integrity of an item, we may mean that the item is: precise, accurate, unmodified, modified only in acceptable ways, modified only by authorized people, modified only by authorized processes, consistent, meaningful, and usable.

Integrity mechanisms fall into two classes; prevention mechanisms and detection
mechanisms. Prevention mechanisms are responsible to maintain the integrity of data by
blocking any unauthorized attempts to change the data or any attempts to change data in
unauthorized ways. Detection mechanisms, rather than preventing violations of
integrity, simply report that the data's integrity is no longer trustworthy. Such
mechanisms may analyze the system events or the data itself to see if required constraints
still hold.
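A common detection mechanism is a cryptographic checksum: a digest recorded when the data was known to be good is later recomputed and compared. A minimal sketch, with made-up data values:

```python
import hashlib

def digest(data: bytes) -> str:
    """Cryptographic checksum used as an integrity detection mechanism."""
    return hashlib.sha256(data).hexdigest()

original = b"balance=100"
stored_digest = digest(original)     # recorded while the data was trusted

tampered = b"balance=999"            # an unauthorized modification

assert digest(original) == stored_digest   # unchanged data passes the check
assert digest(tampered) != stored_digest   # the violation is detected, not prevented
```

Note that this detects the change after the fact; preventing it would require an access control mechanism instead.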


Availability refers to the ability to use the information or resource desired. An

unavailable system is as bad as no system at all. An object or service is thought to be
available if:

It is present in a usable form.

It has enough capacity to meet the service's needs.
It is making clear progress, and, if in wait mode, it has a bounded waiting time.
The service is completed in an acceptable period of time.

Availability is usually defined in terms of quality of service, in which authorized users

are expected to receive a specific level of service. The aspect of availability that is
relevant to security is that someone may intentionally arrange to deny access to data or to
service by making it unavailable.
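The last two conditions above can be approximated by checking that a service answers within an agreed time bound. A toy sketch; the 0.5-second bound is an assumed quality-of-service figure, not a standard value:

```python
import time

RESPONSE_BOUND = 0.5  # seconds; the agreed quality-of-service level (assumed)

def available(service, bound=RESPONSE_BOUND):
    """Treat a service as available only if it answers within the bound."""
    start = time.monotonic()
    result = service()
    return (time.monotonic() - start) <= bound and result is not None

fast = lambda: "page"                       # responds immediately
slow = lambda: time.sleep(1.0) or "page"    # responds, but too late

assert available(fast) is True
assert available(slow) is False             # correct answer, unacceptable delay
```

The slow service still produces a correct answer; it fails only the availability requirement, which is exactly the distinction the delay and denial-of-service threats exploit.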

Fig 1: Relationship between Confidentiality, Integrity and Availability


A threat to a computing system is a set of circumstances that has the potential to cause
loss or harm. It is a potential violation of security: a possible danger that might exploit a vulnerability.

Attack is an assault on system security that derives from an intelligent threat, i.e. attack is
an intelligent act that is an intentional attempt to evade security services and violate the
security policy of a system.

Threats can be categorized into four classes:

Disclosure- Unauthorized access to information (e.g., snooping)
Deception- Acceptance of false data (e.g., modification, spoofing, repudiation of origin, denial of receipt)
Disruption- Interruption or prevention of correct operation (e.g., modification)
Usurpation- Unauthorized control of some part of a system (e.g., modification, spoofing, delay, denial of service)

Snooping- It is an unauthorized interception of information. It is passive, meaning that some entity is listening to communications or browsing the system information. Passive wiretapping, in which an attacker monitors the network, is an example of snooping.

Modification- It is an unauthorized change of information. It is active, meaning that some entity is changing the information. Active wiretapping, in which data crossing the network is altered by the attacker, is an example of modification.

Spoofing / Masquerading- It is an impersonation of one entity by another. E.g.: if a user tries to log into a computer across the Internet but instead reaches another computer that claims to be the desired one, the user has been spoofed. Delegation is basically authorized spoofing. The difference is that the one to whom authority is delegated does not impersonate the delegator; he/she simply asserts authority to act as an agent for the delegator. So masquerading is a violation of security, whereas delegation is not.

Repudiation of origin- A false denial that an entity sent something. It is a form of deception.


Denial of receipt- A false denial that an entity received some message or information. It is a form of deception.

Delay- It is a temporary inhibition of a service. E.g.: If delivery of a message or a service

requires time t; if an attacker can force the delivery time to be more than t, then there is
delayed delivery.

Denial of service- It is an infinite delay i.e., a long term inhibition of service. E.g., an
entity may suppress all messages directed to a particular destination. Another form of
service denial is the disruption of an entire network, either by disabling the network or by
overloading it with messages so as to degrade the performance.

Security Policy:

A policy is a set of mechanisms by means of which your information security objectives can be defined and attained. A security policy governs the set of rules and objectives needed by an organization.

The purpose of the information security policy is:

To prescribe mechanisms that help identify and prevent the compromise of
information security and the misuse of data, applications, networks and computer systems.
To define mechanisms that protect the reputation of the organization and allow
the organization to satisfy its legal and ethical responsibilities with regard to its
networks' and computer systems' connectivity to worldwide networks.
To prescribe an effective mechanism for responding to external complaints and
queries about real or perceived non-compliance with this policy.

What Makes a Good Security Policy?

The characteristics of a good security policy are:

1. It must be implementable through system administration procedures, publishing of

acceptable use guidelines, or other appropriate methods.

2. It must be enforceable with security tools, where appropriate, and with sanctions,
where actual prevention is not technically feasible.

3. It must clearly define the areas of responsibility for the users, administrators, and management.

Basic Properties of Security (Basic Principles of Security):

Confidentiality: Let X be a set of entities and let I be some information. Then I has the
property of confidentiality with respect to X if no member of X can obtain information
about I. Confidentiality implies that information must not be disclosed to some set of
entities. It may be disclosed to others. The membership of set X is often implicit; for example, when we speak of a document that is confidential, some entity has access to the document, and all entities not authorized to have such access make up the set X.

Integrity: Let X be a set of entities and let I be some information or a resource. Then I
has the property of integrity with respect to X if all members of X trust I. In addition to
trusting the information itself, the members of X also trust that the conveyance and
storage of I do not change the information or its trustworthiness (this aspect is sometimes

called data integrity). If I is information about the origin of something, or about an
identity, the members of X trust that the information is correct and unchanged (this aspect
is sometimes called origin integrity or, more commonly, authentication). Also, I may be a
resource rather than information. In that case, integrity means that the resource functions
correctly (meeting its specifications). This aspect is called assurance. As with
confidentiality, the membership of X is often implicit.

Availability: Let X be a set of entities and let I be a resource. Then I has the property of
availability with respect to X if all members of X can access I. The exact definition of
"access" varies upon the needs of the members of X, the nature of the resource, and the
use of the resource. If a book-selling server takes up to 1 hour to service a purchase
request, that may meet the client's requirements for "availability." If a server of medical
information takes up to 1 hour to service an anesthetic allergy information request, that
will not meet an emergency room's requirements for "availability."

Policy can be expressed in:

- Natural language, which is usually imprecise but easy to understand;

- Mathematics, which is usually precise but hard to understand;
- Policy languages, which look like some form of programming language and try to
balance precision with ease of understanding.

Security Mechanism:

A security mechanism is a method, tool, or procedure for enforcing a security policy; it is an entity or procedure that enforces some part of that policy. If there is a conflict in policies, the discrepancies may create security vulnerabilities. Mechanisms may be:

- Technical mechanism enforces the policy inside the system. For example, mechanism
that enables a password to authenticate user before using the computer.

- Procedural mechanism enforces the policy outside the system. For example, a rule that a disk containing a game program obtained from an unreliable source must be scanned for viruses before use.
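The technical mechanism mentioned above, password authentication, is commonly implemented by storing a salted hash rather than the password itself. A minimal sketch; the function names and the iteration count are illustrative choices, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, salted hash); the system stores these, never the password."""
    salt = salt if salt is not None else os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def authenticate(password, salt, stored):
    """Recompute the hash for the candidate password and compare."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("s3cret")
assert authenticate("s3cret", salt, stored)        # correct password accepted
assert not authenticate("guess", salt, stored)     # wrong password rejected
```

The salt ensures identical passwords hash differently, and the constant-time comparison avoids leaking information through timing.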

Consider a scenario: suppose a university's computer lab has a policy that prohibits any student from copying another student's homework files. The computer system provides mechanisms for preventing others from reading a user's files. Suppose Anna fails to use these mechanisms to protect her homework files, and Bill copies them. A breach of security has occurred, because Bill has violated the security policy. If the policy said students have to read-protect their homework files, then Anna also breached security, as she did not do this.

Example: In the preceding example, the policy is the statement that no student may copy
another student's homework. One mechanism is the file access controls; if the second
student had set permissions to prevent the first student from reading the file containing
her homework, the first student could not have copied that file.
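On a POSIX system, the file access control mechanism in this example boils down to permission bits. A sketch of read-protecting a homework file; the file name is hypothetical:

```python
import os
import stat
import tempfile

# Create a homework file in a temporary directory.
path = os.path.join(tempfile.mkdtemp(), "homework.txt")
with open(path, "w") as f:
    f.write("my answers")

# 0o600: owner may read and write; group and others get nothing.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode & stat.S_IRGRP == 0    # group cannot read
assert mode & stat.S_IROTH == 0    # others cannot read
```

Had the second student in the example set permissions this way, the operating system itself would have refused the copy.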

Security policies are often implicit rather than explicit. This causes confusion, especially
when the policy is defined in terms of the mechanisms. This definition may be
ambiguous - for e.g., if some mechanisms prevent a specific action and others allow it.
Such policies lead to confusion, and sites should avoid them.

The difference between a policy and an abstract description of that policy is crucial to the
analysis that follows. A security model is a model that represents a particular policy or set
of policies. A model abstracts details relevant for analysis. Analyses rarely discuss
particular policies; they usually focus on specific characteristics of policies, because
many policies exhibit these characteristics; and the more policies with those
characteristics, the more useful the analysis. There is a result that says no single
nontrivial analysis can cover all policies, but restricting the class of security policies
sufficiently allows meaningful analysis of that class of policies.

Goals of Security:

Prevention is to prevent attackers from violating the security policy. Prevention means that an attack will fail. Typically, prevention involves implementation of mechanisms that users cannot override and that are trusted to be implemented in a correct way, so that the attacker can't defeat the mechanism by changing it.

Detection is to detect an attacker's violation of the security policy, so it occurs after someone

violates the policy. The mechanism determines that a violation of the policy has occurred
(or is underway) due to attack, and reports it. The system must then respond
appropriately. Detection is most useful when an attack can't be prevented.

Recovery is to stop attack and to assess and repair any damage caused by attack. With
recovery, it should be such that the system continues to function correctly, possibly after
a period during which it fails to function correctly, due to attacks.
For example if the attacker deletes a file, one recovery mechanism is to restore the file
from backup tapes.

Protection State:

The state of a system at any instance is defined by the collection of the current values of
all memory locations, all secondary storage, and all registers and other components of the
system. The subset of this collection that deals with protection defines the protection state
of the system. Access control matrix model is the most precise model used to describe a
protection state.

Consider the set of possible protection states P. Suppose there is a subset Q of P consisting of exactly those states in which the system is authorized to reside. Whenever the system state is in Q, the system is secure; when the system state is in P - Q, the system is not secure. Enforcing security therefore means keeping the system state within the subset Q. Any operation such as reading, writing, altering, or executing data or instructions causes a change in the state of the system, i.e., a state transition occurs. We are concerned only with those state transitions that lead to authorized states.
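The idea of confining the system to the authorized states Q can be sketched with sets; the state names below are purely illustrative:

```python
P = {"s0", "s1", "s2", "s3"}    # all possible system states (toy example)
Q = {"s0", "s1"}                # authorized (secure) states, Q a subset of P

def transition(current, nxt):
    """Permit a state transition only if it keeps the system inside Q."""
    if nxt not in Q:
        raise PermissionError(f"transition {current} -> {nxt} leaves the secure states")
    return nxt

state = "s0"
state = transition(state, "s1")   # permitted: s1 is an authorized state
try:
    transition(state, "s2")       # denied: s2 lies in P - Q
    denied = False
except PermissionError:
    denied = True

assert state == "s1" and denied
```

Enforcing security is exactly this guard applied to every operation that changes state.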

Access Control Matrix Model:

Access to protected information must be restricted to people who are authorized to access
the information. The computer programs, and in many cases the computers that process
the information, must also be authorized. This requires that mechanisms be in place to
control the access to protected information.

Access control matrix model is the simplest framework for describing a protection
system. It defines the right of users over files in matrix.

- Set of objects O; the set of all protected entities that are relevant to the protection
- Set of subjects S; set of active objects such as processes and users
Now the access control matrix model, designated by a matrix A, defines the relationship between these entities, with the rights drawn from a set of rights R in each entry a[s, o], where s ∈ S, o ∈ O, and a[s, o] ⊆ R. The subject s has the set of rights a[s, o] over the object o. The set of protection states of the system is represented by the triple (S, O, A).
For example:
For example:

            file1              file2       process1                    process2
process1    read, write, own   read        read, write, execute, own   write
process2    append             read, own   read                        read, write, execute, own

Fig2: Access control matrix
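The matrix in Fig2 can be sketched as a nested mapping from subjects to objects to right sets; the helper name `allowed` is ours, not part of any standard API:

```python
# Rows are subjects, columns are objects, entries A[s][o] are subsets of rights.
A = {
    "process1": {"file1": {"read", "write", "own"},
                 "file2": {"read"},
                 "process1": {"read", "write", "execute", "own"},
                 "process2": {"write"}},
    "process2": {"file1": {"append"},
                 "file2": {"read", "own"},
                 "process1": {"read"},
                 "process2": {"read", "write", "execute", "own"}},
}

def allowed(subject, right, obj):
    """Check whether right appears in the entry a[subject, obj]."""
    return right in A.get(subject, {}).get(obj, set())

assert allowed("process1", "write", "file1")
assert not allowed("process2", "write", "file1")   # process2 may only append to file1
```

A reference monitor would consult such a structure on every access attempt.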

Access Control List(ACL)

Access Control List is the easiest way to represent the access control matrix and is its most commonly used implementation. An ACL permits any given user to be allowed or disallowed access to any object. Each ACL corresponds to a column of the matrix: it lists, for a protected object, the users and their rights over that object, so one can associate access rights for individuals and resources directly with each object.
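Storing the same rights column by column, as an ACL attached to each object, might look like this sketch (again using the rights from Fig2; the helper name is hypothetical):

```python
# One entry per object; each maps the subjects to their rights over that object.
acl = {
    "file1": {"process1": {"read", "write", "own"}, "process2": {"append"}},
    "file2": {"process1": {"read"}, "process2": {"read", "own"}},
}

def check(obj, subject, right):
    """An object's ACL lists which subjects hold which rights over it."""
    return right in acl.get(obj, {}).get(subject, set())

assert check("file1", "process2", "append")
assert not check("file2", "process2", "write")
```

The information is identical to the matrix; only the grouping changes, which makes "who can touch this object?" cheap to answer.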

Assumptions and Trust:

All security policies and mechanisms rest on assumptions specific to the type of security
and the environment in which it is to be employed.

As policies define security, they have to define security correctly for
the particular site. For example, a web site has to be available, but if the security policy
does not mention availability, the definition of security is inappropriate for the site. Also,
a policy may not specify whether a particular state is secure or non-secure. This
ambiguity causes problems. Hence proper assumptions should be made before defining a
concrete policy.

As mechanisms are to enforce policy, they must be appropriate. For example,

cryptography does not assure availability, so using cryptography in the above situation won't work. Trusting that the mechanisms work requires several assumptions:

- each mechanism is designed to implement one or more parts of security policy,

- the union of mechanisms implements all aspects of the security policy,
- the mechanisms are implemented correctly,
- the mechanisms are installed and administered correctly

The security mechanisms may be secure, precise, or broad. Let P be the set of all possible states, and let Q be the set of secure states, as specified by the security policy. Let R be the set of reachable states that the system can enter (R ⊆ P).

Then a security mechanism is:
- secure if all the reachable states are in the set of secure states, i.e. R ⊆ Q;
- precise if all the reachable states are secure and all the secure states are reachable, i.e. R = Q;
- broad if some reachable states are not secure, i.e. there are states r such that r ∈ R and r ∉ Q.


Assurance is a measure of how well the system meets its requirements; more informally,
how much we can trust the system to do what it is supposed to do. It does not say what
the system is to do; rather, it only covers how well the system does it. System
specification, design and implementation can provide a basis for determining how
much to trust a system. This aspect of trust is the assurance. It is an attempt to provide a
basis for supporting how much one can trust a system.

Specification is a statement of the desired functioning of the system. Specifications arise
from requirements analysis, in which the goals of the system are determined. The
specification says what are the requirements and what the system must do to meet those
requirements. It is a statement of functionality, not assurance, and can be very formal
(mathematical) or informal (natural language). The specification can be high-level or
low-level (for example, describing what the system as a whole is to do vs. what specific
modules of code are to do).

Design architects the system to meet the specifications. The design of a system translates
the specification into the components that will implement them. The design is said to
satisfy the specification if the design will not permit the system to violate those
predefined specifications.
Typically, the design is layered by breaking the system into abstractions, and then
refining the abstractions as we work our way down to the hardware. An analyst also must
show whether the design matches specifications or not.

Implementation is the actual coding of the modules and software components. These
must be correct (perform as specified), and their aggregation must satisfy the design.
Thus, implementation creates a system that satisfies the design. This leads that
implementation will also satisfy the specifications.

Operational Issues with Security:

Security does not end when the system is completed. Its operation affects security. A
secure system can be breached by improper operation (for example, when accounts
with no passwords are created). The problem is how to assess the effect of operational
issues on security.

Cost-Benefit Analysis: This weighs the cost of protecting data and resources against the costs associated with losing the data. If the data or resources cost less, or are of less value, than their protection, adding security mechanisms and procedures is not cost-effective, because the data or resources can be reconstructed more cheaply than the protections themselves.
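The comparison reduces to simple arithmetic; the figures below are invented purely for illustration:

```python
# Hypothetical figures: a control is cost-effective only if it costs less
# than the loss it would avert.
value_of_data = 10_000      # cost to reconstruct the data if it is lost
cost_of_control = 12_000    # cost of the proposed protection mechanism

def cost_effective(protection_cost, expected_loss):
    """Adopt a control only when it is cheaper than the loss it prevents."""
    return protection_cost < expected_loss

assert not cost_effective(cost_of_control, value_of_data)  # cheaper to reconstruct
assert cost_effective(2_000, value_of_data)                # a cheaper control pays off
```

Real analyses also amortize a control's cost across every service it protects, as noted below.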

Other considerations are the overlap of mechanisms' effects (one mechanism may protect multiple services, so its cost is amortized), the non-technical aspects of the mechanism (will it be impossible to enforce?), and the ease of use (if a mechanism is too cumbersome, it may cost more to retrofit a decent user interface than the benefits would justify).

Risk Analysis: Risks are events or conditions that may occur, and whose occurrence, if it
does take place, has a harmful or negative effect. A risk analysis involves identifying the
most probable threats to a system and analyzing the related vulnerabilities of the system
to these threats. The risk analysis also should determine the impact of each type of
potential threat on various functions or units within the system.
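A simple quantitative risk analysis ranks threats by expected loss, i.e. likelihood times impact; every probability and impact figure below is a made-up illustrative value:

```python
# threat name -> (annual likelihood, impact in currency units); invented values
threats = {
    "password guessing":  (0.30,  5_000),
    "insider data theft": (0.05, 80_000),
    "website defacement": (0.10,  2_000),
}

# Risk exposure = likelihood of the threat times impact if it occurs.
exposure = {name: p * impact for name, (p, impact) in threats.items()}
ranked = sorted(exposure, key=exposure.get, reverse=True)

assert ranked[0] == "insider data theft"   # highest expected loss despite low likelihood
```

Such a ranking tells the organization where protection spending buys the most reduction in expected loss.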

What happens if the data and resources are compromised? This tells us what we need to
protect and to what level. Cost-benefit analyses help determine the risk here, but there
may be other metrics involved (such as customs).

Laws and Customs: These constrain what you can do. E.g. Encryption use can be
unlawful. Laws restrict the availability and use of technology and affect procedural
controls. Hence any policy and any selection of mechanisms must take into account legal
considerations. Customs involve non-legislated things; for example, requiring all the employees of a company to provide DNA samples for authentication purposes may be legal for the company, but it is not socially acceptable as an alternative to a password. Thus society's customs distinguish between legal and acceptable practices.

Human Issues with Security:

Organizational Problems:

With the organizational problems, the question is of who is responsible for security. The
key here is that those responsible for security have the power to enforce security.
Otherwise there is confusion, and the architects need not worry whether the system is secure because they won't be blamed if someone gets in. This arises when system administrators, for example, are responsible for security, but only security officers can make the rules. Preventing this problem (power without responsibility, or vice versa) is tricky and requires capable management. What's worse is that security is not a direct financial incentive for most companies because it doesn't bring in revenue; it merely prevents the loss of revenue obtained from other sources.

Lack of resources is another common problem. Securing a system requires resources as

well as people. It requires time to design a configuration that will provide a sufficient
level of security, to implement the configuration, and to administer the system.

People problems:
People problems are by far the main source of security problems. Outsiders are attackers
from without the organization; insiders are people who have authorized access to the
system and, possibly, are authorized to access data and resources, but use the data or
resources in unauthorized ways. It is speculated that insiders account for 80-90% of all
security problems, but the studies generally do not disclose their methodology in detail,
so it is hard to know how accurate they are. Social engineering, that is, deceiving people into granting access or revealing information, is quite effective, especially if the people being gulled are inexperienced in security (possibly because they are new, or because they are tired).

The Security Life Cycle:

Fig 3: The security life cycle: threats, policy, specification, design, implementation, and operation and maintenance.

The considerations discussed till now appear to flow linearly from one to the next as
shown in figure above. In addition, each stage of the cycle feeds back to the preceding
stage, and through that stage to all earlier stages. Thus each stage affects all the ones that
come before it. Feedback from operation and maintenance is critical, and often
overlooked. It allows one to validate the threats and the legitimacy of the policy.

Types of Security Policy:

A security policy considers all relevant aspects of confidentiality, integrity, and availability.


With respect to confidentiality, it identifies those states in which information leaks to

those not authorized to receive it. This includes not only the leakage of rights but also the
illegal transmission of information without leakage of rights, called information flow.
Also, the policy must handle dynamic changes of authorization, so it includes a temporal
element. For e.g., a contractor working for a company may be authorized to access
proprietary information during the lifetime of a nondisclosure agreement, but when that
nondisclosure agreement expires, the contractor can no longer access that information.
This aspect of the security policy is often called a confidentiality policy.

With respect to integrity, a security policy identifies authorized ways in which

information may be altered and entities authorized to alter it. Authorization may derive
from a variety of relationships, and external influences may constrain it. For e.g. in many
transactions, a principle called separation of duties forbids an entity from completing the
transaction on its own. Those parts of the security policy that describe the conditions and
manner in which data can be altered are called the integrity policy.
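The separation-of-duties rule can be sketched as a check that the initiator and approver of a transaction differ; the function and names here are hypothetical:

```python
def complete_transaction(initiator, approver):
    """Separation of duties: no single entity may complete the transaction alone."""
    if initiator == approver:
        raise PermissionError("initiator may not approve their own transaction")
    return "committed"

assert complete_transaction("alice", "bob") == "committed"   # two distinct parties

try:
    complete_transaction("alice", "alice")                   # one party acting alone
    blocked = False
except PermissionError:
    blocked = True

assert blocked
```

The check is part of the integrity policy: it constrains how data may be altered, not who may read it.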

With respect to availability, a security policy describes what services must be provided. It may present parameters within which the services will be accessible (for example, that a browser may download Web pages but not Java applets). It may require a level of service (for example, that a server will provide authentication data within 1 minute of the request being made). This relates directly to issues of quality of service.

The statement of a security policy may formally state the desired properties of the
system. If the system is to be provably secure, the formal statement will allow the
designers and implementers to prove that those desired properties hold. If a formal proof
is unnecessary or infeasible, analysts can test that the desired properties hold for some set
of inputs. In practice, a less formal type of security policy defines the set of authorized
states. Typically, the security policy assumes that the reader understands the context in

which the policy is issued, in particular the laws, organizational policies, and other
environmental factors. The security policy then describes conduct, actions, and
authorizations defining "authorized users" and "authorized use."

Example: A university disallows cheating, which includes copying another student's homework assignment (with or without permission). A computer science class requires the students to do their homework on the department's computer. One student notices that a second student has not read-protected the file containing her homework and copies it.
Has either student (or have both students) breached security?

The second student has not, despite her failure to protect her homework. The security
policy requires no action to prevent files from being read. Although she may have been
too trusting, the policy does not ban this; hence, the second student has not breached
security. The first student has breached security. The security policy disallows the
copying of homework, and the student has done exactly that. Whether the security policy
specifically states that "files containing homework shall not be copied" or simply says
that "users are bound by the rules of the university" is irrelevant; in the latter case, one of
those rules bans cheating. If the security policy is silent on such matters, the most
reasonable interpretation is that the policy disallows actions that the university disallows,
because the computer science department is part of the university.
A security policy is a statement of what is, and what is not, allowed. This defines security for a particular site/system.

- A military security policy (also called a governmental security policy) is a security

policy developed primarily to provide confidentiality. The name comes from the military's need to keep information secret. Although integrity and availability are important,
organizations using this class of policies can overcome the loss of either - for e.g., by
using orders not sent through a computer network. But the compromise of confidentiality
would be catastrophic, because an opponent would be able to plan countermeasures (and
the organization may not know of the compromise).

Confidentiality is one of the factors of privacy, an issue recognized in the laws of many
government entities (such as the Privacy Act of the United States). Aside from
constraining what information a government entity can legally obtain from individuals,
such acts place constraints on the disclosure and use of that information. Unauthorized
disclosure can result in penalties that include jail or fines; also, such disclosure
undermines the authority and respect that individuals have for the government and
inhibits them from disclosing that type of information to the agencies so compromised.

- A commercial security policy is a security policy developed primarily to provide

integrity. The name comes from the need of commercial firms to prevent tampering with
their data, because they could not survive such compromises. For e.g., if the
confidentiality of a bank's computer is compromised, a customer's account balance may
be revealed. This would certainly embarrass the bank and possibly cause the customer to
take his/her business elsewhere. However, if the integrity of the computer holding the
accounts were compromised, the balances in the customers' accounts could be altered,
with financially ruinous effects.

Some integrity policies use the notion of a transaction; like database specifications, they
require that actions occur in such a way as to leave the database in a consistent state.
These policies, called transaction-oriented integrity security policies, are critical to
organizations that require consistency of databases.

The role of trust in these policies highlights their difference. Confidentiality policies
place no trust in objects. The policy statement states whether that object can be disclosed.
It says nothing about whether the object should be believed. Integrity policies, to the
contrary, indicate how much the object can be trusted. Given that this level of trust is
correct, the policy states what a subject can do with that object. The assignment of a level
of confidentiality is based on what the classifier wants others to know, but the assignment
of a level of integrity is based on what the classifier subjectively believes to be true about
the trustworthiness of the information.

Thus, a confidentiality policy is a security policy dealing only with confidentiality, while an integrity policy is a security policy dealing only with integrity. Both confidentiality policies and military policies deal with confidentiality; however, a confidentiality policy does not deal with integrity at all, whereas a military policy may. A similar distinction holds for integrity policies and commercial policies, where both deal with integrity; however, an integrity policy does not deal with confidentiality.

Organizational Security Policies

A key element of any organization's security planning is an effective security policy. A

security policy must answer: who can access which resources in what manner?

A security policy is a high-level management document to inform all users of the goals of
and constraints on using a system. A policy document is written in broad enough terms
that it does not change frequently. The information security policy is the foundation upon
which all protection efforts are built. It should be a visible representation of priorities of
the entire organization, definitively stating underlying assumptions that drive security
activities. The policy should articulate senior management's decisions regarding security
as well as asserting management's commitment to security. To be effective, the policy
must be understood by everyone as the product of a directive from an authoritative and
influential person at the top of the organization.

A useful and effective security policy covers the following aspects:

- Purpose

Security policies are used for several purposes, including the following:

recognizing sensitive information assets/resources

clarifying security responsibilities
promoting awareness for existing employees
guiding new employees

- Audience

A security policy addresses several different audiences with different expectations. That is, each group (users, owners, and beneficiaries) uses the security policy in important but different ways.

- Contents

A security policy must identify its audiences: the beneficiaries, users, and owners. The
policy should describe the nature of each audience and their security goals. Several other
sections are required, including the purpose of the computing system, the resources
needing protection, and the nature of the protection to be supplied. We discuss each one
in turn.

Characteristics of a Good Security Policy

If a security policy is written poorly, it cannot guide the developers and users in
providing appropriate security mechanisms to protect important assets. Certain
characteristics make a security policy a good one.

- Coverage

A security policy must be comprehensive (all-inclusive): it must either apply to or
explicitly exclude all possible situations. Furthermore, a security policy cannot be
updated as each new situation arises, so it must be general enough to apply naturally to
new cases that occur as the system is used in unusual or unexpected ways.

- Durability

A security policy must grow and adapt well. In large measure, it will survive the system's
growth and expansion without change. If written in a flexible way, the existing policy
will be applicable to new situations. However, there are times when the policy must
change (such as when government regulations mandate new security constraints), so the
policy must be changeable when it needs to be.

An important key to durability is keeping the policy free from ties to specific data or
protection mechanisms that almost certainly will change. It is preferable to describe
assets needing protection in terms of their function and characteristics, rather than in
terms of specific implementation.

- Realism

The policy must be realistic. That is, it must be possible to implement the stated security
requirements with existing technology. Moreover, the implementation must be beneficial
in terms of time, cost, and convenience; the policy should not recommend a control that
works but prevents the system or its users from performing their activities and functions.

- Usefulness

An obscure or incomplete security policy will not be implemented properly, if at all. The
policy must be written in language that can be read, understood and followed by anyone
who must implement it or is affected by it. For this reason, the policy should be succinct,
clear, and direct.

Risk Analysis:

Risks are events or conditions that may occur, and whose occurrence, if it does take
place, has a harmful or negative effect. Exposure to the consequences of uncertainty
constitutes a risk. In everyday usage, risk is often used synonymously with the
probability of a known loss. In information security, a risk is defined as a function of
three variables:

1. the probability that there is a threat

2. the probability that there are any vulnerabilities
3. the potential impact.

In general, there are three strategies for risk reduction:

- avoiding the risk, by changing requirements for security or other system
characteristics

- transferring the risk, by allocating the risk to other systems, people, organizations,
or assets; or by buying insurance to cover any financial loss should the risk become a
reality

- assuming the risk, by accepting it, controlling it with available resources, and
preparing to deal with the loss if it occurs

Good, effective security planning includes a careful risk analysis. Risk analysis is the
process of examining a system and its operational context to determine possible
exposures and the potential harm they can cause.

Steps of Risk Analysis

By following well-defined steps, we can analyze the security risks in a computing

system. The basic steps of risk analysis are listed below.

1. Identify assets.
2. Determine vulnerabilities.
3. Estimate likelihood of exploitation.
4. Compute expected annual loss.
5. Survey applicable controls and their costs.
6. Project annual savings of control.
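The quantitative steps above (estimating likelihood, computing expected annual loss, and projecting the savings of a control) can be sketched in Python. The asset value, likelihoods, and control cost below are invented purely for illustration:

```python
# Hypothetical figures for one asset with one vulnerability (illustrative only).
ASSET_VALUE = 50_000        # estimated loss in dollars if the vulnerability is exploited
ANNUAL_LIKELIHOOD = 0.10    # step 3: estimated probability of exploitation per year

def expected_annual_loss(impact, likelihood):
    """Step 4: expected annual loss = potential impact x likelihood of exploitation."""
    return impact * likelihood

def annual_savings(loss_without, loss_with, control_cost):
    """Step 6: projected annual savings of a control =
    (loss without the control - loss with the control) - yearly cost of the control."""
    return (loss_without - loss_with) - control_cost

loss_without = expected_annual_loss(ASSET_VALUE, ANNUAL_LIKELIHOOD)
# Step 5: suppose a control costing $1,000/year cuts the likelihood to 0.02.
loss_with = expected_annual_loss(ASSET_VALUE, 0.02)
savings = annual_savings(loss_without, loss_with, 1_000)
```

A control is worth deploying when the projected saving is positive; otherwise the organization may prefer to assume or transfer the risk instead.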

Access Control:

Access control is the ability to permit or deny the use of a particular resource by a
particular entity. Access control mechanisms can be used in managing physical resources
(such as a movie theater, to which only ticketholders should be admitted), logical
resources (a bank account, with a limited number of people authorized to make a
withdrawal), or digital resources (for example, a private text document on a computer,
which only certain users should be able to read).

In any access control model, the entities that can perform actions in the system are called
subjects, and the entities representing resources to which access may need to be
controlled are called objects.

Types of Access Control:

Discretionary Access Control(DAC) or Identity Based Access Control(IBAC):

An individual user sets an access control mechanism to allow or deny access to an object;
access control is left to the discretion of the owner. Discretionary access controls base
access rights on the identity of the subject and the identity of the object involved. Identity
is the key: the owner of the object constrains who can access it by allowing only
particular subjects to have access. The owner states the constraint in terms of the identity
of the subject, or the owner of the subject. At the owner's discretion, rights can be passed
on to other subjects, and a subject's programs can pass on its rights; ultimately, the owner
has the power to determine who can access the object.

EXAMPLE: Suppose a child keeps a diary. The child controls access to the diary,
because she can allow someone to read it (grant read access) or not allow someone to
read it (deny read access). The child allows her mother to read it, but no one else. This is
a discretionary access control because access to the diary is based on the identity of the
subject (mom) requesting read access to the object (the diary).
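The diary example can be sketched as an object carrying an access control list (ACL) that only its owner may change. This is a minimal illustration, not a full DAC implementation; the class and names are hypothetical:

```python
# A minimal sketch of discretionary access control: each object carries an
# ACL, and only the object's owner may change it.

class DacObject:
    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # the owner starts with full rights

    def grant(self, granter, subject, right):
        # Discretion: only the owner decides who may access the object.
        if granter != self.owner:
            raise PermissionError("only the owner may change the ACL")
        self.acl.setdefault(subject, set()).add(right)

    def allowed(self, subject, right):
        return right in self.acl.get(subject, set())

diary = DacObject(owner="child")
diary.grant("child", "mom", "read")   # the child grants her mother read access
diary.allowed("mom", "read")          # True: access is based on identity
diary.allowed("stranger", "read")     # False: never granted
```

Because the decision rests entirely on identities recorded in the ACL, the owner can grant or revoke access to any subject at will.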

Mandatory Access Control (MAC) or Rule Based Access Control:

When a system mechanism controls access to an object and an individual user cannot
alter that access, the control is a mandatory access control (MAC), occasionally called a
rule-based access control. The operating system controls access, and neither the subject
nor the owner of the object can override it or determine whether access is granted.
Typically, the system mechanism checks information associated with both the subject
and the object to determine whether the subject should access the object; rules describe
the conditions under which access is allowed. Subjects cannot pass on their rights, nor
can their programs. The system controls all accesses, and no one may alter the rules
governing access to those objects.

EXAMPLE: The law allows a court to access driving records without the owner's
permission. This is a mandatory control, because the owner of the record has no control
over the court's accessing the information.
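In contrast to the DAC sketch, the driving-record example can be modelled as a rule table fixed by the system; neither the subject nor the record's owner can modify it. The role and type names below are invented for illustration:

```python
# System-defined rules: fixed by the system, not by any subject or owner.
SYSTEM_RULES = {
    ("court", "driving_record", "read"),   # the law permits courts to read records
}

def mac_allowed(subject_role, object_type, right):
    """Access is granted only if a system rule explicitly permits it;
    there is no ACL for an owner to edit."""
    return (subject_role, object_type, right) in SYSTEM_RULES

mac_allowed("court", "driving_record", "read")      # True: a rule permits it
mac_allowed("neighbor", "driving_record", "read")   # False: no rule permits it
```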

Originator Controlled Access Control (ORCON or ORGCON):

An originator controlled access control (ORCON or ORGCON) bases access on the
creator of an object (or the information it contains). Information is controlled by the
originator or creator of the information, not by its owner, although sometimes the creator
may be the owner too. The goal of this control is to allow the originator of the file (or of
the information it contains) to control the dissemination of the information. ORCON
combines features of MAC and DAC, and its basic rules are:
- The owner of an object cannot change the access controls of the object.
- When an object is copied, the access control restrictions of that source are copied
and bound to the target of the copy.
- The creator (originator) can alter the access control restrictions on a per-subject
and per-object basis.
EXAMPLE: Access to audio/video CDs is controlled in order to prevent piracy. The
master CD may be copied for sale, but the copies themselves are protected against
further copying.
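The three ORCON rules can be sketched as an object whose restrictions only the originator may change and which travel with every copy. The class and restriction names are hypothetical:

```python
# A minimal sketch of originator-controlled access control.

class OrconObject:
    def __init__(self, originator, restrictions):
        self.originator = originator
        self.restrictions = set(restrictions)

    def set_restrictions(self, subject, restrictions):
        # Rules 1 and 3: the owner/holder cannot change the controls;
        # only the creator (originator) may alter them.
        if subject != self.originator:
            raise PermissionError("only the originator may change restrictions")
        self.restrictions = set(restrictions)

    def copy(self):
        # Rule 2: the source's restrictions are copied and bound to the copy.
        if "no-copy" in self.restrictions:
            raise PermissionError("copying is forbidden by the originator")
        return OrconObject(self.originator, self.restrictions)

master = OrconObject("studio", restrictions=set())
disc = master.copy()                           # copying the master for sale is allowed
disc.set_restrictions("studio", {"no-copy"})   # the studio forbids copying the sold disc
# disc.copy() would now raise PermissionError, mirroring the CD example.
```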

Role Based Access Control (RBAC):

Role Based Access Control (RBAC), also known as nondiscretionary access control,
takes more of a real-world approach to structuring access control. Access under RBAC is
based on a user's job function within the organization to which the computer system
belongs.
Essentially, RBAC assigns permissions to particular roles in an organization. Users are

then assigned to that particular role. For example, an accountant in a company will be
assigned to the Accountant role, gaining access to all the resources permitted for all
accountants on the system. Similarly, a software engineer might be assigned to the
developer role.

Roles differ from groups in that while users may belong to multiple groups, a user under
RBAC may only be assigned a single role in an organization. Additionally, there is no
way to provide individual users additional permissions over and above those available for
their role. The accountant described above gets the same permissions as all other
accountants, nothing more and nothing less.
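The accountant/developer example can be sketched as two mappings: permissions attach to roles, and each user attaches to exactly one role, with no per-user override. The permission names below are invented for illustration:

```python
# Permissions belong to roles, never to individual users.
ROLE_PERMISSIONS = {
    "accountant": {"read_ledger", "post_entry"},
    "developer":  {"read_source", "commit_code"},
}

# Each user is assigned exactly one role.
USER_ROLE = {"alice": "accountant", "bob": "developer"}

def rbac_allowed(user, permission):
    """A user gets exactly the permissions of their role: nothing more, nothing less."""
    role = USER_ROLE.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

rbac_allowed("alice", "post_entry")    # True: every accountant may post entries
rbac_allowed("alice", "commit_code")   # False: no per-user exceptions exist
```

Administering access then reduces to maintaining these two mappings: when an employee changes jobs, only the user-to-role entry changes.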

The Bell-LaPadula Model:

A confidentiality policy, also called an information flow policy, prevents the
unauthorized disclosure of information. Unauthorized alteration of information is
secondary. For example, the navy must keep confidential the date on which a troop ship
will sail. If the date is changed, the redundancy in the systems and paperwork should
catch that change. But if the enemy knows the date of sailing, the ship could be sunk.
Because of extensive redundancy in military communications channels, availability is
also less of a problem.

The Bell-LaPadula Model corresponds to military-style classifications. It has influenced
the development of many other models and indeed much of the development of computer
security technologies. The simplest type of confidentiality classification is a set of
security clearances arranged in a linear (total) ordering. These clearances represent
sensitivity levels. The higher the security clearance, the more sensitive the information
and the greater the need to keep it confidential. A subject has a security clearance, such
as C (for CONFIDENTIAL) or TS (for TOP SECRET); an object has a security
classification, such as S (for SECRET) or UC (for UNCLASSIFIED). When we refer to
both subject clearances and object classifications, we use the term "classification". The
goal of the Bell-LaPadula security model is to prevent read access to objects at a security
classification higher than the subject's clearance.
The properties of the Bell-LaPadula model are:
- The simple security property which is no read up.
- The star property which is no write down.
A problem with this model is that it does not address the integrity of data.
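The two properties can be sketched over a linear ordering of the levels mentioned above (UC < C < S < TS); this minimal version ignores categories and compartments:

```python
# Linear ordering of clearances/classifications: higher number = more sensitive.
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def can_read(subject_clearance, object_classification):
    """Simple security property: no read up."""
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance, object_classification):
    """Star property: no write down."""
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

can_read("TS", "S")    # True: reading at or below one's clearance is allowed
can_read("C", "TS")    # False: no read up
can_write("TS", "C")   # False: no write down (would leak TS information downward)
```

Together the two checks guarantee that information can only flow upward in sensitivity, which is exactly the confidentiality goal of the model.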

The Biba Integrity Model:

Integrity refers to the trustworthiness of data or resources. Integrity is usually defined in
terms of preventing improper or unauthorized change to data. There are three main goals
of integrity:
- Preventing unauthorized users from making modifications to data or programs.
- Preventing authorized users from making improper or unauthorized modifications.
- Maintaining internal and external consistency of data and programs.

The Biba integrity model was published in 1977 at the Mitre Corporation, one year after
the Bell-LaPadula model was published. The primary motivation for creating this model
was the inability of the Bell-LaPadula model to deal with the integrity of data. The Biba
model addresses a problem with the star property of the Bell-LaPadula model, which
does not restrict a subject from writing to a more trusted object.

A classification is an element of a hierarchical set of elements. It consists of elements
like C (for Crucial), VI (for Very Important), and I (for Important). The set of categories
and the classification together determine the level of integrity.

The properties of Biba Model are:

- The no write-up rule is essential, since it limits the damage that can be done by
malicious subjects in the system. For instance, no write-up limits the amount of
damage that can be done by a Trojan horse: it would only be able to write to
objects at its integrity level or lower. This is important because it limits the
damage that can be done to the operating system.
- The no read-down rule prevents a trusted subject from being contaminated by a
less trusted object.
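Using the integrity levels from the text (I < VI < C), the Biba rules mirror the Bell-LaPadula properties in reverse; a minimal sketch:

```python
# Integrity ordering: higher number = more trusted.
INTEGRITY = {"I": 0, "VI": 1, "C": 2}   # Important < Very Important < Crucial

def biba_can_write(subject_level, object_level):
    """No write up: a subject may not write to a more trusted object."""
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

def biba_can_read(subject_level, object_level):
    """No read down: a subject may not read a less trusted object."""
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

biba_can_write("I", "C")   # False: a low-integrity program cannot corrupt crucial data
biba_can_read("C", "I")    # False: a trusted subject is not contaminated from below
```

Where Bell-LaPadula forces information to flow only upward in sensitivity, Biba forces it to flow only downward in integrity, so untrusted data can never reach trusted objects.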