
SSCP Study Notes

1. Access Controls
2. Administration
3. Audit and Monitoring
4. Risk, Response, and Recovery
5. Cryptography
6. Data Communications
7. Malicious Code
Modified version of original study guide by Vijayanand Banahatti (SSCP)

Table of Contents
1.0 ACCESS CONTROLS
2.0 ADMINISTRATION
3.0 AUDIT AND MONITORING
4.0 RISK, RESPONSE, AND RECOVERY
5.0 CRYPTOGRAPHY
6.0 DATA COMMUNICATIONS
7.0 MALICIOUS CODE
REFERENCES

1.0 ACCESS CONTROLS


Access control objects: Any objects that need controlled access can be considered an access control object.
Access control subjects: Any users, programs, and processes that request permission to objects are access
control subjects. It is these access control subjects that must be identified, authenticated and authorized.
Access control systems: Interface between access control objects and access control subjects.

1.1 Identification, Authentication, Authorization, Accounting


1.1.1 Identification and Authentication Techniques
Identification works with authentication, and is defined as a process through which the identity of an
object is ascertained. Identification takes place by using some form of authentication.
Authentication Types and Examples:
Something you know: Passwords, personal identification numbers (PINs), pass phrases, mother's maiden name, favorite sports team, etc.
Something you have: Proximity cards, identification tokens, keys, identification badges, passports, certificates, transponders, smart cards, etc.
Something you are: Fingerprints, signatures, eye characteristics, facial characteristics, voiceprints, DNA.

These three authentication types can be combined to provide greater security. The combinations are called
factors of authentication (two-factor or three-factor).

1.1.2 Authorization Techniques


Once an access control subject has been authenticated and identified, the subject is
authorized to have a specific level or type of access to the access control object.
1.1.3 Accounting Techniques
The access control system records every security-related transaction, providing accountability.
Accountability within a system means that anyone using the system is tracked and held accountable or
responsible for their actions. Examples: authentication audit trail or log, privilege elevation audit trail or
log.
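As a hedged illustration of an accounting mechanism, the audit trails above can be sketched with Python's standard logging module (the event names and format are invented for the example):

```python
import logging

# Minimal authentication audit trail: every security-relevant event is
# recorded with who, what, and the outcome (logging adds the timestamp).
audit = logging.getLogger("audit")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def record_auth_attempt(user, success):
    """Authentication audit trail entry."""
    audit.info("AUTH user=%s result=%s", user, "success" if success else "failure")

def record_privilege_elevation(user, role):
    """Privilege elevation audit trail entry."""
    audit.info("ELEVATE user=%s new_role=%s", user, role)

record_auth_attempt("alice", True)
record_privilege_elevation("alice", "admin")
```

In practice the handler would write to protected storage rather than the console, so that the trail itself cannot be tampered with by the subjects it tracks.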
1.1.4 Password Administration
Password administration is an important part of any access control system. The selection, management,
and auditing of passwords must occur through either automated or administrative methods.
o Password Selection: Policy generally deals with minimum password length, required character usage,
Password expiry, password reuse etc.
o Password Management: Anything that happens to the password during its entire life cycle, from a
user needing their password reset to automatic password expiry.
o Password audit and control: To determine the overall functionality of the access control system to
help to reduce unauthorized access and attacks. Good audit logging practice can be followed.
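The selection rules above can be automated; a minimal sketch of a password policy check (the specific length, character-class, and history values are illustrative, not taken from the source):

```python
import re

# Illustrative policy values; a real policy is set by the organization.
MIN_LENGTH = 8
HISTORY_SIZE = 5  # number of previous passwords that may not be reused

def password_acceptable(candidate, history):
    """Check a candidate password against a simple selection policy:
    minimum length, required character usage, and no reuse."""
    if len(candidate) < MIN_LENGTH:
        return False
    # Require at least one lowercase, uppercase, digit, and symbol.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    if not all(re.search(c, candidate) for c in classes):
        return False
    # Disallow reuse of the most recent passwords.
    if candidate in history[-HISTORY_SIZE:]:
        return False
    return True

print(password_acceptable("Tr0ub4dor&3", []))  # True
print(password_acceptable("password", []))     # False: fails character usage
```

Expiry would be handled separately by the account maintenance process, since it depends on time rather than on the password's content.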
1.1.5 Assurance
In order to provide assurance, the following four questions must be answered: (i.e. CIA + Accountability)
Are transactions between the access control subject and access control object confidential?
Is the integrity of the access control object ensured and guaranteed?
Is the access control object available to be accessed when needed?
Is the access control system accountable for what it authenticates (e.g. logging/auditing)?
If all four can be answered affirmatively, then assurance has been properly provided.

1.2 Access Control Administration


After implementation of the access control system, administering the process is the major task. This involves
following factors.
Account Administration: Administration of all user, system, and service accounts used within the access control
system. This includes creation (authorization, rights, permissions), maintenance (account lockout-reset, audit,
password policy), and destruction (rename or delete) of accounts.

Access Rights and Permissions: The owner of the data should decide any rights and permissions for a specific
account. The principle of least privilege will be used here to grant all the rights and permissions necessary to an
account to perform the required duties, but not more than required or needed.
Monitoring: The changes to accounts, the escalation of privileges should be logged and should be constantly
monitored for security.
Removable Media Security: Any removable media attached to the system can be a vulnerability. All removable
media should be restricted or controlled in some manner to provide for the best possible system security.
Management of Data Caches: Access control is not only for users - any type of information on the
system needs to be considered - e.g. temporary data caches (pagefile, Dr. Watson logs, .tmp files, etc.)

1.3 Access Control Models, Methodologies and Implementation


1.3.1 Types of Access Control Policies
Access control policies are put into place to mitigate and control security risks. These policies are the
guidelines that should be followed by both automated access control systems and actual physical
security. Following policies work together to support a global access control policy.
Types of Policies:
Preventive: Policies for preventing vulnerabilities from being exploited. E.g. patching policy, background checks, data classification, separation of duties, etc.
Detective: Policies implemented to detect when an attack is occurring. E.g. IDS, log monitoring, etc.
Corrective: Policies that address immediate corrective action after a vulnerability has been exploited. These policies include disaster recovery plans, emergency restore procedures, password lockout thresholds, etc.

NB: Don't confuse with Control types (p.14)

1.3.2 Access Control Policy Implementation Methods


Administrative: In this implementation, a policy is administratively controlled through workplace
policies or orders passed down to subordinates. Administrative implementations do not have any
automated steps built in and require people to do as they are told, but they offer an easy way to
implement a first line of defense. E.g. a written policy on passwords (length, expiration, lockout), etc.
Logical/Technical: In this implementation, automated methods are used to enforce access control
policies. This type of implementation restricts human error during the operation stage. E.g. actual
password restrictions implemented in software (length, expiration, lockout), use of SSL, SSH, etc.
Physical: This type of implementation includes everything from controlling access to a secure building
to protecting network cabling from electro-magnetic interference (EMI). Example: Security guards,
Biometric devices, ID badges, Perimeter defenses (walls/fences), Physical locks.
Note 1: The policies and implementations may be combined - i.e. Preventive/Administrative (e.g.
written password policy); Detective/Logical-Technical (e.g. IDS); Corrective/Administrative (e.g.
disaster recovery plan). Some, e.g. CCTV, may be seen as Preventive/Physical (when recording
only) and Detective/Physical (when being actively monitored).
Note 2: Don't confuse the SSCP usage of policy with Windows Policies (e.g. min password length etc)
1.3.3 Access Control Models
Discretionary Access Control (DAC): The data owner decides the access. (Owner can change
permissions).
Mandatory Access Control (MAC): The system decides the access depending on the classification
(sensitivity label). Stronger than DAC. (Only central admin can change permissions, but the data owner
still decides on the data classification).
Role-based access control (RBAC) aka Non-Discretionary: The role of the user/task (subject)
determines the access to the data object. Uses a centrally administrated set of controls to determine how
subjects and objects interact.
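As an illustration of the RBAC model, access can be reduced to a centrally administered role-to-permission table; the users, roles and permissions below are invented for the example:

```python
# Centrally administered role -> permission table (RBAC). Subjects gain
# access only through their assigned roles, never individually.
ROLE_PERMISSIONS = {
    "clerk":   {"read:orders"},
    "manager": {"read:orders", "write:orders", "read:reports"},
}
USER_ROLES = {"alice": {"manager"}, "bob": {"clerk"}}

def rbac_allows(user, permission):
    """Permit an action only if one of the user's roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(rbac_allows("bob", "write:orders"))    # False: clerk role lacks it
print(rbac_allows("alice", "write:orders"))  # True: granted via manager role
```

Note the contrast with DAC, where the data owner would edit permissions per object, and MAC, where a label comparison rather than a role table would decide.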

Formal Models:
1. Biba
First formal model to address integrity. The Biba model bases its access control on levels of integrity. It
consists of three primary rules.

1. A subject at a given integrity level X can only read objects at the same or higher integrity levels - the
simple integrity axiom.
2. A subject at integrity level X can only write objects at the same or lower integrity levels - the * (star)
integrity axiom.
3. A subject at integrity level X can only invoke a subject at the same or lower integrity levels.
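The three Biba rules can be expressed as small comparisons; a sketch using integer integrity levels, where a higher number means higher integrity (an assumption of this example, not a rule from the source):

```python
def biba_can_read(subject_level, object_level):
    """Simple integrity axiom: read only at the same or higher integrity
    (low-integrity data must not contaminate the subject)."""
    return object_level >= subject_level

def biba_can_write(subject_level, object_level):
    """* (star) integrity axiom: write only at the same or lower integrity
    (the subject must not taint higher-integrity objects)."""
    return object_level <= subject_level

def biba_can_invoke(subject_level, target_level):
    """Invocation property: invoke subjects only at the same or lower level."""
    return target_level <= subject_level

print(biba_can_write(2, 3))  # False: cannot write up
print(biba_can_read(2, 3))   # True: reading up is allowed
```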
2. Clark/Wilson
This model is similar to Biba, as it addresses integrity. Protecting the integrity of information by focusing on
preventing authorized users from making unauthorized modifications of data, fraud, and errors within
commercial applications.
It uses segregation of duties or separation of duties. The principle of segregation of duty states no single
person should perform a task from beginning to end, but that the task should be divided among two or more
people to prevent fraud by one person acting alone. This ensures the integrity of the access control object by
securing the process used to create or modify the object.
3. Bell/LaPadula
This formal model specifies that each access control object has a minimum security level assigned to it, so
that access control subjects with a security level lower than that of the object are unable to
access the object. The Bell-LaPadula formal model only addresses confidentiality. It is what the MAC
model is based on. Bell-LaPadula also formed the basis of the original "Orange Book".
Note: Bell-LaPadula does not address integrity or availability. Remember: No read up / No write down.
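The no-read-up / no-write-down rules invert the Biba comparisons, since here the concern is confidentiality; a sketch with integer clearance levels, where a higher number means more sensitive (an assumption of this example):

```python
def blp_can_read(subject_clearance, object_level):
    """Simple security property: no read up - a subject cannot read
    objects classified above its clearance."""
    return subject_clearance >= object_level

def blp_can_write(subject_clearance, object_level):
    """* (star) property: no write down - writing to a lower level
    would leak sensitive data downward."""
    return object_level >= subject_clearance

print(blp_can_read(2, 3))   # False: no read up
print(blp_can_write(2, 1))  # False: no write down
```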
ORANGE BOOK: The Department of Defense Trusted Computer System Evaluation Criteria (TCSEC) book,
or the Orange book. The Orange book requires that the system be configured as standalone.
Division                      Level  Definition
A: VERIFIED PROTECTION        A1     Verified Protection/Design
B: MANDATORY PROTECTION       B1     Labeled Security Protection
                              B2     Structured Protection
                              B3     Security Domains
C: DISCRETIONARY PROTECTION   C1     Discretionary Security
                              C2     Controlled Access Protection
D: MINIMAL PROTECTION         None   Minimal Protection Security - Evaluated and failed

RED BOOK: Is in 2 parts Trusted Network Interpretation of the TCSEC and Trusted Network
Interpretation Environments Guideline: Guidance for Applying the Trusted Network Interpretation. The
guidelines within this book are as strict as the Orange book itself, but it is designed to work with networked
environments.
Note: The Orange book does NOT address integrity.
1.3.4 Access Control Methodologies
Centralized access control: All access control queries being directed to a central point of authentication.
This type of system allows for a single point of administration for the entire access control system.
It decreases the administrative effort, but can raise costs and be more difficult to implement. Examples:
Kerberos, Remote Authentication Dial-In User Service (RADIUS), Terminal Access Controller Access
Control System (TACACS), and TACACS+ (which allows encryption of data).
Decentralized access control:
Access control system is not centralized to a single computer system or group of systems. Offers the
advantage of providing for access control system functionality in cases where connectivity to a centralized
access control system is difficult. Difficult to maintain a decentralized access control system compared to a
centralized access control system. Some examples of this are a Windows workgroup where every member of
the workgroup handles access control, or a database system that handles its own authentication.

1.4 Remote Authentication


To provide reliable authentication for remote users in small organizations it is possible to use the default
authentication method of the software being used for remote access. For large organization the following
authentication methods are used: Remote Authentication Dial-In User Service (RADIUS) and Terminal Access
Controller Access Control System (TACACS/TACACS+).
1.4.1 RADIUS
Using RADIUS, a remote access server accepts the authentication credentials from the access control
subject and passes them along to the RADIUS server for authentication. The RADIUS server then responds
to the remote access server either with authorization or denial. A major advantage of RADIUS is that
communication between the RADIUS server and the remote access server is encrypted, which helps increase
the overall security of access control.
1.4.2 TACACS
Older, does not use encryption, and is less often used. It allows for a centralized access control approach that
keeps all access control changes isolated to a single place. When the TACACS server receives the
identification data, it either returns authorization information or denies access to the user. This information
is passed back to the remote access server in clear text and the remote access server responds appropriately.
1.4.3 TACACS+
Same as TACACS, except that the authentication information travels across the network in an encrypted format.

1.5 Single Sign On (SSO)


With SSO, the user authenticates once, and the fact that they have been authenticated is passed on to each system
that they attempt to access. Some SSO products are:
- Kerberos (see below)
- NetSP
- SESAME
- Kryptoknight
- X.509 (think NSD)
- Snareworks

Advantages of SSO:
- Hire/fire and enable/disable access to systems quickly and efficiently.
- Reduced admin effort of forgotten passwords.
- Improved end user experience.

Disadvantages of SSO:
- Cost.
- Difficult to implement.
- If a user's SSO password is compromised, the attacker has access to all of that user's systems.

1.5.1 Kerberos (see p. 47-50 of SSCP book for more)


Kerberos is a network authentication protocol designed to provide strong authentication for client/server
applications through the use of symmetric-key authentication and tickets (authentication tokens). Kerberos
systems use symmetric secret keys, and a Kerberos server must hold copies of all keys, which requires a
great deal of physical security. It allows for cross-platform authentication.
Kerberos has a Key Distribution Center (KDC), which holds all keys and provides central authentication
services. It uses time-stamping of its tickets to help ensure they are not compromised or replayed,
and an overall structure of control called a realm. Because of the time-stamping, it is important that the
clocks of the systems involved are synchronized. Kerberos is susceptible to replay attacks if a ticket is
compromised within its allotted time frame.
The Authentication Service (AS) is the part of the KDC that authenticates clients. The Ticket Granting
Service (TGS) makes the tickets and issues them to clients.
User Logon process:
1. User identifies themselves and presents their credentials to the KDC (password, smart card etc)
2. The AS authenticates the credentials.
3. The TGS issues a Ticket Granting Ticket (TGT) that is associated with the client token.
The TGT expires when the user ends their session (disconnects/logs off) and is cached locally for the
duration of the session.
Resource Access process:
As above then
4. The TGT is presented to the KDC along with details of remote resource the client requires access to.
5. The KDC returns a session ticket to the client.
6. The session ticket is presented to the remote resource and access is granted.
Note: Kerberos does NOT address availability.
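The logon and resource-access steps above can be sketched as a toy simulation. This models only the message flow; real Kerberos encrypts tickets with symmetric keys and operates within realms, and the names and five-minute lifetime below are illustrative, not from the source:

```python
import time
import uuid

# Toy KDC: the AS authenticates, the TGS issues tickets.
CREDENTIALS = {"alice": "s3cret"}
TGT_LIFETIME = 300  # seconds; illustrative value

def as_authenticate(user, password):
    """Authentication Service (AS): verify credentials, return a TGT."""
    if CREDENTIALS.get(user) != password:
        return None
    return {"user": user, "id": uuid.uuid4().hex,
            "expires": time.time() + TGT_LIFETIME}

def tgs_issue_session_ticket(tgt, resource):
    """Ticket Granting Service (TGS): exchange a valid TGT for a session ticket."""
    if tgt is None or time.time() > tgt["expires"]:
        return None  # expired or missing TGT: why clock sync matters
    return {"user": tgt["user"], "resource": resource}

tgt = as_authenticate("alice", "s3cret")              # steps 1-3: logon
ticket = tgs_issue_session_ticket(tgt, "fileserver")  # steps 4-5
print(ticket["resource"])  # fileserver (step 6: present ticket to the resource)
```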

2.0 ADMINISTRATION
2.1 Security Administration Principles

Authorization: Once an access control subject is authenticated and identified, the subject is authorized to have a specific level or type of access to the access control object.
Identification and Authentication: Identification works with authentication, and is defined as a process through which the identity of an object is ascertained. Identification takes place by using some form of authentication.
Accountability: Accountability within a system means that anyone using the system is tracked and held accountable or responsible for their actions. Example: authentication audit trail or log, privilege elevation audit trail or log.
Non-repudiation: Non-repudiation is an attribute of communications that seeks to prevent future false denial of involvement by either party. Non-repudiation is consequently an essential element of trust in e-business.
Least privilege: The principle of least privilege states that a user should be given just enough access to the system to perform the duties required by their job. Elevated levels of access should not be granted until they are required to perform job functions. Owners of the information in a system are responsible for the information and are the appropriate authority for authorizing access level upgrades for the system users under their control.
Data Classification: The primary purpose of data classification is to indicate the level of confidentiality, integrity and availability that is required for each type of information. It helps to ensure that the data is protected in the most cost-effective manner. The data owner always decides the level of classification.

2.2 CIA Triad


Confidentiality
Confidentiality means that information on the system/network is safe from disclosure to unauthorized individuals.
Integrity
Integrity of information within an organization means the information is whole and complete and has not been
altered in any way, except by those authorized to manipulate it. The integrity of a computer or information
system could impact the integrity of the critical information contained therein. Losing the integrity of a system or
the information in that system means that the information can no longer be trusted.
Availability
Availability means having the information available right when it is needed. When availability is considered
with respect to the critical information within an organization, it is easy to see why it is so crucial that the
information is always there when it is needed.

2.3 Data classification


To decide what level of protection and how much, it is recommended to conduct an analysis which will
determine the value of information and system resources. The value can be determined by analyzing the impact
due to loss of data, unauthorized disclosure of information, the cost of replacement, or the amount of
embarrassment/loss of reputation. Each level is defined to have more serious consequences if it is not protected.
Remember, the data owner always decides the level of classification. Common classification levels (from
highest to lowest):

Commercial: Confidential, Private, Sensitive, Public
Military: Top Secret, Secret, Confidential, Sensitive but unclassified, Unclassified

2.4 System Life Cycle Phases and Security Concerns

Applies to new developments and systems improvements and maintenance.


Security should be included at each phase of the cycle.
Security should not be addressed at the end of development because of the added cost, time and effort.
Separation of duties should be practiced in each phase (e.g. programmer not having production access).
Changes must be authorized, tested and recorded. Any changes must not affect the security of the
system or its capability to enforce the security policy.

The seven phases are:

1. Project Initiation (Requirements Analysis)


Initial study and conception of project.
InfoSec involvement:
 Perform Risk Assessment to:
- Define sensitivity of information and level of protection needed
- Define criticality of system and security risks
- Ensure regulatory/legal/privacy issues are addressed and compliance to security standards
2. Project Definition, Design & Analysis (Software Plans and Requirements)
Functional/system design requirements
Ensure requirements can be met by application.
InfoSec involvement:
 Determine acceptable level of risk (level of loss, percentage of loss, permissible variance)
 Identify security requirements and controls
- Determine exposure points in process - i.e. threats and vulnerabilities
- Define controls to mitigate exposure
 Due diligence, legal liabilities and reasonable care
3. System Design Specification
Detailed planning of functional components
Design of test plans and program controls
InfoSec involvement:
 Incorporate security mechanisms and verify program controls
 Evaluate encryption options
4. Software Development
Writing/implementing the code/software
InfoSec involvement:
 Develop information security-related code
 Implement unit testing
 Develop documentation
5. Implementation, Evaluation and Testing
Installing system software and testing software against requirements
Documenting the internal design of the software
InfoSec involvement:
 Conduct acceptance testing
 Test security software
 Certification and accreditation (where applicable)
6. Maintenance
Product changes and bug fixes
InfoSec involvement:
 Penetration testing and vulnerability assessment
 Re-certification
7. Revision or Disposal
Major modification to the product
Evaluation of new requirements and making decision to replace rather than re-code
InfoSec involvement:
 Evaluation of major security flaws
 Security testing of major modifications

2.5 Due Diligence / Due Care


The concepts of due diligence and due care require that an organization engage in good business practices
relative to the organization's industry.
Due Diligence is the continual effort of making sure that the correct policies, procedures and standards are in
place and being followed. Due diligence may be mandated by various legal requirements in the organization's
industry or by compliance with governmental regulatory standards.
An example of Due Care is training employees in security awareness as opposed to simply creating a policy
with no implementation plan or follow-up. Another example is requiring employees to sign statements that they
have read and understood appropriate acceptable use policies.
In lay terms, due diligence is the responsibility a company has to investigate and identify issues, and due care is
doing something about the findings.

2.6 Certification / Accreditation / Acceptance / Assurance


Certification deals with testing and assessing the security mechanism in a system.
Accreditation pertains to the management formally recognizing the system and its security level OR being approved
by a designated approving authority.
Acceptance designates that a system has met all security and performance requirements that were set for the project.
Assurance is a term used to define the level of confidence in system.
Once a system is built the certification process begins to test the system for all security and functional requirements. If
the system meets all requirements it gains accreditation. Accredited systems are then accepted into the operational
environment. This acceptance is because the owners and users of the system now have a reasonable level of assurance
that the system will perform as intended, both from a security and functional perspective.

2.7 Data/information Storage


Primary: Main memory, which is directly accessible by the CPU - volatile, and loses its contents on power failure.
Secondary: Mass storage devices (hard drive, floppy drive, tapes, CDs). Retains data even if the computer is off.
Real (physical) memory: Refers to main memory, or random-access memory (RAM).
Virtual memory: "Imaginary" memory area supported by the OS and implemented in conjunction with hardware.
RAM (Random Access Memory): Can read and write data.
ROM (Read Only Memory): Can only read data; holds instructions for starting up the computer.
PROM (Programmable Read Only Memory): Memory chip on which a program can be stored once, but which
cannot be wiped and reused to store other data.
EPROM (Erasable Programmable Read Only Memory): Can be erased by exposure to ultraviolet light.
EEPROM (Electrically Erasable...PROM): Can be erased by exposure to an electrical charge.

2.8 System Security Architecture


Deals specifically with those mechanisms within a system that ensure information is not tampered with while it is
being processed or used. Different levels of information are labeled and classified based upon their sensitivity.
There are three common modes used to control access to systems containing classified information:
Note: All are MAC models.
2.8.1 System High Mode
Proper clearance required for ALL information on the system. All users that have access must have a
security clearance that authorizes their access. Although all users have access, they may not have a need
to know for all the information because there are various levels of information classification. The levels
of information classification are clearly labeled to make it clear what the access requirements are. All
users can access SOME data, based on their need to know.
2.8.2 Compartment Mode
Proper clearance required for THE HIGHEST LEVEL of information on the system. All users that
have access to the system must have a security clearance that authorizes their access. Each user is
authorized to access the information only when a need to know requirement can be justified. A strict
documentation process tracks the access given to each user and the individual who granted the access.

All users can access SOME data, based on their need to know and formal access approval.

2.8.3 Multilevel Secure Mode (MLS)


Proper clearance required for ALL information on the system. All users that have access to the system
must have a security clearance that authorizes their access. Uses data classification and Mandatory
Access Control (MAC) to secure the system. Processes and data are controlled. Processes from lower
security levels are not allowed to access processes at higher levels. All users can access SOME data,
based on their need to know, formal access approval and clearance level.
In addition, there are several system security architecture concepts that may be applied:
2.8.4 Hardware Segmentation
Within a system, memory allocations are broken up into segments that are completely separate from
one another. The kernel within the operating system controls how the memory is allocated to each
process and gives just enough memory for the process to load the application and the process data. Each
process has its own allocated memory and each segment is protected from one another.
2.8.5 Trusted computing base
Is defined as the total combination of protection mechanisms within a computer system. Includes
hardware, software and firmware. Originated from the Orange Book.
Security perimeter: Defined as resources that fall outside of TCB. Communication between trusted
components and un-trusted components needs to be controlled to ensure that confidential information
does not flow in an unintended way.
Reference monitor: Is an abstract machine (access control system), which mediates all access that
subjects have to objects to ensure that the subjects have the necessary access rights and to protect the
objects from unauthorized access and destructive modification. Compares access level to data
classification to permit/deny access.
Security kernel: Made up of mechanisms (h/w, s/w, firmware) that fall under the TCB and implement
and enforce the reference monitor. At the core of TCB and is the most common approach to building
trusted systems. Must be isolated from the reference monitor.
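A reference monitor can be illustrated as a single mediation point that compares a subject's access level to the object's classification; the labels, users and files below are invented for the example:

```python
# Toy reference monitor: every access request passes through check(),
# which compares subject clearance to object classification.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

OBJECT_LABELS = {"salary.xls": "secret", "newsletter.txt": "public"}
SUBJECT_CLEARANCE = {"alice": "secret", "bob": "internal"}

def check(subject, obj):
    """Mediate all access: permit only if clearance >= classification."""
    return LEVELS[SUBJECT_CLEARANCE[subject]] >= LEVELS[OBJECT_LABELS[obj]]

print(check("bob", "salary.xls"))    # False: insufficient clearance
print(check("alice", "salary.xls"))  # True
```

The essential properties are that the monitor is always invoked (no access path bypasses it), tamper-proof, and small enough to be verified.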
2.8.6 Data Protection Mechanisms
Layered design: Layered design is intended to protect operations that are performed within the kernel.
Each layer deals with a specific activity: the outer layer performs normal tasks (least trusted) and the
inner layer more complex and protected (most trusted) tasks. Segmenting processes like this mean that
untrusted user processes running in the outer layers will not be able to corrupt the core system.
Data abstraction: Data abstraction is the process of defining what an object is, what values it is
allowed to have, and the operations that are allowed against the object. The definition of an object is
broken down to its most essential form, leaving only those details required for the system to operate.
Data hiding: Data hiding is the process of hiding information available to one process level in the
layered model from processes in other layers. Data hiding is a protection mechanism meant to keep the
core system processes safe from tampering or corruption.
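Data abstraction and data hiding correspond loosely to encapsulation in object-oriented languages; a minimal sketch (note that Python's name mangling only discourages outside access, it does not enforce hiding):

```python
class Account:
    """Data abstraction: the object is defined by the operations allowed
    on it (deposit, balance), not by its internal representation."""

    def __init__(self):
        # Data hiding: internal state is not part of the public interface.
        self.__balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")  # only valid values allowed
        self.__balance += amount

    @property
    def balance(self):
        return self.__balance

acct = Account()
acct.deposit(50)
print(acct.balance)  # 50
```

Callers can only reach the balance through the defined operations, which is the same principle that keeps inner-layer kernel state safe from outer-layer processes.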

2.9 Change Control / Configuration Management (p.135-139)


Changes occur both in the application development process and in normal network and application upgrades,
and the requester does not necessarily understand the impact of these changes. Changes are unavoidable at
every stage of system development. Change control does not apply as strictly to the development process as to
production systems.
Change control helps ensure that the security policies and infrastructure that have been built to protect the
organization are not broken by changes that occur on the system from day to day.
Configuration management is the process of identifying, analyzing and controlling the software and hardware
of a system. The process starts with configuration change request submitted to the configuration control board
(CCB). The board will review the effect of change and approve or reject it.
Tools used for change control / configuration management: Checksum (e.g. MD5 hash), Digital signatures,
IDS, file integrity monitors, Enterprise security manager, Software Configuration management. These tools can
all be used to verify the integrity of both production and development files/software and help ensure that the
organization does not suffer an outage because of bad changes (i.e. changes planned correctly but implemented
with flaws because of, for example, corrupt files.). They can also help create "golden images" of production
data/configurations.
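The checksum tools above all rest on the same primitive: record a digest of the known-good file, then compare later digests against it. A minimal sketch with Python's standard library (SHA-256 is used here for illustration; the text mentions MD5):

```python
import hashlib

def file_digest(path):
    """Hash a file in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, known_good):
    """Compare against the digest recorded at change-approval time."""
    return file_digest(path) == known_good
```

In a change control process, digests would be recorded when the CCB approves a change; any later mismatch flags an out-of-course modification or a corrupted file.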
It is important to enforce the change control / configuration management process. Some tools for detecting
violations are: NetIQ, PentaSafe, PoliVec and Tripwire. These tools offer solutions for monitoring the
configuration of systems and alerting on out-of-course changes.


2.10 Policy, Standard, Guidelines, Baselines


Security Policy: A general statement written by senior management to dictate what type of role security plays within the organization - it also provides scope and direction for all further security.
Standards: Specify how hardware/software products are to be used. They provide a means to ensure that specific technology, applications, parameters and procedures are carried out in a uniform way. These rules are usually compulsory within a company and need to be enforced.
Baselines: Provide the minimum level of security necessary throughout the organization.
Guidelines: Recommended actions and operational guides for when a specific standard does not apply.
Procedures: Step-by-step actions to achieve a certain task.

2.11 Roles and Responsibilities


Senior Manager: Ultimately responsible for security of the organization and the protection of its assets.
Security professional: Functionally responsible for security and carries out the senior manager's directives.
Data Owner: Usually a member of management and ultimately responsible for the protection and use of data. Decides upon the classification of the data. Will delegate the responsibility of the day-to-day maintenance of the data to the data custodian.
Data Custodian: Given the responsibility of the maintenance and protection of the data.
User: Any individual who routinely uses the data for work-related tasks. Must have the necessary level of access to the data to perform their duties.

2.12 Structure and Practices


Separation of duties: Makes sure that one individual cannot complete a risky task by herself. More than one person would need to work together to cause some type of destruction or fraud, and this drastically reduces the probability of exploitation.
Non-disclosure agreements: Protect the company if/when an employee leaves for one reason or another.
Job rotation: No one person should stay in one position for a long period of time because it can end up giving too much control to this one individual.

2.13 Security Awareness


The weakest link in any security program is the users. Part of a quality security program is teaching users what security means to the organization and how each user impacts the process.
Make security part of the hiring process.
Gain support from upper management.
Provide tailored security and policy training:
- Security-related job training for operators
- Awareness training for specific departments or personnel groups with security-sensitive positions
- Technical security training for IT support personnel and system administrators
- Advanced training for security practitioners and information system auditors
- Security training for senior managers, functional managers and business unit managers/other group heads
Perform clean-desk spot checks.
Lead by example.

2.14 Security Management Planning


A security plan is meant to provide a roadmap for the organization concerning security. It is also meant to be
specific to each organization. Because they are unique the process may differ for each organization but generally
the steps are:
Define the Mission and Determine Priorities (p.151)
Determine the Risks and Threats to Priority Areas (p.151)
Create a Security Plan to Address Threats (p.152)
- Develop security policies (p.152)
- Perform security assessment (p. 153)
- Identify security solutions (p. 153)
- Identify costs, benefits, and feasibility of solutions and finalize the plan (p.153)
- Present the plan to the higher management in order to gain management buy-in (p.153)


2.15 Common Development of a Security Policy

The phases of the common development process of a security policy are:
Initiation & Evaluation: Writing a proposal to management that states the objectives of the policy.
Development: Drafting and writing the actual policy, incorporating the agreed objectives.
Approval: The process of presenting the policy to the approval body.
Publication: Publishing and distributing the policy within the organization.
Implementation: Carrying out and enforcing the objectives of the policy.
Maintenance: Regularly reviewing the policy to ensure currency (may be on a scheduled basis).


3.0 AUDIT AND MONITORING


Auditing is the process of verifying that a specific system, control, process, mechanism, or function meets a defined
list of criteria. It gives security managers the ability to determine compliance with a specific policy or standard,
and is often used to provide senior management with reports on the effectiveness of security controls. Monitoring is
the process of collecting information to identify security events and report them in a prescribed format.

3.1 Control Types

Directive: Usually set by management or administrators to ensure that the requisite actions or activities for maintaining policy or system integrity take place.
Preventive: To inhibit persons or processes from being able to initiate actions or activities that could potentially violate the policy for which the control was devised.
Detective: To identify actions or activities from any source that violate the policy for which the control was devised. Detective controls often act as a trigger for a corrective control.
Corrective: To act upon a situation where a policy has been violated. Often called countermeasures, they act in an automated fashion to inhibit the particular action/activity that violated a policy from becoming more serious than it already is. Used to restore controls.
Recovery: To act upon a situation where a policy has been violated. Recovery controls attempt to restore the system or processes relating to the violation in policy to their original state.

3.2 Security Audits


The auditing process provides a well-defined set of procedures and protocols to measure compliance or deviation
from applicable standards, regulations etc.
Auditing goals should be coupled with governance. Ensures that auditing goals align with the business goals.
Governance considers organizational relationships and processes that directly affect the entire enterprise.
Once the goal of an audit has been clearly identified, the controls required to meet the objective can be planned; this is often called the control objective.
3.2.1 Internal and External Security Audit
Internal auditors are employees of the organization in which the audit is taking place. They examine
the existing internal control structure for compliance to policies and help management accomplish
objectives through a disciplined approach to governance, control and risk mitigation.
External auditors are often hired as external contractors to address specific regulatory requirements.
Organizations should always check the credentials of the third party before starting the audit and a nondisclosure agreement should be signed (as a minimum).
3.2.2 Auditing Process
The Department of Defense (DoD) provides detailed steps that are particular to an IT audit:
1. Plan the audit
   - Understand the business context of the security audit
   - Obtain required approvals from senior management and legal representatives
   - Obtain historical information on previous audits, if possible
   - Research the applicable regulatory statutes
2. Determine the existing controls in place and the risk profile
   - Assess the risk conditions inherent to the environment
   - Evaluate the current security posture using a risk-based approach
   - Evaluate the effectiveness of existing security controls
   - Perform detection/control risk assessment
   - Determine the total resulting risk profile
3. Conduct compliance testing
   - Determine the effectiveness of policies and procedures
   - Determine the effectiveness of segregation of duties
4. Conduct substantive testing
   - Verify that the security controls behave as expected
   - Test controls in practice
5. Determine the materiality of weaknesses found
   - If the security exploits found were to be executed, what would be the tangible ($) impact and the intangible (reputation) impact to the business?
   - Determine if the security exploits increase the organizational risk profile
6. Present findings
   - Prepare the audit report and the audit opinion
   - Create recommendations


3.2.3 Audit Data Sources (p. 192)

Audit sources are locations from where audit data can be gathered for evaluation and analysis. The
auditor should always consider the objectivity of the information source. Audit data can be gathered
from a number of locations such as:
- Organization charts
- Network topology diagrams
- Business process and development documentation
- Hardware and software inventories
- Informal interviews with employees
- Previous audit reports

3.2.4 Audit Trails


Audit trails are a group of logs or relevant information that make up the set of evidence related to a
particular activity. For every action taken on an information system there should be a relevant log entry
containing information about the name of the system, the userid of the user, what action was taken and
the result of the action.
One of the most difficult aspects of establishing an audit trail is ensuring audit trail integrity.
Integrity of the audit trail is crucial to event reconstruction of a security incident. It is important to
protect the audit trail from unauthorized access and log tampering. The use of a Central Logging
Facility (CLF) to maintain disparate system logs is recommended. Backups of audit logs should also be
considered.
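The tamper-evidence goal above can be illustrated with a hash chain. This is a hypothetical sketch, not any specific product's mechanism: each entry's digest incorporates the previous digest, so altering any stored entry invalidates every digest that follows it.

```python
import hashlib

SEED = "audit-chain-seed"  # hypothetical starting value for illustration

def chain_entries(entries, seed=SEED):
    """Hash-chain log entries: each digest covers the entry text plus
    the previous digest, so modifying any stored entry breaks
    verification of everything after it."""
    digest = hashlib.sha256(seed.encode()).hexdigest()
    chained = []
    for entry in entries:
        digest = hashlib.sha256((digest + entry).encode()).hexdigest()
        chained.append((entry, digest))
    return chained

def chain_intact(chained, seed=SEED):
    """Recompute the chain and compare against the stored digests."""
    digest = hashlib.sha256(seed.encode()).hexdigest()
    for entry, stored in chained:
        digest = hashlib.sha256((digest + entry).encode()).hexdigest()
        if digest != stored:
            return False
    return True
```

Shipping the chained digests to a separate system (such as the CLF mentioned above) makes after-the-fact tampering on the monitored host detectable.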
Audit log reviews should be done at a level of detail that allows general inferences to be made about
host activity, yet granular enough to investigate a particular event further.
Audit trails provide a method of tracking or logging that allows for tracing security-related activity.
Useful audit trails include:
- Password changes
- Privilege use
- Privilege escalation
- Account creations and deletions
- Resource access
- Authentication failures

System Events provide triggers that are captured in the audit trail and used to demonstrate a pattern of
activity. The following are examples of events tracked:
- Admin/operator actions
- Resource access denials
- Resource access approvals
- Startup and shutdown
- Log in and log off
- Object create, delete, and modify

Sampling and Data Extraction is done when there is no original data available. In this case, the
administrator would have to use collection techniques such as interviews or questionnaires to extract
the data from a group of respondents. Data sampling allows them to extract specific information. This is
most often used for the detection of anomalous activity.
Retention periods indicate how long media must be kept to comply with regulatory constraints. The
key question is "how long is long enough"? Largely depends on regulatory/compliance issues.
3.2.5 Security Auditing Methods
The auditing methods should be well documented, and proven to be reconstructable if required. The
frequency of review depends on the type and importance of audit. The security audit report should
highlight all the findings and recommendations whenever required. The following are two types of
methods that are commonly used for security audits.

Penetration testing (p.201): Classified as proactive security audit, by testing security controls via
a simulation of actions that can be taken by real attackers.
When preparing for a penetration test, a list of attacks that will take place has to be generated or
mapped. This list of attacks can be likened to an audit checklist. A responsible penetration test
requires careful coordination and planning to minimize the likelihood of negative impact to an
organization.


A penetration test is the authorized, scheduled and systematic process of using known
vulnerabilities and exploiting the same in an attempt to perform an intrusion into host, network,
physical or application resources.
The penetration test can be conducted on internal (a building access or Intranet host security
system) or external (the company connection to the Internet) resources. It normally consists of
using automated and manual testing of organization resources. The process includes:
- Host identification - i.e. identification of open ports and services running.
- Fingerprinting the OS and applications running - i.e. identification of OS version and applications running (TCP/IP fingerprinting techniques, banner grabbing etc.)
- Creating a vulnerability matrix for the host according to the OS and application by using common sources such as SecurityFocus, CERT etc. to collate known vulnerabilities.
- Vulnerability analysis using automated tools (like ISS, Nessus etc) and manual techniques.
- Reporting the weaknesses and preparing the road map ahead.
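As an illustration of the host-identification step, a minimal TCP connect scan can be sketched in Python. This is a simplified, hypothetical example, not a real tool's implementation (tools like Nmap do far more), and any scan must of course be authorized:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Minimal TCP connect() scan for host identification.
    connect_ex returns 0 when the port accepted the connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Fingerprinting would then follow, e.g. by banner grabbing on each open port found.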

Checklist Audit (p.198): Standard audit questions are prepared as a template and used for a wide
variety of organizations (e.g. SPRINT).
If an auditor relies on the checklist too much and does not perform his or her own verification of
related details based on observations unique to the environment, a major security flaw could go
unnoticed. The same is true of software tools that automate the audit process and/or check for
security vulnerabilities (see CAATs below).
Other types of security audit methods are war-dialing (to see if there are any open modems),
dumpster diving (to test the effectiveness of the secure disposal of confidential information),
social engineering (to test employees' security behaviour) and war-driving (looking for unsecured
wireless access points).
3.2.6 Computer Assisted Audit Tool (CAAT)
A CAAT is any software or hardware used to perform audit processes. CAATs can help find errors,
detect fraud, identify areas where processes can be improved and analyze data to detect deviations from
the norm.
The advantage of using CAATs is the automation of manual tasks for data analysis. The danger of
using them is reliance on tools to replace human observation and intuition. Auditors should use
CAATs to exhaustively test data in different ways, test data integrity, identify trends, anomalies, and
exceptions and to promote creative approaches to audits while leveraging these tools.
Some examples of (mainframe-based) CAATs are: EZTrieve, CA-PanAudit, FocAudit and SAS. PCs
can also be used with spreadsheet/database programs for auditing, or a Generalized Audit Software (GAS)
tool can be used to perform these audit functions - e.g. Integrated Development Environment Application
(IDEA).
3.2.7 Central Logging Facility (CLF)
A CLF helps ensure that audit and system logs are sent to a secure, trusted location that is separate and
non-accessible from the devices that are being monitored.
A CLF can collect and integrate disparate data from multiple systems and help determine a pattern of
attack through data correlation. It can also reveal discrepancies between remote logs and logs kept on a
protected server - in this way it may detect log tampering.
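One simple correlation a CLF enables can be sketched as follows. The assumption here (invented for illustration, not from the source) is that every entry reaches the CLF before an attacker can delete it locally, so entries present centrally but missing from the host's own log point to local tampering:

```python
def missing_on_host(central_entries, host_entries):
    """Return entries held at the central logging facility that are
    absent from the host's local log - a possible sign that the local
    log was tampered with after the entries were forwarded."""
    return sorted(set(central_entries) - set(host_entries))
```

In practice correlation would also match timestamps and sequence numbers rather than raw entry text.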

3.3 Reporting and Monitoring Mechanism


The monitoring can be real-time, ad-hoc or passive, depending on the need and importance. To keep the system
security up to date, security administrators must constantly monitor the system and be aware of attacks as they
happen. Monitoring can be done automatically or manually, but either way a good policy and practice of
constant monitoring should be in place.
Warning Banners will warn the users of systems about their adherence to acceptable usage policy and their
legal liability. This will add to the process of legal requirements during prosecution of malicious users. In
addition the banners warn all users that anything they do on the systems is subject to monitoring.


Keystroke Monitoring is a process whereby computer system administrators view or record both the keystrokes
entered by a computer user and the computer's response during a user-to-computer session.
Traffic analysis allows data captured over the wire to be reported in human readable format for action.
Trend analysis draws on inferences made over time on historical data (mostly traffic). Can show how an
organization increases or decreases its compliance to policy (or whatever is being audited) over time.
Event Monitoring provides alerts, or notification, whenever a violation in policy is detected. IDSs typically
come to mind, but firewall logs, server/app logs, and many other sources can be monitored for event triggers.
Closed Circuit Television (CCTV) monitors the physical activity of persons.
Hardware monitoring is carried out for fault detection, and software monitoring for detecting the illegal
installation of software.
Alarms and signals work with IDS. An alarm allows an administrator to be made aware of the occurrence of a
specific event. This can give the administrator a chance to head off an attack or to fix something before a
situation gets worse. These notifications can include paging, calling a telephone number and delivering a
message, or notification of centralized monitoring personnel
Violation Reports are used extensively in monitoring an access control system. This type of report basically
shows any attempts at unauthorized access. This could simply be a list of failed logon attempts. Also
see the Clipping Level section below.
Honeypots are deliberately kept by the organizations for studying attackers' behavior and also in drawing
attention away from other potential targets.

Misuse detectors analyze system activity, looking for events or sets of events that match a predefined
pattern of events that describe a known attack. Sometimes called "signature-based detection." The most
common form of misuse detection used in commercial products specifies each pattern of events
corresponding to an attack as a separate signature
Intrusion Detection Systems (IDS) provide an alert when an anomaly occurs that does not match a predefined
baseline or if network activity matches a particular pattern that can be recognized as an attack. There are two
major types of intrusion detection:
- Network-based IDS (NIDS) which will sniff all network traffic and report on the results.
- Host-based IDS (HIDS) which will operate on one particular system and report only on items affecting that
system.
Intrusion detection systems use two approaches:
Signature-based identification (aka knowledge-based): detects known attacks via pattern matching; similar to a virus scan.
Anomaly identification (aka statistical-anomaly based or behaviour-based): looks for attacks indicated through abnormal behavior. The assumption here is that all intrusive events are anomalies, so a profile of what is considered normal activity must be built first.
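The two approaches can be contrasted with a toy sketch. The signature strings, baseline figures and threshold below are invented for illustration and are nothing like a real IDS's rule set:

```python
# Hypothetical known-attack patterns; a real NIDS ships thousands.
SIGNATURES = ["/etc/passwd", "DROP TABLE"]

def signature_hit(event: str) -> bool:
    """Knowledge-based detection: match events against known patterns."""
    return any(sig in event for sig in SIGNATURES)

def anomaly_score(value: float, mean: float, std: float) -> float:
    """Behaviour-based detection: distance from the normal profile,
    measured in standard deviations."""
    return abs(value - mean) / std

def anomalous(value, mean=5.0, std=2.0, threshold=3.0):
    """Flag activity that deviates too far from the learned baseline."""
    return anomaly_score(value, mean, std) > threshold
```

Note the trade-off the text describes: the signature check can never flag a novel attack, while the anomaly check needs the normal profile (mean/std) built first.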

3.4 Types of Attack


Dictionary Attack: A dictionary attack uses a flat text file containing dictionary words (sometimes in multiple
languages) and many other common words. These are systematically tried against the user's password.
Brute Force Attack: In this type of attack, every conceivable combination of letters, numbers, and symbols is
systematically tried against the password until it is broken. It may take an incredibly long time because of the
number of permutations and combinations that must be tried.
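Both attacks can be sketched against an unsalted SHA-256 password hash (an assumption made for illustration; real systems should salt and stretch hashes, which defeats precomputation and slows guessing). The word list and alphabet limits below are hypothetical:

```python
import hashlib
import itertools
import string

def sha256_hex(text):
    return hashlib.sha256(text.encode()).hexdigest()

def dictionary_attack(target_hash, wordlist):
    """Try each word from a flat word list against an unsalted hash."""
    for word in wordlist:
        if sha256_hex(word) == target_hash:
            return word
    return None

def brute_force(target_hash, alphabet=string.ascii_lowercase, max_len=4):
    """Try every combination up to max_len characters; the search space
    grows exponentially with length, hence the long running times."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if sha256_hex(guess) == target_hash:
                return guess
    return None
```

The dictionary attack succeeds only if the password is a listed word; brute force always succeeds eventually but at exponential cost.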
Denial of Service (DoS): Is a situation where a circumstance, either intentionally or accidentally, prevents the
system from functioning as intended or prevents legitimate users from using that service. In certain cases, the
system may be functioning exactly as designed however it was never intended to handle the load, scope, or
parameters being imposed upon it. Denial-of-service attack is characterized by an explicit attempt by attackers to
prevent legitimate users of a service from using that service. Examples include:
- Attempts to "flood" a network, thereby preventing legitimate network traffic.
- Attempts to disrupt connections between two machines, thereby preventing access to a service.
- Attempts to prevent a particular individual from accessing a service.
- Attempts to disrupt service to a specific system or person.
Distributed Denial of Service (DDoS): Similar to DoS attack but the attacker uses other systems to launch the
denial of service attack. A trojan horse could be placed on the "slave" system that allows the attacker to launch
the attack from this system.


Spoofing: Spoofing is a form of attack where the intruder pretends to be another system and attempts to
provide/obtain data and communications that were intended for the original system. This can be done in several
different ways including IP spoofing, session hijacking, and Address Resolution Protocol (ARP) spoofing.
Man In The Middle Attacks: Performed by effectively inserting an intruder's system in the middle of the
communications path between two other systems on the network. By doing this, an attacker is able to see both
sides of the conversation between the systems and pull data directly from the communications stream. In
addition, the intruder can insert data into the communications stream, which could allow them to perform
extended attacks or obtain more unauthorized data from the host system.
Spamming attacks: Spamming, or the sending of unsolicited e-mail messages, is typically considered more of an
annoyance than an attack, but it can be both. It slows down the system, making it unable to process legitimate
messages. In addition, mail servers have a finite amount of storage capacity, which can be overfilled by
sending a huge number of messages to the server, effectively leading to a DoS attack on the mail server.
Sniffing: The process of listening/capturing the traffic going across the network either using a dedicated device
or a system configured with special software and a network card set in promiscuous mode. A sniffer basically sits
on the network and listens for all traffic going across the network. The software associated with the sniffer is
then able to filter the captured traffic allowing the intruder to find passwords and other data sent across the
network in clear text. Sniffers have a valid function within information technology by allowing network analysts
to troubleshoot network problems, but they can also be very powerful weapons in the hands of intruders.

3.5 TEMPEST
TEMPEST is the U.S. government codename for a set of standards for limiting electric or electromagnetic
radiation emanations from electronic equipment such as microchips, monitors, or printers. It helps ensure that
devices are not susceptible to attacks like Van Eck phreaking.

3.6 Clipping Level


Using clipping levels refers to setting allowable thresholds on a reported activity. Clipping levels set a baseline
for normal user errors, and only violations exceeding that threshold will be recorded for analysis of why the
violations occurred.
For example, a clipping level of three can be set for reporting failed log-on attempts at a workstation. Thus, three
or fewer log-on attempts by an individual at a workstation will not be reported as a violation (thus eliminating the
need for reviewing normal log-on entry errors.)
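The failed log-on example can be sketched as a simple counting filter; the clipping level of three follows the example above:

```python
from collections import Counter

CLIPPING_LEVEL = 3  # failed log-ons tolerated before a violation is logged

def violation_report(failed_logon_events):
    """Count failed log-ons per user and report only the users whose
    count exceeds the clipping level; ordinary typos never reach the
    report, reducing review noise."""
    counts = Counter(failed_logon_events)
    return {user: n for user, n in counts.items() if n > CLIPPING_LEVEL}
```

A user with two failures is ignored as a normal entry error, while one with five failures appears in the violation report for analysis.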


4.0 RISK, RESPONSE, AND RECOVERY


Risk Management: Identification, measurement and control of risk.
Risk Assessment: The process of determining the relationship of threats to vulnerabilities and the controls in
place, and the resulting impact (objective process).
Risk Analysis: Using a risk analysis process determines the overall risk (subjective process). The negative
impact can be loss of integrity, availability or confidentiality. The RA should recommend
controls to mitigate the risk (i.e. counter-measures).

4.1 Risk Management (Risks, Threats, Vulnerabilities and exposures)


Risk management is the cyclical process of identification, measurement and control of loss associated with
adverse events. Includes risk analysis, the selection/evaluation of safeguards, cost benefit analysis,
safeguard/countermeasure implementation etc. Is made up of multiple steps (p. 231 SSCP):
Identification: Each risk that is potentially harmful is identified.
Assessment: The consequences of a potential threat are determined, and the likelihood and frequency of a
risk occurring are analyzed.
Planning: Data that is collected is put into a meaningful format, which is used to create strategies to
diminish or remove the impact of a risk.
Monitoring: Risks are tracked and strategies evaluated on a cyclical basis - i.e. even though a risk has been
dealt with it cannot be forgotten.
Control: Steps are taken to correct plans that are not working and to improve the management of risk.

Vulnerability: Weakness in an information system that could be exploited by a threat agent (e.g. software bug).
Threat: Any potential danger, which can harm an information system - accidental or intentional (e.g. hacker).
Risk: Is the likelihood of a threat agent taking advantage of vulnerability.
Risk = Threat x Vulnerability
Exposure: An instance of being exposed to losses from a threat agent.
Assets: The business resources associated with the system (tangible and intangible). These will include:
hardware, software, personnel, documentation, and information communication etc. The partial or complete loss
of assets might affect the confidentiality, integrity or availability of the system information.
Controls: Put in place to reduce, mitigate, or transfer risk. These can be physical, administrative or technical (see
p. 4). They can also be deterrent, preventive, corrective or detective (see p. 14).
Safeguards: Controls that provide some amount of protection to assets.
Countermeasures: Controls that are put in place as a result of a risk analysis to reduce vulnerability.
Risk Mitigation: The process of selecting and implementing controls to reduce the risk to acceptable levels
Note: Risks can be reduced, accepted, managed, mitigated, transferred or deemed to require additional analysis.

4.2 Risk Analysis


To identify and analyze risks we need to do a risk analysis. Two general methodologies are used for risk analysis.
4.2.1 Quantitative Risk Analysis: The results show the quantity in terms of value (money). It gives
real numbers in terms of the costs of countermeasures and the amount of damage that can happen. The
process is mathematical and known as risk modeling, based on probability models.
AV = Asset Value ($)
EF (Exposure Factor) [max 100%] = percentage of asset loss caused by a presumed successful attack
SLE (Single Loss Expectancy) = AV x EF
ARO (Annualized Rate of Occurrence) = estimated frequency a threat will occur within a year =
likelihood of an event taking place x the number of times it could occur in a single year
ALE (Annualized Loss Expectancy) = SLE x ARO
ROI (Return on Investment) = ALE / annualized cost of countermeasures ($). Generally, if ROI is
greater than 1.0 the countermeasure should be put in place.
Cost/Benefit Analysis compares the cost of a control to its benefits: ALE (before) - ALE (after) -
annual cost of safeguard = value of safeguard (p.267 of SSCP for examples)
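A worked example of these formulas, using invented figures (a $100,000 asset, 25% exposure factor, one event every two years, and a $5,000/year countermeasure):

```python
def sle(asset_value, exposure_factor):
    """Single Loss Expectancy: SLE = AV x EF."""
    return asset_value * exposure_factor

def ale(sle_value, aro):
    """Annualized Loss Expectancy: ALE = SLE x ARO."""
    return sle_value * aro

def roi(ale_value, annual_countermeasure_cost):
    """ROI = ALE / annualized countermeasure cost; values above 1.0
    suggest the countermeasure is worth deploying."""
    return ale_value / annual_countermeasure_cost

def safeguard_value(ale_before, ale_after, annual_cost):
    """Cost/benefit: ALE(before) - ALE(after) - annual cost."""
    return ale_before - ale_after - annual_cost

loss = sle(100_000, 0.25)       # SLE = $25,000
annual = ale(loss, 0.5)         # ARO 0.5 -> ALE = $12,500
```

With these numbers the ROI is 12,500 / 5,000 = 2.5, so the countermeasure is justified; if it also cut the ALE to $2,500, the safeguard's value would be 12,500 - 2,500 - 5,000 = $5,000 per year.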
4.2.2 Qualitative Risk analysis: Walk through different scenarios of risk possibilities and rank the
seriousness of the threats and the sensitivity of the assets. This provides higher level, subjective results
than quantitative risk analysis.
Note: Quantitative takes longer and is more complex.


4.3 Risk Analysis /Assessment Tools and Techniques


DELPHI: Delphi techniques involve a group of experts independently rating and ranking business risk for
a business process or organization and blending the results into a consensus. Each expert in the
Delphi group measures and prioritizes the risk for each element or criteria.
COBRA: 'Consultative, Objective and Bi-functional Risk Analysis'. A questionnaire-based PC system using
expert system principles and an extensive knowledge base. It evaluates the relative importance of
all threats and vulnerabilities.
OCTAVE: Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a risk-based
strategic assessment and planning technique for security.
NIST Risk Assessment Methodology (SP800-30):
Step 1. System Characterization
Step 2. Threat Identification
Step 3. Vulnerability Identification
Step 4. Control Analysis
Step 5. Likelihood Determination
Step 6. Impact Analysis
Step 7. Risk Determination
Step 8. Control Recommendations
Step 9. Results Documentation

4.4 Recovery from Disaster


Disaster/business recovery planning is a critical component of the business continuity planning process.
4.4.1 Business continuity planning (BCP) (p.268 SSCP)
BCP is the process of proactively developing, documenting, and integrating processes and procedures that
will allow an organization to respond to a disaster such that critical business functions will continue
with minimal or insignificant changes until its normal facilities are restored. BCP
encompasses the full restoration process of all business operations.
Because BCP focuses on restoring the normal business functions of the entire business, it is important
that critical business functions are identified. It is the responsibility of each department to define those
requirements that are essential to continuing their operations.
document the requirements for each department within the business continuity plan. This is typically
performed through a business impact analysis (BIA).
Note: A BIA is usually performed immediately prior to doing the BCP.
4.4.2 Disaster recovery planning (DRP) (p.271 SSCP)
Disaster recovery plans should document the precautions taken so that the effects of a disaster will be
minimized, and the actions that must be taken to enable the organization to either maintain or quickly
resume business-critical systems: 1) Emergency response, 2) Backup, 3) Recovery.
BCP addresses restoring key business functions whilst a DRP focuses on restoring information systems.
4.4.3 Backups (p.277 SSCP)
Full Backup: Backs up all data in a single backup job. Changes the archive bit.
Incremental: Backs up all data changed since the last backup (i.e. new and modified files). Changes the archive bit.
Differential: Backs up all data changed since the last full backup. Does not change the archive bit.
Copy Backup: Makes a full backup but does not change the archive bit.
A typical tape rotation schedule is Grandfather (monthly full backup) - Father (weekly) - Son (daily).
A backup is only as good as its ability to be restored - test restores of data should be performed regularly.
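The archive-bit behaviour of the four backup types can be modelled with a small sketch (a deliberate simplification; real backup software tracks far more state than one bit per file):

```python
def run_backup(files, strategy):
    """files maps file name -> archive bit (True = changed since the bit
    was last cleared). Returns (names backed up, updated archive bits)."""
    if strategy in ("full", "copy"):
        backed_up = set(files)                      # everything
    elif strategy in ("incremental", "differential"):
        backed_up = {f for f, changed in files.items() if changed}
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    if strategy in ("full", "incremental"):
        bits = {f: False for f in files}            # archive bit cleared
    else:                                           # copy / differential
        bits = dict(files)                          # bits left untouched
    return backed_up, bits
```

Because a differential backup leaves the archive bit set, each differential keeps capturing everything since the last full backup, while incrementals clear the bit and capture only the changes since the previous backup of any kind.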
4.4.4 Business Impact Analysis (BIA)
Is a process used to help business units understand the impact of a disruptive event. The impact may be
financial (quantitative - loss of stock value) or operational (qualitative - unable to respond to customer).
A BIA identifies the company's critical systems needed for survival and estimates the outage time that
can be tolerated. The first step of a BIA is to identify all business units within the organization (each
is then interviewed to determine its criticality).
4.4.5 Testing Disaster Recovery Plans: Various methods exist to test the DRP:
Checklist test: Copies of the plan are distributed to management/participant for review.
Structured Walkthrough test: Business unit management meets in a room to review the plan.
Simulation Test: All support personnel meet in a practice execution session.
Parallel Test: Complete live-test without taking operational system down (e.g. on staging)
Full-Interruption Test: Normal production shut down, with real disaster recovery processes.


4.4.6 Restoration and Recovery

Hot Site: The most prepared facility (and most expensive); has the necessary hardware, software, phone lines, network connections etc to allow a business to resume business functions almost immediately.
Warm Site: Not as well equipped as a hot site, but has part of the necessary hardware, software, network etc needed to restore business functions quickly. Most commonly used.
Cold Site: Cheaper; ready for equipment to be brought in during an emergency, but no hardware resides at the site, though it does have AC, electrical wiring etc. May not work when a disaster strikes.
Reciprocal Site: An arrangement with another company, so that one will accommodate the other in the event of an emergency - not ideal for large companies. The cheapest option. The main concern is compatibility of equipment.

When deciding on appropriate locations for alternate sites, it is important that they be in different
geographical locations that cannot be victim to the same disaster. This should be balanced with the
need for the alternate site not to be so far away that it will significantly add to the downtime.
When moving business functions to an alternate site, the most critical should be moved first. When
moving business functions back to the primary site, the least critical should be moved first.

4.5 Response to Incident (p. 282 SSCP)


Incident: A violation of an explicit or implied security policy, e.g. attempts to gain unauthorized access to system/data,
unauthorized use of a system for processing/storage of data, or changes to system hardware/software without authorization.
Incident response: Activities that are performed when security related incident occurs that has potential for adverse
effect to the system or organization. The objective of incident response and subsequent investigation is as follows:
Control and manage the incident (i.e. ensure all applicable logs/evidence is preserved).
Timely investigation and assessment of the severity of the incident (i.e. draw up list of suspects,
understand how the intruder gained access, document the damage caused etc).
Timely recovery or bypass of the incident to resume normal operating conditions (i.e. restore the
breached system to original state, whilst ensuring it is secure).
Timely notification of the incident to senior administrators/management (i.e. communicate the results
of the investigation, especially if there are legal impacts)
Prevention of similar incidents (i.e. apply security measures to ensure the breach cannot occur again).
Generally speaking the following steps should be conducted when investigating an incident:
Contact senior management and the Incident Response Team.
Do NOT power down or reboot the system or open any files (i.e. in no way alter the system state).
Unplug the system from the network.
Document any processes that are running and any open files/error messages etc.
Save the contents of memory/page files and any system or application logs.
If possible make a byte by byte image of the physical disk (ideally on write-once media - e.g. CD).
Because any evidence collected may be used in possible criminal proceedings, thorough documentation must be
kept. In particular a chain of custody must be established for any evidence acquired.
A chain of custody proves where a piece of evidence was at a given time and who was responsible for it. This
helps ensure the integrity of the evidence.
Types of evidence:
Best evidence: Original or primary evidence rather than a copy or duplicate.
Secondary: A copy of evidence or an oral description of its contents.
Direct: Proves/disproves a specific act through oral testimony based on information gathered through
the witness's five senses.
Real: Tangible objects/physical evidence.
Conclusive: Incontrovertible - overrides all other evidence.
Opinions: Two different types: Expert - may offer an opinion based on personal expertise or facts;
Nonexpert - may testify only as to facts.
Circumstantial: Inference of information from other, intermediate, relevant facts.
Documentary: Printed business records, manuals, printouts.
Demonstrative: Used to aid a jury (charts, illustrations etc).
Corroborative: Supporting evidence used to help prove an idea or point. It cannot stand on its own, but is
used as a supplementary tool to help prove a primary piece of evidence.
Hearsay: Also known as second-hand evidence - evidence that is not based on personal, first-hand
knowledge of the witness but was obtained from another source. Not usually admissible in
court (hearsay rule), though there are exceptions. Computer-based evidence is considered to
be hearsay, but is admissible if relevant.


5.0 CRYPTOGRAPHY
Cryptography: Science of secret writing that enables you to store and transmit data in a form that is available
only to the intended individuals.
Cryptosystem: Hardware or software implementation of cryptography that transforms a message to ciphertext
and back to plaintext.
Cryptoanalysis/Cryptanalysis: Recovering plaintext from ciphertext without a key or breaking the encryption.
Cryptology: The study of both cryptography and cryptoanalysis.
Ciphertext: Data in encrypted or unreadable format.
Encipher: Converting data into an unreadable format.
Decipher: Converting data into a readable format.
Cryptovariable (key): Secret sequence of bits (key) used for encryption and decryption.
Steganography: The art of hiding the existence of a message in a different medium (e.g. in jpg, mp3 etc)
Key Escrow: The unit keys are split into two sections and given to two different escrow agencies to maintain.
Cryptographic systems can be classified in two ways: by how the plaintext is processed (stream ciphers
vs block ciphers) and by the algorithms or number of keys used (symmetric vs asymmetric).

The Cryptographic Systems


Stream ciphers (p.348 SSCP): Stream ciphers are symmetric algorithms that operate
on plaintext bit-by-bit. Stream cipher algorithms create a keystream that is combined
with the plaintext to create the ciphertext. As with other ciphers, the processing of
plaintext uses an XOR operation. E.g. of stream cipher is RC4.
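The keystream-XOR idea can be sketched in a few lines of Python. The repeating key below is only a stand-in for a real cipher's pseudorandom keystream generator, so this toy is not secure; it just shows why applying the same keystream twice recovers the plaintext:

```python
import itertools

def xor_keystream(data: bytes, keystream) -> bytes:
    # XOR each data byte with the next keystream byte; because
    # (b ^ k) ^ k == b, the same function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream))

# Toy keystream: a repeating key. Real stream ciphers such as RC4
# derive a long pseudorandom keystream from the secret key instead.
key = b"secret"
plaintext = b"attack at dawn"
ciphertext = xor_keystream(plaintext, itertools.cycle(key))
recovered = xor_keystream(ciphertext, itertools.cycle(key))
assert recovered == plaintext
```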
Block ciphers (p.346 SSCP): Encrypts data in discrete chunks of a fixed size. Block
ciphers are symmetric - they use the same secret key for encryption and decryption.
Commonly, the block size will be 64 bits, but the ciphers may support blocks of any
size, depending on the implementation. 128-bit block ciphers are becoming common.
Symmetric Encryption Algorithms: Also known as private key, because only one key
is used and it must be kept secret for security. Both parties will be using the same key
for encryption and decryption. Much faster than asymmetric systems, hard to break if
using a large key size. Key distribution requires a secure mechanism for key delivery.
Limited security as it only provides confidentiality. The "out-of-band method" means
that the key is transmitted through another channel than the message.
Asymmetric Encryption Algorithms: Also known as public key. Two different
asymmetric keys are mathematically related - public and private key. Better key
distribution than symmetric systems. Better scalability than symmetric systems. Can
provide confidentiality, authentication and non-repudiation.

Key clustering = When a plaintext message generates identical ciphertext messages using the same
transformation algorithm, but with different keys.
Secure message format: The entire message is encrypted with the receiver's public key - only the receiver can
decrypt the message using his/her own private key, thus ensuring confidentiality. [This is the normal method]
Open message format: The entire message is encrypted with the sender's private key - anyone can decrypt the
message using the sender's public key, but they can be sure that the message originated from the sender.
Secure and signed format: Signed with the sender's private key and the entire message encrypted with the
receiver's public key. Only the receiver can decrypt the message using his/her own private key, thus ensuring
confidentiality. By signing the message with the sender's private key, the receiver can verify its authenticity
using the sender's public key. [Most secure]

5.1 Symmetric Encryption Algorithms (p.333 SSCP)


Data Encryption Standard (DES) [Sometimes referred to as Data Encryption Algorithm - DEA]: Based on
IBM's 128-bit algorithm Lucifer. A block encryption algorithm: 64 bits in -> 64 bits out. 56 bits make up the true
(effective) key and 8 bits are used for parity. Each 64-bit block is divided into two halves, which are put
through 16 rounds of transposition and substitution.
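The 56+8 key-bit split can be illustrated by checking the odd-parity bit carried in each DES key byte. This is only a sketch of the convention; real DES libraries handle parity internally:

```python
def has_odd_parity(byte: int) -> bool:
    # Each DES key byte must contain an odd number of 1 bits:
    # 7 bits carry key material, the 8th is an odd-parity check bit.
    return bin(byte).count("1") % 2 == 1

def set_odd_parity(byte: int) -> int:
    # Flip the low-order bit if needed so the byte has odd parity,
    # as implementations do when normalising a supplied key.
    return byte if has_odd_parity(byte) else byte ^ 1

# The classic DES test key 01 23 45 67 89 AB CD EF already has odd
# parity in every byte; only 8 * 7 = 56 bits are effective key material.
key = bytes(set_odd_parity(b) for b in b"\x01\x23\x45\x67\x89\xab\xcd\xef")
assert all(has_odd_parity(b) for b in key)
```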
3DES: Uses 48 rounds in its computation. Heavy performance hit and it can take up to three times longer than
DES to perform encryption and decryption. 168-bit key size (i.e. 3x56)
Advanced Encryption Standard (AES): NIST replacement standard for DES based on Rijndael. AES is a
block cipher with a variable block and key length. Employs a round transformation that is comprised of three
layers of distinct and invertible transformations: The non-linear layer; The linear mixing layer; The key addition
layer. AES has 3 key length options: 128, 192 and 256 bits.


International Data Encryption Algorithm (IDEA): A 128-bit key is used. A block cipher that operates on 64-bit
blocks of data. The 64-bit data block is divided into four 16-bit sub-blocks, and eight rounds of mathematical
functions are performed on them. Used in PGP.
Skipjack: Used for electronic encryption devices (hardware). This makes it unique since the other algorithms
might be implemented in either hardware or software. SkipJack operates in a manner similar to DES, but uses an
80-bit key and 32 rounds, rather than 56-bit keys and 16 rounds (DES).
Blowfish: A block cipher that works on 64-bit blocks of data. The key length can be up to 448 bits and the data
blocks go through 16 rounds of cryptographic functions.
RC5: A block cipher with a variety of parameters it can use for block size, key size and the number of
rounds used. Block sizes: 32/64/128 bits; key size up to 2048 bits. (RC4, mentioned above, is a stream cipher.)

5.2 Asymmetric Encryption Algorithms (p. 331 SSCP)


Diffie-Hellman Algorithm: This was the first published use of public key cryptography (1976). Because of the
inherent slowness of asymmetric cryptography, the Diffie-Hellman algorithm was not intended for use as a
general encryption scheme, rather its purpose was to transmit a private key for Data Encryption Standard (DES)
(or some similar symmetric algorithm) across an insecure medium - i.e. key distribution.
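The exchange can be sketched with toy numbers. The prime below is trivially small - real deployments use primes of 2048 bits or more - but the mechanics are the same: each party raises the public generator to its private exponent, and both arrive at the same shared secret:

```python
import secrets

# Public parameters: prime modulus p and generator g (toy-sized here).
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private value, kept secret
b = secrets.randbelow(p - 2) + 1   # Bob's private value, kept secret

A = pow(g, a, p)   # Alice sends A across the insecure medium
B = pow(g, b, p)   # Bob sends B across the insecure medium

# Each side combines the other's public value with its own private value;
# both compute g^(a*b) mod p, which can then seed a symmetric key.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```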
RSA: Rivest, Shamir and Adleman proposed another public key encryption system. Provides authentication
(digital signature), encryption and key exchange. Is used in many web browsers with SSL and in SSH. Security is
based on the difficulty of factoring large numbers.
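A textbook-sized sketch of the RSA mechanics - the primes here are deliberately tiny and trivially factorable, which is exactly why real keys use primes hundreds of digits long:

```python
# Toy RSA key generation (illustration only, not secure).
p, q = 61, 53
n = p * q                 # 3233: the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

m = 65                    # message encoded as an integer < n
c = pow(m, e, n)          # encrypt with the public key (e, n)
assert pow(c, d, n) == m  # decrypt with the private key (d, n)
```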


Digital Signature Algorithm (DSA) aka Digital Signature Standard (DSS - see below): A public key
encryption algorithm that utilizes public and private key pairs. Only the private key is capable of
creating a signature. This permits verification of the sender's identity as well as assurance of the integrity of the
message data that has been signed. The hash function used in the creation and verification process is defined in
the Secure Hash Standard (SHS). The private key and the digest (hash value) are then used as inputs to the DSA,
which generates the signature. For message and sender verification, the recipient uses the hash function to create
a message digest, and then the sender's public key is used to verify the signature. Allowed key sizes range from
512 to 1,024 bits. DSA is slower than RSA for signature verification.
Elliptic Curve Cryptosystem (ECC): Provides digital signatures, secure key distribution and encryption.
Requires far fewer resources than other public key systems because its security rests on the elliptic curve
discrete logarithm problem, which allows much smaller key sizes for equivalent strength.

5.3 Symmetric vs Asymmetric Systems


Attribute-by-attribute comparison:
Keys: Symmetric - one key for encryption and decryption. Asymmetric - two keys, one for encryption
and another for decryption.
Key exchange: Symmetric - out of band. Asymmetric - the symmetric (session) key is encrypted and sent
with the message; thus the key is distributed by in-band means.
Speed: Symmetric - faster algorithm. Asymmetric - more complex and slower (resource intensive).
Key length: Symmetric - fixed key length. Asymmetric - variable key length.
Practical use: Symmetric - for encryption of large files. Asymmetric - for key exchange (secret key) and
distribution of keys.
Security: Symmetric - confidentiality and integrity. Asymmetric - confidentiality, integrity, authentication
and non-repudiation.

5.4 Message integrity


One-way hash: A function that takes a variable-length string (a message) and compresses and transforms it into
a fixed-length value referred to as a hash value. The hash value of a one-way hash is called a message digest. The
function cannot be performed in reverse. It only provides integrity of a message, not confidentiality or
authentication. It is used in hashing to create a fingerprint for a message.
Digital signatures: An encrypted hash value of a message. First compute the hash of the document, then
encrypt the message digest with the sender's private key. The result is the digital signature.
Digital signature standard (DSS): A standard for digital signatures, functions and acceptable use. Is a standard,
does NOT concern itself with encryption.
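The hash-then-encrypt construction above can be sketched with a toy RSA key. The modulus is far too small for real use, and the SHA-256 digest is reduced mod n only because of that tiny modulus; the message strings are, of course, made up:

```python
import hashlib

# Toy RSA keypair (textbook-sized; real keys use much larger primes).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17               # public exponent
d = pow(e, -1, phi)  # private exponent (Python 3.8+)

def sign(message: bytes) -> int:
    # Hash the message, then apply the private key to the digest.
    # (Reducing the digest mod n is an artifact of the toy modulus.)
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recompute the digest and compare against the signature
    # transformed with the signer's public key.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"transfer 100 to Bob")
assert verify(b"transfer 100 to Bob", sig)
```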
5.4.1 Hash Algorithms (p.337 SSCP)
MD4: Produces 128-bit hash values. Used for high-speed computation in software implementation and
is optimized for microprocessors.
MD5: Produces 128-bit hash values. More complex than MD4. Processes text in 512-bit blocks.
MD2: Produces 128-bit hash values. Slower than MD4 and MD5
SHA: Produces 160-bit hash values. This is then inputted into the DSA, which computes the signature
for a message. The message digest is signed instead of the whole message.
SHA-1: Updated version of SHA.


HAVAL: Is a variable length one-way hash function and is the faster modification of MD5. Processes
text in 1024-bit blocks. HAVAL compresses a message of arbitrary length into a digest of 128, 160,
192, 224 or 256 bits. In addition, HAVAL has a parameter that controls the number of passes a message
block (of 1024 bits) is processed. A message block can be processed in 3, 4 or 5 passes.
Hash Salting: Refers to the process of adding random data to the input before hashing. Many hashes have
weaknesses or could be looked up in a precomputed hash lookup table (if the table were big enough and the
computer fast enough). Salting negates this weakness. Cryptographic protocols that use salts include SSL.
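A minimal sketch of salted hashing with SHA-256, assuming passwords stored as (salt, digest) pairs; real systems prefer a deliberately slow KDF such as PBKDF2 or bcrypt on top of this idea:

```python
import hashlib
import secrets

def hash_password(password, salt=None):
    # A fresh random salt makes identical passwords hash differently,
    # defeating precomputed lookup (rainbow) tables.
    if salt is None:
        salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

salt1, h1 = hash_password("hunter2")
salt2, h2 = hash_password("hunter2")
assert h1 != h2                          # same password, different salts
_, h1_again = hash_password("hunter2", salt1)
assert h1_again == h1                    # reproducible given the stored salt
```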

5.5 Link and end-to-end encryption


Link encryption: Encrypts all the data along a specific communication path like a T3 line or telephone circuit.
Data, header, trailers, addresses and routing data that are part of the packets are encrypted. Provides protection
against packet sniffers and eavesdroppers. Packets have to be decrypted at each hop and encrypted again. It is at
the physical level of the OSI model.
End-to-end encryption: Only data is encrypted. Is usually initiated at the application layer of the originating
computer. Stays encrypted from one end of its journey to the other. Higher granularity of encryption is available
because each application or user can use a different key. It is at the application level of the OSI model.

5.6 Cryptography for Emails


Privacy-Enhanced Mail (PEM): Provided confidentiality, authentication, and non-repudiation. Specific
components that can be used:
- Messages encrypted with DES in CBC mode
- Public key management provided by RSA
- Authentication provided by MD2 or MD5
- X.509 standard used for certification structure and format
Message Security Protocol (MSP): Can sign and encrypt messages and perform hashing functions.
Pretty Good Privacy (PGP): Developed by Phil Zimmermann. Uses RSA public key encryption for key
management and the IDEA symmetric cipher for bulk encryption of data. PGP uses pass-phrases to encrypt the
user's private key, which is stored on their hard drive. It also provides digital signatures.
S/MIME - Secure Multipurpose Internet mail Extensions: S/MIME is the RSA developed standard for
encrypting and digitally signing electronic mail that contains attachments and for providing secure electronic
data interchange (EDI). Provides confidentiality through the user's encryption algorithm, integrity through the
user's hashing algorithm, authentication through the use of X.509 public key certificates and non-repudiation
through cryptographically signed messages - i.e. uses a public-key based, hybrid encryption scheme.

5.7 Internet Security


S-HTTP - Secure Hypertext Transport Protocol: Encrypts messages with session keys. Provides integrity and
sender authentication capabilities. Used when an individual message needs to be encrypted.
HTTPS: Protects the communication channel between two computers. Uses SSL and HTTP. Used when all
information that passes between two computers needs to be encrypted.
SSL (Secure Sockets Layer): Protects a communication channel by use of public key encryption. Uses
public-key (asymmetric) cryptography for key exchange and certificate-based authentication, and private-key
(symmetric) cryptography for traffic encryption.
Provides data encryption, server authentication, message integrity and client authentication. Keeps the
communication path open until one of the parties requests to end the session (uses TCP). Lies beneath the
application layer and above the transport layer of the OSI model. Originally developed by Netscape - version 3
designed with public input. Subsequently became the Internet standard known as TLS (Transport Layer
Security). If asked at what layer of the OSI model SSL operates, the answer is Transport.
SET - Secure Electronic Transaction: System for ensuring the security of financial transactions on the Internet.
Mastercard, Visa, Microsoft, and others supported it initially. With SET, a user is given an electronic wallet
(digital certificate) and a transaction is conducted and verified using a combination of digital certificates and
digital signatures in a way that ensures privacy and confidentiality. Uses some but not all aspects of a PKI.
SSH: Used to securely log in and work on a remote computer over a network. Uses a tunneling mechanism that
provides terminal-like access to computers. Should be used instead of telnet, ftp, rsh etc.
IPSec (Internet Protocol Security): A method of setting up a secure channel for protected data exchange
between two devices. Provides security to the actual IP packets at the network layer. Is usually used to establish
VPN. It is an open, modular framework that provides a lot of flexibility. Suitable only to protect upper layer
protocols. IPSec uses two protocols: AH and ESP.


AH (Authentication Header): Supports access control, data origin authentication, and connectionless
integrity. AH provides integrity, authentication and non-repudiation - does NOT provide confidentiality.
ESP (Encapsulating Security Payload): Uses cryptographic mechanism to provide source
authentication (by IP header), confidentiality and message integrity.
IPSec works in two modes:
1.
Transport mode: Only the payload of the message is encrypted. (for peer-to-peer)
2.
Tunnel mode: Payload, routing and header information is encrypted. (for gateway-to-gateway)

5.8 PKI (p. 355 SSCP)


Public key cryptography was introduced in 1976 by Diffie and Hellman, and in 1977 Rivest, Shamir and Adleman
designed the RSA Cryptosystem (the first full public key system). Each public key cryptosystem has its own policies,
procedures and technology required to manage the systems. The X.509 standard provides a basis for defining
data formats and procedures for the distribution of public keys via certificates that are digitally signed by CA's.
5.8.1 X.509
X.509 is the standard used to define what makes up a digital certificate. It was developed from the
X.500 standard for Directory Services. Section 11.2 of X.509 describes a certificate as allowing an
association between a user's distinguished name (DN) and the user's public key. A common X.509
certificate would include: DN, Serial Number, Issuer, Valid From, Valid To, Public Key, Subject etc.
The following are the components of a PKI:
Digital certificate: An electronic file issued by a trusted third party Certificate Authority (CA). It contains
credentials of that individual along with other identifying information (i.e. a user's public key). There are two
types of digital certificates: server certificates and personal certificates.
Certificate Authority (CA): An organization that maintains and issues public key certificates; it is analogous to
a passport office. CAs are responsible for the lifetime of a certificate - i.e. issuing, expiration etc. CAs issue
certificates validating the identity of a user or system with a digital signature. CAs also revoke certificates by
publishing them to the CRL. Cross-certification is the act or process by which two CAs each certify a public key
of the other, issuing a public-key certificate to that other CA, enabling users that are certified under
different certification hierarchies to validate each other's certificates.
Note: A key is renewed at or near the end of key's lifetime, provided none of the information has changed. If any
information used to issue the key changes it should be revoked and a new key issued.
Certificate Revocation List (CRL): A list of every certificate that has been revoked, for whatever reason. The
list is maintained periodically and made available to concerned parties. CRLs are usually based on an LDAP server.
Registration authority (RA): Performs the certification registration duties. A RA is internal to a CA and
provides the interface between the user and the CA. It authenticates the identity of the users and submits the
certificate request to the CA.
PKI provides confidentiality, access control, integrity, authentication and non-repudiation. PKI enabled
applications and standards that rely on PKI include SSL, S/MIME, SET, IPSec and VPN.

5.9 Cryptographic Attacks


Ciphertext-only attack: Capturing several samples of ciphertext encrypted using the same algorithm and
analyzing it to determine the key.
Known-plaintext attack: The attacker has both the plaintext and the corresponding ciphertext, and can
analyze the pair to determine the key.
Chosen-plaintext attack: The attacker can choose the plaintext that gets encrypted. This is typically used when
dealing with black-box type of encryption algorithm.
Man-in-the-middle attack: Eavesdropping on different conversations. Using digital signatures during the
session-key exchange can circumvent the attack.
Dictionary attack: Takes a password file containing one-way function values, runs the most commonly
used passwords through the same one-way function, and compares the results.
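The comparison the attack performs can be sketched as follows; the stolen hash file, usernames and wordlist below are entirely hypothetical, and the hashes are unsalted (salting, covered in 5.4.1, is what defeats this):

```python
import hashlib

# Hypothetical stolen file of unsalted SHA-256 password hashes.
stolen = {"alice": hashlib.sha256(b"letmein").hexdigest()}

# Hypothetical wordlist of commonly used passwords.
wordlist = [b"password", b"123456", b"letmein", b"qwerty"]

cracked = {}
for user, stored_hash in stolen.items():
    for candidate in wordlist:
        # Run each candidate through the same one-way function
        # and compare against the stored value.
        if hashlib.sha256(candidate).hexdigest() == stored_hash:
            cracked[user] = candidate.decode()
            break
```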
Replay attack: An attacker copies a ticket and breaks the encryption and then tries to impersonate the client and
resubmit the ticket at a later time to gain unauthorized access to a resource.


6.0 DATA COMMUNICATIONS


6.1 Data Communication Models:
OSI layer (TCP/IP layer) - Description:
7. Application (TCP/IP: Application) - Provides different services to the applications (HTTP, FTP, Telnet,
SET, HTTP-S). Provides non-repudiation at the application level.
6. Presentation (TCP/IP: Application) - Converts the information (ASCII, JPEG, MIDI, MPEG, GIF).
5. Session (TCP/IP: Application) - Handles problems which are not communication issues (PPP, SQL,
gateways, NetBEUI).
4. Transport (TCP/IP: Transport) - Provides end-to-end communication control (TCP, UDP, TLS/SSL).
3. Network, packets (TCP/IP: Internet) - Routes the information in the network (IP, IPX, ICMP, RIP, OSPF,
IPSec, routers).
2. Data Link, frames (TCP/IP: Network) - Provides error control between adjacent nodes (Ethernet, Token
Ring, FDDI, SLIP, PPP, RARP, L2F, L2TP, PPTP, ISDN, 802.11, switches, bridges).
1. Physical, bits (TCP/IP: Network) - Connects the entity to the transmission media (UTP, coax, voltage
levels, signaling, hubs, repeaters); converts bits into voltage for transmission.

The session layer enables communication between two computers to happen in three different modes:
1. Simplex: Communication takes place in one direction.
2. Half-duplex: Communication takes place in both directions, but only one system can send information at a time.
3. Full-duplex: Communication takes place in both directions and both systems can send information at the same time.
Datalink (Layer 2) primarily responsible for error correction at the bit-level
Transport (layer 4) primarily responsible for error correction at the packet level

6.2 TCP/IP - Transmission Control Protocol/Internet Protocol


IP: Main task is to support inter-network addressing and packet forwarding and routing. Is a connectionless
protocol that envelops data passed to it from the transport layer. IPv4 uses 32 bits for its address and IPv6 uses
128 bits.
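The 32-bit vs 128-bit address widths can be confirmed with Python's stdlib ipaddress module, whose max_prefixlen attribute equals the address width in bits:

```python
import ipaddress

# max_prefixlen reflects the address width for each IP version.
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.max_prefixlen)   # 32  -> IPv4 addresses are 32 bits
print(v6.max_prefixlen)   # 128 -> IPv6 addresses are 128 bits
```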
TCP: A reliable, connection-oriented protocol that ensures packets are delivered to the destination
computer. If a packet is lost during transmission, TCP has the capability to resend it. There is more
overhead in a TCP packet.
Encapsulation Process: As data moves down the stack, each layer wraps the data from the layer above with its
own header (application data -> TCP segment -> IP packet -> frame -> bits).

Note: The IP header contains a protocol field. Common values are 1=ICMP 2=IGMP 6=TCP 17=UDP
TCP Handshake:
1. Host sends a SYN packet 2. Receiver answers with a SYN/ACK packet 3. Host sends an ACK packet
UDP: A best-effort, connectionless protocol. Does not provide packet sequencing, flow or congestion
control, and the destination does not acknowledge every packet it receives. There is less overhead
in a UDP packet.
TCP and UDP use port numbers of 16-bit length
Remember, only TCP is connection-oriented (IP is NOT)


6.3 Common types of LAN systems


Ethernet (802.3)
Ethernet uses a bus or star topology and supports data transfer rates of 10 Mbps. Based on IEEE 802.3
specifications. It is one of the most widely implemented LAN standards. A more recent version of Ethernet,
called 100Base-T (or Fast Ethernet), supports data transfer rates of 100 Mbps, and Gigabit Ethernet supports data
rates of 1 gigabit (1,000 megabits) per second. An Ethernet address (aka physical MAC address) uses 48 bits.
Token Ring (802.5)
A type of computer network in which all the computers are arranged logically in a circle. A token, which is a
special bit pattern, travels around the circle. To send a message, a computer catches the token, attaches a message
to it, and then lets it continue to travel around the network. Whichever device has the token can put data into the
network. The station then removes the token from the ring and starts transmitting. Since there is only one token,
only one station can transmit at a given time, thus avoiding collisions on the channel. After the transmission is
over, the station returns the token to the ring. System rules in the protocol specifications mandate how long a
device may keep the token, how long it can transmit for and how to generate a new token if there isn't one
circulating.
FDDI (Fiber Distributed Data Interface)
A set of ANSI protocols for sending digital data over fiber optic cable. FDDI networks are token-passing
networks, and support data rates of up to 100 Mbps. FDDI networks are typically used as backbones for
wide-area networks. An extension to FDDI, called FDDI-2, supports the transmission of voice and video
information as well as data. Another variation of FDDI, called FDDI Full Duplex Technology (FFDT), uses the
same network infrastructure but can potentially support data rates up to 200 Mbps. Uses 2 rings - one for
redundancy.

6.4 Cabling
Coaxial Cable: Resistant to EMI (electromagnetic interference), provides a higher bandwidth and longer cable
lengths compared to twisted pair. Can transmit using both baseband and broadband methods. 10Base2
(ThinNet): coax cable, max length 185m, provides 10 Mbps. 10Base5 (ThickNet): coax cable, max length 500m,
provides 10 Mbps.
Twisted pair: Cheaper and easier to work with than coaxial cable and is a commonly used cable. Shielded
twisted pair (STP - 2 wires) has an outer foil shielding, which is added protection from radio frequency
interference. Unshielded twisted pair (UTP - 4 wires) has different categories of cabling with varying
characteristics. The physical connector used to connect PCs and network devices, is called an RJ-45. 10base-T:
Uses twisted-pair wiring, provides 10 Mbps, max length 100m.
Fast Ethernet: Uses twisted-pair wiring, provides 100 Mbps.
Fiber-optic cabling: It has higher transmission speeds that can travel over longer distances and is not affected by
attenuation and EMI when compared to cabling that uses copper. It is used to connect two LANs. It does not
radiate signals like UTP cabling and is very hard to tap into. The complexity of making connections using fiber is
one of its major drawbacks and also it is expensive.

6.5 Signaling Types


Baseband (Digital): Baseband uses digital signaling (binary digits as electrical pulses) to transmit data. Signals
flow across the medium in the form of pulses of electricity or light. To boost the signals repeaters are used. Cable
carries only one channel.
Broadband (Analog): Broadband uses analog signaling (electromagnetic waves) and a range of frequencies.
The signal flows across a cable medium in the form of optical or electromagnetic waves. A repeater reconstructs
the data packet and passes along the physical medium to its destination. The cable carries several channels.
[A modem is a digital to analog converter (DAC). The signal begins as baseband (digital) and is then converted to broadband (analog)
before traveling across the phone-cabling system]

6.6 Transmission Modes (Approaches)


Unicast: Information is sent from one point to another point - the predominant form of transmission on LANs
and the Internet. All LANs and IP networks support the unicast transfer mode, and most users are familiar with
the standard unicast applications (e.g. HTTP, SMTP, FTP and telnet) which employ the TCP transport protocol.
Multicast: Information is sent from one or more points to a set of other points. In this case there may be one or
more senders, and the information is distributed to a set of receivers. The format of IP multicast packet is
identical to that of unicast packets and is distinguished only by the use of a special class of destination address
(class D IP address), which denotes a specific multicast group. Since TCP supports only the unicast mode,
multicast applications must use the UDP transport protocol.
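The class D destination-address test described above can be checked with Python's stdlib ipaddress module, which classifies the 224.0.0.0/4 multicast range directly:

```python
import ipaddress

# Class D (224.0.0.0/4) addresses denote multicast groups.
group = ipaddress.ip_address("224.0.0.1")   # the "all hosts" multicast group
host = ipaddress.ip_address("192.0.2.1")    # an ordinary unicast address

assert group.is_multicast
assert not host.is_multicast
```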


Broadcast: A packet goes to all computers on its subnet. Broadcast transmission is supported on most LANs,
and may be used to send the same message to all computers on the LAN (e.g. the address resolution protocol
(ARP) uses this to send an address resolution query to all computers on a LAN). The data is sent to a special
broadcast address. Network layer protocols (such as IP) also support a form of broadcast which allows the same
packet to be sent to every system in a logical network.
6.6.1 LAN access methods
Carrier-Sense Multiple Access with Collision Detection (CSMA/CD): Monitors the
transmission/carrier activity on the wire to determine the best time to transmit data. Computers listen
for the absence of a carrier tone, which indicates that no one else is transmitting data at the same time.
Collisions can still happen, but they are detected and the information is re-sent.
Token passing: A 24-bit control frame used to control which computers communicate at what
intervals. The token grants a computer the right to communicate. Does not cause collisions, because
only the one computer holding the token can communicate at a time.
Carrier-Sense Multiple Access with Collision Avoidance (CSMA/CA): Instead of detecting collisions,
it tries to avoid them by having each computer signal its intention to transmit before actually
transmitting. Although CSMA/CA avoids collisions, there is additional overhead from each
workstation broadcasting its intention prior to transmitting (by sending a jam signal). Thus,
CSMA/CA is slower than CSMA/CD. CSMA/CA is used on AppleTalk networks and also in wireless networks.

6.7 Networks
Local Area Network (LAN)
Spans a relatively small geographical area. Most LANs are confined to a single or group of buildings. Network
Interface Card (NIC) connects computers. Two types of LAN (1) Wired LAN and (2) Wireless LAN
Wide Area Network (WAN)
LANs connected together over distance via telephone lines/radio waves/fibre. High-speed dedicated networks
(leased lines or point to point network). Secured WANs can be created using IPSec.
Metropolitan Area Network (MAN)
Similar to WAN - MANs are high-speed communication lines and equipment covering a metropolitan area.
Intranet: A network belonging to an organization, usually a corporation, accessible only by the organization's
members, employees, or others with authorization (Private network). Intranets are used to share information.
Internet: A global network connecting millions of computers (global interconnection of LAN, WAN, and
MAN). Internet is decentralized by design. Each Internet computer, called a host, is independent.
Extranet: An Intranet that is partially accessible to authorized outsiders. An extranet provides various levels of
accessibility to outsiders, very popular means for business partners to exchange information.

6.8 Network Topology


Ring Topology: A series of devices connected by unidirectional transmission links that form a ring. Each node
is dependent upon the preceding nodes; if one system fails, all other systems could fail.
Bus Topology: A single cable runs the entire length of the network. Each node decides to accept, process or
ignore the packet. The cable where all nodes are attached is a potential single point of failure.
Star Topology: All nodes connect to a central hub or switch. Each node has a dedicated link to the central hub.
Easy to maintain.
Mesh Topology: All systems and resources are connected to each other.

6.9 IEEE Standards


The Institute of Electrical and Electronics Engineers (IEEE) Project 802 was started to establish LAN standards:
802.3 LAN architecture to run CSMA/CD (Ethernet)
802.5 LAN architecture to run on Token Ring network.
802.6 LAN architecture for MANs. (Metropolitan Networks)
802.8 LAN architecture deals with fiber-optic implementation of Ethernet (Fiber Optic Tag)
802.9 LAN deals with integrated data and voice (Isochronous LANs)
802.11x Wireless LAN
802.11b: Max 11 Mbit/s
802.11g: Max 54 Mbit/s
802.11i: (aka WPA2) specifies security mechanisms for wireless LANs.


6.10 Network Devices


Hub or Repeater (OSI Layer 1 - Physical): Broadcasts all packets to all ports. When it receives a packet, it
transmits (repeats) the packet to all of its ports (to all of the other PCs on the network). This can result in a lot
of unnecessary traffic being sent on the network.
Bridge (OSI Layer 2 - Data Link): Forwards packets and filters them based on MAC addresses; forwards broadcast
traffic, but not collision traffic. Can be used to extend networks.
Switch (OSI Layer 2 - Data Link): Controls the flow of network traffic based on the address information in
each packet. A switch is an intelligent hub that learns which devices (MAC addresses) are connected to its ports
and forwards packets to the appropriate port only. Reduces the amount of unnecessary traffic.
Router (OSI Layer 3 - Network): A device that forwards data packets along networks. A router is connected to at
least two networks. Routers use headers and forwarding tables to determine the best path for forwarding the packets.

Gateway: A node on a network that serves as an entrance to another network.

Proxy: Intercepts all requests going from clients to the real server. It is generally used for performance
and filtering reasons. It also hides the internal (private) IP addresses.
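The learning behaviour that distinguishes a switch from a hub can be sketched in a few lines of Python (a toy model for illustration, not real switch firmware; the class name and port layout are invented here):

```python
# Toy model of a learning switch: it records which port each source MAC
# address was seen on, forwards to that port when the destination is known,
# and otherwise floods to all other ports (hub-like behaviour).
class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port  # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # forward to one port
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = LearningSwitch(4)
print(sw.handle_frame("aa", "bb", 0))  # "bb" unknown -> flood: [1, 2, 3]
print(sw.handle_frame("bb", "aa", 2))  # "aa" already learned -> [0]
```

Once both MACs are learned, traffic between them no longer reaches the other ports, which is exactly the "reduces unnecessary traffic" point above.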

6.11 Firewalls (p. 400 SSCP)


Devices designed to prevent unauthorized access. Can be implemented in hardware or software. All the packets
entering/leaving the network are examined according to a predefined rule set (access control). Types are:
Packet Filter (aka Screening router; Layer 3 or 4): Most common type of firewall. Placed between the trusted
and untrusted network. Uses ACLs to filter traffic.
Adv: Cheap, may not need dedicated hardware (can use a router), easy to set up.
Dis: Difficult to maintain ACLs; network performance degradation.
Application-Proxy (aka Bastion Host or Application Layer/Level): Inspects all packets at the application layer
to filter application-specific commands such as HTTP POST and GET. Typically uses 2 NICs.
Adv: More secure than packet filtering - can tell what application the packet is trying to use.
Dis: Requires more data processing and can slow down network performance even more.
Stateful Inspection (Layer 3): Monitors packets to filter them and also tracks the status of connections
(e.g. will close a half-open connection).
Adv: Faster than application-proxy and more secure than packet filtering.
Dis: Expensive.
Screened-host: Uses a packet-filtering firewall/router and a bastion (application-proxy) host.
Adv: Highly secure.
Dis: The packet-filtering firewall/router is a single point of attack.
Screened-subnet: Employs two packet-filtering firewall/routers and a bastion host. Separates the internal,
DMZ, and external networks. Supports both packet-filtering and application-proxy services.
Adv: Considered the most secure type of firewall.
Dis: The packet-filtering firewall/router is a single point of attack, but because there is a second one that
protects the internal network, it is still secure.
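The first-match rule evaluation a packet-filtering firewall applies to its ACL can be sketched as follows (a simplified illustration; the rule fields and addresses are made up, and real firewalls match on many more attributes such as protocol and direction):

```python
# Sketch of first-match packet filtering: each rule has a source prefix,
# a destination port (None = any), and an action; the first rule that
# matches a packet decides its fate.
RULES = [
    ("10.0.0.", 22,   "allow"),  # SSH, but only from the internal net
    ("",        80,   "allow"),  # HTTP from anywhere
    ("",        None, "deny"),   # explicit default deny
]

def filter_packet(src_ip, dst_port):
    for src_prefix, port, action in RULES:
        if src_ip.startswith(src_prefix) and port in (None, dst_port):
            return action
    return "deny"  # implicit default if no rule matched

print(filter_packet("10.0.0.5", 22))     # allow (internal SSH)
print(filter_packet("203.0.113.9", 22))  # deny  (falls to default rule)
print(filter_packet("203.0.113.9", 80))  # allow (HTTP from anywhere)
```

Rule order matters: moving the default-deny rule to the top would block everything, which is one reason ACLs are "difficult to maintain".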

6.12 Protocols
Internet Protocol (IP): See previous (p.25)
Transmission Control Protocol (TCP): See previous (p.25)
User Datagram Protocol (UDP): See previous (p.25)
NetBIOS Extended User Interface (NetBEUI): An enhanced version of the NetBIOS protocol used by
network operating systems such as LAN Manager. NetBIOS works at layer 5 (Session).


6.12 Remote Authentication Service Servers


To authenticate and authorize remote users, several methods have been created to keep the system secure. Some
of the ways to access remote services are: Dial-up, ISDN (Integrated Services Digital Network), DSL (Digital
Subscriber Line), and cable modems (which provide high-speed access).
RADIUS (Remote Authentication Dial-In User Service): Simplest method of providing user authentication.
A RADIUS server holds a list of usernames and passwords that systems on the network refer to when
authenticating a user. RADIUS supports a number of popular protocols such as PPP, PAP and CHAP. RADIUS
uses UDP along with a client/server model. RADIUS encrypts only the password; the remainder of the packet
is unencrypted, so a third party could capture other information, such as the username and authorized services.
RADIUS combines authentication and authorization.
TACACS (Terminal Access Controller Access Control System): Provides remote authentication and event
logging using UDP as its communication protocol. When a user tries to log into a TACACS device, the device
refers to the TACACS server to authenticate the user. This provides a central location for all usernames and
passwords to be stored. Does not allow a device to prompt a user to change their password, and does not use
dynamic password tokens. The information is NOT encrypted.
TACACS+ (Terminal Access Controller Access Control System Plus): Provides enhancements to the
standard version of TACACS. It allows users to change their password; supports dynamic password tokens so
that the tokens can be resynchronized; and provides better auditing capabilities. TACACS+ uses TCP as its
communication protocol. It encrypts the entire body of the packet but leaves the standard TACACS+ header in cleartext.
PPP - Point-to-Point Protocol: Used to encapsulate messages and transmit them through an IP network.
PAP - Password Authentication Protocol: Provides identification and authentication of a user attempting to
access a network from a remote system (the user enters a password). The username and password are sent
over the wire to a server for comparison with its database. Sniffing is possible because the password travels in
cleartext and can be captured.
CHAP - Challenge Handshake Authentication Protocol: An authentication protocol that uses a
challenge/response mechanism to authenticate instead of sending a username and password. It avoids sending
the password in any form over the wire. CHAP is better than PAP. The
authentication can be repeated any number of times to ensure replay attacks are not possible.
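The CHAP response computation (per RFC 1994: MD5 over identifier, shared secret, and challenge) can be sketched in Python. This is a minimal illustration of the digest step, not a full PPP implementation:

```python
import hashlib
import os

# CHAP response (RFC 1994): MD5(identifier + shared secret + challenge).
# The password/secret itself never crosses the wire.
def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-secret"    # known to both ends, never transmitted
challenge = os.urandom(16)   # fresh random challenge -> defeats replay
resp = chap_response(1, secret, challenge)

# The server computes the same digest from its copy of the secret
# and compares; a sniffer sees only the challenge and the digest.
assert resp == chap_response(1, secret, challenge)
```

Because the challenge is random each time, a captured response is useless for a later login attempt, which is what makes repeated CHAP authentication replay-resistant.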
Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP): Work at layer 2 (Data Link) to
connect two systems over a serial line (a point-to-point communication line using a dial-up modem); some way is
needed to transport IP packets (a network layer activity) across the serial link (a data link layer activity). The
following two schemes are generally used: SLIP and PPP. PPP has replaced SLIP, because the latter does not do
error detection, dynamic assignment of IP addresses, or data compression.
Point to point tunneling protocol (PPTP): PPTP was developed by Microsoft to provide virtual dial-up
services. PPTP is an encapsulation protocol based on PPP and encrypts and encapsulates PPP packets.
Layer 2 Tunneling Protocol (L2TP): An extension of the Point-to-Point Protocol (PPP). L2TP is also called a
"virtual dial-up protocol" because it extends a dial-up PPP session across the Internet. The client's PPP frames
are encapsulated into IP packets with an L2TP tunneling header and sent across the Internet connection.
L2TP was derived from PPTP features and Cisco protocol called L2F (Layer 2 Forwarding).
L2TP supports TACACS+ and RADIUS authentication. PPTP does not.
L2TP also supports more protocols than PPTP, including IPX, SNA, and others.
Microsoft continues to support PPTP for its Windows products, but L2TP is preferred over PPTP.
IPSec is now the Internet standard for tunneling and secure VPNs.
Layer 2 Forwarding Protocol (L2F): Developed by Cisco to establish a secure tunnel across the Internet. This
tunnel creates a virtual point-to-point connection between the user and the enterprise customer's network. L2F
allows encapsulation of PPP/SLIP packets within L2F. It is not used by IPSec; it is used by VPNs.
6.12.1 Network Layer Security Protocols
IP Security (IPsec): Works at layer 3 (Network layer). IPSec provides authentication, integrity and
encryption. It is used extensively in VPNs (Virtual Private Networks). See p. 23 for more.
6.12.2 Application Layer Security Protocols
Secure Sockets Layer (SSL): SSL developed by Netscape for establishing authenticated and encrypted
sessions between Web servers and Web clients. See p.23 for more.
Transport layer security (TLS): The IETF's version of SSL v3.0. Uses Diffie-Hellman public-key
cryptography. TLS also uses HMAC (Hashed Message Authentication Code), a core protocol essential
for security on the Internet along with IPSec. HMAC is the mechanism for message authentication that
uses either MD5 or SHA-1 hash functions in combination with a shared secret key.
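Python's standard library exposes HMAC directly, which makes the mechanism easy to demonstrate (the key and message here are arbitrary examples):

```python
import hmac
import hashlib

# HMAC combines a hash function (here SHA-1, as TLS may use) with a
# shared secret key, so a message digest cannot be forged without the key.
key = b"shared-secret-key"
message = b"ClientHello"

tag = hmac.new(key, message, hashlib.sha1).hexdigest()

# The receiver recomputes the tag with its copy of the key and compares
# using a constant-time check to avoid timing side channels.
expected = hmac.new(key, message, hashlib.sha1).hexdigest()
print(hmac.compare_digest(tag, expected))  # True
```

An attacker who alters the message (or lacks the key) cannot produce a matching tag, which is the integrity/authentication guarantee HMAC adds on top of a plain hash.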


6.13 Communications Security Techniques


Network address translation (NAT): Allows using one set of IP addresses for internal traffic and a second set
of addresses for external traffic. It allows hosts on a private internal network to transparently communicate with
destinations on an external network or vice versa. Following are the types of NAT.
Static: Maps unregistered IP address to a registered IP address on a one-to-one basis. [Typically used
for internal to external translation.]
Dynamic: Maps unregistered IP address to a registered IP address from a group of registered IP addresses.
Port Address Translation: A form of dynamic NAT that maps multiple unregistered IP addresses to a
single registered IP address by using different ports.
Note: The Internet Assigned Numbers Authority (IANA) has reserved 3 blocks of IP addresses for use in internal
private networks. These addresses are not routable on the public Internet:
10.0.0.0 to 10.255.255.255 (used for large organizations)
172.16.0.0 to 172.31.255.255 (used for medium Intranets)
192.168.0.0 to 192.168.255.255 (used for small Intranets)
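The three reserved blocks can be checked programmatically; a small sketch using Python's ipaddress module:

```python
import ipaddress

# The three RFC 1918 blocks reserved for private (NAT'ed) networks,
# in CIDR form: 10/8, 172.16/12, 192.168/16.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_private("172.31.255.1"))  # True  (top of the 172.16/12 block)
print(is_private("172.32.0.1"))    # False (just outside it)
```

A NAT device translates the private source address to a registered one on the way out; the 172.16/12 boundary is a common exam trap, since 172.32.x.x looks private but is not.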
Virtual Private Network (VPN): A secure private connection through a public network. A virtual private
network is the creation of private links across public networks such as the Internet using encryption and
tunneling techniques. Before IPSec, L2TP (Layer 2 Tunneling Protocol) was used to encapsulate IP packets in
"tunneling" packets that hide the underlying Internet routing structure. Two types of VPN are generally used.
1) Remote Access: User-to-LAN connection via a public or shared network, for employees that have a
need to connect to the corporate LAN from the remote place. The user systems will be loaded with
special client software that enables a secure link between themselves and the corporate LAN.
2) Site-to-site: VPN connects fixed sites to a corporate LAN over Internet or intranet.
IP Address Ranges (by first octet)
Class A addresses: 0-127 (128)
Class B addresses: 128-191 (64)
Class C addresses: 192-223 (32)
Class D addresses: 224-239 (16)
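Since the class is determined entirely by the first octet, it can be derived with a simple comparison chain (a sketch covering the classful ranges listed above, with E for the remaining experimental range):

```python
# Classful IPv4 addressing: the (historical) class of an address is
# determined by the value of its first octet.
def ip_class(addr: str) -> str:
    first = int(addr.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"   # multicast
    return "E"       # experimental (240-255)

print(ip_class("10.1.2.3"))     # A
print(ip_class("192.168.0.1"))  # C
```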


7.0 MALICIOUS CODE


Virus: A program or piece of code that is loaded without permission; it can hide itself, reproduce itself, and
attach to other programs. A virus tries to do undesirable/unwanted things.
Worm: A program that replicates itself over a computer network and usually performs malicious actions.
Trojan Horse: A destructive program inserted inside an apparently harmless program. It can perform the
intended function in the foreground as well as an undesirable function in the background.
Logic bomb: A program, or portion of a program, that lies dormant until a specific piece of program logic or
system event activates it; when the specific condition is fulfilled, it generally performs a security-compromising
activity.

7.1 Various Type of Virus


7.1.1 What Part Viruses Infect
Boot sector: Boot sector viruses infect the boot record on hard disks, floppy disks. If the infected computer boots
successfully, then the boot sector virus stays in the memory and infects floppies and other media when the
infected computer writes them.
Master Boot Record (MBR): Very similar to boot sector viruses, except that they infect the MBR (Master Boot
Record) instead of the boot sector.
File infector viruses: Infect files that contain executable code, such as .EXE and .COM files, and infect other files
when they are executed.
Macro: Macro viruses infect certain types of data files. Most macro viruses infect Microsoft Office files, such as
Word Documents, Excel Spreadsheets, PowerPoint Presentations, and Access Databases. These are typically
using the Visual Basic macro language, which is built into Microsoft Office applications.
Source Code: These viruses add code to actual program source code.
7.1.2 How Viruses Infect
Polymorphic: A virus that changes its signature (i.e., its binary pattern) every time it replicates and infects
a new file, in order to keep from being detected by anti-virus programs.
Stealth: In order to avoid detection, a virus will often take over system functions likely to spot it and use them to
hide itself.
Multi-partite: Multi-partite viruses share the characteristics of more than one virus type (they have a dual
personality). For example, a multi-partite virus might infect both the boot record and program files.
Camouflage Viruses: Viruses that attempt to appear as a harmless program to scanners (an older/outdated type
of virus).
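A naive signature scanner shows why polymorphism defeats pattern matching: the scanner looks for a fixed byte string, so a variant that re-encodes its body no longer matches (a toy illustration with an invented signature, not a real scanner):

```python
# Toy signature scanner: flags data if it contains a known byte pattern.
SIGNATURE = b"\xde\xad\xbe\xef"

def scan(data: bytes) -> bool:
    return SIGNATURE in data

original = b"header" + SIGNATURE + b"payload"

# A polymorphic variant re-encodes its body on each infection (here:
# XOR with a key), so the fixed signature no longer appears as-is.
mutated = bytes(b ^ 0x5A for b in original)

print(scan(original))  # True  -> detected by the signature
print(scan(mutated))   # False -> evades the naive scanner
```

This is why anti-virus products supplement signatures with heuristics and integrity checking, as the next section describes.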

7.2 How malicious code can be introduced into the computing environment

Network attacks: Trying to obtain a username and password by brute force or a dictionary attack, then
introducing a virus file or malicious code after successful exploitation.
Spoofing (masquerading): Sending email that appears to have originated from one source when it actually
was sent from another source.
Alteration of authorized code and introducing malicious code.
Email Spamming or bombing: sending email to hundreds or thousands of users with attached virus file.
Active-X: Set of platform independent technologies developed by Microsoft that enable software
components to interact with one another in a networked environment. This functionality of Active X
components can be exploited by malicious mobile code.
Mobile code: Code that can be transferred from a system to another system to be executed (i.e. Java,
ActiveX etc)
Trap doors: A mechanism that is intentionally built in, often for the purpose of providing direct access;
hidden code or a hardware device used to circumvent security controls.

7.3 Mechanisms that can be used to prevent, detect malicious code attacks
Generally anti-virus software program will be used in combination with Scanning, Integrity Checking and
Interception. You should also try to ensure:


Use of anti-virus software


Keeping virus definition files up to date
Scanning at the network, mainframe, server, and workstation for vulnerability
Loading software only from trusted sources
Physical security of removable media
Making frequent backups
Installing change detection software (integrity checker)
Implement a user awareness program
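Change detection (integrity checking) boils down to comparing a file's current hash against a recorded baseline; a minimal sketch (the file name and contents are invented for illustration):

```python
import hashlib

# Sketch of change-detection (integrity checking): record a baseline
# hash of each file's contents, then re-hash later and compare.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Baseline captured while the system is known-clean.
baseline = {"boot.bin": digest(b"original code")}

def check(name: str, current: bytes, baseline: dict) -> bool:
    """Return True if the file still matches its baseline hash."""
    return baseline.get(name) == digest(current)

print(check("boot.bin", b"original code", baseline))  # True:  unchanged
print(check("boot.bin", b"infected code", baseline))  # False: modified
```

Unlike signature scanning, this catches any modification, including viruses no signature exists for, but only for files that were clean when the baseline was taken.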

7.4 Common Attacks


Common DoS attacks
Buffer Overflow Attack: Occurs when a process receives much more data than expected. If the process has no
programmed routine to deal with this excessive amount of data, it acts in an unexpected way that the intruder can
exploit. Several types of buffer overflow attacks exist, the most common being the Ping of Death (large-packet
ping attack) and the use of usernames or filenames of over 256 characters in email.
SYN Attack: Occurs when an attacker exploits the use of buffer space during a TCP session initialization
handshake. The attacker floods the target system's small in-process queue with connection requests but does
not respond when the target system replies to those requests. This causes the target system to time out while
waiting for the proper response, which makes the system crash or become unusable.
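The exhaustion of the half-open connection queue can be modelled in a few lines (a toy simulation for illustration, not real TCP/IP stack behaviour):

```python
# Toy model of a SYN backlog queue: each SYN occupies a slot until the
# handshake completes (final ACK) or times out. Spoofed SYNs never
# complete, so the queue fills and legitimate clients are refused.
class SynQueue:
    def __init__(self, size):
        self.size = size
        self.half_open = set()  # connections awaiting the final ACK

    def syn(self, client):
        if len(self.half_open) >= self.size:
            return "refused"        # backlog full: denial of service
        self.half_open.add(client)
        return "syn-ack sent"

    def ack(self, client):
        self.half_open.discard(client)  # handshake done, slot freed

q = SynQueue(size=3)
for i in range(3):
    q.syn(f"spoofed-{i}")        # attacker floods from spoofed sources
print(q.syn("legit-client"))     # refused
```

Defences like SYN cookies avoid keeping per-connection state until the final ACK arrives, so there is no queue for the attacker to fill.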
Teardrop Attack: Consists of modifying the length and fragmentation offset fields in sequential IP packets. The
target system then becomes confused and crashes after it receives contradictory instructions on how the
fragments are offset on these packets.
Smurf: Uses a combination of IP spoofing and ICMP to saturate a target network with traffic, thereby launching
a denial of service attack. It consists of three elements the source site, the bounce site, and the target site. The
attacker (the source site) sends a spoofed PING packet to the broadcast address of a large network (the bounce
site). This modified packet contains the address of the target site. This causes the bounce site to broadcast the
misinformation to all of the devices on its local network. All of these devices now respond with a reply to the
target system, which is then saturated with those replies.
Common Session Hijacking Attacks
IP Spoofing Attacks: Involve an alteration of a packet at the TCP level, used to attack Internet-connected
systems that provide various TCP/IP services. The attacker sends a packet with the IP source address
of a known, trusted host to convince a system that it is communicating with a known entity, which gives the
intruder access. The target host may accept the packet and act upon it.
TCP Sequence Number Attacks: Exploit the communications session established between the
target and the trusted host that initiated the session. The intruder tricks the target into believing it is connected to
a trusted host and then hijacks the session by predicting the target's choice of an initial TCP sequence number.
This session is then often used to launch various attacks on other hosts.
Other Fragmentation Attacks
IP fragmentation attacks use varied IP datagram fragmentation to disguise TCP packets from a target's IP
filtering devices. The following are some examples of these types of attacks:
A tiny fragment attack occurs when the intruder sends a very small fragment that forces some of the TCP
header fields into a second fragment. If the target's filtering device does not enforce a minimum fragment size, this
illegal packet can then be passed on through the target's network.
An overlapping fragment attack is another variation on a datagram's zero-offset modification (like the teardrop
attack). Subsequent packets overwrite the initial packet's destination address information, and then the second
packet is passed by the target's filtering device. This can happen if the target's filtering device does not enforce a
minimum fragment offset for fragments with non-zero offsets.


References
International Information Systems Security certification Consortium (www.isc2.org)
The CISSP and SSCP Open Study Guide Web site (www.cccure.org)
CERT Coordination Center (www.cert.org)
NIST CSRC (www.csrc.nist.gov)
Google (www.google.com)
Tom Sheldon's Linktionary.com (www.linktionary.com)
Online Computer Dictionary for computer/Internet terms & Definitions (www.webopedia.com)
Computer Knowledge Virus Tutorial (www.cknow.com/vtutor/)
Free online dictionary and thesaurus (http://encyclopedia.thefreedictionary.com/)
SANS Institute - Computer Security Education & Information Security Training (www.sans.org)
Wikipedia (www.wikipedia.org)
