
QUESTION 1

Biometrics

Biometrics describes the automated methods of recognizing an individual based on a physiological

or behavioral characteristic. Biometric authentication systems include measurements of the face,

fingerprint, hand geometry, iris, retina, signature, and voice. Biometric technologies can be the

foundation of highly secure identification and personal verification solutions. The popularity and use

of biometric systems has increased because of the increased number of security breaches and

transaction fraud. Biometrics provides confidential financial transactions and personal data privacy.

For example, Apple uses fingerprint technology with its smartphones. The user’s fingerprint unlocks
the device and provides access to apps such as online banking or payment apps.

When comparing biometric systems there are several important factors to consider including

accuracy, speed or throughput rate, acceptability to users, uniqueness of the biometric organ and

action, resistance to counterfeiting, reliability, data storage requirements, enrollment time, and

intrusiveness of the scan. The most important factor is accuracy. Accuracy is expressed in error

types and rates.

The first error type is the Type I error, or false rejection. A Type I error rejects a person who is
enrolled and is an authorized user. In access control, if the requirement is to keep the bad guys out,
false rejection is the least important error. However, in many biometric applications, false rejections
can have a very negative impact on business. For example, a bank or retail store needs to
authenticate a customer's identity and account balance. A false rejection means that the transaction
or sale is lost, and the customer becomes upset. Most bankers and retailers are willing to allow a few
false accepts as long as there are minimal false rejects.

The false acceptance rate is stated as a percentage and is the rate at which a system accepts
unenrolled individuals or impostors as authentic users. False acceptance is a Type II error. Type II
errors let the bad guys in, so they are normally considered the most important error for a biometric
access control system.

The most widely used method to measure the accuracy of biometric authentication is the Crossover

Error Rate (CER). The CER is the rate at which the false rejection rate and the false acceptance rate
are equal, as shown in the figure.
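The trade-off between the two error rates, and the crossover point between them, can be sketched in Python. This is a minimal illustration only; the match scores and the 0.0-1.0 scoring scale below are invented, not taken from any real biometric system.

```python
# Illustrative sketch: locating the Crossover Error Rate (CER) from
# hypothetical match scores. All score values are invented for illustration.

# Match scores produced by a biometric system (higher = better match).
genuine_scores = [0.62, 0.71, 0.74, 0.80, 0.85, 0.88, 0.90, 0.93]   # authorized users
impostor_scores = [0.30, 0.41, 0.45, 0.52, 0.58, 0.63, 0.69, 0.72]  # unenrolled users

def frr(threshold):
    """Type I rate: fraction of genuine users rejected (score below threshold)."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def far(threshold):
    """Type II rate: fraction of impostors accepted (score at or above threshold)."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Sweep the acceptance threshold and pick the point where FRR and FAR
# are closest: that crossing point is the CER.
thresholds = [t / 100 for t in range(0, 101)]
cer_threshold = min(thresholds, key=lambda t: abs(frr(t) - far(t)))
print(f"Threshold ~{cer_threshold:.2f}: FRR={frr(cer_threshold):.2f}, FAR={far(cer_threshold):.2f}")
```

Raising the threshold lowers false acceptances but raises false rejections, which is exactly the trade-off a bank tunes in the opposite direction from a military facility.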

Badges and Access Logs

An access badge allows an individual to gain access to an area with automated entry points. An

entry point can be a door, a turnstile, a gate, or other barrier. Access badges use various

technologies such as a magnetic stripe, barcode, or biometrics.

A card reader reads a number contained on the access badge. The system sends the number to a

computer that makes access control decisions based on the credential provided. The system logs the
transaction for later retrieval. Reports reveal who entered which entry points at what time.
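The read-decide-log cycle described above can be sketched as a few lines of Python. The badge numbers, holder names, and entry-point names are invented for illustration; a real system would query a credential database.

```python
# Minimal sketch of a badge access-control system with an access log.
# Badge numbers, names, and timestamps below are invented for illustration.
from datetime import datetime

authorized_badges = {"1001": "Alice", "1002": "Bob"}  # badge number -> holder
access_log = []  # each entry records who, where, when, and the decision

def badge_read(badge_number, entry_point, when):
    """Decide access from the badge number and log the transaction."""
    granted = badge_number in authorized_badges
    access_log.append({
        "badge": badge_number,
        "holder": authorized_badges.get(badge_number, "unknown"),
        "entry_point": entry_point,
        "time": when,
        "granted": granted,
    })
    return granted

badge_read("1001", "server-room door", datetime(2024, 5, 1, 9, 15))
badge_read("9999", "server-room door", datetime(2024, 5, 1, 9, 17))

# Reports reveal who entered which entry point at what time.
for entry in access_log:
    print(entry["time"], entry["entry_point"], entry["holder"], entry["granted"])
```

Note that denied attempts are logged too; unexplained denials at odd hours are often the first sign of a probing attempt.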

Guards and Escorts

All physical access controls including deterrent and detection systems ultimately rely on personnel to

intervene and stop the actual attack or intrusion. In highly secure information system facilities,

guards control access to the organization’s sensitive areas. The benefit of using guards is that they

can adapt more than automated systems. Guards can learn and distinguish many different

conditions and situations and make decisions on the spot. Security guards are the best solution for

access control when the situation requires an instantaneous and appropriate response. However,

guards are not always the best solution. There are numerous disadvantages to using security guards

including cost and the ability to monitor and record high volume traffic. The use of guards also

introduces human error to the mix.

Video and Electronic Surveillance

Video and electronic surveillance supplement or, in some cases, replace security guards. The benefit

of video and electronic surveillance is the ability to monitor areas even when no guards or personnel

are present, the ability to record and log surveillance videos and data for long periods, and the ability

to incorporate motion detection and notification.

Video and electronic surveillance can also be more accurate in capturing events even after they

occur. Another major advantage is that video and electronic surveillance provide points of view not

easily achieved with guards. It can also be far more economical to use cameras to monitor the entire

perimeter of a facility. In a highly secure environment, an organization should place video and

electronic surveillance at all entrances, exits, loading bays, stairwells and refuse collection areas. In
most cases, video and electronic surveillance supplement security guards.

RFID and Wireless Surveillance

Managing and locating important information system assets is a key challenge for most

organizations. Growth in the number of mobile devices and IoT devices has made this job even

more difficult. Time spent searching for critical equipment can lead to expensive delays or downtime.

The use of Radio Frequency Identification (RFID) asset tags can be of great value to the security

staff. An organization can place RFID readers in the door frames of secure areas so that they are

not visible to individuals.

The benefit of RFID asset tags is that they can track any asset that physically leaves a secure area.

New RFID asset tag systems can read multiple tags simultaneously. RFID systems do not require

line-of-sight to scan tags. Another advantage of RFID is the ability to read tags that are not visible.

Unlike barcodes and human-readable tags, which must be physically located and viewable to read,
RFID tags do not need to be visible to scan. For example, with a manual or barcode process, a PC
tagged underneath a desk would require personnel to crawl under the desk to physically locate and
view the tag. An RFID tag allows personnel to scan the tag without even seeing it.
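A doorway RFID reader of the kind described above can be sketched as follows. The tag IDs and asset names are invented; real readers return tag IDs over a vendor API, but the reporting logic is the same.

```python
# Sketch: an RFID reader in a door frame reads every tag that passes, with no
# line-of-sight needed, and reports assets leaving the secure area.
# Tag IDs and asset names are invented for illustration.

registered_assets = {"TAG-001": "Server rack PC", "TAG-002": "Laptop", "TAG-003": "Switch"}

def doorway_scan(tags_read):
    """Given the tags read in one pass, report known assets leaving
    and any unrecognized tags seen."""
    leaving = {tag: registered_assets[tag] for tag in tags_read if tag in registered_assets}
    unknown = [tag for tag in tags_read if tag not in registered_assets]
    return leaving, unknown

# A single pass can report multiple tags simultaneously.
leaving, unknown = doorway_scan(["TAG-002", "TAG-003", "TAG-999"])
print("Assets leaving:", leaving)
print("Unrecognized tags:", unknown)
```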

Physical Protection of Workstations

There are several methods of physically protecting computer equipment:

Security Cables and Locks

Many portable devices and expensive computer monitors have a special steel bracket security slot

built in to use in conjunction with cable locks.

The most common type of door lock is a standard keyed entry lock. It does not automatically lock

when the door closes. Additionally, an individual can wedge a thin plastic card such as a credit card

between the lock and the door casing to force the door open. Door locks in commercial buildings are

different from residential door locks. For additional security, a deadbolt lock provides extra security.

Any lock that requires a key, though, poses a vulnerability if the keys are lost, stolen, or duplicated.
A cipher lock, shown in the figure, uses buttons that a user presses in a given sequence to open the

door. It is possible to program a cipher lock. This means that a user’s code may only work during

certain days or certain times. For example, a cipher lock may only allow Bob access to the server

room between the hours of 7 a.m. and 6 p.m. Monday through Friday. Cipher locks can also keep a

record of when the door opened, and the code used to open it.
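The Bob example above can be sketched directly: a code is accepted only inside its programmed day and hour window, and each opening is recorded. The code value and schedule are invented for illustration.

```python
# Sketch of a programmable cipher lock: a user's code only works during
# configured days and hours, and every opening is logged.
# The code "4821" and its schedule are invented for illustration.
from datetime import datetime

# Bob's code works Monday-Friday (weekday 0-4), 7 a.m. to 6 p.m.
codes = {"4821": {"user": "Bob", "days": range(0, 5), "start_hour": 7, "end_hour": 18}}
open_log = []  # record of when the door opened and the code used

def try_code(code, when):
    """Accept the code only inside its programmed day/hour window."""
    rule = codes.get(code)
    if rule is None:
        return False
    allowed = (when.weekday() in rule["days"]
               and rule["start_hour"] <= when.hour < rule["end_hour"])
    if allowed:
        open_log.append((when, code))
    return allowed

print(try_code("4821", datetime(2024, 5, 6, 9, 0)))  # Monday 9 a.m.
print(try_code("4821", datetime(2024, 5, 4, 9, 0)))  # Saturday: outside the window
```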

QUESTION 2

2. Briefly explain the following topics related to Cryptography:

a. Cryptanalysis & Cryptology

b. CIA

c. Public Key Infrastructure

Cryptanalysis
For as long as there has been cryptography, there has been cryptanalysis. Cryptanalysis is the

practice and study of determining the meaning of encrypted information (cracking the code), without

access to the shared secret key.

Throughout history, there have been many instances of cryptanalysis:

• The Vigenère cipher was considered unbreakable until it was broken in the 19th century by
English cryptographer Charles Babbage.

• Mary, Queen of Scots, plotted to overthrow Queen Elizabeth I and sent encrypted messages
to her co-conspirators. The cracking of the code used in this plot led to the beheading of
Mary in 1587.

• The Germans used Enigma-encrypted communications to navigate and direct their U-boats
in the Atlantic. Polish and British cryptanalysts broke the German Enigma code; Winston
Churchill considered this a turning point in WWII.

The figure symbolizes that many keys must be tried before successfully breaking a code.

Several methods are used in cryptanalysis:

• Brute-force method - The attacker tries every possible key knowing that eventually one of
them will work.

• Ciphertext method - The attacker has the ciphertext of several encrypted messages but no

knowledge of the underlying plaintext.

• Known-Plaintext method - The attacker has access to the ciphertext of several messages

and knows something about the plaintext underlying that ciphertext.

• Chosen-Plaintext method - The attacker chooses which data the encryption device

encrypts and observes the ciphertext output.

• Chosen-Ciphertext method - The attacker can choose different ciphertext to be decrypted

and has access to the decrypted plaintext.

• Meet-in-the-Middle method - The attacker knows a portion of the plaintext and the

corresponding ciphertext.
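The brute-force method in the list above can be demonstrated on a toy Caesar cipher, which has only 26 possible keys. This is purely illustrative: the cipher, the message, and the "crib" (a plaintext fragment the attacker expects, as in the known-plaintext method) are all invented here.

```python
# Illustrative brute-force attack on a toy Caesar (shift) cipher: with only
# 26 possible keys, the attacker simply tries every key.

def caesar_encrypt(plaintext, key):
    """Shift each letter forward by `key` positions (toy cipher, lowercase)."""
    return "".join(chr((ord(c) - ord("a") + key) % 26 + ord("a")) if c.isalpha() else c
                   for c in plaintext.lower())

def brute_force(ciphertext, crib):
    """Try every possible key; return the key whose decryption contains the
    crib (a plaintext fragment the attacker expects to see)."""
    for key in range(26):
        candidate = caesar_encrypt(ciphertext, -key % 26)  # shifting back decrypts
        if crib in candidate:
            return key, candidate
    return None

ciphertext = caesar_encrypt("attack at dawn", 13)
print("Ciphertext:", ciphertext)
print("Recovered :", brute_force(ciphertext, "attack"))
```

Real ciphers such as AES have keyspaces around 2^128 or larger, which is precisely why brute force alone fails against them and the other, cleverer methods in the list exist.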

Cryptology
Cryptology is the science of making and breaking secret codes. As shown in the figure, cryptology

combines two separate disciplines:

• Cryptography - the development and use of codes

• Cryptanalysis - the breaking of those codes

There is a symbiotic relationship between the two disciplines because each makes the other one

stronger. National security organizations employ practitioners of both disciplines and put them to

work against each other.

There have been times when one of the disciplines has been ahead of the other. For example,

during the Hundred Years War between France and England, the cryptanalysts were leading the

cryptographers. France mistakenly believed that the Vigenère cipher was unbreakable, and then the

British cracked it. Some historians believe that the successful cracking of encrypted codes and

messages had a major impact on the outcome of World War II. Currently, it is believed that

cryptographers are in the lead.

Cryptanalysis is often used by governments in military and diplomatic surveillance, by enterprises in

testing the strength of security procedures, and by malicious hackers in exploiting weaknesses in
websites.

Cryptanalysts are individuals who perform cryptanalysis to crack secret codes. A sample job

description is displayed in the figure.

While cryptanalysis is often linked to mischievous purposes, it is actually a necessity. It is an ironic

fact of cryptography that it is impossible to prove that any algorithm is secure. It can only be proven

that it is not vulnerable to known cryptanalytic attacks. Therefore, there is a need for

mathematicians, scholars, and security forensic experts to keep trying to break the encryption

methods.

In the world of communications and networking, authentication, integrity, and data confidentiality are

implemented in many ways using various protocols and algorithms. The choice of protocol and

algorithm varies based on the level of security required to meet the goals of the network security

policy.

As an example, for message integrity, Message Digest 5 (MD5) is faster but less secure than Secure
Hash Algorithm 2 (SHA-2). Confidentiality can be implemented using DES, 3DES, or the very secure

AES. Again, the choice varies depending on the security requirements specified in the network

security policy document. The table in the figure lists common cryptographic hashes, protocols, and

algorithms.
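The MD5 versus SHA-2 comparison above can be seen directly with Python's standard hashlib module. The message string is invented for illustration; the digest lengths are properties of the algorithms themselves.

```python
# Comparing hash digests with Python's standard hashlib: MD5 produces a
# shorter (128-bit) digest and is no longer considered secure; SHA-2
# (here SHA-256) produces a 256-bit digest and is the stronger choice.
import hashlib

message = b"transfer $100 to account 12345"  # example message, invented

md5_digest = hashlib.md5(message).hexdigest()
sha256_digest = hashlib.sha256(message).hexdigest()

print("MD5    :", md5_digest, f"({len(md5_digest) * 4} bits)")
print("SHA-256:", sha256_digest, f"({len(sha256_digest) * 4} bits)")

# Integrity check: any change to the message changes the digest completely.
tampered = b"transfer $900 to account 12345"
print("Digests match:", hashlib.sha256(tampered).hexdigest() == sha256_digest)
```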

CIA

Confidentiality

Preserving authorized restrictions on information access and disclosure, including means for
protecting personal privacy and proprietary information.

Integrity

Guarding against improper information modification or destruction, including ensuring information
non-repudiation and authenticity.

Availability

Ensuring timely and reliable access to and use of information.


Public Key Infrastructure
On the Internet, continually exchanging identification between all parties would be impractical.

Therefore, individuals agree to accept the word of a neutral third party. Presumably, the third

party does an in-depth investigation prior to the issuance of credentials. After this in-depth

investigation, the third party issues credentials that are difficult to forge. From that point

forward, all individuals who trust the third party simply accept the credentials that the third party

issues.

For example, in the figure Alice applies for a driver’s license. In this process, she submits

evidence of her identity, such as birth certificate, picture ID, and more to a government licensing

bureau. The bureau validates Alice’s identity and permits Alice to complete a driver’s

examination. Upon successful completion, the licensing bureau issues Alice a driver license.

Later, Alice needs to cash a check at the bank. Upon presenting the check to the bank teller, the

bank teller asks her for ID. The bank, because it trusts the government licensing bureau, verifies

her identity and cashes the check.

The Public Key Infrastructure (PKI) is the framework used to securely exchange information

between parties. The foundation of a PKI identifies a certificate authority analogous to the

licensing bureau. The certificate authority issues digital certificates that authenticate the identity

of organizations and users. These certificates are also used to sign messages to ensure that the

messages have not been tampered with.

PKI is needed to support large-scale distribution and identification of public encryption keys.

PKI enables users and computers to securely exchange data over the Internet and to verify the

identity of the other party. The PKI identifies the encryption algorithms, levels of security, and

distribution policy to users.

Any form of sensitive data exchanged over the Internet is reliant on PKI for security. Without

PKI, confidentiality can still be provided but authentication is not guaranteed. For example, the
information could be encrypted and exchanged. However, there would be no assurance of the

identity of the other party.

The PKI framework consists of the hardware, software, people, policies, and procedures needed

to create, manage, store, distribute, and revoke digital certificates. Specifically, the main

elements of the PKI are:

1. Certificate store (end device, e.g., a PC)

2. PKI Certificate

3. PKI Certificate Authority

4. Certificate Database

Not all PKI certificates are directly received from a CA. A registration authority (RA) is a
subordinate CA, certified by a root CA to issue certificates for specific uses.
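The chain of trust described in this section can be sketched very simply. This is a deliberately simplified model: real PKI verifies asymmetric signatures on X.509 certificates, whereas the dictionary lookup below merely stands in for signature verification. All CA and host names are invented.

```python
# Highly simplified sketch of PKI trust: a relying party trusts a certificate
# only if its chain of issuers reaches a CA already in its trusted store.
# Real PKI uses digital signatures; the issuer lookup below stands in for
# signature verification. All names are invented.

trusted_cas = {"RootCA"}  # certificate store: CAs this device trusts

# Certificate database: subject -> issuer that signed its certificate
certificates = {
    "IntermediateRA": "RootCA",          # subordinate CA certified by the root
    "www.example.test": "IntermediateRA",
    "www.rogue.test": "ShadyCA",         # issued by an untrusted party
}

def verify(subject):
    """Walk the chain from the subject upward until a trusted root is found."""
    seen = set()
    while subject in certificates and subject not in seen:
        seen.add(subject)  # guard against circular chains
        issuer = certificates[subject]
        if issuer in trusted_cas:
            return True
        subject = issuer
    return False

print(verify("www.example.test"))  # chain reaches RootCA
print(verify("www.rogue.test"))    # ShadyCA is not in the trusted store
```

This mirrors the driver's license analogy: the bank does not know Alice, but it trusts the bureau that issued her credential.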

QUESTION 3

Access Control Models


There are five access control models that you should understand when studying this course. As an

introduction, the five access control models are summarized here:

Discretionary Access Control


A key characteristic of the Discretionary Access Control (DAC) model is that every object has an

owner and the owner can grant or deny access to any other subjects. For example, if you create a

file, you are the owner and can grant permissions to any other user to access the file. The New

Technology File System (NTFS), used on Microsoft Windows operating systems, uses the DAC

model.

Role Based Access Control


A key characteristic of the Role Based Access Control (RBAC) model is the use of roles or groups.

Instead of assigning permissions directly to users, user accounts are placed in roles and

administrators assign privileges to the roles. These roles are typically identified by job functions. If a

user account is in a role, the user has all the privileges assigned to the role. Microsoft Windows

operating systems implement this model with the use of groups.
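The RBAC pattern above is easy to sketch: permissions attach to roles, and a user's effective permissions are the union across all of their roles. Role names, user names, and permission strings below are invented for illustration.

```python
# Sketch of Role Based Access Control: privileges are assigned to roles
# (groups), never directly to users. All names below are invented.

role_permissions = {
    "Managers": {"read_reports", "approve_expenses"},
    "HelpDesk": {"reset_passwords", "read_tickets"},
}

user_roles = {"alice": ["Managers"], "bob": ["HelpDesk", "Managers"]}

def permissions_for(user):
    """Union of the permissions of every role the user account is placed in."""
    perms = set()
    for role in user_roles.get(user, []):
        perms |= role_permissions.get(role, set())
    return perms

def is_authorized(user, permission):
    return permission in permissions_for(user)

print(is_authorized("alice", "reset_passwords"))  # not in HelpDesk
print(is_authorized("bob", "approve_expenses"))   # Bob holds the Managers role
```

The administrative win is that moving Bob to a new job means changing his role list, not auditing dozens of individual permission grants.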


Rule-based access control
A key characteristic of the rule-based access control model is that it applies global rules to all
subjects. As an example, a firewall uses rules that allow or block traffic for all users equally. Rules

within the rule-based access control model are sometimes referred to as restrictions or filters.

Attribute Based Access Control

A key characteristic of the Attribute Based Access Control (ABAC) model is its use of rules that can

include multiple attributes. This allows it to be much more flexible than a rule-based access control

model that applies the rules to all subjects equally. Many software-defined networks use the ABAC

model. Additionally, ABAC allows administrators to create rules within a policy using plain language

statements such as “Allow Managers to access the WAN using a mobile device.”

Mandatory Access Control


A key characteristic of the Mandatory Access Control (MAC) model is the use of labels applied to

both subjects and objects. For example, if a user has a label of top secret, the user can be granted

access to a top secret document. In this example, both the subject and the object have matching

labels. When documented in a table, the MAC model sometimes resembles a lattice (such as one

used for a climbing rosebush), so it is referred to as a lattice-based model.
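The label-matching idea in the MAC model can be sketched as an ordered set of levels, where access requires the subject's label to dominate the object's. The level names below mirror common classification schemes but are chosen here purely for illustration.

```python
# Sketch of Mandatory Access Control: labels on subjects and objects form an
# ordered lattice, and read access requires the subject's clearance to
# dominate (be at least as high as) the object's label.
# The level ordering below is illustrative.

levels = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_label, object_label):
    """Grant read access only when the subject's label dominates the object's."""
    return levels[subject_label] >= levels[object_label]

print(can_read("top secret", "top secret"))  # matching labels
print(can_read("secret", "top secret"))      # insufficient clearance
```

Unlike DAC, neither the document's creator nor its reader can change these labels; the policy is mandatory and centrally administered.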

QUESTION 4

IDS
The security challenges that face today's network administrators cannot be successfully managed by

any single application. Although implementing device hardening, authentication, authorization, and

accounting (AAA) access control, and firewall features are all part of a properly secured network,

these features still cannot defend the network against fast-moving Internet worms and viruses. A

network must be able to instantly recognize and mitigate worm and virus threats.

It is no longer possible to contain intrusions at a few points in the network. Intrusion prevention is

required throughout the entire network to successfully detect and stop an attack at every inbound and

outbound point.
A networking architecture paradigm shift is required to defend against fast-moving and evolving

attacks. This must include cost-effective detection and prevention systems, such as intrusion

detection systems (IDS) or the more scalable intrusion prevention systems (IPS). The network

architecture integrates these solutions into the entry and exit points of the network.

One approach to prevent worms and viruses from entering a network is for an administrator to

continuously monitor the network and analyze the log files generated by the network devices. This

solution is not very scalable. Manually analyzing log file information is a time-consuming task and

provides a limited view of the attacks being launched against a network. By the time the logs are

analyzed, the attack may have already been successful.

Intrusion Detection Systems (IDSs) were implemented to passively monitor the traffic on a network.

The figure shows that an IDS-enabled device copies the traffic stream and analyzes the copied

traffic rather than the actual forwarded packets. Working offline, it compares the captured traffic

stream with known malicious signatures, similar to software that checks for viruses. Working offline

means several things:

• IDS works passively

• IDS device is physically positioned in the network so that traffic must be mirrored in order to

reach it

• Network traffic does not pass through the IDS unless it is mirrored

Although the traffic is monitored and perhaps reported, no action is taken on packets by the IDS.

This offline IDS implementation is referred to as promiscuous mode.

The advantage of operating with a copy of the traffic is that the IDS does not negatively affect the

packet flow of the forwarded traffic. The disadvantage of operating on a copy of the traffic is that the

IDS cannot stop malicious single-packet attacks from reaching the target before responding to the

attack. An IDS often requires assistance from other networking devices, such as routers and

firewalls, to respond to an attack.

A better solution is to use a device that can immediately detect and stop an attack. An Intrusion

Prevention System (IPS) performs this function.


IPS
An IPS builds upon IDS technology. However, an IPS device is implemented in inline mode. This

means that all ingress and egress traffic must flow through it for processing. As shown in the figure,

an IPS does not allow packets to enter the trusted side of the network without first being analyzed. It

can detect and immediately address a network problem.

An IPS monitors Layer 3 and Layer 4 traffic. It analyzes the contents and the payload of the packets

for more sophisticated embedded attacks that might include malicious data at Layers 2 to 7. Some

IPS platforms use a blend of detection technologies, including signature-based, profile-based, and

protocol analysis-based intrusion detection. This deeper analysis enables the IPS to identify, stop,

and block attacks that would pass through a traditional firewall device. When a packet comes in

through an interface on an IPS, that packet is not sent to the outbound or trusted interface until the

packet has been analyzed.

The advantage of operating in inline mode is that the IPS can stop single-packet attacks from

reaching the target system. The disadvantage is that a poorly configured IPS, or a non-proportional

IPS solution, can negatively affect the packet flow of the forwarded traffic.

The biggest difference between IDS and IPS is that an IPS responds immediately and does not

allow any malicious traffic to pass, whereas an IDS allows malicious traffic to pass before it is

addressed.
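That difference between promiscuous-mode IDS and inline IPS can be sketched by running the same toy traffic through both. The string "signatures" and packet payloads below are invented; real sensors match binary patterns and protocol state, not substrings.

```python
# Sketch contrasting IDS and IPS handling of the same traffic stream using
# toy string "signatures". Signatures and packet payloads are invented.

signatures = ["evil-worm", "sql-drop"]

def ids_monitor(packets):
    """Promiscuous mode: analyze a COPY of the traffic and log alerts only.
    Every packet, including the malicious one, is still forwarded."""
    alerts = [p for p in packets if any(sig in p for sig in signatures)]
    forwarded = list(packets)  # the IDS never blocks
    return forwarded, alerts

def ips_inline(packets):
    """Inline mode: every packet is analyzed BEFORE forwarding; matching
    packets are dropped, stopping even single-packet attacks."""
    forwarded = [p for p in packets if not any(sig in p for sig in signatures)]
    dropped = [p for p in packets if p not in forwarded]
    return forwarded, dropped

traffic = ["GET /index.html", "payload:evil-worm", "POST /login"]
print(ids_monitor(traffic))  # attack is forwarded but alerted on
print(ips_inline(traffic))   # attack is dropped before reaching the target
```

The sketch also shows why the IPS must be sized carefully: every packet pays the analysis cost before it is forwarded, whereas the IDS adds no latency to the live stream.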

IDS and IPS technologies share several characteristics. Both are deployed as sensors. An IDS or
IPS sensor can be in the form of several different devices:

• A router configured with Cisco IOS IPS software

• A device specifically designed to provide dedicated IDS or IPS services

• A network module installed in an adaptive security appliance (ASA), switch, or router

IDS and IPS technologies use signatures to detect patterns in network traffic. A signature is a set of

rules that an IDS or IPS uses to detect malicious activity. Signatures can be used to detect severe

breaches of security, to detect common network attacks, and to gather information. IDS and IPS

technologies can detect atomic signature patterns (single-packet) or composite signature patterns

(multi-packet).

Advantages and Disadvantages of IDS and IPS


IDS Advantages and Disadvantages
A list of the advantages and disadvantages of IDS and IPS is shown in the figure.

A primary advantage of an IDS platform is that it is deployed in offline mode. Since the IDS sensor is

not inline, it has no impact on network performance. It does not introduce latency, jitter, or other

traffic flow issues. In addition, if a sensor fails it does not affect network functionality. It only affects

the ability of the IDS to analyze the data.

However, there are many disadvantages of deploying an IDS platform. An IDS sensor is primarily

focused on identifying possible incidents, logging information about the incidents, and reporting the

incidents. The IDS sensor cannot stop the trigger packet and is not guaranteed to stop a connection.

The trigger packet alerts the IDS to a potential threat. IDS sensors are also less helpful in stopping

email viruses and automated attacks, such as worms.

Users deploying IDS sensor response actions must have a well-designed security policy and a good

operational understanding of their IDS deployments. Users must spend time tuning IDS sensors to

achieve expected levels of intrusion detection.

Finally, because IDS sensors are not inline, an IDS implementation is more vulnerable to network

security evasion techniques in the form of various network attack methods.

IPS Advantages and Disadvantages


An IPS sensor can be configured to perform a packet drop to stop the trigger packet, the packets

associated with a connection, or packets from a source IP address. Additionally, because IPS

sensors are inline, they can use stream normalization. Stream normalization is a technique used to

reconstruct the data stream when the attack occurs over multiple data segments.

A disadvantage of IPS is that (because it is deployed inline) errors, failure, and overwhelming the

IPS sensor with too much traffic can have a negative effect on network performance. An IPS sensor

can affect network performance by introducing latency and jitter. An IPS sensor must be

appropriately sized and implemented so that time-sensitive applications, such as VoIP, are not

adversely affected.
