
CSE 467/598

Data and Information Security


Prof. Gail-Joon Ahn
IMPORTANT DATES
!! Project:
!! Phase II: 11:59 PM, March 24, 2011
!! Exam #1 (Midterm): Mar 8, 2011
!! Spring Break: Mar 15 & 17, 2011 (no class)
!! Class Presentation: April 19, 21, 26, and 28, 2011
(paper report due)
!! Exam #2 (Final): May 10, 2011
(12:00 PM - 2:00 PM)
Cyber Gazette
Hidden Trojan, DroidDream in Android Applications
!! Google has pulled more than 20 free applications from Android Market.
These applications attempted to gain root access to the smartphone to
view sensitive data and download additional malware. Called
DroidDream, the malware has been discovered in more than 50 applications
in the official Android Market.
!! While both Google and Apple check apps for software quality and
interaction with the smartphone OS, do they closely scrutinize applications
for hidden malicious code and other security issues?
Based on the slides for "Computer Security: Art and Science"
Lipner's Integrity Matrix Model
!! Combines the Bell-LaPadula and Biba models to
obtain a model conforming to both
confidentiality and integrity requirements
!! Done in two steps
!! Bell-LaPadula component first
!! Add in Biba component
Bell-LaPadula Clearances
!! 2 security clearances
!! AM (Audit Manager):
!! system audit, management functions
!! SL (System Low):
!! any process can read at this level
Bell-LaPadula Categories
!! 5 categories
!! D (Development):
!! production programs in development but not yet in use
!! PC (Production Code):
!! production processes, programs
!! PD (Production Data):
!! data covered by integrity policy
!! SD (System Development):
!! system programs in development but not yet in use
!! T (Software Tools):
!! programs on production system not related to protected data
Users and Security Levels
Subjects                        Security Level
Ordinary users                  (SL, { PC, PD })
Application developers          (SL, { D, T })
System programmers              (SL, { SD, T })
System managers and auditors    (AM, { D, PC, PD, SD, T })
System controllers              (SL, { D, PC, PD, SD, T }) and downgrade privilege
Objects and Security Levels
Objects                           Security Level
Development code/test data        (SL, { D, T })
Production code                   (SL, { PC })
Production data                   (SL, { PC, PD })
Software tools                    (SL, { T })
System programs                   (SL, ∅)
System programs in modification   (SL, { SD, T })
System and application logs      (AM, { appropriate })
Ideas
!! Ordinary users can execute (read) production code but
cannot alter it
!! Ordinary users can alter and read production data
!! System managers need access to all logs but cannot
change levels of objects
!! System controllers need to install code (hence downgrade
capability)
!! Logs are append only, so must dominate subjects writing
them
Check Requirements
!! Users have no access to T, so cannot write their own
programs
!! Applications programmers have no access to PD, so
cannot access production data; if needed, it must be put
into D, requiring the system controller to intervene
!! Installing a program requires downgrade procedure (from
D to PC), so only system controllers can do it
!! Only system controllers can downgrade (control); any
such downgrading must be logged (audit)
!! System management and audit users are in AM and so
have access to system state and logs
Problem
!! Too inflexible
!! System managers cannot run programs for
repairing inconsistent or erroneous production
database
!! System managers at AM, production data at SL
!! So introduce new categories
!! SP (production), SD (development) and SSD
(system development)
!! Reduce security categories to those three
Adding Biba
!! 3 integrity classifications
!! ISP (System Program): for system programs
!! IO (Operational): production programs,
development software
!! ISL (System Low): users get this on log in
!! 2 integrity categories
!! ID (Development): development entities
!! IP (Production): production entities
Users and Levels
Subjects                       Security Level                              Integrity Level
Ordinary users                 (SL, { SP })                                (ISL, { IP })
Application developers         (SL, { SD })                                (ISL, { ID })
System programmers             (SL, { SSD })                               (ISL, { ID })
System managers and auditors   (AM, { SP, SD, SSD })                       (ISL, { IP, ID })
System controllers             (SL, { SP, SD }) and downgrade privilege    (ISP, { IP, ID })
Repair                         (SL, { SP })                                (ISL, { IP })
Objects and Classifications
Objects                           Security Level          Integrity Level
Development code/test data        (SL, { SD })            (ISL, { IP })
Production code                   (SL, { SP })            (IO, { IP })
Production data                   (SL, { SP })            (ISL, { IP })
Software tools                    (SL, ∅)                 (IO, { ID })
System programs                   (SL, ∅)                 (ISP, { IP, ID })
System programs in modification   (SL, { SSD })           (ISL, { ID })
System and application logs       (AM, { appropriate })   (ISL, ∅)
Repair                            (SL, { SP })            (ISL, { IP })
Ideas
!! Security clearances of subjects same as without
integrity levels
!! Ordinary users need to modify production data, so
ordinary users must have write access to integrity
category IP
!! Ordinary users must be able to write production
data but not production code; integrity classes
allow this
MIDTERM
!! Note #2 ~ #13
[Diagram: security goals - Confidentiality (secrecy), Integrity, Availability, and Usage]
Q: Classify each of the following as a violation of
confidentiality, of integrity, of availability, or of
some combination thereof
!! John copies Mary's homework
!! Carol changes the amount of Angelo's check from $100
to $1,000
!! Gina forges Roger's signature on a deed
!! Ronda shares Peter's medical information with an
unknown health insurance company without obtaining
consent from Peter
!! Henry crashes Julie's personal computer
Q: Identify mechanisms for implementing the
following. State what policy or policies they might
be enforcing
!! A password changing program will reject passwords that
are less than five characters long or that are found in the
dictionary
!! The permissions of the file containing Carol's homework
will prevent Robert from cheating by copying it
!! Annie, a system analyst, will be able to detect a student
using a program to scan her system for vulnerabilities
!! A program used to submit homework will be able to
maintain and restore its copy in/from the backup storage
Policies and Mechanisms
!! Policy says what is, and is not, allowed
!! This defines security for the site/system/etc.
!! Mechanisms enforce policies
!! Human enforced: Disallowing people from
bringing CDs and floppy disks into a computer
facility to control what is placed on systems
!! System supported
Assumptions and Trust
!! Underlie all aspects of security
!! Policy assumptions
!! Correctly capture security requirements
!! Unambiguously partition system states:
!! secure and nonsecure
!! Mechanism assumptions
!! Assumed to enforce policy
!! Support mechanisms work correctly
-> means security mechanisms prevent the system
from entering a nonsecure state
Assumptions and Trust:
Types of Mechanisms
[Diagram: the set of reachable states R compared with the set of secure states Q
for secure, precise, and broad mechanisms]
!! Mechanisms are either secure, precise, or broad
!! Let P and Q be the set of all possible states and the set of
secure states, respectively, and let R be the set of reachable states
!! A security mechanism is
secure if R ⊆ Q
precise if R = Q
broad if there exists a state r such that r ∈ R and r ∉ Q
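As a quick illustration (not from the slides), the following Python sketch classifies a mechanism as secure, precise, or broad by comparing a hypothetical set of reachable states R with a set of secure states Q.

def classify_mechanism(reachable, secure_states):
    # Compare the reachable states R with the secure states Q
    R, Q = set(reachable), set(secure_states)
    if R == Q:
        return "precise"   # exactly the secure states are reachable
    if R <= Q:
        return "secure"    # R is a proper subset of Q
    return "broad"         # some reachable state is not secure

Q = {"s0", "s1", "s2", "s3"}                        # hypothetical secure states
print(classify_mechanism({"s0", "s1"}, Q))          # secure
print(classify_mechanism(Q, Q))                     # precise
print(classify_mechanism({"s0", "s4"}, Q))          # broad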
Policy Languages
!! Express security policies in a precise way
!! Two possible approaches:
!! Low-level languages
!! Policy constraints expressed in terms of program
options, input, or specific characteristics of entities on
system
!! High-level languages
!! Policy constraints expressed abstractly
!! deny (alice op print) when condition 1 ∨ condition 2
DAC:
Authorization Administration
!! Who can administer authorizations?
!! Centralized administration
!! DB admin
!! Ownership-based administration
!! Owner of the entity
!! Delegation:
!! Owner gives grant and revoke rights to someone else
!! Can result in significant complications
States
[Diagram: decomposition of system states]
!! Data states: data values
!! Protection states: values of the system relevant to protection
!! Data privilege states: values of the system directly relevant to data protection
!! Authorization privilege states: values of the system relevant to authorization management
Access control matrix
[Diagram: matrix with subjects as rows and objects (entities) as columns]
!! Subjects S = { s1, ..., sn }
!! Objects O = { o1, ..., om }
!! Rights R = { r1, ..., rk }
!! Entries A[si, oj] ⊆ R
!! A[si, oj] = { rx, ..., ry } means subject si has rights
rx, ..., ry over object oj
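As a small illustration (not from the slides), an ACM can be represented as a nested dictionary in Python; the subjects, objects, and rights below are hypothetical.

# ACM as a nested dict: A[subject][object] is the set of rights
A = {
    "alice": {"file1": {"r", "w", "own"}, "file2": {"r"}},
    "bob":   {"file1": {"r"},             "file2": {"r", "w"}},
}

def has_right(A, s, o, r):
    # True iff right r appears in entry A[s, o]
    return r in A.get(s, {}).get(o, set())

print(has_right(A, "alice", "file1", "w"))   # True
print(has_right(A, "bob", "file1", "w"))     # False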
Conditional Access Control
(with user context)
Access Control Matrix (subjects carry user-context attributes A1, A2, A3):
                  O1          O2      O3
S1(A1,A2,A3)      R,W,O       RC      W(A1,A2)
S2(A1,A2,A3)      R(A1,A3)    W,O     X
S3(A1,A2,A3)      W           RC
Conditional Access Control
(with user and system context)
Access Control Matrix (system variables: time, load):
                  O1          O2           O3
S1(A1,A2,A3)      R,W,O       RC           W(A1,A2)
S2(A1,A2,A3)      R(A1,A3)    W(time),O    X
S3(A1,A2,A3)      W           RC
Conditional Role-based Access Control
(with user and system context)
Access Control Matrix (system variables: time, load):
                             O1          O2           O3
S1(A1,A2,A3) = {R1,R3}       R,W,O       RC           W(A1,A2)
S2(A1,A2,A3) = {R2}          R(A1,A3)    W(time),O    X
S3(A1,A2,A3) = {R3}          W           RC
[Diagram: role hierarchy over roles R1, R2, R3 with the constraint Mutex(R2,R3)]
Multi-object, multi-access rule in the ACM
[Diagram: a query spanning several ACM cells (QS1-QS4); the query "calculate the
maximum salary of a group" cannot be allowed]
ACL VS CAPABILITY
[Diagram: ACLs vs. capabilities.
ACL: each object (File a, File b, File c under /project) stores its own list of
"User: rights" entries (per-object basis).
Capability: each user carries a list of "File: rights" entries, e.g., rights for
File a and File b (per-subject basis)]
Foundational Results
user space -> application space -> system space -> kernel (protected) space
[Diagram: example access control matrix with subjects U and V, objects F and G,
and rights r, w, and own]
Foundational Results
user space -> application space -> system space -> kernel (protected) space
!! Graham-Denning Model
!! "Protection: Principles and Practice" (1972)
!! Primitive commands
Primitive Operations on ACM
!! create subject s
!! Creates new row, column in ACM
!! create object o
!! Creates new column in ACM
!! destroy subject s
!! Deletes row, column from ACM;
!! destroy object o
!! Deletes column from ACM
!! enter r into A[s, o]
!! Adds right r for subject s over object o
!! delete r from A[s, o]
!! Removes right r from subject s over object o
Complex Transition
Commands
!! Multiple primitive operations may be invoked
!! Example: process p creates file f with r and w
permission
command create_file(p, f)
    create object f;
    enter own into A[p, f];
    enter r into A[p, f];
    enter w into A[p, f];
end
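As a minimal sketch (not from the slides), the primitive operations and the create_file composite command can be mimicked in Python; the class and names below are hypothetical.

class ACM:
    def __init__(self):
        self.A = {}                      # A[subject][object] -> set of rights
        self.objects = set()

    def create_subject(self, s):         # new row (subjects are also objects)
        self.A.setdefault(s, {})
        self.objects.add(s)

    def create_object(self, o):          # new column
        self.objects.add(o)

    def enter(self, r, s, o):            # enter r into A[s, o]
        self.A.setdefault(s, {}).setdefault(o, set()).add(r)

    def delete(self, r, s, o):           # delete r from A[s, o]
        self.A.get(s, {}).get(o, set()).discard(r)

def create_file(acm, p, f):
    # Composite command: process p creates file f with own, r, w rights
    acm.create_object(f)
    acm.enter("own", p, f)
    acm.enter("r", p, f)
    acm.enter("w", p, f)

acm = ACM()
acm.create_subject("p")
create_file(acm, "p", "f")
print(acm.A["p"]["f"])                   # e.g. {'own', 'r', 'w'} (set order varies)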
Foundational Results
user space -> application space -> system space -> kernel (protected) space
!! How can we determine if a
computer system is secure?
!! What is secure?
!! Adding a (generic) right r
where there does not exist one
before is called leaking
Foundational Results
user space -> application space -> system space -> kernel (protected) space
!! Harrison, Ruzzo and Ullman (HRU)
!! The halting problem reduces to the safety
problem
!! Undecidable
Foundational Results
[Diagram: HRU reduction of the halting problem to the safety problem. A Turing
machine tape (cells 1-4 holding A, B, X, Y; later A, B, X, D) and its head position
are encoded in the ACM: the cells become subjects s1, ..., s5 chained by "own"
rights, cell contents and machine states (k1, k2) become rights, "end" marks the
last cell, and a transition such as δ(k1, D) = (k2, Y, R) becomes a command on
the matrix]
Foundational Results
user space -> application space -> system space -> kernel (protected) space
!! Take-Grant Protection Model
!! A specific (not generic) system
!! Set of de jure rules for state
transitions
!! Safety decidable
Foundational Results
[Diagram: Take-Grant graphs - subjects and objects joined by take (t), grant (g),
and read (r) edges, with ⊢ showing the graph derived after applying the de jure
rules]
Foundational Results
user space -> application space -> system space -> kernel (protected) space
!! (Extended) Schematic Protection
Model
!! What characteristics distinguish a
model?
!! Introduces types
!! If the scheme is acyclic and
attenuating, the safety question
is decidable
Foundational Results
user space -> application space -> system space -> kernel (protected) space
!! Typed Access Matrix (TAM) Model
!! Is there a way to design an
ACM that looks like ESPM ?
Secure, Precise Mechanisms
!! QUESTION: Can one devise a procedure for
developing a mechanism that is both secure and
precise?
!! Consider only confidentiality policies here
!! Integrity policies produce the same result
Observability
!! Observability Postulate:
!! Treat the program (application) as a black box
!! Then the output of the application encodes all
available information about its inputs (data)
!! Covert channels need to be considered as part of the
output
Observability
!! Covert channel example:
!! Inputs: name, password
!! Output: "Good" or "Bad"
!! If the name is invalid, immediately print "Bad";
!! else access the database to check the password
!! Problem:
!! Based on the response time, one can determine whether the name is
valid
!! This means timing (a covert channel) should be
considered as part of the output
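A hedged Python sketch (not from the slides) of this timing channel: the early return for an invalid name makes the response measurably faster, so an attacker can distinguish valid names from invalid ones. The user database and names are hypothetical.

import time

USERS = {"alice": "correct horse battery staple"}

def slow_password_check(name, password):
    time.sleep(0.2)                          # stands in for the database lookup
    return USERS.get(name) == password

def login(name, password):
    if name not in USERS:                    # early exit: a fast "Bad" leaks name validity
        return "Bad"
    return "Good" if slow_password_check(name, password) else "Bad"

for guess in ("alice", "mallory"):
    start = time.perf_counter()
    login(guess, "wrong password")
    elapsed = time.perf_counter() - start
    print(guess, round(elapsed, 3), "s")     # ~0.2 s for a valid name, ~0 s otherwise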
Protection Mechanism
!! Let p be a function/program p: I1 × ... × In → R
!! A protection mechanism m is a function
m: I1 × ... × In → R ∪ E
for which, when ik ∈ Ik,
!! m(i1, ..., in) = p(i1, ..., in), or
!! m(i1, ..., in) ∈ E
!! E is the set of error outputs
!! In the above example, E = { "Password Database Missing",
"Password Database Locked", ... }
Some outputs are hidden by the protection mechanism
Confidentiality Policy
!! A confidentiality policy determines which inputs can
be revealed
!! For p: I1 × ... × In → R, the confidentiality policy is a function
c: I1 × ... × In → A, where A ⊆ I1 × ... × In
Some inputs/data are
available for observation
Examples
!! c(i1, ..., in) = C, a constant
!! Denies the observer any information (output does
not vary with the inputs)
!! c(i1, ..., in) = (i1, ..., in), and m′ = m
!! Allows the observer full access to the information
!! c(i1, ..., in) = i1
!! Allows the observer access to information about the first
input but no information about the other inputs
Confidentiality Policy
!! A confidentiality policy determines which inputs can
be revealed
!! For p: I1 × ... × In → R, the confidentiality policy is a function
c: I1 × ... × In → A, where A ⊆ I1 × ... × In
!! A security mechanism
m: I1 × ... × In → R ∪ E
is secure iff m is consistent with c, i.e.,
!! ∃ m′: A → R ∪ E such that, for all ik ∈ Ik,
m(i1, ..., in) = m′(c(i1, ..., in))
Some inputs/data are available
for observation
(user is authorized)
The security mechanism is implemented
using a protection mechanism
consistent with the confidentiality policy:
m(i1, ..., in) = p(i1, ..., in), or
m(i1, ..., in) ∈ E
Precision
!! m1, m2 distinct protection mechanisms for
program p under policy c
!! m1 is as precise as m2 (m1 ≈ m2) if,
for all inputs i1, ..., in,
m2(i1, ..., in) = p(i1, ..., in) ⇒ m1(i1, ..., in) = p(i1, ..., in)
!! m1 is more precise than m2 (m1 ~ m2) iff
!! m1 ≈ m2 and there is an input (i1, ..., in) such that
m1(i1, ..., in) = p(i1, ..., in) and m2(i1, ..., in) ≠ p(i1, ..., in)
EVALUATION ASSURANCE
LEVELS
!! EAL0: Inadequate assurance
!! EAL1: Functionally tested
!! EAL2: Structurally tested
!! EAL3: Methodically tested and checked
!! EAL4: Methodically designed, tested and reviewed
!! EAL5: Semi-formally designed and tested
!! EAL6: Semi-formally verified, designed and tested
!! EAL7: Formally verified, designed and tested
Security Labels
!! Defined as an attribute that is associated with
computer system entities to denote their
hierarchical sensitivity and need-to-know
attributes
!! Consists of two components
!! Hierarchical security levels
!! Non-hierarchical security categories
!! Labels = Levels × P(Categories)
!! Levels = {confidential, secret}
!! Categories = {army, navy}
!! P(Categories) = { ∅, {army}, {navy},
{army, navy} }
!! Labels = { (confidential, {army}), (secret, {army}),
..., (secret, ∅) }
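A short Python sketch (not from the slides) that enumerates Labels = Levels × P(Categories) for the example above.

from itertools import chain, combinations

levels = ["confidential", "secret"]
categories = ["army", "navy"]

def powerset(xs):
    # all subsets of xs, from the empty set to the full set
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

labels = [(lvl, frozenset(cats)) for lvl in levels for cats in powerset(categories)]
print(len(labels))      # 2 levels x 4 category subsets = 8 labels
print(labels[:3])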
Bell-LaPadula Model
!! Expand to include categories
!! Security label is (clearance, category set)
!! Finer grained classifications
!! Can be used to implement need-to-know
!! Examples
!! ( Top Secret, { NUC, EUR, ASI } )
!! ( Confidential, { EUR, ASI } )
!! ( Secret, { NUC, ASI } )
Reading and Writing
Information
!! Information flows up, not down
!! Reads up disallowed, reads down allowed
!! Security Condition
!! Subject s can read object o iff L(s) dom L(o) and s has
permission to read o
!! Writes up allowed, writes down disallowed
!! *-Property
!! Subject s can write object o iff L(o) dom L(s) and s has
permission to write o
Security level and security label are interchangeable terms
from this point
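A hedged Python sketch (not from the slides) of the two conditions, using (level, category-set) labels and the dominance relation over the lattice above; the labels are hypothetical.

LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def dom(l1, l2):
    # l1 dominates l2: level is at least as high and categories are a superset
    (lvl1, cats1), (lvl2, cats2) = l1, l2
    return LEVELS[lvl1] >= LEVELS[lvl2] and cats1 >= cats2

def can_read(subject_label, object_label):    # security (read) condition
    return dom(subject_label, object_label)

def can_write(subject_label, object_label):   # *-property
    return dom(object_label, subject_label)

alice = ("secret", {"NUC", "ASI"})
doc   = ("confidential", {"ASI"})
print(can_read(alice, doc), can_write(alice, doc))   # True False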
Types of Tranquility
!! Strong Tranquility
!! The clearances of subjects, and the classifications of
objects, do not change during the lifetime of the system
!! Weak Tranquility
!! The clearances of subjects, and the classifications of
objects, do not change in a way that violates the simple
security condition or the *-property during the lifetime of the
system
Formal Model: Summary
!! S, O, P (r, w, a, e), M, L (C × K), H and
(fs, fo, fc) ∈ F
!! V set of protection states (b, m, f, h)
!! W set of actions of the system
!! W ⊆ R × D × V × V
Simple Security Condition
!! (s, o, p) satisfies the simple security condition
relative to f iff one of the following holds:
!! p = e or p = a (no information flow to user)
!! Holds trivially if rights do not involve reading
!! p = r or p = w, and fs(s) dom fo(o)
!! If all elements of b satisfy ssc rel f, then the state satisfies the
simple security condition
*-Property
!! State (b, m, f, h) satisfies the *-property iff for each
s ∈ S the following hold:
!! ∀ o ∈ b(s: a) [ fo(o) dom fc(s) ]
!! ∀ o ∈ b(s: w) [ fo(o) = fc(s) ]
!! ∀ o ∈ b(s: r) [ fc(s) dom fo(o) ]
!! Idea:
!! for writing, the object dominates the subject;
!! for reading, the subject dominates the object
Rules of transformations
!! ρ: R × V → D × V
!! Takes a state and a request, returns a decision and a
(possibly new) state
!! Rule ρ is security-preserving iff it satisfies the
!! ssc-preserving property
!! *-property
!! ds-property
Secure (Basic Security
Theorem)
!! A system is secure iff
!! it starts in a secure state
!! all rules ρ satisfy the
!! simple security condition
!! *-property
!! discretionary security property
!! ...and only one rule is selected per request
Reconsider System Z
!! Initial state:
!! subject s, object o
!! C = {High, Low}, K = {All}
!! fc(s) = (Low, {All}), fo(o) = (High, {All})    [s cannot read o]
!! m[s, o] = { w }, b = { (s, o, w) }
!! s requests r access to o
!! After the request:
!! fc(s) = (Low, {All}), fo′(o) = (Low, {All})    [s can now read o]
!! m′[s, o] = { r, w }, (s, o, r) ∈ b′
McLean's Reformulation of
Secure Action
!! Given a state that satisfies the 3 properties,
!! if the action transforms the system into a state
that satisfies these properties,
!! and eliminates any accesses present in the
transformed state that would violate the property
in the initial state,
!! then the action is secure
[Diagram: Trojan horse example - Alice (Level = Secret) executes program "Goodies",
which contains a Trojan horse; it reads File F (Secret, ACL: A:r, A:w) and writes
the contents to File G (Unclassified, ACL: B:r,w, A:w), which Bob (Level =
Unclassified) can read]
Intuition for Integrity Levels
!! The higher the level, the more confidence
!! That a program will execute correctly
!! That data is accurate and/or reliable
!! Note relationship between integrity and
trustworthiness
!! Important point: integrity levels are not
security levels
Low-Water-Mark Policy
!! Idea:
!! when s reads o, i(s) = min(i(s), i(o));
!! s can only write objects at lower levels
!! Rules
1. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s)
2. If s ∈ S reads o ∈ O, then i′(s) = min(i(s), i(o)), where i′(s)
is the subject's integrity level after the read
3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1)
Reads contaminate the reader
Ring Policy
!! Idea: subject integrity levels static
!! Rules
1. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s)
2. Any subject can read any object
3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1)
!! Eliminates the indirect modification problem
It ignores contaminations
Biba's Model
(Strict Integrity Policy)
!! Similar to Bell-LaPadula model
1. s ∈ S can read o ∈ O iff i(s) ≤ i(o)
2. s ∈ S can write to o ∈ O iff i(o) ≤ i(s)
3. s1 ∈ S can execute s2 ∈ S iff i(s2) ≤ i(s1)
!! Add compartments and discretionary controls to
get full dual of Bell-LaPadula model
!! Information flow result holds
!! Different proof, though
LOCUS and Biba
!! Goal: prevent untrusted software from altering
data (and programs)
!! Approach: make levels of trust explicit
!! credibility rating based on estimate of software's
trustworthiness (0 untrusted, n highly trusted)
!! trusted file systems contain software with a single
credibility level
!! Process has risk level or highest credibility level at which
process can execute
!! Must use run-untrusted command to run software at lower
credibility level