
Formal Methods for Information Security
Morteza Amini
Winter 1393
Contents
1 Preliminaries
  1.1 Introduction to the Course
    1.1.1 Aim
    1.1.2 Evaluation Policy
    1.1.3 References
  1.2 The Concept of Formal Method
  1.3 Formal Methods
    1.3.1 Set, Relation, Partial-Order
    1.3.2 Logics
2 Formal Methods for Security Modeling
  2.1 Discretionary Security Models
    2.1.1 Lampson’s Model (1971)
    2.1.2 HRU Model (1976)
    2.1.3 Safety
  2.2 Mandatory Security Models
    2.2.1 BLP Model (1976)
    2.2.2 Denning’s Lattice Model of Secure Information Flow (1976)
  2.3 Information Flow Control
    2.3.1 Noninterference for Deterministic Systems (1986)
    2.3.2 Noninterference for Nondeterministic Systems
    2.3.3 Nondeducibility (1986)
    2.3.4 Generalized Noninterference (GNI)
    2.3.5 Restrictiveness
  2.4 Role Based Access Control Models
    2.4.1 Core RBAC (RBAC0)
    2.4.2 Hierarchical RBAC (RBAC1)
    2.4.3 Constrained RBAC (RBAC2)
    2.4.4 RBAC3 Model
  2.5 Logics for Access Control
    2.5.1 Abadi’s Calculus for Access Control
    2.5.2 A Calculus of Principals
    2.5.3 A Logic of Principals and Their Statements
3 Exercise Answers
Chapter 1
Preliminaries
1.1 Introduction to the Course
1.1.1 Aim
The diversity of computer security requirements has led to the introduction of different kinds of security models. In fact, each security model is an abstraction of a security policy. The importance of computer security motivates us to precisely specify and verify such security models using formal methods (such as set theory and different types of logics). In the first part of this course, different approaches to the formal modeling and specification of security and access control (authorization) models are introduced and surveyed. In the second part of the course, the formal specification and verification of security properties in security protocols using formal methods (especially different types of modal logics) are introduced. The introduction of BAN logic, as well as epistemic and belief logics, and their use in the verification of some famous security protocols are the main topics of this part. During this course, students learn how to use formal methods to formally and precisely specify their required security model or security protocol, and how to verify them using existing formal approaches and tools.
1.1.2 Evaluation Policy
1. Mid-term Exam (35%)
2. Final Exam (25%)
3. Theoretical & Practical Assignments (15%)
4. Research Project (20%)
5. Class Activities (5%)
1.1.3 References
• G. Bella, Formal Correctness of Security Protocols, Springer, 2007.
• P. Ryan, S. Schneider, and M.H. Goldsmith, Modeling and Analysis of
Security Protocols, Addison-Wesley, 2000.
• M. Bishop, Computer Security, Addison-Wesley, 2003.
• Related papers and technical reports such as
  – D. E. Bell and L. J. La Padula, Secure Computer System: Unified Exposition and Multics Interpretation, Technical Report ESD-TR-75-306, Mitre Corporation, Bedford, MA, March 1976.
  – M. Abadi, M. Burrows, B. Lampson, and G. Plotkin, A Calculus for Access Control in Distributed Systems, ACM Transactions on Programming Languages and Systems, Vol. 15, No. 4, pp. 706-734, 1993.
  – D.F. Ferraiolo, R. Sandhu, S. Gavrila, D.R. Kuhn, and R. Chandramouli, Proposed NIST Standard for Role-Based Access Control, ACM Transactions on Information and System Security (TISSEC), Vol. 4, No. 3, pp. 224-274, ACM Press, 2001.
  – D. Wijesekera and S. Jajodia, A Propositional Policy Algebra for Access Control, ACM Transactions on Information and System Security, Vol. 6, No. 2, pp. 286-325, ACM Press, 2003.
  – J.M. Rushby, Noninterference, Transitivity, and Channel Control Security Policies, Technical Report CSL-92-02, SRI International, 1992.
  – K.J. Biba, Integrity Considerations for Secure Computing Systems, Technical Report TR-3153, Mitre Corporation, Bedford, MA, April 1977.
  – D. E. Denning, A Lattice Model of Secure Information Flow, Communications of the ACM, Vol. 19, No. 5, pp. 236-243, 1976.
  – M. Burrows, M. Abadi, and R. Needham, A Logic of Authentication, ACM Transactions on Computer Systems, Vol. 8, pp. 18-36, 1990.
1.2 The Concept of Formal Method
Formal: The term formal relates to form or outward appearance.
Formal in Dictionaries
Definition of formal from Heritage:
• Relating to or involving outward form or structure, often in contrast to content or meaning. Being or relating to essential form or constitution: a formal principle.
• Following or being in accord with accepted or prescribed forms, conventions, or regulations: had little formal education; went to a formal party.
• Characterized by strict or meticulous observation of forms; methodical: very formal in their business transactions. Stiffly ceremonious: a formal greeting.
• Characterized by technical or polysyllabic vocabulary, complex sentence structure, and explicit transitions; not colloquial or informal: formal discourse.
• Having the outward appearance but lacking in substance: a formal requirement that is usually ignored.
Definition of formal from Oxford:
• Of or concerned with outward form or appearance as distinct from content.
• Done in accordance with convention or etiquette; suitable for or constituting an official or important occasion.
• Officially sanctioned or recognized.
Example: Turing machine, which models a computing system, contains abstract
concepts (constructing or specifying the outward appearance of a computing
system) such as the following
• States
• Alphabet
• Transitions
Method: A method is a means or manner of procedure, especially a regular and systematic way of accomplishing something.
It is also a set of principles for selecting and applying a number of construction
techniques and tools in order to construct an efficient artifact (here, a secure
system).
Example: axiomatic method (based on axioms in Mathematics) or empirical
method (based on experiments in Physics).
Methodology: is the study of and the knowledge about methods.
Abstract: means a thing considered apart from concrete existence. It does not exist in reality or real experience, and cannot be perceived through any of the senses. It is also thought of or stated without reference to a specific instance.
Model: A model is an abstraction of some physical phenomenon that accounts
for its known or inferred properties and may be used for further study of its
characteristics.
Formal Method: means a method which has a mathematical foundation, and thus employs techniques and tools based on mathematics and mathematical logic that support the modelling, specification, and reasoning about (verification of) hardware/software/... systems.
Examples of formal techniques and tools:
• Program logics (Hoare logic, dynamic logic)
• Temporal logics (CTL, LTL)
• Process algebras (CSP, PI-calculus)
• Abstract data types (CASL, Z)
• Development tools (B-tool, PVS, VSE)
• Theorem provers (Inka, Isabelle)
• Model checkers (Murphi, OFMC, Spin)
Security: is a property of a computer system by which unauthorized access to and modification of information and data, as well as unauthorized use of resources, are prevented.
Information Security: is CIA:
• Confidentiality: the nonoccurrence of unauthorized disclosure of information.
• Integrity: the nonoccurrence of unauthorized modification of programs or
data.
• Availability: the degree to which a system or component is operational
and accessible when required for use.
Other security properties can be seen as special cases of confidentiality, integrity, and availability, such as the following:
• Anonymity: A condition in which your true identity is not known; confidentiality of your identity.
• Privacy: You choose what you let other people know; confidentiality of
information you don’t want to share.
• Authenticity: Being who you claim to be; being original not false; integrity
of claimed identity.
• Non-repudiation: A message has been sent (or received) by a party and the party cannot deny having done so; integrity of the sender’s (or receiver’s) claimed identity and integrity of the proof that the message has been sent by the sender (or received by the receiver).
Note: Formal methods for confidentiality and integrity are rather mature; formal methods for availability are not yet. The focus of this course will be on confidentiality and integrity.
Security Policy: captures the security requirements of an enterprise or describes the steps that have to be taken to achieve security. It discriminates between the authorized and the unauthorized in a secure system.
Security Model: is an abstraction of a security policy. It identifies the relations among the entities (such as subjects and objects) of a system from a security point of view.
Security mechanisms and security models are not the same thing.
Examples of security mechanism:
• Login procedure
• Firewalls
• Access control systems
Examples of security models:
• The access matrix model
• The BLP model
• The RBAC model
What does formal approach mean?
A formal approach to security is the employment of a formal method in analyzing
the security of a given computing system or constructing a secure one.
Note that Computing System = Hardware + Software.
Formal methods can be applied at different levels of abstraction and during different development phases.
Objective of Using Formal Method for Security: Clarifying requirements
and analyzing systems such that security incidents are prevented (or at least
identified).
Three Steps in Using Formal Methods for Security:
1. System Specification: Abstraction and modelling with a well-defined syntactic and semantic structure. It documents how the system operates or should operate.
2. Requirement Specification: Security modelling (e.g., BLP). It documents the security requirements in an unambiguous way.
3. Verification: Validates the system w.r.t. its requirements and can be formally done in different ways, including:
• model checking (by searching for the satisfiability of a given property in the possible models)
• theorem proving (by inferring a given property using syntactic inference rules in a proof theory)
Applying formal methods does not mean that all three steps must be performed. E.g., one may decide to only model the behavior and the requirements of the system without any verification.
It is also possible to apply formal methods only to a particularly critical part of the system rather than to the whole system.
Advantages and Disadvantages of Formal Methods:
Some advantages are:
• clean foundation,
• abstraction; separation of policies from implementation mechanisms,
• preciseness,
• verifiability.
Some disadvantages are:
• difficulty in specification and verification (especially for complicated and
big systems),
• the need for specialists in this field.
1.3 Formal Methods
1.3.1 Set, Relation, Partial-Order
Set theory is the branch of mathematics that studies sets, which are collections of objects. In set theory, objects are abstract and can be of any type. The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. After the discovery of paradoxes in naive set theory, numerous axiom systems were proposed in the early twentieth century, of which the Zermelo-Fraenkel axioms together with the axiom of choice are the best known (the collection is named ZFC set theory).
The formalism we consider in this course is based on ZFC set theory.
Basic Concepts of Sets
Sets: A, B, C, ...
Members: a, b, c, ...
Membership: ∈ (a ∈ A means a is a member of set A)
Set Inclusion: ⊆ (A ⊆ B means for all a ∈ A we have a ∈ B)
Union: ∪ (A ∪ B is the set of all objects that are a member of A or B)
Intersection: ∩ (A ∩ B is the set of all objects that are members of both A
and B)
Set Difference: ∖ (A ∖ B is the set of all members of A that are not members
of B)
Cartesian Product: × (A×B is the set whose members are all possible ordered
pairs ⟨a, b⟩ where a is a member of A and b is a member of B)
Power Set: P() (P(A) is the set whose members are all possible subsets of A)
Empty Set: ∅ (∅ is the unique set containing no elements and also denoted
by {})
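These operations map directly onto Python’s built-in set type; the following is a small illustrative sketch (the sets A and B and the helper power_set are chosen here for illustration, not taken from the notes):

```python
from itertools import product

A = {1, 2, 3}
B = {3, 4}

union = A | B                   # A ∪ B
intersection = A & B            # A ∩ B
difference = A - B              # A ∖ B
cartesian = set(product(A, B))  # A × B as a set of ordered pairs

def power_set(s):
    """P(s): the set of all subsets of s, represented as frozensets."""
    subsets = [frozenset()]
    for x in s:
        subsets += [t | {x} for t in subsets]
    return set(subsets)

# |P(A)| = 2^|A|
assert len(power_set(A)) == 2 ** len(A)
```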
Basic Concepts of Relations
Relation: A k-ary relation over the nonempty sets X1 , X2 , ... Xk is a subset
of the cartesian product X1 × X2 × ... × Xk . For example, a binary relation R
can be defined as a subset of A × B.
Each member of a k-ary relation is k-tuple like ⟨x1 , x2 , ..., xk ⟩ ∈ R where x1 ∈ X1 ,
x2 ∈ X2 , ..., xk ∈ Xk .
Function: A binary relation f is a function from X to Y (denoted by f ∶ X → Y )
if for every x ∈ X there is exactly one element y ∈ Y such that the ordered pair
⟨x, y⟩ is contained in the subset defining the function.
There are different types of functions, including injective functions, surjective functions, bijective functions, identity functions, constant functions, and invertible functions.
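For finite sets these definitions can be checked mechanically; a Python sketch (the relation f and the checker functions are illustrative, not from the notes):

```python
def is_function(rel, X, Y):
    # rel ⊆ X × Y is a function X → Y iff every x ∈ X occurs
    # in exactly one ordered pair ⟨x, y⟩ of rel.
    return all(sum(1 for (a, _) in rel if a == x) == 1 for x in X)

def is_injective(rel):
    # injective: no two pairs share the same second component
    images = [b for (_, b) in rel]
    return len(images) == len(set(images))

def is_surjective(rel, Y):
    # surjective: every y ∈ Y is the image of some x
    return {b for (_, b) in rel} == set(Y)

f = {(1, 'a'), (2, 'b'), (3, 'a')}   # a function, surjective but not injective
assert is_function(f, {1, 2, 3}, {'a', 'b'})
assert not is_injective(f)
assert is_surjective(f, {'a', 'b'})
```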
Partial Order: A partial order, which is denoted by (P, ≤), is a binary relation
≤ over a set P which is reflexive, antisymmetric, and transitive, i.e., for all a, b,
and c in P , we have that:
• a ≤ a (reflexivity),
• if a ≤ b and b ≤ a then a = b (antisymmetry),
• if a ≤ b and b ≤ c then a ≤ c (transitivity).
Total Order: A total order, which is denoted by (P, ≤), is a binary relation ≤ over a set P which is antisymmetric, transitive, and total, i.e., for all a, b, and c in P , we have that:
• if a ≤ b and b ≤ a then a = b (antisymmetry),
• if a ≤ b and b ≤ c then a ≤ c (transitivity),
• a ≤ b or b ≤ a (totality).
Totality implies reflexivity, thus a total order is also a partial order. Moreover, every two elements of P are comparable under a total order.
Lattice: A lattice, which is denoted by (L, ≤), is a partially ordered set in which any two elements have a supremum (also called a least upper bound or join) and an infimum (also called a greatest lower bound or meet).
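On a finite example the partial-order axioms, and the existence of joins, can be verified exhaustively; a sketch (the divisibility order on {1, 2, 3, 6} is an example chosen here, not from the notes):

```python
def is_partial_order(P, leq):
    """Check reflexivity, antisymmetry, and transitivity of leq on finite P."""
    reflexive = all(leq(a, a) for a in P)
    antisymmetric = all(a == b for a in P for b in P if leq(a, b) and leq(b, a))
    transitive = all(leq(a, c) for a in P for b in P for c in P
                     if leq(a, b) and leq(b, c))
    return reflexive and antisymmetric and transitive

def join(P, leq, a, b):
    """Least upper bound (supremum) of a and b in (P, leq), or None."""
    upper = [u for u in P if leq(a, u) and leq(b, u)]
    least = [u for u in upper if all(leq(u, v) for v in upper)]
    return least[0] if least else None

# Divisibility on {1, 2, 3, 6}: the join is the least common multiple.
P = [1, 2, 3, 6]
divides = lambda a, b: b % a == 0
assert is_partial_order(P, divides)
assert join(P, divides, 2, 3) == 6
```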
Exercise 1: Let (L, ⪯) be a lattice and (T, ≤) be a total order. Is (L × T, ⊑),
where ⊑ is defined as follows, a lattice?
⟨a, b⟩ ⊑ ⟨c, d⟩ ⇔ (a ⪯ c) ∧ (b ≤ d)
1.3.2 Logics
Logic refers to the study of modes of reasoning.
Each logical framework may contain:
• Syntax: containing the alphabets and sentences (i.e., formulae) of a logical
language.
• Semantics (Model Theory): containing the interpretation or meaning of the symbols and formulae defined in the syntax of a logical language. Each interpretation is called a model, which describes a possible world.
• Proof Theory: containing a set of axioms and inference rules enabling
inference over a given set of formulae.
There are different types of logics:
• classical logics: which are bi-valued logics without any modal operator,
such as propositional logic and predicate logic.
• non-classical logics: such as different types of modal logics (deontic logic,
epistemic logic, belief logic, ...), fuzzy logic, multi-valued logic, and default
logic.
Note: Modal logics are the most interesting of these for use in security specification and verification.
Propositional Logic
A propositional calculus or logic is a formal system in which formulae representing propositions can be formed by combining atomic propositions using logical
connectives, and in which a system of formal proof rules allows certain formulae
to be established as theorems.
Syntax
Formula:
• Each proposition is a formula, and ⊥ is also a formula.
• If A and B are formulae, then ¬A, A ∧ B, A ∨ B, and A → B are formulae.
Semantics
A model in propositional logic is an interpretation function.
We define an interpretation function I for atomic propositions as
I ∶ AtomicP ropositions → {0, 1}
and extend it for other formulae as follows:
• I(A ∧ B) = 1 iff I(A) = 1 and I(B) = 1
• I(A ∨ B) = 1 iff I(A) = 1 or I(B) = 1 (or both hold)
• I(A → B) = 1 iff if I(A) = 1 then I(B) = 1
• I(¬A) = 1 iff I(A) = 0
• I(⊥) = 0
Truth: A formula A is true in model I if and only if I(A) = 1.
Some definitions:
• I is a model of A iff I(A) = 1; this is denoted by I ⊧ A.
• If Γ is a set of formulae, then I ⊧ Γ iff for all A ∈ Γ, we have I ⊧ A.
• We say A is inferred from Γ (denoted by Γ ⊧ A) iff for every model I, if
I ⊧ Γ, then I ⊧ A.
• If Γ is empty (i.e., ⊧ A), then A is a tautology. In other words, for every
model I, we have I ⊧ A.
The proof procedure in propositional logic is decidable (i.e., we can build the truth table for a given formula).
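The truth-table procedure amounts to enumerating every interpretation I and evaluating the formula in each; a minimal sketch (formulas are encoded here as Python functions of an interpretation, an encoding chosen for illustration):

```python
from itertools import product

def is_tautology(formula, atoms):
    """⊧ A iff I(A) = 1 for every interpretation I : atoms → {0, 1}."""
    return all(formula(dict(zip(atoms, values)))
               for values in product([0, 1], repeat=len(atoms)))

# I(A → B) = 1 iff (if I(A) = 1 then I(B) = 1).
implies = lambda a, b: 0 if (a == 1 and b == 0) else 1

f1 = lambda I: implies(I['A'], implies(I['B'], I['A']))  # A → (B → A)
f2 = lambda I: implies(I['A'], I['B'])                   # A → B

assert is_tautology(f1, ['A', 'B'])      # a tautology
assert not is_tautology(f2, ['A', 'B'])  # fails when I(A) = 1, I(B) = 0
```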
A proof theory or a proof procedure should be
• sound: each provable formula is a tautology (if ⊢ A then ⊧ A).
• complete: each tautology is provable (if ⊧ A then ⊢ A).
First-Order Logic
Syntax
Term: Each variable and constant is a term, and if t1 , ..., tn are terms and f is an n-ary function symbol, then f (t1 , ..., tn ) is a term.
Formula:
• Each formula defined in propositional logic is a formula in FOL.
• If t1 , ..., tn are terms and Pⁿ is an n-ary predicate, then Pⁿ(t1 , ..., tn ) is a formula.
• If A is a formula, then ∀x, A and ∃x, A are formulae.
Semantics
A model in FOL is denoted by M = ⟨∆, I⟩.
∆ is the domain (set of elements, objects, things we want to describe or reason
about).
I is an interpretation function which is defined as follows:
• I(a) = dᵢ ∈ ∆ (an individual element of the domain, for a constant a)
• I(x) ∈ ∆ (any individual element of the domain, for a variable x)
• I(fⁿ) ∶ ∆ × ... × ∆ → ∆ (an n-ary function on the domain)
• I(Pⁿ) ⊆ ∆ × ... × ∆ (a set of n-tuples)
• I(P⁰) ∈ {0, 1}
Truth:
• M ⊧ Pᵢ⁰ iff I(Pᵢ⁰) = 1.
• M ⊧ Pⱼⁿ(t1 , ..., tn ) iff ⟨I(t1 ), ..., I(tn )⟩ ∈ I(Pⱼⁿ).
• M ⊧ ∀x, P (x) iff for every element d of the domain ∆, M ⊧ P [x∣d] (where x is substituted by d).
• M ⊧ ∃x, P (x) iff there is at least one element d of the domain ∆ such that M ⊧ P [x∣d] (where x is substituted by d).
Example: ∆ = {d1 , d2 , ce441, ce971} (d1 and d2 stand for two individuals)
I(Ahmadi) = d1
I(Bahmani) = d2
I(CE441) = ce441
I(CE971) = ce971
I(Lecturer) = {d1 , d2 }
I(Course) = {ce441, ce971}
I(Student) = ∅
I(Teaches) = {⟨d1 , ce441⟩, ⟨d1 , ce971⟩, ⟨d2 , ce971⟩}
By the above interpretation the following relations hold:
M ⊧ Lecturer(Ahmadi), M ⊧ Lecturer(Bahmani)
M ⊧ Course(CE441), M ⊧ Course(CE971)
M ⊧ Teaches(Ahmadi, CE441), M ⊧ Teaches(Bahmani, CE971)
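The example model is finite, so the truth conditions reduce to direct membership tests; a sketch (the two individuals are written here as d1 and d2, and the dictionary encoding is chosen for illustration):

```python
# A finite FOL model M = ⟨∆, I⟩ for the example above.
domain = {'d1', 'd2', 'ce441', 'ce971'}
I = {
    'Ahmadi': 'd1', 'Bahmani': 'd2',
    'CE441': 'ce441', 'CE971': 'ce971',
    'Lecturer': {'d1', 'd2'},
    'Course': {'ce441', 'ce971'},
    'Student': set(),
    'Teaches': {('d1', 'ce441'), ('d1', 'ce971'), ('d2', 'ce971')},
}

def holds(pred, *terms):
    """M ⊧ P(t1, ..., tn) iff ⟨I(t1), ..., I(tn)⟩ ∈ I(P)."""
    args = tuple(I[t] for t in terms)
    return (args[0] if len(args) == 1 else args) in I[pred]

assert holds('Lecturer', 'Ahmadi')
assert holds('Teaches', 'Bahmani', 'CE971')
assert not holds('Student', 'Ahmadi')
```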
Decidability: First-order logic is undecidable in general; more precisely, it is semi-decidable. A logical system is semi-decidable if there is an effective method for generating theorems (and only theorems) such that every theorem will eventually be generated. This differs from decidability because in a semi-decidable system there may be no effective procedure for checking that a formula is not a theorem.
Decidable Fragments of FOL
• Two-Variable FOL: There are just two variables and only monadic and binary predicates, with formulae like ∃y, (∀x, P (x, y) ∧ ∃x, Q(x, y)).
• Guarded Fragment of FOL: All quantifiers are relativized (guarded) by atomic formulae, in the form ∃y(α(x, y) ∧ ψ(x, y)) or ∀y(α(x, y) → ψ(x, y)), where α is atomic, ψ is in GF, and free(ψ) ⊆ free(α) = {x, y}.
• Horn Clauses of FOL: a subset of the sentences representable in FOL, of the form P1 (x) ∧ P2 (x) ∧ ... ∧ Pn (x) → Q(x).
Modal Logics
A modal is an expression (like “necessarily” or “possibly”) that is used to qualify the truth of a judgement. Modal logic is, strictly speaking, the study of the deductive behavior of the expressions “it is necessary that” (denoted by ◻p) and “it is possible that” (denoted by ◇p). However, the term modal logic may be used more broadly for a family of related systems.
There are different types of modal logics such as:
• Epistemic Logic
• Belief Logic
• Deontic Logic
• Temporal Logic
More details on different types of modal logics will be presented later in this
course.
Propositional Modal Logic: the most famous type of modal logic.
Syntax
Formula:
• Each formula defined in propositional logic is a formula in PML.
• If A is a formula in PML, then ◻A is a formula.
• If A is a formula in PML, then ◇A is a formula.
Semantics
We usually use Kripke’s semantics for modal logics. A Kripke model is denoted
by M = ⟨W, R, I⟩, where
• W is a set of possible worlds.
• R ⊆ W × W is a relation between the possible worlds (the relation has
different meanings in different types of modal logics and hence has different
properties in them such as seriality, transitivity, and reflexivity).
• I ∶ P ropositions → P(W ) is an interpretation function that maps each
proposition to a set of possible worlds where the proposition holds (is
true).
Truth:
• ⊧ᴹ_α p (p is a proposition) iff α ∈ I(p).
• ⊧ᴹ_α ◻A iff in all worlds β such that ⟨α, β⟩ ∈ R, we have ⊧ᴹ_β A.
• ⊧ᴹ_α ◇A iff there exists a possible world β such that ⟨α, β⟩ ∈ R and ⊧ᴹ_β A.
Propositional modal logic is decidable.
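On a finite Kripke model the truth conditions for ◻ and ◇ reduce to quantification over R-successors; a small sketch (the three-world model is invented here for illustration):

```python
def box(M, world, A):
    """⊧_w ◻A iff A holds in every world reachable from w via R."""
    W, R, I = M
    return all(A(v) for (u, v) in R if u == world)

def diamond(M, world, A):
    """⊧_w ◇A iff A holds in some world reachable from w via R."""
    W, R, I = M
    return any(A(v) for (u, v) in R if u == world)

W = {'w1', 'w2', 'w3'}
R = {('w1', 'w2'), ('w1', 'w3'), ('w2', 'w3')}
I = {'p': {'w2', 'w3'}}          # I maps p to the set of worlds where p holds
p = lambda w: w in I['p']
M = (W, R, I)

assert box(M, 'w1', p)           # every successor of w1 satisfies p
assert diamond(M, 'w2', p)       # some successor of w2 satisfies p
assert box(M, 'w3', p)           # vacuously true: w3 has no successors
assert not diamond(M, 'w3', p)   # ...so no successor satisfies p either
```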
Chapter 2
Formal Methods for Security Modeling
2.1 Discretionary Security Models
In the Orange Book (the book on trusted computer system evaluation criteria – TCSEC, 1985), two types of access control are defined.
• DAC (Discretionary Access Control): is a means of restricting access to
objects based on the identity of subjects and/or groups to which they
belong. The controls are discretionary in the sense that a subject with a
certain access permission is capable of passing that permission (perhaps
indirectly) on to any other subject (unless restricted by mandatory access
control).
[For commercial and non-governmental purposes, and based on the need-to-know principle.]
• MAC (Mandatory Access Control): is a means of restricting access to objects based on the sensitivity (as represented by a label) of the information
contained in the objects and the formal authorization (i.e., clearance) of
subjects to access information of such sensitivity.
[For military or governmental purpose.]
In further classifications of access control systems and models, other types such
as role-based access control and attribute-based access control were introduced.
In this part of the course, we concentrate on some important DAC models and the safety problem in these models.
2.1.1 Lampson’s Model (1971)
Reference Paper: Butler W. Lampson, “Protection”, in Proceedings of the 5th
Princeton Conference on Information Sciences and Systems, p. 437, Princeton,
1971.
For the first time, Lampson defined protection as follows.
Protection: is a general term for all the mechanisms that control the access of
a program to other things in the system.
Example: samples of protection
• supervisor/user mode
• memory relocation
• access control by user to file directory
The foundation of any protection system is the idea of different protection environments or contexts. Depending on the context in which a process finds itself,
it has certain powers. In Lampson’s model the following terms are equivalent:
domain / protection context / environment / state or sphere / ring / capability list / subject.
The major components of Lampson’s object system form a triple ⟨X, D, A⟩ where:
• X is a set of objects that are the things in the system which have to be
protected (e.g., files, processes, segments, terminals).
• D: is a set of domains (subjects) that are the entities that have access to objects. A subject may be the owner of an object.
• A: is an access matrix that determines access of subjects to objects.
In access matrix A, rows are labeled by domain names and columns by object
names. Each element Ai,j consists of strings called access attributes (such as
read, write, owner, ...) that specifies the access which domain i has to object
j. Attached to each attribute is a bit called the copy flag which controls the
transfer of access in a way described in the specified rules below.
Note: If we look at X or D, these are just sets; it is by adding semantics that we specify that X is a set of objects, etc. Thus, generally, the formal specification must be accompanied by an informal specification of the symbols, which gives meaning (soul) to the formal specification.
[Figure content: an access matrix with rows Domain1, Domain2, and Domain3, and columns Domain1, Domain2, Domain3, File1, File2, and Process1; cells contain access attributes such as *owner, control, *call, *read, *write, read, write, and wakeup, where a leading * indicates that the copy flag is set.]
Figure 2.1: Portion of an access matrix in Lampson’s model.
Note– How can we specify the access matrix in Lampson’s model more formally? Given a set of rights or access attributes R, it can be defined as a function A ∶ D × X → P(R). Thus, A maps each tuple ⟨d, x⟩ to a subset of access rights.
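This functional view is easy to prototype; a sketch (the domain, object, and attribute names are invented here, and only the precondition of rule (b) is illustrated, not the full rule set):

```python
from collections import defaultdict

# A : D × X → P(R), stored sparsely; absent cells denote the empty set.
A = defaultdict(set)
A[('domain1', 'file1')] = {'*owner', '*read', '*write'}
A[('domain1', 'domain2')] = {'control'}

def may_copy(d, x, attr):
    """Rule (b) precondition: d holds attr for x with the copy flag
    (written here as a leading '*') set."""
    return '*' + attr in A[(d, x)]

# domain1 may copy 'write' to A[domain2, file1] (here without the flag).
if may_copy('domain1', 'file1', 'write'):
    A[('domain2', 'file1')].add('write')

assert 'write' in A[('domain2', 'file1')]
```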
Rules:
• Rule (a): d can remove access attributes from Ad′ ,x if it has control access
to d′ . Example: domain1 can remove attributes from rows 1 and 2.
• Rule (b): d can copy to Ad′,x any access attribute it has for x which has the copy flag set, and can say whether the copied attributes shall have the copy flag set or not. Example: domain1 can copy ‘write’ to A2,file1.
• Rule (c): d can add any access attribute to Ad′,x , with or without the copy flag, if it has owner access to x. Example: domain3 can add ‘write’ to A2,file2.
• Rule (d): d can remove access attributes from Ad′ ,x if d has owner access
to x, provided d′ does not have ‘protected’ access to x. The ‘protected’
restriction allows one owner to defend his access from other owners. Its
most important application is to prevent a program being debugged from
taking away the debugger’s access.
In the above rules, there are some commands such as add, copy, remove which
can be defined precisely and formally. Each command has some preconditions
and has some effects on the access matrix as a result.
Exercise 2: Define add, copy, and remove formally in the way stated above.
In fact, the above rules specify a reference monitor. Now, we should verify our required properties. One such requirement concerns the safety problem. It has been proved that the safety problem in the access matrix model is undecidable.
2.1.2 HRU Model (1976)
Reference Paper: Michael A. Harrison, Walter L. Ruzzo, and Jeffrey D. Ullman, “Protection in Operating Systems”, Communications of the ACM, 19 (8), pp. 461–471, 1976.
HRU is a general model of protection mechanisms in computing systems, which was proposed for reasoning about the safety problem.
A Formal Model of Protection Systems
Definition– A protection system consists of
1. R as a finite set of generic rights, and
2. C as a finite set of commands of the form:
command α(X1 , ..., Xk )
if r1 in (Xs1 , Xo1 ) and
r2 in (Xs2 , Xo2 ) and
...
rm in (Xsm , Xom )
then
op1
op2
...
opn
end
or, if m is zero, simply
command α(X1 , ..., Xk )
op1
...
opn
end
Here, α is a name, and X1 , ..., Xk are formal parameters. Each opi is one of the
primitive operations:
enter r into (Xs , Xo )
delete r from (Xs , Xo )
create subject Xs
create object Xo
destroy subject Xs
destroy object Xo .
Also, r, r1 , ..., rm are generic rights and s, s1 , ..., sm and o, o1 , ..., om are integers
between 1 and k.
We call the predicate following the “if” the conditions of α, and the sequence of operations op1 , ..., opn the body of α.
Figure 2.2: HRU access matrix.
Definition– A configuration of a protection system is a triple (S, O, P ), where S
is the set of current subjects, O is the set of current objects, S ⊆ O, and P is an
access matrix, with a row for every subject in S and a column for every object
in O. P [s, o] is a subset of R, the generic rights. P [s, o] gives the rights to
object o possessed by subject s.
Example: R = {own, read, write, execute}
1. A process creates a new file.
command CREATE (process, file)
create object file
enter own into (process, file)
end
2. The owner of a file may confer any right to that file, other than own, on any subject (including the owner himself).
command CONFERr (owner, friend, file)
if own in (owner, file)
then enter r into (friend, file)
end
[where r ∈ {read, write, execute}]
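These two commands can be prototyped directly as transformations on a configuration (S, O, P); a sketch (the tuple-and-dictionary encoding and the subject names are chosen here, not taken from the HRU paper):

```python
def create(config, process, file):
    """CREATE: create object file, then enter 'own' into (process, file)."""
    S, O, P = config
    assert file not in O            # HRU: a created object must be a new symbol
    P = dict(P)
    P[(process, file)] = P.get((process, file), set()) | {'own'}
    return (S, O | {file}, P)

def confer(config, r, owner, friend, file):
    """CONFER_r: if 'own' in (owner, file) then enter r into (friend, file)."""
    S, O, P = config
    if 'own' not in P.get((owner, file), set()):
        return config               # condition not satisfied: unchanged
    P = dict(P)
    P[(friend, file)] = P.get((friend, file), set()) | {r}
    return (S, O, P)

Q0 = ({'p1', 'p2'}, {'p1', 'p2'}, {})
Q1 = create(Q0, 'p1', 'f')
Q2 = confer(Q1, 'read', 'p1', 'p2', 'f')
assert 'read' in Q2[2][('p2', 'f')]
assert confer(Q1, 'read', 'p2', 'p1', 'f') == Q1   # p2 does not own f
```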
Exercise 3: Write Lampson’s rules in the form of HRU commands.
Definition– Let (S, O, P ) and (S ′ , O′ , P ′ ) be configurations of a protection system, and let op be a primitive operation. We say that (S, O, P ) ⇒op (S ′ , O′ , P ′ )
if either:
1. op = enter r into (s, o) and S = S ′ , O = O′ , s ∈ S, o ∈ O, P ′ [s1 , o1 ] = P [s1 , o1 ] if (s1 , o1 ) ≠ (s, o), and P ′ [s, o] = P [s, o] ∪ {r}.
2. op = delete r from (s, o) and S = S ′ , O = O′ , s ∈ S, o ∈ O, P ′ [s1 , o1 ] = P [s1 , o1 ] if (s1 , o1 ) ≠ (s, o), and P ′ [s, o] = P [s, o] − {r}.
3. op = create subject s′ , where s′ is a new symbol not in O, S ′ = S ∪ {s′ }, O′ = O ∪ {s′ }, P ′ [s, o] = P [s, o] for all (s, o) ∈ S × O, P ′ [s′ , o] = ∅ for all o ∈ O′ , and P ′ [s, s′ ] = ∅ for all s ∈ S ′ .
4. op = create object o′ , where o′ is a new symbol not in O, S ′ = S, O′ =
O ∪ {o′ }, P ′ [s, o] = P [s, o] for all (s, o) in S × O and P ′ [s, o′ ] = ∅ for all
s ∈ S.
5. op = destroy subject s′ , where s′ ∈ S, S ′ = S − {s′ }, O′ = O − {s′ }, and
P ′ [s, o] = P [s, o] for all (s, o) ∈ S ′ × O′ .
6. op=destroy object o′ where o′ ∈ O − S, S ′ = S, O′ = O − {o′ }, and P ′ [s, o] =
P [s, o] for all (s, o) ∈ S ′ × O′ .
Definition– Let Q = (S, O, P ) be a configuration of a protection system containing:
command α(X1 , ..., Xk )
if r1 in (Xs1 , Xo1 ) and
...
rm in (Xsm , Xom )
then
op1 , ..., opn
end
Then, we say Q ⊢α(x1 ,...,xk ) Q′ where Q′ is a configuration defined as:
1. If α’s conditions are not satisfied, i.e., if there is some 1 ≤ i ≤ m such that
ri is not in P [xsi , xoi ], then Q = Q′ .
2. Otherwise, i.e., if for all 1 ≤ i ≤ m, ri ∈ P [xsi , xoi ], then there exist
configurations Q0 , Q1 , ..., Qn such that:
Q = Q0 ⇒op∗1 Q1 ⇒op∗2 ... ⇒op∗n Qn = Q′
(op∗ denotes the primitive op with actual parameters x1 , x2 , ..., xk )
Q ⊢α Q′ if there exist parameters x1 , ..., xk such that Q ⊢α(x1 ,...,xk ) Q′ .
Q ⊢ Q′ if there exist a command α such that Q ⊢α Q′ .
Q ⊢∗ Q′ where ⊢∗ is the reflexive and transitive closure of ⊢.
Example:
command α(X, Y, Z)
enter r1 into (X, X)
destroy subject X
enter r2 into (Y, Z)
end
There can never be a pair of different configurations Q and Q′ such that Q ⊢α(x,x,z)
Q′ .
2.1.3 Safety
Definition– Given a protection system, we say command α(X1 , ..., Xk ) leaks
generic right r from configuration Q = (S, O, P ) if α, when run on Q, can execute
a primitive operation which enters r into a cell of the access matrix which did
not previously contain r.
More formally, there is some assignment of actual parameters x1 , ..., xk such that
1. α(x1 , ..., xk ) has its conditions satisfied in Q, i.e., for each clause “r in (Xi , Xj )” in α’s conditions we have r ∈ P [xi , xj ], and
2. if α’s body is op1 , ..., opn , then there exist an m, 1 ≤ m ≤ n, and configurations Q = Q0 , Q1 , ..., Qm−1 = (S ′ , O′ , P ′ ), and Qm = (S ″ , O″ , P ″ ), such that Q0 ⇒op∗1 Q1 ⇒op∗2 ... Qm−1 ⇒op∗m Qm , where op∗i denotes opi after substitution of x1 , ..., xk for X1 , ..., Xk , and moreover, there exist some s and o such that r ∉ P ′ [s, o] but r ∈ P ″ [s, o].
[Of course, opm must be enter r into (s, o).]
Definition– Given a particular protection system and generic right r, we say
that the initial configuration Q0 is unsafe for r (or leaks r) if there is a configuration Q and a command α such that
1. Q0 ⊢∗ Q, and
2. α leaks r from Q.
We say Q0 is safe for r if Q0 is not unsafe for r.
Safety Problem: Is a given protection system and initial configuration unsafe
for a given right r or not?
Note that “leaks” are not necessarily bad. Any interesting system will have commands which can enter some rights (i.e., be able to leak those rights). The term
assumes its usual negative significance only when applied to some configuration,
most likely modified to eliminate “reliable” subjects, and to some right which
we hope cannot be passed around.
The safety problem in general is undecidable, but there are special cases for which
we can show it is decidable whether a given right is potentially leaked from a
given initial configuration or not.
Definition– A protection system is mono-operational if each command’s interpretation (body) is a single primitive.
Theorem 1– There is an algorithm which decides whether or not a given mono-operational protection system and initial configuration is unsafe for a given
generic right r.
Proof: The proof hinges on two simple observations. First, commands can
test for the presence of rights, but not for the absence of rights or objects.
This allows delete and destroy commands to be removed from computations
leading to a leak. Second, a command can only identify objects by the rights in
their row and column of the access matrix. No mono-operational command can
both create an object and enter rights, so multiple creates can be removed from
computations, leaving the creation of only one subject. This allows the length
of the shortest “leaky” computation to be bounded.
Suppose (*) Q0 ⊢C1 Q1 ⊢C2 ... ⊢Cm Qm is a minimal-length computation reaching some configuration Qm for which there is a command α leaking r. Let
Qi = (Si , Oi , Pi ). Now we claim that every Ci , 2 ≤ i ≤ m, is an enter command,
and C1 is either an enter or a create subject command.
Suppose not, and let Cn be the last non-enter command in the sequence (*).
Then we could form a shorter computation
Q0 ⊢C1 Q1 ⊢ ... Qn−1 ⊢C′n+1 Q′n+1 ⊢ ... ⊢C′m Q′m
as follows.
(a) if Cn is a delete or destroy command, let Ci′ = Ci and Q′i = Qi plus the right,
subject or object which would have been deleted or destroyed by Cn . By the
first observation above, Ci cannot distinguish Qi−1 from Q′i−1 , so Q′i−1 ⊢Ci′ Q′i
holds. Likewise, α leaks r from Q′m since it did so from Qm .
(b) Suppose Cn is a create subject command and ∣Sn−1 ∣ ≥ 1, or Cn is a create
object command. Note that α leaks r from Qm by assumption, so α is an enter
command. Further, we must have ∣Sm ∣ ≥ 1 and ∣Sm ∣ = ∣Sm−1 ∣ = ... = ∣Sn ∣ ≥ 1
(Cm , ..., Cn+1 are enter commands by assumption). Thus ∣Sn−1 ∣ ≥ 1 even if Cn
is a create object command. Let s ∈ Sn−1 . Let o be the name of the object
created by Cn . Now we can let Ci′ = Ci with s replacing all occurrences of o,
and Q′i = Qi with s and o merged. For example, if o ∈ On − Sn we would have
Si′ = Si , Oi′ = Oi − {o}, Pi′ [x, y] = { Pi [x, y], if y ≠ s; Pi [x, s] ∪ Pi [x, o], if y = s }
Clearly, Pi [x, o] ⊆ Pi′ [x, s], so for any condition in Ci satisfied by o, the corresponding condition in Ci′ is satisfied by s. Likewise for the conditions of α.
Exercise 4: Define Q′i precisely when the command is create subject s′ .
(c) Otherwise, we have ∣Sn−1 ∣ = 0, Cn is a create subject command, and n ≥ 2.
The construction in this case is slightly different–the create subject command
cannot be deleted (subsequent “enters” would have no place to enter into).
However, the commands preceding Cn can be skipped (provided that the names
of objects created by them are replaced), giving
Q0 ⊢Cn Q′n ⊢C′n+1 Q′n+1 ⊢ ... ⊢C′m Q′m
where, if Sn = {s}, we have Ci′ is Ci with s replacing the names of all objects in
On−1 , and Q′i is Qi with s merged with all o ∈ On−1 .
Exercise 5: Define Q′i precisely in this case.
In each of these cases we have created a shorter “leaky” computation, contradicting the supposed minimality of (*). Note that no Ci enters a right r into
a cell of the access matrix already containing r, else we could get a shorter
sequence by deleting Ci . Thus we have an upper bound on m:
m ≤ g(∣S0 ∣ + 1)(∣O0 ∣ + 1) + 1
where g is the number of generic rights.
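The bound suggests a decision procedure: add one fresh subject to Q0 (one create suffices) and close the access matrix under all applicable enter commands, then check whether r can be entered into a cell not already containing it. The Python sketch below implements that idea under our own command encoding (conditions as (right, i, j) triples over parameter indices); it is an illustration, not the paper's algorithm:

```python
from itertools import product

# A mono-operational command here is (conditions, effect, arity), where
# conditions is a list of (right, i, j) over parameter indices and
# effect = (right, i, j) names the single "enter" primitive.

def cell(P, a, b):
    return P.setdefault((a, b), set())

def unsafe_for(subjects, objects, P0, commands, r):
    names = sorted(set(subjects) | set(objects) | {"s_new"})  # one create suffices
    P = {c: set(rs) for c, rs in P0.items()}
    changed = True
    while changed:                                  # monotone fixpoint: only enters
        changed = False
        for conds, (er, ei, ej), k in commands:
            for args in product(names, repeat=k):   # all parameter assignments
                if all(cr in cell(P, args[i], args[j]) for cr, i, j in conds):
                    tgt = cell(P, args[ei], args[ej])
                    if er not in tgt:
                        if er == r:
                            return True             # some command leaks r
                        tgt.add(er)
                        changed = True
    return False

# Toy system with one command: "if own in (X, Y) then enter r into (X, Y)".
cmds = [([("own", 0, 1)], ("r", 0, 1), 2)]
assert unsafe_for({"s1"}, {"s1", "o1"}, {("s1", "o1"): {"own"}}, cmds, "r")
assert not unsafe_for({"s1"}, {"s1", "o1"}, {}, cmds, "r")
```

The fixpoint terminates because rights are only ever added, and the matrix over the fixed object set is finite; this mirrors the bound on m above.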
Moreover, the safety problem for mono-operational systems is NP-hard. Given a graph and an integer k, produce a protection system whose initial access
matrix is the adjacency matrix of the graph and which has one command. This
command’s conditions test its k parameters to see if they form a k-clique, and
its body enters some right r somewhere. The matrix is unsafe for r in
this system if and only if the graph has a k-clique. The above is a polynomial
reduction of the known NP-complete clique problem to our problem, so our
problem is at best NP-complete.
Review– Each Turing machine T consists of a finite set of states K and a
finite set of tape symbols Γ, disjoint from K. One of the tape symbols is the blank B,
which initially appears on each cell of a tape which is infinite to the right only
(that is, the tape cells are numbered 1, 2, ..., i, ...). There is a tape head
which is always scanning (located at) some cell of the tape. The moves of T are
specified by a function δ from K × Γ to K × Γ × {L, R}.
For example, if δ(q, X) = (p, Y, R) for states p and q and tape symbols X and
Y , then should the Turing machine T find itself in state q, with its tape head
scanning a cell holding symbol X, then T enters state p, erases X and prints Y
on the tape cell scanned and moves its tape head one cell to the right.
Initially, T is in state q0 , the initial state, with its head at cell 1. Each tape cell
holds the blank. There is a particular state qf , known as the final state, and
it is a fact that it is undecidable whether started as above, an arbitrary Turing
machine T will eventually enter state qf (undecidability of halting problem).
Theorem 2– It is undecidable whether a given configuration of a given protection system is safe for a given generic right.
Proof: We shall show that safety is undecidable by reducing the halting problem
for Turing machines to the safety problem in protection systems. In other words,
we shall show that a protection system can simulate the behavior of an arbitrary
Turing machine, with leakage of a right corresponding to the Turing machine
entering a final state (a condition we know to be undecidable).

Figure 2.3: Representing a tape as an access matrix
The set of generic rights of our protection system will include the states and
tape symbols of the Turing machine. At any time, the Turing machine will have
some finite initial prefix of its tape cells, say 1, 2, ..., k, which it has ever scanned.
This situation will be represented by a sequence of k subjects, s1 , s2 , ..., sk , such
that si “owns” si+1 for 1 ≤ i < k. Thus, we use the ownership relation to order
subjects into a linear list representing the tape of the Turing machine. Subject si
represents cell i, and the fact that cell i now holds tape symbol X is represented
by giving si generic right X to itself. The fact that q is the current state and
that the tape head is scanning the j’th cell is represented by giving sj generic
right q to itself. Note that we have assumed the states distinct from the tape
symbols, so no confusion can result.
There is a special generic right end, which marks the last subject, sk . That is,
sk has generic right end to itself, indicating that we have not yet created the
subject sk+1 which sk is to own. The generic right own completes the set of
generic rights.
The moves of the Turing machine are reflected in commands as follows. First,
if δ(q, X) = (p, Y, L), then there is
command Cqx (s, s′ )
if
own in (s, s′ ) and
q in (s′ , s′ ) and
X in (s′ , s′ )
then
delete q from (s′ , s′ )
delete X from (s′ , s′ )
enter p into (s, s)
enter Y into (s′ , s′ )
end
If δ(q, X) = (p, Y, R),
command Cqx (s, s′ )
if
own in (s, s′ ) and
q in (s, s) and
X in (s, s)
then
delete q from (s, s)
delete X from (s, s)
enter p into (s′ , s′ )
enter Y into (s, s)
end
To handle the case where the Turing machine moves into new territory, there is
also
command Dqx (s, s′ )
if
end in (s, s) and
q in (s, s) and
X in (s, s)
then
delete q from (s, s)
delete X from (s, s)
create subject s′
enter B into (s′ , s′ )
enter p into (s′ , s′ )
enter Y into (s, s)
delete end from (s, s)
enter end into (s′ , s′ )
enter own into (s, s′ )
end
In each configuration of the protection system reachable from the initial configuration, there is at most one command applicable. This follows from the
fact that the Turing machine has at most one applicable move in any situation,
and the fact that Cqx and Dqx can never be simultaneously applicable. The
protection system must therefore exactly simulate the Turing machine.
If the Turing machine enters state qf , then the protection system can leak
generic right qf , otherwise, it is safe for qf . Since it is undecidable whether the
Turing machine enters qf , it must be undecidable whether the protection system
is safe for qf .
◻
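The encoding can be exercised with a tiny sketch. The data structures below (a dict P of access-matrix cells and a hard-coded one-rule δ) are our own illustration of a single Dqx move, not part of the proof:

```python
# One D_qX move in the protection-system encoding of a Turing machine.
# The diagonal cell of subject s_i holds cell i's tape symbol, possibly the
# current state, and possibly `end`; s_i holds `own` on s_{i+1}.

P = {("s1", "s1"): {"B", "q0", "end"}}      # one-cell tape: blank, state q0
delta = {("q0", "B"): ("qf", "X", "R")}     # a single TM rule: δ(q0, B)=(qf, X, R)

def D(P, s, s_new, q, X):
    """Simulate command D_qX: the head, in state q over symbol X at the
    rightmost cell s, moves right into the newly created cell s_new."""
    p, Y, move = delta[(q, X)]
    assert move == "R" and {q, X, "end"} <= P[(s, s)]   # conditions of D_qX
    P[(s, s)] -= {q, X, "end"}              # delete q, X, and end from (s, s)
    P[(s_new, s_new)] = {"B", p, "end"}     # create s'; enter B, p, end
    P[(s, s)].add(Y)                        # enter Y into (s, s)
    P[(s, s_new)] = {"own"}                 # enter own into (s, s')

D(P, "s1", "s2", "q0", "B")
assert "qf" in P[("s2", "s2")]              # qf was entered: the right "leaked"
```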
Theorem 3– The safety problem is decidable for protection systems without
create commands.
Theorem 4– The safety problem is decidable for protection systems that are
both monotonic and monoconditional.
A monotonic protection system is a system without destroy and delete commands.
A monoconditional system is a system with only one condition in the condition part
of each command.
Theorem 5– The safety problem for protection systems with a finite number
of subjects is decidable.
Concluding Remarks– Elisa Bertino says: “The results on the decidability
of the safety problem illustrate an important security principle, the principle of
economy of mechanisms:
• if one designs complex systems that can only be described by complex
models, it becomes difficult to find proofs of security;
• in the worst case (undecidability), there does not exist a universal algorithm that verifies security for all problem instances.”
2.2 Mandatory Security Models
2.2.1 BLP Model (1976)
Reference Paper: D. E. Bell and L. J. La Padula, “Secure Computer System:
Unified Exposition and Multics Interpretation”, Technical Report ESD-TR-75-306, Mitre Corporation, Bedford, MA, 1976.
The model has the ability to represent abstractly the elements of computer systems and of security that are relevant to a treatment of classified information
stored in a computer system.
A Narrative Description
Subjects (denoted Si individually and S collectively) that are active entities can
have access to objects (denoted Oi individually and O collectively) which are
passive entities. No restriction is made regarding entities that may be both
subjects and objects.
The modes of access in the model are called access attributes (denoted x individually and A collectively).
The two effects that an access can have on an object are
• the extraction of information (“observing” the object) and
• the insertion of information (“altering” the object).
There are thus four general types of access imaginable:
• no observation and no alteration (denoted e – execute);
• observation, but no alteration (denoted r – read);
• alteration, but no observation (denoted a – append);
• both observation and alteration (denoted w – write).
A system state is expressed as a set of four components z = (b, M, f, h) where:
• b ∈ B is the current access set; (subject, object, attribute) ∈ b denotes
that subject has current attribute access to object in the state.
• h ∈ H is a hierarchy (parent-child structure) imposed on objects. Only directed, rooted trees and isolated points are allowed as object hierarchies
(see Figure 2.4).
• M ∈ M is an access permission matrix. Mij ⊆ A, where A is the set of
access attributes.
• f ∈ F is a level function, the embodiment of security classifications in the
model.
A security level is a pair (classification, category set) where
– classification (or clearance) is one of a linearly ordered set such as unclassified, confidential, secret,
and top secret;
– category set is a set of categories such as Nuclear, NATO, and Crypto.
(class1, categoryset1) dominates (class2, categoryset2) ⇔ class1 ≥ class2
and categoryset1 ⊇ categoryset2.
The dominance ordering (denoted by Ž) is required to be a partial ordering.
The (maximum) security level of a subject Si is denoted formally by fS (Si )
and informally by level(Si ). Similarly, the security level of an object Oj
is denoted formally by fO (Oj ) and informally by level(Oj ). The current
security level of a subject Si is denoted by fC (Si ). Thus, f = (fS , fO , fC ) ∈
F.
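The dominance relation can be sketched directly in code; the classification ranks below are our own encoding, not the model's:

```python
# Security levels as (classification, category set) pairs under dominance.

RANK = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def dominates(l1, l2):
    """l1 dominates l2 iff class1 >= class2 and categoryset1 ⊇ categoryset2."""
    (c1, k1), (c2, k2) = l1, l2
    return RANK[c1] >= RANK[c2] and k1 >= k2            # >= on sets: superset

hi = ("secret", frozenset({"Nuclear", "NATO"}))
lo = ("confidential", frozenset({"NATO"}))
odd = ("top secret", frozenset({"Crypto"}))
assert dominates(hi, lo)
assert not dominates(hi, odd) and not dominates(odd, hi)  # incomparable pair
```

The last assertion shows why dominance is only a partial order: a higher classification with incomparable categories dominates in neither direction.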
We refer to inputs to the system as requests (Rk and R) and outputs as decisions
(Dm and D). The system is all sequences of (request, decision, state) triples
with some initial state (z0 ) which satisfy a relation W on successive states (see
Figure 2.5).
Figure 2.4: The desired object hierarchies in BLP model.

Figure 2.5: System specified in BLP model.
Security Definition
Security is defined by satisfying three properties in the BLP model.
1. Simple Security Property (ss-property):
The simple security property is satisfied if, whenever (subject, object, observe−attribute) is a current access, i.e., if subject observes (viz. r or w) object, then level(subject)
dominates level(object).
The expected interpretation of the model anticipates protection of information
containers rather than of the information itself. Hence a malicious program (an
interpretation of a subject) might pass classified information along by putting
it into an information container labeled at a lower level than the information
itself (Figure 2.6).
2. Star Property (*-Property)
Star property is satisfied if in any state, if a subject has simultaneous observe access to object−1 and alter access to object−2, then level(object−1)
is dominated by level(object−2).
Under the above restriction, the levels of all objects accessed by a given subject
are neatly ordered:
• level(a−accessed−object) dominates level(w−accessed−object);
• level(w−accessed−object−1) equals level(w−accessed−object−2); and
• level(w−accessed−object) dominates level(r−accessed−object).

Figure 2.6: Information flow showing the need for *-property.
Following the *-property, in any state, if (subject, object, attribute) ∈ b is a
current access, then:
• level(object) dominates current−level(subject) if attribute is a;
• level(object) equals current−level(subject) if attribute is w; and
• level(object) is dominated by current−level(subject) if attribute is r.
There are two important comments to be made about the *-property.
• First, it does not apply to trusted subjects: a trusted subject is one guaranteed not to consummate a security-breaching information transfer even
if it is possible.
• Second, it is important to remember that both ss-property and *-property
are to be enforced. Neither property by itself ensures the security we
desire.
3. Discretionary Security Property (ds-Property)
If (subject−i, object−j, attribute−x) is a current access (is in b), then
attribute−x is recorded in the (subject−i, object−j)-component of M (x ∈
Mij ).
Basic Security Theorem
This theorem states that security (as defined) can be guaranteed systemically
when each alteration to the current state does not itself cause a breach of security. Thus security can be guaranteed systemically if, whenever (subject, object, attribute)
is added to the current access set b,
1. level(subject) dominates level(object) if attribute involves observation
(to assure the ss-property);
2. current−level(subject) and level(object) have an appropriate dominance
relation (to assure the *-property); and
3. attribute is contained in the (subject, object) component of the access
permission matrix M (to assure the ds-property).
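The three checks can be collected into a single guard on a proposed new access. The level encoding and dictionary layout below are our assumptions, not the report's notation:

```python
# Guard combining the ss-, *-, and ds-property conditions for adding
# (subject, object, attribute) to the current access set b.

def dominates(l1, l2):
    return l1[0] >= l2[0] and l1[1] >= l2[1]   # (rank, frozenset of categories)

def may_add(sub, obj, attr, f_S, f_O, f_C, M):
    if attr in ("r", "w") and not dominates(f_S[sub], f_O[obj]):
        return False                        # ss-property: observation looks down
    if attr == "a" and not dominates(f_O[obj], f_C[sub]):
        return False                        # *-property: append writes upward
    if attr == "w" and f_O[obj] != f_C[sub]:
        return False                        # *-property: write stays at level
    if attr == "r" and not dominates(f_C[sub], f_O[obj]):
        return False                        # *-property: read looks down
    return attr in M.get((sub, obj), set()) # ds-property: matrix permits it

f_S = {"s": (2, frozenset())}               # maximum level of subject s
f_C = {"s": (1, frozenset())}               # current level of subject s
f_O = {"o": (1, frozenset()), "hi": (3, frozenset())}
M = {("s", "o"): {"r", "w"}, ("s", "hi"): {"r"}}
assert may_add("s", "o", "r", f_S, f_O, f_C, M)
assert not may_add("s", "hi", "r", f_S, f_O, f_C, M)   # ss-property blocks it
```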
The basic security theorem establishes the inductive nature of security in that
it shows that the preservation of security from one state to the next guarantees
total system security.
Thus, a framework for constructing general mechanisms within the model is a direct consequence of the basic security theorem. This framework relies on the “rule,” a
function for specifying a decision (an output) and a next-state for every state
and every request (an input):
(request, current−state) →rule (decision, next−state)
Formal Mathematical Model
The elements of the mathematical model are represented in the following. In
the following the notation AB denotes the set of all functions from B to A.
Elements of The Model
S = {S1 , S2 , ..., Sn } Set of subjects
O = {O1 , O2 , ..., Om } Set of objects
C = {C1 , C2 , ..., Cq }, C1 > C2 > ... > Cq Classifications: clearance level of a subject;
classification of an object
K = {K1 , K2 , ..., Kr } Categories: special access privileges
L = {L1 , L2 , ..., Lp } Security levels
Li = (Ci , Ki ) where Ci ∈ C and Ki ⊆ K
Ž Dominance relation on L, defined as follows:
Li Ž Lj iff Ci ≥ Cj and Ki ⊇ Kj
(L, Ž) is a partial order (the proof is straightforward)
A = {r, e, w, a} Access attributes [r: read-only, e: execute (no read, no write),
w write (read and write); a: append (write-only)]
RA = {g, r} Request elements [g: get, give; r: release, rescind]
S ′ ⊆ S Subjects subject to *-property.
ST = S − S ′ Trusted subjects: subjects not subject to *-property but trusted not
to violate security with respect to it.
R = ⋃ R(i) where
1≤i≤5
R(1) = RA × S × O × A requests for get and release access
R(2) = S × RA × S × O × A requests for give and rescind access
R(3) = RA × S × O × L requests for generation and reclassification of objects
R(4) = S × O requests for destruction of objects
R(5) = S × L requests for changing security level
D = {yes, no, error, ?} Decisions (Dm ∈ D)
T = {1, 2, ..., t, ...} Indices
F ⊆ LS × LO × LS Security vectors [fS : subject security level function; fO :
object security level function; fC : current security level function]. An element
f = (fS , fO , fC ) ∈ F iff for each Si ∈ S we have fS (Si ) Ž fC (Si )
X = RT Request sequences (x ∈ X)
Y = DT Decision sequences (y ∈ Y )
M = {M1 , M2 , ..., M2^(4nm) } Access matrices; an element of M, say Mk , is an
n × m matrix with entries from P(A); the (i, j)-entry of matrix Mk shows Si ’s
attributes relative to Oj ; the entry is denoted by Mij .
H ⊆ (P(O))O Hierarchies; a hierarchy is a forest possibly with stumps, i.e., a
hierarchy can be represented by a collection of rooted, directed trees and isolated
points. A hierarchy H ∈ H iff
(1) Oi ≠ Oj implies H(Oi ) ∩ H(Oj ) = ∅, and
(2) there is no subset {O1 , O2 , ..., Ow } ⊆ O such that Or+1 ∈ H(Or ) for all r,
1 ≤ r ≤ w, where Ow+1 = O1 (i.e., no cycles).
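The two hierarchy conditions can be checked mechanically; the sketch below represents H as a dict from each object to its set of immediate children (an encoding of our own):

```python
# Checking the hierarchy conditions: distinct objects have disjoint progeny,
# and following the child relation never returns to a visited object.

def is_hierarchy(H):
    children = list(H.values())
    disjoint = all(a.isdisjoint(b)                      # condition (1)
                   for i, a in enumerate(children) for b in children[i + 1:])

    def cyclic(o, seen):                                # condition (2): no cycles
        return o in seen or any(cyclic(c, seen | {o}) for c in H.get(o, set()))

    return disjoint and not any(cyclic(o, set()) for o in H)

tree = {"root": {"a", "b"}, "a": {"c"}, "b": set(), "c": set()}
loop = {"a": {"b"}, "b": {"a"}}
assert is_hierarchy(tree)
assert not is_hierarchy(loop)
```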
B = P(S × O × A) Current access set (b ∈ B)
V = B × M × F × H States (v ∈ V )
Z = V T State sequences; if z ∈ Z, then zt ∈ z is the t-th state in the state
sequence z.
Definition (System): Suppose that W ⊂ R×D×V ×V . The system Σ(R, D, W, z0 ) ⊂
X ×Y ×Z is defined by (x, y, z) ∈ Σ(R, D, W, z0 ) iff (xt , yt , zt , zt−1 ) ∈ W for each t
in T , where z0 is an initial state of the system, usually of the form (∅, M, f, H).
Notation– The following notation is defined.
b(S ∶ x, y, ..., z) = {O∣(S, O, x) ∈ b ∨ (S, O, y) ∈ b ∨ ... ∨ (S, O, z) ∈ b}
Simple Security Property: A state v = (b, M, f, H) satisfies the simple-security
property (ss-property) iff
S ∈ S ⇒ [(O ∈ b(S ∶ r, w)) ⇒ (fS (S) Ž fO (O))]
It is convenient also to define:
(S, O, x) ∈ b satisfies the simple security condition relative to f (SSC rel f ) iff
(i) x = e or a, or
(ii) x = r or w and fS (S) Ž fO (O)
Star-Property: Suppose S ′ is a subset of S. A state v = (b, M, f, H) satisfies the
*-property relative to S ′ iff
S ∈ S ′ ⇒ { (O ∈ b(S ∶ a)) ⇒ (fO (O) Ž fC (S)),
(O ∈ b(S ∶ w)) ⇒ (fO (O) = fC (S)),
(O ∈ b(S ∶ r)) ⇒ (fC (S) Ž fO (O)) }
An immediate consequence is: if v satisfies *-property rel S ′ and S ∈ S ′ then
[Oj ∈ b(S ∶ a) and Ok ∈ b(S ∶ r)] ⇒ fO (Oj ) Ž fO (Ok ).
Discretionary-Security Property: A state v = (b, M, f, H) satisfies the discretionary-security property (ds-property) iff
(Si , Oj , x) ∈ b ⇒ x ∈ Mij
Definition (Secure System): A state v is a secure state iff v satisfies the ss-property and *-property rel S ′ and ds-property. A state sequence z is a secure
state sequence iff zt is a secure state for each t ∈ T . Call (x, y, z) ∈ Σ(R, D, W, z0 )
an appearance of the system. (x, y, z) ∈ Σ(R, D, W, z0 ) is a secure appearance
iff z is a secure sequence. Finally, Σ(R, D, W, z0 ) is a secure system iff every
appearance of Σ(R, D, W, z0 ) is a secure appearance. Similar definitions pertain
for the notions:
(i) the system Σ(R, D, W, z0 ) satisfies the ss-property,
(ii) the system satisfies *-property rel S ′ , and
(iii) the system satisfies the ds-property.
Definition (Rule): A rule is a function ρ ∶ R × V → D × V . A rule therefore
associates with each request-state pair (input) a decision-state pair (output).
A rule ρ is secure-state-preserving iff v ∗ is a secure state whenever ρ(Rk , v) =
(Dm , v ∗ ) and v is a secure state. Similar definitions pertain for the notions
(i) ρ is ss-property-preserving,
(ii) ρ is *-property-preserving, and
(iii) ρ is ds-property-preserving.
Suppose w = {ρ1 , ρ2 , ..., ρs } is a set of rules. The relation W (w) is defined by
(Rk , Dm , v ∗ , v) ∈ W (w) iff Dm ≠? and (Dm , v ∗ ) = ρi (Rk , v) for a unique i,
1 ≤ i ≤ s.
Definition: (Ri , Dj , v ∗ , v) ∈ R × D × V × V is an action of Σ(R, D, W, z0 ) iff
there is an appearance (x, y, z) of Σ(R, D, W, z0 ) and some t ∈ T such that
(Ri , Dj , v ∗ , v) = (xt , yt , zt , zt−1 ).
Theorem 1– Σ(R, D, W, z0 ) satisfies the ss-property for any initial state z0
which satisfies the ss-property iff W satisfies the following conditions for each
action (Ri , Dj , (b∗ , M ∗ , f ∗ , H ∗ ), (b, M, f, H)):
(i) each (S, O, x) ∈ b∗ − b satisfies the simple security condition relative to f ∗
(SSC rel f ∗ );
(ii) each (S, O, x) ∈ b which does not satisfy SSC rel f ∗ is not in b∗ .
Proof: (⇐)
Suppose z0 = (b, M, f, H) is an initial state which satisfies ss-property. Pick
(x, y, z) ∈ Σ(R, D, W, z0 ) and write zt = (b(t) , M (t) , f (t) , H (t) ) for each t ∈ T .
z1 satisfies ss-property
(x1 , y1 , z1 , z0 ) is in W . In order to show that z1 satisfies ss-property we need to
show that each (S, O, x) ∈ b(1) satisfies SSC rel f (1) .
Notice that b(1) = (b(1) − b(0) ) ∪ (b(0) ∩ b(1) ) and (b(1) − b(0) ) ∩ (b(1) ∩ b(0) ) =
∅. Suppose (S, O, x) ∈ b(1) . Then either (S, O, x) is in (b(1) − b(0) ) or is in
(b(1) ∩ b(0) ). Suppose (S, O, x) ∈ (b(1) − b(0) ). Then (S, O, x) satisfies SSC rel
f (1) according to (i). Suppose (S, O, x) ∈ (b(1) ∩ b(0) ). Then (S, O, x) satisfies
SSC rel f (1) according to (ii). Therefore z1 satisfies ss-property.
if zt−1 satisfies ss-property, then zt satisfies ss-property.
The argument given for “z1 satisfies ss-property” applies with t − 1 substituted for
0 and t substituted for 1.
By induction, z satisfies ss-property so that the appearance (x, y, z) satisfies ss-property. Since (x, y, z) was arbitrary, Σ(R, D, W, z0 ) satisfies the ss-property.
(⇒) Suppose Σ(R, D, W, z0 ) satisfies the ss-property for any initial state z0
which satisfies ss-property. Argue by contradiction. Contradiction yields the
proposition
“there is some action (xt , yt , zt , zt−1 ) such that either
(iii) some (S, O, x) in b(t) − b(t−1) does not satisfy SSC rel f (t) or
(iv) some (S, O, x) in b(t−1) which does not satisfy SSC rel f (t) is in b(t) , i.e.,
is in b(t−1) ∩ b(t) .”
Suppose (iii). Then there is some (S, O, x) ∈ b(t) which does not satisfy SSC rel
f (t) . Suppose (iv). Then there is some (S, O, x) ∈ b(t) which does not satisfy SSC
rel f (t) . Therefore zt does not satisfy ss-property, (x, y, z) does not satisfy ss-property, and so Σ(R, D, W, z0 ) does not satisfy ss-property, which contradicts
the initial assumption of the argument.
◻
Theorem 2– Σ(R, D, W, z0 ) satisfies the *-property relative to S ′ ⊆ S for any
initial state z0 which satisfies *-property relative to S ′ iff W satisfies the following conditions for each action (Ri , Dj , (b∗ , M ∗ , f ∗ , H ∗ ), (b, M, f, H)):
(i) for each S ∈ S ′ ,
(a) O ∈ (b∗ − b)(S ∶ a) ⇒ fO∗ (O) Ž fC∗ (S),
(b) O ∈ (b∗ − b)(S ∶ w) ⇒ fO∗ (O) = fC∗ (S),
(c) O ∈ (b∗ − b)(S ∶ r) ⇒ fC∗ (S) Ž fO∗ (O);
(ii) for each S ∈ S ′ ,
(a’) [O ∈ b(S ∶ a) and ¬(fO∗ (O) Ž fC∗ (S))] ⇒ O ∈/ b∗ (S ∶ a), and
(b’) [O ∈ b(S ∶ w) and fO∗ (O) ≠ fC∗ (S)] ⇒ O ∈/ b∗ (S ∶ w), and
(c’) [O ∈ b(S ∶ r) and ¬(fC∗ (S) Ž fO∗ (O))] ⇒ O ∈/ b∗ (S ∶ r).
Proof: As an exercise (similar to the proof of Theorem 1).
◻
Theorem 3– Σ(R, D, W, z0 ) satisfies the ds-property iff z0 satisfies the ds-property and W satisfies the following condition for each action (Ri , Dj , (b∗ , M ∗ , f ∗ , H ∗ ), (b, M, f, H)):
(i) (Sa , Oa′ , x) ∈ b∗ − b ⇒ x ∈ M ∗a,a′ ; and
(ii) (Sa , Oa′ , x) ∈ b and x ∈/ M ∗a,a′ ⇒ (Sa , Oa′ , x) ∈/ b∗ .
Proof: As an exercise (similar to the proof of Theorem 1).
◻
Corollary 1– Σ(R, D, W, z0 ) is a secure system iff z0 is a secure state and W
satisfies the conditions of theorems 1 to 3 for each action.
Corollary 2– Suppose w is a set of secure-state-preserving rules and z0 is an
initial state which is a secure state. Then Σ(R, D, W (w), z0 ) is a secure system.
Theorem 5– Let ρ be a rule and ρ(Rk , v) = (Dm , v ∗ ) , where v = (b, M, f, H)
and v ∗ = (b∗ , M ∗ , f ∗ , H ∗ ).
(i) If b∗ ⊆ b and f ∗ = f , then ρ is ss-property-preserving.
(ii) If b∗ ⊆ b and f ∗ = f , then ρ is *-property-preserving.
(iii) If b∗ ⊆ b and M ∗ij ⊇ Mij for all i and j, then ρ is ds-property-preserving.
(iv) If b∗ ⊆ b, f ∗ = f , and M ∗ij ⊇ Mij for all i and j, then ρ is secure-state-preserving.
Proof: (i) If v satisfies the ss-property, then (S, O, x) ∈ b∗ with x = w or r
implies (S, O, x) ∈ b, so that fS (S) Ž fO (O) by assumption. Since f ∗ = f , hence
fS∗ (S) Ž fO∗ (O). Thus v ∗ satisfies ss-property and ρ is ss-property-preserving.
(ii) and (iii) are proved in ways exactly analogous to the proof of (i). Implications
(i), (ii), and (iii) prove implication (iv).
◻
Definition of Rules
Notation– The symbol ∖ will be used in expressions of the form A ∖ B to mean
“proposition A except as modified by proposition B”.
Suppose M is a matrix. Then M ∖ Mij ← {a} means the matrix obtained from
M by replacing the (i, j)th element by {a}. M ∖ Mij ∪ {x} means the matrix
obtained from M by adding the element x to the (i, j)th set entry.
There are 11 rules defined in BLP model. Some of these rules are presented in
the following.
Rule 1 (R1): get−read
Domain of R1: all Rk = (g, Si , Oj , r) in R(1) . (Denote domain of Ri by dom(Ri ).)
Semantics: Subject Si requests access to object Oj in read-only mode (r).
*-property function: ∗1(Rk , v) = TRUE ⇔ fC (Si ) Ž fO (Oj ).
The rule:
R1(Rk , v) = { (?, v), if Rk ∈/ dom(R1);
(yes, (b ∪ {(Si , Oj , r)}, M, f, H)), if [Rk ∈ dom(R1)] & [r ∈ Mij ] & [fS (Si ) Ž fO (Oj )] & [Si ∈ ST or ∗1(Rk , v)];
(no, v), otherwise }
Algorithm for R1:
if Rk ∈/ dom(R1)
then R1(Rk , v) = (?, v);
else if r ∈ Mij and ⟨[Si ∈ S ′ and ∗ 1(Rk , v)] or [Si ∈ ST and fS (Si ) Ž
fO (Oj )]⟩
then R1(Rk , v) = (yes, (b ∪ {(Si , Oj , r)}, M, f, H));
else R1(Rk , v) = (no, v);
end;
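The algorithm for R1 translates almost line by line into code; the state encoding below (levels as (rank, category set) pairs, with `trusted` playing the role of ST) is our own sketch, not the report's:

```python
# Rule R1 (get-read) as a function on (request, state) pairs.

def dominates(l1, l2):
    return l1[0] >= l2[0] and l1[1] >= l2[1]

def R1(Rk, v, trusted=frozenset()):
    b, M, (f_S, f_O, f_C), H = v
    if Rk[0] != "g" or Rk[3] != "r":                    # Rk not in dom(R1)
        return ("?", v)
    _, Si, Oj, _ = Rk
    if ("r" in M.get((Si, Oj), set())                   # ds-property check
            and dominates(f_S[Si], f_O[Oj])             # ss-property check
            and (Si in trusted or dominates(f_C[Si], f_O[Oj]))):  # *1(Rk, v)
        return ("yes", (b | {(Si, Oj, "r")}, M, (f_S, f_O, f_C), H))
    return ("no", v)

lvl = lambda n: (n, frozenset())
v0 = (frozenset(), {("s", "o"): {"r"}},
      ({"s": lvl(2)}, {"o": lvl(1)}, {"s": lvl(1)}), {})
decision, v1 = R1(("g", "s", "o", "r"), v0)
assert decision == "yes" and ("s", "o", "r") in v1[0]
```

Note that the new state differs from v only by the added triple in b, so by Theorem 5 style reasoning the rule is secure-state-preserving whenever the guard holds.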
Similarly, rules R2 ∶ get-append, R3 ∶ get-execute, R4 ∶ get-write for requests of
type R(1) are defined.
Rule 5 (R5) ∶ release-read/execute/write/append
Domain of R5: all Rk = (r, Si , Oj , x) ∈ R(1) , x ∈ A.
Semantics: Subject Si signals the release of access to object Oj in mode x, where
x is r (read-only), e (execute), w (write), or a (append).
*-property function: ∗5(Rk , v) = TRUE.
The rule:
⎧
⎪
⎪(yes, (b − {(Si , Oj , x)}, M, f, H)),
R5(Rk , v) = ⎨
⎪
(?, v),
⎪
⎩
if Rk ∈ dom(R5);
otherwise.
Algorithm for R5:
if Rk ∈/ dom(R5) then R5(Rk , v) = (?, v);
else R5(Rk , v) = (yes, (b − {(Si , Oj , x)}, M, f, H));
end;
Rule 6 (R6) ∶ give-read/execute/write/append
Notation– In the following rule, OR denotes root object in the object hierarchy
and OS(j) denotes Oj ’s immediately superior object in the hierarchy. Also,
GIVE(Sλ , Oj , v) means Sλ is allowed (has an administrative permission) to give
permission to object Oj in current state v.
Domain of R6: all Rk = (Sλ , g, Si , Oj , x) ∈ R(2) , x ∈ A.
Semantics: Subject Sλ gives subject Si access permission to Oj in mode x, where
x is r, w, e, or a.
*-property function: ∗6(Rk , v) = TRUE.
The rule:
R6(Rk , v) = { (?, v), if Rk ∈/ dom(R6);
(yes, (b, M ∖ Mij ∪ {x}, f, H)), if [Rk ∈ dom(R6)] & [⟨[Oj ≠ OR ] & [OS(j) ≠ OR ] & [OS(j) ∈ b(Sλ ∶ w)]⟩ or ⟨[OS(j) = OR ] & [GIVE(Sλ , Oj , v)]⟩ or ⟨[Oj = OR ] & [GIVE(Sλ , OR , v)]⟩];
(no, v), otherwise }
Algorithm for R6:
if Rk ∈/ dom(R6) then R6(Rk , v) = (?, v);
else if [⟨[Oj ≠ OR ] and [OS(j) ≠ OR ] and [OS(j) ∈ b(Sλ ∶ w)]⟩ or ⟨[OS(j) =
OR ] and [GIVE(Sλ , Oj , v)]⟩ or ⟨[Oj = OR ] and [GIVE(Sλ , OR , v)]⟩]
then R6(Rk , v) = (yes, (b, M ∖ Mij ∪ {x}, f, H));
else R6(Rk , v) = (no, v);
end;
Other rules including R7 ∶ rescind-read/execute/write/append, R8 ∶ create-object,
R9 ∶ delete-object-group, R10 ∶ change-subject-current-security-level, and R11 ∶
change-object-security-level are defined similar to the ones specified above.
2.2.2 Denning’s Lattice Model of Secure Information Flow (1976)
Reference: D. E. Denning, “A Lattice Model of Secure Information Flow”, Communications of the ACM, 19(5), pp. 236–243, 1976.
The Model
An information flow model FM is defined by F M = ⟨N, P, SC, ⊕, →⟩, where
• N = {a, b, ...} is a set of logical storage objects or information receptacles.
• P = {p, q, ...} is a set of processes. Processes are the active agents responsible for all information flow.
• SC = {A, B, ...} is a set of security classes corresponding to disjoint classes
of information.
– Each object a is bound to a security class, denoted by ā. There
are two methods of binding objects to security classes: static binding, where the security class of an object is constant, and dynamic
binding, where the security class of an object varies with its content.
– Users and processes may be bound to security classes. In this case,
p̄ (security class of process p) may be determined by the security
clearance of the user owning p or by the history of security classes to
which p has had access.
• ⊕ ∶ SC × SC → SC is the class-combining operator, which is an associative
and commutative binary operator that specifies how to label information
obtained by combining information from two security classes. The set of
security classes is closed under ⊕.
• →⊆ SC × SC is a can flow relation, which is defined on pairs of security
classes. For classes A and B, we write A → B if and only if information
in class A is permitted to flow into class B. This includes flows along
legitimate and storage channels. We shall not be concerned with flows
along covert channels (e.g., a process’s effect on the system load).
The security requirements of the model: a flow model F M is secure if and only
if execution of a sequence of operations cannot give rise to a flow that violates
the relation →.
If a value f (a1 , ..., an ) flows to an object b that is statically bound to a security
class b̄, then ā1 ⊕ ... ⊕ ān → b̄ must hold. If f (a1 , ..., an ) flows to a dynamically
bound object b, then the class of b must be updated (if necessary) to hold the
above relation.
Example [High-Low Policy]– The high-low policy can be defined by triple
⟨SC, →, ⊕⟩ as follows:
SC = {H, L}
→= {(H, H), (L, L), (L, H)}
H ⊕ H = H, H ⊕ L = H, L ⊕ H = H, L ⊕ L = L
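The high-low policy can be exercised directly in code: flow checking for an assignment f(a1, ..., an) → b follows the security requirement above (function and variable names are ours):

```python
# The high-low policy as data: FLOW is the can-flow relation and `join`
# the class-combining operator from the example.

FLOW = {("H", "H"), ("L", "L"), ("L", "H")}

def join(a, b):
    return "H" if "H" in (a, b) else "L"    # H + x = H, L + L = L

def assignment_secure(operand_classes, target_class):
    """f(a1, ..., an) may flow to b iff a1 + ... + an can flow to b's class."""
    combined = "L"                          # the empty combination is L
    for c in operand_classes:
        combined = join(combined, c)
    return (combined, target_class) in FLOW

assert assignment_secure(["L", "L"], "H")       # low data may flow up
assert not assignment_secure(["H", "L"], "L")   # H + L = H cannot flow down
```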
Denning’s Axioms (Derivation of Lattice Structure)
Under certain assumptions, the model components SC, →, and ⊕ form a universally bounded lattice. These assumptions follow from the semantics of information flow.
⟨SC, →, ⊕⟩ forms a universally bounded lattice iff
1. ⟨SC, →⟩ is a partially ordered set;
2. SC is finite;
3. SC has a lower bound L such that L → A for all A ∈ SC;
4. ⊕ is a least upper bound operator.
In assumption (1), reflexivity and transitivity of the flow relation → are required
for consistency, and antisymmetry follows from the practical assumption of irredundant classes (two distinct classes permitted to flow into each other would be redundant and can be merged).
Assumption (2), that the set of security classes SC is finite, is a property of any
practical system.
Assumption (3), that there exists a lower bound L on SC, acknowledges the
existence of public information in the system. All constants (public contents)
are candidates to be labeled L, because information from constants should be
allowed to flow to any other object.
Assumption (4), that the class-combining operator ⊕ is also a least upper bound
operator, is demonstrated by showing that for all A, B, C ∈ SC:
(a) A → A ⊕ B and B → A ⊕ B.
(b) A → C and B → C ⇒ A ⊕ B → C.
Without property (a) we would have the semantic absurdity that operands could
not flow into the class of a result generated from them. Moreover, it would be
inconsistent for an operation such as c ∶= a + b to be permitted whereas c ∶= a is
not, since the latter operation can be performed by executing the former with
b = 0.
For part (b), consider five objects a, b, c, c1, and c2 such that ā → c̄, b̄ → c̄, and
c̄ = c̄1 = c̄2; and consider this program segment:
c1 ∶= a;
c2 ∶= b;
c ∶= c1 ∗ c2.
Execution of this program segment assigns to c information derived from a and
b; therefore, the flow ā ⊕ b̄ → c̄ is implied semantically. For consistency, we
require the flow relation to reflect this fact. Thus for any two classes A and B,
A ⊕ B is the least upper bound, also referred to as the join, of A and B.
Notation– If X ⊆ SC is a subset of security classes, then
⊕X = L if X = ∅, and ⊕X = A1 ⊕ ... ⊕ An if X = {A1 , ..., An }.
Assumptions (1)-(4) imply the existence of a greatest lower bound operator
on the security classes, which we denote by ⊗. It can be easily shown that
A ⊗ B = ⊕L(A, B), where L(A, B) = {C∣ C → A ∧ C → B}.
Also ⊗X for X ⊆ SC is defined similarly to ⊕X.
Proposition– Ai → B for all i (1 ≤ i ≤ n) if and only if ⊕X → B, where X = {A1 , ..., An }; that is, A1 ⊕ ... ⊕ An → B.
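The formula A ⊗ B = ⊕L(A, B) can be exercised on a small powerset lattice (a hedged sketch; the category lattice and all names below are our own illustration, not from the notes):

```python
from itertools import combinations

# Security classes as frozensets of categories; -> is subset inclusion,
# and (+) is set union (the least upper bound in this lattice).
CATS = ["x", "y"]
SC = [frozenset(s) for r in range(len(CATS) + 1) for s in combinations(CATS, r)]

def flows(a, b):          # a -> b
    return a <= b

def join(a, b):           # A (+) B
    return a | b

def meet(a, b):
    """A (x) B = (+){C | C -> A and C -> B}, per the text."""
    lower = [c for c in SC if flows(c, a) and flows(c, b)]
    out = frozenset()     # (+) of the empty set is the lower bound L
    for c in lower:
        out = join(out, c)
    return out

a, b = frozenset({"x"}), frozenset({"x", "y"})
assert meet(a, b) == frozenset({"x"})      # glb coincides with intersection here
assert join(a, b) == frozenset({"x", "y"})
```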
Enforcement of Security
The primary difficulty with guaranteeing security lies in detecting (and monitoring) all flow-causing operations.
We distinguish between two types of flow:
• Explicit flow to an object b occurs as the result of executing any statement
(e.g. assignment or I/O) that directly transfers to b information derived
from operands a1 , ..., an .
• Implicit flow to b occurs as the result of executing (or not executing) a
statement that causes an explicit flow to b, when that statement is conditioned on the value of an expression.
Definition (Program): An abstract program (or statement) S is defined recursively by:
• S is an elementary statement; e.g. assignment or I/O.
• If S1 and S2 are programs (statements), then S = S1 ; S2 is a program
(statement).
• If S1 , ..., Sm are programs (statements) and c is an m-valued variable then
S = c ∶ S1 , ..., Sm is a program (statement).
The conditional structure is used to represent all conditional (including iterative) statements found in programming languages. For example:
(if c then S1 else S2 ) ⇒ (c ∶ S1 , S2 )
(while c do S1 ) ⇒ (c ∶ S1 )
(do case c of S1 , ..., Sm ) ⇒ (c ∶ S1 , ..., Sm )
Definition– The security requirements for any program of the above form are
now stated as follows.
• If S is an elementary statement, which replaces the contents of an object b
with a value derived from objects a1 , ..., an (ai = b for some ai is possible),
then security requires that ā1 ⊕ ... ⊕ ān → b̄ hold after execution of S. If b
is dynamically bound to its class, it may be necessary to update b̄ when
S is executed.
• S = S1 ; S2 is secure if both S1 and S2 are individually secure (because of
the transitivity of →).
• S = c ∶ S1 , ..., Sm is secure if each Sk (1 ≤ k ≤ m) is secure and all implicit
flows from c are secure.
Let b1 , ..., bn be the objects into which S specifies explicit flows (i.e. i =
1, ..., n implies that, for each bi , there is an operation in some Sk that
causes an explicit flow to bi ); then all implicit flow is secure if c̄ → b̄i (1 ≤
i ≤ n), or equivalently c̄ → b̄1 ⊗ ... ⊗ b̄n holds after execution of S.
If bi is dynamically bound to its security class, it may be necessary to
update b̄i by b̄i ∶= b̄i ⊕ c̄
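A minimal certifier for these requirements can be sketched as follows (hypothetical Python over the two-level lattice; the statement encoding and variable bindings are our own illustration):

```python
# Hedged sketch: a compile-time check of the security requirements above
# for the two-level lattice. Statements are modeled as
#   ("assign", target, [operands])  or  ("cond", guard, [branches]).
LEVEL = {"a": "L", "b": "H", "guard": "H"}  # hypothetical class bindings

def join(levels):
    return "H" if "H" in levels else "L"

def can_flow(x, y):
    return x == "L" or y == "H"

def targets(stmt):
    """All objects a statement explicitly assigns to."""
    if stmt[0] == "assign":
        return [stmt[1]]
    return [t for s in stmt[2] for t in targets(s)]

def secure(stmt):
    if stmt[0] == "assign":                       # explicit flow
        _, tgt, ops = stmt
        return can_flow(join(LEVEL[o] for o in ops), LEVEL[tgt])
    _, guard, branches = stmt                     # implicit flow from guard
    return (all(secure(s) for s in branches) and
            all(can_flow(LEVEL[guard], LEVEL[t])
                for s in branches for t in targets(s)))

# if guard then b := a  -- secure: L -> H, and guard(H) -> b(H)
assert secure(("cond", "guard", [("assign", "b", ["a"])]))
# if guard then a := a  -- insecure: implicit flow H -> L
assert not secure(("cond", "guard", [("assign", "a", ["a"])]))
```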
Access Control Mechanism
Each process p has an associated clearance class p̄ specifying the highest class p
can read from (observe) and the lowest class p can write into (modify or extend).
Security is enforced by a run-time mechanism that permits p to acquire read
access to an object a only if ā → p̄, and write access to an object b only if p̄ → b̄.
Hence, p can read from a1 , ..., am and write into b1 , ..., bn only if ā1 ⊕ ... ⊕ ām →
p̄ → b̄1 ⊗ ... ⊗ b̄n .
This mechanism automatically guarantees the security of all flows, explicit or
implicit, since no flow from an object a to an object b can occur unless ā → p̄ → b̄,
which implies ā → b̄.
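The mediation rule can be sketched as follows (a hypothetical two-level instance; function names are our own), including a brute-force check that mediated flows are always transitively secure:

```python
# Hedged sketch of the run-time mechanism: process p may read a iff
# class(a) -> clearance(p), and write b iff clearance(p) -> class(b).
ORDER = {"L": 0, "H": 1}  # two-level lattice; -> is <=

def can_flow(x, y):
    return ORDER[x] <= ORDER[y]

def may_read(p_clearance, obj_class):
    return can_flow(obj_class, p_clearance)

def may_write(p_clearance, obj_class):
    return can_flow(p_clearance, obj_class)

# With clearance H, p may read H and L but write only H:
assert may_read("H", "L") and may_read("H", "H")
assert may_write("H", "H") and not may_write("H", "L")

# Any mediated flow a -> p -> b implies a -> b by transitivity:
for a in ORDER:
    for b in ORDER:
        for p in ORDER:
            if may_read(p, a) and may_write(p, b):
                assert can_flow(a, b)
```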
2.3 Information Flow Control
2.3.1 Noninterference for Deterministic Systems (1986)
Reference: J.A. Goguen, J. Meseguer, “Security Policies and Security Models”,
IEEE Symposium on Security and Privacy, pp. 11–20, 1982.
One group of users, using a certain set of commands, is noninterfering with
another group of users if what the first group does with those commands has
no effect on what the second group of users can see.
In this approach, security verification consists of showing that a given policy
(containing the security requirements) is satisfied by a given model of a system.
The Model
Two types of systems are considered:
• Static system: what users are permitted to do does not change over time;
thus, their capabilities do not change in such a system.
• Dynamic system: what users are permitted to do can change with time;
thus, there are some commands that can change the users’ capabilities.
Static Systems
We may assume that all the information about what users are permitted to do
is encoded in a single abstract capability table.
The system will also have information which is not concerned with what is
permitted; this will include users’ programs, data, messages, etc. We will call
a complete characterization of all such information a state of the system. The
system will provide commands that change these states.
Definition– A static machine M consists of the following elements:
• U as a set of users (could also be taken to be subjects in the more general
way).
• S as a set of states.
• SC as a set of state commands.
• Out as a set of outputs.
Together with:
• out ∶ S × U → Out; a function which tells what a given user sees when the
machine is in a given state, called output function.
• do ∶ S × U × SC → S; a function which tells how states are updated by
commands, called state transition function.
• s0 ∈ S; a constant that indicates the initial machine state.
Note– U × SC can be considered as the set of inputs.
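A static machine can be encoded directly from this definition (a hypothetical Python sketch; the toy counter machine below is our own example):

```python
from dataclasses import dataclass
from typing import Callable

# Hedged sketch of a Goguen-Meseguer static machine; names are ours.
@dataclass
class StaticMachine:
    do: Callable[[str, str, str], str]   # do(state, user, command) -> state
    out: Callable[[str, str], str]       # out(state, user) -> output
    s0: str = "s0"                       # initial state

    def run(self, w):
        """Apply an input string w in (U x SC)* from the initial state."""
        s = self.s0
        for (u, c) in w:
            s = self.do(s, u, c)
        return s

# A toy machine: the state is a counter that only user "u" can increment.
m = StaticMachine(
    do=lambda s, u, c: str(int(s) + 1) if (u, c) == ("u", "inc") else s,
    out=lambda s, u: s,
    s0="0",
)
assert m.run([("u", "inc"), ("v", "inc"), ("u", "inc")]) == "2"
```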
Capability Systems
We assume that in addition to the state machine features there are also capability
commands that can change the capability table.
Definition– A capability system M consists of the following elements:
• U as a set of users;
• S as a set of states;
• SC as a set of state commands;
Figure 2.7: Static and capability commands execution.
• Out as a set of outputs;
• Capt as a set of capability tables;
• CC as a set of capability commands.
Together with the following functions:
• out ∶ S × Capt × U → Out; the output function, which tells what a given
user sees when the machine, including its capability component, is in a
given state.
• do ∶ S × Capt × U × SC → S; the state transition function, which tells how
states are updated by commands.
• cdo ∶ Capt × U × CC → Capt; the capability transition function, which tells
how capability tables are updated.
• (t0 , s0 ) ∈ Capt × S as an initial capability table and initial state.
C = SC ∪ CC is the set of all commands. We assume that there are no commands
that change both the state and the capability table (see Figure 2.7).
A subset of C is called an ability. Let Ab = P(C) denote the set of all such
subsets (abilities). Evidently, Capt = Ab^U (a capability table assigns an ability to each user).
Given a capability system M , we can define a system transition function as
follows, which describes the effect of commands on the combined system state
space, which is S × Capt.
csdo ∶ S × Capt × U × C → S × Capt
which is defined as
csdo(s, t, u, c) = (do(s, t, u, c), t) if c ∈ SC
csdo(s, t, u, c) = (s, cdo(t, u, c)) if c ∈ CC
We can now view a capability system as a state machine, with state space
S × Capt, input space (U × C)∗ and output space Out. The extended version of
function csdo can be defined as follows.
csdo ∶ S × Capt × (U × C)∗ → S × Capt
which is defined by
• csdo(s, t, N IL) = (s, t) and
• csdo(s, t, w.(u, c)) = csdo′ (csdo(s, t, w), u, c)
where w ∈ (U × C)∗ , N IL denotes the empty string, dot denotes concatenation,
and csdo′ denotes the primary (single-step) definition of function csdo.
[[w]] = csdo(s0 , t0 , w) denotes the effect of the input string w on states, starting
from the initial state of the whole system.
A state s of a state machine M is reachable iff [[w]] = (s, t) for some w ∈ (U × C)∗ and some capability table t.
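The extended csdo is simply a fold of the single-step csdo′ over the input string, which can be sketched as follows (hypothetical Python; the toy do/cdo functions are our own illustration):

```python
# Hedged sketch of the extended csdo over input strings (U x C)*,
# built from the single-step csdo' of a capability system.
def make_csdo(do, cdo, SC, CC):
    def csdo1(s, t, u, c):            # the primary (single-step) csdo'
        if c in SC:
            return (do(s, t, u, c), t)
        assert c in CC                # commands change state OR table, not both
        return (s, cdo(t, u, c))

    def csdo(s, t, w):                # csdo(s, t, NIL) = (s, t)
        for (u, c) in w:              # csdo(s, t, w.(u,c)) = csdo'(csdo(s,t,w), u, c)
            s, t = csdo1(s, t, u, c)
        return (s, t)

    return csdo

# Toy system: the state counts "step"s; the table records who ran "grant".
csdo = make_csdo(
    do=lambda s, t, u, c: s + 1,
    cdo=lambda t, u, c: t | {u},
    SC={"step"}, CC={"grant"},
)
s, t = csdo(0, frozenset(), [("u", "step"), ("v", "grant"), ("u", "step")])
assert (s, t) == (2, frozenset({"v"}))
```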
Static Policies
Security policy is a set of noninterference assertions. Each noninterference assertion says that
what one group of users does using a certain ability has no effect on what
some other group of users sees.
Notation– Let w ∈ (U × C)∗ and u ∈ U . We define [[w]]u to be the output to u
after doing w on M , i.e., [[w]]u = out([[w]], u).
Definition– Let G ⊆ U (a group of users), A ⊆ C (an ability), and w ∈ (U ×C)∗ .
Then we let PG (w) denote the subsequence of w obtained by eliminating those
pairs (u, c) with u ∈ G. Similarly, PA (w) eliminates pairs with c ∈ A, and PG,A (w) eliminates pairs with u ∈ G and c ∈ A.
Example: G = {u, v}, A = {c1 , c2 }
PG,A ( (u′ , c1 ).(u, c3 ).(u, c2 ).(v ′ , c1 ) ) = (u′ , c1 ).(u, c3 ).(v ′ , c1 )
PA ( (u′ , c1 ).(u, c3 ).(u, c2 ).(v ′ , c1 ) ) = (u, c3 )
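The purge operators can be sketched and checked against the example above (hypothetical Python; the single unified signature is our own choice):

```python
# Hedged sketch of the purge operators P_G, P_A, and P_{G,A}.
def purge(w, G=frozenset(), A=frozenset(), both=False):
    """Drop (u, c) pairs matching the group G and/or ability A.

    both=False drops a pair if u is in G or c is in A (covers P_G and P_A);
    both=True drops a pair only if u is in G AND c is in A (P_{G,A})."""
    if both:
        return [(u, c) for (u, c) in w if not (u in G and c in A)]
    return [(u, c) for (u, c) in w if u not in G and c not in A]

G, A = {"u", "v"}, {"c1", "c2"}
w = [("u'", "c1"), ("u", "c3"), ("u", "c2"), ("v'", "c1")]
assert purge(w, G=G, A=A, both=True) == [("u'", "c1"), ("u", "c3"), ("v'", "c1")]
assert purge(w, A=A) == [("u", "c3")]
assert purge(w, G=G) == [("u'", "c1"), ("v'", "c1")]
```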
Definition– Given a state machine M and sets G and G′ of users, we say that
G does not interfere with (or is noninterfering with) G′ , written G ∶ ∣G′ iff
∀w ∈ (U × C)∗ , ∀u ∈ G′ , [[w]]u = [[PG (w)]]u
Similarly, an ability A does not interfere with G′ , written A ∶ ∣G′ iff
∀w ∈ (U × C)∗ , ∀u ∈ G′ , [[w]]u = [[PA (w)]]u
Finally, users in G with ability A do not interfere with users in G′ , written
A, G ∶ ∣G′ iff
∀w ∈ (U × C)∗ , ∀u ∈ G′ , [[w]]u = [[PG,A (w)]]u
Figure 2.8: An information flow diagram.
Example: A ∶ ∣{u} means running commands in A does not have any effect on what
user u sees.
Definition– A security policy is a set of noninterference assertions.
Example: (Multilevel security) such as BLP
level ∶ U → L,
U [−∞, x] = {u ∈ U ∣ level(u) ≤ x}
U [x, +∞] = {u ∈ U ∣ level(u) ≥ x}
∀x > x′ , U [x, +∞] ∶ ∣U [−∞, x′ ] (specifies both the SS and *-properties of BLP)
Definition– G is invisible (relative to other users) iff G ∶ ∣ − G.
Now, it is very easy to express MLS using this notion:
∀x ∈ L, U [x, +∞] is invisible, or
∀x ∈ L, U − U [−∞, x] is invisible,
i.e., ∀x ∈ L, U − U [−∞, x] ∶ ∣U [−∞, x]
Example: (Security Officer) The set A consists of exactly those commands that
can change the capability table.
Policy: There is just one designated user seco, the security officer, whose use of
those commands has any effect: A, −{seco} ∶ ∣U
Example: (Channel Control) A very general notion of channel is just a set of
commands, i.e., an ability A ⊆ C.
Policy: G and G′ can communicate only through the channel A.
−A, G ∶ ∣G′ ∧ − A, G′ ∶ ∣G
Example: (Information Flow) a, b, c, d are processes, and A1 , A2 , and A3 are
channels. a, b, c, and d can communicate (as depicted in Figure 2.8) as follows:
{b, c, d} ∶ ∣{a}
{c, d} ∶ ∣{b}
{c} ∶ ∣{d}
{d} ∶ ∣{c}
−A1 , {a} ∶ ∣{b, c, d}
− A2 , {b} ∶ ∣{c}
− A3 , {b} ∶ ∣{d}
Dynamic Policies
In dynamic policies, whether or not a given user u, by using an operation
(command) c, can interfere with another user v may vary with time.
Definition– Let G and G′ be sets of users. Let A be a set of commands, and
Q be a predicate defined over (U × C)∗ , i.e., Q ∶ (U × C)∗ → {0, 1}. Then, G
using A is noninterfering with G′ under condition Q, written
G, A ∶ ∣G′ if Q
iff
∀u′ ∈ G′ , ∀w ∈ (U × C)∗ , [[w]]u′ = [[P (w)]]u′
where P is defined by
P (λ) = λ, where λ is the empty string, and
P (o1 ...on ) = o′1 ...o′n , where
o′i = λ, if Q(o′1 ...o′i−1 ) ∧ oi = (u, a) with u ∈ G and a ∈ A
o′i = oi , otherwise.
Example: (Discretionary Access) We assume the existence of a function CHECK(w, u, c),
which looks at the capability table in state [[w]] to see whether or not u is authorized to do command c; it returns true if he is, and false if not.
CHECK ∶ (U × C)∗ × U × C → {0, 1}
or equivalently CHECK(u, c) ∶ (U × C)∗ → {0, 1}.
The general policy that we wish to enforce for all users u and all commands c is
{u}, {c} ∶ ∣U if ¬CHECK(u, c)
We can define such a policy in another way.
pass(u, c) is a command, which gives user u the capability to use c.
unpass(u, c) is a command, which takes the capability to use c from user u.
w ∈ (U × C)∗ ∧ w = w′ .o ⇒ previous(w) = w′ , last(w) = o
Policy:
{u}, {c} ∶ ∣U if [¬CHECK(previous, u, c) ∧ ( CHECK(previous, u′ , pass(u, c)) →
¬(last = (u′ , pass(u, c))) ) ]
This says that u using c cannot interfere if in the previous state he didn’t have
the capability to use c, unless some user u′ having the capability in the previous
state to pass u the ability to use c, in fact did so.
The corresponding assertion for the revocation operation, which we shall denote
unpass(u, c), is
{u}, {c} ∶ ∣U if [CHECK(previous, u′ , unpass(u, c)) ∧ last = (u′ , unpass(u, c))]
2.3.2 Noninterference for Nondeterministic Systems
In nondeterministic systems, the same input may produce different outputs.
We need a framework for describing nondeterministic systems. In this framework, out becomes a relation instead of a function, i.e., the same input is allowed to generate
different outputs.
To catch channels, we will include outputs in the history itself. The resulting
traces represent acceptable input/output behaviors, and a system is a set of acceptable traces.
Example–A = {⟨⟩, ⟨in1 ⟩, ⟨in1 , out1 ⟩, ⟨in1 , in2 , out1 ⟩, ...}
We can show the above set with the following notation as well.
A = {⟨⟩, in1 , in1 .out1 , in1 .in2 .out1 , ...}
Example– A system in which a user can give as input either 0 or 1 and immediately receives that input as output is specified by the following set of traces:
A = {⟨⟩, in(0), in(1), in(0).out(0), in(1).out(1), in(0).out(0).in(1), ...}
For simplicity, we assume that any prefix of an acceptable trace must also be
an acceptable trace and that a user can give input at any time.
The obvious way to generalize noninterference is to require that the purge of
an acceptable trace be an acceptable trace, where the purge of a trace is formed
by removing all high level inputs from the trace.
Example– In the previous example, assume that all inputs and outputs are high-level. Since the system generates no low-level output, it is trivially secure. Now
• T = highin(0).highout(0) is an acceptable trace,
• its purge P (T ) = highout(0) is not an acceptable trace (since it contains unsolicited output), and the system is not secure by the provided
definition.
Thus, the provided definition is not appropriate. An obvious way is to refine
the purge operator so that it removes, not simply all high-level input, but all
high-level output as well.
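The naive purge-based definition, its failure on this example, and the effect of the refinement can all be checked mechanically (a hedged sketch; the trace encoding is our own):

```python
# Hedged sketch: checking whether a trace set is closed under purging
# high-level inputs (the first, too-naive definition in the text).
def purge_high_inputs(trace):
    return tuple(e for e in trace if not e.startswith("highin"))

def naive_secure(traces):
    return all(purge_high_inputs(t) in traces for t in traces)

A = {(), ("highin(0)",), ("highin(0)", "highout(0)")}
# Purging highin(0) from the last trace leaves the unsolicited output
# ("highout(0)",), which is not an acceptable trace:
assert not naive_secure(A)

# Refining the purge to remove high-level outputs as well repairs this example:
def purge_high(trace):
    return tuple(e for e in trace if not e.startswith("high"))

assert all(purge_high(t) in A for t in A)
```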
Example*– The system specified by the following set of traces satisfies the
described property and is secure.
A = {⟨⟩, highin(0), highin(1), lowout(0), lowout(1), highin(0).lowout(0),
highin(1).lowout(1)}
The above approach has some problems:
1. It is too strong in that it rules out any system where low-level input must
generate high-level output.
For example, a system that secretly monitors low-level usage and sends its
audit records as high-level output to some other system for analysis is
nonsecure.
2. In the previous example (labeled by *), consider a scenario where a Trojan
Horse acting on behalf of a high-level user can pass information to a low-level user using such a system. If the Trojan Horse wants to send a 0 or
1 to the low-level user, it simply gives the appropriate bit as input before
the next low-level output is generated.
To tackle the second problem, the system would also have to regard the traces highin(0).lowout(1)
and highin(1).lowout(0) as acceptable, which would close the nonsecure
channel.
A = {⟨⟩, highin(0), highin(1), lowout(0), lowout(1), highin(0).lowout(0),
highin(1).lowout(1), highin(0).lowout(1), highin(1).lowout(0)}
Of course, it would be too strong to require that any arbitrary insertion of high-level events into an acceptable trace must be acceptable. A lighter version
would be enough, which is considered in the definition of nondeducibility.
2.3.3 Nondeducibility (1986)
Definition– For any two acceptable traces T and S, there is an acceptable
trace R consisting of T ’s low-level events (in their respective order), S’s high-level inputs (in their respective order), and possibly some other events that are
neither low-level events in T nor high-level inputs from S.
Intuitively whatever the low-level user sees is compatible with any acceptable
high-level input.
Nondeducibility has some problems:
1. Nondeducibility is weak,
2. Nondeducibility is not composable.
Nondeducibility is weak
For example, consider a system where a high-level user H gives arbitrary high-level input (presumably secret messages of some sort) and some low-level user
L gives the low-level input, look.
When L issues look, he or she receives as low-level output the encryption of H’s
input up to that time, if there is any, or else a randomly generated string (see
Figure 2.9).
Figure 2.9: An example of the weakness of nondeducibility.
Such a system models an encryption system where low-level users can observe
encrypted messages leaving the system, but to prevent traffic analysis, random
strings are generated when there is no encrypted output.
This system satisfies nondeducibility since low-level users can learn nothing
about high-level input. A sample of acceptable traces of the system is as
follows.
T = highin(m1 ).lowin(look).lowout(E(m1 )).lowin(look).lowout(random)
S = highin(m2 ).lowin(look).lowout(E(m2 ))
For nondeducibility R = lowin(look).lowout(E(m1 )).lowin(look).lowout(random).highin(m2 )
(E(m1 ) seems random here)
The problem arises when we realize it would still satisfy nondeducibility even if
we removed the encryption requirement. For example:
S = highin(attack at dawn)
T = lowin(look).lowout(xxx)
R = lowin(look).lowout(xxx).highin(attack at dawn)
Similarly,
S = ⟨⟩
T = highin(attack at dawn).lowin(look).lowout(attack at dawn)
R = highin(attack at dawn).lowin(look).lowout(attack at dawn)
The system is nondeducibility secure, but intuitively it is not secure.
Nondeducibility is not composable
The system A has the following traces:
Each trace starts with some number of high-level inputs or outputs, followed by
the low-level output STOP, followed by the low-level output ODD (if there has been
an odd number of high-level events prior to STOP) or EVEN otherwise.
The high-level outputs and the output STOP leave via the right channel, and
the events ODD and EVEN leave via the left channel (see Figure 2.10).
The system B behaves exactly like A (see Figure 2.10), except that
• its high-level outputs leave it via left channel,
• its EVEN and ODD outputs leave it via right channel, and
Figure 2.10: An example of the non-composability of nondeducibility.
Figure 2.11: Hook-up composition of two sample systems.
• STOP is an input to its left channel.
Both systems A and B are nondeducibility secure.
Composition by hook-up: A and B are connected so that the left channel of B
is connected to the right channel of A (see Figure 2.11).
Since the number of shared high-level signals is the same for A and B, the fact
that A says ODD while B says EVEN (or vice versa) means that there has been at
least one high-level input from outside. Therefore, the composition of A and B
by hook-up is not nondeducibility secure.
Referring back to the definition of nondeducibility, we see that the cause of these
problems is that it allows us too much freedom in constructing an acceptable
trace R from the high-level inputs of an acceptable trace T and low-level events
from an acceptable trace S.
2.3.4 Generalized Noninterference (GNI)
Given an acceptable system trace T and an alteration T1 formed by inserting or
deleting a high-level input to or from T , there is an acceptable trace T2 formed
by inserting or deleting high-level outputs to or from T1 after the occurrence of
the alteration in T made to form T1 .
For example, in the previous example, a possible trace is lowin(look).lowout(xxx).
If we alter this trace to obtain highin(attack at dawn).lowin(look).lowout(xxx),
we are left with an unacceptable trace that cannot be made acceptable by inserting or deleting high-level outputs after the occurrence of the inserted high-level
input. Hence, the system fails to satisfy GNI.
The problem, again, is that GNI is not composable.
2.3.5 Restrictiveness
To create a composable security property, we must be even more restrictive. We
require that a high-level input may not change the low-level state of the system.
Therefore, the system should respond the same to a low-level input whether or
not a high-level input was made immediately before.
State Machine
Definition– A state machine consists of
1. a set of possible states,
2. a set of possible events, which might be the inputs, outputs, and internal
signals of the system,
3. a set of possible transitions;
4. an initial state (named start).
σ0 -e→ σ1 is a transition, where σ0 is the state of the machine before the transition,
e is the accompanying event for the transition, and σ1 is the state of the machine
after the transition.
σ0 -[e1 ,...,en ]→ σn is a sequence of transitions starting in σ0 and ending in σn ,
involving events e1 , ..., en .
σ0 can accept event e if σ0 -e→ σ1 for some state σ1 .
Definition– The traces of a state machine are all sequences of events γ such that
start -γ→ σ1 for some state σ1 , where start is the initial state.
Definition– A state machine is said to be input total if in any state it can
accept any input.
In an input-total state machine, one can only learn about its state by watching
its outputs; no information is conveyed to the user by accepting inputs.
Input totality is a condition for a state machine to be restrictive, but this is
not intended to imply that only such machines are secure.
Security for State Machine
Definition– If σ1 and σ2 are two states, then we say σ1 ≈ σ2 if the states
differ only in their high-level information, or in other words, if the values of all
low-level variables are the same in the two states.
Definition– If γ1 and γ2 are two sequences of events, then we say that γ1 ≈ γ2
if the two sequences agree for low-level events.
Example– a ∶ high-level, b ∶ low-level. Then
[a, b, b, a] ≈ [b, a, b, a] ≈ [b, b]
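The low-level view and the equivalence ≈ on event sequences can be sketched as follows (hypothetical Python reproducing the example above):

```python
# Hedged sketch of the view relation: two event sequences are equivalent
# iff they agree on their low-level events.
LOW = {"b"}  # hypothetical assignment: a is high-level, b is low-level

def low_view(events):
    return [e for e in events if e in LOW]

def equiv(g1, g2):
    return low_view(g1) == low_view(g2)

assert equiv(["a", "b", "b", "a"], ["b", "a", "b", "a"])
assert equiv(["b", "a", "b", "a"], ["b", "b"])
assert not equiv(["b", "b"], ["b"])
```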
Definition– A state machine is defined to be restrictive for the view determined
by ≈ if:
1. It is input total.
2. Inputs affect equivalent states equivalently.
Formally, for any states σ1 , σ1′ , and σ2 , and for any two input sequences
β1 and β2 ,
[σ1 -β1→ σ1′ ∧ σ2 ≈ σ1 ∧ β1 ≈ β2 ] ⇒ ∃σ2′ [σ2 -β2→ σ2′ ∧ σ2′ ≈ σ1′ ]
3. Equivalent states produce equivalent outputs, which lead again to equivalent states.
Formally, for any states σ1 , σ1′ , and σ2 , and for any output sequence γ1 ,
[σ1 -γ1→ σ1′ ∧ σ2 ≈ σ1 ] ⇒ ∃σ2′ , ∃γ2 [σ2 -γ2→ σ2′ ∧ σ2′ ≈ σ1′ ∧ γ2 ≈ γ1 ]
Exercise 6– Prove by induction that it is enough to consider cases in which γ1
(but not necessarily γ2 ) consists of a single event.
Hooking Up Machine
Assume A and B are two state machines. Then, hooking them up means that
some outputs of A are sent to B and vice versa.
The common events will then be communication events.
The states of the combined machine are pairs ⟨σ, ν⟩, where σ is a state of A and
ν is a state of B.
An event of a composite machine is any event from either component machine.
For any sequence of events γ from their composite machine, let γ ↑ EA be the
sequence of events engaged in by machine A. Similarly for γ ↑EB .
⟨σ, ν⟩ -γ→ ⟨σ′ , ν′ ⟩ is a valid transition of the composite machine iff σ -γ↑EA→ σ′ and
ν -γ↑EB→ ν′ are valid transitions of A and B respectively.
⟨σ, ν⟩ ≈ ⟨σ′ , ν′ ⟩ ⇔ σ ≈ σ′ ∧ ν ≈ ν′
γ ≈ γ′ ⇔ γ↑EA ≈ γ′↑EA ∧ γ↑EB ≈ γ′↑EB
Theorem– If state machines A and B are restrictive, then a composite machine
formed from hooking them up is restrictive.
Proof: (1) The composite machine is input total. If β is any sequence of inputs for
the composite machine and ⟨σ, ν⟩ is any starting state, then β↑EA and β↑EB
are sequences of inputs for A and B respectively. Since A and B are input
total, there are states σ′ and ν′ such that σ -β↑EA→ σ′ and ν -β↑EB→ ν′ . Therefore
⟨σ, ν⟩ -β→ ⟨σ′ , ν′ ⟩.
(2) Suppose ⟨σ1 , ν1 ⟩, ⟨σ1′ , ν1′ ⟩, and ⟨σ2 , ν2 ⟩ are states, β1 and β2 are input sequences, ⟨σ1 , ν1 ⟩ -β1→ ⟨σ1′ , ν1′ ⟩, ⟨σ2 , ν2 ⟩ ≈ ⟨σ1 , ν1 ⟩, and β2 ≈ β1 .
(I) A is restrictive. Thus, ∃σ2′ [σ2 -β2↑EA→ σ2′ ∧ σ2′ ≈ σ1′ ]
(II) B is restrictive. Then, ∃ν2′ [ν2 -β2↑EB→ ν2′ ∧ ν2′ ≈ ν1′ ]
Together, ⟨σ2 , ν2 ⟩ -β2→ ⟨σ2′ , ν2′ ⟩ and ⟨σ2′ , ν2′ ⟩ ≈ ⟨σ1′ , ν1′ ⟩.
(3) As stated earlier, it is sufficient to consider output sequences consisting of a single event (γ1 = [e]). Suppose
⟨σ1 , ν1 ⟩ -e→ ⟨σ1′ , ν1′ ⟩ and ⟨σ1 , ν1 ⟩ ≈ ⟨σ2 , ν2 ⟩.
Assume e is an output from A. Since A is restrictive and σ1 -e→ σ1′ and σ1 ≈ σ2 ,
then
∃σ2′ ∃γ [σ2 -γ→ σ2′ ∧ σ2′ ≈ σ1′ ∧ γ ≈ [e] ].
Since the sequence γ is an output sequence, any events shared by both A and B
must be inputs to B. Since γ ≈ [e], it follows that γ↑EB ≈ [e]↑EB . Therefore
there exists ν2′ such that ν2′ ≈ ν1′ and ν2 -γ↑EB→ ν2′ .
Thus, there exists a state ⟨σ2′ , ν2′ ⟩ such that ⟨σ2 , ν2 ⟩ -γ→ ⟨σ2′ , ν2′ ⟩ and ⟨σ2′ , ν2′ ⟩ ≈
⟨σ1′ , ν1′ ⟩.
Shortcomings of Restrictiveness
Restrictiveness is not preserved by many standard views of refinement.
Restrictiveness addresses only noise-free channels.
Example– possible traces with 0.0001 probability:
A = {lowout(0), lowout(1), highin(0).lowout(1), highin(1).lowout(0)}
Figure 2.12: RBAC reference models.
2.4 Role Based Access Control Models
The basic concept of RBAC is that users are assigned to roles, permissions are
assigned to roles, and users acquire permissions by being members of roles.
Example– The roles existing in a university are Student, Professor, Staff, etc.
A role is a job function or job title within the organization with some associated
semantics regarding the authority and responsibility conferred on a member of
the role. It can be thought of as a set of transactions a user or set of users can
perform within the context of an organization.
For example, an Instructor can present a course, enter the grades, publish his/her
lecture notes, etc.
A user is assigned to a role that allows him or her to perform only what is
required for that role.
A permission is an approval to perform an operation on one or more objects in
the system, and an operation is an executable image of a program.
Permissions are positive and denial of access is modeled as constraints rather
than negative permissions.
RBAC is a set of reference models, presented in Figure 2.12.
2.4.1 Core RBAC (RBAC0 )
Definition– RBAC0 (as shown in Figure 2.13) has the following
components:
• U, R, S, OP S, and OBS (users, roles, sessions, operations, and objects
respectively)
Figure 2.13: The components of RBAC models.
• U A ⊆ U × R (user-to-role assignment relation)
• assigned-users ∶ R → P(U ) (the mapping of role r onto a set of users.
Formally: assigned-users(r) = {u ∈ U ∣ ⟨u, r⟩ ∈ U A}.)
• P = P(OP S × OBS) (permissions)
• P A ⊆ P × R (permission-to-role assignment relation)
• assigned-permissions ∶ R → P(P ) (the mapping of role r onto a set of
permissions. Formally: assigned-permissions(r) = {p ∈ P ∣ ⟨p, r⟩ ∈ P A}.)
• user-sessions ∶ U → P(S) (the mapping of user u onto a set of sessions)
• session-user ∶ S → U (determines the user of a given session. In other
words, session-user(s) = u iff s ∈ user-sessions(u).)
• session-roles ∶ S → P(R) (a function mapping each session si to a set of
roles. Formally: session-roles(si ) ⊆ {r∣ ⟨session-user(si ), r⟩ ∈ U A})
• avail-session-perms(si ) = ⋃r∈session-roles(si ) assigned-permissions(r) (the
permissions available in session si )
Note– Assume that only a single security officer can change these components.
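These component functions can be sketched over explicit relations (hypothetical Python; the university data below is our own illustration, not from the standard):

```python
# Hedged sketch of the RBAC0 component functions over explicit relations.
UA = {("alice", "student"), ("bob", "professor")}          # user-role
PA = {(("read", "notes"), "student"),
      (("grade", "exam"), "professor")}                    # permission-role

def assigned_users(r):
    return {u for (u, r2) in UA if r2 == r}

def assigned_permissions(r):
    return {p for (p, r2) in PA if r2 == r}

# Each session maps to one user and a subset of that user's assigned roles.
session_user = {"s1": "alice"}
session_roles = {"s1": {"student"}}

def avail_session_perms(s):
    return set().union(*(assigned_permissions(r) for r in session_roles[s]))

assert assigned_users("student") == {"alice"}
assert avail_session_perms("s1") == {("read", "notes")}
```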
2.4.2 Hierarchical RBAC (RBAC1 )
RBAC1 adds the role hierarchies to RBAC0 . Role hierarchies define an inheritance relation among roles. Inheritance has been described in terms of permissions; that is, r1 inherits role r2 if all privileges of r2 are also privileges of
r1 . Note that user membership is inherited top-down, and role permissions are
inherited bottom-up.
This standard recognizes two different hierarchies.
• General role hierarchies provide support for an arbitrary partial order to
serve as the role hierarchy, to include the concept of multiple inheritances
of permissions and user membership among roles.
• Limited role hierarchies impose restrictions resulting in a simpler tree structure (i.e., a role may have one or more immediate ascendants, but is restricted to a single immediate descendant).
Note that an inverted tree is also possible. Examples of possible hierarchical
role structures are shown in Figure 2.14.
Definition– General Role Hierarchies:
• RH ⊆ R × R is a partial order on R called the inheritance relation, written
as ⪰, where r1 ⪰ r2 only if all permissions of r2 are also permissions of r1 ,
and all users of r1 are also users of r2 .
Formally: r1 ⪰ r2 ⇒ authorized-permissions(r2 ) ⊆ authorized-permissions(r1 )∧
authorized-users(r1 ) ⊆ authorized-users(r2 ).
• authorized-users ∶ R → P(U ), the mapping of role r onto a set of users in
the presence of a role hierarchy.
Formally: authorized-users(r) = {u ∈ U ∣ ∃r′ , r′ ⪰ r ∧ ⟨u, r′ ⟩ ∈ U A}.
• authorized-permissions ∶ R → P(P ), the mapping of role r onto a set of
permissions in the presence of a role hierarchy.
Formally: authorized-permissions(r) = {p ∈ P ∣ ∃r′ , r ⪰ r′ ∧ ⟨p, r′ ⟩ ∈ P A}.
Notation– r1 ≫ r2 , iff r1 ⪰ r2 ∧ ¬(∃r3 , r3 ≠ r1 ∧ r3 ≠ r2 ∧ r1 ⪰ r3 ⪰ r2 )
Definition (Limited Role Hierarchies) Previous definition with the following
limitation:
∀r, r1 , r2 ∈ R, r ≫ r1 ∧ r ≫ r2 ⇒ r1 = r2 .
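authorized-users and authorized-permissions can be sketched over an explicit hierarchy (hypothetical Python; the roles and relations below are our own illustration):

```python
# Hedged sketch of authorized-users/-permissions under a role hierarchy.
# RH lists immediate inheritance pairs: (r1, r2) means r1 >> r2.
RH = {("professor", "staff"), ("staff", "member")}
UA = {("bob", "professor"), ("carol", "staff")}
PA = {("enter-building", "member"), ("grade", "professor")}

def ge(r1, r2):
    """The reflexive-transitive closure >= of the immediate relation RH."""
    if r1 == r2:
        return True
    return any(a == r1 and ge(b, r2) for (a, b) in RH)

def authorized_users(r):       # membership is inherited top-down
    return {u for (u, r2) in UA if ge(r2, r)}

def authorized_permissions(r): # permissions are inherited bottom-up
    return {p for (p, r2) in PA if ge(r, r2)}

assert authorized_users("member") == {"bob", "carol"}
assert authorized_permissions("professor") == {"enter-building", "grade"}
```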
Figure 2.14: Different types of role hierarchies: (a) tree; (b) inverted tree; (c)
lattice.
2.4.3 Constrained RBAC (RBAC2 )
Definition– RBAC2 is unchanged from RBAC0 except for requiring that
there be a collection of constraints that determine whether or not values of
various components of RBAC0 are acceptable.
The constraint specified in the NIST standard is Separation of Duties
(SOD). SOD enforces conflict-of-interest policies employed to prevent users
from exceeding a reasonable level of authority for their position.
There are two types of SOD:
• Static SOD (based on user-role assignment),
• Dynamic SOD (based on role activation).
Definition (Static Separation of Duties) No user is assigned to n or more roles
from the same role set, where the roles in the set conflict with each other.
SSD ⊆ P(R) × N
∀⟨rs, n⟩ ∈ SSD, [n ≥ 2 ∧ ∣rs∣ ≥ n]
∀⟨rs, n⟩ ∈ SSD, ∀t ⊆ rs, [∣t∣ ≥ n ⇒ ⋂r∈t assigned-users(r) = ∅]
In the presence of role hierarchies, we should ensure that inheritance does not undermine SSD policies.
∀⟨rs, n⟩ ∈ SSD, ∀t ⊆ rs, [∣t∣ ≥ n ⇒ ⋂r∈t authorized-users(r) = ∅]
Definition (Dynamic Separation of Duties) These constraints limit the number
of roles a user can activate in a single session.
DSD ⊆ P(R) × N
∀⟨rs, n⟩ ∈ DSD, [n ≥ 2 ∧ ∣rs∣ ≥ n]
∀s ∈ S, ∀rs, rs′ ∈ P(R), ∀n ∈ N, [⟨rs, n⟩ ∈ DSD ∧ rs′ ⊆ rs ∧ rs′ ⊆ session-roles(s) ⇒ ∣rs′∣ < n]
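Operationally, DSD is enforced at role-activation time. A minimal sketch of such a check (the helper `may_activate` and the role names are invented for illustration):

```python
# DSD check: a user may not have, within one session, n or more active roles
# from a conflicting role set <rs, n> in DSD.

def may_activate(session_roles, role, dsd):
    """Return True iff activating `role` keeps every DSD constraint satisfied."""
    candidate = set(session_roles) | {role}
    for rs, n in dsd:
        if len(candidate & rs) >= n:
            return False
    return True

dsd = {(frozenset({"teller", "supervisor"}), 2)}
assert may_activate({"teller"}, "clerk", dsd) is True
assert may_activate({"teller"}, "supervisor", dsd) is False
```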
2.4.4 RBAC3 Model
RBAC3 combines RBAC1 and RBAC2 to provide both role hierarchies and
constraints.
2.5 Logics for Access Control
2.5.1 Abadi's Calculus for Access Control
At least three ingredients are essential for security in computing systems:
1. A trusted computing base: the hardware and systems software should be
capable of preserving the secrecy and integrity of data.
2. Authentication: it should be possible to determine who made a statement;
for example, a user should be able to request that his files be deleted and
to prove that the command is his, and not that of an intruder.
3. Authorization, or access control : access control consists in deciding whether
the agent that makes a statement is trusted on this statement; for example, a user may be trusted (hence obeyed) when he says that his files
should be deleted.
These ingredients are fairly well understood in centralized systems. However,
distributed systems pose new problems, due to the difficulties with scale, communication, booting, loading, authentication, and authorization.
The basic questions of authentication and access control are, always,
• who is speaking?
• who is trusted?
Typically the answer is the name of a simple principal.
Main feature of this work:
It accounts for how a principal may come to believe that another principal is making a request, either on his own or on someone else's behalf. It also provides a logical language for access control lists (ACLs).
Principals:
• Users and machines
• Channels
• Conjunctions of principals (A ∧ B)
• Groups
• Principals in roles (A as R)
• Principals on behalf of principals (B for A or B∣A).
Composite principals play a central role in reasoning in distributed systems.
For composite principals, ∧ and ∣ are primitive operations. Other operations
are defined based on the primitive operations.
Composite Principals:
• A ∧ B: A and B as cosigners. A request from A ∧ B is a request that both A and B make.
• A ∨ B: the group of which A and B are the sole members. Disjunction is often replaced with implication, in particular in dealing with groups. “A is a member of the group G” can be written A ⇒ G. Here, A is at least as powerful as G.
• A as R: the principal A in role R.
• B∣A (B quoting A): the principal obtained when B speaks on behalf of
A, not necessarily with a proof that A has delegated authority to B.
• B for A: the principal obtained when B speaks on behalf of A, with appropriate delegation certificates.
In order to define the rights of these composite principals, we develop an algebraic calculus. In this calculus, one can express equations such as
(B ∧ C) for A = (B for A) ∧ (C for A)
and then examine their consequences.
Since ∧ is the standard meet in a semilattice, we are dealing with an ordered
algebra, and we can use a partial order ⇒ among principals: A ⇒ B stands for
A = A ∧ B and means that A is at least as powerful as B; we pronounce this
“A implies B” or “A speaks for B”.
A modal logic extends the algebra of principals. In this logic, A says s represents
the informal statement that the principal A says s. Here s may function as
an imperative (“the file should be deleted”) or not (“C's public key is K”);
imperative modalities are not explicit in the formalism.
The logic also underlies a theory of ACLs. We write ⊃ for the usual logical
implication connective and A controls s as an abbreviation for (A says s) ⊃ s,
which expresses trust in A on the truth of s.
ACL: an ACL for a formula s is a list of assertions of the form A controls s.
When s is clear from context, the ACL for s may simply be presented as the
list of principals trusted on s.
If A ⇒ B and B controls s, then A controls s as well. Thus, when B is listed
in ACL, access should be granted to any member of group B such as A.
Premises: B controls s ≡ (B says s ⊃ s) and A = A ∧ B.
Claim: A controls s.
Proof: Since A = A ∧ B, we have A says s ≡ (A ∧ B) says s ≡ (A says s) ∧ (B says s), which implies B says s, which by the first premise implies s. Thus A says s ⊃ s, i.e., A controls s.
2.5.2 A Calculus of Principals
Principals form a semilattice under the operation of conjunction, and obey the
usual semilattice axioms
• ∧ is associative [i.e., (A ∧ B) ∧ C = A ∧ (B ∧ C)], commutative [i.e., A ∧ B =
B ∧ A], and idempotent [i.e., A ∧ A = A].
The principals form a semigroup under ∣:
• ∣ is associative.
The final axiom is the multiplicativity of ∣ in both of its arguments, which means:
• ∣ distributes over ∧ [i.e., A∣(B ∧ C) = A∣B ∧ A∣C and (A ∧ B)∣C =
A∣C ∧ B∣C].
In short, the axioms given for principals are those of structures known as multiplicative semilattice semigroups. A common example of a multiplicative
semilattice semigroup is an algebra of binary relations over a set, with the operations of union and composition.
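This standard example can be checked mechanically. The sketch below (the relations are toy values invented for illustration) verifies the multiplicative-semilattice-semigroup axioms for binary relations, with union playing the role of ∧ and composition the role of ∣:

```python
# Binary relations over a set form a multiplicative semilattice semigroup
# under union (as the meet ∧) and composition (as ∣).

def compose(R, S):
    """Relational composition: {(x, z) | exists y. (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

A = {(1, 2), (2, 3)}
B = {(2, 1)}
C = {(3, 1), (2, 2)}

# | distributes over ∧ in both arguments
assert compose(A, B | C) == compose(A, B) | compose(A, C)
assert compose(A | B, C) == compose(A, C) | compose(B, C)
# ∧ (union) is associative, commutative, idempotent; | is associative
assert compose(compose(A, B), C) == compose(A, compose(B, C))
assert (A | B) | C == A | (B | C) and A | B == B | A and A | A == A
```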
2.5.3 A Logic of Principals and Their Statements
Syntax: The formulas are defined inductively, as follows:
• a countable supply of primitive propositions p0 , p1 , p2 , ... are formulas;
• if s and s′ are formulas then so are ¬s and s ∧ s′ ;
• if A and B are principal expressions then A ⇒ B is a formula;
• if A is a principal expression and s is a formula then A says s is a formula.
We use the usual abbreviations for boolean connectives, such as ⊃, and we
also treat equality between principals (=) as an abbreviation. In addition,
A controls s stands for (A says s) ⊃ s.
Axioms: The basic axioms are those for normal modal logics:
• if s is an instance of a propositional-logic tautology then ⊢ s;
• if ⊢ s and ⊢ (s ⊃ s′ ) then ⊢ s′ ;
• ⊢ A says (s ⊃ s′ ) ⊃ (A says s ⊃ A says s′ );
• if ⊢ s then ⊢ A says s, for every A.
The calculus of principals is included:
• if s is a valid formula of the calculus of principals then ⊢ s.
Other axioms connect the calculus of principals to the modal logic:
• ⊢ (A ∧ B) says s ≡ (A says s) ∧ (B says s);
• ⊢ (B∣A) says s ≡ B says A says s;
• ⊢ (A ⇒ B) ⊃ ((A says s) ⊃ (B says s)).
The last axiom is equivalent to (A = B) ⊃ ((A says s) ≡ (B says s)), a substitutivity property.
Semantics: The semantics is provided by a Kripke structure M = ⟨W, w0 , I, J⟩,
where
• W is a set (as usual, a set of possible worlds);
• w0 ∈ W is a distinguished element of W ;
• I ∶ P ropositions → P(W ) is an interpretation function that maps each
proposition symbol to a subset of W (the set of worlds where the proposition symbol is true);
• J ∶ P rincipals → P(W × W ) is an interpretation function that maps each
principal symbol to a binary relation over W (the accessibility relation for
the principal symbol).
The meaning function R extends J, mapping a principal expression to a relation:
R(Ai ) = J(Ai)
R(A ∧ B) = R(A) ∪ R(B)
R(B∣A) = R(A) ○ R(B)
The meaning function E maps each formula to its extension, that is, to the set
of worlds where it is true:
E(pi ) = I(pi )
E(¬s) = W − E(s)
E(s ∧ s′ ) = E(s) ∩ E(s′ )
E(A says s) = {w∣R(A)(w) ⊆ E(s)}
E(A ⇒ B) = W if R(B) ⊆ R(A) and ∅ otherwise
where R(C)(w) = {w′ ∣wR(C)w′ }.
A formula s holds in M at a world w if w ∈ E(s), and it holds in M if it holds
at w0 . In the latter case, we write M ⊧ s, and say that M satisfies s. Moreover,
s is valid if it holds in all models; we write this ⊧ s.
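The meaning functions above are directly executable. The following sketch (the world set and accessibility relations are toy values, not the book's example) implements E(A says s) over a small Kripke structure:

```python
# Executable sketch of the presented semantics:
# E(A says s) = {w | R(A)(w) subset-of E(s)}, where R(A)(w) is the image of w.

W = {0, 1, 2}
I = {"p": {0, 1}}                                  # interpretation of propositions
J = {"A": {(0, 1), (1, 1)},                        # accessibility relations
     "B": {(0, 1), (0, 2), (1, 1)}}

def image(R, w):
    """R(C)(w) = {w' | (w, w') in R}."""
    return {v for (u, v) in R if u == w}

def E_prop(p):
    return I[p]

def E_says(principal, ext):
    """Extension of `principal says s`, given the extension ext of s."""
    return {w for w in W if image(J[principal], w) <= ext}

# From every world, A's accessible worlds lie inside E(p) (from 2: none at all).
assert E_says("A", E_prop("p")) == {0, 1, 2}
# B can reach world 2 from 0, and 2 is not in E(p), so world 0 is excluded.
assert E_says("B", E_prop("p")) == {1, 2}
```

Note that a principal with no accessible worlds "says" everything, including false; this is exactly the E(C says false) case used in Exercise 7.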
Example:
[Figure omitted: a Kripke structure over worlds w0, . . . , w7, whose atomic propositions express whether the agent is in the produce department or the meat department, whether the bananas are yellow or green, and whether the pork is fresh or spoiled.]
Soundness and Completeness: The axioms are sound, in the sense that if
⊢ s then ⊧ s. Although useful for our application, the axioms are not complete.
For example, the formula
(C says (A ⇒ B)) ≡ ((A ⇒ B) ∨ (C says false))
is valid but not provable.
Exercise 7– prove the validity of the above equivalence using the presented semantics.
On Idempotence
The idempotence of ∣ is intuitively needed:
• A∣A = A: A says A says s and A says s are equal.
• Suppose that G represents a collection of nodes, that B and C represent members of G, and that an ACL includes G∣A. By idempotence, the principal C∣B∣A obtains access. This means that multiple hops within a collection of nodes do not reduce rights and should not reduce security. In particular, by idempotence there is no need to postulate that G∣G ⇒ G, or to make sure that G∣G∣A appears in the ACL explicitly.
However, adding idempotence to the logic has some problems:
• Idempotence imposes more complexity; e.g., it yields (A ∧ B) ⇒ (B∣A) and (A ∧ B) ⇒ (A∣B) (since (A ∧ B) = (A ∧ B)∣(A ∧ B)). On a request of A ∧ B we need to check both (A∣B) and (B∣A).
• The authors were unable to find a sensible condition on binary relations that would force idempotence and would be preserved by union and composition.
Corollary: The authors preferred to do without idempotence and to rely on assumptions of the form G∣G ⇒ G.
Roles
There are many situations in which a principal may wish to reduce his powers. A principal may wish to respect the principle of least privilege, according to which the principal should have only the privileges it needs to accomplish its task.
These situations can be handled by the use of roles. A principal A may adopt a
role R and act with the identity A as R when he wants to diminish his powers.
For example, define the roles Ruser and Radmin representing a person acting as
a user and as an administrator, respectively. Suppose the ACLs in the system
include A∣Radmin controls s1 and A∣Ruser controls s2 . In her daily work, Alice
may step into her role as user by quoting Ruser ; when she needs to perform
administrative tasks, Alice can explicitly quote Radmin to gain access to objects
such as s1 that mention her administrative role.
Axioms of Roles:
For all roles the following axioms hold:
• R∣R = R (idempotency),
• R∣R′ = R′∣R (commutativity),
• A ⇒ (A as R).
These yield the following:
• A as R as R = A as R
• A as R as R′ = A as R′ as R
Roles and Groups: Roles may be related to groups; e.g., a role Grole may be related to a group G, and A as Grole means that A acts in the role of a member of G. Roles related to groups are allowed, but this relation is not formalized.
Semantics of Roles:
Definition (Identity)– A special principal 1, the identity, believes everything that is true and nothing that is not: R(1)(w) = {w}, ∀w ∈ W.
Definition (Role)– In the binary-relation model, roles are subsets of the identity relation (R(R) ⊆ R(1)), i.e., 1 ⇒ R.
A principal A in role R is written A as R, which is defined as A∣R.
Roles reduce privileges: R(R) ○ R(A) ⊆ R(A)
Figure 2.15: The semantics of roles — an arbitrary principal relation R(A), composed with a role relation R(R), gives a new relation that is always a subset of R(A).
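The containment R(R) ○ R(A) ⊆ R(A) can be checked on a toy example (the relations below are invented for illustration): composing with any subset of the identity can only discard pairs of R(A).

```python
# Roles reduce privileges: if R(R) is a subset of the identity relation,
# then R(R) composed with R(A) is a subset of R(A).

def compose(R, S):
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

RA = {(0, 1), (1, 2), (2, 0)}          # an arbitrary principal relation
RR = {(0, 0), (1, 1)}                  # a role: subset of the identity
assert compose(RR, RA) <= RA           # privileges only shrink
assert compose(RR, RA) == {(0, 1), (1, 2)}
```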
Access Control Decision
A general access control problem. The problem of making access control decisions is computationally complex; it is therefore important to understand the precise form of its instances. The parts of an instance are:
• An expression P in the calculus of principals represents the principal that is making the request. In particular, all appropriate delegations are taken into account in constructing this expression, and the various relevant certificates are presented for checking.
• A statement s represents what is being requested or asserted. The precise
nature of s is ignored; it is treated as an uninterpreted proposition symbol.
• Assumptions state implications among principals; these typically represent
assumptions about group memberships. They have the form Pi ⇒ Gi ,
where Pi is an arbitrary expression in the calculus of principals and Gi an
atom. Note that this syntax is liberal enough to write G∣G ⇒ G for every
appropriate G of interest, obtaining some of the benefit of the idempotence
axiom.
• Certain atomic symbols R0 , ..., Ri , ... are known to denote roles.
• An ACL is a list of expressions E0 , ..., Ei , ... in the calculus of principals;
these represent the principals that are trusted on s.
The basic problem of access control is deciding whether ⋀_{i}(Pi ⇒ Gi), derived from the assumptions, and ⋀_{i}(Ei controls s), derived from the ACL, imply P controls s, given the special properties of roles and of the delegation server D.
There is a proof that the problem of making access control decisions is equivalent
to the acceptance problem for alternating pushdown automata and requires
exponential time.
Chapter 3
Exercise Answers
Exercise 1: Since ⪯ is a partial order and ≤ is a total order, ⟨L × T, ⊑⟩ is a partially ordered set. Precisely:
• ⟨a, b⟩ ⊑ ⟨a, b⟩ because (a ⪯ a) and (b ≤ b)
• If ⟨a1 , b1 ⟩ ⊑ ⟨a2 , b2 ⟩ and ⟨a2 , b2 ⟩ ⊑ ⟨a3 , b3 ⟩, then
– a1 ⪯ a2 and a2 ⪯ a3 , thus a1 ⪯ a3
– b1 ≤ b2 and b2 ≤ b3 , thus b1 ≤ b3
Hence, ⟨a1 , b1 ⟩ ⊑ ⟨a3 , b3 ⟩.
• If ⟨a, b⟩ ⊑ ⟨c, d⟩ and ⟨c, d⟩ ⊑ ⟨a, b⟩, then
– a ⪯ c and c ⪯ a, thus a = c
– b ≤ d and d ≤ b, thus b = d
Hence, ⟨a, b⟩ = ⟨c, d⟩
Also, every two elements of L × T have a supremum and an infimum, given componentwise:
• GLB(⟨a, b⟩, ⟨c, d⟩) = ⟨GLB(a, c), min(b, d)⟩.
• LUB(⟨a, b⟩, ⟨c, d⟩) = ⟨LUB(a, c), max(b, d)⟩.
The above claim can be easily proven.
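As a concrete instance of this claim, take L to be the powerset lattice of {a, b} under ⊆ and T the integers under ≤; the componentwise GLB and LUB can then be checked directly (a sketch with illustrative values):

```python
# Product lattice <L x T, ⊑> with L = subsets under inclusion and
# T = integers under <=; meets and joins are taken componentwise.

def glb(x, y):
    return (x[0] & y[0], min(x[1], y[1]))

def lub(x, y):
    return (x[0] | y[0], max(x[1], y[1]))

def leq(x, y):
    """The product order: <a, b> ⊑ <c, d> iff a ⊆ c and b <= d."""
    return x[0] <= y[0] and x[1] <= y[1]

p, q = (frozenset("a"), 3), (frozenset("b"), 1)
m, j = glb(p, q), lub(p, q)
assert leq(m, p) and leq(m, q)        # the GLB is a lower bound
assert leq(p, j) and leq(q, j)        # the LUB is an upper bound
assert m == (frozenset(), 1) and j == (frozenset("ab"), 3)
```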
Exercise 2:
In what follows, ⟨X′, D′, A′⟩ is the next state of ⟨X, D, A⟩ after execution of a command.
Add access attribute r to cell Ad,x:
X′ = X, D′ = D
A′[a, b] = A[a, b] ∪ R, if a = d, b = x, R ⊂ {r, r∗}, R ≠ ∅
A′[a, b] = A[a, b], otherwise
Remove access attribute r from cell Ad,x:
X′ = X, D′ = D
A′[a, b] = A[a, b] − {r, r∗}, if a = d, b = x
A′[a, b] = A[a, b], otherwise
Copy access attribute r (or r∗) from cell Ad,x to Ad′,x:
X′ = X, D′ = D
A′[a, b] = A[a, b] ∪ R, if a = d′, b = x, r∗ ∈ A[d, x], R ⊂ {r, r∗}, R ≠ ∅
A′[a, b] = A[a, b], otherwise
Exercise 3:
For each right r in Lampson's model, we should have some rules of the following types.
Command Rule1r (d, d’, x)
if control in (d, d′ ) and
r in (d′ , x)
then
delete r from (d′ , x)
end;
Command Rule2-1r (d, d′, x)
if r∗ in (d, x)
then
enter r into (d′, x)
end;
Command Rule2-2r (d, d′, x)
if r∗ in (d, x)
then
enter r∗ into (d′, x)
end;
Command Rule3-1r (d, d′, x)
if own in (d, x)
then
enter r into (d′, x)
end;
Command Rule3-2r (d, d′, x)
if own in (d, x)
then
enter r∗ into (d′, x)
end;
Rule 4 cannot be specified in terms of HRU commands, because we would need to check the absence of the protected right. To solve the problem we can replace the protected right with its negation, i.e., a not-protected right, and add such a right to all cells of the access matrix by default. In this new model, granting the protected right becomes removing the not-protected right. Thus, we need to rewrite all of the previous rules in this new model (which is easy) and express the fourth rule of Lampson's model as follows.
Command Rule4r (d, d’, x)
if own in (d, x) and
not-protected in (d′ , x) and
r in (d, x)
then
delete r from (d′ , x)
end;
Exercise 4:
S′i = Si − {s′}, O′i = Oi − {s′}
P′i[x, y] = Pi[x, y], if x, y ≠ s
P′i[x, y] = Pi[s, y] ∪ Pi[s′, y], if x = s, y ≠ s
P′i[x, y] = Pi[x, s] ∪ Pi[x, s′], if x ≠ s, y = s
P′i[x, y] = Pi[s, s] ∪ Pi[s′, s′] ∪ Pi[s, s′] ∪ Pi[s′, s], if x, y = s
Exercise 5:
Q′i = S′i = {s}
P′i[s, s] = Pi[s, s] ∪ ⋃_{o∈O_{n−1}} Pi[s, o]
Exercise 6: We write σ −γ→ σ′ for a transition on the event sequence γ. Suppose that we have [σ1 −[e]→ σ1′ ∧ σ2 ≈ σ1] ⇒ ∃σ2′, ∃γ2 [σ2 −γ2→ σ2′ ∧ σ2′ ≈ σ1′ ∧ γ2 ≈ [e]].
We prove by induction that if the above holds for any γ1 with ∣γ1∣ = n, then it also holds for γ1′ = γ1.e where ∣γ1′∣ = n + 1.
[σ1 −γ1′→ σ1′ ∧ σ2 ≈ σ1] ⇒ ∃σ3 [σ1 −γ1→ σ3 −[e]→ σ1′ ∧ σ2 ≈ σ1]
⇒ ∃σ3′, ∃γ2 [σ2 −γ2→ σ3′ ∧ σ3′ ≈ σ3 ∧ γ2 ≈ γ1 ∧ σ3 −[e]→ σ1′] (I)
From (I) ⇒ ∃σ3′, ∃γ2 [σ2 −γ2→ σ3′ ∧ γ2 ≈ γ1] (II)
From (I) and the theorem for single events ⇒ ∃σ2′, ∃γ3 [γ3 ≈ [e] ∧ σ2′ ≈ σ1′ ∧ σ3′ −γ3→ σ2′] (III)
From (II) and (III) ⇒ ∃σ2′, ∃γ4 = γ2.γ3 [σ2 −γ4→ σ2′ ∧ γ4 ≈ γ1′ ∧ σ2′ ≈ σ1′]
Thus, the theorem holds for γ1′ where ∣γ1′∣ = n + 1.
Exercise 7: We should prove that for every model M = ⟨W, w0, I, J⟩ we have M ⊧ (C says (A ⇒ B)) ≡ ((A ⇒ B) ∨ (C says false)). Thus, we should prove that E(C says (A ⇒ B)) = E((A ⇒ B) ∨ (C says false)).
Regarding the semantics of A ⇒ B, we have E(A ⇒ B) = W or ∅.
Suppose that E(A ⇒ B) = W. Then
E((A ⇒ B) ∨ (C says false)) = E(A ⇒ B) ∪ E(C says false) = W ∪ E(C says false) = W.
Also,
E(C says (A ⇒ B)) = {w ∣ R(C)(w) ⊆ E(A ⇒ B)} = {w ∣ R(C)(w) ⊆ W} = W.
(I) Thus, in this case the equivalence holds.
Suppose that E(A ⇒ B) = ∅. Then
E((A ⇒ B) ∨ (C says false)) = E(A ⇒ B) ∪ E(C says false) = E(C says false) = {w ∣ R(C)(w) ⊆ E(false) = ∅} = {w ∣ R(C)(w) = ∅}.
Also, E(C says (A ⇒ B)) = {w ∣ R(C)(w) ⊆ E(A ⇒ B) = ∅} = {w ∣ R(C)(w) = ∅}.
(II) Thus, in this case the equivalence holds as well.
From (I) and (II), we conclude that E(C says (A ⇒ B)) = E((A ⇒ B) ∨ (C says false)).