Prepared by
Preetha K G & Jisha S Manjaly
Examples:
(1) Let A = {3, 4, 5, 6}, B = {6, 7, 8} and C = {8, 9, 7}.
Then A ∪ B = {3, 4, 5, 6, 7, 8} and B ∪ C = {6, 7, 8, 9}.
b) Intersection
A set can be formed by using all the common elements of two given sets. Such a
collection is called the intersection of the given sets.
Example: for the sets above, A ∩ B = {6} and B ∩ C = {7, 8}.
c) Disjoint sets
Two sets are disjoint if they have no element in common, i.e., their intersection is the empty set.
Example: {1, 2} and {3, 4} are disjoint.
f) Complement of a set
Let U be a universal set and A be any subset of U. The elements of U which are not in A,
i.e., U − A, form the complement of A with respect to U, written A′ = U − A = Aᶜ.
Example: if U = {1, 2, 3, 4, 5} and A = {1, 2}, then A′ = {3, 4, 5}.
g) Universal Set
In set theory, a universal set is a set which contains all objects under consideration.
h) Subset
A is a subset of a set B if A is "contained" inside B.
If A and B are sets and every element of A is also an element of B, then:
A is a subset of (or is included in) B, denoted by A ⊆ B.
If S is the set {x, y, z}, then the complete list of subsets of S is as follows:
• {}
• {x}
• {y}
• {z}
• {x, y}
• {x, z}
• {y, z}
• {x, y, z}
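The eight subsets listed above can be generated mechanically. The sketch below does this with the standard library; the function name power_set is our own choice, not from the text:

```python
from itertools import chain, combinations

def power_set(s):
    """Return all subsets of s, from the empty set {} up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

subsets = power_set({'x', 'y', 'z'})
print(len(subsets))  # 8, matching the list above
```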
l) Cartesian product
The Cartesian product of two sets X and Y denoted X × Y, is the set of all possible ordered
pairs whose first component is a member of X and whose second component is a member
of Y .
X × Y = {(x, y) | x ∈ X and y ∈ Y}
A={1,2,3}
B={2,3}
A×B={(1,2),(1,3),(2,2),(2,3),(3,2),(3,3)}
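These operations can be checked directly in Python, whose built-in set type supports union, intersection and difference, and whose comprehensions express the Cartesian product:

```python
A = {3, 4, 5, 6}
B = {6, 7, 8}

print(A | B)   # union: {3, 4, 5, 6, 7, 8}
print(A & B)   # intersection: {6}
print(A - B)   # difference: {3, 4, 5}

# Cartesian product: all ordered pairs (x, y) with x in X and y in Y
X = {1, 2, 3}
Y = {2, 3}
product = {(x, y) for x in X for y in Y}
print(sorted(product))  # [(1, 2), (1, 3), (2, 2), (2, 3), (3, 2), (3, 3)]
```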
1) Closure
A set is closed under a binary operator ∗ if a ∗ b belongs to the set for all a and b in the set.
Example: the real numbers are closed under subtraction, but the natural numbers are not.
2) Associativity
(a ∗ b) ∗ c = a ∗ (b ∗ c).
3) Existence of Identity element
For a binary operator ∗ the identity element e must satisfy a ∗ e = a and e ∗ a = a.
4) Existence of Inverse
The inverse of an element a, written a⁻¹, must satisfy a ∗ a⁻¹ = e and a⁻¹ ∗ a = e.
5) Commutativity
a∗b=b∗a
A groupoid is an algebraic structure that satisfies property 1.
A semigroup is an algebraic structure that satisfies properties 1 and 2.
Example: N with addition.
A monoid is an algebraic structure that satisfies properties 1, 2 and 3.
Example: N with addition, where 0 is the identity.
A group is an algebraic structure that satisfies properties 1, 2, 3 and 4.
Example: the integers endowed with the addition operation form a group.
An abelian group is an algebraic structure that satisfies properties 1, 2, 3, 4 and 5.
Example: Z with addition.
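For a small finite structure, all five properties can be tested exhaustively. The sketch below (the helper name check_group is ours, not from the text) verifies that {0, …, 4} under addition modulo 5 satisfies all five, i.e. forms an abelian group:

```python
def check_group(elements, op):
    """Exhaustively test the five properties for a finite set under op."""
    elems = list(elements)
    # 1) Closure: a * b stays in the set
    closed = all(op(a, b) in elements for a in elems for b in elems)
    # 2) Associativity: (a * b) * c == a * (b * c)
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in elems for b in elems for c in elems)
    # 3) Identity: some e with a * e == e * a == a for all a
    identity = next((e for e in elems
                     if all(op(a, e) == a and op(e, a) == a for a in elems)), None)
    # 4) Inverses: every a has b with a * b == b * a == identity
    inverses = identity is not None and all(
        any(op(a, b) == identity and op(b, a) == identity for b in elems)
        for a in elems)
    # 5) Commutativity: a * b == b * a
    commutative = all(op(a, b) == op(b, a) for a in elems for b in elems)
    return (closed, assoc, identity is not None, inverses, commutative)

# Z_5 under addition mod 5: all five properties hold, so it is an abelian group
print(check_group(set(range(5)), lambda a, b: (a + b) % 5))
```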
1.2 Relations
A binary relation on a set A is a collection of ordered pairs of elements of A; that is, it is a
subset of the Cartesian product A² = A × A. More generally, a binary relation between
two sets A and B is a subset of A × B.
Formal Definition:
A binary relation R is usually defined as an ordered triple (X, Y, G) where X and Y are
arbitrary sets (or classes), and G is a subset of the Cartesian product X × Y. The sets X and
Y are called the domain and codomain (target), respectively, of the relation, and G is
called its graph.
Example: if G = {(1,2),(1,3),(2,7)}, then (Z,Z, G), (R, N, G), and (N, R, G) are three
distinct relations.
• Reflexive: a relation R is reflexive if and only if everything bears R to itself, that
is, for all x in X it holds that xRx.
• Irreflexive (or strict): for all x in X it holds that not xRx. "Greater than" is an
example of an irreflexive relation.
• Symmetric: for all x and y in X it holds that if xRy then yRx. "Is a blood relative
of" is a symmetric relation, because x is a blood relative of y if and only if y is a
blood relative of x.
• Antisymmetric: for all x and y in X it holds that if xRy and yRx then x = y.
"Greater than or equal to" is an antisymmetric relation, because if x≥y and y≥x,
then x=y.
• Transitive: for all x, y and z in X it holds that if xRy and yRz then xRz. "Is an
ancestor of" is a transitive relation, because if x is an ancestor of y and y is an
ancestor of z, then x is an ancestor of z.
• Equivalence relation: a relation that is reflexive, symmetric and transitive is
called an equivalence relation.
• Quasi order: a relation that is reflexive and transitive.
• Compatibility relation: a relation that is reflexive and symmetric.
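For a finite relation stored as a set of ordered pairs, these properties can be tested mechanically. The helper names below are illustrative choices of our own:

```python
def is_reflexive(R, A):
    """R is reflexive on A if (x, x) is in R for every x in A."""
    return all((x, x) in R for x in A)

def is_symmetric(R):
    """R is symmetric if (x, y) in R implies (y, x) in R."""
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    """R is transitive if (x, y) and (y, z) in R imply (x, z) in R."""
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

A = {1, 2, 3}
# "Less than or equal to" on A: reflexive and transitive but not symmetric
R = {(1, 1), (2, 2), (3, 3), (1, 2), (1, 3), (2, 3)}
print(is_reflexive(R, A), is_symmetric(R), is_transitive(R))  # True False True
```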
1.3 Functions
Let A and B be any two sets. A relation F from A to B is called a function if for every a ∈ A
there is a unique b ∈ B such that (a, b) ∈ F.
Example: F = {(1, a), (2, b), (3, a)} is a function from {1, 2, 3} to {a, b}.
Example problem:
Let A = {1, 2, 3, 4}, B = {a, b, c}, C = {w, x, y, z}, with f from A to B and g from B to C given by
f = {(1, a), (2, a), (3, b), (4, c)}, g = {(a, x), (b, y), (c, z)}. Find g∘f.
(g∘f)(1) = g[f(1)] = x
(g∘f)(2) = g[f(2)] = x
(g∘f)(3) = g[f(3)] = y
(g∘f)(4) = g[f(4)] = z
g∘f = {(1, x), (2, x), (3, y), (4, z)}
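Since a finite function is just a set of pairs with unique first components, this composition can be reproduced with Python dictionaries:

```python
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}   # f : A -> B
g = {'a': 'x', 'b': 'y', 'c': 'z'}     # g : B -> C

# Compose: (g.f)(a) = g(f(a)) for each a in A
g_of_f = {a: g[f[a]] for a in f}
print(g_of_f)  # {1: 'x', 2: 'x', 3: 'y', 4: 'z'}
```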
something about its structure makes the job impossible. So although the very existence of
non-total computable functions means that the primitive recursive functions are already
inadequate, it’s worth remembering that there’s also something else going on: even
modulo partiality (e.g. even if we could extend every partial function into a total function by
making the latter take some arbitrary value, such as zero, on all inputs for which the former is
undefined) some computable functions just refuse to fit into the primitive recursive mould.
It turns out that we can solve this problem by introducing one more technique for
constructing new functions: the operator µ, which is variously referred to as the least search,
unbounded search or minimization operator. By introducing µ we are suddenly able to define
all computable partial functions as well as all of those computable total functions which, like
Ackermann’s, cannot be expressed through composition and primitive recursion alone. The
minimisation operator corresponds to the straightforward method of calculation where a
function is repeatedly evaluated with different input values until a desired output value is
produced — realize that there’s no guarantee in general that some chosen output value will
ever be produced, and this bad behavior is what admits us into the world of partial functions.
We can use the operator with functions of any number of arguments but it’s best to begin by
thinking about its application to unary functions. For example, we can already define the
function square(x) = multiply(x, x) (= x²), which is primitive recursive because multiply is
primitive recursive (it's defined by primitive recursion in terms of another function, add,
which is itself primitive recursive). Having the square function available suggests an
immediate way to calculate the partial function squareroot(x) = √x on a computer: we could
just write a (pseudocode) program like
function squareroot(x)
    t := 0
    while (square(t) ≠ x) do
        t := t + 1
    return t
which calculates square(0), square(1), square(2), . . . until it finds an argument for square that
produces the required value. It's obvious that such an argument only exists when our
required value is a perfect square, and equally obvious that in all other cases the loop will
keep going forever and the function will never return — in other words, squareroot is only
defined for some inputs and is therefore not total, as we expected. This idea of iteratively
searching for the first argument value for which a function produces some desired result is
the essence of minimisation. One formal description of squareroot(x) is that it's the function
which returns the least value t such that square(t) = x if such a value exists, and is undefined
otherwise; using the µ operator this is written as
squareroot(x) = µt{square(t) = x}.
µt therefore corresponds to the idea of wrapping a loop around some function applied to a
variable argument t and evaluating it for t = 0, 1, 2, . . . until it becomes equal to some
specified result, a process which is clearly easy to implement on a computer and hence on a
Turing or abacus machine. In the above usage, the result we’re searching for is provided by
the first argument (x in this case) of the new µ-defined function. This important style of µ-
application is known as inversion. We can generalize the earlier program into a higher-order
function which takes a unary function and a value as arguments (the earlier version took only
a value, since it was hard-coded to use square) and applies the inversion procedure,
function unary-invert(f, x)
    t := 0
    while (f(t) ≠ x) do
        t := t + 1
    return t
and then we get a nicer way of specifying squareroot,
function squareroot(x)
    return unary-invert(square, x)
assuming, of course, that square is defined. Notice that squareroot is the mathematical
inverse of square,
y = square(x) ⟺ x = squareroot(y), ∀x, y ∈ N
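The pseudocode above translates almost directly into Python. One caveat: a faithful µ-search diverges when no suitable t exists, so this sketch adds an artificial search limit — purely our addition so the demo always terminates, not part of the real operator:

```python
def unary_invert(f, x, limit=10_000):
    """The mu-operator: return the least t with f(t) == x.
    A true mu-search diverges when no such t exists; `limit` is our own
    addition so that this demonstration always terminates."""
    t = 0
    while f(t) != x:
        t += 1
        if t > limit:
            raise ValueError("no t <= limit with f(t) == x (undefined here)")
    return t

def square(x):
    return x * x

def squareroot(x):
    return unary_invert(square, x)

print(squareroot(49))  # 7
```

Calling squareroot on a non-square, such as 2, exhausts the search and raises, mirroring the partiality of the mathematical function.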
In general, as in this specific instance, inverting a total function doesn’t necessarily produce
another total function because the original function may not produce all possible values in its
codomain (of outputs) and so its inverse, which corresponds to running the function
“backwards”, may not accept all possible values in its domain (of inputs). Inversion by
minimisation extends fairly naturally but less memorably to binary, ternary, and generally
n-ary functions, with the inversion always being applied on the first argument. The program
for inversion of a ternary function, for example, would be
function ternary-invert(f, x, y, z)
    t := 0
    while (f(t, y, z) ≠ x) do
        t := t + 1
    return t
and generally we write g(x1, . . . , xn) = µt{f(t, x2, . . . , xn) = x1}
to define g by inversion of an n-ary function f. This constructs a new n-ary function g which
will return a value t such that f(t, x2, . . . , xn) = x1 when such a value exists, and is undefined
otherwise. The functions which can be built up from the initial functions using composition,
primitive recursion and inversion are called the partial recursive or µ-recursive functions.
They’re also sometimes just called the recursive functions, and correspond exactly to the
class of all functions which are intuitively computable. The total recursive functions are those
partial recursive functions which are total.
1.3.7 Computable and non computable functions
Computable problems are those that can be solved by an algorithm, i.e., by a Turing
machine that always halts. Example: finding the factorial of 5. Problems which cannot be
solved by any algorithm are known as non-computable problems. Example: the Turing
machine halting problem.
Computable functions
A partial function f : A → B is computable if there is a program P that computes f,
i.e., for any x ∈ A, if there exists y ∈ B such that y = f(x), then the computation P(x) halts
with output y.
Noncomputable functions
Halting function:
Given a program P and an input x, decides whether P halts on x
Halt(P, x) = 1 if P(x) halts, and 0 otherwise.
A partial function f : A → B is a subset f ⊆ A × B such that for all x ∈ A and y, z ∈ B,
(x, y) ∈ f and (x, z) ∈ f implies y = z.
A total function is a partial function f : A → B such that for all x ∈ A there exists y ∈ B
such that (x, y) ∈ f.
1.4 Diagonalization principle
• Let R be a binary relation on a set A.
• Let D be the diagonal set for R:
D = {a ∈ A | (a, a) ∉ R}.
• For each a in A, define a set Ra to be
Ra = {b ∈ A | (a, b) ∈ R}.
    a   b   c   d   e
a       ×       ×
b       ×   ×
c           ×   ×
d       ×           ×
e                   ×
Here (b, b), (c, c) and (e, e) are in the relation, so b, c and e are not in D.
Ra = {b, d}
Rb = {b, c}
Rc = {c, d}
Rd = {b, e}
Re = {e}
D = {a, d}
By construction a ∈ D if and only if a ∉ Ra, so D differs from every Ra; this is the essence
of the diagonalization principle.
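The whole construction can be replayed in a few lines of Python; the relation R below is read off the table above:

```python
A = ['a', 'b', 'c', 'd', 'e']
R = {('a', 'b'), ('a', 'd'), ('b', 'b'), ('b', 'c'),
     ('c', 'c'), ('c', 'd'), ('d', 'b'), ('d', 'e'), ('e', 'e')}

# Row sets R_a = {b | (a, b) in R}
rows = {a: {b for (x, b) in R if x == a} for a in A}

# Diagonal set D = {a | (a, a) not in R}
D = {a for a in A if (a, a) not in R}

print(sorted(D))                      # ['a', 'd']
print(all(D != rows[a] for a in A))   # D differs from every R_a: True
```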
A sentence is a string of terminal symbols. A sentential form is a mix of terminals and
non-terminals; it is an intermediate form during derivation.
Example: Let G = (V, T, S, P)
V={S,A,B},T={a,b}
With Productions in P:
1. S → AB
2. A → aA
3. A → ǫ
4. B → bB
5. B → ǫ
Then
S ⇒ AB ⇒ aAB ⇒ aaAB ⇒ aaaAB ⇒ aaaB ⇒ aaabB ⇒ aaabbB ⇒ aaabb
L(G) = {ambn | m, n ≥ 0}
or of the form S → ǫ, where S is the initial symbol and ǫ is the empty string.
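The derivation above can be simulated by applying the productions as string rewrites; the helper derive (our own name) builds aᵐbⁿ by using rule 2 m times and rule 4 n times:

```python
def derive(m, n):
    """Simulate the derivation S => AB => ... => a^m b^n."""
    s = 'AB'                         # rule 1: S -> AB
    for _ in range(m):
        s = s.replace('A', 'aA', 1)  # rule 2: A -> aA
    s = s.replace('A', '')           # rule 3: A -> e
    for _ in range(n):
        s = s.replace('B', 'bB', 1)  # rule 4: B -> bB
    s = s.replace('B', '')           # rule 5: B -> e
    return s

print(derive(3, 2))  # aaabb, the sentence derived above
```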
Type 2: Type 2 languages are those generated by context-free grammars. The languages
defined by Type 2 grammars are accepted by push-down automata. Type 2 grammars
have rules of the form A → β where A ∈ V and β ∈ (V ∪ T)*. The term “Context-Free”
comes from the fact that the non-terminal A can always be replaced by β, no matter in what
context it occurs. Context-Free Grammars are important because they are powerful
enough to describe the syntax of programming languages; in fact, almost all
programming languages are defined via Context-Free Grammars.
Type 3: Type 3 languages are those generated by regular grammars. The languages
defined by Type 3 grammars are accepted by finite state automata. There are two kinds of
regular grammar:
1. Right-linear (right-regular), with rules of the form A → aB or A → a; the
structural descriptions generated with these grammars are right-branching.
2. Left-linear (left-regular), with rules of the form A → Ba or A → a; the
structural descriptions generated with these grammars are left-branching.
Regular Grammars are commonly used to define the lexical structure of programming
languages.
Every Regular Language is Context-Free, every Context-Free Language is Context-
Sensitive and every Context-Sensitive Language is a Type 0 Language.
Language class | Grammar | Automaton | Example
Regular language | Right-linear or left-linear grammar | Deterministic or nondeterministic finite-state acceptor | a*
Context-free language | Context-free grammar | Nondeterministic pushdown automaton | aⁿbⁿ
Context-sensitive language | Context-sensitive grammar | Linear-bounded automaton | aⁿbⁿcⁿ
Recursively enumerable language | Unrestricted grammar | Turing machine | any computable function
These languages form a strict hierarchy; that is, regular languages ⊂ context-free
languages ⊂ context-sensitive languages ⊂ recursively enumerable languages.
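Simple membership tests illustrate the hierarchy's standard example languages: a* (regular), aⁿbⁿ (context-free) and aⁿbⁿcⁿ (context-sensitive). The function names below are our own:

```python
import re

def in_a_star(s):
    """Regular: strings of zero or more a's, testable with a regex."""
    return re.fullmatch(r'a*', s) is not None

def in_anbn(s):
    """Context-free: a^n b^n, n >= 0 (needs counting, beyond regular)."""
    n = len(s) // 2
    return s == 'a' * n + 'b' * n

def in_anbncn(s):
    """Context-sensitive: a^n b^n c^n, n >= 0 (beyond context-free)."""
    n = len(s) // 3
    return s == 'a' * n + 'b' * n + 'c' * n

print(in_a_star('aaa'), in_anbn('aabb'), in_anbncn('aabbcc'))  # True True True
print(in_anbn('aab'), in_anbncn('aabbc'))                      # False False
```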