
TOC-Module I

RAJAGIRI SCHOOL OF ENGINEERING AND TECHNOLOGY
Rajagiri Valley, Kochi - 39

R703- THEORY OF COMPUTATION


MODULE I

Prepared by
Preetha K G & Jisha S Manjaly


INDEX

Introduction to the Theory of Computation


1.1 Set Theory
1.1.1 Properties of a set
1.1.2 Countable and Uncountable sets
1.1.3 Equinumerous sets
1.1.4 Algebraic structure
1.2 Relations
1.3 Functions
1.3.1 One to one functions
1.3.2 Onto functions
1.3.3 Bijective functions
1.3.4 Composite functions
1.3.5 Primitive recursive functions
1.3.6 Partial recursive functions
1.3.7 Computable and non computable functions
1.4 Diagonalization principle
1.5 Formal representation of languages
1.6 Chomsky Classification


1.1 Set Theory


A set is a collection of distinct objects; Georg Cantor was the principal creator of set theory.
The elements of a set, also called its members, can be anything: numbers, people, letters
of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters.
The statement that sets A and B are equal means that they have precisely the same
members. All set operations preserve the property that each element of the set is unique.
The order in which the elements of a set are listed is irrelevant. A set can be described in
a number of different ways.
1. Roster or Tabular form
{1, 2, 3} is the set of three numbers 1, 2, and 3. { indicates the beginning of the set, and }
its end. Every object between them, separated by commas, is a member of the set. Thus
{{1, 2}, {{3}, 2}, {1}} is the set whose elements are {1, 2}, {{3}, 2} and {1}.
2. Set builder form or Rule method
A set can also be described by listing the properties that its members must satisfy. For
example, { x| 1 ≤x ≤2 and x is a real number. } represents the set of real numbers between
1 and 2, and { x| x is the square of an integer and x ≤100 } represents the set { 0, 1, 4, 9,
16, 25, 36, 49, 64, 81, 100 }.
3. By Recursion
In this representation, first, basic elements of the set are presented. Then a method is
given to generate elements of the set from known elements of the set. Thirdly a statement
is given that excludes undesirable elements from the set.
Example: the set of all numbers that leave remainder 1 when divided by 3 can be defined
recursively: the basis element is a0 = 1, and the rule an+1 = an + 3 generates new elements
from known ones.
Operation on sets
a) Union
If two sets are given, a set can be formed by using all the elements of the two sets. Such a
collection is called the union of the given sets.

Examples:
(1) Let A = {3, 4, 5, 6}, B = {6, 7, 8} and C = {8, 9, 7}.


Then A ∪ B = {3, 4, 5, 6, 7, 8}, A ∪ C = {3, 4, 5, 6, 7, 8, 9} and B ∪ C = {6, 7, 8, 9}.

b) Intersection
A set can be formed by using all the common elements of two given sets. Such a
collection is called the intersection of the given sets.

1) Let A = {3, 4, 5, 6}, B = {5, 6, 7}, C = {7, 8, 9}


Then A ∩ B = {5, 6}, B ∩ C = {7} and A ∩ C = { }.

c) Disjoint sets
Two sets are said to be disjoint if they have no element in common, i.e. A ∩ B = { }.
Example: A = {1, 2} and B = {3, 4} are disjoint, since A ∩ B = { }.

d) Difference of two sets (Relative complement)


Definition: If A and B are any two sets, then the relative complement of B in A is the set
of all elements of A which are not in B. It is denoted by A - B.


Alternate Definition: Difference of two sets A and B, A - B is a set whose elements


belong to A but not to B. A - B is called the relative complement of B w.r.t. A.
Example: If A = {3, 4, 5, 6} and B = {5, 6, 7}, then A - B = {3, 4} and B - A = {7}.

e) Symmetric Difference of two sets


If A and B are two sets, we define their symmetric difference as the set of all elements
that belong to A or to B but not to both, and we denote it by A Δ B. Thus
A Δ B = (A - B) ∪ (B - A).
f) Complement of a set
Let U be a universal set and A be any subset of U. The set of elements of U which are not
in A, i.e. U - A, is called the complement of A w.r.t. U and is written A' = U - A = Ac.
Example: If U = {1, 2, 3, 4, 5} and A = {1, 3, 5}, then A' = {2, 4}.

g) Universal Set
In set theory, a universal set, usually denoted U, is a set which contains all the objects under consideration.
h) Subset
A is a subset of a set B if A is "contained" inside B.
If A and B are sets and every element of A is also an element of B, then:
A is a subset of (or is included in) B, denoted by A ⊆ B.


i) Null set (empty set)


The set containing no elements at all is called the null set, or empty set. It is denoted by a
pair of empty braces: { }, or by the symbol ø.
j) Proper subset
A proper subset of a set A, denoted B ⊂ A, is a subset B that is strictly contained in A and so
necessarily excludes at least one member of A. The empty set is therefore a proper subset
of any nonempty set.
For example, consider the set A = {1, 2, 3}. Then {1, 2} and {1, 3} are proper subsets, while
{1, 2, 3} and {1, 4} are not.
k) Power set
The power set of S, written P(S), is the set of all subsets of S.
Example

If S is the set {x, y, z}, then the complete list of subsets of S is as follows:

• {}
• {x}
• {y}
• {z}
• {x, y}
• {x, z}
• {y, z}
• {x, y, z}

and hence the power set of S is

• P(S)={{},{x},{y},{z},{x, y}, {x, z} , {y, z},{x, y, z} }
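The power set can also be generated mechanically: for each element we either include it in a subset or leave it out. The following Python sketch (the name power_set and the use of itertools are our own choices, not part of the text) illustrates this for S = {x, y, z}.

from itertools import chain, combinations

def power_set(s):
    # All subsets of s, built by taking combinations of every possible size 0..len(s).
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

print(power_set({'x', 'y', 'z'}))   # 8 subsets, from set() up to {'x', 'y', 'z'}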

l) Cartesian product
The Cartesian product of two sets X and Y denoted X × Y, is the set of all possible ordered
pairs whose first component is a member of X and whose second component is a member
of Y .
X × Y = {(x, y) | x ∈ X and y ∈ Y}
A={1,2,3}
B={2,3}
A×B={(1,2),(1,3),(2,2),(2,3),(3,2),(3,3)}
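These operations map directly onto Python's built-in set type; the short sketch below (variable names are ours) reproduces the union, intersection, difference, symmetric difference and Cartesian product examples of this section.

A = {3, 4, 5, 6}
B = {6, 7, 8}
print(A | B)     # union: {3, 4, 5, 6, 7, 8}
print(A & B)     # intersection: {6}
print(A - B)     # difference (relative complement of B in A): {3, 4, 5}
print(A ^ B)     # symmetric difference: {3, 4, 5, 7, 8}

X = {1, 2, 3}
Y = {2, 3}
print({(x, y) for x in X for y in Y})   # Cartesian product X × Y as a set of ordered pairs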


1.1.1 Properties of a set


n(A ∪ B) = n(A) + n(B) - n(A ∩ B)
n(A - B) = n(A) - n(A ∩ B)
n(B - A) = n(B) - n(A ∩ B)
n(A ∪ B ∪ C) = n(A) + n(B) + n(C) - n(A ∩ B) - n(A ∩ C) - n(B ∩ C) + n(A ∩ B ∩ C)
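These counting identities are easy to verify on concrete sets; the snippet below checks the two-set and three-set inclusion-exclusion formulas for the small sets used earlier (the choice of sets is only an illustration).

A, B, C = {3, 4, 5, 6}, {6, 7, 8}, {8, 9, 7}

# n(A U B) = n(A) + n(B) - n(A ∩ B)
assert len(A | B) == len(A) + len(B) - len(A & B)

# n(A U B U C) = n(A) + n(B) + n(C) - n(A ∩ B) - n(A ∩ C) - n(B ∩ C) + n(A ∩ B ∩ C)
assert len(A | B | C) == (len(A) + len(B) + len(C)
                          - len(A & B) - len(A & C) - len(B & C)
                          + len(A & B & C))
print("inclusion-exclusion verified")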
1.1.2 Countable and Uncountable sets
A countable set is a set with the same cardinality as some subset of the set of natural
numbers. The name stems from the fact that the natural numbers are often called counting
numbers. Finite sets are also considered to be countable.
A set S is called countable if there exists an injective function from S to the natural
numbers.
If f is also surjective, thus making f bijective, then S is called countably infinite.
Theorem: Let S be a set. The following statements are equivalent:
1. S is countable, i.e. there exists an injective function
f: S->N.
2. Either S is empty or there exists a surjective function
f: N -> S.
3. Either S is finite or there exists a bijection
h: N->S.
A set that is not countable is called uncountable. The set of real numbers is an example of
an uncountable set. Uncountable sets are infinite but not countable.
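For example, the set of integers Z is countably infinite: listing its elements as 0, -1, 1, -2, 2, … gives a bijection from N to Z. The sketch below (the name int_of is ours) makes this correspondence explicit and checks injectivity on a small sample.

def int_of(n):
    # Bijection from N = {0, 1, 2, ...} to Z: 0 -> 0, 1 -> -1, 2 -> 1, 3 -> -2, 4 -> 2, ...
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

sample = [int_of(n) for n in range(10)]
print(sample)                            # [0, -1, 1, -2, 2, -3, 3, -4, 4, -5]
assert len(set(sample)) == len(sample)   # no two naturals map to the same integer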
1.1.3 Equinumerous sets
Two sets are equinumerous if there exists a bijection between them.
1.1.4 Algebraic structure
An algebraic structure consists of one or more sets closed under one or more
operations. Abstract algebra is primarily the study of algebraic structures and their
properties.
Properties
1) Closure
A set is said to be closed under some operation ∗ if performance of that operation on
members of the set always produces a member of the set.
If a and b are two elements in the set then a*b is also an element in the set.


Example: the real numbers are closed under subtraction, but the natural numbers are not.
2) Associativity
(a ∗ b) ∗ c = a ∗ (b ∗ c).
3) Existence of Identity element
For a binary operator ∗ the identity element e must satisfy a ∗ e = a and e ∗ a = a.
4) Existence of Inverse
Inverse of element a is a−1 must satisfy the property that a ∗ a−1 = e and a−1 ∗ a = e.
5) Commutativity
a∗b=b∗a
A groupoid is an algebraic structure that satisfies property 1.
A semigroup is an algebraic structure that satisfies properties 1 and 2.
Example: N with addition
A monoid is an algebraic structure that satisfies properties 1, 2 and 3.
A group is an algebraic structure that satisfies properties 1, 2, 3 and 4.
Example: the integers endowed with the addition operation form a group.
An abelian group is an algebraic structure that satisfies properties 1, 2, 3, 4 and 5.
Example: Z with addition
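The group axioms for (Z, +) can be sanity-checked on a finite window of integers; the sketch below is only an illustration (closure over all of Z cannot be tested exhaustively), verifying associativity, the identity 0, the inverse -a and commutativity.

from itertools import product

sample = range(-5, 6)                      # a finite window of integers
for a, b, c in product(sample, repeat=3):
    assert (a + b) + c == a + (b + c)      # associativity
for a in sample:
    assert a + 0 == a and 0 + a == a       # identity element 0
    assert a + (-a) == 0                   # inverse of a is -a
for a, b in product(sample, repeat=2):
    assert a + b == b + a                  # commutativity (abelian)
print("group axioms hold on the sample")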
1.2 Relations
A binary relation on a set A is a collection of ordered pairs of elements of A. It is a
subset of the Cartesian product A × A. More generally, a binary relation between
two sets A and B is a subset of A × B.
Formal Definition:
A binary relation R is usually defined as an ordered triple (X, Y, G) where X and Y are
arbitrary sets (or classes), and G is a subset of the Cartesian product X × Y. The sets X and
Y are called the domain and codomain (target), respectively, of the relation, and G is
called its graph.
Example: if G = {(1,2),(1,3),(2,7)}, then (Z,Z, G), (R, N, G), and (N, R, G) are three
distinct relations.
• Reflexive: a relation R is reflexive if and only if everything bears R to itself, that is,
for all x in X it holds that xRx.
• Irreflexive (or strict): for all x in X it holds that not xRx. "Greater than" is an
example of an irreflexive relation.

• Symmetric: for all x and y in X it holds that if xRy then yRx. "Is a blood relative
of" is a symmetric relation, because x is a blood relative of y if and only if y is a
blood relative of x.
• Antisymmetric: for all x and y in X it holds that if xRy and yRx then x = y.
"Greater than or equal to" is an antisymmetric relation, because if x≥y and y≥x,
then x=y.
• Transitive: for all x, y and z in X it holds that if xRy and yRz then xRz. "Is an
ancestor of" is a transitive relation, because if x is an ancestor of y and y is an
ancestor of z, then x is an ancestor of z.
• Equivalence relation: a relation that is reflexive, symmetric and transitive is called
an equivalence relation.
• Quasi order: a relation that is reflexive and transitive.
• Compatibility relation: a relation that is reflexive and symmetric.
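Since a relation on a finite set is just a set of ordered pairs, these properties can be tested directly; the helper functions below are our own names (not library calls) and sketch how the checks look in code.

def is_reflexive(X, R):
    return all((x, x) in R for x in X)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

X = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}   # an equivalence relation on X
print(is_reflexive(X, R), is_symmetric(R), is_transitive(R))   # True True True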
1.3 Functions
Let A and B be any two sets. A relation F from A to B is called a function if for every a ∈ A
there is a unique b ∈ B such that (a, b) ∈ F.
Example: A = {1, 2, 3}, B = {x, y}, F = {(1, x), (2, y), (3, x)} is a function from A to B.

1.3.1 One to one (injective) functions


A function F from A to B is said to be one-one if different elements of the domain A
have distinct images, i.e. F is one-one if F(a) = F(b) implies a = b.


1.3.2 Onto (surjective) functions


A function F from A to B is said to be onto function if each element of B is the image of
some element of A.

1.3.3 One to one correspondence (Bijective functions)


A function F from A to B is said to be bijective function if F is both one-one and onto.

Example problem:
Let A = {1, 2, 3, 4}, B = {a, b, c}, C = {w, x, y, z} with F from A to B and G from B to C given by
f = {(1, a), (2, a), (3, b), (4, c)}, g = {(a, x), (b, y), (c, z)}. Find g ∘ f.


(g ∘ f)(1) = g[f(1)] = x
(g ∘ f)(2) = g[f(2)] = x
(g ∘ f)(3) = g[f(3)] = y
(g ∘ f)(4) = g[f(4)] = z
g ∘ f = {(1, x), (2, x), (3, y), (4, z)}

1.3.4 Composite functions


If F is a function from A to B and G is a function from B to C, then the composite function,
denoted g ∘ f, is a function from A to C given by (g ∘ f)(a) = g[f(a)] for each a ∈ A.
Example:
Let A = {1, 2, 3, 4}, B = {a, b, c}, C = {w, x, y, z} with f from A to B and g from B to C given by
f = {(1, a), (2, a), (3, b), (4, c)} and g = {(a, x), (b, y), (c, z)}. Find g ∘ f.
(g ∘ f)(1) = g[f(1)] = x
(g ∘ f)(2) = g[f(2)] = x
(g ∘ f)(3) = g[f(3)] = y
(g ∘ f)(4) = g[f(4)] = z
So g ∘ f = {(1, x), (2, x), (3, y), (4, z)}.
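Because the functions in this example are finite sets of ordered pairs, the composition can be computed with dictionaries; the sketch below (compose is our own helper name) reproduces the g ∘ f computation above.

f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}
g = {'a': 'x', 'b': 'y', 'c': 'z'}

def compose(g, f):
    # (g o f)(a) = g[f(a)] for each a in the domain of f
    return {a: g[f[a]] for a in f}

print(compose(g, f))   # {1: 'x', 2: 'x', 3: 'y', 4: 'z'}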
1.3.5 Primitive recursive functions
A function is called primitive recursive if and only if it can be constructed from the basic
functions z, s and pk by successive composition and primitive recursion. To keep this
discussion simple, we will consider only functions of one or two variables, whose
domain is either I, the set of all non-negative integers, or I × I, and whose range is in I. In
this setting, we start with the basic functions:
1) Zero function
Z(x)=0 for all x є I
2) Successor function s(x) , whose value is the integer next in the sequence to x,
that is , in the usual notation , s(x) = x+1
3) Projector function
Pk(x1,x2)=xk, k=1,2.
There are two ways of building more complicated functions from these:
1) Composition, by which we construct
f(x,y)=h(g1(x,y),g2(x,y)) from defined functions g1,g2,h.


2) Primitive recursion, by which a function can be defined recursively through
f(x, 0) = g1(x)
f(x, y+1) = h(g2(x, y), f(x, y)), from defined functions g1, g2 and h.
Now we define the predecessor function
pred(0) = 0,
pred(y+1) = y,
and from it the subtraction function
sub(x, 0) = x,
sub(x, y+1) = pred(sub(x, y)).
Example:
sub(5, 3) = pred(sub(5, 2))
= pred(pred(sub(5, 1)))
= pred(pred(pred(sub(5, 0))))
= pred(pred(pred(5)))
= pred(pred(4))
= pred(3)
= 2
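The defining equations of pred and sub translate almost literally into code; the Python sketch below mirrors the recursions above (function names follow the text).

def pred(y):
    # pred(0) = 0, pred(y + 1) = y
    return 0 if y == 0 else y - 1

def sub(x, y):
    # sub(x, 0) = x, sub(x, y + 1) = pred(sub(x, y))
    return x if y == 0 else pred(sub(x, y - 1))

print(sub(5, 3))   # 2, matching the worked example
print(sub(3, 5))   # 0, since this subtraction never goes below zero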
1.3.6 Partial recursive functions
All primitive recursive functions are total because neither composition nor primitive
recursion allows us to fundamentally disrupt the behavior of the initial functions, which are
all total by their definitions. Although this is nice, composition and primitive recursion are
slightly too well-behaved to allow us to describe all of the functions which we think of as
being intuitively computable — in other words, it’s actually really easy to construct a Turing
or register machine which computes a function that can’t be described using only primitive
recursion. There exist certain computable (albeit weird and mostly useless) functions whose
corresponding Turing machines do halt on absolutely any input—they are total —but which
still cannot be represented using primitive recursion in spite of this. Ackermann’s function,
f(0, y) = y + 1,
f(x+1, 0) = f(x, 1),
f(x+1, y+1) = f(x, f(x+1, y)),
is the best example of one of these; it’s rather obviously both computable and total because
you or a Turing machine could straightforwardly evaluate it for any pair of inputs given
enough time and paper, but if you try to express it primitive recursively you’ll soon find that


something about its structure makes the job impossible. So although the very existence of
non-total computable functions means that the primitive recursive functions are already
inadequate, it’s worth remembering that there’s also something else going on: even
modulo partiality (e.g. even if we could extend every partial function into a total function by
making the latter take some arbitrary value, such as zero, on all inputs for which the former is
undefined) some computable functions just refuse to fit into the primitive recursive mould.
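The recurrence for Ackermann's function is straightforward to transcribe, which is exactly the point: it is clearly computable even though it is not primitive recursive. The rough sketch below (the name ackermann is ours) evaluates it for small arguments; the values grow so explosively that only tiny inputs are practical.

import sys
sys.setrecursionlimit(100000)   # the recursion is deep even for small arguments

def ackermann(x, y):
    if x == 0:
        return y + 1                                  # f(0, y) = y + 1
    if y == 0:
        return ackermann(x - 1, 1)                    # f(x+1, 0) = f(x, 1)
    return ackermann(x - 1, ackermann(x, y - 1))      # f(x+1, y+1) = f(x, f(x+1, y))

print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61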
It turns out that we can solve this problem by introducing one more technique for
constructing new functions: the operator µ, which is variously referred to as the least search,
unbounded search or minimization operator. By introducing µ we are suddenly able to define
all computable partial functions as well as all of those computable total functions which, like
Ackermann’s, cannot be expressed through composition and primitive recursion alone. The
minimisation operator corresponds to the straightforward method of calculation where a
function is repeatedly evaluated with different input values until a desired output value is
produced — realize that there’s no guarantee in general that some chosen output value will
ever be produced, and this bad behavior is what admits us into the world of partial functions.
We can use the operator with functions of any number of arguments but it’s best to begin by
thinking about its application to unary functions. For example, we can already define the
function square(x) = multiply(x, x) (= x²), which is primitive recursive because multiply is
primitive recursive (it’s defined by primitive recursion in terms of another function, add,
which is itself primitive recursive). Having the square function available suggests an
immediate way to calculate the partial function squareroot(x) = √x on a computer: we could
just write a (pseudocode) program like
function squareroot(x)
    t := 0
    while (square(t) ≠ x) do
        t := t + 1
    return t
which calculates square(0), square(1), square(2), . . . until it finds an argument for square that
produces the required value. It’s obvious that such an argument only exists when our
required value is a perfect square, and equally obvious that in all other cases the loop will
keep going forever and the function will never return — in other words, squareroot is only
defined for some inputs and is therefore not total, as we expected. This idea of iteratively
searching for the first argument value for which a function produces some desired result is


the essence of minimisation. One formal description of squareroot(x) is that it’s the function
which returns the least value t such that square(t) = x if such a value exists, and is undefined
otherwise; using the µ operator this is written as
squareroot(x) = µt{square(t) = x}.
µt therefore corresponds to the idea of wrapping a loop around some function applied to a
variable argument t and evaluating it for t = 0, 1, 2, . . . until it becomes equal to some
specified result, a process which is clearly easy to implement on a computer and hence on a
Turing or abacus machine. In the above usage, the result we’re searching for is provided by
the first argument (x in this case) of the new µ-defined function. This important style of µ-
application is known as inversion. We can generalize the earlier program into a higher-order
function which takes a unary function and a value as arguments (the earlier version took only
a value, since it was hard-coded to use square) and applies the inversion procedure,
function unary-invert(f, x)
    t := 0
    while (f(t) ≠ x) do
        t := t + 1
    return t
and then we get a nicer way of specifying squareroot,
function squareroot(x)
return unary-invert(square, x)
assuming, of course, that square is defined. Notice that squareroot is the mathematical
inverse of square,
y = square(x) ⟺ x = squareroot(y), for all x, y ∈ N
In general, as in this specific instance, inverting a total function doesn’t necessarily produce
another total function because the original function may not produce all possible values in its
codomain (of outputs) and so its inverse, which corresponds to running the function
“backwards”, may not accept all possible values in its domain (of inputs). Inversion by
minimisation extends fairly naturally but less memorably to binary, ternary, and generally n-
ary functions, with the inversion always being applied on the first argument. The program for
inversion of a ternary function, for example, would be
function ternary-invert(f, x, y, z)
    t := 0
    while (f(t, y, z) ≠ x) do
        t := t + 1
    return t
and generally we write g(x1, . . . , xn) = µt{f(t, x2, . . . , xn) = x1}
to define g by inversion of an n-ary function f. This constructs a new n-ary function g which
will return a value t such that f(t, x2, . . . , xn) = x1 when such a value exists, and is undefined
otherwise. The functions which can be built up from the initial functions using composition,
primitive recursion and inversion are called the partial recursive or µ-recursive functions.
They’re also sometimes just called the recursive functions, and correspond exactly to the
class of all functions which are intuitively computable. The total recursive functions are those
partial recursive functions which are total.
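The unbounded search behind µ is just a while loop; the Python sketch below mirrors the unary-invert pseudocode above and obtains squareroot by inversion of square. On an input that is not a perfect square the loop never terminates, which is exactly the partiality discussed in this section.

def square(x):
    return x * x

def unary_invert(f, x):
    # mu t { f(t) = x }: search t = 0, 1, 2, ... for the least t with f(t) = x.
    # Warning: this loops forever if no such t exists (this is where partiality enters).
    t = 0
    while f(t) != x:
        t += 1
    return t

def squareroot(x):
    return unary_invert(square, x)

print(squareroot(49))   # 7
# squareroot(50) would never return, since 50 is not a perfect square.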
1.3.7 Computable and non computable functions
Computable problems are those that can be solved by an algorithm, i.e. by a program that
halts with the answer on every valid input. Example: computing the factorial of 5.
Problems that cannot be solved by any algorithm are known as non-computable problems.
Example: the Turing machine halting problem.
Computable functions
A partial function f : A -> B is computable if there is a program P that computes f,
i.e., for any x ∈ A, if there exists y ∈ B such that y = f(x), then the computation P(x) halts
with output y.
Noncomputable functions
Halting function:
Given a program P and an input x, decides whether P halts on x
Halt(P, x) = { 1 if P(x) halts, 0 otherwise }
A partial function f : A -> B is a subset f ⊆ A × B such that for all x ∈ A and y, z ∈ B,
(x, y) ∈ f and (x, z) ∈ f implies y = z.
A total function is a partial function f : A -> B such that for all x ∈ A there exists y ∈ B
such that (x, y) ∈ f.
1.4 Diagonalization principle
• Let R be a binary relation on a set A.
• Let D be the diagonal set for R:
D = {a ∈ A | (a, a) ∉ R}.
• For each a in A, define a set Ra to be
Ra = {b ∈ A | (a, b) ∈ R}.

• Then D is distinct from each Ra.


Example
R={(a,b),(a,d),(b,b),(b,c),(c,c),(c,d),(d,b),(d,e),(e,e)}
So the corresponding matrix form is (× marks the pairs belonging to R):

    a   b   c   d   e
a       ×       ×
b       ×   ×
c           ×   ×
d       ×           ×
e                   ×
The diagonal pairs (b, b), (c, c) and (e, e) belong to R, so b, c and e are not in D;
the pairs (a, a) and (d, d) do not belong to R.
Ra={b,d}
Rb={b,c}
Rc={c,d}
Rd={b,e}
Re={e}

D ={a,d}

This D is distinct from Ra, Rb, Rc, Rd and Re.
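The diagonal construction is easy to reproduce mechanically; the sketch below (the names diagonal and rows are ours) recomputes D and each Ra for the relation above and confirms that D differs from every Ra.

A = {'a', 'b', 'c', 'd', 'e'}
R = {('a','b'), ('a','d'), ('b','b'), ('b','c'), ('c','c'),
     ('c','d'), ('d','b'), ('d','e'), ('e','e')}

diagonal = {a for a in A if (a, a) not in R}           # D = {a, d}
rows = {a: {b for b in A if (a, b) in R} for a in A}   # Ra, Rb, Rc, Rd, Re

print(sorted(diagonal))                                # ['a', 'd']
print(all(diagonal != rows[a] for a in A))             # True: D differs from every Ra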


1.5 Formal representation of languages
One key point in theory of computation is that computation is encoded into languages.
The processing of language is equivalent to computing a function.
A language is a system suitable for the expression of certain ideas, facts or concepts;
it consists of a set of symbols and rules for their manipulation.
Basic definitions:
1) Symbol: A symbol is a character or mark.
Eg: a,b,c……1,2,3….,special characters
2) Alphabet: A non-empty set of symbols is called an alphabet. It is denoted by ∑.
Eg: ∑={a,b}
3) String or word: A finite sequence of symbols from the alphabet is called a string.
Eg: Given an alphabet ∑ = {a, b}, strings formed from ∑ include aab, aba, ab, etc.


4) Concatenation of strings: The concatenation of two strings w and v is the string
obtained by appending the symbols of v to the right of w.
Eg: w = a1a2…an
v = b1b2…bm
wv = a1a2…an b1b2…bm
5) Reverse of a string: The reverse of a string is obtained by writing its symbols in
reverse order.
Eg: w = a1a2…an
wR = an…a2a1
6) Length of a string: The length of a string is the number of symbols in the string,
denoted by |w|.
7) Empty string: A string with no symbols in it is denoted by ε; |ε| = 0.
8) Substring of w: Any string of consecutive characters in w is said to be a
substring of w.
If w = vu then the substrings v and u are called a prefix and a suffix of w, respectively.
If w is a string then wn stands for the string obtained by repeating w n times; w0 = ε.
9) Set of strings: If ∑ is an alphabet, then ∑* denotes the set of all strings obtained by
concatenating zero or more symbols from ∑.
∑* always contains ε.
∑+ = ∑* - {ε}.
∑+ and ∑* are always infinite, while ∑ is finite.
10) Language over ∑: A set of strings from an alphabet is called a language. A
language L is defined as a subset of ∑*. A string in the language is called a sentence
of L.
Eg: ∑ = {a, b}
∑* = {ε, a, b, aa, ab, aab, aaa, aabb, …}
Then {a, aa, ab} is a language on ∑.
L = {anbn : n ≥ 0} is also a language.
11) Complement of a language: Denoted by L'.
L' = ∑* - L
12) Reverse of a language: The set of all string reversals, denoted by LR.
LR = {wR : w ∈ L}


13) Concatenation of two languages: Concatenation of two languages L1 and L2 is


the set of all strings obtained by concatenating any element of L1 with any element
of L2.
Eg: Let L1 = {x | x ∈ {0,1}*} and L2 = {y | y ∈ {0,1}*}
L1L2 = {xy | x ∈ L1, y ∈ L2}
Ln is defined as L concatenated with itself n times:
Ln = LLLL…L (n times)
14) Star closure of a language: L* = L0 ∪ L1 ∪ L2 ∪ …
15) Positive closure of a language: L+ = L1 ∪ L2 ∪ L3 ∪ …
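Language operations over a finite alphabet can be explored concretely by representing a language as a Python set of strings; the sketch below (the helper names concat and power are ours) computes a concatenation, Ln and a bounded slice of the star closure. L* itself is infinite, so only powers up to a chosen n are generated.

def concat(L1, L2):
    # Concatenation of two languages: every string of L1 followed by every string of L2.
    return {x + y for x in L1 for y in L2}

def power(L, n):
    # L^n: L concatenated with itself n times; L^0 contains only the empty string.
    result = {""}
    for _ in range(n):
        result = concat(result, L)
    return result

L = {"a", "ab"}
print(concat(L, L))                  # {'aa', 'aab', 'aba', 'abab'}
print(power(L, 2) == concat(L, L))   # True

star_up_to_3 = set().union(*(power(L, n) for n in range(4)))   # L0 U L1 U L2 U L3
print(sorted(star_up_to_3, key=len))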
Grammar
Grammar describes the structure of a language. A Grammar is a formalism to give a
finite representation of a Language. It defines the way in which all admissible strings
can be generated.
A Grammar G is a quadruple G= (V, T, S, P), such that:
V is a finite set of symbols called variables.
T is a finite set of terminal symbols
S ∈ V is a special start symbol.
P is a finite set of productions.
V and T are non empty and disjoint sets.
Production Rules:
Production rules are the heart of the grammar; they specify how the grammar transforms
one string into another, and through this they define the language associated with the
grammar.
Production rules are of the form x → y,
where x is an element of (V ∪ T)+ and y is an element of (V ∪ T)*.
Language generated by a grammar:
Let G = (V, T, S, P) be a grammar. Then the set L(G) = {w ∈ T* : S ⇒* w} is the language
generated by G.
If w ∈ L(G) then the sequence S ⇒ w1 ⇒ w2 ⇒ … ⇒ wn ⇒ w is a derivation of the
sentence w. The strings S, w1, w2, …, which may contain variables as well as terminals, are
called sentential forms of the derivation.


A sentence is a string of terminal symbols. A sentential form is a mix of terminals and
non-terminals; it is an intermediate form that appears during a derivation.
Example: Let G = (V, T, S, P) with
V = {S, A, B}, T = {a, b}
With Productions in P:
1. S → AB
2. A → aA
3. A → ǫ
4. B → bB
5. B → ǫ
Then
S ⇒ AB ⇒ aAB ⇒ aaAB ⇒ aaaAB ⇒ aaaB ⇒ aaabB ⇒ aaabbB ⇒ aaabb
L(G) = {ambn | m, n ≥ 0}
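The example grammar can be simulated directly: starting from S and repeatedly rewriting the leftmost variable with one of its productions yields exactly the strings a^m b^n. The rough sketch below (a breadth-first enumerator with names of our own choosing) generates all sentences of L(G) up to a chosen length.

productions = {'S': ['AB'], 'A': ['aA', ''], 'B': ['bB', '']}

def generate(max_len):
    # Breadth-first leftmost derivations from S, pruning forms whose terminal part
    # already exceeds max_len.
    sentences, frontier = set(), {'S'}
    while frontier:
        new_frontier = set()
        for form in frontier:
            i = next((k for k, c in enumerate(form) if c in productions), None)
            if i is None:
                sentences.add(form)          # no variables left: a sentence of L(G)
                continue
            for rhs in productions[form[i]]:
                new_form = form[:i] + rhs + form[i + 1:]
                if len(new_form.replace('A', '').replace('B', '')) <= max_len:
                    new_frontier.add(new_form)
        frontier = new_frontier
    return sentences

print(sorted(generate(4), key=len))   # all strings a^m b^n with m + n <= 4, shortest first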

Two grammars are equivalent if they generate the same language.


Recursive and non recursive productions:
A production is recursive if a variable on its left side also appears on its right side.
Eg: A → aAb
A production is non-recursive if the right side contains no variable from the left side.
Eg: S → Ab
1.6 Chomsky Hierarchy
The concept of Grammar Classification was introduced by Noam Chomsky in the
1950s as a way to describe the structural complexity of particular sentences of natural
language. Noam Chomsky classified these grammars and their languages into four classes,
Type 0 to Type 3, ordered by increasingly restricted forms of production and hence by
decreasing generative power.
Type 0: The most general grammars are the so-called Type 0 grammars. Type 0 languages
are those generated by unrestricted grammars, that is, recursively enumerable grammars.
The languages defined by Type 0 grammars are accepted by Turing machines.
Type 1: Type 1 languages are those generated by context sensitive grammar. The
languages defined by Type 1 grammars are accepted by linear bounded automata.

Type 1 grammars have rules of the form αAβ → αγβ, where A ∈ V, α, β ∈ (V ∪ T)* and γ ∈ (V ∪ T)+,
or of the form S → ε, where S is the initial symbol and ε is the empty string (permitted only
if S does not appear on the right side of any rule).

Type 2: Type 2 languages are those generated by context free grammar. The languages
defined by Type 2 grammars are accepted by push-down automata. Type 2 grammars
have rules of the form A → β, where A ∈ V and β ∈ (V ∪ T)*. The term “Context-Free” comes
from the fact that the non-terminal A can always be replaced by β, in no matter what
context it occurs. Context-Free Grammars are important because they are powerful
enough to describe the syntax of programming languages; in fact, almost all
programming languages are defined via Context-Free Grammars.
Type 3: Type 3 languages are those generated by regular grammars. The languages
defined by Type 3 grammars are accepted by finite state automata. There are two kinds of
regular grammar:
1. Right-linear (right-regular), with rules of the form A → aB or A → a; the
structural descriptions generated with these grammars are right-branching.
2. Left-linear (left-regular), with rules of the form A → Ba or A → a; the structural
descriptions generated with these grammars are left-branching.
Regular Grammars are commonly used to define the lexical structure of programming
languages.
Every Regular Language is Context-Free, every Context-Free Language is Context-
Sensitive and every Context-Sensitive Language is a Type 0 Language.


Summary of the language classes (grammar; accepting machine; example):

Regular language: regular grammar (right-linear or left-linear); deterministic or
nondeterministic finite-state acceptor; example a*.
Context-free language: context-free grammar; nondeterministic pushdown automaton;
example anbn.
Context-sensitive language: context-sensitive grammar; linear-bounded automaton;
example anbncn.
Recursively enumerable language: unrestricted grammar; Turing machine; example any
computable function.
These languages form a strict hierarchy; that is, regular languages ⊂ context-free
languages ⊂ context-sensitive languages ⊂ recursively enumerable languages.

