
Universal Turing machine

A universal Turing machine U is a Turing machine with a single binary one-way read-only input tape, on
which it expects to find the encoding of an arbitrary Turing machine M. The set of all Turing machine
encodings must be prefix-free so that no special end-marker or 'blank' is required to identify a code's
end. Having transferred the description of M onto its work tape, U then proceeds to simulate the behavior
of M on the remaining contents of the input tape. If M halts, then U cleans up its work tape, leaving it with
just the output of M, and halts too. If we denote by M(·) the partial function computed by machine M and
by ⟨M⟩ the encoding of machine M as a binary string, then we have U(⟨M⟩x) = M(x).
There are two kinds of universal Turing machine, depending on whether the input tape alphabet of the
simulated machine is {0, 1, #} or just {0, 1}. The first kind is a plain universal Turing machine, while
the second is a prefix universal Turing machine, which has the nice property that the set of inputs on
which it halts is prefix-free. The letter U is generally used to denote a fixed universal machine, whose
type is either mentioned explicitly or assumed clear from context.
A Turing machine is the mathematical model equivalent to a digital computer.
It was proposed by the mathematician Alan Turing in the 1930s and has since been the most widely used
model of computation in computability and complexity theory. The model consists of an input-output
relation that the machine computes. The input is given in binary form on the machine's tape, and the
output consists of the contents of the tape when the machine halts. What determines how the contents of the
tape change is a finite state machine (FSM, also called a finite automaton) inside the Turing machine.
The FSM is defined by the number of states it has and the transitions between them. At every step, the
current state and the character read on the tape determine the next state the FSM will be in, the character
that the machine will write on the tape (possibly the one read, leaving the contents unchanged), and the
direction in which the head moves, left or right.
The problem with Turing machines is that a different one must be constructed for every new computation to
be performed, that is, for every input-output relation.
This is why we introduce the notion of a universal Turing machine (UTM), which, along with the input on
the tape, takes in the description of a machine M.
The UTM can then go on to simulate M on the remaining contents of the input tape; a sketch of such a
simulation appears below. A universal Turing machine can therefore simulate any other machine.

In computer science, a universal Turing machine (UTM) is a Turing machine that can simulate an arbitrary
Turing machine on arbitrary input. The universal machine achieves this by reading both the
description of the machine to be simulated and that machine's input from its own tape.
Alan Turing introduced the machine in 1936–1937. This model is considered by some (Martin Davis
(2000)) to be the origin of the stored-program computer, used by John von Neumann (1946) for the
"Electronic Computing Instrument" that now bears von Neumann's name: the von Neumann architecture.
It is also known as a universal computing machine, or simply the universal machine U.
In terms of computational complexity, a multi-tape universal Turing machine need only be slower by a
logarithmic factor compared to the machines it simulates. Davis makes a persuasive argument that
Turing's conception of what is now known as "the stored-program computer", namely placing the "action
table" (the instructions for the machine) in the same "memory" as the input data, strongly influenced
the von Neumann architecture mentioned above.
P
A problem is in P if it admits an algorithm with worst-case time-demand in O(n^k) for some integer k.
Note that to be in P a problem just has to have some algorithm which can solve it in polynomial time.
It may also have algorithmic solutions whose time-demand grows unreasonably, as in the case of finding a
determinant, where the naïve, definition-based algorithm takes time in O(n!). But this does not change the
complexity class assignment: a determinant can also be evaluated in O(n^3) time using Gaussian
elimination, as sketched below. However, there are some problems for which it is known that there are no
algorithms which can solve them in polynomial time; these are referred to as provably intractable and as
being in the class EXPTIME (EXPonential TIME), or worse.
For these problems it has been shown that the lower bound on the time-demand of any possible algorithm
to solve them is a function that grows 'unreasonably fast'.
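As an illustration of the determinant remark, here is a Python sketch of the O(n^3) Gaussian elimination
approach; the function name and the singularity tolerance are my own choices.

    # Determinant in O(n^3) via Gaussian elimination with partial pivoting,
    # versus the O(n!) cofactor expansion of the definition.
    def determinant(a):
        a = [row[:] for row in a]            # work on a copy
        n, det = len(a), 1.0
        for col in range(n):
            # partial pivoting: pick the largest entry in this column
            pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
            if abs(a[pivot][col]) < 1e-12:
                return 0.0                   # singular matrix
            if pivot != col:
                a[col], a[pivot] = a[pivot], a[col]
                det = -det                   # a row swap flips the sign
            det *= a[col][col]
            for r in range(col + 1, n):      # eliminate entries below the pivot
                f = a[r][col] / a[col][col]
                for c in range(col, n):
                    a[r][c] -= f * a[col][c]
        return det

    print(determinant([[2, 1], [5, 3]]))     # 2*3 - 1*5 = 1.0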
Polynomial time (p-time) reduction
Consider again the examples of the Hamiltonian Circuit and Traveling Salesman Decision problems (HCP
and TSDP). Because the TSDP asks first for a valid tour (equivalent to a Hamiltonian circuit in an
undirected graph) and then requires that its length be less than some specified value, it is in some sense
'at least as hard as' the HCP. The idea of p-time reduction makes this intuition explicit by
showing that a solution to the TSDP can be converted into a solution to the HCP in a negligible (in this
context, polynomial) amount of time, so that in some sense the HCP is indeed contained within the TSDP.
To say in general that a problem A reduces in p-time to another problem B, written A ≤p B, means that
there is some procedure, taking no more than polynomial time as a function of the size of the input to A,
which
- converts an input instance of A into an input instance of B,
- allows a suitable algorithm for problem B to be executed, and
- provides a mechanism whereby the output obtained by this algorithm for problem B can be
translated back into an output for problem A.
The algorithm for problem B thus also provides a solution to problem A.
Moreover, A's solution will be obtained in a time which is in the same complexity class as the algorithm
which solves B, since the extra work needed to 'translate' is done in p-time.
Most importantly, if we know (or, in the case of NP and NPC, suspect) that we have a lower bound on
the time-demand of all possible algorithms for B, we can say that in terms of its fundamental difficulty
problem A is 'no worse than' problem B.
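The following Python sketch shows the three steps for the concrete pair above, reducing the HCP to the
TSDP. The subroutine tsdp is a hypothetical solver for problem B, assumed here rather than given in the
text; it is taken to answer whether some tour has total length at most the given bound.

    # A sketch of the p-time reduction HCP <=p TSDP, assuming a hypothetical
    # oracle tsdp(dist, bound) -> bool for the Traveling Salesman Decision problem.
    def hcp_via_tsdp(n, edges, tsdp):
        edge_set = {frozenset(e) for e in edges}
        # Step 1 (p-time): convert the HCP instance into a TSDP instance:
        # cities are vertices; distance 1 if {u, v} is an edge of G, else 2.
        dist = [[1 if frozenset((u, v)) in edge_set else 2
                 for v in range(n)] for u in range(n)]
        # Step 2: run the algorithm for problem B (the TSDP oracle).
        # Step 3: translate back: G has a Hamiltonian circuit exactly when
        # there is a tour of total length at most n (all n tour edges cost 1).
        return tsdp(dist, n)

Building the distance matrix takes O(n^2) time, so the translation is polynomial, as the definition
requires.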
NP
NP is defined as the set of all decision problems for which an algorithm exists which can be carried out by
a nondeterministic Turing machine in polynomial time. A nondeterministic Turing machine is similar to a
conventional Turing machine, but at any given step during computation it can do more than one thing at
once: at such a point it effectively bifurcates, becoming multiple Turing machines simultaneously computing
multiple possibilities.
As computation continues, a nondeterministic Turing machine can divide many times, calculating a vast
number of possibilities concurrently. No computer in the world qualifies as a true
nondeterministic Turing machine. With some minor ifs and buts, nondeterministic Turing machines are
only an abstract concept and cannot exist in reality. A sample problem falling in NP is "What integers
between 1 and q are prime?" Because a nondeterministic Turing machine can do multiple things at once,
it can check every n between 1 and q at once in the same
amount of time that a deterministic Turing machine would take to check just one value of n.
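As a sketch of this example, the work a single branch would perform on one candidate n might be the
trial-division test below (the function name is my own choice); a real, deterministic computer must run it
once per candidate rather than for all candidates at once.

    # The per-candidate check a single branch would perform; a deterministic
    # machine must instead loop over every candidate one by one.
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:            # trial division up to sqrt(n)
            if n % d == 0:
                return False
            d += 1
        return True

    q = 20
    print([n for n in range(1, q + 1) if is_prime(n)])
    # -> [2, 3, 5, 7, 11, 13, 17, 19]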
Does P equal NP?
Obviously, any nondeterministic Turing machine can masquerade as a deterministic Turing machine by
simply not splitting at any step. So, any problem solvable by a deterministic Turing machine in
polynomial time is also solvable by a nondeterministic Turing machine in polynomial time. Thus, P ⊆
NP. It is also true, though harder to prove, that a deterministic Turing machine can be made which emulates
the behavior of any nondeterministic Turing machine. However, this requires the nondeterministic Turing
machine's algorithm to be modified. This change will almost surely make the algorithm slower in absolute
terms and probably increase its complexity from polynomial time to something larger.
However, this does not tell us whether the two classes are equal or not. While nondeterministic Turing
machines appear to be vastly more powerful than deterministic Turing machines, this has never been proven.
In order to prove that P ≠ NP, we would need to exhibit a set of problems X such that:
- X falls in NP: there exists an algorithm with which a nondeterministic Turing machine can solve
problems in X in polynomial time.
- X does not fall in P: there exists no algorithm whatsoever with which a deterministic Turing machine
can solve problems in X in polynomial time.
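A deterministic machine can emulate nondeterminism by exhaustively exploring every sequence of choices, at
exponential cost. The Python sketch below (all names are illustrative) makes the blow-up visible: n binary
choices produce 2^n branches to explore.

    # Deterministically simulate a nondeterministic computation by trying
    # every sequence of choices; accept if any branch accepts.
    def det_simulate(branches, accept, choices=()):
        opts = branches(choices)
        if not opts:                       # leaf: this computation is finished
            return accept(choices)
        return any(det_simulate(branches, accept, choices + (o,)) for o in opts)

    # Example: nondeterministically guess a subset of nums summing to target.
    nums, target = [3, 5, 8, 13], 16
    def branches(ch):
        return (0, 1) if len(ch) < len(nums) else ()   # include item or not
    def accept(ch):
        return sum(n for n, bit in zip(nums, ch) if bit) == target
    print(det_simulate(branches, accept))  # True (3 + 13 = 16), after up to 2^4 branches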
NP-COMPLETE
The theory of NP-completeness is a solution to the practical problem of applying complexity theory to
individual problems. NP-complete problems are defined in a precise sense as the hardest problems in NP.
Even though we do not know whether there is any problem in NP that is not in P, we can point to an NP-
complete problem and state that if there are any hard problems in NP, that problem is one of them.
Consequently, if we believe that P and NP are unequal, and we prove that some problem is NP-complete,
we should believe that it does not have a fast algorithm. For unknown reasons, most problems we
have looked at in NP turn out either to be in P or NP-complete. Consequently, the theory of NP-
completeness turns out to be a good way of showing that a problem is likely to be hard, because it
applies to a lot of problems. But there are problems that are in NP, not known to be in P, and not known
to be NP-complete.
Reduction
Formally, NP-completeness is defined in terms of "reduction", which is just a complicated way of
saying that one problem is easier than another. We say that A is easier than B, and write A < B, if we can
write down an algorithm for solving A that uses a small number of calls to a subroutine for B (with
everything outside the subroutine calls being fast, polynomial time). There are several minor variations of
this definition depending on the detailed meaning of "small": it may be a polynomial number of calls, a
fixed constant number, or just one call. Then if A < B and B is in P, so is A: we can write down a
polynomial algorithm for A by expanding the subroutine calls to use the fast algorithm for B. So
"easier" in this context means that if one problem can be solved in polynomial time, so can the other. It is
possible for the algorithms for A to be slower than those for B, even though A < B.
As an example, consider the Hamiltonian cycle problem. Using longest path as a subroutine: for each edge
(u,v) of G, if there is a simple path of length n-1 from u to v, return yes; otherwise return no. This
algorithm makes m calls to a longest path subroutine and does O(m) work outside those subroutine calls, so
it shows that Hamiltonian cycle < longest path; a sketch appears below. It does not prove that Hamiltonian
cycle is in P, because we do not know how to solve the longest path subproblems quickly. As a second
example, consider a polynomial-time problem such as minimum spanning tree. Then for every other problem B,
B < minimum spanning tree, since there is a fast algorithm for minimum spanning trees using a subroutine
for B: we do not actually have to call the subroutine, or we can call it and ignore its results.
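Here is the same reduction as a Python sketch; longest_path stands for the hypothetical subroutine for B,
which we do not know how to implement quickly.

    # Hamiltonian cycle < longest path, assuming a hypothetical subroutine
    # longest_path(G, u, v) giving the number of edges on the longest simple
    # path from u to v in G.
    def hamiltonian_cycle(G, n, edges, longest_path):
        for (u, v) in edges:                 # m subroutine calls, O(m) extra work
            # a simple path of n-1 edges from u to v visits every vertex once;
            # adding the edge (u, v) closes it into a Hamiltonian cycle
            if longest_path(G, u, v) == n - 1:
                return True
        return False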
Cook's Theorem
We say that a problem A in NP is NP-complete when, for every other problem B in NP, B < A. This
seems like an extremely strong definition. After all, the notion of reduction we have described above seems
to imply that if B < A then the two problems are very closely related; for instance, Hamiltonian cycle and
longest path are both about finding very similar structures in graphs.
Theorem:
An NP-complete problem can be obtained by modifying the halting problem (which, without
modification, is undecidable).
NP-HARD PROBLEM
A problem  is NP—hard if a polynomial-time algorithm for  would involve a polynomial-time
algorithm for every problem in NP so In other words-  is NP—hard  If  can be solved in polynomial
time, then P=NP. Naturally, if we could solve one particular NP—hard problem quickly then we could
quickly solve any problem whose solution is simple to understand using the solution to that one particular
problem as a subroutine. NP—hard problems are at least as hard as any problem in NP. Calling a problem
is NP—hard is like saying ‘If I own a dog, then it can speak fluent English.’ If a problem is NP—hard, no
one in their right mind should believe it can be solved in polynomial time.
Finally, a problem is NP-complete if it is both NP-hard and an element of NP (or 'NP-easy').
NP-complete problems are the hardest problems in NP. If anyone finds a polynomial-time algorithm for
even one NP-complete problem, then that would imply a polynomial-time algorithm for every NP-
complete problem. Literally thousands of problems have been shown to be NP-complete, so a polynomial-
time algorithm for one (and hence all) of them seems incredibly unlikely.
It is not immediately clear that any decision problems are NP-hard or NP-complete.
NP-hardness is already a lot to demand of a problem; insisting that the problem also have a
nondeterministic polynomial-time algorithm seems almost completely unreasonable. The following
remarkable theorem was first published by Steve Cook in 1971 and independently by Leonid Levin in 1973:
circuit satisfiability is NP-complete.
Formal Definition (HC SVNT DRACONES)
A problem  is defined to be NP—hard if & ” in NP there is a polynomial-only if for every problem
Time Turing reduction from ” to —a Turing reduction just means a reduction that can be executed on
a Turing machine. Polynomial-time Turing reductions are also known as oracle reductions or Cook
reductions. It is elementary, but extremely tedious, to verify that any algorithm that can be executed on a
random- access machine5 in time T(n) can be simulated on a single-tape Turing machine in time
O(T(n)2), so polynomial-time Turing reductions do not actually need to be described using Turing
machines. Complexity-theory researchers prefer to define NP—hardness in terms of polynomial-time
many-one reductions, which are also called Karp reductions. Karp reductions are defined over
languages- sets of strings over a fixed alphabet. Every Karp reduction is a Cook reduction but not vice
versa; specifically any Karp reduction from  ” is equivalent to transforming the input to ”, invoking an
oracle (that is, ainto the input for ”,subroutine) for & then returning the answer verbatim. However as
far as we know not each Cook reduction can be simulated by a Karp reduction. Complexity theorists
prefer Karp reductions mainly because NP is closed underneath Karp reductions but is not closed
underneath Cook reductions. There are natural problems that are NP—hard with respect to Cook
reductions, but NP—hard with respect to Karp reductions only if P=NP. Alternatively, many-one
reductions apply only to decision problems (to languages); no optimization or construction problem is
Karp-NP—hard. To create things even further confusing both Cook & Karp originally defined NP—
hardness in terms of alogarithmic-space reductions. All logarithmic-space reduction is a polynomial-time
reduction but not vice versa. It is an open question whether relaxing the set of permitted (Cook or Karp)
reductions from logarithmic- space to polynomial-time changes the set of NP—hard problems.
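A toy Python sketch of the distinction, using EVEN and ODD as stand-in languages (these examples are mine,
not from the text): a Karp reduction transforms the input and returns the oracle's answer verbatim, while
a Cook reduction may call the oracle several times and post-process its answers.

    def odd_oracle(n):
        return n % 2 == 1

    # Karp (many-one) reduction EVEN <= ODD: transform the input, call the
    # oracle exactly once, and return its answer verbatim.
    def even_by_karp(n):
        return odd_oracle(n + 1)

    # Cook (Turing) reduction: free to post-process, e.g. return the
    # *negation* of the oracle's answer, which a Karp reduction cannot do.
    def even_by_cook(n):
        return not odd_oracle(n)

    print(even_by_karp(4), even_by_cook(4))   # True True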
VERTEX COVER PROBLEM
In 1972, Karp introduced a list of twenty-one NP-complete problems, one of which was the problem
of finding a minimum vertex cover in a graph. Given a graph, one must find a smallest set of vertices
such that every edge has at least one end vertex in the set. Such a set of vertices is known as a minimum
vertex cover of the graph and in general can be very hard to find. For example, try to find a minimum
vertex cover with seven vertices in the Frucht graph shown below in the figure:

We present a new polynomial-time VERTEX COVER ALGORITHM for finding minimal vertex covers
in graphs. We verify that every graph with n vertices and maximum vertex degree Δ must have a
minimum vertex cover of size at most n − ⌈n/(Δ+1)⌉ and that the algorithm will always find a vertex cover
of at most this size. Additionally, we prove that this bound is the best possible in terms of n and Δ by
explicitly constructing graphs for which the size of a minimum vertex cover is exactly n − ⌈n/(Δ+1)⌉.
For all known examples of graphs, the algorithm finds a minimum vertex cover.
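As a quick numeric sanity check, assuming the bound n − ⌈n/(Δ+1)⌉ as reconstructed above: the Frucht
graph is 3-regular with 12 vertices, and the bound is consistent with its minimum vertex cover of seven
vertices.

    import math

    n, Delta = 12, 3                         # Frucht graph: 12 vertices, 3-regular
    bound = n - math.ceil(n / (Delta + 1))   # n - ceil(n/(Delta+1))
    print(bound)                             # 9, and indeed 7 <= 9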
Definitions
We start with precise definitions of all the terminology and notation used in the following presentation.
We use the usual notation ⌊x⌋ to denote the floor function, that is, the greatest integer not greater than
x, and ⌈x⌉ to denote the ceiling function, that is, the least integer not less than x. A simple graph G
with n vertices consists of a set of vertices V, with |V| = n, and a set of edges E, such that each edge is
an unordered pair of distinct vertices. We may label the vertices of G with the integers 1, 2, …, n. If the
unordered pair of vertices {u, v} is an edge in G, we say that u is a neighbor of v and write uv ∈ E.
Neighborhood is clearly a symmetric relationship: uv ∈ E if and only if vu ∈ E. The degree of a vertex v,
denoted by d(v), is the number of neighbors of v. The maximum degree over all vertices of G is denoted
by Δ. The adjacency matrix of G is an n×n matrix with the entry in row u and column v equal to 1 if uv ∈ E
and equal to 0 otherwise.
A clique Q of G is a set of vertices such that each unordered pair of vertices in Q is an edge. An
independent set S of G is a set of vertices such that each unordered pair of vertices in S is not an edge.
A vertex cover C of G is a set of vertices such that for every edge {u, v} of G, at least one of u or v is
in C. Given a vertex cover C of G and a vertex v in C, we say that v is removable if the set C − {v} is
still a vertex cover of G. Denote by ρ(C) the number of removable vertices of a vertex cover C of G. A
minimal vertex cover has no removable vertices. A minimum vertex cover is a vertex cover with the smallest
number of vertices. Note that a minimum vertex cover is always minimal, but not necessarily vice versa.
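These definitions translate directly into code. A small Python sketch (the function names are my own) for
checking covers, removability, and ρ(C):

    # Checking the definitions on a concrete graph.
    def is_vertex_cover(C, edges):
        return all(u in C or v in C for (u, v) in edges)

    def removable(C, v, edges):
        # v is removable if C - {v} still covers every edge
        return is_vertex_cover(C - {v}, edges)

    def rho(C, edges):
        return sum(removable(C, v, edges) for v in C)

    # Example: the path 1-2-3-4; C = {2, 3} is a minimum vertex cover.
    edges = [(1, 2), (2, 3), (3, 4)]
    C = {2, 3}
    print(is_vertex_cover(C, edges), rho(C, edges))  # True 0, so C is minimal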
An algorithm is a problem-solving method suitable for implementation as a computer program.
While designing algorithms we are normally faced with a number of different approaches. For small
problems it hardly matters which approach we use, as long as it is one that solves the problem correctly.
However, there are a lot of problems for which the only known algorithms take so long to compute the
solution that they are practically useless.
A polynomial-time algorithm is one whose number of computational steps is always bounded by a
polynomial function of the size of the input. Hence, a polynomial-time algorithm is one that is actually
useful in practice. The class of all problems that have polynomial-time algorithms is denoted by P.
For some problems there are no known polynomial-time algorithms, but these problems do have
nondeterministic polynomial-time algorithms: try all candidates for solutions simultaneously and, for each
given candidate, verify whether it is a correct solution in polynomial time.

The class of all such problems is denoted by NP. Clearly P ⊆ NP.


Additionally, there are problems in NP with the property that a polynomial-time
algorithm for any one of them can be transformed (in polynomial time) into a polynomial-time algorithm
for every problem in NP. Such problems are known as NP-complete. The problem of finding a minimum vertex
cover is known to be NP-complete. So if we were able to exhibit a polynomial-time
algorithm that finds a minimum vertex cover in any graph, we could prove that
P = NP. The current algorithm is, so far as we know, a promising candidate for the task. One of the greatest
unresolved problems in mathematics and computer science today is whether P = NP or P ≠ NP [3].
The Hamiltonian Path Problem
The Hamiltonian Path Problem asks whether there is a route in a directed graph from a starting node to
an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP-complete,
attaining surprising computational complexity with modest increases in size. This challenge has
encouraged researchers to broaden the definition of a computer. DNA computers have been created that
solve NP-complete problems. Bacterial computers can be programmed by building genetic circuits to
execute an algorithm that is responsive to the environment and whose effect can be observed. Each
bacterium can examine a solution to a mathematical problem, and billions of them can examine billions of
possible solutions. Bacterial computers can be automated, made responsive to selection, and can reproduce
themselves, so that more processing capacity is applied to problems over time.
A Hamiltonian path is a simple open path that contains each vertex in a graph exactly once. The
Hamiltonian Path problem is the problem of determining whether a given graph contains a Hamiltonian
path. To show that this problem is NP-complete, we first need to show that it belongs to the
class NP, and then find a known NP-complete problem that can be reduced to Hamiltonian Path.
For a given graph G, we can solve Hamiltonian Path by nondeterministically choosing the edges from G that
are to be included in the path. Then we traverse the path and make sure that we visit each vertex exactly
once. This clearly can be done in polynomial time, and hence the problem belongs to NP. Now we have
to find an NP-complete problem that can be reduced to Hamiltonian Path.
A closely related problem is the problem of deciding whether a graph contains a Hamiltonian cycle, that is,
a Hamiltonian path that starts and ends in the same vertex. We know that Hamiltonian Cycle is
NP-complete, so we may try to reduce this problem to Hamiltonian Path. Given a graph G = (V, E), we
construct a graph G′ such that G contains a Hamiltonian cycle if and only if G′ contains a Hamiltonian
path. This is done by choosing an arbitrary vertex u in G and adding a copy, u′, of it together with all its
edges. Then add vertices v and v′ to the graph and connect v with u and v′ with u′; see the figure for an
example.

A graph G and the reduced graph G′ for the Hamiltonian path construction.
Suppose first that G contains a Hamiltonian cycle. Then we get a Hamiltonian path in G′ if we start in v,
follow the cycle that we got from G back to u′ instead of u, and finally end in v′. For instance, consider
the left graph, G, in the figure, which contains the Hamiltonian cycle 1, 2, 5, 6, 4, 3, 1. In G′ this
corresponds to the path v, 1, 2, 5, 6, 4, 3, 1′, v′.
Conversely, suppose G′ contains a Hamiltonian path. In that case, the path must necessarily have its
endpoints in v and v′. This path can be transformed to a cycle in G. Namely, if we disregard v and v′, the
path must have endpoints in u and u′, and if we remove u′ we get a cycle in G by closing the path back to
u instead of u′. The construction won't work when G is a single edge, so this has to be taken care of as a
special case. Hence, we have shown that G contains a Hamiltonian cycle if and only if G′ contains a
Hamiltonian path, which concludes the proof that Hamiltonian Path is NP-complete.
In the other direction, modify our graph by adding an additional node with edges to all the nodes in the
original graph. If the original graph has a Hamiltonian Path, the new graph will have a Hamiltonian
Circuit: the circuit will run from the new node to the start node of the Path, through all the nodes along
the Path, and back to the new node. If the original graph does not have a Hamiltonian Path, there can be no
Hamiltonian Circuit in the new graph. There is clearly not one starting from the new node (no edge from the
new node can lead to a Path through the graph which permits a return to the new node). Nor is there a
possible Circuit starting from any node in the original graph, because at best the new node would make a
Path, but not a Circuit, in the new graph. If there is no Path in the original, there are at least two
"gaps" between nodes that would have to be bridged to create a Circuit; adding the new node could only, at
best, bridge one of these, producing a Path but not a Circuit. A sketch of the cycle-to-path construction
appears below.
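A Python sketch of the G to G′ construction described above, with illustrative labels u′, v, v′ for the
three added vertices (assuming the original vertices use other labels):

    # Build G' from G: duplicate the chosen vertex u as u' (inheriting u's
    # edges), then attach fresh endpoints v and v' to u and u' respectively.
    def reduce_cycle_to_path(vertices, edges, u):
        u2, v, v2 = "u'", "v", "v'"            # labels assumed not used in G
        new_vertices = list(vertices) + [u2, v, v2]
        new_edges = list(edges)
        for (a, b) in edges:                   # u' inherits every edge of u
            if a == u:
                new_edges.append((u2, b))
            elif b == u:
                new_edges.append((a, u2))
        new_edges += [(v, u), (v2, u2)]        # forced endpoints of any Ham. path
        return new_vertices, new_edges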
Hamiltonian Path
A Hamiltonian path in an undirected graph G = (V, E) is a path of |V| − 1 edges that visits each vertex
(counting the start and end vertices) exactly once.
Hamiltonian cycle
A Hamiltonian cycle in an undirected graph G = (V, E) is a cycle of |V| edges that begins at some vertex v,
visits every other vertex exactly once, and returns to v.
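Both definitions can be checked mechanically. A short Python sketch (function names are my own), where a
cycle is represented as a vertex sequence that does not repeat its starting vertex:

    # Verify the two definitions directly on a candidate vertex sequence.
    def is_hamiltonian_path(path, vertices, edges):
        E = {frozenset(e) for e in edges}
        return (sorted(path) == sorted(vertices)            # each vertex once
                and all(frozenset((a, b)) in E              # consecutive edges exist
                        for a, b in zip(path, path[1:])))

    def is_hamiltonian_cycle(path, vertices, edges):
        # a cycle of |V| edges: a Hamiltonian path whose endpoints are adjacent
        E = {frozenset(e) for e in edges}
        return (is_hamiltonian_path(path, vertices, edges)
                and frozenset((path[-1], path[0])) in E)

    V, E = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]
    print(is_hamiltonian_path([1, 2, 3, 4], V, E))   # True
    print(is_hamiltonian_cycle([1, 2, 3, 4], V, E))  # True (edge {4, 1} closes it)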
