
Local Search Algorithms & Optimization Problems

Hill Climbing

Hill climbing is an optimization technique which belongs to the family of local search. It
is relatively simple to implement, making it a popular first choice. Although more
advanced algorithms may give better results, in some situations hill climbing works well.

Hill climbing can be used to solve problems that have many solutions, some of which are
better than others. It starts with a random (potentially poor) solution, and iteratively
makes small changes to the solution, each time improving it a little. When the algorithm
cannot see any improvement anymore, it terminates. Ideally, at that point the current
solution is close to optimal, but it is not guaranteed that hill climbing will ever come
close to the optimal solution.

For example, hill climbing can be applied to the traveling salesman problem. It is easy to
find a solution that visits all the cities but is very poor compared to the optimal
solution. The algorithm starts with such a solution and makes small improvements to it,
such as switching the order in which two cities are visited. Eventually, a much better
route is obtained.
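
To make this concrete, here is a minimal Python sketch of hill climbing for the TSP, using segment reversal (a 2-opt-style move) as the "small change". The distance matrix, function names, and fixed iteration budget are illustrative assumptions, not something given in the text.

import random

def tour_length(tour, dist):
    # Total length of a closed tour, given a distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def hill_climb_tsp(dist, iterations=20000):
    # Start from a random (potentially poor) tour and repeatedly try
    # reversing a segment; keep the change only if the tour gets shorter.
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best = tour_length(tour, dist)
    for _ in range(iterations):
        i, j = sorted(random.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        length = tour_length(candidate, dist)
        if length < best:
            tour, best = candidate, length
    return tour, best

# Hypothetical 4-city instance: dist[a][b] is the distance from a to b.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(hill_climb_tsp(dist))

A fixed iteration budget is used here for simplicity; a closer match to the description above would stop as soon as no candidate move improves the tour.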

Hill climbing is used widely in artificial intelligence, for reaching a goal state from a
starting node. Choice of next node and starting node can be varied to give a list of related
algorithms.

Hill climbing attempts to maximize (or minimize) a function f(x), where x ranges over
discrete states. These states are typically represented by vertices in a graph, where edges
encode the nearness or similarity of states. Hill climbing follows the graph from
vertex to vertex, always locally increasing (or decreasing) the value of f, until a local
maximum (or local minimum) xm is reached. Hill climbing can also operate on a
continuous space: in that case, the algorithm is called gradient ascent (or gradient descent
if the function is minimized).
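
In the continuous case the same loop becomes gradient ascent. A minimal sketch, assuming we are handed the derivative f'(x) directly; the step size and tolerance are illustrative choices:

def gradient_ascent(f_prime, x0, step=0.1, tol=1e-8, max_iter=10000):
    # Follow the derivative uphill until the gradient (nearly) vanishes.
    x = x0
    for _ in range(max_iter):
        g = f_prime(x)
        if abs(g) < tol:        # approximately at a local maximum
            break
        x += step * g           # move in the direction of increase
    return x

# Maximize f(x) = -(x - 3)^2, whose derivative is -2(x - 3); maximum at x = 3.
print(gradient_ascent(lambda x: -2 * (x - 3), x0=0.0))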

Problems with hill climbing: local maxima (we've climbed to the top of the hill, and
missed the mountain), plateau (everything around is about as good as where we are),
ridges (we're on a ridge leading up, but we can't directly apply an operator to improve our
situation, so we have to apply more than one operator to get there).

Solutions include: backtracking, making big jumps (to handle plateaus or poor local
maxima), applying multiple rules before testing (helps with ridges).

Hill climbing is best suited to problems where the heuristic gradually improves the closer
it gets to the solution; it works poorly where there are sharp drop-offs. It assumes that
local improvement will lead to global improvement.
Local maxima

A problem with hill climbing is that it will find only local maxima. Unless the heuristic is
convex, it may not reach a global maximum. Other local search algorithms, such as
stochastic hill climbing, random walks, and simulated annealing, try to overcome this
problem. It can also be mitigated by random-restart hill climbing, which repeats the
search from different randomly chosen starting points.

Ridges

A ridge is a curve in the search space that leads to a maximum, but the orientation of the
ridge relative to the available moves used to climb is such that each single move leads
to a lower point. In other words, each point on a ridge looks to the algorithm like a
local maximum, even though the point is part of a curve leading to a better optimum.

Plateau

Another problem with hill climbing is that of a plateau, which occurs when we reach a
"flat" part of the search space, i.e. a region where the heuristic values are all very close
together. This kind of flatness can cause the algorithm to cease progress and wander
aimlessly.

Steepest Ascent

Hill climbing in which you generate all successors of the current state and choose the best
one. Many texts treat this steepest-ascent variant and plain hill climbing as the same algorithm.

Branch and Bound


Generally, in search we want to find the move that results in the lowest cost (or highest
payoff, depending on the problem). Branch and bound techniques rely on the idea that we
can partition our choices into sets using some domain knowledge, and ignore a set when
we can determine that the optimal element cannot be in it. In this way we can avoid
examining most elements of most sets. This is possible if we know that an upper bound
on set X is lower than a lower bound on set Y (in which case Y can be pruned, assuming
we are minimizing).

Example: Travelling Salesman Problem. We decompose our set of choices into a set of
sets, in each one of which we've taken a different route out of the current city. We
continue to decompose until we have complete paths in the graph. If while we're
decomposing the sets, we find two paths that lead to the same node, we can eliminate the
more expensive one.

Best-first B&B is a variant in which we can give a lower bound on a set of possible
solutions. In every cycle, we branch on the class with the least lower bound. When a
singleton is selected we can stop.

Depth-first B&B selects the most recently generated set; it produces DFS behavior but
saves memory.

Some types of branch-and-bound algorithms: A*, AO*, alpha-beta, SSS*, B*.

Best-First Search

Expand the node that has the best evaluation according to the heuristic function. An
OPEN list contains states that haven't been visited; a CLOSED list contains those that
have, to prevent loops. This approach doesn't necessarily find the shortest path.

(When the evaluation function is just the path cost g, this is uniform-cost (blind) search.
When it is just h', the estimated cost to the goal, this is greedy best-first search. When it
is g + h', this is A*.)

Local search = use a single current state and move to neighboring states.

Advantages:

– Uses very little memory.

– Often finds reasonable solutions in large or infinite state spaces.

Local search algorithms are also useful for pure optimization problems:

– Find the best state according to some objective function.

– e.g. survival of the fittest as a metaphor for optimization.

Example: n-queens

• Put n queens on an n × n board with no two queens on the same row,
column, or diagonal.
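
For n-queens, the usual objective for local search is the number of attacking pairs, which the search tries to drive to zero. A small sketch of that evaluation function, assuming the common encoding in which board[r] holds the column of the queen in row r (so rows never conflict by construction):

def conflicts(board):
    # Count attacking queen pairs (same column or same diagonal).
    n = len(board)
    count = 0
    for r1 in range(n):
        for r2 in range(r1 + 1, n):
            same_col = board[r1] == board[r2]
            same_diag = abs(board[r1] - board[r2]) == r2 - r1
            if same_col or same_diag:
                count += 1
    return count

print(conflicts([1, 3, 0, 2]))   # a valid 4-queens placement: 0 conflicts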

Hill Climbing:

Search methods based on hill climbing get their name from the way the nodes are
selected for expansion. At each point in the search path, the successor node that appears
to lead most quickly to the top of the hill (the goal) is selected for exploration. This
method requires that some information be available with which to evaluate and order the
most promising choices. Hill climbing is like depth-first search in which the most
promising child is selected for expansion.

Hill climbing is a variant of generate-and-test in which feedback from the test
procedure is used to help the generator decide which direction to move in the search
space. Hill climbing is often used when a good heuristic function is available for
evaluating states but no other useful knowledge is available. For example,
suppose you are in an unfamiliar city without a map and you want to get downtown.
You simply aim for the tall buildings. The heuristic function is just the distance between
the current location and the location of the tall buildings, and the desirable states are
those in which this distance is minimized.
Simple Hill Climbing:

The simplest way to implement hill climbing is simple hill climbing, whose
algorithm is given below:

Algorithm: Simple Hill Climbing

Step 1: Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise continue with the initial state as the current state.

Step 2: Loop until a solution is found or until there are no new operators left to
be applied in the current state:

(a) Select an operator that has not yet been applied to the current
state and apply it to produce a new state.

(b) Evaluate the new state.

(i) If it is a goal state, then return it and quit.

(ii) If it is not a goal state but it is better than the current
state, then make it the current state.

(iii) If it is not better than the current state, then continue
in the loop.
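
The steps above translate directly into code. A minimal sketch, assuming the problem supplies successor-generation, evaluation, and goal-test functions (all of these names are hypothetical):

def simple_hill_climbing(start, successors, value, is_goal):
    # Take the first successor that improves on the current state
    # (steps 2(a)-2(b) above); stop when none does.
    current = start
    while not is_goal(current):
        for state in successors(current):
            if is_goal(state):
                return state
            if value(state) > value(current):   # first improvement wins
                current = state
                break
        else:
            return current   # no operator produced a better state
    return current
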
The key difference between this algorithm and the one we gave for generate-and-test
is the use of an evaluation function as a way to inject task-specific knowledge into
the control process. It is the use of such knowledge that makes this a heuristic search
method, and it is the same knowledge that gives these methods their power to solve some
otherwise intractable problems.

To see how hill climbing works, consider the puzzle of the four colored blocks. To solve
the problem we first need to define a heuristic function that describes how close a
particular configuration is to being a solution. One such function is simply the sum of the
number of different colors on each of the four sides; a solution to the puzzle will have a
value of 16. Next we need to define a set of rules that describe ways of transforming one
configuration into another. Actually, one rule will suffice: pick a block and rotate it 90
degrees in any direction. Having provided these definitions, the next step is to generate a
starting configuration, either at random or with the aid of the heuristic function. Hill
climbing then generates a new state by selecting a block and rotating it. If the resulting
state is better, we keep it; if not, we return to the previous state and try a different
perturbation.

Steepest – Ascent Hill Climbing:

A useful variation on simple hill climbing considers all the moves from the
current state and selects the best one as the next state. This method is called steepest-
ascent hill climbing, or gradient search. It contrasts with the basic method, in which the
first state that is better than the current state is selected. The algorithm works as follows.

Algorithm: Steepest-Ascent Hill Climbing

Step 1: Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise, continue with the initial state as the current state.

Step 2: Loop until a solution is found or until a complete iteration produces no change
to the current state:

(a) Let SUCC be a state such that any possible successor of the current
state will be better than SUCC (i.e., initialize SUCC to a worst-possible value).

(b) For each operator that applies to the current state do:

i. Apply the operator and generate a new state.
ii. Evaluate the new state. If it is a goal state, then return it
and quit. If not, compare it to SUCC. If it is better, then
set SUCC to this state. If it is not better, leave SUCC
alone.

(c) If SUCC is better than the current state, then set the current state
to SUCC.
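
The contrast with simple hill climbing shows up clearly in code: every successor is generated and evaluated before any move is made. A sketch under the same illustrative assumptions as the previous one:

def steepest_ascent(start, successors, value, is_goal):
    # Move to the best successor (SUCC above); stop when no successor
    # beats the current state.
    current = start
    while not is_goal(current):
        succ = None
        for state in successors(current):
            if is_goal(state):
                return state
            if succ is None or value(state) > value(succ):
                succ = state                 # best successor so far
        if succ is None or value(succ) <= value(current):
            return current                   # local maximum, plateau, or ridge
        current = succ
    return current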
To apply steepest-ascent hill climbing to the colored blocks problem, we must
consider all perturbations of the initial state and choose the best. For this problem that is
difficult, since there are so many possible moves. There is a trade-off to consider when
deciding which method will work better for a particular problem: the time required to
select a move is usually longer for steepest-ascent hill climbing, while the number of
moves required to reach a solution is usually larger for basic hill climbing.

Problems in Hill Climbing

Both basic and steepest-ascent hill climbing may fail to find a solution. Either
algorithm may terminate not by finding a goal state but by reaching a state from which
no better states can be generated. This will happen if the program has reached a
local maximum, a plateau, or a ridge.
Local Maximum: A local maximum is a state that is better than all its neighbors but is
not better than some other states farther away.

Plateau: A plateau is a flat area of the search space in which a whole set of neighboring
states have the same value.

Ridge: A ridge is a special kind of local maximum. It is an area of the search space that
is higher than surrounding areas and that itself has a slope.

Branch and bound technique:

The branch and bound technique works as follows. Begin generating complete paths,
keeping track of the shortest path found so far. Give up exploring any path as soon as its
partial length becomes greater than the shortest path found so far. Using this technique
we are still guaranteed to find the shortest path. Although this algorithm is more efficient
than exhaustive search, it still requires exponential time; the exact amount of time it
saves for a particular problem depends on the order in which the paths are explored. It
therefore remains inadequate for solving large problems.

The branch and bound search strategy applies to problems having a graph search
space where more than one alternative path may exist between two nodes. This strategy
saves all path lengths from a node to all generated nodes and chooses the shortest path for
further expansion. It then compares the new path lengths with all old ones and again
chooses the shortest path for expansion. In this way, any path to a goal node is certain to
be a minimal-length path.

Algorithm: Branch and Bound Technique

Step 1: Place the start node, with zero path length, on the queue.

Step 2: Until the queue is empty or a goal node has been found:
(a) determine whether the first path in the queue contains a goal node;
(b) if the first path contains a goal node, exit with success;
(c) if the first path does not contain a goal node, remove the path from the
queue and form new paths by extending the removed path by one step;
(d) compute the cost of the new paths and add them to the queue;
(e) sort the paths on the queue with lowest-cost paths in front.

Step 3: Otherwise, exit with failure.
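
A minimal sketch of this queue-based procedure, assuming the graph is given as an adjacency dict of edge costs (the graph format and node labels are illustrative). Using a priority queue makes the sorting in step 2(e) implicit:

import heapq

def branch_and_bound(graph, start, goal):
    # Expand the cheapest partial path first; any partial path already
    # costlier than the best complete path found so far is pruned.
    queue = [(0, [start])]                 # (cost so far, path)
    best_cost, best_path = float("inf"), None
    while queue:
        cost, path = heapq.heappop(queue)
        if cost >= best_cost:
            continue                       # bound: cannot beat best path
        node = path[-1]
        if node == goal:
            best_cost, best_path = cost, path
            continue
        for nxt, step in graph[node].items():
            if nxt not in path:            # avoid revisiting nodes on this path
                heapq.heappush(queue, (cost + step, path + [nxt]))
    return best_path, best_cost

g = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(branch_and_bound(g, "A", "D"))       # -> (['A', 'B', 'C', 'D'], 4)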

OR Graphs

It is sometimes important to search a graph instead of a tree so that duplicate paths will
not be pursued. An algorithm to do this operates by searching a directed graph in which
each node represents a point in the problem space. Each node contains, in addition to a
description of the problem state it represents, an indication of how promising it is, a
parent link that points back to the best node from which it came, and a list of the nodes
that were generated from it. The parent link makes it possible, if a better path is found to
an already existing node, to propagate the improvement down to its successors. We will
call a graph of this sort an OR graph, since each of its branches represents an alternative
problem-solving path.

To implement such a graph-search procedure, we will need two lists of nodes,
given below:

• OPEN: OPEN consists of nodes that have been generated and
have had the heuristic function applied to them, but which have not
yet been examined. OPEN is actually a priority queue in which the
elements with the highest priority are those with the most
promising value of the heuristic function. Standard techniques for
manipulating priority queues can be used to manipulate the list.

• CLOSED: CLOSED consists of nodes that have already been
examined. We need to keep these nodes in memory if we want to
search a graph rather than a tree, since whenever a new node is
generated, we need to check whether it has been generated before.

We will also need a heuristic function that estimates the merits of each node we
generate. This will enable the algorithm to search more promising paths first.

Best First Search:

Best-first search depends on the use of a heuristic to select the most promising paths
to the goal node. The algorithm retains all estimates computed for previously generated
nodes and makes its selection based on the best among them all. Thus, at any point in the
search process, best-first search moves forward from the most promising of all the nodes
generated so far. In doing so, it avoids potential traps encountered in hill climbing.

Best-first search is one way of combining the advantages of depth-first search
and breadth-first search into a single method. It follows a single path at a
time, but switches paths whenever some competing path looks more promising than the
current one does. This is done by applying an appropriate heuristic function to each of
them. We then expand the chosen node by using the rules to generate its successors. If
one of them is a solution, we can quit. If not, all those new nodes are added to the set of
nodes generated so far. Again the most promising node is selected and the process
continues.
The actual operation of the algorithm is very simple. It proceeds in steps,
expanding one node at each step, until it generates a node that corresponds to a goal state.
At each step, it picks the most promising of the nodes that have so far been generated but
not expanded. It generates the successors of the chosen node, applies the heuristic
function to them, and adds them to the list of open nodes, after checking to see whether
any of them have been generated before. By doing this check, we can guarantee that each
node appears only once in the graph, although many nodes may point to it as a successor.
Then the next step begins.

Algorithm: Best-First Search

Step 1: Start with OPEN containing just the initial state.

Step 2: Until a goal is found or there are no nodes left on OPEN do:

(a) Pick the best node on OPEN.
(b) Generate its successors.
(c) For each successor do:
(i) If it has not been generated before, evaluate it, add it to
OPEN, and record its parent.
(ii) If it has been generated before, change the parent if this new
path is better than the previous one. In that case, update the
cost of getting to this node and to any successors that this
node may already have.
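
A minimal sketch of this procedure, with OPEN as a priority queue ordered by a heuristic h and CLOSED as a set of already-expanded states; the successor, heuristic, and goal-test functions are assumed to be supplied by the problem. (For simplicity this sketch skips the parent-link bookkeeping of step 2(c)(ii).)

import heapq
import itertools

def best_first_search(start, successors, h, is_goal):
    # Always expand the OPEN node with the most promising (lowest) h value.
    tie = itertools.count()                # tie-breaker so states never compare
    open_list = [(h(start), next(tie), start)]
    closed = set()
    while open_list:
        _, _, state = heapq.heappop(open_list)
        if is_goal(state):
            return state
        if state in closed:
            continue
        closed.add(state)
        for succ in successors(state):
            if succ not in closed:         # skip already-expanded states
                heapq.heappush(open_list, (h(succ), next(tie), succ))
    return None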

Example: A Best-First Search (the values in parentheses are heuristic estimates of the
cost of reaching a solution from that node)

Step 1: Only the start node A exists.

Step 2: A is expanded, generating B (3), C (5), and D (1). D is the most promising.

Step 3: D is expanded, generating E (4) and F (6). B (3) is now the most promising
open node.

Step 4: B is expanded, generating G (6) and H (5). E (4) is now the most promising
open node.

Step 5: E is expanded, generating I (2) and J (1). J will be expanded next.

The trace above shows the beginning of a best-first search procedure. Initially there is
only one node, so it is expanded. Doing so generates three new nodes. The heuristic
function, which in this example is an estimate of the cost of getting to a solution from a
given node, is applied to each of these new nodes. Since node D is the most promising, it
is expanded next, producing two successor nodes, E and F, to which the heuristic
function is again applied. Now another path, the one going through node B, looks more
promising, so it is pursued, generating nodes G and H. But when these new nodes are
evaluated they look less promising than another path, so attention returns to the path
through D to E. E is then expanded, yielding nodes I and J. At the next step, J will be
expanded, since it is the most promising. This process continues until a solution is
found.

Simulated Annealing versus Hill Climbing

Simulated Annealing

The goal is to find a minimal energy state. The search descends except occasionally
when, with low probability, it moves uphill instead. The probability of moving uphill
decreases as the temperature of the system decreases, so such moves are much more
likely earlier than later.

Problems include: choosing an initial temperature, choosing the annealing schedule (the
rate at which the system cools).

As we have seen in previous lectures, hill climbing suffers from getting stuck at local
minima (or maxima). We could try to overcome these problems with various techniques:
• We could run the hill climbing algorithm from different starting points.
• We could increase the size of the neighbourhood so that we consider more of the
search space at each move. For example, we could try a 3-opt rather than a 2-opt move
when implementing the TSP.
Unfortunately, neither of these has proved satisfactory in practice when using a simple
hill climbing algorithm.
Simulated annealing addresses this problem by allowing worse moves (of lesser quality)
to be taken some of the time. That is, it allows some uphill steps so that it can escape
from local minima.

Unlike hill climbing, simulated annealing chooses a random move from the
neighbourhood (recall that hill climbing chooses the best move from all those available,
at least when using steepest descent or ascent). If the move is better than the current
position, simulated annealing will always take it. If the move is worse (i.e., of lesser
quality), it will be accepted with some probability.
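
The acceptance rule for a worse move is conventionally the Boltzmann probability e^(-delta/T), where delta is the loss in quality and T the current temperature. A minimal sketch, assuming an energy function to minimize, a random neighbour function, and a geometric cooling schedule (all illustrative choices):

import math
import random

def simulated_annealing(start, neighbour, energy,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    # Worse moves are accepted with probability exp(-delta / T),
    # so uphill steps become rarer as the temperature falls.
    current, t = start, t0
    while t > t_min:
        candidate = neighbour(current)     # a random move, not the best one
        delta = energy(candidate) - energy(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        t *= cooling                       # the annealing schedule
    return current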

Constraint-Satisfaction Problems

In some search problems, there is no explicit goal state; rather, there is a set of
constraints on a possible solution that must be satisfied. The task is not to find a sequence
of steps leading to a solution, but instead to find a particular state that simultaneously
satisfies all constraints.
The approach is to assign values to the constrained variables, each such assignment
limiting the range of subsequent choices. Even though the sequence is not of interest, the
problem can still be regarded as a search through state space.

Example: the eight-queens problem. The Eight-Queens Problem is to place eight queens
on a standard chessboard in such a way that no two queens are attacking one another.

The topology of a constraint graph can sometimes be used to identify solutions easily. In
particular, binary CSPs whose constraint graph is a tree can be solved optimally in time
O(nk^2), where n is the number of variables and k is the number of values for each variable.
Going from the leaves toward the root, we delete from each node the values that do not
have at least one match for each of its successors. If any node ends up empty, there is no
solution; otherwise, we trace any remaining value from the root down, and this produces
a consistent solution.

The most common algorithm for solving CSPs is a type of depth-first search called
backtracking. The most primitive version assigns values to variables in a predetermined
order, at each step attempting to assign the next variable a value that is consistent with
previous assignments and the constraints. If no consistent assignment can be found for
the next variable, a dead-end is reached. In this case the algorithm goes back to one of the
earlier variables and tries a different value.
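
A minimal sketch of this primitive chronological backtracking scheme; the variable order, domains, and pairwise consistency test are assumed to be supplied by the problem, and the map-coloring example is purely illustrative:

def backtrack(assignment, variables, domains, consistent):
    # Assign variables in a fixed order; on a dead-end, undo the most
    # recent choice and try the next value.
    if len(assignment) == len(variables):
        return assignment                  # every variable assigned
    var = variables[len(assignment)]       # predetermined order
    for value in domains[var]:
        if all(consistent(var, value, v, assignment[v]) for v in assignment):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]            # dead-end: undo and try next value
    return None                            # no consistent value exists

# Color three mutually adjacent regions so that neighbours differ.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
print(backtrack({}, variables, domains, lambda x, xv, y, yv: xv != yv))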

Backtracking and Thrashing

The obvious approach is to assign variables in some order, then go back and change
assignments when a conflict is detected. However, the run-time complexity of this
approach is still exponential, and it suffers from thrashing; that is, search in different
parts of the space keeps failing for the same reasons.

The simplest cause of thrashing is node inconsistency, in which there is some possible
value of a variable that will cause it to fail in and of itself; when it is instantiated it
always fails immediately. This can be resolved by removing such values before search
begins.

Dependency-Directed Backtracking

Since backtracking is used in many AI applications (solving CSPs, TMSs, PROLOG,
etc.) there are a number of schemes to improve its efficiency. Such schemes, called
dependency-directed backtracking, or sometimes intelligent backtracking [Stallman and
Sussman, 1977], can be classified as follows:
Lookahead Schemes

These schemes control which variable to instantiate next or what value to choose among
the consistent options.

• Variable ordering: This approach tries to choose a variable that will make the
rest of the problem easier to solve. This is usually done by choosing the variable
involved in the most constraints.
• Value ordering: A value is chosen that will maximize the number of options
available for future assignments.

Look-back Schemes

These approaches control the decision of where and how to go back in case of dead-ends.
There are two basic approaches:

• Go back to source of failure: Try to change only those past decisions that caused
the error, leaving other past decisions unchanged.
• Constraint recording: Record the "reasons" for the dead-end so that they will be
avoided in future search.

Gaschnig's "backjumping" [1979] is the best-known go-back scheme (q.v.). A simpler
version jumps to the youngest ancestor constraining the dead-end variable.

Dependency-directed backtracking is also used in truth-maintenance systems (Doyle's
RMS, 1979). It works as follows. A variable is assigned some value, and a justification
for that value is recorded (it may simply be that there is no justification for any other
value). Then a default value is assigned to some other variable and justified. At this point
the system checks whether the assignments violate any constraints; if so, it records that
the two are not simultaneously acceptable, and this record is used to justify the choice of
some other value. This continues until a solution is found. Such a system never
performs redundant backtracking and never repeats computations.

Preprocessing

Constraint recording can be implemented by preprocessing the problem or by recording
constraints as they are encountered during search. The most common preprocessing
approaches are arc consistency and path consistency.

Arc consistency deletes values of variables that have no consistent matches in adjacent
(i.e., directly connected) variables. Path consistency records sets of forbidden value pairs
when they can't be matched at some third variable.
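
A minimal sketch of the deletion step of arc consistency for a single directed arc (x, y); the binary constraint test is assumed to be given:

def revise(domains, x, y, allowed):
    # Delete values of x that have no consistent match in y.
    # Returns True if x's domain was pruned.
    pruned = [vx for vx in domains[x]
              if not any(allowed(vx, vy) for vy in domains[y])]
    for vx in pruned:
        domains[x].remove(vx)
    return bool(pruned)

# Example: enforce x < y on small integer domains.
domains = {"x": [1, 2, 3], "y": [1, 2]}
revise(domains, "x", "y", lambda a, b: a < b)
print(domains["x"])                        # -> [1]; 2 and 3 have no match
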
Preprocessing for path consistency can be expensive: it requires O(n^3 k^3) operations,
while many of the forbidden pairs would never actually be encountered. There are more
efficient learning techniques that process constraints as the search is performed.

Cycle Cutset

[Dechter and Pearl, 1987] This is another approach to improving backtracking
performance. The goal is to identify a set of nodes that, when removed, leaves a tree-
structured (i.e., cycle-free) constraint graph. Once in tree form, the CSP can be solved in
linear time. This gives an upper bound on the complexity of CSPs: if c is the size of some
cycle cutset, and we instantiate the variables in the cutset first, then the complexity of the
search is at most O(n k^c), rather than the O(k^n) associated with general backtrack
search.

Backtrack-Free Search

Theorem [Freuder]: A k-consistent CSP having a width-(k - 1) ordering admits a
backtrack-free solution in that ordering.

In particular, a graph of width 1 (i.e., a tree) that is arc consistent admits backtrack-free
solutions; a graph of width 2 that is path consistent admits backtrack-free solutions.
