
Problem Solving
CMSC 421 – Fall 2003
Russell and Norvig: Chapter 3

Announcements and Outline

Announcements:
• Lise's office hours: Thu 1:30-3:00 and by appt
• HW1 due next Tue

Outline:
• Problem Solving Agents
• Problem Formulation
• Basic Search

Problem-Solving Agent

[Figure: an agent observing the environment through sensors and
acting on it through actuators; a "?" marks the agent's decision
process]

• Formulate Goal
• Formulate Problem
  • States
  • Actions
• Find Solution

Example: Route Finding (Holiday Planning)

On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest.

Formulate Goal:
  Be in Bucharest
Formulate Problem:
  States: various cities
  Actions: drive between cities
Find solution:
  Sequence of cities: Arad, Sibiu, Fagaras, Bucharest

Problem Formulation

[Figure: road map of Romania with driving distances between cities
(Arad, Zerind, Oradea, Sibiu, Fagaras, Timisoara, Lugoj, Mehadia,
Dobreta, Rimnicu Vilcea, Craiova, Pitesti, Bucharest, Giurgiu,
Urziceni, Hirsova, Eforie, Vaslui, Iasi, Neamt); Start = Arad,
Goal = Bucharest]

States: the cities
Actions: driving between cities
Solution: a sequence of cities

Vacuum World

[Figure: the state space of the two-cell vacuum world: 8 states
connected by the actions Left (L), Right (R), and Suck (S)]

Search Problem

A search problem is defined by five components:

• State space
  • each state is an abstract representation of the environment
  • the state space is discrete
• Initial state
  • usually the current state
  • sometimes one or several hypothetical states ("what if ...")
• Successor function
  • [state → subset of states]
  • an abstract representation of the possible actions
• Goal test
  • usually a condition
  • sometimes the description of a state
• Path cost

Search Problem (continued)

• Path cost:
  • [path → positive number]
  • usually, path cost = sum of step costs
  • e.g., number of moves of the empty tile

Example: 8-puzzle

Initial state:    Goal state:
  8 2 _             1 2 3
  3 4 7             4 5 6
  5 1 6             7 8 _

(_ marks the empty tile)

Example: 8-puzzle

[Figure: the state (8 2 _ / 3 4 7 / 5 1 6) and the states reachable
by one move of the empty tile]

Size of the state space = 9!/2 = 181,440  →  0.018 sec
15-puzzle → ~0.65 × 10^13 states          →  6 days
24-puzzle → ~0.5 × 10^25 states           →  12 billion years

(assuming 10 million states/sec)
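The 9!/2 count can be checked with the standard library; the division by 2 reflects the parity invariant that makes half of all tile arrangements unreachable:

```python
from math import factorial

# Of the 9! arrangements of the eight tiles and the blank, only half
# are reachable from a given start state: every move preserves a
# parity invariant, splitting the arrangements into two halves.
size_8_puzzle = factorial(9) // 2
print(size_8_puzzle)  # 181440
```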

Your Turn: Search Problem

• State space
• Initial state
• Successor function
• Goal test
• Path cost

Search of State Space

[Figure: animation over several slides; expanding states one by one
turns the state space into a search tree]
→ search tree

Simple Agent Algorithm

Problem-Solving-Agent
1. initial-state ← sense/read state
2. goal ← select/read goal
3. successor ← select/read action models
4. problem ← (initial-state, goal, successor)
5. solution ← search(problem)
6. perform(solution)
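The six steps above can be sketched directly in Python. All five parameters are hypothetical hooks standing in for the agent's sensors, goal selection, action models, search routine, and actuators; none of these names come from the slides.

```python
def problem_solving_agent(sense_state, select_goal,
                          select_successor_fn, search, perform):
    initial_state = sense_state()               # 1. sense/read state
    goal = select_goal()                        # 2. select/read goal
    successor = select_successor_fn()           # 3. select/read action models
    problem = (initial_state, goal, successor)  # 4. formulate the problem
    solution = search(problem)                  # 5. search for a solution
    perform(solution)                           # 6. execute it
    return solution
```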

Example: 8-queens

Place 8 queens on a chessboard so that no two queens
are in the same row, column, or diagonal.

[Figure: a solution and a non-solution]

Formulation #1:
• States: any arrangement of 0 to 8 queens on the board
• Initial state: 0 queens on the board
• Successor function: add a queen in any square
• Goal test: 8 queens on the board, none attacked
→ 64^8 states with 8 queens

Example: 8-queens

Formulation #2:
• States: any arrangement of k = 0 to 8 queens in the k leftmost
  columns with none attacked
• Initial state: 0 queens on the board
• Successor function: add a queen to any square in the leftmost
  empty column such that it is not attacked by any other queen
• Goal test: 8 queens on the board
→ 2,057 states

Example: Robot navigation

[Figure: a mobile robot in a workspace with obstacles]
What is the state space?
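Formulation #2's successor function can be verified by brute force; this sketch (the tuple-of-rows state encoding is my choice, not from the slides) enumerates the whole state space and counts 2,057 states, including the empty board:

```python
def attacks(r1, c1, r2, c2):
    # Queens attack along rows and diagonals (columns differ by
    # construction, since we place one queen per column).
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def successors(rows):
    # rows[i] is the row of the queen in column i. Add a queen to the
    # leftmost empty column so that it is not attacked (formulation #2).
    c = len(rows)
    for r in range(8):
        if all(not attacks(r, c, r2, c2) for c2, r2 in enumerate(rows)):
            yield rows + (r,)

def count_states():
    # Depth-first enumeration of the state space of formulation #2.
    total, frontier = 0, [()]
    while frontier:
        s = frontier.pop()
        total += 1
        if len(s) < 8:
            frontier.extend(successors(s))
    return total

print(count_states())  # 2057
```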

Example: Robot navigation

[Figure: a grid of robot positions with horizontal, vertical, and
diagonal moves]

Cost of one horizontal/vertical step = 1
Cost of one diagonal step = √2

Example: Assembly Planning

[Figure: initial state (separated parts) and goal state (the
assembled product)]

Successor function:
• Merge two subassemblies

This is a complex function: it must determine whether a
collision-free merging motion exists.

Example: Assembly Planning

[Figure: a sequence of merges leading from individual parts to the
assembled product]

Assumptions in Basic Search

• The environment is static
• The environment is discretizable
• The environment is observable
• The actions are deterministic

Simple Agent Algorithm (recap)

Problem-Solving-Agent
1. initial-state ← sense/read state
2. goal ← select/read goal
3. successor ← select/read action models
4. problem ← (initial-state, goal, successor)
5. solution ← search(problem)
6. perform(solution)
→ search tree

Search of State Space

[Figure: the search tree overlaid on the state space]

Basic Search Concepts

• Search tree
• Search node
• Node expansion
• Search strategy: at each stage it determines which node to expand

The search tree may be infinite even when the state space is finite.

Search Nodes ≠ States

[Figure: search tree for the 8-puzzle; the root node holds the state
(8 2 _ / 3 4 7 / 5 1 6), and expanding it creates one child node for
each state reachable by a single move of the empty tile]

Fringe

• Set of search nodes that have not been expanded yet
• Implemented as a queue FRINGE
  • INSERT(node,FRINGE)
  • REMOVE(FRINGE)
• The ordering of the nodes in FRINGE defines the search strategy

Search Algorithm

1. If GOAL?(initial-state) then return initial-state
2. INSERT(initial-node,FRINGE)
3. Repeat:
     If FRINGE is empty then return failure
     n ← REMOVE(FRINGE)
     s ← STATE(n)
     For every state s' in SUCCESSORS(s)
       • Create a node n'
       • If GOAL?(s') then return path or goal state
       • INSERT(n',FRINGE)
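The algorithm above translates almost line for line into Python. This is a sketch, not the slides' own code: nodes are (state, parent) pairs so the path can be reconstructed, and the FIFO queue chosen here makes the strategy breadth-first; as the Fringe slide notes, the queue ordering is exactly what defines the strategy.

```python
from collections import deque

def search(initial_state, goal, successors):
    if goal(initial_state):                   # 1. GOAL?(initial-state)
        return [initial_state]
    fringe = deque([(initial_state, None)])   # 2. INSERT(initial-node, FRINGE)
    while fringe:                             # 3. repeat
        node = fringe.popleft()               #    n <- REMOVE(FRINGE)
        state, _ = node                       #    s <- STATE(n)
        for s2 in successors(state):          #    for every successor s'
            child = (s2, node)                #      create a node n'
            if goal(s2):                      #      if GOAL?(s'), return path
                path = []
                while child is not None:
                    path.append(child[0])
                    child = child[1]
                return path[::-1]
            fringe.append(child)              #      INSERT(n', FRINGE)
    return None                               # FRINGE empty -> failure

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(search("A", lambda s: s == "D", lambda s: graph.get(s, [])))
# ['A', 'B', 'D']
```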

Search Strategies

A strategy is defined by picking the order of node expansion.

Strategies are evaluated along the following dimensions:
• Completeness – does it always find a solution if one exists?
• Time complexity – number of nodes generated/expanded
• Space complexity – maximum number of nodes in memory
• Optimality – does it always find a least-cost solution?

Time and space complexity are measured in terms of:
• b – maximum branching factor of the search tree
• d – depth of the least-cost solution
• m – maximum depth of the state space (may be ∞)

Important Remark

Some problems formulated as search problems are NP-hard.
We cannot expect to solve such a problem in less than
exponential time in the worst case.

But we can nevertheless strive to solve as many instances
of the problem as possible.

Blind vs. Heuristic Strategies

• Blind (or uninformed) strategies do not exploit any of the
  information contained in a state.
• Heuristic (or informed) strategies exploit such information to
  assess that one node is "more promising" than another.

Blind Search…

…[the ant] knew that a certain arrangement had to be made, but it
could not figure out how to make it. It was like a man with a tea-cup
in one hand and a sandwich in the other, who wants to light a
cigarette with a match. But, where the man would invent the idea of
putting down the cup and sandwich—before picking up the cigarette and
the match—this ant would have put down the sandwich and picked up the
match, then it would have been down with the match and up with the
cigarette, then down with the cigarette and up with the sandwich,
then down with the cup and up with the cigarette, until finally it
had put down the sandwich and picked up the match. It was inclined to
rely on a series of accidents to achieve its object. It was patient
and did not think… Wart watched the arrangements with a surprise
which turned into vexation and then into dislike. He felt like asking
why it did not think things out in advance…

                         T.H. White, The Once and Future King

Blind Strategies

• Breadth-first (step cost = 1)
  • Bidirectional
• Depth-first
  • Depth-limited
  • Iterative deepening
• Uniform-Cost (step cost = c(action) ≥ ε > 0)

Breadth-First Strategy

New nodes are inserted at the end of FRINGE.

          1
        /   \
       2     3        FRINGE = (1)
      / \   / \
     4   5 6   7

Breadth-First Strategy

New nodes are inserted at the end of FRINGE.
After expanding node 1: FRINGE = (2, 3)
After expanding node 2: FRINGE = (3, 4, 5)

Breadth-First Strategy

After expanding node 3: FRINGE = (4, 5, 6, 7)

Evaluation

b: branching factor
d: depth of shallowest goal node
• Complete
• Optimal if step cost is 1
• Number of nodes generated:
  1 + b + b^2 + ... + b^d + b(b^d - 1) = O(b^(d+1))
• Time and space complexity is O(b^(d+1))
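As a concrete sketch, breadth-first behavior falls out of the generic search algorithm once new nodes go to the end of the fringe. Storing whole paths on the fringe (an implementation convenience, not from the slides) keeps the code short; the example is the 7-node tree from these slides.

```python
from collections import deque

def breadth_first(start, goal, successors):
    # FIFO fringe: new nodes are appended at the END, so the
    # shallowest unexpanded node is always removed first.
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if state == goal:
            return path
        for s2 in successors(state):
            fringe.append(path + [s2])   # insert at the end of FRINGE
    return None

tree = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
succ = lambda s: tree.get(s, [])
print(breadth_first(1, 7, succ))  # [1, 3, 7]
```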

Time and Memory Requirements

d    #Nodes    Time       Memory
2    111       .01 msec   11 Kbytes
4    11,111    1 msec     1 Mbyte
6    ~10^6     1 sec      100 Mbytes
8    ~10^8     100 sec    10 Gbytes
10   ~10^10    2.8 hours  1 Tbyte
12   ~10^12    11.6 days  100 Tbytes
14   ~10^14    3.2 years  10,000 Tbytes

Assumptions: b = 10; 1,000,000 nodes/sec; 100 bytes/node

Bidirectional Strategy

2 fringe queues: FRINGE1 and FRINGE2

[Figure: two frontiers, one grown forward from the initial state and
one backward from the goal, meeting in the middle]

Time and space complexity = O(b^(d/2)) << O(b^d)

Depth-First Strategy

New nodes are inserted at the front of FRINGE.

        1
       / \
      2   3       FRINGE = (1)
     / \
    4   5

Depth-First Strategy

New nodes are inserted at the front of FRINGE.
After expanding node 1: FRINGE = (2, 3)
After expanding node 2: FRINGE = (4, 5, 3)

Depth-First Strategy

[Figure: animation over several slides; after exhausting the subtree
under node 2, the search backtracks and explores the subtree under
node 3]

Evaluation

b: branching factor
d: depth of shallowest goal node
m: maximal depth of a leaf node
• Complete only for a finite search tree
• Not optimal
• Number of nodes generated:
  1 + b + b^2 + ... + b^m = O(b^m)
• Time complexity is O(b^m)
• Space complexity is O(bm)

Depth-Limited Strategy

Depth-first with depth cutoff k (maximal depth below which nodes
are not expanded)

Three possible outcomes:
• Solution
• Failure (no solution)
• Cutoff (no solution within cutoff)

Iterative Deepening Strategy

Repeat for k = 0, 1, 2, ...:
  Perform depth-first with depth cutoff k

• Complete
• Optimal if step cost = 1
• Time complexity is:
  (d+1)(1) + d·b + (d-1)·b^2 + ... + (1)·b^d = O(b^d)
• Space complexity is O(bd)

Comparison of Strategies

• Breadth-first is complete and optimal, but has high space complexity
• Depth-first is space efficient, but neither complete nor optimal
• Iterative deepening is asymptotically optimal
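The k = 0, 1, 2, ... loop above can be sketched with a recursive depth-limited helper; the "cutoff" sentinel distinguishes "no solution within the cutoff" (keep deepening) from "no solution at all" (stop):

```python
def depth_limited(path, goal, successors, limit):
    # Depth-first search that does not expand nodes below `limit`.
    state = path[-1]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"
    cutoff = False
    for s2 in successors(state):
        result = depth_limited(path + [s2], goal, successors, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None

def iterative_deepening(start, goal, successors):
    # Repeat depth-first with cutoff k = 0, 1, 2, ... until a solution
    # is found, or a pass finishes without ever hitting the cutoff.
    k = 0
    while True:
        result = depth_limited([start], goal, successors, k)
        if result != "cutoff":
            return result
        k += 1

tree = {1: [2, 3], 3: [6, 7]}
succ = lambda s: tree.get(s, [])
print(iterative_deepening(1, 7, succ))  # [1, 3, 7]
```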

Repeated States

[Figure: three cases]
• No repeated states (search tree is finite): e.g., 8-queens
• Few repeated states: e.g., assembly planning
• Many repeated states (search tree is infinite): e.g., 8-puzzle
  and robot navigation

Avoiding Repeated States

Requires comparing state descriptions.

Breadth-first strategy:
• Keep track of all generated states
• If the state of a new node already exists, then discard the node

Avoiding Repeated States

Depth-first strategy:
• Solution 1:
  • Keep track of all states associated with nodes in the current tree
  • If the state of a new node already exists, then discard the node
  → Avoids loops
• Solution 2:
  • Keep track of all states generated so far
  • If the state of a new node has already been generated, then
    discard the node
  → Space complexity of breadth-first

Detecting Identical States

• Use explicit representation of the state space
• Use hash-code or similar representation
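Solution 2 can be sketched as breadth-first search with a set of generated states (a sketch assuming states are hashable; for the 8-puzzle one would use a tuple encoding of the board):

```python
from collections import deque

def breadth_first_graph(start, goal, successors):
    fringe = deque([[start]])
    generated = {start}              # all states generated so far
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if state == goal:
            return path
        for s2 in successors(state):
            if s2 not in generated:  # discard nodes with repeated states
                generated.add(s2)
                fringe.append(path + [s2])
    return None
```

Without the `generated` set, a state space with cycles (the "many repeated states" case above) makes the search tree infinite, and the search may never terminate on unsolvable instances.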

Uniform-Cost Strategy

• Each step has some cost ≥ ε > 0.
• The cost of the path to each fringe node N is
  g(N) = Σ costs of all steps.
• The goal is to generate a solution path of minimal cost.
• The queue FRINGE is sorted in increasing cost.

[Figure: graph with start S and goal G; reconstructed edge costs:
S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5. The search tree shows
g(S) = 0, g(A) = 1, g(B) = 5, g(C) = 15, and reaches G with cost 10
via B rather than cost 11 via A.]

Modified Search Algorithm

1. INSERT(initial-node,FRINGE)
2. Repeat:
     If FRINGE is empty then return failure
     n ← REMOVE(FRINGE)
     s ← STATE(n)
     If GOAL?(s) then return path or goal state
     For every state s' in SUCCESSORS(s)
       • Create a node n'
       • INSERT(n',FRINGE)
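A sketch of uniform-cost search with the fringe as a binary heap keyed on g(N). As in the modified algorithm above, the goal test is applied when a node is removed, not when it is generated; the example graph is my reconstruction of the garbled figure (S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5), so treat those costs as illustrative.

```python
import heapq
from itertools import count

def uniform_cost(start, goal, successors):
    # successors(s) yields (state, step_cost) pairs, step_cost >= eps > 0.
    tie = count()  # tie-breaker so the heap never compares paths
    fringe = [(0, next(tie), [start])]
    while fringe:
        g, _, path = heapq.heappop(fringe)   # cheapest fringe node first
        state = path[-1]
        if state == goal:                    # goal test on removal
            return g, path
        for s2, step in successors(state):
            heapq.heappush(fringe, (g + step, next(tie), path + [s2]))
    return None

graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 10)], "B": [("G", 5)], "C": []}
print(uniform_cost("S", "G", lambda s: graph.get(s, [])))
# (10, ['S', 'B', 'G'])
```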

Exercises

• Adapt uniform-cost search to avoid repeated states while still
  finding the optimal solution.
• Uniform-cost looks like breadth-first (it is exactly breadth-first
  if the step cost is constant). Adapt iterative deepening in a
  similar way to handle variable step costs.

Summary

• Search tree ≠ state space
• Search strategies: breadth-first, depth-first, and variants
• Evaluation of strategies: completeness, optimality, time and
  space complexity
• Avoiding repeated states
• Optimal search with variable step costs

