
Artificial intelligence

Parth Pratim Roy

Stuart Russell
Peter Norvig

Topics (for study)

Probability
Bayesian networks
Decision theory
Neural networks
Fuzzy logic
Hidden Markov models
Searching techniques
uninformed and informed
Time and space complexity
Travelling salesman problem (NP)
8-queens
Picture matching problem
Game theory: minimax and alpha-beta algorithms
Markov decision processes

AI

Chapter 1> introduction

Chapter 2>intelligent agents


problem solving

Chapter 3>problem solving by searching

Chapter 4>informed search and exploration

Chapter 5> constraint satisfaction problems (present)

Chapter 6> adversarial search

Chapter 1>Artificial intelligence

Chapter 2- intelligent Agents

Agents and environment

Concept of rationality
performance measure and rationality
omniscience, learning and autonomy

Nature of environments
task environment and
properties of task environment

The structure of Agents


agent programs

Chapter 3-- problem solving


Searching

Problem solving agents


well defined problems and solutions
formulating problems

Example problems
toy problems
real world problems

Searching for solutions


measuring problem-solving performance

Uninformed search strategies

Avoiding repeated states


Searching with partial information
Sensorless problems
Contingency problems

Chapter 4-> informed search and exploration

Informed (heuristic) search strategies


greedy best-first search
A* search: minimizing the total estimated
solution cost
memory-bounded heuristic search

Heuristic functions
the effect of heuristic accuracy on performance
inventing admissible heuristic functions
learning heuristics from experience

Local search algorithms and optimization problems


hill climbing search
simulated annealing search
local beam search
genetic algorithms

Local search in continuous spaces

Online search agents and unknown environments


online search problems
online search agents
online local search

Chapter-5 constraint satisfaction problems

Constraint satisfaction problems

Backtracking search for CSPs


variable and value ordering
propagating information through constraints
intelligent backtracking

Local search for CSPs

The structure of problems

Chapter 6- Adversarial search

Games

Optimal decisions in games


optimal strategies
the minimax algorithm
optimal decisions in multiplayer games

Alpha-beta pruning

Imperfect real-time decisions


evaluation functions
cutting off search

Chapter 5 CSPs
In chapters 3 and 4 we saw the idea that problems can be solved by
searching the state space.
--- then, to speed up the search, use domain-specific heuristics
-- each node is a black box, accessed only occasionally through
a> the successor function
b> the heuristic function
c> the goal test
Now THIS chapter
discusses CSPs, whose states and goal test conform to a standard, structured and very
SIMPLE representation.
The structure of the problem is related to the difficulty of solving it.

CSP_____
1> variables
2> constraints
The constraints specify the allowable combinations of values for a subset of the variables.
A state of the problem is an assignment of values to some or all of
the variables.
An assignment is consistent if it does not violate any of the constraints.
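As a concrete sketch (the region names and colours are illustrative, not from these notes), a map-colouring CSP can be written down directly as variables, domains, and binary constraints:

```python
# A hypothetical map-colouring CSP: colour WA, NT, SA so that
# neighbouring regions get different colours.
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}
constraints = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA')]  # binary "not equal" constraints

def is_consistent(assignment):
    """A (partial) assignment is consistent if it violates no constraint."""
    return all(assignment[x] != assignment[y]
               for x, y in constraints if x in assignment and y in assignment)

print(is_consistent({'WA': 'red', 'NT': 'green'}))  # True
print(is_consistent({'WA': 'red', 'NT': 'red'}))    # False
```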

The best-known continuous-domain CSP category is LINEAR PROGRAMMING


-- linear inequalities forming a convex region
Constraints can be
unary,
binary,
or n-ary (higher order)

EXAMPLES OF CSPs

Cryptarithmetic puzzles
Scheduling problems are
a classic example of
CSPs (with costs)

BACKTRACKING SEARCH

We can express any sort of constraint using binary constraints
if we introduce auxiliary variables.

BACKTRACKING SEARCH
For a naive (breadth-first) formulation with
> n variables
> d values each,
the tree has n!*d^n leaves
instead of d^n.

COMMUTATIVE ->
the order of application has no net effect on the outcome:
A->B->C is the same as C->A->B
==> this implies we need to consider only one variable at a time
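The two leaf counts can be checked with a few lines (n and d are arbitrary small sizes chosen for illustration):

```python
from math import factorial

# Illustrative sizes: any small n (variables) and d (values) will do.
n, d = 4, 3

# Branching on every (variable, value) pair at every level counts each
# complete assignment once per variable ordering: n! * d^n leaves.
naive_leaves = factorial(n) * d ** n

# Commutativity lets us fix one variable per level: only d^n leaves.
commutative_leaves = d ** n

print(naive_leaves, commutative_leaves)  # 1944 81
```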

Basic definition ==>

The term "backtracking" is used because the search is depth-first and backtracks
when a variable has no legal values left to assign.

The algorithm::-- Func RECURSIVE-BACKTRACKING
** GOAL TEST (all variables assigned)
1> ORDER DOMAIN VALUES
2> RECURSIVE BACKTRACKING
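A minimal sketch of this recursive backtracking scheme, assuming a caller-supplied `consistent` predicate that checks every constraint touched by the partial assignment (the function names are illustrative):

```python
def backtracking_search(variables, domains, consistent):
    """Recursive backtracking sketch: depth-first, one variable per level,
    undoing an assignment (backtracking) when a variable has no legal
    value left. `consistent` is an assumed caller-supplied check."""
    def recurse(assignment):
        if len(assignment) == len(variables):          # goal test: complete
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:                     # order domain values
            assignment[var] = value
            if consistent(assignment):
                result = recurse(assignment)
                if result is not None:
                    return result
            del assignment[var]                        # undo and try next value
        return None                                    # no legal value: backtrack
    return recurse({})
```

Called with a variable list, a domain dict, and a consistency predicate, it returns a complete consistent assignment or None.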

Variable and value ordering


* Minimum Remaining Values (MRV) -- or most-constrained-variable heuristic
* Degree heuristic -> the variable involved in the largest number of constraints with other unassigned variables
((note the difference between MRV and the degree heuristic))
* Least-constraining-value heuristic
Forward Checking
In the map-colouring problem, forward checking
automatically deletes branches leading to dead ends, i.e. it backtracks
without further search.
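A sketch of one forward-checking step for map-colouring-style inequality constraints; the function name and data layout are assumptions of this sketch:

```python
def forward_check(domains, var, value, neighbors):
    """After assigning `value` to `var`, prune it from each neighbour's
    domain. Returns the pruned domains, or None if a domain empties --
    a dead end detected without ever expanding that branch."""
    new_domains = {v: list(d) for v, d in domains.items()}
    new_domains[var] = [value]
    for nb in neighbors[var]:
        if value in new_domains[nb]:
            new_domains[nb].remove(value)
            if not new_domains[nb]:
                return None          # some neighbour has no value left
    return new_domains
```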
Constraint Propagation
Forward checking alone does not detect every inconsistency; constraint
propagation also checks that constraints on other, unassigned variables are not violated.

Intelligent backtracking
The common form is chronological backtracking (undo the most recent decision).

A modified form:
keep a conflict set and do BACKJUMPING (back up to the most recent variable in the conflict set).

Backjumping is redundant with forward checking.

Constraint learning actually modifies the CSP by learning new constraints
induced from these conflicts.

Local search for constraint satisfaction problems

The 8-queens problem can be solved this way.
HILL climbing is one such local search method.

Minimum-conflicts heuristic
local search and the difference between hard and easy problems
It can be used for scheduling problems, e.g. online
scheduling of aircraft: when changes happen we just have to do a
local search from the current schedule instead of backtracking from scratch.
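A minimal min-conflicts sketch for n-queens (the parameter defaults are arbitrary choices of this sketch, not from the notes):

```python
import random

def min_conflicts_queens(n=8, max_steps=10_000, seed=0):
    """Min-conflicts local search for n-queens, one queen per column:
    repeatedly pick a conflicted queen and move it to the row in its
    column that minimizes the number of conflicts."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]   # rows[c] = row of queen in column c

    def conflicts(col, row):
        # queens attack along rows and diagonals
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                            # no attacks: solution found
        col = rng.choice(conflicted)
        scores = [conflicts(col, r) for r in range(n)]
        best = min(scores)
        rows[col] = rng.choice([r for r in range(n) if scores[r] == best])
    return None                                    # give up after max_steps
```

Each step picks a random conflicted column and moves its queen to a minimum-conflict row (ties broken randomly); on 8-queens this typically converges in a few dozen steps.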

Structure of the problems


Any tree-structured CSP can be solved in time linear in the number of variables.
Dividing a CSP into independent subproblems reduces the cost
from exponential to linear in n.
Algorithm for reducing to tree form::- 1> pick a root and order the variables so each appears after its parent
2> make arcs consistent backward (from the leaves toward the root)
3> assign values forward (no backtracking, because of arc consistency)
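The three steps can be sketched for binary constraints; the tree encoding and the `allowed` predicate are assumptions of this sketch:

```python
def solve_tree_csp(tree, domains, allowed):
    """Tree-CSP sketch: `tree` maps each variable to its children and
    `allowed(x, vx, y, vy)` is an assumed binary constraint predicate.
    1> order variables so each appears after its parent (root first)
    2> backward pass: prune parent values with no consistent child value
    3> forward pass: assign values with no backtracking"""
    order, parent, stack = [], {}, [next(iter(tree))]
    while stack:                                  # step 1: parent-before-child order
        node = stack.pop()
        order.append(node)
        for child in tree.get(node, []):
            parent[child] = node
            stack.append(child)
    domains = {v: list(d) for v, d in domains.items()}
    for child in reversed(order[1:]):             # step 2: arc consistency backward
        p = parent[child]
        domains[p] = [vp for vp in domains[p]
                      if any(allowed(p, vp, child, vc) for vc in domains[child])]
        if not domains[p]:
            return None                           # the tree CSP is unsolvable
    assignment = {order[0]: domains[order[0]][0]}
    for node in order[1:]:                        # step 3: assign forward
        p = parent[node]
        assignment[node] = next(v for v in domains[node]
                                if allowed(p, assignment[p], node, v))
    return assignment
```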

Intelligent Agents
Agents
Environments
And
Coupling
Rational agents ::--Sensors
Actuators
Percepts and
Percept sequence

Agents should be designed with the performance measure specifying what
you actually want achieved in the environment, not how the agent should behave.

Omniscience, rationality and autonomy

Percepts should be kept up to date (information gathering)
Exploration should be built in
Learn as much as possible
Autonomy

Environments

Fully observable vs partially observable

Deterministic vs stochastic
Episodic vs sequential: in an episodic environment the current
episode is independent of the others and has no
effect on future episodes

Static vs dynamic: a dynamic environment requires the agent to respond to changes

Discrete vs continuous: chess vs taxi driving
Single-agent vs multiagent

Acting stochastically is sometimes good because it avoids
the pitfall of predictability (don't be perfectly predictable).

Chapter 3

Well defined problems and solutions

Initial state

Successor function
given a state, it returns the
possible actions and the states they lead to

Goal test
the agent must be able to recognize a goal state

Path cost --- it gives the agent a
criterion for comparing alternative solutions

Example problems

Route finding problems

Travelling salesman problem

8 puzzles

8 queens

Internet searching

Robot navigation

Automatic assembly problem

Protein design

Searching for the solution

The search space and the search tree are different things.

The search strategy is used for choosing which
node to expand next.
At each node, the successor function generates a
new set of states.
The fringe is the set of nodes that have been generated but not yet
expanded.
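A generic tree-search sketch showing the fringe; the FIFO/LIFO switch is an illustrative way to get breadth-first vs depth-first behaviour:

```python
from collections import deque

def tree_search(initial, successors, is_goal, strategy="bfs"):
    """Generic tree search: the fringe holds nodes that have been
    generated but not yet expanded; the strategy picks which fringe node
    to expand next (FIFO queue = breadth-first, LIFO stack = depth-first)."""
    fringe = deque([(initial, [initial])])
    while fringe:
        state, path = fringe.popleft() if strategy == "bfs" else fringe.pop()
        if is_goal(state):
            return path                       # solution: path from the root
        for s in successors(state):           # successor function generates new states
            fringe.append((s, path + [s]))
    return None                               # fringe exhausted: no solution
```

For example, searching for 5 from 0 where each state n has successors n+1 and n+2, BFS returns a minimum-depth path while DFS returns the deepest-first one.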

Searching strategies

Breadth-first search

Depth-first search

Iterative deepening depth-first search

Uniform-cost search

Bidirectional search
1>> avoiding the repetition of nodes:
Graph Search (solves the problem of repeated
states by remembering the nodes already visited)

Searching with partial information

Sensorless problems
no percepts

Contingency problems
game-theoretic if the percepts are adversarial
=== be prepared to tackle any contingency that
arises

Exploration problems
the environment is unknown
