AI Strategies and Algorithms (2nd Class)
Artificial Intelligence (AI)
Artificial Intelligence (AI) is the branch of computer science concerned with the automation of intelligent behaviour.
AI is, simply, a way of making a computer think. AI is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics associated with intelligence in human behaviour. This requires many processes:
1- Learning: acquiring knowledge and the rules for using that knowledge.
2- Reasoning: using those rules to reach approximate or definite conclusions.
AI Principles:
1- The data structures used in knowledge representation.
2- The algorithms needed to apply that knowledge.
3- The languages and programming techniques used in their implementation.
AI Applications:
1- Game playing.
2- Automated reasoning and theorem proving.
3- Perception.
4- Expert systems.
5- Natural language understanding and semantic modelling.
6- Planning and robotics.
7- Machine learning.
8- Languages and environments for AI.
9- Pattern recognition.
10- AI and philosophy.
AI Branches:
1- Logical AI. 2- Search. 3- Representation. 4- Inference. 5- Knowledge and reasoning. 6- Planning. 7- Epistemology. 8- Ontology. 9- Heuristics. 10- Genetic programming.

Search Algorithms
To successfully design and implement search algorithms, a programmer must be able to analyze and predict their behaviour. Many questions need to be answered by the algorithm; these include:
- Is the problem solver guaranteed to find a solution?
- Will the problem solver always terminate, or can it become caught in an infinite loop?
- When a solution is found, is it guaranteed to be optimal?
- What is the complexity of the search process in terms of time usage and space usage?
- How can the interpreter be designed to most effectively utilize a representation language?

State Space Search
The theory of state space search is our primary tool for answering these questions. By representing a problem as a state space graph, we can use graph theory to analyze the structure and complexity of both the problem and the procedures used to solve it.

Graph Theory
A graph consists of a set of nodes and a set of arcs or links connecting pairs of nodes. In the domain of state space search, the nodes are interpreted as states in a problem-solving process, and the arcs are taken to be transitions between states. Graph theory is our best tool for reasoning about the structure of objects and relations.
[Figure: a tree rooted at a, with children b, c, d and descendants e through j.]
Nodes = {a, b, c, d, e, f, g, h, i, j}
Arcs = {(a,b), (a,c), (a,d), (b,e), (b,f), (c,f), (c,g), (c,h), (c,i), (d,j)}
State Space Representation of Problems
A state space is represented by a four-tuple [N, A, S, GD], where:
N is the set of nodes or states of the graph. These correspond to the states in a problem-solving process.
A is the set of arcs between the nodes. These correspond to the steps in a problem-solving process.
S, a nonempty subset of N, contains the start state(s) of the problem.
GD, a nonempty subset of N, contains the goal state(s) of the problem.
A solution path is a path through this graph from a node in S to a node in GD.
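The four-tuple above can be sketched directly in code. The following is a minimal sketch using the example graph from the previous section; the choice of start and goal sets here is illustrative, not taken from the notes.

```python
# State space as a four-tuple [N, A, S, GD]; the goal set GD is hypothetical.
N = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"}
A = {("a", "b"), ("a", "c"), ("a", "d"), ("b", "e"), ("b", "f"),
     ("c", "f"), ("c", "g"), ("c", "h"), ("c", "i"), ("d", "j")}
S = {"a"}     # start state
GD = {"j"}    # goal state(s), chosen for illustration

def is_solution_path(path):
    """A solution path runs from a start state to a goal state
    along consecutive arcs of the graph."""
    if not path or path[0] not in S or path[-1] not in GD:
        return False
    return all((u, v) in A for u, v in zip(path, path[1:]))

print(is_solution_path(["a", "d", "j"]))   # True
print(is_solution_path(["a", "b", "j"]))   # False: (b, j) is not an arc
```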
[Figure: an instance of the traveling salesman problem — a weighted graph on cities a through e, with edge costs of 50, 75, 100, 125, and 300.]
An Instance of the Traveling Salesman Problem
The complexity of exhaustive search in the traveling salesman problem is (N-1)!, where N is the number of cities in the graph. Several techniques reduce this search complexity.
1- Branch and Bound Algorithm: generate one path at a time, keeping track of the best circuit found so far. Use the best circuit so far as a bound on future branches of the search. The figure below illustrates the branch and bound algorithm.
a b c d e a=375
a b c e d a =425
a b d c e a=474
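The strategy can be sketched as follows. The cost table is hypothetical (it is not the graph in the figure); the point is the bound test that prunes any partial circuit already costing at least as much as the best complete circuit found so far.

```python
# A sketch of branch and bound for the traveling salesman problem.
# The cost table is hypothetical; edge costs are symmetric.
cost = {("a","b"): 100, ("a","c"): 75, ("a","d"): 125, ("a","e"): 100,
        ("b","c"): 50, ("b","d"): 300, ("b","e"): 75,
        ("c","d"): 125, ("c","e"): 125, ("d","e"): 50}

def d(u, v):
    return cost[(u, v)] if (u, v) in cost else cost[(v, u)]

def branch_and_bound(cities, start="a"):
    best_cost, best_tour = float("inf"), None

    def extend(tour, so_far):
        nonlocal best_cost, best_tour
        if so_far >= best_cost:            # bound: prune this branch
            return
        if len(tour) == len(cities):       # all cities visited: close the circuit
            total = so_far + d(tour[-1], start)
            if total < best_cost:
                best_cost, best_tour = total, tour + [start]
            return
        for c in cities:                   # branch: try each unvisited city
            if c not in tour:
                extend(tour + [c], so_far + d(tour[-1], c))

    extend([start], 0)
    return best_tour, best_cost

print(branch_and_bound(["a", "b", "c", "d", "e"]))
# (['a', 'c', 'b', 'e', 'd', 'a'], 375)
```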
2- Nearest Neighbour Heuristic: at each stage of the circuit, go to the nearest unvisited city. This strategy reduces the complexity to N, so it is highly efficient, but it is not guaranteed to find the shortest path, as the following example shows:
[Figure: the same weighted graph, on which the nearest-neighbour tour is not the optimal circuit.]
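The greedy strategy can be sketched as follows; the distance table is hypothetical and serves only to illustrate the mechanics (on some graphs the tour it builds is not the cheapest one).

```python
# A sketch of the nearest-neighbour heuristic for the TSP.
def nearest_neighbour_tour(dist, start):
    """From each city, go to the nearest unvisited city; finally
    return to the start. Returns the tour and its total cost."""
    def d(u, v):
        return dist[(u, v)] if (u, v) in dist else dist[(v, u)]
    unvisited = {c for pair in dist for c in pair} - {start}
    tour, total, cur = [start], 0, start
    while unvisited:
        nxt = min(unvisited, key=lambda c: d(cur, c))  # greedy choice
        total += d(cur, nxt)
        unvisited.remove(nxt)
        tour.append(nxt)
        cur = nxt
    total += d(cur, start)     # close the circuit
    tour.append(start)
    return tour, total

dist = {("a","b"): 100, ("a","c"): 75, ("a","d"): 125, ("a","e"): 100,
        ("b","c"): 50, ("b","d"): 300, ("b","e"): 75,
        ("c","d"): 125, ("c","e"): 125, ("d","e"): 50}
print(nearest_neighbour_tour(dist, "a"))   # (['a', 'c', 'b', 'e', 'd', 'a'], 375)
```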
The Monkey and Banana Problem
A monkey is in a room containing a box and a banana hanging from the ceiling. The monkey's possible moves are:
- Walk on the floor.
- Climb the box.
- Push the box around (if it is already at the box).
- Grasp the banana (if standing on the box directly under the banana).
The question is: can the monkey get the banana? The initial state of the world is determined by:
1- The monkey is at the door.
2- The monkey is on the floor.
3- The box is at the window.
4- The monkey does not have the banana.
Initial state: state(atdoor, onfloor, atwindow, hasnot), where the four components are the horizontal position of the monkey, the vertical position of the monkey, the position of the box, and whether the monkey has the banana.
A move is represented as move(State1, Move, State2), where State1 is the state before the move, Move is the move executed, and State2 is the state after the move. To answer the question "can the monkey in some initial state State get the banana?" we formulate the predicate canget(State). The program is based on two observations:
1) For any state in which the monkey already has the banana, canget must certainly be true; no move is needed in this case:
canget(state(_, _, _, has)).
2) In other cases one or more moves are necessary:
canget(State1) :- move(State1, Move, State2), canget(State2).
A Prolog program for the monkey and banana problem:
move(state(atdoor, onfloor, atwindow, hasnot), walk,
     state(atbox, onfloor, atwindow, hasnot)).
move(state(atbox, onfloor, atwindow, hasnot), push,
     state(middle, onfloor, middle, hasnot)).
move(state(middle, onfloor, middle, hasnot), climb,
     state(middle, onbox, middle, hasnot)).
move(state(middle, onbox, middle, hasnot), grasp,
     state(middle, onbox, middle, has)).
canget(state(_, _, _, has)).
canget(State1) :- move(State1, _Move, State2), canget(State2).
Goal: ?- canget(state(atdoor, onfloor, atwindow, hasnot)).
The monkey and banana problem can be represented by the following state space:
[Figure: the state space of the monkey and banana problem — walk, push, climb, and grasp transitions, including dead-end states from which no move is possible.]
The Water Jug Problem
Given a 4-gallon jug and a 3-gallon jug, neither with measuring marks, and a pump, the goal is to get exactly 2 gallons into the 4-gallon jug. A state is (X, Y), where X is the amount of water in the 4-gallon jug and Y the amount in the 3-gallon jug. The last of the production rules are (the earlier rules fill, empty, or partially pour out each jug):
7) (X, Y : X+Y >= 4 ∧ Y > 0) → (4, Y - (4 - X)): pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full.
8) (X, Y : X+Y >= 3 ∧ X > 0) → (X - (3 - Y), 3): pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full.
9) (X, Y : X+Y <= 4 ∧ Y > 0) → (X + Y, 0): pour all the water from the 3-gallon jug into the 4-gallon jug.
10) (X, Y : X+Y <= 3 ∧ X > 0) → (0, X + Y): pour all the water from the 4-gallon jug into the 3-gallon jug.

[Figure: the state-space tree generated from (0,0); one solution path is (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0), reaching the goal of 2 gallons in the 4-gallon jug.]
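The production rules above can be sketched as a successor function and searched breadth-first. This is a sketch, not the formulation in the notes: it hard-codes the fill/empty rules alongside the pouring rules 7-10.

```python
from collections import deque

# Breadth-first search over the water-jug state space
# (a 4-gallon jug x and a 3-gallon jug y).
def successors(state):
    x, y = state
    result = {(4, y), (x, 3), (0, y), (x, 0)}    # fill or empty either jug
    if x + y >= 4 and y > 0:                     # rule 7: pour 3 -> 4 until full
        result.add((4, y - (4 - x)))
    if x + y >= 3 and x > 0:                     # rule 8: pour 4 -> 3 until full
        result.add((x - (3 - y), 3))
    if x + y <= 4 and y > 0:                     # rule 9: pour all of 3 into 4
        result.add((x + y, 0))
    if x + y <= 3 and x > 0:                     # rule 10: pour all of 4 into 3
        result.add((0, x + y))
    result.discard(state)
    return result

def solve(start=(0, 0), goal_x=2):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_x:                   # 2 gallons in the 4-gallon jug
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

print(solve())
```

Breadth-first search guarantees that the path returned uses the minimum number of moves (six, for this problem).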
Backtracking Search Algorithm
Backtracking search begins at the start state and pursues a path until it reaches a goal or a "dead end". If it reaches a goal, it returns the solution path and quits. If it reaches a dead end, it backtracks to the most recent node on the path having unexamined siblings and continues down one of those branches. The backtrack algorithm uses three lists plus one variable:
SL (state list): the states in the current path being tried. If a goal is found, SL contains the ordered list of states on the solution path.
NSL (new state list): nodes awaiting evaluation, i.e. nodes whose descendants have not yet been generated and searched.
DE (dead ends): states whose descendants have failed to contain a goal. If these states are encountered again, they are immediately eliminated from consideration.
CS: the current state.
The backtrack algorithm:
    SL := [Start]; NSL := [Start]; DE := []; CS := Start;
    while NSL != [] do
        if CS = goal (or meets the goal description) then
            return SL;                            // success
        if CS has no children (except nodes on DE, SL, NSL) then
            while SL is not empty and CS = first element of SL do
                add CS to DE;                     // record the dead end
                remove first element of SL;       // backtrack
                remove first element of NSL;
                CS := first element of NSL;
            add CS to SL;
        else
            place children of CS (except those on DE, SL, NSL) on NSL;
            CS := first element of NSL;
            add CS to SL;
    return FAIL;                                  // failure
Example: a trace of the backtrack algorithm:
Iteration  CS   SL           NSL                  DE
0          A    [A]          [A]                  []
1          B    [B A]        [B C D A]            []
2          E    [E B A]      [E F B C D A]        []
3          H    [H E B A]    [H I E F B C D A]    []
4          I    [I E B A]    [I E F B C D A]      [H]
5          F    [F B A]      [F B C D A]          [E I H]
6          J    [J F B A]    [J F B C D A]        [E I H]
7          C    [C A]        [C D A]              [B F J E I H]
8          G    [G C A]      [G C D A]            [B F J E I H]
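The algorithm above can be sketched directly from the SL / NSL / DE bookkeeping. The tree below is hypothetical, chosen to reproduce the node names in the trace.

```python
# A sketch of the backtrack algorithm; leaves are nodes with no entry.
children = {"A": ["B", "C", "D"], "B": ["E", "F"],
            "E": ["H", "I"], "F": ["J"], "C": ["G"]}

def backtrack(start, goal):
    SL, NSL, DE = [start], [start], []   # state list, new states, dead ends
    CS = start                           # current state
    while NSL:
        if CS == goal:
            return SL                    # success: SL holds the solution path
        kids = [c for c in children.get(CS, [])
                if c not in DE and c not in SL and c not in NSL]
        if not kids:                     # dead end: backtrack
            while SL and CS == SL[0]:
                DE.insert(0, CS)
                SL.pop(0)
                NSL.pop(0)
                CS = NSL[0] if NSL else None
            if CS is not None:
                SL.insert(0, CS)
        else:
            NSL = kids + NSL             # children go to the front of NSL
            CS = NSL[0]
            SL.insert(0, CS)
    return None                          # failure

print(backtrack("A", "G"))   # ['G', 'C', 'A'] -- the path, goal first
```

Running this reproduces the trace above step by step, ending with SL = [G C A].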
Blind Search
This type of search expands the nodes of the tree in a fixed order until it reaches the goal. The order can be by breadth, in which case the strategy is called breadth-first search, or by depth, in which case it is called depth-first search.
[Fig (2 1): Breadth-first search on a tree rooted at A.]
A trace of breadth-first search:
1. open = [A]; closed = [].
2. open = [B, C, D]; closed = [A].
3. open = [C, D, E, F]; closed = [B, A].
4. open = [D, E, F, G, H]; closed = [C, B, A].
5. open = [E, F, G, H, I, J]; closed = [D, C, B, A].
6. open = [F, G, H, I, J, K, L]; closed = [E, D, C, B, A].
7. open = [G, H, I, J, K, L, M]; closed = [F, E, D, C, B, A].
8. open = [H, I, J, K, L, M, N]; closed = [G, F, E, D, C, B, A].

[Fig (2 2): Depth-first search on the same tree.]
A trace of depth-first search:
1. open = [A]; closed = [].
2. open = [B, C, D]; closed = [A].
3. open = [E, F, C, D]; closed = [B, A].
4. open = [K, L, F, C, D]; closed = [E, B, A].
5. open = [S, L, F, C, D]; closed = [K, E, B, A].
6. open = [L, F, C, D]; closed = [S, K, E, B, A].
7. open = [T, F, C, D]; closed = [L, S, K, E, B, A].
8. open = [F, C, D]; closed = [T, L, S, K, E, B, A].
9. open = [M, C, D] (L is already on closed); closed = [F, T, L, S, K, E, B, A].
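Both blind strategies differ only in where new children are placed on the open list, which the following sketch makes explicit. The tree here is hypothetical, not the one in the figures.

```python
from collections import deque

# Breadth-first vs depth-first search with open and closed lists.
tree = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G", "H"],
        "D": ["I", "J"], "E": ["K", "L"], "F": ["M"]}

def search(start, goal, depth_first=False):
    open_list, closed = deque([start]), []
    while open_list:
        x = open_list.popleft()
        closed.append(x)
        if x == goal:
            return closed                 # order in which nodes were expanded
        kids = [k for k in tree.get(x, [])
                if k not in closed and k not in open_list]
        if depth_first:
            open_list.extendleft(reversed(kids))  # children go to the front
        else:
            open_list.extend(kids)                # children go to the back
    return None

print(search("A", "G"))                    # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
print(search("A", "G", depth_first=True))  # ['A', 'B', 'E', 'K', 'L', 'F', 'M', 'C', 'G']
```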
Heuristic Search
A heuristic is a method that might not always find the best solution but is guaranteed to find a good solution in reasonable time. By sacrificing completeness, it increases efficiency. Heuristic search is useful in solving problems which:
- could not be solved any other way;
- have solutions that take an infinite or very long time to compute.
Heuristic search methods are generate-and-test algorithms; among these methods are:
1- Hill climbing.
2- Best-first search.
3- The A and A* algorithms.
1) Hill Climbing
The idea here is that you do not keep a big list of states around; you keep track only of the one state you are considering, and the path that got you there from the initial state. At every state you choose the successor state that leads you closer to the goal (according to the heuristic estimate), and continue from there. The name "hill climbing" comes from the idea that you are trying to find the top of a hill, and you go in the direction that is up from wherever you are. This technique often works, but since it uses only local information it can be led astray.
The smaller peak is an example of a local maximum in a domain (for minimization problems, a local minimum). Hill climbing works well, is fast, and takes little memory if an accurate heuristic measure is available in the domain and there are no local maxima.
Hill Climbing Algorithm:
    cs := start state; open := [start]; stop := false; path := [start];
    while not stop do
        if cs = goal then
            return path;
        generate all children of cs and put them on open;
        if open = [] then
            stop := true
        else
            x := cs;
            for each state y in open do
                compute the heuristic value h(y);
                if y is better than x then x := y;
            if x is better than cs then
                cs := x; add cs to path;
            else
                stop := true;
    return failure;
A trace of the hill climbing algorithm searching for R4 (a local-maximum situation):
[Figure: a search tree rooted at A with heuristic values attached to each node — B2, C3, D1 under A; F2, G4 under C3; M4, N5 under G4; R4, S4 under N5; and further nodes E3, L4, Q5, H1, P7, T5, U7, O6.]

CS   Open        Closed
A    C3 B2 D1    -
C3   G4 F2       A
G4   N5 M4       A C3
N5   R4 S4       A C3 G4

The search halts at N5: its best child, R4, is not better than N5, so N5 is a local maximum and the goal R4 is never reached. The solution path is: A-C3-G4-N5.
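The trace can be sketched as follows. The tree and heuristic values are hypothetical, chosen to mirror the node labels above (a higher value means closer to the goal).

```python
# A sketch of hill climbing; the search stops when no child improves
# on the current state (a local maximum) or the state has no children.
children = {"A": ["B", "C", "D"], "C": ["F", "G"],
            "G": ["M", "N"], "N": ["R", "S"]}
h = {"A": 0, "B": 2, "C": 3, "D": 1, "F": 2, "G": 4,
     "M": 4, "N": 5, "R": 4, "S": 4}

def hill_climb(start):
    cs, path = start, [start]
    while True:
        kids = children.get(cs, [])
        if not kids:
            return path
        best = max(kids, key=h.get)    # best-valued child
        if h[best] <= h[cs]:           # no improvement: local maximum
            return path
        cs = best
        path.append(cs)

print(hill_climb("A"))   # ['A', 'C', 'G', 'N'] -- halts at the local maximum N
```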
Hill Climbing Problems
Hill climbing may fail due to one or more of the following:
1- A local maximum: a state that is better than all of its neighbours, but not better than some other states elsewhere in the search space.
[Figure: a search tree with heuristic values (e.g. B 10, C 20, E 25, G 30, N 35) in which hill climbing becomes trapped at a local maximum instead of reaching the goal state.]
2- A plateau: a flat area of the search space in which a number of states have the same best value. On a plateau it is not possible to determine the best direction in which to move.
[Figure: a tree rooted at A whose children B, C, D, and E all have the same value, 20 — a plateau.]
3- A ridge: an area of the search space that is higher than the surrounding areas, but that cannot be traversed by a single move in any one direction.
[Figure: a search tree with heuristic values illustrating a ridge.]
2) Best-First Search
    open := [start]; closed := [];
    while open != [] do
        remove the leftmost state from open, call it x;
        if x = goal then
            return the path from start to x
        else
            generate the children of x;
            for each child of x do
                case
                    the child is not already on open or closed:
                        assign a heuristic value to the child state;
                        add the child state to open;
                    the child is already on open:
                        if the child was reached along a shorter path than the
                        state currently on open, then give the state on open
                        this shorter path value;
                    the child is already on closed:
                        if the child was reached along a shorter path than the
                        state currently on closed, then give the state on closed
                        this shorter path value and move this state from
                        closed to open;
            put x on closed;
            reorder the states on open by heuristic merit (best value first);
    return failure;
[Figure: a search tree with heuristic values — A5 at the root; B4, C4, D6 below it; E5, F5 under B4; G4, H3 under C4; O2, P3 under H3; and further nodes J, R4, T5, Q.]

A trace of best-first search:
1. open = [A5]; closed = [].
2. open = [B4, C4, D6]; closed = [A5].
3. open = [C4, E5, F5, D6]; closed = [B4, A5].
4. open = [H3, G4, E5, F5, D6]; closed = [C4, B4, A5].
5. open = [O2, P3, G4, E5, F5, D6]; closed = [H3, C4, B4, A5].
6. open = [P3, G4, E5, F5, D6]; closed = [O2, H3, C4, B4, A5].
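A simplified sketch of best-first search follows, using a priority queue to keep open ordered by heuristic merit. The graph and heuristic values are hypothetical, chosen to mirror the trace above (lower h = better); the shorter-path bookkeeping of the full algorithm is omitted.

```python
import heapq

# A sketch of best-first search ordered by a heuristic h (lower = better).
children = {"A": ["B", "C", "D"], "B": ["E", "F"],
            "C": ["G", "H"], "H": ["O", "P"]}
h = {"A": 5, "B": 4, "C": 4, "D": 6, "E": 5, "F": 5,
     "G": 4, "H": 3, "O": 2, "P": 3}

def best_first(start, goal):
    open_list = [(h[start], start, [start])]   # priority queue on h
    closed = set()
    while open_list:
        _, x, path = heapq.heappop(open_list)  # best state on open
        if x == goal:
            return path
        if x in closed:
            continue
        closed.add(x)
        for child in children.get(x, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

print(best_first("A", "O"))   # ['A', 'C', 'H', 'O']
```

As in the trace, B is expanded before C (ties break alphabetically here), then H's low value pulls the search toward the goal O.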
Implementing the Heuristic Evaluation Function
Best-first search orders the open list using a heuristic function: the function assigns each node an estimate of how close it is to a goal, and the node with the best heuristic value is expanded next.
The A Algorithm
A trace of the A algorithm. The search graph is given by:
connect(a,b,3). connect(a,c,2). connect(a,d,4). connect(b,e,7).
connect(b,f,7). connect(c,g,3). connect(c,h,2). connect(d,i,4).
connect(d,j,3). connect(f,k,3). connect(h,k,1). connect(i,k,2).
[Figure: the search tree for the connect facts above, with edge costs — B3, C2, D4 under A; E7, F7 under B; G3, H2 under C; I4, J3 under D; and K reachable from F (cost 3), H (cost 1), and I (cost 2).]
After applying the evaluation function, the tree becomes:
connect(a,b,4). connect(a,c,3). connect(a,d,5). connect(b,e,9).
connect(b,f,9). connect(c,g,5). connect(c,h,4). connect(d,i,6).
connect(d,j,5). connect(f,k,6). connect(h,k,4). connect(i,k,5).
[Figure: the same tree after evaluation — B4, C3, D5 under A; E9, F9 under B; G5, H4 under C; I6, J5 under D; and K6, K4, K5 under F, H, and I respectively.]
A trace of the A algorithm:
1. open = [A]; closed = [].
2. open = [C3, B4, D5]; closed = [A].
3. open = [H4, B4, D5, G5]; closed = [C3, A].
4. open = [K4, B4, D5, G5]; closed = [H4, C3, A].
5. closed = [K4, H4, C3, A].
The path is: A - C3 - H4 - K4.
A* Algorithm
Definition: if the A algorithm is used with an evaluation function in which h(n) is less than or equal to the cost of the minimal path from n to the goal, the resulting search algorithm is called the A* algorithm.
A* Algorithm Properties
1) Admissibility: a search algorithm is admissible if it is guaranteed to find a minimal-cost path to a solution whenever such a path exists.
Admissibility definition: consider the evaluation function f(n) = g(n) + h(n). In determining the properties of admissible heuristics it is useful to define first an evaluation function f*:

f*(n) = g*(n) + h*(n)

where:
g*(n) is the cost of the shortest path from the start node to node n;
h*(n) is the actual cost of the shortest path from n to the goal;
f*(n) is thus the actual cost of the optimal path from a start node to a goal node that passes through node n.
If we employ best-first search with the evaluation function f*, the resulting search strategy is admissible. In the A algorithm, g(n), the cost of the current path to state n, is a reasonable estimate of g*(n), but they may not be equal: g(n) >= g*(n). They are equal only if the graph search has discovered the optimal path to state n. Similarly, we replace h*(n) with h(n), a heuristic estimate of the cost of the minimal path from n to a goal. If the A algorithm uses an evaluation function f in which h(n) <= h*(n), it is called the A* algorithm; all A* algorithms are admissible.
2) Monotonicity: a heuristic function h is monotone if:
a. for all states ni and nj, where nj is a descendant of ni, h(ni) - h(nj) <= cost(ni, nj), where cost(ni, nj) is the actual cost of going from state ni to nj;
b. the heuristic evaluation of the goal state is zero: h(goal) = 0.
3) Informedness: for two A* heuristics h1 and h2, if h1(n) <= h2(n) for all states n in the search space, heuristic h2 is said to be more informed than h1.
A trace of the A* algorithm:
1. open = [A]; closed = [].
2. open = [C2+2, B3+4, D4+4] = [C4, B7, D8]; closed = [A].
3. open = [H2+1, G3+3, B7, D8] = [H3, G6, B7, D8]; closed = [C4, A].
4. open = [K1+0, G6, B7, D8] = [K1, G6, B7, D8]; closed = [H3, C4, A].
5. closed = [K1, H3, C4, A].
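A* can be sketched on the connect graph from the previous section. Note the h values below are hypothetical admissible estimates (each is at most the true remaining cost); they are not the evaluation numbers used in the trace above.

```python
import heapq

# A sketch of A*: f(n) = g(n) + h(n), with g the path cost so far.
edges = {("a","b"): 3, ("a","c"): 2, ("a","d"): 4, ("b","e"): 7,
         ("b","f"): 7, ("c","g"): 3, ("c","h"): 2, ("d","i"): 4,
         ("d","j"): 3, ("f","k"): 3, ("h","k"): 1, ("i","k"): 2}
# Hypothetical admissible heuristic: h(n) <= true cost from n to k.
h = {"a": 5, "b": 8, "c": 3, "d": 5, "e": 20, "f": 3,
     "g": 20, "h": 1, "i": 2, "j": 20, "k": 0}

def a_star(start, goal):
    succ = {}
    for (u, v), c in edges.items():
        succ.setdefault(u, []).append((v, c))
    open_list = [(h[start], 0, start, [start])]    # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for child, c in succ.get(node, []):
            if child not in closed:
                heapq.heappush(open_list,
                               (g + c + h[child], g + c, child, path + [child]))
    return None

print(a_star("a", "k"))   # (['a', 'c', 'h', 'k'], 5)
```

As in the trace, the optimal path a - c - h - k (cost 5) is found.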
Complex Search Spaces and Problem-Solving Approaches

Tic-Tac-Toe
The complexity of the full search space is 9! = 9 × 8 × 7 × … × 1. It is therefore necessary to implement two conditions:
1- Problem reduction.
2- Guaranteeing the solution.
Using Heuristics in Games
1- Minimax Search: maximize for MAX parents and minimize for MIN parents. The values are backed up the graph to the children of the current state; these values are then used by the current state to select among its children. The figure below shows minimax on a hypothetical state space with four-ply look-ahead.
Alpha-beta pruning on the same kind of tree:
- A has β = 3 (A will be no larger than 3).
- B is pruned, since 5 > 3.
- C has α = 3 (C will be no smaller than 3).
- D is pruned, since 0 < 3.
- E is pruned, since 2 < 3.
- C is 3.
In the figure, MAX nodes take the maximum of their children's backed-up values and MIN nodes take the minimum.
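Minimax with alpha-beta pruning can be sketched as follows. The two-ply game tree and leaf scores here are hypothetical, chosen only to trigger one cut-off.

```python
# A sketch of minimax with alpha-beta pruning on a hypothetical tree.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}
leaf = {"D": 3, "E": 5, "F": 0, "G": 2}   # scores at the leaves

def alphabeta(node, alpha, beta, maximizing):
    if not tree[node]:
        return leaf[node]
    if maximizing:
        value = float("-inf")
        for child in tree[node]:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:      # beta cut-off: MIN will never allow this
                break
        return value
    value = float("inf")
    for child in tree[node]:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:          # alpha cut-off: MAX will never allow this
            break
    return value

print(alphabeta("A", float("-inf"), float("inf"), True))   # 3
```

Here B backs up min(3, 5) = 3 to MAX node A; at C, the first leaf F = 0 already falls below α = 3, so G is pruned without being examined.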
Problem Reduction
1. AO* (And\Or)
The A* algorithm considers search strategies for OR graphs, in which we want to find a single path to a goal. Another kind of structure, the AND-OR graph (or tree), is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved. This decomposition, or reduction, generates arcs that we call AND arcs. Just as in an OR graph, several arcs may emerge from a single node, indicating a variety of ways in which the original problem might be solved; this is why the structure is called not simply an AND graph but an AND-OR graph. To find a solution in an AND-OR graph, we need an algorithm similar to A*, but with the ability to handle the AND arcs appropriately. This algorithm should find a path from the starting node of the graph to a set of nodes representing solution states. The graph below illustrates why the A* algorithm is not adequate for searching AND-OR graphs.
[Figure: an AND-OR graph in which the node with the lowest individual f* value does not lie on the best solution path, because its sibling under the same AND arc is expensive.]
To search an implicit AND-OR graph, it is necessary to do three things at each step:
1- Traverse the graph, starting from the initial node and following the current best path, and accumulate the set of nodes that are on that path and have not yet been expanded.
2- Pick one of these unexpanded nodes and expand it. Add its successors to the graph and compute f* for each of them.
3- Change the f* estimate of the newly expanded node to reflect the new information provided by its successors.
2. Constraint Satisfaction
Many problems in AI can be viewed as problems of constraint satisfaction, in which the goal is to discover some problem state that satisfies a given set of constraints. By viewing a problem as one of constraint satisfaction, it is often possible to reduce substantially the amount of search required, compared with a method that attempts to form partial solutions directly by choosing specific values for components of the eventual solution. Constraint satisfaction is a search procedure that operates in a space of constraint sets. The initial state contains the constraints that are originally given in the problem description. A goal state is any state that has been constrained "enough", where "enough" must be defined for each problem.
Algorithm: Constraint Satisfaction
1. Propagate available constraints. To do this, first set OPEN to the set of all objects that must have values assigned to them in a complete solution. Then do until an inconsistency is detected or until OPEN is empty:
(a) Select an object OB from OPEN. Strengthen as much as possible the set of constraints that apply to OB.
(b) If this set is different from the set that was assigned the last time OB was examined, or if this is the first time OB has been examined, then add to OPEN all objects that share any constraints with OB.
(c) Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report the solution.
3. If the union of the constraints discovered above defines a contradiction, then return failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order to proceed. To do this, loop until a solution is found or all possible solutions have been eliminated:
(a) Select an object whose value is not yet determined and select a way of strengthening the constraints on that object.
(b) Recursively invoke constraint satisfaction with the current set of constraints augmented by the strengthening constraint just selected.
Example: consider the cryptarithmetic problem
Problem:
   SEND
 + MORE
 ------
  MONEY
Initial state: no two letters have the same value, and the sums of the digits must be as shown in the problem. Propagating the constraints (with C1, C2, C3 the carries out of the three rightmost columns):
- M = 1, since two single-digit numbers plus a carry cannot total more than 19.
- S = 8 or 9, since S + M + C3 > 9 (to generate the carry) and M = 1, so S + 1 + C3 > 9, hence S + C3 > 8.
- O = 0, since S + M(1) + C3(<= 1) must be at least 10 to generate a carry, and it can be at most 11. But M is already 1, so O must be 0.
- N = E or E + 1, depending on the value of C2. But N cannot have the same value as E, so N = E + 1 and C2 is 1.
- For C2 to be 1, the sum N + R + C1 must be greater than 9, so N + R must be greater than 8.
- N + R cannot be greater than 18, even with a carry in, so E cannot be 9.
- Guessing E = 2, we get N = 3, since N = E + 1.
- R = 8 or 9, since R + N(3) + C1(1 or 0) must equal 12 to produce E = 2 with the carry C2 = 1. Thus R + 3 + (0 or 1) = 12, and R = 8 or 9.
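The deductions above can be checked mechanically. The sketch below is not the constraint-propagation algorithm itself: it simply uses the propagated facts M = 1 and O = 0 to cut the search, then tries the remaining digits exhaustively.

```python
from itertools import permutations

# Exhaustively check SEND + MORE = MONEY, with M=1 and O=0 fixed
# by the carry analysis above; the other six letters take distinct
# digits from 2..9 (0 and 1 are already used).
def solve():
    for s, e, n, d, r, y in permutations([2, 3, 4, 5, 6, 7, 8, 9], 6):
        send  = 1000 * s + 100 * e + 10 * n + d
        more  = 1000 + 10 * r + e                 # M = 1, O = 0
        money = 10000 + 100 * n + 10 * e + y      # M = 1, O = 0
        if send + more == money:
            return {"S": s, "E": e, "N": n, "D": d,
                    "M": 1, "O": 0, "R": r, "Y": y}

print(solve())   # S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2  (9567 + 1085 = 10652)
```

Note the found solution has E = 5, not E = 2: the guess E = 2 made in the trace above eventually leads to a contradiction, and the constraint-satisfaction algorithm must backtrack and guess again.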
3. Means-Ends Analysis
The means-ends analysis process centres on the detection of differences between the current state and the goal state. Once such a difference is isolated, an operator that can reduce the difference must be found.
Example: solve the problem of moving a desk with two things on it from one room to another. The difference table below records which operators (columns) reduce which differences (rows); a * marks an applicable operator.

A Difference Table:
                       Push   Carry   Walk   Pickup   Putdown   Place
Move object             *      *
Move robot                             *
Clear object                                  *
Get object on object                                              *
Get arm empty                                           *         *
Be holding object                             *
Knowledge Representation
There are many methods that can be used for knowledge representation; they can be described as follows:
1- Semantic nets.
2- Conceptual graphs.
3- Frames.
4- Propositional and predicate logic.
5- Clause forms.
These fall into two families: conceptual representations (the first three) and logical representations (the last two).
1) Semantic Net
A semantic net consists of a set of nodes and arcs. Each node is drawn as a rectangle and describes an object, a concept, or an event. The arcs connect the nodes and are divided into three types:
- is-a
- has-a
- can
Example 1: A computer has many parts, such as a CPU. Computers are divided into two types: the first is the mainframe and the second is the personal computer. A mainframe has a line printer with large sheets, but the personal computer has a laser printer. IBM is an example of a mainframe, and the PIII and PIV are examples of personal computers.
[Figure: the semantic net for Example 1 — Computer has-a CPU; Mainframe is-a Computer; Personal Computer is-a Computer; IBM is-a Mainframe; PIII and PIV is-a Personal Computer; Mainframe has-a line printer; Personal Computer has-a laser printer.]
2) Conceptual Graphs
A conceptual graph links concept nodes through relation nodes such as agent, object, receiver, instrument, part-of, proposition, and time.
[Figures: conceptual graphs using agent, object, and time relations, e.g. for past-tense sentences involving a book and a disk.]
[Figure: a conceptual graph for "The dog scratches its ear with its paw" — dog (agent) scratch (object) ear, scratch (instrument) paw, paw (part-of) dog, time: present.]
[Figure: a conceptual graph for "Ahmed tells Saad that he saw Suha" — Ahmed (agent) tell (receiver) Saad, tell (proposition) saw (receiver) Suha, tell at time present, saw at time past.]
3) Frames
A frame hierarchy can be stored using Frame-list and Slot-list facts:
Frame-list(node-name, parent, [children]).
Slot-list(node-name, parent).
Example:
[Figure: a frame hierarchy — computer, with children Internal Structure (disk controller, motherboard), monitor, keyboard, and peripherals (printer, plotters, mouse).]
Frame-list(computer, _, [internal structure, monitor, keyboard, peripherals]).
Frame-list(internal structure, computer, [disk controller, motherboard]).
Frame-list(printer, peripherals, [speed, ports]).
Slot-list(motherboard, internal structure).
Slot-list(mouse, peripherals).
Homework 2: use a semantic net and a conceptual graph to represent the following statements:
1) Suha sends a book to Tom.
2) Tom believes that Mustafa likes cheese.
3) The monkey grasps the banana with its hand.
4) Propositional and Predicate Logic
Example 1:
1) Fido is a dog: dog(Fido).
2) All dogs are animals: ∀x (dog(x) → animal(x)).
3) All animals will die: ∀y (animal(y) → die(y)).
4) Prove that Fido will die: die(Fido).
Example 2:
1) Anyone passing their history exam and winning the lottery is happy:
∀x (pass(x, history exam) ∧ win(x, lottery) → happy(x)).
2) Anyone who studies or is lucky can pass all exams:
∀x ∀y (study(x) ∨ lucky(x) → pass(x, y)).
3) John did not study, but he is lucky: ¬study(John) ∧ lucky(John).
4) Anyone who is lucky wins the lottery: ∀x (lucky(x) → win(x, lottery)).
5) Prove that John is happy: happy(John).
Example 3: All people who are not poor and are smart are happy. Those people who read are not stupid. John can read and is wealthy. Happy people have exciting lives. Can anyone be found with an exciting life?
1) All people who are not poor and are smart are happy: ∀x (¬poor(x) ∧ smart(x) → happy(x)).
2) Those people who read are not stupid: ∀x (read(x) → ¬stupid(x)).
3) John can read and is wealthy: read(John) ∧ wealthy(John).
4) Happy people have exciting lives: ∀w (happy(w) → exciting(w)).
5) The negation of the conclusion is: ¬∃w exciting(w).
Homework: convert the following statements to predicate logic.
Marcus was a man. Marcus was a Pompeian. All Pompeians were Romans. Caesar was a ruler. All Romans were either loyal to Caesar or hated him. Everyone is loyal to someone. People only try to assassinate rulers they are not loyal to. Prove that people who are loyal to Caesar did not try to assassinate Caesar.
5) Clause Forms
The statements produced by the predicate-logic method are nested and very complex to understand, which leads to more complexity in the resolution stage. Therefore, a conversion algorithm is used to transform predicate-logic statements into clause form.
Statistical Reasoning
In many practical problem-solving situations, the available knowledge is incomplete or inexact. In these cases the knowledge is inadequate to support the desired sorts of logical inferences. Probabilistic reasoning methods allow AI systems to use uncertain or probabilistic knowledge in ways that take the uncertainty into account.
1. Probabilistic Reasoning
Probabilistic techniques are generally capable of managing imprecision of data and, occasionally, uncertainty of knowledge.
Example:
PRi { IF (the-observed-evidence-is-rash) (x),
      AND (the-observed-evidence-is-fever) (y),
      AND (the-observed-evidence-is-high-pain) (z),
      THEN (the-patient-has-German-measles) } (CF)
where x, y, z denote the degrees of precision / belief / conditional probability that the patient shows each symptom, assuming he has German measles. CF represents the degree of certainty of the rule (its certainty factor), i.e. the probability that the patient has German measles given the prior occurrence of the antecedent clauses.

Bayesian Reasoning
The conditional probability P(H|E) is defined by
P(H|E) = P(H ∩ E) / P(E) = P(H & E) / P(E),
where H and E are two events and P(H ∩ E) denotes the joint occurrence of the events H and E. Analogously,
P(E|H) = P(E & H) / P(H).
Since P(E & H) = P(H & E), we find
P(E & H) = P(E|H) · P(H) = P(H|E) · P(E),
which gives Bayes' rule: P(H|E) = P(E|H) · P(H) / P(E).
Example: H = "John has malaria", E = "John has a high fever". The general knowledge consists of:
1. P(H): the probability that a person has malaria.
2. P(E|H): the probability that a person has a high fever, given that he has malaria.
3. P(E|¬H): the probability that a person has a high fever, given that he does not have malaria.
Together with the fact that John has the symptom of high fever, these let us compute P(H|E), the probability that John has malaria given that he has a high fever.
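The computation can be sketched numerically. The probabilities below are hypothetical, chosen only to illustrate applying Bayes' rule with the three pieces of general knowledge listed above.

```python
# Bayes' rule for the malaria example; all numbers are hypothetical.
p_h = 0.01          # P(H): prior that a person has malaria
p_e_h = 0.9         # P(E|H): high fever given malaria
p_e_not_h = 0.08    # P(E|~H): high fever given no malaria

# P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)   # total probability of a high fever
p_h_e = p_e_h * p_h / p_e
print(round(p_h_e, 3))   # 0.102
```

Even with a strong symptom, the posterior stays modest here because the prior P(H) is small.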
2. Bayesian Networks
The main idea in this approach is that, to describe the real world, it is not necessary to use a huge joint probability table listing the probabilities of all conceivable combinations of events. Most events are conditionally independent of most others, so their interactions need not be considered. Instead, we can use a more local representation describing clusters of events that interact.
Example: suppose there are two events which could cause grass to be wet: either the sprinkler is on or it is raining. Also suppose that the rain has a direct effect on the use of the sprinkler (namely, when it rains the sprinkler is usually not turned on). The situation can then be modelled with a Bayesian network. All three variables have two possible values, T (true) and F (false), and the variable names are abbreviated to G = grass wet, S = sprinkler, and R = rain. The model can answer questions like "What is the probability that it is raining, given that the grass is wet?"
3. Dempster-Shafer Theory
This approach considers sets of propositions and assigns to each of them an interval [Belief, Plausibility] in which the degree of belief must lie. Belief (Bel) measures the strength of the evidence in favour of a set of propositions; it ranges from 0 (indicating no evidence) to 1 (denoting certainty). Plausibility is defined by
Pl(s) = 1 - Bel(¬s).
Suppose we have two pieces of uncertain evidence relevant to the same frame of discernment, represented by basic probability assignments m1 and m2, respectively. Dempster's rule of combination allows us to combine the two belief functions into a new assignment m3:
m3(A) = Σ{X ∩ Y = A} m1(X) · m2(Y) / (1 - Σ{X ∩ Y = ∅} m1(X) · m2(Y)).
Example: in a simplified diagnosis problem, the frame of discernment Θ might consist of the set {All, Flu, Cold, Pneu}, where All = allergy, Flu = flu, Cold = cold, and Pneu = pneumonia.
m1 corresponds to our belief after observing fever:
{Flu, Cold, Pneu} (0.6), Θ (0.4).
m2 corresponds to our belief after observing a runny nose:
{All, Flu, Cold} (0.8), Θ (0.2).
Then we can compute the combination m3 using the following table:

                  {A,F,C} (0.8)     Θ (0.2)
{F,C,P} (0.6)     {F,C} (0.48)      {F,C,P} (0.12)
Θ (0.4)           {A,F,C} (0.32)    Θ (0.08)

As a result of combining m1 and m2, we obtain m3:
{Flu, Cold} (0.48), {All, Flu, Cold} (0.32), {Flu, Cold, Pneu} (0.12), Θ (0.08).
Let m4 correspond to our belief given just the evidence that the problem goes away when the patient goes on a trip:
{All} (0.9), Θ (0.1).
Combining m3 and m4:

             {F,C} (0.48)     {A,F,C} (0.32)    {F,C,P} (0.12)    Θ (0.08)
{A} (0.9)    ∅ (0.432)        {A} (0.288)       ∅ (0.108)         {A} (0.072)
Θ (0.1)      {F,C} (0.048)    {A,F,C} (0.032)   {F,C,P} (0.012)   Θ (0.008)
Then, after normalizing by the mass assigned to the empty set (0.432 + 0.108 = 0.54), the final combined belief function m5 is:
{Flu, Cold} (0.104), {All} (0.783), {All, Flu, Cold} (0.070), {Flu, Cold, Pneu} (0.026), Θ (0.017).
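Dempster's rule of combination can be sketched compactly by representing each basic probability assignment as a dictionary from sets of hypotheses to mass:

```python
# A sketch of Dempster's rule of combination for the example above.
def combine(m1, m2):
    raw, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                raw[inter] = raw.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2          # mass falling on the empty set
    # Normalize by the mass not assigned to the empty set.
    return {s: v / (1 - conflict) for s, v in raw.items()}

theta = frozenset({"All", "Flu", "Cold", "Pneu"})
m1 = {frozenset({"Flu", "Cold", "Pneu"}): 0.6, theta: 0.4}
m2 = {frozenset({"All", "Flu", "Cold"}): 0.8, theta: 0.2}
m3 = combine(m1, m2)
# m3: {Flu,Cold} 0.48, {All,Flu,Cold} 0.32, {Flu,Cold,Pneu} 0.12, theta 0.08
m4 = {frozenset({"All"}): 0.9, theta: 0.1}
m5 = combine(m3, m4)
print(round(m5[frozenset({"All"})], 3))      # 0.783
```

Running this reproduces the tables above, including the normalization by 1 - 0.54 = 0.46 in the final step.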