09TL BATCH
This class will meet: 8:30–10:30 a.m. (Mondays), 10:30 a.m.–12:30 p.m. (Wednesdays), 10:30 a.m.–12:30 p.m. (Thursdays), 10:00–11:30 a.m. (Fridays)
Today's Lecture: Lecture #35 / Lecture #37 (15-03-2012)
4/24/2012
Introduction
Topics Covered
Transition matrix
Probability vectors
Absorbing vs. non-absorbing Markov chains
Steady-state matrices
Markov Chains
A Markov chain is a weighted digraph representing a discrete-time system that can be in any of a number of discrete states. The transition matrix for a Markov chain is the transpose of the matrix of probabilities of moving from one state to another, where P_ij = probability of moving from state i to state j.
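As a minimal illustration of this convention (not from the slides), the sketch below builds a hypothetical two-state chain in Java, the language used for the analytical computation later in the deck. Entry T[j][i] holds the probability of moving from state i to state j, so one step of the system is x' = T x for a column probability vector x.

```java
// A minimal sketch, assuming an illustrative two-state "weather" chain
// (sunny = 0, rainy = 1); the probabilities here are made up, not from the deck.
public class TwoStateChain {
    // Transition matrix in the lecture's (transposed) convention:
    // T[j][i] = probability of moving from state i to state j.
    // Illustrative: sunny->sunny 0.9, sunny->rainy 0.1,
    //               rainy->sunny 0.5, rainy->rainy 0.5.
    static final double[][] T = {
        {0.9, 0.5},   // row "to sunny"
        {0.1, 0.5}    // row "to rainy"
    };

    // One transition step: x' = T x.
    static double[] step(double[] x) {
        double[] next = new double[x.length];
        for (int to = 0; to < T.length; to++)
            for (int from = 0; from < x.length; from++)
                next[to] += T[to][from] * x[from];
        return next;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 0.0};    // start sunny with probability 1
        x = step(x);
        System.out.printf("after one step: sunny=%.2f rainy=%.2f%n", x[0], x[1]);
    }
}
```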
Transition Matrix
Below is the layout of the transition matrix for Chutes and Ladders (101×101):

    [ p_{0,0}    p_{0,1}    ...  p_{0,100}   ]
    [ p_{1,0}    p_{1,1}    ...  p_{1,100}   ]
    [    .          .                .       ]
    [    .          .                .       ]
    [ p_{100,0}  p_{100,1}  ...  p_{100,100} ]
Probability Vector
The probability vector is a column vector whose entries are nonnegative and sum to one. The entries can represent the probabilities of finding a system in each of the states.
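The two defining properties can be checked directly; the following Java sketch (illustrative, not part of the original project code) validates a candidate probability vector.

```java
// A minimal validity check for a probability vector, following the slide's
// definition: entries nonnegative and summing to one.
public class ProbabilityVector {
    static boolean isProbabilityVector(double[] x) {
        double sum = 0.0;
        for (double v : x) {
            if (v < 0) return false;   // entries must be nonnegative
            sum += v;
        }
        // allow a small floating-point tolerance on the sum
        return Math.abs(sum - 1.0) < 1e-9;
    }

    public static void main(String[] args) {
        System.out.println(isProbabilityVector(new double[]{0.2, 0.5, 0.3})); // valid
        System.out.println(isProbabilityVector(new double[]{0.6, 0.6}));      // sums to 1.2
    }
}
```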
Common Question
A common question arising in Markov-chain models is: what is the long-term probability that the system will be in each state? The vector containing these long-term probabilities is called the steady-state vector of the Markov chain.
Steady-state matrix
The steady-state probabilities are the average probabilities that the system will be in a certain state after a large number of transition periods. The convergence to the steady-state matrix is independent of the initial distribution. For Chutes and Ladders, these are the long-term probabilities of being on certain squares.
lim_{n→∞} Pⁿ x = m   (for any initial probability vector x)
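Given the stated independence from the initial distribution, the steady-state vector can be approximated by applying P repeatedly to any starting probability vector. The Java sketch below uses an illustrative two-state chain (not the 101-square board) and shows two very different initial distributions converging to the same m.

```java
// A minimal sketch, assuming an illustrative two-state chain in the lecture's
// column convention (T[to][from] = probability of moving from -> to).
public class SteadyState {
    static final double[][] T = {{0.9, 0.5}, {0.1, 0.5}};

    // Approximate m = lim_{n->inf} T^n x by repeatedly applying T.
    static double[] steadyState(double[] x, int steps) {
        for (int n = 0; n < steps; n++) {
            double[] next = new double[x.length];
            for (int to = 0; to < x.length; to++)
                for (int from = 0; from < x.length; from++)
                    next[to] += T[to][from] * x[from];
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        // Two different starting distributions converge to the same m,
        // illustrating independence from the initial distribution.
        double[] a = steadyState(new double[]{1.0, 0.0}, 200);
        double[] b = steadyState(new double[]{0.0, 1.0}, 200);
        System.out.printf("m = (%.4f, %.4f); from the other start: (%.4f, %.4f)%n",
                a[0], a[1], b[0], b[1]);
    }
}
```

For this chain the exact steady state is (5/6, 1/6), which the iteration reaches to machine precision well within 200 steps.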
4/24/2012 Principles of Teletraffic Engineering 16
Analytical Computation
Programming Language: Java
Objectives:
Compute the transition matrix
Compute the probability vectors
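A sketch of how such a transition matrix could be computed, assuming a uniform 1–6 spinner and the overshoot rule described later in the deck (a roll past 100 leaves the token in place). The jump table below contains only a few illustrative chutes and ladders (1 → 38 is the one named in the slides; the other entries are placeholders, not the full board).

```java
// A minimal sketch of building the matrix M[i][j] = P(i -> j) for a
// Chutes-and-Ladders-style board; the deck's transition matrix is its transpose.
public class TransitionMatrixBuilder {
    static final int SQUARES = 101;   // squares 0..100

    // jumps[s] = destination if square s is the foot of a ladder or top of a chute.
    static int[] makeJumps() {
        int[] jumps = new int[SQUARES];
        for (int s = 0; s < SQUARES; s++) jumps[s] = s;
        jumps[1] = 38;    // ladder named in the slides
        jumps[16] = 6;    // illustrative chute (placeholder)
        jumps[87] = 24;   // illustrative chute (placeholder)
        return jumps;
    }

    static double[][] build() {
        int[] jumps = makeJumps();
        double[][] M = new double[SQUARES][SQUARES];
        for (int i = 0; i < 100; i++) {
            for (int roll = 1; roll <= 6; roll++) {
                int target = i + roll;
                // overshooting 100 leaves the token on its current square
                int j = (target > 100) ? i : jumps[target];
                M[i][j] += 1.0 / 6.0;
            }
        }
        M[100][100] = 1.0;   // square 100 is absorbing
        return M;
    }

    public static void main(String[] args) {
        double[][] M = build();
        System.out.printf("P(0 -> 38) = %.4f%n", M[0][38]); // via the 1 -> 38 ladder
    }
}
```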
Simulation Techniques
Simulation
Programming Language: C++
Objectives:
Find frequencies for being at each position
Find the mean number of moves to win
Find the standard deviation
Simulate a large number of games
Technique One
How we simulated the game
An array of 101 integers represented the board; each array entry represented a state.
Pseudo-states held the value of the square at the end of the ladder or the chute; e.g., index 1 represented square 1 and held 38, so if the index became 1, the token would move to square 38. Normal states held their own index, so the index would stay at its value.
Technique One
When the index reached 100, the game was over. If the index went above 100, it was reset to its previous value, and this repeated until square 100 was hit. The mean and standard deviation were calculated as the games ran. The program printed the most and the fewest moves needed to win, and the number of times each square was landed on over 250,000 games, along with the frequency of each.
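The steps above can be sketched as follows. This is an illustrative reconstruction rather than the project's C++ source (it is written in Java to match the other examples here), and the board contains only a couple of jumps (1 → 38 from the slides; the rest of the real board's chutes and ladders are omitted).

```java
import java.util.Random;

// A minimal sketch of Technique One: an array of 101 ints where pseudo-states
// hold the jump destination and normal states hold their own index.
public class ChutesSimulation {
    static int[] makeBoard() {
        int[] board = new int[101];
        for (int s = 0; s <= 100; s++) board[s] = s;  // normal states
        board[1] = 38;    // pseudo-state from the slides: landing on 1 sends you to 38
        board[16] = 6;    // illustrative chute (placeholder)
        return board;
    }

    // Play one game; return the number of moves needed to reach square 100.
    static int playOne(int[] board, Random rng) {
        int index = 0, moves = 0;
        while (index != 100) {
            int roll = rng.nextInt(6) + 1;    // uniform spinner, 1..6
            int next = index + roll;
            if (next > 100) next = index;     // overshoot: reset to previous square
            index = board[next];              // follow chute/ladder if any
            moves++;
        }
        return moves;
    }

    public static void main(String[] args) {
        int[] board = makeBoard();
        Random rng = new Random(42);
        int games = 250_000;
        long sum = 0, sumSq = 0;
        for (int g = 0; g < games; g++) {
            int m = playOne(board, rng);
            sum += m;
            sumSq += (long) m * m;
        }
        double mean = (double) sum / games;
        double sd = Math.sqrt((double) sumSq / games - mean * mean);
        System.out.printf("mean moves = %.2f, std dev = %.2f%n", mean, sd);
    }
}
```

On this stripped-down board the quickest possible win is 12 moves (roll 1 to take the 1 → 38 ladder, then ten 6s and a 2); the real board, with more ladders, allows shorter games.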
Technique Two
Run the game as a non-absorbing chain: if the index is 100 or above, subtract 100, so the game runs on a non-ending board.
Calculate the frequency of landing on each square. This was done for comparison with the non-absorbing analytical model.
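A minimal sketch of this non-absorbing variant, again in Java rather than the project's C++, with the same illustrative two-jump board as before.

```java
import java.util.Random;

// Technique Two sketch: wrap the index instead of absorbing at 100,
// and tally long-run landing frequencies for squares 0..99.
public class NonAbsorbingSim {
    static int[] makeBoard() {
        int[] board = new int[101];
        for (int s = 0; s <= 100; s++) board[s] = s;
        board[1] = 38;    // ladder from the slides
        board[16] = 6;    // illustrative chute (placeholder)
        return board;
    }

    static double[] frequencies(long steps, long seed) {
        int[] board = makeBoard();
        long[] counts = new long[100];
        Random rng = new Random(seed);
        int index = 0;
        for (long t = 0; t < steps; t++) {
            int next = index + rng.nextInt(6) + 1;
            if (next >= 100) next -= 100;   // non-ending board: wrap instead of absorbing
            index = board[next];            // follow chute/ladder if any
            counts[index]++;
        }
        double[] freq = new double[100];
        for (int s = 0; s < 100; s++) freq[s] = (double) counts[s] / steps;
        return freq;
    }

    public static void main(String[] args) {
        double[] f = frequencies(1_000_000L, 7);
        System.out.printf("frequency of square 38 = %.4f%n", f[38]);
    }
}
```

Note that pseudo-state squares such as 1 and 16 never appear in the tallies, since the token is redirected the moment it lands there.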
Repeated Play
Analytical Computation
Programming Language: Java
Objectives:
Compute the non-absorbing matrix
Compute the corresponding probability vectors
Long-run probabilities, theoretical vs. experimental:

Square   Theoretical   Experimental
P0       1.775%        1.76%
P5       0.581%        0.552%
P26      2.89%         2.94%
P42      2.09%         2.13%
P65      0.599%        0.573%
P99      0.452%        0.441%
Conclusion
The theoretical and experimental results matched: the Markov chain model works for Chutes and Ladders.
Chutes and Ladders is a game for 3- to 6-year-olds; we can make it more interesting by changing some of the rules.
Sources
Mooney and Swift. A Course in Mathematical Modeling. MAA Publications, 1999.
End