Markov Analysis

Introduction
Markov Analysis
A technique dealing with probabilities of future occurrences using currently known probabilities
Numerous applications, including:
Business (e.g., market share analysis)
Bad debt prediction
University enrollment prediction
Machine breakdown prediction

Markov Analysis
Matrix of Transition Probabilities
Shows the likelihood that the system will change from one state to another from one time period to the next
This is the Markov process.
It enables the prediction of future states or conditions.

States and State Probabilities
States are used to identify all possible conditions of a process or system.
A system can exist in only one state at a time.
Examples include:
Working and broken states of a machine
Three shops in town, with a customer able to patronize only one at a time
Courses in a student schedule, with the student able to occupy only one class at a time

Assumptions of Markov Analysis
1. There is a finite number of possible states.
2. The probability of changing states remains the same over time.
3. A future state is predictable from the previous state and the matrix of transition probabilities.
4. The size and states of the system remain the same during the analysis.
5. States are collectively exhaustive: all possible states have been identified.
6. States are mutually exclusive: only one state at a time is possible.

States and State Probabilities continued
1. Identify all states.
2. Determine the probability that the system is in each state.
This information is placed into a vector of state probabilities:
π(i) = vector of state probabilities for period i
     = (π1, π2, π3, …, πn)
where
n = number of states
π1, π2, …, πn = P(being in state 1, state 2, …, state n)
Most of the time, problems deal with more than one item!

States and State Probabilities continued
Three Grocery Stores example:
100,000 customers shop monthly at the three grocery stores:
State 1 = store 1: 40,000/100,000 = 40%
State 2 = store 2: 30,000/100,000 = 30%
State 3 = store 3: 30,000/100,000 = 30%
The vector of state probabilities:
π(1) = (0.4, 0.3, 0.3)
where
π(1) = vector of state probabilities in period 1
π1 = 0.4 = P(a person being in store 1)
π2 = 0.3 = P(a person being in store 2)
π3 = 0.3 = P(a person being in store 3)

States and State Probabilities continued
Three Grocery Stores example, continued:
The probabilities in the vector of state probabilities represent the stores' market shares in the first period.
In period 1, the market shares are:
Store 1: 40%
Store 2: 30%
Store 3: 30%
But every month, customers who frequent one store have some likelihood of visiting another store.
Customers of each store have different probabilities of visiting the other stores.

States and State Probabilities continued
Three Grocery Stores example, continued:
Store-specific customer probabilities for visiting a store in the next month:
Store 1:
Return to Store 1 = 80%
Visit Store 2 = 10%
Visit Store 3 = 10%
Store 2:
Visit Store 1 = 10%
Return to Store 2 = 70%
Visit Store 3 = 20%
Store 3:
Visit Store 1 = 20%
Visit Store 2 = 20%
Return to Store 3 = 60%

States and State Probabilities continued
Three Grocery Stores example, continued:
Combining the starting market shares with the customer probabilities for visiting a store next period yields the market shares in the next period:

          Initial Share   P(to 1)   P(to 2)   P(to 3)
Store 1:       40%          80%       10%       10%
  next-period contribution: 32%        4%        4%
Store 2:       30%          10%       70%       20%
  next-period contribution:  3%       21%        6%
Store 3:       30%          20%       20%       60%
  next-period contribution:  6%        6%       18%

New shares:                 41%       31%       28%   = 100%
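
The same arithmetic, as a minimal Python sketch using the example's numbers (variable names are illustrative):

```python
# Next-period market shares: each store's share times the probability
# of its customers visiting each store, summed over stores.
shares = [0.40, 0.30, 0.30]        # initial shares for stores 1, 2, 3
P = [[0.8, 0.1, 0.1],              # row i = where store i's customers go
     [0.1, 0.7, 0.2],
     [0.2, 0.2, 0.6]]

new_shares = [sum(shares[i] * P[i][j] for i in range(3)) for j in range(3)]
print(new_shares)                  # approximately [0.41, 0.31, 0.28]
```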

Matrix of Transition Probabilities
To calculate periodic changes, it is much more convenient to use a matrix of transition probabilities:
a matrix of conditional probabilities of being in a future state given a current state.
Let Pij = conditional probability of being in state j in the future given the current state i:
Pij = P(state j at time 1 | state i at time 0)
For example, P12 is the probability of being in state 2 in the future given that the system was in state 1 in the prior period.

Matrix of Transition Probabilities continued
Let P = matrix of transition probabilities:

    P11  P12  P13  ...  P1n
P = P21  P22  P23  ...  P2n
     .    .    .         .
    Pm1  Pm2  Pm3  ...  Pmn

Important:
Each row must sum to 1.
But the columns do NOT necessarily sum to 1.

Matrix of Transition Probabilities continued
Three Grocery Stores, revisited
The previously identified transition probabilities for each of the stores can now be put into a matrix:

    0.8  0.1  0.1
P = 0.1  0.7  0.2
    0.2  0.2  0.6

Row 1 interpretation:
0.8 = P11 = P(in state 1 after being in state 1)
0.1 = P12 = P(in state 2 after being in state 1)
0.1 = P13 = P(in state 3 after being in state 1)
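
This matrix can be written down and sanity-checked in a short Python sketch (NumPy assumed): each row sums to 1, while the columns need not.

```python
import numpy as np

# Grocery-store transition matrix from the slide.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])

assert np.allclose(P.sum(axis=1), 1.0)   # each row must sum to 1
print(P.sum(axis=0))                     # column sums: [1.1, 1.0, 0.9]
```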

Predicting Future Market Shares
Grocery Store example
A purpose of Markov analysis is to predict the future.
Given
1. the vector of state probabilities, and
2. the matrix of transition probabilities,
it is easy to find the state probabilities in the future.
This type of analysis allows the computation of the probability that a person will be at one of the grocery stores in the future.
Since this probability is equal to market share, future market shares can be determined.

Predicting Future Market Shares continued
Grocery Store example
When the current period is 0, the state probabilities for the next period (period 1) can be found using:
π(1) = π(0)P
Generally, in any period n, the state probabilities for period n+1 can be computed as:
π(n+1) = π(n)P
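
A minimal sketch of this one-step update rule (the function name next_state is my own, chosen for illustration; NumPy assumed):

```python
import numpy as np

def next_state(pi, P):
    """One Markov step: pi(n+1) = pi(n) P, a row vector times the matrix."""
    return np.asarray(pi) @ np.asarray(P)
```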

Predicting Future States continued
π = state probabilities
π(0) = (0.4, 0.3, 0.3)

    0.8  0.1  0.1
P = 0.1  0.7  0.2
    0.2  0.2  0.6

π(1) = π(0)P

                        0.8  0.1  0.1
π(1) = (0.4, 0.3, 0.3)  0.1  0.7  0.2
                        0.2  0.2  0.6

π(1) = [(0.4)(0.8) + (0.3)(0.1) + (0.3)(0.2),
        (0.4)(0.1) + (0.3)(0.7) + (0.3)(0.2),
        (0.4)(0.1) + (0.3)(0.2) + (0.3)(0.6)]
π(1) = [0.41, 0.31, 0.28]

Predicting Future Market Shares continued
In general:
π(n) = π(0)Pⁿ
Therefore, the state probabilities n periods in the future can be obtained from the current state probabilities and the matrix of transition probabilities.
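
A sketch of the n-period prediction using a matrix power (NumPy assumed; n = 3 is an arbitrary choice for illustration):

```python
import numpy as np

pi0 = np.array([0.4, 0.3, 0.3])            # current state probabilities
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])

n = 3
pi_n = pi0 @ np.linalg.matrix_power(P, n)  # pi(n) = pi(0) P^n
print(pi_n)                                # state probabilities 3 periods out
```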

Another Example of Markov Analysis: Machine Operations

States and State Probabilities
For example, suppose we are dealing with only one machine, and it is known to be functioning correctly at present.
The vector of states can then be written:
π(0) = (1, 0)
where
π(0) = vector of states for the machine in the current period
π1 = 1 = P(being in state 1) = P(machine working)
π2 = 0 = P(being in state 2) = P(machine broken)

Markov Analysis of Machine Operations

P = 0.8  0.2
    0.1  0.9

where
P11 = 0.8 = probability the machine is working this period, given it was working last period
P12 = 0.2 = probability the machine is not working this period, given it was working last period
P21 = 0.1 = probability the machine is working this period, given it was not working last period
P22 = 0.9 = probability the machine is not working this period, given it was not working last period

Markov Analysis of Machine Operations continued
What is the probability the machine will be working next month?

π(1) = π(0)P
              0.8  0.2
     = (1, 0) 0.1  0.9
     = [(1)(0.8) + (0)(0.1), (1)(0.2) + (0)(0.9)]
     = (0.8, 0.2)

Thus, if the machine works this month, there is an 80% chance that it will be working next month and a 20% chance it will be broken.

Markov Analysis of Machine Operations continued
What is the probability the machine will be working in two months?

π(2) = π(1)P
                  0.8  0.2
     = (0.8, 0.2) 0.1  0.9
     = [(0.8)(0.8) + (0.2)(0.1), (0.8)(0.2) + (0.2)(0.9)]
     = (0.66, 0.34)

Thus, if the machine works this month, then in two months there is a 66% chance that it will be working and a 34% chance it will be broken.
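
Both machine predictions can be checked with a short sketch (NumPy assumed):

```python
import numpy as np

P = np.array([[0.8, 0.2],    # working -> working / broken
              [0.1, 0.9]])   # broken  -> working / broken
pi0 = np.array([1.0, 0.0])   # machine currently working

pi1 = pi0 @ P                # (0.8, 0.2): next month
pi2 = pi1 @ P                # (0.66, 0.34): two months out
print(pi1, pi2)
```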

Equilibrium State and Absorbing State (Only Understanding of Concepts Required / No Maths Required)

Equilibrium Conditions
Equilibrium state probabilities are the long-run average probabilities of being in each state.
Equilibrium conditions exist if the state probabilities do not change after a large number of periods.
At equilibrium, the state probabilities for the next period equal the state probabilities for the current period.

Equilibrium Conditions continued
One way to compute the equilibrium share of the market is to run the Markov analysis for a large number of periods and see whether the future amounts approach stable values.
On the next slide, the Markov analysis is repeated for 15 periods for the machine example.
By the 15th period, the share of time the machine spends working and broken is around 34% and 66%, respectively.

Machine Example: Periods to Reach Equilibrium

Period   State 1    State 2
1        1.0        0.0
2        .8         .2
3        .66        .34
4        .562       .438
5        .4934      .5066
6        .44538     .55462
7        .411766    .588234
8        .388236    .611763
9        .371765    .628234
10       .360235    .639764
11       .352165    .647834
12       .346515    .653484
13       .342560    .657439
14       .339792    .660207
15       .337854    .662145
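
The table can be reproduced by repeated multiplication (a minimal sketch, NumPy assumed):

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.1, 0.9]])
pi = np.array([1.0, 0.0])        # period 1: machine working

for period in range(1, 16):
    print(period, pi)
    pi = pi @ P                  # approaches (1/3, 2/3) over time
```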

The Markov Process
Equilibrium Conditions

π(n) (current state) × P (matrix of transition probabilities) = π(n+1) (new state)

Equilibrium Equations
π(i+1) = π(i)P
Assume:
                     p11  p12
π(i) = (π1, π2), P = p21  p22
Then:
(π1, π2) = (π1p11 + π2p21, π1p12 + π2p22)
or:
π1 = π1p11 + π2p21
π2 = π1p12 + π2p22
Therefore:
π1 = π2p21 / (1 - p11)  and  π2 = π1p12 / (1 - p22)

Equilibrium Equations continued
It is always true that
π(next period) = π(this period)P
or
π(n+1) = π(n)P
At equilibrium:
π(n+1) = π(n) = π(n)P
so
π(n) = π(n)P
Dropping the n term:
π = πP

Equilibrium Equations continued
Machine Breakdown example
At equilibrium:
π = πP
                    0.8  0.2
(π1, π2) = (π1, π2) 0.1  0.9
Applying matrix multiplication:
(π1, π2) = [(π1)(0.8) + (π2)(0.1), (π1)(0.2) + (π2)(0.9)]
Multiplying through yields:
π1 = 0.8π1 + 0.1π2
π2 = 0.2π1 + 0.9π2

Equilibrium Equations continued
Machine Breakdown example
The state probabilities must sum to 1, therefore:
Σπi = 1
In this example:
π1 + π2 = 1
In a Markov analysis, there are always n state equilibrium equations plus 1 equation requiring the state probabilities to sum to 1.

Equilibrium Equations continued
Machine Breakdown example
Summarizing the equilibrium equations:
π1 = 0.8π1 + 0.1π2
π2 = 0.2π1 + 0.9π2
π1 + π2 = 1
Solving these simultaneous equations yields:
π1 = 0.333333
π2 = 0.666667
Therefore, in the long run, the machine will be functioning 33.33% of the time and broken down 66.67% of the time.
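
The same simultaneous equations can be solved programmatically (a sketch, NumPy assumed): since the n equilibrium equations are redundant, drop one and replace it with the sum-to-1 condition.

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.1, 0.9]])
n = P.shape[0]

# pi = pi P  <=>  (P^T - I) pi^T = 0; drop one of these equations
# and append the normalization sum(pi) = 1.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.array([0.0] * (n - 1) + [1.0])

pi = np.linalg.solve(A, b)
print(pi)   # approximately [0.333333, 0.666667]
```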

Absorbing States
Any state that does not have a probability of moving to another state is called an absorbing state.
If an entity is in an absorbing state now, the probability of being in an absorbing state in the future is 100%.
An example of such a process is accounts receivable:
Bills are either paid, delinquent, or written off as bad debt.
Once paid or written off, the debt stays paid or written off.
Absorbing States
Accounts Receivable example
The possible states are:
Paid
Bad debt
Less than 1 month old debt
1 to 3 months old debt

The transition matrix for this would look similar to:

        Paid   Bad   <1    1-3
Paid    1      0     0     0
Bad     0      1     0     0
<1      0.6    0     0.2   0.2
1-3     0.4    0.1   0.3   0.2

Markov Process
Fundamental Matrix

    1    0   |  0    0
P = 0    1   |  0    0
    -------------------
    0.6  0   |  0.2  0.2
    0.4  0.1 |  0.3  0.2

Partition the probability matrix into 4 quadrants to make 4 new sub-matrices: I (top left), 0 (top right), A (bottom left), and B (bottom right).

Markov Process
Fundamental Matrix continued

        I  0
Let P = A  B

where I = identity matrix and 0 = null matrix.
Then the fundamental matrix is:
F = (I - B)⁻¹
Once F is found, multiply it by the A matrix: FA.
FA indicates the probability that an amount in one of the non-absorbing states will end up in one of the absorbing states.

Markov Process
Fundamental Matrix continued
Once the FA matrix is found, multiply it by the M vector, which contains the starting amounts in the non-absorbing states: MFA,
where
M = (M1, M2, M3, …, Mn)
The resulting vector indicates how much ends up in the first absorbing state and the second absorbing state, respectively.
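
A sketch of the whole F, FA, MFA computation for the accounts receivable example (NumPy assumed; the dollar amounts in M are hypothetical, chosen only to illustrate the calculation):

```python
import numpy as np

# Sub-matrices from the partitioned accounts-receivable matrix.
A = np.array([[0.6, 0.0],      # <1 month, 1-3 months -> Paid, Bad
              [0.4, 0.1]])
B = np.array([[0.2, 0.2],      # <1 month, 1-3 months -> <1 month, 1-3 months
              [0.3, 0.2]])

F = np.linalg.inv(np.eye(2) - B)   # fundamental matrix F = (I - B)^-1
FA = F @ A                         # P(ending up Paid or Bad) per starting state

M = np.array([2000.0, 5000.0])     # hypothetical amounts in the <1 and 1-3 states
print(M @ FA)                      # dollars eventually paid vs. written off
```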
