The probability that a machine functioning in one period will continue to function, or will break down, in the next period.
The probability that a consumer who purchases brand A in one period will purchase brand B in the next period.
Markov processes
A customer visits either Murphy's Foodliner or Ashley's Supermarket each week. The weekly transition probabilities are:

                          Next week
This week                 Murphy's   Ashley's
Murphy's Foodliner        0.9        0.1
Ashley's Supermarket      0.2        0.8
Transition probabilities
Let pij represent the probability that the system, in state i in a given period, will be in state j in the next period.

P = [ p11  p12 ]  =  [ 0.9  0.1 ]
    [ p21  p22 ]     [ 0.2  0.8 ]
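As a quick sanity check, the transition matrix can be written down in code and each row verified to sum to 1; a minimal Python sketch using the values above:

```python
# Transition matrix for the grocery example.
# Row i = this week's store, column j = next week's store
# (index 0 = Murphy's Foodliner, index 1 = Ashley's Supermarket).
P = [[0.9, 0.1],
     [0.2, 0.8]]

# Each row must sum to 1: next week the customer shops *somewhere*.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-9
```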
Tree diagram for two weekly periods, starting with a customer who shopped at Murphy's:

Week 1: Shop at Murphy's (0.9)
    Week 2: Shop at Murphy's (0.9)  ->  (0.9)(0.9) = 0.81
    Week 2: Shop at Ashley's (0.1)  ->  (0.9)(0.1) = 0.09
Week 1: Shop at Ashley's (0.1)
    Week 2: Shop at Murphy's (0.2)  ->  (0.1)(0.2) = 0.02
    Week 2: Shop at Ashley's (0.8)  ->  (0.1)(0.8) = 0.08
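The tree's two-week probabilities can be reproduced by summing path products over the intermediate store; a small Python sketch (index 0 = Murphy's, index 1 = Ashley's):

```python
P = [[0.9, 0.1],   # from Murphy's
     [0.2, 0.8]]   # from Ashley's

# Probability a Murphy's customer shops at Ashley's two weeks from now:
# via Murphy's (0.9)(0.1) = 0.09, or via Ashley's (0.1)(0.8) = 0.08.
two_step = P[0][0] * P[0][1] + P[0][1] * P[1][1]   # 0.09 + 0.08 = 0.17
```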
State probability
πi(n) = probability that the system is in state i in period n

π1(1) = probability that the system is in state 1 in period 1
π2(1) = probability that the system is in state 2 in period 1

πi(n) is referred to as a state probability, since it denotes the probability that the system is in state i in period n. Let π1(0) and π2(0) be the state probabilities at some initial starting point. If π1(0) = 1 and π2(0) = 0, the initial condition is that the customer shopped at Murphy's last week. If π1(0) = 0 and π2(0) = 1, it means the customer shopped at Ashley's last week.
The generic representation is

Π(n) = [ π1(n)  π2(n) ]

For the grocery sales problem, [ π1(0)  π2(0) ] = [ 1  0 ] shows that as an initial condition the customer shopped at Murphy's Foodliner last week. To find the state probabilities for period n+1, simply multiply the state probabilities for period n by the transition probability matrix.
Π(next period) = Π(current period) P
Π(n+1) = Π(n) P

For period 1:
Π(1) = Π(0) P

[ π1(1)  π2(1) ] = [ π1(0)  π2(0) ] [ p11  p12 ]
                                    [ p21  p22 ]

                 = [ 1  0 ] [ 0.9  0.1 ]  =  [ 0.9  0.1 ]
                            [ 0.2  0.8 ]
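The row-vector-times-matrix step can be written as a short helper; a Python sketch of one period of the chain (function name is my own):

```python
def next_state_probs(pi, P):
    """One period of the chain: returns pi(n) * P as a row vector."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.2, 0.8]]
pi0 = [1.0, 0.0]                 # initial condition: shopped at Murphy's last week
pi1 = next_state_probs(pi0, P)   # [0.9, 0.1], matching the calculation above
```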
State Probabilities for Future Periods, Beginning Initially with a Murphy's Customer

n       0    1     2     3      4      5      6      7      8      9      10
π1(n)   1    0.9   0.83  0.781  0.747  0.723  0.706  0.694  0.686  0.680  0.676
π2(n)   0    0.1   0.17  0.219  0.253  0.277  0.294  0.306  0.314  0.320  0.324
State Probabilities for Future Periods, Beginning Initially with an Ashley's Customer

n       0    1     2     3      4      5      6      7      8      9      10
π1(n)   0    0.2   0.34  0.438  0.507  0.555  0.589  0.612  0.628  0.640  0.648
π2(n)   1    0.8   0.66  0.562  0.493  0.445  0.411  0.388  0.372  0.360  0.352
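Both tables can be reproduced by repeatedly multiplying the state vector by P; a sketch:

```python
def next_state_probs(pi, P):
    """One period of the chain: pi(n+1) = pi(n) * P."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.2, 0.8]]

finals = []
for start in ([1.0, 0.0], [0.0, 1.0]):   # Murphy's customer, then Ashley's customer
    pi = start
    for _ in range(10):                  # periods 1 through 10
        pi = next_state_probs(pi, P)
    finals.append(pi)
# finals[0][0] ≈ 0.676 and finals[1][0] ≈ 0.648, as in the tables:
# both starting points drift toward the same long-run probabilities.
```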
For steady state, π1(n+1) = π1(n) = π1 and π2(n+1) = π2(n) = π2, so

[ π1  π2 ] = [ π1  π2 ] [ p11  p12 ]  =  [ π1  π2 ] [ 0.9  0.1 ]
                        [ p21  p22 ]                [ 0.2  0.8 ]

That is, π1 = 0.9π1 + 0.2π2 and π2 = 0.1π1 + 0.8π2. Combined with π1 + π2 = 1, this gives π1 = 2/3 and π2 = 1/3, matching the limits of the two tables.
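The steady-state vector can also be found numerically, since iterating the chain converges to the vector satisfying π = πP; a sketch:

```python
def next_state_probs(pi, P):
    """One period of the chain: pi(n+1) = pi(n) * P."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.2, 0.8]]

# Iterate until the state vector stops changing; that vector is the steady state.
pi = [0.5, 0.5]
for _ in range(200):
    pi = next_state_probs(pi, P)
# pi ≈ [2/3, 1/3], agreeing with the algebraic solution.
```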
Exercise
Assume a third grocery store, Quick Stop Groceries, enters the market. It is smaller than the other two, but offers quick service and gasoline for automobiles to attract some customers who currently visit Murphy's or Ashley's. Assume that the transition probabilities are as below (each row sums to 1):

                 To
From             Murphy's   Ashley's   Quickstop
Murphy's         0.85       0.10       0.05
Ashley's         0.20       0.75       0.05
Quickstop        0.15       0.10       0.75
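The same iteration works for the three-store chain; a sketch of how the exercise's long-run market shares could be checked numerically, reading the probabilities row-wise so that each from-store row sums to 1 (Murphy's: 0.85, 0.10, 0.05; Ashley's: 0.20, 0.75, 0.05; Quickstop: 0.15, 0.10, 0.75):

```python
def next_state_probs(pi, P):
    """One period of the chain: pi(n+1) = pi(n) * P."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.85, 0.10, 0.05],   # from Murphy's
     [0.20, 0.75, 0.05],   # from Ashley's
     [0.15, 0.10, 0.75]]   # from Quickstop

pi = [1/3, 1/3, 1/3]       # any starting vector works
for _ in range(500):
    pi = next_state_probs(pi, P)
# Long-run shares: Murphy's 23/42 ≈ 0.548, Ashley's 2/7 ≈ 0.286, Quickstop 1/6 ≈ 0.167
```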
Murphy's has two aging categories for its accounts receivable: (i) accounts classified as 0-30 days old; (ii) accounts classified as 31-90 days old.
Murphy's follows the procedure of aging the total balance in a customer's account according to the oldest unpaid bill.
For example, as of September 30 a customer's account falls in the 31-90 day category if the oldest unpaid bill is dated August 15 and is therefore 46 days old.
DIFFERENT STATES
One dollar currently in accounts receivable can be in one of the following states in future weeks (the data are collected on a weekly basis, so each week is a trial). There are 4 states in each trial:
State 1: Paid category
State 2: Bad debt category
State 3: 0-30 day category
State 4: 31-90 day category

pij = probability that a dollar in state i in one week moves to state j in the next week
            Paid   Bad debt   0-30   31-90
Paid        1      0          0      0
Bad debt    0      1          0      0
0-30        0.4    0          0.3    0.3
31-90       0.4    0.2        0.3    0.1

Paid and Bad debt are absorbing states: a dollar that reaches either one stays there. Partitioning P as

P = [ I  0 ]
    [ R  Q ]

gives

R = [ 0.4  0   ]      Q = [ 0.3  0.3 ]
    [ 0.4  0.2 ]          [ 0.3  0.1 ]

where R contains the probabilities of moving from a non-absorbing state into an absorbing state, and Q the probabilities of moving between the non-absorbing states.
Calculate N = (I - Q)^-1, the fundamental matrix.
Calculate NR, the absorbing-state probability matrix.
Calculate BNR to estimate the bad debts, where B = [ b1  b2 ], b1 = total dollars in the 0-30 day category and b2 = total dollars in the 31-90 day category.
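These three steps can be carried out directly. A minimal Python sketch, with Q the block of transitions among the non-absorbing states (0-30, 31-90) and R the block into the absorbing states (Paid, Bad debt); the balances in B are illustrative placeholders, not values from the example:

```python
# Q: transitions among the non-absorbing states (0-30, 31-90).
Q = [[0.3, 0.3],
     [0.3, 0.1]]
# R: transitions from the non-absorbing states into (Paid, Bad debt).
R = [[0.4, 0.0],
     [0.4, 0.2]]

# N = (I - Q)^-1 via the 2x2 inverse formula.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]          # ≈ [[1.67, 0.56], [0.56, 1.30]]

# NR[i][j]: probability that a 0-30 (i=0) or 31-90 (i=1) dollar
# eventually ends up Paid (j=0) or Bad debt (j=1).
NR = [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]           # ≈ [[0.89, 0.11], [0.74, 0.26]]

# BNR: expected dollars eventually paid / written off as bad debt.
B = [1000.0, 2000.0]               # hypothetical category balances
BNR = [sum(B[i] * NR[i][j] for i in range(2)) for j in range(2)]
```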
Exercise
A large corporation has collected data on the reasons both middle managers and senior managers leave the company. Some managers eventually retire, but others leave the company prior to retirement for personal reasons, including more attractive positions with other firms. Assume that the following matrix of one-year transition probabilities applies, with the four states of the Markov process as shown. The company currently has (640 + 280) managers.
                   Retirement   Leaves-personal   Middle manager   Senior manager
Retirement         1.00         0.00              0.00             0.00
Leaves-personal    0.00         1.00              0.00             0.00
Middle manager     0.03         0.07              0.80             0.10
Senior manager     0.08         0.01              0.03             0.88

Note that each row sums to 1, and that Retirement and Leaves-personal are absorbing states.
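The same absorbing-state machinery applies here. A sketch, assuming Q is the middle/senior block and R the block into Retirement and Leaves-personal (the Leaves-personal entries 0.07 and 0.01 are inferred from each row summing to 1), and assuming the (640 + 280) figure means 640 middle managers and 280 senior managers:

```python
# Q: one-year transitions among the non-absorbing states (middle, senior).
Q = [[0.80, 0.10],
     [0.03, 0.88]]
# R: transitions into the absorbing states (Retirement, Leaves-personal).
R = [[0.03, 0.07],
     [0.08, 0.01]]

# N = (I - Q)^-1 via the 2x2 inverse formula.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]

# NR[i][j]: probability a middle (i=0) or senior (i=1) manager
# eventually retires (j=0) or leaves for personal reasons (j=1).
NR = [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

# Expected eventual retirements / departures among the current managers,
# under the assumed 640 middle + 280 senior split.
B = [640.0, 280.0]
BNR = [sum(B[i] * NR[i][j] for i in range(2)) for j in range(2)]
```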