
Stat 150 Stochastic Processes

Spring 2009

Lecture 9: Waiting for patterns


Lecturer: Jim Pitman

Waiting for patterns


Expected waiting time for patterns in Bernoulli trials. Suppose X_1, X_2, ... are independent coin tosses with P(X_i = H) = p, P(X_i = T) = 1 - p = q. Fix a pattern of some finite length K, say HH...H (K heads) or HH...HT (K - 1 heads followed by a tail). Let

    T_pat := first n such that (X_{n-K+1}, X_{n-K+2}, ..., X_n) = pattern.

You can try to find the distribution of T_pat, but you will find it very difficult. We can, however, compute E[T_pat] with our tools. Start with the case pat = HH...H (K heads).

For K = 1, T_H is geometric:

    P(T_H = n) = q^{n-1} p,  n = 1, 2, 3, ...,    E(T_H) = Σ_n n q^{n-1} p = 1/p.
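The series identity is easy to check numerically. A minimal sketch (p = 0.3 is an arbitrary illustrative value; the geometric tail beyond the truncation point is negligible):

```python
# Numerical sanity check of E(T_H) = sum_{n>=1} n * q^(n-1) * p = 1/p.
# p = 0.3 is an arbitrary illustrative value.
p = 0.3
q = 1 - p

# Truncate the series at N = 10_000; the geometric tail beyond that is negligible.
series = sum(n * q ** (n - 1) * p for n in range(1, 10_000))

print(series, 1 / p)  # both approximately 3.3333
```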

For a general K, notice that when the first T comes (e.g. HHHT, considering some K ≥ 4), the counting process starts again with no advantage. Let m_K be the expected number of steps to get K heads in a row, and let T_T be the time of the first tail. Observe: if T_T > K, then T_{HH...H} = K; if T_T = j ≤ K, then T_{HH...H} = j + (fresh copy of) T_{HH...H}.


Therefore E(T_{HH...H} | T_T > K) = K and E(T_{HH...H} | T_T = j) = j + m_K for j ≤ K. So condition m_K on T_T:

    m_K = Σ_{j=1}^{K} P(T_T = j)(j + m_K) + P(T_T > K)·K
        = Σ_{j=1}^{K} p^{j-1} q (j + m_K) + p^K·K
        = (Σ_{j=1}^{K} p^{j-1} q)·m_K + Σ_{j=1}^{K} p^{j-1} q·j + p^K·K
        = (1 - p^K)·m_K + E(T_T·1(T_T ≤ K)) + p^K·K.

Recall that E(X·1_A) = E(X | A)·P(A), and notice that E(T_T) = 1/q and E(T_T | T_T > K) = K + 1/q, so

    E(T_T·1(T_T ≤ K)) = E(T_T) - E(T_T·1(T_T > K)) = 1/q - (K + 1/q)·p^K.

Therefore

    m_K = (1 - p^K)·m_K + 1/q - (K + 1/q)·p^K + p^K·K,

so

    p^K·m_K = 1/q - p^K/q = (1 - p^K)/q = 1 + p + p^2 + ... + p^{K-1},

or m_K·p^K = 1 + p + p^2 + ... + p^{K-1}.
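The formula m_K = (1 + p + ... + p^{K-1})/p^K is easy to test by simulation. A minimal sketch (K = 3, a fair coin, and the trial count are illustrative choices):

```python
import random

def waiting_time_all_heads(K: int, p: float, rng: random.Random) -> int:
    """Number of tosses until the first run of K consecutive heads."""
    run = 0   # length of the current run of heads
    n = 0     # tosses so far
    while run < K:
        n += 1
        if rng.random() < p:
            run += 1
        else:
            run = 0
    return n

# Illustrative choices: K = 3 and a fair coin.
K, p = 3, 0.5
rng = random.Random(0)
trials = 100_000
estimate = sum(waiting_time_all_heads(K, p, rng) for _ in range(trials)) / trials

# m_K = (1 + p + ... + p^{K-1}) / p^K
exact = sum(p ** j for j in range(K)) / p ** K

print(estimate, exact)  # exact is 14.0; the estimate should be close
```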

Try to understand the equation m_K·p^K = 1 + p + p^2 + ... + p^{K-1} directly. Idea: imagine you are gambling and observe the sequence of H's and T's evolving. At each step you place a new bet that the next K tosses will be HH...H, scaled so that a winning bet pays $1 (and hence costs $p^K for the game to be fair). Accounting: suppose that K = 3.


Toss:    T   H   H   T   T   T   H   H   H
Cost:   p^3 + p^3 + p^3 + ... + p^3          (one bet of $p^3 per toss)

Return (bets still alive at time T_{HHH}):
    the bet covering the final three tosses has won:          $1
    the bet that has seen HH and needs one more H is worth:   p·$1
    the bet that has seen H and needs two more H's is worth:  p^2·$1
All earlier bets have lost.

Expected cost = $p^K·m_K. Expected return = $1·(1 + p + ... + p^{K-1}). So by conservation of fairness,

    m_K·p^K = 1 + p + p^2 + ... + p^{K-1}.

Rigorously, this is a martingale (optional stopping) argument, but the details are beyond the scope of this course. See "A Martingale Approach to the Study of Occurrence of Sequence Patterns in Repeated Experiments" by Shuo-Yen Robert Li, Ann. Probab. 8(6) (1980), 1171-1176.

Try pattern = HTHT:
For pattern HTHT (K = 4), each bet costs $p^2 q^2. At the stopping time T_{HTHT}:
    the bet covering the final four tosses has won:  $1
    the bet started two tosses before the end has seen HT (a prefix of the pattern) and needs HT to finish, so it is worth:  pq·$1
    the bets started one and three tosses before the end have already lost, since what they have seen is not a prefix of HTHT.

Cost: p^2 q^2 + p^2 q^2 + ... + p^2 q^2. Again, make the fairness argument:

    p^2 q^2·m_HTHT = 1 + pq,    so    m_HTHT = (1 + pq) / (p^2 q^2).

Try pattern = HHTTHHTT (K = 8). Slide the pattern against itself: the only proper suffix of HHTTHHTT that is also a prefix is HHTT (the overlap at shift 4). So at the stopping time the winning bet is worth $1, the bet that has seen HHTT is worth p^2 q^2·$1, and all other open bets have lost. Then

    m_HHTTHHTT = (1 + p^2 q^2) / (p^4 q^4).
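Both answers can be checked by simulation. A sketch for HTHT (the fair coin and the trial count are illustrative; the same function works for any pattern, including HHTTHHTT):

```python
import random

def waiting_time(pattern: str, p: float, rng: random.Random) -> int:
    """Tosses until `pattern` (a string of 'H'/'T') first appears."""
    K = len(pattern)
    history = ""
    n = 0
    while True:
        n += 1
        history += "H" if rng.random() < p else "T"
        if history[-K:] == pattern:
            return n

# Illustrative choices: a fair coin and 100_000 trials.
p = 0.5
q = 1 - p
rng = random.Random(1)
trials = 100_000

estimate = sum(waiting_time("HTHT", p, rng) for _ in range(trials)) / trials
exact = (1 + p * q) / (p * q) ** 2   # m_HTHT = (1 + pq) / (p^2 q^2)

print(estimate, exact)  # exact is 20.0 for p = 1/2; the estimate should be close
```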


To verify that the method described above actually works for all patterns of length K = 2, make a Markov chain with states {HH, HT, TH, TT} by tracking the last two tosses. The transition matrix is:

         HH   HT   TH   TT
    HH    p    q    0    0
    HT    0    0    p    q
    TH    p    q    0    0
    TT    0    0    p    q

We have a chain with states i ∈ {HH, HT, TH, TT}, and we want the mean first passage time to HT. Say we start at i = TT, and define m_{ij} = mean first passage time to j = HT. In general, for a Markov chain with states i, j, ..., m_{ij} = E_i(time to reach j). If i ≠ j,

    m_{ij} = Σ_{k≠j} P(i,k)(1 + m_{kj}) + P(i,j)·1 = 1 + Σ_{k≠j} P(i,k)·m_{kj}.

Remark: the derivation shows that j need not be an absorbing state. So we want

    m_{TT,HT} = 1 + p·m_{TH,HT} + q·m_{TT,HT}
    m_{TH,HT} = 1 + p·m_{HH,HT}
    m_{HH,HT} = 1 + p·m_{HH,HT}

Solve the system of equations and verify that m_{TT,HT} = 1/(pq). In the previous notation this is just m_HT, and the general formula is seen to be working. You can check similarly that the formula works for patterns of length 3 by considering a chain with 8 patterns as its states, and so on (in principle) for patterns of length K with 2^K states.
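The system above can also be solved mechanically. A sketch (assuming NumPy is available; p = 1/2 is an illustrative value) that builds the K = 2 chain and solves (I - Q)m = 1, where Q is the transition matrix restricted to the non-target states:

```python
from itertools import product

import numpy as np

# Illustrative value; any 0 < p < 1 works.
p = 0.5
q = 1 - p

# States are the last two tosses: HH, HT, TH, TT.
states = ["".join(s) for s in product("HT", repeat=2)]

def step_prob(i: str, k: str) -> float:
    """One-step transition probability P(i, k) between last-two-tosses states."""
    if i[1] != k[0]:   # the new state's first letter must equal the old second letter
        return 0.0
    return p if k[1] == "H" else q

target = "HT"
others = [s for s in states if s != target]

# First-passage equations m_i = 1 + sum_{k != target} P(i, k) m_k,
# i.e. (I - Q) m = 1 with Q = P restricted to the non-target states.
Q = np.array([[step_prob(i, k) for k in others] for i in others])
m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))

m_TT_HT = m[others.index("TT")]
print(m_TT_HT, 1 / (p * q))  # both equal 4.0 for p = 1/2 (up to rounding)
```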
