
Stochastic Process

Li-Hsing Yen
National University of Kaohsiung
2
Independent Trials Processes
A sequence of chance experiments forms an independent trials process when:
the possible outcomes for each experiment are the same and occur with the same probability
knowledge of the outcomes of the previous experiments does not influence our predictions for the outcome of the next experiment (independence)
3
Stochastic Process
chance processes for which the knowledge of
previous outcomes influences predictions for
future experiments
In principle, when we observe a sequence of
chance experiments, all of the past outcomes
could influence our predictions for the next
experiment
4
Why Stochastic Process?
Example 1
[Figure: a two-state (good/bad) transition diagram; the probabilities 0.9 and 0.1 label the transitions out of the good state, and 0.8 and 0.2 label the transitions out of the bad state. The state in slot n depends on the state in slot n-1, which depends on the state in slot n-2, and so on (e.g., n = 1000).]
5
Why Stochastic Process:
Example 2
At each play of the game,
the gambler wins $1 with
probability p and loses $1
with probability 1 - p
The gambler quits playing either when he
goes broke or attains a fortune of $4.
Initially he has $2. What are the probabilities
of all possible outcomes after 4 plays (if 4
plays are possible)?
6
Random Variable
A random variable is a quantity whose value is determined by the outcome of a random experiment; each observation of the experiment yields one sample value.
[Figure: Observation One yields the sample value 2; Observation Two yields the sample value 5]
7
Stochastic Process
A stochastic process assigns to each outcome of a random experiment a function of time or sequence (a random function); a realization (sample function) is a particular set of observations.
[Figure: Realization One and Realization Two plotted against time or sequence]
8
Bernoulli Process
A Bernoulli process is a sequence of independent Bernoulli trials; X(i), the number of successes in the first i trials, is a random function.
[Figure: Realization One and Realization Two of X(i) plotted against trials i = 0, 1, 2, 3, 4]
9
Bernoulli Process Example
Counting the number of defective units
a unit arrives every minute
the probability of a defective unit: 0.02
suppose 4 observations are made
the total number of defective units is a random
function of the duration of observation
X(t): the number of defective units through the
first t minutes of observation
X(t) is a random
function
10
All Sample Functions of the
Process
Realization  X(1)  X(2)  X(3)  X(4)
0            0     0     0     0
1            0     0     0     1
2            0     0     1     1
3            0     0     1     2
4            0     1     1     1
5            0     1     1     2
6            0     1     2     2
7            0     1     2     3
8            1     1     1     1
9            1     1     1     2
10           1     1     2     2
11           1     1     2     3
12           1     2     2     2
13           1     2     2     3
14           1     2     3     3
15           1     2     3     4
[Figure: two of the sixteen sample functions X(t) plotted against t (min) for t = 0 to 4]
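A short sketch (not part of the original slides) can enumerate these sixteen realizations and their probabilities; the defect probability 0.02 is taken from the previous slide.

```python
# Sketch: enumerate all 2^4 = 16 realizations of the counting process X(t).
from itertools import product

p = 0.02  # probability that a unit is defective (from the previous slide)
for idx, outcome in enumerate(product([0, 1], repeat=4)):
    # X(t) = number of defective units among the first t arrivals
    x = [sum(outcome[:t]) for t in range(1, 5)]
    prob = p**sum(outcome) * (1 - p)**(4 - sum(outcome))
    print(idx, x, round(prob, 10))
```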
11
Discrete Value and Continuous
Value Processes
X(t) is a discrete value process if the set of all possible values of X(t) at all times t is a countable set S
Otherwise X(t) is a continuous value process
[Figure: sample paths of a discrete value (number-valued) process and a continuous value process over time]
12
Discrete Time and Continuous
Time Processes
X(t) is a discrete time process if X(t) is defined only for a set of time instants t_n = nT, where T is a constant and n is an integer
Otherwise X(t) is a continuous time process
[Figure: number of trials plotted against time]
13
Random Sequence
For a discrete-time process, the sample function is completely described by the ordered sequence of random variables X_n = X(nT)
A random sequence {X_n} is an ordered sequence of random variables X_0, X_1, ...
14
Example: Bernoulli Process
Is a Bernoulli process a discrete value process or a continuous value process?
Is a Bernoulli process a discrete time process or a continuous time process?
What do its realizations look like?
[Figure: a realization plotted as X(t) versus t (min) and as X(i) versus trial index i, for 0 to 4]
15
Example: Poisson Process
Is a Poisson process a discrete value process or a continuous value process?
Is a Poisson process a discrete time process or a continuous time process?
What do the realizations of a Poisson process look like?
[Figure: a realization of a Poisson counting process over time]
16
State Space and Chain
State space
the set of possible values X(t) of a stochastic process
in a Bernoulli process, if the number of observations is K, then the state space is S = {0, 1, 2, ..., K}
Chain
a process possessing a discrete state space
Bernoulli processes and Poisson processes are both chains
17
Probability P_n(t)
For a chain we can define the probability that the process is in state n at time t:
P_n(t) = P[X(t) = n]
Example
assume a Poisson arrival process with mean 2t
{X(t): t > 0}: the continuous-time process that counts the number of arrivals in [0, t]
P_n(t) = P[X(t) = n] = (2t)^n e^{-2t} / n!
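As a quick numerical sanity check (a sketch, not from the slides), the formula can be evaluated directly; the rate 2 below matches the mean 2t above.

```python
# Sketch: P_n(t) = (2t)^n e^{-2t} / n! for a Poisson process with mean 2t.
from math import exp, factorial

def P_n(n, t, rate=2.0):
    """Probability of exactly n arrivals in [0, t]."""
    return (rate * t)**n * exp(-rate * t) / factorial(n)

print(P_n(0, 1.0), P_n(1, 1.0), P_n(2, 1.0))
print(sum(P_n(n, 1.0) for n in range(60)))  # ~1.0: the probabilities sum to one
```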
18
Transition Probability
The state of a system at any given moment depends on its history
For any two points in time t and t' and for any two states m and n, define
p_{m,n}^{t,t'} = P[X(t') = n | X(t) = m]
For a discrete-time process, define
p_{m,n}^{i,j} = P[X(j) = n | X(i) = m]
19
Markov Chain
A stochastic process is a Markov chain if
it possesses a discrete state space (it is a chain) and
it satisfies the Markov property
Markov property
Given a set of time points t_1 < t_2 < ... < t_{k-1} < t_k,
P[X(t_k) = m_k | X(t_{k-1}) = m_{k-1}, ..., X(t_2) = m_2, X(t_1) = m_1]
= P[X(t_k) = m_k | X(t_{k-1}) = m_{k-1}]
= p_{m_{k-1},m_k}^{t_{k-1},t_k}
(the prediction depends only on the most recent observation)
20
Where Are We?
[Diagram: stochastic processes split into independent trials processes (independent of time) and time-dependent processes; a discrete value process is a chain, as opposed to a continuous value process; a chain with the Markov property is a Markov chain, as opposed to a chain without the Markov property. We're here: Markov chains.]
21
Where Are We Going?
[Diagram: Markov chains split into continuous-time and discrete-time Markov chains. Discrete-time Markov chains satisfy the Chapman-Kolmogorov equations; the state space may be infinite (example: Poisson process) or finite (Theorem 1), and the chain may be stationary (Theorem 2) or non-stationary. We'll be here: stationary discrete-time Markov chains.]
22
Chapman-Kolmogorov
Equations
State probability: p_n^j = P[X(j) = n]
Given a discrete-time Markov chain, for any times i < r < j and states m and n,
p_{m,n}^{i,j} = Σ_{k=1}^{N} p_{m,k}^{i,r} p_{k,n}^{r,j}
[Figure: every path from X(i) = m to X(j) = n passes through some intermediate state X(r) = k]
23
Transition Probability Matrix
Consider a discrete-time Markov chain with finite state space S = {1, 2, ..., N}.
For times i and j, its transition probability matrix is given by

P(i, j) = [ p_{1,1}^{i,j}  p_{1,2}^{i,j}  ...  p_{1,N}^{i,j} ]
          [ p_{2,1}^{i,j}  p_{2,2}^{i,j}  ...  p_{2,N}^{i,j} ]
          [ ...                                              ]
          [ p_{N,1}^{i,j}  p_{N,2}^{i,j}  ...  p_{N,N}^{i,j} ]

(each row sums to 1)
24
One-Step Transition
Probability Matrix
According to the definition of matrix multiplication,
P(i, j) = P(i, r)P(r, j) for every r with i < r < j
so P(i, j) = P(i, j-1)P(j-1, j)
= P(i, j-2)P(j-2, j-1)P(j-1, j)
= P(i, i+1)P(i+1, i+2) ... P(j-1, j)
(each factor is a one-step transition matrix)
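A minimal numerical check of this decomposition (the two one-step matrices below are made-up examples, not from the slides):

```python
# Sketch: Chapman-Kolmogorov in matrix form, checked numerically.
import numpy as np

P01 = np.array([[0.7, 0.3],
                [0.4, 0.6]])   # hypothetical one-step matrix P(0, 1)
P12 = np.array([[0.9, 0.1],
                [0.2, 0.8]])   # hypothetical one-step matrix P(1, 2)

P02 = P01 @ P12                # P(0, 2) = P(0, 1) P(1, 2)
# Entry (m, n) equals sum_k p_{m,k}^{0,1} p_{k,n}^{1,2}:
m, n = 0, 1
print(P02[m, n], sum(P01[m, k] * P12[k, n] for k in range(2)))  # same value
```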
25
Theorem 1
Let

p(j) = (p_1^j, p_2^j, ..., p_N^j)^T

where p_n^j = P[X(j) = n]; p(j) is the j-th state vector (T: matrix transpose).
For any discrete-time Markov chain with finite state space,
p(j) = [P(0, 1)P(1, 2) ... P(j-1, j)]^T p(0)
26
Alternative Form of Theorem 1
Let

p(j) = (p_1^j, p_2^j, ..., p_N^j)

the j-th state vector written as a row vector, where p_n^j = P[X(j) = n].
For any discrete-time Markov chain with finite state space,
p(j) = p(0) [P(0, 1)P(1, 2) ... P(j-1, j)]
(no matrix transpose is needed)
27
Stationary (Homogeneous)
Process
If the transition probabilities depend only on the time difference (number of steps),
let r = j - i:
p_{m,n}^{i,j} = p_{m,n}^r for all i, j
one-step probabilities (when r = 1):
p_{m,n} = P[X(i+1) = n | X(i) = m]
(r+s)-step probabilities:
p_{m,n}^{r+s} = Σ_{k=1}^{N} p_{m,k}^r p_{k,n}^s
(the Chapman-Kolmogorov equation for the stationary case)
28
One-Step Transition
Probability Matrix
P = P(i, i+1) for all i in a stationary process

P = P(0, 1) = [ p_{1,1}  p_{1,2}  ...  p_{1,N} ]
              [ p_{2,1}  p_{2,2}  ...  p_{2,N} ]
              [ ...                            ]
              [ p_{N,1}  p_{N,2}  ...  p_{N,N} ]
29
Markov Diagram
In the stationary form,
the Chapman-Kolmogorov equations have a graphical interpretation in terms of one-step probabilities
[Figure: a three-state Markov diagram; edges p_{1,2}, p_{2,1}, p_{1,3}, p_{3,1}, p_{2,3}, p_{3,2} connect states 1, 2, 3, with self-loops p_{1,1}, p_{2,2}, p_{3,3}]
30
Markov Diagram Example
[Figure: the same three-state Markov diagram]
p_{1,3}^2 = P(1→1→3) + P(1→2→3) + P(1→3→3)
          = p_{1,1} p_{1,3} + p_{1,2} p_{2,3} + p_{1,3} p_{3,3}
(the two-step probability of going from state 1 to state 3, by the Chapman-Kolmogorov equations)
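The same identity can be checked numerically; the three-state one-step matrix below is an assumed example, not from the slides.

```python
# Sketch: two-step probability p_{1,3}^2 as a sum over paths.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])  # rows/cols indexed 0..2 for states 1..3

# p_{1,3}^2 = p_{1,1} p_{1,3} + p_{1,2} p_{2,3} + p_{1,3} p_{3,3}
path_sum = P[0, 0]*P[0, 2] + P[0, 1]*P[1, 2] + P[0, 2]*P[2, 2]
print(path_sum, (P @ P)[0, 2])  # the same number both ways
```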
31
Theorem 2
If {X(i)} is a stationary discrete-time Markov chain with state space S = {1, 2, ..., N}, then
p(j) = [P^j]^T p(0)
Proof
According to Theorem 1,
p(j) = [P(0, 1)P(1, 2) ... P(j-1, j)]^T p(0)
In the stationary case,
P(0, 1)P(1, 2) ... P(j-1, j) = P^j
32
Example: Random Walk With
Barriers
Let p and q = 1 - p be the probabilities of steps to the right and to the left, respectively
Let {X(i)} denote the process that gives the distance from the origin after i steps
The random walk has barriers at 0 and 3
The state space is S = {0, 1, 2, 3}
[Figure: a walker steps right with probability p and left with probability q]
33
Is that a Markov chain?
The transition probability depends only
on the present state, not on all past
history
P[X(n+1) = 3 | X(n) = 2, ..., X(2) = 2, X(1) = 1, X(0) = 0]
= P[X(n+1) = 3 | X(n) = 2, ..., X(2) = 0, X(1) = 1, X(0) = 0]
= P[X(n+1) = 3 | X(n) = 2]
= p
Yes!
34
Is that a stationary process?
the transition probability depends on the number of steps, not on the actual times the steps are taken
Yes
e.g., p_{1,3}^{4,6} = p_{1,3}^{7,9} = p_{1,3}^2 (each spans two steps)
35
What is its Markov diagram?
[Figure: states 0, 1, 2, 3 in a row; 0→1 with probability 1, 1→2 and 2→3 with probability p, 2→1 and 1→0 with probability q, and 3→2 with probability 1]
36
Random Walk With Barriers:
Known Facts
One-step transition probability matrix:

P = [ p_{0,0}  p_{0,1}  p_{0,2}  p_{0,3} ]   [ 0  1  0  0 ]
    [ p_{1,0}  p_{1,1}  p_{1,2}  p_{1,3} ] = [ q  0  p  0 ]
    [ p_{2,0}  p_{2,1}  p_{2,2}  p_{2,3} ]   [ 0  q  0  p ]
    [ p_{3,0}  p_{3,1}  p_{3,2}  p_{3,3} ]   [ 0  0  1  0 ]

The initial vector:

p(0) = (p_0^0, p_1^0, p_2^0, p_3^0)^T = (1, 0, 0, 0)^T
37
Random Walk With Barriers:
Evaluating 2nd State Vector
By Theorem 2, p(2) = [P^2]^T p(0), where

P^2 = [ q    0         p         0   ]
      [ 0    q(1 + p)  0         p^2 ]
      [ q^2  0         p(1 + q)  0   ]
      [ 0    q         0         p   ]

Thus
P[X(2) = 0] = q
P[X(2) = 2] = p
P[X(2) = 1] = P[X(2) = 3] = 0
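A sketch of this computation with numpy, assuming a concrete step probability p = 0.6 for illustration:

```python
# Sketch: verify p(2) = [P^2]^T p(0) = (q, 0, p, 0)^T for the barrier walk.
import numpy as np

p = 0.6
q = 1 - p
P = np.array([[0, 1, 0, 0],
              [q, 0, p, 0],
              [0, q, 0, p],
              [0, 0, 1, 0]])
p0 = np.array([1, 0, 0, 0])         # start at the origin (state 0)

p2 = np.linalg.matrix_power(P, 2).T @ p0
print(p2)                            # [0.4 0.  0.6 0. ] = (q, 0, p, 0)
```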
38
Two-State Markov Chains
State space S = {0, 1}
One-step transition matrix:

P = [ p_{0,0}  p_{0,1} ] = [ 1-a  a   ]
    [ p_{1,0}  p_{1,1} ]   [ d    1-d ]

[Figure: state 0 loops with probability 1-a and moves to state 1 with probability a; state 1 loops with probability 1-d and moves to state 0 with probability d]

P^2 = [ (1-a)(1-a) + ad    (1-a)a + a(1-d)  ]
      [ d(1-a) + (1-d)d    da + (1-d)(1-d)  ]
39
Computing P^j
Let u = a + d (equivalently, u = 2 - p_{0,0} - p_{1,1}). If |1 - u| < 1, then

P^j = (1/u) [ d  a ] + ((1 - u)^j / u) [  a  -a ]
            [ d  a ]                   [ -d   d ]

Since lim_{j→∞} (1 - u)^j = 0,

lim_{j→∞} P^j = (1/u) [ d  a ]
                      [ d  a ]

In the stationary case, P(0, j) = P^j.
40
Steady-State Distribution
p* = lim_{j→∞} p_{0,0}^j = lim_{j→∞} p_{1,0}^j = (1 - p_{1,1}) / (2 - p_{0,0} - p_{1,1}) = d/u

1 - p* = lim_{j→∞} p_{0,1}^j = lim_{j→∞} p_{1,1}^j = (1 - p_{0,0}) / (2 - p_{0,0} - p_{1,1}) = a/u

(p*, 1-p*): the steady-state distribution
For large j, we can treat the transition probabilities as constants, independent of j:

lim_{j→∞} P^j = [ p*  1-p* ]
                [ p*  1-p* ]
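The closed form and the steady state can be checked numerically; this is a sketch with assumed values a = 0.3 and d = 0.1, not an example from the slides.

```python
# Sketch: closed form of P^j for the two-state chain vs. direct matrix power.
import numpy as np

a, d = 0.3, 0.1
u = a + d
P = np.array([[1 - a, a],
              [d, 1 - d]])

j = 5
closed = (1/u) * np.array([[d, a], [d, a]]) \
       + ((1 - u)**j / u) * np.array([[a, -a], [-d, d]])
print(np.allclose(closed, np.linalg.matrix_power(P, j)))  # True

print(d / u, a / u)  # the steady-state distribution (p*, 1-p*)
```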
41
Markov Chain Example
Binomial distribution: B(n, p, k) = C(n, k) p^k (1 - p)^{n-k}
A binomial distribution models the number of successes k in n independent trials, each with success probability p.
42
(cont.)
The system is modeled as a two-state Markov chain.
43
(cont.)
one-step transition probability matrix:

P = [ 0.20  0.80 ]
    [ 0.01  0.99 ]

steady-state probability:
p* = (1 - 0.99) / (2 - 0.20 - 0.99) = 0.0123
1 - p* = 0.9877

For large j and for any i,
P[X(i+j) = 0 | X(i) = 0] ≈ P[X(i+j) = 0 | X(i) = 1] ≈ p* = 0.0123
P[X(i+j) = 1 | X(i) = 0] ≈ P[X(i+j) = 1 | X(i) = 1] ≈ 1 - p* = 0.9877
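A quick check (sketch) that high powers of this matrix converge to rows of (p*, 1 - p*):

```python
# Sketch: rows of P^j converge to the steady-state distribution.
import numpy as np

P = np.array([[0.20, 0.80],
              [0.01, 0.99]])
print(np.linalg.matrix_power(P, 500))
# both rows approach [0.0123..., 0.9876...], matching p* = 0.01/0.81
```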
44
Practice 1: Communications
[Figure: the two-state (good/bad) transition diagram of Example 1; the probabilities 0.9 and 0.1 label the transitions out of the good state, and 0.8 and 0.2 label the transitions out of the bad state]
45
Practice 2: Gambler
At each play of the game,
the gambler wins $1 with
probability p and loses $1
with probability 1 - p
The gambler quits playing either when he
goes broke or attains a fortune of $4.
Initially he has $2. What are the probabilities
of all possible outcomes after 4 plays (if 4
plays are possible)?
46
Markov Diagram of
The Gambling
[Figure: states 0, 1, 2, 3, 4 in a row; 1→2, 2→3, 3→4 with probability p; 1→0, 2→1, 3→2 with probability 1-p; states 0 and 4 are absorbing (self-loops with probability 1)]
47
Initial Vector and Transition
Probability Matrix
Initial vector:

p(0) = (0, 0, 1, 0, 0)^T

One-step transition probability matrix:

P = [ 1    0    0    0    0 ]
    [ 1-p  0    p    0    0 ]
    [ 0    1-p  0    p    0 ]
    [ 0    0    1-p  0    p ]
    [ 0    0    0    0    1 ]

p(4) = [P^4]^T p(0)
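A sketch of this computation, assuming a fair game (p = 0.5) purely for illustration:

```python
# Sketch: outcome distribution after 4 plays, p(4) = [P^4]^T p(0).
import numpy as np

p = 0.5
q = 1 - p
P = np.array([[1, 0, 0, 0, 0],
              [q, 0, p, 0, 0],
              [0, q, 0, p, 0],
              [0, 0, q, 0, p],
              [0, 0, 0, 0, 1]])
p0 = np.array([0, 0, 1, 0, 0])   # start with $2

p4 = np.linalg.matrix_power(P, 4).T @ p0
print(p4)   # probabilities of holding $0..$4: [0.375 0. 0.25 0. 0.375]
```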
48
Counting the Number of
Defective Units
a unit arrives every minute
the probability of a defective unit: 0.02
suppose 4 observations are made

Realization  x_1  x_2  x_3  x_4
0            0    0    0    0
1            0    0    0    1
2            0    0    1    1
3            0    0    1    2
4            0    1    1    1
5            0    1    1    2
6            0    1    2    2
7            0    1    2    3
8            1    1    1    1
9            1    1    1    2
10           1    1    2    2
11           1    1    2    3
12           1    2    2    2
13           1    2    2    3
14           1    2    3    3
15           1    2    3    4
49
Markov Diagram
p_{0,3}^4 = ?
[Figure: states 0, 1, 2, 3, 4 in a row; i→i+1 with probability p and i→i with probability 1-p for i = 0, ..., 3; state 4 loops with probability 1]

P = [ 1-p  p    0    0    0 ]
    [ 0    1-p  p    0    0 ]
    [ 0    0    1-p  p    0 ]
    [ 0    0    0    1-p  p ]
    [ 0    0    0    0    1 ]

Binomial distribution: p_{0,3}^4 = B(4, p, 3)
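A sketch confirming that the four-step matrix entry matches the binomial probability C(4,3) p^3 (1-p), using p = 0.02 as before:

```python
# Sketch: p_{0,3}^4 via matrix power vs. the binomial formula B(4, p, 3).
import numpy as np
from math import comb

p = 0.02
P = np.array([[1-p, p,   0,   0,   0],
              [0,   1-p, p,   0,   0],
              [0,   0,   1-p, p,   0],
              [0,   0,   0,   1-p, p],
              [0,   0,   0,   0,   1]])

four_step = np.linalg.matrix_power(P, 4)
print(four_step[0, 3], comb(4, 3) * p**3 * (1 - p))  # the same value
```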
50
Special Types of Markov Chains
Absorbing Markov Chains
Ergodic Markov Chains
Regular Chains
51
Absorbing Markov Chains
A state s_i of a Markov chain is called absorbing if it is impossible to leave it (i.e., p_{i,i} = 1)
a state which is not absorbing is called transient
A Markov chain is absorbing if
it has at least one absorbing state and
it is possible from every state to go to an absorbing state (not necessarily in one step)
52
Questions About Absorbing
Markov Chains
What is the probability that the process will
eventually reach an absorbing state?
What is the probability that the process will
end up in a given absorbing state?
On the average, how long will it take for the
process to be absorbed?
On the average, how many times will the
process be in each transient state?
53
Ergodic Markov Chains
(Irreducible)
A Markov chain is called an ergodic
chain if it is possible to go from every
state to every state (not necessarily in
one move).
54
Regular Chains
A Markov chain is called a regular chain if some power of the transition matrix has only positive (non-zero) elements
for some n, it is possible to go from any state to any state in exactly n steps
a regular Markov chain may have a transition matrix that has zeros:

P = [ 0.5   0.25  0.25 ]
    [ 0.5   0     0.5  ]
    [ 0.25  0.25  0.5  ]

(P^2 has no zeros)
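A one-line check (sketch):

```python
# Sketch: P has zeros, but P^2 is strictly positive, so the chain is regular.
import numpy as np

P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
print((np.linalg.matrix_power(P, 2) > 0).all())  # True
```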
55
Regular Chain ⇒ Ergodic Chain
But an ergodic chain is not necessarily regular
Example:

P = [ 0  1 ]
    [ 1  0 ]

is ergodic but not regular
This chain is periodic
56
Example of a Non-Regular, Absorbing Markov Chain
all powers of P will have a 0 in the upper right-hand corner:

P = [ 1    0   ]
    [ 0.5  0.5 ]

(the first state is an absorbing state)
57
Theorem 3
Let X(t) be a Markov chain. If X(t) is a regular chain, then
π_j = lim_{n→∞} Pr[X(n) = j]
exists for all j, and
lim_{n→∞} P^n = M
where every row of M is the same probability vector (π_1, π_2, ..., π_N)
(long-range state probabilities are independent of the initial state)
58
Example of Theorem 3
P = [ 0.5   0.25  0.25 ]        P^6 ≈ [ 0.4  0.2  0.4 ]
    [ 0.5   0     0.5  ]              [ 0.4  0.2  0.4 ]
    [ 0.25  0.25  0.5  ]              [ 0.4  0.2  0.4 ]
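A sketch reproducing this limit numerically:

```python
# Sketch: high powers of P approach the limiting matrix M with identical rows.
import numpy as np

P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
print(np.linalg.matrix_power(P, 6))    # rows already close to [0.4, 0.2, 0.4]
print(np.linalg.matrix_power(P, 50))   # rows converge to [0.4, 0.2, 0.4]
```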