
Measure Theory and Filtering: Introduction and Applications
Lakhdar Aggoun and Robert J. Elliott

Solutions to Selected Problems
Chapter 1
Problem 1. Let $\{\mathcal{F}_i\}_{i \in I}$ be a family of $\sigma$-fields on $\Omega$. Prove that $\bigcap_{i \in I} \mathcal{F}_i$ is a $\sigma$-field.

Solution:
(i) $\Omega$ belongs to every $\mathcal{F}_i$, $i \in I$, and hence it belongs to $\bigcap_{i \in I} \mathcal{F}_i$.
(ii) If $A_1, A_2, \dots$ is a countable sequence of sets in $\bigcap_{i \in I} \mathcal{F}_i$, then $\bigcup_n A_n \in \mathcal{F}_i$ for every $i \in I$. Therefore $\bigcup_n A_n \in \bigcap_{i \in I} \mathcal{F}_i$.
(iii) Let $A \in \bigcap_{i \in I} \mathcal{F}_i$; then $A \in \mathcal{F}_i$ for all $i \in I$, which implies that $A^c \in \mathcal{F}_i$ for all $i \in I$, and so $A^c \in \bigcap_{i \in I} \mathcal{F}_i$.
Therefore $\bigcap_{i \in I} \mathcal{F}_i$ is a $\sigma$-field.
Problem 2. Let $A$ and $B$ be two events. Express, by means of the indicator functions of $A$ and $B$, the indicators $I_{A \cup B}$, $I_{A \cap B}$, $I_{A-B}$, $I_{B-A}$, $I_{(A-B)\cup(B-A)}$, where $A - B = A \cap B^c$.

Solution:
The following equalities are easily checked:
$I_{A \cup B} = I_A + I_B - I_{A \cap B}$,
$I_{A \cap B} = I_A I_B$,
$I_{A-B} = I_A - I_{A \cap B}$,
$I_{B-A} = I_B - I_{A \cap B}$,
$I_{(A-B)\cup(B-A)} = I_{A-B} + I_{B-A} = I_A - I_{A \cap B} + I_B - I_{A \cap B} = I_A + I_B - 2 I_A I_B$.
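These identities can also be checked numerically on simulated indicator variables. The following Python sketch is an illustration added here (it is not part of the book; NumPy and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Simulate indicator variables of two events A and B on n sample points.
IA = rng.integers(0, 2, size=n)
IB = rng.integers(0, 2, size=n)

I_union = np.maximum(IA, IB)          # I_{A u B}
I_inter = IA * IB                     # I_{A n B}
I_AminusB = IA * (1 - IB)             # I_{A - B}
I_BminusA = IB * (1 - IA)             # I_{B - A}
I_symdiff = np.abs(IA - IB)           # I_{(A-B) u (B-A)}

assert np.all(I_union == IA + IB - IA * IB)
assert np.all(I_inter == IA * IB)
assert np.all(I_AminusB == IA - IA * IB)
assert np.all(I_BminusA == IB - IA * IB)
assert np.all(I_symdiff == IA + IB - 2 * IA * IB)
print("all indicator identities hold on the sample")
```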
Problem 3. Let $\Omega = \mathbb{R}$ and define the sequences
$C_{2n} = [-1,\, 2 + \tfrac{1}{2n})$ and $C_{2n+1} = [-2 - \tfrac{1}{2n+1},\, 1)$.
Show that $\limsup C_n = [-2, +2]$ and $\liminf C_n = [-1, 1]$.

Solution:
Recall that $\limsup C_n = \bigcap_{n \ge 1} \bigcup_{k \ge n} C_k$. However,
$\bigcup_{k \ge n} C_k = \bigcup_{k \ge n} [-1,\, 2 + \tfrac{1}{2k}) \cup \bigcup_{k \ge n} [-2 - \tfrac{1}{2k+1},\, 1) = [-1,\, 2 + \tfrac{1}{2n}) \cup [-2 - \tfrac{1}{2n+1},\, 1) = [-2 - \tfrac{1}{2n+1},\, 2 + \tfrac{1}{2n})$,
and
$\limsup C_n = \bigcap_{n \ge 1} [-2 - \tfrac{1}{2n+1},\, 2 + \tfrac{1}{2n}) = [-2, +2]$.
Similarly,
$\liminf C_n = \bigcup_{n \ge 1} \bigcap_{k \ge n} C_k = \bigcup_{n \ge 1} \big([-2, 1] \cap [-1, 2]\big) = [-1, +1]$.
Problem 4. Let $\Omega = \{\omega_1, \omega_2, \omega_3, \omega_4\}$ with $P(\omega_1) = \tfrac{1}{12}$, $P(\omega_2) = \tfrac{1}{6}$, $P(\omega_3) = \tfrac{1}{3}$, and $P(\omega_4) = \tfrac{5}{12}$. Let
$A_n = \{\omega_1, \omega_3\}$ if $n$ is odd, and $A_n = \{\omega_2, \omega_4\}$ if $n$ is even.
Find $P(\limsup A_n)$, $P(\liminf A_n)$, $\limsup P(A_n)$, and $\liminf P(A_n)$, and compare.

Solution:
Using the definition, it is easily seen that $\limsup A_n = \Omega$ and $\liminf A_n = \emptyset$. Also
$P(A_n) = 5/12$ if $n$ is odd and $P(A_n) = 7/12$ if $n$ is even.
Hence $\limsup P(A_n) = \inf_n \sup_{k \ge n} P(A_k) = 7/12$ and $\liminf P(A_n) = \sup_n \inf_{k \ge n} P(A_k) = 5/12$.
Therefore $P(\limsup A_n) \ge \limsup P(A_n)$ and $P(\liminf A_n) \le \liminf P(A_n)$.
Problem 5. (Theorem 1.3.36.) Let $\{X_n\}$ be a uniformly integrable family of random variables. Then $E[\liminf X_n] \le \liminf E[X_n]$.

Proof.
First note that for any $A > 0$: $E[X_n] = E[X_n I(X_n < -A)] + E[X_n I(X_n \ge -A)]$. By uniform integrability, for any $\epsilon > 0$ we can take $A$ so large that $\sup_n |E[X_n I(X_n < -A)]| < \epsilon$.
By Fatou's lemma (which applies since the integrands are bounded below by $-A$),
$\liminf E[X_n I(X_n \ge -A)] \ge E[\liminf X_n I(X_n \ge -A)]$.
But $X_n I(X_n \ge -A) \ge X_n$, therefore
$\liminf E[X_n I(X_n \ge -A)] \ge E[\liminf X_n]$.
Hence $\liminf E[X_n] \ge E[\liminf X_n] - \epsilon$.
Since $\epsilon > 0$ is arbitrary, the result follows.
Problem 7. Show that if $X$ is a random variable, then $\sigma(|X|) \subseteq \sigma(X)$.

Solution:
It suffices to note that $|X| = X I(X > 0) - X I(X \le 0)$. Clearly the right-hand side is $\sigma(X)$-measurable. Hence $\sigma(|X|)$, being the smallest $\sigma$-field with respect to which $|X|$ is measurable, is included in $\sigma(X)$.
Problem 9. Show that the class of finite unions of intervals of the form $(-\infty, a]$, $(b, c]$, and $(d, \infty)$ is a field but not a $\sigma$-field.

Solution:
For instance, the open interval $(a, b) = \bigcup_{n=1}^{\infty} (a, b - 1/n]$ is not in this collection, despite the fact that it contains each interval $(a, b - 1/n]$. The class is therefore not closed under countable unions and hence is not a $\sigma$-field.
Problem 10. Show that a sequence of random variables $X_n$ converges (a.s.) to $X$ if and only if, for every $\epsilon > 0$, $\lim_{m \to \infty} P[\,|X_n - X| \le \epsilon \ \forall\, n \ge m\,] = 1$.

Solution:
Suppose $X_n \xrightarrow{a.s.} X$. Let $N = \{\omega : \lim X_n(\omega) \ne X(\omega)\}$ and $\Omega_0 = \Omega - N$. Let $A^{\epsilon}_m = \bigcap_{n \ge m} [\,|X_n - X| \le \epsilon\,]$. The sets $A^{\epsilon}_m$ are monotonically increasing in $m$, hence $\lim_m P(A^{\epsilon}_m) = P(\bigcup_m A^{\epsilon}_m)$.
If $\omega_0 \in \Omega_0$, then there exists $m(\omega_0, \epsilon)$ such that
$|X_n(\omega_0) - X(\omega_0)| \le \epsilon$ for all $n \ge m(\omega_0, \epsilon)$,  (*)
which implies that $\omega_0 \in \bigcup_m A^{\epsilon}_m$; hence $P(\bigcup_m A^{\epsilon}_m) = 1$ and $\lim_m P(A^{\epsilon}_m) = 1$.
Conversely, let $A^{\epsilon} = \bigcup_{m \ge 1} A^{\epsilon}_m$. If $\omega_0 \in A^{\epsilon}$ then there exists $m(\omega_0, \epsilon)$ such that (*) holds. Let
$A = \bigcap_{\epsilon > 0} A^{\epsilon} = \bigcap_{n \ge 1} A^{1/n}$.
By assumption $P(A^{1/n}) = 1$ for all $n$. Therefore $P(A) = \lim_n P(A^{1/n}) = 1$.
So if $\omega_0 \in A$, then (*) holds for every $\epsilon = 1/n$ and hence $X_n(\omega_0) \to X(\omega_0)$. Let $\Omega_0 = A$ and $N = \Omega - A$, which implies $P(N) = 0$.
Problem 11. Show that if $X_n$ converges (a.s.) to $X$ then $X_n$ converges to $X$ in probability, but the converse is false.

Solution:
Let $A = \{\omega : \lim X_n(\omega) = X(\omega)\} = \bigcap_{k \ge 1} \bigcup_{N \ge 1} \bigcap_{n \ge N} [\,|X_n - X| < 1/k\,]$.
Since $P(A) = 1$, it follows that
$P\big(\bigcup_{N \ge 1} \bigcap_{n \ge N} [\,|X_n - X| < \epsilon\,]\big) = 1$ for all $\epsilon > 0$.
But $B_N = \bigcap_{n \ge N} [\,|X_n - X| < \epsilon\,]$ is monotonically increasing, therefore $\lim_N P(B_N) = 1$. However $B_N \subseteq [\,|X_N - X| < \epsilon\,]$, hence $\lim_n P(|X_n - X| < \epsilon) = 1$.
In fact the statement in Problem 10 is equivalent to: $X_n \xrightarrow{a.s.} X$ if and only if, for every $\epsilon > 0$,
$P[\sup_{n \ge m} |X_n - X| \ge \epsilon] \to 0$ as $m \to \infty$.  (*)
On the other hand, by definition $X_n \xrightarrow{P} X$ if, for every $\epsilon > 0$,
$P[\,|X_n - X| \ge \epsilon\,] \to 0$ as $n \to \infty$.  (**)
It is clear that (*) is stronger than (**), so the converse implication fails in general.
Problem 13. Let $X_n$ be a sequence of random variables with
$P[X_n = 2^n] = P[X_n = -2^n] = \dfrac{1}{2^n}$, $\quad P[X_n = 0] = 1 - \dfrac{1}{2^{n-1}}$.
Show that $X_n$ converges (a.s.) to 0 but $E|X_n|^p$ does not converge to 0.

Solution:
To prove the a.s. convergence of $X_n$ to 0 it suffices (by Borel-Cantelli) to show that $\sum_{k=1}^{\infty} P(|X_k| \ge \epsilon) < \infty$ for all $\epsilon > 0$. But $P(|X_k| \ge \epsilon) = \dfrac{1}{2^{k-1}}$ and the result follows.
Now $E|X_n|^p = 2^{np}\,\dfrac{1}{2^{n-1}} = 2^{n(p-1)+1}$, which does not converge to 0 for any $p \ge 1$.
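A small simulation illustrates the two statements side by side: almost every simulated path of $X_n$ is eventually 0, while the empirical value of $E|X_n|$ stays at 2 for every $n$. The snippet below is an illustrative sketch only (the sample sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 5000, 30

# Simulate X_n with P(X_n = +-2^n) = 2^{-n}, P(X_n = 0) = 1 - 2^{-(n-1)}.
last_nonzero = np.zeros(n_paths, dtype=int)
emp_abs_mean = []
for n in range(1, n_steps + 1):
    u = rng.random(n_paths)
    x = np.where(u < 2.0**-n, 2.0**n,
                 np.where(u < 2.0**-(n - 1), -(2.0**n), 0.0))
    last_nonzero[x != 0] = n
    emp_abs_mean.append(np.abs(x).mean())   # estimates E|X_n| = 2 for every n

# Almost every path is eventually frozen at 0 (a.s. convergence),
# yet E|X_n| does not vanish.
print("fraction of paths already frozen at 0 after step 10:",
      np.mean(last_nonzero <= 10))
print("empirical E|X_n| for n = 1..5:", np.round(emp_abs_mean[:5], 3))
```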
Problem 15. Suppose $Q$ is another probability measure on $(\Omega, \mathcal{F})$ such that $P(A) = 0$ implies $Q(A) = 0$ (i.e. $Q \ll P$). Show that a.s.-$P$ convergence implies a.s.-$Q$ convergence.

Solution:
Suppose that $X_n \xrightarrow{a.s.\,P} X$. Then there exists $N$ such that for all $\omega \notin N$, $X_n(\omega) \to X(\omega)$, and $P(N) = 0$. But $P(N) = 0$ implies $Q(N) = 0$, and $X_n(\omega) \to X(\omega)$ for all $\omega \notin N$, so $X_n \xrightarrow{a.s.\,Q} X$.
Problem 16. Prove that if $\mathcal{F}_1$ and $\mathcal{F}_2$ are independent sub-$\sigma$-fields and $\mathcal{F}_3$ is coarser than $\mathcal{F}_1$, then $\mathcal{F}_3$ and $\mathcal{F}_2$ are independent.

Solution:
Since $\mathcal{F}_3$ is coarser than $\mathcal{F}_1$, we have $\mathcal{F}_3 \subseteq \mathcal{F}_1$, and the result follows from the independence of $\mathcal{F}_1$ and $\mathcal{F}_2$: for $A \in \mathcal{F}_3 \subseteq \mathcal{F}_1$ and $B \in \mathcal{F}_2$, $P(A \cap B) = P(A) P(B)$.
Problem 17. Let $\Omega = \{\omega_1, \omega_2, \omega_3, \omega_4, \omega_5, \omega_6\}$, $P(\omega_i) = p_i \ne \tfrac{1}{6}$, and let $\mathcal{F}_1$ and $\mathcal{F}_2$ be the sub-$\sigma$-fields generated by the partitions
$\{\omega_1, \omega_2\}, \{\omega_3, \omega_4\}, \{\omega_5, \omega_6\}$ and $\{\omega_1, \omega_2, \omega_3\}, \{\omega_4, \omega_5, \omega_6\}$
of $\Omega$. Show that $\mathcal{F}_1$ and $\mathcal{F}_2$ are not independent. What can be said about the sub-$\sigma$-fields $\mathcal{F}_3$ and $\mathcal{F}_5$ generated by the partitions
$\{\omega_1, \omega_2, \omega_3\}, \{\omega_4, \omega_5, \omega_6\}$ and $\{\omega_1, \omega_4\}, \{\omega_2, \omega_5\}, \{\omega_3, \omega_6\}$?

Solution:
$\mathcal{F}_1$ and $\mathcal{F}_2$ are not independent since if we know, for instance, that the atom $\{\omega_1, \omega_2\}$ of $\mathcal{F}_1$ has occurred, then we also know that the event $\{\omega_1, \omega_2, \omega_3\}$ of $\mathcal{F}_2$ has occurred. This fact can be checked by direct calculation using the definition.
Problem 18. Let $\Omega = \{(i, j) : i, j = 1, \dots, 6\}$ and $P((i, j)) = \tfrac{1}{36}$. Define the quantity
$X(\omega) = \sum_{k=0}^{\infty} k\, I_{\{(i,j) : i + j = k\}}(\omega)$.
Is $X$ a random variable? Find $P_X(x) = P(X = x)$, calculate $E[X]$ and describe $\sigma(X)$, the $\sigma$-field generated by $X$.

Solution:
First note that $X(\omega) = \sum_{k=2}^{12} k\, I_{\{(i,j) : i + j = k\}}(\omega)$, and as a finite sum of indicator functions of events in the $\sigma$-field of all subsets of $\Omega$, it is a random variable taking values in the set $\{2, 3, \dots, 12\}$. For $x \in \{2, 3, \dots, 12\}$, $P(X = x) = P(i + j = x)$.
$E[X] = \sum_{k=2}^{12} k\, P(i + j = k) = 7$. $\sigma(X)$ is the $\sigma$-field generated by the sets $\{(1,1)\}$, $\{(1,2), (2,1)\}$, $\{(1,3), (3,1), (2,2)\}$, $\{(1,4), (4,1), (2,3), (3,2)\}, \dots, \{(6,6)\}$; that is, it is generated by 11 atoms.
Problem 19. For the function $X$ defined in the previous exercise, describe the random variable $P(A \mid X)$, where $A = \{(i, j) : i \text{ odd}, j \text{ even}\}$, and find its expected value $E[P(A \mid X)]$.

Solution:
$P(A \mid X)(\omega) = \sum_{k=2}^{12} P(A \mid X = k)\, I(X = k)(\omega) = \sum_{k=2}^{12} \dfrac{P(A \cap [X = k])}{P(X = k)}\, I(X = k)(\omega)$.
$E[P(A \mid X)] = \sum_{k=2}^{12} \dfrac{P(A \cap [X = k])}{P(X = k)}\, P(X = k) = \sum_{k=2}^{12} P(A \cap [X = k]) = P(A)$.
Problem 20. Let $\Omega$ be the unit interval $(0, 1]$ and on it are given the following $\sigma$-fields:
$\mathcal{F}_1 = \sigma\{(0, \tfrac{1}{2}], (\tfrac{1}{2}, \tfrac{3}{4}], (\tfrac{3}{4}, 1]\}$,
$\mathcal{F}_2 = \sigma\{(0, \tfrac{1}{4}], (\tfrac{1}{4}, \tfrac{1}{2}], (\tfrac{1}{2}, \tfrac{3}{4}], (\tfrac{3}{4}, 1]\}$,
$\mathcal{F}_3 = \sigma\{(0, \tfrac{1}{8}], (\tfrac{1}{8}, \tfrac{2}{8}], \dots, (\tfrac{7}{8}, 1]\}$.
Consider the mapping
$X(\omega) = x_1 I_{(0, \frac{1}{4}]}(\omega) + x_2 I_{(\frac{1}{4}, \frac{1}{2}]}(\omega) + x_3 I_{(\frac{1}{2}, \frac{3}{4}]}(\omega) + x_4 I_{(\frac{3}{4}, 1]}(\omega)$.
Find $E[X \mid \mathcal{F}_1]$, $E[X \mid \mathcal{F}_2]$, and $E[X \mid \mathcal{F}_3]$.

Solution:
$E[X \mid \mathcal{F}_1] = E\big[x_1 I_{(0, \frac{1}{4}]} + x_2 I_{(\frac{1}{4}, \frac{1}{2}]} + x_3 I_{(\frac{1}{2}, \frac{3}{4}]} + x_4 I_{(\frac{3}{4}, 1]} \mid \mathcal{F}_1\big]$
$= x_1 E[I_{(0, \frac{1}{4}]} \mid \mathcal{F}_1] + x_2 E[I_{(\frac{1}{4}, \frac{1}{2}]} \mid \mathcal{F}_1] + x_3 E[I_{(\frac{1}{2}, \frac{3}{4}]} \mid \mathcal{F}_1] + x_4 E[I_{(\frac{3}{4}, 1]} \mid \mathcal{F}_1]$.
For instance the first term in the sum gives:
$x_1 E[I_{(0, \frac{1}{4}]} \mid \mathcal{F}_1] = x_1 \big( P((0, \tfrac{1}{4}] \mid (0, \tfrac{1}{2}])\, I((0, \tfrac{1}{2}]) + P((0, \tfrac{1}{4}] \mid (\tfrac{1}{2}, \tfrac{3}{4}])\, I((\tfrac{1}{2}, \tfrac{3}{4}]) + P((0, \tfrac{1}{4}] \mid (\tfrac{3}{4}, 1])\, I((\tfrac{3}{4}, 1]) \big)$
$= x_1\, P((0, \tfrac{1}{4}] \mid (0, \tfrac{1}{2}])\, I((0, \tfrac{1}{2}]) = \dfrac{x_1}{2}\, I((0, \tfrac{1}{2}])$.
The remaining terms, and the conditional expectations given $\mathcal{F}_2$ and $\mathcal{F}_3$, are treated in the same way.
Problem 21. Let $\Omega$ be the unit interval and $((0, 1], \mathcal{B}, P)$ the Lebesgue measure space, and consider the following sub-$\sigma$-fields:
$\mathcal{F}_1 = \sigma\{(0, \tfrac{1}{2}], (\tfrac{1}{2}, \tfrac{3}{4}], (\tfrac{3}{4}, 1]\}$,
$\mathcal{F}_2 = \sigma\{(0, \tfrac{1}{4}], (\tfrac{1}{4}, \tfrac{1}{2}], (\tfrac{1}{2}, \tfrac{3}{4}], (\tfrac{3}{4}, 1]\}$.
Consider the mapping $X(\omega) = \omega$. Find $E[E[X \mid \mathcal{F}_1] \mid \mathcal{F}_2]$ and $E[E[X \mid \mathcal{F}_2] \mid \mathcal{F}_1]$ and compare.

Solution:
$E[X \mid \mathcal{F}_1]$ must be constant on the atoms of $\mathcal{F}_1$, so that
$E[X \mid \mathcal{F}_1](\omega) = x_1 I((0, \tfrac{1}{2}]) + x_2 I((\tfrac{1}{2}, \tfrac{3}{4}]) + x_3 I((\tfrac{3}{4}, 1])$, where
$x_1 = \dfrac{E[X I((0, \frac{1}{2}])]}{P((0, \frac{1}{2}])}$, $x_2 = \dfrac{E[X I((\frac{1}{2}, \frac{3}{4}])]}{P((\frac{1}{2}, \frac{3}{4}])}$, $x_3 = \dfrac{E[X I((\frac{3}{4}, 1])]}{P((\frac{3}{4}, 1])}$,
or
$x_1 = \dfrac{\int_{(0, 1/2]} x\, dx}{1/2} = 1/4$, $x_2 = \dfrac{\int_{(1/2, 3/4]} x\, dx}{1/4} = 5/8$, $x_3 = \dfrac{\int_{(3/4, 1]} x\, dx}{1/4} = 7/8$.
Hence
$E[X \mid \mathcal{F}_1](\omega) = \tfrac{1}{4} I((0, \tfrac{1}{2}])(\omega) + \tfrac{5}{8} I((\tfrac{1}{2}, \tfrac{3}{4}])(\omega) + \tfrac{7}{8} I((\tfrac{3}{4}, 1])(\omega)$,
which is an $\mathcal{F}_1$-measurable random variable. Since $\mathcal{F}_1 \subseteq \mathcal{F}_2$,
$E[E[X \mid \mathcal{F}_1] \mid \mathcal{F}_2] = E[X \mid \mathcal{F}_1]$.
For the derivation of $E[E[X \mid \mathcal{F}_2] \mid \mathcal{F}_1]$ see the steps in Problem 20; by the tower property it also equals $E[X \mid \mathcal{F}_1]$, so the two iterated conditional expectations coincide.
Problem 22. Consider the probability measure $P$ on the real line such that
$P(\{0\}) = p$, $P((0, 1)) = q$, $p + q = 1$,
and the random variables defined on $\Omega = \mathbb{R}$
$X_1(x) = 1 + x$, $\quad X_2(x) = 0\cdot I_{\{x \le 0\}} + (1 + x) I_{\{0 < x < 1\}} + 2 I_{\{x \ge 1\}}$,
$X_3(x) = \sum_{k=-\infty}^{+\infty} (1 + x + k)\, I_{\{k \le x < k+1\}}$.
Is there any a.s.-$P$ equality between $X_1$, $X_2$ and $X_3$?

Solution:
When $k = 0$, $X_3(x) = 1 + x$; since $P$ is carried by $[0, 1)$, it follows that $X_1 = X_3$ a.s.-$P$.
On the other hand $X_2(0) = 0 \ne X_1(0) = 1$ with probability $p$, and $X_2(0) = 0 \ne X_3(0) = 1$ with probability $p$, so $X_2$ is not a.s.-$P$ equal to $X_1$ or $X_3$ unless $p = 0$.
Problem 23. Let $X_1$, $X_2$ and $X_3$ be three independent, identically distributed (i.i.d.) random variables such that $P(X_i = 1) = p = 1 - P(X_i = 0) = 1 - q$. Find $P(X_1 + X_2 + X_3 = s \mid X_1, X_2)$.

Solution:
See Example 1.4.3. Since $X_3$ is independent of $(X_1, X_2)$,
$P(X_1 + X_2 + X_3 = s \mid X_1, X_2) = p\, I(X_1 + X_2 = s - 1) + q\, I(X_1 + X_2 = s)$.
Problem 25. On $\Omega = [0, 1]$, with $P$ being Lebesgue measure, show that
$X = x_1 I_{(0, \frac{1}{2}]} + x_2 I_{(\frac{1}{2}, 1]}$ and $Y = y_1 I_{(0, \frac{1}{4}] \cup (\frac{3}{4}, 1]} + y_2 I_{(\frac{1}{4}, \frac{3}{4}]}$
are independent.

Solution:
$\sigma(X) = \sigma\{(0, \tfrac{1}{2}], (\tfrac{1}{2}, 1]\}$ and $\sigma(Y) = \sigma\{(0, \tfrac{1}{4}] \cup (\tfrac{3}{4}, 1], (\tfrac{1}{4}, \tfrac{3}{4}]\}$.
Hence direct calculations give the result: for instance
$P\big((0, \tfrac{1}{2}] \cap ((0, \tfrac{1}{4}] \cup (\tfrac{3}{4}, 1])\big) = \tfrac{1}{4} = P((0, \tfrac{1}{2}])\, P((0, \tfrac{1}{4}] \cup (\tfrac{3}{4}, 1])$,
and similarly for the other pairs of atoms.
Chapter 2
Problem 2. Suppose that at time 0 you have \$a and your opponent has \$b. At times 1, 2, … you bet one dollar, and the game ends when somebody has \$0. Let $S_n$ be a random walk on the integers $\{\dots, -2, -1, 0, +1, +2, \dots\}$ with i.i.d. steps $X$ such that $P(X = -1) = q$, $P(X = +1) = p$. Let $\tau = \inf\{n \ge 1 : S_n = -a \text{ or } S_n = +b\}$, i.e. the first time you or your opponent is ruined; $\{S_{n \wedge \tau}\}_{n=0}^{\infty}$ is the running total of your profit. Show that if $p = q = \tfrac{1}{2}$ then $\{S_{n \wedge \tau}\}$ is a bounded martingale with mean 0 and that the probability of your ruin is $\dfrac{b}{a+b}$.
Show that if the game is not fair ($p \ne q$) then $S_n$ is not a martingale but $Y_n \doteq \big(\tfrac{q}{p}\big)^{S_n}$ is a martingale. Find the probability of your ruin and check that if $a = b = 500$, $p = .499$ and $q = .501$ then $P(\text{ruin}) = .8806$, and that it is almost 1 if $p = \tfrac{1}{3}$.

Solution:
From Theorem 2.3.11 we know that $\{S_{n \wedge \tau}\}_{n=0}^{\infty}$ is a martingale, and $|S_{n \wedge \tau}|$ is bounded by $\max(a, b)$.
Also $S_{\tau} = -a\, I(S_{\tau} = -a) + b\, I(S_{\tau} = b)$. Taking expectations on both sides we find
$0 = E[S_{\tau}] = -a\, P(S_{\tau} = -a) + b\, P(S_{\tau} = b) = -a\, P(S_{\tau} = -a) + b\,(1 - P(S_{\tau} = -a))$,
which gives $P(S_{\tau} = -a) = \dfrac{b}{a+b}$.
If $p \ne q$ then it is easy to check that $S_n$ is not a martingale. Clearly $E|Y_n| < \infty$, and
$E[Y_{n+1} \mid \mathcal{F}_n] = E\big[\big(\tfrac{q}{p}\big)^{S_{n+1}} \mid \mathcal{F}_n\big] = \big(\tfrac{q}{p}\big)^{S_n} E\big[\big(\tfrac{q}{p}\big)^{X_{n+1}}\big] = Y_n\Big(\tfrac{q}{p}\, p + \big(\tfrac{q}{p}\big)^{-1} q\Big) = Y_n$,
which shows that $Y_n$ is a martingale. Stopping $Y$ at $\tau$,
$1 = E[Y_{\tau}] = (q/p)^{-a}\, P(S_{\tau} = -a) + (q/p)^{b}\,(1 - P(S_{\tau} = -a))$,
which gives
$P(\text{ruin}) = P(S_{\tau} = -a) = \dfrac{(q/p)^{b} - 1}{(q/p)^{b} - (q/p)^{-a}} = \dfrac{(q/p)^{a+b} - (q/p)^{a}}{(q/p)^{a+b} - 1}$.
For $a = b = 500$, $p = .499$, $q = .501$ this is approximately $.8806$, and for $p = \tfrac{1}{3}$ it is essentially 1.
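The closed-form ruin probability can be checked against a Monte Carlo simulation of the game. The sketch below is illustrative only (it uses smaller stakes than $a = b = 500$ so that it runs quickly):

```python
import numpy as np

def ruin_probability_mc(a, b, p, n_games=2000, rng=None):
    """Monte Carlo estimate of P(your profit hits -a before +b) with one-dollar
    bets won with probability p. A sketch for checking the closed-form answer."""
    rng = rng or np.random.default_rng(0)
    ruined = 0
    for _ in range(n_games):
        s = 0
        while -a < s < b:
            s += 1 if rng.random() < p else -1
        ruined += (s == -a)
    return ruined / n_games

a, b, p, q = 20, 20, 0.45, 0.55          # illustrative stakes and win probability
r = q / p
exact = (r**b - 1) / (r**b - r**(-a))    # formula derived above
print("exact  :", round(exact, 4))
print("MC est.:", ruin_probability_mc(a, b, p))
```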
Problem 3. Show that if $\{X_n\}$ is an integrable, real-valued process with independent increments and mean 0, then it is a martingale with respect to the filtration it generates; and if in addition $X_n^2$ is integrable, then $X_n^2 - E(X_n^2)$ is a martingale with respect to the same filtration.

Solution:
The first assertion is obvious.
Let $Z_n = X_n^2 - E(X_n^2)$. Clearly the $Z_n$ are integrable.
$E[Z_{n+1} - Z_n \mid \mathcal{F}_n] = E[X_{n+1}^2 - X_n^2 \mid \mathcal{F}_n] - E[X_{n+1}^2] + E[X_n^2]$.
We must show that
$E[X_{n+1}^2 - X_n^2 \mid \mathcal{F}_n] = E[X_{n+1}^2] - E[X_n^2]$.
Let $\Delta X_{n+1} = X_{n+1} - X_n$. We see that $X_{n+1}^2 - X_n^2 = X_{n+1}\,\Delta X_{n+1} + X_n\,\Delta X_{n+1}$. Therefore
$E[X_{n+1}^2 - X_n^2 \mid \mathcal{F}_n] = E[X_{n+1}\,\Delta X_{n+1} \mid \mathcal{F}_n] + X_n\, E[\Delta X_{n+1} \mid \mathcal{F}_n] = E[X_{n+1}\,\Delta X_{n+1} \mid \mathcal{F}_n]$,
using the assumptions on $X_n$. Now
$X_{n+1}\,\Delta X_{n+1} = (X_{n+1} - X_n + X_n)\,\Delta X_{n+1} = (\Delta X_{n+1})^2 + X_n\,\Delta X_{n+1}$.
Therefore
$E[X_{n+1}\,\Delta X_{n+1} \mid \mathcal{F}_n] = E[(\Delta X_{n+1})^2 + X_n\,\Delta X_{n+1} \mid \mathcal{F}_n] = E[(\Delta X_{n+1})^2]$,
using again the assumptions on $X_n$. Now
$E[(\Delta X_{n+1})^2] = E[X_{n+1}^2 + X_n^2 - 2 X_{n+1} X_n] = E[X_{n+1}^2] + E[X_n^2] - 2 E[X_n\, E[X_{n+1} \mid \mathcal{F}_n]]$
$= E[X_{n+1}^2] + E[X_n^2] - 2 E[X_n^2] = E[X_{n+1}^2] - E[X_n^2]$,
using the fact that $X_n$ is a martingale. This finishes the proof.
Problem 4. Let $\{X_n\}$ be a sequence of i.i.d. random variables with $E[X_n] = 0$ and $E[X_n^2] = 1$. Show that $S_n^2 - n$ is an $\mathcal{F}_n = \sigma\{X_1, \dots, X_n\}$-martingale, where $S_n = \sum_{i=1}^n X_i$.

Solution:
Let $Y_n = S_n^2 - n$. Clearly the $Y_n$ are adapted and integrable.
$E[Y_{n+1} \mid \mathcal{F}_n] = E[S_{n+1}^2 - (n + 1) \mid \mathcal{F}_n] = E\big[Y_n + X_{n+1}^2 - 1 + 2 X_{n+1}\sum_{i=1}^n X_i \mid \mathcal{F}_n\big]$
$= Y_n + E[X_{n+1}^2 - 1 \mid \mathcal{F}_n] + 2 E\big[X_{n+1}\sum_{i=1}^n X_i \mid \mathcal{F}_n\big]$.
Using the independence assumption on the $X_n$,
$E[X_{n+1}^2 - 1 \mid \mathcal{F}_n] = E[X_{n+1}^2 - 1] = 1 - 1 = 0$
and
$E\big[X_{n+1}\sum_{i=1}^n X_i \mid \mathcal{F}_n\big] = \sum_{i=1}^n X_i\, E[X_{n+1}] = 0$.
Hence $E[Y_{n+1} \mid \mathcal{F}_n] = Y_n$ and the result follows.
Problem 5. Let $\{y_n\}$ be a sequence of independent random variables with $E[y_n] = 1$. Show that the sequence $X_n = \prod_{k=0}^n y_k$ is a martingale with respect to the filtration $\mathcal{F}_n = \sigma\{y_0, \dots, y_n\}$.

Solution:
Since the $y_n$ are integrable and independent, so are the $X_n$.
$E[X_{n+1} \mid \mathcal{F}_n] = \prod_{k=0}^n y_k\; E[y_{n+1} \mid \mathcal{F}_n] = \prod_{k=0}^n y_k = X_n$.
Therefore $X_n$ is a martingale.
Problem 7. Show that two (square integrable) martingales $X$ and $Y$ are orthogonal if and only if $X_0 Y_0 = 0$ and the process $X_n Y_n$ is a martingale.

Solution:
One direction follows from Theorem 3.2.6. For the other, suppose $X_0 Y_0 = 0$ and $X_n Y_n$ is a martingale; then directly from the definition of the predictable covariation,
$\langle X, Y\rangle_n = E[X_0 Y_0] + \sum_{i=1}^n E[(X_i - X_{i-1})(Y_i - Y_{i-1}) \mid \mathcal{F}_{i-1}]$
$= \sum_{i=1}^n E[X_i Y_i - X_i Y_{i-1} - X_{i-1} Y_i + X_{i-1} Y_{i-1} \mid \mathcal{F}_{i-1}]$
$= \sum_{i=1}^n \big[X_{i-1} Y_{i-1} - X_{i-1} Y_{i-1} - X_{i-1} Y_{i-1} + X_{i-1} Y_{i-1}\big] = 0$,
since $E[X_i Y_i \mid \mathcal{F}_{i-1}] = X_{i-1} Y_{i-1}$ ($XY$ is a martingale) and $E[X_i Y_{i-1} \mid \mathcal{F}_{i-1}] = E[X_{i-1} Y_i \mid \mathcal{F}_{i-1}] = X_{i-1} Y_{i-1}$. Therefore the (square integrable) martingales $X$ and $Y$ are orthogonal.
Problem 9. Let $B_t$ be a standard Brownian motion process ($B_0 = 0$ a.s., $\sigma^2 = 1$). Show that the conditional density of $B_t$, for $t_1 < t < t_2$,
$P(B_t \in dx \mid B_{t_1} = x_1, B_{t_2} = x_2)$,
is a normal density with mean and variance
$\mu = x_1 + \dfrac{x_2 - x_1}{t_2 - t_1}\,(t - t_1)$, $\quad \sigma^2 = \dfrac{(t_2 - t)(t - t_1)}{t_2 - t_1}$.

Solution:
The known distribution and independence of the increments $B_{t_1}$, $B_t - B_{t_1}$, $B_{t_2} - B_t$ lead to the joint density
$f(x_1, x, x_2) = \dfrac{1}{\sqrt{2\pi t_1}}\exp\Big(-\dfrac{x_1^2}{2 t_1}\Big)\,\dfrac{1}{\sqrt{2\pi (t - t_1)}}\exp\Big(-\dfrac{(x - x_1)^2}{2(t - t_1)}\Big)\,\dfrac{1}{\sqrt{2\pi (t_2 - t)}}\exp\Big(-\dfrac{(x_2 - x)^2}{2(t_2 - t)}\Big)$.
Dividing by the density
$f(x_1, x_2) = \dfrac{1}{\sqrt{2\pi t_1}}\exp\Big(-\dfrac{x_1^2}{2 t_1}\Big)\,\dfrac{1}{\sqrt{2\pi (t_2 - t_1)}}\exp\Big(-\dfrac{(x_2 - x_1)^2}{2(t_2 - t_1)}\Big)$
and doing a bit of algebra yields the desired result.
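The conditional mean and variance can be checked by simulating Brownian paths and keeping only those whose values at $t_1$ and $t_2$ fall near prescribed targets. This rough Monte Carlo sketch is an added illustration; the tolerance-based conditioning and all numerical values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
t1, t, t2 = 1.0, 1.6, 3.0
n_paths = 500_000

# Simulate (B_{t1}, B_t, B_{t2}) from independent Gaussian increments.
b_t1 = rng.normal(0.0, np.sqrt(t1), n_paths)
b_t = b_t1 + rng.normal(0.0, np.sqrt(t - t1), n_paths)
b_t2 = b_t + rng.normal(0.0, np.sqrt(t2 - t), n_paths)

# Condition (approximately) on B_{t1} ~ x1 and B_{t2} ~ x2 by selecting paths.
x1, x2, eps = 0.5, -1.0, 0.1
sel = (np.abs(b_t1 - x1) < eps) & (np.abs(b_t2 - x2) < eps)

mu = x1 + (x2 - x1) * (t - t1) / (t2 - t1)
var = (t2 - t) * (t - t1) / (t2 - t1)
print("paths kept:", sel.sum())
print("empirical mean/var  :", round(b_t[sel].mean(), 3), round(b_t[sel].var(), 3))
print("theoretical mean/var:", round(mu, 3), round(var, 3))
```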
Problem 12. Let $N_t$ be a standard Poisson process and $Z_1, Z_2, \dots$ a sequence of i.i.d. random variables such that $P(Z_i = 1) = P(Z_i = -1) = \tfrac{1}{2}$. Show that the process
$X_t = \sum_{i=1}^{N_t} Z_i$
is a martingale with respect to the filtration $\mathcal{F}_t = \sigma\{X_s,\ s \le t\}$.

Solution:
Using Wald's equation,
$E[|X_t|] = E\big[\big|\sum_{i=1}^{N_t} Z_i\big|\big] \le E\big[\sum_{i=1}^{N_t} |Z_i|\big] = E[N_t]\, E[|Z_i|] < \infty$.
Therefore $X_t$ is integrable. Now for $s \le t$,
$E[X_t \mid \mathcal{F}_s] = E\big[\sum_{i=1}^{N_s} Z_i + \sum_{i=N_s+1}^{N_t} Z_i \mid \mathcal{F}_s\big] = X_s + E\big[\sum_{i=N_s+1}^{N_t} Z_i \mid \mathcal{F}_s\big]$.
However, conditioning first on $\mathcal{F}_s \vee \sigma\{N_u, u \le t\}$ and using the fact that the $Z_i$ have mean 0 and are independent of $N$,
$E\big[\sum_{i=N_s+1}^{N_t} Z_i \mid \mathcal{F}_s\big] = E\big[E\big[\sum_{i=N_s+1}^{N_t} Z_i \mid \mathcal{F}_s \vee \sigma\{N_u, u \le t\}\big] \mid \mathcal{F}_s\big] = 0$,
and the result follows.
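A short simulation of this compound Poisson process illustrates two consequences of the martingale property, $E[X_t - X_s] = 0$ and the uncorrelatedness of the increment with the past. The snippet is a sketch with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, s, t, n_paths = 1.0, 1.0, 3.0, 100_000

def compound_poisson(T):
    """X_T = sum of N_T fair +-1 signs; same law as 2*Binomial(N_T, 1/2) - N_T."""
    n = rng.poisson(lam * T, n_paths)
    return 2 * rng.binomial(n, 0.5) - n

x_s = compound_poisson(s)
increment = compound_poisson(t - s)      # independent of the past by construction
x_t = x_s + increment

print("mean of X_t - X_s   :", round((x_t - x_s).mean(), 4))
print("corr(X_s, X_t - X_s):", round(np.corrcoef(x_s, x_t - x_s)[0, 1], 4))
```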
Problem 13. Show that the process $\{B_t^2 - t, \mathcal{F}_t^B\}$ is a martingale, where $B$ is the standard Brownian motion process and $\mathcal{F}_t^B$ its natural filtration.

Solution:
Clearly $B_t^2 - t$ is adapted and integrable. Let $s \le t$.
$E[B_t^2 - t \mid \mathcal{F}_s^B] = B_s^2 - s + E[B_t^2 - t - B_s^2 + s \mid \mathcal{F}_s^B]$.
But
$E[B_t^2 - t - B_s^2 + s \mid \mathcal{F}_s^B] = s - t + E[B_t^2 - B_s^2 \mid \mathcal{F}_s^B] = s - t + E[(B_t - B_s)^2 \mid \mathcal{F}_s^B] = s - t + t - s = 0$,
since $E[B_t^2 - B_s^2 \mid \mathcal{F}_s^B] = E[(B_t - B_s)^2 \mid \mathcal{F}_s^B] + 2 B_s E[B_t - B_s \mid \mathcal{F}_s^B] = t - s$.
Hence $\{B_t^2 - t, \mathcal{F}_t^B\}$ is a martingale.
Problem 14. Show that the process $(N_t - \lambda t)^2 - \lambda t$ is a martingale, where $N_t$ is a Poisson process with parameter $\lambda$.

Solution:
Clearly $(N_t - \lambda t)^2 - \lambda t$ is adapted and integrable.
Let $L_t = (N_t - \lambda t)^2 - \lambda t$, $M_t = N_t - \lambda t$, and recall that $M_t$ is a martingale. Now
$E[L_t - L_s \mid \mathcal{F}_s^N] = E[M_t^2 - M_s^2 \mid \mathcal{F}_s^N] - \lambda(t - s)$,
$E[M_t^2 - M_s^2 \mid \mathcal{F}_s^N] = E[(M_t - M_s)(M_t + M_s) \mid \mathcal{F}_s^N] = E[M_t(M_t - M_s) \mid \mathcal{F}_s^N]$,
because $M_t$ is a martingale and $M_s$ is $\mathcal{F}_s^N$-measurable. Also
$E[M_t(M_t - M_s) \mid \mathcal{F}_s^N] = E[M_t^2 \mid \mathcal{F}_s^N] - M_s^2$,
$E[M_t^2 \mid \mathcal{F}_s^N] = E[N_t^2 \mid \mathcal{F}_s^N] - 2\lambda t M_s - (\lambda t)^2$,
$E[N_t^2 \mid \mathcal{F}_s^N] = E[(N_t - N_s)(N_t + N_s) \mid \mathcal{F}_s^N] + N_s^2$,
$E[(N_t - N_s)(N_t + N_s) \mid \mathcal{F}_s^N] = E[(N_t - N_s + N_s)(N_t - N_s) \mid \mathcal{F}_s^N] + N_s E[N_t - N_s \mid \mathcal{F}_s^N]$.
Using the independent increment property of $N$,
$E[(N_t - N_s + N_s)(N_t - N_s) \mid \mathcal{F}_s^N] = E[(N_t - N_s)^2] + N_s E[N_t - N_s] = \lambda(t - s) + \lambda^2(t - s)^2 + M_s\lambda(t - s) + \lambda s\,\lambda(t - s)$,
and
$N_s E[N_t - N_s \mid \mathcal{F}_s^N] = N_s M_s + N_s\lambda t - N_s^2 = M_s^2 + M_s\lambda s + M_s\lambda t + \lambda s\,\lambda t - N_s^2$.
Putting all this together yields
$E[L_t - L_s \mid \mathcal{F}_s^N] = 0$ and the result follows.
Problem 15. Show that the process
$I_t = \int_0^t f(\omega, s)\, dM_s$
is a martingale. Here $f(\cdot)$ is an adapted, bounded process with continuous sample paths and $M_t = N_t - \lambda t$ is the Poisson martingale.

Solution:
First note that on any finite interval the $dN$ part of the integral is just a finite sum w.p.1, and since $f$ is bounded, $I_t$ is integrable.
For $s \le t$,
$E[I_t \mid \mathcal{F}_s^N] = I_s + E\big[\sum_{s < u \le t} f(u)\,\Delta M_u \mid \mathcal{F}_s^N\big]$, and
$E\big[\sum_{s < u \le t} f(u)\,\Delta M_u \mid \mathcal{F}_s^N\big] = \sum_{s < u \le t} E\big[f(u)\, E[\Delta M_u \mid \mathcal{F}_{u-}^N] \mid \mathcal{F}_s^N\big]$.
Since $M$ is a martingale, $E[\Delta M_u \mid \mathcal{F}_{u-}^N] = 0$, which finishes the proof.
Problem 16. Referring to Example 2.4.4, define the processes
$N_n^{sr} = \sum_{k=1}^n I_{(X_{k-1} = e_s,\, X_k = e_r)} = \sum_{k=1}^n \langle X_{k-1}, e_s\rangle\langle X_k, e_r\rangle$, (0.1)
and
$O_n^{r} = \sum_{k=1}^n I_{(X_k = e_r)} = \sum_{k=1}^n \langle X_k, e_r\rangle$. (0.2)
Show that (0.1) and (0.2) are increasing processes and give their Doob decompositions.

Solution:
Being sums of indicator functions, $N_n^{sr}$ and $O_n^r$ are clearly increasing.
Recall the representation $X_n = A X_{n-1} + M_n$, where $A$ is the transition matrix of the chain and $M_n$ is a martingale increment. Substituting this form of the Markov chain $X$ in (0.1) and (0.2) gives their Doob decompositions.
Problem 18. Let $\tau$ be a stopping time with respect to the filtration $\{X_n, \mathcal{F}_n\}$. Show that
$\mathcal{F}_{\tau} = \{A \in \mathcal{F} : A \cap \{\omega : \tau(\omega) \le n\} \in \mathcal{F}_n \ \forall\, n \ge 0\}$
is a $\sigma$-field and that $\tau$ is $\mathcal{F}_{\tau}$-measurable.

Solution:
The first assertion is easy.
To show that $\tau$ is $\mathcal{F}_{\tau}$-measurable, note that for all $k, n$,
$\{\tau \le k\} \cap \{\tau \le n\} = \{\tau \le \min(k, n)\} \in \mathcal{F}_{\min(k,n)} \subseteq \mathcal{F}_n$,
from which the result follows.
Problem 19. Let $\{X_n\}$ be a stochastic process adapted to the filtration $\{\mathcal{F}_n\}$ and $B$ a Borel set. Show that
$\tau_B = \inf\{n \ge 0 : X_n \in B\}$
is a stopping time with respect to $\{\mathcal{F}_n\}$.

Solution:
Since
$[\tau_B \le k] = [X_0 \in B] \cup [X_0 \notin B, X_1 \in B] \cup \dots \cup [X_0 \notin B, X_1 \notin B, \dots, X_{k-1} \notin B, X_k \in B] \in \mathcal{F}_k$,
$\tau_B$ is a stopping time with respect to $\{\mathcal{F}_n\}$.
Problem 20. Show that if $\tau_1, \tau_2$ are two stopping times such that $\tau_1 \le \tau_2$ (a.s.), then $\mathcal{F}_{\tau_1} \subseteq \mathcal{F}_{\tau_2}$.

Solution:
Let $A \in \mathcal{F}_{\tau_1}$. Since $[\tau_2 \le n] \subseteq [\tau_1 \le n]$,
$A \cap [\tau_2 \le n] = (A \cap [\tau_1 \le n]) \cap [\tau_2 \le n] \in \mathcal{F}_n$ for all $n$,
so $A \in \mathcal{F}_{\tau_2}$.
Problem 21. Show that if $\tau$ is a stopping time and $a$ is a positive constant, then $\tau + a$ is a stopping time.

Solution:
If $a > n$ then $[\tau + a \le n] = [\tau \le n - a] = \emptyset \in \mathcal{F}_n$.
If $a \le n$ then $[\tau + a \le n] = [\tau \le n - a] \in \mathcal{F}_{n-a} \subseteq \mathcal{F}_n$; therefore $\tau + a$ is a stopping time. Note that if $a < 0$, $\tau + a$ need not be a stopping time.
Problem 22. Show that if $\{\tau_n\}$ is a sequence of stopping times and the filtration $\{\mathcal{F}_t\}$ is right-continuous, then $\inf \tau_n$, $\liminf \tau_n$ and $\limsup \tau_n$ are stopping times.

Solution:
Since the filtration is right-continuous, it is enough to show that $[\,\cdot < t\,]$ (or $[\,\cdot \le t\,]$) belongs to $\mathcal{F}_t$ in each case.
$[\inf \tau_n < t] = \bigcup_n [\tau_n < t] \in \mathcal{F}_t$, so $\inf \tau_n$ is a stopping time.
Each $\rho_n = \inf_{k \ge n} \tau_k$ is then a stopping time, and
$[\liminf \tau_n \le t] = [\sup_{n \ge 1} \rho_n \le t] = \bigcap_{n \ge 1} [\rho_n \le t] \in \mathcal{F}_t$,
so $\liminf \tau_n$ is a stopping time. Similarly each $\sup_{k \ge n} \tau_k$ is a stopping time, since $[\sup_{k \ge n}\tau_k \le t] = \bigcap_{k \ge n}[\tau_k \le t] \in \mathcal{F}_t$, and
$[\limsup \tau_n < t] = [\inf_{n \ge 1}\sup_{k \ge n}\tau_k < t] = \bigcup_{n \ge 1}[\sup_{k \ge n}\tau_k < t] \in \mathcal{F}_t$.
Therefore, by the right continuity of $\{\mathcal{F}_t\}$, $\inf \tau_n$, $\liminf \tau_n$ and $\limsup \tau_n$ are stopping times.
Chapter 3
Problem 1. Let $X_1, X_2, \dots$ be a sequence of i.i.d. $N(0, 1)$ random variables and consider the process $Z_0 = 0$, $Z_n = \sum_{k=1}^n X_k$. Show that
$[Z, Z]_n = \sum_{k=1}^n X_k^2$, $\quad \langle Z, Z\rangle_n = n$, $\quad E([Z, Z]_n) = E\big(\sum_{k=1}^n X_k^2\big) = n$.

Solution:
Direct substitution in the formula $[Z, Z]_n = \sum_{k=1}^n (Z_k - Z_{k-1})^2$ gives the first result.
$\langle Z, Z\rangle_n = \sum_{k=1}^n E[(Z_k - Z_{k-1})^2 \mid \mathcal{F}_{k-1}] = \sum_{k=1}^n E[X_k^2 \mid \mathcal{F}_{k-1}] = \sum_{k=1}^n E[X_k^2] = n$,
using the independence and distribution assumptions on $X_1, X_2, \dots$; taking expectations in the first formula gives $E([Z, Z]_n) = n$ as well.
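Numerically, $[Z, Z]_n$ is a random quantity with mean $n$, while $\langle Z, Z\rangle_n = n$ is deterministic. The following sketch (with illustrative parameters) makes the distinction visible:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_paths = 50, 20_000

x = rng.standard_normal((n_paths, n))      # i.i.d. N(0,1) increments X_k
optional_qv = (x**2).sum(axis=1)           # [Z, Z]_n = sum of squared increments
predictable_qv = float(n)                  # <Z, Z>_n = n (deterministic here)

print("E[Z,Z]_n (empirical):", round(optional_qv.mean(), 3))
print("<Z,Z>_n             :", predictable_qv)
print("sd of [Z,Z]_n       :", round(optional_qv.std(), 3))  # ~ sqrt(2n): [Z,Z]_n is random
```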
Problem 2. Show that if $X$ and $Y$ are (square integrable) martingales, then $XY - \langle X, Y\rangle$ is a martingale.

Solution:
Clearly $XY - \langle X, Y\rangle$ is adapted and integrable.
$E[X_n Y_n - \langle X, Y\rangle_n \mid \mathcal{F}_{n-1}] = -\langle X, Y\rangle_{n-1} - E[(X_n - X_{n-1})(Y_n - Y_{n-1}) \mid \mathcal{F}_{n-1}] + E[X_n Y_n \mid \mathcal{F}_{n-1}]$.
But
$X_n Y_n = (X_n - X_{n-1} + X_{n-1})(Y_n - Y_{n-1} + Y_{n-1})$
$= (X_n - X_{n-1})(Y_n - Y_{n-1}) + X_n Y_{n-1} - X_{n-1} Y_{n-1} + Y_n X_{n-1} - X_{n-1} Y_{n-1} + X_{n-1} Y_{n-1}$.
Hence
$E[X_n Y_n \mid \mathcal{F}_{n-1}] = E[(X_n - X_{n-1})(Y_n - Y_{n-1}) \mid \mathcal{F}_{n-1}] + X_{n-1} Y_{n-1}$,
which yields the result.
Problem 3. Establish the identity:
$[X, Y]_n = \tfrac{1}{2}\big([X + Y, X + Y]_n - [X, X]_n - [Y, Y]_n\big)$.

Solution:
$[X + Y, X + Y]_n = (X_0 + Y_0)^2 + \sum_{i=1}^n (X_i + Y_i - X_{i-1} - Y_{i-1})^2$
$= (X_0 + Y_0)^2 + \sum_{i=1}^n (X_i - X_{i-1})^2 + \sum_{i=1}^n (Y_i - Y_{i-1})^2 + 2\sum_{i=1}^n (X_i - X_{i-1})(Y_i - Y_{i-1})$
$= [X, X]_n + [Y, Y]_n + 2[X, Y]_n$,
from which the result follows.
Problem 6. Show that if $X_n$ is a square integrable martingale then $X^2 - \langle X, X\rangle$ is a martingale.

Solution:
See the solution to Problem 2 (take $Y = X$).
Problem 7. Find $[B + N, B + N]_t$ and $\langle B + N, B + N\rangle_t$ for a Brownian motion process $B_t$ and a Poisson process $N_t$.

Solution:
From the identities
$[B + N, B + N]_t = [B, B]_t + [N, N]_t + 2[B, N]_t = [B, B]_t + [N, N]_t$
and
$\langle B + N, B + N\rangle_t = \langle B, B\rangle_t + \langle N, N\rangle_t + 2\langle B, N\rangle_t = \langle B, B\rangle_t + \langle N, N\rangle_t$
(the cross terms vanish because $B$ is continuous while $N$ is of finite variation, and the two processes are independent), and recalling that $[X, X]_t = \langle X^c, X^c\rangle_t + \sum_{s \le t}(\Delta X_s)^2$,
$[N, N]_t = \sum_{0 \le s \le t}(\Delta N_s)^2 = N_t$, $\quad \langle N, N\rangle_t = t$ (standard Poisson), $\quad [B, B]_t = \langle B, B\rangle_t = t$,
we obtain
$[B + N, B + N]_t = t + N_t$ and $\langle B + N, B + N\rangle_t = t + t = 2t$.
Problem 9. Let $f$ be a deterministic square integrable function and $B_t$ a Brownian motion. Show that the stochastic integral $\int_0^t f(s)\, dB_s$ is a normally distributed random variable with distribution $N\big(0, \int_0^t f^2(s)\, ds\big)$.

Solution:
Since $f$ is square integrable there exists a sequence of simple functions $f_n$ such that
$\int_0^t |f(s) - f_n(s)|^2\, ds \to 0 \ (n \to \infty)$, and $\int_0^t f_n(s)\, dB_s \xrightarrow{L^2} \int_0^t f(s)\, dB_s$. (*)
Clearly each of the random variables
$\int_0^t f_n(s)\, dB_s = \sum_k f_n(t_{k-1})(B_{t_k} - B_{t_{k-1}})$
has distribution $N\big(0, \int_0^t f_n^2(s)\, ds\big)$, being a sum of independent centred normal random variables.
From (*), $\int_0^t f_n^2(s)\, ds \to \int_0^t f^2(s)\, ds$, and since an $L^2$ limit of normal random variables is normal, the result follows.
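The Gaussian law of the Wiener integral can be illustrated by discretizing $\int_0^t f(s)\,dB_s$ as an Ito sum. The sketch below (with an arbitrary choice of $f$, added as an illustration) compares the empirical variance with $\int_0^t f^2(s)\,ds$:

```python
import numpy as np

rng = np.random.default_rng(5)
t, n_steps, n_paths = 1.0, 500, 20_000
dt = t / n_steps
s = np.linspace(0.0, t, n_steps, endpoint=False)

f = np.cos(2 * np.pi * s)                      # deterministic square-integrable integrand
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
wiener_integral = (f * dB).sum(axis=1)         # Riemann-Ito sum approximating int f dB

target_var = np.sum(f**2) * dt                 # ~ int_0^t f(s)^2 ds = 1/2 here
print("empirical mean/var  :", round(wiener_integral.mean(), 4), round(wiener_integral.var(), 4))
print("theoretical mean/var:", 0.0, round(target_var, 4))
```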
Problem 10. Show that if
$\int_0^t E[f(s)^2]\, ds < \infty$,
the Ito process $I_t = \int_0^t f(s)\, dB_s$ has orthogonal increments, i.e., for $0 \le r \le s \le t \le u$,
$E[(I_u - I_t)(I_s - I_r)] = 0$.

Solution:
Using the fact that $I$ is a martingale we obtain
$E[(I_u - I_t)(I_s - I_r)] = E\big[(I_s - I_r)\, E[I_u - I_t \mid \mathcal{F}_s]\big] = E[(I_s - I_r)(I_s - I_s)] = 0$.
Problem 11. Show that
$\int_0^t (B_s^2 - s)\, dB_s = \dfrac{B_t^3}{3} - t B_t$.

Solution:
Using the Ito formula,
$B_t^3 = 3\int_0^t B_s^2\, dB_s + 3\int_0^t B_s\, ds$,
and using integration by parts, $\int_0^t B_s\, ds = t B_t - \int_0^t s\, dB_s$, so that
$B_t^3/3 = \int_0^t B_s^2\, dB_s + t B_t - \int_0^t s\, dB_s$,
and the result follows.
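The identity can also be checked pathwise by simulation, approximating the stochastic integral by left-endpoint Ito sums. The sketch below is illustrative; step size and path count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
t, n_steps, n_paths = 1.0, 4000, 20
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_prev = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoint B_s
s_prev = np.arange(n_steps) * dt

# Ito sum for int_0^t (B_s^2 - s) dB_s, integrand evaluated at the left endpoint.
lhs = ((B_prev**2 - s_prev) * dB).sum(axis=1)
rhs = B[:, -1]**3 / 3.0 - t * B[:, -1]
print("max |LHS - RHS| over paths (discretization error):",
      round(np.max(np.abs(lhs - rhs)), 4))
```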
Problem 16. If $N$ is a standard Poisson process show that the stochastic integral $\int_0^t N_s\, d(N_s - s)$ is not a martingale. However, show that $\int_0^t N_{s-}\, d(N_s - s)$ is a martingale. Here $N_t$ is a Poisson process. (Note that at any jump time $s$, $N_{s-} = N_s - 1$.)

Solution:
$\int_0^t N_s\, d(N_s - s) = \int_0^t N_s\, dN_s - \int_0^t N_s\, ds$.
Since $dN_s = \Delta N_s$ is either 0 or 1, we have
$\int_0^t N_s\, dN_s = N_{T_1} + N_{T_2} + \dots + N_t = 1 + 2 + \dots + N_t = \dfrac{N_t(N_t + 1)}{2}$, (*)
where $T_1, T_2, \dots$ are the jump times of $N$.
Let $s \le t$; then
$E\Big[\int_0^t N_u\, d(N_u - u) - \int_0^s N_u\, d(N_u - u) \mid \mathcal{F}_s\Big] = E\Big[\int_s^t N_u\, d(N_u - u) \mid \mathcal{F}_s\Big] = E\Big[\int_s^t N_u\, dN_u - \int_s^t N_u\, du \mid \mathcal{F}_s\Big]$.
In view of (*),
$E\Big[\int_s^t N_u\, dN_u \mid \mathcal{F}_s\Big] = E\Big[\dfrac{N_t(N_t + 1)}{2} - \dfrac{N_s(N_s + 1)}{2} \mid \mathcal{F}_s\Big]$
$= \tfrac{1}{2} E[(N_t - N_s)^2 + 2 N_s(N_t - N_s) + N_t - N_s \mid \mathcal{F}_s]$
$= \tfrac{1}{2}\big[E(N_t - N_s)^2 + 2 N_s E(N_t - N_s) + E(N_t - N_s)\big]$
$= \tfrac{1}{2}\big[(t - s) + (t - s)^2 + (2 N_s + 1)(t - s)\big] = \dfrac{t - s}{2}\big[2 N_s + t - s + 2\big]$.
Also
$E\Big[\int_s^t N_u\, du \mid \mathcal{F}_s\Big] = E\Big[\int_s^t (N_u - N_s + N_s)\, du \mid \mathcal{F}_s\Big] = \int_s^t E(N_u - N_s)\, du + N_s(t - s) = \int_s^t (u - s)\, du + N_s(t - s) = \dfrac{t - s}{2}\big[2 N_s + t - s\big]$.
Since
$E\Big[\int_s^t N_u\, dN_u \mid \mathcal{F}_s\Big] - E\Big[\int_s^t N_u\, du \mid \mathcal{F}_s\Big] = t - s \ne 0$,
it follows that $\int_0^t N_u\, d(N_u - u)$ is not a martingale.
However,
$\int_0^t N_{s-}\, dN_s = 0 + 1 + \dots + (N_t - 1) = \dfrac{N_t(N_t - 1)}{2}$
(or, using the product rule, $\int_0^t N_{s-}\, dN_s = \tfrac{1}{2}(N_t^2 - [N, N]_t)$).
Repeating the calculation above,
$E\Big[\int_s^t N_{u-}\, dN_u \mid \mathcal{F}_s\Big] = \dfrac{t - s}{2}\big[2 N_s + t - s\big] = E\Big[\int_s^t N_u\, du \mid \mathcal{F}_s\Big]$,
hence
$E\Big[\int_s^t N_{u-}\, dN_u \mid \mathcal{F}_s\Big] - E\Big[\int_s^t N_u\, du \mid \mathcal{F}_s\Big] = 0$.
It follows that $\int_0^t N_{u-}\, d(N_u - u)$ is a martingale.
Problem 17. Prove that
$\int_0^t 2^{N_{s-}}\, dN_s = 2^{N_t} - 1$.
Here $N_t$ is a Poisson process.

Solution:
$\int_0^t 2^{N_{s-}}\, dN_s = 2^0 + 2^1 + \dots + 2^{N_t - 1} = 2^{N_t} - 1$.
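Since the integrand equals $2^{k-1}$ at the $k$-th jump time, the identity can be verified exactly path by path. A small sketch (illustrative rate and horizon, added here as a check):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, t = 2.0, 3.0

for path in range(3):
    n_jumps = rng.poisson(lam * t)
    # Only the number of jumps matters: the integrand at the k-th jump is 2^(k-1),
    # so the integral is 2^0 + ... + 2^(N_t - 1).
    integral = sum(2**k for k in range(n_jumps))
    print(f"path {path}: N_t = {n_jumps:2d}, integral = {integral}, 2^N_t - 1 = {2**n_jumps - 1}")
```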
Problem 19. Show that the unique solution of
$x_t = 1 + \int_0^t x_{s-}\, dN_s$
is given by $x_t = 2^{N_t}$. Here $N_t$ is a Poisson process.

Solution:
The result is obtained by using the stochastic exponential formula
$x_t = e^{N_t - \frac{1}{2}\langle N^c, N^c\rangle_t}\prod_{s \le t}(1 + \Delta N_s)\, e^{-\Delta N_s} = e^{N_t}\prod_{s \le t} e^{-\Delta N_s}\prod_{s \le t}(1 + \Delta N_s) = e^{N_t - N_t}\, 2^{N_t} = 2^{N_t}$.
Problem 21. Show that the linear stochastic differential equation
$dX_t = F(t) X_t\, dt + G(t)\, dt + H(t)\, dB_t$, (0.3)
with $X_0 = \xi$, has the solution
$X_t = \Phi(t)\Big(\xi + \int_0^t \Phi^{-1}(s) G(s)\, ds + \int_0^t \Phi^{-1}(s) H(s)\, dB_s\Big)$. (0.4)
Here $F(t)$ is an $n \times n$ bounded measurable matrix, $H(t)$ is an $n \times m$ bounded measurable matrix, $B_t$ is an $m$-dimensional Brownian motion and $G(t)$ is an $\mathbb{R}^n$-valued bounded measurable function. $\Phi(t)$ is the fundamental matrix solution of the deterministic equation
$dX_t = F(t) X_t\, dt$.

Solution:
Differentiating (0.4), using the product rule and $d\Phi(t) = F(t)\Phi(t)\, dt$, shows that $X$ is a solution of (0.3).
Problem 24. Show that the linear stochastic differential equation
$dX_t = -\alpha X_t\, dt + \sigma\, dB_t$,
with $E|X_0|^2 = E|\xi|^2 < \infty$, has the solution
$X_t = e^{-\alpha t}\xi + \sigma\int_0^t e^{-\alpha(t-s)}\, dB_s$,
and
$\mu_t = E[X_t] = e^{-\alpha t} E[\xi]$,
$P(t) = \mathrm{Var}(X_t) = e^{-2\alpha t}\,\mathrm{Var}(\xi) + \dfrac{\sigma^2\,(1 - e^{-2\alpha t})}{2\alpha}$.

Solution:
Direct calculation yields
$d\Big(e^{-\alpha t}\xi + \sigma\int_0^t e^{-\alpha(t-s)}\, dB_s\Big) = -\alpha\Big(e^{-\alpha t}\xi + \sigma\int_0^t e^{-\alpha(t-s)}\, dB_s\Big) dt + \sigma e^{-\alpha t} e^{\alpha t}\, dB_t = -\alpha X_t\, dt + \sigma\, dB_t$.
Clearly $E[X_t] = e^{-\alpha t} E[\xi]$ because $E\big[\int_0^t e^{-\alpha(t-s)}\, dB_s\big] = 0$. We also have
$E\Big[\Big(\int_0^t e^{-\alpha(t-s)}\, dB_s\Big)^2\Big] = \int_0^t e^{-2\alpha(t-s)}\, ds = \dfrac{1 - e^{-2\alpha t}}{2\alpha}$,
from which the variance of $X$ follows.
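An Euler-Maruyama simulation of this Ornstein-Uhlenbeck type equation reproduces the mean and variance formulas. The sketch below uses illustrative values of $\alpha$, $\sigma$ and the initial law (these numbers are assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(8)
alpha, sigma, t, n_steps, n_paths = 1.5, 0.8, 2.0, 2000, 20_000
dt = t / n_steps

xi = rng.normal(1.0, 0.5, n_paths)           # initial law with E[xi] = 1, Var(xi) = 0.25
x = xi.copy()
for _ in range(n_steps):                     # Euler-Maruyama for dX = -alpha X dt + sigma dB
    x += -alpha * x * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)

mean_th = np.exp(-alpha * t) * 1.0
var_th = np.exp(-2 * alpha * t) * 0.25 + sigma**2 * (1 - np.exp(-2 * alpha * t)) / (2 * alpha)
print("mean: empirical %.4f  theory %.4f" % (x.mean(), mean_th))
print("var : empirical %.4f  theory %.4f" % (x.var(), var_th))
```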
Problem 26. Suppose that for $\theta \in \mathbb{R}$
$X_t^{\theta} = e^{\theta M_t - \frac{1}{2}\theta^2 A_t}$
is a martingale, and suppose there is an open neighborhood $I$ of $\theta = 0$ such that for all $\theta \in I$ and all $t$ ($P$-a.s.),
(i) $|X_t^{\theta}| \le a$, (ii) $\big|\dfrac{dX_t^{\theta}}{d\theta}\big| \le b$, (iii) $\big|\dfrac{d^2 X_t^{\theta}}{d\theta^2}\big| \le c$.
Here $a, b, c$ are nonrandom constants which depend on $I$, but not on $t$. Show that then the processes $M_t$ and $M_t^2 - A_t$ are martingales.

Solution:
Take $s \le t$ and $A \in \mathcal{F}_s$. Then, using (i) and (ii) to justify differentiating under the integral sign,
$\int_A E\Big[\Big(\dfrac{dX_t^{\theta}}{d\theta}\Big)_{\theta=0} \mid \mathcal{F}_s\Big] dP = \int_A \Big(\dfrac{dX_t^{\theta}}{d\theta}\Big)_{\theta=0} dP = \Big(\dfrac{d}{d\theta}\int_A X_t^{\theta}\, dP\Big)_{\theta=0} = \Big(\dfrac{d}{d\theta}\int_A X_s^{\theta}\, dP\Big)_{\theta=0} = \int_A \Big(\dfrac{dX_s^{\theta}}{d\theta}\Big)_{\theta=0} dP$,
where the third equality uses the martingale property of $X^{\theta}$ and $A \in \mathcal{F}_s$. That is,
$E\Big[\Big(\dfrac{dX_t^{\theta}}{d\theta}\Big)_{\theta=0} \mid \mathcal{F}_s\Big] = \Big(\dfrac{dX_s^{\theta}}{d\theta}\Big)_{\theta=0}$ $P$-a.s.,
so, since $\big(\frac{dX_t^{\theta}}{d\theta}\big)_{\theta=0} = M_t$, $E[M_t \mid \mathcal{F}_s] = M_s$ a.s.
The second assertion follows similarly using (iii), because
$\Big(\dfrac{d^2 X_t^{\theta}}{d\theta^2}\Big)_{\theta=0} = M_t^2 - A_t$.
Chapter 4
Problem 1. Consider the probability space $([0, 1], \mathcal{B}([0, 1]), \lambda)$, where $\lambda$ is Lebesgue measure on the Borel sigma-field $\mathcal{B}([0, 1])$. Let $P$ be another probability measure carried by the singleton $\{0\}$, i.e. $P(\{0\}) = 1$. Let
$\mathcal{P}_1 = \{[0, \tfrac{1}{2}], (\tfrac{1}{2}, 1]\}$, $\quad \mathcal{P}_2 = \{[0, \tfrac{1}{4}], (\tfrac{1}{4}, \tfrac{3}{4}], (\tfrac{3}{4}, 1]\}, \dots,$ $\quad \mathcal{P}_n = \{[0, \tfrac{1}{2^n}], \dots, (1 - \tfrac{1}{2^n}, 1]\}$.
Define the random variables
$\Lambda_n(\omega) = \dfrac{P([0, 2^{-n}])}{\lambda([0, 2^{-n}])} = 2^n$ for $\omega \in [0, 2^{-n}]$, and $\Lambda_n(\omega) = 0$ elsewhere in $[0, 1]$.
Show that the sequence $\Lambda_n$ is a positive martingale (with respect to the filtration generated by the partitions $\mathcal{P}_n$) such that $E_{\lambda}[\Lambda_n] = 1$ for all $n$ but $\lim \Lambda_n = 0$ $\lambda$-almost surely.

Solution:
Clearly $\Lambda_n$ is adapted, $\lambda$-integrable and positive for all $n$. Moreover
$E_{\lambda}[\Lambda_n] = 2^n\,\lambda([0, 2^{-n}]) = 2^n\, 2^{-n} = 1$.
$E_{\lambda}[\Lambda_{n+1} \mid \mathcal{F}_n] = 2^{n+1}\, E_{\lambda}[I([0, 2^{-n-1}]) \mid \mathcal{F}_n] = 2^{n+1}\,\dfrac{\lambda([0, 2^{-n-1}] \cap [0, 2^{-n}])}{\lambda([0, 2^{-n}])}\, I([0, 2^{-n}]) = 2^n\, I([0, 2^{-n}]) = \Lambda_n$.
Therefore $\Lambda$ is a martingale.
To prove the $\lambda$-a.s. convergence of $\Lambda_n$ to 0 it suffices to show that $\sum_{n=1}^{\infty}\lambda(\Lambda_n \ge \epsilon) < \infty$ for all $\epsilon > 0$. But $\lambda(\Lambda_n \ge \epsilon) = \dfrac{1}{2^n}$ and the result follows.
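The behaviour of $\Lambda_n$ — unit expectation under $\lambda$ for every $n$, yet $\lambda$-a.s. convergence to 0 — is easy to visualize by sampling $\omega$ uniformly. The small sketch below is an added illustration (sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n_max, n_samples = 20, 100_000

omega = rng.random(n_samples)               # omega ~ Lebesgue measure on [0, 1]
means = []
for n in range(1, n_max + 1):
    lam_n = np.where(omega <= 2.0**-n, 2.0**n, 0.0)   # Lambda_n(omega)
    means.append(lam_n.mean())

# E[Lambda_n] stays near 1 while Lambda_n(omega) = 0 eventually for a.e. omega.
print("empirical E[Lambda_n], n = 1..6:", np.round(means[:6], 3))
print("fraction of omegas with Lambda_20 > 0:", (omega <= 2.0**-n_max).mean())
```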
Problem 2. Prove Lemma 4.2.14.

Solution:
Using Bayes' Theorem we have
$E[Y_{k+1} \mid \mathcal{G}_k] = \dfrac{\bar E[\bar\Lambda_{k+1} Y_{k+1} \mid \mathcal{G}_k]}{\bar E[\bar\Lambda_{k+1} \mid \mathcal{G}_k]} = \dfrac{\bar\Lambda_k\,\bar E[\bar\lambda_{k+1} Y_{k+1} \mid \mathcal{G}_k]}{\bar\Lambda_k\,\bar E[\bar\lambda_{k+1} \mid \mathcal{G}_k]} = \bar E[\bar\lambda_{k+1} Y_{k+1} \mid \mathcal{G}_k]$
(since $\bar E[\bar\lambda_{k+1} \mid \mathcal{G}_k] = 1$)
$= \bar E\Big[\sum_{i=1}^M M c^i_{k+1}\langle Y_{k+1}, f_i\rangle\, Y_{k+1} \mid \mathcal{G}_k\Big] = \sum_{i=1}^M M c^i_{k+1} f_i\,\bar E\big[\langle Y_{k+1}, f_i\rangle \mid \mathcal{G}_k\big] = \sum_{i=1}^M M c^i_{k+1} f_i\,(1/M) = \sum_{i=1}^M c^i_{k+1} f_i = c_{k+1} = C X_k$,
which finishes the proof.
Problem 3. Consider the order-2 Markov chain $\{X_n\}$, $1 \le n \le N$, discussed in Example 2.4.6. Define a new probability measure $\hat P$ on $(\Omega, \bigvee\mathcal{F}_n)$ such that $\hat P(X_n = e_k \mid X_{n-2} = e_i, X_{n-1} = e_j) = \hat p_{k, ij}$.

Solution:
Let $\lambda_0 = 1$. Define
$\lambda_n(\omega) = \sum_{k, i, j}\dfrac{\hat p_{k, ij}}{p_{k, ij}}\, I_{(X_n = e_k,\, X_{n-1} = e_j,\, X_{n-2} = e_i)}(\omega)$, $\quad \Lambda_N = \prod_{n=1}^N \lambda_n$.
It is easy to show that $\Lambda_n$ is an $\{\mathcal{F}_n\}$, $P$-martingale. Define $\hat P$ by setting the restriction of $\frac{d\hat P}{dP}$ to $\mathcal{F}_N$ equal to $\Lambda_N$. We have to show that under $\hat P$ the order-2 Markov chain $X$ has transition probabilities $\hat p_{k, ij}$.
Using Bayes' Theorem write
$\hat P[X_n = e_{\ell} \mid \mathcal{F}_{n-1}] = \hat E[I_{(X_n = e_{\ell})} \mid \mathcal{F}_{n-1}] = \dfrac{E[I_{(X_n = e_{\ell})}\Lambda_n \mid \mathcal{F}_{n-1}]}{E[\Lambda_n \mid \mathcal{F}_{n-1}]} = \dfrac{\Lambda_{n-1}\, E[I_{(X_n = e_{\ell})}\lambda_n \mid \mathcal{F}_{n-1}]}{\Lambda_{n-1}\, E[\lambda_n \mid \mathcal{F}_{n-1}]} = E[I_{(X_n = e_{\ell})}\lambda_n \mid \mathcal{F}_{n-1}]$,
since $E[\lambda_n \mid \mathcal{F}_{n-1}] = 1$. Therefore
$\hat P[X_n = e_{\ell} \mid \mathcal{F}_{n-1}] = \sum_{i, j}\dfrac{\hat p_{\ell, ij}}{p_{\ell, ij}}\, I_{(X_{n-1} = e_j,\, X_{n-2} = e_i)}\, P[X_n = e_{\ell} \mid X_{n-2} = e_i, X_{n-1} = e_j] = \sum_{i, j}\dfrac{\hat p_{\ell, ij}}{p_{\ell, ij}}\, I_{(X_{n-1} = e_j,\, X_{n-2} = e_i)}\, p_{\ell, ij}$,
which equals $\hat p_{\ell, ij}$ on the event $(X_{n-2} = e_i, X_{n-1} = e_j)$; this gives the result.
Problem 6. Show that the exponential martingale $\Lambda_t$ given by (4.7.5) is the unique solution of
$\Lambda_t = 1 + \sum_{i \ne j}\int_0^t \Lambda_{s-}\,(a^{ij}_s)^{-1}(\hat a^{ij}_s - a^{ij}_s)\,(dN^{ij}_s - a^{ij}_s\, ds)$.

Solution:
Equation (4.7.5) is
$\Lambda_t = \prod_{i \ne j}\exp\Big(\int_0^t \log\dfrac{\hat a^{ij}_r}{a^{ij}_r}\, dN^{ij}_r - \int_0^t (\hat a^{ij}_r - a^{ij}_r)\, dr\Big)$.
Let us simplify the jump part of $\Lambda_t$:
$\prod_{i \ne j}\exp\Big(\int_0^t \log\dfrac{\hat a^{ij}_r}{a^{ij}_r}\, dN^{ij}_r\Big) = \prod_{i \ne j}\exp\Big(\sum_{0 < r \le t}\log\dfrac{\hat a^{ij}_r}{a^{ij}_r}\,\Delta N^{ij}_r\Big) = \prod_{i \ne j}\prod_{0 < r \le t}\Big(\dfrac{\hat a^{ij}_r}{a^{ij}_r}\Big)^{\Delta N^{ij}_r}$.
Write $\Lambda_t = e^{-\int_0^t\sum_{i \ne j}(\hat a^{ij}_s - a^{ij}_s)\, ds}\, Y_t$, where
$Y_t = \prod_{i \ne j}\prod_{0 < s \le t}\Big(\dfrac{\hat a^{ij}_s}{a^{ij}_s}\Big)^{\Delta N^{ij}_s}$, and $\Lambda_t = f(t, Y_t)$.
Using the Ito rule,
$f(t, Y_t) = 1 + \int_0^t \dfrac{\partial f(s, Y_s)}{\partial s}\, ds + \int_0^t \dfrac{\partial f(s, Y_{s-})}{\partial Y}\, dY_s + \sum_{0 < s \le t}\Big(f(s, Y_s) - f(s, Y_{s-}) - \dfrac{\partial f(s, Y_{s-})}{\partial Y}\,\Delta Y_s\Big)$. (0.5)
The first integral in (0.5) is simply $-\int_0^t \Lambda_s\sum_{i \ne j}(\hat a^{ij}_s - a^{ij}_s)\, ds$.
Because $Y_t$ is a purely discontinuous process of bounded variation, the $dY$ integral in (0.5) cancels against the $\frac{\partial f}{\partial Y}\Delta Y_s$ terms of the sum, and in expression (0.5) we have
$f(s, Y_s) - f(s, Y_{s-}) = \Lambda_s - \Lambda_{s-} = \Lambda_{s-}\Big(\prod_{i \ne j}\Big(\dfrac{\hat a^{ij}_s}{a^{ij}_s}\Big)^{\Delta N^{ij}_s} - 1\Big) = \sum_{i \ne j}\Lambda_{s-}\Big(\dfrac{\hat a^{ij}_s}{a^{ij}_s} - 1\Big)\Delta N^{ij}_s$,
since at most one of the counting processes $N^{ij}$ jumps at any time $s$. Putting all these results together gives
$\Lambda_t = 1 + \sum_{i \ne j}\int_0^t \Lambda_{s-}\,(a^{ij}_s)^{-1}(\hat a^{ij}_s - a^{ij}_s)\,(dN^{ij}_s - a^{ij}_s\, ds)$.
For the uniqueness see the reference in Example 3.6.11.
Problem 7. Prove Lemma 4.7.3.

Solution:
By the characterization theorem of Poisson processes (see Theorem 3.6.14) we must show that $\hat M^{ij}_t \doteq N^{ij}_t - \hat a^{ij} t$ and $(\hat M^{ij}_t)^2 - \hat a^{ij} t$ are $(\hat P, \mathcal{F}_t)$-martingales.
By Bayes' Theorem, for $t \ge s \ge 0$,
$\hat E[\hat M^{ij}_t \mid \mathcal{F}_s] = \dfrac{E[\Lambda_t\,\hat M^{ij}_t \mid \mathcal{F}_s]}{E[\Lambda_t \mid \mathcal{F}_s]} = \dfrac{E[\Lambda_t\,\hat M^{ij}_t \mid \mathcal{F}_s]}{\Lambda_s}$.
Therefore $\hat M^{ij}_t$ is a $(\hat P, \mathcal{F}_t)$-martingale if and only if $\Lambda_t\,\hat M^{ij}_t$ is a $(P, \mathcal{F}_t)$-martingale. Now, by the product rule,
$\Lambda_t\,\hat M^{ij}_t = \int_0^t \Lambda_{s-}\, d\hat M^{ij}_s + \int_0^t \hat M^{ij}_{s-}\, d\Lambda_s + [\Lambda, \hat M^{ij}]_t$.
Recall
$[\Lambda, \hat M^{ij}]_t = \sum_{0 < s \le t}\Delta\Lambda_s\,\Delta\hat M^{ij}_s = \sum_{0 < s \le t}\Lambda_{s-}\,(a^{ij})^{-1}(\hat a^{ij} - a^{ij})\,(\Delta N^{ij}_s)^2 = \int_0^t \Lambda_{s-}\,(a^{ij})^{-1}(\hat a^{ij} - a^{ij})\, d[N^{ij}, N^{ij}]_s = \int_0^t \Lambda_{s-}\,(a^{ij})^{-1}(\hat a^{ij} - a^{ij})\, dN^{ij}_s$.
Therefore
$\Lambda_t\,\hat M^{ij}_t = \int_0^t \Lambda_{s-}\,(dN^{ij}_s - \hat a^{ij}\, ds) + \int_0^t \hat M^{ij}_{s-}\, d\Lambda_s + \int_0^t \Lambda_{s-}\,(a^{ij})^{-1}(\hat a^{ij} - a^{ij})\, dN^{ij}_s$. (0.6)
The second integral on the right of (0.6) is a $(P, \mathcal{F}_t)$-martingale (recall that $N^{ij}_t - a^{ij} t$ is a $(P, \mathcal{F}_t)$-martingale, and so is $\Lambda$). The other two integrals are written as
$\int_0^t \Lambda_{s-}\,(dN^{ij}_s - \hat a^{ij}\, ds) = \int_0^t \Lambda_{s-}\,(dN^{ij}_s - a^{ij}\, ds + a^{ij}\, ds - \hat a^{ij}\, ds) = \int_0^t \Lambda_{s-}\,(dN^{ij}_s - a^{ij}\, ds) + \int_0^t \Lambda_{s-}\,(a^{ij} - \hat a^{ij})\, ds$, (0.7)
and
$\int_0^t \Lambda_{s-}\,(a^{ij})^{-1}(\hat a^{ij} - a^{ij})\, dN^{ij}_s = \int_0^t \Lambda_{s-}\,(a^{ij})^{-1}(\hat a^{ij} - a^{ij})\,(dN^{ij}_s - a^{ij}\, ds + a^{ij}\, ds)$
$= \int_0^t \Lambda_{s-}\,(a^{ij})^{-1}(\hat a^{ij} - a^{ij})\,(dN^{ij}_s - a^{ij}\, ds) + \int_0^t \Lambda_{s-}\,(\hat a^{ij} - a^{ij})\, ds$. (0.8)
Substituting (0.7) and (0.8) in (0.6), the $ds$ terms cancel and every remaining term is a $(P, \mathcal{F}_t)$-martingale; this yields the desired result, and it remains to show that $(\hat M^{ij}_t)^2 - \hat a^{ij} t$ is also a $(\hat P, \mathcal{F}_t)$-martingale. Now
$(\hat M^{ij}_t)^2 = 2\int_0^t \hat M^{ij}_{s-}\, d\hat M^{ij}_s + [\hat M^{ij}, \hat M^{ij}]_t = 2\int_0^t \hat M^{ij}_{s-}\, d\hat M^{ij}_s + N^{ij}_t$. (0.9)
Subtracting $\hat a^{ij} t$ from both sides of (0.9) makes the last term on the right of (0.9) a $(\hat P, \mathcal{F}_t)$-martingale, and since the $d\hat M^{ij}$ integral is a $(\hat P, \mathcal{F}_t)$-martingale, the result follows.
Chapter 5
Problem 1. Assume that the state and observation processes of a system are given by the vector dynamics (5.4.1) and (5.4.2). For $m, k \in \mathbb{N}$, $m < k$, write the unnormalized conditional density such that
$\bar E[\bar\Lambda_k\, I(X_m \in dx) \mid \mathcal{Y}_k] = \beta_{m,k}(x)\, dx$.
Using the change of measure techniques described in Section 5.3, show that
$\beta_{m,k}(x) = \alpha_m(x)\, v_{m,k}(x)$,
where $\alpha_m(x)$ is given recursively by (5.4.6). Show that
$v_{m,k}(x) = \bar E[\bar\Lambda_{m+1,k} \mid X_m = x, \mathcal{Y}_k]$
$= \dfrac{1}{\phi(Y_{m+1})}\int_{\mathbb{R}^m}\phi_{m+1}(Y_{m+1} - C_{m+1} z)\,\psi_{m+1}(z - A_{m+1} x)\, v_{m+1,k}(z)\, dz$. (0.10)

Solution:
For an arbitrary integrable function $f : \mathbb{R}^m \to \mathbb{R}$ write
$\bar E[\bar\Lambda_k\, f(X_m) \mid \mathcal{Y}_k] = \int_{\mathbb{R}^m} f(x)\,\beta_{m,k}(x)\, dx$.
However,
$\bar E[\bar\Lambda_k\, f(X_m) \mid \mathcal{Y}_k] = \bar E\big[\bar\Lambda_{1,m}\, f(X_m)\,\bar E[\bar\Lambda_{m+1,k} \mid X_0, \dots, X_m, \mathcal{Y}_k] \mid \mathcal{Y}_k\big]$.
Let $\bar E[\bar\Lambda_{m+1,k} \mid X_m = x, \mathcal{Y}_k] \doteq v_{m,k}(x)$. Consequently,
$\bar E[\bar\Lambda_k\, f(X_m) \mid \mathcal{Y}_k] = \bar E[\bar\Lambda_{1,m}\, f(X_m)\, v_{m,k}(X_m) \mid \mathcal{Y}_k]$.
Therefore
$\int_{\mathbb{R}^m} f(x)\,\beta_{m,k}(x)\, dx = \int_{\mathbb{R}^m} f(x)\,\alpha_m(x)\, v_{m,k}(x)\, dx$,
from which the first result follows. Now
$v_{m,k}(x) = \bar E[\bar\Lambda_{m+1,k} \mid X_m = x, \mathcal{Y}_k] = \bar E[\bar\lambda_{m+1}\,\bar\Lambda_{m+2,k} \mid X_m = x, \mathcal{Y}_k]$
$= \bar E\big[\bar\lambda_{m+1}\,\bar E[\bar\Lambda_{m+2,k} \mid X_m = x, X_{m+1}, \mathcal{Y}_k] \mid X_m = x, \mathcal{Y}_k\big]$
$= \bar E\Big[\dfrac{\phi(D^{-1}_{m+1}(Y_{m+1} - C_{m+1} X_{m+1}))}{|D_{m+1}|\,\phi(Y_{m+1})}\,\dfrac{\psi(B^{-1}_{m+1}(X_{m+1} - A_{m+1} x))}{|B_{m+1}|\,\psi(X_{m+1})}\, v_{m+1,k}(X_{m+1}) \mid X_m = x, \mathcal{Y}_k\Big]$.
Now recall that under $\bar P$, $X_{m+1}$ is normally distributed with density $\psi$ and independent of the $Y$ process. Therefore
$v_{m,k}(x) = \int_{\mathbb{R}^m}\dfrac{\phi(D^{-1}_{m+1}(Y_{m+1} - C_{m+1} z))}{|D_{m+1}|\,\phi(Y_{m+1})}\,\dfrac{\psi(B^{-1}_{m+1}(z - A_{m+1} x))}{|B_{m+1}|\,\psi(z)}\, v_{m+1,k}(z)\,\psi(z)\, dz$,
which is the result.
Problem 3. Assume that the state and observation processes are given by the vector dynamics
$X_{k+1} = A_{k+1} X_k + V_{k+1} + W_{k+1} \in \mathbb{R}^m$,
$Y_k = C_k X_k + W_k \in \mathbb{R}^m$.
$A_k$, $C_k$ are matrices of appropriate dimensions, $V_k$ and $W_k$ are normally distributed with means 0 and respective covariance matrices $Q_k$ and $R_k$, assumed nonsingular. Using measure change techniques derive recursions for the conditional mean and covariance matrix of the state $X$ given the observations $Y$.

Solution:
Notice that the noise $W$ appears in both the signal and observation processes. To circumvent this write
$X_{k+1} = (A_{k+1} - C_{k+1}) X_k + V_{k+1} + Y_{k+1}$, (0.11)
$Y_k = C_k X_k + W_k$. (0.12)
Then we assume that we start with a reference measure $\bar P$ under which $\{X_k\}$ and $\{Y_k\}$ are two independent sequences of random variables, normally distributed with means 0 and covariance matrices the identity matrix $I_{m \times m}$. With $\phi$ and $\psi$ denoting standard normal densities, define the positive mean-one martingale sequence
$\Lambda_k = \prod_{m=0}^{k}\dfrac{\phi(R^{-1}_{m+1}(Y_{m+1} - C_{m+1} X_{m+1}))}{|R_{m+1}|\,\phi(Y_{m+1})}\,\dfrac{\psi(Q^{-1}_{m+1}(X_{m+1} - \tilde A_{m+1} X_m - Y_{m+1}))}{|Q_{m+1}|\,\psi(X_{m+1})}$,
where $\tilde A_{m+1} = A_{m+1} - C_{m+1}$.
Now define the real-world measure $P$ by setting the restriction of $\dfrac{dP}{d\bar P}$ to $\mathcal{G}_k = \sigma\{X_0, \dots, X_k, Y_0, \dots, Y_k\}$ equal to $\Lambda_k$.
It can be shown that under the measure $P$ the dynamics (0.11) and (0.12) hold.
The remaining steps are standard and are left to the reader.
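For reference, the conditional mean and covariance recursions being asked for reduce, in the uncorrelated-noise case, to the familiar Kalman filter predict/update equations. The sketch below implements only that standard form (it does not handle the common noise $W$ appearing in both equations above, and the matrices and parameter values are hypothetical illustrations, not taken from the book's measure-change derivation):

```python
import numpy as np

def kalman_step(m_prev, P_prev, y, A, C, Q, R):
    """One predict/update step for x_{k+1} = A x_k + v, y_{k+1} = C x_{k+1} + w."""
    # predict
    m_pred = A @ m_prev
    P_pred = A @ P_prev @ A.T + Q
    # update
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    m_new = m_pred + K @ (y - C @ m_pred)
    P_new = (np.eye(len(m_prev)) - K @ C) @ P_pred
    return m_new, P_new

# toy example with a 2-dimensional state and illustrative matrices
rng = np.random.default_rng(10)
A = np.array([[0.9, 0.1], [0.0, 0.8]]); C = np.array([[1.0, 0.0]])
Q = 0.05 * np.eye(2); R = np.array([[0.2]])
x = np.zeros(2); m, P = np.zeros(2), np.eye(2)
for k in range(50):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)
    m, P = kalman_step(m, P, y, A, C, Q, R)
print("true state:", np.round(x, 3), " filtered mean:", np.round(m, 3))
```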
Problem 4. Let $m = n = 1$ in (5.8.1) and (5.8.2). The notation in Section 5.8 and Section 5.9 is used here. Let $\beta_t$ be the process defined as
$\beta_t = \int_0^t x_s^p\, ds$, $\quad p = 1, 2, \dots$.
Write
$\bar E[\bar\Lambda_t\,\beta_t\, I_{(x_t \in dx)} \mid \mathcal{Y}_t] = \gamma_t(x)\, dx$.
Show that at time $t$ the density $\gamma_t(x)$ is completely described by the $p + 3$ statistics $s_t(0), s_t(1), \dots, s_t(p)$, $\Sigma_t$, and $m_t$ as follows:
$\gamma_t(x) = \Big(\sum_{i=0}^p s_t(i)\, x^i\Big)\, q(x, t)$, (0.13)
where $s_0(i) = 0$, $i = 1, \dots, p$, and
$\dfrac{ds_t(p)}{dt} = p\big(A_t + \Sigma_t^{-1} B_t^2\big) s_t(p) + 1$,
$\dfrac{ds_t(p-1)}{dt} = (p - 1)\big(A_t + \Sigma_t^{-1} B_t^2\big) s_t(p-1) + p\, s_t(p)\,\Sigma_t^{-1} B_t^2\, m_t$,
$\dfrac{ds_t(i)}{dt} = i\big(A_t + \Sigma_t^{-1} B_t^2\big) s_t(i) + \tfrac{1}{2}(i+1)(i+2)\, s_t(i+2)\, B_t^2 + (i+1)\, s_t(i+1)\,\Sigma_t^{-1} B_t^2\, m_t$, $\quad i = 1, \dots, p - 2$,
$\dfrac{ds_t(0)}{dt} = B_t^2\, s_t(2) + \Sigma_t^{-1}\, s_t(1)\, m_t$. (0.14)

Solution:
By the Ito product rule,
$\bar\Lambda_t\,\beta_t\, g(x_t) = \int_0^t \bar\Lambda_s\,\beta_s\, g'(x_s) A_s x_s\, ds + \int_0^t \bar\Lambda_s\,\beta_s\, g'(x_s) B_s\, dw_s + \tfrac{1}{2}\int_0^t \bar\Lambda_s\,\beta_s\, g''(x_s) B_s^2\, ds + \int_0^t \bar\Lambda_s\,\beta_s\, g(x_s)\, x_s C_s D_s^{-2}\, dy_s + \int_0^t \bar\Lambda_s\, g(x_s)\, x_s^p\, ds$.
Conditioning on $\mathcal{Y}_t$,
$\int_{\mathbb{R}}\gamma_t(x) g(x)\, dx = \int_0^t\int_{\mathbb{R}}\gamma_s(x) g'(x) A_s x\, dx\, ds + \tfrac{1}{2}\int_0^t\int_{\mathbb{R}}\gamma_s(x) g''(x) B_s^2\, dx\, ds + \int_0^t\int_{\mathbb{R}}\gamma_s(x) g(x)\, x C_s D_s^{-2}\, dx\, dy_s + \int_0^t\int_{\mathbb{R}} x^p g(x)\, q_s(x)\, dx\, ds$,
where $q_s(\cdot)$ is the unnormalized conditional density of $x_s$. Integrating by parts in $x$, we see that $\gamma_t(\cdot)$ satisfies the stochastic partial differential equation
$\gamma_t(x) = -\int_0^t \dfrac{\partial}{\partial x}\big(\gamma_s(x) A_s x\big)\, ds + \tfrac{1}{2}\int_0^t \dfrac{\partial^2}{\partial x^2}\big(\gamma_s(x)\big) B_s^2\, ds + \int_0^t \gamma_s(x)\, x C_s D_s^{-2}\, dy_s + \int_0^t x^p\, q_s(x)\, ds$.
It can be verified that (0.13) is a solution of the above equation if the time-varying coefficients $s_t(0), \dots, s_t(p)$ satisfy the ordinary differential equations (0.14).
Problem 5. Give a detailed proof of Lemma 5.7.1.

Solution:
Let us recall
$\bar\lambda_{k+1} = \dfrac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} X_{k+1}))}{|D_{k+1}|\,\phi(Y_{k+1})}\,\dfrac{\psi(B^{-1}_{k+1}(X_{k+1} - A_{k+1} X_k))}{|B_{k+1}|\,\psi(X_{k+1})}$.
Suppose $f, g : \mathbb{R}^m \to \mathbb{R}$ are test functions (i.e. measurable functions with compact support). Then with $E$ (resp. $\bar E$) denoting expectation under $P$ (resp. $\bar P$) and using Bayes' Theorem,
$E[f(V_{k+1}) g(W_{k+1}) \mid \mathcal{G}_k] = \dfrac{\bar E[\bar\lambda_{k+1} f(V_{k+1}) g(W_{k+1}) \mid \mathcal{G}_k]}{\bar E[\bar\lambda_{k+1} \mid \mathcal{G}_k]} = \bar E[\bar\lambda_{k+1} f(V_{k+1}) g(W_{k+1}) \mid \mathcal{G}_k]$.
Consequently
$E[f(V_{k+1}) g(W_{k+1}) \mid \mathcal{G}_k]$
$= \bar E\Big[\dfrac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} X_{k+1}))}{|D_{k+1}|\,\phi(Y_{k+1})}\,\dfrac{\psi(B^{-1}_{k+1}(X_{k+1} - A_{k+1} X_k))}{|B_{k+1}|\,\psi(X_{k+1})}\, f\big(B^{-1}_{k+1}(X_{k+1} - A_{k+1} X_k)\big)\, g\big(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} X_{k+1})\big) \mid \mathcal{G}_k\Big]$
$= \bar E\Big[\dfrac{\psi(B^{-1}_{k+1}(X_{k+1} - A_{k+1} X_k))}{|B_{k+1}|\,\psi(X_{k+1})}\, f\big(B^{-1}_{k+1}(X_{k+1} - A_{k+1} X_k)\big)\,\bar E\Big[\dfrac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} X_{k+1}))}{|D_{k+1}|\,\phi(Y_{k+1})}\, g\big(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} X_{k+1})\big) \mid \mathcal{G}_k, X_{k+1}\Big] \mid \mathcal{G}_k\Big]$.
Now
$\bar E\Big[\dfrac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} X_{k+1}))}{|D_{k+1}|\,\phi(Y_{k+1})}\, g\big(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} X_{k+1})\big) \mid \mathcal{G}_k, X_{k+1}\Big]$
$= \int_{\mathbb{R}^m}\dfrac{\phi(D^{-1}_{k+1}(y - C_{k+1} X_{k+1}))}{|D_{k+1}|\,\phi(y)}\, g\big(D^{-1}_{k+1}(y - C_{k+1} X_{k+1})\big)\,\phi(y)\, dy = \int_{\mathbb{R}^m}\phi(u)\, g(u)\, du$,
after an appropriate change of variable. Similar calculations show that
$E[f(V_{k+1}) g(W_{k+1}) \mid \mathcal{G}_k] = \int_{\mathbb{R}^m}\psi(u)\, f(u)\, du\,\int_{\mathbb{R}^m}\phi(u)\, g(u)\, du$,
which finishes the proof.
Problem 6. Prove (5.7.5), (5.7.6), (5.7.7) and (5.7.3).

Solution:
We have to show that
$\alpha_{k+1}(x) = \Phi_{k+1}(x, Y_{k+1})\int_{\mathbb{R}^m}\psi(x, z)\,\alpha_k(z)\, dz$. (0.15)
Here
$\Phi_{k+1}(x, Y_{k+1}) = \dfrac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} x))}{|B_{k+1}|\,|D_{k+1}|\,\phi(Y_{k+1})}$ (0.16)
and
$\psi(x, z) = \psi\big(B^{-1}_{k+1}(x - A_{k+1} z)\big)$. (0.17)
For any test function $g$,
$\int_{\mathbb{R}^m} g(x)\,\alpha_{k+1}(x)\, dx = \bar E[\bar\Lambda_{k+1}\, g(X_{k+1}) \mid \mathcal{Y}_{k+1}] = \bar E[\bar\Lambda_k\,\bar\lambda_{k+1}\, g(X_{k+1}) \mid \mathcal{Y}_{k+1}]$
$= \bar E\Big[\bar\Lambda_k\,\dfrac{\psi(B^{-1}_{k+1}(X_{k+1} - A_{k+1} X_k))\,\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} X_{k+1}))}{|B_{k+1}|\,|D_{k+1}|\,\psi(X_{k+1})\,\phi(Y_{k+1})}\, g(X_{k+1}) \mid \mathcal{Y}_{k+1}\Big]$
$= \dfrac{1}{|B_{k+1}|\,|D_{k+1}|\,\phi(Y_{k+1})}\,\bar E\Big[\bar\Lambda_k\int_{\mathbb{R}^m}\psi(B^{-1}_{k+1}(x - A_{k+1} X_k))\,\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} x))\,\dfrac{\psi(x)}{\psi(x)}\, g(x)\, dx \mid \mathcal{Y}_{k+1}\Big]$.
The last equality follows from the fact that $X_{k+1}$ has density $\psi$ and is independent of everything else under $\bar P$. Also, note that given $Y_{k+1}$ we condition only on $\mathcal{Y}_k$ to get
$\int_{\mathbb{R}^m} g(x)\,\alpha_{k+1}(x)\, dx = \dfrac{1}{|B_{k+1}|\,|D_{k+1}|\,\phi(Y_{k+1})}\int_{\mathbb{R}^m}\int_{\mathbb{R}^m}\psi(B^{-1}_{k+1}(x - A_{k+1} z))\,\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1} x))\, g(x)\,\alpha_k(z)\, dx\, dz$.
This holds for all test functions $g$, so we can conclude that (0.15) holds.
Let $S_k = \sum_{m=1}^k X_{m-1} X_{m-1}'$, $S^{ij}_k = \sum_{m=1}^k\langle X_{m-1}, e_i\rangle\langle X_{m-1}, e_j\rangle$ and write
$\gamma^{ij}_k(x)\, dx = \bar E[\bar\Lambda_k\, S^{ij}_k\, I(X_k \in dx) \mid \mathcal{Y}_k]$, so that
$\bar E[\bar\Lambda_k\, S^{ij}_k\, g(X_k) \mid \mathcal{Y}_k] = \int_{\mathbb{R}^m}\gamma^{ij}_k(x)\, g(x)\, dx$.
We have to show that
$\gamma^{ij}_{k+1}(x) = \Phi_{k+1}(x, Y_{k+1})\Big(\int_{\mathbb{R}^m}\psi(x, z)\,\gamma^{ij}_k(z)\, dz + \int_{\mathbb{R}^m}\langle z, e_i\rangle\langle z, e_j\rangle\,\alpha_k(z)\,\psi(x, z)\, dz\Big)$.
Note that $S^{ij}_{k+1} = S^{ij}_k + \langle X_k, e_i\rangle\langle X_k, e_j\rangle$, therefore
$\int_{\mathbb{R}^m}\gamma^{ij}_{k+1}(x)\, g(x)\, dx = \bar E[\bar\Lambda_{k+1}\, S^{ij}_k\, g(X_{k+1}) \mid \mathcal{Y}_{k+1}] + \bar E[\bar\Lambda_{k+1}\,\langle X_k, e_i\rangle\langle X_k, e_j\rangle\, g(X_{k+1}) \mid \mathcal{Y}_{k+1}]$
$= \bar E[\bar\lambda_{k+1}\,\bar\Lambda_k\, S^{ij}_k\, g(X_{k+1}) \mid \mathcal{Y}_{k+1}] + \bar E[\bar\lambda_{k+1}\,\bar\Lambda_k\,\langle X_k, e_i\rangle\langle X_k, e_j\rangle\, g(X_{k+1}) \mid \mathcal{Y}_{k+1}]$.
The remaining steps are the by now familiar calculations, so they are skipped.
Problem 11. Give the proof of Theorem 5.11.4.

Solution:
Here the dynamics are given by
$dx_t = g(x_t)\, dt + s(x_t)\, dw_t$,
$dy_t = h(x_t)\, dt + \sigma(y_t)\, dW_t$.
We have to show that if $\Phi \in C^2(\mathbb{R}^d)$ is a real-valued function with compact support, then
$\rho(\Phi)_t = \rho(\Phi)_0 + \int_0^t \rho(A\Phi)_s\, ds + \int_0^t \rho\big((\nabla\Phi(x_s))'\, s(x_s)\,\sigma(y_s)^{-2}\, h(x_s)\big)_s\, ds + \int_0^t \big(\sigma(y_s)^{-1}\rho(\Phi h)_s\big)'\,\sigma(y_s)^{-1}\, dy_s$,
where
$A\Phi(x) = \dfrac{1}{2}\sum_{i,j=1}^d (s s')_{ij}(x)\,\dfrac{\partial^2\Phi(x)}{\partial x_i\partial x_j} + \sum_{i=1}^d g_i(x)\,\dfrac{\partial\Phi(x)}{\partial x_i}$.
Using the vector Ito rule we establish
$\Phi(x_t) = \Phi(x_0) + \int_0^t A\Phi(x_s)\, ds + \int_0^t (\nabla\Phi(x_s))'\, s(x_s)\, dw_s$. (0.18)
Recall that
$\Lambda_t = \exp\Big(\int_0^t \big(\sigma(y_s)^{-1} h(x_s)\big)'\,\sigma(y_s)^{-1}\, dy_s - \dfrac{1}{2}\int_0^t \big|\sigma(y_s)^{-1} h(x_s)\big|^2\, ds\Big)$,
and
$\Lambda_t = 1 + \int_0^t \Lambda_s\,\big(\sigma(y_s)^{-1} h(x_s)\big)'\,\sigma(y_s)^{-1}\, dy_s$.
Using the Ito product rule,
$\Lambda_t\,\Phi(x_t) = \Phi(x_0) + \int_0^t \Lambda_s\, A\Phi(x_s)\, ds + \int_0^t \Lambda_s\,(\nabla\Phi(x_s))'\, s(x_s)\, dw_s + \int_0^t \Lambda_s\,\Phi(x_s)\,\big(\sigma(y_s)^{-1} h(x_s)\big)'\,\sigma(y_s)^{-1}\, dy_s + [\Lambda, \Phi]_t$. (0.19)
But
$[\Lambda, \Phi]_t = \Big[\int_0^t \Lambda_s\,\big(\sigma(y_s)^{-1} h(x_s)\big)'\,\sigma(y_s)^{-1}\, dy_s,\ \int_0^t (\nabla\Phi(x_s))'\, s(x_s)\, dw_s\Big] = \int_0^t \Lambda_s\,(\nabla\Phi(x_s))'\, s(x_s)\,\sigma(y_s)^{-2}\, h(x_s)\, ds$.
Conditioning both sides of (0.19) on $\mathcal{Y}_t$ gives the result.