Stochastic Calculus for Finance


Solutions to Exercises
Chapter 1
Exercise 1.1: Show that for each $n$, the random variables $K(1), \dots, K(n)$ are independent.

Solution: Since the $K(r)$ have discrete distributions, the independence of $K(1), \dots, K(n)$ means that for each sequence $V_1, \dots, V_n$ with $V_i \in \{U, D\}$ we have
$$P(K(1) = V_1, K(2) = V_2, \dots, K(n) = V_n) = P(K(1) = V_1) \cdots P(K(n) = V_n).$$
Fix a sequence $V_1, \dots, V_n$. Start by splitting the interval $[0,1]$ into two intervals $I_U$, $I_D$ of length $\tfrac12$, $I_U = \{\omega : K(1)(\omega) = U\}$, $I_D = \{\omega : K(1)(\omega) = D\}$. Repeat the splitting for each interval at each stage. At stage two we have $I_U = I_{UU} \cup I_{UD}$, $I_D = I_{DU} \cup I_{DD}$, and the variable $K(2)$ is constant on each $I_{\varepsilon_1\varepsilon_2}$. For example, $I_{UD} = \{\omega : K(1) = U,\ K(2) = D\}$. Using this notation we have
$$\{K(1) = V_1\} = I_{V_1}, \quad \{K(1) = V_1, K(2) = V_2\} = I_{V_1 V_2}, \quad \dots, \quad \{K(1) = V_1, \dots, K(n) = V_n\} = I_{V_1 \dots V_n}.$$
The Lebesgue measure of $I_{V_1 \dots V_n}$ is $\tfrac{1}{2^n}$, so that $P(K(1) = V_1, \dots, K(n) = V_n) = \tfrac{1}{2^n}$. From the definition of the $K(r)$ it follows directly that $P(K(1) = V_1)\cdots P(K(n) = V_n) = \tfrac{1}{2^n}$.
Exercise 1.2: Redesign the random variables $K(n)$ so that $P(K(n) = U) = p \in (0,1)$, arbitrary.

Solution: Given the probability space $(\Omega, \mathcal{F}, P) = ([0,1], \mathcal{B}([0,1]), m)$, where $m$ denotes Lebesgue measure, we will define a sequence of random variables $K(n)$, $n = 1, 2, \dots$, on $\Omega$.

First split $[0,1]$ into two subintervals: $[0,1] = I_U \cup I_D$, where $I_U$, $I_D$ are disjoint intervals with lengths $|I_U| = p$, $|I_D| = q$, $p + q = 1$, with $I_U$ to the left of $I_D$. Now set
$$K(1, \omega) = \begin{cases} U & \text{if } \omega \in I_U \\ D & \text{if } \omega \in I_D \end{cases}.$$
Clearly $P(K(1) = U) = p$, $P(K(1) = D) = q$. Repeat the procedure separately on $I_U$ and $I_D$, splitting each into two subintervals in the proportion $p$ to $q$. Then $I_U = I_{UU} \cup I_{UD}$, $I_D = I_{DU} \cup I_{DD}$, with $|I_{UU}| = p^2$, $|I_{UD}| = pq$, $|I_{DU}| = qp$, $|I_{DD}| = q^2$. Repeating this recursive construction $n$ times we obtain intervals of the form $I_{\varepsilon_1, \dots, \varepsilon_r}$, with each $\varepsilon_i$ either $U$ or $D$, and with length $p^l q^{r-l}$, where $l = \#\{i : \varepsilon_i = U\}$.

Again set
$$K(r, \omega) = \begin{cases} U & \text{if } \omega \in I_{\varepsilon_1, \dots, \varepsilon_{r-1}, U} \\ D & \text{if } \omega \in I_{\varepsilon_1, \dots, \varepsilon_{r-1}, D} \end{cases}.$$
If the value $U$ appears $l$ times in a sequence $\varepsilon_1, \dots, \varepsilon_{r-1}$, then $|I_{\varepsilon_1, \dots, \varepsilon_{r-1}, U}| = p \cdot p^l q^{r-1-l}$. There are $\binom{r-1}{l}$ different sequences $\varepsilon_1, \dots, \varepsilon_{r-1}$ having $U$ at exactly $l$ places. Then for $A_r = \{K(r) = U\}$ we find
$$P(A_r) = P(K(r) = U) = p \sum_{l=0}^{r-1} \binom{r-1}{l} p^l q^{r-1-l} = p(p+q)^{r-1} = p.$$
As a consequence, $P(K(r) = D) = q$. The proof that the variables $K(1), \dots, K(n)$ are independent follows as in Exercise 1.1.
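As an illustration (not part of the original solution), the recursive splitting of Exercises 1.1 and 1.2 can be simulated directly: a single draw $\omega \in [0,1]$ determines the whole sequence $K(1), \dots, K(n)$. The following Python sketch uses assumed example values of $p$ and $n$; with $p = \tfrac12$ it reproduces the construction of Exercise 1.1.

```python
# Sketch: map a sample point omega in [0,1] to K(1), ..., K(n) by nested splitting.
import random

def K_sequence(omega, n, p=0.5):
    """Return the values K(1), ..., K(n) at the sample point omega."""
    a, b = 0.0, 1.0                # current interval I_{eps_1, ..., eps_{r-1}}
    out = []
    for _ in range(n):
        split = a + p * (b - a)    # the left part has relative length p
        if omega < split:          # omega lies in I_{..., U}
            out.append('U')
            b = split
        else:                      # omega lies in I_{..., D}
            out.append('D')
            a = split
    return out

# Empirical check that P(K(r) = U) is approximately p for every r:
random.seed(0)
n, p, trials = 5, 0.3, 100_000
counts = [0] * n
for _ in range(trials):
    seq = K_sequence(random.random(), n, p)
    for r, v in enumerate(seq):
        counts[r] += (v == 'U')
print([c / trials for c in counts])   # each entry should be close to 0.3
```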
Exercise 1.3: Find the filtration in $\Omega = [0,1]$ generated by the process $X(n, \omega) = 2\omega\, 1_{[0, 1-\frac{1}{n}]}(\omega)$.

Solution: Since $X(1)(\omega) = 0$ for all $\omega \in [0,1]$, we have $\mathcal{F}^{X(1)} = \{\emptyset, [0,1]\}$. For any $B \subset \mathbb{R}$ and $\alpha \in \mathbb{R}$, let $\alpha B = \{\alpha\omega : \omega \in B\}$. Now for $k > 1$ and $B \in \mathcal{B}(\mathbb{R})$,
$$X(k)^{-1}(B) = \begin{cases} (\tfrac12 B) \cap [0, 1-\tfrac1k] & \text{if } 0 \notin B \\ \big((\tfrac12 B) \cap [0, 1-\tfrac1k]\big) \cup (1-\tfrac1k, 1] & \text{if } 0 \in B. \end{cases}$$
Hence $\mathcal{F}^{X(k)} = \{A \cup E : A \in \mathcal{B}((0, 1-\tfrac1k]),\ E \in \{\emptyset, \{0\} \cup (1-\tfrac1k, 1]\}\}$.

Suppose $1 \le k \le n$. If $C \in \mathcal{F}^{X(k)}$ and $C \in \mathcal{B}((0, 1-\tfrac1k])$ then $C \in \mathcal{B}((0, 1-\tfrac1n]) \subset \mathcal{F}^{X(n)}$. If $C = A \cup \{0\} \cup (1-\tfrac1k, 1]$ with $A \in \mathcal{B}((0, 1-\tfrac1k])$, then $C = \big(A \cup (1-\tfrac1k, 1-\tfrac1n]\big) \cup \{0\} \cup (1-\tfrac1n, 1] \in \mathcal{F}^{X(n)}$, because $A \cup (1-\tfrac1k, 1-\tfrac1n] \in \mathcal{B}((0, 1-\tfrac1n])$.

In consequence $\mathcal{F}^{X(k)} \subset \mathcal{F}^{X(n)}$ for all $k \le n$. This implies $\mathcal{F}^X_n = \mathcal{F}^{X(n)}$.
Exercise 1.4: Working on $\Omega = [0,1]$, find (by means of concrete formulae and sketching the graphs) the martingale $E(Y|\mathcal{F}_n)$, where $Y(\omega) = \omega^2$ and $\mathcal{F}_n$ is generated by $X(n, \omega) = 2\omega\, 1_{[0, 1-\frac{1}{n})}(\omega)$ (see Exercise 1.3).

Solution: According to Exercise 1.3 the natural filtration $\mathcal{F}_n$ of $X$ has the form $\mathcal{F}_n = \mathcal{F}^X_n = \mathcal{F}^{X(n)}$, so
$$\mathcal{F}_n = \{A \cup E : A \in \mathcal{B}((0, 1-\tfrac1n]),\ E \in \{\emptyset, \{0\} \cup (1-\tfrac1n, 1]\}\}.$$
Hence the restriction of $E(Y|\mathcal{F}_n)$ to the interval $(0, 1-\tfrac1n]$ must be a $\mathcal{B}((0, 1-\tfrac1n])$-measurable variable, and $E(Y|\mathcal{F}_n) = Y$ on $(0, 1-\tfrac1n]$ satisfies Def. 1.9 for $A \subset (0, 1-\tfrac1n]$. The restriction of $E(Y|\mathcal{F}_n)$ to the set $\{0\} \cup (1-\tfrac1n, 1]$ must be measurable with respect to the $\sigma$-field $\{\emptyset, \{0\} \cup (1-\tfrac1n, 1]\}$. Thus $E(Y|\mathcal{F}_n)$ has to be a constant function, $E(Y|\mathcal{F}_n) = c$, on $\{0\} \cup (1-\tfrac1n, 1]$. Condition 2 of Def. 1.9 gives $\int_{(1-\frac1n, 1]} c\, dP = \int_{(1-\frac1n, 1]} \omega^2\, dP$. It follows that
$$E(Y|\mathcal{F}_n)(\omega) = c = 1 - \frac{1}{n} + \frac{1}{3n^2} \quad \text{for } \omega \in (1-\tfrac1n, 1].$$
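As a quick numerical illustration (not part of the original solution), the constant value $c$ can be checked against the average of $\omega^2$ over $(1-\tfrac1n, 1]$; the value $n = 4$ below is an assumed example.

```python
# Sketch: check the constant value of E(Y|F_n) on (1 - 1/n, 1] for Y(omega) = omega^2.
n = 4
a = 1 - 1 / n
c_numeric = ((1**3 - a**3) / 3) / (1 - a)     # average of omega^2 over (a, 1]
c_formula = 1 - 1 / n + 1 / (3 * n**2)
print(c_numeric, c_formula)                   # both equal 0.7708333...
```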
Exercise 1.5: Show that the expectation of a martingale is constant in time. Find an example showing that constant expectation does not imply the martingale property.

Solution: Let $\mathcal{G}$ be the trivial $\sigma$-algebra, consisting of the $P$-null sets and their complements. For every integrable random variable $X$, $E(X|\mathcal{G}) = E(X)$. If $M$ is a martingale, then $E(M(n+1)|\mathcal{F}_n) = M(n)$ for all $n \ge 0$. Using the tower property we obtain
$$E(M(n)) = E(M(n)|\mathcal{G}) = E(E(M(n+1)|\mathcal{F}_n)|\mathcal{G}) = E(M(n+1)|\mathcal{G}) = E(M(n+1)).$$
If $X(n)$, $n \ge 0$, is any sequence of integrable random variables, then for the sequence $\widetilde{X}(n) = X(n) - E(X(n))$ the property $E(\widetilde{X}(n)) = E(X(n) - E(X(n))) = E(X(n)) - E(X(n)) = 0$ holds for all $n$. Such a sequence has constant expectation but in general is not a martingale: for example, if $Z$ is integrable, not a.s. constant, with $E(Z) = 0$, then $X(n) = (-1)^n Z$ has $E(X(n)) = 0$ for all $n$, yet $E(X(n+1)|\sigma(Z)) = -X(n) \ne X(n)$.
Exercise 1.6: Show that the martingale property is preserved under linear combinations with constant coefficients and adding a constant.

Solution: Let $X, Y$ be martingales with respect to the filtration $\mathcal{F}_n$ and fix $\alpha, \beta \in \mathbb{R}$. Define $Z = X + Y$, $W = \alpha X$, $U = X + \beta$. Then $E(|Z(n)|) = E(|X(n) + Y(n)|) \le E(|X(n)|) + E(|Y(n)|) < +\infty$ and $E(|W(n)|) = E(|\alpha X(n)|) = |\alpha| E(|X(n)|) < +\infty$. Moreover $Z(n)$ and $W(n)$ are $\mathcal{F}_n$-measurable and have finite expectation. Finally, the linearity of conditional expectation gives
$$E(Z(n+1)|\mathcal{F}_n) = E(X(n+1) + Y(n+1)|\mathcal{F}_n) = E(X(n+1)|\mathcal{F}_n) + E(Y(n+1)|\mathcal{F}_n) = X(n) + Y(n) = Z(n),$$
$$E(W(n+1)|\mathcal{F}_n) = E(\alpha X(n+1)|\mathcal{F}_n) = \alpha E(X(n+1)|\mathcal{F}_n) = \alpha X(n) = W(n).$$
The process $U$ is the special case of $Z$ obtained when $Y(n) = \beta$ for all $n$.
Exercise 1.7: Prove that if $M$ is a martingale, then for $m < n$, $M(m) = E(M(n)|\mathcal{F}_m)$.

Solution: Using the tower property $n - m - 1$ times we obtain
$$M(m) = E(M(m+1)|\mathcal{F}_m) = E(E(M(m+2)|\mathcal{F}_{m+1})|\mathcal{F}_m) = E(M(m+2)|\mathcal{F}_m) = \dots = E(M(n)|\mathcal{F}_m).$$
Exercise 1.8: Let $M$ be a martingale with respect to the filtration generated by $L(n)$ (as defined for the random walk), and assume for simplicity $M(0) = 0$. Show that there exists a predictable process $H$ such that $M(n) = \sum_{i=1}^n H(i)L(i)$ (i.e. $M(n) = \sum_{i=1}^n H(i)[Z(i) - Z(i-1)]$, where $Z(i) = \sum_{j=1}^i L(j)$). (We are justified in calling this result a representation theorem: each martingale is a discrete stochastic integral.)

Solution: Here the crucial point is that the random variables $L(n)$ have discrete distributions and the process $(M(n))_{n \ge 0}$ is adapted to the filtration $\mathcal{F}^L_n$, which means that the $M(n)$, $n \ge 0$, also have discrete distributions and $M(n)$ is constant on the sets of the partition $\mathcal{P}(L_1, \dots, L_n)$. From the formula $M(n) = \sum_{i=1}^n H(i)L(i)$ we obtain $M(n+1) - M(n) = H(n+1)L(n+1)$. Since $L^2(k) = 1$ a.s. for all $k \ge 1$, we define $H(n+1) = [M(n+1) - M(n)]L(n+1)$ for $n \ge 0$. To prove that $(H(n+1))_{n \ge 0}$ is a predictable process we have to verify that $H(n+1)$ is $\mathcal{F}^L_n$-measurable. This is equivalent to the condition that $H(n+1)$ is constant on the sets of the partition $\mathcal{P}(L_1, \dots, L_n)$.

Write $A_{\varepsilon_1, \dots, \varepsilon_k} = \{\omega : L_1(\omega) = \varepsilon_1, \dots, L_k(\omega) = \varepsilon_k\}$, $\varepsilon_j \in \{-1, 1\}$. Then $\mathcal{P}(L_1, \dots, L_k) = \{A_{\varepsilon_1, \dots, \varepsilon_k} : \varepsilon_j \in \{-1, 1\}\}$ and $A_{\varepsilon_1, \dots, \varepsilon_k, 1} \cup A_{\varepsilon_1, \dots, \varepsilon_k, -1} = A_{\varepsilon_1, \dots, \varepsilon_k}$. Moreover, $P(A_{\varepsilon_1, \dots, \varepsilon_k}) = \tfrac{1}{2^k}$, because the $L_j$ are i.i.d. random variables. Fix $n$ and a set $A_{\varepsilon_1, \dots, \varepsilon_n}$. Next, let
$$\alpha_0 = M(n)(\omega) \text{ for } \omega \in A_{\varepsilon_1, \dots, \varepsilon_n}, \quad \alpha_1 = M(n+1)(\omega) \text{ for } \omega \in A_{\varepsilon_1, \dots, \varepsilon_n, 1}, \quad \alpha_{-1} = M(n+1)(\omega) \text{ for } \omega \in A_{\varepsilon_1, \dots, \varepsilon_n, -1}.$$
Since $M$ is a martingale,
$$\int_{A_{\varepsilon_1, \dots, \varepsilon_n}} M(n)\, dP = \int_{A_{\varepsilon_1, \dots, \varepsilon_n}} M(n+1)\, dP,$$
and therefore $\alpha_0 P(A_{\varepsilon_1, \dots, \varepsilon_n}) = \alpha_1 P(A_{\varepsilon_1, \dots, \varepsilon_n, 1}) + \alpha_{-1} P(A_{\varepsilon_1, \dots, \varepsilon_n, -1})$. From this and the relation $2P(A_{\varepsilon_1, \dots, \varepsilon_n, 1}) = 2P(A_{\varepsilon_1, \dots, \varepsilon_n, -1}) = P(A_{\varepsilon_1, \dots, \varepsilon_n})$ it follows that $2\alpha_0 = \alpha_1 + \alpha_{-1}$, or equivalently $-(\alpha_{-1} - \alpha_0) = \alpha_1 - \alpha_0$. Using this equality we verify finally that
$$H(n+1)1_{A_{\varepsilon_1, \dots, \varepsilon_n}} = [M(n+1) - M(n)]L(n+1)1_{A_{\varepsilon_1, \dots, \varepsilon_n}} = (-1)(\alpha_{-1} - \alpha_0)1_{A_{\varepsilon_1, \dots, \varepsilon_n, -1}} + (\alpha_1 - \alpha_0)1_{A_{\varepsilon_1, \dots, \varepsilon_n, 1}} = (\alpha_1 - \alpha_0)1_{A_{\varepsilon_1, \dots, \varepsilon_n}},$$
so that $H(n+1)$ is constant on $A_{\varepsilon_1, \dots, \varepsilon_n}$.
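A path-wise illustration (not part of the original solution): for the martingale $M(n) = Z(n)^2 - n$ (see Exercises 1.9 and 1.13) the integrand $H(n+1) = [M(n+1) - M(n)]L(n+1)$ equals $2Z(n)$, so it is indeed predictable, and the partial sums of $H(i)L(i)$ reproduce $M(n)$ exactly. The following Python sketch checks this along a simulated path.

```python
# Sketch: verify the martingale representation of Exercise 1.8 for M(n) = Z(n)^2 - n.
import random

random.seed(0)
N = 20
L = [random.choice((-1, 1)) for _ in range(N)]              # L(1), ..., L(N)
Z = [0]
for step in L:
    Z.append(Z[-1] + step)                                  # Z(0), ..., Z(N)
M = [Z[n] ** 2 - n for n in range(N + 1)]                   # martingale with M(0) = 0

H = [(M[n + 1] - M[n]) * L[n] for n in range(N)]            # H(n+1) = [M(n+1)-M(n)]L(n+1)

reconstructed = [0]
for n in range(N):
    reconstructed.append(reconstructed[-1] + H[n] * L[n])   # sum_{i <= n+1} H(i)L(i)

print(M == reconstructed)                                   # True
print(all(H[n] == 2 * Z[n] for n in range(N)))              # True: H(n+1) = 2 Z(n) is predictable
```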
Exercise 1.9: Show that the process $Z^2(n)$, the square of a random walk, is not a martingale, by checking that $E(Z^2(n+1)|\mathcal{F}_n) = Z^2(n) + 1$.

Solution: Assume, as before, that $L(k)$, $k = 1, 2, \dots$, are the steps of a symmetric random walk, $L(k) \in \{-1, 1\}$, and set $L(0) = 0$. The variables $(L(k))_{k \ge 0}$ are independent and $Z(k+1) = Z(k) + L(k+1)$, $k \ge 0$, $\mathcal{F}_k = \mathcal{F}^L_k$. Then $E(L(k)) = 0$, $E(L^2(k)) = 1$ for $k \ge 1$; the variables $Z(k)$, $Z^2(k)$ are $\mathcal{F}_k$-measurable and the variables $L(k+1)$, $L^2(k+1)$ are independent of $\mathcal{F}_k$. Using the properties of conditional expectation we have
$$E(Z^2(n+1)|\mathcal{F}_n) = E((Z(n) + L(n+1))^2|\mathcal{F}_n)$$
$$= E(Z^2(n)|\mathcal{F}_n) + 2Z(n)E(L(n+1)|\mathcal{F}_n) + E(L^2(n+1)|\mathcal{F}_n) \quad \text{(linearity, measurability)}$$
$$= Z^2(n) + 2Z(n)E(L(n+1)) + E(L^2(n+1)) \quad \text{(measurability, independence)}$$
$$= Z^2(n) + 1 \quad \text{for } n \ge 0.$$
Exercise 1.10: Show that if $X$ is a submartingale, then its expectations increase with $n$:
$$E(X(0)) \le E(X(1)) \le E(X(2)) \le \dots,$$
and if $X$ is a supermartingale, then its expectations decrease as $n$ increases:
$$E(X(0)) \ge E(X(1)) \ge E(X(2)) \ge \dots.$$

Solution: Since $X$ is a submartingale, $X(n) \le E(X(n+1)|\mathcal{F}_n)$ for all $n$. Taking expectations on both sides of this inequality we obtain
$$E(X(n)) \le E(E(X(n+1)|\mathcal{F}_n)) = E(X(n+1)) \quad \text{for all } n.$$
For a supermartingale proceed similarly.
Exercise 1.11: Let $X(n)$ be a martingale (submartingale, supermartingale). For a fixed $m$ consider the sequence $X^*(k) = X(m+k) - X(m)$, $k \ge 0$. Show that $X^*$ is a martingale (submartingale, supermartingale) relative to the filtration $\mathcal{F}^*_k = \mathcal{F}_{m+k}$.

Solution: Let $X$ be a martingale (submartingale, supermartingale). Then $X(m)$ is an $\mathcal{F}_{m+k}$-measurable variable for all $m, k$. We have
$$E(X^*(k+1)|\mathcal{F}^*_k) = E(X(m+k+1) - X(m)|\mathcal{F}_{m+k}) = E(X(m+k+1)|\mathcal{F}_{m+k}) - E(X(m)|\mathcal{F}_{m+k}) = (\ge, \le)\ X(m+k) - X(m) = X^*(k)$$
for all $k$.
Exercise 1.12: Prove the Doob decomposition for submartingales from first principles: if $Y(n)$ is a submartingale with respect to some filtration, then there exist, for the same filtration, a martingale $M(n)$ and a predictable, increasing process $A(n)$ with $M(0) = A(0) = 0$ such that
$$Y(n) = Y(0) + M(n) + A(n).$$
This decomposition is unique.

Solution: The process $Z(n) = Y(n) - Y(0)$, $n \ge 0$, is a submartingale with $Z(0) = 0$. Therefore we may assume $Y(0) = 0$ without loss of generality. We prove the theorem using the principle of induction. For $n = 1$, the decomposition formula would imply the relation
$$E(Y(1)|\mathcal{F}_0) = E(M(1)|\mathcal{F}_0) + E(A(1)|\mathcal{F}_0).$$
If this is to hold with $M$ a martingale and $A$ predictable, we must set
$$A(1) := E(Y(1)|\mathcal{F}_0) - M(0),$$
which shows that $A(1)$ is $\mathcal{F}_0$-measurable. To arrive at the decomposition formula we now define
$$M(1) := Y(1) - A(1).$$
$M(1)$ is $\mathcal{F}_1$-measurable because $Y(1)$ and $A(1)$ are. Moreover,
$$E(M(1)|\mathcal{F}_0) = E(Y(1)|\mathcal{F}_0) - E(A(1)|\mathcal{F}_0) = E(Y(1)|\mathcal{F}_0) - A(1) = M(0),$$
which completes the initial induction step.

Assume now that we have defined an $\mathcal{F}_k$-adapted martingale $M(k)$ and a predictable, increasing process $A(k)$, $k \le n$, such that $A(k)$ and $M(k)$ satisfy the decomposition formula for $Y(k)$, for all $k \le n$. Once again the decomposition formula for $k = n+1$ gives
$$E(A(n+1)|\mathcal{F}_n) = E(Y(n+1)|\mathcal{F}_n) - E(M(n+1)|\mathcal{F}_n).$$
Hence it is necessary to define
$$A(n+1) := E(Y(n+1)|\mathcal{F}_n) - M(n). \tag{0.1}$$
Having $A(n+1)$, to preserve the decomposition formula we set
$$M(n+1) := Y(n+1) - A(n+1). \tag{0.2}$$
Now we verify that $M(k)$, $A(k)$, $k \le n+1$, satisfy the conditions of the theorem. From (0.1), $A(n+1)$ is $\mathcal{F}_n$-measurable, because $M(n)$ is $\mathcal{F}_n$-measurable. Next, from (0.1) and the decomposition formula for $n$ we have
$$A(n+1) = E(Y(n+1)|\mathcal{F}_n) - M(n) = [E(Y(n+1)|\mathcal{F}_n) - Y(n)] + A(n) \ge A(n),$$
because $Y$ is a submartingale. Then $A(k)$ is an increasing, predictable process for $k \le n+1$.

From (0.2), $M(n+1)$ is $\mathcal{F}_{n+1}$-measurable and, since $A(n+1)$ is $\mathcal{F}_n$-measurable,
$$E(M(n+1)|\mathcal{F}_n) = E(Y(n+1)|\mathcal{F}_n) - E(A(n+1)|\mathcal{F}_n) = E(Y(n+1)|\mathcal{F}_n) - A(n+1) = M(n), \quad \text{by (0.1)}.$$
Thus $M(k)$, $k \le n+1$, is a martingale. By construction, the processes $M(k)$, $A(k)$, $k \le n+1$, satisfy the decomposition formula for $Y(k)$ for all $k \le n+1$. By the principle of induction we may deduce that the processes $A$ and $M$, given by (0.1), (0.2) for all $n$, satisfy the conditions of the theorem. The uniqueness is proved in the main text.
Exercise 1.13: Let $Z(n)$ be a random walk (see Example 1.2), $Z(0) = 0$, $Z(n) = \sum_{j=1}^n L(j)$, $L(j) = \pm 1$, and let $\mathcal{F}_n$ be the filtration generated by $L(n)$, $\mathcal{F}_n = \sigma(L(1), \dots, L(n))$. Verify that $Z^2(n)$ is a submartingale and find the increasing process $A$ in its Doob decomposition.

Solution: From relations (0.1), (0.2) of Exercise 1.12 we can give an explicit formula for the compensator $A$:
$$A(k) = E(Y(k)|\mathcal{F}_{k-1}) - M(k-1) = E(Y(k)|\mathcal{F}_{k-1}) - Y(k-1) + A(k-1).$$
Hence $A(k) - A(k-1) = E(Y(k) - Y(k-1)|\mathcal{F}_{k-1})$. Adding these equalities on both sides we obtain
$$A(n) = \sum_{k=1}^n E(Y(k) - Y(k-1)|\mathcal{F}_{k-1}), \quad \text{for } n \ge 1. \tag{0.3}$$
By Exercise 1.9, $E(Z^2(n+1)|\mathcal{F}_n) = Z^2(n) + 1 \ge Z^2(n)$, i.e., $Z^2$ is a submartingale. Next, using formula (0.3) we obtain
$$A(n) = \sum_{k=1}^n E(Z^2(k) - Z^2(k-1)|\mathcal{F}_{k-1}) = \sum_{k=1}^n \big[(Z^2(k-1) + 1) - Z^2(k-1)\big] = n$$
for $n \ge 1$.
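As a numerical illustration (not part of the original solution), the compensator increment $E(Z^2(k) - Z^2(k-1)|\mathcal{F}_{k-1}) = 1$ can be checked by Monte Carlo: averaging the increment over simulated paths that share the same value of $Z(k-1)$ gives approximately $1$, whatever that value is. The step index $k$ and trial count below are assumed example values.

```python
# Sketch: estimate E(Z^2(k) - Z^2(k-1) | Z(k-1) = z) for the symmetric walk; it equals 1.
import random
from collections import defaultdict

random.seed(1)
k, trials = 5, 200_000
sums, counts = defaultdict(float), defaultdict(int)
for _ in range(trials):
    z = sum(random.choice((-1, 1)) for _ in range(k - 1))   # Z(k-1)
    step = random.choice((-1, 1))                            # L(k)
    increment = (z + step) ** 2 - z ** 2                     # Z^2(k) - Z^2(k-1)
    sums[z] += increment
    counts[z] += 1
for z in sorted(counts):
    print(z, sums[z] / counts[z])   # close to 1 for every value of Z(k-1)
```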
Exercise 1.14: Using the Doob decomposition, show that if $Y$ is a square-integrable submartingale (resp. supermartingale) and $H$ is predictable with bounded non-negative $H(n)$, then the stochastic integral of $H$ with respect to $Y$ is also a submartingale (resp. supermartingale).

Solution: Let $Y$ be a submartingale. Then by the Doob decomposition (Theorem 1.19) there exist a unique martingale $M$ and a unique predictable, increasing process $A$, $M(0) = A(0) = 0$, such that $Y(k) = Y(0) + M(k) + A(k)$ for $k \ge 0$. Hence $Y(k) - Y(k-1) = [M(k) - M(k-1)] + [A(k) - A(k-1)]$. This relation gives the following representation for the stochastic integral of $H$ with respect to $Y$:
$$X(n+1) = \sum_{k=1}^{n+1} H(k)[Y(k) - Y(k-1)] = \sum_{k=1}^{n+1} H(k)[M(k) - M(k-1)] + \sum_{k=1}^{n+1} H(k)[A(k) - A(k-1)] = Z(n+1) + B(n+1), \quad n \ge 0.$$
By Theorem 1.15, $Z(k)$, $k \ge 1$, is a martingale. For the second term we have
$$E(B(n+1)|\mathcal{F}_n) = \sum_{k=1}^{n+1} E(H(k)[A(k) - A(k-1)]|\mathcal{F}_n) = \sum_{k=1}^{n+1} H(k)[A(k) - A(k-1)] = B(n) + H(n+1)[A(n+1) - A(n)] \ge B(n),$$
because by the predictability of $H$ and $A$ the random variables $H(k)[A(k) - A(k-1)]$ are $\mathcal{F}_n$-measurable for $k \le n+1$; also $H(k) \ge 0$ and $A$ is an increasing process. Taking together the properties of $Z$ and $B$ we conclude that
$$E(X(n+1)|\mathcal{F}_n) = E(Z(n+1)|\mathcal{F}_n) + E(B(n+1)|\mathcal{F}_n) \ge Z(n) + B(n) = X(n).$$
This proves the claim for a submartingale. If $Y$ is a supermartingale, $-Y$ is a submartingale, so the above proof applies, which implies that the stochastic integral with respect to a supermartingale is again a supermartingale.
Exercise 1.15: Let $\tau$ be a stopping time relative to the filtration $\mathcal{F}_n$. Which of the random variables $\tau + 1$, $\tau - 1$, $\tau^2$ is a stopping time?

Solution:
(a) $\nu = \tau + 1$: yes, because $\{\nu = n\} = \{\tau = n-1\} \in \mathcal{F}_{n-1} \subset \mathcal{F}_n$ for $n \ge 1$, and $\{\nu = 0\} = \emptyset \in \mathcal{F}_0$.
(b) $\nu = \tau - 1$: no, because we can only conclude that $\{\nu = n\} = \{\tau = n+1\} \in \mathcal{F}_{n+1}$ for $n \ge 0$, so this set need not be in $\mathcal{F}_n$.
(c) $\tau^2$: yes, because $\{\omega : \tau^2(\omega) = n\} = \{\omega : \tau(\omega) = k\} \in \mathcal{F}_k \subset \mathcal{F}_n$ for $n = k^2$, $k \in \mathbb{N}$, and for $n \notin \{k^2 : k \in \mathbb{N}\}$, $\{\omega : \tau^2(\omega) = n\} = \emptyset \in \mathcal{F}_n$.
Exercise 1.16: Show that the constant random variable, $\tau(\omega) = m$ for all $\omega$, is a stopping time relative to any filtration.

Solution: $\{\tau = n\} = \emptyset$ if $m \ne n$ and $\{\tau = n\} = \Omega$ if $m = n$; hence $\{\tau = n\} \in \mathcal{F}_n$ for all $n \in \mathbb{N}$.
Exercise 1.17: Show that if $\tau$ and $\nu$ are as in the Proposition, then $\tau \wedge \nu$ is also a stopping time.

Solution: Use the condition (p. 15): for $g : \Omega \to \mathbb{N}$, $\{g = n\} \in \mathcal{F}_n$ for all $n \in \mathbb{N}$ if and only if $\{g \le n\} \in \mathcal{F}_n$ for all $n \in \mathbb{N}$. We have $\{\tau \le n\}, \{\nu \le n\} \in \mathcal{F}_n$ for all $n$, hence $\{\tau \wedge \nu \le n\} = \{\tau \le n\} \cup \{\nu \le n\} \in \mathcal{F}_n$ for all $n$.
Exercise 1.18: Deduce the above theorem from Theorem 1.15 by considering $H(k) = 1_{\{\tau \ge k\}}$. (Let $M$ be a martingale. If $\tau$ is a stopping time, then the stopped process $M^\tau$ is also a martingale.)

Solution: Let $M$ be a martingale and $\tau$ a stopping time for the filtration $(\mathcal{F}_n)_{n \ge 0}$. Take $Y(n) = M(n) - M(0)$ for $n \ge 0$. Then $Y$ is also a martingale and $E(Y(0)) = 0$. Now write, for $n \ge 1$,
$$Y^\tau(n, \omega) = Y(n \wedge \tau(\omega), \omega) = Y(1, \omega) + (Y(2, \omega) - Y(1, \omega)) + \dots + (Y(n \wedge \tau(\omega), \omega) - Y(n \wedge \tau(\omega) - 1, \omega)) = \sum_{k=1}^n 1_{\{\tau \ge k\}}(\omega)(Y(k) - Y(k-1)).$$
Thus $Y^\tau$ can be written in the form $Y^\tau(n) = \sum_{k=1}^n H(k)(Y(k) - Y(k-1))$, where $H(k) = 1_{\{\tau \ge k\}}$. The process $H$ is a bounded predictable process because it is the indicator function of the set $\{\tau \ge k\} = \Omega \setminus \bigcup_{m < k} \{\tau = m\} \in \mathcal{F}_{k-1}$. By Theorem 1.15, $Y^\tau$ is a martingale. This gives $E(Y^\tau(n)) = E(Y^\tau(0)) = E(Y(0)) = 0$. Hence $M^\tau(n) = Y^\tau(n) + M(0)$ is a martingale and $E(M^\tau(n)) = E(M(0))$ for all $n$.
Exercise 1.19: Using the Doob decomposition, show that a stopped submartingale is a submartingale (and similarly for a supermartingale). Alternatively, use the above representation of the stopped process and use the definition to reach the same conclusions.

Solution: Use the form of the stopped process given in the proof of Proposition 1.30. Let $M$ be a submartingale (supermartingale) and $\tau$ a finite stopping time. From the form of $M^\tau$ we have
$$M^\tau(n+1) = \sum_{m < n+1} M(m)1_{\{\tau = m\}} + M(n+1)1_{\{\tau \ge n+1\}}.$$
Since each term on the right hand side is an integrable variable, $M^\tau(n+1)$ is also integrable. Now we can write
$$E(M^\tau(n+1)|\mathcal{F}_n) = \sum_{m < n+1} E(M(m)1_{\{\tau = m\}}|\mathcal{F}_n) + E(M(n+1)1_{\{\tau \ge n+1\}}|\mathcal{F}_n).$$
The processes $M(m)1_{\{\tau = m\}}$, $m < n+1$, and $1_{\{\tau \ge n+1\}} = 1 - 1_{\{\tau \le n\}}$ are $\mathcal{F}_n$-measurable, so
$$E(M^\tau(n+1)|\mathcal{F}_n) = \sum_{m < n+1} M(m)1_{\{\tau = m\}} + 1_{\{\tau \ge n+1\}} E(M(n+1)|\mathcal{F}_n)$$
$$\ge (\le) \sum_{m < n} M(m)1_{\{\tau = m\}} + M(n)1_{\{\tau = n\}} + 1_{\{\tau \ge n+1\}} M(n) \quad \text{($M$ is a sub (super) martingale)}$$
$$= \sum_{m < n} M(m)1_{\{\tau = m\}} + M(n)1_{\{\tau \ge n\}} = M^\tau(n) \quad \text{for all } n \ge 0.$$
Exercise 1.20: Show that $\mathcal{F}_\tau$ is a sub-$\sigma$-field of $\mathcal{F}$.

Solution: (i) $\Omega \in \mathcal{F}_\tau$ because $\Omega \cap \{\tau = n\} = \{\tau = n\} \in \mathcal{F}_n$ for all $n \ge 0$.
(ii) Let $A$ belong to $\mathcal{F}_\tau$. This is equivalent to the condition $A \cap \{\tau = n\} \in \mathcal{F}_n$ for all $n$. Then $(\Omega \setminus A) \cap \{\tau = n\} = \{\tau = n\} \setminus (A \cap \{\tau = n\}) \in \mathcal{F}_n$ for all $n$, because the $\mathcal{F}_n$ are $\sigma$-fields. The last condition means $\Omega \setminus A \in \mathcal{F}_\tau$.
(iii) Let $A_k$ belong to $\mathcal{F}_\tau$ for $k = 1, 2, \dots$. Then $A_k \cap \{\tau = n\} \in \mathcal{F}_n$ for all $n$. Hence $(\bigcup_{k=1}^\infty A_k) \cap \{\tau = n\} = \bigcup_{k=1}^\infty (A_k \cap \{\tau = n\}) \in \mathcal{F}_n$ for all $n$. This means $\bigcup_{k=1}^\infty A_k \in \mathcal{F}_\tau$.
Exercise 1.21: Show that if $\tau$, $\nu$ are stopping times with $\tau \le \nu$ then $\mathcal{F}_\tau \subset \mathcal{F}_\nu$.

Solution: Let $\tau$, $\nu$ be stopping times such that $\tau \le \nu$. Let $A \in \mathcal{F}_\tau$. According to the definition of $\mathcal{F}_\tau$, $A \cap \{\tau = m\} \in \mathcal{F}_m$ for $m = 0, 1, \dots$. Since $\tau \le \nu$, it holds that $(\bigcup_{m=0}^n \{\tau = m\}) \cap \{\nu \le n\} = \{\nu \le n\}$. Hence we have
$$A \cap \{\nu \le n\} = \bigcup_{m=0}^n \big(A \cap \{\tau = m\}\big) \cap \{\nu \le n\} \in \mathcal{F}_n,$$
since $\mathcal{F}_m \subset \mathcal{F}_n$ and $\{\nu \le n\} \in \mathcal{F}_n$. As $n$ was arbitrary, by the condition (p. 15), $A \in \mathcal{F}_\nu$.
Exercise 1.22: Any stopping time $\tau$ is $\mathcal{F}_\tau$-measurable.

Solution: We have to prove $\{\tau \le k\} \in \mathcal{F}_\tau$ for each $k = 0, 1, \dots$. This is equivalent to the condition $\{\tau \le k\} \cap \{\tau = n\} \in \mathcal{F}_n$ for all $n, k$. We have
$$B = \{\tau \le k\} \cap \{\tau = n\} = \begin{cases} \emptyset & \text{if } k < n \\ \{\tau = n\} & \text{if } n \le k \end{cases}.$$
Then $B \in \mathcal{F}_n$. As a consequence $\{\tau \le k\} \in \mathcal{F}_\tau$.
Exercise 1.23: (Theorem 1.35 for supermartingales.) If $M$ is a supermartingale and $\tau \le \nu$ are bounded stopping times, then
$$E(M(\nu)|\mathcal{F}_\tau) \le M(\tau).$$

Solution: It is enough to prove that
$$\int_A E(M(\nu)|\mathcal{F}_\tau)\, dP = \int_A M(\nu)\, dP \le \int_A M(\tau)\, dP$$
for all $A \in \mathcal{F}_\tau$. We will prove the equivalent inequality $E(1_A(M(\nu) - M(\tau))) \le 0$ for arbitrary $A \in \mathcal{F}_\tau$. From the proof of Theorem 1.35 we know that the variable $1_A(M(\nu) - M(\tau))$ can be written in the form $1_A(M(\nu) - M(\tau)) = X^\nu(N)$, where the process $X(n)$ is as follows:
$$X(n) = \sum_{k=1}^n H(k)(M(k) - M(k-1)), \quad X(0) = 0, \quad H(k) = 1_A 1_{\{\tau < k\}},$$
and $N$ is a constant such that $\nu \le N$. Additionally $H$ is a bounded, non-negative and predictable process. Now the assumption that $M$ is a supermartingale implies that $X$ is also a supermartingale (Exercise 1.14). Hence it follows, by the results of Exercise 1.19, that the stopped process $X^\nu(n)$ is also a supermartingale. But for a supermartingale we have $E(X^\nu(N)) \le E(X^\nu(0)) = E(X(0)) = 0$, which completes the proof.
Exercise 1.24: Suppose that, with $M(n)$ and $\lambda$ as in the Theorem, $(M(n))^p$ is integrable for some $p > 1$. Show that we can improve (1.4) to read
$$P\big(\max_{k \le n} M(k) \ge \lambda\big) \le \frac{1}{\lambda^p} \int_{\{\max_{k \le n} M(k) \ge \lambda\}} M^p(n)\, dP \le \frac{1}{\lambda^p} E(M^p(n)).$$

Solution: If $p \ge 1$ the function $x \mapsto x^p$, $x \ge 0$, is a convex, nondecreasing function. As $M^p(n)$ is integrable, Jensen's inequality (see p. 10) and the fact that $M$ is a submartingale imply
$$E(M^p(n+1)|\mathcal{F}_n) \ge \big(E(M(n+1)|\mathcal{F}_n)\big)^p \ge M^p(n),$$
so $M^p$ is a submartingale. Applying Doob's maximal inequality (Theorem 1.36) to $M^p$ and the event $\{\max_{k \le n} M(k) \ge \lambda\} = \{\max_{k \le n} M^p(k) \ge \lambda^p\}$ we obtain the result.
Exercise 1.25: Extend the above Lemma to $L^p$ for every $p > 1$, to conclude that for non-negative $Y \in L^p$, and with its relation to $X \ge 0$ as stated in the Lemma, we obtain $\|X\|_p \le \frac{p}{p-1}\|Y\|_p$. (Hint: the proof is similar to that given for the case $p = 2$, and utilises the identity $p\int_0^\infty 1_{\{x \le z\}} x^{p-1}\, dx = z^p$.)

Note: The definition of the normed vector space $L^p$ is not given explicitly in the text, but is well known: one may prove that if $p > 1$ the map $X \mapsto (E(|X|^p))^{1/p} = \|X\|_p$ is a norm on the vector space of all $p$-integrable random variables (i.e. where $E(|X|^p) < \infty$), again with the proviso that we identify random variables that are a.s. equal. The Schwarz inequality in $L^2$ then extends to the Hölder inequality: $E(|XY|) \le \|X\|_p \|Y\|_q$ when $X \in L^p$, $Y \in L^q$, $\frac1p + \frac1q = 1$.

Proof: The extension of Lemma 1.38 that we require is the following. Assume that $X, Y$ are non-negative random variables and $Y$ is in $L^p(\Omega)$, $p > 1$. Suppose that for all $x > 0$,
$$x P(X \ge x) \le \int_\Omega 1_{\{X \ge x\}} Y\, dP.$$
Then $X$ is in $L^p(\Omega)$ and
$$\|X\|_p = (E(X^p))^{1/p} \le \frac{p}{p-1}\|Y\|_p.$$
The proof is similar to that of Lemma 1.38. First consider the case when $X$ is bounded. The following formula is interesting on its own:
$$E(X^p) = \int_0^\infty p x^{p-1} P(X \ge x)\, dx, \quad \text{for } p > 0.$$
To prove it, substitute $z = X(\omega)$ in the equality $z^p = p\int_0^\infty 1_{\{x \le z\}}(x)\, x^{p-1}\, dx$, and we obtain
$$E(X^p) = \int_\Omega X^p(\omega)\, dP(\omega) = \int_\Omega p\Big(\int_0^\infty 1_{\{x \le X(\omega)\}}(x)\, x^{p-1}\, dx\Big) dP(\omega).$$
By Fubini's theorem
$$E(X^p) = \int_0^\infty p x^{p-1}\Big(\int_\Omega 1_{\{X \ge x\}}(\omega)\, dP(\omega)\Big) dx = \int_0^\infty p x^{p-1} P(X \ge x)\, dx.$$
Our hypothesis and Fubini's theorem once more give
$$E(X^p) \le p\int_0^\infty x^{p-2}\Big(\int_\Omega 1_{\{X \ge x\}}(\omega) Y(\omega)\, dP(\omega)\Big) dx = p\int_\Omega \Big(\int_0^{X(\omega)} x^{p-2}\, dx\Big) Y(\omega)\, dP(\omega) = \frac{p}{p-1}\int_\Omega X^{p-1}(\omega) Y(\omega)\, dP(\omega)$$
for $p > 1$. The Hölder inequality with $p$ and $q = \frac{p}{p-1}$ yields
$$\int_\Omega X^{p-1} Y\, dP \le \big(E((X^{p-1})^{\frac{p}{p-1}})\big)^{\frac{p-1}{p}} (E(Y^p))^{\frac1p}.$$
The last two inequalities give $\|X\|_p^p = E(X^p) \le \frac{p}{p-1}\|X\|_p^{p-1}\|Y\|_p$. This is equivalent to our claim for $X$ bounded.

If $X$ is not bounded we can take $X_n = X \wedge n$. For $x \le n$ we have $\{X_n \ge x\} = \{X \ge x\}$, while for $x > n$ the set $\{X_n \ge x\}$ is empty, so in either case
$$x P(X_n \ge x) \le \int_\Omega 1_{\{X_n \ge x\}} Y\, dP.$$
As $X_n$ is bounded this gives $E(X_n^p) \le \big(\frac{p}{p-1}\big)^p E(Y^p)$ for all $n \ge 1$. The sequence $X_n^p$ increases to $X^p$ a.s., so the monotone convergence theorem implies $E(X^p) \le \big(\frac{p}{p-1}\big)^p E(Y^p)$, and as a consequence $X \in L^p(\Omega)$.
Exercise 1.26: Find the transition probabilities for the binomial tree. Is it homogeneous?

Solution: From the definition of the binomial tree, the behaviour of stock prices is described by a sequence of random variables $S(n) = S(n-1)(1 + K(n))$, where $K(n, \omega) = U 1_{A_n}(\omega) + D 1_{[0,1]\setminus A_n}(\omega)$, with $S(0)$ given, deterministic. As in Exercise 1.2 we have $P(K(n) = U) = 1 - P(K(n) = D) = p$, $p \in (0,1)$, for $n \ge 1$, and the variables $K(n)$ are independent random variables. From the definition of $S(n)$, $S(n) = S(0)\prod_{i=1}^n (1 + K(i))$. Then $\mathcal{F}^S_n = \sigma(S(1), \dots, S(n)) = \mathcal{F}^K_n = \sigma(K(1), \dots, K(n))$ and for any bounded Borel function $f : \mathbb{R} \to \mathbb{R}$,
$$E(f(S(n+1))|\mathcal{F}^S_n) = E(f(S(n)(1 + K(n+1)))|\mathcal{F}^K_n) = E(F(f)(S(n), K(n+1))|\mathcal{F}^K_n),$$
where $F(f)(x, y) = f(x(1+y))$. The variable $K(n+1)$ is independent of $\mathcal{F}^K_n$ and $S(n)$ is $\mathcal{F}^K_n$-measurable, so by Lemma 1.43 we have
$$E(f(S(n+1))|\mathcal{F}^S_n) = G(f)(S(n)),$$
where
$$G(f)(x) = E(F(f)(x, K(n+1))) = p f(x(1+U)) + (1-p) f(x(1+D)).$$
Since $G(f)$ is a Borel function, the penultimate formula implies, by the definition of conditional expectation, that $E(f(S(n+1))|\mathcal{F}^S_n) = E(f(S(n+1))|\mathcal{F}^{S(n)})$. So the process $(S(n))_{n \ge 0}$ has the Markov property. Taking $f = 1_B$ and setting $\pi_n(x, B) = G(1_B)(x)$ for Borel sets $B$, we see that for every fixed $B$,
$$\pi_n(x, B) = p\, 1_B(x(1+U)) + (1-p)\, 1_B(x(1+D))$$
is a measurable function of $x$, and for every fixed $x \in \mathbb{R}$, $\pi_n(x, \cdot)$ is a probability measure on $\mathcal{B}(\mathbb{R})$. We also have
$$P(S(n+1) \in B|\mathcal{F}^{S(n)}) = E(1_B(S(n+1))|\mathcal{F}^{S(n)}) = \pi_n(S(n), B).$$
Thus the $\pi_n$ are transition probabilities of the Markov process $(S(n))_{n \ge 0}$. This is a homogeneous Markov process, as the $\pi_n$ do not depend on $n$.
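A small illustration (not part of the original solution): the one-step kernel $\pi(x, \cdot) = p\,\delta_{x(1+U)} + (1-p)\,\delta_{x(1+D)}$ can be compared with a Monte Carlo estimate of $E(f(S(n+1))\mid S(n) = x)$. The parameter values $U, D, p$ and the payoff $f$ below are assumed examples, not taken from the text.

```python
# Sketch: binomial-tree transition kernel and a Monte Carlo check of G(f)(x).
import random

U, D, p = 0.2, -0.1, 0.6             # assumed example parameters

def G(f, x):
    """Conditional expectation of f(S(n+1)) given S(n) = x."""
    return p * f(x * (1 + U)) + (1 - p) * f(x * (1 + D))

def step(x):
    """Sample S(n+1) given S(n) = x."""
    return x * (1 + (U if random.random() < p else D))

random.seed(0)
x, f = 100.0, lambda s: max(s - 100.0, 0.0)   # e.g. a call payoff
mc = sum(f(step(x)) for _ in range(100_000)) / 100_000
print(mc, G(f, x))                             # the two numbers should be close
```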
Exercise 1.27: Show that the symmetric random walk is homogeneous.

Solution: According to its definition, a symmetric random walk is defined by taking $Z(0)$ and setting $Z(n) = Z(n-1) + L(n)$, where the random variables $Z(0), L(1), \dots, L(n)$ are independent for every $n \ge 1$. Moreover, $P(L(n) = 1) = P(L(n) = -1) = \tfrac12$ (see Examples 1.4 and 1.46). Since $Z(n) = Z(0) + \sum_{i=1}^n L(i)$, we have $\mathcal{F}^Z_n = \sigma(Z(0), L(1), \dots, L(n))$.

For any bounded Borel function $f : \mathbb{R} \to \mathbb{R}$ we have $f(Z(n+1)) = f(Z(n) + L(n+1)) = F(f)(Z(n), L(n+1))$, where $F(f)(x, y) = f(x+y)$. The variable $L(n+1)$ is independent of $\mathcal{F}^Z_n$ and $Z(n)$ is $\mathcal{F}^Z_n$-measurable, so by Lemma 1.43 we obtain
$$E(f(Z(n+1))|\mathcal{F}^Z_n) = G(f)(Z(n)),$$
where
$$G(f)(x) = E(F(f)(x, L(n+1))) = E(f(x + L(n+1))) = \tfrac12\big(f(x+1) + f(x-1)\big).$$
The last two relations show that $G(f)$ is a Borel function and that the equality $E(f(Z(n+1))|\mathcal{F}^Z_n) = E(f(Z(n+1))|\mathcal{F}^{Z(n)})$ holds. Thus $(Z(n))_{n \ge 0}$ is a Markov process. Taking $f = 1_B$ and $\pi(x, B) = \tfrac12\big(1_B(x+1) + 1_B(x-1)\big) = \tfrac12\big(\delta_{x+1}(B) + \delta_{x-1}(B)\big)$, where $B$ is a Borel set and $x \in \mathbb{R}$, we obtain
$$P(Z(n+1) \in B|\mathcal{F}^{Z(n)}) = G(1_B)(Z(n)) = \pi(Z(n), B).$$
Again, $\pi(\cdot, B)$ is a measurable function for each Borel set $B$ and $\pi(x, \cdot)$ is a probability measure for every $x \in \mathbb{R}$, so we conclude that $\pi$ is a transition probability for the Markov process $(Z(n))_{n \ge 0}$. Since $\pi$ does not depend on $n$, this process is homogeneous.
Exercise 1.28: Let $(Y(n))_{n \ge 0}$ be a sequence of independent integrable random variables on $(\Omega, \mathcal{F}, P)$. Show that the sequence $Z(n) = \sum_{i=0}^n Y(i)$ is a Markov process and calculate the transition probabilities, which depend on $n$. Find a condition for $Z$ to be homogeneous.

Solution: From the definition $Z(n) = \sum_{i=0}^n Y(i)$ follow the relations $\mathcal{F}^Z_n = \sigma(Z(0), \dots, Z(n)) = \sigma(Y(0), \dots, Y(n)) = \mathcal{F}^Y_n$ and $Z(n+1) = Z(n) + Y(n+1)$. For any bounded Borel function $f : \mathbb{R} \to \mathbb{R}$ we have
$$f(Z(n+1)) = f(Z(n) + Y(n+1)) = F(f)(Z(n), Y(n+1)),$$
where $F(f)(x, y) = f(x+y)$. The variable $Z(n)$ is $\mathcal{F}^Z_n$-measurable, and by our assumption $(Y(i))_{i \ge 0}$ is a sequence of independent variables, so $Y(n+1)$ is independent of $\mathcal{F}^Z_n$. Now using Lemma 1.43 we obtain
$$E(f(Z(n+1))|\mathcal{F}^Z_n) = E(F(f)(Z(n), Y(n+1))|\mathcal{F}^Z_n) = G_n(f)(Z(n)),$$
where
$$G_n(f)(x) = E(F(f)(x, Y(n+1))) = E(f(x + Y(n+1))) = \int_{\mathbb{R}} f(x+y)\, P_{Y(n+1)}(dy) \quad \text{for } n \ge 0;$$
here $P_{Y(n+1)}$ is the distribution of the random variable $Y(n+1)$. From the form $G_n(f)(x) = \int_{\mathbb{R}} f(x+y) P_{Y(n+1)}(dy)$ and the Fubini theorem, $G_n(f)$ is a measurable function. The equality $E(f(Z(n+1))|\mathcal{F}^Z_n) = G_n(f)(Z(n))$ implies that $E(f(Z(n+1))|\mathcal{F}^Z_n)$ is an $\mathcal{F}^{Z(n)}$-measurable function. Then from the definition of conditional expectation we have $E(f(Z(n+1))|\mathcal{F}^Z_n) = E(f(Z(n+1))|\mathcal{F}^{Z(n)})$ a.e. So the process $(Z(n))_{n \ge 0}$ is a Markov process.

Putting $\pi_n(x, B) = G_n(1_B)(x)$, $n \ge 0$, we see that for every Borel set $B$, $\pi_n(\cdot, B)$ is a measurable function. Next, denote $S_x(y) = x + y$. Of course $S_x$ is a Borel function for every $x$. From the definition of $\pi_n$ we have
$$\pi_n(x, B) = \int_{\mathbb{R}} 1_B(S_x(y))\, P_{Y(n+1)}(dy) = \int_{\mathbb{R}} 1_{S_x^{-1}(B)}(y)\, P_{Y(n+1)}(dy) = P_{Y(n+1)}(S_x^{-1}(B)).$$
This shows that for every $x \in \mathbb{R}$ the $\pi_n(x, \cdot)$ are probability measures. Finally,
$$P(Z(n+1) \in B|\mathcal{F}^{Z(n)}) = E(1_B(Z(n+1))|\mathcal{F}^{Z(n)}) = G_n(1_B)(Z(n)) = \pi_n(Z(n), B), \quad n \ge 0.$$
Collecting together all these properties we conclude that the measures $\pi_n$, $n \ge 0$, are the transition probabilities of the Markov process $(Z(n))_{n \ge 0}$. From the definition of $\pi_n$ we see that if the distributions $P_{Y(n)}$ of the variables $Y(n)$ are different, then the $\pi_n$ are different and the process $Z(n)$ is not homogeneous. If for all $n$ the variables $Y(n)$ have the same distribution, that is $P_{Y(n)} = P_{Y(0)}$ for all $n$, then $\pi_n = \pi_0$ for all $n$ and the process $Z(n)$ is homogeneous.
Exercise 1.29: A Markov chain is homogeneous if and only if for every pair $i, j \in S$
$$P(X(n+1) = j|X(n) = i) = P(X(1) = j|X(0) = i) = p_{ij} \tag{0.4}$$
for every $n \ge 0$.

Solution: A Markov chain $X(n)$, $n \ge 0$, is homogeneous if for every Borel set $B$ and $n \ge 0$ the equation $E(1_B(X(n+1))|\mathcal{F}^{X(n)}) = \pi(X(n), B)$ is satisfied, where $\pi$ is a fixed transition probability, not depending on $n$. In the discrete case, the variables $X(n)$, $n \ge 0$, take values in a finite set $S = \{0, \dots, N\}$. The relation $1_B(X(n+1)) = \sum_{j \in B} 1_{\{X(n+1) = j\}}$ and the additivity of the conditional expectation allow us to restrict attention to sets $B = \{j\}$, $j \in S$. Since here the conditional expectations are simple functions, constant on the sets $A^{(n)}_i = \{X(n) = i\}$, the condition that the process $X(n)$ be homogeneous is equivalent to
$$E(1_{\{X(n+1) = j\}}|\mathcal{F}^{X(n)})\, 1_{A^{(n)}_i} = \pi(i, \{j\})\, 1_{A^{(n)}_i}$$
for every $i, j \in S$, $n \ge 0$. Denoting $\pi(i, \{j\}) = p_{ij}$, we obtain that the last equalities are equivalent to the formulae $P(X(n+1) = j|X(n) = i) = p_{ij}$ for all $i, j \in S$ and $n \ge 0$.

Coming back to the financial example (see p. 32) based on credit ratings, this means that the rating process of a country is a homogeneous Markov process if the rating of a country at time $n+1$ depends only on its rating at time $n$ (it does not depend on its previous ratings) and the probabilities $p_{ij}$ of rating changes are the same for all times.
Exercise 1.30: Prove that the transition probabilities of a homogeneous Markov chain satisfy the so-called Chapman–Kolmogorov equation
$$p_{ij}(k+l) = \sum_{r \in S} p_{ir}(k) p_{rj}(l).$$

Proof: Denote by $\widetilde{P}_k$ the matrix with entries $p_{ij}(k)$ and denote the entries of the $k$-th power $P^k$ of the transition matrix $P$ by $p^{(k)}_{ij}$, $i, j \in S$. We now claim that $P^k = \widetilde{P}_k$ for all $k \ge 0$, or equivalently $p^{(k)}_{ij} = p_{ij}(k)$ for all $i, j, k$. To prove our claim we use the induction principle.

Step 1. If $k = 1$, then $p^{(1)}_{ij} = p_{ij} = p_{ij}(1)$, so $\widetilde{P}_1 = P$.

Step 2. The induction hypothesis: assume that for all $l \le m$, $\widetilde{P}_l = P^l$.

Step 3. The inductive step: we will prove that $\widetilde{P}_{m+1} = P^{m+1}$. We have
$$p_{ij}(m+1) = P(X(m+1) = j|X(0) = i) = \sum_{r \in S} P(X(m) = r|X(0) = i)\, P(X(m+1) = j|X(m) = r, X(0) = i) = \sum_{r \in S} p_{ir}(m)\, P(X(m+1) = j|X(m) = r, X(0) = i).$$
The following relations hold on the set $\{X(m) = r\}$:
$$P(X(m+1) = j|X(m) = r, X(0) = i) = E(1_{\{j\}}(X(m+1))|\mathcal{F}^{X(0), X(m)})$$
$$= E\big(E(1_{\{j\}}(X(m+1))|\mathcal{F}^X_m)\,\big|\,\mathcal{F}^{X(0), X(m)}\big) \quad \text{(tower property)}$$
$$= E\big(E(1_{\{j\}}(X(m+1))|\mathcal{F}^{X(m)})\,\big|\,\mathcal{F}^{X(0), X(m)}\big) \quad \text{(Markov property)}$$
$$= E(1_{\{j\}}(X(m+1))|\mathcal{F}^{X(m)}) \quad \text{(tower property)}$$
$$= P(X(m+1) = j|X(m) = r) = p_{rj}$$
($X$ is Markov, homogeneous; Exercise 1.29). Utilising this result we obtain $p_{ij}(m+1) = \sum_{r \in S} p_{ir}(m) p_{rj}$ for all $i, j$. This equality means that $\widetilde{P}_{m+1} = \widetilde{P}_m P$. Hence by the induction assumption $\widetilde{P}_{m+1} = P^m P = P^{m+1}$. Now by the induction principle $\widetilde{P}_k = P^k$ for all $k \ge 1$ and the claim is true.

Now our exercise is trivial. From the equality $P^{k+l} = P^k P^l$ it follows that $\widetilde{P}_{k+l} = \widetilde{P}_k \widetilde{P}_l$. Writing out the last equation for the entries completes the proof.
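A numerical illustration (not part of the original solution): the Chapman–Kolmogorov equation is just the matrix identity $P_{k+l} = P_k P_l$, which the following Python sketch checks for an assumed 3-state transition matrix.

```python
# Sketch: check p_ij(k+l) = sum_r p_ir(k) p_rj(l) for an assumed 3-state chain.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """n-step transition matrix P^n."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        result = mat_mul(result, P)
    return result

P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.1, 0.3, 0.6]]          # one-step transition matrix (rows sum to 1)

k, l = 2, 3
lhs = n_step(P, k + l)
rhs = mat_mul(n_step(P, k), n_step(P, l))
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(3) for j in range(3)))   # True
```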
Chapter 2
Exercise 2.1: Show that scalings other than by the square root lead nowhere by proving that $X(n) = h^\alpha L(n)$ with $\alpha > \tfrac12$ implies $\sum_{n=1}^N X(n) \to 0$ in $L^2$, while for $\alpha \in (0, \tfrac12)$ this sequence goes to infinity in this space.

Solution: Since $L(n)$ has mean $0$ and variance $1$ for each $n$, we have, by independence and since $h = \tfrac1N$,
$$\Big\|\sum_{n=1}^N X(n)\Big\|_2^2 = \mathrm{Var}\Big(h^\alpha \sum_{n=1}^N L(n)\Big) = h^{2\alpha}\sum_{n=1}^N \mathrm{Var}(L(n)) = h^{2\alpha} N = h^{2\alpha - 1}.$$
When $h \to 0$, this goes to $0$ if $\alpha > \tfrac12$ and to $+\infty$ if $\alpha < \tfrac12$.
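A numerical illustration (not part of the original solution): with $h = 1/N$ the squared $L^2$ norm above equals $N^{1-2\alpha}$, which the following Python sketch estimates by Monte Carlo for a few assumed values of $\alpha$ and $N$.

```python
# Sketch: Monte Carlo estimate of E[(sum_{n<=N} h**alpha L(n))**2] = N**(1 - 2*alpha).
import random

def norm_sq_estimate(alpha, N, trials=2_000):
    h = 1.0 / N
    total = 0.0
    for _ in range(trials):
        s = h**alpha * sum(random.choice((-1, 1)) for _ in range(N))
        total += s * s
    return total / trials

random.seed(0)
for alpha in (0.25, 0.5, 0.75):
    print(alpha, [round(norm_sq_estimate(alpha, N), 3) for N in (10, 100, 1000)])
# alpha = 0.25: grows like sqrt(N); alpha = 0.5: stays near 1; alpha = 0.75: shrinks to 0
```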
Exercise 2.2: Show that $\mathrm{Cov}(W(s), W(t)) = \min(s, t)$.

Solution: Since $E(W(t)) = 0$ and $E\big((W(t) - W(s))^2\big) = t - s$ for all $t \ge s \ge 0$, we have $\mathrm{Cov}(W(s), W(t)) = E(W(s)W(t))$ and
$$t - s = E\big((W(t) - W(s))^2\big) = E(W^2(t)) - 2E(W(s)W(t)) + E(W^2(s)) = t - 2E(W(s)W(t)) + s.$$
This equality implies $E(W(s)W(t)) = s = \min(s, t)$.
Exercise 2.3: Consider $B(t) = W(t) - tW(1)$ for $t \in [0,1]$ (this process is called the Brownian bridge, since $B(0) = B(1) = 0$). Compute $\mathrm{Cov}(B(s), B(t))$.

Solution: $E(B(r)) = E(W(r)) - rE(W(1)) = 0$ for all $r \ge 0$, since $E(W(r)) = 0$ for all $r$. Then
$$\mathrm{Cov}(B(s), B(t)) = E\big((W(s) - sW(1))(W(t) - tW(1))\big) = E(W(s)W(t)) - sE(W(1)W(t)) - tE(W(s)W(1)) + st\,E(W^2(1))$$
$$= \mathrm{Cov}(W(s), W(t)) - s\,\mathrm{Cov}(W(1), W(t)) - t\,\mathrm{Cov}(W(s), W(1)) + st = \min(s, t) - s\min(t, 1) - t\min(s, 1) + st$$
$$= \begin{cases} s(1 - t) & \text{if } s \le t \le 1 \\ t(1 - s) & \text{if } t \le s \le 1 \end{cases}.$$
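As a quick check (not part of the original solution), the covariance $s(1-t)$ for $s \le t$ can be estimated by simulating the bridge from three independent Gaussian increments of $W$; the values of $s$, $t$ below are assumed examples.

```python
# Sketch: Monte Carlo check of Cov(B(s), B(t)) = s(1 - t) for s <= t.
import random, math

def bridge_pair(s, t):
    """Sample (B(s), B(t)) for s <= t <= 1 from one Brownian path."""
    ws = random.gauss(0.0, math.sqrt(s))
    wt = ws + random.gauss(0.0, math.sqrt(t - s))
    w1 = wt + random.gauss(0.0, math.sqrt(1 - t))
    return ws - s * w1, wt - t * w1

random.seed(0)
s, t, n = 0.3, 0.7, 200_000
pairs = [bridge_pair(s, t) for _ in range(n)]
cov = sum(a * b for a, b in pairs) / n        # E(B(s)B(t)); both means are zero
print(cov, s * (1 - t))                       # both approximately 0.09
```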
Exercise 2.4: Show directly from the definition that if $W$ is a Wiener process, then so are the processes given by $-W(t)$ and $\frac{1}{c}W(c^2 t)$ for any $c > 0$.

Solution: The process $-W(t)$ obviously satisfies Definition 2.4. We consider the process $Y(t) = \frac1c W(c^2 t)$, $c > 0$, $t \ge 0$. It is known (see [PF]) that if, for two given random variables $U, V$ and every continuous bounded function $f : \mathbb{R} \to \mathbb{R}$, we have $E(f(U)) = E(f(V))$, then the distributions $P_U$ and $P_V$ of $U$ and $V$ are the same. First note that $P_{Y(t)} = P_{W(t)}$ for all $t \ge 0$, since
$$E(f(Y(t))) = \int_\Omega f\Big(\tfrac1c W(c^2 t)\Big)\, dP = \int_{\mathbb{R}} f\Big(\tfrac{x}{c}\Big)\frac{1}{\sqrt{2\pi c^2 t}}\, e^{-\frac{x^2}{2c^2 t}}\, dx \quad \text{($W(c^2 t)$ has normal distribution)}$$
$$= \int_{\mathbb{R}} f(y)\frac{1}{\sqrt{2\pi t}}\, e^{-\frac{y^2}{2t}}\, dy \quad \text{(change of variable $x = cy$)} = E(f(W(t))).$$
We verify the conditions of Definition 2.4. Condition 1 is obvious. For Condition 2 take $0 \le s < t$ and $B \in \mathcal{B}(\mathbb{R})$. Then
$$P\big((Y(t) - Y(s)) \in B\big) = P\Big(\tfrac1c\big(W(c^2 t) - W(c^2 s)\big) \in B\Big) = P\big((W(c^2 t) - W(c^2 s)) \in g_c^{-1}(B)\big) \quad \text{(where $g_c(x) = \tfrac{x}{c}$)}$$
$$= P\big(W(c^2 t - c^2 s) \in g_c^{-1}(B)\big) \quad \text{(Condition 2 for $W$: the increments are stationary)}$$
$$= P\Big(\tfrac1c W(c^2(t-s)) \in B\Big) = P(Y(t-s) \in B) = P(W(t-s) \in B) = P\big((W(t) - W(s)) \in B\big).$$
Thus $Y(t) - Y(s)$ and $W(t) - W(s)$ have the same distribution. For the third condition take $0 \le t_1 < \dots < t_m$. Then $0 \le c^2 t_1 < \dots < c^2 t_m$ and the increments $W(c^2 t_1) - W(c^2 t_0), \dots, W(c^2 t_m) - W(c^2 t_{m-1})$ are independent by independence of the increments of $W$. Hence the process $Y$ has independent increments. The paths of $Y$ are continuous for almost all $\omega$ because this holds for $W$.
Exercise 2.5: Apply the above Proposition to solve Exercise 2.4. In other words, use the following result to give alternative proofs of Exercise 2.4: if a Gaussian process $X$ has $X(0) = 0$, constant expectations, a.s. continuous paths and $\mathrm{Cov}(X(s), X(t)) = \min(s, t)$, then it is a Wiener process.

Solution: The proof that $-W$ is again a Wiener process is clear, as it is Gaussian, a.s. continuous and has the right covariances. For the second part we prove two auxiliary claims:
1. If $X(t)$, $t \ge 0$, is a Gaussian process and $b > 0$, then $Y(t) = X(bt)$, $t \ge 0$, is a Gaussian process.
2. If $Y(t)$, $t \ge 0$, is a Gaussian process and $c > 0$, then $Z(t) = \frac1c Y(t)$, $t \ge 0$, is a Gaussian process.

Proof of 1: Fix $0 \le t_0 < t_1 < \dots < t_n$. Then the distribution of the vector of increments $(Y(t_1) - Y(t_0), \dots, Y(t_n) - Y(t_{n-1}))$ is the same as the distribution of the vector of increments $(X(s_1) - X(s_0), \dots, X(s_n) - X(s_{n-1}))$, where $s_i = bt_i$, $i = 0, \dots, n$. But the last vector is Gaussian because $X$ is a Gaussian process. According to Def. 2.11, $Y$ is a Gaussian process.

For 2 we prove the following more general claim: if $U$, $U^T = (U_1, \dots, U_n)^T$, is a Gaussian random vector with mean vector $\mu_U$ and covariance matrix $\Sigma_U$, and $A$ is a nonsingular $n \times n$ (real) matrix, then $V = AU$ is a Gaussian vector with mean vector $\mu_V = A\mu_U$ and covariance matrix $\Sigma_V = A\Sigma_U A^T$.

To prove this consider the mapping $A(u) = Au$ for $u \in \mathbb{R}^n$. Then for every Borel set $B \in \mathcal{B}(\mathbb{R}^n)$ we have
$$P(V \in B) = P(AU \in B) = P(U \in A^{-1}(B)) = \int_{A^{-1}(B)} f_U(u)\, du,$$
where $f_U$ is the density of $U$ and $du = du_1 \dots du_n$. Changing variables, $u = A^{-1}v$, we obtain $P(V \in B) = \int_B f_U(A^{-1}v)\,|\det A^{-1}|\, dv$. From Definition 2.12 we conclude that
$$P(V \in B) = \int_B (2\pi)^{-\frac n2}(\det \Sigma_U)^{-\frac12}\exp\Big\{-\tfrac12\big(A^{-1}(v - A\mu_U)\big)^T \Sigma_U^{-1}\big(A^{-1}(v - A\mu_U)\big)\Big\}\,|\det A|^{-1}\, dv$$
$$= \int_B (2\pi)^{-\frac n2}\big(\det(A\Sigma_U A^T)\big)^{-\frac12}\exp\Big\{-\tfrac12(v - A\mu_U)^T (A\Sigma_U A^T)^{-1}(v - A\mu_U)\Big\}\, dv.$$
This formula shows that $V$ is a Gaussian vector with mean vector $\mu_V = A\mu_U$ and covariance matrix $\Sigma_V = A\Sigma_U A^T$.

Returning to Point 2 above, let $V$ be the vector of increments of the process $Z$, $V^T = (Z(t_1) - Z(t_0), \dots, Z(t_n) - Z(t_{n-1}))$, and $U$ the vector of increments of $Y$, $U^T = (Y(t_1) - Y(t_0), \dots, Y(t_n) - Y(t_{n-1}))$. Denote $A = \mathrm{diag}(\frac1c, \dots, \frac1c)$, where diag means diagonal matrix. Then $V = AU$ and $A$ is a non-singular matrix ($c > 0$). Since $Y$ was Gaussian, we know that $V$ is also a Gaussian vector. The proof of 2 is complete.

Now, to solve our Exercise we verify the assumptions of Proposition 2.13. Our claims 1 and 2 show that the process $\widetilde{W}(t) = \frac1c W(c^2 t)$, $t \ge 0$, is a Gaussian process because $W$ has this property. Next, $E(\widetilde{W}(t)) = 0$ for all $t$ because $E(W(t)) = 0$ for each $t$. $\widetilde{W}$ has continuous paths because $W$ has continuous paths. For the last condition,
$$\mathrm{Cov}(\widetilde{W}(s), \widetilde{W}(t)) = \mathrm{Cov}\Big(\tfrac1c W(c^2 s), \tfrac1c W(c^2 t)\Big) = \frac{1}{c^2}\big(c^2 s \wedge c^2 t\big) = s \wedge t.$$
By Proposition 2.13, $\widetilde{W}$ is a Wiener process.
Exercise 2.6: Show that the shifted Wiener process is again a Wiener process and that the inverted Wiener process satisfies conditions 2, 3 of the definition.

Solution: 1. Shifted process. We verify the conditions of Definition 2.4 for $W_u(t) = W(u+t) - W(u)$:
1. $W_u(0) = W(u) - W(u) = 0$.
2. For $0 \le s < t$, $W_u(t) - W_u(s) = W(u+t) - W(u+s)$. Hence $W_u(t) - W_u(s)$ has normal distribution with mean value $0$ and standard deviation $\sqrt{t-s}$.
3. For all $m$ and $0 \le t_1 < \dots < t_m$ the increments $W_u(t_{n+1}) - W_u(t_n) = W(u + t_{n+1}) - W(u + t_n)$, $n = 1, \dots, m-1$, are independent because the increments of the Wiener process $W(u + t_{n+1}) - W(u + t_n)$, $n = 1, \dots, m-1$, are independent.
4. For almost all $\omega$ the paths of $W$ are continuous functions, hence also the paths of $W_u$ are continuous.

2. Inverted process. Consider the process $Y(t) = tW(\frac1t)$ for $t > 0$, $Y(0) = 0$. Since $Y(t) = \frac1c W(c^2 t)$ for $t > 0$ with $c = \frac1t$, by the previous exercise $Y(t)$ has normal distribution, $E(Y(t)) = 0$, $\mathrm{Var}\,Y(t) = t$ for $t > 0$. To verify condition 2 of Def. 2.4, choose $0 < s < t$. Then $0 < \frac1t < \frac1s$ and $Y(t) - Y(s) = (-s)\big(W(\tfrac1s) - W(\tfrac1t)\big) + (t - s)W(\tfrac1t)$. Since the increments $W(\tfrac1t)$, $W(\tfrac1s) - W(\tfrac1t)$ are independent Gaussian variables, the variables $(t-s)W(\tfrac1t)$ and $(-s)\big(W(\tfrac1s) - W(\tfrac1t)\big)$ are also independent and Gaussian. Hence their sum $Y(t) - Y(s)$ also has a Gaussian distribution. Now $E(W(r)) = 0$ for all $r \ge 0$ implies $E(Y(t) - Y(s)) = 0$. This lets us calculate the variance of $Y(t) - Y(s)$ as follows:
$$\sigma^2 = \mathrm{Var}(Y(t) - Y(s)) = s^2\,\mathrm{Var}\big(W(\tfrac1s) - W(\tfrac1t)\big) + (t-s)^2\,\mathrm{Var}\big(W(\tfrac1t)\big) = s^2\Big(\frac1s - \frac1t\Big) + (t-s)^2\frac1t = t - s.$$
To verify condition 3 of Def. 2.4 take $0 < t_1 < \dots < t_m$. It is necessary to prove that the components of the vector $Y_m$, $(Y_m)^T = (Y(t_1), Y(t_2) - Y(t_1), \dots, Y(t_m) - Y(t_{m-1}))^T$, are independent random variables. To obtain this property we prove that $Y_m$ has a Gaussian distribution and $\mathrm{Cov}(Y(s), Y(t)) = \min(t, s)$; these facts give the independence of the components of $Y_m$ (see the proof of Proposition 2.13). We have $0 < \frac{1}{t_m} < \frac{1}{t_{m-1}} < \dots < \frac{1}{t_1}$. Hence the components of the vector
$$(\widetilde{Z}_m)^T = \Big(W(\tfrac{1}{t_m}),\, W(\tfrac{1}{t_{m-1}}) - W(\tfrac{1}{t_m}),\, \dots,\, W(\tfrac{1}{t_1}) - W(\tfrac{1}{t_2})\Big)^T$$
are independent and have normal distributions, as increments of a Wiener process. Then the vector $\widetilde{Z}_m$ has a Gaussian distribution. Now it is easy to check the relation $Y_m = BA\widetilde{Z}_m$, where the matrices $A$ and $B$ have the forms
$$A = \begin{pmatrix} t_1 & t_1 & \cdots & t_1 & t_1 \\ t_2 & t_2 & \cdots & t_2 & 0 \\ \vdots & & & & \vdots \\ t_m & 0 & \cdots & 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ -1 & 1 & & \\ & \ddots & \ddots & \\ 0 & & -1 & 1 \end{pmatrix}.$$
Since $\det A = \pm\, t_1 \cdots t_m \ne 0$ and $\det B = 1 \ne 0$, we know by Exercise 2.5 that $Y_m$ is a Gaussian vector. Since the sequence $(t_i)$ was arbitrary, $Y(t)$, $t \ge 0$, is a Gaussian process.

The last condition we have to verify is $\mathrm{Cov}(Y(t), Y(s)) = \min(t, s)$. Let $0 < s \le t$. Then $\mathrm{Cov}(Y(t), Y(s)) = E\big(tW(\tfrac1t)\, sW(\tfrac1s)\big) = ts\min(\tfrac1t, \tfrac1s) = ts \cdot \tfrac1t = s = \min(s, t)$. From the proof of Proposition 2.13 the increments of the process $Y(t)$, $t \ge 0$, are independent.
Exercise 2.7: Show that $X(t) = \sqrt{t}\,Z$ does not satisfy conditions 2, 3 of the definition of the Wiener process.

Solution: Assume $0 \le s < t$. Then we have $X(t) - X(s) = (\sqrt{t} - \sqrt{s})Z$. Hence $E(X(t) - X(s)) = 0$ and $\mathrm{Var}(X(t) - X(s)) = E\big((X(t) - X(s))^2\big) = t + s - 2\sqrt{ts}$. The last equality contradicts Condition 2 in Definition 2.4 of the Wiener process.

To check condition 3 of Def. 2.4, consider the increments $X(t_{k+1}) - X(t_k)$, $k = 1, \dots, m-1$, where $t_{k+1} = (\sqrt{t_k} + 1)^2$, $t_1 \ge 0$. Then $\sqrt{t_{k+1}} - \sqrt{t_k} = 1$ and
$$P\big(X(t_2) - X(t_1) \ge 0, \dots, X(t_m) - X(t_{m-1}) \ge 0\big) = P(Z \ge 0) = \tfrac12,$$
while
$$\prod_{k=1}^{m-1} P\big(X(t_{k+1}) - X(t_k) \ge 0\big) = P(Z \ge 0)^{m-1} = \Big(\tfrac12\Big)^{m-1}.$$
So Condition 3 of Def. 2.4 is not satisfied.
Exercise 2.8: Prove the last claim, i.e. that if $X$, $Y$ have continuous paths and $Y$ is a modification of $X$, then these processes are indistinguishable.

Solution: Suppose $Y$ is a modification of $X$ and $X$ and $Y$ have continuous paths. Let $T_0 = \{t_k : k = 1, 2, \dots\}$ be a dense, countable subset of the time set $T$. We know that the sets $A_k = \{\omega : X(t_k, \omega) = Y(t_k, \omega)\}$, $k = 1, 2, \dots$, have $P(A_k) = 1$ or, equivalently, $P(\Omega \setminus A_k) = 0$. Now take the set $A = \bigcap_{k=1}^\infty A_k$. Since
$$P(\Omega \setminus A) = P\Big(\Omega \setminus \bigcap_{k=1}^\infty A_k\Big) = P\Big(\bigcup_{k=1}^\infty (\Omega \setminus A_k)\Big) \le \sum_{k=1}^\infty P(\Omega \setminus A_k) = 0,$$
we have $P(A) = 1$. If $\omega_0 \in A$, then $\omega_0 \in A_k$ for all $k = 1, 2, \dots$. This means that $X(t, \omega_0) = Y(t, \omega_0)$ for all $t \in T_0$. Since $X(\cdot, \omega_0)$ and $Y(\cdot, \omega_0)$ are continuous functions and $T_0$ is a dense subset of $T$, it follows that $X(t, \omega_0) = Y(t, \omega_0)$ for all $t \in T$. But $\omega_0$ was an arbitrary element of $A$ and $P(A) = 1$, so the processes $X$ and $Y$ are indistinguishable.
Exercise 2.9: Prove that if $M(t)$ is a martingale with respect to $\mathcal{F}_t$, then
$$E(M^2(t) - M^2(s)|\mathcal{F}_s) = E\big([M(t) - M(s)]^2\big|\mathcal{F}_s\big),$$
and in particular
$$E(M^2(t) - M^2(s)) = E\big([M(t) - M(s)]^2\big).$$

Solution: The first equality follows from the relations
$$E\big([M(t) - M(s)]^2\big|\mathcal{F}_s\big) = E(M^2(t) + M^2(s)|\mathcal{F}_s) - 2E(M(s)M(t)|\mathcal{F}_s) \quad \text{(linearity)}$$
$$= E(M^2(t) + M^2(s)|\mathcal{F}_s) - 2M(s)E(M(t)|\mathcal{F}_s) \quad \text{($M(s)$ is $\mathcal{F}_s$-measurable)}$$
$$= E(M^2(t) + M^2(s)|\mathcal{F}_s) - 2M^2(s) \quad \text{($M$ is a martingale)}$$
$$= E(M^2(t) - M^2(s)|\mathcal{F}_s).$$
The second equality follows from the first by the tower property.
Exercise 2.10: Consider a process $X$ on $\Omega = [0,1]$ with Lebesgue measure, given by $X(0, \omega) = 0$ and $X(t, \omega) = 1_{[0, \frac1t]}(\omega)$ for $t > 0$. Find the natural filtration $\mathcal{F}^X_t$ for $X$.

Solution: The definitions of the probability space $([0,1], \mathcal{B}([0,1]), m)$, $m$ Lebesgue measure, and the process $X$ yield
$$\mathcal{F}^{X(s)} = \begin{cases} \{\emptyset, [0,1]\} & \text{for } 0 \le s \le 1 \\ \{\emptyset, [0,1], [0, \tfrac1s], (\tfrac1s, 1]\} & \text{for } s > 1. \end{cases}$$
This implies $\mathcal{F}^X_t = \sigma\big(\bigcup_{0 \le s \le t} \mathcal{F}^{X(s)}\big) = \{\emptyset, [0,1]\}$ for $0 \le t \le 1$. In the case $t > 1$, all intervals $(\frac{1}{s_1}, \frac{1}{s_2}] = (\frac{1}{s_1}, 1] \cap [0, \frac{1}{s_2}]$, where $1 < s_2 \le s_1 \le t$, also belong to $\mathcal{F}^X_t$. Hence we must have $\mathcal{B}((\frac1t, 1]) \subset \mathcal{F}^X_t$ and then $[0, \frac1t] \in \mathcal{F}^X_t$. These conditions give $\mathcal{F}^X_t = \mathcal{B}((\frac1t, 1]) \cup \mathcal{B}^*$, where $\mathcal{B}^* = \{[0,1] \setminus A : A \in \mathcal{B}((\frac1t, 1])\}$, because $\mathcal{B}((\frac1t, 1]) \cup \mathcal{B}^*$ is a $\sigma$-field.
Exercise 2.11: Find $M(t) = E(Z|\mathcal{F}^X_t)$, where $\mathcal{F}^X_t$ is constructed in the previous exercise.

Solution: Let $Z$ be a random variable on the probability space $([0,1], \mathcal{B}([0,1]), m)$ such that $\int_0^1 |Z|\, dm$ exists. We calculate the conditional expectations of $Z$ with respect to the filtration $(\mathcal{F}^X_t)_{t \ge 0}$ defined in Exercise 2.10.

From Exercise 2.10 we know that in the case $t > 1$ every set $A \in \mathcal{F}^X_t$ either belongs to $\mathcal{B}((\frac1t, 1])$ or is of the form $A = [0, \frac1t] \cup C$, where $C \in \mathcal{B}((\frac1t, 1])$. Hence every $\mathcal{F}^X_t$-measurable variable, including $E(Z|\mathcal{F}^X_t)$, must be a constant function when restricted to the interval $[0, \frac1t]$, while restricted to $(\frac1t, 1]$ it is a $\mathcal{B}((\frac1t, 1])$-measurable function. Then, from the definition of the conditional expectation, $\int_{[0, \frac1t]} Z\, dm = \int_{[0, \frac1t]} E(Z|\mathcal{F}^X_t)\, dm = c\cdot\frac1t$, and for every $A \in \mathcal{B}((\frac1t, 1])$ we have $\int_A Z\, dm = \int_A E(Z|\mathcal{F}^X_t)\, dm$. The last equality implies $E(Z|\mathcal{F}^X_t) = Z$ a.e. on $(\frac1t, 1]$. Finally,
$$E(Z|\mathcal{F}^X_t)(\omega) = \begin{cases} t\int_0^{1/t} Z\, dm & \text{for } \omega \in [0, \tfrac1t] \\ Z(\omega) & \text{for } \omega \in (\tfrac1t, 1] \end{cases}$$
a.e. in the case $t > 1$. In the case $t \le 1$, $E(Z|\mathcal{F}^X_t) = E(Z)$ a.e.
Exercise 2.12: Is $Y(t, \omega) = t\omega - \tfrac12 t$ a martingale ($\mathcal{F}^X_t$ as above)? Compute $E(Y(t))$.

Solution: It costs little to compute the expectation: $E(Y(t)) = \int_0^1 (t\omega - \tfrac12 t)\, d\omega = 0$. If the expectation were not constant, we would conclude that the process is not a martingale; however, constant expectation is just a necessary condition, so we have to investigate further. The martingale condition reads $E(Y(t)|\mathcal{F}^X_s) = Y(s)$. Consider $1 < s < t$. The random variable on the left is $\mathcal{F}^X_s$-measurable, so since $[0, \frac1s]$ is an atom of the $\sigma$-field, it has to be constant on this event. However, $Y(s)$ is not constant there (being a linear function of $\omega$), so $Y$ is not a martingale for this filtration.
Exercise 2.13: Prove that for almost all paths of the Wiener process $W$ we have $\sup_{t \ge 0} W(t) = +\infty$ and $\inf_{t \ge 0} W(t) = -\infty$.

Solution: Set $Z = \sup_{t \ge 0} W(t)$. Exercise 2.4 shows that for every $c > 0$ the process $cW(\frac{t}{c^2})$ is also a Wiener process. Hence $cZ$ and $Z$ have the same distribution for all $c > 0$, which implies that $P(0 < Z < \infty) = 0$, and so the distribution of $Z$ is concentrated on $\{0, +\infty\}$. It therefore suffices to show that $P(Z = 0) = 0$. Now we have
$$P(Z = 0) \le P\Big(\{W(1) \le 0\} \cap \bigcap_{u \ge 1}\{W(u) \le 0\}\Big) = P\Big(\{W(1) \le 0\} \cap \{\sup_{t \ge 0}(W(1+t) - W(1)) = 0\}\Big),$$
since the process $Y(t) = W(1+t) - W(1)$ is also a Wiener process, so that its supremum is almost surely $0$ or $+\infty$. But $(Y(t))_{t \ge 0}$ and $(W(t))_{t \in [0,1]}$ are independent, so
$$P(Z = 0) \le P(W(1) \le 0)\, P(\sup_t Y(t) = 0) = P(W(1) \le 0)\, P(Z = 0)$$
(as $Y$ is a Wiener process, $\sup_{t \ge 0} Y(t)$ has the same distribution as $Z$), and since $P(W(1) \le 0) = \tfrac12 < 1$ we get $P(Z = 0) = 0$. The second claim is now immediate, since $-W$ is also a Wiener process.
Exercise 2.14: Use Proposition 2.35 to complete the proof that the inversion of a Wiener process is a Wiener process, by verifying path-continuity at $t = 0$.

Solution: We have to verify that the process
$$Y(t) = \begin{cases} tW(\tfrac1t) & \text{for } t > 0 \\ 0 & \text{for } t = 0 \end{cases}$$
has almost all paths continuous at $0$. This follows from Proposition 2.35, since
$$tW\Big(\frac1t\Big) = \frac{W(\frac1t)}{\frac1t} \to 0 \quad \text{a.s. as } t \to 0^+ \text{ (i.e. as } \tfrac1t \to \infty\text{)}.$$
Exercise 2.15: Let $(\tau_n)_{n \ge 1}$ be a sequence of stopping times. Show that $\sup_n \tau_n$ and $\inf_n \tau_n$ are stopping times.

Erratum: The claim for the infimum as stated in the text is false in general. It requires right-continuity of the filtration, as shown in the proof below.

Solution: $\sup_n \tau_n$ is a stopping time because for all $t \ge 0$, $\{\sup_n \tau_n \le t\} = \bigcap_n \{\tau_n \le t\} \in \mathcal{F}_t$ as an intersection of sets in the $\sigma$-field $\mathcal{F}_t$.

The case of $\inf_n \tau_n$ needs an additional assumption.

Definition. A filtration $(\mathcal{F}_t)_{t \in T}$ is called right-continuous if $\mathcal{F}_{t+} := \bigcap_{s > t} \mathcal{F}_s = \mathcal{F}_t$.

We now prove the following auxiliary result (see also Lemma 2.48 in the text).

Claim. If a filtration $(\mathcal{F}_t)_{t \in T}$ is right-continuous, then $\tau$ is a stopping time for $(\mathcal{F}_t)_{t \in T}$ if and only if for every $t$, $\{\tau < t\} \in \mathcal{F}_t$.

Proof. If $\tau$ is a stopping time, then for every $t$ and $n = 1, 2, \dots$, $\{\tau \le t - \frac1n\} \in \mathcal{F}_{t - \frac1n} \subset \mathcal{F}_t$. Hence $\{\tau < t\} = \bigcup_{n=1}^\infty \{\tau \le t - \frac1n\} \in \mathcal{F}_t$. If $\{\tau < t\} \in \mathcal{F}_t$ for all $t$, then $\{\tau \le t\} = \bigcap_{n=1}^\infty \{\tau < t + \frac1n\} \in \mathcal{F}_{t+} = \mathcal{F}_t$.

This allows us to prove the desired result: if $(\tau_n)_{n \ge 1}$ is a sequence of stopping times for a right-continuous filtration $(\mathcal{F}_t)_{t \in T}$, then $\inf_n \tau_n$ is a stopping time for $(\mathcal{F}_t)_{t \in T}$.

Proof. According to the Claim, since the $\tau_n$ are stopping times, for every $t$ and $n$, $\{\tau_n < t\} \in \mathcal{F}_t$. Hence $\{\inf_n \tau_n < t\} = \bigcup_n \{\tau_n < t\} \in \mathcal{F}_t$ for all $t$, which, again by virtue of the Claim and the right-continuity of the filtration, shows that $\inf_n \tau_n$ is a stopping time.
Exercise 2.16: Verify that $\mathcal{F}_\tau$ is a $\sigma$-field when $\tau$ is a stopping time.

Solution: 1. Because $\mathcal{F}_t$ is a $\sigma$-field, $\Omega \cap \{\tau \le t\} = \{\tau \le t\} \in \mathcal{F}_t$ for all $t$. Then by the definition of $\mathcal{F}_\tau$, $\Omega \in \mathcal{F}_\tau$.
2. If $A \in \mathcal{F}_\tau$, then $A \cap \{\tau \le t\} \in \mathcal{F}_t$ for all $t$. Hence $(\Omega \setminus A) \cap \{\tau \le t\} = \{\tau \le t\} \setminus (A \cap \{\tau \le t\}) \in \mathcal{F}_t$ (both sets are in $\mathcal{F}_t$). Since $t$ was arbitrary, $\Omega \setminus A \in \mathcal{F}_\tau$.
3. If $A_k$, $k = 1, 2, \dots$, belong to $\mathcal{F}_\tau$, then $A_k \cap \{\tau \le t\} \in \mathcal{F}_t$ for all $t$. Now $(\bigcup_{k=1}^\infty A_k) \cap \{\tau \le t\} = \bigcup_{k=1}^\infty (A_k \cap \{\tau \le t\}) \in \mathcal{F}_t$, since $\mathcal{F}_t$ is a $\sigma$-field. Since $t$ was arbitrary, $\bigcup_{k=1}^\infty A_k \in \mathcal{F}_\tau$. Together, 1, 2 and 3 imply that $\mathcal{F}_\tau$ is a $\sigma$-field.
Exercise 2.17: Show that if $\tau \le \nu$ then $\mathcal{F}_\tau \subset \mathcal{F}_\nu$, and that $\mathcal{F}_{\tau \wedge \nu} = \mathcal{F}_\tau \cap \mathcal{F}_\nu$.

Solution: If $A \in \mathcal{F}_\tau$ then $A \cap \{\tau \le t\} \in \mathcal{F}_t$ for all $t$. From the assumption $\tau \le \nu$ it follows that $\{\nu \le t\} \subset \{\tau \le t\}$ and hence $\{\nu \le t\} = \{\tau \le t\} \cap \{\nu \le t\}$ for all $t$. Now $A \cap \{\nu \le t\} = (A \cap \{\tau \le t\}) \cap \{\nu \le t\} \in \mathcal{F}_t$, as $\nu$ is a stopping time and $\mathcal{F}_t$ is a $\sigma$-field. Thus $A \in \mathcal{F}_\nu$. For the equality $\mathcal{F}_{\tau \wedge \nu} = \mathcal{F}_\tau \cap \mathcal{F}_\nu$, note that by the previous result the relations $\tau \wedge \nu \le \tau$, $\tau \wedge \nu \le \nu$ imply $\mathcal{F}_{\tau \wedge \nu} \subset \mathcal{F}_\tau \cap \mathcal{F}_\nu$.

For the reverse inclusion take $A \in \mathcal{F}_\tau \cap \mathcal{F}_\nu$. Hence $A \cap \{\tau \le t\} \in \mathcal{F}_t$ and $A \cap \{\nu \le t\} \in \mathcal{F}_t$ for all $t$. Since $\{\tau \wedge \nu \le t\} = \{\tau \le t\} \cup \{\nu \le t\}$, we have
$$A \cap \{\tau \wedge \nu \le t\} = (A \cap \{\tau \le t\}) \cup (A \cap \{\nu \le t\}) \in \mathcal{F}_t$$
for all $t$, because $\mathcal{F}_t$ is a $\sigma$-field. $A$ was an arbitrary set, so $\mathcal{F}_\tau \cap \mathcal{F}_\nu \subset \mathcal{F}_{\tau \wedge \nu}$, and hence the result follows.
Exercise 2.18: Let $W$ be a Wiener process. Show that the natural filtration is left-continuous: for each $t \ge 0$ we have $\mathcal{F}^W_t = \sigma(\bigcup_{s < t} \mathcal{F}^W_s)$. Deduce that if $\tau_n \uparrow \tau$, where the $\tau_n$ are $\mathcal{F}^W_t$-stopping times, then $\sigma(\bigcup_{n \ge 1} \mathcal{F}^W_{\tau_n}) = \mathcal{F}^W_\tau$.

Solution: Proof of the first statement: for any $s > 0$ the $\sigma$-field $\mathcal{F}^W_s$ is generated by sets of the form $A = \{(W(u_1), W(u_2), \dots, W(u_n)) \in B\}$, where $B \in \mathcal{B}(\mathbb{R}^n)$ and $(u_i)$ is a partition of $[0, s]$. Now fix $t > 0$. By left-continuity of the paths of $W$ we know that $W(t) = \lim_{s_m \uparrow t} W(s_m)$ a.s., so the set $A$ belongs to $\sigma(\bigcup_{m=1}^\infty \mathcal{F}^W_{s_m}) \subset \sigma(\bigcup_{s < t} \mathcal{F}^W_s)$. So this $\sigma$-field contains the generators of $\mathcal{F}^W_t$, hence contains $\mathcal{F}^W_t$. The opposite inclusion is true for any filtration $(\mathcal{F}_t)_{t \ge 0}$, since $\mathcal{F}_s \subset \mathcal{F}_t$ for $s < t$ gives $\sigma(\bigcup_{s < t} \mathcal{F}_s) \subset \mathcal{F}_t$.

Erratum: The second statement should be deleted. The claim holds only for quasi-left-continuous filtrations, which involves concepts well beyond the scope of this text. (See Dellacherie–Meyer, Probabilities and Potential, Vol. 2, Theorem 83, p. 217.)
Exercise 2.19: Show that if $X(t)$ is Markov then for any $0 \le t_0 < t_1 < \dots < t_N \le T$ the sequence $(X(t_n))_{n = 0, \dots, N}$ is a discrete-time Markov process.

Solution: We have to verify that the discrete process $(X(t_n))$, $n = 0, 1, \dots, N$, is a discrete Markov process with respect to the filtration $(\mathcal{F}_{t_n})$, $n = 0, 1, \dots, N$. Let $f$ be a bounded Borel function $f : \mathbb{R} \to \mathbb{R}$. Since $X$ is a Markov process, it follows that $E(f(X(t_{n+1}))|\mathcal{F}_{t_n}) = E(f(X(t_{n+1}))|\mathcal{F}^{X(t_n)})$ for all $n$. But this means that $(X(t_n))_n$ is a Markov chain (a discrete-parameter Markov process).
Exercise 2.20: Let $W$ be a Wiener process. Show that for $x \in \mathbb{R}$, $t \ge 0$, $M(t) = \exp\{ixW(t) + \tfrac12 x^2 t\}$ defines a martingale with respect to the natural filtration of $W$. (Recall from [PF] that expectations of complex-valued random variables are defined via taking the expectations of their real and imaginary parts separately.)

Solution: Recall that $Z : \Omega \to \mathbb{C}$, where $\mathbb{C}$ is the set of complex numbers, is a complex-valued random variable if $Z = X_1 + iX_2$ and $X_1$, $X_2$ are real-valued random variables. $Z$ has mean value $E(Z)$ if $X_1$, $X_2$ have mean values, and then $E(Z) = E(X_1) + iE(X_2)$. If $\mathcal{G}$ is a $\sigma$-subfield of $\mathcal{F}$ we also have $E(Z|\mathcal{G}) = E(X_1|\mathcal{G}) + iE(X_2|\mathcal{G})$, while $(Z(t))_t$ is a complex-valued process (martingale) if $X_1$, $X_2$ are real processes (real martingales for the same filtration) and $Z(t) = X_1(t) + iX_2(t)$.

Now for Exercise 2.20 take $0 \le s < t$ and denote $F(u, v) = e^{ix(u+v) + \frac12 x^2 t}$, where $u, v \in \mathbb{R}$. Let $\mathrm{Re}\,F(u, v) = F_1(u, v)$ and $\mathrm{Im}\,F(u, v) = F_2(u, v)$ be the real and imaginary parts of $F(u, v)$. Write $Y = W(t) - W(s)$, $X = W(s)$, $\mathcal{F}^W_s = \mathcal{G}$. We prove that $(M(t))_t$ is a martingale for $(\mathcal{F}^W_t)_t$. Using our notation we can write
$$E(M(t)|\mathcal{F}^W_s) = E(F(W(s), W(t) - W(s))|\mathcal{F}^W_s) = E(F_1(X, Y)|\mathcal{G}) + iE(F_2(X, Y)|\mathcal{G}).$$
The variable $Y$ is independent of the $\sigma$-field $\mathcal{G}$, $X$ is $\mathcal{G}$-measurable and the mappings $F_1$, $F_2$ are measurable (continuous) and bounded. So we have $E(F_1(X, Y)|\mathcal{G}) = G_1(X)$, $E(F_2(X, Y)|\mathcal{G}) = G_2(X)$, where $G_1(u) = E(F_1(u, Y))$, $G_2(u) = E(F_2(u, Y))$. Setting $g = G_1 + iG_2$ we have the formula $E(M(t)|\mathcal{F}^W_s) = g(X)$, where
$$g(u) = E(F(u, Y)) = E\big(e^{ix(u+Y) + \frac12 x^2 t}\big) = e^{ixu + \frac12 x^2 t} E(e^{ixY}).$$
Since the distribution of $Y$ is the same as that of $W(t-s)$ and $E(e^{ixW(t-s)})$ is nothing other than the value at $x$ of the characteristic function of an $N(0, t-s)$-distributed random variable, we obtain $E(e^{ixY}) = e^{-\frac12(t-s)x^2}$ [PF]. Hence $g(u) = e^{ixu + \frac12 s x^2}$. Finally, $E(M(t)|\mathcal{F}^W_s) = e^{ixW(s) + \frac12 s x^2} = M(s)$. Since $0 \le s < t$ were arbitrary, $(M(t))_t$ is a martingale.
Chapter 3
Exercise 3.1: Prove that $W$ and $W^2$ are in $\mathcal{M}^2$.

Solution: $W$, $W^2$ are measurable (continuous, adapted) processes. Since $E(W^2(t)) = t$ and $E(W^4(t)) = 3t^2$ (see [PF]), by the Fubini theorem we obtain
$$E\Big(\int_0^T W^2(t)\, dt\Big) = \int_0^T E(W^2(t))\, dt = \int_0^T t\, dt = \frac{T^2}{2},$$
$$E\Big(\int_0^T W^4(t)\, dt\Big) = \int_0^T E(W^4(t))\, dt = \int_0^T 3t^2\, dt = T^3.$$
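As an illustration (not part of the original solution), both expectations can be estimated by Monte Carlo with a simple discretisation of the Wiener path; the step and trial counts below are assumed example values.

```python
# Sketch: Monte Carlo check of E(int_0^T W^2 dt) = T^2/2 and E(int_0^T W^4 dt) = T^3.
import random, math

random.seed(0)
T, steps, trials = 1.0, 200, 5_000
dt = T / steps
sum2 = sum4 = 0.0
for _ in range(trials):
    w, i2, i4 = 0.0, 0.0, 0.0
    for _ in range(steps):
        w += random.gauss(0.0, math.sqrt(dt))
        i2 += w * w * dt
        i4 += w ** 4 * dt
    sum2 += i2
    sum4 += i4
print(sum2 / trials, T**2 / 2)   # approximately 0.5
print(sum4 / trials, T**3)       # approximately 1.0
```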
Exercise 3.2: Prove that in general $I(f)$ does not depend on a particular representation of $f$.

Solution: A sequence $0 = t_0 < t_1 < \dots < t_n = T$ is called a partition of the interval $[0, T]$. We will denote this partition by $(t_i)$. A partition $(u_k)$ of $[0, T]$ is a refinement of the partition $(t_i)$ if the inclusion $\{t_i\} \subset \{u_k\}$ holds. Let $f$ be a simple process on $[0, T]$, $f \in \mathcal{S}^2$, and let $(t_i)$ be a partition of $[0, T]$ compatible with $f$. The latter means that $f$ can be written in the form
$$f(t, \omega) = \xi_0(\omega)1_{\{0\}}(t) + \sum_{i=0}^{n-1} \xi_i(\omega)1_{(t_i, t_{i+1}]}(t),$$
where $\xi_i$ is $\mathcal{F}_{t_i}$-measurable and $\xi_i \in L^2(\Omega)$. Note that $f(t_{i+1}) = \xi_i$ for $i \ge 0$.

To emphasise the presence of the partition in the definition of the integral of $f$ we also write $I(f) = I_{(t_i)}(f)$. If a partition $(u_k)$ is a refinement of a partition $(t_i)$ and $(t_i)$ is compatible with $f$, then $(u_k)$ is also compatible with $f$. This is because for each $i$ there exists $k(i)$ such that $t_i = u_{k(i)}$. Since $f(t) = f(t_{i+1}) = \xi_i$ for $t_i < t \le t_{i+1}$, it follows that $f(u_k) = f(t_{i+1}) = \xi_i$ and $f(u_k)$ is $\mathcal{F}_{t_i}$-measurable, with $\mathcal{F}_{t_i} \subset \mathcal{F}_{u_{k-1}}$, for all $k$ such that $t_i = u_{k(i)} < u_k \le u_{k(i+1)} = t_{i+1}$. Additionally,
$$\xi_0 1_{\{0\}}(t) + \sum_{k=0}^{p-1} f(u_{k+1})1_{(u_k, u_{k+1}]}(t) = \xi_0 1_{\{0\}}(t) + \sum_{i=0}^{n-1}\Big(\sum_{k(i) \le k < k(i+1)} \xi_i 1_{(u_k, u_{k+1}]}(t)\Big) = f(t).$$
Now for the integral of $f$ we have
$$I_{(u_k)}(f) = \sum_{k=0}^{p-1} f(u_{k+1})\big(W(u_{k+1}) - W(u_k)\big) = \sum_{i=0}^{n-1}\Big(\sum_{k(i) \le k < k(i+1)} \xi_i\big(W(u_{k+1}) - W(u_k)\big)\Big) = \sum_{i=0}^{n-1} \xi_i\big(W(t_{i+1}) - W(t_i)\big) = I_{(t_i)}(f).$$
Returning to our exercise, let $(t_i)$ and $(s_j)$ be partitions of $[0, T]$ compatible with $f$. We can construct the partition $(v_k)$, where $\{v_k\} = \{t_i\} \cup \{s_j\}$ and the elements $v_k$ are ordered as real numbers. Then the partition $(v_k)$ is a refinement of both partitions $(t_i)$ and $(s_j)$. By the previous results we have $I_{(t_i)}(f) = I_{(v_k)}(f) = I_{(s_j)}(f)$. Thus the Itô integral of a simple process is independent of the representation of that process.
Exercise 3.3: Give a proof for the general case (i.e., linearity of the integral for simple functions).

Solution: We prove two implications.
1. If $f \in \mathcal{S}^2$ and $\alpha \in \mathbb{R}$, then $\alpha f \in \mathcal{S}^2$ and $I(\alpha f) = \alpha I(f)$.
2. If $f, g \in \mathcal{S}^2$, then $f + g \in \mathcal{S}^2$ and $I(f + g) = I(f) + I(g)$.

Proof of 1: If $f(t) = \xi_0 1_{\{0\}}(t) + \sum_{i=0}^{n-1} \xi_i 1_{(t_i, t_{i+1}]}(t)$, where $\xi_i$ is $\mathcal{F}_{t_i}$-measurable, then $\alpha\xi_i$ is $\mathcal{F}_{t_i}$-measurable and square-integrable, so $\alpha f \in \mathcal{S}^2$, and
$$I(\alpha f) = \sum_{i=0}^{n-1} \alpha\xi_i\big(W(t_{i+1}) - W(t_i)\big) = \alpha\sum_{i=0}^{n-1} \xi_i\big(W(t_{i+1}) - W(t_i)\big) = \alpha I(f).$$
Proof of 2: We use the notation and the results of Exercise 3.2. Let $(t_i)$ and $(s_j)$ be partitions (of the interval $[0, T]$) compatible with the processes $f$ and $g$ respectively. We can construct a partition $(v_k)$ which is a refinement of both $(t_i)$ and $(s_j)$ (as was shown in Exercise 3.2). Then $f + g \in \mathcal{S}^2$ and
$$I(f) + I(g) = I_{(t_i)}(f) + I_{(s_j)}(g) = I_{(v_k)}(f) + I_{(v_k)}(g) = I_{(v_k)}(f + g) = I(f + g).$$
Exercise 3.4: Prove that for $\int_a^b f(t)\, dW(t)$, $[a, b] \subset [0, T]$, we have
$$E\Big[\int_a^b f(t)\, dW(t)\Big] = 0, \qquad E\Big[\Big(\int_a^b f(t)\, dW(t)\Big)^2\Big] = E\Big[\int_a^b f^2(t)\, dt\Big].$$

Solution: Since for $f \in \mathcal{S}^2$ the process $1_{[a,b]} f$ belongs to $\mathcal{S}^2$, it follows that $E(\int_a^b f\, dW) = E(\int_0^T 1_{[a,b]} f\, dW)$ (definition) $= 0$ (Theorem 3.9). Similarly, $E\big((\int_a^b f\, dW)^2\big) = E\big((\int_0^T 1_{[a,b]} f\, dW)^2\big)$ (definition) $= E(\int_0^T 1_{[a,b]} f^2\, dt)$ (Theorem 3.10) $= E(\int_a^b f^2\, dt)$.
Exercise 3.5: Prove that the stochastic integral does not depend on the choice of the sequence $f_n$ approximating $f$.

Solution: Assume $f \in \mathcal{M}^2$ and let $(f_n)$ and $(g_n)$ be two sequences approximating $f$. That is, $f_n, g_n \in \mathcal{S}^2$ for all $n$ and $E(\int_0^T (f_n - f)^2\, dt) \to_n 0$, $E(\int_0^T (f - g_n)^2\, dt) \to_n 0$. The last two relations imply, by the inequality $(a + b)^2 \le 2a^2 + 2b^2$,
$$E\Big(\int_0^T (f_n - g_n)^2\, dt\Big) \le 2E\Big(\int_0^T (f_n - f)^2\, dt\Big) + 2E\Big(\int_0^T (f - g_n)^2\, dt\Big) \to_n 0.$$
By assumption, the integrals $I(g_n)$, $I(f_n)$ exist, and the limits $\lim_n I(f_n)$ and $\lim_n I(g_n)$ exist in $L^2(\Omega)$. We want to prove that $\lim_n I(f_n) = \lim_n I(g_n)$. Now
$$E\big((I(f_n) - I(g_n))^2\big) = E\Big(\Big(\int_0^T (f_n(t) - g_n(t))\, dW(t)\Big)^2\Big) \quad \text{(linearity in $\mathcal{S}^2$)}$$
$$= E\Big(\int_0^T (f_n(t) - g_n(t))^2\, dt\Big) \quad \text{(isometry in $\mathcal{S}^2$)} \to 0 \text{ as } n \to \infty.$$
The last convergence was shown above. Thus $\lim_n I(f_n) = \lim_n I(g_n)$ in $L^2(\Omega)$-norm.
Exercise 3.6: Show that
_
t
0
sdW(s) = tW(t)
_
t
0
W(s)ds.
Solution: We have to choose an approximating sequence for the integrand and then calculate the limit of the Itô integrals of that sequence. Denote f(s) = s and take
f_n(s) = Σ_{i=0}^{n-1} (it/n) 1_{(it/n, (i+1)t/n]}(s) for 0 < s ≤ t,  f_n(0) = 0.
Then the f_n are simple functions and f_n ∈ S^2 (they do not depend on ω). (f_n) is an approximating sequence for f because |f_n(s) - f(s)| ≤ t/n for all 0 ≤ s ≤ t. This inequality gives E(∫_0^t (f(s) - f_n(s))^2 ds) ≤ (t^2/n^2)·t → 0 as n → ∞. Now we calculate lim_n I(f_n). According to the definition of the integral of a simple function,
I(f_n) = Σ_{i=0}^{n-1} (it/n)[W((i+1)t/n) - W(it/n)] = tW(t) - Σ_{i=1}^{n} (t/n) W(it/n) → tW(t) - ∫_0^t W(s)ds  a.e.
This holds because for almost all ω the path W(·, ω) is a continuous function and Σ_{i=1}^{n} (t/n) W(it/n) is a Riemann approximating sum for the integral of W(·, ω).
Convergence with probability one is not sufficient; we need convergence in the L^2(Ω)-norm. We verify the Cauchy condition to prove L^2(Ω)-norm convergence of (I(f_n)):
E((I(f_n) - I(f_m))^2) = E((I(f_n - f_m))^2)  (linearity in S^2)
= E[(∫_0^t (f_n(s) - f_m(s))dW(s))^2] = E(∫_0^t (f_n(s) - f_m(s))^2 ds)  (Itô isometry).
Since f_n → f in the L^2([0, t] × Ω)-norm, the sequence (f_n) satisfies the Cauchy condition there, and the Itô isometry guarantees that the sequence of integrals satisfies the Cauchy condition in L^2(Ω). Then (I(f_n))_n converges in the L^2(Ω)-norm. But the limits of a sequence convergent with probability one and at the same time convergent in L^2(Ω)-norm must be the same. Thus
∫_0^t s dW(s) = lim_n I(f_n) = tW(t) - ∫_0^t W(s)ds.
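The identity can also be illustrated numerically on a single simulated path; the following minimal Python sketch, assuming t = 1 and n = 100000 steps, compares the Itô sum for ∫_0^t s dW(s) with tW(t) minus a Riemann sum for ∫_0^t W(s)ds.

import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 100_000
dt = t / n
times = np.linspace(0.0, t, n + 1)
dW = rng.normal(0.0, np.sqrt(dt), n)            # Brownian increments over each step
W = np.concatenate(([0.0], np.cumsum(dW)))      # path values W(t_i)
lhs = np.sum(times[:-1] * dW)                   # Ito sum: integrand evaluated at left endpoints
rhs = t * W[-1] - np.sum(W[:-1] * dt)           # t*W(t) minus a Riemann sum for the ds-integral
print(lhs, rhs)                                 # the two values agree up to discretisation error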
Exercise 3.7: Compute the variance of the random variable ∫_0^T (W(t) - t)dW(t).
Solution: In order to calculate the variance of a random variable we need its mean value. Denote by (f_n) an approximating sequence for the process f(t) = W(t) - t. From the definition of the Itô integral, I(f_n) → I(f) in L^2(Ω)-norm. We have the inequalities
|E(I(f)) - E(I(f_n))| ≤ E(|I(f) - I(f_n)|) ≤ (E((I(f) - I(f_n))^2))^{1/2}  (Schwarz inequality)  → 0 as n → ∞.
Since E(I(f_n)) = 0 for all n (Theorem 3.9), it follows that E(I(f)) = 0. Now we calculate
Var(I(f)) = E((I(f))^2)  (since E(I(f)) = 0)
= E(∫_0^T f^2(t)dt)  (Itô isometry)
= ∫_0^T E((W(t) - t)^2)dt = ∫_0^T (E(W^2(t)) + t^2)dt = ∫_0^T (t + t^2)dt = T^2/2 + T^3/3.
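A minimal Monte Carlo sketch, assuming T = 1, a grid of 500 steps and 10000 sample paths, illustrates the result: the empirical mean of the Itô sums is near 0 and their empirical variance is near T^2/2 + T^3/3.

import numpy as np

rng = np.random.default_rng(1)
T, n, paths = 1.0, 500, 10_000
dt = T / n
t_left = np.linspace(0.0, T, n + 1)[:-1]      # left endpoints t_0, ..., t_{n-1}
dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
W_left = np.cumsum(dW, axis=1) - dW           # W(t_i): increments accumulated before step i
I = np.sum((W_left - t_left) * dW, axis=1)    # Ito sums of (W(t) - t)dW(t), one per path
print(I.mean())                               # close to 0
print(I.var(), T**2 / 2 + T**3 / 3)           # close to 0.8333...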
Exercise 3.8: Prove that if f, g ∈ M^2 and α, β are real numbers, then I(αf + βg) = αI(f) + βI(g).
Solution: Let (f_n) and (g_n), f_n, g_n ∈ S^2, be approximating sequences for f and g, respectively, and fix α, β ∈ R. Then f_n → f and g_n → g in the L^2([0, T] × Ω)-norm. Hence the sequence (αf_n + βg_n) converges in the L^2([0, T] × Ω)-norm to αf + βg, and of course αf_n + βg_n ∈ S^2. So (αf_n + βg_n) is an approximating sequence for the process αf + βg. From the definition of the integral we obtain
I(αf_n + βg_n) → I(αf + βg)
and
I(αf_n + βg_n) = αI(f_n) + βI(g_n)  (linearity in S^2)  → αI(f) + βI(g)
in the L^2(Ω)-norm. Since the limit of a convergent sequence in a normed vector space is unique, it follows that
αI(f) + βI(g) = I(αf + βg).
Exercise 3.9: Prove that for f ∈ M^2 and a < c < b,
∫_a^b f(s)dW(s) = ∫_a^c f(s)dW(s) + ∫_c^b f(s)dW(s).
Solution: Let a < c < b. Then
∫_a^c f(s)dW(s) + ∫_c^b f(s)dW(s) = ∫_0^T f(s)1_{[a,c]}(s)dW(s) + ∫_0^T f(s)1_{[c,b]}(s)dW(s)
= ∫_0^T (f(s)1_{[a,c]}(s) + f(s)1_{[c,b]}(s))dW(s)  (linearity)
= ∫_0^T (f(s)1_{[a,c]}(s) + f(s)1_{(c,b]}(s))dW(s)
= ∫_0^T f(s)1_{[a,b]}(s)dW(s) = ∫_a^b f(s)dW(s).
Note that a change of the value of an integrand at a single point has no influence on the value of the integral. For example, let (f_n) be an approximating sequence for f ∈ M^2 and let 0 = t_0^{(n)} < t_1^{(n)} < ... < t_n^{(n)} = T be the partition for f_n. Then
E((I(f_n) - I(f_n 1_{(0,T]}))^2) = E((f(0)(W(t_1^{(n)}) - W(0)))^2)
= E(f^2(0) E(W^2(t_1^{(n)}) | F_0)) = E(f^2(0)) E(W^2(t_1^{(n)}))  (W^2(t_1^{(n)}) is independent of F_0)
= E(f^2(0)) t_1^{(n)} → 0 as n → ∞.
Exercise 3.10: Show that the process M(t) = ∫_0^t sin(W(s))dW(s) is a martingale.
Solution: Let 0 < s < t. We have to prove the equality
E(∫_0^t sin(W(u))dW(u) | F_s) = ∫_0^s sin(W(u))dW(u).
Beginning with the equality
∫_0^t sin(W(u))dW(u) = ∫_0^s sin(W(u))dW(u) + ∫_s^t sin(W(u))dW(u) = α + β,
we see that to solve the problem it is enough to show that E(α | F_s) = α and E(β | F_s) = 0. The first equality means that α should be an F_s-measurable random variable. To prove it, let (f_n)_n, f_n ∈ S^2, be an approximating sequence for the process W on the interval [0, s]. Then (sin(f_n))_n is an approximating sequence for sin(W) and of course sin(f_n) ∈ S^2. The integrals α_n = I(sin(f_n)) converge in L^2(Ω)-norm to α and they are of the form α_n = Σ_{i=1}^{m(n)-1} ξ_i^{(n)} (W(t_{i+1}^{(n)}) - W(t_i^{(n)})), where t_i^{(n)} ≤ s and ξ_i^{(n)} is F_{t_i^{(n)}}-measurable with F_{t_i^{(n)}} ⊂ F_s. Then the α_n are F_s-measurable variables, and as a consequence α must be an F_s-measurable variable.
For the equality E(β | F_s) = 0, let (g_n)_n, g_n ∈ S^2, g_n = 1_{[s,t]} g_n, be an approximating sequence for the process W on the interval [s, t]. Then, similarly as in the previous case, the integrals β_n = I(1_{[s,t]} sin(g_n)) are of the form β_n = Σ_{j=1}^{l(n)-1} η_j^{(n)} (W(s_{j+1}^{(n)}) - W(s_j^{(n)})), where η_j^{(n)} is F_{s_j^{(n)}}-measurable and s ≤ s_j^{(n)} ≤ t for all j and n. These conditions and the definition of the Wiener process W adapted to the filtration (F_t) imply that the variable W(s_{j+1}^{(n)}) - W(s_j^{(n)}) is independent of the σ-field F_{s_j^{(n)}} for all s_j^{(n)} and n. This property and the fact that η_j^{(n)} is F_{s_j^{(n)}}-measurable imply that
E(β_n | F_s) = Σ_{j=1}^{l(n)-1} E(η_j^{(n)} (W(s_{j+1}^{(n)}) - W(s_j^{(n)})) | F_s)
and further
E(η_j^{(n)} (W(s_{j+1}^{(n)}) - W(s_j^{(n)})) | F_s)
= E(E(η_j^{(n)} (W(s_{j+1}^{(n)}) - W(s_j^{(n)})) | F_{s_j^{(n)}}) | F_s)  (tower property)
= E(η_j^{(n)} E(W(s_{j+1}^{(n)}) - W(s_j^{(n)}) | F_{s_j^{(n)}}) | F_s)
= E(η_j^{(n)} E(W(s_{j+1}^{(n)}) - W(s_j^{(n)})) | F_s)  (independence)
= E(W(s_{j+1}^{(n)}) - W(s_j^{(n)})) E(η_j^{(n)} | F_s) = 0
for all j and n, because E(W(s_{j+1}^{(n)}) - W(s_j^{(n)})) = 0. Thus E(β_n | F_s) = 0 for all n. The convergence β_n → β in L^2(Ω)-norm implies E(β_n | F_s) → E(β | F_s) in L^2(Ω) (see [PF]). Hence E((E(β | F_s))^2) = 0, which gives E(β | F_s) = 0 almost everywhere. The proof is complete.
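A rough Monte Carlo illustration (not a proof), assuming s = 1/2, t = 1 and a crude grid: the increment M(t) - M(s) should have mean near 0 and be essentially uncorrelated with M(s), as expected for a martingale.

import numpy as np

rng = np.random.default_rng(2)
T, n, paths = 1.0, 500, 10_000
dt = T / n
k = n // 2                                  # grid index of s = 0.5
dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
W_left = np.cumsum(dW, axis=1) - dW         # W at the left endpoint of each step
steps = np.sin(W_left) * dW                 # increments of the Ito sums for M
M_s = steps[:, :k].sum(axis=1)              # approximation of M(s)
M_t = steps.sum(axis=1)                     # approximation of M(t)
print((M_t - M_s).mean())                   # close to 0
print(np.corrcoef(M_t - M_s, M_s)[0, 1])    # close to 0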
Exercise 3.11: For each t in [0, T] compare the mean and variance of the Itô integral ∫_0^t W(s)dW(s) with those of the random variable (1/2)(W(t)^2 - t).
Solution: We carry out the computation for t = T; the argument for a general t is identical. We have E(∫_0^T W(s)dW(s)) = 0 (Theorem 3.14). As a consequence,
Var(∫_0^T W(s)dW(s)) = E[(∫_0^T W(s)dW(s))^2] = E(∫_0^T W^2(s)ds)  (isometry)
= ∫_0^T E(W^2(s))ds = ∫_0^T s ds = T^2/2.
For the second random variable we obtain
E((1/2)(W^2(T) - T)) = (1/2)(E(W^2(T)) - T) = (1/2)(T - T) = 0
and so
Var((1/2)(W^2(T) - T)) = (1/4) Var(W^2(T)) = (1/4)[E((W^2(T))^2) - (E(W^2(T)))^2] = (1/4)[E(W^4(T)) - T^2] = (1/4)(3T^2 - T^2) = T^2/2.
Thus the two random variables have the same mean and the same variance.
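The equality of the first two moments can also be seen by simulation; the sketch below assumes T = 1, a grid of 500 steps and 20000 paths, and compares Itô sums for ∫_0^T W dW with the variable (W(T)^2 - T)/2.

import numpy as np

rng = np.random.default_rng(3)
T, n, paths = 1.0, 500, 20_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
W_left = np.cumsum(dW, axis=1) - dW          # W at the left endpoint of each step
ito = np.sum(W_left * dW, axis=1)            # Ito sums for int_0^T W dW
closed = 0.5 * (dW.sum(axis=1)**2 - T)       # (W(T)^2 - T)/2
print(ito.mean(), ito.var())                 # approximately 0 and T^2/2 = 0.5
print(closed.mean(), closed.var())           # approximately 0 and 0.5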
Exercise 3.12: Use the identity 2a(b - a) = (b^2 - a^2) - (b - a)^2 and appropriate approximating partitions to show from first principles that ∫_0^T W(s)dW(s) = (1/2)(W(T)^2 - T).
Solution: Since the process W belongs to M^2, we know that the integral of W exists and it is enough to calculate the limit of the integrals of an approximating sequence (f_n)_n. We take f_n, n = 1, 2, ..., given by the partitions t_i^{(n)} = iT/n, i = 0, 1, ..., n, that is,
f_n(t) = Σ_{i=0}^{n-1} W(t_i^{(n)}) 1_{(t_i^{(n)}, t_{i+1}^{(n)}]}(t),  f_n(0) = 0,  n = 1, 2, ....
It is easy to verify that f_n → W in the L^2([0, T] × Ω)-norm. Since ∫_0^T W(s)dW(s) = lim_n I(f_n) in L^2(Ω), it is enough to prove that
I(f_n) = Σ_{i=0}^{n-1} W(t_i^{(n)}) (W(t_{i+1}^{(n)}) - W(t_i^{(n)})) → (1/2)(W^2(T) - T)  in L^2(Ω)-norm.
The identity 2a(b - a) = (b^2 - a^2) - (b - a)^2 lets us write the Itô sum I(f_n) as follows:
I(f_n) = (1/2) Σ_{i=0}^{n-1} (W^2(t_{i+1}^{(n)}) - W^2(t_i^{(n)})) - (1/2) Σ_{i=0}^{n-1} (W(t_{i+1}^{(n)}) - W(t_i^{(n)}))^2 = (1/2) W^2(T) - (1/2) ξ_n,
where ξ_n = Σ_{i=0}^{n-1} (W(t_{i+1}^{(n)}) - W(t_i^{(n)}))^2. Then it is sufficient to show that E((ξ_n - T)^2) → 0. Since E((W(t_{i+1}^{(n)}) - W(t_i^{(n)}))^2) = E(W^2(t_{i+1}^{(n)} - t_i^{(n)})) (the same distributions) = E(W^2(T/n)) = T/n, we obtain E(ξ_n) = T. Hence
E((ξ_n - T)^2) = Var(ξ_n) = Σ_{i=0}^{n-1} Var((W(t_{i+1}^{(n)}) - W(t_i^{(n)}))^2)  (independence)
= Σ_{i=0}^{n-1} Var(W^2(t_{i+1}^{(n)} - t_i^{(n)}))  (the same distributions)
= Σ_{i=0}^{n-1} Var(W^2(T/n)) = n[E(W^4(T/n)) - (E(W^2(T/n)))^2] = n[3(T/n)^2 - (T/n)^2] = 2T^2/n → 0
as n → ∞. The proof is complete.
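A short numerical sketch of the key estimate, assuming T = 1 and 20000 sample paths: ξ_n has mean T and E((ξ_n - T)^2) = 2T^2/n, so it concentrates around T as the partition is refined.

import numpy as np

rng = np.random.default_rng(4)
T, paths = 1.0, 20_000
for n in (10, 100, 1000):
    dW = rng.normal(0.0, np.sqrt(T / n), (paths, n))   # increments over a partition with mesh T/n
    xi = np.sum(dW**2, axis=1)                         # xi_n for each simulated path
    print(n, xi.mean(), ((xi - T)**2).mean(), 2 * T**2 / n)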
Exercise 3.13: Give a direct proof of the conditional Itô isometry (Theorem 3.20): if f ∈ M^2 and [a, b] ⊂ [0, T], then
E[(∫_a^b f(s)dW(s))^2 | F_a] = E(∫_a^b f^2(s)ds | F_a),
following the method used for proving the unconditional Itô isometry.
following the method used for proving the unconditional It o isometry.
Solution: (Conditional It o isometry.) The proof has two steps. In step
1. we prove the theorem for f S
2
. In this case, let a = t
0
< t
1
< . . . <
t
n
= b be a partition of [a, b] and let f be of the form f (t) =
0
1
{a}
(t) +
_
n1
k=0

k
1
(t
k
,t
k+1
]
(t), where, for k < n,
k
is an F
t
k
-measurable variable. Then
we can calculate similarly as in Theorem 3.10
E
_

_
__
b
a
f (s)dW(s)
_
2
|F
a
_

_
= E
_

_
_

_
n1

k=0

k
(W(t
k+1
) W(t
k
))
_

_
2
|F
a
_

_
=
n1

k=0
E([
k
(W(t
k+1
) W(t
k
))]
2
|F
a
)
+2

i<k
E([
i

k
(W(t
i+1
) W(t
i
))(W(t
k+1
) W(t
k
))]|F
a
) = A + 2B.
Consider A.We have
E(
2
k
(W(t
k+1
) W(t
k
))
2
)|F
a
)
= E(E(
2
k
(W(t
k+1
) W(t
k
))
2
|F
t
k
)|F
a
) (tower property)
= E(
2
k
(E[(W(t
k+1
) W(t
k
))
2
|F
t
k
]|F
a
) (
2
k
is F
t
k
-measurable)
= E(
2
k
(E[W(t
k+1
) W(t
k
)]
2
)|F
a
) (independence)
= E((W(t
k+1
t
k
))
2
)E(
2
a
|F
a
) (linearity, the same distribution)
= (t
k+1
t
k
)E(
2
k
|F
a
) = E(
2
k
(t
k+1
t
k
)|F
a
).
This proves that
A =
n1

k=0
E(
2
k
(t
k+1
t
k
)|F
a
) = E(
n1

k=0

2
k
(t
k+1
t
k
)|F
a
)
= E
__
b
a
f
2
(t)dt|F
a
_
.
Now for B we have
E(ξ_i ξ_k (W(t_{i+1}) - W(t_i))(W(t_{k+1}) - W(t_k)) | F_a)
= E(E[ξ_i ξ_k (W(t_{i+1}) - W(t_i))(W(t_{k+1}) - W(t_k)) | F_{t_k}] | F_a)  (tower property)
= E(ξ_i ξ_k (W(t_{i+1}) - W(t_i)) E(W(t_{k+1}) - W(t_k) | F_{t_k}) | F_a)  (the terms with index i < k are F_{t_k}-measurable)
= 0,
because W(t_{k+1}) - W(t_k) is independent of the σ-field F_{t_k}, hence E(W(t_{k+1}) - W(t_k) | F_{t_k}) = E(W(t_{k+1}) - W(t_k)) = 0. Hence also B = 0.
Step 2. The general case. Let f ∈ M^2 and let (f_n) be a sequence of approximating processes for f on [a, b]. Then f_n ∈ S^2, n = 1, 2, ..., and ||f - f_n||_{L^2([a,b] × Ω)} → 0 as n → ∞. The last condition implies ||I(1_{[a,b]}(f - f_n))||_{L^2(Ω)} → 0.
We now want to use the conditional isometry for f_n and take the limit as n → ∞. This needs the following general observation.
Observation. Let (Z_n) be a sequence of random variables on a probability space (Ω', F', P') and let G be a sub-σ-field of F'. If Z_n ∈ L^1(Ω'), n = 1, 2, ..., and Z_n → Z in L^1(Ω')-norm, then E(Z_n | G) → E(Z | G) in L^1(Ω')-norm.
Proof. First note that E(Z_n | G) and E(Z | G) belong to L^1(Ω'). Now we have
E(|E(Z_n | G) - E(Z | G)|) = E(|E(Z_n - Z | G)|) ≤ E(E(|Z_n - Z| | G)) = E(|Z_n - Z|) → 0 as n → ∞.
To use our Observation we have to verify that [I(1_{[a,b]} f_n)]^2 → [I(1_{[a,b]} f)]^2 and ∫_0^T (1_{[a,b]} f_n)^2 ds → ∫_0^T (1_{[a,b]} f)^2 ds in L^1(Ω)-norm. The following relations hold:
E(|[I(1_{[a,b]} f_n)]^2 - [I(1_{[a,b]} f)]^2|) = E(|I(1_{[a,b]}(f_n - f))| |I(1_{[a,b]}(f_n + f))|)
≤ (E([I(1_{[a,b]}(f_n - f))]^2))^{1/2} (E([I(1_{[a,b]}(f_n + f))]^2))^{1/2}  (Schwarz inequality)
= (E(∫_0^T 1_{[a,b]}(f_n - f)^2 ds))^{1/2} (E(∫_0^T 1_{[a,b]}(f_n + f)^2 ds))^{1/2}  (isometry)
= ||f_n - f||_{L^2([a,b] × Ω)} ||f_n + f||_{L^2([a,b] × Ω)} → 0
as n → ∞, because the second factor is bounded. For the second sequence we have, similarly,
E|∫_0^T 1_{[a,b]}(f_n^2 - f^2)ds| ≤ E(∫_0^T |1_{[a,b]}(f_n - f)| |1_{[a,b]}(f_n + f)| ds)
≤ (E(∫_0^T 1_{[a,b]}(f_n - f)^2 ds))^{1/2} (E(∫_0^T 1_{[a,b]}(f_n + f)^2 ds))^{1/2}
= ||f_n - f||_{L^2([a,b] × Ω)} ||f_n + f||_{L^2([a,b] × Ω)} → 0 as n → ∞.
Thus we have, by our Observation, that
E([I(1_{[a,b]} f_n)]^2 | F_a) → E([I(1_{[a,b]} f)]^2 | F_a)
and
E(∫_0^T 1_{[a,b]} f_n^2 ds | F_a) → E(∫_0^T 1_{[a,b]} f^2 ds | F_a)
in L^1(Ω)-norm. Hence, and from the equality
E([I(1_{[a,b]} f_n)]^2 | F_a) = E(∫_0^T 1_{[a,b]} f_n^2 ds | F_a),
valid for all f_n ∈ S^2, we obtain the final result.
Exercise 3.14: Show that ∫_0^t g(s)ds = 0 for all t ∈ [0, T] implies g = 0 almost surely on [0, T].
Solution: Denote by g^+ and g^- the positive and the negative parts of g. Then g^+ ≥ 0, g^- ≥ 0 and g^+ - g^- = g. The assumption about g implies ∫_a^b g^+(s)ds - ∫_a^b g^-(s)ds = 0 for all intervals [a, b] ⊂ [0, T]. Write μ^+(A) = ∫_A g^+(s)ds, μ^-(A) = ∫_A g^-(s)ds for A ∈ B([0, T]). Of course μ^+ and μ^- are measures on B([0, T]), and the properties of g^+ and g^- give μ^+([a, b]) = μ^-([a, b]) for every interval [a, b]. Since the intervals generate the σ-field B([0, T]), we must have μ^+(A) = μ^-(A) for all A ∈ B([0, T]). Suppose now that the Lebesgue measure of the set {x : g(x) ≠ 0} = {x : g^+(x) ≠ g^-(x)} is positive. Then the measure of the set B = {x : g^+(x) > g^-(x)} (or of the set {x : g^+(x) < g^-(x)}) must also be positive. But this leads to the conclusion
μ^+(B) - μ^-(B) = ∫_B (g^+ - g^-)ds > 0,
which contradicts μ^+(A) = μ^-(A) for all A ∈ B([0, T]).
Chapter 4
Exercise 4.1: Show that for the cross-terms all we need is the fact that W^2(t) - t is a martingale.
Solution: We need to calculate H = E(F''(W(t_i)) F''(W(t_j)) X_i X_j) for i < j, where X_k = [W(t_{k+1}) - W(t_k)]^2 - [t_{k+1} - t_k]. As in the proof of Theorem 4.5 (Hurdle 1: cross-terms), we have H = E(F''(W(t_i)) F''(W(t_j)) X_i E(X_j | F_{t_j})). So we calculate E(X_j | F_{t_j}). It is possible to write it in the form
E(X_j | F_{t_j}) = E([W^2(t_{j+1}) - t_{j+1}] | F_{t_j}) - 2W(t_j) E(W(t_{j+1}) | F_{t_j}) + W^2(t_j) + t_j  (W(t_j) is F_{t_j}-measurable)
= W^2(t_j) - t_j - 2W^2(t_j) + W^2(t_j) + t_j = 0
(since W^2(t) - t and W(t) are martingales). Hence also H = 0.
Exercise 4.2: Verify the convergence claimed in (4.3), using the fact that the quadratic variation of W is t.
Solution: We essentially repeat the calculation done for the quadratic variation of W. As in the previous exercise write X_i = (W(t_{i+1}) - W(t_i))^2 - (t_{i+1} - t_i), i = 0, ..., n-1. Since E(X_i) = 0, hence E(Σ_{i=0}^{n-1} X_i) = 0, and since the X_i, i = 0, ..., n-1, are independent random variables, we have
Σ_{i=0}^{n-1} E({(W(t_{i+1}) - W(t_i))^2 - (t_{i+1} - t_i)}^2) = Σ_{i=0}^{n-1} E(X_i^2)
= Σ_{i=0}^{n-1} Var(X_i)  (E(X_i) = 0)
= Var(Σ_{i=0}^{n-1} X_i)  (the X_i are independent)
= E((Σ_{i=0}^{n-1} X_i)^2)  (E(Σ_{i=0}^{n-1} X_i) = 0)
= E[(Σ_{i=0}^{n-1} (W(t_{i+1}) - W(t_i))^2 - t)^2]
= E((V^2_{[0,t]}(n) - t)^2) → E(([W, W](t) - t)^2) = 0,
since V^2_{[0,t]}(n) → t in L^2(Ω)-norm, independently of the sequence of partitions with mesh going to 0 as n → ∞. (See Proposition 2.2.)
Exercise 4.3: Prove that
τ_M = inf{t : ∫_0^t |f(s)|ds ≥ M}
is a stopping time.
Solution: The process t ↦ ∫_0^t |f(s)|ds has continuous paths, and we can apply the argument given at the beginning of the section provided it is adapted. For this we have to assume that the process f(t) is adapted and notice that the integral ∫_0^t |f(s)|ds, computed pathwise, is the limit of approximating sums. These sums are F_t-measurable and measurability is preserved in the limit.
Exercise 4.4: Find a process that is in P^2 but not in M^2.
Solution: If a process has continuous paths, it is in P^2, since the integral over any finite time interval of a continuous function is finite. We need an example for which the expectation E(∫_0^T f^2(s)ds) is infinite. Fubini's theorem implies that it is sufficient to find f such that ∫_0^T E(f^2(s))ds is infinite. Going for a simple example, let Ω = [0, 1] and T = 1. The goal will be achieved if E(f^2(s)) = 1/s. Now E(f^2(s)) = ∫_0^1 f^2(s, ω)dω, so we need a random variable, i.e. a Borel function X : [0, 1] → R, such that ∫_0^1 X(ω)dω = 1/s. Clearly, X(ω) = (1/s^2) 1_{[0,s]}(ω) does the trick, so f(s, ω) = (1/s) 1_{[0,s]}(ω) is the example we are looking for.
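A quick numerical illustration of this example, assuming a fixed outcome ω = 0.1 and simple Riemann sums: the pathwise integral ∫_0^1 f^2(s, ω)ds = 1/ω - 1 is finite, while ∫_ε^1 E(f^2(s))ds = -log ε grows without bound as ε → 0.

import numpy as np

omega = 0.1                                        # a fixed outcome in Omega = [0, 1]
s = np.linspace(omega, 1.0, 1_000_000)
ds = s[1] - s[0]
print(np.sum(s**-2.0) * ds, 1 / omega - 1)         # pathwise integral of f^2(., omega): finite

for eps in (1e-2, 1e-4, 1e-6):
    s = np.linspace(eps, 1.0, 1_000_000)
    ds = s[1] - s[0]
    print(eps, np.sum(1.0 / s) * ds, -np.log(eps)) # grows without bound as eps shrinks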
Exercise 4.5: Show that the Itô process dX(t) = a(t)dt + b(t)dW(t) has quadratic variation [X, X](t) = ∫_0^t b^2(s)ds.
Solution: Under the additional assumption that ∫_0^T b(s)dW(s) is bounded, the result is given by Theorem 3.26. For the general case let τ_n = min{t : ∫_0^t b(s)dW(s) ≥ n}, so that, writing M(t) = ∫_0^t b(s)dW(s), the stopped process M^{τ_n}(t) is bounded (by n). Since M^{τ_n}(t) = ∫_0^t 1_{[0,τ_n]}(s) b(s)dW(s), we get [X^{τ_n}, X^{τ_n}](t) = ∫_0^t 1_{[0,τ_n]}(s) b^2(s)ds → ∫_0^t b^2(s)ds almost surely, because τ_n is localising.
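A single-path numerical illustration, assuming a(t) = 1, b(t) = W(t), T = 1 and a fine grid: the sum of squared increments of X approximates ∫_0^T b^2(s)ds.

import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))
dX = 1.0 * dt + W[:-1] * dW                    # Euler increments of X with a(t) = 1, b(t) = W(t)
print(np.sum(dX**2), np.sum(W[:-1]**2) * dt)   # both approximate int_0^T W^2(s) ds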
Exercise 4.6: Show that the characteristics of an Itô process are uniquely defined by the process, i.e. prove that X = Y implies a_X = a_Y, b_X = b_Y, by applying the Itô formula to find the form of (X(t) - Y(t))^2.
Solution: Let Z(t) = X(t) - Y(t); by the Itô formula, dZ^2(t) = 2Z(t)a_Z(t)dt + 2Z(t)b_Z(t)dW(t) + b_Z^2(t)dt with a_Z = a_X - a_Y, b_Z = b_X - b_Y. But Z(t) = 0, hence ∫_0^t b_Z^2(s)ds = 0 for all t, so b_Z = 0. This implies ∫_0^t a_Z(s)ds = 0 for all t, hence a_Z(t) = 0 as well.
Exercise 4.7: Suppose that the Itô process dX(t) = a(t)dt + b(t)dW(t) is positive for all t and find the characteristics of the processes Y(t) = 1/X(t), Z(t) = ln X(t).
Solution: Y(t) = 1/X(t) = F(X(t)) with F(x) = 1/x, F'(x) = -1/x^2, F''(x) = 2/x^3, so
dY = -(1/X^2) a dt - (1/X^2) b dW(t) + (1/X^3) b^2 dt.
Z(t) = ln X(t), so Z(t) = F(X(t)) with F(x) = ln x, F'(x) = 1/x, F''(x) = -1/x^2, and
dZ = (1/X) a dt + (1/X) b dW(t) - (1/2)(1/X^2) b^2 dt.
Exercise 4.8: Find the characteristics of exp{at + X(t)}, given the form of the Itô process X.
Solution: Let F(t, x) = exp{at + x}, so F_t = aF, F_x = F, F_xx = F, and with Z(t) = exp{at + X(t)}, dX(t) = a_X(t)dt + b_X(t)dW(t), we have
dZ = aZ dt + a_X Z dt + b_X Z dW(t) + (1/2) b_X^2 Z dt.
Exercise 4.9: Find a version of Corollary 4.32 for the case where the integrand γ is a deterministic function of time.
Solution: Let M(t) = exp{∫_0^t γ(s)dW(s) - (1/2)∫_0^t γ^2(s)ds} = exp{X(t)}, where X(t) is an Itô process with a_X = -(1/2)γ^2, b_X = γ. Since X(t) has a normal distribution (γ is deterministic), M ∈ M^2 can be shown in the same way as in the proof of the corollary, and (4.16) is clearly satisfied.
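A rough simulation check, assuming the arbitrary deterministic integrand γ(t) = 0.5 + 0.5 sin t, T = 1 and a crude grid: the variable M(T) should have expectation close to 1, as a martingale started from 1.

import numpy as np

rng = np.random.default_rng(6)
T, n, paths = 1.0, 500, 20_000
dt = T / n
t = np.linspace(0.0, T, n + 1)[:-1]
gamma = 0.5 + 0.5 * np.sin(t)                            # an arbitrary deterministic integrand
dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
M_T = np.exp(dW @ gamma - 0.5 * np.sum(gamma**2) * dt)   # M(T) on each path
print(M_T.mean())                                        # close to 1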
Exercise 4.10: Find the characteristics of the process e^{rt}X(t).
Solution: Let Y(t) = e^{rt}; then a_Y(t) = re^{rt}, b_Y(t) = 0 and dY(t) = re^{rt}dt, so integration by parts (the Itô product rule, in other words) gives
d[e^{rt}X(t)] = re^{rt}X(t)dt + e^{rt}dX(t).
Exercise 4.11: Find the form of the process X/Y using Exercise 4.7.
Solution: Write dX(t) = a_X(t)dt + b_X(t)dW(t) and d(1/Y(t)) = a_{1/Y}(t)dt + b_{1/Y}(t)dW(t), with the characteristics of 1/Y given by Exercise 4.7:
a_{1/Y} = -(1/Y^2) a_Y + (1/Y^3) b_Y^2,
b_{1/Y} = -(1/Y^2) b_Y.
All that is left is to plug these into the claim of Theorem 4.36:
d(X · (1/Y)) = X d(1/Y) + (1/Y) dX + b_X b_{1/Y} dt.
Chapter 5
Exercise 5.1: Find an equation satisfied by X(t) = S(0) exp{μ_X t + σW(t)}.
Solution: Write the process in (5.3) in the form S(t) = S(0) exp{μ_S t + σW(t)} with μ_S = μ - (1/2)σ^2. Then (5.2) takes the form dS(t) = (μ_S + (1/2)σ^2)S(t)dt + σS(t)dW(t), so immediately
dX(t) = (μ_X + (1/2)σ^2)X(t)dt + σX(t)dW(t).
Exercise 5.2: Find the equations for the functions t ↦ E(S(t)), t ↦ Var(S(t)).
Solution: We have E(S(t)) = S(0) exp{μt} = m(t), say, so m'(t) = μm(t) with m(0) = S(0). Next, Var(S(t)) = E(S(t) - S(0)e^{μt})^2 = S^2(0)e^{2μt}(e^{σ^2 t} - 1) = v(t), say, and
v'(t) = 2μS^2(0)e^{2μt}(e^{σ^2 t} - 1) + σ^2 S^2(0)e^{2μt}e^{σ^2 t} = [2μ + σ^2]v(t) + σ^2 m^2(t).
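The formulas for m(T) and v(T) can be checked against a direct simulation of S(T) = S(0)exp{(μ - σ^2/2)T + σW(T)}; the sketch below assumes arbitrary illustrative parameter values.

import numpy as np

rng = np.random.default_rng(7)
S0, mu, sigma, T, paths = 1.0, 0.05, 0.2, 1.0, 1_000_000
W_T = rng.normal(0.0, np.sqrt(T), paths)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
print(S_T.mean(), S0 * np.exp(mu * T))                                      # m(T)
print(S_T.var(), S0**2 * np.exp(2 * mu * T) * (np.exp(sigma**2 * T) - 1))   # v(T)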
Exercise 5.3: Show that the linear equation
dS(t) = μ(t)S(t)dt + σ(t)S(t)dW(t)
with continuous deterministic functions μ(t) and σ(t) has a unique solution
S(t) = S(0) exp{∫_0^t (μ(s) - (1/2)σ^2(s))ds + ∫_0^t σ(s)dW(s)}.
Solution: For uniqueness we can repeat the proof of Proposition 5.3 (or notice that the coefficients of the equation satisfy the conditions of Theorem 5.8). To see that the process solves the equation, take
F(t, x) = S(0) exp{∫_0^t (μ(s) - (1/2)σ^2(s))ds + x}
and S(t) = F(t, X(t)) with X(t) = ∫_0^t σ(s)dW(s). Now
F_t(t, x) = (μ(t) - (1/2)σ^2(t))F(t, x),
F_x(t, x) = F_xx(t, x) = F(t, x),
dX(t) = σ(t)dW(t),
so by the Itô formula we get the result:
dS(t) = (μ(t) - (1/2)σ^2(t)) S(0) exp{∫_0^t (μ(s) - (1/2)σ^2(s))ds + X(t)}dt
+ σ(t) S(0) exp{∫_0^t (μ(s) - (1/2)σ^2(s))ds + X(t)}dW(t)
+ (1/2)σ^2(t) S(0) exp{∫_0^t (μ(s) - (1/2)σ^2(s))ds + X(t)}dt
= μ(t)S(t)dt + σ(t)S(t)dW(t).
(We have essentially repeated the proof of Theorem 5.2.)
Exercise 5.4: Find the equation solved by the process sin W(t) = X(t), say.
Solution: Take F(x) = sin x, F'(x) = cos x, F''(x) = -sin x; the simplest version of the Itô formula gives
dX(t) = cos(W(t))dW(t) - (1/2) sin(W(t))dt = √(1 - X^2(t)) dW(t) - (1/2) X(t)dt.
Exercise 5.5: Find a solution to the equation dX = -√(1 - X^2) dW - (1/2) X dt with X(0) = 1.
Solution: Comparing with Exercise 5.4 we can guess X(t) = cos(W(t)) and check that with F(x) = cos x the Itô formula gives the result.
Exercise 5.6: Find a solution to the equation
dX(t) = (3/4)X^2(t)dt - X^{3/2}(t)dW(t),
bearing in mind the above derivation of dX(t) = X^3(t)dt + X^2(t)dW(t).
Solution: An educated guess (the educated part is to solve F' = -F^{3/2} so that the stochastic term agrees; the guess is to try F of the special form (1 + ax)^b, then keep fingers crossed that the dt term comes out as needed) gives F(x) = (1 + (1/2)x)^{-2}, with F'(x) = -(1 + (1/2)x)^{-3} = -[F(x)]^{3/2} and F''(x) = (3/2)(1 + (1/2)x)^{-4} = (3/2)F^2(x). Hence dF(W(t)) = F'(W(t))dW(t) + (1/2)F''(W(t))dt = -X^{3/2}(t)dW(t) + (3/4)X^2(t)dt, so X(t) = F(W(t)) satisfies the equation.
Exercise 5.7: Solve the Vasicek equation dX(t) = (a - bX(t))dt + σdW(t).
Solution: Observe that d[e^{bt}X(t)] = ae^{bt}dt + σe^{bt}dW(t) (Exercise 4.10), hence
e^{bt}X(t) = X(0) + a∫_0^t e^{bu}du + σ∫_0^t e^{bu}dW(u) = X(0) + (a/b)(e^{bt} - 1) + σ∫_0^t e^{bu}dW(u),
so that
X(t) = e^{-bt}X(0) + (a/b)(1 - e^{-bt}) + σe^{-bt}∫_0^t e^{bu}dW(u).
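The closed-form solution can be compared with an Euler scheme driven by the same Brownian increments; the sketch below assumes arbitrary illustrative values of a, b, σ, X(0) and the grid.

import numpy as np

rng = np.random.default_rng(8)
a, b, sigma, X0 = 1.0, 2.0, 0.3, 0.5
T, n = 1.0, 10_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(dt), n)

# Euler scheme for dX = (a - b*X) dt + sigma dW
X = np.empty(n + 1)
X[0] = X0
for i in range(n):
    X[i + 1] = X[i] + (a - b * X[i]) * dt + sigma * dW[i]

# closed-form solution, with the stochastic integral approximated by left-point sums
stoch = np.concatenate(([0.0], np.cumsum(np.exp(b * t[:-1]) * dW)))
X_exact = np.exp(-b * t) * X0 + (a / b) * (1 - np.exp(-b * t)) + sigma * np.exp(-b * t) * stoch
print(np.max(np.abs(X - X_exact)))        # small: the two paths agree up to discretisation error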
Exercise 5.8: Find the equation solved by the process X^2, where X is the Ornstein-Uhlenbeck process.
Solution: Recall that the Ornstein-Uhlenbeck process satisfies dX(t) = -aX(t)dt + σdW(t), so by the Itô formula
dX^2(t) = 2X(t)dX(t) + σ^2 dt = -2aX^2(t)dt + 2σX(t)dW(t) + σ^2 dt.
Exercise 5.9: Prove uniqueness using the method of Proposition 5.3 for a general equation with Lipschitz coefficients (take any two solutions and estimate the square of their difference to show that it is zero).
Solution: Suppose
X_i(t) = X_0 + ∫_0^t a(s, X_i(s))ds + ∫_0^t b(s, X_i(s))dW(s),  i = 1, 2.
Then
X_1(t) - X_2(t) = ∫_0^t [a(u, X_1(u)) - a(u, X_2(u))]du + ∫_0^t [b(u, X_1(u)) - b(u, X_2(u))]dW(u),
and using (a + b)^2 ≤ 2a^2 + 2b^2 and taking expectations we get
f(t) := E(X_1(t) - X_2(t))^2 ≤ 2E(∫_0^t [a(u, X_1(u)) - a(u, X_2(u))]du)^2 + 2E(∫_0^t [b(u, X_1(u)) - b(u, X_2(u))]dW(u))^2.
Using the Lipschitz condition for a, the first term on the right is estimated by 2E(∫_0^t K|X_1(u) - X_2(u)|du)^2, and we can continue from here as in the proof of Proposition 5.3. The Itô isometry and the Lipschitz condition for b allow us to estimate the second term by
2E(∫_0^t [b(u, X_1(u)) - b(u, X_2(u))]dW(u))^2 = 2∫_0^t E[b(u, X_1(u)) - b(u, X_2(u))]^2 du ≤ 2∫_0^t K^2 E[X_1(u) - X_2(u)]^2 du.
Putting these together we obtain f(t) ≤ 2K^2(1 + T)∫_0^t f(u)du, and the Gronwall lemma implies f(t) = 0, i.e. X_1(t) = X_2(t).
Exercise 5.10: Prove that the solution depends continuously on the initial value in the L^2 norm, namely show that if X, Y are solutions of (5.4) with initial conditions X_0, Y_0, respectively, then for all t we have E(X(t) - Y(t))^2 ≤ cE(X_0 - Y_0)^2. Find the form of the constant c.
Solution: We proceed as in Exercise 5.9, but the first step is
X(t) - Y(t) = X_0 - Y_0 + ∫_0^t [a(u, X(u)) - a(u, Y(u))]du + ∫_0^t [b(u, X(u)) - b(u, Y(u))]dW(u).
After taking squares and expectations and following the same estimates we end up with
f(t) := E(X(t) - Y(t))^2 ≤ 2E(X_0 - Y_0)^2 + 4K^2(1 + T)∫_0^t f(u)du,
so after Gronwall
E(X(t) - Y(t))^2 ≤ 2 exp{4K^2(1 + T)T} E(X_0 - Y_0)^2,
that is, c = 2 exp{4K^2(1 + T)T}.