Bernt Øksendal
Stochastic Differential Equations: An Introduction with Applications
Sixth Edition. With 14 Figures. Springer

Bernt Øksendal, Department of Mathematics, University of Oslo, Box 1053, Blindern, 0316 Oslo, Norway. E-mail: oksendal@math.uio.no

Cover Art: The front cover shows five sample paths $X_t(\omega_1)$, $X_t(\omega_2)$, $X_t(\omega_3)$, $X_t(\omega_4)$ and $X_t(\omega_5)$ of a geometric Brownian motion $X_t(\omega)$, i.e. of the solution of a (1-dimensional) stochastic differential equation of the form
$$\frac{dX_t}{dt} = (r + \alpha \cdot W_t)X_t, \quad t \ge 0; \qquad X_0 = x,$$
where $x$, $r$ and $\alpha$ are constants and $W_t = W_t(\omega)$ is white noise. This process is often used to model "exponential growth under uncertainty". See Chapters 5, 10, 11 and 12. The figure is a computer simulation for the case $x = r = 1$, $\alpha = 0.6$. The mean value of $X_t$, $E[X_t] = \exp(t)$, is also drawn. Courtesy of Jan Ubøe, Norwegian School of Economics and Business Administration, Bergen.

Library of Congress Cataloging-in-Publication Data
Øksendal, B. K. (Bernt Karsten), 1945-
Stochastic differential equations: an introduction with applications / Bernt Øksendal. - 6th ed.
p. cm. - (Universitext)
Includes bibliographical references and index.
ISBN 3-540-04758-1 (softcover: alk. paper)
1. Stochastic differential equations. I. Title.
QA274.23.O47 2003    519.2 dc21    2003052637

ISBN 3-540-04758-1 Springer-Verlag Berlin Heidelberg New York

Mathematics Subject Classification (2000): 60H10, 60G35, 60G40, 60G44, 93E20, 60J45, 60J25

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springer.de

© Springer-Verlag Berlin Heidelberg 1985, 1989, 1992, 1995, 1998, 2003
Printed in Germany

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: design & production GmbH, Heidelberg. Typeset by the author using a LaTeX macro package. Printed on acid-free paper.

To My Family
Eva, Elise, Anders and Karina

We have not succeeded in answering all our problems. The answers we have found only serve to raise a whole set of new questions. In some ways we feel we are as confused as ever, but we believe we are confused on a higher level and about more important things.
Posted outside the mathematics reading room, Tromsø University

Preface to the Sixth Edition

This edition contains detailed solutions of selected exercises. Many readers have requested this, because it makes the book more suitable for self-study. At the same time new exercises (without solutions) have been added. They have all been placed at the end of each chapter, in order to facilitate the use of this edition together with previous ones. Several errors have been corrected and formulations have been improved.
‘This has been made possible by the valuable comments from (in alphabetical order) Jon Bohlin, Mark Davis, Helge Holden, Patrick Jaillet, Chen Jing, Natalia Koroleva, Mario Lefebvre, Alexander Matasov, Thilo Meyer-Brandis, Keigo Osawa, Bjgrn Thunestvedt, Jan Ubge and Yngve Willassen. I thank them all for helping to improve the book. My thanks also go to Dina Haraldsson, who once again has performed the typing and drawn the figures with great skill. Blindern, March 2003 Bernt Oksendal Preface to the First Corrected Printing, of the Fifth Edition ‘The main corrections and improvements in this corrected printing are from Chapter 12. I have benefitted from useful comments from a number of peo- ple, including (in alphabetical order) Fredrik Dahl, Simone Deparis, Ulrich Haussmann, Yaozhong Hu, Marianne Huebner, Carl Peter Kirkebg, Niko- lay Kolev, Takashi Kumagai, Shlomo Levental, Geir Magnussen, Anders @ksendal, Jiirgen Potthoff, Colin Rowat, Stig Sandnes, Lones Smith, Set- suo Taniguchi and Bjorn Thunestvedt. I want to thank them all for helping me making the book better. I also want to thank Dina Haraldsson for proficient typing. Blindern, May 2000 Bernt Oksendal Preface to the Fifth Edition The main new feature of the fifth edition is the addition of a new chapter, Chapter 12, on applications to mathematical finance. I found it natural to include this material as another major application of stochastic analysis, in view of the amazing development in this field during the last 10-20 years. Moreover, the close contact between the theoretical achievements and the applications in this area is striking. For example, today very few firms (if any) trade with options without consulting the Black & Scholes formula! ‘The first 11 chapters of the book are not much changed from the previous edition, but I have continued my efforts to improve the presentation through- out and correct errors and misprints. Some new exercises have been added. Moreovet, to facilitate the use of the book each chapter has been divided into subsections. If one doesn’t want (or doesn’t have time) to cover all the chapters, then one can compose a course by choosing subsections from the chapters. The chart below indicates what material depends on which sections. Chapters 1-5 |—-{ caners Chapter 8 |Segtion|—] Chapter 7 Fp] | $5400 Chopier 9 ‘Chapter 10 ‘Chapter 11 For example, to cover the first two sections of the new chapter 12 it is recom- mended that one (at least) covers Chapters 1-5, Chapter 7 and Section 8.6. XI Preface to the Fifth Edition Chapter 10, and hence Section 9.1, are necessary additional background for Section 12.3, in particular for the subsection on American options. In my work on this edition I have benefitted from useful suggestions from many people, including (in alphabetical order) Knut Aase, Luis Al- varez, Peter Christensen, Kian Esteghamat, Nils Christian Framstad, Helge Holden, Christian Irgens, Saul Jacka, Naoto Kunitomo and his group, Sure Mataramvura, Trond Myhre, Anders Oksendal, Nils Ovrelid, Walter Schacher- mayer, Bjarne Schielderop, Atle Seierstad, Jan Ubge, Gjermund Vage and Dan Zes. I thank them all for their contributions to the improvement of the book. Again Dina Haraldsson demonstrated her impressive skills in typing the manuscript ~ and in finding her way in the LATpX jungle! I am very grateful for her help and for her patience with me and all my revisions, new versions and revised revisions ... 
Blindern, January 1998 Bernt Oksendal Preface to the Fourth Edition In this edition I have added some material which is particularly useful for the applications, namely the martingale representation theorem (Chapter IV), the variational inequalities associated to optimal stopping problems (Chapter X) and stochastic control with terminal conditions (Chapter XI). In addition solutions and extra hints to some of the exercises are now included. Moreover, the proof and the discussion of the Girsanov theorem have been changed in order to make it more easy to apply, e.g. in economics. And the presentation in general has been corrected and revised throughout the text, in order to make the book better and more useful. During this work I have benefitted from valuable comments from several persons, including Knut Aase, Sigmund Berntsen, Mark H. A. Davis, Helge Holden, Yaozhong Hu, Tom Lindstrom, Trygve Nilsen, Paulo Ruffino, Isaac Saias, Clint Scovel, Jan Ubge, Suleyman Ustunel, Qinghua Zhang, Tusheng Zhang and Victor Daniel Zukowski. I am grateful to them all for their help. My special thanks go to Hékon Nyhus, who carefully read large portions of the manuscript and gave me a long list of improvements, as well as many other useful suggestions. Finally I wish to express my gratitude to Tove Maller and Dina Haralds- son, who typed the manuscript with impressive proficiency. Oslo, June 1995 Bernt Oksendal Preface to the Third Edition The main new feature of the third edition is that exercises have been included to each of the chapters II-XI. The purpose of these exercises is to help the reader to get a better understanding of the text. Some of the exercises are quite routine, intended to illustrate the results, while other exercises are harder and more challenging and some serve to extend the theory. T have also continued the effort to correct misprints and errors and to improve the presentation. I have benefitted from valuable comments and suggestions from Mark H. A. Davis, Hdkon Gjessing, Torgny Lindvall and Hakon Nyhus, My best thanks to them all. A quite noticeable non-mathematical improvement is that the book is now typed in TX. Tove Lieberg did a great typing job (as usual) and I am very grateful to her for her effort and infinite patience. Oslo, June 1991 Bernt Oksendal Preface to the Second Edition In the second edition I have split the chapter on diffusion processes in two, the new Chapters VII and VIII: Chapter VII treats only those basic properties of diffusions that are needed for the applications in the last 3 chapters. The readers that are anxious to get to the applications as soon as possible can therefore jump directly from Chapter VII to Chapters IX, X and XI. In Chapter VIII other important properties of diffusions are discussed. While not strictly necessary for the rest of the book, these properties are central in today’s theory of stochastic analysis and crucial for many other applications. Hopefully this change will make the book more flexible for the different purposes. I have also made an effort to improve the presentation at some points and I have corrected the misprints and errors that I knew about, hopefully without introducing new ones. I am grateful for the responses that have received on the book and in particular I wish to thank Henrik Martens for his helpful comments. Tove Lieberg has impressed me with her unique combination of typing accuracy and speed. 
I wish to thank her for her help and patience, together with Dina Haraldsson and Tone Rasmussen who sometimes assisted on the typing. Oslo, August 1989 Bernt Oksendal Preface to the First Edition ‘These notes are based on a postgraduate course I gave on stochastic dif. ferential equations at Edinburgh University in the spring 1982. No previous knowledge about the subject was assumed, but the presentation is based on some background in measure theory. There are several reasons why one should learn more about stochastic differential equations: They have a wide range of applications outside mathe- matics, there are many fruitful connections to other mathematical disciplines and the subject has a rapidly developing life of its own as a fascinating re- search field with many interesting unanswered questions. Unfortunately most of the literature about stochastic differential equa- tions seems to place so much emphasis on rigor and completeness that it scares many nonexperts away. These notes are an attempt to approach the subject from the nonexpert point of view: Not knowing anything (except ru- mours, maybe) about a subject to start with, what would I like to know first of all? My answer would be: 1) In what situations does the subject arise? 2) What are its essential features? 3) What are the applications and the connections to other fields? I would not be so interested in the proof of the most general case, but rather in an easier proof of a special case, which may give just as much of the basic idea in the argument. And I would be willing to believe some basic results without proof (at first stage, anyway) in order to have time for some more basic applications. These notes reflect this point of view. Such an approach enables us to reach the highlights of the theory quicker and easier. Thus it is hoped that these notes may contribute to fill a gap in the existing literature. The course is meant to be an appetizer. If it succeeds in awaking further interest, the reader will have a large selection of excellent literature available for the study of the whole story. Some of this literature is listed at the back. In the introduction we state 6 problems where stochastic differential equa- tions play an essential role in the solution. In Chapter II we introduce the basic mathematical notions needed for the mathematical model of some of these problems, leading to the concept of Ito integrals in Chapter III. In Chapter IV we develop the stochastic calculus (the Ito formula) and in Chap- XX Preface to the First Edition ter V we use this to solve some stochastic differential equations, including the first two problems in the introduction. In Chapter VI we present solution of the Kinear filtering problem (of which problem 3 is an example), using the stochastic calculus. Problem 4 is the Dirichlet problem. Although this is purely deterministic we outline in Chapters VII and VIII how the introduc- tion of an associated Ito diffusion (ic. solution of a stochastic differential equation) leads to a simple, intuitive and useful stochastic solution, which is the cornerstone of stochastic potential theory. Problem 5 is an optimal stop- ping problem. In Chapter IX we represent the state of a game at time t by an Ito diffusion and solve the corresponding optimal stopping problem. The so- lution involves potential theoretic notions, such as the generalized harmonic extension provided by the solution of the Dirichlet problem in Chapter VIII. Problem 6 is a stochastic version of F.P. 
Ramsey’s classical control problem from 1928. In Chapter X we formulate the general stochastic control prob- lem in terms of stochastic differential equations, and we apply the results of Chapters VI and VIII to show that the problem can be reduced to solving the (deterministic) Hamilton-Jacobi-Bellman equation. As an illustration we solve a problem about optimal portfolio selection. After the course was first given in Edinburgh in 1982, revised and ex- panded versions were presented at Agder College, Kristiansand and Univer- sity of Oslo. Every time about half of the audience have come from the ap- plied section, the others being so-called “pure” mathematicians. ‘This fruitful combination has created a broad variety of valuable comments, for which I am very grateful. I particularly wish to express my gratitude to K.K. Aase, L. Csink and A.M. Davie for many useful discussions. I wish to thank the Science and Engineering Research Council, U.K. and Norges Almenvitenskapelige Forskningstid (NAVF), Norway for their finan- cial support. And I am greatly indebted to Ingrid Skram, Agder College and Inger Prestbakken, University of Oslo for their excellent typing - and their patience with the innumerable changes in the manuscript during these two years. Oslo, June 1985 Bernt Dksendal Note: Chapters VIII, IX, X of the First Edition have become Chapters IX, X, XI of the Second Edition. Table of Contents Introduction 1.1 Stochastic Analogs of Classical Differential Equations . 1.2. Filtering Problems : 1.3. Stochastic Approach to Deterministic Boundary Value Prob- lems 14 Optimal Stopping . 1.5 Stochastic Control . 1.6 Mathematical Finance . Some Mathematical Preliminaries ......................+ 2.1 Probability Spaces, Random Variables and Stochastic Processes 2.2 An Important Example: Brownian Motion ................- Exercise 06. e tenn etn n eee eee Ité Integrals 3.1 Construction of the It6 Integral : 3.2. Some Properties of the Ité Integral . 3.3. Extensions of the It6 Integral Exercises The Ité Formula and the Martingale Representation Theo- 4.1. The 1-dimensional It6 Formula . 4.2. The Multi-dimensional Ité Formula ...... 43 The Martingale Representation Theorem . Exercises .. Stochastic Differential Equations ... 5.1 Examples and Some Solution Methods . 5.2 An Existence and Uniqueness Result . 5.3. Weak and Strong Solutions Exercises XXII Table of Contents 10. 11. The Filtering Problem . 6.1 Introduction .... 6.2 The 1-Dimensional Linear Filtering Problem 6.3 The Multidimensional Linear Pikering | Problem Exercises ..........---.-s0 eee eee Diffusions: Basic Properties 0.000.000.0020 .00000e cscs 7.1 The Markov Property 7.2. The Strong Markov Property 7.3. The Generator of an It6 Diffusion 7.4 The Dynkin Formula. 7.5 The Characteristic Operator . Exercises Other Topics in Diffusion Theory 8.1 Kolmogorov’s Backward Equation. The Resolvent 8.2 The Feynman-Kac Formula, Killing . 8.3. The Martingale Problem 84 When is an Ité Process a Diffusion? 8.5 Random Time Change . 8.6 The Girsanov Theorem. Exercises Applications to Boundary Value Problems 9.1 The Combined Dirichlet-Poisson Problem. Uniqueness. 9.2. The Dirichlet Problem. Regular Points . 9.3. The Poisson Problem Exercises Application to Optimal Stopping . 10.1 The Time-Homogeneous Case 10.2 The Time-Inhomogeneous Case 10.3 Optimal Stopping Problems Involving an Integral . 10.4 Connection with Variational Inequalities...... Exercises Application to Stochastic Control. 11.1 Statement of the Problem ..... 
11.2 The Hamilton-Jacobi-Bellman Equation
11.3 Stochastic Control Problems with Terminal Conditions
Exercises

12. Application to Mathematical Finance ....................... 261
12.1 Market, Portfolio and Arbitrage .......................... 261
12.2 Attainability and Completeness ........................... 271
12.3 Option Pricing ........................................... 279
Exercises ..................................................... 298

Appendix A: Normal Random Variables ........................... 305
Appendix B: Conditional Expectation ........................... 309
Appendix C: Uniform Integrability and Martingale Convergence
Appendix D: An Approximation Result ........................... 315

Solutions and Additional Hints to Some of the Exercises ....... 319
References .................................................... 345
List of Frequently Used Notation and Symbols .................. 353
Index

1. Introduction

To convince the reader that stochastic differential equations are an important subject let us mention some situations where such equations appear and can be used:

1.1 Stochastic Analogs of Classical Differential Equations

If we allow for some randomness in some of the coefficients of a differential equation we often obtain a more realistic mathematical model of the situation.

Problem 1. Consider the simple population growth model
$$\frac{dN}{dt} = a(t)N(t), \qquad N(0) = N_0 \ \text{(constant)} \tag{1.1.1}$$
where $N(t)$ is the size of the population at time $t$, and $a(t)$ is the relative rate of growth at time $t$. It might happen that $a(t)$ is not completely known, but subject to some random environmental effects, so that we have
$$a(t) = r(t) + \text{"noise"}\,,$$
where we do not know the exact behaviour of the noise term, only its probability distribution. The function $r(t)$ is assumed to be nonrandom. How do we solve (1.1.1) in this case?

Problem 2. The charge $Q(t)$ at time $t$ at a fixed point in an electric circuit satisfies the differential equation
$$L \cdot Q''(t) + R \cdot Q'(t) + \frac{1}{C}\cdot Q(t) = F(t), \qquad Q(0) = Q_0,\ Q'(0) = I_0 \tag{1.1.2}$$
where $L$ is inductance, $R$ is resistance, $C$ is capacitance and $F(t)$ the potential source at time $t$. Again we may have a situation where some of the coefficients, say $F(t)$, are not deterministic but of the form
$$F(t) = G(t) + \text{"noise"}\,. \tag{1.1.3}$$
How do we solve (1.1.2) in this case?

More generally, the equation we obtain by allowing randomness in the coefficients of a differential equation is called a stochastic differential equation. This will be made more precise later. It is clear that any solution of a stochastic differential equation must involve some randomness, i.e. we can only hope to be able to say something about the probability distributions of the solutions.

1.2 Filtering Problems

Problem 3. Suppose that we, in order to improve our knowledge about the solution, say of Problem 2, perform observations $Z(s)$ of $Q(s)$ at times $s \le t$. [...]

1.5 Stochastic Control

Suppose that a person has two possible investments:
(i) A safe investment (e.g. a bond), where the price $X_0(t)$ per unit at time $t$ grows exponentially:
$$\frac{dX_0}{dt} = \rho X_0 \tag{1.5.1}$$
where $\rho > 0$ is a constant.
(ii) A risky investment (e.g. a stock), where the price $X_1(t)$ per unit at time $t$ satisfies a stochastic differential equation of the type discussed in Problem 1:
$$\frac{dX_1}{dt} = (\mu + \sigma \cdot \text{"noise"})X_1 \tag{1.5.2}$$
where $\mu > \rho$ and $\sigma \in \mathbb{R}\setminus\{0\}$ are constants.

At each instant $t$ the person can choose how large a portion (fraction) $u_t$ of his fortune $Z_t$ he wants to place in the risky investment, thereby placing $(1-u_t)Z_t$ in the safe investment. Given a utility function $U$ and a terminal time $T$ the problem is to find the optimal portfolio $u_t \in [0,1]$ (i.e.
find the investment distribution uz; 0 < ¢ < T) which maximizes the expected utility of the corresponding terminal fortune 2%: max {E [viz )]} (1.5.3) O R”. Every random variable induces a probability measure x on R", defined by #x(B) = P(X7"(B)) . Lux is called the distribution of X. If [ |X (w)|dP(w) < co then the number a BIx}= [x(o)aPw) = f edux(e) 4 Re is called the expectation of X (w.r.t. P). More generally, if f:R” R is Borel measurable and J lf(X(w))|dP(w) < oo then we have a BO= [ 1xXWyaPe 7 J H2)dux(2). a Re The L?-spaces If X : Q + R® is a random variable and p € (1,00) is a constant we define the L?-norm of X, ||X|p, by : Xp = 1Xluree) = ( [1X )PEPW))?. 4 If p = 00 we set IX Ilo = IXlliece) = sup(|X(w)kw € 2}. The corresponding L?-spaces are defined by L?(P) = L*(2) ={X + 2+ R*;||Xllp < co} With this norm the L?-spaces are Banach spaces, i.e. complete normed linear spaces (see Exercise 2.19). If p = 2 the space L?(P) is even a Hilbert space, i.e. a complete inner product space, with inner product (X,Y )uypy = B[IX-¥); X,Y € 1°(P). The mathematical model for independence is the following: 10 2. Some Mathematical Preliminaries Definition 2.1.3. Two subsets A,B € F are called independent if P(ANB) = P(A): P(B). A collection A = {H,;i € I} of families H; of measurable sets is independent if P(Hi, 1-0 Ay) = P(Hi,) + Pix) for all choices of Hi, € Hiz,-++, Hi, € Ha, with different indices in,...,in- A collection of random variables {X;;i € I} is independent if the collec- tion of generated o-algebras Hx, is independent. If two random variables X,Y: — R are independent then E|XY] = E[X]E(Y) , provided that E[|X|] < oo and El|Y'|] < oo. (See Exercise 2.5.) Definition 2.1.4. A stochastic process is a parametrized collection of ran- dom variables {Xther defined on a probability space ({2,F, P) and assuming values in R”. The parameter space T' is usually (as in this book) the halfline [0, 00), but it may also be an interval [a,b], the non-negative integers and even subsets of R” for n > 1. Note that for each t € T fixed we have a random variable woX(w); we. On the other hand, fixing w € 2 we can consider the function t>oX(w); ter which is called a path of X;. It may be useful for the intuition to think of t as “time” and each w as an individual “particle” or “experiment”. With this picture X;(w) would represent the position (or result) at time ¢ of the particle (experiment) w. Sometimes it is convenient to write X(t,w) instead of X;(w). Thus we may also regard the process as a function of two variables (tw) + X(tw) from T x @ into R*. This is often a natural point of view in stochastic analysis, because (as we shall see) there it is crucial to have X(t,w) jointly measurable in (t,w). Finally we note that we may identify each w with the function t > Xi(w) from T into R". Thus we may regard (2 as a subset of the space { = (R")? of all functions from T into R". Then the g-algebra F will contain the o-algebra B generated by sets of the form 2.2 An Important Example: Brownian Motion iW {wiw(th) € Fis swlte) € Fe}, FC R” Borel sets (B is the same as the Borel o-algebra on @ if T = (0,00) and @ is given the product topology). Therefore one may also adopt the point of view that a stochastic process is a probability measure P on the measurable space ((R")", B) The (finite-dimensional) distributions of the process X = {X;}rer are the measures ji1,,...,t, defined on R™, k = 1,2,..., by Mtryoute (Fi X Fa X00 Fe) = P[Xy € Fis Xu € Felis €T. Here F,,..., Fx denote Borel sets in R”. 
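As a concrete illustration of the identities above, the finite-dimensional distributions $\mu_{t_1,\ldots,t_k}$ and the expectation $E[f(X)] = \int f\, d\mu_X$ can be estimated by straightforward simulation. The following Python sketch is not part of the book; the process, the time points $t_1, t_2$ and the Borel sets $F_1, F_2$ are arbitrary choices made only for illustration. It generates many sample paths $t \mapsto X_t(\omega)$, counts paths to estimate $\mu_{t_1,t_2}(F_1 \times F_2) = P[X_{t_1} \in F_1,\, X_{t_2} \in F_2]$, and forms a Monte Carlo estimate of $E[f(X_{t_2})]$.

```python
import numpy as np

# Minimal illustration (not from the book): estimate a finite-dimensional
# distribution mu_{t1,t2}(F1 x F2) = P[X_{t1} in F1, X_{t2} in F2] and an
# expectation E[f(X_t)] by simulating many sample paths t -> X_t(omega).
rng = np.random.default_rng(0)

n_paths, n_steps, dt = 100_000, 100, 0.01      # illustrative choices
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X = np.cumsum(increments, axis=1)              # row = one sample path X_t(omega)

t1_idx, t2_idx = 49, 99                        # time points t1 = 0.5, t2 = 1.0
in_F1 = X[:, t1_idx] > 0.0                     # F1 = (0, infinity)
in_F2 = np.abs(X[:, t2_idx]) < 1.0             # F2 = (-1, 1)
print("estimate of P[X_{t1} in F1, X_{t2} in F2]:", np.mean(in_F1 & in_F2))

# Monte Carlo counterpart of E[f(X)] = int f d mu_X, here with f(x) = x^2;
# for this Brownian-type process the exact value at t2 = 1 is 1.
print("estimate of E[X_{t2}^2]:", np.mean(X[:, t2_idx] ** 2))
```

Since the simulated process is a discretized Brownian-type motion started at $0$, the estimate of $E[X_{t_2}^2]$ should come out close to $t_2 = 1$, in agreement with the Brownian motion formulas developed below.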
‘The family of all finite-dimensional distributions determines many (but not all) important properties of the process X. Conversely, given a family {v4,,..,4.;k € N,t; € T} of probability mea- sures on R™ it is important to be able to construct a stochastic process Y = {Yheer having vy,..,x. a8 its finite-dimensional distributions. One of Kolmogorov’s famous theorems states that this can be done provided {¥t,,.ute} Satisfies two natural consistency conditions: (See Lamperti (1977).) ‘Theorem 2.1.5 (Kolmogorov’s extension theorem). For all ty,...,th €T, KEN let r4,,...%5. be probability measures on R™ s.t. Moeryetegny (Pa X00 Fe) = Ver este Fo-1ay % 1+ X Fe-1(4)) (K1) for all permutations o on {1,2,...,k} and Yet FRC FE) = Vey otistiettonitetm (FIX Fk x RXR") (K2) for allm €N, where (of course) the set on the right hand side has a total of k+m factors. Then there exists a probability space (2,F,P) and a stochastic process {Xi} on 2X QR, ot Ves onte (Fi X 0° X Fe) = P(X, € Fy for allt; €T, k EN and all Borel sets F, ++) Xey © Fr], 2.2 An Important Example: Brownian Motion In 1828 the Scottish botanist Robert Brown observed that pollen grains sus- pended in liquid performed an irregular motion. The motion was later ex- plained by the random collisions with the molecules of the liquid. To describe the motion mathematically it is natural to use the concept of a stochastic process B,(w), interpreted as the position at time t of the pollen grain w. We will generalize slightly and consider an n-dimensional analog. 12 2. Some Mathematical Preliminaries To construct {B;}:>0 it suffices, by the Kolmogorov extension theorem, to specify a family {%4,,..,1.} of probability measures satisfying (K1) and (K2) ‘These measures will be chosen so that they agree with our observations of the pollen grain behaviour: Fix 7 € R” and define le- of 2 If0 St < te <-++- < ty define a measure %,,.,4, on R™* by p(t, x,y) = (2at)7"/? - exp(— ) for yeR",t>0 Vey, ote(Fa X02 x Fk) = (2.2.1) = ] (ti, ©, 21)p(to—t1, 21,02) ++ P(e tea) Te 1, Te )day +++ dary Fook where we use the notation dy = dy;---dy, for Lebesgue measure and the convention that p(0, x, y)dy = 6:(y), the unit point mass at x. Extend this definition to all finite sequences of t;’s by using (K1). Since JS p(t,2,y)dy = 1 for all t > 0, (K2) holds, so by Kolmogorov’s theorem Re there exists a probability space ({2, F, P*) and a stochastic process {Bz}1>0 on @ such that the finite-dimensional distributions of B, are given by (2.2.1), ie. PP(By € Fis - / p(t. 2,21) -+- p(t — tee te-ite)der...dz_, (2.2.2) Fx Fi Bu, © Fr) = Definition 2.2.1. Such a process is called (a version of) Brownian motion starting at x (observe that P¥(Bo = 2) = 1). ‘The Brownian motion thus defined is not unique, i.e. there exist. several quadruples (By, 2, F, P*) such that (2.2.2) holds. However, for our purposes this is not important, we may simply choose any version to work with. As we shall soon see, the paths of a Brownian motion are (or, more correctly, can be chosen to be) continuous, a.s. Therefore we may identify (a.a.) w € 2 with a continuous function t + B,(w) from {0, 00) into R". Thus we may adopt the point of view that Brownian motion is just the space C([0, 00), R) equipped with certain probability measures P* (given by (2.2.1) and (2.2.2) above). ‘This version is called the canonical Brownian motion. Besides having the advantage of being intuitive, this point of view is useful for the further anal- ysis of measures on C((0,00),R”), since this space is Polish (ie. 
a complete separable metric space). See Stroock and Varadhan (1979). We state some basic properties of Brownian motion: 2.2 An Important Example: Brownian Motion 13, i) Brisa Gaussian process, i.e. for all 0 < ty <--- < ty the random variable Z=(Byy---) By) € R™ has a (multi)normal distribution, This means that there exists a vector Mf € R™ and a non-negative definite matrix C = [ejm} € R™**"* (the set of all nk x nk-matrices with real entries) such that. nk E* [exp ( yy z)) =exp (- BD ujesmtin +E Dum) (2.2.3) i Sm 3 for all u = (t1,.--)tne) € R™, where i = YI is the imaginary unit and E¥ denotes expectation with respect to P#. Moreover, if (2.2.3) holds then M = E*[Z]_ is the mean value of Z (2.2.4) and ¢jm = E*((Z; -— Mj)(Zm — Mm)] is the covariance matrix of Z . (2.2.5) (See Appendix A). To see that (2.2.3) holds for Z = (Bz,,..., By,) we calculate its left hand side explicitly by using (2.2.2) (see Appendix A) and obtain (2.2.3) with M = EF(2] = (a,2,---,2) eR (2.2.6) and tn tiln + tila tin tatn ++: taln CHl : (2.2.7) tiln tealn +++ thIn Hence E(Bl=x forall t>0 (2.2.8) and E*|((B, — z)*] = nt, E*[(B, — z)(Bs—2)] =n min(s,t). (2.2.9) Moreover, E*|(By— B.)"] =n(t—s)ift>s, (2.2.10) since E*((B, — B,)"|] = E*((By — 2)? — 2(B, — 2)(Bs ~ 2) + (By ~ 2)*} = n(t—2s +s) =n(t—s),whent>s. 14 2. Some Mathematical Preliminaries (ii) By has independent increments, i.e. Bu, By — Bus, By, ~ Bry, are independent for all O< ty < tg: < te. (2.2.11) To prove this we use the fact that normal random variables are inde- pendent iff they are uncorrelated. (See Appendix A). So it is enough to prove that E*((Br, — By_,)(Be, — Br_,)] =0 when ti t. (iii) Finally we ask: Is t + B,(w) continuous for almost all w? Stated like this the question does not make sense, because the set H = {w;t + B,(w) is continuous} is not measurable with respect to the Borel o-algebra B on (R”)l°) mentioned above (H involves an uncountable number of t's). However, if modified slightly the question can be given a positive answer. To explain this we need the following important concept: Definition 2.2.2. Suppose that {Xi} and {¥%;} are stochastic processes on (2,F, P). Then we say that {X;} is a version of (or a modification of) {Y;} if P({w; Xe(w) = ¥i(w)}) = 1 for all t. Note that if Xi is a version of Yi, then X; and Y; have the same finite- dimensional distributions. Thus from the point of view that a stochastic pro- cess is a probability law on (R)I) two such processes are the same, but nevertheless their path properties may be different. (See Exercise 2.9.) ‘The continuity question of Brownian motion can be answered by using another famous theorem of Kolmogorov: Theorem 2.2.3 (Kolmogorov’s continuity theorem). Suppose that the process X = {Xz}e>o satisfies the following condition: For all T > 0 there exist positive constants a,{3,D such that E[\Xe— Xs|°] < D-|t—s|'48 ; O0, 1 flax)PIX = an) . im 2.2. Let X:2 — R be a random variable. The distribution function F of X is defined by F(a) = PIX 0 be a measurable function on R. We say that X has the density p if F(a) = f p(y)dy for all. Thus from (2.2.1)-(2.2.2) we know that 1-dimensional Brownian motion By at time ¢ with By = 0 has the density 2 1 (x) = Tra POR Find the density of B?. reR. 2.3. Let {Hi}ser be a family of o-algebras on 2. Prove that H= (Viet is again a o-algebra. 2.4. a) Let X: {2 R” be a random variable such that E\|X|P] <0o for some p, 0A) < spElAr for all \>0. Hint: f |X|PdP > f |X|PdP, where A = {w:|X| >A}. b) Suppose there exists & > 0 such that M = Blexp(k|X|)] < 00. 
Prove that Pi|X| >) < Me~™ for all A>0. Exercises 17 Let X,Y: —+ R be two independent random variables and assume for simplicity that X and Y are bounded. Prove that E[XY] = E[X]E|¥] . (xine: Assume |X| < M, |Y| < N. Approximate X and Y by sim- ple functions p(w) = > aiXr,(w), ¥w) = Do bj; XG, (w), respectively, a =1 3 where Fy = X7¥(a;,ai41)), Gj = ¥7"((by,Bj41)), —M = ao aids PUR G)).. ) : rey Let (2,F,P) be a probability space and let Ai, Ap,... be sets in F such that SY P(Ak) < 00 ta Prove the Borel-Cantelli lemma: rr) U ay=o, mal kom ie. the probability that w belongs to infinitely many Aj.s is zero. a) Suppose Gi,G2,...,Gn are disjoint subsets of 2 such that Prove that the family G consisting of @ and all unions of some (or all) of Gi,...,G, constitutes a o-algebra on 22. b) Prove that any finite o-algebra F on 9? is of the type described in a). c) Let F be a finite o-algebra on @ and let X:2 + R be F- measurable. Prove that X assumes only finitely many possible values. More precisely, there exists a disjoint family of subsets F,..., Fm © F and real numbers ¢),...,¢m such that X(w) = Satn(u) : Let B, be Brownian motion on R, By = 0. Put E = E° 18. 2.9. 2.10. 2.11. 2, Some Mathematical Preliminaries a) Use (2.2.3) to prove that Ele] = exp(-3ut) for allueR. b) Use the power series expansion of the exponential function on both sides, compare the terms with the same power of u and deduce that EB?) = 30? and more generally that, (2k)! a, E[BM|=ae-qth: REN. c) If you feel uneasy about the lack of rigour in the method in b), you can proceed as follows: Prove that (2.2.2) implies that EIs(B)] = pee [ S(e)e Fade Z for all functions f such that the integral on the right converges Then apply this to f(z) = 2% and use integration by parts and induction on k. 4) Prove (2.2.14), for example by using b) and induction on n. To illustrate that the (finite-dimensional) distributions alone do not give all the information regarding the continuity properties of a pro- cess, consider the following example: Let (2, F, P) = (0,00), B, 2) where B denotes the Borel a-algebra on (0,00) and p is a probability measure on [0,00) with no mass on single points. Define ' 1 ift=w HXalw) = { 0 otherwise and ¥;,(w) =0 for all (t,w) € [0, 00) x (0,00) . Prove that {X,} and {¥,} have the same distributions and that X; is a version of Y;. And yet we have that t + ¥;(w) is continuous for all w, while t + X;(w) is discontinuous for all w. A stochastic process X; is called stationary if {X;} has the same dis- tribution as {X;4,} for any h > 0. Prove that Brownian motion B, has stationary increments, i.e. that the process {Bi4n — Bi}n>0 has the same distribution for all t. Prove (2.2.15). +12. -13. 14, 15. -16. 17. Exercises 19 Let B be Brownian motion and fix to > 0. Prove that Bu= Bust—-Byi t20 is a Brownian motion Let B, be 2-dimensional Brownian motion and put D,={cER?*;|2| 0- Compute P°|Be € Dy). Let B, be n-dimensional Brownian motion and let K C R" have zero n-dimensional Lebesgue measure. Prove that the expected total length of time that B, spends in K is zero. (This implies that the Green measure associated with B, is absolutely continuous with respect to Lebesgue measure. See Chapter 9). Let By be n-dimensional Brownian motion starting at 0 and let UeR"*" be a (constant) orthogonal matrix, ie. UUT =I. Prove that By: = UB, is also a Brownian motion. (Brownian scaling). Let B, be a 1-dimensional Brownian motion and let c > 0 be a constant. Prove that is also a Brownian motion. 
If X,(-): @ — R is a continuous stochastic process, then for p > 0 the p’th variation process of X;, (X,X)\”) is defined by (XX) (w) = slim | ST [Xa )-Xe,(w))? (limit in probability) test where 0 = t) < tz <...< ty =t and At, = ty41 — te. In particular, if p = 1 this process is called the total variation process and if p = 2 this is called the quadratic variation process. (See Exercise 4.7.) For Brownian motion B, ¢ R we now show that the quadratic variation process is simply (B, B)(w) = (B, BY (w) =t as. Proceed as follows: 20 2. Some Mathematical Preliminaries a) Define ABy = Buys ~ Bu and put. ¥(tw) = S>(ABx(w))? teSt Show that. E\(Y0(ABx)? — #)7] = 2S (Ate)? teSt teSt and deduce that ¥(t,-) + t in L?(P) as Aty oo. b) Use a) to prove that a.a. paths of Brownian motion do not have a bounded variation on [0,t], ie. the total variation of Brownian motion is infinite, a.s. 2.18. a) Let 2 = {1,2,3,4,5} and let U/ be the collection U = {(1,2,3}, (3,4,5}} of subsets of (2. Find the smallest o-algebra containing U (i.e. the o-algebra Hy generated by U). b) Define X : 2+ R by X(1)=X(2)=0, X(3)=10, X(4) = X(5)=1 Is X measurable with respect to Hu? c) Define Y : 2 4 R by YQ) =0, Y(2)=¥(3)=¥(4)=¥(5) =1 Find the o-algebra Hy generated by ¥ 2.19. Let (2,F,u) be a probability space and let p € {1,00]. A sequence {fn}%21 of functions fy, € L?() is called a Cauchy sequence if fn -fmllp 70 as nym — oo. ‘The sequence is called convergent if there exists f € L?(s) such that fn f in L(y). Prove that every convergent sequence is a Cauchy sequence A fundamental theorem in measure theory states that the converse is also true: Every Cauchy sequeence in L?(s) is convergent. A normed linear space with this property is called complete. Thus the L?(s1) spaces are complete. 2.20. Let B, be 1-dimensional Brownian motion, o € R. be constant and OSs Wz, and W,, are independent. (ii) {W,} is stationary, i.e. the (joint) distribution of {Wi +.- does not depend on t. (iii) B[W;} =0 for all t. West} However, it turns out there does not exist any “reasonable” stochastic process satisfying (i) and (ii): Such a W cannot have continuous paths. (See Exercise 3.11.) If we require E{W?] = 1 then the function (t,w) + W,(w) cannot even be measurable, with respect to the a-algebra B x F, where B is the Borel c-algebra on (0, 00]. (See Kallianpur (1980, p. 10).) Nevertheless it is possible to represent W, as a generalized stochastic process called the white noise process. That the process is generalized means that it can be constructed as a probability measure on the space S’ of tempered distributions on (0, 00), and not as a probability measure on the much smaller space R!), like an 22 3. Ito Integrals ordinary process can. See e.g. Hida (1980), Adler (1981), Rozanov (1982), Hida, Kuo, Potthoff and Streit (1993) or Holden, Oksendal, Ubge and Zhang (1996). We will avoid this kind of construction and rather try to rewrite equation (3.1.2) in a form that suggests a replacement of W; by a proper stochastic process: Let 0 = to < t) < ++: < tm = t and consider a discrete version of (3.1.2): Xeai — Xp =O te, Xe) Ate + o(te, Xe)WeAte , (3.1.3) where X;=X(ts) We=We, Ate =teyr—te- We abandon the Wg-notation and replace W, Ate by AVk = Vi, — Ve» where {V;}:30 is some suitable stochastic process. The assumptions (i), (ii) and (iii) on W; suggest that V; should have stationary independent increments with mean 0. It turns out that the only such process with continuous paths is the Brownian motion By. (See Knight (1981). 
Thus we put V; = B, and obtain from (3.1.3): ket koa Xp = Xo+D W(t, Xj) Aty + Do (ty, Xj)AB;- (3.1.4) = ja0 Is it possible to prove that the limit of the right hand side of (3.1.4) exists, in some sense, when At; — 0? If so, then by applying the usual integration notation we should obtain ‘ ‘ Xs Xo [ oe, Xa)ds+ « [ ols, X4)dB,” (3.1.5) 3 a and we would adopt as a convention that (3.1.2) really means that X, = X;(w) is a stochastic process satisfying (3.1.5). Thus, in the remainder of this chapter we will prove the existence, in a certain sense, of : « [Hs dB.oy" where B,(w) is 1-dimensional Brownian motion starting at the origin, for a wide class of functions f: 0, 00] x 92 + R. Then, in Chapter 5, we will return to the solution of (3.1.5). Suppose 0 < S$ Byam g20 2(t,w) = D> By srya-n(w) - Ay.2-n,y4ry2-m)(t) - 320 ) + jam uaaya-my (t) Then 1 E| f ereuntei(ay] = 30 BIB (BB) 3 = since {B,} has independent increments. But t ef oalt,w)dBe)| = PBIB. (Buss ~ By) 3 320 = DEB ys - 320 T, by (2.2.10) . So, in spite of the fact that both @; and ¢2 appear to be very reasonable approximations to S(tw) = Bw) , 24 3. It6 Integrals their integrals according to (3.1.8) are not close to each other at all, no matter how large n is chosen. ‘This only reflects the fact that the variations of the paths of By are too big to enable us to define the integral (3.1.6) in the Riemann-Stieltjes sense. In fact, one can show that the paths t + B, of Brownian motion are nowhere differentiable, almost surely (as.). (See Breiman (1968). In particular, the total variation of the path is infinite, a.s. In general it is natural to approximate a given function f(t,w) by LAG) Hegel 7 where the points ¢} belong to the intervals [t,,t;41], and then define r J J(t,w)dB,(w) as the limit (in a sense that we will explain) of ¥ F(t},w)[Byj41 — Be,](w) as n —+ 00. However, the example above shows 7 that - unlike the Riemann-Stieltjes integral ~ it does make a difference here what points t} we choose, The following two choices have turned out to be the most useful ones: 1) #5 = ty (the left end point), which leads to the [té integral, from now on denoted by + [sew aBete) 3 and 2) t5 = (t-+tj41)/2 (the mid point), which leads to the Stratonovich integral, denoted by r fst w) 0dBy(w) . 5 (See Protter (1990, Th. V. 5.30)). In the end of this chapter we will explain why these choices are the best and discuss the relations and distinctions between the corresponding inte- grals. In any case one must restrict oneself to a special class of functions f(t, w) in (3.1.6), also if they have the particular form (3.1.7), in order to obtain a reasonable definition of the integral. We will here present Ité’s choice ty = t;. The approximation procedure indicated above will work out success- fully provided that f has the property that each of the functions w — f(t;,w) only depends on the behaviour of B,(w) up to time t;. This leads to the fol- lowing important concepts: 3.1 Construction of the Ito Integral 25 Definition 3.1.2. Let By(w) be n-dimensional Brownian motion. Then we define F, = Fi" to be the o-algebra generated by the random variables {Bil3)} crcnocece: In other words, F; is the smallest c-algebra containing all sets of the form {w; Be (w) € Fiy+++, Br, (w) € Fae}. where ty St and F; CR" are Borel sets, j < k = 1,2... (We assume that all sets of measure zero are included in F,). One often thinks of F; as “the history of B, up to time t”. A function A(w) will be F;-measurable if and only if h can be written as the pointwise a.e. 
limit of sums of functions of the form (Bt, )92(Bea)---9x(Ba) 5 where gi,.-.,9e are bounded continuous functions and ty < ¢ for j < k, k=1,2,.... (See Exercise 3.14.) Intuitively, that A is F,-measurable means that the value of h(w) can be decided from the values of B,(w) for s < ¢. For example, hy (w) = Byo(w) is F-measurable, while ho(w) = Bze(w) is not. Note that F, CF; for s < t (ie. {Fi} is increasing) and that Fy C F for all t. Definition 3.1.3. Let (Ni}is0 be an increasing family of o-algebras of sub- sets of Q. A process 9(t,w):|0, 00) x 2 + R” is called N,-adapted if for each t > 0 the function w > 9(t,w) is N,-measurable. Thus the process hi(t,w) = Byo(w) is Fi-adapted, while ha(t,w) = Boz(w) is not. We now describe our class of functions for which the It6 integral will be defined: Definition 3.1.4. Let V = V(S,T) be the class of functions f(t,w): (0,00) x Q+R such that (i) (tw) — f(tw) is Bx F-measurable, where B denotes the Borel o- algebra on (0,00). (ii) f(t,w) is Fe-adapted. T (iii) BL f f(t,w)dt] < 00, 3 26 3. 1t6 Integrals The Ité Integral For functions f € V we will now show how to define the /té integral t Tipe) = f H(tw)aB,(w), 3 where B, is 1-dimensional Brownian motion. The idea is natural: First we define Z(¢] for a simple class of functions @. Then we show that each f € V can be approximated (in an appropriate sense) by such ¢’s and we use this to define J fdB as the limit of f dB as of. We now give the details of this construction: A function ¢ € V is called elementary if it has the form (E02) = S765) Missy r(B) (3.1.9) a Note that since ¢ € V each function ey must be F,,-measurable. Thus in Example 3.1.1 above the function @; is elementary while ¢2 is not. For elementary functions ¢(t,w) we define the integral according to (3.1.8), ie. : J oleo)dBe) =O esle)lBy.. ~ Balle) (3.1.10) 3 720 Now we make the following important observation: Lemma 3.1.5 (The It6 isometry). If d(t,w) is bounded and elementary then + , . ww = w)2 . 3.1. l(Joe )aB(o)) | s| fo vat] (3.1.11) Proof. Put AB; = By,,, — By,. Then ={ 0 Il i 77 Ell Ga~t) if i=5 using that ee; AB, and AB, are independent if i < j. Thus o(( fo) = YF ABAB,| = Deel + (than ty) - of Eleces AB,AB, 3.1 Construction of the It6 Integral 27 The idea is now to use the isometry (3.1.11) to extend the definition from elementary functions to functions in V. We do this in several steps: Step 1. Let g € V be bounded and 9(-,w) continuous for each w. Then there exist elementary functions dq € Y such that T | (a on)at] +0 asnsoo ! Proof. Define dn(t,w) = 5 g(ts,)-%e, ty41y(t)- Then dn is elementary since a 9 €V, and r [(a-onfaro as n +00, for each w, 3 r since g(-,w) is continuous for each w. Hence E[{(g—@n)?dt] + 0 as n > 00, 3 by bounded convergence. Step 2. Let h € V be bounded. Then there exist bounded functions gn € V such that gn(-,w) is continuous for all w and n, and t —9,)at| >» s[ fo on) a! 0. Proof. Suppose |h(t,w)| 0 and a) F valeyde =1 Define Gn{t,w) = | Yn(s —t)h(s,w)ds. Then gq(:,w) is continuous for each w and |gn(t,w)| < M. Since h € V we can show that gq(t,-) is F,-measurable for ail t. (This is a subtle point; see e.g. Karatzas and Shreve (1991), p. 133 for details.) Moreover, 2B 3. Ito Integrals r [tonto —h(s,w))*ds +0 as n+ 00, for each w, 5 since {Vn}q constitutes an approximate identity. (See e.g. Hoffman (1962, p. 22).) So by bounded convergence e[ foes = ult] 30 asn>oo, as asserted, Step 3. Let f € V. Then there exists a sequence {hn} C V such that hy is bounded for each n. and 2| fu sora + 0asn— oo. 
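Before the construction is carried out, it may help to see numerically why the choice of evaluation points in Example 3.1.1 matters. The sketch below is an illustration only; the horizon, the number of subintervals and the sample size are assumptions made for the demo, not values from the book. It estimates $E\big[\sum_j B_{t_j}\,\Delta B_j\big]$ and $E\big[\sum_j B_{t_{j+1}}\,\Delta B_j\big]$ by Monte Carlo; the first comes out near $0$ and the second near $T$, in line with the computation for $\phi_1$ and $\phi_2$.

```python
import numpy as np

# Numerical illustration of Example 3.1.1 (parameters are assumptions for the
# demo): compare the expectations of the two approximating sums built from
# phi_1 (left end points) and phi_2 (right end points).
rng = np.random.default_rng(1)

T, n_steps, n_paths = 1.0, 2**8, 50_000
dt = T / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))        # Delta B_j
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

left_sum = np.sum(B[:, :-1] * dB, axis=1)    # sum_j B_{t_j}     * Delta B_j
right_sum = np.sum(B[:, 1:] * dB, axis=1)    # sum_j B_{t_{j+1}} * Delta B_j

print("E[ sum B_{t_j}     dB_j ] ~", left_sum.mean())    # close to 0
print("E[ sum B_{t_{j+1}} dB_j ] ~", right_sum.mean())   # close to T = 1
```

This gap between the two approximations is exactly what forces a choice of evaluation point; the Itô integral, constructed next, corresponds to the left end points.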
3 Proof. Put —n if f(tw)<—n ha{tiw) = 4 f(tw) if —n< f(tw)n. ‘Then the conclusion follows by dominated convergence. That completes the approximation procedure. We are now ready to complete the definition of the Ité integral r [tyaBe) for fev. 3 If f € V we choose, by Steps 1-3, elementary functions ¢, € V such that Tr = dnl2at| 0. alfu Ci at 0 Then define Tf\w Tr Tr = f Ho yaeuye im, | oaltaydBle) 3 3 T The limit exists as an element of L?(P), since { [dn (t,w)dB,(w)} forms a 3 Cauchy sequence in L?(P), by (3.1.11). We summarize this as follows: 3.1 Construction of the Ité Integral 29 Definition 3.1.6 (The It6 integral). Let f ¢ V(S,T). Then the Ito inte- gral of f (from S to T) is defined by T r [ feeyaB(w) = sh, f én(t.)aBi(wy (limit in L°(P)) (3.1.12) 8 5 where {én} is a sequence of elementary functions such that r el free) - énttw)?a] 0 asno. (3.1.13) 3 Note that such a sequence {Gn} satisfying (3.1.13) exists by Steps 1-3 above. Moreover, by (3.1.11) the limit in (3.1.12) exists and does not depend on the actual choice of {én}, as long as (3.1.13) holds. Furthermore, from (3.1.11) and (3.1.12) we get the following important Corollary 3.1.7 (The Ité isometry). r 2 r E f(t,w)dB,) | =E| | fr(tw)dt} for all fe V(S,T). (3.1.14) (forme) =e free Corollary 3.1.8. If f(t,w) € V(S,T) and fa(t,w) € V(S,T) forn = 1,2,... and BL] Unltw) ~ f(t,w))2dt] + 0 as n — 00, then 8 T T [ foltoydBw) = f HywydBlw) in L?(P) asn— 00. 3 5 We illustrate this integral with an example: Example 3.1.9. Assume By = 0. Then t [2a8. = 4BP-ht. 3 Proof. Put dn(s,w) = Bs(w) - Xe,,t,41)(s), where By = By,. Then ‘ bn | [60 - ,)s] ey [@ - ,)s] oO ot typ =X fos as=T Hon -4) 0 as At; +0. 7 4 F 30 3. Ito Integrals So by Corollary 3.1.8 ‘ ‘ [ B.aB. = aim, f ond, = dim BAB, . 0 0 7 (See also Exercise 3.13.) Now A(B}) = Bh, ~ BY = (Bis — By)? + 2B;(Bjsr ~ By) = (AB,)? +2B;AB;, and therefore, since By = 0, B? = > A(B?) = )7(4B,)? +2.> BAB; 7 7 7 or DY B)4B, = 4B? - 457(4B;). 7 7 Since )-(AB,)? + t in L?(P) as At; + 0 (Exercise 2.17), the result follows. 7 ‘The extra term —}¢ shows that the It6 stochastic integral does not behave like ordinary integrals. In the next chapter we will establish the t6 formula, which explains the result in this example and which makes it easy to calculate many stochastic integrals. 3.2 Some Properties of the Ité Integral First we observe the following: Theorem 3. Let f,g € V(0,T) and let0< S0 of o-algebras M, C F such that OM,cM (i.e. {M,} és increasing). An n-dimensional stochastic process {M}120 on (Q,F,P) is called a martingale with respect to a filtration {M,}r>0 (and with respect to P) if (i) M, is My-measurable for all t, (ii) El|Mil] < 00 for allt and (iii) E[M,|M,] = M, for all s > t. Here the expectation in (ii) and the conditional expectation in (iii) is taken with respect to P = P®. (See Appendix B for a survey of conditional expectation), Example 3.2.3. Brownian motion B; in R” is a martingale w.r.t. the o- algebras F, generated by {B,;s < t}, because E\|Bil? < EllBi?] =|Bol? +nt and if s > ¢ then E[B.|Fi] = E[Bs ~ Be + BilFi) = E(B, - BilFi] + E[BlF| = 0+ Be = Be Here we have used that E[(B, — Br)|¥:] = E[B, — Bi] = 0 since B, — By is independent of F; (see (2.2.11) and Theorem B.2.d)) and we have used that E|B.\Fi] = B, since B, is F;-measurable (see Theorem B.2.c)). For continuous martingales we have the following important inequality due to Doob: (See e.g. Stroock and Varadhan (1979), Theorem 1.2.3 or Revuz and Yor (1991), Theorem II.1.7) Theorem 3.2.4 (Doob’s martingale inequality). 
If M, is a martingale such that t + My(w) is continuous a.s., then for all p > 1,T > 0 and all A>0 7 P{ sup |Mi| >A} < <>: El|Mrl?| (sup pIM 2 AL S 5p Ello We now use this inequality to prove that the It6 integral : J fewyan, 4 can be chosen to depend continuously on t : 323. It6 Integrals Theorem 3.2.5. Let f € V(0,7). Then there exists a t-continuous version of f ovoydate) O d<3 Ellin (Tow) = In(T2)?) Tr | (on ~ bm)? »0 as mjn— oo. ! Hence we may choose a subsequence nx 1 00 sit. Pl sup Unasa(ts) — Ing (t)| > 2-*] < 27* ote : By the Borel-Cantelli lemma P[_ sup _[Ingys(t,) — Ing (tw)] > 27* for infinitely many k] =0. OstsT So for a.a. w there exists ky(w) such that sup [Ing,,(t,w) — Ing (tsw)| << 27* for & > ki(w) OstsT ‘Therefore J, (t,w) is uniformly convergent for t € [0,7], for a.a. w and so the limit, denoted by J, (w), is t-continuous for t € [0,T], a.s. Since In,(t,-) > I(t,-) in L7[P} for all t, we must have =Aas., forall te (0,7). That completes the proof. 5 t From now on we shall always assume that J f(s,w)dB,(w) means a t- a continuous version of the integral. Corollary 3.2.6. Let f(t,w) € V(0,T) for all T. Then Mw) p f flea, a is a martingale w.r.t. Fy and Pl sup, IM] >A] < 3 a was] : AT >0 (3.2.3) Proof. This follows from (3.2.2), the a.s. t-continuity of M, and the martin- gale inequality (Theorem 3.2.4), combined with the Ité isometry (3.1.14). a 34 3. Ito Integrals 3.3 Extensions of the Ité Integral The Ité integral [ fdB can be defined for a larger class of integrands f than V. First, the measurability condition ({i) of Definition 3.1.4 can be relaxed to the following: (ii)’ There exists an increasing family of o-algebras Hy;t > 0 such that a) B, is a martingale with respect to 21, and b) fe is Hi-adapted. Note that a) implies that 7, C H:. The essence of this extension is that we can allow f, to depend on more than F; as long as B, remains a martingale with respect to the “history” of f,; s < t. If (ii)’ holds, then E[B,—By|H] = 0 for all s > t and if we inspect our proofs above, we see that this is sufficient, to carry out the construction of the Ité integral as before. ‘The most important example of a situation where (ii)’ applies (and (ii) doesn’t) is the following: Suppose By(w) = By(t,w) is the k’th coordinate of n-dimensional Brown- ian motion (By,..., Bn). Let F{” be the o-algebra generated by By(s1,+),---s Ba(Sny-); se < t. Then By(t,w) is a martingale with respect to F;”) because B,(s,-) — Be(t,-) is independent of A") when s > t. Hence we can choose t Hz = F{" im (ii)' above. Thus we have now defined f f(s,w)dBx(s,w) for a #{")-edapted integrands f(t,w). That includes integrals like [mae or [sets B3) dBy involving several components of n-dimensional Brownian motion. (Here we have used the notation dB; = dBi (t,w) etc.) This allows us to define the multi-dimensional Ité integral as follows: Definition 3.3.1. Let B = (B),Bo,...,Bn) be n-dimensional Brownian motion. Then Vi2*"(S,T) denotes the set of m xn matrices v = [v;(t,w)] where each entry vis(t,w) satisfies (i) and (iii) of Definition 9.1.4 and (%i)’ above, with respect to some filtration H = {Hi}t20: Ifue VR*"(S,T) we define, using matrix notation z oi Vin dB, jon f (7 "\ (| 2 S Atha a dB, to be the m x1 matrix (column vector) whose i'th component is the following sum of (extended) 1-dimensional Ité integrals: at Df vste.wydB(s.0). 
Jal 3.3 Extensions of the It6 integral 35 If H = FO = (FI }iz0 we write V%"(S,T) and if m = 1 we write VE(S,T) (respectively V"(S,T)) instead of V2X1(S,T) (respectively yr=1(5,T)). We also put ymxn *"(0,00) = [1] v™™"(0,T) - T>0 ‘The next extension of the Itd integral consists of weakening condition (iii) of Definition 3.1.4 to (iii)’ of f sonst <| 3 Definition 3.3.2. W(S,T) denotes the class of processes f(t,w) € R satis- fying (i) of Definition 3.1.4 and (it)’, (1ii)’ above. Similarly to the notation for V we put We = (Wa (0,T) and in the matrix case we write WE*"(S,T) ete. FH =F we unite WIS, T) instead of Wyim (S,T) etc. If the dimen- sion is clear from the contert we sometimes drop the superscript and write F for F and so on. Let B, denote 1-dimensional Brownian motion. If f € Wy one can show that for all ¢ there exist step fumetions J, € W such that ['|f, —fl2ds —» 0 in probability, ie. in measure with respect to P. For such a sequence one has that f fa(s,w)dB, converges in probability to some random variable and the limit only depends on f, not on the sequence {f,}. Thus we may define : : fi F(s.w)aB,(w) = Jim, f f(s, w)4B,(w) (limit in probability) for fe Wr . 3 5 3.3.1 ‘As before there exists a t-continuous version of this integral. See Pedi (1975, Chap. 4) or McKean (1969, Chap. 2) for details. Note, however, that this integral is not in general a martingale. See for example Dudley’s Theorem (Theorem 12.1.5). It is, however, a local martingale. See Karatzas and Shreve (1991), p. 146. See also Exercise 7.12. A comparison of It6 and Stratonovich integrals Let us now return to our original question in this chapter: We have argued that the mathematical interpretation of the white noise equation ax = W(t, Xe) + o(t, Xe) We (3.3.2) 36 3. Ito Integrals is that X; is a solution of the integral equation X= Xo f dle,Xa)de * fols.x.yaa. (3.3.3) 3 for some suitable interpretation of the last integral in (3.3.3). However, as indicated earlier, the It interpretation of an integral of the form « f feeyabey (*) j is just one of several reasonable choices. For example, the Stratonovich in- tegral is another possibility, leading (in general) to a different result. So the question still remains: Which interpretation of (x) makes (3.3.3) the “right” mathematical model for the equation (3.8.2)? Here is an argument that in- dicates that the Stratonovich interpretation in some situations may be the most appropriate: Choose t-continuously differentiable processes B{") such that for a.a. w BO(t,u) + B(tw) as n+ 00 uniformly (in ¢) in bounded intervals. For each w let X{") (w) be the solution of the corresponding (deterministic) differential equation aX, _ api” = B(t, Xz) + oft, Xe a (3.3.4) Then X{")(w) converges to some function X;(w) in the same sense: For a.a. w we have that X{")(w) X;(w) as n — oo, uniformly (in ¢) in bounded intervals. It turns out (see Wong and Zakai (1969) and Sussman (1978) that this so- lution X; coincides with the solution of (3.3.3) obtained by using Stratonovich integrals, i.e. t t X= Xo [ Ws, X.)ds+ [os X.) oan, . (3.3.5) a 0 This implies that X; is the solution of the following modified It6 equation: X,=Xot | (8, X,)ds +3 | o'(s, Xs)o(s,X,)ds+ | o(s,X,)dB,, (3.3.6) [roxowes] i where o’ denotes the derivative of a(t, r) with respect to x. (See Stratonovich (1966)). Exercises 37 Therefore, from this point of view it seems reasonable to use (3.3.6) (ie. 
the Stratonovich interpretation) ~ and not the Ité interpretation t t X,=Xo+ | (s,X)ds+ | o(s,Xs)dBy (3.3.7) [roses] as the model for the original white noise equation (3.3.2) On the other hand, the specific feature of the It6 model of “not looking into the future” (as explained after Example 3.1.1) seems to be a reason for choosing the It6 interpretation in many cases, for example in biology (see the discussion in Turelli (1977)). The difference between the two interpretations is illustrated in Example 5.1.1. Note that (3.3.6) and (3.3.7) coincide if o(t, x) does not depend on z. For example, this is the situation in the linear case handled in the filtering problem in Chapter 6. In any case, because of the explicit connection (3.3.6) between the two models (and a similar connection in higher dimensions — see (6.1.3)), it will for many purposes suffice to do the general mathematical treatment for one of the two types of integrals. In general one can say that the Stratonovich integral has the advantage of leading to ordinary chain rule formulas under a transformation (change of variable), i.e. there are no second order terms in the Stratonovich analogue of the It6 transformation formula (see Theorems 4.1.2 and 4.2.1). This property makes the Stratonovich integral natural to use for example in connection with stochastic differential equations on manifolds (see Elworthy (1982) or Ikeda and Watanabe (1989)). However, Stratonovich integrals are not martingales, as we have seen that Ité integrals are. This gives the Ité integral an important computational advantage, even though it does not behave so nicely under transformations (as Example 3.1.9 shows). For our purposes the Ito integral will be most convenient, so we will base our discussion on that from now on. Exercises Unless otherwise stated B, denotes Brownian motion in R, Bo = 0. 3.1. Prove directly from the definition of It6 integrals (Definition 3.1.6) that. t saz, =tB,- [ Bats : 3 a (Hint: Note that Y4(s;B,) =D s,4B; + Yo By+1.As; .) 7 7 7 38 3.2, 3.3, 3.4, 3.5. 3.6. 3.7. 3. 16 Integrals Prove directly from the definition of Ité integrals that If Xp: — R® is a stochastic process, let H, = 71{*) denote the o- algebra generated by {Xo(-); s < ¢} (i.e. {H™ }izo is the filtration of the process {X1}:20) a) Show that if X; is a martingale w.r-t. some filtration {N}.20, then X; is also a martingale w.r.t. its own filtration (H! }iz0 - b) Show that if X; is a martingale w.r.t H{*?, then E[XiJ = E[Xo] for all t> 0. (*) c) Give an example of a stochastic process X; satisfying (+) and which is not a martingale w.r.t. its own filtration. Check whether the following processes X; are martingales w.r.t. {F.}: () X= Be+4t (ii) X, = B? ‘ (iii) X, = PB, - 2 f sB,ds 3 (iv) Xi = B1(t)Ba(t), where (By (t), Ba(t)) is 2-dimensional Brownian motion. Prove directly (without using Example 3.1.9) that M, = B?-t is an F,-martingale. Prove that N, = B} — 3tB; is a martingale. A famous result of Ité (1951) gives the following formula for n times iterated Ité integrals: n fo-ftJ dsus where hn is the Hermite polynomial of degree n, defined by 4B, =t? hy (3) (3.3.8) ha(x) = care £e*) ; n-01,2,.. (Thus ho(z) = 1, hx(z) = 2, ho(z) = 2? 1, ha(z) = 2° - 32.) a) Verify that in each of these n Ité integrals the integrand satisfies the requirements in Definition 3.1.4. 3.8. 3.9. 3.10. Exercises 39 b) Verify formula (3.3.8) for = 1,2,3 by combining Example 3.1.9 and Exercise 3.2. ©) Use b) to give a new proof of the statement in Exercise 3.6. 
a) Let Y be a real valued random variable on (2, F, P) such that El\Y|] <0. Define M = BIY|Fi]; t20. Show that M, is an F,-martingale. b) Conversely, let My; t > 0 be a real valued F,-martingale such that sup El|M,|?] < 0c for some p> 1 120 Show that there exists ¥ € L1(P) such that M, = ElY|Fi] (Hint: Use Corollary C.7.) Suppose f € V(0,T) and that t + f(t,w) is continuous for a.a. w. Then we have shown that r f F(o.2)dBeu) = jm...) 4B, in L>(P). 0 ’ j Similarly we define the Stratonovich integral of f by r J fe2odB.(w)= sim, 9 F652), where tj=3(t; +ty41) . 0 7) whenever the limit exists in L?(P). In general these integrals are dif- ferent. For example, compute r [Boas 3 and compare with Example 3.1.9. If the function f in Exercise 3.9 varies “smoothly” with t then in fact the It6 and Stratonovich integrals of f coincide. More precisely, assume that there exists K < oo and e > 0 such that Ellf(s-)- f(t P< Kis-ti**; OSs, t f(t), ) AB; -— Do f(t,,w)4B,I].) 5 5 3.11. Let W, be a stochastic process satisfying (i), (ii) and (iii) (below (3.1.2)). Prove that W; cannot have continuous paths. (Hint: Consider E(w) — wi), where wi” =(-N)v (NAW), N =1,2,3,...)- 3.12. As in Exercise 3.9 we let odB, denote Stratonovich differenti (i) Use (3.3.6) to transform the following Stratonovich differential equations into Ité differential equations: yXedt +aX,odB sin X; cos X;dt + (t? + cos X;) o dBy (ii) Transform the following Ito differential equations into Stratonovich differential equations: rXidt + aX,dBy 2e-Xedt + XPdBy 3.13. A stochastic process X,(-): 2 + R is continuous in mean square if E[X?] < 00 for all ¢ and lim B(X,-Xi)?]=0 for all 20. a) Prove that Brownian motion B, is continuous in mean square. b) Let f:R — R be a Lipschitz continuous function, i-e. there exists C < oo such that fe) -Sy)| [i - Xyo)Pat)) a im 114. Show that a function h(w) is F;-measurable if and only if h is a point- wise limit (for a.a. w) of sums of functions of the form (Be;) - 92( Bia)» 9x(Be,) where gi,..., gk are bounded continuous functions and tj < t for j < k, k=1,2,... Hint: Complete the following steps: a) We may assume that h is bounded. b) For n = 1,2,... and j = 1,2,... put t; = t = j-2-". For fixed n let Hp, be the o-algebra generated by {Bi,(-)}1, R be a random variable such that E[X?] < co and let CF be a o-algebra. Show that E[(E[X|H))?] < E(x?) . (See Lemma 6.1.1. See also the Jensen inequality for conditional ex- pectation (Appendix B).) Let (2,F, P) be a probability space and let X:2 + R be a random variable with E|X|] < 00. If G C F is a finite o-algebra, then by Exercise 2.7 there exists a partition 2 = LJ G; such that G consists 1 of 0 and unions of some (or all) of Gi,...,Gn- a) Explain why E[X|G](w) is constant on each Gy. (See Exercise 2.7 c).) b) Assume that P[G)] > 0. Show that Io, XaP E(x! = -G ee IXIG\w)= “Be for we c) Suppose X assumes only finitely many values a), ... dm. Then from elementary probability theory we know that (see Exercise 2.1) E[X|Gi] =O ae P[X = anlGy) - kel Compare with b) and verify that E[X|Gi] = E[X|G|w) for we Gy. Thus we may regard the conditional expectation as defined in Ap- pendix B as a (substantial) generalization of the conditional expec- tation in elementary probability theory. Let B, be 1-dimensional Brownian motion and let 7 € R be constant. Prove directly from the definition that M, := exp(oB, — 3074); t>0 is an F;-martingale. (Hint: If s > ¢ then Elexp(oB, ~ 307s)|F:] = Elexp(o(Bs — Br) - exp(oBy — 40?s)|F;|. 
Now use Theorem B.2 e), Theorem B.2 d) and Exercise 2.20.) 4. The Ité Formula and the Martingale Representation Theorem 4.1 The 1-dimensional It6 Formula Example 3.1.9 illustrates that the basic definition of It6 integrals is not very useful when we try to evaluate a given integral. This is similar to the situation for ordinary Riemann integrals, where we do not use the basic definition but rather the fundamental theorem of calculus plus the chain rule in the explicit calculations. In this context, however, we have no differentiation theory, only integra- tion theory. Nevertheless it turns out that it is possible to establish an Ito integral version of the chain rule, called the It6 formula. The Ité formula is, as we will show by examples, very useful for evaluating It integrals. From the example t [Pe. =4BP-ht or 4BP= ht + [Bde (4.1.1) ° 2 t we see that the image of the Ité integral B, = [dB, by the map g(x) = } a is not again an Ité integral of the form J HosyaB.tu) 4 but a combination of a dB,-and a ds-integral: ' ‘ Bi = [yas+ [ Bab, (4.1.2) 3 3 Tt turns out that if we introduce 116 processes (also called stochastic integrals) as sums of a dB,-and a ds-integral then this family of integrals is stable under smooth maps. Thus we define 44 4. The [t6 Formula and the Martingale ... Definition 4.1.1 (1-dimensional It6 processes). Let B, be 1-dimensional Brownian motion on (2,F,P). A (1-dimensional) Tté process (or stochastic integral) is a stochastic process X, on (2,.F,P) of the form ‘ ‘ =Xo + | uloeyds + [ (sua, : (4.1.3) 3 0 where v € Wx, so that t ol [ v(sweytas < co for allt > o| =1 (4.14) ° (see Definition 3.3.2). We also assume that u is H,-adapted (where H, is as in (ii)’, Section 3.3) and t PL fits,oris <00 for allt > 4| =1. (4.1.5) 3 If X; is an Ité process of the form (4.1.3) the equation (4.1.3) is sometimes written in the shorter differential form dX, = udt + vdB, . (4.1.6) For example, (4.1.1) (or (4-1.2)) may be represented by d(}B?) = bdt + BdBy . We are now ready to state the first main result in this chapter: Theorem 4.1.2 (The 1-dimensional Ité formula). Let X, be an Ité process given by dX, = udt + dB, Let g(t,x) € C?({0,00) x R) (i.e. g is twice continuously differentiable on. {0,00) x R). Then Y= g(t, X1) is again an It6 process, and a ay = Bit, Xedat +3 Slt poe Foe, A) (Gy, Gin where (dX,)? = (4X1) -(dXt) is computed according to the rules dt-dt =dt-dBy=dB,-dt=0, dBy-dB=dt. (4.1.8) Before we prove Ité’s formula let us look at some examples. 4.1 The I-dimensional It formula 45, Example 4.1.3. Let us return to the integral ‘ f B,dB, from Chapter 3. Choose X; = By and g(t,z) = 427. Then 1 Y= a(t, B) = 5B? Then by Ité’s — ay = Sar + 288, + $78 an, = BAB, + $(d.)® = BudB, + $at Hence d(4B?) = BydB, + hat. In other words, t [3B + 4t, asin Chapter 3. Example 4.1.4. What is t fas. ? a From classical calculus it seems reasonable that a term of the form tB, should appear, so we put g(ts2) = and Y, = g(t, Br) = tBr Then by Ité’s formula, dY, = Bydt + tdB, +0 Le. d(tB,) = Budt + tdBy or a. ‘ Bix | Bais saB, 3 a or t [sae =tB,- f Bads ° 0 which is reasonable from an integration-by-parts point of view. 46 4. The [t6 Formula and the Martingale More generally, the same method gives Theorem 4.1.5 (Integration by parts). Suppose f(s,w) is continuous an of bounded variation with respect to s € [0,t], for a.a. w. (See Exercise 2.17.) Then i Slo)aB, = F()Be— | Body. Note that it is crucial for the result to hold that f is of bounded variation. (See Exercise 4.3 for the general case.) 
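Identities such as the one in Example 4.1.4 and Theorem 4.1.5 are easy to check on a simulated path. The following minimal sketch (Python with NumPy; the horizon, step number and seed are arbitrary illustrative choices) compares a left endpoint approximation of the integral of s dB_s over [0,t] with t B_t minus the integral of B_s ds, both computed from the same path:

import numpy as np

rng = np.random.default_rng(1)
t, N = 1.0, 100_000                         # horizon and number of time steps (arbitrary)
dt = t / N
s = np.linspace(0.0, t, N + 1)
dB = np.sqrt(dt) * rng.standard_normal(N)
B = np.concatenate(([0.0], np.cumsum(dB)))  # B[k] = B_{s_k}, with B_0 = 0

lhs = np.sum(s[:-1] * dB)                   # left endpoint sum for  int_0^t s dB_s
rhs = t * B[-1] - np.sum(B[:-1]) * dt       # t*B_t - int_0^t B_s ds  (Riemann sum)

print(lhs, rhs)                             # the two numbers agree up to discretization error

On every sample path the two numbers coincide up to a small discretization error, in accordance with Theorem 4.1.5.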
Sketch of proof of the Ité formula. First observe that if we substitute dX, = udt + dB, in (4.1.7) and use (4.1.8) we get the equivalent expression ‘ Gt, Xt) = 9(0, Xo) + 28s, x,) + ue B(s.Xa) + 3 Far, x,) ie Pts) t +f 2s, X,)dB, where us =1u(s,w), ve = 0(s,w). (4.1.9) 0 Note that (4.1.9) is an It process in the sense of Definition 4. 1.1. We may assume that g, 92, 22 and #4 are bounded, for if (4.1.9) is proved in this case we obtain the general case by approximating by C? functions gn such that gn, 2, 222 and 9 are bounded for each n and converge uniformly on compact subsets of [0, 00) x R to g, 9, 92, 24, respectively. (See Exercise 4.9.) Moreover, from (3.3.1) we see that we may assume that u(t,w) and v(t,w) are elementary functions. Using Taylor’s theorem we get a(t, Xi) = 9(0, Xo) + EA 3) = = 90, Xo) + x at + Le 5% > meat og where $2, 32 etc. are evaluated at the points (t;, X:,), At = tyy1 tj, AXj = X44 — Xt, Molt Xj) = o(tj41s Xtj41) — g(t X5) and Ry = o(|At;|? +|X;l?) for all j. If At; +0 then a” re (AX; HD BN +e, 7 4.1 The I-dimensional It6 formula 47, t Le Sat; = Le (ty, Xj) At; > es, X4)ds (4.1.10) 0 ‘ x ax, = E Felt 4% = | 24, X,)dX,. (4.1.11) aed since u and v are elementary we get Lae 9 AX,)? = Le Fo 3(Ats)? +2 Pou, (At,)(OB3) 5 “Eh ?.(AB,)*, where uj = u(t;,w), vj = v(tj,w). (4.1.12) ‘The first two terms here tend to 0 as At; —> 0. For example, o[(% Seomcarn.as,) = 2 = x 4[ (uu) ]au 0 as At; +0. 7 We claim that the last term tends to far 5a" ‘ds in L?(P), as At; > 0 3 To prove this, put a(t) = 28(t, X1)v2(t,w), aj = a(t) and consider 2 E (= a;(AB;)?-S> #444)) ] =o Blaia;((ABi)?—At,)((4B;)—At;)} . a J +d If i j. So we are left with DY Fla}((4B;)? — 4t;)"] = > Efa3] - E[(AB,)* — 2(4B,)*At; + (At;)?] = Yo Bla}] - (8(At;)? — 2(At;)? + (At;)*) = 2 > Bla}] -(t;)? 7 7 0 as At; +0. In other words, we have established that 48 4. The It6 Formula and the Martingale t Ya(aB,y + folsyds in L(P) as Aty +0 a 0 and this is often expressed shortly by the striking formula (dB,)? = dt. (4.1.13) The argument above also proves that > Rj —+ 0 as At; —r 0. That completes the proof of the Ité formula. a Remark. Note that it is enough that g(t,) is C? on (0,00) x U, ifU CR is an open set such that X,(w) € U for all t > 0,w € 92. Moreover, it is sufficient that g(t,z) is C! w.rt. t and C? wrt. x. 4.2 The Multi-dimensional It6 Formula ‘We now turn to the situation in higher dimensions: Let B(t,w) =(Bi(t,w),..., B,,(t,w)) denote m-dimensional Brownian motion. If each of the processes ‘u;(t,w) and vi;(t, w) satisfies the conditions given in Definition 4.1.1 (1 | (4.2.3) Xn(t) Un Unt *** nm 4B,,(t) Such a process X (t) is called an n-dimensional It6 process (or just an It6 process). ‘We now ask: What is the result of applying a smooth function to X? The answer is given by Theorem 4.2.1 (The general It6 formula). Let X(t) = udt + vdB(t) be an n-dimensional It6 process as above. Let g(t,z) = (91(t,2),.--.9p(t,=)) be a C? map from (0,00) x R® into RP. Then the process 4.3 The Martingale Representation Theorem 49 ¥(t,w) = g(t, X(t) is again an [td process, whose component number k, Yu, is given by Pon Or; 99% 29% dY;, = BE NM SE Beh MK +E (t, X)dX,dXj where dB,dB; = 5,,dt, dBidt = dtd B, = 0. The proof is similar to the 1-dimensional version (Theorem 4.1.2) and is omitted. Example 4.2.2. Let B = (B1,...,Bn) be Brownian motion in R", n > 2, and consider Rit,w) = |B(t,w)| = (BU (t,w) +--+ Ba(t,w))#, ie. the distance to the origin of B(t,w). The function g(t, 2) = [2] is not C? 
at the origin, but since B, never hits the origin, a.s. when n > 2 (see Exercise 9.7) Ité’s formula still works and we get ane = Bab. an The process R is called the n-dimensional Bessel process because its generator (Chapter 7) is the Bessel differential operator Af(z) = 4 f(x) + %=1f"(z). See Example 8.4.1. 4.3 The Martingale Representation Theorem Let B(t) = (Bi(t),...,Bn(t)) be n-dimensional Brownian motion. In Chap- ter 3 (Corollary 3.2.6) we proved that if v € ¥" then the Ité integral ‘ . =Xo4 f vfsu)aB(s); t>0 4 is always a martingale w.r.t. filtration F{" (and w.r.t. the probability mea- sure P). In this section we will prove that the converse is also true: Any F{")-martingale (w.r.t. P) can be represented as an It6 integral (Theorem 4.3.4). This result, called the martingale representation theorem, is important for many applications, for example in mathematical finance. See Chapter 12. For simplicity we prove the result only when n = 1, but the reader can easily verify that essentially the same proof works for arbitrary n. We first establish some auxiliary results. 50 4. The It6 Formula and the Martingale Lemma 4. Fix T > 0. The set of random variables {0(Bus-. +, Be,)5 # € (0, T], 6 € CO(R"), n = 1,2 is dense in L*( Fr, P). Proof. Let: {t,}921 be a dense subset of [0,7] and for each n = 1,2,... let Hn be the o-algebra generated by By, (-),..-, Bz, (-)- Then clearly Hn C Hn4t and Fr is the smallest o-algebra containing all the Hn’s. Choose 9 € L*(Fr,P). Then by the martingale convergence theorem Corollary C.9 (Appendix C) we have that 9 = Elg|Fr| = lim Elg\Hn] The limit is pointwise a.e. (P) and in L?(Fr, P). By the Doob-Dynkin Lemma (Lemma 2.1.2) we can write, for each n, Elg|Htn] = gn(Bess-+++ Bea) for some Borel measurable function gq: — R. Each such gu(Bis-+-s Bea) can be approximated in L*(Fr, P) by functions n(Be,,-..,Be,) where bn € C%°(R") and the result follows. o For an alternative proof of the next result see Exercise 4.17. Lemma 4.3.2. The linear span of random variables of the type Tr Tr (ww) — 4 fh 3 ? ‘inis .3 oor{ [rane 3 [* (oat); heL?(0,T] (deterministic) (4.3.1) is dense in L?(Fr, P). Proof. Suppose g € L?(Fr, P) is orthogonal (in L?(Fr, P)) to all functions of the form (4.3.1). Then in particular GQ):= [esrb o) too + AnBe, (w)}g(w)dP(w) =0 (4.3.2) R for all A= (Ai,--+,An) € R® and alll ti,...,tn € [0,7]. The function G(A) is real analytic in A € R” and hence G has an analytic extension to the complex space C” given by Giz) = [orb tot znBig(is)}glw)dP(w) (43.2) é for all z=(z1,...,2n) €C". (See the estimates in Exercise 2.8 b).) Since G=0 on R® and G is analytic, G = 0 on C*. In particular, G(iy1, iy2,..-,ivn) = 0 for all y = (41,-..,9n) € R". But then we get, for ¢ € C2°(R"), 4.3 The Martingale Representation Theorem 51 Br,)g(w)dP(w) [2 = fan r( [Speier +48 dy) afapdPQw) a ‘ = en"? [ an ( / efi Br +4 Bea) dP) dy = (enyn? [ Byyetindy ~ (43.4) Re where du) = Comyn? f oteye tae sb is the Fourier transform of @ and we have used the inverse Fourier transform theorem ote) =(enyr? [Bye Mey ee (see e.g. Folland (1984)). By (4.3.4) and Lemma 4.3.1 g is orthogonal to a dense subset of L?(Fr, P) and we conclude that g = 0. Therefore the linear span of the functions in (4.3.1) must be dense in L?(Fr, P) as claimed. o Suppose B(t) = (Bi (t),..-,Bn(t)) is n-dimensional. If v(s,w) € ¥"(0,T) then the random variable T J eearane (4.3.5) o Vw): is Fi" measurable and by the Ité isometry T Elv?| = J Blea <0, soVEL (FM, P). 
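Before stating the representation theorem, here is a small Monte Carlo illustration of the Itô isometry just used, for the particular choice v(s,w) = B_s(w). It is only a sketch (Python with NumPy; horizon, discretization, sample size and seed are arbitrary choices): it estimates both E[(int_0^T B_s dB_s)^2] and E[int_0^T B_s^2 ds], which should both come out close to T^2/2:

import numpy as np

rng = np.random.default_rng(2)
T, N, M = 1.0, 200, 20_000                  # horizon, time steps, Monte Carlo samples
dt = T / N
dB = np.sqrt(dt) * rng.standard_normal((M, N))
B = np.cumsum(dB, axis=1) - dB              # Brownian values at the left endpoints

V = np.sum(B * dB, axis=1)                  # one sample of  int_0^T B_s dB_s  per row
lhs = np.mean(V ** 2)                       # estimate of E[V^2]
rhs = np.mean(np.sum(B ** 2, axis=1) * dt)  # estimate of E[ int_0^T B_s^2 ds ]

print(lhs, rhs, T ** 2 / 2)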
° ‘The next result states that any F € L?(F{”, P) can be represented this way: Theorem 4.3.3 (The Ité representation theorem). Let Fe L{F{), P). Then there exists a unique stochastic process f(t,w) € V"(0,T) such that r Flw) = E[F| + J feaB (4.3.6) a 52 4. The It6 Formula and the Martingale ... Proof. Again we consider only the case n = 1. (The proof in the general case is similar.) First assume that F has the form (4.3.1), ie. F(w) ~ar{ {noes f oa for some h(t) € L?[0,T]. Define rie)=eo{ [noe i [rom}: o 0. Then there exists a unique stochastic process 9(s,w) such that g € V')(0,t) for allt > 0 and t M,(w) = E[Mo} + f o(sayaBi) as. for all t>0. a Proof (n= 1). By Theorem 4.3.3 applied to T = t, F = M;, we have that for all t there exists a unique f((s,w) € L2(F,, P) such that t t Mw) = E[Mi] + f 1(s0)aB, E|Mo] + f[F%(s.0\dBu). ° a Now assume 0 < t; < fy. Then 5 BpMbFa) = Biol + BL f(s. )AB,(0)1Fa] 5 E(Mo] + ft F')(,0)dB,(w) « (4.3.7) Mu " 54 4. ‘The It6 Formula and the Martingale But we also have 4 M,= EM] + f £%se)dBate) (4.3.8) 3 Hence, comparing (4.3.7) and (4.3.8) we get that 4 . o= el( fue “x vat) | - J au ~ fOyP|ds 3 and therefore FP(s,u) = F%(s,u) for aa. (5,0) € [0,41] x 2. So we can define f(s,w) for a.a. s € [0,00) x 2 by setting Fs) =f%s,u) if s€ [0,N] and then we get t t M= EIM|+ f F(ou)dB.tw) = E[Mo|+ [How )dB.te) for all t > 0. 3 0 Oo Exercises 4.1. Use Ité’s formula to write the following stochastic processes Y; on the standard form dY, = u(t,w)dt + v(t,w)dB, for suitable choices of u€ R", v € R"*™ and dimensions n,m: a) ¥; = BP, where B; is 1-dimensional 2+t+e% (B, is 1-dimensional) B}(t) + B3(t) where (By, B2) is 2-dimensional = (to + t, By) (By is 1-dimensional) ©) Yi = (Bilt) + Ba(t) + Bs(t), B3(t)— Bi(t)Ba(t)), where (Bi, Ba, Bs) is 3-dimensional. 4.2. Use Ité’s formula to prove that t t [ sta. = }BS- [ Bets. a é 4.3. 44, Exercises 55 Let X;,¥; be Ité processes in R. Prove that A(XLY) = Xed Ve + VidXe + dX de - Deduce the following general integration by parts formula : : : [ Xd¥s = Xa Xo¥e — [ vax, - [ ax, -a¥, 3 ; (Exponential martingales) Suppose 6(t,w) = (A1(t,w),..-,On(t,w)) € R” with Ox(t,w) € V[O, 7) for k= 1,...,n, where T < oo. Define Z.= exp { [essay -} [Pane o0. Bult) = 4R(k - 1) f B.aledae; kee, 3 56 4.6. eel 4. The It6 Formula and the Martingale a) Deduce that E[B#| = 3t? (see (2.2.14)) and find EIB. b) Show that E[B(t)**") = 0 and k)t* BiB) = GE (Compare with Exercise 2.8.) a) For c,@ constants, B, € R define Xe k=1,2,... ecttabe Prove that aX, = (c+ 40?)Xidt + aXdBy b) For c,a1,..-,@n constants, By = (By(),..-, Bn(t)) € R" define X, = exp (« + DC) : jaa Prove that dX, = (e+ bda}) xt +x ( Saya) = jal Let X; be an Ité integral dX, = v(t,w)dB,(w) where v € V"(0,7), BE R",0 g uniformly, f{ > g! uni- formly and |ff/| < M, fi — 9” outside z1,..., zy. Apply a) to fr and let k — 00). Prove that we may assume that g and its first two derivatives are bounded in the proof of the It6 formula (Theorem 4.1.2) by proceeding as follows: For fixed t > 0 and n = 1,2,... choose gp, as in the statement such that gn(s,2) = 9(s,x) for all s 0;|Xs(w)] = n} (rn is called a stopping time (See Chapter 7)) and prove that ( [Be X4)Xscr,dBs: -) ttm tAtn Ogn oe [ Be, X,)dB, | Be xna, 3 3 for each n. This gives that. G(t A Tm, Xtrrm) = 9(0, Xo) tw 89, 99, 4,209 ag + [Boe + was ae [ote Plt, >t] 1 as n—+ 00 we can conclude that (4.1.9) holds (a.s.) for g. and since 58 4. The It6 Formula and the Martingale 4.10. (Tanaka’s formula and local time). 
What happens if we try to apply the Ité formula to g(B;) when By is 1-dimensional and g(x) = |x|? In this case g is not C? at z = 0, so we modify g(x) near 2 = 0 to ge(z) as follows: = iz] if |r| ze 9.(2) = { Me+#) if |xl 0. a) Apply Exercise 4.8 b) to show that t de{Be) = g(Bo) + f 9(Bs)AB, + 5 Ms € (05th By € (OH ; where |F| denotes the Lebesgue measure of the set F. b) Prove that t : 7 Bs [ 6B) Hacer Be = [FE Xeec-oodB 0 : 3 in L?(P) ase 0. (Hint: Apply the Ité isometry to (/ 7 : Xac-uo Bs) | ©) By letting ¢ -+ 0 prove that ‘ IBl= iBol+ f sign(B,)dB, + Lew) , (4.3.12) a where Lye= lim = fo € (0, th; B, € (-c,€)}| (limit in L2(P)) Exercises 59 and -1 for r<0 sign(2) = { ee Ly is called the local time for Brownian motion at 0 and (4.3.12) is the Tanaka formula (for Brownian motion). (See e.g. Rogers and Williams (1987)). 4.11. Use Ité’s formula (for example in the form of Exercise 4.3) to prove that the following stochastic processes are { F,}-martingales: a) X,=ehcosB, (B, € R) b) X,=et'sinB, (B,€ R) c) X, = (Bi +thexp(-B.—3t) (Be ER). 4.12, Let dX, = u(t,w)dt + v(t,w)dB, be an Itd process in R" such that ‘ ; BL f iurrute| +2| fo" unter] 0. 3 5 Suppose X; is an {F{")}-martingale. Prove that u(s,w)=0 for a.a. (s,w) € [0,00) x 2. (4.3.13) Remarks: 1) This result may be regarded as a special case of the Martingale Representation Theorem. 2) The conclusion (4.3.13) does not hold if the filtration Fi" is re- placed by the o-algebras M, generated by X,(-); s < t, ie. if we only assume that X; is a martingale w.r.t. its own filtration. See e.g. the Brownian motion characterization in Chapter 8. Hint for the solution: If X; is an F<")-martingale, then deduce that | ii wen wyari 7 | = for alls >t. Differentiate w.r.t. s to deduce that Efu(s,w)|F)=0 as. foraa.s>t. Then let ¢ T s and apply Corollary C.9. 60 4. The Ité Formula and the Martingale 4.13. Let dX; = u(t,w)dt + dB, (wu € R, By € R) be an Ité process and assume for simplicity that u is bounded. Then from Exercise 4.12 we know that unless u = 0 the process X; is not an F;-martingale. However, it turns out that we can construct an F;,-martingale from X, by multiplying by a suitable exponential martingale. More precisely, define Y= XM where t t M=en(— furore, § [ eruyir) 3 é Use Ité’s formula to prove that Y, is an F,-martingale . Remarks: a) Compare with Exercise 4.11 c). b) This result is a special case of the important Girsanov Theorem. It can be interpreted as follows: {X;}: 0 be a constant and define x, (eS 42B)?; t20. Show that aX, = 4XPSat+ X78dB; Xo =. Exercises 61 +16. By Exercise 3.8 we know that if Y is an Fr-measurable random vari- able such that E||Y?] < oo then the process Me:= ELY|Fil Osta? is a martingale with respect to {Fi}ocpep- a) Show that E[M?] < co for all ¢ € [0,7]. (Hint: Use Exercise 3.16.) b) According to the martingale representation theorem (Theorem 4.3.4) there exists a unique process g(t,w) € V(0,T) such that ; My = BIMol +f o(s,0)4B(0) t€ [0,7] 3 Find g in the following cases: (i) Y¥() = Br) (i) ¥() = BY) (iii) ¥(w) = exp(oB(T)); o € R is constant. (Hint: Use that exp(oB(t) — 407#) is a martingale.) -17. Here is an alternative proof of Theorem 4.3.3 which, in particular, does not use the complex analysis argument of Lemma 4.3.2. The idea of the proof is taken from Davis (1980), where it is extended to give a proof of the Clark representation formula. 
(See the Remark before Theorem 4.3.4.): In view of Lemma 4.3.1 it is enough to prove the following: Let ¥ = 9(By,...,Br,) where 0 < ty < tg < +++ < tn ST and @ € C(R"). We want to prove that there exists f(t,w) € V(0,T) such that. r Y=E(Y]+ [roan (4.3.14) 0 a) Use the It6 formula to prove that ifw = w(t,r1,...,24) : [te te] x R* — R is once continuously differentiable with respect to t and twice with respect to z, then w(t, B(tr),-.., B(te-1), B(t)) = w(tk-1, B(tr),-.-, B(tk—1), B(te-1)) : + f Boat)... B(t-»), B(oaB(s) +f Fee AFR) Blea Bla) Bods, C6 tata the 2 4, The It6 Formula and the Martingale .. b) For k = 1,...,n define functions vy : [te-1, te] x R* > R induc- tively as follows: Ou, 2G . Ge tae =0 tr-ast0 and obtain 1 1 din Ni) = dN + 8 ( - ma )ane? ? = me -a®NPat = St deat h Hence aN lt Fe = Mn Ne) + borat so from (5.1.4) we conclude NM _ ily ND =(r 32 \tt+oB, or = Noexp((r — $0”)t + aB,) (5.1.5) For comparison, referring to the discussion at the end of Chapter 3, the Stratonovich interpretation of (5.1.3), dN, =rNidt + aN, 0d, , would have given the solution N, = Noexp(rt +0B,). (5.1.6) ‘The solutions N;,,N; are both processes of the type X,=Xoexp(ut +aB,) (ut, constants) Such processes are called geometric Brownian motions. They are important also as models for stochastic prices in economics. See Chapters 10, 11, 12. 5.1 Examples and Some Solution Methods 65, Remark. It seems reasonable that if B, is independent of No we should have E\N,| = E{Nole™, (*) i.e. the same as when there is no noise in a,. To see if this is indeed the case, we let. Y= e2® and apply Ité’s formula: a, = ae? dB, + fa2ePedt or t t Y= Yo +a f eran, + go? | eotds : 3 3 Since ol e*®+dB,| = 0 (Theorem 3.2.1 (iii)), we get { ‘ Ely} = Evol + 30° [ BtYas d SPU! = JoP ELM, B06) = So : BUY) = eb, and therefore — as anticipated — we obtain E|N:] = E[Nole” . For the Stratonovich solution, however, the same calculation gives E(N,] = E[Nole"+#0°* Now that we have found the explicit solutions N, and N, in (5.1.5), (5.1.6) we can use our knowledge about the behaviour of B, to gain information on these solutions. For example, for the Ité solution N, we get the following: (i) Ifr > $a then Ny — 00 as t — 00, as. (ii) Ir < 40? then N, + 0 as t > 0, as. (iii) If r = 40? then N; will fluctuate between arbitrary large and arbitrary small values as t > 00, as. These conclusions are direct consequences of the formula (5.1.5) for Ny together with the following basic result about 1-dimensional Brownian motion Be 66 5. Stochastic Differential Equations Theorem 5.1.2 (The law of iterated logarithm). . B, lm sup Tjogiegt ~ as. For a proof we refer to Lamperti (1977), §22. For the Stratonovich solution NV; we get by the same argument that N, > Oas. ifr <0 and N, > cw as. ifr > 0. Thus the two solutions have fundamentally different properties and it is an interesting question what solution gives the best description of the situation. Example 5.1.3. Let us return to the equation in Problem 2 of Chapter 1: 1 LOY + RQ + GQ =F = Ge + awe - (5.1.7) We introduce the vector = Xi) (% . X=X(tw) (1) = (3) and obtain X=Xp {Ky = RX - 3X1 + Gi + 0, (5.1.8) or, in matrix notation, dX = dX(t) = AX (t)dt + H(t)dt + KaB (5.1.9) where oxo(H8) A-(S, Jaden (fa) (9): 39 and B, is a 1-dimensional Brownian motion. Thus we are led to a 2-dimensional stochastic differential equation. 
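Before solving (5.1.9) explicitly it may help to look at a simulated sample path. The sketch below (Python with NumPy) uses a simple Euler-Maruyama discretization, a standard first order scheme that is not treated in this book; the circuit constants, the noise size, the forcing G_t = sin t and the seed are hypothetical choices made only for illustration:

import numpy as np

rng = np.random.default_rng(3)
# hypothetical constants L, R, C and noise size alpha, for illustration only
L_, R_, C_, alpha = 1.0, 0.5, 1.0, 0.2
T, N = 10.0, 10_000
dt = T / N

A = np.array([[0.0, 1.0],
              [-1.0 / (C_ * L_), -R_ / L_]])
K = np.array([0.0, alpha / L_])

def H(t):                                   # deterministic forcing term (0, G_t / L)
    return np.array([0.0, np.sin(t) / L_])

X = np.array([0.0, 0.0])                    # X_0 = (Q_0, Q_0') = (0, 0)
path = [X.copy()]
for k in range(N):
    dB = np.sqrt(dt) * rng.standard_normal()
    X = X + (A @ X + H(k * dt)) * dt + K * dB    # Euler-Maruyama step for (5.1.9)
    path.append(X.copy())
path = np.array(path)                       # path[:, 0] is the simulated charge Q_t

The exact solution of (5.1.9) in terms of matrix exponentials is derived next.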
We rewrite (5.1.9) as exp(—At)d X (t) — exp(—At)AX (t)dt = exp(—At)[H(t)dt + KaB,} , (5.1.11) where for a general n x n matrix F we define exp(F’) to be the n x n matrix given by exp(F) = >> 4F". Here it is tempting to relate the left hand side x0 to d(exp(—At)X(t)) . To do this we use a 2-dimensional version of the Ité formula (Theorem 4.2.1). Applying this result to the two coordinate functions gi, g2 of g:(0,00) xR? R? given by g(t, 21,22) = exp(—At) (3) : we obtain that 5.1 Examples and Some Solution Methods 67 d(exp(—At)X (t)) = (-A) exp(—At)X(t)dt + exp(—At)d X(t) . Substituted in (5.1.11) this gives t exp(—At)X (t) — X(0) = fexe(-asyarnas + f exr(—asyKas, 3 0 or X(t) = exp(At)[X (0) + exp(—At)K By + | exp(-4s) 100) + AKB,\ds| , (5.1.12) a by integration by parts (Theorem 4.1.5). Example 5.1.4. Choose X, = By, 1-dimensional Brownian motion, and (cosz,sinz)¢R? for rR. g(t, 2) Then ¥(t) = g(t, X:) = e'* = (cos Bz, sin By) is by Ité’s formula again an It6 process. Its coordinates Yi, Yo satisfy dY;(t) = ~sin(B,)dB, — 3 cos(By)dt AY,(t) = cos(B,)dB, — } sin(B,)de . Thus the process Y = (Yj, ¥), which we could call Brownian motion on the unit circle, is the solution of the stochastic differential equations {nr = -1Y\dt ~ YodB, (5.1.13) dY = —4Yedt + Yd By Or, in matrix notation, dy (t) = —1Y(dt+KY(t)dB,, where K = (i 3) Other examples and solution methods can be found in the exercises of this chapter. For a comprehensive description of reduction methods for 1-dimensional stochastic differential equations see Gard (1988), Chapter 4. 68 5. Stochastic Differential Equations 5.2 An Existence and Uniqueness Result ‘We now turn to the existence and uniqueness question (A) above. ‘Theorem 5.2.1. (Existence and uniqueness theorem for stochastic differential equations). Let T > 0 and ¥(-,-):[0,T] x R® + R",o(-,-):[0,7} x R" 4 R"™™ be measurable functions satisfying [b(t, x)| + lo(t,x)| < CU + |ax}) 5 «é€R”, te[0,T] (5.2.1) for some constant C, (where |o|? = 3 |oi,|) and such that b(t.) — b(t, y)| +]o(t,2) -o(t,y)| < Diz-y]; 2,yER", te 0,7} (5.2.2) for some constant D. Let Z be a random variable which is independent of the o-algebra FS) generated by B,(-), s >0 and such that B\|Z)) <0. Then the stochastic differential equation dX, = b(t, Xedt +o(t,Xe)dBe, OS ST, XO=Z (5.2.3) has a unique t-continuous solution X;(w) with the property that X,(w) is adapted to the filtration F? generated by Z and B,(-); 8 0 the function 0 for ta =3xX?8; x,=0 (5.2.7) solves (5.2.7). In this case (x) = 32/3 does not satisfy the Lipschitz condition (5.2.2) at x = 0. ‘Thus condition (5.2.2) guarantees that equation (5.2.3) has a unique so- lution. Here uniqueness means that if Xy(t,w) and X2(t,w) are two t- continuous processes satisfying (5.2.3), (5.2.4) and (5.2.5) then Xi(t,w) = Xo(t,w) forall t 0. Hence PiiX,—Xj=0 for all #€QN[0, 7} =1, where Q denotes the rational numbers. By continuity of t |X — X;| it follows that PUXi(t,w) — Xa(tw)|=0 for all t€ [0,7] =1, (5.2.11) and the uniqueness is proved. The proof of the existence is similar to the familiar existence proof for ordinary differential equations: Define ¥; = Xo and ¥;" = ¥,")(w) indue- tively as follows : : yet) = Xo [ os,¥s ds + ows, ¥{)aB, . (5.2.12) 3 3 Then, similar computation as for the uniqueness above gives : EI? -¥ P< + T)8D? [BIW -ve-MP as, J fork >1,t n > 0 we get Blyet? — yP} k>0, t€ [0,7] (5.2.13) ml P-L = [Pll sneey mot SD EP VO lane mol , 1/2 > (e[ fe? - Pat) & 1? 
: m: Ak+lgeHL vaj2 TOE, gktipet2, 1/2 sd (f taar*) -> (or) =) (5.2.14) = 4 5.2 An Existence and Uniqueness Result 71 m,n > 00. Therefore {¥;") }92 9 is a Cauchy sequence in L?(Ax P). Hence {¥)}° 5 is convergent in L?(A x P). Define X,:= lim ¥{ (limit in L?(A x P)). Then X; is F?-measurable for all t, since this holds for each Y,"). We prove that X; satisfies (5.2.3): For all n and all t € [0,7] we have t ‘ yor) = Xo ole yids + [ots ¥.™)aB, . 3 3 Now let n — oo. Then by the Hilder inequality we get that ‘ ‘ J ols. ¥{")ds > Jos. X,)ds in £2(P) 3 3 and by the Ité isometry it follows that t t fo. ¥{")dB, [ots.xnaB, in L?(P). a a We conclude that for all t € [0,7] we have t t X= Xo | os Xyds+ [ots X,)dB, as. (5.2.15) a 0 ie. X; satisfies (5.2.3). It remains to prove that X, can be chosen to be continuous. By Theorem 3.2.5 there is a continuous version of the right hand side of (5.2.15). Denote this version by X;. Then X; is continuous and t t X= Xot fos xads + fol, Xa, for aa. w a a t t Rot [us Fas + f o(s,%)4B, for a.a.w. a a 72 5. Stochastic Differential Equations 5.3 Weak and Strong Solutions The solution X; found above is called a strong solution, because the version B, of Brownian motion is given in advance and the solution X; constructed from it is F?-adapted. If we are only given the functions b(t,) and o(t,) and ask for a pair of processes ((X;, B,), 1) on a probability space (12,71, P) such that (5.2.3) holds, then the solution X; (or more precisely (X1,By)) is called a weak solution. Here 7, is an increasing family of o-algebras such that X, is Hy-adapted and B, is an H,-Brownian motion, i.e. By is a Brownian motion, and B, is a martingale w.r.t. H, (and so E[Br4n ~ By|Hs] = 0 for all t,h > 0). Recall from Chapter 3 that this allows us to define the Ité integral on the right hand side of (5.2.3) exactly as before, even though X; need not be Ff-adapted. A strong solution is of course also a weak solution, but the converse is not true in general. See Example 5.3.2 below. The uniqueness (5.2.8) that we obtain above is called strong or path- wise uniqueness, while weak uniqueness simply means that any two solutions (weak or strong) are identical in law, i.e. have the same finite-dimensional distributions. See Stroock and Varadhan (1979) for results about existence and uniqueness of weak solutions. A general discussion about strong and weak solutions can be found in Krylov and Zvonkin (1981). Lemma 5.3.1. If} and o satisfy the conditions of Theorem 5.2.1 then we have A solution (weak or strong) of (5.2.3) is weakly unique . Sketch of proof. Let ((X+, B,), A) and ((X:, B,), Ht) be two weak solutions. Let X; and Y; be the strong solutions constructed from B, and By, respec- tively, as above. Then the same uniqueness argument as above applies to show that X, = X, and Y; = X; for all t, a.s. Therefore it suffices to show that X; and Y; must be identical in law. We show this by proving by induction that if Xx!" ¥2 are the processes in the Picard iteration defined by (5.2.12) with Brownian motions B, and B;, then (x1"),B,) and (¥{"), By) have the same law for all k. o This observation will be useful for us in Chapter 7 and later, where we will investigate further the properties of processes which are solutions of stochastic differential equations (It diffusions). From a modelling point of view the weak solution concept is often natural, because it does not specify beforehand the explicit representation of the white noise. 
Moreover, the concept is convenient for mathematical reasons, because there are stochastic differential equations which have no strong solutions but still a (weakly) unique weak solution. Here is a simple example: 5.3 Weak and Strong Solutions 73 Example 5.3.2 (The Tanaka equation). Consider the 1-dimensional sto- chastic differential equation dX, = sign(X,)dB,; Xo =0. (5.3.1) where . 41 if 220 siem(t) = \"1 if 2 <0 Note that here o(t,2) = o(z) = sign(z) does not satisfy the Lipschitz con- dition (5.2.2), so Theorem 5.2.1 does not apply. Indeed, the equation (5.3.1) has no strong solution. To see this, let B, be a Brownian motion generating the filtration F, and define ‘ ¥- f sign(B,)dB, . a By the Tanaka formula (4.3.12) (Exercise 4.10) we have ¥, =| By —| Bol -Li(w) , where Z,(w) is the local time for B,(w) at 0. It follows that Y; is measurable w.r.t. the -algebra G generated by | B,(.)|; s < t, which is clearly strictly contained in ¥;. Hence the o-algebra A; generated by Y,(:); s < ¢ is also strictly contained in Fy. Now suppose X; is a strong solution of (5.3.1). Then by Theorem 8.4.2 it follows that X, is a Brownian motion w-r.t. the measure P. (In case the reader is worried about the possibility of a circular argument, we point out that the proof of Theorem 8.4.2 is independent of this example!) Let M, be the o-algebra generated by X,(.); s < t. Since (sign(z))? = 1 we can rewrite (5.3.1) as dB, = sign(X,)dX, . By the above argument applied to By = X;, Y; = By we conclude that F, is strictly contained in M.. But this contradicts that X; is a strong solution. Hence strong solutions of (5.3.1) do not exist. To find a weak solution of (5.3.1) we simply choose X; to be any Brownian motion B,. Then we define B, by t [ sign(B,)aB, [ em X,)dX, ign(X.)dX- Then 74 5. Stochastic Differential Equations 4X; = sign(X:)dBy , 80 X; is a weak solution. Finally, weak uniqueness follows from Theorem 8.4.2, which ~ as noted above ~ implies that any weak solution X; must be a Brownian motion wr. P. Exercises 5.1. Verify that the given processes solve the given corresponding stochastic differential equations: (By denotes 1-dimensional Brownian motion) (i) Xp =e solves dX = }Xidt + XidBr (ii) Xt = Bh; Bo =0 solves aX, =-— 1 Xudt+ dB; Xo=0 Tee Tee ° i) X_ = sin B, with By = a € (~4, 8) solves aX, =—1X,dt + \/1-X?dB, for t < inf {s > 0;B.¢[- 3, 3] } (iv) (X1(t), X2(#)) = (t,e' Bz) solves aX) _[ 0 ee] = [x.]e+ [ss] (v) (X1(), X2(f) = (cosh( Br), sinh(B,)) solves aX) _1fX% Xo (ax) <3 [e+ [2] 5.2. A natural candidate for what we could call Brownian motion on the ellipse _ {emi G+ a= 1} where a >0,b>0 is the process X; = (X;(t),Xa(t)) defined by Xi(t)=acosB,, X2(t) = bsin By where B, is 1-dimensional Brownian motion. Show that X; is a solution of the stochastic differential equation aX, = -}Xidt + MXdB, where M [: TI 5.3. 5.4. 5.5. 5.6. 5.7. 5.8. Exercises 75 Let (B},...,B,) be Brownian motion in R", a1,. Solve the stochastic differential equation @, constants. dX, =rXidt + xi Yeas.) Xo>0 im (This is a model for exponential growth with several independent white noise sources in the relative growth rate). Solve the following stochastic differential equations: . [dX] _ fl 1 0] [dB ® (oe . [o]@+ [3 x] (cn (ii) dX, = Xidi + 4B, (Hint: Multiply both sides with “the integrating factor” e~* and compare with d(e~*X,)) (iil) dX, = —Xidt + -taB,. a) Solve the Ornstein-Uhlenbeck equation (or Langevin equation) dX, = wXidt + od By where yz, 0 are real constants, B, € R. 
The solution is called the Ornstein-Uhlenbeck process. (Hint: See Exercise 5.4 (ii).) b) Find E[X;] and Var[X, El(Xe — E[Xy))”). Solve the stochastic differential equation dY, = rdt + a¥,dB, where r,a are real constants, By € R. (Hint: Multiply the equation by the ‘integrating factor’ Fy = exp (— 0B, + 4at) .) The mean-reverting Ornstein-Uhlenbeck process is the solution X; of the stochastic differential equation dX, =(m—X,)dt + od By where m,o are real constants, B, € R. a) Solve this equation by proceeding as in Exercise 5.5 a). b) Find £[X;] and Var[X,]: = E[(X: — E[X,})?]. Solve the (2-dimensional) stochastic differential equation dX, (t) = Xo(t)dt + adBy(t) dXo(t) = —X1(t)dt + BdBa(t) 76 5.9. 5.10. 5. Stochastic Differential Equations where (B1(t), Bo(t)) is 2-dimensional Brownian motion and a, are constants. This is a model for a vibrating string subject to a stochastic force. See Example 5.1.3, Show that there is a unique strong solution X, of the 1-dimensional stochastic differential equation dX, =In(1+XP)dt +X. .5XdB, Xo=aeER. Let b, 0 satisfy (5.2.1), (5.2.2) and let X; be the unique strong solution of (5.2.3). Show that E\X:?] < Ki-exp(Kot) for ¢< T (5.3.2) where Ky = 3E||Z/?| + 6C?T(T +1) and Kz = 6(1 + T)C?. (Hint: Use the argument in the proof of (5.2.10). x50) Remark. With global estimates of the growth of b and o in (5.2.1) it is pos- sible to improve (5.3.2) to a global estimate of E{|X;|]. See Exercise 7.5. 5.11. (The Brownian bridge). 5.12. For fixed a,b € R consider the following 1-dimensional equation Yi ay, dt+dBj; O 0 is constant. a) Discuss this equation, for example by proceeding as in Exam- ple 5.1.3. b) Show that y(t) solves a stochastic Volterra equation of the form t a v= v0) +¥0)-¢+ fatnveydr-+ f UerduiryAB, 3 3 where a(t,r) = t v(t.r) = ert). Exercises 77 5.13. As a model for the horizontal slow drift motions of a moored floating platform or ship responding to incoming irregular waves John Grue (1989) introduced the equation af + aay + we, = (Ty — aor); , (5.3.5) where W; is I-dimensional white noise, ao,w,o,ao and 7 are con- stants. (i) Put X,= [2] and rewrite the equation in the form ‘ dX, = AX,dt + KX dB, + MaBy, where o 1 0 0 0 a=[ Ys 7, | K=c0n[p <} and u=ton[)| (ii) Show that X, satisfies the integral equation ‘ eA“) KX dB, + feteomas, if Xo =0. 0 {(E cos €t + Asin €t)I + Asin &t} where \ = %,€ = (w? — $2)4 and use this to prove that t t= afm — a0Ys)9t-sdBs (5.3.6) ° and : a ee where a1 = Lm(e") é he= pice), = =A = VAI). So we can solve for y; first in (5.3.7) and then substitute in (5.3.6) to find zy. 8 5. Stochastic Differential Equations 5.14. If (B,, Bz) denotes 2-dimensional Brownian motion we may introduce complex notation and put Bit): = Bi(t) +iBa(t) (i= V—1) B(t) is called complex Brownian motion. (i) If F(z) = u(z) + iv() is an analytic function i.e. F satisfies the Cauchy-Riemann equations du _ dv a oy and we define prove that Ou __ ov. + Oy =e! atiy Z. = F(B()) dZ, = F'(B(t))aB(t) , (5.3.8) where F” is the (complex) derivative of F. (Note that the usual second order terms in the (real) It6 formula are not present in (5.3.8)!) (ii) Solve the complex stochastic differential equation dZ, = For more information @Z,dB(t) a constant) . about complex stochastic calculus involving analytic functions see e.g. Ubge (1987). 5.15. 
(Population growth in a stochastic, crowded environment) The nonlinear stochastic differential equation dX, =rX(K — Xe )dt+BXidBe; Xo=z>0 (5.3.9) is often used as a model for the growth of a population of size X- in a stochastic, crowded environment. The constant K > 0 is called the carrying capacity of the environment, the constant r € R is a measure of the quality of the environment and the constant € R is a measure of the size of the noise in Verify that X= exp{(rK — 36?)t + BBi} a} tr fexp{(rK — 462)s-+ AB,}ds 3 the system. 5 #20 (5.3.10) is the unique (strong) solution of (5.3.9). (This solution can be found by performing a substitution (change of variables) which reduces (5.3.9) to a linear equation. See Gard (1988), Chapter 4 for details.) Exercises 79 +16. The technique used in Exercise 5.6 can be applied to more general nonlinear stochastic differential equations of the form dX, = f(t, Xr)dt + e(t)XedBr, = Xo= 2 (5.3.11) where f:R x R — R and c:R — R are given continuous (determinis- tic) functions. Proceed as follows: a) Define the ‘integrating factor’ Fy = Fp(w) = exp ( - [esuB, + 1 [ eons) _ (6312) 0 3 Show that (5.3.11) can be written (FX) = Fy f(t, Xe)dt . (5.3.13) b) Now define Yilw) = Fi(w)X(w) (5.3.14) so that. X=FON (5.3.15) Deduce that equation (5.3.13) gets the form ay, ME) Ao) OFM): Yrs. (6.816) Note that this is just a deterministic differential equation in the function t + ¥;(w), for each w € 2. We can therefore solve (5.3.16) with w as a parameter to find Y;(w) and then obtain X,(w) from (5.3.15). c) Apply this method to solve the stochastic differential equation aX, = eat aXdB; Xy=2>0 (5.3.17) : where a is constant. d) Apply the method to study the solutions of the stochastic differen- tial equation dX, =X7dt+oXdB; Xo=2>0 (5.3.18) where a and ¥ are constants. For what values of 7 do we get explosion? 30 5.17. 5.18. 5. Stochastic Differential Equations (The Gronwall inequality) Let v(t) be a nonnegative function such that vt sc+A f v(o)ds for 00 (5.3.21) where K,@,0 and x are positive constants. This process was used by J. Tvedt (1995) to model the spot freight rate in shipping. a) Show that the solution of (5.3.21) is given by Xess exp IG Ina + (a - g)a ~ ert) i 5.3.22) toe" fe"*dB,). (3.22) 3 (Hint: The substitution. ¥; = log Xt transforms (5.3.21) into a linear equation for Y;] b) Show that a0 - my E|X;] = exp (e"'Ing + («-Z)a-e")+ 7" 2 Exercises. 81 5.19. Let ¥;") be the process defined inductively by (5.2.12). Show that {¥{)}22_, is uniformly convergent for t € (0, 7], for a.a. w. Sitice each ¥;" is continuous, this gives a direct. proof that X; can be chosen to be continuous in Theorem 5.2.1. [Hint: Note that . sup HY i!) < f(s, 72) - 6, Y88Mlds ostsT a ' + anp,| [Cots YC) ~ (8,2 )VaB, oer 7 Hence Pl sup [Vt —y,] > 2%) oxen : PL [ b(e.¥$) — 6, YS MIas > 2") d ‘ +P(_ sup | feet vs) —o(e.¥e))aB,| ae o 27 ogteT 6. The Filtering Problem 6.1 Introduction Problem 3 in the introduction is a special case of the following general filtering problem: Suppose the state X; € R” at time t of a system is given by a stochastic differential equation aX, dt where :R"+! + R", 0: R™! + R™? satisfy conditions (5.2.1), (5.2.2) and W, is p-dimensional white noise. As discussed earlier the It6 interpretation of this equation is = b(t, Xe) +o(t,X)M, 20, (6.1.1) (system) dX, = b(t, X:)dt + a(t, Xi)dly , (6.1.2) where U; is p-dimensional Brownian motion. We also assume that the distri- bution of Xo is known and independent of Us. 
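To fix ideas, here is a simulated instance of such a problem. The sketch below (Python with NumPy; the linear coefficients, noise levels, horizon and seed are hypothetical choices made only for illustration) generates one path of a 1-dimensional linear system of the form (6.1.2), together with integrated noisy observations of it, anticipating the observation model made precise below:

import numpy as np

rng = np.random.default_rng(4)
# hypothetical 1-dimensional linear example:  dX = F*X dt + C dU,
# observed through increments  dZ = G*X dt + D dV  (made precise below)
F, C, G, D = -1.0, 0.5, 1.0, 0.3
T, N = 5.0, 5_000
dt = T / N

X = np.zeros(N + 1)
X[0] = 1.0                                  # initial state
dZ = np.zeros(N)
for k in range(N):
    dU = np.sqrt(dt) * rng.standard_normal()
    dV = np.sqrt(dt) * rng.standard_normal()
    X[k + 1] = X[k] + F * X[k] * dt + C * dU
    dZ[k] = G * X[k] * dt + D * dV

Z = np.concatenate(([0.0], np.cumsum(dZ)))  # the integrated observation process

The filtering problem is then to recover X_t as well as possible from the observed path of Z up to time t; for linear models of this kind the answer is the Kalman-Bucy filter derived later in this chapter.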
Similarly to the 1-dimensional situation (3.3.6) there is an explicit several-dimensional formula which ex- presses the Stratonovich interpretation of (6.1.1): AX, = W(t, X:)dt + a(t, X:) ody in terms of It6 integrals as follows: dX, = B(t, Xi)dt + o(t,X,)dU,, where Pon ~ 00%, . b(t,2) = bi(t.2) +40 _ 3 Sign. (6.1.3) jatken (See Stratonovich (1966)). From now on we will use the Ité interpretation (6.1.2). In the continuous version of the filtering problem we assume that the observations H; € R™ are performed continuously and are of the form Hy = elt, Xi) +t, XW, (6.1.4) where c:R™+! — R™, 7:R"+! — R™*r are functions satisfying (5.2.1) and W; denotes r-dimensional white noise, independent of U; and Xo 84 6. The Filtering Problem To obtain a tractable mathematical interpretation of (6.1.4) we introduce t h= [tas (6.1.5) é and thereby we obtain the stochastic integral representation (observations) dZ = c(t, Xr)dt + (t,Xi)dVi, Zo =0 (6.1.6) where V; is r-dimensional Brownian motion, independent of U; and Xo. Note that if H, is known for 0 Up=ahy. ‘Then (i) Up € £(Z,k) (ii) X —U,L£(Z,k), since BUX — Ug) 24] = BIXZ,| — on B22 = BIX(X + Wa) — nF > B1z;24) 4 ~ foe Dw \(X+ WI = qaulke?+m?| =0.) The result can be interpreted as follows: For large k we put X ~ Zy, while for small k the relation between a? and m? becomes more important. If m? >> a?, the observations are to a large extent neglected (for small k) and X;, is put equal to its mean value, 0. See also Exercise 6.11. 6.2 The 1-Dimensional Linear Filtering Problem 89 This example gives the motivation for our approach: We replace the process Z, by an orthogonal increment process N; (Step 2) in order to obtain a representation for X; analogous to (6.2.5). Such a rep- resentation is obtained in Step 3, after we have identified the best linear estimate with the best measurable estimate (Step 1) and established the con- nection between N; and Brownian motion. Step 1. Z-Linear and Z-Measurable Estimates Lemma 6.2.2. Let X,Z,; 8 < t be random variables in L?(P) and assume that (X, 2s1)Zeay-++1Zs,) € RY has a normal distribution for all 51, 52,...,8n 1. Then Pc(X) = E[X|G] = P(X). In other words, the best Z-linear estimate for X coincides with the best Z- measurable estimate in this case. Proof. Put X = P(X), X =X - X. Then we claim that X is independent of G: Recall that a random variable (¥;,...,¥,) € R* is normal iff ¢¥ + +++ + eYi: is normal, for all choices of c1,...,ck € R. And an Llimit of normal variables is again normal (Appendix A). Therefore (X,Zej1---12s,) is normal for all s1,...,8n St. Since E[XZ,,] = 0, X and Z,, are uncorrelated, for 1 < j 0 (iv) N; is a Gaussian process Proof. (i): If s < t and Y € £(Z,s) we have El(N. — Ns)¥] = a[( forma, - Jar = [aay] = [oeia,-Rovyr+e[( fvav)y] =0 since X,—X,1.£(Z,r) > £(Z, 8) for r > s and V has independent increments. (ii): By Ité’s formula, with g(t,2) = 2?, we have d(N2) = 2NdN; + 42(dN;)? = 2NdN, + Dat . So t t E|N?] = s| fanaa] + [Pens ' [ man, = fim No IMoe~ Nel 3 so since N has orthogonal increments we have t | f wan} =0, and (ii) follows . 3 (iii): It is clear that £(N,t) C £(Z,t) for all t > 0. To establish the opposite inclusion we use Lemma 6.2.4. 
So choose f € L?(0,¢] and let us see what functions can be obtained in the form i F(3)dN, = i Hls)d F(r)G(r) Rear 100] fate.s12a] dr — [ F(ryolride a = [ F(s)aZ, — 6.2 The 1-Dimensional Linear Filtering Problem 93 = i [fe - | Loya(r sir] a2, — [ Fle)elr)ar where we have used Lemma 6.2.2 and Lemma 6.2.4 to write, for each r, (GX)f = e(r) + forsee, for some g(r,-) € L?[0,r], c(r) ER. 0 From the theory of Volterra integral equations (see e.g. Davis (1977), p. 125) there exists for all h € L?(0,é] an f € L?(0,¢] such that H09) J Hole. s)dr = Hs) So by choosing h = %o,1,} where 0 < th < t, we obtain t t f(r)e(r)dr + | f(s)dNy Ho.u)(8)dZs = Ze, [rooms [row | which shows that £(N,t) > £(Z,t). (iv): X, is a limit (in L2(P)) of linear combinations of the form M=cot Zs, +--+ eZ 5 where s, 0. Thus a exp ( | Flo)de) Ste)ar EIX.R) = / 3 6.2 The 1-Dimensional Linear Filtering Problem 97. so that f(s,t) = aa exp (fr vo) (9) (6.2.20) We claim that S(t) satisfies the (deterministic) differential equation Gt) Dt) To prove (6.2.21) note that by the Pythagorean theorem, (6.2.15) and the It6 isometry S(t) = Bl(X1 ~ X1)?] = E[X?] — 2E1XLX] + BLP] = B[X?| - BLK?) e = 2F(t)S(t) — ) s(t) )+C7(t) (The Riccati equation) . (6.2.21) : =7(0) — f Hot)%ds — BX? (6.2.22) 3 where T(t) = E[X?] (6.2.23) Now by (6.2.16) and the Ité isometry we have T(t) = exp (2 from) + [oo (0 f pose) oron : using that Xo is independent of {U,}s>0. So a = 2F(t)-exp (: { F(sids) E[X8] + C(t) 3 + [oro exp (2 J Feaee)otone a = 2F(t)T(t) + Cr(t) - (6.2.24) Substituting in (6.2.22) we obtain, using Step 4, t o.2 $= D-H? [ve 1) Zls,i)ds —2P OEP =2F rte) +0% - & an [ri 5,1) F(t)ds — 2F(t)B[X,)? = 2F()S(t) +074) -LOLO | which is (6.2.21). Dat) * 98 6. The Filtering Problem We are now ready for the stochastic differential equation for X,: From the formula ‘ - eo(t) + f flestaR, where co(t) = B[X;] 3 it follows that a dX, = og (t)dt + f(t,t}dR. + ( | 0, — El(Xo—b)?] =a?, with observations dZ, = Xidt-+mdV; mm constant . The corresponding Riccati equation gives the logistic curve arm? 1+ Ke-?rt* S(t)= where K = 2" 1, So the equation for X, becomes dX, = (- For simplicity let us assume that a? = 2rm?, so that S\- . aa) Rete + Sua 5 Xo = E[X] = S(t) =2rm? for all t. ‘The 1-Dimensional Linear Filtering Problem 103 (In the general case S(t) > 2rm? as t — oo, so this is not an unreasonable approximation for large t). Then we get d(exp(rt)X:) = exp(rt)2rdZ, — Xo=b or t R= eot-ro| f 2rexrsydz, + 4] : a As in Example 6.2.10 this may be written t . ‘ R= e0(-rp| frre) itas +4] » if Z=SHyds. (6.2.32) 3 3 For example, assume that H, = @ (constant) for 0 < s < #, ie. that our observations (for some reason) give the same value B for all times s < t. Then R, = 28 — (28 - b)exp(—rt) +28 as t+ 00 If Hy = B-exp(as), s > 0 (a constant), we get 2rg rte X, = exp(-rt) 2r8 +a Thus, only if a = r, ie. H, = Bexp(rs); s > 0, does the filter “believe” the observations in the long run. And only if @ = r and @ = b, i.e. Hy = bexp(rs); $20, does the filter “believe” the observations at all times. (exp(r +a)t—1) +0) expat for large t. Example 6.2.13 (Constant coefficients — general discussion). Now consider the system dX, = FX,dt + Cdl; ; F,C constants #0 with observations dZ,=GXdt+DdV,; — G,D constants 0. The corresponding Riccati equation CO gece 2 S'=2FS- TS? +07, S(O) =a! s(t = Sa Karen ges ty * TK exp( Spey has the solution where 104 6. The Filtering Problem ay = G-*(FD? — DV F°D? + G?C?) a2 =G-*(FD? + DV F*D? + G?C?) 
and This gives the solution for X; of the form ‘ ot t 1 = exp (| Hews) Xo+ a [ow ( H(u)du) 5(s)dZ, , a 3 > 2 H(s) =F - F518) : where For large s we have S(s) © a2. This gives Roenn((F- g Tn +S $3 fow(r-§ t = Xoexp( at) + $a a? exp(—Bt) i exp(Bs)dZ, (6.2.33) xX aye ~s))d2, where 8 = D~!/F*D? + GC? . So we get approximately the same be- haviour as in the previous example. 6.3 The Multidimensional Linear Filtering Problem Finally we formulate the solution of the n-dimensional linear filtering problem (6.2.1), (6.2.2): Theorem 6.3.1 (The Multi-Dimensional Kalman-Bucy Filter). The solution X, = E|X|G1] of the multi-dimensional linear filtering problem (linear system) aX, = F(t)Xidt+C(t)dU; F(t)ER™™, C(t)eRD*? (6.3.1) (linear observations) dZ;=G(t)X,dt+ D(t)dVi; G(t)eR™", D(t)eR™? (6.3.2) satisfies the stochastic differential equation aX, = (F — SGT (DDT)"G)X,dt + SGT(DD™)-'dZ,; Xo = ElXo] (6.3.3) Exercises 105 where S(t): = El(Xe — X.)(Xe- Xi] € R™ satisfies the matrix Riccati equation s = FS +SFT ~SGT(DDT)"1GS + CCT ; S(0) = El(Xo ~ E[Xo})(Xo ~ E[Xo})”] (6.3.4) The condition on D(t) € R™” is now that D(t)D(t)" is invertible for all t and that (D(t)D(t)")~} is bounded on every bounded t-interval. A similar solution can be found for the more general situation (system) dX, = [Folt)+ A(Q)Xe+ Fa(QZdt + C(t (6.3.5) (observations) dZ_ = [Go(t) + Gilt)X: + Ga(QZilat + D(t\dvi, (6.3.6) where X; € R", Z, € R™ and B, = (U;,V;) is n + m-dimensional Brownian motion, with appropriate dimensions on the matrix coefficients. See Ben- soussan (1992) and Kallianpur (1980), who also treat the non-linear case. An account of non-linear filtering theory is also given in Pardoux (1979) and Davis (1984). For the solution of linear filtering problems governed by more general processes than Brownian motion (processes with orthogonal increments) see Davis (1977). For various applications of filtering theory see Bucy and Joseph (1968), Jaawinski (1970), Gelb (1974), Maybeck (1979) and the references in these books. Exercises 6.1. (Time-varying observations of a constant) Prove that if the (1-dimensional) system is dX,=0, E[Xo]}=0, — ELXG] and the observation process is dZ, = G(t)X,dt + dvi, Z= then S(t) = E[(X, — X,)"] is given by S(t) = (6.3.7) a toy + fo G2(s)ds° We say that we have exact asymptotic estimation if S(t) + 0.ast + 00, ie. if 106 6.2. 6.3. 6. The Filtering Problem f G(s)ds = ° (p > 0 constant) Thus for OO) = GaP or we have exact asymptotic estimation iff p < } Consider the linear 1-dimensional filtering problem with no noise in the system: (system) dX, = F(t)Xidt (6.3.8) (observations) dZ, = G(t)Xidt + D(t)dV, (6.3.9) Put S(t) = E[(X; — X,)?] as usual and assume S(0) > 0. a) Show that RO 35 satisfies the linear differential equation Gt) De)! 505 b) Use (6.3.10) to prove that for the filtering problem (6.3.8), (6.3.9) we have 557 ame? (- afm) [oo(- 2[ roe) Fes (6.3.11) Rit)=-2FR)+ 9; RO) = (6.3.10) In Example 6.2.12 we found that S(t) 2rm? ast 0, so exact asymptotic estimation (Exercise 6.1) of X; is not possible. However, prove that we can obtain exact. asymptotic estimation of Xo, in the sense that El(Xo — E[Xo|G))"] +0 as t > oo (Hint: Note that Xo = that —"tX, and therefore E[Xo|G:] = e~"'X:, so El(Xo — E[Xo|%)?] = e-7"*S(t)) - 6.4. 6.5. 6.6. 
Exercises 107 Consider the multidimensional linear filtering problem with no noise in the system: (system) dX, = F(t)X;dt ; X,€R", F(t)eR™* (6.3.12) (observations) dZ, = G(t)X,dt + D(t)dV ; G(t)eR™", D(theR™** — (6.3.13) Assume that S(t) is nonsingular and define R(t) = S(t)—1. Prove that R(t) satisfies the Lyapunov equation (compare with Exercise 6.2) Ri(t) = —R(t)F(t) — F(t) R(t) + Gt)? (D(t)D(t)7)-1G (et). (6.3.14) (Hint: Note that since S(t)S~(¢) = I we have S"(t)S“1(t) + S(t)(S)/(t) = 0, which gives (S"1V(t) = -S*()S"(HS*(t) -) (Prediction) In the prediction problem one seeks to estimate the value of the system X at a future time T based on the observations G, up to the present time t < T. Prove that in the linear setup (6.2.3), (6.2.4) the predicted value E[Xr|G}, T>t is given by r E{Xr|Gi] = exp (J Fas) x (6.3.15) * (Hint: Use formula (6.2.16).) (Interpolation /smoothing) ‘The interpolation or smoothing problem consists of estimating the value of the system X at a time s < t, given the observations up to time t, Ge. With notation as in (6.2.1), (6.2.2) one can show that M,: = E[X.|9:] satisfies the differential equation ts M=k. (s)M, +C(s)CT(s)S~"(s)(M,-X,); s 0, for a.a.w. In Example 5.1.4 we found that Brownian motion on the unit circle, Xz, satisfies the (Ité) stochastic differential equation [ni] =-4 [28 a+[} 7 (28) dB. (6.3.17) From this equation it is not at all apparent that its solution is situ- ated on the same circle as the starting point. However, this can be detected by proceeding as follows: First transform the equation into its Stratonovich form, which in Exercise 6.9 is found to be ae = q ul [20 od. (6.3.18) Then (formally) replace odB, by ¢(t)dt, where ¢ is some smooth (de- terministic) function, ¢(0) = 0. This gives the deterministic equation dx(t)] _ fo -1] , [Fro =! o | See. (6.3.19) If (x{(0), X$(0)) = (1,0) the solution of (6.3.19) is [x2 ~ [ora XP) ] ~ Lsing(t) J * So for any smooth ¢ the corresponding solution X‘)(t) of (6.3.19) has its support on this unit circle. We can conclude that the original solution X(¢,w) is supported on the unit circle also, in virtue of the Stroock-Varadhan support theorem. This theorem says that, quite gen- erally, the support of an Ité diffusion X,(w) coincides with the closure in R® of {X‘)(-); smooth}, where X‘)(t) is obtained by replacing odB, by ¢'(t)dt in the same way as above. See e.g. Ikeda and Watan- abe (1989, Th. VI. 8.1). (In this special case above the support could also have been found directly from (6.3.18). 110 6.11. 6.12. 6.13. 6.14. 6.15. 6. The Filtering Problem Use the procedure above to find the support of the process X; € R? given by dX, =4X,dt-+ [? j | XidBy . Consider Example 6.2.1, but now without the assumption that E|X| = 0. Show that ¢ m ? os Ke Fe + ay Jar" Rane (Compare with (6.2.8).) (Hint: Put € = X — E[X], C, = Ze — B[X]. Then apply (6.2.8) with X replaced by € and Z, replaced by ¢..) Prove formula (6.2.16). (Hint: exp (~ f F(u)du) is an integrating factor for the stochastic differential equation (6.2.3).) Consider the 1-dimensional linear filtering problem (6.2.3), (6.2.4). Find _ _ E[X} and E((X,)"}. (Hint: Use Theorem 6.1.2 and use the definition of the mean square error S(t).) Let B, be 1-dimensional Brownian motion. a) Give an example of a process Z, of the form dZ, u(tw)dt + dBy such that Z, is a Brownian motion w.t.t. P and u(t,w) € V is not identically 0. (Hint: Choose Z; to be the innovation process (6.2.13) in a linear filtering problem with D(t) = 1.) 
b) Show that the filtration {Z,}.50 generated by a process Z; as in a) must be strictly smaller than {F,}.>0, i.e. show that 2.0F, for allt and 2 # F; for some t. (Hint: Use Exercise 4.12.) Suppose the state X; € R at time ¢ is a geometric Brownian motion given by the equation dX, =yXidt+oXdB; Xo=2>0. (6.3.20) Here o # 0 and z are known constants. The parameter 1 is also con- stant, but we do not know its value, only its probability distribution, 16. Exercises. 111 which is assumed to be normal with mean ji and variance a?. We as- sume that j« is independent of {B,},,, and that Bly] < oo. We assume that we can observe the value of X;, for all t. Thus we have access to the information “(o-algebra)” M, generated by X,; 8 < t. Let Ni, be the o-algebra generated by &, s < t, where =pdt+odBi; — & (6.3.21) a) Prove that M, = Ni. b) Prove that Elu|M) = (0 + 0774)“ (08 +o 76) (6.3.22) where O=Elu-a)J, a= Blu. (6.3.23) ©) Define B ou — ElplMa})ds + Be. (6.3.24) Prove that B, is a Brownian motion. d) Prove that B, is M;,-measurable for all t. Hence FooM (6.3.25) where F; is the a-algebra generated by By; s R", o:R" > R™™ satisfy the conditions in Theorem 5.2.1, which in this case simplify to: |b(x) — b(y)| + fo(z)—o(y)| < Diz-yl; -t,yeR", (7.1.5) where lo? = So loul?. ‘We will denote the (unique) solution of (7.1.4) by X; = Xj"; t > s. If 8 = 0 we write X# for X?'. Note that we have assumed in (7.1.4) that b and @ do not depend on t but on x only. We shall see later (Chapters 10, 11) that the general case can be reduced to this situation. The resulting process X;(w) will have the property of being time-homogeneous, in the following sense: Note that: sth sth a+ | oxgr)dus f oxtyaB, Xihn A h at | X2R)dv+ | o(X2%,)dBy, (u=st+v) (7.1.6) [oteows | where B, = Bsyy — Bs; v > 0. (See Exercise 2.12). On the other hand of course A h XP aot foxsera + [ovxorap, : 3 3 Since {By}v>0 and {By},>0 have the same P?-distributions, it follows by weak uniqueness (Lemma 5.3.1) of the solution of the stochastic differential equation dX, =O(Xi)dt +o(Xe\dBr; Xo=e that {Xi Jaro and {X2"}aro have the same P®-distributions, i.e. {X;}:20 is time-homogeneous. We now introduce the probability laws Q? of {X¢}z>0, for 2 € R". Intu- itively, Q* gives the distribution of {X;}1>0 assuming that Xq = x. To express 7.1 The Markov Property 115 this mathematically, we let May be the a-algebra (of subsets of (2) generated by the random variables w + X;(w) = X!(w), where t > 0, y ¢ R™. Define Q* on the members of M by QXi, € Bay) Xe, € Bx) = PXE € By XE € Ex) (7.1.7) where E, CR” are Borel sets; 1 0 EX Xesn) FO Mew) = BOG (Xa)] - (7.1.8) (See Appendix B for definition and basic properties of conditional ex- pectation). Here and in the following E* denotes the expectation w.r.t. the probability measure Q*. Thus E¥|f(X,)] means E[f(X?)], where B denotes the expectation w.r.t. the measure P°. The right hand side means the func- tion EY(f(Xn)] evaluated at y = X;(w). Proof. Since, for r > t, X,(w) = Xy(w) + [i000 + foxes : : we have by uniqueness X,(w) = XE%(w) . In other words, if we define F(a,t,rw) =X8%(v) for r >t, we have X,(w) = F(Xitrw); r2t. (7.1.9) Note that w + F(z,t,r,w) is independent of F{”. Using (7.1.9) we may rewrite (7.1.8) as EUS(F( Xu tt + hw) FL] = ELf(F(e,0, byw) nx, - (7.1.10) 116 7. Diffusions: Basic Properties Put g(z,w) = fo F(e,tt + hw). Then (2,w) —+ 9(x,w) is measurable. (See Exercise 7.6). 
Hence we can approximate g pointwise boundedly by functions on the form Y oe(ayve(w) m Using the properties of conditional expectation (see Appendix B) we get Bla(Xew)| FE} [tim Yoe(Xovewye”? Jim 7 ba Xe) - Elva WFO] Tim YY Bide (wee(W)\ FE yrs Elg(y.w)lFI”) fl = Elg(yw)lyax » Therefore, since {X;} is time-homogeneous, ELF (Katt + byw) FO] = BLP U tt + hw) yar, = ELf(F(Y,0,h, 0) ye, which is (7.1.10). a Remark. Theorem 7.1.2 states that X; is a Markov process w.r.t. the family of o-algebras {F{" }t>9. Note that since M; C F{™ this implies that X; is also a Markov process w.r.t. the o-algebras {M,}e>0. This follows from Theorem B.3 and Theorem B.2 c)( Appendix B): E*([f(Xean) Me = BEF Xeon AOI Me) = BEX (f(Xn)|IMe = B*(f(Xn)] since E*+[f(Xp)] is Me-measurable. 7.2 The Strong Markov Property Roughly, the strong Markov property states that a relation of the form (7.1.8) continues to hold if the time t is replaced by a random time 7(w) of a more general type called stopping time (or Markov time): Definition 7.2.1. Let {N;} be an increasing family of o-algebras (of subsets of 2). A function 7: 2 — [0,00] is called a (strict) stopping time w.r.t. {Ne} if {witw)SthEeM, forallt>0. 7.2 The Strong Markov Property 117 In other words, it should be possible to decide whether or not + < t has occurred on the basis of the knowledge of N; Note that if r(w) = to (constant) for all w, then 7 is trivially a stopping time w.r.t. any filtration, because in this case Q if tot Example 7.2.2. Let UC R" be open. Then the first exit time rwixinf{t > 0; X_ ¢ U) is a stopping time w.r.t. {Mz}, since {wire SH =) U i X, ¢ Km} © Me m rea where {Km} is an increasing sequence of closed sets such that U =U Km - More generally, if HC R" is any set we define the first exit time from H, rH, 2s follows tH =inf{t > 0;X1 ¢ H} If we include the sets of measure 0 in M; (which we do) then the family {M4} is right-continuous ie. M: = Mp4, where Mr, = () Ms (see Chung spt (1982, Theorem 2.3.4., p. 61)) and therefore zy is a stopping time for any Borel set H (see Dynkin (1965 II, 4.5.C.e.), p. 111)). Definition 7.2.3. Let + be a stopping time w.r.t. {Nz} and let Ng. be the smallest o-algebra containing N, for allt > 0. Then the o-algebra N, consists of all sets N € Noo such that N(MrstheM — forallt>0. In the case when A; = Mz, an alternative and more intuitive description M, = the o-algebra generated by {Xminis,r)i$ 2 0} - (7.2.1) (See Rao (1977, p. 2.15) or Stroock and Varadhan (1979, Lemma 1.3.3, p. 33).) Similarly, if M, = F{, we get FL") = the o-algebra generated by {Biar;8 > 0} - ‘Theorem 7.2.4 (The strong Markov property for It6 diffusions). Let f be a bounded Borel function on R™, 7 a stopping time w.r.t. Fi™, Tt <00 as. Then E*[f(Xesn)/FO] = EX |f(Xa)] for all h>0. (72.2) 118 7. Diffusions: Basic Properties Proof. We try to imitate the proof of the Markov property (Theorem 7.1.2). For a.a. w we have that X/*(w) satisfies rh rh XT, =2+ | O(XP*)du t+ / o(X2*)dBy By the strong Markov property for Brownian motion (Gihman and Skorohod (1974a, p. 30)) the process By =Bryy- Be; v>0 is again a Brownian motion and independent of F{”). Therefore XTg,set foo TE)do+ [oosrien . Hence {XT}, }4>0 must coincide a.e. with the strongly unique (see (5.2.8)) solution ¥;, a the equation A h Ya=rt 7 b(Y.)dv + fora. . 3 Since {Yn}nz0 is independent of F{”, {XT%,} must be independent also. Moreover, by weak uniqueness (Lemma 5.3.1) we conclude that {¥i}nz0, and hence {X7%,}n0, has the same law as {X?"}uso. (7.2.3) Put F(z,trw)=Xi"(w) forr >t. 
‘Then (7.2.2) can be written ELf(F(2, 0,7 + hw) FL] = ELf(F(e,0,h,w))p-x9 - Now, with X, = X?*, rth rh F(2,0,7 + h,w) = Xpqn(w) = 2+ i 2(X,)ds + 1 o(X,)dBy 0 ° rth rth -r+ fos pve foe )dB, + [es ds + f ox.aB, rh rth =X,+ / b(Xs)ds + / o(X,)dB, = F(X,,7,7+ hw). 7.2 The Strong Markov Property 119 Hence (7.2.2) gets the form EUL(F(X,, 757 + hyw))| FE] = Elf(F(2,0,h,w))Jenx, - Put g(z,t,r,w) = f(F(2,t,7,w)). As in the proof of Theorem 7.1.2 we may assume that g has the form g(x, t,7,0) = Donledvaltin) : ‘Then, since X77, is independent of F{”) we get, using (7.2.3) Elg(Xps 757 + hy w)|FL”] = SO Blbe(XpWelr 7 + hy wo) FO] F HDi be( Xr) Elve(r 7 +h w) FL] => Elbe )ve(r 7 +h, w) FL enx, rs z Elg(a,7,7 + hw)lFP™ e=x, = Blg(x,7,7 + hyw)lznx, = BUf(XTF, )z=x- = EUf(XR")lexx, = Elf(F(a,0,h,w))Jenx, « a We now extend (7.2.2) to the following: If fi,-++, fe are bounded Borel functions on R", r an F{"-stopping time, Tr 0 we define the shift operator OHH as follows: If = 91(Xt,) +++ 9e(Xe,) (gi Borel measurable, t; > 0) we put G1 = 91(Xerse) ++ Ge(Xevte) + Now extend in the natural way to all functions in 1 by taking limits of sums of such functions. Then it follows from (7.2.4) that 120 7. Diffusions: Basic Properties E* [en Ff] = E*(n] (7.2.5) for all stopping times r and all bounded 7 € 1, where (8-n)(w) = (Arn)(w) if rw) =t- Hitting distribution, harmonic measure and the mean value property ‘We will apply this to the following situation: Let H C R” be measurable and Jet ry be the first exit time from H for an Ité diffusion X;. Let a be another stopping time, g a bounded continuous function on R” and put 1 =9(Xru)X(encooy» TH =ink{t > a; Xe ¢ H}. ‘Then we have Gan + X acco} = 9(Xrg)Xxq- vadb) (Xemar, ) (Suna = (vv )izdt , " this gives 7.3 The Generator of an Ité Diffusion 123 ‘ of asf 40%) = 4%) + [ (mae De waza) t + Df ucLaw,. (7.3.1) uk} Hence eryo= re) + | f (Cages sD ns? ° ol + Te fon ora . (7.3.2) ik + : If g is a bounded Borel function, |g| < M say, then for all integers k we have rk . =| i aaa] = &*[ | Mucrit% 8] = since (Ys) and %(,<+} are both F{"-messurable. Moreover ‘Ak (foes, - i atvae,) | = ral | OH 3 a Ak < ME*|r—-r Ak] 0. Therefore Ak ane] f oer Combining this with (7.3.2) we get Lemma 7.3.2. = E*| | o(¥.)dB,) . ! o This gives immediately the formula for the generator A of an Ité diffusion: Theorem 7.3.3. Let X; be the It6 diffusion dX, = b(X,)dt + o(X1)dB - If f € CR") then f € Da and ; Ate) = Tog +4 Door @geL. (73) + + WW a 124 7. Diffusions: Basic Properties Proof. This follows from Lemma 7.3.2 (with r = t) and the definition of A. o Example 7.3.4. The n-dimensional Brownian motion is of course the solu- tion of the stochastic differential equation dX, = dB, , ie. we have 6 = 0 and o = In, the n-dimensional identity matrix. So the generator of B; is J = F(ziy---.2n) € C3(R") ie. A= $A, where A is the Laplace operator. Example 7.3.5 (The graph of Brownian motion). Let B denote 1-di- xX mensional Brownian motion and let X = ( ¥’ 2 ) be the solution of the stochastic differential equation dt ; X,(0)=to dB; X2(0)=20 dX =bdt + 0dB; x= (58) , with b= (3) and @ = (2). In other words, X may be regarded as the graph of Brownian motion. The generator A of X is given by 2 af BayFh, pa pene cnr). From now on we will, unless otherwise stated, let A = Ax denote the generator of the It6 diffusion X;. We let L = Ly denote the differential operator given by the right hand side of (7.3.3). From Theorem 7.3.3 we know that Ay and Lx coincide on C3(R"). 
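Before turning to Dynkin's formula, here is a small numerical sanity check of Theorem 7.3.3 (our own sketch, not from the book): for geometric Brownian motion dX_t = r X_t dt + σ X_t dB_t and the test function f(x) = x^2, formula (7.3.3) gives Af(x) = 2r x^2 + σ^2 x^2, and a Monte Carlo estimate of (E^x[f(X_h)] − f(x))/h for a small h should come close to this value. The parameter values and the one-step sampling of X_h below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Check the generator formula (7.3.3) for geometric Brownian motion
# dX_t = r X_t dt + sigma X_t dB_t with the test function f(x) = x^2.
r, sigma, x, h, n = 0.1, 0.3, 2.0, 0.01, 4_000_000
f = lambda y: y**2

Z = rng.standard_normal(n)
X_h = x * np.exp((r - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * Z)  # exact one-step sample of X_h

mc = (f(X_h).mean() - f(x)) / h                  # (E^x[f(X_h)] - f(x)) / h
Af = r * x * 2 * x + 0.5 * (sigma * x)**2 * 2    # b f' + (1/2) (sigma x)^2 f''

print(f"Monte Carlo (E[f(X_h)] - f(x))/h : {mc: .4f}")
print(f"Generator   Af(x)                : {Af: .4f}")
```

With these numbers Af(x) = 1.16, and the Monte Carlo value should agree to a few decimals; shrinking h further removes the remaining O(h) bias at the price of more Monte Carlo noise.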
7.4 The Dynkin Formula If we combine (7.3.2) and (7.3.3) we get: Theorem 7.4.1 (Dynkin’s formula). Let f € C3(R"). Suppose r is a stopping time, E*[r] < 00. Then ERX) = 10) + "| / An(Xadas] (74.1) d 7.4 The Dynkin Formula 125 Remarks. (i) Note that if 7 is the first exit time of a bounded set, E*[r] < oo, then (7.4.1) holds for any function f € C?. (ii) For a more general version of Theorem 7.4.1 see Dynkin (1965 I), p. 133. Example 7.4.2. Consider n-dimensional Brownian motion B = (Bj,..., Bn starting at a = (a),...,@,) € R"(n > 1) and assume |a| < R. What is the expected value of the first exit time tx of B from the ball K =Kn={2€ R"|2| < R}? Choose an integer & and apply Dynkin’s formula with X = B, 1 = 0% = min(k,rx), and f € G3 such that f(x) = ||? for |z| < R: E*f(Bo)] = fle) + | fyarconus] 3 = we +e[ faa] = (a)? + n- Elon] - 3 Hence E*[ox] < 2(R? — |a|2) for all k. So letting k > oo we conclude that Tk = limoy < co'as. and Eire] = 2? -[ai?). (7.4.2) Next we assume that n > 2 and |b] > R. What is the probability that B starting at b ever hits K? Let ax be the first exit time from the annulus Ap ={aj;R<|z]<2'R}; k=1,2,... and put Tx = inf{t > 0;X, €K}. Let f = fax be a C? function with compact support such that, if R < |x| <2R, —log|z| when n =2 J(z) { le?" whenn>2 Then, since Af = 0 in Ag, we have by Dynkin’s formula E*(f(Ba,)) = f(b) for all k. (7.4.3) Put Pe = PPliBasl= R], ae = P*llBa,| = 2*R) . Let us now consider the two cases n = 2 and n > 2 separately: 126 7. Diffusions: Basic Properties 2. Then we get from (7.4.3) — log R- py — (log R+ k- log 2)qx = — log |b| for all k (7.4.4) ‘This implies that qx — 0 as k — co, so that P'TK < ool =1, (7.4.5) ie. Brownian motion is recurrent in R?. (See Port and Stone (1979)). n> 2. In this case (7.4.3) gives Pe RP" + gy: (2*R)7" = |b)”. Since 0 < qu < 1 we get by letting k + oo lias pp ET, co (2) pl Pk = K R , ie. Brownian motion is transient in R” for n > 2. 7.5 The Characteristic Operator We now introduce an operator which is closely related to the generator A, but is more suitable in many situations, for example in the solution of the Dirichlet problem. Definition 7.5.1. Let {X;} be an Ité diffusion. The characteristic operator A= Ax of {X1} is defined by E*(f(X-,)] ~ f(@) Pl wees Af(z) = Hie where the U's are open sets U, decreasing to the point x, in the sense that Ursa C Ug and (\Ux = {x}, and 1, = inf{t > 0;X_ ¢ U} is the first exit k time from U for Xz. The set of functions f such that the limit (7.5.1) exists for all x € R” (and all {U,}) is denoted by Da. If E*[ry] = 00 for all open U 2 2, we define Af(x) =0. It turns out that D4 C Dy always and that Af=Af forall fe Ds. (See Dynkin (1965 I, p. 143).) ‘We will only need that Ax and Lx coincide on C?. To obtain this we first clarify a property of exit times. 7.5 The Characteristic Operator 127 Definition 7.5.2. A point x € R" is called a trap for {X¢} if Q({Xi =x for all t}) = In other words, x is trap if and only if r¢2} = 00 a.8. Q*. For example, if b(20) = (20) = 0, then xq is a trap for X; (by strong uniqueness of X1). Lemma 7.5.3. If x is not a trap for X;, then there exists an open set U 3 x such that E* [ry] < 00. Proof. See Lemma 5.5 p. 139 in Dynkin (1965 1). Theorem 7.5.4. Let f € C?. Then f € Da and a, @ Af = ragt +4 Deo see : (7.5.2) 7 wd Proof. As before we let L denote the operator defined by the right hand side of (7.5.2). If x is a trap for {X;} then f(z) = 0. Choose a bounded open set V such that x € V. Modify f to fo outside V such that fo € C3(R"). 
Then fo € Da(z) and 0 = Afo(z) = Lfo(z) = Lf (x). Hence Af(z) = Lf(x) = 0 in this case. If x is not a trap, choose a bounded open set U > x such that E*[r,] < 00. Then by Dynkin’s formula (Theorem 7.4.1) (and the following Remark (i)), writing 7, =r IELf (LAX) ~ Lfte) has aia < sup |Lf(x) - Lf(y)| +0 woul. oe E*lf(Xz)] ~ E*| — Lf(z) since Lf is a continuous function. Remark. We have now obtained that an Ité diffusion is a continuous, strong Markov process such that the domain of definition of its characteristic oper- ator includes C?. Thus an Ité diffusion is a diffusion in the sense of Dynkin (1965 1). Example 7.5.5 (Brownian motion on the unit circle). The character- istic operator of the process ¥Y = (31) from Example 5.1.4 satisfying the stochastic differential equations (5.1.13), i.e. (r = -4Yidt ~ YodB dY, = -}Y¥ndt+ dB 128 7. Diffusions: Basic Properties PF af af + of 4 oye Oy Pye 1 Af(yi.y2) = fa aoe dung This is because dY = —}Ydt + KYB, where K=(7 0) so that dY = b(Y)dt + 0(¥)dB with v1 7 ou, ey, sy) = ( (yr, yo) = (- @) o(yi. ya) ( i and > wo -ny) -ny2 oF Example 7.5.6. Let D be an open subset of R" such that 7p < oo as. Q? for all x. Let @ be a bounded, measurable function on 9D and define (2) = B*[(Xr0)] (is called the X-harmonic extension of ¢). Then if U is open, z € U Cc D, we have by (7.2.8) that E*[§(Xz,)] = B*[E*e [6(Xr0)l] = E7[6(Xr0)] = G2) So @€ Da and _ Ag=0 in D, in spite of the fact that in general ¢ need not even be continuous in D (See Example 9.2.1). Exercises 7.1. Find the generator of the following Ité diffusions: a) dX, = pXydt + odB, (The Ornstein-Uhlenbeck process) (By € R; 4,0 constants). b) dX, =rX,dt + aX,dB, (The geometric Brownian motion) (By € R; r,cr constants). ©) dY¥; = rdt + a¥,dB, (B, € R; r,a constants) @) dy= [i] where X; is as in a) 7.2. 7.3. 7.5. Exercises 129 » [Se] [bJes[ Jon mem 0 [ex] = [ofa [o x.) [ae] 8) X(t) = (Xi, X2,---, Xn), where AX (t) = reXedt + Xe D> anjdB; 5 l 0; BE =0}. a) Prove that + < oo as. P¥ for all x > 0. (Hint: See Example 7.4.2, second part). b) Prove that E#[r part). co for all x > 0. (Hint: See Example 7.4.2, first Let the functions b,¢ satisfy condition (5.2.1) of Theorem 5.2.1, with a constant C’ independent of t, i.e. [b(t,2)| + lo(t,2)}0. Let X; be a solution of dX, = b(t, Xe)dt + oft, X:)dBr . Show that E(iXe?] < (1 + El|Xol?e** — 1 for some constant K independent of t. 130 7. Diffusions: Basic Properties (Hint: Use Dynkin’s formula with f(z) = ||? and r =t Arp, where Tr = inf {t > 0; |X,| > R}, and let R — oo to achieve the inequality BUXP| < BUXol]+K- [1+ £IXsP Dds, j which is of the form (5.2.9).) 7.6. Let g(x,w) = fo F(z,t,t + h,w) be as in the proof of Theorem 7.1.2. Assume that f is continuous. a) Prove that the map x — 9(z,-) is continuous from R into L?(P) by using (5.2.9). For simplicity assume that n = 1 in the following. b) Use a) to prove that (1r,w) + g(¢,w) is measurable. (Hint: For each m= 1,2,... put & =" =k-2-™, k= 1,2,... Then DHE) Mececend rs g(a,-): converges to g(z,-) in L*(P) for each x. Deduce that g™ — g in L?(dmp x dP) for all R, where dmp is Lebesgue measure on {|x| < R}. So a subsequence of g(™)(z,) converges to g(z,w) for a.a, (,w).) 7.7. Let B, be Brownian motion on R® starting at r€R" and let DCR" be an open ball centered at 2. a) Use Exercise 2.15 to prove that the harmonic measure uf, of B, is rotation invariant (about 2) on the sphere 8D. Conclude that 1.3, coincides with normalized surface measure o on OD. 
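A simulation can make part a) plausible before one proves it (the sketch, its step size and its sample sizes are our own choices, not part of the exercise): starting planar Brownian motion at the centre of the unit disc, the exit points should be spread uniformly over the circle, i.e. the harmonic measure seen from the centre should be normalized arc length.

```python
import numpy as np

rng = np.random.default_rng(4)

# Exit points of planar Brownian motion from the unit disc, started at the centre.
dt, n_paths = 2e-3, 2000
angles = np.empty(n_paths)

for i in range(n_paths):
    p = np.zeros(2)
    while p @ p < 1.0:                       # stay inside the open unit disc
        p += rng.normal(0.0, np.sqrt(dt), 2)
    angles[i] = np.arctan2(p[1], p[0])       # angle of the (approximate) exit point

counts, _ = np.histogram(angles, bins=4, range=(-np.pi, np.pi))
print("quadrant frequencies:", counts / n_paths, " (each should be close to 0.25)")
```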
b) Let be a bounded measurable function on a bounded open set W c R” and define uz) = E*[(Bry)] for 2 W. Prove that: u satisfies the classical mean value property: u(e) = [ wy)doty) (7.53) tb for all balls D centered at x with D c W. c) Let W be as in b) and let w: W > R be harmonic in W, ice. dw ee =0 inW. (7.5.4) i=l Prove that w satisfies the classical mean value property (7.5.3). Exercises 131 Remark. For a converse of this see e.g. Oksendal and Stroock (1982) and the references therein. Let {N;} be a right-continuous family of o-algebras of subsets of 2, containing all sets of measure zero. a) Let 71,72 be stopping times (w.r.t. N;). Prove that 7, \72 and 7) VT2 are stopping times. b) If {mm} is a decreasing family of stopping times prove that 7: = lim Tp is a stopping time. c) If X; is an Ité diffusion in R” and F C R"® is closed, prove that Tr is a stopping time w.r-t. M;. (Hint: Consider open sets decreasing to F). Let X; be a geometric Brownian motion, i. dX,=rXidt+aXdB,, Xo=2>0 where B, € R; r,a are constants. a) Find the generator A of X; and compute Af(x) when f(z) x > 0, 7 constant. b) If r < 4a? then X; > 0 as t — oo, as. Q* (Example 5.1.1). But what is the probability p that X;,, when starting from x < R, ever hits the value R ? Use Dynkin’s formula with f(x) = «7, 71 = 1— 35, to prove that (i) Ifr > 3a? then X; — 00 as t + co, as. Q*. Put °) 7 = inf{t > 0; X, > R}. Use Dynkin’s formula with f(z) =Inz, x > 0 to prove that In& E*(7] (Hint: First consider exit times from (p,R), p > 0 and then let p — 0. You need estimates for (1 = plp))Inp , where (9) = Q? [Xz reaches the value R before p], which you can get from the calculations in a), b).) 132 7.10. 7. Diffusions: Basic Properties Let X; be the geometric Brownian motion dX, =rXrdt + aXdB, . Find E*[X7|F;] for t< T by a) using the Markov property and b) writing X; = ze"'M,, where M, = exp(aB; — $07t) is a martingale . Let X; be an Ité diffusion in R” and let f:R” -+ R bea function such that - E| fino] co and Z(t Ar) is an M-martingale for all k . a) Show that if Z(t) is a local martingale and there exists a constant T < co such that the family {Z(r)},0;|Bl R} Prove that X, is an F,,;-martingale. (Hint: Use Exercise 4.8.) Deduce that In|B;| is a local martingale (Exercise 7.12). b) Let B, € R” forn > 3, Bo = 2 £ 0. Fix € > 0, R < 00 and define Y= [BirP™; 20 where T= inf{t > 0; |B Se or |Bi| > R}. Prove that Y; is an F,,,-martingale. Deduce that |B;|?-” is a local martingale. (Doob’s h-transform) Let B, be n-dimensional Brownian motion, D C R” a bounded open set and h > 0 a harmonic function on D (ie. Ah = 0 in D). Let Xt be the solution of the stochastic differential equation dX, = V(In h)(Xp)dt + dBy More precisely, choose an increasing sequence {Dx} of open subsets of D such that D, ¢ D and U Dx = D. Then for each k the equation ket above can be solved (strongly) for t < 7p,. This gives in a natural way a solution for t<7:= lim 7o,. a) Show that the generator A of X; satisfies A(hf) oh In particular, if f = } then Af = b) Use a) to show that if there exists z9 € @D such that lim may= {2 ity # zo Af= for f € C3(D) - eyed oo ify=z0 (ie. h is a kernel function), then jim X, = x as. (Hint: Consider B*[f(X7)| for suitable stopping times T and with f=) In other words, we have imposed a drift on B, which causes the process to exit from D at the point zo only. This can also be for- mulated as follows: X; is obtained by conditioning B, to exit from D at zo. See Doob (1984). 134 7.15. 7.16. 7.17. 7. 
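The h-transform in the last exercise can be visualised in one dimension. In the sketch below (our own reduction, with freely chosen parameters), D = (0, 1) and h(x) = x, which is harmonic on D and vanishes only at the left endpoint; the transformed diffusion dX_t = (1/X_t) dt + dB_t should therefore leave D through the right endpoint only, whereas plain Brownian motion started at 1/2 exits at either end with probability 1/2. The Euler scheme only approximates this, so the conditioned fraction comes out close to, but not exactly, 1.

```python
import numpy as np

rng = np.random.default_rng(8)

# One-dimensional illustration of the h-transform:
# D = (0, 1), h(x) = x, so grad(ln h)(x) = 1/x and dX_t = (1/X_t) dt + dB_t.
dt, n_paths, x0 = 5e-4, 1000, 0.5

def fraction_exiting_right(with_drift: bool) -> float:
    """Fraction of Euler paths that leave (0, 1) through the right endpoint 1."""
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    right = np.zeros(n_paths, dtype=bool)
    while alive.any():
        step = rng.normal(0.0, np.sqrt(dt), alive.sum())
        if with_drift:
            step = step + dt / x[alive]          # drift grad(ln h)(x) = 1/x
        x[alive] += step
        right |= alive & (x >= 1.0)              # newly exited through 1
        alive &= (x > 0.0) & (x < 1.0)           # still strictly inside D
    return right.mean()

print(f"plain Brownian motion, fraction exiting at 1 : {fraction_exiting_right(False):.3f}")
print(f"h-transformed process, fraction exiting at 1 : {fraction_exiting_right(True):.3f}")
```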
Let B_t be 1-dimensional Brownian motion and define

F(ω) = (B_T(ω) − K)^+ ,

where K > 0, T > 0 are constants. By the Itô representation theorem (Theorem 4.3.3) we know that there exists φ ∈ V(0, T) such that

F(ω) = E[F] + ∫_0^T φ(t, ω) dB_t .

How do we find φ explicitly? This problem is of interest in mathematical finance, where φ may be regarded as the replicating portfolio for the contingent claim F (see Chapter 12). Using the Clark–Ocone formula (see Karatzas and Ocone (1991), Øksendal (1996) or Aase et al. (2000)) one can deduce that

φ(t, ω) = E[χ_{[K,∞)}(B_T) | F_t] ;   t < T .   (7.5.5)

Use (7.5.5) and the Markov property of Brownian motion to prove that for t < T

φ(t, ω) = (2π(T − t))^{−1/2} ∫_K^∞ exp( −(x − B_t)^2 / (2(T − t)) ) dx .

Then we have seen in Exercise 4.15 that X_t is a solution of the stochastic differential equation

dX_t = 3 X_t^{1/3} dt + 3 X_t^{2/3} dB_t ;   X_0 = x .   (7.5.8)

Define τ = inf{t > 0 ; X_t = 0} and put

Y_t = X_t for t ≤ τ ,   Y_t = 0 for t > τ .
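For the Clark–Ocone exercise above, the Markov property reduces φ(t, ω) to a deterministic function of B_t, namely the probability that the remaining Brownian increment over [t, T] carries B_t above K. The sketch below is our own construction (the chosen numbers, and the use of the standard normal CDF to evaluate the Gaussian integral, are ours); it compares a Monte Carlo estimate of that probability with the closed-form expression for one illustrative value of B_t.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)

# phi(t) = P(B_T >= K | B_t = b) by the Markov property; compare Monte Carlo with the formula.
K, T, t, b = 0.5, 1.0, 0.4, 0.2        # illustrative values
n = 1_000_000

increments = rng.normal(0.0, sqrt(T - t), n)        # B_T - B_t, independent of F_t
phi_mc = np.mean(b + increments >= K)

Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
phi_formula = 1.0 - Phi((K - b) / sqrt(T - t))

print(f"Monte Carlo  phi(t) : {phi_mc:.4f}")
print(f"Closed form  phi(t) : {phi_formula:.4f}")
```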
