
APPLIED STOCHASTIC PROCESSES

G.A. Pavliotis
Department of Mathematics Imperial College London, UK
January 16, 2011
Lectures: Mondays, 10:00-12:00, Huxley 6M42.
Office Hours: By appointment.
Course webpage:
http://www.ma.imperial.ac.uk/~pavl/stoch_proc.htm
Text: Lecture notes, available from the course webpage. Also,
recommended reading from various textbooks/review articles.
The lecture notes are still in progress. Please send me your
comments, suggestions and let me know of any typos/errors
that you have spotted.
This is a basic graduate course on stochastic processes, aimed
towards PhD students in applied mathematics and theoretical
physics.
The emphasis of the course will be on the presentation of
analytical tools that are useful in the study of stochastic models
that appear in various problems in applied mathematics, physics,
chemistry and biology.
The course will consist of three parts: Fundamentals of the theory
of stochastic processes, applications (reaction rate theory, surface
diffusion....) and non-equilibrium statistical mechanics.
1. PART I: FUNDAMENTALS OF CONTINUOUS TIME STOCHASTIC PROCESSES
Elements of probability theory.
Stochastic processes: basic definitions, examples.
Continuous time Markov processes. Brownian motion.
Diffusion processes: basic definitions, the generator.
Backward Kolmogorov and the Fokker-Planck (forward Kolmogorov) equations.
Stochastic differential equations (SDEs); Itô calculus, Itô and Stratonovich stochastic integrals, connection between SDEs and the Fokker-Planck equation.
Methods of solution for SDEs and for the Fokker-Planck equation.
Ergodic properties and convergence to equilibrium.
2. PART II: APPLICATIONS.
Asymptotic problems for the Fokker-Planck equation: overdamped (Smoluchowski) and underdamped (Freidlin-Wentzell) limits.
Bistable stochastic systems: escape over a potential barrier, mean first passage time, calculation of escape rates etc.
Brownian motion in a periodic potential. Stochastic models of molecular motors.
Multiscale problems: averaging and homogenization.
3. PART III: NON-EQUILIBRIUM STATISTICAL MECHANICS.
Derivation of stochastic differential equations from deterministic dynamics (heat bath models, projection operator techniques etc.).
The fluctuation-dissipation theorem.
Linear response theory.
Derivation of macroscopic equations (hydrodynamics) and calculation of transport coefficients.
4. ADDITIONAL TOPICS (time permitting): numerical methods, stochastic PDEs, Markov chain Monte Carlo (MCMC).
Prerequisites
Basic knowledge of ODEs and PDEs.
Elementary probability theory.
Some familiarity with the theory of stochastic processes.
Lecture notes will be provided for all the material that we will cover
in this course. They will be posted on the course webpage.
There are many excellent textbooks/review articles on applied
stochastic processes, at a level and style similar to that of this
course.
Standard textbooks are
Gardiner: Handbook of stochastic methods (1985).
Van Kampen: Stochastic processes in physics and chemistry
(1981).
Horsthemke and Lefever: Noise induced transitions (1984).
Risken: The Fokker-Planck equation (1989).
Oksendal: Stochastic differential equations (2003).
Mazo: Brownian motion: fluctuations, dynamics and applications (2002).
Bhattacharya and Waymire: Stochastic Processes and Applications (1990).
Other standard textbooks are
Nelson: Dynamical theories of Brownian motion (1967). Available
from the web (includes a very interesting historical overview of the
theory of Brownian motion).
Chorin and Hald: Stochastic tools in mathematics and science
(2006).
Zwanzig: Nonequilibrium statistical mechanics (2001).
The material on multiscale methods for stochastic processes,
together with some of the introductory material, will be taken from
Pavliotis and Stuart: Multiscale methods: averaging and
homogenization (2008).
Excellent books on the mathematical theory of stochastic
processes are
Karatzas and Shreve: Brownian motion and stochastic calculus
(1991).
Revuz and Yor: Continuous martingales and Brownian motion
(1999).
Stroock: Probability theory, an analytic view (1993).
Well known review articles (available from the web) are:
Chandrasekhar: Stochastic problems in physics and astronomy
(1943).
Hanggi and Thomas: Stochastic processes: time evolution,
symmetries and linear response (1982).
Hanggi, Talkner and Borkovec: Reaction rate theory: fifty years
after Kramers (1990).
The theory of stochastic processes started with Einstein's work on the theory of Brownian motion: "Concerning the motion, as required by the molecular-kinetic theory of heat, of particles suspended in liquids at rest" (1905):
explanation of Brown's observation (1827): when suspended in water, small pollen grains are found to be in a very animated and irregular state of motion.
Einstein's theory is based on
A Markov chain model for the motion of the particle (molecule, pollen grain...).
The idea that it makes more sense to talk about the probability of finding the particle at position x at time t, rather than about individual trajectories.
In his work many of the main aspects of the modern theory of stochastic processes can be found:
The assumption of Markovianity (no memory) expressed through the Chapman-Kolmogorov equation.
The Fokker-Planck equation (in this case, the diffusion equation).
The derivation of the Fokker-Planck equation from the master (Chapman-Kolmogorov) equation through a Kramers-Moyal expansion.
The calculation of a transport coefficient (the diffusion coefficient) using macroscopic (kinetic theory-based) considerations:
D = k_B T / (6πηa).
k_B is Boltzmann's constant, T is the temperature, η is the viscosity of the fluid and a is the diameter of the particle.
Einstein's theory is based on the Fokker-Planck equation. Langevin (1908) developed a theory based on a stochastic differential equation. The equation of motion for a Brownian particle is
m d²x/dt² = −6πηa dx/dt + ξ,
where ξ is a random force.
There is complete agreement between Einstein's theory and Langevin's theory.
The theory of Brownian motion was developed independently by Smoluchowski.
The approaches of Langevin and Einstein represent the two main approaches in the theory of stochastic processes:
Study individual trajectories of Brownian particles. Their evolution is governed by a stochastic differential equation:
dX/dt = F(X) + σ(X)ξ(t),
where ξ(t) is a random force.
Study the probability ρ(x, t) of finding a particle at position x at time t. This probability distribution satisfies the Fokker-Planck equation:
∂ρ/∂t = −∇·(F(x)ρ) + (1/2)∇∇ : (A(x)ρ),
where A(x) = σ(x)σ(x)^T.
The theory of stochastic processes was developed during the 20th century:
Physics:
Smoluchowski.
Planck (1917).
Klein (1922).
Ornstein and Uhlenbeck (1930).
Kramers (1940).
Chandrasekhar (1943).
...
Mathematics:
Wiener (1922).
Kolmogorov (1931).
Itô (1940s).
Doob (1940s and 1950s).
...
The One-Dimensional Random Walk
We let time be discrete, i.e. t = 0, 1, .... Consider the following stochastic process S_n:
S_0 = 0; at each time step it moves to ±1 with equal probability 1/2.
In other words, at each time step we flip a fair coin. If the outcome is heads, we move one unit to the right. If the outcome is tails, we move one unit to the left.
Alternatively, we can think of the random walk as a sum of independent random variables:
S_n = Σ_{j=1}^{n} X_j,
where X_j ∈ {−1, 1} with P(X_j = ±1) = 1/2.
We can simulate the random walk on a computer:
We need a (pseudo)random number generator to generate n independent random variables which are uniformly distributed in the interval [0, 1].
If the value of the random variable is less than 1/2 then the particle moves to the left, otherwise it moves to the right.
We then take the sum of all these random moves.
The sequence {S_n}_{n=1}^{N} indexed by the discrete time T = {1, 2, ..., N} is the path of the random walk. We use a linear interpolation (i.e. connect the points {n, S_n} by straight lines) to generate a continuous path.
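A minimal NumPy sketch of this recipe (my own illustration, not part of the original notes; N = 50 and the number of paths are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_paths = 50, 1000

# Uniform [0,1] samples: < 1/2 -> step -1 (left), otherwise step +1 (right).
u = rng.uniform(size=(n_paths, N))
steps = np.where(u < 0.5, -1, 1)

# S_n = sum of the first n steps; prepend S_0 = 0 so each row is a full path.
S = np.concatenate([np.zeros((n_paths, 1), dtype=int),
                    np.cumsum(steps, axis=1)], axis=1)

print(S[:3])                                     # three sample paths of length N = 50
print(S[:, -1].mean(), (S[:, -1] ** 2).mean())   # ~0 and ~N, matching E S_N = 0, E S_N^2 = N
```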
Figure: Three paths of the random walk of length N = 50.
Figure: Three paths of the random walk of length N = 1000.
Every path of the random walk is different: it depends on the outcome of a sequence of independent random experiments.
We can compute statistics by generating a large number of paths and computing averages. For example, E(S_n) = 0, E(S_n²) = n.
The paths of the random walk (without the linear interpolation) are not continuous: the random walk has a jump of size 1 at each time step.
This is an example of a discrete time, discrete space stochastic process.
The random walk is a time-homogeneous (the probabilistic law of evolution is independent of time) Markov (the future depends only on the present and not on the past) process.
If we take a large number of steps, the random walk starts looking like a continuous time process with continuous paths.
Consider the sequence of continuous time stochastic processes
Z_t^n := (1/√n) S_{[nt]}.
In the limit as n → ∞, the sequence {Z_t^n} converges (in some appropriate sense) to a Brownian motion with diffusion coefficient D = (Δx)²/(2Δt) = 1/2.
Figure: Sample Brownian paths: five individual paths and the mean of 1000 paths.
Brownian motion W(t) is a continuous time stochastic process with continuous paths that starts at 0 (W(0) = 0) and has independent, normally distributed (Gaussian) increments.
We can simulate Brownian motion on a computer using a random number generator that generates normally distributed, independent random variables.
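For example, the following sketch (my own, not from the notes; the grid size and number of paths are arbitrary) sums independent N(0, Δt) increments on a uniform grid:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 1000, 5
dt = T / n_steps

# Independent Gaussian increments with variance dt, summed to give W(t) on the grid.
dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

print(W[:, -1])   # samples of W(1), each approximately N(0, 1)
```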
We can write an equation for the evolution of the paths of a Brownian motion X_t with diffusion coefficient D starting at x:
dX_t = √(2D) dW_t, X_0 = x.
This is an example of a stochastic differential equation.
The probability of finding X_t at y at time t, given that it was at x at time t = 0, the transition probability density ρ(y, t), satisfies the PDE
∂ρ/∂t = D ∂²ρ/∂y², ρ(y, 0) = δ(y − x).
This is an example of the Fokker-Planck equation.
The connection between Brownian motion and the diffusion equation was made by Einstein in 1905.
Why introduce randomness in the description of physical systems?
To describe outcomes of a repeated set of experiments. Think of
tossing a coin repeatedly or of throwing a dice.
To describe a deterministic system for which we have incomplete
information: we have imprecise knowledge of initial and boundary
conditions or of model parameters.
ODEs with random initial conditions are equivalent to stochastic
processes that can be described using stochastic differential
equations.
To describe systems for which we are not confident about the
validity of our mathematical model.
To describe a dynamical system exhibiting very complicated
behavior (chaotic dynamical systems). Determinism versus
predictability.
To describe a high dimensional deterministic system using a
simpler, low dimensional stochastic system. Think of the physical
model for Brownian motion (a heavy particle colliding with many
small particles).
To describe a system that is inherently random. Think of quantum
mechanics.
ELEMENTS OF PROBABILITY THEORY
A collection of subsets of a set Ω is called a σ-algebra if it contains Ω and is closed under the operations of taking complements and countable unions of its elements.
A sub-σ-algebra is a collection of subsets of a σ-algebra which satisfies the axioms of a σ-algebra.
A measurable space is a pair (Ω, F) where Ω is a set and F is a σ-algebra of subsets of Ω.
Let (Ω, F) and (E, G) be two measurable spaces. A function X : Ω → E such that the event
{ω ∈ Ω : X(ω) ∈ A} =: {X ∈ A}
belongs to F for arbitrary A ∈ G is called a measurable function or random variable.
Let (Ω, F) be a measurable space. A function μ : F → [0, 1] is called a probability measure if μ(∅) = 0, μ(Ω) = 1 and μ(∪_{k=1}^∞ A_k) = Σ_{k=1}^∞ μ(A_k) for all sequences of pairwise disjoint sets {A_k}_{k=1}^∞ ∈ F.
The triplet (Ω, F, μ) is called a probability space.
Let X be a random variable (measurable function) from (Ω, F, μ) to (E, G). If E is a metric space then we may define the expectation with respect to the measure μ by
E[X] = ∫_Ω X(ω) dμ(ω).
More generally, let f : E → R be G-measurable. Then,
E[f(X)] = ∫_Ω f(X(ω)) dμ(ω).
Let U be a topological space. We will use the notation B(U) to denote the Borel σ-algebra of U: the smallest σ-algebra containing all open sets of U. Every random variable from a probability space (Ω, F, μ) to a measurable space (E, B(E)) induces a probability measure on E:
μ_X(B) = PX^{−1}(B) = μ(ω ∈ Ω; X(ω) ∈ B), B ∈ B(E).
The measure μ_X is called the distribution (or sometimes the law) of X.

Example
Let I denote a subset of the positive integers. A vector ρ_0 = {ρ_{0,i}, i ∈ I} is a distribution on I if it has nonnegative entries and its total mass equals 1: Σ_{i∈I} ρ_{0,i} = 1.
We can use the distribution of a random variable to compute expectations and probabilities:
E[f(X)] = ∫_E f(x) dμ_X(x)
and
P[X ∈ G] = ∫_G dμ_X(x), G ∈ B(E).
When E = R^d and we can write dμ_X(x) = ρ(x) dx, then we refer to ρ(x) as the probability density function (pdf), or density with respect to Lebesgue measure, for X.
When E = R^d then by L^p(Ω; R^d), or sometimes L^p(Ω; μ) or even simply L^p(μ), we mean the Banach space of measurable functions on Ω with norm
‖X‖_{L^p} = ( E|X|^p )^{1/p}.
Example
Consider the random variable X : Ω → R with pdf
γ_{σ,m}(x) := (2πσ)^{−1/2} exp( −(x − m)²/(2σ) ).
Such an X is termed a Gaussian or normal random variable. The mean is
EX = ∫_R x γ_{σ,m}(x) dx = m
and the variance is
E(X − m)² = ∫_R (x − m)² γ_{σ,m}(x) dx = σ.
Example (Continued)
Let m ∈ R^d and Σ ∈ R^{d×d} be symmetric and positive definite. The random variable X : Ω → R^d with pdf
γ_{Σ,m}(x) := ( (2π)^d det Σ )^{−1/2} exp( −(1/2) ⟨Σ^{−1}(x − m), (x − m)⟩ )
is termed a multivariate Gaussian or normal random variable.
The mean is
E(X) = m (1)
and the covariance matrix is
E( (X − m) ⊗ (X − m) ) = Σ. (2)
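A quick numerical illustration (my own; the particular m and Σ are arbitrary): sample from N(m, Σ) via a Cholesky factor and check that the empirical mean and covariance reproduce (1) and (2).

```python
import numpy as np

rng = np.random.default_rng(3)
m = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])      # symmetric positive definite

# Sample X = m + L xi with L the Cholesky factor of Sigma and xi ~ N(0, I).
L = np.linalg.cholesky(Sigma)
xi = rng.standard_normal((100000, 2))
X = m + xi @ L.T

print(X.mean(axis=0))               # approximately m, cf. (1)
print(np.cov(X, rowvar=False))      # approximately Sigma, cf. (2)
```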
Since the mean and variance specify completely a Gaussian random variable on R, the Gaussian is commonly denoted by N(m, σ). The standard normal random variable is N(0, 1).
Since the mean and covariance matrix completely specify a Gaussian random variable on R^d, the Gaussian is commonly denoted by N(m, Σ).
The Characteristic Function
Many of the properties of (sums of) random variables can be studied using the Fourier transform of the distribution function.
The characteristic function of the random variable X is defined to be the Fourier transform of the distribution function
φ(t) = ∫_R e^{itλ} dμ_X(λ) = E(e^{itX}). (3)
The characteristic function determines uniquely the distribution function of the random variable, in the sense that there is a one-to-one correspondence between F(λ) and φ(t).
The characteristic function of a N(m, Σ) is
φ(t) = exp( i⟨m, t⟩ − (1/2)⟨Σt, t⟩ ).
Lemma
Let X_1, X_2, ..., X_n be independent random variables with characteristic functions φ_j(t), j = 1, ..., n, and let Y = Σ_{j=1}^n X_j with characteristic function φ_Y(t). Then
φ_Y(t) = Π_{j=1}^n φ_j(t).

Lemma
Let X be a random variable with characteristic function φ(t) and assume that it has finite moments. Then
E(X^k) = (1/i^k) φ^{(k)}(0).
Types of Convergence and Limit Theorems
One of the most important aspects of the theory of random
variables is the study of limit theorems for sums of random
variables.
The most well known limit theorems in probability theory are the
law of large numbers and the central limit theorem.
There are various different types of convergence for sequences of random variables.
Definition
Let {Z_n}_{n=1}^∞ be a sequence of random variables. We will say that
(a) Z_n converges to Z with probability one (almost surely) if
P( lim_{n→+∞} Z_n = Z ) = 1.
(b) Z_n converges to Z in probability if for every ε > 0
lim_{n→+∞} P( |Z_n − Z| > ε ) = 0.
(c) Z_n converges to Z in L^p if
lim_{n→+∞} E( |Z_n − Z|^p ) = 0.
(d) Let F_n(λ), n = 1, ..., +∞, F(λ) be the distribution functions of Z_n, n = 1, ..., +∞, and Z, respectively. Then Z_n converges to Z in distribution if
lim_{n→+∞} F_n(λ) = F(λ)
for all λ ∈ R at which F is continuous.
Let {X_n}_{n=1}^∞ be iid random variables with EX_n = V. Then, the strong law of large numbers states that the average of the sum of the iid random variables converges to V with probability one:
P( lim_{N→+∞} (1/N) Σ_{n=1}^N X_n = V ) = 1.
The strong law of large numbers provides us with information about the behavior of a sum of random variables (or, a large number of repetitions of the same experiment) on average.
We can also study fluctuations around the average behavior. Indeed, let E(X_n − V)² = σ². Define the centered iid random variables Y_n = X_n − V. Then, the sequence of random variables (1/(σ√N)) Σ_{n=1}^N Y_n converges in distribution to a N(0, 1) random variable:
lim_{N→+∞} P( (1/(σ√N)) Σ_{n=1}^N Y_n ≤ a ) = ∫_{−∞}^a (1/√(2π)) e^{−x²/2} dx.
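A Monte Carlo illustration of the central limit theorem (my own sketch, using uniform X_n on [0, 1], so that V = 1/2 and σ² = 1/12): the centered, normalized sums should be approximately N(0, 1).

```python
import math
import numpy as np

rng = np.random.default_rng(4)
N, n_repeats = 500, 10000

# X_n ~ U(0,1): V = 1/2, sigma^2 = 1/12.
X = rng.uniform(size=(n_repeats, N))
V, sigma = 0.5, math.sqrt(1.0 / 12.0)

Z = (X - V).sum(axis=1) / (sigma * math.sqrt(N))   # (1/(sigma*sqrt(N))) * sum_n Y_n

# Compare the empirical cdf of Z with Phi(a) = (1 + erf(a/sqrt(2)))/2; they agree
# to within Monte Carlo error (about 0.01 with 10000 repetitions).
for a in (-1.0, 0.0, 1.0):
    print(a, (Z <= a).mean(), 0.5 * (1.0 + math.erf(a / math.sqrt(2.0))))
```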
Assume that E|X| < ∞ and let G be a sub-σ-algebra of F. The conditional expectation of X with respect to G is defined to be the function E[X|G] : Ω → E which is G-measurable and satisfies
∫_G E[X|G] dμ = ∫_G X dμ for all G ∈ G.
We can define E[f(X)|G] and the conditional probability P[X ∈ F|G] = E[I_F(X)|G], where I_F is the indicator function of F, in a similar manner.
ELEMENTS OF THE THEORY OF STOCHASTIC PROCESSES
Let T be an ordered set. A stochastic process is a collection of random variables X = {X_t; t ∈ T} where, for each fixed t ∈ T, X_t is a random variable from (Ω, F) to (E, G).
The measurable space (Ω, F) is called the sample space. The space (E, G) is called the state space.
In this course we will take the set T to be [0, +∞).
The state space E will usually be R^d equipped with the σ-algebra of Borel sets.
A stochastic process X may be viewed as a function of both t ∈ T and ω ∈ Ω. We will sometimes write X(t), X(t, ω) or X_t(ω) instead of X_t. For a fixed sample point ω ∈ Ω, the function X_t(ω) : T → E is called a sample path (realization, trajectory) of the process X.
The finite dimensional distributions (fdd) of a stochastic process are the distributions of the E^k-valued random variables (X(t_1), X(t_2), ..., X(t_k)) for arbitrary positive integer k and arbitrary times t_i ∈ T, i ∈ {1, ..., k}:
F(x) = P(X(t_i) ≤ x_i, i = 1, ..., k)
with x = (x_1, ..., x_k).
We will say that two processes X_t and Y_t are equivalent if they have the same finite dimensional distributions.
From experiments or numerical simulations we can only obtain information about the (fdd) of a process.
Definition
A Gaussian process is a stochastic process for which E = R^d and all the finite dimensional distributions are Gaussian, i.e. every vector (X(t_1), ..., X(t_k)) has a Gaussian density of the form
(2π)^{−k/2} (det K_k)^{−1/2} exp( −(1/2) ⟨K_k^{−1}(x − μ_k), x − μ_k⟩ ),
for some vector μ_k and a symmetric positive definite matrix K_k.

A Gaussian process x(t) is characterized by its mean
m(t) := E x(t)
and the covariance function
C(t, s) = E( (x(t) − m(t)) ⊗ (x(s) − m(s)) ).
Thus, the first two moments of a Gaussian process are sufficient for a complete characterization of the process.
Let (Ω, F, P) be a probability space. Let X_t, t ∈ T (with T = R or Z) be a real-valued random process on this probability space with finite second moment, E|X_t|² < +∞ (i.e. X_t ∈ L²).

Definition
A stochastic process X_t ∈ L² is called second-order stationary or wide-sense stationary if the first moment EX_t is a constant and the second moment E(X_t X_s) depends only on the difference t − s:
EX_t = μ, E(X_t X_s) = C(t − s).
The constant μ is called the expectation of the process X_t. We will set μ = 0.
The function C(t) is called the covariance or the autocorrelation function of X_t.
Notice that C(t) = E(X_t X_0), whereas C(0) = E(X_t²), which is finite, by assumption.
Since we have assumed that X_t is a real valued process, we have that C(t) = C(−t), t ∈ R.
Continuity properties of the covariance function are equivalent to continuity properties of the paths of X_t.

Lemma
Assume that the covariance function C(t) of a second order stationary process is continuous at t = 0. Then it is continuous for all t ∈ R. Furthermore, the continuity of C(t) is equivalent to the continuity of the process X_t in the L²-sense.
Proof.
Fix t ∈ R. We calculate:
|C(t + h) − C(t)|² = |E(X_{t+h} X_0) − E(X_t X_0)|² = |E((X_{t+h} − X_t) X_0)|²
≤ E(X_0)² E(X_{t+h} − X_t)²
= C(0)( EX_{t+h}² + EX_t² − 2 EX_t X_{t+h} )
= 2C(0)( C(0) − C(h) ) → 0,
as h → 0.
Assume now that C(t) is continuous. From the above calculation we have
E|X_{t+h} − X_t|² = 2( C(0) − C(h) ), (4)
which converges to 0 as h → 0. Conversely, assume that X_t is L²-continuous. Then, from the above equation we get lim_{h→0} C(h) = C(0).
Notice that from (4) we immediately conclude that C(0) ≥ C(h), h ∈ R.
The Fourier transform of the covariance function of a second order stationary process always exists. This enables us to study second order stationary processes using tools from Fourier analysis.
To make the link between second order stationary processes and Fourier analysis we will use Bochner's theorem, which applies to all nonnegative definite functions.

Definition
A function f(x) : R → R is called nonnegative definite if
Σ_{i,j=1}^n f(t_i − t_j) c_i c̄_j ≥ 0 (5)
for all n ∈ N, t_1, ..., t_n ∈ R, c_1, ..., c_n ∈ C.
Lemma
The covariance function of a second order stationary process is a nonnegative definite function.

Proof.
We will use the notation X_t^c := Σ_{i=1}^n X_{t_i} c_i. We have:
Σ_{i,j=1}^n C(t_i − t_j) c_i c̄_j = Σ_{i,j=1}^n E(X_{t_i} X_{t_j}) c_i c̄_j
= E( Σ_{i=1}^n X_{t_i} c_i Σ_{j=1}^n X_{t_j} c̄_j ) = E( X_t^c X̄_t^c )
= E|X_t^c|² ≥ 0.
Theorem
(Bochner) There is a one-to-one correspondence between the set of continuous nonnegative definite functions and the set of finite measures on the Borel σ-algebra of R: if ρ is a finite measure, then
C(t) = ∫_R e^{ixt} ρ(dx) (6)
is nonnegative definite. Conversely, any continuous nonnegative definite function can be represented in the form (6).

Definition
Let X_t be a second order stationary process with covariance C(t) whose Fourier transform is the measure ρ(dx). The measure ρ(dx) is called the spectral measure of the process X_t.
In the following we will assume that the spectral measure is absolutely continuous with respect to the Lebesgue measure on R with density f(x), i.e. dρ(x) = f(x) dx.
The Fourier transform f(x) of the covariance function is called the spectral density of the process:
f(x) = (1/(2π)) ∫_{−∞}^{∞} e^{−itx} C(t) dt.
From (6) it follows that the covariance function of a mean zero, second order stationary process is given by the inverse Fourier transform of the spectral density:
C(t) = ∫_{−∞}^{∞} e^{itx} f(x) dx.
In most cases, the experimentally measured quantity is the spectral density (or power spectrum) of the stochastic process.
The correlation function of a second order stationary process enables us to associate a time scale to X_t, the correlation time τ_cor:
τ_cor = (1/C(0)) ∫_0^∞ C(τ) dτ = ∫_0^∞ E(X_τ X_0)/E(X_0²) dτ.
The slower the decay of the correlation function, the larger the correlation time is. We have to assume sufficiently fast decay of correlations so that the correlation time is finite.
Example
Consider the mean zero, second order stationary process with covariance function
R(t) = (D/α) e^{−α|t|}. (7)
The spectral density of this process is:
f(x) = (1/(2π)) (D/α) ∫_{−∞}^{+∞} e^{−ixt} e^{−α|t|} dt
= (1/(2π)) (D/α) [ ∫_{−∞}^{0} e^{−ixt} e^{αt} dt + ∫_{0}^{+∞} e^{−ixt} e^{−αt} dt ]
= (1/(2π)) (D/α) [ 1/(−ix + α) + 1/(ix + α) ]
= (D/π) · 1/(x² + α²).
Example (Continued)
This function is called the Cauchy or the Lorentz distribution.
The Gaussian stochastic process with covariance function (7) is called the stationary Ornstein-Uhlenbeck process.
The correlation time is (we have that C(0) = D/α)
τ_cor = ∫_0^∞ e^{−αt} dt = α^{−1}.
The OU process was introduced by Ornstein and Uhlenbeck in 1930 (G.E. Uhlenbeck, L.S. Ornstein, Phys. Rev. 36, 823 (1930)) as a model for the velocity of a Brownian particle. It is of interest to calculate the statistics of the position of the Brownian particle, i.e. of the integral
X(t) = ∫_0^t Y(s) ds, (8)
where Y(t) denotes the stationary OU process.

Lemma
Let Y(t) denote the stationary OU process with covariance function (7). Then the position process (8) is a mean zero Gaussian process with covariance function (here D = α = 1)
E(X(t)X(s)) = 2 min(t, s) + e^{−min(t,s)} + e^{−max(t,s)} − e^{−|t−s|} − 1.
Second order stationary processes are ergodic: time averages equal phase space (ensemble) averages. An example of an ergodic theorem for a stationary process is the following L² (mean-square) ergodic theorem.

Theorem
Let {X_t}_{t≥0} be a second order stationary process on a probability space (Ω, F, P) with mean μ and covariance R(t), and assume that R(t) ∈ L¹(0, +∞). Then
lim_{T→+∞} E | (1/T) ∫_0^T X(s) ds − μ |² = 0. (9)
Proof.
We have
E | (1/T) ∫_0^T X(s) ds − μ |² = (1/T²) ∫_0^T ∫_0^T R(t − s) dt ds
= (2/T²) ∫_0^T ∫_0^t R(t − s) ds dt
= (2/T²) ∫_0^T (T − u) R(u) du → 0,
using the dominated convergence theorem and the assumption R(·) ∈ L¹. In the above we used the fact that R is a symmetric function, together with the change of variables u = t − s, v = t and an integration over v.
Assume that μ = 0. From the above calculation we can conclude that, under the assumption that R(·) decays sufficiently fast at infinity, for t ≫ 1 we have that
E( ( ∫_0^t X(s) ds )² ) ≈ 2Dt,
where
D = ∫_0^∞ R(t) dt
is the diffusion coefficient. Thus, one expects that at sufficiently long times and under appropriate assumptions on the correlation function, the time integral of a stationary process will approximate a Brownian motion with diffusion coefficient D.
This kind of analysis was initiated in G.I. Taylor, "Diffusion by Continuous Movements", Proc. London Math. Soc. 1922; s2-20: 196-212.
Definition
A stochastic process is called (strictly) stationary if all finite dimensional distributions are invariant under time translation: for any integer k and times t_i ∈ T, the distribution of (X(t_1), X(t_2), ..., X(t_k)) is equal to that of (X(s + t_1), X(s + t_2), ..., X(s + t_k)) for any s such that s + t_i ∈ T for all i ∈ {1, ..., k}. In other words,
P(X_{t_1+t} ∈ A_1, X_{t_2+t} ∈ A_2, ..., X_{t_k+t} ∈ A_k) = P(X_{t_1} ∈ A_1, X_{t_2} ∈ A_2, ..., X_{t_k} ∈ A_k), for all t ∈ T.

Let X_t be a strictly stationary stochastic process with finite second moment (i.e. X_t ∈ L²). The definition of strict stationarity implies that EX_t = μ, a constant, and E((X_t − μ)(X_s − μ)) = C(t − s). Hence, a strictly stationary process with finite second moment is also stationary in the wide sense. The converse is not true.
Remarks
1. A sequence Y_0, Y_1, ... of independent, identically distributed random variables is a stationary process with
R(k) = σ² δ_{k0}, σ² = E(Y_k)².
2. Let Z be a single random variable with known distribution and set Z_j = Z, j = 0, 1, 2, .... Then the sequence Z_0, Z_1, Z_2, ... is a stationary sequence with R(k) = σ².
3. The first two moments of a Gaussian process are sufficient for a complete characterization of the process. A corollary of this is that a second order stationary Gaussian process is also a (strictly) stationary process.
The most important continuous time stochastic process is Brownian motion. Brownian motion is a mean zero, continuous (i.e. it has continuous sample paths: for a.e. ω ∈ Ω the function X_t is a continuous function of time) process with independent Gaussian increments.
A process X_t has independent increments if for every sequence t_0 < t_1 < ... < t_n the random variables
X_{t_1} − X_{t_0}, X_{t_2} − X_{t_1}, ..., X_{t_n} − X_{t_{n−1}}
are independent.
If, furthermore, for any t_1, t_2, s ∈ T and Borel set B ⊂ R
P(X_{t_2+s} − X_{t_1+s} ∈ B)
is independent of s, then the process X_t has stationary independent increments.
Definition
A one dimensional standard Brownian motion W(t) : R^+ → R is a real valued stochastic process with the following properties:
1. W(0) = 0;
2. W(t) is continuous;
3. W(t) has independent increments;
4. For every t > s ≥ 0, W(t) − W(s) has a Gaussian distribution with mean 0 and variance t − s. That is, the density of the random variable W(t) − W(s) is
g(x; t, s) = ( 2π(t − s) )^{−1/2} exp( −x²/(2(t − s)) ). (10)
Definition (Continued)
A d-dimensional standard Brownian motion W(t) : R^+ → R^d is a collection of d independent one dimensional Brownian motions:
W(t) = (W_1(t), ..., W_d(t)),
where W_i(t), i = 1, ..., d are independent one dimensional Brownian motions. The density of the Gaussian random vector W(t) − W(s) is thus
g(x; t, s) = ( 2π(t − s) )^{−d/2} exp( −|x|²/(2(t − s)) ).
Brownian motion is sometimes referred to as the Wiener process.
Figure: Brownian sample paths: five individual paths and the mean of 1000 paths.
It is possible to prove rigorously the existence of the Wiener process (Brownian motion):

Theorem
(Wiener) There exists an almost-surely continuous process W_t with independent increments such that W_0 = 0 and such that for each t ≥ 0 the random variable W_t is N(0, t). Furthermore, W_t is almost surely locally Hölder continuous with exponent α for any α ∈ (0, 1/2).

Notice that Brownian paths are not differentiable.
Brownian motion is a Gaussian process. For the d-dimensional Brownian motion, and for I the d × d dimensional identity, we have (see (1) and (2))
EW(t) = 0 for all t ≥ 0
and
E( (W(t) − W(s)) ⊗ (W(t) − W(s)) ) = (t − s)I. (11)
Moreover,
E( W(t) ⊗ W(s) ) = min(t, s)I. (12)
From the formula for the Gaussian density g(x, t − s), eqn. (10), we immediately conclude that W(t) − W(s) and W(t + u) − W(s + u) have the same pdf. Consequently, Brownian motion has stationary increments.
Notice, however, that Brownian motion itself is not a stationary process.
Since W(t) = W(t) − W(0), the pdf of W(t) is
g(x, t) = (1/√(2πt)) e^{−x²/(2t)}.
We can easily calculate all moments of the Brownian motion:
E(x^n(t)) = (1/√(2πt)) ∫_{−∞}^{+∞} x^n e^{−x²/(2t)} dx
= 1·3⋯(n − 1) t^{n/2} for n even, and 0 for n odd.
We can define the OU process through the Brownian motion via a time change.

Lemma
Let W(t) be a standard Brownian motion and consider the process
V(t) = e^{−t} W(e^{2t}).
Then V(t) is a Gaussian second order stationary process with mean 0 and covariance
K(s, t) = e^{−|t−s|}.
Definition
A (normalized) fractional Brownian motion W_t^H, t ≥ 0, with Hurst parameter H ∈ (0, 1) is a centered Gaussian process with continuous sample paths whose covariance is given by
E(W_t^H W_s^H) = (1/2)( s^{2H} + t^{2H} − |t − s|^{2H} ). (13)

Fractional Brownian motion has the following properties.
1. When H = 1/2, W_t^{1/2} becomes the standard Brownian motion.
2. W_0^H = 0, E(W_t^H) = 0, E(W_t^H)² = |t|^{2H}, t ≥ 0.
3. It has stationary increments, E(W_t^H − W_s^H)² = |t − s|^{2H}.
4. It has the following self similarity property:
(W_{αt}^H, t ≥ 0) = (α^H W_t^H, t ≥ 0), α > 0,
where the equivalence is in law.
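Since fractional Brownian motion is a centered Gaussian process, sample paths on a grid can be drawn directly from the covariance (13) via a Cholesky factorization. The sketch below is my own illustration (H = 0.7 and the grid are arbitrary; a small diagonal jitter is added for numerical stability).

```python
import numpy as np

rng = np.random.default_rng(6)
H, n = 0.7, 300
t = np.linspace(1e-6, 1.0, n)   # start slightly above 0 so the covariance matrix is nonsingular

# Covariance (13): E(W^H_t W^H_s) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2.
SS, TT = np.meshgrid(t, t)
C = 0.5 * (SS**(2 * H) + TT**(2 * H) - np.abs(TT - SS)**(2 * H))

L = np.linalg.cholesky(C + 1e-9 * np.eye(n))     # jitter for numerical stability
path = L @ rng.standard_normal(n)                # one fBm sample path on the grid
print(path[:5])

# Check E(W^H_t)^2 = t^{2H} at the last grid point using 2000 paths.
paths = L @ rng.standard_normal((n, 2000))
print(paths.var(axis=1)[-1], t[-1]**(2 * H))
```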
The Poisson Process
Another fundamental continuous time process is the Poisson process:

Definition
The Poisson process with intensity λ, denoted by N(t), is an integer-valued, continuous time, stochastic process with independent increments satisfying
P[ (N(t) − N(s)) = k ] = e^{−λ(t−s)} ( λ(t − s) )^k / k!, t > s ≥ 0, k ∈ N.

Both Brownian motion and the Poisson process are homogeneous (or time-homogeneous): the increments between successive times s and t depend only on t − s.
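One standard way to simulate a path (a sketch of mine, using the well-known fact, not derived in these notes, that the waiting times between jumps of a Poisson process with intensity λ are independent Exp(λ) random variables):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, T = 2.0, 10.0

# Jump times = cumulative sums of Exp(lam) waiting times, kept while they are below T.
waits = rng.exponential(scale=1.0 / lam, size=int(10 * lam * T))
jump_times = np.cumsum(waits)
jump_times = jump_times[jump_times <= T]

def N(t, jumps=jump_times):
    """Number of jumps up to time t."""
    return int(np.searchsorted(jumps, t, side="right"))

print(N(1.0), N(5.0), N(10.0))      # compare with E N(t) = lam * t

# Increment distribution check: N(T) over many paths should be Poisson(lam * T),
# so mean and variance are both approximately lam * T.
counts = [np.sum(np.cumsum(rng.exponential(1.0 / lam, size=200)) <= T) for _ in range(5000)]
print(np.mean(counts), np.var(counts), lam * T)
```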
A very useful result is that we can expand every centered (EX_t = 0) stochastic process with continuous covariance (and hence, every L²-continuous centered stochastic process) into a random Fourier series.
Let (Ω, F, P) be a probability space and {X_t}_{t∈T} a centered process which is mean square continuous. Then, the Karhunen-Loève theorem states that X_t can be expanded in the form
X_t = Σ_{n=1}^∞ ξ_n φ_n(t), (14)
where the ξ_n are an orthogonal sequence of random variables with E|ξ_k|² = λ_k, and {λ_k, φ_k}_{k=1}^∞ are the eigenvalues and eigenfunctions of the integral operator whose kernel is the covariance of the process X_t. The convergence is in L²(P) for every t ∈ T.
If X_t is a Gaussian process then the ξ_k are independent Gaussian random variables.
Set T = [0, 1]. Let K : L²(T) → L²(T) be the integral operator defined by
Kφ(t) = ∫_0^1 K(s, t) φ(s) ds.
The kernel K(s, t) of this integral operator is a continuous (in both s and t), symmetric, nonnegative function. Hence, the corresponding integral operator has eigenvalues and eigenfunctions
Kφ_n = λ_n φ_n, λ_n ≥ 0, (φ_n, φ_m)_{L²} = δ_{nm},
such that the covariance operator can be expanded in a uniformly convergent series
B(t, s) = Σ_{n=1}^∞ λ_n φ_n(t) φ_n(s).
The random variables ξ_n are defined as
ξ_n = ∫_0^1 X(t) φ_n(t) dt.
The orthogonality of the eigenfunctions of the covariance operator implies that
E(ξ_n ξ_m) = λ_n δ_{nm}.
Thus the random variables are orthogonal. When X(t) is Gaussian, then the ξ_k are Gaussian random variables. Furthermore, since they are also orthogonal, they are independent Gaussian random variables.
Example
The Karhunen-Loève Expansion for Brownian Motion. We set T = [0, 1]. The covariance function of Brownian motion is C(t, s) = min(t, s). The eigenvalue problem Kψ_n = λ_n ψ_n becomes
∫_0^1 min(t, s) ψ_n(s) ds = λ_n ψ_n(t).
Or,
∫_0^t s ψ_n(s) ds + t ∫_t^1 ψ_n(s) ds = λ_n ψ_n(t).
We differentiate this equation twice:
∫_t^1 ψ_n(s) ds = λ_n ψ_n'(t) and −ψ_n(t) = λ_n ψ_n''(t),
where primes denote differentiation with respect to t.
Example
From the above equations we immediately see that the right boundary conditions are ψ(0) = ψ'(1) = 0. The eigenvalues and eigenfunctions are
ψ_n(t) = √2 sin( (1/2)(2n + 1)πt ), λ_n = ( 2/((2n + 1)π) )², n = 0, 1, 2, ....
Thus, the Karhunen-Loève expansion of Brownian motion on [0, 1] is
W_t = √2 Σ_{n=0}^∞ ξ_n ( 2/((2n + 1)π) ) sin( (1/2)(2n + 1)πt ). (15)
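To see the expansion in action, the truncated series below (my own sketch: the ξ_n are taken i.i.d. N(0, 1) with the factors √λ_n written explicitly, and the sum is truncated at 1000 terms) reproduces the Brownian variance E W_t² = t quite accurately:

```python
import numpy as np

rng = np.random.default_rng(8)
n_terms, n_paths = 1000, 5000
t = np.linspace(0.0, 1.0, 101)

n = np.arange(n_terms)
freq = 0.5 * (2 * n + 1) * np.pi                  # (2n+1)*pi/2
phi = np.sqrt(2.0) * np.sin(np.outer(t, freq))    # eigenfunctions on the grid
sqrt_lam = 2.0 / ((2 * n + 1) * np.pi)            # sqrt(lambda_n)

xi = rng.standard_normal((n_terms, n_paths))
W = phi @ (sqrt_lam[:, None] * xi)                # truncated KL sum, one column per path

# The sample variance at each t should be close to t (the Brownian variance).
print(W.var(axis=1)[[25, 50, 100]], t[[25, 50, 100]])
```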
The Path Space
Let (Ω, F, μ) be a probability space, (E, ρ) a metric space and let T = [0, ∞). Let X_t be a stochastic process from (Ω, F, μ) to (E, ρ) with continuous sample paths.
The above means that for every ω ∈ Ω we have that X_t(ω) ∈ C_E := C([0, ∞); E).
The space of continuous functions C_E is called the path space of the stochastic process.
We can put a metric on C_E as follows:
ρ_E(X¹, X²) := Σ_{n=1}^∞ (1/2^n) max_{0≤t≤n} min( ρ(X_t¹, X_t²), 1 ).
We can then define the Borel sets on C_E, using the topology induced by this metric, and X_t can be thought of as a random variable on (Ω, F, μ) with state space (C_E, B(C_E)).
The probability measure PX_t^{−1} on (C_E, B(C_E)) is called the law of X_t.
The law of a stochastic process is a probability measure on its path space.

Example
The space of continuous functions C_E is the path space of Brownian motion (the Wiener process). The law of Brownian motion, that is the measure that it induces on C([0, ∞), R^d), is known as the Wiener measure.
MARKOV STOCHASTIC PROCESSES
Let (Ω, F) be a measurable space and T an ordered set. Let X = X_t(ω) be a stochastic process from the sample space (Ω, F) to the state space (E, G). It is a function of two variables, t ∈ T and ω ∈ Ω.
For a fixed ω ∈ Ω the function X_t(ω), t ∈ T, is the sample path of the process X associated with ω.
Let K be a collection of subsets of Ω. The smallest σ-algebra on Ω which contains K is denoted by σ(K) and is called the σ-algebra generated by K.
Let X_t : Ω → E, t ∈ T. The smallest σ-algebra σ(X_t, t ∈ T), such that the family of mappings {X_t, t ∈ T} is a stochastic process with sample space (Ω, σ(X_t, t ∈ T)) and state space (E, G), is called the σ-algebra generated by {X_t, t ∈ T}.
A filtration on (Ω, F) is a nondecreasing family {F_t, t ∈ T} of sub-σ-algebras of F: F_s ⊆ F_t ⊆ F for s ≤ t.
We set F_∞ = σ(∪_{t∈T} F_t). The filtration generated by X_t, where X_t is a stochastic process, is
F_t^X := σ(X_s; s ≤ t).
We say that a stochastic process {X_t; t ∈ T} is adapted to the filtration {F_t} := {F_t, t ∈ T} if for all t ∈ T, X_t is an F_t-measurable random variable.

Definition
Let X_t be a stochastic process defined on a probability space (Ω, F, μ) with values in E and let F_t^X be the filtration generated by {X_t}. Then {X_t} is a Markov process if
P(X_t ∈ Γ | F_s^X) = P(X_t ∈ Γ | X_s) (16)
for all t, s ∈ T with t ≥ s, and Γ ∈ B(E).
The filtration F_t^X is generated by events of the form
{ω | X_{s_1} ∈ B_1, X_{s_2} ∈ B_2, ..., X_{s_n} ∈ B_n}, with 0 ≤ s_1 < s_2 < ... < s_n ≤ s and B_i ∈ B(E). The definition of a Markov process is thus equivalent to the hierarchy of equations
P(X_t ∈ Γ | X_{t_1}, X_{t_2}, ..., X_{t_n}) = P(X_t ∈ Γ | X_{t_n}) a.s.
for n ≥ 1, 0 ≤ t_1 < t_2 < ... < t_n ≤ t and Γ ∈ B(E).
Roughly speaking, the statistics of X_t for t ≥ s are completely determined once X_s is known; information about X_t for t < s is superfluous. In other words: a Markov process has no memory. More precisely: when a Markov process is conditioned on the present state, then there is no memory of the past. The past and future of a Markov process are statistically independent when the present is known.
A typical example of a Markov process is the random walk: in order to find the position x(t + 1) of the random walker at time t + 1 it is enough to know its position x(t) at time t: how it got to x(t) is irrelevant.
A non-Markovian process X_t can be described through a Markovian one Y_t by enlarging the state space: the additional variables that we introduce account for the memory in X_t. This "Markovianization" trick is very useful since there are many more tools for analyzing Markovian processes.
With a Markov process X_t we can associate a function P : T × T × E × B(E) → R^+ defined through the relation
P(X_t ∈ Γ | F_s^X) = P(s, t, X_s, Γ),
for all t, s ∈ T with t ≥ s and all Γ ∈ B(E).
Assume that X_s = x. Since P(X_t ∈ Γ | F_s^X) = P(X_t ∈ Γ | X_s), we can write
P(Γ, t | x, s) = P(X_t ∈ Γ | X_s = x).
The transition function P(Γ, t | x, s) is (for fixed t, x, s) a probability measure on E with P(E, t | x, s) = 1; it is B(E)-measurable in x (for fixed t, s, Γ) and satisfies the Chapman-Kolmogorov equation
P(Γ, t | x, s) = ∫_E P(Γ, t | y, u) P(dy, u | x, s) (17)
for all x ∈ E, Γ ∈ B(E) and s, u, t ∈ T with s ≤ u ≤ t.
The derivation of the Chapman-Kolmogorov equation is based on the assumption of Markovianity and on properties of the conditional probability:
1. Let (Ω, F, μ) be a probability space, X a random variable from (Ω, F, μ) to (E, G) and let F_1 ⊆ F_2 ⊆ F. Then
E(E(X|F_2)|F_1) = E(E(X|F_1)|F_2) = E(X|F_1). (18)
2. Given G ⊆ F we define the function P_X(B|G) = P(X ∈ B|G) for B ∈ B(E). Assume that f is such that E(f(X)) < ∞. Then
E(f(X)|G) = ∫_R f(x) P_X(dx|G). (19)
Now we use the Markov property, together with equations (18) and (19) and the fact that s < u implies F_s^X ⊆ F_u^X, to calculate:
P(Γ, t|x, s) := P(X_t ∈ Γ|X_s = x) = P(X_t ∈ Γ|F_s^X)
= E(I_Γ(X_t)|F_s^X) = E(E(I_Γ(X_t)|F_s^X)|F_u^X)
= E(E(I_Γ(X_t)|F_u^X)|F_s^X) = E(P(X_t ∈ Γ|X_u)|F_s^X)
= E(P(X_t ∈ Γ|X_u = y)|X_s = x)
= ∫_R P(Γ, t|X_u = y) P(dy, u|X_s = x)
=: ∫_R P(Γ, t|y, u) P(dy, u|x, s).
I_Γ(·) denotes the indicator function of the set Γ. We have also set E = R.
The CK equation is an integral equation and is the fundamental equation in the theory of Markov processes. Under additional assumptions we will derive from it the Fokker-Planck PDE, which is the fundamental equation in the theory of diffusion processes, and will be the main object of study in this course.
A Markov process is homogeneous if
P(t, Γ|X_s = x) := P(s, t, x, Γ) = P(0, t − s, x, Γ).
We set P(0, t, ·, ·) = P(t, ·, ·). The Chapman-Kolmogorov (CK) equation becomes
P(t + s, x, Γ) = ∫_E P(s, x, dz) P(t, z, Γ). (20)
Let X_t be a homogeneous Markov process and assume that the initial distribution of X_t is given by the probability measure ν(Γ) = P(X_0 ∈ Γ) (for deterministic initial conditions X_0 = x we have that ν(Γ) = I_Γ(x)).
The transition function P(x, t, Γ) and the initial distribution ν determine the finite dimensional distributions of X by
P(X_0 ∈ Γ_0, X(t_1) ∈ Γ_1, ..., X_{t_n} ∈ Γ_n)
= ∫_{Γ_0} ∫_{Γ_1} ... ∫_{Γ_{n−1}} P(t_n − t_{n−1}, y_{n−1}, Γ_n) P(t_{n−1} − t_{n−2}, y_{n−2}, dy_{n−1}) ··· P(t_1, y_0, dy_1) ν(dy_0). (21)

Theorem
(Ethier and Kurtz 1986, Sec. 4.1) Let P(t, x, Γ) satisfy (20) and assume that (E, ρ) is a complete separable metric space. Then there exists a Markov process X in E whose finite-dimensional distributions are uniquely determined by (21).
Let X_t be a homogeneous Markov process with initial distribution ν(Γ) = P(X_0 ∈ Γ) and transition function P(x, t, Γ). We can calculate the probability of finding X_t in a set Γ at time t:
P(X_t ∈ Γ) = ∫_E P(x, t, Γ) ν(dx).
Thus, the initial distribution and the transition function are sufficient to characterize a homogeneous Markov process. Notice that they do not provide us with any information about the actual paths of the Markov process.
The transition probability P(Γ, t|x, s) is a probability measure. Assume that it has a density for all t > s:
P(Γ, t|x, s) = ∫_Γ p(y, t|x, s) dy.
Clearly, for t = s we have P(Γ, s|x, s) = I_Γ(x).
The Chapman-Kolmogorov equation becomes:
∫_Γ p(y, t|x, s) dy = ∫_R ∫_Γ p(y, t|z, u) p(z, u|x, s) dz dy,
and, since Γ ∈ B(R) is arbitrary, we obtain the equation
p(y, t|x, s) = ∫_R p(y, t|z, u) p(z, u|x, s) dz. (22)
The transition probability density is a function of 4 arguments: the initial position and time x, s and the final position and time y, t.
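As an illustration (my own sketch), equation (22) can be checked numerically for the Brownian transition density p(y, t|x, s) = (2π(t − s))^{−1/2} exp(−(y − x)²/(2(t − s))) by integrating over the intermediate state z:

```python
import numpy as np

def p(y, t, x, s):
    """Brownian transition density (heat kernel)."""
    return np.exp(-(y - x) ** 2 / (2.0 * (t - s))) / np.sqrt(2.0 * np.pi * (t - s))

x, s, u, t, y = 0.0, 0.0, 0.4, 1.0, 0.7

# Right-hand side of (22): integrate p(y,t|z,u) p(z,u|x,s) over the intermediate state z.
z, dz = np.linspace(-15.0, 15.0, 20001, retstep=True)
rhs = np.sum(p(y, t, z, u) * p(z, u, x, s)) * dz

print(p(y, t, x, s), rhs)   # the two numbers agree to high accuracy
```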
In words, the CK equation tells us that, for a Markov process, the transition from x, s to y, t can be done in two steps: first the system moves from x to z at some intermediate time u. Then it moves from z to y at time t. In order to calculate the probability for the transition from (x, s) to (y, t) we need to sum (integrate) the transitions from all possible intermediary states z.
The above description suggests that a Markov process can be described through a semigroup of operators, i.e. a one-parameter family of linear operators with the properties
P_0 = I, P_{t+s} = P_t ∘ P_s for all t, s ≥ 0.
A semigroup of operators is characterized through its generator.
Indeed, let P(t, x, dy) be the transition function of a homogeneous Markov process. It satisfies the CK equation (20):
P(t + s, x, Γ) = ∫_E P(s, x, dz) P(t, z, Γ).
Let X := C_b(E) and define the operator
(P_t f)(x) := E(f(X_t)|X_0 = x) = ∫_E f(y) P(t, x, dy).
This is a linear operator with
(P_0 f)(x) = E(f(X_0)|X_0 = x) = f(x), i.e. P_0 = I.
Furthermore:
(P_{t+s} f)(x) = ∫ f(y) P(t + s, x, dy)
= ∫ ∫ f(y) P(s, z, dy) P(t, x, dz)
= ∫ ( ∫ f(y) P(s, z, dy) ) P(t, x, dz)
= ∫ (P_s f)(z) P(t, x, dz)
= (P_t P_s f)(x).
Consequently:
P_{t+s} = P_t ∘ P_s.
Let (E, ρ) be a metric space and let X_t be an E-valued homogeneous Markov process. Define the one parameter family of operators P_t through
P_t f(x) = ∫ f(y) P(t, x, dy) = E[f(X_t)|X_0 = x]
for all f(x) ∈ C_b(E) (continuous bounded functions on E).
Assume for simplicity that P_t : C_b(E) → C_b(E). Then the one-parameter family of operators P_t forms a semigroup of operators on C_b(E).
We define by D(L) the set of all f ∈ C_b(E) such that the strong limit
Lf = lim_{t→0} (P_t f − f)/t
exists.
Definition
The operator L : D(L) → C_b(E) is called the infinitesimal generator of the operator semigroup P_t.

Definition
The operator L : C_b(E) → C_b(E) defined above is called the generator of the Markov process X_t.

The study of operator semigroups started in the late 40's independently by Hille and Yosida. Semigroup theory was developed in the 50's and 60's by Feller, Dynkin and others, mostly in connection to the theory of Markov processes.
Necessary and sufficient conditions for an operator L to be the generator of a (contraction) semigroup are given by the Hille-Yosida theorem (e.g. Evans, Partial Differential Equations, AMS 1998, Ch. 7).
The semigroup property and the definition of the generator of a semigroup imply that, formally at least, we can write:
P_t = exp(Lt).
Consider the function u(x, t) := (P_t f)(x). We calculate its time derivative:
∂u/∂t = d/dt (P_t f) = d/dt ( e^{Lt} f ) = L ( e^{Lt} f ) = L P_t f = Lu.
Furthermore, u(x, 0) = P_0 f(x) = f(x). Consequently, u(x, t) satisfies the initial value problem
∂u/∂t = Lu, u(x, 0) = f(x). (23)
When the semigroup P_t is the transition semigroup of a Markov process X_t, then equation (23) is called the backward Kolmogorov equation. It governs the evolution of an observable
u(x, t) = E(f(X_t)|X_0 = x).
Thus, given the generator of a Markov process L, we can calculate all the statistics of our process by solving the backward Kolmogorov equation.
In the case where the Markov process is the solution of a stochastic differential equation, then the generator is a second order elliptic operator and the backward Kolmogorov equation becomes an initial value problem for a parabolic PDE.
The space C_b(E) is natural in a probabilistic context, but other Banach spaces often arise in applications; in particular when there is a measure μ on E, the spaces L^p(E; μ) sometimes arise. We will quite often use the space L²(E; μ), where μ is the invariant measure of our Markov process.
The generator is frequently taken as the starting point for the definition of a homogeneous Markov process.
Conversely, let P_t be a contraction semigroup (let X be a Banach space and T : X → X a bounded operator; then T is a contraction provided that ‖Tf‖_X ≤ ‖f‖_X for all f ∈ X), with D(P_t) ⊆ C_b(E), closed. Then, under mild technical hypotheses, there is an E-valued homogeneous Markov process {X_t} associated with P_t defined through
E[f(X(t))|F_s^X] = P_{t−s} f(X(s))
for all t, s ∈ T with t ≥ s and f ∈ D(P_t).
Example
The one dimensional Brownian motion is a homogeneous Markov process. The transition function is the Gaussian defined in the example in Lecture 2:
P(t, x, dy) = γ_{t,x}(y) dy, γ_{t,x}(y) = (1/√(2πt)) exp( −|x − y|²/(2t) ).
The semigroup associated to the standard Brownian motion is the heat semigroup P_t = e^{(t/2) d²/dx²}. The generator of this Markov process is (1/2) d²/dx².
Notice that the transition probability density γ_{t,x} of the one dimensional Brownian motion is the fundamental solution (Green's function) of the heat (diffusion) PDE
∂ρ/∂t = (1/2) ∂²ρ/∂x².
Example
The Ornstein-Uhlenbeck process V_t = e^{−t} W(e^{2t}) is a homogeneous Markov process. The transition probability density is the Gaussian
p(y, t|x, s) := p(V_t = y|V_s = x)
= (1/√(2π(1 − e^{−2(t−s)}))) exp( − |y − x e^{−(t−s)}|² / (2(1 − e^{−2(t−s)})) ).
The semigroup associated to the Ornstein-Uhlenbeck process is P_t = e^{t(−x d/dx + d²/dx²)}. The generator of this Markov process is
L = −x d/dx + d²/dx².
(The diffusion coefficient here is 1, consistent with the transition density above and with the stationary variance K(t, t) = 1 obtained earlier for V_t.)
Notice that the transition probability density of the 1d OU process is the fundamental solution (Green's function) of the Fokker-Planck PDE
∂p/∂t = ∂(xp)/∂x + ∂²p/∂x².
The semigroup P_t acts on bounded measurable functions.
We can also define the adjoint semigroup P_t^* which acts on probability measures:
P_t^* μ(Γ) = ∫_R P(X_t ∈ Γ|X_0 = x) dμ(x) = ∫_R p(t, x, Γ) dμ(x).
The image of a probability measure μ under P_t^* is again a probability measure. The operators P_t and P_t^* are adjoint in the L²-sense:
∫_R P_t f(x) dμ(x) = ∫_R f(x) d(P_t^* μ)(x). (24)
We can, formally at least, write
P_t^* = exp(L^* t),
where L^* is the L²-adjoint of the generator of the process:
∫ (Lf) h dx = ∫ f (L^* h) dx.
Let μ_t := P_t^* μ. This is the law of the Markov process and μ is the initial distribution. An argument similar to the one used in the derivation of the backward Kolmogorov equation (23) enables us to obtain an equation for the evolution of μ_t:
∂μ_t/∂t = L^* μ_t, μ_0 = μ.
Assuming that μ_t = ρ(y, t) dy and μ = ρ_0(y) dy, this equation becomes:
∂ρ/∂t = L^* ρ, ρ(y, 0) = ρ_0(y). (25)
This is the forward Kolmogorov or Fokker-Planck equation. When the initial conditions are deterministic, X_0 = x, the initial condition becomes ρ_0 = δ(y − x).
Given the initial distribution and the generator of the Markov process X_t, we can calculate the transition probability density by solving the forward Kolmogorov equation. We can then calculate all statistical quantities of this process through the formula
E(f(X_t)|X_0 = x) = ∫ f(y) ρ(t, y; x) dy.
We will derive rigorously the backward and forward Kolmogorov equations for Markov processes that are defined as solutions of stochastic differential equations later on.
We can study the evolution of a Markov process in two different ways:
Either through the evolution of observables (Heisenberg/Koopman)
∂(P_t f)/∂t = L(P_t f),
or through the evolution of states (Schrödinger/Frobenius-Perron)
∂(P_t^* μ)/∂t = L^*(P_t^* μ).
We can also study Markov processes at the level of trajectories. We will do this after we define the concept of a stochastic differential equation.
A very important concept in the study of limit theorems for stochastic processes is that of ergodicity.
This concept, in the context of Markov processes, provides us with information on the long-time behavior of a Markov semigroup.

Definition
A Markov process is called ergodic if the equation
P_t g = g, g ∈ C_b(E), for all t ≥ 0
has only constant solutions.

Roughly speaking, ergodicity corresponds to the case where the semigroup P_t is such that P_t − I has only constants in its null space, or, equivalently, to the case where the generator L has only constants in its null space. This follows from the definition of the generator of a Markov process.
Under some additional compactness assumptions, an ergodic Markov process has an invariant measure μ with the property that, in the case T = R^+,
lim_{t→+∞} (1/t) ∫_0^t g(X_s) ds = E_μ g(x),
where E_μ denotes the expectation with respect to μ.
This is a physicist's definition of an ergodic process: time averages equal phase space averages.
Using the adjoint semigroup we can define an invariant measure as the solution of the equation
P_t^* μ = μ.
If this measure is unique, then the Markov process is ergodic.
Using this, we can obtain an equation for the invariant measure in terms of the adjoint of the generator L^*, which is the generator of the semigroup P_t^*. Indeed, from the definition of the generator of a semigroup and the definition of an invariant measure, we conclude that a measure μ is invariant if and only if
L^* μ = 0
in some appropriate generalized sense ((L^* μ, f) = 0 for every bounded measurable function).
Assume that μ(dx) = ρ(x) dx. Then the invariant density satisfies the stationary Fokker-Planck equation
L^* ρ = 0.
The invariant measure (distribution) governs the long-time dynamics of the Markov process.
If X_0 is distributed according to μ, then so is X_t for all t > 0. The resulting stochastic process, with X_0 distributed in this way, is stationary.
In this case the transition probability density (the solution of the Fokker-Planck equation) is independent of time: ρ(x, t) = ρ(x). Consequently, the statistics of the Markov process is independent of time.
Example
The one dimensional Brownian motion is not an ergodic process: the null space of the generator L = (1/2) d²/dx² on R is not one dimensional!

Example
Consider a one-dimensional Brownian motion on [0, 1], with periodic boundary conditions. The generator of this Markov process L is the differential operator L = (1/2) d²/dx², equipped with periodic boundary conditions on [0, 1]. This operator is self-adjoint. The null space of both L and L^* comprises constant functions on [0, 1]. Both the backward Kolmogorov and the Fokker-Planck equation reduce to the heat equation
∂ρ/∂t = (1/2) ∂²ρ/∂x²
with periodic boundary conditions in [0, 1]. Fourier analysis shows that the solution converges to a constant at an exponential rate.
Example
The one dimensional Ornstein-Uhlenbeck (OU) process is a Markov process with generator
L = −αx d/dx + D d²/dx².
The null space of L comprises constants in x. Hence, it is an ergodic Markov process. In order to calculate the invariant measure we need to solve the stationary Fokker-Planck equation:
L^* ρ = 0, ρ ≥ 0, ‖ρ‖_{L¹(R)} = 1. (26)
Example (Continued)
Let us calculate the L²-adjoint of L. Assuming that f, h decay sufficiently fast at infinity, we have:
∫_R (Lf) h dx = ∫_R [ (−αx ∂_x f) h + (D ∂_x² f) h ] dx
= ∫_R [ f ∂_x(αx h) + f (D ∂_x² h) ] dx =: ∫_R f L^* h dx,
where
L^* h := d/dx (αxh) + D d²h/dx².
We can calculate the invariant distribution by solving equation (26).
The invariant measure of this process is the Gaussian measure
μ(dx) = √( α/(2πD) ) exp( − (α/(2D)) x² ) dx.
If the initial condition of the OU process is distributed according to the invariant measure, then the OU process is a stationary Gaussian process.
Let X_t be the 1d OU process and let X_0 ∼ N(0, D/α). Then X_t is a mean zero, Gaussian second order stationary process on [0, ∞) with correlation function
R(t) = (D/α) e^{−α|t|}
and spectral density
f(x) = (D/π) · 1/(x² + α²).
Furthermore, the OU process is the only real-valued mean zero Gaussian second-order stationary Markov process defined on R.
DIFFUSION PROCESSES
Pavliotis (IC) StochProc January 16, 2011 114 / 367
A Markov process consists of three parts: a drift (deterministic), a
random process and a jump process.
A diffusion process is a Markov process that has continuous
sample paths (trajectories). Thus, it is a Markov process with no
jumps.
A diffusion process can be defined by specifying its first two
moments:
Pavliotis (IC) StochProc January 16, 2011 115 / 367
Definition
A Markov process X_t with transition probability P(\Gamma, t | x, s) is called a
diffusion process if the following conditions are satisfied.
1. (Continuity). For every x and every \varepsilon > 0
\int_{|x-y|>\varepsilon} P(dy, t | x, s) = o(t - s) (27)
uniformly over s < t.
2. (Definition of drift coefficient). There exists a function a(x, s) such
that for every x and every \varepsilon > 0
\int_{|y-x|\leq\varepsilon} (y - x)\, P(dy, t | x, s) = a(x, s)(t - s) + o(t - s) (28)
uniformly over s < t.
Pavliotis (IC) StochProc January 16, 2011 116 / 367
Definition (Continued)
3. (Definition of diffusion coefficient). There exists a function b(x, s)
such that for every x and every \varepsilon > 0
\int_{|y-x|\leq\varepsilon} (y - x)^2\, P(dy, t | x, s) = b(x, s)(t - s) + o(t - s) (29)
uniformly over s < t.
Pavliotis (IC) StochProc January 16, 2011 117 / 367
In Definition 39 we had to truncate the domain of integration since we
didn't know whether the first and second moments exist. If we assume
that there exists a \delta > 0 such that
\lim_{t \to s} \frac{1}{t-s} \int_{R^d} |y - x|^{2+\delta}\, P(s, x, t, dy) = 0, (30)
then we can extend the integration over the whole R^d and use
expectations in the definition of the drift and the diffusion coefficient.
Indeed, let k = 0, 1, 2 and notice that
\int_{|y-x|>\varepsilon} |y - x|^k\, P(s, x, t, dy)
= \int_{|y-x|>\varepsilon} |y - x|^{2+\delta} |y - x|^{k-(2+\delta)}\, P(s, x, t, dy)
\leq \varepsilon^{k-2-\delta} \int_{|y-x|>\varepsilon} |y - x|^{2+\delta}\, P(s, x, t, dy)
\leq \varepsilon^{k-2-\delta} \int_{R^d} |y - x|^{2+\delta}\, P(s, x, t, dy).
Pavliotis (IC) StochProc January 16, 2011 118 / 367
Using this estimate together with (30) we conclude that:
\lim_{t \to s} \frac{1}{t-s} \int_{|y-x|>\varepsilon} |y - x|^k\, P(s, x, t, dy) = 0, \quad k = 0, 1, 2.
This implies that assumption (30) is sufficient for the sample paths to
be continuous (k = 0) and for the replacement of the truncated
integrals in (28) and (29) by integrals over R^d (k = 1 and k = 2,
respectively). The definitions of the drift and diffusion coefficients
become:
\lim_{t \to s} \mathbb{E}\Big[ \frac{X_t - X_s}{t - s} \,\Big|\, X_s = x \Big] = a(x, s) (31)
and
\lim_{t \to s} \mathbb{E}\Big[ \frac{(X_t - X_s) \otimes (X_t - X_s)}{t - s} \,\Big|\, X_s = x \Big] = b(x, s). (32)
Pavliotis (IC) StochProc January 16, 2011 119 / 367
Notice also that the continuity condition can be written in the form
P(|X_t - X_s| \geq \varepsilon \,|\, X_s = x) = o(t - s).
Now it becomes clear that this condition implies that the probability of
large changes in X_t over short time intervals is small. Notice, on the
other hand, that the above condition implies that the sample paths of a
diffusion process are not differentiable: if they were, then the right
hand side of the above equation would have to be 0 when t - s \ll 1.
The sample paths of a diffusion process have the regularity of
Brownian paths. A Markovian process cannot be differentiable: we
can define the derivative of a sample path only for processes for
which the past and future are not statistically independent when
conditioned on the present.
Pavliotis (IC) StochProc January 16, 2011 120 / 367
Let us denote the expectation conditioned on X_s = x by \mathbb{E}^{s,x}. Notice
that the definitions of the drift and diffusion coefficients (31) and (32)
can be written in the form
\mathbb{E}^{s,x}(X_t - X_s) = a(x, s)(t - s) + o(t - s)
and
\mathbb{E}^{s,x}\big[ (X_t - X_s) \otimes (X_t - X_s) \big] = b(x, s)(t - s) + o(t - s).
Consequently, the drift coefficient defines the mean velocity vector
for the stochastic process X_t, whereas the diffusion coefficient (tensor)
is a measure of the local magnitude of fluctuations of X_t - X_s about the
mean value. Hence, we can write locally:
X_t - X_s \approx a(s, X_s)(t - s) + \sigma(s, X_s)\, \xi_t,
where b = \sigma \sigma^T and \xi_t is a mean zero Gaussian process with
\mathbb{E}^{s,x}(\xi_t \otimes \xi_s) = (t - s) I.
Pavliotis (IC) StochProc January 16, 2011 121 / 367
Since we have that
W_t - W_s \sim N(0, (t - s)I),
we conclude that we can write locally:
\Delta X_t \approx a(s, X_s)\Delta t + \sigma(s, X_s)\Delta W_t.
Or, replacing the differences by differentials:
dX_t = a(t, X_t)\,dt + \sigma(t, X_t)\,dW_t.
Hence, the sample paths of a diffusion process are governed by a
stochastic differential equation (SDE).
Pavliotis (IC) StochProc January 16, 2011 122 / 367
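The local relation \Delta X \approx a\,\Delta t + \sigma\,\Delta W is exactly what the Euler-Maruyama scheme iterates. A minimal sketch (the drift, diffusion and step size below are illustrative assumptions, not part of the notes):

```python
import numpy as np

def euler_maruyama(a, sigma, x0, dt, n_steps, rng):
    """Integrate dX_t = a(t, X_t) dt + sigma(t, X_t) dW_t (scalar case)."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = 0.0
    for k in range(n_steps):
        dW = np.sqrt(dt) * rng.normal()           # Brownian increment
        x[k + 1] = x[k] + a(t, x[k]) * dt + sigma(t, x[k]) * dW
        t += dt
    return x

# Example: an OU process dX = -X dt + dW (illustrative parameters)
rng = np.random.default_rng(1)
path = euler_maruyama(lambda t, x: -x, lambda t, x: 1.0,
                      x0=1.0, dt=1e-3, n_steps=10_000, rng=rng)
print(path[-1])
```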
Theorem
(Kolmogorov) Let f(x) \in C_b(R) and let
u(x, s) := \mathbb{E}(f(X_t) | X_s = x) = \int f(y)\, P(dy, t | x, s) \in C_b^2(R).
Assume furthermore that the functions a(x, s), b(x, s) are continuous
in both x and s. Then u(x, s) \in C^{2,1}(R \times R^+) and it solves the final
value problem
-\frac{\partial u}{\partial s} = a(x, s)\frac{\partial u}{\partial x} + \frac{1}{2} b(x, s)\frac{\partial^2 u}{\partial x^2}, \quad \lim_{s \to t} u(s, x) = f(x). (33)
Pavliotis (IC) StochProc January 16, 2011 123 / 367
Proof:
First we notice that the continuity assumption (27), together with the
fact that the function f(x) is bounded, imply that
u(x, s) = \int_R f(y)\, P(dy, t|x, s)
= \int_{|y-x|\leq\varepsilon} f(y)\, P(dy, t|x, s) + \int_{|y-x|>\varepsilon} f(y)\, P(dy, t|x, s)
\leq \int_{|y-x|\leq\varepsilon} f(y)\, P(dy, t|x, s) + \|f\|_{L^\infty} \int_{|y-x|>\varepsilon} P(dy, t|x, s)
= \int_{|y-x|\leq\varepsilon} f(y)\, P(dy, t|x, s) + o(t - s).
Pavliotis (IC) StochProc January 16, 2011 124 / 367
We add and subtract the final condition f(x) and use the previous
calculation to obtain:
u(x, s) = \int_R f(y)\,P(dy, t|x, s) = f(x) + \int_R (f(y) - f(x))\,P(dy, t|x, s)
= f(x) + \int_{|y-x|\leq\varepsilon} (f(y) - f(x))\,P(dy, t|x, s) + \int_{|y-x|>\varepsilon} (f(y) - f(x))\,P(dy, t|x, s)
= f(x) + \int_{|y-x|\leq\varepsilon} (f(y) - f(x))\,P(dy, t|x, s) + o(t - s).
Now the final condition follows from the fact that f(x) \in C_b(R) and the
arbitrariness of \varepsilon.
Pavliotis (IC) StochProc January 16, 2011 125 / 367
Now we show that u(s, x) solves the backward Kolmogorov equation.
We use the Chapman-Kolmogorov equation (17) to obtain
u(x, \sigma) = \int_R f(z)\,P(dz, t|x, \sigma) (34)
= \int_R \int_R f(z)\,P(dz, t|y, \rho)\,P(dy, \rho|x, \sigma)
= \int_R u(y, \rho)\,P(dy, \rho|x, \sigma). (35)
The Taylor series expansion of the function u(s, x) gives
u(z, \rho) - u(x, \rho) = \frac{\partial u(x,\rho)}{\partial x}(z - x) + \frac{1}{2}\frac{\partial^2 u(x,\rho)}{\partial x^2}(z - x)^2 (1 + \alpha_\varepsilon), \quad |z - x| \leq \varepsilon, (36)
where
\alpha_\varepsilon = \sup_{\rho,\,|z-x|\leq\varepsilon} \Big| \frac{\partial^2 u(x,\rho)}{\partial x^2} - \frac{\partial^2 u(z,\rho)}{\partial x^2} \Big|.
Notice that, since u(x, s) is twice continuously differentiable in x,
\lim_{\varepsilon \to 0} \alpha_\varepsilon = 0.
Pavliotis (IC) StochProc January 16, 2011 126 / 367
We combine now (35) with (36) to calculate
\frac{u(x, s) - u(x, s+h)}{h} = \frac{1}{h}\Big[ \int_R P(dy, s+h|x, s)\,u(y, s+h) - u(x, s+h) \Big]
= \frac{1}{h} \int_R P(dy, s+h|x, s)\,(u(y, s+h) - u(x, s+h))
= \frac{1}{h} \int_{|x-y|<\varepsilon} P(dy, s+h|x, s)\,(u(y, s+h) - u(x, s)) + o(1)
= \frac{\partial u}{\partial x}(x, s+h)\, \frac{1}{h} \int_{|x-y|<\varepsilon} (y - x)\,P(dy, s+h|x, s)
+ \frac{1}{2}\frac{\partial^2 u}{\partial x^2}(x, s+h)\, \frac{1}{h} \int_{|x-y|<\varepsilon} (y - x)^2\,P(dy, s+h|x, s)\,(1 + \alpha_\varepsilon) + o(1)
= a(x, s)\frac{\partial u}{\partial x}(x, s+h) + \frac{1}{2} b(x, s)\frac{\partial^2 u}{\partial x^2}(x, s+h)(1 + \alpha_\varepsilon) + o(1).
Equation (33) follows by taking the limits \varepsilon \to 0, h \to 0.
Pavliotis (IC) StochProc January 16, 2011 127 / 367
Assume now that the transition function has a density p(y, t|x, s). In
this case the formula for u(x, s) becomes
u(x, s) = \int_R f(y)\,p(y, t|x, s)\,dy.
Substituting this in the backward Kolmogorov equation we obtain
\int_R f(y)\Big[ \frac{\partial p(y, t|x, s)}{\partial s} + \mathcal{A}_{s,x}\, p(y, t|x, s) \Big] = 0 (37)
where
\mathcal{A}_{s,x} := a(x, s)\frac{\partial}{\partial x} + \frac{1}{2} b(x, s)\frac{\partial^2}{\partial x^2}.
Since (37) is valid for arbitrary functions f(y), we obtain a partial
differential equation for the transition probability density:
-\frac{\partial p(y, t|x, s)}{\partial s} = a(x, s)\frac{\partial p(y, t|x, s)}{\partial x} + \frac{1}{2} b(x, s)\frac{\partial^2 p(y, t|x, s)}{\partial x^2}. (38)
Notice that the variation is with respect to the "backward" variables
x, s. We can also obtain an equation with respect to the "forward"
variables y, t, the forward Kolmogorov equation.
Pavliotis (IC) StochProc January 16, 2011 128 / 367
Now we derive the forward Kolmogorov equation or Fokker-Planck
equation. We assume that the transition function has a density with
respect to Lebesgue measure:
P(\Gamma, t|x, s) = \int_{\Gamma} p(y, t|x, s)\,dy.
Theorem
(Kolmogorov) Assume that conditions (27), (28), (29) are satisfied and
that p(y, t|\cdot, \cdot), a(y, t), b(y, t) \in C^{2,1}(R \times R^+). Then the transition
probability density satisfies the equation
\frac{\partial p}{\partial t} = -\frac{\partial}{\partial y}(a(t, y)p) + \frac{1}{2}\frac{\partial^2}{\partial y^2}(b(t, y)p), \quad \lim_{t \to s} p(y, t|x, s) = \delta(x - y). (39)
Pavliotis (IC) StochProc January 16, 2011 129 / 367
Proof
Fix a function f(y) \in C_0^2(R). An argument similar to the one used in the
proof of the backward Kolmogorov equation gives
\lim_{h \to 0} \frac{1}{h}\Big[ \int f(y)\,p(y, s+h|x, s)\,dy - f(x) \Big] = a(x, s) f_x(x) + \frac{1}{2} b(x, s) f_{xx}(x), (40)
where subscripts denote differentiation with respect to x. On the other
hand
Pavliotis (IC) StochProc January 16, 2011 130 / 367
\int f(y)\,\frac{\partial}{\partial t} p(y, t|x, s)\,dy = \frac{\partial}{\partial t} \int f(y)\,p(y, t|x, s)\,dy
= \lim_{h \to 0} \frac{1}{h} \int (p(y, t+h|x, s) - p(y, t|x, s))\,f(y)\,dy
= \lim_{h \to 0} \frac{1}{h} \Big[ \int p(y, t+h|x, s)\,f(y)\,dy - \int p(z, t|x, s)\,f(z)\,dz \Big]
= \lim_{h \to 0} \frac{1}{h} \Big[ \int\!\!\int p(y, t+h|z, t)\,p(z, t|x, s)\,f(y)\,dy\,dz - \int p(z, t|x, s)\,f(z)\,dz \Big]
= \lim_{h \to 0} \frac{1}{h} \int p(z, t|x, s)\Big[ \int p(y, t+h|z, t)\,f(y)\,dy - f(z) \Big] dz
= \int p(z, t|x, s)\Big[ a(z, t) f_z(z) + \frac{1}{2} b(z, t) f_{zz}(z) \Big] dz
= \int \Big[ -\frac{\partial}{\partial z}(a(z, t)\,p(z, t|x, s)) + \frac{1}{2}\frac{\partial^2}{\partial z^2}(b(z, t)\,p(z, t|x, s)) \Big] f(z)\,dz.
Pavliotis (IC) StochProc January 16, 2011 131 / 367
In the above calculation we used the Chapman-Kolmogorov equation. We
have also performed two integrations by parts and used the fact that,
since the test function f has compact support, the boundary terms
vanish.
Since the above equation is valid for every test function f (y), the
forward Kolmogorov equation follows.
Pavliotis (IC) StochProc January 16, 2011 132 / 367
Assume now that the initial distribution of X_t is \rho_0(x) and set s = 0 (the
initial time) in (39). Define
p(y, t) := \int p(y, t|x, 0)\,\rho_0(x)\,dx.
We multiply the forward Kolmogorov equation (39) by \rho_0(x) and
integrate with respect to x to obtain the equation
\frac{\partial p(y, t)}{\partial t} = -\frac{\partial}{\partial y}(a(y, t)\,p(y, t)) + \frac{1}{2}\frac{\partial^2}{\partial y^2}(b(y, t)\,p(y, t)), (41)
together with the initial condition
p(y, 0) = \rho_0(y). (42)
The solution of equation (41) provides us with the probability that the
diffusion process X_t, which initially was distributed according to the
probability density \rho_0(x), is equal to y at time t. Alternatively, we can
think of the solution to (39) as the Green's function for the PDE (41).
Pavliotis (IC) StochProc January 16, 2011 133 / 367
Quite often we need to calculate joint probability densities. For
example, the probability that X_{t_1} = x_1 and X_{t_2} = x_2. From the properties
of conditional expectation we have that
p(x_1, t_1, x_2, t_2) = P(X_{t_1} = x_1, X_{t_2} = x_2)
= P(X_{t_1} = x_1 | X_{t_2} = x_2)\,P(X_{t_2} = x_2)
= p(x_1, t_1 | x_2, t_2)\,p(x_2, t_2).
Using the joint probability density we can calculate the statistics of a
function of the diffusion process X_t at times t and s:
\mathbb{E}(f(X_t, X_s)) = \int\!\!\int f(y, x)\,p(y, t|x, s)\,p(x, s)\,dx\,dy. (43)
The autocorrelation function at times t and s is given by
\mathbb{E}(X_t X_s) = \int\!\!\int y\,x\,p(y, t|x, s)\,p(x, s)\,dx\,dy.
In particular,
\mathbb{E}(X_t X_0) = \int\!\!\int y\,x\,p(y, t|x, 0)\,p(x, 0)\,dx\,dy.
Pavliotis (IC) StochProc January 16, 2011 134 / 367
The drift and diffusion coefficients of a diffusion process in R^d are
defined as:
\lim_{t \to s} \frac{1}{t-s} \int_{|y-x|<\varepsilon} (y - x)\,P(dy, t|x, s) = a(x, s)
and
\lim_{t \to s} \frac{1}{t-s} \int_{|y-x|<\varepsilon} (y - x) \otimes (y - x)\,P(dy, t|x, s) = b(x, s).
The drift coefficient a(x, s) is a d-dimensional vector field and the
diffusion coefficient b(x, s) is a d \times d symmetric matrix (second order
tensor). The generator of a d-dimensional diffusion process is
L = a(s, x) \cdot \nabla + \frac{1}{2} b(s, x) : \nabla\nabla
= \sum_{j=1}^d a_j \frac{\partial}{\partial x_j} + \frac{1}{2} \sum_{i,j=1}^d b_{ij} \frac{\partial^2}{\partial x_i \partial x_j}.
Pavliotis (IC) StochProc January 16, 2011 135 / 367
Assuming that the first and second moments of the multidimensional
diffusion process exist, we can write the formulas for the drift vector
and diffusion matrix as
\lim_{t \to s} \mathbb{E}\Big[ \frac{X_t - X_s}{t - s} \,\Big|\, X_s = x \Big] = a(x, s) (44)
and
\lim_{t \to s} \mathbb{E}\Big[ \frac{(X_t - X_s) \otimes (X_t - X_s)}{t - s} \,\Big|\, X_s = x \Big] = b(x, s). (45)
Notice that from the above definition it follows that the diffusion matrix
is symmetric and nonnegative definite.
Pavliotis (IC) StochProc January 16, 2011 136 / 367
THE FOKKER-PLANCK EQUATION
Pavliotis (IC) StochProc January 16, 2011 137 / 367
Consider a diffusion process on R^d with time-independent drift
and diffusion coefficients. The Fokker-Planck equation is
\frac{\partial p}{\partial t} = -\sum_{j=1}^d \frac{\partial}{\partial x_j}(a_j(x)p) + \frac{1}{2}\sum_{i,j=1}^d \frac{\partial^2}{\partial x_i \partial x_j}(b_{ij}(x)p), \quad t > 0, \ x \in R^d, (46a)
p(x, 0) = f(x), \quad x \in R^d. (46b)
Write it in non-divergence form:
\frac{\partial p}{\partial t} = \sum_{j=1}^d \tilde{a}_j(x)\frac{\partial p}{\partial x_j} + \frac{1}{2}\sum_{i,j=1}^d b_{ij}(x)\frac{\partial^2 p}{\partial x_i \partial x_j} + \tilde{c}(x)p, \quad t > 0, \ x \in R^d, (47a)
p(x, 0) = f(x), \quad x \in R^d, (47b)
Pavliotis (IC) StochProc January 16, 2011 138 / 367
where
\tilde{a}_i(x) = -a_i(x) + \sum_{j=1}^d \frac{\partial b_{ij}}{\partial x_j}, \qquad
\tilde{c}(x) = \frac{1}{2}\sum_{i,j=1}^d \frac{\partial^2 b_{ij}}{\partial x_i \partial x_j} - \sum_{i=1}^d \frac{\partial a_i}{\partial x_i}.
The diffusion matrix is always nonnegative. We will assume that it
is actually positive, i.e. we will impose the uniform ellipticity
condition:
\sum_{i,j=1}^d b_{ij}(x)\,\xi_i \xi_j \geq \alpha |\xi|^2, \quad \forall\, \xi \in R^d, (48)
Furthermore, we will assume that the coefficients \tilde{a}, b, \tilde{c} are
smooth and that they satisfy the growth conditions
\|b(x)\| \leq M, \quad \|\tilde{a}(x)\| \leq M(1 + |x|), \quad |\tilde{c}(x)| \leq M(1 + |x|^2). (49)
Pavliotis (IC) StochProc January 16, 2011 139 / 367
We will call a solution to the Cauchy problem for the
Fokker-Planck equation (47) a classical solution if:
1. u \in C^{2,1}(R^d, R^+).
2. \forall T > 0 there exists a c > 0 such that
\|u(t, \cdot)\|_{L^\infty(0,T)} \leq c\, e^{\alpha \|x\|^2}.
3. \lim_{t \to 0} u(t, x) = f(x).
Pavliotis (IC) StochProc January 16, 2011 140 / 367
Theorem
Assume that conditions (48) and (49) are satisfied, and assume that
|f| \leq c e^{\alpha \|x\|^2}. Then there exists a unique classical solution to the
Cauchy problem for the Fokker-Planck equation. Furthermore, there
exist positive constants K, \delta so that
|p|, |p_t|, \|\nabla p\|, \|D^2 p\| \leq K t^{-(n+2)/2} \exp\Big( -\frac{1}{2t}\,\delta \|x\|^2 \Big).
This estimate enables us to multiply the Fokker-Planck equation
by monomials x^n and then to integrate over R^d and to integrate by
parts.
For a proof of this theorem see Friedman, Partial Differential
Equations of Parabolic Type, Prentice-Hall 1964.
Pavliotis (IC) StochProc January 16, 2011 141 / 367
The FP equation as a conservation law
We can define the probability current to be the vector whose i-th
component is
J_i := a_i(x)p - \frac{1}{2}\sum_{j=1}^d \frac{\partial}{\partial x_j}\big( b_{ij}(x)p \big).
The Fokker-Planck equation can be written as a continuity
equation:
\frac{\partial p}{\partial t} + \nabla \cdot J = 0.
Integrating the FP equation over R^d and integrating by parts on
the right hand side of the equation we obtain
\frac{d}{dt}\int_{R^d} p(x, t)\,dx = 0.
Consequently:
\|p(\cdot, t)\|_{L^1(R^d)} = \|p(\cdot, 0)\|_{L^1(R^d)} = 1.
Pavliotis (IC) StochProc January 16, 2011 142 / 367
Boundary conditions for the Fokker-Planck
equation
So far we have been studying the FP equation on R^d.
The boundary condition was that the solution decays sufficiently
fast at infinity.
For ergodic diffusion processes this is equivalent to requiring that
the solution of the backward Kolmogorov equation is an element
of L^2(\mu), where \mu is the invariant measure of the process.
We can also study the FP equation in a bounded domain with
appropriate boundary conditions.
We can have absorbing, reflecting or periodic boundary
conditions.
Pavliotis (IC) StochProc January 16, 2011 143 / 367
Consider the FP equation posed in \Omega \subset R^d, where \Omega is a bounded
domain with smooth boundary.
Let J denote the probability current and let n be the unit normal
vector to the surface \partial\Omega.
1. We specify reflecting boundary conditions by setting
n \cdot J(x, t) = 0, on \partial\Omega.
2. We specify absorbing boundary conditions by setting
p(x, t) = 0, on \partial\Omega.
3. When the coefficients of the FP equation are periodic functions, we
might also want to consider periodic boundary conditions, i.e. the
solution of the FP equation is periodic in x with period equal to that
of the coefficients.
Pavliotis (IC) StochProc January 16, 2011 144 / 367
Reflecting BC correspond to the case where a particle which
evolves according to the SDE corresponding to the FP equation
gets reflected at the boundary.
Absorbing BC correspond to the case where a particle which
evolves according to the SDE corresponding to the FP equation
gets absorbed at the boundary.
There is a complete classification of boundary conditions in one
dimension, the Feller classification: the BC can be regular, exit,
entrance and natural.
Pavliotis (IC) StochProc January 16, 2011 145 / 367
Examples of Diffusion Processes
Set a(t, x) \equiv 0, b(t, x) \equiv 2D > 0. The Fokker-Planck equation
becomes:
\frac{\partial p}{\partial t} = D\frac{\partial^2 p}{\partial x^2}, \quad p(x, s|y, s) = \delta(x - y).
This is the heat equation, which is the Fokker-Planck equation for
Brownian motion (Einstein, 1905). Its solution is
p_W(x, t|y, s) = \frac{1}{\sqrt{4\pi D(t-s)}} \exp\Big( -\frac{(x - y)^2}{4D(t-s)} \Big).
Pavliotis (IC) StochProc January 16, 2011 146 / 367
Assume that the initial distribution is
p_W(x, s|y, s) = W(y, s).
The solution of the Fokker-Planck equation for Brownian motion
with this initial distribution is
P_W(x, t) = \int p(x, t|y, s)\,W(y, s)\,dy.
The Gaussian distribution is the fundamental solution (Green's
function) of the heat equation (i.e. the Fokker-Planck equation for
Brownian motion).
Pavliotis (IC) StochProc January 16, 2011 147 / 367
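As a sketch of the Green's-function statement, one can compare a histogram of Brownian paths started at y with the Gaussian transition density p_W(x, t|y, 0); the parameters below are illustrative assumptions.

```python
import numpy as np

D, y, t = 0.5, 0.0, 1.0
n_paths, n_steps = 100_000, 200
dt = t / n_steps
rng = np.random.default_rng(2)

# X_t = y + sqrt(2D) W_t, simulated by summing independent Gaussian increments
X = y + np.sqrt(2 * D * dt) * rng.normal(size=(n_paths, n_steps)).sum(axis=1)

def p_W(x):   # heat kernel with variance 2*D*t, centred at y
    return np.exp(-(x - y) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

hist, edges = np.histogram(X, bins=50, density=True)
centres = 0.5 * (edges[1:] + edges[:-1])
print("max abs error:", np.max(np.abs(hist - p_W(centres))))
```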
Set a(t, x) = -\alpha x, b(t, x) \equiv 2D > 0:
\frac{\partial p}{\partial t} = \alpha \frac{\partial (xp)}{\partial x} + D\frac{\partial^2 p}{\partial x^2}.
This is the Fokker-Planck equation for the Ornstein-Uhlenbeck
process (Ornstein-Uhlenbeck, 1930). Its solution is
p_{OU}(x, t|y, s) = \sqrt{\frac{\alpha}{2\pi D(1 - e^{-2\alpha(t-s)})}} \exp\Big( -\frac{\alpha (x - e^{-\alpha(t-s)}y)^2}{2D(1 - e^{-2\alpha(t-s)})} \Big).
Proof: take the Fourier transform in x, use the method of
characteristics and take the inverse Fourier transform.
Pavliotis (IC) StochProc January 16, 2011 148 / 367
Set y = 0, s = 0. Notice that
\lim_{\alpha \to 0} p_{OU}(x, t) = p_W(x, t).
Thus, in the limit where the friction coefficient goes to 0, we
recover the distribution function of BM from the DF of the OU
process.
Notice also that
\lim_{t \to +\infty} p_{OU}(x, t) = \sqrt{\frac{\alpha}{2\pi D}} \exp\Big( -\frac{\alpha x^2}{2D} \Big).
Thus, the Ornstein-Uhlenbeck process is an ergodic Markov
process. Its invariant measure is Gaussian.
We can calculate all moments of the OU process.
Pavliotis (IC) StochProc January 16, 2011 149 / 367
Define the nth moment of the OU process:
M_n = \int_R x^n p(x, t)\,dx, \quad n = 0, 1, 2, \dots.
Let n = 0. We integrate the FP equation over R to obtain:
\int \frac{\partial p}{\partial t} = \alpha \int \frac{\partial(xp)}{\partial x} + D \int \frac{\partial^2 p}{\partial x^2} = 0,
after an integration by parts and using the fact that p(x, t) decays
sufficiently fast at infinity. Consequently:
\frac{d}{dt} M_0 = 0 \ \Rightarrow \ M_0(t) = M_0(0) = 1.
In other words:
\frac{d}{dt}\|p(\cdot, t)\|_{L^1(R)} = 0 \ \Rightarrow \ \|p(\cdot, t)\|_{L^1(R)} = \|p(\cdot, 0)\|_{L^1(R)} = 1.
Consequently: probability is conserved.
Pavliotis (IC) StochProc January 16, 2011 150 / 367
Let n = 1. We multiply the FP equation for the OU process by x,
integrate over R and perform an integration by parts to obtain:
\frac{d}{dt} M_1 = -\alpha M_1.
Consequently, the first moment converges exponentially fast to 0:
M_1(t) = e^{-\alpha t} M_1(0).
Pavliotis (IC) StochProc January 16, 2011 151 / 367
Let now n \geq 2. We multiply the FP equation for the OU process by
x^n and integrate by parts (once on the first term on the RHS and
twice on the second) to obtain:
\frac{d}{dt}\int x^n p = -\alpha n \int x^n p + Dn(n-1)\int x^{n-2} p.
Or, equivalently:
\frac{d}{dt} M_n = -\alpha n M_n + Dn(n-1) M_{n-2}, \quad n \geq 2.
This is a first order linear inhomogeneous differential equation.
We can solve it using the variation of constants formula:
M_n(t) = e^{-\alpha n t} M_n(0) + Dn(n-1)\int_0^t e^{-\alpha n (t-s)} M_{n-2}(s)\,ds.
Pavliotis (IC) StochProc January 16, 2011 152 / 367
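A sketch verifying the moment recursion numerically (not from the notes): integrate dM_2/dt = -2\alpha M_2 + 2D M_0 with a forward-Euler step and compare with the variation-of-constants formula; the parameter values are illustrative.

```python
import numpy as np

alpha, D = 1.0, 0.5
M0, M2_0 = 1.0, 0.0     # point mass at the origin: M_0 = 1, M_2(0) = 0
dt, T = 1e-4, 2.0

# forward Euler for dM2/dt = -2*alpha*M2 + 2*D*M0  (M0 is constant)
M2 = M2_0
for _ in range(int(T / dt)):
    M2 += dt * (-2 * alpha * M2 + 2 * D * M0)

# variation of constants: M2(t) = e^{-2 alpha t} M2(0) + (D/alpha)(1 - e^{-2 alpha t})
exact = np.exp(-2 * alpha * T) * M2_0 + (D / alpha) * (1 - np.exp(-2 * alpha * T))
print(M2, exact)        # both approach the stationary value D/alpha
```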
The stationary moments of the OU process are:
\langle x^n \rangle_{OU} = \sqrt{\frac{\alpha}{2\pi D}} \int_R x^n e^{-\frac{\alpha x^2}{2D}}\,dx
= \begin{cases} 1 \cdot 3 \cdots (n-1)\,\big( \tfrac{D}{\alpha} \big)^{n/2}, & n \text{ even}, \\ 0, & n \text{ odd}. \end{cases}
We have that
\lim_{t \to \infty} M_n(t) = \langle x^n \rangle_{OU}.
If the initial conditions of the OU process are stationary, then:
M_n(t) = M_n(0) = \langle x^n \rangle_{OU}.
Pavliotis (IC) StochProc January 16, 2011 153 / 367
Set a(x) = \mu x, b(x) = \sigma^2 x^2. This is the geometric Brownian
motion. The generator of this process is
L = \mu x \frac{\partial}{\partial x} + \frac{\sigma^2 x^2}{2}\frac{\partial^2}{\partial x^2}.
Notice that this operator is not uniformly elliptic.
The Fokker-Planck equation of the geometric Brownian motion is:
\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}(\mu x p) + \frac{\partial^2}{\partial x^2}\Big( \frac{\sigma^2 x^2}{2} p \Big).
Pavliotis (IC) StochProc January 16, 2011 154 / 367
We can easily obtain an equation for the nth moment of the
geometric Brownian motion:
\frac{d}{dt} M_n = \Big( \mu n + \frac{\sigma^2}{2} n(n-1) \Big) M_n, \quad n \geq 2.
The solution of this equation is
M_n(t) = e^{(\mu + (n-1)\frac{\sigma^2}{2}) n t} M_n(0), \quad n \geq 2
and
M_1(t) = e^{\mu t} M_1(0).
Notice that the nth moment might diverge as t \to \infty, depending
on the values of \mu and \sigma:
\lim_{t \to \infty} M_n(t) = \begin{cases} 0, & \frac{\sigma^2}{2} < -\frac{\mu}{n-1}, \\ M_n(0), & \frac{\sigma^2}{2} = -\frac{\mu}{n-1}, \\ +\infty, & \frac{\sigma^2}{2} > -\frac{\mu}{n-1}. \end{cases}
Pavliotis (IC) StochProc January 16, 2011 155 / 367
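A Monte Carlo sketch of the moment formula M_n(t) = e^{(\mu+(n-1)\sigma^2/2)nt} M_n(0), using the exact solution X_t = X_0 exp((\mu - \sigma^2/2)t + \sigma W_t) of the GBM SDE; the parameter values are illustrative assumptions.

```python
import numpy as np

mu, sigma, X0, t, n = -0.5, 0.8, 1.0, 1.0, 2
rng = np.random.default_rng(3)
W = np.sqrt(t) * rng.normal(size=1_000_000)

# exact GBM solution, then empirical n-th moment
X = X0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)
empirical = np.mean(X**n)
theory = X0**n * np.exp((mu + 0.5 * (n - 1) * sigma**2) * n * t)
print(empirical, theory)
```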
Gradient Flows
Let V(x) = \frac{1}{2}\alpha x^2. The generator of the OU process can be written
as:
L = -\partial_x V\, \partial_x + D\,\partial_x^2.
Consider diffusion processes with a potential V(x), not
necessarily quadratic:
L = -\nabla V(x) \cdot \nabla + D\Delta. (50)
This is a gradient flow perturbed by noise whose strength is
D = k_B T where k_B is Boltzmann's constant and T the absolute
temperature.
The corresponding stochastic differential equation is
dX_t = -\nabla V(X_t)\,dt + \sqrt{2D}\,dW_t.
Pavliotis (IC) StochProc January 16, 2011 156 / 367
The corresponding FP equation is:
\frac{\partial p}{\partial t} = \nabla \cdot (\nabla V\, p) + D\Delta p. (51)
It is not possible to calculate the time dependent solution of this
equation for an arbitrary potential. We can, however, always
calculate the stationary solution.
Pavliotis (IC) StochProc January 16, 2011 157 / 367
Theorem
Assume that V(x) is smooth and that
e^{-V(x)/D} \in L^1(R^d). (52)
Then the Markov process with generator (50) is ergodic. The unique
invariant distribution is the Gibbs distribution
p(x) = \frac{1}{Z} e^{-V(x)/D}, (53)
where the normalization factor Z is the partition function
Z = \int_{R^d} e^{-V(x)/D}\,dx.
Pavliotis (IC) StochProc January 16, 2011 158 / 367
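A sketch computing the Gibbs density on a grid for a hypothetical double-well potential V(x) = (x^2 - 1)^2/4 (the potential, noise strength and grid are illustrative assumptions, not from the notes):

```python
import numpy as np

D = 0.25                                  # noise strength (illustrative)
V = lambda x: 0.25 * (x**2 - 1.0)**2      # example double-well potential (assumption)

x = np.linspace(-3.0, 3.0, 2001)
dx = x[1] - x[0]
unnorm = np.exp(-V(x) / D)
Z = np.sum(unnorm) * dx                   # partition function (simple quadrature)
p = unnorm / Z                            # Gibbs density p(x) = Z^{-1} e^{-V/D}
print("total mass:", np.sum(p) * dx, "  Z =", Z)
```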
The fact that the Gibbs distribution is an invariant distribution
follows by direct substitution. Uniqueness follows from a PDEs
argument (see discussion below).
It is more convenient to "normalize" the solution of the
Fokker-Planck equation with respect to the invariant distribution.
Theorem
Let p(x, t) be the solution of the Fokker-Planck equation (51), assume
that (52) holds and let \rho(x) be the Gibbs distribution (53). Define
h(x, t) through
p(x, t) = h(x, t)\rho(x).
Then the function h satisfies the backward Kolmogorov equation:
\frac{\partial h}{\partial t} = -\nabla V \cdot \nabla h + D\Delta h, \quad h(x, 0) = p(x, 0)\rho^{-1}(x). (54)
Pavliotis (IC) StochProc January 16, 2011 159 / 367
Proof.
The initial condition follows from the definition of h. We calculate the
gradient and Laplacian of p:
\nabla p = \rho \nabla h - \rho h D^{-1}\nabla V
and
\Delta p = \rho \Delta h - 2\rho D^{-1}\nabla V \cdot \nabla h - h \rho D^{-1}\Delta V + h|\nabla V|^2 D^{-2}\rho.
We substitute these formulas into the FP equation to obtain
\rho \frac{\partial h}{\partial t} = \rho\big( -\nabla V \cdot \nabla h + D\Delta h \big),
from which the claim follows.
Pavliotis (IC) StochProc January 16, 2011 160 / 367
Consequently, in order to study properties of solutions to the FP
equation, it is sufficient to study the backward equation (54).
The generator L is self-adjoint, in the right function space.
We define the weighted L^2 space L^2_\rho:
L^2_\rho = \Big\{ f \ \Big|\ \int_{R^d} |f|^2 \rho(x)\,dx < \infty \Big\},
where \rho(x) is the Gibbs distribution. This is a Hilbert space with
inner product
(f, h)_\rho = \int_{R^d} f h \rho(x)\,dx.
Pavliotis (IC) StochProc January 16, 2011 161 / 367
Theorem
Assume that V(x) is a smooth potential and assume that
condition (52) holds. Then the operator
L = -\nabla V(x) \cdot \nabla + D\Delta
is self-adjoint in L^2_\rho. Furthermore, it is non-positive, and its kernel consists
of constants.
Pavliotis (IC) StochProc January 16, 2011 162 / 367
Proof.
Let f, h \in C_0^2(R^d). We calculate
(Lf, h)_\rho = \int_{R^d} (-\nabla V \cdot \nabla + D\Delta)f\, h\, \rho\,dx
= \int_{R^d} (-\nabla V \cdot \nabla f)\,h\,\rho\,dx - D\int_{R^d} \nabla f \cdot \nabla h\, \rho\,dx - D\int_{R^d} \nabla f \cdot \nabla \rho\, h\,dx
= -D\int_{R^d} \nabla f \cdot \nabla h\, \rho\,dx,
from which self-adjointness follows.
Pavliotis (IC) StochProc January 16, 2011 163 / 367
If we set f = h in the above equation we get
(Lf, f)_\rho = -D\|\nabla f\|^2_\rho,
which shows that L is non-positive.
Clearly, constants are in the null space of L. Assume that f \in N(L).
Then, from the above equation we get
0 = -D\|\nabla f\|^2_\rho,
and, consequently, f is a constant.
Remark
The expression (Lf, f)_\rho is called the Dirichlet form of the
operator L. In the case of a gradient flow, it takes the form
(Lf, f)_\rho = -D\|\nabla f\|^2_\rho. (55)
Pavliotis (IC) StochProc January 16, 2011 164 / 367
Using the properties of the generator L we can show that the
solution of the Fokker-Planck equation converges to the Gibbs
distribution exponentially fast.
For this we need the following result.
Theorem
Assume that the potential V satisfies the convexity condition
D^2 V \geq \lambda I.
Then the corresponding Gibbs measure satisfies the Poincare
inequality with constant \lambda:
\int_{R^d} f \rho = 0 \ \Rightarrow \ \|\nabla f\|_\rho \geq \sqrt{\lambda}\, \|f\|_\rho. (56)
Pavliotis (IC) StochProc January 16, 2011 165 / 367
Theorem
Assume that p(x, 0) \in L^2(e^{V/D}). Then the solution p(x, t) of the
Fokker-Planck equation (51) converges to the Gibbs distribution
exponentially fast:
\|p(\cdot, t) - Z^{-1}e^{-V/D}\|_{\rho^{-1}} \leq e^{-\lambda D t}\, \|p(\cdot, 0) - Z^{-1}e^{-V/D}\|_{\rho^{-1}}. (57)
Pavliotis (IC) StochProc January 16, 2011 166 / 367
Proof.
We use (54), (55) and (56) to calculate
\frac{d}{dt}\|h - 1\|^2_\rho = 2\Big( \frac{\partial h}{\partial t}, h - 1 \Big)_\rho = 2(Lh, h - 1)_\rho
= 2(L(h - 1), h - 1)_\rho = -2D\|\nabla(h - 1)\|^2_\rho \leq -2D\lambda \|h - 1\|^2_\rho.
Our assumption on p(\cdot, 0) implies that h(\cdot, 0) \in L^2_\rho. Consequently, the
above calculation shows that
\|h(\cdot, t) - 1\|_\rho \leq e^{-\lambda D t}\, \|h(\cdot, 0) - 1\|_\rho.
This, and the definition of h, p = h\rho, lead to (57).
Pavliotis (IC) StochProc January 16, 2011 167 / 367
The assumption
\int_{R^d} |p(x, 0)|^2 \rho^{-1}(x)\,dx < \infty, \quad \rho = Z^{-1}e^{-V/D},
is very restrictive (think of the case where V = x^2).
The function space L^2(\rho^{-1}) = L^2(e^{V/D}) in which we prove
convergence is not the right space to use. Since p(\cdot, t) \in L^1,
ideally we would like to prove exponentially fast convergence in L^1.
We can prove convergence in L^1 using the theory of logarithmic
Sobolev inequalities. In fact, we can also prove convergence in
relative entropy:
H(p\,|\,\rho_V) := \int_{R^d} p \ln\Big( \frac{p}{\rho_V} \Big)\,dx.
Pavliotis (IC) StochProc January 16, 2011 168 / 367
The relative entropy norm controls the L^1 norm:
\|\rho_1 - \rho_2\|^2_{L^1} \leq C H(\rho_1|\rho_2).
Using a logarithmic Sobolev inequality, we can prove exponentially
fast convergence to equilibrium, assuming only that the relative
entropy of the initial conditions is finite.
Theorem
Let p denote the solution of the Fokker-Planck equation (51) where
the potential is smooth and uniformly convex. Assume that the
initial conditions satisfy
H(p(\cdot, 0)\,|\,\rho_V) < \infty.
Then p converges to the Gibbs distribution exponentially fast in relative
entropy:
H(p(\cdot, t)\,|\,\rho_V) \leq e^{-\lambda D t} H(p(\cdot, 0)\,|\,\rho_V).
Pavliotis (IC) StochProc January 16, 2011 169 / 367
Convergence to equilibrium for kinetic equations, both linear and
non-linear (e.g., the Boltzmann equation) has been studied
extensively.
It has been recognized that the relative entropy plays a very
important role.
For more information see
On the trend to equilibrium for the Fokker-Planck equation: an
interplay between physics and functional analysis by P.A.
Markowich and C. Villani, 1999.
Pavliotis (IC) StochProc January 16, 2011 170 / 367
Eigenfunction Expansions
Consider the generator of a gradient flow with a uniformly convex
potential
L = -\nabla V \cdot \nabla + D\Delta. (58)
We know that
1. L is a non-positive self-adjoint operator on L^2_\rho.
2. It has a spectral gap:
(Lf, f)_\rho \leq -D\lambda \|f\|^2_\rho \quad \text{for all } f \text{ with } \int f\rho = 0,
where \lambda is the Poincare constant of the potential V.
The above imply that we can study the spectral problem for -L:
-L f_n = \lambda_n f_n, \quad n = 0, 1, \dots
Pavliotis (IC) StochProc January 16, 2011 171 / 367
The operator -L has real, discrete spectrum with
0 = \lambda_0 < \lambda_1 < \lambda_2 < \dots
Furthermore, the eigenfunctions \{f_j\}_{j=1}^\infty form an orthonormal basis
in L^2_\rho: we can express every element of L^2_\rho in the form of a
generalized Fourier series:
\phi = \phi_0 + \sum_{n=1}^\infty \phi_n f_n, \quad \phi_n = (\phi, f_n)_\rho (59)
with (f_n, f_m)_\rho = \delta_{nm} and \phi_0 \in N(L).
This enables us to solve the time dependent Fokker-Planck
equation in terms of an eigenfunction expansion.
Consider the backward Kolmogorov equation (54).
We assume that the initial condition h_0(x) = \phi(x) \in L^2_\rho and
consequently we can expand it in the form (59).
Pavliotis (IC) StochProc January 16, 2011 172 / 367
We look for a solution of (54) in the form
h(x, t) = \sum_{n=0}^\infty h_n(t) f_n(x).
We substitute this expansion into the backward Kolmogorov
equation:
\frac{\partial h}{\partial t} = \sum_{n=0}^\infty \dot{h}_n f_n = L\Big( \sum_{n=0}^\infty h_n f_n \Big) (60)
= -\sum_{n=0}^\infty \lambda_n h_n f_n. (61)
We multiply this equation by f_m, integrate with respect to the Gibbs measure
and use the orthonormality of the eigenfunctions to obtain the
sequence of equations
\dot{h}_n = -\lambda_n h_n, \quad n = 0, 1, \dots
Pavliotis (IC) StochProc January 16, 2011 173 / 367
The solution is
h_0(t) = \phi_0, \quad h_n(t) = e^{-\lambda_n t}\phi_n, \quad n = 1, 2, \dots
Notice that
1 = \int_{R^d} p(x, 0)\,dx = \int_{R^d} p(x, t)\,dx
= \int_{R^d} h(x, t) Z^{-1} e^{-V/D}\,dx = (h, 1)_\rho = (\phi, 1)_\rho = \phi_0.
Pavliotis (IC) StochProc January 16, 2011 174 / 367
Consequently, the solution of the backward Kolmogorov equation
is
h(x, t) = 1 + \sum_{n=1}^\infty e^{-\lambda_n t}\phi_n f_n.
This expansion, together with the fact that all eigenvalues are
positive (n \geq 1), shows that the solution of the backward
Kolmogorov equation converges to 1 exponentially fast.
The solution of the Fokker-Planck equation is
p(x, t) = Z^{-1} e^{-V(x)/D}\Big( 1 + \sum_{n=1}^\infty e^{-\lambda_n t}\phi_n f_n \Big).
Pavliotis (IC) StochProc January 16, 2011 175 / 367
Self-adjointness
The Fokker-Planck operator of a general diffusion process is not
self-adjoint in general.
In fact, it is self-adjoint if and only if the drift term is the gradient
of a potential (Nelson, 1960). This is also true in infinite
dimensions (stochastic PDEs).
Markov processes whose generator is a self-adjoint operator are
called reversible: for all t \in [0, T], X_t and X_{T-t} have the same
transition probability (when X_t is stationary).
Reversibility is equivalent to the invariant measure being a Gibbs
measure.
See Thermodynamics of the general diffusion process:
time-reversibility and entropy production, H. Qian, M. Qian, X.
Tang, J. Stat. Phys., 107, (5/6), 2002, pp. 1129-1141.
Pavliotis (IC) StochProc January 16, 2011 176 / 367
Reduction to a Schrodinger Equation
Lemma
The Fokker-Planck operator for a gradient flow can be written in the
self-adjoint form
\frac{\partial p}{\partial t} = D\nabla \cdot \Big( e^{-V/D}\nabla\big( e^{V/D} p \big) \Big). (62)
Define now \psi(x, t) = e^{V/2D} p(x, t). Then \psi solves the PDE
\frac{\partial \psi}{\partial t} = D\Delta\psi - U(x)\psi, \quad U(x) := \frac{|\nabla V|^2}{4D} - \frac{\Delta V}{2}. (63)
Let H := -D\Delta + U. Then L^* and -H have the same eigenvalues. The
nth eigenfunction \phi_n of L^* and the nth eigenfunction \psi_n of H are
associated through the transformation
\psi_n(x) = \phi_n(x)\exp\Big( \frac{V(x)}{2D} \Big).
Pavliotis (IC) StochProc January 16, 2011 177 / 367
Remarks
1. Equation (62) shows that the FP operator can be written in
the form
L^*\cdot = D\nabla \cdot \Big( e^{-V/D}\nabla\big( e^{V/D} \cdot \big) \Big).
2. The operator that appears on the right hand side of eqn. (63) has
the form of a Schrodinger operator:
H = -D\Delta + U(x).
3. The spectral problem for the FP operator can be transformed into
the spectral problem for a Schrodinger operator. We can thus use
all the available results from quantum mechanics to study the FP
equation and the associated SDE.
4. In particular, the weak noise asymptotics D \ll 1 is equivalent to
the semiclassical approximation from quantum mechanics.
Pavliotis (IC) StochProc January 16, 2011 178 / 367
Proof.
We calculate
D\nabla \cdot \Big( e^{-V/D}\nabla\big( e^{V/D} f \big) \Big) = D\nabla \cdot \Big( e^{-V/D}\big( D^{-1}\nabla V f + \nabla f \big) e^{V/D} \Big)
= \nabla \cdot (\nabla V f + D\nabla f) = L^* f.
Consider now the eigenvalue problem for the FP operator:
-L^* \phi_n = \lambda_n \phi_n.
Set \phi_n = \psi_n \exp\big( -\frac{1}{2D} V \big). We calculate L^*\phi_n:
Pavliotis (IC) StochProc January 16, 2011 179 / 367
L^*\phi_n = D\nabla \cdot \Big( e^{-V/D}\nabla\big( e^{V/D}\psi_n e^{-V/2D} \big) \Big)
= D\nabla \cdot \Big( e^{-V/D}\Big( \nabla\psi_n + \frac{\nabla V}{2D}\psi_n \Big) e^{V/2D} \Big)
= \Big( D\Delta\psi_n + \Big( -\frac{|\nabla V|^2}{4D} + \frac{\Delta V}{2} \Big)\psi_n \Big) e^{-V/2D}
= -e^{-V/2D}\, H\psi_n.
From this we conclude that e^{-V/2D} H\psi_n = \lambda_n \psi_n e^{-V/2D}, from which the
equivalence between the two eigenvalue problems follows.
Pavliotis (IC) StochProc January 16, 2011 180 / 367
Remarks
1. We can rewrite the Schrodinger operator in the form
H = D\mathcal{A}^*\mathcal{A}, \quad \mathcal{A} = \nabla + \frac{\nabla V}{2D}, \quad \mathcal{A}^* = -\nabla \cdot + \frac{\nabla V}{2D}.
2. These are creation and annihilation operators. They can also be
written in the form
\mathcal{A}\cdot = e^{-V/2D}\nabla\big( e^{V/2D} \cdot \big), \quad \mathcal{A}^*\cdot = e^{V/2D}\nabla \cdot \big( e^{-V/2D} \cdot \big).
3. The forward and the backward Kolmogorov operators have the same
eigenvalues. Their eigenfunctions are related through
\phi^B_n = \phi^F_n \exp(V/D),
where \phi^B_n and \phi^F_n denote the eigenfunctions of the backward and
forward operators, respectively.
Pavliotis (IC) StochProc January 16, 2011 181 / 367
The OU Process and Hermite Polynomials
The generator of the OU process is
L = -y\frac{d}{dy} + D\frac{d^2}{dy^2} (64)
The OU process is an ergodic Markov process whose unique
invariant measure is absolutely continuous with respect to the
Lebesgue measure on R with density \rho(y) \in C^\infty(R),
\rho(y) = \frac{1}{\sqrt{2\pi D}} e^{-\frac{y^2}{2D}}.
Let L^2_\rho denote the closure of C^\infty functions with respect to the norm
\|f\|^2_\rho = \int_R f(y)^2 \rho(y)\,dy.
The space L^2_\rho is a Hilbert space with inner product
(f, g)_\rho = \int_R f(y) g(y) \rho(y)\,dy.
Pavliotis (IC) StochProc January 16, 2011 182 / 367
Consider the eigenvalue problem for -L:
-L f_n = \lambda_n f_n.
The operator L has discrete spectrum in L^2_\rho:
\lambda_n = n, \quad n \in \mathbb{N}.
The corresponding eigenfunctions are the normalized Hermite
polynomials:
f_n(y) = \frac{1}{\sqrt{n!}} H_n\Big( \frac{y}{\sqrt{D}} \Big), (65)
where
H_n(y) = (-1)^n e^{\frac{y^2}{2}} \frac{d^n}{dy^n}\Big( e^{-\frac{y^2}{2}} \Big).
Pavliotis (IC) StochProc January 16, 2011 183 / 367
The first few Hermite polynomials are:
H_0(y) = 1,
H_1(y) = y,
H_2(y) = y^2 - 1,
H_3(y) = y^3 - 3y,
H_4(y) = y^4 - 6y^2 + 3,
H_5(y) = y^5 - 10y^3 + 15y.
H_n is a polynomial of degree n.
Only odd (even) powers appear in H_n when n is odd (even).
Pavliotis (IC) StochProc January 16, 2011 184 / 367
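A sketch generating probabilists' Hermite polynomials from the three-term recurrence H_{n+1}(y) = yH_n(y) - nH_{n-1}(y) and checking, by simple quadrature, the orthonormality of f_n(y) = H_n(y/\sqrt{D})/\sqrt{n!} with respect to the Gaussian density \rho; D and the grid are illustrative assumptions.

```python
import numpy as np
from math import factorial, sqrt, pi

D = 0.7  # illustrative value

def hermite(n, y):
    """Probabilists' Hermite polynomial H_n via the three-term recurrence."""
    Hm, H = np.ones_like(y), y.copy()
    if n == 0:
        return Hm
    for k in range(1, n):
        Hm, H = H, y * H - k * Hm
    return H

y = np.linspace(-12, 12, 20001)
dy = y[1] - y[0]
rho = np.exp(-y**2 / (2 * D)) / sqrt(2 * pi * D)

def f(n, y):
    return hermite(n, y / sqrt(D)) / sqrt(factorial(n))

for n in range(4):
    row = [np.sum(f(n, y) * f(m, y) * rho) * dy for m in range(4)]
    print(" ".join(f"{v: .4f}" for v in row))   # should print the identity matrix
```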
Lemma
The eigenfunctions \{f_n(y)\}_{n=1}^\infty of the generator of the OU process L
satisfy the following properties.
1. They form an orthonormal set in L^2_\rho:
(f_n, f_m)_\rho = \delta_{nm}.
2. \{f_n(y) : n \in \mathbb{N}\} is an orthonormal basis in L^2_\rho.
3. Define the creation and annihilation operators on C^1(R) by
A^+ \phi(y) = \frac{1}{\sqrt{D(n+1)}}\Big( -D\frac{d\phi}{dy}(y) + y\phi(y) \Big), \quad y \in R
and
A^- \phi(y) = \sqrt{\frac{D}{n}}\,\frac{d\phi}{dy}(y).
Then
A^+ f_n = f_{n+1} \quad \text{and} \quad A^- f_n = f_{n-1}. (66)
Pavliotis (IC) StochProc January 16, 2011 185 / 367
4. The eigenfunctions f_n satisfy the following recurrence relation
y f_n(y) = \sqrt{Dn}\, f_{n-1}(y) + \sqrt{D(n+1)}\, f_{n+1}(y), \quad n \geq 1. (67)
5. The function
H(y; \lambda) = e^{\lambda y - \frac{\lambda^2}{2}}
is a generating function for the Hermite polynomials. In particular
H\Big( \frac{y}{\sqrt{D}}; \lambda \Big) = \sum_{n=0}^\infty \frac{\lambda^n}{\sqrt{n!}} f_n(y), \quad \lambda \in \mathbb{C}, \ y \in R.
Pavliotis (IC) StochProc January 16, 2011 186 / 367
The Klein-Kramers-Chandrasekhar Equation
Consider a diffusion process in two dimensions for the variables q
(position) and momentum p. The generator of this Markov process
is
L = p\,\partial_q - \partial_q V\,\partial_p + \gamma\big( -p\,\partial_p + D\,\partial_p^2 \big). (68)
The L^2(dp\,dq)-adjoint is
L^*\rho = -p\,\partial_q \rho + \partial_q V\,\partial_p \rho + \gamma\big( \partial_p(p\rho) + D\,\partial_p^2 \rho \big).
The corresponding FP equation is:
\frac{\partial p}{\partial t} = L^* p.
The corresponding stochastic differential equation is the
Langevin equation
\ddot{X}_t = -\nabla V(X_t) - \gamma \dot{X}_t + \sqrt{2\gamma D}\,\dot{W}_t. (69)
This is Newton's equation perturbed by dissipation and noise.
Pavliotis (IC) StochProc January 16, 2011 187 / 367
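A minimal sketch integrating the Langevin equation (69) with Euler-Maruyama for a hypothetical double-well potential, checking that at equilibrium the momentum variance equals D (consistent with the Maxwell-Boltzmann distribution); the potential and parameters are illustrative assumptions.

```python
import numpy as np

gamma, D = 1.0, 0.5                       # friction and noise strength (assumption)
dV = lambda q: q**3 - q                   # V(q) = q^4/4 - q^2/2 (example potential)
dt, n_steps = 1e-3, 200_000
rng = np.random.default_rng(4)

q, p = 0.0, 0.0
kin = []
for _ in range(n_steps):
    q += p * dt
    p += (-dV(q) - gamma * p) * dt + np.sqrt(2 * gamma * D * dt) * rng.normal()
    kin.append(p**2)

# at equilibrium E[p^2] = D (= beta^{-1}) under the Maxwell-Boltzmann distribution
print("empirical E[p^2]:", np.mean(kin[n_steps // 2:]), " expected:", D)
```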
The Klein-Kramers-Chandrasekhar equation was first derived by
Kramers in 1923 and was studied by Kramers in his famous paper
"Brownian motion in a field of force and the diffusion model of
chemical reactions", Physica 7 (1940), pp. 284-304.
Notice that L^* is not a uniformly elliptic operator: there are second
order derivatives only with respect to p and not q. This is an
example of a degenerate elliptic operator. It is, however,
hypoelliptic: we can still prove existence and uniqueness of
solutions for the FP equation, and obtain estimates on the
solution.
It is not possible to obtain the solution of the FP equation for an
arbitrary potential.
We can calculate the (unique normalized) solution of the
stationary Fokker-Planck equation.
Pavliotis (IC) StochProc January 16, 2011 188 / 367
Theorem
Assume that V(x) is smooth and that
e^{-V(x)/D} \in L^1(R^d).
Then the Markov process with generator (68) is ergodic. The unique
invariant distribution is the Maxwell-Boltzmann distribution
\rho_\beta(p, q) = \frac{1}{Z} e^{-\beta H(p,q)} (70)
where
H(p, q) = \frac{1}{2}|p|^2 + V(q)
is the Hamiltonian, \beta = (k_B T)^{-1} is the inverse temperature and the
normalization factor Z is the partition function
Z = \int_{R^{2d}} e^{-\beta H(p,q)}\,dp\,dq.
Pavliotis (IC) StochProc January 16, 2011 189 / 367
It is possible to obtain rates of convergence in either a weighted
L^2-norm or the relative entropy norm:
H(p(\cdot, t)\,|\,\rho_\beta) \leq C e^{-\lambda t}.
The proof of this result is quite complicated, since the generator L
is degenerate and non-selfadjoint.
See F. Herau and F. Nier, Isotropic hypoellipticity and trend to
equilibrium for the Fokker-Planck equation with a high-degree
potential, Arch. Ration. Mech. Anal., 171(2), (2004), 151-218.
See also C. Villani, Hypocoercivity, AMS 2008.
Pavliotis (IC) StochProc January 16, 2011 190 / 367
Consider a second order elliptic operator of the form
L = X_0 + \sum_{i=1}^d X_i^* X_i,
where \{X_j\}_{j=0}^d are vector fields (first order differential operators).
Define the Lie algebras
\mathcal{A}_0 = \mathrm{Lie}(X_1, X_2, \dots, X_d),
\mathcal{A}_1 = \mathrm{Lie}([X_0, X_1], [X_0, X_2], \dots, [X_0, X_d]),
\dots
\mathcal{A}_k = \mathrm{Lie}([X_0, X];\ X \in \mathcal{A}_{k-1}), \quad k \geq 1.
Define
\mathcal{H} = \mathrm{Lie}(\mathcal{A}_0, \mathcal{A}_1, \dots).
Hormander's Hypothesis: \mathcal{H} is full in the sense that, for every x \in R^d,
the vectors \{H(x) : H \in \mathcal{H}\} span T_x R^d:
\mathrm{span}\{H(x);\ H \in \mathcal{H}\} = T_x R^d.
Pavliotis (IC) StochProc January 16, 2011 191 / 367
Theorem (Hormander/Kolmogorov)
Under the above hypothesis, the diffusion process X_t with generator L
has a transition density
(P_t f)(x) = \int_{R^d} p(t, x, y) f(y)\,dy,
where p(\cdot, \cdot, \cdot) is a smooth function on (0, +\infty) \times R^d \times R^d. Moreover,
the function p satisfies Kolmogorov's forward and backward equations
\partial_t p(\cdot, x, \cdot) = L^*_y p(\cdot, x, \cdot), \quad \forall x \in R^d (71)
and
\partial_t p(\cdot, \cdot, y) = L_x p(\cdot, \cdot, y), \quad \forall y \in R^d. (72)
For every x \in R^d the function p(\cdot, x, \cdot) is the fundamental solution
(Green's function) of (71), such that, for each f \in C_0(R^d),
\lim_{t \to 0} \int_{R^d} p(t, x, y) f(y)\,dy = f(x).
Pavliotis (IC) StochProc January 16, 2011 192 / 367
Example
Consider the SDE
\ddot{x} = \sqrt{2}\,\dot{W}.
Write it as a first order system
\dot{x} = y, \quad \dot{y} = \sqrt{2}\,\dot{W}.
The generator of this process is
L = X_0 + X_1^2, \quad X_0 = y\,\partial_x, \quad X_1 = \partial_y.
Consequently [X_0, X_1] = -\partial_x and
\mathrm{Lie}(X_1, [X_0, X_1]) = \mathrm{Lie}(\partial_x, \partial_y),
which spans T_x R^2 at every point.
Pavliotis (IC) StochProc January 16, 2011 193 / 367
Let \rho(q, p, t) be the solution of the Kramers equation and let \rho_\beta(q, p)
be the Maxwell-Boltzmann distribution. We can write
\rho(q, p, t) = h(q, p, t)\,\rho_\beta(q, p),
where h(q, p, t) solves the equation
\frac{\partial h}{\partial t} = -\mathcal{A}h + \gamma\mathcal{S}h
where
\mathcal{A} = p\,\partial_q - \partial_q V\,\partial_p, \quad \mathcal{S} = -p\,\partial_p + \beta^{-1}\partial_p^2.
The operator \mathcal{A} is antisymmetric in L^2_\rho := L^2(R^{2d}; \rho_\beta(q, p)), whereas \mathcal{S}
is symmetric.
Pavliotis (IC) StochProc January 16, 2011 194 / 367
Let X_i := \partial_{p_i}. The L^2_\rho-adjoint of X_i is
X_i^* = -\partial_{p_i} + \beta p_i.
We have that
\mathcal{S} = -\frac{1}{\beta}\sum_{i=1}^d X_i^* X_i.
Consequently, the generator of the Markov process \{q(t), p(t)\} can be
written in Hormander's "sum of squares" form:
L = \mathcal{A} - \frac{\gamma}{\beta}\sum_{i=1}^d X_i^* X_i. (73)
We calculate the commutators between the vector fields in (73):
[\mathcal{A}, X_i] = -\partial_{q_i}, \quad [X_i, X_j] = 0, \quad [X_i, X_j^*] = \beta\delta_{ij}.
Pavliotis (IC) StochProc January 16, 2011 195 / 367
Consequently,
\mathrm{Lie}(X_1, \dots, X_d, [\mathcal{A}, X_1], \dots, [\mathcal{A}, X_d]) = \mathrm{Lie}(\partial_p, \partial_q)
which spans T_{p,q} R^{2d} for all p, q \in R^d. This shows that the generator L
is a hypoelliptic operator.
Let now Y_i = \partial_{q_i}, with L^2_\rho-adjoint Y_i^* = -\partial_{q_i} + \beta\frac{\partial V}{\partial q_i}. We have that
X_i^* Y_i - Y_i^* X_i = \beta\Big( p_i\,\partial_{q_i} - \frac{\partial V}{\partial q_i}\,\partial_{p_i} \Big).
Consequently, the generator can be written in the form
L = \frac{1}{\beta}\sum_{i=1}^d \big( X_i^* Y_i - Y_i^* X_i - \gamma X_i^* X_i \big).
Notice also that
L_V := -\nabla_q V \cdot \nabla_q + \beta^{-1}\Delta_q = -\frac{1}{\beta}\sum_{i=1}^d Y_i^* Y_i.
Pavliotis (IC) StochProc January 16, 2011 196 / 367
The phase-space Fokker-Planck equation can be written in the
form
\frac{\partial \rho}{\partial t} + p \cdot \nabla_q \rho - \nabla_q V \cdot \nabla_p \rho = Q(\rho, f_B)
where the collision operator has the form
Q(\rho, f_B) = D\,\nabla_p \cdot \Big( f_B\,\nabla_p\big( f_B^{-1}\rho \big) \Big).
The Fokker-Planck equation has a similar structure to the
Boltzmann equation (the basic equation in the kinetic theory of
gases), with the difference that the collision operator for the FP
equation is linear.
Convergence of solutions of the Boltzmann equation to the
Maxwell-Boltzmann distribution has also been proved. See
L. Desvillettes and C. Villani: On the trend to global equilibrium for
spatially inhomogeneous kinetic systems: the Boltzmann
equation. Invent. Math. 159, 2 (2005), 245-316.
Pavliotis (IC) StochProc January 16, 2011 197 / 367
We can study the backward and forward Kolmogorov equations
for (69) by expanding the solution with respect to the Hermite
basis.
We consider the problem in 1d. We set D = 1. The generator of
the process is:
L = p\,\partial_q - V'(q)\,\partial_p + \gamma\big( -p\,\partial_p + \partial_p^2 \big)
=: L_1 + \gamma L_0,
where
L_0 := -p\,\partial_p + \partial_p^2 \quad \text{and} \quad L_1 := p\,\partial_q - V'(q)\,\partial_p.
The backward Kolmogorov equation is
\frac{\partial h}{\partial t} = Lh. (74)
Pavliotis (IC) StochProc January 16, 2011 198 / 367
The solution should be an element of the weighted L^2-space
L^2_\rho = \Big\{ f \ \Big|\ \int_{R^2} |f|^2 Z^{-1} e^{-H(p,q)}\,dp\,dq < \infty \Big\}.
We notice that the invariant measure of our Markov process is a
product measure:
e^{-H(p,q)} = e^{-\frac{1}{2}|p|^2} e^{-V(q)}.
The space L^2(e^{-\frac{1}{2}|p|^2} dp) is spanned by the Hermite polynomials.
Consequently, we can expand the solution of (74) into the Hermite basis:
h(p, q, t) = \sum_{n=0}^\infty h_n(q, t) f_n(p), (75)
where f_n(p) = \frac{1}{\sqrt{n!}} H_n(p).
Pavliotis (IC) StochProc January 16, 2011 199 / 367
Our plan is to substitute (75) into (74) and obtain a sequence of
equations for the coefficients h_n(q, t).
We have:
L_0 h = L_0 \sum_{n=0}^\infty h_n f_n = -\sum_{n=0}^\infty n\,h_n f_n.
Furthermore,
L_1 h = -\partial_q V\,\partial_p h + p\,\partial_q h.
We calculate each term on the right hand side of the above
equation separately. For this we will need the formulas
\partial_p f_n = \sqrt{n}\, f_{n-1} \quad \text{and} \quad p f_n = \sqrt{n}\, f_{n-1} + \sqrt{n+1}\, f_{n+1}.
Pavliotis (IC) StochProc January 16, 2011 200 / 367
p\,\partial_q h = p\,\partial_q \sum_{n=0}^\infty h_n f_n = p\,\partial_q h_0 + \sum_{n=1}^\infty \partial_q h_n\, p f_n
= \partial_q h_0\, f_1 + \sum_{n=1}^\infty \partial_q h_n\big( \sqrt{n}\, f_{n-1} + \sqrt{n+1}\, f_{n+1} \big)
= \sum_{n=0}^\infty \big( \sqrt{n+1}\,\partial_q h_{n+1} + \sqrt{n}\,\partial_q h_{n-1} \big) f_n
with h_{-1} \equiv 0.
Furthermore,
\partial_q V\,\partial_p h = \sum_{n=0}^\infty \partial_q V\, h_n\,\partial_p f_n = \sum_{n=0}^\infty \partial_q V\, h_n \sqrt{n}\, f_{n-1}
= \sum_{n=0}^\infty \partial_q V\, h_{n+1}\sqrt{n+1}\, f_n.
Pavliotis (IC) StochProc January 16, 2011 201 / 367
Consequently:
Lh = \gamma L_0 h + L_1 h
= \sum_{n=0}^\infty \Big( -\gamma n\,h_n + \sqrt{n+1}\,\partial_q h_{n+1} + \sqrt{n}\,\partial_q h_{n-1} - \sqrt{n+1}\,\partial_q V\, h_{n+1} \Big) f_n
Using the orthonormality of the eigenfunctions of L_0 we obtain the
following set of equations which determine \{h_n(q, t)\}_{n=0}^\infty:
\dot{h}_n = -\gamma n\,h_n + \sqrt{n+1}\,\partial_q h_{n+1} + \sqrt{n}\,\partial_q h_{n-1} - \sqrt{n+1}\,\partial_q V\, h_{n+1}, \quad n = 0, 1, \dots
This set of equations is usually called the Brinkman hierarchy
(1956).
Pavliotis (IC) StochProc January 16, 2011 202 / 367
We can use this approach to develop a numerical method for
solving the Klein-Kramers equation.
For this we need to expand each coefficient h_n in an appropriate
basis with respect to q.
Obvious choices are either the Hermite basis (polynomial
potentials) or the standard Fourier basis (periodic potentials).
We will do this for the case of periodic potentials.
The resulting method is usually called the continued fraction
expansion. See Risken (1989).
Pavliotis (IC) StochProc January 16, 2011 203 / 367
The Hermite expansion of the distribution function with respect to the
velocity is used in the study of various kinetic equations (including
the Boltzmann equation). It was initiated by Grad in the late 40's.
It is quite often used in the approximate calculation of transport
coefficients (e.g. the diffusion coefficient).
This expansion can be justified rigorously for the Fokker-Planck
equation. See
J. Meyer and J. Schroter, Comments on the Grad Procedure for
the Fokker-Planck Equation, J. Stat. Phys. 32(1) pp. 53-69 (1983).
This expansion can also be used in order to solve the Poisson
equation -L\phi = f(p, q). See G.A. Pavliotis and T. Vogiannou,
Diffusive Transport in Periodic Potentials: Underdamped
dynamics, Fluct. Noise Lett., 8(2) L155-L173 (2008).
Pavliotis (IC) StochProc January 16, 2011 204 / 367
There are very few potentials for which we can calculate the
eigenvalues and eigenfunctions of the generator of the Markov
process \{q(t), p(t)\}.
We can calculate everything explicitly for the quadratic (harmonic)
potential
V(q) = \frac{1}{2}\omega_0^2 q^2. (76)
The Langevin equation is
\ddot{q} = -\omega_0^2 q - \gamma\dot{q} + \sqrt{2\gamma\beta^{-1}}\,\dot{W} (77)
or
\dot{q} = p, \quad \dot{p} = -\omega_0^2 q - \gamma p + \sqrt{2\gamma\beta^{-1}}\,\dot{W}. (78)
This is a linear equation that can be solved explicitly (the solution
is a Gaussian stochastic process).
Pavliotis (IC) StochProc January 16, 2011 205 / 367
The generator of the process \{q(t), p(t)\} is
L = p\,\partial_q - \omega_0^2 q\,\partial_p + \gamma\big( -p\,\partial_p + \beta^{-1}\partial_p^2 \big). (79)
The Fokker-Planck operator is
L^* = -p\,\partial_q + \omega_0^2 q\,\partial_p + \gamma\big( \partial_p(p\,\cdot) + \beta^{-1}\partial_p^2 \big). (80)
The process \{q(t), p(t)\} is an ergodic Markov process with
Gaussian invariant measure
\rho_\beta(q, p)\,dq\,dp = \frac{\beta\omega_0}{2\pi}\, e^{-\frac{\beta}{2}p^2 - \frac{\beta\omega_0^2}{2}q^2}\,dq\,dp. (81)
Pavliotis (IC) StochProc January 16, 2011 206 / 367
For the calculation of the eigenvalues and eigenfunctions of the
operator L it is convenient to introduce creation and annihilation
operators in both the position and momentum variables:
a^- = \beta^{-1/2}\partial_p, \quad a^+ = -\beta^{-1/2}\partial_p + \beta^{1/2} p (82)
and
b^- = \omega_0^{-1}\beta^{-1/2}\partial_q, \quad b^+ = -\omega_0^{-1}\beta^{-1/2}\partial_q + \omega_0\beta^{1/2} q. (83)
We have that
a^+ a^- = -\beta^{-1}\partial_p^2 + p\,\partial_p \quad \text{and} \quad b^+ b^- = -\beta^{-1}\omega_0^{-2}\partial_q^2 + q\,\partial_q
Consequently, the operator
\widehat{L} = -a^+ a^- - b^+ b^- (84)
is the generator of the OU process in two dimensions.
Pavliotis (IC) StochProc January 16, 2011 207 / 367
Exercises
1. Calculate the eigenvalues and eigenfunctions of \widehat{L}. Show that
there exists a transformation that transforms \widehat{L} into the
Schrodinger operator of the two-dimensional quantum harmonic
oscillator.
2. Show that the operators a^\pm, b^\pm satisfy the commutation relations
[a^+, a^-] = -1, (85a)
[b^+, b^-] = -1, (85b)
[a^\pm, b^\pm] = 0. (85c)
Pavliotis (IC) StochProc January 16, 2011 208 / 367
Using the operators a^\pm and b^\pm we can write the generator L in the
form
L = -\gamma a^+ a^- - \omega_0\big( b^+ a^- - a^+ b^- \big), (86)
We want to find first order differential operators c^\pm and d^\pm so that
the operator (86) becomes
L = -C c^+ c^- - D d^+ d^- (87)
for some appropriate constants C and D.
Our goal is, essentially, to map L to the two-dimensional OU
process.
We require that the operators c^\pm and d^\pm satisfy the
canonical commutation relations
[c^+, c^-] = -1, (88a)
[d^+, d^-] = -1, (88b)
[c^\pm, d^\pm] = 0. (88c)
Pavliotis (IC) StochProc January 16, 2011 209 / 367
The operators c^\pm and d^\pm should be given as linear combinations
of the old operators a^\pm and b^\pm. They should be of the form
c^+ = \alpha_{11} a^+ + \alpha_{12} b^+, (89a)
c^- = \alpha_{21} a^- + \alpha_{22} b^-, (89b)
d^+ = \beta_{11} a^+ + \beta_{12} b^+, (89c)
d^- = \beta_{21} a^- + \beta_{22} b^-. (89d)
Notice that c^- and d^- are not the adjoints of c^+ and d^+.
If we substitute now these equations into (87) and equate it
with (86) and into the commutation relations (88) we obtain a
system of equations for the coefficients \{\alpha_{ij}\}, \{\beta_{ij}\}.
Pavliotis (IC) StochProc January 16, 2011 210 / 367
In order to write down the formulas for these coefficients it is
convenient to introduce the eigenvalues of the deterministic
problem
\ddot{q} = -\gamma\dot{q} - \omega_0^2 q.
The solution of this equation is
q(t) = C_1 e^{-\lambda_1 t} + C_2 e^{-\lambda_2 t}
with
\lambda_{1,2} = \frac{\gamma \pm \delta}{2}, \quad \delta = \sqrt{\gamma^2 - 4\omega_0^2}. (90)
The eigenvalues satisfy the relations
\lambda_1 + \lambda_2 = \gamma, \quad \lambda_1 - \lambda_2 = \delta, \quad \lambda_1\lambda_2 = \omega_0^2. (91)
Pavliotis (IC) StochProc January 16, 2011 211 / 367
Lemma
Let L be the generator (86) and let c^\pm, d^\pm be the operators
c^+ = \frac{1}{\sqrt{\delta}}\big( \sqrt{\lambda_1}\, a^+ + \sqrt{\lambda_2}\, b^+ \big), \quad c^- = \frac{1}{\sqrt{\delta}}\big( \sqrt{\lambda_1}\, a^- - \sqrt{\lambda_2}\, b^- \big), (92a)
d^+ = \frac{1}{\sqrt{\delta}}\big( \sqrt{\lambda_2}\, a^+ + \sqrt{\lambda_1}\, b^+ \big), \quad d^- = \frac{1}{\sqrt{\delta}}\big( -\sqrt{\lambda_2}\, a^- + \sqrt{\lambda_1}\, b^- \big). (92b)
Then c^\pm, d^\pm satisfy the canonical commutation relations (88) as well
as
[L, c^\pm] = \mp\lambda_1 c^\pm, \quad [L, d^\pm] = \mp\lambda_2 d^\pm. (93)
Furthermore, the operator L can be written in the form
L = -\lambda_1 c^+ c^- - \lambda_2 d^+ d^-. (94)
Pavliotis (IC) StochProc January 16, 2011 212 / 367
Proof. First we check the commutation relations:
[c^+, c^-] = \frac{1}{\delta}\big( \lambda_1 [a^+, a^-] - \lambda_2 [b^+, b^-] \big) = \frac{1}{\delta}( -\lambda_1 + \lambda_2 ) = -1.
Similarly,
[d^+, d^-] = \frac{1}{\delta}\big( -\lambda_2 [a^+, a^-] + \lambda_1 [b^+, b^-] \big) = \frac{1}{\delta}( \lambda_2 - \lambda_1 ) = -1.
Clearly, we have that
[c^+, d^+] = [c^-, d^-] = 0.
Furthermore,
[c^+, d^-] = \frac{1}{\delta}\big( -\sqrt{\lambda_1\lambda_2}\,[a^+, a^-] + \sqrt{\lambda_1\lambda_2}\,[b^+, b^-] \big) = \frac{1}{\delta}\big( \sqrt{\lambda_1\lambda_2} - \sqrt{\lambda_1\lambda_2} \big) = 0.
Pavliotis (IC) StochProc January 16, 2011 213 / 367
Finally:
[L, c^+] = -\lambda_1 c^+ c^- c^+ - \lambda_2 d^+ d^- c^+ + \lambda_1 c^+ c^+ c^- + \lambda_2 c^+ d^+ d^-
= -\lambda_1 c^+\big( c^- c^+ - c^+ c^- \big) = -\lambda_1 c^+ [c^-, c^+] = -\lambda_1 c^+,
and similarly for the other equations in (93). Now we calculate
-\lambda_1 c^+ c^- - \lambda_2 d^+ d^-
= -\frac{\lambda_1^2 - \lambda_2^2}{\delta}\, a^+ a^- + 0 \cdot b^+ b^- + \sqrt{\lambda_1\lambda_2}\, a^+ b^- - \sqrt{\lambda_1\lambda_2}\, b^+ a^-
= -\gamma\, a^+ a^- - \omega_0\big( b^+ a^- - a^+ b^- \big),
which is precisely (86). In the above calculation we used (91).
Pavliotis (IC) StochProc January 16, 2011 214 / 367
Theorem
The eigenvalues and eigenfunctions of the generator of the Markov
process \{q, p\} (78) are
\lambda_{nm} = \lambda_1 n + \lambda_2 m = \frac{1}{2}\gamma(n + m) + \frac{1}{2}\delta(n - m), \quad n, m = 0, 1, \dots (95)
and
\phi_{nm}(q, p) = \frac{1}{\sqrt{n!m!}}\,(c^+)^n (d^+)^m\, 1, \quad n, m = 0, 1, \dots (96)
Pavliotis (IC) StochProc January 16, 2011 215 / 367
We have
[L, (c^+)^2] = L(c^+)^2 - (c^+)^2 L
= (c^+ L - \lambda_1 c^+)c^+ - c^+(c^+ L + \lambda_1 c^+)
+ 2 c^+ c^+ L - 2 c^+ c^+ L
= -2\lambda_1 (c^+)^2
and similarly [L, (d^+)^2] = -2\lambda_2 (d^+)^2. A simple induction argument
now shows that (exercise)
[L, (c^+)^n] = -n\lambda_1 (c^+)^n \quad \text{and} \quad [L, (d^+)^m] = -m\lambda_2 (d^+)^m. (97)
We use (97) to calculate
L(c^+)^n(d^+)^m 1 = (c^+)^n L(d^+)^m 1 - n\lambda_1 (c^+)^n(d^+)^m 1
= (c^+)^n(d^+)^m L1 - m\lambda_2 (c^+)^n(d^+)^m 1 - n\lambda_1 (c^+)^n(d^+)^m 1
= -\big( n\lambda_1 + m\lambda_2 \big)(c^+)^n(d^+)^m 1,
from which (95) and (96) follow.
Pavliotis (IC) StochProc January 16, 2011 216 / 367
Exercise
1. Show that
[L, (c^-)^n] = n\lambda_1 (c^-)^n, \quad [L, (d^-)^n] = n\lambda_2 (d^-)^n, (98)
[c^-, (c^+)^n] = n(c^+)^{n-1}, \quad [d^-, (d^+)^n] = n(d^+)^{n-1}. (99)
Remark: In terms of the operators a^\pm, b^\pm the eigenfunctions of L are
\phi_{nm} = \sqrt{n!m!}\,\delta^{-\frac{n+m}{2}}\lambda_1^{n/2}\lambda_2^{m/2}
\sum_{\ell=0}^n \sum_{k=0}^m \frac{1}{k!(m-k)!\,\ell!(n-\ell)!}\Big( \frac{\lambda_2}{\lambda_1} \Big)^{\frac{\ell-k}{2}} (a^+)^{n+m-\ell-k}(b^+)^{\ell+k}\, 1.
Pavliotis (IC) StochProc January 16, 2011 217 / 367
The first few eigenfunctions are
\phi_{00} = 1,
\phi_{10} = \sqrt{\beta/\delta}\big( \sqrt{\lambda_1}\, p + \sqrt{\lambda_2}\,\omega_0 q \big),
\phi_{01} = \sqrt{\beta/\delta}\big( \sqrt{\lambda_2}\, p + \sqrt{\lambda_1}\,\omega_0 q \big),
\phi_{11} = \frac{\omega_0}{\delta}\big( -2 + \beta p^2 + \beta\gamma\, p q + \beta\omega_0^2 q^2 \big),
\phi_{20} = \frac{1}{\sqrt{2}\,\delta}\big( -\gamma + \beta\lambda_1 p^2 + 2\beta\omega_0^2\, p q + \beta\lambda_2\omega_0^2 q^2 \big),
\phi_{02} = \frac{1}{\sqrt{2}\,\delta}\big( -\gamma + \beta\lambda_2 p^2 + 2\beta\omega_0^2\, p q + \beta\lambda_1\omega_0^2 q^2 \big).
Pavliotis (IC) StochProc January 16, 2011 218 / 367
The eigenfunctions are not orthonormal.
The first eigenvalue, corresponding to the constant eigenfunction,
is 0:
\lambda_{00} = 0.
The operator L is not self-adjoint and consequently, we do not
expect its eigenvalues to be real.
Whether the eigenvalues are real or not depends on the sign of
the discriminant \Delta = \gamma^2 - 4\omega_0^2.
In the underdamped regime, \gamma < 2\omega_0, the eigenvalues are
complex:
\lambda_{nm} = \frac{1}{2}\gamma(n + m) + \frac{1}{2} i \sqrt{-\gamma^2 + 4\omega_0^2}\,(n - m), \quad \gamma < 2\omega_0.
This is to be expected, since in the underdamped regime the
dynamics is dominated by the deterministic Hamiltonian dynamics
that give rise to the antisymmetric Liouville operator.
We set \omega = \frac{1}{2}\sqrt{4\omega_0^2 - \gamma^2}, i.e. \delta = 2i\omega. The eigenvalues can be
written as
\lambda_{nm} = \frac{\gamma}{2}(n + m) + i\omega(n - m).
Pavliotis (IC) StochProc January 16, 2011 219 / 367
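A sketch reproducing the eigenvalue pattern \lambda_{nm} = \lambda_1 n + \lambda_2 m numerically; the choice \gamma = \omega_0 = 1 is an illustrative underdamped case (an assumption), for which all eigenvalues lie in a cone in the right half-plane and the diagonal n = m is real.

```python
import numpy as np

gamma, omega0 = 1.0, 1.0                      # underdamped: gamma < 2*omega0
delta = np.sqrt(complex(gamma**2 - 4 * omega0**2))
lam1, lam2 = (gamma + delta) / 2, (gamma - delta) / 2

eigs = [lam1 * n + lam2 * m for n in range(4) for m in range(4)]
for lam in sorted(eigs, key=lambda z: (z.real, z.imag)):
    print(f"{lam.real: .3f} {lam.imag:+.3f}i")
# real parts are gamma*(n+m)/2 >= 0; the diagonal n = m gives purely real values gamma*n
```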
Figure: First few eigenvalues of L for \gamma = \omega = 1 (horizontal axis: Re \lambda_{nm}, vertical axis: Im \lambda_{nm}).
Pavliotis (IC) StochProc January 16, 2011 220 / 367
In Figure 34 we present the first few eigenvalues of L in the
underdamped regime.
The eigenvalues are contained in a cone on the right half of the
complex plane. The cone is determined by
\lambda_{n0} = \frac{\gamma}{2}n + i\omega n \quad \text{and} \quad \lambda_{0m} = \frac{\gamma}{2}m - i\omega m.
The eigenvalues along the diagonal are real:
\lambda_{nn} = \gamma n.
In the overdamped regime, \gamma \geq 2\omega_0, all eigenvalues are real:
\lambda_{nm} = \frac{1}{2}\gamma(n + m) + \frac{1}{2}\sqrt{\gamma^2 - 4\omega_0^2}\,(n - m), \quad \gamma \geq 2\omega_0.
In fact, in the overdamped limit \gamma \to +\infty, the eigenvalues of the
generator L converge to the eigenvalues of the generator of the
OU process:
\lambda_{nm} = \gamma n - \frac{\omega_0^2}{\gamma}(n - m) + O(\gamma^{-3}).
Pavliotis (IC) StochProc January 16, 2011 221 / 367
The eigenfunctions of L do not form an orthonormal basis in
L^2_\beta := L^2(R^2, Z^{-1}e^{-\beta H}) since L is not a self-adjoint operator.
Using the eigenfunctions/eigenvalues of L we can easily calculate
the eigenfunctions/eigenvalues of the L^2_\beta-adjoint of L.
The adjoint operator is
\widehat{L} := -\mathcal{A} + \gamma\mathcal{S}
= \omega_0\big( b^+ a^- - a^+ b^- \big) - \gamma a^+ a^-
= -\lambda_1 (c^-)^*(c^+)^* - \lambda_2 (d^-)^*(d^+)^*,
where
(c^+)^* = \frac{1}{\sqrt{\delta}}\big( \sqrt{\lambda_1}\, a^- + \sqrt{\lambda_2}\, b^- \big), \quad (c^-)^* = \frac{1}{\sqrt{\delta}}\big( \sqrt{\lambda_1}\, a^+ - \sqrt{\lambda_2}\, b^+ \big), (100a)
(d^+)^* = \frac{1}{\sqrt{\delta}}\big( \sqrt{\lambda_2}\, a^- + \sqrt{\lambda_1}\, b^- \big), \quad (d^-)^* = \frac{1}{\sqrt{\delta}}\big( -\sqrt{\lambda_2}\, a^+ + \sqrt{\lambda_1}\, b^+ \big). (100b)
Pavliotis (IC) StochProc January 16, 2011 222 / 367
Exercise
1. Show by direct substitution that \widehat{L} can be written in the form
\widehat{L} = -\lambda_1 (c^-)^*(c^+)^* - \lambda_2 (d^-)^*(d^+)^*.
2. Calculate the commutators
[(c^+)^*, (c^-)^*], \quad [(d^+)^*, (d^-)^*], \quad [(c^\pm)^*, (d^\pm)^*], \quad [\widehat{L}, (c^\pm)^*], \quad [\widehat{L}, (d^\pm)^*].
Remarks:
1. \widehat{L} has the same eigenvalues as L:
-\widehat{L}\psi_{nm} = \lambda_{nm}\psi_{nm},
where \lambda_{nm} are given by (95).
2. The eigenfunctions are
\psi_{nm} = \frac{1}{\sqrt{n!m!}}\,((c^-)^*)^n ((d^-)^*)^m\, 1. (101)
Pavliotis (IC) StochProc January 16, 2011 223 / 367
Theorem
The eigenfunctions of L and \widehat{L} satisfy the biorthonormality relation
\int\!\!\int \phi_{nm}\,\psi_{\ell k}\,\rho_\beta\,dp\,dq = \delta_{n\ell}\delta_{mk}. (102)
Proof. We will use formulas (98) and (99). Notice that using the third and fourth
of these equations together with the fact that c^- 1 = d^- 1 = 0 we can
conclude that (for n \geq \ell)
(c^-)^\ell (c^+)^n\, 1 = n(n-1)\dots(n-\ell+1)\,(c^+)^{n-\ell}\, 1. (103)
We have
\int\!\!\int \phi_{nm}\,\psi_{\ell k}\,\rho_\beta\,dp\,dq = \frac{1}{\sqrt{n!m!\ell!k!}} \int\!\!\int (c^+)^n(d^+)^m 1\, ((c^-)^*)^\ell ((d^-)^*)^k 1\, \rho_\beta\,dp\,dq
= \frac{n(n-1)\dots(n-\ell+1)\,m(m-1)\dots(m-k+1)}{\sqrt{n!m!\ell!k!}} \int\!\!\int (c^+)^{n-\ell}(d^+)^{m-k} 1\, \rho_\beta\,dp\,dq
= \delta_{n\ell}\delta_{mk},
since all eigenfunctions average to 0 with respect to \rho_\beta.
Pavliotis (IC) StochProc January 16, 2011 224 / 367
From the eigenfunctions of \widehat{L} we can obtain the eigenfunctions of
the Fokker-Planck operator. Using the formula
L^*(f\rho_\beta) = \rho_\beta\,\widehat{L} f
we immediately conclude that the Fokker-Planck operator has
the same eigenvalues as those of L and \widehat{L}. The eigenfunctions
are
\psi^*_{nm} = \rho_\beta\,\psi_{nm} = \frac{1}{\sqrt{n!m!}}\,\rho_\beta\,((c^-)^*)^n ((d^-)^*)^m\, 1. (104)
Pavliotis (IC) StochProc January 16, 2011 225 / 367
STOCHASTIC DIFFERENTIAL EQUATIONS (SDEs)
Pavliotis (IC) StochProc January 16, 2011 226 / 367
In this part of the course we will study stochastic differential
equations (SDEs): ODEs driven by Gaussian white noise.
Let W(t) denote a standard m-dimensional Brownian motion,
h : Z \to R^d a smooth vector-valued function and \gamma : Z \to R^{d\times m} a
smooth matrix-valued function (in this course we will take
Z = T^d, R^d or R^l \times T^{d-l}).
Consider the SDE
\frac{dz}{dt} = h(z) + \gamma(z)\frac{dW}{dt}, \quad z(0) = z_0. (105)
We think of the term \frac{dW}{dt} as representing Gaussian white noise: a
mean-zero Gaussian process with correlation \delta(t - s)I.
The function h in (105) is sometimes referred to as the drift and \gamma
as the diffusion coefficient.
Pavliotis (IC) StochProc January 16, 2011 227 / 367
Such a process exists only as a distribution. The precise
interpretation of (105) is as an integral equation for
z(t) \in C(R^+, Z):
z(t) = z_0 + \int_0^t h(z(s))\,ds + \int_0^t \gamma(z(s))\,dW(s). (106)
In order to make sense of this equation we need to define the
stochastic integral against W(s).
Pavliotis (IC) StochProc January 16, 2011 228 / 367
The Ito Stochastic Integral
For the rigorous analysis of stochastic differential equations it is
necessary to define stochastic integrals of the form
I(t) = \int_0^t f(s)\,dW(s), (107)
where W(t) is a standard one dimensional Brownian motion. This
is not straightforward because W(t) does not have bounded
variation.
In order to define the stochastic integral we assume that f(t) is a
random process, adapted to the filtration \mathcal{F}_t generated by the
process W(t), and such that
\mathbb{E}\Big( \int_0^T f(s)^2\,ds \Big) < \infty.
Pavliotis (IC) StochProc January 16, 2011 229 / 367
The Ito stochastic integral I(t) is defined as the L^2 limit of the
Riemann sum approximation of (107):
I(t) := \lim_{K \to \infty} \sum_{k=1}^{K-1} f(t_{k-1})\,(W(t_k) - W(t_{k-1})), (108)
where t_k = k\Delta t and K\Delta t = t.
Notice that the function f(t) is evaluated at the left end of each
interval [t_{k-1}, t_k] in (108).
The resulting Ito stochastic integral I(t) is a.s. continuous in t.
These ideas are readily generalized to the case where W(s) is a
standard d-dimensional Brownian motion and f(s) \in R^{m\times d} for
each s.
Pavliotis (IC) StochProc January 16, 2011 230 / 367
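A sketch of the left-endpoint Riemann sum (108) for \int_0^t W(s)\,dW(s): the limit is (W(t)^2 - t)/2, and the Ito isometry gives E|I(t)|^2 = \int_0^t E W(s)^2 ds = t^2/2. The discretisation parameters are illustrative.

```python
import numpy as np

T, K, n_paths = 1.0, 1000, 20_000
dt = T / K
rng = np.random.default_rng(5)

dW = np.sqrt(dt) * rng.normal(size=(n_paths, K))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])   # W(t_{k-1})

I = np.sum(W_left * dW, axis=1)                # left-endpoint (Ito) Riemann sum
print("mean (should be ~0):", I.mean())
print("E|I|^2:", np.mean(I**2), "  Ito isometry gives T^2/2 =", T**2 / 2)
print("max |I - (W_T^2 - T)/2|:", np.max(np.abs(I - (W[:, -1]**2 - T) / 2)))
```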
The resulting integral satisfies the Ito isometry
\mathbb{E}|I(t)|^2 = \int_0^t \mathbb{E}|f(s)|_F^2\,ds, (109)
where |\cdot|_F denotes the Frobenius norm |A|_F = \sqrt{\mathrm{tr}(A^T A)}.
The Ito stochastic integral is a martingale:
\mathbb{E} I(t) = 0
and
\mathbb{E}[I(t)|\mathcal{F}_s] = I(s) \quad \forall\, t \geq s,
where \mathcal{F}_s denotes the filtration generated by W(s).
Pavliotis (IC) StochProc January 16, 2011 231 / 367
Example
Consider the Ito stochastic integral
I(t) = \int_0^t f(s)\,dW(s),
where f, W are scalar-valued. This is a martingale with quadratic
variation
\langle I \rangle_t = \int_0^t (f(s))^2\,ds.
More generally, for f, W in arbitrary finite dimensions, the integral
I(t) is a martingale with quadratic variation
\langle I \rangle_t = \int_0^t (f(s) \otimes f(s))\,ds.
Pavliotis (IC) StochProc January 16, 2011 232 / 367
The Stratonovich Stochastic Integral
In addition to the Ito stochastic integral, we can also define the
Stratonovich stochastic integral. It is defined as the L^2 limit of a
different Riemann sum approximation of (107), namely
I_{\text{strat}}(t) := \lim_{K \to \infty} \sum_{k=1}^{K-1} \frac{1}{2}\big( f(t_{k-1}) + f(t_k) \big)\,(W(t_k) - W(t_{k-1})), (110)
where t_k = k\Delta t and K\Delta t = t. Notice that the function f(t) is
evaluated at both endpoints of each interval [t_{k-1}, t_k] in (110).
The multidimensional Stratonovich integral is defined in a similar
way. The resulting integral is written as
I_{\text{strat}}(t) = \int_0^t f(s) \circ dW(s).
Pavliotis (IC) StochProc January 16, 2011 233 / 367
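Evaluating the same integrand with the midpoint rule (110) gives the Stratonovich value; for \int W \circ dW this is W(t)^2/2, differing from the Ito value by t/2. A sketch reusing the construction above (illustrative parameters):

```python
import numpy as np

T, K, n_paths = 1.0, 1000, 20_000
dt = T / K
rng = np.random.default_rng(6)

dW = np.sqrt(dt) * rng.normal(size=(n_paths, K))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])

ito = np.sum(W_left * dW, axis=1)
strat = np.sum(0.5 * (W_left + W) * dW, axis=1)    # average of both endpoints
print("mean(Stratonovich - Ito):", np.mean(strat - ito), "  theory: T/2 =", T / 2)
```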
The limit in (110) gives rise to an integral which differs from the Ito
integral.
The situation is more complex than that arising in the standard
theory of Riemann integration for functions of bounded variation:
in that case the points in [t_{k-1}, t_k] where the integrand is evaluated
do not affect the definition of the integral, via a limiting process.
In the case of integration against Brownian motion, which does not
have bounded variation, the limits differ.
When f and W are correlated through an SDE, then a formula
exists to convert between them.
Pavliotis (IC) StochProc January 16, 2011 234 / 367
Existence and Uniqueness of solutions for SDEs
Definition
By a solution of (105) we mean a Z-valued stochastic process \{z(t)\}
on t \in [0, T] with the properties:
1. z(t) is continuous and \mathcal{F}_t-adapted, where the filtration is
generated by the Brownian motion W(t);
2. h(z(t)) \in L^1((0, T)), \gamma(z(t)) \in L^2((0, T));
3. equation (105) holds for every t \in [0, T] with probability 1.
The solution is called unique if any two solutions x_i(t), i = 1, 2 satisfy
P(x_1(t) = x_2(t), \ \forall t \in [0, T]) = 1.
Pavliotis (IC) StochProc January 16, 2011 235 / 367
It is well known that existence and uniqueness of solutions for
ODEs (i.e. when \gamma \equiv 0 in (105)) holds for globally Lipschitz vector
fields h(x).
A very similar theorem holds when \gamma \neq 0.
As for ODEs, the conditions can be weakened when a priori
bounds on the solution can be found.
Pavliotis (IC) StochProc January 16, 2011 236 / 367
Theorem
Assume that both h(\cdot) and \gamma(\cdot) are globally Lipschitz on Z and that z_0
is a random variable independent of the Brownian motion W(t) with
\mathbb{E}|z_0|^2 < \infty.
Then the SDE (105) has a unique solution z(t) \in C(R^+; Z) with
\mathbb{E}\Big( \int_0^T |z(t)|^2\,dt \Big) < \infty \quad \forall\, T < \infty.
Furthermore, the solution of the SDE is a Markov process.
Pavliotis (IC) StochProc January 16, 2011 237 / 367
Remarks
The Stratonovich analogue of (105) is
\frac{dz}{dt} = h(z) + \gamma(z) \circ \frac{dW}{dt}, \quad z(0) = z_0. (111)
By this we mean that z \in C(R^+, Z) satisfies the integral equation
z(t) = z(0) + \int_0^t h(z(s))\,ds + \int_0^t \gamma(z(s)) \circ dW(s). (112)
By using definitions (108) and (110) it can be shown that z
satisfying the Stratonovich SDE (111) also satisfies the Ito SDE
\frac{dz}{dt} = h(z) + \frac{1}{2}\nabla \cdot \big( \gamma(z)\gamma(z)^T \big) - \frac{1}{2}\gamma(z)\,\nabla \cdot \big( \gamma(z)^T \big) + \gamma(z)\frac{dW}{dt}, (113a)
z(0) = z_0, (113b)
provided that \gamma(z) is differentiable.
Pavliotis (IC) StochProc January 16, 2011 238 / 367
White noise is, in most applications, an idealization of a stationary random process with short correlation time. In this context the Stratonovich interpretation of an SDE is particularly important because it often arises as the limit obtained by using smooth approximations to white noise.
On the other hand, the martingale machinery which comes with the Itô integral makes it more important as a mathematical object.
It is very useful that we can convert between the Itô and the Stratonovich interpretations of the stochastic integral.
There are other interpretations of the stochastic integral, e.g. the Klimontovich stochastic integral.
Pavliotis (IC) StochProc January 16, 2011 239 / 367
The definition of Brownian motion implies the scaling property
\[ W(ct) = \sqrt{c}\, W(t), \]
where the above should be interpreted as holding in law. From this it follows that, if $s = ct$, then
\[ \frac{dW}{ds} = \frac{1}{\sqrt{c}}\,\frac{dW}{dt}, \]
again in law.
Hence, if we scale time to $s = ct$ in (105), then we get the equation
\[ \frac{dz}{ds} = \frac{1}{c}\, h(z) + \frac{1}{\sqrt{c}}\,\sigma(z)\frac{dW}{ds}, \qquad z(0) = z_0. \]
Pavliotis (IC) StochProc January 16, 2011 240 / 367
The Stratonovich Stochastic Integral: A First Application of Multiscale Methods
When white noise is approximated by a smooth process this often leads to Stratonovich interpretations of stochastic integrals, at least in one dimension.
We use multiscale analysis (singular perturbation theory for Markov processes) to illustrate this phenomenon in a one-dimensional example.
Consider the equations
\[ \frac{dx}{dt} = h(x) + \frac{1}{\varepsilon}\, f(x)\, y, \qquad (114a) \]
\[ \frac{dy}{dt} = -\frac{\alpha y}{\varepsilon^2} + \sqrt{\frac{2D}{\varepsilon^2}}\,\frac{dV}{dt}, \qquad (114b) \]
with $V$ being a standard one-dimensional Brownian motion.
Pavliotis (IC) StochProc January 16, 2011 241 / 367
We say that the process $x(t)$ is driven by colored noise: the noise that appears in (114a) has non-zero correlation time. The correlation function of the colored noise $\eta(t) := y(t)/\varepsilon$ is (we take $y(0) = 0$)
\[ R(t) = \mathbb{E}\big(\eta(t)\eta(s)\big) = \frac{D}{\alpha\varepsilon^2}\, e^{-\frac{\alpha}{\varepsilon^2}|t-s|}. \]
The power spectrum of the colored noise $\eta(t)$ is
\[ f^{\varepsilon}_{\eta}(x) = \frac{1}{\pi}\,\frac{D}{\alpha\varepsilon^2}\,\frac{\alpha\varepsilon^{-2}}{x^2 + (\alpha\varepsilon^{-2})^2} = \frac{D}{\pi}\,\frac{1}{\varepsilon^4 x^2 + \alpha^2} \;\longrightarrow\; \frac{D}{\pi\alpha^2} \]
and, consequently,
\[ \lim_{\varepsilon\to 0}\mathbb{E}\left( \frac{y(t)}{\varepsilon}\,\frac{y(s)}{\varepsilon}\right) = \frac{2D}{\alpha^2}\,\delta(t-s), \]
which implies the heuristic
\[ \lim_{\varepsilon\to 0}\frac{y(t)}{\varepsilon} = \sqrt{\frac{2D}{\alpha^2}}\,\frac{dV}{dt}. \qquad (115) \]
Pavliotis (IC) StochProc January 16, 2011 242 / 367
Another way of seeing this is by solving (114b) for $y/\varepsilon$:
\[ \frac{y}{\varepsilon} = \sqrt{\frac{2D}{\alpha^2}}\,\frac{dV}{dt} - \frac{\varepsilon}{\alpha}\,\frac{dy}{dt}. \qquad (116) \]
If we neglect the $O(\varepsilon)$ term on the right hand side then we arrive, again, at the heuristic (115).
Both of these arguments lead us to conjecture the limiting Itô SDE:
\[ \frac{dX}{dt} = h(X) + \frac{\sqrt{2D}}{\alpha}\, f(X)\,\frac{dV}{dt}. \qquad (117) \]
In fact, as applied, the heuristic gives the incorrect limit.
Pavliotis (IC) StochProc January 16, 2011 243 / 367
Whenever white noise is approximated by a smooth process, the limiting equation should be interpreted in the Stratonovich sense, giving
\[ \frac{dX}{dt} = h(X) + \frac{\sqrt{2D}}{\alpha}\, f(X)\circ\frac{dV}{dt}. \qquad (118) \]
This is usually called the Wong-Zakai theorem. A similar result is true in arbitrary finite and even infinite dimensions.
We will show this using singular perturbation theory.
Theorem
Assume that the initial conditions for $y(t)$ are stationary and that the function $f$ is smooth. Then the solution of eqn (114a) converges, in the limit as $\varepsilon \to 0$, to the solution of the Stratonovich SDE (118).
Pavliotis (IC) StochProc January 16, 2011 244 / 367
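A numerical sketch of the Wong-Zakai statement above: integrate the colored-noise system (114) for a small $\varepsilon$ and compare moments of $x(T)$ with those of the limiting Stratonovich SDE (118), the latter integrated with the Heun (predictor-corrector) scheme, which is consistent with the Stratonovich interpretation. The functions $h$, $f$ and all numerical values are my own illustrative choices.

    import numpy as np

    rng = np.random.default_rng(1)
    h = lambda x: -x
    f = lambda x: np.sin(x) + 2.0
    alpha, D, eps, T, n_paths = 1.0, 0.5, 0.05, 1.0, 5000

    # colored-noise system (114): explicit Euler with dt much smaller than eps^2
    dt = 1e-4
    x = np.zeros(n_paths)
    y = rng.normal(0.0, np.sqrt(D / alpha), n_paths)     # stationary initial data for y
    for _ in range(int(T / dt)):
        dV = np.sqrt(dt) * rng.standard_normal(n_paths)
        x += (h(x) + f(x) * y / eps) * dt
        y += -(alpha / eps**2) * y * dt + np.sqrt(2.0 * D / eps**2) * dV

    # limiting Stratonovich SDE (118) via the Heun scheme
    dt2, sig = 1e-3, np.sqrt(2.0 * D) / alpha
    X = np.zeros(n_paths)
    for _ in range(int(T / dt2)):
        dW = np.sqrt(dt2) * rng.standard_normal(n_paths)
        Xp = X + h(X) * dt2 + sig * f(X) * dW             # predictor step
        X += 0.5 * (h(X) + h(Xp)) * dt2 + 0.5 * sig * (f(X) + f(Xp)) * dW

    print(np.mean(x), np.mean(X))     # sample means should be close for small eps
    print(np.var(x), np.var(X))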
Remarks
1. It is possible to prove pathwise convergence under very mild assumptions.
2. The generator of the limiting Stratonovich SDE has the form
\[ \mathcal{L}_{\mathrm{strat}} = h(x)\,\partial_x + \frac{D}{\alpha^2}\, f(x)\,\partial_x\big( f(x)\,\partial_x\big). \]
3. Consequently, the Fokker-Planck operator of the Stratonovich SDE can be written in divergence form:
\[ \mathcal{L}^{*}_{\mathrm{strat}}\,\rho = -\partial_x\big( h(x)\rho\big) + \frac{D}{\alpha^2}\,\partial_x\big( f^2(x)\,\partial_x\rho\big). \]
Pavliotis (IC) StochProc January 16, 2011 245 / 367
4. In most applications in physics the white noise is an approximation of a more complicated noise process with non-zero correlation time. Hence, the physically correct interpretation of the stochastic integral is the Stratonovich one.
5. In higher dimensions an additional drift term might appear due to the noncommutativity of the row vectors of the diffusion matrix. This is related to the Lévy area correction in the theory of rough paths.
Pavliotis (IC) StochProc January 16, 2011 246 / 367
Proof of Proposition 61. The generator of the process $(x(t), y(t))$ is
\[ \mathcal{L} = \frac{1}{\varepsilon^2}\big( -\alpha y\,\partial_y + D\,\partial_y^2\big) + \frac{1}{\varepsilon}\, f(x)\, y\,\partial_x + h(x)\,\partial_x =: \frac{1}{\varepsilon^2}\mathcal{L}_0 + \frac{1}{\varepsilon}\mathcal{L}_1 + \mathcal{L}_2. \]
The "fast" process is a stationary Markov process with invariant density
\[ \rho(y) = \sqrt{\frac{\alpha}{2\pi D}}\, e^{-\frac{\alpha y^2}{2D}}. \qquad (119) \]
The backward Kolmogorov equation is
\[ \frac{\partial u^{\varepsilon}}{\partial t} = \left( \frac{1}{\varepsilon^2}\mathcal{L}_0 + \frac{1}{\varepsilon}\mathcal{L}_1 + \mathcal{L}_2\right) u^{\varepsilon}. \qquad (120) \]
We look for a solution to this equation in the form of a power series expansion in $\varepsilon$:
\[ u^{\varepsilon}(x, y, t) = u_0 + \varepsilon u_1 + \varepsilon^2 u_2 + \dots \]
Pavliotis (IC) StochProc January 16, 2011 247 / 367
We substitute this into (120) and equate terms of the same power in $\varepsilon$ to obtain the following hierarchy of equations:
\[ \mathcal{L}_0 u_0 = 0, \]
\[ \mathcal{L}_0 u_1 = -\mathcal{L}_1 u_0, \]
\[ \mathcal{L}_0 u_2 = -\mathcal{L}_1 u_1 - \mathcal{L}_2 u_0 + \frac{\partial u_0}{\partial t}. \]
The ergodicity of the fast process implies that the null space of the generator $\mathcal{L}_0$ consists only of constants in $y$. Hence:
\[ u_0 = u(x, t). \]
The second equation in the hierarchy becomes
\[ \mathcal{L}_0 u_1 = -f(x)\, y\,\partial_x u. \]
Pavliotis (IC) StochProc January 16, 2011 248 / 367
This equation is solvable since the right hand side is orthogonal to the null space of the adjoint of $\mathcal{L}_0$ (this is the Fredholm alternative). We solve it using separation of variables:
\[ u_1(x, y, t) = \frac{1}{\alpha}\, f(x)\,\partial_x u\; y + \phi_1(x, t). \]
In order for the third equation to have a solution we need to require that the right hand side is orthogonal to the null space of $\mathcal{L}_0^*$:
\[ \int_{\mathbb{R}} \left( -\mathcal{L}_1 u_1 - \mathcal{L}_2 u_0 + \frac{\partial u_0}{\partial t}\right)\rho(y)\, dy = 0. \]
We calculate:
\[ \int_{\mathbb{R}} \frac{\partial u_0}{\partial t}\,\rho(y)\, dy = \frac{\partial u}{\partial t}. \]
Furthermore:
Pavliotis (IC) StochProc January 16, 2011 249 / 367
\[ \int_{\mathbb{R}} \mathcal{L}_2 u_0\,\rho(y)\, dy = h(x)\,\partial_x u. \]
Finally
\[
\begin{aligned}
\int_{\mathbb{R}} \mathcal{L}_1 u_1\,\rho(y)\, dy &= \int_{\mathbb{R}} f(x)\, y\,\partial_x\left( \frac{1}{\alpha}\, f(x)\,\partial_x u\; y + \phi_1(x, t)\right)\rho(y)\, dy \\
&= \frac{1}{\alpha}\, f(x)\,\partial_x\big( f(x)\,\partial_x u\big)\,\langle y^2\rangle + f(x)\,\partial_x\phi_1(x, t)\,\langle y\rangle \\
&= \frac{D}{\alpha^2}\, f(x)\,\partial_x\big( f(x)\,\partial_x u\big) \\
&= \frac{D}{\alpha^2}\, f(x)\,\partial_x f(x)\,\partial_x u + \frac{D}{\alpha^2}\, f(x)^2\,\partial_x^2 u.
\end{aligned}
\]
Putting everything together we obtain the limiting backward Kolmogorov equation
\[ \frac{\partial u}{\partial t} = \left( h(x) + \frac{D}{\alpha^2}\, f(x)\,\partial_x f(x)\right)\partial_x u + \frac{D}{\alpha^2}\, f(x)^2\,\partial_x^2 u, \]
from which we read off the limiting Stratonovich SDE
\[ \frac{dX}{dt} = h(X) + \frac{\sqrt{2D}}{\alpha}\, f(X)\circ\frac{dV}{dt}. \]
Pavliotis (IC) StochProc January 16, 2011 250 / 367
A Stratonovich SDE
\[ dX(t) = f(X(t))\, dt + \sigma(X(t))\circ dW(t) \qquad (121) \]
can be written as an Itô SDE
\[ dX(t) = \left( f(X(t)) + \frac{1}{2}\Big(\sigma\,\frac{d\sigma}{dx}\Big)(X(t))\right) dt + \sigma(X(t))\, dW(t). \]
Conversely, an Itô SDE
\[ dX(t) = f(X(t))\, dt + \sigma(X(t))\, dW(t) \qquad (122) \]
can be written as a Stratonovich SDE
\[ dX(t) = \left( f(X(t)) - \frac{1}{2}\Big(\sigma\,\frac{d\sigma}{dx}\Big)(X(t))\right) dt + \sigma(X(t))\circ dW(t). \]
The Itô and Stratonovich interpretations of an SDE can lead to equations with very different properties!
Pavliotis (IC) StochProc January 16, 2011 251 / 367
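As a concrete worked instance of the conversion formulas (not spelled out on the slide above; the linear coefficients are my choice), take $\sigma(x) = \sigma x$ with constant $\sigma$. Then $\frac{1}{2}\sigma\frac{d\sigma}{dx} = \frac{1}{2}\sigma^2 x$, so the Stratonovich SDE
\[ dX = \mu X\, dt + \sigma X\circ dW \]
is equivalent to the Itô SDE
\[ dX = \Big(\mu + \tfrac{1}{2}\sigma^2\Big)X\, dt + \sigma X\, dW, \]
and, conversely, the Itô SDE $dX = \mu X\, dt + \sigma X\, dW$ is equivalent to the Stratonovich SDE $dX = \big(\mu - \tfrac{1}{2}\sigma^2\big)X\, dt + \sigma X\circ dW$. This is precisely the geometric Brownian motion example treated later.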
Multiplicative Noise
When the diffusion coefficient depends on the solution of the SDE $X(t)$, we will say that we have an equation with multiplicative noise.
Multiplicative noise can lead to noise induced phase transitions. See
W. Horsthemke and R. Lefever, Noise-induced transitions, Springer-Verlag, Berlin 1984.
This is a topic of current interest for SDEs in infinite dimensions (SPDEs).
Pavliotis (IC) StochProc January 16, 2011 252 / 367
Colored Noise
When the noise which drives an SDE has non-zero correlation time we will say that we have colored noise.
The properties of the SDE (stability, ergodicity etc.) are quite robust under "coloring of the noise". See
G. Blankenship and G.C. Papanicolaou, Stability and control of stochastic systems with wide-band noise disturbances. I, SIAM J. Appl. Math., 34(3), 1978, pp. 437-476.
Colored noise appears in many applications in physics and chemistry. For a review see
P. Hänggi and P. Jung, Colored noise in dynamical systems. Adv. Chem. Phys. 89, 239 (1995).
Pavliotis (IC) StochProc January 16, 2011 253 / 367
In the case where there is an additional small time scale in the problem, in addition to the correlation time of the colored noise, it is not clear what the right interpretation of the stochastic integral is (in the limit as both small time scales go to 0). This is usually called the Itô versus Stratonovich problem.
Consider, for example, the SDE
\[ \tau\ddot{X} = -\dot{X} + v(X)\,\eta^{\varepsilon}(t), \]
where $\eta^{\varepsilon}(t)$ is colored noise with correlation time $\varepsilon^2$.
In the limit where both small time scales go to 0 we can get either Itô or Stratonovich or neither. See
G.A. Pavliotis and A.M. Stuart, Analysis of white noise limits for stochastic systems with two fast relaxation times, Multiscale Model. Simul., 4(1), 2005, pp. 1-35.
Pavliotis (IC) StochProc January 16, 2011 254 / 367
Given the function $\sigma(z)$ in the SDE (105) we define
\[ \Sigma(z) = \sigma(z)\sigma(z)^T. \qquad (123) \]
The generator $\mathcal{L}$ is then defined as
\[ \mathcal{L}v = h\cdot\nabla v + \frac{1}{2}\,\Sigma : \nabla\nabla v. \qquad (124) \]
This operator, equipped with a suitable domain of definition, is the generator of the Markov process given by (105).
The formal $L^2$-adjoint operator is
\[ \mathcal{L}^* v = -\nabla\cdot(h v) + \frac{1}{2}\,\nabla\cdot\nabla\cdot(\Sigma v). \]
Pavliotis (IC) StochProc January 16, 2011 255 / 367
The Itô formula enables us to calculate the rate of change in time of functions $V : Z \to \mathbb{R}^n$ evaluated at the solution of a $Z$-valued SDE.
Formally, we can write:
\[ \frac{d}{dt}\big( V(z(t))\big) = \mathcal{L}V(z(t)) + \Big\langle \nabla V(z(t)),\ \sigma(z(t))\,\frac{dW}{dt}\Big\rangle. \]
Note that if $W$ were a smooth time-dependent function this formula would not be correct: there is an additional term in $\mathcal{L}V$, proportional to $\Sigma$, which arises from the lack of smoothness of Brownian motion.
The precise interpretation of the expression for the rate of change of $V$ is in integrated form:
Pavliotis (IC) StochProc January 16, 2011 256 / 367
Lemma (Itô's Formula)
Assume that the conditions of Theorem 60 hold. Let $z(t)$ solve (105) and let $V \in C^2(Z, \mathbb{R}^n)$. Then the process $V(z(t))$ satisfies
\[ V(z(t)) = V(z(0)) + \int_0^t \mathcal{L}V(z(s))\, ds + \int_0^t \big\langle \nabla V(z(s)),\ \sigma(z(s))\, dW(s)\big\rangle. \]
Let $\phi : Z \to \mathbb{R}$ and consider the function
\[ v(z, t) = \mathbb{E}\big(\phi(z(t))\,\big|\, z(0) = z\big), \qquad (125) \]
where the expectation is with respect to all Brownian driving paths. By averaging in the Itô formula, which removes the stochastic integral, and using the Markov property, it is possible to obtain the Backward Kolmogorov equation.
Pavliotis (IC) StochProc January 16, 2011 257 / 367
For a Stratonovich SDE the rules of standard calculus apply:
Consider the Stratonovich SDE (121) and let $V(x) \in C^2(\mathbb{R})$. Then
\[ dV(X(t)) = \frac{dV}{dx}(X(t))\,\big( f(X(t))\, dt + \sigma(X(t))\circ dW(t)\big). \]
Consider the Stratonovich SDE (121) on $\mathbb{R}^d$ (i.e. $f \in \mathbb{R}^d$, $\sigma \in \mathbb{R}^{d\times n}$, $W(t)$ is standard Brownian motion on $\mathbb{R}^n$). The corresponding Fokker-Planck equation is:
\[ \frac{\partial\rho}{\partial t} = -\nabla\cdot(f\rho) + \frac{1}{2}\,\nabla\cdot\big(\sigma\,\nabla\cdot(\sigma\rho)\big). \qquad (126) \]
Pavliotis (IC) StochProc January 16, 2011 258 / 367
1. The SDE for Brownian motion is:
\[ dX = \sqrt{2}\, dW, \qquad X(0) = x. \]
The solution is:
\[ X(t) = x + \sqrt{2}\, W(t). \]
Pavliotis (IC) StochProc January 16, 2011 259 / 367
2. The SDE for the Ornstein-Uhlenbeck process is
\[ dX = -X\, dt + \sqrt{2}\, dW, \qquad X(0) = x. \]
We can solve this equation using the variation of constants formula:
\[ X(t) = e^{-t} x + \sqrt{2}\int_0^t e^{-(t-s)}\, dW(s). \]
We can use Itô's formula to obtain equations for the moments of the OU process. The generator is:
\[ \mathcal{L} = -x\,\partial_x + \partial_x^2. \]
Pavliotis (IC) StochProc January 16, 2011 260 / 367
We apply Itô's formula to the function $f(x) = x^n$ to obtain:
\[
\begin{aligned}
dX(t)^n &= \mathcal{L}X(t)^n\, dt + \sqrt{2}\,\partial_x X(t)^n\, dW \\
&= -nX(t)^n\, dt + n(n-1)X(t)^{n-2}\, dt + n\sqrt{2}\, X(t)^{n-1}\, dW.
\end{aligned}
\]
Consequently:
\[ X(t)^n = x^n + \int_0^t \big( -nX(s)^n + n(n-1)X(s)^{n-2}\big)\, ds + n\sqrt{2}\int_0^t X(s)^{n-1}\, dW. \]
By taking the expectation in the above equation we obtain the equation for the moments of the OU process that we derived earlier using the Fokker-Planck equation:
\[ M_n(t) = x^n + \int_0^t \big( -nM_n(s) + n(n-1)M_{n-2}(s)\big)\, ds. \]
Pavliotis (IC) StochProc January 16, 2011 261 / 367
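A quick numerical sanity check of the moment equation above for $n = 2$ (for which it gives $M_2(t) = 1 + (x^2-1)e^{-2t}$): compare with a Monte Carlo estimate from Euler-Maruyama paths. This is only an illustrative sketch; the initial condition and numerical parameters are my choices.

    import numpy as np

    rng = np.random.default_rng(2)
    x0, T, dt, n_paths = 2.0, 1.0, 1e-3, 50_000
    X = np.full(n_paths, x0)
    for _ in range(int(T / dt)):
        X += -X * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_paths)
    print(np.mean(X**2))                              # Monte Carlo estimate of M_2(T)
    print(1.0 + (x0**2 - 1.0) * np.exp(-2.0 * T))     # solution of the n = 2 moment equation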
3. Consider the geometric Brownian motion
\[ dX(t) = \mu X(t)\, dt + \sigma X(t)\, dW(t), \qquad (127) \]
where we use the Itô interpretation of the stochastic differential. The generator of this process is
\[ \mathcal{L} = \mu x\,\partial_x + \frac{\sigma^2 x^2}{2}\,\partial_x^2. \]
The solution to this equation is
\[ X(t) = X(0)\exp\left( \Big(\mu - \frac{\sigma^2}{2}\Big)t + \sigma W(t)\right). \qquad (128) \]
Pavliotis (IC) StochProc January 16, 2011 262 / 367
To derive this formula, we apply Itô's formula to the function $f(x) = \log(x)$:
\[
\begin{aligned}
d\log(X(t)) &= \mathcal{L}\big(\log(X(t))\big)\, dt + \sigma x\,\partial_x\log(X(t))\, dW(t) \\
&= \left( \mu x\,\frac{1}{x} + \frac{\sigma^2 x^2}{2}\Big( -\frac{1}{x^2}\Big)\right) dt + \sigma\, dW(t) \\
&= \left( \mu - \frac{\sigma^2}{2}\right) dt + \sigma\, dW(t).
\end{aligned}
\]
Consequently:
\[ \log\left( \frac{X(t)}{X(0)}\right) = \left( \mu - \frac{\sigma^2}{2}\right) t + \sigma W(t), \]
from which (128) follows.
Pavliotis (IC) StochProc January 16, 2011 263 / 367
Notice that the Stratonovich interpretation of this equation leads to the solution
\[ X(t) = X(0)\exp\big(\mu t + \sigma W(t)\big). \]
Exercise: calculate all moments of the geometric Brownian motion for the Itô and Stratonovich interpretations of the stochastic integral.
Pavliotis (IC) StochProc January 16, 2011 264 / 367
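The difference between the two interpretations is easy to see numerically: the sketch below integrates (127) with Euler-Maruyama (which converges to the Itô solution) and with the Heun scheme (which converges to the Stratonovich solution) and prints the two sample means at time $T$. The parameter values are my own choices; this does not replace the exercise above.

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma, x0, T, dt, n_paths = 1.0, 0.8, 1.0, 1.0, 1e-3, 100_000
    X_ito = np.full(n_paths, x0)
    X_str = np.full(n_paths, x0)
    for _ in range(int(T / dt)):
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        X_ito += mu * X_ito * dt + sigma * X_ito * dW            # Euler-Maruyama (Ito)
        Xp = X_str + mu * X_str * dt + sigma * X_str * dW        # Heun predictor
        X_str += 0.5 * mu * (X_str + Xp) * dt + 0.5 * sigma * (X_str + Xp) * dW
    print(np.mean(X_ito), np.mean(X_str))    # the two means differ noticeably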
Consider the Landau equation:
\[ \frac{dX_t}{dt} = X_t\big( c - X_t^2\big), \qquad X_0 = x. \qquad (129) \]
This is a gradient flow for the potential $V(x) = -\frac{1}{2}cx^2 + \frac{1}{4}x^4$.
When $c < 0$ all solutions are attracted to the single steady state $X_* = 0$.
When $c > 0$ the steady state $X_* = 0$ becomes unstable and $X_t \to \sqrt{c}$ if $x > 0$, while $X_t \to -\sqrt{c}$ if $x < 0$.
Pavliotis (IC) StochProc January 16, 2011 265 / 367
Consider additive random perturbations to the Landau equation:
\[ \frac{dX_t}{dt} = X_t\big( c - X_t^2\big) + \sqrt{2\sigma}\,\frac{dW_t}{dt}, \qquad X_0 = x. \qquad (130) \]
This equation defines an ergodic Markov process on $\mathbb{R}$: there exists a unique invariant distribution
\[ \rho(x) = Z^{-1} e^{-V(x)/\sigma}, \qquad Z = \int_{\mathbb{R}} e^{-V(x)/\sigma}\, dx, \qquad V(x) = -\frac{1}{2}cx^2 + \frac{1}{4}x^4. \]
$\rho(x)$ is a probability density for all values of $c \in \mathbb{R}$.
The presence of additive noise in some sense "trivializes" the dynamics.
The dependence of various averaged quantities on $c$ resembles the physical situation of a second order phase transition.
Pavliotis (IC) StochProc January 16, 2011 266 / 367
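A long-run simulation makes the statement about the invariant distribution of (130) concrete: the empirical histogram of an Euler-Maruyama trajectory should approach $Z^{-1}e^{-V(x)/\sigma}$. The sketch below is illustrative only; the values of $c$, $\sigma$ and the run length are my choices.

    import numpy as np

    rng = np.random.default_rng(4)
    c, sigma, dt, n_steps = 1.0, 0.5, 1e-3, 1_000_000
    V = lambda x: -0.5 * c * x**2 + 0.25 * x**4
    x, samples = 0.0, np.empty(n_steps)
    for k in range(n_steps):
        x += x * (c - x**2) * dt + np.sqrt(2.0 * sigma * dt) * rng.standard_normal()
        samples[k] = x
    hist, edges = np.histogram(samples, bins=80, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    rho = np.exp(-V(centers) / sigma)
    rho /= np.sum(rho) * (centers[1] - centers[0])    # normalize Z^{-1} exp(-V/sigma)
    print(np.max(np.abs(hist - rho)))                 # small once the chain equilibrates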
Consider now multiplicative perturbations of the Landau equation:
\[ \frac{dX_t}{dt} = X_t\big( c - X_t^2\big) + \sqrt{2\sigma}\, X_t\,\frac{dW_t}{dt}, \qquad X_0 = x, \qquad (131) \]
where the stochastic differential is interpreted in the Itô sense.
The generator of this process is
\[ \mathcal{L} = x\big( c - x^2\big)\,\partial_x + \sigma x^2\,\partial_x^2. \]
Notice that $X_t = 0$ is always a solution of (131). Thus, if we start with $x > 0$ ($x < 0$) the solution will remain positive (negative).
We will assume that $x > 0$.
Consider the function $Y_t = \log(X_t)$. We apply Itô's formula to this function:
\[
\begin{aligned}
dY_t &= \mathcal{L}\log(X_t)\, dt + \sqrt{2\sigma}\, X_t\,\partial_x\log(X_t)\, dW_t \\
&= \left( X_t\big( c - X_t^2\big)\frac{1}{X_t} - \sigma X_t^2\,\frac{1}{X_t^2}\right) dt + \sqrt{2\sigma}\, X_t\,\frac{1}{X_t}\, dW_t \\
&= (c - \sigma)\, dt - X_t^2\, dt + \sqrt{2\sigma}\, dW_t.
\end{aligned}
\]
Thus, we have been able to transform (131) into an SDE with additive noise:
\[ dY_t = \big( (c - \sigma) - e^{2Y_t}\big)\, dt + \sqrt{2\sigma}\, dW_t. \qquad (132) \]
This is a gradient flow with potential
\[ V(y) = -\left( (c - \sigma)\, y - \frac{1}{2}\, e^{2y}\right). \]
Pavliotis (IC) StochProc January 16, 2011 268 / 367
The invariant measure, if it exists, is of the form
\[ \rho(y)\, dy = Z^{-1} e^{-V(y)/\sigma}\, dy. \]
Going back to the variable $x$ we obtain:
\[ \rho(x)\, dx = Z^{-1} x^{(c/\sigma)-2}\, e^{-\frac{x^2}{2\sigma}}\, dx. \]
We need to make sure that this distribution is integrable:
\[ Z = \int_0^{+\infty} x^{\gamma}\, e^{-\frac{x^2}{2\sigma}}\, dx < \infty, \qquad \gamma = \frac{c}{\sigma} - 2. \]
For this it is necessary that
\[ \gamma > -1 \quad\Longleftrightarrow\quad c > \sigma. \]
Pavliotis (IC) StochProc January 16, 2011 269 / 367
Not all multiplicative random perturbations lead to ergodic behavior!
The dependence of the invariant distribution on $c$ is similar to the physical situation of first order phase transitions.
Exercise: Analyze this problem for the Stratonovich interpretation of the stochastic integral.
Exercise: Study additive and multiplicative random perturbations of the ODE
\[ \frac{dx}{dt} = x\big( c + 2x^2 - x^4\big). \]
For more information see M.C. Mackey, A. Longtin, A. Lasota, Noise-Induced Global Asymptotic Stability, J. Stat. Phys. 60 (5/6), pp. 735-751.
Pavliotis (IC) StochProc January 16, 2011 270 / 367
Theorem
Assume that $\phi$ is chosen sufficiently smooth so that the backward Kolmogorov equation
\[ \frac{\partial v}{\partial t} = \mathcal{L}v \quad\text{for } (z, t) \in Z\times(0, \infty), \]
\[ v = \phi \quad\text{for } (z, t) \in Z\times\{0\}, \qquad (133) \]
has a unique classical solution $v(x, t) \in C^{2,1}(Z\times(0, \infty))$. Then $v$ is given by (125), where $z(t)$ solves (106).
Pavliotis (IC) StochProc January 16, 2011 271 / 367
Now we can derive rigorously the Fokker-Planck equation.
Theorem
Consider equation (106) with $z(0)$ a random variable with density $\rho_0(z)$. Assume that the law of $z(t)$ has a density $\rho(z, t) \in C^{2,1}(Z\times(0, \infty))$. Then $\rho$ satisfies the Fokker-Planck equation
\[ \frac{\partial\rho}{\partial t} = \mathcal{L}^*\rho \quad\text{for } (z, t) \in Z\times(0, \infty), \qquad (134a) \]
\[ \rho = \rho_0 \quad\text{for } z \in Z,\ t = 0. \qquad (134b) \]
Pavliotis (IC) StochProc January 16, 2011 272 / 367
Proof
Let $\mathbb{E}^{\mu}$ denote averaging with respect to the product measure induced by the measure $\mu$ with density $\rho_0$ on $z(0)$ and the independent driving Wiener measure on the SDE itself.
Averaging over random $z(0)$ distributed with density $\rho_0(z)$, we find
\[ \mathbb{E}^{\mu}\big(\phi(z(t))\big) = \int_Z v(z, t)\,\rho_0(z)\, dz = \int_Z \big( e^{\mathcal{L}t}\phi\big)(z)\,\rho_0(z)\, dz = \int_Z \big( e^{\mathcal{L}^* t}\rho_0\big)(z)\,\phi(z)\, dz. \]
But since $\rho(z, t)$ is the density of $z(t)$ we also have
\[ \mathbb{E}^{\mu}\big(\phi(z(t))\big) = \int_Z \rho(z, t)\,\phi(z)\, dz. \]
Pavliotis (IC) StochProc January 16, 2011 273 / 367
Equating these two expressions for the expectation at time $t$ we obtain
\[ \int_Z \big( e^{\mathcal{L}^* t}\rho_0\big)(z)\,\phi(z)\, dz = \int_Z \rho(z, t)\,\phi(z)\, dz. \]
We use a density argument so that the identity can be extended to all $\phi \in L^2(Z)$. Hence, from the above equation we deduce that
\[ \rho(z, t) = \big( e^{\mathcal{L}^* t}\rho_0\big)(z). \]
Differentiation of the above equation gives (134a).
Setting $t = 0$ gives the initial condition (134b).
Pavliotis (IC) StochProc January 16, 2011 274 / 367
THE SMOLUCHOWSKI AND FREIDLIN-WENTZELL LIMITS
Pavliotis (IC) StochProc January 16, 2011 275 / 367
There are very few SDEs/Fokker-Planck equations that can be
solved explicitly.
In most cases we need to study the problem under investigation
either approximately or numerically.
In this part of the course we will develop approximate methods for
studying various stochastic systems of practical interest.
Pavliotis (IC) StochProc January 16, 2011 276 / 367
There are many problems of physical interest that can be analyzed using techniques from perturbation theory and asymptotic analysis:
1. Small noise asymptotics at finite time intervals.
2. Small noise asymptotics/large times (rare events): the theory of large deviations, escape from a potential well, exit time problems.
3. Small and large friction asymptotics for the Fokker-Planck equation: the Freidlin-Wentzell (underdamped) and Smoluchowski (overdamped) limits.
4. Large time asymptotics for the Langevin equation in a periodic potential: homogenization and averaging.
5. Stochastic systems with two characteristic time scales: multiscale problems and methods.
Pavliotis (IC) StochProc January 16, 2011 277 / 367
We will study various asymptotic limits for the Langevin equation (we have set $m = 1$)
\[ \ddot{q} = -\nabla V(q) - \gamma\dot{q} + \sqrt{2\gamma\beta^{-1}}\,\dot{W}. \qquad (135) \]
There are two parameters in the problem, the friction coefficient $\gamma$ and the inverse temperature $\beta$.
We want to study the qualitative behavior of solutions to this equation (and to the corresponding Fokker-Planck equation).
There are various asymptotic limits at which we can eliminate some of the variables of the equation and obtain a simpler equation for fewer variables.
In the large temperature limit, $\beta \ll 1$, the dynamics of (183) is dominated by diffusion: the Langevin equation (183) can be approximated by free Brownian motion:
\[ \ddot{q} = -\gamma\dot{q} + \sqrt{2\gamma\beta^{-1}}\,\dot{W}. \]
Pavliotis (IC) StochProc January 16, 2011 278 / 367
The small temperature asymptotics, $\beta \gg 1$, is much more interesting and more subtle. It leads to exponential, Arrhenius type asymptotics for the reaction rate (in the case of a particle escaping from a potential well due to thermal noise) or the diffusion coefficient (in the case of a particle moving in a periodic potential in the presence of thermal noise):
\[ \kappa = \nu\exp\big( -\beta E_b\big), \qquad (136) \]
where $\kappa$ can be either the reaction rate or the diffusion coefficient.
The small temperature asymptotics will be studied later for the case of a bistable potential (reaction rate) and for the case of a periodic potential (diffusion coefficient).
Pavliotis (IC) StochProc January 16, 2011 279 / 367
Assuming that the temperature is fixed, the only parameter that is left is the friction coefficient $\gamma$. The large and small friction asymptotics can be expressed in terms of a slow/fast system of SDEs.
In many applications (especially in biology) the friction coefficient is large: $\gamma \gg 1$. In this case the momentum is the fast variable which we can eliminate to obtain an equation for the position. This is the overdamped or Smoluchowski limit.
In various problems in physics the friction coefficient is small: $\gamma \ll 1$. In this case the position is the fast variable whereas the energy is the slow variable. We can eliminate the position and obtain an equation for the energy. This is the underdamped or Freidlin-Wentzell limit.
In both cases we have to look at sufficiently long time scales.
Pavliotis (IC) StochProc January 16, 2011 280 / 367
We rescale the solution to (183):
\[ q^{\gamma}(t) = \lambda_{\gamma}\, q(t/\mu_{\gamma}). \]
This rescaled process satisfies the equation
\[ \ddot{q}^{\gamma} = -\frac{\lambda_{\gamma}}{\mu_{\gamma}^2}\,\partial_q V\big( q^{\gamma}/\lambda_{\gamma}\big) - \frac{\gamma}{\mu_{\gamma}}\,\dot{q}^{\gamma} + \sqrt{2\gamma\lambda_{\gamma}^2\mu_{\gamma}^{-3}\beta^{-1}}\,\dot{W}. \qquad (137) \]
Different choices for these two parameters lead to the overdamped and underdamped limits.
Pavliotis (IC) StochProc January 16, 2011 281 / 367

$\lambda_{\gamma} = 1$, $\mu_{\gamma} = \gamma^{-1}$, $\gamma \gg 1$.
In this case equation (137) becomes
\[ \gamma^{-2}\,\ddot{q}^{\gamma} = -\partial_q V\big( q^{\gamma}\big) - \dot{q}^{\gamma} + \sqrt{2\beta^{-1}}\,\dot{W}. \qquad (138) \]
Under this scaling, the interesting limit is the overdamped limit, $\gamma \gg 1$.
We will see later that in the limit as $\gamma \to +\infty$ the solution to (138) can be approximated by the solution to
\[ \dot{q} = -\partial_q V + \sqrt{2\beta^{-1}}\,\dot{W}. \]
Pavliotis (IC) StochProc January 16, 2011 282 / 367

$\lambda_{\gamma} = 1$, $\mu_{\gamma} = \gamma$, $\gamma \ll 1$:
\[ \ddot{q}^{\gamma} = -\gamma^{-2}\,\nabla V\big( q^{\gamma}\big) - \dot{q}^{\gamma} + \sqrt{2\gamma^{-2}\beta^{-1}}\,\dot{W}. \qquad (139) \]
Under this scaling the interesting limit is the underdamped limit, $\gamma \ll 1$.
We will see later that in the limit as $\gamma \to 0$ the energy of the solution to (139) converges to a stochastic process on a graph.
Pavliotis (IC) StochProc January 16, 2011 283 / 367
We consider the rescaled Langevin equation (138):
\[ \varepsilon^2\,\ddot{q}^{\varepsilon}(t) = -\nabla V\big( q^{\varepsilon}(t)\big) - \dot{q}^{\varepsilon}(t) + \sqrt{2\beta^{-1}}\,\dot{W}(t), \qquad (140) \]
where we have set $\varepsilon^{-1} = \gamma$, since we are interested in the limit $\gamma \to \infty$, i.e. $\varepsilon \to 0$.
We will show that, in the limit as $\varepsilon \to 0$, $q^{\varepsilon}(t)$, the solution of the Langevin equation (140), converges to $q(t)$, the solution of the Smoluchowski equation
\[ \dot{q} = -\nabla V + \sqrt{2\beta^{-1}}\,\dot{W}. \qquad (141) \]
Pavliotis (IC) StochProc January 16, 2011 284 / 367
We write (140) as a system of SDEs:
\[ \dot{q} = \frac{1}{\varepsilon}\, p, \qquad (142) \]
\[ \dot{p} = -\frac{1}{\varepsilon}\,\nabla V(q) - \frac{1}{\varepsilon^2}\, p + \sqrt{\frac{2}{\beta\varepsilon^2}}\,\dot{W}. \qquad (143) \]
This system of SDEs defines a Markov process in phase space. Its generator is
\[ \mathcal{L}^{\varepsilon} = \frac{1}{\varepsilon^2}\big( -p\,\partial_p + \beta^{-1}\partial_p^2\big) + \frac{1}{\varepsilon}\big( p\,\partial_q - \partial_q V\,\partial_p\big) =: \frac{1}{\varepsilon^2}\mathcal{L}_0 + \frac{1}{\varepsilon}\mathcal{L}_1. \]
This is a singularly perturbed differential operator.
We will derive the Smoluchowski equation (141) using a pathwise technique, as well as by analyzing the corresponding Kolmogorov equations.
Pavliotis (IC) StochProc January 16, 2011 285 / 367
We apply Itô's formula to $p$:
\[
\begin{aligned}
dp(t) &= \mathcal{L}^{\varepsilon} p(t)\, dt + \frac{1}{\varepsilon}\sqrt{2\beta^{-1}}\,\partial_p p(t)\, dW \\
&= -\frac{1}{\varepsilon^2}\, p(t)\, dt - \frac{1}{\varepsilon}\,\partial_q V(q(t))\, dt + \frac{1}{\varepsilon}\sqrt{2\beta^{-1}}\, dW.
\end{aligned}
\]
Consequently:
\[ \frac{1}{\varepsilon}\int_0^t p(s)\, ds = -\int_0^t \partial_q V(q(s))\, ds + \sqrt{2\beta^{-1}}\, W(t) + \mathcal{O}(\varepsilon). \]
From equation (142) we have that
\[ q(t) = q(0) + \frac{1}{\varepsilon}\int_0^t p(s)\, ds. \]
Combining the above two equations we deduce
\[ q(t) = q(0) - \int_0^t \partial_q V(q(s))\, ds + \sqrt{2\beta^{-1}}\, W(t) + \mathcal{O}(\varepsilon), \]
from which (141) follows.
Pavliotis (IC) StochProc January 16, 2011 286 / 367
Notice that in this derivation we assumed that
\[ \mathbb{E}|p(t)|^2 \leqslant C. \]
This estimate is true, under appropriate assumptions on the potential $V(q)$ and on the initial conditions.
In fact, we can prove a pathwise approximation result:
\[ \left( \mathbb{E}\sup_{t\in[0,T]} |q^{\varepsilon}(t) - q(t)|^p\right)^{1/p} \leqslant C\,\varepsilon^{2-\kappa}, \]
where $\kappa > 0$ is arbitrarily small (it accounts for logarithmic corrections).
Pavliotis (IC) StochProc January 16, 2011 287 / 367
For the rigorous proof see
E. Nelson, Dynamical Theories of Brownian Motion, Princeton University Press, 1967.
A similar approximation theorem is also valid in infinite dimensions (i.e. for SPDEs):
S. Cerrai and M. Freidlin, On the Smoluchowski-Kramers approximation for a system with an infinite number of degrees of freedom, Probab. Theory Related Fields, 135 (3), 2006, pp. 363-394.
Pavliotis (IC) StochProc January 16, 2011 288 / 367
The pathwise derivation of the Smoluchowski equation implies that the solution of the Fokker-Planck equation corresponding to the Langevin equation (140) converges (in some appropriate sense to be explained below) to the solution of the Fokker-Planck equation corresponding to the Smoluchowski equation (141).
It is important in various applications to calculate corrections to the limiting Fokker-Planck equation.
We can accomplish this by analyzing the Fokker-Planck equation for (140) using singular perturbation theory.
We will consider the problem in one dimension. This is mainly to simplify the notation. The multidimensional problem can be treated in a very similar way.
Pavliotis (IC) StochProc January 16, 2011 289 / 367
The Fokker-Planck equation associated to equations (142) and (143) is
\[ \frac{\partial\rho}{\partial t} = \mathcal{L}^{\varepsilon\,*}\rho = \frac{1}{\varepsilon}\big( -p\,\partial_q\rho + \partial_q V(q)\,\partial_p\rho\big) + \frac{1}{\varepsilon^2}\Big(\partial_p(p\rho) + \frac{1}{\beta}\,\partial_p^2\rho\Big) =: \Big( \frac{1}{\varepsilon^2}\mathcal{L}_0^* + \frac{1}{\varepsilon}\mathcal{L}_1^*\Big)\rho. \qquad (144) \]
The invariant distribution of the Markov process $\{q, p\}$, if it exists, is
\[ \rho_0(p, q) = \frac{1}{Z}\, e^{-\beta H(p,q)}, \qquad Z = \int_{\mathbb{R}^2} e^{-\beta H(p,q)}\, dp\, dq, \]
where $H(p, q) = \frac{1}{2}p^2 + V(q)$. We define the function $f(p, q, t)$ through
\[ \rho(p, q, t) = f(p, q, t)\,\rho_0(p, q). \qquad (145) \]
Pavliotis (IC) StochProc January 16, 2011 290 / 367
Theorem
The function $f(p, q, t)$ defined in (145) satisfies the equation
\[ \frac{\partial f}{\partial t} = \left( \frac{1}{\varepsilon^2}\big( -p\,\partial_p + \beta^{-1}\partial_p^2\big) - \frac{1}{\varepsilon}\big( p\,\partial_q - \partial_q V(q)\,\partial_p\big)\right) f =: \left( \frac{1}{\varepsilon^2}\mathcal{L}_0 - \frac{1}{\varepsilon}\mathcal{L}_1\right) f. \qquad (146) \]
Remark
This is "almost" the backward Kolmogorov equation, with the difference that we have $-\mathcal{L}_1$ instead of $\mathcal{L}_1$. This is related to the fact that $\mathcal{L}_0$ is a symmetric operator in $L^2(\mathbb{R}^2; Z^{-1}e^{-\beta H(p,q)})$, whereas $\mathcal{L}_1$ is antisymmetric.
Pavliotis (IC) StochProc January 16, 2011 291 / 367
Proof.
We note that $\mathcal{L}_0^*\rho_0 = 0$ and $\mathcal{L}_1^*\rho_0 = 0$. We use this to calculate:
\[
\begin{aligned}
\mathcal{L}_0^*\rho &= \mathcal{L}_0^*(f\rho_0) = \partial_p\big( p\, f\rho_0\big) + \beta^{-1}\partial_p^2\big( f\rho_0\big) \\
&= \rho_0\, p\,\partial_p f + \rho_0\,\beta^{-1}\partial_p^2 f + f\,\mathcal{L}_0^*\rho_0 + 2\beta^{-1}\,\partial_p f\,\partial_p\rho_0 \\
&= \big( -p\,\partial_p f + \beta^{-1}\partial_p^2 f\big)\rho_0 = \rho_0\,\mathcal{L}_0 f.
\end{aligned}
\]
Similarly,
\[ \mathcal{L}_1^*\rho = \mathcal{L}_1^*(f\rho_0) = \big( -p\,\partial_q + \partial_q V\,\partial_p\big)(f\rho_0) = -\rho_0\big( p\,\partial_q f - \partial_q V\,\partial_p f\big) = -\rho_0\,\mathcal{L}_1 f. \]
Consequently, the Fokker-Planck equation (144) becomes
\[ \rho_0\,\frac{\partial f}{\partial t} = \rho_0\left( \frac{1}{\varepsilon^2}\mathcal{L}_0 f - \frac{1}{\varepsilon}\mathcal{L}_1 f\right), \]
from which the claim follows.
Pavliotis (IC) StochProc January 16, 2011 292 / 367
We look for a solution to (146) in the form of a power series in $\varepsilon$:
\[ f(p, q, t) = \sum_{n=0}^{\infty} \varepsilon^n f_n(p, q, t). \qquad (147) \]
We substitute this expansion into eqn. (146) to obtain the following system of equations:
\[ \mathcal{L}_0 f_0 = 0, \qquad (148) \]
\[ \mathcal{L}_0 f_1 = \mathcal{L}_1 f_0, \qquad (149) \]
\[ \mathcal{L}_0 f_2 = \mathcal{L}_1 f_1 + \frac{\partial f_0}{\partial t}, \qquad (150) \]
\[ \mathcal{L}_0 f_{n+1} = \mathcal{L}_1 f_n + \frac{\partial f_n}{\partial t}, \qquad n = 2, 3, \dots \qquad (151) \]
The null space of $\mathcal{L}_0$ consists of constants in $p$. Consequently, from equation (148) we conclude that
\[ f_0 = f(q, t). \]
Pavliotis (IC) StochProc January 16, 2011 293 / 367
Now we can calculate the right hand side of equation (149):
\[ \mathcal{L}_1 f_0 = p\,\partial_q f. \]
Equation (149) becomes:
\[ \mathcal{L}_0 f_1 = p\,\partial_q f. \]
The right hand side of this equation is orthogonal to $\mathcal{N}(\mathcal{L}_0^*)$ and consequently there exists a unique solution. We obtain this solution using separation of variables:
\[ f_1 = -p\,\partial_q f + \phi_1(q, t). \]
Pavliotis (IC) StochProc January 16, 2011 294 / 367
Now we can calculate the RHS of equation (150). We need to calculate $\mathcal{L}_1 f_1$:
\[ \mathcal{L}_1 f_1 = \big( p\,\partial_q - \partial_q V\,\partial_p\big)\big( -p\,\partial_q f + \phi_1(q, t)\big) = -p^2\,\partial_q^2 f + p\,\partial_q\phi_1 + \partial_q V\,\partial_q f. \]
The solvability condition for (150) is
\[ \int_{\mathbb{R}} \left( \mathcal{L}_1 f_1 + \frac{\partial f_0}{\partial t}\right)\rho_{OU}(p)\, dp = 0, \]
from which we obtain the backward Kolmogorov equation corresponding to the Smoluchowski SDE:
\[ \frac{\partial f}{\partial t} = -\partial_q V\,\partial_q f + \beta^{-1}\,\partial_q^2 f. \qquad (152) \]
Now we solve the equation for $f_2$. We use (152) to write (150) in the form
\[ \mathcal{L}_0 f_2 = \big(\beta^{-1} - p^2\big)\,\partial_q^2 f + p\,\partial_q\phi_1. \]
Pavliotis (IC) StochProc January 16, 2011 295 / 367
The solution of this equation is
\[ f_2(p, q, t) = \frac{1}{2}\,\partial_q^2 f(q, t)\, p^2 - \partial_q\phi_1(q, t)\, p + \phi_2(q, t). \]
The right hand side of the equation for $f_3$ is
\[ \mathcal{L}_1 f_2 = \frac{1}{2}\, p^3\,\partial_q^3 f - p^2\,\partial_q^2\phi_1 + p\,\partial_q\phi_2 - \partial_q V\,\partial_q^2 f\; p + \partial_q V\,\partial_q\phi_1. \]
The solvability condition is
\[ \int_{\mathbb{R}} \mathcal{L}_1 f_2\,\rho_{OU}(p)\, dp = 0. \]
This leads to the equation
\[ \partial_q V\,\partial_q\phi_1 - \beta^{-1}\,\partial_q^2\phi_1 = 0. \]
The only solution to this equation which is an element of $L^2(e^{-\beta V(q)})$ is
\[ \phi_1 \equiv 0. \]
Pavliotis (IC) StochProc January 16, 2011 296 / 367
Putting everything together we obtain the first two terms in the $\varepsilon$-expansion of the Fokker-Planck equation (146):
\[ \rho(p, q, t) = Z^{-1} e^{-\beta H(p,q)}\big( f + \varepsilon\,(-p\,\partial_q f) + \mathcal{O}(\varepsilon^2)\big), \]
where $f$ is the solution of (152).
Notice that we can rewrite the leading order term of the expansion in the form
\[ \rho(p, q, t) = \big( 2\pi\beta^{-1}\big)^{-\frac{1}{2}}\, e^{-\beta p^2/2}\,\rho_V(q, t) + \mathcal{O}(\varepsilon), \]
where $\rho_V = Z^{-1} e^{-\beta V(q)} f$ is the solution of the Smoluchowski Fokker-Planck equation
\[ \frac{\partial\rho_V}{\partial t} = \partial_q\big(\partial_q V\,\rho_V\big) + \beta^{-1}\,\partial_q^2\rho_V. \]
Pavliotis (IC) StochProc January 16, 2011 297 / 367
It is possible to expand the $n$-th term in the expansion (147) in terms of Hermite functions (the eigenfunctions of the generator of the OU process)
\[ f_n(p, q, t) = \sum_{k=0}^{n} f_{nk}(q, t)\,\phi_k(p), \qquad (153) \]
where $\phi_k(p)$ is the $k$-th eigenfunction of $\mathcal{L}_0$:
\[ -\mathcal{L}_0\phi_k = \lambda_k\phi_k. \]
We can obtain the following system of equations ($\widehat{\mathcal{L}} = \beta^{-1}\partial_q - \partial_q V$):
\[ \widehat{\mathcal{L}} f_{n,1} = 0, \]
\[ \sqrt{(k+1)\beta^{-1}}\,\widehat{\mathcal{L}} f_{n,k+1} + \sqrt{k\beta^{-1}}\,\partial_q f_{n,k-1} = k\, f_{n+1,k}, \qquad k = 1, 2, \dots, n-1, \]
\[ \sqrt{n\beta^{-1}}\,\partial_q f_{n,n-1} = n\, f_{n+1,n}, \]
\[ \sqrt{(n+1)\beta^{-1}}\,\partial_q f_{n,n} = (n+1)\, f_{n+1,n+1}. \]
Pavliotis (IC) StochProc January 16, 2011 298 / 367
Using this method we can obtain the first three terms in the expansion:
\[ \rho(x, y, t) = \rho_0(p, q)\Big[ f + \varepsilon\big( -\sqrt{\beta^{-1}}\,\partial_q f\,\phi_1\big) + \varepsilon^2\Big( \frac{\beta^{-1}}{\sqrt{2}}\,\partial_q^2 f\,\phi_2 + f_{20}\Big) + \varepsilon^3\Big( -\frac{\beta^{-3/2}}{\sqrt{3!}}\,\partial_q^3 f\,\phi_3 + \big(\beta^{-1}\widehat{\mathcal{L}}\,\partial_q^2 f - \partial_q f_{20}\big)\phi_1\Big)\Big] + \mathcal{O}(\varepsilon^4). \]
Pavliotis (IC) StochProc January 16, 2011 299 / 367
The Freidlin-Wentzell Limit
Consider now the rescaling $\lambda_{\gamma} = 1$, $\mu_{\gamma} = \gamma$. The Langevin equation becomes
\[ \ddot{q}^{\gamma} = -\gamma^{-2}\,\nabla V\big( q^{\gamma}\big) - \dot{q}^{\gamma} + \sqrt{2\gamma^{-2}\beta^{-1}}\,\dot{W}. \qquad (154) \]
We write equation (154) as a system of two equations
\[ \dot{q}^{\varepsilon} = \frac{1}{\varepsilon}\, p^{\varepsilon}, \qquad \dot{p}^{\varepsilon} = -\frac{1}{\varepsilon}\, V'\big( q^{\varepsilon}\big) - p^{\varepsilon} + \sqrt{2\beta^{-1}}\,\dot{W}. \]
This is the equation for an $O(1/\varepsilon)$ Hamiltonian system perturbed by $O(1)$ noise. We expect that, to leading order, the energy is conserved, since it is conserved for the Hamiltonian system. We apply Itô's formula to the Hamiltonian of the system to obtain
\[ \dot{H} = \big(\beta^{-1} - p^2\big) + \sqrt{2\beta^{-1} p^2}\,\dot{W}, \]
with $p^2 = p^2(H, q) = 2\big( H - V(q)\big)$.
Pavliotis (IC) StochProc January 16, 2011 300 / 367
Thus, in order to study the $\gamma \to 0$ limit we need to analyze the following fast/slow system of SDEs
\[ \dot{H} = \big(\beta^{-1} - p^2\big) + \sqrt{2\beta^{-1} p^2}\,\dot{W}, \qquad (155a) \]
\[ \dot{p}^{\varepsilon} = -\frac{1}{\varepsilon}\, V'\big( q^{\varepsilon}\big) - p^{\varepsilon} + \sqrt{2\beta^{-1}}\,\dot{W}. \qquad (155b) \]
The Hamiltonian is the slow variable, whereas the momentum (or position) is the fast variable. Assuming that we can average over the Hamiltonian dynamics, we obtain the limiting SDE for the Hamiltonian:
\[ \dot{H} = \big\langle \beta^{-1} - p^2\big\rangle + \sqrt{2\beta^{-1}\langle p^2\rangle}\,\dot{W}. \qquad (156) \]
The limiting SDE lives on the graph associated with the Hamiltonian system. The domain of definition of the limiting Markov process is defined through appropriate boundary conditions (the gluing conditions) at the interior vertices of the graph.
Pavliotis (IC) StochProc January 16, 2011 301 / 367
We identify all points belonging to the same connected component of a level curve $\{x : H(x) = H\}$, $x = (q, p)$.
Each point on the edges of the graph corresponds to a trajectory.
Interior vertices correspond to separatrices.
Let $I_i$, $i = 1, \dots, d$ be the edges of the graph. Then $(i, H)$ defines a global coordinate system on the graph.
For more information see
Freidlin and Wentzell, Random Perturbations of Dynamical Systems, Springer 1998.
Freidlin and Wentzell, Random Perturbations of Hamiltonian Systems, AMS 1994.
Sowers, A boundary layer theory for diffusively perturbed transport around a heteroclinic cycle, CPAM 58 (2005), no. 1, 30-84.
Pavliotis (IC) StochProc January 16, 2011 302 / 367
We will study the small $\gamma$ asymptotics by analyzing the corresponding backward Kolmogorov equation using singular perturbation theory.
The generator of the process $\{q^{\varepsilon}, p^{\varepsilon}\}$ is
\[ \mathcal{L}^{\varepsilon} = \frac{1}{\varepsilon}\big( p\,\partial_q - \partial_q V\,\partial_p\big) - p\,\partial_p + \beta^{-1}\partial_p^2 = \frac{1}{\varepsilon}\,\mathcal{L}_0 + \mathcal{L}_1. \]
Let $u^{\varepsilon} = \mathbb{E}\big( f\big( p^{\varepsilon}(p, q; t), q^{\varepsilon}(p, q; t)\big)\big)$. It satisfies the backward Kolmogorov equation associated to the process $\{q^{\varepsilon}, p^{\varepsilon}\}$:
\[ \frac{\partial u^{\varepsilon}}{\partial t} = \left( \frac{1}{\varepsilon}\,\mathcal{L}_0 + \mathcal{L}_1\right) u^{\varepsilon}. \qquad (157) \]
Pavliotis (IC) StochProc January 16, 2011 303 / 367
We look for a solution in the form of a power series expansion in $\varepsilon$:
\[ u^{\varepsilon} = u_0 + \varepsilon u_1 + \varepsilon^2 u_2 + \dots \]
We substitute this ansatz into (157) and equate equal powers in $\varepsilon$ to obtain the following sequence of equations:
\[ \mathcal{L}_0 u_0 = 0, \qquad (158a) \]
\[ \mathcal{L}_0 u_1 = -\mathcal{L}_1 u_0 + \frac{\partial u_0}{\partial t}, \qquad (158b) \]
\[ \mathcal{L}_0 u_2 = -\mathcal{L}_1 u_1 + \frac{\partial u_1}{\partial t}, \qquad (158c) \]
\[ \dots \]
Notice that the operator $\mathcal{L}_0$ is the backward Liouville operator of the Hamiltonian system with Hamiltonian
\[ H = \frac{1}{2}\, p^2 + V(q). \]
Pavliotis (IC) StochProc January 16, 2011 304 / 367
We assume that there are no integrals of motion other than the Hamiltonian. This means that the null space of $\mathcal{L}_0$ consists of functions of the Hamiltonian:
\[ \mathcal{N}(\mathcal{L}_0) = \big\{ \text{functions of } H\big\}. \qquad (159) \]
Let us now analyze equations (158). We start with (158a); eqn. (159) implies that $u_0$ depends on $q$, $p$ through the Hamiltonian function $H$:
\[ u_0 = u\big( H(p, q), t\big). \qquad (160) \]
Now we proceed with (158b). For this we need to find the solvability condition for equations of the form
\[ \mathcal{L}_0 u = f. \qquad (161) \]
We multiply it by an arbitrary smooth function of $H(p, q)$, integrate over $\mathbb{R}^2$ and use the skew-symmetry of the Liouville operator $\mathcal{L}_0$ to deduce:¹
¹ We assume that both $u_1$ and $F$ decay to 0 as $|p| \to \infty$ to justify the integration by parts that follows.
Pavliotis (IC) StochProc January 16, 2011 305 / 367
\[ \int_{\mathbb{R}^2} \mathcal{L}_0 u\, F\big( H(p, q)\big)\, dp\, dq = \int_{\mathbb{R}^2} u\,\mathcal{L}_0^* F\big( H(p, q)\big)\, dp\, dq = -\int_{\mathbb{R}^2} u\,\big(\mathcal{L}_0 F( H(p, q))\big)\, dp\, dq = 0, \qquad \forall F \in C_b^{\infty}(\mathbb{R}). \]
This implies that the solvability condition for equation (161) is that
\[ \int_{\mathbb{R}^2} f(p, q)\, F\big( H(p, q)\big)\, dp\, dq = 0, \qquad \forall F \in C_b^{\infty}(\mathbb{R}). \qquad (162) \]
We use the solvability condition in (158b) to obtain that
\[ \int_{\mathbb{R}^2} \left( \mathcal{L}_1 u_0 - \frac{\partial u_0}{\partial t}\right) F\big( H(p, q)\big)\, dp\, dq = 0. \qquad (163) \]
To proceed, we need to understand how $\mathcal{L}_1$ acts on functions of $H(p, q)$. Let $\phi = \phi\big( H(p, q)\big)$. We have that
\[ \frac{\partial\phi}{\partial p} = \frac{\partial H}{\partial p}\frac{\partial\phi}{\partial H} = p\,\frac{\partial\phi}{\partial H} \]
and
Pavliotis (IC) StochProc January 16, 2011 306 / 367

\[ \frac{\partial^2\phi}{\partial p^2} = \frac{\partial}{\partial p}\left( p\,\frac{\partial\phi}{\partial H}\right) = \frac{\partial\phi}{\partial H} + p^2\,\frac{\partial^2\phi}{\partial H^2}. \]
The above calculations imply that, when $\mathcal{L}_1$ acts on functions $\phi = \phi\big( H(p, q)\big)$, it becomes
\[ \mathcal{L}_1\phi = \big(\beta^{-1} - p^2\big)\frac{\partial\phi}{\partial H} + \beta^{-1} p^2\,\frac{\partial^2\phi}{\partial H^2}, \qquad (164) \]
where
\[ p^2 = p^2(H, q) = 2\big( H - V(q)\big). \]
Pavliotis (IC) StochProc January 16, 2011 307 / 367
We want to change variables in the integral (163) and go from $(p, q)$ to $(H, q)$. The Jacobian of the transformation is:
\[ \frac{\partial(p, q)}{\partial(H, q)} = \begin{vmatrix} \dfrac{\partial p}{\partial H} & \dfrac{\partial p}{\partial q} \\[4pt] \dfrac{\partial q}{\partial H} & \dfrac{\partial q}{\partial q} \end{vmatrix} = \frac{\partial p}{\partial H} = \frac{1}{p(H, q)}. \]
We use this, together with (164), to rewrite eqn. (163) as
\[ \int\!\!\int \left( -\frac{\partial u}{\partial t} + \Big( \big(\beta^{-1} - p^2\big)\frac{\partial}{\partial H} + \beta^{-1} p^2\,\frac{\partial^2}{\partial H^2}\Big) u\right) F(H)\, p^{-1}(H, q)\, dH\, dq = 0. \]
We introduce the notation
\[ \langle\,\cdot\,\rangle := \int \cdot\; dq. \]
The integration over $q$ can be performed "explicitly":
\[ \int \left( -\frac{\partial u}{\partial t}\,\langle p^{-1}\rangle + \Big( \big(\beta^{-1}\langle p^{-1}\rangle - \langle p\rangle\big)\frac{\partial}{\partial H} + \beta^{-1}\langle p\rangle\,\frac{\partial^2}{\partial H^2}\Big) u\right) F(H)\, dH = 0. \]
Pavliotis (IC) StochProc January 16, 2011 308 / 367
This equation should be valid for every smooth function $F(H)$, and this requirement leads to the differential equation
\[ \langle p^{-1}\rangle\,\frac{\partial u}{\partial t} = \big(\beta^{-1}\langle p^{-1}\rangle - \langle p\rangle\big)\,\partial_H u + \langle p\rangle\,\beta^{-1}\,\partial_H^2 u, \]
or,
\[ \frac{\partial u}{\partial t} = \big(\beta^{-1} - \langle p^{-1}\rangle^{-1}\langle p\rangle\big)\,\partial_H u + \langle p^{-1}\rangle^{-1}\langle p\rangle\,\beta^{-1}\,\partial_H^2 u. \]
Thus, we have obtained the limiting backward Kolmogorov equation for the energy, which is the "slow variable". From this equation we can read off the limiting SDE for the Hamiltonian:
\[ \dot{H} = b(H) + \sigma(H)\,\dot{W}, \qquad (165) \]
where
\[ b(H) = \beta^{-1} - \langle p^{-1}\rangle^{-1}\langle p\rangle, \qquad \sigma(H) = \sqrt{2\beta^{-1}\,\langle p^{-1}\rangle^{-1}\langle p\rangle}. \]
Pavliotis (IC) StochProc January 16, 2011 309 / 367
Notice that the noise that appears in the limiting equation (165) is multiplicative, contrary to the additive noise in the Langevin equation.
As is well known from classical mechanics, the action and frequency are defined as
\[ I(E) = \oint p(q, E)\, dq \qquad\text{and}\qquad \omega(E) = 2\pi\left( \frac{dI}{dE}\right)^{-1}, \]
respectively. Using the action and the frequency we can write the limiting Fokker-Planck equation for the distribution function of the energy in a very compact form.
Theorem
The limiting Fokker-Planck equation for the energy distribution function $\rho(E, t)$ is
\[ \frac{\partial\rho}{\partial t} = \frac{\partial}{\partial E}\left( I(E)\Big( 1 + \beta^{-1}\frac{\partial}{\partial E}\Big)\Big( \frac{\omega(E)\,\rho}{2\pi}\Big)\right). \qquad (166) \]
Pavliotis (IC) StochProc January 16, 2011 310 / 367
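The action and frequency entering (166) are easy to evaluate numerically for a concrete potential. The sketch below does this for one well of the bistable quartic potential (169) studied later, using $I(E) = 2\int\sqrt{2(E-V(q))}\,dq$ over the well and a finite-difference derivative for $\omega(E)$; the energy value, grid and step sizes are my choices, and the quadrature is deliberately crude.

    import numpy as np

    V = lambda q: 0.25 * q**4 - 0.5 * q**2 + 0.25      # potential (169)

    def action(E, q_grid):
        p2 = np.clip(2.0 * (E - V(q_grid)), 0.0, None)  # integrand vanishes outside the orbit
        return 2.0 * np.sum(np.sqrt(p2)) * (q_grid[1] - q_grid[0])

    q = np.linspace(0.0, 2.0, 20_001)                   # the right well lies around q = 1
    E, dE = 0.2, 1e-4                                   # an energy below the barrier V(0) = 1/4
    I = action(E, q)
    omega = 2.0 * np.pi * dE / (action(E + dE, q) - action(E - dE, q))
    print(I, omega)                                     # omega -> sqrt(2) at the bottom of the well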
Proof.
We notice that
\[ \frac{dI}{dE} = \oint \frac{\partial p}{\partial E}\, dq = \oint p^{-1}\, dq \]
and consequently $\langle p^{-1}\rangle^{-1} = \frac{\omega(E)}{2\pi}$. Hence, the limiting Fokker-Planck equation can be written as
\[
\begin{aligned}
\frac{\partial\rho}{\partial t} &= \frac{\partial}{\partial E}\left( \Big( \frac{I(E)\,\omega(E)}{2\pi} - \beta^{-1}\Big)\rho\right) + \beta^{-1}\frac{\partial^2}{\partial E^2}\left( \frac{I\omega\rho}{2\pi}\right) \\
&= -\beta^{-1}\frac{\partial\rho}{\partial E} + \frac{\partial}{\partial E}\left( \frac{I\omega\rho}{2\pi}\right) + \beta^{-1}\frac{\partial}{\partial E}\left( \frac{dI}{dE}\,\frac{\omega\rho}{2\pi}\right) + \beta^{-1}\frac{\partial}{\partial E}\left( I\,\frac{\partial}{\partial E}\Big( \frac{\omega\rho}{2\pi}\Big)\right) \\
&= \frac{\partial}{\partial E}\left( \frac{I\omega\rho}{2\pi}\right) + \beta^{-1}\frac{\partial}{\partial E}\left( I\,\frac{\partial}{\partial E}\Big( \frac{\omega\rho}{2\pi}\Big)\right) \\
&= \frac{\partial}{\partial E}\left( I(E)\Big( 1 + \beta^{-1}\frac{\partial}{\partial E}\Big)\Big( \frac{\omega(E)\,\rho}{2\pi}\Big)\right),
\end{aligned}
\]
which is precisely equation (166).
Pavliotis (IC) StochProc January 16, 2011 311 / 367
Remarks
1. We emphasize that the above formal procedure does not provide us with the boundary conditions for the limiting Fokker-Planck equation. We will discuss this issue in the next section.
2. If we rescale back to the original time-scale we obtain the equation
\[ \frac{\partial\rho}{\partial t} = \gamma\,\frac{\partial}{\partial E}\left( I(E)\Big( 1 + \beta^{-1}\frac{\partial}{\partial E}\Big)\Big( \frac{\omega(E)\,\rho}{2\pi}\Big)\right). \qquad (167) \]
We will use this equation later on to calculate the rate of escape from a potential barrier in the energy-diffusion-limited regime.
Pavliotis (IC) StochProc January 16, 2011 312 / 367
Exit Time Problems and Escape from a Potential Well
Pavliotis (IC) StochProc January 16, 2011 313 / 367
Escape From a Potential Well
There are many systems in physics, chemistry and biology that
exist in at least two stable states:
Switching and storage devices in computers.
Conformational changes of biological macromolecules (they can
exist in many different states).
The problems that we would like to study are:
How stable are the various states relative to each other.
How long does it take for a system to switch spontaneously from
one state to another?
How is the transfer made, i.e. through what path in the relevant
state space?
How does the system relax to an unstable state?
Pavliotis (IC) StochProc January 16, 2011 314 / 367
The study of bistability and metastability is a very active research
area. Topics of current research include:
The development of numerical methods for the calculation of
various quantities such as reaction rates, transition pathways etc.
The study of bistable systems in infinite dimensions.
Pavliotis (IC) StochProc January 16, 2011 315 / 367
We will consider the dynamics of a particle moving in a bistable potential, under the influence of thermal noise, in one dimension:
\[ \dot{x} = -V'(x) + \sqrt{2k_B T}\,\dot{W}. \qquad (168) \]
We will consider potentials that have two local minima and one local maximum (saddle point) and that increase at least quadratically fast at infinity. This ensures that the state space is "compact", i.e. that the particle cannot escape to infinity.
The standard potential that satisfies these assumptions is
\[ V(x) = \frac{1}{4}x^4 - \frac{1}{2}x^2 + \frac{1}{4}. \qquad (169) \]
This potential has three local extrema: a local maximum at $x = 0$ and two local minima at $x = \pm 1$.
Pavliotis (IC) StochProc January 16, 2011 316 / 367
Figure: Sample path of the solution to equation (168).
Pavliotis (IC) StochProc January 16, 2011 317 / 367
The values of the potential at these three points are:
\[ V(\pm 1) = 0, \qquad V(0) = \frac{1}{4}. \]
We will say that the height of the potential barrier is $\frac{1}{4}$. The physically (and mathematically!) interesting case is when the thermal fluctuations are weak when compared to the potential barrier that the particle has to climb over.
More generally, we assume that the potential has two local minima at the points $a$ and $c$ and a local maximum at $b$.
We will consider the problem of the escape of the particle from the left local minimum $a$.
The potential barrier is then defined as
\[ E_b = V(b) - V(a). \]
Pavliotis (IC) StochProc January 16, 2011 318 / 367
Our assumption that the thermal fluctuations are weak can be written as
\[ \frac{k_B T}{E_b} \ll 1. \qquad (170) \]
Under condition (170), the particle is most likely to be found close to $a$ or $c$. There it will perform small oscillations around either of the local minima.
We can study the weak noise asymptotics for finite times using standard perturbation theory.
We can describe locally the dynamics of the particle by appropriate Ornstein-Uhlenbeck processes.
This result is valid only for finite times: at sufficiently long times the particle can escape from the one local minimum, $a$ say, and surmount the potential barrier to end up at $c$. It will then spend a long time in the neighborhood of $c$ until it escapes again over the potential barrier and ends up at $a$.
Pavliotis (IC) StochProc January 16, 2011 319 / 367
This is an example of a rare event. The relevant time scale, the exit time or the mean first passage time, scales exponentially in $\beta := (k_B T)^{-1}$:
\[ \tau = \nu^{-1}\exp\big(\beta E_b\big). \]
The rate with which particles escape from a local minimum of the potential is called the rate of escape or the reaction rate $\kappa := \tau^{-1}$:
\[ \kappa = \nu\exp\big( -\beta E_b\big). \qquad (171) \]
It is important to notice that the escape from a local minimum, i.e. a state of local stability, can happen only at positive temperatures: it is a noise assisted event.
In the absence of thermal fluctuations the equation of motion becomes:
\[ \dot{x} = -V'(x), \qquad x(0) = x_0. \]
Pavliotis (IC) StochProc January 16, 2011 320 / 367
In this case the potential becomes a Lyapunov function:
\[ \frac{dx}{dt} = -V'(x) \quad\Longrightarrow\quad \frac{dV(x)}{dt} = -\big( V'(x)\big)^2 < 0. \]
Hence, depending on the initial condition the particle will converge either to $a$ or $c$. The particle cannot escape from either state of local stability.
On the other hand, at high temperatures the particle does not "see" the potential barrier: it essentially jumps freely from one local minimum to another.
We will study this problem and calculate the escape rate by calculating the mean first passage time.
Pavliotis (IC) StochProc January 16, 2011 321 / 367
The Arrhenius-type factor in the formula for the reaction rate, eqn. (171), is intuitively clear and it was observed experimentally in the late nineteenth century by Arrhenius and others.
What is extremely important, both from a theoretical and an applied point of view, is the calculation of the prefactor $\nu$, the rate coefficient.
A systematic approach for the calculation of the rate coefficient, as well as the justification of the Arrhenius kinetics, is that of the mean first passage time (MFPT) method.
Since this method is of independent interest and is useful in various other contexts, we will present it in a quite general setting and apply it to the problem of the escape from a potential barrier later on.
Pavliotis (IC) StochProc January 16, 2011 322 / 367
Let $X_t$ be a continuous time diffusion process on $\mathbb{R}^d$ whose evolution is governed by the SDE
\[ dX_t^x = b\big( X_t^x\big)\, dt + \sigma\big( X_t^x\big)\, dW_t, \qquad X_0^x = x. \qquad (172) \]
Let $D$ be a bounded subset of $\mathbb{R}^d$ with smooth boundary. Given $x \in D$, we want to know how long it takes for the process $X_t$ to leave the domain $D$ for the first time:
\[ \tau_D^x = \inf\big\{ t \geqslant 0 : X_t^x \notin D\big\}. \]
Clearly, this is a random variable. The average of this random variable is called the mean first passage time (MFPT) or the first exit time:
\[ \tau(x) := \mathbb{E}\,\tau_D^x. \]
We can calculate the MFPT by solving an appropriate boundary value problem.
Pavliotis (IC) StochProc January 16, 2011 323 / 367
Theorem
The MFPT is the solution of the boundary value problem
\[ -\mathcal{L}\tau = 1, \quad x \in D, \qquad (173a) \]
\[ \tau = 0, \quad x \in \partial D, \qquad (173b) \]
where $\mathcal{L}$ is the generator of the SDE (172).
The homogeneous Dirichlet boundary conditions correspond to an absorbing boundary: the particles are removed when they reach the boundary. Other choices of boundary conditions are also possible.
The rigorous proof of Theorem 67 is based on Itô's formula.
Pavliotis (IC) StochProc January 16, 2011 324 / 367
Derivation of the BVP for the MFPT
Let $\rho(X, x, t)$ be the probability distribution of the particles that have not left the domain $D$ at time $t$. It solves the FP equation with absorbing boundary conditions:
\[ \frac{\partial\rho}{\partial t} = \mathcal{L}^*\rho, \qquad \rho(X, x, 0) = \delta(X - x), \qquad \rho|_{\partial D} = 0. \qquad (174) \]
We can write the solution to this equation in the form
\[ \rho(X, x, t) = e^{\mathcal{L}^* t}\,\delta(X - x), \]
where the absorbing boundary conditions are included in the definition of the semigroup $e^{\mathcal{L}^* t}$.
The homogeneous Dirichlet (absorbing) boundary conditions imply that
\[ \lim_{t\to +\infty}\rho(X, x, t) = 0. \]
That is: all particles will eventually leave the domain.
Pavliotis (IC) StochProc January 16, 2011 325 / 367
The (normalized) number of particles that are still inside $D$ at time $t$ is
\[ S(x, t) = \int_D \rho(X, x, t)\, dX. \]
Notice that this is a decreasing function of time. We can write
\[ \frac{\partial S}{\partial t} = -f(x, t), \]
where $f(x, t)$ is the first passage time distribution.
Pavliotis (IC) StochProc January 16, 2011 326 / 367
The MFPT is the first moment of the distribution $f(x, t)$:
\[
\begin{aligned}
\tau(x) &= \int_0^{+\infty} f(s, x)\, s\, ds = -\int_0^{+\infty} \frac{dS}{ds}\, s\, ds \\
&= \int_0^{+\infty} S(s, x)\, ds = \int_0^{+\infty}\!\!\int_D \rho(X, x, s)\, dX\, ds \\
&= \int_0^{+\infty}\!\!\int_D e^{\mathcal{L}^* s}\,\delta(X - x)\, dX\, ds \\
&= \int_0^{+\infty}\!\!\int_D \delta(X - x)\,\big( e^{\mathcal{L} s}1\big)\, dX\, ds = \int_0^{+\infty}\big( e^{\mathcal{L} s}1\big)\, ds.
\end{aligned}
\]
We apply $\mathcal{L}$ to the above equation to deduce:
\[ \mathcal{L}\tau = \int_0^{+\infty}\big( \mathcal{L}e^{\mathcal{L}t}1\big)\, dt = \int_0^{+\infty}\frac{d}{dt}\big( e^{\mathcal{L}t}1\big)\, dt = -1. \]
Pavliotis (IC) StochProc January 16, 2011 327 / 367
Consider the boundary value problem for the MFPT of the one dimensional diffusion process (168) from the interval $(a, b)$:
\[ -\beta^{-1} e^{\beta V}\,\partial_x\big( e^{-\beta V}\,\partial_x\tau\big) = 1. \qquad (175) \]
We choose a reflecting BC at $x = a$ and an absorbing BC at $x = b$.
We can solve (175) with these boundary conditions by quadratures:
\[ \tau(x) = \beta\int_x^b dy\, e^{\beta V(y)}\int_a^y dz\, e^{-\beta V(z)}. \qquad (176) \]
Now we can solve the problem of the escape from a potential well: the reflecting boundary is at $x = a$, the left local minimum of the potential, and the absorbing boundary is at $x = b$, the local maximum.
Pavliotis (IC) StochProc January 16, 2011 328 / 367
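The quadrature formula (176) is straightforward to evaluate numerically. The sketch below does so for the bistable potential (169) with reflecting boundary at $a=-1$ and absorbing boundary at $b=0$, and compares with the Kramers asymptotics $\tau \approx \frac{\pi}{\omega_0\omega_b}e^{\beta E_b}$ derived on the following slides; the value of $\beta$ and the grid are my choices, and the agreement improves as $\beta E_b$ grows.

    import numpy as np

    V = lambda x: 0.25 * x**4 - 0.5 * x**2 + 0.25       # potential (169)
    beta, a, b = 10.0, -1.0, 0.0

    z = np.linspace(a, b, 4001)
    dz = z[1] - z[0]
    inner = np.cumsum(np.exp(-beta * V(z))) * dz        # int_a^y exp(-beta V(z)) dz
    tau = beta * np.sum(np.exp(beta * V(z)) * inner) * dz   # formula (176) with x = a

    omega0, omegab = np.sqrt(2.0), 1.0                  # V''(-1) = 2, |V''(0)| = 1
    print(tau, np.pi / (omega0 * omegab) * np.exp(beta * 0.25))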
We can replace the BC at $x = a$ by a repelling BC at $x = -\infty$:
\[ \tau(x) = \beta\int_x^b dy\, e^{\beta V(y)}\int_{-\infty}^y dz\, e^{-\beta V(z)}. \]
When $\beta E_b \gg 1$ the integral with respect to $z$ is dominated by the value of the potential near $a$. Furthermore, we can replace the upper limit of integration by $+\infty$:
\[ \int_{-\infty}^z \exp\big( -\beta V(z)\big)\, dz \approx \int_{-\infty}^{+\infty}\exp\big( -\beta V(a)\big)\exp\left( -\frac{\beta\omega_0^2}{2}(z - a)^2\right) dz = \exp\big( -\beta V(a)\big)\sqrt{\frac{2\pi}{\beta\omega_0^2}}, \]
where we have used the Taylor series expansion around the minimum:
\[ V(z) = V(a) + \frac{1}{2}\,\omega_0^2\, (z - a)^2 + \dots \]
Pavliotis (IC) StochProc January 16, 2011 329 / 367
Similarly, the integral with respect to $y$ is dominated by the value of the potential around the saddle point. We use the Taylor series expansion
\[ V(y) = V(b) - \frac{1}{2}\,\omega_b^2\, (y - b)^2 + \dots \]
Assuming that $x$ is close to $a$, the minimum of the potential, we can replace the lower limit of integration by $-\infty$. We finally obtain
\[ \int_x^b \exp\big(\beta V(y)\big)\, dy \approx \int_{-\infty}^b \exp\big(\beta V(b)\big)\exp\left( -\frac{\beta\omega_b^2}{2}(y - b)^2\right) dy = \frac{1}{2}\exp\big(\beta V(b)\big)\sqrt{\frac{2\pi}{\beta\omega_b^2}}. \]
Pavliotis (IC) StochProc January 16, 2011 330 / 367
Putting everything together we obtain a formula for the MFPT:
\[ \tau(x) = \frac{\pi}{\omega_0\,\omega_b}\exp\big(\beta E_b\big). \]
The rate of arrival at $b$ is $1/\tau$. Only half of the particles escape. Consequently, the escape rate (or reaction rate) is given by $\frac{1}{2\tau}$:
\[ \kappa = \frac{\omega_0\,\omega_b}{2\pi}\exp\big( -\beta E_b\big). \qquad (177) \]
Pavliotis (IC) StochProc January 16, 2011 331 / 367
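One can also estimate the MFPT directly by Monte Carlo and compare with (177). The sketch below runs Euler-Maruyama paths of (168) started at $a=-1$ until they hit the saddle $b=0$; it is slow (a plain Python loop) and only illustrative, and the values of $\beta$, the step size and the number of paths are my choices.

    import numpy as np

    rng = np.random.default_rng(5)
    beta, dt, n_paths = 8.0, 1e-3, 200
    dV = lambda x: x**3 - x                    # V'(x) for the potential (169)
    times = np.empty(n_paths)
    for i in range(n_paths):
        x, t = -1.0, 0.0
        while x < 0.0:                         # absorb at the saddle point b = 0
            x += -dV(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
            t += dt
        times[i] = t
    tau_mc = np.mean(times)
    tau_kramers = np.pi / np.sqrt(2.0) * np.exp(beta * 0.25)
    print(tau_mc, tau_kramers, 1.0 / (2.0 * tau_mc))   # last number estimates kappa in (177)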
Consider now the problem of escape from a potential well for the Langevin equation
\[ \ddot{q} = -\partial_q V(q) - \gamma\dot{q} + \sqrt{2\gamma\beta^{-1}}\,\dot{W}. \qquad (178) \]
The reaction rate depends on the friction coefficient and the temperature. In the overdamped limit ($\gamma \gg 1$) we retrieve (177), appropriately rescaled with $\gamma$:
\[ \kappa = \frac{\omega_0\,\omega_b}{2\pi\gamma}\exp\big( -\beta E_b\big). \qquad (179) \]
We can also obtain a formula for the reaction rate for $\gamma = O(1)$:
\[ \kappa = \frac{\sqrt{\frac{\gamma^2}{4} + \omega_b^2} - \frac{\gamma}{2}}{\omega_b}\;\frac{\omega_0}{2\pi}\exp\big( -\beta E_b\big). \qquad (180) \]
Naturally, in the limit as $\gamma \to +\infty$, (180) reduces to (179).
Pavliotis (IC) StochProc January 16, 2011 332 / 367
In order to calculate the reaction rate in the underdamped or energy-diffusion-limited regime $\gamma \ll 1$ we need to study the diffusion process for the energy, (165) or (166). The result is
\[ \kappa = \gamma\beta\, I(E_b)\,\frac{\omega_0}{2\pi}\, e^{-\beta E_b}, \qquad (181) \]
where $I(E_b)$ denotes the action evaluated at $b$.
A formula for the escape rate which is valid for all values of the friction coefficient was obtained by Melnikov and Meshkov in 1986, J. Chem. Phys. 85(2) 1018-1027. This formula requires the calculation of integrals and it reduces to (179) and (181) in the overdamped and underdamped limits, respectively.
Pavliotis (IC) StochProc January 16, 2011 333 / 367
The escape rate can be calculated in closed form only in 1d or in multidimensional problems that can be reduced to a 1d problem (e.g., problems with radial symmetry).
For more information on the calculation of escape rates and applications of reaction rate theory see
P. Hänggi, P. Talkner and M. Borkovec: Reaction rate theory: fifty years after Kramers. Rev. Mod. Phys. 62(2) 251-341 (1990).
R. Zwanzig: Nonequilibrium Statistical Mechanics, Oxford 2001, Ch. 4.
C.W. Gardiner: Handbook of Stochastic Methods, Springer 2002, Ch. 9.
Freidlin and Wentzell: Random Perturbations of Dynamical Systems, Springer 1998, Ch. 6.
Pavliotis (IC) StochProc January 16, 2011 334 / 367
The Kac-Zwanzig model and the Generalized Langevin Equation
Pavliotis (IC) StochProc January 16, 2011 335 / 367
Contents
Introduction: Einstein's theory of Brownian motion.
Particle coupled to a heat bath: the Kac-Zwanzig model.
Coupled particle/field models.
The Mori-Zwanzig formalism.
Pavliotis (IC) StochProc January 16, 2011 336 / 367
Heavy (Brownian) particle immersed in a fluid.
Collisions of the heavy particle with the light fluid particles.
Statistical description of the motion of the Brownian particle.
The fluid is assumed to be in equilibrium.
The Brownian particle performs an effective random walk:
\[ \langle X(t)^2\rangle = 2Dt. \]
The constant $D$ is the diffusion coefficient (which should be calculated from first principles, i.e. through Green-Kubo formulas).
Phenomenological theories are based either on an equation for the evolution of the probability distribution function $\rho(q, p, t)$ (the Fokker-Planck equation)
\[ \frac{\partial\rho}{\partial t} = -p\,\partial_q\rho + \partial_q V\,\partial_p\rho + \gamma\Big(\partial_p(p\rho) + \frac{1}{\beta}\,\partial_p^2\rho\Big) \qquad (182) \]
or on a stochastic equation for the evolution of trajectories, the Langevin equation
\[ \ddot{q} = -V'(q) - \gamma\dot{q} + \sqrt{2\gamma\beta^{-1}}\,\xi, \qquad (183) \]
where $\xi(t)$ is a white noise process, i.e. a (generalized) mean zero Gaussian Markov process with correlation function
\[ \langle\xi(t)\xi(s)\rangle = \delta(t - s). \]
Pavliotis (IC) StochProc January 16, 2011 338 / 367
Notice that the noise and dissipation in the Langevin equation are not independent. They are related through the fluctuation-dissipation theorem.
This is not a coincidence since, as we will see later, noise and dissipation have the same source, namely the interaction between the Brownian particle and its environment (heat bath).
There are two phenomenological constants, the inverse temperature $\beta$ and the friction coefficient $\gamma$.
Our goal is to derive the Fokker-Planck and Langevin equations from first principles and to calculate the phenomenological coefficients $\beta$ and $\gamma$.
We will consider some simple "particle + environment" systems for which we can obtain rigorously a stochastic equation that describes the dynamics of the Brownian particle.
Pavliotis (IC) StochProc January 16, 2011 339 / 367
We can describe the dynamics of the Brownian particle/fluid system through the Hamiltonian
\[ H(Q_N, P_N; q, p) = H_{BP}(Q_N, P_N) + H_{HB}(q, p) + H_I(Q_N, q), \qquad (184) \]
where $\{q, p\} := \big( \{q_j\}_{j=1}^N, \{p_j\}_{j=1}^N\big)$ are the positions and momenta of the fluid particles, and $N$ is the number of fluid (heat bath) particles (we will need to take the thermodynamic limit $N \to +\infty$).
The initial conditions of the Brownian particle are taken to be fixed, whereas the fluid is assumed to be initially in equilibrium (Gibbs distribution).
Pavliotis (IC) StochProc January 16, 2011 340 / 367
Goal: eliminate the fluid variables $\{q, p\} := \big( \{q_j\}_{j=1}^N, \{p_j\}_{j=1}^N\big)$ to obtain a closed equation for the Brownian particle.
We will see that this equation is a stochastic integrodifferential equation, the Generalized Langevin Equation (GLE) (in the limit as $N \to +\infty$)
\[ \ddot{Q} = -V'(Q) - \int_0^t R(t - s)\,\dot{Q}(s)\, ds + F(t), \qquad (185) \]
where $R(t)$ is the memory kernel and $F(t)$ is the noise.
We will also see that, in some appropriate limit, we can derive the Markovian Langevin equation (183).
We need to model the interaction between the heat bath particles and the coupling between the Brownian particle and the heat bath.
The simplest model is that of a harmonic heat bath and of linear coupling:
\[ H(Q_N, P_N, q, p) = \frac{P_N^2}{2} + V(Q_N) + \sum_{n=1}^N \frac{p_n^2}{2m_n} + \frac{1}{2}\, k_n\big( q_n - \lambda Q_N\big)^2. \qquad (186) \]
The initial conditions of the Brownian particle $\{Q_N(0), P_N(0)\} := \{Q_0, P_0\}$ are taken to be deterministic.
The initial conditions of the heat bath particles are distributed according to the Gibbs distribution, conditional on the knowledge of $\{Q_0, P_0\}$:
\[ \mu_{\beta}(dp\, dq) = Z^{-1} e^{-\beta H(q,p)}\, dq\, dp, \qquad (187) \]
where $\beta$ is the inverse temperature. This is a way of introducing the concept of temperature in the system (through the average kinetic energy of the bath particles).
Pavliotis (IC) StochProc January 16, 2011 342 / 367
In order to choose the initial conditions according to $\mu_{\beta}(dp\, dq)$ we can take
\[ q_n(0) = \lambda Q_0 + \sqrt{\beta^{-1}k_n^{-1}}\,\eta_n, \qquad p_n(0) = \sqrt{m_n\beta^{-1}}\,\xi_n, \qquad (188) \]
where the $\{\eta_n,\ \xi_n\}$ are mutually independent sequences of i.i.d. $\mathcal{N}(0, 1)$ random variables.
Notice that we actually consider the Gibbs measure of an effective (renormalized) Hamiltonian.
Other choices for the initial conditions are possible. For example, we can take $q_n(0) = \sqrt{\beta^{-1}k_n^{-1}}\,\eta_n$. Our choice of I.C. ensures that the forcing term in the GLE that we will derive is mean zero (see below).
Pavliotis (IC) StochProc January 16, 2011 343 / 367
Hamilton's equations of motion are:
\[ \ddot{Q}_N + V'(Q_N) = \sum_{n=1}^N k_n\big(\lambda q_n - \lambda^2 Q_N\big), \qquad (189a) \]
\[ \ddot{q}_n + \omega_n^2\big( q_n - \lambda Q_N\big) = 0, \qquad n = 1, \dots, N, \qquad (189b) \]
where $\omega_n^2 = k_n/m_n$.
The equations for the heat bath particles are second order linear inhomogeneous equations with constant coefficients. Our plan is to solve them and then to substitute the result into the equations of motion for the Brownian particle.
Pavliotis (IC) StochProc January 16, 2011 344 / 367
We can solve the equations of motion for the heat bath variables using the variation of constants formula
\[ q_n(t) = q_n(0)\cos(\omega_n t) + \frac{p_n(0)}{m_n\omega_n}\sin(\omega_n t) + \lambda\omega_n\int_0^t \sin\big(\omega_n(t - s)\big)\, Q_N(s)\, ds. \]
An integration by parts yields
\[ q_n(t) = q_n(0)\cos(\omega_n t) + \frac{p_n(0)}{m_n\omega_n}\sin(\omega_n t) + \lambda Q_N(t) - \lambda Q_N(0)\cos(\omega_n t) - \lambda\int_0^t \cos\big(\omega_n(t - s)\big)\,\dot{Q}_N(s)\, ds. \]
Pavliotis (IC) StochProc January 16, 2011 345 / 367
We substitute this into equation (189a) and use the initial conditions (188) to obtain the Generalized Langevin Equation
\[ \ddot{Q}_N = -V'(Q_N) - \lambda^2\int_0^t R_N(t - s)\,\dot{Q}_N(s)\, ds + \lambda F_N(t), \qquad (190) \]
where the memory kernel is
\[ R_N(t) = \sum_{n=1}^N k_n\cos(\omega_n t) \qquad (191) \]
and the noise process is
\[ F_N(t) = \sum_{n=1}^N k_n\big( q_n(0) - \lambda Q_0\big)\cos(\omega_n t) + \frac{k_n\, p_n(0)}{m_n\omega_n}\sin(\omega_n t) = \sqrt{\beta^{-1}}\sum_{n=1}^N \sqrt{k_n}\,\big(\eta_n\cos(\omega_n t) + \xi_n\sin(\omega_n t)\big). \qquad (192) \]
Pavliotis (IC) StochProc January 16, 2011 346 / 367
1. The noise and dissipation terms are related through the fluctuation-dissipation theorem:
\[ \langle F_N(t)F_N(s)\rangle = \beta^{-1}\sum_{n=1}^N k_n\big(\cos(\omega_n t)\cos(\omega_n s) + \sin(\omega_n t)\sin(\omega_n s)\big) = \beta^{-1} R_N(t - s). \qquad (193) \]
2. The noise $F(t)$ is a mean zero Gaussian process.
3. The choice of the initial conditions (188) for $\{q, p\}$ is crucial for the form of the GLE and, in particular, for the fluctuation-dissipation theorem (193) to be valid.
4. The parameter $\lambda$ measures the strength of the coupling between the Brownian particle and the heat bath.
5. By choosing the frequencies $\omega_n$ and spring constants $k_n(\omega)$ of the heat bath particles appropriately we can pass to the limit as $N \to +\infty$ and obtain the GLE with different memory kernels $R(t)$ and noise processes $F(t)$.
Pavliotis (IC) StochProc January 16, 2011 347 / 367
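The fluctuation-dissipation relation (193) is easy to check numerically for one realization of the bath. The sketch below draws frequencies and spring constants following the scaling of the next slide (with a Lorentzian $f^2$ as in (196)), averages $F_N(t)F_N(0)$ over many draws of the Gaussian initial data, and compares with $\beta^{-1}R_N(t)$. All parameter values are my own choices.

    import numpy as np

    rng = np.random.default_rng(6)
    N, a, beta = 400, 0.5, 1.0
    b = 0.5 * (1.0 - a)
    zeta = rng.uniform(0.0, 1.0, N)
    omega = N**a * zeta
    f2 = (2.0 / np.pi) / (zeta**2 + 1.0)        # Lorentzian f^2 with alpha = 1, cf. (196)
    k = f2 / N**(2 * b)                          # spring constants, cf. slide 348

    t = np.linspace(0.0, 5.0, 200)
    R_N = (k[None, :] * np.cos(np.outer(t, omega))).sum(axis=1)     # memory kernel (191)

    n_real = 5000
    eta = rng.standard_normal((n_real, N))
    xi = rng.standard_normal((n_real, N))
    coef = np.sqrt(k / beta)                     # sqrt(beta^{-1} k_n), cf. (192)
    C = coef * np.cos(np.outer(t, omega))
    S = coef * np.sin(np.outer(t, omega))
    Ft = eta @ C.T + xi @ S.T                    # realizations of F_N(t)
    print(np.max(np.abs(beta * np.mean(Ft * Ft[:, :1], axis=0) - R_N)))  # small: (193) holds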
Let $a \in (0, 1)$, $2b = 1 - a$ and set $\omega_n = N^a\zeta_n$, where the $\{\zeta_n\}_{n=1}^{\infty}$ are i.i.d. with $\zeta_1 \sim \mathcal{U}(0, 1)$. Furthermore, we choose the spring constants according to
\[ k_n = \frac{f^2(\zeta_n)}{N^{2b}}, \]
where the function $f(\zeta)$ decays sufficiently fast at infinity.
We can rewrite the dissipation and noise terms in the form
\[ R_N(t) = \sum_{n=1}^N f^2(\zeta_n)\cos(\omega_n t)\,\Delta\omega \]
and
\[ F_N(t) = \sqrt{\beta^{-1}}\sum_{n=1}^N f(\zeta_n)\big(\eta_n\cos(\omega_n t) + \xi_n\sin(\omega_n t)\big)\sqrt{\Delta\omega}, \]
where $\Delta\omega = N^a/N$.
Pavliotis (IC) StochProc January 16, 2011 348 / 367
Using now properties of Fourier series with random coefficients/frequencies and of weak convergence of probability measures we can pass to the limit:
\[ R_N(t) \to R(t) \quad\text{in } L^1[0, T], \]
for a.a. $\{\zeta_n\}_{n=1}^{\infty}$, and
\[ F_N(t) \to F(t) \quad\text{weakly in } C([0, T], \mathbb{R}). \]
The time $T > 0$ is finite but arbitrary.
The limiting kernel and noise satisfy the fluctuation-dissipation theorem (193):
\[ \langle F(t)F(s)\rangle = \beta^{-1} R(t - s). \qquad (194) \]
$Q_N(t)$, the solution of (190), converges weakly to the solution of the limiting GLE
\[ \ddot{Q} = -V'(Q) - \lambda^2\int_0^t R(t - s)\,\dot{Q}(s)\, ds + \lambda F(t). \qquad (195) \]
Pavliotis (IC) StochProc January 16, 2011 349 / 367
The properties of the limiting dissipation and noise are determined by the function $f(\zeta)$.
Example: Consider the Lorentzian function
\[ f^2(\zeta) = \frac{2\alpha/\pi}{\zeta^2 + \alpha^2} \qquad (196) \]
with $\alpha > 0$. Then
\[ R(t) = e^{-\alpha|t|}. \]
The noise process $F(t)$ is a mean zero stationary Gaussian process with continuous paths and, from (194), exponential correlation function:
\[ \langle F(t)F(s)\rangle = \beta^{-1} e^{-\alpha|t-s|}. \]
Hence, $F(t)$ is the stationary Ornstein-Uhlenbeck process:
\[ \frac{dF}{dt} = -\alpha F + \sqrt{2\alpha\beta^{-1}}\,\frac{dW}{dt}, \qquad (197) \]
with $F(0) \sim \mathcal{N}(0, \beta^{-1})$.
Pavliotis (IC) StochProc January 16, 2011 350 / 367
The GLE (195) becomes
\[ \ddot{Q} = -V'(Q) - \lambda^2\int_0^t e^{-\alpha|t-s|}\,\dot{Q}(s)\, ds + \lambda F(t), \qquad (198) \]
where $F(t)$ is the OU process (197).
Pavliotis (IC) StochProc January 16, 2011 351 / 367
$Q(t)$, the solution of the GLE (195), is not a Markov process, i.e. the future is not statistically independent of the past, when conditioned on the present. The stochastic process $Q(t)$ has memory.
We can turn (195) into a Markovian SDE by enlarging the dimension of the state space, i.e. by introducing auxiliary variables.
We might have to introduce infinitely many variables!
For the case of the exponential memory kernel, when the noise is given by an OU process, it is sufficient to introduce one auxiliary variable.
Pavliotis (IC) StochProc January 16, 2011 352 / 367
We can rewrite (198) as a system of SDEs:
\[ \frac{dQ}{dt} = P, \qquad \frac{dP}{dt} = -V'(Q) + \lambda Z, \qquad \frac{dZ}{dt} = -\alpha Z - \lambda P + \sqrt{2\alpha\beta^{-1}}\,\frac{dW}{dt}, \]
where $Z(0) \sim \mathcal{N}(0, \beta^{-1})$.
The process $\{Q(t), P(t), Z(t)\} \in \mathbb{R}^3$ is Markovian.
It is a degenerate Markov process: noise acts directly only on one of the 3 degrees of freedom.
We can eliminate the auxiliary process $Z$ by taking an appropriate distinguished limit.
Pavliotis (IC) StochProc January 16, 2011 353 / 367
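A short simulation of the Markovian $(Q, P, Z)$ system above, for the quadratic potential $V(Q) = Q^2/2$, is a useful sanity check: at stationarity one expects equipartition, $\langle P^2\rangle = \beta^{-1}$. The potential, the parameters $\lambda$, $\alpha$, $\beta$ and the run length below are my illustrative choices.

    import numpy as np

    rng = np.random.default_rng(7)
    lam, alpha, beta = 1.0, 1.0, 2.0
    dt, n_steps, n_burn = 1e-3, 1_000_000, 100_000
    Vp = lambda q: q                                  # V(Q) = Q^2 / 2
    Q, P = 0.0, 0.0
    Z = rng.normal(0.0, np.sqrt(1.0 / beta))          # Z(0) ~ N(0, beta^{-1})
    acc, cnt = 0.0, 0
    for k in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        Q, P, Z = (Q + P * dt,
                   P + (-Vp(Q) + lam * Z) * dt,
                   Z + (-alpha * Z - lam * P) * dt + np.sqrt(2.0 * alpha / beta) * dW)
        if k >= n_burn:
            acc += P * P
            cnt += 1
    print(acc / cnt, 1.0 / beta)                      # time average of P^2 vs beta^{-1}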
Set $\lambda = \sqrt{\gamma}\,\varepsilon^{-1}$, $\alpha = \varepsilon^{-2}$. The equations become
\[ \frac{dQ}{dt} = P, \qquad \frac{dP}{dt} = -V'(Q) + \frac{\sqrt{\gamma}}{\varepsilon}\, Z, \qquad \frac{dZ}{dt} = -\frac{1}{\varepsilon^2}\, Z - \frac{\sqrt{\gamma}}{\varepsilon}\, P + \sqrt{\frac{2\beta^{-1}}{\varepsilon^2}}\,\frac{dW}{dt}. \]
We can use tools from singular perturbation theory for Markov processes to show that, in the limit as $\varepsilon \to 0$, we have that
\[ \frac{\sqrt{\gamma}}{\varepsilon}\, Z \longrightarrow \sqrt{2\gamma\beta^{-1}}\,\frac{dW}{dt} - \gamma P. \]
Thus, in this limit we obtain the Markovian Langevin equation ($R(t) = \delta(t)$)
\[ \ddot{Q} = -V'(Q) - \gamma\dot{Q} + \sqrt{2\gamma\beta^{-1}}\,\frac{dW}{dt}. \qquad (201) \]
Pavliotis (IC) StochProc January 16, 2011 354 / 367
Whenever the GLE (195) has "finite memory", we can represent it as a Markovian SDE by adding a finite number of additional variables.
These additional variables are solutions of a linear system of SDEs.
This follows from results in approximation theory.
Consider now the case where the memory kernel is a bounded analytic function. Its Laplace transform
\[ \widehat{R}(s) = \int_0^{+\infty} e^{-st} R(t)\, dt \]
can be represented as a continued fraction:
\[ \widehat{R}(s) = \cfrac{\Delta_1^2}{s + \gamma_1 + \cfrac{\Delta_2^2}{\ddots}}\,, \qquad \gamma_i \geqslant 0. \qquad (202) \]
Since $R(t)$ is bounded, we have that
\[ \lim_{s\to\infty}\widehat{R}(s) = 0. \]
Pavliotis (IC) StochProc January 16, 2011 355 / 367
Consider an approximation R_N(t) such that the continued fraction representation terminates after N steps.
R_N(t) is bounded, which implies that

\lim_{s \to \infty} \widehat{R}_N(s) = 0.

The Laplace transform of R_N(t) is a rational function:

\widehat{R}_N(s) = \frac{\sum_{j=1}^{N} a_j s^{N-j}}{s^N + \sum_{j=1}^{N} b_j s^{N-j}}, \qquad a_j, b_j \in \mathbb{R}.   (203)

This is the Laplace transform of the autocorrelation function of an appropriate linear system of SDEs. Indeed, set

\frac{dx_j}{dt} = -b_j x_j + x_{j+1} + a_j \frac{dW_j}{dt}, \qquad j = 1, \dots, N,   (204)

with x_{N+1}(t) = 0. The process x_1(t) is a stationary Gaussian process with autocorrelation function R_N(t).
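A sketch of the auxiliary linear system (204) in Python, using N = 2 and the coefficients of the next slide (a_1 = 0, a_2 = \sqrt{2 \beta^{-1} \alpha_2}); the \alpha_i values, time step and horizon are illustrative, and the transient from the non-stationary initial condition is neglected in the estimate.

import numpy as np

rng = np.random.default_rng(4)
beta, dt, nsteps = 1.0, 1e-3, 1_000_000
b = np.array([1.0, 2.0])                           # b_j = alpha_j, illustrative values
a = np.array([0.0, np.sqrt(2.0 * b[1] / beta)])    # a_1 = 0, a_2 = sqrt(2 alpha_2 / beta)
Nvar = len(b)

x = np.zeros(Nvar)
x1 = np.empty(nsteps)
dW = rng.normal(0.0, np.sqrt(dt), size=(nsteps, Nvar))
for n in range(nsteps):
    drift = -b * x + np.append(x[1:], 0.0)          # x_{N+1} = 0
    x = x + dt * drift + a * dW[n]
    x1[n] = x[0]

lag = int(1.0 / dt)
print(np.mean(x1[lag:] * x1[:-lag]))                # empirical autocorrelation of x_1 at lag t = 1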
Pavliotis (IC) StochProc January 16, 2011 356 / 367
For N = 1 and b_1 = \alpha, a_1 = \sqrt{2 \beta^{-1} \alpha} we derive the GLE (198) with F(t) being the OU process (197).
Consider now the case N = 2 with b_i = \alpha_i, i = 1, 2, and a_1 = 0, a_2 = \sqrt{2 \beta^{-1} \alpha_2}. The GLE becomes

\ddot{Q} = -V'(Q) - \lambda^2 \int_0^t R(t - s) \, \dot{Q}(s) \, ds + \lambda F_1(t),
\dot{F}_1 = -\alpha_1 F_1 + F_2,
\dot{F}_2 = -\alpha_2 F_2 + \sqrt{2 \beta^{-1} \alpha_2} \, \dot{W}_2,

with \beta^{-1} R(t - s) = \langle F_1(t) F_1(s) \rangle.
Pavliotis (IC) StochProc January 16, 2011 357 / 367
We can write (206) as a Markovian system for the variables Q, P, Z_1, Z_2:

\dot{Q} = P,
\dot{P} = -V'(Q) + \lambda Z_1,
\dot{Z}_1 = -\alpha_1 Z_1 + Z_2,
\dot{Z}_2 = -\alpha_2 Z_2 - \lambda P + \sqrt{2 \beta^{-1} \alpha_2} \, \dot{W}_2.

Notice that this diffusion process is "more degenerate" than (198): noise acts on fewer degrees of freedom.
It is still, however, hypoelliptic (Hörmander's condition is satisfied): there is sufficient interaction between the degrees of freedom Q, P, Z_1, Z_2 so that noise (and hence regularity) is transferred from the degrees of freedom that are directly forced by the noise to the ones that are not.
The corresponding Markov semigroup has nice regularizing properties: there exists a smooth density.
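A sketch of this hypoelliptic system in Python (Euler-Maruyama; the quadratic potential and parameter values are illustrative choices). Noise enters only through Z_2, yet all four coordinates fluctuate because of the chain of couplings Z_2 \to Z_1 \to P \to Q.

import numpy as np

rng = np.random.default_rng(5)
alpha1, alpha2, lam, beta = 1.0, 2.0, 1.0, 1.0
dt, nsteps = 1e-3, 1_000_000
Vprime = lambda q: q                                # V(Q) = Q^2/2, illustrative

Q, P, Z1, Z2 = 1.0, 0.0, 0.0, 0.0
Qsq = 0.0
dW = rng.normal(0.0, np.sqrt(dt), size=nsteps)
for n in range(nsteps):
    Q, P, Z1, Z2 = (Q + dt * P,
                    P + dt * (-Vprime(Q) + lam * Z1),
                    Z1 + dt * (-alpha1 * Z1 + Z2),
                    Z2 + dt * (-alpha2 * Z2 - lam * P) + np.sqrt(2.0 * alpha2 / beta) * dW[n])
    Qsq += Q * Q
print(Qsq / nsteps)   # finite and nonzero: Q fluctuates even though noise is injected only via Z_2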
Pavliotis (IC) StochProc January 16, 2011 358 / 367
Stochastic processes that can be written as Markovian processes by adding a finite number of additional variables are called quasi-Markovian.
Under appropriate assumptions on the potential V(Q), the solution of the GLE is an ergodic process.
It is possible to study the ergodic properties of a quasi-Markovian process by analyzing the spectral properties of the generator of the corresponding Markov process.
This leads to the analysis of the spectral properties of hypoelliptic operators.
Pavliotis (IC) StochProc January 16, 2011 359 / 367
When studying the Kac-Zwanzig model we considered a one-dimensional Hamiltonian system coupled to a finite-dimensional Hamiltonian system with random initial conditions (the harmonic heat bath) and then passed to the thermodynamic limit N \to \infty.
We can also consider a small Hamiltonian system coupled to its environment, which we model as an infinite-dimensional Hamiltonian system with random initial conditions. We then have a coupled particle-field model.
The distinguished particle (Brownian particle) is described through the Hamiltonian

H_{DP} = \frac{1}{2} p^2 + V(q).   (207)
Pavliotis (IC) StochProc January 16, 2011 360 / 367
We will model the environment through a classical linear field theory (i.e. the wave equation) with infinite energy:

\partial_t^2 \phi(t, x) = \partial_x^2 \phi(t, x).   (208)

The Hamiltonian of this system is

H_{HB}(\phi, \pi) = \int \big( |\partial_x \phi|^2 + |\pi(x)|^2 \big) \, dx.   (209)

\pi(x) denotes the conjugate momentum field.
The initial conditions are distributed according to the Gibbs measure (which in this case is a Gaussian measure) at inverse temperature \beta, which we formally write as

\mu_\beta = Z^{-1} e^{-\beta H(\phi, \pi)} \, d\phi \, d\pi.   (210)

Care has to be taken when defining probability measures in infinite dimensions.
Pavliotis (IC) StochProc January 16, 2011 361 / 367
Under this assumption on the initial conditions, typical configurations of the heat bath have infinite energy.
In this way, the environment can pump enough energy into the system so that non-trivial fluctuations emerge.
We will assume a linear coupling between the particle and the field:

H_I(q, \phi) = q \int \partial_x \phi(x) \, \rho(x) \, dx,   (211)

where the function \rho(x) models the coupling between the particle and the field.
This coupling is motivated by the dipole coupling approximation from classical electrodynamics.
Pavliotis (IC) StochProc January 16, 2011 362 / 367
The Hamiltonian of the particle-field model is

H(q, p, \phi, \pi) = H_{DP}(p, q) + H_{HB}(\phi, \pi) + H_I(q, \phi).   (212)

The corresponding Hamiltonian equations of motion are a coupled system of equations for the particle and the field.
Now we can proceed as in the case of the finite-dimensional heat bath: we integrate the equations of motion for the heat bath variables and substitute the solution into the equation for the Brownian particle to obtain the GLE. The final result is

\ddot{q} = -V'(q) - \int_0^t R(t - s) \, \dot{q}(s) \, ds + F(t),   (213)

with appropriate definitions for the memory kernel and the noise, which are related through the fluctuation-dissipation theorem.
Pavliotis (IC) StochProc January 16, 2011 363 / 367
Consider now the (N+1)-dimensional Hamiltonian system (particle + heat bath) with random initial conditions. The (N+1)-particle distribution function f_{N+1} satisfies the Liouville equation

\frac{\partial f_{N+1}}{\partial t} + \{f_{N+1}, H\} = 0,   (214)

where H is the full Hamiltonian and \{\cdot, \cdot\} is the Poisson bracket

\{A, B\} = \sum_{j=0}^{N} \left( \frac{\partial A}{\partial q_j} \frac{\partial B}{\partial p_j} - \frac{\partial B}{\partial q_j} \frac{\partial A}{\partial p_j} \right).

We introduce the Liouville operator

\mathcal{L}_{N+1} \, \cdot = -i \{\cdot, H\}.

The Liouville equation can then be written as

i \frac{\partial f_{N+1}}{\partial t} = \mathcal{L}_{N+1} f_{N+1}.   (215)
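A small symbolic sketch of the Poisson bracket above for a single degree of freedom (j = 0 only), written in Python with the sympy package as an assumed dependency; it checks that {H, H} = 0 and that {q, H} and {p, H} reproduce Hamilton's equations.

import sympy as sp

q, p = sp.symbols('q p', real=True)

def poisson(A, B, qs=(q,), ps=(p,)):
    # {A, B} = sum_j (dA/dq_j dB/dp_j - dB/dq_j dA/dp_j)
    return sum(sp.diff(A, qi) * sp.diff(B, pi) - sp.diff(B, qi) * sp.diff(A, pi)
               for qi, pi in zip(qs, ps))

V = sp.Function('V')
H = p**2 / 2 + V(q)

print(poisson(H, H))          # 0
print(poisson(q, H))          # p, i.e. dq/dt
print(poisson(p, H))          # -V'(q), i.e. dp/dt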
Pavliotis (IC) StochProc January 16, 2011 364 / 367
We want to obtain a closed equation for the distribution function of the Brownian particle. We introduce a projection operator P which projects onto the distribution function f of the Brownian particle:

P f_{N+1} = f, \qquad (I - P) f_{N+1} = h.

The Liouville equation becomes

i \frac{\partial f}{\partial t} = P \mathcal{L} (f + h),   (216a)
i \frac{\partial h}{\partial t} = (I - P) \mathcal{L} (f + h).   (216b)
Pavliotis (IC) StochProc January 16, 2011 365 / 367
We integrate the second equation and substitute into the first equation. We obtain

i \frac{\partial f}{\partial t} = P \mathcal{L} f - i \int_0^t P \mathcal{L} e^{-i (I - P) \mathcal{L} s} (I - P) \mathcal{L} f(t - s) \, ds + P \mathcal{L} e^{-i (I - P) \mathcal{L} t} h(0).   (217)

In the Markovian limit (large mass ratio) we obtain the Fokker-Planck equation (182).
Pavliotis (IC) StochProc January 16, 2011 366 / 367
L. Rey-Bellet, Open Classical Systems.
D. Givon, R. Kupferman and A.M. Stuart, Extracting Macroscopic Dynamics: Model Problems and Algorithms, Nonlinearity 17 (2004), R55-R127.
R.M. Mazo, Brownian Motion, Oxford University Press, 2002.
Pavliotis (IC) StochProc January 16, 2011 367 / 367