
The optimal timing of investment decisions.

DRAFT∗
Timothy C. Johnson† and Mihail Zervos‡
Department of Mathematics
King’s College London
The Strand
London WC2R 2LS, UK
November 22, 2006

Abstract
This paper addresses reversible investment decisions, where the investor can enter
an investment that pays a dividend, and has the possibility to leave the investment,
either receiving or paying a fee. The objective of the decision maker is to maximise the
expected discounted cashflow of the system over an infinite time horizon. The under-
lying stochastic process driving the system is modelled by a general one-dimensional
positive Itô diffusion and the initialisation and abandonment costs, the discounting
factor, and the running payoffs can all be functions of the diffusion. A set of sufficient
conditions on the problem's data is identified under which the problem admits an explicit
analytic solution.
This problem has a number of applications in finance and economics.
This problem has a number of applications in finance and economics.


∗ Research supported by EPSRC grant no. GR/S22998/01 and the Isaac Newton Institute, Cambridge.

† E-mail: timothy.johnson@kcl.ac.uk

‡ E-mail: mihail.zervos@kcl.ac.uk, Tel: +44 20 7848 2633, Fax: +44 20 7848 2017.


1 Introduction
We consider the timing of investment decisions. In this context, an investment is charac-
terised by making a known payment in order to receive, subsequently, an unknown cashflow.
The holder of the investment may stop the cashflow in the future, for example, if the payoff
of stopping is expected to be greater than the future cashflows. A simple example could
be the decision to buy an equity, which has a cost. Holding the equity gives the investor a
dividend stream. If the investor feels that the equity is overvalued, in that the current market
price exceeds the net present value of future dividends, they would sell the equity. The in-
vestor could repeat the process any number of times. Another example could be the decision
to build a production facility, which will have a cost but will provide an income based on
the demand for its product. At some point in the future the demand for the product may
fall or, equivalently, more producers may enter the market, so that the cashflow generated by
the facility becomes negative. At this point the investor may be tempted to abandon the
production facility, which could incur a cost. This type of decision may be only possible
once.
In order to address the question of whether to take or leave investments, we consider the
situation where there is some stochastic process that represents the state of the economy,
such as the price of an equity or the demand for a product. We also assume that the costs
associated with taking and leaving the investment and the cashflow of the investment are
known given the state process. In addition, there is discounting, which again may be state
dependent, and the investor has an infinite time horizon. Within this general context we
consider two types of situation: one where only a single entry into or exit from the investment
may be made, and one where any number of entries and exits can be made.
Models relating to single entry and exit problems have been studied in the context of real
options by various authors. For example, Paddock, Siegel and Smith [?] and Dixit and
Pindyck [?, Sections 6.3, 7.1] adopt an economics perspective. More recent works include
Knudsen, Meister and Zervos [?], who generalise Dixit and Pindyck’s model, but consider
the abandonment problem alone, and Duckworth and Zervos [?], who extend Knudsen,
Meister and Zervos to include the initialisation problem. In fact, the model studied by
Duckworth and Zervos can be seen as an extension of the fundamental real options problem
introduced by McDonald and Siegel [?], which is concerned with determining the optimal
time to invest in a given project. McDonald and Siegel implicitly considered the payoff being
the discounted future cashflows of the project, Knudsen, Meister and Zervos explicitly model
these payoffs in solving the abandonment problem, and Duckworth and Zervos combine the
two approaches. All these papers focus on the state process being represented by a geometric
Brownian motion, the entry and exit costs and discounting rate being constant.
With regard to the problem of sequential entry and exit, Brekke and Øksendal [?] analyse
a general model without providing explicit results. Duckworth and Zervos [?] consider a
special case, where the state process is represented by a geometric Brownian motion and the
entry and exit costs and discounting rate are constant, and provide explicit results. Other
authors, such as Lumley and Zervos [?], Hodges [?] and Pham [?], consider related problems.
The objective of this paper is to obtain explicit results for the general model: specifically,
to consider a state process given by a general Itô diffusion rather than a geometric
Brownian motion, non-constant cost functions, and state-dependent discounting. Essentially,
we extend Johnson and Zervos [?] in the direction of the models presented by Duckworth
and Zervos. The reason for developing the existing theory so that it applies to state
processes given by general Itô diffusions is to enable models to be developed
for a wider range of applications than just those driven by a geometric Brownian motion.
These include financial or economic applications where the asset price shows mean-reversion,
such as interest rates, exchange rates and commodities, but it also enables the modelling of
non-financial situations, such as those found in biological systems. Introducing state-dependent
discounting enables a more realistic modelling framework for decisions. In the context of in-
vestment decisions, state-dependent discounting reflects the dependence of the default likelihood
of an investment project on the economic environment affecting the project: specifically, if a
firm's income relies on the price of one product, it will find its borrowing costs higher if
the price of that product falls. In a biological setting it reflects the dependence of extinction
likelihood on the environment. State-dependent costs and payoffs enable utility-based decision
making, as well as the use of demand, rather than asset prices, as the underlying state process. In
addition, state-dependent cost functions are useful when dealing with cases where inputs are
a finite resource. For example, consider the case where a financier has decided to invest in a
widget production facility because the price of widgets is high. In this situation one would
expect other production facilities to be introduced and, if widget producers are a scarce
resource, their cost may go up.
Within this general framework, we establish the investment strategies which are optimal,
depending on the nature of the state process, payoff and cost functions and the discounting
environment.
The paper is organised as follows. Section 2 is concerned with a rigorous formulation of
the investment problems that we solve. In this section we also develop a set of assumptions
that are sufficient for our problem to admit a solution, the structure of which conforms with
the applications in finance, economics and biology discussed above. We adopt a weak for-
mulation, which involves no additional technicalities but offers extra degrees of freedom
in modelling, with a view to a wider range of applications. In Section 3, we study
the situation where only a single entry or exit decision may be made. Specifically we address
initialisation, abandonment and initialisation and then abandonment problems. It turns out
that the first two of these problems are closely related to the discretionary stopping problems
studied in Johnson and Zervos [?]. In Section 4, we study the situation where any number
of entry and exit decisions may be made. Using intuition developed in Section 3, we start
by investigating the case where multiple entry and exit decisions may define the optimal
strategy. We then consider the simpler case where being “in” or “out” of the investment is
optimal, whatever the value of the state process. We finish by considering the case, where
although any number of entry and exit decisions are possible, the optimal strategy is, in
fact, similar to the initialisation or abandonment strategies in Section 3. Finally, Appendix
A presents some results, initially presented in Johnson and Zervos [?], which are key to our
analysis.

2 Problem formulation and assumptions


To address the problems discussed above, we consider a stochastic system, whose state
process X satisfies the one-dimensional SDE

\[ dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t, \qquad X_0 = x > 0, \tag{1} \]

where W is a one-dimensional, standard Brownian motion and b, σ : ]0, ∞[ → R are given
deterministic functions. We impose conditions (ND)′ and (LI)′ in Karatzas and Shreve [?,
Section 5.5C], which are sufficient for (1) to have a weak solution that is unique in the sense
of probability law. Specifically, we impose the following assumption.

Assumption 2.1 The functions b, σ : ]0, ∞[ → R satisfy the following conditions:
\[ \sigma^2(x) > 0, \quad \text{for all } x \in \,]0, \infty[, \tag{2} \]
\[ \text{for all } x \in \,]0, \infty[ \text{ there exists } \varepsilon > 0 \text{ such that } \int_{x-\varepsilon}^{x+\varepsilon} \frac{1 + |b(s)|}{\sigma^2(s)}\, ds < \infty. \tag{3} \]

In the presence of Assumption 2.1, given any c > 0, the scale function p_c and the speed
measure m_c(dx), given by
\[ p_c(x) = \int_c^x \exp\left( -2 \int_c^s \frac{b(u)}{\sigma^2(u)}\, du \right) ds, \quad \text{for } x > 0, \tag{4} \]
\[ m_c(dx) = \frac{2}{\sigma^2(x)\, p_c'(x)}\, dx, \tag{5} \]
respectively, are well-defined and characterise one-dimensional diffusions such as the one
associated with (1). In what follows, we assume that the constant c > 0 is fixed.
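
As a concrete illustration (our own, not part of the original analysis), the following Python sketch evaluates the scale function (4) and the speed-measure density in (5) by numerical quadrature; the coefficients b and σ of a hypothetical mean-reverting diffusion are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical coefficients of a mean-reverting diffusion on ]0, oo[ (illustrative only).
b = lambda x: 2.0 * (1.0 - x)        # drift b(x)
sigma = lambda x: 0.5 * np.sqrt(x)   # volatility sigma(x)
c = 1.0                              # the fixed reference point c > 0

def p_prime(x):
    """p_c'(x) = exp(-2 int_c^x b(u)/sigma^2(u) du), the integrand in (4)."""
    val, _ = quad(lambda u: 2.0 * b(u) / sigma(u) ** 2, c, x)
    return np.exp(-val)

def p_c(x):
    """The scale function (4), normalised so that p_c(c) = 0 and p_c'(c) = 1."""
    val, _ = quad(p_prime, c, x)
    return val

def m_density(x):
    """The density of the speed measure m_c(dx) in (5)."""
    return 2.0 / (sigma(x) ** 2 * p_prime(x))

print(p_c(2.0), m_density(2.0))
```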
We also assume that the diffusion X is non-explosive, i.e., the probability that X hits
either of the boundaries 0 or ∞ of its state space in finite time is zero. In particular, we
impose the following assumption that follows from Feller’s test for explosions (see Karatzas
and Shreve [?, Theorem 5.5.29]).

Assumption 2.2 If we define
\[ u(x) = \int_c^x \big[ p_c(x) - p_c(y) \big]\, m_c(dy), \]
then $\lim_{x \downarrow 0} u(x) = \lim_{x \to \infty} u(x) = \infty$.


In addition, we consider the strictly decreasing function φ and the strictly increasing
function ψ defined by
\[ \phi(x) = \begin{cases} 1 / \mathbb{E}_z\big[ e^{-\Lambda_{\tau_x}} \big], & \text{for } x < z, \\ \mathbb{E}_x\big[ e^{-\Lambda_{\tau_z}} \big], & \text{for } x \ge z, \end{cases} \tag{6} \]
\[ \psi(x) = \begin{cases} \mathbb{E}_x\big[ e^{-\Lambda_{\tau_z}} \big], & \text{for } x < z, \\ 1 / \mathbb{E}_z\big[ e^{-\Lambda_{\tau_x}} \big], & \text{for } x \ge z, \end{cases} \tag{7} \]
where $\Lambda_t = \int_0^t r(X_s)\, ds$ and $\tau_x$ is the first hitting time of x by the diffusion. These functions
are C¹, their first derivatives are absolutely continuous functions and they are independent
solutions to the homogeneous ODE
\[ \mathcal{L}w(x) := \tfrac{1}{2} \sigma^2(x) w''(x) + b(x) w'(x) - r(x) w(x) = 0, \quad \text{for } x > 0, \tag{8} \]
(see Johnson and Zervos [?, Appendix A]). Using the fact that φ and ψ satisfy the ODE (8),
it is a straightforward exercise to verify that the scale function p_c defined by (4) satisfies
\[ p_c'(x) = \frac{\phi(x)\psi'(x) - \phi'(x)\psi(x)}{\mathcal{W}(c)}, \quad \text{for all } x > 0, \tag{9} \]
where $\mathcal{W}$ is the Wronskian, defined by $\mathcal{W}(x) = \phi(x)\psi'(x) - \phi'(x)\psi(x)$.
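
To make φ, ψ and the Wronskian concrete, here is a minimal sketch for the special case of a geometric Brownian motion with a constant discounting rate; this special case, and the parameter values, are illustrative assumptions of ours, under which (8) reduces to an Euler equation whose solutions are powers of x.

```python
import numpy as np

# Illustrative special case: dX_t = b X_t dt + sigma X_t dW_t with r(x) = r constant.
b, sigma, r = 0.02, 0.3, 0.05

# Exponents k solving (1/2) sigma^2 k (k - 1) + b k - r = 0.
k_minus, k_plus = sorted(np.roots([0.5 * sigma**2, b - 0.5 * sigma**2, -r]))
assert k_minus < 0 < k_plus          # r > 0 forces one negative and one positive root

phi = lambda x: x ** k_minus         # a strictly decreasing solution of (8)
psi = lambda x: x ** k_plus          # a strictly increasing solution of (8)

# Wronskian W(x) = phi(x) psi'(x) - phi'(x) psi(x); by (9), W(x) = W(c) p_c'(x).
W = lambda x: (k_plus - k_minus) * x ** (k_minus + k_plus - 1.0)
print(phi(2.0), psi(2.0), W(2.0))
```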


We adopt weak formulations of the optimisation problems that we solve. Having specified
the conditions on the state process of our stochastic system, we now formulate the problems
that we will address. To this end, we define the following classes of decision strategies.

Definition 2.1 Given an initial condition x > 0, we define the following decision strategies:

An admissible stopping strategy is any pair (Sx, τ), where Sx = (Ω, F, Ft, Px, X, W) is a weak
solution to (1) and τ is an (Ft)-stopping time. We denote by Sx the set of all admissible
stopping strategies.

An initialisation strategy is any admissible stopping strategy (Sx , τ1 ) ∈ Sx .

An abandonment strategy is any admissible stopping strategy (Sx , τ0 ) ∈ Sx .

An admissible initialisation and abandonment strategy is any triplet (Sx, τ1, τ0), where Sx =
(Ω, F, Ft, Px, X, W) is a weak solution to (1) and τ1, τ0 are (Ft)-stopping times such
that τ1 ≤ τ0, Px-a.s. We denote by Cx the family of all such admissible strategies.

An admissible switching strategy is a pair (Sx, Z), where Sx = (Ω, F, Ft, Px, X, W) is a weak
solution to (1) and Z is an (Ft)-adapted, finite-variation, càdlàg process taking values
in {0, 1} with Z0 = z. We denote by Zx,z the set of all such control strategies.
We consider the following related optimisation problems. The first one, the initialisation
of a payoff flow problem, can be regarded as determining the optimal time at which
a decision maker should activate an investment project. In this context, each initialisation
strategy (Sx, τ1) ∈ Sx is associated with the performance criterion
\[ J^I(\mathbb{S}_x, \tau_1) = \mathbb{E}_x\left[ \left( \int_{\tau_1}^{\infty} e^{-\Lambda_t} h(X_t)\, dt - e^{-\Lambda_{\tau_1}} g_1(X_{\tau_1}) \right) \mathbf{1}_{\{\tau_1 < \infty\}} \right]. \tag{10} \]
Here h : ]0, ∞[ → R is a deterministic function modelling the payoff flow that the project
yields after its initialisation, while g1 : ]0, ∞[ → R is a deterministic function modelling the
cost of initialising the project, and $\Lambda_t := \int_0^t r(X_s)\, ds$, with r : ]0, ∞[ → ]0, ∞[ a given
deterministic discounting function. The objective of the decision maker is to maximise $J^I$
over all initialisation strategies (Sx, τ1) ∈ Sx. The resulting value function is defined by
\[ v^I(x) = \sup_{(\mathbb{S}_x, \tau_1) \in \mathcal{S}_x} J^I(\mathbb{S}_x, \tau_1), \quad \text{for } x > 0. \tag{11} \]
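
For intuition, the sketch below estimates J^I of (10) by Monte Carlo for the simple threshold rule τ₁ = inf{t ≥ 0 : X_t ≥ θ}. The geometric Brownian motion state process, the constant discounting rate, the payoff flow h(x) = x, the constant cost g₁ and the truncation of the infinite horizon at T are all illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: GBM state process, constant r, h(x) = x, constant g1.
b, sigma, r, g1 = 0.02, 0.3, 0.05, 1.0
x_init, theta = 0.8, 1.2             # initial state and a candidate entry threshold
dt, T, n_paths = 1e-2, 30.0, 2000    # Euler step, horizon truncation, sample size

def estimate_J_I():
    """Crude Monte Carlo estimate of J^I in (10) for tau_1 = inf{t : X_t >= theta}."""
    x = np.full(n_paths, x_init)
    payoff = np.zeros(n_paths)
    entered = np.zeros(n_paths, dtype=bool)
    disc = 1.0                                    # e^{-Lambda_t}; deterministic since r is constant
    for _ in range(int(T / dt)):
        newly = ~entered & (x >= theta)
        payoff[newly] -= disc * g1                # pay the cost g_1(X_{tau_1}) on entry
        entered |= newly
        payoff[entered] += disc * x[entered] * dt # collect the flow h(X_t) dt after entry
        x += b * x * dt + sigma * x * np.sqrt(dt) * rng.standard_normal(n_paths)
        disc *= np.exp(-r * dt)
    return payoff.mean()

print(estimate_J_I())
```

Sweeping θ in such a simulation traces out a profile whose maximiser approximates the optimal threshold derived analytically in Section 3.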

The abandonment of a payoff flow problem aims at determining the optimal time at which
a decision maker, who receives a payoff from an active investment project, should terminate
the project. In this case, each abandonment strategy (Sx, τ0) ∈ Sx is associated with the
performance index
\[ J^A(\mathbb{S}_x, \tau_0) = \mathbb{E}_x\left[ \int_0^{\tau_0} e^{-\Lambda_t} h(X_t)\, dt - e^{-\Lambda_{\tau_0}} g_0(X_{\tau_0}) \mathbf{1}_{\{\tau_0 < \infty\}} \right]. \tag{12} \]
Here, the function h : ]0, ∞[ → R is the same as in (10), while g0 : ]0, ∞[ → R is a determin-
istic function modelling the cost of abandoning the project. This problem's value function
is given by
\[ v^A(x) = \sup_{(\mathbb{S}_x, \tau_0) \in \mathcal{S}_x} J^A(\mathbb{S}_x, \tau_0), \quad \text{for } x > 0. \tag{13} \]

The initialisation of a payoff flow with the option to abandon problem is the combination
of the previous two problems. It arises when a decision maker is faced with the requirement
to optimally determine the time at which an investment project should be activated and,
subsequently, the time at which the project should be abandoned. In this problem, the
associated performance criterion is defined by
\[ J^{IA}(\mathbb{S}_x, \tau_1, \tau_0) = \mathbb{E}_x\left[ \left( -e^{-\Lambda_{\tau_1}} g_1(X_{\tau_1}) + \int_{\tau_1}^{\tau_0} e^{-\Lambda_t} h(X_t)\, dt \right) \mathbf{1}_{\{\tau_1 < \infty\}} - e^{-\Lambda_{\tau_0}} g_0(X_{\tau_0}) \mathbf{1}_{\{\tau_0 < \infty\}} \right], \tag{14} \]
where the functions h, g1 and g0 are the same as in (10) and (12). The objective is to maximise
this performance index over all admissible initialisation and abandonment strategies in Cx,
and the associated value function is defined by
\[ v^{IA}(x) = \sup_{(\mathbb{S}_x, \tau_1, \tau_0) \in \mathcal{C}_x} J^{IA}(\mathbb{S}_x, \tau_1, \tau_0), \quad \text{for } x > 0. \tag{15} \]

The sequential entry and exit problem models the situation where the decision maker can
choose between two deterministic payoffs of the state process, and there is no limit on the
number of times the decision maker can switch between the two payoffs. In this problem,
the task of the decision maker is to select the times at which the system should be switched.
These sequential decisions form a control strategy and they are modelled by the process
Z. For clarity, we shall differentiate the two payoffs by calling one "closed" and the other
"open". If the system is open at time t then Zt = 1, whereas if it is closed at time t then
Zt = 0. While operating in the open mode the system provides a running payoff given by a
function h1 : ]0, ∞[ → R, and while in the closed mode the payoff is given by h0 : ]0, ∞[ → R.
The transition from one operating mode to the other is immediate. Transitions from the
closed to the open mode are indicated by $(\Delta Z_t)^+ = \mathbf{1}_{\{Z_t - Z_{t-} = 1\}}$, with a cost given by the
function g1 : ]0, ∞[ → R, while transitions from the open to the closed mode are indicated by
$(\Delta Z_t)^- = \mathbf{1}_{\{Z_{t-} - Z_t = 1\}}$, with an associated cost given by g0 : ]0, ∞[ → R. Given this
formulation, the objective of the decision maker is to maximise the performance criterion
\[ J^S(\mathbb{S}_x, Z) := \mathbb{E}_{x,z}\left[ \int_0^{\infty} e^{-\Lambda_t} \big( Z_t h_1(X_t) + (1 - Z_t) h_0(X_t) \big)\, dt - \sum_{0 \le s} e^{-\Lambda_s} \big( g_1(X_s)(\Delta Z_s)^+ + g_0(X_s)(\Delta Z_s)^- \big) \right] \tag{16} \]
over all admissible switching strategies. Accordingly, we define the value function $v^S$ by
\[ v^S(z, x) := \sup_{(\mathbb{S}_x, Z) \in \mathcal{Z}_{x,z}} J^S(\mathbb{S}_x, Z), \quad \text{for } x > 0,\ z \in \{0, 1\}. \tag{17} \]

In addition to Assumptions 2.1–2.2, and with reference to Johnson and Zervos [?], we
impose the following conditions on the problem's data.

Assumption 2.3 The functions g1, g0 : ]0, ∞[ → R are C¹ with absolutely continuous first
derivatives, and h and h1 − h0 are measurable. In addition:

a. There exists a constant r0 such that
\[ 0 < r_0 \le r(x) < \infty, \quad \text{for all } x > 0. \tag{18} \]

b. σ is locally bounded.

c. Given any weak solution Sx to (1), the functions g1, g0 satisfy
\[ \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_t} \big| \mathcal{L} g_0(X_t) \big|\, dt \right] < \infty, \quad \text{for all } x > 0, \tag{19} \]
\[ \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_t} \big| \mathcal{L} g_1(X_t) \big|\, dt \right] < \infty, \quad \text{for all } x > 0, \tag{20} \]
where the operator $\mathcal{L}$ is defined as in (8). Also, given any weak solution Sx to (1), the
functions h and h1 − h0 satisfy
\[ \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_t} \big| h(X_t) \big|\, dt \right] < \infty, \quad \text{for all } x > 0, \tag{21} \]
\[ \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_t} \big| h_1(X_t) - h_0(X_t) \big|\, dt \right] < \infty, \quad \text{for all } x > 0. \tag{22} \]

With reference to Johnson and Zervos [?], a consequence of Assumptions 2.1–2.3 is that
Propositions A.1–A.2 and Remark A.1 of the Appendix hold. These results will prove useful
in the analysis that we undertake.
In the cases where there is the possibility of both entry and exit decisions, we impose the
following assumption, based on hindsight from our subsequent analysis.

Assumption 2.4 The functions g1, g0 : ]0, ∞[ → R satisfy the condition
\[ \mathcal{L}(g_0 + g_1)(x) < 0, \quad \text{for all } x > 0. \tag{23} \]

In the case of the sequential entry and exit problem, in order to exclude the possibility
of making a profit simply by switching, instantaneously, between open and closed modes, we
have an additional assumption.

Assumption 2.5 The functions g1, g0 : ]0, ∞[ → R satisfy the condition
\[ g_0(x) + g_1(x) > 0, \quad \text{for all } x > 0. \tag{24} \]



3 The single entry or exit problems


3.1 The initialisation of a payoff flow problem
Given Assumptions 2.1–2.3, the function Rh defined in Proposition A.1, with the
representation
\[ R_h(x) = \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_s} h(X_s)\, ds \right], \quad \text{for all } x > 0, \]
is well-defined. Furthermore, we can use (118) of Proposition A.2 in the Appendix to see
that, given any initialisation strategy (Sx, τ1),
\[ J^I(\mathbb{S}_x, \tau_1) = \mathbb{E}_x\left[ e^{-\Lambda_{\tau_1}} (R_h - g_1)(X_{\tau_1}) \mathbf{1}_{\{\tau_1 < \infty\}} \right]. \]
We can easily check that our assumptions and the results established in the Appendix im-
ply that this discretionary stopping problem satisfies the assumptions of Johnson and
Zervos [?, Case III, Theorem 3.2], so its solution is provided by the following result.
Theorem 3.1 Suppose that Assumptions 2.1–2.3 hold, and consider the initialisation prob-
lem formulated in Section 2.

Case I. If L(Rh − g1)(x) > 0 for all x > 0, then, given any initial condition x > 0,
the value function is given by v^I(x) = 0 and no initialisation strategy is
optimal.

Case II. If L(Rh − g1)(x) < 0 for all x > 0, then, given any initial condition x > 0, the
value function is given by v^I(x) = (Rh − g1)(x) and the initialisation strategy
(S^I_x, 0) ∈ Sx, where S^I_x is a weak solution to (1), is optimal.

Case III. Suppose that (Rh − g1)(x) > 0 for some x ∈ ]0, ∞[ and
\[ \mathcal{L}(R_h - g_1)(x) \begin{cases} > 0, & \text{for } x < x_1, \\ < 0, & \text{for } x > x_1, \end{cases} \qquad x_1 > 0. \tag{25} \]
The value function v^I identifies with the function w^I defined by
\[ w^I(x) = \begin{cases} B^I \psi(x), & \text{if } x < x_1^I, \\ (R_h - g_1)(x), & \text{if } x \ge x_1^I, \end{cases} \tag{26} \]
with x_1^I > 0 being the unique solution to q^I(x) = 0, where q^I is defined by
\[ q^I(x) = p_c'(x) \int_0^x \mathcal{L}(R_h - g_1)(s)\, \psi(s)\, m(ds), \quad \text{for all } x > 0, \tag{27} \]
and B^I > 0 being given by
\[ B^I = \frac{(R_h - g_1)(x_1^I)}{\psi(x_1^I)} = \frac{(R_h' - g_1')(x_1^I)}{\psi'(x_1^I)}. \tag{28} \]
Furthermore, given any initial condition x > 0, the initialisation strategy (S^I_x, τ_1^I) ∈
Sx, where S^I_x is a weak solution to (1) and
\[ \tau_1^I = \inf\{ t \ge 0 \mid X_t \ge x_1^I \}, \]
is optimal.
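
By way of illustration, Case III becomes fully explicit in a geometric Brownian motion special case (our assumption, not the paper's): with h(x) = x, constant r > b and constant g₁, one has R_h(x) = x/(r − b), and q^I(x_1^I) = 0 is equivalent to the smooth-pasting identity (28). The sketch below solves this numerically and compares the result with the classical closed-form threshold.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative data: GBM with drift b < r, payoff flow h(x) = x, constant cost g1.
b, sigma, r, g1 = 0.02, 0.3, 0.05, 1.0
k_plus = max(np.roots([0.5 * sigma**2, b - 0.5 * sigma**2, -r]))  # exponent of psi
Rh = lambda x: x / (r - b)          # R_h for h(x) = x under constant r > b

def smooth_paste(x):
    """(R_h' - g_1')(x) psi(x) - (R_h - g_1)(x) psi'(x); its zero is x_1^I, cf. (28)."""
    psi, dpsi = x ** k_plus, k_plus * x ** (k_plus - 1.0)
    return (1.0 / (r - b)) * psi - (Rh(x) - g1) * dpsi

x1_I = brentq(smooth_paste, 1e-6, 1e3)
print(x1_I, k_plus * g1 * (r - b) / (k_plus - 1.0))   # the two values agree
```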

3.2 The abandonment of a payoff flow problem


Now, (117) implies that, given any abandonment strategy (Sx, τ0),
\[ J^A(\mathbb{S}_x, \tau_0) = R_h(x) - \mathbb{E}_x\left[ e^{-\Lambda_{\tau_0}} (R_h + g_0)(X_{\tau_0}) \mathbf{1}_{\{\tau_0 < \infty\}} \right], \]
and so
\[ v^A(x) = R_h(x) + \sup_{(\mathbb{S}_x, \tau_0) \in \mathcal{S}_x} \mathbb{E}_x\left[ -e^{-\Lambda_{\tau_0}} (R_h + g_0)(X_{\tau_0}) \mathbf{1}_{\{\tau_0 < \infty\}} \right]. \]
The structure of Rh implies that we associate this situation with Case IV of Theorem 3.2 in
Johnson and Zervos [?], so its solution is provided by the following result.
Theorem 3.2 Suppose that Assumptions 2.1–2.3 hold, and consider the abandonment prob-
lem formulated in Section 2.

Case I. If L(Rh + g0)(x) < 0 or (Rh + g0)(x) ≥ 0 for all x > 0, then, given any initial
condition x > 0, the value function is given by v^A(x) = Rh(x) and no
abandonment strategy is optimal.

Case II. If L(Rh + g0)(x) > 0 for all x > 0, then, given any initial condition x > 0,
the value function is given by v^A(x) = −g0(x) and the abandonment strategy
(S^A_x, 0) ∈ Sx, where S^A_x is a weak solution to (1), is optimal.

Case III. Suppose that (Rh + g0)(x) < 0 for some x ∈ ]0, ∞[ and
\[ \mathcal{L}(R_h + g_0)(x) \begin{cases} > 0, & \text{for } x < x_0, \\ < 0, & \text{for } x > x_0, \end{cases} \qquad x_0 > 0. \tag{29} \]
The value function v^A identifies with the function w^A defined by
\[ w^A(x) = \begin{cases} -g_0(x), & \text{if } x \le x_0^A, \\ A^A \phi(x) + R_h(x), & \text{if } x > x_0^A, \end{cases} \qquad \text{where } x_0^A < x_0, \tag{30} \]
with x_0^A > 0 being the unique solution to q^A(x) = 0, where q^A is defined by
\[ q^A(x) = p_c'(x) \int_x^{\infty} \mathcal{L}(R_h + g_0)(s)\, \phi(s)\, m(ds), \quad \text{for all } x > 0, \tag{31} \]
and A^A > 0 being given by
\[ A^A = -\frac{(R_h + g_0)(x_0^A)}{\phi(x_0^A)} = -\frac{(R_h' + g_0')(x_0^A)}{\phi'(x_0^A)}. \tag{32} \]
Furthermore, given any initial condition x > 0, the abandonment strategy (S^A_x, τ_0^A) ∈
Sx, where S^A_x is a weak solution to (1) and
\[ \tau_0^A = \inf\{ t \ge 0 \mid X_t \le x_0^A \}, \]
is optimal.
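
A mirror-image check of Case III, under the same illustrative GBM assumptions as the previous sketch but with a constant g₀ < 0 (interpretable as a salvage value received on abandonment), in which case (29) holds with x₀ = −r g₀:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative data: GBM, h(x) = x, constant g0 < 0 so that (29) holds with x0 = -r * g0.
b, sigma, r, g0 = 0.02, 0.3, 0.05, -0.1
k_minus = min(np.roots([0.5 * sigma**2, b - 0.5 * sigma**2, -r]))  # exponent of phi
Rh = lambda x: x / (r - b)

def smooth_paste(x):
    """(R_h' + g_0')(x) phi(x) - (R_h + g_0)(x) phi'(x); its zero is x_0^A, cf. (32)."""
    phi, dphi = x ** k_minus, k_minus * x ** (k_minus - 1.0)
    return (1.0 / (r - b)) * phi - (Rh(x) + g0) * dphi

x0_A = brentq(smooth_paste, 1e-6, 1.0)
print(x0_A, -r * g0)     # x_0^A lies below x_0 = -r * g0, as in (30)
```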

3.3 The initialisation of a payoff flow with the option to abandon problem
With regard to the initialisation of a payoff flow with the option to abandon problem,
\[ J^{IA}(\mathbb{S}_x, \tau_1, \tau_0) = \mathbb{E}_x\left[ e^{-\Lambda_{\tau_1}} \left( R_h(X_{\tau_1}) - \mathbb{E}_{X_{\tau_1}}\left[ e^{-\Lambda_{\tau_0}} (R_h + g_0)(X_{\tau_0}) \mathbf{1}_{\{\tau_0 < \infty\}} \right] - g_1(X_{\tau_1}) \right) \mathbf{1}_{\{\tau_1 < \infty\}} \right]. \]
In addressing this problem, we start by observing that the problem is not well posed if the
optimal strategy involves initialisation followed immediately by abandonment, which will be
the case if the initialisation point lies below the abandonment point, that is, if x_0^{IA} ≥ x_1^{IA}.
In addition, if abandonment is never optimal, then this issue does not arise and
\[ J^{IA}(\mathbb{S}_x, \tau_1, \tau_0) = \mathbb{E}_x\left[ e^{-\Lambda_{\tau_1}} (R_h - g_1)(X_{\tau_1}) \mathbf{1}_{\{\tau_1 < \infty\}} \right] = J^I(\mathbb{S}_x, \tau_1). \]
When abandonment is optimal, recall that the abandonment problem was solved in
Theorem 3.2 and that its performance criterion is given by
\[ J^A(\mathbb{S}_x, \tau_0) = R_h(x) - \mathbb{E}_x\left[ e^{-\Lambda_{\tau_0}} (R_h + g_0)(X_{\tau_0}) \mathbf{1}_{\{\tau_0 < \infty\}} \right]. \]

Hence,
\[ v^{IA}(x) = \sup_{(\mathbb{S}_x, \tau_1, \tau_0) \in \mathcal{C}_x} J^{IA}(\mathbb{S}_x, \tau_1, \tau_0) = \sup_{(\mathbb{S}_x, \tau_1, \tau_0) \in \mathcal{C}_x} \mathbb{E}_x\left[ e^{-\Lambda_{\tau_1}} \left( J^A(\mathbb{S}_{X_{\tau_1}}, \tau_0) - g_1(X_{\tau_1}) \right) \mathbf{1}_{\{\tau_1 < \infty\}} \right] = \sup_{(\mathbb{S}_x, \tau_1) \in \mathcal{S}_x} \mathbb{E}_x\left[ e^{-\Lambda_{\tau_1}} \left( v^A - g_1 \right)(X_{\tau_1}) \mathbf{1}_{\{\tau_1 < \infty\}} \right], \]
which equates to an optimal stopping problem with payoff v^A(x) − g1(x). With reference
to Case III of Theorem 3.2 in Johnson and Zervos [?], we require v^A(x) − g1(x) > 0 for some
x ∈ ]0, ∞[ and
\[ \mathcal{L}(v^A - g_1)(x) \begin{cases} > 0, & \text{for } x < x_1, \\ < 0, & \text{for } x > x_1, \end{cases} \qquad x_1 > 0, \]
with
\[ v^A(x) = \begin{cases} -g_0(x), & \text{if } x \le x_0^A, \\ A^A \phi(x) + R_h(x), & \text{if } x > x_0^A. \end{cases} \]
Assuming that x_1 > x_0 > x_0^A, we note that
\[ (v^A - g_1)(x) = \begin{cases} -(g_0(x) + g_1(x)), & \text{if } x \le x_0^A, \\ A^A \phi(x) + R_h(x) - g_1(x), & \text{if } x > x_0^A, \end{cases} \]
\[ \mathcal{L}(v^A - g_1)(x) = \begin{cases} -\mathcal{L}(g_0 + g_1)(x), & \text{if } x \le x_0^A, \\ \mathcal{L}(R_h - g_1)(x), & \text{if } x > x_0^A. \end{cases} \]

Theorem 3.3 Suppose that Assumptions 2.1–2.4 hold, and consider the initialisation and
abandonment problem formulated in Section 2. Suppose, in addition, that
\[ \mathcal{L}(R_h + g_0)(x) \begin{cases} > 0, & \text{for } x < x_0, \\ < 0, & \text{for } x > x_0, \end{cases} \qquad x_0 > 0, \tag{33} \]
\[ \mathcal{L}(R_h - g_1)(x) \begin{cases} > 0, & \text{for } x < x_1, \\ < 0, & \text{for } x > x_1, \end{cases} \qquad x_1 > 0, \tag{34} \]
\[ R_h(x) + g_0(x) < 0 \ \text{ and } \ R_h(x) - g_1(x) > 0, \quad \text{for some } x > 0. \tag{35} \]
In this case, the optimal strategy involves initialisation of the payoff flow and abandonment
at a later time. The value function v^{IA} identifies with w, where
\[ w(x) = \begin{cases} B^{IA} \psi(x), & \text{if } x < x_1^{IA}, \\ A^A \phi(x) + R_h(x) - g_1(x), & \text{if } x \ge x_1^{IA}, \end{cases} \qquad \text{where } x_1^{IA} > x_0^A. \tag{36} \]
Here, A^A and x_0^A are as defined in Theorem 3.2, with x_1^{IA} > 0 being the unique solution to
q^{IA}(x) = 0, where q^{IA} is defined by
\[ q^{IA}(x) = \mathcal{W}(c)\, p_c'(x) \left( A^A - \int_0^x \mathcal{L}(R_h - g_1)(s)\, \psi(s)\, m(ds) \right), \quad \text{for all } x > 0, \tag{37} \]
and B^{IA} > 0 given by
\[ B^{IA} = \frac{\left( A^A \phi + R_h - g_1 \right)(x_1^{IA})}{\psi(x_1^{IA})} = \frac{\left( A^A \phi' + R_h' - g_1' \right)(x_1^{IA})}{\psi'(x_1^{IA})}. \tag{38} \]
Furthermore, given any initial condition x > 0, the initialisation strategy (S^{IA}_x, τ_1^{IA}) ∈ Sx,
where S^{IA}_x is a weak solution to (1) and
\[ \tau_1^{IA} = \inf\{ t \ge 0 \mid X_t \ge x_1^{IA} \}, \]
combined with the abandonment strategy (S^{IA}_x, τ_0^{IA}) ∈ Sx, where S^{IA}_x is a weak solution
to (1) and
\[ \tau_0^{IA} = \inf\{ t > \tau_1^{IA} \mid X_t \le x_0^A \}, \]
is optimal.

Proof: We first note that the payoff A^A φ(x) + R_h(x) − g_1(x) satisfies all the assumptions
associated with Theorem 3.1, and the payoff R_h + g_0 satisfies all the conditions of
Theorem 3.2. In addition,
\[ A^A = \frac{A^A \phi(x_0^A) \psi'(x_0^A) - A^A \phi'(x_0^A) \psi(x_0^A)}{\mathcal{W}(x_0^A)} = -\frac{\big( R_h(x_0^A) + g_0(x_0^A) \big) \psi'(x_0^A) - \big( R_h'(x_0^A) + g_0'(x_0^A) \big) \psi(x_0^A)}{\mathcal{W}(x_0^A)} = \int_0^{x_0^A} \mathcal{L}(R_h + g_0)(s)\, \psi(s)\, m(ds), \]
and so
\[ q^{IA}(x) = \mathcal{W}(c)\, p_c'(x) \left( A^A - \int_0^x \mathcal{L}(R_h - g_1)(s)\, \psi(s)\, m(ds) \right) = \mathcal{W}(c)\, p_c'(x) \left( \int_0^{x_0^A} \mathcal{L}(R_h + g_0)(s)\, \psi(s)\, m(ds) - \int_0^x \mathcal{L}(R_h - g_1)(s)\, \psi(s)\, m(ds) \right) = \mathcal{W}(c)\, p_c'(x) \left( \int_0^{x_0^A} \mathcal{L}(g_0 + g_1)(s)\, \psi(s)\, m(ds) - \int_{x_0^A}^x \mathcal{L}(R_h - g_1)(s)\, \psi(s)\, m(ds) \right). \]
The structure of L(R_h − g_1)(x), given by (34), and of L(g_0 + g_1)(x), given by (23) of
Assumption 2.4, imply that q^{IA}(x) = 0 has a unique solution and, in addition, that
x_0^A < x_1^{IA}. Consequently, τ_0^{IA} > τ_1^{IA}. □

4 The sequential entry and exit problems


We can see that the performance criterion J^S(Sx, Z) is equivalent to
\[ J^S(\mathbb{S}_x, Z) = \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_t} h_0(X_t)\, dt \right] + \tilde{J}^S(\mathbb{S}_x, Z), \]
where
\[ \tilde{J}^S(\mathbb{S}_x, Z) = \mathbb{E}_{x,z}\left[ \int_0^{\infty} e^{-\Lambda_t} Z_t h(X_t)\, dt - \sum_{0 \le s} e^{-\Lambda_s} \big( g_1(X_s)(\Delta Z_s)^+ + g_0(X_s)(\Delta Z_s)^- \big) \right] \]
and h(x) = h_1(x) − h_0(x). Hence,
\[ v^S(z, x) = R_{h_0}(x) + \tilde{v}^S(z, x), \qquad \tilde{v}^S(z, x) = \sup_{(\mathbb{S}_x, Z) \in \mathcal{Z}_{x,z}} \tilde{J}^S(\mathbb{S}_x, Z), \quad \text{for } x > 0,\ z \in \{0, 1\}, \]
and we focus our attention on identifying \(\tilde{v}^S(z, x)\), since R_{h_0}(x) is a deterministic function.
With regard to the standard theory of stochastic control, we expect that the value function
\(\tilde{v}^S\) identifies with a solution w of the Hamilton–Jacobi–Bellman (HJB) equation
\[ \max\Big\{ \tfrac{1}{2} \sigma^2(x) w_{xx}(z, x) + b(x) w_x(z, x) - r(x) w(z, x) + z h(x),\ w(1 - z, x) - w(z, x) - z g_0(x) - (1 - z) g_1(x) \Big\} = 0. \tag{39} \]

We start by considering the case where switching between the "open" and "closed" modes
is the optimal strategy, depending on the state process. We then turn our attention to the
cases where it is optimal to operate either in the open or in the closed mode for all values of
the state process, or where it is optimal to switch to the open or closed mode for specific
values of the state process but not to switch back.

4.1 The case when switching is optimal


Given the intuition developed in the study of the initialisation, abandonment, and initialisa-
tion and abandonment cases, we start by considering the switching case where the optimal
strategy is to switch from the "open" mode to the "closed" mode for all x ≤ x_0^S, and to
switch from the "closed" mode to the "open" mode for all x ≥ x_1^S. Again, with reference to
standard heuristic arguments that explain the structure of (39), we look for a solution w to
(39) that satisfies
\[ w(0, x) - w(1, x) - g_0(x) = 0, \quad \text{for } x \le x_0^S, \tag{40} \]
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(1, x) + b(x) w_x(1, x) - r(x) w(1, x) + h(x) = 0, \quad \text{for } x > x_0^S, \tag{41} \]
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(0, x) + b(x) w_x(0, x) - r(x) w(0, x) = 0, \quad \text{for } x < x_1^S, \tag{42} \]
\[ w(1, x) - w(0, x) - g_1(x) = 0, \quad \text{for } x \ge x_1^S. \tag{43} \]
Such a solution is given by
\[ w(1, x) = \begin{cases} B^S \psi(x) - g_0(x), & \text{if } x \le x_0^S, \\ A^S \phi(x) + R_h(x), & \text{if } x > x_0^S, \end{cases} \tag{44} \]
\[ w(0, x) = \begin{cases} B^S \psi(x), & \text{if } x < x_1^S, \\ A^S \phi(x) + R_h(x) - g_1(x), & \text{if } x \ge x_1^S. \end{cases} \tag{45} \]
To specify the parameters A^S, B^S, x_0^S and x_1^S, we appeal to the so-called "smooth-pasting"
condition of optimal stopping, which requires the value function to be C¹, in particular at the
free boundary points x_0^S and x_1^S. This requirement yields the system of equations
\[ A^S \phi(x_0^S) + R_h(x_0^S) = B^S \psi(x_0^S) - g_0(x_0^S), \qquad A^S \phi'(x_0^S) + R_h'(x_0^S) = B^S \psi'(x_0^S) - g_0'(x_0^S), \]
and
\[ A^S \phi(x_1^S) + R_h(x_1^S) - g_1(x_1^S) = B^S \psi(x_1^S), \qquad A^S \phi'(x_1^S) + R_h'(x_1^S) - g_1'(x_1^S) = B^S \psi'(x_1^S). \]
From these expressions we can see that
\[ A^S = -\frac{\big( R_h(x_0^S) \psi'(x_0^S) - R_h'(x_0^S) \psi(x_0^S) \big) + \big( g_0(x_0^S) \psi'(x_0^S) - g_0'(x_0^S) \psi(x_0^S) \big)}{(\psi'(c) - \phi'(c))\, p_c'(x_0^S)} = \frac{\big( g_1(x_1^S) \psi'(x_1^S) - g_1'(x_1^S) \psi(x_1^S) \big) - \big( R_h(x_1^S) \psi'(x_1^S) - R_h'(x_1^S) \psi(x_1^S) \big)}{(\psi'(c) - \phi'(c))\, p_c'(x_1^S)} \tag{46} \]
and
\[ B^S = -\frac{\big( R_h(x_0^S) \phi'(x_0^S) - R_h'(x_0^S) \phi(x_0^S) \big) + \big( g_0(x_0^S) \phi'(x_0^S) - g_0'(x_0^S) \phi(x_0^S) \big)}{(\psi'(c) - \phi'(c))\, p_c'(x_0^S)} = \frac{\big( g_1(x_1^S) \phi'(x_1^S) - g_1'(x_1^S) \phi(x_1^S) \big) - \big( R_h(x_1^S) \phi'(x_1^S) - R_h'(x_1^S) \phi(x_1^S) \big)}{(\psi'(c) - \phi'(c))\, p_c'(x_1^S)}. \tag{47} \]

Combining (46)–(47) with the identities (114)–(115), we obtain the system of equations
\[ q_0^S(x_0^S, x_1^S) = 0, \qquad q_1^S(x_0^S, x_1^S) = 0, \]
where
\[ q_0^S(y, z) = \int_y^{\infty} \mathcal{L}(g_0 + R_h)(s)\, \phi(s)\, m(ds) + \int_z^{\infty} \mathcal{L}(g_1 - R_h)(s)\, \phi(s)\, m(ds) \tag{48} \]
and
\[ q_1^S(y, z) = -\left( \int_0^y \mathcal{L}(g_0 + R_h)(s)\, \psi(s)\, m(ds) + \int_0^z \mathcal{L}(g_1 - R_h)(s)\, \psi(s)\, m(ds) \right). \tag{49} \]
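
Before stating the theorem, it may help to see the system q_0^S = q_1^S = 0 solved numerically. The sketch below does so under illustrative GBM assumptions of ours (h(x) = x, constant costs with g₀ < 0, so that the sign conditions of the theorem below can hold); the quadrature and the initial guess supplied to fsolve are part of the illustration, and convergence depends on that guess.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# Illustrative data: GBM state, constant r, h(x) = x, constant costs g0 < 0 < g1.
b, sigma, r, g0, g1 = 0.02, 0.3, 0.05, -0.1, 1.0
k_minus, k_plus = sorted(np.roots([0.5 * sigma**2, b - 0.5 * sigma**2, -r]))
phi = lambda x: x ** k_minus
psi = lambda x: x ** k_plus
p_prime = lambda x: x ** (-2.0 * b / sigma**2)        # p_c'(x) for GBM with c = 1
m = lambda x: 2.0 / (sigma**2 * x**2 * p_prime(x))    # speed-measure density

L_g0_Rh = lambda x: -r * g0 - x    # L(g0 + R_h)(x): constant g0 gives L g0 = -r g0, L R_h = -x
L_g1_Rh = lambda x: -r * g1 + x    # L(g1 - R_h)(x)

def q0(y, z):
    """q_0^S(y, z) of (48); quad handles the improper upper limits."""
    i1, _ = quad(lambda s: L_g0_Rh(s) * phi(s) * m(s), y, np.inf)
    i2, _ = quad(lambda s: L_g1_Rh(s) * phi(s) * m(s), z, np.inf)
    return i1 + i2

def q1(y, z):
    """q_1^S(y, z) of (49)."""
    i1, _ = quad(lambda s: L_g0_Rh(s) * psi(s) * m(s), 0.0, y)
    i2, _ = quad(lambda s: L_g1_Rh(s) * psi(s) * m(s), 0.0, z)
    return -(i1 + i2)

x0S, x1S = fsolve(lambda v: [q0(*v), q1(*v)], x0=[0.004, 0.06])
print(x0S, x1S)   # expect x0S < x0 = -r * g0 and x1S > x1 = r * g1
```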

With these observations in mind we have the following theorem.

Theorem 4.1 Suppose that Assumptions 2.1–2.5 hold, and consider the switching problem
formulated in Section 2. Suppose, in addition, that
\[ \mathcal{L}(g_0 + R_h)(x) \begin{cases} > 0, & \text{for } x < x_0, \\ < 0, & \text{for } x > x_0, \end{cases} \qquad x_0 > 0, \tag{50} \]
\[ \mathcal{L}(g_1 - R_h)(x) \begin{cases} < 0, & \text{for } x < x_1, \\ > 0, & \text{for } x > x_1, \end{cases} \qquad x_1 > 0, \tag{51} \]
\[ R_h(x) + g_0(x) < 0 \ \text{ and } \ R_h(y) - g_1(y) > 0, \quad \text{for some } x, y > 0. \tag{52} \]
The value function \(\tilde{v}^S\) identifies with w defined by (44)–(45), with A^S, B^S > 0 being given
by (46)–(47), respectively, and 0 < x_0^S < x_1^S being the unique solutions to q_0^S(y, z) = 0 and
q_1^S(y, z) = 0, where q_0^S, q_1^S are defined by (48)–(49), respectively.
Furthermore, define the (Ft)-adapted, finite-variation, càdlàg control process Z^S, taking
values in {0, 1}, as being switched from the closed state to the open state (Z^S_{s−} = 0 to
Z^S_s = 1) at the stopping times given by
\[ \tau_1^S = \inf\{ t \ge s \ge 0 \mid X_t \ge x_1^S \text{ and } Z_s^S = Z_t^S = 0 \}, \]
while Z^S is switched from the open state to the closed state (Z^S_{s−} = 1 to Z^S_s = 0) at the
stopping times given by
\[ \tau_0^S = \inf\{ t \ge s \ge 0 \mid X_t \le x_0^S \text{ and } Z_s^S = Z_t^S = 1 \}. \]
Given any initial condition x > 0 and z ∈ {0, 1}, the switching strategy (S^S_x, Z^S) ∈ Z_{x,z},
where S^S_x is a weak solution to (1) and Z^S satisfies Z_0^S = z, is optimal.

Proof. We begin by proving that the opening point x_1^S and the closing point x_0^S are
unique, with x_0^S < x_1^S. We start by making the trivial observation that (50), (51) and
Assumption 2.4 imply that x_0 < x_1. Now, we show that (49) uniquely defines a mapping
l : ]0, ∞[ → ]0, ∞[ such that
\[ q_1^S(y, l(y)) = 0 \quad \text{and} \quad l(y) > y. \]
First, fix any 0 < y < ∞ and note that
\[ q_1^S(y, y) = -\int_0^y \mathcal{L}(g_0 + g_1)(s)\, \psi(s)\, m(ds), \]
which is positive given (23) of Assumption 2.4. Also,
\[ \frac{\partial}{\partial z} q_1^S(y, z) = -\frac{2\, \mathcal{L}(g_1 - R_h)(z)\, \psi(z)}{\sigma^2(z)\, \mathcal{W}(c)\, p_c'(z)}, \]
which is positive for z < x_1 and negative for z > x_1, and so, if there is l(y) > y such
that q_1^S(y, l(y)) = 0, then it is unique. To show that l(y) exists, combine the fact that
lim_{x→∞}(g_1 − R_h)(x)/ψ(x) = 0, which is a consequence of (20) and the identities in
Proposition A.2, with (51) and (52) to see that
\[ \lim_{x \to \infty} \int_0^x \mathcal{L}(g_1 - R_h)(s)\, \psi(s)\, m(ds) = \infty, \]
and so, for any y < ∞,
\[ \lim_{z \to \infty} q_1^S(y, z) < 0. \]
Now, observe that
\[ \lim_{y \downarrow 0} q_1^S(y, l(y)) = -\int_0^{l(y)} \mathcal{L}(g_1 - R_h)(s)\, \psi(s)\, m(ds) = \frac{q^I(l(y))}{p_c'(l(y))}, \]
where q^I is given by (27). Since we define l(y) by q_1^S(y, l(y)) = 0, with reference to
Theorem 3.1, which tells us that q^I(x) = 0 at x = x_1^I, we can see that lim_{y↓0} l(y) = x_1^I >
x_1 > 0 and, consequently, x_1^S > x_1. For future reference, observe that
\[ dq_1^S(y, l(y)) = \frac{\partial}{\partial y} q_1^S(y, l(y))\, dy + \frac{\partial}{\partial z} q_1^S(y, l(y))\, dz = 0, \]
and so we have that
\[ l'(y) = -\frac{\frac{\partial}{\partial y} q_1^S(y, l(y))}{\frac{\partial}{\partial z} q_1^S(y, l(y))} = -\frac{\mathcal{L}(g_0 + R_h)(y)\, \psi(y)\, \sigma^2(l(y))\, p_c'(l(y))}{\mathcal{L}(g_1 - R_h)(l(y))\, \psi(l(y))\, \sigma^2(y)\, p_c'(y)}. \tag{53} \]
Now consider
\[ q_0^S(y, l(y)) = \int_y^{\infty} \mathcal{L}(g_0 + R_h)(s)\, \phi(s)\, m(ds) + \int_{l(y)}^{\infty} \mathcal{L}(g_1 - R_h)(s)\, \phi(s)\, m(ds) = \int_y^{l(y)} \mathcal{L}(g_0 + R_h)(s)\, \phi(s)\, m(ds) + \int_{l(y)}^{\infty} \mathcal{L}(g_0 + g_1)(s)\, \phi(s)\, m(ds), \]
and, noting that L(g_0 + g_1)(x) ≤ 0 by Assumption 2.4 and that L(g_0 + R_h)(x) < 0 for
x > x_0 by (50), we have that
\[ \lim_{y \to \infty} q_0^S(y, l(y)) < 0. \]
Using (53), observe that
\[ \frac{\partial}{\partial y} q_0^S(y, l(y)) = -\mathcal{L}(g_0 + R_h)(y)\, \phi(y)\, m(dy) + \mathcal{L}(g_1 - R_h)(l(y))\, \phi(l(y))\, m(dl(y))\, l'(y) = -\mathcal{L}(g_0 + R_h)(y)\, \psi(y)\, m(dy) \left( \frac{\phi(y)}{\psi(y)} - \frac{\phi(l(y))}{\psi(l(y))} \right). \]
Given
\[ \frac{d}{dx} \left( \frac{\phi(x)}{\psi(x)} \right) = -\frac{\psi'(x)\phi(x) - \psi(x)\phi'(x)}{\psi^2(x)} < 0, \]
we have that \(\partial q_0^S(y, l(y)) / \partial y\) has the sign of −L(g_0 + R_h)(y). Using this with the fact that
lim_{y→∞} q_0^S(y, l(y)) < 0, there will be a unique x_0^S < x_0 such that q_0^S(x_0^S, l(x_0^S)) = 0,
provided that lim_{y↓0} q_0^S(y, l(y)) is positive. Noting that lim_{y↓0} l(y) = x_1^I > x_1, we have
\[ \lim_{y \downarrow 0} \int_{l(y)}^{\infty} \mathcal{L}(g_1 - R_h)(s)\, \phi(s)\, m(ds) = \int_{x_1^I}^{\infty} \mathcal{L}(g_1 - R_h)(s)\, \phi(s)\, m(ds) = \frac{\big( g_1(x_1^I) - R_h(x_1^I) \big) \phi'(x_1^I) - \big( g_1'(x_1^I) - R_h'(x_1^I) \big) \phi(x_1^I)}{\mathcal{W}(c)\, p_c'(x_1^I)} = \frac{-B^I \psi(x_1^I) \phi'(x_1^I) + B^I \psi'(x_1^I) \phi(x_1^I)}{\mathcal{W}(c)\, p_c'(x_1^I)} = B^I, \]
where B^I is the coefficient defined in (28), which is positive given (51) and (52). So
\[ \lim_{y \downarrow 0} q_0^S(y, l(y)) > \lim_{y \downarrow 0} \int_y^{\infty} \mathcal{L}(g_0 + R_h)(s)\, \phi(s)\, m(ds) = \lim_{y \downarrow 0} \frac{q^A(y)}{p_c'(y)}, \]
where q^A is defined by (31). Since R_h satisfies (112) and g_0 satisfies (125), we have that
lim_{y↓0}(g_0 + R_h)(y)/φ(y) = 0. Combined with (50) and (52), this shows that
lim_{y↓0} q^A(y)/p_c'(y) and, therefore, lim_{y↓0} q_0^S(y, l(y)) are both positive. As a result,
\[ x_0^S < x_0 < x_1 < x_1^S \tag{54} \]
and x_0^S, x_1^S are unique.


Combining (46) with (115) and (127), we have either
\[ A^S = -\frac{\big( R_h(x_0^S) \psi'(x_0^S) - R_h'(x_0^S) \psi(x_0^S) \big) + \big( g_0(x_0^S) \psi'(x_0^S) - g_0'(x_0^S) \psi(x_0^S) \big)}{(\psi'(c) - \phi'(c))\, p_c'(x_0^S)} = \int_0^{x_0^S} \mathcal{L}(R_h + g_0)(s)\, \psi(s)\, m(ds) \ge 0, \tag{55} \]
with the final inequality following from (50) and (54), or
\[ A^S = \frac{\big( g_1(x_1^S) \psi'(x_1^S) - g_1'(x_1^S) \psi(x_1^S) \big) - \big( R_h(x_1^S) \psi'(x_1^S) - R_h'(x_1^S) \psi(x_1^S) \big)}{(\psi'(c) - \phi'(c))\, p_c'(x_1^S)} = -\int_0^{x_1^S} \mathcal{L}(g_1 - R_h)(s)\, \psi(s)\, m(ds). \tag{56} \]
Similarly, (47) with (114) and (126) yields
\[ B^S = -\frac{\big( R_h(x_0^S) \phi'(x_0^S) - R_h'(x_0^S) \phi(x_0^S) \big) + \big( g_0(x_0^S) \phi'(x_0^S) - g_0'(x_0^S) \phi(x_0^S) \big)}{(\psi'(c) - \phi'(c))\, p_c'(x_0^S)} = \int_{x_1^S}^{\infty} \mathcal{L}(g_1 - R_h)(s)\, \phi(s)\, m(ds) \ge 0, \]
with the final inequality following from (51) and (54). Hence, both A^S and B^S are positive.
For (44) and (45) to satisfy (39), they also need to satisfy the following inequalities:
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(1, x) + b(x) w_x(1, x) - r(x) w(1, x) + h(x) \le 0, \quad \text{for } x \le x_0^S, \tag{57} \]
\[ w(0, x) - w(1, x) - g_0(x) \le 0, \quad \text{for } x > x_0^S, \tag{58} \]
\[ w(1, x) - w(0, x) - g_1(x) \le 0, \quad \text{for } x < x_1^S, \tag{59} \]
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(0, x) + b(x) w_x(0, x) - r(x) w(0, x) \le 0, \quad \text{for } x \ge x_1^S. \tag{60} \]
We note that (44) satisfies (57), given L(g_0 + R_h)(x) > 0 for x < x_0 by (50) and the fact
that x_0^S < x_0, while (45) satisfies (60), given x_1^S > x_1 and (51).
For x > x_0^S, (58) becomes
\[ \frac{A^S \phi(x_0^S) + R_h(x_0^S) + g_0(x_0^S)}{\psi(x_0^S)} \le \frac{A^S \phi(x) + R_h(x) + g_0(x)}{\psi(x)}, \]
and so we require \(\big( A^S \phi(x) + R_h(x) + g_0(x) \big) / \psi(x)\) to be increasing for x > x_0^S. With
reference to (49), (56), (115) and (127), we have that
\[ \frac{d}{dx} \left( \frac{A^S \phi(x) + R_h(x) + g_0(x)}{\psi(x)} \right) = \frac{\big[ A^S \phi'(x) + R_h'(x) + g_0'(x) \big] \psi(x) - \big[ A^S \phi(x) + R_h(x) + g_0(x) \big] \psi'(x)}{\psi^2(x)} = \frac{\mathcal{W}(x)}{\psi^2(x)} \left( \int_0^x \mathcal{L}(g_0 + R_h)(s)\, \psi(s)\, m(ds) - A^S \right) = \frac{\mathcal{W}(x)}{\psi^2(x)} \left( \int_0^x \mathcal{L}(g_0 + R_h)(s)\, \psi(s)\, m(ds) + \int_0^{x_1^S} \mathcal{L}(g_1 - R_h)(s)\, \psi(s)\, m(ds) \right) = -\frac{\mathcal{W}(x)}{\psi^2(x)}\, q_1^S(x, x_1^S). \]
For x ∈ ]x_0^S, x_1^S[, (58) is satisfied since q_1^S(y, z) < 0 for y > x_0^S and z = x_1^S. For x > x_1^S, (58)
is satisfied given (50) and (23) of Assumption 2.4.
For x < x_1^S, (59) becomes
\[ \frac{A^S \phi(x) + R_h(x) - g_1(x)}{\psi(x)} \le \frac{A^S \phi(x_1^S) + R_h(x_1^S) - g_1(x_1^S)}{\psi(x_1^S)}, \]
and so we require \(\big( A^S \phi(x) + R_h(x) - g_1(x) \big) / \psi(x)\) to be increasing in ]x_0^S, x_1^S]. Using
(49), (55), (115) and (127), note that
\[ \frac{d}{dx} \left( \frac{A^S \phi(x) + R_h(x) - g_1(x)}{\psi(x)} \right) = \frac{\big[ A^S \phi'(x) + R_h'(x) - g_1'(x) \big] \psi(x) - \big[ A^S \phi(x) + R_h(x) - g_1(x) \big] \psi'(x)}{\psi^2(x)} = -\frac{\mathcal{W}(x)}{\psi^2(x)} \left( A^S + \int_0^x \mathcal{L}(g_1 - R_h)(s)\, \psi(s)\, m(ds) \right) = -\frac{\mathcal{W}(x)}{\psi^2(x)} \left( \int_0^{x_0^S} \mathcal{L}(g_0 + R_h)(s)\, \psi(s)\, m(ds) + \int_0^x \mathcal{L}(g_1 - R_h)(s)\, \psi(s)\, m(ds) \right) = \frac{\mathcal{W}(x)}{\psi^2(x)}\, q_1^S(x_0^S, x). \]
Given q_1^S(y, z) > 0 for y = x_0^S and z < x_1^S, (59) is satisfied for x ∈ ]x_0^S, x_1^S[. For x ≤ x_0^S,
(59) is satisfied given (51) and (23) of Assumption 2.4.
To prove that the solution w to the HJB equation (39) that we have constructed identifies
with the value function of the switching problem, we fix any initial condition x > 0 and any
weak solution S_x to (1). Consider an arbitrary control strategy Z, obtained by picking
arbitrary times at which to switch between the open and closed modes; the associated
sequence of (Ft)-stopping times (τ_m) is given by
\[ \tau_1 = \inf\{ t \ge 0 \mid Z_t \ne z \}, \qquad \tau_{m+1} = \inf\{ t > \tau_m \mid Z_t \ne Z_{t-} \}. \]
In addition, given any T > 0, define
\[ \tau_n = \inf\big\{ t \ge 0 \mid X_t \notin [1/n, n] \big\}, \quad \text{for } n \ge 1. \]
Now, since w ∈ C¹(]0, ∞[) ∩ C²(]0, ∞[ \ {x_0^S, x_1^S}), we can use the Itô–Tanaka formula to
calculate
\[ e^{-\Lambda_{t \wedge \tau_n \wedge T}}\, w(Z_{t \wedge \tau_n \wedge T}, X_{t \wedge \tau_n \wedge T}) = w(z, x) + M_t^{T,n} + \int_0^{t \wedge \tau_n \wedge T} e^{-\Lambda_s}\, \mathcal{L} w(Z_s, X_s)\, ds + \sum_{0 < s \le t \wedge \tau_n \wedge T} e^{-\Lambda_s} \big[ w(Z_s, X_s) - w(Z_{s-}, X_s) \big], \tag{61} \]
where
\[ M_t^{T,n} := \int_0^{t \wedge \tau_n \wedge T} e^{-\Lambda_s} \sigma(X_s)\, w_x(Z_s, X_s)\, dW_s =: L_t^{T,n} + \tilde{M}_t^{T,n}, \]
with
\[ L_t^{T,n} := \int_0^{t \wedge \tau_n \wedge T} e^{-\Lambda_s} \sigma(X_s) \Big[ \big( B^S \psi'(X_s) \mathbf{1}_{\{X_s \le x_0^S\}} + A^S \phi'(X_s) \mathbf{1}_{\{X_s > x_0^S\}} \big) \mathbf{1}_{\{Z_s = 1\}} + \big( B^S \psi'(X_s) \mathbf{1}_{\{X_s < x_1^S\}} + A^S \phi'(X_s) \mathbf{1}_{\{X_s \ge x_1^S\}} \big) \mathbf{1}_{\{Z_s = 0\}} \Big]\, dW_s. \]
Given that g_0, g_1 and R_h all satisfy Dynkin's formula, we have that \(\mathbb{E}_{x,z}[\tilde{M}_t^{T,n}] = \mathbb{E}_{x,z}[M_t^{T,n} - L_t^{T,n}] = 0\). With reference to Itô's isometry, the continuity of ψ′ and φ′,
and Assumption 2.3a–b, we can see that
\[ \mathbb{E}_{x,z}\Big[ \big( L_T^{T,n} \big)^2 \Big] = \mathbb{E}_{x,z}\bigg[ \int_0^T e^{-2\Lambda_t} \sigma^2(X_t) \Big( \big( B^S \psi'(X_t) \mathbf{1}_{\{X_t \le x_0^S\}} + A^S \phi'(X_t) \mathbf{1}_{\{X_t > x_0^S\}} \big) \mathbf{1}_{\{Z_t = 1\}} + \big( B^S \psi'(X_t) \mathbf{1}_{\{X_t < x_1^S\}} + A^S \phi'(X_t) \mathbf{1}_{\{X_t \ge x_1^S\}} \big) \mathbf{1}_{\{Z_t = 0\}} \Big)^2 \mathbf{1}_{\{t \le \tau_n\}}\, dt \bigg] \le 2 \sup_{x \in [1/n,\, x_1^S]} \big( B^S \psi'(x) \sigma(x) \big)^2 \int_0^T e^{-2 r_0 t}\, dt + 2 \sup_{x \in [x_0^S,\, n]} \big( A^S \phi'(x) \sigma(x) \big)^2 \int_0^T e^{-2 r_0 t}\, dt < \infty, \]
where we have used the lower bound (18) on the discounting rate and the fact that, up to
time τ_n, the process X takes values in [1/n, n].
This calculation shows that L^{T,n} is a square-integrable martingale. Therefore, by appealing
to Doob's optional sampling theorem, it follows that \(\mathbb{E}_{x,z}[L_t^{T,n}] = 0\) and, consequently,
\(\mathbb{E}_{x,z}[M_t^{T,n}] = 0\). In view of this observation, we can add
\[ \int_0^{\tau_n \wedge T} e^{-\Lambda_t} Z_t h(X_t)\, dt - \sum_{0 \le t \le \tau_n \wedge T} e^{-\Lambda_t} \big( g_1(X_t)(\Delta Z_t)^+ + g_0(X_t)(\Delta Z_t)^- \big) \]
to both sides of (61); on taking expectations, and given that w satisfies (39), we have
\[ \mathbb{E}_{x,z}\bigg[ \int_0^{\tau_n \wedge T} e^{-\Lambda_t} Z_t h(X_t)\, dt - \sum_{0 \le t \le \tau_n \wedge T} e^{-\Lambda_t} \big( g_1(X_t)(\Delta Z_t)^+ + g_0(X_t)(\Delta Z_t)^- \big) \bigg] \le w(z, x) - \mathbb{E}_{x,z}\big[ e^{-\Lambda_{\tau_n \wedge T}}\, w(Z_{\tau_n \wedge T}, X_{\tau_n \wedge T}) \big]. \tag{62} \]

The dominated convergence theorem gives
\[ \lim_{T \to \infty} \mathbb{E}_{x,z}\big[ e^{-\Lambda_{\tau_n \wedge T}}\, w(Z_{\tau_n \wedge T}, X_{\tau_n \wedge T}) \big] = \mathbb{E}_{x,z}\big[ e^{-\Lambda_{\tau_n}}\, w(Z_{\tau_n}, X_{\tau_n}) \big]. \tag{63} \]
Noting that
\[ \mathbb{E}_{x,z}\big[ e^{-\Lambda_{\tau_n}}\, w(Z_{\tau_n}, X_{\tau_n}) \big] = \mathbb{E}_{x,z}\bigg[ e^{-\Lambda_{\tau_n}} \bigg( \Big( \big( B^S \psi(X_{\tau_n}) - g_0(X_{\tau_n}) \big) \mathbf{1}_{\{X_{\tau_n} \le x_0^S\}} + \big( A^S \phi(X_{\tau_n}) + R_h(X_{\tau_n}) \big) \mathbf{1}_{\{X_{\tau_n} > x_0^S\}} \Big) \mathbf{1}_{\{Z_{\tau_n} = 1\}} + \Big( B^S \psi(X_{\tau_n}) \mathbf{1}_{\{X_{\tau_n} < x_1^S\}} + \big( A^S \phi(X_{\tau_n}) + R_h(X_{\tau_n}) - g_1(X_{\tau_n}) \big) \mathbf{1}_{\{X_{\tau_n} \ge x_1^S\}} \Big) \mathbf{1}_{\{Z_{\tau_n} = 0\}} \bigg) \bigg], \]
and given that R_h, g_0 and g_1 all satisfy the transversality condition (119) of Proposition
A.2, and that A^S φ and B^S ψ are positive, we have that
\[ 0 \le \lim_{n \to \infty} \mathbb{E}_{x,z}\big[ e^{-\Lambda_{\tau_n}}\, w(Z_{\tau_n}, X_{\tau_n}) \big] \le \lim_{n \to \infty} \mathbb{E}_{x,z}\Big[ e^{-\Lambda_{\tau_n}} \Big( A^S \phi(X_{\tau_n}) \mathbf{1}_{\{X_{\tau_n} > x_0^S\}} + B^S \psi(X_{\tau_n}) \mathbf{1}_{\{X_{\tau_n} < x_1^S\}} \Big) \Big] \le \Big( A^S \phi(x_0^S) + B^S \psi(x_1^S) \Big) \lim_{n \to \infty} \mathbb{E}_{x,z}\big[ e^{-\Lambda_{\tau_n}} \big] = 0, \tag{64} \]
with the last equality following as a consequence of Assumption 2.3a. Given (21) of As-
sumption 2.3c, the continuity of g_0, g_1 and Assumption 2.3a, the dominated convergence
theorem gives
\[ \lim_{n \to \infty} \lim_{T \to \infty} \mathbb{E}_{x,z}\bigg[ \int_0^{\tau_n \wedge T} e^{-\Lambda_t} Z_t h(X_t)\, dt - \sum_{0 \le t \le \tau_n \wedge T} e^{-\Lambda_t} \big( g_1(X_t)(\Delta Z_t)^+ + g_0(X_t)(\Delta Z_t)^- \big) \bigg] = \mathbb{E}_{x,z}\bigg[ \int_0^{\infty} e^{-\Lambda_t} Z_t h(X_t)\, dt - \sum_{0 \le t} e^{-\Lambda_t} \big( g_1(X_t)(\Delta Z_t)^+ + g_0(X_t)(\Delta Z_t)^- \big) \bigg]. \tag{65} \]
In view of (63)–(65), (62) implies
\[ \mathbb{E}_{x,z}\bigg[ \int_0^{\infty} e^{-\Lambda_t} Z_t h(X_t)\, dt - \sum_{0 \le t} e^{-\Lambda_t} \big( g_1(X_t)(\Delta Z_t)^+ + g_0(X_t)(\Delta Z_t)^- \big) \bigg] \le w(z, x), \tag{66} \]
which proves that \(\tilde{J}^S(\mathbb{S}_x, Z) \le w(z, x)\).
To prove that \(\tilde{v}^S(z, x) = w(z, x)\), let (S^S_x, Z^S) ∈ Z_{x,z} be the switching strategy considered
in the statement of the theorem. By following the arguments that lead to (66), we can see that
\[ \mathbb{E}_{x,z}\bigg[ \int_0^{\infty} e^{-\Lambda_t} Z_t^S h(X_t)\, dt - \sum_{0 \le t} e^{-\Lambda_t} \big( g_1(X_t)(\Delta Z_t^S)^+ + g_0(X_t)(\Delta Z_t^S)^- \big) \bigg] = w(z, x). \qquad \Box \]
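
To see the strategy of Theorem 4.1 in action, the following sketch simulates a two-threshold switching rule and evaluates the expression inside (66) by Monte Carlo. The GBM data, the placeholder thresholds and the horizon truncation are illustrative assumptions rather than outputs of the theorem; in a genuine check the thresholds would come from solving q_0^S = q_1^S = 0 as above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: GBM state, constant r, h(x) = x, constant costs g0 < 0 < g1.
b, sigma, r, g0, g1 = 0.02, 0.3, 0.05, -0.1, 1.0
x0S, x1S = 0.004, 0.06             # placeholder thresholds (see the q-system above)
x_init, dt, T, n_paths = 0.03, 1e-2, 60.0, 4000

x = np.full(n_paths, x_init)
z = np.zeros(n_paths)              # start in the closed mode, Z_0 = 0
disc, payoff = 1.0, np.zeros(n_paths)
for _ in range(int(T / dt)):
    open_now = (z == 0.0) & (x >= x1S)     # closed -> open: pay g1
    close_now = (z == 1.0) & (x <= x0S)    # open -> closed: pay g0 (here a receipt)
    payoff[open_now] -= disc * g1
    payoff[close_now] -= disc * g0
    z[open_now], z[close_now] = 1.0, 0.0
    payoff += disc * z * x * dt            # running payoff h(X_t) dt while open
    x += b * x * dt + sigma * x * np.sqrt(dt) * rng.standard_normal(n_paths)
    disc *= np.exp(-r * dt)
print(payoff.mean())                       # Monte Carlo estimate of the criterion in (66)
```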

4.2 The case when one mode is optimal for all values of the state process
We consider two cases: one where it is always optimal for the system to be operated in the
"open" mode, whatever the value of the state process, and one where it is always optimal
for the system to be operated in the "closed" mode.
In the case where it is always optimal to operate in the open mode (Z_t = 1 for all t), we
look for a solution w to (39) that satisfies
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(1, x) + b(x) w_x(1, x) - r(x) w(1, x) + h(x) = 0, \quad \text{for all } x > 0, \tag{67} \]
\[ w(1, x) - w(0, x) - g_1(x) = 0, \quad \text{for all } x > 0. \tag{68} \]
Such a solution is given by
\[ w(1, x) = R_h(x), \tag{69} \]
\[ w(0, x) = w(1, x) - g_1(x). \tag{70} \]

Theorem 4.2 Suppose that Assumptions 2.1–2.5 hold, and consider the switching problem
formulated in Section 2. Suppose, in addition, that
\[ \mathcal{L}(g_1 - R_h)(x) > 0, \quad \text{for all } x > 0. \tag{71} \]
The value function \(\tilde{v}^S\) identifies with w defined by (69)–(70). Furthermore, given any initial
condition x > 0 and z ∈ {0, 1}, the control strategy (S^S_x, Z^S) ∈ Z_{x,z}, where S^S_x is a weak
solution to (1) and Z_t^S = 1 for all t > 0, is optimal.

Proof: We start by confirming that w, defined by (69) and (70), satisfies the HJB equation
(39). In addition to (67)–(68), it also needs to satisfy the following inequalities:
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(0, x) + b(x) w_x(0, x) - r(x) w(0, x) \le 0, \quad \text{for all } x > 0, \tag{72} \]
\[ w(0, x) - w(1, x) - g_0(x) \le 0, \quad \text{for all } x > 0. \tag{73} \]
Noting that w(0, x) = R_h(x) − g_1(x), simple substitution of (71) into (72) and of (24) of
Assumption 2.5 into (73) shows that these are satisfied.
Having confirmed that w, defined by (69) and (70), satisfies (39), we can use arguments
similar to those used in developing (66) in the proof of Theorem 4.1 to show that, for an
arbitrary control strategy, \(\tilde{J}^S(\mathbb{S}_x, Z) \le w(z, x)\), while adopting the optimal strategy
defined in the statement of the theorem yields w = \(\tilde{v}^S\). □

Similarly, in the case where it is always optimal to operate in the closed mode (Z_t = 0
for all t), we look for a solution w to (39) that satisfies
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(0, x) + b(x) w_x(0, x) - r(x) w(0, x) = 0, \quad \text{for all } x > 0, \tag{74} \]
\[ w(0, x) - w(1, x) - g_0(x) = 0, \quad \text{for all } x > 0. \tag{75} \]
Such a solution is given by
\[ w(0, x) = 0, \tag{76} \]
\[ w(1, x) = w(0, x) - g_0(x). \tag{77} \]

Theorem 4.3 Suppose that Assumptions 2.1–2.5 hold, and consider the switching problem
formulated in Section 2. Suppose, in addition, that
\[ \mathcal{L}(R_h + g_0)(x) \ge 0, \quad \text{for all } x > 0. \tag{78} \]
The value function \(\tilde{v}^S\) identifies with w defined by (76)–(77). Furthermore, given any initial
condition x > 0 and z ∈ {0, 1}, the control strategy (S^S_x, Z^S) ∈ Z_{x,z}, where S^S_x is a weak
solution to (1) and Z_t^S = 0 for all t > 0, is optimal.

Proof: For (76) and (77) to satisfy (39), they also need to satisfy the following inequalities:
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(1, x) + b(x) w_x(1, x) - r(x) w(1, x) + h(x) \le 0, \quad \text{for all } x > 0, \tag{79} \]
\[ w(1, x) - w(0, x) - g_1(x) \le 0, \quad \text{for all } x > 0. \tag{80} \]
Substitution of (78) into (79) and of (24) of Assumption 2.5 into (80) shows that these are
satisfied. Arguments similar to those used in developing (66) in the proof of Theorem 4.1
then show that, for an arbitrary control strategy, \(\tilde{J}^S(\mathbb{S}_x, Z) \le w(z, x)\), while adopting
the optimal strategy defined in the statement of the theorem yields w = \(\tilde{v}^S\). □

4.3 The case when one mode is optimal for certain values of the state process
We now consider the case where the optimal strategy is either to operate in the "open" mode
for all x, while it is only optimal to switch from the "closed" mode to the "open" mode for
x ≥ x_1^S, or to operate in the "closed" mode for all x, while it is only optimal to switch from
the "open" mode to the "closed" mode for x ≤ x_0^S.
For the case where, once in the open mode, it is never optimal to switch to the closed
mode, we look for a solution w to (39) that satisfies
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(1, x) + b(x) w_x(1, x) - r(x) w(1, x) + h(x) = 0, \quad \text{for all } x, \tag{81} \]
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(0, x) + b(x) w_x(0, x) - r(x) w(0, x) = 0, \quad \text{for } x < x_1^S, \tag{82} \]
\[ w(1, x) - w(0, x) - g_1(x) = 0, \quad \text{for } x \ge x_1^S. \tag{83} \]
Such a solution is given by
\[ w(1, x) = R_h(x), \tag{84} \]
\[ w(0, x) = \begin{cases} B^S \psi(x), & \text{if } x < x_1^S, \\ R_h(x) - g_1(x), & \text{if } x \ge x_1^S. \end{cases} \tag{85} \]
Noting that w(0, x), defined by (85), identifies with w^I(x), defined by (26) of Case III in
Theorem 3.1, we can state the following theorem.
Theorem 4.4 Suppose that Assumptions 2.1–2.5 hold, and consider the switching problem
formulated in Section 2. Suppose, in addition, that
\[ (R_h - g_1)(x) > 0, \quad \text{for some } x \in \,]0, \infty[, \tag{86} \]
\[ \mathcal{L}(R_h - g_1)(x) \begin{cases} > 0, & \text{for } x < x_1, \\ < 0, & \text{for } x > x_1, \end{cases} \qquad x_1 > 0, \tag{87} \]
\[ \mathcal{L}(R_h + g_0)(x) < 0, \quad \text{for all } x. \tag{88} \]
Then x_1^S > 0 is the unique solution to q_1^S(x) = 0, where q_1^S is defined by
\[ q_1^S(x) = \mathcal{W}(c)\, p_c'(x) \int_0^x \mathcal{L}(R_h - g_1)(s)\, \psi(s)\, m(ds), \quad \text{for all } x > 0, \]
and B^S > 0 is given by
\[ B^S = \frac{R_h(x_1^S) - g_1(x_1^S)}{\psi(x_1^S)} = \frac{R_h'(x_1^S) - g_1'(x_1^S)}{\psi'(x_1^S)}. \tag{89} \]
The value function \(\tilde{v}^S\) identifies with w defined by (84)–(85). Furthermore, given any
initial condition x > 0 and z ∈ {0, 1}, the control strategy (S^S_x, Z^S) ∈ Z_{x,z}, where S^S_x is a
weak solution to (1) and, if z = 0, Z_t^S = 1 for all t ≥ τ_1^S, with
\[ \tau_1^S = \inf\{ t \ge 0 \mid X_t \ge x_1^S \}, \]
whereas, if z = 1, Z_t^S = 1 for all t, is optimal.

Proof: The statements regarding x_1^S and B^S are a consequence of Case III in Theorem 3.1
and (86)–(87). In addition to w, given by (84)–(85), satisfying (81)–(83), we need to show
that it satisfies the following inequalities:
\[ w(0, x) - w(1, x) - g_0(x) \le 0, \quad \text{for all } x > 0, \tag{90} \]
\[ w(1, x) - w(0, x) - g_1(x) \le 0, \quad \text{for } x < x_1^S, \tag{91} \]
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(0, x) + b(x) w_x(0, x) - r(x) w(0, x) \le 0, \quad \text{for } x \ge x_1^S. \tag{92} \]
For (90), we have two distinct inequalities,
\[ B^S \psi(x) - R_h(x) - g_0(x) \le 0, \quad \text{if } x < x_1^S, \tag{93} \]
\[ -g_0(x) - g_1(x) \le 0, \quad \text{if } x \ge x_1^S, \tag{94} \]
and (94) is true given (24) of Assumption 2.5. Given that B^S is defined by (89), (93) can be
written as
\[ \frac{R_h(x) + g_0(x)}{\psi(x)} \ge \frac{R_h(x_1^S) - g_1(x_1^S)}{\psi(x_1^S)}, \quad \text{for all } x < x_1^S, \]
which is true at x_1^S, since g_0(x) > −g_1(x) for all x, by (24) of Assumption 2.5. To see that
(93) is true for all x < x_1^S, observe that
\[ \frac{d}{dx} \left( \frac{R_h(x) + g_0(x)}{\psi(x)} \right) = \frac{\mathcal{W}(c)\, p_c'(x)}{\psi^2(x)} \int_0^x \mathcal{L}(R_h + g_0)(s)\, \psi(s)\, m(ds), \]
and, given L(R_h + g_0)(x) < 0 for all x, by (88), the condition is satisfied. Similarly, we can
write (91) as
\[ \frac{R_h(x) - g_1(x)}{\psi(x)} \le \frac{R_h(x_1^S) - g_1(x_1^S)}{\psi(x_1^S)}, \quad \text{for } x < x_1^S. \]
Consider
\[ \frac{d}{dx} \left( \frac{R_h(x) - g_1(x)}{\psi(x)} \right) = \frac{\big( R_h'(x) - g_1'(x) \big) \psi(x) - \big( R_h(x) - g_1(x) \big) \psi'(x)}{\psi^2(x)} = \frac{\mathcal{W}(c)\, p_c'(x)}{\psi^2(x)} \int_0^x \mathcal{L}(R_h - g_1)(s)\, \psi(s)\, m(ds) = \frac{q_1^S(x)}{\psi^2(x)}. \]
Given that q_1^S(x) > 0 for x < x_1^S, we have that \(\big( R_h(x) - g_1(x) \big) / \psi(x)\) is increasing for
x < x_1^S and hence (91) is satisfied. Recalling that, as a consequence of Theorem 3.1,
x_1^S > x_1, (85) satisfies (92) under (87).
Having confirmed that w, defined by (84) and (85), satisfies (39), we need to confirm that
the solution w equates with the value function \(\tilde{v}^S\). To do this we can use arguments similar
to those used in developing (66) in the proof of Theorem 4.1: for an arbitrary control
strategy, \(\tilde{J}^S(\mathbb{S}_x, Z) \le w(z, x)\), while adopting the optimal strategy defined in the
statement of the theorem gives w = \(\tilde{v}^S\). □

We now consider the case where the optimal strategy is to operate in the "closed" mode
for all x, while it is only optimal to switch from the "open" mode to the "closed" mode for
x < x_0^S. Again, with reference to standard heuristic arguments that explain the structure
of (39), we look for a solution w to (39) that satisfies
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(0, x) + b(x) w_x(0, x) - r(x) w(0, x) = 0, \quad \text{for all } x > 0, \tag{95} \]
\[ w(0, x) - w(1, x) - g_0(x) = 0, \quad \text{for } x \le x_0^S, \tag{96} \]
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(1, x) + b(x) w_x(1, x) - r(x) w(1, x) + h(x) = 0, \quad \text{for } x > x_0^S. \tag{97} \]
Such a solution is given by
\[ w(0, x) = 0, \tag{98} \]
\[ w(1, x) = \begin{cases} -g_0(x), & \text{if } x < x_0^S, \\ A^S \phi(x) + R_h(x), & \text{if } x \ge x_0^S. \end{cases} \tag{99} \]
Noting that w(1, x), defined by (99), identifies with w^A(x), defined by (30) of Case III in
Theorem 3.2, we can state the following theorem.

Theorem 4.5 Suppose that Assumptions 2.1–2.5 hold, and consider the switching problem
formulated in Section 2. Suppose, in addition, that
\[ (R_h + g_0)(x) < 0, \quad \text{for some } x \in \,]0, \infty[, \tag{100} \]
\[ \mathcal{L}(R_h + g_0)(x) \begin{cases} > 0, & \text{for } x < x_0, \\ < 0, & \text{for } x > x_0, \end{cases} \qquad x_0 > 0, \tag{101} \]
\[ \mathcal{L}(R_h - g_1)(x) < 0, \quad \text{for all } x. \tag{102} \]
Then x_0^S > 0 is the unique solution to q_0^S(x) = 0, where q_0^S is defined by
\[ q_0^S(x) = p_c'(x) \int_x^{\infty} \mathcal{L}(R_h + g_0)(s)\, \phi(s)\, m(ds), \quad \text{for all } x > 0, \]
and A^S > 0 is given by
\[ -A^S = \frac{R_h(x_0^S) + g_0(x_0^S)}{\phi(x_0^S)} = \frac{R_h'(x_0^S) + g_0'(x_0^S)}{\phi'(x_0^S)}. \tag{103} \]
The value function \(\tilde{v}^S\) identifies with w defined by (98)–(99). Furthermore, given any
initial condition x > 0 and z ∈ {0, 1}, the control strategy (S^S_x, Z^S) ∈ Z_{x,z}, where S^S_x is a
weak solution to (1) and, if z = 1, Z_t^S = 0 for all t ≥ τ_0^S, with
\[ \tau_0^S = \inf\{ t \ge 0 \mid X_t \le x_0^S \}, \]
whereas, if z = 0, Z_t^S = 0 for all t, is optimal.

Proof: The statements regarding x_0^S and A^S are a consequence of Case III in Theorem 3.2
and (100)–(101). In addition to w, given by (98)–(99), satisfying (95)–(97), we need to show
that it satisfies the following inequalities:
\[ w(1, x) - w(0, x) - g_1(x) \le 0, \quad \text{for all } x > 0, \tag{104} \]
\[ \tfrac{1}{2} \sigma^2(x) w_{xx}(1, x) + b(x) w_x(1, x) - r(x) w(1, x) + h(x) \le 0, \quad \text{for } x \le x_0^S, \tag{105} \]
\[ w(0, x) - w(1, x) - g_0(x) \le 0, \quad \text{for } x > x_0^S. \tag{106} \]
For (104), we have two distinct inequalities,
\[ -g_0(x) - g_1(x) \le 0, \quad \text{if } x < x_0^S, \tag{107} \]
\[ A^S \phi(x) + R_h(x) - g_1(x) \le 0, \quad \text{if } x \ge x_0^S, \tag{108} \]
and (107) is true given (24) of Assumption 2.5. Given that A^S is defined by (103), (108) can
be written as
\[ -\frac{R_h(x_0^S) + g_0(x_0^S)}{\phi(x_0^S)} \le -\frac{R_h(x) - g_1(x)}{\phi(x)}, \quad \text{for all } x \ge x_0^S, \]
which is true at x_0^S, noting that g_1(x_0^S) > −g_0(x_0^S) by (24) of Assumption 2.5. Noting that
\[ \frac{d}{dx} \left( \frac{R_h(x) - g_1(x)}{\phi(x)} \right) = \frac{\mathcal{W}(c)\, p_c'(x)}{\phi^2(x)} \int_0^x \mathcal{L}(R_h - g_1)(s)\, \phi(s)\, m(ds), \]
and given L(R_h − g_1)(x) < 0 for all x, by (102), this ensures that the condition is satisfied.
Similarly, we can write (106) as
\[ \frac{R_h(x_0^S) + g_0(x_0^S)}{\phi(x_0^S)} \le \frac{R_h(x) + g_0(x)}{\phi(x)}, \quad \text{for } x > x_0^S. \]
Consider
\[ \frac{d}{dx} \left( \frac{R_h(x) + g_0(x)}{\phi(x)} \right) = \frac{\big( R_h'(x) + g_0'(x) \big) \phi(x) - \big( R_h(x) + g_0(x) \big) \phi'(x)}{\phi^2(x)} = -\frac{q_0^S(x)}{\phi^2(x)}. \]
Given that q_0^S(x) < 0 for x > x_0^S, we have that \(\big( R_h(x) + g_0(x) \big) / \phi(x)\) is increasing
for x > x_0^S and hence (106) is satisfied. Recalling that, as a consequence of Theorem 3.2,
x_0^S < x_0, (99) satisfies (105) under (101).
Having confirmed that w, defined by (98) and (99), satisfies (39), we need to confirm
that the solution w equates with the value function \(\tilde{v}^S\). To do this we again use arguments
similar to those used in developing (66) in the proof of Theorem 4.1: for an arbitrary control
strategy, \(\tilde{J}^S(\mathbb{S}_x, Z) \le w(z, x)\), while adopting the optimal strategy defined in the
statement of the theorem yields w = \(\tilde{v}^S\). □

A Study of a non-homogeneous ordinary differential equation

We consider the non-homogeneous ODE
\[ \tfrac{1}{2} \sigma^2(x) w''(x) + b(x) w'(x) - r(x) w(x) + h(x) = 0, \quad x \in \,]0, \infty[. \tag{109} \]
Under Assumptions 2.1, 2.2 and 2.3a–b, and with reference to Johnson and Zervos [?, Ap-
pendix B], we have the following results.

Proposition A.1 Suppose that Assumptions 2.1–2.3 hold. The following statements are
equivalent:

(I) Given any initial condition x > 0 and any weak solution S_x to (1),
\[ \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_t} |h(X_t)|\, dt \right] < \infty. \]

(II) There exists an initial condition y > 0 and a weak solution S_y to (1) such that
\[ \mathbb{E}_y\left[ \int_0^{\infty} e^{-\Lambda_t} |h(X_t)|\, dt \right] < \infty. \]

(III) Given any x > 0,
\[ \int_0^x |h(s)|\, \psi(s)\, m(ds) < \infty \quad \text{and} \quad \int_x^{\infty} |h(s)|\, \phi(s)\, m(ds) < \infty. \]

(IV) There exists y > 0 such that
\[ \int_0^y |h(s)|\, \psi(s)\, m(ds) < \infty \quad \text{and} \quad \int_y^{\infty} |h(s)|\, \phi(s)\, m(ds) < \infty. \]

If these conditions hold, then the function
\[ R_h(x) = \phi(x) \int_0^x h(s) \psi(s)\, m(ds) + \psi(x) \int_x^{\infty} h(s) \phi(s)\, m(ds), \quad x \in \,]0, \infty[, \tag{110} \]
is well-defined, is twice differentiable in the classical sense and satisfies the ODE (109)
Lebesgue-a.e. In addition, R_h admits the expression
\[ R_h(x) = \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_s} h(X_s)\, ds \right], \quad \text{for all } x > 0. \tag{111} \]
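
As a sanity check on (110)–(111), the sketch below evaluates (110) by quadrature in an illustrative GBM special case with h(x) = x and constant r > b, where R_h(x) = x/(r − b) is known in closed form. Note that, for the unnormalised power solutions used here, the representation requires an explicit division by the constant (φψ′ − φ′ψ)/p_c′ = k₊ − k₋, a normalisation that (110) absorbs into the functions φ, ψ and the measure m.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative check of (110)-(111): GBM, h(x) = x, constant r > b,
# for which R_h(x) = x / (r - b) in closed form.
b, sigma, r = 0.02, 0.3, 0.05
k_minus, k_plus = sorted(np.roots([0.5 * sigma**2, b - 0.5 * sigma**2, -r]))
phi = lambda x: x ** k_minus
psi = lambda x: x ** k_plus
m = lambda x: (2.0 / sigma**2) * x ** (2.0 * b / sigma**2 - 2.0)  # speed density, c = 1

def Rh(x):
    """(110) by quadrature; the unnormalised phi, psi require dividing by the
    constant Wronskian-over-scale term, here k_plus - k_minus."""
    i1, _ = quad(lambda s: s * psi(s) * m(s), 0.0, x)
    i2, _ = quad(lambda s: s * phi(s) * m(s), x, np.inf)
    return (phi(x) * i1 + psi(x) * i2) / (k_plus - k_minus)

print(Rh(2.0), 2.0 / (r - b))   # the two values agree
```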

The following result is concerned with a number of properties of the function R_h studied
in the previous proposition.

Proposition A.2 Suppose that Assumptions 2.1–2.3 hold, and let h : ]0, ∞[ → R be a mea-
surable function satisfying Conditions (I)–(IV) in Proposition A.1. The function R_h given
by (110) or (111) satisfies
\[ \lim_{x \downarrow 0} \frac{R_h(x)}{\phi(x)} = \lim_{x \to \infty} \frac{R_h(x)}{\psi(x)} = 0, \tag{112} \]
\[ \inf_{x > 0} \frac{h(x)}{r(x)} \le R_h(x) \le \sup_{x > 0} \frac{h(x)}{r(x)}, \quad \text{for all } x > 0, \tag{113} \]
\[ \frac{R_h'(x) \phi(x) - R_h(x) \phi'(x)}{\mathcal{W}(c)} = p_c'(x) \int_x^{\infty} h(s) \phi(s)\, m(ds), \quad \text{for all } x > 0, \tag{114} \]
\[ \frac{R_h'(x) \psi(x) - R_h(x) \psi'(x)}{\mathcal{W}(c)} = -p_c'(x) \int_0^x h(s) \psi(s)\, m(ds), \quad \text{for all } x > 0, \tag{115} \]
and, if h/r is increasing (resp., decreasing), then R_h is increasing (resp., decreasing). Also,
\[ R_r(x) = 1, \quad \text{for all } x > 0. \tag{116} \]
Furthermore, given a solution S_x = (Ω, F, F_t, P_x, X, W) to (1) and an (F_t)-stopping time τ,
\[ \mathbb{E}_x\left[ e^{-\Lambda_{\tau}} R_h(X_{\tau}) \mathbf{1}_{\{\tau < \infty\}} \right] = R_h(x) - \mathbb{E}_x\left[ \int_0^{\tau} e^{-\Lambda_t} h(X_t)\, dt \right], \tag{117} \]
\[ \mathbb{E}_x\left[ e^{-\Lambda_{\tau}} R_h(X_{\tau}) \mathbf{1}_{\{\tau < \infty\}} \right] = \mathbb{E}_x\left[ \int_{\tau}^{\infty} e^{-\Lambda_t} h(X_t)\, dt \right], \tag{118} \]
while, if (τ_n) is a sequence of stopping times such that lim_{n→∞} τ_n = ∞, P_x-a.s., then
\[ \lim_{n \to \infty} \mathbb{E}_x\left[ e^{-\Lambda_{\tau_n}} |R_h(X_{\tau_n})| \mathbf{1}_{\{\tau_n < \infty\}} \right] = 0. \tag{119} \]

Remark A.1 If we define the measurable function h by
\[ \mathcal{L} g(x) = -h(x), \tag{120} \]
then the function g satisfies the non-homogeneous ODE
\[ \tfrac{1}{2} \sigma^2(x) g''(x) + b(x) g'(x) - r(x) g(x) + h(x) = 0, \quad x \in \,]0, \infty[. \]
If we also assume that, given any weak solution S_x to (1), the function g satisfies
\[ \mathbb{E}_x\left[ \int_0^{\infty} e^{-\Lambda_t} \big| \mathcal{L} g(X_t) \big|\, dt \right] < \infty, \quad \text{for all } x > 0, \tag{121} \]
then, by Propositions A.1 and A.2, the following statements are true.

a. Given any weak solution S_x to (1) and any (F_t)-stopping time τ, the function g satisfies
\[ \mathbb{E}_x\left[ e^{-\Lambda_{\tau}} g(X_{\tau}) \mathbf{1}_{\{\tau < \infty\}} \right] < \infty, \quad \text{for all } x > 0. \tag{122} \]
In addition, Dynkin's formula holds:
\[ \mathbb{E}_x\left[ e^{-\Lambda_{\tau}} g(X_{\tau}) \mathbf{1}_{\{\tau < \infty\}} \right] = g(x) + \mathbb{E}_x\left[ \left( \int_0^{\tau} e^{-\Lambda_t} \mathcal{L} g(X_t)\, dt \right) \mathbf{1}_{\{\tau < \infty\}} \right], \quad \text{for all } x > 0. \tag{123} \]

b. The function g satisfies the following transversality condition. Given any weak solution
S_x to (1), if (τ_n) is a sequence of stopping times such that lim_{n→∞} τ_n = ∞, P_x-a.s., then
\[ \lim_{n \to \infty} \mathbb{E}_x\left[ e^{-\Lambda_{\tau_n}} |g(X_{\tau_n})| \mathbf{1}_{\{\tau_n < \infty\}} \right] = 0. \tag{124} \]

c. With reference to the functions φ and ψ, defined by (6) and (7), respectively, and the
scale function p_c defined by (4), the function g satisfies the following identities:
\[ \lim_{x \downarrow 0} \frac{g(x)}{\phi(x)} = \lim_{x \to \infty} \frac{g(x)}{\psi(x)} = 0, \tag{125} \]
\[ \frac{g(x) \phi'(x) - g'(x) \phi(x)}{\mathcal{W}(c)} = p_c'(x) \int_x^{\infty} \mathcal{L} g(s)\, \phi(s)\, m(ds), \quad \text{for all } x > 0, \tag{126} \]
\[ \frac{g(x) \psi'(x) - g'(x) \psi(x)}{\mathcal{W}(c)} = -p_c'(x) \int_0^x \mathcal{L} g(s)\, \psi(s)\, m(ds), \quad \text{for all } x > 0. \tag{127} \]

Lemma A.1 In addition, for increasing h/r, if 0 is a natural boundary (not an entrance
boundary), then
\[ \lim_{x \downarrow 0} R_h(x) = \lim_{x \downarrow 0} \frac{h(x)}{r(x)}, \tag{128} \]
and, if ∞ is a natural boundary, then
\[ \lim_{x \to \infty} R_h(x) = \lim_{x \to \infty} \frac{h(x)}{r(x)}. \tag{129} \]
