
Optimal Investment-Consumption Models in International Finance

Wendell H. Fleming & Jerome L. Stein
Division of Applied Mathematics
Brown University
Providence, RI 02912

Abstract

We consider some multi-stage stochastic optimization models which arise in international finance. The evolution over time of national capital and debt are affected by controls, in the form of investment and consumption rates. The goal is to choose controls that maximize total HARA discounted utility of consumption, subject to constraints on investment and consumption rates as well as the debt-to-capital ratio. The methods are based on dynamic programming. Totally risk-sensitive limits are considered. In this analysis, ordinary expectations are replaced by totally risk-averse expectations, which are additive under max-plus addition rather than ordinary addition.
1 Introduction.

In this paper we consider some multi-stage stochastic optimization models which arise in international finance. In these models an economic unit has productive capital and also liabilities in the form of debt. In an international finance context, the unit is a nation and debt refers to that owed to foreigners. The evolution over time of capital and debt are affected by controls, in the form of investment and consumption rates. The fluctuations of debt are also influenced by changing interest rates and productivity of capital, which are modelled as stochastic processes. The goal is to choose controls which maximize total discounted utility of consumption, subject to constraints which are imposed. Some of these constraints are evident. For example, capital and consumption rates cannot be negative. Other constraints are a matter of choice in the model. A crucial modelling issue is how debt is to be constrained, to avoid unreasonable free lunch solutions to the problem which allow unlimited consumption. One such constraint is that the total wealth must be positive, where wealth is capital minus debt. This constraint will be used in Sections 2 and 4. Another possible constraint, considered in Section 3, is an upper bound on the ratio of debt to capital.

We begin in Section 2 with a simple discrete time model, in which the capital $K_j$ in each time period $j$ can be chosen arbitrarily. Thus no constraints on rates of investment or disinvestment are imposed. In this model, the controls at time $j$ are capital $K_j$ and consumption $C_j$. The state at time $j$ is the wealth $X_j$. The interest rates $r_j$ and productivities of capital $b_j$ are independent across different time steps $j$. The utility function is HARA. Dynamic programming reduces the problem to finding a positive constant $A$ which satisfies the nonlinear equation (2.9). The optimal controls keep the ratios $k^*$ of capital-to-wealth and $c^*$ of consumption-to-wealth constant. See (2.12). If interest rates are constant, this problem is mathematically equivalent to a classical discrete time Merton-type portfolio optimization problem. Capital $K_j$ has the role of a risky asset with the no short-selling constraint $K_j \geq 0$.

In Sections 3.1 and 3.2 we consider a linear investment-consumption model in which capital $K_j$ and debt $L_j$ are state variables. The controls are investment $I_j$ and consumption $C_j$, subject to linear constraints (3.4). In addition, condition (3.3) imposes an upper bound $\bar\ell$ on the debt-to-capital ratio. When $\bar\ell = 1$, this is equivalent to assuming that wealth $X_j = K_j - L_j$ cannot be negative. However, the bound $\bar\ell$ must hold for all possible interest rates $r_j$ and productivities $b_j$, including worst case scenarios. This requires a more stringent condition $\bar\ell < \ell_{\max} < 1$, where the constant $\ell_{\max}$ is defined in formula (3.9). This stochastic investment-consumption control problem is studied by dynamic programming. No explicit solution is available. However, for HARA utility the problem is equivalent to one with a single state variable $\ell_j = K_j^{-1} L_j$. See (3.14) and (3.15). In [9] we considered a similar two-period model and found an explicit solution in a special case. The empirical paper [13] examines data on default risk by countries, in the context of this two-period model.

In Section 4 we consider a continuous time version of the discrete time model in Section 2. In this continuous time model, the wealth $X_t$ at time $t$ fluctuates according to the linear stochastic differential equation (4.6). Random fluctuations in interest rates and productivity of capital are incorporated in the model through the Brownian motion terms in (4.6). Dynamic programming leads to explicit formulas for optimal capital-to-wealth and consumption-to-wealth ratios $k^*$ and $c^*$. See (4.11) and (4.12). In the international finance/debt interpretation of this model, it is difficult to measure a country's capital $K_t$. However, data for the gross domestic product (GDP) $Y_t$ are widely available. In Remark 4.1 we describe a mathematically equivalent model in which $K_t$ is replaced by $Y_t$. In [8] we explored this model in greater detail and pointed out some of its economic implications.

At the end of Section 4 we mention some work in progress on variants of this continuous time model. If bounds are imposed on the investment rate $I_t$, then no fixed upper bound $K_t^{-1} L_t \leq \bar\ell$ can be enforced with probability 1 when fluctuations in interest rates and productivity are modelled via Brownian motions, as in Section 4. Some possible modifications in the criterion $J$ to be maximized are suggested. Moreover, large investment-to-capital ratios may be less efficient. A nonlinear model in which this is taken into account is mentioned.

The HARA parameter $\gamma$ is a measure of risk sensitivity. In Section 5 we consider totally risk-sensitive limits as $\gamma \to -\infty$. For simplicity, only the models in Sections 2 and 4 with no bounds on investment rates are considered. If the interest rate and productivity probabilities $p(r,b)$ in Section 2 do not depend on $\gamma$, then in the totally risk-averse limit controls are chosen to protect against worst case interest rates and productivities. Worst case scenarios may have positive, but quite small probabilities. In the theory of large deviations for stochastic processes [2][10] such worst case scenarios are rare events. A more interesting totally risk-averse limit is to let $p(r,b) = p_\gamma(r,b)$ be asymptotically like $q(r,b)^\gamma$ as $\gamma \to -\infty$, where $q(r,b) \geq 1$. See (5.2). The event $r_j = r$, $b_j = b$ is rare or not depending on whether $q(r,b) > 1$ or $q(r,b) = 1$.

A totally risk-averse limit stochastic problem is formulated and solved by dynamic programming. In this analysis, an ordinary expectation $E(\cdot)$ is replaced by a totally risk-averse expectation $E^{\#}(\cdot)$, described by either formula (5.5) or (5.6). The totally risk-averse expectation operator is additive, not with respect to ordinary addition but with respect to min-plus addition $\oplus$, in which $a \oplus b = \min(a,b)$. We find a steady-state solution to the totally risk-averse control problem, in which the optimal $k^*$, $c^*$ are again constants. Another noteworthy feature of the solution is that the totally risk-averse expectation of consumption $C_j$ remains constant over all time periods, when these optimal controls are chosen. See Theorems 5.1 and 5.2. In the steady-state totally risk-averse formulation, the solution does not involve a discount factor and thus does not depend on the length of the planning horizon. In the discrete time formulation, $k^*$ and $c^*$ are described implicitly through formula (5.11). However, there is an explicit formula for $k^*$ and $c^*$ in the special case considered in Example 5.1. In the continuous time formulation, explicit formulas (5.16a) for $k^*$ and $c^*$ are obtained as limits as $\gamma \to -\infty$ of the corresponding formulas (4.11) and (4.12). The optimal consumption-to-wealth ratio $c^*$ can be interpreted as the expected totally risk-averse rate of growth $\lambda$ of wealth, in a model without consumption. See Remark 5.3.

We refer to [1] for background on the dynamic programming method in discrete time stochastic control. For continuous time stochastic control models involving stochastic differential equations and Markov diffusion processes, we refer to [6][7]. The survey articles [3][4] may also provide a further introduction and helpful references. The books [11][14][15] are concerned with dynamic models in macroeconomics, from perspectives different from the one in the present paper.
2 Discrete-time Merton-type model.
We begin with a simple model, for which optimal investment and consumption control policies can be found explicitly. The solution to this simple problem will provide an upper bound for the value function for the optimal investment-consumption problem to be considered in Section 3.

We consider discrete time periods $j = 0, 1, 2, \dots$. Let $X_j$ denote the wealth of some economic unit at time $j$. The wealth consists of capital $K_j$ and net financial assets $S_j$. Thus $X_j = K_j + S_j$. We require that $X_j > 0$ and $K_j \geq 0$. If we let $L_j = -S_j$, as in Section 3, then the debt $L_j$ is subject to the upper bound $L_j \leq K_j$. Note that $S_j > 0$ corresponds to negative debt. Let $r_j$ denote the interest rate and $b_j$ the productivity of capital at time $j$. Then

(2.1)  $X_{j+1} = X_j + r_j S_j + b_j K_j - C_j$,

where $C_j \geq 0$ denotes consumption during time period $j$. We rewrite (2.1) as

(2.2)  $X_{j+1} = X_j \left[ 1 + r_j + (b_j - r_j) k_j - c_j \right]$,

where $k_j$, $c_j$ are the ratios of capital and consumption to wealth:

(2.3)  $k_j = K_j / X_j, \qquad c_j = C_j / X_j$.
We assume that the interest rates $r_j$ have finitely many possible values, the smallest of which is denoted by $r_-$ and the largest by $r_+$. Similarly, $b_j$ has finitely many possible values, the smallest and largest of which are $b_-$ and $b_+$ respectively. We assume that $r_- > 0$, $b_- > 0$ and

(2.4)  $b_- < r_+ < b_+$.

In (2.2) we must have $1 + r_j + (b_j - r_j) k_j - c_j \geq 0$, since otherwise $X_{j+1} < 0$ when $X_j > 0$. When $k_j > 1$ the debt is positive, and the worst case is $r_j = r_+$, $b_j = b_-$. If $0 \leq k_j < 1$, the worst case is $r_j = r_-$, $b_j = b_-$. Therefore, if the initial wealth $x = X_0$ is positive, then $X_j > 0$ for all $j = 0, 1, 2, \dots$ provided $(k_j, c_j) \in \Gamma$, where $\Gamma$ is the quadrilateral in the $(k,c)$ plane bounded by the lines $k = 0$, $c = 0$, $1 + r_+ = (r_+ - b_-) k + c$ and $1 + r_- = (r_- - b_-) k + c$.
We assume that $r_j$, $b_j$ are random, with probabilities

(2.5)  $p(r,b) = \Pr\left[ (r_j, b_j) = (r,b) \right], \qquad p(r_+, b_-) > 0, \quad p(r_-, b_-) > 0$.

Moreover, the pairs $(r_j, b_j)$, $j = 0, 1, 2, \dots$ are independent across different time steps $j$.

Consider the following optimal stochastic control problem. The state at time $j$ is $X_j$, and the controls are $k_j$, $c_j$. The constraints are $X_j > 0$ and $(k_j, c_j) \in \Gamma$. In choosing $(k_j, c_j)$ the initial state $x = X_0$ and the pairs $(r_\ell, b_\ell)$ for $\ell < j$ are known. However, the current interest rate $r_j$ and productivity $b_j$ are unknown when $k_j$ and $c_j$ are chosen. The goal is to maximize total expected discounted HARA utility of consumption:

(2.6)  $J = \gamma^{-1} E\left[ \sum_{j=0}^{\infty} \beta^{j} C_j^{\gamma} \right], \qquad \gamma < 1, \ \gamma \neq 0$,

where $0 < \beta < 1$ ($\beta$ is the discount factor).

This problem is solved using dynamic programming (see [1] for background on discrete time stochastic dynamic programming). Let $W(x)$ denote the value function, which is the supremum over admissible controls of $J$ considered as a function of the initial wealth $x = X_0$. We will see below that the supremum is finite provided $\gamma < \bar\gamma \leq 1$. The dynamic programming equation for $W$ is

(2.7)  $W(x) = \max_{(k,c)} \left\{ \gamma^{-1} (cx)^{\gamma} + \beta \sum_{r,b} p(r,b)\, W(x_1) \right\}$,
       $x_1 = x\left( 1 + r + (b-r)k - c \right)$,

where the max is subject to the constraint $(k,c) \in \Gamma$. From (2.2) and (2.6), $W(\lambda x) = \lambda^{\gamma} W(x)$ for any $\lambda > 0$. Hence, for some $A > 0$,

(2.8)  $W(x) = \gamma^{-1} A x^{\gamma}$.
Let us first consider $0 < \gamma < 1$. Then (2.7) becomes

(2.9)  $1 = \max_{(k,c)} \left[ A^{-1} c^{\gamma} + \beta\, \Phi(k,c,\gamma) \right]$,
       $\Phi(k,c,\gamma) = \sum_{r,b} p(r,b) \left( 1 + r + (b-r)k - c \right)^{\gamma}$.

Let

(2.10)  $\Lambda(\gamma) = \max_{(k,c)} \Phi(k,c,\gamma)$

and assume that

(2.11)  $\beta \Lambda(\gamma) < 1$.

Theorem 2.1 If $0 < \gamma < 1$ and $\beta\Lambda(\gamma) < 1$, then equation (2.9) has a solution for a unique $A > 0$. The maximum in (2.9) is attained at a unique $(k^*, c^*)$ which is either interior to $\Gamma$ or on the vertical segment of the boundary $\partial\Gamma$ where $k^* = 0$, $0 < c^* < 1 + r_-$.
Proof. Let $\Psi(A)$ denote the right side of (2.9). Then $\Psi$ is continuous. Moreover,

$\lim_{A \to 0^+} \Psi(A) = \infty, \qquad \lim_{A \to \infty} \Psi(A) = \beta\Lambda(\gamma)$.

Since $\beta\Lambda(\gamma) < 1$, $\Psi(A) = 1$ for some $A > 0$.

The maximum in (2.9) is attained at a unique $(k^*, c^*) \in \Gamma$, since $A^{-1} c^{\gamma} + \beta\Phi(k,c,\gamma)$ is a strictly concave function of $(k,c)$. Moreover

$\dfrac{\partial}{\partial c} \left[ A^{-1} c^{\gamma} + \beta\Phi(k,c,\gamma) \right] = +\infty$

when $c = 0$. This excludes the possibility that $c^* = 0$. Since $p(r_+, b_-) > 0$ and $p(r_-, b_-) > 0$, this partial derivative is $-\infty$ on the segments of $\partial\Gamma$ where $1 + r_+ + (b_- - r_+)k - c = 0$ or $1 + r_- + (b_- - r_-)k - c = 0$. Hence either $(k^*, c^*)$ is interior to $\Gamma$ or $k^* = 0$, $0 < c^* < 1 + r_-$. Finally, since $c^* > 0$ it is easy to show that $\Psi(A)$ is strictly decreasing as $A$ increases. Hence, the solution of $\Psi(A) = 1$ is unique.
Once a solution $A$ to (2.9) is found, a standard verification argument in stochastic control shows that $W(x)$ in (2.8) is indeed the value function. Moreover the constant controls

(2.12)  $k_j = k^*, \quad c_j = c^*, \quad j = 0, 1, 2, \dots$

are optimal. See [7, p. 174] for the continuous time Merton problem.
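To make the computation in Theorem 2.1 concrete, $\Psi(A) = 1$ can be solved by bisection, using the facts from the proof that $\Psi$ is continuous and strictly decreasing. The following sketch is a minimal numerical illustration, not part of the paper: the support of $(r_j, b_j)$, the uniform probabilities $p(r,b)$, the grid over $\Gamma$, and the values of $\gamma$, $\beta$ are all assumptions chosen for the example.

    import numpy as np

    # Assumed example data satisfying (2.4)-(2.5): b- < r+ < b+, uniform p(r, b).
    r_lo, r_hi = 0.02, 0.05
    b_lo, b_hi = 0.03, 0.10
    pairs = [(r, b) for r in (r_lo, r_hi) for b in (b_lo, b_hi)]
    p = {rb: 0.25 for rb in pairs}
    gamma, beta = 0.2, 0.85          # assumed HARA exponent and discount factor

    # Grid over the quadrilateral Gamma: k, c >= 0 below the two boundary lines.
    kk = np.linspace(0.0, (1 + r_hi) / (r_hi - b_lo), 600)
    cc = np.linspace(1e-6, 1 + r_hi, 600)
    K, C = np.meshgrid(kk, cc)
    growth = {rb: 1 + rb[0] + (rb[1] - rb[0]) * K - C for rb in pairs}
    feasible = np.logical_and.reduce([g >= 0 for g in growth.values()])

    # Phi(k, c, gamma) of (2.9) on Gamma (NaN outside).
    Phi = sum(p[rb] * np.where(feasible, np.maximum(growth[rb], 0.0), np.nan) ** gamma
              for rb in pairs)
    assert beta * np.nanmax(Phi) < 1, "condition (2.11) fails"   # beta * Lambda(gamma) < 1

    def Psi(A):                      # right side of (2.9); strictly decreasing in A
        return np.nanmax(C ** gamma / A + beta * Phi)

    A_lo, A_hi = 1e-8, 1e8
    for _ in range(200):             # bisection on log A for Psi(A) = 1
        A_mid = np.sqrt(A_lo * A_hi)
        A_lo, A_hi = (A_mid, A_hi) if Psi(A_mid) > 1 else (A_lo, A_mid)
    A = A_lo
    i, j = np.unravel_index(np.nanargmax(C ** gamma / A + beta * Phi), K.shape)
    print(f"A = {A:.4f}, k* = {K[i, j]:.3f}, c* = {C[i, j]:.4f}")

Any other data satisfying (2.4), (2.5) and (2.11) may be substituted; the printed $(k^*, c^*)$ is then the constant optimal control of (2.12).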
Remark 2.1. If the interest rate $r_j = r$ is constant, this problem is equivalent to the discrete-time Merton optimal portfolio problem with no short selling. The control $k_j$ corresponds to the fraction of wealth $X_j$ in a risky asset, and $r$ is the riskless interest rate. If $r$ is constant, then $k^* > 0$ provided $r < \bar b$, where $\bar b$ is the mean productivity: $\bar b = \sum_b b\, p(b)$. In this case, at a point $(0, c^*)$ with $0 < c^* < 1 + r$,

$\dfrac{\partial \Phi}{\partial k} = \gamma \left( 1 + r - c^* \right)^{\gamma - 1} \left( \bar b - r \right) > 0$.

Hence, we must have $k^* > 0$ in this case.
Lemma 2.1 For $0 \leq \gamma \leq 1$, $\Lambda(\gamma)$ is increasing as $\gamma$ increases, and $\Lambda(0) = 1$.

Proof. From the definition, it is immediate that $\Lambda(0) = 1$ and that for $\gamma > 0$

$\Lambda(\gamma) \geq \Phi(0, 0, \gamma) \geq \left( 1 + r_- \right)^{\gamma} > 1$.

The maximum in (2.10) occurs at some $(k^*, c^*)$. Then

$\Lambda(\gamma) = \sum_{r,b} p(r,b)\, g(r,b), \qquad g(r,b) = \left( 1 + r + (b-r)k^* - c^* \right)^{\gamma}$.

The sum on the right side is an expectation $Eg$. Let $\gamma' > \gamma$ and $\zeta = \gamma'/\gamma$. Since $\Lambda(\gamma) \geq 1$ and $\zeta > 1$,

$\Lambda(\gamma) \leq \left[ \Lambda(\gamma) \right]^{\zeta} = \left[ Eg \right]^{\zeta} \leq E\left( g^{\zeta} \right) \leq \Lambda(\gamma')$.

For $\gamma = 1$,

$\Phi(k, c, 1) = 1 + \bar r + \left( \bar b - \bar r \right) k - c$,

where $\bar r$, $\bar b$ are the mean interest rate and mean productivity. The maximum over $\Gamma$ is

(2.13)  $\Lambda(1) = \dfrac{\left( 1 + r_+ \right) \left( r_+ - \bar r + \bar b - b_- \right)}{r_+ - b_-}$.
Corollary 2.1 Let $\bar\gamma = 1$ if $\beta\Lambda(1) \leq 1$. If $\beta\Lambda(1) > 1$, let $\bar\gamma$ be the solution to $\beta\Lambda(\bar\gamma) = 1$. Then the value function $W(x)$ is finite for $\gamma < \bar\gamma$. Moreover, the constant controls $k_j = k^*$, $c_j = c^*$ for $j = 0, 1, 2, \dots$ are optimal.

The case $\gamma < 0$. In the above discussion we took $0 < \gamma < \bar\gamma$. The discussion for $\gamma < 0$ is almost exactly the same. When $\gamma < 0$, the max in (2.9) becomes min. Similarly, in (2.10) $\Lambda(\gamma)$ is the minimum over $\Gamma$ of $\Phi(k,c,\gamma)$. When $\gamma < 0$,

$\Lambda(\gamma) \leq \Phi(0, 0, \gamma) \leq \left( 1 + r_- \right)^{\gamma} < 1$,

and inequality (2.11) always holds.
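Lemma 2.1 and Corollary 2.1 can be illustrated with the same assumed data: $\Lambda(\gamma)$ is a grid maximum over $\Gamma$, and when $\beta\Lambda(1) > 1$ the threshold $\bar\gamma$ solving $\beta\Lambda(\bar\gamma) = 1$ can be bisected, since $\beta\Lambda(\cdot)$ is increasing. The sketch below reuses pairs, p, C, growth, feasible and beta from the previous sketch and is, again, only an assumed example.

    def Lam(g):
        """Lambda(g) of (2.10), by grid maximum over Gamma (reuses earlier arrays)."""
        Ph = sum(p[rb] * np.where(feasible, np.maximum(growth[rb], 0.0), np.nan) ** g
                 for rb in pairs)
        return np.nanmax(Ph)

    for g in (0.1, 0.3, 0.5, 0.7, 1.0):
        print(f"Lambda({g:.1f}) = {Lam(g):.4f}")      # increasing in gamma (Lemma 2.1)

    if beta * Lam(1.0) <= 1:
        gamma_bar = 1.0
    else:                                             # beta * Lam is increasing in gamma
        lo, hi = 1e-3, 1.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if beta * Lam(mid) < 1 else (lo, mid)
        gamma_bar = 0.5 * (lo + hi)
    print("gamma_bar =", gamma_bar)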
3 Linear investment-consumption model.
In this section we consider the following discrete time stochastic control model. An economic entity has productive capital and also liabilities in the form of debt. Let $K_j$ denote the capital and $L_j$ the debt at time period $j = 0, 1, 2, \dots$. In the context of international finance, the economic entity is a country. $K_j$ is the nation's capital and $L_j$ the debt owed to foreigners. See [8][9]. If $L_j < 0$, then $S_j = -L_j$ is the amount loaned to foreigners.

Let $I_j$ denote the amount of investment in capital and $C_j$ the consumption in period $j$. Then capital and debt are updated according to the linear difference equations

(3.1)  $K_{j+1} = K_j + I_j$,

(3.2)  $L_{j+1} = \left( 1 + r_j \right) L_j + I_j + C_j - b_j K_j$.

As in Section 2, $r_j$ is the interest rate and $b_j$ the productivity of capital. We again assume that $(r_j, b_j)$ is random and independent across different time steps $j$. Moreover, (2.4) and (2.5) hold.
The following constraints are imposed:

(3.3)  $K_j > 0, \qquad L_j \leq \bar\ell K_j \quad (\bar\ell < 1)$,

(3.4)  $C_j \geq 0, \qquad -\underline{i} K_j \leq I_j \leq \bar{i} K_j$,

where $0 \leq \underline{i} < 1$ and $0 < \bar{i} < \infty$.

Consider the following stochastic control problem. The state variables are $K_j$ and $L_j$. The control variables are $I_j$ and $C_j$. In choosing $I_j$, $C_j$, the initial state $(K_0, L_0)$ and the pairs $(r_\ell, b_\ell)$ for $\ell < j$ are known. However, the current interest rate $r_j$ and productivity $b_j$ are unknown when $I_j$ and $C_j$ are chosen.
Let $U(C)$ be an increasing, concave utility function. Later in the section we take $U(C)$ to be HARA, as in Section 2. The goal is to maximize total expected discounted utility of consumption:

(3.5)  $J = E\left[ \sum_{j=0}^{\infty} \beta^{j} U(C_j) \right]$.

The constraints imply a further restriction on the upper limit $\bar\ell$. Since $\bar\ell < 1$, $X_j > 0$, where $X_j = K_j - L_j \geq \left( 1 - \bar\ell \right) K_j$. Let

(3.6)  $\ell_j = \dfrac{L_j}{K_j}, \qquad i_j = \dfrac{I_j}{K_j}, \qquad c_j = \dfrac{C_j}{X_j} = \dfrac{C_j}{\left( 1 - \ell_j \right) K_j}$.

From (3.1) and (3.2),

(3.7)  $\ell_{j+1} = \ell_j + \dfrac{r_j \ell_j + \left( i_j + c_j \right)\left( 1 - \ell_j \right) - b_j}{1 + i_j}$.
From (3.4), $c_j \geq 0$ and $-\underline{i} \leq i_j \leq \bar{i}$. Since $L_{j+1} \leq \bar\ell K_{j+1}$, (3.3) requires $\ell_{j+1} \leq \bar\ell$ for all possible $r_j$, $b_j$. If $L_j \leq 0$, there are certainly controls $I_j$, $C_j$ such that $\ell_{j+1} \leq \bar\ell$, for instance $I_j = C_j = 0$. For debt $L_j > 0$, the worst case is $r_j = r_+$, $b_j = b_-$. In this case, it must be possible to find $I_j$, $C_j$ such that

(3.8)  $\bar\ell \geq \ell_j + \dfrac{r_+ \ell_j + \left( i_j + c_j \right)\left( 1 - \ell_j \right) - b_-}{1 + i_j}$.

We take $i_j = -\underline{i}$, $c_j = 0$. Then (3.8) holds provided $\ell_j \leq \bar\ell$ and

$r_+ \ell_j - \underline{i}\left( 1 - \ell_j \right) - b_- \leq 0$,

(3.9)  $\ell_j \leq \dfrac{b_- + \underline{i}}{r_+ + \underline{i}} = \ell_{\max}$.

(This defines $\ell_{\max}$.) Note that $\ell_{\max} < 1$ since $b_- < r_+$. For the upper bound $\bar\ell$ on the debt-to-capital ratio $\ell_j$ in (3.3), we require that $0 < \bar\ell < \ell_{\max}$.

The above discussion shows that $(i_j, c_j) \in \Delta(\ell_j)$, where $\Delta(\ell)$ is defined for $\ell \leq \bar\ell$ as the set of all $(i,c)$ satisfying the following inequalities: $-\underline{i} \leq i \leq \bar{i}$, $c \geq 0$ and

(3.10)  $\dfrac{r^{\pm} \ell + (i + c)\left( 1 - \ell \right) - b_-}{1 + i} \leq \bar\ell - \ell$.

In (3.10), $r^{\pm} = r_-$ if $\ell \leq 0$ and $r^{\pm} = r_+$ if $\ell > 0$. Since $\bar\ell < \ell_{\max}$, $\Delta(\ell)$ is not empty.
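The feasibility discussion above is easy to check numerically: $\ell_{\max}$ comes from (3.9), and membership of $(i,c)$ in $\Delta(\ell)$ is just the inequalities $-\underline{i} \leq i \leq \bar{i}$, $c \geq 0$ and (3.10). In the sketch below all parameter values are assumptions for illustration.

    import numpy as np

    r_lo, r_hi, b_lo = 0.02, 0.05, 0.03       # assumed rates with b- < r+
    i_dn, i_up = 0.05, 0.50                   # assumed bounds: 0 <= i_dn < 1, 0 < i_up

    ell_max = (b_lo + i_dn) / (r_hi + i_dn)   # formula (3.9)
    ell_bar = 0.9 * ell_max                   # any choice with 0 < ell_bar < ell_max

    def in_Delta(i, c, ell):
        """(i, c) in Delta(ell): worst-case inequality (3.10) plus the box constraints."""
        if not (-i_dn <= i <= i_up and c >= 0.0):
            return False
        r_pm = r_hi if ell > 0 else r_lo      # r^{+/-} of (3.10)
        return (r_pm * ell + (i + c) * (1 - ell) - b_lo) / (1 + i) <= ell_bar - ell

    # As in the text, (i, c) = (-i_dn, 0) is feasible whenever ell <= ell_bar.
    for ell in np.linspace(-0.5, ell_bar, 8):
        assert in_Delta(-i_dn, 0.0, ell)
    print(f"ell_max = {ell_max:.4f}, ell_bar = {ell_bar:.4f}: Delta(ell) nonempty on the test grid")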
Let $V(K,L)$ denote the value function. It is the supremum of $J$ over all admissible control sequences $(I_j, C_j)$, $j = 0, 1, \dots$, considered as a function of the initial capital and debt $K = K_0$, $L = L_0$.

Proposition 3.1 Assume that $V(K,L)$ is finite for all $K > 0$ and $L \leq \bar\ell K$. Then

(a) $V(K,L)$ is a concave function;

(b) $V(K,L)$ is a nondecreasing function of $K$;

(c) $V(K,L)$ is a nonincreasing function of $L$.

Proof. Concavity of $V$ follows by a standard argument from concavity of $U(C)$ together with linearity of the state dynamics (3.1)-(3.2) and of the constraints (3.3)-(3.4).

To prove (b), let $\tilde K > K$ and let $\tilde K_j$, $\tilde L_j$ be the solution to (3.1)-(3.2) with $\tilde K_0 = \tilde K$, $\tilde L_0 = L$ and the same controls $I_j$, $C_j$. Then $K_j < \tilde K_j$, $\tilde L_j < L_j$, which implies $\tilde L_j \leq \bar\ell \tilde K_j$ when $L_j \leq \bar\ell K_j$. Moreover, the constraint

$-\underline{i} \tilde K_j \leq I_j \leq \bar{i} \tilde K_j$

holds when $I_j$ satisfies (3.4). Thus, the class of admissible controls is enlarged when initial capital $K$ is increased. This implies (b).

Similarly the class of admissible controls is increased when initial debt $L$ is decreased. This implies (c).

From now on we consider HARA utility:

$U(C) = \gamma^{-1} C^{\gamma}, \qquad \gamma < 1 \ (\gamma \neq 0)$.
Proposition 3.2 Let $\gamma < \bar\gamma$, where $\bar\gamma$ is as in Corollary 2.1. Then

(3.11)  $V(K,L) \leq \gamma^{-1} A \left( K - L \right)^{\gamma}$,

where $A$ is as in Theorem 2.1.

Proof. We subtract (3.2) from (3.1) to obtain

(3.12)  $X_{j+1} = X_j - r_j L_j + b_j K_j - C_j$.

This is the same as (2.1) since $L_j = -S_j$. Then $k_j$, $c_j$ defined by (2.3) satisfy the condition $(k_j, c_j) \in \Gamma$. Otherwise, $X_{j+1}$ would become negative by (2.2). Therefore,

$J \leq W(K - L)$,

where $W(x)$ is the value function in Section 2. Then (3.11) follows from (2.8).
Remark 3.1. Since $k_j = \left( 1 - \ell_j \right)^{-1}$, the bound (3.9) is equivalent to $k_j \leq k_{\max}$, where

$k_{\max} = \dfrac{r_+ + \underline{i}}{r_+ - b_-}$.

As $\underline{i} \to 1$, $k_{\max}$ tends to the largest value of $k$ in the quadrilateral $\Gamma$ in Section 2. If one took $\underline{i} = 1$, $\bar{i} = \infty$, then there would be no investment control constraint except $i_j \geq -1$ to insure that $K_{j+1} \geq 0$. Even when investment control constraints are omitted, the investment-consumption model in this section is not equivalent to the Merton-type model in Section 2. In the Merton-type model, the wealth $X_j$ is freely divided into $K_j$ and $S_j = -L_j$ at the start of time period $j$. The ratio $k_j = K_j / X_j$ can be taken as a control rather than $I_j$. However, in the present section, interest and productivity coefficients $r_j$, $b_j$ apply to the current states $K_j$, $L_j$. Investment $I_j$ affects the proportion of wealth in capital only in the next time period $j+1$. In continuous time, this distinction disappears. In Section 4, we will find that if investment control constraints are ignored, then a continuous time version of the investment-consumption problem in this section reduces to a continuous time Merton-type problem.
The value function satisfies the dynamic programming equation

(3.13)  $V(K,L) = \max_{(I,C)} \left\{ U(C) + \beta \sum_{r,b} p(r,b)\, V(K_1, L_1) \right\}$,

where $K_1$, $L_1$ satisfy (3.1), (3.2) with $j = 0$, $K_0 = K$, $L_0 = L$, $(I_0, C_0) = (I, C)$. In (3.13), $(I,C)$ must satisfy the constraints (3.4) and also $(K^{-1} I, K^{-1} C) \in \Delta(K^{-1} L)$. For HARA utility $U(C)$, $V(K,L)$ is homogeneous of degree $\gamma$ and hence

(3.14)  $V(K,L) = \gamma^{-1} K^{\gamma}\, \tilde V\left( K^{-1} L \right)$,

where $\tilde V(\ell) = \gamma V(1, \ell)$. For $0 < \gamma < \bar\gamma$, $\ell \leq \bar\ell$, (3.13) becomes

(3.15)  $\tilde V(\ell) = \max_{(i,c)} \left\{ c^{\gamma} + \beta \sum_{r,b} p(r,b)\, \tilde V(\ell_1) \right\}$,

where $(i,c) \in \Delta(\ell)$ and $\ell_1$ satisfies (3.7) with $j = 0$ and $\ell_0 = \ell$, $(i_0, c_0) = (i,c)$. Thus, for HARA utility, the stochastic control problem reduces to one with a single state $\ell_j$ which is updated according to (3.7), and with controls $(i_j, c_j) \in \Delta(\ell_j)$. [For $\gamma < 0$, the max in (3.15) becomes min.]
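A value-iteration sketch for this one-dimensional reduction follows. Since the homogeneity normalizations behind (3.15) are easy to mis-track, the sketch works directly from (3.13)-(3.14): with $c = C/X$ as in (3.6) and $K_1 = (1+i)K$, homogeneity gives the Bellman operator $\tilde V(\ell) = \max_{(i,c) \in \Delta(\ell)} \{ ((1-\ell)c)^{\gamma} + \beta (1+i)^{\gamma} \sum_{r,b} p(r,b) \tilde V(\ell_1) \}$. That normalization, the coarse grids, and all parameter values are assumptions of this illustration (they continue the earlier sketches), and values of $\ell_1$ leaving the grid are clamped by the interpolation.

    import numpy as np

    pairs = [(r, b) for r in (0.02, 0.05) for b in (0.03, 0.10)]  # assumed (r, b) support
    prob, gamma, beta = 0.25, 0.2, 0.85
    i_dn, i_up, r_hi, b_lo = 0.05, 0.50, 0.05, 0.03
    ell_bar = 0.9 * (b_lo + i_dn) / (r_hi + i_dn)

    ells = np.linspace(-1.0, ell_bar, 161)                # grid for the single state ell
    iv = np.linspace(-i_dn, i_up, 23)
    cv = np.linspace(0.0, 0.8, 33)
    E, I, Cg = np.meshgrid(ells, iv, cv, indexing="ij")

    # Feasibility (i, c) in Delta(ell): worst-case inequality (3.10).
    r_pm = np.where(E > 0, r_hi, 0.02)
    ok = (r_pm * E + (I + Cg) * (1 - E) - b_lo) / (1 + I) <= ell_bar - E

    # Next ratio ell_1 from (3.7), one layer per (r, b); np.interp clamps at grid edges.
    ell1 = np.stack([E + (r * E + (I + Cg) * (1 - E) - b) / (1 + I) for (r, b) in pairs])

    V = np.zeros_like(ells)
    for _ in range(200):                                  # value iteration
        cont = prob * np.interp(ell1, ells, V).sum(axis=0)
        rhs = ((1 - E) * Cg) ** gamma + beta * (1 + I) ** gamma * cont
        V = np.where(ok, rhs, -np.inf).max(axis=(1, 2))
    print("Vtilde samples:", np.round(V[::40], 4))

The contraction factor is at most $\beta (1 + \bar{i})^{\gamma} < 1$ for these assumed values, so the iteration converges.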
Remark 3.2. We have assumed that the pairs $(r_j, b_j)$ are independent across different time periods. One could assume instead that $(r_j, b_j)$ evolve according to a finite state Markov chain with stationary transition probabilities $P(r_j, b_j; r_{j+1}, b_{j+1})$. If $r_j$ and $b_j$ are assumed known before $I_j$ and $C_j$ are chosen, then dynamic programming can again be used. In this case the value function $V(K, L, r, b)$ depends on $r = r_0$ and $b = b_0$ as well as initial capital and debt $K = K_0$, $L = L_0$.
4 Continuous-time investment-consumption models.
Let us now consider continuous-time versions of the discrete-time models in Sections 2 and 3. Let $K_t$ denote capital, $L_t$ debt, $I_t$ investment rate and $C_t$ consumption rate at time $t$. Then capital and debt satisfy (at least formally) the differential equations

(4.1)  $dK_t = I_t\, dt$,

(4.2)  $dL_t = \left( r_t L_t + I_t + C_t - b_t K_t \right) dt$,

where $r_t$ and $b_t$ are the interest rate and productivity of capital. Let us ignore possible constraints on $I_t$. (See comments later in the section if the investment rate $I_t$ is constrained, or if (4.1) depends nonlinearly on the ratio of investment to capital.) The only constraints imposed are

(4.3)  $K_t \geq 0, \qquad C_t \geq 0, \qquad X_t > 0$,

where $X_t = K_t - L_t$. If there are no constraints on $I_t$, then wealth $X_t$ can be shifted instantaneously and frictionlessly between capital $K_t$ and net financial assets $S_t = -L_t$. We can take $X_t$ as the state, and $K_t$, $C_t$ as controls rather than $I_t$, $C_t$. By subtracting (4.2) from (4.1) we obtain the dynamics of $X_t$. Equivalently, we will take as controls $k_t$, $c_t$, where

(4.4)  $K_t = k_t X_t, \qquad C_t = c_t X_t$.

Then

(4.5)  $dX_t = X_t \left[ r_t + \left( b_t - r_t \right) k_t - c_t \right] dt$.
However, equation (4.5) is merely formal. We wish the pairs $(r_t, b_t)$ to be independent for different times $t$. To make this mathematically precise, we replace $r_t\, dt$ and $b_t\, dt$ in (4.2), (4.5) by $dR_t$ and $dB_t$, where $R_t$, $B_t$ are processes with stationary independent increments. In fact, we assume as in [8] that

$R_t = rt + \sigma_1 w_{1t}, \qquad B_t = bt + \sigma_2 w_{2t}$,

where $w_{1t}$, $w_{2t}$ are Brownian motions which may be correlated. Let $\rho$ denote the correlation coefficient:

$E\left( w_{1t} w_{2t} \right) = \rho t, \qquad -1 < \rho < 1$.

The constants $r$, $b$ are the mean continuous-time interest rate and mean productivity of capital.

Equation (4.5) now becomes the stochastic differential equation (Ito sense)

(4.6)  $dX_t = X_t \left[ \left( r + (b-r) k_t - c_t \right) dt + k_t \sigma_2\, dw_{2t} + \left( 1 - k_t \right) \sigma_1\, dw_{1t} \right]$.
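For intuition about (4.6), the wealth equation can be simulated by an Euler-Maruyama step with correlated Brownian increments and constant controls $k_t = k$, $c_t = c$. All parameter values below are assumptions for illustration.

    import numpy as np

    r, b = 0.03, 0.08                   # assumed mean interest rate and productivity
    sigma1, sigma2, rho = 0.05, 0.20, 0.30
    k, c = 0.7, 0.05                    # assumed constant ratios, as in (4.4)
    T, n, n_paths = 10.0, 1000, 5
    dt = T / n

    rng = np.random.default_rng(0)
    X = np.full(n_paths, 1.0)           # initial wealth x = 1
    for _ in range(n):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rho * dw1 + np.sqrt(1 - rho**2) * rng.normal(0.0, np.sqrt(dt), n_paths)
        X *= 1 + (r + (b - r) * k - c) * dt + k * sigma2 * dw2 + (1 - k) * sigma1 * dw1
    print("samples of X_T:", np.round(X, 3))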
The goal is to choose the controls $k_t$, $c_t$ to maximize expected total discounted HARA utility of consumption

$J = E \int_0^{\infty} e^{-\delta t}\, \gamma^{-1} \left( c_t X_t \right)^{\gamma} dt$,

where $\delta > 0$ is the continuous-time discount factor. The controls are chosen based on past information up to time $t$. This can be made precise by requiring $k_t$, $c_t$ to be progressively measurable with respect to an increasing family of $\sigma$-algebras to which the Brownian motions $w_{1t}$, $w_{2t}$ are adapted [7, Chapter 4].

This stochastic control problem has an explicit solution, which is found by dynamic programming in the same way as for the continuous time Merton portfolio optimization problem. The Merton problem corresponds to the special case of constant interest rate $r$ ($\sigma_1 = 0$). Let $V(x)$ denote the value function. The continuous time dynamic programming equation is [7, Sec. 4.5]

(4.7)  $\delta V = \max_{k \geq 0} \left[ \dfrac{Q(k)}{2} x^2 V_{xx} + (b-r) k x V_x \right] + r x V_x + \max_{c \geq 0} \left[ -c x V_x + \gamma^{-1} (cx)^{\gamma} \right]$,

(4.8)  $Q(k) = \sigma_2^2 k^2 + 2 \rho \sigma_1 \sigma_2 k (1-k) + \sigma_1^2 (1-k)^2$.

As in Section 2, we seek a homogeneous solution $V(x) = \gamma^{-1} A x^{\gamma}$ to (4.7) and then verify that $V(x)$ is the value function. The details are nearly the same as for the Merton problem [7, p. 174]. Let

(4.9)  $\lambda_{\gamma} = \max_{k \geq 0} \left[ -\dfrac{1}{2} Q(k) (1 - \gamma) + (b-r) k \right] + r$.
The condition for existence of the required homogeneous solution $V(x)$ to (4.7) with $A > 0$ is

(4.10)  $\delta > \gamma \lambda_{\gamma}$.

The maximum in (4.9) occurs when $k = k^*$, where

(4.11)  $k^* = \chi k_m^* + \chi \nu \left( \nu - \rho \right)$,

provided that the right side of (4.11) is positive. Otherwise, $k^* = 0$. In (4.11)

$k_m^* = \dfrac{b - r}{\sigma_2^2 (1 - \gamma)}, \qquad \nu = \dfrac{\sigma_1}{\sigma_2}, \qquad \chi^{-1} = 1 - 2 \rho \nu + \nu^2$.

Note that $k_m^*$ is the Merton solution ($\sigma_1 = 0$). The maxima on the right side of (4.7) occur when $k = k^*$ and $c = c^*$, where

(4.12)  $c^* = \dfrac{\delta - \gamma \lambda_{\gamma}}{1 - \gamma}$.

The constant controls $k^*$, $c^*$ are the optimal ratios of capital to wealth and consumption to wealth.
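The closed-form controls (4.11)-(4.12) can be checked against a direct numerical maximization of (4.9). A sketch with assumed parameters (any values satisfying the existence condition (4.10) will do):

    import numpy as np

    r, b, sigma1, sigma2, rho = 0.03, 0.08, 0.05, 0.20, 0.30
    gamma, delta = -1.0, 0.05           # assumed HARA exponent and discount rate

    nu = sigma1 / sigma2
    chi = 1.0 / (1 - 2 * rho * nu + nu**2)
    k_m = (b - r) / (sigma2**2 * (1 - gamma))            # Merton solution (sigma1 = 0)
    k_star = max(chi * k_m + chi * nu * (nu - rho), 0.0) # (4.11)

    def Q(k):                                            # (4.8)
        return (sigma2**2 * k**2 + 2 * rho * sigma1 * sigma2 * k * (1 - k)
                + sigma1**2 * (1 - k)**2)

    ks = np.linspace(0.0, 5.0, 200001)
    vals = -0.5 * Q(ks) * (1 - gamma) + (b - r) * ks
    lam = vals.max() + r                                 # lambda_gamma of (4.9)
    assert delta > gamma * lam                           # existence condition (4.10)
    c_star = (delta - gamma * lam) / (1 - gamma)         # (4.12)

    print(f"k* = {k_star:.4f}  (grid argmax = {ks[vals.argmax()]:.4f})")
    print(f"lambda_gamma = {lam:.4f}, c* = {c_star:.4f}")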
Remark 4.1 In the international finance/debt interpretation of this model, it is difficult to measure a country's capital $K_t$. However, data for the gross domestic product (GDP) $Y_t = b_t K_t$ are widely available. The following model is mathematically equivalent to the one above. Instead of $K_t$, $L_t$ take $Y_t$ and $L_t$ as the state variables. Assume that the GDP $Y_t$ satisfies

(4.13)  $dY_t = b I_t\, dt + \sigma_2 Y_t\, dw_{2t}, \qquad b > 0$,

and that $L_t$ again satisfies (4.2), or equivalently (4.2') below. Let

$\tilde K_t = b^{-1} Y_t, \qquad X_t = \tilde K_t - L_t, \qquad k_t = X_t^{-1} \tilde K_t$.

Then $X_t$ again satisfies (4.6) and the solution to the stochastic control problem is the same as before.

Variants of the model. The simple continuous-time investment/consumption model just considered has special features which permit an explicit solution. We now mention some variants of this simple model which are currently being investigated.
A) Bounds on investment rates and debt to capital ratios. Suppose that finite upper and lower bounds

$-\underline{i} K_t \leq I_t \leq \bar{i} K_t$

are imposed, like those in (3.4). The pair $(K_t, L_t)$ is the state of the continuous-time stochastic system being controlled. The controls are $I_t$, $C_t$, or equivalently $i_t$, $c_t$, where $i_t = K_t^{-1} I_t$ and $c_t = X_t^{-1} C_t$. With the model chosen for $R_t$, $B_t$, equation (4.2) becomes

(4.2')  $dL_t = \left( r L_t + I_t + C_t - b K_t \right) dt + \sigma_1 L_t\, dw_{1t} - \sigma_2 K_t\, dw_{2t}$.
With positive probability, the Brownian motions $w_{1t}$, $w_{2t}$ can undergo arbitrarily large excursions in any time interval. Hence, any fixed upper bound $\bar\ell$ for the debt to capital ratio $K_t^{-1} L_t$ will be exceeded with positive probability. The criterion $J$ considered above must be modified to account for this fact. The choice of an appropriate criterion to be optimized is an interesting modelling issue. The following criteria $J_1$ and $J_2$ are two possibilities.

1. Given initial state $(K, L)$ with $L < \bar\ell K$, let $\tau$ denote the first time $t$ such that $L_t = \bar\ell K_t$. Consider the problem of maximizing

$J_1 = \gamma^{-1} E\left[ \int_0^{\tau} e^{-\delta t} C_t^{\gamma}\, dt - \theta e^{-\delta \tau} L_{\tau}^{\gamma} \right]$,

where $\theta \geq 0$ is a penalty imposed when the upper debt-to-capital ratio bound $\bar\ell$ is reached.

2. Consider the problem of maximizing

$J_2 = \gamma^{-1} E \int_0^{\infty} e^{-\delta t} \left[ C_t^{\gamma} - K_t^{\gamma}\, \Xi(\ell_t) \right] dt$,

where $\ell_t = K_t^{-1} L_t$ and $\Xi(\ell)$ is a penalty function which increases as $\ell$ increases. The function $\Xi(\ell)$ must increase rapidly enough for large $\ell$ to insure that $J_2$ cannot be made arbitrarily large by suitable choice of the controls $I_t$, $C_t$.

Yet another possibility would be to require that the constant mean interest rate $r$ in (4.2') is replaced by $r(\ell_t)$, where $r(\ell)$ increases at a suitable rate as $\ell$ increases.
B) Nonlinear dependence of capital growth on investment rates. Large values of $|i_t|$, where $i_t = K_t^{-1} I_t$, may be (in some sense) less efficient. Instead of (4.1) suppose that

$dK_t = g(i_t) K_t\, dt$,

where $g(i) \leq i$, $g(0) = 0$, $g''(i) < 0$. The linear case $g(i) = i$ is (4.1). As in variant A) above, one could consider $J_1$ or $J_2$ as the criterion to be optimized. The following heuristic argument suggests (but does not prove) that the form of nonlinearity of $g(i)$ should imply bounds for optimal investment ratios $i_t^*$. The investment control variable $i$ appears in the dynamic programming equation for the value function $V(K,L)$ through a term

$K \max_i \left[ g(i) V_K + i V_L \right]$,

where $V_K$, $V_L$ are the first-order partial derivatives. The max would occur at

$i^* = \left( g' \right)^{-1} \left( -V_K^{-1} V_L \right)$,

where the assumption $g''(i) < 0$ insures that $g'$ has an inverse. A positive lower bound for $V_K$ and an upper bound for $|V_L|$ would imply a corresponding bound for the optimal investment ratio $i^*$.
5 Totally risk-averse limits.
The HARA parameter $\gamma$ is a measure of sensitivity to risk. For utility function $U(C)$, risk sensitivity is defined as $|U''(C)| / U'(C)$, which for HARA utility equals $(1 - \gamma) C^{-1}$. In this section we consider the totally risk averse limit $\gamma \to -\infty$. For simplicity, we consider only problems without investment control constraints, similar to those in Sections 2 and 4.

Let us begin with a finite-time horizon version of the model in Section 2. The discount factor $\beta$ plays no role in the totally risk averse limit. Hence we take $\beta = 1$. For the control problem with $N$ steps, we wish to choose controls $(k_j, c_j) \in \Gamma$ for $j = 0, 1, \dots, N-1$ to maximize

(5.1)  $J_N(\gamma) = \gamma^{-1} E\left[ \sum_{j=0}^{N-1} C_j^{\gamma} + A_0(\gamma) X_N^{\gamma} \right], \qquad A_0(\gamma) \geq 0$.

The last term in (5.1) is a measure of the utility of final wealth $X_N$. If $A_0(\gamma) = 0$ and the probabilities $p(r,b)$ do not depend on $\gamma$, then the main contribution to (5.1) comes from the smallest consumption $C_j$ in the $\gamma \to -\infty$ limit. Note that $C_j = c_j X_j$ and $X_j$ depends on controls $(k_\ell, c_\ell)$ and also on $(r_\ell, b_\ell)$ for all $\ell < j$. See (2.2). In the totally risk averse limit, the optimal controls $(k_j, c_j)$ are chosen to maximize minimum consumption under worst case scenarios $r_\ell = r_+$, $b_\ell = b_-$.
A more interesting totally risk-averse limit problem is obtained by introducing ideas from the theory of large deviations for stochastic processes [2]. Suppose that $p(r,b) = p_{\gamma}(r,b)$ depends on $\gamma$ in the following way:

(5.2)  $\lim_{\gamma \to -\infty} \left( p_{\gamma}(r,b) \right)^{1/\gamma} = q(r,b)$.

Since $0 < p_{\gamma}(r,b) < 1$ and these probabilities sum to 1, $q(r,b) \geq 1$ and $q(r,b) = 1$ for at least one of the finitely many possible pairs $(r,b)$. If $p_{\gamma}(r,b) = p(r,b)$ does not depend on $\gamma$, then $q(r,b) = 1$ for all $(r,b)$. As a simple example, suppose that the interest rate $r_j = r$ is constant and the productivity of capital is either $b_j = b_+$ (good) or $b_j = b_-$ (bad). See Example 5.1 below. If $q(b_+) = 1$ and $q(b_-) > 1$, then the undesirable event $b_j = b_-$ becomes rare as $\gamma \to -\infty$. We also assume that $\left( A_0(\gamma) \right)^{1/\gamma}$ tends to a limit $B_0 > 0$ as $\gamma \to -\infty$. Let

(5.3)  $J_N = \lim_{\gamma \to -\infty} \left( \gamma J_N(\gamma) \right)^{1/\gamma}$.
The finite horizon, totally risk-averse control problem is to choose a sequence of controls $(k_j, c_j)$, $j = 0, 1, \dots, N-1$ to maximize $J_N$. We recall from Section 2 that $(k_j, c_j) \in \Gamma$ and that $k_j$ and $c_j$ are functions of $\vec r_j$, $\vec b_j$, where

$\vec r_j = \left( r_0, r_1, \dots, r_{j-1} \right), \qquad \vec b_j = \left( b_0, b_1, \dots, b_{j-1} \right)$.

It is convenient to express $J_N$ as a totally risk-averse expectation, which is defined as follows. The totally risk-averse expectation of a positive function $\Phi\left( \vec r_N, \vec b_N \right)$ is defined as

(5.4)  (a) $E^{\#}(\Phi) = \min_{(\vec r_N, \vec b_N)} q_N\left( \vec r_N, \vec b_N \right) \Phi\left( \vec r_N, \vec b_N \right)$,
       (b) $q_j\left( \vec r_j, \vec b_j \right) = \prod_{\ell=0}^{j-1} q\left( r_\ell, b_\ell \right) = \lim_{\gamma \to -\infty} \left[ \prod_{\ell=0}^{j-1} p_{\gamma}\left( r_\ell, b_\ell \right) \right]^{1/\gamma}, \quad j = 1, \dots, N$.
From the definition and the assumption that $(r_j, b_j)$ are independent over different time periods, it is easily shown (see Appendix) that

(5.5)  $E^{\#}(\Phi) = \lim_{\gamma \to -\infty} \left[ E\left( \Phi^{\gamma} \right) \right]^{1/\gamma}$.

Let $a \oplus b = \min(a, b)$ denote the min-plus sum of two numbers $a$, $b$. Then $E^{\#}$ is additive with respect to min-plus addition (see the Appendix):

(5.6)  $E^{\#}\left( \Phi_1 \oplus \cdots \oplus \Phi_m \right) = E^{\#}(\Phi_1) \oplus \cdots \oplus E^{\#}(\Phi_m) = \lim_{\gamma \to -\infty} \left[ E\left( \sum_{\ell=1}^m \Phi_\ell^{\gamma} \right) \right]^{1/\gamma}$.

If we take $m = N+1$, $\Phi_{j+1} = C_j = c_j X_j$ for $0 \leq j \leq N-1$, $\Phi_{N+1} = B_0 X_N$, then

(5.7)  $J_N = E^{\#}\left[ \bigoplus_{j=0}^{N-1} C_j \oplus B_0 X_N \right]$.

In other words, $J_N$ is the totally risk averse expectation of the smaller of the minimum consumption over $N$ time periods and $B_0 X_N$, where $X_N$ is the final wealth.
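On a short horizon, the limit formula (5.5) can be verified by brute force: enumerate all scenario paths, evaluate the path functional of (5.7) under $q_N$, and compare with $[E(\Phi^{\gamma})]^{1/\gamma}$ for large $|\gamma|$. The family $p_{\gamma}(r,b) = q(r,b)^{\gamma} / \sum q^{\gamma}$ used below is one concrete choice satisfying (5.2); the scenario data, constant controls and horizon are likewise assumptions for the illustration.

    import itertools
    import numpy as np

    r = 0.03                                  # constant interest rate, as in Example 5.1
    q_of_b = {0.10: 1.0, 0.01: 1.5}           # assumed b -> q(b); q = 1 is the non-rare event
    k, c, B0, N, x = 1.2, 0.04, 0.5, 3, 1.0   # assumed constant controls, weight B_0, horizon

    def phi(bs):
        """Path functional of (5.7): min of the C_j and B0 * X_N along one path."""
        X, cons = x, []
        for b in bs:
            cons.append(c * X)
            X *= 1 + r + (b - r) * k - c      # wealth update (2.2)
        return min(min(cons), B0 * X)

    paths = list(itertools.product(q_of_b, repeat=N))
    # (5.4): E#(phi) = min over paths of q_N(path) * phi(path).
    E_sharp = min(np.prod([q_of_b[b] for b in bs]) * phi(bs) for bs in paths)

    # (5.5): [E(phi^gamma)]^(1/gamma) with p_gamma(b) = q(b)^gamma / sum_b q(b)^gamma,
    # evaluated in log space to avoid overflow for very negative gamma.
    gamma = -200.0
    logZ = np.log(sum(q**gamma for q in q_of_b.values()))
    log_terms = [sum(gamma * np.log(q_of_b[b]) - logZ for b in bs) + gamma * np.log(phi(bs))
                 for bs in paths]
    approx = np.exp(np.logaddexp.reduce(log_terms) / gamma)
    print(f"E#(phi) = {E_sharp:.6f}, moment approximation = {approx:.6f}")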
This totally risk-averse limit control problem can be solved by straightforward modifications of standard dynamic programming methods. Let $Z_N(x)$ denote the maximum of $J_N$, considered as a function of the initial wealth $X_0 = x$. It satisfies the finite time dynamic programming equation

(5.8)  $Z_N(x) = \max_{(k,c)} \left\{ (cx) \oplus E^{\#}\left[ Z_{N-1}(X_1) \right] \right\}$,

where, from (2.2) with $r = r_0$, $b = b_0$,

$X_1 = x\left( 1 + r + (b-r)k - c \right)$.

The totally risk-averse value function is linear: $Z_N(x) = B_N x$. From (5.8) the constants $B_N$ satisfy the recursive formula

(5.9)  $B_N = \max_{(k,c)} \left[ c \oplus B_{N-1} \min_{(r,b)} q(r,b)\left( 1 + r + (b-r)k - c \right) \right]$.

Let $(k_N^*, c_N^*)$ give the maximum in (5.9) among all $(k,c) \in \Gamma$. The optimal controls for the totally risk-averse control problem are

(5.10)  $k_j = k_{N-j}^*, \quad c_j = c_{N-j}^*, \quad j = 0, 1, \dots, N-1$.
Steady-state solution. Of particular interest is the case when $B_0$ is chosen such that $B_N = B_0$ for all $N = 1, 2, \dots$. In this case, the optimal controls $k^*$, $c^*$ do not depend on $j$. The following lemma states that a steady state solution exists and is unique.

Lemma 5.1 There exists a unique $B > 0$ such that

(5.11)  $1 = \max_{(k,c)} \left[ B^{-1} c \oplus \min_{(r,b)} q(r,b)\left( 1 + r + (b-r)k - c \right) \right]$.

Proof. We proceed as in the proof of Theorem 2.1. Let $\Theta(B)$ denote the right side of (5.11). Then $\Theta(B)$ is a continuous, strictly decreasing function of $B$ and $\Theta(B) \to 0$ as $B \to \infty$. For each fixed $c^{\circ}$,

$\max_k \min_{(r,b)} q(r,b)\left( 1 + r + (b-r)k - c^{\circ} \right) \geq 1 + b_- - c^{\circ}$,

where we have taken $k = 1$ on the right side and have used the fact that $q(r,b) \geq 1$. Choose $c^{\circ}$ such that $0 < c^{\circ} < b_-$. If $B < c^{\circ}$, then

$\Theta(B) \geq \min\left\{ B^{-1} c^{\circ},\ 1 + b_- - c^{\circ} \right\} > 1$.

Hence, there exists a solution $B$ to $\Theta(B) = 1$ with $B \geq b_-$. Since $\Theta$ is strictly decreasing, $B$ is unique.
From now on, we take $B_0 = B$ and hence from (5.9) $B_N = B$ for all $N$. The steady state totally risk-sensitive value function is $Z(x) = Bx$.

Remark 5.1. $Z(x)$ can also be obtained from the discounted infinite horizon control problem in Section 2, as follows. In (2.8) assume that $A = A(\gamma) = B^{\gamma}$, and write $W(x) = W(x, \gamma)$. Then

$1 = \max_{(k,c)} \left\{ \left( B^{-1} c \right)^{\gamma} + \beta \Phi(k,c,\gamma) \right\}^{1/\gamma}$

and the right side tends to the right side of (5.11) as $\gamma \to -\infty$. Thus

$Z(x) = \lim_{\gamma \to -\infty} \left( \gamma W(x, \gamma) \right)^{1/\gamma}$.
Lemma 5.2 $B = c^*$ and

(5.12)  $\min_{(r,b)} q(r,b)\left( 1 + r + (b-r)k^* - c^* \right) = 1$.

Proof. We first note that either $(k^*, c^*)$ is interior to $\Gamma$ or $k^* = 0$, $0 < c^* < 1 + r_-$. Otherwise, the right side of (5.11) would be 0. We must have

$B^{-1} c^* = \min_{(r,b)} q(r,b)\left( 1 + r + (b-r)k^* - c^* \right)$.

Otherwise, the max in (5.11) could be increased by keeping $k = k^*$ fixed and slightly increasing or decreasing $c$ (thus $c = c^* \pm \epsilon$ for small $\epsilon > 0$). By (5.11), $B = c^*$.
We summarize these results in the following theorem.

Theorem 5.1 Let $(k^*, c^*)$ give the maximum in (5.11). Then:

(a) $k^*$, $c^*$ are the optimal capital-to-wealth and consumption-to-wealth ratios for the steady state, totally risk-averse optimal control problem and $Z(x) = c^* x$ is the steady state value function.

(b) Let $C_j^* = c^* X_j^*$, where $X_j^*$ satisfies (2.2) with $k_j = k^*$, $c_j = c^*$ and $X_0^* = x$. Then $E^{\#}\left( C_j^* \right) = c^* x$ does not depend on $j$.

Proof (outline). For part (a), a standard verification argument shows that the maximum of $J_N$ over all admissible control sequences $k_j$, $c_j$ is $Z(x) = Bx$, and that the maximum is attained by the controls $k^*$, $c^*$.

We prove part (b) by induction on $j$. By using the product form (5.4b) of $q_j\left( \vec r_j, \vec b_j \right)$, we obtain

$E^{\#}\left( X_{j+1}^* \right) = \min_{(r_j, b_j)} q\left( r_j, b_j \right) \left( 1 + r_j + \left( b_j - r_j \right) k^* - c^* \right) E^{\#}\left( X_j^* \right)$.

By (5.12), $E^{\#}\left( X_{j+1}^* \right) = E^{\#}\left( X_j^* \right)$. By induction, $E^{\#}\left( X_j^* \right) = x$ for $j = 1, 2, \dots$. Since $C_j^* = c^* X_j^*$, $E^{\#}\left( C_j^* \right) = c^* E^{\#}\left( X_j^* \right) = c^* x$.
Since

$\bigoplus_{j=0}^{N-1} C_j^* = \min_{0 \leq j \leq N-1} C_j^*$,

we have by (5.6):

Corollary 5.1 For every $N = 1, 2, \dots$

(5.13)  $E^{\#}\left( \min_{0 \leq j \leq N-1} C_j^* \right) = c^* x$.
Example 5.1 Let $r_+ = r_- = r$, $b_j = b_+$ or $b_-$ where $b_- < r < b_+$. Let $q(b_+) = 1$, $q(b_-) > 1$. In large deviations terminology, the undesirable productivity $b_j = b_-$ is rare if $\gamma < 0$ and $|\gamma|$ is large. Let $q(b_-) = q_-$. By (5.11) and Lemma 5.2,

$1 = \max_k \left\{ \min\left[ 1 + r + \left( b_+ - r \right) k - c^*, \ q_-\left( 1 + r + \left( b_- - r \right) k - c^* \right) \right] \right\}$

and the max occurs for $k = k^*$. Since $b_+ - r > 0$, $b_- - r < 0$,

$1 = 1 + r + \left( b_+ - r \right) k^* - c^* = q_-\left( 1 + r + \left( b_- - r \right) k^* - c^* \right)$.

From these equations

(5.14)  (a) $k^* = \dfrac{1 - \left( q_- \right)^{-1}}{b_+ - b_-}$,
        (b) $c^* = r + \dfrac{b_+ - r}{b_+ - b_-} \left( 1 - \left( q_- \right)^{-1} \right)$.

Note that $k^*$ does not depend on the interest rate $r$ in this example.
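A quick numeric check of (5.14), with assumed values of $r$, $b_{\pm}$ and $q_-$: both branches of the min in (5.11) should equal 1 at $(k^*, c^*)$, so that $B = c^*$ as in Lemma 5.2.

    r, b_hi, b_lo, q_minus = 0.03, 0.10, 0.01, 1.5   # assumed data with b- < r < b+

    k_star = (1 - 1 / q_minus) / (b_hi - b_lo)                      # (5.14a)
    c_star = r + (b_hi - r) * (1 - 1 / q_minus) / (b_hi - b_lo)     # (5.14b)

    good = 1 + r + (b_hi - r) * k_star - c_star                     # branch with q(b+) = 1
    bad = q_minus * (1 + r + (b_lo - r) * k_star - c_star)          # rare branch, weight q-
    print(k_star, c_star, good, bad)                 # good = bad = 1, hence B = c*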
Continuous time model. We turn now to the continuous time model in Section 4. For simplicity, we take $r = $ constant ($\sigma_1 = 0$ in equation (4.6)). The problem is then mathematically equivalent to the Merton portfolio optimization problem with the no short selling constraint $K_t \geq 0$. We let the volatility coefficient $\sigma_2 = \sigma_2(\gamma)$ depend on $\gamma$ in such a way that

(5.15)  $\lim_{\gamma \to -\infty} \left[ \sigma_2(\gamma) \right]^2 \left( 1 - \gamma \right) = \sigma_{\infty}^2 > 0$.

From (4.9), (4.10), (4.12), the optimal controls $k^*(\gamma)$, $c^*(\gamma)$ for the infinite horizon problem in Section 4 tend as $\gamma \to -\infty$ to $k^*$, $c^*$, where

(5.16)  (a) $k^* = \dfrac{b - r}{\sigma_{\infty}^2}, \qquad c^* = \lambda$,
        (b) $\lambda = \dfrac{(b - r)^2}{2 \sigma_{\infty}^2} + r$.
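The convergence asserted in (5.16) can be seen numerically by taking $\sigma_2(\gamma) = \sigma_{\infty} / \sqrt{1 - \gamma}$, one family satisfying (5.15), and evaluating (4.11)-(4.12) with $\sigma_1 = 0$ for increasingly negative $\gamma$. Parameter values are assumed.

    import numpy as np

    r, b, sigma_inf, delta = 0.03, 0.08, 0.20, 0.05  # assumed parameters

    for gamma in (-1.0, -10.0, -100.0, -1000.0):
        sigma2 = sigma_inf / np.sqrt(1 - gamma)      # then sigma2^2 (1 - gamma) = sigma_inf^2
        k_g = (b - r) / (sigma2**2 * (1 - gamma))    # (4.11) with sigma1 = 0
        lam_g = (b - r)**2 / (2 * sigma2**2 * (1 - gamma)) + r   # (4.9) in closed form
        c_g = (delta - gamma * lam_g) / (1 - gamma)  # (4.12)
        print(f"gamma = {gamma:7.1f}: k* = {k_g:.4f}, c* = {c_g:.5f}")

    print("limits (5.16): k* =", (b - r) / sigma_inf**2,
          " c* = lambda =", (b - r)**2 / (2 * sigma_inf**2) + r)

For this exact scaling $k^*(\gamma)$ and $\lambda_{\gamma}$ are already constant in $\gamma$, so the convergence is visible in $c^*(\gamma) \to \lambda$.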
We write $V(x) = V(x, \gamma)$ for the value function in Section 4. Then

$\gamma V(x, \gamma) = A(\gamma) x^{\gamma}$,

where from the last term in the dynamic programming equation (4.7)

$A(\gamma) = \left[ c^*(\gamma) \right]^{\gamma - 1}, \qquad \lim_{\gamma \to -\infty} \left[ A(\gamma) \right]^{1/\gamma} = \lambda$.

Thus

(5.17)  $\lim_{\gamma \to -\infty} \left[ \gamma V(x, \gamma) \right]^{1/\gamma} = Z(x) = \lambda x$.
We will interpret $Z(x)$ as the steady state value function for a totally risk-averse limit control problem, and $k^*$, $c^*$ as optimal controls for this problem. The optimal controls turn out to be constants. Hence in (4.6) let us consider only controls $k_t$, $c_t$ which are not random. Since $\sigma_1 = 0$, (4.6) becomes

(5.18)  $dX_t = X_t \left[ \left( r + (b-r) k_t - c_t \right) dt + k_t \sigma_2(\gamma)\, dw_{2t} \right]$.

Since $\sigma_2(\gamma) \to 0$ as $\gamma \to -\infty$, this is a small random perturbation of the corresponding linear differential equation with $\sigma_2 = 0$. The asymptotic behavior of $E\left( X_t^{\gamma} \right)$ as $\gamma \to -\infty$ is obtained from the Freidlin-Wentzell theory of small random perturbations [10]. Let $k_t$, $c_t$ be continuous functions of $t$, and let $v_t$ denote an arbitrary deterministic disturbance function. Let $x_t$ be the solution to the differential equation

(5.19)  $\dfrac{dx_t}{dt} = x_t \left[ r + (b-r) k_t - c_t + \sigma_{\infty} k_t v_t \right]$

with the same initial data $x_0 = x$ as for (5.18). Then

(5.20)  $\log E^{\#}(x_t) = \log \lim_{\gamma \to -\infty} \left[ E\left( X_t^{\gamma} \right) \right]^{1/\gamma} = E^{-}\left[ \log x_t \right]$,

where for $g(x) = \log(ax)$ with any constant $a > 0$

(5.21)  $E^{-}\left[ g(x_t) \right] = \inf_{v_\cdot} \left\{ g(x_t) + \dfrac{1}{2} \int_0^t v_s^2\, ds \right\}$.

If $k_t = k$, $c_t = c$ are constants, then (5.20) can easily be derived directly. See the Appendix. $E^{-}$ is sometimes called a min-plus expectation [5].
As in (5.1) let

(5.22)  $J(T, \gamma) = \gamma^{-1} E\left[ \int_0^T e^{-\delta t} C_t^{\gamma}\, dt + A(\gamma) e^{-\delta T} X_T^{\gamma} \right]$.

Note that $C_t = c_t X_t$, where $c_t$ is not random. Let

(5.23)  $J(T) = \lim_{\gamma \to -\infty} \left[ \gamma J(T, \gamma) \right]^{1/\gamma}$.

By (5.20),

(5.24)  $\log \lim_{\gamma \to -\infty} \left[ E\left( C_t^{\gamma} \right) \right]^{1/\gamma} = E^{-}\left[ \log\left( c_t x_t \right) \right] = \log c_t + E^{-}\left[ \log x_t \right]$.

From this it can be seen that

(5.25)  $J(T) = \min_{0 \leq t \leq T} E^{\#}\left( \tilde C_t \right) \oplus \lambda E^{\#}(x_T)$,

where $\tilde C_t = c_t x_t$ and for any constant $a > 0$

(5.26)  $E^{\#}\left[ a x_t \right] = a E^{\#}(x_t) = \exp\left( E^{-}\left[ \log(a x_t) \right] \right)$.

Formula (5.25) is a continuous time analogue of (5.7).

The steady state, totally risk-averse continuous time optimal control problem is to find control functions $k_\cdot$, $c_\cdot$ which maximize $J(T)$ on any finite time interval $0 \leq t \leq T$.
Theorem 5.2 Let $k^*$, $c^*$, $\lambda$ be as in (5.16). Then:

(a) $k^*$, $c^*$ are optimal controls and $Z(x) = \lambda x$ is the steady state value function.

(b) Let $\tilde C_t^* = c^* x_t^*$, where $x_t^*$ satisfies (5.19) with $k_t = k^*$, $c_t = c^*$ and $x_0^* = x$. Then $E^{\#}\left( \tilde C_t^* \right) = c^* x = Z(x)$ does not depend on $t$.

Proof. The dynamic programming principle in stochastic control implies that $J(T, \gamma) \leq V(x, \gamma)$ for all $k_\cdot$, $c_\cdot$. Therefore $J(T) \leq Z(x)$ for all such controls $k_\cdot$, $c_\cdot$. Let $k_t = k^*$, $c_t = c^*$ for $0 \leq t \leq T$. Then a direct calculation shows that $E^{\#}\left( x_t^* \right) = x$ for all $t$ (see Appendix). Since $\tilde C_t^* = c^* x_t^*$,

$J(T) = E^{\#}\left( \tilde C_t^* \right) = \lambda E^{\#}(x_T^*) = \lambda x \quad \text{for } 0 \leq t \leq T$.
Remark 5.2. The optimal capital-to-wealth ratio $k^*$ in (5.16) depends on the interest rate $r$, as well as on $b$ and $\sigma_{\infty}$. In contrast, the corresponding $k^*$ for the discrete time problem in Example 5.1 does not depend on $r$. See (5.14a). These results are not comparable, since the large deviations scaling of parameters is different for the discrete time and continuous time models.

Remark 5.3. If consumption were omitted from the model ($c_t = 0$ in (5.19)), then $E^{-}\left[ \log x_t \right]$ is maximized by choosing $k_s = k^*$ for $0 \leq s \leq t$. With this choice of investment control,

$E^{\#}(x_t) = x e^{\lambda t}$.

Thus, $\lambda$ is the optimal totally risk-averse growth rate in absence of consumption. When the consumption-to-wealth ratio is a constant $c > 0$, then this growth rate becomes $\lambda - c$. For the steady state, totally risk-averse model with consumption, Theorem 5.2 implies that the optimal choice $c^* = \lambda$ makes the totally risk-averse growth rate equal to 0.
Appendix

Consider the totally risk-averse expectation $E^{\#}(\Phi)$ defined by (5.4). Formula (5.5) follows readily from (5.2) and

$E\left( \Phi^{\gamma} \right) = \sum_{(\vec r_N, \vec b_N)} \left[ \prod_{\ell=0}^{N-1} p_{\gamma}\left( r_\ell, b_\ell \right)^{1/\gamma}\, \Phi\left( \vec r_N, \vec b_N \right) \right]^{\gamma}$.
Similarly, given positive functions $\Phi_\ell\left( \vec r_N, \vec b_N \right)$, $\ell = 1, \dots, m$,

(A.1)  $\lim_{\gamma \to -\infty} \left[ E\left( \sum_{\ell=1}^m \Phi_\ell^{\gamma} \right) \right]^{1/\gamma} = \min_{(\vec r_N, \vec b_N)} q_N\left( \vec r_N, \vec b_N \right) \min_\ell \Phi_\ell\left( \vec r_N, \vec b_N \right) = \min_\ell \min_{(\vec r_N, \vec b_N)} q_N\left( \vec r_N, \vec b_N \right) \Phi_\ell\left( \vec r_N, \vec b_N \right)$.
This gives (5.6). We also note that if $\Phi = \Phi\left( \vec r_j, \vec b_j \right)$ does not depend on $(r_\ell, b_\ell)$ for $\ell \geq j$, then

(A.2)  $E^{\#}(\Phi) = \min_{(\vec r_j, \vec b_j)} q_j\left( \vec r_j, \vec b_j \right) \Phi\left( \vec r_j, \vec b_j \right)$.

This is seen by choosing $(r_\ell, b_\ell)$ such that $q(r_\ell, b_\ell) = 1$ when $\ell \geq j$. From (A.2) it is also immediate that

$E^{\#}(a\Phi) = a E^{\#}(\Phi)$ for any constant $a > 0$,
$E^{\#}(\Phi) \leq E^{\#}(\Psi)$ if $\Phi \leq \Psi$.
Derivation of (5.20). Let us next derive formula (5.20) in the special case when $k_t = k$ and $c_t = c$ are constants. Let $\mu = r + (b-r)k - c$. Then (5.18) becomes

$dX_t = X_t \left( \mu\, dt + k \sigma_2(\gamma)\, dw_{2t} \right)$.

A direct calculation using the Ito differential rule gives

$E\left( X_t^{\gamma} \right) = x^{\gamma} \exp\left[ \gamma \left( \mu - \dfrac{(1 - \gamma) k^2 \sigma_2(\gamma)^2}{2} \right) t \right]$.

By (5.15),

(A.3)  $\log \lim_{\gamma \to -\infty} \left[ E\left( X_t^{\gamma} \right) \right]^{1/\gamma} = \log x + \left( \mu - \dfrac{k^2 \sigma_{\infty}^2}{2} \right) t$.

Let $y_t = \log x_t$. From (5.19),

$\dfrac{dy_t}{dt} = \mu + k \sigma_{\infty} v_t, \qquad y_0 = y = \log x$.
2"
Consider the following control problem: choose v
s
for 0 # s # t to minimize
((v
.
) = y
t
+
"
2
Z
t
0
v
2
s
ds.
The minimum is the same as among constant controls (v
s
= v). By elementary calculus,
E
"
(y
t
) = inf
v.

y
t
+
"
2
Z
t
0
v
2
s
ds

= inf
v

y +

+k)
#
v +
"
2
v
2

= y +

"
k
2
)
2
#
2
!
t.
Since this is the same as (A.3), we get (5.20). If g(x) = log(ax) as in (5.2"), the constant
loga is added to both sides.
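The constant-control computation can also be checked by Monte Carlo: under (5.18) the wealth is a geometric Brownian motion, so $\log X_t$ can be sampled exactly, and $(1/\gamma) \log E\left( X_t^{\gamma} \right)$ estimated with a log-sum-exp to avoid overflow, then compared with the right side of (A.3). All parameters are assumptions; $\sigma_2(\gamma) = \sigma_{\infty}/\sqrt{1-\gamma}$ as before (for this exact scaling the prelimit moment already equals the limit, so the runs mainly exhibit the estimator's sampling error).

    import numpy as np

    r, b, sigma_inf = 0.03, 0.08, 0.20
    k, c, t, x = 1.0, 0.04, 2.0, 1.0                 # assumed constant controls and horizon
    mu = r + (b - r) * k - c
    rng = np.random.default_rng(1)

    for gamma in (-5.0, -20.0, -80.0):
        sigma2 = sigma_inf / np.sqrt(1 - gamma)
        # Exact sample of log X_t under (5.18) with constant k, c.
        logX = (np.log(x) + (mu - 0.5 * k**2 * sigma2**2) * t
                + k * sigma2 * np.sqrt(t) * rng.normal(size=500_000))
        a = gamma * logX                             # log-sum-exp for log E(X_t^gamma)
        log_mom = a.max() + np.log(np.mean(np.exp(a - a.max())))
        print(f"gamma = {gamma:6.1f}: (1/gamma) log E(X_t^gamma) = {log_mom / gamma:.5f}")

    print("right side of (A.3):", np.log(x) + (mu - 0.5 * k**2 * sigma_inf**2) * t)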
Completion of proof of Theorem 5.2. Let us take $k = k^*$, $c = c^* = \lambda$ in (5.16). In the above calculation $\mu = \frac{1}{2} (k^*)^2 \sigma_{\infty}^2$ in this case. Therefore, $E^{-}\left( \log x_t^* \right) = \log x$, which by (5.26) is equivalent to $E^{\#}\left( x_t^* \right) = x$, as required.
Remark A.1. When $k = k^*$, $c = c^*$ and $y_t = \log x_t$,

$y_t = y + \int_0^t \left( k^* \sigma_{\infty} v_s + \dfrac{\left( k^* \sigma_{\infty} \right)^2}{2} \right) ds$.

In the terminology of [5], $-y_t$ is a max-plus martingale.
Remark A.2. The computation above also provides a direct, elementary proof of optimality of $k^*$, $c^*$ if only constant controls $k_t = k$, $c_t = c$ are admitted. For such controls

(A.4)  $E^{\#}\left( \tilde C_t \right) = c x \exp\left[ \left( r + (b-r)k - c - \dfrac{1}{2} k^2 \sigma_{\infty}^2 \right) t \right]$.

For fixed $c$, the right side is maximized by taking $k = k^*$. When $k = k^*$,

$\tilde C_t = c\, x_t, \qquad E^{\#}\left( \tilde C_t \right) = c x \exp\left[ (\lambda - c) t \right] = c\, E^{\#}(x_t)$,

$E^{\#}\left( \min_{0 \leq t \leq T} \tilde C_t \right) = \min_{0 \leq t \leq T} E^{\#}\left( \tilde C_t \right) = \begin{cases} c x & \text{if } 0 < c \leq \lambda \\ c x \exp\left[ (\lambda - c) T \right] & \text{if } c \geq \lambda. \end{cases}$

If $0 < c \leq \lambda$, the first term is no more than $\lambda E^{\#}(x_T)$. If $\lambda \leq c$, then the second term is no less than $\lambda E^{\#}(x_T)$. Thus

$J(T) = \begin{cases} c x & \text{if } 0 < c \leq \lambda \\ \lambda E^{\#}(x_T) & \text{if } c \geq \lambda. \end{cases}$

The maximum of $J(T)$ over all $c > 0$ occurs at $c^* = \lambda$, and equals $Z(x) = \lambda x = c^* x$.
References

1. D.P. Bertsekas, Dynamic Programming and Optimal Control, Vols. I and II, Athena Scientific Press, Belmont, MA, 1995.

2. P. Dupuis and R.S. Ellis, A Weak Convergence Approach to Large Deviations, Wiley, New York, 1997.

3. W.H. Fleming, Controlled Markov processes and mathematical finance, in Nonlinear Analysis, Differential Equations and Control (F.H. Clarke and R.J. Stern, eds), 407-446, Kluwer Academic Publishers, 1999.

4. W.H. Fleming, Stochastic control models of optimal investment and consumption, Aportaciones Matematicas, Sociedad Matematica Mexicana, No. 16, 2001, 159-204.

5. W.H. Fleming, Max-plus stochastic processes and control, preprint.

6. W.H. Fleming and R.W. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, 1975.

7. W.H. Fleming and H.M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York, 1992.

8. W.H. Fleming and J.L. Stein, A stochastic control approach to international finance and debt, CESifo Working Paper #204, Munich, 1999. Available at the CESifo site, http://www.CESifo.de.

9. W.H. Fleming and J.L. Stein, Stochastic inter-temporal optimization in discrete time, in Economic Theory, Dynamics and Markets (T. Negishi, R.V. Ramachandran and K. Mino, eds), 325-339, Kluwer Academic Publishers, 2001.

10. M.I. Freidlin and A.D. Wentzell, Random Perturbations of Dynamical Systems, Springer-Verlag, New York, 1984.

11. M. Obstfeld and K. Rogoff, Foundations of International Macroeconomics, MIT Press, 1996.

12. A.A. Puhalskii, Large Deviations and Idempotent Probability, Chapman and Hall/CRC Press, 2001.

13. J.L. Stein and G. Paladino, Country default risk: an empirical assessment, Monash University Conf. on Growth, Performance and Concentration in International Financial Markets, Prato, Italy, November 2000. Australian Economic Papers 40 (2001), 417-436.

14. N.L. Stokey and R.E. Lucas, Jr., Recursive Methods in Economic Dynamics, Harvard University Press, 1989.

15. S.J. Turnovsky, International Macroeconomic Dynamics, MIT Press, 1997.
