Economics Department of the University of Pennsylvania
Institute of Social and Economic Research, Osaka University

Sufficient Conditions in Optimal Control Theory
Author(s): Atle Seierstad and Knut Sydsaeter
Source: International Economic Review, Vol. 18, No. 2 (Jun., 1977), pp. 367-391
Published by: Wiley for the Economics Department of the University of Pennsylvania and Institute of Social and Economic Research, Osaka University
Stable URL: http://www.jstor.org/stable/2525753
Accessed: 04/01/2013 13:03
This content downloaded on Fri, 4 Jan 2013 13:03:29 PM
All use subject to JSTOR Terms and Conditions
INTERNATIONAL ECONOMIC REVIEW
Vol. 18, No. 2, June, 1977
SUFFICIENT CONDITIONS IN OPTIMAL CONTROL THEORY*
BY ATLE SEIERSTAD AND KNUT SYDSAETER
1. INTRODUCTION
During the last ten years or so a large number of papers in professional journals in economics dealing with dynamic optimization problems have been employing the modern version of the calculus of variations called optimal control theory. The central result in the theory is the well known Pontryagin maximum principle providing necessary conditions for optimality in very general dynamic optimization problems. These conditions are not, in general, sufficient for optimality. Of course, if an existence theorem can be applied guaranteeing that a solution exists, then by comparing all the candidates for optimality that the necessary conditions produce, we can in principle pick out an optimal solution to the problem.
In several cases, however, there is a more convenient method that can be used.
Suppose that a solution candidate suggests itself through an application of the
necessary conditions, or possibly also by a process of informed guessing. Then,
if we can prove that the solution satisfies sufficiency conditions of the type considered in this paper, then these conditions will ensure the optimality of the solution. In such a case we need not go through the process of finding all the candidates for optimality, comparing them and finally appealing to an existence theorem.
In order to get an idea of what types of conditions might be involved in such sufficiency theorems, it is natural to look at the corresponding problem in static optimization. Here it is well known that the first order calculus or Kuhn-Tucker conditions are sufficient for optimality, provided suitable concavity/convexity conditions are imposed on the functions involved. It is natural to expect that similar conditions might secure sufficiency also in dynamic optimization problems. Growth theorists were early aware of this and proofs of sufficiency in particular problems were constructed; see, e.g., Uzawa's 1964 paper [19]. In the mathematical literature few and only rather special results were available until Mangasarian, in a 1966 paper [10], proved a rather general sufficiency theorem in which he was dealing with a nonlinear system, state and control variable constraints and a fixed time interval. In the maximization case, when there are no state space constraints, his result was, essentially, that the Pontryagin necessary conditions plus concavity of the Hamiltonian function with respect to the state and control variables were sufficient for optimality.
The Mangasarian concavity condition is rather strong and in many economic
problems his theorem does not apply. Arrow [1] proposed an interesting partial
* Manuscript received September, 1975; revised May 21, 1976.
generalization of the Mangasarian result, where concavity with respect to the
state variables of the Hamiltonian maximized with respect to the control variables
was the crucial concavity requirement. Subsequently Arrow and Kurz [2]
indicated a proof of the result, well aware of its nonrigorousness, and applied it
to several economic problems. This work is now a standard reference in the
field. Arrow had indicated in [1] that his result was a minor variant of that of
Mangasarian. Kamien and Schwartz [8] reported that they were unable to derive
the Arrow result from Mangasarian's, but set out to prove (essentially) the Arrow
conjecture.
Against this background the content of the present paper can be described as follows: In Section 2 we state a version of the Pontryagin maximum principle in the absence of state space constraints, and present a sufficiency theorem where the Hamiltonian is assumed to be a concave function of the state and control variables. (Contrary to the usual practice, we have a general control variable restriction of the form u(t) ∈ U.) In Section 3 we state the Arrow sufficiency theorem in the absence of state space constraints (a proof is given in Note 1 to Theorem 5 in Section 5). A general formulation of a control problem with joint restrictions on the state and control variables is given in Section 4 and we prove a general sufficiency result (Theorem 4), suitable as a reference, where the adjoint variables are allowed to be discontinuous. In Section 5 we prove a sufficiency theorem (Theorem 5) in which we have weakened the Arrow condition on the Hamiltonian maximized w.r.t. the control variables. By using Theorem 5 we prove in Section 6 a sufficiency theorem where constraints of the type h(x, u, t) ≥ 0 are present (Theorem 6). The Mangasarian theorem is (essentially) a special case. In Section 7 a precise statement and a proof of the Arrow sufficiency theorem is given (Theorem 7). This does not seem to be available in the literature.
In Section 8 two new sufficiency theorems are presented. Theorem 8 covers a case with constraints of the form u ∈ U and g(x, t) ≥ 0, whereas Theorem 9 allows constraints of the form h(x, u, t) ≥ 0 as well as g(x, t) ≥ 0. In Section 9 we discuss the case when in the previous problems we introduce an infinite horizon. Finally, in Appendix 1 our way of handling constraints of the form g(x, t) ≥ 0 is compared to other ways of doing it; in Appendix 2 we prove a simple generalization of a sufficiency result in nonlinear programming needed in some of our proofs, and in Appendix 3 a few results on subgradients are given. Seierstad and Sydsaeter [16] covers essentially the same ground as this paper, but at a somewhat more leisurely pace.
2. A SUFFICIENCY THEOREM IN THE ABSENCE OF STATE SPACE CONSTRAINTS
The first problem we will consider is an optimal control problem that does not involve state space constraints, but which includes the type of boundary conditions usually encountered in economic applications. Briefly formulated, the problem is this:
Find a piecewise continuous control vector u(t) = (u¹(t),..., uʳ(t)) and an associated continuous and piecewise differentiable state vector x(t) = (x¹(t),..., xⁿ(t)), defined on the fixed time interval [t₀, t₁], that will

(1)    maximize ∫_{t₀}^{t₁} f⁰(x(t), u(t), t) dt

subject to the vector differential equation

(2)    ẋ(t) = f(x(t), u(t), t)

initial conditions

(3)    x(t₀) = x⁰    (x⁰ a fixed point in Rⁿ)

terminal conditions

(4)    xⁱ(t₁) = x₁ⁱ,    i = 1,..., l
       xⁱ(t₁) ≥ x₁ⁱ,    i = l + 1,..., m    (x₁ⁱ, i = 1,..., m, fixed numbers)
       xⁱ(t₁) free,    i = m + 1,..., n

and control variable restriction

(5)    u(t) ∈ U,    U a given set in Rʳ
NOTE. Throughout this paper the equivalent terms "piecewise continuous" and "continuous except at a finite number of points" are used in the sense that at the points of discontinuity the one-sided limits always exist.
Putting f = (f¹,..., fⁿ), we will assume in this section that fⁱ and ∂fⁱ/∂xʲ are continuous functions on Rⁿ⁺ʳ⁺¹ for all i = 0, 1,..., n; j = 1,..., n.
A pair (x(t), u(t)) is called admissible if it satisfies (2)-(5) above. An admissible pair that maximizes the integral in (1), and thus solves the given problem, is called an optimal pair. It will be convenient to state the well known Pontryagin maximum principle for our problem.
THEOREM 1. In order that (x̄(t), ū(t)) be an optimal pair in the above problem, it is necessary that there exist a constant p₀ and a continuous function p(t) = (p₁(t),..., pₙ(t)), where for all t ∈ [t₀, t₁], (p₀, p(t)) ≠ (0, 0), and such that:

For all t ∈ [t₀, t₁],

(6)    H(x̄(t), u, p(t), t) ≤ H(x̄(t), ū(t), p(t), t)    for all u ∈ U

where the Hamiltonian function H is defined by

(7)    H(x, u, p, t) = p₀f⁰(x, u, t) + Σ_{i=1}^{n} pᵢfⁱ(x, u, t)
                     = p₀f⁰(x, u, t) + p·f(x, u, t)

Except at the points of discontinuity of ū(t),

(8)    ṗᵢ(t) = −H'_{xⁱ}(x̄(t), ū(t), p(t), t),    i = 1,..., n

Furthermore,

(9)    p₀ = 1 or p₀ = 0

and finally the following transversality conditions are satisfied

(10)   pᵢ(t₁): no conditions,    i = 1,..., l
       pᵢ(t₁) ≥ 0 (= 0 if x̄ⁱ(t₁) > x₁ⁱ),    i = l + 1,..., m
       pᵢ(t₁) = 0,    i = m + 1,..., n
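The conditions of Theorem 1 can be traced through a small numerical experiment. The problem below is our own illustration, not one taken from the paper: maximize ∫₀¹ (x − u²) dt with ẋ = u, x(0) = 0, x(1) free and U = R. Condition (8) gives ṗ = −1, and since x(1) is free, (10) forces p(1) = 0, so p(t) = 1 − t; the maximum condition (6) then yields ū(t) = p(t)/2. A minimal sketch in Python, checking that this candidate dominates nearby admissible controls:

```python
# Toy instance of problem (1)-(5) (our illustration, not the paper's):
# maximize the integral of (x - u^2) on [0, 1] with x' = u, x(0) = 0,
# x(1) free, U = R.  The maximum principle gives p(t) = 1 - t and the
# candidate control u(t) = p(t)/2 = (1 - t)/2.

def payoff(u, n=2000):
    # Euler-integrate x' = u and accumulate the payoff integral
    h, x, J = 1.0 / n, 0.0, 0.0
    for i in range(n):
        t = i * h
        J += (x - u(t) ** 2) * h
        x += u(t) * h
    return J

def u_bar(t):
    return (1.0 - t) / 2.0   # candidate from conditions (6), (8), (10)

J_bar = payoff(u_bar)

# the candidate should dominate nearby admissible controls
for eps in (-0.2, 0.1, 0.3):
    assert payoff(lambda t: u_bar(t) + eps) <= J_bar + 1e-9
print(round(J_bar, 4))   # close to the analytic value 1/12
```

Since the Hamiltonian of this toy problem is concave in (x, u), Theorem 2 below in fact guarantees that the candidate is optimal.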
The following question now arises: Suppose we find an admissible pair (x̄(t), ū(t)), a constant p₀ and functions p₁(t),..., pₙ(t) such that all the conditions in Theorem 1 are satisfied. Will (x̄(t), ū(t)) be an optimal pair? Not always, but in some important cases we can prove that it is optimal.
A rather simple case is the one presented in the next theorem where we have a
concavity condition on the Hamiltonian.
THEOREM 2. Suppose (x̄(t), ū(t)) is an admissible pair satisfying all the conditions in Theorem 1 with p₀ = 1 and U convex. If the Hamiltonian H(x, u, p(t), t) is jointly concave in x and u, then (x̄(t), ū(t)) is a solution to problem (1)-(5).

(This theorem follows from Theorem 3 below, since it is an easy and well known fact that concavity of H(x, u, p(t), t) w.r.t. (x, u) implies the concavity of H*(x, p(t), t) (defined in (11) below) w.r.t. x.)
3. THE ARROW SUFFICIENCY THEOREM IN THE ABSENCE
OF STATE SPACE CONSTRAINTS
We will now, for the problem described by (1)-(5), consider the Arrow proposal for a generalization of Theorem 2. To this end, let us define

(11)    H*(x, p, t) = max_{u∈U} H(x, u, p, t)

where H(x, u, p, t) is as defined in (7) and we assume that the maximum is attained. One can then prove the following result.
THEOREM 3. Suppose (x̄(t), ū(t)) is an admissible pair satisfying all the conditions in Theorem 1 with p₀ = 1. If H*(x, p(t), t) defined in (11) is concave in x, then (x̄(t), ū(t)) is a solution to problem (1)-(5).

We will give a proof of this result in Note 1 to Theorem 5. A correct proof does not seem to be available in the literature. In Arrow and Kurz [2] the argument is not satisfactory. They assume, for example, that ū(t) is an interior point of U (see the justification for equation (12) p. 36 in [2], which is needed to establish (10) on p. 40, which in turn is used in the proof of Proposition 6, p. 45). Also, they assume, without any proof, that H* defined in (11) above is differentiable w.r.t. x.
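The gap between Theorem 2 and Theorem 3 can be seen on a small hypothetical Hamiltonian (ours, not taken from the paper): with U = [−1, 1], p > 0 and H = −x² + pu², H is convex in u, so the joint concavity required by Theorem 2 fails, yet H* = −x² + p is concave in x and the Arrow condition applies. A quick numerical check in Python:

```python
# Hypothetical Hamiltonian illustrating the Arrow condition (Theorem 3):
# H(x, u, p, t) = -x^2 + p*u^2 with U = [-1, 1] and p > 0.  H is NOT
# jointly concave in (x, u) (it is convex in u), so Theorem 2 does not
# apply; but the maximized Hamiltonian H*(x, p, t) = -x^2 + p is concave
# in x, which is what Theorem 3 requires.

p = 1.0
U = [i / 100.0 for i in range(-100, 101)]   # grid on U = [-1, 1]

def H(x, u):
    return -x * x + p * u * u

def H_star(x):
    return max(H(x, u) for u in U)          # definition (11) on a grid

# H fails joint concavity: the midpoint of (0, -1) and (0, 1) violates it
assert H(0.0, 0.0) < 0.5 * (H(0.0, -1.0) + H(0.0, 1.0))

# H* passes a midpoint-concavity check over a grid of x values
xs = [i / 10.0 for i in range(-20, 21)]
for a in xs:
    for b in xs:
        m = 0.5 * (a + b)
        assert H_star(m) >= 0.5 * (H_star(a) + H_star(b)) - 1e-12
print("H not jointly concave; H* concave in x")
```

The example is of the bang-bang type: the maximum in (11) is attained at the boundary points u = ±1 of U.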
4. MIXED RESTRICTIONS ON THE STATE AND CONTROL VARIABLES
A very general control problem can be described as follows: Consider problem (1)-(5) and assume in addition that for every t ∈ [t₀, t₁] there is given a set A(t) in Rⁿ × Rʳ. For the pair (x(t), u(t)) to be admissible we require, in addition to (2)-(5), that

(12)    (x(t), u(t)) ∈ A(t)    for all t ∈ [t₀, t₁]

We will later in particular consider the case when A(t) is defined by

(13)    A(t) = {(x, u): h(x, u, t) ≥ 0}    where h: Rⁿ⁺ʳ⁺¹ → Rˢ

Throughout this paper we will assume that

(14)    f⁰ and f are jointly continuous functions of (x, u, t) on the set
        {(x, u, t): (x, u) ∈ A(t), t ∈ [t₀, t₁]}

In the situation when A(t) is arbitrary we will now prove a sufficiency theorem in which we allow the adjoint function p(t) to be discontinuous. When state space constraints are present, for example in the form gᵢ(x, t) ≥ 0 (e.g., some of the components of h in (13) are independent of u), it will often be necessary to allow for this possibility. This is stressed by Funk and Gilbert [3]. (In the case when p(t) is continuous, see Theorem 3.1 in Leitmann and Stalford [9].)
THEOREM 4. Let (x̄(t), ū(t)) be an admissible pair in problem (1)-(5) and (12), and suppose that there exists a piecewise continuous function p(t) = (p₁(t),..., pₙ(t)) on [t₀, t₁] which has a piecewise continuous derivative ṗ(t), except at a finite number of points. At the points of discontinuity τ₁,..., τ_k of p(t), t₀ < τ₁ < ... < τ_k ≤ t₁, we require the following jump condition to be satisfied,

(15)    Σ_{i=1}^{k} (p(τᵢ⁻) − p(τᵢ⁺))·(x(τᵢ) − x̄(τᵢ)) ≥ 0

where p(τ_k⁺) = p(t₁) if τ_k = t₁. We assume moreover that p(t₁) satisfies (10) in Theorem 1. Let H be as defined in (7) with p₀ = 1, and assume finally that

(16)    H(x̄(t), ū(t), p(t), t) − H(x(t), u(t), p(t), t) ≥ ṗ(t)·(x(t) − x̄(t))

for all admissible pairs (x(t), u(t)) and for all t ∈ [t₀, t₁] except at a finite number of points. Then (x̄(t), ū(t)) is an optimal pair.
PROOF: The criterion for (x̄(t), ū(t)) to be optimal is that the difference

(17)    Δ = ∫_{t₀}^{t₁} f⁰(x̄(t), ū(t), t) dt − ∫_{t₀}^{t₁} f⁰(x(t), u(t), t) dt

is ≥ 0 for all admissible pairs (x(t), u(t)). If p₀ = 1 and we make use of definition (7) and the fact that (2) is satisfied for all admissible pairs, we easily obtain:

Δ = ∫_{t₀}^{t₁} [H(x̄(t), ū(t), p(t), t) − H(x(t), u(t), p(t), t)] dt + ∫_{t₀}^{t₁} p(t)·(ẋ(t) − ẋ̄(t)) dt

Making use of (16), it follows that

Δ ≥ ∫_{t₀}^{t₁} [ṗ(t)·(x(t) − x̄(t)) + p(t)·(ẋ(t) − ẋ̄(t))] dt = ∫_{t₀}^{t₁} d/dt [p(t)·(x(t) − x̄(t))] dt

If we let φ(t) = p(t)·(x(t) − x̄(t)), we further get

∫_{t₀}^{t₁} φ̇(t) dt = ∫_{t₀}^{τ₁} φ̇(t) dt + Σ_{i=1}^{k−1} ∫_{τᵢ}^{τᵢ₊₁} φ̇(t) dt + ∫_{τ_k}^{t₁} φ̇(t) dt
                  = φ(τ₁⁻) − φ(t₀) + Σ_{i=1}^{k−1} [φ(τᵢ₊₁⁻) − φ(τᵢ⁺)] + φ(t₁) − φ(τ_k⁺)

By some rearranging of this expression, we see that it is equal to

φ(t₁) − φ(t₀) + Σ_{i=1}^{k} [φ(τᵢ⁻) − φ(τᵢ⁺)] = φ(t₁) − φ(t₀) + Σ_{i=1}^{k} (p(τᵢ⁻) − p(τᵢ⁺))·(x(τᵢ) − x̄(τᵢ)) ≥ 0

Here we made use of (15) and the facts that φ(t₀) = 0 (by (3)) and, finally, φ(t₁) ≥ 0, since pᵢ(t₁)(xⁱ(t₁) − x̄ⁱ(t₁)) ≥ 0 for all i = 1,..., n by (4) and (10).
In Theorem 8 we will, in the case when A(t) = {(x, u): g(x, t) ≥ 0}, present a more amenable condition which implies (15). The relationship between our theorem and the results of Funk and Gilbert [3] will be discussed in Appendix 1.
5. AN ARROW TYPE SUFFICIENCY THEOREM IN THE CASE
OF MIXED CONSTRAINTS
In this section we present an Arrow type sufficiency theorem in the general case of state and control variable restrictions, suitable for later reference. To this end, if A(t) is the set referred to in (12), let us define

(18)    A_x(t) = {u ∈ U: (x, u) ∈ A(t)}

and

(19)    A₁(t) = {x ∈ Rⁿ: (x, u) ∈ A(t) for some u ∈ U}

With H(x, u, p, t) as the usual Hamiltonian, let us define for each t ∈ [t₀, t₁] and each x ∈ A₁(t),

(20)    H*(x, p, t) = max_{u∈A_x(t)} H(x, u, p, t)

assuming that the maximum is attained (which is the case, for example, if A_x(t) is compact).
THEOREM 5. Let the assumptions be as in Theorem 4, with the exception that instead of (16) we require that the following two conditions hold for all x ∈ A₁(t) and all t ∈ [t₀, t₁], except at a finite number of points,

(21)    H*(x, p(t), t) − H*(x̄(t), p(t), t) ≤ −ṗ(t)·(x − x̄(t))

(22)    H*(x̄(t), p(t), t) = H(x̄(t), ū(t), p(t), t)

Then (x̄(t), ū(t)) is an optimal pair.
PROOF: Let (x(t), u(t)) be an arbitrary admissible pair. Then by (22) and the definition of H*,

H(x̄(t), ū(t), p(t), t) − H(x(t), u(t), p(t), t) ≥ H*(x̄(t), p(t), t) − H*(x(t), p(t), t)

By (21) we see that the last difference is ≥ ṗ(t)·(x(t) − x̄(t)). Hence (16) in Theorem 4 is satisfied, so (x̄(t), ū(t)) is optimal.
NOTE 1. (21) tells us that for all t, except at a finite number of points, −ṗ(t) is a subgradient (see Appendix 2) at x̄(t) for H*(x, p(t), t) w.r.t. A₁(t). To obtain a proof of Theorem 3, all we need to show is that −ṗ(t) = H'_x(x̄(t), ū(t), p(t), t) is a subgradient to φ(x) = H*(x, p(t), t) w.r.t. A₁(t) = Rⁿ at x = x̄(t). (Here A(t) = Rⁿ × U.) Since ψ(x) = H(x, ū(t), p(t), t) ≤ H*(x, p(t), t) = φ(x) for all x ∈ Rⁿ, this follows from the results in Appendix 3.
NOTE 2. Condition (21) in the theorem can be formulated thus:

(23)    max_{x∈A₁(t)} [H*(x, p(t), t) + ṗ(t)·x] = H*(x̄(t), p(t), t) + ṗ(t)·x̄(t)

Combining this with requirement (22), we easily see that (21) and (22) are equivalent to, and can thus be replaced by,

(24)    max_{(x,u)∈A(t)∩(Rⁿ×U)} [H(x, u, p(t), t) + ṗ(t)·x] = H(x̄(t), ū(t), p(t), t) + ṗ(t)·x̄(t)

Note that (24) is closely related to inequality (16). In fact it tells us that (16) is satisfied for all (x, u) ∈ A(t) ∩ (Rⁿ × U), when x(t), u(t) is replaced by x, u.
6. PROBLEMS WITH CONSTRAINTS OF THE TYPE h(x, u, t) ≥ 0
We will now consider an important case frequently encountered in applications where the set A(t) is given by (13). Our problem is then, briefly formulated,

(25)    max ∫_{t₀}^{t₁} f⁰(x(t), u(t), t) dt    when ẋ(t) = f(x(t), u(t), t) and
        h(x(t), u(t), t) ≥ 0

with the same boundary conditions as before ((3) and (4) in Section 2). We will assume that (14) holds for A(t) = {(x, u): h(x, u, t) ≥ 0}. The function h is a continuous vector function from Rⁿ⁺ʳ⁺¹ into Rˢ. The restrictions can be written in the form

(26)    hᵢ(x(t), u(t), t) ≥ 0,    i = 1,..., s

Note that possible separate restrictions on the control variables must be incorporated in (26); we do not now have the requirement u(t) ∈ U. To deal with this problem one usually introduces a function qᵢ(t) for each of the constraints in (26), and forms the Lagrangian, or generalized Hamiltonian, L defined by

(27)    L(x, u, p, q, t) = p₀f⁰(x, u, t) + p·f(x, u, t) + q·h(x, u, t)

where p₀f⁰(x, u, t) + p·f(x, u, t) = H(x, u, p, t) is the usual Hamiltonian and q = (q₁,..., qₛ).
We are now in a position to state and prove a theorem which is essentially the Mangasarian sufficiency theorem referred to in the introduction.

THEOREM 6. Let (x̄(t), ū(t)) be an admissible pair in problem (25) with the boundary conditions (3) and (4). If there exist vector functions p(t) = (p₁(t),..., pₙ(t)) and q(t) = (q₁(t),..., qₛ(t)) ≥ 0, where p(t) is continuous and ṗ(t), q(t) are piecewise continuous, such that for all t ∈ [t₀, t₁] where ū(t) and q(t) are continuous, the following conditions are satisfied, with p₀ = 1:

(28)    ṗᵢ(t) = −L'_{xⁱ}(x̄(t), ū(t), p(t), q(t), t),    i = 1,..., n

(29)    L'_{uʲ}(x̄(t), ū(t), p(t), q(t), t) = 0,    j = 1,..., r

(30)    qᵢ(t)hᵢ(x̄(t), ū(t), t) = 0,    i = 1,..., s

(31)    pᵢ(t₁): no conditions,    i = 1,..., l
        pᵢ(t₁) ≥ 0 (= 0 if x̄ⁱ(t₁) > x₁ⁱ),    i = l + 1,..., m
        pᵢ(t₁) = 0,    i = m + 1,..., n

(32)    H(x, u, p(t), t) is concave in (x, u) ∈ Rⁿ × Rʳ and
        differentiable w.r.t. (x, u) at (x̄(t), ū(t)).

(33)    hᵢ(x, u, t) is quasiconcave in (x, u) ∈ Rⁿ × Rʳ and differentiable
        w.r.t. (x, u) at (x̄(t), ū(t)),    i = 1,..., s.

Then (x̄(t), ū(t)) is an optimal pair.
PROOF: We will prove the theorem by showing that the conditions in Theorem 5 are met. In fact, using (24) in Note 2 to Theorem 5, we see that it suffices to prove that, for each t for which ū(t) and q(t) are continuous, the problem

(34)    max_{(x,u)} [H(x, u, p(t), t) + ṗ(t)·x]    when h(x, u, t) ≥ 0

has x = x̄(t), u = ū(t) as its solution. To this end we will make use of Theorem 12 in Appendix 2. To apply this theorem, put φ(x, u) = H(x, u, p(t), t) + ṗ(t)·x. Then φ(x, u) is differentiable and concave in (x, u), so (see Note 1 in Appendix 2) it has Q = (φ'_x(x̄(t), ū(t)), φ'_u(x̄(t), ū(t))) as a subgradient at (x̄(t), ū(t)) w.r.t. Rⁿ × Rʳ and hence, in particular, w.r.t. A(t) = {(x, u): h(x, u, t) ≥ 0}. Since φ'_x = H'_x + ṗ and φ'_u = H'_u, we see that

Q = (H'_x(x̄(t), ū(t), p(t), t) + ṗ(t), H'_u(x̄(t), ū(t), p(t), t))

Moreover, since h(x, u, t) is differentiable and quasiconcave in (x, u), W = (h'_x(x̄(t), ū(t), t), h'_u(x̄(t), ū(t), t)) is a subquasigradient at (x̄(t), ū(t)) w.r.t. Rⁿ × Rʳ and hence w.r.t. A(t).

Now, it is easy to check that (28) and (29) above amount to Q + q(t)·W = 0. Since qᵢ(t) ≥ 0 and hᵢ(x̄(t), ū(t), t) ≥ 0 for all i, (30) is equivalent to the condition q(t)·h(x̄(t), ū(t), t) = 0. It follows that all the conditions in Theorem 12 in Appendix 2 are satisfied for the problem (34), so we conclude that (x̄(t), ū(t)) solves problem (34). Hence our theorem is proved.
NOTE 1. If f⁰ and f are concave in (x, u) and p ≥ 0, then H is concave in (x, u). If moreover h is concave rather than quasiconcave in (x, u), then Theorem 6 is the Mangasarian sufficiency theorem ([10], (141)) for our problem.
NOTE 2. It follows from the proof of the theorem that the requirements on H and h can be relaxed. In fact it suffices to assume that H(x, u, p(t), t) + ṗ(t)·x has Q (as defined above) as a subgradient w.r.t. A(t) at (x̄(t), ū(t)) and that h has W as a subquasigradient at (x̄(t), ū(t)) w.r.t. A(t).

NOTE 3. In the theorem above we assumed that the control variable restrictions were included in the constraints (26). A variant of the theorem is obtained if we add the requirement u(t) ∈ U, where U ⊂ Rʳ. In this situation, if we keep all the assumptions in the theorem, except that (29) is changed to

(35)    max_{u∈U} L'_u(x̄(t), ū(t), p(t), q(t), t)·(u − ū(t)) = 0

then it is easy to verify by means of Theorem 12 again that sufficiency will be ensured.

Requirements of the form u ∈ U usually can be expressed as inequalities and incorporated in the vector inequality h ≥ 0, but this procedure might destroy the quasiconcavity of h, so the generalization just mentioned will in some cases be called for.

Note that, by (32) and (33), it is easily seen that (35) implies

(36)    max_{u∈U, h(x̄(t),u,t)≥0} H(x̄(t), u, p(t), t) = H(x̄(t), ū(t), p(t), t)

a condition frequently required both elsewhere in this paper and in necessary conditions.
NOTE 4. The conditions in Theorem 6 are closely related to the necessary conditions for problem (25) in the case when the hᵢ functions all contain u. See for example Arrow and Kurz ([2], (41)) or Takayama ([18], (648, 649)). Incidentally, Takayama's Theorem 8.C.1 does not seem to be entirely correct as it is stated. If he wants to incorporate the constraint u(t) ∈ U, the conditions in the theorem must be changed. Also, the constraint qualification (iv) in his lemma on p. 648 only implies (iii) and not (ii) in his theorem. Finally we cannot see that the alternative constraint qualifications, (i), (ii) and (iii) in the lemma, will work in this case. As in Arrow and Kurz [2] the analogy with the constraint qualification in nonlinear programming is pressed too far. (For an authoritative statement of necessary conditions in such control problems, see, for instance, Hestenes [7].)
NOTE 5. We assume in the theorem (as does Mangasarian in [10]) that p(t) is continuous. It is easy to prove that the same type of discontinuities of p(t) as in Theorem 4 can be allowed also in the present case. Sometimes this generalization is needed. Note, however, that if suitable constraint qualifications are satisfied for all t, there is in normal cases (p₀ = 1) no need to consider p(t) functions that are discontinuous, as the necessary conditions imply the existence of a continuous p(t).

NOTE 6. It can be proved that if we find one solution (x̄(t), ū(t)) together with some p(t) satisfying (28)-(33), then for this p(t) all other optimal solutions (x(t), u(t)) of (25) satisfy (28)-(33). For the proof, see Seierstad [14]. In particular, if L(x, u, p(t), q(t), t) has continuous second derivatives w.r.t. x and u in a neighborhood of (x̄(t), ū(t)) for each t, and L''_{uu}(x̄(t), ū(t), p(t), q(t), t) is negative definite, then a solution which is found to satisfy (28)-(33) is the only optimal solution.
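The conditions (28)-(30) of Theorem 6 can be verified pointwise on a toy problem of the form (25). The problem and all helper names below are our own illustration, not from the paper: maximize ∫₀¹ (x − u²) dt, ẋ = u, x(0) = 0, x(1) free, under the single mixed constraint h(x, u, t) = 1/4 − u ≥ 0. With p(t) = 1 − t, the constraint binds for t < 1/2 (where ū = 1/4 and q = 1/2 − t) and is slack afterwards (ū = (1 − t)/2, q = 0):

```python
# Toy instance of problem (25) (our illustration): maximize the integral
# of (x - u^2) on [0, 1], x' = u, x(0) = 0, x(1) free, with the mixed
# constraint h(x, u, t) = 1/4 - u >= 0.  With p0 = 1 the Lagrangian (27)
# is L = x - u^2 + p*u + q*(1/4 - u); the code checks (28)-(30) pointwise.

def p(t):  return 1.0 - t                       # adjoint, p(1) = 0 as in (31)
def u(t):  return 0.25 if t < 0.5 else (1.0 - t) / 2.0
def q(t):  return 0.5 - t if t < 0.5 else 0.0   # multiplier, q >= 0
def h(t):  return 0.25 - u(t)                   # constraint value on the path

for i in range(1001):
    t = i / 1000.0
    # (28): p' = -L'_x = -1; check the adjoint by a finite difference
    dp = (p(t + 1e-6) - p(t)) / 1e-6
    assert abs(dp + 1.0) < 1e-3
    # (29): L'_u = -2u + p - q = 0 along the candidate path
    assert abs(-2.0 * u(t) + p(t) - q(t)) < 1e-9
    # (30): complementary slackness q * h = 0, with q >= 0 and h >= 0
    assert abs(q(t) * h(t)) < 1e-9
    assert q(t) >= 0.0 and h(t) >= -1e-12
print("conditions (28)-(30) hold along the candidate path")
```

Since H = x − u² + pu is concave in (x, u) and h is linear (hence quasiconcave), (32) and (33) also hold, so Theorem 6 applies to this sketch.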
7. THE ARROW SUFFICIENCY THEOREM IN THE PRESENCE OF
CONSTRAINTS OF THE TYPE h(x, u, t) ≥ 0
In the sufficiency result of Theorem 6 the crucial concavity condition was that the Hamiltonian be jointly concave in the state and control variables. As to the constraint functions hᵢ (see (26)), we assumed them to be differentiable and quasiconcave in the state and control variables. We had no other conditions on the hᵢ functions. In particular, some of them (or all) might very well fail to contain the control variables and thereby be of the form hᵢ(x, u, t) = gᵢ(x, t). In the Arrow proposal for a partial generalization of Theorem 6, the concavity requirement on the Hamiltonian is relaxed. Moreover, quasiconcavity of hᵢ is not assumed, but a constraint qualification on the hᵢ functions must be satisfied which in particular implies that some control variables must be present in all (the active) constraints. Let us be more specific.

We consider problem (25) with the usual boundary conditions. Let (x̄(t), ū(t)) be an admissible pair in the problem, and let I(t) = {i: hᵢ(x̄(t), ū(t), t) = 0}. We say that the constraint qualification is satisfied for (x̄(t), ū(t)) at t, provided

(37)    rank [∂hᵢ(x̄(t), ū(t), t)/∂u]_{i∈I(t)} = the number of indices in I(t)

Hence, if there are s* indices in I(t), so that s* of the constraints are active, condition (37) requires the s* × r matrix in (37) to have rank equal to s*. In particular, s* must be ≤ r.
In accordance with (18), (19) and (20) in Section 5, let us define

(38)    A_x(t) = {u: h(x, u, t) ≥ 0},    A₁(t) = {x: h(x, u, t) ≥ 0 for some u ∈ Rʳ}

and

(39)    H*(x, p, t) = max_{u∈A_x(t)} H(x, u, p, t)

As before, L(x, u, p, q, t) = H(x, u, p, t) + q·h(x, u, t) where q = (q₁,..., qₛ). We are now in a position to state and prove a precise theorem which is the Arrow sufficiency theorem for our problem.
THEOREM 7. Let (x̄(t), ū(t)) be an admissible pair in problem (25) with the boundary conditions (3) and (4). Assume that there exist vector functions p(t) = (p₁(t),..., pₙ(t)) and q(t) = (q₁(t),..., qₛ(t)) ≥ 0, where p(t) is continuous and ṗ(t) and q(t) are piecewise continuous, such that for all t ∈ [t₀, t₁] where ū(t) and q(t) are continuous, the constraint qualification (37) is satisfied as well as the following conditions, with p₀ = 1:

(40)    H*(x̄(t), p(t), t) = H(x̄(t), ū(t), p(t), t)

(41)    ṗᵢ(t) = −L'_{xⁱ}(x̄(t), ū(t), p(t), q(t), t),    i = 1,..., n

(42)    L'_{uʲ}(x̄(t), ū(t), p(t), q(t), t) = 0,    j = 1,..., r

(43)    qᵢ(t)hᵢ(x̄(t), ū(t), t) = 0,    i = 1,..., s

(44)    pᵢ(t₁): no conditions,    i = 1,..., l
        pᵢ(t₁) ≥ 0 (= 0 if x̄ⁱ(t₁) > x₁ⁱ),    i = l + 1,..., m
        pᵢ(t₁) = 0,    i = m + 1,..., n

(45)    H*(x, p(t), t) as defined in (39) is a concave function of x on
        A₁(t), if A₁(t) is convex. If A₁(t) is not convex, we assume
        that H* has an extension to co(A₁(t)) (the convex hull of A₁(t))
        which is concave in x. H(x, u, p(t), t) is differentiable w.r.t.
        (x, u) at (x̄(t), ū(t)).

(46)    h(x, u, t) is continuously differentiable w.r.t. (x, u) in a
        neighborhood of (x̄(t), ū(t)).

Then (x̄(t), ū(t)) is an optimal pair.
PROOF: By using Theorem 5 we see that the theorem is proved if we can prove inequality (21) in Theorem 5 for all x ∈ A₁(t).

Let us first show that x̄(t) is an interior point of A₁(t). Assume for simplicity that the constraint functions are so arranged that

(47)    hᵢ(x̄(t), ū(t), t) = 0,    i = 1,..., s*

(48)    hᵢ(x̄(t), ū(t), t) > 0,    i = s* + 1,..., s

where s* is the rank of the matrix in (37). Now, by the continuity of the hᵢ functions, in a neighborhood of (x̄(t), ū(t)),

(49)    hᵢ(x, u, t) > 0,    i = s* + 1,..., s

Moreover, by the implicit function theorem, making use of assumption (37), the system

(50)    hᵢ(x, u, t) = 0,    i = 1,..., s*

defines s* of the components of u as differentiable functions of x and the rest of the uʲ's in a neighborhood of (x̄(t), ū(t)). Inserting the resulting functions in (49) we see, all in all, that if x is sufficiently close to x̄(t), there is a u ∈ Rʳ such that hᵢ(x, u, t) ≥ 0 for i = 1,..., s. Hence x ∈ A₁(t), so x̄(t) is an interior point of A₁(t).

The extension of H*(x, p(t), t), as a function of x, to co(A₁(t)) is concave in x, and x̄(t) is an interior point of A₁(t) and thus an interior point of co(A₁(t)). According to Theorem 13 in Appendix 3, there exists a function a(t) such that

(51)    H*(x, p(t), t) − H*(x̄(t), p(t), t) ≤ a(t)·(x − x̄(t))

for all x ∈ co(A₁(t)). Now, for any (x, u) such that h(x, u, t) ≥ 0,

(52)    H(x, u, p(t), t) − H(x̄(t), ū(t), p(t), t) ≤ H*(x, p(t), t) − H*(x̄(t), p(t), t)

From (51) and (52) it follows that for all (x, u) where h(x, u, t) ≥ 0,

(53)    H(x, u, p(t), t) − H(x̄(t), ū(t), p(t), t) ≤ a(t)·(x − x̄(t))

We see that (53) means that the problem

(54)    max_{(x,u)} [H(x, u, p(t), t) − a(t)·x]    when h(x, u, t) ≥ 0

has the solution x = x̄(t), u = ū(t). This is a nonlinear programming problem, and since the constraint qualification (37) is satisfied, it follows (see for example Takayama [18], (chapter 1, D)) that there must exist a vector q̄(t) = (q̄₁(t),..., q̄ₛ(t)) ≥ 0 such that

(55)    H̄'_{xⁱ} − aᵢ(t) + q̄(t)·h̄'_{xⁱ} = 0,    i = 1,..., n

(56)    H̄'_{uʲ} + q̄(t)·h̄'_{uʲ} = 0,    j = 1,..., r

(57)    q̄ᵢ(t)h̄ᵢ = 0,    i = 1,..., s

(A bar on a symbol for a function denotes that it is to be evaluated at a point where x = x̄(t), u = ū(t).)

Using the same convention as in (47) and (48) above, we see from (57) that q̄ᵢ(t) = 0 for i = s* + 1,..., s. By condition (37), equation (56) uniquely determines q̄₁(t),..., q̄_{s*}(t), and since L = H + q·h, it follows by comparing (42), (43) with (56), (57) that q̄ᵢ(t) = qᵢ(t) for i = 1,..., s. Comparing (41) and (55) it follows that a(t) = −ṗ(t). Inserted in (51) this gives us (21), and thereby the theorem.
NOTE. A correct proof of this result does not seem to be available in the literature. For a critical discussion of the arguments given by Arrow and Kurz [2] and Kamien and Schwartz [8], see Seierstad and Sydsaeter [16] (20, 21).

The constraint qualification (37) is a standard assumption in proofs of the necessary conditions. One might wonder if it is really "necessary for sufficiency." The following example tells us that condition (37) cannot be dropped in Theorem 7, and that it cannot be replaced by other weaker standard constraint qualifications occurring in necessary conditions (e.g., (∂h̄ᵢ/∂u)·c > 0 for all active i, for some vector c). This fact is not immediately apparent in the above mentioned proofs.
EXAMPLE. Consider the problem
(a) max ((u  1)2 + x)dt
0
(b) x = Ii, X(0) = 0, x(1) free
(c) 1 (x, u, t) u  2X _ 0
(d) 1i2(x, u,t) =2u > 0
(e) 113(X, ui, t II > 0
Let us put x~=0, It=0, p=O,qI =2, q2=0, q3=0, where p is the adjoint variable
associated with (b) and qj, q2, q3 are the multipliers associated with (c), (d) and
(e). Here it is easy to see that H*(x, 0, t)x + t which is concave in x on the con
vex set A1(t)=( oo, 4]. Moreover, it is easy to check that (40)*(44) are all
satisfied. The integral in (a) is in this case equal to 1. But we have not found the
optimal solution. In fact, if we put x=2t, u =2, then (b)*(e) are all satisfied and
the integral in (a) is equal to 2. Clearly the rank condition (37) is not satisfied
at x.(t)=0, W(t)=0,so Theorem 7 does not apply.
Incidentally, the pair (2t, 2) is the optimal solution. Put x̄ = 2t, ū = 2, p(t) = 1 − t, q₁ = 0, q₂ = 3 − t, q₃ = 0. Then H*(x, p(t), t) = x + 1 + 2(1 − t), which is concave in x on A₁(t) = (−∞, 4]. Moreover, it is easy to check that (40)-(44) are all satisfied. Also, the rank condition (37) is satisfied in this case, since only the constraint (d) is active. So by Theorem 7, x̄ = 2t, ū = 2 is the solution of the problem.
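The two candidate pairs in this example can be checked numerically. The sketch below is ours, not the authors'; it takes the imperfectly scanned constraint (c) in the reconstructed form h¹(x, u, t) = u − x/2 ≥ 0 and approximates the integral in (a) by the trapezoidal rule.

```python
# Numerical check of the example: objective values in (a) for the two
# candidate pairs (x, u) = (0, 0) and (x, u) = (2t, 2) on [0, 1].
# The form of constraint (c) is our reading of the scanned text.

def trapezoid(y, x):
    """Trapezoidal approximation of the integral of samples y over grid x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

N = 10_000
t = [i / N for i in range(N + 1)]

def objective(x, u):
    # integrand of (a): (u - 1)^2 + x
    return trapezoid([(u[i] - 1.0) ** 2 + x[i] for i in range(len(t))], t)

def admissible(x, u, tol=1e-12):
    # constraints (c), (d), (e) along the whole path
    return all(u[i] - x[i] / 2.0 >= -tol       # (c), reconstructed form
               and 2.0 - u[i] >= -tol          # (d)
               and u[i] >= -tol                # (e)
               for i in range(len(t)))

x1, u1 = [0.0] * len(t), [0.0] * len(t)            # first candidate
x2, u2 = [2.0 * ti for ti in t], [2.0] * len(t)    # second candidate

print(admissible(x1, u1), round(objective(x1, u1), 6))  # True 1.0
print(admissible(x2, u2), round(objective(x2, u2), 6))  # True 2.0
```

Both pairs are admissible, and the second attains the larger value 2, in line with the text's claim that (2t, 2) is optimal while the first candidate merely satisfies the multiplier conditions.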
8. SOME ARROW TYPE SUFFICIENCY THEOREMS FOR DIFFERENT TYPES OF CONSTRAINTS
In this section we will prove some Arrow type sufficiency theorems for problems of interest in applications when Theorem 7 does not apply. We will first consider the case when the functions hⁱ are all independent of u, but we will allow the additional constraint u(t) ∈ U, where U is a given set in Rʳ. Our problem is therefore of the following form:

(58) max ∫_{t₀}^{t₁} f⁰(x, u, t)dt when ẋ = f(x, u, t), u ∈ U and g(x, t) ≥ 0

with the boundary conditions (3) and (4). Here g is a continuous vector function from Rⁿ⁺¹ into Rˢ. The Lagrangian function L is in this case

(59) L(x, u, p, q, t) = p₀f⁰(x, u, t) + p·f(x, u, t) + q·g(x, t)

where q = (q₁, …, q_s).
Our sufficiency result will be based on Theorem 5. According to (23) in
Note 2 to Theorem 5, requirement (21) in Theorem 5 can be formulated in this way: The problem

(60) max [H*(x, p(t), t) + ṗ(t)·x] when g(x, t) ≥ 0

has the solution x = x̄(t). Sufficient conditions for this to occur can be found using Theorem 12 in Appendix 2. If we assume in particular that g(x, t) is quasiconcave, we arrive at the following result:
THEOREM 8. Let (x̄(t), ū(t)) be an admissible pair in problem (58), let H(x, u, p, t) be the Hamiltonian with p₀ = 1, H*(x, p, t) as defined in (11) and L(x, u, p, q, t) as defined in (59). Let us assume that there exist a vector function p(t) = (p₁(t), …, pₙ(t)) with p(t) and ṗ(t) piecewise continuous, a vector function q(t) = (q₁(t), …, q_s(t)) ≥ 0 and, finally, numbers βᵢʲ, i = 1, …, k, j = 1, …, s, such that for all t ∈ [t₀, t₁], except at a finite number of points, the following conditions are satisfied:

(61) H*(x̄(t), p(t), t) = H(x̄(t), ū(t), p(t), t)

(62) ṗᵢ(t) = −L′ₓᵢ(x̄(t), ū(t), p(t), q(t), t), i = 1, …, n

(63) p(τᵢ⁻) − p(τᵢ⁺) = βᵢ·g′ₓ(x̄(τᵢ), τᵢ), i = 1, …, k, where t₀ < τ₁ < ⋯ < τ_k ≤ t₁ are the discontinuity points of p(t) (if τ_k = t₁, let p(τ_k⁺) = p(t₁)), and where

(64) βᵢʲ ≥ 0 (= 0 if gⱼ(x̄(τᵢ), τᵢ) > 0) for i = 1, …, k; j = 1, …, s

(65) qᵢ(t)gᵢ(x̄(t), t) = 0, i = 1, …, s

(66) pᵢ(t₁): no conditions, i = 1, …, l; pᵢ(t₁) ≥ 0 (= 0 if x̄ᵢ(t₁) > xᵢ¹), i = l + 1, …, m; pᵢ(t₁) = 0, i = m + 1, …, n

(67) H*(x, p(t), t) is concave in x ∈ Rⁿ

(68) g(x, t) is quasiconcave in x and differentiable in x at x̄(t).

Then (x̄(t), ū(t)) is an optimal pair.
PROOF: As in Note 1 to Theorem 5 we see that H′ₓ = H′ₓ(x̄(t), ū(t), p(t), t) is a subgradient to H*(x, p(t), t) w.r.t. Rⁿ. By (62), H′ₓ = −ṗ(t) − q·g′ₓ(x̄(t), t), so −q·g′ₓ(x̄(t), t) is a subgradient for H*(x, p(t), t) + ṗ(t)·x at x̄(t) w.r.t. Rⁿ. By (68), g′ₓ(x̄(t), t) is a subquasigradient for g at x̄(t) w.r.t. G(t) = {x: g(x, t) ≥ 0} (see Appendix 2). By Theorem 12 in Appendix 2, problem (60) has x̄(t) as its solution. This means that condition (21) in Theorem 5 is satisfied. Condition (61) takes care of (22) in Theorem 5. It remains to establish the jump condition (15). Note that if Q = −(p(τᵢ⁻) − p(τᵢ⁺)), then (15) is implied by the property that the function f(z) = Q·z, when maximized over {z: g(z, τᵢ) ≥ 0}, has a maximum at z = x̄(τᵢ). That the latter property holds can now be shown by an application
of Theorem 12 in Appendix 2, with q = (βᵢ¹, …, βᵢˢ), g(z) = g(z, τᵢ), since (63) implies 3 in Theorem 12.

All the conditions in Theorem 5 are now satisfied, so we conclude that (x̄(t), ū(t)) solves our problem.
NOTE 1. Theorem 8 still holds if conditions (62), (67) and (68) are replaced by the condition

(69) H*(x, p(t), t) has −ṗ(t) − q(t)·w(t) as a subgradient at x̄(t) w.r.t. G(t) = {x: g(x, t) ≥ 0}, where w(t) is a subquasigradient of g(x, t) at x = x̄(t) w.r.t. G(t), for all t ≠ τᵢ, i = 1, …, k.
NOTE 2. Let us consider the situation when H*(x, p(t), t) is only concave for those values of x where g(x, t) ≥ 0. Then the reasoning in Note 1 does not apply, but we can still prove that H′ₓ(x̄(t), ū(t), p(t), t) is a subgradient for H*(x, p(t), t) at x̄(t) w.r.t. G(t) = {x: g(x, t) ≥ 0} provided we add the following conditions to the ones in Theorem 8:

(70) U is compact

(71) H(x̄(t), u, p(t), t) has a unique maximum at ū(t) for any t ∈ [t₀, t₁] except for a finite number of points.

A sketch of a proof of this generalization is given in Seierstad and Sydsæter [16, (24, 25)]. Note that if H(x̄(t), u, p(t), t) is strictly concave in u, and U is convex as well as compact, H does have a unique maximum in U.
NOTE 3. The relationship between Theorem 8 and the results of Funk and Gilbert [3] is discussed in Appendix 1.

Theorem 8 generalized Theorem 3 to the case when constraints of the form g(x, t) ≥ 0 are present. Now we want to consider the case when the same constraints are added to the ones in the problem covered by Theorem 7. Hence we consider the problem:

(72) max ∫_{t₀}^{t₁} f⁰(x, u, t)dt when ẋ = f(x, u, t), h(x, u, t) ≥ 0 and g(x, t) ≥ 0

with the usual boundary conditions (3) and (4). Here h is a continuous function from Rⁿ⁺ʳ⁺¹ into Rˢ and g is a continuous function from Rⁿ⁺¹ into R^ŝ. Define

(73) L(x, u, p, q, q̂, t) = p₀f⁰(x, u, t) + p·f(x, u, t) + q·h(x, u, t) + q̂·g(x, t)

where q = (q₁, …, q_s), q̂ = (q̂₁, …, q̂_ŝ).

In this situation we can prove the following theorem which generalizes Theorem 7.
THEOREM 9. Let (x̄(t), ū(t)) be an admissible pair in problem (72) with the boundary conditions (3) and (4). Let H(x, u, p, t) be the Hamiltonian with p₀ = 1, let H*(x, p, t) be as defined in (39) and L(x, u, p, q, q̂, t) as defined in
(73). Let us assume that there exist a vector function p(t) = (p₁(t), …, pₙ(t)) with p(t) and ṗ(t) piecewise continuous, vector functions q(t) = (q₁(t), …, q_s(t)) ≥ 0, q̂(t) = (q̂₁(t), …, q̂_ŝ(t)) ≥ 0 and, finally, numbers βᵢʲ, i = 1, …, k, j = 1, …, ŝ, such that for all t at which ū(t) is continuous and ṗ(t) exists, the constraint qualification (37) is satisfied as well as the following conditions:

(74) H*(x̄(t), p(t), t) = H(x̄(t), ū(t), p(t), t)

(75) ṗᵢ(t) = −L′ₓᵢ(x̄(t), ū(t), p(t), q(t), q̂(t), t), i = 1, …, n

(76) L′ᵤⱼ(x̄(t), ū(t), p(t), q(t), q̂(t), t) = 0, j = 1, …, r

(77) p(τᵢ⁻) − p(τᵢ⁺) = βᵢ·g′ₓ(x̄(τᵢ), τᵢ), i = 1, …, k, where t₀ < τ₁ < ⋯ < τ_k ≤ t₁ are the discontinuity points of p(t) (if τ_k = t₁, let p(τ_k⁺) = p(t₁)), and where

(78) βᵢʲ ≥ 0 (= 0 if gⱼ(x̄(τᵢ), τᵢ) > 0) for i = 1, …, k, j = 1, …, ŝ

(79) qᵢ(t)hᵢ(x̄(t), ū(t), t) = 0, i = 1, …, s

(80) q̂ᵢ(t)gᵢ(x̄(t), t) = 0, i = 1, …, ŝ

(81) pᵢ(t₁): no conditions, i = 1, …, l; pᵢ(t₁) ≥ 0 (= 0 if x̄ᵢ(t₁) > xᵢ¹), i = l + 1, …, m; pᵢ(t₁) = 0, i = m + 1, …, n

(82) H*(x, p(t), t) is concave in x on A₁(t) = {x: h(x, u, t) ≥ 0 for some u ∈ Rʳ} if A₁(t) is convex, or (if A₁(t) is not convex) H*(x, p(t), t) has an extension to co(A₁(t)) which is concave in x

(83) h(x, u, t) is continuously differentiable w.r.t. (x, u) in a neighborhood of (x̄(t), ū(t))

(84) g(x, t) is quasiconcave in x and differentiable in x at x̄(t).

Then (x̄(t), ū(t)) is an optimal pair.
PROOF: By using Theorem 5 we see that it suffices to establish inequality (21) in Theorem 5 and the jump condition (15) in Theorem 4. Let H*(x, p(t), t) also denote its extension to co(A₁(t)). Then H*(x, p(t), t) + q̂(t)·g′ₓ(x̄(t), t)·x is a concave function of x on co(A₁(t)). As in the proof of Theorem 7, we prove that there exists a vector a(t) such that for all x ∈ A₁(t),

(85) H*(x, p(t), t) − H*(x̄(t), p(t), t) + q̂(t)·g′ₓ(x̄(t), t)·(x − x̄(t)) ≤ a(t)·(x − x̄(t))

so that

(86) H(x, u, p(t), t) ≤ H(x̄(t), ū(t), p(t), t) − q̂(t)·g′ₓ(x̄(t), t)·(x − x̄(t)) + a(t)·(x − x̄(t))
for any (x, u) where h(x, u, t) ≥ 0. It follows that H(x, u, p(t), t) + q̂(t)·g′ₓ(x̄(t), t)·x − a(t)·x has its maximum at x = x̄(t), u = ū(t). Writing down the necessary conditions for this nonlinear programming problem gives us a(t) = −ṗ(t) as in the proof of Theorem 7. Inserting this in (85) we see that (21) follows provided we can prove that q̂(t)·g′ₓ(x̄(t), t)·(x − x̄(t)) ≥ 0 when g(x, t) ≥ 0. This follows from arguments identical to those used to obtain q·W·(z − z̄) ≥ 0 in the proof of Theorem 12 in Appendix 2.
NOTE. So far f⁰, f, g and h have been assumed to be continuous. This can easily be somewhat relaxed. For example, all the results above (except the uniqueness property in Note 6 to Theorem 6) still hold if we assume f⁰, f, g and h to be continuous at all points (x, u, t) ∈ Rⁿ × Rʳ × ([t₀, t₁] − {θ₁, …, θ_k*}), where θ₁, …, θ_k* are fixed points in [t₀, t₁], and, furthermore, the derivatives H′ₓ, g′ₓ and h′ₓ can, at all places where they are used, be allowed not to exist for t ∈ {θ₁, …, θ_k*}. Finally, to obtain integrability, we assume that f⁰(x(t), u(t), t) and f(x(t), u(t), t) are bounded functions for all admissible pairs (x(t), u(t)). This observation is useful, e.g., in problems of differential games.
9. PROBLEMS WITH INFINITE HORIZONS
So far we have assumed that the time horizons in our problems were finite.
Actually, in most economic models in the literature where optimal control theory
has been used, one has introduced the fiction that the planning horizon is infinite.
We will therefore consider the additional problems encountered in such situations
with regard to the sufficiency theorems.
Let us start by considering the problem

(87) max ∫_{t₀}^∞ f⁰(x, u, t)dt when ẋ = f(x, u, t), h(x, u, t) ≥ 0, g(x, t) ≥ 0, u ∈ U

where x = x(t) is required to satisfy (3), h and g are vector functions as before, and U is a given set in Rʳ. As one possible terminal condition we might assume that lim_{t→∞} xᵢ(t) exists and is finite for all i = 1, …, n and that

(88) lim_{t→∞} xᵢ(t) ≥ xᵢ¹, i = 1, …, n (xᵢ¹ fixed numbers, i = 1, …, n).
In many situations we would like to consider as admissible paths x(t) for which the limits in (88) do not exist. Here are three alternative terminal conditions:

(89) There exists a t′ such that t ≥ t′ implies xᵢ(t) ≥ xᵢ¹, i = 1, …, n

(90) lim inf_{t→∞} xᵢ(t) ≥ xᵢ¹, i = 1, …, n

(91) For every ε > 0 and every t′ there exists some t ≥ t′ such that xᵢ(t) > xᵢ¹ − ε for all i
NOTE. Recall that

lim inf_{t→∞} φ(t) = lim_{τ→∞} inf {φ(t): t ∈ [τ, ∞)},  lim sup_{t→∞} φ(t) = lim_{τ→∞} sup {φ(t): t ∈ [τ, ∞)}

so that lim sup_{t→∞} φ(t) ≥ lim inf_{t→∞} φ(t) always. From these definitions it follows that

lim inf_{t→∞} φ(t) ≥ 0 means: For every ε > 0 there exists a t′ such that if t ≥ t′, then φ(t) > −ε.

lim sup_{t→∞} φ(t) ≥ 0 means: For every ε > 0 and every t′ there exists some t ≥ t′ where φ(t) > −ε.
If the integral in (87) does not converge for some admissible pairs, we must look for other optimality criteria.

Let (x(t), u(t)) be any admissible pair and let (x̄(t), ū(t)) be the pair we want to test for optimality. Define

(92) Δ(t) = ∫_{t₀}^t [f⁰(x̄(τ), ū(τ), τ) − f⁰(x(τ), u(τ), τ)]dτ

A) If there exists a number t′ such that Δ(t) ≥ 0 for all t ≥ t′, then we say that x̄(t) overtakes x(t).

B) If lim inf_{t→∞} Δ(t) ≥ 0, we say that x̄(t) catches up with x(t).

C) If lim sup_{t→∞} Δ(t) ≥ 0, we say that x̄(t) sporadically catches up with x(t).

Now if x̄(t) overtakes, catches up with or sporadically catches up with any other admissible x(t), we say that x̄(t) is optimal according to the overtaking criterion, the catching up criterion, or the sporadically catching up criterion, respectively. It follows immediately that if x̄(t) overtakes x(t), then x̄(t) catches up with x(t), which in turn implies (since always lim sup ≥ lim inf) that x̄(t) sporadically catches up with x(t), so the principles are presented in decreasing order of strictness. Note that if the integral in (87) converges for all admissible pairs, then lim_{t→∞} Δ(t) = lim inf_{t→∞} Δ(t) = lim sup_{t→∞} Δ(t), so the criterion (87) can be regarded as a special case of the catching up or the sporadically catching up criterion.
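The three criteria can be separated by toy difference functions Δ(t); the functions below are our own illustrations, not from the text, with lim inf / lim sup approximated by the inf / sup over the tail of a finite grid.

```python
import math

def tail_inf_sup(delta, t_max=1000.0, tail_start=900.0, step=0.05):
    """Crude numerical proxies for lim inf / lim sup of delta(t) as t -> oo:
    inf and sup over the tail [tail_start, t_max) of a finite grid."""
    ts = [tail_start + step * k for k in range(int((t_max - tail_start) / step))]
    vals = [delta(t) for t in ts]
    return min(vals), max(vals)

cases = {
    "overtakes (hence all three)":   lambda t: 1.0,                    # eventually >= 0
    "catches up, does not overtake": lambda t: math.sin(t) / (1.0 + t),  # lim inf = 0
    "only sporadically catches up":  lambda t: math.sin(t),            # lim sup = 1 only
}

for name, delta in cases.items():
    lo, hi = tail_inf_sup(delta)
    print(f"{name}: tail inf {lo:+.4f}, tail sup {hi:+.4f}")
```

The first Δ is eventually nonnegative (overtaking); the second oscillates with amplitude tending to 0, so its lim inf is 0 (catching up without overtaking); the third keeps amplitude 1 forever, so only its lim sup is ≥ 0 (sporadic catching up only).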
Now, if any of the three optimality criteria introduced above is used, we can state the following result:

THEOREM 10. Suppose that in any of the preceding sufficiency theorems we put t₁ = ∞, and assume that all the conditions required in the theorems are satisfied on [t₀, ∞), except condition (10). Then the conditions of these theorems are sufficient for optimality,

1. according to the overtaking criterion if for all admissible x(t) there exists a number t′ (in general dependent on x(t)) such that

(93) p(t)·(x(t) − x̄(t)) ≥ 0 for t ≥ t′

2. according to the catching up criterion if for all admissible x(t),
(94) lim inf_{t→∞} p(t)·(x(t) − x̄(t)) ≥ 0

3. according to the sporadically catching up criterion if for all admissible x(t),

(95) lim sup_{t→∞} p(t)·(x(t) − x̄(t)) ≥ 0

The proof of this result is immediate if we remember that all the sufficiency proofs in the preceding sections aimed at showing inequality (16), which in turn implied that A = Δ(t₁) given in (17) is greater than or equal to p(t₁)·(x(t₁) − x̄(t₁)). In this derivation t₁ could have been an arbitrary number in (t₀, ∞), so

(96) Δ(t) ≥ p(t)·(x(t) − x̄(t)) for all t > t₀

and the conclusion follows.
We will end this section by giving some conditions which imply (93), (94) and (95) in the cases where the terminal conditions are (89), (90) and (91), respectively. It will be convenient to split the sum involved in the scalar products in (93)-(95) in the following way:

(97) Σᵢ pᵢ(t)(xᵢ(t) − x̄ᵢ(t)) = Σᵢ pᵢ(t)(xᵢ(t) − xᵢ¹) + Σᵢ pᵢ(t)(xᵢ¹ − x̄ᵢ(t))

We first consider the case when the terminal condition is (89), and introduce the following assumptions:

(98) There exists a t′ such that for t ≥ t′, pᵢ(t) ≥ 0, i = 1, …, n

(99) There exists a t′ such that for t ≥ t′, pᵢ(t)(xᵢ¹ − x̄ᵢ(t)) ≥ 0, i = 1, …, n

Then we easily see that, shortly formulated,

(100) (89), (98) and (99) ⟹ x̄(t) is optimal according to the overtaking criterion.

In fact, if t is sufficiently large, all the terms involved in the two sums on the right-hand side in (97) are ≥ 0, so the conclusion follows.
Let us turn to the case where the terminal condition is (90), and introduce the following assumptions:

(101) There exists a constant M such that for all t, pᵢ(t) ≤ M, i = 1, …, n

(102) lim inf_{t→∞} pᵢ(t)(xᵢ¹ − x̄ᵢ(t)) ≥ 0, i = 1, …, n

Then we can easily see that, shortly formulated,

(103) (90), (98), (101) and (102) ⟹ x̄(t) is optimal according to the catching up criterion.

This follows from the observation that the assumptions imply that the lim inf of each term in the sums in (97) is ≥ 0, so the lim inf of the sum is ≥ 0.
Let us then look at the case where the terminal condition is (91). Then we have the following implication:

(104) (91), (98), (101) and (102) ⟹ x̄(t) is optimal according to the sporadically catching up criterion.

If x(t) is an arbitrary admissible path, we must prove (see (95) and the note below (91)) that for every ε > 0 and every t′ there exists some t ≥ t′ such that the sum in (97) for this value of t is ≥ −ε. It is a straightforward exercise to establish that our assumptions imply this conclusion.
We will consider another application of (97). Assume that our terminal condition is (90), and let us introduce the following requirements:

(105) lim_{t→∞} pᵢ(t) = 0, i = 1, …, n

(106) There exists a constant M such that for all admissible x(t), |xᵢ(t)| ≤ M for i = 1, …, n

Then by using (97) we see that the following implication is valid:

(107) (90), (101), (102), (105) and (106) ⟹ x̄(t) is optimal according to the catching up criterion.
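As a numerical illustration of (107) (our own toy data, not from the text): take n = 1, p(t) = e⁻ᵗ, x̄(t) = 1 − e⁻ᵗ and x¹ = 1, so (90), (101), (102) and (105) hold, and restrict admissible paths to be bounded by M = 1 as in (106). The tail of p(t)·(x(t) − x̄(t)) is then squeezed to 0, which is the content of (94).

```python
import math

p     = lambda t: math.exp(-t)        # adjoint: bounded and -> 0, so (101), (105)
x_bar = lambda t: 1.0 - math.exp(-t)  # candidate path; lim inf = 1 = x1, so (90)
x1 = 1.0

# (102): p(t) * (x1 - x_bar(t)) = e^{-2t} >= 0 for all t
assert all(p(0.1 * k) * (x1 - x_bar(0.1 * k)) >= 0.0 for k in range(500))

# For any path bounded as in (106) (here: x(t) = sin t), the product
# p(t) * (x(t) - x_bar(t)) is squeezed to 0, so its lim inf is >= 0,
# which is condition (94) of Theorem 10.
x_alt = lambda t: math.sin(t)
tail = [p(t) * (x_alt(t) - x_bar(t)) for t in [10.0 + 0.01 * k for k in range(5000)]]
print(min(tail))  # bounded below by -2 e^{-10}, i.e. no smaller than about -9.1e-5
```

Replacing p(t) = e⁻ᵗ by a bounded adjoint that does not tend to 0 would break the argument for the first sum in (97), which is why (105) cannot simply be dropped from (107).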
NOTE. It is well known that (102) and (105) are not necessary for optimality. (There is a counterexample due to H. Halkin; see Halkin [6] or Arrow and Kurz [2, (46)].) Nor are they alone sufficient transversality conditions. One must be careful to introduce additional assumptions to ensure sufficiency, e.g. (101) and (106). (This must be noted in reading Arrow and Kurz [2, (46)], where their condition (2) is essentially our (102) and (105).)
We will end this section by presenting another type of sufficiency theorem.

THEOREM 11. Let (x̄(t), ū(t)) be an admissible pair in the following problem:

(108) max ∫_{t₀}^∞ f⁰(x, u, t)dt when ẋ = f(x, u, t), h(x, u, t) ≥ 0, u ∈ U

with terminal condition (90), where U is convex, f⁰, f and h are nondecreasing in x for each (u, t), and f⁰, f and h are concave in (x, u) for each t. Furthermore, assume that the derivatives of f⁰, f and h w.r.t. x and u all exist and are jointly continuous in a neighborhood of (x̄(t), ū(t⁺), t) and of (x̄(t), ū(t⁻), t), for each t. If the integral in (108) does not exist, replace the criterion by the catching up criterion.

Assume that there exists a vector p̄ satisfying

(109) p̄ᵢ ≥ 0 (= 0 if lim inf_{t→∞} x̄ᵢ(t) > xᵢ¹), i = 1, …, n

and a piecewise continuous function q(t) ≥ 0 such that

(110) qᵢ(t)hᵢ(x̄(t), ū(t), t) = 0, i = 1, …, s

and assume also that
(111) lim inf_{T→∞} ∫_{t₀}^T L′ᵤ(x̄(t), ū(t), p(t, T), q(t), t)·[ū(t) − u(t)]dt ≥ 0

holds for all admissible controls u(t), where p(t, T), for each T > t₀, is a continuous function of t that satisfies:

(112) ∂p(t, T)/∂t = −L′ₓ(x̄(t), ū(t), p(t, T), q(t), t), p(T, T) = p̄

Then (x̄(t), ū(t)) is optimal according to the catching up criterion.
NOTE. The conditions are still sufficient even if p(t, T) is piecewise continuous, provided it satisfies the following jump condition at its discontinuity points τᵢ, i = 1, …, k: For some vectors βᵢ⁺ and βᵢ⁻ in Rˢ,

p(τᵢ⁻, T) − p(τᵢ⁺, T) = βᵢ⁺·h′ₓ(τᵢ⁺) + βᵢ⁻·h′ₓ(τᵢ⁻)

where βᵢ⁺ and βᵢ⁻ satisfy:

(113) βᵢ⁺·h(τᵢ⁺) = 0, βᵢ⁺ ≥ 0, βᵢ⁻·h(τᵢ⁻) = 0, βᵢ⁻ ≥ 0,
βᵢ⁺·h′ᵤ(τᵢ⁺)·(u(τᵢ) − ū(τᵢ⁺)) ≤ 0, βᵢ⁻·h′ᵤ(τᵢ⁻)·(u(τᵢ) − ū(τᵢ⁻)) ≤ 0

for all admissible u(t).

(By the way, if p(t, T) = p(t) satisfies (113) and h is quasiconcave in (x, u), it satisfies (15).) If hᵢ for some i is independent of u, (113) reduces to (63), (64) with βᵢ⁺ = βᵢ⁻ = βᵢ.

For a proof of these results, see Seierstad [15].
NOTE TO THE LITERATURE. The overtaking criterion seems to originate with von Weizsäcker [21] and is used by many writers, e.g., Mirrlees [12]. The catching up criterion is due to Gale [4] and is used, e.g., by Stern [17]. For a discussion of these concepts, see Wan [20, (301-303)]. The sporadically catching up criterion (our terminology) was introduced by Halkin [6]. (This last criterion is, as we have seen, the weakest of the three. Halkin's choice is clearly dictated by his aim of deriving necessary conditions; of course, his necessary conditions hold a fortiori if any of the stronger optimality criteria is employed.)
University of Oslo, Norway
APPENDIX 1.
Two ways of handling constraints of the form g(x, t) ≥ 0
In theorems giving necessary conditions as well as in theorems giving sufficient conditions for optimality in control problems where constraints of the form g(x, t) ≥ 0 are present, the adjoint variables associated with the constraints are often different from those that we have used. In fact, in the Lagrangian function the constraint g(x, t) ≥ 0 is usually "represented by" the function dg(x, t)/dt = g′ₓ(x, t)·ẋ + g′ₜ(x, t) = g′ₓ(x, t)·f(x, u, t) + g′ₜ(x, t). This is, for example, the case in Funk and Gilbert [3], Hestenes [7, (chapter 8)], Hadley and Kemp
[5, (section 5.9)] and Arrow and Kurz [2] (in the case when g(x, t) = x; see prop. 5, p. 42 and prop. 6, p. 45).
We will briefly examine the relation between the two approaches. To this end let us consider Theorem 9, assume that it is satisfied with a piecewise continuous q̂(t), and define q*(t) as a piecewise differentiable function satisfying

(114) q̇*(t) = −q̂(t)

except at τ₁, …, τ_k and except at the points of discontinuity of q̂(t), and also

(115) q*(t) ≥ 0, q*(τᵢ⁻) − q*(τᵢ⁺) = βᵢ, i = 1, …, k

where βᵢ = (βᵢ¹, …, βᵢˢ) is as in (77). Note that if gᵢ(x̄(t), t) > 0 on an open interval, then by (80), q̂ᵢ(t) = 0 there, so qᵢ*(t) is constant on this interval. Now, let us define

(116) p*(t) = p(t) − q*(t)·g′ₓ(x̄(t), t)

It is easy to see that p*(τᵢ⁻) − p*(τᵢ⁺) = 0 for i = 1, …, k, so p*(t) becomes a continuous function. Let us also define

(117) L*(x, u, p*, q, q*, t) = p₀f⁰ + p*·f + q·h + q*·ψ, where ψ = g′ₓ·f + g′ₜ
At points where q̂(t) is continuous we see from (114) and (116) that

ṗ* = ṗ − q̇*·g′ₓ − q*·d(g′ₓ)/dt = ṗ + q̂·g′ₓ − q*·(g″ₓₓ·f + g″ₓₜ)

Moreover, for ψ defined in (117), ψ′ₓ = g″ₓₓ·f + g′ₓ·f′ₓ + g″ₓₜ, so using (75), which gives ṗ = −(p₀f⁰′ₓ + p·f′ₓ + q·h′ₓ + q̂·g′ₓ), together with (116) and (117) we conclude that

(118) ṗ* = −∂L*/∂x
Since L* = L − q̂·g(x, t) + q*·g′ₜ(x, t), where neither of the two extra terms depends on u, we obtain by using (76),

(119) ∂L*/∂u = 0
Another important relation will be derived. Define

(120) H*(x, u, p*, q*, t) = p₀f⁰ + p*·f + q*·ψ

where ψ is given in (117). Then H* = p₀f⁰ + p·f + q*·g′ₜ. The last term is independent of u, so it follows from (74) that

(121) H*(x̄(t), u, p*, q*, t) ≤ H*(x̄(t), ū(t), p*, q*, t)

for all admissible u.
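The algebra leading to (118), (119) and (121) rests on the identities L* = L − q̂·g + q*·g′ₜ and H* = H + q*·g′ₜ. These can be spot-checked mechanically in the scalar case; all numbers below are arbitrary, hypothetical values, not part of the model.

```python
import random

random.seed(0)

# arbitrary scalar stand-ins for the quantities entering (73), (116) and (117)
p0, f0, f, h, g, gx, gt = (random.uniform(-2.0, 2.0) for _ in range(7))
p, q, q_hat, q_star = (random.uniform(0.0, 2.0) for _ in range(4))

p_star = p - q_star * gx          # (116)
psi = gx * f + gt                 # the function representing g in (117)

L      = p0 * f0 + p * f + q * h + q_hat * g           # (73)
L_star = p0 * f0 + p_star * f + q * h + q_star * psi   # (117)

# identity used to derive (119): the extra terms do not depend on u
assert abs(L_star - (L - q_hat * g + q_star * gt)) < 1e-12

# identity behind (121): H* differs from H = p0 f0 + p f by q* g_t only
H      = p0 * f0 + p * f
H_star = p0 * f0 + p_star * f + q_star * psi           # (120)
assert abs(H_star - (H + q_star * gt)) < 1e-12
print("identities hold")
```

Since both residual terms are free of u, the first-order condition (76) and the maximum condition (74) transfer directly to L* and H*, which is the whole point of the comparison.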
The Lagrangian function L* in (117) is exactly the one introduced in the works
mentioned above, and by the calculations we have performed the relationship between our theorems and the others should be clear; we refrain from spelling it out in more detail.
APPENDIX 2.
A sufficiency theorem in nonlinear programming

Let A be a set in Rᵐ and z̄ a point of A. An s × m matrix Q is called a subgradient at z̄ for the vector function G: Rᵐ → Rˢ w.r.t. the set A provided

(122) G(z) − G(z̄) ≤ Q·(z − z̄) for all z ∈ A

In the same setting, an s × m matrix W is called a subquasigradient at z̄ for G w.r.t. the set A provided

(123) G(z) ≥ G(z̄) ⟹ W·(z − z̄) ≥ 0, for all z ∈ A
THEOREM 12. Consider the problem:

(124) max f(z) when g(z) ≥ 0 and z ∈ Z₀

where f is a function from Rᵐ to R, g is a vector function from Rᵐ to Rˢ, and Z₀ is an arbitrary set in Rᵐ. Let Z = {z: g(z) ≥ 0} ∩ Z₀. Assume that:

1. f has a subgradient Q at z̄ w.r.t. the set Z.
2. g has a subquasigradient W at z̄ w.r.t. the set Z.
3. For some 1 × s vector q ≥ 0, (Q + q·W)·(z − z̄) ≤ 0 for all z ∈ Z.
4. q·g(z̄) = 0.
5. g(z̄) ≥ 0.

Then z̄ solves problem (124).
PROOF: Let I = {i: gᵢ(z̄) = 0}. Since qᵢ ≥ 0 and gᵢ(z̄) ≥ 0 for all i, by 4, qᵢgᵢ(z̄) = 0 for all i = 1, …, s. In particular, if i ∉ I, then gᵢ(z̄) > 0 and qᵢ = 0. Letting wᵢ denote the i-th row vector of W, we get from 2 that for all i,

gᵢ(z) ≥ gᵢ(z̄) ⟹ wᵢ·(z − z̄) ≥ 0, for all z ∈ Z

When i ∈ I, gᵢ(z̄) = 0, so for all z ∈ Z, gᵢ(z) ≥ gᵢ(z̄); hence wᵢ·(z − z̄) ≥ 0 and therefore

(125) qᵢwᵢ·(z − z̄) ≥ 0 for all i ∈ I and all z ∈ Z

Since qᵢ = 0 for i ∉ I, (125) actually holds for all i = 1, …, s. Summing the inequality (125) from i = 1 to i = s, we obtain q·W·(z − z̄) ≥ 0 for all z ∈ Z. By applying 3, it follows that Q·(z − z̄) ≤ 0 for all z ∈ Z. This inequality together with 1 gives us

f(z) − f(z̄) ≤ Q·(z − z̄) ≤ 0 for all z ∈ Z

Hence f(z) ≤ f(z̄) for all z ∈ Z, so z̄ solves problem (124).
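A concrete instance of Theorem 12 (our own toy problem, not from the text): maximize the concave f(z) = −(z₁ − 1)² − (z₂ − 1)² subject to the linear, hence quasiconcave, g(z) = 1 − z₁ − z₂ ≥ 0, with Z₀ = R². Following Note 1 below, take Q = f′(z̄) and W = g′(z̄); at z̄ = (1/2, 1/2) the choice q = 1 makes Q + q·W = 0, so conditions 1-5 hold.

```python
import random

random.seed(1)

f = lambda z: -(z[0] - 1.0) ** 2 - (z[1] - 1.0) ** 2
g = lambda z: 1.0 - z[0] - z[1]

z_bar = (0.5, 0.5)
Q = (-2.0 * (z_bar[0] - 1.0), -2.0 * (z_bar[1] - 1.0))  # f'(z_bar): condition 1
W = (-1.0, -1.0)                                        # g'(z_bar): condition 2
q = 1.0                                                 # makes Q + q*W = (0, 0)

dot = lambda a, b: a[0] * b[0] + a[1] * b[1]

assert q * g(z_bar) == 0.0 and g(z_bar) >= 0.0          # conditions 4 and 5

# sample feasible points: check condition 3 and the conclusion f(z) <= f(z_bar)
for _ in range(10_000):
    z = (random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0))
    if g(z) < 0.0:
        continue  # outside Z
    dz = (z[0] - z_bar[0], z[1] - z_bar[1])
    assert dot((Q[0] + q * W[0], Q[1] + q * W[1]), dz) <= 1e-12  # condition 3
    assert f(z) <= f(z_bar) + 1e-12                              # Theorem 12
print("checked; f(z_bar) =", f(z_bar))  # f(z_bar) = -0.5
```

Since Q + q·W vanishes identically, condition 3 holds trivially here; the sampled feasible points confirm the conclusion f(z) ≤ f(z̄) = −1/2.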
NOTE 1. This theorem (and the proof) is a straightforward generalization of a
sufficiency theorem in Mangasarian [11, (151-153)]. Note in particular that if f is concave and differentiable, f′(z̄) is a subgradient of f at z̄. Also, if g is quasiconcave and differentiable, g′(z̄) is a subquasigradient of g at z̄ (see Mangasarian [11, (147)]).
NOTE 2. It is easy to see from the proof above that it suffices to assume in 2 that gᵢ has wᵢ as a subquasigradient at z̄ w.r.t. Z for i ∈ I. For i ∉ I, wᵢ can be arbitrarily chosen.
APPENDIX 3.

THEOREM 13. Let A be a convex set in Rⁿ and φ a real-valued, concave function defined on A. If x̄ is an interior point of A, there exists a vector a ∈ Rⁿ such that

(126) φ(x) − φ(x̄) ≤ a·(x − x̄) for all x ∈ A

For a proof, see Rockafellar [13, (ch. 5, § 23)]. (The proof consists of a separation of the point (x̄, φ(x̄)) from the convex set M = {(x, z): x ∈ A, z ≤ φ(x)}. The separating functional (vector) has a nonzero last component since x̄ is an interior point.)
COROLLARY. In the setting of Theorem 13, let ψ be a real-valued function defined in a ball B(x̄, δ) around x̄, such that ψ is differentiable at x̄, ψ(x̄) = φ(x̄) and

ψ(x) ≤ φ(x) for all x ∈ B(x̄, δ)

Then (126) is satisfied with a = ψ′(x̄).

PROOF: With x ∈ B(x̄, δ), define for each λ ∈ [−1, 1], x(λ) = x̄ + λ(x − x̄). Now,

ψ(x(λ)) − ψ(x̄) ≤ φ(x(λ)) − φ(x̄) ≤ a·(x(λ) − x̄) = λa·(x − x̄) for all x ∈ B(x̄, δ)

Hence

[ψ(x(λ)) − ψ(x̄)]/λ ≤ a·(x − x̄) when λ > 0, and [ψ(x(λ)) − ψ(x̄)]/λ ≥ a·(x − x̄) when λ < 0

Since [ψ(x(λ)) − ψ(x̄)]/λ → ψ′(x̄)·(x − x̄) as λ → 0, it follows that

ψ′(x̄)·(x − x̄) = a·(x − x̄)

for all x ∈ B(x̄, δ). Hence ψ′(x̄) = a.
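A one-dimensional sketch of the corollary (our own toy functions, not from the text): φ(x) = −x⁴ is concave on A = R, and ψ(x) = φ(x) − (x − x̄)² is a differentiable minorant touching φ at x̄, so a = ψ′(x̄) = φ′(x̄) should support φ as in (126).

```python
# Numerical illustration of Theorem 13 and its corollary in one dimension.
x_bar = 0.3
phi = lambda x: -x ** 4                     # concave on all of R
psi = lambda x: phi(x) - (x - x_bar) ** 2   # psi <= phi, psi(x_bar) = phi(x_bar)

a = -4.0 * x_bar ** 3                       # psi'(x_bar), equals phi'(x_bar) here

grid = [-2.0 + 0.001 * k for k in range(4001)]
assert all(psi(x) <= phi(x) for x in grid)                 # minorant property
assert all(phi(x) - phi(x_bar) <= a * (x - x_bar) + 1e-12
           for x in grid)                                  # inequality (126)
print("a =", round(a, 6))  # a = -0.108
```

Note that no such smooth minorant can exist at a kink, e.g. for φ(x) = −|x| at x̄ = 0, where the supporting slopes form the whole interval [−1, 1]; the corollary singles out one supporting vector when a smooth touching minorant happens to be available.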
REFERENCES

[1] ARROW, K. J., "Applications of Control Theory to Economic Growth," in G. B. Dantzig and A. F. Veinott, Jr., eds., Mathematics of the Decision Sciences (Providence, R.I.: American Mathematical Society, 1968).
[2] ARROW, K. J. AND M. KURZ, Public Investment, the Rate of Return, and Optimal Fiscal Policy (Baltimore: Johns Hopkins Press, 1970).
[3] FUNK, J. E. AND E. G. GILBERT, "Some Sufficiency Conditions for Optimality in Control Problems with State Space Constraints," SIAM Journal on Control, VIII (November 1970), 498-504.
[4] GALE, D., "On Optimal Development in a Multi-Sector Economy," Review of Economic Studies, XXXIV (January 1967), 1-18.
[5] HADLEY, G. AND M. C. KEMP, Variational Methods in Economics (Amsterdam: North-Holland, 1971).
[6] HALKIN, H., "Necessary Conditions for Optimal Control Problems with Infinite Horizons," Econometrica, XLII (March 1974), 267-272.
[7] HESTENES, M. R., Calculus of Variations and Optimal Control Theory (New York: John Wiley, 1966).
[8] KAMIEN, M. I. AND N. L. SCHWARTZ, "Sufficient Conditions in Optimal Control Theory," Journal of Economic Theory, III (June 1971), 207-214.
[9] LEITMANN, G. AND H. STALFORD, "A Sufficiency Theorem for Optimal Control," Journal of Optimization Theory and Applications, VII (September 1971), 169-174.
[10] MANGASARIAN, O. L., "Sufficient Conditions for the Optimal Control of Nonlinear Systems," SIAM Journal on Control, IV (February 1966), 139-152.
[11] MANGASARIAN, O. L., Nonlinear Programming (New York: McGraw-Hill, 1969).
[12] MIRRLEES, J. A., "Optimum Growth when the Technology is Changing," Review of Economic Studies, XXXIV (January 1967), 95-124.
[13] ROCKAFELLAR, R. T., Convex Analysis (Princeton, N.J.: Princeton University Press, 1970).
[14] SEIERSTAD, A., "Existence of a Control Satisfying Mangasarian's Sufficiency Conditions for Optimality Makes these Conditions Necessary," Memorandum from Institute of Economics, University of Oslo (December 5, 1975).
[15] SEIERSTAD, A., "A Sufficient Condition for Control Problems with Infinite Horizons," Memorandum from Institute of Economics, University of Oslo (January 12, 1977).
[16] SEIERSTAD, A. AND K. SYDSÆTER, "Sufficiency Conditions in Optimal Control Theory," Memorandum from Institute of Economics, University of Oslo (February 14, 1975).
[17] STERN, N. H., "Optimum Development in a Dual Economy," Review of Economic Studies, XXXIX (April 1972), 171-184.
[18] TAKAYAMA, A., Mathematical Economics (Hinsdale, Illinois: The Dryden Press, 1974).
[19] UZAWA, H., "Optimal Growth in a Two-Sector Model of Capital Accumulation," Review of Economic Studies, XXXI (January 1964), 1-24.
[20] WAN, H. Y., JR., Economic Growth (New York: Harcourt Brace Jovanovich, 1971).
[21] VON WEIZSÄCKER, C. C., "Existence of Optimal Programs of Accumulation for an Infinite Time Horizon," Review of Economic Studies, XXXII (April 1965), 85-104.