
Postgraduate Mathematics Programme M820 CN

M820
The Calculus of Variations and
Advanced Calculus

Course Notes

Prepared by

Prof D. Richards
(December 2009)

Copyright © 2009 The Open University SUP 01824 8


First published 2008.
Second edition 2009.


Printed in the United Kingdom by


The Open University

Contents

1 Preliminary Analysis 7
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Notation and preliminary remarks . . . . . . . . . . . . . . . . . . . . . 10
1.2.1 The Order notation . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Functions of a real variable . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.2 Continuity and Limits . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.3 Monotonic functions and inverse functions . . . . . . . . . . . . . 17
1.3.4 The derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.5 Mean Value Theorems . . . . . . . . . . . . . . . . . . . . . . . . 22
1.3.6 Partial Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.7 Implicit functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.3.8 Taylor series for one variable . . . . . . . . . . . . . . . . . . . . 31
1.3.9 Taylor series for several variables . . . . . . . . . . . . . . . . . . 36
1.3.10 L’Hospital’s rule . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.3.11 Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.4 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

2 Ordinary Differential Equations 51


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.2 General definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.3 First-order equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.3.1 Existence and uniqueness of solutions . . . . . . . . . . . . . . . 60
2.3.2 Separable and homogeneous equations . . . . . . . . . . . . . . . 62
2.3.3 Linear first-order equations . . . . . . . . . . . . . . . . . . . . . 63
2.3.4 Bernoulli’s equation . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.3.5 Riccati’s equation . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.4 Second-order equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.4.2 General ideas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.4.3 The Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.4.4 Second-order, constant coefficient equations . . . . . . . . . . . . 76
2.4.5 Inhomogeneous equations . . . . . . . . . . . . . . . . . . . . . . 78
2.4.6 The Euler equation . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.5 An existence and uniqueness theorem . . . . . . . . . . . . . . . . . . . 81

1
2 CONTENTS

2.6 Envelopes of families of curves (optional) . . . . . . . . . . . . . . . . . 82


2.7 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.7.1 Applications of differential equations . . . . . . . . . . . . . . . . 91

3 The Calculus of Variations 93


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.2 The shortest distance between two points in a plane . . . . . . . . . . . 93
3.2.1 The stationary distance . . . . . . . . . . . . . . . . . . . . . . . 94
3.2.2 The shortest path: local and global minima . . . . . . . . . . . . 96
3.2.3 Gravitational Lensing . . . . . . . . . . . . . . . . . . . . . . . . 98
3.3 Two generalisations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.3.1 Functionals depending only upon y′(x) . . . . . . . . . . . . . . . 99
3.3.2 Functionals depending upon x and y′(x) . . . . . . . . . . . . . . 101
3.4 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.5 Examples of functionals . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.5.1 The brachistochrone . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.5.2 Minimal surface of revolution . . . . . . . . . . . . . . . . . . . . 106
3.5.3 The minimum resistance problem . . . . . . . . . . . . . . . . . . 106
3.5.4 A problem in navigation . . . . . . . . . . . . . . . . . . . . . . . 110
3.5.5 The isoperimetric problem . . . . . . . . . . . . . . . . . . . . . . 110
3.5.6 The catenary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3.5.7 Fermat’s principle . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3.5.8 Coordinate free formulation of Newton’s equations . . . . . . . . 114
3.6 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

4 The Euler-Lagrange equation 121


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.2 Preliminary remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.2.1 Relation to differential calculus . . . . . . . . . . . . . . . . . . . 122
4.2.2 Differentiation of a functional . . . . . . . . . . . . . . . . . . . . 123
4.3 The fundamental lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.4 The Euler-Lagrange equations . . . . . . . . . . . . . . . . . . . . . . . . 128
4.4.1 The first-integral . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.5 Theorems of Bernstein and du Bois-Reymond . . . . . . . . . . . . . . . 134
4.5.1 Bernstein’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.6 Strong and Weak variations . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.7 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

5 Applications of the Euler-Lagrange equation 145


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.2 The brachistochrone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.2.1 The cycloid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.2.2 Formulation of the problem . . . . . . . . . . . . . . . . . . . . . 149
5.2.3 A solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
5.3 Minimal surface of revolution . . . . . . . . . . . . . . . . . . . . . . . . 154
5.3.1 Derivation of the functional . . . . . . . . . . . . . . . . . . . . . 155
5.3.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5.3.3 The solution in a special case . . . . . . . . . . . . . . . . . . . . 157

5.3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161


5.4 Soap Films . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.5 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

6 Further theoretical developments 173


6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
6.2 Invariance of the Euler-Lagrange equation . . . . . . . . . . . . . . . . . 173
6.2.1 Changing the independent variable . . . . . . . . . . . . . . . . . 174
6.2.2 Changing both the dependent and independent variables . . . . . 176
6.3 Functionals with many dependent variables . . . . . . . . . . . . . . . . 181
6.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
6.3.2 Functionals with two dependent variables . . . . . . . . . . . . . 182
6.3.3 Functionals with many dependent variables . . . . . . . . . . . . 185
6.3.4 Changing dependent variables . . . . . . . . . . . . . . . . . . . . 186
6.4 The Inverse Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.5 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

7 Symmetries and Noether’s theorem 195


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
7.2 Symmetries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
7.2.1 Invariance under translations . . . . . . . . . . . . . . . . . . . . 196
7.3 Noether’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
7.3.1 Proof of Noether’s theorem . . . . . . . . . . . . . . . . . . . . . 205
7.4 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208

8 The second variation 209


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
8.2 Stationary points of functions of several variables . . . . . . . . . . . . . 210
8.2.1 Functions of one variable . . . . . . . . . . . . . . . . . . . . . . 210
8.2.2 Functions of two variables . . . . . . . . . . . . . . . . . . . . . . 211
8.2.3 Functions of n variables . . . . . . . . . . . . . . . . . . . . . . . 212
8.3 The second variation of a functional . . . . . . . . . . . . . . . . . . . . 215
8.3.1 Short intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
8.3.2 Legendre’s necessary condition . . . . . . . . . . . . . . . . . . . 218
8.4 Analysis of the second variation . . . . . . . . . . . . . . . . . . . . . . . 220
8.4.1 Analysis of the second variation . . . . . . . . . . . . . . . . . . . 222
8.5 The Variational Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 226
8.6 The Brachistochrone problem . . . . . . . . . . . . . . . . . . . . . . . . 229
8.7 Surface of revolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
8.8 Jacobi’s equation and quadratic forms . . . . . . . . . . . . . . . . . . . 232
8.9 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236

9 Parametric Functionals 239


9.1 Introduction: parametric equations . . . . . . . . . . . . . . . . . . . . . 239
9.1.1 Lengths and areas . . . . . . . . . . . . . . . . . . . . . . . . . . 241
9.2 The parametric variational problem . . . . . . . . . . . . . . . . . . . . 244
9.2.1 Geodesics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
9.2.2 The Brachistochrone problem . . . . . . . . . . . . . . . . . . . . 250

9.2.3 Surface of Minimum Revolution . . . . . . . . . . . . . . . . . . . 250


9.3 The parametric and the conventional formulation . . . . . . . . . . . . . 251
9.4 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253

10 Variable end points 257


10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
10.2 Natural boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . 259
10.2.1 Natural boundary conditions for the loaded beam . . . . . . . . . 263
10.3 Variable end points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
10.4 Parametric functionals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
10.5 Weierstrass-Erdmann conditions . . . . . . . . . . . . . . . . . . . . . . 271
10.5.1 A taut wire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
10.5.2 The Weierstrass-Erdmann conditions . . . . . . . . . . . . . . . . 273
10.5.3 The parametric form of the corner conditions . . . . . . . . . . . 277
10.6 Newton’s minimum resistance problem . . . . . . . . . . . . . . . . . . . 277
10.7 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

11 Conditional stationary points 287


11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
11.2 The Lagrange multiplier . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
11.2.1 Three variables and one constraint . . . . . . . . . . . . . . . . . 291
11.2.2 Three variables and two constraints . . . . . . . . . . . . . . . . 293
11.2.3 The general case . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
11.3 The dual problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
11.4 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297

12 Constrained Variational Problems 299


12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
12.2 Conditional Stationary values of functionals . . . . . . . . . . . . . . . . 300
12.2.1 Functional constraints . . . . . . . . . . . . . . . . . . . . . . . . 300
12.2.2 The dual problem . . . . . . . . . . . . . . . . . . . . . . . . . . 304
12.2.3 The catenary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
12.3 Variable end points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
12.4 Broken extremals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
12.5 Parametric functionals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
12.6 The Lagrange problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
12.6.1 A single non-holonomic constraint . . . . . . . . . . . . . . . . . 317
12.6.2 An example with a single holonomic constraint . . . . . . . . . . 318
12.7 Brachistochrone in a resisting medium . . . . . . . . . . . . . . . . . . . 319
12.8 Brachistochrone with Coulomb friction . . . . . . . . . . . . . . . . . . . 329
12.9 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

13 Sturm-Liouville systems 339


13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
13.2 The origin of Sturm-Liouville systems . . . . . . . . . . . . . . . . . . . 342
13.3 Eigenvalues and functions of simple systems . . . . . . . . . . . . . . . . 348
13.3.1 Bessel functions (optional) . . . . . . . . . . . . . . . . . . . . . . 353
13.4 Sturm-Liouville systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 357

13.4.1 Separation and Comparison theorems . . . . . . . . . . . . . . . 359


13.4.2 Self-adjoint operators . . . . . . . . . . . . . . . . . . . . . . . . 363
13.4.3 The oscillation theorem (optional) . . . . . . . . . . . . . . . . . 365
13.5 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

14 The Rayleigh-Ritz Method 375


14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
14.2 Basic ideas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
14.3 Eigenvalues and eigenfunctions . . . . . . . . . . . . . . . . . . . . . . . 379
14.4 The Rayleigh-Ritz method . . . . . . . . . . . . . . . . . . . . . . . . . . 384
14.5 Miscellaneous exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388

15 Solutions to exercises 397


15.1 Solutions for chapter 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
15.2 Solutions for chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
15.3 Solutions for chapter 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
15.4 Solutions for chapter 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
15.5 Solutions for chapter 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
15.6 Solutions for chapter 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
15.7 Solutions for chapter 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
15.8 Solutions for chapter 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
15.9 Solutions for chapter 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
15.10 Solutions for chapter 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
15.11 Solutions for chapter 11 . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
15.12 Solutions for chapter 12 . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
15.13 Solutions for chapter 13 . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
15.14 Solutions for chapter 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
Chapter 1

Preliminary Analysis

1.1 Introduction
This course is about two related mathematical concepts which are of use in many areas
of applied mathematics, are of immense importance in formulating the laws of theoretical
physics, and also give rise to important, interesting and sometimes unsolved mathematical
problems. These are the functional and the variational principle: the theory of these
entities is named The Calculus of Variations.
A functional is a generalisation of a function of one or more real variables. A real
function of a single real variable maps an interval of the real line to real numbers: for
instance, the function (1 + x^2)^{-1} maps the whole real line to the interval (0, 1]; the
function ln x maps the positive real axis to the whole real line. Similarly a real function
of n real variables maps a domain of Rn into the real numbers.
A functional maps a given class of functions to real numbers. A simple example of
a functional is
        S[y] = \int_0^1 dx \, \sqrt{1 + y'(x)^2}, \qquad y(0) = 0, \quad y(1) = 1,          (1.1)

which associates a real number with any real function y(x) which satisfies the boundary
conditions and for which the integral exists. We use the square bracket notation^1 S[y]
to emphasise the fact that the functional depends upon the choice of function used to
evaluate the integral. In chapter 3 we shall see that a wide variety of problems can be
described in terms of functionals. Notice that the boundary conditions, y(0) = 0 and
y(1) = 1 in this example, are often part of the definition of the functional.
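To see concretely that a functional assigns one number to each admissible function, the integral in (1.1) can be approximated numerically. The following Python sketch is our own illustration, not part of the course notes; the names `S` and `yprime` are assumed for the purpose of the example, and a simple midpoint rule stands in for exact integration.

```python
import math

def S(yprime, n=10_000):
    """Approximate S[y] = \int_0^1 dx sqrt(1 + y'(x)^2) by the midpoint rule,
    given the derivative y'(x) of an admissible function y."""
    h = 1.0 / n
    return sum(math.sqrt(1.0 + yprime((k + 0.5) * h) ** 2) for k in range(n)) * h

# y(x) = x satisfies y(0) = 0, y(1) = 1; its graph is the straight line from O to A.
straight = S(lambda x: 1.0)       # y'(x) = 1, so S[y] = sqrt(2)
# y(x) = x^2 also satisfies the boundary conditions, but is a longer curve.
parabola = S(lambda x: 2.0 * x)   # y'(x) = 2x

print(straight)  # about 1.41421, i.e. sqrt(2)
print(parabola)  # about 1.4789, larger than sqrt(2)
```

Different admissible functions y give different values of S[y], which is exactly the sense in which S is a map from a class of functions to the real numbers.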
Real functions of n real variables can have various properties; for instance they
can be continuous, they may be differentiable or they may have stationary points and
local and global maxima and minima: functionals share many of these properties. In
^1 In this course we use conventions common in applied mathematics and theoretical physics. A
function of a real variable x will usually be represented by symbols such as f(x) or just f, often
with no distinction made between the function and its value; it is often clearer to use context to
provide meaning, rather than precise definitions, which initially can hinder clarity. Similarly, we use
the older convention, S[y], for a functional, to emphasise that y is itself a function; this distinction
is not made in modern mathematics. For an introductory course we feel that the older convention,
used in most texts, is clearer and more helpful.


particular the notion of a stationary point of a function has an important analogy in
the theory of functionals, and this gives rise to the idea of a variational principle, which
arises when the solution to a problem is given by the function making a particular
functional stationary. Variational principles are common and important in the natural
sciences.
The simplest example of a variational principle is that of finding the shortest distance
between two points. Suppose the two points lie in a plane, with one point at the origin,
O, and the other at the point A with coordinates (1, 1). If y(x) represents a smooth
curve passing through O and A, then the distance between O and A along this curve is
given by the functional defined in equation 1.1. The shortest path is the one that
minimises the value of S[y]. If the surface is curved, for instance a sphere or ellipsoid,
the equivalent functional is more complicated, but the shortest path is still the one
that minimises it.
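The minimising property can be made concrete by comparing the lengths of a one-parameter family of trial curves joining O and A, for instance y(x) = x + εx(1 − x); this family, and the Python sketch below, are our own illustration rather than material from the course. The straight line corresponds to ε = 0 and should give the smallest length.

```python
import math

def length(eps, n=4000):
    """Arc length of the trial curve y(x) = x + eps*x*(1 - x), which joins
    O = (0, 0) to A = (1, 1) for every eps (midpoint-rule approximation)."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        yp = 1.0 + eps * (1.0 - 2.0 * x)   # y'(x) for this trial curve
        total += math.sqrt(1.0 + yp * yp) * h
    return total

# Scan eps over a grid and find the shortest member of the family.
lengths = {e / 10: length(e / 10) for e in range(-10, 11)}
best_eps = min(lengths, key=lengths.get)
print(best_eps)  # 0.0: the straight line is the shortest member of the family
```

Of course this only compares curves within one family; the Calculus of Variations provides the machinery to show the straight line minimises S[y] over all admissible curves.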
Variational principles are important for three principal reasons. First, many prob-
lems are naturally formulated in terms of a functional and an associated variational
principle. Several of these will be described in chapter 3 and some solutions will be
obtained as the course develops.
Second, most equations of mathematical physics can be derived from variational
principles. This is important partly because it suggests a unifying theme in our descrip-
tion of nature and partly because such formulations are independent of any particular
coordinate system, so making the essential mathematical structure of the equations
more transparent and easier to understand. This aspect of the subject is not consid-
ered in this course; a good discussion of these problems can be found in Yourgrau and
Mandelstam (1968)^2.
Finally, variational principles provide powerful computational tools; we explore as-
pects of this theory in chapter 13.
Consider the problem of finding the shortest path between two points on a curved
surface. The associated functional assigns a real number to each smooth curve joining
the points. A first step to solving this problem is to find the stationary values of the
functional; it is then necessary to decide which of these provides the shortest path. This
is very similar to the problem of finding extreme values of a function of n variables,
where we first determine the stationary points and then classify them: the important
and significant difference is that the space of allowed functions is not usually finite
in dimension. The infinite dimensional spaces of functions with which we shall be
dealing have many properties similar to those possessed by finite dimensional spaces,
and in many problems the difference is not significant. However, this generalisation
does introduce some practical and technical difficulties, some of which are discussed in
section 4.6. In this chapter we review ordinary calculus in order to prepare for these
more general ideas.
In elementary calculus and analysis, the functions studied first are ‘real functions, f ,
of one real variable’, that is, functions with domain either R, or a subset of R, and
codomain R. Without any other restrictions on f , this definition is too general to be
useful in calculus and applied mathematics. Most functions of one real variable that
are of interest in applications have smooth graphs, although sometimes they may fail
to be smooth at one or more points where they have a ‘kink’ (fail to be differentiable),
or even a break (where they are discontinuous). This smooth behaviour is related to
^2 Yourgrau W and Mandelstam S, Variational Principles in Dynamics and Quantum Theory (Pitman).

the fact that most important functions of one variable describe physical phenomena
and often arise as solutions of ordinary differential equations. Therefore it is usual to
restrict attention to functions that are differentiable or, more usually, differentiable a
number of times.
The most useful generalisation of differentiability to functions defined on sets other
than R requires some care. It is not too hard in the case of functions of several (real)
variables but we shall have to generalise differentiation and integration to functionals,
not just to functions of several real variables.
Our presentation conceals very significant intellectual achievements made at the
end of the nineteenth century and during the first half of the twentieth century. During
the nineteenth century, although much work was done on particular equations, there
was little systematic theory. This changed when the idea of infinite dimensional vector
spaces began to emerge. Between 1900 and 1906, fundamental papers appeared by
Fredholm^3, Hilbert^4, and Fréchet^5. Fréchet’s thesis gave for the first time definitions of
limit and continuity that were applicable in very general sets. Previously, the concepts
had been restricted to special objects such as points, curves, surfaces or functions. By
introducing the concept of distance in more general sets he paved the way for rapid
advances in the theory of partial differential equations. These ideas together with the
theory of Lebesgue integration, introduced in 1902 by Lebesgue in his doctoral thesis^6,
led to the modern theory of functional analysis. This is now the usual framework of
the theoretical study of partial differential equations. They are required also for an
elucidation of some of the difficulties in the Calculus of Variations. However, in this
introductory course, we concentrate on basic techniques of solving practical problems,
because we think this is the best way to motivate and encourage further study.
This preliminary chapter, which is assessed, is about real analysis and introduces
many of the ideas needed for our treatment of the Calculus of Variations. It is possible
that you are already familiar with the mathematics described in this chapter, in which
case you could start the course with chapter 2. You should ensure, however, that you
have a good working knowledge of differentiation, both ordinary and partial, Taylor
series of one and several variables and differentiation under the integral sign, all of
which are necessary for the development of the theory. In addition familiarity with the
theory of linear differential equations with both initial and boundary value problems is
assumed.
Very many exercises are set, in the belief that mathematical ideas cannot be un-
derstood without attempting to solve problems at various levels of difficulty and that
one learns most by making one’s own mistakes, which is time consuming. You should
not attempt all these exercises at a first reading, but they provide practice in essential
mathematical techniques and in the use of a variety of ideas, so you should do as many
as time permits; thinking about a problem, then looking up the solution is usually of
^3 I. Fredholm, On a new method for the solution of Dirichlet’s problem, reprinted in Oeuvres Complètes, l’Institut Mittag-Leffler, (Malmö) 1955, pp 61-68 and 81-106.
^4 D. Hilbert published six papers between 1904 and 1906. They were republished as Grundzüge einer allgemeinen Theorie der Integralgleichungen by Teubner, (Leipzig and Berlin), 1924. The most crucial paper is the fourth.
^5 M. Fréchet, Doctoral thesis, Sur quelques points du Calcul fonctionnel, Rend. Circ. Mat. Palermo 22 (1906), pp 1-74.
^6 H. Lebesgue, Doctoral thesis, Paris 1902, reprinted in Annali Mat. pura e appl., 7 (1902), pp 231-359.

little value until you have attempted your own solution. The exercises at the end of
this chapter are examples of the type of problem that commonly occur in applications:
they are provided for extra practice if time permits and it is not necessary for you to
attempt them.

1.2 Notation and preliminary remarks


We start with a discussion about notation and some of the basic ideas used throughout
this course.
A real function of a single real variable, f, is a rule that maps a real number x
to a single real number y. This operation can be denoted in a variety of ways. The
approach of scientists is to write y = f(x) or just y(x), and the symbols y, y(x), f
and f(x) are all used to represent the function. Mathematics uses the more formal
and precise notation f : X → Y, where X and Y are subsets of the real line: the set
X is named the domain, or the domain of definition, of f, and the set Y the codomain.
With this notation the symbol f denotes the function and the symbol f(x) the value
of the function at the point x. In applications this distinction is not always made and
both f and f(x) are used to denote the function. In recent years this has come to be
regarded as heresy by some: however, there are good practical reasons for using this
freer notation that do not affect pure mathematics. In this text we shall frequently use
the Leibniz notation, f(x), and its extensions, because it generally provides a clearer
picture and is helpful for algebraic manipulations, such as when changing variables and
integrating by parts.
Moreover, in the sciences the domain and codomain are frequently omitted, either
because they are ‘obvious’ or because they are not known. But, perversely, the scientist,
by writing y = f(x), often distinguishes between the two variables x and y, by saying
that x is the independent variable and that y is the dependent variable because it depends
upon x. This labelling can be confusing, because the role of variables can change, but
it is also helpful because in physical problems different variables can play quite different
roles: for instance, time is normally an independent variable.
In pure mathematics the term graph is used in a slightly specialised manner. A graph
is the set of points (x, f(x)): this is normally depicted as a line in a plane using
rectangular Cartesian coordinates. In other disciplines the whole figure is called the graph,
not the set of points, and the graph may be a less restricted shape than those defined
by functions; an example is shown in figure 1.5 (page 28).
Almost all the ideas associated with real functions of one variable generalise to
functions of several real variables, but notation needs to be developed to cope with this
extension. Points in Rn are represented by n-tuples of real numbers (x1 , x2 , . . . , xn ).
It is convenient to use bold faced symbols, x, a and so on, to denote these points,
so x = (x1 , x2 , . . . , xn ) and we shall write x and (x1 , x2 , . . . , xn ) interchangeably. In
hand-written text a bold character, x, is usually denoted by an underline, x.
A function f(x1, x2, ..., xn) of n real variables, defined on Rn, is a map from Rn, or a
subset, to R, written as f : Rn → R. Where we use bold face symbols like f or φ to refer
to functions, it means that the image under the function f(x) or φ(y) may be considered
as a vector in Rm with m ≥ 2, so f : Rn → Rm; in this course normally m = 1 or m = n.
Although the case m = 1 will not be excluded when we use a bold face symbol, we shall
continue to write f and φ where the functions are known to be real valued and not vector
valued. We shall also write without further comment f(x) = (f1(x), f2(x), ..., fm(x)),
so that the fi are the m component functions, fi : Rn → R, of f.
On the real line the distance between two points x and y is naturally defined by
|x − y|. A point x is in the open interval (a, b) if a < x < b, and is in the closed interval
[a, b] if a ≤ x ≤ b. By convention, the intervals (−∞, a), (b, ∞) and (−∞, ∞) = R are
also open intervals. Here, (−∞, a) means the set of all real numbers strictly less than
a. The symbol ∞ for ‘infinity’ is not a number, and its use here is conventional. In
the language and notation of set theory, we can write (−∞, a) = {x ∈ R : x < a}, with
similar definitions for the other two types of open interval. One reason for considering
open sets is that the natural domain of definition of some important functions is an
open set. For example, the function ln x as a function of one real variable is defined for
x ∈ (0, ∞).
The space of points Rn is an example of a linear space. Here the term linear has
the normal meaning that for every x, y in Rn , and for every real α, x + y and αx are
in Rn . Explicitly,

(x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn ) = (x1 + y1 , x2 + y2 , · · · , xn + yn )

and
α(x1 , x2 , . . . , xn ) = (αx1 , αx2 , . . . , αxn ).
Functions f : Rn → Rm may also be added and multiplied by real numbers. Therefore
a function of this type may be regarded as a vector in the vector space of functions —
though this space is not finite dimensional like Rn .
In the space Rn the distance |x| of a point x from the origin is defined by the
natural generalisation of Pythagoras’ theorem, $|x| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$.
The distance between two vectors x and y is then defined by

        |x - y| = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \cdots + (x_n - y_n)^2}.          (1.2)

This is a direct generalisation of the distance along a line, to which it collapses when
n = 1.
This distance has the three basic properties

(a) |x| ≥ 0 and |x| = 0 if and only if x = 0,
(b) |x − y| = |y − x|,                                                        (1.3)
(c) |x − y| + |y − z| ≥ |x − z|   (Triangle inequality).

In the more abstract spaces, such as the function spaces we need later, a similar concept
of a distance between elements is needed. This is named the norm; the associated distance
maps pairs of elements of the space to the non-negative real numbers and satisfies the
above three rules. In function spaces there is no natural choice of the distance function
and we shall see in chapter 4 that this flexibility can be important.
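The three properties (1.3) of the Euclidean distance can be spot-checked numerically. The short Python sketch below is our own illustration; the function name `dist` is assumed, and random points of R4 stand in for a general proof.

```python
import math
import random

def dist(x, y):
    """Euclidean distance (1.2) between two points of Rn, given as sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

random.seed(1)
for _ in range(1000):
    x, y, z = ([random.uniform(-5, 5) for _ in range(4)] for _ in range(3))
    assert dist(x, y) >= 0 and dist(x, x) == 0            # property (a)
    assert abs(dist(x, y) - dist(y, x)) < 1e-12           # property (b)
    assert dist(x, z) <= dist(x, y) + dist(y, z) + 1e-12  # (c), triangle inequality
print("properties (a)-(c) hold for all sampled triples")
```

A numerical check is no substitute for the algebraic proofs, but it shows how the three rules behave as the defining axioms of a distance, which is the role they play for norms on function spaces.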
For functions of several variables, that is, for functions defined on sets of points in
Rn , the direct generalization of open interval is an open ball.
Definition 1.1
The open ball Br(a) of radius r and centre a ∈ Rⁿ is the set of points

Br(a) = {x ∈ Rⁿ : |x − a| < r}.

12 CHAPTER 1. PRELIMINARY ANALYSIS

Thus the ball of radius 1 and centre (0, 0) in R² is the interior of the unit circle, not including the points on the circle itself. And in R, the ‘ball’ of radius 1 and centre 0 is the open interval (−1, 1). However, for R² and for Rⁿ with n > 2, open balls are not quite general enough. For example, the open square
{(x, y) ∈ R² : |x| < 1, |y| < 1}
is not a ball, but in many ways is similar. (You may know for example that it may be mapped continuously to an open ball.) It turns out that the most convenient concept is that of an open set7, which we can now define.
Definition 1.2
Open sets. A set U in Rn is said to be open if for every x ∈ U there is an open ball
Br (a) wholly contained within U which contains x.

In other words, every point in an open set lies in an open ball contained in the set. Any open ball is in many ways like the whole of the space Rⁿ — it has no isolated or missing points. Also, every open set is a union of open balls (obviously). Open sets are very convenient and important in the theory of functions, but we cannot study the reasons here. A full treatment of open sets can be found in books on topology8. Open balls are not the only type of open set, and it is not hard to show that the open square, {(x, y) ∈ R² : |x| < 1, |y| < 1}, is in fact an open set, according to the definition we gave; in a similar way it can be shown that the set {(x, y) ∈ R² : (x/a)² + (y/b)² < 1}, which is the interior of an ellipse, is an open set.
Exercise 1.1
Show that the open square is an open set by constructing explicitly, for each (x, y) in the open square {(x, y) ∈ R² : |x| < 1, |y| < 1}, a ball containing (x, y) and lying in the square.
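The construction asked for in this exercise can be sketched numerically. In the snippet below the particular radius r = 1 − max(|x|, |y|) and the sampling density are illustrative choices of mine, not taken from the text; the check samples points of the ball and confirms they stay inside the square.

```python
import math

def ball_radius(x, y):
    """One possible radius for an open ball centred at (x, y) that lies
    inside the open square |x| < 1, |y| < 1."""
    return 1.0 - max(abs(x), abs(y))

def ball_inside_square(x, y, r, samples=400):
    """Numerically check that sampled points of the ball lie in the square."""
    for k in range(samples):
        t = 2 * math.pi * k / samples
        px = x + 0.999 * r * math.cos(t)
        py = y + 0.999 * r * math.sin(t)
        if not (abs(px) < 1 and abs(py) < 1):
            return False
    return True

# The radius is positive even for points close to the boundary of the square.
for (x, y) in [(0.0, 0.0), (0.9, -0.5), (-0.99, 0.99)]:
    r = ball_radius(x, y)
    assert r > 0 and ball_inside_square(x, y, r)
```

The same formula, written out by hand, is the explicit construction the exercise requires.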

1.2.1 The Order notation


It is often useful to have a bound for the magnitude of a function that does not require exact calculation. For example, the function f(x) = √(sin(x² cosh x) − x² cos x) tends to zero at a similar rate to x² as x → 0, and this information is sometimes more helpful than detailed knowledge of the function. The order notation is designed for this purpose.
Definition 1.3
Order notation. A function f(x) is said to be of order xⁿ as x → 0 if there is a non-zero constant C such that |f(x)| < C|xⁿ| for all x in an interval around x = 0. This is written as

f(x) = O(xⁿ) as x → 0.   (1.4)

The conditional clause ‘as x → 0’ is often omitted when it is clear from the context.
More generally, this order notation can be used to compare the size of functions, f (x)
7 As with many other concepts in analysis, formulating clearly the concepts, in this case an open set, represents a major achievement.
8 See for example W A Sutherland, Introduction to Metric and Topological Spaces, Oxford University Press.

and g(x): we say that f (x) is of the order of g(x) as x → y if there is a non-zero
constant C such that |f (x)| < C|g(x)| for all x in an interval around y; more succinctly,
f (x) = O(g(x)) as x → y.
When used in the form f (x) = O(g(x)) as x → ∞, this notation means that
|f (x)| < C|g(x)| for all x > X, where X and C are positive numbers independent
of x.
This notation is particularly useful when truncating power series: thus, the series for sin x up to O(x³) is written

sin x = x − x³/3! + O(x⁵),

meaning that the remainder is smaller than C|x|⁵ as x → 0, for some C. Note that in this course the phrase “up to O(x³)” means that the x³ term is included. The following exercises provide practice in using the O-notation and exercise 1.2 proves an important result.
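The truncation just described can be checked numerically. This is a small sketch, not a proof; the tolerances and sample points are my own choices. It shows that the remainder in sin x = x − x³/3! + O(x⁵) is bounded by a constant multiple of x⁵ near 0.

```python
import math

# The ratio |remainder| / x**5 should stay bounded as x -> 0;
# in fact it tends to the next Taylor coefficient, 1/5! = 1/120.
def remainder(x):
    return abs(math.sin(x) - (x - x**3 / 6))

xs = [10.0**-k for k in range(1, 4)]       # 0.1, 0.01, 0.001
ratios = [remainder(x) / x**5 for x in xs]

assert all(r < 1 / 100 for r in ratios)    # bounded, e.g. by C = 1/100
assert abs(ratios[-1] - 1 / 120) < 1e-3    # close to the next coefficient
```

Smaller sample points are avoided deliberately: below about x = 10⁻⁴ the remainder is swamped by floating-point rounding, which is a limitation of the arithmetic, not of the O-notation.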

Exercise 1.2
Show that if f(x) = O(x²) as x → 0 then also f(x) = O(x).

Exercise 1.3
Use the binomial expansion to find the order of the following expressions as x → 0.

(a) x√(1 + x²),  (b) x/(1 + x),  (c) x^(3/2)/(1 − e^(−x)).

Exercise 1.4
Use the binomial expansion to find the order of the following expressions as x → ∞.

(a) x/(x − 1),  (b) √(4x² + x) − 2x,  (c) (x + b)^a − x^a, a > 0.

The order notation is usefully extended to functions of n real variables, f : Rⁿ → R, by using the distance |x|. Thus, we say that f(x) = O(|x|ⁿ) if there is a non-zero constant C and a small number δ such that |f(x)| < C|x|ⁿ for |x| < δ.

Exercise 1.5
(a) If f₁ = x and f₂ = y show that f₁ = O(f) and f₂ = O(f) where f(x, y) = (x² + y²)^(1/2).
(b) Show that the polynomial φ(x, y) = ax² + bxy + cy² vanishes to at least the same order as the polynomial f(x, y) = x² + y² at (0, 0). What conditions are needed for φ to vanish faster than f as √(x² + y²) → 0?

Another expression that is useful is

f(x) = o(|x|), which is shorthand for lim_{|x|→0} f(x)/|x| = 0.

Informally this means that f(x) vanishes faster than |x| as |x| → 0. More generally f = o(g) if lim_{|x|→0} |f(x)/g(x)| = 0, meaning that f(x) vanishes faster than g(x) as |x| → 0.
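The distinction between O and o can be illustrated numerically; the sample points and tolerances below are my own illustrative choices.

```python
import math

# f(x) = x**2 is o(x) as x -> 0, because f(x)/x = x -> 0; by contrast
# sin x is O(x) but *not* o(x), since sin(x)/x -> 1 rather than 0.
xs = [10.0**-k for k in range(1, 8)]

assert all(x**2 / x <= 2 * x for x in xs)             # ratio shrinks like x
assert xs[-1]**2 / xs[-1] < 1e-6                      # ratio -> 0
assert abs(math.sin(xs[-1]) / xs[-1] - 1.0) < 1e-6    # ratio -> 1, not 0
```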

1.3 Functions of a real variable


1.3.1 Introduction
In this section we introduce important ideas pertaining to real functions of a single real
variable, although some mention is made of functions of many variables. Most of the
ideas discussed should be familiar from earlier courses in elementary real analysis or
Calculus, so our discussion is brief and all exercises are optional.
The study of Real Analysis normally starts with a discussion of the real number
system and its properties. Here we assume all necessary properties of this number
system and refer the reader to any basic text if further details are required: adequate
discussion may be found in the early chapters of the texts by Whittaker and Watson 9 ,
Rudin10 and by Kolmogorov and Fomin11 .

1.3.2 Continuity and Limits


A continuous function is one whose graph has no vertical breaks: otherwise, it is dis-
continuous. The function f1 (x), depicted by the solid line in figure 1.1 is continuous
for x1 < x < x2 . The function f2 (x), depicted by the dashed line, is discontinuous at
x = c.
Figure 1.1 Figure showing examples of a continuous function, f₁(x), and a discontinuous function f₂(x).

A function f(x) is continuous at a point x = a if f(a) exists and if, given any arbitrarily small positive number ε, we can find a neighbourhood of x = a in which |f(x) − f(a)| < ε. We can express this in terms of limits: since a point a on the real line can be approached only from the left or the right, a function is continuous at a point x = a if it approaches the same value, independent of the direction. Formally we have
Definition 1.4
Continuity: a function, f, is continuous at x = a if f(a) is defined and

lim_{x→a} f(x) = f(a).

For a function of one variable, this is equivalent to saying that f(x) is continuous at x = a if f(a) is defined and the left and right-hand limits

lim_{x→a−} f(x)  and  lim_{x→a+} f(x)

9 A Course of Modern Analysis by E T Whittaker and G N Watson, Cambridge University Press.
10 Principles of Mathematical Analysis by W Rudin (McGraw-Hill).
11 Introductory Real Analysis by A N Kolmogorov and S V Fomin (Dover).

exist and are equal to f (a).


If the left and right-hand limits exist but are not equal the function is discontinuous
at x = a and is said to have a simple discontinuity at x = a.
If they both exist and are equal, but do not equal f (a), then the function is said to
have a removable discontinuity at x = a.

Quite elementary functions exist for which neither limit exists: these are also discontinuous, and said to have a discontinuity of the second kind at x = a, see Rudin (1976, page 94). An example of a function with such a discontinuity at x = 0 is

f(x) = { sin(1/x), x ≠ 0;  0, x = 0 }.
We shall have no need to consider this type of discontinuity in this course, but simple
discontinuities will arise.
A function that behaves as

|f(x + ε) − f(x)| = O(ε) as ε → 0

is continuous, though the converse is not true, a counter-example being f(x) = √|x| at x = 0.
Most functions that occur in the sciences are either continuous or piecewise continuous, which means that the function is continuous except at a discrete set of points. The Heaviside function and the related sgn function are examples of commonly occurring
piecewise continuous functions that are discontinuous. They are defined by

H(x) = { 1, x > 0;  0, x < 0 }   and   sgn(x) = { 1, x > 0;  −1, x < 0 },   (1.5)

so that sgn(x) = −1 + 2H(x). These functions are discontinuous at x = 0, where they are not normally defined. In some texts these functions are defined at x = 0; for instance H(0) may be defined to have the value 0, 1/2 or 1.
If lim_{x→c} f(x) = A and lim_{x→c} g(x) = B, then it can be shown that the following (obvious) rules are adhered to:

(a) lim_{x→c} (αf(x) + βg(x)) = αA + βB;
(b) lim_{x→c} (f(x)g(x)) = AB;
(c) lim_{x→c} f(x)/g(x) = A/B, if B ≠ 0;
(d) if lim_{x→B} f(x) = f_B then lim_{x→c} f(g(x)) = f_B.
The value of a limit is normally found by a combination of suitable re-arrangements and expansions. An example of an expansion is

lim_{x→0} sinh(ax)/x = lim_{x→0} (ax + (ax)³/3! + O(x⁵))/x = lim_{x→0} (a + O(x²)) = a.

An example of a re-arrangement, using the above rules, is

lim_{x→0} sinh(ax)/sinh(bx) = lim_{x→0} [ (sinh(ax)/x) · (x/sinh(bx)) ] = lim_{x→0} sinh(ax)/x · lim_{x→0} x/sinh(bx) = a/b,  (b ≠ 0).

Finally, we note that a function that is continuous on a closed interval is bounded above and below and attains its bounds. It is important that the interval is closed; for instance the function f(x) = x defined on the open interval 0 < x < 1 is bounded above and below, but does not attain its bounds. This example may seem trivial, but similar difficulties arise in the Calculus of Variations and are less easy to recognise.
Exercise 1.6
A function that is finite and continuous for all x is defined by

f(x) = { A/x² + x + B,  0 ≤ x ≤ a, a > 0,
       { C/x² + Dx,     a ≤ x,

where A, B, C, D and a are real numbers: if f(0) = 1 and lim_{x→∞} f(x) = 0, find these numbers.

Exercise 1.7
Find the limits of the following functions as x → 0 and w → ∞.

(a) sin(ax)/x,  (b) tan(ax)/x,  (c) sin(ax)/sin(bx),  (d) (3x + 4)/(4x + 2),  (e) (1 + z/w)^w.
For functions of two or more variables, the definition of continuity is essentially the same as for a function of one variable. A function f(x) is continuous at x = a if f(a) is defined and

lim_{x→a} f(x) = f(a).   (1.6)

Alternatively, given any ε > 0 there is a δ > 0 such that whenever |x − a| < δ, |f(x) − f(a)| < ε.
It should be noted that if f(x, y) is continuous in each variable separately, it is not necessarily continuous in both variables. For instance, consider the function

f(x, y) = { (x + y)²/(x² + y²),  x² + y² ≠ 0;  1,  x = y = 0 },

and for fixed y = β ≠ 0 the related function of x,

f(x, β) = (x + β)²/(x² + β²) = 1 + O(x) as x → 0,

and f(x, 0) = 1 for all x: for any β this function is a continuous function of x. On the line x + y = 0, however, f = 0 except at the origin, so f(x, y) is not continuous along this line. More generally, by putting x = r cos θ and y = r sin θ, −π < θ ≤ π, r ≠ 0, we can approach the origin from any angle. In this representation f = 2 sin²(θ + π/4), so on any circle round the origin f takes every value between 0 and 2. Therefore f(x, y) is not a continuous function of both x and y.
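The text's example can be probed numerically: approaching the origin along different rays gives different limiting values, so no single value of f(0, 0) can make f continuous. The sample radius and tolerances below are illustrative choices of mine.

```python
import math

# f(x, y) = (x + y)**2 / (x**2 + y**2) away from the origin; along the
# ray at angle theta the value tends to 2*sin(theta + pi/4)**2.
def f(x, y):
    return (x + y) ** 2 / (x**2 + y**2)

def radial_limit(theta, r=1e-8):
    """Value of f at a point very close to the origin on the ray theta."""
    return f(r * math.cos(theta), r * math.sin(theta))

assert abs(radial_limit(math.pi / 4) - 2.0) < 1e-9   # along x = y,  f -> 2
assert abs(radial_limit(-math.pi / 4)) < 1e-9        # along x = -y, f -> 0
assert abs(radial_limit(0.0) - 1.0) < 1e-9           # along y = 0,  f -> 1
```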
Exercise 1.8
Determine whether or not the following functions are continuous at the origin.

(a) f = 2xy/(x² + y²),  (b) f = (x² + y²)/(x² − y²),  (c) f = 2x²y/(x² + y²).

Hint use polar coordinates x = r cos θ, y = r sin θ and consider the limit r → 0.

1.3.3 Monotonic functions and inverse functions


A function is said to be monotonic on an interval if it is always increasing or always
decreasing. Simple examples are f (x) = x and f (x) = exp(−x) which are mono-
tonic increasing and monotonic decreasing, respectively, on the whole line: the function
f (x) = sin x is monotonic increasing for −π/2 < x < π/2. More precisely, we have,
Definition 1.5
Monotonic functions: A function f(x) is monotonic increasing for a < x < b if f(x₁) ≤ f(x₂) for a < x₁ < x₂ < b. A monotonic decreasing function is defined in a similar way.
If f(x₁) < f(x₂) for a < x₁ < x₂ < b then f(x) is said to be strictly monotonic (increasing) or strictly increasing; strictly decreasing functions are defined in the obvious manner.

The recognition of the intervals on which a given function is strictly monotonic is sometimes important because on these intervals the inverse function exists. For instance the function y = e^x is monotonic increasing on the whole real line, R, and its inverse is the well-known natural logarithm, x = ln y, with y on the positive real line.
In general if f(x) is continuous and strictly monotonic on a ≤ x ≤ b and y = f(x), the inverse function, x = f⁻¹(y), is continuous for f(a) ≤ y ≤ f(b) and satisfies y = f(f⁻¹(y)). Moreover, if f(x) is strictly increasing so is f⁻¹(y).
Complications occur when a function is increasing and decreasing on neighbouring intervals, for then the inverse may have two or more values. For example the function f(x) = x² is monotonic increasing for x > 0 and monotonic decreasing for x < 0: hence the relation y = x² has the two familiar inverses x = ±√y, y ≥ 0. These two inverses are often referred to as the different branches of the inverse; this idea is important because most functions are monotonic only on part of their domain of definition.
Exercise 1.9
(a) Show that y = 3a²x − x³ is strictly increasing for −a < x < a and that on this interval y increases from −2a³ to 2a³.
(b) By putting x = 2a sin φ and using the identity sin³φ = (3 sin φ − sin 3φ)/4, show that the equation becomes

y = 2a³ sin 3φ,  and hence that  x(y) = 2a sin( (1/3) sin⁻¹(y/2a³) ).

(c) Find the inverse for x > 2a. Hint put x = 2a cosh φ and use the relation cosh³φ = (cosh 3φ + 3 cosh φ)/4.
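The inverse obtained in part (b) can be verified numerically by composing it with the original cubic; the value of a and the sample points are arbitrary illustrative choices.

```python
import math

# For y = 3*a**2*x - x**3 on -a < x < a the inverse is
# x(y) = 2*a*sin( (1/3) * asin( y / (2*a**3) ) ).
a = 1.3  # an arbitrary positive value

def y_of_x(x):
    return 3 * a**2 * x - x**3

def x_of_y(y):
    return 2 * a * math.sin(math.asin(y / (2 * a**3)) / 3)

# The composition x_of_y(y_of_x(x)) should return x on the whole interval.
for x in [-0.9 * a, -0.3 * a, 0.0, 0.5 * a, 0.99 * a]:
    assert abs(x_of_y(y_of_x(x)) - x) < 1e-12
```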

1.3.4 The derivative


The notion of the derivative of a continuous function, f (x), is closely related to the
geometric idea of the tangent to a curve and to the related concept of the rate of
change of a function, so is important in the discussion of anything that changes. This
geometric idea is illustrated in figure 1.2: here P is a point with coordinates (a, f (a))
on the graph and Q is another point on the graph with coordinates (a + h, f (a + h)),
where h may be positive or negative.

Figure 1.2 Illustration showing the chord P Q and the tangent line at P.

The gradient of the chord P Q is tan φ, where φ is the angle between P Q and the x-axis, and is given by the formula

tan φ = (f(a + h) − f(a))/h.
If the graph in the vicinity of x = a is represented by a smooth line, then it is intuitively obvious that the chord P Q becomes closer to the tangent at P as h → 0; and in the limit h = 0 the chord becomes the tangent. Hence the gradient of the tangent is given by the limit

lim_{h→0} (f(a + h) − f(a))/h.

This limit, provided it exists, is named the derivative of f(x) at x = a and is commonly denoted either by f′(a) or df/dx. Thus we have the formal definition:
Definition 1.6
The derivative: A function f(x), defined on an open interval U of the real line, is differentiable for x ∈ U and has the derivative f′(x) if the limit

f′(x) = df/dx = lim_{h→0} (f(x + h) − f(x))/h   (1.7)

exists.

If the derivative exists at every point in the open interval U the function f(x) is said to be differentiable in U: in this case it may be proved that f(x) is also continuous. However, a function that is continuous at a need not be differentiable at a: indeed, it is possible to construct functions that are continuous everywhere but differentiable nowhere; such functions are encountered in the mathematical description of Brownian motion.

Combining the definition of f′(x) and the definition 1.3 of the order notation shows that a differentiable function satisfies

f(x + h) = f(x) + hf′(x) + o(h).   (1.8)

The formal definition, equation 1.7, of the derivative can be used to derive all its useful
properties, but the physical interpretation, illustrated in figure 1.2, provides a more
useful way to generalise it to functions of several variables.
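Equation 1.8 also explains why the forward difference (f(x+h) − f(x))/h approximates f′(x) with an error that shrinks with h. A small numerical sketch (the test function, point and step sizes are my own illustrative choices):

```python
import math

# For smooth f the forward difference differs from f'(x) by O(h),
# the leading error term being h*f''(x)/2.
def forward_difference(f, x, h):
    return (f(x + h) - f(x)) / h

x = 0.7
for h in [1e-2, 1e-4, 1e-6]:
    err = abs(forward_difference(math.sin, x, h) - math.cos(x))
    assert err < h   # the error is itself O(h)
```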

The tangent line to the graph y = f(x) at the point a, which we shall consider to be fixed for the moment, has slope f′(a) and passes through f(a). These two facts determine the derivative completely. The equation of the tangent line can be written in parametric form as p(h) = f(a) + f′(a)h. Conversely, given a point a, and the equation of the tangent line at that point, the derivative, in the classical sense of definition 1.6, is simply the slope, f′(a), of this line. So the information that the derivative of f at a is f′(a) is equivalent to the information that the tangent line at a has equation p(h) = f(a) + f′(a)h. Although the classical derivative, equation 1.7, is usually taken to be the fundamental concept, the equivalent concept of the tangent line at a point could be considered equally fundamental, perhaps more so, since a tangent is a more intuitive idea than the numerical value of its slope. This is the key to successfully defining the derivative of functions of more than one variable.
From the definition 1.6 the following useful results follow. If f(x) and g(x) are differentiable on the same open interval and α and β are constants then

(a) d/dx [αf(x) + βg(x)] = αf′(x) + βg′(x),
(b) d/dx [f(x)g(x)] = f′(x)g(x) + f(x)g′(x),   (the product rule)
(c) d/dx [f(x)/g(x)] = (f′(x)g(x) − f(x)g′(x))/g(x)²,  g(x) ≠ 0.   (the quotient rule)

We leave the proof of these results to the reader, but note that the differential of 1/g(x)
follows almost trivially from the definition 1.6, exercise 1.14, so that the third expression
is a simple consequence of the second.
The other important result is the chain rule concerning the derivative of composite functions. Suppose that f(x) and g(x) are two differentiable functions and a third is formed by the composition

F(x) = f(g(x)), sometimes written as F = f ◦ g,

which we assume to exist. Then the derivative of F(x) can be shown, as in exercise 1.18, to be given by

dF/dx = (df/dg) × (dg/dx)  or  F′(x) = f′(g)g′(x).   (1.9)

This formula is named the chain rule. Note how the prime-notation is used: it denotes the derivative of the function with respect to the argument shown, not necessarily the original independent variable, x. Thus f′(g) or f′(g(x)) does not mean the derivative of F(x); it means the derivative f′(x) with x replaced by g or g(x).
A simple example should make this clear: suppose f(x) = sin x and g(x) = 1/x, x > 0, so F(x) = sin(1/x). The chain rule gives

dF/dx = (d/dg)(sin g) × (d/dx)(1/x) = cos g × (−1/x²) = −(1/x²) cos(1/x).
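The chain-rule result for this example can be checked against a finite difference; the sample point and tolerance are my own illustrative choices.

```python
import math

# F(x) = sin(1/x) has, by the chain rule, dF/dx = -cos(1/x)/x**2.
def F(x):
    return math.sin(1.0 / x)

def dF_exact(x):
    return -math.cos(1.0 / x) / x**2

x, h = 1.7, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference, error O(h**2)
assert abs(numeric - dF_exact(x)) < 1e-8
```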
The derivatives of simple functions, polynomials and trigonometric functions for instance, can be deduced from first principles using the definition 1.6: the three rules given above, and the chain rule, can then be used to find the derivative of any function described by finite combinations of these simple functions. A few exercises will make this process clear.

Exercise 1.10
Find the derivative of the following functions:

(a) √((a − x)(b + x)),  (b) √(a sin²x + b cos²x),  (c) cos(x³) cos x,  (d) x^x.

Exercise 1.11
If y = sin x for −π/2 ≤ x ≤ π/2 show that dx/dy = 1/√(1 − y²).

Exercise 1.12
(a) If y = f(x) has the inverse x = g(y), show that f′(x)g′(y) = 1, that is

dx/dy = (dy/dx)⁻¹.

(b) Express d²x/dy² in terms of dy/dx and d²y/dx².

Clearly, if f′(x) is differentiable, it may be differentiated to obtain the second derivative, which is denoted by

f″(x)  or  d²f/dx².

This process can be continued to obtain the functions

f, df/dx, d²f/dx², d³f/dx³, · · · , d^(n−1)f/dx^(n−1), d^n f/dx^n, · · · ,

where each member of the sequence is the derivative of the preceding member,

d^p f/dx^p = d/dx ( d^(p−1)f/dx^(p−1) ),  p = 2, 3, · · · .

The prime notation becomes rather clumsy after the second or third derivative, so the most common alternative is

d^p f/dx^p = f^(p)(x),  p ≥ 2,

with the conventions f^(1)(x) = f′(x) and f^(0)(x) = f(x). Care is needed to distinguish between the pth derivative, f^(p)(x), and the pth power, denoted by f(x)^p and sometimes f^p(x) — the latter notation should be avoided if there is any danger of confusion.
Functions for which the nth derivative is continuous are said to be n-differentiable and to belong to class C^n: the notation C^n(U) means the first n derivatives are continuous on the interval U; the notation C^n(a, b) or C^n[a, b], with obvious meaning, may also be used. The term smooth function describes functions belonging to C^∞, that is functions, such as sin x, having all derivatives; we shall, however, use the term sufficiently smooth for functions that are sufficiently differentiable for all subsequent analysis to work, when more detail is deemed unimportant.
In the following exercises some important, but standard, results are derived.

Exercise 1.13
If f(x) is an even (odd) function, show that f′(x) is an odd (even) function.

Exercise 1.14
Show, from first principles using the limit 1.7, that

d/dx (1/f(x)) = −f′(x)/f(x)²,

and that the product rule is true.

Exercise 1.15
Leibniz’s rule
If h(x) = f(x)g(x) show that

h″(x) = f″(x)g(x) + 2f′(x)g′(x) + f(x)g″(x),
h^(3)(x) = f^(3)(x)g(x) + 3f″(x)g′(x) + 3f′(x)g″(x) + f(x)g^(3)(x),

and use induction to derive Leibniz’s rule

h^(n)(x) = Σ_{k=0}^{n} (n choose k) f^(n−k)(x) g^(k)(x),

where the binomial coefficients are given by (n choose k) = n!/(k! (n − k)!).
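Leibniz's rule can be spot-checked for a particular product whose derivatives are all known in closed form; the choice f(x) = e^(2x), g(x) = sin x and n = 3 below is an illustrative one of mine.

```python
from math import comb, exp, sin, cos

def fk(x, k):
    """k-th derivative of exp(2x)."""
    return 2**k * exp(2 * x)

def gk(x, k):
    """k-th derivative of sin(x), which cycles with period 4."""
    return [sin, cos, lambda t: -sin(t), lambda t: -cos(t)][k % 4](x)

def h3_leibniz(x):
    """Third derivative of f*g assembled with Leibniz's rule."""
    return sum(comb(3, k) * fk(x, 3 - k) * gk(x, k) for k in range(4))

def h3_direct(x):
    """The same derivative computed by hand: exp(2x)*(2*sin x + 11*cos x)."""
    return exp(2 * x) * (2 * sin(x) + 11 * cos(x))

for x in [-1.0, 0.0, 0.4, 2.0]:
    assert abs(h3_leibniz(x) - h3_direct(x)) < 1e-9 * max(1.0, abs(h3_direct(x)))
```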

Exercise 1.16
Show that d/dx ln(f(x)) = f′(x)/f(x) and hence that if

p(x) = f₁(x)f₂(x) · · · fₙ(x)  then  p′/p = f₁′/f₁ + f₂′/f₂ + · · · + fₙ′/fₙ,

provided p(x) ≠ 0. Note that this gives an easier method of differentiating products of three or more factors than repeated use of the product rule.

Exercise 1.17
If the elements of a determinant D(x) are differentiable functions of x,

D(x) = | f(x)  g(x) |
       | φ(x)  ψ(x) |

show that

D′(x) = | f′(x)  g′(x) | + | f(x)   g(x)  |
        | φ(x)   ψ(x) |    | φ′(x)  ψ′(x) |

Extend this result to third-order determinants.

1.3.5 Mean Value Theorems


If a function f(x) is sufficiently smooth for all points inside the interval a < x < b, its graph is a smooth curve12 starting at the point A = (a, f(a)) and ending at B = (b, f(b)), as shown in figure 1.3.

Figure 1.3 Diagram illustrating Cauchy’s form of the mean value theorem.

From this figure it seems plausible that the tangent to the curve must be parallel to the chord AB at least once. That is,

f′(x) = (f(b) − f(a))/(b − a)  for some x in the interval a < x < b.   (1.10)

Alternatively this may be written in the form

f(b) = f(a) + hf′(a + θh),  h = b − a,   (1.11)

where θ is a number in the interval 0 < θ < 1, and is normally unknown. This relation is used frequently throughout the course. Note that equation 1.11 shows that between zeros of a differentiable function there is at least one point at which the derivative is zero.
Equation 1.10 can be proved and is enshrined in the following theorem

Theorem 1.1
The Mean Value Theorem (Cauchy’s form). If f(x) and g(x) are real and differentiable for a ≤ x ≤ b, then there is a point u inside the interval at which

[f(b) − f(a)] g′(u) = [g(b) − g(a)] f′(u),  a < u < b.   (1.12)

By putting g(x) = x, equation 1.10 follows.
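For a concrete function the point whose existence equation 1.10 asserts can be located explicitly; the example f(x) = x³ on [0, 1] below is an illustrative choice of mine, with the point found by bisection.

```python
import math

# For f(x) = x**3 on [0, 1] the chord slope is (f(1) - f(0))/(1 - 0) = 1,
# and f'(u) = 3*u**2 = 1 has the solution u = 1/sqrt(3) in (0, 1).
def f(x):
    return x**3

a, b = 0.0, 1.0
slope = (f(b) - f(a)) / (b - a)

lo, hi = a, b   # 3*lo**2 - slope < 0 < 3*hi**2 - slope
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if 3 * mid**2 - slope < 0:
        lo = mid
    else:
        hi = mid

assert a < lo < b
assert abs(lo - 1 / math.sqrt(3)) < 1e-12
```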

A similar idea may be applied to integrals. Figure 1.4 shows a typical continuous function, f(x), which attains its smallest and largest values, S and L respectively, on the interval a ≤ x ≤ b.

12 A smooth curve is one along which its tangent changes direction continuously, without abrupt changes.

Figure 1.4 Diagram showing the upper and lower bounds of f(x) used to bound the integral.

It is clear that the area under the curve is greater than (b − a)S and less than (b − a)L, that is

(b − a)S ≤ ∫_a^b dx f(x) ≤ (b − a)L.

Because f(x) is continuous it follows that

∫_a^b dx f(x) = (b − a)f(ξ)  for some ξ ∈ [a, b].   (1.13)

This observation is made rigorous in the following theorem.


Theorem 1.2
The Mean Value Theorem (integral form). If, on the closed interval a ≤ x ≤ b, f(x) is continuous and φ(x) ≥ 0, then there is a ξ satisfying a ≤ ξ ≤ b such that

∫_a^b dx f(x)φ(x) = f(ξ) ∫_a^b dx φ(x).   (1.14)

If φ(x) = 1, relation 1.13 is regained.
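Relation 1.13 can be illustrated concretely: for f(x) = x² on [0, 1] the mean value of f is 1/3, so ξ = 1/√3. The midpoint-rule estimate and tolerances below are my own illustrative choices.

```python
import math

# integral_a^b f(x) dx = (b - a)*f(xi) for some xi in [a, b];
# here f(x) = x**2 on [0, 1], the integral is 1/3 and xi = 1/sqrt(3).
def f(x):
    return x**2

a, b, n = 0.0, 1.0, 20_000
width = (b - a) / n
integral = sum(f(a + (k + 0.5) * width) for k in range(n)) * width

mean_value = integral / (b - a)
xi = math.sqrt(mean_value)          # solve f(xi) = mean value

assert abs(integral - 1 / 3) < 1e-8
assert a <= xi <= b
assert abs(xi - 1 / math.sqrt(3)) < 1e-8
```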

Exercise 1.18
The chain rule
In this exercise the Mean Value Theorem is used to derive the chain rule, equation 1.9, for the derivative of F(x) = f(g(x)).
Use the mean value theorem to show that

F(x + h) − F(x) = f( g(x) + hg′(x + hθ) ) − f(g(x))

and that

f( g(x) + hg′(x + hθ) ) = f(g(x)) + hg′(x + hθ) f′(g + hφg′),

where 0 < θ, φ < 1. Hence show that

(F(x + h) − F(x))/h = f′(g + hφg′) g′(x + hθ),

and by taking the limit h → 0 derive equation 1.9.

Exercise 1.19
Use the integral form of the mean value theorem, equation 1.13, to evaluate the limits

(a) lim_{x→0} (1/x) ∫_0^x dt √(4 + 3t³),  (b) lim_{x→1} 1/(x − 1)³ ∫_1^x dt ln(3t − 3t² + t³).

1.3.6 Partial Derivatives


Here we consider functions of two or more variables, in order to introduce the idea of a partial derivative. If f(x, y) is a function of the two independent variables x and y, meaning that changes in one do not affect the other, then we may form the partial derivative of f(x, y) with respect to either x or y using a minor modification of the definition 1.6 (page 18).
Definition 1.7
The partial derivative of a function f(x, y) of two variables with respect to the first variable x is

∂f/∂x = fx(x, y) = lim_{h→0} (f(x + h, y) − f(x, y))/h.

In the computation of fx the variable y is unchanged.
Similarly, the partial derivative with respect to the second variable y is

∂f/∂y = fy(x, y) = lim_{k→0} (f(x, y + k) − f(x, y))/k.

In the computation of fy the variable x is unchanged.

We use the conventional notation, ∂f/∂x, to denote the partial derivative with respect to x, which is formed by fixing y and using the rules of ordinary calculus for the derivative with respect to x. The suffix notation, fx(x, y), is used to denote the same function: here the suffix x shows the variable being differentiated, and it has the advantage that when necessary it can be used in the form fx(a, b) to indicate that the partial derivative fx is being evaluated at the point (a, b).
In practice the evaluation of partial derivatives is exactly the same as for ordinary derivatives and the same rules apply. Thus if f(x, y) = x e^y ln(2x + 3y) then the partial derivatives with respect to x and y are, respectively,

∂f/∂x = e^y ln(2x + 3y) + 2x e^y/(2x + 3y)  and  ∂f/∂y = x e^y ln(2x + 3y) + 3x e^y/(2x + 3y).
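The two partial derivatives just quoted can be checked against central finite differences in the corresponding variable, with the other variable held fixed; the sample point and tolerances are my own illustrative choices.

```python
import math

# f(x, y) = x*exp(y)*ln(2x + 3y), with the partial derivatives from the text.
def f(x, y):
    return x * math.exp(y) * math.log(2 * x + 3 * y)

def fx(x, y):
    return math.exp(y) * math.log(2 * x + 3 * y) + 2 * x * math.exp(y) / (2 * x + 3 * y)

def fy(x, y):
    return x * math.exp(y) * math.log(2 * x + 3 * y) + 3 * x * math.exp(y) / (2 * x + 3 * y)

x, y, h = 1.2, 0.5, 1e-6
assert abs((f(x + h, y) - f(x - h, y)) / (2 * h) - fx(x, y)) < 1e-7
assert abs((f(x, y + h) - f(x, y - h)) / (2 * h) - fy(x, y)) < 1e-7
```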

Exercise 1.20
(a) If u = x² sin(ln y) compute ux and uy.
(b) If r² = x² + y² show that ∂r/∂x = x/r and ∂r/∂y = y/r.

The partial derivatives are also functions of x and y, so may be differentiated again. Thus we have

∂/∂x (∂f/∂x) = ∂²f/∂x² = fxx(x, y)  and  ∂/∂y (∂f/∂y) = ∂²f/∂y² = fyy(x, y).   (1.15)

But now we also have the mixed derivatives

∂/∂x (∂f/∂y)  and  ∂/∂y (∂f/∂x).   (1.16)

Except in special circumstances the order of differentiation is irrelevant, so we obtain the mixed derivative rule

∂/∂x (∂f/∂y) = ∂/∂y (∂f/∂x) = ∂²f/∂x∂y = ∂²f/∂y∂x.   (1.17)

Using the suffix notation the mixed derivative rule is fxy = fyx. A sufficient condition for this to hold is that both fxy and fyx are continuous functions of (x, y), see equation 1.6 (page 16).
Similarly, differentiating p times with respect to x and q times with respect to y, in any order, gives the same nth order derivative,

∂^n f/∂x^p ∂y^q,  where n = p + q,
provided all the nth derivatives are continuous.
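The mixed derivative rule can be illustrated numerically for a specific smooth function; the choice f(x, y) = sin(xy), the sample point and the tolerances are my own.

```python
import math

# For f(x, y) = sin(x*y) the first partials are f_x = y*cos(x*y) and
# f_y = x*cos(x*y); differencing each in the *other* variable should give
# the same mixed derivative, cos(x*y) - x*y*sin(x*y).
def fx(x, y):
    return y * math.cos(x * y)

def fy(x, y):
    return x * math.cos(x * y)

x, y, h = 0.8, 1.1, 1e-6
f_xy = (fx(x, y + h) - fx(x, y - h)) / (2 * h)   # d/dy of f_x
f_yx = (fy(x + h, y) - fy(x - h, y)) / (2 * h)   # d/dx of f_y
exact = math.cos(x * y) - x * y * math.sin(x * y)

assert abs(f_xy - f_yx) < 1e-8
assert abs(f_xy - exact) < 1e-8
```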

Exercise 1.21
If Φ(x, y) = exp(−x²/y) show that Φ satisfies the equations

∂Φ/∂x = −2xΦ/y  and  ∂²Φ/∂x² = 4 ∂Φ/∂y − 2Φ/y.

Exercise 1.22
Show that u = x² sin(ln y) satisfies the equation 2y² ∂²u/∂y² + 2y ∂u/∂y + x ∂u/∂x = 0.

The generalisation of these ideas to functions of the n variables x = (x₁, x₂, . . . , xₙ) is straightforward: the partial derivative of f(x) with respect to xₖ is defined to be

∂f/∂xₖ = lim_{h→0} [f(x₁, x₂, · · · , xₖ₋₁, xₖ + h, xₖ₊₁, · · · , xₙ) − f(x₁, x₂, . . . , xₙ)]/h.   (1.18)

All other properties of the derivatives are the same as in the case of two variables; in particular, for the mth derivative the order of differentiation is immaterial provided all mth derivatives are continuous.
For a function of a single variable, f(x), the existence of the derivative, f′(x), implies that f(x) is continuous. For functions of two or more variables the existence of the partial derivatives does not guarantee continuity.

The total derivative


If f (x1 , x2 , . . . , xn ) is a function of n variables and if each of these variables is a function
of the single variable t, we may form a new function of t with the formula

F (t) = f (x1 (t), x2 (t), · · · , xn (t)). (1.19)



Geometrically, F(t) represents the value of f(x) on a curve C defined parametrically by the functions (x₁(t), x₂(t), · · · , xₙ(t)). The derivative of F(t) is given by the relation

dF/dt = Σ_{k=1}^{n} (∂f/∂xₖ)(dxₖ/dt),   (1.20)

so F′(t) is the rate of change of f(x) along C. Normally, we write f(t) rather than use a different symbol F(t), and the left-hand side of the above equation is written df/dt. This derivative is named the total derivative of f. The proof of this when n = 2 and x′ and y′ do not vanish near (x, y) is sketched below; the generalisation to larger n is straightforward. If F(t) = f(x(t), y(t)) then

F(t + ε) = f(x(t + ε), y(t + ε))
         = f( x(t) + εx′(t + εθ), y(t) + εy′(t + εφ) ),  0 < θ, φ < 1,

where we have used the mean value theorem, equation 1.11. Write the right-hand side in the form

f(x + εx′, y + εy′) = [f(x + εx′, y + εy′) − f(x, y + εy′)] + [f(x, y + εy′) − f(x, y)] + f(x, y)

so that

(F(t + ε) − F(t))/ε = [ (f(x + εx′, y + εy′) − f(x, y + εy′)) / (εx′) ] x′ + [ (f(x, y + εy′) − f(x, y)) / (εy′) ] y′.

Thus, on taking the limit as ε → 0 we have

dF/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt).

This result remains true if either or both x′ = 0 or y′ = 0, but then more care is needed with the proof.
Equation 1.20 is used in chapter 4 to derive one of the most important results in
the course: if the dependence of x upon t is linear and F (t) has the form

F (t) = f (x + th) = f (x1 + th1 , x2 + th2 , · · · , xn + thn )

where the vector h is constant and the variable xk has been replaced by xk + thk , for
d
all k. Since dt (xk + thk ) = hk , equation 1.20 becomes
n
dF X ∂f
= hk . (1.21)
dt ∂xk
k=1

This result will also be used in section 1.3.9 to derive the Taylor series for several
variables.
A variant of equation 1.19, which frequently occurs in the Calculus of Variations, is the case where f(x) depends explicitly upon the variable t, so this equation becomes

F(t) = f(t, x₁(t), x₂(t), · · · , xₙ(t))

and then equation 1.20 acquires an additional term,

dF/dt = ∂f/∂t + Σ_{k=1}^{n} (∂f/∂xₖ)(dxₖ/dt).   (1.22)

For an example we apply this formula to the function

f(t, x, y) = x sin(yt)  with  x = e^t and y = e^(−2t),

so

F(t) = f(t, e^t, e^(−2t)) = e^t sin(t e^(−2t)).

Equation 1.22 gives

dF/dt = ∂f/∂t + (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)
      = xy cos(yt) + e^t sin(yt) − 2xt cos(yt) e^(−2t),

which can be expressed in terms of t only,

dF/dt = (1 − 2t) e^(−t) cos(t e^(−2t)) + e^t sin(t e^(−2t)).

The same expression can also be obtained by direct differentiation of F(t) = e^t sin(t e^(−2t)).
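The worked example can be verified numerically by comparing the derivative found from equation 1.22 with a finite difference of F(t) itself; the sample point and tolerance are my own illustrative choices.

```python
import math

# F(t) = exp(t)*sin(t*exp(-2t)); equation 1.22 gives
# dF/dt = (1 - 2t)*exp(-t)*cos(t*exp(-2t)) + exp(t)*sin(t*exp(-2t)).
def F(t):
    return math.exp(t) * math.sin(t * math.exp(-2 * t))

def dF(t):
    u = t * math.exp(-2 * t)
    return (1 - 2 * t) * math.exp(-t) * math.cos(u) + math.exp(t) * math.sin(u)

t, h = 0.9, 1e-6
numeric = (F(t + h) - F(t - h)) / (2 * h)   # central difference
assert abs(numeric - dF(t)) < 1e-8
```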


The right-hand sides of equations 1.20 and 1.22 depend upon both x and t, but
because x depends upon t often these expressions are written in terms of t only. In the
Calculus of Variations this is usually not helpful because the dependence of both x and
t, separately, is important: for instance we often require expressions like

d/dt(∂F/∂x_1)   and   ∂/∂x_1(dF/dt).

The second of these expressions requires some clarification because dF/dt contains the derivatives x_k′. Thus

∂/∂x_1(dF/dt) = ∂/∂x_1 ( ∂f/∂t + Σ_{k=1}^{n} (∂f/∂x_k)(dx_k/dt) ).

Since x_k′(t) is independent of x_1 for all k, this becomes

∂/∂x_1(dF/dt) = ∂²f/∂x_1∂t + Σ_{k=1}^{n} (∂²f/∂x_1∂x_k)(dx_k/dt)
             = d/dt(∂F/∂x_1),

the last line being a consequence of the mixed derivative rule.

Exercise 1.23
If f (t, x, y) = xy − ty 2 and x = t2 , y = t3 show that
df dx dy
= −y 2 + y + (x − 2ty) = t4 (5 − 7t2 ),
dt dt dt
28 CHAPTER 1. PRELIMINARY ANALYSIS

and that

∂/∂y(df/dt) = dx/dt − 2y − 2t dy/dt = 2t(1 − 4t²),

d/dt(∂f/∂y) = d/dt(x − 2ty) = dx/dt − 2y − 2t dy/dt = 2t(1 − 4t²).

Exercise 1.24

If F = √(1 + x_1 x_2), and x_1 and x_2 are functions of t, show by direct calculation of each expression that

∂/∂x_1(dF/dt) = d/dt(∂F/∂x_1) = x_2′/(2√(1 + x_1 x_2)) − x_2(x_1′x_2 + x_1 x_2′)/(4(1 + x_1 x_2)^{3/2}).

Exercise 1.25
Euler’s formula for homogeneous functions
(a) A function f (x, y) is said to be homogeneous with degree p in x and y if it has
the property f (λx, λy) = λp f (x, y), for any constant λ and real number p. For
such a function prove Euler’s formula:

pf (x, y) = xfx (x, y) + yfy (x, y).

Hint use the total derivative formula 1.20 and differentiate with respect to λ.
(b) Find the equivalent result for homogeneous functions of n variables that satisfy
f (λx) = λp f (x).
(c) Show that if f(x_1, x_2, · · · , x_n) is a homogeneous function of degree p, then each of the partial derivatives, ∂f/∂x_k, k = 1, 2, · · · , n, is a homogeneous function of degree p − 1.
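Euler's formula in part (a) is easy to test numerically. In the sketch below (our own illustration; the sample function f(x, y) = x³ + xy², which is homogeneous of degree p = 3, and the test points are arbitrary choices) the partial derivatives are estimated by central differences:

```python
def partial(f, x, i, d=1e-6):
    # central-difference estimate of the partial derivative df/dx_i at the point x
    xp, xm = list(x), list(x)
    xp[i] += d
    xm[i] -= d
    return (f(xp) - f(xm)) / (2 * d)

# f(x, y) = x**3 + x*y**2 satisfies f(c*x, c*y) = c**3 * f(x, y)
f = lambda v: v[0]**3 + v[0] * v[1]**2
p = 3
for point in ([1.0, 2.0], [0.5, -1.5]):
    lhs = p * f(point)
    rhs = sum(point[i] * partial(f, point, i) for i in range(2))
    assert abs(lhs - rhs) < 1e-6   # p*f = x*f_x + y*f_y, Euler's formula
```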

1.3.7 Implicit functions


An equation of the form f (x, y) = 0, where f is a suitably well behaved function of
both x and y, can define a curve in the Cartesian plane, as illustrated in figure 1.5.

Figure 1.5 Diagram showing a typical curve defined by an equation of the form f(x, y) = 0.

For some values of x the equation f (x, y) = 0 can be solved to yield one or more real
values of y, which will give one or more functions of x. For instance the equation
x² + y² − 1 = 0 defines a circle in the plane and for each x in |x| < 1 there are two values of y, giving the two functions y(x) = ±√(1 − x²). A more complicated example
is the equation x − y + sin(xy) = 0, which cannot be rearranged to express one variable
in terms of the other.
Consider the smooth curve sketched in figure 1.5. On a segment in which the curve
is not parallel to the y-axis the equation f (x, y) = 0 defines a function y(x). Such a
function is said to be defined implicitly. The same equation will also define x(y), that
is x as a function of y, provided the segment does not contain a point where the curve
is parallel to the x-axis. This result, inferred from the picture, is a simple example of
the implicit function theorem stated below.
Implicitly defined functions are important because they occur frequently as solutions
of differential equations, see exercise 1.29, but there are few, if any, general rules that
help understand them. It is, however, possible to obtain relatively simple expressions
for the first derivatives, y 0 (x) and x0 (y).
We assume that y(x) exists and is differentiable, as seems reasonable from figure 1.5,
so F (x) = f (x, y(x)) is a function of x only and we may use the chain rule 1.22 to
differentiate with respect to x. This gives
dF/dx = ∂f/∂x + (∂f/∂y)(dy/dx).

On the curve defined by f(x, y) = 0, F′(x) = 0 and hence

∂f/∂x + (∂f/∂y)(dy/dx) = 0   or   dy/dx = −f_x/f_y.   (1.23)
Similarly, if x(y) exists and is differentiable a similar analysis using y as the independent
variable gives
(∂f/∂x)(dx/dy) + ∂f/∂y = 0   or   dx/dy = −f_y/f_x.   (1.24)
This result is encapsulated in the Implicit Function Theorem which gives sufficient
conditions for an equation of the form f (x, y) = 0 to have a ‘solution’ y(x) satisfying
f (x, y(x)) = 0. A restricted version of it is given here.
Theorem 1.3
Implicit Function Theorem: Suppose that f : U → R is a function with continuous
partial derivatives defined in an open set U ⊆ R2 . If there is a point (a, b) ∈ U for
which f (a, b) = 0 and fy (a, b) ≠ 0, then there are open intervals I = (x1 , x2 ) and
J = (y1 , y2 ) such that (a, b) lies in the rectangle I × J and for every x ∈ I, f (x, y) = 0
determines exactly one value y(x) ∈ J for which f (x, y(x)) = 0. The function y : I → J
is continuous, differentiable, with the derivative given by equation 1.23.
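The circle example above gives a concrete check of equation 1.23. The sketch below (our own illustration; the point x = 0.6 is an arbitrary choice) computes dy/dx = −f_x/f_y with finite-difference partial derivatives and compares it with the derivative of the explicit solution y(x) = √(1 − x²):

```python
import math

def implicit_dydx(f, x, y, d=1e-6):
    # dy/dx = -f_x/f_y, equation 1.23, with central-difference partials
    fx = (f(x + d, y) - f(x - d, y)) / (2 * d)
    fy = (f(x, y + d) - f(x, y - d)) / (2 * d)
    return -fx / fy

# the circle x**2 + y**2 - 1 = 0 from the text; take the upper branch
f = lambda x, y: x**2 + y**2 - 1
x = 0.6
y = math.sqrt(1 - x**2)                 # = 0.8
explicit = -x / math.sqrt(1 - x**2)     # derivative of y(x) = sqrt(1 - x**2)
assert abs(implicit_dydx(f, x, y) - explicit) < 1e-8
```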

Exercise 1.26
In the case f(x, y) = y − g(x) show that equations 1.23 and 1.24 lead to the relation

dx/dy = (dy/dx)⁻¹.

Exercise 1.27
If ln(x2 + y 2 ) = 2 tan−1 (y/x) find y 0 (x).

Exercise 1.28
If x − y + sin(xy) = 0 determine the values of y 0 (0) and y 00 (0).

Exercise 1.29
Show that the differential equation

dy/dx = (y − a²x)/(y + x),   y(1) = A > 0,

has a solution defined by the equation

(1/2) ln(a²x² + y²) + (1/a) tan⁻¹(y/(ax)) = B   where   B = (1/2) ln(a² + A²) + (1/a) tan⁻¹(A/a).

Hint the equation may be put in separable form by defining a new dependent
variable v = y/x.

The implicit function theorem can be generalised to deal with the set of functions

fk (x, t) = 0, k = 1, 2, · · · , n, (1.25)

where x = (x1 , x2 , . . . , xn ) and t = (t1 , t2 , . . . , tm ). These n equations have a unique


solution for each xk in terms of t, xk = gk (t), k = 1, 2, · · · , n, in the neighbourhood of
(x_0, t_0) provided that at this point the derivatives ∂f_j/∂x_k exist and that the determinant

        | ∂f_1/∂x_1   ∂f_1/∂x_2   · · ·   ∂f_1/∂x_n |
        | ∂f_2/∂x_1   ∂f_2/∂x_2   · · ·   ∂f_2/∂x_n |
    J = |     ..          ..                  ..     |        (1.26)
        | ∂f_n/∂x_1   ∂f_n/∂x_2   · · ·   ∂f_n/∂x_n |

is not zero. Furthermore all the functions gk (t) have continuous first derivatives. The
determinant J is named the Jacobian determinant or, more usually, the Jacobian. It is
often helpful to use either of the following notations for the Jacobian,

J = ∂f/∂x   or   J = ∂(f_1, f_2, . . . , f_n)/∂(x_1, x_2, . . . , x_n).   (1.27)

Exercise 1.30
Show that the equations x = r cos θ, y = r sin θ can be inverted to give functions
r(x, y) and θ(x, y) in every open set of the plane that does not include the origin.

1.3.8 Taylor series for one variable


The Taylor series is a method of representing a given sufficiently well behaved function
in terms of an infinite power series, defined in the following theorem.
Theorem 1.4
Taylor’s Theorem: If f (x) is a function defined on x1 ≤ x ≤ x2 such that f (n) (x) is
continuous for x1 ≤ x ≤ x2 and f (n+1) (x) exists for x1 < x < x2 , then if a ∈ [x1 , x2 ]
for every x ∈ [x1 , x2 ]

f(x) = f(a) + (x − a)f′(a) + ((x − a)²/2!) f″(a) + · · · + ((x − a)ⁿ/n!) f⁽ⁿ⁾(a) + R_{n+1}.   (1.28)
The remainder term, Rn+1 , can be expressed in the form

R_{n+1} = ((x − a)^{n+1}/(n + 1)!) f^{(n+1)}(a + θh)   for some 0 < θ < 1 and h = x − a.   (1.29)

If all derivatives of f (x) are continuous for x1 ≤ x ≤ x2 , and if the remainder term
Rn → 0 as n → ∞ in a suitable manner we may take the limit to obtain the infinite
series

f(x) = Σ_{k=0}^{∞} ((x − a)^k/k!) f^{(k)}(a).   (1.30)

The infinite series 1.30 is known as Taylor’s series, and the point x = a the point of
expansion. A similar series exists when x takes complex values.
Care is needed when taking the limit of 1.28 as n → ∞, because there are cases
when the infinite series on the right-hand side of equation 1.30 does not equal f (x).
If, however, the Taylor series converges to f (x) at x = ξ then for any x closer
to a than ξ, that is |x − a| < |ξ − a|, the series converges to f (x). This caveat is
necessary because of the strange example g(x) = exp(−1/x2 ) for which all derivatives
are continuous and are zero at x = 0; for this function the Taylor series about x = 0
can be shown to exist, but for all x it converges to zero rather than g(x). This means
that for any well behaved function, f (x) say, with a Taylor series that converges to
f (x) a different function, f (x) + g(x) can be formed whose Taylor series converges, but
to f (x) not f (x) + g(x). This strange behaviour is not uncommon in functions arising
from physical problems; however, it is ignored in this course and we shall assume that
the Taylor series derived from a function converges to it in some interval.
The series 1.30 was first published by Brook Taylor (1685 – 1731) in 1715: the result
obtained by putting a = 0 was discovered by Stirling (1692 – 1770) in 1717 but first
published by Maclaurin (1698 – 1746) in 1742. With a = 0 this series is therefore often
known as Maclaurin’s series.
In practice, of course, it is usually impossible to sum the infinite series 1.30, so it is
necessary to truncate it at some convenient point and this requires knowledge of how,
or indeed whether, the series converges to the required value. Truncation gives rise to
the Taylor polynomials, with the order-n polynomial given by
f(x) ≈ Σ_{k=0}^{n} ((x − a)^k/k!) f^{(k)}(a).   (1.31)

The series 1.30 is an infinite series of the functions (x − a)n f (n) (a)/n! and summing
these requires care. A proper understanding of this process requires careful definitions
of convergence which may be found in any text book on analysis. For our purposes,
however, it is sufficient to note that in most cases there is a real number, rc , named the
radius of convergence, such that if |x − a| < rc the infinite series is well mannered and
behaves rather like a finite sum: the value of rc can be infinite, in which case the series
converges for all x.
If the Taylor series of f (x) and g(x) have radii of convergence rf and rg respectively,
then the Taylor series of αf (x) + βg(x), for constants α and β, and of f (x)a g(x)b , for
positive constants a and b, exist and have the radius of convergence min(rf , rg ). The
Taylor series of the compositions f (g(x)) and g(f (x)) may also exist, but their radii of
convergence depend upon the behaviour of g and f respectively. Also Taylor series may
be integrated and differentiated to give the Taylor series of the integral and derivative
of the original function, and with the same radius of convergence.
Formally, the nth Taylor polynomial of a function is formed from its first n deriva-
tives at the point of expansion. In practice, however, the calculation of high-order
derivatives is very awkward and it is often easier to proceed by other means, which rely
upon ingenuity. A simple example is the Taylor series of ln(1 + tanh x), to fourth order;
this is most easily obtained using the known Taylor expansions of ln(1 + z) and tanh x,

ln(1 + z) = z − z²/2 + z³/3 − z⁴/4 + O(z⁵)   and   tanh x = x − x³/3 + 2x⁵/15 + O(x⁷),

and then put z = tanh x retaining only the appropriate order of the series expansion. Thus

ln(1 + tanh x) = [x − x³/3 + O(x⁵)] − (x²/2)[1 − x²/3 + · · ·]² + x³/3 − x⁴/4 + O(x⁵)
              = x − x²/2 + x⁴/12 + O(x⁵).
This method is far easier than computing the four required derivatives of the original
function.
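Since ln(1 + tanh x) has the exact value x − ln(cosh x), the fourth-order polynomial obtained above is easy to test. The sketch below (our own check; the sample points and tolerance are arbitrary choices) confirms that the error is of fifth order at worst:

```python
import math

def approx(x):
    # fourth-order Taylor polynomial of ln(1 + tanh x) derived in the text
    return x - x**2 / 2 + x**4 / 12

for x in (0.05, 0.1, 0.2):
    exact = math.log(1 + math.tanh(x))
    assert abs(approx(x) - exact) < abs(x)**5   # error is O(x^5) at worst
```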
For |x − a| > rc the infinite sum 1.30 does not exist. It follows that knowledge of
rc is important. It can be shown that, in most cases of practical interest, its value is
given by either of the limits
r_c = lim_{n→∞} |a_n/a_{n+1}|   or   r_c = lim_{n→∞} |a_n|^{−1/n}   where   a_k = f^{(k)}(a)/k!.   (1.32)
Usually the first expression is most useful. Typically we have, for large n,

f^{(n)}(a) = (n!/r_c^n)(1 + O(1/n))   so that   |n!/f^{(n)}(a)|^{1/n} = A r_c (1 + O(1/n))

for some constant A. Then the nth term of the series behaves as ((x − a)/r_c)ⁿ, and decreases rapidly with increasing n provided |x − a| < r_c and n is sufficiently large.
Superficially, the Taylor series appears to be a useful representation and a good
approximation. In general this is not true unless |x−a| is small; for practical applications

far more efficient approximations exist — that is they achieve the same accuracy for
far less work. The basic problem is that the Taylor expansion uses knowledge of the
function at one point only, and the larger |x − a| the more terms are required for a
given accuracy. More sensible approximations, on a given interval, take into account
information from the whole interval: we describe some approximations of this type in
chapter 13.
The first practical problem is that the remainder term, equation 1.29, depends upon
θ, the value of which is unknown. Hence Rn cannot be computed; also, it is normally
difficult to estimate.
In order to understand how these series converge we need to consider the magnitude
of the nth term in the Taylor series: this type of analysis is important for any numerical
evaluation of power series. The nth term is a product of (x − a)n /n! and f (n) (a). Using
Stirling’s approximation,
n! = √(2πn) (n/e)ⁿ (1 + O(1/n))   (1.33)
we can approximate the first part of this product by
|x − a|ⁿ/n! ≃ (1/√(2πn)) (e|x − a|/n)ⁿ = g_n.   (1.34)
The expression gn decreases very rapidly with increasing n, provided n is large enough.
Hence the term |x − a|n /n! may be made as small as we please. But for practical
applications this is not sufficient; in figure 1.6 we plot a graph of the values of log(gn ),
that is the logarithm to the base 10, for x − a = 10.

Figure 1.6 Graph showing the value of log(g_n), equation 1.34, for x − a = 10. For clarity we have joined the points with a continuous line.

In this example the maximum of g_n is at n = 10 and has a value of about 2500, before it starts to decrease. It is fairly simple to show that g_n has a maximum at n ≃ |x − a| and here its value is max(g_n) ≃ exp(|x − a|)/√(2π|x − a|).
The value of f (n) (a) is also difficult to estimate, but it usually increases rapidly with
n. Bizarrely, in many cases of interest, this behaviour depends upon the behaviour
of f (z), where z is a complex variable. An understanding of this requires a study
of Complex Variable Theory, which is beyond the scope of this chapter. Instead we
illustrate the behaviour of Taylor polynomials with a simple example.
First consider the Taylor series of sin x, about x = 0,
sin x = x − x³/3! + x⁵/5! + · · · + (−1)^{n−1} x^{2n−1}/(2n − 1)! + · · · ,   (1.35)

which is derived in exercise 1.31.


Note that only odd powers occur, because sin x is an odd function, and also that the
radius of convergence is infinite. In figure 1.7 we show graphs of this series, truncated
at x2n−1 with n = 1, 4, 8 and 15 for 0 < x < 4π.

Figure 1.7 Graph comparing the Taylor polynomials, of order n, for the sine function with the exact function, the dashed line.

These graphs show that for large x it is necessary to include many terms in the series
to obtain an accurate representation of sin x. The reason is simply that for fixed, large
x, x^{2n−1}/(2n − 1)! is very large near n = x, as shown in figure 1.6. Because the terms of this series alternate in sign the large terms in the early part of the series partially cancel, and this causes problems when approximating a function that is O(1): it is worth noting that, as a consequence, with a computer having finite accuracy there is a value of x beyond which the Taylor series for sin x gives incorrect values, despite the fact that formally it converges for all x.
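This cancellation can be seen directly by summing the series in floating-point arithmetic. The sketch below (our own illustration) builds the partial sums of equation 1.35 by a term recurrence and evaluates them at x = 10:

```python
import math

def sin_taylor(x, n):
    # Taylor polynomial of sin x with n terms, i.e. up to x**(2n-1)/(2n-1)!
    s, term = 0.0, x
    for k in range(1, n + 1):
        s += term
        term *= -x * x / ((2 * k) * (2 * k + 1))   # next odd-power term
    return s

x = 10.0
# with five terms the partial sum is dominated by the large early terms ...
assert abs(sin_taylor(x, 5) - math.sin(x)) > 100
# ... while thirty terms reproduce sin(10) essentially to rounding error
assert abs(sin_taylor(x, 30) - math.sin(x)) < 1e-9
```

The intermediate terms are of order 10³, so roughly three decimal digits of the final O(1) answer are lost to rounding; at much larger x the loss becomes total.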

Exercise 1.31
Exponential and Trigonometric functions
If f(x) = exp(ix) show that f^{(n)}(x) = iⁿ exp(ix) and hence that its Taylor series is

e^{ix} = Σ_{k=0}^{∞} (ix)^k/k!.

Show that the radius of convergence of this series is infinite. Deduce that

cos x = 1 − x²/2! + x⁴/4! + · · · + (−1)ⁿ x^{2n}/(2n)! + · · · ,
sin x = x − x³/3! + x⁵/5! + · · · + (−1)ⁿ x^{2n+1}/(2n + 1)! + · · · .

Exercise 1.32
Binomial expansion
Show that the Taylor series of (1 + x)a is

(1 + x)^a = 1 + ax + (1/2)a(a − 1)x² + · · · + (a(a − 1)(a − 2) · · · (a − k + 1)/k!) x^k + · · · .

When a = n is an integer this series terminates at k = n and becomes the binomial expansion

(1 + x)ⁿ = Σ_{k=0}^{n} (n choose k) x^k   where   (n choose k) = n!/(k!(n − k)!)

are the binomial coefficients.

Exercise 1.33
If f(x) = tan x find the first three derivatives to show that tan x = x + x³/3 + O(x⁵).

Exercise 1.34
The natural logarithm
(a) Show that 1/(1 + t) = 1 − t + t² + · · · + (−1)ⁿtⁿ + · · · and use the definition of the natural logarithm, ln(1 + x) = ∫_0^x dt/(1 + t), to show that

ln(1 + x) = x − x²/2 + x³/3 + · · · + (−1)^{n−1} xⁿ/n + · · · .

(b) For which values of x is this expression valid?


(c) Use this result to show that ln((1 + x)/(1 − x)) = 2(x + x³/3 + · · · + x^{2n−1}/(2n − 1) + · · ·).

Exercise 1.35
The inverse tangent function
Use the definition tan⁻¹x = ∫_0^x dt/(1 + t²) to show that for |x| < 1,

tan⁻¹x = Σ_{k=0}^{∞} (−1)^k x^{2k+1}/(2k + 1).

Exercise 1.36
Show that ln(1 + sinh x) = x − x²/2 + x³/2 − 5x⁴/12 + O(x⁵).

Exercise 1.37
Obtain the first five terms of the Taylor series of the function that satisfies the
equation
(1 + x) dy/dx = 1 + xy + y²,   y(0) = 0.
Hint use Leibniz’s rule given in exercise 1.15 (page 21) to differentiate the equation
n times.

1.3.9 Taylor series for several variables


The Taylor series of a function f : Rm → R is trivially derived from the Taylor expan-
sion of a function of one variable using the chain rule, equation 1.21 (page 26). The
only difficulty is that the algebra very quickly becomes unwieldy with increasing order.
We require the expansion of f (x) about x = a, so we need to represent f (a + h) as
some sort of power series in h. To this end, define a function of the single variable t by
the relation
F (t) = f (a + th) so F (0) = f (a),
and F (t) gives values of f (x) on the straight line joining a to a + h. The Taylor series
of F (t) about t = 0 is, on using equation 1.28 (page 31),

F(t) = F(0) + tF′(0) + (t²/2!)F″(0) + · · · + (tⁿ/n!)F⁽ⁿ⁾(0) + R_{n+1},   (1.36)
which we assume to exist for |t| ≤ 1. Now we need only express the derivatives F (n) (0)
in terms of the partial derivatives of f (x). Equation 1.21 (page 26) gives
F′(0) = Σ_{k=1}^{m} f_{x_k}(a) h_k.

Hence to first order the Taylor series is


f(a + h) = f(a) + Σ_{k=1}^{m} h_k f_{x_k}(a) + R_2 = f(a) + h · ∂f/∂a + R_2,   (1.37)

where R_2 is the remainder term which is second order in h and is given below. Here we have introduced the notation ∂f/∂x for the vector function,

∂f/∂x = (∂f/∂x_1, ∂f/∂x_2, · · · , ∂f/∂x_m)   with the scalar product   h · ∂f/∂x = Σ_{k=1}^{m} h_k ∂f/∂x_k.

For the second derivative we use equation 1.21 (page 26) again,

F″(t) = Σ_{k=1}^{m} h_k (d/dt) f_{x_k}(a + th) = Σ_{k=1}^{m} h_k ( Σ_{i=1}^{m} h_i f_{x_k x_i}(a + th) ).

At t = 0 this can be written in the form,

F″(0) = Σ_{k=1}^{m} Σ_{i=1}^{m} h_k h_i f_{x_k x_i}(a)
      = Σ_{k=1}^{m} h_k² f_{x_k x_k}(a) + 2 Σ_{k=1}^{m−1} Σ_{i=k+1}^{m} h_k h_i f_{x_k x_i}(a),   (1.38)

where the second relation comprises fewer terms because the mixed derivative rule has
been used. This gives the second order Taylor series,
f(a + h) = f(a) + Σ_{k=1}^{m} h_k f_{x_k}(a) + (1/2!) Σ_{k=1}^{m} Σ_{i=1}^{m} h_k h_i f_{x_k x_i}(a) + R_3,   (1.39)

where the remainder term is given below.



The higher-order terms are derived in exactly the same manner, but the algebra
quickly becomes cumbersome. It helps, however, to use the linear differential operator
h · ∂/∂a to write the derivatives of F(t) at t = 0 in the more convenient form,

F′(0) = (h · ∂/∂a) f(a),   F″(0) = (h · ∂/∂a)² f(a)   and   F⁽ⁿ⁾(0) = (h · ∂/∂a)ⁿ f(a).   (1.40)

Then we can write the Taylor series in the form

f(a + h) = f(a) + Σ_{s=1}^{n} (1/s!) (h · ∂/∂a)^s f(a) + R_{n+1}   (1.41)

where the remainder term is

R_{n+1} = (1/(n + 1)!) F^{(n+1)}(θ)   for some 0 < θ < 1.
Because the high order derivatives are so cumbersome and for the practical reasons
discussed in section 1.3.8, in particular figure 1.7 (page 34), Taylor series for many vari-
ables are rarely used beyond the second order term. This term, however, is important
for the classification of stationary points, considered in chapter 8.
For functions of two variables, (x, y), the Taylor series is
f(a + h, b + k) = f(a, b) + h f_x + k f_y + (1/2)(h² f_xx + 2hk f_xy + k² f_yy)
   + (1/6)(h³ f_xxx + 3h²k f_xxy + 3hk² f_xyy + k³ f_yyy) + · · ·
   + Σ_{r=0}^{s} (h^{s−r} k^r/((s − r)! r!)) (∂^s f/∂x^{s−r}∂y^r) + · · · + R_{n+1},   (1.42)

where all derivatives are evaluated at (a, b). In this case the sth term is relatively easy
to obtain by expanding the differential operator (h∂/∂x + k∂/∂y)s using the binomial
expansion (which works because the mixed derivative rule means that the two operators
∂/∂x and ∂/∂y commute).
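The second-order series 1.42 can be assembled numerically from finite-difference derivatives, which also gives a direct check of exercise 1.38(a). The following sketch (our own illustration; the step size, sample function and displacement are arbitrary choices) does this for f(x, y) = sin x sin y about the origin:

```python
import math

def taylor2(f, a, b, h, k, d=1e-4):
    # second-order Taylor polynomial of f about (a, b), equation 1.42,
    # with the partial derivatives estimated by central differences
    fx = (f(a + d, b) - f(a - d, b)) / (2 * d)
    fy = (f(a, b + d) - f(a, b - d)) / (2 * d)
    fxx = (f(a + d, b) - 2 * f(a, b) + f(a - d, b)) / d**2
    fyy = (f(a, b + d) - 2 * f(a, b) + f(a, b - d)) / d**2
    fxy = (f(a + d, b + d) - f(a + d, b - d)
           - f(a - d, b + d) + f(a - d, b - d)) / (4 * d**2)
    return (f(a, b) + h * fx + k * fy
            + 0.5 * (h**2 * fxx + 2 * h * k * fxy + k**2 * fyy))

f = lambda x, y: math.sin(x) * math.sin(y)
# about (0, 0) the second-order series of sin x sin y reduces to h*k
approx = taylor2(f, 0.0, 0.0, 0.05, 0.05)
assert abs(approx - f(0.05, 0.05)) < 1e-5
assert abs(approx - 0.05 * 0.05) < 1e-6
```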
Exercise 1.38
Find the Taylor expansions about x = y = 0, up to and including the second order
terms, of the functions
(a) f(x, y) = sin x sin y,   (b) f(x, y) = sin(x + e^{−y} − 1).

Exercise 1.39
Show that the third-order Taylor series for a function, f (x, y, z), of three variables
is
f(a + h, b + k, c + l) = f(a, b, c) + h f_x + k f_y + l f_z
   + (1/2!)(h² f_xx + k² f_yy + l² f_zz + 2hk f_xy + 2kl f_yz + 2lh f_zx)
   + (1/3!)(h³ f_xxx + k³ f_yyy + l³ f_zzz + 6hkl f_xyz
       + 3hk² f_xyy + 3hl² f_xzz + 3kh² f_yxx + 3kl² f_yzz + 3lh² f_zxx + 3lk² f_zyy).

1.3.10 L’Hospital’s rule


Ratios of functions occur frequently and if

R(x) = f(x)/g(x)   (1.43)

the value of R(x) is normally computed by dividing the value of f (x) by the value of
g(x): this works provided g(x) is not zero at the point in question, x = a say. If g(x)
and f (x) are simultaneously zero at x = a, the value of R(a) may be redefined as a
limit. For instance if
R(x) = (sin x)/x   (1.44)
then the value of R(0) is not defined, though R(x) does tend to the limit R(x) → 1 as
x → 0. Here we show how this limit may be computed using L’Hospital’s rule13 and
its extensions, discovered by the French mathematician G F A Marquis de l’Hospital
(1661 – 1704).
Suppose that at x = a, f (a) = g(a) = 0 and that each function has a Taylor series
about x = a, with finite radii of convergence: thus near x = a we have for small,
non-zero |ε|,

R(a + ε) = f(a + ε)/g(a + ε) = (εf′(a) + O(ε²))/(εg′(a) + O(ε²)) = f′(a)/g′(a) + O(ε),   provided g′(a) ≠ 0.

Hence, on taking the limit ε → 0, we obtain the result given by the following theorem.
Theorem 1.5
L’Hospital’s rule. Suppose that f (x) and g(x) are real and differentiable for −∞ ≤
a < x < b ≤ ∞. If

lim_{x→a} f(x) = lim_{x→a} g(x) = 0   or   lim_{x→a} g(x) = ∞

then

lim_{x→a} f(x)/g(x) = lim_{x→a} f′(x)/g′(x),   (1.45)

provided the right-hand limit exists.

More generally if f^{(k)}(a) = g^{(k)}(a) = 0, k = 0, 1, · · · , n − 1, and g^{(n)}(a) ≠ 0 then

lim_{x→a} f(x)/g(x) = lim_{x→a} f^{(n)}(x)/g^{(n)}(x),

provided the right-hand limit exists.


Consider the function defined by equation 1.44; at x = 0 L’Hospital’s rule gives

R(0) = lim_{x→0} (sin x)/x = lim_{x→0} (cos x)/1 = 1.
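Such limits can be checked numerically by evaluating the ratio at points approaching zero. The sketch below (our own check; the sample points are arbitrary) also illustrates the repeated form of the rule with (1 − cos x)/x², whose limit 1/2 needs two differentiations:

```python
import math

for x in (1e-2, 1e-3, 1e-4):
    # sin x / x = 1 - x**2/6 + ..., so the deviation from 1 is below x
    assert abs(math.sin(x) / x - 1.0) < x
    # (1 - cos x)/x**2 = 1/2 - x**2/24 + ..., two applications of the rule
    assert abs((1 - math.cos(x)) / x**2 - 0.5) < x
```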

13 Here we use the spelling of the French national bibliography, as used by L’Hospital. Some modern texts use the spelling L’Hôpital, instead of the silent s.



Exercise 1.40
Find the values of the following limits:

(a) lim_{x→a} (cosh x − cosh a)/(sinh x − sinh a),   (b) lim_{x→0} (sin x − x)/(x cos x − x),   (c) lim_{x→0} (3^x − 3^{−x})/(2^x − 2^{−x}).

Exercise 1.41
(a) If f(a) = g(a) = 0 and lim_{x→a} f′(x)/g′(x) = ∞ show that lim_{x→a} f(x)/g(x) = ∞.

(b) If both f(x) and g(x) are positive in a neighbourhood of x = a, tend to infinity as x → a and lim_{x→a} f′(x)/g′(x) = A, show that lim_{x→a} f(x)/g(x) = A.

1.3.11 Integration
The study of integration arose from the need to compute areas and volumes. The
theory of integration was developed independently from the theory of differentiation
and the Fundamental Theorem of Calculus, described in note P I on page 40, relates
these processes. It should be noted, however, that Newton knew of the relation between
gradients and areas and exploited it in his development of the subject.
In this section we provide a very brief outline of the simple theory of integration
and discuss some of the methods used to evaluate integrals. This section is included
for reference purposes; however, although the theory of integration is not central to
the main topic of this course, you should be familiar with its contents. The important
idea, needed in chapter 4, is that of differentiating with respect to a parameter, or
‘differentiating under the integral sign’ described in equation 1.52 (page 43).
In this discussion of integration we use an intuitive notion of area and refer the
reader to suitable texts, Apostol (1963), Rudin (1976) or Whittaker and Watson (1965)
for instance, for a rigorous treatment.
If f (x) is a real, continuous function of the interval a ≤ x ≤ b, it is intuitively clear
that the area between the graph and the x-axis can be approximated by the sum of the
areas of a set of rectangles as shown by the dashed lines in figure 1.8.

Figure 1.8 Diagram showing how the area under the curve y = f(x) may be approximated by a set of rectangles. The intervals x_k − x_{k−1} need not be the same length.

In general the closed interval a ≤ x ≤ b may be partitioned by a set of n − 1 distinct,


ordered points
a = x0 < x1 < x2 < · · · < xn−1 < xn = b
to produce n sub-divisions: in figure 1.8 n = 6 and the spacings are equal. On each
interval we construct a rectangle: on the kth rectangle the height is f (lk ) chosen to
be the smallest value of f (x) in the interval. These rectangles are shown in the figure.
Another set of rectangles of height f (hk ) chosen to be the largest value of f (x) in the
interval can also be formed. If A is the area under the graph it follows that
Σ_{k=1}^{n} (x_k − x_{k−1}) f(l_k) ≤ A ≤ Σ_{k=1}^{n} (x_k − x_{k−1}) f(h_k).   (1.46)

This type of approximation underlies the simplest numerical methods of approximating


integrals and, as will be seen in chapter 4, is the basis of Euler’s approximations to
variational problems.
The theory of integration developed by Riemann (1826 – 1866) shows that for con-
tinuous functions these two bounds approach each other, as n → ∞ in a meaningful
manner, and defines the wider class of functions for which this limit exists. When these
limits exist their common value is named the integral of f (x) and is denoted by
∫_a^b dx f(x)   or   ∫_a^b f(x) dx.   (1.47)

In this context the function f (x) is named the integrand, and b and a the upper and
lower integration limits, or just limits. It can be shown that the integral exists for
bounded, piecewise continuous functions and also some unbounded functions.
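The bounds of equation 1.46 are easily computed for a specific function. The following sketch (our own illustration; it uses a uniform partition and takes the endpoint values as the extremes, which is valid here because x² is monotonic on each sub-interval) brackets the area under x² on [0, 1]:

```python
def riemann_bounds(f, a, b, n):
    # lower and upper sums of equation 1.46 on a uniform partition;
    # the endpoint min/max equals the true inf/sup only for monotonic pieces
    h = (b - a) / n
    lower = upper = 0.0
    for k in range(n):
        x0, x1 = a + k * h, a + (k + 1) * h
        lower += h * min(f(x0), f(x1))
        upper += h * max(f(x0), f(x1))
    return lower, upper

lo, hi = riemann_bounds(lambda x: x * x, 0.0, 1.0, 1000)
assert lo <= 1/3 <= hi          # the bounds bracket the integral, 1/3
assert hi - lo < 1e-2           # and approach each other as n grows
```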
From this definition the following elementary properties can be derived.
P I: If F(x) is a differentiable function and F′(x) = f(x) then F(x) = F(a) + ∫_a^x dt f(t).
This is the Fundamental theorem of Calculus and is important because it provides one
of the most useful tools for evaluating integrals.
P II: ∫_a^b dx f(x) = −∫_b^a dx f(x).
P III: ∫_a^b dx f(x) = ∫_a^c dx f(x) + ∫_c^b dx f(x) provided all integrals exist. Note, it is not necessary that c lies in the interval (a, b).
P IV: ∫_a^b dx (αf(x) + βg(x)) = α ∫_a^b dx f(x) + β ∫_a^b dx g(x), where α and β are real or complex numbers.
P V: |∫_a^b dx f(x)| ≤ ∫_a^b dx |f(x)|. This is the analogue of the finite sum inequality |Σ_{k=1}^{n} a_k| ≤ Σ_{k=1}^{n} |a_k|, where a_k, k = 1, 2, · · · , n, are a set of complex numbers or functions.

P VI: The Cauchy-Schwarz inequality for real functions is

( ∫_a^b dx f(x)g(x) )² ≤ ( ∫_a^b dx f(x)² ) ( ∫_a^b dx g(x)² )

with equality if and only if g(x) = cf(x) for some real constant c. This inequality is sometimes named the Cauchy inequality and sometimes the Schwarz inequality. It is the analogue of the finite sum inequality

( Σ_{k=1}^{n} a_k b_k )² ≤ ( Σ_{k=1}^{n} a_k² ) ( Σ_{k=1}^{n} b_k² )

with equality if and only if b_k = ca_k for all k and some real constant c.
P VII: The Hölder inequality: if 1/p + 1/q = 1, p > 1 and q > 1, then

| ∫_a^b dx f(x)g(x) | ≤ ( ∫_a^b dx |f(x)|^p )^{1/p} ( ∫_a^b dx |g(x)|^q )^{1/q},

is valid for complex functions f(x) and g(x) with equality if and only if |f(x)|^p |g(x)|^{−q} and arg(f g) are independent of x. It is the analogue of the finite sum inequality

Σ_{k=1}^{n} |a_k b_k| ≤ ( Σ_{k=1}^{n} |a_k|^p )^{1/p} ( Σ_{k=1}^{n} |b_k|^q )^{1/q},   1/p + 1/q = 1,

with equality if and only if |a_n|^p |b_n|^{−q} and arg(a_n b_n) are independent of n (or a_k = 0 for all k or b_k = 0 for all k). If all a_k and b_k are positive and p = q = 2 these inequalities reduce to the Cauchy-Schwarz inequalities.
P VIII: The Minkowski inequality for any p > 1 and real functions f(x) and g(x) is

( ∫_a^b dx |f(x) + g(x)|^p )^{1/p} ≤ ( ∫_a^b dx |f(x)|^p )^{1/p} + ( ∫_a^b dx |g(x)|^p )^{1/p}

with equality if and only if g(x) = cf(x), with c a non-negative constant. It is the analogue of the finite sum inequality valid for a_k, b_k > 0, for all k, and p > 1,

( Σ_{k=1}^{n} (a_k + b_k)^p )^{1/p} ≤ ( Σ_{k=1}^{n} a_k^p )^{1/p} + ( Σ_{k=1}^{n} b_k^p )^{1/p},

with equality if and only if b_k = ca_k for all k and c a non-negative constant.

Sometimes it is convenient to ignore the integration limits, here a and b, and write ∫ dx f(x): this is named the indefinite integral: its value is undefined to within an additive constant. However, it is almost always possible to express problems in terms of definite integrals — that is, those with limits.

The theory of integration is concerned with understanding the nature of the inte-
gration process and with extending these simple ideas to deal with wider classes of
functions. The sciences are largely concerned with evaluating integrals, that is convert-
ing integrals to numbers or functions that can be understood: most of the techniques
available for this activity were developed in the nineteenth century or before, and we
describe them later in this section.
There are two important extensions to the integral defined above. If either or both
−a and b tend to infinity we define an infinite integral as a limit of integrals: thus if
b → ∞ we have

∫_a^∞ dx f(x) = lim_{b→∞} ( ∫_a^b dx f(x) ),   (1.48)

assuming the limit exists. There are similar definitions for


∫_{−∞}^{b} dx f(x)   and   ∫_{−∞}^{∞} dx f(x),

however, it should be noted that the limit

lim_{a→∞} ∫_{−a}^{a} dx f(x)   may exist, but the limit   lim_{a→∞} lim_{b→∞} ∫_{−b}^{a} dx f(x)

may not. An example is f(x) = x/(1 + x²) for which

∫_{−b}^{a} dx x/(1 + x²) = (1/2) ln( (1 + a²)/(1 + b²) ).

If a = b the right-hand side is zero for all a (because f(x) is an odd function) and the first limit is zero: if a ≠ b the second limit does not exist.
Whether or not infinite integrals exist depends upon the behaviour of f(x) as |x| → ∞. Consider the limit 1.48. If f(x) ≠ 0 for x > X, for some X > 0, the limit exists provided |f(x)| → 0 faster than x^{−α}, α > 1: if f(x) decays to zero slower than 1/x^{1−ε}, for any ε > 0, the integral diverges, see however exercise 1.52 (page 45).
If the integrand is oscillatory cancellation between the positive and negative parts
of the integral gives convergence when the magnitude of the integrand tends to zero.
In this case we have the following useful theorem from 1853, due to Chartier14 .
Theorem 1.6
If f(x) → 0 monotonically as x → ∞ and if ∫_a^x dt φ(t) is bounded as x → ∞ then ∫_a^∞ dx f(x)φ(x) exists.

For instance if φ(x) = sin(λx) and f(x) = x^{−α}, 0 < α < 2, this shows that ∫_0^∞ dx x^{−α} sin λx exists: if α = 1 its value is π/2, for any λ > 0. It should be mentioned that the very
cancellation which ensures convergence may cause difficulties when evaluating such in-
tegrals numerically.
The second important extension deals with integrands that are unbounded. Suppose
that f (x) is unbounded at x = a, then we define
∫_a^b dx f(x) = lim_{ε→0⁺} ∫_{a+ε}^{b} dx f(x),   (1.49)
14 J Chartier, Journal de Math 1853, XVIII, pages 201-212.

provided the limit exists. As a general rule, provided |f (x)| tends to infinity slower
than |x − a|β , β > −1, the integral exists, which is why, in the previous example, we
needed α < 2; note that if f (x) = O(ln(x − a)), as x → a, it is integrable. For functions
unbounded at an interior point the natural extension to P III is used.
The evaluation of integrals of any complexity in closed form is normally difficult, or impossible, but there are a few tools that help. The main technique is to use the Fundamental Theorem of Calculus in reverse and simply involves recognising those F(x) whose derivative is the integrand: this requires practice and ingenuity. The main purpose of the other tools is to convert integrals into recognisable types. The first is integration by parts, derived from the product rule for differentiation:

    ∫_a^b dx u (dv/dx) = [uv]_a^b − ∫_a^b dx v (du/dx).    (1.50)
The second method is to change variables:
    ∫_a^b dx f(x) = ∫_A^B dt (dx/dt) f(g(t)) = ∫_A^B dt g′(t) f(g(t)),    (1.51)

where x = g(t), g(A) = a, g(B) = b, and g(t) is monotonic for A < t < B. In these circumstances the Leibniz notation is helpfully transparent because dx/dt can be treated like a fraction, making the equation easier to remember. The geometric significance of this formula is simply that the small element of length δx, at x, becomes the element of length δx = g′(t)δt, where x = g(t), under the variable change.
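As a concrete check of formula 1.51, the sketch below (an added illustration in Python, not part of the notes) evaluates ∫_0^1 dx √(1 − x²) = π/4 directly, and again after the monotonic substitution x = g(t) = sin t, for which g′(t) f(g(t)) = cos²t on [0, π/2].

```python
# Change of variables: both quadratures below approximate pi/4 = 0.785398...
import math

def midpoint(f, a, b, n=20000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

lhs = midpoint(lambda x: math.sqrt(1.0 - x * x), 0.0, 1.0)             # integrand in x
rhs = midpoint(lambda t: math.cos(t) * math.cos(t), 0.0, math.pi / 2)  # g'(t) f(g(t))
```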
The third method involves differentiation with respect to a parameter. Consider a function f(x, u) of two variables, which is integrated with respect to x; then

    (d/du) ∫_{a(u)}^{b(u)} dx f(x, u) = f(b, u) db/du − f(a, u) da/du + ∫_{a(u)}^{b(u)} dx ∂f/∂u,    (1.52)

provided a(u) and b(u) are differentiable and fu (x, u) is a continuous function of both
variables; the derivation of this formula is considered in exercise 1.50. If neither limit
depends upon u the first two terms on the right-hand side vanish. A simple example
shows how this method can work. Consider the integral
    I(u) = ∫_0^∞ dx e^{−xu},  u > 0.

The derivatives are

    I′(u) = −∫_0^∞ dx x e^{−xu}  and, in general,  I^{(n)}(u) = (−1)^n ∫_0^∞ dx x^n e^{−xu}.

But the original integral is trivially integrated to I(u) = 1/u, so differentiation gives

    ∫_0^∞ dx x^n e^{−xu} = n!/u^{n+1}.
This result may also be found by repeated integration by parts but the above method
involves less algebra.
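The result is easy to confirm numerically; the following Python sketch (added here for illustration, with an ad hoc truncation of the infinite range) compares ∫_0^∞ dx x^n e^{−xu} with n!/u^{n+1} for small n.

```python
# Check differentiation under the integral sign: int_0^inf x^n e^{-xu} dx = n!/u^(n+1).
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

u = 1.5
checks = []
for n in range(5):
    # e^{-xu} is negligible beyond x = 60/u, so truncate the infinite range there
    approx = simpson(lambda x, n=n: x ** n * math.exp(-x * u), 0.0, 60.0 / u)
    exact = math.factorial(n) / u ** (n + 1)
    checks.append(abs(approx - exact) / exact)
```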
The application of these methods usually requires some skill, some trial and error
and much patience. Please do not spend too long on the following problems.

Exercise 1.42
(a) If f(x) is an odd function, f(−x) = −f(x), show that ∫_{-a}^{a} dx f(x) = 0.
(b) If f(x) is an even function, f(−x) = f(x), show that ∫_{-a}^{a} dx f(x) = 2 ∫_0^a dx f(x).

Exercise 1.43
Show that, if λ > 0, the value of the integral I(λ) = ∫_0^∞ dx sin(λx)/x is independent of λ. How are the values of I(λ) and I(−λ) related?

Exercise 1.44
Use integration by parts to evaluate the following indefinite integrals.
(a) ∫ dx ln x,  (b) ∫ dx x/cos²x,  (c) ∫ dx x ln x,  (d) ∫ dx x sin x.

Exercise 1.45
Evaluate the following integrals
(a) ∫_0^{π/4} dx sin x ln(cos x),  (b) ∫_0^{π/4} dx x tan²x,  (c) ∫_0^1 dx x² sin⁻¹x.

Exercise 1.46
If I_n = ∫_0^x dt t^n e^{at}, n ≥ 0, use integration by parts to show that aI_n = x^n e^{ax} − nI_{n−1} and deduce that

    I_n = n! e^{ax} Σ_{k=0}^{n} (−1)^{n−k} x^k/(a^{n−k+1} k!) − (−1)^n n!/a^{n+1}.

Exercise 1.47
(a) Using the substitution u = a − x, show that ∫_0^a dx f(x) = ∫_0^a dx f(a − x).
(b) With the substitution θ = π/2 − φ show that

    I = ∫_0^{π/2} dθ sin θ/(sin θ + cos θ) = ∫_0^{π/2} dφ cos φ/(cos φ + sin φ)

and deduce that I = π/4.



Exercise 1.48
Use the substitution t = tan(x/2) to prove that if a > |b| > 0

    ∫_0^π dx 1/(a + b cos x) = π/√(a² − b²).

Why is the condition a > |b| necessary?

Use this result and the technique of differentiating the integral to determine the values of

    ∫_0^π dx/(a + b cos x)²,  ∫_0^π dx/(a + b cos x)³,  ∫_0^π dx cos x/(a + b cos x)²,  ∫_0^π dx ln(a + b cos x).

Exercise 1.49
Prove that y(t) = (1/ω) ∫_a^t dx f(x) sin ω(t − x) is the solution of the differential equation

    d²y/dt² + ω²y = f(t),  y(a) = 0,  y′(a) = 0.

Exercise 1.50
(a) Consider the integral F(u) = ∫_0^{a(u)} dx f(x), where only the upper limit depends upon u. Using the basic definition, equation 1.7 (page 18), derive the derivative F′(u).
(b) Consider the integral F(u) = ∫_a^b dx f(x, u), where only the integrand depends upon u. Using the basic definition derive the derivative F′(u).

Exercise 1.51
Assuming that both integrals exist, show that

    ∫_{-∞}^{∞} dx f(x − 1/x) = ∫_{-∞}^{∞} dx f(x).

Hence show that

    ∫_{-∞}^{∞} dx exp(−x² − 1/x²) = √π/e².

You will need the result ∫_{-∞}^{∞} dx e^{−x²} = √π.

Exercise 1.52
Find the limits as X → ∞ of the following integrals

    ∫_2^X dx 1/(x ln x)   and   ∫_2^X dx 1/(x (ln x)²).

Hint: note that if f(x) = ln(ln x) then f′(x) = (x ln x)⁻¹.

Exercise 1.53
Determine the values of the real constants a > 0 and b > 0 for which the following limit exists

    lim_{X→∞} ∫_2^X dx 1/(x^a (ln x)^b).

1.4 Miscellaneous exercises


The following exercises can be tackled using the method described in the corresponding
section, though other methods may also be applicable.

Limits
Exercise 1.54
Find, using first principles, the following limits

(a) lim_{x→1} (x^a − 1)/(x − 1),  (b) lim_{x→0} (√(1 + x) − 1)/(1 − √(1 − x)),  (c) lim_{x→a} (x^{1/3} − a^{1/3})/(x^{1/2} − a^{1/2}),

(d) lim_{x→(π/2)−} (π − 2x) tan x,  (e) lim_{x→0+} x^{1/x},  (f) lim_{x→0} ((1 + x)/(1 − x))^{1/x},

where a is a real number.

Inverse functions
Exercise 1.55
Show that the inverse functions of y = cosh x, y = sinh x and y = tanh x, for x > 0, are, respectively,

    x = ln(y + √(y² − 1)),  x = ln(y + √(y² + 1))  and  x = (1/2) ln((1 + y)/(1 − y)).

Exercise 1.56
The function y = sin x may be defined to be the solution of the differential equation

    d²y/dx² + y = 0,  y(0) = 0,  y′(0) = 1.

Show that the inverse function x(y) satisfies the differential equation

    d²x/dy² = y (dx/dy)³,  which gives  x(y) = sin⁻¹y = ∫_0^y du 1/√(1 − u²).

Hence find the Taylor series of sin⁻¹y to O(y⁵).

Hint: you may find it helpful to solve the equation by defining z = dx/dy.

Derivatives
Exercise 1.57
Find the derivative of y(x) where

(a) y = f(x)^{g(x)},  (b) y = √((p + x)/(p − x)) √((q + x)/(q − x)),  (c) y^n = x + √(1 + x²).

Exercise 1.58
If y = sin(a sin⁻¹x) show that (1 − x²)y″ − xy′ + a²y = 0.

Exercise 1.59
If y(x) satisfies the equation (1 − x²) d²y/dx² − 2x dy/dx + λy = 0, where λ is a constant and |x| ≤ 1, show that changing the independent variable, x, to θ, where x = cos θ, changes this to

    d²y/dθ² + cot θ dy/dθ + λy = 0.

Exercise 1.60
The Schwarzian derivative of a function f(x) is defined to be

    Sf(x) = f‴(x)/f′(x) − (3/2) (f″(x)/f′(x))² = −2 √(f′(x)) d²/dx² (1/√(f′(x))).

Show that if f(x) and g(x) both have negative Schwarzian derivatives, Sf(x) < 0 and Sg(x) < 0, then the Schwarzian derivative of the composite function h(x) = f(g(x)) also satisfies Sh(x) < 0.
Note: the Schwarzian derivative is important in the study of the fixed points of maps.

Partial derivatives
Exercise 1.61
If z = f(x + ay) + g(x − ay) − (x/(2a²)) cos(x + ay), where f(u) and g(u) are arbitrary functions of a single variable and a is a constant, prove that

    a² ∂²z/∂x² − ∂²z/∂y² = sin(x + ay).

Exercise 1.62
If f(x, y, z) = exp(ax + by + cz)/(xyz), where a, b and c are constants, find the partial derivatives f_x, f_y and f_z, and solve the equations f_x = 0, f_y = 0 and f_z = 0 for (x, y, z).

Exercise 1.63
The equation f(u² − x², u² − y², u² − z²) = 0 defines u as a function of x, y and z. Show that

    (1/x) ∂u/∂x + (1/y) ∂u/∂y + (1/z) ∂u/∂z = 1/u.

Implicit functions
Exercise 1.64
Show that the function f(x, y) = x² + y² − 1 satisfies the conditions of the Implicit Function Theorem for most values of (x, y), and that the function y(x) obtained from the theorem has derivative y′(x) = −x/y.
The equation f(x, y) = 0 can be solved explicitly to give y = ±√(1 − x²). Verify that the derivatives of both these functions are the same as that obtained from the Implicit Function Theorem.

Exercise 1.65
Prove that the equation x cos(xy) = 0 has a unique solution, y(x), near the point (1, π/2), and find its first and second derivatives.

Exercise 1.66
The folium of Descartes has equation f(x, y) = x³ + y³ − 3axy = 0. Show that at all points on the curve where y² ≠ ax, the implicit function y(x) has derivative

    dy/dx = −(x² − ay)/(y² − ax).

Show that there is a horizontal tangent to the curve at (a·2^{1/3}, a·4^{1/3}).

Taylor series
Exercise 1.67
By sketching the graphs of y = tan x and y = 1/x for x > 0, show that the equation x tan x = 1 has an infinite number of positive roots. By putting x = nπ + z, where n is a positive integer, show that this equation becomes (nπ + z) tan z = 1 and use a first-order Taylor expansion of this to show that the root nearest nπ is given approximately by x_n = nπ + 1/(nπ).

Exercise 1.68
Determine the constants a and b such that (1 + a cos 2x + b cos 4x)/x⁴ is finite at the origin.

Exercise 1.69
Find the Taylor series, to 4th order, of the following functions:
(a) ln cosh x,  (b) ln(1 + sin x),  (c) e^{sin x},  (d) sin²x.

Mean value theorem


Exercise 1.70
If f(x) is a function such that f′(x) increases with increasing x, use the Mean Value Theorem to show that f′(x) < f(x + 1) − f(x) < f′(x + 1).

Exercise 1.71
Use the functions f₁(x) = ln(1 + x) − x and f₂(x) = f₁(x) + x²/2 and the Mean Value Theorem to show that, for x > 0,

    x − x²/2 < ln(1 + x) < x.

L’Hospital’s rule
Exercise 1.72
Show that lim_{x→1} sin(ln x)/(x⁵ − 7x³ + 6) = −1/16.

Exercise 1.73
Determine the limits lim_{x→0} (cos x)^{1/tan²x} and lim_{x→0} (a sin bx − b sin ax)/x³.

Integrals
Exercise 1.74
Using differentiation under the integral sign show that

    ∫_0^∞ dx tan⁻¹(ax)/(x(1 + x²)) = (π/2) ln(1 + a).

Exercise 1.75
Prove that, if |a| < 1,

    ∫_0^{π/2} dx ln(1 + cos πa cos x)/cos x = (π²/8)(1 − 4a²).

Exercise 1.76
If f(x) = (sin x)/x, show that ∫_0^{π/2} dx f(x) f(π/2 − x) = (2/π) ∫_0^π dx f(x).

Exercise 1.77
Use the integral definition

    tan⁻¹x = ∫_0^x dt 1/(1 + t²)

to show that for x > 0

    tan⁻¹(1/x) = ∫_x^∞ dt 1/(1 + t²),

and deduce that tan⁻¹x + tan⁻¹(1/x) = π/2.

Exercise 1.78
Determine the values of x that make g′(x) = 0 if g(x) = ∫_x^{2x} dt f(t) and
(a) f(t) = e^t, and (b) f(t) = (sin t)/t.

Exercise 1.79
If f(x) is integrable for a ≤ x ≤ a + h show that

    lim_{n→∞} (1/n) Σ_{k=1}^{n} f(a + kh/n) = (1/h) ∫_a^{a+h} dx f(x).

Hence find the following limits

(a) lim_{n→∞} n⁻⁶ (1 + 2⁵ + 3⁵ + ··· + n⁵),
(b) lim_{n→∞} (1/(1 + n) + 1/(2 + n) + ··· + 1/(3n)),
(c) lim_{n→∞} (1/n) (sin(y/n) + sin(2y/n) + ··· + sin y),
(d) lim_{n→∞} n⁻¹ [(n + 1)(n + 2) ··· (2n)]^{1/n}.

Exercise 1.80
If the functions f(x) and g(x) are differentiable, find expressions for the first derivative of the functions

    F(u) = ∫_0^u dx f(x)/√(u² − x²)   and   G(u) = ∫_0^u dx g(x)/(u − x)^a,  where 0 < a < 1.

This is a fairly difficult problem. The formula 1.52 does not work because the integrands are singular, yet by substituting simple functions for f(x) and g(x), for instance 1, x and x², we see that there are cases for which the functions F(u) and G(u) are differentiable. Thus we expect an equivalent to formula 1.52 to exist.
Chapter 2

Ordinary Differential Equations

2.1 Introduction
Differential equations are an important component of this course, so in this chapter we discuss relevant, elementary theory and provide practice in solving particular types of equations. You should be familiar with all the techniques discussed here, though some of the more general theorems may be new. If you already feel confident with the theory presented here, then a detailed study may not be necessary, though you should attempt some of the end-of-section exercises. If you wish to delve deeper into the subject there are many books available; those used in the preparation of this chapter include Birkhoff and Rota1 (1962), Arnold (1973)2, Ince3 (1956) and the older text by Piaggio4, which provides a different slant on the subject from that of modern texts.
Differential equations are important because they can be used to describe a wide
variety of physical problems. One reason for this is that Newton’s equations of motion
usually relate the rates of change of the position and momentum of particles to their
position, so physical systems ranging in size from galaxies to atoms are described by
differential equations. Besides these important traditional physical problems, ordinary
differential equations are used to describe some electrical circuits, population changes
when populations are large and chemical reactions. In this course we deal with the subclass of differential equations that can be derived from variational principles, a concept
introduced in chapter 3. A simple example is the stationary chain hanging between two
fixed supports: this assumes a shape that minimises its gravitational energy, and we
can use this fact to derive a differential equation, the solution of which describes the
chain’s shape. This problem is dealt with in chapter 12.
The term ‘differential equation’ first appeared in English in 1763 in ‘The method by
increments’ by William Emerson (1701 – 1782) but was introduced by Leibniz (1646 –
1 Birkhoff G and Rota G-C 1962 Ordinary differential equations (Blaisdell Publishing Co.).
2 Arnold V I 1973, Ordinary Differential Equations (The MIT Press), translated by R A Silverman
3 Ince E L 1956 Ordinary differential equations (Dover).
4 Piaggio H T H 1968 An Elementary Treatise on Differential Equations, G Bell and Sons, first published in 1920.


1716) eighty years previously5 in the Latin form, ‘aequationes differentiales’.


The study of differential equations began with Newton (1642 – 1727) and Leibniz.
Newton considered ‘fluxional equations’, which related a fluxion to its fluent, a fluent
being a function and a fluxional its derivative: in modern terminology he considered the
two types of equation dy/dx = F (x) and dy/dx = F (x, y), and derived solutions using
power series in x, a method which he believed to be universally applicable. Although
this work was completed in the 1670s it was not published until the early 18th century,
too late to affect the development of the general theory which had progressed rapidly
in the intervening period.
Much of this progress was due to Leibniz and the two Bernoulli brothers, James (1654 – 1705) and his younger brother John (1667 – 1748), but others of this scientifically talented family also contributed; many of these are mentioned later in this chapter, so the following genealogical tree of the scientifically important members of this family is shown in figure 2.1.
The Bernoulli family
Nicholas (1623−1708)
 ├─ James (1654−1705)
 ├─ Nicolas I (1662−1716)
 │   └─ Nicolas II (1687−1759)
 └─ John (1667−1748)
     ├─ Nicholas III (1695−1726)
     ├─ Daniel (1700−1782)
     └─ John II (1710−1790)
         ├─ John III (1744−1807)
         ├─ Daniel II (1751−1834)
         │   └─ Christoph (1782−1863)
         │       └─ John Gustav (1811−1863)
         └─ James II (1759−1789)

Figure 2.1 Some of the posts held by some members of the Bernoulli family are:
James: Prof of Mathematics, Basle (1687-1705);
John: Prof of Mathematics, Groningen (1695-1705), Basle (1705-1748);
Nicholas III: Prof at Petrograd;
Daniel: Prof at Petrograd and Basle (Bernoulli principle in hydrodynamics);
John II: Prof at Basle;
John III: Astronomer Royal and Director of Mathematical studies at Berlin;
James II: Prof at Basle, Verona and Petrograd.

In 1690 James Bernoulli solved the brachistochrone problem, discussed in section 5.2,
which involves solving a nonlinear, first-order equation: in 1692 Leibniz discovered the
method of solving first-order homogeneous and linear problems, sections 2.3.2 and 2.3.3:
Bernoulli’s equation, section 2.3.4, was proposed in 1695 by James Bernoulli and solved
by Leibniz and John Bernoulli soon after. Thus within a few years of their discovery
many of the methods now used to solve differential equations had been discovered.
5 Acta Eruditorum, Oct 1684

The first treatise to provide a systematic discussion of differential equations and their
solutions was published in four volumes by Euler (1707 – 1783), the first in 1755 and
the remaining three volumes between 1768 and 1770.
This work on differential equations involved rearranging the equation, using algebraic manipulations and transformations, so that the solution could be expressed as an
integral. This type of solution became known as solving the equation by quadrature,
a term originally used to describe the area under a plane curve and in particular to
the problem of finding a square having the same area as a given circle: it was in this
context that the term was first introduced into English in 1596. The term quadrature
is used regardless of whether the integral can actually be evaluated in terms of known
functions. Other common terms used to describe this type of solution are closed-form
solution and analytic solution: none of these terms have a precise definition.
Much of this early work was concerned with the construction of solutions but raised
fundamental questions concerning what is meant by a function or by a ‘solution’ of
a differential equation, which led to important advances in analysis. These questions
broadened the scope of enquiries, and the first of these newer studies was the work of
Cauchy (1789 – 1857) who investigated the existence and uniqueness of solutions, and
in 1824 proved the first existence theorem; this is quoted on page 81. The extension of
this theorem6 , due to Picard (1856 – 1941), is quoted in section 2.3. These theorems,
although important, deal only with a restricted class of equations, which do not include
many of the quite simple equations arising in this course, or many other practical
problems.
In 1836 Sturm introduced a different approach to the subject, whereby properties of
solutions to certain differential equations are derived directly from the equation, without
the need to find solutions. Subsequently, during the two years 1836-7, Sturm (1803 –
1855) and Liouville (1809 – 1882) developed these ideas, some of which are discussed in
chapter 13. The notion of extracting information from an equation, without solving it,
may seem rather strange, but you can obtain some idea of what can be achieved by
doing exercises 2.57 and 2.58 at the end of this chapter.
Liouville was also responsible for starting another important strand of enquiry. He
was interested in the ‘problem of integration’, the main objective of which is to decide
if a given class of indefinite integrals can be integrated in terms of a finite expression
involving algebraic, logarithmic or exponential functions. This work was performed
between 1833 and 1841 and towards the end of this period Liouville turned his attention to similar problems involving first and second-order differential equations, a far
more difficult problem. A readable history of this work is provided by Lützen7 . This
line of enquiry became of practical significance with the advent of Computer Assisted
Algebra during the last quarter of the 20 th century, and is now an important part of
software such as Maple (used in MS325 and M833) and Mathematica: for some modern
developments see Davenport et al 8 .
Applications that involve differential equations often require solutions, so the third
approach to the subject involves finding approximations to those equations that cannot
be solved exactly in terms of known functions. There are far too many such methods
6 Picard E 1893 J de Maths, 9 page 217.
7 Joseph Liouville 1809-1882: Master of Pure and Applied Mathematics, 1990 by J Lützen, pub-
lished by Springer-Verlag.
8 Davenport J H, Siret Y and Tournier E 1989 Computer Algebra: Systems and algorithms for algebraic computation, Academic Press.



to describe here, but one important technique is described in chapter 14.


The current chapter has two principal aims. First, to give useful existence and
uniqueness theorems, for circumstances where they exist. Second, to describe the var-
ious classes of differential equation which can be solved by the standard techniques
known to Euler. In section 2.3 we discuss first-order equations: some aspects of second-
order equations are discussed in section 2.4. In the next section some general ideas are
introduced.

2.2 General definitions


An nth order differential equation is an equation that gives the nth derivative of a real function, y(x), of a real variable x, in terms of x and some or all of the lower derivatives of y,

    d^n y/dx^n = F(x, y, y′, y″, ···, y^{(n−1)}),  a ≤ x ≤ b.    (2.1)

The function y is named the dependent variable and the real variable x the independent variable; x is often limited to a given interval of the real axis, which may be the whole axis or an infinite portion of it. The function F must be single-valued and differentiable in all variables, see theorem 2.2 (page 81).
Frequently we obtain equations of the form

    G(x, y, y′, y″, ···, y^{(n)}) = 0,  a ≤ x ≤ b,    (2.2)

and this is also referred to as an nth order differential equation. But in order to progress, it is usually necessary to rearrange 2.2 into the form of 2.1, and this usually gives more than one equation. A simple example is the first-order equation y′² + y² = c², c a constant, which gives the two equations y′ = ±√(c² − y²).
Another important type of system is the set of n coupled first-order equations

    dz_k/dx = f_k(x, z₁, z₂, ···, z_n),  k = 1, 2, ···, n,  a ≤ x ≤ b,    (2.3)

where the f_k are a set of n real-valued, single-valued functions of (x, z₁, z₂, ···, z_n). If all the f_k are independent of x these equations are described as autonomous; if one or more of the f_k depend explicitly upon x they are named non-autonomous9.
The nth order equation 2.1 can always be expressed as a set of n coupled, first-order equations. For instance, if we define

    z₁ = y,  z₂ = y′,  z₃ = y″,  ···,  z_n = y^{(n−1)},

then equation 2.1 becomes

    z′_n = y^{(n)} = F(x, z₁, z₂, ···, z_n)  and  z′_k = z_{k+1},  k = 1, 2, ···, n − 1.

9 This distinction is important in dynamics where the independent variable, x, is usually the time.

The significance of this difference is that if y(x) is a solution of an autonomous equations then so is
y(x + a), for any constant a: it will be seen in chapter 7, when we consider Noether’s theorem, that
this has an important consequence and in dynamics results in energy being constant.

This transformation is not unique, as seen in exercise 2.4. Coupled, first-order equations
are important in many applications, and are used in many theorems quoted later in this
chapter, which is why we mention them here.
In this course most differential equations encountered are first-order, n = 1, or
second-order, n = 2.
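This reduction is also the standard starting point for numerical work. The Python sketch below (an added illustration, not part of the notes; the stepper is a textbook fourth-order Runge–Kutta scheme) writes y″ = −y as the system z₁′ = z₂, z₂′ = −z₁ and integrates it from y(0) = 0, y′(0) = 1, so that z₁ tracks sin x.

```python
# Integrate a second-order equation by reducing it to coupled first-order equations.
import math

def rk4_step(f, x, z, h):
    """One classical fourth-order Runge-Kutta step for the system z' = f(x, z)."""
    k1 = f(x, z)
    k2 = f(x + h / 2, [zi + h / 2 * ki for zi, ki in zip(z, k1)])
    k3 = f(x + h / 2, [zi + h / 2 * ki for zi, ki in zip(z, k2)])
    k4 = f(x + h, [zi + h * ki for zi, ki in zip(z, k3)])
    return [zi + h / 6 * (a + 2 * b + 2 * c + d)
            for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

def system(x, z):
    # z = (y, y'); for y'' + y = 0 the right-hand sides are (y', -y)
    return [z[1], -z[0]]

x, z, h = 0.0, [0.0, 1.0], 0.01     # initial conditions y(0) = 0, y'(0) = 1
for _ in range(200):                # integrate to x = 2
    z = rk4_step(system, x, z, h)
    x += h
err = abs(z[0] - math.sin(2.0))     # z[0] approximates y = sin x
```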
A solution of equation 2.1 is any function that satisfies the equation, and it is helpful
and customary to distinguish two types of solutions. The general solution of an nth
order equation is a function

f (x, y, c1 , c2 , · · · , cn ) = 0 (2.4)

involving x, y and n arbitrary constants which satisfies equation 2.1 for all values of
these constants in some domain: this solution is also named the complete primitive.
The most general solution of an nth order equation contains n arbitrary constants, but
this is difficult to prove for a general equation.
A particular solution or particular integral g(x, y) = 0 is a function satisfying equation 2.1, but containing no arbitrary constants: particular integrals can be obtained from a general solution by giving the constants particular values, but some equations have particular solutions that are independent of the general solution: these are named singular solutions. For instance the equation

    y″ = y′y/x  has the general solution  y(x) = 2c₁ tan(c₂ + c₁ ln x) − 1

and also the singular solution y = c₃, where c₃ is an arbitrary constant, which cannot be obtained from the general solution, see exercise 2.49(c) (page 84). Another example is given in exercise 2.6 (page 59); one origin of singular solutions of first-order equations is discussed in section 2.6.
If y(x) = 0 is a solution of a differential equation it is often referred to as the trivial solution.
The values of the n arbitrary constants are determined by n subsidiary conditions which will be discussed later.

Linear and Nonlinear equations


An important category of differential equation is the linear equation, that is, an equation that is of first degree in the dependent variable and all its derivatives: the most general nth order, linear differential equation has the form

    a_n(x) d^n y/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + ··· + a₁(x) dy/dx + a₀(x) y = h(x)    (2.5)

where h(x) and the a_k(x), k = 0, 1, ···, n, are functions of x, but not of y.
If h(x) = 0 the equation is said to be homogeneous, otherwise it is an inhomogeneous equation.
Linear equations are important for three principal reasons. First, they often approximate physical situations where the appropriate variable, here y, has small magnitude, so terms O(y²) can be ignored. Second, by comparison with nonlinear equations they are relatively easy to solve. Third, their solutions have benign properties that are

well understood: some of these properties are discussed in section 2.4 and others in
chapter 13.
Differential equations which are not linear are nonlinear equations. These equations
are usually difficult to solve and their solutions often have complicated behaviours.
Most equations encountered in this course are nonlinear.

Initial and Boundary value problems


An important distinction we need to mention is that between initial value problems and boundary value problems, which we discuss in the context of the second-order equation

    d²y/dx² = F(x, y, y′),  a ≤ x ≤ b.    (2.6)

The general solution of this equation contains two arbitrary constants, and in practical problems the values of these constants are determined by conditions imposed upon the solution.
In an initial value problem10 the value of the solution and its first derivative are defined at the point x = a. Thus a typical initial value problem is

    d²y/dx² + y = 0,  y(a) = A,  y′(a) = B.    (2.7)

In a boundary value problem the value of the solution is prescribed at two distinct points, normally the end points of the range, x = a and x = b. A typical problem is

    d²y/dx² + y = 0,  y(a) = A,  y(b) = B.    (2.8)
The distinction between initial and boundary value problems is very important. For most initial value problems occurring in practice a unique solution exists, see theorems 2.1 and 2.2, pages 61 and 81 respectively. On the contrary, for most boundary value problems it is not known whether a solution exists and, if it does, whether it is unique: we encounter examples that illustrate this behaviour later in the course. It is important to be aware of this difficulty when numerical methods are used.
For example the solutions of equations 2.7 and 2.8 are, respectively,

    y = A cos(x − a) + B sin(x − a)   and   y = (A sin(b − x) + B sin(x − a))/sin(b − a).

The former solution exists for all a, A and B; the latter exists only if sin(b − a) ≠ 0.
Other types of boundary conditions occur, and are important; they are introduced later as needed.
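The two solution formulae can be coded directly; in the Python sketch below (added for illustration, not part of the notes) the boundary value formula deliberately refuses to divide by sin(b − a) when it vanishes, mirroring the non-existence of a unique solution.

```python
# Initial versus boundary value problems for y'' + y = 0.
import math

def ivp_solution(x, a, A, B):
    """Solution with y(a) = A, y'(a) = B: exists for all a, A and B."""
    return A * math.cos(x - a) + B * math.sin(x - a)

def bvp_solution(x, a, b, A, B):
    """Solution with y(a) = A, y(b) = B: undefined when sin(b - a) = 0."""
    d = math.sin(b - a)
    if abs(d) < 1e-12:
        raise ValueError("no unique solution: sin(b - a) = 0")
    return (A * math.sin(b - x) + B * math.sin(x - a)) / d

value = bvp_solution(0.5, 0.0, 1.0, 1.0, 2.0)   # well posed: b - a = 1
try:
    bvp_solution(0.5, 0.0, math.pi, 1.0, 2.0)   # ill posed: b - a = pi
    failed = False
except ValueError:
    failed = True
```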

Singularities: movable and fixed


The solution of the nonlinear equation

    dy/dx = y²,  y(0) = A,  is  y(x) = A/(1 − Ax).
10 It is named an initial value problem because in this type of system the independent variable, x, is often related to the time and we require the solution subsequent to the initial time, x = a.

This solution is undefined at x = 1/A, a point which depends upon the initial condition.
Thus this singularity in the solution moves as the initial condition changes, and is
therefore named a movable singularity.
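A crude numerical experiment (added here as a sketch; forward Euler with an ad hoc blow-up threshold, not part of the notes) shows the singularity tracking the initial condition: integrating y′ = y² from y(0) = A, the computed solution blows up near x = 1/A.

```python
# Movable singularity: the blow-up point of y' = y^2 depends on y(0) = A.
def blowup_location(A, h=1e-5, cap=1e8):
    """Forward-Euler integration of y' = y^2, y(0) = A, until y exceeds cap."""
    x, y = 0.0, A
    while y < cap:
        y += h * y * y      # Euler step for y' = y^2
        x += h
    return x                # approximate position of the movable singularity

loc2 = blowup_location(2.0)   # exact solution y = 2/(1 - 2x) blows up at x = 1/2
loc4 = blowup_location(4.0)   # exact solution y = 4/(1 - 4x) blows up at x = 1/4
```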
On the other hand the general, non-trivial solution of the linear equation

    dy/dx + y/x = 0  is  y = C/x,  C ≠ 0,    (2.9)
where C is a constant. This solution is undefined at x = 0, regardless of the value of
the integration constant C. This type of singularity in the solution is named a fixed
singularity. The significance of this classification is that the singularities in the solutions
of nonlinear equations are almost always movable. For the solutions of linear equations
they are always fixed and their positions are at the same points as the singularities of
the coefficient functions defining the equation: in equation 2.9 the coefficient of y is
1/x, which is why the solution is singular at x = 0. For the linear equation 2.5 any
singularities in the solution are at points when one or more of the ratios ak (x)/an (x),
k = 0, 1, · · · , n − 1 has a singularity.
In the above examples the singularity is the point where the solution is unbounded. But a singularity of a function is not necessarily a point where it is unbounded; a careful definition of a singularity can only be provided in the context of complex variable theory, and for single-valued functions there are two types of singularity, poles and essential singularities. We cannot describe this theory here11 but, instead, we list some typical examples of these singularities.

    Functional form                                    Name of singularity
    1/(a − x)^n,  n = 1, 2, ···                        Pole
    (a − x)^α,  α a real, non-integer number           Essential singularity

Other types of essential singularities are exp(−1/(x − a)), exp(√(x − a)) and ln(x − a). Functions of a real variable are less refined and can misbehave in a variety of unruly ways: some typical examples are √|x|, 1/|x| and ln |x|.

Exercise 2.1
Show that the following equations have the solutions given, and state whether the singularity in each solution is fixed or movable, and whether the equation is linear or nonlinear.
(a) dy/dx = xy³,  y(0) = A,  y = A/√(1 − A²x²).
(b) dy/dx + y/x = x,  y(1) = A,  y(x) = (3A − 1)/(3x) + x²/3.

Various types of solution


A general solution of a differential equation can take many different forms, some more
useful than others. Most useful are those where y(x) is expressed as a formula involving
a finite number of familiar functions of x; this is rarely possible. This type of solution is
11 A brief summary of the relevant theory is provided in the course Glossary; for a fuller discussion

see Whittaker and Watson (1965).



often named a closed-form solution or sometimes an analytic solution12. It is frequently possible, however, to express solutions as an infinite series of increasing powers of x:
these solutions are sometimes useful, but normally only for a limited range of x.
A solution may be obtained in the form f(x, y) = 0, which cannot be solved to provide a formula for y(x). In such cases the equation f(x, y) = 0, for fixed x, often has many solutions, so the original differential equation has many solutions. A simple example is the function f(x, y) = y² − 2xy + C = 0, where C is a constant, which is a solution of the equation y′ = y/(y − x), which therefore has the two solutions y = x ± √(x² − C).
Another type of solution involves some form of approximation. From the beginning of the 18th century to the mid 20th century a range of techniques was developed that approximate solutions in terms of simple finite formulae: these approximations and the associated techniques are important but do not feature in this course, except for the method described in chapter 14.
Another type of approximation is obtained by solving the equation numerically,
but these methods find only particular solutions, not general solutions, and may fail
for initial value problems on large intervals and for nonlinear boundary value problems.
Moreover, if the equations contain several parameters it is usually difficult to understand
the effect of changing the parameters.

Exercise 2.2
Which of the following differential equations are linear and which are nonlinear?
(a) y″ + x²y = 0,  (b) y″ + xy² = 1,  (c) y″ + |y| = 0,
(d) y‴ + xy′² + y = 0,  (e) y″ + y sin x = e^x,  (f) y″ + y = { 1, y > 0; −1, y ≤ 0 },
(g) y′ = |x|, y(1) = 2,  (h) y′ = 0, y(1)² = 1,
(i) y″ = x, y(0) + y′(0) = 1, y(1) = 2.

Exercise 2.3
Which of the following problems are initial value problems, which are boundary
value problems and which are neither?
(a) y″ + y = sin x, y(0) = 0, y(π) = 0,
(b) y″ + y′ = |x|, y(0) = 1, y′(π) = 0,
(c) y″ + 2y′ + y = 0, y(0) + y(1) = 1, y′(0) = 0,
(d) y″ − y = cos x, y(1) = y(2), y′(1) = 0,
(e) y‴ + 2y″ + x²(y′)² + |y| = 0, y(0) = y′(0) = 0, y″(1) = 1,
(f) y⁗ + 3y‴ + 2x²(y′)² + x³|y| = x, y(0) = 1, y′(0) = 2, y″(0) = 1, y‴(0) = 1,
(g) y″ sin x + y cos x = 0, y(π/2) = 0, y′(π/2) = 1,
(h) y″ + y² = y(x²), y(0) = 0, y′(0) = 1.

12 Used in this context, the term does not imply that the solution is analytic in the sense of complex variable theory.
2.2. GENERAL DEFINITIONS 59

Exercise 2.4
Liénard’s equation,

d²x/dt² + νf(x) dx/dt + g(x) = 0,    ν ≥ 0,
where f (x) and g(x) are well behaved functions and ν a constant, describes certain
important dynamical systems.
(a) Show that if y = dx/dt this equation can be written as the two coupled first-
order equations
dx/dt = y,    dy/dt = −νf(x)y − g(x).
(b) If F(x) = ∫_0^x du f(u), by defining z = (1/ν) dx/dt + F(x), show that an alternative
representation of Liénard’s equation is

dx/dt = ν(z − F(x)),    dz/dt = −g(x)/ν.
This exercise demonstrates that there is no unique way of converting a second-order
equation to a pair of coupled first-order equations. The transformation of
part (b) may seem rather artificial, but if ν ≫ 0 it provides a basis for a good
approximation to the periodic solution of the original equations, which is not easily
obtained by other means.

Exercise 2.5
Clairaut’s equation
An equation considered by A. C. Clairaut (1713 – 1765) is

y = px + f(p),    where    p = dy/dx.

By differentiating with respect to x show that (x + f′(p))p′ = 0 and deduce
that one solution is p = c, a constant, and hence that the general solution is
y = cx + f(c).
Show also that the function derived by eliminating p from the equations

x = −f′(p)    and    y = px + f(p)

is a particular solution. The geometric significance of this solution, which is usually


a singular solution, and its connection with the general solution is discussed in
exercise 2.67 (page 90).
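As an illustration, the following sketch (assuming sympy; the choice f(p) = p² is ours, not from the text) checks that both the general solution y = cx + c² and the singular solution y = −x²/4, obtained by eliminating p from x = −f′(p) = −2p and y = px + p², satisfy Clairaut's equation.

```python
# Clairaut's equation with the illustrative choice f(p) = p**2, so that
# y = p*x + p**2 with p = y'.  (sympy assumed available.)
import sympy as sp

x, c = sp.symbols('x c')

def residual(y):
    p = sp.diff(y, x)                     # p = y'
    return sp.simplify(y - (p*x + p**2))

assert residual(c*x + c**2) == 0          # general solution: a line for each c
assert residual(-x**2/4) == 0             # singular solution: the envelope
```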

Exercise 2.6
Find the general and singular solutions of the differential equation y = px − eᵖ,
where p = y′(x).

Exercise 2.7
Consider the second-order differential equation F(x, y′, y″) = 0 in which y(x)
is not explicitly present. Show that by introducing the new dependent variable
p = dy/dx, this equation is reduced to the first-order equation F(x, p, p′) = 0.

Exercise 2.8
Consider the second-order differential equation F(y, y′, y″) = 0 in which the independent variable x is not explicitly present.
Define p = dy/dx and show that, by considering p as a function of y,

d²y/dx² = dp/dx = p dp/dy,

and hence that the equation reduces to the first-order equation F(y, p, p dp/dy) = 0.

2.3 First-order equations


Of all ordinary differential equations, first-order equations are usually the easiest to
solve using conventional methods, and there are five types that are amenable to these
methods. When confronted with an arbitrary first-order equation, the trick is to recognise
the type, or a transformation that converts the equation into one of these
types. Before describing these types we first discuss the existence and uniqueness of
their solutions.

2.3.1 Existence and uniqueness of solutions


The first-order equation
dy/dx = F(x, y),    a ≤ x ≤ b,    (2.10)
does not have a unique solution unless the value of y is specified at some point x =
c ∈ [a, b]. Why this is so can be seen geometrically: consider the Cartesian plane Oxy,
shown in figure 2.2. Take any point (x, y) with x ∈ [a, b] and where F(x, y) is defined
and single valued, so a unique value of y′(x) is defined: this gives the gradient of the
solution passing through this point, as shown by the arrows.

Figure 2.2 Diagram showing the construction of a solution through a given point in the Oxy-plane.

At an adjacent value of x, x + δx, the value of y on this solution is y(x + δx) =
y(x) + δx F(x, y(x)) + O(δx²), as shown. By taking the successive values of y at x + kδx,
k = 1, 2, · · · , we obtain a unique curve passing through the initial point. By letting
δx → 0 it can be shown that this construction gives the exact solution. This unique

solution can be found only if the initial value of y is specified. Normally y(a) is defined
and this gives the initial value problem
dy/dx = F(x, y),    y(a) = A,    a ≤ x ≤ b.    (2.11)
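The stepping construction just described, y(x + δx) ≈ y(x) + δx F(x, y), is Euler's method. The sketch below (plain Python; the test problem y′ = y, y(0) = 1, with exact solution eˣ, is an illustrative choice) shows the construction converging as δx → 0.

```python
import math

# Euler's method: repeat the tangent-line update y -> y + dx*F(x, y),
# as in the stepwise construction of figure 2.2.
def euler(F, a, A, b, n):
    dx = (b - a) / n
    x, y = a, A
    for _ in range(n):
        y += dx * F(x, y)
        x += dx
    return y

# Illustrative initial value problem y' = y, y(0) = 1; exact solution e**x.
approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 10000)
assert abs(approx - math.e) < 1e-3      # error shrinks as dx -> 0
```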
If F (x, y) and its derivatives Fx and Fy are continuous in a suitable neighbourhood
surrounding the initial point, defined in theorem 2.2 (page 81), then it can be shown
that a unique solution satisfying the initial condition y(a) = A exists. This is essentially
the result developed by Cauchy in his lectures at the École Polytechnique between 1820
and 1830, see Ince (1956, page 76). The solution may not, however, exist in the desired
interval [a, b]. The following, more useful, result was obtained by Picard13 in 1893, and
shows how, in principle, a solution can be constructed.
Theorem 2.1
In a rectangular region D of the Oxy plane a − h ≤ x ≤ a + h, A − H ≤ y ≤ A + H, if
in D we can find positive numbers M and L such that
a) |F (x, y)| < M , and
b) |F (x, y1 ) − F (x, y2 )| < L|y1 − y2 |,
then the sequence of functions
yₙ₊₁(x) = A + ∫_a^x dt F(t, yₙ(t)),    y₀(x) = A,    n = 0, 1, · · · ,    (2.12)

converges uniformly to the exact solution. If F (x, y) is differentiable in D conditions a)


and b) are satisfied.

The proof of this theorem, valid for nth order equations, can be found in Ince14 and
Piaggio15, with a more modern treatment in Arnold16. In general this iterative formula
results in very long expressions even after the first few iterations, or the integrals cannot
be evaluated in closed form.
Typically, if the integrals can be evaluated, equation 2.12 gives the solution as a
series in powers of x, but from this it is usually difficult to determine the radius of
convergence of the series, see for instance exercise 2.10. Even if it converges for all x, it
may be of little practical value when |x| is large: the standard example that illustrates
these difficulties is the Taylor series for sin x, which converges for all x, but for large |x|
is practically useless because of rounding errors, see section 1.3.8.
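The iteration (2.12) is easy to carry out symbolically. The sketch below (assuming sympy; the problem y′ = x + y, y(0) = 1, with exact solution 2eˣ − x − 1, is an illustrative choice, not one from the text) shows the nth iterate reproducing the exact Taylor series through order xⁿ.

```python
import sympy as sp

# Picard iterates (2.12) for the illustrative problem y' = x + y, y(0) = 1,
# whose exact solution is y = 2*exp(x) - x - 1.
x, t = sp.symbols('x t')
A = 1
y = sp.Integer(A)                                     # y_0(x) = A
for _ in range(6):                                    # compute y_1 ... y_6
    y = A + sp.integrate(t + y.subs(x, t), (t, 0, x))

exact = 2*sp.exp(x) - x - 1
delta = sp.expand(y) - sp.series(exact, x, 0, 7).removeO()
# The 6th iterate agrees with the exact solution through order x**6.
assert all(delta.coeff(x, k) == 0 for k in range(7))
```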
Exercise 2.9
Use the iterative formula 2.12 to find the infinite series solution of
dy/dx = y,    y(0) = 1,    x ≥ 0.
For which values of x does this solution exist?
Note that you will need to use induction to construct the infinite series.
13 Picard E 1893, J de Maths 9 page 217: a history of this development is given by Ince (1956, page 63).
14 Ince E L 1956, Ordinary Differential Equations, Dover, chapter 3.
15 Piaggio H T H 1962, An Elementary Treatise on Differential Equations and their Applications, G Bell and Sons, London.
16 Arnold V I 1973, Ordinary Differential Equations, translated and edited by R A Silverman, The MIT Press.

Exercise 2.10
(a) Use the iterative formula 2.12 to show that the second iterate of the solution
to
dy/dx = 1 + xy²,    y(0) = A,    x ≥ 0,

is

y(x) = A + x + (1/2)A²x² + (2/3)Ax³ + (1/4)(1 + A³)x⁴ + (1/5)A²x⁵ + (1/24)A⁴x⁶.
(b) An alternative method of obtaining this series is by direct calculation of the
Taylor series, but for the same accuracy more work is usually required. Find
the values of y′(0), y″(0) and y‴(0) directly from the differential equation and
construct the third-order Taylor series of the solution.
(c) The differential equation shows that for x > 0, y(x) is a monotonic increasing
function, so we expect that for sufficiently large x, xy(x)² ≫ 1, and hence that
the solution will be given approximately by the equation y′ = xy². Use this approximation
to deduce that y(x) → ∞ at some finite value of x. Explain the likely
effect of this on the radius of convergence of this series. In exercise 2.21 (page 68)
it is shown that for large A the singularity is approximately at x = √(2/A).

2.3.2 Separable and homogeneous equations


Equation 2.11 is separable if F (x, y) can be written in the form F = f (x)g(y), with
f (x) depending only upon x and g(y) depending only upon y. Then the equation can
be rearranged in the form of two integrals: the following expression also incorporates
the initial condition

∫_A^y dv/g(v) = ∫_a^x du f(u).    (2.13)
Provided these integrals can be evaluated this gives a representation of the solution,
although rarely in the convenient form y = h(x). This is named the method of separation
of variables, and was used by both Leibniz and John Bernoulli at the end of the
17th century.
Sometimes a non-separable equation y 0 = F (x, y) can be made separable if new
dependent and independent variables, u and v, can be found such that it can be written
in the form
du/dv = U(u)V(v),
with U (u) depending only upon u and V (v) depending only upon v.
A typical separable equation is
dy/dx = cos y sin x,    y(0) = A,
so its solution can be written in the form

∫_A^y dv/cos v = ∫_0^x du sin u,    that is    ln[ tan(y/2 + π/4) / tan(A/2 + π/4) ] = 1 − cos x,

which simplifies to y = −π/2 + 2 tan⁻¹[ exp(1 − cos x) tan(A/2 + π/4) ].
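This closed form can be cross-checked numerically; the following sketch (plain Python, with the illustrative value A = 0.3 and a standard fourth-order Runge–Kutta integrator) compares the two.

```python
import math

# RK4 integration of y' = cos(y)*sin(x), y(0) = A, compared with the
# closed-form solution; A = 0.3 is an illustrative value.
def rk4(F, a, A, b, n):
    h = (b - a)/n
    x, y = a, A
    for _ in range(n):
        k1 = F(x, y)
        k2 = F(x + h/2, y + h*k1/2)
        k3 = F(x + h/2, y + h*k2/2)
        k4 = F(x + h, y + h*k3)
        y += h*(k1 + 2*k2 + 2*k3 + k4)/6
        x += h
    return y

A = 0.3
def closed_form(x):
    return -math.pi/2 + 2*math.atan(math.exp(1 - math.cos(x))*math.tan(A/2 + math.pi/4))

numeric = rk4(lambda x, y: math.cos(y)*math.sin(x), 0.0, A, 2.0, 2000)
assert abs(numeric - closed_form(2.0)) < 1e-8
```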

Exercise 2.11
Use the method of separation of variables to find solutions of the following equa-
tions.
(a) (1 + x²)y′ = x(1 − y²),    (b) (1 + x)y′ − xy = x,
(c) (1 + x)y′ = x√(1 + y), y(0) = 0,    (d) y′ = (1 + 2x + 2y)/(1 − 2x − 2y). Hint: define z = x + y.

A sub-class of equations that can be transformed into separable equations are those for
which F (x, y) depends only upon the ratio y/x, rather than on x and y separately,

dy/dx = F(y/x),    y(a) = A.    (2.14)
Such equations are often named homogeneous equations. The general theory of this
type of equation is developed in the following important exercise.

Exercise 2.12
(a) Show that by introducing the new dependent variable v(x) by the relation
y = xv, equation 2.14 is transformed to the separable form

dv/dx = (F(v) − v)/x,    v(a) = A/a.
Use this transformation to find solutions of the following equations.
(b) y′ = exp(−y/x) + y/x,    (c) y′ = (x + 3y)/(3x + y), y(1) = 0,
(d) x(x + y)y′ = x² + y²,    (e) y′ = (3x² − xy + 3y²)/(2x² + 3xy),
(f) y′ = (4x − 3y − 1)/(3x + 4y − 7). Hint: set x = ξ + a and y = η + b, where (a, b) is the
point of intersection of the lines 4x − 3y = 1 and 3x + 4y = 7.

2.3.3 Linear first-order equations


The equation
dy/dx + yP(x) = Q(x),    y(a) = A,    (2.15)
where P(x) and Q(x) are real functions of x only, is linear because y and y′ occur only
to first order. Its solution can always be expressed as an integral, by first finding a
function, p(x), with which to write the equation as

(d/dx)(y p(x)) = Q(x)p(x),    y(a) = A,    (2.16)
which can be integrated directly. The unknown function, p(x), is found by expand-
ing 2.16, dividing by p(x) and equating the coefficient of y(x) with that in the original
equation. This gives

p′/p = P(x),    which integrates to    p(x) = exp(∫ dx P(x)).    (2.17)

The function p(x) is named the integrating factor: rather than remembering the formula
for p(x) it is better to remember the idea behind the transformation, because similar
ideas are used in other contexts.
Equation 2.16 integrates directly to give
y(x)p(x) = C + ∫ dx p(x)Q(x),    (2.18)

where C is the arbitrary constant of integration, defined by the initial condition. In


this analysis there is no need to include an arbitrary constant in the evaluation of p(x).
This method produces a formula for the solution only if both integrals can be eval-
uated in terms of known functions. If this is not the case it is often convenient to write
the solution in the form
y(x)p(x) = A + ∫_a^x dt Q(t)p(t),    p(t) = exp(∫_a^t du P(u)),    (2.19)

because this expression automatically satisfies the initial condition and the integrals
can be evaluated numerically.
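When the integrals in 2.19 cannot be done in closed form they can be evaluated by quadrature, as the following sketch does (plain Python, Simpson's rule; the test problem y′ + y = x, y(0) = 0, with exact solution y = x − 1 + e⁻ˣ, is an illustrative choice).

```python
import math

# Numerical evaluation of formula (2.19) with Simpson's rule, for the
# illustrative linear problem y' + y = x, y(0) = 0.
def simpson(f, a, b, n=200):            # n must be even
    h = (b - a)/n
    s = f(a) + f(b) + sum((4 if k % 2 else 2)*f(a + k*h) for k in range(1, n))
    return s*h/3

def y_from_2_19(x, P, Q, a, A):
    p = lambda t: math.exp(simpson(P, a, t))       # integrating factor p(t)
    return (A + simpson(lambda t: Q(t)*p(t), a, x)) / p(x)

approx = y_from_2_19(1.5, P=lambda t: 1.0, Q=lambda t: t, a=0.0, A=0.0)
exact = 1.5 - 1 + math.exp(-1.5)        # exact solution x - 1 + e**(-x)
assert abs(approx - exact) < 1e-8
```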

Exercise 2.13
Use a suitable integrating factor to find solutions of the following equations. In
each case show that the singularity in the solution is fixed and relate its position
to properties of the coefficient functions.
(a) (x + 2)y′ + (x + 3)y = 4 exp(−x),
(b) y′ cos x + y sin x = 2 cos²x sin x, y(0) = 0,
(c) x²y′ + 1 + (1 − 2x)y = 0.
(d) Without solving it, use the properties of the differential equation
cos²x (dy/dx) − y sin x cos x + (1 + cos²x) tan x = 0,    y(0) = 2,
to show that the solution is stationary at x = 0.
Find this solution and show that y(0) is a local maximum.

Exercise 2.14
Variation of Parameters
Another method of solving equation 2.15 is to use the method of variation of
parameters which involves finding a function, f (x), which is either a solution of
part of the equation or a particular integral of the whole equation, expressing the
required solution in the form y(x) = v(x)f (x), and finding a simpler differential
equation for the unknown function v(x).
For equation 2.15, we find the solution of
dy/dx + yP(x) = 0.    (2.20)
(a) Show that the solution of equation 2.20 with condition y(a) = 1 is
f(x) = exp(−∫_a^x dt P(t)).

(b) Now assume that the solution of


dy/dx + yP(x) = Q(x),    y(a) = A
can be written as y = v(x)f(x), v(a) = A, and show that f v′ = Q, and hence that
the required solution is

y(x) = f(x) [ A + ∫_a^x dt Q(t)/f(t) ].
Relate this solution to that given by equation 2.19.

Exercise 2.15
Use the idea introduced in exercise 2.6 (page 56) to solve the differential equation

x d²y/dx² − dy/dx = 3x²,    y(1) = A,    y′(1) = A′.

Exercise 2.16
Use the idea introduced in exercise 2.7 (page 56) to solve the differential equation

d²y/dx² + ω²y = 0,    y(0) = A,    y′(0) = 0,    ω > 0.

2.3.4 Bernoulli’s equation


Two Bernoulli brothers, James and John, and Leibniz studied the nonlinear, first-order
equation
dy/dx + yP(x) = yⁿQ(x),    (2.21)
where n ≠ 1 is a constant and P(x) and Q(x) are functions only of x; this equation is
now named Bernoulli's equation. The method used by John Bernoulli is to set z = y^(1−n),
so that dz/dx = (1 − n)y^(−n) dy/dx, and equation 2.21 becomes
dz/dx + (1 − n)P(x)z = (1 − n)Q(x),    (2.22)
which is a first-order equation of the type treated in the previous section.
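The reduction can be checked symbolically; the sketch below (assuming sympy, with the illustrative exponent n = 3) confirms that z = y^(1−n) turns equation 2.21 into the linear equation 2.22.

```python
import sympy as sp

# Symbolic check (sympy assumed; illustrative n = 3) that z = y**(1-n)
# turns Bernoulli's equation (2.21) into the linear equation (2.22).
x = sp.symbols('x')
P, Q, y = (sp.Function(s)(x) for s in ('P', 'Q', 'y'))
n = 3

yp = -P*y + Q*y**n                      # dy/dx from equation (2.21)
z = y**(1 - n)
zp = sp.diff(z, x).subs(sp.Derivative(y, x), yp)
residual = sp.simplify(zp + (1 - n)*P*z - (1 - n)*Q)
assert residual == 0                    # equation (2.22) holds
```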
An example of such an equation is
x(x² − 1) dy/dx − y = x³y²,    y(2) = A.
By dividing through by x(x² − 1) we see that P(x) = −1/(x(x² − 1)), Q(x) = x²/(x² − 1) and
n = 2.
Thus equation 2.22 becomes

dz/dx + z/(x(x² − 1)) = −x²/(x² − 1),    z = 1/y,    z(2) = 1/A.

The integrating factor, equation 2.17, is


p(x) = exp(∫ dx 1/(x(x² − 1)))
     = exp(∫ dx [ 1/(2(x − 1)) + 1/(2(x + 1)) − 1/x ]) = √(x² − 1)/x,

hence the equation for z can be written in the form


(d/dx)( (√(x² − 1)/x) z ) = −x/√(x² − 1),    z(2) = 1/A.

Integrating this and using the condition at x = 2 gives

1/y = √3 (1 + 1/(2A)) x/√(x² − 1) − x.
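A finite-difference spot check of this result (plain Python, with the illustrative value A = 1) is sketched below.

```python
import math

# Spot check (illustrative value A = 1) that the formula just derived,
# 1/y = sqrt(3)*(1 + 1/(2A))*x/sqrt(x**2 - 1) - x, satisfies
# x*(x**2 - 1)*y' - y = x**3*y**2 with y(2) = A.
A = 1.0
def y(x):
    return 1.0/(math.sqrt(3)*(1 + 1/(2*A))*x/math.sqrt(x*x - 1) - x)

assert abs(y(2.0) - A) < 1e-9            # initial condition
h = 1e-6
for x in (2.1, 2.2, 2.3):
    dy = (y(x + h) - y(x - h))/(2*h)     # central difference for y'
    residual = x*(x*x - 1)*dy - y(x) - x**3*y(x)**2
    assert abs(residual) < 1e-4
```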

Exercise 2.17
Solve the following equations.
(a) y′ = 2y − xy², y(0) = 1,    (b) x(1 − x²)y′ + (2x² − 1)y = x²y³,
(c) y′ cos x − y sin x = y³ cos²x, y(0) = 1,    (d) x³y′ = y(x² + y).

2.3.5 Riccati’s equation


Jacopo Francesco, Count Riccati of Venice (1676 – 1754), introduced an important class
of first-order, nonlinear equations. Here we consider the most general of this type of
equation which was introduced by Euler, namely
dy/dx = P(x) + yQ(x) + y²R(x),    (2.23)
where P , Q and R are functions only of x. This equation is now named Riccati’s
equation17 . If R(x) = 0 Riccati’s equation is a linear equation of the type already
considered, and if P (x) = 0 it reduces to Bernoulli’s equation, so we ignore these cases.
Riccati’s studies were mainly limited to the equations
dy/dx = ay² + bx^β    and    dy/dx = ay² + bx + cx²,
where a, b, c and β are constants. The first of these equations was introduced in Riccati’s
1724 paper18 . It can be shown that the solution of the first of the above equations can
be represented by known functions if β = −2 or β = −4k/(2k − 1), k = 1, 2, · · · , and in
1841 Liouville showed that for any other values of β its solution cannot be expressed as
an integral of elementary functions19 . The more general equation 2.23 was also studied
17 It was apparently D'Alembert who, in 1770, first used the name 'Riccati's equation' for this equation.
18 Acta Eruditorum, Suppl. VIII, 1724, pp. 66–73.
19 Here the term 'elementary function' has a specific meaning, which is defined in the glossary.

by Euler. This equation has since appeared in many contexts, indeed whole books are
devoted to it and its generalisations: we shall meet it again in chapter 8.
This type of equation arose in Riccati’s investigations into plane curves with radii
of curvature solely dependent upon the ordinate. The radius of curvature, ρ, of a curve
described by a function y(x), where x and y are Cartesian coordinates, is given by

1/ρ = y″(x) / (1 + y′(x)²)^(3/2).    (2.24)
This expression is derived in exercise 2.66. Thus if ρ depends only upon the ordinate,
y, we would have a second-order equation f(y, y′, y″) = 0, which does not depend
explicitly upon x. Such equations can be converted to first-order equations by the
simple device of regarding y as the independent variable: define p = dy/dx and express
y″(x) in terms of p and p′(y), using the chain rule as follows,

d²y/dx² = dp/dx = (dp/dy)(dy/dx) = p dp/dy.

Thus the second-order equation f(y, y′, y″) = 0 is reduced to the first-order equation
f(y, p(y), p′(y)) = 0. Riccati chose particular functions to give the equations quoted at
the beginning of this section, but note that the symbols have changed their meaning.

Exercise 2.18
If a function y(x) can be expressed as the ratio
y = (cg(x) + G(x)) / (cf(x) + F(x))
where c is a constant and g, G, f and F are differentiable functions of x, by
eliminating the constant c from this equation and its first derivative, show that y
satisfies a Riccati equation.
Later we shall see that all solutions of Riccati’s equations can be expressed in this
form.

Reduction to a linear equation


Riccati’s equation is an unusual nonlinear equation because it can be converted to
a linear, second-order equation by defining a new dependent variable u(x) with the
equation
y = −(1/(uR)) du/dx    (2.25)
to give, assuming R(x) ≠ 0 in the interval of interest,

d²u/dx² − (Q + R′/R) du/dx + PRu = 0,    (2.26)
which is a linear, second-order equation.
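This reduction is easily verified symbolically; the sketch below (assuming sympy) checks, with arbitrary functions P, Q and R, that the Riccati residual is proportional to the residual of the linear equation 2.26.

```python
import sympy as sp

# Symbolic check (sympy assumed) that substitution (2.25), y = -u'/(u*R),
# converts y' = P + Q*y + R*y**2 into the linear equation (2.26).
x = sp.symbols('x')
P, Q, R, u = (sp.Function(s)(x) for s in ('P', 'Q', 'R', 'u'))

y = -sp.diff(u, x)/(u*R)
riccati = sp.diff(y, x) - (P + Q*y + R*y**2)
linear = sp.diff(u, x, 2) - (Q + sp.diff(R, x)/R)*sp.diff(u, x) + P*R*u
# The Riccati residual equals -linear/(u*R) identically.
assert sp.simplify(riccati + linear/(u*R)) == 0
```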

Exercise 2.19
Derive equation 2.26.

Exercise 2.20
(a) Consider the equation

p₂(x) d²u/dx² + p₁(x) du/dx + p₀(x)u = 0.
By introducing a new function y(x) by u = exp(∫ dx y), show that y satisfies
the Riccati equation

dy/dx = −p₀/p₂ − (p₁/p₂) y − y².
(b) The general solution of the second-order equation for u has two arbitrary
constants. The general solution of the first-order equation for y has one arbitrary
constant. Explain how this contradiction can be resolved.

Exercise 2.21
(a) Show that the Riccati equation considered in exercise 2.10 (page 62),

dy/dx = 1 + xy²,    y(0) = A,
has the associated linear equation

x d²u/dx² − du/dx + x²u = 0,    where    y = −(1/(xu)) du/dx.

(b) By substituting the series u = a₀ + a₁x + a₂x² + · · · into this equation show
that a₃ₖ₊₁ = 0, k = 0, 1, · · · , and that

(n² − 1)aₙ₊₁ + aₙ₋₂ = 0.

By choosing (a₀, a₂) = (1, 0) and (a₀, a₂) = (0, 1) obtain the two independent
solutions

u₁(x) = 1 + a₃x³ + a₆x⁶ + · · · + a₃ₖx³ᵏ + · · · ,
u₂(x) = x² + b₅x⁵ + b₈x⁸ + · · · + b₃ₖ₊₂x³ᵏ⁺² + · · · ,

where

a₃ₖ = (−1)ᵏ / [(2² − 1)(5² − 1) · · · ((3k − 1)² − 1)]    and    b₃ₖ₊₂ = (−1)ᵏ / [(4² − 1)(7² − 1) · · · ((3k + 1)² − 1)].

Deduce that the radii of convergence of the series for u₁(x) and u₂(x) are infinite.
(c) Show that the solution of the original Riccati equation is

y(x) = − [ u₁′(x) − A u₂′(x)/2 ] / [ x(u₁(x) − A u₂(x)/2) ].

By considering the denominator show that for large A the singularity in y(x) is
at x = √(2/A), approximately.

Method of solution when one integral is known (optional)


Euler noted that if a particular solution v(x) is known then the substitution y = v +1/z
gives a linear equation for z(x), from which the general solution can be constructed.
Substituting for y = v + 1/z into Riccati’s equation gives

v′ − z′/z² = P + Qv + Rv² + Q/z + 2Rv/z + R/z²,

which simplifies to

z′ + P₁z = −R,    where    P₁ = Q + 2Rv.    (2.27)
This is a linear, first-order equation that can be solved using the methods previously
discussed. There are a number of special values for P , Q and R for which this method
yields the general solution in terms of an integral: for completeness these are listed in
table 2.1. You are not expected to remember this table.

Table 2.1: A list of the coefficients for Riccati's equation, y′ = P(x) + Q(x)y + R(x)y², for
which a simple particular integral, v(x), can be found. In this list λ is a real number and n is an
integer, although in some cases n may be a real number.
Cases 7 and 13 have two particular integrals, which allows the general solution to be expressed
as an integral, see equation 2.28.
Case 16 is special, because the transformation z = xⁿy makes the equation separable, see
exercise 2.26.
Case 17 has an explicit solution if n = −2 and reduces to a Bessel function if n ≠ −2, see
exercise 2.28.

Case | P(x) | Q(x) | R(x) | v
1 | −a(a + f(x)) | f(x) | 1 | a
2 | −b(a + bf(x)) | a | f(x) | b
3 | f(x) | xf(x) | 1 | −1/x
4 | anxⁿ⁻¹ | −axⁿf(x) | f(x) | axⁿ
5 | anxⁿ⁻¹ − a²x²ⁿf(x) | 0 | f(x) | axⁿ
6 | −f(x) | xⁿ⁺¹f(x) | −(n + 1)xⁿ | x⁻ⁿ⁻¹
7 | ax²ⁿ⁻¹f(x) | n/x | f(x)/x | ±√(−a) xⁿ
8 | −a²f(x) − ag(x) | g(x) | f(x) | a
9 | −a²x²ⁿf(x) − axⁿg(x) + anxⁿ⁻¹ | g(x) | f(x) | axⁿ
10 | λf(x) | ae^(λx)f(x) | ae^(λx) | −λe^(−λx)/a
11 | aλe^(λx) | −ae^(λx)f(x) | f(x) | ae^(λx)
12 | aλe^(λx) − a²e^(2λx)f(x) | 0 | f(x) | ae^(λx)
13 | ae^(2λx)f(x) | λ | f(x) | ±√(−a) e^(λx)
14 | f′(x) − f(x)² | 0 | 1 | f(x)
15 | g′(x) | −f(x)g(x) | f(x) | g(x)
16 | bf(x)/x | (axⁿf(x) − n)/x | x²ⁿ⁻¹f(x) | —
17 | bxⁿ | 0 | a | —

Exercise 2.22
Use the method described in this section to find the solutions of the following
equations using the form of the particular solution, v(x), suggested, where a, b
are constants to be determined.
(a) y′ = xy² + (1 − 2x)y + x − 1, y(0) = 1/2, v = a,
(b) y′ = 1 + x − x³ + 2x²y − xy², y(1) = 1, v = ax + b,
(c) 2y′ = 1 + (y/x)², y(1) = 1, v = ax + b,
(d) 2y′ = (1 + eˣ)y + y² − eˣ, y(0) = −1, v = ae^(bx).

Exercise 2.23
For the equation
dy/dx = −a(a + f(x)) + f(x)y + y²,

which is case 1 of table 2.1, show that the general solution is

y = a + p(x) / (C − ∫ dx p(x)),    p(x) = exp(2ax + ∫ dx f(x)).

Exercise 2.24
Decide which of the cases listed in table 2.1 corresponds to the equation
dy/dx = 1 − xy + y².
Find the general solution in terms of an integral, and the solution for the condition
y(0) = a.

Method of solution when two integrals are known (optional)


If two particular integrals, v₁(x) and v₂(x), are known then the general solution can be
expressed as an integral of a known function. Suppose that y is the unknown, general
solution: then from the defining equations,

y′ − v₁′ = (y − v₁)(Q + (y + v₁)R)    and    y′ − v₂′ = (y − v₂)(Q + (y + v₂)R),

and hence

(y′ − v₁′)/(y − v₁) − (y′ − v₂′)/(y − v₂) = (v₁ − v₂)R.
This equation can be integrated directly to give
ln[ (y − v₁)/(y − v₂) ] = ∫ dx (v₁ − v₂)R,    (2.28)

which is the general solution.



Exercise 2.25
Using the trial function y = Axᵃ, where A and a are constants, for each of the
following equations find two particular integrals and hence the general solution.
(a) x² dy/dx + 2 + x²y² = 2xy,
(b) (x² − 1) dy/dx + x + 1 − (x² + 1)y + (x − 1)y² = 0,
(c) x² dy/dx = 2 − x²y².

Exercise 2.26
Show that the equation

dy/dx = b²/(x(1 + x²)) − (2/x) y + x³y²/(1 + x²)

is an example of case 16 of table 2.1 and hence find its general solution.

Exercise 2.27
Use case 7 to show that the particular and general solutions of

dy/dx = −A²x²ⁿ⁻¹f(x) + (n/x) y + (f(x)/x) y²

are y = ±Axⁿ and

y = Axⁿ (1 + B exp(F(x))) / (1 − B exp(F(x))),    where    F(x) = 2A ∫ dx xⁿ⁻¹f(x)

and B is an arbitrary constant.

Exercise 2.28
This exercise is about case 17, that is, the Riccati equation
dy/dx = bxⁿ + ay².
(a) Using the transformation y = −w′/(aw) transform this equation to the linear
equation

d²w/dx² + abxⁿw = 0.
Using the further transformation to the independent variable z = x^α and of the
dependent variable w(z) = z^β u(z), show that u(z) satisfies the equation

z² d²u/dz² + (2β + 1 − 1/α) z du/dz + [ β² − β/α + (ab/α²) z^((n+2)/α) ] u = 0.

Choosing the coefficient of z du/dz to be unity, and α such that n + 2 = 2α, show
that

z² d²u/dz² + z du/dz + [ (4ab/(n + 2)²) z² − 1/(n + 2)² ] u = 0,    n ≠ −2.

Deduce that the general solution is

u(z) = A J_{1/(n+2)}( (2√(ab)/(n + 2)) z ) + B Y_{1/(n+2)}( (2√(ab)/(n + 2)) z ),    z = x^((n+2)/2),

where J_ν(ξ) and Y_ν(ξ) are the two ordinary Bessel functions satisfying Bessel's
equation

ξ² d²w/dξ² + ξ dw/dξ + (ξ² − ν²)w = 0

and α = (n + 2)/2 and β = 1/(n + 2).
(b) If n = −2 show that solutions of the equation for w(x) are w = Bx^γ, where B
is an arbitrary constant and γ are the solutions of γ² − γ + ab = 0. Deduce that
particular solutions of the original Riccati equation are y = −γ/(ax) and hence that
its general solution is

y = (Aγ₂ − γ₁x^d) / (ax(x^d − A)),    d = √(1 − 4ab),    γ₁,₂ = (1 ± d)/2,

where A is an arbitrary constant.

2.4 Second-order equations


2.4.1 Introduction
In this section we introduce some aspects of linear, second-order equations, which frequently
arise in the description of physical systems. There are two themes to this
section: first, in section 2.4.2 we discuss some important general properties of linear equations,
which are largely due to linearity and which make this type of equation much
easier to deal with than nonlinear equations; this discussion is continued in chapter 13,
where we shall see that many properties of the solutions of some equations can be determined
without finding explicit solutions. Second, in section 2.4.4 we describe various
'tricks' to find solutions for particular types of equation.

2.4.2 General ideas


In this section we describe some of the general properties of linear, second-order differ-
ential equations. The equation we consider is the inhomogeneous equation,

p₂(x) d²y/dx² + p₁(x) dy/dx + p₀(x)y = h(x),    a ≤ x ≤ b,    (2.29)
where the coefficients pk (x), k = 0, 1, 2 are real and assumed to be continuous for
x ∈ (a, b). The interval (a, b) may be finite or infinite.
The nature of the solutions depends upon p₂(x), the coefficient of y″(x). The theory
is valid in intervals for which p₂(x) ≠ 0 and for which p₁/p₂ and p₀/p₂ are continuous.
If p₂(x) = 0 at some point x = c the equation is said to be singular at x = c, or to have
a singular point. Singular points, when they exist, always define the ends of intervals
of definition; hence we may always choose p₂(x) ≥ 0 for x ∈ [a, b].

The homogeneous equation associated with equation 2.29 is obtained by setting
h(x) = 0,

p₂(x) d²y/dx² + p₁(x) dy/dx + p₀(x)y = 0,    a ≤ x ≤ b.    (2.30)
All homogeneous equations have the trivial solution y(x) = 0, for all x. Solutions that
do not vanish identically are called nontrivial.
Equations 2.29 and 2.30 can be transformed into other forms which are more use-
ful. The two most useful changes are dealt with in exercise 2.31; the first of these is
important for the general theory discussed in this course and the second is particularly
useful for certain types of approximations.
The solutions of equations 2.29 and 2.30 satisfy the following properties.
P1: Solutions of the homogeneous equation satisfy the superposition principle:
that is, if f(x) and g(x) are solutions of equation 2.30 then so is any linear combination

y(x) = c₁f(x) + c₂g(x)

where c₁ and c₂ are any constants.
P2: Uniqueness of the initial value problem. If p₁/p₂ and p₀/p₂ are continuous
for x ∈ [a, b] then at most one solution of equation 2.29 can satisfy the given
initial conditions y(a) = α₀, y′(a) = α₁, theorem 2.2 (page 81).
P3: If f(x) and g(x) are solutions of the homogeneous equation 2.30 and if, for
some x = ξ, the vectors (f(ξ), f′(ξ)) and (g(ξ), g′(ξ)) are linearly independent,
then every solution of equation 2.30 can be written as a linear combination of
f(x) and g(x),

y(x) = c₁f(x) + c₂g(x).

The two functions f(x) and g(x) are said to form a basis of the differential equation.
P4: The general solution of the inhomogeneous equation 2.29 is given by the sum of
any particular solution and the general solution of the homogeneous equation 2.30.
Finally we observe that an ordinary point, x₀, is one where p₁(x)/p₂(x) and p₀(x)/p₂(x)
can be expanded as Taylor series about x₀, and that at every ordinary point the
solutions of the homogeneous equation 2.30 can also be represented by a Taylor series.
It is common, however, for either or both of p₁(x)/p₂(x) and p₀(x)/p₂(x) to be singular
at some point x₀: such points are named singular points, and are divided
into two classes. If (x − x₀)p₁(x)/p₂(x) and (x − x₀)²p₀(x)/p₂(x) can be expanded as
Taylor series, the singular point is regular: otherwise it is irregular. Irregular singular
points do not occur frequently in physical problems but, for the geometric reasons
discussed in chapter 13, regular singular points are common. For ordinary and regular
singular points there is a well developed and important theory of deriving the series
representation for the solutions of the homogeneous equation, but this is not relevant for
this course; good treatments can be found in Ince20 , Piaggio21 and Simmons22 . There
is no equivalent theory for nonlinear equations.
20 Ince E L 1956, Ordinary Differential Equations, chapter XVI, Dover.
21 Piaggio H T H 1968, An Elementary Treatise on Differential Equations, chapter IX, G Bell and Sons, first published in 1920.

Exercise 2.29
Use property P2 to show that if a nontrivial solution y(x) of equation 2.30 is zero
at x = ξ, then y′(ξ) ≠ 0; that is, the zeros of the solutions are simple.

Exercise 2.30
Consider the two vectors x = (x1 , x2 ) and y = (y1 , y2 ) in the Cartesian plane.
Show that they are linearly independent, that is not parallel, if

x₁y₂ − x₂y₁ = det[ x₁ x₂ ; y₁ y₂ ] ≠ 0.

Exercise 2.31
Consider the second-order, homogeneous, linear differential equation

p₂(x) d²y/dx² + p₁(x) dy/dx + p₀(x)y = 0.
(a) Show that it may be put in the canonical form

(d/dx)( p(x) dy/dx ) + q(x)y = 0    (2.31)

where p(x) = exp(∫ dx p₁(x)/p₂(x)) and q(x) = (p₀(x)/p₂(x)) p(x).
Equation 2.31 is known as the self-adjoint form and this transformation shows that
most linear, second-order, homogeneous differential equations may be cast into this
form: the significance of this transformation will become apparent in chapter 13.
(b) By putting y = uv, with a judicious choice of the function v(x), show that
equation 2.31 may be cast into the form

d2 u √
+ I(x)u = 0, u = y p, (2.32)
dx2
1 ` 02
p + 4qp − 2pp00 . Equation 2.32 is sometimes known as
´
and where I(x) =
4p2
the normal form and I(x) the invariant of the original equation.

2.4.3 The Wronskian


In property P3 we introduced the vectors (f, f′) and (g, g′) and in exercise 2.30 it was
shown that these vectors are linearly independent if

W(f, g; x) = det[ f(x) f′(x) ; g(x) g′(x) ] = f(x)g′(x) − f′(x)g(x) ≠ 0.    (2.33)

The function W (f, g; x) is named the Wronskian23 of the functions f (x) and g(x). This
notation for the Wronskian shows which functions are used to construct it and the
23 Josef Hoëné (1778 – 1853) was born in Poland, moved to France and became a French citizen in 1800. He moved to Paris in 1810 and adopted the name Josef Hoëné de Wronski at about that time, just after he married.

independent variable; sometimes such detail is unnecessary, so either of the notations
W(x) or W(f, g) is freely used.
If W(f, g; x) ≠ 0 for a < x < b the functions f(x) and g(x) are said to be linearly
independent in (a, b); alternatively, if W(f, g; x) = 0 they are linearly dependent. These
rules apply only to sufficiently smooth functions.
The Wronskian of any two solutions, f and g, of equation 2.30 satisfies the identity
W(f, g; x) = W(f, g; a) exp( − ∫_a^x dt p1(t)/p2(t) ).               (2.34)

This identity is proved in exercise 2.36 by showing that W (x) satisfies a first-order
differential equation and solving it. Because the right-hand side of equation 2.34 always
has the same sign, it follows that the Wronskian of two solutions is either always positive,
always negative or always zero. Thus, if f and g are linearly independent at one point
of the interval (a, b) they are linearly independent at all points of (a, b). Conversely, if
W (f, g) vanishes anywhere it vanishes everywhere. Further, if p1 (x) = 0 the Wronskian
is constant.
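As a quick numerical illustration of identity 2.34 (the example equation and all numerical values here are my own, not part of the notes), take y″ + 3y′ + 2y = 0, which has the independent solutions f = e⁻ˣ and g = e⁻²ˣ, so that p1/p2 = 3:

```python
import math

# Sketch (my own example): check identity (2.34) for y'' + 3y' + 2y = 0,
# which has solutions f = exp(-x), g = exp(-2x), so p1(t)/p2(t) = 3.
f,  fp = lambda x: math.exp(-x),   lambda x: -math.exp(-x)
g,  gp = lambda x: math.exp(-2*x), lambda x: -2*math.exp(-2*x)

def W(x):
    # Wronskian, equation 2.33
    return f(x)*gp(x) - fp(x)*g(x)

for x in (0.0, 0.5, 2.0):
    rhs = W(0.0) * math.exp(-3*x)   # equation 2.34 with a = 0
    assert abs(W(x) - rhs) < 1e-12
```

Here W(0) = −1, so the Wronskian stays negative everywhere, in line with the sign argument above.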
The Wronskian can be used with one known solution to construct another. Suppose
that f (x) is a known solution and let g(x) be another (unknown) solution. The equation
for W (x) can be interpreted as a first-order equation for g,

g′f − gf′ = W(x),

and, because g′f − gf′ = f² d/dx( g/f ), this equation, with 2.34, can be written in the
form

d/dx( g/f ) = ( W(a)/f(x)² ) exp( − ∫_a^x dt p1(t)/p2(t) )

having the general solution

g(x) = f(x) [ C + W(a) ∫_a^x ds (1/f(s)²) exp( − ∫_a^s dt p1(t)/p2(t) ) ],          (2.35)

where C is an arbitrary constant.
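Formula 2.35 is easy to test numerically. In the following sketch (the equation y″ − y = 0 and every numerical choice are mine, not from the notes) we start from the known solution f = eˣ, for which p1 = 0, take C = 0 and W(0) = 1, and recover the second solution g = sinh x by quadrature:

```python
import math

# Sketch (my example): apply equation 2.35 to y'' - y = 0 with f(x) = exp(x).
# With p1 = 0, C = 0 and W(0) = 1 it reduces to
#     g(x) = f(x) * integral_0^x ds / f(s)^2,
# which should reproduce sinh(x).

def second_solution(f, x, n=1000):
    # trapezoidal estimate of the integral of 1/f(s)^2 over [0, x]
    h = x / n
    total = 0.5 * (1.0/f(0.0)**2 + 1.0/f(x)**2)
    for k in range(1, n):
        total += 1.0 / f(k*h)**2
    return f(x) * total * h

for x in (0.5, 1.0, 2.0):
    assert abs(second_solution(math.exp, x) - math.sinh(x)) < 1e-5
```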

Exercise 2.32
If F(z) is a differentiable function and g = F(f), with f(x) a differentiable,
non-constant function of x, show that W(f, g) = 0 only if g(x) = cf(x) for some
constant c.

Exercise 2.33
Show that the functions a1 sin x + a2 cos x and b1 sin x + b2 cos x are linearly inde-
pendent if a1 b2 6= a2 b1 .

Exercise 2.34
Use equation 2.35 to show that if f(x) is any nontrivial solution of the equation
y″ + q(x)y = 0 for a < x < b, then another solution is

g(x) = f(x) ∫_a^x ds/f(s)².

Exercise 2.35
(a) If f and g are linearly independent solutions of the homogeneous differential
equation y″ + p1(x)y′ + p0(x)y = 0, show that

p1(x) = − (fg″ − gf″)/W(f, g; x)    and    p0(x) = (f′g″ − g′f″)/W(f, g; x).

(b) Construct three linear, homogeneous, second-order differential equations having the following bases of solutions:
(i) (x, sin x),   (ii) (xᵃ, xᵃ⁺ᵇ),   (iii) (x, eᵃˣ),
where a and b are real numbers. Determine any singular points of these equations,
and in case (ii) consider the limit b → 0.

Exercise 2.36
By differentiating the Wronskian W (f, g; x), where f and g are linearly indepen-
dent solutions of equation 2.30, show that it satisfies the first-order differential
equation
dW/dx = − ( p1(x)/p2(x) ) W
and hence derive equation 2.34.

2.4.4 Second-order, constant coefficient equations


A linear, second-order equation with constant coefficients has the form
a2 d²y/dx² + a1 dy/dx + a0 y = h(x),                                 (2.36)

where ak, k = 0, 1, 2, are real constants, h(x) is a real function of x and, with no loss
of generality, a2 > 0. Normally this type of equation is solved by finding the general
solution of the homogeneous equation,
a2 d²y/dx² + a1 dy/dx + a0 y = 0,                                    (2.37)
which contains two arbitrary constants, and adding to this any particular solution of
the original inhomogeneous equation, defined by equation 2.36.
The first part of this process is trivial because, for any constant λ, the nth derivative
of exp(λx) is λn exp(λx), that is, a constant multiple of the original function. Thus if
we substitute y = exp(λx) into the homogeneous equation a quadratic equation for λ
is obtained,
a2 λ² + a1 λ + a0 = 0.                                               (2.38)
This has two roots, λ1 and λ2 , and provided λ1 6= λ2 we have two independent solutions,
giving the general solution

yg = c1 exp(λ1 x) + c2 exp(λ2 x),    (λ1 ≠ λ2).                      (2.39)


If λ1 and λ2 are real, so are the constants c1 and c2. If the roots are complex then
λ1 = λ2* and, to obtain a real solution, we need c1 = c2*. The case λ1 = λ2 is special
and will be considered after the next exercise.
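This recipe can be sketched in a few lines of code; the example equation y″ + 5y′ + 6y = 0 and the finite-difference check are my own choices, not part of the notes:

```python
import cmath

# Roots of the characteristic equation (2.38) for y'' + 5y' + 6y = 0 (my example)
a2, a1, a0 = 1.0, 5.0, 6.0
disc = cmath.sqrt(a1*a1 - 4*a2*a0)
l1 = (-a1 + disc) / (2*a2)          # -2
l2 = (-a1 - disc) / (2*a2)          # -3

def y(x, c1=1.0, c2=1.0):
    # general solution (2.39)
    return (c1*cmath.exp(l1*x) + c2*cmath.exp(l2*x)).real

# crude finite-difference check that the equation is satisfied
h = 1e-4
for x in (0.0, 0.7, 1.3):
    ypp = (y(x+h) - 2*y(x) + y(x-h)) / h**2
    yp  = (y(x+h) - y(x-h)) / (2*h)
    assert abs(a2*ypp + a1*yp + a0*y(x)) < 1e-4
```

Using `cmath` means the same snippet also handles complex-conjugate roots without modification.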

Exercise 2.37
Find real solutions of the following constant coefficient, differential equations: if
no initial or boundary values are given find the general solution. Here ω and k
are real.

(a) y″ + 5y′ + 6y = 0,
(b) 4y″ + 8y′ + 3y = 0,
(c) y″ + y′ + y = 0,
(d) y″ + 4y′ + 5y = 0,    y(0) = 0,  y′(0) = 2,
(e) y″ + 6y′ + 13y = 0,   y(0) = 2,  y′(0) = 1,
(f) y″ + ω²y = 0,    y(0) = a,  y′(0) = b,
(g) y″ − ω²y = 0,    y(0) = a,  y′(0) = b,
(h) y″ + 2ky′ + (ω² + k²)y = 0.

Repeated roots
If the roots of equation 2.38 are identical, that is a1² = 4a0a2, then λ = −a1/(2a2) and the
above method yields only one solution, y = exp(λx). The other solution of 2.37 is found
using the method of variation of parameters, introduced in exercise 2.14. Assuming
that the other solution is y = v(x) exp(λx), where v(x) is an unknown function, and
substituting into equation 2.37 gives

d²v/dx² = 0.                                                         (2.40)

Thus another independent solution is y = x exp(λx), and the general solution is

yg(x) = (c1 + c2 x) exp(λx),    λ = −a1/(2a2),    (a1² = 4a0a2).     (2.41)

Exercise 2.38
Derive equation 2.40.

Exercise 2.39
Find the solutions of

(a) y″ − 6y′ + 9y = 0,    y(0) = 0,  y′(0) = b,
(b) y″ + 2y′ + y = 0,    y(0) = a,  y(X) = b.

2.4.5 Inhomogeneous equations


The general solution of the inhomogeneous equation

a2 d²y/dx² + a1 dy/dx + a0 y = h(x)                                  (2.42)
can be written as the sum of the general solution of the homogeneous equation and
any particular integral of the inhomogeneous equation. This is true whether or not the
coefficients ak , k = 0, 1 and 2, are constant: but here we consider only the simpler
constant coefficient case. Boundary or initial conditions must be applied to this sum,
not the component parts.
There are a variety of methods for attempting to find a particular integral. The
problem can sometimes be made simpler by splitting h(x) into a sum of simpler terms,
h = h1 + h2, and finding particular integrals y1 and y2 for h1 and h2: because the
equation is linear the required particular integral is y1 + y2.
Sometimes the integral can be found by a suitable guess. Thus if h(x) = xⁿ, n
being a positive integer, we expect a particular integral to have the form
c0 + c1 x + · · · + cn xⁿ. By substituting this into equation 2.42 and equating the
coefficients of each power of x, n + 1 equations for the n + 1 coefficients are obtained.
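A minimal numerical sketch of this guess (the equation y″ + 2y′ + y = x² and all values are my own illustration, not from the notes): matching the coefficients of x², x and 1 gives a triangular system that can be solved from the top down.

```python
# Undetermined-coefficients sketch for a2 y'' + a1 y' + a0 y = x^2 (a0 != 0).
a2, a1, a0 = 1.0, 2.0, 1.0            # my example: y'' + 2y' + y = x^2

c2 = 1.0 / a0                          # coefficient of x^2:  a0 c2 = 1
c1 = -2.0 * a1 * c2 / a0               # coefficient of x:    2 a1 c2 + a0 c1 = 0
c0 = -(2.0 * a2 * c2 + a1 * c1) / a0   # constant term:       2 a2 c2 + a1 c1 + a0 c0 = 0

# the residual should vanish identically in x
for x in (0.0, 1.0, -2.5):
    y, yp, ypp = c2*x*x + c1*x + c0, 2*c2*x + c1, 2*c2
    assert abs(a2*ypp + a1*yp + a0*y - x*x) < 1e-12
print(c2, c1, c0)                      # prints: 1.0 -4.0 6.0
```

So for this example the particular integral is y = x² − 4x + 6.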

Exercise 2.40
Find the general solution of

d²y/dx² + ω²y = x²,    ω > 0,

and find the solution that satisfies the initial conditions y(0) = a, y′(0) = b.

If h(x) = e^{µx}, where µ may be complex, then provided a2µ² + a1µ + a0 ≠ 0 a particular
integral is e^{µx}/(a2µ² + a1µ + a0). But if a2µ² + a1µ + a0 = 0 we can use the method
of variation of parameters by substituting y = v(x)e^{µx} into the equation, to form a
simpler equation for v(x). These calculations form the basis of the next exercise.

Exercise 2.41
(a) For the equation

a2 d²y/dx² + a1 dy/dx + a0 y = e^{µx},

by substituting the function y = Ae^{µx}, A a constant, into the equation find a
particular integral if a2µ² + a1µ + a0 ≠ 0.
(b) If a2µ² + a1µ + a0 = 0, put y = v(x)e^{µx} and show that v satisfies the equation

a2 d²v/dx² + (2µa2 + a1) dv/dx = 1,

and that this equation has the general solution

v = x/(2µa2 + a1) + B − ( Aa2/(2µa2 + a1) ) e^{−x(2µ + a1/a2)}.

Hence show that a particular integral is y = x e^{µx}/(2µa2 + a1).

Exercise 2.42
Find the solutions of the following inhomogeneous equations with the initial conditions y(0) = a, y′(0) = b.

(a) d²y/dx² + y = e^{ix},    (b) d²y/dx² − y = sin x,    (c) d²y/dx² − 4y = 6,
(d) d²y/dx² + 9y = 1 + 2x,   (e) d²y/dx² − dy/dx − 6y = 14 sin 2x + 18 cos 2x.

A more systematic method of finding particular integrals is to convert the solution


to an integral. This transformation is achieved by applying the method of variation
of parameters using two linearly independent solutions of the homogeneous equation,
which we denote by f (x) and g(x). We assume that the solution of the inhomogeneous
equation can be written in the form

y = c1 (x)f (x) + c2 (x)g(x), (2.43)

where c1 (x) and c2 (x) are unknown functions, to be found. It transpires that both of
these are given by separable, first-order equations; but the analysis to derive this result
is a bit involved.
By substituting this expression into the differential equation, it becomes

a2(c1″f + 2c1′f′ + c1f″) + a2(c2″g + 2c2′g′ + c2g″)
    + a1(c1′f + c1f′) + a1(c2′g + c2g′) + a0(c1f + c2g) = h(x).

We expect this expression to simplify because f and g satisfy the homogeneous equation: some re-arranging gives

c1(a2f″ + a1f′ + a0f) + c2(a2g″ + a1g′ + a0g)
    + a2(c1″f + 2c1′f′) + a1c1′f
    + a2(c2″g + 2c2′g′) + a1c2′g = h(x).

The first line of this expression is identically zero; the second line can be written in the
form

a2(c1″f + c1′f′) + a2c1′f′ + a1c1′f = a2(c1′f)′ + a2(c1′f′) + a1(c1′f),

and similarly for the third line. Adding these two expressions we obtain

a2 (c1′f + c2′g)′ + a2(c1′f′ + c2′g′) + a1(c1′f + c2′g) = h(x).      (2.44)

This identity will hold if c1 and c2 are chosen to satisfy the two equations

c1′f + c2′g = 0,
c1′f′ + c2′g′ = h(x)/a2.                                             (2.45)
Any solutions of these equations will yield a particular integral.
For each x, these are linear equations in c1′ and c2′, and since the Wronskian
W(f, g) ≠ 0 for any x, they have the unique solutions

c1′(x) = − h g/(a2 W(f, g)),    c2′(x) = h f/(a2 W(f, g)).           (2.46)

Integrating these gives a particular integral. Notice that in this derivation, at no point
did we need to assume that the coefficients a0 , a1 and a2 are constant. Hence this result
is true for the general case, when these coefficients are not constant, although it is then
more difficult to find the solutions, f and g, of the homogeneous equation.
As an example we re-consider the problem of exercise 2.40, for which two linearly
independent solutions are f = cos ωx and g = sin ωx, giving W(f, g) = ω, and equations 2.46 give

c1 = −(1/ω) ∫ dx x² sin ωx = ( x²/ω² − 2/ω⁴ ) cos ωx − ( 2x/ω³ ) sin ωx,
c2 =  (1/ω) ∫ dx x² cos ωx = ( x²/ω² − 2/ω⁴ ) sin ωx + ( 2x/ω³ ) cos ωx.

Thus

y = c1 cos ωx + c2 sin ωx + x²/ω² − 2/ω⁴,
the result obtained previously, although the earlier method was far easier.
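The same calculation can be done purely numerically (the value ω = 1.5 and all other choices below are mine): integrating c1′ and c2′ from equations 2.46 by the trapezoidal rule, with lower limit 0, builds a particular integral that differs from x²/ω² − 2/ω⁴ only by the homogeneous piece (2/ω⁴) cos ωx fixed by the choice of lower limit.

```python
import math

# Sketch (numerical choices mine): particular integral of y'' + w^2 y = x^2
# from equations 2.46 with f = cos(wx), g = sin(wx), W = w and a2 = 1.
w = 1.5

def quad(F, a, b, n=2000):
    # trapezoidal rule
    h = (b - a) / n
    s = 0.5 * (F(a) + F(b)) + sum(F(a + k*h) for k in range(1, n))
    return s * h

def y_part(x):
    c1 = quad(lambda t: -t*t*math.sin(w*t)/w, 0.0, x)
    c2 = quad(lambda t:  t*t*math.cos(w*t)/w, 0.0, x)
    return c1*math.cos(w*x) + c2*math.sin(w*x)

for x in (0.5, 1.0, 2.0):
    expected = x*x/w**2 - 2/w**4 + (2/w**4)*math.cos(w*x)
    assert abs(y_part(x) - expected) < 1e-5
```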

Exercise 2.43
Find the general solution of the equation  d²y/dx² + y = tan x,  0 ≤ x < π/2.

2.4.6 The Euler equation


The linear, second-order differential equation

a2 x² d²y/dx² + a1 x dy/dx + a0 y = 0,    a2 > 0,                    (2.47)
where the coefficients a0 , a1 and a2 are constants, is named a (homogeneous) Euler
equation, of second order. This equation is normally defined on an interval of the x-
axis which does not include the origin except, possibly, as an end point. It is one of the
relatively few equations with variable coefficients that can be solved in terms of simple
functions.
If we introduce a new independent variable, t, by x = eᵗ, then

dy/dx = (dy/dt)(dt/dx) = (1/x)(dy/dt),    that is    x dy/dx = dy/dt.          (2.48)
A second differentiation gives

x d/dx( x dy/dx ) = d²y/dt²,    that is    x² d²y/dx² = d²y/dt² − dy/dt,          (2.49)

and hence equation 2.47 becomes the constant coefficient equation

a2 d²y/dt² + (a1 − a2) dy/dt + a0 y = 0.                             (2.50)
This can be solved using the methods described in section 2.4.4.
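A quick check (the coefficients a2 = 1, a1 = 2, a0 = −6 are my own example): by equation 2.50 with y = e^{λt} = x^λ, the roots of a2λ² + (a1 − a2)λ + a0 = 0 should give solutions x^λ of the original Euler equation.

```python
import cmath

# Sketch (my example coefficients): y = x^lambda solves the Euler equation
# a2 x^2 y'' + a1 x y' + a0 y = 0 when a2 l^2 + (a1 - a2) l + a0 = 0.
a2, a1, a0 = 1.0, 2.0, -6.0
disc = cmath.sqrt((a1 - a2)**2 - 4*a2*a0)
roots = [(-(a1 - a2) + s*disc) / (2*a2) for s in (1, -1)]   # 2 and -3

h = 1e-5
for lam in roots:
    y = lambda x, lam=lam: (x**lam).real
    for x in (0.8, 1.5, 3.0):
        ypp = (y(x+h) - 2*y(x) + y(x-h)) / h**2
        yp  = (y(x+h) - y(x-h)) / (2*h)
        assert abs(a2*x*x*ypp + a1*x*yp + a0*y(x)) < 1e-3
```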

Exercise 2.44
Use the method described above to solve the equation

x² d²y/dx² + 2x dy/dx − 6y = 0,    y(1) = 1,  y′(1) = 0,  x ≥ 1.

Exercise 2.45
Find the solution of
x d²y/dx² + dy/dx = 0,    y(1) = A,  y′(1) = A′,  x ≥ 1.

Exercise 2.46
Show that if x = eᵗ then

d³y/dt³ = x³ d³y/dx³ + 3x² d²y/dx² + x dy/dx,

and hence that

x³ d³y/dx³ = d³y/dt³ − 3 d²y/dt² + 2 dy/dt.

Hence find the general solution of the equation

x³ d³y/dx³ − 3x² d²y/dx² + 6x dy/dx − 6y = √x,    x ≥ 0.

2.5 An existence and uniqueness theorem


Here we quote a basic existence theorem for coupled first-order systems, which is less
restrictive than theorem 2.1, but which does not provide a method of constructing
the solution. A proof was first given by Cauchy in his lecture course at the École
polytechnique between 1820 and 1830.
Theorem 2.2
For the n coupled first-order, autonomous, initial value system
dyk/dx = fk(y),    y(x0) = A,                                        (2.51)
where y = (y1 , y2 , . . . , yn ), A = (A1 , A2 , . . . , An ) and where fk (y) are differentiable
functions of y on some domain D, ak ≤ yk ≤ bk , −∞ ≤ ak < bk ≤ ∞, k = 1, 2, · · · , n,
then:
(i) for every real x0 and A ∈ D there exists a solution satisfying the initial conditions
y(x0) = A; and
(ii) this solution is unique in some neighbourhood of x0.
A geometric understanding of this theorem comes from noting that in any region
where fk (y) 6= 0, for some k, all solutions are non-intersecting, smooth curves. More
precisely, in a neighbourhood of a point y0 where fk(y0) ≠ 0, for some k, if all fk
have continuous second derivatives, it is possible to find a new set of variables u such
that in the neighbourhood of y0 equation 2.51 transforms to du1/dx = 1 and duk/dx = 0,

k = 2, 3, · · · , n. In this coordinate system the solutions are straight lines, so such a


transformation is said to rectify the system. From this it follows that a unique solution
exists. A proof of the above theorem that uses this idea is given in Arnold24.

There are two points to notice. First, the non-autonomous system

dyk/dx = fk(y, x),    y(x0) = A,

can, by setting x = yn+1, fn+1 = 1, be converted to an (n + 1)-dimensional autonomous
system.
Second, we note that differentiability of fk (y), for all k, is necessary for uniqueness.
Consider, for instance, the system dy/dx = y^{2/3}, y(0) = 0, which has the two solutions
y(x) = 0 and y(x) = (x/3)³.
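Both claimed solutions are easy to verify pointwise; the following quick check (numerical values mine) confirms that y = 0 satisfies the equation trivially, and that y = (x/3)³ does so as well:

```python
# Check (values mine) of the non-uniqueness example dy/dx = y^(2/3), y(0) = 0.
y2 = lambda x: (x/3.0)**3

assert y2(0.0) == 0.0                  # both solutions meet y(0) = 0
h = 1e-6
for x in (0.5, 1.0, 2.0):
    deriv = (y2(x+h) - y2(x-h)) / (2*h)        # central difference
    assert abs(deriv - y2(x)**(2.0/3.0)) < 1e-8
```

The failure of uniqueness here traces back to y^{2/3} not being differentiable at y = 0.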

2.6 Envelopes of families of curves (optional)


The equation f (x, y) = 0 defines a curve in the Cartesian space Oxy. If the function
contains a parameter C, the equation becomes f (x, y, C) = 0 and a different curve is
obtained for each value of C. By varying C over an interval we obtain a family of curves:
the envelope of this family is the curve that touches every member of this family.
This envelope curve is given by eliminating C between the two equations
f(x, y, C) = 0    and    ∂f/∂C = 0.                                  (2.52)
Before proving this result we illustrate the idea with the equation

x cos φ + y sin φ = r, r > 0, 0 ≤ φ ≤ 2π, (2.53)

where r is fixed and φ is the parameter: for a given value of φ this equation defines a
straight line cutting the x and y axes at r/cos φ and r/sin φ, respectively, and passing
a minimum distance of r from the origin. Segments of five of these lines are shown in
figure 2.3, and it is not too difficult to imagine more segments and to see that the
envelope is a circle of radius r.
For this example equations 2.52 become

x cos φ + y sin φ = r    and    −x sin φ + y cos φ = 0.

Figure 2.3 Diagram showing five examples of the line defined in equation 2.53, with r = 1 and φ = kπ/14, k = 2, 3, · · · , 6.

Squaring and adding these eliminates φ to give x² + y² = r², which is the equation of
a circle with radius r and centre at the origin.
24 Arnold V I 1973 Ordinary Differential Equations, section 32.6, translated and edited by R A

Silverman, The MIT Press.
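The envelope calculation above is easy to confirm numerically; in this sketch (values mine, using the same lines as figure 2.3) the point solving both equations for each φ is (r cos φ, r sin φ), which indeed lies on the circle:

```python
import math

# Numerical confirmation (values mine) that eliminating phi from f = 0 and
# df/dphi = 0 for the family x cos(phi) + y sin(phi) = r lands on the circle
# x^2 + y^2 = r^2.
r = 1.0
for k in range(2, 7):
    phi = k * math.pi / 14
    x, y = r*math.cos(phi), r*math.sin(phi)
    assert abs(x*math.cos(phi) + y*math.sin(phi) - r) < 1e-12   # f = 0
    assert abs(-x*math.sin(phi) + y*math.cos(phi)) < 1e-12      # df/dphi = 0
    assert abs(x*x + y*y - r*r) < 1e-12                          # on the circle
```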



The significance of envelopes in the theory of first-order differential equations is as


follows. Suppose that f (x, y, C) = 0 is the general solution of a first-order equation,
so on each member of the family of curves f (x, y, C) = 0 the gradient satisfies the
differential equation. Where the envelope touches a member of the family the gradient
and coordinates of the point on the envelope also satisfy the differential equation. But,
by definition, the envelope touches some member of the family at every point along its
length. We therefore expect the envelope to satisfy the differential equation: since it
does not include any arbitrary constant and is not one of the family of curves, it is a
singular solution.
We prove equation 2.52 by considering neighbouring members of the family of curves
f(x, y, C + kδC) = 0, k = 1, 2, 3, · · · , 0 < δC ≪ |C|, such that the curves defined by
f(x, y, C) and f(x, y, C + δC) intersect at P, those by f(x, y, C + δC) and
f(x, y, C + 2δC) at Q, and so on, as shown in figure 2.4. As δC → 0 the members of
this family of curves approach each other, as do the points P, Q and R. The locus of
these points forms a curve, each point of which lies on successive members of the
original family.
Figure 2.4

Consider two neighbouring members of the family,

f(x, y, C) = 0    and    f(x, y, C + δC) = 0;

as δC → 0 we require values of x and y that satisfy both of these equations. The second
equation can be expanded,

f(x, y, C) + δC fC(x, y, C) + O(δC²) = 0,    and hence    fC(x, y, C) = 0.

Thus the points on the locus of P , Q, R · · · each satisfy both equations f (x, y, C) = 0
and fC (x, y, C) = 0, so the equation of the envelope is obtained by eliminating C from
these equations.

Exercise 2.47
The equation of a straight line intersecting the x-axis at a and the y-axis at b is
x/a + y/b = 1.
(a) Find the envelope, in the first quadrant, of the family of straight lines such
that the sum of the intercepts is constant, a + b = d > 0.
(b) Find the envelope, in the first quadrant, of the family of straight lines such
that the product of the intercepts is constant, ab = d².

2.7 Miscellaneous exercises


Exercise 2.48
Find the solution of each of the following differential equations: if no initial or
boundary values are given, find the general solution.
(a) dy/dx + y = y^{1/2},    y(1) = A > 0.
(b) dy/dx − y = y^{1/5},    y(1) = A > 0.
(c) (1/y) dy/dx − x = xy,    y(0) = 1/2.
(d) dy/dx = sin(x − y),    y(0) = 0.
(e) x dy/dx = y + √(x² + y²),    y(0) = A > 0.
(f) dy/dx = (x + 2y − 1)/(2x + 4y + 3),    y(1) = 0.
(g) y dy/dx − x + y = 0,    y(1) = 1.
(h) dy/dx + y sinh x = sinh 2x,    y(0) = 1.
(i) x dy/dx + 2y = x³,    y(1) = 0.
(j) dy/dx = y tan x + y³ tan³ x,    y(0) = √2.
(k) x³ dy/dx = y(x² + y).

Exercise 2.49
Find the solution of each of the following differential equations: if no initial or
boundary values are given, find the general solution.
(a) d²y/dx² = 2x (dy/dx)²,    y(0) = 0,  y′(0) = 1.
(b) x² d²y/dx² − x dy/dx + y = x³ ln x.
(c) x d²y/dx² = y dy/dx.
(d) x d²y/dx² − dy/dx = 3x².
(e) x d²y/dx² = (dy/dx)².
(f) d²y/dx² + (x + a)(dy/dx)² = 0,    y(0) = A,  y′(0) = B,  0 < Ba² < 2.
(g) (y − a) d²y/dx² + (dy/dx)² = 0.

Exercise 2.50
For each of the following equations, show that the given function, v(x), is a solution
and hence find the general solution.
(a) x(1 − x)² d²y/dx² + (1 − x²) dy/dx + (1 + x)y = 0,    v(x) = 1 − x.
(b) x d²y/dx² + 2 dy/dx + xy = 0,    v = (cos x)/x.
(c) d²y/dx² + 2 tan x dy/dx + 2y tan² x = 0,    v = eˣ cos x.

Exercise 2.51
(a) Consider the Riccati equation with constant coefficients,

dy/dx = a + by + cy²,    c ≠ 0,

where a, b and c are constants.
Show that if b² ≠ 4ac the general solution is

y(x) = −b/(2c) + (ω/c) tan(ωx + α),    ω = ½√(4ac − b²),    if 4ac > b²,
y(x) = −b/(2c) − (ν/c) tanh(νx + α),   ν = ½√(b² − 4ac),    if 4ac < b²,

where α is a constant. Also find the general solution if b² = 4ac.
(b) Find the solutions of the following equations:
(i) y′ = 2 + 3y + y²,    (ii) y′ = 9 − 4y²,
(iii) y′ = 1 − 2y + y²,   (iv) y′ = 1 + 4y + 5y².

Exercise 2.52
(a) Show that the change of variable v = y′/y reduces the second-order equation

d²y/dx² + a1(x) dy/dx + a0(x)y = 0

to the Riccati equation

dv/dx + v² + a1(x)v + a0(x) = 0.

Hence deduce that the problem of solving the original second-order equation is
equivalent to solving the coupled first-order equations

dy/dx = vy,    dv/dx = −v² − a1(x)v − a0(x).

This equation is named the associated Riccati equation.
(b) Using an appropriate solution of y″ + ω²y = 0, where ω is a real constant,
show that the general solution of v′ + v² + ω² = 0 is v = −ω tan(ωx + c).

Exercise 2.53
(a) If x(t) and y(t) satisfy the pair of coupled, linear equations

dx/dt = ax + by,    dy/dt = cx + dy,

where a, b, c and d are constants, show that the ratio z = y/x satisfies the Riccati
equation

dz/dt = c + (d − a)z − bz².

(b) Hence show that the general solution of this Riccati equation is

z = ( λ1 e^{λ1 t} + Cλ2 e^{λ2 t} ) / ( b (e^{λ1 t} + Ce^{λ2 t}) )    where    2λ1,2 = (d − a) ± √( (d − a)² + 4bc )

and C is an arbitrary constant.
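The claimed solution can be spot-checked numerically; in this sketch the coefficients a, b, c, d = 1, 2, 3, 0 and the constant C = 0.7 are my own sample values, for which 2λ = (d − a) ± √((d − a)² + 4bc) gives λ = 2 and −3:

```python
import math

# Sketch (sample coefficients mine): verify the claimed general solution of
# dz/dt = c + (d - a) z - b z^2 for a, b, c, d = 1, 2, 3, 0.
a, b, c, d = 1.0, 2.0, 3.0, 0.0
l1, l2, C = 2.0, -3.0, 0.7             # lambda_1, lambda_2, arbitrary constant

def z(t):
    E1, E2 = math.exp(l1*t), math.exp(l2*t)
    return (l1*E1 + C*l2*E2) / (b*(E1 + C*E2))

h = 1e-6
for t in (0.0, 0.5, 1.5):
    zdot = (z(t+h) - z(t-h)) / (2*h)   # central difference
    assert abs(zdot - (c + (d - a)*z(t) - b*z(t)**2)) < 1e-6
```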

Exercise 2.54
In this and the next exercise you will show that some of the equations studied by
Riccati have closed form solutions.
(a) Consider the equation

x dz/dx = az − bz² + cxⁿ,                                            (2.54)

where a, b and c are constants. By putting z = yxᵃ show that this becomes the
Riccati equation

dy/dx = −bx^{a−1} y² + cx^{n−a−1}

and by changing the independent variable to ξ = xᵃ transform this to

dy/dξ = (c/a) ξ^{(n−2a)/a} − (b/a) y².
Deduce that if n = 2a the solution of the original equation can be expressed in
terms of simple functions.
(b) By substituting z = a/b + xⁿ/u into equation 2.54 show that it becomes

x du/dx = (n + a)u − cu² + bxⁿ,

which is the same but with (a, b, c) replaced by (n + a, c, b). Deduce that the
solution of equation 2.54 can be expressed in terms of simple functions if n = 2a
or n = 2(n + a).
Using further, similar transformations show that the original equation has closed-form solutions if n = 2(ns + a), s = 0, 1, 2, · · · .

Exercise 2.55
By putting z = xⁿ/u into equation 2.54 show that u satisfies the equation

x du/dx = (n − a)u − cu² + bxⁿ

and deduce that z(x) can be expressed in terms of simple functions if n = 2(n − a).
By making further transformations of the type used in exercise 2.54, show that
z(x) can be expressed in terms of simple functions if n = 2(ns − a), s = 1, 2, · · · .

Exercise 2.56
Consider the sequence of functions

y0(x) = A + A′(x − a),
yn(x) = ∫_a^x dt (t − x) G(t) y_{n−1}(t),    n = 1, 2, · · · ,

where A and A′ are arbitrary constants.
Show that if

y(x) = Σ_{k=0}^{∞} yk(x),

and assuming that the infinite series is uniformly convergent on an interval containing x = a, then y(x) satisfies the second-order equation

d²y/dx² + G(x)y = 0,    y(a) = A,  y′(a) = A′.

Exercise 2.57
It is well known that the exponential function, E(x) = eˣ, is the solution of the
first-order equation

dE/dx = E,    E(0) = 1.                                              (2.55)

Not so well known is the fact that many of the properties of eˣ, for real x, can be
deduced directly from this equation.
(a) Using theorem 2.2 (page 81) deduce that there are no real values of x at which
E(x) = 0.
(b) By defining the function W(x) = 1/E(x), show that W′(y) = W(y), W(0) = 1,
where y = −x, and deduce that E(x)E(−x) = 1.
(c) If Z(x) = E(x + y) show that Z′(x) = Z(x), Z(0) = E(y), and hence deduce
that E(x + y) = E(x)E(y).
(d) If L(y) is the inverse function, that is, if E(x) = y then L(y) = x, show that
L′(y) = 1/y, L(y1y2) = L(y1) + L(y2) and L(1/y) = −L(y).
(e) Show that the Taylor series of E(x), L(1 + z) and L( (1 + z)/(1 − z) ) are

E(x) = Σ_{n=0}^{∞} xⁿ/n!,    L(1 + z) = Σ_{n=1}^{∞} (−1)^{n−1} zⁿ/n    and    L( (1+z)/(1−z) ) = 2 Σ_{n=0}^{∞} z^{2n+1}/(2n + 1).

Exercise 2.58
In this exercise you will derive some important properties of the sine and cosine
functions directly from the differential equations that can be used to define them.
Your solutions must not make use of trigonometric functions.
(a) Show that the solution of the initial value problem

d²z/dx² + z = 0,    z(0) = α,  z′(0) = β,

can be written as an appropriate linear combination of the functions C(x) and
S(x), which are defined to be the solutions of the equations

( C′ )     ( C )           ( 0  −1 )
( S′ ) = A ( S ),    A =   ( 1   0 ),    C(0) = 1,  S(0) = 0.

(b) Show that C(x)² + S(x)² = 1, which is Pythagoras's theorem.


(c) If, for any real constant a,

f (x) = C(x + a) and g(x) = S(x + a)

show that
( f′ )     ( f )
( g′ ) = A ( g ),    f(0) = C(a),  g(0) = S(a),

and deduce that

C(x + a) = C(x)C(a) − S(x)S(a)    and    S(x + a) = S(x)C(a) + C(x)S(a),

which are the trigonometric addition formulae.


(d) Show that there is a non-negative number X such that

C(nX) = 1 and S(nX) = 0 for all n = 0, ±1, · · · ,

and hence that for all x

C(x + X) = C(x), S(x + X) = S(x),

so that C(x) and S(x) are periodic functions with period X.


(e) Show that

S(X/4) = 1,  C(X/4) = 0;    S(X/2) = 0,  C(X/2) = −1;    S(3X/4) = −1,  C(3X/4) = 0.

(f) Show that A² = −I and hence that A^{2n} = (−1)ⁿ I and A^{2n+1} = (−1)ⁿ A.
By repeated differentiation of the equations defining C and S show that

( C⁽ⁿ⁾(0) )        ( 1 )
( S⁽ⁿ⁾(0) )  = Aⁿ  ( 0 )

and deduce that the Taylor expansions of C(x) and S(x) are

C(x) = 1 − x²/2! + x⁴/4! + · · · + (−1)ⁿ x^{2n}/(2n)! + · · · ,
S(x) = x − x³/3! + x⁵/5! + · · · + (−1)ⁿ x^{2n+1}/(2n + 1)! + · · · .

Exercise 2.59
Find the normal form, as defined in exercise 2.31(b), of Legendre's equation

d/dx( (1 − x²) dy/dx ) + λy = 0.

Exercise 2.60
Show that changing to the independent variable t = ∫_a^x dx √q(x) converts the
equation y″ + p1(x)y′ + q(x)y = 0, a ≤ x ≤ b, q(x) > 0, into

d²y/dt² + ( (q′(x) + 2p1 q)/(2q^{3/2}) ) dy/dt + y = 0.

Exercise 2.61
If f(x), g(x) and h(x) are any solutions of the second-order equation
y″ + p1(x)y′ + q(x)y = 0, show that the following determinant is zero:

| f   f′   f″ |
| g   g′   g″ |
| h   h′   h″ |

Exercise 2.62
Use the results found in exercise 2.35 (page 76) to construct a linear, homogeneous, second-order differential equation having the solutions
(a) (sinh x, sin x),    (b) (tan x, 1/tan x).

Exercise 2.63
Use the results found in exercise 2.35 (page 76) to show that the equation

d²y/dx² − (u′/u) dy/dx − u²y = 0,    u = f′/f,

has solutions f(x) and 1/f(x).

Exercise 2.64
Let f(x), g(x) and h(x) be three solutions of the linear, third-order differential
equation

d³y/dx³ + p2(x) d²y/dx² + p1(x) dy/dx + p0(x)y = 0.

Derive a first-order differential equation for the Wronskian

         | f    g    h  |
W(x) =   | f′   g′   h′ |
         | f″   g″   h″ |.

You will need to differentiate this determinant: the derivative of an n × n determinant, A, where the elements depend upon x is

d/dx det(A) = Σ_{k=1}^{n} det(Ak)

where Ak is the determinant formed by differentiating the kth row of A.

Exercise 2.65
The Schwarzian derivative
(a) If f(x) and g(x) are any two linearly independent solutions of the equation
y″ + q(x)y = 0, show that the ratio v = f/g is a solution of the third-order,
nonlinear equation S(v) = 2q(x), where

S(v) = v‴/v′ − (3/2)( v″/v′ )².

(b) If a, b, c and d are four constants with ad ≠ bc, deduce that

S( (av + b)/(cv + d) ) = S(v).

The function S(v) is named the Schwarzian derivative and has the important
property that if S(F) < 0 and S(G) < 0 in an interval, then S(H) < 0, where
H(x) = F(G(x)). This result is useful in the study of bifurcations of the fixed points
of one-dimensional maps.

Exercise 2.66
The radius of curvature
The equation of the normal to a curve represented by the function y = f(x), through
the point (ξ, η), is

y − η = −(1/m(ξ)) (x − ξ),    m(x) = df/dx.

(a) Consider the adjacent normal, through the point (ξ + δξ, η + δη), where δη =
f′(ξ)δξ, and find the point where this intersects the normal through (ξ, η), correct
to first order in δξ.
(b) If the curve defined by f(x) is a segment of a circle of radius r, all normals
intersect at its centre, a distance r from (ξ, η). The point of intersection found in
part (a) will be a distance r(ξ, δξ) from the point (ξ, η) and we define the radius
of curvature by the limit ρ(ξ) = lim_{δξ→0} r(ξ, δξ). Use this definition to show that

1/ρ = f″(ξ) / ( 1 + f′(ξ)² )^{3/2}.

Exercise 2.67
The tangent to a curve C intersects the x- and y-axes at x = a and y = b,
respectively. If the product ab = 2∆ is constant as the tangent moves on C, show
that the differential equation for C is given by

2p∆ = −(px − y)²,    where    p = dy/dx.

Notice that |∆| is the area of the triangle formed by the axes and the tangent.
Show that the singular solution of this equation is the hyperbola xy = ∆/2, and
show that the general solution is a family of straight lines.

2.7.1 Applications of differential equations


This section of exercises contains a few elementary applications giving rise to simple
first-order equations. Part of each of these questions involves deriving a differential
equation, so all of these exercises are optional.

Exercise 2.68
The number, N, of atoms of a particular decaying species in a sufficiently large
volume of material decreases at a rate proportional to N. The half-life of a substance
containing only one species of decaying atoms is defined to be the time
for N to decrease to N/2. The half-life of Carbon-14 is 5600 years; if initially
there are N0 Carbon-14 atoms find an expression for N(t), the number of atoms
at t ≥ 0.

Exercise 2.69
A moth ball evaporates, losing mass at a rate proportional to its surface area.
Initially it has radius 10 cm and after a month this has become 5 cm. Find its
radius as a function of time and the time at which it vanishes.

Exercise 2.70
A tank contains 1000 L of pure water. At time t = 0 brine containing 1 kg of
salt/L is added at a rate of one litre a minute, with the mixture kept uniform by
constant stirring, and one litre of the mixture is run off every minute, so the total
volume remains constant. When will there be 50 kg of dissolved salt in the tank?

Exercise 2.71
Torricelli’s law
Torricelli’s law states that water flows out of an open tank through a small hole
at a speed it would acquire falling freely from the surface to the hole.
A hemispherical bowl of radius R has a small circular hole, of radius a, drilled in
its bottom. It is initially full of water and at time t = 0 the hole is uncovered.
How long does it take for the bowl to empty?

Exercise 2.72
Water clocks
Water clocks, or clepsydra, meaning 'water thief', are devices for measuring time
using the regular rate of flow of water, and were in use from the 15th century BC,
in Egypt, to about 100 BC25.
A simple version is a vessel from which water escapes from a small hole in the
bottom. It was used in Greek and Roman courts to time the speeches of lawyers.
Determine the shape necessary for the water level to fall at a constant rate.

25 Richards E G 1998 Mapping Time, pages 51-57, Oxford University Press.



Exercise 2.73
By winding it round a circular post, a rope can be used to restrain large
weights with a small force. If T(θ) and T(θ + δθ) = T(θ) + δT are the tensions
in the rope at angles θ and θ + δθ, then it can be shown that a normal force of
approximately T δθ is exerted by the rope on the post in (θ, θ + δθ). If µ is the
coefficient of friction between the rope and the post, then µT δθ ≈ δT.
Use this to find a differential equation satisfied by T(θ) and by solving this
find T(θ).

Exercise 2.74
A chain of length L starts with a length l0 hanging over the edge of a horizontal
table. It is released from rest at time t = 0. Neglecting friction determine how
long it takes to fall off the table.

Exercise 2.75
Lambert’s law of absorption
Lambert’s law of absorption states that the percentage of incident light absorbed
by a thin layer of translucent material is proportional to the thickness of the
layer. If sunlight falling vertically on ocean water is reduced to one-half its initial
intensity at a depth of 10 feet, find a differential equation for the intensity as a
1
function of the depth and determine the depth at which the intensity is 16 th of
the initial intensity.

Exercise 2.76

An incompressible liquid in a U-tube, as shown in the figure, will oscillate if one
side of the liquid is initially higher than the other side. If the liquid is initially
a height h₀ above the other side, use conservation of energy to show that, if
friction can be ignored,

    ḣ² = (2g/L)(h₀² − h²),

where h(t) is the difference in height at time t, L is the total length of the tube
and g is the acceleration due to gravity.
Use this formula to find h(t) and to show that the period of oscillations is
T = π√(2L/g).
Figure 2.5
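A quick check of the quoted result (a sketch using sympy, which is an assumption of this example, not part of the course software): substituting the trial solution h(t) = h₀ cos(ωt), with ω = √(2g/L), into the energy relation confirms it, and the period 2π/ω reduces to π√(2L/g).

```python
import sympy as sp

t, g, L, h0 = sp.symbols('t g L h_0', positive=True)
omega = sp.sqrt(2*g/L)
h = h0*sp.cos(omega*t)            # trial solution, released from rest at h = h0

# the energy relation: hdot^2 = (2g/L)*(h0^2 - h^2)
assert sp.simplify(sp.diff(h, t)**2 - (2*g/L)*(h0**2 - h**2)) == 0

# the period 2*pi/omega equals pi*sqrt(2L/g)
assert sp.simplify(2*sp.pi/omega - sp.pi*sp.sqrt(2*L/g)) == 0
```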

Exercise 2.77
It can be shown that a body inside the earth is attracted towards the centre by a
force that is directly proportional to the distance from the centre.
If a hole joining any two points on the surface is drilled through the earth and
a particle can move without friction along this tube, show that the period of
oscillation is independent of the end points. The rotation of the earth should be
ignored.
Chapter 3

The Calculus of Variations

3.1 Introduction
In this chapter we consider the particular variational principle defining the shortest
distance between two points in a plane. It is well known that this shortest path is a
straight line; however, it is almost always easiest to understand a new idea by applying it
to a simple, familiar problem, so here we introduce the essential ideas of the Calculus of
Variations by finding the equation of this line. The algebra may seem overcomplicated
for this simple problem, but the same theory can be applied to far more complicated
problems, and we shall see in chapter 4 that the most important equation of the Calculus
of Variations, the Euler-Lagrange equation, can be derived with almost no extra effort.
The chapter ends with a description of some of the problems that can be formulated
in terms of variational principles, some of which will be solved later in the course.
The approach adopted is intuitive; that is, we assume that functionals behave like
functions of n real variables. This is exactly the approach used by Euler (1707 – 1783)
and Lagrange (1736 – 1813) in their original analysis, and it can be successfully applied
to many important problems. However, it masks a number of difficulties, all to do
with the subtle differences between infinite and finite dimensional spaces, which are not
considered in this course.

3.2 The shortest distance between two points in a plane
The distance between two points Pa = (a, A) and Pb = (b, B) in the Oxy-plane along a
given curve, defined by the function y(x), is given by the functional
    S[y] = ∫_a^b dx √(1 + y′(x)²).  (3.1)

The curve must pass through the end points, so y(x) satisfies the boundary conditions
y(a) = A and y(b) = B. We shall usually assume that y′(x) is continuous on (a, b).
We require the equation of the function that makes S[y] stationary; that is, we need
to understand how the values of the functional S[y] change as the path between Pa and

Pb varies. These ideas are introduced here, and developed in chapter 4, using analogies
with the theory of functions of many real variables.

3.2.1 The stationary distance


In the theory of functions of several real variables a stationary point is one at which the
values of the function at all neighbouring points are ‘almost’ the same as at the station-
ary point. To be precise, if G(x) is a function of n real variables, x = (x₁, x₂, · · · , xₙ),
we compare values of G at x and the nearby point x + εξ, where |ε| ≪ 1 and |ξ| = 1.
Taylor’s expansion, equation 1.37 (page 36), gives

    G(x + εξ) − G(x) = ε Σ_{k=1}^{n} (∂G/∂x_k) ξ_k + O(ε²).  (3.2)

A stationary point is defined to be one for which the term O(ε) is zero for all ξ. This
gives the familiar conditions for a point to be stationary, namely ∂G/∂x_k = 0 for
k = 1, 2, · · · , n.
For a functional we proceed in the same way. That is, we choose adjacent paths
joining Pa to Pb and compare the values of S along these paths. If a path is represented
by a differentiable function y(x), adjacent paths may be represented by y(x) + εh(x),
where ε is a real variable and h(x) another differentiable function. Since all paths must
pass through Pa and Pb, we require y(a) = A, y(b) = B and h(a) = h(b) = 0; otherwise
h(x) is arbitrary. The difference

    δS = S[y + εh] − S[y]

may be considered as a function of the real variable ε, for arbitrary y(x) and h(x) and
for small values of ε, |ε| ≪ 1. When ε = 0, δS = 0, and for small |ε| we expect δS to be
proportional to ε; in general this is true, as seen in equation 3.3 below.
However, there may be some paths for which δS is proportional to ε², rather than ε.
These paths are special and we define these to be the stationary paths, curves or sta-
tionary functions. Thus a necessary condition for a path y(x) to be a stationary path
is that

    S[y + εh] − S[y] = O(ε²)

for all suitable h(x). The equation for the stationary function y(x) is obtained by
examining this difference more carefully.
The distances along these adjacent curves are

    S[y] = ∫_a^b dx √(1 + y′(x)²) and S[y + εh] = ∫_a^b dx √(1 + [y′(x) + εh′(x)]²).

We proceed by expanding the integrand of S[y + εh] in powers of ε, retaining only the
terms proportional to ε. One way of making this expansion is to consider the integrand
as a function of ε and to use Taylor’s series to expand in powers of ε,

    √(1 + (y′ + εh′)²) = √(1 + y′²) + ε [d/dε √(1 + (y′ + εh′)²)]_{ε=0} + O(ε²)
                       = √(1 + y′²) + ε y′h′/√(1 + y′²) + O(ε²).

Substituting this expansion into the integral and rearranging gives the difference
between the two lengths,

    S[y + εh] − S[y] = ε ∫_a^b dx (y′(x)/√(1 + y′(x)²)) h′(x) + O(ε²).  (3.3)

This difference depends upon both y(x) and h(x), just as for functions of n real variables
the difference G(x + εξ) − G(x), equation 3.2, depends upon both x and ξ, the equivalents
of y(x) and h(x) respectively.
Since S[y] is stationary it follows, by definition, that

    ∫_a^b dx (y′(x)/√(1 + y′(x)²)) h′(x) = 0  (3.4)

for all suitable functions h(x).


We shall see in chapter 4 that because 3.4 holds for all those functions h(x) for
which h(a) = h(b) = 0 and h′(x) is continuous, this equation is sufficient to determine
y(x) uniquely. Here, however, we simply show that if

    y′(x)/√(1 + y′(x)²) = α = constant for all x,  (3.5)

then the integral in equation 3.4 is zero for all h(x). Assuming that 3.5 is true, equa-
tion 3.4 becomes

    ∫_a^b dx α h′(x) = α {h(b) − h(a)} = 0, since h(a) = h(b) = 0.

In section 4.3 we show that condition 3.5 is necessary as well as sufficient for equation 3.4
to hold.
Equation 3.5 shows that y′(x) = m, where m is a constant, and integration gives
the general solution,

    y(x) = mx + c,

for another constant c: this is the equation of a straight line, as expected. The constants
m and c are determined by the conditions that the straight line passes through Pa
and Pb:

    y(x) = ((B − A)/(b − a)) x + (Ab − Ba)/(b − a).  (3.6)

This analysis shows that the functional S[y] defined in equation 3.1 is stationary along
the straight line joining Pa to Pb. We have not shown that this gives a minimum
distance: this is proved in exercise 3.2.
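This conclusion is easy to test numerically. The following sketch (using numpy, an assumption of this example) evaluates S[y] for the straight line and for perturbed paths y + εh with h vanishing at the end points; the perturbed lengths are always larger, and the excess scales as ε², as equation 3.3 leads us to expect for a stationary path.

```python
import numpy as np

def length(y, a=0.0, b=1.0, n=20001):
    """S[y] = integral of sqrt(1 + y'^2) by the trapezium rule."""
    x = np.linspace(a, b, n)
    f = np.sqrt(1.0 + np.gradient(y(x), x)**2)
    return np.sum(f[1:] + f[:-1])*(x[1] - x[0])/2

A, B = 0.0, 1.0
line = lambda x: A + (B - A)*x                 # the stationary path, eq (3.6)
h = lambda x: np.sin(np.pi*x)                  # any h with h(0) = h(1) = 0

S0 = length(line)
d1 = length(lambda x: line(x) + 0.1*h(x)) - S0
d2 = length(lambda x: line(x) + 0.01*h(x)) - S0

assert d1 > 0 and d2 > 0          # the straight line is shorter
assert 80 < d1/d2 < 120           # the excess length is O(eps^2)
```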

Exercise 3.1
Use the above method on the functional

    S[y] = ∫_0^1 dx √(1 + y′(x)), y(0) = 0, y(1) = B > −1,

to show that the stationary function is the straight line y(x) = Bx, and that the
value of the functional on this line is S[y] = √(1 + B).

3.2.2 The shortest path: local and global minima


In this section we show that the straight line 3.6 gives the minimum distance. For
practical reasons this analysis is divided into two stages. First, we show that the
straight line is a local minimum of the functional, using an analysis that is generalised
in chapter 8 to functionals. Second, we show that, amongst the class of differentiable
functions, the straight line is actually a global minimum: this analysis makes use of
special features of the integrand.
The distinction between local and global extrema is illustrated in figure 3.1. Here
we show a function f (x), defined in the interval a ≤ x ≤ b, having three stationary
points B, C and D, two of which are minima, the other being a maximum. It is clear
from the figure that at the stationary point D, f (x) takes its smallest value in the
interval, so this is the global minimum. The function is largest at A, but this point
is not stationary: this is the global maximum. The stationary point at B is a local
minimum because here f (x) is smaller than at any point in the neighbourhood of B;
likewise the points C and D are a local maximum and a local minimum, respectively.
The adjective local is frequently omitted. In some texts local extrema are named
relative extrema.

Figure 3.1 Diagram to illustrate the difference between local and global extrema.

It is clear from this example that to classify a point as a local extremum requires an
examination of the function values only in the neighbourhood of the point, whereas
determining whether a point is a global extremum requires examining all values of the
function; this type of analysis usually invokes special features of the function.
The local analysis of a stationary point of a function, G(x), of n variables proceeds
by making a second order Taylor expansion about a point x = a,

    G(a + εξ) = G(a) + ε Σ_{k=1}^{n} (∂G/∂x_k) ξ_k + (ε²/2) Σ_{k=1}^{n} Σ_{j=1}^{n} (∂²G/∂x_k∂x_j) ξ_k ξ_j + · · · ,

where all derivatives are evaluated at x = a. If G(x) is stationary at x = a then all
first derivatives are zero. The nature of the stationary point is usually determined by
the behaviour of the second order term. For a stationary point to be a local minimum
it is necessary for the quadratic terms to be strictly positive for all ξ, that is

    Σ_{k=1}^{n} Σ_{j=1}^{n} (∂²G/∂x_k∂x_j) ξ_k ξ_j > 0 for all ξ with |ξ| = 1.

The stationary point is a local maximum if this quadratic form is strictly negative.
For large n it is usually difficult to determine whether these inequalities are satisfied,
although there are well defined tests which are described in chapter 8.

For a functional we proceed in the same way: the nature of a stationary path
is usually determined by the second order expansion. If S[y] is stationary then, by
definition,

    S[y + εh] − S[y] = ½ ε² Δ₂[y, h] + O(ε³)

for some quantity Δ₂[y, h], depending upon both y and h; special cases of this expansion
are found in exercises 3.2 and 3.3. Then S[y] is a local minimum if Δ₂[y, h] > 0 for
all h(x), and a local maximum if Δ₂[y, h] < 0 for all h(x). Normally it is difficult to
establish these inequalities, and the general theory is described in chapter 8. For the
functional defined by equation 3.1, however, the proof is straightforward; the following
exercise guides you through it.

Exercise 3.2
(a) Use the binomial expansion, exercise 1.32 (page 34), to obtain the following
expansion in ε,

    √(1 + (α + εβ)²) = √(1 + α²) + εαβ/√(1 + α²) + ε²β²/(2(1 + α²)^{3/2}) + O(ε³).

(b) Use this result to show that if y(x) is the straight line defined in equation 3.6
and S[y] the functional 3.1, then,

    S[y + εh] − S[y] = (ε²/(2(1 + m²)^{3/2})) ∫_a^b dx h′(x)² + O(ε³),  m = (B − A)/(b − a).

Deduce that the straight line is a local minimum for the distance between Pa
and Pb.
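The expansion in part (a) can be checked with a computer algebra system; a sketch using sympy (assumed available):

```python
import sympy as sp

eps, alpha, beta = sp.symbols('epsilon alpha beta', real=True)

series = sp.sqrt(1 + (alpha + eps*beta)**2).series(eps, 0, 3).removeO()
expected = (sp.sqrt(1 + alpha**2)
            + eps*alpha*beta/sp.sqrt(1 + alpha**2)
            + eps**2*beta**2/(2*(1 + alpha**2)**sp.Rational(3, 2)))

# the Taylor series in eps agrees with the quoted expansion
assert sp.simplify(series - expected) == 0
```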

Exercise 3.3
In this exercise the functional defined in exercise 3.1 is considered in more detail.
By expanding the integrand of S[y + εh] to second order in ε show that, if y(x) is
the stationary path, then

    S[y + εh] = S[y] − (ε²/(8(1 + B)^{3/2})) ∫_0^1 dx h′(x)², B > −1.

Deduce that the path y(x) = Bx, B > −1, is a local maximum of this functional.

Now we show that the straight line between the points (0, 0) and (a, A) gives a global
minimum of the functional, not just a local minimum. This analysis relies on a special
property of the integrand that follows from the Cauchy-Schwarz inequality.

Exercise 3.4
Use the Cauchy-Schwarz inequality (page 41) with a = (1, z) and b = (1, z + u)
to show that

    √(1 + (z + u)²) √(1 + z²) ≥ 1 + z² + zu,

with equality only if u = 0. Hence show that

    √(1 + (z + u)²) − √(1 + z²) ≥ zu/√(1 + z²).

The distance between the points (0, 0) and (a, A) along the path y(x) is

    S[y] = ∫_0^a dx √(1 + y′²), y(0) = 0, y(a) = A.

On using the inequality derived in the previous exercise, with z = y′(x) and u = h′(x),
we see that

    S[y + h] − S[y] ≥ ∫_0^a dx (y′/√(1 + y′²)) h′.

But on the stationary path y′ is a constant and since h(0) = h(a) = 0 we have
S[y + h] ≥ S[y] for all h(x).
This analysis did not assume that |h| is small, and since all admissible paths can
be expressed in the form y(x) + h(x), we have shown that in the class of differentiable
functions the straight line gives the global minimum of the functional.

An observation
Problems involving shortest distances on surfaces other than a plane illustrate other
features of variational problems. Thus if we replace the plane by the surface of a sphere
then the shortest distance between two points on the surface is the arc length of a
great circle joining the two points — that is the circle created by the intersection of
the spherical surface and the plane passing through the two points and the centre of
the sphere; this problem is examined in exercise 5.20 (page 168). Now, for most points,
there are two stationary paths corresponding to the long and the short arcs of the great
circle. However, if the points are at opposite ends of a diameter, there are infinitely
many shortest paths. This example shows that solutions to variational problems may
be complicated.
In general, the stationary paths between two points on a surface are named geodesics.¹
For a plane surface the only geodesics are straight lines; for a sphere, most pairs of points
are joined by just two geodesics, which are the segments of the great circle through the
points. For other surfaces there may be several stationary paths: an example of the
consequences of such complications is described next.

3.2.3 Gravitational Lensing


The general theory of relativity, discovered by Einstein (1879 – 1955), shows that the
path taken by light from a source to an observer is along a geodesic on a surface in a
four-dimensional space. In this theory gravitational forces are represented by distortions
to this surface. The theory therefore predicts that light is “bent” by gravitational
forces, a prediction that was first observed in 1919 by Eddington (1882 – 1944) in his
measurements of the position of stars during a total solar eclipse: these observations
provided the first direct confirmation of Einstein’s general theory of relativity.
The departure from a straight line path depends upon the mass of the body be-
tween the source and observer. If it is sufficiently massive, two images may be seen as
illustrated schematically in figure 3.2.
1 In some texts the name geodesic is used only for the shortest path.
Figure 3.2 Diagram showing how an intervening galaxy can sufficiently dis-
tort a path of light from a bright object, such as a quasar, to provide two
stationary paths and hence two images. Many examples of such multiple im-
ages, and more complicated but similar optical effects, have now been observed.
Usually there are more than two stationary paths.

3.3 Two generalisations

3.3.1 Functionals depending only upon y′(x)
The functional 3.1 (page 93) depends only upon the derivative of the unknown function.
Although this is a special case it is worth considering in more detail in order to develop
the notation we need.
If F (z) is a differentiable function of z then a general functional of the form of 3.1 is

    S[y] = ∫_a^b dx F (y′), y(a) = A, y(b) = B,  (3.7)

where F (y′) simply means that in F (z) all occurrences of z are replaced by y′(x). Thus
for the distance between two points F (z) = √(1 + z²), so F (y′) = √(1 + y′(x)²). Note
that the symbols F (y′) and F (y′(x)) denote the same function.


The difference between the functional evaluated along y(x) and the adjacent paths
y(x) + εh(x), where |ε| ≪ 1 and h(a) = h(b) = 0, is

    S[y + εh] − S[y] = ∫_a^b dx {F (y′ + εh′) − F (y′)}.  (3.8)

Now we need to express F (y′ + εh′) as a series in ε; assuming that F (z) is differentiable,
Taylor’s theorem gives

    F (z + εu) = F (z) + εu dF/dz + O(ε²).

The expansion of F (y′ + εh′) is obtained from this simply by the replacements z → y′(x)
and u → h′(x), which gives

    F (y′ + εh′) − F (y′) = εh′(x) (d/dy′)F (y′) + O(ε²),  (3.9)

where the notation dF/dy′ means

    (d/dy′)F (y′) = [dF/dz]_{z = y′(x)}.  (3.10)

For instance, if F (z) = √(1 + z²) then

    dF/dz = z/√(1 + z²) and dF/dy′ = y′(x)/√(1 + y′(x)²).

Exercise 3.5
Find the expressions for dF/dy′ when
(a) F (y′) = (1 + y′²)^{1/4}, (b) F (y′) = sin y′, (c) F (y′) = exp(y′).

Substituting the difference 3.9 into equation 3.8 gives

    S[y + εh] − S[y] = ε ∫_a^b dx h′(x) (d/dy′)F (y′) + O(ε²).  (3.11)

The functional S[y] is stationary if the term O(ε) is zero for all suitable functions h(x).
As before we give a sufficient condition, deferring the proof that it is also necessary. In
this analysis it is important to remember that F (z) is a given function and that y(x)
is an unknown function that we need to find. Observe that if

    (d/dy′)F (y′) = α = constant  (3.12)

then

    S[y + εh] − S[y] = εα {h(b) − h(a)} + O(ε²) = O(ε²),

since h(a) = h(b) = 0.

In general equation 3.12 is true only if y′(x) is also constant, and hence

    y(x) = mx + c and therefore y(x) = ((B − A)/(b − a)) x + (Ab − Ba)/(b − a),

the last result following from the boundary conditions y(a) = A and y(b) = B.
This is the same solution as given in equation 3.6. Thus, for this class of functional,
the stationary function is always a straight line, independent of the form of the inte-
grand, although its nature can sometimes depend upon the boundary conditions, see
for instance exercise 3.18 (page 117).
The exceptional example is when F (z) is linear, in which case the value of S[y]
depends only upon the end points and not the values of y(x) in between, as shown in
the following exercise.

Exercise 3.6
If F (z) = Cz + D, where C and D are constants, by showing that the value of
the functional S[y] = ∫_a^b dx F (y′) is independent of the chosen path, deduce that
equation 3.12 does not imply that y′(x) = constant.
What is the effect of making either, or both, of C and D a function of x?

3.3.2 Functionals depending upon x and y′(x)

Now consider the slightly more general functional

    S[y] = ∫_a^b dx F (x, y′), y(a) = A, y(b) = B,  (3.13)

where the integrand F (x, y′) depends explicitly upon the two variables x and y′. The
difference in the value of the functional along adjacent paths is

    S[y + εh] − S[y] = ∫_a^b dx {F (x, y′ + εh′) − F (x, y′)}.  (3.14)

In this example F (x, z) is a function of two variables and we require the expansion

    F (x, z + εu) = F (x, z) + εu ∂F/∂z + O(ε²),

where Taylor’s series for functions of two variables is used. Comparing this with the
expression in equation 3.9 we see that the only difference is that the derivative with
respect to y′ has been replaced by a partial derivative. As before, replacing z by y′(x)
and u by h′(x), equation 3.14 becomes

    S[y + εh] − S[y] = ε ∫_a^b dx h′(x) (∂/∂y′)F (x, y′) + O(ε²).  (3.15)
If y(x) is the stationary path it is necessary that

    ∫_a^b dx h′(x) (∂/∂y′)F (x, y′) = 0 for all h(x).

As before a sufficient condition for this is that F_{y′}(x, y′) = constant, which gives the
following differential equation for y(x),

    (∂/∂y′)F (x, y′) = c, y(a) = A, y(b) = B,  (3.16)

where c is a constant. This is the equivalent of equation 3.12, but now the explicit
presence of x in the equation means that y′(x) = constant is not a solution.
Exercise 3.7
Consider the functional

    S[y] = ∫_0^1 dx √(1 + x + y′²), y(0) = A, y(1) = B.

Show that the function y(x) defined by the relation,

    y′(x) = c √(1 + x + y′(x)²),

where c is a constant, makes S[y] stationary. By expressing y′(x) in terms of x
solve this equation to show that

    y(x) = A + ((B − A)/(2^{3/2} − 1)) ((1 + x)^{3/2} − 1).
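The quoted solution can be verified numerically; a sketch with numpy (an assumption of this example), using the illustrative boundary values A = 0 and B = 1, checks the boundary conditions and that F_{y′} = y′/√(1 + x + y′²) is indeed constant along the path:

```python
import numpy as np

A, B = 0.0, 1.0
k = (B - A)/(2**1.5 - 1)                 # coefficient in the stated solution

y  = lambda x: A + k*((1 + x)**1.5 - 1)
yp = lambda x: 1.5*k*np.sqrt(1 + x)      # its derivative

# boundary conditions y(0) = A and y(1) = B
assert abs(y(0) - A) < 1e-12 and abs(y(1) - B) < 1e-12

x = np.linspace(0, 1, 101)
c = yp(x)/np.sqrt(1 + x + yp(x)**2)      # F_{y'} along the path
assert np.allclose(c, c[0])              # constant, as eq (3.16) requires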

3.4 Notation
In the previous sections we used the notation F (y′) to denote a function of the derivative
of y(x) and proceeded to treat y′ as an independent variable, so that the expression
dF/dy′ had the meaning defined in equation 3.10. This notation and its generalisation
are very important in subsequent analysis; it is therefore essential that you are familiar
with it and can use it.
Consider a function F (x, u, v) of three variables, for instance F = x √(u² + v²), and
assume that all necessary partial derivatives of F (x, u, v) exist. If y(x) is a function of
x we may form a function of x with the substitutions u → y(x), v → y′(x); thus

    F (x, u, v) becomes F (x, y, y′).

Depending upon circumstances F (x, y, y′) can be considered either as a function of a
single variable x, as when evaluating the integral ∫_a^b dx F (x, y(x), y′(x)), or as a function
of three independent variables (x, y, y′). In the latter case the first partial derivatives
with respect to y and y′ are just

    ∂F/∂y = [∂F/∂u]_{u=y, v=y′} and ∂F/∂y′ = [∂F/∂v]_{u=y, v=y′}.
Because y depends upon x we may also form the total derivative of F (x, y, y′) with
respect to x using the chain rule, equation 1.22 (page 27),

    dF/dx = ∂F/∂x + y′(x) ∂F/∂y + y″(x) ∂F/∂y′.  (3.17)

In the particular case F (x, u, v) = x √(u² + v²) these rules give

    ∂F/∂x = √(y² + y′²), ∂F/∂y = xy/√(y² + y′²), ∂F/∂y′ = xy′/√(y² + y′²).

Similarly, the second order derivatives are

    ∂²F/∂y² = [∂²F/∂u²]_{u=y, v=y′}, ∂²F/∂y′² = [∂²F/∂v²]_{u=y, v=y′} and ∂²F/∂y∂y′ = [∂²F/∂u∂v]_{u=y, v=y′}.
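These rules can be explored with a computer algebra system; the following sketch (using sympy, an assumption here) applies them to the example F = x√(u² + v²) and checks the total derivative formula 3.17 against direct differentiation:

```python
import sympy as sp

x = sp.symbols('x')
u, v = sp.symbols('u v')
y = sp.Function('y')

F = x*sp.sqrt(u**2 + v**2)
sub = {u: y(x), v: y(x).diff(x)}         # u -> y(x), v -> y'(x)

F_y  = sp.diff(F, u).subs(sub)           # partial derivative with respect to y
F_yp = sp.diff(F, v).subs(sub)           # partial derivative with respect to y'

# eq (3.17): dF/dx = F_x + y' F_y + y'' F_y'
dFdx = sp.diff(F, x).subs(sub) + y(x).diff(x)*F_y + y(x).diff(x, 2)*F_yp

# agrees with differentiating F(x, y(x), y'(x)) directly
assert sp.simplify(dFdx - sp.diff(F.subs(sub), x)) == 0
```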

Because you must be able to use this notation we suggest that you do all the following
exercises before proceeding.

Exercise 3.8
If F (x, y′) = √(x² + y′²) find ∂F/∂x, ∂F/∂y, ∂F/∂y′, dF/dx and (d/dx)(∂F/∂y′).
Also, show that

    (d/dx)(∂F/∂y′) = (∂/∂y′)(dF/dx).

Exercise 3.9
Show that for an arbitrary differentiable function F (x, y, y′)

    (d/dx)(∂F/∂y′) = (∂²F/∂y′²) y″ + (∂²F/∂y∂y′) y′ + ∂²F/∂x∂y′.

Hence show that

    (d/dx)(∂F/∂y′) ≠ (∂/∂y′)(dF/dx),

with equality only if F does not depend explicitly upon y.

Exercise 3.10
Use the first identity found in exercise 3.9 to show that the equation

    (d/dx)(∂F/∂y′) − ∂F/∂y = 0

is equivalent to the second-order differential equation

    (∂²F/∂y′²) y″ + (∂²F/∂y∂y′) y′ + ∂²F/∂x∂y′ − ∂F/∂y = 0.

Note that the first equation will later be seen as crucial to the general theory described
in chapter 4. The fact that it is a second-order differential equation means that
unique solutions can be obtained only if two initial or two boundary conditions
are given. Note also that the coefficient of y″(x), ∂²F/∂y′², is very important in
the general theory of the existence of solutions of this type of equation.

Exercise 3.11
(a) If F (y, y′) = y √(1 + y′²) find ∂F/∂y, ∂F/∂y′ and ∂²F/∂y′², and show that
the equation

    (d/dx)(∂F/∂y′) − ∂F/∂y = 0 becomes y (d²y/dx²) − 1 − (dy/dx)² = 0,

and also that

    (d/dx)(∂F/∂y′) − ∂F/∂y = (1 + y′²)^{−3/2} ( y² (d/dx)(y′/y) − 1 ).

(b) By solving the equation y²(y′/y)′ = 1 show that a non-zero solution of

    y (d²y/dx²) − 1 − (dy/dx)² = 0 is y = (1/A) cosh(Ax + B),

for some constants A and B. Hint: let y be the independent variable and define a
new variable z by the equation yz(y) = dy/dx to obtain an expression for dy/dx
that can be integrated.
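The solution in part (b) can be checked by direct substitution; a sympy sketch (assumed available):

```python
import sympy as sp

x, A, B = sp.symbols('x A B', positive=True)
y = sp.cosh(A*x + B)/A

# y*y'' - 1 - y'^2 should vanish identically
residual = y*sp.diff(y, x, 2) - 1 - sp.diff(y, x)**2
assert sp.simplify(residual) == 0
```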

3.5 Examples of functionals


In this section we describe a variety of problems that can be formulated in terms of
functionals, with solutions that are stationary paths of these functionals. This list is
provided because it is likely that you will not be familiar with these descriptions and
will be unaware of the wide variety of problems for which variational principles are
useful, and sometimes essential. You should not spend long on this section if time is
short; in this case you should aim at obtaining a rough overview of the examples.
Indeed, you may move directly to chapter 4 and return to this section at a later date,
if necessary.
In each of the following sub-sections a different problem is described and the relevant
functional is written down; some of these are derived later. In compiling this list one
aim has been to describe a reasonably wide range of applications: if you are unfamiliar
with the underlying physical ideas behind any of these examples, do not worry because
they are not an assessed part of the course. Another aim is to show that there are
subtly different types of variational problems, for instance the isoperimetric and the
catenary problems, described in sections 3.5.5 and 3.5.6 respectively.

3.5.1 The brachistochrone


Given two points Pa = (a, A) and Pb = (b, B) in the same vertical plane, as in the
diagram below, we require the shape of the smooth wire joining Pa to Pb such that a
bead sliding on the wire under gravity, with no friction, and starting at Pa with a given
speed shall reach Pb in the shortest possible time.

Figure 3.3 The curved line joining Pa to Pb is a segment of a cycloid. In this
diagram the axes are chosen to give a = A = 0.

The name given to this curve is the brachistochrone, from the Greek brachistos
(shortest) and chronos (time).
If the y-axis is vertical it can be shown that the time taken along the curve y(x) is

    T [y] = ∫_a^b dx √( (1 + y′²)/(C − 2gy) ), y(a) = A, y(b) = B,

where g is the acceleration due to gravity and C a constant depending upon the initial
speed of the particle. This expression is derived in section 5.2.

This problem was first considered by Galileo (1564 – 1642) in his 1638 work Two
New Sciences, but lacking the necessary mathematical methods he concluded, erroneously,
that the solution is the arc of a circle passing vertically through Pa; exercise 5.4
(page 150) gives part of the reason for this error.
It was John Bernoulli (1667 – 1748), however, who made the problem famous when in
June 1696 he challenged the mathematical world to solve it. He followed his statement
of the problem by a paragraph reassuring readers that the problem was very useful in
mechanics, that its solution is not the straight line through Pa and Pb, and that the curve
is well known to geometers. He also stated that he would show that this is so at the end of
the year provided no one else had.
In December 1696 Bernoulli extended the time limit to Easter 1697, though by this
time he was in possession of Leibniz’s solution, sent in a letter dated 16th June 1696,
Leibniz having received notification of the problem on 9th June. Newton also solved
the problem quickly: apparently² the letter from Bernoulli arrived at Newton’s house,
in London, on 29th January 1697, at the time when Newton was Warden of the Mint.
He returned from the Mint at 4pm, set to work on the problem and had solved it by
the early hours of the next morning. The solution was returned anonymously, but to no
avail, with Bernoulli stating upon receipt “The lion is recognised by his paw”. Further
details of this history and details of these solutions may be found in Goldstine (1980,
chapter 1).
The curve giving this shortest time is a segment of a cycloid, which is the curve traced
out by a point fixed on the circumference of a vertical circle rolling, without slipping,
along a straight line. The parametric equations of the cycloid shown in figure 3.3 are

x = a(θ − sin θ), y = −a(1 − cos θ),

where a is the radius of the circle: these equations are derived in section 5.2.1, where
other properties of the cycloid are discussed.
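For a bead starting at rest at the origin (so C = 0 in the functional above, with the axes of figure 3.3), the descent time along the cycloid can be compared numerically with that along the straight chord; along the cycloid one finds dt = √(a/g) dθ, so T = θ_b √(a/g). A sketch using scipy (an assumption of this example):

```python
import numpy as np
from scipy.integrate import quad

g, a = 9.81, 1.0
theta_b = np.pi                              # end point of the cycloid arc
xb = a*(theta_b - np.sin(theta_b))           # = a*pi
yb = -a*(1 - np.cos(theta_b))                # = -2a

T_cycloid = theta_b*np.sqrt(a/g)             # time along the cycloid

# straight line y = (yb/xb)*x, with speed v = sqrt(-2 g y) from energy conservation
m = yb/xb
T_line, _ = quad(lambda x: np.sqrt(1 + m**2)/np.sqrt(-2*g*m*x), 0, xb)

assert T_cycloid < T_line                    # the cycloid is faster
```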
Other historically important names are the isochronous curve and the tautochrone.
A tautochrone is a curve such that a particle travelling along it under gravity reaches
a fixed point in a time independent of its starting point; a cycloid is a tautochrone
and a brachistochrone. Isochronal means “equal times” so isochronous curves and
tautochrones are the same.
There are many variations of the brachistochrone problem. Euler³ considered the
effect of resistance proportional to v^{2n}, where v is the speed and n an integer. The
problem of a wire with friction, however, was not considered until 1975.⁴ Both these
extensions require the use of Lagrange multipliers and are described in chapter 11.
Another variation was introduced by Lagrange,⁵ who allowed the end point, Pb in fig-
ure 3.3, to lie on a given surface; this introduces different boundary conditions that
the cycloid needs to satisfy: the simpler variant in which the motion remains in the
plane and one or both end points lie on given curves is treated in chapter 10.
2 This anecdote is from the records of Catherine Conduitt, née Barton, Newton’s niece, who acted as
his housekeeper in London; see Newton’s Apple by P Aughton (Weidenfeld and Nicolson), page 201.
3 Chapter 3 of his 1744 opus, The Method of Finding Plane Curves that Show Some Property of
Maximum or Minimum. . . .
4 Ashby N, Brittin W E, Love W F and Wyss W, Brachistochrone with Coulomb Friction, Amer J
Physics 43 902-5.
5 Essay on a new method. . . , published in Vol II of the Miscellanea Taurinensia, the memoirs of
the Turin Academy.



3.5.2 Minimal surface of revolution

Here the problem is to find a curve y(x) passing through two given points Pa = (a, A)
and Pb = (b, B), with A ≥ 0 and B > 0, as shown in the diagram, such that when
rotated about the x-axis the area of the curved surface formed is a minimum.

Figure 3.4 Diagram showing the cylindrical shape produced when a curve
y(x), joining (a, A) to (b, B), is rotated about the x-axis.

The area of this surface is shown in section 5.3 to be

    S[y] = 2π ∫_a^b dx y(x) √(1 + y′²),

and we shall see that this problem has solutions that can be expressed in terms of
differentiable functions only for certain combinations of A, B and b − a.

3.5.3 The minimum resistance problem


Newton formulated one of the first problems to involve the ideas of the Calculus of
Variations. Newton’s problem is to determine the shape of a solid of revolution with
the least resistance to its motion along its axis through a stationary fluid.
Newton was interested in the problem of fluid resistance and performed many
experiments aimed at determining its dependence on various parameters, such as the velocity
through the fluid. These experiments were described in Book II of Principia (1687);⁶
an account of Newton’s ideas is given by Smith (2000).⁷ It is to Newton that we owe
the idea of the drag coefficient, CD, a dimensionless number allowing the force on a
body moving through a fluid to be written in the form

    FR = ½ CD ρ Af v²,  (3.18)

where Af is the frontal area of the body, ρ the fluid density⁸ and v = |v|, where v is the
relative velocity of the body and the fluid. For modern cars CD has values between
about 0.30 and 0.45, with frontal areas of about 30 ft² (about 2.8 m²).
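As a rough illustration of equation 3.18 (the specific numbers below are representative values chosen for this example, within the ranges quoted above):

```python
def drag_force(v, CD=0.35, rho=1.29, Af=2.8):
    """F_R = (1/2) * CD * rho * Af * v^2, in newtons (SI units)."""
    return 0.5*CD*rho*Af*v**2

# a typical car at 30 m/s (about 108 km/h)
F = drag_force(30.0)
assert 550 < F < 590        # roughly 570 N
```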
6 The full title is Philosophiae naturalis Principia Mathematica (Mathematical Principles of Natural
Philosophy).
7 Smith G E, Fluid Resistance: Why Did Newton Change His Mind?, in The Foundations of New-
tonian Scholarship.
8 Note that this suggests that the 30°C change in temperature between summer and winter changes
FR by roughly 10%. The density of dry air is about 1.29 kg m⁻³.



Newton distinguished two types of forces:


a) those imposed on the front of the body which oppose the motion, and
b) those at the back of the body resulting from the disturbance of the fluid and which
may be in either direction.
He also considered two types of fluid:
a) rarefied fluids comprising non-interacting particles spread out in space, such as a gas,
and
b) continuous fluids, comprising particles packed together so that each is in contact
with its neighbours, such as a liquid.
The ideas sketched below are most relevant to rarefied fluids and ignore the second
type of force. They were used by Newton in 1687 to derive a functional, equation 3.21
below, for which the stationary path yields, in theory, a surface of minimum resistance.
This solution does not, however, agree with observation largely because the physical
assumptions made are too simple. Moreover, the functional has no continuously dif-
ferentiable paths that can satisfy the boundary conditions, although stationary paths
with one discontinuity in the derivative exist; but, Weierstrass (1815 – 1897) showed
that this path does not yield a strong minimum. These details are discussed further in
section 10.6. Nevertheless, the general problem is important and Newton’s approach,
and the subsequent variants, are of historical and mathematical importance: we shall
mention a few of these variants after describing the basic problem.
It is worth noting that the problem of fluid resistance is difficult and was not properly
understood until the early part of the 20th century. In 1752 d'Alembert (1717 – 1783)
published a paper, Essay on a New theory of the resistance of Fluids, in which he derived
the partial differential equations describing the motion of an ideal, incompressible invis-
cid fluid; the solution of these equations showed that the resisting force was zero, regardless
of the shape of the body: this was in contradiction to observations and was hence-
forth known as d’Alembert’s paradox. It was not resolved until Prandtl (1875 – 1953)
developed the theory of boundary layers in 1904. This shows how fluids of relatively
small viscosity, such as water or air, may be treated mathematically by taking account
of friction only in the region where essential, namely in the thin layer that exists in
the neighbourhood of the solid body. This concept was introduced in 1904, but many
decades passed before its ramifications were understood: an account of these ideas can
be found in Schlichting (1955)9 and a modern account of d’Alembert’s paradox can be
found in Landau and Lifshitz (1959)10 . An effect of the boundary layer, and also turbu-
lence, is that the drag coefficient, defined in equation 3.18, becomes speed dependent;
thus for a smooth sphere in air it varies between 0.07 and 0.5, approximately.
We now return to the main problem, which is to determine a functional for the
fluid resistance. In deriving this it is necessary to make some assumptions about the
resistance and this, it transpires, is why the stationary path is not a minimum. The
main result is given by equation 3.21, and you may ignore the derivation if you wish.
It is assumed that the resistance is proportional to the square of the velocity. To
see why, consider a small plane area moving through a fluid comprising many isolated
stationary particles, with density ρ: the area of the plane is δA and it is moving with
velocity v along its normal, as seen in the left-hand side of figure 3.5.
In order to derive a simple formula for the force on the area δA it is helpful to
9 Schlichting H Boundary Layer Theory (McGraw-Hill, New York).
10 Landau L D and Lifshitz E M Fluid mechanics (Pergamon).
108 CHAPTER 3. THE CALCULUS OF VARIATIONS

imagine the fluid as comprising many particles, each of mass m and all stationary. If
there are N particles per unit volume, the density is ρ = mN . In the small time δt the
area δA sweeps through a volume vδtδA, so N vδtδA particles collide with the area, as
shown schematically on the left-hand side of figure 3.5.

Figure 3.5 Diagram showing the motion of a small area, δA, through a rarefied gas. On the left-hand side the normal to the area is parallel to the relative velocity; on the right-hand side the normal is at an angle ψ to it. The direction of the arrows is in the direction of the gas velocity relative to the area.

For an elastic collision between a very large mass (that of which δA is the small surface element) with velocity v, and a small initially stationary mass, m, the momentum change of the light particle is 2mv — you may check this by doing exercise 3.23, although this is not part of the course. Thus in a time δt the total momentum transfer, in the opposite direction to v, is ∆P = (2mv) × (N vδtδA). Newton's law equates force with the rate of change of momentum, so the force on the area opposing the motion is, since ρ = mN,

        δF = ∆P/δt = 2ρv²δA.                                                  (3.19)

Equation 3.19 is a justification for the v²-law. If the normal, ON, to the area δA is at an angle ψ to the velocity, as in the right-hand side of figure 3.5, where the arrows denote the fluid velocity relative to the body, then the formula 3.19 is modified in two ways. First, the significant area is the projection of δA onto v, so δA → δA cos ψ. Second, the fluid particles are elastically scattered through an angle 2ψ (because the angle of incidence equals the angle of reflection), so the momentum transfer along the direction of travel is v(1 + cos 2ψ) = 2v cos²ψ: hence 2v → 2v cos²ψ, and the force in the direction (−v) is δF = 2ρv²cos³ψ δA. We now apply this formula to find the force on a surface of revolution. We define Oy to be the axis: consider a segment CD of the curve in the Oxy-plane, with normal PN at an angle ψ to Oy, as shown in the left-hand panel of figure 3.6.

Figure 3.6 Diagram showing the change in velocity of a particle colliding with the element CD, on the left, and the whole curve, which is rotated about the y-axis, on the right.

The force on the ring formed by rotating the segment CD about Oy is, because of axial symmetry, in the y-direction. The area of the ring is 2πxδs, where δs is the length of the element CD, so the magnitude of the force opposing the motion is

        δF = (2πxδs)(2ρv²cos³ψ).

The total force on the curve in figure 3.6 is obtained by integrating from x = 0 to x = b, and is given by the functional

        F[y] = 4πρv² ∫_{x=0}^{x=b} ds x cos³ψ,    y(0) = A, y(b) = 0.         (3.20)

But dy/dx = tan ψ and cos ψ = dx/ds, so that

        F[y]/(4πρv²) = ∫₀^b dx x/(1 + y′²),    y(0) = A, y(b) = 0.            (3.21)

For a disc of area A_f, y′(x) = 0, and this reduces to F = 2A_f ρv², giving a drag coefficient C_D = 4, which compares with the measured value of about 1.3. Newton's problem is to find the path making this functional a minimum and this is solved in section 10.6.
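The reduction to C_D = 4 for a disc can be checked numerically. The sketch below (a midpoint rule stands in for exact integration, and the cone profile is an illustrative choice, not from the text) evaluates the integral in equation 3.21 for a flat disc and for a cone of height b:

```python
def resistance(yprime, b, n=100000):
    """Midpoint-rule estimate of F[y]/(4 pi rho v^2) = ∫_0^b dx x/(1 + y'(x)^2),
    the right-hand side of equation 3.21."""
    h = b / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x / (1.0 + yprime(x)**2)
    return total * h

b = 1.0
disc = resistance(lambda x: 0.0, b)     # flat front face, y' = 0
cone = resistance(lambda x: -1.0, b)    # cone of height b, y' = -1 everywhere
# Corresponding drag coefficients C_D = (8/b^2) * integral (see exercise 3.12):
print(round(8 * disc / b**2, 6), round(8 * cone / b**2, 6))  # -> 4.0 2.0
```

The cone halves the resistance of the disc, which is why Newton's question about the best profile is not trivial.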

Exercise 3.12
Use the definition of the drag coefficient, equation 3.18, to show that, according to the theory described here,

        C_D = (8/b²) ∫₀^b dx x/(1 + y′²).

Show that for a sphere, where x² + y² = b², this gives C_D = 2. The experimental value of the drag coefficient for the motion of a sphere in air varies between 0.07 and 0.5, depending on its speed.
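The sphere result can also be checked numerically. For y = √(b² − x²) one has 1 + y′² = b²/(b² − x²), so the integrand simplifies to x(b² − x²)/b², a polynomial, which avoids the singular derivative at x = b (a sketch, not part of the exercise):

```python
def cd_sphere(b=1.0, n=100000):
    """C_D = (8/b^2) ∫_0^b dx x/(1 + y'^2) for the sphere y = sqrt(b^2 - x^2).
    Since 1 + y'^2 = b^2/(b^2 - x^2), the integrand is x(b^2 - x^2)/b^2."""
    h = b / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x * (b * b - x * x) / (b * b)
    return 8.0 * total * h / (b * b)

print(round(cd_sphere(), 6))  # -> 2.0
```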

Variations of this problem were considered by Newton: one is the curve CBD, shown
in figure 3.7, rotated about Oy.
Figure 3.7 Diagram showing the modified geometry considered by Newton. Here the variable a is an unknown, the line CB is parallel to the x-axis and the coordinates of C are (0, A).

In this problem the position D is fixed, but the position of B is not; it is merely
constrained to be on the line y = A, parallel to Ox. The resisting force is now given by
the functional

        F₁[y]/(4πρv²) = a²/2 + ∫_a^b dx x/(1 + y′²),    y(a) = A, y(b) = 0.   (3.22)
Now the path y(x) and the number a are to be chosen to make the functional stationary.
Problems such as this, where the position of one (or both) of the end points are
also to be determined are known as variable end point problems and are dealt with in
chapter 10.

3.5.4 A problem in navigation


Given a river with straight, parallel banks a distance b apart and a boat that can travel with constant speed c in still water, the problem is to cross the river in the shortest time, starting and landing at given points.
If the y-axis is chosen to be the left bank, the starting point to be the origin, O, and the water is assumed to be moving parallel to the banks with speed v(x), a known function of the distance from the left-hand bank, then the time of passage along the path y(x) is, assuming c > max(v(x)),

        T[y] = ∫₀^b dx [√(c²(1 + y′²) − v(x)²) − v(x)y′] / (c² − v(x)²),    y(0) = 0, y(b) = B,
where the final destination is a distance B along the right-hand bank. The derivation of
this result is set in exercise 3.22, one of the harder exercises at the end of this chapter.
A variation of this problem is obtained by not defining the terminus, so there is only
one boundary condition, y(0) = 0, and then we need to find both the path, y(x) and
the terminal point. It transpires that this is an easier problem and that the path is the
solution of y 0 (x) = v(x)/c, as is shown in exercise 10.7 (page 262).
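The time-of-passage functional can be explored numerically. The sketch below (the current profile v(x) is an illustrative choice, not from the text) compares the straight crossing with the path satisfying y′ = v/c; for the latter the integrand reduces identically to 1/c, so the crossing time is exactly b/c:

```python
import math

def crossing_time(yprime, v, c, b, n=20000):
    """Midpoint-rule value of the time-of-passage functional of section 3.5.4
    for a path with derivative yprime(x) and current profile v(x); needs c > max v."""
    h = b / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        vv, yp = v(x), yprime(x)
        total += (math.sqrt(c * c * (1.0 + yp * yp) - vv * vv) - vv * yp) / (c * c - vv * vv)
    return total * h

c, b = 2.0, 1.0
v = lambda x: 1.0 + 0.5 * math.sin(math.pi * x)      # illustrative current, max v = 1.5 < c
T_straight = crossing_time(lambda x: 0.0, v, c, b)   # land directly opposite
T_free = crossing_time(lambda x: v(x) / c, v, c, b)  # the free-endpoint path y' = v/c
print(T_free < T_straight, round(T_free, 9))         # the free path takes b/c = 0.5
```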

3.5.5 The isoperimetric problem


Among all curves, y(x), represented by functions with continuous derivatives, that join the two points Pa and Pb in the plane and have given length L[y], determine that which encompasses the largest area, S[y], shown in figure 3.8.

Figure 3.8 Diagram showing the area, S[y], under a curve of given length joining Pa to Pb.

This is a classic problem discussed by Pappus of Alexandria in about 300 AD. Pappus
showed, in Book V of his collection, that of two regular polygons having equal perimeters
the one with the greater number of sides has the greater area. In the same book he
demonstrates that for a given perimeter the circle has a greater area than does any
regular polygon. This work seems to follow closely the earlier work of Zenodorus (circa
180 BC): extant fragments of his work include a proposition that of all solid figures, the
surface areas of which are equal, the sphere has the greatest volume.
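Pappus's two propositions about polygons are easy to verify with the formula for the area of a regular n-gon of perimeter P, namely A = P²/(4n tan(π/n)) (a quick numerical sketch):

```python
import math

def regular_polygon_area(n, P):
    """Area of a regular n-gon with perimeter P: A = P^2 / (4 n tan(pi/n))."""
    return P * P / (4.0 * n * math.tan(math.pi / n))

P = 1.0
areas = [regular_polygon_area(n, P) for n in (3, 4, 6, 12, 100)]
circle = P * P / (4.0 * math.pi)                      # circle with the same perimeter
print(all(x < y for x, y in zip(areas, areas[1:])))   # more sides, more area
print(circle > areas[-1])                             # the circle beats them all
```

As n → ∞ the polygon area tends to P²/4π, the area of the circle, consistent with the circle being the extremal curve.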
Returning to figure 3.8, a modern analytic treatment of the problem requires a
differentiable function y(x) satisfying y(a) = A, y(b) = B, such that the area,

        S[y] = ∫_a^b dx y

is largest when the length of the curve,

        L[y] = ∫_a^b dx √(1 + y′²),

is given. It transpires that a circular arc is the solution.


This problem differs from the first three because an additional constraint — the
length of the curve — is imposed. We consider this type of problem in chapter 12.

3.5.6 The catenary

A catenary is the shape assumed by an inextensible cable, or chain, of uniform density


hanging between supports at both ends. In figure 3.9 we show an example of such a
curve when the points of support, (−a, A) and (a, A), are at the same height.
Figure 3.9 Diagram showing the catenary formed by a uniform chain hanging between two points at the same height.

If the lowest point of the chain is taken as the origin, the catenary equation is shown in section 12.2.3 to be

        y = c(cosh(x/c) − 1)                                                  (3.23)
for some constant c determined by the length of the chain and the value of a.
If a curve is described by a differentiable function y(x) it can be shown, see exercise 3.19, that the potential energy E of the chain is proportional to the functional

        S[y] = ∫_{−a}^{a} dx y√(1 + y′²).

The curve that minimises this functional, subject to the length of the chain L[y] = ∫_{−a}^{a} dx √(1 + y′²) remaining constant, is the shape assumed by the hanging chain. In common with the previous example, the catenary problem involves a constraint — again the length of the chain — and is dealt with using the methods described in chapter 12.
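A small numerical check connecting equation 3.23 with the length constraint: since √(1 + y′²) = cosh(x/c) for the catenary, the length of y = c(cosh(x/c) − 1) on [−a, a] is 2c sinh(a/c). A sketch, with illustrative values of c and a:

```python
import math

def chain_length(c, a, n=20000):
    """Midpoint-rule length of the catenary y = c(cosh(x/c) - 1) on [-a, a];
    here sqrt(1 + y'^2) = cosh(x/c)."""
    h = 2.0 * a / n
    total = 0.0
    for k in range(n):
        x = -a + (k + 0.5) * h
        total += math.cosh(x / c)
    return total * h

c, a = 0.7, 1.0                               # illustrative constants
print(abs(chain_length(c, a) - 2.0 * c * math.sinh(a / c)) < 1e-6)  # L = 2c sinh(a/c)
```

In practice it is this relation L = 2c sinh(a/c) that fixes the constant c from the given chain length L and support separation 2a.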

3.5.7 Fermat’s principle


Light and other forms of electromagnetic radiation are wave phenomena. However, in
many common circumstances light may be considered to travel along lines joining the
source to the observer: these lines are named rays and are often straight lines. This is
why most shadows have distinct edges and why eclipses of the Sun are so spectacular.
In a vacuum, and normally in air, these rays are straight lines and the speed of light in a vacuum is c ≃ 3.0 × 10¹⁰ cm/sec, independent of its colour. In other uniform media, for example water, the rays also travel in straight lines, but the speed is different: if the speed of light in a uniform medium is c_m then the refractive index is defined to be the ratio n = c/c_m. The refractive index usually depends on the wave length: thus for water it is 1.333 for red light (wave length 6.5 × 10⁻⁵ cm) and 1.343 for blue light (wave length 4.5 × 10⁻⁵ cm); this difference in the refractive index is one cause of rainbows. In non-uniform media, in which the refractive index depends upon position, light rays follow curved paths. Mirages are one consequence of a position-dependent refractive index.
A simple example of the ray description of light is the reflection of light in a plane
mirror. In diagram 3.10 the source is S and the light ray is reflected from the mirror

at R to the observer at O. The plane of the mirror is perpendicular to the page and it
is assumed that the plane SRO is in the page.

Figure 3.10 Diagram showing light travelling from a source S to an observer O, via a reflection at R. The angles of incidence and of reflection are defined to be θ₁ and θ₂, respectively.

It is known that light travels in straight lines and is reflected from the mirror at a
point R as shown in the diagram. But without further information the position of R is
unknown. Observations, however, show that the angle of incidence, θ1 , and the angle
of reflection, θ2 , are equal. This law of reflection was known to Euclid (circa 300 BC)
and Aristotle (384 – 322 BC); but it was Hero of Alexandria (circa 125 BC) who showed
by geometric argument that the equality of the angles of incidence and reflection is a
consequence of the Aristotelean principle that nature does nothing the hard way; that
is, if light is to travel from the source S to the observer O via a reflection in the mirror
then it travels along the shortest path.
This result was generalised by the French mathematician Fermat (1601 – 1665) into
what is now known as Fermat’s principle which states that the path taken by light rays
is that which minimises the time of passage11. For the mirror, because the speed along
SR and RO is the same this means that the distance along SR plus RO is a minimum.
If AB = d and AR = x, the total distance travelled by the light ray depends only upon x and is

        f(x) = √(x² + h₁²) + √((d − x)² + h₂²).

This function has a minimum when θ₁ = θ₂, that is when the angle of incidence, θ₁, equals the angle of reflection, θ₂, see exercise 3.14.
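This minimisation is easy to carry out numerically (a sketch with illustrative values of h₁, h₂ and d; a crude grid search stands in for the calculus of exercise 3.14):

```python
import math

def path_length(x, h1, h2, d):
    """f(x) = sqrt(x^2 + h1^2) + sqrt((d - x)^2 + h2^2), the length S-R-O."""
    return math.hypot(x, h1) + math.hypot(d - x, h2)

h1, h2, d = 1.0, 2.0, 3.0                       # illustrative geometry
n = 10**5                                       # crude grid search for the minimum
x_min = min((path_length(k * d / n, h1, h2, d), k * d / n) for k in range(n + 1))[1]
sin1 = x_min / math.hypot(x_min, h1)            # sin(theta_1)
sin2 = (d - x_min) / math.hypot(d - x_min, h2)  # sin(theta_2)
print(abs(sin1 - sin2) < 1e-3)                  # equal angles at the minimum
```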
In general, for light moving in the Oxy-plane, in a medium with refractive index
n(x, y), with the source at the origin and observer at (a, A) the time of passage, T ,
along an arbitrary path y(x) joining these points is

        T[y] = (1/c) ∫₀^a dx n(x, y)√(1 + y′²),    y(0) = 0, y(a) = A.

This follows because the time taken to travel along an element of length δs is n(x, y)δs/c and δs = √(1 + y′(x)²) δx. If the refractive index, n(x, y), is constant then this integral reduces to the integral 3.1 and the path of a ray is a straight line, as would be expected.
11 Fermat’s original statement was that light travelling between two points seeks a path such that the

number of waves is equal, as a first approximation, to that in a neighbouring path. This formulation
has the form of a variational principle, which is remarkable because Fermat announced this result in
1658, before the calculus of either Newton or Leibniz was developed.

Fermat’s principle can be used to show that for light reflected at a mirror the angle
of incidence equals the angle of reflection. For light crossing the boundary between two
media it gives Snell’s law,
sin α1 c1
= ,
sin α2 c2
where α1 and α2 are the angles between the ray and the normal to the boundary and
ck is the speed of light in the media, as shown in figure 3.11: in water the speed of light
is approximately c2 = c1 /1.3, where c1 is the speed of light in air, so 1.3 sin α2 = sin α1 .
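Snell's law can be recovered numerically from Fermat's principle in the same way as for the mirror: minimise the travel time over the crossing point on the boundary. A sketch, with illustrative geometry and c₂ = c₁/1.3 as for water:

```python
import math

def travel_time(x, h1, h2, d, c1, c2):
    """Time along two straight segments meeting the boundary at distance x."""
    return math.hypot(x, h1) / c1 + math.hypot(d - x, h2) / c2

h1, h2, d = 1.0, 1.0, 2.0                       # illustrative geometry
c1, c2 = 1.0, 1.0 / 1.3                         # speeds in the two media (c2 = c1/1.3)
n = 10**5
x_min = min((travel_time(k * d / n, h1, h2, d, c1, c2), k * d / n)
            for k in range(n + 1))[1]
sin1 = x_min / math.hypot(x_min, h1)
sin2 = (d - x_min) / math.hypot(d - x_min, h2)
print(abs(sin1 / sin2 - c1 / c2) < 1e-3)        # Snell's law at the fastest path
```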

Figure 3.11 Diagram showing the refraction of light at the surface of water. The angles of incidence and refraction are defined to be α₂ and α₁ respectively; these are connected by Snell's law.

In figure 3.11 the observer at O sees an object S in a pond and the light ray from S
to O travels along the two straight lines SN and N O, but the observer perceives the
object to be at S 0 , on the straight line OS 0 . This explains why a stick put partly into
water appears bent.

3.5.8 Coordinate free formulation of Newton’s equations


Newton’s laws of motion accurately describe a significant portion of the physical world,
from the motion of large molecules to the motion of galaxies. However, Newton’s
original formulation is usually difficult to apply to even quite simple mechanical systems
and hides the mathematical structure of the equations of motion, which is important
for the advanced developments in dynamics and for finding approximate solutions. It
transpires that in many important circumstances Newton’s equations of motion can be
expressed as a variational principle, the stationary paths of which are the solutions of the equations of motion.
This reformulation took some years to accomplish and was originally motivated partly
by Snell’s law and Fermat’s principle, that minimises the time of passage, and partly
by the ancient philosophical belief in the “Economy of Nature”; for a brief overview of
these ideas the introduction of the book by Yourgrau and Mandelstam (1968) should
be consulted.
The first variational principle for dynamics was formulated in 1744 by Maupertuis
(1698 – 1759), but in the same year Euler (1707 – 1783) described the same principle
more precisely. In 1760 Lagrange (1736 – 1813) clarified these ideas, by first reformu-
lating Newton’s equations of motion into a form now known as Lagrange’s equations of
motion: these are equivalent to Newton’s equations but easier to use because the form
of the equations is independent of the coordinate system used — this basic property

of variational principles is discussed in chapter 6 — and this allows easier use of more
general coordinate systems.
The next major step was taken by Hamilton (1805 – 1865), in 1834, who cast La-
grange’s equations as a variational principle; confusingly, we now name this Lagrange’s
variational principle. Hamilton also generalised this theory to lay the foundations for
the development of modern physics that occurred in the early part of the 20th century.
These developments are important because they provide a coordinate-free formulation
of dynamics which emphasises the underlying mathematical structure of the equations
of motion, which is important in helping to understand how solutions behave.

Summary
These few examples provide some idea of the significance of variational principles. In
summary, they are important for three distinct reasons:
• A variational principle is often the easiest or the only method of formulating a
problem.
• Often conventional boundary value problems may be re-formulated in terms of a
variational principle which provides a powerful tool for approximating solutions.
This technique is introduced in chapter 13.
• A variational formulation provides a coordinate free method of expressing the
laws of dynamics, allowing powerful analytic techniques to be used in ordinary
Newtonian dynamics. The use of variational principles also paved the way for
the formulation of dynamical laws describing motion of objects moving at speeds
close to that of light (special relativity), particles interacting through gravita-
tional forces (general relativity) and the laws of the microscopic world (quantum
mechanics).

3.6 Miscellaneous exercises


Exercise 3.13
Functionals do not need to have the particular form considered in this chapter. The following expressions also map functions to real numbers:

(a) D[y] = y′(1) + y(1)²;

(b) K[y] = ∫₀¹ dx [a(x)y(x) + y(1)y′(x)];

(c) L[y] = [xy(x)y′(x)]₀¹ + ∫₀¹ dx [a(x)y′(x) + b(x)y(x)], where a(x) and b(x) are prescribed functions;

(d) S[y] = ∫₀¹ ds ∫₀¹ dt (s² + st) y(s)y(t).

Find the values of these functionals for the functions y(x) = x² and y(x) = cos πx when a(x) = x and b(x) = 1.
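Parts (a) and (d) are straightforward to check numerically for y(x) = x² (a sketch; the double integral is approximated with a midpoint rule, and the exact value of S[x²] is 31/240). The case y(x) = cos πx is handled the same way:

```python
def D(y, yp):
    """Functional (a): D[y] = y'(1) + y(1)^2; yp is the derivative of y."""
    return yp(1.0) + y(1.0)**2

def S(y, n=400):
    """Functional (d) by a double midpoint rule:
    S[y] = ∫_0^1 ds ∫_0^1 dt (s^2 + st) y(s) y(t)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        for j in range(n):
            t = (j + 0.5) * h
            total += (s * s + s * t) * y(s) * y(t)
    return total * h * h

print(D(lambda x: x * x, lambda x: 2 * x))   # y = x^2: 2 + 1 = 3.0
print(round(S(lambda x: x * x), 4))          # y = x^2: 31/240, about 0.1292
```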

Exercise 3.14
Show that the function

        f(x) = √(x² + h₁²) + √((d − x)² + h₂²),

where h₁, h₂ are defined in figure 3.10 (page 113) and x and d denote the lengths AR and AB respectively, is stationary when θ₁ = θ₂ where

        sin θ₁ = x/√(x² + h₁²),    sin θ₂ = (d − x)/√((d − x)² + h₂²).

Show that at this stationary value f(x) has a minimum.

Exercise 3.15
Consider the functional

        S[y] = ∫₀¹ dx y′√(1 + y′),    y(0) = 0, y(1) = B > −1.

(a) Show that the stationary function is the straight line y(x) = Bx and that the value of the functional on this line is S[y] = B√(1 + B).
(b) By expanding the integrand of S[y + εh] to second order in ε, show that

        S[y + εh] = S[y] + ε²(4 + 3B)/(8(1 + B)^{3/2}) ∫₀¹ dx h′(x)²,    B > −1,

and deduce that on this path the functional has a minimum.
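The second-order expansion in part (b) can be tested numerically: with B = 1 and the variation h = x(1 − x), for which ∫₀¹ h′² dx = 1/3, the predicted change ε²(4 + 3B)/(8(1 + B)^{3/2}) × 1/3 should match S[y + εh] − S[y] for small ε. A sketch (the agreement is very close here because ∫₀¹ h′³ dx = 0 kills the ε³ term):

```python
import math

def S(yp, n=20000):
    """Midpoint value of S[y] = ∫_0^1 y' sqrt(1 + y') dx for a path with derivative yp."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        v = yp((k + 0.5) * h)
        total += v * math.sqrt(1.0 + v)
    return total * h

B, eps = 1.0, 1e-2
hp = lambda x: 1.0 - 2.0 * x                  # derivative of h = x(1-x), h(0) = h(1) = 0
diff = S(lambda x: B + eps * hp(x)) - S(lambda x: B)
predicted = eps**2 * (4 + 3 * B) / (8 * (1 + B)**1.5) * (1.0 / 3.0)
print(abs(diff / predicted - 1.0) < 1e-3)
```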


3.6. MISCELLANEOUS EXERCISES 117

Exercise 3.16
Using the method described in the text, show that the functionals

        S₁[y] = ∫_a^b dx (1 + xy′)y′    and    S₂[y] = ∫_a^b dx xy′²,

where b > a > 0, y(b) = B and y(a) = A, are both stationary on the same curve, namely

        y(x) = A + (B − A) ln(x/a)/ln(b/a).

Explain why the same function makes both functionals stationary.
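One way to see the answer to the last part: the two functionals differ by ∫_a^b dx y′ = B − A, a constant fixed by the boundary conditions, so they have identical stationary paths. A numerical sketch with an arbitrary, non-stationary path (the path is an illustrative choice):

```python
def midpoint(f, a, b, n=20000):
    """Midpoint-rule estimate of ∫_a^b f(x) dx."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# An arbitrary path from (a, A) = (1, 0) to (b, B) = (2, 1): y = (x - 1)^2.
a, b, A, B = 1.0, 2.0, 0.0, 1.0
yp = lambda x: 2.0 * (x - 1.0)                         # derivative of y = (x-1)^2
S1 = midpoint(lambda x: (1.0 + x * yp(x)) * yp(x), a, b)
S2 = midpoint(lambda x: x * yp(x)**2, a, b)
print(abs((S1 - S2) - (B - A)) < 1e-9)                 # S1 - S2 = ∫ y' dx = B - A
```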

Exercise 3.17
In this exercise the theory developed in section 3.3.1 is extended. The function F(z) has a continuous second derivative and the functional S is defined by the integral

        S[y] = ∫_a^b dx F(y′).

(a) Show that

        S[y + εh] − S[y] = ε ∫_a^b dx (dF/dy′) h′(x) + (ε²/2) ∫_a^b dx (d²F/dy′²) h′(x)² + O(ε³),

where h(a) = h(b) = 0.
(b) Show that if y(x) is chosen to make dF/dy′ constant then the functional is stationary.
(c) Deduce that this stationary path makes the functional either a maximum or a minimum, provided F″(y′) ≠ 0.

Exercise 3.18
Show that the functional

        S[y] = ∫₀¹ dx (1 + y′(x)²)^{1/4},    y(0) = 0, y(1) = B,

is stationary for the straight line y(x) = Bx. In addition, show that this straight line gives a minimum value of the functional only if B < √2, otherwise it gives a maximum.

Harder exercises
Exercise 3.19
If a uniform, flexible, inextensible chain of length L is suspended between two supports having the coordinates (a, A) and (b, B), with the y-axis pointing vertically upwards, show that, if the shape assumed by the chain is described by the differentiable function y(x), then its length is given by L[y] = ∫_a^b dx √(1 + y′²) and its potential energy by

        E[y] = gρ ∫_a^b dx y√(1 + y′²),    y(a) = A, y(b) = B,

where ρ is the line-density of the chain and g the acceleration due to gravity.

Exercise 3.20
This question is about the shortest distance between two points on the surface of a right-circular cylinder, so is a generalisation of the theory developed in section 3.2.
(a) If the cylinder axis coincides with the z-axis we may use the polar coordinates (ρ, φ, z) to label points on the cylindrical surface, where ρ is the cylinder radius. Show that the Cartesian coordinates of a point are given by x = ρ cos φ, y = ρ sin φ and hence that the distance between two adjacent points on the cylinder, (ρ, φ, z) and (ρ, φ + δφ, z + δz), is, to first order, given by δs² = ρ²δφ² + δz².
(b) A curve on the surface may be defined by prescribing z as a function of φ. Show that the length of a curve from φ = φ₁ to φ₂ is

        L[z] = ∫_{φ₁}^{φ₂} dφ √(ρ² + z′(φ)²).

(c) Deduce that the shortest distance on the cylinder between the two points (ρ, 0, 0) and (ρ, α, ζ) is along the curve z = ζφ/α.

Exercise 3.21
An inverted cone has its apex at the origin and axis along the z-axis. Let α be the angle between this axis and the sides of the cone, and define a point on the conical surface by the coordinates (ρ, φ), where ρ is the perpendicular distance to the z-axis and φ is the polar angle measured from the x-axis.
Show that the distance on the cone between adjacent points (ρ, φ) and (ρ + δρ, φ + δφ) is, to first order,

        δs² = ρ²δφ² + δρ²/sin²α.

Hence show that if ρ(φ), φ₁ ≤ φ ≤ φ₂, is a curve on the conical surface then its length is

        L[ρ] = ∫_{φ₁}^{φ₂} dφ √(ρ² + ρ′²/sin²α).

Exercise 3.22
A straight river of uniform width b flows with velocity (0, v(x)), where the axes are chosen so the left-hand bank is the y-axis and where v(x) > 0. A boat can travel with constant speed c > max(v(x)) relative to still water. If the starting and landing points are chosen to be the origin and (b, B), respectively, show that the path giving the shortest time of crossing is given by minimising the functional

        T[y] = ∫₀^b dx [√(c²(1 + y′(x)²) − v(x)²) − v(x)y′(x)] / (c² − v(x)²),    y(0) = 0, y(b) = B.

Exercise 3.23
In this exercise the basic dynamics required for the derivation of the minimum
resistance functional, equation 3.21, is derived. This exercise is optional, because it
requires knowledge of elementary mechanics which is not part of, or a prerequisite
of, this course.
Consider a block of mass M sliding smoothly on a plane, the cross section of which
is shown in figure 3.12.

Figure 3.12 Diagram showing the velocities of the block and particle before and after the collision.

The block is moving from left to right, with speed V, towards a small particle of mass m moving with speed v, such that initially the distance between the particle and the block is decreasing. Suppose that after the inevitable collision the block is moving with speed V′, in the same direction, and the particle is moving with speed v′ to the right. Use conservation of energy and linear momentum to show that (V′, v′) are related to (V, v) by the equations

        MV² + mv² = MV′² + mv′²    and    MV − mv = MV′ + mv′.

Hence show that

        V′ = V − 2m(V + v)/(M + m)    and    v′ = (2MV + (M − m)v)/(M + m).

Show that in the limit m/M → 0, V′ = V and v′ = 2V + v, and give a physical interpretation of these equations.
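The quoted formulas for V′ and v′ can be checked directly against the conservation laws (a sketch with illustrative masses and speeds):

```python
def after_collision(M, m, V, v):
    """Post-collision speeds from exercise 3.23:
    V' = V - 2m(V + v)/(M + m),  v' = (2MV + (M - m)v)/(M + m)."""
    Vp = V - 2.0 * m * (V + v) / (M + m)
    vp = (2.0 * M * V + (M - m) * v) / (M + m)
    return Vp, vp

M, m, V, v = 10.0, 1.0, 3.0, 1.0              # illustrative masses and speeds
Vp, vp = after_collision(M, m, V, v)
print(abs(M * V**2 + m * v**2 - (M * Vp**2 + m * vp**2)) < 1e-9)   # energy conserved
print(abs((M * V - m * v) - (M * Vp + m * vp)) < 1e-9)             # momentum conserved
```

In the limit m/M → 0 the function returns V′ = V and v′ = 2V + v, the momentum-transfer result 2mV used in the derivation of equation 3.19.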
Chapter 4

The Euler-Lagrange equation

4.1 Introduction
In this chapter we apply the methods introduced in section 3.2 to more general problems
and derive the most important result of the Calculus of Variations. We show that for
the functional

        S[y] = ∫_a^b dx F(x, y, y′),    y(a) = A, y(b) = B,                   (4.1)

where F(x, u, v) is a real function of three real variables, a necessary and sufficient condition for the twice differentiable function y(x) to be a stationary path is that it satisfies the equation

        d/dx(∂F/∂y′) − ∂F/∂y = 0    and the boundary conditions y(a) = A, y(b) = B.    (4.2)
This equation is known either as Euler’s equation or the Euler-Lagrange equation, and is
a second-order equation for y(x), exercise 3.10 (page 103). Conditions for a stationary
path to give either a local maximum or a local minimum are more difficult to find and
we defer a discussion of this problem to chapter 8.
In order to derive the Euler-Lagrange equation it is helpful to first discuss some
preliminary ideas. We start by briefly describing Euler’s original analysis, because
it provides an intuitive understanding of functionals and provides a link between the
calculus of functions of many variables and the Calculus of Variations. This leads
directly to the idea of the rate of change of a functional, which is required to define
a stationary path. This section is followed by the proof of the fundamental lemma of
the Calculus of Variations which is essential for the derivation of the Euler-Lagrange
equation, which follows.
The Euler-Lagrange equation is usually a nonlinear boundary value problem: this
combination causes severe difficulties, both theoretical and practical. First, solutions
may not exist and if they do uniqueness is not ensured: second, if solutions do exist
it is often difficult to compute them. These difficulties are in sharp contrast to initial
value problems and, because the differences are so marked, in section 4.5 we compare
these two types of equations in a little detail. Finally, in section 4.6, we show why the
limiting process used by Euler is subtle and can lead to difficulties.


4.2 Preliminary remarks


4.2.1 Relation to differential calculus
Euler (1707 – 1783) was the first to make a systematic study of problems that can
be described by functionals, though it was Lagrange (1736 – 1813) who developed the
method we now use. Euler studied functionals having the form defined in equation 4.1.
He related these functionals to functions of many variables using the simple device of
dividing the abscissa into N + 1 equal intervals,

a = x0 , x1 , x2 , . . . xN , xN +1 = b, where xk+1 − xk = δ,

and replacing the curve y(x) with segments of straight lines with vertices

(x0 , A), (x1 , y1 ), (x2 , y2 ), . . . (xN , yN ), (xN +1 , B) where yk = y(xk ),

y(a) = A and y(b) = B, as shown in the following figure.

Figure 4.1 Diagram showing the rectification of a curve by a series of six straight lines, N = 5.

Approximating the derivative at x_k by the difference (y_k − y_{k−1})/δ, the functional 4.1 is replaced by a function of the N variables (y_1, y_2, · · · , y_N),

        S(y_1, y_2, · · · , y_N) = δ Σ_{k=1}^{N+1} F(x_k, y_k, (y_k − y_{k−1})/δ),    where δ = (b − a)/(N + 1),    (4.3)

and where y_0 = A and y_{N+1} = B. This association with ordinary functions of many


variables can illuminate the nature of functionals and, if all else fails, it can be used
as the basis of a numerical approximation; examples of this procedure are given in
exercises 4.1 and 4.21. The integral 4.1 is obtained from this sum by taking the limit
N → ∞; similarly the Euler-Lagrange equation 4.2 may be derived by taking the same
limit of the N algebraic equations ∂S/∂y_k = 0, k = 1, 2, · · · , N, see exercise 4.30 (page 141).
In any mathematical analysis care is usually needed when such limits are taken and the
Calculus of Variations is no exception; however, here we discuss these problems only
briefly, in section 4.6.
Euler made extensive use of this method of finite differences. By replacing smooth
curves by polygonal lines he reduced the problem of finding stationary paths of func-
tionals to finding stationary points of a function of N variables: he then obtained exact

solutions by taking the limit as N → ∞. In this sense functionals may be regarded


as functions of infinitely many variables — that is, the values of the function y(x) at
distinct points — and the Calculus of Variations may be regarded as the corresponding
analogue of differential calculus.
Exercise 4.1
If the functional depends only upon y′,

        S[y] = ∫_a^b dx F(y′),    y(a) = A, y(b) = B,

show that the approximation defined by equation 4.3 becomes

        S(y_1, y_2, · · · , y_N) = δ [ F((y_1 − A)/δ) + F((y_2 − y_1)/δ) + · · · + F((y_k − y_{k−1})/δ) + · · · + F((y_N − y_{N−1})/δ) + F((B − y_N)/δ) ].

Hence show that a stationary point of S satisfies the equations

        F′((y_k − y_{k−1})/δ) = c,    k = 1, 2, · · · , N + 1,

where c is a constant, independent of k. Deduce that, if F(z) is sufficiently smooth, S(y_1, y_2, · · · , y_N) is stationary when the points (x_k, y(x_k)) lie on a straight line.
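Euler's discretisation can be checked directly: when F depends only on y′, the condition ∂S/∂y_k = 0 reads F′((y_k − y_{k−1})/δ) = F′((y_{k+1} − y_k)/δ), which a straight line satisfies exactly because all its slopes are equal. A sketch using the arc-length integrand F(z) = √(1 + z²) (an illustrative choice of F):

```python
import math

def grad_S(ys, A, B, delta, Fp):
    """Gradient of the discretised functional, equation 4.3, when F = F(y'):
    dS/dy_k = F'((y_k - y_{k-1})/delta) - F'((y_{k+1} - y_k)/delta)."""
    pts = [A] + list(ys) + [B]
    return [Fp((pts[k] - pts[k - 1]) / delta) - Fp((pts[k + 1] - pts[k]) / delta)
            for k in range(1, len(pts) - 1)]

Fp = lambda z: z / math.sqrt(1.0 + z * z)     # F(z) = sqrt(1 + z^2), the arc-length case
a, b, A, B, N = 0.0, 1.0, 0.0, 2.0, 9
delta = (b - a) / (N + 1)
line = [A + (B - A) * k * delta / (b - a) for k in range(1, N + 1)]  # y_k on the chord
print(all(abs(g) < 1e-12 for g in grad_S(line, A, B, delta, Fp)))
```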

4.2.2 Differentiation of a functional


The stationary points of a function of n variables are where all n first partial derivatives
vanish. The stationary paths of a functional are defined in a similar manner and
the purpose of this section is to introduce the idea of the derivative of a functional
and to show how it may be calculated. First, however, it is necessary to make a few
preliminary remarks in order to emphasise the important differences between functionals
and functions of n variables: we return to these problems later.
In the study of functions of n variables, it is convenient to use geometric language
and to regard the set of n numbers (x1 , x2 , · · · , xn ) as a point in an n-dimensional
space. Similarly, we regard each function y(x), belonging to a given class of functions,
as a point in some function space.
For functions of n variables it is sufficient to consider a single space, for instance
the n-dimensional Euclidean space. But, there is no universal function space and the
nature of the problem determines the choice of function space. For instance, when
dealing with a functional of the form 4.1 it is natural to use the set of all functions with
a continuous first derivative. In the case of functionals of the form
Z b
dx F (x, y, y 0 , y 00 )
a

we would require functions with two continuous derivatives.


The concept of continuity of functions is important and you will recall, section 1.3.2,
that a function f(x) is continuous at x = c if the values of f(x) at points neighbouring c are close to f(c); more precisely we require that

   lim_{ε→0} f(c + ε) = f(c).
Remember that if the usual derivative of a function exists at any point x, it is continuous
at x.
The type of functional defined by equation 4.1 involves paths joining the points
(a, A) and (b, B) which are differentiable or piecewise differentiable for a ≤ x ≤ b.
In order to find a stationary path we need to compare values of the functional on
nearby paths; this means that a careful definition of the distance between nearby paths
(functions) is important. This is achieved most easily by using the notion of a norm of
a function. A norm defined on a function space is a map taking elements of the space
to the non-negative real numbers; it represents the ‘distance’ from an element to the
origin (zero function). It has the same properties as the Euclidean distance defined in
equation 1.2 (page 11).
In Rn the Euclidean distance suffices for most purposes. In infinite dimensional
function spaces there is no obvious choice of norm that can be used in all circumstances.
Use of different norms and the corresponding concepts of ‘distance’ can lead to different
classifications of stationary paths as is seen in section 4.6.
For this reason it is usual to distinguish between a function space and a normed
space by using a different name whenever a specific norm on the set of functions is being
considered. For example, we have introduced the space C0 [a, b] of continuous functions
on the interval [a, b]. One of the simplest norms on this space is the supremum norm1
   ‖y‖ = max_{a≤x≤b} |y(x)|,

and this norm can be shown to satisfy the conditions of equation 1.3 (page 11). The
‘distance’ between two functions y and z is of course ky − zk. When we wish to
emphasise that we are considering this particular normed space, and not just the space
of continuous functions, we shall write D0 [a, b], by which we shall mean the space of
continuous functions with the specified norm. When we write C0 [a, b], no particular
norm is implied.
In what follows, we shall sometimes need to restrict attention to functions which
have a continuous and bounded derivative. A suitable norm for such functions is
   ‖y‖₁ = max_{a≤x≤b} |y(x)| + max_{a≤x≤b} |y'(x)|,

and we shall denote by D1 [a, b] the normed space of functions with continuous bounded
derivative equipped with the norm k . k1 defined above. This space consists of the same
functions as the space C1 [a, b], but as before use of the latter notation will not imply
the use of any particular norm on the space.
It is usually necessary to restrict the class of functions we consider to the subset
of all possible functions that satisfy the boundary conditions, if defined. Normally we
shall simply refer to this restricted class of functions as the admissible functions: these
are defined to be those differentiable functions that satisfy any boundary conditions
and, in most circumstances, to be in D1 [a, b], because it is important to bound the
variation in y 0 (x). Later we shall be less restrictive and allow piecewise differentiable
functions.
We now come to the most important part of this section, that is the idea of the rate
of change of a functional which is implicit in the idea of a stationary path. Recall that a
1 In analysis texts max |y(x)| is replaced by sup |y(x)|, but for continuous functions on closed finite intervals max and sup are identical.


real, differentiable function of n real variables, G(x), x = (x_1, x_2, · · · , x_n), is stationary at a point if all its first partial derivatives are zero, ∂G/∂x_k = 0, k = 1, 2, · · · , n. This result follows by considering the difference between the values of G(x) at adjacent points using the first-order Taylor expansion, equation 1.39 (page 36),

   G(x + εξ) − G(x) = ε Σ_{k=1}^n (∂G/∂x_k) ξ_k + O(ε²),   |ξ| = 1,

where ξ = (ξ_1, ξ_2, · · · , ξ_n). The rate of change of G(x) in the direction ξ is obtained by dividing by ε and taking the limit ε → 0,

   ΔG(x, ξ) = lim_{ε→0} [G(x + εξ) − G(x)]/ε = Σ_{k=1}^n (∂G/∂x_k) ξ_k.   (4.4)

A stationary point is defined to be one at which the rate of change, ∆G(x, ξ), is zero
in every direction; it follows that at a stationary point all first partial derivatives must
be zero.
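Equation 4.4 is easy to verify numerically for any smooth G. In the Python sketch below the function G(x₁, x₂) = x₁² e^{x₂} and the unit direction ξ are arbitrary illustrative choices; a central difference in ε reproduces the weighted sum of partial derivatives.

```python
import math

# Check of equation 4.4: the rate of change of G in the unit direction ξ
# equals the sum of the first partial derivatives weighted by ξ_k.
def G(x1, x2):
    return x1 ** 2 * math.exp(x2)      # sample function, chosen arbitrarily

x = (1.5, -0.5)
xi = (0.6, 0.8)                        # |ξ| = 1
eps = 1e-6

# left-hand side: [G(x + εξ) − G(x − εξ)]/(2ε), a central difference
num = (G(x[0] + eps * xi[0], x[1] + eps * xi[1])
       - G(x[0] - eps * xi[0], x[1] - eps * xi[1])) / (2 * eps)

# right-hand side: Σ_k (∂G/∂x_k) ξ_k with the exact partial derivatives
exact = 2 * x[0] * math.exp(x[1]) * xi[0] + x[0] ** 2 * math.exp(x[1]) * xi[1]
print(num, exact)
```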
The idea embodied in equation 4.4 may be applied to the functional

   S[y] = ∫_a^b dx F(x, y, y'),   y(a) = A,  y(b) = B,

which has a real value for each admissible function y(x). The rate of change of a functional S[y] is obtained by examining the difference between neighbouring admissible paths, S[y + εh] − S[y]; since both y(x) and y(x) + εh(x) are admissible functions for all real ε, it follows that h(a) = h(b) = 0. This difference is a function of the real variable ε, so we define the rate of change of S[y] by the limit,

   ΔS[y, h] = lim_{ε→0} ( S[y + εh] − S[y] )/ε = [ d/dε S[y + εh] ]_{ε=0},   (4.5)

which we assume exists. The functional ΔS depends upon both y(x) and h(x), just as the limit of the difference [G(x + εξ) − G(x)]/ε, of equation 4.4, depends upon x and ξ.
Definition 4.1
The functional S[y] is said to be stationary if y(x) is an admissible function and if ΔS[y, h] = 0 for all h(x) for which y(x) and y(x) + εh(x) are admissible.

The functions for which S[y] is stationary are named stationary paths. The stationary path, y(x), and the varied path y(x) + εh(x) must be admissible: for most variational problems considered in this chapter both paths need to satisfy the boundary conditions, so h(a) = h(b) = 0. But in the more general problems considered later, particularly in chapter 10, these conditions on h(x) are removed; see, however, exercises 4.12 and 4.13. If y(x) is an admissible path we name as the allowed variations those functions h(x) for which y(x) + εh(x) is admissible.
On a stationary path the functional may achieve a maximum or a minimum value, and then the path is named an extremal. The nature of stationary paths is usually determined by the O(ε²) term in the expansion of S[y + εh]: this theory is described in chapter 8.
In all our applications the limit

   ΔS[y, h] = [ d/dε S[y + εh] ]_{ε=0}

is linear in h, that is, if c is any constant then ΔS[y, ch] = c ΔS[y, h]; in this case it is named the Gâteaux differential.
Notice that if S is an ordinary function of n variables, (y_1, y_2, · · · , y_n), rather than a functional, then the Gâteaux differential is

   ΔS = lim_{ε→0} d/dε S(y + εh) = Σ_{k=1}^n (∂S/∂y_k) h_k,

which is proportional to the rate of change defined in equation 4.4.


As an example, consider the functional
   S[y] = ∫_a^b dx √(1 + y'²),   y(a) = A,  y(b) = B,

for the distance between (a, A) and (b, B), discussed in section 3.2.1. We have

   d/dε S[y + εh] = d/dε ∫_a^b dx √(1 + (y' + εh')²) = ∫_a^b dx d/dε √(1 + (y' + εh')²)
                  = ∫_a^b dx h'(y' + εh') / √(1 + (y' + εh')²).

Note that we may change the order of differentiation with respect to ε and integration with respect to x because a and b are independent of ε and all integrands are assumed to be sufficiently well-behaved functions of x and ε. Hence, on putting ε = 0,

   ΔS[y, h] = [ d/dε S[y + εh] ]_{ε=0} = ∫_a^b dx y'h' / √(1 + y'²),

which is just equation 3.4 (page 95).
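The same calculation can be repeated numerically. In the Python sketch below the path y = x² and the variation h = sin πx (so h(0) = h(1) = 0) are arbitrary illustrative choices, and a simple midpoint rule is used for the quadrature; a central difference of S[y + εh] in ε agrees with the integral formula for ΔS[y, h].

```python
import math

# Gâteaux differential of S[y] = ∫_0^1 dx sqrt(1 + y'^2) on the sample
# path y = x^2 with variation h = sin(pi x).
def quad(f, a=0.0, b=1.0, n=4000):
    """Composite midpoint rule on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

yp = lambda x: 2 * x                               # y'(x)
hp = lambda x: math.pi * math.cos(math.pi * x)     # h'(x)

def S(eps):
    # S[y + eps*h]; only the derivatives enter this integrand
    return quad(lambda x: math.sqrt(1 + (yp(x) + eps * hp(x)) ** 2))

eps = 1e-5
dS_numeric = (S(eps) - S(-eps)) / (2 * eps)        # d/dε S[y+εh] at ε = 0
dS_formula = quad(lambda x: yp(x) * hp(x) / math.sqrt(1 + yp(x) ** 2))
print(dS_numeric, dS_formula)
```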


For our final comment, we note that the approximation defined in equation 4.3 (page 122) gives a function of N variables, so the associated differential is

   ΔS[y, h] = lim_{ε→0} ( S(y + εh) − S(y) )/ε.

Comparing this with ΔG, equation 4.4, we can make the equivalences y ≡ x and h ≡ ξ. However, for functions of N variables there is no relation between the variables ξ_k and ξ_{k+1}, whereas h(x) is differentiable, so |h_k − h_{k+1}| = O(δ). This suggests that some care is required in taking the limit N → ∞ of equation 4.3, and shows why problems involving finite numbers of variables can differ from those with infinitely many variables and why the choice of norm, discussed above, is important. Nevertheless, provided caution is exercised, the analogy with functions of several variables can be helpful.
Exercise 4.2
Find the Gâteaux differentials of the following functionals:
(a) S[y] = ∫_0^{π/2} dx (y'² − y²),
(b) S[y] = ∫_a^b dx y'²/x³,   b > a > 0,
(c) S[y] = ∫_a^b dx (y'² + y² + 2ye^x),
(d) S[y] = ∫_0^1 dx √(x² + y²) √(1 + y'²).

Exercise 4.3
Show that the Gâteaux differential of the functional

   S[y] = ∫_a^b ds ∫_a^b dt K(s, t) y(s) y(t)

is
   ΔS[y, h] = ∫_a^b ds h(s) ∫_a^b dt ( K(s, t) + K(t, s) ) y(t).

4.3 The fundamental lemma


This section contains the essential result upon which the Calculus of Variations depends.
Using the result obtained here we will be able to use the stationary condition that
∆S[y, h] = 0, for all suitable h(x), to form a differential equation for the unknown
function y(x).

The fundamental lemma: if z(x) is a continuous function of x for a ≤ x ≤ b and if

   ∫_a^b dx z(x)h(x) = 0

for all functions h(x) that are continuous for a ≤ x ≤ b and are zero at x = a and x = b, then z(x) = 0 for a ≤ x ≤ b.

In order to prove this we assume, on the contrary, that z(η) ≠ 0 for some η satisfying a < η < b. Then, since z(x) is continuous, there is an interval [x_1, x_2] around η with

   a < x_1 ≤ η ≤ x_2 < b

in which z(x) ≠ 0. We now construct a suitable function h(x) that yields a contradiction. Define h(x) to be

   h(x) = (x − x_1)(x_2 − x)   for a < x_1 ≤ x ≤ x_2 < b,
   h(x) = 0                    otherwise,

so h(x) is continuous and

   ∫_a^b dx z(x)h(x) = ∫_{x_1}^{x_2} dx z(x)(x − x_1)(x_2 − x) ≠ 0,
since the integrand is continuous and non-zero (hence of one sign) on (x_1, x_2). However, by hypothesis ∫_a^b dx z(x)h(x) = 0, so we have a contradiction.
Thus the assumptions that z(x) is continuous and z(x) 6= 0 for some x ∈ (a, b)
lead to a contradiction and we deduce that z(x) = 0 for a < x < b: because z(x) is
continuous it follows that z(x) = 0 for a ≤ x ≤ b. This result is named the fundamental
lemma of the Calculus of Variations.
This proof assumed only that h(x) is continuous and made no assumptions about
its differentiability. In previous applications h(x) had to be differentiable for x ∈ (a, b).
However, for the function h(x) defined above h0 (x) does not exist at x1 and x2 . The
proof is easily modified to deal with this case. If h(x) needs to be n times differentiable
then we use the function
   h(x) = (x − x_1)^{n+1} (x_2 − x)^{n+1}   for x_1 ≤ x ≤ x_2,
   h(x) = 0                                 otherwise.

Exercise 4.4
In this exercise a result due to du Bois-Reymond (1831 – 1889), which is closely related to the fundamental lemma, will be derived. This is required later; see exercise 4.11.
If z(x) and h'(x) are continuous, h(a) = h(b) = 0 and

   ∫_a^b dx z(x)h'(x) = 0

for all such h(x), then z(x) is constant for a ≤ x ≤ b.
Prove this result by defining a constant C and a function g(x) by the relations

   C = (1/(b − a)) ∫_a^b dx z(x)   and   g(x) = ∫_a^x dt (C − z(t)).

Show that g(a) = g(b) = 0 and

   ∫_a^b dx z(x)g'(x) = ∫_a^b dx z(x)(C − z(x)) = −∫_a^b dx (C − z(x))².

Hence deduce that z(x) = C.

4.4 The Euler-Lagrange equations


This section contains the most important result of this chapter. Namely, that if
F (x, u, v) is a sufficiently differentiable function of three variables, then a necessary
and sufficient condition for the functional²

   S[y] = ∫_a^b dx F(x, y, y'),   y(a) = A,  y(b) = B,   (4.6)

2 Many texts state that a necessary condition for y(x) to be an extremal of S[y] is that it satisfies the Euler-Lagrange equation. Here we consider stationary paths and then the condition is also sufficient.
to be stationary on the path y(x) is that it satisfies the differential equation and boundary conditions,

   d/dx(∂F/∂y') − ∂F/∂y = 0,   y(a) = A,  y(b) = B.   (4.7)
This is named Euler’s equation or the Euler-Lagrange equation. It is a second-order
differential equation, as shown in exercise 3.10, and is the analogue of the conditions
∂G/∂xk = 0, k = 1, 2, · · · , n, for a function of n real variables to be stationary, as
discussed in section 4.2.2. We now derive this equation.
The integral 4.6 is defined for functions y(x) that are differentiable for a ≤ x ≤ b. Using equation 4.5 we find that the rate of change of S[y] is

   ΔS[y, h] = [ d/dε ∫_a^b dx F(x, y + εh, y' + εh') ]_{ε=0}
            = ∫_a^b dx [ d/dε F(x, y + εh, y' + εh') ]_{ε=0}.   (4.8)

The integration limits a and b are independent of ε and we assume that the order of integration and differentiation may be interchanged. The integrand of equation 4.8 is a total derivative with respect to ε and equation 1.21 (page 26) shows how to write this expression in terms of the partial derivatives of F. Using equation 1.21 with n = 3, t = ε and the variable changes (x_1, x_2, x_3) = (x, y, y') and (h_1, h_2, h_3) = (0, h(x), h'(x)), so that

   f(x_1 + th_1, x_2 + th_2, x_3 + th_3)   becomes   F(x, y + εh, y' + εh'),

we obtain
   d/dε F(x, y + εh, y' + εh') = h ∂F/∂y + h' ∂F/∂y'.
Now set ε = 0, so that the partial derivatives are evaluated at (x, y, y'), to obtain

   ΔS[y, h] = ∫_a^b dx ( h(x) ∂F/∂y + h'(x) ∂F/∂y' ).   (4.9)

The second term in this integral can be simplified by integrating by parts,

   ∫_a^b dx h'(x) ∂F/∂y' = [ h(x) ∂F/∂y' ]_a^b − ∫_a^b dx h(x) d/dx(∂F/∂y'),

assuming that F_{y'} is differentiable. But h(a) = h(b) = 0, so the boundary term on the right-hand side vanishes and the rate of change of the functional S[y] becomes

   ΔS[y, h] = −∫_a^b dx ( d/dx(∂F/∂y') − ∂F/∂y ) h(x).   (4.10)

If the Euler-Lagrange equation is satisfied then ΔS[y, h] = 0 for all allowed h, so y(x) is a stationary path of the functional.
If S[y] is stationary then, by definition, ∆S[y, h] = 0 for all allowed h and it follows
from the fundamental lemma of the Calculus of Variations that y(x) satisfies the second-order differential equation

   d/dx(∂F/∂y') − ∂F/∂y = 0,   y(a) = A,  y(b) = B.   (4.11)

Hence a necessary and sufficient condition for a functional to be stationary on a sufficiently differentiable path, y(x), is that it satisfies the Euler-Lagrange equation 4.7.
The paths that satisfy the Euler-Lagrange equation are not necessarily extremals,
that is do not necessarily yield maxima or minima, of the functional. The Euler-
Lagrange equation is, in most cases, a second-order, nonlinear, boundary value problem
and there may be no solutions or many. Finally, note that functionals that are equal
except for multiplicative or additive constants have the same Euler-Lagrange equations.
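A path obtained from the Euler-Lagrange equation can always be tested numerically by comparing S[y + εh] with S[y]. The Python sketch below uses the sample functional S[y] = ∫_0^1 dx (y'² + y²), y(0) = 0, y(1) = 1, an illustrative choice rather than one of the numbered examples; its Euler-Lagrange equation reduces to y'' = y, with solution y = sinh x / sinh 1.

```python
import math

# Numerical check that the Euler-Lagrange solution of the sample
# functional S[y] = ∫_0^1 dx (y'^2 + y^2), y(0)=0, y(1)=1, is stationary:
# the first-order change of S vanishes, and nearby paths give larger S.
def quad(f, n=2000):
    """Composite midpoint rule on [0, 1]."""
    dx = 1.0 / n
    return sum(f((i + 0.5) * dx) for i in range(n)) * dx

y  = lambda x: math.sinh(x) / math.sinh(1)     # solution of y'' = y
yp = lambda x: math.cosh(x) / math.sinh(1)
h  = lambda x: math.sin(math.pi * x)           # allowed variation, h(0)=h(1)=0
hp = lambda x: math.pi * math.cos(math.pi * x)

def S(eps):
    # S[y + eps*h]
    return quad(lambda x: (yp(x) + eps * hp(x)) ** 2 + (y(x) + eps * h(x)) ** 2)

dS = (S(1e-5) - S(-1e-5)) / 2e-5    # first-order change: should vanish
print(dS, S(0.1) - S(0.0))          # nearby paths give larger values
```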

Exercise 4.5
Show that the Euler-Lagrange equation for the functional

   S[y] = ∫_0^X dx (y'² − y²),   y(0) = 0,  y(X) = 1,  X > 0,

is y'' + y = 0. Hence show that, provided X ≠ nπ, n = 1, 2, · · · , the stationary function is y = sin x / sin X.
The significance of the point X = π will be revealed in chapter 8, in particular exercise 8.12. There it is shown that for 0 < X < π this solution is a minimum of the functional, but for X > π it is simply a stationary point. In this example, at the boundary X = π, the Euler-Lagrange equation has no solution satisfying the boundary conditions.

4.4.1 The first-integral


The Euler-Lagrange equation is a second-order differential equation. But if the integrand does not depend explicitly upon x, so that the functional has the form

   S[y] = ∫_a^b dx G(y, y'),   y(a) = A,  y(b) = B,   (4.12)

then the Euler-Lagrange equation reduces to the first-order differential equation,

   y' ∂G/∂y' − G = c,   y(a) = A,  y(b) = B,   (4.13)

where c is a constant determined by the boundary conditions, see for example exer-
cise 4.6 below. The expression on the left-hand side of this equation is often named the
first-integral of the Euler-Lagrange equation. This result is important because, when applicable, it often saves a great deal of effort, since it is usually far easier to solve this lower-order equation. Two proofs of equation 4.13 are provided: the first involves
deriving an algebraic identity, see exercise 4.7, and it is important to do this yourself.
The second proof is given in section 7.2.1 and uses the invariance properties of the inte-
grand G(y, y 0 ). A warning, however; in some circumstances a solution of equation 4.13
will not be a solution of the original Euler-Lagrange equation, see exercise 4.8, also
section 5.3 and chapter 6.
Another important consequence is that the stationary function, the solution of 4.13,
depends only upon the variables u = x − a and b − a (besides A and B), rather than
x, a and b independently, as is the case when the integrand depends explicitly upon x.
A specific example illustrating this behaviour is given in exercise 4.20.
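The first-integral is also convenient for checking a candidate stationary path numerically: along the path the quantity y' ∂G/∂y' − G must be constant. The Python sketch below uses the illustrative integrand G(y, y') = y'² + y², whose Euler-Lagrange equation y'' = y has the stationary path y = sinh x / sinh 1 for the boundary values y(0) = 0, y(1) = 1; this example is not one of the text's numbered exercises.

```python
import math

# First integral (equation 4.13): on the stationary path of
# G(y, y') = y'^2 + y^2, the quantity y' G_{y'} − G = y'^2 − y^2
# takes the same value at every x.
y  = lambda x: math.sinh(x) / math.sinh(1)
yp = lambda x: math.cosh(x) / math.sinh(1)

def first_integral(x):
    return yp(x) * (2 * yp(x)) - (yp(x) ** 2 + y(x) ** 2)   # y'^2 − y^2

values = [first_integral(0.1 * k) for k in range(11)]
spread = max(values) - min(values)
print(values[0], spread)     # constant value 1/sinh(1)^2, spread ≈ 0
```

Here the constant c = 1/sinh²1 follows from cosh²x − sinh²x = 1.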

An observation
You may have noticed that the original functional 4.6 is defined on the class of func-
tions for which F (x, y(x), y 0 (x)) is integrable: if F (x, u, v) is differentiable in all three
variables this condition is satisfied if y 0 (x) is piecewise continuous. However, the Euler-
Lagrange equation 4.11 requires the stronger condition that y 0 (x) is differentiable. This
extra condition is created by the derivation of the Euler-Lagrange equation, in partic-
ular the step between equations 4.9 and 4.10: a necessary condition for the functional
S[y] to be stationary, that does not make this step and does not require y 00 to exist, is
derived in exercise 4.11.
There are important problems where y 00 (x) does not exist at all points on a stationary
path — the minimal surface of revolution, dealt with in the next chapter, is one simple
example; the general theory of this type of problem will be considered in chapter 10.

Exercise 4.6
Consider the functional

   S[y] = ∫_0^1 dx (y'² − y),   y(0) = 0,  y(1) = 1,

and show that the Euler-Lagrange equation is the linear equation,

   2 d²y/dx² + 1 = 0,   y(0) = 0,  y(1) = 1,

and find its solution.
Show that the first-integral, equation 4.13, becomes the nonlinear equation

   (dy/dx)² + y = c.

Find the general solution of this equation and find the solution that satisfies the boundary conditions.
In this example it is easier to solve the linear second-order Euler-Lagrange equation than the first-order equation 4.13, which is nonlinear. Normally both equations are nonlinear, and then it is easier to solve the first-order equation. In the examples considered in sections 5.2 and 5.3 it is more convenient to use the first-integral.

Exercise 4.7
If G(y, y') does not depend explicitly upon x, that is ∂G/∂x = 0, show that

   y'(x) ( d/dx(∂G/∂y') − ∂G/∂y ) = d/dx ( y' ∂G/∂y' − G )

and hence derive equation 4.13.
Hint: you will find the result derived in exercise 3.10 (page 103) helpful.
Exercise 4.8
(a) Show that, provided G_{y'}(y, 0) exists, the differential equation 4.13 (without the boundary conditions) has a solution y(x) = γ, where the constant γ is defined implicitly by the equation G(γ, 0) = −c.
(b) Under what circumstances is the solution y(x) = γ also a solution of the
Euler-Lagrange equation 4.11?

Exercise 4.9
Show that the Euler-Lagrange equation for the functional

   S[y] = ∫_0^1 dx (y'² + y² + 2axy),   y(0) = 0,  y(1) = B,

where a is a constant, is y'' − y = ax and hence that a stationary function is

   y(x) = (a + B) sinh x / sinh 1 − ax.

By expanding S[y + εh] to second order in ε show that this solution makes the functional a minimum.
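The stated stationary function can be checked directly. The short Python sketch below verifies, for the arbitrary sample values a = 2 and B = 3, that it satisfies both the boundary conditions and the equation y'' − y = ax; the second derivative is estimated by central differences.

```python
import math

# Check of exercise 4.9: y(x) = (a + B) sinh x / sinh 1 − a x satisfies
# y'' − y = a x, y(0) = 0 and y(1) = B.  Sample values a = 2, B = 3.
a, B = 2.0, 3.0
y = lambda x: (a + B) * math.sinh(x) / math.sinh(1) - a * x

def ypp(x, d=1e-4):
    # central-difference estimate of the second derivative
    return (y(x + d) - 2 * y(x) + y(x - d)) / d ** 2

bc_ok = abs(y(0.0)) < 1e-12 and abs(y(1.0) - B) < 1e-12
residual = max(abs(ypp(x) - y(x) - a * x) for x in (0.2, 0.5, 0.8))
print(bc_ok, residual)
```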

Exercise 4.10
In this exercise we consider a problem, due to Weierstrass (1815 – 1897), in which the functional achieves its minimum value of zero for a piecewise continuous function, but for continuous functions the functional is always positive.
The functional is

   J[y] = ∫_{−1}^{1} dx x²y'²,   y(−1) = −1,  y(1) = 1,

so J[y] ≥ 0 for all real functions. The function

   y(x) = −1 for −1 ≤ x < 0,   y(x) = 1 for 0 < x ≤ 1,

has a piecewise continuous derivative and J[y] = 0.
(a) Show that the associated Euler-Lagrange equation gives x²y' = A for some constant A and that the solutions of this that satisfy the boundary conditions at x = −1 and x = 1 are, respectively,

   y(x) = −1 − A − A/x for −1 ≤ x < 0,   and   y(x) = 1 + A − A/x for 0 < x ≤ 1.

Deduce that no continuous function satisfies the Euler-Lagrange equation and the boundary conditions.
(b) Show that for the class of continuous functions defined by

   y(x) = −1 for −1 ≤ x ≤ −ε,   y(x) = x/ε for |x| < ε,   y(x) = 1 for ε ≤ x ≤ 1,

where ε is a small positive number, J[y] = 2ε/3. Deduce that for continuous functions the functional can be made arbitrarily close to the smallest possible value of J, that is zero, so there is no stationary path.
(c) A similar result can be proved for a class of continuously differentiable functions. For the functions

   y(x) = (1/β) tan⁻¹(x/ε),   tan β = 1/ε,   0 < ε < 1,

show that

   J[y] = 2ε/π + O(ε²).

Deduce that J[y] may take arbitrarily small values, but cannot be zero.
Hint: the relation tan⁻¹(1/z) = π/2 − tan⁻¹(z) is needed.

It may be shown that for no continuous function satisfying the boundary conditions is J[y] = 0. Thus on the class of continuous functions J[y] never equals its minimum value, but can approach it arbitrarily closely.
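Part (b) of the exercise above is easy to confirm by quadrature. The Python sketch below evaluates J for the continuous ramp function of part (b) by a midpoint rule (the grid size is an arbitrary choice here) and compares the result with 2ε/3.

```python
# Quadrature check of exercise 4.10(b): for the continuous ramp function
# the integrand x^2 y'^2 vanishes outside |x| < ε, where y' = 1/ε, so
# J[y] = ∫_{−ε}^{ε} dx x^2/ε^2 = 2ε/3.
def J(eps, n=20000):
    dx = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * dx
        yprime = 1.0 / eps if abs(x) < eps else 0.0   # derivative of the ramp
        total += x * x * yprime ** 2 * dx
    return total

j1, j2 = J(0.1), J(0.01)
print(j1, 2 * 0.1 / 3)     # both close to 2ε/3
print(j2, 2 * 0.01 / 3)
```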

Exercise 4.11
The Euler-Lagrange equation 4.11 requires that y''(x) exists, yet the original functional does not. The second derivative arises when equation 4.9 is integrated by parts to replace h'(x) by h(x). In this exercise you will show that this step may be avoided and that a sufficient condition not depending upon y''(x) may be derived.
Define the function φ(x) by the integral

   φ(x) = ∫_a^x dt F_y(t, y(t), y'(t)),

so that φ(a) = 0 and φ'(x) = F_y(x, y, y'), and show that equation 4.9 becomes

   ΔS = ∫_a^b dx h'(x) ( ∂F/∂y' − φ(x) ).

Using the result derived in exercise 4.4 show that a necessary condition for S[y] to be stationary is that

   ∂F/∂y' − ∫_a^x dt ∂F/∂y = C,

where C is a constant.
In practice, this equation is not usually as useful as the Euler-Lagrange equation.

Exercise 4.12
The boundary conditions y(a) = A, y(b) = B are not always appropriate, so we need functionals that yield different conditions. In this exercise we illustrate how this can sometimes be achieved. The technique used here is important and will be used extensively in chapter 10.
Consider the functional

   S[y] = −G(y(b)) + (1/2) ∫_a^b dx (y'² + y²),   y(a) = A,

with no condition being given at x = b. For this functional the variation h(x) satisfies h(a) = 0, but h(b) is not constrained.
(a) Use the fact that h(a) = 0 to show that the Gâteaux differential can be written in the form

   ΔS[y, h] = ( y'(b) − G_y(y(b)) ) h(b) − ∫_a^b dx (y'' − y) h.

(b) Using a subset of variations with h(b) = 0 show that the stationary paths satisfy the equation y'' − y = 0, y(a) = A, and that on this path

   ΔS[y, h] = ( y'(b) − G_y(y(b)) ) h(b).

Deduce that S[y] is stationary only if y(b) and y'(b) satisfy the equation

   y'(b) = G_y(y(b)).

(c) Deduce that the stationary path of

   S[y] = −By(b) + (1/2) ∫_a^b dx (y'² + y²),   y(a) = A,

satisfies the Euler-Lagrange equation y'' − y = 0, y(a) = A, y'(b) = B.

Exercise 4.13
Use the ideas outlined in the previous exercise to show that if G(b, y, B) is defined by the integral

   G(b, y, B) = ∫^y dz F_{y'}(b, z, B),

then the functional

   S[y] = −G(b, y(b), B) + ∫_a^b dx F(x, y, y'),   y(a) = A,

is stationary on the path satisfying the Euler-Lagrange equation

   d/dx(∂F/∂y') − ∂F/∂y = 0,   y(a) = A,  y'(b) = B.

4.5 Theorems of Bernstein and du Bois-Reymond


In section 4.4 it was shown that a necessary condition for a function, y(x), to represent a stationary path of the functional S = ∫_a^b dx F(x, y, y'), y(a) = A, y(b) = B, is that it satisfies the Euler-Lagrange equation 4.11 or, in expanded form, exercise 3.10 (page 103),

   y'' F_{y'y'} + y' F_{yy'} + F_{xy'} − F_y = 0,   y(a) = A,  y(b) = B.   (4.14)

This is a second-order, differential equation and is usually nonlinear; even without


the boundary conditions this equation cannot normally be solved in terms of known
functions: the addition of the boundary values normally makes it even harder to solve. It
is therefore frequently necessary to resort to approximate or numerical methods to find
solutions, in which case it is helpful to know that solutions actually exist and that they
are unique: indeed it is possible for “black-box” numerical schemes to yield solutions
when none exists. In this course there is insufficient space to discuss approximate
and numerical methods, but this section is devoted to a discussion of a theorem that
provides some information about the existence and uniqueness of solutions for the Euler-
Lagrange equation. In the last part of this section we contrast these results with those
for the equivalent equation, but with initial conditions rather than boundary values.
First, however, we return to the question, discussed on page 131, of whether the
second derivative of the stationary path exists, that is whether it satisfies the Euler-
Lagrange equation in the whole interval.
The following theorem, due to the German mathematician du Bois-Reymond (1831 – 1889), gives conditions that ensure the existence of the second derivative of a stationary path.
Theorem 4.1
If
(a) y(x) has a continuous first derivative,
(b) ΔS[y, h] = 0 for all allowed h(x),
(c) F(x, u, v) has continuous first and second derivatives in all variables, and
(d) ∂²F/∂y'² ≠ 0 for a ≤ x ≤ b,
then y(x) has a continuous second derivative and satisfies the Euler-Lagrange equation 4.11 for all a ≤ x ≤ b.

This result is of limited practical value because its application sometimes requires
knowledge of the solution, or at least some of its properties. A proof of this theorem may
be found in Gelfand and Fomin (1963, page 17)3 . An example in which Fy0 y0 = 0 on the
stationary path and where this path does not possess a second derivative, yet satisfies
the Euler-Lagrange equation almost everywhere, is given in exercise 4.28 (page 141).

4.5.1 Bernstein’s theorem


The theorem quoted in this section concerns the boundary value problem that can be written in the form of the second-order, nonlinear, boundary value equation,

   d²y/dx² = H(x, y, dy/dx),   y(a) = A,  y(b) = B.   (4.15)

For such equations this is one of the few general results about the nature of the solutions and is due to the Ukrainian mathematician S N Bernstein (1880 – 1968). This theorem provides a sufficient condition for equation 4.15 to have a unique solution.
Theorem 4.2
If for all finite y, y' and x in an open interval containing [a, b], that is c < a ≤ x ≤ b < d,
(a) the functions H, H_y and H_{y'} are continuous,
(b) there is a constant k > 0 such that H_y > k, and
(c) for any Y > 0 and all |y| < Y and a ≤ x ≤ b there are positive constants α(Y) and β(Y), depending upon Y, and possibly c and d, such that

   |H(x, y, y')| ≤ α(Y) y'² + β(Y),

then one and only one solution of equation 4.15 exists.

3 I M Gelfand and S V Fomin, Calculus of Variations (Prentice Hall, translated from the Russian by R A Silverman), reprinted 2000 (Dover).

A proof of this theorem may be found in Akhiezer (1962, page 30)4 . The conditions
required by this theorem are far more stringent than those required by theorems 2.1
(page 61) and 2.2 (page 81), which apply to initial value problems. These theorems
emphasise the significant differences between initial and boundary value problems as
discussed in section 2.2.

Some examples
The usefulness of Bernstein's theorem is somewhat limited because the conditions of the theorem are too stringent; it is, however, one of the rare general theorems applying to this type of problem. Here we apply it to the two problems dealt with in the next chapter, for which the integrands of the functionals are

   F = y √(1 + y'²)           (minimal surface of revolution),
   F = √((1 + y'²)/y)         (brachistochrone).

Substituting these into the Euler-Lagrange equation 4.14 we obtain the following expressions for H,

   y'' = H = (1 + y'²)/y       (minimal surface of revolution),
   y'' = H = −(1 + y'²)/(2y)   (brachistochrone).

In both cases H is discontinuous at y = 0, so the conditions of the theorem do not hold. In fact, the Euler-Lagrange equation for the minimal surface problem has one piecewise smooth solution and, in addition, either two or no differentiable solutions, depending upon the boundary values. The brachistochrone problem always has one, unique solution. These examples emphasise the fact that Bernstein's theorem gives sufficient, as opposed to necessary, conditions.

Exercise 4.14
Use Bernstein's theorem to show that the equation y'' − y = x, y(0) = A, y(1) = B, has a unique solution, and find this solution.
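This exercise can also be checked numerically. Here H = x + y gives H_y = 1 > 0, with the required continuity and growth bounds, so Bernstein's theorem guarantees a unique solution. The Python sketch below constructs the solution in the form y = −x + c₁eˣ + c₂e⁻ˣ for the arbitrary sample boundary values A = 1, B = 2, and verifies it.

```python
import math

# y'' − y = x, y(0) = A, y(1) = B: build y = −x + c1 e^x + c2 e^{−x}
# by solving the 2x2 linear system for c1, c2, then check it.
A, B = 1.0, 2.0                  # sample boundary values
e = math.e
# c1 + c2 = A  and  c1*e + c2/e = B + 1
det = 1.0 / e - e
c1 = (A / e - (B + 1.0)) / det
c2 = ((B + 1.0) - A * e) / det
y = lambda x: -x + c1 * math.exp(x) + c2 * math.exp(-x)

def ypp(x, d=1e-4):
    # central-difference estimate of the second derivative
    return (y(x + d) - 2.0 * y(x) + y(x - d)) / d ** 2

y0, y1 = y(0.0), y(1.0)
residual = max(abs(ypp(x) - y(x) - x) for x in (0.25, 0.5, 0.75))
print(y0, y1, residual)      # boundary values recovered, ODE satisfied
```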

Exercise 4.15
(a) Apply Bernstein's theorem to the equation y'' + y = x, y(0) = 0, y(X) = 1, with X > 0.
(b) Show that the solution of this equation is

   y = x + (1 − X) sin x / sin X

and explain why this does not contradict Bernstein's theorem.

Exercise 4.16
The integrand of the functional for the brachistochrone problem, described in section 3.5.1, is F = √(1 + y'²)/√y. Show that the associated Euler-Lagrange equation is

   y'' = −(1 + y'²)/(2y)

and that this may be written as the pair of first-order equations

   dy_1/dx = y_2,   dy_2/dx = −(1 + y_2²)/(2y_1),   where y_1 = y.

4 N I Akhiezer, The Calculus of Variations (Blaisdell).
Exercise 4.17
Consider the functional S[y] = ∫_{−1}^{1} dx y²(1 − y')², y(−1) = 0, y(1) = 1, the smallest value of which is zero. Show that the solution of the Euler-Lagrange equation that minimises this functional is

   y(x) = 0 for −1 ≤ x ≤ 0,   y(x) = x for 0 < x ≤ 1,

which has a discontinuous derivative at x = 0. Show that this result is consistent with theorem 4.1 of du Bois-Reymond.

4.6 Strong and Weak variations


In section 4.2.2 we briefly discussed the idea of the norm of a function. Here we show why the choice of norm is important.
Consider the functional for the distance between the origin and the point (1, 0) on the x-axis,

   S[y] = ∫_0^1 dx √(1 + y'²),   y(0) = 0,  y(1) = 0.   (4.16)

It is obvious, and proved in section 3.2, that in the class of smooth functions the stationary path is the segment of the x-axis between 0 and 1, that is y(x) = 0 for 0 ≤ x ≤ 1.
Now consider the value of the functional as the path is varied about y = 0, that is S[εh], where h(x) is first restricted to D1 (0, 1) and then to D0 (0, 1).
In the first case the norm of h(x) is taken to be

   ‖h‖₁ = max_{0≤x≤1} |h(x)| + max_{0≤x≤1} |h'(x)|,   (4.17)

and without loss of generality we may restrict h to satisfy ‖h‖₁ = 1, so that |h'(x)| ≤ H₁ < 1. On the varied path the value of the functional is

   S[εh] = ∫_0^1 dx √(1 + ε²h'²) ≤ √(1 + (εH₁)²)

and hence

   S[εh] − S[0] ≤ √(1 + (εH₁)²) − 1 = (εH₁)² / (1 + √(1 + (εH₁)²)) < (εH₁)² < ε².

Thus if h(x) belongs to D1 (0, 1), S[y] changes by O(ε²) on the neighbouring path and, since S[εh] − S[0] > 0 for all ε ≠ 0, the straight-line path is a minimum.
Now consider the less restrictive norm

   ‖h‖₀ = max_{0≤x≤1} |h(x)|,   (4.18)

which restricts the magnitude of h, but not the magnitude of its derivative. A suitable path close to y = 0 is given by h(x) = ε sin nπx, n being a positive integer. Now we have

   S[h] = ∫_0^1 dx √(1 + (εnπ)² cos² nπx) ≥ εnπ ∫_0^1 dx |cos nπx|.
138 CHAPTER 4. THE EULER-LAGRANGE EQUATION

But
1 1/2n
2
Z Z
dx |cos nπx| = 2n dx cos nπx = .
0 0 π
Hence S[h] ≥ 2n. Thus for any  > 0 we may chose a value of n to make S[h] as
large as we please, even though the varied path is arbitrarily close to the straight-line
path: hence the path y = 0 is not stationary when this norm is used.
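The growth of S under the weaker norm can be checked numerically. The sketch below (not part of the course text; the function name and the midpoint quadrature are my own choices) evaluates the varied functional for h(x) = ε sin nπx and shows that, although every varied path lies within ε of y = 0, the value of the functional grows like 2εn:

```python
import math

def S(eps, n, steps=20000):
    # midpoint rule for S[h] = int_0^1 sqrt(1 + (eps*n*pi*cos(n*pi*x))^2) dx,
    # the length of the varied path y = eps*sin(n*pi*x)
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) / steps
        slope = eps * n * math.pi * math.cos(n * math.pi * x)
        total += math.sqrt(1.0 + slope * slope)
    return total / steps

eps = 0.01
for n in (10, 100, 1000):
    # every path lies within eps = 0.01 of y = 0, yet S[h] >= 2*eps*n
    print(n, round(S(eps, n), 3))
```

For n = 1000 the path length already exceeds 20, even though the path never strays more than 0.01 from the x-axis.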
This analysis shows that the definition of the distance between paths is important
because different definitions can change the nature of a path; consequently two types
of stationary path are defined.
The functional S[y] is said to have a weak stationary path, y_s, if there exists a δ > 0 such that S[y_s + g] − S[y_s] has the same sign for all variations g satisfying ‖g‖₁ < δ. On the other hand, S[y] is said to have a strong stationary path, y_s, if there exists a δ > 0 such that S[y_s + g] − S[y_s] has the same sign for all variations g satisfying ‖g‖₀ < δ.
A strong stationary path is also a weak stationary path because if ||g||1 < δ then
||g||0 < δ. The converse is not true in general.
It is easier to find weak stationary paths and, fortunately, these are often the most
important. The Gâteaux differential is defined only for weak variations and, as we have
seen, it leads to a differential equation for the stationary path.

Exercise 4.18
In this exercise we give another example of a path which, in the ‖·‖₀ norm, is arbitrarily close to the line y = 0, but for which S is arbitrarily large.
Consider the isosceles triangle with base AC of length a, height h and base angle β,
as shown on the left-hand side of the figure.

[Figure 4.2: on the left, the isosceles triangle ABC with base AC of length a, height BD = h and base angle β; on the right, the two smaller triangles AB₁D and DB₂C obtained by halving the height and width of ABC.]

(a) Construct the two smaller triangles AB1 D and DB2 C by halving the height
and width of ABC, as shown on the right. If AB = l and BD = h, show that
AB1 = l/2, 2l = a/ cos β and h = l sin β. Hence show that the lengths of the lines
AB1 DB2 C and ABC are the same and equal to 2l.
(b) Show that after n such divisions there are 2n similar triangles of height 2−n h
and that the total length of the curve is 2l. Deduce that arbitrarily close to AC,
the shortest distance between A and C, we may find a continuous curve every
point of which is arbitrarily close to AC, but which has any given length.
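The construction in this exercise can be checked numerically: each halving doubles the number of triangles while preserving the total slant length 2l, yet the height of the zigzag above AC shrinks by half each time. A small sketch (not part of the notes; the sample values of a and β are arbitrary):

```python
import math

# zigzag construction of exercise 4.18: after n halvings there are 2**n similar
# triangles of height h/2**n, but the total slant length of the zigzag stays 2*l
a, beta = 2.0, 0.7             # arbitrary sample base length and base angle
l = a / (2 * math.cos(beta))   # AB = l, since 2*l*cos(beta) = a
h = l * math.sin(beta)         # BD = h
for n in range(6):
    pieces = 2 ** n
    slant = math.sqrt((a / (2 * pieces)) ** 2 + (h / pieces) ** 2)
    total = 2 * pieces * slant
    print(n, round(total, 12), round(h / pieces, 6))  # length constant, height -> 0
```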

4.7 Miscellaneous exercises


Exercise 4.19
Show that the Euler-Lagrange equation for the functional
$$S[y]=\int_0^1 dx\,\left(y'^2-y^2-2xy\right),\qquad y(0)=y(1)=0,$$
is y'' + y = −x. Hence show that the stationary function is y(x) = sin x/sin 1 − x.

Exercise 4.20
Consider the functional
$$S[y]=\int_a^b dx\,F(y,y'),\qquad y(a)=A,\quad y(b)=B,$$
where F(y, y') does not depend explicitly upon x. By changing the independent
variable to u = x − a show that the solution of the Euler-Lagrange equation
depends on the difference b − a rather than a and b separately.

Exercise 4.21
Euler’s original method for finding solutions of variational problems is described
in equation 4.3 (page 122). Consider approximating the functional defined in
exercise 4.19 using the polygon passing through the points (0, 0), (1/2, y₁) and (1, 0),
so there is one variable y1 and two segments.
This polygon can be defined by the straight line segments
$$y(x)=\begin{cases}2y_1x, & 0\le x\le\tfrac12,\\ 2y_1(1-x), & \tfrac12\le x\le1.\end{cases}$$
Show that the corresponding polygon approximation to the functional becomes
$$S(y_1)=\frac{11}{3}y_1^2-\frac12y_1,$$
and hence that the stationary polygon is given by y(1/2) ≃ y₁ = 3/44. Note that this gives y(1/2) ≃ 0.0682 by comparison to the exact value 0.0697.
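The comparison with the exact stationary function of exercise 4.19 is quickly confirmed numerically (this check is not part of the notes):

```python
import math

def y_exact(x):
    # stationary function of exercise 4.19: y = sin x / sin 1 - x
    return math.sin(x) / math.sin(1.0) - x

# stationary point of S(y1) = (11/3)*y1**2 - y1/2
y1 = (1.0 / 2.0) / (2.0 * 11.0 / 3.0)
print(y1)            # 3/44 = 0.0682 to three figures
print(y_exact(0.5))  # ≈ 0.0697
```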

Exercise 4.22
Find the stationary paths of the following functionals.
(a) $S[y]=\int_0^1 dx\,\left(y'^2+12xy\right)$, with y(0) = 0, y(1) = 2.
(b) $S[y]=\int_0^1 dx\,\left(2y^2y'^2-(1+x)y^2\right)$, with y(0) = 1, y(1) = 2.
(c) $S[y]=-\frac12By(2)+\int_1^2 dx\,y'^2/x^2$, with y(1) = A.
(d) $S[y]=-\frac{y(0)^2}{A^3}+\int_0^b dx\,y/y'^2$, with y(b) = B², B² > 2Ab > 0.
Hint: for (c) and (d) use the method described in exercise 4.12.

Exercise 4.23
What is the equivalent of the fundamental lemma of the Calculus of Variations in
the theory of functions of many real variables?

Exercise 4.24
Find the general solution of the Euler-Lagrange equation corresponding to the functional $S[y]=\int_a^b dx\,w(x)\sqrt{1+y'^2}$, and find explicit solutions in the special cases w(x) = √x and w(x) = x.

Exercise 4.25
Consider the functional $S[y]=\int_0^1 dx\,\left(y'^2-1\right)^2$, y(0) = 0, y(1) = A > 0.
(a) Show that the Euler-Lagrange equation reduces to y'² = m², where m is a constant.
(b) Show that the equation y'² = m², with m > 0, has the following three solutions that fit the boundary conditions: y₁(x) = Ax,
$$y_2(x)=\begin{cases}mx, & 0\le x\le\dfrac{A+m}{2m},\\[1.5ex] A+m(1-x), & \dfrac{A+m}{2m}\le x\le1,\end{cases}\qquad m>A,$$
and
$$y_3(x)=\begin{cases}-mx, & 0\le x\le\dfrac{m-A}{2m},\\[1.5ex] A-m(1-x), & \dfrac{m-A}{2m}\le x\le1,\end{cases}\qquad m>A.$$
Show also that on these solutions the functional has the values
$$S[y_1]=(A^2-1)^2,\qquad S[y_2]=(m^2-1)^2\qquad\text{and}\qquad S[y_3]=(m^2-1)^2.$$
(c) Deduce that if A ≥ 1 the minimum value of S[y] is (A² − 1)² and that this occurs on the curve y₁(x), but if A < 1 the minimum value of S[y] is zero and this occurs on the curves y₂(x) and y₃(x) with m = 1.

Exercise 4.26
Show that the following functionals do not have stationary values:
$$\text{(a)}\ \int_0^1 dx\,y',\qquad \text{(b)}\ \int_0^1 dx\,yy',\qquad \text{(c)}\ \int_0^1 dx\,xyy',$$
where, in all cases, y(0) = 0 and y(1) = 1.

Exercise 4.27
Show that the Euler-Lagrange equations for the functionals
$$S_1[y]=\int_a^b dx\,F(x,y,y')\qquad\text{and}\qquad S_2[y]=\int_a^b dx\left(F(x,y,y')+\frac{d}{dx}G(x,y)\right)$$
are identical.

Exercise 4.28
Show that the functional $S[y]=\int_{-1}^{1}dx\,y^2\left(2x-y'\right)^2$, y(−1) = 0, y(1) = 1, achieves its minimum value, zero, when
$$y(x)=\begin{cases}0, & -1\le x\le0,\\ x^2, & 0\le x\le1,\end{cases}$$
which has no second derivative at x = 0. Show that, despite the fact that y''(x) does not exist everywhere, the Euler-Lagrange equation is satisfied for x ≠ 0.
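The claim that the minimum is zero can be seen directly: on each piece the integrand vanishes identically (y = 0 for x ≤ 0, and 2x − y' = 0 for x > 0). A numerical sketch of this observation (not part of the notes):

```python
def y(x):
    return 0.0 if x <= 0.0 else x * x

def yp(x):
    return 0.0 if x <= 0.0 else 2.0 * x

# midpoint rule for S[y] = integral over [-1,1] of y^2 (2x - y')^2 dx on the
# piecewise path; the integrand is identically zero on both pieces
N = 4000
total = 0.0
for i in range(N):
    x = -1.0 + (i + 0.5) * 2.0 / N
    total += y(x) ** 2 * (2.0 * x - yp(x)) ** 2
total *= 2.0 / N
print(total)  # 0.0
```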

Exercise 4.29
The functional $S[y]=\int_a^b dx\,F(x,y,y')$, y(a) = A, y(b) = B, is stationary on those paths satisfying the Euler-Lagrange equation
$$\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right)-\frac{\partial F}{\partial y}=0,\qquad y(a)=A,\quad y(b)=B.$$
In this formulation of the problem we choose to express y in terms of x: however, we could express x in terms of y, so the functional has the form
$$J[x]=\int_A^B dy\,G(y,x,x'),\qquad x(A)=a,\quad x(B)=b,$$
where x' = x'(y) = dx/dy.
(a) Show that G(y, x, x') = x'F(x, y, 1/x'), and that the Euler-Lagrange equation for this functional,
$$\frac{d}{dy}\left(\frac{\partial G}{\partial x'}\right)-\frac{\partial G}{\partial x}=0,\qquad x(A)=a,\quad x(B)=b,$$
when expressed in terms of the original function F is
$$\frac{F_{y'y'}}{x'^3}\,x''-\frac{1}{x'}F_{yy'}-F_{xy'}+F_y=0,$$
where, for instance, the function F_{y'} is the differential of F(x, y, y') with respect to y', expressed in terms of x' after differentiation.
(b) Derive the same result from the original Euler-Lagrange equations for F.

Exercise 4.30
Use the approximation 4.3 (page 122) to show that the equations for the values of y = (y₁, y₂, ⋯, yₙ), where x_{k+1} = x_k + δ, that make S(y) stationary are
$$\frac{\partial S}{\partial y_k}=\delta\frac{\partial}{\partial u}F(z_k)+\frac{\partial}{\partial v}F(z_k)-\frac{\partial}{\partial v}F(z_{k+1})=0,\qquad k=1,2,\cdots,n,$$
where z_k = (x_k, u, v), u = y_k, v = (y_k − y_{k−1})/δ and where y₀ = A and y_{n+1} = B.
Show also that z_{k+1} = z_k + δ(1, y_k', y_k'') + O(δ²), and hence that
$$\frac{\partial S}{\partial y_k}=\delta\left(\frac{\partial F}{\partial u}-\frac{\partial^2F}{\partial x\,\partial v}-y_k'\frac{\partial^2F}{\partial u\,\partial v}-y_k''\frac{\partial^2F}{\partial v^2}\right)+O(\delta^2)=-\delta\left(\frac{d}{dx}\left(\frac{\partial F}{\partial v}\right)-\frac{\partial F}{\partial u}\right)+O(\delta^2),$$
where F and its derivatives are evaluated at z = z_k.
Hence derive the Euler-Lagrange equations.

Harder exercises
Exercise 4.31
This exercise is a continuation of exercise 4.21 and uses a set of n variables to
define the polygon. Take a set of n + 2 equally spaced points on the x-axis,
xk = k/(n + 1), k = 0, 1, · · · , n + 1 with x0 = 0 and xn+1 = 1, and a polygon
passing through the points (xk , yk ). Since y(0) = y(1) = 0 we have y0 = yn+1 = 0,
leaving n unknown variables.
Show that the functional defined in exercise 4.19 approximates to
$$S=\frac1h\sum_{k=0}^{n}\left\{(y_{k+1}-y_k)^2-h^2\left(y_k^2+\frac{2k}{n+1}y_k\right)\right\},\qquad h=\frac{1}{n+1}.$$
(a) For n = 1, the case treated in exercise 4.21, show that this reduces to
$$S(y_1)=\frac72y_1^2-\frac12y_1.$$
Explain the difference between this and the previous expression for S(y₁), given in exercise 4.21.
(b) For n = 2 show that this becomes
$$S=\frac{17}{3}y_1^2+\frac{17}{3}y_2^2-6y_1y_2-\frac29y_1-\frac49y_2,$$
and hence that the equations for y₁ and y₂ are
$$34y_1-18y_2=\frac23,\qquad 34y_2-18y_1=\frac43.$$
Solve these equations to show that y(1/3) ≃ 35/624 ≃ 0.0561 and y(2/3) ≃ 43/624 ≃ 0.0689. Note that these compare favourably with the exact values, y(1/3) = 0.0555 and y(2/3) = 0.0682.
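The equations for the y_k form a tridiagonal linear system, (2 − h²)y_k − y_{k−1} − y_{k+1} = kh³, which can be solved for any n. The sketch below (not part of the notes; the function name and the use of the Thomas algorithm are my own choices) reproduces the n = 2 values 35/624 and 43/624 and converges towards the exact solution as n grows:

```python
def polygon_solution(n):
    # stationary values y_1..y_n of the discretised functional of exercise 4.31:
    # (2 - h^2) y_j - y_{j-1} - y_{j+1} = j*h^3, h = 1/(n+1), y_0 = y_{n+1} = 0,
    # solved with the Thomas algorithm for tridiagonal systems
    h = 1.0 / (n + 1)
    sub = [-1.0] * n
    diag = [2.0 - h * h] * n
    sup = [-1.0] * n
    rhs = [(k + 1) * h ** 3 for k in range(n)]
    for i in range(1, n):                 # forward elimination
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    y = [0.0] * n
    y[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        y[i] = (rhs[i] - sup[i] * y[i + 1]) / diag[i]
    return y

print(polygon_solution(2))  # ≈ [35/624, 43/624] = [0.0561, 0.0689]
```

With n = 99 the value at x = 1/2 agrees with the exact 0.0697 to about four figures.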

Exercise 4.32
Consider the functional $S[y]=\int_a^b dx\,F(y'')$, where F(z) is a differentiable function and the admissible functions are at least twice differentiable and satisfy the boundary conditions y(a) = A₁, y(b) = B₁, y'(a) = A₂ and y'(b) = B₂.
(a) Show that the function making S[y] stationary satisfies the equation
$$\frac{\partial F}{\partial y''}=c(x-a)+d,$$
where c and d are constants.
(b) In the case that F(z) = ½z² show that the solution is
$$y(x)=\frac16c(x-a)^3+\frac12d(x-a)^2+A_2(x-a)+A_1,$$
where c and d satisfy the equations
$$\frac16cD^3+\frac12dD^2=B_1-A_1-A_2D\qquad\text{where}\quad D=b-a,$$
$$\frac12cD^2+dD=B_2-A_2.$$

(c) Show that this stationary function is also a minimum of the functional.
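The linear equations of part (b) are easily solved for c and d; the sketch below (not part of the notes; the boundary values are arbitrary test data) verifies that the resulting cubic satisfies y(b) = B₁ and y'(b) = B₂:

```python
a, b = 0.0, 2.0
A1, A2, B1, B2 = 1.0, 0.0, 3.0, 1.0   # arbitrary test data
D = b - a
# system: c*D^3/6 + d*D^2/2 = B1 - A1 - A2*D ;  c*D^2/2 + d*D = B2 - A2
r1 = B1 - A1 - A2 * D
r2 = B2 - A2
det = (D ** 3 / 6) * D - (D ** 2 / 2) * (D ** 2 / 2)   # Cramer's rule
c = (r1 * D - (D ** 2 / 2) * r2) / det
d = ((D ** 3 / 6) * r2 - (D ** 2 / 2) * r1) / det

def y(x):
    return c * (x - a) ** 3 / 6 + d * (x - a) ** 2 / 2 + A2 * (x - a) + A1

def yp(x):
    return c * (x - a) ** 2 / 2 + d * (x - a) + A2

print(round(y(b), 10), round(yp(b), 10))  # → 3.0 1.0, the prescribed B1 and B2
```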

Exercise 4.33
The theory described in the text considered functionals with integrands depending only upon x, y(x) and y'(x). However, functionals depending upon higher
derivatives also exist and are important, for example in the theory of stiff beams,
and the equivalent of the Euler-Lagrange equation may be derived using a direct
extension of the methods described in this chapter.
Consider the functional
$$S[y]=\int_a^b dx\,F(x,y,y',y''),\qquad y(a)=A_1,\ y'(a)=A_2,\ y(b)=B_1,\ y'(b)=B_2.$$
Show that the Gâteaux differential of this functional is
$$\Delta S[y,h]=\int_a^b dx\left(h\frac{\partial F}{\partial y}+h'\frac{\partial F}{\partial y'}+h''\frac{\partial F}{\partial y''}\right).$$
Using integration by parts show that
$$\int_a^b dx\,h''\frac{\partial F}{\partial y''}=\int_a^b dx\,h\,\frac{d^2}{dx^2}\left(\frac{\partial F}{\partial y''}\right),$$
being careful to describe the necessary properties of h(x). Hence show that S[y] is stationary for the functions that satisfy the fourth-order differential equation
$$\frac{d^2}{dx^2}\left(\frac{\partial F}{\partial y''}\right)-\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right)+\frac{\partial F}{\partial y}=0,$$
with the boundary conditions y(a) = A₁, y'(a) = A₂, y(b) = B₁, and y'(b) = B₂.

Exercise 4.34
Using the result derived in the previous exercise, find the stationary functions of the functionals
(a) $S[y]=\int_0^1 dx\,\left(1+y''^2\right)$, y(0) = 0, y'(0) = y(1) = y'(1) = 1,
(b) $S[y]=\int_0^{\pi/2} dx\,\left(y''^2-y^2+x^2\right)$, y(0) = 1, y'(0) = y(π/2) = 0, y'(π/2) = −1.
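For case (a), exercise 4.32(a) shows that ∂F/∂y'' = 2y'' is linear in x, so the stationary path is a cubic, i.e. the cubic Hermite interpolant of the boundary data; with the data of (a) this interpolant collapses to y = x. A small check (not part of the notes; the helper name is mine):

```python
def hermite(x, y0, yp0, y1, yp1):
    # cubic Hermite interpolant on [0, 1] with prescribed end values and slopes
    h00 = 2 * x**3 - 3 * x**2 + 1
    h10 = x**3 - 2 * x**2 + x
    h01 = -2 * x**3 + 3 * x**2
    h11 = x**3 - x**2
    return h00 * y0 + h10 * yp0 + h01 * y1 + h11 * yp1

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, hermite(x, 0.0, 1.0, 1.0, 1.0))  # equals x: the stationary path is y = x
```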
Chapter 5

Applications of the Euler-Lagrange equation

5.1 Introduction
In this chapter we solve the Euler-Lagrange equations for two classic problems, the
brachistochrone, section 5.2, and the minimal surface of revolution, section 5.3. These
examples are of historic importance and special because the Euler-Lagrange equations
can be solved in terms of elementary functions. They are also important because they
are relatively simple yet provide some insight into the complexities of variational prob-
lems.
The first example, the brachistochrone problem, is the simpler of these two prob-
lems and there is always a unique solution satisfying the Euler-Lagrange equation. The
second example is important because it is one of the simplest examples of a minimum
energy problem; but it also illustrates the complexities inherent in nonlinear boundary
value problems and we shall see that there are sometimes two and sometimes no differ-
entiable solutions, depending upon the values of the various parameters. This example
also shows that some stationary paths have discontinuous derivatives and therefore can-
not satisfy the Euler-Lagrange equations everywhere. This effect is illustrated in the
discussion of soap films in section 5.4 and in chapter 10 is considered in more detail.
In both these cases you may find the analysis leading to the required solutions com-
plicated. It is, however, important that you are familiar with this type of mathematics
so you should understand the text sufficiently well to be able to write the analysis in
your own words.

5.2 The brachistochrone


The problem, described previously in section 3.5.1 (page 104), is to find the smooth
curve joining two given points Pa and Pb , lying in a vertical plane, such that a bead
sliding on the curve, without friction but under the influence of gravity, travels from
Pa to Pb in the shortest possible time, the initial speed at Pa being given. It was
pointed out in section 3.5.1 that John Bernoulli made this problem famous in 1696


and that several solutions were published in 1697: Newton’s comprised the simple
statement that the solution was a cycloid, giving no proof. In section 5.2.3 we prove
this result algebraically, but first we describe necessary preliminary material. In the next
section we derive the parametric equations for the cycloid after giving some historical
background. In section 5.2.2 the brachistochrone problem is formulated in terms of a
functional and the stationary path of this is found in section 5.2.3.

5.2.1 The cycloid


The cycloid is one of a class of curves formed by a point fixed on a circle that rolls,
without slipping, on another curve. A cycloid is formed when the fixed point is on the
circumference of the circle and the circle rolls on a straight line, as shown in figure 5.1:
other curves with similar constructions are considered in chapter 9. A related curve is
the trochoid where the point tracing out the curve is not on the circle circumference;
clearly different types of trochoids are produced depending whether the point is inside
or outside the circle, see exercise 9.18 (page 253).

[Figure 5.1: Diagram showing how the cycloid OPD is traced out by a point P fixed on a circle of radius a and centre C that rolls, through an angle θ, along the x-axis.]

In figure 5.1 a circle of radius a rolls along the x-axis, starting with its centre on the
y-axis. Fix attention on the point P attached to the circle, initially at the origin O. As
the circle rolls P traces out the curve OP D named the cycloid .
The cycloid has been studied by many mathematicians from the time of Galileo
(1564 – 1642), and was the cause of so many controversies and quarrels in the 17th
century that it became known as “the Helen of geometers”. Galileo named the cycloid
but knew insufficient mathematics to make progress. He tried to find the area between
it and the x-axis, but the best he could do was to trace the curve on paper, cut out the
arc and weigh it, to conclude that its area was a little less than three times that of the
generating circle — in fact it is exactly three times the area of this circle, as you can
show in exercise 5.3. He abandoned his study of the cycloid, suggesting only that the
cycloid would make an attractive arch for a bridge. This suggestion was implemented
in 1764 with the building of a bridge with three cycloidal arches over the river Cam in
the grounds of Trinity College, Cambridge, shown in figure 5.2.
The reason why cycloidal arches were used is no longer known, all records and
original drawings having been lost. However, it seems likely that the architect, James
Essex (1722 – 1784), chose this shape to impress Robert Smith (1689 – 1768), the Master
of Trinity College, who was keen to promote the study of applied mathematics.

[Figure 5.2: Essex's bridge over the Cam, in the grounds of Trinity College, having three cycloidal arches.]

The area under a cycloid was first calculated in 1634 by Roberval (1602 – 1675). In
1638 he also found the tangent to the curve at any point, a problem solved at about
the same time by Fermat (1601 – 1665) and Descartes (1596 – 1650). Indeed, it was at
this time that Fermat gave the modern definition of a tangent to a curve. Later, in
1658, Wren (1632 – 1723), the architect of St Paul’s Cathedral, determined the length
of a cycloid.
Pascal’s (1623 – 1662) last mathematical work, in 1658, was on the cycloid and,
having found certain areas, volumes and centres of gravity associated with the cycloid,
he proposed a number of such questions to the mathematicians of his day with first and
second prizes for their solution. However, publicity and timing were so poor that only
two solutions were submitted and because these contained errors no prizes were awarded,
which caused a degree of aggravation among the two contenders A de Lalouvère (1600 –
1664) and John Wallis (1616 – 1703).
At about the time of this contest Huygens (1629 – 1695) designed the first pendulum
clock, which was made by Salomon Coster in 1658, but was aware that the period of the
pendulum depended upon the amplitude of the swing. It occurred to him to consider the
motion of an object sliding on an inverted cycloidal arch and he found that the object
reaches the lowest point in a time independent of the starting point. The question
that remained was how to persuade a pendulum to oscillate in a cycloidal, rather than
a circular arc. Huygens now made the remarkable discovery illustrated in figure 5.3.
If one suspends a pendulum of the same length as one of the semi-arcs from a point P at the cusp between two inverted cycloidal arcs PQ and PR, then it will swing in a cycloidal arc QSR which has the same size and shape as the cycloidal arcs of which PQ
and P R are parts. Such a pendulum will have a period independent of the amplitude
of the swing.

[Figure 5.3: Diagram showing how Huygens' cycloidal pendulum, PT, swings between two fixed, similar cycloidal arcs PR and PQ, the bob tracing the cycloidal arc QSR through the lowest point S.]

Huygens made a pendulum clock with cycloidal jaws, but found that in practice it
was no more accurate than an ordinary pendulum clock: his results on the cycloid
were published in 1673 when his Horologium Oscillatorium appeared1 . However, the
discovery illustrated in figure 5.3 was significant in the development of the mathematical
understanding of curves in space.

The equations for the cycloid


The equation of the cycloid is obtained by finding the coordinates of P , in figure 5.1,
after the circle has rolled through an angle θ, so that the length of the circular arc PA is aθ. Because there is no slipping, OA = PA = aθ and the coordinates of the circle centre are C = (aθ, a). Relative to C the displacements of P are PB = −a cos θ vertically and BC = −a sin θ horizontally, and hence the coordinates of P are

x = a(θ − sin θ), y = a(1 − cos θ), (5.1)

which are the parametric equations of the cycloid. For |θ| ≪ 1, x and y are related approximately by y = (a/2)(6x/a)^{2/3}, see exercise 5.2. The arc OPD is traced out as θ increases from 0 to 2π.
If, in figure 5.3 the y-axis is in the direction P S, that is pointing downwards, the
upper arc QP R, with the cusp at P is given by these equations with −π ≤ θ ≤ π and
it can be shown, see exercise 5.28, that the lower arc is described by x = a(θ + sin θ),
y = a(3 + cos θ), and the same range of θ. The following three exercises provide practice
in the manipulation of the cycloid equations; further examples are given in exercises 5.26
– 5.28.

Exercise 5.1
Show that the gradient of the cycloid is given by dy/dx = 1/tan(θ/2). Deduce that the cycloid intersects the x-axis perpendicularly when θ = 0 and 2π.

1. A more detailed account of Huygens' work is given in Unrolling Time by J. G. Yoder (Cambridge University Press).

Exercise 5.2
By using the Taylor series of sin θ and cos θ show that for small |θ|, x ≃ aθ³/6 and y ≃ aθ²/2. By eliminating θ from these equations show that near the origin y ≃ (a/2)(6x/a)^{2/3}.

Exercise 5.3
Show that the area under the arc OPD in figure 5.1 is 3πa² and that the length of the cycloidal arc OP is s(θ) = 8a sin²(θ/4).
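Both results of exercise 5.3 can be checked by direct quadrature of the parametric equations 5.1 (this code is not part of the notes):

```python
import math

a = 1.0
N = 20000
dt = 2 * math.pi / N
area = 0.0
length = 0.0
for i in range(N):                      # midpoint rule over 0 <= θ <= 2π
    t = (i + 0.5) * dt
    dxdt = a * (1 - math.cos(t))        # dx/dθ from x = a(θ - sin θ)
    dydt = a * math.sin(t)              # dy/dθ from y = a(1 - cos θ)
    area += a * (1 - math.cos(t)) * dxdt        # ∫ y dx
    length += math.sqrt(dxdt ** 2 + dydt ** 2)  # ∫ ds
print(round(area * dt / (math.pi * a * a), 4))  # → 3.0  (area = 3πa²)
print(round(length * dt / a, 4))                # → 8.0  (length = 8a = 8a sin²(π/2))
```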

5.2.2 Formulation of the problem


In this section we formulate the variational principle for the brachistochrone by obtain-
ing an expression for the time of passage from given points (a, A) to (b, B) along a curve
y(x).
Define a coordinate system Oxy with the y-axis vertically upwards and the origin
chosen to make a = B = 0, so the starting point, at (0, A), is on the y-axis and the
final point is on the x-axis at (b, 0), as shown in figure 5.4.

[Figure 5.4: Diagram showing the curve y(x) through (0, A) and (b, 0) on which the bead slides; s(x) is the distance along the curve from the starting point to P = (x, y(x)).]

At a point P = (x, y(x)) on this curve let s(x) be the distance along the curve from the starting point, so the speed of the bead is defined to be v = ds/dt. The kinetic energy of a bead having mass m at P is ½mv² and its potential energy is mgy; because the bead is sliding without friction, energy conservation gives
$$\tfrac12mv^2+mgy=E, \tag{5.2}$$
where the energy E is given by the initial conditions, E = ½mv₀² + mgA, v₀ being the initial speed at P_a = (0, A). Small changes in s are given by δs² = δx² + δy², and so
$$\left(\frac{ds}{dt}\right)^2=\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2=\left(\frac{dx}{dt}\right)^2\left(1+y'(x)^2\right). \tag{5.3}$$
Thus on rearranging equation 5.2 we obtain
$$\left(\frac{ds}{dt}\right)^2=\frac{2E}{m}-2gy \qquad\text{or}\qquad \frac{dx}{dt}\sqrt{1+y'(x)^2}=\sqrt{\frac{2E}{m}-2gy(x)}. \tag{5.4}$$

The time of passage from x = 0 to x = b is given by the integral
$$T=\int_0^T dt=\int_0^b \frac{dx}{dx/dt}.$$
Thus on re-arranging equation 5.4 to express dx/dt in terms of y(x) we obtain the required functional,
$$T[y]=\int_0^b dx\,\sqrt{\frac{1+y'^2}{2E/m-2gy}}. \tag{5.5}$$
This functional may be put in a slightly more convenient form by noting that the energy and the initial conditions are related by equation 5.2, so by defining the new dependent variable
$$z(x)=A+\frac{v_0^2}{2g}-y(x)\qquad\text{we obtain}\qquad T[z]=\int_0^b dx\,\sqrt{\frac{1+z'^2}{2gz}}. \tag{5.6}$$
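On the straight-line path z = Ax/b the functional 5.6 can be evaluated in closed form; the value √(2(A² + b²)/(Ag)) is quoted later in this section. A numerical check (not part of the notes; the substitution x = u² is my own device for handling the integrable singularity at x = 0):

```python
import math

def T_straight_line(A, b, g=9.81, N=20000):
    # T[z] on z = A*x/b: midpoint rule after the substitution x = u^2,
    # which removes the 1/sqrt(x) singularity of the integrand at x = 0
    zp = A / b
    total = 0.0
    umax = math.sqrt(b)
    for i in range(N):
        u = (i + 0.5) * umax / N
        x = u * u
        total += math.sqrt((1 + zp * zp) / (2 * g * (A * x / b))) * 2 * u
    return total * umax / N

A, b, g = 1.0, 1.0, 9.81
print(round(T_straight_line(A, b, g), 6))                  # quadrature value
print(round(math.sqrt(2 * (A * A + b * b) / (A * g)), 6))  # closed form, same value
```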

Exercise 5.4
(a) Find the time, T, taken for a particle of mass m to slide down the straight line, y = Ax, from the point (X, AX) to the origin when the initial speed is v₀. Show that if v₀ = 0 this is
$$T=\sqrt{\frac{2X}{gA}}\sqrt{1+A^2}.$$
(b) Show also that if the point (X, AX) lies on the circle of radius R and with centre at (0, R), so the equation of the circle is x² + (y − R)² = R², then the time taken to slide along the straight line to the origin is independent of X and is given by
$$T=2\sqrt{\frac{R}{g}}.$$
This surprising result was known by Galileo and seems to have been one reason why he thought that the solution to the brachistochrone problem was a circle.
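Part (b) holds because every such chord endpoint satisfies X(1 + A²) = 2AR, which makes the slide time of part (a) independent of A. A numerical check (not part of the notes; the sample radius is arbitrary):

```python
import math

def slide_time(X, A, g=9.81):
    # time from rest down the chord y = A*x, from (X, A*X) to the origin
    return math.sqrt(2 * X / (g * A)) * math.sqrt(1 + A * A)

R, g = 2.0, 9.81
for A in (0.5, 1.0, 3.0):
    X = 2 * A * R / (1 + A * A)   # (X, A*X) lies on x^2 + (y - R)^2 = R^2
    print(round(slide_time(X, A, g), 9))  # the same value, 2*sqrt(R/g), every time
```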

Exercise 5.5
Show that the functional defined in equation 5.6, when expressed using z as the independent variable and if v₀ = 0, becomes
$$T[x]=\frac{1}{\sqrt{2g}}\int_0^A dz\,\sqrt{\frac{1+x'(z)^2}{z}},\qquad x(0)=0,\quad x(A)=b,$$
and write down the Euler-Lagrange equation for this functional.



5.2.3 A solution
The integrand of the functional 5.6 is independent of x, so we may use equation 4.13 (page 130) to write Euler's equation in the form
$$z'\frac{\partial F}{\partial z'}-F=\text{constant}\qquad\text{where}\quad F(z,z')=\sqrt{\frac{1+z'^2}{z}}.$$
Note that the external constant (2g)^{-1/2} can be ignored. Since
$$\frac{\partial F}{\partial z'}=\frac{z'}{\sqrt{z(1+z'^2)}},$$
this gives
$$\frac{z'^2}{\sqrt{z(1+z'^2)}}-\sqrt{\frac{1+z'^2}{z}}=-\frac1c$$
for some positive constant c — note that c must be positive because the left-hand side of the above equation is negative. Rearranging the last expression gives
$$z\left(1+z'^2\right)=c^2\qquad\text{or}\qquad\frac{dz}{dx}=\pm\sqrt{\frac{c^2}{z}-1}. \tag{5.7}$$
This first-order differential equation is separable and can be solved. First, however, note that because the y-axis is vertically upwards we expect the solution y(x) to decrease away from x = 0, that is z(x) will increase, so we take the positive sign; then integration gives
$$x=\int dz\,\sqrt{\frac{z}{c^2-z}}.$$
Now substitute z = c² sin²φ to give
$$x=2c^2\int d\phi\,\sin^2\phi=c^2\int d\phi\,(1-\cos2\phi)=\frac12c^2(2\phi-\sin2\phi)+d\qquad\text{and}\qquad z=\frac12c^2(1-\cos2\phi), \tag{5.8}$$
where d is a constant. Both c and d are determined by the values of A, b and the
initial speed, v0 . Comparing these equations with equation 5.1 we see that the required
stationary curve is a cycloid. It is shown in chapter 8 that, in some cases, this solution
is a global minimum of T [z].
In the case that the particle starts from rest, v₀ = 0, these solutions give
$$x=d+\frac12c^2(2\phi-\sin2\phi),\qquad y=A-\frac12c^2(1-\cos2\phi),$$
where c and d are constants determined by the known end points of the curve.
At the starting point y = A so here φ = 0 and since x = 0 it follows that d = 0:
because φ(0) = 0 the particle initially falls vertically downwards. At the final point of
the curve, x = b, y = 0, let φ = φb . Then
$$\frac{2b}{c^2}=2\phi_b-\sin2\phi_b,\qquad\frac{2A}{c^2}=1-\cos2\phi_b,$$
giving two equations for c and φ_b: we now show that these equations have a unique, real solution. Consider the cycloid
$$u=2\theta-\sin2\theta,\qquad v=1-\cos2\theta,\qquad 0\le\theta\le\pi. \tag{5.9}$$

The value of φb is given by the value of θ where this cycloid intersects the straight line
Au = bv. The graphs of these two curves are shown in the following figure.

[Figure 5.5: Graph of the cycloid defined in equation 5.9 and the straight line bv = Au, which intersect at a single point with 0 < θ < π.]

Because the gradient of the cycloid at θ = 0 (u = v = 0) is infinite, this graph shows
that there is a single value of φb for all positive values of the ratio A/b. By dividing the
first of equations 5.9 by the second we see that φb is given by solving the equation

$$\frac{2\phi_b-\sin2\phi_b}{2\sin^2\phi_b}=\frac{b}{A},\qquad 0<\phi_b<\pi. \tag{5.10}$$

Unless b/A is small this equation can only be solved numerically. Once φ_b is known, the value of c is given from the equation 2A/c² = 1 − cos 2φ_b, which may be put in the more convenient form c² = A/sin²φ_b.
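Equation 5.10 is easily solved numerically, as the text says. A bisection sketch (not part of the notes; the function name is mine) recovers φ_b = π/2 when b = Aπ/2 and the small-b estimate φ_b ≈ 3b/2A:

```python
import math

def phi_b(ratio, tol=1e-12):
    # solve (2φ - sin 2φ)/(2 sin²φ) = ratio (= b/A) for φ in (0, π) by bisection;
    # the left-hand side increases from 0 to infinity, so the root is unique
    f = lambda p: (2 * p - math.sin(2 * p)) / (2 * math.sin(p) ** 2) - ratio
    lo, hi = 1e-9, math.pi - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(phi_b(math.pi / 2))  # ≈ π/2: the path tangent to the x-axis at (b, 0)
print(phi_b(0.1))          # ≈ 3b/2A = 0.15 for small b/A
```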

Exercise 5.6
Show that if A ≫ b then φ_b ≃ 3b/2A and that y/A ≃ 1 − (x/b)^{2/3}.

Exercise 5.7
Use the solution defined in equation 5.8 to show that on the stationary path the
time of passage is
$$T[z]=\sqrt{\frac{2A}{g}}\,\frac{\phi_b}{\sin\phi_b}.$$

We end this section by showing a few graphs of the solution 5.8 and quoting some
formulae that help understand them; the rest of this section is not assessed.
In the following figure are depicted graphs of the stationary paths for A = 1 and
various values of b, ranging from small to large, so all curves start at (0, 1) but end at
the points (b, 0), with 0.1 ≤ b ≤ 4.

[Figure 5.6: Graphs showing the stationary paths joining the points (0, 1) and (b, 0) for b = 0.1, 1/2, 1, π/2, 2, 3 and 4.]

From figure 5.6 we see that for small b the stationary path is close to that of a straight
line, as would be expected. In this case φb is small and it was shown in exercise 5.6
that
$$\phi_b=\frac{3b}{2A}-\frac{9b^3}{20A^3}+O(b^5)\qquad\text{and that}\qquad\frac{y}{A}\simeq1-\left(\frac{x}{b}\right)^{2/3}.$$
Also the time of passage is
$$T=\sqrt{\frac{2A}{g}}\left(1+\frac{3b^2}{8A^2}-\frac{81b^4}{640A^4}+O(b^6)\right).$$

By comparison, if a particle slides down the straight line joining (0, A) to (b, 0), that is
y/A + x/b = 1, so z = Ax/b, then the time of passage is
$$T_{SL}=\sqrt{\frac{2(A^2+b^2)}{Ag}}=\begin{cases}\sqrt{\dfrac{2A}{g}}\left(1+\dfrac{b^2}{2A^2}+O(b^4)\right), & b\ll A,\\[2ex] b\sqrt{\dfrac{2}{Ag}}\left(1+\dfrac{A^2}{2b^2}+O(b^{-4})\right), & b\gg A.\end{cases}$$
Thus, for small b, the relative difference is
$$T_{SL}-T=\frac{b^2}{8A^2}\,T+O(b^4).$$
Returning to figure 5.6 we see for small b the stationary paths cross the x-axis at
the terminal point. At some critical value of b the stationary path is tangential to the
x-axis at the terminal point. We can see from the equation for x(φ) that this critical
path occurs when y 0 (φ) = 0, that is when φb = π/2 and, from equation 5.10, we see
that this gives b = Aπ/2. On this path the time of passage is
$$T=\frac{\pi}{2}\sqrt{\frac{2A}{g}}\qquad\text{and also}\qquad T_{SL}=T\sqrt{1+\frac{4}{\pi^2}}=1.185\,T.$$

For b > Aπ/2 the stationary path dips below the x-axis and approaches the terminal point from below. For b ≫ Aπ/2 it can be shown that φ_b = π − √(Aπ/b) + O(b^{-3/2}), and that the path is given approximately by
$$x\simeq\frac{b}{2\pi}(2\phi-\sin2\phi),\qquad y\simeq A-\frac{b}{\pi}\sin^2\phi,$$
and that
$$T=\sqrt{\frac{2\pi b}{g}}\left(1-\sqrt{\frac{A}{b\pi}}+\frac{\sqrt{\pi}}{6}\left(\frac{A}{b}\right)^{3/2}+\cdots\right).$$
Thus for large b the time of passage increases as √b, compared with the time to slide down the straight line, which is proportional to b. Further, the stationary path reaches its lowest point when φ = π/2, where y = A − b/π; in other words the distance it falls below the x-axis is about 1/3 of the distance it travels along it, provided b ≫ Aπ. That is, the particle first accelerates to a high speed, reaching a speed v ≃ √(2gb/π), before slowing to reach the terminal point at speed v = √(2gA): on the straight-line path the particle accelerates uniformly to this speed.

Exercise 5.8
Galileo thought that the solution to the brachistochrone problem was given by the circle passing through the initial and final points, (0, A) and (b, 0), and tangential to the y-axis at the start point.
Show that the equation of this circle is (x − R)² + (y − A)² = R², where R is its radius, given by 2bR = A² + b². Show also that if x = R(1 − cos θ) and y = A − R sin θ, then the time of passage is
$$T=\sqrt{\frac{R}{2g}}\int_0^{\theta_b}\frac{d\theta}{\sqrt{\sin\theta}}\qquad\text{where}\quad\sin\theta_b=\frac{A}{R}=\frac{2Ab}{A^2+b^2}.$$
If b ≫ A show that T ≃ √(2A/g).

5.3 Minimal surface of revolution


The problem is to find the non-negative, smooth function y(x), with given end points
y(a) = A and y(b) = B, such that the cylindrical surface formed by rotating the curve
y(x) about the x-axis has the smallest possible area. The left-hand side of the following
figure shows the construction of this surface: note that the end discs do not contribute
to the area considered.

[Figure 5.7: Diagram showing the construction of a surface of revolution, on the left, and, on the right, the small segment of width δx and arc length δs used to construct the integral 5.11.]

This section is divided into three parts. First, we derive the functional S[y] giving the
required area. Second, we derive the equation that a sufficiently differentiable function
must satisfy to make the functional stationary. Finally we solve this equation in a
simple case and show that even this relatively simple problem has pitfalls.

5.3.1 Derivation of the functional


An expression for the area of this surface is obtained by first finding the area of the edge of a thin disc of width δx, shown in the right-hand side of figure 5.7. The small segment of the boundary curve may be approximated by a straight line provided δx is sufficiently small, so its length, δs, is given by
$$\delta s=\sqrt{1+y'^2}\,\delta x+O(\delta x^2).$$
The area δS traced out by this segment as it rotates about the x-axis is the circumference of the circle of radius y(x) times δs; to order δx this is
$$\delta S=2\pi y(x)\,\delta s=2\pi y\sqrt{1+y'^2}\,\delta x.$$
Hence the area of the whole surface from x = a to x = b is given by the functional
$$S[y]=2\pi\int_a^b dx\,y\sqrt{1+y'^2},\qquad y(a)=A\ge0,\quad y(b)=B>0; \tag{5.11}$$
with no loss of generality we may assume that A ≤ B and hence that B > 0.

Exercise 5.9
Show that the equation of the straight line joining (a, A) to (b, B) is
$$y=\frac{B-A}{b-a}(x-a)+A.$$
Use this together with equation 5.11 to show that the surface area of the frustum of the cone shown in figure 5.8 is given by
$$S=\pi(B+A)\sqrt{(b-a)^2+(B-A)^2}.$$
Note that the frustum of a solid is that part of the solid lying between two parallel planes which cut the solid; its area does not include the area of the parallel ends.

[Figure 5.8: Diagram showing the frustum of a cone, with slant-height l and circular ends of radii A and B a distance b − a apart.]

Show further that this expression may be written in the form π(A + B)l where l
is the length of the slant height and A and B are the radii of the end circles.
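As a sanity check on the frustum formula (not part of the notes; the quadrature and the sample values are my own), the midpoint-rule value of the functional 5.11 on the straight line can be compared with π(B + A)√((b − a)² + (B − A)²):

```python
import math

def frustum_area(a, b, A, B):
    # exercise 5.9: S = π (B + A) sqrt((b - a)^2 + (B - A)^2)
    return math.pi * (B + A) * math.sqrt((b - a) ** 2 + (B - A) ** 2)

a, b, A, B = 0.0, 2.0, 1.0, 2.0
m = (B - A) / (b - a)
N = 10000
s = 0.0
for i in range(N):   # midpoint rule for 2π ∫ y sqrt(1 + y'^2) dx on the line
    x = a + (i + 0.5) * (b - a) / N
    s += (A + m * (x - a)) * math.sqrt(1 + m * m)
s *= 2 * math.pi * (b - a) / N
print(round(s, 6), round(frustum_area(a, b, A, B), 6))  # the two values agree
```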

The following exercise may seem a non-sequitur, but it illustrates two important points.
First, it shows how a simple version of Euler’s method, section 4.2, can provide a useful
approximation to a functional. Second, it shows how a very simple approximation can
capture the essential, quite complicated, behaviour of a functional: this is important
because only rarely can the Euler-Lagrange equation be solved exactly. In particular
it suggests that in the simple case A = B, with y(x) defined on |x| ≤ a, there are
stationary paths only if A/a is sufficiently large and then there are two stationary
paths.

Exercise 5.10
Consider the case A = B and with −a ≤ x ≤ a, so the functional 5.11 becomes
S[y] = 2π ∫_{−a}^{a} dx y√(1 + y′²),   y(±a) = A > 0.

(a) Assume that the required stationary paths are even and use a variation of
Euler’s method, described in section 4.2.1, by assuming that

y(x) = α + ((A − α)/a)x,   0 ≤ x ≤ a,
where α is a constant, to derive an approximation, S(α), for S[y].
(b) By differentiating this expression with respect to α, show that S(α) is
stationary if α = α± = (A ± √(A² − 2a²))/2, and deduce that no such solutions
exist if A < a√2. Note that the exact calculation, described below, shows that
there are no continuous stationary paths if A < 1.51a.

(c) Show that if A > a√2 the two stationary values of S satisfy S(α−) > S(α+).
(d) If A ≫ a show that the two values of α are given approximately by

α+ = A − a²/(2A) + ···   and   α− = (a²/(2A))(1 + a²/(2A²) + ···),

and find suitable approximations for the associated stationary paths. Show also
that the stationary values of S are given approximately by S(α− ) ' 2πA2 and
S(α+ ) ' 4πAa, and give a physical interpretation of these values.
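The numbers in exercise 5.10 can be checked with a short numerical sketch (an illustration added to these notes, not part of the original text). It assumes the closed form S(α) = 2π(A + α)√(a² + (A − α)²) obtained from the trial function in part (a), so skip it if you wish to do the exercise unaided; it confirms that the quoted values α± make S(α) stationary and that S(α−) > S(α+):

```python
import math

A, a = 2.0, 1.0    # sample values with A > a*sqrt(2)

def S(alpha):
    # closed form of the trial-function area from exercise 5.10(a)
    return 2 * math.pi * (A + alpha) * math.sqrt(a**2 + (A - alpha)**2)

alpha_p = (A + math.sqrt(A**2 - 2 * a**2)) / 2
alpha_m = (A - math.sqrt(A**2 - 2 * a**2)) / 2

h = 1e-6
for al in (alpha_p, alpha_m):
    # central difference of S at the predicted stationary points: ~0
    print((S(al + h) - S(al - h)) / (2 * h))

print(S(alpha_m) > S(alpha_p))   # True: the smaller root gives the larger area
```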

5.3.2 Applications of the Euler-Lagrange equation


The integrand of the functional 5.11 does not depend explicitly upon x, hence the first
integral of the Euler-Lagrange equation 4.13 (page 130) may be used. In this case we
may take the integrand to be G(y, y′) = y√(1 + y′²), so that

∂G/∂y′ = yy′/√(1 + y′²)   and   y′ ∂G/∂y′ − G = −y/√(1 + y′²).

Hence the Euler-Lagrange equation integrates to


y
p = c, y(a) = A ≥ 0, y(b) = B > 0, (5.12)
1 + y0 2
5.3. MINIMAL SURFACE OF REVOLUTION 157

for some constant c; since y(b) > 0 we may assume that c is positive. By squaring and
re-arranging this equation we obtain the simpler first-order equation
dy/dx = ±√(y² − c²)/c,   y(a) = A ≥ 0, y(b) = B > 0.   (5.13)
The solutions of equation 5.13, if they exist, ensure that the functional 5.11 is stationary.
We shall see, however, that suitable solutions do not always exist and that when they
do further work is necessary in order to determine the nature of the stationary point.

5.3.3 The solution in a special case


Here we solve the first-order differential equation 5.13 when the ends of the cylinder
have the same radius, that is A = B > 0: in this case it is convenient to put b = −a,
so that the origin is at the centre of the cylinder which has length 2a. Now there
are two independent parameters, the lengths a and A; since there are no other length
scales we expect the solution to depend upon a single, dimensionless parameter, which
may be taken to be the ratio A/a. If B ≠ A, there are two independent dimensionless
parameters, A/a and B/a for instance, and this makes understanding the behaviour
of the solutions more difficult. However, even the seemingly simple case A = B has
surprises in store and so provides an indication of the sort of difficulties that may
be encountered with variational problems: such difficulties are typical of nonlinear
boundary value problems. Because the following analysis involves several strands, you
will probably understand it more easily by re-writing it in your own words.
The ends have the same radius so it is convenient to introduce a symmetry by re-
defining a and putting the cylinder ends at x = ±a. This change, which is merely a
shift along the x-axis, does not affect the differential equation 5.13 (because its right-
hand side is independent of x); but the boundary conditions are slightly different. If we
denote the required solution by f (x), then, from equation 5.13 we see that it satisfies
the differential equation and boundary conditions,
df/dx = ±√(f² − c²)/c,   f(−a) = f(a) = A > 0.   (5.14)
The identity cosh2 z − sinh2 z = 1 suggests changing the dependent variable from f
to φ, where f = c cosh φ. This gives the simpler equation cdφ/dx = ±1 with solution
cφ = β ± x for some real constant β. Hence the general solution² is

f(x) = c cosh((β ± x)/c).
The boundary conditions give

A/c = cosh((β + a)/c) = cosh((β − a)/c),   that is   sinh(β/c) sinh(a/c) = 0.
Since a ≠ 0, the only way of satisfying this equation is to set β = 0, which gives
f(x) = c cosh(x/c)   with c determined by   A = c cosh(a/c).   (5.15)
2 Another solution is f (x) = c in the special case that c = A; however, this solution is not a solution

of the original Euler-Lagrange equation, see the discussion in section 4.4, in particular exercise 4.8.

Notice that f (0) = c, so c is the height of the curve at the origin, where f (x) is
stationary; also, because β = 0 the solution is even. The required solutions are obtained
by finding the real values of c satisfying this equation. Unfortunately, the equation
A = c cosh(a/c) cannot be inverted to express c in terms of known functions of A.
Numerical solutions may be found, but first it is necessary to determine those values of
a and A for which real solutions exist.
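As a quick sanity check (added here; not in the original notes), the catenary of equation 5.15 satisfies the first integral 5.12 identically, since y′ = sinh(x/c) and cosh²z − sinh²z = 1. A few sample evaluations in Python:

```python
import math

# The catenary y = c*cosh(x/c) of equation (5.15): check the first integral
# y/sqrt(1 + y'^2) = c of equation (5.12) at a few sample points.
c = 0.7                                 # an arbitrary sample value
for x in (-1.0, -0.3, 0.0, 0.5, 1.2):
    y = c * math.cosh(x / c)
    yp = math.sinh(x / c)               # dy/dx
    print(x, y / math.sqrt(1 + yp**2))  # second value equals c for every x
```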
A convenient way of writing this equation is to introduce a new dimensionless vari-
able η = a/c so we may write the equation for c in the form
A/a = g(η)   where   g(η) = (1/η) cosh η.   (5.16)
This equation shows directly that η depends only upon the dimensionless ratio A/a. In
terms of η and A the solution 5.15 becomes
f(x) = (a/η) cosh(xη/a) = A cosh(xη/a)/cosh η.   (5.17)
The stationary solutions are found by solving the equation A/a = g(η) for η. The
graph of g(η), depicted in figure 5.9, shows that g(η) has a single minimum and that for
A/a > min(g) there are two real solutions, η1 and η2 , with η1 < η2 , giving the shapes
f1 (x) and f2 (x) respectively.

Figure 5.9 Graph of g(η) = η⁻¹ cosh η showing the solutions of the
equation g(η) = A/a.

This graph also suggests that g(η) → ∞ as η → 0 and ∞; this behaviour can be verified
with the simple analysis performed in exercise 5.12, which shows that
g(η) ∼ 1/η for η ≪ 1   and   g(η) ∼ e^η/(2η) for η ≫ 1.
The minimum of g(η) is at the real root of η tanh η = 1, see exercise 5.13; this may be
found numerically, and is at ηm ' 1.200, and here g(ηm ) = 1.509. Hence if A < 1.509a
there are no real solutions of equation 5.16, meaning that there are no functions with
continuous derivatives making the area stationary. For A > 1.509a there are two
real solutions giving two stationary values of the functional 5.11; we denote these two
solutions by η1 and η2 with η1 < η2 . Because there is no upper bound on the area
neither solution can be a global maximum. Recall that in exercise 5.10 it was shown
that a simple polygon approximation to the stationary path did not exist if A < a√2
and that there were two solutions if A > a√2.
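The values quoted above are easy to reproduce numerically. The following Python sketch (an illustration added to these notes, using only the standard library) finds ηm from η tanh η = 1 by bisection and then the two roots η1 < η2 of g(η) = A/a for the sample value A/a = 2:

```python
import math

def g(eta):
    # g(eta) = cosh(eta)/eta, equation (5.16)
    return math.cosh(eta) / eta

def bisect(f, lo, hi):
    # simple bisection, assuming f(lo) and f(hi) have opposite signs
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# minimum of g: the root of eta*tanh(eta) = 1
eta_m = bisect(lambda e: e * math.tanh(e) - 1, 0.5, 2.0)
print(round(eta_m, 3), round(g(eta_m), 3))   # approximately 1.2 and 1.509

# for A/a = 2 > g(eta_m) there are two roots, one on each side of eta_m
ratio = 2.0
eta1 = bisect(lambda e: g(e) - ratio, 1e-6, eta_m)
eta2 = bisect(lambda e: g(e) - ratio, eta_m, 10.0)
print(round(eta1, 3), round(eta2, 3))
```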

The following graph shows values of the dimensionless area S/a2 for these two sta-
tionary solutions as functions of A/a when A/a ≥ g(ηm ) ' 1.509. The area associated
with the smaller root, η1 , is denoted by S1 , with S2 denoting the area associated with
η2 . These graphs show that S2 > S1 for A > ag(ηm ) ' 1.51a.

Figure 5.10 Graphs showing how the dimensionless areas S1/a² and S2/a²
vary with A/a.

It is difficult to find simple approximations for the area S[f] except when A ≫ a, in
which case the results obtained in exercises 5.12 and 5.13 may be used, as shown in the
following analysis. We consider the smaller and larger roots separately.
If A ≫ a the smaller root, η1, is seen from figure 5.9 to be small. The approximation
developed in exercise 5.12 gives η1 ≈ a/A so that equation 5.17 becomes
f1 (x) ' A cosh(x/A) ' A,
since |x| ≤ a  A and cosh(x/A) ' 1. Because f1 (x) is approximately constant the
original functional, equation 5.11, is easily evaluated to give
S1 = S[f1] = 4πaA   or   S1/a² = 4πA/a.
The latter expression is the equation of the approximately straight line seen in fig-
ure 5.10. The area S1 is that of the right circular cylinder formed by joining the ends
with parallel lines.
For the larger root, η2, since cosh η ≈ e^η/2 for large η, equation 5.16 for η becomes,
see exercise 5.12,

A/a = e^η/(2η),   (5.18)
and

f2(x) ≈ A exp(−(η2/a)(a − x)) + A exp(−(η2/a)(a + x)),   η2 ≫ 1.
For positive x the second term is negligible (because η2 ≫ 1) provided xη2 ≫ a. For
negative x the first term is negligible, for the same reason. Hence an approximation for
f2(x) is

f2(x) ≈ A exp(−(η2/a)(a − |x|))   provided |x|η2 ≫ a.   (5.19)
The behaviour of this function as η → ∞ is discussed after equation 5.20. In
exercise 5.12 it is shown that the area is given by

S2 = S[f2] ≈ 2πA²   or   S2/a² = 2π(A/a)²,

which is the same as the area of the cylinder ends. The latter expression increases
quadratically with A/a, as seen in figure 5.10.
These approximations show directly that if A ≫ a then S2 > S1, confirming the
conclusions drawn from figure 5.10. They also show that when A ≫ a the smallest area
is given when the surface of revolution approximates that of a right circular cylinder.
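These approximations can be compared with the exact stationary areas. Assuming the formula S/a² = 2π(η + sinh η cosh η)/η² stated in exercise 5.11 below, the following Python sketch evaluates both stationary areas at A/a = 10 and compares them with the estimates 4πA/a and 2π(A/a)²; both agree to better than one percent:

```python
import math

def g(eta):
    # g(eta) = cosh(eta)/eta, equation (5.16)
    return math.cosh(eta) / eta

def area(eta):
    # exact scaled area S/a^2 on a stationary path (formula of exercise 5.11)
    return 2 * math.pi * (eta + math.sinh(eta) * math.cosh(eta)) / eta**2

def bisect(f, lo, hi):
    # simple bisection, assuming f(lo) and f(hi) have opposite signs
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ratio = 10.0                                        # A/a, well above 1.509
eta1 = bisect(lambda e: g(e) - ratio, 1e-6, 1.2)    # 1.2 ~ position of min(g)
eta2 = bisect(lambda e: g(e) - ratio, 1.2, 20.0)

print(area(eta1), 4 * math.pi * ratio)      # S1/a^2 vs estimate 4*pi*A/a
print(area(eta2), 2 * math.pi * ratio**2)   # S2/a^2 vs estimate 2*pi*(A/a)^2
```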
In the following three figures we show examples of these solutions for A = 2a,
A = 10a and A = 100a. In the first example, on the left, the ratio A/a = 2 is only
a little larger than min(g(η)) ' 1.509, but the two solutions differ substantially, with
f1 (x) already close to the constant value of A for all x. In the two other figures the
ratio A/a is larger and now f1 (x) is indistinguishable from the constant A, while f2 (x)
is relatively small for most values of x.

Figure 5.11 Graphs showing the stationary solutions f(x)/A = cosh(xη/a)/cosh η as
functions of x/a, for A/a = 2, 10 and 100 (with a = 1).

These figures and the preceding analysis show that when the ends are relatively close,
that is A/a large, f1 (x) ' A, for all x, and that as A/a → ∞, f2 (x) tends to the
function

f2(x) → fG(x) = { 0 for |x| < a;  A for |x| = a }.   (5.20)
This result may be derived from the approximate solution given in equation 5.19.
Consider positive values of x, with xη2 ≫ a. If x = a(1 − δ), where δ is a small
positive number, then

f2(x) ≈ Ae^(−δη2).
But from equation 5.18, ln(A/a) = η − ln(2η), and if η ≫ 1 then ln(2η) ≪ η, so η ≈ ln(A/a)
and the above approximation for f2 (x) becomes

f2(x)/A = (a/A)^δ,   x = a(1 − δ).
Hence, provided δ > 0, that is x ≠ a, f2/A → 0 as A/a → ∞.
The surface defined by the limiting function fG (x) comprises two discs of radius A, a
distance 2a apart, so has area SG = 2πA2 , independent of a. Since this limiting solution
has discontinuous derivatives at x = ±a it is not an admissible function. Nevertheless
it is important because if A < ag(ηm ) ' 1.509a it can be shown that this surface gives
the global minimum of the area and, as will be seen in the next subsection, has physical
significance. This solution to the problem was first found by B C W Goldschmidt in
1831 and is now known as the Goldschmidt curve or Goldschmidt solution.

5.3.4 Summary
We have considered the special case where the ends of the cylinder are at x = ±a and
each end has the same radius A; in this case the curve y = f (x) is symmetric about
x = 0 and we have obtained the following results.
1. If the radius of the ends is small by comparison to the distance between them,
A < ag(ηm ) ' 1.509a, there are no curves described by differentiable functions
making the traced out area stationary. In this case it can be shown that the
smallest area is given by the Goldschmidt solution, fG (x), defined in equation 5.20,
and that this is the global minimum.
2. If A > 1.51a there are two smooth stationary curves. One of these approaches
the Goldschmidt solution as A/a → ∞ and the other approaches the constant
function f (x) → A in this limit, and this gives the smaller area. This solution is
a local minimum of the functional, as will be shown in chapter 8.
The nature of the stationary solutions is not easy to determine. In the following graph
we show the areas S1 /a2 and S2 /a2 , as in figure 5.10 and also, with the dashed lines,
the areas given by the Goldschmidt solution, SG /a2 = 2π(A/a)2 , curve G, and the area
of the right circular cylinder, Sc /a2 = 4πA/a, curve c.

Figure 5.12 Graphs showing how the dimensionless area S/a² varies with A/a.
The curves k, k = 1, 2, denote the areas Sk/a² as in figure 5.10; G denotes the
scaled area of the Goldschmidt curve, SG/a² = 2π(A/a)², and c the scaled area of
the cylinder, 4πA/a.

If A > ag(ηm ) ' 1.509a it will be shown in chapter 8 that S1 is a local minimum of
the functional. The graphs shown in figure 5.12 suggest that for large enough A/a,
S1 < SG , but for smaller values of A/a, SG < S1 . The value of η at which SG = S1 is
given by the solution of 1 + e−2η = 2η, see exercise 5.14. The numerical solution of this
equation gives η = 0.639 at which A = 1.8945a. Hence if A < 1.89a the Goldschmidt
curve yields a smaller area, even though S1 is a local minimum. For A > 1.89a, S1
gives the smallest area.
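The crossover value is easily reproduced numerically (a check added to these notes): solving 1 + e^(−2η) = 2η by bisection and evaluating A/a = g(η) = cosh η/η at the root gives η ≈ 0.6392 and A/a ≈ 1.89:

```python
import math

def bisect(f, lo, hi):
    # simple bisection, assuming f(lo) and f(hi) have opposite signs
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# crossover between the Goldschmidt area SG and S1: 1 + exp(-2*eta) = 2*eta
eta = bisect(lambda e: 1 + math.exp(-2 * e) - 2 * e, 0.1, 2.0)
ratio = math.cosh(eta) / eta     # corresponding A/a, from equation (5.16)
print(round(eta, 6), round(ratio, 4))
```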
This relatively simple example of a variational problem provides some idea of the
possible complications that can arise with nonlinear boundary value problems.

Exercise 5.11
(a) If f(x) = c cosh(x/c) show that

S[f]/a² = (2π/η²)(η + sinh η cosh η),   η = a/c.

(b) Show that S[f ] considered as function of η is stationary at the root of η tanh η = 1.

Exercise 5.12
(a) Use the expansion cosh η = 1 + η²/2 + O(η⁴) to show that, for small η,
g(η) = 1/η + η/2 + O(η³), where g(η) is defined in equation 5.16. Hence show that
if A ≫ a then η ≈ a/A, and hence that c ≈ A and f(x) ≈ A. Using the result
obtained in the previous exercise, or otherwise, show that S1 = 4πAa.
(b) Show that if η2 is large the equation defining it is given approximately by

A/a ≈ e^η/(2η)

and, using the result obtained in the previous exercise, that

S2/a² ≈ 2π(e^η/(2η))² + 2π/η ≈ 2π(e^η/(2η))²,   (η = η2).

Exercise 5.13
(a) Show that the position of the minimum of the function g(η) = η −1 cosh η,
η > 0, is at the real root, ηm , of η tanh η = 1.
By sketching the graphs of y = 1/η and y = tanh η, for η > 0, show that the
equation η tanh η = 1 has only one real root.
(b) If a/c = ηm and A/a = g(ηm ) use the result derived in exercise 5.11 to
show that the area of the surface formed is Sm = 2πA²ηm, and that
Sm/a² = 2π ηm⁻¹ cosh² ηm.

Exercise 5.14
Use the result derived in exercise 5.12 to show that SG = S1 when η satisfies
the equation cosh2 η = η + sinh η cosh η. Show that this equation simplifies to
1 + e−2η = 2η and that there is only one positive root, given by η = 0.639232.

Exercise 5.15
(a) Show that the functional

S[y] = ∫_{−1}^{1} dx √(y(1 + y′²)),   y(−1) = y(1) = A > 0,

is stationary on the two paths

y(x) = (4c⁴ + x²)/(4c²)   where   c² = c²± = (1/2)(A ± √(A² − 1)).
In the following these solutions are denoted by y± (x).
(b) Show that on these stationary paths

S[y] = 2c + 1/(6c³),

and deduce that when A > 1, S[y−] > S[y+], and that when A = 1, S[y] = 4√2/3.
Show also that if A ≫ 1

S[y−] ≈ (4/3)A^(3/2)   and   S[y+] ≈ 2√A.
3
5.4. SOAP FILMS 163

(c) Find the value of S[y] for the function

yδ(x) = { 0 for 0 ≤ x < 1 − δ;  A − (A/δ)(1 − x) for 1 − δ ≤ x ≤ 1 },   0 < δ ≪ 1.

Show that as δ → 0, yδ(x) → fG(x), the Goldschmidt curve defined in equation 5.20.
Show also that

lim_{δ→0} S[yδ] = S[fG] = (4/3)A^(3/2).

5.4 Soap Films


An easy way of forming soap films is to dip a loop of wire into soap solution and then to
blow on it. Almost everyone will have noticed that the initially flat soap film bounded
by the wire forms a segment of a sphere when blown. It transpires that there is a very close
connection between these surfaces and problems in the Calculus of Variations. The exact
physics of soap films is complicated, but a fairly simple and accurate approximation
shows that the shapes assumed by soap films are such as to minimise their areas, because
the surface-tension energy is approximately proportional to the area and equilibrium
positions are given by the minimum of this energy. Thus, in some circumstances the
shapes given by the minimum surface of revolution, described above, are those assumed
by soap films.
The study of the formation and shapes of soap films has a very distinguished pedi-
gree: Newton, Young, Laplace, Euler, Gauss, Poisson are some of the eminent scientists
and mathematicians who have studied the subject. Here we cannot do the subject jus-
tice, but the interested reader should obtain a copy of Isenberg’s fascinating book 3 .
The essential property is that a stable soap film is formed in the shape of a surface of
minimal area that is consistent with a wire boundary.
Probably the simplest example is that of a soap film supported by a circular loop of
wire. If we distort it by blowing on it gently to form a portion of a sphere, when we stop
blowing the surface returns to its previous shape, that is a circular disc. Essentially this
is because in each case the surface-tension energy, which is proportional to the area, is
smallest in the assumed configuration.
Imagine a framework comprising two identical circular wires of radius A, held a
distance 2a apart (like wheels on an axle), as in figure 5.13 below. What shape soap
film can such a frame support? These figures illustrate the alternatives suggested by
the analysis of the previous section and agree qualitatively with the solutions one would
intuitively expect.
The left-hand configuration (large separation), with two distinct surfaces, is the
Goldschmidt solution, equation 5.20, and it gives an absolute minimum area if A <
1.89a. The shape on the right is a catenoid of revolution and represents the absolute
minimum if A > 1.89a. It is a local minimum if 1.51a < A < 1.89a and does not exist
if A < 1.51a. When 1.51a < A < 1.89a the catenoid is unstable and we have only
to disturb it slightly, by blowing on it for instance, and it may suddenly jump to the
Goldschmidt solution which has a smaller area, as seen in figure 5.12.
3 The Science of Soap Films and Soap Bubbles, by C Isenberg (Dover 1992).

Figure 5.13 Diagrams showing two configurations assumed by soap films on two rings of radius
A and a distance 2a apart. On the left, A < 1.89a, the soap film simply fills the two circular
wires because they are too far apart: this is the Goldschmidt solution, equation 5.20. On the
right, A > 1.51a, the soap film joins the two rings in the shape defined by equation 5.17 with
η = η1.

The methods discussed previously provide the shape of the right-hand film, but the
matter of determining whether these stationary positions are extrema, local or global,
is of a different order of difficulty. The complexity of this physical problem is further
compounded when one realises that there can be minimum energy solutions of a quite
unexpected form. The following diagram illustrates a possible configuration of this kind.
We do not expect the theory described in the previous section to find such a solution
because the mathematical formulation of the physical problem makes no allowance for
this type of behaviour.

Figure 5.14 Diagram showing a possible soap film. In this example a circular
film, perpendicular to the axis, is formed in the centre and this is joined to
both outer rings by a catenary.

The relationship between soap films and some problems in the Calculus of Variations
can certainly add to our intuitive understanding, but this example should provide a
salutary warning against dependence on intuition.
Examples of the complex shapes that soap films can form, but which are difficult
to describe mathematically, are produced by dipping a wire frame into a soap solution.
Photographs of the varied shapes obtained by cubes and tetrahedrons are provided in
Isenberg’s book.
Here we describe a conceptually simple problem which is difficult to deal with math-
ematically, but which helps to understand the difficulties that may be encountered with
certain variational problems. Further, this example has potential practical applications.
Consider the soap film formed between two clear, parallel planes joined by a number
of pins, of negligible diameter, perpendicular to the planes. When dipped into a soap

solution the resulting film will join the pins in such a manner as to minimise the length
of film, because the surface tension energy is proportional to the area, which is propor-
tional to the length of film. In figure 5.15 we show three cases, viewed from above, with
two and three pins.
In panel A there are two pins: the natural shape for the soap films is the straight line
joining them. In panels B and C there are three pins and two different configurations
are shown which, it transpires, are the only two allowed; but which of the pair is actually
assumed depends upon the relative positions of the pins.

Figure 5.15 Diagram showing possible configurations of soap films for two and
three pins (panels A, B and C).

The reason for this follows from elementary geometry and the application of one of
Plateau’s (1801 – 1883)4 three geometric rules governing the shapes of soap films, which
he inferred from his experiments. In the present context the relevant rule is that three
intersecting planes meet at equal angles of 120◦ : this is a consequence of the surface
tension forces in each plane being equal. Plateau’s other two rules are given by Isenberg
(1992, pages 83 – 4).
We can see how this works, and some of the consequences for certain problems in
the Calculus of Variations, by fixing two points, a and b, and allowing the position of
the third point to vary. The crucial mathematical result needed is Proposition 20 of
Euclid5 , described next.
Euclid: proposition 20
The angle subtended by a chord AB at the centre of the circle, at O, is twice the
angle subtended at any point C on the circumference of the circle (the angles 2α
and α in the figure). This is proved using the properties of similar triangles.

With this result in mind draw a circle through the points a and b such that the angle
subtended by ab on the circumference is 120◦, figures 5.16 and 5.17. If L is the distance
between a and b the radius of this circle is R = L/√3. The orientation of this circle is
chosen so the third point is on the same side of the line ab as the 120◦ angle.
Then for any point c outside this circle the shortest set of lines is obtained by joining
c to the centre of the circle, O, and if c0 is the point where this line intersects the circle,
4 Joseph Plateau was a Belgian physicist who made extensive studies of the surface properties of

fluids.
5 See Euclid’s Elements, Book III.

see figure 5.16, the lines cc0 , ac0 and c0 b are the shortest set of lines joining the three
points a, b and c.
Figure 5.16 Diagram of the shortest length for a point c outside the circle; the
point O is the centre of the circle. Figure 5.17 Diagram of the shortest length
for a point c inside the circle.

If the third point c is inside this circle the shortest line joining the points comprises
the two straight line segments ac and cb, as shown in figure 5.17. This result can be
proved, see Isenberg (1992, pages 67 – 73) and also exercise 5.16.
As the point c moves radially from outside to inside the circle the shortest config-
uration changes its nature: this type of behaviour is generally difficult to predict and
may cause problems in the conventional theory of the Calculus of Variations.
If more pins join the parallel planes the soap film will form configurations making
the total length a local minimum; there are usually several different minimum configu-
rations, and which is found depends upon a variety of factors, such as the orientation of
the planes when extracted from the soap solution. The problem of minimising the total
length of a path joining n points in a plane was first investigated by the Swiss mathe-
matician Steiner (1796 – 1863) and such problems are now known as Steiner problems.
The mathematical analysis of such problems is difficult. One physical manifestation of
this type of situation is the laying of pipes between a number of centres, where, all else
being equal, the shortest total length of pipe is desirable.

Exercise 5.16
Consider the three points, O, A and C, in the Cartesian plane with coordinates
O = (0, 0), A = (a, 0) and C = (c, d) and where the angle OAC is less than 120◦ .
Consider a point X, with coordinates (x, y) inside the triangle OAC. Show that
the sum of the lengths OX, AX and CX is stationary and is a minimum when
the angles between the three lines are all equal to 120◦ .
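Exercise 5.16 can be explored numerically. A standard way to locate the minimising point is the Weiszfeld iteration (a fixed-point scheme not discussed in these notes); the sketch below applies it to a hypothetical triangle with all vertex angles less than 120◦ and confirms that the three lines from the minimising point meet at equal angles of 120◦:

```python
import math

# Weiszfeld iteration: a fixed-point scheme for the point minimising the
# total distance to three given points (the Fermat, or Steiner, point).
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]   # sample triangle, angles < 120 deg
x, y = 0.5, 0.3                              # starting guess inside the triangle
for _ in range(500):
    wsum = wx = wy = 0.0
    for px, py in pts:
        d = math.hypot(x - px, y - py)
        wsum += 1.0 / d
        wx += px / d
        wy += py / d
    x, y = wx / wsum, wy / wsum

# check that the three lines from the minimising point meet at 120 degrees
angles = []
for i in range(3):
    ax, ay = pts[i][0] - x, pts[i][1] - y
    bx, by = pts[(i + 1) % 3][0] - x, pts[(i + 1) % 3][1] - y
    c = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    angles.append(math.degrees(math.acos(c)))
print([round(t, 4) for t in angles])   # each angle is close to 120
```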

Exercise 5.17
Consider the case where four pins are situated at the corners of a square with side
of length L.
(a) One possible configuration of the soap films is for them to lie along the two
diagonals, forming a cross. Show that the length of the films is 2√2 L ≈ 2.83L.
(b) Another configuration is the ‘H’-shape. Show that the length of film is 3L.
(c) Another possible configuration has two interior junctions at which the angle
between the three intersecting lines is 120◦. Show that the length of film is
(1 + √3)L ≈ 2.73L.
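The lengths in exercise 5.17 can be verified in a few lines of Python (an added check, not part of the original notes). For part (c) the 120◦ rule fixes the two junctions on the square's mid-line at distance L/(2√3) from the nearer sides, an assumption consistent with Plateau's rule:

```python
import math

L = 1.0
cross = 2 * math.sqrt(2) * L      # (a) the two diagonals
h_shape = 3 * L                   # (b) the 'H'-shape

# (c) Steiner configuration: two junctions on the mid-line, each joined to two
# corners; the 120-degree rule fixes the junction position x1 = L/(2*sqrt(3)).
x1 = 0.5 * L / math.sqrt(3)       # distance of a junction from the nearer side
slant = math.hypot(x1, 0.5 * L)   # junction-to-corner edge
steiner = 4 * slant + (L - 2 * x1)

print(round(cross, 3), round(h_shape, 3), round(steiner, 3))
print(abs(steiner - (1 + math.sqrt(3)) * L) < 1e-12)   # equals (1 + sqrt(3))L
```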

Exercise 5.18
Consider the configuration of four pins forming a rectangle with sides of length L
and aL; the middle segment of the film lies along the longer side if a > 1 (top
panel) and along the shorter side if a < 1 (bottom panel).
(a) For the case shown in the top panel, a > 1, show that the total line length is
d1 = L(a + √3), and that for the case in the bottom panel, a < 1, it is
d2 = L(1 + a√3).
(b) Show that the minimum of these two lengths is d1 if a > 1 and d2 if a < 1.

5.5 Miscellaneous exercises


Exercise 5.19
Show that the Euler-Lagrange equation for the minimal surface of revolution on
the interval 0 ≤ x ≤ a with the boundary conditions y(0) = 0, y(a) = A > 0, has
no solution.
Note that in this case the only solution is the Goldschmidt curve, equation 5.20,
page 160.

Exercise 5.20
Show that the functional giving the distance between two points on a sphere of
radius r, labelled by the spherical polar coordinates (θa , φa ) and (θb , φb ) can be
expressed in either of the forms
S = r ∫_{θa}^{θb} dθ √(1 + φ′(θ)² sin²θ)   or   S = r ∫_{φa}^{φb} dφ √(θ′(φ)² + sin²θ),

giving rise to the two equivalent Euler-Lagrange equations, respectively,


φ′ sin²θ = c√(1 + φ′(θ)² sin²θ),   φ(θa) = φa, φ(θb) = φb,

where c is a constant, and

θ′′ sin θ − 2θ′² cos θ − sin²θ cos θ = 0,   θ(φa) = θa, θ(φb) = θb.
Both these equations can be solved, but this task is made easier with a sensible
choice of orientation. The two obvious choices are:
(a) put the initial point at the north pole, so θa = 0 and φa is undefined, and
(b) put both points on the equator, so θa = θb = π/2, and we may also choose
φa = 0.
Using one of these choices show that the stationary paths are great circles.

Exercise 5.21
Consider the minimal surface problem with end points Pa = (0, A) and Pb = (b, B),
where b, A and B are given and A ≤ B.
(a) Show that the general solution of the appropriate Euler-Lagrange equation is

y = c cosh((α − x)/c),

where α and c are real constants with c > 0. Show that if c = bη the boundary
conditions give the following equation for η:

B̄ = f(η)   where   f(x) = Ā cosh(1/x) − √(Ā² − x²) sinh(1/x)

and Ā = A/b, B̄ = B/b, with 0 ≤ η ≤ Ā.
(b) Show that for x ≪ Ā and x ≈ Ā the function f(x) behaves, respectively, as

f(x) ≈ (x²/(4Ā)) e^(1/x)   and   f(x) ≈ Ā cosh(1/Ā) − √(2Ā(Ā − x)) sinh(1/Ā).
Deduce that f (x) has at least one minimum in the interval 0 < x < A and that
the equation B = f (η) has at least two roots for sufficiently large values of B and
none for small B.
5.5. MISCELLANEOUS EXERCISES 169

(c) If Ā ≪ 1 show that the minimum value of f(x) occurs near x = Ā and that
min(f) ≈ (Ā/2) exp(1/Ā). Deduce that if Ā ≪ 1 there are two solutions of the
Euler-Lagrange equation if B̄ > (Ā/2) exp(1/Ā), approximately; otherwise there are
no solutions.

Exercise 5.22
(a) For the brachistochrone problem suppose that the initial and final points of
the curve are (x, y) = (0, A) and (b, 0), respectively, as in the text, but that the
initial speed, v0 , is not zero.
Show that the parametric equations for the stationary path are

x = d + (1/2)c²(2φ − sin 2φ),   z = c² sin²φ,   y = A + v0²/(2g) − z,

where φ0 ≤ φ ≤ φb, for some constants c, d, φ0 and φb. Show that these four
constants are related by the equations

sin²φ0 = k² sin²φb,   k² = v0²/(v0² + 2gA) < 1,

b = (v0²/(4gk² sin²φb)) ((2φb − sin 2φb) − (2φ0 − sin 2φ0)),

c² sin²φb = A + v0²/(2g).

(b) If v0² ≪ Ag, show that k is small and find an approximate solution for these
equations. Note, this last part is technically demanding.

Exercise 5.23
In this exercise you will show that the cycloid is a local minimum for the brachis-
tochrone problem using the functional found in exercise 5.5. Consider the varied
path x(z) + εh(z) and show that (ignoring the irrelevant factor 1/√(2g))

T[x + εh] − T[x] = (ε²/2) ∫_0^A dz h′(z)²/(√z (1 + x′²)^(3/2)) + O(ε³)
                 = ε²c ∫_0^{φA} dφ h′(z)² cos⁴φ,

where x(z) is the stationary path, given parametrically by z = c² sin²φ,
x = (1/2)c²(2φ − sin 2φ), and where A = c² sin²φA. Deduce that T[x + εh] > T[x]
for |ε| > 0 and all h(z), and hence that the stationary path is actually a local
minimum.

Exercise 5.24
The Oxy-plane is vertical with the Oy-axis vertically upwards. A straight line
is drawn from the origin to the point P with coordinates (x, f (x)), for some
differentiable function f (x). Show that the time taken for a particle to slide
smoothly from P to the origin is
T(x) = 2√((x² + f(x)²)/(2g f(x))).

By forming a differential equation for f(x), and solving it, show that T(x) is
independent of x if f satisfies the equation x² + (f − α)² = α², for some constant α.
Describe the shape of the curve defined by this equation.

Exercise 5.25
A cylindrical shell of negligible thickness is formed by rotating the curve y(x),
a ≤ x ≤ b, about the x-axis. If the material is uniform with density ρ the moment
of inertia about the x-axis is given by the functional
I[y] = πρ ∫_a^b dx y³√(1 + y′²),   y(a) = A, y(b) = B,
a

where A and B are the radii of the ends and are given.
(a) In the case A = B and with the end points at x = ±a show that I[y] is
stationary on the curve y = c cosh φ(x), where φ(x) is given implicitly by

x/c = ∫_0^φ dv / √(1 + cosh²v + cosh⁴v),

and the constant c is given by A = c cosh φa, where φa = φ(a) is given by the
solution of the equation

a/A = f(φa)   where   f(z) = (1/cosh z) ∫_0^z dv / √(1 + cosh²v + cosh⁴v).

(b) Show that for small and large z, respectively,

f(z) ≈ z/√3 + O(z³)   and   f(z) ≈ 2βe^(−z),   β = ∫_0^∞ dv / √(1 + cosh²v + cosh⁴v).

Hence show that for a/A ≪ 1 there are two solutions. Show also that there
is a critical value of a/A above which there are no appropriate solutions of the
Euler-Lagrange equation.

Problems on cycloids
Exercise 5.26
The cycloid OP D of figure 5.1 (page 146) is rotated about the x-axis to form a
solid of revolution. Show that the surface area, S, and volume, V , of this solid are
S = 2π ∫_0^{2π} dθ y (ds/dθ) = 4πa² ∫_0^{2π} dθ (1 − cos θ) sin(θ/2) = (64/3)πa²,

V = π ∫_0^{2π} dθ y² (dx/dθ) = πa³ ∫_0^{2π} dθ (1 − cos θ)³ = 5π²a³.

Exercise 5.27
The half cycloid with parametric equations x = a(φ − sin φ), y = a(1 − cos φ) with
0 ≤ φ ≤ θ ≤ π is rotated about the y-axis to form a container.

(a) Show that the surface area, S(θ), and volume, V(θ), are given by

S(θ) = 4πa² ∫_0^θ dφ (φ − sin φ) sin(φ/2),

V(θ) = πa³ ∫_0^θ dφ (φ − sin φ)² sin φ.

(b) Show that for small x these integrals are approximated by

S(x) = (2π/5) 6^(2/3) a^(1/3) x^(5/3) + O(x^(7/3))   and
V(x) = (π/8) 6^(2/3) a^(1/3) x^(8/3) + O(x^(10/3)).
(c) Find the general expressions for S(θ) and V (θ) and their values at θ = π.
Exercise 5.28
This exercise shows that the arc QST in figure 5.3 (page 148) is a cycloid, a
result discovered by Huygens and used in his attempt to construct a pendulum
with period independent of its amplitude for use in a clock.
Consider the situation shown in figure 5.18, where the arcs ABO and OCD are
cycloids defined parametrically by the equations
x = a(φ − sin φ), y = a(1 − cos φ), −2π ≤ φ ≤ 2π,
where B and C are at the points φ = ±π, respectively.
Figure 5.18
The curve OQR has length l, is wrapped round the cycloid along OQ, is a straight
line between Q and R and is tangential to the cycloid at Q.
(a) If the point Q has the coordinates

x_Q = a(\phi - \sin\phi) \quad\text{and}\quad y_Q = a(1 - \cos\phi)

show that the angle θ between QR and the x-axis is given by θ = (π − φ)/2.
(b) Show that the coordinates of the point R are

x_R = x_Q + (l - s(\phi))\sin(\phi/2) \quad\text{and}\quad y_R = y_Q + (l - s(\phi))\cos(\phi/2),

where s(φ) is the arc length OQ.
(c) If the length of OQR is the same as the length of OQC show that

x_R = a(\phi + \sin\phi) \quad\text{and}\quad y_R = a(3 + \cos\phi).
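Part (c) can be checked numerically. The sketch below is an editorial addition, not part of the original notes; it uses the cycloid arc length s(φ) = 4a(1 − cos(φ/2)) and takes l = s(π) = 4a for the length of the arc OQC.

```python
import numpy as np

a = 1.0
phi = np.linspace(0.1, np.pi - 0.1, 50)
xQ = a*(phi - np.sin(phi))
yQ = a*(1 - np.cos(phi))
s = 4*a*(1 - np.cos(phi/2))   # arc length OQ of the cycloid
l = 4*a                       # length of the arc OQC, i.e. s(pi)
xR = xQ + (l - s)*np.sin(phi/2)
yR = yQ + (l - s)*np.cos(phi/2)
# R traces the cycloid stated in part (c)
assert np.allclose(xR, a*(phi + np.sin(phi)))
assert np.allclose(yR, a*(3 + np.cos(phi)))
```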
Chapter 6

Further theoretical developments

6.1 Introduction
In this chapter we continue the development of the general theory, by first considering
the effects of changing variables and then by introducing functionals with several de-
pendent variables. The chapter ends with a discussion of whether any second-order
differential equation can be expressed as an Euler-Lagrange equation and hence whether
its solutions are stationary paths of a functional.
The motivation for changing variables is simply that most problems can be simpli-
fied by a judicious choice of variables, both dependent and independent. If a set of
differential equations can be derived from a functional it transpires that changing vari-
ables in the functional is easier than the equivalent change to the differential equations,
because the order of differentiation is always smaller.
The introduction of two or more dependent variables is needed when stationary paths
are described parametrically — an idea introduced in chapter 9. Another very important
use, however, is in the reformulation of Newton’s laws as a variational principle, an
important topic we have no room for in this course.

6.2 Invariance of the Euler-Lagrange equation


In this section we consider the effect of changing both the dependent and independent
variables and show that the form of the Euler-Lagrange equation remains unchanged,
an important property first noted by Euler in 1744. This technique is useful because
one of the principal methods of solving differential equations is to change variables with
the aim of converting it to a standard, recognisable form. For instance the unfamiliar
equation
z\,\frac{d^2y}{dz^2} + (1 - a)\frac{dy}{dz} + a^2 z^{2a-1} y = 0    (6.1)


becomes, on setting z = x^{1/a} (x ≥ 0), the familiar equation

\frac{d^2y}{dx^2} + y = 0.
It is rarely easy to find suitable new variables, but if the equation can be derived from
a variational principle the task is usually made easier because the algebra is simpler:
you will see why in exercise 6.1 where the above example is treated with a = 2.
We start with functionals having only one dependent variable, but the full power of
this technique becomes apparent mainly in the advanced study of dynamical systems
which cannot be dealt with here.

6.2.1 Changing the independent variable


The easiest way of understanding why the form of the Euler-Lagrange equation is invari-
ant under a coordinate change is to examine the effect of changing only the independent
variable x. Thus for the functional
S[y] = \int_a^b dx\, F(x, y(x), y'(x))    (6.2)

we change to a new independent variable u, where x = g(u) for a known differentiable


function g(u), assumed to be monotonic so the inverse exists. With this change of
variable y(x) becomes a function of u and it is convenient to define Y (u) = y(g(u)).
Then the chain rule gives

\frac{dy}{dx} = \frac{dy}{du}\frac{du}{dx} = \frac{Y'(u)}{dx/du} = \frac{Y'(u)}{g'(u)},

and the functional becomes


S[Y] = \int_c^d du\, g'(u)\, F\!\left(g(u), Y(u), \frac{Y'(u)}{g'(u)}\right),    (6.3)

with the integration limits, c and d, defined implicitly by the equations a = g(c) and
b = g(d).
The integrand of the original functional depends upon x, y(x) and y'(x). The
integrand of the transformed functional depends upon u, Y(u) and Y'(u), so if we
define

\mathcal{F}(u, Y(u), Y'(u)) = g'(u)\, F\!\left(g(u), Y(u), \frac{Y'(u)}{g'(u)}\right),    (6.4)
the functional can be written as
S[Y] = \int_c^d du\, \mathcal{F}(u, Y(u), Y'(u)).    (6.5)

The Euler-Lagrange equation in the new variable, u, is therefore

\frac{d}{du}\left(\frac{\partial \mathcal{F}}{\partial Y'}\right) - \frac{\partial \mathcal{F}}{\partial Y} = 0    (6.6)

whereas the original Euler-Lagrange equation is

\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0.    (6.7)
These two equations have the same form, in the sense that formula 6.6 is obtained
from 6.7 by replacing the explicit occurrences of x, y, y' and F by u, Y, Y' and \mathcal{F}
respectively. The new second-order differential equation for Y, obtained from 6.6, is,
however, normally quite different from the equation for y derived from 6.7, because F
and \mathcal{F} have different functional forms.
A simple example is the functional
S[y] = \int_1^2 dx\, \frac{y'^2}{x^2}, \quad y(1) = 1, \quad y(2) = 2,
which is similar to the example dealt with in exercise 4.22(c) (page 139). The general
solution of the Euler-Lagrange equation is y(x) = β + αx^3 and the boundary conditions
give β = 6/7 and α = 1/7.
Now make the transformation x = ua , for some constant a: the chain rule gives
\frac{dy}{dx} = \frac{dy}{du}\frac{du}{dx} = \frac{Y'(u)}{au^{a-1}} \quad\text{where}\quad Y(u) = y(u^a)
and the functional becomes
S[Y] = \int_1^{2^{1/a}} du\, a u^{a-1}\,\frac{1}{u^{2a}}\left(\frac{Y'(u)}{au^{a-1}}\right)^2 = \frac{1}{a}\int_1^{2^{1/a}} du\, \frac{Y'(u)^2}{u^{3a-1}}.

Choosing 3a = 1 simplifies this functional to

S[Y] = 3\int_1^8 du\, Y'^2, \quad Y(1) = 1, \quad Y(8) = 2.

The Euler-Lagrange equation for this functional is Y''(u) = 0, having the general
solution Y = C + Du. The boundary conditions give C + 8D = 2 and C + D = 1 and
hence

Y(u) = \frac{1}{7}(6 + u) \quad\text{giving}\quad y(x) = Y(u(x)) = \frac{1}{7}\left(6 + x^3\right).
In this example little was gained, because the Euler-Lagrange equation is equally easily
solved in either representation. This is not always the case as the next exercise shows.
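Both representations of this worked example are easily verified with a computer algebra system. The following SymPy sketch is an editorial addition, not part of the original notes; it checks that y(x) = (6 + x³)/7 satisfies the original Euler-Lagrange equation, and that it corresponds to Y(u) = (6 + u)/7 under u = x³.

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)
y = (6 + x**3)/7
# Euler-Lagrange equation for F = y'^2/x^2 is d/dx(2 y'/x^2) = 0
assert sp.simplify(sp.diff(2*sp.diff(y, x)/x**2, x)) == 0
# transformed problem: Y(u) = (6 + u)/7 satisfies Y''(u) = 0 and maps back under u = x^3
Y = (6 + u)/7
assert sp.diff(Y, u, 2) == 0
assert sp.simplify(Y.subs(u, x**3) - y) == 0
```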
Exercise 6.1
The functional S[y] = \int_0^X dx\,\left(y'^2 - \omega^2 y^2\right), where \omega is a constant, gives rise to
the Euler-Lagrange equation y'' + \omega^2 y = 0.
(a) Show that changing the independent variable to z, where x = z^2, gives the
functional

S[y] = \frac{1}{2}\int_0^Z dz\left(\frac{y'(z)^2}{z} - 4\omega^2 z y^2\right), \quad Z = \sqrt{X},

with the associated Euler-Lagrange equation

z\frac{d^2y}{dz^2} - \frac{dy}{dz} + 4\omega^2 z^3 y = 0.

Show that this is the same as equation 6.1 when a = 2 and \omega = 1.

(b) Show that

\frac{d^2y}{dx^2} = \frac{1}{4z^3}\left(z\frac{d^2y}{dz^2} - \frac{dy}{dz}\right)
and hence derive the above Euler-Lagrange equation directly.
Note that the first method requires only that we compute dy/dx and avoids the
need to calculate the more difficult second derivative, d^2y/dx^2, required by the
second method. This is why it is normally easier to transform the functional rather
than the differential equation.
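As a spot check on exercise 6.1 (an illustrative SymPy sketch, editorial and not part of the notes): the known solution y(x) = sin(ωx) of y'' + ω²y = 0 becomes y = sin(ωz²) under x = z², and this function does satisfy the transformed Euler-Lagrange equation.

```python
import sympy as sp

z, w = sp.symbols('z omega', positive=True)
y = sp.sin(w*z**2)    # y(x) = sin(omega x) with x = z^2
# transformed Euler-Lagrange equation: z y'' - y' + 4 omega^2 z^3 y = 0
expr = z*sp.diff(y, z, 2) - sp.diff(y, z) + 4*w**2*z**3*y
assert sp.expand(expr) == 0
```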

Exercise 6.2
A simpler type of transformation involves a change of the dependent variable.
Consider the functional

S[y] = \int_a^b dx\, y'^2.

(a) Show that the associated Euler-Lagrange equation is y''(x) = 0.
(b) Define a new variable z related to y by the differentiable monotonic function
y = G(z) and show that the functional becomes

S[z] = \int_a^b dx\, G'(z)^2 z'^2.

Show also that the Euler-Lagrange equation for this functional is

G'(z)z'' + G''(z)z'^2 = 0

and that this is identical to the original Euler-Lagrange equation.

Exercise 6.3
Show that if y = G(z), where G(z) is a differentiable function, the functional

S[y] = \int_a^b dx\, F(x, y, y') \quad\text{transforms to}\quad S[z] = \int_a^b dx\, F(x, G(z), G'(z)z')

with associated Euler-Lagrange equation

\frac{d}{dx}\left(G'(z)\frac{\partial F}{\partial y'}\right) - G'(z)\frac{\partial F}{\partial y} - G''(z)\frac{\partial F}{\partial y'}\frac{dz}{dx} = 0.

6.2.2 Changing both the dependent and independent variables


In the previous section it was seen that when changing the independent variable the
algebra is simpler if the transformation is made to the functional rather than the associ-
ated Euler-Lagrange equation because changing the functional involves only first-order
derivatives, recall exercise 6.1.
For the same reason it is far easier to apply more general transformations to the
functional than to the Euler-Lagrange equation. The most general transformation we
need to consider will be between the Cartesian coordinates (x, y) and two new variables
(u, v): such transformations are defined by two equations

x = f (u, v), y = g(u, v)



which we assume take each point (u, v) to a unique point (x, y) and vice-versa, so the
Jacobian determinant of the transformation, equation 1.26 (page 30), is not zero in the
relevant ranges of u and v.
Before dealing with the general case we illustrate the technique using the particular
example in which (u, v) are polar coordinates, which highlights all relevant aspects of
the analysis.

The transformation between Cartesian and polar coordinates


The Cartesian coordinates (x, y) are defined in terms of the plane polar coordinates
(r, θ) by
x = r cos θ, y = r sin θ, r ≥ 0, −π < θ ≤ π. (6.8)
The inverse transformation is (for r ≠ 0)

r^2 = x^2 + y^2, \quad \tan\theta = \frac{y}{x},    (6.9)
where the signs of x and y need to be taken into account when inverting the tan function.
At the origin r = 0, but θ is undefined. In Cartesian coordinates we normally choose x
to be the independent variable, so points on the curve C joining the points (a, A) and
(b, B), figure 6.1 below, are given by the Cartesian coordinates (x, y(x)).

Figure 6.1 Diagram showing the relation between the Cartesian and polar representations of a curve joining (a, A) and (b, B).

Alternatively, we can define each point on the curve by expressing r as a function of θ,


and then the curve is defined by the polar coordinates (r(θ), θ).
The aim is to transform a functional
S[y] = \int_a^b dx\, F(x, y(x), y'(x))    (6.10)

to an integral over θ in which y(x) and y'(x) are replaced by expressions involving θ,
r(θ) and r'(θ). First we change to the new independent variable θ: then since x = r\cos\theta
and y = r\sin\theta we have

S[r] = \int_{\theta_a}^{\theta_b} d\theta\, F(r\cos\theta, r\sin\theta, y'(x))\,\frac{dx}{d\theta}.    (6.11)
The differential dx/dθ is obtained from the relation x = r cos θ using the chain rule and
remembering that r depends upon θ,
\frac{dx}{d\theta} = \frac{dr}{d\theta}\cos\theta - r\sin\theta \quad\text{and similarly}\quad \frac{dy}{d\theta} = \frac{dr}{d\theta}\sin\theta + r\cos\theta.    (6.12)

It remains only to express y'(x) in terms of r, θ and r', and this is given by the relation

\frac{dy}{dx} = \frac{dy}{d\theta}\frac{d\theta}{dx} = \frac{dy}{d\theta}\Big/\frac{dx}{d\theta} = \frac{r'\sin\theta + r\cos\theta}{r'\cos\theta - r\sin\theta},    (6.13)
where r is assumed to depend upon θ. Hence the functional transforms to

S[r] = \int_{\theta_a}^{\theta_b} d\theta\, \mathcal{F}(\theta, r, r'),    (6.14)

where

\mathcal{F} = (r'\cos\theta - r\sin\theta)\, F\!\left(r\cos\theta, r\sin\theta, \frac{r'\sin\theta + r\cos\theta}{r'\cos\theta - r\sin\theta}\right).    (6.15)
The new functional depends only upon θ, r(θ) and the first derivative r'(θ), so the
Euler-Lagrange equation is

\frac{d}{d\theta}\left(\frac{\partial \mathcal{F}}{\partial r'}\right) - \frac{\partial \mathcal{F}}{\partial r} = 0    (6.16)

which is the transformed version of

\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0.    (6.17)

This analysis shows that the transformation to polar coordinates keeps the form of
Euler’s equation invariant because the transformation of the functional introduces only
first derivatives, via equations 6.12 and 6.13, so does not alter the derivation of the
Euler-Lagrange equation. The same transformation applied to the Euler-Lagrange
equation 6.17 involves finding a suitable expression for the second derivative, y 00 (x),
which is harder.

Exercise 6.4
The integrand of the functional 6.14 contains the denominator r'\cos\theta - r\sin\theta.
Why can we assume that this is not zero?

Exercise 6.5
Show that if r is taken to be the independent variable the functional 6.10 becomes

S[\theta] = \int_{r_a}^{r_b} dr\, \mathcal{F}(r, \theta, \theta') \quad\text{where}\quad \mathcal{F} = (\cos\theta - r\theta'\sin\theta)\, F\!\left(r\cos\theta, r\sin\theta, \frac{\sin\theta + r\theta'\cos\theta}{\cos\theta - r\theta'\sin\theta}\right).

Exercise 6.6
(a) Show that the Euler-Lagrange equation for the functional

S[r] = \int_{\theta_a}^{\theta_b} d\theta\, \sqrt{r^2 + r'(\theta)^2} \quad\text{is}\quad r\frac{d^2r}{d\theta^2} - 2\left(\frac{dr}{d\theta}\right)^2 - r^2 = 0 \quad\text{or}\quad r\frac{d}{d\theta}\left(\frac{1}{r^2}\frac{dr}{d\theta}\right) = 1.

(b) Show that the general solution of this equation is r = 1/(A\cos\theta + B\sin\theta), for
constants A and B.

(c) By showing that

\frac{d\theta}{dx} = \frac{xy' - y}{x^2 + y^2} \quad\text{and}\quad \frac{dr}{d\theta} = \frac{(yy' + x)r}{xy' - y},

where (x, y) are the Cartesian coordinates, show that this functional becomes

S[y] = \int_a^b dx\, \sqrt{1 + y'(x)^2}.

(d) If the boundary conditions in the Cartesian plane are (x, y) = (a, a) and
(b, b + \epsilon), with b > a and \epsilon > 0, show that in each representation the stationary path is

y = \left(1 + \frac{\epsilon}{b-a}\right)x - \frac{\epsilon a}{b-a} \quad\text{and}\quad r = \frac{\epsilon a}{(b-a+\epsilon)\cos\theta - (b-a)\sin\theta}.

Consider the limit \epsilon \to 0 and explain why the polar equation fails in this limit.
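Part (d) can be checked numerically. The sketch below is an editorial addition, not part of the original notes; the sample values a = 1, b = 2, ε = 1/2 are arbitrary choices. It confirms that the polar stationary path and the Cartesian straight line are the same curve.

```python
import numpy as np

a, b, eps = 1.0, 2.0, 0.5
theta = np.linspace(np.arctan2(a, a), np.arctan2(b + eps, b), 40)
r = eps*a/((b - a + eps)*np.cos(theta) - (b - a)*np.sin(theta))
x, y = r*np.cos(theta), r*np.sin(theta)
# every point lies on the straight line y = (1 + eps/(b-a)) x - eps a/(b-a)
assert np.allclose(y, (1 + eps/(b - a))*x - eps*a/(b - a))
```

As ε → 0 the denominator of r(θ) vanishes identically on the limiting line through the origin, which is the failure referred to in the exercise.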

This example illustrates that simplification can sometimes occur when suitable transfor-
mations are made: the art is to find such transformations. The last part of exercise 6.6
also shows that representations that are undefined at isolated points can cause diffi-
culties. In this case a problem is created because polar coordinates are not unique at
the origin, where θ is undefined. The same problems occur when using spherical polar
coordinates at the north and south poles, where the azimuthal angle is undefined.

Exercise 6.7
Show that in polar coordinates the functional

S[y] = \int_a^b dx\, \sqrt{x^2 + y^2}\,\sqrt{1 + y'(x)^2} \quad\text{becomes}\quad S[r] = \int_{\theta_a}^{\theta_b} d\theta\, r\sqrt{r^2 + r'(\theta)^2}

and that the resulting Euler-Lagrange equation is

\frac{d^2r}{d\theta^2} - \frac{3}{r}\left(\frac{dr}{d\theta}\right)^2 - 2r = 0 \quad\text{which can be written as}\quad \frac{d^2}{d\theta^2}\left(\frac{1}{r^2}\right) + \frac{4}{r^2} = 0.

Hence show that equations for the stationary paths are

\frac{1}{r^2} = A\cos 2\theta + B\sin 2\theta \quad\text{or}\quad A(x^2 - y^2) + 2Bxy = 1,

where A and B are constants and 0 ≤ θ < π.
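The stated stationary path can be verified symbolically. This SymPy sketch is an editorial check, not part of the original notes: substituting 1/r² = A cos 2θ + B sin 2θ into the second-order Euler-Lagrange equation gives zero identically.

```python
import sympy as sp

th, A, B = sp.symbols('theta A B', positive=True)
r = 1/sp.sqrt(A*sp.cos(2*th) + B*sp.sin(2*th))
# Euler-Lagrange equation: r'' - (3/r) r'^2 - 2r = 0
ode = sp.diff(r, th, 2) - 3*sp.diff(r, th)**2/r - 2*r
assert sp.simplify(ode) == 0
```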

The general transformation


The analysis for the general transformation

x = f (u, v), y = g(u, v)

is very similar to the special case dealt with above and, as in that case (see exercise 6.6),
it is necessary that the transformation is invertible, so that the Jacobian determinant,
equation 1.26 (page 30), is not zero,

\frac{\partial(f, g)}{\partial(u, v)} = \begin{vmatrix} \dfrac{\partial f}{\partial u} & \dfrac{\partial f}{\partial v} \\[1ex] \dfrac{\partial g}{\partial u} & \dfrac{\partial g}{\partial v} \end{vmatrix} \neq 0.


If the admissible curves are denoted by y(x) and v(u) in the two representations, with
a ≤ x ≤ b and c ≤ u ≤ d, then the functional

S[y] = \int_a^b dx\, F(x, y, y') \quad\text{transforms to}\quad S[v] = \int_c^d du\, \mathcal{F}(u, v, v'),    (6.18)

where

\mathcal{F} = (f_u + f_v v')\, F\!\left(f, g, \frac{g_u + g_v v'}{f_u + f_v v'}\right).    (6.19)

This result follows because the chain rule gives

\frac{dx}{du} = f_u + f_v\frac{dv}{du} \quad\text{and}\quad \frac{dy}{du} = g_u + g_v\frac{dv}{du}.

In the (u, v)-coordinate system the stationary path is given by the Euler-Lagrange
equation

\frac{d}{du}\left(\frac{\partial \mathcal{F}}{\partial v'}\right) - \frac{\partial \mathcal{F}}{\partial v} = 0.

Exercise 6.8
Consider the elementary functional

S[y] = \int_a^b dx\, F(y'), \quad y(a) = A, \quad y(b) = B.

If the roles of the dependent and independent variables are interchanged, by noting
that y'(x) = 1/x'(y), show that the functional becomes

S[x] = \int_A^B dy\, G(x') \quad\text{where}\quad G(u) = uF(1/u).

Exercise 6.9
Consider the functional

S[y] = \int dx\left(\frac{1}{2}A(x)y'^2 + B(x, y)\right),

where A(x) is a non-zero function of x and B(x, y) a function of x and y. Note
that the boundary conditions play no role in this question, so are omitted.
(a) If a new independent variable, u, is defined by the relation x = f(u), where
f(u) is a differentiable, monotonic increasing function, show that with an appropriate
choice of f the functional can be written in the form

S[y] = \int du\left(\frac{1}{2}y'(u)^2 + AB\right).

(b) Use this transformation to show that the equation

x\frac{d^2y}{dx^2} - \frac{dy}{dx} - 4x^3 y = 8x^3 \quad\text{can be converted to}\quad \frac{d^2y}{du^2} - 4y = 8,

with a suitable choice of the variable u.
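As a spot check on part (b) (an illustrative SymPy sketch, editorial and not part of the notes): with u = x²/2 the transformed equation y'' − 4y = 8 has, among others, the solution y = −2 + e^{2u}, that is y = e^{x²} − 2, and this function does satisfy the original equation.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(x**2) - 2    # = -2 + e^{2u} with u = x^2/2, a solution of y'' - 4y = 8
lhs = x*sp.diff(y, x, 2) - sp.diff(y, x) - 4*x**3*y
assert sp.expand(lhs - 8*x**3) == 0
```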

Exercise 6.10
Consider the functional

S[y] = \int_1^2 dx\, \frac{y'^2}{x^2}, \quad y(1) = A, \quad y(2) = B,

where A and B are both positive.
(a) Using the fact that y'(x) = 1/x'(y) show that if y is used as the independent
variable the functional becomes

S[x] = \int_A^B dy\, \frac{1}{x^2 x'(y)}, \quad x(A) = 1, \quad x(B) = 2.

(b) Show that the Euler-Lagrange equation for the functional S[x] is

\frac{d}{dy}\left(\frac{1}{x^2 x'^2}\right) - \frac{2}{x^3 x'} = 0 \quad\text{which can be written as}\quad \frac{d^2x}{dy^2} + \frac{2}{x}\left(\frac{dx}{dy}\right)^2 = 0.

(c) By writing this last equation in the form

\frac{1}{x^2}\frac{d}{dy}\left(x^2\frac{dx}{dy}\right) = 0, \quad x(A) = 1, \quad x(B) = 2,

and integrating twice show that the stationary path is x^3 = (7y + B - 8A)/(B - A).
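A SymPy check of part (c) (an illustrative sketch, editorial and not part of the notes) confirms that x²(dx/dy) is constant along the stated path and that the boundary conditions hold; the sample values A = 1, B = 5 are arbitrary.

```python
import sympy as sp

y, A, B = sp.symbols('y A B', positive=True)
x = ((7*y + B - 8*A)/(B - A))**sp.Rational(1, 3)
# first integral of the Euler-Lagrange equation: x^2 dx/dy is constant
assert sp.simplify(sp.diff(x**2*sp.diff(x, y), y)) == 0
# boundary conditions, checked at the sample values A = 1, B = 5
xs = x.subs({A: 1, B: 5})
assert xs.subs(y, 1) == 1 and xs.subs(y, 5) == 2
```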

6.3 Functionals with many dependent variables


6.3.1 Introduction
In chapter 4 we considered functionals of the type
S[y] = \int_a^b dx\, F(x, y, y'), \quad y(a) = A, \quad y(b) = B,    (6.20)

which involve one independent variable, x, and a single dependent variable, y(x), and
its first derivative. There are many useful and important extensions to this type of
functional and in this chapter we discuss one of these — the first in the following list —
which is important in the study of dynamical systems and when representing stationary
paths in parametric form, an idea introduced in chapter 9. Before proceeding we list
other important generalisations in order to provide you with some idea of the types of
problems that can be tackled: some are treated in later chapters.
(i) The integrand of the functional 6.20 depends upon the independent variable x and
a single dependent variable y(x), which is determined by the requirement that S[y]
be stationary. A simple generalisation is to integrands that depend upon several
dependent variables y_k(x), k = 1, 2, \ldots, n, and their first derivatives. This type
of functional is studied later in this section.
(ii) The integrand of 6.20 depends upon y(x) and its first derivative. Another simple
generalisation involves functionals depending upon second or higher derivatives.
Some examples of this type are treated in exercises 4.32, 4.33 (page 143) and 7.12.
The elastic theory of stiff beams and membranes requires functionals containing
the second derivative which represents the bending energy and some examples are
described in chapter 10.

(iii) Broken extremals: a broken extremal is a continuous solution of the Euler-


Lagrange equation with a piecewise continuous first derivative. A simple example
of such a solution is the Goldschmidt curve defined by equation 5.20, (page 160).
That such solutions are important is clear, partly because they occur in the rela-
tively simple case of the surface of minimum revolution and also from observations
of soap films that often comprise spherical segments such that across common
boundaries the normal to the surface changes direction discontinuously. We con-
sider broken extremals in chapter 10.

(iv) In all examples so far considered the end points of the curve have been fixed.
However, there are variational problems where the ends of the path are free to
move on given curves: an example of this type of problem is described at the end
of section 3.5.1, equation 3.22. The general theory is considered in chapter 10.

(v) The integral defining the functional may be over a surface, S, rather than along
a line,

J[y] = \iint_S dx_1\,dx_2\, F\!\left(x_1, x_2, y, \frac{\partial y}{\partial x_1}, \frac{\partial y}{\partial x_2}\right),
where S is a region in the (x1 , x2 )-plane, so the functional depends upon two inde-
pendent variables, (x1 , x2 ), rather than just one. In this case the Euler-Lagrange
equation is a partial differential equation. Many of the standard equations of
mathematical physics can be derived from such functionals. There is, of course, a
natural extension to integrals over higher-dimensional spaces; such problems are
not considered in this course.

6.3.2 Functionals with two dependent variables


First we find the necessary conditions for a functional depending on two functions to
be stationary. We are ultimately interested in functionals depending upon any finite
number of variables, so we shall often use a notation for which this further generalisation
becomes almost trivial.
If the two dependent variables are (y_1(x), y_2(x)) and the single independent variable
is x, the functional is

S[y_1, y_2] = \int_a^b dx\, F(x, y_1, y_2, y_1', y_2')    (6.21)

where each function satisfies given boundary conditions,

y_k(a) = A_k, \quad y_k(b) = B_k, \quad k = 1, 2.    (6.22)

We require functions (y_1(x), y_2(x)) that make this functional stationary and proceed in
the same manner as before. Let y_1(x) and y_2(x) be two admissible functions — that
is, functions having continuous first derivatives and satisfying the boundary conditions
— and use the Gâteaux differential of S[y_1, y_2] to calculate its rate of change. This is

\Delta S[y_1, y_2, h_1, h_2] = \frac{d}{d\epsilon}S[y_1 + \epsilon h_1, y_2 + \epsilon h_2]\bigg|_{\epsilon=0},    (6.23)

where yk (x) + hk (x), k = 1, 2, are also admissible functions, which means that hk (a) =
hk (b) = 0, k = 1, 2. As in equation 4.8 (page 129) we have
b
d
Z
∆S = dx F (x, y1 + h1 , y2 + h2 , y10 + h01 , y20 + h02 )

a d =0

and the integrand is simplified using the chain rule,



dF ∂F ∂F ∂F ∂F
= h1 + 0 h01 + h2 + 0 h02 .
d =0 ∂y1 ∂y1 ∂y2 ∂y2

Hence the Gâteaux differential is


Z b  
∂F ∂F ∂F ∂F
∆S = dx h1 + 0 h01 + h2 + 0 h02 . (6.24)
a ∂y1 ∂y1 ∂y2 ∂y2

For a stationary path we need, by definition (chapter 4, page 125), \Delta S = 0 for all
h_1(x) and h_2(x). An allowed subset of variations is obtained by setting h_2(x) = 0;
then the above equation becomes the same as equation 4.9 (page 129), with y and h
replaced by y_1 and h_1 respectively. Hence we may use the same analysis to obtain the
second-order differential equation

\frac{d}{dx}\left(\frac{\partial F}{\partial y_1'}\right) - \frac{\partial F}{\partial y_1} = 0, \quad y_1(a) = A_1, \quad y_1(b) = B_1.    (6.25)

This equation looks the same as equation 4.11 (page 130), but remember that here F also
depends upon the unknown function y_2(x).
Similarly, by setting h_1(x) = 0, we obtain another second-order equation

\frac{d}{dx}\left(\frac{\partial F}{\partial y_2'}\right) - \frac{\partial F}{\partial y_2} = 0, \quad y_2(a) = A_2, \quad y_2(b) = B_2.    (6.26)

Equations 6.25 and 6.26 are the Euler-Lagrange equations for the functional 6.21. These
two equations will normally involve both y_1(x) and y_2(x), and so are called coupled
differential equations; normally this makes them far harder to solve than the Euler-Lagrange
equations of chapter 4, which contain only one dependent variable.
An example will make this clear: consider the quadratic functional

S[y_1, y_2] = \int_0^{\pi/2} dx\left(y_1'^2 + y_2'^2 + 2y_1 y_2\right)    (6.27)

so that equation 6.25 becomes

\frac{d^2y_1}{dx^2} - y_2 = 0,    (6.28)

which involves both y_1(x) and y_2(x), and equation 6.26 becomes

\frac{d^2y_2}{dx^2} - y_1 = 0,    (6.29)

which also involves both y_1(x) and y_2(x).
which also involves both y1 (x) and y2 (x).

Equations 6.28 and 6.29 now have to be solved. Coupled differential equations are
normally very difficult to solve and their solutions can behave in bizarre ways, including
chaotically; but these equations are linear which makes the task of solving them much
easier, and the solutions are generally better behaved. One method is to use the first
equation to write y_2 = y_1'', so the second equation becomes the fourth-order linear
equation

\frac{d^4y_1}{dx^4} - y_1 = 0.
By substituting a function of the form y1 = α exp(λx), where α and λ are constants,
into this equation we obtain an equation for λ that gives λ4 = 1, showing that there
are four solutions obtained by setting λ = ±1, ±i and hence that the general solution
is a linear combination of these functions,

y_1(x) = A\cos x + B\sin x + C\cosh x + D\sinh x

where A, B, C and D are constants. Since y_2 = y_1'' we also have

y_2(x) = -A\cos x - B\sin x + C\cosh x + D\sinh x.

The four arbitrary constants may now be determined from the four boundary conditions,
as demonstrated in the following exercise.
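The general solution can be confirmed symbolically. The following SymPy sketch is an editorial addition, not part of the original notes; it checks that the pair (y₁, y₂) with y₂ = y₁'' satisfies both coupled equations 6.28 and 6.29 for arbitrary constants.

```python
import sympy as sp

x, A, B, C, D = sp.symbols('x A B C D')
y1 = A*sp.cos(x) + B*sp.sin(x) + C*sp.cosh(x) + D*sp.sinh(x)
y2 = sp.diff(y1, x, 2)                             # equation 6.28 gives y2 = y1''
assert sp.expand(sp.diff(y2, x, 2) - y1) == 0      # equation 6.29: y2'' = y1
# y2 has the stated form, with the signs of A and B reversed
assert sp.expand(y2 - (-A*sp.cos(x) - B*sp.sin(x) + C*sp.cosh(x) + D*sp.sinh(x))) == 0
```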

Exercise 6.11
Find the values of the constants A, B, C and D if the functional 6.27 has the
boundary conditions y_1(0) = 0, y_2(0) = 0, y_1(\pi/2) = 1, y_2(\pi/2) = -1.

Exercise 6.12
Show that the Euler-Lagrange equations for the functional

S[y_1, y_2] = \int_0^1 dx\left(y_1'^2 + y_2'^2 + y_1' y_2'\right),

with the boundary conditions y_1(0) = 0, y_2(0) = 1, y_1(1) = 1, y_2(1) = 2, integrate
to 2y_1' + y_2' = a_1 and 2y_2' + y_1' = a_2, where a_1 and a_2 are constants. Deduce
that the stationary path is given by the equations

y_1(x) = x \quad\text{and}\quad y_2(x) = x + 1.

Exercise 6.13
By defining a new variable z_1 = y_1 + y_2/2, show that the functional defined in the
previous exercise becomes

S[z_1, y_2] = \int_0^1 dx\left(z_1'^2 + \frac{3}{4}y_2'^2\right), \quad z_1(0) = \frac{1}{2}, \quad y_2(0) = 1, \quad z_1(1) = 2, \quad y_2(1) = 2,

and that the corresponding Euler-Lagrange equations are

\frac{d^2z_1}{dx^2} = 0 \quad\text{and}\quad \frac{d^2y_2}{dx^2} = 0.

Solve these equations to derive the solution obtained in the previous exercise.

Note that by using the variables (z_1, y_2) each of the new Euler-Lagrange equations
depends only upon one of the dependent variables and is therefore far easier to
solve. Such systems of equations are said to be uncoupled, and one of the main
methods of solving coupled Euler-Lagrange equations is to find a transformation
that converts them to uncoupled equations. In real problems finding such transformations
is difficult and often relies upon understanding the symmetries of the
problem; then the methods described in sections 6.2 and 7.3 can be useful.
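The algebra behind this uncoupling is a one-line identity, checked here with SymPy (an illustrative sketch, editorial and not part of the notes):

```python
import sympy as sp

x = sp.symbols('x')
y1 = sp.Function('y1')(x)
y2 = sp.Function('y2')(x)
z1 = y1 + y2/2
# the integrand of exercise 6.12 equals the uncoupled integrand of exercise 6.13
integrand = sp.diff(y1, x)**2 + sp.diff(y2, x)**2 + sp.diff(y1, x)*sp.diff(y2, x)
uncoupled = sp.diff(z1, x)**2 + sp.Rational(3, 4)*sp.diff(y2, x)**2
assert sp.expand(integrand - uncoupled) == 0
```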

6.3.3 Functionals with many dependent variables


The extension of the above analysis to functionals involving n dependent variables,
their first derivatives and a single independent variable is straightforward. It is helpful,
however, to use the notation \mathbf{y}(x) = (y_1(x), y_2(x), \ldots, y_n(x)) to denote the set of n
functions. There is still only one independent variable, so the functional is

S[\mathbf{y}] = \int_a^b dx\, F(x, \mathbf{y}, \mathbf{y}'), \quad \mathbf{y}(a) = \mathbf{A}, \quad \mathbf{y}(b) = \mathbf{B},    (6.30)

where \mathbf{y}' = (y_1', y_2', \ldots, y_n'), \mathbf{A} = (A_1, A_2, \ldots, A_n), \mathbf{B} = (B_1, B_2, \ldots, B_n) and
\mathbf{h} = (h_1, h_2, \ldots, h_n). If \mathbf{y}(x) and \mathbf{y}(x) + \epsilon\mathbf{h}(x) are admissible functions, so that \mathbf{h}(a) =
\mathbf{h}(b) = 0, the Gâteaux differential is given by the relation

\Delta S[\mathbf{y}, \mathbf{h}] = \frac{d}{d\epsilon}S[\mathbf{y} + \epsilon\mathbf{h}]\bigg|_{\epsilon=0} = \int_a^b dx\, \frac{d}{d\epsilon}F(x, \mathbf{y} + \epsilon\mathbf{h}, \mathbf{y}' + \epsilon\mathbf{h}')\bigg|_{\epsilon=0},

and for \mathbf{y} to be a stationary path this must be zero for all allowed \mathbf{h}. Using the chain
rule we have

\frac{d}{d\epsilon}F(x, \mathbf{y} + \epsilon\mathbf{h}, \mathbf{y}' + \epsilon\mathbf{h}')\bigg|_{\epsilon=0} = \sum_{k=1}^n\left(\frac{\partial F}{\partial y_k}h_k + \frac{\partial F}{\partial y_k'}h_k'\right),

and hence

\Delta S[\mathbf{y}, \mathbf{h}] = \sum_{k=1}^n\int_a^b dx\left(\frac{\partial F}{\partial y_k}h_k + \frac{\partial F}{\partial y_k'}h_k'\right).    (6.31)
Now integrate by parts to cast this in the form

\Delta S[\mathbf{y}, \mathbf{h}] = \sum_{k=1}^n\left[h_k\frac{\partial F}{\partial y_k'}\right]_a^b - \sum_{k=1}^n\int_a^b dx\left(\frac{d}{dx}\left(\frac{\partial F}{\partial y_k'}\right) - \frac{\partial F}{\partial y_k}\right)h_k.    (6.32)

But, since \mathbf{h}(a) = \mathbf{h}(b) = 0, the boundary term vanishes. Further, since \Delta S[\mathbf{y}, \mathbf{h}] = 0
for all allowed \mathbf{h}, by the same reasoning used when n = 2, we obtain the set of n coupled
equations

\frac{d}{dx}\left(\frac{\partial F}{\partial y_k'}\right) - \frac{\partial F}{\partial y_k} = 0, \quad y_k(a) = A_k, \quad y_k(b) = B_k, \quad k = 1, 2, \ldots, n.    (6.33)

This set of n coupled equations is usually nonlinear and difficult to solve. The one
circumstance when the solution is relatively simple is when the integrand of the functional
S[\mathbf{y}] is a quadratic form in both \mathbf{y} and \mathbf{y}'; then the Euler-Lagrange equations
are coupled linear equations. This is an important example because it describes small
oscillations about an equilibrium position of an n-dimensional dynamical system.

Exercise 6.14
(a) If A and B are real, symmetric, positive definite, n × n matrices¹ consider the
functional

S[\mathbf{y}] = \int_a^b dx \sum_{i=1}^n\sum_{j=1}^n\left(y_i' A_{ij} y_j' - y_i B_{ij} y_j\right),

with the integrand quadratic in \mathbf{y} and \mathbf{y}'. Show that the n Euler-Lagrange equations
are the set of coupled, linear equations

\sum_{j=1}^n\left(A_{kj}\frac{d^2y_j}{dx^2} + B_{kj}y_j\right) = 0, \quad 1 \le k \le n.

(b) Show that if we interpret \mathbf{y} as an n-dimensional column vector and its transpose
\mathbf{y}^\top as a row vector, the functional can be written in the equivalent matrix
form

S[\mathbf{y}] = \int_a^b dx\left(\mathbf{y}'^\top A\,\mathbf{y}' - \mathbf{y}^\top B\,\mathbf{y}\right),

and that the Euler-Lagrange equations can be written in the matrix form

A\frac{d^2\mathbf{y}}{dx^2} + B\mathbf{y} = 0.

Show that this can also be written in the form

\frac{d^2\mathbf{y}}{dx^2} + A^{-1}B\,\mathbf{y} = 0.    (6.34)

(c) It can be shown that the matrix A^{-1}B has non-negative eigenvalues \omega_k^2 and
n orthogonal eigenvectors \mathbf{z}_k, k = 1, 2, \ldots, n, possibly complex, which each satisfy
A^{-1}B\mathbf{z}_k = \omega_k^2\mathbf{z}_k. By expressing \mathbf{y} as a linear combination of the \mathbf{z}_k,

\mathbf{y} = \sum_{k=1}^n a_k(x)\mathbf{z}_k,

show that the coefficients a_k(x) satisfy the uncoupled equations

\frac{d^2a_j}{dx^2} + \omega_j^2 a_j = 0, \quad j = 1, 2, \ldots, n.
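The uncoupling in part (c) can be illustrated numerically. The sketch below is an editorial addition, not part of the original notes; the 3 × 3 matrices are arbitrary examples of symmetric positive definite A and B.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3*np.eye(3)      # symmetric positive definite
N = rng.standard_normal((3, 3))
B = N @ N.T + 3*np.eye(3)      # symmetric positive definite
w2, Z = np.linalg.eig(np.linalg.inv(A) @ B)
# the eigenvalues omega_k^2 of A^{-1}B are real and positive
assert np.allclose(np.asarray(w2).imag, 0) and np.all(np.asarray(w2).real > 0)
# each eigenvector gives an uncoupled oscillator equation a_k'' + omega_k^2 a_k = 0
for k in range(3):
    assert np.allclose(np.linalg.inv(A) @ B @ Z[:, k], w2[k]*Z[:, k])
```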

6.3.4 Changing dependent variables


In this section we consider the effect of changing the dependent variables. A simple
example of such a transformation was dealt with in exercise 6.13 where it was shown how
a linear transformation uncoupled the Euler-Lagrange equations. In general the aim of
changing variables is to simplify the Euler-Lagrange equations and it is generally easier
to apply the transformation to the functional rather than the Euler-Lagrange equations.
Before explaining the general theory we deal with a specific example, which highlights
all salient points. The functional is

S[y_1, y_2] = \int_a^b dt\left(\frac{1}{2}\left(y_1'^2 + y_2'^2\right) - V(r)\right), \quad r = \sqrt{y_1^2 + y_2^2},    (6.35)

¹ A real symmetric matrix, A, has real elements satisfying A_{ij} = A_{ji}, for all i and j, and it can be
shown that its eigenvalues are real; a positive definite matrix has positive eigenvalues.

where V (r) is any suitable function: this functional occurs frequently because it arises
when describing the planar motion of a particle acted upon by a force depending only
on the distance from a fixed point, for example in a simplified description of the motion
of the Earth round the Sun; in this case the independent variable, t, is the time. The
functional S[\mathbf{y}] is special because its integrand depends only upon the combinations
y_1'^2 + y_2'^2 and y_1^2 + y_2^2, which suggests that changing to polar coordinates may lead to
simplification. These are (r, θ), where y_1 = r\cos\theta and y_2 = r\sin\theta, so that y_1^2 + y_2^2 = r^2
and, on using the chain rule,

\frac{dy_1}{dt} = \frac{dr}{dt}\cos\theta - r\sin\theta\,\frac{d\theta}{dt} \quad\text{and}\quad \frac{dy_2}{dt} = \frac{dr}{dt}\sin\theta + r\cos\theta\,\frac{d\theta}{dt}.

Squaring and adding these equations gives y_1'^2 + y_2'^2 = r'^2 + r^2\theta'^2. Hence the functional
becomes

S[r, \theta] = \int_a^b dt\left(\frac{1}{2}r'^2 + \frac{1}{2}r^2\theta'^2 - V(r)\right).    (6.36)
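The kinetic-energy identity used here, y₁'² + y₂'² = r'² + r²θ'², is easily confirmed with SymPy (an illustrative check, editorial and not part of the notes):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
th = sp.Function('theta')(t)
y1, y2 = r*sp.cos(th), r*sp.sin(th)
lhs = sp.diff(y1, t)**2 + sp.diff(y2, t)**2
rhs = sp.diff(r, t)**2 + r**2*sp.diff(th, t)**2
assert sp.simplify(lhs - rhs) == 0
```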

Exercise 6.15
(a) Show that the Euler-Lagrange equations for the functional 6.35 are

\frac{d^2y_1}{dt^2} + V'(r)\frac{y_1}{r} = 0 \quad\text{and}\quad \frac{d^2y_2}{dt^2} + V'(r)\frac{y_2}{r} = 0.    (6.37)

(b) Show that the Euler-Lagrange equations for the functional 6.36 can be written
in the form

\frac{d^2r}{dt^2} - \frac{L^2}{r^3} + V'(r) = 0 \quad\text{and}\quad \frac{d\theta}{dt} = \frac{L}{r^2},    (6.38)

where L is a constant. Note that the equation for r does not depend upon θ and
that θ(t) is obtained from r(t) by a single integration.
where L is a constant. Note that the equation for r does not depend upon θ and
that θ(t) is obtained from r(t) by a single integration. In older texts on dynamics,
see for instance Whittaker (1904), problems are said to be soluble by quadrature
if their solutions can be reduced to known functions or integrals of such functions.

The general theory is not much more complicated. Suppose that \mathbf{y} = (y_1, y_2, \ldots, y_n)
and \mathbf{z} = (z_1, z_2, \ldots, z_n) are two sets of dependent variables related by the equations

y_k = \psi_k(\mathbf{z}), \quad k = 1, 2, \ldots, n,    (6.39)

where we assume, in order to slightly simplify the analysis, that each of the \psi_k is not
explicitly dependent upon the independent variable, x. The chain rule gives

\frac{dy_k}{dx} = \sum_{i=1}^n \frac{\partial\psi_k}{\partial z_i}\frac{dz_i}{dx},

showing that each of the y_k' depends linearly upon the z_i'. These linear equations can be
inverted to give the z_i' in terms of the y_k', k = 1, 2, \ldots, n, if the n × n matrix with elements
\partial\psi_k/\partial z_i is nonsingular, that is if the Jacobian determinant, equation 1.26 (page 30),
is non-zero. This is also the condition for the transformation between \mathbf{y} and \mathbf{z} to be
invertible.
Under this transformation the functional

S[\mathbf{y}] = \int_a^b dx\, F(x, \mathbf{y}, \mathbf{y}') \quad\text{becomes}\quad S[\mathbf{z}] = \int_a^b dx\, G(x, \mathbf{z}, \mathbf{z}'),    (6.40)

where

G = F\!\left(x, \psi_1(\mathbf{z}), \psi_2(\mathbf{z}), \ldots, \psi_n(\mathbf{z}), \sum_{i=1}^n\frac{\partial\psi_1}{\partial z_i}z_i', \ldots, \sum_{i=1}^n\frac{\partial\psi_n}{\partial z_i}z_i'\right),

that is, G(x, \mathbf{z}, \mathbf{z}') is obtained from F(x, \mathbf{y}, \mathbf{y}') simply by replacing \mathbf{y} and \mathbf{y}'. In practice,
of course, the transformation 6.39 is chosen to ensure that G(x, \mathbf{z}, \mathbf{z}') is simpler than
F(x, \mathbf{y}, \mathbf{y}').

Exercise 6.16
Show that under the transformation y1 = ρ cos φ, y2 = ρ sin φ, y3 = z, the func-
tional
$$ S[y_1, y_2, y_3] = \int_a^b dt\,\Bigl\{\tfrac{1}{2}\bigl(y_1'^2 + y_2'^2 + y_3'^2\bigr) - V(\rho)\Bigr\}, \qquad \rho = \sqrt{y_1^2 + y_2^2}, $$
becomes
$$ S[\rho, \phi, z] = \int_a^b dt\,\Bigl\{\tfrac{1}{2}\bigl(\rho'^2 + \rho^2\phi'^2 + z'^2\bigr) - V(\rho)\Bigr\}. $$
Find the Euler-Lagrange equations and show that those for ρ and z are uncoupled.

6.4 The Inverse Problem


The ideas described in this chapter have shown that there are several advantages in
formulating a system of differential equations as a variational principle. This naturally
raises the question as to whether any given system of equations can be formulated in
the form of the Euler-Lagrange equations and hence possesses an associated variational
principle.
In this section it is shown that any second-order equation of the form
$$ \frac{d^2y}{dx^2} = f(x, y, y'), \tag{6.41} $$
where f(x, y, y′) is a sufficiently well behaved function of the three variables, can, in
principle, be expressed as a variational principle. When there are two or more dependent
variables there is no such general result, although there are special classes of equations
for which similar results hold: here, however, we do not discuss these more difficult
cases.
First, consider linear, second-order equations, the most general equation of this type
being,
$$ a_2(x)\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)y = b(x), \tag{6.42} $$
where ak (x), k = 0, 1 and 2, and b(x) depend only upon x and a2 (x) 6= 0 in the relevant
interval of x. This equation may be transformed to the canonical form
$$ \frac{d}{dx}\!\left(p(x)\frac{dy}{dx}\right) + q(x)y = \frac{p(x)b(x)}{a_2(x)}, \tag{6.43} $$
where
$$ p(x) = \exp\!\left(\int dx\,\frac{a_1(x)}{a_2(x)}\right) \quad\text{and}\quad q(x) = \frac{a_0(x)}{a_2(x)}\,p(x). \tag{6.44} $$

Inspection shows that the associated functional is


$$ S[y] = \int dx\left(p\left(\frac{dy}{dx}\right)^{\!2} - qy^2 + \frac{2pb}{a_2}\,y\right). \tag{6.45} $$

This is an important example and is discussed at length in chapter 13.
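The claim of exercise 6.17(b) can be verified symbolically. The sketch below uses SymPy (an assumption, not part of the notes) with undetermined functions p, q, b and a₂, and shows that the Euler-Lagrange equation of the integrand of 6.45 is exactly twice equation 6.43.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
p, q, b, a2 = (sp.Function(s)(x) for s in ('p', 'q', 'b', 'a2'))

# Integrand of the functional 6.45
F = p*y.diff(x)**2 - q*y**2 + 2*p*b/a2*y

# Euler-Lagrange equation: d/dx(dF/dy') - dF/dy = 0
EL = sp.diff(sp.diff(F, y.diff(x)), x) - sp.diff(F, y)

# EL is twice the canonical form 6.43: (p y')' + q y - p b/a2 = 0
canonical = sp.diff(p*y.diff(x), x) + q*y - p*b/a2
assert sp.simplify(EL - 2*canonical) == 0
```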

Exercise 6.17
(a) Show that equations 6.42 and 6.43 are equivalent if p(x) and q(x) are defined
as in equation 6.44.
(b) Show that the Euler-Lagrange equation associated with the functional 6.45 is
equation 6.43.

Now consider the more general equation 6.41. Suppose that the equivalent Euler-Lagrange equation exists and is
$$ \frac{d}{dx}\!\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0, \quad\text{that is}\quad y''F_{y'y'} + y'F_{yy'} + F_{xy'} - F_y = 0, \tag{6.46} $$

where F(x, y, y′) is an unknown function. Using equation 6.41 we can express y″ in terms of x, y and y′ to give
$$ f F_{y'y'} + y'F_{yy'} + F_{xy'} - F_y = 0, \tag{6.47} $$

which is an equation relating the partial derivatives of F and may therefore be regarded as a second-order partial differential equation for F. As it stands this equation is of limited practical value because it can rarely be solved directly. If, however, we define a new function z = F_{y′y′}, we shall see that z satisfies a first-order equation for which the solutions are known to exist. This is seen by differentiating equation 6.47 with respect to y′,
$$ f F_{y'y'y'} + f_{y'}F_{y'y'} + y'F_{y'y'y} + F_{y'y'x} = 0. $$
In terms of z this becomes the first-order equation
$$ f\frac{\partial z}{\partial y'} + y'\frac{\partial z}{\partial y} + \frac{\partial z}{\partial x} + f_{y'}z = 0. \tag{6.48} $$

It can be shown that solutions of this partial differential equation for z(x, y, y′) exist, see for example Courant and Hilbert² (1937b). It follows that the function F(x, y, y′) exists, that equation 6.41 can be written as an Euler-Lagrange equation, and that there is an associated functional.
Finding F(x, y, y′) explicitly, however, is not usually easy or even possible, because this involves first solving the partial differential equation 6.48 and then integrating this solution twice with respect to y′. At either stage it may prove impossible to express the result in a useful form. Some examples illustrate this procedure in simple cases.
Consider the differential equation
$$ y'' = f(x, y), $$
²R. Courant and D. Hilbert, Methods of Mathematical Physics, Volume 2.

where the right-hand side is independent of y′. Then equation 6.48 contains only derivatives of z and one solution is z = c, a constant. Now the equation F_{y′y′} = z can be integrated directly to give
$$ F(x, y, y') = \tfrac{1}{2}cy'^2 + y'A(x, y) + B(x, y), $$
where A and B are some functions of x and y, but not y′. The derivatives of F are
$$ F_y = y'A_y + B_y, \qquad F_{yy'} = A_y, \qquad F_{xy'} = A_x, $$
so that the Euler-Lagrange equation 6.46 becomes
$$ cy'' = B_y - A_x. $$

Comparing this with the original equation gives c = 1 and B_y − A_x = f(x, y): two obvious solutions are
$$ B(x, y) = \int_{c_1}^{y} dv\,f(x, v), \quad A = 0 \qquad\text{and}\qquad A = -\int_{c_2}^{x} du\,f(u, y), \quad B = 0, $$
where c₁ and c₂ are constants, so that the integrands of the required functional are
$$ F_1 = \tfrac{1}{2}y'^2 + \int_{c_1}^{y} dv\,f(x, v) \quad\text{or}\quad F_2 = \tfrac{1}{2}y'^2 - y'\int_{c_2}^{x} du\,f(u, y). \tag{6.49} $$

It may seem strange that this procedure yields two seemingly quite different expressions for F(x, y, y′). But recall that different functionals will give the same Euler-Lagrange equation if the integrands differ by a function that is the derivative with respect to x of a function g(x, y), see exercises 4.27 (page 140) and 6.22 (page 191). Thus we expect that there is a function g(x, y) such that F₁ − F₂ = dg/dx.
In the next exercise it is shown that the Euler-Lagrange equations associated with
F1 and F2 are identical and an explicit expression is found for g(x, y).

Exercise 6.18
(a) Show that F₁ and F₂, defined in equation 6.49, give the same Euler-Lagrange equations.
(b) Show that $F_1 - F_2 = \dfrac{d}{dx}g(x, y)$, and find g(x, y).
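For a concrete illustration of the two integrands 6.49, take the hypothetical right-hand side f(x, y) = xy² with c₁ = c₂ = 0 (both the choice of f and the use of SymPy are assumptions, not part of the notes). The sketch confirms that F₁ and F₂ give the same Euler-Lagrange equation y″ = f, and that their difference is the total derivative of g(x, y) = x²y³/6.

```python
import sympy as sp

x, u, v = sp.symbols('x u v')
y = sp.Function('y')(x)

# Hypothetical right-hand side of y'' = f(x, y)
def f(X, Y):
    return X*Y**2

# The two integrands of equation 6.49, with c1 = c2 = 0
F1 = sp.Rational(1, 2)*y.diff(x)**2 + sp.integrate(f(x, v), (v, 0, y))
F2 = sp.Rational(1, 2)*y.diff(x)**2 - y.diff(x)*sp.integrate(f(u, y), (u, 0, x))

def euler_lagrange(F):
    # d/dx(dF/dy') - dF/dy
    return sp.diff(sp.diff(F, y.diff(x)), x) - sp.diff(F, y)

# Both integrands give the Euler-Lagrange equation y'' - f(x, y) = 0
assert sp.expand(euler_lagrange(F1) - (y.diff(x, 2) - f(x, y))) == 0
assert sp.expand(euler_lagrange(F2) - (y.diff(x, 2) - f(x, y))) == 0

# and their difference is the total derivative of g(x, y) = x**2*y**3/6
g = x**2*y**3/6
assert sp.expand(F1 - F2 - g.diff(x)) == 0
```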

Exercise 6.19
Find a functional for the equation $\dfrac{d^2y}{dx^2} + \alpha\dfrac{dy}{dx} + y = 0$, where α is a constant.
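One route into exercise 6.19 is the canonical-form recipe of equations 6.42–6.45: with a₂ = 1, a₁ = α, a₀ = 1 and b = 0, equation 6.44 gives p(x) = e^{αx} and q = p. The following SymPy sketch (the library is an assumption; the notes use no software) checks that the resulting integrand has the required Euler-Lagrange equation.

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')
y = sp.Function('y')(x)

# Equation 6.44 with a2 = 1, a1 = alpha, a0 = 1, b = 0: p = exp(alpha*x), q = p
p = sp.exp(alpha*x)
F = p*(y.diff(x)**2 - y**2)   # integrand of the functional 6.45

EL = sp.diff(sp.diff(F, y.diff(x)), x) - sp.diff(F, y)

# EL = 2*exp(alpha*x)*(y'' + alpha*y' + y), the required equation up to a factor
assert sp.simplify(EL - 2*p*(y.diff(x, 2) + alpha*y.diff(x) + y)) == 0
```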

6.5 Miscellaneous exercises


Exercise 6.20
Using the functional $S[y] = \int_a^b dx\,\bigl(y'^2 - \omega^2 y^2\bigr)$ and the change of variable z = x^{1/c}, show that the differential equation y″ + ω²y = 0 is transformed into
$$ z\frac{d^2y}{dz^2} + (1 - c)\frac{dy}{dz} + c^2\omega^2 z^{2c-1}y = 0. $$

Exercise 6.21
Show that the Euler-Lagrange equations for the functional $S[y_1, y_2] = \int_a^b dx\,F(y_1', y_2')$, which depends only upon the first derivatives of y₁ and y₂, are
$$ \frac{\partial^2 F}{\partial y_1'^2}\,y_1'' + \frac{\partial^2 F}{\partial y_1'\partial y_2'}\,y_2'' = 0 \quad\text{and}\quad \frac{\partial^2 F}{\partial y_1'\partial y_2'}\,y_1'' + \frac{\partial^2 F}{\partial y_2'^2}\,y_2'' = 0. $$

Deduce that, provided the determinant
$$ d = \begin{vmatrix} \dfrac{\partial^2 F}{\partial y_1'^2} & \dfrac{\partial^2 F}{\partial y_1'\partial y_2'} \\[2ex] \dfrac{\partial^2 F}{\partial y_1'\partial y_2'} & \dfrac{\partial^2 F}{\partial y_2'^2} \end{vmatrix} $$

is non-zero, the stationary paths are the straight lines

y1 (x) = Ax + B, y2 (x) = Cx + D

where A, B, C and D are constants. Describe the solution if d = 0.


What is the equivalent condition if there is only one dependent variable?

Exercise 6.22
If Φ(x, y₁, y₂) is any twice differentiable function show that the functionals
$$ S_1[y_1, y_2] = \int_a^b dx\,F\bigl(x, y_1, y_2, y_1', y_2'\bigr) \quad\text{and}\quad S_2[y_1, y_2] = \int_a^b dx\Bigl[F\bigl(x, y_1, y_2, y_1', y_2'\bigr) + \Psi\bigl(x, y_1, y_2, y_1', y_2'\bigr)\Bigr], $$
where
$$ \Psi = \frac{d\Phi}{dx} = \frac{\partial\Phi}{\partial x} + \frac{\partial\Phi}{\partial y_1}y_1' + \frac{\partial\Phi}{\partial y_2}y_2', $$
lead to the same Euler-Lagrange equations.
Note that this is the direct generalisation of the result derived in exercise 4.27
(page 140).

Exercise 6.23
Consider the two functionals
$$ S_1[y_1, y_2] = \int_a^b dx\left[\tfrac{1}{2}\bigl(y_1'^2 + y_2'^2\bigr) + g_1(x)y_1' + g_2(x)y_2' - V(x, y_1, y_2)\right] $$
and
$$ S_2[y_1, y_2] = \int_a^b dx\left[\tfrac{1}{2}\bigl(y_1'^2 + y_2'^2\bigr) - \overline{V}(x, y_1, y_2)\right], $$
where $\overline{V} = V + g_1'(x)y_1 + g_2'(x)y_2$. Use the result proved in the previous exercise to show that S₁ and S₂ give rise to identical Euler-Lagrange equations.

Exercise 6.24
(a) Show that the Euler-Lagrange equation of the functional
$$ S[y] = \int_0^\infty dx\,e^{-x}\sqrt{y - e^{x}y'} \quad\text{is}\quad \frac{d^2y}{dx^2} - \bigl(3e^{-x} - 1\bigr)\frac{dy}{dx} + 2e^{-2x}y = 0. \tag{6.50} $$

(b) Show that the change of variables u = e^{−x}, with inverse x = ln(1/u), transforms this functional to
$$ S[Y] = \int_0^1 du\,\sqrt{Y(u) + Y'(u)}, \qquad Y(u) = y(-\ln u), $$
and that the Euler-Lagrange equation for this functional is
$$ \frac{d^2Y}{du^2} + 3\frac{dY}{du} + 2Y = 0. \tag{6.51} $$
(c) By making the substitution x = −ln(u), show that equation 6.50 transforms into equation 6.51.
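Parts (b) and (c) can be cross-checked by mapping the general solution of equation 6.51, Y(u) = C₁e^{−u} + C₂e^{−2u}, back through u = e^{−x} and substituting into equation 6.50. A SymPy sketch (the library is an assumption):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x C1 C2')

# General solution of 6.51 is Y(u) = C1*exp(-u) + C2*exp(-2u); set u = exp(-x)
u = sp.exp(-x)
y = c1*sp.exp(-u) + c2*sp.exp(-2*u)

# Substitute into the left-hand side of equation 6.50
lhs = y.diff(x, 2) - (3*sp.exp(-x) - 1)*y.diff(x) + 2*sp.exp(-2*x)*y
assert sp.simplify(lhs) == 0
```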

Exercise 6.25
Show that the stationary paths of the functional $S[y, z] = \int_0^{\pi/2} dx\,\bigl(y'^2 + z'^2 + 2yz\bigr)$, with the boundary conditions y(0) = 0, z(0) = 0, y(π/2) = 3/2 and z(π/2) = 1/2, satisfy the equations
$$ \frac{d^2y}{dx^2} - z = 0, \qquad \frac{d^2z}{dx^2} - y = 0. $$
Show that the solution of these equations is
$$ y(x) = \frac{\sinh x}{\sinh(\pi/2)} + \frac{1}{2}\sin x, \qquad z(x) = \frac{\sinh x}{\sinh(\pi/2)} - \frac{1}{2}\sin x. $$
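The stated solution of exercise 6.25 is easy to verify symbolically; the SymPy sketch below (the library is an assumption) checks both the coupled equations and the four boundary conditions.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.sinh(x)/sp.sinh(sp.pi/2) + sp.sin(x)/2
z = sp.sinh(x)/sp.sinh(sp.pi/2) - sp.sin(x)/2

# The coupled Euler-Lagrange equations: y'' = z and z'' = y
assert sp.simplify(y.diff(x, 2) - z) == 0
assert sp.simplify(z.diff(x, 2) - y) == 0

# Boundary conditions at x = 0 and x = pi/2
assert y.subs(x, 0) == 0 and z.subs(x, 0) == 0
assert sp.simplify(y.subs(x, sp.pi/2) - sp.Rational(3, 2)) == 0
assert sp.simplify(z.subs(x, sp.pi/2) - sp.Rational(1, 2)) == 0
```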

Exercise 6.26
The ordinary Bessel function, denoted by Jn (x), is defined to be proportional to
the solution of the second-order differential equation

$$ x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2 - n^2)y = 0, \qquad n = 0, 1, 2, \cdots, \tag{6.52} $$
that behaves as (x/2)n near the origin.

(a) Show that equation 6.52 is the Euler-Lagrange equation of the functional
$$ F[y] = \int_0^X dx\left(xy'(x)^2 - \left(x - \frac{n^2}{x}\right)y(x)^2\right), \qquad y(X) = Y \neq 0, $$

where the admissible functions have continuous second derivatives.


(b) Define a new independent variable u by the equation x = f(u), where f(u) is monotonic and smooth, and set w(u) = y(f(u)) to cast this functional into the form
$$ F[w] = \int_{u_0}^{u_1} du\left(\frac{f(u)}{f'(u)}\,w'(u)^2 - \left(f(u)f'(u) - n^2\frac{f'(u)}{f(u)}\right)w(u)^2\right), $$
where f(u₀) = 0 and f(u₁) = X.


(c) Hence show that if f(u) = e^u, w(u) satisfies the equation
$$ \frac{d^2w}{du^2} + \bigl(e^{2u} - n^2\bigr)w = 0 \tag{6.53} $$
and deduce that a solution of equation 6.53 is w(u) = J_n(e^u).
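The conclusion of part (c) can be tested numerically: if w(u) = J_n(e^u) then, by the chain rule, w″(u) = e^u J_n′(e^u) + e^{2u} J_n″(e^u), and the residual of equation 6.53 is just Bessel's equation 6.52 evaluated at x = e^u. A sketch using SciPy (an assumption; the notes use no software):

```python
import numpy as np
from scipy.special import jv, jvp  # Bessel function J_n and its derivatives

n = 2
u = np.linspace(-1.0, 2.0, 50)
x = np.exp(u)

# w(u) = J_n(e^u), with w''(u) = e^u*Jn'(e^u) + e^{2u}*Jn''(e^u)
w = jv(n, x)
w2 = x*jvp(n, x, 1) + x**2*jvp(n, x, 2)

# Residual of equation 6.53: w'' + (e^{2u} - n^2)*w, zero up to rounding
residual = w2 + (x**2 - n**2)*w
assert np.max(np.abs(residual)) < 1e-10
```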
Chapter 7

Symmetries and Noether's theorem

7.1 Introduction
In this chapter we show how symmetries can help solve the Euler-Lagrange equations.
The simplest example of the theory presented here was introduced in section 4.4.1 where
it was shown that a first-integral existed if the integrand did not depend explicitly upon
the independent variable. This simplification was used to help solve the brachistochrone
and the minimum surface area problems, sections 5.2 and 5.3. Here we show how the
first-integral may be derived using a more general principle which can be used to derive
other first-integrals.
Students knowing some dynamics will be aware of how important the conservation of
energy, linear and angular momentum can be: the theory described in section 7.3 unifies
all these conservation laws. In addition these ideas may be extended to deal with those
partial differential equations that can be derived from a variational principle, although
this theory is not included in the present course.

7.2 Symmetries
The Euler-Lagrange equations for the brachistochrone, section 5.2, and the minimal surface area, section 5.3, were solved using the fact that the integrand, F(y, y′), does not depend explicitly upon x, that is, ∂F/∂x = 0. In this situation it was shown in
exercise 4.7 (page 131) that
     
$$ y'(x)\left[\frac{d}{dx}\!\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y}\right] = \frac{d}{dx}\!\left(y'\frac{\partial F}{\partial y'} - F\right). $$

This result is important because it shows that if y(x) satisfies the second-order Euler
equation it also satisfies the first-order equation

$$ y'\frac{\partial F}{\partial y'} - F = \text{constant}, $$


which is often simpler: we ignore the possibility that y′(x) = 0 for the reasons discussed after exercise 7.8. This result is proved algebraically in exercise 4.7 (page 131), and
there we relied only on the fact that ∂F/∂x = 0. In the following section we re-derive
this result using the equivalent but more fundamental notion that the integrand F(y, y′)
is invariant under translations in x: this is a fruitful method because it is more readily
generalised to other types of transformations; for instance in three-dimensional problems
the integrand of the functional may be invariant under all rotations, or just rotations
about a given axis. The general theory is described in section 7.3, but first we introduce
the method by applying it to functionals that are invariant under translations.

7.2.1 Invariance under translations


The algebra of the following analysis is fairly complicated and requires careful thought
at each stage, so you may need to read it carefully several times and complete the
intermediate steps.
Consider the functional
$$ S[y] = \int_a^b dx\,F(y, y'), \qquad y(a) = A, \quad y(b) = B, \tag{7.1} $$

where the integrand does not depend explicitly upon x, that is ∂F/∂x = 0. The
stationary function, y(x), describes a curve C in the two-dimensional space with axes
Oxy, so that a point, P , on the curve has coordinates (x, y(x)) as shown in figure 7.1.

[Figure 7.1: Diagram showing the two coordinate systems Oxy and Ōx̄ȳ, connected by a translation along the x-axis by a distance δ.]

Consider now the coordinate system Ōx̄ȳ, where x = x̄ + δ and y = ȳ, with the origin, Ō, of this system at x = δ, y = 0 in the original coordinate system; that is, Ōx̄ȳ is translated from Oxy a distance δ along the x-axis. In this coordinate system the curve C is described by ȳ(x̄), so the coordinates of a point P are (x̄, ȳ(x̄)) and these are related to coordinates in Oxy, (x, y(x)), by
$$ \bar{x} = x - \delta \quad\text{and}\quad \bar{y}(\bar{x}) = y(x), \quad\text{or}\quad \bar{y}(x) = y(x + \delta), $$
the latter equation defining the function ȳ; differentiation, using the chain rule, gives
$$ \frac{d\bar{y}}{d\bar{x}} = \frac{dy}{dx}\,\frac{dx}{d\bar{x}} = \frac{dy}{dx}. $$

The functional 7.1 can be computed in either coordinate system and, for reasons that will soon become apparent, we consider the integral in the Ōx̄ȳ representation over a limited, but arbitrary, range,
$$ G = \int_{\bar{c}}^{\bar{d}} du\,F\bigl(\bar{y}(u), \bar{y}'(u)\bigr), \qquad \bar{y}' = \frac{d\bar{y}}{du}, $$
where c̄ = c − δ, d̄ = d − δ and a < c < d < b. The integrand of G depends on u only through the function ȳ(u): this means that at each value of u the integrand has the same value as the integrand of S[y] at the equivalent point, x = u + δ. Hence
$$ \int_{\bar{c}}^{\bar{d}} du\,F\bigl(\bar{y}(u), \bar{y}'(u)\bigr) = \int_{c}^{d} dx\,F\bigl(y(x), y'(x)\bigr), \qquad x = u + \delta, \tag{7.2} $$
and this is true for all δ.


Now consider small values of δ and expand to O(δ), first writing the integral in the form
$$ G = \int_{c-\delta}^{d-\delta} du\,F\bigl(\bar{y}(u), \bar{y}'(u)\bigr) = \int_{c}^{d} du\,F(\bar{y}, \bar{y}') + \int_{c-\delta}^{c} du\,F(\bar{y}, \bar{y}') - \int_{d-\delta}^{d} du\,F(\bar{y}, \bar{y}'). \tag{7.3} $$
But, for small δ,
$$ \int_{z-\delta}^{z} du\,g(u) = g(z)\delta + O(\delta^2), $$
and to this order
$$ \bar{y}(u) = y(u + \delta) = y(u) + y'(u)\delta + O(\delta^2), \qquad \bar{y}'(u) = y'(u) + y''(u)\delta + O(\delta^2). $$
Thus the expression 7.3 for G becomes, to first order in δ,
$$ G = \int_{c}^{d} du\,F\bigl(y + y'\delta,\, y' + y''\delta\bigr) - \delta\Bigl[F(y, y')\Bigr]_{c}^{d} + O(\delta^2) = \int_{c}^{d} du\left(F(y, y') + \delta\,\frac{\partial F}{\partial y}y' + \delta\,\frac{\partial F}{\partial y'}y''\right) - \delta\Bigl[F(y, y')\Bigr]_{c}^{d} + O(\delta^2). $$

Because of equation 7.2 this gives
$$ 0 = \delta\int_{c}^{d} du\left(\frac{\partial F}{\partial y}y' + \frac{\partial F}{\partial y'}y''\right) - \delta\Bigl[F(y, y')\Bigr]_{c}^{d} + O(\delta^2). \tag{7.4} $$

Now integrate the second integral by parts,
$$ \int_{c}^{d} du\,y''\frac{\partial F}{\partial y'} = \left[y'\frac{\partial F}{\partial y'}\right]_{c}^{d} - \int_{c}^{d} du\,y'\frac{d}{du}\!\left(\frac{\partial F}{\partial y'}\right). $$
Substituting this into 7.4 and dividing by δ gives
$$ 0 = \left[y'\frac{\partial F}{\partial y'} - F\right]_{c}^{d} - \int_{c}^{d} du\,y'\left(\frac{d}{du}\!\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y}\right) + O(\delta). \tag{7.5} $$

But y(u) is a solution of the Euler-Lagrange equation, so the integrand is identically zero, and hence on letting δ → 0,
$$ \left[F - y'\frac{\partial F}{\partial y'}\right]_{x=c} = \left[F - y'\frac{\partial F}{\partial y'}\right]_{x=d}. $$
Finally, recall that c and d are arbitrary and hence, for any x in the interval a < x < b, the function
$$ y'\frac{\partial F}{\partial y'} - F(y, y') = \text{constant}. \tag{7.6} $$

Because the function on the left-hand side is continuous the equality is true in the
interval a ≤ x ≤ b. This relation is always true if the integrand of the functional does
not depend explicitly upon x, that is ∂F/∂x = 0.
Equation 7.6 relates y′(x) and y(x) and by rearranging it we obtain one, or more,
first-order equations for the unknown function y(x). Noether’s theorem, stated below,
shows that solutions of the Euler-Lagrange equation also satisfy equation 7.6. In prac-
tice, because this equation is usually easier to solve than the original Euler-Lagrange
equation, it is implicitly assumed that its solutions are also solutions of the Euler-
Lagrange equation. In general this is true, but the examples treated in exercises 4.8
(page 132) and 7.8 show that care is sometimes needed. By differentiating equation 7.6
with respect to x the Euler-Lagrange equation is regained, as shown in exercise 4.7
(page 131).
The function $y'F_{y'} - F$ is named the first-integral of the Euler-Lagrange equation,
this name being suggestive of it being derived by integrating the original second-order
equation once to give a first-order equation. For the same reason in dynamics, quantities
that are conserved, for instance energy, linear and angular momentum, are also named
first-integrals, integrals of the motion or constants of the motion, and these dynamical
quantities have exactly the same mathematical origin as the first-integral defined in
equation 7.6.
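As a concrete illustration (not an example from the notes), take F(y, y′) = y′² + y², whose Euler-Lagrange equation is y″ = y; along the stationary path y = cosh x the first-integral y′F_{y′} − F = y′² − y² should be constant. A quick NumPy check:

```python
import numpy as np

# F(y, y') = y'**2 + y**2 has Euler-Lagrange equation y'' = y;
# y = cosh(x) is a stationary path.
x = np.linspace(0.0, 2.0, 201)
y, yp = np.cosh(x), np.sinh(x)

# First-integral: y'*F_{y'} - F = 2*y'**2 - (y'**2 + y**2) = y'**2 - y**2
first_integral = yp**2 - y**2

# Constant along the path (equal to -1, since sinh^2 - cosh^2 = -1)
assert np.allclose(first_integral, -1.0)
```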
This proof of equation 7.6 may seem a lot more elaborate than that given in ex-
ercise 4.7 (page 131). However, there are circumstances when the algebra required to
use the former method is too unwieldy to be useful and then the present method is
superior. An example of such a problem is given in exercise 7.12.
In the context of Newtonian dynamics the equivalent of equation 7.6 is the conserva-
tion of energy in those circumstances when the forces are conservative and are indepen-
dent of the time; that is in Newtonian mechanics energy conservation is a consequence
of the invariance of the equations of motion under translations in time. Similarly, in-
variance under translations in space gives rise to conservation of linear momentum and
invariance under rotations in space gives rise to conservation of angular momentum.
As an example of a functional that is not invariant under translations of the inde-
pendent variable, consider
$$ J[y] = \int_a^b dx\,x\,y'(x)^2, \qquad y(a) = A, \quad y(b) = B. $$

It is instructive to go through the above proof to see where and how it breaks down.

In this case equation 7.2 becomes
$$ \int_{\bar{c}}^{\bar{d}} du\,u\,\bar{y}'(u)^2 = \int_{c}^{d} dv\,(v - \delta)\,y'(v)^2 = \int_{c}^{d} dv\,v\,y'(v)^2 - \delta\int_{c}^{d} dv\,y'(v)^2, $$
and we see how the explicit dependence upon x destroys the invariance needed for the existence of the first-integral.

7.3 Noether’s theorem


In this section we treat functionals having several dependent variables. The analysis
is a straightforward generalisation of that presented above but takes time to absorb.
For a first reading, ensure that you understand the fundamental ideas and try to avoid
getting lost in algebraic details. That is, you should try to understand the definition of
an invariant functional and the meaning of Noether's theorem, rather than the proof, and
should be able to do exercises 7.1 – 7.3.
There are two ingredients to Noether’s theorem:
(i) functionals that are invariant under transformations of either or both dependent
and independent variables;
(ii) families of transformations that depend upon one or more real parameters, though
here we deal with situations where there is just one parameter.
We consider each of these in turn in relation to the functional
$$ S[y] = \int_a^b dx\,F(x, y, y'), \qquad y = (y_1, y_2, \cdots, y_n), \tag{7.7} $$

which has stationary paths defined by the solutions of the Euler-Lagrange equations.
We do not include boundary conditions because they play no role in this theory.
The value of the functional depends upon the path taken which, in this section, is
not always restricted to stationary paths. We shall consider the change in the value
of the functional when the path is changed according to a given transformation: in
particular we are interested in those transformations which change the path but not
the value of the functional.
Consider, for instance, the two functionals
$$ S_1[y] = \int_0^1 dx\,\bigl(y_1'^2 + y_2'^2\bigr) \quad\text{and}\quad S_2[y] = \int_0^1 dx\,\bigl(y_1'^2 + y_2'^2\bigr)y_1. \tag{7.8} $$

A path γ can be defined by the pair of functions (f (x), g(x)), 0 ≤ x ≤ 1, and on each γ
the functionals have a value.
Consider the transformation
$$ \begin{aligned} \bar{y}_1 &= y_1\cos\alpha - y_2\sin\alpha \\ \bar{y}_2 &= y_1\sin\alpha + y_2\cos\alpha \end{aligned} \qquad\text{with inverse}\qquad \begin{aligned} y_1 &= \bar{y}_1\cos\alpha + \bar{y}_2\sin\alpha \\ y_2 &= -\bar{y}_1\sin\alpha + \bar{y}_2\cos\alpha \end{aligned} \tag{7.9} $$

which can be interpreted as an anticlockwise rotation in the (y₁, y₂)-plane through an angle α. Hence under this transformation the curve γ is rotated bodily to the curve γ̄, as shown in figure 7.2.
[Figure 7.2: Diagram showing the rotation of the curve γ anticlockwise through the angle α to the curve γ̄.]

The points on γ̄ are parametrised by (f̄(x), ḡ(x)), 0 ≤ x ≤ 1, where
$$ \bar{f} = f\cos\alpha - g\sin\alpha \quad\text{and}\quad \bar{g} = f\sin\alpha + g\cos\alpha. \tag{7.10} $$

Hence on γ the functional S₁ has the value
$$ S_1(\gamma) = \int_0^1 dx\,\bigl[f'(x)^2 + g'(x)^2\bigr] $$
and on γ̄ it has the value
$$ S_1(\bar{\gamma}) = \int_0^1 dx\,\bigl[\bar{f}'(x)^2 + \bar{g}'(x)^2\bigr]. $$
But on using equation 7.10 we obtain f̄′(x)² + ḡ′(x)² = f′(x)² + g′(x)², which gives
$$ S_1(\bar{\gamma}) = \int_0^1 dx\,\bigl[f'(x)^2 + g'(x)^2\bigr] = S_1(\gamma). $$
That is, the functional S₁ has the same value on γ and γ̄ for all α and is therefore invariant with respect to the rotation 7.9.
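The invariance of S₁, and the failure of invariance for S₂, both rest on how the integrands transform under 7.10; the key identity can be confirmed with SymPy (an assumption; the notes use no software):

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')
f, g = sp.Function('f')(x), sp.Function('g')(x)

# Rotated path, equation 7.10
fbar = f*sp.cos(alpha) - g*sp.sin(alpha)
gbar = f*sp.sin(alpha) + g*sp.cos(alpha)

# The integrand of S1 is unchanged by the rotation, for every alpha;
# this is the identity behind S1(rotated curve) = S1(original curve).
# The integrand of S2, by contrast, picks up the factor fbar, which
# depends on alpha, so S2 is not invariant.
assert sp.simplify(fbar.diff(x)**2 + gbar.diff(x)**2
                   - f.diff(x)**2 - g.diff(x)**2) == 0
```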
On the other hand the values of S₂ are
$$ S_2(\gamma) = \int_0^1 dx\,\bigl[f'(x)^2 + g'(x)^2\bigr]f(x) $$
and
$$ S_2(\bar{\gamma}) = \int_0^1 dx\,\bigl[\bar{f}'(x)^2 + \bar{g}'(x)^2\bigr]\bar{f}(x) = \int_0^1 dx\,\bigl[f'(x)^2 + g'(x)^2\bigr]\bigl[f(x)\cos\alpha - g(x)\sin\alpha\bigr] = S_2(\gamma)\cos\alpha - \sin\alpha\int_0^1 dx\,\bigl[f'(x)^2 + g'(x)^2\bigr]g(x). $$

In this case the functional has different values on γ and γ̄, unless α is an integer multiple of 2π. That is, S₂[y] is not invariant with respect to the rotation 7.9.
The transformation 7.9 does not involve changes to the independent variable x,
whereas the transformation considered in section 7.2.1 involved only a change in the
independent variable, via a translation along the x-axis, see figure 7.1. In general it is
necessary to deal with a transformation in both dependent and independent variables,
which can be written as

$$ \bar{x} = \Phi(x, y, y') \quad\text{and}\quad \bar{y}_k = \Psi_k(x, y, y'), \qquad k = 1, 2, \cdots, n. \tag{7.11} $$

We assume that these relations can be inverted to give x and y in terms of x̄ and ȳ. For a curve γ, defined by the equation y = f(x), a ≤ x ≤ b, this transformation moves γ to another curve γ̄ defined by the transformed equation ȳ = f̄(x̄).
Definition 7.1
The functional 7.7 is said to be invariant under the transformation 7.11 if Ḡ = G, where
$$ G = \int_{c}^{d} dx\,F\!\left(x, y, \frac{dy}{dx}\right), \qquad \overline{G} = \int_{\bar{c}}^{\bar{d}} d\bar{x}\,F\!\left(\bar{x}, \bar{y}, \frac{d\bar{y}}{d\bar{x}}\right), $$
and where c̄ = Φ(c, y(c), y′(c)) and d̄ = Φ(d, y(d), y′(d)), for all c and d satisfying a ≤ c < d ≤ b.

The meaning of the equality Ḡ = G is easiest to understand if x̄ = x. Then the functions y(x) and ȳ(x) define two curves, γ and γ̄, in an n-dimensional space, each parametrised by the independent variable x. The functional G is the integral of F(x, y, y′) along γ and Ḡ is the integral of the same function along γ̄.
In the case x̄ ≠ x the parametrisation along γ and γ̄ is changed. An important example of a change to the independent variable, x, is the uniform shift x̄ = x + δ, where δ is independent of x, y and y′, which is the example dealt with in the previous section. The scale transformation, whereby x̄ = (1 + δ)x, is also useful, see exercise 7.8.
A one-parameter family of transformations is the set of transformations

$$ \bar{x} = \Phi(x, y, y'; \delta) \quad\text{and}\quad \bar{y}_k = \Psi_k(x, y, y'; \delta), \qquad k = 1, 2, \cdots, n, \tag{7.12} $$
depending upon the single parameter δ, which reduces to the identity when δ = 0, that is
$$ x = \Phi(x, y, y'; 0) \quad\text{and}\quad y_k = \Psi_k(x, y, y'; 0), \qquad k = 1, 2, \cdots, n, $$
and where Φ and all the Ψk have continuous first derivatives in all variables, including δ.
This last condition ensures that the transformation is invertible in the neighbourhood of
δ = 0, provided the Jacobian determinant is not zero. An example of a one-parameter
family of transformations is defined by equation 7.9, which becomes the identity when
α = 0.

Exercise 7.1
Which of the following is a one-parameter family of transformations?
(a) x̄ = x − yδ, ȳ = y + xδ,
(b) x̄ = x cosh δ − y sinh δ, ȳ = x sinh δ − y cosh δ,
(c) ȳ = y exp(Aδ), where A is a square, non-singular, n × n matrix.
Note that if B is a square matrix the matrix $e^B$ is defined by the sum $e^B = \sum_{k=0}^{\infty} B^k/k!$.

Exercise 7.2
Families of transformations are very common and are often generated by solutions of differential equations, as illustrated by the following example.
(a) Show that the solution of the equation
$$ \frac{dy}{dt} = y(1 - y), \qquad 0 \leq y(0) \leq 1, \quad\text{is}\quad y = \psi(z, t) = \frac{ze^t}{1 + (e^t - 1)z}, \quad\text{where } z = y(0). $$
(b) Show that this defines a one-parameter family of transformations, y = ψ(z, t), with parameter t, so that for each t, ψ(z, t) transforms the initial point z to the value y(t).

Exercise 7.3
Show that the functional $S[y] = \int_a^b dx\,\bigl(y_1'^2 - y_2'^2\bigr)$ is invariant under the transformation
$$ \bar{y}_1 = y_1\cosh\delta + y_2\sinh\delta, \qquad \bar{y}_2 = y_1\sinh\delta + y_2\cosh\delta, \qquad \bar{x} = x + \delta g(x), $$
only if g(x) is a constant.

We have finally arrived at the main part of this section, the statement and proof of
Noether’s theorem. The theorem was published in 1918 by Emmy Noether (1882-
1935), a German mathematician, considered to be one of the most creative abstract
algebraists of modern times. The theorem was derived for certain Variational Principles,
and has important applications to physics especially relativity and quantum mechanics,
besides systematising many of the known results of classical dynamics; in particular it
provides a uniform description of the laws of conservation of energy, linear and angular
momentum, which are, respectively, due to invariance of the equations of motion under
translations in time, space and rotations in space. The theorem can also be applied to
partial differential equations that can be derived from a variational principle.
The theorem deals with arbitrarily small changes in the coordinates, so in equation 7.12 we assume |δ| ≪ 1 and write the transformation in the form
$$ \begin{aligned} \bar{x} &= x + \delta\phi(x, y, y') + O(\delta^2), & \phi &= \left.\frac{\partial\Phi}{\partial\delta}\right|_{\delta=0}, \\ \bar{y}_k &= y_k + \delta\psi_k(x, y, y') + O(\delta^2), & \psi_k &= \left.\frac{\partial\Psi_k}{\partial\delta}\right|_{\delta=0}, \end{aligned} \tag{7.13} $$
where we have used the fact that when δ = 0 the transformation becomes the identity. In all subsequent analysis second-order terms in the parameter, here δ, are ignored.

Exercise 7.4
Show that to first order in α the rotation defined by equation 7.9 becomes
$$ \bar{y}_1 = y_1 + \alpha\psi_1, \quad \bar{y}_2 = y_2 + \alpha\psi_2, \quad\text{where}\quad \psi_1 = -y_2 \text{ and } \psi_2 = y_1. $$



Theorem 7.1
Noether's theorem: If the functional
$$ S[y] = \int_c^d dx\,F(x, y, y') \tag{7.14} $$
is invariant under the family of transformations 7.13, for arbitrary c and d, then
$$ \sum_{k=1}^{n}\frac{\partial F}{\partial y_k'}\,\psi_k + \left(F - \sum_{k=1}^{n} y_k'\frac{\partial F}{\partial y_k'}\right)\phi = \text{constant} \tag{7.15} $$
along each stationary path of S[y].


The function defined on the left-hand side of this equation is often named a first-
integral of the Euler-Lagrange equations.1

In the one-dimensional case, n = 1, when ȳ = y (ψ = 0) and φ = 1, equation 7.15 reduces to the result derived in the previous section, equation 7.6 (page 198). In general, if n = 1, equation 7.15 furnishes a first-order differential equation which is usually easier to solve than the corresponding second-order Euler-Lagrange equation, as was seen in sections 5.2 and 5.3. Normally, solutions of this first-order equation are also solutions of the Euler-Lagrange equations: however, this is not always true, so some care is sometimes needed, see for instance exercise 7.8 and the following discussion. A proof of Noether's
theorem is given after the following exercises.

Exercise 7.5
Use the fact that the functional $S[y] = \int_a^b dx\,\bigl(y_1'^2 + y_2'^2\bigr)$ is invariant under the rotation defined by equation 7.9 and the result derived in exercise 7.4 to show that a first-integral, equation 7.15, is $y_1 y_2' - y_2 y_1' = \text{constant}$.
In the context of dynamics this first-integral is the angular momentum.

Exercise 7.6
Show that the functional $S[y] = \int_a^b dx\,\bigl(y_1'^2 + y_2'^2\bigr)$ is invariant under the following three transformations
(i) ȳ₁ = y₁ + δg(x), ȳ₂ = y₂, x̄ = x,
(ii) ȳ₁ = y₁, ȳ₂ = y₂ + δg(x), x̄ = x,
(iii) ȳ₁ = y₁, ȳ₂ = y₂, x̄ = x + δg(x),
only if g(x) is a constant.
In the case g = 1 show that these three invariant transformations lead to the first-integrals
(i) y₁′ = constant, (ii) y₂′ = constant, (iii) y₁′² + y₂′² = constant.

1 The name first-integral comes from the time when differential equations were solved by successive

integration, with n integrations being necessary to find the general solution of an nth order equation.
The term solution dates back to Lagrange, but it was Poincaré who established its use; what is now
named a solution used to be called an integral or a particular integral.

Exercise 7.7
Show that the functional
$$ S[y] = \int_a^b dx\left[\tfrac{1}{2}\bigl(y_1'^2 + y_2'^2\bigr) + V(y_1 - y_2)\right], $$
where V(z) is a differentiable function, is invariant under the transformations
$$ \bar{y}_1 = y_1 + \delta g(x), \qquad \bar{y}_2 = y_2 + \delta g(x), \qquad \bar{x} = x, $$
only if g(x) is a constant, and then a first-integral is y₁′ + y₂′ = constant.

Exercise 7.8
A version of the Emden-Fowler equation can be written in the form
$$ \frac{d^2y}{dx^2} + \frac{2}{x}\frac{dy}{dx} + y^5 = 0. $$
(a) Show that this is the Euler-Lagrange equation associated with the functional
$$ S[y] = \int_a^b dx\,x^2\left(y'^2 - \frac{1}{3}y^6\right). $$
(b) Show that this functional is invariant under the scaling transformation x̄ = αx, ȳ = βy, where α and β are constants satisfying αβ² = 1. Use Noether's theorem to deduce that a first-integral is
$$ x^2yy' + x^3\left(y'^2 + \frac{1}{3}y^6\right) = c, $$
where c is a constant.
(c) By substituting the trial function y = Axᵞ into the first-integral, find a value of γ that yields a solution of the first-integral, for any A. Show also that this is a solution of the original Euler-Lagrange equation, but only for particular values of A.
(d) By differentiating the first-integral, obtain the following equation for y″,
$$ \bigl(xy'' + 2y' + xy^5\bigr)\bigl(2x^2y' + xy\bigr) = 0. $$
Show that the solutions of this equation are y = Ax^{-1/2}, for any constant A, together with the solutions of the Euler-Lagrange equation.
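Parts (b) and (c) of exercise 7.8 can be checked with SymPy (an assumption; the notes use no software). For the trial exponent γ = −1/2 the first-integral is constant for every A, while the Euler-Lagrange equation holds only when A⁴ = 1/4:

```python
import sympy as sp

x, A = sp.symbols('x A', positive=True)

# Trial function y = A*x**gamma with gamma = -1/2 (part (c))
y = A*x**sp.Rational(-1, 2)
yp = y.diff(x)

# First-integral from part (b): x^2*y*y' + x^3*(y'^2 + y^6/3)
phi = x**2*y*yp + x**3*(yp**2 + y**6/3)

# Independent of x, so a solution of the first-integral for every A
assert sp.simplify(sp.diff(phi, x)) == 0
assert sp.simplify(phi - (-A**2/4 + A**6/3)) == 0

# But the Euler-Lagrange equation holds only for particular A:
# EL = (A^5 - A/4)*x^(-5/2), zero only when A^4 = 1/4
EL = y.diff(x, 2) + 2*yp/x + y**5
assert sp.simplify(EL - (A**5 - A/4)*x**sp.Rational(-5, 2)) == 0
```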

In the previous exercise we saw that the first-order differential equation defined by
the first-integral had a solution that was not a solution of the original Euler-Lagrange
equation. This feature, surprising at first, is typical and a consequence of the original equation being nonlinear in x and y. In general the Euler-Lagrange equation can be written in the form y″ = f(x, y, y′); suppose this possesses the first-integral φ(x, y, y′) = c, then differentiation of this eliminates the constant c to give the second-order equation y″φ_{y′} + y′φ_y + φ_x = 0, which is linear in y″. By definition solutions of the Euler-Lagrange equation satisfy the first-integral, so this equation factorises in the form
$$ y''\phi_{y'} + y'\phi_y + \phi_x = \bigl(y'' - f(x, y, y')\bigr)\,g(x, y, y') = 0, $$

for some function g(x, y, y′), which may be a constant. This latter equation may also have another solution given by g(x, y, y′) = 0, which may be integrated to furnish a
relation between x and y involving the single constant c. Usually this function is not a
solution of the original Euler-Lagrange equation.
The general solution of the Euler-Lagrange equation contains two arbitrary constants, which are determined by the boundary conditions that can be varied independently. The extra solution of the first-integral, if it exists, can involve at most one constant, so does not usually satisfy both boundary conditions. A simple example of
this was seen in the minimal surface area problem, equation 5.14 (page 157).

7.3.1 Proof of Noether’s theorem


Noether’s theorem is proved by substituting the transformation 7.13 into the func-
tional 7.14 and expanding to first order in δ. The algebra is messy, so we proceed in
two stages.
First, we assume that x̄ = x, that is φ = 0, which simplifies the algebra. It is easiest to start with the transformed functional
$$ \overline{G} = \int_c^d dx\,F\!\left(x, \bar{y}, \frac{d\bar{y}}{dx}\right) \qquad (\text{since } \bar{x} = x). $$

Now substitute for ȳ and ȳ′ and expand to first order in δ to obtain
$$ \overline{G} = \int_c^d dx\,F\!\left(x,\; y + \delta\psi,\; y' + \delta\frac{d\psi}{dx}\right) = \int_c^d dx\,F(x, y, y') + \delta\int_c^d dx\sum_{k=1}^{n}\left(\frac{\partial F}{\partial y_k}\psi_k + \frac{\partial F}{\partial y_k'}\frac{d\psi_k}{dx}\right). \tag{7.16} $$

But the first term is merely the untransformed functional which, by definition, equals the transformed functional, because it is invariant under the transformation. Also, using integration by parts,
$$ \int_c^d dx\,\frac{\partial F}{\partial y_k'}\frac{d\psi_k}{dx} = \left[\psi_k\frac{\partial F}{\partial y_k'}\right]_c^d - \int_c^d dx\,\psi_k\,\frac{d}{dx}\!\left(\frac{\partial F}{\partial y_k'}\right) $$

and hence, by substituting this result into equation 7.16, we obtain
$$ 0 = \delta\int_c^d dx\sum_{k=1}^{n}\psi_k\left(\frac{\partial F}{\partial y_k} - \frac{d}{dx}\!\left(\frac{\partial F}{\partial y_k'}\right)\right) + \delta\left[\sum_{k=1}^{n}\psi_k\frac{\partial F}{\partial y_k'}\right]_c^d. \tag{7.17} $$
The term in curly brackets is, by virtue of the Euler-Lagrange equation, zero on a stationary path and hence it follows that
$$ \left.\sum_{k=1}^{n}\psi_k\frac{\partial F}{\partial y_k'}\right|_{x=d} = \left.\sum_{k=1}^{n}\psi_k\frac{\partial F}{\partial y_k'}\right|_{x=c}. $$
Since c and d are arbitrary we obtain equation 7.15, with φ = 0.



The general case, φ 6= 0, proceeds similarly but is algebraically more complicated.


As before we start with the transformed functional, which is now
$$\overline{G} = \int_{\overline{c}}^{\overline{d}} d\overline{x}\, F\!\left(\overline{x}, \overline{y}, \frac{d\overline{y}}{d\overline{x}}\right),$$
where $\overline{d} = d + \delta\phi(d)$, $\overline{c} = c + \delta\phi(c)$, with $\phi(c)$ denoting $\phi(c, y(c), y'(c))$. Now we have
to change the integration variable and limits, besides expanding F . First consider the
differential; using equations 7.13 and the chain rule,
$$\frac{d\overline{y}}{dx} = \frac{dy}{dx} + \delta\frac{d\psi}{dx} \quad\text{but}\quad \frac{d\overline{x}}{dx} = 1 + \delta\frac{d\phi}{dx}, \tag{7.18}$$
giving $\dfrac{dx}{d\overline{x}} = 1 - \delta\dfrac{d\phi}{dx} + O(\delta^2)$ and so, to first order in $\delta$, we have
$$\frac{d\overline{y}}{d\overline{x}} = \left(\frac{dy}{dx} + \delta\frac{d\psi}{dx}\right)\left(1 - \delta\frac{d\phi}{dx}\right) = \frac{dy}{dx} + \delta\left(\frac{d\psi}{dx} - \frac{dy}{dx}\frac{d\phi}{dx}\right).$$

Thus the integral becomes


$$\overline{G} = \int_c^d dx\, \frac{d\overline{x}}{dx}\, F\!\left(x + \delta\phi,\ y + \delta\psi,\ \frac{dy}{dx} + \delta\left(\frac{d\psi}{dx} - \frac{dy}{dx}\frac{d\phi}{dx}\right)\right).$$

Now expand to first order in δ and use the fact that the functional is invariant. After
some algebra we find that
$$0 = \delta\int_c^d dx\left\{\left(F - \sum_{k=1}^n \frac{\partial F}{\partial y_k'}\frac{dy_k}{dx}\right)\frac{d\phi}{dx} + \phi\frac{\partial F}{\partial x} + \sum_{k=1}^n\left(\frac{\partial F}{\partial y_k}\psi_k + \frac{\partial F}{\partial y_k'}\frac{d\psi_k}{dx}\right)\right\}. \tag{7.19}$$

Notice that if $\phi = 0$ this is the equivalent of equation 7.17. Now integrate those terms containing $d\phi/dx$ and $d\psi_k/dx$ by parts to cast this equation into the form
$$\begin{aligned}
0 = {}& \delta\left[\left(F - \sum_{k=1}^n \frac{\partial F}{\partial y_k'}\frac{dy_k}{dx}\right)\phi + \sum_{k=1}^n \psi_k\frac{\partial F}{\partial y_k'}\right]_c^d \\
&+ \delta\int_c^d dx\, \phi\left\{\frac{\partial F}{\partial x} - \frac{d}{dx}\left(F - \sum_{k=1}^n \frac{\partial F}{\partial y_k'}\frac{dy_k}{dx}\right)\right\} \\
&+ \delta\int_c^d dx \sum_{k=1}^n \psi_k\left(\frac{\partial F}{\partial y_k} - \frac{d}{dx}\frac{\partial F}{\partial y_k'}\right). 
\end{aligned} \tag{7.20}$$

Finally, we need to show that on stationary paths the integrals are zero. The second
integral is clearly zero, by virtue of the Euler-Lagrange equations. On expanding the
integrand of the first integral, it becomes
$$\frac{\partial F}{\partial x} - \left\{\frac{\partial F}{\partial x} + \sum_{k=1}^n\left(\frac{\partial F}{\partial y_k}y_k' + \frac{\partial F}{\partial y_k'}y_k''\right)\right\} + \sum_{k=1}^n \frac{\partial F}{\partial y_k'}y_k'' + \sum_{k=1}^n y_k'\frac{d}{dx}\!\left(\frac{\partial F}{\partial y_k'}\right).$$

Using the Euler-Lagrange equations to modify the last term it is seen that this expres-
sion is zero. Hence, because c and d are arbitrary we have shown that the function
$$\left(F - \sum_{k=1}^n \frac{\partial F}{\partial y_k'}y_k'\right)\phi + \sum_{k=1}^n \psi_k\frac{\partial F}{\partial y_k'},$$

where y(x) is evaluated along a stationary path, is independent of x.
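The constancy of this function along a stationary path is easy to verify numerically. The example below is an assumed illustration, not from the notes: an integrand with no explicit $x$-dependence is invariant under $\overline{x} = x + \delta$, so $\phi = 1$, $\psi = 0$ and the conserved quantity reduces to $F - y'F_{y'}$.

```python
import numpy as np

# A minimal sketch (an assumed example): F = y'^2/2 - y^2/2 has no
# explicit x dependence, so the functional is invariant under
# x -> x + delta, i.e. phi = 1, psi = 0, and Noether's theorem gives
# F - y' F_{y'} = constant on stationary paths.

x = np.linspace(0.0, 2.0, 101)
y = np.sin(x)        # a solution of the Euler-Lagrange equation y'' + y = 0
yp = np.cos(x)       # its derivative y'(x)

F = 0.5*yp**2 - 0.5*y**2
conserved = F - yp*yp          # F - y' F_{y'}, since F_{y'} = y'

# On this path the conserved quantity equals -1/2 at every x.
print(conserved.min(), conserved.max())
```

Here the spread between the minimum and maximum is of the order of rounding error, confirming that the quantity is independent of $x$ along the stationary path.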

Exercise 7.9
Derive equations 7.19 and 7.20.

Exercise 7.10
Consider the functional $S[y] = \int_a^b dx\, F(x, y')$, where the integrand depends only upon $x$ and $y'$. Show that Noether's theorem gives the first-integral $F_{y'}(x, y') = \text{constant}$, and that this is consistent with the Euler-Lagrange equation.

7.4 Miscellaneous exercises


Exercise 7.11
(a) Show that the Euler-Lagrange equation for the functional
$$S[y] = \int_a^b dx\, xyy'^2 \quad\text{is}\quad 2xy\frac{d^2y}{dx^2} + x\left(\frac{dy}{dx}\right)^2 + 2y\frac{dy}{dx} = 0.$$
(b) Show that this functional is invariant with respect to scale changes in the independent variable, $x$, that is, under the change to the new variable $\overline{x} = (1+\delta)x$, where $\delta$ is a constant. Use Noether's theorem to show that the first-integral of the above differential equation is $x^2 y\left(\dfrac{dy}{dx}\right)^2 = c$, for some constant $c$.

Exercise 7.12
Consider the functional $S[y] = \int dx\, x^3 y^2 y'^2$.
(a) Show that $S$ is invariant under the scale transformation $\overline{x} = \alpha x$, $\overline{y} = \beta y$ if $\alpha\beta^2 = 1$. Hence show that a first-integral is $x^3 y^3 y' + x^4 y^2 y'^2 = c = \text{constant}$.
(b) Using the function $y = Ax^\gamma$ find a solution of this equation; show also that this is not a solution of the associated Euler-Lagrange equation.
(c) Show that the general solution of the Euler-Lagrange equation is $y^2 + Ax^{-2} = B$, where $A$ and $B$ are arbitrary constants.
(d) Using the independent variable $u$ where $x = u^a$, show that with a suitable choice of the constant $a$ the functional becomes
$$S[y] = \frac{1}{a}\int du\, y^2\left(\frac{dy}{du}\right)^2.$$
Find the first-integral of this functional and show that the solution of the first-integral found in part (b) does not satisfy this new first-integral.

Exercise 7.13
Show that the Euler-Lagrange equation of the functional
$$S[y] = \int_a^b dx\, F(x, y, y', y''), \quad y(a) = A_1, \quad y'(a) = A_2, \quad y(b) = B_1, \quad y'(b) = B_2,$$
derived in exercise 4.33 (page 143), has the first-integral
$$\frac{d}{dx}\left(\frac{\partial F}{\partial y''}\right) - \frac{\partial F}{\partial y'} = \text{constant}$$
if the integrand does not depend explicitly upon $y(x)$, and the first-integral
$$y''\frac{\partial F}{\partial y''} - \left(\frac{d}{dx}\left(\frac{\partial F}{\partial y''}\right) - \frac{\partial F}{\partial y'}\right)y' - F = \text{constant}$$
if the integrand does not depend explicitly upon $x$.
Hint: the second part of this question is most easily done using the theory described in section 7.2.
Chapter 8

The second variation

8.1 Introduction
In this chapter we derive necessary and sufficient conditions for the functional
$$S[y] = \int_a^b dx\, F(x, y, y'), \quad y(a) = A, \quad y(b) = B, \tag{8.1}$$

to have an actual extremum, rather than simply a stationary value. You will recall from chapter 4 that a necessary and sufficient condition for this functional to be stationary on a sufficiently differentiable curve y(x) is that it satisfies the Euler-Lagrange equation,
$$\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0, \quad y(a) = A, \quad y(b) = B. \tag{8.2}$$

Necessary and sufficient conditions for a solution of the Euler-Lagrange equation to


be an extremum are stated in theorems 8.3 and 8.4 (page 221) respectively. This
theory is important because many variational principles require the functional to have
a minimum value. But the theory is limited because it does not determine whether an
extremum is local or global, section 3.2.2, and sometimes this distinction is important,
as for example with geodesics. The treatment of local and global extrema is different.
For a local extremum we need compare only neighbouring paths — the behaviour far
away is irrelevant. For a global extremum, we require information about all admissible
paths, which is clearly a far more demanding, and often impossible, task. For this
reason we shall concentrate on local extrema but note that the analysis introduced in
exercise 3.4 (page 97) uses a global property of the functional which can be used for
some brachistochrone problems, as shown in section 8.6. This, and other, methods are
analysed in more depth by Troutman1 .
We start this chapter with a description of the standard procedure used to classify
the stationary points of functions of several real variables, leading to a statement of
the Morse lemma, which shows that with n variables most stationary points can be
categorised as one of n + 1 types, only one of which is a local minimum and one a
local maximum. In section 8.4 we derive a sufficient condition for the functional 8.1 to
1 Troutman J L, 1983, Variational Calculus with Elementary Convexity, Springer-Verlag.


have an extremum. But to understand this theory it is necessary to introduce Jacobi's


equation and the notion of conjugate points, so these are introduced first. Theorem 8.4
(page 221) is important and useful because it provides a test for determining whether a
stationary path actually yields a local extremum; in sections 8.6 and 8.7 we shall apply
it to the brachistochrone problem and the minimum surface of revolution, respectively.
Finally, in section 8.8 we complete the story by showing how the classification method
used for functions of n variables tends to Jacobi’s equation as n → ∞.

8.2 Stationary points of functions of several variables


Suppose that $x = (x_1, x_2, \cdots, x_n)$ and $F(x)$ is a function in $\mathbb{R}^n$ that possesses derivatives of at least second-order. Using the Taylor expansion of $F(x + \epsilon\xi)$, equation 1.39 (page 36), with $|\xi| = 1$ to guarantee that $\epsilon\xi$ tends to zero with $\epsilon$, we have
$$F(x + \epsilon\xi) = F(x) + \epsilon\,\Delta F[x, \xi] + \frac{1}{2}\epsilon^2\Delta_2 F[x, \xi] + O(\epsilon^3), \tag{8.3}$$
where $\Delta F$ is the Gâteaux differential
$$\Delta F[x, \xi] = \sum_{k=1}^n \frac{\partial F}{\partial x_k}\xi_k \quad\text{and}\quad \Delta_2 F[x, \xi] = \sum_{k=1}^n\sum_{j=1}^n \frac{\partial^2 F}{\partial x_k\,\partial x_j}\xi_k\xi_j. \tag{8.4}$$

At a stationary point, $x = a$, by definition $F(a + \epsilon\xi) - F(a) = O(\epsilon^2)$ for all $\xi$, so that all the first partial derivatives, $\partial F/\partial x_k$, must be zero at $x = a$. Provided $\Delta_2 F[a, \xi]$ is not identically zero for any $\xi$, the nature of the stationary point is determined by the behaviour of $\Delta_2 F[a, \xi]$. If this is positive for all $\xi$ then $F$ has a local minimum at $x = a$: note that the adjective local is usually omitted. If it is negative for all $\xi$ then $F$ has a local maximum at $x = a$. If the sign of $\Delta_2 F[a, \xi]$ changes with $\xi$ the stationary point is said to be a saddle. In some texts the terms stationary point and critical point are synonyms.
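The expansion 8.3 is easy to check numerically; the quadratic function below is an assumed example (for a quadratic the expansion terminates, so the two sides agree to rounding error).

```python
import numpy as np

# Check expansion (8.3) for the sample quadratic F(x) = x1^2 + 3*x2^2
# at a non-stationary point; xi is a unit vector, as in the text.

def F(x):
    return x[0]**2 + 3*x[1]**2

def grad(x):
    return np.array([2*x[0], 6*x[1]])

H = np.array([[2.0, 0.0],
              [0.0, 6.0]])          # Hessian of F (constant here)

x0 = np.array([1.0, -2.0])
xi = np.array([3.0, 4.0]) / 5.0     # |xi| = 1
eps = 1e-3

delta1 = grad(x0) @ xi              # Gateaux differential of (8.4)
delta2 = xi @ H @ xi                # second-order term of (8.4)

lhs = F(x0 + eps*xi) - F(x0)
rhs = eps*delta1 + 0.5*eps**2*delta2
print(abs(lhs - rhs))               # ~0: the expansion is exact for a quadratic
```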

8.2.1 Functions of one variable


For functions of one variable, n = 1, there are three types of stationary points as
illustrated in figure 8.1. The function shown in this figure has a local maximum and
a local minimum, but the global maximum and minimum are at the ends of the range
and are not stationary points.

Figure 8.1 Diagram showing the three possible types of stationary point of a function, f(x), of one variable: a local maximum, a point of inflection and a local minimum.

At a typical maximum $f'(x) = 0$ and $f''(x) < 0$; at a typical minimum $f'(x) = 0$ and $f''(x) > 0$; whereas at a stationary point which is also a point of inflection² $f'(x) = f''(x) = 0$. Care is needed when classifying stationary points because there are many special cases; for instance the function $f(x) = x^4$ is stationary at the origin and $f^{(k)}(0) = 0$ for $k = 1, 2$ and $3$. For this reason we restrict the discussion to typical stationary points, defined to be those at which the second derivative is not zero: without this restriction complications arise, for the reasons discussed in the following exercise.

Exercise 8.1
Stationary points which are also points of inflection are not typical because small,
arbitrary changes to a function with such a stationary point usually change its
nature.

(a) Show that the function $f(x) = x^3$ is stationary and has a point of inflection at the origin. By adding $\epsilon x$, with $0 < |\epsilon| \ll 1$, so $f(x)$ becomes $x^3 + \epsilon x$, show that the stationary point is removed if $\epsilon > 0$ or converted to two ordinary stationary points (at which the second derivative is not zero) if $\epsilon < 0$.

(b) If the function $f(x)$ is stationary and has a point of inflection at $x = a$, so $f'(a) = f''(a) = 0$, but $f^{(3)}(a) \neq 0$, show that the function $F(x) = f(x) + \epsilon g(x)$, where $0 < |\epsilon| \ll 1$ and where $g(a) = 0$ and $g'(a) \neq 0$, is either not stationary or has ordinary stationary points in the neighbourhood of $x = a$. You may assume that all functions possess a Taylor expansion in the neighbourhood of $x = a$.

Note that a sufficiently smooth function $f(x)$ defined on an interval $[a, b]$ is often said to be structurally stable if the number and nature of its stationary points are unchanged by the addition of a small, arbitrary function; that is, for arbitrarily small $\epsilon$, $f(x)$ and $f(x) + \epsilon g(x)$, also sufficiently smooth, have the same stationary point structure on $[a, b]$. Generally, functions describing the physical world are structurally stable, provided $f$ and $g$ have the same symmetries. For functions of one real variable there are just two types of typical stationary points, maxima and minima.

8.2.2 Functions of two variables


If $n = 2$ there are three types of stationary points, maxima, minima and saddles, examples of which are
$$f(x) = \begin{cases} -x_1^2 - x_2^2, & \text{maximum}, \\ x_1^2 + x_2^2, & \text{minimum}, \\ x_1^2 - x_2^2, & \text{saddle}. \end{cases} \tag{8.5}$$

These three functions all have stationary points at the origin and their shapes are
illustrated in the following figures.

² A general point of inflection is where $f''(x)$ changes sign, though $f'(x)$ need not be zero.

Figure 8.2 Maximum Figure 8.3 Minimum Figure 8.4 Saddle

In the neighbourhood of a maximum or a minimum the function is always smaller or


always larger, respectively, than its value at the stationary point. In the neighbourhood
of a saddle it is both larger and smaller.
The nature of a stationary point is determined by the value of the Hessian determi-
nant at the stationary point. The Hessian determinant is the determinant of the real,
symmetric Hessian matrix with elements Hij = ∂ 2 f /∂xi ∂xj . A stationary point is said
to be non-degenerate if, at the stationary point, det(H) 6= 0; a degenerate stationary
point is one at which det(H) = 0. At a degenerate stationary point a function can have
the characteristics of an extremum or a saddle, but there are other cases. In this text
the adjectives typical and non-degenerate when used to describe a stationary point are
synonyms.

Exercise 8.2
Find the Hessian determinant of the functions defined in equation 8.5.

Exercise 8.3
Show that the function f (x, y) = x3 − 3xy 2 has a degenerate stationary point at
the origin.

8.2.3 Functions of n variables


For a scalar function, f (x), of n variables the Hessian matrix, H, is the n × n, real
symmetric matrix with elements Hij (x) = ∂ 2 f /∂xi ∂xj . A stationary point, at x = a,
is said to be typical, or non-degenerate, if det(H(a)) 6= 0 and the classification of these
points depends entirely upon the second-order term of the Taylor expansion, that is
∆2 F [a, ξ], equation 8.4. Further, the following lemma, due to Morse (1892 – 1977),
shows that there are n + 1 types of stationary points, only two of which are extrema.
The Morse Lemma: If a is a non-degenerate stationary point of a smooth function
f (x) then there is a local coordinate system3 (y1 , y2 , · · · , yn ), where yk (a) = 0, for all
k, such that

$$f(y) = f(a) - \left(y_1^2 + y_2^2 + \cdots + y_l^2\right) + \left(y_{l+1}^2 + y_{l+2}^2 + \cdots + y_n^2\right),$$
for some $0 \le l \le n$.
3 This means that in the neighbourhood of x = a the transformation from x to y is one to one and

that each coordinate yk (x) satisfies the conditions of the implicit function theorem.

Note that this representation of the function is exact in the neighbourhood of the
stationary point: it is not an expansion. The integer l is a topological invariant, meaning
that a smooth, invertible coordinate change does not alter its value, so it is a property
of the function not the coordinate system used to represent it.
At the extremes, $l = 0$ and $l = n$, we have
$$f(y) = \begin{cases} f(a) + \displaystyle\sum_{k=1}^n y_k^2, & (l = 0), \quad \text{minimum}, \\[2ex] f(a) - \displaystyle\sum_{k=1}^n y_k^2, & (l = n), \quad \text{maximum}. \end{cases}$$

For $0 < l < n$ the function is said to have a Morse $l$-saddle and if $n \gg 1$ there are many more types of saddles than extrema⁴.
The Morse lemma is an existence theorem: it does not provide a method of deter-
mining the transformation to the coordinates y(x) or the value of the index l: this is
usually determined using the second-order term of the Taylor expansion, most conve-
niently written in terms of the Hessian matrix, H, evaluated at the stationary point a,
$$\Delta_2 F[a, z] = z^{\top} H(a)\,z = \sum_{i=1}^n\sum_{j=1}^n H_{ij}(a)\,z_i z_j \quad\text{where}\quad z = x - a \tag{8.6}$$

and $z^{\top}$ is the transpose of the vector $z$. Thus the nature of the non-degenerate station-
ary point depends upon the behaviour of the quadratic form ∆2 .
(a) if ∆2 is positive definite, ∆2 > 0 for all |z| > 0, the stationary point is a minimum;
(b) if ∆2 is negative definite, ∆2 < 0 for all |z| > 0, the stationary point is a maximum;
(c) otherwise the stationary point is a saddle.
The following two statements are equivalent and both provide necessary and sufficient
conditions to determine the behaviour of ∆2 and hence the nature of the stationary
point of f (x).
(I) If the eigenvalues of H(a) are λk , k = 1, 2, · · · , n, then
(a) ∆2 will be positive definite if λk > 0 for k = 1, 2, · · · , n:
(b) ∆2 will be negative definite if λk < 0 for k = 1, 2, · · · , n:
(c) ∆2 will be indefinite if the λk are of both signs; further, the index l is equal
to the number of negative eigenvalues.
Since H is real and symmetric all its eigenvalues are real. If one of its eigenvalues
is zero the stationary point is degenerate.
(II) Let $D_r$ be the determinant derived from $H$ by retaining only the first $r$ rows and columns, so that
$$D_r = \begin{vmatrix} H_{11} & H_{12} & \cdots & H_{1r} \\ H_{21} & H_{22} & \cdots & H_{2r} \\ \cdot & \cdot & \cdots & \cdot \\ H_{r1} & H_{r2} & \cdots & H_{rr} \end{vmatrix}, \quad r = 1, 2, \cdots, n,$$
⁴ Note that some texts use $l' = n - l$ in place of $l$, in which case the function has a minimum when $l' = n$ and a maximum when $l' = 0$.



giving D1 = H11 and Dn = det(H). Then

(a) ∆2 will be positive definite if Dk > 0 for k = 1, 2, · · · , n:


(b) ∆2 will be negative definite if (−1)k Dk > 0 for k = 1, 2, · · · , n:
(c) ∆2 will be indefinite if neither condition (a) nor (b) is satisfied.

The proof of this statement may be found in Jeffrey5 (1990, page 288).
The determinants $D_k$, $k = 1, 2, \cdots, n$, are known as the descending principal minors of $H$. In general a minor of a given element, $a_{ij}$, of an $N$th-order determinant is the $(N-1)$th-order determinant obtained by removing the row and column of the given element; when the sign $(-1)^{i+j}$ is attached it is known as the cofactor.
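The equivalence of tests (I) and (II) is easy to check numerically for any particular Hessian; the matrix below is an assumed example.

```python
import numpy as np

# Tests (I) and (II) applied to a sample symmetric matrix: all
# eigenvalues positive <=> all descending principal minors D_1, ..., D_n
# positive (Sylvester's criterion).

H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigs = np.linalg.eigvalsh(H)                            # test (I)
minors = [np.linalg.det(H[:r, :r]) for r in range(1, 4)]  # test (II)

print(all(e > 0 for e in eigs), all(d > 0 for d in minors))  # True True
```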

Exercise 8.4
Determine the nature of the stationary points at the origin of the following quadratic
functions:
(a) f = 2x2 − 8xy + y 2 ,
(b) f = 2x2 + 4y 2 + z 2 + 2(xy + yz + xz).

Exercise 8.5
The functions f (x, y) = x2 + y 3 and g(x, y) = x2 + y 4 are both stationary at the
origin. Show that for both functions this is a degenerate stationary point, classify
it and determine the expression for ∆2 .

Exercise 8.6
Show that a nondegenerate stationary point of the function of two variables, $f(x, y)$, is a minimum if
$$\frac{\partial^2 f}{\partial x^2} > 0, \quad \frac{\partial^2 f}{\partial y^2} > 0 \quad\text{and}\quad \frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\,\partial y}\right)^2 > 0,$$
and a maximum if
$$\frac{\partial^2 f}{\partial x^2} < 0, \quad \frac{\partial^2 f}{\partial y^2} < 0 \quad\text{and}\quad \frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\,\partial y}\right)^2 > 0,$$
where all derivatives are evaluated at the stationary point. Show also that if
$$\frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\,\partial y}\right)^2 < 0$$
the stationary point is a saddle.

Exercise 8.7
(a) Show that the function f (x, y) = (x3 + y 3 ) − 3(x2 + y 2 + 2xy) has stationary
points at (0, 0) and (4, 4), and classify them.
(b) Find the stationary points of the function f (x, y) = x4 + 64y 4 − 2(x + 8y)2 ,
and classify them.

5 Linear Algebra and Ordinary Differential Equations by A Jeffrey, (Blackwell Scientific Publica-

tions).

Exercise 8.8
The least squares fit
Given a set of $N$ pairs of data points $(x_i, y_i)$ we require a curve given by the line $y = a + bx$ with the constants $(a, b)$ chosen to minimise the function
$$\Phi(a, b) = \sum_{i=1}^N (a + bx_i - y_i)^2.$$
Show that $(a, b)$ are given by the solutions of the linear equations
$$aN + b\sum x_i = \sum y_i, \qquad a\sum x_i + b\sum x_i^2 = \sum x_i y_i,$$
where the sums are all from $i = 1$ to $i = N$.
Hint: use the Cauchy-Schwarz inequality for sums (page 41) to show that the stationary point is a minimum.

8.3 The second variation of a functional


As with functions of $n$ real variables the nature of a stationary functional is usually determined by considering the second-order expansion, that is the term $O(\epsilon^2)$ in the difference $\delta S = S[y + \epsilon h] - S[y]$, where $y(x)$ is a solution of the Euler-Lagrange equation. In order to determine this we use the Taylor expansion of the integrand,
$$F(x, y + \epsilon h, y' + \epsilon h') = F(x, y, y') + \epsilon\left(h\frac{\partial F}{\partial y} + h'\frac{\partial F}{\partial y'}\right) + \frac{\epsilon^2}{2}\left(h'^2\frac{\partial^2 F}{\partial y'^2} + 2hh'\frac{\partial^2 F}{\partial y\,\partial y'} + h^2\frac{\partial^2 F}{\partial y^2}\right) + O(\epsilon^3),$$

and assume that h(x) belongs to D1 (a, b), defined on page 124, that is, we are consid-
ering weak variations, section 4.6. It is convenient to write
$$\delta S = \epsilon\,\Delta[y, h] + \frac{1}{2}\epsilon^2\Delta_2[y, h] + o(\epsilon^2)$$
where $\Delta[y, h]$ is the Gâteaux differential, introduced in equation 4.5 (page 125), and $\Delta_2[y, h]$ is named the second variation⁶ and is given by
$$\Delta_2[y, h] = \int_a^b dx\left(h'^2\frac{\partial^2 F}{\partial y'^2} + 2hh'\frac{\partial^2 F}{\partial y\,\partial y'} + h^2\frac{\partial^2 F}{\partial y^2}\right).$$

On a stationary path, by definition ∆[y, h] = 0 for all admissible h, and henceforth we


shall assume that on this path
$$\delta S = \frac{1}{2}\epsilon^2\Delta_2[y, h] + \epsilon^2 R(\epsilon) \quad\text{with}\quad \lim_{\epsilon\to 0} R(\epsilon) = 0, \quad\text{for all admissible } h. \tag{8.7}$$
⁶ Note, in some texts $\epsilon^2\Delta_2/2$ is named the second variation but, whichever convention is used, the subsequent analysis is identical.

This means that for small $\epsilon$ the first term dominates and the sign of $\delta S$ is the same as the sign of the second variation, $\Delta_2$, which therefore determines the nature of the stationary path.
The expression for $\Delta_2$ may be simplified by integrating the term involving $hh'$ by parts, giving
$$\int_a^b dx\, \frac{\partial^2 F}{\partial y\,\partial y'}\,hh' = \frac{1}{2}\int_a^b dx\, \frac{\partial^2 F}{\partial y\,\partial y'}\frac{dh^2}{dx} = \left[\frac{1}{2}h^2\frac{\partial^2 F}{\partial y\,\partial y'}\right]_a^b - \frac{1}{2}\int_a^b dx\, h^2\frac{d}{dx}\left(\frac{\partial^2 F}{\partial y\,\partial y'}\right).$$
Because of the boundary conditions $h(a) = h(b) = 0$ the boundary term vanishes, to give
$$2\int_a^b dx\, \frac{\partial^2 F}{\partial y\,\partial y'}\,hh' = -\int_a^b dx\, h^2\frac{d}{dx}\left(\frac{\partial^2 F}{\partial y\,\partial y'}\right).$$
Thus the second variation becomes
$$\Delta_2[y, h] = \int_a^b dx\left[P(x)h'(x)^2 + Q(x)h(x)^2\right], \tag{8.8}$$
where
$$P(x) = \frac{\partial^2 F}{\partial y'^2} \quad\text{and}\quad Q(x) = \frac{\partial^2 F}{\partial y^2} - \frac{d}{dx}\left(\frac{\partial^2 F}{\partial y\,\partial y'}\right) \tag{8.9}$$
are known functions of x, because here y(x) is a solution of the Euler-Lagrange equation.
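The derivatives in equation 8.9 can be generated symbolically; the integrand below is an assumed illustration, and the sympy library is assumed available.

```python
import sympy as sp

# Compute P(x) and Q(x) of equation (8.9) for a sample integrand,
# here F = y'^2/2 - y^2/2 (an assumed example, not from the notes).

x = sp.symbols('x')
y = sp.Function('y')(x)
yp = y.diff(x)

F = yp**2/2 - y**2/2

P = sp.diff(F, yp, 2)                                  # F_{y'y'}
Q = sp.diff(F, y, 2) - sp.diff(sp.diff(F, y, yp), x)   # F_{yy} - d/dx F_{yy'}

print(sp.simplify(P), sp.simplify(Q))   # 1 and -1
```

For this integrand the coefficients are constants, $P = 1$ and $Q = -1$, so no substitution of a stationary path $y(x)$ is needed; in general $P$ and $Q$ become functions of $x$ once the known solution $y(x)$ is substituted.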
The significance of ∆2 leads to the first two important results conveniently expressed
as theorems, which we shall not prove7 .
Theorem 8.1
A sufficient condition for the functional S[y] to have a minimum on a path y(x) for
which the first variation vanishes, ∆[y, h] = 0, is that ∆2 [y, h] > 0 for all allowed h 6= 0.
For a maximum we reverse the inequality.
Note that the condition ∆2 > 0 for all h is often described by the statement “∆2 is
strongly positive”.

If ∆2 [y, h] = 0 for some h(x) then for these h the sign of δS is determined by the
higher-order terms in the expansion, as for the examples considered in exercise 8.5.
Theorem 8.2
A necessary condition for the functional S[y] to have a minimum along the path y(x)
is that ∆2 [y, h] ≥ 0 for all allowed h. For a maximum, the sign ≥ is replaced by ≤.

The properties of ∆2 [y, h] are therefore crucial in determining the nature of a stationary
path. We need to show that ∆2 [y, h] 6= 0 for all admissible h, and this is not easy; the
remaining part of this theory is therefore devoted to understanding the behaviour of ∆ 2 .
7 Proofs of theorems 8.1 and 8.2 are provided in I M Gelfand and S V Fomin Calculus of Variations,

(Prentice Hall), chapter 5.



8.3.1 Short intervals


For short intervals, that is sufficiently small b − a, there is a very simple condition for
a functional to have an extremum, namely that for a ≤ x ≤ b, P (x) 6= 0: if P (x) > 0
the stationary path is a minimum and if P (x) < 0 it is a maximum. Unfortunately,
estimates of the magnitude of b − a necessary for this condition to be valid are usually
hard to find.
This result follows because if $h(a) = h(b) = 0$ the variation of $h'(x)^2$ is larger than that of $h(x)^2$. We may prove this using Schwarz's inequality, that is
$$\left(\int_a^b dx\, u(x)v(x)\right)^2 \le \int_a^b dx\, |u(x)|^2 \int_a^b dx\, |v(x)|^2, \tag{8.10}$$

provided all integrals exist. Since $h(x) = \int_a^x du\, h'(u)$, we have
$$h(x)^2 = \left(\int_a^x du\, h'(u)\right)^2 \le \left(\int_a^x du\right)\left(\int_a^x du\, h'(u)^2\right) = (x - a)\int_a^x du\, h'(u)^2 \le (x - a)\int_a^b du\, h'(u)^2.$$

Now integrate again to obtain
$$\int_a^b dx\, h(x)^2 \le \frac{1}{2}(b - a)^2\int_a^b dx\, h'(x)^2. \tag{8.11}$$
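Inequality 8.11 can be checked numerically for any admissible variation; the particular $h$ below, vanishing at both endpoints, is an assumed example.

```python
import numpy as np

# Check inequality (8.11) for one admissible variation with
# h(a) = h(b) = 0; the choice h(x) = sin(pi*(x-a)/(b-a)) on [0, 2]
# is an assumed example.

a, b = 0.0, 2.0
x = np.linspace(a, b, 20001)
dx = x[1] - x[0]
h = np.sin(np.pi*(x - a)/(b - a))
hp = (np.pi/(b - a))*np.cos(np.pi*(x - a)/(b - a))

lhs = np.sum(h**2)*dx                    # integral of h^2, here = 1
rhs = 0.5*(b - a)**2*np.sum(hp**2)*dx    # (b-a)^2/2 times integral of h'^2

print(lhs <= rhs)                        # True
```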

Exercise 8.9
As an example consider the function $g(x) = (x - a)(b - x)$ and show that
$$I = \int_a^b dx\, g(x)^2 = \frac{1}{30}(b - a)^5, \qquad I' = \int_a^b dx\, g'(x)^2 = \frac{1}{3}(b - a)^3,$$
and deduce that $I < I'$ if $(b - a)^2 < 10$.

Now all that is necessary is an application of the integral mean value theorem (page 23): consider each component of $\Delta_2$, equation 8.8, separately:
$$\Delta_2^{(P)} = \int_a^b dx\, P(x)h'(x)^2 = P(x_p)\int_a^b dx\, h'(x)^2, \qquad \Delta_2^{(Q)} = \int_a^b dx\, Q(x)h(x)^2 = Q(x_q)\int_a^b dx\, h(x)^2,$$
where $x_p$ and $x_q$ are in the closed interval $[a, b]$.


If Q(x) > 0 and P (x) > 0 for a < x < b then ∆2 > 0 for all admissible h and
the stationary path is a local minimum. However, this is neither a common nor very
interesting case, and the result follows directly from equation 8.8. We need to consider
the effect of Q(x) being negative with P (x) > 0.

If $Q(x)$ is negative in all or part of the interval $(a, b)$ it is necessary to show that $\Delta_2^{(P)} + \Delta_2^{(Q)} > 0$. We have, on using the above results,
$$\left|\Delta_2^{(Q)}\right| = |Q(x_q)|\int_a^b dx\, h(x)^2 \le \frac{1}{2}|Q(x_q)|(b - a)^2\int_a^b dx\, h'(x)^2 = \frac{1}{2}(b - a)^2\frac{|Q(x_q)|}{P(x_p)}\Delta_2^{(P)}.$$
Since $P(x_p) > 0$ it follows that for sufficiently small $(b - a)$, $\left|\Delta_2^{(Q)}\right| < \Delta_2^{(P)}$ and hence that $\Delta_2 > 0$. If $P(x) < 0$ for $a \le x \le b$, we simply consider $-\Delta_2$.
Thus for sufficiently small $b - a$ we have
a) if $P(x) = \dfrac{\partial^2 F}{\partial y'^2} > 0$, $a \le x \le b$, $S[y]$ has a minimum;
b) if $P(x) = \dfrac{\partial^2 F}{\partial y'^2} < 0$, $a \le x \le b$, $S[y]$ has a maximum.
This analysis shows that for short intervals, provided P (x) does not change sign, the
functional has either a maximum or a minimum and no other type of stationary point
exists. This result highlights the significance of the sign of P (x) which, as we shall see,
pervades the whole of this theory. In practice this result is of limited value because it
is rarely clear how small the interval needs to be.
Dynamical systems
For a one-dimensional dynamical system described by a Lagrangian defined by the difference between the kinetic and potential energy, $L = \frac{1}{2}m\dot{q}^2 - V(q, t)$, where $q$ is the generalised coordinate, the action, defined by the functional $S[q] = \int_{t_1}^{t_2} dt\, L(t, q, \dot{q})$, is stationary along the orbit from $q_1 = q(t_1)$ to $q_2 = q(t_2)$. For short times the kinetic energy dominates the motion, which is therefore similar to rectilinear motion, and the action has an actual minimum along the orbit. A similar result holds for many-dimensional dynamical systems.
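This minimum property can be seen directly in the simplest case; the free particle below (an assumed illustration with $m = 1$, $V = 0$) has a straight-line orbit, and any admissible variation with the same endpoints increases the action.

```python
import numpy as np

# For the free particle, L = qdot^2/2, the straight-line orbit from
# q(0) = 0 to q(1) = 1 should minimise the action; a varied path with
# the same endpoints has a larger action.

t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]

def action(q):
    qdot = np.gradient(q, dt)          # numerical derivative of the path
    return np.sum(0.5*qdot**2)*dt      # discretised action integral

straight = t                           # the stationary (straight-line) orbit
varied = t + 0.1*np.sin(np.pi*t)       # same endpoints: q(0) = 0, q(1) = 1

print(action(straight) < action(varied))   # True
```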
Comment
The analysis of this section emphasises the fact that for short intervals the solutions of most differential equations behave similarly and very simply; this is the idea behind the rectification described after theorem 2.2 (page 81).

8.3.2 Legendre’s necessary condition


In this section we show that a necessary condition for $\Delta_2[y, h] \ge 0$ for all admissible $h(x)$ is that $P(x) = F_{y'y'} \ge 0$ for $a \le x \le b$. Unlike the result derived in the previous section this does not depend upon the interval being small, though only a necessary condition is obtained. This result is due to Legendre: it is important because it is usually easier to apply than the necessary condition of theorem 8.2. Further, it is of historical significance because Legendre attempted (unsuccessfully) to show that a sufficient condition for $S[y]$ to have a weak minimum on the path $y(x)$ is that $F_{y'y'} > 0$ at every point of the curve.
Recall theorem 8.2, which states that a necessary condition for $S[y]$ to have a minimum on the stationary path $y(x)$ is that $\Delta_2[y, h] \ge 0$ for all allowed $h(x)$. We now show that a necessary condition for $\Delta_2 \ge 0$, for all $h(x)$ in $D_1(a, b)$ (defined on page 124) such that $h(a) = h(b) = 0$, is that $P(x) = \partial^2 F/\partial y'^2 \ge 0$ for $a \le x \le b$. The proof is by contradiction.

Suppose that at some point $x_0 \in [a, b]$, $P(x_0) = -2p$, $(p > 0)$. Then since $P(x)$ is continuous,
$$P(x) < -p \quad\text{if}\quad a \le x_0 - \alpha \le x \le x_0 + \alpha \le b$$
for some $\alpha > 0$. We now construct a suitable function $h(x)$ for which $\Delta_2 < 0$. Let
$$h(x) = \begin{cases} \sin^2\left(\dfrac{\pi(x - x_0)}{\alpha}\right), & x_0 - \alpha \le x \le x_0 + \alpha, \\ 0, & \text{otherwise}. \end{cases}$$

Then
$$\begin{aligned}
\Delta_2 &= \int_a^b dx\left(P(x)h'^2 + Q(x)h^2\right) \\
&= \frac{\pi^2}{\alpha^2}\int_{x_0-\alpha}^{x_0+\alpha} dx\, P(x)\sin^2\left(\frac{2\pi(x - x_0)}{\alpha}\right) + \int_{x_0-\alpha}^{x_0+\alpha} dx\, Q(x)\sin^4\left(\frac{\pi(x - x_0)}{\alpha}\right) \\
&< -\frac{p\pi^2}{\alpha} + \frac{3}{4}M\alpha, \qquad M = \max_{a \le x \le b}(|Q|).
\end{aligned}$$

For sufficiently small $\alpha$ the last expression becomes negative and hence $\Delta_2 < 0$. But it is necessary that $\Delta_2 \ge 0$, theorem 8.2, and hence it follows that we need $P(x) \ge 0$ for $x \in [a, b]$. Note that, as in the analysis leading to equation 8.11, it is the term depending upon $h'(x)$ that dominates the integral.
Exercise 8.10
Explain why h(x) has to be in D1 (a, b).

We have shown that a necessary condition for ∆2 [y, h] ≥ 0 is that P (x) ≥ 0 for x ∈ [a, b].
Using theorem 8.2 this shows that a necessary condition for S[y] to be a minimum on
the stationary path y(x) is that P (x) ≥ 0.
Legendre also attempted, unsuccessfully, to show that the weaker condition P (x) > 0,
x ∈ [a, b], is also sufficient. That this cannot be true is shown by the following counter
example.
We know that the minimum distance between two points on a sphere is along the
shorter arc of the great circle passing through them — assuming that the two points are
not on the same diameter. Thus for the three points A, B and C, on the great circle
through A and B, shown in figure 8.5, the shortest distance between A and B and
between B and C is along the short arcs and on these P > 0, exercise 5.20 (page 168).

Figure 8.5 The great circle through A and B, showing the diameter through A and a third point C on the same great circle.

Hence on the arc ABC, P > 0, but this is not the shortest distance between A and C.
Hence, it is not sufficient that P > 0 for a stationary path to give a minimum.

8.4 Analysis of the second variation


In this section we continue our analysis of the second variation
$$\Delta_2[y, h] = \int_a^b dx\left[P(x)h'(x)^2 + Q(x)h(x)^2\right] \tag{8.12}$$
with $h(x)$ in $D_1(a, b)$ and $h(a) = h(b) = 0$, where
$$P(x) = \frac{\partial^2 F}{\partial y'^2} \quad\text{and}\quad Q(x) = \frac{\partial^2 F}{\partial y^2} - \frac{d}{dx}\left(\frac{\partial^2 F}{\partial y\,\partial y'}\right)$$
and where $y(x)$ is a solution of the Euler-Lagrange equation.


In order that the functional $S[y]$, defined in equation 8.1, has a minimum (maximum) on $y(x)$ it is necessary and sufficient that $\Delta_2 > 0$ ($\Delta_2 < 0$) for all allowed $h(x)$, theorems 8.2 and 8.1. We saw in section 8.3.1 that provided $P(x) \neq 0$ for $a \le x \le b$ and $(b - a)$ is sufficiently small, the stationary path is automatically an extremum. In this section we derive necessary and sufficient conditions for the functional $S[y]$ to have an extremum on a stationary path; moreover, the condition obtained is usually relatively simple to apply provided $y(x)$ is known.
The central result of this section, theorem 8.4, is quite simple, but the route to it
involves a number of intermediate theorems and requires one new, important idea —
that of a conjugate point. This idea is central to the main result so we start by defining
it then quote the main result of this chapter.
Definition 8.1
Conjugate point. The point $\tilde{a} \neq a$ is said to be conjugate to a point $a$ if the solution of the linear, second-order equation
$$\frac{d}{dx}\left(P(x)\frac{du}{dx}\right) - Q(x)u = 0, \quad u(a) = 0, \quad u'(a) = 1, \tag{8.13}$$
where $P(x)$ and $Q(x)$ are known functions of $x$, vanishes at $x = \tilde{a}$.

Equation 8.13 is the Euler-Lagrange equation of the functional ∆2 [y, u] regarded as


a quadratic functional in u and is named the Jacobi equation of the original func-
tional S[y]. Note that equation 8.13 is an initial value problem and the existence of a
unique solution is guaranteed, see theorem 2.1 (page 61): moreover, later we see that
its solution can be derived from the general solution of the associated Euler-Lagrange
equation, exercise 8.18. The significance of equation 8.13 will become apparent as the
analysis unfolds.
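Since equation 8.13 is an initial value problem, a conjugate point can be located by direct numerical integration. The sketch below uses the case $P = 1$ with assumed sample coefficients $Q = \mp 4$ (chosen to differ from those in the exercises): for $Q = -4$ the solution is $\sin(2x)/2$, which first vanishes again at $\pi/2$, while for $Q = 4$ it is $\sinh(2x)/2$, which never vanishes for $x > a$.

```python
def jacobi_first_zero(Q, a=0.0, b=10.0, n=20000):
    """Integrate u'' = Q(x) u (equation 8.13 with P = 1), u(a) = 0,
    u'(a) = 1, by classical RK4, and return the first zero of u in
    (a, b], i.e. the point conjugate to a, or None if there is none."""
    h = (b - a) / n
    x, u, up = a, 0.0, 1.0
    for _ in range(n):
        # one RK4 step for the system u' = up, up' = Q(x) u
        k1u, k1p = up, Q(x) * u
        k2u, k2p = up + h/2*k1p, Q(x + h/2) * (u + h/2*k1u)
        k3u, k3p = up + h/2*k2p, Q(x + h/2) * (u + h/2*k2u)
        k4u, k4p = up + h*k3p, Q(x + h) * (u + h*k3u)
        u_new = u + h/6*(k1u + 2*k2u + 2*k3u + k4u)
        up_new = up + h/6*(k1p + 2*k2p + 2*k3p + k4p)
        if u > 0.0 and u_new <= 0.0:             # u has crossed zero
            return x + h * u / (u - u_new)       # linear interpolation
        x, u, up = x + h, u_new, up_new
    return None

print(jacobi_first_zero(lambda x: -4.0))  # u = sin(2x)/2: first zero ~ pi/2
print(jacobi_first_zero(lambda x: 4.0))   # u = sinh(2x)/2: no zero -> None
```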

Exercise 8.11
(a) Show that if P = 1 and Q is either 0 or 1 there are no points conjugate to
x = a.
(b) Show that if P = 1 and Q = −1 the point x = a + π is conjugate to x = a.

The following two theorems show why conjugate points are important. The first pro-
vides a necessary condition, and is an extension of Legendre’s condition. The second is
the more important because it provides a sufficient condition and a useful criterion for
determining the nature of a stationary path.
Theorem 8.3
Jacobi’s necessary condition. If the stationary path y(x) corresponds to a minimum
of the functional
$$S[y] = \int_a^b dx\, F(x, y, y'), \quad y(a) = A, \quad y(b) = B, \tag{8.14}$$
and if $P(x) = \partial^2 F/\partial y'^2 > 0$ along the path, then the open interval $a < x < b$ does not
contain points conjugate to a.
For a maximum of the functional, we simply replace the condition P (x) > 0 by
P (x) < 0.

If the interval [a, b] contains a conjugate point this theorem provides no information
about the nature of the stationary path, which is why the next theorem is important.
Theorem 8.4
A sufficient condition. If y(x) is an admissible function for the functional 8.14 and
satisfies the three conditions listed below, then the functional 8.14 has a weak minimum
along y(x).
S-1: The function $y(x)$ satisfies the Euler-Lagrange equation,
$$\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0.$$
S-2: Along the curve $y(x)$, $P(x) = F_{y'y'} > 0$ for $a \le x \le b$.
S-3: The closed interval $[a, b]$ contains no points conjugate to the point $x = a$.

Theorem 8.4 is the important result because it provides a relatively simple test for
whether a stationary path is an actual extremum. It means that to investigate the
nature of a stationary path we need only compute the solution of Jacobi’s equation
(either analytically or numerically) and either determine the value of the first conjugate
point or determine whether or not one exists in the relevant interval. This, together
with the sign of P (x) provides the necessary information to classify a stationary path.
In section 8.6 we apply this test to the brachistochrone problem and in section 8.7 we
consider the surface of revolution.
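As remarked above, in practice this test usually reduces to integrating Jacobi's equation numerically and looking for the first zero of u(x). The following Python sketch (our illustration, not part of the course text; the function names are ours) integrates Jacobi's equation as a first-order system with a classical Runge-Kutta scheme, for the constant coefficients P = 1, Q = −1 of exercise 8.11(b), where the first conjugate point should appear at x = a + π.

```python
# Locate the first conjugate point of Jacobi's equation (P u')' - Q u = 0,
# u(a) = 0, u'(a) = 1, for the constant coefficients P = 1, Q = -1 of
# exercise 8.11(b); the equation reduces to u'' + u = 0, so the first
# zero after x = a is at x = a + pi.
import math

def jacobi_rhs(x, u, v, P=1.0, Q=-1.0):
    # first-order system: u' = v, v' = Q u / P (P constant, so P' = 0)
    return v, Q * u / P

def first_conjugate_point(a, b, n=20000):
    # classical fourth-order Runge-Kutta, detecting the first sign change of u
    h = (b - a) / n
    x, u, v = a, 0.0, 1.0            # initial conditions u(a) = 0, u'(a) = 1
    for _ in range(n):
        k1u, k1v = jacobi_rhs(x, u, v)
        k2u, k2v = jacobi_rhs(x + h/2, u + h*k1u/2, v + h*k1v/2)
        k3u, k3v = jacobi_rhs(x + h/2, u + h*k2u/2, v + h*k2v/2)
        k4u, k4v = jacobi_rhs(x + h, u + h*k3u, v + h*k3v)
        u_new = u + h*(k1u + 2*k2u + 2*k3u + k4u)/6
        v_new = v + h*(k1v + 2*k2v + 2*k3v + k4v)/6
        if u > 0 and u_new <= 0:             # u has just crossed zero
            return x + h * u / (u - u_new)   # linear interpolation
        x, u, v = x + h, u_new, v_new
    return None                              # no conjugate point in (a, b]

xc = first_conjugate_point(0.0, 4.0)
print(xc)   # close to pi
```

With `jacobi_rhs` replaced by the coefficients of a particular problem, the same loop locates the first conjugate point, or reports that none exists in the interval of interest.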

Exercise 8.12
Show that the functional
S[y] = \frac{1}{2}\int_0^X dx\,\left( y'^2 - y^2 \right), \qquad y(0) = 0, \quad y(X) = 1, \quad X > 0,
has a weak minimum on the path y = sin x/ sin X provided 0 < X < π.

Exercise 8.13
Show that the stationary paths of the functional S = \int_a^b dx\,F(x, y') have no
conjugate points provided that along them F_{y'y'} ≠ 0.

Exercise 8.14
If y(x) is a stationary path of the functional 8.14 and the point x = b is a conjugate
point, so u(a) = u(b) = 0, show that ∆2 [y, u] = 0. Deduce that the functional
may not be an extremum on y(x).
Hint: multiply equation 8.13 by u and integrate over (a, b) by parts.

8.4.1 Analysis of the second variation


In the analysis of the second variation, ∆2 , the emphasis changes from y(x), the station-
ary path (assumed known), to the behaviour of ∆2 with h(x), the variation from y(x).
This analysis starts by converting the integrand of ∆2 into a non-negative function,
which is achieved by noting that for any differentiable function w(x)
\int_a^b dx\,\frac{d}{dx}\left( wh^2 \right) = 0

because h(a) = h(b) = 0. Hence we may re-write ∆2 in the form

∆2 = \int_a^b dx\,\left( P h'^2 + (Q + w')h^2 + 2whh' \right).

If P(x) > 0 for a ≤ x ≤ b the integrand may be re-arranged,

P h'^2 + (Q + w')h^2 + 2whh' = \left( h'\sqrt{P} + h\sqrt{Q + w'} \right)^2 + 2hh'\left( w - \sqrt{(Q + w')P} \right).

Hence if w(x) is defined by the nonlinear equation

w^2 = \left( Q(x) + \frac{dw}{dx} \right)P(x) (8.15)

it follows that

∆2 = \int_a^b dx\, P(x)\left( h' + \frac{w}{P}h \right)^2. (8.16)
Provided w(x) exists this shows that ∆2 ≥ 0 when P (x) > 0: if P (x) < 0 for all
x ∈ [a, b] a similar analysis shows that ∆2 ≤ 0.
Further, ∆2 = 0 only if h(x) satisfies the linear equation

\frac{dh}{dx} + \frac{w}{P}h = 0, \qquad a ≤ x ≤ b. (8.17)
But h(a) = 0, so a solution of 8.17 is h(x) = 0, and since solutions of linear equations
are unique (see theorem 2.1, page 61), this is the only solution of 8.17, which contradicts
the assumption that h(x) ≠ 0 for almost all x ∈ (a, b). Hence equation 8.17 is not true

and ∆2 ≠ 0, that is ∆2 > 0, provided w exists. The problem has therefore reduced to
showing that w(x) exists.
Equation 8.15 for w(x) is Riccati's equation, see section 2.3.5, and there is a standard
transformation, 2.25 (page 67), that converts this to a linear, second-order equation. For this
example the new dependent variable, u, is defined by,
w = -P\,\frac{u'}{u} (8.18)
which casts equation 8.15 into Jacobi's equation

\frac{d}{dx}\left( P\frac{du}{dx} \right) - Qu = 0, \qquad a ≤ x ≤ b. (8.19)
This is just the Euler-Lagrange equation for the functional ∆2 [y, u] with y(x) a sta-
tionary path of the original functional: the reason for this ‘coincidence’ is discussed in
section 8.5.

Exercise 8.15
Derive equation 8.19 from 8.15.
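The transformation behind exercise 8.15 is easy to spot-check numerically. In the sketch below (our illustration, not part of the course text) we take P = 1, Q = −1, so that u = sin x solves Jacobi's equation; feeding this through equation 8.18 gives w, which is then tested against the Riccati equation 8.15 at a few points where u ≠ 0, using a central difference for w'.

```python
# Numerical spot-check of the transformation (8.18): with P = 1, Q = -1,
# u = sin x solves Jacobi's equation u'' + u = 0, so w = -P u'/u should
# solve the Riccati equation (8.15): w^2 = (Q + w') P.
import math

P, Q = 1.0, -1.0
u  = math.sin          # solution of (P u')' - Q u = 0
du = math.cos          # its derivative

def w(x):
    return -P * du(x) / u(x)      # transformation (8.18)

eps = 1e-6
for x in (0.5, 1.0, 1.5, 2.0, 2.5):                 # points where u(x) != 0
    wprime = (w(x + eps) - w(x - eps)) / (2 * eps)  # central difference
    residual = w(x)**2 - (Q + wprime) * P           # Riccati residual
    assert abs(residual) < 1e-5, residual
print("Riccati equation (8.15) satisfied at all sample points")
```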

If the interval (a, b] contains no points conjugate to a, then Jacobi's equation, with the
initial conditions u(a) = 0, u'(a) = 1, has a solution which does not vanish in (a, b]; hence,
from equation 8.18, w(x) exists on the whole interval and ∆2 > 0.
A little care is needed in deriving this result because the definition of a conjugate
point involves a solution of Jacobi’s equation which is zero at x = a. However, if
the interval (a, b] contains no points conjugate to a then, since the solutions of this
equation depend continuously on the initial conditions, the interval [a, b] contains no
points conjugate to a − ε, for some sufficiently small, positive ε. Therefore the solution
that satisfies the initial conditions u(a − ε) = 0, u'(a − ε) = 1 does not vanish anywhere
in the interval [a, b]. Here, of course, it is implicit that P(x) ≠ 0 in [a, b].
Thus we have just proved the following theorem.
Theorem 8.5
If P (x) > 0 for a ≤ x ≤ b and if the interval [a, b] contains no points conjugate to a,
then ∆2 [y, h] > 0 for all h ∈ D1 (a, b) and for which h(a) = h(b) = 0.

This result indicates that conjugate points are important, but it does not describe the
behaviour of ∆2 if there is a point conjugate to a in (a, b]. It is necessary to prove
the converse result that if ∆2 > 0 then there are no points in (a, b] conjugate to a. It
then follows that ∆2 > 0 if and only if there are no conjugate points and hence, using
theorem 8.1, that the stationary path gives a local minimum value of the functional.
Thus we now need to prove the following theorem.
Theorem 8.6
If ∆2 [y, h] > 0, where P (x) > 0 for a ≤ x ≤ b, for all h ∈ D1 (a, b) such that
h(a) = h(b) = 0, then the interval (a, b] contains no points conjugate to a.

We give an outline of the proof of this theorem by assuming the existence of a con-
jugate point ã satisfying a < ã ≤ b and showing that this contradicts the assumption
∆2 [y, h] > 0. First consider the limiting case ã = b.

Note that if h(x) satisfies the equation

\frac{d}{dx}\left( P\frac{dh}{dx} \right) - Qh = 0, \qquad h(a) = h(b) = 0,

then ∆2[y, h] = 0, because of the identity

0 = \int_a^b dx\,\left[ \frac{d}{dx}\left( P\frac{dh}{dx} \right) - Qh \right]h = -\int_a^b dx\,\left( Ph'^2 + Qh^2 \right) = -∆2[y, h], (8.20)

obtained using integration by parts. This contradicts the assumption that ∆2 [y, h] > 0.
Theorem 8.6 is proved by constructing a family of positive definite functionals,
Jλ [h], depending upon a real parameter λ, such that for λ = 1 we obtain the quadratic
functional

J_1[h] = ∆2[y, h] = \int_a^b dx\,\left( P(x)h'(x)^2 + Q(x)h(x)^2 \right),

where y(x) is a stationary path. For λ = 0 the functional is chosen to give

J_0[h] = \int_a^b dx\, h'(x)^2,

for which there are no points conjugate to a, see exercise 8.11. It has to be shown that
as λ is increased continuously from 0 to 1 no conjugate points appear in the interval
[a, b].
Thus, consider the functional
J_λ[h] = \int_a^b dx\,\left\{ \left( P(x)h'(x)^2 + Q(x)h(x)^2 \right)λ + (1 - λ)h'(x)^2 \right\}, (8.21)

which is positive definite for 0 ≤ λ ≤ 1 since we are assuming J1 [h] = ∆2 [y, h] > 0. The
Euler-Lagrange equation for this functional is the linear, second-order equation

\frac{d}{dx}\left( \left[ λP + (1 - λ) \right]\frac{dh}{dx} \right) - λQh = 0. (8.22)
Suppose that h(x, λ) is a solution of 8.22 satisfying the initial conditions h(a, λ) = 0,
hx (a, λ) = 1 for all 0 ≤ λ ≤ 1. It can be shown that this solution is a continuous function
of the parameter λ,8 and for λ = 1 reduces to the solution of the Jacobi equation 8.13
for S[y]; for λ = 0 it reduces to the equation h00 (x) = 0, that is h(x) = x − a.
Now assume that a conjugate point ã, such that a < ã < b exists. By considering
the set of all points in the rectangle a ≤ x ≤ b, 0 ≤ λ ≤ 1, in the (x, λ)-plane, such that
h(x, λ) = 0, we can show that this assumption leads to a contradiction.
If such a set of points exists it represents a curve in the (x, λ)-plane. This follows
because where h(x, λ) = 0 we must have h_x(x, λ) ≠ 0, for if h(c, λ) = h_x(c, λ) = 0, for
some c, the uniqueness theorem for linear differential equations shows that h(x, λ) = 0
for all x ∈ [a, b], which is impossible since hx (a, λ) = 1 for 0 ≤ λ ≤ 1.
Thus, since h_x(x, λ) ≠ 0 the implicit function theorem allows the equation h(x, λ) = 0
to be inverted to define a continuous function x = x(λ) in the neighbourhood of each
point. The point (ã, 1) lies on this curve, as shown in figure 8.6.
8 In fact h(x, λ) has as many continuous derivatives as the functions P(x) and Q(x).
Figure 8.6 Figure showing possible lines along which h(x, λ) = 0.

This figure shows the five possible curves A · · · E, that can emanate from (ã, 1); we
now discuss each of these curves in turn, showing that each is impossible.
Curve A terminates inside the rectangle. This is impossible because it would contra-
dict the continuous dependence of the solution h(x, λ) on the parameter λ.
Curve B intersects the line x = b for 0 ≤ λ ≤ 1. This is impossible because if true
h(x, λ) would satisfy equation 8.22 with the boundary conditions h(a, λ) = h(b, λ) = 0,
giving h(x, λ) = 0 and hence Jλ [h] = 0, which contradicts the assumption that this
functional is positive definite.
Curve C intersects the line a ≤ x ≤ b, λ = 1. This is impossible for then we would
have dλ/dx = 0 at some point (c, λ), inside the rectangle, and since
\frac{∂h}{∂x} + \frac{∂h}{∂λ}\frac{dλ}{dx} = 0
this would mean h = 0 and hx = 0 at this point which, for the reasons discussed
above, means that h(x, λ) = 0 for all a ≤ x ≤ b.
Curve D intersects the x-axis between a ≤ x ≤ b when λ = 0. This is impossible
because when λ = 0 the solution of equation 8.22 reduces to h(x, 0) = x − a, which
is zero only at x = a.
Curve E intersects the line x = a for 0 ≤ λ ≤ 1. This implies that hx (a, λ) = 0 and
hence contradicts an initial assumption, as is seen by expanding the solution of the
differential equation 8.22 about x = a: using the initial conditions h(a, λ) = 0, the
Taylor expansion gives
h(x, λ) = (x - a)h_x(a, λ) + \frac{1}{2}(x - a)^2 h_{xx}(a, λ) + O\left( (x - a)^3 \right).

But the curve x(λ) is a solution of h(x, λ) = 0 and hence on this curve

h_x(a, λ) = -\frac{1}{2}(x - a)h_{xx}(a, λ) + O\left( (x - a)^2 \right).
As x → a the right-hand side becomes zero, so for the curve x(λ) to intersect the
line x = a we must have hx (a, λ) = 0, contradicting the initial assumption that
hx (a, λ) = 1.

Hence, we see that if Jλ [h] > 0 there is no conjugate point ã satisfying a < ã ≤ b, which
completes the proof.

Exercise 8.16
For the functional
S[y] = \frac{1}{2}\int_0^X dx\,\left( y'^2 - y^2 \right), \qquad y(0) = 0, \quad y(X) = 1, \quad X > 0,

show that equation 8.22 becomes h'' + λh = 0, with solution h(x, λ) = \sin(x\sqrt{λ})/\sqrt{λ}.
Show that h(x, 0) = x and hence that for λ = 0 there are no conjugate points;
show also that for λ > 0 there are infinitely many conjugate points at x_k = kπ/\sqrt{λ},
k = 1, 2, · · · .
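A small computation (ours, not part of the exercise) makes the homotopy argument concrete for this example: the first positive zero of h(x, λ) is x₁ = π/√λ, which moves in from infinity to π as λ increases from 0 to 1, so for any X < π it never enters the interval (0, X].

```python
# For the functional of exercise 8.16, h(x, lam) = sin(x sqrt(lam))/sqrt(lam),
# whose first positive zero is x1 = pi/sqrt(lam).  As lam increases from 0
# to 1 this zero decreases towards pi, so for X < pi it stays outside (0, X].
import math

X = 3.0                                  # any value with X < pi
for lam in (0.01, 0.1, 0.25, 0.5, 0.75, 1.0):
    x1 = math.pi / math.sqrt(lam)        # first conjugate point of J_lam
    assert x1 >= math.pi > X             # never enters the interval (0, X]
print("no conjugate point enters (0, X] for X < pi")
```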

8.5 The Variational Equation


Here we provide an alternative interpretation of Jacobi’s equation. Consider the Euler-
Lagrange equation

\frac{d}{dx}\left( \frac{∂F}{∂y'} \right) - \frac{∂F}{∂y} = 0, \qquad y(a) = A, \quad y(b) = B, (8.23)

and assume that a stationary path, y(x), exists. This Euler-Lagrange equation is
second-order so requires two conditions to uniquely specify a solution. We now consider
a family of solutions by ignoring the boundary condition at x = b and constructing a
set of solutions passing through the first point (a, A), restricting attention to those
solutions that are close to y(x).
One such family is obtained by defining z(x) = y(x) + εh(x), where |ε| is small and
h(x) = O(1). Since, by definition, z(a) = y(a) = A, we must have h(a) = 0. A sketch of
such a family is shown in figure 8.7.

Figure 8.7 Diagram showing a stationary path, y(x), the solid line, and some
members of the family of varied paths, z(x), passing through (a, A).

Taylor’s theorem gives

F(x, z, z') = F(x, y + εh, y' + εh') = F(x, y, y') + ε\left( hF_y + h'F_{y'} \right) + O(ε^2), (8.24)



where the derivatives, Fy and Fy0 , are evaluated on the stationary path, y(x). Since
z(x) satisfies the equation
 
d ∂F ∂F
− = 0, z(a) = A,
dx ∂z 0 ∂z

we can substitute the Taylor expansion 8.24 into this equation to obtain a differen-
tial equation for h(x). For this substitution we use the equivalent expansions of the
derivatives,

F_z(x, z, z') = F_y(x, y, y') + ε\left( hF_{yy} + h'F_{yy'} \right) + O(ε^2),

F_{z'}(x, z, z') = F_{y'}(x, y, y') + ε\left( hF_{y'y} + h'F_{y'y'} \right) + O(ε^2),

and since y(x) also satisfies the Euler-Lagrange equation, we obtain

\frac{d}{dx}\left( F_{yy'}h + F_{y'y'}h' \right) - F_{yy}h - F_{yy'}h' = O(ε).

On taking the limit ε → 0, this reduces to the variational equation

\frac{d}{dx}\left( P(x)\frac{dh}{dx} \right) - Q(x)h = 0, \qquad h(a) = 0, (8.25)

where
P(x) = \frac{∂^2F}{∂y'^2} \qquad and \qquad Q(x) = \frac{∂^2F}{∂y^2} - \frac{d}{dx}\left( \frac{∂^2F}{∂y∂y'} \right).

This equation for h(x) is linear and homogeneous, so all solutions are proportional to
the solution with the initial conditions h(a) = 0, h'(a) = 1, which is the solution of
Jacobi's equation.
If the end point x = b is a conjugate point, that is, the solution of equation 8.25
satisfies h(b) = 0, then, because of equation 8.20, ∆2[y, h] = 0 and the stationary
path is not necessarily an extremum. Also, the adjacent paths z = y + εh are, to
O(ε), solutions of the Euler-Lagrange equation, because z(a) = A and z(b) = B and
z(x) satisfies the Euler-Lagrange equation to O(ε). This has many important physical
consequences.
One example that helps visualise this property is provided by geodesics on a sphere,
that is, the shorter arcs of the great circles through two points. We choose the north pole
to coincide with one of the points, N. For any other point, M, on the sphere, excluding
the south pole, there is only one geodesic joining N to M, and the varied paths, that is,
the solutions of equation 8.25, do not pass through M. If, however, we place M at the
south pole, all great circles through N pass through M and in this case all the varied
paths (not just the neighbouring paths, as in the general theory) pass through M. This
behaviour is shown schematically in figure 8.8.

Figure 8.8 Diagram showing the shortest path between the north pole, N, and a
point M on a sphere, the solid line, and two neighbouring paths, the dashed lines.

In the older texts on dynamics conjugate points are often referred to as kinetic foci
because of an analogy with ray optics, and this analogy also affords another helpful
visualisation of the phenomenon. Consider the passage of light rays from a point source
through a convex lens, as shown in figure 8.9. The lens is centred at a point O on an
axis AB: a point source at S, on AB emits light which converges to the point S 0 , also
on the axis. If the distances OS and OS 0 are u and v respectively it can be shown that
1/u + 1/v = 1/f , where f is the magnitude of the focal length of the lens.

Figure 8.9 Diagram showing the passage of rays from a point source, S, through a
convex lens, L, to the focal point S'.

This picture is connected with the Euler-Lagrange and the Jacobi equation by observing
that the rays from S to S 0 satisfy Fermat’s principle, described in section 3.5.7. The
direct ray SS 0 is the shortest path and satisfies the associated Euler-Lagrange equation;
the adjacent rays denoted by the dashed lines are small variations about this path with
the variation satisfying Jacobi’s equations. The lens focuses rays from S at the focal, or
conjugate, point S'. If u is decreased beyond a certain point, S' no longer exists, giving
an illustration of how the existence of conjugate points depends upon the boundary
conditions.

Exercise 8.17
Derive equation 8.25.

Exercise 8.18
(a) The general solution of the Euler-Lagrange equation with a single initial condition at
x = a contains one arbitrary constant. We denote this constant by c and the
solution by y(x, c), so that y(a, c) = A for all c in some interval. Show that
∂y/∂c = yc (x, c) is a solution of Jacobi’s equation 8.25.
(b) For the functional defined in exercise 8.12 (page 221) find the general solution
satisfying the condition y(0) = 0 and confirm that this can be used to find the
solution of the variational equation 8.25.
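The result of exercise 8.18(a) can be illustrated numerically for the functional of exercise 8.12 (a sketch of ours, not a worked solution). There the Euler-Lagrange equation is y'' + y = 0, the solutions through (0, 0) are y(x, c) = c sin x, and differentiating with respect to c gives u = sin x, which should satisfy Jacobi's equation u'' + u = 0 (here P = 1 and Q = −1).

```python
# The derivative of the general solution y(x, c) = c sin x with respect
# to the constant c should solve Jacobi's equation u'' + u = 0.
import math

def y(x, c):
    return c * math.sin(x)       # general solution with y(0, c) = 0

dc = 1e-6
def u(x, c=1.0):
    # u = dy/dc, computed by a central difference in c
    return (y(x, c + dc) - y(x, c - dc)) / (2 * dc)

dx = 1e-2
for x in (0.3, 1.1, 2.4):
    # second derivative of u by central differences in x
    u2 = (u(x + dx) - 2*u(x) + u(x - dx)) / dx**2
    assert abs(u2 + u(x)) < 1e-3     # Jacobi residual u'' + u
print("dy/dc satisfies Jacobi's equation")
```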

Exercise 8.19
For short intervals both P (x) and Q(x) may be approximated by their values at
x = a. By solving equation 8.25 with this approximation show that for sufficiently
short intervals the stationary path is a minimum if P (a) > 0.

8.6 The Brachistochrone problem


This problem was considered in section 5.2 where it was shown that the relevant func-
tional can be written, apart from the external factor (2g)−1/2 , in the form
T[z] = \int_0^b dx\,\sqrt{\frac{1 + z'^2}{z}} \qquad where \qquad z(x) = A + \frac{v_0^2}{2g} - y(x). (8.26)
If the motion starts from rest at x = 0, y = A > 0 and terminates at x = b, y = 0, so
z(0) = 0 and z(b) = A, it is shown in section 5.2.3 that the stationary path is a segment
of a cycloid that can be written in the parametric form,
x = \frac{1}{2}c^2\left( 2φ - \sin 2φ \right), \qquad z = c^2\sin^2 φ, \qquad 0 ≤ φ ≤ π,
where the constant c(b, A) is defined uniquely by b and A, (section 5.2.3).
For this example the direct application of the general theory is left as an exercise
for the reader, see exercise 8.20.
An alternative method, in which the role of x and z are swapped, proves far easier:
provided z'(x) ≠ 0, it is shown in exercise 5.5 (page 150) that the functional may be
written in the form,
T[x] = \int_0^A dz\,\sqrt{\frac{1 + x'^2}{z}}, \qquad x' = \frac{dx}{dz}. (8.27)
Since z'(x) = 1/\tan φ, this representation is valid when 0 ≤ φ < π/2, that is, the value
of b must not be so great that the cycloid dips below the x-axis.
The integrand, F = \sqrt{1 + x'^2}/\sqrt{z}, is independent of x(z) and depends only upon
the independent variable, z, and x'(z), so Q = 0 and

P(z) = \frac{∂^2F}{∂x'^2} = \frac{1}{\sqrt{z}\,(1 + x'^2)^{3/2}} > 0.
Thus the second variation, equation 8.8, becomes
∆2 = \int_0^A dz\, P(z)h'(z)^2.

The singularity at z = 0 is integrable, see exercise 5.23 (page 169), and hence, for all
h(z), ∆2 > 0 and from theorems 8.4 and 8.6 we see that the cycloidal path is a local
minimum if φb < π/2. In exercise 8.20 it is shown that the cycloid is a local minimum
for all φb .
For this example we can, however, prove that T[x] is a global minimum. If x(z) is
the stationary path (and x'(z) ≠ 0) the value of the functional along another admissible
path x + h is
T[x + h] = \int_0^A dz\,\frac{\sqrt{1 + (x' + h')^2}}{\sqrt{z}}.
Using the result derived in exercise 3.4 (page 97), we see that
T[x + h] - T[x] ≥ \int_0^A dz\,\frac{x'h'}{\sqrt{z(1 + x'^2)}}.

But the Euler-Lagrange equation for the functional 8.27 gives x'/\sqrt{z(1 + x'^2)} = d, where
d is a constant. Hence
T[x + h] - T[x] ≥ d\int_0^A dz\, h' = 0.

Thus, provided x'(z) ≠ 0, that is φ_b < π/2 (page 151), the cycloid is a global minimum
of the functional.
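The inequality above can also be observed numerically. The following sketch (our own illustration; c = 1 and the variation h = z(A − z) are arbitrary choices, and the quadrature is a simple midpoint rule, which copes with the integrable 1/√z singularity) compares T on the cycloid with T on a varied path.

```python
# Compare T[x] on the cycloid (c = 1, so z = sin^2(phi), x'(z) = tan(phi)
# = sqrt(z/(1-z))) with T on a perturbed path x + h, h(0) = h(A) = 0.
import math

A = 0.8   # end value of z; phi_b < pi/2, so the representation (8.27) is valid

def T(slope, n=100000):
    # midpoint-rule quadrature of T = int_0^A dz sqrt((1 + x'^2)/z);
    # the midpoints avoid the integrable singularity at z = 0
    dz = A / n
    total = 0.0
    for k in range(n):
        z = (k + 0.5) * dz
        total += math.sqrt((1.0 + slope(z)**2) / z) * dz
    return total

def xp_cycloid(z):
    return math.sqrt(z / (1.0 - z))      # slope of the stationary path

def h_prime(z):
    return A - 2.0*z                     # derivative of the chosen h = z(A - z)

T0 = T(xp_cycloid)
T1 = T(lambda z: xp_cycloid(z) + h_prime(z))
print(T0, T1)
assert T1 > T0       # the cycloid gives the smaller travel time
```

For this path the exact value is T = 2 arcsin(√A), which the quadrature reproduces to a few parts in a thousand.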

Exercise 8.20
For the functional 8.26 and the stationary cycloidal path show that

P = \frac{∂^2F}{∂z'^2} = \frac{1}{\sqrt{z}\,(1 + z'^2)^{3/2}} = \frac{1}{c}\sin^2 φ,

and that

Q = \frac{3\sqrt{1 + z'^2}}{4z^{5/2}} + \frac{1}{2}\frac{d}{dx}\left( \frac{z'}{z^{3/2}\sqrt{1 + z'^2}} \right)
  = \frac{3\sqrt{1 + z'^2}}{4z^{5/2}} + \frac{1}{4c^2\sin^2 φ}\,\frac{d}{dφ}\left( \frac{z'}{z^{3/2}\sqrt{1 + z'^2}} \right) = \frac{1}{2c^5\sin^4 φ}.

Deduce that Jacobi's equation is

\sin^2 φ\,\frac{d^2u}{dφ^2} - 2u = 0.

Show that the general solution of this equation is

u = \frac{A}{\tan φ} + B\left( 1 - \frac{φ\cos φ}{\sin φ} \right)
where A and B are constants, and deduce that the stationary path is a local
minimum.
Note that one solution of Jacobi’s equation, A/ tan φ, is singular at φ = 0 because
the Jacobi equation is singular here; in turn, this is because the cycloid has an
infinite gradient at φ = 0.

Exercise 8.21
If s(x) is a stationary path of the functional
S[y] = \int_a^b dx\, f(x)\sqrt{1 + y'^2}, \qquad y(a) = A, \quad y(b) = B,

show that S[s + h] ≥ S[s] for all h(x) satisfying h(a) = h(b) = 0.

8.7 Surface of revolution


This problem is considered in section 5.3. In the special case in which both ends of
the curve have the same height, A = B, see figure 5.7 (page 154), it was shown that,
provided the ratio A/a exceeds about 1.51, that is, the ends are not too far apart, there
are two stationary paths given by the two real roots of the equation A/a = g(η), see
figure 5.9 (page 158). Here we show that the solution corresponding to the smaller root,
η = η1 , gives a local minimum.
In this example the functional integrand is, apart from an irrelevant multiplicative
constant, F = y\sqrt{1 + y'^2}, so

P(x) = \frac{∂^2F}{∂y'^2} = \frac{y}{(1 + y'^2)^{3/2}} \qquad and \qquad Q(x) = -\frac{d}{dx}\left( \frac{∂^2F}{∂y∂y'} \right) = -\frac{d}{dx}\left( \frac{y'}{\sqrt{1 + y'^2}} \right).

For the boundary conditions y(−a) = y(a) = A the stationary paths are given by

y = \frac{a}{η}\cosh\left( \frac{xη}{a} \right) \qquad where \qquad \frac{A}{a} = g(η) = \frac{1}{η}\cosh η,

and there are two real solutions if A/a > min(g) ≈ 1.5089. With these solutions we have

P(x) = \frac{a}{η\cosh^2(xη/a)} > 0 \qquad and \qquad Q(x) = -\frac{η}{a\cosh^2(xη/a)}. (8.28)
By defining z = xη/a Jacobi's equation becomes

\frac{d}{dz}\left( \frac{1}{\cosh^2 z}\frac{du}{dz} \right) + \frac{u}{\cosh^2 z} = 0, \qquad u(-η) = 0, \quad u'(-η) = 1, (8.29)

or

\frac{d^2u}{dz^2} - 2\tanh z\,\frac{du}{dz} + u = 0, \qquad u(-η) = 0, \quad u'(-η) = 1. (8.30)
An obvious solution is u = sinh z, so putting u = v(z) sinh z gives the following equation
for v(z),

\frac{v''}{v'} = -\frac{2}{\sinh z\,\cosh z}, \quad which integrates to \quad \ln v' = -2\ln|\tanh z| + constant.

Hence, for some constant C,

\frac{dv}{dz} = \frac{C}{\tanh^2 z}, \quad and integration gives \quad v(z) = C\,\frac{z\sinh z - \cosh z}{\sinh z} + D,

where D is another constant. Hence the general solution of Jacobi’s equation is



u(z) = C(z\sinh z - \cosh z) + D\sinh z, \qquad z = \frac{xη}{a}.
The constants C and D are obtained from the initial conditions u(−η) = 0 and
u'(-η) = 1, which give the two equations

0 = C(η\sinh η - \cosh η) - D\sinh η \qquad and \qquad 1 = -Cη\cosh η + D\cosh η.

These equations can be solved to give


C = -\frac{\sinh η}{\cosh^2 η} \qquad and \qquad D = \frac{\cosh η - η\sinh η}{\cosh^2 η}.
Hence the required solution to Jacobi’s equation is
u(z) = \frac{\cosh η - η\sinh η}{\cosh^2 η}\sinh z + \frac{\sinh η}{\cosh^2 η}\left( \cosh z - z\sinh z \right), \qquad z = \frac{xη}{a}. (8.31)
At z = η this solution has the value
u(η) = \frac{2\sinh η}{\cosh^2 η}\left( \cosh η - η\sinh η \right) = -2η^2 g'(η)\,\frac{\sinh η}{\cosh^2 η} \qquad where \qquad g(η) = \frac{1}{η}\cosh η.

Here η is a number defined by the solution of g(η) = A/a, and for A/a > min(g) there
are two solutions η1 < η2, lying either side of the minimum of g at η ≈ 1.1997. From the
graph of g(η), see figure 5.9 (page 158), we see that g'(η1) < 0 and g'(η2) > 0. Hence
u(η2) < 0, so this stationary path is not an extremum. But u(η1) > 0, so either
u(z) > 0 for −η1 < z ≤ η1 or u(z) vanishes somewhere in this interval, in which case
u'(z) = 0 must have at least two real roots there. But the equation

u'(z) = \frac{\cosh z}{\cosh^2 η}\left( \cosh η - η\sinh η - z\sinh η \right) = 0

has at most one real root for −η < z ≤ η, because cosh z ≠ 0 and the other factor
is linear in z. Hence for η = η1 there are no conjugate points in the interval (−η1, η1],
and the stationary path is a local minimum.
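This classification can be checked numerically; the sketch below (ours, with the ratio A/a = 2 chosen arbitrarily, and η* ≈ 1.1997 the approximate position of the minimum of g) finds the two roots η1 and η2 by bisection and evaluates the sign of u(η) for each.

```python
# For A/a = 2, find the two roots of g(eta) = cosh(eta)/eta = A/a and
# evaluate u(eta) = 2 sinh(eta)(cosh(eta) - eta sinh(eta))/cosh(eta)^2;
# the smaller root should give u(eta_1) > 0, the larger u(eta_2) < 0.
import math

def g(eta):
    return math.cosh(eta) / eta

def bisect(f, lo, hi, n=100):
    # simple bisection on a bracket with a sign change
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ratio = 2.0                  # A/a, above min(g) ~ 1.5089
eta_star = 1.1997            # approximate position of the minimum of g
eta1 = bisect(lambda e: g(e) - ratio, 0.1, eta_star)
eta2 = bisect(lambda e: g(e) - ratio, eta_star, 10.0)

def u_at_eta(eta):
    return 2*math.sinh(eta)*(math.cosh(eta) - eta*math.sinh(eta))/math.cosh(eta)**2

print(eta1, eta2)
assert u_at_eta(eta1) > 0 > u_at_eta(eta2)
```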

8.8 The connection between Jacobi’s equation and


quadratic forms
In section 8.2.3 we considered the n-dimensional quadratic form z^T Hz and provided
two conditions on the real, symmetric, n × n matrix H for the quadratic form to be
either positive or negative definite. Here we show that the test involving the descending
minors, D_k, k = 1, 2, · · · , n (page 213) becomes, in the limit n → ∞, the condition that
Jacobi’s equation should have no conjugate points. This analysis is complicated and is
not assessed: it is included because it is important to know that the finite dimensional
theory tends to the correct limit as n → ∞. The method described here is taken from
Gelfand and Fomin (1963).9
9 I. M. Gelfand and S. V. Fomin, Calculus of Variations (Prentice Hall).

The connection is made using Euler’s original simplification whereby the interval
[a, b] along the real axis is divided into n + 1 equal length intervals,
a = x0 < x1 < x2 < · · · < xk < xk+1 < · · · < xn < xn+1 = b,
with x_{k+1} - x_k = δ = \frac{b - a}{n + 1}, \quad k = 0, 1, · · · , n, described in section 4.2.
n+1
Consider the quadratic functional
∆2[y, h] = J[h] = \int_a^b dx\,\left( P(x)h'^2 + Q(x)h^2 \right), (8.32)

where y(x) is a stationary path and P (x) and Q(x) are defined in equation 8.9. Then,
as in equation 4.3 (page 122) we replace the integral by a sum, so the functional J[h]
becomes a function of h = (h1 , h2 , . . . , hn ) with hk = h(xk ),
J(h) = δ\sum_{k=0}^{n}\left[ P_k\left( \frac{h_{k+1} - h_k}{δ} \right)^2 + Q_k h_k^2 \right], (8.33)

where Pk = P (xk ), Qk = Q(xk ), for all k: also h0 = h(a) = 0 and hn+1 = h(b) = 0.
The function J(h) is quadratic in all the hk and it is helpful to collect the squares
and the cross terms together,
J(h) = \sum_{k=1}^{n}\left[ \left( Q_k δ + \frac{P_{k-1} + P_k}{δ} \right)h_k^2 - \frac{2}{δ}P_{k-1}h_{k-1}h_k \right]. (8.34)

This quadratic form may be written as J = h^T M h where M is the real, symmetric,
n × n matrix

M = \begin{pmatrix}
a_1 & b_1 & 0 & 0 & \cdots & 0 & 0 & 0 \\
b_1 & a_2 & b_2 & 0 & \cdots & 0 & 0 & 0 \\
0 & b_2 & a_3 & b_3 & \cdots & 0 & 0 & 0 \\
0 & 0 & b_3 & a_4 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & b_{n-2} & a_{n-1} & b_{n-1} \\
0 & 0 & 0 & 0 & \cdots & 0 & b_{n-1} & a_n
\end{pmatrix},

where

a_k = Q_k δ + \frac{P_{k-1} + P_k}{δ}, \qquad b_k = -\frac{P_k}{δ}, \qquad k = 1, 2, · · · , n.
This matrix is the equivalent of H(a) in equation 8.6 (page 213). The descending minors
of det(M) are the determinants

D_i = \begin{vmatrix}
a_1 & b_1 & 0 & \cdots & 0 & 0 \\
b_1 & a_2 & b_2 & \cdots & 0 & 0 \\
0 & b_2 & a_3 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & a_{i-1} & b_{i-1} \\
0 & 0 & 0 & \cdots & b_{i-1} & a_i
\end{vmatrix}, \qquad i = 1, 2, · · · , n,

with D1 = a1 and Dn = det(M ).


We need to show that the conditions Dk > 0, for all k, become, in the limit n → ∞,
the same as the condition that the solution of the Jacobi equation has no conjugate
points in (a, b]. This is achieved by first obtaining a recurrence relation for the descend-
ing principal minors, by expanding Dk with respect to the elements of the last row; this
gives
D_k = a_k D_{k-1} - b_{k-1}E_{k-1},

where E_{k-1} is the minor of b_{k-1} in D_k,

E_{k-1} = \begin{vmatrix}
a_1 & b_1 & 0 & \cdots & 0 & 0 \\
b_1 & a_2 & b_2 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & a_{k-2} & 0 \\
0 & 0 & 0 & \cdots & b_{k-2} & b_{k-1}
\end{vmatrix}.

Evaluating this determinant we obtain, Ek−1 = bk−1 Dk−2 and hence

D_k = a_k D_{k-1} - b_{k-1}^2 D_{k-2}, \qquad k = 1, 2, · · · , n,

where we define D−1 = 0 and D0 = 1 in order to ensure that the relation is valid for
k = 1 and 2: the first few terms are

D_1 = a_1, \qquad D_2 = a_1 a_2 - b_1^2 \qquad and \qquad D_3 = a_1 a_2 a_3 - a_3 b_1^2 - a_1 b_2^2,

which can be checked by direct calculation.
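The recurrence and the quoted minors can indeed be checked with a few lines of code (an illustration of ours, for an arbitrary choice of numerical entries):

```python
# Check the three-term recurrence D_k = a_k D_{k-1} - b_{k-1}^2 D_{k-2}
# (with D_{-1} = 0, D_0 = 1) against the explicit minors quoted in the text.
a = {1: 2.0, 2: -1.5, 3: 3.0}      # a_1, a_2, a_3 (arbitrary values)
b = {1: 0.7, 2: 1.2}               # b_1, b_2 (arbitrary values)

D = {-1: 0.0, 0: 1.0}
for k in (1, 2, 3):
    # b_0 never appears because it multiplies D_{-1} = 0
    bsq = b[k - 1]**2 if k >= 2 else 0.0
    D[k] = a[k]*D[k - 1] - bsq*D[k - 2]

assert D[1] == a[1]
assert D[2] == a[1]*a[2] - b[1]**2
assert abs(D[3] - (a[1]*a[2]*a[3] - a[3]*b[1]**2 - a[1]*b[2]**2)) < 1e-12
print("recurrence agrees with D_1, D_2, D_3")
```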


Now substitute for a_k and b_k to obtain

D_k = \left( Q_k δ + \frac{P_{k-1} + P_k}{δ} \right)D_{k-1} - \frac{P_{k-1}^2}{δ^2}D_{k-2}. (8.35)

From this we see that D1 = O(δ −1 ), D2 = O(δ −2 ) and hence that Dj = O(δ −j ). Since
we are interested in the limit δ → 0 it is necessary to re-scale the variables to remove
this dependence upon δ; thus we define new variables Xj , j = 0, 1, · · · , n + 1, by the
relation
D_j = \frac{X_{j+1}}{δ^{j+1}}, \qquad D_{-1} = X_0 = 0, \qquad D_0 = \frac{X_1}{δ} = 1.
Then equation 8.35 becomes

X_{k+1} = \left( Q_k δ^2 + P_{k-1} + P_k \right)X_k - P_{k-1}^2 X_{k-1}. (8.36)

This is still not in an appropriate form, so we define the variables Zk , by the relations

Xk = P1 P2 · · · Pk−2 Pk−1 Zk , Z0 = 0, Z1 = X1 = δ,

and equation 8.36 becomes, after cancelling the product (P1 P2 · · · Pk−1 ),

(Zk+1 − Zk ) Pk − (Zk − Zk−1 ) Pk−1 − Qk Zk δ 2 = 0. (8.37)



Assuming that Z_k can be derived from a function z(x), by Z_k = z(x_k), we have

(Z_{k+1} - Z_k)P_k - (Z_k - Z_{k-1})P_{k-1}
   = P(x)\left( z(x + δ) - z(x) \right) - P(x - δ)\left( z(x) - z(x - δ) \right)
   = δ^2\left( z''(x)P(x) + P'(x)z'(x) \right) + O(δ^3),

where, on the right-hand side, we have put x_k = x and x_{k±1} = x ± δ. Hence, in the
limit n → ∞ equation 8.37 becomes

\frac{d}{dx}\left( P\frac{dz}{dx} \right) - Q(x)z = 0,

which is Jacobi's equation. The conditions D_k > 0 become, since P(x) > 0 and δ > 0,
the conditions z(x) > 0. Finally, we observe that the initial conditions are Z_0 = z(a) = 0 and

z'(a) = \lim_{δ→0}\frac{Z_1 - Z_0}{δ} = 1.

Thus the condition D_k > 0, for all k, becomes, in the limit n → ∞, the condition that
there should be no conjugate points in the interval a < x ≤ b.
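The passage to this limit can be observed numerically; in the sketch below (ours, not part of the course text) the scaled recurrence 8.37 is iterated for the constant coefficients P = 1, Q = −1, where it reads Z_{k+1} = (2 − δ²)Z_k − Z_{k−1}, and compared with the corresponding Jacobi solution z = sin x (the solution with z(a) = 0, z'(a) = 1 on [a, b] = [0, 2]).

```python
# Iterate the recurrence (8.37) with P = 1, Q = -1 on [0, 2]:
#   Z_{k+1} = (2 - delta^2) Z_k - Z_{k-1},  Z_0 = 0, Z_1 = delta,
# and compare with the continuum Jacobi solution z(x) = sin x.
import math

a_, b_ = 0.0, 2.0
n = 2000
delta = (b_ - a_) / (n + 1)

Z = [0.0, delta]                     # Z_0 = 0, Z_1 = delta
for k in range(1, n + 1):
    Z.append((2.0 - delta**2) * Z[k] - Z[k - 1])

for k in (500, 1000, 2000):          # spot checks at a few grid points
    assert abs(Z[k] - math.sin(k * delta)) < 1e-3
print("discrete minors reproduce z = sin x; all positive for x < pi")
```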

8.9 Miscellaneous exercises


Exercise 8.22
Find expressions for the second variation, ∆2 , in terms of the stationary path y(x)
and its derivatives,
for the following functionals.

(a) S[y] = \int_0^1 dx\, y'^3, \qquad (b) S[y] = \int_0^1 dx\,\left( y'^2 - 1 \right)^2,

(c) S[y] = \int_0^1 dx\, y^2\left( y'^2 - 1 \right)^2, \qquad (d) S[y] = \exp\left( \int_0^1 dx\, F(x, y') \right).

Exercise 8.23
Find the second variation of the following functionals.

(a) S[y] = \int_a^b dx\, F(x, y), \qquad (b) S[y] = \int_a^b dx\, F(x, y, y', y'').
Note that in part (b) suitable boundary conditions need to be defined.

Exercise 8.24
Show that the second variation of the linear functional S[y] = \int_a^b dx\,\left( A(x) + B(x)y' \right)
is zero.

Exercise 8.25
(a) Show that the stationary path of the functional
S[y] = \int_0^1 dx\, y^2\left( y'^2 - 1 \right)^2, \qquad y(0) = 0, \quad y(1) = 1,

is y = x and that this yields a global minimum.


(b) If the boundary conditions are y(0) = 0, y(1) = A > 0, show that a stationary
path exists only if A ≥ 1.

Exercise 8.26
By examining the solution of Jacobi’s equation for the functional
S[f] = \int_0^a dx\,\left( f'(x)^2 - ω^2 f(x)^2 \right), \qquad f(0) = f(a) = 0,

where a and ω are positive constants, and using theorem 8.6, determine the range
of values of a and ω such that S[f] > 0. Deduce that

\int_0^a dx\, f'(x)^2 > \frac{k^2}{a^2}\int_0^a dx\, f(x)^2, \qquad k < π.

Compare this inequality with equation 8.11 (page 217).

Exercise 8.27
Show that y = 1 − sin 2x is a stationary path of the functional
S[y] = \int_0^{π/4} dx\,\left( 4y^2 - 8y - y'^2 \right), \qquad y(0) = 1, \quad y(π/4) = 0,

and use the result of exercise 8.26 to show that this is a global maximum.
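A quick numerical spot-check (ours, not a solution of the exercise) of the quoted stationary path: the Euler-Lagrange equation of this functional is y'' + 4y = 4, and y = 1 − sin 2x satisfies it, together with the boundary conditions.

```python
# Verify by finite differences that y = 1 - sin 2x satisfies y'' + 4y = 4
# and the boundary conditions y(0) = 1, y(pi/4) = 0.
import math

y = lambda x: 1.0 - math.sin(2*x)

dx = 1e-4
for x in (0.1, 0.4, 0.7):
    y2 = (y(x + dx) - 2*y(x) + y(x - dx)) / dx**2   # y'' by central differences
    assert abs(y2 + 4*y(x) - 4.0) < 1e-5
assert abs(y(0.0) - 1.0) < 1e-12 and abs(y(math.pi/4)) < 1e-12
print("y = 1 - sin 2x solves y'' + 4y = 4")
```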

Exercise 8.28
Prove that the stationary path of the functional S[y] = \int_0^a dx\, y'^b, y(0) = 0,
y(a) = A ≠ 0, where b is a real number, yields a weak but not a strong local
minimum.

Exercise 8.29
If y(x, α, β) is the general solution of an Euler-Lagrange equation, where α and β
are two constants, show that if the ratio R(x) = \frac{∂y/∂α}{∂y/∂β} has the same value at two
distinct points then these points are conjugate.

Exercise 8.30
(a) Find the stationary path of the functional

S[y] = \int_0^a dx\,\frac{y}{y'^2}, \qquad y(0) = 1, \quad y(a) = A^2,

where a > 0 and 0 < A < 1.

(b) Show that the Jacobi equation can be written in the form

(1 - bx)^2\frac{d^2u}{dx^2} + 2b(1 - bx)\frac{du}{dx} + 2b^2 u = 0 \qquad where \qquad b = \frac{1 - A}{a}.

(c) By substituting u = (1 - bx)^λ into Jacobi's equation find values of λ to obtain
a general solution, and hence the solution satisfying the conditions u(0) = 0 and
u'(0) = 1; deduce that the stationary path yields a minimum.
Chapter 9

The parametric representation of functionals

9.1 Introduction: parametric equations


The functionals considered so far have the form
S[y] = \int_a^b dx\, F(x, y, y'), \qquad y(a) = A, \quad y(b) = B, (9.1)

giving rise to the Euler-Lagrange equation

\frac{d}{dx}\left( \frac{∂F}{∂y'} \right) - \frac{∂F}{∂y} = 0, \qquad y(a) = A, \quad y(b) = B. (9.2)
With this type of functional the path in the Oxy-plane is described explicitly by the
function y(x). Such a formulation is restrictive because the admissible functions must
be single valued, so curves with the shapes shown in figure 9.1 are not allowed; in
particular vertical gradients are forbidden, except at the end points.
Figure 9.1 Examples of paths that cannot be described by a function y(x).

In addition, such a formulation creates an asymmetry between the x and y variables,


which is often not present in the original problem. In chapter 1 we encountered several
problems where these restrictions may create difficulties. For example the solution
to the simplest isoperimetric problem — described in section 3.5.5 and solved later
in chapter 12 — is a circular arc, which may involve curves such as that depicted in


figure 9.1. Another example is the navigation problem, described on page 110, where it
is entirely feasible that for some water velocities the required path may at some point
be parallel to the bank.
These anomalies may be removed by considering curves or arcs defined parametri-
cally; the coordinates of points on a path in a two dimensional space can be defined by
a pair of functions (x(t), y(t)), where x(t) and y(t) are piecewise continuous functions of
the parameter t, usually confined to a finite interval t1 ≤ t ≤ t2 . The curve is closed if
x(t1 ) = x(t2 ) and y(t1 ) = y(t2 ). In this context a common convention is the convenient
‘dot’ notation to represent derivatives, so ẋ = dx/dt and ẍ = d2 x/dt2 .
The simplest example of a parametrically defined curve is the straight line, x = at+b,
y = ct + d, where a, b, c and d are all constants. Another example is x(t) = cos t,
y(t) = sin t, giving x2 + y 2 = 1. If the parameter t is restricted to the range 0 ≤ t ≤ π
this is a semi-circle in the upper half plane, and as t increases the arc is traversed
anticlockwise: if −π ≤ t ≤ π it is a circle, that is a closed curve. Many other curves
are most easily described parametrically: the cycloid has already been discussed in
section 5.2.1 and several others are described in exercises 9.2 – 9.5 and 9.17 – 9.18. Other
examples are solutions of Newton’s equations which are often expressed parametrically
in terms of the time.
It is important to note that there is never a unique parameter for a given curve.
For instance, the pair of functions x(s) = \cos(s^2), y(s) = \sin(s^2), 0 ≤ s < \sqrt{2π}, also
defines the circle x2 + y 2 = 1. In geometric terms this merely states the obvious, that
the shape of the curve is independent of the particular choice of parameter: the value of
the parameter simply defines a point on the curve. This simple fact affects the nature
of functionals because functionals must be independent of the parametrisation used to
label the points on a curve.
If a curve is described by the functions x = f (t), y = g(t) and also explicitly by the
function y(x), then its gradient is

dy/dx = (dg/dt)(dt/df ) = ġ/ḟ .

One reason for using a parametric representation is that both coordinates are treated
equally. An example showing why this is sometimes helpful is the problem of determin-
ing the stationary values of the distance between the origin and points on the parabola

y 2 = a2 (1 − x), x ≤ 1, a > 0. (9.3)

This parabolic shape changes with a, as shown in figure 9.2. The parabola axis coincides
with the x-axis; all parabolas are symmetric about Ox and pass through (1, 0); they
intersect the y-axis at y = ±a.
A parametric representation of these curves is

x = 1 − τ 2, y = aτ, a > 0, −∞ < τ < ∞. (9.4)


Figure 9.2 Diagram showing the shape of the parabola 9.3, with a = 1/2, √2 and 3.

Now consider the distance, d(x), between a point on this curve and the origin. Intuition
suggests that for sufficiently small a the shortest distance will be to a point close to Oy
and that there will be a local maximum at (1, 0). But for large a the point (1, 0) will
become a local minimum.
Pythagoras' theorem gives d(x) = √(x² + y(x)²) and elementary calculus shows that
if a < √2, d(x) is stationary at x = a²/2, where it has a local minimum. But if a > √2,
d(x) has no stationary points.
With the parametric representation the distance is d(τ ) = √((τ² − 1)² + (aτ )²), and
this function is stationary at τ = 0, corresponding to the point (x, y) = (1, 0), for all
a, and at τ² = 1 − a²/2 for a < √2; the first of these stationary points was not given
by the previous representation. Thus the parametric formulation of this problem has
circumvented the difficulty caused by a stationary point being at the edge of the range,
x = 1.
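This comparison is easy to reproduce numerically. The following sketch, which is not part of the course text and uses ad hoc names, performs a crude grid search on the parametric distance d(τ) for a value of a on each side of √2:

```python
import math

def d_param(tau, a):
    # distance from the origin to (x, y) = (1 - tau**2, a*tau),
    # a point on the parabola y**2 = a**2 (1 - x)
    return math.hypot(tau**2 - 1.0, a * tau)

def argmin_on_grid(a, n=60001, lo=-1.5, hi=1.5):
    # crude grid search for the global minimum of d_param over [lo, hi]
    taus = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return min(taus, key=lambda t: d_param(t, a))

# a > sqrt(2): the only stationary point is tau = 0, i.e. (x, y) = (1, 0),
# the point that the d(x) representation misses because it sits at the edge x = 1
print(round(argmin_on_grid(2.0), 4))       # -> 0.0

# a < sqrt(2): the minima are at tau**2 = 1 - a**2/2
print(round(abs(argmin_on_grid(1.0)), 4))  # -> 0.7071
```

The first search returns τ = 0, the stationary point invisible to the non-parametric form; the second returns τ ≈ √(1 − a²/2), as predicted above.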
In chapter 10 we shall see that the two equivalent variational principles behave in
the same manner.

Exercise 9.1
(a) Show that the function d(x) = √(x² + y(x)²), where y(x)² = a²(1 − x) and
x ≤ 1, has a single minimum at x = a²/2 if a < √2. If a > √2 show that d(x) is
not stationary for any x ≤ 1 and has a global minimum at x = 1.
(b) Show that the function d(τ ) = √((τ² − 1)² + (aτ )²) is stationary at τ = 0, for
all a, and at τ = ±√(1 − a²/2) if a < √2; classify the stationary points.

9.1.1 Lengths and areas


Lengths and areas of curves defined parametrically are obtained using simple general-
isations of the formulae used when a curve is represented by a single function. The
length along a curve is given by a direct generalisation of equation 3.1 (page 93) and
the derivation of the relevant formula is left to the reader in exercise 9.2.
The area, however, is different because there is no base-line, normally taken to be
the x-axis. Consider the arc AB in the Oxy-plane, shown in figure 9.3.

Figure 9.3 Diagram showing the area traced out by the
arc defined parametrically by (x(t), y(t)), tA ≤ t ≤ tB .

If the arc is defined parametrically by (x(t), y(t)) with t = tA and tB at the points
A and B then it can be shown, by splitting the region into elementary triangles, see
exercise 9.20, that the area, S, of the region OAB is given by

S = (1/2) ∫_{tA}^{tB} dt ( x dy/dt − y dx/dt ),    tA < tB .    (9.5)

If the curve is closed it is traced out anticlockwise and S is the area enclosed. Elementary
geometry can be used to connect this area with the area represented by the usual integral
∫_{xA}^{xB} dx y(x), namely the area between the arc and the x-axis, see exercise 9.3.
If the arc AB in figure 9.3 is defined parametrically by (x1 (τ ), y1 (τ )) with τ increas-
ing from B to A, the sign of the above formula is reversed and the area is

S = (1/2) ∫_{τB}^{τA} dτ ( y1 dx1 /dτ − x1 dy1 /dτ ),    τB < τA .    (9.6)

If the curve is closed it is traced out clockwise.


The formula 9.5 is useful for finding the area of closed curves. Consider, for instance,
the Lemniscate of Bernoulli, having the Cartesian equation (x² + y²)² = a²(x² − y²)
and the shape shown in figure 9.4.

Figure 9.4 The Lemniscate of Bernoulli.

One parametric representation of this curve is


x = (a/√2) sin η √(1 + sin²η),    y = (a/(2√2)) sin 2η,    −π ≤ η ≤ π.

Restricting η to the interval [0, π] gives the right-hand loop and as η increases from 0
to π the curve is traversed clockwise. The area S of each loop can be computed using
equation 9.6: we have

y dx/dη − x dy/dη = a²(1 + 2 sin²η) cos η sin 2η / (4√(1 + sin²η)) − (a²/2) sin η cos 2η √(1 + sin²η)
                  = a² sin³η / √(1 + sin²η),
so the area of each loop is

S = (a²/2) ∫_0^π dη sin³η / √(1 + sin²η),    and putting c = cos η gives

S = (a²/2) ∫_{−1}^{1} dc (1 − c²)/√(2 − c²) = a² ∫_0^{π/4} dφ cos 2φ = a²/2,

where, for the penultimate integral, we have used the substitution c = √2 sin φ and
then used the trigonometric identity cos 2φ = 1 − 2 sin²φ.
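The value a²/2 can be confirmed directly from the clockwise area formula 9.6, without any change of variable. The sketch below is an illustrative trapezium-rule computation, with ad hoc names:

```python
import math

def lemniscate_point(eta, a=1.0):
    # the parametrisation of the lemniscate used above
    s = math.sin(eta)
    x = (a / math.sqrt(2.0)) * s * math.sqrt(1.0 + s * s)
    y = (a / (2.0 * math.sqrt(2.0))) * math.sin(2.0 * eta)
    return x, y

def loop_area(a=1.0, n=100000):
    # S = (1/2) * integral of (y dx/deta - x dy/deta), equation 9.6 for a
    # clockwise loop, approximated with the trapezium rule on eta in [0, pi]
    total, h = 0.0, math.pi / n
    x0, y0 = lemniscate_point(0.0, a)
    for k in range(1, n + 1):
        x1, y1 = lemniscate_point(k * h, a)
        total += 0.5 * ((y0 + y1) * (x1 - x0) - (x0 + x1) * (y1 - y0))
        x0, y0 = x1, y1
    return 0.5 * total

print(round(loop_area(), 6))   # -> 0.5, i.e. a**2/2 with a = 1
```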

Exercise 9.2
Show that the length of a curve defined parametrically by functions (x(t), y(t))
between t = ta and tb is s = ∫_{ta}^{tb} dt √(ẋ² + ẏ²).

Exercise 9.3
Consider the curve defined by the function y(x) and parametrically by the pair
(x(t), y(t)), ta ≤ t ≤ tb . Show that the area, S, under the curve between x = a,
x = b and the x-axis, is given by
S = ∫_a^b dx y(x) = ∫_{ta}^{tb} dt ẋy = (1/2) ∫_{ta}^{tb} dt (ẋy − xẏ) + (1/2)(bB − aA),

where (x(ta ), y(ta )) = (a, A) and (x(tb ), y(tb )) = (b, B).


Use figure 9.3 to provide a geometric interpretation of this formula.

Exercise 9.4
We expect the formula quoted in equation 9.5 to be independent of the choice
of parameter. If τ (θ) is an increasing function, τ 0 (θ) > 0, with τ (θa ) = ta and
τ (θb ) = tb , show that by putting t = τ (θ) and using θ as the independent variable
the area becomes
S = (1/2) ∫_{θa}^{θb} dθ [ x(θ)y′(θ) − x′(θ)y(θ) ],

so that the form of the expression for the area is unchanged, as would be expected.
Note that the formula for the area is invariant under parameter changes because
the integrand is homogeneous of degree one in the first derivatives of x and y.

Exercise 9.5
Four typical parametric curves, with their parametric representation are:
The ellipse: x = a cos θ, y = b sin θ;
The astroid: x = a cos3 θ, y = a sin3 θ;
The cardioid: x = a(2 cos θ − cos 2θ), y = a(2 sin θ − sin 2θ);
The cycloid: x = a(θ − sin θ), y = a(1 − cos θ),
where 0 ≤ θ ≤ 2π and a and b are positive numbers: in the first three cases the
angle θ is the polar angle and the curve is traced out anticlockwise. Examples of the
first three curves are shown in the following figure: a cycloid is shown in figure 5.1
(page 146).

[Figure: examples of an ellipse (with b/a = 1/4 and 3/4), an astroid and a cardioid.]

Find the area enclosed and the length of each of these curves.
Note that the arc length of an ellipse, the first example, is given in terms of the function
known as a complete Elliptic integral, which cannot be expressed as a finite com-
bination of standard functions. This integral, and its relations, are important in
many problems and have played an important role in the development of Analysis.
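The same parametric formulae give these areas and lengths numerically. The sketch below, which is illustrative and assumes the standard results 3πa²/8 and 6a for the astroid, applies equation 9.5 and the length integral of exercise 9.2 to a fine polygon approximation:

```python
import math

def astroid(theta, a=1.0):
    return a * math.cos(theta) ** 3, a * math.sin(theta) ** 3

def area_and_length(curve, n=100000):
    # area from equation 9.5 (shoelace form of (1/2)*integral(x dy - y dx))
    # and arc length from exercise 9.2, via sums over a fine parameter grid
    area = length = 0.0
    h = 2.0 * math.pi / n
    x0, y0 = curve(0.0)
    for k in range(1, n + 1):
        x1, y1 = curve(k * h)
        area += 0.5 * (x0 * y1 - x1 * y0)          # polygon (shoelace) term
        length += math.hypot(x1 - x0, y1 - y0)     # chord length
        x0, y0 = x1, y1
    return area, length

S, L = area_and_length(astroid)
print(round(S, 4), round(L, 4))   # -> 1.1781 6.0  (3*pi/8 and 6a with a = 1)
```

Replacing `astroid` by any of the other closed curves in exercise 9.5 gives the corresponding area and length.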

9.2 The parametric variational problem


In this section we re-write the functional 9.1 in terms of the parametrically defined
curve (x(t), y(t)). One might think that such an apparently minor change would make
no substantial difference. This, however, is not the case as will soon become apparent.
The gradient y 0 (x) of a curve is just the ratio ẏ/ẋ and hence the parametric func-
tional which is the equivalent of the functional 9.1 is
S[x, y] = ∫_{t1}^{t2} dt Φ(x, y, ẋ, ẏ),    x(t1 ) = a, y(t1 ) = A, x(t2 ) = b, y(t2 ) = B,    (9.7)

where
Φ(x, y, ẋ, ẏ) = ẋF (x, y, ẏ/ẋ)    (9.8)
(the function Φ used here is unrelated to the Φ used in section 7.3).
Now, both the functions x(t) and y(t) have to be determined, not just y(x). The two
Euler-Lagrange equations for the functional 9.7 are given by the general formulae of
equation 6.33 (page 185), that is

d/dt ( ∂Φ/∂ẋ ) − ∂Φ/∂x = 0    and    d/dt ( ∂Φ/∂ẏ ) − ∂Φ/∂y = 0,    (9.9)

with the boundary conditions defined in equation 9.7. These two equations replace the
original single equation 9.2 which, at first sight, seems rather strange since they define
the same curve. However, the change to the parametric form of the functional is more
subtle than is apparent; in particular, it will be shown that these two equations are not
independent, meaning that a solution of one is automatically a solution of the other.
It is clear that for any parametric functional,
S[x, y] = ∫_{t1}^{t2} dt G(x, y, ẋ, ẏ),    (9.10)

to be useful the value of the integral must be independent of the particular parameteri-
sation chosen. This means that the integrand, G(x, y, ẋ, ẏ), must not depend explicitly
upon the parameter, t. In addition, it was proved by Weierstrass that a necessary and
sufficient condition is that G(x, y, ẋ, ẏ) is a positive homogeneous function of degree one
in ẋ and ẏ, that is,

G(x, y, λẋ, λẏ) = λG(x, y, ẋ, ẏ) for any real λ > 0, (9.11)

provided ẋ and ẏ are in C1 . It is clear that the function Φ defined in equation 9.8
satisfies this condition.
We require positive homogeneity because the distance √(ẋ² + ẏ²) occurs frequently
and
√((λẋ)² + (λẏ)²) = |λ| √(ẋ² + ẏ²),
that is, this function is homogeneous only if λ > 0.
Changing the parameter t to θ, where t = τ (θ), and τ 0 (θ) > 0, transforms equa-
tion 9.10 into
S[x, y] = ∫_{θ1}^{θ2} dθ τ′(θ) G( x, y, x′/τ′, y′/τ′ ) = ∫_{θ1}^{θ2} dθ G(x, y, x′, y′).

Hence, in both the t- and θ-representation the Euler-Lagrange equations are identical,
and have the same solutions. That is, the variational principle is invariant with respect
to a parameter transformation. Note that one consequence of this is that the range of
integration plays no role and can always be scaled to [0, 1], or some other convenient
range.
The second consequence of homogeneity is that the two Euler-Lagrange equations 9.9
are not independent. This result follows from Euler’s formula, see exercise 1.25 (page 28),
which gives
G = ẋGẋ + ẏGẏ . (9.12)
Before proving this you should do the following two exercises involving systems where
the relation between the two Euler-Lagrange equations is clear. Following these exer-
cises we derive the general result.
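Euler's formula, and the homogeneity condition 9.11, are easy to verify numerically for a particular integrand. In the sketch below the choice F = y√(1 + y′²) and all names are illustrative; the partial derivatives are estimated by central differences:

```python
import math

def Phi(x, y, xdot, ydot):
    # Phi = xdot * F(x, y, ydot/xdot) with the sample choice F = y*sqrt(1 + y'**2),
    # which gives Phi = y*sqrt(xdot**2 + ydot**2)
    return y * math.hypot(xdot, ydot)

def partial(f, args, i, h=1e-6):
    # central finite-difference approximation to the partial derivative df/dargs[i]
    up, dn = list(args), list(args)
    up[i] += h
    dn[i] -= h
    return (f(*up) - f(*dn)) / (2.0 * h)

args = (0.3, 1.7, 0.8, -1.1)
x, y, xd, yd = args
lhs = Phi(*args)
rhs = xd * partial(Phi, args, 2) + yd * partial(Phi, args, 3)
print(abs(lhs - rhs) < 1e-6)                             # -> True (equation 9.12)
print(abs(Phi(x, y, 2 * xd, 2 * yd) - 2 * lhs) < 1e-12)  # -> True (equation 9.11)
```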

Exercise 9.6
Show that the functional S[y] = ∫_0^1 dx y′², when expressed in parametric form,
becomes S[x, y] = ∫_0^1 dt ẏ²/ẋ.
Show also that the Euler-Lagrange equations for x and y are, respectively,

d/dt (z²) = 0    and    dz/dt = 0    where    z = ẏ/ẋ.

Exercise 9.7
Show that the functional S[y] = ∫_0^1 dx F (y′), when expressed in parametric form,
becomes S[x, y] = ∫_0^1 dt ẋF (ẏ/ẋ).
Show also that the Euler-Lagrange equations for x and y are, respectively,

d/dt ( F (z) − zF′(z) ) = 0    and    d/dt ( F′(z) ) = 0    where    z = ẏ/ẋ,

and that these equations have the same general solution provided F′′(z) ≠ 0.

The results derived in these two exercises are particular examples of a general relation
between the two Euler-Lagrange equations. This relation is obtained by forming the
total derivative of Φ with respect to t and rearranging the result:

dΦ/dt = ẋΦx + ẏΦy + ẍΦẋ + ÿΦẏ
      = ẋΦx + ẏΦy + d/dt (ẋΦẋ + ẏΦẏ ) − ẋ dΦẋ /dt − ẏ dΦẏ /dt.
Now use Euler’s formula, that is Φ = ẋΦẋ + ẏΦẏ , to rearrange the latter expression in
the following manner,

( d/dt ( ∂Φ/∂ẋ ) − ∂Φ/∂x ) ẋ + ( d/dt ( ∂Φ/∂ẏ ) − ∂Φ/∂y ) ẏ = 0.    (9.13)
This is an identity derived by assuming only that Φ(x, y, ẋ, ẏ) is homogeneous of degree
one in ẋ and ẏ: at this stage we have not assumed that x(t) and y(t) are solutions of
the Euler-Lagrange equations. If, however, one of the Euler-Lagrange equations 9.9 is
satisfied, it follows from this identity that the other equation must also be satisfied.
The exceptional case, when either ẏ = 0 or ẋ = 0 for all t, is ignored. Thus, in general,
the two Euler-Lagrange equations are not independent.
Furthermore, because Φ is homogeneous of degree one in ẋ and ẏ, either of the Euler-
Lagrange equations 9.9 can be expressed as the second-order equation in y(x) derived
from the non-parametric form of the functional, as would be expected, see exercise 9.22.

Exercise 9.8
Using the fact that the distance between the origin and the point (a, A) of the
Oxy-plane can be expressed in terms of the parametric functional
S[x, y] = ∫_0^1 dt √(ẋ² + ẏ²),    x(0) = y(0) = 0, x(1) = a, y(1) = A,
show that the general solution of the Euler-Lagrange equations is αx + βy = γ,
where α, β and γ are constants, and hence that the required solution is ay = Ax.
What is the solution if a = 0 and can this particular solution be described by the
original functional of equation 3.1 (page 93)?

Exercise 9.9
Show that Noether’s theorem applied to the functional 9.7 gives the first-integral
Φ = ẋΦẋ + ẏΦẏ + c where c is a constant.
Recall that Euler’s formula gives this identity but with c = 0, so in this instance
no further information is gleaned from Noether’s theorem.

Exercise 9.10
Use equation 9.8 to show that

∂Φ/∂ẋ = F − y′ ∂F/∂y′    and    ∂Φ/∂ẏ = ∂F/∂y′.

9.2.1 Geodesics
In this section we assume that all curves and surfaces are defined by smooth functions.
A geodesic is a line on a curved surface joining two given points, along which the
distance2 is stationary3 . Here we assume that the surface is two-dimensional so the
position on it may be defined by two real variables, which we denote by (u, v). Thus
the Cartesian coordinates, (x, y, z), of every point on the surface can be described by
three functions x = f (u, v), y = g(u, v) and z = h(u, v), which are assumed to be at
least twice differentiable.
An example of such a surface is the sphere of radius r, centred at the origin: the
natural coordinates on this sphere are the spherical polar angles (θ, φ), where θ is related
to the latitude, as shown in the diagram, and φ is the longitude.

Figure 9.5 Diagram showing the physical meaning of the
spherical polar coordinates (r, θ, φ).

2 At this point of the discussion we use an intuitive notion of the distance; a formal definition is
given in equation 9.17.
3 In some texts the name geodesic is used only for the shortest path.

The Cartesian coordinates of the point P are

x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ, (9.14)

and it is conventional to limit θ to the closed interval [0, π], with the north and south
poles being represented by θ = 0 and θ = π respectively. Since the points with coordi-
nates (θ, φ) and (θ, φ + 2kπ), k = ±1, ±2, · · · , are physically identical it is sometimes
necessary to be careful about the range of φ. Note that both the north and south
poles are singular points of this coordinate system in the sense that at both points φ is
undefined.
Returning to the general case, the distance, δs, between the two points with coor-
dinates (x, y, z) and (x + δx, y + δy, z + δz) is, from Pythagoras’ theorem,

δs2 = δx2 + δy 2 + δz 2 . (9.15)

If these two points are on the surface, with coordinates (u, v) and (u + δu, v + δv),
respectively, then if δu and δv are small

x + δx = f (u + δu, v + δv)
= f (u, v) + fu (u, v)δu + fv (u, v)δv + higher order terms,

with similar expressions for y + δy and z + δz. Thus, to first-order,

δx = fu (u, v)δu+fv (u, v)δv, δy = gu (u, v)δu+gv (u, v)δv, δz = hu (u, v)δu+hv (u, v)δv.

Substituting these expressions into equation 9.15 and rearranging, gives the following
expression for the distance between neighbouring points on the surface

δs2 = E(u, v)δu2 + 2F (u, v)δuδv + G(u, v)δv 2 + higher order terms, (9.16)

where

E = fu2 + gu2 + h2u , G = fv2 + gv2 + h2v and F = fu fv + gu gv + hu hv .
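These coefficients can be checked numerically from their definitions. The sketch below, with ad hoc names, estimates the partial derivatives of the sphere parametrisation 9.14 by central differences and recovers E = r², F = 0 and G = r² sin²θ, the values used later in equation 9.20:

```python
import math

def sphere_point(theta, phi, r=1.0):
    # equation 9.14: Cartesian coordinates of a point on the sphere
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def first_fundamental_form(theta, phi, r=1.0, h=1e-6):
    # E, F, G built from central-difference estimates of the partial derivatives
    ru = [(p - m) / (2 * h) for p, m in
          zip(sphere_point(theta + h, phi, r), sphere_point(theta - h, phi, r))]
    rv = [(p - m) / (2 * h) for p, m in
          zip(sphere_point(theta, phi + h, r), sphere_point(theta, phi - h, r))]
    E = sum(c * c for c in ru)
    F = sum(c * d for c, d in zip(ru, rv))
    G = sum(c * c for c in rv)
    return E, F, G

E, F, G = first_fundamental_form(0.8, 1.3, r=2.0)
# agrees with E = r**2, F = 0 and G = r**2 sin(theta)**2
print(abs(E - 4.0) < 1e-6, abs(F) < 1e-6, abs(G - 4.0 * math.sin(0.8) ** 2) < 1e-6)
```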

Now consider a line on the surface defined by the two differentiable functions
(u(t), v(t)), depending on the parameter t. Then to first-order δu = u̇δt and δv = v̇δt
and the distance 9.16 becomes, to this order,
δs = δt √( E(u, v)u̇² + 2F (u, v)u̇v̇ + G(u, v)v̇² ).

Finally, the distance between the points r1 = (u(t1 ), v(t1 )) and r2 = (u(t2 ), v(t2 )) on
the surface and along the line parameterised by the functions (u(t), v(t)) is defined by
the functional obtained by integrating the above expression,
S[u, v] = ∫_{t1}^{t2} dt √( E(u, v)u̇² + 2F (u, v)u̇v̇ + G(u, v)v̇² ).    (9.17)

A geodesic is, by definition, given by those functions that make S[u, v] stationary. These,
therefore, satisfy the associated Euler-Lagrange equations for u(t) and v(t) which we
now derive.

For this functional Φ² = E u̇² + 2F u̇v̇ + Gv̇², so that

Φ ∂Φ/∂u̇ = E u̇ + F v̇,    Φ ∂Φ/∂v̇ = Gv̇ + F u̇.
Hence the Euler-Lagrange equations for u and v are, respectively,
d/dt ( (E u̇ + F v̇)/Φ ) − (u̇²Eu + 2u̇v̇Fu + v̇²Gu )/(2Φ) = 0,    (9.18)
d/dt ( (Gv̇ + F u̇)/Φ ) − (u̇²Ev + 2u̇v̇Fv + v̇²Gv )/(2Φ) = 0.    (9.19)
We illustrate this theory by finding stationary paths on a sphere, also treated in exer-
cise 5.20 (page 168). For the polar coordinates illustrated in figure 9.5 we have, from
equation 9.14, and remembering that r is a constant,
f (θ, φ) = r sin θ cos φ, g(θ, φ) = r sin θ sin φ, h(θ, φ) = r cos θ,
which gives,
E = r2 , G = r2 sin2 θ and F = 0, (9.20)
so that Φ = r√(θ̇² + φ̇² sin²θ) and the functional for the distance becomes

S[θ, φ] = r ∫_0^1 dt √(θ̇² + φ̇² sin²θ).    (9.21)
For any two points there is some freedom of choice in the coordinate system and it is
convenient to choose the north pole to coincide with one end of the path, by setting
θ(0) = 0. If the final point is (θ(1), φ(1)) = (θ1 , φ1 ) the Euler-Lagrange equation for φ
is

d/dt ( φ̇ sin²θ/Φ ) = 0,    which gives    φ̇ sin²θ = A √(θ̇² + φ̇² sin²θ),    (9.22)
for some constant A. But at t = 0, θ = 0 and hence A = 0, so φ̇ = 0 for all t. Therefore
φ(t) = φ1 . As expected the stationary path is the line of constant longitude, that is
it lies on the great circle through the two points. Notice that unless the second point
is at the south pole this gives two stationary paths, the short and the long route. If
the second point is at the south pole there are infinitely many paths, all with the same
length.
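This conclusion can be illustrated by evaluating the length functional 9.21 on the meridian and on a perturbed path with the same end points. The sketch below, with ad hoc names, sums small increments using a midpoint value of sin θ:

```python
import math

def sphere_path_length(theta, phi, r=1.0, n=20000):
    # approximates S = r * integral sqrt(thetadot**2 + phidot**2 sin(theta)**2) dt,
    # equation 9.21, by summing increments along (theta(t), phi(t)), 0 <= t <= 1
    total = 0.0
    for k in range(n):
        t0, t1 = k / n, (k + 1) / n
        dth = theta(t1) - theta(t0)
        dph = phi(t1) - phi(t0)
        s = math.sin(0.5 * (theta(t0) + theta(t1)))   # midpoint value of sin(theta)
        total += math.hypot(dth, s * dph)
    return r * total

theta1, phi1 = 2.0, 1.0   # end point; the start is the north pole, theta = 0
meridian = sphere_path_length(lambda t: theta1 * t, lambda t: phi1)
wiggly = sphere_path_length(lambda t: theta1 * t,
                            lambda t: phi1 + 0.3 * math.sin(math.pi * t))
print(round(meridian, 6), meridian < wiggly)   # -> 2.0 True (length r*theta1)
```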
Exercise 9.11
Consider a cylinder with axis along Oz, with circular cross section of radius r.
The position of a point r on this cylinder can be described using cylindrical polar
coordinates (φ, z), r being fixed, where r = (r cos φ, r sin φ, z).
(a) Show that E = r 2 , G = 1 and F = 0.
(b) Show that the Euler-Lagrange equations for the geodesic between the points
(φ1 , z1 ) and (φ2 , z2 ) are
d/dt ( r²φ̇/√(r²φ̇² + ż²) ) = 0,    d/dt ( ż/√(r²φ̇² + ż²) ) = 0,    φ(tk ) = φk , z(tk ) = zk , k = 1, 2.

Hence show that the stationary path is z = z1 + (z2 − z1 )(φ − φ1 )/(φ2 − φ1 ).
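The result of exercise 9.11 reflects the fact that unrolling the cylinder onto a plane maps geodesics to straight lines; a short numerical check (illustrative names, using E = r², G = 1, F = 0) is:

```python
import math

def cylinder_path_length(phi, z, r=1.0, n=20000):
    # s = integral sqrt(r**2 phidot**2 + zdot**2) dt with E = r**2, G = 1, F = 0,
    # approximated by chord increments of the unrolled coordinates (r*phi, z)
    total = 0.0
    for k in range(n):
        t0, t1 = k / n, (k + 1) / n
        total += math.hypot(r * (phi(t1) - phi(t0)), z(t1) - z(t0))
    return total

phi2, z2 = 1.5, 2.0   # end point; the start is (phi, z) = (0, 0)
helix = cylinder_path_length(lambda t: phi2 * t, lambda t: z2 * t)
# unrolled, the stationary path is a straight line of length sqrt((r*dphi)**2 + dz**2)
print(abs(helix - math.hypot(phi2, z2)) < 1e-9)   # -> True
```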

9.2.2 The Brachistochrone problem


The functional for this problem is given in equation 5.6 (page 150) and in parametric
form this becomes (on omitting the factor (2g)−1/2 )
T [x, z] = ∫_0^1 dt √( (ẋ² + ż²)/z ),    x(0) = 0, x(1) = b, z(0) = A, z(1) = 0,    (9.23)
and the Euler-Lagrange equations for x and z are, respectively,

d/dt ( ẋ/√(z(ẋ² + ż²)) ) = 0    and    d/dt ( ż/√(z(ẋ² + ż²)) ) + √(ẋ² + ż²)/(2z^(3/2)) = 0.    (9.24)

The first of these equations may be integrated and rearranged to give ẋ²(c² − z) = zż²,
for some constant c, which is just equation 5.7 (page 151). One more integration gives

x = d + (c²/2)(2φ − sin 2φ),    z = c² sin²φ,
for some constant d.
For this problem there is no apparent advantage in using a parametric formula-
tion: in chapter 12, however, we shall see that it enables more general brachistochrone
problems to be tackled.
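It is straightforward to confirm that this parametric solution satisfies the first integral ẋ²(c² − z) = zż². In the sketch below the derivatives are taken with respect to φ, which suffices because both sides of the identity scale in the same way under reparametrisation:

```python
import math

# the solution x = d + (c**2/2)(2*phi - sin(2*phi)), z = c**2 sin(phi)**2 should
# satisfy xdot**2 (c**2 - z) = z zdot**2; dots here denote d/dphi
c = 1.3
for phi in (0.3, 0.7, 1.1):
    xdot = (c**2 / 2.0) * (2.0 - 2.0 * math.cos(2.0 * phi))   # dx/dphi
    zdot = c**2 * math.sin(2.0 * phi)                         # dz/dphi
    z = c**2 * math.sin(phi) ** 2
    assert abs(xdot**2 * (c**2 - z) - z * zdot**2) < 1e-9
print("first integral verified")
```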

9.2.3 Surface of Minimum Revolution


The functional for this problem is given in equation 5.11 (page 155); for the symmetric
version the parametric form becomes
S[x, y] = ∫_{−1}^{1} dt y√(ẋ² + ẏ²),    x(±1) = ±a, y(−1) = y(1) = A,    (9.25)

and the Euler-Lagrange equations for x and y are, respectively,


d/dt ( yẋ/√(ẋ² + ẏ²) ) = 0    and    d/dt ( yẏ/√(ẋ² + ẏ²) ) − √(ẋ² + ẏ²) = 0.    (9.26)

The first of these gives

yẋ/√(ẋ² + ẏ²) = c    or    ẋ² = c²ẏ²/(y² − c²),    (9.27)

which is the same as equation 5.14 (page 157), and for c > 0 gives the solution obtained
in that section. Another solution, however, is obtained by setting c = 0, so that y ẋ = 0,
which gives either y = 0 or ẋ = 0; the latter is the Goldschmidt solution, equation 5.20
(page 160), which cannot be found using the original formulation of this problem. Thus
enlarging the space of admissible functions allows more solutions to be found.

Exercise 9.12
Show directly that the two equations 9.26 satisfy the identity 9.13.

Exercise 9.13
By substituting y = c cosh(αt + β), where α and β are constants, into the first of
equations 9.27 show that the solution satisfying the boundary conditions is
x = at,    y = c cosh(at/c)    with    A/c = cosh(a/c).
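The boundary condition A/c = cosh(a/c) is transcendental and, when a solution exists, generally has two roots. The bisection sketch below is illustrative; the bracket [1, 2] is chosen by inspection for the sample values a = 1, A = 2 and isolates the larger root:

```python
import math

def solve_c(a, A, lo, hi, n=100):
    # bisection for a root of f(c) = c*cosh(a/c) - A; the bracket [lo, hi]
    # must be chosen so that f changes sign on it
    f = lambda c: c * math.cosh(a / c) - A
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a, A = 1.0, 2.0
c = solve_c(a, A, 1.0, 2.0)   # the larger of the two roots for these values
print(abs(A / c - math.cosh(a / c)) < 1e-9)   # -> True: boundary condition holds
```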

9.3 The parametric and the conventional formulation compared
We end this chapter with the important observation that the original variational princi-
ple, equation 9.1, and the associated parametric problem, equation 9.7, are not precisely
equivalent: a path that is an extremum of the first problem is not necessarily an
extremum of the second, because the class of admissible functions has been enlarged.
This feature is illustrated in the following example.
Consider the functional
S[y] = ∫_0^1 dx y′²,    y(0) = 0, y(1) = 1.    (9.28)

If the admissible paths are in C 1 [0, 1] the stationary path is y = x; this is a global
minimum and on this path S = 1, see exercise 3.4 (page 97). Also, if we widen the class
of admissible functions to piecewise continuously differentiable functions, the stationary
path is the same straight line.
The associated parametric problem is
S[x, y] = ∫_0^1 dt ẏ²/ẋ,    x(0) = y(0) = 0, x(1) = y(1) = 1,    (9.29)
and now we let the admissible functions be piecewise differentiable. In this case the
stationary path is the straight line, see exercise 9.14, but this no longer yields a minimum
of the functional. The reason for this is simply that in the original functional 9.28 the
integrand is never negative, so S[y] ≥ 0: in the parametric functional the integrand can
be negative, which means, in this case, that the functional is unbounded below.
Consider the path OBA, passing through the point B = (α, β) and where A = (1, 1),
shown in figure 9.6.

y
(1,1) A
1

β
B
(α,β)

x
O 1 α
Figure 9.6

This path comprises two straight line segments OB and BA, defined, respectively, by
the parametric equations

x = αt,  y = βt,  0 ≤ t ≤ 1,    and    x = (1 − α)(t − 1) + α,  y = (1 − β)(t − 1) + β,  1 ≤ t ≤ 2.

Such a curve can be described by a single valued function y(x) only if 0 < α < 1. On
this path, provided α is neither 0 nor 1,

S(α, β) = β²/α + (1 − β)²/(1 − α),

and hence −S can be made arbitrarily large by setting α = 1 + δ², for arbitrarily
small δ: hence the functional has no minimum. If α is restricted to the interval (0, 1)
then S(α, β) is always positive and has a local minimum at α = β, giving the straight
line y = x.
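These claims can be checked by direct evaluation of S(α, β); the short sketch below is illustrative:

```python
def S(alpha, beta):
    # value of the functional 9.29 on the path O -> (alpha, beta) -> (1, 1)
    return beta**2 / alpha + (1.0 - beta) ** 2 / (1.0 - alpha)

# a corner on the line y = x leaves S at its stationary value 1
print(round(S(0.3, 0.3), 10), round(S(0.7, 0.7), 10))   # -> 1.0 1.0

# with 0 < alpha < 1 any corner off the line increases S ...
print(S(0.5, 0.6) > 1.0)                                # -> True

# ... but alpha = 1 + delta**2 makes S large and negative
print(S(1.0 + 1e-4, 0.5) < -1000.0)                     # -> True
```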

Exercise 9.14
Show that the general solution of the Euler-Lagrange equations for the func-
tional 9.29 is the line y = mx + c for some constants m and c, and use the
boundary conditions to determine their values.

Exercise 9.15
Consider the zig-zag path comprising the sequence of segments like AB and BC
shown in the figure. On AB, ẏ = 0, and on BC, ẏ = 1, ẋ = −1.
Show that on this path S = −1. Show also that such a path may be made
arbitrarily close to the straight line y = x.

9.4 Miscellaneous exercises


Exercise 9.16
Show that the curve defined by the parametric equations x = at², y = 2at, where
a is a positive constant, is the parabola y² = 4ax, and that the length of the curve
between t = −1 and t = 1 is 2a[√2 + ln(1 + √2)].

Exercise 9.17
An epicycloid is a curve traced out by a point on a circle which rolls, without
slipping, on the outside of a fixed circle. If the radii of the fixed and rolling circles
are R and r respectively, the parametric equations of the epicycloid are
x = (R + r) cos θ − r cos( (R + r)θ/r ),    y = (R + r) sin θ − r sin( (R + r)θ/r ),

with 0 ≤ θ < 2π and the curve is traced out anticlockwise as θ increases. It is
a worthwhile exercise to derive these equations. If R/r = n is a positive integer
these equations define a closed curve. If n = 1 we obtain a cardioid, defined in
exercise 9.5: other examples are shown in the following figures.

[Figure: epicycloids with n = 4 and n = 5.]

Show that the area enclosed by such a closed curve is S = π(R + r)(R + 2r) and
that the length of the curve is 8(R + r). In the limit r ≪ R explain how this result
is related to the length of a cycloid.

Exercise 9.18
A trochoid is formed in a manner similar to the cycloid, section 5.2.1, except that
the point tracing out the curve is fixed on a radius at a distance ka from the centre
of the rolling circle. The parametric equations of this curve are

x = a(θ − k sin θ), y = a(1 − k cos θ),

which reduce to the equations of the cycloid when k = 1.
When k < 1 the trochoid is a smooth arc above the x-axis, as shown on the left-
hand side of figure 9.7: when k > 1 there are loops below the line y = a(1 − k cos β),
where β is the root of θ = k sin θ in the interval (0, π), as shown on the right of
the figure.

Figure 9.7 Graphs showing two typical trochoids, with k = 0.6 and k = 1.8.

(a) If k < 1 show that the area under the trochoid between θ = 0 and 2π is
πa2 (2 + k2 ).
(b) Show that the area enclosed by the loop surrounding the origin for k > 1 is
given by
S = a2 β k2 − 2 + k cos β
` ´
where β = k sin β, 0 < β < π.

(c) (Harder) Show that the two loops shown in the right-hand figure touch when
k satisfies the equation π = √(k² − 1) − cos⁻¹ (1/k) (the only real solution of this
equation is k = 4.6033 . . .).

Exercise 9.19
The coordinates of a point on a curve are given by
x = a(u − tanh u),    y = a/cosh u,
where a is a positive constant and u the parameter. Show that the arc length
from u = 0 to u = v is L = a ln(cosh v).

Exercise 9.20
If the points (xk , yk ), k = 1, 2, 3, in the Cartesian plane define the vertices of the
triangle ABC, respectively, then it may be shown that the area of the triangle is
|∆| where
∆ = (1/2) x1 (y2 − y3 ) + (1/2) x2 (y3 − y1 ) + (1/2) x3 (y1 − y2 )
  = −(1/2) y1 (x2 − x3 ) − (1/2) y2 (x3 − x1 ) − (1/2) y3 (x1 − x2 ),
and that ∆ is positive if moving round the vertices A, B and C, in this order,
represents an anti-clockwise rotation. By putting C = (x3 , y3 ) at the origin and

(x1 , y1 ) = (x(t1 ), y(t1 )) and (x2 , y2 ) = (x(t2 ), y(t2 )), where t2 ≥ t1 ,

use the above formula to show that the quantity

S = (1/2) ∫_{t1}^{t2} dt (xẏ − ẋy)

represents the area of the region OP Q shown in the figure, where P = (x1 , y1 ),
Q = (x2 , y2 ) and the curve P Q is an arc of the curve defined parametrically by
(x(t), y(t)) for t1 ≤ t ≤ t2 .

Exercise 9.21
Using the fact that Φ(x, y, ẋ, ẏ) is homogeneous of degree 1 in ẋ and ẏ show that

Φ(x, y, ẋ, ẏ) = ẋΦ(x, y, 1, y 0 (x)) and Φẋ (x, y, ẋ, ẏ) = Φẋ (x, y, 1, y 0 (x)).

Exercise 9.22
Use the two parametric Euler-Lagrange equations defined in equation 9.9 to derive
the Euler-Lagrange equation for y(x), defined in equation 9.2.
Hint: use the Euler-Lagrange equation for x and use x for the parameter t.

Exercise 9.23
Consider the functional
S[x, y] = ∫_0^1 dt √(ẋ² + ẏ² + ẋẏ),    x(0) = y(0) = 0, x(1) = X, y(1) = Y.

(a) Show that the Euler-Lagrange equation for x can be integrated to give the
general solution
y′ + 2 = A √(1 + y′ + y′²)
for some constant A.
(b) Hence show that the stationary path is Xy = Y x.
(c) Explain why the Euler-Lagrange equation for y is not needed to find this
solution.

Exercise 9.24
Consider the functional
S[x, y] = ∫_0^1 dt (ẋ² + ẏ² + ẋẏ),    x(0) = y(0) = 0, x(1) = X, y(1) = Y.

(a) Show that the two Euler-Lagrange equations can be integrated to give the general
solution
2x + y = At + B, x + 2y = Ct + D
where A, B, C and D are constants.
(b) Hence show that the stationary path is x = Xt, y = Y t.
(c) Explain why both Euler-Lagrange equations are needed to find this solution.

Exercise 9.25
Show that the stationary paths of the functional
S[x, y] = ∫_0^1 dt [ (1/2)(xẏ − yẋ) − λ√(ẋ² + ẏ²) ]

are the circles (x − A1 )2 + (y − A2 )2 = λ2 , where A1 and A2 are constants.


Chapter 10

Variable end points

10.1 Introduction
The functionals considered previously all involve fixed end points, that is the inde-
pendent variable is defined on a given interval at the ends of which the value of the
dependent variable is known. It is not hard to find variational problems with different
types of boundary conditions: in this introduction we describe a few of these problems
in order to motivate the analysis described here and in chapter 12.
The simplest generalisation is to natural boundary conditions in which the interval
of integration is given, but the value of the path at either one or both ends is not
given but needs to be determined as part of the variational principle. An example is a
stationary, loaded, stiff beam, which adopts a configuration that minimises its energy.
If the unloaded beam is horizontal along the x-axis, between x = 0 and L, and y(x)
represents the displacement, assumed small, the bending energy per unit length is pro-
portional to the square of its curvature, which for small |y′| is proportional to y′′(x)²;
then if ρ(x) is the density per
unit length (of the beam and the load) the energy functional can be shown to be
E[y] = ∫_0^L dx [ (1/2)κ y′′² − gρ(x)y ],    (10.1)

where κ is a positive constant and g the acceleration due to gravity. Note that here
y(x) is positive for displacements below the x-axis. The Euler-Lagrange equation for
this functional is a linear, fourth order equation, see section 10.2.1, so requires four
boundary conditions.
If the beam is clamped horizontally at x = 0, there are just two boundary conditions,
y(0) = y 0 (0) = 0, though experience shows that this problem has a unique solution.
It transpires that the other two conditions, needed to determine this solution of the
Euler-Lagrange equation, can be derived directly from the variational principle that
requires E[y] to be stationary.
Alternatively, if the beam is simply supported at both ends, giving the boundary
conditions y(0) = y(L) = 0, it can be shown that the remaining two boundary conditions
are also obtained by insisting that E[y] is stationary. We explore this problem in
section 10.2.1.


The first person to generalise boundary conditions was Newton in his investigations
of the motion of an axially symmetric body through a resisting medium, see equa-
tion 3.22 (page 110).
The brachistochrone problem was generalised by John Bernoulli in 1697, by allowing
the lower end of the stationary path to move on a given curve, defined by an equation
of the form τ (x, y) = 0. In figure 10.1 we show an example where the right end of the
brachistochrone lies on the straight line defined by τ (x, y) = x + y − 1 = 0, and the
left end is fixed at (0, A), with A < 1. In figure 10.1 are shown the brachistochrones
for various values of A when the particle starts at rest at (0, A). The equation for the
stationary paths is derived in exercise 10.13. Notice that the cycloid intersects the curve
τ (x, y) = 0 at right angles and at x = 0 the gradient of the cycloid is infinite.


Figure 10.1 Diagram showing stationary paths through the point (0, A), for
A = 0.2, 0.5 and 0.9, and (v, y(v)) where the right end is constrained to lie on the
straight line τ (x, y) = x + y − 1 = 0, and the particle starts from rest at (0, A).

In this case the functional is, see equation 5.5 (page 150),
        T[y] = ∫_0^v dx √[ (1 + y'²)/(2E/m − 2gy) ],   y(0) = A,   τ(v, y(v)) = 0,        (10.2)

where A is known, but v and y(x), 0 ≤ x ≤ v, need to be determined. The actual


stationary path is clearly a cycloid, but which, of the infinitely many cycloids through
these points, needs to be determined by an additional equation for v. In Bernoulli’s
original formulation the curve τ (x, y) = 0 was a vertical line through a given point,
that is the value of v is fixed, but y(v) is unknown; in this case τ (x, y) = x − v.

Exercise 10.1
Explain why the stationary curves depicted in figure 10.1 are cycloids.

Many fixed end point problems can be modified in this manner. For instance a variation
of the catenary problem, described in section 3.5.6 (page 111) is given by an inelastic
rope hanging between two curves, defined by τ1 (x, y) and τ2 (x, y), on which the ends
may slide without hindrance, as shown in figure 10.2: the curve AB is a catenary, but
now we also need to determine the positions of A and B. Another example is a cable
hanging between two points A and B between which a weight of mass M is attached at
a given point C, with the distances AC and CB along the curve known, see figure 10.3.
The segments AC and CB will be catenaries but the gradient at C will be discontinuous.
Both these problems involve constraints, so are dealt with in chapter 12.

Figure 10.2  Diagram of a rope hanging between the two curves defined by τ_k(x, y) = 0, k = 1, 2, on which it can slide freely.

Figure 10.3  Diagram of a rope hanging between two given points, A and B, and with a weight firmly attached at a given point of the rope.

10.2 Natural boundary conditions

In this section we develop the theory for a particularly simple type of free boundary,
because this illustrates the method in the clearest manner. The ideas used for the more
general case are similar, but the algebra is more complicated. Here the interval [a, b]
and the value of the path at x = a are given, but the value of y(b) is to be determined.
Thus the functional is

        S[y] = ∫_a^b dx F(x, y, y'),   y(a) = A,                                        (10.3)

and both y(x) and y(b) need to be chosen to ensure that S[y] is stationary, as shown
schematically in figure 10.4, where the stationary and a varied path are depicted. This
problem differs from the general case, treated later, in that the value of x at the right-
hand end is given. This type of boundary condition is known as a natural condition or
a natural boundary condition because the value of y(b) is not imposed, but is defined
by the variational principle.
The admissible paths all pass through (a, A); the right end is constrained to lie on
the line x = b, but the actual position on this line needs to be determined. If y + εh
is an admissible path then h(a) = 0, but h(b) need not be zero.

Figure 10.4 Diagram showing the stationary path, the solid curve, and a varied path,
the dashed curve, for a problem in which the left end is fixed, but the other end is free
to move along the line x = b, parallel to the y-axis, so y(b) needs to be determined.

The Gâteaux differential of the functional is given by equation 4.9 (page 129), that is
        ∆S[y, h] = ∫_a^b dx ( h(x) ∂F/∂y + h'(x) ∂F/∂y' ).                                (10.4)
As before we integrate the second term by parts: using the fact that h(a) = 0 this
gives

        ∆S[y, h] = h(b) (∂F/∂y')|_{x=b} − ∫_a^b dx ( d/dx(∂F/∂y') − ∂F/∂y ) h(x).        (10.5)
This is the equivalent of equation 4.10 (page 129) but now the boundary term is not
automatically zero.
For a stationary path ∆S[y, h] = 0 for all h(x) and because the allowed variations
include those functions for which h(b) = 0 the stationary paths must satisfy the Euler-
Lagrange equation

        d/dx(∂F/∂y') − ∂F/∂y = 0,   y(a) = A,                                        (10.6)

with only one boundary condition¹. The general solution of this equation will contain
one arbitrary constant c, so we write the solution as y(x, c). Because y(x, c) satisfies
equation 10.6, the Gâteaux differential becomes ∆S = h(b) F_{y'}(x, y, y')|_{x=b} and because
this must be zero for all h(b), the solution of the Euler-Lagrange equation must satisfy
the boundary condition

        F_{y'}(b, y(b, c), y'(b, c)) = 0,                                                (10.7)
which determines possible values of c, and hence the stationary paths. Equation 10.7
is the natural boundary condition.
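The way the natural boundary condition emerges can be checked numerically. The sketch below is an illustrative example added here, not taken from the course text: it discretises S[y] = ∫_0^1 dx (½y'² + xy) with y(0) = 0 and y(1) free. The Euler-Lagrange equation is y'' = x and the natural condition (10.7) is F_{y'} = y'(1) = 0, so the exact stationary path is y = x³/6 − x/2; making the discrete sum stationary reproduces this path, the free-end condition appearing automatically.

```python
import numpy as np

# Illustrative check (not from the course text): make
#   S[y] = int_0^1 dx ( y'^2/2 + x*y ),  y(0) = 0,  y(1) free,
# stationary.  The Euler-Lagrange equation is y'' = x and the natural
# condition (10.7), y'(1) = 0, gives the exact path y = x^3/6 - x/2.
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

# Discretise S with forward differences and the trapezoidal rule, then
# set the gradient with respect to the unknowns y_1, ..., y_N to zero.
A = np.zeros((N, N))
rhs = np.zeros(N)
for j in range(1, N):                 # interior nodes: discrete y'' = x
    A[j - 1, j - 1] = 2.0 / h
    if j > 1:
        A[j - 1, j - 2] = -1.0 / h
    A[j - 1, j] = -1.0 / h
    rhs[j - 1] = -h * x[j]
A[N - 1, N - 1] = 1.0 / h             # free end: the natural condition
A[N - 1, N - 2] = -1.0 / h            # y'(1) = 0 emerges automatically
rhs[N - 1] = -0.5 * h * x[N]

y = np.concatenate(([0.0], np.linalg.solve(A, rhs)))
exact = x**3 / 6 - x / 2
print(abs(y[-1] - exact[-1]))         # end value converges to -1/3
```

No condition on y(1) is imposed anywhere; the stationarity of the last row of the discrete system plays the role of equation 10.7.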
As an example consider the brachistochrone problem, studied in section 5.2. It is
convenient to use the dependent variable z(x) = A − y(x), defined in equation 5.6
(page 150), and as before, we suppose that the initial velocity is zero, v0 = 0. Then the
functional may be taken to be²

        T[z] = ∫_0^b dx √[ (1 + z'²)/z ],   z(0) = 0.                                (10.8)
¹ This derivation assumes that there exists at least a one-parameter family of variations, h(x), such that h(a) = h(b) = 0, which is always the case for the problems we consider.
² For convenience we ignore the factor (2g)^{−1/2}, which does not affect the Euler-Lagrange equation.

The Euler-Lagrange equation is the same as in the previous discussion and, because the
functional does not depend explicitly upon x, it reduces to the first-order equation 5.7,
having the solution, see equation 5.8 (page 151),
        x = ½c²(2φ − sin 2φ),   z = c² sin²φ,                                        (10.9)
where we have set d = 0, because z = 0 when x = 0; the boundary condition at x = b
determines the value of c. For future reference we note that

        dz/dx = (dz/dφ)/(dx/dφ) = 2 sin φ cos φ/(1 − cos 2φ) = 1/tan φ   because   cos 2φ = 1 − 2 sin²φ.
At x = b this solution must satisfy the boundary condition 10.7 which, for this
problem, becomes

        F_{z'} = z'/√[ z(1 + z'²) ] = 0.
But z is bounded so the only solution is z' = 0, and since z' = 1/tan φ, this gives
φ = π/2, and means that the cycloid intersects the vertical line through x = b
orthogonally, see figure 10.5. But at the right end x = b, so 2b = πc², which gives the
solution

        x = (b/π)(2φ − sin 2φ),   z = (2b/π) sin²φ,   0 ≤ φ ≤ π/2.                (10.10)
The shape of this curve depends only upon b, rather than both A and b as in the
conventional problem. Here the value of A merely changes the vertical displacement of
the whole curve. It is therefore convenient to set A = 2b/π, and then the dependence
upon b becomes a change of scale, seen by setting x̄ = πx/b and ȳ = πy/b to give

        x̄ = 2φ − sin 2φ,   ȳ = 2 cos²φ,   0 ≤ φ ≤ π/2.                                (10.11)
The graph of this scaled solution is shown in figure 10.5.

Figure 10.5  Graph showing the cycloid defined in equation 10.11, where x̄ = πx/b and ȳ = πy/b.

The time of passage is also independent of A, and is given by the simple formula
T(b) = √(πb/g), a result derived in exercise 10.6.
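The formula T(b) = √(πb/g) is easy to confirm numerically by integrating the time of passage along the cycloid 10.10. The sketch below uses illustrative values of b and g and a simple trapezoidal rule; it assumes only the parametrisation 10.10 and the speed √(2gz) of a particle falling from rest.

```python
import numpy as np

# Numerical check of T(b) = sqrt(pi*b/g) along the cycloid of
# equation 10.10 (illustrative values of b and g).
b, g = 2.0, 9.81
phi = np.linspace(1.0e-6, np.pi / 2, 200001)   # avoid z = 0 at phi = 0
z = (2 * b / np.pi) * np.sin(phi)**2
dxdphi = (b / np.pi) * (2 - 2 * np.cos(2 * phi))
dzdphi = (2 * b / np.pi) * np.sin(2 * phi)
# time element: arc length per unit phi divided by the speed sqrt(2*g*z)
integrand = np.sqrt(dxdphi**2 + dzdphi**2) / np.sqrt(2 * g * z)
T = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phi))
print(T, np.sqrt(np.pi * b / g))               # the two values agree
```

Along this particular cycloid the integrand is in fact constant in φ, which is why the quadrature converges so quickly.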
Exercise 10.2
Write down a functional for the distance between the point (0, A) and the line
x = X > 0, parallel to the y-axis. Show that the stationary path is the straight
line through (0, A) and parallel to the x-axis.

Exercise 10.3
Find the stationary path of the functional

        S[y] = ∫_0^{π/4} dx ( y'² − y² ),   y(0) = A > 0,

where the right-hand end of the path lies on the line x = π/4.

Exercise 10.4
Show that the functional S[y] = ∫_a^b dx F(x, y, y'), y(b) = B, with the left end
of the path constrained to the line x = a, is stationary on the solution of the
Euler-Lagrange equation

        d/dx(∂F/∂y') − ∂F/∂y = 0,   F_{y'}(x, y, y')|_{x=a} = 0,   y(b) = B.

Exercise 10.5
Find the stationary path for the functional

        S[y] = ∫_0^1 dx ( y'² + y² ),   y(1) = B > 0,

with the left end of the path constrained to the y-axis.

Exercise 10.6
Show that the time to traverse the curve 10.10 is T(b) = √(πb/g).
Hint: use equation 10.8, but remember the factor (2g)^{−1/2}.

Exercise 10.7
The navigation problem defined in section 3.5.4 gives rise to the functional

        T[y] = ∫_0^b dx F(x, y'),   F(x, y') = [ √( c²(1 + y'²) − v² ) − vy' ] / (c² − v²),

for the time to cross a river. The start point is at the origin so y(0) = 0, but the
terminus is, in this version of the problem, undefined so the boundary condition
at x = b is a natural boundary condition. Assuming that v(x) ≥ 0 show that the
stationary path is given by

        y(x) = (1/c) ∫_0^x du v(u).

Exercise 10.8
This exercise is important because it uses the method introduced in this section
to extend the range of boundary conditions that can be described by functionals.
(a) Show that the Euler-Lagrange equation for the functional

        S[y] = ∫_a^b dx ( y'(x)² − y(x)² ),   y(a) = A,   y(b) = B,

is y'' + y = 0, y(a) = A, y(b) = B.

(b) Second-order equations of the above form occur frequently, but the boundary
conditions are sometimes different, involving linear combinations of y and y'. Thus
a typical equation is

        d²y/dx² + y = 0,   g_a y(a) + y'(a) = 0,   g_b y(b) + y'(b) = 0,                (10.12)

where g_a and g_b are constants.
Show, from first principles, that the functional

        S[y] = g_b y(b)² − g_a y(a)² + ∫_a^b dx ( y'(x)² − y(x)² )

is stationary on the path that satisfies equation 10.12, for all g_a and g_b.

10.2.1 Natural boundary conditions for the loaded beam


In this section we discuss functionals such as those for the energy of the loaded beam,
equation 10.1, which contain the second derivative, y 00 , so the associated Euler-Lagrange
equation is fourth-order, see equation 10.16 below. We start with the general functional
        S[y] = ∫_a^b dx F(x, y, y', y''),   y(a) = A,   y'(a) = A',

with natural boundary conditions at x = b. The derivation provided is brief because it


is similar to previous analysis. The Gâteaux differential is

        ∆S[y, h] = ∫_a^b dx ( hF_y + h'F_{y'} + h''F_{y''} ).                        (10.13)

Integration by parts gives

        ∫_a^b dx h'F_{y'} = [ hF_{y'} ]_a^b − ∫_a^b dx h d/dx(∂F/∂y''),

        ∫_a^b dx h''F_{y''} = [ h'F_{y''} ]_a^b − [ h d/dx(∂F/∂y'') ]_a^b + ∫_a^b dx h d²/dx²(∂F/∂y''),

so the Gâteaux differential can be cast into the form

        ∆S[y, h] = [ h( ∂F/∂y' − d/dx(∂F/∂y'') ) + h'(∂F/∂y'') ]_a^b
                   + ∫_a^b dx ( d²/dx²(∂F/∂y'') − d/dx(∂F/∂y') + ∂F/∂y ) h.        (10.14)
In this example h(a) = h'(a) = 0 but there are no conditions on h(b). Hence ∆S
reduces to

        ∆S[y, h] = h(b)( ∂F/∂y' − d/dx(∂F/∂y'') )|_b + h'(b)( ∂F/∂y'' )|_b
                   + ∫_a^b dx ( d²/dx²(∂F/∂y'') − d/dx(∂F/∂y') + ∂F/∂y ) h.        (10.15)

On a stationary path this must be zero for all allowed h(x). A subset of varied paths
has h(b) = h'(b) = 0 and hence the stationary path must satisfy the Euler-Lagrange
equation

        d²/dx²(∂F/∂y'') − d/dx(∂F/∂y') + ∂F/∂y = 0,   y(a) = A,   y'(a) = A'.        (10.16)

The solution of this equation contains two arbitrary constants. Now consider those
varied paths for which h(b) = 0 and h'(b) ≠ 0, and those for which h(b) ≠ 0 and
h'(b) = 0, to see that the solutions of this Euler-Lagrange equation must also satisfy
the two extra boundary conditions,

        F_{y''} = 0   and   F_{y'} − (d/dx)F_{y''} = 0   at x = b,                (10.17)
which determine the two constants in the solution of equation 10.16.
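As a quick check of equations 10.16 and 10.17, the sketch below verifies, for illustrative numerical values of the constants, that the solution quoted in exercise 10.10 satisfies the Euler-Lagrange equation κy'''' = gρ for the functional 10.1 with constant ρ, together with the clamped conditions at x = 0 and the natural conditions 10.17 at x = L (here F_{y''} = κy'' and F_{y'} − (d/dx)F_{y''} = −κy''').

```python
import numpy as np

# Check, with illustrative parameter values, that the clamped-beam
# solution quoted in exercise 10.10 satisfies equation (10.16),
# kappa*y'''' = g*rho, and the natural conditions (10.17) at x = L:
# kappa*y''(L) = 0 and -kappa*y'''(L) = 0.
L, rho, g, kappa = 1.5, 2.0, 9.81, 3.0
c = rho * g / (24 * kappa)
# y(x) = c*(x^4 - 4 L x^3 + 6 L^2 x^2); highest power first for np.polyder
y = np.array([c, -4 * L * c, 6 * L**2 * c, 0.0, 0.0])

d2 = np.polyder(y, 2)                  # y''
d3 = np.polyder(y, 3)                  # y'''
d4 = np.polyder(y, 4)                  # y'''' (a constant polynomial)
assert abs(kappa * d4[0] - g * rho) < 1e-9           # EL equation 10.16
assert abs(np.polyval(y, 0.0)) < 1e-9                # y(0) = 0
assert abs(np.polyval(np.polyder(y), 0.0)) < 1e-9    # y'(0) = 0
assert abs(np.polyval(d2, L)) < 1e-9                 # kappa*y''(L) = 0
assert abs(np.polyval(d3, L)) < 1e-9                 # -kappa*y'''(L) = 0
print('clamped-beam solution checks out')
```

A symbolic computation gives the same result exactly; the numerical check is kept dependency-free.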

Exercise 10.9
Derive equation 10.13.

Exercise 10.10
For the functional defined in equation 10.1 (page 257) with ρ = constant and the
boundary conditions y(0) = y'(0) = 0, use equations 10.16 and 10.17 to derive the
associated Euler-Lagrange equation and show that its solution is

        y(x) = (ρg/24κ) x²( x² − 4Lx + 6L² ).
24κ

Exercise 10.11
(a) Show that the stationary paths of the functional

        S[y] = ∫_a^b dx F(x, y, y', y''),   y(a) = A,   y(b) = B,

satisfy the Euler-Lagrange equation

        d²/dx²(∂F/∂y'') − d/dx(∂F/∂y') + ∂F/∂y = 0,   y(a) = A,   y(b) = B,   F_{y''}|_a = F_{y''}|_b = 0.

(b) Apply the result found in part (a) to the functional defined in equation 10.1
(page 257), with ρ = constant and the boundary conditions y(0) = y(L) = 0, to
derive the associated Euler-Lagrange equation and show that its solution is

        y(x) = (ρg/24κ) x(L − x)( L² + xL − x² ).
24κ

10.3 Variable end points


The theory for variable end points is similar to that described above, but is slightly
more complicated because the x-coordinate of the free end must also be determined.
Here we consider the case where the left end of the stationary path is known, and has
coordinates (a, A), but the right end is free to lie on a given curve, defined by the
equation τ(x, y) = 0, as shown schematically in figure 10.6: we shall assume that τ_x
and τ_y are not simultaneously zero in the region of interest. Note that if τ = x − b, the
equation τ = 0 defines the line x = b parallel to the y-axis, which is the example dealt
with in the previous section.


Figure 10.6 Diagram showing the stationary path, the solid line, and a varied
path, the dashed curve, for a problem in which the left-hand end is fixed, but
the other end is free to move along the line defined by τ (x, y) = 0.

The functional is

        S[y] = ∫_a^v dx F(x, y, y'),   y(a) = A,                                        (10.18)
where the path y(x) and v need to be chosen to make the functional stationary.
Let y(x) + εh(x) be an admissible varied path, so h(a) = 0. If x = v is the right-hand
terminal point of y(x), the terminal point of the varied path is at x = v + εξ, for some
ξ, so the x and y coordinates of this point are

        x = v + εξ   and   y = y(v + εξ) + εh(v + εξ)
                             = y(v) + ε( ξy'(v) + h(v) ) + O(ε²).

This point also lies on the constraining curve so, to first order in ε,

        τ( v + εξ, y(v) + ε( ξy'(v) + h(v) ) ) = 0.


Expanding this to first order in ε, and remembering that τ(v, y(v)) = 0, gives

        ξ( τ_x + y'(v)τ_y ) + h(v)τ_y = 0,                                        (10.19)
which provides a relation between ξ and h(v) that is needed later.
The Gâteaux differential of the functional is computed using equation 4.5 (page 125),
in the normal manner, except that the upper limit of the integral now depends upon ε.
Thus on the varied path

        S[y + εh] = ∫_a^z dx F(x, y + εh, y' + εh'),   z = v + εξ,                (10.20)
a

so the derivative with respect to ε is given by equation 1.52 (page 43), with b = z(ε)
so that dz/dε = ξ,

        dS/dε = ξF( z, y(z) + εh(z), y'(z) + εh'(z) ) + ∫_a^z dx ( h ∂F/∂y + h' ∂F/∂y' ).

On putting ε = 0, so z = v, we obtain the Gâteaux differential

        ∆S[y, h] = ξF(v, y(v), y'(v)) + ∫_a^v dx ( hF_y + h'F_{y'} ).                (10.21)

Now use integration by parts and the fact that h(a) = 0 to give

        ∫_a^v dx h'F_{y'} = [ hF_{y'} ]_a^v − ∫_a^v dx h d/dx(F_{y'})
                          = hF_{y'}|_{x=v} − ∫_a^v dx h d/dx(F_{y'}).
Hence the Gâteaux differential, equation 10.21, becomes

        ∆S[y, h] = [ ξF + hF_{y'} ]_v − ∫_a^v dx ( d/dx(∂F/∂y') − ∂F/∂y ) h.        (10.22)

Finally we use equation 10.19 to express h(v) in terms of ξ to arrive at the relation

        ∆S[y, h] = −(ξ/τ_y)[ F_{y'}τ_x + (y'F_{y'} − F)τ_y ]_v − ∫_a^v dx ( d/dx(∂F/∂y') − ∂F/∂y ) h.        (10.23)

On a stationary path ∆S[y, h] = 0 for all allowed h. A subset of these variations will
have ξ = 0, consequently y(x) must satisfy the Euler-Lagrange equation,

        d/dx(∂F/∂y') − ∂F/∂y = 0,   y(a) = A.                                        (10.24)

On a path satisfying this equation the Gâteaux differential reduces to

        ∆S[y, h] = −(ξ/τ_y)[ τ_x F_{y'} + (y'F_{y'} − F)τ_y ]_v                        (10.25)

and this must also be zero for all ξ. Hence, the equation

        τ_x F_{y'} + τ_y( y'F_{y'} − F ) = 0,   x = v,                                (10.26)

must be satisfied. This equation is the required boundary condition for the right-hand
end of the path and is named a transversality condition.
In order to see how this works, consider the solution of equation 10.24, y(x, c),
which depends upon a single constant c, because there is only one boundary condition.
By substituting this into equation 10.26 we obtain an equation relating v and c. But
the right-hand end of the path satisfies the condition τ (v, y(v, c)) = 0, and this gives
another relation between v and c: if these two equations can be solved for one or more
real pairs of v and c, stationary paths are obtained.

The derivation of equation 10.26 implicitly assumed that τ_y ≠ 0, see equation 10.23.
Suppose that on the stationary path τ_y = 0, which means that at this point the curve
τ(x, y) = 0 is parallel to the y-axis; then from equation 10.19 we see that ξ = 0, since we
assumed that τ_x and τ_y are not simultaneously zero, and the boundary term of 10.22 reduces
to hF_{y'} = 0, which means that F_{y'} = 0. Equation 10.26 also gives F_{y'} = 0 if τ_y = 0, so it
is also valid in this exceptional case. Note that in this limit the transversality condition
reduces to the natural boundary condition of equation 10.7, which is also retrieved by
setting τ = x − b in equation 10.26.
The transversality condition can be written in an alternative form by noting that
if the equation τ(x, y) = 0 defines a curve y = g_2(x) then g_2'(x) = −τ_x/τ_y, and equa-
tion 10.26 becomes

        F + (g_2' − y')F_{y'} = 0,   x = v.                                        (10.27)

This form of the transversality condition is not valid when τ_y = 0, that is where |g_2'(x)|
is infinite.
If the left end of the path is also constrained to a prescribed curve, γ(x, y) = 0, then
a similar equation can be derived. In summary we have the following result.
Theorem 10.1
For the functional S[y] = ∫_u^v dx F(x, y, y') and the smooth curves C_γ and C_τ defined by
the equations γ(x, y) = 0 and τ(x, y) = 0, the continuously differentiable path joining
C_γ and C_τ, at x = u and x = v respectively, that makes S[y] stationary, satisfies the
Euler-Lagrange equation

        d/dx(∂F/∂y') − ∂F/∂y = 0                                                (10.28)

and the boundary conditions

        [ γ_x F_{y'} + γ_y(y'F_{y'} − F) ]_{x=u} = 0   and   [ τ_x F_{y'} + τ_y(y'F_{y'} − F) ]_{x=v} = 0.        (10.29)

Either of these boundary conditions may be replaced by conventional boundary conditions.

As an example consider the functional

        S[y] = ∫_0^v dx f(y)√(1 + y'²),   y(0) = a,                                (10.30)

with the right end of the path terminating on the curve C defined by τ(x, y) = 0. For
this functional a first-integral exists and is given by

        F − y'F_{y'} = f(y)/√(1 + y'²) = c = constant.

The transversality condition 10.26 then gives

        τ_x y'f(y)/√(1 + y'²) − cτ_y = 0   that is   τ_x y'(v) = τ_y.

But the gradient of C is −τ_x/τ_y and hence at the terminal point the stationary path is
perpendicular to C.
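The algebra behind this perpendicularity result is easy to spot-check. The sketch below (an illustrative check added here, not part of the course text) evaluates the left-hand side of 10.26 for F = f(y)√(1 + y'²) at random values and confirms that it factorises as c(τ_x y' − τ_y), where c is the first integral above, so the vanishing of 10.26 forces τ_x y'(v) = τ_y.

```python
import math
import random

# Spot-check that for F = f(y)*sqrt(1 + y'^2) the transversality
# condition (10.26), tau_x*F_{y'} + tau_y*(y'*F_{y'} - F) = 0, reduces
# to tau_x*y' = tau_y, using the first integral c = f/sqrt(1 + y'^2).
random.seed(1)
for _ in range(100):
    f = random.uniform(0.5, 2.0)      # value of f(y) at the end point
    yp = random.uniform(-3.0, 3.0)    # y'(v)
    tx = random.uniform(-2.0, 2.0)    # tau_x
    ty = random.uniform(-2.0, 2.0)    # tau_y
    s = math.sqrt(1 + yp * yp)
    F = f * s
    Fyp = f * yp / s                  # F_{y'}
    c = F - yp * Fyp                  # first integral, equals f/s
    lhs = tx * Fyp + ty * (yp * Fyp - F)
    assert abs(lhs - c * (tx * yp - ty)) < 1e-9
print("condition 10.26 factorises as c*(tau_x*y' - tau_y)")
```

Since c ≠ 0 whenever f ≠ 0, the factorisation shows the two statements are equivalent.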

Exercise 10.12
Find the stationary path of the functional

        S[y] = ∫_0^v dx √(1 + y'²)/y,   y(0) = 0,

for a path terminating on the line y = x − a, a > 0.
Hint: first show that the solutions of the Euler-Lagrange equation are circles
through the origin and with centres on the x-axis.

Exercise 10.13
Consider the brachistochrone in which the left end is fixed at (0, A) and the right
end is constrained to the curve x/a + y/b = 1, a, b > 0. Initially the particle is
stationary at (0, A).
Show that the equations of the stationary path are

        x = ½c²(2φ − sin 2φ),   y = A − c² sin²φ,   0 ≤ φ ≤ φ_b = tan⁻¹(−b/a),

where c is given by the equation c²φ_b = a(1 − A/b).
Graphs of this solution, for various values of A and a = b = 1, are shown in
figure 10.1 (page 258).

Exercise 10.14
Consider the ellipse and the straight line defined, respectively, by the equations

        x²/a² + y²/b² = 1   and   x/A + y/B = 1,   x > 0,   y > 0,

in the first quadrant, where a, b, A and B are positive constants.
(a) Show that these curves do not intersect if AB > ∆, where ∆² = A²b² + B²a².
(b) Construct a functional for the distance between two points (u, v) on the ellipse,
and (ξ, η) on the straight line, and show that the solution of the associated Euler-
Lagrange equation is the straight line y = mx + c. Show also that the values of
the six constants m and c, (u, v) and (ξ, η) making this distance stationary satisfy
the equations

        mu/a² = v/b²,   m = A/B,   u²/a² + v²/b² = 1,   ξ/A + η/B = 1,

together with v = mu + c and η = mξ + c.
(c) Solve these equations to show that when the curves do not intersect the sta-
tionary distance is

        d = (AB − ∆)/√(A² + B²).

10.4 Parametric functionals


It is sometimes useful to formulate a functional in terms of curves defined parametrically
using the theory described in chapter 9. For variable end point problems the derivation
of the appropriate formulae follows in a similar manner to that described above, but
the homogeneity of the integrand simplifies the final result.

Consider the parametric functional

        S[x, y] = ∫_0^1 dt Φ(x, y, ẋ, ẏ),   x(0) = a,   y(0) = A,                (10.31)

where the end of the path at t = 0 is fixed and the end at t = 1 lies on a smooth
curve, C, defined parametrically by x = φ(τ), y = ψ(τ), where both φ(τ) and ψ(τ)
are continuously differentiable and such that φ'(τ) and ψ'(τ) are not simultaneously
zero for any τ in the region of interest. Notice that the parameter t varies in the fixed
interval [0, 1] because the integrand is homogeneous of degree one in ẋ and ẏ: this is
different from the functional 10.18 in which it was necessary to allow the upper limit
to vary. Here 0 ≤ t ≤ 1 on all paths.
By considering the varied path (x + εh_1, y + εh_2) we obtain the Gâteaux differential
in the usual manner,

        ∆S[x, y, h_1, h_2] = ∫_0^1 dt ( h_1Φ_x + ḣ_1Φ_ẋ + h_2Φ_y + ḣ_2Φ_ẏ ).        (10.32)

The left end of the path is fixed at t = 0, consequently h_1(0) = h_2(0) = 0, and
integration by parts gives

        ∆S = [ h_1Φ_ẋ + h_2Φ_ẏ ]_{t=1}
             − ∫_0^1 dt { h_1( d/dt(∂Φ/∂ẋ) − ∂Φ/∂x ) + h_2( d/dt(∂Φ/∂ẏ) − ∂Φ/∂y ) }.        (10.33)
If S[x, y] is stationary it is necessary that ∆S = 0 for all allowed variations h1 (t) and
h2 (t). By restricting the varied paths to those on which h1 (1) = h2 (1) = 0 we see that
the stationary path must satisfy the Euler-Lagrange equations

        d/dt(∂Φ/∂ẋ) − ∂Φ/∂x = 0,   d/dt(∂Φ/∂ẏ) − ∂Φ/∂y = 0,   x(0) = a,   y(0) = A.        (10.34)
The general solutions of these equations satisfying the conditions at t = 0 will contain
two constants, which we denote by c and d. On these paths the Gâteaux differential
becomes

        ∆S = [ h_1(t)Φ_ẋ + h_2(t)Φ_ẏ ]_{t=1}.                                        (10.35)

Because all admissible paths terminate on C, as shown in figure 10.7, the values of h_1(1)
and h_2(1) are related.
Figure 10.7  Diagram showing the stationary path, the terminating curve, C, and a varied path. At the intersection of C and the stationary path τ = τ_1; the varied path intersects C at τ = τ_1 + εζ.

Suppose that the stationary path terminates at (φ(τ_1), ψ(τ_1)) and a varied path at a
different value of τ, τ_1 + εζ. Hence

        x(1) = φ(τ_1)   and   x(1) + εh_1(1) = φ(τ_1 + εζ).

Expanding to first order in ε gives h_1(1) = ζφ'(τ_1) and, similarly, h_2(1) = ζψ'(τ_1).
Thus equation 10.35 becomes

        ∆S = ζ[ φ'(τ_1)Φ_ẋ + ψ'(τ_1)Φ_ẏ ]_{t=1}.                                (10.36)

But ∆S must be zero for all ζ ≠ 0 and hence the required boundary condition is

        [ φ'(τ_1)Φ_ẋ + ψ'(τ_1)Φ_ẏ ]_{t=1} = 0.                                        (10.37)

This is the transversality condition in parametric form and is the equivalent of equa-
tion 10.26 (page 266).
There are now three constants that need to be determined: these are (c, d) from
the solution of equations 10.34 and the value of the parameter τ1 , where the stationary
path intersects C. Equation 10.37 gives one relation between these three parameters:
the other two are x(1, c, d) = φ(τ1 ) and y(1, c, d) = ψ(τ1 ). In principle these equations
can be solved to give the required stationary path.
In order to see how this theory works consider the problem solved in exercise 9.1(b)
(page 241), that is the stationary values of the distance between the origin and the
parabola now defined parametrically by (1 − τ², aτ).
The parametric form of the functional is

        S[x, y] = ∫_0^1 dt √(ẋ² + ẏ²),   x(0) = y(0) = 0,                        (10.38)

and the boundary curve is φ(τ) = 1 − τ², ψ(τ) = aτ. Hence the boundary condi-
tion 10.37 becomes

        aẏ = 2τ_1ẋ   at   t = 1.                                                (10.39)
The Euler-Lagrange equations and the solutions that satisfy the boundary conditions
at the origin are

        d/dt( ẋ/√(ẋ² + ẏ²) ) = 0,   d/dt( ẏ/√(ẋ² + ẏ²) ) = 0   ⟹   x = ct,   y = dt,        (10.40)

where c and d are constants to be determined: these solutions are the parametric
equations of a straight line through the origin, as expected. Hence equation 10.39
becomes ad = 2τ_1c. But at t = 1 the solution 10.40 intersects the parabola, hence
c = 1 − τ_1² and d = aτ_1. Substituting these into the equation ad = 2τ_1c gives

        a²τ_1 = 2τ_1(1 − τ_1²)   that is   τ_1 = 0   or   τ_1² = 1 − a²/2.

The first of these solutions, τ_1 = 0, gives x = 1 and y = 0. The second equation,
τ_1² = 1 − a²/2, has real solutions if a < √2, which are the solutions found previously in
exercise 9.1(b).
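The two families of stationary points found above can be cross-checked by elementary calculus: the squared distance from the origin to the point (1 − τ², aτ) is an ordinary function of τ, and its stationary points should coincide with τ_1 = 0 and τ_1² = 1 − a²/2. The sketch below does this for the illustrative value a = 1.

```python
import math

# Cross-check of the parametric transversality result: the stationary
# distances from the origin to the parabola (1 - tau^2, a*tau) are the
# stationary points of an ordinary function of tau (here a = 1).
a = 1.0

def d2(tau):                       # squared distance to the origin
    return (1 - tau**2)**2 + (a * tau)**2

def d2prime(tau):                  # derivative of the squared distance
    return -4 * tau * (1 - tau**2) + 2 * a**2 * tau

# tau_1 = 0 and tau_1^2 = 1 - a^2/2, as found from equation 10.37
tau1 = math.sqrt(1 - a**2 / 2)
assert abs(d2prime(0.0)) < 1e-9
assert abs(d2prime(tau1)) < 1e-9
# for a = 1 < sqrt(2) the off-axis points are the closer ones
assert d2(tau1) < d2(0.0)
print(math.sqrt(d2(tau1)))         # the stationary distance for a = 1
```

The same check with a ≥ √2 leaves only the stationary point τ_1 = 0, in agreement with the condition a < √2 above.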

Exercise 10.15
For the parametrically defined curve x = φ(τ), y = ψ(τ), use the method described
above to show that the distance along the straight line y = mx from the origin to
a point on this curve is stationary if m = −φ'(τ)/ψ'(τ). If the curve is represented
by the function y(x), show that this becomes my'(x) = −1 and give a geometric
interpretation of this formula.

Exercise 10.16
Express the functional defined in equation 10.38 in non-parametric form and find
its stationary paths.

10.5 Broken extremals: the Weierstrass-Erdmann conditions
The theory so far has dealt almost entirely with continuously differentiable solutions
of the Euler-Lagrange equations. In the construction of the minimum area of a surface
of revolution, section 5.3, it was seen that the Goldschmidt function, equation 5.20
(page 160), was the only solution if the end radii were too small: this function is
continuous, but at two points its derivatives do not exist.
Solutions of variational problems that are continuous, but have discontinuous deriva-
tives at a finite number of points are named broken extremals (though they are often
merely stationary paths rather than extremals). The points of discontinuity are named
corners. Such solutions are dealt with by dividing the path into contiguous segments
in each of which the path is continuously differentiable and satisfies the Euler-Lagrange
equation; supplementing these equations are the Weierstrass-Erdmann (corner) condi-
tions which allow the paths in each segments to be joined to form a continuous path.
It transpires that the variational principle and the requirement of continuity provides
just sufficient extra conditions for particular solutions to be formed.
It is quite easy to find real problems that require broken extremals. One example is
illustrated in figure 10.3 (page 259), and we use a variant of this to introduce the basic
ideas before developing the general theory.

10.5.1 A taut wire


Consider a taut, elastic wire under tension T , fixed at both ends one being at the origin,
the other at x = L, on the horizontal x-axis. We suppose the wire sufficiently light that
it lies along Ox. If a weight of mass M is hung from the wire at a given point x = ξ
it will deform as shown in figure 10.8, and we assume that the deflection is sufficiently
small that the change in tension is negligible.
If the y-axis is vertically upwards the energy due to the tension in the wire can
be shown to be (T/2)∫_0^L dx y'², provided the displacement, y(x), is sufficiently small for
Hooke’s law to be valid. The potential energy of the mass is M gy(ξ), g being the
acceleration due to gravity, and, for the sake of simplicity, we assume that the wire
is sufficiently light that its potential energy is negligible. The functional for the total

energy of the system is

        E[y] = Mgy(ξ) + (T/2)∫_0^L dx y'²,   y(0) = y(L) = 0,   0 < ξ < L.        (10.41)

The configuration adopted by the wire is the continuous stationary path of this func-
tional.
y

ξ L x

y1 y2

Mg
Figure 10.8 Diagram of a light, taut wire of length L sup-
porting a weight at x = ξ.

This energy functional is different from others considered because the point x = ξ is
special. We deal with this by splitting the interval [0, L] into two subintervals, [0, ξ]
and [ξ, L], and writing the whole path, y(x), in terms of two functions,

        y(x) = y_1(x) for 0 ≤ x ≤ ξ,   y(x) = y_2(x) for ξ ≤ x ≤ L,                (10.42)

and since y(x) is continuous at x = ξ, we have y_1(ξ) = y_2(ξ). The derivatives of y(x)
are not defined at x = ξ, but this does not hinder the analysis because we require only
the left and right-hand derivatives. These are defined, respectively, by

        lim_{ε→0+} [y(ξ) − y(ξ − ε)]/ε = lim_{ε→0+} [y_1(ξ) − y_1(ξ − ε)]/ε   (left derivative),

        lim_{ε→0+} [y(ξ + ε) − y(ξ)]/ε = lim_{ε→0+} [y_2(ξ + ε) − y_2(ξ)]/ε   (right derivative).

In the following the derivatives at x = ξ are to be understood in this sense.
Now evaluate the functional on the varied path, y + εh, also continuous at x = ξ,
and where h(0) = h(L) = 0,

        E[y + εh] = Mg( y(ξ) + εh(ξ) ) + (T/2)∫_0^ξ dx (y_1' + εh')² + (T/2)∫_ξ^L dx (y_2' + εh')²,

so that the Gâteaux differential is

        ∆E[y, h] = Mgh(ξ) + T∫_0^ξ dx y_1'h' + T∫_ξ^L dx y_2'h'.

Integration by parts gives, on remembering that h(0) = h(L) = 0,

        ∆E[y, h] = { Mg + T( y_1'(ξ) − y_2'(ξ) ) } h(ξ) − T∫_0^ξ dx y_1''h − T∫_ξ^L dx y_2''h.        (10.43)

Now proceed in the usual manner. By choosing those h(x) for which h(x) = 0 for
ξ ≤ x ≤ L and those for which h(x) = 0 for 0 ≤ x ≤ ξ, we obtain the Euler-Lagrange
equations for y_1(x) and y_2(x),

        d²y_1/dx² = 0,   0 ≤ x < ξ,   y_1(0) = 0,
                                                                                (10.44)
        d²y_2/dx² = 0,   ξ < x ≤ L,   y_2(L) = 0.
On the path satisfying these equations the Gâteaux differential becomes

        ∆E[y, h] = { Mg + T( y_1'(ξ) − y_2'(ξ) ) } h(ξ)

and this can be zero for all h(x) only if

        y_2'(ξ) − y_1'(ξ) = Mg/T.                                                (10.45)
Physically, this equation represents the resolution of forces acting on the weight in the
vertical direction. Together with the continuity of y(x) this condition provides sufficient
information to find a stationary path, as we now show by solving the equations.
The solutions of the Euler-Lagrange equations 10.44 that satisfy the boundary con-
ditions at x = 0 and x = L are

        y_1(x) = αx   and   y_2(x) = β(L − x),

for some constants α and β. Since y_1(ξ) = y_2(ξ) we have (α + β)ξ = βL and equa-
tion 10.45 gives α + β = −Mg/T. Hence the stationary path comprises the two straight
line segments,

        y(x) = −(Mg/TL)(L − ξ)x,   0 ≤ x ≤ ξ,
                                                                                (10.46)
        y(x) = −(Mg/TL)ξ(L − x),   ξ ≤ x ≤ L.
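The stationary path 10.46 can be confirmed by minimising a discretised version of the functional 10.41 directly. Because the exact path is piecewise linear, the discrete solution agrees with it at the nodes to machine precision when ξ lies on a grid node; the parameter values below are illustrative.

```python
import numpy as np

# Discrete check of the taut-wire solution (10.46): make
#   E_h = M*g*y_k + (T/2) * sum_i (y_{i+1} - y_i)^2 / h
# stationary, with y_0 = y_N = 0 and xi = k*h on a grid node.
L_, T_, M_, g_ = 2.0, 5.0, 0.3, 9.81   # illustrative values
N, k = 100, 25
h = L_ / N
xi = k * h

A = np.zeros((N - 1, N - 1))
rhs = np.zeros(N - 1)
for j in range(1, N):                  # gradient of E_h w.r.t. y_j
    A[j - 1, j - 1] = 2 * T_ / h
    if j > 1:
        A[j - 1, j - 2] = -T_ / h
    if j < N - 1:
        A[j - 1, j] = -T_ / h
rhs[k - 1] = -M_ * g_                  # the point load at x = xi
y = np.linalg.solve(A, rhs)

exact = -(M_ * g_ / (T_ * L_)) * (L_ - xi) * xi   # equation 10.46 at xi
print(y[k - 1], exact)                 # the discrete solution is exact here
```

Away from x = ξ the discrete equations say that y is linear in x, and the single modified equation at the node k reproduces the jump condition 10.45.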

Exercise 10.17
Find the continuous stationary paths of the functional

        S[y] = Cy(ξ)² + ½∫_0^L dx y'²,   y(0) = A,   0 < ξ < L,

with natural boundary conditions at x = L. Explain why there cannot be a
unique, nontrivial solution if A = 0.

10.5.2 The Weierstrass-Erdmann conditions


Now consider the problem of finding stationary paths of the functional

        S[y] = ∫_a^b dx F(x, y, y'),   y(a) = A,   y(b) = B,                        (10.47)

that are continuously differentiable for a ≤ x ≤ b, except possibly at a single, unknown
point c (a < c < b), where the path is continuous but its derivatives are not.
The main difference between this and the previous special case is that the value of
c is not known in advance. However, we proceed in the same manner by splitting the
functional into two components,

        S[y] = ∫_a^c dx F(x, y_1, y_1') + ∫_c^b dx F(x, y_2, y_2'),                (10.48)

and compute its value on the varied paths y_1 + εh_1 and y_2 + εh_2, also allowing the
point x = c to move to c' = c + εξ, as shown diagrammatically in figure 10.9.

Figure 10.9  Diagram showing the stationary and a varied path: here c' = c + εξ.

The value of the functional on the varied path is

        S[y + εh] = ∫_a^{c+εξ} dx F(x, y_1 + εh_1, y_1' + εh_1') + ∫_{c+εξ}^b dx F(x, y_2 + εh_2, y_2' + εh_2').

Each integral is similar to that defined in equation 10.20 (page 265) and using the same
analysis that leads to equation 10.25, see exercise 10.18, we obtain

        ∆S[y, h] = [ ξF + h_1F_{y'} ]_{(x,y)=(c,y_1)} − [ ξF + h_2F_{y'} ]_{(x,y)=(c,y_2)},        (10.49)

with y_1(x) and y_2(x) satisfying the Euler-Lagrange equations

        d/dx(∂F/∂y_1') − ∂F/∂y_1 = 0,   y_1(a) = A,   a ≤ x ≤ c,                (10.50)

        d/dx(∂F/∂y_2') − ∂F/∂y_2 = 0,   y_2(b) = B,   c ≤ x ≤ b.                (10.51)

On the stationary path the coordinates of the corner are (c, y(c)) and on a varied path
these become (c + εξ, y(c) + εη), with ξ and η independent variables. In terms of y_1 and
y_2 we have

        y(c) + εη = y_k(c + εξ) + εh_k(c + εξ),   k = 1 and 2,
                  = y_k(c) + ε( ξy_k'(c) + h_k(c) ) + O(ε²).

Since y(x) is continuous, y1 (c) = y2 (c) = y(c), these equations allow h1 (c) and h2 (c)
to be expressed in terms of the independent variables ξ and η. Substituting these
expressions into equation 10.49 for ∆S we obtain
\[
\Delta S[y, h] = \Big[(F - y'F_{y'})\xi + F_{y'}\eta\Big]_{(x,y)=(c,y_1)} - \Big[(F - y'F_{y'})\xi + F_{y'}\eta\Big]_{(x,y)=(c,y_2)}. \tag{10.52}
\]
Note that each term of the right-hand side of this equation is similar to the left-hand
side of equation 10.26 (page 266) with ξ = −τy and η = τx : the important difference is
that ξ and η are independent variables. Because of this ∆S = 0 only if the coefficients
of ξ and η are both zero, which gives the two relations

\[
\lim_{x \to c^-}\big(F - y'F_{y'}\big) = \lim_{x \to c^+}\big(F - y'F_{y'}\big), \tag{10.53}
\]
\[
\lim_{x \to c^-} F_{y'} = \lim_{x \to c^+} F_{y'}. \tag{10.54}
\]

These relations between the values of y1 and y2 , and their first derivative at x = c are
known as the Weierstrass-Erdmann (corner) conditions and they hold at every corner of
a stationary path. With one corner the Euler-Lagrange equations 10.50 and 10.51 may
be solved to give functions y1 (x, α) and y2 (x, β), each involving one arbitrary constant.
Substituting these into the corner conditions gives two equations relating α, β and c:
a third equation is given by the continuity equation y1 (c, α) = y2 (c, β). These three
equations allow, in principle, values for α, β and c to be found.

Exercise 10.18
Derive equations 10.49–10.52.

For an example consider the functional
\[
S[y] = \int_0^2 dx\, y'^2(1 - y')^2, \quad y(0) = 0, \quad y(2) = 1. \tag{10.55}
\]

Because the integrand depends only upon y 0 , the solutions of the Euler-Lagrange equa-
tion are the straight lines y = mx + α, for some constants m and α. Therefore the
smooth solution that fits the boundary conditions is y = x/2 and on this path S = 1/8: moreover, by considering the second-order terms in the expansion of S[y + εh] we see that this path is a local maximum of S.
However, if y 0 = 0 or y 0 = 1 the integrand is zero, so we can imagine a broken
path comprising segments of straight lines at 45◦ and parallel to the x-axis on which
S[y] = 0; because the integrand is non-negative such a path gives a global minimum.
We now show that the corner conditions give such solutions.
Suppose that there is one corner at x = c. The two solutions that fit the boundary
conditions either side of c are
\[
y = \begin{cases} y_1 = m_1 x, & 0 \le x \le c, \\ y_2 = m_2(x - 2) + 1, & c \le x \le 2. \end{cases}
\]

Since

\[
F_{y'} = 2y'(1 - y')(1 - 2y') \quad\text{and}\quad F - y'F_{y'} = -y'^2(1 - y')(1 - 3y')
\]



the Weierstrass-Erdmann conditions become


\[
m_1^2(1 - m_1)(1 - 3m_1) = m_2^2(1 - m_2)(1 - 3m_2) \quad\text{and}\quad m_1(1 - m_1)(1 - 2m_1) = m_2(1 - m_2)(1 - 2m_2). \tag{10.56}
\]

The only non-trivial solutions of these equations and the continuity condition, m1 c =
m2 (c − 2) + 1 are (m1 , m2 , c) = (1, 0, 1) and (0, 1, 1), which give the two solutions shown
by the solid and dashed lines, respectively, in figure 10.10. On both lines the functional
has its smallest possible value of zero.
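The two values S = 1/8 and S = 0 quoted here are easy to confirm numerically. The following sketch is not part of the course notes; the midpoint-rule quadrature, the helper name `S` and the grid size are arbitrary choices made for illustration.

```python
# Numerical check of S[y] = integral over [0,2] of y'^2 (1 - y')^2 dx on the
# smooth path y = x/2 and on a broken path with one corner at c = 1.

def S(yprime, a=0.0, b=2.0, N=100000):
    """Midpoint-rule approximation of S[y], given the slope function y'(x)."""
    h = (b - a) / N
    total = 0.0
    for i in range(N):
        yp = yprime(a + (i + 0.5) * h)
        total += yp**2 * (1.0 - yp)**2 * h
    return total

smooth = S(lambda x: 0.5)                      # y = x/2: slope 1/2 everywhere
broken = S(lambda x: 1.0 if x < 1.0 else 0.0)  # corner at c = 1: slopes 1 then 0
print(smooth)   # approximately 1/8, the value on the smooth local maximum
print(broken)   # exactly 0, the global minimum
```

Since the integrand is constant on each segment, the quadrature reproduces the exact values up to rounding.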

Figure 10.10 Graph of some broken extremals for the functional 10.55, passing through (1,1) and (2,1). On the solid line (m_1, m_2) = (1, 0); on the dashed line (m_1, m_2) = (0, 1), and in both cases c = 1. The dotted line is a broken extremal with several corners.

In this example there are solutions with any number of corners comprising alternate
lines with unit gradient and horizontal lines; an example is depicted by the dotted line
in figure 10.10.

Exercise 10.19
(a) Show that the stationary path of the functional 10.55 without corners is y =
x/2 and that on this path S[y] = 1/8.
(b) If y = x/2 show that
\[
S[y + \epsilon h] = S[y] - \frac{1}{2}\epsilon^2 \int_0^2 dx\, h'(x)^2 + O(\epsilon^4)
\]
and deduce that this path gives a local maximum of S[y].

Exercise 10.20
Show that the only solutions of equations 10.56 are those given in the text.

Exercise 10.21
Find the stationary paths of the functional 10.55 with two corners.

Exercise 10.22
Find the stationary paths of the functional
\[
S[y] = \int_0^4 dx\,\big(y'^2 - 1\big)^2, \quad y(0) = 0, \quad y(4) = 2,
\]
having just one corner.
10.6. NEWTON’S MINIMUM RESISTANCE PROBLEM 277

10.5.3 The parametric form of the corner conditions


The Weierstrass-Erdmann corner conditions for the parametric functional
\[
S[x, y] = \int_0^1 dt\, \Phi(x, y, \dot{x}, \dot{y}), \quad x(0) = a, \quad y(0) = A, \quad x(1) = b, \quad y(1) = B, \tag{10.57}
\]
can be derived directly from equations 10.53 and 10.54, by setting Φ(x, y, ẋ, ẏ) = ẋF(x, y, ẏ/ẋ) and recalling the results of exercise 9.10 (page 247), to give
\[
\lim_{t \to c^-} \Phi_{\dot{x}} = \lim_{t \to c^+} \Phi_{\dot{x}} \quad\text{and}\quad \lim_{t \to c^-} \Phi_{\dot{y}} = \lim_{t \to c^+} \Phi_{\dot{y}}, \tag{10.58}
\]

where the corner is at t = c, with 0 < c < 1. At such a corner either or both of ẋ(t)
and ẏ(t) are discontinuous.

10.6 Newton’s minimum resistance problem


We now consider the solution of Newton’s minimum resistance problem described in sec-
tion 3.5.3 where the relevant functionals are derived, equations 3.21 and 3.22 (page 110).
Although the solution of this problem is of little practical value, for the reasons discussed
in section 3.5.3, its derivation is worth pursuing because of the techniques needed. The
detailed analysis in this section is not, however, assessed. To recap we require the
stationary paths of the functionals
\[
S_1[y] = \int_0^b dx\, \frac{x}{1 + y'^2}, \quad y(0) = A > 0, \quad y(b) = 0, \tag{10.59}
\]
and
\[
S_2[y] = \frac{1}{2}a^2 + \int_a^b dx\, \frac{x}{1 + y'^2}, \quad y(a) = A > 0, \quad y(b) = 0, \quad 0 < a < b. \tag{10.60}
\]
For S_2[y] both the stationary path and the value of a need to be determined. Physical considerations suggest that y'(x) is piecewise continuous; further, in the derivation of these functionals we made the implicit assumption that y'(x) ≤ 0, and without this constraint S_1[y] can be made arbitrarily small, as shown in exercise 10.28.
Here we show that S_1[y] has no stationary paths that can satisfy the boundary conditions; using this analysis we derive a stationary path for S_2[y].
The functional S_1[y] is of the type considered in exercise 8.13 (page 222) because the integrand, F = x/(1 + y'^2), does not depend explicitly upon y. The conclusion of this exercise shows that if y(x) is a stationary path and if
\[
F_{y'y'} = \frac{2x(3y'^2 - 1)}{(1 + y'^2)^3} > 0, \tag{10.61}
\]
it gives a minimum value of S_1.
The Euler-Lagrange equation associated with S1 can be integrated directly, and
assuming that y(x) decreases monotonically from y(0) = A > 0 to y(b) = 0, we obtain
\[
\frac{x y'}{(1 + y'^2)^2} = -c, \quad\text{with } c > 0. \tag{10.62}
\]

This equation can be solved by defining a new positive variable³,
\[
p(x) = -y'(x), \quad\text{giving the equation}\quad xp = c(1 + p^2)^2. \tag{10.63}
\]

Integrating the first equation gives
\[
y(x) = A - \int_0^x dx\, p(x) = A - \int_{p_0}^p dp\, p\frac{dx}{dp} = A - \int_{p_0}^p dp\left(\frac{d}{dp}(xp) - x\right),
\]
where p_0 = p(0) is an unknown constant. The last expression gives
\[
y(p) = A - \Big[xp\Big]_{p_0}^p + c\int_{p_0}^p dp\left(\frac{1}{p} + 2p + p^3\right).
\]

This equation can be integrated directly to obtain (x, y) in terms of p,
\[
x(p) = \frac{c(1 + p^2)^2}{p} \quad\text{and}\quad y(p) = B + c\left(\ln p - p^2 - \frac{3}{4}p^4\right), \tag{10.64}
\]
where B is a constant which has absorbed all other constants: in these equations p
may be regarded as a parameter, so we have found a solution in parametric form. The
required solution is obtained by finding the appropriate values of B, c and a range of p
that satisfy (x, y) = (0, A) and (b, 0): it transpires that this is impossible, as will now
be demonstrated.
Define the related functions
\[
\xi(p) = \frac{x}{c} = \frac{(1 + p^2)^2}{p} \quad\text{and}\quad \eta(p) = \frac{y - B}{c} = \ln p - p^2 - \frac{3}{4}p^4, \quad p > 0, \tag{10.65}
\]
which contain no arbitrary constants. Since, by definition, p = −y'(x), it follows from the chain rule that pξ'(p) = −η'(p) and hence for p ≠ 0 the stationary points of ξ(p) and η(p) coincide. The graphs of ξ(p) and η(p) are shown in figure 10.11.

Figure 10.11 Graphs of ξ(p) and η(p). Each function is stationary at p = 1/√3.


Since ξ'(p) = (p² + 1)(3p² − 1)/p², ξ(p) has a single minimum at p = 1/√3 and, because pξ'(p) = −η'(p), η(p) has a single maximum at p = 1/√3. The coordinates of the
3 See for instance exercise 2.7, page 59.
10.6. NEWTON’S MINIMUM RESISTANCE PROBLEM 279
stationary points are (ξ, η) = (16√3/9, −(1/2)ln 3 − 5/12) = (3.08, −0.97). The minimum value of x(p) is 16√3 c/9, with c > 0, hence there is no nontrivial stationary path that
can pass through x = 0, the lower boundary. However, we pursue the investigation of
this general solution because it is needed for the stationary path of S2 .
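The properties of ξ(p) and η(p) used here can be checked numerically. The following sketch is not part of the notes; the finite-difference step and the sample point p = 0.9 are arbitrary choices.

```python
import math

def xi(p):  return (1 + p*p)**2 / p                 # xi(p) of equation 10.65
def eta(p): return math.log(p) - p*p - 0.75*p**4    # eta(p) of equation 10.65

def d(f, p, h=1e-6):
    """Central-difference estimate of f'(p)."""
    return (f(p + h) - f(p - h)) / (2 * h)

p0 = 1 / math.sqrt(3)
print(xi(p0), eta(p0))         # approximately (3.08, -0.97), the point M
print(d(xi, p0), d(eta, p0))   # both vanish: xi minimum, eta maximum at 1/sqrt(3)
print(0.9 * d(xi, 0.9) + d(eta, 0.9))   # the identity p xi'(p) = -eta'(p)
```

The final printed value is zero to finite-difference accuracy, confirming that the stationary points of ξ and η coincide.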
The graphs of ξ(p) and η(p) show that there are two branches of the function η(ξ), the solution of equation 10.62, one defined by p in the interval [1/√3, ∞) and the other on the interval (0, 1/√3]. Consider each case in turn.

• p√3 > 1: for p increasing from 1/√3, ξ(p) increases monotonically from its minimum value and η(p) decreases monotonically from its maximum value. Hence the function η(ξ) remains in the fourth quadrant, starting at (3.08, −0.97), where p = 1/√3, and behaving as η ≈ −3ξ^{4/3}/4 for large p. This is the curve MR in figure 10.12. At p = 1/√3, η'(ξ) = −1/√3. Since p > 1/√3, this curve is a local minimum of S_1[y], see equation 10.61.

• p√3 < 1: for p decreasing from 1/√3, ξ(p) increases monotonically from its minimum value and η(p) decreases monotonically from its maximum value: again η(ξ) remains in the fourth quadrant, and for small p, ξ ≈ 1/p and η(ξ) ≈ −ln ξ. On this curve η(ξ) decreases more slowly than on the previous curve. At p = 1/√3, η'(ξ) = −1/√3. This is the curve MS in figure 10.12.

The equations 10.64 define the parametric equations of a curve in the (ξ, η)-plane with
parameter p; this curve is shown in figure 10.12. In principle η can be expressed in
terms of ξ, but no simple formula for this relation exists. The two branches MR and MS of η(ξ), shown in figure 10.12, start at (3.08, −0.97) with the same gradient.

Figure 10.12 Graph of the two branches of η(ξ), the solution of equation 10.62: the branch MR has p√3 > 1 and the branch MS has p√3 < 1.

Solid surrounding a hollow right circular cylinder

The above analysis shows that there is no smooth stationary path for S_1[y].
However, suppose that the solid of revolution surrounds a hollow cylinder with axis
along Oy and with a given radius, a and height A, and through which the fluid flows
unhindered, as shown in figure 10.13.

Figure 10.13 Diagram of a solid surrounding a hollow cylinder.

The functional for this problem is a variation of that defined in equation 10.59,
\[
S_3[y] = \int_a^b dx\, \frac{x}{1 + y'^2}, \quad y(a) = A, \quad y(b) = 0. \tag{10.66}
\]

The solution of the associated Euler-Lagrange equation that makes S_3[y] a minimum is given by equations 10.64 with 1/√3 < p_1 ≤ p ≤ p_2, with p_1 corresponding to the point (a, A) and p_2 to (b, 0). The four constants (p_1, p_2, c, B) are given in terms of (a, b, A) by the four boundary conditions,

\[
A = B + c\,\eta(p_1), \quad (\text{from } y(p_1) = A), \tag{10.67}
\]
\[
0 = B + c\,\eta(p_2), \quad (\text{from } y(p_2) = 0), \tag{10.68}
\]
\[
a = c\,\xi(p_1), \quad b = c\,\xi(p_2), \quad (\text{from } x(p_1) = a \text{ and } x(p_2) = b > a). \tag{10.69}
\]

For given values of b/a and A/a we now show that these equations have a unique
solution, provided A/a is larger than some minimum value that depends upon b/a, and
tends to zero as b → a. The equations can be solved numerically for the constants,
(p1 , p2 , c, B). But this task is made easier by first expressing p2 and c in terms of p1 .
This is achieved by dividing the two equations 10.69 to eliminate c and writing the
resultant expression in the form

\[
\xi(p_2) = \frac{b}{a}\,\xi(p_1), \quad p_1 > \frac{1}{\sqrt{3}}. \tag{10.70}
\]

This equation can be interpreted geometrically as illustrated in figure 10.14 which shows
the graphs of ξ(p) = (1 + p²)²/p and bξ(p)/a > ξ(p), the dashed line. For a given value of p_1 > 1/√3, we can see by following the arrows on the dotted lines that a unique
value of p2 is obtained. For large p we have ξ(p) ' p3 giving the approximate solution
p2 ' (b/a)1/3 p1 .
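Because ξ(p) is strictly increasing for p > 1/√3, equation 10.70 can be solved for p₂ by simple bisection. The following sketch is illustrative only; the sample values p₁ = 2 and b/a = 3, and the bracketing bound, are arbitrary choices.

```python
import math

def xi(p):
    return (1 + p*p)**2 / p

def solve_p2(p1, b_over_a, hi=1e3, tol=1e-12):
    """Solve xi(p2) = (b/a) xi(p1) for p2 > p1 by bisection (equation 10.70)."""
    target = b_over_a * xi(p1)
    lo = p1                      # p2 > p1 because b > a and xi is increasing here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if xi(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p1, ratio = 2.0, 3.0             # sample values: b/a = 3
p2 = solve_p2(p1, ratio)
print(p2)                        # the unique root of equation 10.70
print(ratio**(1/3) * p1)         # large-p estimate p2 = (b/a)^(1/3) p1
```

The two printed numbers show how good the large-p approximation is for these sample values.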
10.6. NEWTON’S MINIMUM RESISTANCE PROBLEM 281

Figure 10.14 Graphs of the functions ξ(p), the solid line, and bξ(p)/a, the dashed line, and the geometric interpretation of the solution of equation 10.70.

The equation 10.70 for p_2 is simplified by defining a new variable ψ_2, with p_2 = 1/tan ψ_2 = tan(π/2 − ψ_2) and 0 < ψ_2 < π/3, so equation 10.70 becomes
\[
\sin^3\psi_2 \cos\psi_2 = d^3, \quad d^3 = \frac{a}{b}\,\frac{p_1}{(1 + p_1^2)^2} < \frac{3\sqrt{3}}{16}\,\frac{a}{b}, \quad d < 0.69. \tag{10.71}
\]

The left-hand side is O(ψ_2³) as ψ_2 → 0 and in this limit the solution is approximately ψ_2 = d, that is p_2 ≈ (b/a)^{1/3} p_1, which is why we wrote d³ on the right-hand side. For larger values of d the solution can be approximated by a truncated Taylor series. It transpires that the first few terms of this series provide sufficient accuracy for graphical representations of the solution: the first four terms give,

 
\[
p_2(d) = \frac{1}{d}\left(1 - \frac{2}{3}d^2 - \frac{1}{3}d^4 - \frac{28}{81}d^6 + \cdots\right). \tag{10.72}
\]
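The truncated series can be compared with a direct numerical solution of sin³ψ₂ cos ψ₂ = d³, using p₂ = 1/tan ψ₂. This check is not part of the notes; the sample values of d are arbitrary, and the d⁶ coefficient 28/81 is a re-derived value (the printed text is ambiguous at this point), which the comparison supports.

```python
import math

def p2_series(d):
    """Truncated series of equation 10.72 for p2(d)."""
    return (1 - (2/3)*d**2 - (1/3)*d**4 - (28/81)*d**6) / d

def p2_exact(d, tol=1e-14):
    """Solve sin^3(psi) cos(psi) = d^3 on (0, pi/3) by bisection, p2 = cot(psi).

    The left-hand side is strictly increasing on this interval, so the
    bracketing bisection converges to the unique root.
    """
    target = d**3
    lo, hi = 1e-12, math.pi / 3
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.sin(mid)**3 * math.cos(mid) < target:
            lo = mid
        else:
            hi = mid
    return 1.0 / math.tan(0.5 * (lo + hi))

for d in (0.1, 0.2, 0.3):
    print(d, p2_series(d), p2_exact(d))   # agreement to O(d^7)
```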

The constant c is given directly in terms of p_1 by equation 10.69, so, on eliminating B from equations 10.67 and 10.68 we obtain an equation for p_1,
\[
\frac{A}{a} = \frac{\eta(p_1) - \eta(p_2)}{\xi(p_1)}. \tag{10.73}
\]

This equation can be used to determine p_1 for a given value of A/a. Numerical investigations suggest that for a given value of b/a there is a minimum value of A below which there are no solutions, with this critical value of A tending to zero as b → a. Alternatively, as a → 0, for fixed b, the minimum value of A for which solutions exist tends to infinity, see exercise 10.24. A few such solutions are shown in figure 10.15 for A = 4a, and in this case there are no solutions if b > 3.72a.
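The procedure just described can be sketched numerically: solve equation 10.70 for p₂ as a function of p₁, then find the p₁ that satisfies equation 10.73. The sketch below is not part of the notes; the sample values b = 1.5a and A = 4a (one of the cases in figure 10.15), the bracketing interval, and the tolerances are arbitrary choices.

```python
import math

def xi(p):  return (1 + p*p)**2 / p
def eta(p): return math.log(p) - p*p - 0.75*p**4

def p2_of(p1, r):
    """Solve xi(p2) = r xi(p1), p2 > p1, by bisection (equation 10.70)."""
    target, lo, hi = r * xi(p1), p1, 1e3
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if xi(mid) < target else (lo, mid)
    return lo

def A_over_a(p1, r):
    """Right-hand side of equation 10.73 for a given p1 and r = b/a."""
    return (eta(p1) - eta(p2_of(p1, r))) / xi(p1)

r, goal = 1.5, 4.0                     # b = 1.5a, A = 4a
lo, hi = 1/math.sqrt(3) + 1e-6, 50.0   # A/a is below 4 at lo and above 4 at hi
while hi - lo > 1e-10:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if A_over_a(mid, r) < goal else (lo, mid)
print(lo, A_over_a(lo, r))             # the value of p1 giving A/a = 4
```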

Figure 10.15 Examples of the stationary paths of S_3[y] for A = 4a and various values of b/a (b = 1.5a, 2a, 2.5a, 3.5a). For this value of A there are no solutions for b > 3.72a.

Body surrounding a solid right circular cylinder


We now return to Newton’s original problem. Another solution of the Euler-Lagrange
equation 10.62 is y 0 (x) = 0, with c = 0, so we might expect a suitable solution to be
the piecewise differentiable function,

A, 0 ≤ x ≤ a,
z(x) =
y(x), a ≤ x ≤ b,
where y(x) is the solution found in equation 10.64 above, with y(a) = A, and a a
parameter to be found. This solution has a corner at (a, A), so can be treated using a
modification of the Weierstrass-Erdmann corner conditions — the modification being
required because A is fixed and y(a) = A, so constraining the end of the varied path to
move only in the x-direction.
However a more transparent formulation of this problem is obtained by explicitly including the path y = A, for 0 ≤ x ≤ a, in the functional. Thus the equivalent functional is
\[
S_2[y] = \frac{1}{2}a^2 + \int_a^b dx\, \frac{x}{1 + y'^2}, \quad y(a) = A, \quad y(b) = 0, \tag{10.74}
\]
where both the path and the variable a need to be found. The varied path is y(x) + εh(x), with h(b) = 0, and a becomes a + εk. At the corner, x = a,
\[
A = y(a) \quad\text{and}\quad A = y(a + \epsilon k) + \epsilon h(a + \epsilon k), \tag{10.75}
\]
and expanding the second equation to first order in ε we obtain a relation between k and h(a),
\[
k y'(a) + h(a) = 0. \tag{10.76}
\]
Setting F(x, y') = x/(1 + y'^2) we obtain
\[
S_2[y + \epsilon h] = \frac{1}{2}(a + \epsilon k)^2 + \int_a^b dx\, F(x, y' + \epsilon h') - \int_a^{a + \epsilon k} dx\, F(x, y' + \epsilon h'). \tag{10.77}
\]

Differentiating with respect to ε, then setting ε to zero, integrating by parts and using the fact that h(b) = 0 gives the Gâteaux differential, see exercise 10.26,
\[
\Delta S_2 = ak - kF\big(a, y'(a)\big) - h(a)F_{y'}\big(a, y'(a)\big) - \int_a^b dx\, h\,\frac{dF_{y'}}{dx}. \tag{10.78}
\]
10.6. NEWTON’S MINIMUM RESISTANCE PROBLEM 283

Using the subset of variations with k = h(a) = 0 gives equation 10.62, having the parametric solution
\[
x = c\,\frac{(1 + p^2)^2}{p}, \quad y = B + c\left(\ln p - p^2 - \frac{3}{4}p^4\right), \quad \frac{1}{\sqrt{3}} < p_1 \le p \le p_2, \tag{10.79}
\]
where c and B are constants and we restrict p > 1/√3 because in the previous case only this range of p gave a minimum: this assumption is justified in exercise 10.27.
On using equation 10.76 to express h(a) in terms of k, we see that on the stationary path the Gâteaux differential has the value
\[
\Delta S_2 = \Big(a - F\big(a, y'(a)\big) + y'(a)F_{y'}\big(a, y'(a)\big)\Big)k. \tag{10.80}
\]

This must be zero for all k and hence
\[
a = F\big(a, y'(a)\big) - y'(a)F_{y'}\big(a, y'(a)\big) = a\,\frac{1 + 3y'(a)^2}{\big(1 + y'(a)^2\big)^2}. \tag{10.81}
\]
From this it follows that y'(a) = ±1 (we ignore the solution y'(a) = 0) and since y(x) is a decreasing function, y'(a) = −1. From the definition of p, equation 10.63, it follows that p_1 = 1. Thus the solution is
\[
x = \frac{a}{4}\,\frac{(1 + p^2)^2}{p}, \quad y = A + \frac{7a}{16} + \frac{a}{4}\left(\ln p - p^2 - \frac{3}{4}p^4\right), \quad 1 \le p \le p_2. \tag{10.82}
\]
Finally the values of a and p_2 are determined from the boundary conditions x(p_2) = b and y(p_2) = 0. Combining these equations we obtain
\[
A = \frac{b\,p_2}{(1 + p_2^2)^2}\left\{\frac{3}{4}p_2^4 + p_2^2 - \ln p_2 - \frac{7}{4}\right\}. \tag{10.83}
\]
The term in curly brackets is zero at p2 = 1 and the right-hand side of this equation
increases as p2 for large p2 . Also the gradient of the right-hand side is positive for p2 > 1,
see exercise 10.25. Hence for any positive value of A this equation gives a unique value
of p2 ; and then a can be determined from either of equations 10.82. Further, this path
is a local (weak) minimum, see exercise 10.27.
In figure 10.16 are shown some solutions for the cases A = 4, b = 1, 2, · · · , 5, 8 and
10: in this figure only the curved parts of the solutions are shown.
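Equation 10.83 is easily solved numerically. The sketch below is illustrative only (the sample values A = 4, b = 2, matching one of the curves in figure 10.16, and the tolerances are arbitrary choices); it brackets the root of equation 10.83 and then recovers the disc radius a from x(p₂) = b in equation 10.82.

```python
import math

def rhs(p, b):
    """Right-hand side of equation 10.83, i.e. b G(p) with G as in exercise 10.25."""
    return b * p / (1 + p*p)**2 * (0.75*p**4 + p*p - math.log(p) - 1.75)

def solve(A, b, tol=1e-12):
    lo, hi = 1.0, 2.0
    while rhs(hi, b) < A:        # rhs grows like (3/4) b p, so this terminates
        hi *= 2.0
    while hi - lo > tol:         # bisection: rhs is zero at p = 1 and increasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rhs(mid, b) < A else (lo, mid)
    p2 = 0.5 * (lo + hi)
    a = 4 * b * p2 / (1 + p2*p2)**2   # from x(p2) = b in equation 10.82
    return p2, a

p2, a = solve(4.0, 2.0)
print(p2, a)    # corner parameter p2 and the radius a of the flat front disc
```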

Figure 10.16 Graphs of the solutions defined in equation 10.82 for A = 4 and b = 1, 2, . . . , 5, 8 and 10. Here the horizontal part of each solution, from x = 0 to a, is not shown.

Exercise 10.23
Derive the first two terms of equation 10.72.

Exercise 10.24
Show that as a → 0 equation 10.73 can be written in the approximate form,
\[
\frac{A}{a} \simeq \frac{3}{4}\left(\frac{b}{a}\right)^{4/3}\xi(p_1)^{1/3},
\]
and hence that for sufficiently small a there is no solution if A ≤ 1.09 b^{4/3} a^{-1/3}.

Exercise 10.25
Denote the right-hand side of equation 10.83 by bG(p_2) where
\[
G(p) = \frac{p}{(1 + p^2)^2}\left\{\frac{3}{4}p^4 + p^2 - \ln p - \frac{7}{4}\right\}
\]
and show that G(1) = 0, G(p) = \frac{3}{4}p + O(1/p) as p → ∞, and that G'(p) > 0 for p > 1/√3.

Exercise 10.26
Derive the Gâteaux differential 10.78.

Exercise 10.27
Show that the second derivative of S_2[y + εh] evaluated at ε = 0 is
\[
\left.\frac{d^2 S_2}{d\epsilon^2}\right|_{\epsilon=0} = k^2 + 2\int_a^b dx\, \frac{x(3y'^2 - 1)}{(1 + y'^2)^3}\,h'^2,
\]
where k is defined in equation 10.76. Deduce that the stationary path defined by equation 10.82 gives a (weak) local minimum of S_2[y], provided 3y'^2 > 1.

Exercise 10.28
(a) Consider the value of S_1[y], defined in equation 10.59, on the path
\[
z(x) = A\cos\left(\Big(n + \frac{1}{2}\Big)\frac{\pi x}{b}\right), \quad 0 \le x \le b,
\]
where n is any integer, and show that S_1[z] may be made arbitrarily small.
(b) Which norm does this path satisfy?

10.7 Miscellaneous exercises


Exercise 10.29
Show that the stationary paths of the functional
\[
S[y] = \int_a^b dx\, f(x, y)\sqrt{1 + y'^2}, \quad y(a) = 0,
\]
with natural boundary conditions at x = b, are parallel to the x-axis at x = b if f(x, y) ≠ 0.

Exercise 10.30
Show that the stationary paths of the functional
\[
S[y] = \int_0^a dx\, y\sqrt{1 + y'^2}, \quad y(0) = A > 0,
\]
with natural boundary conditions at x = a, are given by
\[
y = c\cosh\left(\frac{a - x}{c}\right) \quad\text{with}\quad A = c\cosh\left(\frac{a}{c}\right).
\]
Show that there are two solutions if A > 1.509a and none for smaller A.

Exercise 10.31
Derive the Euler-Lagrange equations for the functional
\[
S[y] = G\big(y(v)\big) + \int_a^v dx\, F(x, y, y'), \quad y(a) = A,
\]
for each of the two boundary conditions,


(a) natural boundary conditions at x = v, and
(b) the right end of the stationary path terminating on the curve defined by
τ (x, y) = 0.

Exercise 10.32
Find the stationary paths for the functional
\[
S[y] = \int_0^v dx\, \frac{\sqrt{1 + y'^2}}{y}, \quad y(0) = 0,
\]
where the point (v, y(v)) is constrained to the curve x² + (y − r)² = r², that is a circle of radius r with centre on the y-axis at y = r.

Exercise 10.33
Consider the functional
\[
S[y] = \int_a^v dx\, f(x, y)\sqrt{1 + y'^2}\,\exp\big(\alpha\tan^{-1} y'\big), \quad y(a) = A,
\]

with the condition that the right-hand end of the stationary path lies on the curve
C defined by τ (x, y) = 0. If the gradient of C and the stationary path at the point
of intersection are, respectively, tan θC and tan θ, show that

\[
\tan(\theta - \theta_C) = \frac{1}{\alpha}.
\]

Exercise 10.34
Show that the stationary path of the functional
\[
S[y] = \int_0^1 dx\,\big(xy + y''^2\big), \quad y(0) = y'(0) = y(1) = 0,
\]
is y(x) = x²(1 − x)(2x² + 2x − 7)/480.

Exercise 10.35
A weight of mass M is hung from the end, x = L, of the beam described by the functional of equation 10.1 and the beam is clamped at x = 0. The relevant energy functional is
\[
E[y] = -Mg\,y(L) + \int_0^L dx\left(\frac{1}{2}\kappa y''^2 - \rho(x)g\,y\right), \quad y(0) = y'(0) = 0,
\]
where the y-axis is pointing downwards. Find the associated Euler-Lagrange equation and boundary conditions for this problem. Solve this equation in the case that ρ is independent of x.

Exercise 10.36
A weight of mass M is hung from a given point, x = ξ, 0 < ξ < L, of the beam described by the functional of equation 10.1 and the beam rests on supports at x = 0 and x = L, both at the same level. The relevant energy functional is
\[
E[y] = -Mg\,y(\xi) + \int_0^L dx\left(\frac{1}{2}\kappa y''^2 - \rho(x)g\,y\right), \quad y(0) = y(L) = 0,
\]
where the y-axis is pointing downwards. Assuming that y(x) is continuous at
x = ξ, find the associated Euler-Lagrange equation and all the boundary condi-
tions for this problem.

Exercise 10.37
Prove that the functional
\[
S[y] = \int_a^b dx\,\big(y'^2 + 2\beta y y' + \gamma y^2\big), \quad y(a) = A, \quad y(b) = B,
\]
can have no broken extremals.

Exercise 10.38
Can the functional
\[
S[y] = \int_0^a dx\, y'^3, \quad y(0) = 0, \quad y(a) = A,
\]
have broken extremals?

Exercise 10.39
Does the functional
\[
S[y] = \int_0^a dx\,\big(y'^4 - 6y'^2\big), \quad y(0) = 0, \quad y(a) = A > 0,
\]
have any stationary paths with a single corner? Find any such paths.

Exercise 10.40
Find the equation for the stationary curve of the modified brachistochrone problem in which the initial point is (0, A), A > 0, and the final point is on a circle
with centre on the x-axis at x = b and with radius r < b. The particle starts from
rest at (0, A).
Chapter 11

Conditional stationary points

11.1 Introduction
In this chapter we introduce the method needed to treat constrained variational prob-
lems, examples of which are the isoperimetric and catenary problems, described in
sections 3.5.5 and 3.5.6. With such problems the admissible paths are constrained to
a subset of all possible paths: in the isoperimetric and catenary problems these con-
straints are the lengths of the boundary and chain, respectively.
We introduce the technique required using the simpler example of constrained sta-
tionary points of functions of two or more variables, beginning with a discussion of a few
elementary cases; the method is applied to the Calculus of Variations in the next chap-
ter. Throughout this chapter we assume that all functions are sufficiently differentiable
in the region of interest.
Consider a walker on a hill but confined to a one-dimensional path, AB, as shown
in figure 11.1.

Figure 11.1 Graph showing the height h(x, y) of the hill as x and y vary. The path x + y = 1 is depicted by the solid line.

In this example the height of the hill is represented by the function
\[
h(x, y) = 3\exp\big(-x^2 - y^2/2\big) \tag{11.1}
\]

and the path by the equation x + y = 1. This hill has a global maximum at x = y = 0,
but because the path does not pass through this point the maximum height attained
by the walker is less. The problem is to find this stationary point and its position: we
should also like to classify this stationary point, but usually this is more difficult.
The maximum height of the walker may be determined by rearranging the equation of the path to express y in terms of x, y = 1 − x, and then by expressing the height in terms of x alone,
\[
h(x) = 3\exp\left(-\frac{3}{2}x^2 + x - \frac{1}{2}\right). \tag{11.2}
\]
2 2
The maximum of this function may be found by the methods described in section 8.2, see also exercise 11.1, and is max(h(x, y)) = 3e^{−1/3}. In this example the path x + y = 1 constrains the walker and is named the constraint, or the equation of constraint.
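The direct substitution just described can also be carried out numerically. The sketch below is illustrative only (the golden-section search and its bracketing interval are incidental choices, not part of the notes); it maximises the height 11.2 along the path.

```python
import math

def h_on_path(x):
    """Height of the hill along the path y = 1 - x, equation 11.2."""
    return 3.0 * math.exp(-1.5*x*x + x - 0.5)

# Golden-section search for the maximum of a unimodal function on [lo, hi].
invphi = (math.sqrt(5) - 1) / 2
lo, hi = -1.0, 2.0
for _ in range(200):
    c = hi - invphi * (hi - lo)
    d = lo + invphi * (hi - lo)
    if h_on_path(c) < h_on_path(d):
        lo = c        # the maximum lies in [c, hi]
    else:
        hi = d        # the maximum lies in [lo, d]
x = 0.5 * (lo + hi)
print(x, 1 - x, h_on_path(x))
```

The printed values agree with the analytic result (x, y) = (1/3, 2/3) and max height 3e^{−1/3} of exercise 11.1.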
Another problem is that of inscribing a rectangle of maximum area inside a given
ellipse, such that all corners of the rectangle lie on the ellipse, as shown in figure 11.2.

Figure 11.2 Diagram of a rectangle inscribed in the ellipse defined by equation 11.3, with top right-hand corner at (x, y).

The coordinates of the top right-hand corner of the rectangle are (x, y) and since the equation of the ellipse is
\[
\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1, \tag{11.3}
\]
this is the equation of constraint. The area of the rectangle is

A(x, y) = 4xy, x > 0, y > 0, (11.4)

so we need the maximum of this function subject to the constraint 11.3: this problem
is solved in exercise 11.2.
If there are two independent variables, (x, y), there can be only one constraint which
we denote by g(x, y) = 0, and we require the stationary points of f (x, y) subject to this
constraint. Geometrically the constraint equation, g(x, y) = 0, defines a curve C_g in the
Oxy plane, see for example figure 11.3, so we are searching for the stationary points of
f (x, y) along this curve.
With two independent variables there can be only one constraint because another
constraint, γ(x, y) = 0, defines another curve, Cγ that intersects Cg at isolated points,
if at all. Sometimes, however, the equations g(x, y) = 0 and γ(x, y) = 0 will define the
same curve, despite being algebraically dissimilar: then the functions g and γ are said
to be dependent and it can be shown that in the region where the curves g(x, y) = 0 and
γ(x, y) = 0 coincide there is a differentiable function F (u, v) of two real variables such

that F (g(x, y), γ(x, y)) =constant: alternatively, using the implicit function theorem,
section 1.3.7, it can be shown that γ can be expressed in terms of g, γ(x, y) = G(g(x, y)),
or vice versa. It is not always obvious that two functions define the same curve: for
instance the equations
\[
g(x, y) = y - \sinh^{-1}(\tan x) = 0 \quad\text{and}\quad \gamma(x, y) = \frac{2}{1 + \tan(x/2)} - 1 - e^{-y} = 0 \tag{11.5}
\]

define the same line in the vicinity of the origin.
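The claim is easy to check numerically: on the curve y = sinh⁻¹(tan x) the second function of equation 11.5 vanishes identically. A small sketch (not part of the notes; the sample abscissae are arbitrary):

```python
import math

def gamma(x, y):
    """The function gamma(x, y) of equation 11.5."""
    return 2.0 / (1.0 + math.tan(x/2)) - 1.0 - math.exp(-y)

for x in (-0.5, -0.1, 0.0, 0.2, 0.6):
    y = math.asinh(math.tan(x))   # a point on the curve g(x, y) = 0
    print(x, gamma(x, y))         # each value vanishes to rounding error
```

This reflects the identity e^{−sinh⁻¹(tan x)} = sec x − tan x = (1 − tan(x/2))/(1 + tan(x/2)), valid near the origin.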


The equation of constraint, g(x, y) = 0, can be used to express y in terms of x,
provided g_y(x, y) ≠ 0, and then the function f(x, y) becomes a function, f(x, y(x)), of the single variable x, representing the variation of f along those segments of the curve C_g not including tangents parallel to Oy, for instance the segments AB and BC of the
curve depicted in figure 11.3. The stationary points of f (x, y(x)) can then be found in
the usual manner. Similarly, for segments of C_g on which g_x(x, y) ≠ 0, such as A'B' or B'C' in figure 11.3, we can form f(x(y), y) and treat this as a function of the single
variable y.

y C’
C Cg
A’

A B
B’
x

Figure 11.3 A typical curve defined by a constraint equation


g(x, y) = 0. The segments AB and BC may be represented by func-
tions y(x) and the segment A0 B 0 and B 0 C 0 by functions x(y).

If there are three independent variables, (x, y, z) and we require the stationary points
of f (x, y, z) subject to the single constraint g1 (x, y, z) = 0, we may proceed in the same
manner, by using the constraint to express z in terms of (x, y) to form the function
f (x, y, z(x, y)) of two independent variables. With two constraints gk (x, y, z) = 0,
k = 1, 2, the more general implicit function theorem, described on page 30, may be used
to express any two variables in terms of the third, to express f (x, y, z) as a function
of one variable. In either case there are three ways to proceed and it is rarely clear in
advance which yields the simplest algebra.
In general, with n variables x = (x1 , x2 , . . . , xn ) there can be at most n − 1 con-
straints. Suppose there are m constraints, m ≤ n − 1, gk (x) = 0, k = 1, 2, · · · , m.
Then, in principle we may use these m equations to express m of the variables in terms
of the remaining n − m, hence giving a function of n − m variables. In practice this is
rarely an easy task.
There are two main methods of dealing with constrained stationary problems. The
conceptually simplest method is to reduce the number of independent variables, as
described above, and in simple examples this method is usually preferable. The more
elegant method, due to Lagrange (1736 – 1813), is described in the next section.
There are two main disadvantages with the direct method:

1. The method is biased because it treats the variables asymmetrically, by expressing


some in terms of the others; it is often difficult to determine the most convenient
choice in advance.
2. The most important difficulty, however, is that the method cannot easily be gen-
eralised to deal with other situations, such as functionals.
Use of the direct method is illustrated in the following exercises.

Exercise 11.1
Show that the function defined in equation 11.1 has a local maximum at x = 1/3, where y = 2/3, and that the height of the hill here is 3e^{−1/3}.

Exercise 11.2
Show that the area of the rectangle inscribed in the ellipse shown in figure 11.2
can be expressed in the form
\[
A(x) = \frac{4b}{a}\,x\sqrt{a^2 - x^2}, \quad 0 \le x \le a,
\]
and by finding the stationary point of this expression show that max(A) = 2ab.

Exercise 11.3
Geometric problems often give rise to constrained stationary problems and here
we consider a relatively simple example.
Let P be a point in the Cartesian plane with coordinates (A, B) and D the distance from P to any point (x, y) on the straight line with equation
\[
\frac{x}{a} + \frac{y}{b} = 1.
\]
Show that D² = (x − A)² + (y − B)² and deduce that the shortest distance is
\[
\min(D) = \frac{|ab - Ab - Ba|}{\sqrt{a^2 + b^2}}.
\]

Exercise 11.4
If A, B and C are the angles of a triangle show that the function

f (A, B, C) = sin A sin B sin C

is stationary when the triangle is equilateral.


Hint: the constraint is A + B + C = π.

Exercise 11.5
If z = f (x, y) and x and y satisfy the constraint g(x, y) = 0, show that at the
stationary points of z the contours of f (x, y), that is the curves defined by the
equations f (x, y) = constant, are tangential to the curve defined by g(x, y) = 0.

11.2 The Lagrange multiplier


The method for finding constrained stationary points described in the introduction
is unsatisfactory partly because it forces an arbitrary distinction between variables,
and partly because this technique cannot be applied to constrained problems in the
Calculus of Variations. The introduction of the Lagrange multiplier overcomes both
these difficulties.

11.2.1 Three variables and one constraint


Lagrange’s method allows all variables to be treated equally, and may be illustrated
using a function f (x, y, z) of three variables and with one constraint g(x, y, z) = 0.
The problem is to find the points at which f (x, y, z) is stationary subject to the con-
straint. Let (a, b, c) be the required stationary point and consider the neighbouring
points (a + uδ, b + vδ, c + wδ), where δ is small, which also satisfy the constraint, that
is g(a + uδ, b + vδ, c + wδ) = 0. Using Taylor’s theorem, see section 1.3.9, we have
 
∂g ∂g ∂g
g(a + uδ, b + vδ, c + wδ) = g(a, b, c) + u +v +w δ + O(δ 2 ), (11.6)
∂x ∂y ∂z
where all derivatives are evaluated at (a, b, c). But both points satisfy the constraint so we have
\[
u\frac{\partial g}{\partial x} + v\frac{\partial g}{\partial y} + w\frac{\partial g}{\partial z} = O(\delta).
\]
The left-hand side is independent of δ, so taking the limit δ → 0 gives
\[
u\frac{\partial g}{\partial x} + v\frac{\partial g}{\partial y} + w\frac{\partial g}{\partial z} = 0. \tag{11.7}
\]
This equation can be interpreted as the equation of a plane passing through the origin, in the Cartesian space with axes Ou, Ov and Ow. The normal to this plane is parallel to the vector n = (g_x, g_y, g_z), and the plane exists provided |n| ≠ 0: this means that the constraint must not be stationary at (a, b, c). Any point in this plane can be defined uniquely with just two coordinates. It follows that (u, v, w) cannot vary independently but that usually any one of these variables can be expressed in terms of the other two. This is, of course, equivalent to using the implicit function theorem on the equation g(x, y, z) = 0 to express one variable in terms of the other two.
If f (x, y, z) is stationary then, by definition, see section 3.2.1,
\[ f(a + u\delta,\, b + v\delta,\, c + w\delta) - f(a, b, c) = O(\delta^2) \]
which means, by the same argument as before, that
\[ u\frac{\partial f}{\partial x} + v\frac{\partial f}{\partial y} + w\frac{\partial f}{\partial z} = 0. \tag{11.8} \]
Recall that if there were no constraint this equation would have to hold for independent variations u, v and w: then by choosing v = w = 0 and u ≠ 0 we see that ∂f/∂x = 0: the other two
292 CHAPTER 11. CONDITIONAL STATIONARY POINTS

equations, ∂f /∂y = ∂f /∂z = 0, are obtained similarly. But because of the constraint
u, v and w cannot vary independently.
We proceed by introducing a new variable, the Lagrange multiplier λ, also named the
undetermined multiplier, so there are now four variables to be determined (x, y, z) and
λ: surprisingly this simplifies the problem. Multiply equation 11.7 by λ and subtract
from equation 11.8 to form another equation,
     
\[ \left(\frac{\partial f}{\partial x} - \lambda\frac{\partial g}{\partial x}\right)u + \left(\frac{\partial f}{\partial y} - \lambda\frac{\partial g}{\partial y}\right)v + \left(\frac{\partial f}{\partial z} - \lambda\frac{\partial g}{\partial z}\right)w = 0. \tag{11.9} \]

This equation is true for any value of λ. Because of the constraint, variations in u, v and w are not independent but, if ∂g/∂z ≠ 0, we may choose λ to make the coefficient of w in equation 11.9 zero, that is

\[ \frac{\partial f}{\partial z} - \lambda\frac{\partial g}{\partial z} = 0. \tag{11.10} \]
Then equation 11.9 reduces to
   
\[ \left(\frac{\partial f}{\partial x} - \lambda\frac{\partial g}{\partial x}\right)u + \left(\frac{\partial f}{\partial y} - \lambda\frac{\partial g}{\partial y}\right)v = 0. \]

Because u and v may be varied independently, by first setting v = 0 and then u = 0,


we obtain the two equations

\[ \frac{\partial f}{\partial x} - \lambda\frac{\partial g}{\partial x} = 0, \qquad \frac{\partial f}{\partial y} - \lambda\frac{\partial g}{\partial y} = 0. \tag{11.11} \]

The three equations 11.10 and 11.11 relate the four variables x, y, z and λ. Assuming
that the implicit function theorem can be applied, that is the Jacobian 1.26 (page 30)
is not zero, we can use these equations to express (x, y, z) in terms of λ. Then the
constraint becomes g(x(λ), y(λ), z(λ)) = 0, which determines appropriate values of λ.
This procedure is equivalent to defining an auxiliary function of four variables

F (x, y, z, λ) = f (x, y, z) − λg(x, y, z) (11.12)

and finding the stationary points of F (x, y, z, λ) using the conventional theory for all
four variables, that is the solutions of

\[ F_x = \frac{\partial f}{\partial x} - \lambda\frac{\partial g}{\partial x} = 0, \qquad F_y = \frac{\partial f}{\partial y} - \lambda\frac{\partial g}{\partial y} = 0, \qquad F_z = \frac{\partial f}{\partial z} - \lambda\frac{\partial g}{\partial z} = 0, \]

and Fλ = −g(x, y, z) = 0. Usually the first three of these are solved first to give
(x(λ), y(λ), z(λ)) in terms of λ, and then the fourth, the equation of constraint, is
used to determine λ, although the order in which these equations are solved is clearly
immaterial.
Thus the introduction of the Lagrange multiplier, λ, gives a method of finding
stationary points that treats the three original variables equally. Before showing how
this method generalises to n variables and m ≤ n − 1 constraints we apply it to the
triangle problem treated in exercise 11.4.

For this problem f (x, y, z) = sin x sin y sin z and g(x, y, z) = x + y + z − π, so that
the auxiliary function is

F (x, y, z, λ) = sin x sin y sin z − λ(x + y + z − π),

with each of x, y and z in the interval (0, π). Equations 11.10 and 11.11 become

sin x sin y cos z − λ = 0, sin x cos y sin z − λ = 0, cos x sin y sin z − λ = 0,

and x + y + z = π. Three different equations, independent of λ, may be obtained by


forming pairs of differences: thus subtracting the second equation from the first gives

sin x (sin y cos z − cos y sin z) = sin x sin(y − z) = 0. (11.13)

Similarly, by subtracting the third from the second and the third from the first we
obtain
sin z sin(x − y) = 0 and sin y sin(z − x) = 0.
From 11.13 either sin x = 0 or sin(y − z) = 0; but for a triangle of nonzero area none
of x, y or z can be zero or π, so −π < y − z < π and the only solution is y = z. The
remaining two equations give y = x and z = x and hence x = y = z and then the
constraint gives x = y = z = π/3.
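This solution is easy to confirm numerically. The following Python sketch (my own check, not part of the course text) verifies that x = y = z = π/3, with λ obtained from the third stationarity equation, satisfies all four equations of the auxiliary-function method, and that f decreases for nearby angles that still satisfy the constraint:

```python
import math

# At the claimed stationary point x = y = z = pi/3 the equations
# F_x = F_y = F_z = 0 and the constraint of
#   F = sin x sin y sin z - lam*(x + y + z - pi)
# should all hold, with lam fixed by F_z = 0.
x = y = z = math.pi / 3
lam = math.sin(x) * math.sin(y) * math.cos(z)   # from F_z = 0

residuals = [
    math.sin(y) * math.sin(z) * math.cos(x) - lam,  # F_x
    math.sin(x) * math.sin(z) * math.cos(y) - lam,  # F_y
    math.sin(x) * math.sin(y) * math.cos(z) - lam,  # F_z
    x + y + z - math.pi,                            # constraint g = 0
]
assert all(abs(r) < 1e-12 for r in residuals)

# f decreases along nearby constraint-satisfying variations, consistent
# with the equilateral triangle giving the maximum product.
f = lambda u, v: math.sin(u) * math.sin(v) * math.sin(math.pi - u - v)
f0 = f(math.pi / 3, math.pi / 3)
for du, dv in [(0.01, 0.0), (0.0, 0.01), (0.01, -0.02)]:
    assert f(math.pi / 3 + du, math.pi / 3 + dv) < f0
```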

Exercise 11.6
Use a Lagrange multiplier to find the stationary points of the problems set in
exercises 11.1, 11.2 and 11.3.

Exercise 11.7
Show that the stationary distance between the origin and the plane defined by the
equation ax + by + cz = d is given by the formula |d|/√(a² + b² + c²).

Exercise 11.8
Consider a rectangle, two sides of which are along the x- and y-axes; the bot-
tom left-hand corner is at the origin and the opposite corner lies on the line
x/a + y/b = 1, where a and b are positive numbers. Show that the stationary
area of such a rectangle is A = ab/4 and that for this rectangle the top right-hand
corner is at (a/2, b/2).

11.2.2 Three variables and two constraints


If there are three variables and two constraints, g1 (x, y, z) = 0 and g2 (x, y, z) = 0, then
equation 11.7 must hold for both constraints so we have the two equations
\[ u\frac{\partial g_1}{\partial x} + v\frac{\partial g_1}{\partial y} + w\frac{\partial g_1}{\partial z} = 0 \quad\text{and}\quad u\frac{\partial g_2}{\partial x} + v\frac{\partial g_2}{\partial y} + w\frac{\partial g_2}{\partial z} = 0, \tag{11.14} \]
∂x ∂y ∂z ∂x ∂y ∂z

where all derivatives are evaluated at the stationary point. Provided neither g1(x, y, z)
nor g2(x, y, z) is stationary, and that the normals to the planes defined by the equations

are not parallel, so that the planes exist and are distinct, then the planes intersect along
a line and there can be only one independent variable.
Equation 11.8 remains valid and now we proceed by introducing two Lagrange mul-
tipliers, λ1 and λ2 , one for each constraint. Thus from equations 11.8 and 11.14 we
may form another equation,
     
\[ \left(\frac{\partial f}{\partial x} - \lambda_1\frac{\partial g_1}{\partial x} - \lambda_2\frac{\partial g_2}{\partial x}\right)u + \left(\frac{\partial f}{\partial y} - \lambda_1\frac{\partial g_1}{\partial y} - \lambda_2\frac{\partial g_2}{\partial y}\right)v + \left(\frac{\partial f}{\partial z} - \lambda_1\frac{\partial g_1}{\partial z} - \lambda_2\frac{\partial g_2}{\partial z}\right)w = 0. \tag{11.15} \]
Now choose λ1 and λ2 to make the coefficients of v and w zero, that is
\[ \frac{\partial f}{\partial y} - \lambda_1\frac{\partial g_1}{\partial y} - \lambda_2\frac{\partial g_2}{\partial y} = 0 \quad\text{and}\quad \frac{\partial f}{\partial z} - \lambda_1\frac{\partial g_1}{\partial z} - \lambda_2\frac{\partial g_2}{\partial z} = 0. \tag{11.16} \]
Then, since u may be varied independently, we have a third equation
\[ \frac{\partial f}{\partial x} - \lambda_1\frac{\partial g_1}{\partial x} - \lambda_2\frac{\partial g_2}{\partial x} = 0. \tag{11.17} \]
The three equations 11.16 and 11.17 may, in principle, be solved to give (x, y, z) in terms
of λ1 and λ2 and then the constraints, gj (x, y, z) = 0, j = 1, 2, give two equations for
λ1 and λ2 . Needless to say, in practice these equations are not usually easy to solve.
As in the previous case this is formally equivalent to defining an auxiliary function

\[ F(x, y, z, \lambda_1, \lambda_2) = f(x, y, z) - \lambda_1 g_1(x, y, z) - \lambda_2 g_2(x, y, z), \tag{11.18} \]

of five variables and finding the stationary points of this, that is the solutions of
\[ \frac{\partial F}{\partial x} = 0, \quad \frac{\partial F}{\partial y} = 0, \quad \frac{\partial F}{\partial z} = 0, \quad \frac{\partial F}{\partial \lambda_1} = 0 \quad\text{and}\quad \frac{\partial F}{\partial \lambda_2} = 0. \]
We illustrate this method by showing how to find the stationary values of f(x, y, z) = ax² + by² + cz², subject to the variables being confined to the planes x + y + z = 1 and x + 2y + 3z = 2. The auxiliary function is

\[ F = ax^2 + by^2 + cz^2 - \lambda_1(x + y + z - 1) - \lambda_2(x + 2y + 3z - 2) \]

so the equations to be solved are

Fx = 2ax − (λ1 + λ2 ) = 0,
Fy = 2by − (λ1 + λ2 ) − λ2 = 0,
Fz = 2cz − (λ1 + λ2 ) − 2λ2 = 0.

In this case it is convenient to define a new variable λ = λ1 + λ2 , and then these three
equations can be solved to give
\[ x = \frac{\lambda}{2a}, \qquad y = \frac{\lambda + \lambda_2}{2b}, \qquad z = \frac{\lambda + 2\lambda_2}{2c}, \]
and the equations of constraint become

λ (ab + ac + bc)+λ2 (2ab + ac) = 2abc and λ (3ab + 2ac + bc)+λ2 (6ab + 2ac) = 4abc,

which have the solution


\[ \lambda = \frac{4ab}{a + 4b + c} \quad\text{and}\quad \lambda_2 = \frac{2b(c - a)}{a + 4b + c}. \]

Hence
\[ x = \frac{2b}{a + 4b + c}, \qquad y = \frac{a + c}{a + 4b + c} \quad\text{and}\quad z = x = \frac{2b}{a + 4b + c}. \tag{11.19} \]
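Results such as 11.19 are easy to test numerically. The sketch below (my own check, not part of the course text; the sample values a = 1, b = 2, c = 3 are my choice) confirms that the closed-form point satisfies both constraints and the three stationarity equations:

```python
# Numerical check of equations (11.19): the point must lie on both planes
# and satisfy F_x = F_y = F_z = 0 of the auxiliary function.
a, b, c = 1.0, 2.0, 3.0
s = a + 4 * b + c
x, y, z = 2 * b / s, (a + c) / s, 2 * b / s
lam = 4 * a * b / s                 # lam = lambda_1 + lambda_2
lam2 = 2 * b * (c - a) / s
lam1 = lam - lam2

assert abs(x + y + z - 1) < 1e-12              # first constraint
assert abs(x + 2 * y + 3 * z - 2) < 1e-12      # second constraint
assert abs(2 * a * x - lam1 - lam2) < 1e-12    # F_x = 0
assert abs(2 * b * y - lam1 - 2 * lam2) < 1e-12  # F_y = 0
assert abs(2 * c * z - lam1 - 3 * lam2) < 1e-12  # F_z = 0
```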

Exercise 11.9
Derive equations 11.19 by using the constraints to express x and y in terms of z.
Note that in this example the direct method is easier, because the constraints are
linear.

Exercise 11.10
If f (x) is a function of the n variables x = (x1 , x2 , · · · , xn ) constrained by the
single function g(x) = 0 show that the stationary points can be found by forming
the auxiliary function F (x, λ) = f (x) − λg(x) of n + 1 variables and finding its
stationary points.

11.2.3 The general case


The method of Lagrange multipliers is applied to the case of n variables and m ≤ n − 1
constraints, gj (x), j = 1, 2, · · · , m, in a similar fashion, but with m multipliers, so the
auxiliary function has n + m variables,
\[ F(x, \lambda_1, \lambda_2, \cdots, \lambda_m) = f(x) - \sum_{j=1}^{m} \lambda_j g_j(x) \tag{11.20} \]

where f (x) is the function for which stationary points are required. The stationary
points of F are at the roots of
\[ \frac{\partial F}{\partial x_k} = \frac{\partial f}{\partial x_k} - \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_k} = 0, \qquad k = 1, 2, \cdots, n, \tag{11.21} \]
\[ \frac{\partial F}{\partial \lambda_j} = -g_j(x) = 0, \qquad j = 1, 2, \cdots, m \le n - 1. \tag{11.22} \]

This method has the advantage of treating all variables equally and hence retaining any
symmetries that might be present.
The Lagrange multiplier method determines the position of stationary points. It is
generally more difficult to determine the nature of a constrained stationary point and
normally one has to use physical or geometric considerations besides algebraic methods
to understand the problem.

11.3 The dual problem


We end this chapter by returning to the case of only one constraint and one Lagrange
multiplier. That is we seek the stationary points of the function f (x) subject to the
constraint g(x) = 0. The auxiliary function is F (x, λ) = f (x) − λg(x) and, provided
λ 6= 0 this may be rewritten in the alternative form

G(x, µ) = g(x) − µf (x) where µλ = 1 and λG(x, µ) = −F (x, λ). (11.23)

This equation can be used to find the stationary points of g(x) subject to the constraint
f (x) = 0, which are given by the roots of

\[ \frac{\partial G}{\partial x_k} = \frac{\partial g}{\partial x_k} - \mu\frac{\partial f}{\partial x_k} = 0, \qquad k = 1, 2, \cdots, n, \]
which are the same equations as for the stationary points of the original problem. If
x(µ) is a solution of these equations the stationary point of the new constrained problem
is given by those µ satisfying f (x(µ)) = 0. Further, since µλ = 1, the stationary points
of the original problem are x(1/λ) with the values of λ given by g(x(1/λ)) = 0. Thus
the Lagrange multiplier method highlights a duality between,
a) the stationary points of f (x) with the constraint g(x) = 0, and
b) the stationary points of g(x) with the constraint f (x) = 0,
which is not apparent in the conventional method.
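A minimal sketch of this duality (my own toy example in Python, not taken from the text) uses f = x² + y² and g = x + y − 1; the primal multiplier is λ = 1 and the dual multiplier is µ = 1/λ:

```python
# Primal problem: f = x^2 + y^2 stationary subject to g = x + y - 1 = 0.
# From F_x = 2x - lam = 0 and F_y = 2y - lam = 0, x = y = lam/2, and the
# constraint fixes lam = 1.
lam = 1.0
x = y = lam / 2.0
assert abs(x + y - 1.0) < 1e-12          # primal constraint g = 0

# Dual problem: g stationary subject to f = 1/2, with mu = 1/lam.
mu = 1.0 / lam
xd = yd = 1.0 / (2.0 * mu)               # from G_x = 1 - 2*mu*x = 0
assert abs(xd**2 + yd**2 - 0.5) < 1e-12  # dual constraint f = 1/2
assert (xd, yd) == (x, y)                # the same stationary point
```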

Exercise 11.11
This exercise provides an illustration of the duality described above; compare this
problem with that considered in exercise 11.1.
Find the stationary value of the function g(x, y) = x + y − 1 subject to the
constraint f (x, y) = 3 exp(−x² − y²/2) − c, where c is a positive constant.

Exercise 11.12
An open rectangular box made of thin sheet metal has sides of height z and a
rectangular base of interior dimensions x and y. The base and sides of length x
are of (small) uniform thickness d and the sides of length y are of thickness 2d. If
the volume of metal is fixed prove that the volume of the box is stationary when
x = 2y = 4z.

Exercise 11.13
A vessel comprises a cylinder of radius r and height h with equal conical ends, the
semi-vertical angle of each cone being α. Show that the volume V and the surface
area, S, are given by

\[ V = \pi r^2 h + \frac{2\pi r^3}{3\tan\alpha} \quad\text{and}\quad S = 2\pi r h + \frac{2\pi r^2}{\sin\alpha}. \]
If r, h and α can vary, show that for a vessel of given volume the stationary surface
area occurs when cos α = 2/3. Also find the value of h in terms of r, and r in
terms of V.

11.4 Miscellaneous exercises


Exercise 11.14
Show that equations 11.5 define the same line in the neighbourhood of the origin.

Exercise 11.15
Find the stationary value of f = x² + y² + z² + w², subject to the constraint
(xyzw)² = 1, and the values of the variables at which the stationary values are
attained.

Exercise 11.16
Find the stationary points of f = xyzw⁹ subject to the constraint g = 4x⁴ + 2y⁸ +
z¹⁶ + 9w¹⁶ = 1 in the region where all variables are positive.

Exercise 11.17
If a, b, c and d are given positive numbers and x, y and z are positive, real variables
satisfying the equation x + y + z = d, show that the function

\[ f(x, y, z) = \frac{a^2}{x} + \frac{b^2}{y} + \frac{c^2}{z} \]

possesses a stationary value (a + b + c)²/d.

Exercise 11.18
Show that the shortest distance between the plane ax + by + cz = d and the
point (A, B, C) is given by

\[ D = \frac{|Aa + Bb + Cc - d|}{\sqrt{a^2 + b^2 + c^2}}. \]

Exercise 11.19
For a simple lens with focal length f the object distance p and the image distance
q are related by 1/p + 1/q = 1/f. If p + q = constant, find the stationary value of f.

Exercise 11.20
Show that the stationary points of f = ax² + by² + cz², where the constants a, b
and c are all positive, on the line where the vertical cylinder, x² + y² = 1, intersects
the plane x + y + z = 1, are given by
\[ x = \frac{\lambda_2}{2(a - \lambda_1)}, \qquad y = \frac{\lambda_2}{2(b - \lambda_1)} \quad\text{and}\quad z = \frac{\lambda_2}{2c}, \]

where the two possible values of (λ1 , λ2 ) are

\[ \lambda_1 = \frac{1}{2}\left(a + b + 4c \pm \Delta\right), \qquad \lambda_2 = \frac{2c(6c \pm 2\Delta)}{2c \pm \Delta} \quad\text{with}\quad \Delta = \sqrt{(a - b)^2 + 8c^2}. \]

Exercise 11.21
Show that the area, S, of canvas needed to make a tent of given volume V comprising a right circular cylinder of radius r, made of a single thickness of canvas, together with a conical top of height h, made of two thicknesses of canvas, is given by
\[ S = \frac{2V}{r} + 2\pi r\sqrt{r^2 + h^2} - \frac{2}{3}\pi r h. \]
If both r and h can vary, show that the stationary value of S, for fixed V, is given
by r√2 = 4h = R where V = 2πR³/3.
Chapter 12

Constrained Variational
Problems

12.1 Introduction
In this chapter we apply the Lagrange multiplier method to functionals with constrained
admissible functions. Examples are the isoperimetric and the catenary problems, de-
scribed in sections 3.5.5 and 3.5.6, where the constraint is another functional. In these
examples the stationary path is described by a single function, y(x).
But, the most celebrated isoperimetric problem is that enshrined in the myth de-
scribing the foundation of the Phoenician city of Carthage in 814 BC: this is that Dido,
also known as Elissa, having fled from Tyre after her brother, King Pygmalion, had
killed her husband, was granted by the Libyans as much land as an ox-hide could cover.
By cutting the hide into thin strips, she was able to claim far more ground than an-
ticipated. In common with all foundation myths there is no trace of evidence for its
veracity.
Dido’s solution is a circle which cannot be described by a single function, the natural
representation being parametric. Thus, we need to consider the effects of constraints
on both types of functionals.
There is, however, another type of constrained problem, of equal significance, exem-
plified by the problem of finding geodesics on surfaces. Consider a surface defined in the
three dimensional Cartesian space, which we suppose can be defined by an equation of
the form S(x, y, z) = 0. Given two points on this surface we require the shortest line, on
the surface, joining these points. Any smooth path can be represented parametrically
by three functions (x(t), y(t), z(t)) of a parameter t, with end points at t = 0 and t = 1.
The distance along this path is given by the functional
\[ D[x, y, z] = \int_0^1 dt\, \sqrt{\dot{x}^2 + \dot{y}^2 + \dot{z}^2} \]

and the constraint that forces this path to be on the surface is S(x(t), y(t), z(t)) = 0 for
0 ≤ t ≤ 1. This is a different type of constraint from those found in the problems described
above. In the non-assessed sections 12.7 and 12.8 this theory is used to solve variants
of the brachistochrone problem.


No fundamentally new ideas are presented in this chapter, but many ideas and
techniques introduced in previous chapters are used in a slightly different context, to
derive new results. As you read through this chapter you should ensure that you
thoroughly understand the previous work upon which it is based.

12.2 Conditional Stationary values of functionals


12.2.1 Functional constraints
One possible method of dealing with constrained problems is to use admissible functions
that automatically satisfy the constraint; this is the equivalent of the direct method
discussed in the introduction to the previous chapter. Unfortunately it is not always
possible to formulate satisfactory rules for defining such functions so the alternative
method, described in theorem 12.1 below, is essential.
The general theory for this type of function is a combination of the Lagrange multi-
plier method, described in chapter 11, and the derivation of the Euler-Lagrange equation
given in chapter 4; it is convenient to summarise the result as a theorem.
Theorem 12.1
Given the functional
\[ S[y] = \int_a^b dx\, F(x, y, y'), \qquad y(a) = A, \quad y(b) = B, \tag{12.1} \]

where the admissible curves must also satisfy the constraint functional
\[ C[y] = \int_a^b dx\, G(x, y, y') = c, \tag{12.2} \]

where c is a given constant, then, if y(x) is not a stationary path of C[y], there exists a
Lagrange multiplier λ such that y(x) is a stationary path of the auxiliary functional
\[ \overline{S}[y] = \int_a^b dx\, \overline{F}(x, y, y'), \qquad y(a) = A, \quad y(b) = B, \tag{12.3} \]

where F̄ = F − λG. That is, the stationary path is given by the solutions of the
Euler-Lagrange equation
 
\[ \frac{d}{dx}\left(\frac{\partial \overline{F}}{\partial y'}\right) - \frac{\partial \overline{F}}{\partial y} = 0, \qquad y(a) = A, \quad y(b) = B. \tag{12.4} \]

The solution of this Euler-Lagrange equation will depend upon λ, the value of which is
determined by substituting the solution into the constraint functional 12.2.

The proof of this theorem requires a significant, and not immediately obvious, change
to the proof presented in section 4.4. Thus before providing the proof it is instructive
to see what happens when the general theory of section 4.4 is applied directly to this
type of problem; this shows why a modification is required.

Suppose that y(x) is the required solution: consider the neighbouring admissible
function y(x) + h(x) where h(a) = h(b) = 0, then the Gâteaux differential is
\[ \Delta S[y, h] = \int_a^b dx \left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right) h(x). \tag{12.5} \]
But both y(x)+h(x) and y(x) are chosen to satisfy the constraint, that is C[y + h] = C[y],
so the rate of change of C[y] is zero, that is the Gâteaux differential is zero
\[ \Delta C[y, h] = \int_a^b dx \left(\frac{\partial G}{\partial y} - \frac{d}{dx}\frac{\partial G}{\partial y'}\right) h(x) = 0 \quad\text{for all } h(x). \tag{12.6} \]
It is assumed that C[y] is not stationary, so G(x, y, y′) does not satisfy the Euler-Lagrange equation. But 12.6 is true for all h(x) only if G satisfies the Euler-Lagrange
equation. This contradiction can be resolved with a judicious choice of h(x). The
problem is that the constraint places an additional restriction on the variation h(x)
so that the theory developed in chapter 5, which placed no restriction (other than
differentiability) on h(x), needs to be modified.
The same problem arises with functions of n real variables, s(x) and a single con-
straint c(x) = 0. In this case the equivalents of expressions 12.5 and 12.6 are
\[ \Delta s[x, h] = \sum_{k=1}^{n} \frac{\partial s}{\partial x_k}\, h_k = 0 \quad\text{and}\quad \Delta c[x, h] = \sum_{k=1}^{n} \frac{\partial c}{\partial x_k}\, h_k = 0. \]

But the second of these equations is true for all variations satisfying the constraint, so
the hk cannot be varied independently, and therefore we cannot deduce that ∂s/∂xk = 0
for all xk .
In order to derive the Euler-Lagrange equation 12.4 we use a special set of variations.
Recall that when first deriving the Euler-Lagrange equation in section 4.4 we used the
fundamental Lemma, section 4.3, which involved sets of functions h(x) that isolated
small intervals of the integrand. Here we use a modification of this method that involves
picking out two, small, distinct intervals.
This is achieved by writing
\[ h(x) = \epsilon_1 g(x - \eta_1) + \epsilon_2 g(x - \eta_2), \qquad \eta_1 \ne \eta_2, \tag{12.7} \]
where the function g(x − η) is strongly peaked in a neighbourhood of x = η and zero
for other x.
Such functions can be constructed from the type of function used to prove the
fundamental lemma, section 4.3; for example define
\[ g(x-\eta) = \begin{cases} \dfrac{1}{\delta^2}\left(\delta^2 - (x-\eta)^2\right), & a < \eta-\delta \le x \le \eta+\delta < b, \\ 0, & \text{otherwise}. \end{cases} \tag{12.8} \]

The coefficient δ⁻² is chosen to make g = O(1). This function is zero except in the neighbourhood of width 2δ centred at x = η.
For any function f (x) possessing a third derivative for a ≤ x ≤ b we have, see exercise 12.1,
\[ \int_a^b dx\, f(x)\, g(x - \eta) = \gamma f(\eta) + O(\gamma^3), \qquad \gamma = \frac{4\delta}{3}, \quad a + \delta < \eta < b - \delta. \tag{12.9} \]
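Equation 12.9 can be illustrated numerically. The following sketch (my own check, not part of the course text; the test function f = cos and the values of δ are my choices) integrates f(x)g(x − η) by the midpoint rule and confirms that the error behaves like O(γ³), so halving δ reduces it roughly eightfold:

```python
import math

# With the bump function (12.8), the integral of f(x)*g(x - eta) equals
# gamma*f(eta), gamma = 4*delta/3, with an error of order gamma^3.
def bump_integral(f, eta, delta, n=20000):
    """Midpoint-rule approximation over the support of g."""
    h = 2 * delta / n
    total = 0.0
    for j in range(n):
        x = eta - delta + (j + 0.5) * h
        g = (delta**2 - (x - eta)**2) / delta**2
        total += f(x) * g * h
    return total

f, eta = math.cos, 0.5              # a smooth test function, my choice
errors = []
for delta in (0.1, 0.05):
    gamma = 4 * delta / 3
    errors.append(abs(bump_integral(f, eta, delta) - gamma * f(eta)))

ratio = errors[0] / errors[1]
assert 7.5 < ratio < 8.5            # consistent with an O(gamma^3) error
```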

In the following analysis we use the specific ‘family’ of functions 12.8 in order to illustrate how the proof works. Such a restriction is not necessary, but without it the more general equivalent of equation 12.9 needs to be derived; the only significant difference between equation 12.9 and the general case is that the term O(δ³) is replaced by a term O(δ²).
For convenience define the functions
   
\[ \mathcal{F}(x) = \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} \quad\text{and}\quad \mathcal{G}(x) = \frac{d}{dx}\left(\frac{\partial G}{\partial y'}\right) - \frac{\partial G}{\partial y} \]

which we assume are sufficiently well behaved for a ≤ x ≤ b. Then the integrals 12.5
and 12.6 become, respectively,
\[ \Delta S = \int_a^b dx\, \mathcal{F}(x)\bigl(\epsilon_1 g(x - \eta_1) + \epsilon_2 g(x - \eta_2)\bigr) = \gamma\bigl(\epsilon_1 \mathcal{F}(\eta_1) + \epsilon_2 \mathcal{F}(\eta_2)\bigr) + O(\gamma^3), \tag{12.10} \]
\[ \Delta C = \int_a^b dx\, \mathcal{G}(x)\bigl(\epsilon_1 g(x - \eta_1) + \epsilon_2 g(x - \eta_2)\bigr) = \gamma\bigl(\epsilon_1 \mathcal{G}(\eta_1) + \epsilon_2 \mathcal{G}(\eta_2)\bigr) + O(\gamma^3). \tag{12.11} \]

The functional C[y] is not stationary, therefore we may choose η2 such that G(η2) ≠ 0, and then equation 12.11 gives, since ∆C = 0,
\[ \epsilon_2 = -\frac{\mathcal{G}(\eta_1)}{\mathcal{G}(\eta_2)}\,\epsilon_1 + O(\gamma^2). \]

Substituting this into equation 12.10 and using the fact that S[y] is stationary, so ∆S = 0, gives
\[ \left(\frac{\mathcal{F}(\eta_1)}{\mathcal{G}(\eta_1)} - \frac{\mathcal{F}(\eta_2)}{\mathcal{G}(\eta_2)}\right)\epsilon_1 = O(\gamma^2). \]

Since this equation must be true for all ε1, and the left-hand side is independent of γ, we must have
\[ \frac{\mathcal{F}(\eta_1)}{\mathcal{G}(\eta_1)} = \frac{\mathcal{F}(\eta_2)}{\mathcal{G}(\eta_2)}. \]
Finally, recall that η1 and η2 are arbitrary, so it follows that the ratio F(x)/G(x) is
independent of x. Setting this ratio to a constant λ we obtain
     
\[ \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} - \lambda\left[\frac{d}{dx}\left(\frac{\partial G}{\partial y'}\right) - \frac{\partial G}{\partial y}\right] = 0 \quad\text{for } a \le x \le b, \tag{12.12} \]

which is just equation 12.4 and can be derived from the auxiliary functional in the usual manner.
This proof shows clearly why two small parameters, ε1 and ε2, are necessary; we need the flexibility to isolate two distinct points, η1 and η2, in the interval (a, b) to show that the ratio F(x)/G(x) is independent of x. In this proof it is necessary to assume that G(x) ≠ 0 for almost all values of x in this interval: that is, C[y] must not be stationary.

Exercise 12.1
Prove equation 12.9.

Exercise 12.2
Use theorem 12.1 to show that the stationary path of the variational problem
\[ S[y] = \int_0^1 dx\, y'^2, \qquad y(0) = y(1) = 0, \]

subject to the constraint that the area under the curve is fixed, that is
\[ C[y] = \int_0^1 dx\, y(x) = A, \]

is given by y = 6Ax(1 − x), and that the undetermined multiplier is λ = 24A.
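A quick numerical confirmation of this result (my own addition, for the sample value A = 2) checks the Euler-Lagrange equation, the boundary conditions and the area constraint:

```python
# y = 6A x(1 - x) should satisfy the Euler-Lagrange equation
# 2y'' + lambda = 0 of F - lambda*G = y'^2 - lambda*y, with lambda = 24A.
A = 2.0
lam = 24 * A

def y(x):
    return 6 * A * x * (1 - x)

ypp = -12 * A                        # y'' is constant for this parabola
assert abs(2 * ypp + lam) < 1e-12    # Euler-Lagrange equation
assert y(0) == 0 and y(1) == 0       # boundary conditions

n = 1000                             # midpoint rule for the area constraint
area = sum(y((j + 0.5) / n) / n for j in range(n))
assert abs(area - A) < 1e-4
```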

Exercise 12.3
Show that the stationary path of the functional
\[ S[y] = \int_1^2 dx\, x\,y'^2, \qquad y(1) = y(2) = 0, \]

subject to the constraint


\[ \int_1^2 dx\, y = 1 \quad\text{is given by}\quad y(x) = \frac{2\ln 2}{3\ln 2 - 2}\left(1 - x + \frac{\ln x}{\ln 2}\right). \]
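Again the quoted answer can be checked numerically. In the sketch below (my own addition, not part of the course text) the multiplier value λ = 4 ln 2/(3 ln 2 − 2) is my own computation, not stated in the exercise:

```python
import math

# The quoted path y(x) = k(1 - x + ln x/ln 2), k = 2 ln 2/(3 ln 2 - 2),
# should satisfy the boundary conditions, the constraint, and the
# Euler-Lagrange equation d/dx(2x y') = -lambda with lambda = 2k.
ln2 = math.log(2.0)
k = 2 * ln2 / (3 * ln2 - 2)

def y(x):
    return k * (1 - x + math.log(x) / ln2)

assert abs(y(1.0)) < 1e-12 and abs(y(2.0)) < 1e-12

n = 20000                            # midpoint rule for the constraint
integral = sum(y(1 + (j + 0.5) / n) / n for j in range(n))
assert abs(integral - 1) < 1e-6

lam = 2 * k                          # my computation of the multiplier
h = 1e-4                             # central differences for d/dx(2x y')
def yp(x):
    return (y(x + h) - y(x - h)) / (2 * h)

lhs = (2 * (1.5 + h) * yp(1.5 + h) - 2 * (1.5 - h) * yp(1.5 - h)) / (2 * h)
assert abs(lhs + lam) < 1e-4
```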

We end this section by considering the effect of M functional constraints on a functional


of n dependent variables. The extension required is identical to that described in
the previous chapter, namely a Lagrange multiplier is added for each constraint: we
summarise the result as a theorem.
Theorem 12.2
Given the functional
\[ S[\mathbf{y}] = \int_a^b dx\, F(x, \mathbf{y}(x), \mathbf{y}'(x)), \qquad \mathbf{y}(a) = \mathbf{A}, \quad \mathbf{y}(b) = \mathbf{B}, \tag{12.13} \]

where y = (y1 , y2 , . . . , yn ) and where the admissible curves must also satisfy the M
constraint functionals
\[ C_j[\mathbf{y}] = \int_a^b dx\, G_j(x, \mathbf{y}(x), \mathbf{y}'(x)) = c_j, \qquad j = 1, 2, \cdots, M, \tag{12.14} \]

where the cj are M given constants, then, if y(x) is not a stationary path of any of the
constraints, there exists a set of M Lagrange multipliers λj , j = 1, 2, · · · , M , such that
y(x) is a stationary path of the functional
\[ \overline{S}[\mathbf{y}] = \int_a^b dx\, \overline{F}(x, \mathbf{y}(x), \mathbf{y}'(x)), \qquad \mathbf{y}(a) = \mathbf{A}, \quad \mathbf{y}(b) = \mathbf{B}, \tag{12.15} \]
where \(\overline{F} = F - \sum_{j=1}^{M} \lambda_j G_j\). That is, the stationary path is given by the solution of the
Euler-Lagrange equations
 
\[ \frac{d}{dx}\left(\frac{\partial \overline{F}}{\partial y_k'}\right) - \frac{\partial \overline{F}}{\partial y_k} = 0, \qquad y_k(a) = A_k, \quad y_k(b) = B_k, \quad k = 1, 2, \cdots, n. \tag{12.16} \]

The solution of these n Euler-Lagrange equations will depend upon M Lagrange mul-
tipliers, the values of which are determined by substituting the solution into the M
constraint functionals 12.14.

Exercise 12.4
Show that the stationary paths of the functional
\[ S[y, z] = \int_0^1 dx \left( y'^2 + z'^2 - 2xz' - 4z \right), \qquad y(0) = z(0) = 0, \quad y(1) = z(1) = 1, \]

subject to the constraint


\[ C[y, z] = \int_0^1 dx \left( y'^2 - xy' - z'^2 \right) = c, \]

are given by

\[ y = \frac{(4 - 3\lambda)x - \lambda x^2}{4(1 - \lambda)} \quad\text{and}\quad z = \frac{(3 + 2\lambda)x - x^2}{2(1 + \lambda)} \]

where λ is a solution of
\[ 1 + c = \frac{24 - 46\lambda + 23\lambda^2}{48(1 - \lambda)^2} - \frac{1}{12(1 + \lambda)^2}. \]

12.2.2 The dual problem


The form of theorem 12.1 suggests the same duality as for functions of real variables,
described in section 11.3. Thus we may change the roles of the functionals S[y] and C[y]
in theorem 12.1 and, provided λ 6= 0, the Euler-Lagrange equation of the functional
C[y] = C[y]−µS[y] gives the stationary paths of C[y] subject to the constraint S[y] = s,
which provides the equation for the Lagrange multiplier µ. The following exercise is
the dual of the problem considered in exercise 12.2.

Exercise 12.5
Show that the stationary path of the variational problem
\[ S[y] = \int_0^1 dx\, y, \qquad y(0) = y(1) = 0, \]
subject to the constraint \(C[y] = \int_0^1 dx\, y'^2 = c\), is given by \(y(x) = \sqrt{3c}\, x(1 - x)\),
and that the undetermined multiplier is \(\mu = 1/(4\sqrt{3c}\,)\).

12.2.3 The catenary


Here we determine the shape of the catenary, that is, the shape assumed by an inexten-
sible cable of uniform density, ρ, and known length, hanging between fixed supports.
In figure 12.1 we show an example of such a curve with the points of support at (0, B)
and (a, A), with a > 0 and B < A.

(a,A)
A
y

B
x
x=0 x=a
Figure 12.1 The catenary formed by a uniform cable
hanging between two points at different heights.

If a curve is described by a differentiable function y(x) it can be shown, see exercise 3.19
(page 117), that the potential energy E of the cable is proportional to the functional
\[ E[y] = \rho g \int_0^a dx\, y\sqrt{1 + y'^2}, \qquad y(0) = B, \quad y(a) = A \ge B. \tag{12.17} \]

The curve that minimises this functional, subject to the length of the cable,
\[ L[y] = \int_0^a dx\, \sqrt{1 + y'^2}, \tag{12.18} \]

remaining constant is the shape assumed by the hanging cable.


Notice that the functional E[y] is identical to that giving the area of a surface of
revolution, see equation 5.11 (page 155). But, in the present case we shall see that the
existence of the constraint changes the behaviour of the solutions.
Experience leads us to expect that provided L is larger than the distance between the supports, √(a² + (A − B)²), the cable hangs in a specific manner; thus we expect that there is a unique path that minimises E[y] with the constraint L[y]. Here we show that provided L > √(a² + (A − B)²) (and the cable is strong enough) there are always
two stationary paths. But, in section 5.3 we saw that when A = B, with no constraint,
there are either two or no smooth stationary paths of E[y], depending upon the ratio
A/a. In exercise 5.19 it was shown that if B = 0 and A > 0, again with no constraint,
there is no solution. This illustrates the significance of constraints.
A physical interpretation of the effect of the removal of this constraint is given by
considering a slight modification of the catenary, whereby the points of support are two
smooth pegs and the cable is draped over these with the surplus cable resting on the
ground, as shown in figure 12.2: the important property of a smooth peg is that around
it tension in the cable does not change. We suppose that the cable is sufficiently long
that there is always some cable on the ground.

a
Figure 12.2 Diagram showing a cable hanging over two
smooth pegs, at the same height, A, above the ground, a
distance a apart. The cable is long enough to reach the
ground on both sides.

In this example the potential energy of the vertical segments is independent of the
shape of the hanging portion, so the energy is given by equation 12.17, and there is no
constraint. This is the same functional as gives the area of a surface of revolution.
The hanging portion of the cable is supported only by the weight of the vertical
portion of the cable, so we consider the effect of keeping A and B fixed and changing
a, the separation between the pegs. First consider the case A = B.
If a ≪ A the weight of the hanging cable is relatively small in comparison with the vertical portion, and we expect the portion between the pegs to be almost horizontal. In addition there will be a solution where the hanging portion falls almost vertically near the pegs and with a section of it resting on the floor. Figure 5.11 (page 160) shows the two solutions, for a ≪ A; one is almost horizontal and is shown in section 8.7 to be a local minimum.
If a ≫ A the weight of the hanging cable is relatively large and cannot be supported by the vertical portion. Now the only solution is the Goldschmidt solution, equation 5.20, which is physically possible only for an infinitely flexible cable.
Notice that if B = 0 the length of the vertical portion of the cable must be less
than the hanging portion, which therefore cannot be supported, so there is no smooth
solution, as in exercise 5.19. This example demonstrates the importance of constraints.
Returning to the main problem, choose the axes so the left-hand support is at the
origin, that is B = 0, and the right-hand end has coordinates (a, A). Further we may
assume, with no loss of generality, that A ≥ 0. The energy and constraint functionals
are given in equations 12.17 and 12.18, so if λρg is the Lagrange multiplier the auxiliary
functional is proportional to
\[ E[y] = \int_0^a dx\, (y - \lambda)\sqrt{1 + y'^2}, \qquad y(0) = 0, \quad y(a) = A \ge 0, \tag{12.19} \]

and this can be expressed in terms of a new variable z = y − λ


\[ E[z] = \int_0^a dx\, z\sqrt{1 + z'^2}, \qquad z(0) = -\lambda, \quad z(a) = A - \lambda. \]

The first-integral of this functional is


\[ \frac{z}{\sqrt{1 + z'^2}} = c, \]

where c is a constant. Solving this equation for z′ gives the first-order equation
\[ \frac{dz}{dx} = \pm\frac{\sqrt{z^2 - c^2}}{c}, \]
which is the same equation as derived in section 5.3.2. Putting z = c cosh φ(x) gives cφ′ = ±1, so the general solution is
 
\[ y = \lambda + c\cosh\left(\frac{x + d}{c}\right), \tag{12.20} \]

where d is another constant. This solution contains three unknown constants, λ, d


and c, which are obtained from the two boundary conditions and the constraint, as
shown next.
The boundary conditions y(0) = 0 and y(a) = A give the equations
   
\[ \lambda = -c\cosh\left(\frac{d}{c}\right) \quad\text{and}\quad A = \lambda + c\cosh\left(\frac{a + d}{c}\right), \tag{12.21} \]

and the constraint becomes


\[ L = \int_0^a dx\, \sqrt{1 + \sinh^2\left(\frac{x + d}{c}\right)} = \int_0^a dx\, \cosh\left(\frac{x + d}{c}\right) = c\left[\sinh\left(\frac{a + d}{c}\right) - \sinh\left(\frac{d}{c}\right)\right] = 2c\cosh\left(\frac{a + 2d}{2c}\right)\sinh\left(\frac{a}{2c}\right). \tag{12.22} \]

Equations 12.21 and 12.22 give three equations enabling the constants λ, c and d to be
determined in terms of L, B and (a, A). It is not possible to find formulae for these
constants, but a numerical solution is made relatively easy after some rearrangements
are made. Subtracting equations 12.21 gives
      
\[ A = c\left[\cosh\left(\frac{a + d}{c}\right) - \cosh\left(\frac{d}{c}\right)\right] = 2c\sinh\left(\frac{a + 2d}{2c}\right)\sinh\left(\frac{a}{2c}\right). \tag{12.23} \]

On squaring and subtracting equations 12.22 and 12.23 we obtain



\[ L^2 - A^2 = 4c^2\sinh^2\left(\frac{a}{2c}\right) \quad\text{or, with}\quad \xi = \frac{a}{2c}, \quad \sinh\xi = \xi\,\frac{\sqrt{L^2 - A^2}}{a}. \tag{12.24} \]
This equation for ξ has two real solutions, ξ = ±ξ0 , where ξ0 is the positive solution
of the second equation; so c = ±c0 , c0 = a/(2ξ0 ) > 0. These two values of c give two
values, d± , of d which can be found by dividing 12.23 by 12.22 to give
   
\[ \tanh\left(\frac{a + 2d_\pm}{2c_0}\right) = \pm\frac{A}{L}, \qquad 0 < \frac{A}{L} < 1. \]

If D0 is the positive solution of tanh D0 = A/L then

\[ \frac{d_\pm}{c_0} = -\xi_0 \pm D_0, \qquad \xi_0 = \frac{a}{2c_0}. \]

Then equation 12.21 gives the following two values for λ,

\[ \lambda_\pm = \mp c_0\cosh(\xi_0 \mp D_0), \quad\text{giving}\quad \lambda_+ + \lambda_- = A. \]

Hence the two solutions are


   
\[ y_+(x) = c_0\left[\cosh\left(\frac{2\xi_0}{a}x - (\xi_0 - D_0)\right) - \cosh(\xi_0 - D_0)\right] \tag{12.25} \]
and
\[ y_-(x) = -c_0\left[\cosh\left(\frac{2\xi_0}{a}x - (\xi_0 + D_0)\right) - \cosh(\xi_0 + D_0)\right]. \tag{12.26} \]
a

The solution y+ (x) has a local minimum at x = xm = a(1 − D0 /ξ0 )/2, and y− (x) has
a maximum at x = xm . Also we note that y− (a − x) = A − y+ (x). An example of each
of these solutions is shown in figure 12.3. Only y+ (x) is physically significant in the
present context.
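The numerical determination of these constants is easily sketched in code. The following Python fragment (my own illustration of the recipe, not part of the course text; the data a = A = 1, L = 3 match figure 12.3 and the bisection bracket is my choice) solves equation 12.24 for ξ0, builds y+(x) from equation 12.25, and checks the boundary conditions and the length constraint:

```python
import math

# Solve sinh(xi) = k*xi, k = sqrt(L^2 - A^2)/a, for the positive root xi_0
# by bisection, then form the catenary y_+ of (12.25).
a, A, L = 1.0, 1.0, 3.0
k = math.sqrt(L**2 - A**2) / a

lo, hi = 1.0, 10.0                  # bracket chosen by hand for this data
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if math.sinh(mid) < k * mid:    # still below the line: root is above
        lo = mid
    else:
        hi = mid
xi0 = 0.5 * (lo + hi)               # positive root of (12.24)

c0 = a / (2 * xi0)
D0 = math.atanh(A / L)

def yplus(x):
    """The catenary (12.25) through (0, 0) and (a, A)."""
    return c0 * (math.cosh(2 * xi0 * x / a - (xi0 - D0))
                 - math.cosh(xi0 - D0))

assert abs(yplus(0.0)) < 1e-9 and abs(yplus(a) - A) < 1e-9

# length of the curve by the midpoint rule, with a central-difference slope
n = 20000
h = a / n
length = 0.0
for j in range(n):
    x = (j + 0.5) * h
    slope = (yplus(x + 1e-6) - yplus(x - 1e-6)) / 2e-6
    length += math.sqrt(1 + slope * slope) * h
assert abs(length - L) < 1e-4       # reproduces the constraint L[y_+] = L
```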

Figure 12.3 Graphs of the functions y±(x) in the case a = A = 1 and L = 3.

We can deduce the existence of these two solutions directly from the original functional. Suppose that y(x) satisfies the Euler-Lagrange equation associated with E[y]; then if w(x) = A − y(a − x), so that w(0) = 0 and w(a) = A,
\[ E[y] = \int_0^a dx\, \bigl(A - \lambda - w(a - x)\bigr)\sqrt{1 + w'(a - x)^2} = -\int_0^a du\, \bigl(w(u) - \bar{\lambda}\bigr)\sqrt{1 + w'(u)^2}, \qquad u = a - x, \quad \lambda + \bar{\lambda} = A. \]

Thus if y+(x) is a stationary path then so is y−(x). Also,
\[
E[y_-] - E[y_+] = c_0^2\left(\sinh 2\xi_0 - 2\xi_0\right) > 0, \tag{12.27}
\]
that is, the potential energy of the path y+(x) is less than that of the path y−(x); physical considerations suggest that y+(x) gives a minimum value of E[y].
In figure 12.4 we show some examples of y+ (x) for a = A = 1 and various values
of L.

Figure 12.4 Graphs showing catenaries y+(x), defined in equation 12.25, of various lengths, L = 1.42, 1.5, 2 and 3, for a = A = 1.

Exercise 12.6
Show that equation 12.24 for ξ has a unique real solution if L is larger than the
distance between the origin and (a, A). What is the positive limiting value of c as
the stationary path y+ (x) tends to the straight line between the end points?

Exercise 12.7
For given values of a and L, (L > a), show that the catenary y+ (x) with zero
gradient at the left end, x = 0, has the height difference A = L tanh ξ where
a sinh 2ξ = 2Lξ.

Exercise 12.8
Prove the inequality 12.27.

Exercise 12.9
(a) Show that the Euler-Lagrange equation associated with the functional E[y], defined in equation 12.19, is
\[
(y-\lambda)y'' = 1 + y'^2, \qquad y(0) = 0, \quad y(a) = A.
\]
(b) If y(x) is a solution of this equation and w(u) = A − y(x), u = a − x, show that w(u) satisfies the equation
\[
(w-\bar\lambda)w''(u) = 1 + w'(u)^2, \qquad w(0) = 0, \quad w(a) = A
\quad\text{and}\quad \lambda + \bar\lambda = A.
\]

Hence explain why another solution for the minimum surface problem, discussed
in section 5.3, cannot be generated by this transformation.

12.3 Variable end points


Variational problems with variable end points, but without constraints, were considered
in section 10.3. The addition of one or more constraints does not alter this theory in
any significant way, although its implementation is usually more difficult.

Suppose that we require the stationary paths of the functional
\[
S[y] = \int_a^v dx\,F(x,y,y'), \qquad y(a) = A, \tag{12.28}
\]
where the right-hand end of the path lies on the curve defined by τ(x, y) = 0 and where the constraint
\[
C[y] = \int_a^v dx\,G(x,y,y') = c, \quad\text{a constant}, \tag{12.29}
\]
also needs to be satisfied.
Using a similar analysis to that outlined in section 12.2.1 it can be shown that the required stationary path is given by the stationary path of the auxiliary functional
\[
\overline{S}[y] = \int_a^v dx\,\overline{F}(x,y,y'), \qquad y(a) = A, \quad \overline{F} = F - \lambda G, \tag{12.30}
\]
where λ is a Lagrange multiplier and at x = v the transversality condition (page 266),
\[
\left[\tau_x\overline{F}_{y'} + \tau_y\left(y'\overline{F}_{y'} - \overline{F}\right)\right]_{x=v} = 0, \tag{12.31}
\]
is satisfied.
As before the solution of the associated Euler-Lagrange equation depends upon λ,
the value of which is determined by the constraint.

Exercise 12.10
A curve of given length L is described by the positive function y(x) passing through
the origin and some point, (v, 0), with v > 0, to be determined. Find the shape
of the curve making the area under it stationary.
Hint: in this example the boundary curve is τ(x, y) = y = 0.

Exercise 12.11
A curve described by the positive function y(x) passing through the origin and
some point, to be determined, x = v > 0 on the x-axis, is rotated about the x-axis
to form a solid body.
(a) Show that the volume, V[y], and the surface area, A[y], of this body are given by
\[
V[y] = \pi\int_0^v dx\,y^2 \qquad\text{and}\qquad A[y] = 2\pi\int_0^v dx\,y\sqrt{1+y'^2}.
\]
(b) If the surface area is given determine the path making the volume stationary, and find the volume in terms of A.
Hint: in this example the boundary curve is τ(x, y) = y = 0.

Exercise 12.12
Show that the equation of the cable with the right-hand end fixed at (a, A), where a and A are positive, and with the left-hand end free to slide on a vertical pole aligned along the y-axis is given by
\[
y = A + c\cosh\left(\frac{x}{c}\right) - c\cosh\left(\frac{a}{c}\right),
\]
where c = a/ξ and ξ is given by the positive root of Lξ/a = sinh ξ.
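The root of Lξ/a = sinh ξ quoted in this exercise is straightforward to find numerically. This small check (not part of the course text; sample values and names are ours) verifies that the resulting curve has the stated length and end height:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Bisection root-finder; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0:
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, A, L = 1.0, 1.0, 2.0                      # sample values with L > a

xi = bisect(lambda x: math.sinh(x) - L * x / a, 1e-6, 20.0)
c = a / xi

def y(x):
    return A + c * math.cosh(x / c) - c * math.cosh(a / c)

# The cable length is c*sinh(a/c), and the gradient at x = 0 is sinh(0) = 0,
# so the left end meets the pole horizontally, as a free end must.
length = c * math.sinh(a / c)
```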

Exercise 12.13
Show that the equations of a cable of length L and uniform density, with the left
end free to slide on a vertical pole aligned along the y-axis and the right end free
to slide along the straight line x/a + y/b = 1, a, b > 0, is
\[
y = \lambda + \frac{bL}{a}\cosh\left(\frac{ax}{bL}\right), \qquad
0 \le x \le \frac{bL}{a}\sinh^{-1}\left(\frac{a}{b}\right),
\]
for some λ for which you should find an expression in terms of a, b and L.

12.4 Broken extremals


The theory of broken extremals, section 10.5.2, remains essentially unchanged when
constraints are added. For one constraint the theory is as described in that section
except that the integrand F is replaced by \(\overline{F} = F - \lambda G\), where λ is a Lagrange multiplier
and G the integrand of the constraint.
We illustrate this theory with the simple example requiring the stationary paths of
\[
S[y] = \int_0^a dx\,y'^2, \qquad y(0) = 0, \quad y(a) = A, \tag{12.32}
\]
of given length
\[
L[y] = \int_0^a dx\,\sqrt{1+y'^2},
\]
and with a discontinuous derivative at x = c, with 0 < c < a.

The modified functional is
\[
\overline{S}[y] = \int_0^a dx\left(y'^2 - \lambda\sqrt{1+y'^2}\right), \qquad y(0) = 0, \quad y(a) = A. \tag{12.33}
\]

This integrand depends only upon y′, so the solutions of the associated Euler-Lagrange equation are straight lines, y = mx + d. On the interval 0 ≤ x ≤ c, since y(0) = 0, the appropriate solution is y = m1x, for some constant m1. On the interval c ≤ x ≤ a, the solution through y(a) = A is y = A + m2(x − a). The solution is continuous at x = c, so
\[
(m_1 - m_2)c = A - m_2 a. \tag{12.34}
\]
The Weierstrass-Erdmann (corner) conditions connecting the two sides of the solution at x = c are, see equations 10.53 and 10.54 (page 275),
\[
\lim_{x\to c^-}\left(\overline{F} - y'\overline{F}_{y'}\right) = \lim_{x\to c^+}\left(\overline{F} - y'\overline{F}_{y'}\right),
\qquad
\lim_{x\to c^-}\overline{F}_{y'} = \lim_{x\to c^+}\overline{F}_{y'}.
\]
Since
\[
\overline{F}_{y'} = y'\left(2 - \frac{\lambda}{\sqrt{1+y'^2}}\right)
\qquad\text{and}\qquad
\overline{F} - y'\overline{F}_{y'} = -y'^2 - \frac{\lambda}{\sqrt{1+y'^2}},
\]

these conditions become
\[
m_1^2 + \frac{\lambda}{\sqrt{1+m_1^2}} = m_2^2 + \frac{\lambda}{\sqrt{1+m_2^2}},
\qquad
m_1\left(2 - \frac{\lambda}{\sqrt{1+m_1^2}}\right) = m_2\left(2 - \frac{\lambda}{\sqrt{1+m_2^2}}\right). \tag{12.35}
\]

A solution of the first equation is m1 = m, m2 = −m, for some m; then the second equation gives
\[
m\left(2 - \frac{\lambda}{\sqrt{1+m^2}}\right) = 0,
\qquad\text{giving the nontrivial solution}\qquad
\sqrt{1+m^2} = \frac{\lambda}{2}.
\]

The constraint now gives
\[
L = \int_0^a dx\,\sqrt{1+m^2} = a\sqrt{1+m^2}
\qquad\text{and hence}\qquad
\lambda = \frac{2L}{a} \quad\text{and}\quad m = \pm\sqrt{\frac{L^2}{a^2}-1}.
\]

Equation 12.34 for continuity then gives c = (A + ma)/(2m). Hence the stationary paths are
\[
y(x) = \begin{cases} mx, & 0 \le x \le c, \\ A + m(a-x), & c \le x \le a, \end{cases}
\qquad\text{where}\quad m = \pm\sqrt{\frac{L^2}{a^2}-1} \quad\text{and}\quad c = \frac{A+ma}{2m}.
\]
Since 0 < c < a we must have |A| < ma.
With no corner conditions the differentiable solution exists only if L = √(a² + A²), there being insufficient flexibility to satisfy the constraint and the Euler-Lagrange equation for any other values of L, a and A.
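A quick numerical sanity check of this broken extremal (the sample values of L, a and A are our own choices, not taken from the text):

```python
import math

a, A, L = 1.0, 0.2, 1.5                      # sample values; requires |A| < m*a

m = math.sqrt(L**2 / a**2 - 1.0)             # take the positive root
c = (A + m * a) / (2.0 * m)                  # position of the corner
lam = 2.0 * L / a                            # the Lagrange multiplier

def y(x):
    """The corner solution: slope +m up to x = c, slope -m beyond it."""
    return m * x if x <= c else A + m * (a - x)

# Total length of the two straight segments, which should equal L,
# and the corner condition sqrt(1+m^2) = lambda/2 should hold.
length = a * math.sqrt(1.0 + m**2)
```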

Exercise 12.14
Show that the only solutions of equations 12.35 are those considered in the text.

Exercise 12.15
This is a long, difficult question which should be attempted only if time permits.
An inextensible cable with uniform density ρ is suspended between the points
(0, B) and (a, A), with A ≥ B, where the y-axis is vertically upwards. A weight of
mass M is firmly attached to the cable at distances L1 and L2 from the left and
right ends respectively, all distances being measured along the cable.
(a) Show that the energy functional is
\[
E[y] = Mgy(\xi) + \rho g\int_0^\xi dx\,y\sqrt{1+y'^2} + \rho g\int_\xi^a dx\,y\sqrt{1+y'^2},
\]
where ξ is the x-coordinate of the weight, and that the two constraints are
\[
L_1 = \int_0^\xi dx\,\sqrt{1+y'^2} \qquad\text{and}\qquad L_2 = \int_\xi^a dx\,\sqrt{1+y'^2}.
\]

(b) Derive the Euler-Lagrange equations for the cable and show that their solutions are
\[
y_1(x) = \lambda_1 + c_1\cosh\left(\frac{x-d_1}{c_1}\right), \qquad 0 \le x \le \xi, \quad y_1(0) = B,
\]
\[
y_2(x) = \lambda_2 + c_2\cosh\left(\frac{x-d_2}{c_2}\right), \qquad \xi \le x \le a, \quad y_2(a) = A,
\]
where λ1 and λ2 are two Lagrange multipliers and (c1, c2, d1, d2) are constants arising from the integration of the Euler-Lagrange equations.
(c) Show that c1 = c2 = c and that the six remaining unknown constants (λ1, λ2, ξ, c, d1, d2) are determined by the following six equations:
\[
L_1 = \int_0^\xi dx\,\sqrt{1+y_1'^2} = c\left[\sinh\left(\frac{\xi-d_1}{c}\right) + \sinh\left(\frac{d_1}{c}\right)\right],
\]
\[
L_2 = \int_\xi^a dx\,\sqrt{1+y_2'^2} = c\left[\sinh\left(\frac{a-d_2}{c}\right) - \sinh\left(\frac{\xi-d_2}{c}\right)\right],
\]
\[
B = \lambda_1 + c\cosh\left(\frac{d_1}{c}\right)
\qquad\text{and}\qquad
A = \lambda_2 + c\cosh\left(\frac{a-d_2}{c}\right),
\]
\[
M = \rho c\left[\sinh\left(\frac{\xi-d_2}{c}\right) - \sinh\left(\frac{\xi-d_1}{c}\right)\right]
\]
and
\[
\lambda_1 + c\cosh\left(\frac{\xi-d_1}{c}\right) = \lambda_2 + c\cosh\left(\frac{\xi-d_2}{c}\right).
\]

12.5 Parametric functionals


The general theory for a parametrically defined curve is identical to that described in section 12.2.1, in particular theorem 12.2. Consider the case of three variables, (x, y, z), depending upon a parameter t, and one constraint: the functional will be
\[
S[x,y,z] = \int_0^1 dt\,\Phi(x,y,z,\dot x,\dot y,\dot z) \tag{12.36}
\]
with given boundary conditions and with admissible functions restricted to those paths that satisfy the constraint,
\[
C[x,y,z] = \int_0^1 dt\,G(x,y,z,\dot x,\dot y,\dot z) = c, \tag{12.37}
\]
where c is a constant. This is just the problem dealt with by theorem 12.2, so the stationary paths satisfy the three Euler-Lagrange equations
\[
\frac{d}{dt}\left(\frac{\partial\overline\Phi}{\partial\dot u}\right) - \frac{\partial\overline\Phi}{\partial u} = 0,
\qquad u = \{x,y,z\}, \quad \overline\Phi = \Phi - \lambda G, \tag{12.38}
\]
with the same boundary conditions as defined for the original functional, and where λ is a Lagrange multiplier.

We illustrate this theory by applying it to the original isoperimetric problem of


Dido, that is, we require the shape of the closed curve of given length L that encloses
the largest area, though we show only that the area is stationary. A version of this
problem was considered in exercise 12.10, where only the upper half of the curve was
considered: using a parametric representation of the functions this restriction is not
necessary.
The area of a closed curve in the Oxy-plane, see equation 9.5 (page 242), is
\[
A[x,y] = \frac12\int_0^{2\pi} dt\,(x\dot y - \dot x y),
\qquad x(0) = x(2\pi), \quad y(0) = y(2\pi), \tag{12.39}
\]
where the range of the parameter t is appropriate for it to be an angle, and the curve is traversed anti-clockwise. The constraint is the length,
\[
C[x,y] = \int_0^{2\pi} dt\,\sqrt{\dot x^2+\dot y^2} = L. \tag{12.40}
\]

If λ is the Lagrange multiplier the modified functional is
\[
\overline{A}[x,y] = \frac12\int_0^{2\pi} dt\left[x\dot y - \dot x y - 2\lambda\sqrt{\dot x^2+\dot y^2}\right] \tag{12.41}
\]
and the two associated Euler-Lagrange equations for x and y, respectively, are
\[
\frac{d}{dt}\left(\frac{\lambda\dot x}{\sqrt{\dot x^2+\dot y^2}} + \frac12 y\right) + \frac12\dot y = 0,
\qquad
\frac{d}{dt}\left(\frac{\lambda\dot y}{\sqrt{\dot x^2+\dot y^2}} - \frac12 x\right) - \frac12\dot x = 0.
\]
These simplify to
\[
\lambda\frac{d}{dt}\left(\frac{\dot x}{\sqrt{\dot x^2+\dot y^2}}\right) + \dot y = 0
\qquad\text{and}\qquad
\lambda\frac{d}{dt}\left(\frac{\dot y}{\sqrt{\dot x^2+\dot y^2}}\right) - \dot x = 0, \tag{12.42}
\]
which integrate directly to
\[
\frac{\lambda\dot x}{\sqrt{\dot x^2+\dot y^2}} = \alpha - y
\qquad\text{and}\qquad
\frac{\lambda\dot y}{\sqrt{\dot x^2+\dot y^2}} = \beta + x, \tag{12.43}
\]
for some constants α and β. Now multiply the first of these by ẏ, the second by ẋ and subtract to give (α − y)ẏ − (β + x)ẋ = 0. Integrate this to obtain
\[
(x+\beta)^2 + (y-\alpha)^2 = \gamma^2, \tag{12.44}
\]
where γ is another real constant. This is the equation of the circle with centre at (−β, α) and radius γ. Its circumference is 2πγ = L, which gives the required path. In parametric form its equations are
\[
x = -\beta + \frac{L}{2\pi}\cos t \qquad\text{and}\qquad y = \alpha + \frac{L}{2\pi}\sin t. \tag{12.45}
\]
The position of the centre of this circle cannot be determined from the information
provided.
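A direct numerical check (the midpoint rule and the sample values of L, α and β are our own) that the circle 12.45 satisfies the length constraint 12.40 and that the area functional 12.39 evaluates to L²/4π on it:

```python
import math

L, alpha, beta = 2.0, 0.3, -0.5      # alpha, beta arbitrary: the centre is free
r = L / (2.0 * math.pi)              # radius gamma = L/(2*pi)

n = 100000
dt = 2.0 * math.pi / n
area = 0.0
length = 0.0
for i in range(n):
    t = (i + 0.5) * dt               # midpoint rule
    x = -beta + r * math.cos(t)
    y = alpha + r * math.sin(t)
    xdot = -r * math.sin(t)
    ydot = r * math.cos(t)
    area += 0.5 * (x * ydot - xdot * y) * dt      # integrand of 12.39
    length += math.hypot(xdot, ydot) * dt         # integrand of 12.40
```

The α- and β-dependent terms in the area integrand average to zero over a full period, which is why the enclosed area is independent of the (undetermined) centre.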

Exercise 12.16
An alternative method of finding the stationary path for the area from equations 12.42 is to use the arc length, s, as the independent variable, which is related to the parameter t by the relation
\[
\frac{ds}{dt} = \sqrt{\dot x^2+\dot y^2}.
\]
(a) Show that with s as the independent variable equations 12.42 become
\[
\lambda\frac{dx}{ds} = \alpha - y \qquad\text{and}\qquad \lambda\frac{dy}{ds} = \beta + x.
\]
Further, show that these equations can be converted to
\[
\lambda^2\frac{d^2y}{ds^2} + y = \alpha,
\qquad\text{having the general solutions}\qquad
y = \alpha + a\cos(s/\lambda+\gamma), \quad x = -\beta - a\sin(s/\lambda+\gamma),
\]
where a and γ are constants.
(b) Show that 2πλ = L, derive equations 12.45 and deduce that 2πa = L.

Exercise 12.17
What is the shape of the closed curve, enclosing a given area, for which the length is stationary?

12.6 The Lagrange problem


A different type of problem, originally formulated by Lagrange (1736 – 1813) and since associated with his name, consists of finding stationary paths of the functional
\[
S[y] = \int_a^b dx\,F(x,y,y'), \tag{12.46}
\]
where y(x) = (y1, y2, . . . , yn) is an n-dimensional vector function, with constraints defined by the m < n functions
\[
C_j(x,y,y') = 0, \qquad j = 1, 2, \ldots, m < n, \tag{12.47}
\]
and such that certain boundary conditions are satisfied.


There are a number of complications and variants to this type of problem, which is
one reason that boundary conditions were not specified, but is also why this introductory
treatment is not assessed.
There are two different types of constraints to consider. The simplest type depends
upon x and y, but not the derivatives y0 . Such constraints play an important role
in dynamics and are known as holonomic constraints. Constraints that depend upon
y0 , and cannot be reduced to a form independent of y0 , are known as non-holonomic
constraints: both types of constraints are sometimes named finite subsidiary conditions,
or side-conditions. We consider holonomic constraints first.
The simplest method of dealing with holonomic constraints is to use a coordinate
system that automatically satisfies the constraint. If possible this is usually the most

convenient method and has the advantage that each constraint reduces the number of dependent variables by unity. For example if there are three variables with the constraint C(y1, y2, y3) = y1² + y2² + y3² − r² = 0, that forces the admissible paths to lie on a sphere, it is usually better to use the two spherical polar angles (θ, φ), where
\[
y_1 = r\sin\theta\cos\phi, \qquad y_2 = r\sin\theta\sin\phi, \qquad y_3 = r\cos\theta.
\]
The method described here is an alternative, and a specific example is considered in
section 12.6.2.
We assume that the m constraint equations Cj (x, y) = 0 are sufficiently well behaved
that along the stationary path they can be used to express m of the dependent variables
in terms of the remaining n − m variables, which means that boundary conditions for at
most n − m variables need be specified. We shall assume that all holonomic constraints
are consistent with the boundary conditions. In the following proof we assume that
there is just one constraint, C(x, y) = 0.
Suppose that y(x) is the stationary path with the boundary conditions y(a) = A, y(b) = B. If y + h is a neighbouring admissible path that also satisfies the constraint, so h(a) = h(b) = 0, and for each k, hk(x) is in D1(a, b), then the Gâteaux differential is
\[
\Delta S[y,h] = \int_a^b dx\sum_{k=1}^n\left(\frac{\partial F}{\partial y_k} - \frac{d}{dx}\frac{\partial F}{\partial y_k'}\right)h_k(x). \tag{12.48}
\]
But also C(x, y) = C(x, y + h) for all a ≤ x ≤ b, and hence
\[
\sum_{k=1}^n\frac{\partial C}{\partial y_k}h_k(x) = 0, \tag{12.49}
\]

which shows that the variations, hk (x), are not independent.


Now integrate this expression over the range of x and choose the hk to be functions peaked about x = η, see equation 12.8 (page 301), so that for any sufficiently differentiable function f(x)
\[
\int_a^b dx\,f(x)h_k(x) = \gamma_k f(\eta) + O(\gamma_k^3),
\]
as in equation 12.9 (page 301). Thus equations 12.48 and 12.49 become
\[
\sum_{k=1}^n\gamma_k\frac{\partial C}{\partial y_k} = O(\gamma^3)
\qquad\text{and}\qquad
\sum_{k=1}^n\gamma_k\left(\frac{\partial F}{\partial y_k} - \frac{d}{dx}\frac{\partial F}{\partial y_k'}\right) = O(\gamma^3),
\]

all functions being evaluated at x = η. Introduce a Lagrange multiplier, λ(η), which is


a function of η, and subtract these equations to obtain
\[
\sum_{k=1}^n\gamma_k\left[\frac{\partial F}{\partial y_k} - \frac{d}{dx}\left(\frac{\partial F}{\partial y_k'}\right) - \lambda(\eta)\frac{\partial C}{\partial y_k}\right]_{x=\eta} = 0. \tag{12.50}
\]

Now choose λ(η) so that the coefficient of γn is zero. We have the freedom to choose
the remaining n − 1 coefficients γk, k = 1, 2, . . . , n − 1, independently; hence, using the same argument as in section 11.2, we obtain the n Euler-Lagrange equations
\[
\frac{d}{dx}\left(\frac{\partial F}{\partial y_k'}\right) - \frac{\partial F}{\partial y_k} + \lambda(x)\frac{\partial C}{\partial y_k} = 0,
\qquad y(a) = A, \quad y(b) = B, \quad k = 1, 2, \ldots, n. \tag{12.51}
\]

The derivation of this result assumed that there is a single holonomic constraint C(x, y).
This is not necessary; the addition of another holonomic constraint adds another La-
grange multiplier and in equation 12.51 the term
\[
\lambda(x)\frac{\partial C}{\partial y_k}
\qquad\text{is replaced by}\qquad
\lambda_1(x)\frac{\partial C_1}{\partial y_k} + \lambda_2(x)\frac{\partial C_2}{\partial y_k}.
\]
A common type of problem involving a single holonomic constraint is described in
section 12.6.2.

12.6.1 A single non-holonomic constraint


In order to be specific consider a single non-holonomic constraint, that is m = 1, and
n = 2 in equations 12.46 and 12.47. Assume first that the boundary conditions,
y1 (a) = A1 , y2 (a) = A2 , y1 (b) = B1 , y2 (b) = B2 ,
are prescribed. But the constraint C(x, y1, y2, y1′, y2′) = 0 can, provided C_{y2′} ≠ 0, be inverted to express y2′ as a function of all the other variables. If we assume that y1 is known, as it would be if the stationary paths had been found, then the constraint gives another first-order differential equation for y2: integration gives one arbitrary constant, which may be chosen to satisfy the boundary condition y2(a) = A2, but there is no
circumstances it is usually necessary to impose fewer boundary conditions and rely on
natural boundary conditions to supply the rest. Because there are many combinations
of imposed and natural boundary conditions we provide a flavour of the theory by
quoting a theorem valid for the restricted set of imposed conditions,
y1 (a) = A1 , y2 (a) = A2 , y1 (b) = B1 ,
and a natural boundary condition on y2 at x = b.
Theorem 12.3
Given the functional
\[
S[y] = \int_a^b dx\,F(x,y_1,y_2,y_1',y_2'),
\qquad y_1(a) = A_1, \quad y_2(a) = A_2, \quad y_1(b) = B_1, \tag{12.52}
\]
with the single constraint
\[
C(x,y_1,y_2,y_1',y_2') = 0, \quad\text{where}\quad C_{y_2'} \ne 0, \quad a \le x \le b, \tag{12.53}
\]
then if y1(x) and y2(x) are twice continuously differentiable stationary paths of this system, there exists a Lagrange multiplier, λ(x), such that
\[
\overline{S}[y] = \int_a^b dx\,\overline{F}(x,y_1,y_2,y_1',y_2'),
\qquad y_1(a) = A_1, \quad y_2(a) = A_2, \quad y_1(b) = B_1, \tag{12.54}
\]
where \(\overline{F} = F - \lambda(x)C\), is stationary on this path and satisfies the natural boundary condition
\[
\overline{F}_{y_2'}\Big|_{x=b} = \left[F_{y_2'} - \lambda C_{y_2'}\right]_{x=b} = 0. \tag{12.55}
\]
The solution of the associated Euler-Lagrange equation will depend upon λ(x), which
is determined by substituting the solution into the constraint equation 12.53.

12.6.2 An example with a single holonomic constraint


A simple problem with a single holonomic constraint involves finding geodesics on the surface of a right circular cylinder. Consider such a surface in Oxyz, with equation x² + y² = a²: we require the geodesics on this surface through points with coordinates (a, 0, 0) and (a cos α, a sin α, b). Let the paths be parameterised by a variable 0 ≤ t ≤ 1,
so the distance along a path is
\[
S[x,y,z] = \int_0^1 dt\,\sqrt{\dot x^2+\dot y^2+\dot z^2}
\qquad\text{with the constraint}\qquad x(t)^2 + y(t)^2 = a^2. \tag{12.56}
\]
Using the Euler-Lagrange equations 12.51 we see that
\[
\frac{d}{dt}\left(\frac{\dot x}{\Delta}\right) + 2\lambda x = 0, \qquad \Delta^2 = \dot x^2+\dot y^2+\dot z^2,
\]
\[
\frac{d}{dt}\left(\frac{\dot y}{\Delta}\right) + 2\lambda y = 0
\qquad\text{and}\qquad
\frac{d}{dt}\left(\frac{\dot z}{\Delta}\right) = 0.
\]

It is now helpful to use s, the arc length along the curve, as the independent variable, so that ṡ² = Δ². First we note that
\[
\frac{d}{dt}\left(\frac{\dot x}{\Delta}\right) = \dot s\frac{d}{ds}\left(\frac{\dot s}{\Delta}\frac{dx}{ds}\right) = \dot s\frac{d^2x}{ds^2}.
\]
Since t is an arbitrary parameter we may put t = s to reduce the three Euler-Lagrange equations (since now ṡ = 1) to
\[
x''(s) + 2\lambda(s)x(s) = 0, \qquad y''(s) + 2\lambda(s)y(s) = 0, \qquad z''(s) = 0. \tag{12.57}
\]

We now show that the Lagrange multiplier is a constant, which makes the integration of these equations easy. Differentiating the constraint twice with respect to s gives
\[
xx'' + yy'' + x'^2 + y'^2 = 0.
\]
But, from the definition of s we have x'² + y'² + z'² = 1 and together with equations 12.57 we obtain −2λa² + 1 − z'² = 0, and hence λ = constant (since z' is a constant). The solutions of equations 12.57 that fit the initial conditions are, with μ² = 2λ,
\[
x = a\cos\mu s, \qquad y = a\sin\mu s, \qquad z = \beta s,
\]
for some constant β. If the length of the curve is S then μS = α + 2nπ, for some integer n, and βS = b. Defining a new variable τ = μs we obtain a parametric representation of a geodesic,
\[
x = a\cos\tau, \qquad y = a\sin\tau, \qquad z = \frac{b\tau}{2\pi n+\alpha}, \qquad 0 \le \tau \le 2\pi n + \alpha. \tag{12.58}
\]
For this example it is far easier to use cylindrical polar coordinates, see exercise 3.20
(page 118), which automatically satisfy the constraint.
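Each integer n in equation 12.58 gives a different helical geodesic between the two points. A short numerical illustration of their lengths (the sample values of a, α and b are ours), using the fact that the speed along 12.58 is constant:

```python
import math

a, alpha, b = 1.0, math.pi / 3.0, 2.0    # end point (a*cos(alpha), a*sin(alpha), b)

def helix_length(n):
    """Length of the geodesic 12.58 that makes n extra turns round the cylinder.

    With tau in [0, T], T = 2*pi*n + alpha, the speed |r'(tau)| is the
    constant sqrt(a^2 + (b/T)^2), so the total length is sqrt((a*T)^2 + b^2).
    """
    T = 2.0 * math.pi * n + alpha
    return math.sqrt((a * T)**2 + b**2)

# Unrolling the cylinder onto a plane maps each helix to a straight line of the
# same length, so for |alpha| <= pi the n = 0 helix is the shortest path.
lengths = [helix_length(n) for n in range(3)]
```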

Exercise 12.18
Consider the functional
\[
S[y,z] = \int_a^b dx\left(y'^2 + z'^2 - y^2\right),
\qquad y(a) = A_1, \quad z(a) = A_2, \quad y(b) = B_1,
\]
with the constraint C(z, y') = z − y' = 0 and with a natural boundary condition for z(b).

(a) Show that the Euler-Lagrange equations can be written in the form
\[
\frac{d^4y}{dx^4} - \frac{d^2y}{dx^2} - y = 0,
\qquad y(a) = A_1, \quad y'(a) = A_2, \quad y(b) = B_1, \quad y''(b) = 0,
\]
with z = y' and λ = −2y'''.

(b) Show that this equation for y(x) can be derived from the associated functional of the single dependent variable
\[
J[y] = \int_a^b dx\left(y''^2 + y'^2 - y^2\right),
\qquad y(a) = A_1, \quad y'(a) = A_2, \quad y(b) = B_1,
\]
and with a natural boundary condition for y''(b).

12.7 Brachistochrone in a resisting medium


The modification of the brachistochrone problem to include a resistance is of historical
importance and was first successfully treated by Euler (1707 – 1783) in chapter 3 of
his 1744 volume The method of Finding Plane Curves that Show Some Property of
Maximum or Minimum . . . . Indeed it was Euler who first considered the problems
described here and in chapter 10. The problem considered here is difficult, requiring
many of the techniques and ideas developed earlier in the course, and is therefore good
revision even though this section is not assessed. Euler, on the other hand, developed
these techniques in order to solve this type of problem.
The analysis that follows is difficult and follows that outlined by Pars¹ (1965, chapter 8). You may find it hard to understand why certain steps are taken but, as usual
with any complicated problem, there is often no simple explanation and what is written
down is the result of trial and many errors: the blind alleys cannot be shown.
There are a variety of types of resistance that can be considered and here we follow
Euler by assuming that the resistance depends only upon the speed, v, of the particle.
This is a more difficult problem than that dealt with in chapter 5 because now energy
is not conserved, which means that there is not a simple relation between the speed
and the height of the particle, as in equation 5.2 (page 149). Instead we need to use
Newton’s equation of motion, which here takes on the role of a constraint. First we
need to derive this equation in an appropriate form.
¹ L. A. Pars, An Introduction to the Calculus of Variations (Heinemann, 1965).

Newton’s equation of motion


For a particle of mass, m, sliding along a smooth, rigid wire a natural variable for the
description of its position is the distance, s, measured along the wire, from the starting
point. The Cartesian coordinates of the initial point are taken to be (x, y) = (0, A),
and here s = 0; we take the y-axis to be vertically upwards. There are two forces acting
on the particle, the downward force of gravity and the resistance, that depends upon
the speed, v = ṡ and acts tangentially (because the wire is smooth) so as to slow the
motion.
Figure 12.5 Diagram showing the forces acting on the particle, assuming that the
distance AP is increasing with time. The line P N is the tangent to the curve at
the instantaneous position P , and makes an angle φ with the downward vertical.

For a particle at P , consider the tangent P N , figure 12.5, to the curve which makes an
angle φ with the downward vertical, and let s be the distance along the curve from the
starting point, increasing with x. The component of the vertical force of gravity along
the tangent, P N , in the direction of increasing s, is mg cos φ.
If the magnitude of the resistance per unit mass is R(v), where R(v) is a positive
function such that² R(0) = 0, then by resolving forces along the tangent at P, Newton's equation becomes
\[
m\frac{d^2s}{dt^2} = -mR(v) + mg\cos\phi. \tag{12.59}
\]
The chain rule gives
\[
\frac{d^2s}{dt^2} = \frac{dv}{dt} = \frac{dv}{ds}\frac{ds}{dt} = v\frac{dv}{ds},
\]
and since δy = −δs cos φ the equation of motion can be written as the first-order equation
\[
v\frac{dv}{ds} = -R(v) - g\frac{dy}{ds}. \tag{12.60}
\]
We consider only cases where initially the particle is either stationary or moving downwards with a speed such that R(v) is small compared with the gravitational force, g per unit mass. Thus v′(s) is initially positive. Subsequently there are two possible types of motion:
A: v(s) steadily increases until the terminal point is reached, or
B: v(s) increases to a maximum value at which v′(s) = 0, so here gy′(s) = −R(v) < 0,
² A typical approximation is to assume that R is proportional to v², see section 3.5.3, but this is poor for low speeds, when R is proportional to v, and fails near the speed of sound.

after which v(s) decreases to its value at the terminal point.


We assume that the actual motion is either type A or type B; it will be seen that the
distinction between these two types of motion is important.

Exercise 12.19
If the wire is vertical, so s = −y, and the particle starts from rest at s = 0, and
R(v) = κv², for some constant κ, show that the equation of motion 12.60 becomes
\[
v\frac{dv}{ds} = -\kappa v^2 + g
\qquad\text{and hence show that}\qquad
v^2 = \frac{g}{\kappa}\left(1 - e^{-2\kappa s}\right)
\]
for a particle starting at rest where s = 0.
Note that as s → ∞, v → √(g/κ) and approaches this limiting or terminal speed monotonically.
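The closed form quoted in this exercise is easy to check by integrating the equation of motion numerically; writing u = v² turns it into the linear equation du/ds = 2(g − κu). This sketch (the parameter values are our own) compares a Runge-Kutta integration with the exact solution:

```python
import math

g, kappa = 9.81, 0.5                    # sample values

def u_exact(s):
    """u = v^2 = (g/kappa)*(1 - exp(-2*kappa*s)), the solution quoted above."""
    return (g / kappa) * (1.0 - math.exp(-2.0 * kappa * s))

def u_rk4(s_end, steps=10000):
    """Integrate du/ds = 2*(g - kappa*u), u(0) = 0, by classical Runge-Kutta."""
    h = s_end / steps
    u = 0.0
    f = lambda u: 2.0 * (g - kappa * u)
    for _ in range(steps):
        k1 = f(u)
        k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2)
        k4 = f(u + h * k3)
        u += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return u

# v^2 approaches the terminal value g/kappa from below as s grows.
```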

The functional and boundary conditions


Now consider the integral for the time taken to travel between two given points (0, A) and (b, 0), along a curve parameterised by τ ∈ [0, 1]. The time of passage, T, is given by
\[
T = \int_0^T dt = \int_0^1 d\tau\,\frac{dt}{d\tau}. \tag{12.61}
\]
If the coordinates of points on the curve are (x(τ), y(τ)), by definition,
\[
v = \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}
  = \sqrt{x'(\tau)^2 + y'(\tau)^2}\,\frac{d\tau}{dt},
\]
and hence the functional for the three variables x(τ), y(τ) and v(τ) is
\[
T[x,y,v] = \int_0^1 d\tau\,\frac{\sqrt{x'(\tau)^2 + y'(\tau)^2}}{v}, \tag{12.62}
\]
where a prime denotes differentiation with respect to τ. Now express the equation of motion in terms of the independent variable τ, rather than s. Since
\[
\frac{ds}{d\tau} = \sqrt{x'(\tau)^2 + y'(\tau)^2}, \tag{12.63}
\]
equation 12.60 becomes, on using the chain rule,
\[
v\frac{dv}{d\tau}\frac{d\tau}{ds} = -R(v) - g\frac{dy}{d\tau}\frac{d\tau}{ds},
\qquad\text{that is,}\qquad
vv' + R(v)\sqrt{x'(\tau)^2 + y'(\tau)^2} + gy' = 0.
\]

This constraint is satisfied by the three variables, so the auxiliary functional is
\[
\overline{T}[x,y,v] = \int_0^1 d\tau\,\overline{F}(x',y',v,v'), \tag{12.64}
\]
where
\[
\overline{F} = H(\tau,v)\sqrt{x'(\tau)^2+y'(\tau)^2} - \lambda vv' - \lambda gy'
\qquad\text{with}\qquad
H(\tau,v) = \frac{1}{v} - \lambda(\tau)R(v), \tag{12.65}
\]

and where λ(τ ) is the Lagrange multiplier, that depends upon the independent variable,
here τ .
There are five known boundary conditions. The initial values of (x, y, v) are assumed
known, and given by (0, A, v0 ), and the final values of (x, y) are given by (b, 0). The
final value of v is not known, because this depends upon the path taken. For this we
use the natural boundary condition, equation 12.55 (page 317), at τ = 1,
\[
\left.\frac{\partial\overline{F}}{\partial v'}\right|_{\tau=1} = -\lambda(1)v(1) = 0. \tag{12.66}
\]
Assuming that v(1) ≠ 0, this gives λ(1) = 0. In exercise 12.20 it is shown that λ(τ) < 0 for 0 ≤ τ < 1, and hence that H(τ, v) > 0.
Exercise 12.20
(a) Show that λ′(τ) > 0 at τ = 1.
(b) If λ(τ1) = 0 for 0 < τ1 < 1, show that λ′(τ1) > 0. Deduce that λ(τ) < 0 for 0 ≤ τ < 1, and that H(τ, v) > 0 for 0 ≤ τ ≤ 1.
Hint: for part (a) you will need the Euler-Lagrange equation for λ(τ), given in equation 12.71.

The Euler-Lagrange equations and their solution


The Euler-Lagrange equations for x and y are particularly simple because \(\overline{F}\) does not depend upon either x or y. Thus the equation for x is
\[
\frac{d}{d\tau}\left(\frac{x'H}{\sqrt{x'(\tau)^2+y'(\tau)^2}}\right) = 0,
\qquad\text{that is}\qquad
\frac{x'H}{\sqrt{x'(\tau)^2+y'(\tau)^2}} = \alpha, \tag{12.67}
\]
where α is a constant. Because we expect x(τ) to be an increasing function of τ, α must be a positive constant.
The Euler-Lagrange equation for y is
\[
\frac{d}{d\tau}\left(\frac{y'H}{\sqrt{x'(\tau)^2+y'(\tau)^2}} - \lambda g\right) = 0,
\qquad\text{that is}\qquad
\frac{y'H}{\sqrt{x'(\tau)^2+y'(\tau)^2}} = \lambda g - \beta, \tag{12.68}
\]
where β is another constant. Since λ(1) = 0 it follows that for type A motion, in which y'(τ) < 0 for all τ, β > 0; for type B motion, during which y'(τ) changes sign, we must have β < 0. Expressions derived at the end of this calculation show how the values of the constants α and β may be determined.
It is now helpful to use s as the independent variable because, using equation 12.63,
\[
\frac{1}{\sqrt{x'(\tau)^2+y'(\tau)^2}}\frac{dx}{d\tau} = \frac{dx}{ds}
\qquad\text{and}\qquad
\frac{1}{\sqrt{x'(\tau)^2+y'(\tau)^2}}\frac{dy}{d\tau} = \frac{dy}{ds},
\]
and hence equations 12.67 and 12.68 have the simpler form
\[
H(s,v)\frac{dx}{ds} = \alpha, \tag{12.69}
\]
\[
H(s,v)\frac{dy}{ds} = \lambda g - \beta, \qquad H = \frac{1}{v} - \lambda(s)R(v). \tag{12.70}
\]

The third Euler-Lagrange equation, for v, is
\[
\frac{d}{d\tau}(-\lambda v) - \sqrt{x'(\tau)^2+y'(\tau)^2}\,H_v + \lambda v' = 0,
\]
and this simplifies to
\[
\frac{d\lambda}{d\tau}v + \sqrt{x'(\tau)^2+y'(\tau)^2}\,H_v = 0. \tag{12.71}
\]
Again using s for the independent variable gives the simpler equation
\[
v\frac{d\lambda}{ds} = -H_v. \tag{12.72}
\]
Equations 12.69, 12.70 and 12.72 are the three Euler-Lagrange equations that we need
to solve. The remaining analysis is difficult partly because we change variables several
times and partly because it is necessary to keep in mind the expected behaviour of the
solution: in particular the two types of motion described before exercise 12.19 need to
be treated slightly differently.
Since x′(s)² + y′(s)² = 1, squaring and adding equations 12.69 and 12.70 gives
\[
H^2 = \alpha^2 + (\lambda g - \beta)^2,
\qquad\text{that is}\qquad
\left(\frac{1}{v} - \lambda R\right)^2 = \alpha^2 + (\lambda g - \beta)^2, \tag{12.73}
\]
where we have used the definition 12.65. This is a quadratic equation for λ and hence can be used to express λ as a function of v.
Before solving this equation consider its value at the terminal point, τ = 1, where λ(1) = 0. If the speed at the terminus is Vt, this equation gives
\[
V_t^2 = \frac{1}{\alpha^2+\beta^2},
\]
a result needed later.


It is helpful to concentrate first on the type A motion, in which the speed steadily increases. Then β > 0 and the maximum speed is at the terminus, max(v) = Vt; consequently during the motion v⁻² > α² + β². The quadratic equation 12.73 can be written in the form
\[
\lambda^2\left(g^2 - R^2\right) - 2\lambda\left(\beta g - \frac{R}{v}\right) - \left(\frac{1}{v^2} - \alpha^2 - \beta^2\right) = 0. \tag{12.74}
\]
In general air resistance is relatively small, so we assume that g > R for the range of speeds considered. Then this quadratic equation has the solutions
\[
\left(g^2 - R^2\right)\lambda = \left(\beta g - \frac{R}{v}\right)
\pm\sqrt{\left(\beta g - \frac{R}{v}\right)^2 + \left(g^2 - R^2\right)\left(\frac{1}{v^2} - \alpha^2 - \beta^2\right)}. \tag{12.75}
\]
Since λ = 0 at the terminus the correct solution is given by the negative sign, and this is conveniently written in the form
\[
\left(g^2 - R^2\right)\lambda - \left(\beta g - \frac{R}{v}\right) = -\frac{f(v)}{\sqrt{\alpha^2+\beta^2}}, \tag{12.76}
\]

where f(v) is the positive function defined by
\[
f(v)^2 = \left(\left(\alpha^2+\beta^2\right)R - \frac{\beta g}{v}\right)^2
+ \alpha^2 g^2\left(\frac{1}{v^2} - \alpha^2 - \beta^2\right).
\]
The first two Euler-Lagrange equations, 12.69 and 12.70, are simplified if divided by the third, equation 12.72, to give
\[
\frac{dx}{d\lambda} = -\frac{\alpha v}{HH_v}
\qquad\text{and}\qquad
\frac{dy}{d\lambda} = -\frac{(\lambda g-\beta)v}{HH_v}. \tag{12.77}
\]
These equations can be used to express (x, y) as integrals over known functions of the speed v. First we need to express HHv in terms of known quantities: differentiate H with respect to v,
\[
\frac{dH}{dv} = H_v + H_\lambda\frac{d\lambda}{dv} = H_v - R(v)\frac{d\lambda}{dv},
\]
where we have used equation 12.65 for H. Similarly, from equation 12.73,
\[
H\frac{dH}{dv} = g(\lambda g - \beta)\frac{d\lambda}{dv},
\]
and on combining these two results
\[
H\left(H_v - R\frac{d\lambda}{dv}\right) = g(\lambda g - \beta)\frac{d\lambda}{dv},
\qquad\text{that is}\qquad
HH_v = \left[\left(g^2 - R^2\right)\lambda - \left(\beta g - \frac{R}{v}\right)\right]\frac{d\lambda}{dv}. \tag{12.78}
\]
Observe that the right-hand side of this equation is proportional to the left-hand side of equation 12.76 for λ, and hence
\[
HH_v\frac{dv}{d\lambda} = -\frac{f(v)}{\sqrt{\alpha^2+\beta^2}}. \tag{12.79}
\]
But, using the chain rule, equations 12.77 can be written in the form
\[
\frac{dx}{dv} = -\frac{\alpha v}{HH_v}\frac{d\lambda}{dv}
\qquad\text{and}\qquad
\frac{dy}{dv} = -\frac{(\lambda g - \beta)v}{HH_v}\frac{d\lambda}{dv},
\]
so that using equation 12.79 these can be written as a pair of uncoupled first-order differential equations,
\[
\frac{dx}{dv} = \alpha\sqrt{\alpha^2+\beta^2}\,\frac{v}{f(v)}
\qquad\text{and}\qquad
\frac{dy}{dv} = -\sqrt{\alpha^2+\beta^2}\,\frac{(\beta-\lambda g)v}{f(v)}. \tag{12.80}
\]
Notice that x′(v) > 0 and y′(v) < 0, since λ ≤ 0 and β > 0 (for type A motion).
The right-hand sides of these equations are functions of v, with λ(v) being given by equation 12.76. Integration, and taking account of the initial conditions, gives the equation of the curve in the form
\[
x(v) = \alpha\sqrt{\alpha^2+\beta^2}\int_{v_0}^v dv\,\frac{v}{f(v)}, \tag{12.81}
\]
\[
y(v) = A - \sqrt{\alpha^2+\beta^2}\int_{v_0}^v dv\,\frac{(\beta-\lambda g)v}{f(v)}. \tag{12.82}
\]

Using equation 12.76 for λ we obtain


 
1 gR 2 g f (v)
β − gλ = 2 2
− βR + 2 2
p
g −R v g −R α2 + β 2
so that the equation for y(v) becomes
y(v) = A − g ∫_{v0}^{v} dv v/(g² − R²) − √(α² + β²) ∫_{v0}^{v} dv (gR − βvR²)/(f(v)(g² − R²)).    (12.83)
These expressions depend upon the unknown constants α and β, which are obtained
using information about the terminal point at which
v = Vt = 1/√(α² + β²),   x(Vt) = b   and   y(Vt) = 0.
Thus the end conditions give two equations for the unknown constants α and β in terms
of the given parameters b and A. These equations are, however, nonlinear so are difficult
to solve: this difficulty is compounded by the fact that the relations can usually only
be determined by numerically evaluating the integrals. Physical considerations suggest,
however, that for any pair of values (b, A) a solution exists.
Equations 12.81 and 12.82 define the stationary path parametrically, with the speed
v as the parameter. They are therefore directly equivalent to equations 5.8 (page 151),
in which the parameter is the angle φ. Further, in the limit R(v) = 0 these equations
should reduce to those found previously: it is important that we establish that this is
true in order to check the derivation.

Exercise 12.21
In this exercise the limit R = 0 is considered and it is shown that equations 12.81
and 12.83 reduce to the conventional parametric equations of the cycloid.
(a) Show that in the limit R = 0 equation 12.83 for y(v) reduces to the energy
equation
mg(A − y) = (1/2)mv² − (1/2)mv0².
(b) Show that if R = 0,
√(α² + β²)/f(v) = v/(g√(1 − α²v²))
and hence that equation 12.81 for x(v) becomes
x(v) = (α/g) ∫_{v0}^{v} dv v²/√(1 − α²v²).
(c) Using the substitution αv = sin φ and setting v0 = 0, show that the equations
found in parts (a) and (b) become
x = (c²/2)(2φ − sin 2φ),   y = A − c² sin²φ,   c² = 1/(2α²g),   0 ≤ φ ≤ φb.
(d) Show also that gλ = β − α/ tan φ, and hence that β = α/ tan φb . Deduce that
β > 0 if 0 ≤ φb < π/2 and β < 0 if π/2 < φb < π; explain the significance of the
condition φb = π/2.
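The reduction in exercise 12.21 can be spot-checked numerically. The following sketch (the values g = 1 and α = 0.8 are illustrative assumptions, not taken from the text) compares the integral form of x(v) from part (b) with the closed form of part (c).

```python
# Spot-check of exercise 12.21 (R = 0): the quadrature form of x(v) from
# part (b) should agree with the closed form of part (c).
import math
from scipy.integrate import quad

g, alpha = 1.0, 0.8                      # illustrative values
c2 = 1.0/(2*alpha**2*g)                  # c^2 = 1/(2 alpha^2 g)

def x_integral(phi):
    # x = (alpha/g) * integral of v^2/sqrt(1 - alpha^2 v^2) up to v = sin(phi)/alpha
    v_up = math.sin(phi)/alpha
    val, _ = quad(lambda v: v**2/math.sqrt(1 - alpha**2*v**2), 0.0, v_up)
    return alpha*val/g

def x_closed(phi):
    # closed form x = (c^2/2)(2 phi - sin 2 phi)
    return 0.5*c2*(2*phi - math.sin(2*phi))

for phi in (0.3, 0.7, 1.2):
    assert abs(x_integral(phi) - x_closed(phi)) < 1e-8
```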

In general equations 12.81 and 12.82 can be dealt with only using numerical methods,
and this is not easy because it is necessary to solve two coupled nonlinear equations,
which can be evaluated only by numerical integration. However, if the resistance is
relatively small we expect the stationary path to be close to that of the cycloid of the
resistance free motion, which suggests making an expansion in powers of R(v). Such
an analysis also helps check numerical solutions.
In order to facilitate this expansion it is helpful to replace R(v) by εR(v), where ε
is a small positive, dimensionless, quantity, and to use ε to keep track of the expansion.
An approximation to f(v), to order ε, can be written

√(α² + β²)/f(v) = v/(g√(1 − α²v² − (2β/g)εvR(v))) + O(ε²),

so that equation 12.81 for x(v) becomes

x(v) = (α/g) ∫_{v0}^{v} dv v²/√(1 − α²v² − (2β/g)εvR(v))

     = (α/g) ∫_{v0}^{v} dv (v²/√(1 − α²v²)) (1 + (β/g) εvR(v)/(1 − α²v²) + O(ε²))
and equation 12.83 for y(v) becomes, to this order,

y(v) = A − (1/(2g))(v² − v0²) − (ε/g²) ∫_{v0}^{v} dv vR(v)/√(1 − α²v²).
We now set v0 = 0 and use the same substitution as used in exercise 12.21, αv = sin φ,
to write these relations in the form
x(φ) = (1/(4α²g))(2φ − sin 2φ) + (βε/(α³g²)) ∫_0^φ dφ R(v) sin³φ/cos²φ,      (12.84)

y(φ) = A − (1/(2α²g)) sin²φ − (ε/(α²g²)) ∫_0^φ dφ sin φ R(v).                (12.85)
At the terminal point (x, y) = (b, 0), if φ = φb we have α = √(α² + β²) sin φb, which can
be rearranged to give β = α/tan φb, so the two unknown parameters are now α and φb.
It is now necessary to choose a particular function for the resistance: a natural
choice is R = κv 2 , where κ is a constant (with the dimensions of inverse length). Then
equations 12.84 and 12.85 become
x(φ) = (1/(4gα²))(2φ − sin 2φ) + (εκβ/(g²α⁵)) G1(φ)                    (12.86)

y(φ) = A − sin²φ/(2gα²) − (εκ/(g²α⁴)) G2(φ)                            (12.87)
where αv = sin φ and
G1(φ) = ∫_0^φ dφ sin⁵φ/cos²φ = 1/cos φ − 8/3 + (7/4) cos φ − (1/12) cos 3φ,

G2(φ) = ∫_0^φ dφ sin³φ = 2/3 − (3/4) cos φ + (1/12) cos 3φ.
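Since the results that follow rest on these antiderivatives, a quick numerical check is worthwhile; this sketch (assuming scipy is available, and not part of the text) compares the closed forms with direct quadrature.

```python
# Numerical check of the closed forms of G1 and G2 against direct quadrature.
import math
from scipy.integrate import quad

def G1(phi):
    return 1/math.cos(phi) - 8/3 + (7/4)*math.cos(phi) - (1/12)*math.cos(3*phi)

def G2(phi):
    return 2/3 - (3/4)*math.cos(phi) + (1/12)*math.cos(3*phi)

for phi in (0.2, 0.6, 1.1):
    g1, _ = quad(lambda t: math.sin(t)**5/math.cos(t)**2, 0.0, phi)
    g2, _ = quad(lambda t: math.sin(t)**3, 0.0, phi)
    assert abs(g1 - G1(phi)) < 1e-8
    assert abs(g2 - G2(phi)) < 1e-8
```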

The first task is to determine the values of α and φb from the terminal conditions. This
is facilitated by noting that the equation y(φb) = 0 is a quadratic in 1/(gα²),

(εκ/(g²α⁴)) G2(φb) + sin²φb/(2gα²) − A = 0.
The quadratic term is proportional to κ so one of the roots behaves as κ⁻¹ as κ → 0,
and since we require a root that is finite when there is no resistance, the relevant
solution is

1/(gα²) = 4A/(sin²φb + √(sin⁴φb − 16εκA G2(φb))).              (12.88)
This expression defines α in terms of φb , but numerical calculations show that it is real
only for small κ.
Using the equation β = α/ tan φb for β allows us to write the equation x(φb ) = b in
the form
b = (1/(4gα²))(2φb − sin 2φb) + (εκ/(g²α⁴ tan φb)) G1(φb).     (12.89)
Since gα2 is given in terms of φb by equation 12.88 this is a single equation for φb that
can be solved numerically.
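A minimal sketch of that numerical solution, assuming scipy is available and setting the bookkeeping parameter ε = 1; the parameters g = 1, A = 2/(π − 2) and b = 1 are those of the example discussed next. The frictionless case has the known root φb = π/4, which serves as a check.

```python
# Sketch: solve equations 12.88 and 12.89 numerically for phi_b (epsilon = 1).
# Parameters follow the example in the text: g = 1, A = 2/(pi - 2), b = 1.
import math
from scipy.optimize import brentq

g, A, b = 1.0, 2/(math.pi - 2), 1.0

def G1(phi):
    return 1/math.cos(phi) - 8/3 + (7/4)*math.cos(phi) - (1/12)*math.cos(3*phi)

def G2(phi):
    return 2/3 - (3/4)*math.cos(phi) + (1/12)*math.cos(3*phi)

def X(phi, kappa):
    # 1/(g alpha^2) from equation 12.88; real only for sufficiently small kappa
    s2 = math.sin(phi)**2
    return 4*A/(s2 + math.sqrt(s2**2 - 16*kappa*A*G2(phi)))

def residual(phi, kappa):
    # equation 12.89 written as x(phi_b) - b = 0
    x = X(phi, kappa)
    return 0.25*x*(2*phi - math.sin(2*phi)) + kappa*x**2*G1(phi)/math.tan(phi) - b

# Check: with kappa = 0 the root is the frictionless value phi_b = pi/4.
phi0 = brentq(lambda p: residual(p, 0.0), 0.5, 1.0)
assert abs(phi0 - math.pi/4) < 1e-9

# The resisted case used for figure 12.6.
phi_b = brentq(lambda p: residual(p, 0.12), 0.3, 0.6)
```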
In figure 12.6 we show an example of such a solution. For the purposes of illustration
we choose g = 1 and take the end points to be (0, A), with A = 2/(π − 2), and (b, 0),
with b = 1, so for the cycloid φb = π/4. For these parameters it is necessary that
κ < 0.135 (approximately) for α(φb ) to be real, so we take κ = 0.12. As might be
expected the resistance forces the stationary path below that of the cycloid, on to a
path that is initially steeper.
[Figure 12.6 An example of a stationary path of a brachistochrone with resistance,
with end points given by A = 2/(π − 2) and b = 1; both the cycloid and the path
with resistance are shown. The other parameters used are defined in the text.]
Now return briefly to case B when the speed reaches a maximum value along the path,
so v(s) is not a monotonically increasing function for all s. If v′(s) = 0 at some intermediate
point where s = Sm and v = Vm, then the equation of motion 12.60 shows that at this
point gy′(Sm) = −R(Vm) < 0, that is, y(s) is still decreasing, so the maximum speed is
reached before the lowest point of the path; this is contrary to the case R = 0, where
energy conservation ensures that these points coincide. Substituting this value of y′
into the Euler-Lagrange equation 12.70 for y′(s) gives the relation

λ(g² − R²) = βg − R/v,

which, on comparing with equation 12.76, gives f(Vm) = 0. Prior to this point the speed
is increasing to its maximum, Vm, and y′(s) < 0; subsequently v decreases steadily to the
speed at the terminus. The vertical component of the velocity changes sign when y′(s) = 0.
This situation is summarised in figure 12.7.
[Figure 12.7 Diagram showing where v′(s) and y′(s) are zero: the speed is greatest
where f(v) = 0 and v′(s) = 0, and the lowest point of the path is where gλ − β = 0
and y′(s) = 0.]

On the first part of the path v′(s) > 0 and λg − β < 0, and we have

gλ − β = −(g/(g² − R²)) f(v)/√(α² + β²) − (gR − βvR²)/(v(g² − R²)),    β < 0.

On the second part of the path λg − β = 0 at some point and

gλ − β = (g/(g² − R²)) f(v)/√(α² + β²) − (gR − βvR²)/(v(g² − R²)),    β < 0.

We now use the limiting case R = 0, dealt with in exercise 12.20, to suggest how this
problem may be simplified. Assuming that f(v)² has a simple zero at v = Vm, it is
convenient to factor f(v) in the form f(v)² = (Vm² − v²)f1(v), where f1(v) > 0 for
0 ≤ v ≤ Vm. Now define a new parameter φ ∈ [0, π] by v = Vm sin φ, so v increases for
φ < π/2 and decreases for φ > π/2, and then f(v) = Vm√(f1(v)) cos φ. If R = 0 then
Vm = 1/α and this is the same parameter used for the cycloid. The two expressions for
gλ − β can now both be written in the form

gλ − β = −(gVm cos φ √(f1(v)))/((g² − R²)√(α² + β²)) − (gR − βvR²)/(v(g² − R²)),    v = Vm sin φ.
In terms of φ equations 12.80 for x and y become

dx/dφ = α√(α² + β²) v/√(f1(v))   and   dy/dφ = √(α² + β²) v(gλ − β)/√(f1(v)).
Substituting for gλ − β and integrating gives

x(φ) = α√(α² + β²) ∫_{φ0}^{φ} dφ v/√(f1(v)) = Vm²α√(α² + β²) ∫_{φ0}^{φ} dφ sin φ cos φ/f(v),    (12.90)
and

y(φ) = A − gVm² ∫_{φ0}^{φ} dφ sin φ cos φ/(g² − R²) − √(α² + β²) ∫_{φ0}^{φ} dφ (gR − βvR²)/((g² − R²)√(f1(v)))

     = A − g ∫_{v0}^{v} dv v/(g² − R²) − Vm√(α² + β²) ∫_{φ0}^{φ} dφ cos φ (gR − βvR²)/(f(v)(g² − R²)).    (12.91)

The first integral in this expression is the equivalent of the kinetic energy discussed in
part (a) of exercise 12.21, to which it reduces when R = 0. Further, for φ < π/2 these
two equations for (x(φ), y(φ)) are identical to equations 12.81 and 12.82, but now they
are valid for all φ. The two equations for α and β are obtained by integrating to φt,
where Vt = Vm sin φt and where φt > π/2 if β < 0 and φt < π/2 if β > 0.

Exercise 12.22
Consider the case where the initial speed, v0 , is large, so that R(v0 ) > g, and show
that the equations for the stationary path are now
dx/dv = −α√(α² + β²) v/f(v)   and   dy/dv = −(β − λg)√(α² + β²) v/f(v)

where

f(v)² = (α² + β²)(R − βg/v)² − α²g²(α² + β² − 1/v²).
Hence show that in the limit g → 0 the stationary path between the points (0, A)
and (b, 0) is the straight line y = A(1 − x/b), as expected.

12.8 Brachistochrone with Coulomb friction


In this variant of the brachistochrone problem there is friction between the wire and
the bead. Coulomb friction is proportional to the normal force between the bead and
the wire and opposes the motion. Thus the force normal to the wire affects the motion,
which is not so for a smooth wire as in the conventional brachistochrone or the problem
treated in the previous section. This means that energy is not conserved, and the
simplicity of the original problem is lost, as when the bead falls through a resisting
medium. A complete solution of this problem appears to have been described only
relatively recently by Ashby et al (1975)3 , and here we follow their analysis.
If the ratio of the horizontal to the vertical distance of the end points is large and
the initial speed is zero, the frictional forces must be small for a stationary path to
exist. As this ratio increases we expect the critical value of the friction, beyond which
there is no stationary path, to decrease: this behaviour is difficult to see in the exact
solution but is illustrated in exercises 12.23, 12.24 and 12.32.

Newton’s equation of motion

The Cartesian coordinates of the end points of the wire are taken to be (x, y) = (0, A),
for the starting point, and (b, 0) for the terminus, with A > 0 and b > 0, and where
the y-axis is vertically upwards. If m is the mass of the bead this configuration and
the forces acting on the bead are shown in figure 12.8. The gradient of the wire at the
bead is tan θ = dy/dx, where y(x) is the required curve.

³N Ashby, W E Brittin, W F Love and W Wyss, Amer. J. Phys., 1975, 43, pages 902-6.

[Figure 12.8 Diagram showing the wire and its terminal points (0, A) and (b, 0), on the
left, and the forces acting on the bead on the right: here N is the force normal to the
wire, µN is the frictional force directed along the wire, and mg is the weight.]

There are three forces acting on the bead, as shown on the right of figure 12.8; that due
to gravity, the force N normal to the wire, which does not directly affect the motion, and
the frictional force of magnitude µN directed along the wire and opposing the motion.
Here µ is the constant coefficient of friction and µ ≥ 0. For the reason discussed above,
for a given value of µ, we expect no stationary paths if b/A is too large.
The forces on the bead in the x- and y-directions are obtained directly by resolving
the forces shown in the inset of figure 12.8,

Fx = −N (sin θ + µ cos θ), Fy = N (cos θ − µ sin θ) − mg, (12.92)

so the force in the tangential direction is

FT = Fx cos θ + Fy sin θ = −µN − mg sin θ. (12.93)

Newton’s equations of motion are therefore

mẍ = −N (sin θ + µ cos θ), (12.94)


mÿ = N (cos θ − µ sin θ) − mg, (12.95)

where we use the notation (due to Newton) ẋ = dx/dt and ẍ = d2 x/dt2 . Along the
wire, if v is the speed
mv̇ = FT = −µN − mg sin θ. (12.96)
Eliminating µN from equations 12.94 and 12.95 gives

m (ẍ sin θ − ÿ cos θ) = −N + mg cos θ. (12.97)

But also
ẋ = v cos θ and ẏ = v sin θ, (12.98)
and by differentiation we see that ẍ sin θ−ÿ cos θ = −v θ̇, so that equation 12.97 becomes

N = mv θ̇ + mg cos θ. (12.99)

By substituting this into equation 12.96, for the tangential motion, we obtain the equa-
tion of motion
v̇ + µ(v θ̇ + g cos θ) + g sin θ = 0. (12.100)

Using equation 12.98 this equation can be written in the alternative form

v v̇ + µv 2 θ̇ + µg ẋ + g ẏ = 0. (12.101)

In this equation (ẋ, ẏ) are related to v and θ̇, by geometry, equation 12.98; squaring
and adding these equations gives the obvious identity v 2 = ẋ2 + ẏ 2 , which is one of the
constraints on the functional. Differentiation of equations 12.98 gives

θ̇ = (ÿ cos θ − ẍ sin θ)/v = (ÿẋ − ẍẏ)/v².

This relation, together with the equation of motion 12.101, is the other constraint.

Exercise 12.23
A bead slides on a rough wire joining (0, A) to (b, 0) in a straight line, starting
from (0, A) with speed v0 .
Show that provided v0² > 2g(µb − A) the bead reaches the terminus at the time

t = 2√(A² + b²)/(v0 + √(v0² + 2g(A − µb))).
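The formula (in which the factor µb in the final radical is restored to match the stated condition v0² > 2g(µb − A)) can be checked against the underlying kinematics: along the straight wire the tangential acceleration is the constant a = g(A − µb)/L with L = √(A² + b²), so t must satisfy v0t + at²/2 = L. A sketch with illustrative numbers:

```python
# Check of the descent-time formula of exercise 12.23: along the straight wire
# of length L = sqrt(A^2 + b^2) the tangential acceleration is the constant
# a = g(A - mu*b)/L, so the time t must satisfy v0*t + a*t^2/2 = L.
import math

g, A, b, mu, v0 = 9.81, 2.0, 3.0, 0.1, 0.5   # illustrative values
L = math.sqrt(A**2 + b**2)
a = g*(A - mu*b)/L
t = 2*L/(v0 + math.sqrt(v0**2 + 2*g*(A - mu*b)))   # the quoted result
assert abs(v0*t + 0.5*a*t**2 - L) < 1e-9
```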

Exercise 12.24
Consider a wire in the shape of the quadrant of a circle of radius R, centre at
(R, R) joining the points (0, R) and (R, 0). The coordinates of a point on this
quadrant can be expressed in terms of the angle φ,

x = R(1 − cos φ),   y = R(1 − sin φ),   0 ≤ φ ≤ π/2,

with φ increasing from 0 at (0, R) to π/2 at (R, 0).

(a) Show that θ = φ − π/2 where θ is the angle defined in figure 12.8.
(b) Show that the equation of motion of the bead on the wire is

v dv/dφ + µv² = gR(cos φ − µ sin φ).
(c) By making an appropriate change of variable deduce, without solving the equa-
tion, that if v(0) = 0 the value of µ for which v(π/2) = 0 is independent of R.
(d) By solving the differential equation derived in part (b) with v(0) = 0 show
that v(π/2) = 0 for µ = µ1 where µ1 is the solution of

2µ² + 3µe^(−µπ) = 1.

Deduce that if µ is slightly larger than µ1 the bead does not reach the terminus.
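The transcendental equation of part (d) is easily solved numerically. As a cross-check, the linear equation for z = v²/2 from part (b) shows (after a short calculation not spelled out in the text) that v(π/2) = 0 with v(0) = 0 is equivalent to the vanishing of ∫₀^{π/2} e^(2µw)(cos w − µ sin w) dw. A sketch assuming scipy:

```python
# Solve 2 mu^2 + 3 mu e^{-mu pi} = 1 for mu_1 and cross-check it against the
# equivalent integral condition derived from the linear ODE for z = v^2/2:
# v(pi/2) = 0 with v(0) = 0  <=>  int_0^{pi/2} e^{2 mu w}(cos w - mu sin w) dw = 0.
import math
from scipy.integrate import quad
from scipy.optimize import brentq

mu1 = brentq(lambda m: 2*m**2 + 3*m*math.exp(-m*math.pi) - 1, 0.0, 1.0)
I, _ = quad(lambda w: math.exp(2*mu1*w)*(math.cos(w) - mu1*math.sin(w)),
            0.0, math.pi/2)
assert abs(I) < 1e-6 and 0.55 < mu1 < 0.65
```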

The functional and boundary conditions


The time of passage, T , is given by equation 12.61 (page 321),
T = ∫_0^1 dτ (dt/dτ),                                          (12.102)
where τ is the parameter defining the position along the path — if the natural variable,
the time t, were used, the required quantity T would appear as a limit in the integral,
which is inconvenient.
This functional has two constraints: the equation of motion 12.101 and the relation
between v and (ẋ, ẏ), so this is a Lagrange problem with two multipliers. The constraints
need to be expressed in terms of τ . For v
v(τ) = √((dx/dτ · dτ/dt)² + (dy/dτ · dτ/dt)²) = √(x′(τ)² + y′(τ)²)/t′(τ),

with a prime denoting differentiation with respect to τ. For θ′, since tan θ = ẏ/ẋ = y′/x′,
differentiation gives

(1/cos²θ) dθ/dτ = y″/x′ − y′x″/x′²   hence   dθ/dτ = (y″x′ − y′x″)/(x′² + y′²) = (y″x′ − y′x″)/(v²t′²).
Thus the equation of motion 12.101 becomes

vv′ + gy′ + µ(y″x′ − y′x″)/t′² + µgx′ = 0.
The auxiliary functional is therefore

T[x, y, v, t] = ∫_0^1 dτ F(x′, x″, y′, y″, v, v′, t′)          (12.103)

where

F = t′ + λ1(vv′ + gy′ + µ(y″x′ − y′x″)/t′² + µgx′) + λ2(√(x′² + y′²) − vt′),    (12.104)

with both the Lagrange multipliers, λ1 and λ2 , depending upon τ . The dependent
variables are (x, y, v, t) and the functional contains second derivatives of x and y.
The known boundary conditions at the start, τ = 0, are

x(0) = 0, y(0) = A > 0, v(0) = v0 ≥ 0, t(0) = 0, (12.105)

and at the terminus, τ = 1,


x(1) = b, y(1) = 0. (12.106)
The remaining conditions are determined by the natural boundary conditions: for x
and y,

∂F/∂x″ = −µλ1 y′/t′² = 0,   ∂F/∂y″ = µλ1 x′/t′² = 0,   at τ = 0 and 1,    (12.107)

and for v at the terminus

∂F/∂v′ = λ1(1)v(1) = 0.                                        (12.108)
This gives λ1 (1) = 0 and hence the boundary condition 12.107 at the terminus is
automatically satisfied.
The four Euler-Lagrange equations are obtained from the derivatives

∂F/∂t = 0,               ∂F/∂t′ = 1 − 2λ1µ(x′y″ − y′x″)/t′³ − λ2v,
∂F/∂v = λ1v′ − λ2t′,     ∂F/∂v′ = λ1v,
∂F/∂x′ = µλ1y″/t′² + µλ1g + λ2x′/√(x′² + y′²),   ∂F/∂x″ = −λ1µy′/t′²,
∂F/∂y′ = −µλ1x″/t′² + λ1g + λ2y′/√(x′² + y′²),   ∂F/∂y″ = λ1µx′/t′².

From these expressions we obtain the four Euler-Lagrange equations in terms of τ, after
which we may replace τ by t (because the choice of parameter is arbitrary). Thus the
following four Euler-Lagrange equations are obtained

λ2v + 2µλ1v²θ̇ = c1   (for t),                                 (12.109)
vλ̇1 + λ2 = 0   (for v),                                       (12.110)
2µλ1ÿ + µẏλ̇1 + λ1µg + λ2 cos θ = cx   (for x),                (12.111)
2µλ1ẍ + µẋλ̇1 − λ1g − λ2 sin θ = cy   (for y),                 (12.112)

where c1 , cx and cy are integration constants. These four equations, together with the
constraints allow a solution to be found; remarkably these equations can be integrated
in terms of known functions, though this process is not simple.
Using equations 12.110 and 12.98 we see that ẋλ̇1 = −λ2 cos θ and ẏλ̇1 = −λ2 sin θ.
Equation 12.109 gives λ2 in terms of λ1 , and the second derivatives in equations 12.111
and 12.112, ẍ and ÿ, may be replaced by the first derivatives v̇ and θ̇ using

ẍ = v̇ cos θ − v θ̇ sin θ, ÿ = v̇ sin θ + v θ̇ cos θ,

so equations 12.111 and 12.112 become, respectively,


2µλ1(v̇ + µvθ̇) sin θ + µλ1g + (c1/v)(cos θ − µ sin θ) = cx,    (12.113)

2µλ1(v̇ + µvθ̇) cos θ − λ1g − (c1/v)(sin θ + µ cos θ) = cy.     (12.114)

Now note that the combination v̇ + µvθ̇ also occurs in the equation of motion 12.101,
which can therefore be used to obtain two algebraic equations relating v and λ1. Thus
equations 12.113 and 12.114 become
µλ1g(1 − 2µ sin θ cos θ − 2 sin²θ) + (c1/v)(cos θ − µ sin θ) = cx,     (12.115)

−λ1g(1 + 2µ sin θ cos θ + 2µ² cos²θ) − (c1/v)(sin θ + µ cos θ) = cy.   (12.116)

These equations are linear in λ1 g and c1 /v so may be solved directly to give

v(θ) = cos θ/(B h(θ))   where   h(θ) = 1 + 2µ sin θ cos θ + 2µC cos²θ    (12.117)

and

λ1g = −Bc1(C + tan θ)   where   B = (cx − µcy)/(c1(1 + µ²)),   C = (cy + µcx)/(cx − µcy).    (12.118)

Thus both v and λ1 are explicit functions of θ.


If v(0) = 0 the initial value of θ satisfies cos θ = 0, and physical considerations give
θ(0) = −π/2; that is, the stationary curve is initially vertical, as in the conventional
problem.
Because v is a function of θ it is possible to express x and y as first-order differential
equations with θ as the independent variable. First note that θ̇ = 1/t0 (θ), then

ẋ = x′(θ)/t′(θ) = v(θ) cos θ,   that is   dx/dθ = v cos θ dt/dθ,   and similarly   dy/dθ = v sin θ dt/dθ.

An expression for t′(θ) is obtained from the equation of motion 12.101 by dividing by
θ̇ to give

v(v′ + µv) + g t′(θ)(sin θ + µ cos θ) = 0,

that is

g dt/dθ = −(v′ + µv)/(sin θ + µ cos θ).
Using equation 12.117 in this expression it becomes, after some algebra,

dt 2 1
gB = 2− . (12.119)
dθ h h
Hence the differential equations for x(θ) and y(θ) are

gB² dx/dθ = (2/h³ − 1/h²) cos²θ   and   gB² dy/dθ = (2/h³ − 1/h²) sin θ cos θ.    (12.120)

At the terminus, where θ = θ1, λ1 = 0, so equation 12.118 relates C to θ1: C = −tan θ1.


Thus the equations for the stationary path are

x(θ, B) = (1/(gB²)) ∫_{−π/2}^{θ} dθ (2/h³ − 1/h²) cos²θ,   x(θ1, B) = b,          (12.121)

y(θ, B) = A + (1/(gB²)) ∫_{−π/2}^{θ} dθ (2/h³ − 1/h²) sin θ cos θ,   y(θ1, B) = 0.    (12.122)

The two boundary conditions give two equations for B and θ1 which may be solved
(numerically) to yield the stationary path. Some examples of the solutions of these
equations are shown in figure 12.9; here the frictionless case ends tangentially to the
x-axis and if µ > 0 the stationary path dips below the x-axis, but too little to be seen
on this graph.
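A sketch of that numerical solution (assuming scipy). Since both conditions carry the common factor 1/(gB²), B can be eliminated, leaving the single equation A·X(θ1) + b·Y(θ1) = 0 for θ1, where X and Y denote the two integrals; this reduction is not spelled out in the text but is immediate from equations 12.121 and 12.122. The frictionless case of figure 12.9, for which θ1 = 0 and gB² = 1/2, serves as a check.

```python
# Numerical solution of the boundary equations 12.121 and 12.122.
import math
from scipy.integrate import quad
from scipy.optimize import brentq

def path_integrals(theta1, mu):
    # X and Y with x(theta1,B) = X/(g B^2) and y(theta1,B) = A + Y/(g B^2)
    C = -math.tan(theta1)                    # from lambda_1 = 0 at the terminus
    h = lambda t: 1 + 2*mu*math.sin(t)*math.cos(t) + 2*mu*C*math.cos(t)**2
    X, _ = quad(lambda t: (2/h(t)**3 - 1/h(t)**2)*math.cos(t)**2, -math.pi/2, theta1)
    Y, _ = quad(lambda t: (2/h(t)**3 - 1/h(t)**2)*math.sin(t)*math.cos(t), -math.pi/2, theta1)
    return X, Y

def solve_endpoint(A, b, mu, bracket=(-0.5, 0.5)):
    # eliminate B: A*X(theta1) + b*Y(theta1) = 0, then g B^2 = X(theta1)/b
    def resid(t):
        X, Y = path_integrals(t, mu)
        return A*X + b*Y
    theta1 = brentq(resid, *bracket)
    gB2 = path_integrals(theta1, mu)[0]/b
    return theta1, gB2

# Frictionless check against exercise 12.26: end points (0,1) and (pi/2,0)
# give theta_1 = 0 (tangential finish) and g B^2 = 1/2.
theta1, gB2 = solve_endpoint(A=1.0, b=math.pi/2, mu=0.0)
assert abs(theta1) < 1e-6 and abs(gB2 - 0.5) < 1e-6
```

For µ > 0 a wider bracket may be needed, since brentq requires a sign change of the residual across the bracket.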

1
y
0.8
µ=0.5
0.6
µ=0.3
µ=0 µ=0.2
0.4

0.2

0
0 0.2 0.4 0.6 0.8 1 1.2 1.4 x
Figure 12.9 Graphs of the curves traced out by equations 12.121 and 12.122
for the terminal points (0, 1) and (π/2, 0) for which the frictionless brachis-
tochrone, µ = 0, ends tangentially to the x-axis; this is depicted by the dashed
line. The cases µ = 0.2, 0.3 and 0.5 are shown.

Figure 12.10 shows stationary paths with the end points (0, 1) and (5, 0), for which
the frictionless brachistochrone dips below the x-axis. In this case the distance travelled
is longer than in figure 12.9 and the value of µ above which there is no stationary path
is smaller, as illustrated in the problems considered in exercises 12.24 and 12.32.

1
y µ=0.2

µ=0.15
0.5
µ=0.1

x
0
1 2 3 4 5

-0.5
µ=0
µ=0.05
Figure 12.10 Graphs of the curves traced out by equations 12.121 and 12.122
for the terminal points (0, 1) and (5, 0) and various values of µ, with the case
µ = 0 shown with the dashed line.

Exercise 12.25
Assuming that v0 = 0 show that at the end points h(θ) = 1, where h(θ) is defined
in equation 12.117, and that h(θ) has a single minimum at θ = θ1 /2 − π/4.
Find the minimum value of h(θ) and deduce that solutions exist only if

µ tan(θ1/2 + π/4) < 1.
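The claims of exercise 12.25 are easy to confirm numerically; in this sketch the values µ = 0.3 and θ1 = 0.4 are illustrative assumptions, not taken from the text.

```python
# Numerical confirmation of exercise 12.25: with C = -tan(theta_1), h = 1 at
# both end points, and its minimum lies at theta = theta_1/2 - pi/4 with value
# 1 - mu*tan(theta_1/2 + pi/4).
import math
import numpy as np

mu, theta1 = 0.3, 0.4                        # illustrative values
C = -math.tan(theta1)                        # terminal condition C = -tan(theta_1)

grid = np.linspace(-math.pi/2, theta1, 200001)
h = 1 + 2*mu*np.sin(grid)*np.cos(grid) + 2*mu*C*np.cos(grid)**2

assert abs(h[0] - 1) < 1e-12 and abs(h[-1] - 1) < 1e-12
t_min = grid[np.argmin(h)]
assert abs(t_min - (theta1/2 - math.pi/4)) < 1e-4
assert abs(h.min() - (1 - mu*math.tan(theta1/2 + math.pi/4))) < 1e-8
```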

Exercise 12.26
In the friction-free limit, µ = 0, show that equations 12.121 and 12.122 give

x = (1/(4gB²))(2φ − sin 2φ)   and   y = A − (1/(4gB²))(1 − cos 2φ),   φ = π/2 + θ,

and that θ1 is related to b and A by

b/A = (2φ1 − sin 2φ1)/(1 − cos 2φ1),   φ1 = π/2 + θ1.

12.9 Miscellaneous exercises


Exercise 12.27
(a) Show that the functional

S[y] = ∫_0^π dx y′²,   y(0) = y(π) = 0,

subject to the constraint ∫_0^π dx y² = 1, gives rise to the equation

d²y/dx² + λy = 0,   y(0) = y(π) = 0,

where λ is the Lagrange multiplier.
(b) Show that the functions y(x) = √(2/π) sin nx, with Lagrange multiplier λ = n²,
n = 1, 2, · · · , are solutions of this equation.
n = 1, 2, · · · , are solutions of this equation.

Exercise 12.28
(a) Show that the functional, which is quadratic in y and y′,

S[y] = ∫_a^b dx (p(x)y′² − q(x)y²),   y(a) = y(b) = 0,

and the constraint ∫_a^b dx w(x)y(x)² = 1 lead to the linear equation

d/dx(p(x) dy/dx) + (q(x) + λw(x))y = 0,   y(a) = y(b) = 0.

(b) If the constraint were not also quadratic in y(x) would the resulting Euler-
Lagrange equation be linear?

Exercise 12.29
Find the stationary value of the functional S[y] = ∫_0^1 dx y² subject to the
constraint ∫_0^1 dx y = a.

Exercise 12.30
Find the function y(x) making the functional P[y] = −∫_{−∞}^{∞} dx y ln y stationary
subject to the two constraints ∫_{−∞}^{∞} dx y = 1 and ∫_{−∞}^{∞} dx x²y = σ², and where
y(x) goes to zero sufficiently rapidly as |x| → ∞ for all integrals to exist.
You will find the following integrals useful:

∫_{−∞}^{∞} dx e^(−ax²) = √(π/a),   ∫_{−∞}^{∞} dx x²e^(−ax²) = √π/(2a^(3/2)),   where Re(a) > 0.
This is an important problem that occurs in statistical physics and information
theory, where y(x) is the probability distribution of a continuously distributed
random variable x and P [y] is the entropy. The first constraint is just the normal-
isation condition, satisfied by all distributions, and the second is the variance.

Exercise 12.31
Show that the stationary path of the functional

S[y] = ∫_0^π dx y′²,   y(0) = y(π) = 0,

subject to the constraint ∫_0^π dx y sin x = a, is y(x) = (2a/π) sin x.

Exercise 12.32
The points (0, a) and (b, 0), respectively on the Oy and Ox axes, are joined by
a rough wire in the shape of the quadrant of the ellipse parameterised by the
equations
x = b(1 − cos φ),   y = a(1 − sin φ),   0 ≤ φ ≤ π/2.
A bead slides down this wire under the influence of gravity and Coulomb friction.
Show that the equation of motion 12.101 can be written in the form

dz/dφ + 2µabz/(a² cos²φ + b² sin²φ) = g(a cos φ − µb sin φ),

where z = v²/2. If v(0) = 0 show that

(1/2)v² = (g/f(φ)) ∫_0^φ dw (a cos w − µb sin w) f(w),

where

f(φ) = exp(2µ tan⁻¹((b/a) tan φ)).

Deduce that if µ = µ1, where µ1 is the positive solution of

∫_0^{π/2} dw (cos w − µη sin w) f(w) = 0

and η = b/a, the bead has zero speed at the terminus.
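The quadrature solution of this exercise can be cross-checked by integrating the differential equation for z directly; in this sketch (assuming scipy) the parameter values are illustrative.

```python
# Cross-check of exercise 12.32: integrate the equation for z = v^2/2 directly
# and compare with the integrating-factor (quadrature) solution.
import math
from scipy.integrate import quad, solve_ivp

g, a, b, mu = 9.81, 1.0, 1.5, 0.2            # illustrative values

def f(phi):                                   # integrating factor
    return math.exp(2*mu*math.atan((b/a)*math.tan(phi)))

def rhs(phi, z):
    return [g*(a*math.cos(phi) - mu*b*math.sin(phi))
            - 2*mu*a*b*z[0]/(a**2*math.cos(phi)**2 + b**2*math.sin(phi)**2)]

phi_end = 0.45*math.pi                        # stay clear of tan's pole at pi/2
sol = solve_ivp(rhs, (0.0, phi_end), [0.0], rtol=1e-10, atol=1e-12)
z_ode = sol.y[0, -1]

I, _ = quad(lambda w: (a*math.cos(w) - mu*b*math.sin(w))*f(w), 0.0, phi_end)
z_quad = g*I/f(phi_end)
assert abs(z_ode - z_quad) < 1e-6
```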


Chapter 13

Sturm-Liouville systems

13.1 Introduction
The general theory of Sturm-Liouville systems presented in this chapter was created in
a series of articles in 1836 and 1837 by Sturm (1803 – 1855) and Liouville (1809 – 1882):
their work, later known as Sturm-Liouville theory, created a new subject in mathe-
matical analysis. The theory deals with the general linear, second-order differential
equation

d/dx(p(x) dy/dx) + (q(x) + λw(x))y = 0                         (13.1)
where the real variable, x, is confined to an interval, a ≤ x ≤ b, which may be the whole
real line or just x ≥ 0. The functions p(x), q(x) and w(x) are real and satisfy certain,
not very restrictive, conditions that will be delineated in section 13.4; in any particular
problem these functions are known. A second-order differential equation is said to be
in self-adjoint form when expressed as in equation 13.1: most second-order equations
can be expressed in this form, see exercise 2.31 (page 74).
In addition to the differential equation, boundary conditions are specified with the
consequence that solutions exist for only particular values of the constant λ = λk,
k = 1, 2, · · · , which are named¹ eigenvalues: the solution yk(x) is named the eigenfunction
for the eigenvalue² λk. At this stage we shall not specify any boundary conditions,
despite their importance, because different types of problems produce different types of
conditions. Equation 13.1, together with any necessary boundary conditions, is known
as a Sturm-Liouville system, or problem, which belongs to the class of problems known
as eigenvalue problems.
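The simplest instance of equation 13.1, p = w = 1 and q = 0 on [0, π] with y(0) = y(π) = 0, has eigenvalues λk = k² and eigenfunctions sin kx; a finite-difference sketch (not part of the course notes) reproduces the first few eigenvalues numerically.

```python
# The simplest Sturm-Liouville system: p = w = 1, q = 0 on [0, pi] with
# y(0) = y(pi) = 0, i.e. y'' + lambda*y = 0 with eigenvalues lambda_k = k^2.
# A second-order finite-difference discretisation of -y'' reproduces them.
import numpy as np

N = 400
h = np.pi/(N + 1)
M = (np.diag(2.0*np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1))/h**2
lam = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(lam[:3], [1.0, 4.0, 9.0], atol=1e-2)
```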
In physical problems it is often important to compute or estimate the values of the
eigenvalues, for instance in problems associated with waves the eigenvalue is related to
the wave length: in the next chapter we show how variational methods can be used
¹ The fact that we use the same symbol for the eigenvalue and the Lagrange multiplier introduced
in chapter 12 is not a coincidence, as is seen by comparing equation 13.1 with the equation derived in
exercise 12.28, page 337.
² There are also important examples where the eigenvalues can take any real number in an interval
(which may be infinite), and there are examples in which the eigenvalues can be both discrete and
continuous. Such problems are common and important in quantum mechanics. In this course we deal
only with discrete sets of eigenvalues.


to provide estimates using a very simple technique requiring only the evaluation of
integrals, and which is readily extended to the more difficult case of partial differential
equations. In this chapter we concentrate on the properties of the eigenfunctions and
eigenvalues.
Sturm-Liouville problems are important partly because they arise in diverse cir-
cumstances and partly because the properties of the eigenvalues and eigenfunctions are
well understood. Moreover, the behaviour of both the eigenvalues and the eigenfunctions
of a wide class of Sturm-Liouville systems is remarkably similar and is independent
of the particular form of the functions p(x), q(x) and w(x). In this class of problems
there is always a countable infinity of real eigenvalues λk , k = 1, 2, · · · , and the set of
eigenfunctions yk (x), k = 1, 2, · · · , is complete, meaning that these functions may be
used to form generalised Fourier series, as described in section 13.3. Further, there are
simple approximations for both the eigenvalues and eigenfunctions which are accurate
for large k, as shown in exercise 13.25 (page 371).
The achievements of Sturm and Liouville are more impressive when seen in the
context of early nineteenth century mathematics. Prior to 1820 work on differential
equations was concerned with finding solutions in terms of finite formulae or power
series; but for the general equation 13.1 Sturm could not find an expression for the
solution and instead obtained information about the properties of the solution from
the equation itself. This was the first qualitative theory of differential equations and
anticipated Poincaré’s work on nonlinear differential equations developed at the end
of that century. Today the work of Sturm and Liouville is intimately interconnected:
however, though lifelong friends who discussed their work prior to publication, this
theory emerged from a series of articles published separately by each author during
the period 1829 to 1840. More details of this history may be found in Lützen (1990,
chapter 10).
Sturm-Liouville systems are important because they arise in attempts to solve the
linear, partial differential equations that describe a wide variety of physical problems.
In addition most of the special functions that are so useful in mathematical physics,
and the study of which led to advances in analysis in the 19th century, originate in
Sturm-Liouville equations. The importance of these functions should not be under-
estimated, as is frequent in this age of computing, for they furnish useful solutions to
many physical problems and can lead to a broader understanding than purely numerical
solutions. Further, the mathematics associated with these functions is elegant and its
study rewarding. There is no time in this course for any discussion of these functions,
but aspects of the important Bessel function are described in section 13.3.1.
Section 13.2 therefore briefly describes how Sturm-Liouville systems occur and gives
some idea of the variety of types of Sturm-Liouville problems that need to be tackled.
This section is optional, but recommended.
In section 13.3 we consider a particularly simple, solvable, Sturm-Liouville system
and examine the properties of its eigenvalues and eigenfunctions in order to illustrate
all the relevant properties of more general systems, which normally cannot be solved in
terms of elementary functions. Some of these properties depend on elementary prop-
erties of second-order differential equations; this theory in described in section 13.3.
Other properties are endowed on the eigenvalues and eigenfunctions because the canon-
ical form of equation 13.1 is self-adjoint, a term defined in section 13.4.2.
Equation 13.1 can be cast into a variety of other forms which are useful in the

following discussion. Additionally this equation, with appropriate boundary conditions,


is the Euler-Lagrange equation of a constrained variational problem, with λ as the
Lagrange multiplier, and this is crucial for the later developments in chapter 14. The
following exercises lead you through this background and we recommend that you do
these exercises.

Exercise 13.1
(a) Show that the Euler-Lagrange equation for the functional and constraint

S[y] = ∫_a^b dx (py′² − qy²),   C[y] = ∫_a^b dx w(x)y² = 1,

with admissible functions satisfying y(a) = y(b) = 0, is

d/dx(p dy/dx) + (q + λw)y = 0,   y(a) = y(b) = 0.

(b) Define a new independent variable by ξ = ∫_a^x du/p(u) to show that this
Euler-Lagrange equation is transformed into

d²y/dξ² + p(q + wλ)y = 0.

(c) By putting y = uv and by choosing v carefully, show that the original
functional and constraint can be written in the form

S[u] = ∫_a^b dx (u′² − (1/(4p²))(p′² + 4pq − 2pp″)u²),   C[u] = ∫_a^b dx (w/p)u²,

where u(a) = u(b) = 0. Hence derive the Euler-Lagrange equation for u and
compare this with equation 2.32 (page 74).

Exercise 13.2
Liouville's normal form:
Consider the functional

S[y] = ∫_a^b dx (p(x)y′² − (q + λw)y²).

(a) Change the independent variable to ξ = ξ(x) and the dependent variable to
v(ξ), where y = A(ξ)v(ξ). With a suitable choice of ξ(x) show that the functional
can be written in the form

S[v] = (1/2)[p ξ′(x)(A²)′ v²]_c^d + ∫_c^d dξ ((dv/dξ)² − F(ξ)v²),

where dξ/dx = 1/(pA²), c = ξ(a), d = ξ(b) and

F(ξ) = (q + λw)pA⁴ − A d²/dξ²(1/A).

(b) By defining A = (wp)^(−1/4), show that ξ′(x) = √(w/p) and the associated
Euler-Lagrange equation is

d²v/dξ² + (q/w − A d²/dξ²(1/A) + λ)v = 0.

This transformation is sometimes named Liouville's transformation, and is
particularly useful for approximating the eigenvalues and eigenfunctions when λ is
large, see exercise 13.25 (page 371).

13.2 The origin of Sturm-Liouville systems


In this section we show how various types of Sturm-Liouville problems arise. This
material is not assessed but it is recommended that you read it and, time permitting,
that you do some of the exercises at the end of this section because it is important
background material.
The original work of Sturm appears to have been motivated by the problem of
heat conduction. One example he discussed is the temperature distribution in a one-
dimensional bar, described by the linear partial differential equation
 
    h(x) ∂u/∂t = ∂/∂x( p(x) ∂u/∂x ) − l(x)u,    (13.2)
where u(x, t) denotes the temperature at a point x of the bar at time t, and h(x), p(x)
and l(x) are positive functions. If the surroundings of the bar are held at constant
temperature and the ends of the bar, at x = 0 and x = L, are in contact with large
bodies at a different temperature, then the boundary conditions can be shown to be
    p(x) ∂u/∂x + αu(x, t) = 0   at x = 0,
    p(x) ∂u/∂x + βu(x, t) = 0   at x = L,    (13.3)
for some constants α and β. Finally, the initial temperature of the bar needs to be
specified, so u(x, 0) = f (x) where f (x) is the known initial temperature.
Sturm attempted to solve this equation by first substituting a function of the form
u(x, t) = X(x)e−λt , where λ is a constant and X(x) is independent of t. This yields
the ordinary differential equation
 
    d/dx( p(x) dX/dx ) + ( λh(x) − l(x) )X = 0    (13.4)
for X(x) in terms of the unknown constant λ, together with the boundary conditions
    p(0)X′(0) + αX(0) = 0   and   p(L)X′(L) + βX(L) = 0.    (13.5)
This is an eigenvalue problem. Assuming that there are solutions Xk (x) with eigenvalues
λ = λk , for k = 1, 2, · · · , Sturm used the linearity of the original equation to write a
general solution as the sum

    u(x, t) = Σ_{k=1}^∞ A_k X_k(x) e^{−λ_k t},
where the coefficients Ak are arbitrary. This solution formally satisfies the differential
equation and the boundary conditions, but not the initial condition u(x, 0) = f (x),
which will be satisfied only if

    f(x) = Σ_{k=1}^∞ A_k X_k(x).

Thus the problem reduces to that of finding the values of the Ak satisfying this equation.
Fourier (1768 – 1830) and Poisson (1781 – 1840) found expressions for the coefficients
Ak for particular functions h(x), p(x) and l(x), but Sturm and Liouville determined
the general solution.
Typically Sturm-Liouville equations occur when the method of separating variables
is used to solve the linear partial differential equations that arise frequently in physical
problems; some common examples are

    ∇²ψ + k²ψ = 0,    (13.6)
    ∇²ψ − k ∂ψ/∂t = 0,   heat or diffusion equation,    (13.7)
    ∇²ψ − (1/c²) ∂²ψ/∂t² = 0,   wave equation,    (13.8)
    (1/β(x)) ∂/∂x( β(x) ∂ψ/∂x ) − (1/c²) ∂²ψ/∂t² = 0,   canal or horn equation,    (13.9)

where c is a constant representing the speed of propagation of small disturbances in the medium, k is a positive constant, β(x) is some positive function of x and

    ∇²ψ = ∂²ψ/∂x² + ∂²ψ/∂y² + ∂²ψ/∂z².
The first of these equations arises in the solution of Poisson’s equation — that is,
∇2 ψ = −F (r) — and similar equations occur when using separation of variables. The
second equation describes diffusion processes and heat flow. The third equation 13.8
is the wave equation for propagation of small disturbances in an isotropic medium and
describes a variety of wave phenomena such as electromagnetic radiation, water and
air waves, waves in strings and membranes. The fourth equation is a variant of the
previous wave equation and in this form was derived by Green (1793 – 1841) in his
1838³ paper describing waves on a canal of rectangular cross section but with a width
varying along its length; a similar equation describes, approximately, the air pressure
in a horn, though in many instruments the flare is sufficiently rapid for the longitudinal
and radial modes to couple, so it is necessary to use the two-dimensional version of 13.9
in which the variation of the air pressure along the length of the pipe and in the radial
direction is included.
The many different forms of the Sturm-Liouville system that we discuss in the fol-
lowing sections are largely a consequence of the shapes of the regions in which the
physical system is defined and of the coordinate system that simplifies the equations.
A Sturm-Liouville system arises when the method of separation of variables is used to
³ On the Motion of Waves in a variable Canal of small Depth and Width, Camb Phil Soc, Vol VI, part III, 1838.

reduce a partial differential equation to a set of uncoupled ordinary differential equations. Whether or not such a simplification is feasible depends upon the existence of
a suitable coordinate system and this depends upon the form of the original equation
and the shape of the boundary. Relatively few problems yield to this treatment, but
it is important because it is one of the principal means of finding solutions in terms
of known functions: the main alternatives are numerical and variational methods, the
latter being introduced in chapter 14.
In problems with two spatial dimensions separation of variables can be used with
equations 13.6 and 13.8 for rectangular, circular and elliptical boundaries but not, for
example, most triangular boundaries.
We end this section by separating variables for the equation ∇2 ψ + k 2 ψ = 0, using
the spherical polar coordinates,
    x = r sin θ cos φ,   y = r sin θ sin φ,   z = r cos θ,
where 0 ≤ θ ≤ π, 0 ≤ φ ≤ 2π and r ≥ 0 which are appropriate when the equation
is defined in a spherically symmetric region, for instance the interior or exterior of a
sphere of given radius or the region between two spheres of given radii and coincident
centres. The purpose of this section is to show how and why different Sturm-Liouville
systems occur. Although this material is not assessed, you should read it in order to
understand why some of the later mathematics is necessary.
In these coordinates it can be shown that equation 13.6 becomes
    ∂/∂r( r² ∂ψ/∂r ) + (1/sin θ) ∂/∂θ( sin θ ∂ψ/∂θ ) + (1/sin²θ) ∂²ψ/∂φ² + k²r²ψ = 0.    (13.10)
First, write ψ(r, θ, φ) as the product ψ(r, θ, φ) = R(r)S(θ, φ) where R depends only
upon r and S only upon (θ, φ). Equation 13.10 then can be written in the form
    (1/R) d/dr( r² dR/dr ) + k²r² = −(1/S)[ (1/sin θ) ∂/∂θ( sin θ ∂S/∂θ ) + (1/sin²θ) ∂²S/∂φ² ].
The left-hand side of this equation depends only upon r and the right-hand side only
upon (θ, φ). Because (r, θ, φ) are independent variables this equation can be satisfied
only if each side is equal to the same constant, which we denote by µ; constants introduced for this purpose are named separation constants. Note that the constant k is also a separation constant, obtained when separating the time from the spatial coordinates, as in passing from equation 13.2 to 13.4. Thus we obtain the two equations

    d/dr( r² dR/dr ) + ( k²r² − µ )R = 0,    (13.11)
    (1/sin θ) ∂/∂θ( sin θ ∂S/∂θ ) + (1/sin²θ) ∂²S/∂φ² + µS = 0.    (13.12)
The first of these equations is already in the canonical form of equation 13.1, and
contains two constants k and µ which are determined by the boundary conditions.
The second equation for S is converted into two suitable equations in the same
manner: substitute S = Θ(θ)Φ(φ) where Θ and Φ are respectively functions of θ and φ
only. Then equation 13.12 can be cast in the form,
    (sin θ/Θ) d/dθ( sin θ dΘ/dθ ) + µ sin²θ = −(1/Φ) d²Φ/dφ².

The left-hand side of this equation depends only upon θ and the right-hand side only
upon φ, so each must equal the same constant. Later we shall see that the separation constant must be positive or zero: denoting it by ω², with ω ≥ 0 so that ω = √(ω²) is unambiguous, gives the two equations

    d²Φ/dφ² + ω²Φ = 0,    (13.13)
    (1/sin θ) d/dθ( sin θ dΘ/dθ ) + ( µ − ω²/sin²θ )Θ = 0.    (13.14)

Finally, if we define a new independent variable by x = cos θ, so that

    df/dθ = −sin θ df/dx   and   (1/sin θ) d/dθ( sin θ df/dθ ) = d/dx( (1 − x²) df/dx ),

the equation for Θ becomes

    d/dx( (1 − x²) dΘ/dx ) + ( µ − ω²/(1 − x²) )Θ = 0,   −1 ≤ x ≤ 1.    (13.15)
Both equation 13.13 for Φ and the two equations 13.14 and 13.15 for Θ are in the
canonical form of equation 13.1.
Comparison of 13.13 for Φ with equation 13.1 shows that the separation constant ω 2
now plays the role of the eigenvalue; its value is determined by the boundary conditions
that Φ needs to satisfy. Comparison of 13.15 for Θ with equation 13.1 shows that here
µ plays the role of the eigenvalue.
This analysis shows that in spherical polar coordinates the equation ∇2 ψ + k 2 ψ = 0
gives rise to three Sturm-Liouville systems for R(r), Θ(θ) and Φ(φ) where ψ = R(r)Θ(θ)Φ(φ).
These equations are summarised in table 13.1.

Table 13.1: Summary of the three Sturm-Liouville systems arising from separation of variables of equation 13.6 using spherical polar coordinates, giving the explicit form of the three functions p, q and w in each case.

    Equation                                        p        q              w     Eigenvalue
    Φ″ + ω²Φ = 0                                    1        0              1     ω²
    ( (1 − x²)Θ′(x) )′ + ( µ − ω²/(1 − x²) )Θ = 0   1 − x²   −ω²/(1 − x²)   1     µ
    ( r²R′(r) )′ + ( k²r² − µ )R = 0                r²       −µ             r²    k²

Now consider the boundary conditions.


The equation for Φ: the points with coordinates (r, θ, φ) and (r, θ, φ + 2nπ), n = 0, ±1, ±2, · · ·, all label the same point in space, so in most physical problems we must have Φ(φ + 2nπ) = Φ(φ) for all φ, that is, Φ(φ) must be 2π-periodic. This is why the separation constant introduced to derive equations 13.13 and 13.14 had to be positive, for the equation Φ″ − ν²Φ = 0, with ν > 0, does not have periodic solutions; further,

Φ is 2π-periodic only if ω is a non-negative integer, ω = m, m = 0, 1, 2, · · ·, see exercise 13.9 (page 353).
The equation for Θ has p(x) = 1 − x², which is zero at the ends of the interval (−1, 1), that is, at θ = 0 and π, corresponding to the poles. The poles are singular points of spherical polar coordinates, because at each pole φ is undefined, and this is why p(x) = 0 at x = ±1. Further, because the coefficient of Θ″(x) is zero at x = ±1, the general theory of linear differential equations shows that there are two types of solutions: those that are bounded at x = ±1 and those that are unbounded. Physical considerations suggest that in most circumstances only bounded solutions are significant. Thus for this type of Sturm-Liouville problem the boundary conditions are simply that Θ is bounded for x ∈ [−1, 1]. It can be shown that with ω = m this condition gives µ = l(l + 1), l = m, m + 1, m + 2, · · ·⁴; these solutions are named the associated Legendre polynomials and are denoted by P_l^m(x).
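As an illustration of the claim µ = l(l + 1), here is a quick symbolic check (not part of the original notes) that the associated Legendre function P_2^1(x) = −3x√(1 − x²), in the Condon-Shortley sign convention, satisfies equation 13.15 with ω = m = 1 and µ = l(l + 1) = 6:

```python
import sympy as sp

x = sp.symbols('x')
mu, m = 6, 1                          # l = 2: mu = l(l + 1) = 6, omega = m = 1
Theta = -3*x*sp.sqrt(1 - x**2)        # P_2^1(x), Condon-Shortley convention

# Left-hand side of equation 13.15; it should vanish identically
residual = sp.diff((1 - x**2)*sp.diff(Theta, x), x) \
           + (mu - m**2/(1 - x**2))*Theta
assert sp.simplify(residual) == 0
print("P_2^1 satisfies equation 13.15 with mu = 6")
```

The overall constant −3 is irrelevant here: the equation is linear and homogeneous, so any multiple of a solution is again a solution.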
The radial equation for R(r) has p(r) = r², so if the original space includes the origin
we find that because p(0) = 0 the solutions are of two types, those that are bounded
and those that are unbounded at r = 0. Again, physical considerations usually suggest
that the bounded solutions are chosen. The other boundary conditions are either given
by some condition at r = a > 0, where a is the radius of the sphere in which the original
problem is defined, or that the solutions remain bounded as r → ∞.
Summary: the method of separation of variables applied to the equation ∇2 ψ + k 2 ψ = 0,
using spherical polar coordinates leads to three different types of Sturm-Liouville sys-
tems. In this summary we introduce the idea of regular and singular Sturm-Liouville
systems, that will be discussed further and defined in section 13.4.

(1) The equation

    d²Φ/dφ² + ω²Φ = 0    (13.16)

with periodic boundary conditions Φ(φ + 2π) = Φ(φ) for all φ, which determines the possible values of ω. Note that this condition implies the conditions Φ(0) = Φ(2π) and Φ′(0) = Φ′(2π).
(2) The equation

    d/dx( (1 − x²) dΘ/dx ) + ( µ − ω²/(1 − x²) )Θ = 0,   −1 ≤ x ≤ 1.    (13.17)

The condition that Θ is bounded for all x ∈ [−1, 1] serves the same purpose as boundary conditions, and determines the possible values of the eigenvalue µ, once ω² is known. Because p(x) = 1 − x² is zero at the ends of the interval, this type of Sturm-Liouville equation is classified as a singular Sturm-Liouville system.
(3) The equation

    d/dr( r² dR/dr ) + ( k²r² − µ )R = 0.    (13.18)
⁴ A physical reason why l ≥ m is that in some circumstances l is proportional to the magnitude of an angular momentum and m to a projection of this vector along a given axis, which can be no longer than the original vector.

For this equation several types of conditions can specify the solution uniquely and determine the possible values of the eigenvalue k².
(i) If 0 ≤ r ≤ a then, since p(r) = r² is zero at r = 0, the solutions will normally be required to be bounded at r = 0 and to satisfy a condition of the form A₁y(a) + A₂y′(a) = 0 at r = a, where A₁ and A₂ are constants. This system is classified as a singular Sturm-Liouville system because p(r) = 0 at r = 0.
(ii) If r ∈ [0, ∞) then, since p(0) = 0, the solutions will normally be required to be bounded at r = 0 and to tend to zero as r → ∞. Again this is a singular Sturm-Liouville system.
(iii) If 0 < a ≤ r ≤ b the solution will be required to satisfy boundary conditions of the form

    A₁y(a) + A₂y′(a) = 0   and   B₁y(b) + B₂y′(b) = 0,

where A₁, A₂, B₁ and B₂ are constants. For this system p(r) = r² > 0 for all r and the system is a regular Sturm-Liouville system.
The examples described in this section show how Sturm-Liouville equations arise and
why a variety of types of these equations exist. The significance of the differing types
will become clear as the theory develops.
Exercise 13.3
Consider the system ∇²ψ + k²ψ = 0 with ψ(x, y) = 0 on the rectangle defined by the x- and y-axes, and the lines x = a > 0, y = b > 0. Show that inside this rectangle separation of variables with Cartesian coordinates leads to the two Sturm-Liouville systems

    d²X/dx² + ω₁²X = 0   and   d²Y/dy² + ω₂²Y = 0,

with X(0) = X(a) = 0, Y(0) = Y(b) = 0, where ψ = X(x)Y(y) and ω₁² + ω₂² = k².

Exercise 13.4
Consider the system ∇²ψ + k²ψ = 0, with ψ(x, y) = 0 on the circle of radius a, defined inside this circle. Use the polar coordinates x = r cos θ, y = r sin θ, 0 ≤ r ≤ a, to cast the equation in the form

    ∂²ψ/∂r² + (1/r) ∂ψ/∂r + (1/r²) ∂²ψ/∂θ² + k²ψ = 0.

By putting ψ = R(r)Θ(θ), where R(r) depends only upon r and Θ(θ) only upon θ, show that

    d²Θ/dθ² + ω²Θ = 0,   with Θ(θ) 2π-periodic,
    r² d²R/dr² + r dR/dr + ( k²r² − ω² )R = 0,

where ω is a positive constant. Show further that the equation for R(r) can be cast in the self-adjoint form

    d/dr( r dR/dr ) + ( k²r − ω²/r )R = 0.

13.3 Eigenvalues and functions of simple systems


The eigenvalues and eigenfunctions of most Sturm-Liouville systems are not easy to
find; yet the theory of Sturm-Liouville systems, to be described later, shows that the
eigenfunctions for most Sturm-Liouville systems with discrete eigenvalues behave sim-
ilarly, independent of the detailed form of the three functions p, q and w and of the
boundary conditions. This important fact is one reason why the approximate method
described in section 14.3 works so well.
Thus in this section, in order to help understand this behaviour, we consider the
Sturm-Liouville system defined by the equation

    d²y/dx² + λy = 0,   y(0) = y(π) = 0,    (13.19)

with p(x) = w(x) = 1, q(x) = 0 and defined in the interval [0, π]. This equation
has simple solutions, found in exercise 13.5, and by studying these it is possible to
understand almost everything about the solutions of other Sturm-Liouville systems
with discrete eigenvalues. We illustrate this point in section 13.3.1 by describing the
properties of a singular Sturm-Liouville system closely related to equation 13.18, and
whose eigenfunctions are Bessel functions.

Exercise 13.5

(a) Show that equation 13.19 has no real, nontrivial solutions if λ ≤ 0.

(b) Find the values of λ > 0 for which solutions exist and find these solutions.

In exercise 13.5 it was shown that the eigenfunctions and eigenvalues of equation 13.19
are
    y_n(x) = B sin nx,   λ_n = n²,   n = 1, 2, · · · .    (13.20)

The constant B is undetermined because the equation and boundary conditions are
homogeneous. It is often convenient to fix the value of this constant by normalising the
eigenfunctions to unity, that is we set

    ∫₀^π dx y_n(x)² = 1,  and this gives  B² ∫₀^π dx sin²nx = (π/2)B² = 1.    (13.21)

By choosing B to be positive this convention gives the following eigenfunctions and eigenvalues:

    y_n(x) = √(2/π) sin nx,   λ_n = n²,   n = 1, 2, · · · .    (13.22)

Graphs of the adjacent pairs of eigenfunctions {y₁(x), y₂(x)} and {y₅(x), y₆(x)} are
shown in the following figure.

Figure 13.1 Graphs of y_k(x) = √(2/π) sin kx for k = 1, 2 on the left, and k = 5, 6 on the right.

We now list the important properties of these eigenvalues and eigenfunctions and state
which are common to all Sturm-Liouville systems. It is surprising that most of these
properties are common to all Sturm-Liouville systems regardless of the precise forms of
the functions p, q and w.
In this list we first state the specific property of the solutions of the Sturm-Liouville
system 13.19, and then state the equivalent general property of the solutions for the
general system, equation 13.1.

Real eigenvalues   The eigenvalues λ_n = n², n = 1, 2, · · ·, are real.


The eigenvalues of all Sturm-Liouville systems are real and this is a consequence of
the form of the differential equation and the boundary conditions, which together
produce a self-adjoint operator: for an example of boundary conditions that give
complex eigenvalues, see exercise 13.10 (page 353).
Behaviour of eigenvalues   The smallest eigenvalue is unity, but there is no largest eigenvalue: further, λ_n/n² = O(1) as n → ∞.
For the general Sturm-Liouville system there is a smallest but no largest eigenvalue, and λ_n increases as n² for large n; this is proved in exercise 13.25 (page 371).
Uniqueness of eigenfunctions   For each eigenvalue λ_n there is a single eigenfunction, y_n ∝ sin nx, unique to within a multiplicative constant.
This is also true of regular Sturm-Liouville systems and most singular Sturm-
Liouville systems of physical interest. The important exception described in ex-
ercise 13.9 (page 353) shows that there is not always a unique eigenfunction for
periodic boundary conditions. The example of exercise 13.13 shows that some
singular Sturm-Liouville systems have no eigenfunctions.
Interlacing zeros The zeros of adjacent eigenfunctions interlace, so there is one and
only one zero of yn+1 (x) between adjacent zeros of yn (x), see figure 13.1.
This is also true in the general case, and is a property of many solutions of second-
order equations, see theorem 13.2 (page 360), see also theorem 13.3.
Number of zeros of the nth eigenfunction The nth eigenfunction has n − 1 zeros
in 0 < x < π.
For the general Sturm-Liouville problem on the interval [a, b] the nth eigenfunction
has n − 1 zeros in a < x < b. This property is largely a consequence of the
interlacing of zeros.

Orthogonality of eigenfunctions   The integral of the product of two distinct eigenfunctions over the interval (0, π) is zero,

    ∫₀^π dx y_n(x) y_m(x) = ∫₀^π dx sin nx sin mx = 0,   n ≠ m.

For the general Sturm-Liouville system, regular and singular, defined in equation 13.1 there is a similar result. If φ_n(x) and φ_m(x) are eigenfunctions belonging to two distinct eigenvalues, then they can be shown to satisfy the orthogonality relation

    ∫_a^b dx w(x) φ_n(x)* φ_m(x) = h_n δ_nm,    (13.23)

where h_n is a sequence of positive numbers, δ_nm is the Kronecker delta⁵ and a * denotes the complex conjugate. Note that there are two differences between the specific example of equation 13.19 and the general case. First, the function w(x), the same function that multiplies the eigenvalue in the original differential equation 13.1, has been included in the integrand: in this context w(x) is often named the weight function. Second, the complex conjugate of φ_n(x) appears. This is necessary because there are circumstances when it is more convenient to use complex solutions even though the equations are real: for instance, we often use e^{inx} in place of the real trigonometric functions cos nx and sin nx.
By analogy with ordinary geometric vectors this integral is named an inner product and it is convenient to introduce the short-hand notation

    (f, g)_w = ∫_a^b dx w(x) f(x)* g(x),    (13.24)

where f(x) and g(x) are any functions, which may be complex, for which the integral exists. Notice that (g, f)_w = (f, g)_w*. With this notation equation 13.23 can be written in the form h_n = (φ_n, φ_n)_w. If w(x) = 1 we denote the inner product by (f, g).
If (f, g)_w = 0 the two functions are said to be orthogonal, and if (f, f)_w = 1 the function f is said to be normalised.
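For the simple system 13.19 the orthonormality of the normalised eigenfunctions 13.22 is easy to confirm numerically. The following sketch, not part of the original notes, approximates the inner product (with w = 1) by the trapezium rule:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 20001)

def y(n):
    # normalised eigenfunctions of equation 13.22
    return np.sqrt(2/np.pi)*np.sin(n*x)

def inner(f, g):
    # inner product (f, g) with weight w = 1, by the trapezium rule
    fg = f*g
    return np.sum(fg[:-1] + fg[1:])*(x[1] - x[0])/2

for n in range(1, 5):
    for m in range(1, 5):
        expected = 1.0 if n == m else 0.0
        assert abs(inner(y(n), y(m)) - expected) < 1e-6
print("(y_n, y_m) = delta_nm verified for n, m = 1..4")
```

The trapezium rule is particularly accurate here because the integrands are trigonometric polynomials sampled on a uniform grid.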

Completeness of eigenfunctions   The eigenfunctions y_n(x) = sin nx may be used in a Fourier series to represent any sufficiently well behaved function f(x) for which ∫₀^π dx |f(x)|² exists. The Fourier representation of f(x) is

    f(x) = Σ_{n=1}^∞ b_n sin nx,   0 < x < π,   where   b_n = (2/π) ∫₀^π dx f(x) sin nx.    (13.25)

The infinite set of functions sin nx, n = 1, 2, · · ·, is said to be complete on the interval (0, π) because any sufficiently well behaved function can be represented in terms of such an infinite series.
⁵ The Kronecker delta is a function of two integers, (n, m), defined by δ_nm = 0 if n ≠ m and 1 if n = m.

In general, if φ_n(x), n = 1, 2, · · ·, are the eigenfunctions of a Sturm-Liouville system defined on (a, b), with given boundary conditions, they are complete, which means that any sufficiently well behaved function f(x) for which ∫_a^b dx |f(x)|² exists can be represented by the infinite series

    f(x) = Σ_{n=1}^∞ a_n φ_n(x),   a < x < b,    (13.26)

where

    a_n = (φ_n, f)_w/(φ_n, φ_n)_w = (1/h_n) ∫_a^b dx w(x) φ_n(x)* f(x),   h_n = (φ_n, φ_n)_w.

It is conventional to name the more general series 13.26 a Fourier series and the coefficients a_n the Fourier components: the series 13.25 is often referred to as a trigonometric series, if a distinction is necessary.
The twin properties of orthogonality and completeness of the eigenfunctions, and
hence the existence of the series 13.26, are two reasons why Sturm-Liouville sys-
tems play a significant role in the theory of linear differential equations. It means,
for instance, that solutions of the inhomogeneous equation
 
    d/dx( p(x) dy/dx ) + q(x)y = F(x),    (13.27)

with suitable boundary conditions, can usually be expressed as a linear combination of the eigenfunctions of the related Sturm-Liouville system,

    d/dx( p(x) dy/dx ) + ( q(x) + λw(x) )y = 0,
with the same boundary conditions. The rigorous treatment of this theory is too
involved to be included in this course, but an outline of the theory is contained
in the next exercise.

Exercise 13.6
Suppose that the Sturm-Liouville system

    d/dx( p dy/dx ) + ( q + λw )y = 0,   y(a) = y(b) = 0,

has an infinite set of eigenvalues and eigenfunctions, λ_k and φ_k(x), k = 1, 2, · · ·, with 0 < λ₁ < λ₂ < · · ·, which satisfy the orthogonality relation 13.23.
(a) Consider the infinite series

    y(x) = Σ_{k=1}^∞ y_k φ_k(x),

where the coefficients y_k are constants. Assuming the order of summation and differentiation can be interchanged, show that

    d/dx( p dy/dx ) + qy = −Σ_{k=1}^∞ y_k λ_k w(x) φ_k(x).

(b) Hence show that the solution of the inhomogeneous equation 13.27 can be written in the form

    y(x) = ∫_a^b du G(x, u)F(u)   where   G(x, u) = −Σ_{k=1}^∞ φ_k(u)* φ_k(x)/( h_k λ_k ).
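A concrete illustration of part (b), under assumptions not in the original text: take p = 1, q = 0, w = 1 on (0, π), so φ_k(x) = sin kx, λ_k = k², h_k = π/2, and solve y″ = F with F(x) = 1. The exact solution of y″ = 1, y(0) = y(π) = 0 is y = x(x − π)/2, which a truncated form of the Green's function series reproduces:

```python
import numpy as np

# phi_k = sin(kx), lambda_k = k^2, h_k = pi/2 on (0, pi);
# y(x) = -sum_k (phi_k, F) phi_k(x)/(h_k lambda_k), (phi_k, F) = (1 - cos k pi)/k
x = np.linspace(0.0, np.pi, 401)
y = np.zeros_like(x)
for k in range(1, 2001):
    fk = (1 - np.cos(k*np.pi))/k          # (phi_k, F) with F(u) = 1
    y -= fk*np.sin(k*x)/((np.pi/2)*k**2)

exact = x*(x - np.pi)/2                    # direct integration of y'' = 1
assert np.max(np.abs(y - exact)) < 1e-5
print("Green's function series matches the exact solution")
```

Only the odd values of k contribute, and the terms decay like k⁻³, so the truncated series converges quickly.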

Exercise 13.7
This exercise shows how the boundary conditions can affect the eigenvalues and eigenfunctions. Find all eigenvalues and eigenfunctions of the Sturm-Liouville systems defined by the differential equation

    d²y/dx² + λy = 0,

and the three sets of boundary conditions

(a) y′(0) = y′(π) = 0,   (b) y(0) = y′(π) = 0,   (c) y(0) = 0, y(π) = y′(π).

In each case show that the eigenfunctions φ_n(x) belonging to distinct eigenvalues are orthogonal, that is, satisfy

    ∫₀^π dx φ_n(x)* φ_m(x) = h_n δ_nm,

where h_n is a sequence of positive numbers which you should find.

Exercise 13.8
This exercise involves lengthy algebraic manipulations. In exercise 13.7 you found the following sets of eigenfunctions, y_n(x), and eigenvalues, λ_n, for the equation d²y/dx² + λy = 0 with three different boundary conditions:

(a) y_n(x) = cos nx, λ_n = n², n = 0, 1, · · ·,   for y′(0) = y′(π) = 0;
(b) y_n(x) = sin((n + 1/2)x), λ_n = (n + 1/2)², n = 0, 1, · · ·,   for y(0) = y′(π) = 0;
(c) y₀(x) = sinh ω₀x, λ₀ = −ω₀²; y_n(x) = sin ω_n x, λ_n = ω_n², where tanh ω₀π = ω₀ and tan πω_n = ω_n, n = 1, 2, · · ·.

The Sturm-Liouville theorem shows that each of these sets of functions is complete on (0, π). Use equation 13.26 to show that the function x may be represented by any of the following series on the interval (0, π):

    x = π/2 − (4/π) Σ_{k=0}^∞ cos((2k + 1)x)/(2k + 1)²,

    x = (2/π) Σ_{k=0}^∞ [ (−1)^k/(k + 1/2)² ] sin((k + 1/2)x),

    x = −[ 2(π − 1) cosh πω₀/( ω₀(π − cosh² πω₀) ) ] sinh ω₀x − 2(π − 1) Σ_{k=1}^∞ cos ω_kπ sin ω_kx/( ω_k(π − cos² ω_kπ) ).
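The first of these series is easy to check numerically; the sketch below, which is not part of the notes, evaluates a truncated sum at the arbitrarily chosen point x = 1:

```python
import numpy as np

x = 1.0
k = np.arange(0, 100000)
# partial sum of x = pi/2 - (4/pi) sum cos((2k+1)x)/(2k+1)^2
partial = np.pi/2 - (4/np.pi)*np.sum(np.cos((2*k + 1)*x)/(2*k + 1)**2)
assert abs(partial - x) < 1e-4
print(partial)   # close to 1
```

The terms decay like k⁻², so convergence is slow but steady; many more terms would be needed near the endpoints, where the cosine series converges to π/2 and has zero slope rather than matching x exactly.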

Exercise 13.9
Periodic boundary conditions:
(a) Show that the eigenvalues of the Sturm-Liouville system

    d²y/dx² + λy = 0,   y(0) = y(2πa),   y′(0) = y′(2πa),   a > 0,

are given by

    λ_n = (n/a)²,   n = 0, 1, 2, · · ·,

and that there are no negative eigenvalues. Show also that for n = 0 there is just one eigenfunction, which can be taken to be y₀(x) = 1, and for n ≥ 1 each eigenvalue has two linearly independent eigenfunctions,

    y_n(x) ∈ { cos(nx/a), sin(nx/a) },

or any linear combination of these.
(b) Consider the two eigenfunctions associated with the nth eigenvalue,

    u₁(x) = A₁ cos(nx/a) + B₁ sin(nx/a)   and   u₂(x) = A₂ cos(nx/a) + B₂ sin(nx/a).

Show that these are orthogonal only if A₁A₂ + B₁B₂ = 0.

Exercise 13.10
Mixed boundary conditions:
The solutions of a Sturm-Liouville equation with mixed boundary conditions usually behave quite differently from those with unmixed conditions. An example is considered in this exercise.
Consider the system with mixed boundary conditions

    d²y/dx² + λy = 0,   y(0) = 0,   y(π) = a y′(0),   a > 0.

Show that if 0 < a < π there are a finite number of real eigenvalues, given by the real roots (ω₁, ω₂, · · ·, ω_N) of the equation sin ωπ = aω, with λ = ω², with eigenfunctions y_k(x) = sin ω_kx and N ≃ 1/a.
Are these eigenfunctions orthogonal?
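The root-counting claim N ≃ 1/a can be illustrated numerically. The following sketch, an illustration rather than part of the notes, counts the sign changes of sin ωπ − aω on a fine grid for a = 0.1 (real roots must satisfy aω ≤ 1, i.e. ω ≤ 1/a):

```python
import numpy as np

a = 0.1
w = np.linspace(1e-6, 1.5/a, 2_000_001)   # start just above the trivial root w = 0
f = np.sin(np.pi*w) - a*w
n_roots = int(np.count_nonzero(np.sign(f[:-1]) != np.sign(f[1:])))
assert abs(n_roots - 1/a) <= 1
print(n_roots)   # 9 positive roots for a = 0.1, consistent with N ~ 1/a = 10
```

Each positive hump of sin ωπ below the line aω contributes two simple crossings (the first hump only one, since the curves separate at the origin), which is why the count is close to, but not exactly, 1/a.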

13.3.1 Bessel functions (optional)


Here we show that the properties described in the previous section are shared by the Bessel functions, one of the families of special functions that can be defined by a singular Sturm-Liouville equation, given in equation 13.28.
We choose the Bessel function for this illustration because it is one of the more
important special functions of mathematical physics. It was one of the first special
functions to be the subject of a comprehensive treatise (Watson 1966)⁶, which provides
⁶ G N Watson 1966 A treatise on the Theory of Bessel Functions (Cambridge University Press), first published 1922.



a thorough history of the early development and use of Bessel functions: they have oc-
curred in the work of Euler (1764, in the vibrations of a stretched membrane), Lagrange
(1770, in the theory of planetary motion), Fourier (1822, in his theory of heat flow),
Poisson (1823, in the theory of heat flow in spherical bodies) and by Bessel (1824, who
studied these functions in detail): Watson (1966) abandons his attempt to delineate the
chronological order of the study after Bessel as ‘After the time of Bessel, investigations
on the functions become so numerous . . . ’.
Bessel functions are important because, unlike most other special functions, they
arise in two quite distinct types of problems. The first is in the solution of linear partial
differential equations where separation of variables is used to derive ordinary differential
equations; typically problems involving cylindrical and spherical symmetry give rise to
Bessel functions, but so does the problem of the small vibrations of a chain suspended
from one end (considered by Euler in 1782).
These types of problem lead to differential equations that can be cast into the form
    x² d²y/dx² + x dy/dx + ( x² − ν² )y = 0,    (13.28)

where ν is a real number⁷, though in the following we consider only the case ν = 1.
The various solutions of this equation are collectively named Bessel functions. This
equation is singular at the origin (see section 13.3) and, as a consequence, it can be
shown to possess two types of solution. Those denoted by Jν (x) are bounded at the
origin: those denoted by Yν (x) are unbounded at the origin.
The second application arises because it is frequently necessary to expand the func-
tion e−iz sin t , which is 2π-periodic in t, as a Fourier series. It transpires that the Fourier
components are Bessel functions,

    e^{−iz sin t} = Σ_{n=−∞}^{∞} J_n(z) e^{−int}.    (13.29)

This relation is useful in the modern problem of the interaction of periodic electric
fields, lasers for example, with atoms and molecules: but the original application of
Bessel functions in this context was the inversion of Kepler’s equation, which relates
the time, t, to the eccentric anomaly, u, of a planet in an elliptical orbit with the Sun
at one focus,
    ωt = u − ε sin u   (Kepler's equation).    (13.30)

Here ω is the angular frequency of the planet and ε the eccentricity of the elliptical
path, typically less than 0.1, the exceptions being Mercury (0.21) and Pluto (0.25).
Elementary dynamics gives the approximate position of each planet in terms of u, but
for practical applications they are needed in terms of the time. By writing ωt = θ and
u = θ + P (θ), so P (θ) is a 2π-periodic function, we find that the Fourier components
of P (θ) are related to Bessel functions, see exercise 13.26.
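These Fourier components lead to the classical Bessel-function (Kapteyn) series for the eccentric anomaly, u = θ + Σ_{n≥1} (2/n) J_n(nε) sin nθ. The exact statement of exercise 13.26 is not reproduced here, but the series itself is easy to test; the sketch below, not part of the notes, compares it with a direct Newton solution of Kepler's equation, using scipy's Bessel routine:

```python
import numpy as np
from scipy.special import jv

eps, theta = 0.1, 1.0        # eccentricity and mean anomaly, theta = omega*t
# Classical Bessel (Kapteyn) series for the eccentric anomaly u
u_series = theta + sum(2/n*jv(n, n*eps)*np.sin(n*theta) for n in range(1, 30))

# Direct solution of u - eps*sin(u) = theta by Newton's method
u = theta
for _ in range(50):
    u -= (u - eps*np.sin(u) - theta)/(1 - eps*np.cos(u))

assert abs(u_series - u) < 1e-10
print(u)   # the eccentric anomaly at theta = 1
```

For small eccentricities the terms J_n(nε) decay very rapidly with n, so a handful of terms already gives machine-precision agreement.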
This application gives rise to the integral definition of Jn (x),
    J_n(x) = (1/2π) ∫_{−π}^{π} dt exp( i(nt − x sin t) ),   n = 0, ±1, ±2, · · · .    (13.31)

⁷ In the general theory both x and ν are complex variables. The important modified Bessel functions are obtained by making the argument x purely imaginary.



The integral representation of J_ν(x), where ν is not an integer, is more complicated (Whittaker and Watson, 1965, sections 17.1 and 17.231). It can be shown, by differentiating equation 13.31, that the function defined in this way satisfies the differential equation 13.28, see exercise 13.27.

Exercise 13.11
(a) Show that the self-adjoint form of equation 13.28 is

    d/dx( x dy/dx ) + ( x − ν²/x )y = 0.

(b) Show that the normal form, defined in exercise 2.31 (page 74), of equation 13.28 is

    d²u/dx² + ( 1 − (ν² − 1/4)/x² )u = 0   where   y(x) = u(x)/√x,   x > 0.

(c) Apply the Liouville transformation, defined in exercise 13.2, to equation 13.28 to give the alternative form of Bessel's equation

    d²y/dξ² + ( e^{2ξ} − ν² )y = 0   where   ξ = ln x,   x > 0.
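The normal form in part (b) can be checked numerically for ν = 1. The sketch below, not part of the original notes, uses scipy's Bessel routines to confirm that u = √x J₁(x) satisfies u″ + (1 − 3/(4x²))u = 0 at a few sample points:

```python
import numpy as np
from scipy.special import jv, jvp

x = np.array([1.0, 2.0, 5.0])
# u = sqrt(x) J_1(x); by the product rule,
#   u'' = -x**(-3/2) J_1 / 4 + x**(-1/2) J_1' + sqrt(x) J_1''
u   = np.sqrt(x)*jv(1, x)
upp = (-0.25*x**-1.5*jv(1, x) + x**-0.5*jvp(1, x, 1)
       + np.sqrt(x)*jvp(1, x, 2))

residual = upp + (1 - 3/(4*x**2))*u    # nu = 1 gives nu^2 - 1/4 = 3/4
assert np.all(np.abs(residual) < 1e-10)
print("normal form verified at x = 1, 2, 5")
```

Here jvp(1, x, n) is scipy's nth derivative of J₁; the check is exact up to floating-point rounding.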

Exercise 13.12
(a) Use the Fourier series 13.29 to show that
(i) J_{−n}(x) = (−1)ⁿ J_n(x);
(ii) J_n(−x) = (−1)ⁿ J_n(x);
(iii) J₀(x) + 2J₂(x) + 2J₄(x) + · · · = 1.
(b) Use the integral definition to show that J₀(0) = 1 and that J_n(0) = 0 for n ≠ 0.
(c) By differentiating the integral definition 13.31 with respect to x, derive the recurrence relation

    2J_n′(x) = J_{n−1}(x) − J_{n+1}(x).

(d) Use the integral definition 13.31 to show that

    J_{n−1}(x) + J_{n+1}(x) = (2n/x) J_n(x).
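The identities in parts (b)-(d) are easy to check numerically from the integral definition itself. The following sketch, not part of the notes, evaluates 13.31 by the trapezium rule, which is extremely accurate for smooth periodic integrands:

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 8001)

def J(n, x):
    # real part of the integrand of 13.31; the imaginary part is odd in t
    g = np.cos(n*t - x*np.sin(t))
    return np.sum(g[:-1] + g[1:])*(t[1] - t[0])/2 / (2*np.pi)

n, x, h = 3, 2.7, 1e-6
Jp = (J(n, x + h) - J(n, x - h))/(2*h)        # numerical derivative J_n'(x)

assert abs(J(0, 0.0) - 1) < 1e-12 and abs(J(2, 0.0)) < 1e-12   # part (b)
assert abs(2*Jp - (J(n - 1, x) - J(n + 1, x))) < 1e-6          # part (c)
assert abs(J(n - 1, x) + J(n + 1, x) - (2*n/x)*J(n, x)) < 1e-10  # part (d)
print("parts (b)-(d) verified at n = 3, x = 2.7")
```

The values n = 3, x = 2.7 are arbitrary sample points; part (c) is limited only by the accuracy of the central-difference derivative.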

In the remainder of this section we describe the behaviour of the eigenvalues and eigen-
functions of the singular Sturm-Liouville system associated with Bessel’s equation,

    x² d²y/dx² + x dy/dx + ( µ²x² − 1 )y = 0,   0 ≤ x ≤ 1,   y(1) = 0,    (13.32)

with µ > 0; in particular we show that they satisfy most of the properties listed at the beginning of section 13.3. By converting equation 13.32 to the self-adjoint form (xy′)′ + ( µ²x − 1/x )y = 0, see exercise 13.11, and comparing with equation 13.1, we see that the eigenvalue is λ = µ² (and p = w = x, q = −1/x). By changing the independent variable to ξ = µx we see that this equation is the same as equation 13.28 with ν = 1

and hence has the solutions Y1 (µx) and J1 (µx); we require the solution that is bounded,
that is J1 (µx).
The boundary condition at x = 1 then gives J1 (µ) = 0, that is µ must be one of the
zeros of the Bessel function. A graph of J1 (µ) is shown in figure 13.2 and this suggests
that there are an infinite number of positive zeros, µk , k = 1, 2, · · · .

Figure 13.2 Graph of the Bessel function J1(µ), for 0 ≤ µ ≤ 20.

Using its series expansion, Daniel Bernoulli (1738) first suggested that this Bessel function has an infinite set of zeros. Later we shall see how this follows from the general theory of second-order differential equations: the first five zeros are

µ1 = 3.832, µ2 = 7.016, µ3 = 10.17, µ4 = 13.32, µ5 = 16.47,

and these numbers can be approximated by the formula

    µk = (k + 1/4)π − 3/(8π(k + 1/4)) + O(k⁻³),   k = 1, 2, · · · ,

which gives the first zero to within 0.006% and progressively improves in accuracy with
increasing k.
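The quality of this approximation is easily tested; the sketch below (an illustration, not from the text, assuming scipy is available) compares it with the zeros computed by scipy.

```python
import numpy as np
from scipy.special import jn_zeros

# Compare the exact zeros of J1 with the asymptotic formula
#   mu_k ~ (k + 1/4)*pi - 3/(8*pi*(k + 1/4))
exact = jn_zeros(1, 5)                 # first five positive zeros of J1
for k, mu in enumerate(exact, start=1):
    approx = (k + 0.25) * np.pi - 3.0 / (8 * np.pi * (k + 0.25))
    rel_err = abs(approx - mu) / mu
    print(f"mu_{k} = {mu:.4f}, formula = {approx:.4f}, rel. error = {rel_err:.1e}")
    assert rel_err < 1e-4              # better than 0.01 % even for k = 1
```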
The easiest way to understand why J1(x) oscillates in the manner shown in figure 13.2 is to use the result derived in exercise 13.11(b). For large x this shows that u(x) = √x J1(x) is given approximately by the equation u″ + u = 0, so that J1(x) ≃ (A cos x + B sin x)/√x; this shows why J1(x) oscillates but does not give the phase of the oscillations, that is the values of A and B.
The eigenfunctions of equation 13.32 are thus

yk (x) = J1 (µk x), k = 1, 2, · · · . (13.33)

The following two figures show the graphs of the eigenfunctions {y1(x), y2(x)} and {y5(x), y6(x)}; these should be compared with figure 13.1 (page 349).

Figure 13.3 Graphs of yk(x) = J1(µk x), for k = 1, 2, on the left, and k = 5, 6 on the right.

These eigenfunctions and eigenvalues all behave as previously described, namely:

• the eigenvalues are real;
• for large n the eigenvalues behave as λn = µn² ≃ (n + 1/4)²π², that is λn/n² = O(1) as n → ∞;
• the nth eigenfunction has n − 1 zeros in the interval 0 < x < 1;
• there is one and only one zero of yn+1(x) between adjacent zeros of yn(x);
• the eigenfunctions are orthogonal with weight function w(x) = x. In this case it can be shown that

    ∫₀¹ dx x J1(xµn) J1(xµm) = δnm hn   with   hn = ½ J1′(µn)².

• the eigenfunctions are complete, which means that any sufficiently well behaved real function, f(x), on the interval 0 < x < 1 can be expressed as the infinite series, equation 13.26 (page 351),

    f(x) = Σ_{n=1}^{∞} an J1(xµn)   where   an = (2/J1′(µn)²) ∫₀¹ dx x f(x) J1(xµn).
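The orthogonality and normalisation above can be confirmed by direct quadrature; the following Python sketch (illustrative only, assuming scipy is available) does this for the first few eigenfunctions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1, jn_zeros, jvp

# Check  int_0^1 x J1(mu_n x) J1(mu_m x) dx = delta_nm * J1'(mu_n)^2 / 2
mu = jn_zeros(1, 4)                       # first four zeros of J1
for n in range(4):
    for m in range(4):
        val, _ = quad(lambda x: x * j1(mu[n] * x) * j1(mu[m] * x), 0, 1)
        expected = 0.5 * jvp(1, mu[n])**2 if n == m else 0.0
        assert abs(val - expected) < 1e-7
print("orthogonality and normalisation verified")
```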

13.4 Sturm-Liouville systems


In the previous section it was shown how the eigenvalues and eigenfunctions of a par-
ticular Sturm-Liouville system behave and it was stated that most of these systems
behave similarly. We now formally define regular and singular systems before investi-
gating some of these properties. The distinction between regular and singular Sturm-
Liouville systems is important, because not all singular systems have eigenvalues, see
exercise 13.13; however, the regular and singular systems that arise from linear partial
differential equations behave similarly.
A regular Sturm-Liouville system is defined to be the linear, homogeneous, second-order differential equation⁸

    d/dx ( p(x) dy/dx ) + ( q(x) + λw(x) ) y = 0   (13.34)
8 There is no agreed convention for the signs in this equation. For instance, in Courant and Hilbert

(1965) and Birkhoff and Rota (1962) the sign in front of q(x) is negative and in Körner (1988) the
signs in front of q(x) and λ are negative. Care is needed when using different sources.

defined on a finite interval of the real axis a ≤ x ≤ b, together with the homogeneous boundary conditions

    A1 y(a) + A2 y′(a) = 0   and   B1 y(b) + B2 y′(b) = 0,   (13.35)

with A1, A2, B1 and B2 real constants, and the two cases A1 = A2 = 0 and B1 = B2 = 0 excluded. These conditions are sometimes named separated boundary conditions. In addition it is assumed that:
• the functions p(x), q(x) and w(x) are real and continuous for a ≤ x ≤ b;
• p(x) and w(x) are strictly positive for a ≤ x ≤ b;
• p′(x) exists and is continuous for a ≤ x ≤ b.
Equation 13.32, defining the Bessel function, the radial equation 13.18 for R(r) and equation 13.17 for Θ(θ) do not satisfy the condition p > 0. Further, equation 13.16 for Φ(φ) has a different type of boundary condition from those of equation 13.35. It follows that the scope of the theory needs to be extended if it is to be useful.
First, it needs to apply to periodic boundary conditions, that is

    y(a) = y(b),   y′(a) = y′(b),   (13.36)

which are an important subset of the class of mixed boundary conditions, see exercise 13.9. Equation 13.16 for Φ(φ) has this type of boundary condition. Another common Sturm-Liouville system with periodic boundary conditions is Mathieu's equation,

    d²y/dθ² + (λ − 2q cos 2θ) y = 0,   y(0) = y(π),   y′(0) = y′(π),   (13.37)
where q is a real parameter. This equation seems to have been first studied by the French mathematician Émile Mathieu (1835 – 1890) in his discussion of the vibrations of an elliptic membrane and occurs when separating variables in elliptical coordinates, see exercise 13.26 (page 373). In this example λ(q) is the eigenvalue and it has a fairly
complicated dependence upon the variable q.
The main difference between periodic and separated boundary values is that some-
times, see exercise 13.9, each eigenvalue has more than one eigenfunction. In such cases
it is always possible to choose linear combinations that are orthogonal.
The second necessary extension is to those equations where p(x) = 0 at either or
both end points. In the example treated in section 13.2, the equation 13.18 for R(r) is
singular if the interval contains r = 0, as is the Bessel function example, equation 13.32:
the equation 13.17 for Θ(θ) is singular because p(x) = 1 − x2 is zero at both ends of
the interval. Thus singular systems are as common as regular systems.
As an aside we note that all these singular systems arise because the spherical polar
coordinates used to separate variables are singular at the poles, where x = cos θ = ±1
and φ is undefined, and at r = 0 where neither θ nor φ are defined. It is this geo-
metric singularity in the transformation between Cartesian and polar coordinates that
makes the Sturm-Liouville systems singular: therefore we do not expect these particular
singular systems to be much different from regular systems.
A Sturm-Liouville system for which p(x) is positive for a < x < b but vanishes at
one or both ends is named a singular Sturm-Liouville system. These systems comprise
the differential equation 13.34, with w(x) and q(x) satisfying the same conditions as
for a regular system, and

• the solution is bounded for a ≤ x ≤ b;

• at an end point at which p(x) does not vanish, y(x) satisfies a boundary condition
of the type 13.35.

The example of equation 13.17 shows that for some singular systems q(x) is unbounded
at the interval ends. The behaviour of q(x) is not, however, so important in determining
the behaviour of the eigenfunctions.
The third necessary extension is to systems defined on infinite or semi-infinite inter-
vals, which arise in many applications in quantum mechanics. We shall not deal with
these problems, but note that in many cases these systems behave like regular systems.

Exercise 13.13
Consider the eigenvalue problem

    d/dx ( x² dy/dx ) + λy = 0,   0 ≤ x ≤ 1,   y(1) = c ≥ 0,

and with y(x) bounded.


(a) Find the general solution of this equation and show that this problem has no
eigenvalues if c = 0 and infinitely many if c > 0.
(b) How does this problem change if the boundary conditions become y(a) =
y(1) = 0, 0 < a < 1?

13.4.1 Separation and Comparison theorems


In this section we use the Wronskian, introduced in section 2.4.3, to derive useful
properties of the positions of the zeros of the solutions of the homogeneous equation,

    p2(x) d²y/dx² + p1(x) dy/dx + p0(x) y = 0,   a ≤ x ≤ b,   (13.38)

where p2(x) ≠ 0 for x ∈ [a, b]. The theorems given here were first discovered by Sturm: the first involves the relative positions of the zeros of two linearly independent solutions, f(x) and g(x), of the homogeneous equation 13.38. Since W(f, g) ≠ 0, if g(x) = 0 at x = c, then

    W(f, g; c) = f(c) g′(c) ≠ 0.

Hence f(c) ≠ 0 and g′(c) ≠ 0.
Now let c and d be two successive zeros of g(x), so g(c) = g(d) = 0 then f (c) 6= 0
and f (d) 6= 0; also g 0 (c) and g 0 (d) must have different signs (because if g(x) is increasing
at x = c it must be decreasing at x = d, or vice-versa). Since W (f, g; x) has constant
sign and
W (c) = f (c)g 0 (c), W (d) = f (d)g 0 (d),
it follows that f (c) and f (d) must have opposite signs. Hence f (x) must have at least
one zero for c < x < d; two possible situations are shown in figure 13.4.

Figure 13.4 Diagram showing the behaviour of f(x) between two adjacent zeros of g(x), consistent with W(f, g) not changing sign. Only the behaviour on the left-hand side is actually possible, because we assume that g(x) ≠ 0 for c < x < d, see text.

However, there can be only one zero of f (x) between adjacent zeros of g(x). Suppose
there are more: by reversing the roles of f and g we see that between two of the zeros
of f (x), there must be at least one zero of g(x), which contradicts the assumption that
c and d are adjacent zeros. Thus we have the following theorem.
Theorem 13.1
Sturm’s separation theorem. If f (x) and g(x) are linearly independent solutions of
the second-order homogeneous equation 13.38, then the zeros of f (x) and g(x) alternate
in (a, b).

A well known example of this theorem is the equation y″ + y = 0, on the whole real line, which has the independent solutions sin x and cos x with the alternating zeros nπ and (n + 1/2)π, n = 0, ±1, ±2, · · · , respectively. A less obvious consequence is that the two functions
    f(x) = a1 sin x + a2 cos x   and   g(x) = b1 sin x + b2 cos x

have alternating zeros provided a1 b2 ≠ a2 b1, which ensures that the two functions are
linearly independent, see exercise 2.33 (page 75).
Note that this theorem does not prove that the zeros exist. The equation y″ − y = 0, with solutions sinh x and cosh x, shows that zeros need not exist.
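A quick numerical illustration (a sketch, not part of the text) locates the zeros of two such linearly independent combinations on [0, 10] and checks that they alternate; the coefficients below are arbitrary choices with a1 b2 ≠ a2 b1.

```python
import numpy as np

# Two linearly independent solutions of y'' + y = 0.
a1, a2, b1, b2 = 1.0, 2.0, 3.0, -1.0
f = lambda x: a1 * np.sin(x) + a2 * np.cos(x)
g = lambda x: b1 * np.sin(x) + b2 * np.cos(x)

def zeros_on(fun, lo, hi, n=20000):
    """Locate sign changes of fun on [lo, hi], refined by bisection."""
    x = np.linspace(lo, hi, n)
    y = fun(x)
    out = []
    for i in np.nonzero(y[:-1] * y[1:] < 0)[0]:
        a, b = x[i], x[i + 1]
        for _ in range(60):
            m = 0.5 * (a + b)
            if fun(a) * fun(m) <= 0:
                b = m
            else:
                a = m
        out.append(0.5 * (a + b))
    return out

zf, zg = zeros_on(f, 0, 10), zeros_on(g, 0, 10)
# Between each pair of adjacent zeros of g lies exactly one zero of f.
for c, d in zip(zg, zg[1:]):
    assert sum(c < z < d for z in zf) == 1
print("zeros alternate:", np.round(sorted(zf + zg), 3))
```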
The next theorem is more useful and in some circumstances can be used to show that
zeros exist and also to give their approximate positions. This is Sturm’s comparison
theorem, which we first state, then prove.
Theorem 13.2
Sturm’s comparison theorem. Let y1 (x) and y2 (x) be, respectively, nontrivial so-
lutions of the differential equations
    d²y/dx² + Q1(x) y = 0   and   d²y/dx² + Q2(x) y = 0   (13.39)
on an interval (a, b) and assume that Q1 (x) ≥ Q2 (x) everywhere in this interval. Then
between any two zeros of y2 (x) there is at least one zero of y1 (x), unless Q1 (x) = Q2 (x)
everywhere and y1 is a constant multiple of y2 .

A simple example of this theorem is the equation y″ + ω²y = 0, with solution sin ωx having zeros at nπ/ω, equally spaced, a distance π/ω apart. Hence for the two equations

with ω = ω2 and ω = ω1 > ω2 there must be at least one zero of sin ω1 x between
adjacent zeros of sin ω2 x.
Proof of the comparison theorem
The following proof depends upon the properties of the Wronskian. If x = c and x = d
are adjacent zeros of y2 (x), with c < d, suppose that y1 (x) 6= 0 for c ≤ x ≤ d. We may
assume that both y1 (x) and y2 (x) are positive in (c, d). Then

W (y1 , y2 ; c) = y1 (c)y20 (c) > 0, since y20 (c) > 0,


(13.40)
W (y1 , y2 ; d) = y1 (d)y20 (d) < 0, since y20 (d) < 0.

But

    dW/dx = d/dx ( y1 y2′ − y1′ y2 ) = y1 y2″ − y1″ y2

and, on using the differential equations 13.39 defining y1 and y2, this simplifies to

    dW/dx = ( Q1(x) − Q2(x) ) y1(x) y2(x) ≥ 0,   c ≤ x ≤ d.

It follows that if Q1(x) > Q2(x), W(y1, y2; x) is a monotonic increasing function of x, so that W(c) ≤ W(d), which contradicts equation 13.40. Thus we must have y1(d) < 0 and hence y1(x) must have at least one zero in (c, d).
Further, if Q1 = Q2 the separation theorem implies that there is one zero unless y1
and y2 are linearly dependent, that is y2 (x) is a multiple of y1 (x).
Applications of the comparison theorem
The equation y″ + Q(x)y = 0, Q(x) ≤ 0
The first important result that follows from this is that every nontrivial solution of

    d²y/dx² + Q(x) y = 0   (13.41)

has at most one zero in any interval where Q(x) ≤ 0.
The proof is by contradiction. A solution of y″ = 0 (that is, Q1(x) = 0) is y1(x) = 1. If a solution of 13.41 has two zeros in a region where Q2 ≤ Q1 = 0, then y1(x) would have at least one zero in between, which is a contradiction.
The elementary equation y″ − y = 0, with the two sets of linearly independent solutions, {cosh x, sinh x} and {e⁻ˣ, eˣ}, illustrates this result: only the second member of the first pair, sinh x, has a zero.
Bessel functions
The comparison theorem can sometimes be applied to obtain useful properties of solutions. For instance the equation for an ordinary Bessel function of order ν is

    d²y/dx² + (1/x) dy/dx + ( 1 − ν²/x² ) y = 0,   (13.42)

which can be written in the normal form, exercise 2.31 (page 74),

    d²u/dx² + ( 1 + (1 − 4ν²)/(4x²) ) u = 0   where   y(x) = u(x)/√x,   x > 0.   (13.43)

If ν < 1/2 the function Q1(x) = 1 + (1 − 4ν²)/(4x²) > 1, so a suitable comparison equation is v″ + v = 0, that is Q2 = 1 < Q1. A solution of the comparison equation is v = sin x, with positive zeros at x = nπ, n = 1, 2, · · · . Hence u(x) has at least one zero in each of the intervals (nπ, (n + 1)π), n = 1, 2, · · · .
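For ν = 0 (< 1/2) this prediction is easily tested against the actual zeros of J0; the Python sketch below (not from the course text, assuming scipy) checks that every interval (nπ, (n + 1)π) contains a zero.

```python
import numpy as np
from scipy.special import jn_zeros

# The comparison argument predicts at least one zero of J0 (hence of
# u = sqrt(x) J0) in every interval (n*pi, (n+1)*pi), n = 1, 2, ...
zeros = jn_zeros(0, 20)
for n in range(1, 15):
    lo, hi = n * np.pi, (n + 1) * np.pi
    assert np.any((zeros > lo) & (zeros < hi)), (lo, hi)
print("each interval (n*pi, (n+1)*pi) contains a zero of J0")
```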
If ν > 1/2 we can show that the solution has an infinity of positive zeros. In this case Q1(x) = 1 − (4ν² − 1)/x² < 1, so we take the comparison equation to be v″ + ω²v = 0, with 0 < ω < 1: then for x > x0(ω), where Q1(x0) = ω², we have Q1(x) > Q2 = ω², and the comparison theorem shows that there is at least one zero of u(x) in each interval (nπ/ω, (n + 1)π/ω), with nπ > ωx0; as x → ∞, we may choose ω close to unity.
We end this section by quoting, without proof, a more general comparison theorem,
needed later to obtain approximate positions of the zeros of an eigenfunction. The proof
of this theorem may be found in Birkhoff and Rota (1962, chapter 10).
Theorem 13.3
Sturm’s comparison theorem II. For the differential equations
   
d dy d dy
p1 (x) + Q1 (x)y = 0 and p2 (x) + Q2 (x)y = 0, a ≤ x ≤ b,
dx dx dx dx

where p2 (x) ≥ p1 (x) and Q2 (x) ≤ Q1 (x) for x ∈ (a, b), then if y1 (x) is a solution of the
first equation and y2 (x) any solution of the second equation, between any two adjacent
zeros of y2 there lies at least one zero of y1 , except if p1 = p2 , Q1 = Q2 , for all x ∈ [a, b],
and y1 is a constant multiple of y2 .

A shorter, approximate and easy-to-remember version is: as Q(x) increases and/or p(x) decreases, the number of zeros of every solution increases.
The first comparison theorem is a direct consequence of this theorem. These the-
orems can be used to show that for a regular Sturm-Liouville system, provided the
eigenfunctions yn (x) exist and the eigenvalues satisfy λ1 < λ2 < · · · < λn < λn+1 < · · · ,
then the zeros of yn (x) interlace and that yn (x) has n − 1 zeros in (a, b). We outline a
proof that these eigenfunctions exist in section 13.4.3.

Exercise 13.14
Use the Liouville normal form found in exercise 13.2 (page 341) and the comparison
theorem to show that there is a lower bound on the eigenvalues of a regular Sturm-
Liouville system with the boundary conditions y(a) = y(b) = 0.

Exercise 13.15
(a) Show that every solution of the Airy equation y″ + xy = 0 vanishes infinitely often for x > 1 and at most once for x < 0.
(b) Show that if y(x) satisfies Airy's equation, then v(x) = y(ax) satisfies the equation v″ + a³xv = 0.
(c) Show that the Sturm-Liouville system y″ + λxy = 0, y(0) = y(1) = 0, has an infinite sequence of positive eigenvalues and no negative eigenvalues.
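Part (c) can be explored numerically. The sketch below (not part of the course text; the grid size and discretisation are arbitrary choices) replaces the problem by second-order finite differences, giving the generalised symmetric eigenproblem (−D2)y = λ diag(x) y, and checks the predicted behaviour.

```python
import numpy as np
from scipy.linalg import eigh

# Discretise y'' + lambda*x*y = 0, y(0) = y(1) = 0, on an interior grid.
N = 400
h = 1.0 / N
x = np.linspace(h, 1 - h, N - 1)                   # interior grid points
D2 = (np.diag(-2.0 * np.ones(N - 1)) +
      np.diag(np.ones(N - 2), 1) +
      np.diag(np.ones(N - 2), -1)) / h**2
lam, Y = eigh(-D2, np.diag(x))                     # eigenvalues in ascending order
assert np.all(lam > 0)                             # no negative eigenvalues
for n in range(1, 5):                              # n-th eigenfunction: n-1 zeros
    y = Y[:, n - 1]
    assert np.sum(y[:-1] * y[1:] < 0) == n - 1
print("first four eigenvalues:", np.round(lam[:4], 2))
```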

13.4.2 Self-adjoint operators


The eigenvalues of a Sturm-Liouville system are real and the eigenfunctions of most
systems are orthogonal. These two important properties follow directly from the form
of the real, differential operator,
 
    L f = d/dx ( p(x) df/dx ) + q(x) f,   a ≤ x ≤ b,   (13.44)

which defines the Sturm-Liouville equation.


The first result we need is Lagrange's identity,

    v (Lu)* − u* (Lv) = d/dx [ p(x) ( v du*/dx − u* dv/dx ) ]   (13.45)

where u and v are any, possibly complex, functions for which both sides of the identity
exist.

Exercise 13.16
Prove Lagrange’s identity, equation 13.45.
Using the inner product notation, with unit weight function⁹, (f, g) = ∫_a^b dx f(x)* g(x), Lagrange's identity can be written in the form

    (Lu, v) − (u, Lv) = [ p(x) ( v(x) du*/dx − u(x)* dv/dx ) ]_a^b.   (13.46)
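Equation 13.46 can be verified numerically for sample real functions u and v; in the sketch below (not from the course text) the choices of p, q, u and v are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

# Check (Lu, v) - (u, Lv) = [p (v u' - u v')]_a^b for real u, v on [a, b].
a, b = 0.0, 1.0
p  = lambda x: 1.0 + x**2        # arbitrary smooth p > 0
dp = lambda x: 2.0 * x
q  = lambda x: np.cos(x)         # arbitrary continuous q

u, du, d2u = np.sin, np.cos, lambda x: -np.sin(x)
v, dv, d2v = np.exp, np.exp, np.exp

def L(f, df, d2f, x):            # Lf = (p f')' + q f = p f'' + p' f' + q f
    return p(x) * d2f(x) + dp(x) * df(x) + q(x) * f(x)

lhs_Luv, _ = quad(lambda x: L(u, du, d2u, x) * v(x), a, b)   # (Lu, v)
lhs_uLv, _ = quad(lambda x: u(x) * L(v, dv, d2v, x), a, b)   # (u, Lv)
boundary = lambda x: p(x) * (v(x) * du(x) - u(x) * dv(x))
assert abs((lhs_Luv - lhs_uLv) - (boundary(b) - boundary(a))) < 1e-8
print("Lagrange identity verified")
```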

For some boundary conditions the right-hand side of this equation is zero and then

(Lu, v) = (u, Lv). (13.47)

In this case the operator and the boundary conditions are said to be self-adjoint. It is
important to note that a differential operator cannot be self-adjoint without appropriate
boundary conditions.
For the homogeneous, separated boundary conditions defined in equation 13.35 (page 358) we have, since A1 and A2 are real, and assuming A2 ≠ 0,

    A1 u(a)* + A2 u′(a)* = 0   and   A1 v(a) + A2 v′(a) = 0   ⟹   u′(a)*/u(a)* = v′(a)/v(a).

This shows that the boundary term of equation 13.46 is zero at x = a; a similar analysis shows it to be zero at x = b. If A2 = 0 then u(a) = v(a) = 0 and the same result follows.
For a singular system, if p(a) = 0 the boundary term at x = a is clearly zero. Thus
for regular and singular systems (Lu, v) = (u, Lv) and the operator L is self-adjoint.
Periodic boundary conditions also make the system self-adjoint, as shown in the next
exercise.
9 There is no agreed version of the inner product notation. That adopted here is normally used in

physics, particularly in quantum mechanics, but in mathematics texts the integrand is often taken to
be f (x)g(x)∗ . Provided one definition is used consistently the difference is immaterial.

Exercise 13.17
Prove that if the boundary conditions are periodic, y(a) = y(b) and y′(a) = y′(b), and p(a) = p(b), then L is self-adjoint.
Note: periodic boundary conditions are examples of mixed boundary conditions in which the values of the function, and possibly its derivative, at the two ends of the range are non-trivially related. Normally mixed boundary conditions produce operators that are not self-adjoint, see exercise 13.20.

Exercise 13.18
In this chapter the operators considered are real but complex operators are often
useful.
Show that on the space of differentiable functions for which ∫_{−∞}^{∞} dx |u(x)|² exists the real operator L = d/dx is not self-adjoint, but that the complex operator iL is self-adjoint.

In this example there are no boundary conditions: the condition that the integral ∫_{−∞}^{∞} dx |u(x)|² exists means that |u| → 0 as x → ±∞ and this plays the role of the boundary conditions.

Exercise 13.19
Show that the operator L defined by

    L y = d²y/dx² + αy = 0,   y(0) = A,   y′(π) = B,
where α, A and B are nonzero constants, is not self-adjoint. This exercise shows
why the boundary conditions need to be homogeneous.

Exercise 13.20
Show that the system L y = y″ + λy = 0, with the mixed boundary conditions y(0) = 0, y(π) = a y′(0), a ≠ 0, is not self-adjoint.
Note: in exercise 13.10 (page 353) it was shown that some of the eigenvalues of this system are complex and that the eigenfunctions are not orthogonal.

The eigenvalues of a self-adjoint operator are real


If φ(x) is an eigenfunction corresponding to an eigenvalue λ, then Lφ = −λwφ and

    (Lφ, φ) = (−λwφ, φ) = −λ* (wφ, φ).

Also

    (φ, Lφ) = (φ, −λwφ) = −λ (φ, wφ)

and hence, since w(x) is real,

    0 = (Lφ, φ) − (φ, Lφ) = (λ − λ*) ∫_a^b dx w(x) |φ(x)|².

Since w(x) > 0 and (φ, φ)_w > 0, the right-hand side can be zero only if λ = λ*, that is the eigenvalues of a Sturm-Liouville system are real: this proof is valid for regular and singular systems and if the boundary conditions are periodic.

The eigenfunctions are orthogonal


Now consider two eigenfunctions φ(x) and ψ(x) corresponding to distinct eigenvalues λ and µ, respectively, that is Lφ = −λwφ and Lψ = −µwψ. By the self-adjoint property

    0 = (Lφ, ψ) − (φ, Lψ) = −λ(φ, ψ)_w + µ(φ, ψ)_w = (µ − λ) ∫_a^b dx w(x) φ(x)* ψ(x).

Since we have assumed that µ ≠ λ it follows that

    (φ, ψ)_w = ∫_a^b dx w(x) φ(x)* ψ(x) = 0.   (13.48)

13.4.3 The oscillation theorem (optional)


In this section we provide a brief outline of a proof that a regular Sturm-Liouville system
possesses a countable infinity of eigenfunctions. The final result is summarised in the
following theorem, which is a consequence of the oscillation theorem, theorem 13.5. In
the remainder of this section we describe the ideas behind the proof of the oscillation
theorem: the details may be found in Birkhoff and Rota (1962, chapter 10).
Theorem 13.4
The regular Sturm-Liouville system

    d/dx ( p dy/dx ) + Q(x) y = 0,   Q(x) = q(x) + λw(x),   a ≤ x ≤ b,   (13.49)

with the separated boundary conditions

    A1 y(a) + A2 y′(a) = 0   and   B1 y(b) + B2 y′(b) = 0   (13.50)

has an infinite sequence of real eigenvalues λ1 < λ2 < · · · < λn < λn+1 < · · · with limn→∞ λn = ∞. The eigenfunction yn(x) belonging to the eigenvalue λn has exactly n − 1 zeros in the interval a < x < b and is determined uniquely up to a constant multiplicative factor.

The main idea behind the proof outlined here is the Prüfer substitution, named after the German mathematician Heinz Prüfer (1896 – 1934); this involves using polar coordinates in the Cartesian plane having coordinates (py′, y) to understand how the solution
behaves. Two new dependent variables (r(x), θ(x)) are defined by the relations

    p(x) y′ = r cos θ   and   y = r sin θ   (13.51)

so that

    r² = y² + p² y′²   and   tan θ = y/(p y′).   (13.52)

Since y and y′ cannot simultaneously be zero, r > 0. Notice that y(x) = 0 when θ(x) = nπ, where n is an integer.
First we need the differential equations for r and θ. Differentiating the equation for tan θ gives

    (1/cos²θ) dθ/dx = 1/p − y (p y′)′/(p y′)² = 1/p + Q ( y/(p y′) )²

where we have used the relation (p y′)′ = −Q y. Multiplying by cos²θ gives

    dθ/dx = Q(x) sin²θ + (1/p(x)) cos²θ,   Q = q(x) + λw(x).   (13.53)
This first-order equation for θ is independent of r and, provided p(x) ≠ 0, it has a unique solution for every initial value of θ, that is θ(a). Further it can be shown that the solution θ(x, λ) is a continuous function of x and λ in the intervals a ≤ x ≤ b and −∞ < λ < ∞.
The equation for r is found by differentiating the equation for r² and then using the original equation

    r dr/dx = y dy/dx + p y′ d(p y′)/dx = (r²/p) sin θ cos θ − Q r² sin θ cos θ.

Hence

    dr/dx = (1/2) ( 1/p(x) − Q(x) ) r sin 2θ.   (13.54)
The two equations 13.53 and 13.54 are equivalent to the original differential equation and are named the Prüfer system associated with the self-adjoint equation 13.34.
The equation for r can be expressed as an integral

    r(x) = r(a) exp [ (1/2) ∫_a^x dt ( 1/p(t) − Q(t) ) sin 2θ(t) ],   (13.55)

which can be evaluated once θ(x) is known; however, we shall not need this equation. Notice that because the original equation for y is homogeneous the magnitude of r(x) is unimportant, which is why r(x) depends linearly upon r(a).
The solution of equation 13.53 for θ(x) depends only upon the initial condition, that is the boundary condition A1 y(a) + A2 y′(a) = 0, which gives

    tan θ_a = −A2/(A1 p(a))   with   0 ≤ θ_a < π,   (13.56)

and with θ_a = π/2 if A1 = 0. The eigenvalues are given by those values of λ for which θ_b = θ(b, λ) satisfies the equation tan θ_b = −B2/(B1 p(b)). However, here the main objective is not to find the eigenvalues but first to determine that they exist and second to determine some of their properties, and for this only the initial condition is required.
It is necessary to understand how θ(x, λ) behaves as a function of x and λ; this
behaviour is summarised in the following theorem which is proved in Birkhoff and Rota
(1962, chapter 10).
Theorem 13.5
The oscillation theorem. The solution of the differential equation 13.53 satisfying the initial condition θ(a, λ) = θ_a < π, for all λ, is continuous and strictly increasing in λ for fixed x in the interval a < x ≤ b. Also

    lim_{λ→∞} θ(x, λ) = ∞   and   lim_{λ→−∞} θ(x, λ) = 0   for a < x ≤ b.

This theorem shows that y(b, λ) = r(b) sin θ(b, λ) has infinitely many zeros for λ > 0, and hence that there are infinitely many eigenfunctions.

In order to understand why θ(x, λ) behaves in the manner described in theorem 13.5
we consider two specific examples. The first is a very simple system with known eigen-
functions; the second example is sufficiently general to contain all the essential features
of the general case.
The first system is

    d²y/dx² + λy = 0,   0 ≤ x ≤ π,   (13.57)

and here p = 1 and Q = λ, so the equation 13.53 for θ is

    dθ/dx = cos²θ + λ sin²θ,   θ(0) = θ0.

This equation is particularly simple because the right-hand side is independent of x, so it can be integrated directly, to give

    x(θ) = ∫_{θ0}^{θ} dφ / ( cos²φ + λ sin²φ ).   (13.58)

However, this simplicity makes the example unrepresentative, which is why another example is considered after the following discussion. We now deduce the qualitative behaviour of the function θ(x) from this integral.
If λ > 0, θ′(x) > 0 and θ(x) is a monotonic increasing function of x; the larger λ the greater the rate of increase of θ(x, λ). In particular θ(π, λ) is an increasing function of λ: this is clear from the integral 13.58 because the integrand is positive and, for most values of φ, a decreasing function of λ. Thus for a given value of x the upper limit, θ(x), must increase as λ increases to compensate for the decreasing magnitude of the integrand, see exercise 13.21.
If λ < 0, then θ(x) tends to a constant value as x → ∞. To see this observe that θ′(x) = 0 when θ = θ_c and π − θ_c, where 0 < θ_c = tan⁻¹(1/√(−λ)) < π/2, and thus,

• if θ0 = θ_c then θ(x) = θ_c for all x; this solution is stable;
• if θ0 = π − θ_c then θ(x) = π − θ_c for all x; this solution is unstable;
• if 0 ≤ θ0 < θ_c, then θ′(0) > 0 and θ(x) increases monotonically to θ_c as x → ∞;
• if θ_c < θ0 < π − θ_c, then θ′(0) < 0 and θ(x) decreases monotonically to θ_c as x → ∞;
• if π − θ_c < θ0 < π then θ′(0) > 0 and θ(x) increases monotonically to θ_c + π as x → ∞.

This behaviour is shown graphically in figure 13.5, where λ = −1/4, which gives θ_c = 1.107, and graphs of θ(x, λ) are shown for various initial conditions. Figure 13.6 shows the graphs of θ(x, λ), with the same initial condition θ0 = 0.6, but various values of λ. Since θ_c depends upon λ, θ′(0) > 0 for λ > −2.14, and θ′(0) < 0 for λ < −2.14.

Figure 13.5 Graphs of θ(x) for λ = −1/4 and various initial conditions.
Figure 13.6 Graphs of θ(x), for the initial condition θ0 = 0.6, and various negative λ.

It is clear from these graphs that there can be at most one negative eigenvalue. For the parameters of figure 13.6, θ0 = 0.6, θ(π, λ) varies between 0 and tan⁻¹(π + tan θ0) = 1.315 as λ increases from −∞ to 0: if the boundary condition θ_π, at x = π, lies in this range there will be a single eigenvalue for some negative λ. Otherwise there will be no negative eigenvalue.
Now restrict attention to the case λ > 0, where θ(x, λ) increases with x, for fixed
λ, and with λ for fixed x. Graphs of θ(x, λ) for 0 ≤ x ≤ π and various values of λ are
shown in figure 13.7.

Figure 13.7 Some representative graphs of θ(x), defined by equation 13.58 with θ0 = 0, for various values of λ. Using the integral 13.58 it can be shown that if λ ≫ 1, θ(x) ≃ x√λ.

The following exercise uses the integral 13.58 to deduce some properties of θ(x, λ) for the differential equation 13.57 with the boundary conditions y(0) = y(π) = 0.

Exercise 13.21
(a) For the boundary value problem

    y″ + λy = 0,   y(0) = y(π) = 0,

show that θ(0, λ) = 0 and θ(π, λ) = nπ, for some positive integer n. Use equation 13.58 to deduce that the value of λ satisfying this last equation is λ = n². Deduce that the nth eigenvalue is λn = n² and show that its eigenfunction has n − 1 zeros in the interval 0 < x < π.

(b) If θ(π, λ) = θ_π(λ) show that

    dθ_π/dλ = ( cos²θ_π + λ sin²θ_π ) ∫_0^{θ_π} dφ sin²φ / ( cos²φ + λ sin²φ )² > 0.

(c) Show that lim_{λ→∞} θ(π, λ) = ∞.

For part (a) you will need the integral

    ∫_0^{π/2} dφ / ( a² cos²φ + b² sin²φ ) = π/(2ab),   a > 0, b > 0.
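The conclusion of part (a) can be reproduced by integrating the Prüfer equation for this problem numerically; the Python sketch below (illustrative, not part of the text, assuming scipy) checks that θ(π, λ) = nπ exactly when λ = n², recovering the eigenvalues λn = n².

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate dtheta/dx = cos^2(theta) + lam*sin^2(theta), theta(0) = 0,
# from x = 0 to x = pi and read off theta(pi, lam).
def theta_at_pi(lam):
    sol = solve_ivp(lambda x, th: np.cos(th)**2 + lam * np.sin(th)**2,
                    (0, np.pi), [0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

for n in (1, 2, 3):
    assert abs(theta_at_pi(n**2) - n * np.pi) < 1e-6   # eigenvalues lam = n^2

# theta(pi, lam) is monotonic increasing in lam, as part (b) asserts.
vals = [theta_at_pi(lam) for lam in (0.5, 1.0, 4.0, 9.0, 16.0)]
assert all(a < b for a, b in zip(vals, vals[1:]))
print("theta(pi, n^2) = n*pi for n = 1, 2, 3")
```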

Now consider a slightly different, but more typical problem, for which there is no simple formula for θ(x). Consider the eigenvalue problem

    d²y/dx² + λxy = 0,   y(0) = y(1) = 0,   (13.59)

also treated in exercise 13.15. In this example p = 1 and Q = λx, so the equation for θ is

    dθ/dx = cos²θ + λx sin²θ,   θ(0) = 0,   0 ≤ x ≤ 1.   (13.60)
If λ > 0, θ0 (x) > 0 and, as before, θ(x) is a monotonic increasing function of x, with a
greater rate of increase the larger λ. Further if λ2 > λ1 , θ(x, λ2 ) ≥ θ(x, λ1 ), as shown by
an application of the theorem for first-order equations quoted in exercise 13.22. Thus
for λ > 0 there is little qualitative difference between this and the previous simpler
example; some representative graphs of θ(x, λ) are depicted in figure 13.8.

Figure 13.8 Some representative graphs of θ(x), defined by equation 13.60, for various values of λ.

If λ < 0 the behaviour is not so easy to understand but, nevertheless, is similar to the simpler example. Put λ = −µ, with µ > 0, so the equation for θ becomes

    dθ/dx = cos²θ − µx sin²θ,   θ(0) = 0,   0 ≤ x ≤ 1.   (13.61)

For small x, where µxθ² < 1, this equation is approximated by θ′ = cos²θ ≃ 1, so θ(x) grows linearly with x, that is θ(x) ≃ x. The two terms on the right-hand side of equation 13.61 are comparable when 1 = µx³ and near this value of x, θ′(x) becomes negative; for large µ both θ and x are small, so the equation is approximately

    dθ/dx = 1 − µxθ².   (13.62)

For µx³ > 1 the approximate solution of this equation is the function that makes the derivative zero, that is µxθ² = 1. To see this put xθ² = µ⁻¹ + ε, so θ′ = −µε: if ε > 0 then θ′ < 0 and θ decreases; if ε < 0 then θ′ > 0 and θ increases. In either case the solution moves towards the line¹⁰ µxθ² = 1. A more accurate solution in the region µx³ > 1 is found in exercise 13.23.
In figure 13.9 we compare the numerically generated solution of equation 13.61 with
the linear approximation, for x < µ−1/3 and the approximation µxθ 2 = 1 for larger x,
for the cases µ = 10 and 100. This comparison confirms the predicted behaviour.
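The comparison in figure 13.9 can be reproduced with a few lines of Python (a sketch, not the code used for the figure; scipy is assumed): integrate equation 13.61 numerically and compare with the approximations θ ≃ x for x < µ^{−1/3} and µxθ² ≃ 1 beyond it.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Equation 13.61 with mu = 100: dtheta/dx = cos^2(theta) - mu*x*sin^2(theta).
mu = 100.0
sol = solve_ivp(lambda x, th: np.cos(th)**2 - mu * x * np.sin(th)**2,
                (0, 1), [0.0], dense_output=True, rtol=1e-9, atol=1e-12)

x_small = 0.5 * mu**(-1 / 3)            # well inside the linear region
assert abs(sol.sol(x_small)[0] - x_small) < 0.02      # theta ~ x

for x in (0.5, 0.75, 1.0):              # region mu*x^3 > 1
    theta = sol.sol(x)[0]
    assert abs(mu * x * theta**2 - 1.0) < 0.2         # theta near the curve
print("theta(1) =", float(sol.sol(1.0)[0]), "vs 1/sqrt(mu) =", 1 / np.sqrt(mu))
```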

Figure 13.9 Graphs of the numerical solution of equation 13.61 and the approximations θ = x and µxθ² = 1, shown by the dashed lines, for small and larger values of x, respectively. The boundary x = µ^{−1/3} is shown by the arrows.


These graphs and the approximations show that for λ ≪ −1, θ(1, λ) ≃ 1/√(−λ) and max_{0<x<1} θ ≃ (−λ)^{−1/3}; hence for λ ≪ −1 there can be no eigenvalues for the boundary conditions y(0) = y(1) = 0.
We now apply the same method to the general case θ′ = (q + λw) sin²θ + (1/p) cos²θ,
to show that its solutions behave similarly. If λ is sufficiently large that q + λw > 0
for a < x < b, then θ(x) is a monotonic increasing function of x. Further, it can be
shown, see exercise 13.24, that θ(b, λ) increases with λ and that lim_{λ→∞} θ(b, λ) = ∞.
Hence there are infinitely many positive eigenvalues with distinct eigenfunctions.
If −λ = µ ≫ 1, with the initial condition θ(a) = 0, we again see that for x near a,
θ(x) ≃ (x − a)/p(a). This growth continues until µw sin²θ is large enough, that is until
µw(x)(x − a)² ≃ p(x), and subsequently the equation is approximately

dθ/dx ≃ 1/p(x) − µw(x)θ².

The same reasoning as above shows that the approximate solution is µp(x)w(x)θ² = 1,
giving θ(b, λ) ≃ (−λp(b)w(b))^{−1/2}; that is, the variation of θ(x) is too small for eigenvalues
to exist with the boundary conditions y(a) = y(b) = 0: for other boundary conditions
one negative eigenvalue may exist.

Exercise 13.22
In this exercise bounds on the positions of zeros and eigenvalues are obtained for
the Sturm-Liouville system defined by equation 13.49 with the boundary conditions
y(a) = y(b) = 0. For this the following comparison theorem for the first-order
equation y′ = F(x, y) is needed.
10 This type of analysis is useful in the study of boundary layer problems, relaxation oscillations and certain types of limit cycles.



Suppose that F(x, y) and G(x, z) satisfy the Lipschitz condition

|F(x, y) − G(x, z)| ≤ L|y − z|,  a ≤ x ≤ b,

on suitable intervals of y and z, for some constant L. If y′ = F(x, y) and z′ = G(x, z),
with y(a) = z(a), then if F(x, y) ≤ G(x, y) for a ≤ x ≤ b and a suitable domain
of y, it can be shown that y(x) ≤ z(x) for a < x ≤ b.
Use this theorem with equation 13.53 for θ(x) to show that the kth zero, xₖ, lies
between the limits

√(p₁/(q₂ + λw₂)) ≤ (xₖ − a)/(kπ) ≤ √(p₂/(q₁ + λw₁)),
where p₁ ≤ p(x) ≤ p₂, q₁ ≤ q(x) ≤ q₂ and w₁ ≤ w(x) ≤ w₂. Deduce that λₙ, the
nth eigenvalue, satisfies

(p₁/w₂)(nπ/(b − a))² − q₂/w₂ ≤ λₙ ≤ (p₂/w₁)(nπ/(b − a))² − q₁/w₁.

Exercise 13.23
In equation 13.62 define a new variable φ = θ/ε, where ε = 1/√µ, and show that

εφ′(x) = 1 − xφ².

By writing the solution of this equation in the form

φ(x) = φ₀(x) + εφ₁(x) + ε²φ₂(x) + · · · ,

and equating the coefficients of the powers of ε to zero, show that φ₀, φ₁ and φ₂
satisfy the equations

1 − xφ₀² = 0,  φ₀′ = −2xφ₀φ₁,  φ₁′ = −x( φ₁² + 2φ₀φ₂ ),

and hence show that

θ(x) = 1/√(µx) + 1/(4µx²) + 7/(32µ^{3/2}x^{7/2}) + O(µ^{−5/2}).

Exercise 13.24
Use the comparison theorem for first-order equations quoted in exercise 13.22 to
show that if λ2 > λ1 then θ(b, λ2 ) ≥ θ(b, λ1 ).

Exercise 13.25
In this exercise an approximation to the eigenvalues and eigenfunctions for large n
is found. The Liouville transformation, exercise 13.2 (page 341), shows that the
equation

d/dx( p dy/dx ) + (q + λw)y = 0,  a ≤ x ≤ b,

can be transformed to the equation

d²v/dξ² + Q(ξ, λ)v = 0,  Q(ξ, λ) = q/w − (1/A) d²A/dξ² + λ,

where y = A(ξ)v(ξ),

ξ(x) = ∫ₐˣ dx √(w/p)  and  A(ξ) = (wp)^{−1/4}.

(a) Define the modified Prüfer transformation

v(ξ) = R Q^{−1/4} cos φ,  v′(ξ) = R Q^{1/4} sin φ,

where we assume Q(ξ) > 0 for all ξ, and show that

dφ/dξ = −√Q − (Q′/4Q) sin 2φ  and  d(ln R)/dξ = (Q′/4Q) cos 2φ.

(b) Assume that Q is bounded and that λ ≫ max(Q), and show that φ(ξ) ≃ α − ξ√λ
and R ≃ r, where α and r are constants, and deduce that with the boundary
conditions y(a) = y(b) = 0 the approximate eigenvalues and eigenfunctions are

λₙ = (nπ/ξ(b))²  and  vₙ(ξ) = (r/Q(ξ, λₙ)^{1/4}) sin(nπξ/ξ(b)).

13.5 Miscellaneous exercises


Exercise 13.26
For problems defined inside an elliptical region it is sometimes convenient to use
elliptical coordinates defined by

x = ρ cosh u cos v  and  y = ρ sinh u sin v,

where ρ is a positive constant, so that

x²/(ρ² cosh²u) + y²/(ρ² sinh²u) = 1,

and when v changes by 2π, with u fixed, this equation defines an ellipse.
Any elliptical boundary can be defined by a particular choice of ρ and u = u₀,
and the interior of the ellipse is given by 0 ≤ u ≤ u₀, −π ≤ v ≤ π.
In these coordinates the partial differential equation ∇²ψ + k²ψ = 0 becomes

(1/(2ρ²(cosh 2u − cos 2v))) ( ∂²ψ/∂u² + ∂²ψ/∂v² ) + k²ψ = 0.

By putting ψ = f(u)g(v) and using separation of variables form the two equations

d²g/dv² + (a − 2q cos 2v)g = 0,  q = (kρ)²,  g(v + 2π) = g(v) for all v,

d²f/du² − (a − 2q cosh 2u)f = 0.

The first of these equations is commonly known as Mathieu's equation and periodic
solutions exist only for certain values of a(q).

Exercise 13.27
Kepler’s equation
Show that Kepler's equation θ = u − ε sin u, with 0 ≤ ε < 1, can be inverted in
terms of Bessel functions with the formula

u = θ + 2 Σ_{k=1}^{∞} (1/k) Jₖ(kε) sin kθ.

Exercise 13.28
Show that the function defined by the integral 13.31 (page 354) satisfies the differential
equation 13.28, with ν = n.
Hint: by differentiating under the integral sign, show that Bessel's equation can
be written in the form

(1/2π) ∫_{−π}^{π} dt d/dt[ g(t) e^{i(nt − x sin t)} ]  with  g(t) = i( n/x² + (cos t)/x ),

which vanishes because the integrand is periodic in t.

Exercise 13.29
Find the eigenvalues and eigenfunctions of the Sturm-Liouville system y″ + λy = 0,
y(0) = 0, y(π) = βy′(π), for any real β.

Exercise 13.30
Find the self-adjoint form of the equation y″ + y′ tan x = 0.

Exercise 13.31
Use a comparison theorem to show that the solutions of y″ + (x/(1 + x))y = 0 have
infinitely many zeros for x > 1.

Exercise 13.32
Show that the eigenvalues of the Sturm-Liouville system y″ + λy = 0 with the
2π-periodic boundary conditions y(−π) = y(π) and y′(−π) = y′(π) are λₙ = n²,
n = 0, 1, 2, · · ·, and that for each eigenvalue, except λ₀, there are two distinct
eigenfunctions, which can be expressed as the real or the complex functions

yₙ(x) = {cos nx, sin nx}  or  yₙ(x) = e^{±inx},  n = 0, 1, 2, · · · .

Show also that any linear combination of the pair e^{±inx} is an eigenfunction
with eigenvalue λₙ = n².

Exercise 13.33
(a) Using the new independent variable defined by x = eᵗ, show that if B > 1/4
the solutions of the equation y″(x) + By/x² = 0 have infinitely many zeros on (1, ∞).
(b) Show that the solutions of the equation y″(x) + q(x)y/x² = 0 have infinitely
many zeros on (1, ∞) if q(x) > 1/4 for x ≥ 1.

Exercise 13.34
Consider the system

x d²y/dx² + dy/dx + (λ/x)y = 0,  x ≥ 0.

(a) Show that the self-adjoint form of this equation is

d/dx( x dy/dx ) + (λ/x)y = 0,  x ≥ 0,

and determine the intervals on which it is a regular system and on which it is a
singular system.
(b) Show that the normal form of the equation is

d²u/dx² + ((λ + ¼)/x²)u = 0,  u(x) = y(x)√x,

and determine the intervals on which it is a regular system and on which it is a
singular system.
(c) Find any eigenvalues and eigenfunctions for the boundary conditions y(0) = y(1) = c, for any c.
(d) Find the eigenvalues and eigenfunctions for the boundary conditions y(a) = y(b) = 0, 0 < a < b.

Exercise 13.35
Use the bounds determined in exercise 13.22 (page 370) to show that the nth
eigenvalue of the system y″ + (xᵃ + λ)y = 0, y(0) = y(1) = 0, with a > 0, is bounded
by (nπ)² − 1 ≤ λₙ ≤ (nπ)².
Chapter 14

The Rayleigh-Ritz Method

14.1 Introduction
The approach adopted in this course has been to use a variational principle to obtain a
functional from which the Euler-Lagrange equation is derived. The stationary paths of
the functional are obtained by solving this equation. This approach is not always the
most practical because the Euler-Lagrange equation is usually a nonlinear boundary
value problem, and these are notoriously difficult to solve even numerically. The diffi-
culties of this approach are compounded if there are two or more independent variables
when the Euler-Lagrange equation becomes a partial differential equation.
These difficulties have led to the development of direct methods which avoid the
need to solve differential equations by dealing directly with the functional. Starting
with a differential equation the approach is to find an associated functional and to use
this to find approximations to the stationary paths, which are necessarily solutions of
the original differential equation.
A further refinement applies to those functionals for which the stationary paths
are actual minima. The technique, described in section 14.4, shows how to construct
a sequence of stationary paths so that the functional approaches its minimum value
from above: this idea is particularly useful for Sturm-Liouville systems because the
eigenvalues are equal to the value of the functional to be minimised, provided suitable
admissible functions are used.

14.2 Basic ideas


The direct method is very simple and was introduced by Euler before the Euler-Lagrange
equation was discovered. Suppose we require a stationary path of a functional S[y],
with y belonging to a given class of admissible functions. Rather than solving the
Euler-Lagrange equation, we use a restricted set of admissible functions z(x; a), each
member of which is named a trial function, depending upon a set of real variables
a = (a1 , a2 , . . . , an ). Substituting this into the functional gives a function, S(a) = S[z],
of the n real variables a. The stationary points of S(a) can be determined using the
methods of ordinary calculus and this provides an approximation to the exact stationary
path. An example of this procedure was described in exercise 5.10 (page 156) and there


it was shown how a very simple trial function captured the qualitative features of the
exact solution for the minimum surface of revolution. Another example, described in
section 4.2.1, is Euler’s original method whereby smooth paths are approximated by
straight line segments, with the vertex values (y1 , y2 , . . . , yN ), see figure 4.1 (page 122),
playing the part of the parameters a: exercise 5.10 is an example of this method.
Generally there are no rules for choosing the trial function z(x; a), except that it
must be an admissible function, with the choice being guided by intuition and conve-
nience. The number of parameters, n, can be as small as one, or as large as one pleases;
but the larger n, the harder the algebra, though computers are particularly useful for
this type of problem. We illustrate this method with some simple problems.
Consider the functional
S[y] = ∫₀¹ dx ( y′² − y² + 2xy ),  y(0) = y(1) = 0.  (14.1)

The Euler-Lagrange equation is y″ + y = x and has the solution

y(x) = x − (sin x)/(sin 1).
A simple trial function satisfying the boundary conditions is the polynomial z(x; a) =
ax(1 − x), having just one free parameter, a. Substituting this into the functional we
obtain the integrals
∫₀¹ dx z′² = a² ∫₀¹ dx (1 − 2x)² = a²/3,   ∫₀¹ dx z² = a² ∫₀¹ dx x²(1 − x)² = a²/30,

2 ∫₀¹ dx xz = 2a ∫₀¹ dx x²(1 − x) = a/6,
so that S(a) = (3/10)a² + (1/6)a. This is stationary at a = −5/18, giving the approximation

z = −(5/18)x(1 − x)  to  y = x − (sin x)/(sin 1).  (14.2)
In the left-hand panel of figure 14.1 we compare the graphs of the exact and approximate
functions.

[Figure omitted: left panel, the exact solution y(x) and the approximation z(x); right panel, the difference 100(y − z).]
Figure 14.1 On the left we compare the exact solution of y″ + y = x with the variational
approximation, z(x), defined in equation 14.2. On the right we show the difference, 100(y − z),
between the exact solution and the variational approximation.

Further thought suggests that this trial function is a poor choice, because the actual
solution is an odd function of x. This can be deduced from the differential equation
because its right-hand side is odd, so we expect the solution to be odd, for if y(x) were
even, so also would be y″(x), and the left-hand side of the equation would be even. Thus a
more sensible trial function is

z(x; a) = ax(1 − x²),  (14.3)

which leads to a = −7/38. This estimate of the solution is very close to the exact
solution as seen in figure 14.2 where we show the graphs of 100(y − z): notice that
the differences are about 10 times smaller than those in figure 14.1, which shows that
a careful choice of trial function can lead to significantly improved results with little
extra effort.

[Figure omitted: 100(y − z) against x.]
Figure 14.2 Graph of the difference 100(y − z) between the exact solution and the trial
function defined in equation 14.3. Notice that the differences are about 10 times smaller
than those in figure 14.1.
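Both one-parameter calculations are easily reproduced. The sketch below (illustrative only; the quadrature routine, the sampling grid and the helper name `fit` are our own choices) evaluates the coefficients of the quadratic S(a) by Simpson's rule for each trial shape, locates the stationary point, and measures the maximum error against the exact solution.

```python
import math

def simpson(f, a, b, n=400):
    """Composite Simpson rule (n even)."""
    h = (b - a)/n
    s = f(a) + f(b) + sum((4 if k % 2 else 2)*f(a + k*h) for k in range(1, n))
    return s*h/3

y_exact = lambda x: x - math.sin(x)/math.sin(1)    # solution of y'' + y = x

def fit(phi, dphi):
    """Stationary a for z = a*phi, and the max error of z against y_exact."""
    # S(a) = A a**2 + B a with A = integral(phi'**2 - phi**2), B = 2*integral(x*phi)
    A = simpson(lambda x: dphi(x)**2 - phi(x)**2, 0, 1)
    B = simpson(lambda x: 2*x*phi(x), 0, 1)
    a = -B/(2*A)
    err = max(abs(y_exact(x) - a*phi(x)) for x in (i/200 for i in range(201)))
    return a, err

a1, e1 = fit(lambda x: x*(1 - x),    lambda x: 1 - 2*x)     # z = a x(1 - x)
a2, e2 = fit(lambda x: x*(1 - x**2), lambda x: 1 - 3*x**2)  # z = a x(1 - x**2)
print(a1, e1)   # a1 close to -5/18, maximum error below 0.01
print(a2, e2)   # a2 close to -7/38, error roughly ten times smaller
```

The ratio of the two errors confirms the factor of about 10 visible in figures 14.1 and 14.2.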

A more general odd trial function that satisfies the boundary conditions is

z(x; a) = x(1 − x²)( a₀ + a₁x² + a₂x⁴ + · · · + aₙx²ⁿ ),  (14.4)

and this has n + 1 parameters.


For the second, slightly more complicated, example we find an approximate solution
to the nonlinear boundary value problem

d²y/dx² + x²y² = x,  y(0) = y(2) = 0,  (14.5)
whose solution cannot be expressed in terms of elementary functions. The functional
for this equation is
S[y] = ∫₀² dx ( ½y′² − ⅓x²y³ + xy ),  y(0) = y(2) = 0.  (14.6)

Now we use the trial function

z(x; a) = ax(2 − x).

The three integrals needed are

½ ∫₀² dx z′² = (4/3)a²,  ⅓ ∫₀² dx x²z³ = (64/189)a³,  ∫₀² dx xz = (4/3)a,

so that

S(a) = −(64/189)a³ + (4/3)a² + (4/3)a  and  S′(a) = −(64/63)a² + (8/3)a + (4/3).
Now there are two stationary paths given by the roots of this quadratic, which we
denote by
a₋ = −(√777 − 21)/16  and  a₊ = (√777 + 21)/16,
which suggests that this nonlinear boundary value problem has two solutions. Nu-
merical calculations, guided by this approximation, confirm this and in figure 14.3 we
compare the above approximate solutions with those given by a numerical calculation.

[Figure omitted: left panel, the a < 0 solution; right panel, the a > 0 solution.]
Figure 14.3 A graphical comparison of the numerical solutions of equation 14.5, the solid lines,
and the variational approximation, the dashed lines. On the left is the comparison for a < 0 and
on the right for a > 0.
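The cubic S(a) and its two stationary points can be verified directly. The following sketch (illustrative; the quadrature resolution is an arbitrary choice) computes the three coefficients by Simpson's rule and solves S′(a) = 0, recovering a₋ and a₊.

```python
import math

def simpson(f, a, b, n=400):
    """Composite Simpson rule (n even)."""
    h = (b - a)/n
    s = f(a) + f(b) + sum((4 if k % 2 else 2)*f(a + k*h) for k in range(1, n))
    return s*h/3

phi  = lambda x: x*(2 - x)      # trial shape, z = a*phi(x)
dphi = lambda x: 2 - 2*x
# S(a) = c3*a**3 + c2*a**2 + c1*a, read off from the functional 14.6
c3 = -simpson(lambda x: x**2*phi(x)**3/3, 0, 2)   # close to -64/189
c2 =  simpson(lambda x: dphi(x)**2/2, 0, 2)       # close to 4/3
c1 =  simpson(lambda x: x*phi(x), 0, 2)           # close to 4/3
# stationary points solve S'(a) = 3*c3*a**2 + 2*c2*a + c1 = 0
d = math.sqrt((2*c2)**2 - 12*c3*c1)
a_minus, a_plus = (-2*c2 + d)/(6*c3), (-2*c2 - d)/(6*c3)   # note 6*c3 < 0
print(a_minus, a_plus)   # the two roots (21 -/+ sqrt(777))/16
```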

By substituting a power series into the differential equation, it can be seen that a better
trial function is z = ax(4 − x²), because the coefficient of the x² term is zero; but for
this trial function the integrals are slightly more complicated. We have

½ ∫₀² dx z′² = (64/5)a²,  ⅓ ∫₀² dx x²z³ = (512/45)a³,  ∫₀² dx xz = (64/15)a,

so that

S(a) = −(512/45)a³ + (64/5)a² + (64/15)a  and  S′(a) = −(512/15)a² + (128/5)a + (64/15),

and the two stationary paths are given by setting a to the values

a₋ = (3 − √17)/8  and  a₊ = (3 + √17)/8.

In figure 14.4 these approximations are compared with numerically generated solutions
of equation 14.5. For a = a₋ the trial solution, shown by the circles, is very close to
the exact solution. In both cases the approximations are better than those given by the
previous trial function, which again illustrates the value of choosing suitable trial functions.

[Figure omitted: left panel, a = a₋ < 0; right panel, a = a₊ > 0.]
Figure 14.4 Graphs comparing the exact numerically generated solution of equation 14.5 and
the approximations obtained using the trial function z = ax(4 − x²), denoted by the circles in the
left panel.

It is worth noting that some “black-box” numerical methods for solving boundary value
problems give only the solution with a > 0, and provide no inkling that another so-
lution exists. Thus, simple variational calculations, such as described here, can avoid
embarrassing errors; but they give no guarantee that only two, or indeed any, solutions
to this problem exist.
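One inexpensive safeguard is to scan the initial slope before trusting a single numerical solution. The sketch below (illustrative only; the scan range, step sizes and blow-up clamp are ad hoc choices of ours) shoots equation 14.5 from x = 0 with y′(0) = s and records the sign changes of y(2; s); each sign change brackets a solution of the boundary value problem, and two brackets appear, in agreement with the variational prediction.

```python
import math

def shoot(s, n=1000):
    """RK4 for y'' = x - x**2 * y**2, y(0) = 0, y'(0) = s; return y(2), clamped."""
    f = lambda x, y, v: (v, x - x*x*y*y)
    x, y, v, h = 0.0, 0.0, s, 2.0/n
    for _ in range(n):
        if abs(y) > 1e6:                    # solution is blowing up; stop early
            return math.copysign(1e6, y)
        k1 = f(x, y, v)
        k2 = f(x + h/2, y + h*k1[0]/2, v + h*k1[1]/2)
        k3 = f(x + h/2, y + h*k2[0]/2, v + h*k2[1]/2)
        k4 = f(x + h, y + h*k3[0], v + h*k3[1])
        y += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        v += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        x += h
    return y

slopes = [-3 + 0.2*i for i in range(66)]    # scan y'(0) from -3 to 10
ends = [shoot(s) for s in slopes]
brackets = [(slopes[i], slopes[i + 1]) for i in range(len(ends) - 1)
            if ends[i]*ends[i + 1] < 0]
print(brackets)   # one bracket at negative slope, one at positive slope
```

A bisection on each bracket then yields both solutions, whereas a solver started from a single guess finds only one of them.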

Exercise 14.1
Using the trial function y = 1 − ax − (1 − a)x², obtain an approximate solution of
the equation y″ + xy = 0, y(0) = 1, y(1) = 0.

Exercise 14.2
(a) Show that the functional associated with the equation y″ + y³ = 0, y(0) = 0,
y′(X) = 0, is

S[y] = ∫₀^X dx ( ½y′² − ¼y⁴ ),  y(0) = 0,

with a natural boundary condition at x = X.
(b) Use the trial function y = a sin(πx/(2X)) to find an approximate solution.
You will need the integral ∫₀^{π/2} du sin⁴u = 3π/16.

14.3 Eigenvalues and eigenfunctions


In this section we show how the first n eigenvalues and eigenfunctions of a Sturm-
Liouville system can be approximated by solving a set of n linear equations in n vari-
ables, using a method originally due to J W Strutt, third Baron Rayleigh (1842 – 1919).
The method can start either from the Euler-Lagrange equation or the associated functional
and, though it is normally slightly easier to use the Euler-Lagrange equation (see
exercise 14.7), we use the functional because this analysis is needed in the next section
to provide upper bounds to the eigenvalues.
We illustrate the method with the functional
S[y] = ∫₀¹ dx ( p y′² − q y² ),  y(0) = y(1) = 0,  (14.7)

subject to the constraint


C[y] = ∫₀¹ dx y² = 1.  (14.8)

If λ is the Lagrange multiplier this leads to the Sturm-Liouville system


 
d/dx( p dy/dx ) + (q + λ)y = 0,  y(0) = y(1) = 0,  (14.9)

which we assume to have an infinite sequence of real eigenvalues λ1 < λ2 < λ3 · · · , and
associated eigenfunctions y1 (x), y2 (x), · · · .
First we need the following relation between the nth eigenfunction, yn (x), and its
eigenvalue
λₙ = S[yₙ] = ∫₀¹ dx ( p yₙ′² − q yₙ² ).  (14.10)

This formula is useful because we shall use it, with approximations for yn (x), to both
approximate and bound λn .

Exercise 14.3
By multiplying equation 14.9 by yn and integrating over (0, 1) prove equation 14.10.

Exercise 14.4
If yₙ(x) is an exact eigenfunction with eigenvalue λₙ and zₙ = yₙ + εu(x), with
|ε| ≪ 1 and u = O(1), is an admissible function, show that

λₙ = S[zₙ] + O(ε²).

The result derived in exercise 14.4 is important. It shows that if an eigenfunction is


known approximately, with an accuracy O(ε), then it can be used to approximate the
eigenvalue to an accuracy O(ε²).
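The quadratic error law is easy to see in the simplest case p = 1, q = 0 on (0, 1), for which y₁ ∝ sin πx and λ₁ = π². The sketch below (illustrative; the perturbation u = sin 2πx and the values of ε are arbitrary choices of ours) evaluates the unnormalised quotient S[z]/C[z] for z = sin πx + ε sin 2πx and shows the error falling by a factor of about four each time ε is halved.

```python
import math

def simpson(f, a, b, n=400):
    """Composite Simpson rule (n even)."""
    h = (b - a)/n
    s = f(a) + f(b) + sum((4 if k % 2 else 2)*f(a + k*h) for k in range(1, n))
    return s*h/3

def quotient(eps):
    """S[z]/C[z] for z = sin(pi x) + eps*sin(2 pi x), with p = 1 and q = 0."""
    z  = lambda x: math.sin(math.pi*x) + eps*math.sin(2*math.pi*x)
    dz = lambda x: math.pi*(math.cos(math.pi*x) + 2*eps*math.cos(2*math.pi*x))
    return simpson(lambda x: dz(x)**2, 0, 1)/simpson(lambda x: z(x)**2, 0, 1)

lam1 = math.pi**2
errs = [quotient(eps) - lam1 for eps in (0.1, 0.05, 0.025)]
print(errs)   # positive (an upper bound) and O(eps**2): here 3*pi**2*eps**2/(1 + eps**2)
```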
For the linear system 14.9 we construct trial functions using a subset of a complete
set of functions {φ} = {φ1 , φ2 , · · · }, each of which satisfies the boundary conditions.
Normally these functions are eigenfunctions of another Sturm-Liouville system and it
is clear that when choosing this system it is sensible to use a system that is similar to
that being studied.
Here we use the complete, orthogonal sequence φk (x), k = 1, 2, · · · , satisfying
∫₀¹ dx φᵢ(x)φⱼ(x) = hᵢδᵢⱼ,  (14.11)

with each φi (x) satisfying the same boundary conditions as the original Sturm-Liouville
system, in this case φi (0) = φi (1) = 0. At the end of this analysis we shall use a specific
set of functions by setting φk = sin kπx. A trial function is obtained using a linear
combination of the first n of these functions
z(x; a) = Σ_{k=1}^{n} aₖφₖ(x),

and this will provide an approximation to the first n of the required eigenvalues and
eigenfunctions. The trial function needs to satisfy the constraint C[z] = 1, and this
defines the function
C(a) = ∫₀¹ dx ( Σ_{k=1}^{n} aₖφₖ(x) )² = Σ_{k=1}^{n} hₖaₖ² = 1,  (14.12)

where we have used the orthogonal property, equation 14.11. In the space of real
variables a = (a1 , a2 , . . . , an ) this quadratic function of a, equation 14.12, defines an
n-dimensional ellipsoid. It is convenient to write this constraint in terms of the vector a,

C(a) = aᵀHa = 1,

where H is the n × n, diagonal matrix with Hkk = hk > 0. The functional S[z] defines
another function of a,
 !2 !2 
Z 1 Xn Xn
0
S(a) = dx p(x) ak φk (x) − q(x) ak φk (x)  , (14.13)
0 k=1 k=1

which is also a quadratic form and can be written as


S(a) = Σ_{i=1}^{n} Σ_{j=1}^{n} aᵢSᵢⱼaⱼ = aᵀSa,  (14.14)

where S is a real, symmetric n × n matrix, with elements Sij . Specifically, these matrix
elements are given by
Sᵢⱼ = ∫₀¹ dx [ p(x)φᵢ′(x)φⱼ′(x) − q(x)φᵢ(x)φⱼ(x) ].  (14.15)

An approximation to the first n eigenvalues and eigenfunctions of the Euler-Lagrange


equation 14.9 is given by the stationary values of S(a), subject to the constraint 14.12;
this is a conventional constrained stationary problem, dealt with in section 11.2. If µ
is the Lagrange multiplier for this problem the auxiliary function is

S̄(a) = S(a) − µC(a) = Σ_{i=1}^{n} Σ_{j=1}^{n} aᵢSᵢⱼaⱼ − µ Σ_{i=1}^{n} hᵢaᵢ² = aᵀSa − µaᵀHa.  (14.16)

The stationary values of S̄(a) are given by the solutions of ∂S̄(a)/∂aᵢ = 0, i = 1, 2, · · · , n,
that is, the solutions of the matrix eigenvalue equation

Sa = µHa  or  H⁻¹Sa = µa.  (14.17)

That is, the stationary points are given by the eigenvectors of H⁻¹S. Further, since
H⁻¹S is similar to the real, symmetric matrix H^{−1/2}SH^{−1/2}, its n eigenvalues are real
and can be ordered, µ₁ < µ₂ < · · · < µₙ, and the kth eigenvalue provides an approximation
to the kth eigenvalue of the original Euler-Lagrange equation, as shown next.

If aₖ is the kth eigenvector of H⁻¹S with eigenvalue µₖ, then, assuming that the
associated trial function, z(x; aₖ), is an approximation to the kth eigenfunction of the
Sturm-Liouville system, we have, from the result found in exercise 14.4,

λₖ ≃ S[z] = S(aₖ) = aₖᵀSaₖ = µₖaₖᵀHaₖ = µₖ.

In the next section we shall show that λ1 ≤ µ1 , so that µ1 gives an upper bound to the
lowest eigenvalue, λ1 .

An example using the orthogonal set φk (x) = sin kπx


For the interval (0, 1) and boundary conditions y(0) = y(1) = 0, a convenient orthogonal
set is φk (x) = sin kπx. For this set hk = 1/2 for all k, and the matrix elements of H −1 S
become
(H⁻¹S)ᵢⱼ = 2π²ij ∫₀¹ dx p(x) cos iπx cos jπx − 2 ∫₀¹ dx q(x) sin iπx sin jπx.

Now we apply this approximation to the particular eigenvalue problem

d²y/dx² + (x + λ)y = 0,  y(0) = y(1) = 0,  (14.18)
where p(x) = 1, q(x) = x and w(x) = 1. The associated functional and constraint are
S[y] = ∫₀¹ dx ( y′² − xy² ),  C[y] = ∫₀¹ dx y² = 1,  y(0) = y(1) = 0.  (14.19)

We use the complete set φk = sin πkx, k = 1, 2, · · · , to construct the trial functions,
and the simplest of these is obtained by using only the first function,

z(x; a1 ) = a1 sin πx.


The constraint gives a₁² ∫₀¹ dx sin²πx = 1, that is a₁² = 2. Thus the first approximation
for λ₁ is given by

λ₁ ≃ S(a₁) = a₁² ∫₀¹ dx ( π² cos²πx − x sin²πx ) = π² − ½ ≃ 9.3696.  (14.20)
The exact eigenvalues are given by the real solutions of

Ai(−λ)Bi(−1 − λ) − Ai(−1 − λ)Bi(−λ) = 0,

where Ai(z) and Bi(z) are the Airy functions, which are solutions of Airy's equation,
y″ − xy = 0: to 7 significant figures the first eigenvalue is 9.368507, so the approximation
is larger than this by about 0.01%.
Better approximations to λ1 are obtained by increasing n, but quickly the algebra
becomes cumbersome, time consuming and error prone. However, because the equa-
tions for a are linear the standard methods of linear algebra may be used and so the
calculations become relatively trivial if a computer is available.

Here we limit the calculation to n = 2, which illustrates all the relevant details: for
the sake of brevity the minor details of this calculation are omitted. The trial function
is now
z(x; a₁, a₂) = a₁ sin πx + a₂ sin 2πx

and the constraint, equation 14.12, gives a₁² + a₂² = 2. The functional is given by

S₂(a₁, a₂) = (π²/2 − ¼)a₁² + (2π² − ¼)a₂² + (16/(9π²))a₁a₂.

Hence the matrix eigenvalue equation 14.17 is

( π² − ½        16/(9π²) )
( 16/(9π²)     4π² − ½   ) a = µa.  (14.21)
Since 16/(9π²) ≪ 1, the eigenvalues of the matrix on the left are approximately π² −
1/2 and (2π)² − 1/2, giving λ₁ ≃ 9.3696 (the previous value) and λ₂ ≃ 38.9784; the
eigenvalues of this matrix are actually (9.368509, 38.9795), which compare favourably
with the exact eigenvalues, (9.368507, 38.9787), of equation 14.18.
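The eigenvalues of the 2 × 2 matrix in equation 14.21 follow in closed form from its trace and determinant, so the numbers quoted above are quickly checked; the following sketch is an illustration only.

```python
import math

pi2 = math.pi**2
# the matrix of equation 14.21 (p = 1, q = x, phi_k = sin k*pi*x, n = 2)
A, D = pi2 - 0.5, 4*pi2 - 0.5     # diagonal entries pi^2 - 1/2, 4*pi^2 - 1/2
B = 16/(9*pi2)                    # off-diagonal coupling
tr, det = A + D, A*D - B*B
disc = math.sqrt(tr*tr - 4*det)
mu1, mu2 = (tr - disc)/2, (tr + disc)/2
print(round(mu1, 6), round(mu2, 4))   # 9.368509 38.9795
```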

Exercise 14.5
Consider the eigenvalue problem

y″ + (x² + λ)y = 0,  y(0) = y(1) = 0.

(a) Using the orthogonal set φₖ(x) = sin kπx, k = 1, 2, · · ·, show that an upper
bound to the smallest eigenvalue is

λ₁ ≤ 2 ∫₀¹ dx ( π² cos²πx − x² sin²πx ) = π² − 1/3 + 1/(2π²) ≃ 9.59.

(b) Show that the trial function z = ax(1 − x) gives the bound λ₁ ≤ 68/7 ≃ 9.71.
Which of these two estimates is closer to the exact value?

Exercise 14.6
Using the complete set of functions φₖ(x) = sin((k − ½)πx), k = 1, 2, · · ·, which
satisfy the boundary conditions φₖ(0) = φₖ′(1) = 0, find approximations to the
first eigenvalue of the system

d²y/dx² + (x + λ)y = 0,  y(0) = y′(1) = 0,
using trial functions with one and with two parameters.
The following integral will be useful:

∫₀¹ dx x sin(nπx/2) sin(mπx/2) = 1/4 + 1/π²  (n = m = 1),
                                 1/4 + 1/(9π²)  (n = m = 3),
                                 −1/π²  (n = 1, m = 3).

Exercise 14.7
(a) Determine an approximation to the eigenvalues and eigenfunctions of the equation

d²y/dx² + (x + λ)y = 0,  y(0) = y(1) = 0,

by substituting the series

y(x) = Σ_{k=1}^{n} aₖ sin kπx

into the equation to form the matrix equation Ma = λa, where a = (a₁, a₂, . . . , aₙ)
and M is an n × n, real symmetric matrix, with elements

Mᵢⱼ = π²i²δᵢⱼ − 2 ∫₀¹ dx x sin iπx sin jπx.

(b) Show that for n = 1 and 2 this method gives the approximations 14.20
and 14.21 respectively.
(c) Show that for arbitrary n this method gives the equation 14.17 (page 381)
for a if φk = sin kπx, p(x) = 1 and q(x) = x.

14.4 The Rayleigh-Ritz method


In this section we consider the eigenvalues of Sturm-Liouville systems which are spe-
cial because the associated functionals have strict minima. This property allows the
eigenvalues to be bounded above arbitrarily accurately by a well defined and relatively
(especially with a computer) simple procedure. The method is also applicable to many
important partial differential equations which is one reason why it is important.
Suppose that the functional S[y] has a minimum value: this means that in the
class of admissible functions, M, S[y] has a greatest lower bound, s, and this bound is
achieved by a function in M. There are several technical issues behind this assertion
that we ignore.
The aim is to construct a sequence of functions {y1 , y2 , · · · }, each in M, such
that S[yk ] ≥ S[yk+1 ], so that sk = S[yk ] is a decreasing infinite sequence such that
lim_{k→∞} sₖ = s.
We start with an infinite set of functions {φ} = {φ1 , φ2 , · · · }, each in M, and with
a natural ordering. For instance if the admissible functions are defined on [0, 1] and are
zero at the end points, x = 0 and 1, two typical sequences are
sin kπx,  k = 1, 2, 3, · · · ,
x(1 − x),  x(1 − x)(1 + x),  x(1 − x)(1 + x + x²),  x(1 − x)(1 + x + x² + x³),  · · · .

From the sequence {φ} a finite dimensional subspace is formed from the first n members
{φ1 , φ2 , · · · , φn }; that is the set of all the linear combinations

z(x; a) = a1 φ1 (x) + a2 φ2 (x) + · · · + an φn (x), (14.22)

where the ak , k = 1, 2, · · · , n are any real numbers. On this subspace S[z] becomes a
function of the real numbers a = (a1 , a2 , . . . , an ),

S(a) = S[z].

This is exactly as in the previous section; but now we use the fact that the functional
has a minimum.
Choose (a1 , a2 , . . . , an ) to minimise S(a) and denote this minimum value by sn and
the associated element of Mn by yn ,
 
sₙ = min S(a₁, a₂, . . . , aₙ).

Clearly sₙ cannot increase with n, because Mₙ₊₁ contains Mₙ; that is, any linear
combination of {φ₁, φ₂, · · · , φₙ} is a linear combination of {φ₁, φ₂, · · · , φₙ, φₙ₊₁}. If
the sequence {φ} is complete, then it can be shown that the sequence sn converges to s,
the minimum value of S[y].
This method of successively approximating a functional using sequences of functions
is essentially that used by Lord Rayleigh, but in 1909 it was put on a rigorous basis by
W Ritz, and is now named the Rayleigh-Ritz method, or sometimes the Ritz method.
For Sturm-Liouville systems the significance of this result is that the eigenvalue is
just the value of the functional that has a minimum, equation 14.10 (page 380).

The smallest eigenvalue of a Sturm-Liouville system


The simplest use of this technique is to estimate the lowest eigenvalue and eigenfunction
of a Sturm-Liouville system.
Consider the functional S[y] and constraint C[y],
S[y] = ∫₀¹ dx ( p y′² − q y² ),  y(0) = y(1) = 0,  C[y] = ∫₀¹ dx y² = 1.

A suitable subspace Mn is {sin πx, sin 2πx, · · · , sin nπx}, giving the linear combination
z(x; a) = Σ_{k=1}^{n} aₖ sin kπx.

Then the functional becomes a function S(a)


 !2 !2 
Z 1 n
X n
X
S(a) = dx π 2 p(x) kak cos kπx − q(x) ak sin kπx  ,
0 k=1 k=1

which has a minimum, because S(a) is continuous, therefore bounded above and below,
and the constraint limits each ak to a finite region, so there is some value of a that
yields the minimum value. Substituting this value for a into S(a) gives an upper bound
λ₁⁽ⁿ⁾ for λ₁,

λ₁ ≤ λ₁⁽ⁿ⁾ = S(a).  (14.23)
For each m = 1, 2, · · · we similarly obtain an upper bound, λ₁⁽ᵐ⁾, for the lowest eigenvalue,
and by the same reasoning as used above we see that

λ₁⁽¹⁾ ≥ λ₁⁽²⁾ ≥ · · · ≥ λ₁⁽ᵐ⁾ ≥ · · ·  and  lim_{m→∞} λ₁⁽ᵐ⁾ = λ₁.  (14.24)

Thus the method used in the previous section provides successively closer upper bounds
to the lowest eigenvalue.

A numerical example of this behaviour was seen in the calculation of the smallest
eigenvalue of equation 14.18 (page 382) where we used the trial function
z(x; a) = Σ_{k=1}^{n} aₖ sin kπx.

The exact value of this eigenvalue is, to 10 significant figures, 9.368 507 162: for n = 1, 2
and 3 the variational estimates for λ₁ are 9.3696, 9.368 509 and 9.368 508 6. With the
trial function

z(x; a) = x(1 − x)( a₀ + a₁x + a₂x² + · · · + aₙ₋₁xⁿ⁻¹ )

the estimates of this eigenvalue with n = 1, 2, 3 and 4 are 9.5, 9.4989, 9.3687 and
9.368 513. As predicted the estimates approach the exact value from above.
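For the polynomial basis the n = 1 entry of this list can even be obtained exactly with rational arithmetic. The sketch below (the tiny polynomial helpers are our own choices) evaluates the quotient S[z]/C[z] for z = x(1 − x) applied to equation 14.18 and returns 19/2 = 9.5 exactly.

```python
from fractions import Fraction as F

def pmul(p, q):
    """Multiply polynomials stored as coefficient lists (p[k] is the x**k term)."""
    r = [F(0)]*(len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a*b
    return r

def int01(p):
    """Exact value of the integral of p(x) over (0, 1)."""
    return sum(c/(k + 1) for k, c in enumerate(p))

z, dz = [F(0), F(1), F(-1)], [F(1), F(-2)]   # z = x(1 - x) and z' = 1 - 2x
S = int01(pmul(dz, dz)) - int01([F(0)] + pmul(z, z))   # integral of z'**2 - x*z**2 = 19/60
C = int01(pmul(z, z))                                  # integral of z**2 = 1/30
print(S/C)   # 19/2
```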
The Rayleigh-Ritz method can be applied to any functional with a minimum value.
In particular it applies to the general Sturm-Liouville system
S[y] = −αp(a)y(a)² + βp(b)y(b)² + ∫ₐᵇ dx ( p y′² − q y² ),  (14.25)

with natural boundary conditions, and with the constraint


C[y] = ∫ₐᵇ dx w y² = 1,  (14.26)

which leads to the Euler-Lagrange equation


 
d/dx( p dy/dx ) + (q + λw)y = 0,  (14.27)

with the separated boundary conditions

αy(a) + y′(a) = 0  and  βy(b) + y′(b) = 0,

see exercise 14.8. Provided the integrals exist, the Rayleigh-Ritz method applies to
singular and regular systems. For the boundary conditions y(a) = 0 and/or y(b) = 0
the appropriate boundary term of the functional 14.25 is removed.
For this system a sequence can be found such that the smallest eigenvalue satisfies
the conditions of equation 14.24. Further, the rigorous application of this method proves
the existence of an infinite sequence of eigenvalues and eigenfunctions for both regular
and singular systems, see for instance Fomin and Gelfand (1992, chapter 8) or Courant
and Hilbert (1965, chapter 6).
By adding an additional constraint that forces the admissible functions to be or-
thogonal to y1 (x), the eigenfunction associated with the smallest eigenvalue, we obtain
bounds for the next eigenvalue. Thus by considering the system defined by equa-
tions 14.25 and 14.26 with the additional constraint
C₁[y, y₁] = ∫ₐᵇ dx w y y₁ = 0,  (14.28)

and using trial functions z satisfying the two constraints C[z] = 1 and C1 [z, y1 ] = 0 we
obtain another convergent sequence
λ₂⁽¹⁾ ≥ λ₂⁽²⁾ ≥ · · · ≥ λ₂⁽ᵐ⁾ ≥ · · ·  and  lim_{m→∞} λ₂⁽ᵐ⁾ = λ₂.  (14.29)

By adding further constraints this process can be continued to obtain upper bounds for
any eigenvalue.

Exercise 14.8
(a) Show that the constrained functional with natural boundary conditions

S[y] = −αp(a)y(a)² + βp(b)y(b)² + ∫ₐᵇ dx ( p y′² − q y² )

and the constraint

C[y] = ∫ₐᵇ dx w y² = 1

gives rise to the Euler-Lagrange equations

d/dx( p dy/dx ) + (q + λw)y = 0,  αy(a) + y′(a) = 0,  βy(b) + y′(b) = 0,

where λ is the Lagrange multiplier.


(b) Show that if λₖ is the eigenvalue associated with the eigenfunction yₖ, then λₖ = S[yₖ].

Exercise 14.9
(a) Show that the eigenvalue problem

d²y/dx² + λ√x y = 0,    y(0) = y(1) = 0,

with eigenvalue λ, can be written as the constrained variational problem with functional

S[y] = ∫_0^1 dx y′²    with the constraint    C[y] = ∫_0^1 dx √x y² = 1,

with admissible functions satisfying y(0) = y(1) = 0.


(b) Using the trial function z = ax(1 − x) show that the smallest eigenvalue satisfies λ₁ ≤ 231/16.
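The bound in part (b) can be checked with exact arithmetic. The sketch below (an illustrative check added here, not a replacement for working the exercise) reduces S[z] and C[z] for z = x(1 − x) to rational numbers, using the fact that each monomial x^(k + 1/2) integrates over [0, 1] to 2/(2k + 3):

```python
from fractions import Fraction

# Trial function z = x(1 - x) for y'' + lambda*sqrt(x)*y = 0, y(0) = y(1) = 0.
# S[z] = int_0^1 (1 - 2x)^2 dx = 1/3.
S = Fraction(1, 3)
# C[z] = int_0^1 sqrt(x) x^2 (1 - x)^2 dx = int_0^1 (x^(5/2) - 2x^(7/2) + x^(9/2)) dx,
# with each monomial integrating to 2/(2k + 3) for k = 2, 3, 4.
C = Fraction(2, 7) - 2 * Fraction(2, 9) + Fraction(2, 11)
print(S / C)   # 231/16, so lambda_1 <= 231/16 = 14.4375
```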
14.5 Miscellaneous exercises


Exercise 14.10
Using the trial function z = a(1 − x²) show that an upper bound to the smallest eigenvalue of the system

y′′ + ( x^(2p) + λ ) y = 0,    y(−1) = y(1) = 0,

where p is a positive integer, is given by

λ₁ ≤ (5/2) ( 1 − 6/((2p + 1)(2p + 3)(2p + 5)) ).
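The stated bound can be reproduced in exact arithmetic for any particular p. The sketch below (an added illustrative check, with p = 1 chosen arbitrarily) forms the Rayleigh quotient S[z]/C[z] for z = 1 − x² and compares it with the closed-form expression:

```python
from fractions import Fraction

p = 1  # illustrative value; any positive integer works
# Trial function z = 1 - x^2 on [-1, 1] for y'' + (x^(2p) + lambda) y = 0:
# S[z] = int (z'^2 - x^(2p) z^2) dx and C[z] = int z^2 dx, both over [-1, 1].
S = Fraction(8, 3) - 2 * (Fraction(1, 2 * p + 1)
                          - Fraction(2, 2 * p + 3)
                          + Fraction(1, 2 * p + 5))
C = Fraction(16, 15)
bound = Fraction(5, 2) * (1 - Fraction(6, (2 * p + 1) * (2 * p + 3) * (2 * p + 5)))
print(S / C == bound)   # True: the quotient reproduces the stated bound
```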

Exercise 14.11
(a) Find the eigenvalues and eigenfunctions of the problem

d²y/dx² + λy = 0,    y′(0) = y(1) = 0.

(b) Use the first eigenfunction of this problem to show that an approximation to the first eigenvalue of

d²y/dx² + ( b sin(πx/2) + λ ) y = 0,    y′(0) = y(1) = 0    is    λ₁ ≃ π²/4 − 4b/(3π).

(c) Show that an approximation to the nth eigenvalue is

λₙ ≃ π²n̄² − (2b/π) ( 1 − 1/(16n̄² − 1) ),    where n̄ = n − 1/2.

Hint: use the nth eigenfunction of the system defined in part (a) to construct a one-parameter trial function.
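The approximation in part (b) can be checked numerically. The sketch below (an added illustration, with b = 0.3 chosen arbitrarily) evaluates the Rayleigh quotient using the first eigenfunction y = cos(πx/2) of part (a), integrating with a simple composite Simpson rule, and compares the result with π²/4 − 4b/(3π):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

b = 0.3  # illustrative value of the parameter
y = lambda x: math.cos(math.pi * x / 2)                # first eigenfunction of part (a)
dy = lambda x: -math.pi / 2 * math.sin(math.pi * x / 2)

# Rayleigh quotient S[y]/C[y] with q(x) = b*sin(pi*x/2) and w = 1:
S = simpson(lambda x: dy(x) ** 2 - b * math.sin(math.pi * x / 2) * y(x) ** 2, 0.0, 1.0)
C = simpson(lambda x: y(x) ** 2, 0.0, 1.0)
estimate = S / C
formula = math.pi ** 2 / 4 - 4 * b / (3 * math.pi)
print(abs(estimate - formula) < 1e-8)   # True: quadrature matches the formula
```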

Exercise 14.12
(a) Show that the equation
d/dx ( (1/x) dy/dx ) + λxy = 0,    y(1) = 0,    y(2) − y′(2) = 0,

is a regular Sturm-Liouville system, associated with the functional and constraint


S[y] = −(1/2) y(2)² + ∫_1^2 dx y′²/x,    C[y] = ∫_1^2 dx x y² = 1,

with admissible functions satisfying the conditions y(1) = 0 and y(2) = y 0 (2).
(b) Using the trial function z = (x − 1)(Ax + B), show that the smallest eigenvalue, λ₁, satisfies the inequality

λ₁ ≤ −6/7 + (12/7) ln 2.
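This bound can also be checked directly. The sketch below (an added illustration) uses the member A = 0, B = 1 of the trial family, i.e. z = x − 1, which is admissible since z(1) = 0 and z(2) = z′(2) = 1, and evaluates S[z]/C[z]:

```python
import math

# Trial function z = x - 1 (the member A = 0, B = 1 of the family in part (b));
# z(1) = 0 and z(2) = z'(2) = 1, so z is admissible.
# S[z] = -z(2)^2 / 2 + int_1^2 z'^2 / x dx = -1/2 + ln 2
S = -0.5 + math.log(2.0)
# C[z] = int_1^2 x (x - 1)^2 dx = 7/12
C = 7.0 / 12.0
estimate = S / C
stated = -6.0 / 7.0 + (12.0 / 7.0) * math.log(2.0)
print(abs(estimate - stated) < 1e-12)   # True: the quotient equals the stated bound
```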
References
Books and articles referred to in the text
Akhiezer N I 1962 The Calculus of Variations, (Blaisdell Publishing Company, trans-
lated from Russian by A H Frink)
Apostol T M 1963 Mathematical Analysis: A Modern Approach to Advanced Calculus,
(Addison-Wesley)
Arnold V I 1973 Ordinary Differential Equations, (The MIT press)
Ashby N, Brittin W E, Love W F and Wyss W 1975 Brachistochrone with Coulomb Friction, Amer J Physics 43 902-5.
Aughton P 2001 Newton’s Apple, (Weidenfeld and Nicolson)
Bernstein S N 1912 Sur les équations du calcul des variations, Ann. Sci. École Norm. Sup. 29 431-485
Birkhoff G and Rota G-C 1962 Ordinary differential equations (Blaisdell Publishing
Co.)
Brunt, van B 2004 The Calculus of Variations, (Springer)
Courant R and Hilbert D 1937a Methods of Mathematical Physics, Vol 1 (Interscience
Publishers Inc)
Courant R and Hilbert D 1937b Methods of Mathematical Physics, Vol 2 (Interscience
Publishers Inc)
Davenport J H, Siret Y and Tournier E 1989 Computer Algebra: Systems and Algorithms for Algebraic Computation, (Academic Press)
Gelfand I M and Fomin S V 1963 Calculus of Variations, (Prentice Hall, translated
from the Russian by R A Silverman), reprinted 2000 (Dover)
Goldstine H H 1980 A History of the Calculus of Variations from the 17th through the 19th Century, (Springer, New York)
Ince E L 1956 Ordinary differential equations (Dover)
Isenberg C 1992 The Science of Soap Films and Soap Bubbles, (Dover)
Jeffrey A 1990 Linear Algebra and Ordinary Differential Equations (Blackwell Scientific
Publications)
Kolmogorov A N and Fomin S V 1975 Introductory Real Analysis, (Dover)
Landau L D and Lifshitz E M 1959 Fluid mechanics, (Pergamon)
Lützen J 1990 Joseph Liouville 1809-1882: Master of Pure and Applied Mathematics,
Springer-Verlag
Piaggio H T H 1968 An Elementary Treatise on Differential Equations, (G Bell and Sons), first published in 1920
Prandtl L 1904 Über Flüssigkeitsbewegung bei sehr kleiner Reibung, Verhandlungen des III. Internationalen Mathematiker-Kongresses, Heidelberg, 1904
Richards E G 1998 Mapping Time, Oxford University Press.
Rudin W 1976 Principles of Mathematical Analysis, (McGraw-Hill)
Schlichting H 1955 Boundary Layer Theory, (McGraw-Hill, New York)


Simmons G F 1981 Differential Equations, (McGraw-Hill)
Smith G E 2000 Fluid Resistance: Why Did Newton Change His Mind? Published in
The Foundations of Newtonian Scholarship, Eds R H Dalitz and M Nauenberg, (World
Scientific)
Sutherland W A 1975 Introduction to Metric and Topological Spaces, (Oxford University
Press)
Troutman J L 1983 Variational Calculus with Elementary Convexity, (Springer-Verlag)
Watson G N 1965 A Treatise on the Theory of Bessel Functions (Cambridge University
Press), first published in 1922.
Whittaker E T and Watson G N 1965 A Course of Modern Analysis, (Cambridge
University Press)
Yoder J G 1988 Unrolling Time, (Cambridge University Press)
Yourgrau W and Mandelstram S 1968 Variational Principles in Dynamics and Quantum
Theory (Pitman)

Books on the Calculus of Variations


The following books have also been used in the preparation of these course notes and
should be consulted for a more detailed study of the subject.
Akhiezer N I 1962 The Calculus of Variations, (Blaisdell Publishing Company, trans-
lated from Russian by A H Frink)
Forsyth A R 1926 Calculus of Variations, (Cambridge University Press), reprinted 1960
(Dover)
Fox C 1963 An Introduction to the Calculus of Variations, (Oxford University Press),
reprinted 1987 (Dover)
Gelfand I M and Fomin S V 1963 Calculus of Variations, (Prentice Hall, translated
from the Russian by R A Silverman), reprinted 2000 (Dover)
Pars L A 1962 An Introduction to the Calculus of Variations, (Heinemann)
Sagan H 1969 An Introduction to the Calculus of Variations, (General Publishing Company, Canada), reprinted 1992 (Dover)
Troutman J L 1983 Variational Calculus with Elementary Convexity, (Springer-Verlag)
Index

C∞, 20
Cn(a, b), 20
O-notation, 12
ln x, 35
D0 norm, 124, 137
D1 norm, 124, 137
f⁻¹(y), 17
o-notation, 13
admissible function, 124
Airy's equation, 362
allowed variations, 125
analytic solution, 53, 58
Aristotle, 113
associated Riccati equation, 85
astroid, 244
autonomous equation, 54
auxiliary
   function, 292
   functional, 300
basis functions, 73
beam, loaded, 257, 263
Bernoulli
   Daniel, 356
   James, 52
   John, 52, 62
Bernoulli John, 105, 145, 258
Bernoulli's equation, 65
Bernstein S N, 135
Bernstein's theorem, 135
Bessel F W, 354
Bessel functions, 353, 361
binomial
   coefficients, 21, 35
   expansion, 34
Bois-Reymond, P du, 128, 135
boundary conditions
   mixed, 353, 364
   periodic, 353, 364
   separated, 358
boundary layer, 107
boundary value problem, 56, 121, 135
brachistochrone, 104, 145, 250, 258, 260
   in a resisting medium, 319
   with Coulomb friction, 329
broken extremals, 271
Brownian motion, 18
bubbles, 163
Cam, bridge over, 147
canal equation, 343
cardioid, 244
catenary, 258, 305
catenary equation, 112
Cauchy A C, 53, 61
Cauchy inequality, 41
chain rule, 19
Chartier's theorem, 42
Clairaut's equation, 59
clepsydra, 91
closed interval, 11
closed-form solution, 53, 58
codomain, 10
comparison theorem
   first-order equation, 370
   second-order equation, 360, 362
complete primitive, 55
completeness, 350
conjugate point, 220
   and geodesics, 227
   and lenses, 228
conservation laws, 202
constant of the motion, 198
constraint, 288
constraint, functional, 300
continuous function, 14
corners, 271
coupled equations, 183
critical point, 210
cycloid, 105, 146, 244, 261
   area and length, 148
   pendulum, 148, 171
d'Alembert's paradox, 107
definite integrals, 41
degenerate stationary point, 212
dependent
   functions, 288
   variable, 10
dependent variable, 54
derivative, 18
   partial, 24
   total, 26
Descartes R, 147
Dido, 299
differentiable, 18
differential equation, 54
differentiation of an integral, 43
diffusion equation, 343
direct methods, 375
discontinuity
   removable, 15
   simple, 15
domain, 10
drag coefficient, 106
dual problem, 296
Eddington A S, 98
eigenfunction, 339
eigenvalue, 186, 339
Einstein A, 98
elastic wire, 271
ellipse, 244
elliptical coordinates, 373
Emden-Fowler equation, 204
Emerson W, 51
epicycloid, 253
equation of constraint, 288
Essex J, 146
Euclid, 113, 165
The Euler equation, 80
Euler L, 53, 69, 93, 114, 122, 173, 319, 354
Euler's formula, 28
Euler-Lagrange equation, 121, 129
extrema, local and global, 96
extremal, 125, 130
Fermat P de, 113, 147
Fermat's principle, 112
finite subsidiary conditions, 315
first-integral, 130, 198, 203
fixed singularity, 57
fluent, 52
folium of Descartes, 48
Fourier components, 351
Fourier J B J, 343, 354
Fourier series, 351
Fréchet M, 9
Fredholm I, 9
frustum, 155
functional, 7
   differentiation of, 123
   stationary value of, 125
Fundamental lemma of the Calculus of Variations, 128
Fundamental Theorem of Calculus, 40
Gâteaux differential, 126
Galileo G, 105
general solution, 55
general theory of relativity, 98
geodesic, 98, 247
geodesics and conjugate points, 227
global extrema, 96
Goldschmidt solution, 160
graph, 10
gravitational lensing, 98
great circle, 98, 249
Green G, 343
Hölder inequality, 41
Hamilton W R, 115
hanging cable, 111, 305
heat equation, 343
Heaviside function, 15
Hero of Alexandria, 113
Hessian matrix, 212
Hilbert D, 9
holonomic constraint, 315
homogeneous equation, 55, 63
homogeneous functions, 28
horn equation, 343
Huygens C, 147
implicit function, 29
   theorem, 29
indefinite integral, 41
independent variable, 10, 54
inflection, point of, 211
inhomogeneous equation, 55
initial value problems, 56
inner product, 350, 363
integral
   definite, 41
   differentiation of a parameter, 43
   indefinite, 41
   of oscillatory functions, 42
integral of the motion, 198
integrand, 40
integrating factor, 64
integration
   by parts, 43
   limits, 40
invariant, 74
invariant functional, 201
inverse function, 17
inverse problem, 188
irregular singular point, 73
Isochrone, 105
isoperimetric problem, 110, 299
Jacobi's equation, 220
Jacobian determinant, 30, 179
Kepler's equation, 354, 373
kinetic focus, 228
Kronecker delta, 350
L'Hospital G F A, Marquis de, 38
L'Hospital's rule, 38
Lagrange J-L, 93, 114, 122, 289, 315, 354
Lagrange multiplier, 292
Lagrange's identity, 363
Lalouvère, de A, 147
Lambert's law of absorption, 92
least squares fit, 215
Lebesgue H, 9
Legendre's condition, 218
Leibniz G W, 52, 62, 105
Leibniz's rule, 21
Lemniscate of Bernoulli, 242
lenses and conjugate points, 228
Liénard's equation, 59
linear differential equations, 55
linear independence, 75
Liouville J, 53, 339
Liouville transformation, 342, 371
Lipschitz condition, 371
loaded beam, 257, 263
local
   extrema, 96
   maximum, 210
   minimum, 210
logarithm, natural, 35
Maclaurin C, 31
Mathieu E L, 358
Mathieu's equation, 373
Maupertuis P L N de, 114
maximum point, 210
Mean Value Theorem
   one variable Cauchy's form, 22
   one variable integral form, 23
minimal
   moment of inertia, 170
   surface of revolution, 154
minimum point, 210
minimum resistance problem, 106, 277
Minkowski inequality, 41
minor of determinant, 214
mixed derivative rule, 25
monotonic function, 17
Morse H C M, 212
Morse Lemma, 212
movable singularity, 57
natural boundary condition, 257, 260
natural logarithm, 35
navigation problem, 110, 262
Newton I, 39, 52, 105, 109, 146, 258
Newton's problem, 106, 277
Noether E, 202
Noether's theorem, 203
non-autonomous equation, 54
nonlinear equation, 56
nontrivial solution, 73
norm, 11
   on function space, 124
normal form, 74
   Liouville's, 341
normalised functions, 350
open
   ball, 11
   interval, 11
   set, 12
order notation, 12
ordinary point, 73
orthogonal functions, 350
oscillation theorem, 366
Pappus of Alexandria, 111
parametric functional, 244, 268
partial derivative, 24
particular
   integral, 55
   solution, 55
Pascal B, 147
pendulum
   clock, 147
   cycloidal, 148, 171
periodic boundary conditions, 358
piecewise continuous, 15
Plateau J A F, 165
Poisson S D, 343, 354
positive definite matrix, 186
Prüfer system, 366
Prandtl L, 107
principal minor, 214
product rule, 19
quadratic form, 213
quadrature, 53, 187
quotient rule, 19
radius of convergence, 32
radius of curvature, 67
Rayleigh Lord, 379
Rayleigh-Ritz method, 385
rectification, 82
regular singular point, 73
regular Sturm-Liouville, 357
relative extrema, 96
Riccati J F, 66
Riccati's equation, 66, 223
Riemann G F B, 40
Ritz method, 385
Ritz W, 385
Roberval G P de, 147
saddle, 211
scale transformation, 201
Schwarz inequality, 41
Schwarzian derivative, 47, 89
second variation, 215
self-adjoint form, 74, 339
self-adjoint operator, 363
separable equations, 62
separated boundary conditions, 358
separation constant, 344
separation of variables, 62
separation theorem, 360
Sgn function, 15
shortest distance
   in a plane, 98
   on a cylinder, 118, 318
   on a sphere, 98
side-conditions, 315
singular point, 72, 73
singular solution, 55, 59, 83
singular Sturm-Liouville system, 346
Smith R, 146
smooth function, 20
Snell's law, 114
soap films, 163
special functions, 340, 353
speed of light, 112
spherical polar coordinates, 247
stationary
   point, classification, 96
   curve, 94
   functional, 125, 128
   path, 94, 125, 129
   point, 94, 125
   point, degenerate, 212
Steiner J, 166
stiff beam, 257, 263
Stirling J, 31
Stirling's approximation, 33
strictly
   increasing, 17
   monotonic, 17
strong stationary path, 138
strongly positive, 216
structurally stable functions, 211
Strutt J W, 379
Sturm J C F, 53, 339
Sturm-Liouville system, 357
   regular, 357
   singular, 358
sufficiently smooth, 20
supremum norm, 124
surface of revolution
   minimum area, 250, 305, 309
symmetric matrix, 186
tangent line, 18
Tautochrone, 105
Taylor
   polynomials, 31
   series, 31, 37
Taylor B, 31
Torricelli's law, 91
total derivative, 26
transversality condition, 266, 270, 310
trial functions, 375
triangle inequality, 11
trigonometric series, 351
trivial solution, 55, 73
trochoid, 146, 253
uncoupled equations, 185
undetermined multiplier, 292
variable end points, 110, 265
variation of parameters, 64
variational equation, 227
Wallis J, 147
Water clock, 91
wave equation, 343
weak stationary path, 138
Weierstrass K, 107, 132, 245
Weierstrass-Erdmann conditions, 271, 275, 311
weight function, 350
Wren C, 147
Wronski J H de, 74
Wronskian, 74, 89
Zenodorus, 111