Lecture notes, Winter term 2018/19

Fallstudien der Mathematischen Modellbildung:

Asymptotic methods
for perturbation problems
Approximation techniques in science and engineering

Stefan Possanner (stefan.possanner@ma.tum.de )

January 7, 2019

Technical University Munich


Department of Mathematics


Abstract

The goal of this lecture is to introduce some fundamental notions and techniques used
in the asymptotic analysis of perturbation problems. Such problems are called singular
if they undergo a change in their mathematical structure as the perturbation parame-
ter ε tends to zero. A solution of the reduced problem (ε = 0) coincides with the limit
solution of the full problem as ε → 0 only if the perturbation is regular. It is the subject
of asymptotic analysis to find approximate solutions of the full problem that are valid
uniformly for 0 < ε ≤ ε_0, even if the perturbation is singular. Singular perturbation
problems usually arise at the most critical (and interesting) regimes of physical modeling
- their analysis and ultimate resolution have often led to major advances in a specific
field of science. In the first part of this course we focus on some basic principles and
examples in the context of ordinary differential equations: we introduce the principle of
dominant balance and discuss boundary layers, the WKB method, the method of (vari-
ational) averaging and the method of multiple scales. The guiding-center approximation
of plasma physics is considered as a generic example of nonlinear perturbation theory.
In the second part we extend our analysis to partial differential equations and present
Prandtl’s boundary layer for the Navier-Stokes equation. Moreover, we elaborate on
macroscopic limits of kinetic equations in the strongly collisional regime, leading to fluid
models of reduced dimensionality.


Contents


1 Introduction
  1.1 Regular and singular problems: algebraic examples
  1.2 The process of “non-dimensionalization”
  1.3 General problem setting

2 Asymptotic expansions
  2.1 Order functions
  2.2 Order of a function
  2.3 Asymptotic expansions

3 Regular perturbations of nonlinear initial value problems
  3.1 Some fundamentals of ordinary differential equations
  3.2 Regular perturbations of IVPs
  3.3 Application 1: The nonlinear spring
  3.4 Application 2: The guiding-center approximation

4 Singular perturbations of linear ODEs

5 Averaging and multiple scales

6 Extension to linear PDEs of elliptic and hyperbolic type

7 Macroscopic limits of kinetic equations

Index

1 Introduction
A mathematical problem is deemed a perturbation problem if it is “close” to a simpler
problem for which the solution is either known or can be computed with standard tech-
niques. The closeness is usually measured in terms of a dimensionless parameter ε ≪ 1
in the governing equations, which are typically systems of algebraic and/or differential
equations with suitable initial and boundary conditions, such that setting ε = 0
yields the simpler problem - henceforth called the reduced problem. In asymptotic
analysis, the approach is to view the solutions of the governing equations as functions
of ε, i.e. as a family of solutions depending on the parameter ε, and to construct an
approximation to this family in the form of a series expansion in terms of simple func-
tions of ε (typically power series in εn, n ≥ 0). The big advantage is that this series can
be computed term by term from simplified equations and is thus much easier to obtain
than the exact solution. It occurs quite often that such an asymptotic expansion (AE)
is the basis for a numerical investigation of the problem, which would otherwise be stiff
or simply too large (in terms of degrees of freedom) to solve.

In regular perturbation problems the lowest order of the AE is indeed the solution
of the reduced problem with ε = 0 across the whole domain of interest. In this case,
it is straightforward to derive a system of equations with suitable initial and boundary
conditions for the terms in the series, which can then be solved recursively. The lower the
value of ε, the better the approximation obtained via the series expansion. We shall study
regular perturbations of nonlinear ordinary differential equations (ODEs) in Chapter 3.
A perturbation problem that is not regular is called singular. For singular problems the
limiting behavior ε → 0 is not captured by a naive AE and the above procedure fails. In
ODEs, for example, singular problems occur when the highest-order derivative is multiplied by
a coefficient of size ε or smaller, which leads to the formation of boundary layers in certain regions
of the domain as ε → 0. This is because the order of the reduced problem is less than
the number of initial/boundary conditions. Hence, a more subtle treatment is required
to capture the correct asymptotics uniformly in the domain of interest, which is the
subject of Chapter 4. Regular expansions may also fail when the domain is infinite,
i.e. when small errors accumulate and become large over long times due to so-called
secular terms. The method of averaging and the method of multiple scales can deal
with secularities and will be discussed in Chapter 5. With regard to partial differential
equations (PDEs), singular problems occur when the type of the PDE changes in the
reduced problem, or when the boundary conditions are such that the reduced problem
is ill-posed (has no unique solution). We will present some generic examples of linear
elliptic and hyperbolic equations in Chapter 6. In particular, we shall revisit Prandtl’s
analysis (from 1904) of boundary layer formation around a body in a nearly inviscid
flow. This is a prime example of the power of asymptotic analysis for advancing hard
physical problems, where straightforward reasoning might stall. Finally, in Chapter 7
we illustrate more examples in the context of kinetic-fluid transitions in gas dynamics
and magnetized plasmas.

1.1 Regular and singular problems: algebraic examples


The difference between regular and singular perturbation problems can be readily un-
derstood by means of the following two algebraic examples.

Example 1. Consider the cubic equation

x_ε^3 − x_ε + ε = 0.    (1.1)

We view the roots xε as a family of solutions depending on the parameter ε. The reduced
problem is obtained by setting ε = 0,

x_0^3 − x_0 = 0,    (1.2)

which yields the three roots x0 ∈ {0,±1}. In order to produce an AE of the family xε
we assume
x_ε = x_0 + ε x_1 + ε^2 x_2 + O(ε^3).    (1.3)

We will clarify the meaning of the symbol O(ε^3) later; here it is sufficient to know that
it describes terms that tend to zero at least as fast as ε^3 when ε → 0 (we say these terms are
“of order 3”). Inserting the AE into (1.1) and ordering terms in powers of ε yields

0 = (x_0 + ε x_1 + ε^2 x_2 + ...)^3 − x_0 − ε x_1 − ε^2 x_2 + ε + O(ε^3)
                                                                                 (1.4)
  = (x_0^3 − x_0) + ε (3 x_0^2 x_1 − x_1 + 1) + ε^2 (3 x_0^2 x_2 + 3 x_0 x_1^2 − x_2) + O(ε^3).
Since we assume the expansion (1.3) to be valid for a finite interval (0, ε_0] of ε-values,
and moreover the functions ε^n are linearly independent, the coefficients in (1.4) must
vanish. This leads to the system

x_0^3 − x_0 = 0,    (1.5)

3 x_0^2 x_1 − x_1 + 1 = 0,    (1.6)

3 x_0^2 x_2 + 3 x_0 x_1^2 − x_2 = 0,    (1.7)
...

In the first equation we recognize the reduced problem (1.2). The other equations are
linear in x_1, x_2, etc. and can be solved recursively:

x_1 = 1/(1 − 3 x_0^2),    x_2 = 3 x_0 x_1^2/(1 − 3 x_0^2),    ... .    (1.8)


The AEs for the three roots of (1.1) thus read

x_I,ε   = ε + O(ε^3),                       (1.9)

x_II,ε  = 1 − ε/2 − 3ε^2/8 + O(ε^3),        (1.10)

x_III,ε = −1 − ε/2 + 3ε^2/8 + O(ε^3).       (1.11)

We leave it up to the reader to verify that these are indeed the first terms of the Taylor
expansions of the exact roots.
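A quick numerical cross-check of these expansions is possible; the following is a minimal sketch (not part of the original computation, assuming NumPy is available):

    import numpy as np

    for eps in (0.1, 0.01):
        exact = np.sort(np.roots([1.0, 0.0, -1.0, eps]).real)       # roots of x^3 - x + eps
        ae = np.sort(np.array([eps,                                  # (1.9)
                               1 - eps / 2 - 3 * eps**2 / 8,         # (1.10)
                               -1 - eps / 2 + 3 * eps**2 / 8]))      # (1.11)
        print(f"eps={eps}:  max deviation from AE = {np.max(np.abs(exact - ae)):.2e}")

The deviation shrinks roughly like ε^3 as ε decreases, as expected from the remainder terms.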

Example 2. Consider now the following cubic equation:

ε x_ε^3 − x_ε + 1 = 0.    (1.12)

Inserting the ansatz (1.3) and equating the coefficients of ε^n to zero yields

−x_0 + 1 = 0,    (1.13)

x_0^3 − x_1 = 0,    (1.14)

3 x_0^2 x_1 − x_2 = 0,    (1.15)

...

such that the AE reads



x_ε = 1 + ε + 3ε^2 + O(ε^3).    (1.16)


Here, we obtained only one root of the cubic equation, because the problem degenerates
to a linear equation when setting ε = 0. Such a qualitative change in the mathe-
matical nature is typical for a singular perturbation problem. What happened
to the other two roots? In the present case they cannot be described with an AE of
the form (1.3) because they tend to infinity as ε → 0. In order to recover the correct
asymptotics of these roots we need to reformulate the problem (1.12) into a regular
perturbation problem. This can be done via a change of variables of the form


x_ε = y_ε / δ(ε),    (1.17)

where yε = O(1) as ε → 0 and the function δ(ε) is still to be determined. The equation
for yε reads
(ε/δ(ε)^3) y_ε^3 − (1/δ(ε)) y_ε + 1 = 0.    (1.18)
To obtain a non-trivial reduced problem we require at least two leading-order terms in
(1.18) to be of the same order in ε. This is called the principle of dominant balance,
which we will encounter repeatedly in this course. Following this principle, there will be
one choice of δ(ε) which leads to meaningful results. Balancing the first two terms leads
to
ε/δ(ε)^3 = 1/δ(ε)    =⇒    δ(ε) = √ε.    (1.19)
With this choice of δ(ε) we obtain the regular perturbation problem

y_ε^3 − y_ε + √ε = 0,    (1.20)

which we can solve in the same way as (1.1) with the ansatz

y_ε = y_0 + √ε y_1 + ε y_2 + O(ε^{3/2}).    (1.21)
The non-zero solutions read

y_ε = ±1 − √ε/2 ∓ 3ε/8 + O(ε^{3/2}),    (1.22)
which yields the missing roots


x_ε = ±1/√ε − 1/2 ∓ 3√ε/8 + O(ε).    (1.23)
The limit ε → 0 in such an AE with x_ε = O(ε^{−1/2}) is called a distinguished limit.
There are two other possibilities for balancing two terms in (1.12). Balancing the last
two terms leads to δ(ε) = 1 and thus to the original problem - this is clearly a bad
choice. On the other hand, balancing the first and the third term leads to δ(ε) = ε^{1/3}.
However, in this case the second term would be of order O(ε^{−1/3}) and thus dominate
the other two terms, which violates the principle of dominant balance. In this example,
no three-term dominant balance is possible but this can occur in other problems.
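Again a small numerical sketch (not from the original notes, assuming NumPy) comparing the exact roots of (1.12) with the expansions (1.16) and (1.23):

    import numpy as np

    for eps in (0.1, 0.01):
        exact = np.sort(np.roots([eps, 0.0, -1.0, 1.0]).real)       # roots of eps*x^3 - x + 1
        ae = np.sort(np.array([-1 / np.sqrt(eps) - 0.5 + 3 * np.sqrt(eps) / 8,   # (1.23), lower sign
                               1 + eps + 3 * eps**2,                              # (1.16)
                               1 / np.sqrt(eps) - 0.5 - 3 * np.sqrt(eps) / 8]))   # (1.23), upper sign
        print(f"eps={eps}:  max deviation from AE = {np.max(np.abs(exact - ae)):.2e}")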
Example 3. As a last algebraic example consider the quadratic equation
(1 − ε) x_ε^2 − 2 x_ε + 1 = 0.    (1.24)
Trying the ansatz
x_ε = x_0 + ε x_1 + O(ε^2)    (1.25)
leads to
x_0^2 − 2 x_0 + 1 = 0,    (1.26)

2 x_0 x_1 − x_0^2 − 2 x_1 = 0.    (1.27)

From the first equation we obtain x0 = 1 as a double root and then the second equation
yields the contradiction −1 = 0. Hence, a solution of the form (1.25) cannot exist. The
difficulty arises because x0 = 1 is a repeated root of the reduced problem and thus the
exact solution

x_ε = (1 ± √ε)/(1 − ε)    (1.28)

does not have a power series expansion in ε, but rather in √ε. An expansion of the form

x_ε = x_0 + √ε x_1 + ε x_2 + O(ε^{3/2})    (1.29)
leads to x_0 = 1 and x_1^2 = 1, which yields the correct expansion

x_ε = 1 ± √ε + O(ε).    (1.30)


1.2 The process of “non-dimensionalization”


The dimensionless parameter ε is a key factor in perturbation problems, as it allows one to
give a mathematical meaning to the transition between the reduced and the full prob-
lem. However, science problems are usually written in terms of dimensional quantities,
i.e. variables with physical dimensions such as time, length or temperature. Most prob-
lems consist of equations that model real-world phenomena, and are thus linked at some
point to experimental observations in which the model is rooted - hence the dimensional
nature of the variables. The first task in the perturbative treatment of a science problem
is thus to write it in non-dimensional form, thereby identifying the parameter ε. This
crucial step enables the formulation of the “physical” problem as a well-defined per-
turbation problem. Depending on the considered problem parameters, the same model
equations may lead to either a regular perturbation problem, a singular perturbation
problem, or not a perturbation problem at all (ε ≈ 1)! It is the physical information of
the considered scenario, i.e. the size of the problem parameters, that in the end defines
the nature of the perturbation problem.

As we will see later, the term “asymptotic” means “for ε sufficiently small”. Hence, in
mathematical terms, an asymptotic approximation - that is, an AE which is asymptot-
ically equivalent to the exact solution (clarified later) - can be made arbitrarily precise
by rendering ε as small as necessary. By contrast, in a given non-dimensional science
problem the size of ε is fixed, usually denoting the ratio of two characteristic problem
parameters. The validity of the asymptotic approximation then has to be checked for
this particular value of ε. This can be done, for instance, if explicit expressions for the
remainder of the series expansion are available.

Example 4. We will now familiarize ourselves with the concept of non-dimensionalization


(or scaling) by considering the damped harmonic oscillator. Let x(t) denote the displace-
ment of a mass m > 0 attached to a spring from its equilibrium position as a function of
time. If the mass is set into motion from its equilibrium position with an impulse p0,
the ensuing dynamics can be described in terms of the following initial value problem
(IVP):
m d^2x/dt^2 + β dx/dt + k x = 0,
                                            (1.31)
x(0) = 0,    dx/dt(0) = p_0/m.
Here, k > 0 stands for the spring constant and β ≥ 0 denotes the damping coefficient.
Scaling the problem (1.31) starts with writing the dependent variable x and the indepen-
dent variable t in terms of characteristic units of length L and of time T, respectively,

x(t) = L x′(t′),    t = T t′.    (1.32)

Here, x′ and t′ are dimensionless. From the chain rule we have

dx/dt(t) = (L/T) dx′/dt′(t′),    d^2x/dt^2(t) = (L/T^2) d^2x′/(dt′)^2(t′).    (1.33)

Inserting this into (1.31) yields

(mL/T^2) d^2x′/(dt′)^2 + (βL/T) dx′/dt′ + kL x′ = 0,
                                                           (1.34)
L x′(0) = 0,    (L/T) dx′/dt′(0) = p_0/m.
Now this problem is still dimensional but the dimensions (or units) have been made
explicit via the pre-factors of each term. The dimensionless form is obtained by dividing
by one of the pre-factors, which leads for example to

(m/(kT^2)) d^2x′/(dt′)^2 + (β/(kT)) dx′/dt′ + x′ = 0,
                                                           (1.35)
x′(0) = 0,    dx′/dt′(0) = p_0 T/(mL).
Our next task is to identify the small parameter ε. Hence we need to assign, a priori, a
“size” to each of the terms in the problem. This requires additional information regarding
the physical scenario we aim to consider. First of all, we identify two characteristic time
scales of the problem:
τ_1 = √(m/k)    and    τ_2 = β/k.    (1.36)
Here, τ1 is the period of the undamped oscillator and τ2 is a characteristic damping
time. With the choice of the time scale T we can determine which phenomena we want
to resolve on a scale of order one, thus T is also called the time scale of observation.
Setting T = τ1 will resolve the oscillatory phenomena, while setting T = τ2 will lead us
to observe the damping of the oscillator on a scale of order one. The length scale L can
be determined by requiring an initial velocity dx′/dt′ of order one, hence L = p_0 T/m.
Let us compare two physical scenarios:

1. Weakly damped oscillator: τ2 ≪ τ1 and T = τ1. Setting


ε := τ_2/τ_1,    (1.37)

and omitting the primes for lighter notation, this leads to

P_ε^(osc):    d^2x_ε/dt^2 + ε dx_ε/dt + x_ε = 0,    x_ε(0) = 0,    dx_ε/dt(0) = 1.    (1.38)
As we will learn later this problem can be classified as a regular perturbation problem
as long as t is bounded, t ∈ [0,T0) with T0 independent of ε (more precisely,


T0 = O(1) as ε → 0). Indeed, trying the AE

x_ε = x_0 + ε x_1 + O(ε^2)    (1.39)


leads to
P_0^(osc):    d^2x_0/dt^2 + x_0 = 0,    x_0(0) = 0,    dx_0/dt(0) = 1,

P_1^(osc):    d^2x_1/dt^2 + x_1 = −dx_0/dt,    x_1(0) = 0,    dx_1/dt(0) = 0.
The reduced problem P_0^(osc) yields x_0(t) = sin(t) and then the problem P_1^(osc) yields the
correction x_1(t) = −(t/2) sin(t). Therefore, the AE reads

x_ε = sin(t) − (εt/2) sin(t) + O(ε^2).    (1.40)
This is a valid expansion as long as εt is small compared to one, hence for finite
time t. Terms that blow up as t → ∞ are called secular terms in perturbation
theory. There are more sophisticated methods for (nearly-) periodic problems
which can avoid secular terms in AEs (averaging, method of multiple scales).

2. Strongly damped oscillator: τ1 ≪ τ2 and T = τ2. We define

ε := (τ_1/τ_2)^2,    (1.41)
which leads to
P_ε^(dmp):    ε d^2x_ε/dt^2 + dx_ε/dt + x_ε = 0,    x_ε(0) = 0,    dx_ε/dt(0) = 1.    (1.42)
This is a singular perturbation problem because the highest order derivative is multiplied
by ε. The reduced problem reads
P_0^(dmp):    dx_0/dt + x_0 = 0,    x_0(0) = 0,    dx_0/dt(0) = 1.    (1.43)
This problem is obviously ill-posed because x0 cannot satisfy both initial conditions.
Such a situation, where the order of the differential equation is decreased in the reduced
problem, leads to the formation of boundary layers. The terminology becomes clear by
looking at the exact solutions of P_ε^(dmp) for different values of ε in panel (b) of Figure 1.1.
Boundary layer problems can be treated with asymptotic matching, cf. Chapter 4. A short
numerical sketch of both oscillator scenarios is given below.
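The following is a minimal sketch (not part of the original notes, assuming NumPy and SciPy are available; the helper name run and the sampled values of ε are illustrative choices). It integrates both oscillator problems numerically, in the spirit of Figure 1.1: for (1.38) it measures the error of the two-term expansion (1.40), and for (1.42) it shows the rapid change of the velocity dx_ε/dt across the initial layer of width O(ε).

    import numpy as np
    from scipy.integrate import solve_ivp

    def run(eps):
        # (a) weakly damped: x'' + eps*x' + x = 0,  x(0)=0, x'(0)=1
        osc = solve_ivp(lambda t, y: [y[1], -eps * y[1] - y[0]],
                        (0.0, 10.0), [0.0, 1.0], dense_output=True, rtol=1e-9, atol=1e-12)
        t = np.linspace(0.0, 10.0, 400)
        ae = (1.0 - eps * t / 2.0) * np.sin(t)            # two-term expansion (1.40)
        err = np.max(np.abs(osc.sol(t)[0] - ae))

        # (b) strongly damped: eps*x'' + x' + x = 0,  x(0)=0, x'(0)=1
        dmp = solve_ivp(lambda t, y: [y[1], -(y[1] + y[0]) / eps],
                        (0.0, 10.0), [0.0, 1.0], dense_output=True, rtol=1e-9, atol=1e-12)
        v0, v_layer = 1.0, dmp.sol(10.0 * eps)[1]          # velocity before/after the layer
        print(f"eps={eps:5.2f}:  max|x_eps - AE| = {err:.4f},  dx/dt: {v0:+.2f} -> {v_layer:+.3f} at t = 10*eps")

    for eps in (0.3, 0.1, 0.03):
        run(eps)

The error of (1.40) shrinks with ε (as long as εt stays moderate), while in case (b) the velocity drops from 1 to a value of order ε within a layer of width O(ε) near t = 0.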

1.3 General problem setting


In this course we are faced with finding approximate solutions to mathematical problems
as a small parameter ε > 0 tends to zero. We write these problems symbolically as

P_ε[u_ε] = 0.    (1.44)

Figure 1.1: Exact solutions for the weakly damped oscillator (1.38) in (a) and for the
strongly damped oscillator (1.42) in (b), for different values of ε.

Here, uε is the solution of the problem and Pε represents a set of model equations. In
this course we focus on ordinary and partial differential equations (ODEs and PDEs).
Hence, uε is defined on a domain D ⊂ Rn and we write uε = u(x1,...,xn,ε); Pε is
some differential operator. Often times, the problem (1.44) is too hard to solve, even
with the help of numerical methods. The goal of asymptotic analysis is to find ”simple”
approximations for uε when ε is small. The term simple is subjective here; for instance,
it could mean approximating the solution in terms of elementary functions or finding
an approximation that can be computed on a lower dimensional subset D ⊂ D. Setting
formally ε = 0 leads to the reduced problem

P_0[u_0] = 0.    (1.45)

An immediate question arises:

Is the limit of the solution equal to the solution of the limit?

As already hinted before, there are two cases:

regular problem:    ∀ x ∈ D:    lim_{ε→0} u_ε(x) = u_0(x),

singular problem:    ∃ x_0 ∈ D  s.t.    lim_{ε→0} u_ε(x_0) ≠ u_0(x_0).

From this it is clear that a naive expansion of the form

u_ε = u_0 + ε u_1 + ε^2 u_2 + ...    (1.46)

cannot be uniformly valid in D when a problem is singular. In the latter case, a viable
strategy is to reformulate the equations into a regular perturbation problem and then
apply an expansion of the form (1.46) to approximate the solution. This is commonly the
strategy for developing asymptotic-preserving (AP) numerical schemes for stiff equations
[5, 3]. If such a reformulation is not available, one distinguishes two types of problems:
singular problems of cumulative type and singular problems of boundary layer type.

Singular problems of cumulative type. These are problems with oscillating
solutions where the influence of the small parameter ε on the limit solution becomes
observable only after long times of the order t = O(1/ε). The error terms are called
secular terms and blow up as time goes to infinity, but are small for times of order one.
The domain D is infinite, for instance D = R for ODEs. The secular terms lead to the
following behavior:
lim_{ε→0} u_ε(t) ≠ u_0(t)    for    t = O(1/ε).
The techniques we will discuss in this course for dealing with problems of cumulative
type are

• classical averaging,

• variational averaging for Lagrangian dynamical systems,

• the method of multiple scales.

Most of these techniques have been developed for studying the motion of celestial bodies
and date back to the times of Poincaré (∼ 1900). The technique of variational averaging
[1, 7] has gained renewed attention for studying the helical motion of a charged particle
in a strong magnetic field, a classic example of a problem of cumulative type.

Singular problems of boundary layer type. Many interesting phenomena in
physics are characterized by a sudden change of state variables, for instance the formation
of shock waves in gas dynamics or the boundary layer flow along the surface of a body.
Mathematically, such problems can be described as singular perturbation problems where
the domain D is finite. As ε tends to zero, the solution develops a jump in a very narrow
region of D, called the boundary layer. The boundary layer can be located at the edges
of the domain but it does not have to be (free boundary layer problem). The main tool
for treating such problems is called

• asymptotic matching.

2 Asymptotic expansions
Let us now introduce the main tools for asymptotic analysis. We shall give a precise def-
inition of the terms appearing in expansions of the form (1.46) and generalized versions
thereof. Moreover, we must clarify what is meant by an asymptotic expansion (AE) of
uε. An interesting concept will be that of an asymptotic series, which in general does
not converge, but nevertheless provides a good approximation to uε for small ε. The
different notions of convergence and asymptotic convergence will be compared in detail.

2.1 Order functions


Definition 1. Let E be the set of real functions δ(ε) that are strictly positive and
continuous on the interval (0,ε0] and such that
1. limε→0 δ(ε) exists (it can be ∞),

2. δ1,δ2 ∈ E ⇒ δ1δ2 ∈ E.

A function δ(ε) ∈ E is called an order function.

The following functions are examples of order functions:


1,    ε,    1 + ε,    ε^3,    1/ε,    ε/(1 + ε),    1/ln(1/ε),    e^{−1/ε}.

Note that if δ(ε) is an order function, then 1/δ(ε) is too. The first condition above accepts
1/ε, but it excludes functions with rapid variations near zero such as 1 + sin^2(1/ε). The
second condition excludes products of such functions with viable order functions, like
ε[1 + sin^2(1/ε)]. A comparison of order functions is accomplished via Hardy's notation:
δ_1 is asymptotically smaller than δ_2,                δ_1 ≺ δ_2,    if  lim_{ε→0} δ_1/δ_2 = 0,

δ_1 is asymptotically equal to δ_2,                    δ_1 ≈ δ_2,    if  lim_{ε→0} δ_1/δ_2 = λ,  0 < λ < ∞,

δ_1 is asymptotically smaller than or equal to δ_2,    δ_1 ≼ δ_2,    if  lim_{ε→0} δ_1/δ_2 = λ,  0 ≤ λ < ∞.
Example 5. Using Hardy’s notation we have, for n ∈ N0:
ε^{n+1} ≺ ε^n,    e^{−1/ε} ≺ ε^n ≺ 1/ln(1/ε),

2ε^n ≈ ε^n,    2ε ≈ ε/(1 + ε),    2 ≈ 1 + ε,    ε ≈ sin(ε).
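Such relations are easy to check numerically; a minimal sketch (ours, assuming NumPy; the exponent n = 3 is an arbitrary choice): both printed quotients tend to zero, illustrating e^{−1/ε} ≺ ε^3 ≺ 1/ln(1/ε).

    import numpy as np

    for eps in (1e-1, 1e-2, 1e-3):
        print(f"eps={eps:6.0e}:  exp(-1/eps)/eps^3 = {np.exp(-1.0/eps)/eps**3:.2e},  "
              f"eps^3 * ln(1/eps) = {eps**3 * np.log(1.0/eps):.2e}")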


Definition 2. A sequence of order functions δn is called an asymptotic sequence if

δn+1 ≺ δn ∀n.

If δn and γn are two asymptotic sequences with δn ≈ γn for all n, then these sequences
are asymptotically equivalent. For example,

ε^n,    (ε/(1 + ε))^n,    sin^n(ε),

are asymptotically equivalent. The relation δ1 ≈ δ2 defines an equivalence relation on E,
meaning it is reflexive, symmetric and transitive. Hence we can choose representatives
of each equivalence class. Usually we will work with the most convenient set of
order functions, like ε^n where n is an integer or ε^α where α is rational, and call these
representatives gauge functions. The choice of gauge functions has implications for the
uniqueness of asymptotic expansions discussed below.

2.2 Order of a function


Let us consider real-valued functions uε(x) = u(x, ε) where x ∈ D ⊂ Rn and ε ∈ (0,ε0].
We suppose that u(·, ε) belongs to a normed linear space with the norm || · || in D and
that ||uε|| is continuous in ε.

Definition 3. (Landau’s notation or big-O notation).

1. uε = O(δ(ε)) and we say uε is of order δ(ε) in D if there exists a constant K,


independent of ε, such that ||uε|| ≤ Kδ (remark that the limit limε→0 ||uε||/δ need
not exist).

2. uε = o(δ(ε)) and we say uε is of lesser order than δ(ε) in D if

lim_{ε→0} ||u_ε||/δ = 0.

3. uε = Os(δ(ε)) and we say uε is of strict (or sharp) order δ(ε) in D if

lim_{ε→0} ||u_ε||/δ = K,


where K is a finite, nonzero constant.

If uε is an order function then Hardy’s and Landau’s notation are equivalent. Landau’s
notation is however more general, because the limit in 1. need not exist, for instance we
have sin(1/ε) = O(1) in Landau’s notation.
The order of a function depends on the chosen norm || · ||. The supremum norm is used
most frequently in asymptotic analysis, that is, for uε continuous and bounded in D,

||u_ε|| = max_{D} |u_ε|.


However, other norms such as L2 can be used, depending on the type of problem one is
interested in. The order of a function can then be completely different, depending on
the norm chosen. Consider for example u_ε(x) = e^{−x/ε} on D = [0,1]. In the supremum
norm we have uε = O(1), whereas in the L2-norm,

||u_ε|| = ( ∫_0^1 u_ε^2 dx )^{1/2},

we have u_ε = O(√ε). In what follows we shall always use the supremum norm if
not stated otherwise. It can be proved (see for example Eckhaus [4]) that if ||uε|| is
continuous in ε there exists an order function δ such that uε = Os(δ). It follows that we
can always rescale the function uε such that

u_ε/δ = O_s(1).    (2.1)
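A small numerical check of this norm dependence (ours, assuming NumPy; the grid resolution is an arbitrary choice): the supremum norm stays at 1, while the L2 norm behaves like √ε (the printed ratio settles near 1/√2).

    import numpy as np

    x = np.linspace(0.0, 1.0, 200001)
    dx = x[1] - x[0]
    for eps in (1e-1, 1e-2, 1e-3):
        u = np.exp(-x / eps)
        sup_norm = np.max(np.abs(u))
        l2_norm = np.sqrt(np.sum(u**2) * dx)       # simple Riemann approximation of the integral
        print(f"eps={eps:6.0e}:  sup-norm = {sup_norm:.3f},  L2-norm/sqrt(eps) = {l2_norm/np.sqrt(eps):.3f}")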

2.3 Asymptotic expansions


We will now clarify the notions of

• asymptotic approximation (AA)

• asymptotic series (AS)

• asymptotic expansion (AE).

These terms are often used synonymously in the literature, however some clarification
is provided for example by Eckhaus [4].

Definition 4. Let uε(x) be a function that is Os(1) in D. A function u0(x, ε) is an


asymptotic approximation of uε in D if

u_ε − u_0 = o(1)    ⇔    lim_{ε→0} ||u_ε − u_0|| = 0.

This definition extends to functions uε that are of arbitrary sharp order δ via the
rescaling procedure (2.1). In general, u0 is an asymptotic approximation of uε = Os(δ)
in D if
(u_ε − u_0)/δ = o(1)    ⇔    lim_{ε→0} ||u_ε − u_0||/δ = 0.    (2.2)
This implies u0 = Os(δ). An asymptotic expansion of uε is constructed by successive
AAs of the remainder terms. For example, let u0 = Os(1) be an AA of uε in the sense
of Definition 4. Then we define the remainder as

u_1^* := u_ε − u_0 = o(1).    (2.3)
It follows there exists a δ_1 ≺ 1 such that u_1^* = O_s(δ_1), and we can rescale via (2.1) to
obtain u_1^ε := u_1^*/δ_1 = O_s(1). Therefore, we can write

u_ε = u_0 + δ_1 u_1^ε.    (2.4)


Suppose now we are able to find an AA of the remainder u_1^ε(x, ε) and call it u_1(x, ε).
Then from Definition 4 we define the new remainder as

u_2^* := u_1^ε − u_1 = o(1),    (2.5)

and there exists a δ_2 such that u_2^* = O_s(δ_2). Rescaling leads to u_2^ε := u_2^*/δ_2 = O_s(1) and
by inserting this into (2.4) we can write

u_ε = u_0 + δ_1 u_1 + δ_1 δ_2 u_2^ε.    (2.6)

We can repeat this procedure N times to obtain

u_ε(x) = ∑_{n=0}^{N} δ̃_n(ε) u_n(x, ε) + o(δ̃_N(ε)),    (2.7)


where δ̃_0 = 1, δ̃_n(ε) = δ_1(ε) · ... · δ_n(ε) for n ≥ 1, and u_n(x, ε) = O_s(1). Remark that the product
of order functions is again an order function due to the property 2 of Definition 1. The
right-hand side in (2.7) is called an asymptotic expansion (AE) of the function uε in D.
It follows in particular that
lim_{ε→0}  ||u_ε(x) − ∑_{n=0}^{N} δ̃_n(ε) u_n(x, ε)|| / δ̃_N(ε)  = 0    ∀ x ∈ D.    (2.8)

Let us now formalize this result:

Definition 5. Let (u_n^*(x, ε))_{n=0}^{∞} denote a sequence of functions on D × (0, ε_0] with
u_n^* = O_s(δ_n). A series

∑_{n=0}^{N} u_n^*(x, ε),    u_n^* = O_s(δ_n),

is called an asymptotic series if δn is an asymptotic sequence, hence if δn+1 ≺ δn (see


Definition 2).

By rescaling as in (2.1) one obtains u_n^* = δ_n u_n with u_n = O_s(1), hence any asymptotic
series can be written as

∑_{n=0}^{N} δ_n(ε) u_n(x, ε).    (2.9)

Definition 6. Let

u_as^(N)(x, ε) = ∑_{n=0}^{N} δ_n(ε) u_n(x, ε),    u_n = O_s(1),

be an asymptotic series in D. u_as^(N) is an asymptotic expansion to order N of u_ε in D if

u_ε(x) = u_as^(N)(x, ε) + o(δ_N),    x in D.


If the asymptotic expansion (AE) in Definition 6 holds for any positive integer N one
writes

u_ε ∼ ∑_{n=0}^{∞} δ_n(ε) u_n(x, ε),    x in D,    (2.10)

and says the series is asymptotically convergent to uε in D. Be mindful however that


asymptotic convergence does not imply convergence of the series in the usual
sense! Indeed, most asymptotic series (AS) do not converge, but this is not important.
The purpose of an AS is to provide good approximations to a function as ε → 0. Let us
compare the two notions of convergence with respect to ε in detail. Consider x0 ∈ D to
be a parameter that is not important here:

• Convergence of a series: the limit

lim_{N→∞} ∑_{n=0}^{N} δ_n(ε) u_n(x_0, ε)

exists for ε ∈ (0,R), where R is the radius of convergence.

• Asymptotic convergence to a function uε(x0):

lim_{ε→0}  |u_ε(x_0) − ∑_{n=0}^{N} δ_n(ε) u_n(x_0, ε)| / δ_N  = 0    ∀ N ∈ N.

We see that convergence is about the behavior of the series in a finite ε-region (0,R) as
N → ∞, whereas asymptotic convergence is about the approximation of the function
uε(x0) as ε tends to zero. While the former is an absolute concept, the latter is a relative
concept, always with respect to a given function uε. It thus makes no sense to ask a
question like ’Is this series asymptotically convergent?’ A reasonable question would be
’Is this series asymptotically convergent to uε?’

The most common examples of AEs are Taylor expansions. Suppose uε is N-times
differentiable at ε = 0, then Taylor’s theorem states

u_ε = ∑_{n=0}^{N} (ε^n/n!) (d^n u_ε/dε^n)|_{ε=0} + o(ε^N).    (2.11)

In case that N becomes infinite the series can be convergent within a certain radius R
around zero and divergent elsewhere (R can be zero). However, the Taylor expansion is
always a good approximation of the function uε in the vicinity of ε = 0.

Example 6. The Taylor series of the exponential function reads

e^ε = 1 + ε + ε^2/2 + ... + ε^n/n! + ... .    (2.12)


This series converges for all values of ε. The AE

e^ε = 1 + ε + ε^2/2 + o(ε^2)    (2.13)

is valid near ε = 0 but not elsewhere.

Example 7. In the above definitions we assume the statements to hold uniformly in


x ∈ D. However, it may be that an AE is not uniformly valid in its domain of definition.
Consider for example the function

u_ε(x) = √(x + ε) = √x · √(1 + ε/x),    x > 0.    (2.14)

Taylor expansion leads to

u_ε(x) ∼ √x ( 1 + ε/(2x) − ε^2/(8x^2) + ... + (−1)^{n−1} (2n − 3)!!/(2^n n!) · ε^n/x^n + ... ).    (2.15)

This AE is uniformly valid on any interval x ≥ α > 0 bounded away from zero; however it is not
uniformly valid in x > 0 because the remainder terms are not o(ε^n) for x = O(ε).
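The non-uniformity can be observed numerically; a sketch of ours (assuming NumPy), evaluating the two-term expansion at the fixed point x = 1 and at x = ε: the scaled error at x = 1 shrinks, while the scaled error at x = ε grows like 1/√ε.

    import numpy as np

    for eps in (1e-2, 1e-4, 1e-6):
        err_fixed = abs(np.sqrt(1.0 + eps) - (1.0 + eps / 2.0))        # at x = 1
        err_small = abs(np.sqrt(eps + eps) - np.sqrt(eps) * 1.5)       # at x = eps
        print(f"eps={eps:6.0e}:  error/eps at x=1: {err_fixed/eps:.2e},  error/eps at x=eps: {err_small/eps:.2e}")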

Example 8. Consider the error function defined by

erf(t) = 1 − (2/√π) ∫_t^∞ e^{−s^2} ds.    (2.16)

It can be shown ([2], page 16) that this function can be approximated by

erf(t) = 1 − (e^{−t^2}/√π) { ∑_{n=1}^{N} (−1)^{n−1} (2n − 3)!!/(2^{n−1} t^{2n−1}) + o(1/t^{2N−1}) },    (2.17)

which is true for any value of N ∈ N. Hence, substituting t = 1/ε, the following series
is asymptotically convergent to the error function,

erf(1/ε) ∼ 1 − (e^{−1/ε^2}/√π) ∑_{n=1}^{∞} (−1)^{n−1} (2n − 3)!!/2^{n−1} · ε^{2n−1}.    (2.18)

This is an AE of erf with respect to the sequence (ε^{2n+1} e^{−1/ε^2})_{n=0}^{∞}, which diverges for any
value of ε. It is nevertheless a useful AE for obtaining values of the error function for
large arguments.
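A brief check of (2.17) against SciPy's error function (a sketch of ours; the test point t = 3 and the recurrence used to build the terms are our choices):

    import numpy as np
    from scipy.special import erf

    def erf_ae(t, N):
        """Partial sums of (2.17): erf(t) ≈ 1 - exp(-t^2)/sqrt(pi) * sum_{n=1}^N term_n."""
        term, total = 1.0 / t, 0.0
        for n in range(1, N + 1):
            total += term
            term *= -(2 * n - 1) / (2.0 * t**2)    # term_{n+1} = -term_n * (2n-1)/(2 t^2)
        return 1.0 - np.exp(-t**2) / np.sqrt(np.pi) * total

    t = 3.0
    for N in (1, 2, 3):
        print(f"N={N}:  AE = {erf_ae(t, N):.12f},  error = {abs(erf_ae(t, N) - erf(t)):.2e}")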

Example 9. The literature on special functions is full of useful AEs of these functions.
Consider for instance the Bessel function J0(t), which has the series expansion

J_0(t) = ∑_{n=0}^{∞} (−1)^n ( t^n/(2^n n!) )^2.    (2.19)


This AE is convergent on any bounded interval of R. For large t, writing t = 1/ε, we


have the well-known AE ([2], page 17)

J_0(1/ε) ∼ √(2ε/π) { cos(1/ε − π/4) ∑_{n=0}^{∞} (−1)^n (4n − 1)!!^2 ε^{2n}/(2^{6n} (2n)!)
                                                                                          (2.20)
                   + sin(1/ε − π/4) ∑_{n=0}^{∞} (−1)^n (4n + 1)!!^2 ε^{2n+1}/(2^{6n+3} (2n + 1)!) }.

While the convergent expansion is rather useless for obtaining values of J0 for large
arguments, the AE is very useful. To approximate the value of J0(3) to three digits
precision one needs eight terms of the convergent series but only one term of the AE.
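A comparison of the two expansions (ours, assuming NumPy and SciPy; the test point t = 10 is our choice): the two leading terms of (2.20) already give the value to a few digits, while a 10-term partial sum of the convergent series (2.19) is still far off.

    import math
    import numpy as np
    from scipy.special import j0

    t = 10.0
    taylor_10 = sum((-1)**n * (t / 2.0)**(2 * n) / math.factorial(n)**2 for n in range(10))
    hankel_2 = np.sqrt(2.0 / (np.pi * t)) * (np.cos(t - np.pi / 4) + np.sin(t - np.pi / 4) / (8.0 * t))
    print(f"J0(10) = {j0(t):.6f}")
    print(f"10-term Taylor partial sum:   {taylor_10:.3f}")
    print(f"two-term large-argument AE:   {hankel_2:.6f}  (error {abs(hankel_2 - j0(t)):.1e})")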

It is clear from the computations leading to (2.7) that an asymptotic expansion (AE)
can always be studied as a repeated process of asymptotic approximations (AAs). More-
over, if we fix the δn to a certain set of gauge functions, for example δ_n = ε^n, it follows
from the above procedure that the AE of a function uε is unique with respect to
these gauge functions, because the coefficients un are uniquely determined
by
u_n = lim_{ε→0} ( u_ε − ∑_{m=0}^{n−1} δ_m u_m ) / δ_n.    (2.21)
The converse is however not true: a given asymptotic series u_as^(N) is indeed the AE of
an infinity of functions which differ by a term of o(δN ). These statements regarding
uniqueness even hold for asymptotically convergent series. Suppose for example that we
choose δ_n = ε^n as gauge functions, then

u_ε ∼ ∑_{n=0}^{∞} ε^n u_n    ⇔    u_ε + e^{−1/ε} ∼ ∑_{n=0}^{∞} ε^n u_n.    (2.22)

Two functions which have the same AE with respect to the same asymptotic sequence
of gauge functions are called asymptotically equal. Hence u_ε is asymptotically equal to u_ε + e^{−1/ε}
with respect to the sequence ε^n. It may thus very well be that

u_ε(x) = ∑_{n=0}^{∞} δ_n(ε) u_n(x, ε),    x ∈ D,  ε ∈ (0, R),    (2.23)

in case that the series converges. Finally, it is clear that the AE of a function changes
when the asymptotic sequence changes.
In case that the functions un of an AE are independent of ε, hence un = un(x) = Os(1),
one speaks of a regular AE. In the special case that δ_n = ε^n, one calls

u_ε(x) = ∑_{n=0}^{N} ε^n u_n(x) + o(ε^N)    (2.24)


the Poincaré expansion (sometimes also Hilbert expansion) of uε. Poincaré expansions
are important tools in asymptotic analysis because of their relative simplicity; a lot can
be gained by approximating a complicated function of x and ε by a sequence of functions
with the simple structure ε^n u_n(x).
Another interesting question is the following: given a divergent AE that is asymp-
totically convergent to uε, what is the optimal truncation of the series for a fixed value
of ε? Optimal in this sense means with minimal error in the supremum norm. Hence we
search for the optimal number Nopt after which to truncate the series to get the smallest
error. Such questions arise frequently in physics problems, where ε is usually given by
a ratio of fixed problem parameters.
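The phenomenon of optimal truncation can be illustrated with the divergent expansion (2.17) of the error function (a sketch of ours, assuming NumPy and SciPy; the moderate test point t = 1.5 is chosen arbitrarily): the error first decreases with N, reaches a minimum at some N_opt, and then grows again.

    import numpy as np
    from scipy.special import erf

    def erf_ae(t, N):
        term, total = 1.0 / t, 0.0
        for n in range(1, N + 1):
            total += term
            term *= -(2 * n - 1) / (2.0 * t**2)
        return 1.0 - np.exp(-t**2) / np.sqrt(np.pi) * total

    t = 1.5
    errors = [abs(erf_ae(t, N) - erf(t)) for N in range(1, 16)]
    n_opt = int(np.argmin(errors)) + 1
    print([f"{e:.1e}" for e in errors])
    print(f"optimal truncation: N_opt = {n_opt},  error = {errors[n_opt - 1]:.1e}")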

Let us now consider some elementary operations on AEs like addition, multiplication,
integration and differentiation. Let us assume expansions of Poincaré type,
u_ε(x) ∼ ∑_{n=0}^{∞} δ_n(ε) u_n(x),    v_ε(x) ∼ ∑_{n=0}^{∞} δ_n(ε) v_n(x),

in x ∈ D. Then the following is true:

• Addition:


u_ε(x) + v_ε(x) ∼ ∑_{n=0}^{∞} δ_n(ε) [u_n(x) + v_n(x)],    (2.25)

• Multiplication: if δ_n δ_m = δ_{n+m} then

  u_ε(x) v_ε(x) ∼ ∑_{n=0}^{∞} δ_n(ε) w_n(x),    w_n(x) = ∑_{m=0}^{n} u_m(x) v_{n−m}(x).    (2.26)

• Integration along a path C in D: assuming everything is integrable along C,

  ∫_C u_ε(x) dσ ∼ ∑_{n=0}^{∞} δ_n(ε) ∫_C u_n(x) dσ,    (2.27)

  where dσ is the line element along C : I ⊂ R → D.

• Integration with respect to ε: assuming everything is integrable w.r.t. ε,

  ∫_0^ε u_{ε′}(x) dε′ ∼ ∑_{n=0}^{∞} ( ∫_0^ε δ_n(ε′) dε′ ) u_n(x).    (2.28)

• Differentiation with respect to x: if u_ε(x) and u_n(x) are differentiable in D,

  ∂u_ε(x)/∂x_i ∼ ∑_{n=0}^{∞} δ_n(ε) ∂u_n(x)/∂x_i.    (2.29)


• Differentiation with respect to ε: if ∂u_ε(x)/∂ε and dδ_n(ε)/dε exist for x ∈ D and for
  ε ∈ (0, ε_0], and if

  ∂u_ε(x)/∂ε ∼ ∑_{n=0}^{∞} (dδ_n(ε)/dε) ũ_n(x),    (2.30)

  then ũ_n(x) = u_n(x) in x ∈ D.

The proof of these properties is straightforward.
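As an illustration of the multiplication rule (2.26), the following sketch (ours, assuming SymPy is available; the choice of e^ε and 1/(1 − ε) is arbitrary) convolves the Poincaré coefficients of the two factors and compares with the directly computed expansion of their product:

    import sympy as sp

    eps = sp.symbols('epsilon')
    N = 4
    u = [sp.Rational(1, sp.factorial(n)) for n in range(N + 1)]     # coefficients of exp(eps)
    v = [sp.Integer(1)] * (N + 1)                                   # coefficients of 1/(1 - eps)
    w = [sum(u[m] * v[n - m] for m in range(n + 1)) for n in range(N + 1)]   # convolution (2.26)
    direct = sp.series(sp.exp(eps) / (1 - eps), eps, 0, N + 1).removeO()
    print(w)                                                        # [1, 2, 5/2, 8/3, 65/24]
    print([direct.coeff(eps, n) for n in range(N + 1)])             # the same coefficients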

3 Regular perturbations of nonlinear initial value problems
3.1 Some fundamentals of ordinary differential equations
We present here some basic notions and theorems about systems of ordinary differential
equations (ODEs) that are important in the context of this document. For an in-depth
reading on the theory of ODEs we recommend the books [6, 8, 9].

In what follows J ⊂ R denotes an interval on the real line and Ω ⊂ Rn stands for
an open and connected subset of Rn. We shall be concerned with systems of ODEs of
the form
dz(t)/dt = f(z(t), t),    (3.1)
where f : Ω × J → Rn is a suitable regular function. The image of f is called the
direction field of the system (3.1). A C1-curve z : J → Rn satisfying (3.1) is tangent
everywhere to the direction field. In case that f is independent of t, the system (3.1)
is called autonomous. For autonomous systems, we call a function E : Ω → R a first
integral if f(z) · ∇E(z)=0 ∀ z ∈ Ω. It is easy to see that

(d/dt) E(z(t)) = 0    ⇐⇒    E is a first integral.    (3.2)

The level sets of E are hypersurfaces in Ω. In case that E is a first integral, E(z(t)) = c,
the motion lies in the corresponding hypersurface.
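A small numerical illustration (ours, assuming NumPy and SciPy; the harmonic oscillator field f(q,p) = (p, −q) with E(q,p) = (q^2 + p^2)/2 is an arbitrary example): f·∇E = pq + (−q)p = 0, so E is a first integral and stays constant along numerical solutions up to the integration tolerance.

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, z):
        q, p = z
        return [p, -q]

    E = lambda z: 0.5 * (z[0]**2 + z[1]**2)
    sol = solve_ivp(f, (0.0, 20.0), [1.0, 0.0], dense_output=True, rtol=1e-10, atol=1e-12)
    for t in (0.0, 5.0, 10.0, 20.0):
        print(f"t = {t:5.1f}:  E = {E(sol.sol(t)):.10f}")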
If (3.1) is furnished with a condition z(t0) = C ∈ Ω for some t0 ∈ J we call it an
initial value problem (IVP):
dz(t)/dt = f(z(t), t),    z(t_0) = C.    (3.3)

A function z is a solution of (3.3) if and only if it satisfies the integral equation

z(t) = C + ∫_{t_0}^{t} f(z(s), s) ds.    (3.4)

Hence, (3.3) and (3.4) are equivalent formulations of the IVP. By a solution of (3.3) we
mean a C1-curve z : I → Ω on some interval I ⊂ J, where t0 ∈ I, z(t0) = C and (3.3)
holds for all t ∈ I. We remark that a solution may be defined only on I ⊂ J, even if the
IVP is defined on J. The graph of a solution is the set

γz := {(t,z(t)) ∈ J × Ω : t ∈ I}. (3.5)

A proper extension of a solution z : I → Ω of (3.3) is a function z̃ : Ĩ → Ω, where
I ⊂ Ĩ ⊂ J, Ĩ ≠ I, and z̃(t) = z(t) for t ∈ I. A maximal solution of the IVP is a solution
with no proper extension. The corresponding interval is called the maximal interval.

From the integral formulation (3.4) of the IVP we are inclined to search for solutions
with lesser regularity, namely z ∈ C0(I,Ω), such that the integrand on the right-hand-
side of (3.4) is a piecewise continuous function of s. The integral solution then satisfies
(3.3) at points of continuity of t ↦→ f(z(t),t) due to the fundamental theorem of calculus.
Such solutions are important in cases where the function f is not continuous with respect
to its second argument (sometimes arising in control problems).

Existence and uniqueness of solutions to the IVP (3.3) depend on the properties of
the function f. Peano’s existence theorem states that there is at least one solution of
(3.3) on some interval I ⊂ J if f is continuous on Ω × J. For uniqueness one needs an
additional property: f must be locally Lipschitz with respect to its first argument, which
means there exist neighborhoods Ωz ⊂ Ω of z and Jt ⊂ J of t such that

||f(z_1, s) − f(z_2, s)|| / ||z_1 − z_2|| ≤ L < ∞    ∀ z_1, z_2 ∈ Ω_z,  s ∈ J_t.    (3.6)

The constant L, which possibly depends on Ωz and Jt, but not on z1,z2, is called the
Lipschitz constant. Remark that the quotient in (3.6) remains bounded as ||z_1 − z_2|| → 0.

Example 10. Let us view the function f(z) = z^{1/3} as a mapping f : R → R. f is
continuous but not locally Lipschitz. Consider the point z = 0. Since f(−1) = −1, one
has lim_{ε→0} |f(ε) − f(−ε)|/(2ε) = lim_{ε→0} ε^{−2/3} = ∞, which means that f is not locally
Lipschitz at z = 0. Moreover, let J = R = Ω and consider the scalar IVP

dz(t)/dt = t z^{1/3}(t),    z(0) = 0.    (3.7)

We readily verify that z ≡ 0 on R is a maximal solution, but so is the continuously
differentiable function

z(t) = (t/√3)^3  for t ≥ 0,        z(t) = 0  for t < 0.

Hence, there are at least two maximal solutions to the IVP (3.7). The cause for this is
that f(z) = z^{1/3} is not locally Lipschitz at z = 0.

In what follows we will call


Assumption A: f is continuous on Ω × J and locally Lipschitz with respect to


its first argument, and t ↦→ f(z(t),t) is jointly continuous for continuous z.

We can now state the basic existence and uniqueness theorem:

Theorem 1. Under the assumption A, the IVP (3.3) has a unique maximal solution
for each (C,t0) ∈ Ω × J. The associated maximal interval is denoted I(C,t0) ⊂ J.

This theorem is known as the Cauchy-Lipschitz theorem or the Picard-Lindelöf theorem.
There are two different approaches to the proof, depending on whether one uses
the continuity of f with respect to the first argument (Gronwall's lemma, c.f. Theorem
the continuity of f with respect to the first argument (Gronwall’s lemma, c.f. Theorem
4.17 in [6] for example), or a fixed-point technique (c.f. Theorem 4.22 in [6] for example).

Let us now introduce the notion of the transition map ψ of the IVP (3.3). The
transition map is called the local flow in case that the IVP is autonomous. We define
the map ψ : dom(ψ) ⊂ J × Ω × J → Ω by the property that t ↦→ ψ(t;C,t0) is the
solution of the IVP (3.3). Hence the transition map can be viewed as the solution of the
IVP with dependence on the initial condition. The domain of ψ is

dom(ψ) = {(t,C,t0) ∈ J × Ω × J : t ∈ I(C,t0)}.

Definition 7. The system (3.3) is autonomous if J = R and if f does not depend on t.

For an interval I ⊂ R let us denote the interval shifted by t0 via

I + t0 := {t + t0 ∈ R : t ∈ I}.

We then have the following Corollary to the above theorem:

Corollary 2. Let f : Ω × J → Rn satisfy assumption A.



1. Let (C,t0) ∈ Ω × J and let s ∈ I(C,t0). Then I(ψ(s;C,t0),s) = I(C,t0) and

ψ(t;ψ(s;C,t0),s) = ψ(t;C,t0) ∀ t ∈ I(C,t0). (3.8)

2. Assume that the system is autonomous. Then, for arbitrary t0,s ∈ R and C ∈ Ω,
I(C,t0) = I(C,s) − s + t0 and

ψ(t + s − t0;C,s) = ψ(t;C,t0) ∀ t ∈ I(C,t0). (3.9)

Proof. 1. The curve z : t ↦→ ψ(t;ψ(s;C,t0),s) is the unique solution of the IVP

dz(t)/dt = f(z(t), t),    z(s) = ψ(s;C,t0).    (3.10)

However, the curve z̃ : t ↦→ ψ(t;C,t0) clearly satisfies (3.10) too, which according to
Theorem 1 means z(t) = z̃(t) for all t ∈ I(ψ(s;C,t0),s) and the interval is maximal.
Moreover, since z̃ is the unique solution of

dz̃(t)/dt = f(z̃(t), t),    z̃(t0) = C,


we obtain I(C,t0) = I(ψ(s;C,t0), s).


2. The curve z : t ↦→ ψ(t;C,s) is the unique maximal solution of

dz(t)/dt = f(z(t)),    z(s) = C,

defined on the interval I(C,s). The curve z̃(t) := z(t + s − t0) is thus defined on
I(C,s) − s + t0. It is easily verified that z̃ satisfies

dz̃(t)/dt = f(z̃(t)),    z̃(t0) = C,

and is hence defined on the maximal interval I(C,t0).

The property (3.8) is the semigroup property of the transition map; it leads to an
intuitive interpretation of ψ as a dynamical propagator. The property (3.9) mirrors the
translational invariance of autonomous systems, that is, a translation (with respect to
time) of a solution is also a solution. We have another Corollary:


Corollary 3. Let f : Ω × J → Rn satisfy assumption A. Then the relation on Ω × J
defined by
(C,t0) ∼ (D,s) if s ∈ I(C,t0) and D = ψ(s;C,t0)

is an equivalence relation, and the graphs of solutions to initial conditions (C,t0) ∈ Ω×J
are the equivalence classes.

Proof. We need to prove the three properties defining an equivalence relation:

1. Reflexivity: Clearly t0 ∈ I(C,t0) and ψ(t0;C,t0) = C such that (C,t0) ∼ (C,t0).

2. Symmetry: Assume that (C,t0) ∼ (D,s), hence

I(D,s) = I(ψ(s;C,t0),s) = I(C,t0) =⇒ t0 ∈ I(D,s),

where we used Corollary 2 in the second equality. Moreover,

ψ(t0;D,s) = ψ(t0;ψ(s;C,t0),s) = ψ(t0;C,t0) = C.

It follows that (D,s) ∼ (C,t0), which proves symmetry.

3. Transitivity: Suppose that (C,t0) ∼ (D,s) and (D,s) ∼ (E,r), then from the
above it follows that

r ∈ I(D,s) = I(C,t0)   and   E = ψ(r;D,s) = ψ(r;ψ(s;C,t0),s) = ψ(r;C,t0)   =⇒   (C,t0) ∼ (E,r),

which proves transitivity.

By definition, if (C,t0) ∼ (D,s), the point (D,s) ∈ Ω × J lies on the graph of the
solution z with initial condition z(t0) = C, so that the graphs form the equivalence
classes.


An equivalence relation partitions a set into pairwise disjoint subsets (the equivalence
classes), whose union forms the whole set. Therefore, the above Corollary states that
the graphs of two solutions with initial conditions at (C,t0) and at (D,s) are either
identical or disjoint, i.e. different graphs do not intersect (cross).


Let us now focus on autonomous systems,


dz(t)/dt = f(z(t)),    z(0) = C,    (3.11)
where f is locally Lipschitz on Ω. We may set the initial condition at t0 = 0 in full gener-
ality because of the translation invariance of solutions, see the second part of Corollary
2. A solution is a C1-curve z : I → Ω on some interval I containing 0, such that (3.11)
holds for t ∈ I. Of course Theorem 1 applies in this case and gives existence and unique-
ness of a maximally extended solution on a maximal interval IC = I(C,0). Therefore,
we may define the local flow ϕ of the system (3.11) as the map ϕ : (t,C) ↦→ ψ(t;C,0).
The domain of the flow is

dom(ϕ) = {(t,C) ∈ R × Ω : t ∈ IC}.

In particular, t ↦→ ϕ(t,C) is the solution of the IVP (3.11). If IC = R for some C,


then the solution t ↦→ ϕ(t,C) is said to be global. If IC = R for all C ∈ Ω, then ϕ is
called simply the flow of (3.11), and ϕ is said to be a dynamical system. The local flow
satisfies

ϕ(0,C) = C ∀ C ∈ Ω, (3.12a)

and for s ∈ IC : ϕ(t + s,C) = ϕ(t,ϕ(s,C)) ∀ t ∈ IC − s. (3.12b)

The first relation follows from the definition of ϕ. To prove the second one we invoke the
second result of Corollary 2 with t0 = 0:

I_{ϕ(s,C)} = I(ψ(s;C,0), 0) = I(ψ(s;C,0), s) − s = I(C,0) − s = I_C − s.

Moreover, for t ∈ IC − s we get

ϕ(t,ϕ(s,C)) = ψ(t;ψ(s;C,0),0) = ψ(t+s;ψ(s;C,0),s) = ψ(t+s;C,0) = ϕ(t+s,C).

Relation (3.12b) is termed the group property of the local flow. The terminology becomes
clear when we regard the case where IC = R for all C ∈ Ω, so that ϕ is a dynamical
system. The family of mappings Φt : C ↦→ ϕ(t,C), t ∈ R, forms a commutative group
due to Φs ◦ Φt = Φs+t for all s, t ∈ R.
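The group property can be checked numerically for a concrete autonomous system; a sketch of ours (assuming NumPy and SciPy; the pendulum field f(q,p) = (p, −sin q) and the values of s and t are arbitrary choices): ϕ(t + s, C) should coincide with ϕ(t, ϕ(s, C)) up to the solver tolerance.

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, z):
        q, p = z
        return [p, -np.sin(q)]

    def flow(t, C):
        """Approximate local flow phi(t, C) obtained by numerical integration."""
        sol = solve_ivp(f, (0.0, t), list(C), dense_output=True, rtol=1e-10, atol=1e-12)
        return sol.sol(t)

    C, s, t = np.array([1.0, 0.0]), 0.7, 1.3
    lhs = flow(t + s, C)
    rhs = flow(t, flow(s, C))
    print(lhs, rhs, np.max(np.abs(lhs - rhs)))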

For autonomous systems, the space Ω ⊂ Rn is called the phase space. In addition to
the graph of the solution z defined in (3.5), for autonomous systems one calls the image
of z an orbit (or trajectory) of the system. We have seen for general ODE systems that
different graphs do not intersect. In the autonomous case, even more is true: different
orbits do not intersect. This is because the relation C ∼ C̃ if C̃ is in the orbit of C,
is an equivalence relation on Ω, similar to Corollary 3. The orbits are thus equivalence
classes.


3.2 Regular perturbations of IVPs


Let us consider initial value problems (IVPs) of the form

P_ε :    dz_ε/dt = f(z_ε, t) + ε g(z_ε, t, ε),    z_ε(t_0) = C ∈ R^n,    (3.13)

where f ∈ C1(Rn ×R) and g ∈ C0(Rn ×R×(0,ε0]) are time-dependent vector fields and
g also depends on ε. We suppose that both f and g are locally Lipschitz with respect
to their first argument, with Lipschitz constant L independent of t and ε. Moreover,
f = Os(1) and g = Os(1) as ε → 0. The reduced problem corresponding to (3.13) reads

P_0 :    dz_0/dt = f(z_0, t),    z_0(t_0) = C ∈ R^n.    (3.14)

According to Theorem 1 the problem P0 has a unique solution in some maximal interval
I0 = I0(C,t0) ⊂ R containing t0. For simplicity, let us restrict this solution to an interval
t0 ≤ t ≤ t0 + T where we consider only forward propagation in time.

The above perturbation problem Pε occurs in many applications and is of fundamental


importance for perturbation theory. The natural question is whether the solution zε stays
close to the solution z0 of the reduced problem and if so, for how long. This question is
answered by the theorem below, the proof of which requires the application of Gronwall’s
lemma:

Lemma 1. Suppose for t ∈ [t_0, t_0 + T],

ϕ(t) ≤ b(t − t_0) + a ∫_{t_0}^{t} ϕ(s) ds + c,

with ϕ(t) continuous, ϕ(t) ≥ 0 for t ∈ [t_0, t_0 + T] and constants a > 0, b, c ≥ 0, then

ϕ(t) ≤ (b/a + c) e^{a(t−t_0)} − b/a

for t ∈ [t_0, t_0 + T].

Theorem 4. Let zε denote the unique solution of problem Pε in (3.13) with f and g
satisfying the conditions above. Moreover, let z0(t) stand for the unique solution of the
reduced problem P0 in (3.14) for t0 ≤ t ≤ t0 +T. Then there exists a constant t1 = O(1),
independent of ε, with t0 < t1 ≤ t0 + T, such that
|zε(t) − z0(t)| = O(ε), uniformly in t0 ≤ t ≤ t1,


where |z|^2 = ∑_i z_i^2 is the usual Euclidean vector norm.


Proof. Let us choose an arbitrary compact and convex set D ⊂ Rn ×[t0,∞). Theorem 1
guarantees the existence of a unique solution zε(t) of Pε in some interval t0 ≤ t ≤ t1(ε).
Since g is continuous in D × (0,ε0], it can be shown that t1 may be chosen inde-
pendently of ε ([2], page 36). Let us denote rε := zε − z0 and let us take t1 ≤ t0 + T.
The residual rε satisfies

dr_ε/dt = f(z_0 + r_ε, t) − f(z_0, t) + ε g(z_ε, t, ε),    r_ε(t_0) = 0.

Integrating in time and using the Lipschitz bound for f together with a bound M for |g|
on D × (0, ε_0] (such a bound exists since g = O_s(1)), we obtain

|r_ε(t)| ≤ L ∫_{t_0}^{t} |r_ε(s)| ds + εM (t − t_0).

Applying Lemma 1 with a = L, b = εM and c = 0 yields

|r_ε(t)| ≤ (εM/L) ( e^{L(t−t_0)} − 1 ) = O(ε)    uniformly in t_0 ≤ t ≤ t_1,

which proves the claim.
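A numerical illustration of Theorem 4 (ours, assuming NumPy and SciPy; the scalar choice f(z,t) = −z, g(z,t,ε) = cos t, C = 1 is an arbitrary example): the deviation max|z_ε − z_0| on a fixed time interval scales like ε.

    import numpy as np
    from scipy.integrate import solve_ivp

    t_eval = np.linspace(0.0, 5.0, 501)
    z0 = np.exp(-t_eval)                                  # solution of the reduced problem P_0
    for eps in (0.1, 0.01, 0.001):
        sol = solve_ivp(lambda t, z: [-z[0] + eps * np.cos(t)],
                        (0.0, 5.0), [1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)
        dev = np.max(np.abs(sol.y[0] - z0))
        print(f"eps={eps:6.3f}:  max deviation = {dev:.2e},  deviation/eps = {dev/eps:.3f}")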

3.3 Application 1: The nonlinear spring

3.4 Application 2: The guiding-center approximation



4 Singular perturbations of linear ODEs


5 Averaging and multiple scales



6 Extension to linear PDEs of elliptic and hyperbolic type


7 Macroscopic limits of kinetic equations


Index

asymptotic approximation, 15
asymptotic convergence, 17
asymptotic expansion, 4, 13, 16, 19
asymptotic series, 16
asymptotically equal, 19
autonomous ODE, 22

boundary layer, 10

distinguished limit, 7
dominant balance, principle of, 6
dynamical system, 26

flow of an ODE, 24

gauge functions, 14
graph of a solution, 23
Gronwall lemma, 27

order function, 13

Poincaré expansion, 20

reduced problem, 4, 11
regular perturbation problem, 4

scaling, 8
secular term, 4, 10, 12
singular perturbation problem, 4

Taylor’s theorem, 17


Bibliography
[1] A.J. Brizard and T.S. Hahm. Foundations of nonlinear gyrokinetic theory. Rev.
Mod. Phys., 79:421, 2007.

[2] Eduardus Marie de Jager and JF Furu. The theory of singular perturbations, vol-
ume 42. Elsevier, 1996.

[3] Pierre Degond. Asymptotic-preserving schemes for fluid models of plasmas. arXiv
preprint arXiv:1104.1869, 2011.

[4] Wiktor Eckhaus. Asymptotic analysis of singular perturbations, volume 9. Elsevier, 2011.

[5] Shi Jin. Efficient asymptotic-preserving (AP) schemes for some multiscale kinetic
equations. SIAM Journal on Scientific Computing, 21(2):441–454, 1999.

[6] H. Logemann and E.P. Ryan. Ordinary Differential Equations: Analysis, Qualitative
Theory and Control. Undergraduate Mathematics Series. Springer, 2014.

[7] Stefan Possanner. Gyrokinetics from variational averaging: existence and error
bounds. arXiv preprint arXiv:1711.09620, November 2017.

[8] G. Teschl. Ordinary Differential Equations and Dynamical Systems, volume 140 of
Graduate Studies in Mathematics. AMS, 2012.

[9] W. Walter. Ordinary Differential Equations. Number 182 in Graduate Texts in Mathematics. Springer, 1991.
