The logistic equation
Didier Gonze
Introduction
The logistic equation (sometimes called the Verhulst model or logistic growth curve) is a
model of population growth first published by Pierre-François Verhulst (1845, 1847). The
model is continuous in time, but a modification of the continuous equation to a discrete
quadratic recurrence equation, known as the logistic map, is also widely studied.
The continuous version of the logistic model is described by the differential equation:

dN/dt = rN (1 − N/K)    (1)

where r is the Malthusian parameter (rate of maximum population growth) and K is the
carrying capacity (i.e. the maximum sustainable population). Dividing both sides by K
and defining X = N/K then gives the differential equation

dX/dt = rX(1 − X)    (2)
The discrete version of the logistic model is written:

Xn+1 = rXn (1 − Xn)    (3)
Here we will first describe the fascinating properties of the discrete version of the logistic
equation and then present the continuous form of the equation.
Discrete logistic equation
Before reading the present notes, you are invited to do some exploration using a com-
puter. The goal is to understand the behavior of the following innocent-looking difference
equation:
Xn+1 = f (Xn ) = rXn (1 − Xn ) (4)
Let r = 0.5 and X0 = 0.1, and compute X1, X2, ..., X30 using equation (4). Now repeat the
process for r = 2.0, r = 2.7, r = 3.2, r = 3.5, or r = 3.8. We will limit our analysis to
0 ≤ r ≤ 4 (which guarantees that 0 ≤ Xn ≤ 1). As r increases you should observe some
changes in the type of solution you get.
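This exploration is easily carried out on a computer. The following sketch in Python (the function and parameter names are ours) iterates eq. (4) for the suggested values of r:

```python
def logistic_step(x, r):
    """One iteration of the logistic map X_{n+1} = r X_n (1 - X_n)."""
    return r * x * (1.0 - x)

def iterate(x0, r, n):
    """Return the orbit [X_0, X_1, ..., X_n]."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(logistic_step(orbit[-1], r))
    return orbit

# Reproduce the suggested experiment: X_0 = 0.1, 30 iterations
for r in (0.5, 2.0, 2.7, 3.2, 3.5, 3.8):
    orbit = iterate(0.1, r, 30)
    print(f"r = {r}: X_30 = {orbit[-1]:.4f}")
```

Plotting each orbit against n reveals the changes of behaviour described below: extinction, convergence to a steady state, oscillations, and irregular dynamics.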
The first thing you should notice about eq. (4) is that it is non-linear, since it involves
a term Xn^2. Because of this non-linearity, this equation has remarkable non-trivial
properties, but cannot be solved analytically. Therefore we must resort to other methods
to explore its behaviour. This equation and its variants still puzzle mathematicians.
As we will see in the following, this equation allows us to introduce many fundamental
concepts pertaining to non-linear systems.
Steady state and stability
The concept of steady state (or equilibrium) relates to the absence of changes in a system.
In the context of difference equations, the steady state Xss is defined by the condition

f(Xss) = Xss

i.e. a state that is mapped onto itself by eq. (4).
By definition, a stable steady state is a state that can be reached from neighbouring
states, whereas an unstable steady state is a state that the system leaves as soon as
a small perturbation moves it out of this state. The notion of stability is
schematized here:
To analyze the stability of a steady state Xss, we introduce a small perturbation xn and
write Xn = Xss + xn. The evolution of the perturbation then obeys

xn+1 = f(Xss + xn) − Xss    (11)

Unfortunately, eq. (11) is still not directly usable because it involves the evaluation of
the function f at Xss + xn, which is unknown. Fortunately, there is a trick to overcome
this difficulty.
We can indeed exploit the fact that xn is small compared to Xss and develop the function
as a Taylor expansion around Xss :
f(Xss + xn) = f(Xss) + (df/dX)|X=Xss xn + O(xn^2)    (12)
The very small terms O(xn^2) can be neglected, at least close to the steady state (i.e.
when xn is small). This approximation results in some cancellation of terms in eq. (11)
because f(Xss) = Xss. Thus the approximation
xn+1 ≃ f(Xss) − Xss + (df/dX)|X=Xss xn = (df/dX)|X=Xss xn    (13)
can be written as
xn+1 ≃ axn (14)
where

a = (df/dX)|X=Xss    (15)
Clearly, if |a| < 1, the steady state is stable (the perturbation xn tends to 0 as n increases),
while if |a| > 1, the steady state is unstable (the perturbation xn increases as n increases).
In the case of the logistic equation, we have for the steady state Xss1 = 0:

a = (df/dX)|X=Xss1 = (r − 2rX)|X=0 = r    (16)

so that Xss1 = 0 is stable when r < 1. Similarly, for the steady state Xss2 = 1 − 1/r, we
find a = (r − 2rX)|X=Xss2 = 2 − r. We conclude that the steady state Xss2 of the logistic
equation is stable when 1 < r < 3. The steady state Xss1 thus becomes unstable precisely
when the second steady state Xss2 starts to exist and is stable. Now the $1000 question
is: what happens when r > 3?
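The stability criterion |a| < 1 can be checked numerically. A minimal sketch in Python (the helper names `fprime` and `is_stable` are ours), using f'(X) = r − 2rX:

```python
def fprime(x, r):
    """Derivative of f(X) = r X (1 - X)."""
    return r - 2.0 * r * x

def is_stable(xss, r):
    """Stability criterion |f'(X_ss)| < 1 for a fixed point of the map."""
    return abs(fprime(xss, r)) < 1.0

for r in (0.5, 2.0, 3.5):
    x2 = 1.0 - 1.0 / r
    print(f"r = {r}: X_ss1 = 0 stable? {is_stable(0.0, r)}; "
          f"X_ss2 = {x2:.3f} stable? {is_stable(x2, r)}")
```

For r = 0.5 only Xss1 = 0 is stable, for r = 2 only Xss2 = 0.5 is, and for r = 3.5 neither steady state is stable.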
[Figure 1: Steady states of the discrete logistic equation as a function of r. Xss1 = 0 is
stable for r < 1 and unstable beyond; Xss2 = 1 − 1/r is unstable for r < 1 and stable for
1 < r < 3.]
Graphical method

In this section we examine a simple technique to visualize the solution of a first-order
difference equation such as the logistic equation.
First, let us draw the graph of f(X), the next-generation function. In our case, f(X) =
rX(1 − X), so that f(X) is a parabola passing through 0 at X = 0 and X = 1, with
a maximum at X = 1/2 (red curve in fig. 2).
Choosing an initial value X0 , we can read X1 = f (X0 ) directly from the parabolic curve.
To continue finding X2 = f (X1 ), X3 = f (X2 ), and so on, we need to similarly evaluate
f (X) at each succeeding value of Xn . One way of achieving this is to use the line Xn+1 =
Xn to reflect each value of Xn+1 back to the Xn axis (blue trajectory in fig. 2). This
process, which is equivalent to bouncing between the curves Xn+1 = Xn (diagonal line)
and Xn+1 = f (X) (parabola) is a recursive graphical method (also called cobwebbing) for
determining the population level at each iterative step n.
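The cobweb construction can be sketched without any plotting library: the function below (a Python sketch; `cobweb_points` is our own name) returns the sequence of vertices one would draw, alternating vertical steps to the parabola and horizontal steps to the diagonal.

```python
def cobweb_points(x0, r, n):
    """Vertices of the cobweb path for the logistic map: a vertical step
    to the curve y = f(x), then a horizontal step to the diagonal y = x."""
    f = lambda x: r * x * (1.0 - x)
    pts = [(x0, 0.0)]
    x = x0
    for _ in range(n):
        y = f(x)
        pts.append((x, y))   # vertical: up/down to the parabola
        pts.append((y, y))   # horizontal: across to the diagonal
        x = y
    return pts

pts = cobweb_points(0.1, 2.8, 50)
```

Feeding `pts` to any line-plotting routine, together with the parabola and the diagonal, reproduces the blue trajectory of fig. 2; for r = 2.8 the path spirals into the fixed point Xss = 1 − 1/2.8.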
As we can see in figure 2 (for r = 2.8), the sequence of points converges to a single point
at the intersection of the parabola with the diagonal line. This point satisfies Xn+1 = Xn.
This is by definition the steady state of the equation. Recall that the condition for
stability is |a| = |(df/dX)|X=Xss| < 1. Interpreted graphically, this condition means that
the tangent line L to f(X) at the steady state must have a slope not steeper than 1.
[Figure 2: Cobweb diagram for r = 2.8. Top panel: Xn+1 vs Xn, with the parabola, the
diagonal, and the trajectory starting from X0. Bottom panel: the corresponding time
series Xn vs step n.]
In figure 3, several time sequences, corresponding to different values of the parameter r,
are shown. When the parameter r increases, the steepness of the parabola increases,
which makes the slope of the tangent at the steady state steeper, so that eventually the
stability condition is violated. The steady state then becomes unstable and the system
undergoes oscillations. When r increases further, the periodic solution becomes unstable
and higher-period oscillations are observed. When all the cycles have become unstable,
chaos is observed. In the next section, we discuss the period-2 oscillations observed just
beyond r = 3 and their stability.
[Figure 3 panels: cobweb diagrams (Xn+1 vs Xn) and time series (Xn vs step n) for r = 2,
r = 3.2, r = 3.5 and r = 3.8.]
Figure 3: Graphical resolution for various values of r. In the left panels, the complete
trajectory, from the initial condition (here x0 = 0.2) is shown. In the middle panels, the
transients have been removed. In the right panels, the complete time series is shown.
Beyond r=3...
We present here the approach proposed by May (1976) to prove that as r increases slightly
beyond r = 3, stable oscillations of period 2 appear. A stable oscillation is a periodic
behaviour that is maintained despite small perturbations. Period 2 implies that successive
generations alternate between two fixed values of X, which we will call X1* and X2*. Thus
period 2 oscillations (sometimes called two-point cycles) simultaneously satisfy the two
equations:

X2* = f(X1*)
X1* = f(X2*)

Combining these two equations, each of the two values satisfies X* = f(f(X*)). Let us
therefore define the composite function g(X) = f(f(X)) and let k be the new index that
skips every two generations:

k = n/2    (22)
The steady state X* of this equation, i.e. the fixed point of g(X), is the period 2 solution
of equation (4). Note that there must be two such values, X1* and X2*, since by
assumption X oscillates between two fixed values.
By this trick, we have reduced the new problem to one with which we are familiar. Indeed,
the stability of a period 2 oscillation can be determined by using the method described
above. Briefly, consider a small initial perturbation x: X → X + x. Stability implies
that the periodic behaviour will be re-established, i.e. that the deviation x from this
behaviour will decrease. This will happen when:
|(dg/dX)|X=X*| < 1    (24)
From this equation, we conclude that the stability of period 2 oscillations depends on the
magnitude of dg/dX at X*, which by the chain rule equals the product f′(X1*) f′(X2*).
We will now apply this approach to the logistic equation. First we have to determine the
two fixed points X1∗ and X2∗ of equation (4).
To do so, we first make the composite function g(X) = f(f(X)) explicit:

g(X) = f(f(X)) = r [rX(1 − X)] (1 − rX(1 − X)) = r^2 X(1 − X)(1 − rX(1 − X))    (26)

The fixed points X* = g(X*) then satisfy

X* = r^2 X*(1 − X*)(1 − rX*(1 − X*))

Dividing both sides by X* (the trivial solution X* = 0 is excluded here) gives

1 = r^2 (1 − X*)(1 − rX*(1 − X*))
0 = r^2 (1 − X*)(1 − rX*(1 − X*)) − 1    (27)
In order to solve this third-order polynomial equation, we will make use of the fact that
the solution of eq. (7) is also a solution of eq. (27): a fixed point of f is automatically a
fixed point of g = f(f(X)). The non-trivial steady state X = 1 − 1/r is therefore a root of
eq. (27), and the corresponding factor can be divided out. The remaining factor is a
quadratic expression whose roots are solutions of the equation
X^2 − ((r + 1)/r) X + (r + 1)/r^2 = 0    (32)
Hence

X* = (1/2) [ (r + 1)/r ± sqrt( ((r + 1)/r)^2 − 4(r + 1)/r^2 ) ]    (33)

X1*, X2* = ( r + 1 ± sqrt((r − 3)(r + 1)) ) / (2r)    (34)
The possible roots, denoted X1∗ and X2∗ , are real if r < −1 or r > 3. Thus, for positive
values of r, steady states of the two-generation map f (f (Xn )) exist only when r > 3.
Note that this occurs when Xss = 1 − 1/r ceases to be stable.
With X1* and X2* computed, it is possible (albeit algebraically messy) to test their
stability. To do so, it is necessary to compute dg/dX and to evaluate this derivative at
the values X1* and X2*. When this is done, we obtain a second range of behaviour:
stability of the two-point cycles for 3 < r < 1 + sqrt(6) ≈ 3.449.
In fig. 4, the function g(X) of eq. (26) is represented in red. The steady states correspond
to the intersections of this function with the diagonal line. Their stability is determined
by the slope dg/dX at the steady states.
Again, we could ask a $10000 question: what happens beyond r = 3.449? In theory,
the trick used in exploring period 2 oscillations could be used for any higher period n:
n = 3, 4, ... Because the analysis becomes increasingly cumbersome, this method will
not be applied further here. We will rather discuss the results obtained by numerical
simulations.
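The formulas above are easy to verify numerically. A Python sketch (function names are ours) evaluates eq. (34) at r = 3.2, checks that the two points map onto each other under f, and evaluates the cycle's stability via the product f′(X1*) f′(X2*):

```python
import math

def f(x, r):
    """The logistic map."""
    return r * x * (1.0 - x)

def period2_points(r):
    """The two points of the period-2 cycle, from eq. (34)."""
    d = math.sqrt((r - 3.0) * (r + 1.0))
    return (r + 1.0 - d) / (2.0 * r), (r + 1.0 + d) / (2.0 * r)

r = 3.2
x1, x2 = period2_points(r)
# The two points map onto each other under f, and the cycle is stable
# when the slope of g = f(f(X)), i.e. f'(x1) * f'(x2), has magnitude < 1.
slope = (r - 2.0 * r * x1) * (r - 2.0 * r * x2)
print(f"x1 = {x1:.5f}, x2 = {x2:.5f}, dg/dX = {slope:.3f}")
```

At r = 3.2 the slope is well inside (−1, 1), confirming that the two-point cycle is stable there.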
[Figure 4: Graph of g(X) = f(f(X)) (Xn+2 vs Xn, red curve) together with the diagonal
line, for r = 2, r = 2.8 and r = 3.5.]
Bifurcation diagram
One way of summarizing the range of behaviours encountered when r increases is to
construct a bifurcation diagram. Such a diagram gives the value and stability of the
steady state and of the periodic orbits (fig. 5). In this diagram, for each value of r, the
values visited by Xn after the transients (the steady state, or the extrema of the
oscillations) are reported. The transition from one regime to another is called a
bifurcation.
Figure 5: Bifurcation diagram. The inset is a zoom on the right part of the diagram. This
diagram is obtained by computing for each value of r the steady state or the maxima and
minima of Xn after the transients, e.g. from X100 to X1000 .
Period doubling and chaos
The schematic representation shown in fig. 6 highlights the structure of the bifurcation
diagram: as r increases, the system successively undergoes cycles of period 2, 4, 8, 16, ...
Such a sequence is called a period-doubling cascade. It ultimately leads to a chaotic
attractor. This is the most typical “route to chaos”. Note that in a chaotic attractor, Xn
never takes the same value twice.
Periodic windows and intermittency
In the chaotic domain, there are windows of periodic behaviour. A large window of
period 3 is visible on the bifurcation diagram (see the zoom in the inset). This period 3
cycle can be explained by cobweb diagrams, as was done for the period 2 cycle (Fig. 7).
Interestingly, at the border just before the period 3 window, we can observe something
that looks like a period 3 cycle, but interrupted by irregular “bursts”. This behaviour is
called intermittency (Fig. 8). As the control parameter r is moved further away from
the periodic window, the irregular bursts become more frequent until the system becomes
fully chaotic. This progression is known as the “intermittency route to chaos”.
[Figure 7 panels: Xn+3 vs Xn for r = 3.8 (left) and r = 3.85 (right).]
Figure 7: Graphical analysis of period 3 cycles. Blue dots: stable points; open dots:
unstable points.
[Figure 8 panels: Xn+3 vs Xn for r = 3.8282 (full range, left; zoom around X ≈ 0.52,
right), and the corresponding time series Xn vs n.]
Figure 8: Intermittency
Sensitivity to initial conditions and Lyapunov exponent
Chaotic behaviours are characterized by a high sensitivity to initial conditions: starting
from initial conditions arbitrarily close to each other, the trajectories rapidly diverge
(Fig. 9). In other words, a small difference in the initial condition will produce large
differences in the long-term behaviour of the system. This property is sometimes called
the “butterfly effect”.
Figure 9: Sensitivity to initial conditions. Both curves have been obtained for r = 3.8
but differ by their initial conditions: x0 = 0.4 for the blue curve and x0 = 0.41 for the
red curve.
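The divergence shown in fig. 9 can be reproduced directly (a Python sketch; `orbit` is our own helper): the two trajectories start 0.01 apart and their gap grows to order one within a few iterations.

```python
def orbit(x0, r, n):
    """Orbit [X_0, ..., X_n] of the logistic map."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Same parameters as fig. 9: r = 3.8, initial conditions 0.4 and 0.41
a = orbit(0.4, 3.8, 30)
b = orbit(0.41, 3.8, 30)
gaps = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap {gaps[0]:.3f}, largest gap over 30 steps {max(gaps):.3f}")
```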
Another way to appreciate the sensitivity to initial condition is to observe the evolution
of a small interval of initial conditions (Fig. 10).
[Figure 10 panels: evolution of the interval of initial conditions over the iterations (left)
and frequency distribution of the values of X (right).]
Figure 10: Evolution of the interval [0.47, 0.48]. The red dots indicate the initial
boundaries of the interval (i.e. x0 = 0.47 and x0 = 0.48). These results have been obtained
for r = 4. The values obtained after some iterations cover the whole range [0, 1] but they
are not distributed uniformly.
The sensitivity to initial conditions can be quantified by the Lyapunov exponent. Given
an initial condition x0, consider a nearby point x0 + δ0, where the initial separation δ0 is
extremely small. Let δn be the separation after n iterations. If

|δn| ≈ |δ0| e^(nλ)    (37)

then λ is called the Lyapunov exponent. A positive value is a signature of chaos.
Taking the logarithm of eq. (37) gives λ ≈ (1/n) ln |δn/δ0|. For a map, δn ≈ (f^n)′(x0) δ0,
and by the chain rule the derivative of the n-fold composition is the product of the
derivatives along the trajectory. Hence:

λ = (1/n) ln | ∏_{i=0}^{n−1} f′(xi) |
  = (1/n) ∑_{i=0}^{n−1} ln |f′(xi)|    (42)
If this expression has a limit as n → ∞, we define that limit as the Lyapunov exponent
for the trajectory starting at x0:

λ = lim_{n→∞} (1/n) ∑_{i=0}^{n−1} ln |f′(xi)|    (43)
Note that λ depends on x0 . However it is the same for all x0 in the basin of attraction of
a given attractor. The sign of λ is characteristic of the attractor type: For stable fixed
points (steady states) and (limit) cycles, λ is negative; for chaotic attractors, λ is positive.
For the logistic map,

f(x) = rx(1 − x)    (44)
f′(x) = r − 2rx,    (45)

we have, by definition:

λ(r) = lim_{n→∞} (1/n) ∑_{i=0}^{n−1} ln |r − 2rxi|    (46)
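The sum in eq. (46) is easy to estimate numerically. A Python sketch (the function `lyapunov` and its defaults are ours, chosen to match the settings quoted in the caption of fig. 12):

```python
import math

def lyapunov(r, x0=0.2, transient=200, n=5000):
    """Estimate lambda(r) = lim (1/n) sum ln|r - 2 r x_i| (eq. 46)."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1.0 - x)
    s = 0.0
    for _ in range(n):                  # accumulate ln|f'(x_i)| along the orbit
        s += math.log(abs(r - 2.0 * r * x))
        x = r * x * (1.0 - x)
    return s / n

for r in (2.8, 3.2, 3.8):
    print(f"r = {r}: lambda ~ {lyapunov(r):.3f}")
```

Sweeping r over [3, 4] and plotting `lyapunov(r)` reproduces fig. 12: λ is negative on the periodic ranges and positive in the chaotic regime.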
Figure 12 shows the Lyapunov exponent computed for the logistic map, for 3 < r < 4.
We notice that λ remains negative for r < r* ≈ 3.57, and approaches 0 at each period-
doubling bifurcation. The negative spikes correspond to the 2^n-cycles. The onset of
chaos is visible near r = r*, where λ becomes positive. For r > r*, windows of periodic
behaviour are clearly visible (spikes of λ < 0).
Figure 12: Lyapunov exponent for 3 < r < 4 (transients = 200; number of iterations =
5000).
Generalization of the logistic map
Maroto (1982) studied the following second-order discrete map:

Xn+1 = rXn (1 − Xn−1)

The dynamical properties of such equations have been investigated notably by Levin &
May (1976), Hernandez-Bermejo & Brenig (2006), Briden & Zhang (1995, 1994), and
others.
Note that if we define the new variable Yn = Xn−1, this second-order equation can be
converted into a system of two first-order equations:

Xn+1 = rXn (1 − Yn)
Yn+1 = Xn    (51)
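A minimal sketch of the first-order system (51) in Python (`step` is our own name); iterating the pair (Xn, Yn) is equivalent to iterating the second-order map, with Yn playing the role of Xn−1:

```python
def step(x, y, r):
    """One step of system (51): X_{n+1} = r X_n (1 - Y_n), Y_{n+1} = X_n."""
    return r * x * (1.0 - y), x

# Iterate the system from X_0 = Y_0 = 0.2 (illustrative values)
x, y = 0.2, 0.2
traj = [x]
for _ in range(20):
    x, y = step(x, y, r=1.9)
    traj.append(x)
print(traj[-5:])
```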
Continuous logistic equation
The continuous form of the logistic equation is written

dX/dt = rX(1 − X)    (53)
This equation can be solved either numerically, using a standard integration algorithm,
or analytically. Interestingly, the simplest numerical scheme brings us back to the discrete
logistic map. Consider the equation with r = 1 (the general case follows by rescaling
time):

dX/dt = X(1 − X)    (54)

Applying the Euler method with time step ∆t gives

Xn+1 = Xn + Xn (1 − Xn) ∆t
     = Xn (1 + ∆t − Xn ∆t)    (56)

If we define Yn = (∆t / (1 + ∆t)) Xn and r = 1 + ∆t, then we find

Yn+1 = (∆t / (1 + ∆t)) Xn (1 + ∆t − Xn ∆t)
     = Yn (r − rYn)
     = rYn (1 − Yn)    (57)
Figure 13: Numerical solution of the logistic equation obtained for r = 0.5, for various
initial conditions X(0).
The analytical solution is obtained by separation of variables:

X(t) = 1 / ( 1 − e^(−rt) (X0 − 1)/X0 )    (59)

or, equivalently,

X(t) = X0 / ( X0 − e^(−rt) (X0 − 1) )    (60)

Solving eq. (60) for the exponential gives

e^(−rt) = X0 (X(t) − 1) / ( X(t) (X0 − 1) )    (61)
Evaluating the solution at time t + T,

X(t + T) = X0 / ( X0 − e^(−r(t+T)) (X0 − 1) )    (62)

If we substitute for the exponential in eq. (62) using eq. (61) we get, after a little
rearranging,

X(t + T) = X(t) / ( X(t) − e^(−rT) (X(t) − 1) )    (63)
This last equation is a solution map. It lets us calculate X(t + T) knowing only X(t) and
some parameters.
Unlike equation (57), which gives approximations to the solution of the logistic differential
equation at fixed intervals ∆t, equation (63) is exact. We were able to obtain this equation
because we were able to solve the differential equation. In general, of course, we cannot
do that, but we can still obtain numerical representations of the solution map by sampling
the numerical solution (obtained with a good numerical method, of course) at fixed time
intervals and plotting X(t + T) vs X(t). This is sometimes called a Ruelle plot.
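The solution map is easy to check numerically (a Python sketch; function names are ours). Since eq. (63) is the exact flow of the ODE, advancing twice by T must agree with advancing once by 2T, and both must agree with the closed-form solution (60):

```python
import math

def solution_map(x, rT):
    """Exact solution map, eq. (63): advance the logistic ODE by time T (rT = r*T)."""
    e = math.exp(-rT)
    return x / (x - e * (x - 1.0))

def exact(x0, rt):
    """Closed-form solution, eq. (60), at elapsed time t (rt = r*t)."""
    return x0 / (x0 - math.exp(-rt) * (x0 - 1.0))

# Advancing twice by T equals advancing once by 2T (illustrative values):
x0, rT = 0.1, 0.3
a = solution_map(solution_map(x0, rT), rT)
b = solution_map(x0, 2 * rT)
print(a, b, exact(x0, 2 * rT))
```

This composition property (the semigroup property of the flow) is exactly what makes eq. (63) usable as an iteration scheme with no discretization error.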
Generalization of the logistic equation
As for the discrete version, the continuous logistic equation can be generalized:

dX/dt = r X^p (1 − (X/K)^q)    (64)
References
Text books
• Glass L & Mackey MC (1988) From Clocks to Chaos: The Rhythms of Life. Princeton Univ. Press.
Original papers
• May RM. (1975) Biological populations obeying difference equations: stable points,
stable cycles, and chaos. J Theor Biol. 51:511-24.
• May R (1976) Simple mathematical models with very complicated dynamics, Nature
261: 459-467.
• Levin SA, May RM (1976) A note on difference-delay equations. Theor Popul Biol
9:178-87.
More recent papers