
MATH 2405

Mathematical Methods 1 Honours


Part 1: Ordinary Differential Equations

Prof Dayal Wickramasinghe
Mathematics Department
Australian National University
Ordinary Differential
Equations


Prescribed Text:
Elementary Differential Equations and Boundary Value Problems,
Boyce & DiPrima (Wiley)
Part 1: First Order Differential Equations

- Examples of differential equations arising from mathematical modelling
- Classification of differential equations (linear and non-linear)
- Anti-derivative theorem for first order linear differential equations
- Separable equations
- Properties of linear differential equations (LDEs) and the Existence-Uniqueness Theorem for initial value problems
- Examples of the breakdown of existence and/or uniqueness at singular points
- Linear differential operators and continuity sets
- Formal solution to first order LDEs and interpretation in terms of the one-sided Green's function
- The Dirac delta function and its use in constructing the Green's function
- The Heaviside step function and discontinuous forcing terms
- Applications to circuit theory
- Non-linear DEs and the Existence-Uniqueness Theorem
- Special DEs (exact equations, equations homogeneous in y/x, Bernoulli-type equations)
- Autonomous equations and their properties
- Introduction to bifurcation theory
- Systems of LDEs, compartmental models, and solution in simple cases

Part 2: Second and Higher Order Differential Equations

- Existence and uniqueness of solutions for IVPs
- Linear dependence and independence of solutions
- Adjoint of an LDE, integrating factors for 2nd order LDEs
- Self-adjoint operators
- The Wronskian
- Solution of constant coefficient LDEs
- The method of undetermined coefficients and the symbolic D method
- The harmonic oscillator
- Solution of non-constant coefficient LDEs, method of variation of parameters
- Green's function for 2nd order LDEs

Part 3: Power Series Solutions

- Ordinary and singular points
- Solution about an ordinary point
- Solution about regular singular points
- Solution to Bessel's equation
- The gamma function
- Bessel functions of the first and second kind
- Equal roots in the indicial equation
- Roots differing by an integer in the indicial equation
- Legendre's equation and power series solutions

Part 4: Systems of Differential Equations

- Autonomous systems, equilibrium points and cycles
- Phase portraits
- Predator-prey models
- First order linear systems, stability
- Locally linear systems
- Second order linear systems, normal modes of oscillation
- The non-linear pendulum
- Periodic solutions and limit cycles
- The Hopf bifurcation
- The van der Pol equation

Part 5: Boundary Value Problems (Not covered this year but notes are provided for those interested)

- Boundary value problems for 2nd order linear DEs
- Conditions for existence and uniqueness of solutions of homogeneous linear boundary value problems (LBVPs) with homogeneous boundary conditions
- Green's function for inhomogeneous LBVPs
- Properties of self-adjoint BVPs and Sturm-Liouville theory
- Eigenfunction solutions of Sturm-Liouville systems
Ordinary Differential
Equations

Part 1: First order differential equations
Philosophy of the course

This is a course in applied mathematics. The emphasis is on developing methods for solving differential equations that arise in real-life problems. Theorems are stated and plausibility arguments given when possible, but proofs are generally not given in this course. If you are interested in the pure side of mathematics this is NOT the course for you; you will be disappointed. But the Department does offer such courses in later years.

Although the focus is on methods, some applications are presented. In keeping with your textbook, these applications are drawn from the theory of mechanical oscillations (based on Newton's laws of motion, which are assumed knowledge) and from the theory of electrical circuits based on Kirchhoff's laws, which are presented in the course (no prior knowledge is assumed).
Of course there are applications in numerous other areas which we do not have time to cover.
Mathematical Models

[Flowchart: Natural (or other) process → Abstraction for modelling → Mathematical model → Solution to special cases (e.g. linearised equation) → Interpretation of results → Validation. If validation succeeds (YES), the model is ready for general use (prediction); if not (NO), the abstraction is revisited.]

Population Growth

[The same modelling flowchart appears on this slide, applied to population growth.]

Population at time t: N(t) (an observable).
How does N(t) vary with time?
Increase in population between t and t + Δt:
N(t + Δt) − N(t) = kN(t)Δt  (linear approximation)
⇒ [N(t + Δt) − N(t)]/Δt = kN(t)
⇒ dN/dt = kN  (in the limit Δt → 0)
Full solutions are possible:
N(t) = N₀ e^{kt}  ("exponential growth model")
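The growth model above can also be checked numerically. The following is a minimal sketch (not part of the original notes; the values of N₀, k and the step size are arbitrary choices) comparing a forward-Euler integration of dN/dt = kN with the exact exponential solution.

# Hypothetical illustration: forward-Euler integration of dN/dt = k*N
# versus the exact solution N(t) = N0*exp(k*t).  N0, k, dt are assumed values.
import numpy as np

N0, k = 100.0, 0.3            # initial population and growth rate (assumed)
t_end, dt = 10.0, 0.01
steps = int(t_end / dt)

N = N0
for _ in range(steps):        # Euler step: N_{n+1} = N_n + dt*k*N_n
    N += dt * k * N

exact = N0 * np.exp(k * t_end)
print(f"Euler: {N:.2f}, exact: {exact:.2f}, relative error: {abs(N - exact)/exact:.2e}")

Halving dt roughly halves the error, which is the expected first-order behaviour of Euler's method.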
Forced Oscillations of a Spring

** Laws governing the evolution of the natural process (Newton's laws, Hooke's law)
** Mathematical model: driving terms (or source terms) and input (initial or boundary conditions) produce the response (or output).

5 d²y/dt² = 5g − 6y + A cos ωt  (Newton's second law; g = acceleration due to gravity)
5 d²y/dt² + 6y = 5g + A cos ωt  — an example of a differential equation.

[Figure: a mass of weight 5g attached to a spring (unextended spring marked, extension y), with spring tension T = 6y (Hooke's law) and driving force F = A cos ωt: forced oscillation of a mass attached to a spring under gravity.]

An Electrical Circuit

Given: L = inductance, C = capacitance, R = resistance, E(t) = externally applied voltage (driving force).
Find: the charge Q(t) at time t.
Appropriate DE:
L dI/dt + RI + (1/C)Q(t) = E(t),  I = dQ/dt
⇒ L d²Q/dt² + R dQ/dt + (1/C)Q(t) = E(t)
[Figure: series circuit containing R, C, L and the source E(t).]
The structure of the differential equation is similar to that of the spring (mechanical) problem just described, and in this case it is that of a Second Order Linear Ordinary Differential Equation.
Classification of Differential Equations

An ordinary differential equation (ODE) is an equation involving an unknown function y(t) of a single variable t and its ordinary derivatives y′, y″, y‴, ..., y⁽ⁿ⁾. Here t is the independent variable and y(t) the dependent variable.

Order is the order of the highest derivative.
Linear if the equation can be seen to be a linear combination of the unknown function (the dependent variable) and its derivatives.

dy/dt + y = t  ----  first order, linear
y dy/dt + y = t  ---  first order, non-linear
dy/dt + y² = t  ---  first order, non-linear
d²y/dt² + t³y = t²  --  second order, linear

A solution specifies a function φ(t) over an interval I: when the function is substituted in the DE, the equation becomes an identity in the interval I. Of course, the solution must turn out to be sufficiently differentiable to enable the terms in the differential equation to be evaluated. You may be given an interval I and be asked to find solutions, or be asked to find all intervals over which a solution exists.
General solution is the set of all solutions.

Examples of Solutions

Example 1: Consider the differential equation
y″ + y = 2e^{−t}
y = e^{−t} + sin t (for all t) is a solution because substitution gives
LHS = e^{−t} − sin t + e^{−t} + sin t = 2e^{−t} = RHS for all t. This is a solution for all t.
This is guesswork, and there may be other solutions.
Note that this is a constant coefficient second order linear DE. There is a theory for this type of equation which enables a general solution to be found (see later) without guesswork. Recognising the type of equation is often the key to finding a solution, since standard solution methods exist for certain types of equations.
Example 2: Sometimes solutions cannot be found in terms of standard functions, but it may be possible to develop a series solution with an infinite number of terms convergent in some interval (we cover this method later in the course). But here is an example.

J₀(x) = 1 − x²/2² + x⁴/(2²·4²) − x⁶/(2²·4²·6²) + ......  (convergent for all x)
is a solution of
(x y′)′ + x y = 0  (Bessel's equation of order zero; x is the independent variable)

We can verify that this is a solution by direct substitution as follows:
Now J₀′(x) = −x/2 + x³/(2²·4) − x⁵/(2²·4²·6) + ......
⇒ d/dx [x J₀′(x)] = d/dx [−x²/2 + x⁴/(2²·4) − x⁶/(2²·4²·6) + ...]
= −x + x³/2² − x⁵/(2²·4²) + ...
= −x [1 − x²/2² + x⁴/(2²·4²) − x⁶/(2²·4²·6²) + ..] = −x J₀(x)
⇒ (x J₀′(x))′ + x J₀(x) = 0  (all x)
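The series verification above can also be done numerically. The sketch below (my own illustration, not from the notes; the truncation order and grid are arbitrary choices) sums the first terms of the series, compares them with scipy's built-in J₀, and checks that the residual of (x y′)′ + x y is small when the derivatives are approximated by finite differences.

# Sketch: truncated series for J0 vs scipy.special.j0, and a finite-difference
# check of the Bessel equation residual.  K and the grid are assumed choices.
import numpy as np
from scipy.special import j0
from math import factorial

x = np.linspace(0.01, 10.0, 2000)
K = 30                                    # number of series terms (assumed)
series = sum((-1)**k * x**(2*k) / (2**(2*k) * factorial(k)**2) for k in range(K))

print("max |series - scipy j0|:", np.max(np.abs(series - j0(x))))

yp = np.gradient(series, x)               # approximate y'
residual = np.gradient(x * yp, x) + x * series   # (x*y')' + x*y
print("max residual away from the endpoints:", np.max(np.abs(residual[10:-10])))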
Antiderivative Theorem (for linear 1st order DEs)

Suppose that F(t) is an antiderivative of a continuous function f(t) over some open t interval I. Then all solutions of the ODE
y′ = f(t)
in that interval are given by
y = F(t) + C, where C is any constant.

Proof:
Suppose t₀ is any point in the interval, and y(t) is any solution of y′ = f(t). Integrating both sides of the DE from t₀ to t:
∫_{t₀}^{t} y′(s) ds = ∫_{t₀}^{t} F′(s) ds
y(t) − y(t₀) = F(t) − F(t₀)  ......(from the fundamental theorem of calculus)
y(t) = y(t₀) + F(t) − F(t₀) = y₀ + F(t) − F(t₀) = F(t) + C
for all t in the interval. This is the unique solution to the initial value problem y(t₀) = y₀. It is the solution that passes through (t₀, y₀) in the t-y plane.
Conversely, if we set
y(t) = F(t) + C
then
y′ = F′(t) = f(t)
since F(t) is an antiderivative of f(t), and so the differential equation is satisfied.

This theorem sets out the basic method for finding solutions for first order differential equations of this particular type:
- just antidifferentiate (or integrate) both sides of the differential equation!

As we shall see, this simple idea does not enable solutions to be found for more general first order DEs (e.g. y′ = f(t, y)), nor is it clear whether a unique solution need exist for an initial value problem in these cases.
A point of notation

Note that the definite integral in the previous proof was written as ∫_{t₀}^{t} y′(s) ds and not as ∫_{t₀}^{t} y′(t) dt.
Here s is a running (or "dummy") variable which varies from t₀ (some starting value) to an end value t, which is the point at which we want to evaluate the solution of a DE. Sometimes the latter notation is used for convenience, but the meaning should be as above and clear.
With this notation, the interpretation of an integral as a Riemann sum is explicit (see insert):
∫_a^b h(x) dx = lim_{Δsₙ→0, N→∞} Σ_{n=1}^{N} h(sₙ) Δsₙ
[Figure: the graph of h(x) on [a, b], with a strip of width Δsₙ at x = sₙ.]
Separable Equations

There is one case where the solution of a first order DE is immediate: equations of the separable type (which could be non-linear) — a full solution follows.

g(y) dy/dt = f(t)
∫ g(y) (dy/dt) dt = ∫ f(t) dt  .......... (1)
Suppose G(y) and F(t) are antiderivatives of g(y) and f(t) respectively:
g(y) = dG(y)/dy,  f(t) = dF(t)/dt
Now g(y) dy/dt = (dG(y)/dy)(dy/dt) = dG(y(t))/dt  (chain rule)
(1) ⇒ ∫ [dG(y(t))/dt] dt = ∫ [dF/dt] dt
⇒ G(y(t)) = F(t) + const

A more direct approach (once you know what you are doing!)
g(y) dy/dt = f(t)
⇒ g(y) dy = f(t) dt  (separation of variables)
Now integrate both sides of the DE:
G(y) = F(t) + const  ---- (2)
G(y(t)) = F(t) + const
All solutions y = φ(t) of the DE (1) are incorporated in the solutions for y(t) of the implicit equation (2).
Note:
It may not be possible to find analytical forms for these. If you can solve the equation (2) for y(t), you will need to establish that y′(t) exists in some range of t to identify it as a solution φ(t) of the DE.

Historical note

The most general form of a first order DE is
N(t, y) dy/dt + M(t, y) = 0
Historically, this equation was written in differential form
M(t, y) dt + N(t, y) dy = 0
- hence the name differential equation rather than derivative equation.
We have already seen in our treatment of separable equations that the differential formulation can be more convenient for finding solutions.
Example (separable equation)

Find a general solution of
e^y dy/dt − t − t³ = 0
Solution
e^y dy = (t + t³) dt  ---separation of variables
e^y = t²/2 + t⁴/4 + C
y(t) = ln(t²/2 + t⁴/4 + C)
is the general solution.
Note: The general solution has one arbitrary constant (a property of first order equations) and yields a family of solution curves in the (y, t) plane labelled by the value of the constant C. Three such solutions are shown in the diagram.

Example

y dy/dt = −t  ------ (1)
y dy = −t dt
Integrating both sides:
y²/2 = −t²/2 + C
⇒ y² + t² = D  (a constant)  ------------(2)
Here y(t) is defined "implicitly" by (2) and the solution curves are circles.
Solving for y(t) we get the following two functions as solutions:
y₁(t) = +√(D − t²)  (D > 0)
y₂(t) = −√(D − t²)  (D > 0)
[Figure: circular solution curves in the (t, y) plane.]
The graphs of these solutions are the solution curves and form two families of semi-circles which fill the upper (y > 0) and lower (y < 0) half planes respectively.
Note that on the t axis, y = 0 and the DE (1) implies t = 0. Hence there are no solutions that cross the t axis except possibly at t = 0.
The full circles are not solutions of the DE! Only parts of the solution curves solve the DE.
In fact,
dy₁/dt = −t/√(D − t²)
blows up as t approaches ±√D.
Initial Value Problems (IVPs)

We will be dealing mostly with initial value problems.
Example: t dy/dt + 5y = 6t,  y(1) = 5
Example: y dy/dt = t,  y(1) = 5
— find the solution/solutions y(t) passing through t = 1 with y(1) = 5. These are called "initial value problems" (IVPs). There is only one initial condition for a first order DE.
In some physical contexts, one may be interested only in solutions for t ≥ 1 (e.g. if t is time).
[Figure: schematic solution curve through the initial point in the t-y plane — not an actual solution!]

Here is an example of an IVP for a second order DE:
t d²y/dt² + 3 dy/dt − 10y = 2
y(1) = 1,  y′(1) = 10
— find the solution/solutions y(t) passing through t = 1 with y(1) = 1 and derivative y′(1) = 10. There are two initial conditions for a second order IVP.
[Figure: schematic solution curve through (1, 1) with slope y′(1) = 10 — not an actual solution!]
Example (initial value problem)

Find the solution/solutions of the IVP
e^y dy/dt − t − t³ = 0,  y(1) = 1
Solution
We had as a general solution
y(t) = ln(t²/2 + t⁴/4 + C)
y(1) = 1 ⇒ 1 = ln(3/4 + C) ⇒ C = e − 3/4
⇒ the solution to the IVP is
y(t) = ln(t²/2 + t⁴/4 + e − 3/4)
This is in fact the only solution (see later for the existence-uniqueness theorem for this type of non-linear DE).
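As a check on the formula just obtained, one can integrate the IVP numerically and compare with the analytic result. This is a sketch only (the integration range and tolerances are arbitrary choices, not from the notes).

# Sketch: numerical solution of e^y dy/dt - t - t^3 = 0, y(1) = 1,
# compared with y(t) = ln(t^2/2 + t^4/4 + e - 3/4).
import numpy as np
from scipy.integrate import solve_ivp

rhs = lambda t, y: (t + t**3) * np.exp(-y)     # dy/dt = (t + t^3) e^{-y}
sol = solve_ivp(rhs, (1.0, 3.0), [1.0], dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(1.0, 3.0, 5)
exact = np.log(t**2/2 + t**4/4 + np.e - 0.75)
print("max |numeric - analytic|:", np.max(np.abs(sol.sol(t)[0] - exact)))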
We need theory to answer the following:

(i) For prescribed initial values (IVs) ----- One solution? Several solutions? No solution?
(ii) For prescribed boundary values (BVs) ----- One solution? Several solutions? No solution?

The enumeration of all solutions over some interval requires the development of various techniques (in addition to separation of variables) and methodologies, the most important of which will be covered in the rest of this course.
It is sometimes possible to look at the form of the equation, and the properties of the coefficients that multiply the various derivatives, and decide whether the DE has a solution. The theorems that make this possible are known as Existence-Uniqueness Theorems (EUTs).
If EU is guaranteed, then one can use numerical techniques to search for approximate solutions with confidence, particularly when standard techniques do not yield solutions in terms of known analytic functions.

Operator formulation of ODEs

Differential equations can also be looked upon in operator terms. Operators act on continuity sets:
C⁰(I) is the set of all continuous functions on I.
C¹(I) is the set of all functions in C⁰(I) with continuous derivatives in I.
C²(I) is the set of all functions in C¹(I) with continuous second derivatives in I.
⇒ Cⁿ(I) is contained in Cᵐ(I) if m ≤ n.
[Figure: C¹(I) drawn as a subset of C⁰(I).]

Definitions: An operator L acts on any element y in its domain and produces an element L[y] in its codomain. An operator is fully defined when its domain and codomain are given and its action is known. The range of an operator is the set of all elements in the codomain that are actually produced by the operator acting on its domain.
Note that while the domain and range of an operator are well defined, the codomain can be any set that contains the range.
[Figure: y in the domain is mapped to L[y]; the range sits inside the codomain.]
Example: Suppose an operator L has the action
y(t) → L[y] = dy/dt
This defines L[y] as a function in C⁰(I) for any y in C¹(I).
L : C¹(I) → C⁰(I)
and the domain and codomain of L are C¹(I) and C⁰(I) respectively.
[Figure: y in the domain C¹(I) is mapped to L[y] in the codomain C⁰(I); the range sits inside the codomain.]

Two Important Observations

Differentiation can make a function less smooth.
Integration (solving a DE) will in general make a function more smooth.
L = d/dt
We may therefore expect the first order differential equation
L[y] = g(t)  (t ∈ I)
with g(t) in C⁰(I) to have a solution y(t) in C¹(I).
[Figure: L = d/dt maps C¹ into C⁰; integration maps back.]
Example: y′ = −t (t < 0), y′ = t (t ≥ 0) has
y = −t²/2 (t < 0), y = t²/2 (t ≥ 0)
as a possible solution.
Linear Operators

Definition
An operator L which assigns functions to functions and satisfies
(a) L[cy] = cL[y]
(b) L[y₁ + y₂] = L[y₁] + L[y₂]
is called a linear operator. All other operators are non-linear. (Here, we have implicitly assumed that the domain and codomain are linear spaces.)
Example: L[y] = −t[y′(t)]⁴ is non-linear because if we take y = t,
L[t] = −t
L[ct] = −t c⁴ ≠ cL[t]

D, D², ..., Dⁿ as Linear Differential Operators

The operators D, D², ... are defined to have the following actions when they act on a function f(t) (assumed to be sufficiently differentiable):
D[f(t)] = df/dt,  D²[f(t)] = d²f/dt², ..........
D operates on an element y(t) in C¹(I) and produces the element y′(t) in C⁰(I).
D : C¹(I) → C⁰(I)
D² : C²(I) → C⁰(I)
Now, to prove that an operator L is linear, we need to show that given any constants α and β, and any two functions φ(t) and ψ(t) on which L operates,
L[αφ(t) + βψ(t)] = αL[φ(t)] + βL[ψ(t)]  (from the definition of linearity)
D is a linear differential operator because
D[αφ(t) + βψ(t)] = d/dt (αφ(t) + βψ(t))  (by definition)
= α dφ(t)/dt + β dψ(t)/dt
= αD[φ(t)] + βD[ψ(t)]
Similarly
D²[αφ + βψ] = αD²[φ] + βD²[ψ]  (check)
...................................................
Dⁿ[αφ + βψ] = αDⁿ[φ] + βDⁿ[ψ]
Thus
D, D², ..., Dⁿ are all linear differential operators.

Product of Operators (definition):
If L and M are two operators, we define their product LM by the following action:
(LM)[f(t)] = L[M[f(t)]]
provided this is permitted. That is, for any f in the domain of M, M[f] lies in the domain of L.
In general, LM ≠ ML.
With the above definition, we can now look at D² as the product DD because
(DD)[f(t)] = D[D[f(t)]] = D[df/dt] = d²f/dt²
as before.
Likewise, DD....D = Dⁿ.

Thus, when suitably interpreted, we can even write
D³ = d³/dt³ = (d/dt)·(d/dt)·(d/dt) = (d/dt)³ ..........!!!!
It is sometimes possible to factorise differential operators in the above sense, and it may then be possible to use the factorisation to reduce the order of the differential equation and hence find a solution.
Sum of Operators (definition)

We can linearly combine D, D², ... by multiplying (on the left) by continuous functions a₀(t), a₁(t), ... of the independent variable t to form the sum
L = a₀(t) + a₁(t)D + a₂(t)D² + ...
with the action
L[f(t)] = (a₀(t) + a₁(t)D + a₂(t)D² + ...)[f(t)]
= a₀(t)f(t) + a₁(t)D[f(t)] + a₂(t)D²[f(t)] + ...  (the distributive law for addition)
where we have assumed that f(t) is sufficiently differentiable. It is easy to show that L is also a linear operator.

Exercise: With the above definition, show that
L[αφ(t) + βψ(t)] = αL[φ(t)] + βL[ψ(t)]
and hence that L is a linear operator.
Example:
Show that the differential operators tD and (1 − D) do not commute. That is, (tD)(1 − D) ≠ (1 − D)(tD).
Now
((tD)(1 − D))[f(t)] = (tD)[(1 − D)[f]]
= (tD)[f − D[f]]
= tD[f] − tD[D[f]]  (sum rule)
= tD[f] − tD²[f]
= (tD − tD²)[f]
⇒ (tD)(1 − D) = tD − tD²
((1 − D)(tD))[f(t)] = (1 − D)[(tD)[f]] = (1 − D)[tD[f]]
= tD[f] − D[tD[f]]  (sum rule)
= tD[f] − D[f] − tD²[f]  (product rule)
= (tD − D − tD²)[f]
⇒ (1 − D)(tD) = tD − D − tD² ≠ tD − tD²
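The non-commutativity can also be confirmed symbolically. A short sketch of my own (using sympy, not part of the notes) applies both orderings to a generic f(t); the difference that remains is exactly D[f], in agreement with the calculation above.

# Sketch: (tD)(1-D)[f] and (1-D)(tD)[f] differ by D[f].
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)
D = lambda u: sp.diff(u, t)                 # the operator D = d/dt

tD_then_1mD = t * D(f - D(f))               # (tD)(1 - D)[f]
oneMD_then_tD = (t * D(f)) - D(t * D(f))    # (1 - D)(tD)[f]
print(sp.simplify(tD_then_1mD - oneMD_then_tD))   # prints Derivative(f(t), t)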
Linear DEs (nth order)

Consider the operator
L = aₙ(t)Dⁿ + aₙ₋₁(t)Dⁿ⁻¹ + ... + a₁(t)D + a₀(t)
where a₀(t), ..., aₙ(t) are continuous on some interval I. As we have shown, L is linear.
The most general form of an nth order linear differential equation is
L[y] = b(t)
with b(t) in C⁰(I).
The domain and codomain of L are the continuity sets Cⁿ(I) and C⁰(I):
L : Cⁿ(I) → C⁰(I)
Written out fully, this DE looks like
aₙ(t) dⁿy/dtⁿ + aₙ₋₁(t) dⁿ⁻¹y/dtⁿ⁻¹ + ... + a₁(t) dy/dt + a₀(t)y = b(t)
If b(t) = 0, the LDE is homogeneous.
If b(t) ≠ 0, the LDE is inhomogeneous.
Linear DEs have very special properties (see later).

We begin by considering first order differential equations of the linear type, for which solutions expressible as an integral can always be obtained by simple means.
We will then consider non-linear first order DEs, for which such solutions are not always possible.
This material is covered in Chapter 2, sections 2.1-2.8 (excluding section 2.7) of BD. The interpretation of solutions in terms of Green's functions is not given in BD (but see section 11.3).
First order linear DEs - the standard form

A linear differential equation is said to be in standard form if the coefficient of the highest derivative is unity.
The 1st order linear DE
a(t)y′ + b(t)y + c(t) = 0
can be converted to the standard (normalised) form by dividing through by a(t):
dy/dt + p(t)y = g(t)
p(t) = b(t)/a(t),  g(t) = −c(t)/a(t)  (a(t) ≠ 0)
(Here g(t) plays the role of the "forcing" term that occurs in 2nd order DEs for mechanical systems.)

The singular points of the DE in standard form are defined as the points at which p(t) and/or g(t) fail to be continuous, and it is only at these points that solutions may fail to exist or be unique (see Theorem 2.4.1 next). Exactly what happens to the solution at these singular points depends on the nature of the singularities of p(t) and g(t).
Thus compare
p(t) = t/t, 1/t and 1/t² at t = 0.
These range from no real singularity, through a milder singularity, to a "bad" singularity.

In the following three examples of linear DEs, there is a singular point at t = 0. The singularity may be removable, as in
(a) t y′ = t ⇒ y′ = t/t  (t ≠ 0)  ...standard form
For t ≠ 0, y′ = 1
⇒ y = ∫ dt = t + C  (t ≠ 0)  -----(1)
Now it turns out that this solution is continuous ("well behaved") at t = 0.
Furthermore, if we substitute back into the original DE,
t(t + C)′ = t
so the solution (1) satisfies the original DE for all t (and belongs to C¹) even though t = 0 is a singular point.
This singularity is "removable" (we could have simply cancelled t and solved the DE!)
[Figure (a): the family of straight-line solutions y = t + C.]
or the solution may not exist there, as in
(b) t y′ = 1
⇒ y′ = 1/t  (t ≠ 0)  ...standard form
t = 0 is a singular point. For t ≠ 0,
y = ∫ (1/t) dt = ln|t| + C  (t ≠ 0)
Now the solution itself blows up at t = 0 (not well behaved!). So the solution does not exist there.
Solutions are
y = ∫ (1/t) dt = ln|t| + D on (0, ∞)
y = ∫ (1/t) dt = ln|t| + E on (−∞, 0)
The solutions belong to C¹ on each interval separately.
[Figure (b): solution curves diverging on either side of t = 0.]

Or the solution may not be unique, as in
(c) t y′ = 2y
t dy/dt = 2y ⇒ dy/dt = (2/t)y  (t ≠ 0)  ----standard form
dy/y = 2 dt/t  (t ≠ 0, y ≠ 0)  -------separation of variables
We first look for solutions for t > 0 and t < 0 separately:
ln|y| = 2 ln|t| + D
|y| = e^D t²
y = Ft²  (t > 0)  (F positive or negative)
y = Gt²  (t < 0)  (G positive or negative)
Now these solutions are well behaved as we approach zero, and in fact
y = Ft²  (t ≥ 0)  (F positive or negative)
y = Gt²  (t < 0)  (G positive or negative)
solves the DE for all t (including t = 0!) and belongs to C¹.
[Figure: parabolic solution curves meeting at the origin.]
Loss of Solutions

Note that the manipulations involved in solving a DE may lead to a "loss" of solutions unless care is exercised. Thus we required y ≠ 0 when we separated variables in the above example.
But we note that
y = 0 (all t)
⇒ y′ = 0 (all t)
⇒ t y′ = 2y (all t)
So y = 0 is also a solution of the original DE for all t.
(Note: in this case we recover this solution by setting F = G = 0 in a previous solution, but this may not always happen.)

We can construct a general solution as follows:
General solution: y = At²  (t ≥ 0)
y = Bt²  (t < 0)
y = 0  (all t)
........ bits of parabolae joined at the origin will give solutions to the original DE for all t. (The solutions belong to C¹.)
The following is also a solution belonging to C¹:
y = −10t²  (t ≥ 0)
y = 0  (t < 0)
[Figures: piecewise-parabolic solution curves joined at the origin.]

Existence: Note that there are no solutions through (0, y₀) if y₀ is not zero. Every other point has at least one solution passing through it.

Uniqueness: Note that a solution has a unique continuation in any interval which does not contain t = 0. Solutions fail to be unique as they pass through the origin.
[Figure: several solution curves passing through a point (t₀, y₀).]
Note: The manner in which a problem is formulated could be important in how one sets about determining the solutions.
Consider the two problems
(a) t y′ + y = t²
(b) y′ + (1/t)y = t
In problem (a) one should look for solutions over all possible intervals I.
In problem (b) one should look, in the first instance, for solutions for t > 0 and t < 0 separately, since these are the regions in which the coefficients are continuous. Under suitable conditions it may be possible to match the solutions across t = 0 and obtain a function that solves the DE over an interval containing t = 0. The answer should give all solutions over all possible intervals.
Of course, the solution method may lead us to consider form (b) in any case, which then flags a possible problem at t = 0 which may not have been apparent in form (a).

For a first order LDE in standard form and with continuous coefficients, the solutions will also be in C¹.
We will see later that if the forcing term has a simple step function discontinuity, solutions may exist across the discontinuity that belong not to C¹ but to C⁰.
In the case of a more serious delta function type discontinuity of the forcing term, it may even be possible to have discontinuous "solutions" to an LDE!! (more later)
Now for the theorem which applies to the "easy" case of continuous coefficients and forcing terms.

Existence and Uniqueness for First Order DEs
(Fundamental Theorem for Linear ODE - IVP)  — Theorem 2.4.1 (B&D)

L[y] = y′ + p(t)y = g(t)
y(t₀) = y₀
If p(t) and g(t) are continuous in an open interval I: (a, b) containing t = t₀, then there exists a unique function y = f(t) passing through (t₀, y₀) that satisfies the DE for all t in I. That is,
L[f] = g(t) for all t in I
f(t₀) = y₀
EU is guaranteed for any y₀.
The local solutions of linear equations can always be extended (uniquely) to the left and right until p or g fail to be continuous. Linear equations can fail to have solutions, or lose the uniqueness of their solutions, only at points of discontinuity of p or g (the "singular" points of the DE).

Note
(1) The unique solution is defined over the entire interval I — that is, the solutions do not "escape to infinity" in I.
(2) The theorem requires the DE to be in normalised form.
Use of direction fields (linear or non-linear DEs)

dy/dt = f(t, y)
- The above defines a direction at each point (t, y) at which f is meaningful.
- The sum total of these directions defines a direction field.
- A solution y(t) is a smooth function. Geometrically, the graph of a solution has a tangent at each point which agrees with the direction field there.
Example: dy/dt = t
At (t, y) = (0, 0), f = 0; at (1, 0), f = 1; at (2, 0), f = 2.
[Figure: direction field for dy/dt = t in the t-y plane.]

We can sketch a direction field and visualize the behavior of solutions for a first order DE. This has the advantage of being a relatively simple process, even for complicated equations. However, direction fields do not lend themselves to quantitative computations or comparisons.
Here is another example of a direction field for a certain DE, and a solution curve:
dy/dt = f(t, y) = 2y/(t² + 1)

Practical approach to solving a problem

- Sketch direction fields. Useful only as an indicator of the nature of solutions.
- Locate points at which existence/uniqueness fails — you need to be guided by theory (to be stated later for non-linear cases).
- Use analysis to obtain formulae in regions in which existence/uniqueness holds. (If analytical methods don't exist, solve numerically.)
- Fit together solution segments to obtain a general solution.
Solutions of First Order Linear DEs

The constant coefficient linear homogeneous equation
L[y] = y′(t) + a y(t) = 0  (p(t) = a, a constant)
We observe that if we multiply both sides by the factor e^{at},
y′ e^{at} + a y e^{at} = 0,
the LHS becomes a perfect derivative. Such a factor is called an "integrating factor". (Note that any constant multiple of e^{at} is also an integrating factor.)
d/dt (e^{at} y) = 0
e^{at} y = C
y = C e^{−at}
Since the exponential is never zero, this process is valid for all t, and we have the general solution (that is, the integrating factor has not generated "spurious" solutions).

Note: there is one arbitrary constant — this is a general property of all first order linear DEs. We say that the solution space (the null space of L) has dimension 1, and is spanned by the function e^{−at}. The general solution gives a set of curves y = C e^{−at}.
The solution to the IVP
y(t₀) = y₀
requires
y₀ = C e^{−at₀} ⇒ C = y₀ e^{at₀}
Thus through any point (t₀, y₀) there is a unique solution
y = y₀ e^{−a(t−t₀)}
This is because the coefficients of the DE, when put in "standard form", are non-singular.
[Figure: L maps the domain C¹(I) to the codomain C⁰(I).]
The constant coefficient linear inhomogeneous equation

L[y] = y′(t) + a y(t) = g(t)
This has the same integrating factor, since
e^{at} y′(t) + e^{at} a y(t) = e^{at} g(t)
leads to the LHS becoming a perfect derivative:
d/dt [e^{at} y(t)] = e^{at} g(t)
d[e^{at} y(t)] = e^{at} g(t) dt
Now relabel the running variable as τ and integrate both sides between t₀ (initial) and t (final time):
e^{at} y(t) − e^{at₀} y(t₀) = ∫_{t₀}^{t} e^{aτ} g(τ) dτ
y(t) = y₀ e^{−a(t−t₀)} + ∫_{t₀}^{t} e^{−a(t−τ)} g(τ) dτ
This is the formal solution of the DE satisfying the IVP y = y₀ at t = t₀. (Here g is the driving or forcing term.)

y′(t) + a y(t) = g(t)
y(t₀) = y₀  ----- initial condition
Solution
y(t) = y₀ e^{−a(t−t₀)} + ∫_{t₀}^{t} e^{−a(t−τ)} g(τ) dτ
The first term is the response to the initial data — it vanishes for large t if a > 0. The second term is the response to the forcing (driving) term for zero initial conditions.
If the driving term is a constant g:
y(t) = y₀ e^{−a(t−t₀)} + ∫_{t₀}^{t} e^{−a(t−τ)} g dτ  (τ is the integration variable, t is a "constant")
= y₀ e^{−a(t−t₀)} − g ∫_{t₀}^{t} e^{−a(t−τ)} d(t − τ)  (equivalent to setting u = t − τ)
= y₀ e^{−a(t−t₀)} + (g/a) [e^{−a(t−τ)}]_{τ=t₀}^{τ=t}
= y₀ e^{−a(t−t₀)} + (g/a) [1 − e^{−a(t−t₀)}]
The first term vanishes for large t if a > 0.
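A quick numerical sanity check of this closed form (a sketch with assumed values of a, g and y₀, not from the notes): integrate y′ + ay = g directly and compare.

# Sketch: closed-form solution for constant forcing vs a direct numerical solve.
import numpy as np
from scipy.integrate import solve_ivp

a, g, y0, t0 = 2.0, 3.0, 1.0, 0.0            # assumed values
sol = solve_ivp(lambda t, y: -a*y + g, (t0, 5.0), [y0],
                dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(t0, 5.0, 6)
closed = y0*np.exp(-a*(t - t0)) + (g/a)*(1 - np.exp(-a*(t - t0)))
print("max |numeric - closed form|:", np.max(np.abs(sol.sol(t)[0] - closed)))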
Mathematical Structure of Solution

dy/dt + ay = g(t)  ⇔  L[y] = (D + a)y = g(t)

y(t) = A e^{−at} + e^{−at} ∫_{t₀}^{t} e^{aτ} g(τ) dτ = y_C(t) + y_PI(t)

(1) L[A e^{−at}] = d/dt (A e^{−at}) + a A e^{−at} = 0
⇒ L[y_C(t)] = 0 ................. y_C(t) satisfies the homogeneous equation (it spans the "null space" of L). This is the complementary function.

(2) L[y_PI(t)] = d/dt {e^{−at} ∫_{t₀}^{t} e^{aτ} g(τ) dτ} + a e^{−at} ∫_{t₀}^{t} e^{aτ} g(τ) dτ
= −a e^{−at} ∫_{t₀}^{t} e^{aτ} g(τ) dτ + e^{−at} e^{at} g(t) + a e^{−at} ∫_{t₀}^{t} e^{aτ} g(τ) dτ
⇒ L[y_PI(t)] = g(t) ............. y_PI(t) satisfies the inhomogeneous equation. This is the particular integral.
We have just seen that the solution to the linear IVP
L[y] = (D + a)y = g(t),  y(t₀) = y₀
is
y(t) = y₀ e^{−a(t−t₀)} + ∫_{t₀}^{t} e^{−a(t−τ)} g(τ) dτ = y_c(t) + y_PI(t)
where
L[y_c] = 0
and
L[y_PI] = g(t)
We shall see later in the course that this mathematical structure also holds for higher order linear DEs.
Note that y_PI(t₀) = 0, so the PI on its own does not satisfy the initial condition, even though it satisfies the DE. On the other hand, y_c(t) does not satisfy the (inhomogeneous) DE, but since y_c(t₀) = y₀, it satisfies the initial condition!

Parallels with Systems of Linear Algebraic Equations

Matrix equation:
Ax = b  (A an n×n matrix)
If the associated homogeneous equation
Ax = 0
has a non-trivial solution x_H (that is, if det A = 0),
Ax_H = 0,
then the non-homogeneous equation has the solution structure
x = x_H + x_P
where
Ax_P = b  (a particular solution)
This follows since
Ax = Ax_H + Ax_P = Ax_P = b
First Order Linear DEs (general case)

The non-constant coefficient linear non-homogeneous equation
L[y] = dy/dt + p(t)y = g(t)
The trick is to introduce an integrating factor μ(t) such that the left hand side of the new equation is a perfect derivative:
μ(t) dy/dt + μ(t)p(t)y = μ(t)g(t)
[d/dt (μ(t)y) − (dμ(t)/dt) y] + μ(t)p(t)y = μ(t)g(t)
d/dt (μ(t)y) + [μ(t)p(t) − dμ(t)/dt] y = μ(t)g(t)
Choose μ(t) so that the bracketed term vanishes:
dμ(t)/dt = p(t)μ(t) ⇒ (1/μ(t)) dμ(t)/dt = p(t) ⇒ d/dt (ln μ(t)) = p(t)
⇒ ln μ(t) = ∫ p(t) dt + E ⇒ μ(t) = C e^{∫p(t)dt}
Since we are after any integrating factor, we can take C = 1.

Integrating factor: μ(t) = exp(∫ p(t) dt)
The original DE is then
d/dt [μ(t)y(t)] = μ(t)g(t)
μ(t)y(t) = ∫ μ(t)g(t) dt + C
y(t) = (1/μ(t)) ∫ μ(t)g(t) dt + A/μ(t)
Full solution = particular integral + complementary function.
Note: The complementary function is determined by the coefficient p(t) in the linear operator, independent of the forcing term, and the general solution has one arbitrary constant.
Example: Consider the following linear DE.
t dy/dt − y = t² cos t
dy/dt − (1/t)y = t cos t  (t ≠ 0)  ....  t = 0 is a singular point.
Here p(t) = −1/t, g(t) = t cos t.
Integrating factor (IF): e^{∫p(t)dt} = e^{∫(−1/t)dt} = e^{−ln|t|} = 1/|t|
For t > 0 the integrating factor is 1/t, and for t < 0 it is −1/t. In either case,
(1/t) dy/dt − (1/t)(1/t) y = (1/t) t cos t
d/dt [(1/t) y] = cos t
y/t = sin t + C  (t ≠ 0)

Consider the two IVPs
(a) y(0) = 0
y = t sin t + Ct  (any C)
so there are infinitely many solutions satisfying this IVP.
(b) y(0) = 1
⇒ 1 = 0
There is no solution satisfying this IVP! This is because t = 0 is a singular point and EU may (and does) fail there.
y = t sin t + Ct  (0 < t < ∞)
y = t sin t + Dt  (−∞ < t < 0)
are solutions in the given intervals.
The solution
y = t sin t + Et  (−∞ < t < +∞)
solves the DE for all t, as can be checked by direct substitution in t y′ − y = t² cos t:
t d/dt (t sin t + Et) − (t sin t + Et) = t² cos t
This solution is continuous through t = 0 and in fact belongs to C¹ on −∞ < t < +∞.
Another Example (BD, p. 37)

Example: Solve the IVP
x y′ + 2y = 4x²,  y(1) = 2
For x ≠ 0,  y′ + (2/x)y = 4x
Before solving the problem we can be certain that EU can only fail at x = 0. So a local solution satisfying y(1) = 2 can be extended at least to (0, ∞).
For x > 0,
Integrating factor: e^{∫(2/x)dx} = e^{2 ln x} = x²
x² y′ + 2xy = 4x³
(x² y)′ = 4x³
x² y = x⁴ + C
= x⁴ + 1  (satisfying the initial conditions x = 1, y = 2)
y = x² + 1/x²  (x > 0)
NOTE: The curve with C = 1 on the left side of the y axis is NOT part of the solution to the IVP!
[Figure: solution curves for C = 1, 0, −1 on both sides of the y axis.]
Earlier we had
d/dt (μ(t)y) = μ(t)g(t),  where μ(t) = e^{∫p(t)dt}
Integrating both sides from t₀ (starting point) to a general time t:
μ(t)y(t) − μ(t₀)y₀ = ∫_{t₀}^{t} μ(τ)g(τ) dτ
y(t) = [μ(t₀)/μ(t)] y₀ + (1/μ(t)) ∫_{t₀}^{t} μ(τ)g(τ) dτ
= [μ(t₀)/μ(t)] y₀ + ∫_{t₀}^{t} [μ(τ)/μ(t)] g(τ) dτ
Now for any t₁ and t₂,
μ(t₁)/μ(t₂) = e^{−∫_{t₁}^{t₂} p(s)ds}  (using the solution for μ)
Hence
y(t) = y₀ e^{−∫_{t₀}^{t} p(s)ds} + ∫_{t₀}^{t} g(τ) e^{−∫_{τ}^{t} p(s)ds} dτ
General solution to the first order LDE with y(t₀) = y₀.
INITIAL VALUE PROBLEM (1st order linear DE)

L = d/dt + p(t)  (1st order linear operator)
L[y] = g(t)  (1st order linear DE)
y(t₀) = y₀  (initial value)

FULL SOLUTION (explicitly given)

y(t) = y₀ e^{−∫_{t₀}^{t} p(s)ds} + ∫_{t₀}^{t} g(τ) e^{−∫_{τ}^{t} p(s)ds} dτ
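The explicit solution can be turned into a small computation. The sketch below is my own illustration (p, g, the interval and the tolerances are arbitrary choices, not from the notes): it evaluates the formula by numerical quadrature and compares the result with a direct numerical solution of the IVP.

# Sketch: evaluate y(t) = y0*exp(-int_{t0}^{t} p) + int_{t0}^{t} g(tau)*exp(-int_{tau}^{t} p) dtau
# by quadrature and compare with solve_ivp.
import numpy as np
from scipy.integrate import quad, solve_ivp

p = lambda t: 1.0 + 0.5*np.sin(t)        # assumed coefficient p(t)
g = lambda t: np.cos(t)                   # assumed forcing term g(t)
t0, y0, T = 0.0, 2.0, 4.0

P = lambda a, b: quad(p, a, b)[0]         # int_a^b p(s) ds

def y_formula(t):
    hom = y0 * np.exp(-P(t0, t))                                    # response to initial data
    part = quad(lambda tau: g(tau) * np.exp(-P(tau, t)), t0, t)[0]  # response to forcing
    return hom + part

sol = solve_ivp(lambda t, y: -p(t)*y + g(t), (t0, T), [y0], rtol=1e-10, atol=1e-12)
print(y_formula(T), sol.y[0, -1])         # the two values should agree closely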
Discontinuous Forcing Terms

The Existence-Uniqueness Theorem that we presented and discussed for the problem
y′ + p(t)y = g(t)
requires the coefficients p(t) and g(t) to be continuous functions over the interval I on which a solution exists.
We now consider two cases where the forcing term is not continuous and is given by
- a delta function
- a step function
As we shall demonstrate, by a careful extension of the methods that we have already developed it is possible to find solutions also to such problems.

Impulse Forcing Terms

An electrical circuit or mechanical system may be subject to a sudden voltage or force g(t) of large magnitude that acts over a short time interval about t₀. The associated DE will have the form
L[y] = g(t),
where
g(t) = big for t₀ − ε < t < t₀ + ε, and 0 otherwise,
and ε > 0 is small.
[Figure: such a forcing term, drawn for t₀ = 1.]
Measuring Impulse

- In a mechanical system, where g(t) is a force, the total impulse of this force is measured by the integral
I(ε) = ∫_{t₀−ε}^{t₀+ε} g(t) dt = ∫_{−∞}^{∞} g(t) dt
and a similar definition is applied for an electrical system, where g(t) is the voltage.
- Note that if g(t) has the form
g(t) = c for t₀ − ε < t < t₀ + ε, and 0 otherwise,
then
I(ε) = ∫_{t₀−ε}^{t₀+ε} g(t) dt = ∫_{−∞}^{∞} g(t) dt = 2εc,  ε > 0.
- In particular, if c = 1/(2ε), then I(ε) = 1 (independent of ε).
[Figure: the rectangular pulse, drawn for t₀ = 1.]
In the mechanical case,
dp/dt = g(t)
so I = [p]_{t₀−ε}^{t₀+ε} — the change in linear momentum.

Unit impulse function at t₀ = 0

- Suppose the forcing function d_ε(t) has the form
g(t) = d_ε(t) = 1/(2ε) for −ε < t < ε, and 0 otherwise.
- Then, as we have seen, I(ε) = 1.
- We are interested in d_ε(t) acting over shorter and shorter time intervals (i.e., ε → 0). See the graph on the right.
- Note that d_ε(t) gets taller and narrower as ε → 0. Thus for t ≠ 0, we have
lim_{ε→0} d_ε(t) = 0,  and  lim_{ε→0} I(ε) = 1.
[Figure: d_ε(t) for decreasing ε, centred at t = 0.]
The Dirac Delta Function

The unit impulse function δ(t) is a "function" with the defining properties
δ(t) = 0 for t ≠ 0, and ∫_a^b δ(t) dt = 1 for all a < 0, b > 0.
In particular, ∫_{−∞}^{∞} δ(t) dt = 1.
The unit impulse function is an example of a generalized function and is usually called the Dirac delta function.
In general, for a unit impulse at an arbitrary point t₀,
δ(t − t₀) = 0 for t ≠ t₀, and ∫_a^b δ(t − t₀) dt = 1
provided the range of integration includes t₀.
[Figure: δ(t) and δ(t − t₀) shown as spikes at t = 0 and t = t₀.]

δ(t − t₀) can be represented as the limit of the sequence of forcing terms
g(t) = d_ε(t − t₀) = 1/(2ε) for t₀ − ε < t < t₀ + ε, and 0 otherwise,
in the limit ε → 0.
The delta "function" also has the following property. For any continuous function f(t),
∫_a^b f(t) δ(t − t₀) dt = f(t₀)
provided the range of integration includes the point t = t₀.
It is easily verified that the above representation of the delta function has this property:
∫_{−∞}^{+∞} f(t) d_ε(t − t₀) dt = ∫_{t₀−ε}^{t₀+ε} f(t) (1/(2ε)) dt = (1/(2ε)) ∫_{t₀−ε}^{t₀+ε} f(t) dt = (1/(2ε)) [2ε f(t*)]  (using the mean value theorem, for some t* in the interval)
→ f(t₀) as ε → 0.
In summary, the delta function has the following properties:
(i) ∫_a^b δ(t − t₀) dt = 1
(ii) δ(t − t₀) = 0 if t ≠ t₀
(iii) ∫_a^b f(t) δ(t − t₀) dt = f(t₀)
provided the range of integration includes t₀.
Note: the representation of δ(t − t₀) as the limit of the sequence of functions d_ε(t − t₀) is not unique. There are other sequences that could also be used to define the delta function.
The Delta Function (another representation)

Consider the sequence of functions
δₙ(t − t₀) = (n/√π) e^{−n²(t−t₀)²}  (n = 1, 2, ......)
These are Gaussian functions with half-width ~1/n.
For any n,
∫_{−∞}^{+∞} δₙ(t − t₀) dt = ∫_{−∞}^{+∞} (n/√π) e^{−n²(t−t₀)²} dt = ∫_{−∞}^{+∞} (n/√π) e^{−n²(t−t₀)²} d(t − t₀)
= (1/√π) ∫_{−∞}^{+∞} e^{−x²} dx = 1  (setting x = n(t − t₀))
i.e. the area under the curve is unity for each n.
As n → ∞, the sequence of functions approaches the Dirac delta function δ(t − t₀).
[Figure: the Gaussians narrowing about t₀ as n increases.]
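The sifting behaviour of this Gaussian sequence is easy to see numerically. A minimal sketch of my own (the test function f and the point t₀ are arbitrary choices) shows ∫ δₙ(t − t₀) f(t) dt approaching f(t₀) as n grows.

# Sketch: numerical sifting property of the Gaussian delta sequence.
import numpy as np
from scipy.integrate import quad

t0 = 1.5
f = lambda t: np.cos(t) + t**2                    # any continuous test function (assumed)
delta_n = lambda t, n: n/np.sqrt(np.pi) * np.exp(-n**2 * (t - t0)**2)

for n in (1, 5, 25, 125):
    val = quad(lambda t: delta_n(t, n) * f(t), t0 - 10, t0 + 10, points=[t0])[0]
    print(n, val, "-> f(t0) =", f(t0))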
Green's Function (one sided)

Consider the simplest case of a constant coefficient LDE with a forcing term g(t) = δ(t − τ) that represents a unit impulse at t = τ. We solve the IVP
L[y] = dy/dt + ay = δ(t − τ)  (τ > t₀)
y(t₀) = y₀ = 0
The zero initial condition is referred to as the "homogeneous" initial condition.
From our previous integral solution (assuming it is still valid!) we have
y(t) = y₀ e^{−a(t−t₀)} + ∫_{t₀}^{t} e^{−a(t−s)} g(s) ds  (using s as the dummy variable)
= ∫_{t₀}^{t} e^{−a(t−s)} δ(s − τ) ds  (for y₀ = 0)
Evaluating this integral, we get
y(t) = e^{−a(t−τ)}  (t₀ < τ < t)
= 0  (t₀ ≤ t < τ)
[Figure: the drive δ(t − τ) applied at time t = τ, and the response G(t − τ), which switches on at t = τ.]

The above solution is referred to as the one-sided Green's function G(t, τ) for the LDE:
G(t, τ) = e^{−a(t−τ)}  (t₀ < τ < t)
= 0  (t₀ ≤ t < τ)
The Green's function measures the response at a time t to a unit impulse (δ function) drive at t = τ for homogeneous initial conditions (y(t₀) = 0).
Note: The EU theorem for linear DEs does not apply in this case because the forcing term is not even a function(!) — the solution breaks down at t = τ.
[Figure: drive (impulse) δ(t − τ) at t = τ, and the response G(t, τ).]
Significance of Green's Function

From our earlier result, the general solution to the IVP
L[y] = (D + a)y = g(t)
y(t₀) = y₀ = 0
was found to be
y(t) = ∫_{t₀}^{t} e^{−a(t−τ)} g(τ) dτ
This solution can be re-written as
y(t) = ∫_{t₀}^{t} G(t, τ) g(τ) dτ
where G(t, τ) is the Green's function derived earlier:
G(t, τ) = e^{−a(t−τ)}  (t₀ < τ < t) — used in the integrand
= 0  (t₀ ≤ t < τ) — not used in the integrand
From an operational point of view,
(D + a) takes y → g(t)  (differential operator)
∫ G · takes g(t) → y  (integral operator)
G is also sometimes called the kernel or the influence function of the IVP.

The solution for a general forcing term g(τ) (t₀ ≤ τ ≤ t),
∫_{t₀}^{t} G(t, τ) g(τ) dτ,
can be interpreted as follows:
The response at time t to an impulse of magnitude 1 at time τ is G(t − τ).
⇒ The response at time t to an impulse of magnitude g(τ) at time τ is g(τ)G(t − τ).
The integral represents the result for a sum of such impulses distributed over t₀ ≤ τ ≤ t, all with zero initial conditions.
For non-zero initial conditions, the solution y_c(t) = y₀ e^{−a(t−t₀)} of the homogeneous DE which satisfies y(t₀) = y₀ must be added.
[Figures: drive = unit impulse δ(t − τ) at t = τ with response G(t − τ); drive = a general g(t) with response y(t).]
L[y] = (D + a)y = δ(t − τ), y(t₀) = 0. Solution: y(t) = G(t, τ).
L[y] = (D + a)y = g(t), y(t₀) = 0. Solution: y(t) = ∫_{t₀}^{t} g(τ) G(t, τ) dτ.
Methods for calculating the Green's Function

(a) Obtain the integral solution to the IVP
dy/dt + ay = g(t)
y(t₀) = 0
and simply read off the Green's function. This is not very instructive, since it gives no insight as to what it represents!
(b) Calculate the solution of the IVP
dy/dt + ay = δ(t − τ)  (τ > t₀)
y(t₀) = 0
in the two domains t₀ ≤ t < τ and t > τ, with the jump condition
[y(t)]_{t=τ⁻}^{t=τ⁺} = 1
This will give the Green's function G(t, τ) ab initio!
Then construct the solution for a general forcing term g(τ) as
y(t) = ∫_{t₀}^{t} g(τ) G(t, τ) dτ

Method (b) (justification of the jump condition):
dy/dt + p(t)y = δ(t − τ)
(Note: we have allowed for the general case of a non-constant p(t).)
Integrate both sides between τ − ε and τ + ε:
[y]_{τ−ε}^{τ+ε} + ∫_{τ−ε}^{τ+ε} p(t)y dt = 1
Now take the limit as ε → 0. If p(t) is continuous, the integral will tend to zero even if y(t) has a finite discontinuity.
⇒ [y]_{τ⁻}^{τ⁺} = 1
Method (b) (solution): Taking t₀ = 0, we first solve the IVP
dy/dt + ay = 0  (0 ≤ t < τ)
y(0) = 0
The solution to this DE is
y(t) = A e^{−at}
Using the initial condition y(0) = 0: A = 0.
Hence the solution is
y(t) = 0  (0 ≤ t < τ)
[Figure: g(t) = unit impulse at t = τ.]
Next solve the IVP beyond the impulse:
dy/dt + ay = 0  (t > τ)
The solution to the DE again has the same form:
y = B e^{−at}
The jump condition now gives
[y(t)]_{t=τ⁻}^{t=τ⁺} = B e^{−aτ} − 0 = 1
⇒ B = e^{aτ}
⇒ y = e^{−a(t−τ)}  (0 < τ < t)
Putting the two bits of the solution together,
y(t) = e^{−a(t−τ)}  (0 < τ < t)
y(t) = 0  (0 ≤ t < τ)
which is the Green's function as before.
[Figure: the impulse g(t) at t = τ and the response y(t).]

Example: Find the solution to the initial value problem
L[y] = y′ − y = δ(t − τ)
y(0) = 0
and hence find the Green's function.
Then find the solutions to the following IVPs with y(0) = 0:
L[y] = e^t  (t ≥ 0)
L[y] = sin t  (t ≥ 0)

Solution:
For 0 ≤ t < τ,  y′ − y = 0
⇒ y(t) = A e^t
y(0) = 0 ⇒ A = 0
y(t) = 0  (0 ≤ t < τ)
For t > τ,  y′ − y = 0
⇒ y(t) = B e^t
[y(t)]_{τ⁻}^{τ⁺} = 1 ⇒ B e^τ − 0 = 1 ⇒ B = e^{−τ}
y(t) = e^{(t−τ)}  (0 < τ < t)
Hence
G(t, τ) = e^{(t−τ)}  (0 < τ < t)
G(t, τ) = 0  (0 ≤ t < τ)
[Figure: drive (impulse) δ(t − τ) at t = τ, and the response G(t − τ).]
(1) L[y] = e^t, y(0) = 0
y(t) = ∫_0^t G(t, τ) e^τ dτ
= ∫_0^t e^{(t−τ)} e^τ dτ = e^t [τ]_0^t = t e^t
(2) L[y] = sin t, y(0) = 0
y(t) = ∫_0^t G(t, τ) sin τ dτ
= ∫_0^t e^{(t−τ)} sin τ dτ = e^t ∫_0^t e^{−τ} sin τ dτ
= e^t [−(1/2)(cos τ + sin τ) e^{−τ}]_0^t
= −(1/2)(cos t + sin t) + (1/2) e^t
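These Green's-function results can be cross-checked by integrating the IVPs directly. The sketch below (my own; the interval and tolerances are arbitrary choices) confirms that y′ − y = e^t with y(0) = 0 reproduces t e^t; the sine case can be checked the same way against −(cos t + sin t)/2 + e^t/2.

# Sketch: direct numerical solution of y' - y = e^t, y(0) = 0, vs y = t*e^t.
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: y + np.exp(t), (0.0, 2.0), [0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, 2.0, 5)
print("max |numeric - t*exp(t)|:", np.max(np.abs(sol.sol(t)[0] - t*np.exp(t))))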
Note: If the problem has non-zero ("inhomogeneous") initial conditions, we must add the complementary function to the above solutions:
y(t) = A e^t + ∫_0^t e^{(t−τ)} g(τ) dτ
Thus
L[y] = e^t, y(0) = 1
has solution
y(t) = A e^t + t e^t
y(0) = 1 ⇒ A = 1
so A = 1 satisfies the initial condition.
Solution to the IVP: y(t) = e^t + t e^t

Green's function for the non-constant coefficient first order linear DE

We previously found that the full integral solution to the IVP
dy/dt + p(t)y = g(t)
y(t₀) = y₀
is (FULL SOLUTION, explicitly given)
y(t) = y₀ e^{−∫_{t₀}^{t} p(s)ds} + ∫_{t₀}^{t} g(τ) e^{−∫_{τ}^{t} p(s)ds} dτ
The factor e^{−∫_{τ}^{t} p(s)ds} in the integrand is the Green's function G(t, τ) (t ≥ τ).

Exercise: Find the solution to the IVP
dy/dt + p(t)y = δ(t − τ)  (τ > t₀)
y(t₀) = 0
for t₀ ≤ t < τ and t > τ with the jump condition
[y(t)]_{t=τ⁻}^{t=τ⁺} = 1
and hence show that the Green's function is
G(t, τ) = e^{−∫_{τ}^{t} p(s)ds}  (t > τ)
= 0  (t₀ ≤ t < τ)

The delta function case is "weird" because we appear to have obtained a solution to an LDE with a forcing term that is not even a function!
Next we consider the more palatable problem of an LDE with a forcing term that is a discontinuous function.

Existence and Uniqueness for First Order LDEs (discontinuous forcing terms)

In a generalisation of the EUT, the function g(t) is allowed to have a finite number of finite discontinuities — that is, g(t) is piecewise continuous on I.
The ODE L[y] = g(t) must then be satisfied everywhere except at the points of discontinuity. This problem is still uniquely solvable. The solutions are continuous functions y(t) whose derivatives y′(t) are now also piecewise continuous. This differs from the case of a delta function forcing term, where the solution itself was discontinuous.
The standard integral expression for the solution in terms of an integrating factor can still be used
- the justification is that the integral of a discontinuous function is continuous.
[Figure: a piecewise continuous function g(t).]

Step Function definition

Let c ≥ 0. The unit step function, or Heaviside function, is defined by
u_c(t) = H(t − c) = 0 for t < c, 1 for t ≥ c.
A negative step can be represented by
y(t) = 1 − u_c(t) = 1 for t < c, 0 for t ≥ c.
Example

Sketch the graph of h(t) = u_π(t) − u_{2π}(t), t ≥ 0.
Solution: Recall that u_c(t) is defined by
u_c(t) = 0 for t < c, 1 for t ≥ c.
Thus
h(t) = 0 for 0 ≤ t < π, 1 for π ≤ t < 2π, 0 for 2π ≤ t < ∞,
and hence the graph of h(t) is a rectangular pulse.

Example (piecewise continuous forcing term)

Using the Green's function solution to
L[y] = y′ − y = δ(t − τ)
y(0) = 0
obtained previously, find the solution to the following IVP:
L[y] = u₁(t),  t ≥ 0
y(0) = 0
The RHS is the Heaviside step function:
u₁(t) = H(t − 1) = 0  (t < 1)
= 1  (t ≥ 1)
The Green's function for this problem is
G(t, τ) = e^{(t−τ)}
and the solution satisfying y(0) = 0 is
y(t) = ∫_0^t G(t, τ) H(τ − 1) dτ
= ∫_0^t e^{(t−τ)} H(τ − 1) dτ
t < 1:  y(t) = ∫_0^t e^{(t−τ)} · 0 dτ = 0
t ≥ 1:  y(t) = ∫_0^1 e^{(t−τ)} · 0 dτ + ∫_1^t e^{(t−τ)} · 1 dτ = [−e^{(t−τ)}]_1^t = e^{t−1} − 1
Note that the solution is continuous across t = 1:
[y(t)]_{1⁻}^{1⁺} = (e^{1−1} − 1) − 0 = 0
but its first derivative is discontinuous, jumping by the same amount as the forcing function (the RHS of the LDE):
[y′(t)]_{1⁻}^{1⁺} = e^{1−1} − 0 = 1
The solution does not belong to C¹ because the forcing term does not!
[Figure: the drive g(t) (unit step at t = 1) and the response y(t).]
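The continuity of y and the unit jump in y′ at t = 1 can be seen numerically as well. This is a sketch of my own (the solver settings are arbitrary choices): it integrates y′ − y = H(t − 1), y(0) = 0 and compares with the piecewise formula above.

# Sketch: step-function forcing; the solution is continuous at t = 1,
# while its derivative jumps by 1 (the size of the step).
import numpy as np
from scipy.integrate import solve_ivp

H = lambda t: 0.0 if t < 1.0 else 1.0              # Heaviside step at t = 1
sol = solve_ivp(lambda t, y: y + H(t), (0.0, 3.0), [0.0],
                dense_output=True, max_step=0.01, rtol=1e-9, atol=1e-12)

for t in (0.5, 1.0, 2.0, 3.0):
    exact = 0.0 if t < 1.0 else np.exp(t - 1.0) - 1.0
    print(t, sol.sol(t)[0], exact)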
A Physical Application
-Electrical Circuits-

Q — quantity of charge (Coulombs)
Current I(t) = dQ/dt — rate of flow of charge (Amperes)
As current flows in a circuit, the charge carriers exchange energy with various elements of the circuit. As a result, the potential energy V per unit charge changes.
The energy per unit charge exchanged with these elements as charge flows from point a to point b is
V_ab = V_a − V_b  (Joules per Coulomb, or Volts)
[Figure: a circuit element between points a and b.]

Batteries and electrical generators can maintain a voltage drop E between two terminals. For batteries, the terminal with the higher potential is labelled with a + sign. The chemical energy of the battery imparts an amount of energy E per Coulomb as the charge carriers move through the circuit.
E(t) is the source term for the problem.
[Figure: a source E(t) with current I flowing from a to b.]
Circuit Elements

Resistor:  V_ab = RI
Inductor:  V_ab = L dI/dt
Capacitor: V_ab = (1/C) Q(t) = (1/C) {Q(t₀) + ∫_{t₀}^{t} I(s) ds}, where Q(t) is the charge stored in the capacitor.

Kirchhoff's Laws:
(1) For a closed circuit, the sum of the potential drops is zero (conservation of energy):
V_{a₁a₂} + V_{a₂a₃} + ....... + V_{aₙa₁} = 0
(2) At any point, the sum of the currents flowing in = the sum of the currents flowing out (no source of charge at a junction), e.g.
I₁ + I₂ = I₃ + I₄ + I₅
[Figures: a resistor R, an inductor L and a capacitor C, each between points a and b, and a junction with currents I₁, ..., I₅.]
A circuit with E = E₀ (a constant)

Assume an initial (t = t₀) current I₀ and a constant applied voltage E₀. Then
E₀ = L dI/dt + RI  ⇒  dI/dt + (R/L) I = E₀/L
We use the solution for the constant coefficient first order LDE with
a = R/L,  g(t) = E₀/L = const:
I = I₀ e^{−(R/L)(t−t₀)} + (E₀/R)(1 − e^{−(R/L)(t−t₀)}) = I₀ e^{−(t−t₀)/T} + (E₀/R)(1 − e^{−(t−t₀)/T})
where T = L/R is a time constant for the circuit.
The first term is the transient term (decay of the initial current); the second is the build-up of the current due to the driving term.
Note (1): The initial current decays exponentially and the current builds up to the steady value E₀/R. At (t − t₀) = 4.6 T, the solution is within 1 percent of the steady state.
[Figure: RL circuit with source E₀, and the current I(t) showing the transient decay and the build-up to E₀/R.]
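The "4.6 time constants" remark can be verified directly from the formula. A minimal sketch follows (the circuit values R, L, E₀, I₀ are assumed, not from the notes).

# Sketch: after t - t0 = 4.6*T the current is within about 1% of E0/R,
# since exp(-4.6) is approximately 0.010.
import numpy as np

R, L, E0, I0 = 10.0, 2.0, 5.0, 0.0     # assumed circuit values
T = L / R
t = 4.6 * T
I = I0*np.exp(-t/T) + (E0/R)*(1 - np.exp(-t/T))
print(I, E0/R, abs(I - E0/R)/(E0/R))    # relative gap ~ exp(-4.6) ~ 0.010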
A circuit with a variable applied voltage E = V(t)

L dI/dt + RI = V(t)
I(t₀) = I₀
⇒ dI/dt + aI = g(t),  with a = R/L, g(t) = V(t)/L.
Using the previous solution,
I = I₀ e^{−(R/L)(t−t₀)} + ∫_{t₀}^{t} g(τ) e^{−(R/L)(t−τ)} dτ
or
I(t) = I₀ e^{−(R/L)(t−t₀)} + ∫_{t₀}^{t} g(τ) G(t, τ) dτ
G(t, τ) = e^{−(R/L)(t−τ)}  (t ≥ τ)  (one-sided Green's function)
= 0  (t₀ ≤ t < τ)
The first term is the transient term (decay of the initial current); the integral is the response to the driving term.
Fundamental Theorem for First Order ODEs
-Existence and Uniqueness for First Order (non-linear) DEs-  (Theorem 2.4.2)

y′ = f(t, y)  (normal form for a general first order DE)
y(t₀) = y₀
If f and ∂f/∂y are defined and continuous throughout the interior of a rectangle R containing (t₀, y₀) in its interior, then the above initial value problem has a unique solution φ(t) which exists in
t₀ − h < t < t₀ + h  (for some h > 0)
That is:
φ′(t) = f(t, φ(t))  (t₀ − h < t < t₀ + h)
φ(t₀) = y₀
[Figure: the rectangle R about (t₀, y₀) in which f and f_y are continuous, with the interval (t₀ − h, t₀ + h) marked.]
The conditions stated in the theorem are sufficient but not necessary.
If the conditions are not satisfied, then the IVP may still have either (a) no solution, (b) more than one solution, or (c) a unique solution.
In fact, if we are not interested in uniqueness, the continuity of f(t, y) in R is sufficient to guarantee at least one solution through (t₀, y₀).

Some Properties of Solutions in R

[Figure: the rectangle R about (t₀, y₀), with the interval (t₀ − h, t₀ + h) marked.]
A solution either exits R or approaches the boundary at infinity. A solution can never die inside R.
We may continue the solution y(t) uniquely as long as we remain in R. However, the solution may extend over the top and bottom of R (as in the figure) before t has changed much, or may exit R from the left or right. Hence h is not known a priori.
Corollary: Two different solution curves of y′ = f(t, y) cannot meet in R.
Example: y′ = y², y(0) = C ≠ 0
f = y² and ∂f/∂y = 2y are continuous everywhere. R can be any rectangular box in the (t, y) plane. There exists a unique solution for all initial conditions.
Solutions:
dy/y² = dt  (y ≠ 0)  ⇒  1/y = −t + D  ⇒  y = 1/(D − t)
y(t) = 0 is also a solution.
⇒ y = 1/(1/C − t) satisfies the IV y(0) = C.
C = 1:  y = 1/(1 − t), valid on (−∞, 1): this solution cannot be extended for t > 1. But there is no contradiction with the EUT!
C = 2:  y = 1/(1/2 − t), valid on (−∞, 1/2): this solution cannot be extended for t > 1/2.
For y(0) = C (≠ 0), we have
y = 1/(1/C − t), valid on (−∞, 1/C).
For C large, the solution cannot be extended much beyond t = 0 — non-linear DEs behave quite differently from linear DEs!
[Figure: the solution for C = 1 blowing up at t = 1.]
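The finite-time blow-up is easy to observe numerically. A sketch of my own (the threshold used to stop the solver is an arbitrary choice) integrates y′ = y² with y(0) = C and stops just before t = 1/C.

# Sketch: finite-time blow-up of y' = y^2, y(0) = C, at t = 1/C.
import numpy as np
from scipy.integrate import solve_ivp

C = 2.0

def blowup(t, y):                 # event: stop once the solution becomes very large
    return y[0] - 1e8
blowup.terminal = True

sol = solve_ivp(lambda t, y: y**2, (0.0, 1.0), [C], events=blowup,
                rtol=1e-10, atol=1e-12, max_step=1e-3)
print("solver stopped near t =", sol.t[-1], "; expected blow-up at 1/C =", 1.0/C)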
Numerical Example (Death of a Solution):
y′ = −t²/[(y + 2)(y − 3)]
y(0) = 0
f and f_y are continuous in any rectangular box R about (0, 0) that excludes y = −2 and y = 3. In this case the solution in fact dies (stops) at y = −2 and y = +3, as the rate function blows up there. (An ODE solver takes forever to reach these points!)

Summary: Interval of Definition (Nonlinear Equations)

In the nonlinear case, the interval on which a solution exists may be difficult to determine.
The solution y = φ(t) exists as long as (t, φ(t)) remains within the rectangular region indicated in the theorem. This is what determines the value of h in that theorem. Since φ(t) is usually not known, it may be impossible to determine this region, although useful bounds can be set (see the next theorem).
In any case, the interval on which a solution exists may have no simple relationship to the function f in the differential equation y′ = f(t, y), in contrast with linear equations. Furthermore, any singularities in the solution may depend on the initial condition as well as the equation.
Compare these comments with the preceding example.
Implicit and Explicit Solutions (1 of 4)

Solve the following first order nonlinear equation:
dy/dx = (3x² + 4x + 2)/(2(y − 1))
Separating variables,
2(y − 1) dy = (3x² + 4x + 2) dx
∫ 2(y − 1) dy = ∫ (3x² + 4x + 2) dx
y² − 2y = x³ + 2x² + 2x + C
The equation above defines the solution y implicitly. An explicit expression for the solution can be found in this case:
y² − 2y − (x³ + 2x² + 2x + C) = 0  ⇒  y = [2 ± √(4 + 4(x³ + 2x² + 2x + C))]/2
y = 1 ± √(x³ + 2x² + 2x + D)
Note: f and f_y are discontinuous at y = 1.
Non-linear DEs often lead only to implicit solutions.

Example: Initial Value Problem (2 of 4)

Suppose we seek a solution satisfying y(0) = −1. Using the implicit expression for y, we obtain
y² − 2y = x³ + 2x² + 2x + C
(−1)² − 2(−1) = C ⇒ C = 3
Thus the implicit equation defining y is
y² − 2y = x³ + 2x² + 2x + 3
Using the explicit expression for y,
y = 1 ± √(x³ + 2x² + 2x + D)
−1 = 1 ± √D ⇒ D = 4
It follows that
y = 1 − √(x³ + 2x² + 2x + 4) is the branch relevant to the IV.
[Figure: solution curves.]

Example: Initial Condition y(0) = 3 (3 of 4)

Note that if the initial condition is y(0) = 3, then we choose the positive sign, instead of the negative sign, on the square root term:
y = 1 + √(x³ + 2x² + 2x + 4)
Example: Domain (4 of 4)

Thus the solutions to the initial value problem
dy/dx = (3x² + 4x + 2)/(2(y − 1)),  y(0) = −1
are given by
y² − 2y = x³ + 2x² + 2x + 3  (implicit)
y = 1 − √(x³ + 2x² + 2x + 4)  (explicit)
From the explicit representation of y, it follows that
y = 1 − √(x²(x + 2) + 2(x + 2)) = 1 − √((x + 2)(x² + 2))
and hence the domain of y is (−2, ∞). Note that x = −2 yields y = 1, which makes the denominator of dy/dx zero (vertical tangent).
Conversely, the domain of y can be estimated by locating vertical tangents on the graph (useful for implicitly defined solutions).

Example: Implicit Solution of an Initial Value Problem (1 of 2)

Consider the following initial value problem:
y′ = (y cos x)/(1 + 3y³),  y(0) = 1
Separating variables, we obtain
[(1 + 3y³)/y] dy = cos x dx
∫ (1/y + 3y²) dy = ∫ cos x dx
ln|y| + y³ = sin x + C
Using the initial condition, it follows that
ln|y| + y³ = sin x + 1

Example: Graph of Solutions (2 of 2)

Thus
y′ = (y cos x)/(1 + 3y³), y(0) = 1  ⇒  ln|y| + y³ = sin x + 1
The graph of this solution (black), along with the graphs of the direction field and several integral curves (blue) for this differential equation, is given below.
The direction field can often show the qualitative form of solutions, and can help identify regions in the x-y plane where solutions exhibit interesting features that merit more detailed analytical or numerical investigation.
[Figure: direction field and integral curves.]
Exercise

y′ = 3y sin y + t
This is a first order non-linear DE. This equation cannot be solved analytically. Direction fields and solution curves can be obtained using numerical techniques.
Try:
ODE solver: initial value t = −6, y = −1, interval = 10
Sweep: y = −2.3 to 4 (10 or 20 points)
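One way to carry out the suggested numerical experiment is sketched below (my own code, not from the notes; it follows the solver settings quoted on the slide, and the output format is an arbitrary choice).

# Sketch: sweep initial values and integrate y' = 3*y*sin(y) + t over an
# interval of length 10 starting at t = -6.
import numpy as np
from scipy.integrate import solve_ivp

rhs = lambda t, y: 3*y*np.sin(y) + t
t0, t1 = -6.0, 4.0
for y0 in np.linspace(-2.3, 4.0, 10):          # the sweep of initial values
    sol = solve_ivp(rhs, (t0, t1), [y0], max_step=0.05)
    print(f"y({t0}) = {y0:5.2f}  ->  y({sol.t[-1]:.2f}) = {sol.y[0, -1]:.3f}")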
Some special Equations

- Change of varlables
- Bernoulli type
- Homogeneous in y/x
- Exact equations
Change of varlables
Consider a first order DE
N(x, y)
dy
dx
+ M(x, y) = 0
Sometimes by changing variables (dependent, independent or both)
one could obtain a simpler looking equation that can be solved by
standard means. A change could be of the formv = F(y), u = G(x)
(or equivalently y = H(v), x = K(u)) and would lead to a solution
v(u). which can be converted back to a solution in terms of the original
variables y(x).
Example: Solve 2xy dy/dx = a² + y² − x²

We are looking for solutions y(x).
We introduce a new dependent variable v = y², so that dv/dx = 2y dy/dx and
    x dv/dx = a² + v − x²
This is first order linear in v and can now be solved for v(x) by standard
techniques!
But we can simplify further by introducing a new independent variable u = x²:
    x dv/dx = x (dv/du)(du/dx) = x · 2x dv/du = 2x² dv/du = 2u dv/du
so
    2u dv/du = a² + v − u        (a linear DE)
This equation can be solved for v(u). But we note that the DE can be
written as
    2u d/du (a² + v) = a² + v − u
This suggests that we define a new dependent variable w = a² + v:
    2u dw/du = w − u   ⇒   dw/du − (1/(2u)) w = −1/2        (a simpler linear DE)
Integrating factor:
    exp(−∫ du/(2u)) = exp(−(1/2) ln u) = u^(−1/2)
    u^(−1/2) dw/du − u^(−1/2) (1/(2u)) w = −(1/2) u^(−1/2)
    d/du (u^(−1/2) w) = −(1/2) u^(−1/2)
    u^(−1/2) w = 2c − u^(1/2)
    w = 2c u^(1/2) − u
But u = x² and w = a² + y², so
    y² + a² = 2cx − x²
    x² + y² − 2cx = −a²
    (x − c)² + y² = c² − a²
Clearly we need c > a (taking a, c > 0).
So the solutions y(x) are portions of circles centred at x = c on the x-axis,
of radius √(c² − a²), which is less than c.
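As a sanity check (not in the original notes), one can verify symbolically that the circles found above do satisfy the DE; the sketch below assumes sympy is available and takes the upper half-circle branch:

import sympy as sp

x, a, c = sp.symbols('x a c', positive=True)
y = sp.sqrt(c**2 - a**2 - (x - c)**2)          # upper half of (x - c)^2 + y^2 = c^2 - a^2
lhs = 2*x*y*sp.diff(y, x)                      # 2xy dy/dx
rhs = a**2 + y**2 - x**2

print(sp.simplify(lhs - rhs))                  # 0, so the circles satisfy the DE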
Bernoulli's Equation

    y′ + p(x) y = q(x) yⁿ        (n ≠ 0, 1)
    y⁻ⁿ y′ + p(x) y^(1−n) = q(x)
    set u = y^(1−n),   u′ = (1 − n) y⁻ⁿ y′
    u′/(1 − n) + p(x) u = q(x)   -- first order linear DE solved by standard means.

Example:
    y′ + 2y = x y⁴
    y⁻⁴ y′ + 2 y⁻³ = x        (y ≠ 0)
    −(1/3)(y⁻³)′ + 2 y⁻³ = x
    Set u = y⁻³:
    −(1/3) u′ + 2u = x
    u′ − 6u = −3x
    u′ − 6u = −3x
Integrating factor: exp(∫(−6) dx) = e^(−6x)
    (u e^(−6x))′ = −3x e^(−6x)
    u e^(−6x) = (1/2) x e^(−6x) + (1/12) e^(−6x) + A
    y³ = 1/u = 1/((1/2)x + 1/12 + A e^(6x))
    y(0) = 2  ⇒  8 = 1/(1/12 + A)  ⇒  A = 1/24
    y³ = 1/((1/2)x + 1/12 + (1/24) e^(6x))
This solution is valid approximately from x = −0.2 to +∞
(check numerically!). The EU theorem does not help us much because
this is a non-linear DE.
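The suggested numerical check could look like the following sketch (assuming numpy/scipy; the bracketing interval for the root search is an illustrative choice):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

denom = lambda x: x/2 + 1/12 + np.exp(6*x)/24        # denominator of the closed-form y^3
print("denominator vanishes near x =", brentq(denom, -1.0, -0.15))   # about -0.2

f = lambda x, y: -2*y + x*y**4                       # the Bernoulli DE y' + 2y = x y^4
xs = np.linspace(0.0, 1.0, 101)
sol = solve_ivp(f, (0.0, 1.0), [2.0], t_eval=xs, rtol=1e-9)
exact = (1.0/denom(xs))**(1.0/3.0)
print("max |numeric - closed form| on [0, 1]:", np.max(np.abs(sol.y[0] - exact)))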
DE that is Homogeneous in y/x

    y′ = f(y/x)
Trick: set y = vx, so y′ = v′x + v:
    v′x + v = f(v)   ⇒   dv/(f(v) − v) = dx/x   ... separable

Exercise:
    y′ = (y² + 2xy)/x² = (y/x)² + 2(y/x)
The RHS is a function of y/x -- hence "homogeneous" in the above sense.
Set y = vx and solve the resulting DE in v as above.
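A quick way to see the "homogeneous in y/x" property numerically: f(x, y) must be invariant under the scaling (x, y) → (sx, sy). A small sketch (assuming numpy; the sample values are arbitrary):

import numpy as np

F = lambda x, y: (y**2 + 2*x*y) / x**2

rng = np.random.default_rng(0)
x, y, s = rng.uniform(0.5, 2.0, size=3)
print(F(x, y), F(s*x, s*y))     # equal (up to rounding): F depends on x, y only through y/x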
A DE may be "exact" - if it is recognised as such (not
always easy to do!), a full solution follows immediately.
Here is a simple example:

Example:  t³ dy/dt + 3t² y = 0
We note "by inspection" that the left hand side is a perfect derivative:
    d/dt (t³ y(t)) = 0
We say the equation is exact, and the solution is
    t³ y = D,  a constant
We now develop the theory related to exact equations.
First Order Exact Equations

The differential equation
    M(x, y) dx + N(x, y) dy = 0
is said to be exact in some region R if one can
find a continuously differentiable function φ(x, y) such that
    dφ(x, y) = M(x, y) dx + N(x, y) dy    in R
or equivalently such that
    ∂φ/∂x = M,   ∂φ/∂y = N    in R
The solution to the DE is then
    φ(x, y) = constant
Exact Equations and Integrating Factors
(Theorem 2.6.1, p91; Ex 2, p92)

A necessary and sufficient condition for the DE
    M(x, y) + N(x, y) dy/dx = 0
to be exact in a simply connected region R in which
M, N, M_y and N_x are continuous is that
    ∂M/∂y = ∂N/∂x

* Connected means that any two points of R can be joined by a polygonal line that lies
  entirely in R.
* Simply connected means that every closed curve can be continuously
  shrunk to a point without passing out of R: a region with no holes.
(Figure: examples of a simply connected region and a connected region - e.g. a rectangular box is simply connected.)
Proof (necessity)

Consider the DE:  M(x, y) dx + N(x, y) dy = 0
with M, N, ∂M/∂y, ∂N/∂x continuous in R.
Suppose the DE is exact in R. Then by definition there must exist
a continuously differentiable function φ(x, y) in R such that
    dφ = M(x, y) dx + N(x, y) dy
We also know that for any differentiable function of two variables
    dφ = (∂φ/∂x) dx + (∂φ/∂y) dy = φ_x dx + φ_y dy
Hence a necessary condition for exactness is that
    M(x, y) = φ_x,   N(x, y) = φ_y    in R
But if ∂M/∂y, ∂N/∂x are continuous in R, then so are φ_xy, φ_yx. Hence
    ∂²φ/∂y∂x = ∂²φ/∂x∂y       (first year calculus)
i.e.  φ_xy = φ_yx   ⇒   M_y = N_x
Proof (sufficiency)

Suppose M, N, ∂M/∂y, ∂N/∂x are continuous in R and that ∂M/∂y = ∂N/∂x.
We need to show that we can find a continuously differentiable function φ such that
    dφ = M dx + N dy    (in R)
Now, for any such φ(x, y),  dφ = (∂φ/∂x) dx + (∂φ/∂y) dy.
We need to show that we can consistently solve
    ∂φ/∂x = M(x, y),   ∂φ/∂y = N(x, y)    for φ.
From the first of these,
    φ(x, y) = ∫ₐˣ M(t, y) dt + h(y)
From which
    ∂φ/∂y = ∫ₐˣ ∂M(t, y)/∂y dt + h′(y)
          = ∫ₐˣ ∂N(t, y)/∂t dt + h′(y)      (using N_x = M_y)
          = N(x, y) − N(a, y) + h′(y)
But we also require
    ∂φ/∂y = N(x, y)
so we must choose
    h′(y) = N(a, y)
Integrating,
    h(y) = ∫ᵦʸ N(a, s) ds
and the required solution is
    φ(x, y) = ∫ₐˣ M(t, y) dt + ∫ᵦʸ N(a, s) ds
Example:  3y(x² − 1) dx + (x³ + 8y − 3x) dy = 0

Standard form: M dx + N dy = 0
    ∂M/∂y = ∂/∂y [3y(x² − 1)] = 3(x² − 1)
    ∂N/∂x = ∂/∂x [x³ + 8y − 3x] = 3(x² − 1)
    ⇒ Exact

We want to find φ(x, y) such that
    dφ = 3y(x² − 1) dx + (x³ + 8y − 3x) dy
But dφ = (∂φ/∂x) dx + (∂φ/∂y) dy always.
We try to satisfy
    ∂φ/∂x = 3y(x² − 1),   ∂φ/∂y = x³ + 8y − 3x
simultaneously, and we know that this can be done because the
equation is exact.

Start with either one of them:
    ∂φ/∂x = 3y(x² − 1)
    ⇒ φ(x, y) = yx³ − 3xy + f(y)
    ⇒ ∂φ/∂y = x³ − 3x + f′(y)
But we want ∂φ/∂y = x³ + 8y − 3x
    ⇒ f′(y) = 8y
       f(y) = 4y² + K
and
    φ(x, y) = yx³ − 3xy + 4y² + K
Solutions to the DE are φ(x, y) = const. That is,
    yx³ − 3xy + 4y² = D,   any constant
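For reference, a hedged sympy sketch (not part of the notes) that both tests the exactness condition and rebuilds φ from the integral formula derived in the sufficiency proof, taking a = b = 0:

import sympy as sp

x, y, t, s = sp.symbols('x y t s')
M = 3*y*(x**2 - 1)
N = x**3 + 8*y - 3*x

print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))    # 0  => the equation is exact

phi = sp.integrate(M.subs(x, t), (t, 0, x)) \
      + sp.integrate(N.subs({x: 0, y: s}), (s, 0, y))
print(sp.expand(phi))                                 # x**3*y - 3*x*y + 4*y**2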
Integrating Factor (general case)

Suppose M(x, y) dx + N(x, y) dy = 0 is not exact.
Question: Can we find an integrating factor that will
make it exact (as we did for the linear case)?
We need to find μ(x, y) ≠ 0 such that
    μ(x, y) M(x, y) dx + μ(x, y) N(x, y) dy = 0
is exact.
    ⇒ ∂/∂y (μM) = ∂/∂x (μN)
    ⇒ μ_y M + μ M_y = μ_x N + μ N_x
    ⇒ μ_y M − μ_x N + μ(M_y − N_x) = 0
i.e.
    M ∂μ/∂y − N ∂μ/∂x + μ(M_y − N_x) = 0   ..... a PDE for μ!
Usually more complicated to solve than the original DE.
(A solution may be possible in special cases.
A fruitful approach may be to try μ = μ(x) only, or μ = μ(y) only
- see book. This may work only sometimes.)
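One of the special cases alluded to above is worth recording: if (M_y − N_x)/N turns out to be a function of x alone, then μ(x) = exp(∫ (M_y − N_x)/N dx) works. The sketch below (assuming sympy, and using an illustrative equation that is not from the notes) demonstrates this:

import sympy as sp

x, y = sp.symbols('x y', positive=True)
M = 3*x*y + y**2          # illustrative non-exact equation (3xy + y^2)dx + (x^2 + xy)dy = 0
N = x**2 + x*y

ratio = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)   # 1/x : a function of x only
mu = sp.exp(sp.integrate(ratio, x))                        # mu(x) = x

# after multiplying through by mu, the equation is exact:
print(sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, x)))    # 0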
Autonomous Equations

    dy/dt = f(y)    - the independent variable t does not appear in f  ⇒  separable DE

y(t) = a (a constant) is a solution if f(a) = 0.
Zeros of f(y) are called critical points.
Corresponding to the critical points a₁, a₂, ...  (f(a₁) = 0, f(a₂) = 0, ...)
there are equilibrium solutions of the DE:
    y(t) = a₁,   y(t) = a₂, ...

Some properties
- The equilibrium solutions are horizontal straight lines in the (y-t) plane.
- These solutions divide the (y-t) plane into bands.
- Inside each band (defined by consecutive zeros of f), f has a fixed sign,
  and any solution rises or falls from one of the bounding lines towards the other.
- The manner in which these solutions are approached for large t
  determines the stability of the equilibrium solutions.
(Figure: bands in the (y, t) plane bounded by the equilibrium lines a₁ and a₂, with f(y) < 0 between them.)
Translation of Solutions:
If y = h(t) (a < t < b) is a solution, then so is
y = h(t + T) (a − T < t < b − T) for any T.

Given
    dh(t)/dt = f(h(t))        (a < t < b)
Replace t by t + T (change of variable):
    dh(t + T)/d(t + T) = f(h(t + T))    (a − T < t < b − T)
    dh(t + T)/dt = f(h(t + T))          (a − T < t < b − T)
⇒ h(t + T) is also a solution. (Note that this worked only
because t does not appear explicitly in f.)

The Separation of Orbits theorem: For an autonomous system,
"orbits" (the name given to any solution) in S (the region where f, f_y are continuous)
can never meet, because the EU theorem would then be violated.
(Figure: a solution y = h(t) on (a, b) and its translate on (a − 1, b − 1).)

Illustration (ODE solver):  y′ = (1 − y/12) y,   0 ≤ y₀ ≤ 40
Example from Population Dynamics

Robert Malthus (1766-1834): a population experiencing exponential
growth will eventually overrun its habitat. We need to change the simple
exponential growth idea:
    dy/dt = r y    →    dy/dt = h(y) y
The instantaneous rate of growth h(y) now depends on the population y.
This is now a non-linear DE.

The Logistic Equation
    dy/dt = (r − a y) y      - positive growth rate for small y,
                               negative growth rate at large y
    dy/dt = r (1 − y/K) y    - standard form
    dy/dt = r (1 − y/K) y

ODE solver demo:  y′ = (1 − y/12) y,   0 ≤ y₀ ≤ 20

(1) For  y < 0        y is decreasing
         y = 0        y is constant
         0 < y < K    y is increasing
         y = K        y is constant
         K < y        y is decreasing
(2) There are two steady state (equilibrium - y does not change) solutions,
    corresponding to the critical points y = 0 and y = K.
(3) The equation is separable, and the solution to the IV problem y(0) = y₀ is
    y = y₀ K / (y₀ + (K − y₀) e^(−rt))        (BD p79, Eq 11)
(4) y = 0 is unstable
    y = K is stable
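A short numerical cross-check of the closed-form logistic solution (a sketch assuming numpy/scipy; r, K and y0 are illustrative values only):

import numpy as np
from scipy.integrate import solve_ivp

r, K, y0 = 1.0, 12.0, 2.0
t = np.linspace(0.0, 10.0, 101)

closed = y0*K / (y0 + (K - y0)*np.exp(-r*t))
numeric = solve_ivp(lambda t, y: r*(1 - y/K)*y, (0, 10), [y0], t_eval=t, rtol=1e-9).y[0]

print("max difference:", np.max(np.abs(closed - numeric)))
print("y(10) =", numeric[-1], " (approaching the stable equilibrium K = 12)")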
(Figure: solution curves of y′ = (1 − y) y for 0 ≤ t ≤ 5; y = 0.0 is a repeller, y = 1.0 is an attractor.)
Introduction to Bifurcations

    y′ = f(y, c)    (an autonomous system with a parameter c)

Question: Would a small change in c be associated with a small
change in the banded structure, and the associated solutions? Not always:
an equilibrium solution may split (i.e. bifurcate) into several equilibrium
solutions, or even vanish completely. In a natural system changes in c
could be brought about by external (e.g. environmental) factors - tracking
the changes in the nature of the solutions is called bifurcation theory.

Example:
    dy/dt = r (1 − y/K) y + Q    - logistic equation with a harvesting or
                                   stocking term Q
    dy/dt = (1 − y) y + c
    f(y, c) = (1 − y) y + c

Equilibrium populations:
    (1 − y) y + c = 0
    y₁ = 1/2 − (1/2)(1 + 4c)^(1/2),   y₂ = 1/2 + (1/2)(1 + 4c)^(1/2)

    no equilibria        if c < −1/4
    single equilibrium   if c = −1/4
    two equilibria       if c > −1/4

We expect a bifurcation event at c = −1/4.

(Figure: graphs of f(y, c) against y for several values of c, illustrating a saddle-node bifurcation.)
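The equilibria as functions of c can be tabulated directly; the sketch below (assuming numpy; the list of c values is illustrative) reproduces the counts claimed above and labels each root using the sign of f′:

import numpy as np

for c in (0.0, -0.1, -0.16, -0.25, -0.35):
    disc = 1 + 4*c
    if disc < 0:
        print(f"c = {c:5.2f}: no equilibria (population declines - extinction)")
    elif np.isclose(disc, 0.0):
        print(f"c = {c:5.2f}: single (semi-stable) equilibrium y = 0.5")
    else:
        y1 = 0.5 - 0.5*np.sqrt(disc)    # f'(y1) = 1 - 2*y1 > 0  -> repeller
        y2 = 0.5 + 0.5*np.sqrt(disc)    # f'(y2) = 1 - 2*y2 < 0  -> attractor
        print(f"c = {c:5.2f}: y1 = {y1:.3f} (repeller),  y2 = {y2:.3f} (attractor)")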
Critical harvesting, c = −0.25:
    y′ = (1 − y) y − 0.25 = −(y − 0.5)²
    y₁ = y₂ = 0.5  (the two equilibria merge)

Heavy harvesting, c = −0.30:
    y′ = (1 − y) y − 0.3 = −(y − 0.5)² − 0.05 < 0
    No equilibria - extinction.

(Figure: solution curves for c = −0.25 and c = −0.30 on 0 ≤ t ≤ 5.)
No harvesting, c = 0:
    y′ = (1 − y) y
    y = 0.0 (repeller),   y = 1.0 (attractor)

Mild harvesting, c = −0.16:
    y′ = (1 − y) y − 0.16 = −(y − 0.2)(y − 0.8)
    y = 0.2 (repeller),   y = 0.8 (attractor)
    Solutions starting below y = 0.2 decrease - extinction.

(Figure: solution curves for c = 0 and c = −0.16 on 0 ≤ t ≤ 5.)
Bifurcation Diagram (saddle-node)

(Figure: y-coordinate of the equilibrium points plotted against the parameter c.
The upper branch (attractor) and lower branch (repeller) meet at the bifurcation
point c = −0.25, y = 0.5; for c < −0.25 there are no equilibria.)
Equilibrium solutions can merge, split and disappear in different ways. Consider

    dy/dt = (c − y²) y        (non-linear, autonomous)

Equilibrium populations:
    (c − y²) y = 0
    y₁ = 0,   y₂ = √c,   y₃ = −√c

    c < 0:  one equilibrium solution (only y₁ = 0)
    c > 0:  three equilibrium solutions

(Figure: graphs of f(y, c) = (c − y²) y against y for c = −1, 0, +1, illustrating a pitchfork bifurcation.)
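The same classification can be automated for the pitchfork case; a sketch (assuming numpy) that labels each equilibrium by the sign of df/dy (negative means attractor, positive means repeller):

import numpy as np

dfdy = lambda y, c: c - 3*y**2            # derivative of f(y) = (c - y^2) y

for c in (-1.0, 2.0):
    eq = [0.0] if c < 0 else [0.0, np.sqrt(c), -np.sqrt(c)]
    labels = ["attractor" if dfdy(y, c) < 0 else "repeller" for y in eq]
    print(f"c = {c:4.1f}:", ", ".join(f"y = {y: .3f} ({lab})" for y, lab in zip(eq, labels)))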
c = −1 (before bifurcation):
    y′ = (−1 − y²) y
    y₁ = 0.0 (attractor)

c = 2 (after bifurcation):
    y′ = (2 − y²) y
    y₁ = 0.0 (repeller)
    y₂ = √2 ≈ 1.414 (attractor)
    y₃ = −√2 ≈ −1.414 (attractor)

(Figure: solution curves for c = −1 and c = 2 on 0 ≤ t ≤ 5.)
Bifurcation Diagram (pitchfork)

(Figure: y-coordinate of the equilibrium points against the parameter c. For c < 0
the single branch y = 0 is an attractor; at the bifurcation point c = 0 it becomes a
repeller and two new attracting branches y = ±√c emerge.)
Note on the concept of semi-stable solutions (BD page 84)

(Figure: solution curves near a semi-stable equilibrium y₁ = 0.0, which is
attracted from above and repelled from below.)
Dynamical Systems and Orbits of Solutions

Dynamical processes are described by a collection of "state"
variables (the dependent variables), an independent variable
(the time t), and differential equations connecting the state
variables.

The time evolution of a first order dynamical system involving
two state variables will be governed by two coupled first order DEs:
    dx/dt = x′ = f(x, y, t)
    dy/dt = y′ = g(x, y, t)
If f and g do not depend on t, the system is said to be autonomous.
For any solutions x(t), y(t) of the above system,
the parametric plot
r = (x(t),y(t))
in the xy state plane is the "orbit" of the solution.
The graphs of x(t) in the tx plane, and y(t) in the ty plane
are called the component curves.
Similarly, one can consider first order systems with n state
variables - we expect that such systems will be governed by
n coupled first order DEs (More on systems later in the course).
We consider next the simplest models of the "cascade" type
which can be solved by elementary means with the theory we
have already developed.
Compartmental Models
(Figure: five compartments x1, ..., x5 with external inputs I1 (into x1) and I2 (into x3),
and flows k1x1, k2x2, k3x3, k4x3 and k5x5 between and out of the boxes.)

Balance Law:  net rate = rate in − rate out

    x1′ = I1 − k1 x1
    x2′ = −k2 x2
    x3′ = I2 + k1 x1 + k2 x2 − k3 x3 − k4 x3
    x4′ = k3 x3
    x5′ = k4 x3 − k5 x5

A linear cascade of ODEs, solved sequentially.

** A substance exits from a box at a rate proportional to the amount in the box,
   and if it enters another box, does so at the same rate.
** No directed chain of arrows begins and ends in the same box (hence "cascade").
Example (cold pills)

** Cold pills dissolve and release medication (antihistamine)
   into the gastrointestinal tract (GI tract).
** The medication diffuses into the blood stream, and the blood stream
   takes the medication to the site where it has a therapeutic effect.
** The medication is then cleared from the blood by the kidneys and the liver.
This can be modelled as a linear cascade:

* Suppose there are A units of drug in the GI tract at time t = 0 and x(t) at time t,
  and the medication moves out of the GI tract into the blood at a rate k1 x(t):
      dx/dt = −k1 x(t),   x(0) = A
* Suppose y(t) is the amount of medication in the blood at time t. This material moves
  out of the blood into the kidneys and liver, which clear the blood of foreign material
  at a rate k2 y(t):
      dy/dt = k1 x(t) − k2 y(t),   y(0) = 0

(Figure: two compartments, GI tract with x(t) and Blood with y(t), connected by the
flow k1x, with clearance k2y out of the blood.)
    dx/dt + k1 x = 0   ⇒   x(t) = A e^(−k1 t)
Note: x(t) → 0 as t → ∞ (the GI tract is eventually cleared of medication).

    dy/dt + k2 y = k1 A e^(−k1 t)
    d/dt (e^(k2 t) y) = k1 A e^((k2 − k1) t)
    e^(k2 t) y = (k1 A/(k2 − k1)) e^((k2 − k1) t) + D
    y(0) = 0  ⇒  D = −k1 A/(k2 − k1)
    y(t) = (k1 A/(k1 − k2)) (e^(−k2 t) − e^(−k1 t))

y(t) (the medication in the blood) reaches a maximum, and then
decays at large t.
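A sketch comparing the closed form with a direct numerical solution of the cascade, and locating the peak blood level (assumes numpy/scipy; the values A = 6, k1 = 0.7, k2 = 0.2 are illustrative, not from the notes):

import numpy as np
from scipy.integrate import solve_ivp

A, k1, k2 = 6.0, 0.7, 0.2
t = np.linspace(0.0, 24.0, 481)

closed = k1*A*(np.exp(-k2*t) - np.exp(-k1*t)) / (k1 - k2)

rhs = lambda t, u: [-k1*u[0], k1*u[0] - k2*u[1]]        # u = [x, y]
numeric = solve_ivp(rhs, (0, 24), [A, 0.0], t_eval=t, rtol=1e-9).y[1]

print("max |closed - numeric| :", np.max(np.abs(closed - numeric)))
print("y peaks near t =", t[np.argmax(closed)], "  (analytically at ln(k1/k2)/(k1 - k2))")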
What happens if we keep taking cold tablets at regular intervals?
Now we assume that the amount of medication is zero at time t = 0,
and that it is introduced into the GI tract at a prescribed rate I(t).
The DE cascade now looks like:
    dx/dt = −k1 x(t) + I(t),   x(0) = 0
    dy/dt = k1 x(t) − k2 y(t),   y(0) = 0
This can be solved for simple forms of I(t). For more complicated
forms use an ODE solver.
Try combinations of:
Heaviside step function:  u_{t0}(t) = H(t − t0) = step(t, t0)
    step(t, t0) = 0 for t < t0,  = 1 for t ≥ t0
Square wave function:  sqwave(t, tp, tw)
(Figure: the Heaviside step function H(t − t0), equal to 0 for t < t0 and 1 for t ≥ t0.)

EX: Look for solutions to the problem where pills are taken
periodically every 6 hours, and each dose delivers 6 units
of medication over the next half an hour:
    y′ = −0.7 y + 12 sqwave(t, 6, 0.5)
    z′ = 0.7 y − k2 z
Investigate the effect of different clearance coefficients
    k2 = 0.002, 0.02, 0.2.

(Figure: the square wave function sqwave(t, tp, tw), equal to 1 on the first tw of each
period tp, and 0 otherwise.)
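A possible way to set up this exercise numerically (a sketch assuming numpy/scipy; the sqwave and step definitions below are my reading of the figures above, with the pulse occupying the first tw of each period tp):

import numpy as np
from scipy.integrate import solve_ivp

step = lambda t, t0: np.where(t >= t0, 1.0, 0.0)                # shown for completeness
sqwave = lambda t, tp, tw: np.where((t % tp) < tw, 1.0, 0.0)    # on for tw at the start of each period tp

def rhs(t, u, k2):
    y, z = u                                    # y: GI tract, z: blood
    return [-0.7*y + 12*sqwave(t, 6.0, 0.5), 0.7*y - k2*z]

t = np.linspace(0.0, 72.0, 2001)
for k2 in (0.002, 0.02, 0.2):
    sol = solve_ivp(rhs, (0, 72), [0.0, 0.0], args=(k2,), t_eval=t, max_step=0.25)
    print(f"k2 = {k2:5.3f}:  blood level after 72 h = {sol.y[1, -1]:7.2f}")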
Appendix (Notation)

Suppose two functions are denoted by f and g, and g is positive.
We are considering the behaviour of these functions as some limit is approached.

(1) If there exists a constant k such that f < k g as the limit is approached, then
        f = O(g)
    f = O(1) means f is bounded.
(2) If f/g → 0 as the limit is approached, then
        f = o(g)
    f = o(1) means f → 0.
(3) If f/g → l ≠ 0 as the limit is approached, then
        f ~ l g
Appendix (Exact Equations)
(connection to vector calculus)

Consider a 2-D vector field
    F(x, y) = M(x, y) i + N(x, y) j
in a simply connected region R in which M, N, M_y and N_x are continuous.
The vector field is irrotational if and only if
    curl F = 0        (see vector calculus notes)
which is equivalent to the condition
    ∂M/∂y = ∂N/∂x
In this case a scalar potential φ exists such that
    F = grad φ        (see vector calculus notes)
The curves φ(x, y) = const are in fact the solutions to the DE
    M dx + N dy = 0