The function $f(t) = t^{-1/2}$ is infinite at $t = 0$, and so is not piecewise continuous there, but, as we shall see, its transform does exist.
Now we give the proof that if the conditions (1.3) and (1.4) hold, then the Laplace
transform exists.
\[
\mathcal{L}\{f(t)\} = \int_0^\infty e^{-st} f(t)\,dt = \int_0^T e^{-st} f(t)\,dt + \int_T^\infty e^{-st} f(t)\,dt.
\]
The first integral exists because $f(t)$ is piecewise continuous. For $t \ge T$ we have, by (1.4),
\[
\left|e^{-st} f(t)\right| \le e^{-st} M e^{ct} = M e^{-(s-c)t},
\]
so that
\[
\left|\int_T^\infty e^{-st} f(t)\,dt\right| \le M \int_T^\infty e^{-(s-c)t}\,dt = M\left[\frac{-e^{-(s-c)t}}{s-c}\right]_T^\infty,
\]
which converges provided that $-(s-c)$ is negative, i.e., $s > c$. This proves the result.
1.2 Examples of Laplace transforms.
Example 1. Evaluate $\mathcal{L}\{1\}$.
\[
\mathcal{L}\{1\} = \int_0^\infty e^{-st}\cdot 1\,dt = \left[-\frac{1}{s}e^{-st}\right]_0^\infty = \frac{1}{s}, \quad \text{provided } s > 0.
\]
\[
\mathcal{L}\{1\} = \frac{1}{s}, \quad s > 0,
\]
i.e., if $f(t) = 1$, then $F(s) = \dfrac{1}{s}$.
4 CHAPTER 1. LAPLACE TRANSFORMS
Example 2. Evaluate $\mathcal{L}\{t\}$.
\[
\mathcal{L}\{t\} = \int_0^\infty e^{-st}\,t\,dt = \left[t\left(-\frac{1}{s}e^{-st}\right)\right]_0^\infty - \int_0^\infty 1\cdot\left(-\frac{1}{s}e^{-st}\right)dt
= 0 - \left[\frac{1}{s^2}e^{-st}\right]_0^\infty = \frac{1}{s^2}, \quad \text{provided } s > 0.
\]
\[
\mathcal{L}\{t\} = \frac{1}{s^2}, \quad s > 0.
\]
Example 3. Evaluate $\mathcal{L}\{t^n\}$, where $n$ is a positive integer.
\[
\mathcal{L}\{t^n\} = \int_0^\infty e^{-st} t^n\,dt = \left[t^n\left(-\frac{1}{s}e^{-st}\right)\right]_0^\infty - \int_0^\infty n t^{n-1}\left(-\frac{1}{s}e^{-st}\right)dt
= 0 + \frac{n}{s}\int_0^\infty e^{-st} t^{n-1}\,dt, \quad \text{provided } s > 0.
\]
\[
\mathcal{L}\{t^n\} = \frac{n}{s}\,\mathcal{L}\{t^{n-1}\}.
\]
Put $n = 2$: $\mathcal{L}\{t^2\} = \dfrac{2}{s}\,\mathcal{L}\{t\} = \dfrac{2}{s^3}$.

Put $n = 3$: $\mathcal{L}\{t^3\} = \dfrac{3}{s}\,\mathcal{L}\{t^2\} = \dfrac{3!}{s^4}$.

Continuing this process we see that
\[
\mathcal{L}\{t^n\} = \frac{n!}{s^{n+1}}, \quad s > 0.
\]
Example 4. Evaluate $\mathcal{L}\{e^{at}\}$.
\[
\mathcal{L}\{e^{at}\} = \int_0^\infty e^{-st} e^{at}\,dt = \left[\frac{-e^{-(s-a)t}}{s-a}\right]_0^\infty = 0 + \frac{1}{s-a} = \frac{1}{s-a}, \quad s > a.
\]
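The four transforms derived so far can be spot-checked with a computer algebra system. The sketch below uses Python's sympy library (an aid added for checking; it is not part of the text):

```python
from sympy import symbols, laplace_transform, exp, factorial, simplify

t, s = symbols('t s', positive=True)
a = symbols('a', real=True)
n = 5  # any fixed positive integer will do for the t^n check

# L{1} = 1/s and L{t} = 1/s^2
F1 = laplace_transform(1, t, s, noconds=True)
Ft = laplace_transform(t, t, s, noconds=True)

# L{t^n} = n!/s^(n+1), checked here for n = 5
Ftn = laplace_transform(t**n, t, s, noconds=True)

# L{e^{at}} = 1/(s - a)
Fexp = laplace_transform(exp(a*t), t, s, noconds=True)

assert F1 == 1/s
assert Ft == 1/s**2
assert simplify(Ftn - factorial(n)/s**(n + 1)) == 0
assert simplify(Fexp - 1/(s - a)) == 0
```

Each `assert` re-derives one boxed result above; `noconds=True` suppresses the convergence conditions ($s > 0$, $s > a$) that the text states explicitly.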
Example 5. Evaluate $\mathcal{L}\{\cos at\}$ and $\mathcal{L}\{\sin at\}$.
\[
\mathcal{L}\{\cos at\} = \int_0^\infty e^{-st}\cos at\,dt, \qquad \mathcal{L}\{\sin at\} = \int_0^\infty e^{-st}\sin at\,dt.
\]
Now $\cos at + i\sin at = e^{iat}$, so that
\[
\mathcal{L}\{e^{iat}\} = \mathcal{L}\{\cos at\} + i\,\mathcal{L}\{\sin at\}
= \int_0^\infty e^{-st} e^{iat}\,dt = \left[\frac{-e^{-(s-ia)t}}{s-ia}\right]_0^\infty = 0 + \frac{1}{s-ia} = \frac{s+ia}{s^2+a^2}, \quad s > 0.
\]
Equating real and imaginary parts,
\[
\mathcal{L}\{\cos at\} = \frac{s}{s^2+a^2}, \qquad \mathcal{L}\{\sin at\} = \frac{a}{s^2+a^2}.
\]
Example 6. Evaluate $\mathcal{L}\{\cos^2 t\}$.

Now $\cos^2 t = \frac{1}{2}(1 + \cos 2t)$, so
\[
\mathcal{L}\{\cos^2 t\} = \frac{1}{2}\mathcal{L}\{1\} + \frac{1}{2}\mathcal{L}\{\cos 2t\} = \frac{1}{2s} + \frac{s}{2(s^2+4)}.
\]
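The trigonometric transforms above, including the $\cos^2 t$ identity trick, can be verified with sympy (an added checking aid, not part of the text):

```python
from sympy import symbols, laplace_transform, sin, cos, simplify

t, s, a = symbols('t s a', positive=True)

Fc = laplace_transform(cos(a*t), t, s, noconds=True)    # expect s/(s^2 + a^2)
Fs = laplace_transform(sin(a*t), t, s, noconds=True)    # expect a/(s^2 + a^2)
Fc2 = laplace_transform(cos(t)**2, t, s, noconds=True)  # expect 1/(2s) + s/(2(s^2+4))

assert simplify(Fc - s/(s**2 + a**2)) == 0
assert simplify(Fs - a/(s**2 + a**2)) == 0
assert simplify(Fc2 - (1/(2*s) + s/(2*(s**2 + 4)))) == 0
```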
Example 7. Evaluate $\mathcal{L}\{\cosh at\}$ and $\mathcal{L}\{\sinh at\}$.

Now $\cosh at = \frac{1}{2}(e^{at} + e^{-at})$, so
\[
\mathcal{L}\{\cosh at\} = \frac{1}{2}\mathcal{L}\{e^{at}\} + \frac{1}{2}\mathcal{L}\{e^{-at}\}
= \frac{1}{2}\,\frac{1}{s-a} + \frac{1}{2}\,\frac{1}{s+a}, \quad s > a \text{ and } s > -a,
\]
\[
= \frac{s}{s^2-a^2}, \quad s > |a|.
\]
Similarly,
\[
\mathcal{L}\{\sinh at\} = \frac{1}{2}\mathcal{L}\{e^{at}\} - \frac{1}{2}\mathcal{L}\{e^{-at}\} = \frac{a}{s^2-a^2}, \quad s > |a|.
\]
Example 8. Evaluate $\mathcal{L}\{t^2 e^{at}\}$.
\[
\mathcal{L}\{t^2 e^{at}\} = \int_0^\infty e^{-st} t^2 e^{at}\,dt = \int_0^\infty t^2 e^{-(s-a)t}\,dt
\]
\[
= \left[t^2\,\frac{-e^{-(s-a)t}}{s-a}\right]_0^\infty - \int_0^\infty 2t\left(\frac{-e^{-(s-a)t}}{s-a}\right)dt
= 0 + \frac{2}{s-a}\int_0^\infty t e^{-(s-a)t}\,dt, \quad s > a
\]
\[
= \frac{2}{s-a}\left\{\left[t\,\frac{-e^{-(s-a)t}}{s-a}\right]_0^\infty + \int_0^\infty \frac{e^{-(s-a)t}}{s-a}\,dt\right\}
= 0 + \frac{2}{(s-a)^2}\left[\frac{-e^{-(s-a)t}}{s-a}\right]_0^\infty, \quad s > a
\]
\[
= \frac{2}{(s-a)^3}\,[0+1] = \frac{2}{(s-a)^3}, \quad s > a.
\]
Example 9. Evaluate $\mathcal{L}\{f(t)\}$ where
\[
f(t) = \begin{cases} 0 & 0 \le t < 2 \\ 1 & t \ge 2 \end{cases}.
\]
\[
\mathcal{L}\{f(t)\} = \int_0^\infty e^{-st} f(t)\,dt = \int_0^2 e^{-st}\cdot 0\,dt + \int_2^\infty e^{-st}\cdot 1\,dt
= 0 + \left[\frac{-e^{-st}}{s}\right]_2^\infty = \frac{e^{-2s}}{s}, \quad s > 0.
\]
Example 10. Evaluate $\mathcal{L}\{f(t)\}$ where $f(t)$ is as shown in the diagram.
\[
f(t) = \begin{cases} \dfrac{k}{c}\,t & 0 \le t < c \\[4pt] 0 & t \ge c \end{cases}
\]
[Diagram: the graph of $f(t)$ rises linearly from the origin to height $k$ at $t = c$, then drops to zero.]
\[
\mathcal{L}\{f(t)\} = \int_0^c e^{-st}\,\frac{k}{c}\,t\,dt = \frac{k}{c}\left\{\left[t\left(\frac{-e^{-st}}{s}\right)\right]_0^c - \int_0^c 1\cdot\frac{-e^{-st}}{s}\,dt\right\}
\]
\[
= \frac{k}{c}\left\{\frac{-ce^{-sc}}{s} - 0 + \left[\frac{-e^{-st}}{s^2}\right]_0^c\right\}
= \frac{k}{c}\left\{\frac{-ce^{-sc}}{s} - \frac{e^{-sc}}{s^2} + \frac{1}{s^2}\right\}.
\]
1.3 The gamma function.
The gamma function $\Gamma(x)$ is defined by
\[
\Gamma(x) = \int_0^\infty e^{-t} t^{x-1}\,dt, \quad x > 0. \tag{1.5}
\]
This definite integral converges when $x$ is positive and so defines a function of $x$ for positive values of $x$.

Now
\[
\Gamma(1) = \int_0^\infty e^{-t}\,dt = 1 \tag{1.6}
\]
and integrating by parts we see that
\[
\Gamma(x+1) = \int_0^\infty t^x e^{-t}\,dt = x\int_0^\infty t^{x-1} e^{-t}\,dt + \left[-t^x e^{-t}\right]_0^\infty = x\int_0^\infty t^{x-1} e^{-t}\,dt,
\]
and hence
\[
\Gamma(x+1) = x\,\Gamma(x), \tag{1.7}
\]
so that when $x$ is a positive integer
\[
\Gamma(2) = 1\cdot\Gamma(1) = 1, \quad \Gamma(3) = 2\cdot\Gamma(2) = 2, \quad \ldots, \quad \Gamma(n+1) = n!
\]
In particular $0! = \Gamma(1) = 1$. For this reason the gamma function is often referred to as the generalized factorial function.

Since $\Gamma(x)$ is defined for $x > 0$, Eq. (1.7) shows that it is defined for $0 > x > -1$, and repeated use of Eq. (1.7) in the form $\Gamma(x) = \frac{1}{x}\Gamma(x+1)$ shows that $\Gamma(x)$ is defined in the intervals $-2 < x < -1$, $-3 < x < -2$, etc. However, Eqs. (1.6) and (1.7) show that formally $\Gamma(0) = \frac{\Gamma(1)}{0}$, so the gamma function becomes infinite when $x$ is zero or a negative integer.
In the definition (1.5) make the substitution $t = u^2$; then
\[
\Gamma(x) = 2\int_0^\infty u^{2x-1} e^{-u^2}\,du,
\]
and putting $x = \frac{1}{2}$ we obtain
\[
\Gamma\!\left(\tfrac{1}{2}\right) = 2\int_0^\infty e^{-u^2}\,du. \tag{1.8}
\]
But, for a definite integral, $\int_0^\infty e^{-u^2}\,du = \int_0^\infty e^{-v^2}\,dv$, so that
\[
\left[\Gamma\!\left(\tfrac{1}{2}\right)\right]^2 = 4\int_0^\infty e^{-u^2}\,du \int_0^\infty e^{-v^2}\,dv
= 4\int_0^\infty\!\!\int_0^\infty e^{-(u^2+v^2)}\,du\,dv. \tag{1.9}
\]
To evaluate this integral we transform to polar coordinates $u = r\cos\theta$, $v = r\sin\theta$, and the double integral (1.9) becomes
\[
\left[\Gamma\!\left(\tfrac{1}{2}\right)\right]^2 = 4\int_0^{\pi/2}\!\!\int_0^\infty e^{-r^2} r\,dr\,d\theta = \pi.
\]
\[
\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}. \tag{1.10}
\]
Using Eq. (1.7) then shows that
\[
\Gamma\!\left(\tfrac{3}{2}\right) = \tfrac{1}{2}\,\Gamma\!\left(\tfrac{1}{2}\right) = \tfrac{1}{2}\sqrt{\pi}, \tag{1.11}
\]
and, from $\Gamma(x) = \frac{1}{x}\Gamma(x+1)$ with $x = -\frac{1}{2}$,
\[
\Gamma\!\left(-\tfrac{1}{2}\right) = \frac{\Gamma\!\left(\tfrac{1}{2}\right)}{-\tfrac{1}{2}} = -2\sqrt{\pi}. \tag{1.12}
\]
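The factorial property and the half-integer values just derived can be checked numerically with Python's standard `math.gamma` (a checking aid added here, not part of the text):

```python
import math

# Gamma(n+1) = n! for non-negative integers n  (Eq. 1.7 iterated)
for n in range(6):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# the recursion Gamma(x+1) = x*Gamma(x) at a non-integer point
x = 0.3
assert math.isclose(math.gamma(x + 1), x * math.gamma(x))

# Gamma(1/2) = sqrt(pi)               (Eq. 1.10)
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
# Gamma(3/2) = sqrt(pi)/2             (Eq. 1.11)
assert math.isclose(math.gamma(1.5), math.sqrt(math.pi) / 2)
# Gamma(-1/2) = -2*sqrt(pi)           (Eq. 1.12)
assert math.isclose(math.gamma(-0.5), -2 * math.sqrt(math.pi))
```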
Example 1. Evaluate $\mathcal{L}\{t^a\}$, where $a$ is any number satisfying $a > -1$.
\[
\mathcal{L}\{t^a\} = \int_0^\infty e^{-st} t^a\,dt.
\]
Put $st = u$, i.e., $t = \frac{u}{s}$, $dt = \frac{du}{s}$. Then
\[
\mathcal{L}\{t^a\} = \int_0^\infty e^{-u}\,\frac{u^a}{s^a}\,\frac{du}{s} = \frac{1}{s^{a+1}}\int_0^\infty e^{-u} u^a\,du = \frac{\Gamma(a+1)}{s^{a+1}},
\]
from Eq. (1.5). Note that we must have $a > -1$ for the integral to converge at $u = 0$. Thus we have the result
\[
\mathcal{L}\{t^a\} = \frac{\Gamma(a+1)}{s^{a+1}} \quad \text{for all } a > -1. \tag{1.13}
\]
When $a$ is a positive integer this corresponds to the result of Example 3 in Section 1.2.
Example 2. Evaluate $\mathcal{L}\{t^{-1/2}\}$ and $\mathcal{L}\{t^{1/2}\}$.

From Eqs. (1.10), (1.11) and (1.13) we find that
\[
\mathcal{L}\{t^{-1/2}\} = \frac{\Gamma(\frac{1}{2})}{s^{1/2}} = \sqrt{\frac{\pi}{s}}
\]
and
\[
\mathcal{L}\{t^{1/2}\} = \frac{\Gamma(\frac{3}{2})}{s^{3/2}} = \frac{\frac{1}{2}\Gamma(\frac{1}{2})}{s^{3/2}} = \frac{1}{2}\sqrt{\frac{\pi}{s^3}}.
\]
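Result (1.13) for non-integer powers can be confirmed symbolically with sympy (an added checking sketch, not part of the text):

```python
from sympy import symbols, laplace_transform, sqrt, pi, simplify, Rational

t, s = symbols('t s', positive=True)

# L{t^a} = Gamma(a+1)/s^(a+1) at a = -1/2 and a = 1/2  (Eq. 1.13)
F_minus = laplace_transform(t**Rational(-1, 2), t, s, noconds=True)
F_plus = laplace_transform(t**Rational(1, 2), t, s, noconds=True)

assert simplify(F_minus - sqrt(pi/s)) == 0
assert simplify(F_plus - sqrt(pi)/(2*s**Rational(3, 2))) == 0
```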
Problem Set 1.3

In problems 1–14 use the definition (1.2) to find $\mathcal{L}\{f(t)\}$.

1. $f(t) = \begin{cases} -1 & 0 \le t < 1 \\ 1 & t \ge 1 \end{cases}$   2. $f(t) = \begin{cases} 4 & 0 \le t < 2 \\ 0 & t \ge 2 \end{cases}$

3. $f(t) = \begin{cases} t & 0 \le t < 1 \\ 1 & t \ge 1 \end{cases}$   4. $f(t) = \begin{cases} 2t+1 & 0 \le t < 1 \\ 0 & t \ge 1 \end{cases}$

5. $f(t) = \begin{cases} \sin t & 0 \le t < \pi \\ 0 & t \ge \pi \end{cases}$   6. $f(t) = \begin{cases} 0 & 0 \le t < \frac{\pi}{2} \\ \cos t & t \ge \frac{\pi}{2} \end{cases}$

7. $f(t) = e^{t+7}$   8. $f(t) = e^{2t-5}$   9. $f(t) = te^{4t}$   10. $f(t) = t^2 e^{-3t}$

11. $f(t) = e^{-t}\sin t$   12. $f(t) = e^{t}\cos t$   13. $f(t) = t\cos t$   14. $f(t) = t\sin t$

In problems 15–40 use the results of the examples in Sections 1.2 and 1.3 to evaluate $\mathcal{L}\{f(t)\}$.

15. $f(t) = 2t^4$   16. $f(t) = t^5$   17. $f(t) = 4t - 10$

18. $f(t) = 7t + 3$   19. $f(t) = t^2 + 6t - 3$   20. $f(t) = -4t^2 + 16t + 9$

21. $f(t) = (t+1)^3$   22. $f(t) = (2t-1)^3$   23. $f(t) = 1 + e^{4t}$

24. $f(t) = t^2 - e^{-9t} + 5$   25. $f(t) = (1 + e^{2t})^2$   26. $f(t) = (e^t - e^{-t})^2$

27. $f(t) = 4t^2 - 5\sin 3t$   28. $f(t) = \cos 5t + \sin 2t$   29. $f(t) = t\sinh t$

30. $f(t) = e^{t}\sinh t$   31. $f(t) = e^{-t}\cosh t$   32. $f(t) = \sin 2t\cos 2t$

33. $f(t) = \sin^2 t$   34. $f(t) = \cos t\cos 2t$   35. $f(t) = \sin t\sin 2t$

36. $f(t) = \sin t\cos 2t$   37. $f(t) = \sin^3 t$   38. $f(t) = t^{3/2}$

39. $f(t) = t^{1/4}$   40. $f(t) = (t^{1/2} + 1)^2$
1.4 Inverse transforms and partial fractions.
The Laplace transform of a function $f(t)$ is denoted by $\mathcal{L}\{f(t)\} = F(s)$. If we are given $F(s)$ and are required to find the corresponding function $f(t)$, then $f(t)$ is the inverse Laplace transform of $F(s)$ and we write
\[
f(t) = \mathcal{L}^{-1}\{F(s)\}.
\]
From the examples of Section 1.2 we can find the following inverse Laplace transforms:
\[
\mathcal{L}^{-1}\left\{\frac{1}{s}\right\} = 1, \qquad \mathcal{L}^{-1}\left\{\frac{n!}{s^{n+1}}\right\} = t^n,
\]
\[
\mathcal{L}^{-1}\left\{\frac{1}{s-a}\right\} = e^{at}, \qquad \mathcal{L}^{-1}\left\{\frac{a}{s^2+a^2}\right\} = \sin at,
\]
\[
\mathcal{L}^{-1}\left\{\frac{s}{s^2+a^2}\right\} = \cos at, \qquad \mathcal{L}^{-1}\left\{\frac{a}{s^2-a^2}\right\} = \sinh at,
\]
\[
\mathcal{L}^{-1}\left\{\frac{s}{s^2-a^2}\right\} = \cosh at.
\]
Example 1. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{5}{s+3}\right\}$.
\[
\mathcal{L}^{-1}\left\{\frac{5}{s+3}\right\} = 5\,\mathcal{L}^{-1}\left\{\frac{1}{s+3}\right\} = 5e^{-3t}.
\]
Example 2. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{2}{s^2+16}\right\}$.
\[
\mathcal{L}^{-1}\left\{\frac{2}{s^2+16}\right\} = \frac{1}{2}\,\mathcal{L}^{-1}\left\{\frac{4}{s^2+16}\right\} = \frac{1}{2}\sin 4t.
\]
Example 3. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{s+1}{s^2+1}\right\}$.
\[
\mathcal{L}^{-1}\left\{\frac{s+1}{s^2+1}\right\} = \mathcal{L}^{-1}\left\{\frac{s}{s^2+1}\right\} + \mathcal{L}^{-1}\left\{\frac{1}{s^2+1}\right\} = \cos t + \sin t.
\]
Example 4. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{1}{s^4}\right\}$.
\[
\mathcal{L}^{-1}\left\{\frac{1}{s^4}\right\} = \frac{1}{3!}\,\mathcal{L}^{-1}\left\{\frac{3!}{s^4}\right\} = \frac{1}{6}t^3.
\]
In order to find inverse transforms we very often have to perform a partial fraction decomposition. Here are some typical examples:
Example 5. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{s^2-10s-25}{s^3-25s}\right\}$.
\[
\frac{s^2-10s-25}{s^3-25s} = \frac{s^2-10s-25}{s(s^2-25)} = \frac{s^2-10s-25}{s(s-5)(s+5)}.
\]
Write the last fraction as
\[
\frac{s^2-10s-25}{s(s-5)(s+5)} = \frac{A}{s} + \frac{B}{s-5} + \frac{C}{s+5}.
\]
We need to find $A$, $B$ and $C$. Multiply both sides of the equation by $s(s-5)(s+5)$ and we obtain
\[
s^2-10s-25 = A(s-5)(s+5) + Bs(s+5) + Cs(s-5).
\]
The denominator is zero when $s = 0, +5, -5$, so put these values into the above equation.
\[
s = 0:\ -25 = A(-5)(5) \;\Rightarrow\; A = 1.
\]
\[
s = 5:\ 25 - 50 - 25 = B(5)(10) \;\Rightarrow\; B = -1.
\]
\[
s = -5:\ 25 + 50 - 25 = C(-5)(-10) \;\Rightarrow\; C = 1.
\]
Hence
\[
\frac{s^2-10s-25}{s(s-5)(s+5)} = \frac{1}{s} - \frac{1}{s-5} + \frac{1}{s+5}.
\]
\[
\mathcal{L}^{-1}\left\{\frac{s^2-10s-25}{s(s-5)(s+5)}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{s}\right\} - \mathcal{L}^{-1}\left\{\frac{1}{s-5}\right\} + \mathcal{L}^{-1}\left\{\frac{1}{s+5}\right\} = 1 - e^{5t} + e^{-5t}.
\]
Example 6. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{2s-1}{s^3(s+1)}\right\}$.

Write
\[
\frac{2s-1}{s^3(s+1)} = \frac{A}{s} + \frac{B}{s^2} + \frac{C}{s^3} + \frac{D}{s+1}.
\]
Multiply both sides of the equation by $s^3(s+1)$ to get
\[
2s-1 = As^2(s+1) + Bs(s+1) + C(s+1) + Ds^3.
\]
Put $s = 0$: $-1 = C$, so $C = -1$.
Put $s = -1$: $-3 = -D$, so $D = 3$,
i.e., $2s-1 = A(s^3+s^2) + B(s^2+s) - s - 1 + 3s^3$.

$s^3$ terms: $A + 3 = 0$, so $A = -3$.
$s^2$ terms: $A + B = 0$, so $B = 3$.
$s$ terms: $B - 1 = 2$, so $B = 3$.
\[
\frac{2s-1}{s^3(s+1)} = -\frac{3}{s} + \frac{3}{s^2} - \frac{1}{s^3} + \frac{3}{s+1}.
\]
\[
\mathcal{L}^{-1}\left\{\frac{2s-1}{s^3(s+1)}\right\} = -3\,\mathcal{L}^{-1}\left\{\frac{1}{s}\right\} + 3\,\mathcal{L}^{-1}\left\{\frac{1}{s^2}\right\} - \mathcal{L}^{-1}\left\{\frac{1}{s^3}\right\} + 3\,\mathcal{L}^{-1}\left\{\frac{1}{s+1}\right\}
= -3 + 3t - \frac{1}{2}t^2 + 3e^{-t}.
\]
Example 7. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{s+2}{(s+1)(s^2+4)}\right\}$.

Write
\[
\frac{s+2}{(s+1)(s^2+4)} = \frac{A}{s+1} + \frac{Bs+C}{s^2+4}.
\]
Multiply both sides by $(s+1)(s^2+4)$ to get
\[
s+2 = A(s^2+4) + (Bs+C)(s+1).
\]
Put $s = -1$: $1 = 5A$, so $A = \frac{1}{5}$.
$s^2$ terms: $0 = A + B$, so $B = -\frac{1}{5}$.
$s$ terms: $1 = B + C$, so $C = \frac{6}{5}$.
\[
\frac{s+2}{(s+1)(s^2+4)} = \frac{\frac{1}{5}}{s+1} + \frac{-\frac{1}{5}s + \frac{6}{5}}{s^2+4}.
\]
\[
\mathcal{L}^{-1}\left\{\frac{s+2}{(s+1)(s^2+4)}\right\} = \frac{1}{5}\,\mathcal{L}^{-1}\left\{\frac{1}{s+1}\right\} - \frac{1}{5}\,\mathcal{L}^{-1}\left\{\frac{s}{s^2+4}\right\} + \frac{3}{5}\,\mathcal{L}^{-1}\left\{\frac{2}{s^2+4}\right\}
= \frac{1}{5}e^{-t} - \frac{1}{5}\cos 2t + \frac{3}{5}\sin 2t.
\]
Problem Set 1.4

Find $f(t)$ if $\mathcal{L}\{f(t)\}$ is given by

1. $\dfrac{s+12}{s^2+4s}$   2. $\dfrac{s-3}{s^2-1}$   3. $\dfrac{3s}{s^2+2s-8}$

4. $\dfrac{2s^2+5s-1}{s^3-s}$   5. $\dfrac{s+1}{s^3(s-1)(s+2)}$   6. $\dfrac{3s^2-2s-1}{(s-3)(s^2+1)}$
1.5 The First Shifting Theorem.
This theorem expands our ability to find Laplace transforms and their inverses.
The First Shifting Theorem: If $\mathcal{L}\{f(t)\} = F(s)$ when $s > c$, then
\[
\mathcal{L}\{e^{at} f(t)\} = F(s-a), \quad s > c + a. \tag{1.14}
\]
Proof:
\[
\mathcal{L}\{e^{at} f(t)\} = \int_0^\infty e^{-st} e^{at} f(t)\,dt = \int_0^\infty e^{-(s-a)t} f(t)\,dt = F(s-a).
\]
Example 1. Evaluate $\mathcal{L}\{e^{at} t^n\}$.

$\mathcal{L}\{t^n\} = \dfrac{n!}{s^{n+1}}$, so $\mathcal{L}\{e^{at} t^n\} = \dfrac{n!}{(s-a)^{n+1}}$.

Example 2. Evaluate $\mathcal{L}\{e^{at}\sin bt\}$.

$\mathcal{L}\{\sin bt\} = \dfrac{b}{s^2+b^2}$, so $\mathcal{L}\{e^{at}\sin bt\} = \dfrac{b}{(s-a)^2+b^2}$.

Example 3. Evaluate $\mathcal{L}\{e^{at}\cos bt\}$.

$\mathcal{L}\{\cos bt\} = \dfrac{s}{s^2+b^2}$, so $\mathcal{L}\{e^{at}\cos bt\} = \dfrac{s-a}{(s-a)^2+b^2}$.

Example 4. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{2}{s^2+2s+2}\right\}$.

Now $s^2+2s+2 = (s+1)^2+1$, so we require
\[
\mathcal{L}^{-1}\left\{\frac{2}{(s+1)^2+1}\right\} = 2\,\mathcal{L}^{-1}\left\{\frac{1}{(s+1)^2+1}\right\}.
\]
The quantity inside $\{\;\}$ is identical with the answer to Example 2 above with $a = -1$, $b = 1$.
\[
\mathcal{L}^{-1}\left\{\frac{2}{s^2+2s+2}\right\} = 2e^{-t}\sin t.
\]
Example 5. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{3s+9}{s^2+2s+10}\right\}$.

Now
\[
\frac{3s+9}{s^2+2s+10} = \frac{3(s+1)+6}{(s+1)^2+9} = \frac{3(s+1)}{(s+1)^2+9} + \frac{6}{(s+1)^2+9}.
\]
As in Examples 2 and 3 with $a = -1$, $b = 3$; hence
\[
\mathcal{L}^{-1}\left\{\frac{3s+9}{s^2+2s+10}\right\} = 3e^{-t}\cos 3t + 2e^{-t}\sin 3t.
\]
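The shifting theorem and Example 5 can be verified by computing the forward transforms with sympy (an added checking sketch, not part of the text):

```python
from sympy import symbols, laplace_transform, exp, sin, cos, simplify

t, s = symbols('t s', positive=True)

# First Shifting Theorem with f(t) = sin 3t, a = -1:
# L{e^{-t} sin 3t} = 3/((s+1)^2 + 9)
F = laplace_transform(exp(-t)*sin(3*t), t, s, noconds=True)
assert simplify(F - 3/((s + 1)**2 + 9)) == 0

# Example 5, checked forwards:
# L{3 e^{-t} cos 3t + 2 e^{-t} sin 3t} = (3s + 9)/(s^2 + 2s + 10)
F5 = laplace_transform(3*exp(-t)*cos(3*t) + 2*exp(-t)*sin(3*t), t, s,
                       noconds=True)
assert simplify(F5 - (3*s + 9)/(s**2 + 2*s + 10)) == 0
```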
Problem Set 1.5

Use partial fractions (if necessary) and the First Shifting Theorem to find the inverse Laplace transforms of the following functions:

1. $\dfrac{10-4s}{(s-2)^2}$   2. $\dfrac{s^2+s-2}{(s+1)^3}$   3. $\dfrac{s^3-7s^2+14s-9}{(s-1)^2(s-2)^3}$

4. $\dfrac{s^2-6s+7}{(s^2-4s+5)\,s}$   5. $\dfrac{2s-1}{s^2(s+1)^3}$   6. $\dfrac{3!}{(s-2)^4}$

Find the Laplace transforms of the following functions:

7. $\mathcal{L}\{te^{8t}\}$   8. $\mathcal{L}\{t^7 e^{5t}\}$   9. $\mathcal{L}\{e^{2t}\cos 4t\}$

10. $\mathcal{L}\{e^{3t}\sinh t\}$   11. $\mathcal{L}\left\{\dfrac{\sin 2t}{e^t}\right\}$   12. $\mathcal{L}\{e^{2t}\cos^2 2t\}$

13. $\mathcal{L}\{e^{3t}(t+2)^2\}$   14. $\mathcal{L}\{t^{1/2}(e^t + e^{2t})\}$
1.6 Step functions.
The unit step function $u_a(t)$ is defined as follows:
\[
u_a(t) = \begin{cases} 0 & \text{when } t < a \\ 1 & \text{when } t \ge a \end{cases} \qquad (a \ge 0). \tag{1.15}
\]
[Diagram: the graph of $u_a(t)$ is $0$ up to $t = a$ and jumps to $1$ there.]

When $a = 0$ we have
\[
u_0(t) = \begin{cases} 0 & t < 0 \\ 1 & t \ge 0 \end{cases}.
\]
Note that the function $f(t) = 1 - u_c(t)$ is given by
\[
f(t) = \begin{cases} 1 & 0 \le t < c \\ 0 & t \ge c \end{cases}.
\]
The difference between two step functions, i.e.,
\[
f(t) = u_a(t) - u_b(t) \qquad (b > a),
\]
has a graph of the form of a rectangular pulse of height $1$ between $t = a$ and $t = b$.
Step functions can be used to turn on or turn off portions of the graph of a function. For example, the function $f(t) = t^2$ when multiplied by $u_1(t)$ becomes
\[
f(t) = t^2 u_1(t) = \begin{cases} 0 & 0 \le t < 1 \\ t^2 & t \ge 1 \end{cases},
\]
so that its graph is zero up to $t = 1$ and follows the parabola thereafter.

Given the function $f(t) = t^2$, note the graphs of the following functions: $f(t) = t^2$ (the full parabola); $f(t) = t^2$, $t \ge 0$; $f(t-1)$, $t \ge 0$ (the parabola translated one unit to the right); and $f(t-1)u_1(t)$, $t \ge 0$ (the translated parabola with the portion from $0$ to $1$ turned off).

Hence, given a function $f(t)$, defined for $t \ge 0$, the graph of the function $f(t-a)u_a(t)$ consists of the graph of $f(t)$ translated through a distance $a$ to the right with the portion from $0$ to $a$ turned off, i.e., put equal to zero.
The Laplace transform of $u_a(t)$ is
\[
\mathcal{L}\{u_a(t)\} = \int_0^\infty e^{-st} u_a(t)\,dt = \int_a^\infty e^{-st}\,dt = \left[\frac{-e^{-st}}{s}\right]_a^\infty = \frac{e^{-as}}{s},
\]
i.e.,
\[
\mathcal{L}\{u_a(t)\} = \frac{e^{-as}}{s}, \quad (s > 0). \tag{1.16}
\]
Example 1. Write $f(t)$ in terms of unit step functions and find its Laplace transform where
\[
f(t) = \begin{cases} 1 & 0 \le t < 1 \\ 0 & 1 \le t < 2 \\ 1 & 2 \le t < 3 \\ 0 & t \ge 3 \end{cases}.
\]
Since $f(t)$ can be expressed as
\[
f(t) = u_0(t) - u_1(t) + u_2(t) - u_3(t),
\]
we then have
\[
\mathcal{L}\{f(t)\} = \frac{1}{s} - \frac{e^{-s}}{s} + \frac{e^{-2s}}{s} - \frac{e^{-3s}}{s} = \frac{1}{s}\left(1 - e^{-s}\right)\left(1 + e^{-2s}\right).
\]
Example 2. Represent the function shown in the diagram below in terms of unit step functions and find its Laplace transform.

[Diagram: a step graph taking the values $1$ on $[0,2)$, $-1$ on $[2,4)$, $3$ on $[4,6)$ and $2$ for $t \ge 6$.]
\[
f(t) = u_0(t) - 2u_2(t) + 4u_4(t) - u_6(t).
\]
\[
\mathcal{L}\{f(t)\} = \frac{1}{s} - \frac{2}{s}e^{-2s} + \frac{4}{s}e^{-4s} - \frac{1}{s}e^{-6s} = \frac{1}{s}\left(1 - 2e^{-2s} + 4e^{-4s} - e^{-6s}\right).
\]
Example 3. The function shown in the diagram below is periodic with period 3. Write the function in terms of unit step functions and find its Laplace transform.

[Diagram: on each period the function takes the values $1$, $\frac{1}{2}$, $0$ on successive unit intervals.]
\[
f(t) = u_0(t) - \tfrac{1}{2}u_1(t) - \tfrac{1}{2}u_2(t) + u_3(t) - \tfrac{1}{2}u_4(t) - \tfrac{1}{2}u_5(t) - \cdots
\]
\[
= (u_0 + u_3 + u_6 + \cdots) - \tfrac{1}{2}(u_1 + u_4 + u_7 + \cdots) - \tfrac{1}{2}(u_2 + u_5 + u_8 + \cdots).
\]
\[
\mathcal{L}\{f(t)\} = \frac{1}{s}\left(1 + e^{-3s} + e^{-6s} + \cdots\right) - \frac{1}{2s}\left(e^{-s} + e^{-4s} + e^{-7s} + \cdots\right) - \frac{1}{2s}\left(e^{-2s} + e^{-5s} + e^{-8s} + \cdots\right)
\]
\[
= \frac{1}{s}\,\frac{1}{1-e^{-3s}} - \frac{1}{2s}\,\frac{e^{-s}}{1-e^{-3s}} - \frac{1}{2s}\,\frac{e^{-2s}}{1-e^{-3s}} \qquad \text{(geometric series)}
\]
\[
= \frac{2 - e^{-s} - e^{-2s}}{2s\left(1-e^{-3s}\right)}.
\]
The Second Shifting Theorem: If $\mathcal{L}\{f(t)\} = F(s)$, then, for $a > 0$,
\[
\mathcal{L}\{f(t-a)u_a(t)\} = e^{-as}F(s), \tag{1.17}
\]
and, conversely,
\[
\mathcal{L}^{-1}\{e^{-as}F(s)\} = f(t-a)u_a(t). \tag{1.18}
\]
Proof:
\[
\mathcal{L}\{f(t-a)u_a(t)\} = \int_0^\infty e^{-st} f(t-a)u_a(t)\,dt = \int_a^\infty e^{-st} f(t-a)\,dt.
\]
Put $v = t - a$; then $v = 0$ when $t = a$, $dv = dt$, and the integral becomes
\[
\mathcal{L}\{f(t-a)u_a(t)\} = \int_0^\infty e^{-s(v+a)} f(v)\,dv = e^{-as}\int_0^\infty e^{-sv} f(v)\,dv = e^{-as}F(s).
\]
Example 4. Evaluate $\mathcal{L}\{(t-\pi)u_\pi(t)\}$.

$f(t) = t$, so $F(s) = \dfrac{1}{s^2}$ and $\mathcal{L}\{(t-\pi)u_\pi(t)\} = \dfrac{e^{-\pi s}}{s^2}$.

Example 5. Evaluate $\mathcal{L}\{t\,u_2(t)\}$.

Because the step function is $u_2(t)$, i.e., has suffix 2, the function $t$ must be written as a function of $(t-2)$, i.e., $t = (t-2) + 2$.
\[
\mathcal{L}\{t\,u_2(t)\} = \mathcal{L}\{(t-2)u_2(t) + 2u_2(t)\} = \frac{e^{-2s}}{s^2} + \frac{2e^{-2s}}{s} = \frac{(1+2s)}{s^2}\,e^{-2s},
\]
because $f(t) = t$ in the first term.

Example 6. Evaluate $\mathcal{L}\{\sin(t-3)u_3(t)\}$.

$f(t) = \sin t$, so $\mathcal{L}\{\sin(t-3)u_3(t)\} = \dfrac{1}{s^2+1}\,e^{-3s}$.
Example 7. Find the Laplace transform of the function
\[
g(t) = \begin{cases} 0 & t < 1 \\ t^2 - 2t + 2 & t \ge 1 \end{cases}.
\]
Now $t^2 - 2t + 2 = (t-1)^2 + 1$, i.e., $f(t-1) = (t-1)^2 + 1$, so $f(t) = t^2 + 1$.
\[
\mathcal{L}\{g(t)\} = \mathcal{L}\{f(t-1)u_1(t)\} = e^{-s}\mathcal{L}\{f(t)\} = e^{-s}\left(\frac{2}{s^3} + \frac{1}{s}\right).
\]
Example 8. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{se^{-\pi s}}{s^2+4}\right\}$.

Now
\[
\mathcal{L}^{-1}\left\{\frac{s}{s^2+4}\right\} = \cos 2t = f(t).
\]
\[
\mathcal{L}^{-1}\left\{\frac{se^{-\pi s}}{s^2+4}\right\} = f(t-\pi)u_\pi(t) = \cos 2(t-\pi)\,u_\pi(t).
\]
Example 9. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{e^{-s}}{s^4}\right\}$.

The $e^{-s}$ indicates that $u_1(t)$ appears in the function and so does $f(t-1)$.

$\mathcal{L}^{-1}\left\{\dfrac{1}{s^4}\right\} = \dfrac{1}{6}t^3 = f(t)$, so $f(t-1) = \dfrac{1}{6}(t-1)^3$ and
\[
\mathcal{L}^{-1}\left\{\frac{e^{-s}}{s^4}\right\} = \frac{1}{6}(t-1)^3 u_1(t).
\]
Example 10. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{1+e^{-\pi s/2}}{s^2+1}\right\}$.

This is the sum of the inverse transforms
\[
\mathcal{L}^{-1}\left\{\frac{1}{s^2+1}\right\} \quad \text{and} \quad \mathcal{L}^{-1}\left\{\frac{e^{-\pi s/2}}{s^2+1}\right\}.
\]
The first equals $\sin t$; the second changes $\sin t$ to $\sin\!\left(t - \frac{\pi}{2}\right) = -\cos t$ and multiplies by $u_{\pi/2}(t)$.
\[
\mathcal{L}^{-1}\left\{\frac{1+e^{-\pi s/2}}{s^2+1}\right\} = \sin t - u_{\pi/2}(t)\cos t.
\]
Problem Set 1.6

Evaluate the following:

1. $\mathcal{L}\{(t-1)u_1(t)\}$   2. $\mathcal{L}\{e^{2-t}u_2(t)\}$   3. $\mathcal{L}\{(3t+1)u_3(t)\}$

4. $\mathcal{L}\{(t-1)^3 e^{t-1} u_1(t)\}$   5. $\mathcal{L}\{te^{t-5}u_5(t)\}$   6. $\mathcal{L}\{\cos t\, u_{2\pi}(t)\}$

7. $\mathcal{L}^{-1}\left\{\dfrac{e^{-2s}}{s^3}\right\}$   8. $\mathcal{L}^{-1}\left\{\dfrac{(1+e^{-2s})^2}{s+2}\right\}$   9. $\mathcal{L}^{-1}\left\{\dfrac{e^{-\pi s}}{s^2+1}\right\}$

10. $\mathcal{L}^{-1}\left\{\dfrac{se^{-\pi s/2}}{s^2+4}\right\}$   11. $\mathcal{L}^{-1}\left\{\dfrac{e^{-s}}{s(s+1)}\right\}$   12. $\mathcal{L}^{-1}\left\{\dfrac{e^{-2s}}{s^2(s-1)}\right\}$

13. $\mathcal{L}^{-1}\left\{\dfrac{1-e^{-s}}{s^2}\right\}$   14. $\mathcal{L}^{-1}\left\{\dfrac{2}{s} - \dfrac{3e^{-s}}{s^2} + \dfrac{5e^{-2s}}{s^2}\right\}$

In Problems 15–20 write each function in terms of unit step functions and find the Laplace transform of the given function.

15. $f(t) = \begin{cases} 2 & 0 \le t < 3 \\ -2 & t \ge 3 \end{cases}$   16. $f(t) = \begin{cases} 1 & 0 \le t < 4 \\ 0 & 4 \le t < 5 \\ 1 & t \ge 5 \end{cases}$

17. $f(t) = \begin{cases} 0 & 0 \le t < 1 \\ t^2 & t \ge 1 \end{cases}$   18. $f(t) = \begin{cases} 0 & 0 \le t < \frac{3\pi}{2} \\ \sin t & t \ge \frac{3\pi}{2} \end{cases}$

19. $f(t) = \begin{cases} t & 0 \le t < 2 \\ 0 & t \ge 2 \end{cases}$   20. $f(t)$ is the staircase function, i.e., see graph below.

[Diagram: the staircase function rising by one unit at $t = 1, 2, 3, \ldots$]
1.7 Differentiation and integration of transforms.
Theorem: If $F(s) = \mathcal{L}\{f(t)\}$, then
\[
\mathcal{L}\{t f(t)\} = -F'(s), \quad \text{where } F' \equiv \frac{dF}{ds}, \tag{1.19}
\]
and, more generally,
\[
\mathcal{L}\{t^n f(t)\} = (-1)^n F^{(n)}(s). \tag{1.20}
\]
Proof:
\[
F'(s) = -\int_0^\infty t e^{-st} f(t)\,dt = \int_0^\infty e^{-st}\left[-t f(t)\right]dt,
\]
so $F'(s) = \mathcal{L}\{-t f(t)\}$, i.e., $\mathcal{L}\{t f(t)\} = -F'(s)$. By continuing to differentiate with respect to $s$ we see that each differentiation produces another factor $-t$, so the result (1.20) follows easily.
Example 1. Evaluate $\mathcal{L}\{t\cos 2t\}$.

This is equal to (from (1.19)) $-F'(s)$ with $F(s) = \dfrac{s}{s^2+4}$:
\[
\mathcal{L}\{t\cos 2t\} = -\left[\frac{1\cdot(s^2+4) - s\cdot 2s}{(s^2+4)^2}\right] = \frac{s^2-4}{(s^2+4)^2}.
\]
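The rule $\mathcal{L}\{t f(t)\} = -F'(s)$ is easy to confirm with sympy, both by symbolic differentiation and by a direct transform (an added checking sketch, not part of the text):

```python
from sympy import symbols, diff, laplace_transform, cos, simplify

t, s = symbols('t s', positive=True)

# L{t cos 2t} two ways: directly, and as -F'(s) with F(s) = s/(s^2 + 4)
direct = laplace_transform(t*cos(2*t), t, s, noconds=True)
via_rule = -diff(s/(s**2 + 4), s)

assert simplify(direct - (s**2 - 4)/(s**2 + 4)**2) == 0
assert simplify(via_rule - (s**2 - 4)/(s**2 + 4)**2) == 0
```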
Example 2. Evaluate $\mathcal{L}\{te^{2t}\}$.

$f(t) = e^{2t}$, so $F(s) = \dfrac{1}{s-2}$. Hence
\[
\mathcal{L}\{te^{2t}\} = -F'(s) = \frac{1}{(s-2)^2}.
\]
Example 3. Evaluate $\mathcal{L}\{t^2 e^t\}$.

From (1.20), with $n = 2$, $\mathcal{L}\{t^2 f(t)\} = F''(s)$.

$f(t) = e^t$, so $F(s) = \dfrac{1}{s-1}$, $F'(s) = \dfrac{-1}{(s-1)^2}$, $F''(s) = \dfrac{2}{(s-1)^3}$.
\[
\mathcal{L}\{t^2 e^t\} = \frac{2}{(s-1)^3}.
\]
Example 4. Evaluate $\mathcal{L}\{te^{-2t}\sin wt\}$.

From the First Shifting Theorem $\mathcal{L}\{e^{-2t}\sin wt\} = \dfrac{w}{(s+2)^2+w^2}$, i.e., if $f(t) = e^{-2t}\sin wt$ then $\mathcal{L}\{f(t)\} = \dfrac{w}{(s+2)^2+w^2} = F(s)$.
\[
\mathcal{L}\{t f(t)\} = -F'(s) = \frac{w\cdot 2(s+2)}{\left[(s+2)^2+w^2\right]^2}.
\]
Example 5. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{1}{(s+1)^2}\right\}$.

$\dfrac{-1}{(s+1)^2} = \left[\dfrac{1}{s+1}\right]'$, so if $F(s) = \dfrac{1}{s+1}$ then
\[
\frac{-1}{(s+1)^2} = F'(s) = \mathcal{L}\{-te^{-t}\}.
\]
Hence
\[
\mathcal{L}^{-1}\left\{\frac{1}{(s+1)^2}\right\} = te^{-t}.
\]
Example 6. Evaluate $\mathcal{L}^{-1}\left\{\dfrac{2s}{(s^2-4)^2}\right\}$.

Now
\[
\frac{2s}{(s^2-4)^2} = -\left[\frac{1}{s^2-4}\right]' = -F'(s), \quad \text{where}
\]
\[
F(s) = \frac{1}{s^2-4} = \mathcal{L}\left\{\frac{1}{2}\sinh 2t\right\}, \quad \text{i.e., } f(t) = \frac{1}{2}\sinh 2t.
\]
\[
-F'(s) = \mathcal{L}\left\{t\cdot\frac{1}{2}\sinh 2t\right\},
\]
i.e.,
\[
\mathcal{L}^{-1}\left\{\frac{2s}{(s^2-4)^2}\right\} = \frac{1}{2}\,t\sinh 2t.
\]
Theorem: If $f(t)$ satisfies the condition for the existence of the Laplace transform and if $\lim_{t\to 0+}\dfrac{f(t)}{t}$ exists, then
\[
\mathcal{L}\left\{\frac{f(t)}{t}\right\} = \int_s^\infty F(\bar{s})\,d\bar{s}, \quad (s > c). \tag{1.21}
\]
Proof:
\[
\int_s^\infty F(\bar{s})\,d\bar{s} = \int_s^\infty\left[\int_0^\infty e^{-\bar{s}t} f(t)\,dt\right]d\bar{s}.
\]
Reversing the order of integration, we obtain
\[
\int_s^\infty F(\bar{s})\,d\bar{s} = \int_0^\infty\left[\int_s^\infty e^{-\bar{s}t} f(t)\,d\bar{s}\right]dt = \int_0^\infty f(t)\left[\int_s^\infty e^{-\bar{s}t}\,d\bar{s}\right]dt
\]
\[
= \int_0^\infty f(t)\left[-\frac{1}{t}e^{-\bar{s}t}\right]_s^\infty dt = \int_0^\infty e^{-st}\,\frac{f(t)}{t}\,dt = \mathcal{L}\left\{\frac{f(t)}{t}\right\}, \quad (s > c).
\]
Example 7. Evaluate $\mathcal{L}^{-1}\left\{\ln\dfrac{s+a}{s-a}\right\}$.

Now $\ln\dfrac{s+a}{s-a} = \ln(s+a) - \ln(s-a)$, and
\[
-\frac{d}{ds}\left[\ln(s+a) - \ln(s-a)\right] = -\frac{1}{s+a} + \frac{1}{s-a} \equiv F(s).
\]
\[
f(t) = \mathcal{L}^{-1}\{F(s)\} = e^{at} - e^{-at} = 2\sinh at.
\]
Hence
\[
\mathcal{L}^{-1}\left\{\ln\frac{s+a}{s-a}\right\} = \mathcal{L}^{-1}\left\{\int_s^\infty F(\bar{s})\,d\bar{s}\right\} = \frac{f(t)}{t} = 2t^{-1}\sinh at.
\]
Example 8. Evaluate $\mathcal{L}^{-1}\{\operatorname{arccot}(s+1)\}$.

(Note: if $y = \operatorname{arccot} x$, then $y' = \dfrac{-1}{1+x^2}$.)
\[
-\frac{d}{ds}\left[\operatorname{arccot}(s+1)\right] = \frac{1}{1+(s+1)^2} \equiv F(s),
\]
\[
f(t) = \mathcal{L}^{-1}\{F(s)\} = e^{-t}\sin t.
\]
Hence
\[
\mathcal{L}^{-1}\{\operatorname{arccot}(s+1)\} = \frac{f(t)}{t} = t^{-1}e^{-t}\sin t.
\]
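Rule (1.21) can be checked numerically for the classical case $f(t) = \sin t$, where $F(s) = 1/(s^2+1)$ and both sides equal $\arctan(1/s)$ (an added verification sketch, not part of the text):

```python
import math
from sympy import symbols, laplace_transform, integrate, sin, oo

t, s, x = symbols('t s x', positive=True)

# L{sin(t)/t} computed directly...
lhs = laplace_transform(sin(t)/t, t, s, noconds=True)

# ...and as the integral of F(s) = 1/(s^2+1) from s to infinity (Eq. 1.21)
rhs = integrate(1/(x**2 + 1), (x, s, oo))

# compare the two (and the closed form arctan(1/s)) numerically at s = 2
assert abs(float((lhs - rhs).subs(s, 2))) < 1e-12
assert math.isclose(float(lhs.subs(s, 2)), math.atan(0.5))
```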
Problem Set 1.7

Evaluate the following.

1. $\mathcal{L}\{t\cos 2t\}$   2. $\mathcal{L}\{t\sinh 3t\}$   3. $\mathcal{L}\{t^2\sinh t\}$

4. $\mathcal{L}\{t^2\cos t\}$   5. $\mathcal{L}\{te^{2t}\sin 6t\}$   6. $\mathcal{L}\{te^{3t}\cos 3t\}$

7. $\mathcal{L}^{-1}\left\{\dfrac{s}{(s^2+1)^2}\right\}$   8. $\mathcal{L}^{-1}\left\{\dfrac{s+1}{(s^2+2s+2)^2}\right\}$   9. $\mathcal{L}^{-1}\left\{\ln\dfrac{s-3}{s+1}\right\}$

10. $\mathcal{L}^{-1}\left\{\ln\dfrac{s^2+1}{s^2+4}\right\}$   11. $\mathcal{L}^{-1}\left\{\dfrac{1}{s}\operatorname{arccot}\dfrac{4}{s}\right\}$   12. $\mathcal{L}^{-1}\left\{\arctan\dfrac{1}{s}\right\}$
1.8 Laplace transforms of derivatives and integrals.
Theorem: Suppose that $f(t)$ is continuous for all $t \ge 0$ and is of exponential order. Suppose also that the derivative $f'(t)$ is piecewise continuous on every finite interval in $[0,\infty)$. Then
\[
\mathcal{L}\{f'(t)\} = s\,\mathcal{L}\{f(t)\} - f(0). \tag{1.22}
\]
Proof:
\[
\mathcal{L}\{f'(t)\} = \int_0^\infty e^{-st} f'(t)\,dt = \left[e^{-st} f(t)\right]_0^\infty + s\int_0^\infty e^{-st} f(t)\,dt = -f(0) + s\,\mathcal{L}\{f(t)\}.
\]
Note that in the above proof we have assumed that $f'(t)$ is continuous. If $f'(t)$ is piecewise continuous the proof is similar but the range of integration must be broken into parts for which $f'(t)$ is continuous.

We may extend the formula (1.22) to find the Laplace transforms of higher derivatives. For example, replacing $f(t)$ by $f'(t)$ in (1.22) gives
\[
\mathcal{L}\{f''(t)\} = s\,\mathcal{L}\{f'(t)\} - f'(0),
\]
and, using Eq. (1.22) again, we obtain
\[
\mathcal{L}\{f''(t)\} = s^2\,\mathcal{L}\{f(t)\} - sf(0) - f'(0). \tag{1.23}
\]
Similarly, we can extend this to higher derivatives to obtain
\[
\mathcal{L}\{f^{(n)}(t)\} = s^n\,\mathcal{L}\{f\} - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0). \tag{1.24}
\]
We shall use the results (1.22) and (1.23) to solve differential equations with given initial conditions. However, we first note that (1.22) can be used to find unknown transforms.
Example 1. Evaluate $\mathcal{L}\{\sin^2 t\}$ using Eq. (1.22).

$f(t) = \sin^2 t$, so $f(0) = 0$ and $f'(t) = 2\sin t\cos t = \sin 2t$.

$f'(t)$ is piecewise continuous on each finite interval and so, from Eq. (1.22),
\[
\mathcal{L}\{\sin 2t\} = s\,\mathcal{L}\{\sin^2 t\} - 0, \quad \text{i.e.,} \quad \mathcal{L}\{\sin^2 t\} = \frac{1}{s}\,\frac{2}{s^2+4} = \frac{2}{s(s^2+4)}.
\]

1.9 Application to ordinary differential equations.

We now use Laplace transforms to solve initial value problems of the form
\[
y'' + ay' + by = f(t), \qquad y(0) = p, \quad y'(0) = q.
\]
We illustrate the procedure with several examples.
Example 1. Find the solution to the equation
\[
y'' + a^2 y = 0, \qquad y(0) = 0, \quad y'(0) = 2.
\]
Take the Laplace transform of the equation:
\[
\mathcal{L}\{y''\} + a^2\,\mathcal{L}\{y\} = 0.
\]
From Eq. (1.23), putting $\mathcal{L}\{y\} = Y(s)$, we obtain
\[
s^2 Y(s) - sy(0) - y'(0) + a^2 Y(s) = 0,
\]
i.e., $(s^2+a^2)Y(s) = 2$ (since $y(0) = 0$, $y'(0) = 2$).
\[
Y(s) = \frac{2}{s^2+a^2}.
\]
Hence, $y(t) = \dfrac{2}{a}\sin at$.
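Example 1 can be cross-checked by solving the same initial value problem with sympy's `dsolve` (an added verification sketch; the concrete value $a = 3$ is my choice):

```python
from sympy import symbols, Function, dsolve, Eq, sin, Rational, simplify

t = symbols('t', positive=True)
y = Function('y')

# Example 1 with a = 3: y'' + 9y = 0, y(0) = 0, y'(0) = 2  =>  y = (2/3) sin 3t
sol = dsolve(Eq(y(t).diff(t, 2) + 9*y(t), 0), y(t),
             ics={y(0): 0, y(t).diff(t).subs(t, 0): 2})

assert simplify(sol.rhs - Rational(2, 3)*sin(3*t)) == 0
```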
Example 2. Solve $16y'' - 9y = 0$, $y(0) = 3$, $y'(0) = 3.75$.
\[
16\,\mathcal{L}\{y''\} - 9\,\mathcal{L}\{y\} = 0,
\]
i.e., $16\left[s^2 Y(s) - sy(0) - y'(0)\right] - 9Y(s) = 0$,
i.e., $\left(s^2 - \frac{9}{16}\right)Y(s) = 3s + 3.75$.
\[
Y(s) = \frac{3s}{s^2 - \frac{9}{16}} + \frac{3.75}{s^2 - \frac{9}{16}}.
\]
\[
y(t) = 3\cosh\frac{3}{4}t + 5\sinh\frac{3}{4}t.
\]
Example 3. Solve $y'' + 2y' + 17y = 0$, $y(0) = 0$, $y'(0) = 12$.
\[
\mathcal{L}\{y''\} + 2\,\mathcal{L}\{y'\} + 17\,\mathcal{L}\{y\} = 0,
\]
\[
s^2 Y(s) - sy(0) - y'(0) + 2[sY(s) - y(0)] + 17Y(s) = 0,
\]
i.e., $(s^2 + 2s + 17)Y(s) = 12$, so
\[
Y(s) = \frac{12}{(s+1)^2 + 16} = 3\,\frac{4}{(s+1)^2 + 4^2}, \qquad y(t) = 3e^{-t}\sin 4t.
\]
Example 4. Solve $y'' + 4y' + 4y = 0$, $y(0) = 2$, $y'(0) = -3$.
\[
\mathcal{L}\{y''\} + 4\,\mathcal{L}\{y'\} + 4\,\mathcal{L}\{y\} = 0,
\]
i.e., $s^2 Y(s) - sy(0) - y'(0) + 4[sY(s) - y(0)] + 4Y(s) = 0$, so $(s+2)^2 Y(s) = 2s + 5$ and
\[
Y(s) = \frac{2(s+2) + 1}{(s+2)^2} = \frac{2}{s+2} + \frac{1}{(s+2)^2}, \qquad y(t) = 2e^{-2t} + te^{-2t}.
\]
Example 5. Solve $y'' - 2y' + 10y = 0$, $y(0) = 3$, $y'(0) = 3$.
\[
\mathcal{L}\{y''\} - 2\,\mathcal{L}\{y'\} + 10\,\mathcal{L}\{y\} = 0,
\]
i.e., $s^2 Y(s) - sy(0) - y'(0) - 2[sY(s) - y(0)] + 10Y(s) = 0$, so $(s^2 - 2s + 10)Y(s) = 3s - 3$ and
\[
Y(s) = \frac{3(s-1)}{(s-1)^2 + 9}, \qquad y(t) = 3e^{t}\cos 3t.
\]
Example 6. Solve $y'' + y = 2$, $y(0) = 0$, $y'(0) = 3$.
\[
\mathcal{L}\{y''\} + \mathcal{L}\{y\} = \mathcal{L}\{2\}: \quad s^2 Y(s) - sy(0) - y'(0) + Y(s) = \frac{2}{s}.
\]
\[
(s^2+1)Y(s) = \frac{2}{s} + 3.
\]
\[
Y(s) = \frac{2}{s(s^2+1)} + \frac{3}{s^2+1} = \frac{2}{s} - \frac{2s}{s^2+1} + \frac{3}{s^2+1}.
\]
\[
y(t) = 2 - 2\cos t + 3\sin t.
\]
Example 7. $y'' + 4y = 3\cos t$, $y(0) = 0$, $y'(0) = 0$.
\[
\mathcal{L}\{y''\} + 4\,\mathcal{L}\{y\} = 3\,\mathcal{L}\{\cos t\}: \quad s^2 Y(s) - sy(0) - y'(0) + 4Y(s) = \frac{3s}{s^2+1},
\]
i.e.,
\[
(s^2+4)Y(s) = \frac{3s}{s^2+1}, \qquad Y(s) = \frac{3s}{(s^2+1)(s^2+4)}.
\]
Partial fractions:
\[
Y(s) = \frac{s}{s^2+1} - \frac{s}{s^2+4}, \qquad y(t) = \cos t - \cos 2t.
\]
Example 8. $y'' - 4y = 8t^2 - 4$, $y(0) = 5$, $y'(0) = 10$.
\[
\mathcal{L}\{y''\} - 4\,\mathcal{L}\{y\} = \mathcal{L}\{8t^2 - 4\}: \quad s^2 Y(s) - sy(0) - y'(0) - 4Y(s) = \frac{16}{s^3} - \frac{4}{s}.
\]
\[
(s^2-4)Y(s) = \frac{16}{s^3} - \frac{4}{s} + 5s + 10.
\]
\[
Y(s) = \frac{16}{s^3(s-2)(s+2)} - \frac{4}{s(s-2)(s+2)} + \frac{5(s+2)}{(s-2)(s+2)}
\]
\[
= -\frac{1}{s} - \frac{4}{s^3} + \frac{\frac{1}{2}}{s-2} + \frac{\frac{1}{2}}{s+2} + \frac{1}{s} - \frac{\frac{1}{2}}{s-2} - \frac{\frac{1}{2}}{s+2} + \frac{5}{s-2}
= -\frac{4}{s^3} + \frac{5}{s-2}.
\]
\[
y(t) = -2t^2 + 5e^{2t}.
\]
Example 9. $y'' + 2y' + y = e^{-2t}$, $y(0) = 0$, $y'(0) = 0$.
\[
\mathcal{L}\{y''\} + 2\,\mathcal{L}\{y'\} + \mathcal{L}\{y\} = \mathcal{L}\{e^{-2t}\},
\]
i.e.,
\[
s^2 Y(s) + 2sY(s) + Y(s) = \frac{1}{s+2} \quad (\text{because } y(0) = y'(0) = 0).
\]
\[
Y(s) = \frac{1}{(s+2)(s+1)^2} = \frac{1}{s+2} - \frac{1}{s+1} + \frac{1}{(s+1)^2}.
\]
\[
y(t) = e^{-2t} - e^{-t} + te^{-t}.
\]
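A solution obtained this way can always be verified by substituting it back into the differential equation; the sketch below does this for Example 9 with sympy (an added check, not part of the text):

```python
from sympy import symbols, exp, diff, simplify

t = symbols('t', positive=True)

# claimed solution of Example 9
y = exp(-2*t) - exp(-t) + t*exp(-t)

# residual of y'' + 2y' + y = e^{-2t} should vanish identically
residual = diff(y, t, 2) + 2*diff(y, t) + y - exp(-2*t)
assert simplify(residual) == 0

# initial conditions y(0) = 0 and y'(0) = 0
assert y.subs(t, 0) == 0
assert diff(y, t).subs(t, 0) == 0
```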
Example 10. Solve $y'' + 4y = 4\cos 2t - 4\sin 2t$, $y(0) = 1$, $y'(0) = 3$.
\[
s^2 Y(s) - sy(0) - y'(0) + 4Y(s) = 4\left[\frac{s}{s^2+4} - \frac{2}{s^2+4}\right],
\]
i.e.,
\[
(s^2+4)Y(s) = \frac{4s}{s^2+4} - \frac{8}{s^2+4} + s + 3.
\]
\[
Y(s) = \frac{4s}{(s^2+4)^2} - \frac{8}{(s^2+4)^2} + \frac{s}{s^2+4} + \frac{3}{s^2+4}.
\]
Now
\[
\frac{4s}{(s^2+4)^2} = -\left[\frac{2}{s^2+4}\right]' \quad \text{and} \quad \frac{2}{s^2+4} = \mathcal{L}\{\sin 2t\},
\]
so
\[
\mathcal{L}^{-1}\left\{\frac{4s}{(s^2+4)^2}\right\} = t\sin 2t.
\]
Also
\[
\left[\frac{s}{s^2+4}\right]' = \frac{4-s^2}{(s^2+4)^2} = -\frac{1}{s^2+4} + \frac{8}{(s^2+4)^2},
\]
i.e.,
\[
\mathcal{L}^{-1}\left\{\frac{8}{(s^2+4)^2}\right\} = -t\cos 2t + \frac{1}{2}\sin 2t.
\]
Hence
\[
y(t) = t\sin 2t + t\cos 2t - \frac{1}{2}\sin 2t + \cos 2t + \frac{3}{2}\sin 2t
= t(\sin 2t + \cos 2t) + \sin 2t + \cos 2t
= (t+1)(\sin 2t + \cos 2t).
\]
Problem Set 1.9

Use Laplace transforms to solve the following initial value problems.

1. $y' - y = 1$, $y(0) = 0$.
2. $y' + 4y = e^{-4t}$, $y(0) = 2$.
3. $y'' + 5y' + 4y = 0$, $y(0) = 1$, $y'(0) = 0$.
4. $y'' - 6y' + 13y = 0$, $y(0) = 0$, $y'(0) = -3$.
5. $y'' - 6y' + 9y = t$, $y(0) = 0$, $y'(0) = 1$.
6. $y'' - 4y' + 4y = t^3$, $y(0) = 1$, $y'(0) = 0$.
7. $y'' - 4y' + 4y = t^3 e^{2t}$, $y(0) = 0$, $y'(0) = 0$.
8. $y'' - 2y' + 5y = 1 + t$, $y(0) = 0$, $y'(0) = 4$.
9. $y'' + y = \sin t$, $y(0) = 1$, $y'(0) = -1$.
10. $y'' + 16y = 1$, $y(0) = 1$, $y'(0) = 2$.
11. $y'' - y' = e^t\cos t$, $y(0) = 0$, $y'(0) = 0$.
12. $y'' - 2y' = e^t\sinh t$, $y(0) = 0$, $y'(0) = 0$.
1.10 Discontinuous forcing functions.
In some engineering problems we have to deal with differential equations in which the forcing function, i.e., the term on the right-hand side of the differential equation, is discontinuous. We illustrate this by a number of examples.
Example 1. Solve $y'' + 4y = g(t)$, $y(0) = 0$, $y'(0) = 0$, where
\[
g(t) = \begin{cases} t & 0 \le t < \frac{\pi}{2} \\[2pt] \frac{\pi}{2} & t \ge \frac{\pi}{2} \end{cases},
\]
i.e., $g(t) = t - t\,u_{\pi/2}(t) + \frac{\pi}{2}u_{\pi/2}(t) = t - \left(t - \frac{\pi}{2}\right)u_{\pi/2}(t)$.
\[
\mathcal{L}\{g(t)\} = \mathcal{L}\{t\} - \mathcal{L}\left\{\left(t - \tfrac{\pi}{2}\right)u_{\pi/2}(t)\right\} = \frac{1}{s^2} - \frac{1}{s^2}e^{-\frac{\pi}{2}s}.
\]
Hence, taking the Laplace transform of the differential equation we obtain
\[
s^2 Y(s) - sy(0) - y'(0) + 4Y(s) = \frac{1}{s^2}\left(1 - e^{-\frac{\pi}{2}s}\right),
\]
i.e.,
\[
(s^2+4)Y(s) = \frac{1}{s^2}\left(1 - e^{-\frac{\pi}{2}s}\right),
\]
i.e.,
\[
Y(s) = \frac{1}{s^2(s^2+4)}\left(1 - e^{-\frac{\pi}{2}s}\right) = \frac{1}{4}\left[\frac{1}{s^2} - \frac{1}{s^2+4}\right]\left(1 - e^{-\frac{\pi}{2}s}\right).
\]
Now
\[
\mathcal{L}^{-1}\left\{\frac{1}{s^2} - \frac{1}{s^2+4}\right\} = t - \frac{1}{2}\sin 2t,
\]
so
\[
\mathcal{L}^{-1}\left\{\left[\frac{1}{s^2} - \frac{1}{s^2+4}\right]e^{-\frac{\pi}{2}s}\right\} = \left[\left(t - \frac{\pi}{2}\right) - \frac{1}{2}\sin 2\left(t - \frac{\pi}{2}\right)\right]u_{\pi/2}(t)
= \left(t - \frac{\pi}{2} + \frac{1}{2}\sin 2t\right)u_{\pi/2}(t).
\]
\[
y = \frac{1}{4}t - \frac{1}{8}\sin 2t - \frac{1}{4}\left(t - \frac{\pi}{2} + \frac{1}{2}\sin 2t\right)u_{\pi/2}(t).
\]
Example 2. Solve $y'' + y = f(t)$, $y(0) = 0$, $y'(0) = 1$, where
\[
f(t) = \begin{cases} 1 & 0 \le t < \frac{\pi}{2} \\ 0 & t \ge \frac{\pi}{2} \end{cases},
\]
i.e., $f(t) = u_0(t) - u_{\pi/2}(t)$.
\[
\mathcal{L}\{f\} = \frac{1}{s} - \frac{e^{-\frac{\pi}{2}s}}{s}.
\]
\[
s^2 Y(s) - sy(0) - y'(0) + Y(s) = \frac{1}{s}\left(1 - e^{-\frac{\pi}{2}s}\right).
\]
\[
(s^2+1)Y(s) = 1 + \frac{1}{s}\left(1 - e^{-\frac{\pi}{2}s}\right).
\]
\[
Y(s) = \frac{1}{s^2+1} + \frac{1}{s(s^2+1)}\left(1 - e^{-\frac{\pi}{2}s}\right)
= \frac{1}{s^2+1} + \left[\frac{1}{s} - \frac{s}{s^2+1}\right]\left(1 - e^{-\frac{\pi}{2}s}\right).
\]
Now
\[
\mathcal{L}^{-1}\left\{\frac{1}{s} - \frac{s}{s^2+1}\right\} = 1 - \cos t,
\]
\[
\mathcal{L}^{-1}\left\{\left(\frac{1}{s} - \frac{s}{s^2+1}\right)e^{-\frac{\pi}{2}s}\right\} = \left[1 - \cos\left(t - \frac{\pi}{2}\right)\right]u_{\pi/2}(t) = (1 - \sin t)\,u_{\pi/2}(t).
\]
\[
y(t) = \sin t + 1 - \cos t - (1 - \sin t)\,u_{\pi/2}(t).
\]
Example 3. Solve $y'' + 4y = \sin t - u_{2\pi}(t)\sin(t-2\pi)$, $y(0) = 0$, $y'(0) = 0$.
\[
s^2 Y(s) - sy(0) - y'(0) + 4Y(s) = \frac{1}{s^2+1} - \frac{1}{s^2+1}e^{-2\pi s}.
\]
\[
(s^2+4)Y(s) = \frac{1}{s^2+1}\left(1 - e^{-2\pi s}\right).
\]
\[
Y(s) = \frac{1}{(s^2+1)(s^2+4)}\left(1 - e^{-2\pi s}\right) = \frac{1}{3}\left[\frac{1}{s^2+1} - \frac{1}{s^2+4}\right]\left(1 - e^{-2\pi s}\right).
\]
\[
y(t) = \frac{1}{3}\left[\sin t - \frac{1}{2}\sin 2t\right] - \frac{1}{3}\left[\sin(t-2\pi) - \frac{1}{2}\sin 2(t-2\pi)\right]u_{2\pi}(t)
= \frac{1}{3}\left[\sin t - \frac{1}{2}\sin 2t\right]\left(1 - u_{2\pi}(t)\right).
\]
Example 4. Solve $y'' + y' + \frac{5}{4}y = g(t)$, $y(0) = 0$, $y'(0) = 0$, where
\[
g(t) = \begin{cases} \sin t & 0 \le t < \pi \\ 0 & t \ge \pi \end{cases}.
\]
\[
g(t) = \left(u_0(t) - u_\pi(t)\right)\sin t = \sin t + u_\pi(t)\sin(t-\pi)
\]
(using $\sin(t-\pi) = -\sin t$).
\[
\mathcal{L}\{g(t)\} = \frac{1}{s^2+1} + \frac{1}{s^2+1}e^{-\pi s}.
\]
\[
s^2 Y(s) - sy(0) - y'(0) + sY(s) - y(0) + \frac{5}{4}Y(s) = \frac{1}{s^2+1}\left(1 + e^{-\pi s}\right),
\]
so that
\[
Y(s) = \frac{1 + e^{-\pi s}}{(s^2+1)\left[\left(s+\frac{1}{2}\right)^2 + 1\right]}.
\]
A partial fraction decomposition and the two Shifting Theorems then give
\[
y(t) = -\frac{16}{17}\cos t + \frac{4}{17}\sin t + \frac{16}{17}e^{-\frac{1}{2}t}\cos t + \frac{4}{17}e^{-\frac{1}{2}t}\sin t
\]
\[
+ \left[-\frac{16}{17}\cos(t-\pi) + \frac{4}{17}\sin(t-\pi) + \frac{16}{17}e^{-\frac{1}{2}(t-\pi)}\cos(t-\pi) + \frac{4}{17}e^{-\frac{1}{2}(t-\pi)}\sin(t-\pi)\right]u_\pi(t),
\]
i.e.,
\[
y(t) = \frac{4}{17}\left(-4\cos t + \sin t + 4e^{-\frac{1}{2}t}\cos t + e^{-\frac{1}{2}t}\sin t\right)
+ \frac{4}{17}\left(4\cos t - \sin t - 4e^{-\frac{1}{2}(t-\pi)}\cos t - e^{-\frac{1}{2}(t-\pi)}\sin t\right)u_\pi(t).
\]
Example 5. Solve $y'' + 3y' + 2y = u_2(t)$, $y(0) = 0$, $y'(0) = 1$.
\[
s^2 Y(s) - sy(0) - y'(0) + 3[sY(s) - y(0)] + 2Y(s) = \frac{e^{-2s}}{s},
\]
so that
\[
Y(s) = \frac{1}{(s+1)(s+2)} + \frac{e^{-2s}}{s(s+1)(s+2)}.
\]
Since
\[
\frac{1}{(s+1)(s+2)} = \frac{1}{s+1} - \frac{1}{s+2}, \qquad \frac{1}{s(s+1)(s+2)} = \frac{\frac{1}{2}}{s} - \frac{1}{s+1} + \frac{\frac{1}{2}}{s+2},
\]
we obtain
\[
y(t) = e^{-t} - e^{-2t} + \left[\frac{1}{2} - e^{-(t-2)} + \frac{1}{2}e^{-2(t-2)}\right]u_2(t).
\]

Problem Set 1.10

Use Laplace transforms to solve the following initial value problems.

1. …, $y'(0) = 1$.
2. $y'' + 4y = u_{2\pi}(t)\sin t$, $y(0) = 1$, $y'(0) = 0$.
3. $y'' - 5y' + 6y = u_1(t)$, $y(0) = 0$, $y'(0) = 1$.
4. $y'' + 4y' + 3y = 1 - u_2(t) - u_4(t) + u_6(t)$, $y(0) = 0$, $y'(0) = 0$.
5. $y'' + y = u_\pi(t)$, $y(0) = 1$, $y'(0) = 0$.
6. $y^{(4)} + 5y'' + 4y = 1 - u_\pi(t)$, $y(0) = y'(0) = y''(0) = y'''(0) = 0$.
1.11 Periodic functions.
Let $f(t)$ be a function which is defined for all positive $t$ and has period $T\,(>0)$, i.e.,
\[
f(t+T) = f(t), \quad \text{all } t > 0. \tag{1.26}
\]
Examples of periodic functions are $\sin t$, $\cos t$ and functions such as those shown below.

[Diagrams: a square wave of height $k$ and period $2a$, and a piecewise-linear wave of period 3 taking values between $-1$ and $1$.]
Theorem: Let $f(t)$ be piecewise continuous on $[0,\infty)$ and of exponential order. If $f(t)$ is periodic with period $T$, then
\[
\mathcal{L}\{f(t)\} = \frac{1}{1-e^{-sT}}\int_0^T e^{-st} f(t)\,dt.
\]
Proof:
\[
\mathcal{L}\{f(t)\} = \int_0^\infty e^{-st} f(t)\,dt = \int_0^T e^{-st} f(t)\,dt + \int_T^\infty e^{-st} f(t)\,dt. \tag{$*$}
\]
Now put $t = u + T$ in the last integral, which then becomes
\[
\int_0^\infty e^{-s(u+T)} f(u+T)\,du.
\]
But $f(u+T) = f(u)$ because $f$ is periodic, so this integral is
\[
e^{-sT}\int_0^\infty e^{-su} f(u)\,du = e^{-sT}\mathcal{L}\{f(t)\}.
\]
Hence, ($*$) becomes
\[
\mathcal{L}\{f(t)\} = \int_0^T e^{-st} f(t)\,dt + e^{-sT}\mathcal{L}\{f(t)\}.
\]
\[
\mathcal{L}\{f(t)\} = \frac{1}{1-e^{-sT}}\int_0^T e^{-st} f(t)\,dt.
\]
Note that this can be written as
\[
\mathcal{L}\{f(t)\} = \frac{\mathcal{L}\{f_T(t)\}}{1-e^{-sT}},
\]
where
\[
f_T(t) = \begin{cases} f(t) & 0 \le t < T \\ 0 & t \ge T \end{cases}
\]
is called the window of length $T$ for the function $f(t)$.
Example 1. Evaluate $\mathcal{L}\{f(t)\}$ where
\[
f(t) = \pi - t \quad (0 \le t < 2\pi), \qquad f(t+2\pi) = f(t),
\]
as in the diagram.

[Diagram: a sawtooth wave dropping linearly from $\pi$ to $-\pi$ on each period of length $2\pi$.]
\[
\mathcal{L}\{f(t)\} = \frac{1}{1-e^{-2\pi s}}\int_0^{2\pi} e^{-st}(\pi-t)\,dt
\]
\[
= \left(1-e^{-2\pi s}\right)^{-1}\left\{\left[(\pi-t)\,\frac{-e^{-st}}{s}\right]_0^{2\pi} - \int_0^{2\pi}(-1)\,\frac{-e^{-st}}{s}\,dt\right\}
\]
\[
= \left(1-e^{-2\pi s}\right)^{-1}\left\{\frac{\pi e^{-2\pi s}}{s} + \frac{\pi}{s} + \frac{1}{s^2}\left[e^{-st}\right]_0^{2\pi}\right\}
\]
\[
= \left(1-e^{-2\pi s}\right)^{-1}\left\{\frac{\pi}{s} + \frac{\pi}{s}e^{-2\pi s} + \frac{1}{s^2}e^{-2\pi s} - \frac{1}{s^2}\right\}.
\]
Example 2. Evaluate $\mathcal{L}\{f(t)\}$ where
\[
f(t) = |\sin at|, \quad 0 \le t < \frac{\pi}{a}, \qquad f\!\left(t+\frac{\pi}{a}\right) = f(t).
\]
This is known as the rectified sine wave, or the full-wave rectification of $\sin at$. The period is $\frac{\pi}{a}$, with $g(t) = \sin at$, $0 \le t < \frac{\pi}{a}$, as illustrated below.

[Diagram: successive arches of $\sin at$, each of height 1, repeating with period $\pi/a$.]
\[
\mathcal{L}\{f(t)\} = \frac{1}{1-e^{-\frac{\pi s}{a}}}\int_0^{\pi/a} e^{-st}\sin at\,dt
= \left(1-e^{-\frac{\pi s}{a}}\right)^{-1}\left[\frac{e^{-st}}{s^2+a^2}\left(-s\sin at - a\cos at\right)\right]_0^{\pi/a}
\]
\[
= \frac{a\left(e^{-\frac{\pi s}{a}}+1\right)}{\left(1-e^{-\frac{\pi s}{a}}\right)(s^2+a^2)} = \frac{a\coth\frac{\pi s}{2a}}{s^2+a^2}.
\]
Example 3. Evaluate $\mathcal{L}\{f(t)\}$ where
\[
f(t) = t^2 \quad (0 \le t < 2), \qquad f(t+2) = f(t).
\]
\[
\mathcal{L}\{f(t)\} = \left(1-e^{-2s}\right)^{-1}\int_0^2 e^{-st} t^2\,dt.
\]
Now
\[
\int_0^2 e^{-st} t^2\,dt = \mathcal{L}\{[1-u_2(t)]t^2\} = \mathcal{L}\left\{t^2 - u_2(t)\left[(t-2)^2 + 4(t-2) + 4\right]\right\}
= \frac{2}{s^3} - e^{-2s}\left[\frac{2}{s^3} + \frac{4}{s^2} + \frac{4}{s}\right].
\]
\[
\mathcal{L}\{f(t)\} = \left(1-e^{-2s}\right)^{-1}\left[\frac{2}{s^3} - \frac{2}{s^3}e^{-2s} - \frac{4e^{-2s}}{s^2} - \frac{4e^{-2s}}{s}\right].
\]
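The window integral in Example 3 can be checked by direct symbolic integration with sympy (an added verification sketch, not part of the text):

```python
from sympy import symbols, integrate, exp, simplify

t, s = symbols('t s', positive=True)

# window transform of Example 3: integral of e^{-st} t^2 over one period [0, 2]
window = integrate(exp(-s*t)*t**2, (t, 0, 2))
claimed = 2/s**3 - exp(-2*s)*(2/s**3 + 4/s**2 + 4/s)

assert simplify(window - claimed) == 0
```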
Example 4. An $LR$ series circuit has $L = 1$ henry, $R = 1$ ohm, $E(t)$ given by
\[
E(t) = t \quad (0 \le t < 1), \qquad E(t+1) = E(t),
\]
and $i(0) = 0$.

The differential equation is
\[
L\frac{di}{dt} + Ri = E(t), \quad \text{i.e.,} \quad \frac{di}{dt} + i = E(t).
\]
Taking the Laplace transform of $E(t)$ yields
\[
\mathcal{L}\{E(t)\} = \frac{1}{1-e^{-s}}\int_0^1 te^{-st}\,dt = \frac{1}{1-e^{-s}}\left[-\frac{1}{s}e^{-s} - \frac{1}{s^2}\left(e^{-s}-1\right)\right]
= -\frac{1}{s}\,\frac{e^{-s}}{1-e^{-s}} + \frac{1}{s^2}.
\]
The Laplace transform of the differential equation is
\[
sI(s) - i(0) + I(s) = \mathcal{L}\{E(t)\},
\]
i.e.,
\[
I(s) = \frac{1}{s^2(s+1)} - \frac{1}{s(s+1)}\,\frac{e^{-s}}{1-e^{-s}}.
\]
\[
\frac{1}{s^2(s+1)} = -\frac{1}{s} + \frac{1}{s^2} + \frac{1}{s+1} = \mathcal{L}\{-1 + t + e^{-t}\}.
\]
\[
\frac{1}{s(s+1)} = \frac{1}{s} - \frac{1}{s+1} = \mathcal{L}\{1 - e^{-t}\}.
\]
Now
\[
\frac{e^{-s}}{1-e^{-s}} = e^{-s}\left(1 + e^{-s} + e^{-2s} + e^{-3s} + \cdots\right). \quad \text{(geometric series)}
\]
\[
\frac{1}{s(s+1)}\,\frac{e^{-s}}{1-e^{-s}} = \left[\frac{1}{s} - \frac{1}{s+1}\right]\left[e^{-s} + e^{-2s} + e^{-3s} + e^{-4s} + \cdots\right]
\]
\[
= \mathcal{L}\left\{\left[u_1(t) + u_2(t) + u_3(t) + u_4(t) + \cdots\right] - \left[e^{-(t-1)}u_1(t) + e^{-(t-2)}u_2(t) + e^{-(t-3)}u_3(t) + \cdots\right]\right\}.
\]
Hence,
\[
i(t) = -1 + t + e^{-t} - \left[u_1(t) + u_2(t) + u_3(t) + \cdots\right] + \left[e^{-(t-1)}u_1(t) + e^{-(t-2)}u_2(t) + e^{-(t-3)}u_3(t) + \cdots\right].
\]
Problem Set 1.11

Find the Laplace transforms of the given periodic functions.

1. $f(t) = \begin{cases} 1 & 0 \le t < a \\ -1 & a \le t < 2a \end{cases}$, $\quad f(t+2a) = f(t)$.

2. $f(t) = \begin{cases} t & 0 \le t < 1 \\ 2-t & 1 \le t < 2 \end{cases}$, $\quad f(t+2) = f(t)$.

3. $f(t) = \begin{cases} \sin t & 0 \le t < \pi \\ 0 & \pi \le t < 2\pi \end{cases}$, $\quad f(t+2\pi) = f(t)$.

4. $f(t) = e^{-t}$, $0 \le t < 2$, $f(t+2) = f(t)$.

5. Find the steady-state current in the $LR$ circuit driven by the square-wave voltage $E(t)$ shown.

[Diagram: $E(t)$ alternates between $E_0$ and $0$ on successive intervals of length $a$.]
1.12 Impulse functions.
We often have to look at systems (mechanical or electrical) in which the external force or voltage is of large magnitude but acts only for a very short time, i.e., a very high voltage is applied to a circuit and then switched off immediately.

Consider the function defined by
\[
\delta_a(t-t_0) = \begin{cases} \dfrac{1}{2a} & t_0 - a < t < t_0 + a \\[4pt] 0 & t \le t_0 - a \ \text{or}\ t \ge t_0 + a \end{cases},
\]
as shown in the first diagram on the next page. The second diagram shows how the function changes as $a$ gets smaller and smaller.

[Diagrams: a rectangular pulse of height $\frac{1}{2a}$ centred on $t_0$, and the same pulse growing taller and narrower as $a$ decreases.]

If $a$ is small, then $\delta_a(t-t_0)$ has a large constant magnitude for a short period of time around $t_0$. Note that the integral
\[
I(a) = \int_{t_0-a}^{t_0+a}\delta_a(t-t_0)\,dt = \int_{t_0-a}^{t_0+a}\frac{1}{2a}\,dt = 1,
\]
and $I(a)$ can be written as
\[
I(a) = \int_{-\infty}^{\infty}\delta_a(t-t_0)\,dt = 1,
\]
since the function is zero outside $(t_0-a,\,t_0+a)$. The function $\delta_a(t-t_0)$ is the unit impulse.
We define the function $\delta(t-t_0)$ by
\[
\delta(t-t_0) = \lim_{a\to 0}\delta_a(t-t_0).
\]
The quantity $\delta(t-t_0)$ is called the Dirac delta function and is an example of a generalized function. It has the following properties:
\[
\delta(t-t_0) = \begin{cases} \infty & t = t_0 \\ 0 & t \ne t_0 \end{cases}, \qquad \int_{-\infty}^{\infty}\delta(t-t_0)\,dt = 1. \tag{1.27}
\]
In particular, when $t_0 = 0$, we have
\[
\delta(t) = \begin{cases} \infty & t = 0 \\ 0 & t \ne 0 \end{cases}, \qquad \int_{-\infty}^{\infty}\delta(t)\,dt = 1. \tag{1.28}
\]
To find the Laplace transform of $\delta(t-t_0)$ we first find the Laplace transform of $\delta_a(t-t_0)$.
\[
\mathcal{L}\{\delta_a(t-t_0)\} = \int_0^\infty e^{-st}\delta_a(t-t_0)\,dt = \int_{t_0-a}^{t_0+a} e^{-st}\,\frac{1}{2a}\,dt
= \left[-\frac{1}{2as}e^{-st}\right]_{t_0-a}^{t_0+a}
= \frac{1}{2as}\,e^{-st_0}\left(e^{as} - e^{-as}\right) = \frac{\sinh as}{as}\,e^{-st_0}.
\]
Now let $a \to 0$; then $\lim_{a\to 0}\dfrac{\sinh as}{as} = 1$ (l'Hopital's rule), so
\[
\mathcal{L}\{\delta(t-t_0)\} = \lim_{a\to 0}\mathcal{L}\{\delta_a(t-t_0)\}
\]
becomes
\[
\mathcal{L}\{\delta(t-t_0)\} = e^{-st_0}. \tag{1.29}
\]
When $t_0 = 0$, we have
\[
\mathcal{L}\{\delta(t)\} = 1. \tag{1.30}
\]
Note the following result:
\[
\int_{-\infty}^{\infty} f(t)\,\delta(t-t_0)\,dt = f(t_0), \tag{1.31}
\]
from which it follows that
\[
\mathcal{L}\{f(t)\,\delta(t-t_0)\} = f(t_0)\,e^{-st_0}. \tag{1.32}
\]
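Both the limit used in the derivation and the resulting transform (1.29) can be confirmed with sympy, which provides a `DiracDelta` object (an added verification sketch; the shift $t_0 = 3$ is my choice):

```python
from sympy import symbols, laplace_transform, DiracDelta, sinh, limit, exp, simplify

t, s = symbols('t s', positive=True)
a = symbols('a', positive=True)

# limit used in the derivation: sinh(as)/(as) -> 1 as a -> 0
assert limit(sinh(a*s)/(a*s), a, 0) == 1

# Eq. (1.29): L{delta(t - t0)} = e^{-s t0}, here with t0 = 3
F = laplace_transform(DiracDelta(t - 3), t, s, noconds=True)
assert simplify(F - exp(-3*s)) == 0
```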
Example 1. Solve $y'' + 2y' + 2y = \delta(t-\pi)$, $y(0) = 1$, $y'(0) = 0$.

Taking Laplace transforms,
\[
s^2 Y(s) - s + 2[sY(s) - 1] + 2Y(s) = e^{-\pi s},
\]
i.e.,
\[
(s^2+2s+2)Y(s) = s + 2 + e^{-\pi s}.
\]
\[
Y(s) = \frac{s+1}{(s+1)^2+1} + \frac{1}{(s+1)^2+1} + \frac{1}{(s+1)^2+1}\,e^{-\pi s}.
\]
\[
y(t) = e^{-t}\cos t + e^{-t}\sin t + e^{-(t-\pi)}\sin(t-\pi)\,u_\pi(t),
\]
i.e.,
\[
y(t) = e^{-t}\cos t + e^{-t}\sin t\left[1 - e^{\pi}u_\pi(t)\right].
\]
Example 2. Solve $y'' + 4y = \delta(t-\pi) - \delta(t-2\pi)$, $y(0) = y'(0) = 0$.
\[
s^2 Y(s) + 4Y(s) = e^{-\pi s} - e^{-2\pi s}.
\]
\[
Y(s) = \frac{1}{s^2+4}\left(e^{-\pi s} - e^{-2\pi s}\right).
\]
\[
y(t) = \frac{1}{2}\sin 2(t-\pi)\,u_\pi(t) - \frac{1}{2}\sin 2(t-2\pi)\,u_{2\pi}(t) = \frac{1}{2}\sin 2t\left(u_\pi(t) - u_{2\pi}(t)\right).
\]
Example 3. Solve $y'' + y = \delta(t-\pi)\cos t$, $y(0) = 0$, $y'(0) = 1$.
\[
s^2 Y - 1 + Y = e^{-\pi s}\cos\pi = -e^{-\pi s}.
\]
\[
Y(s) = \frac{1}{s^2+1}\left(1 - e^{-\pi s}\right).
\]
\[
y(t) = \sin t - \sin(t-\pi)\,u_\pi(t) = \sin t\left(1 + u_\pi(t)\right).
\]
Problem Set 1.12

Solve the following initial value problems.

1. $y'' + y = \delta(t-2\pi)$, $y(0) = 0$, $y'(0) = 1$.
2. $y'' + 2y' + 3y = \sin t + \delta(t-\pi)$, $y(0) = 0$, $y'(0) = 1$.
3. $y'' + y = u_{\pi/2}(t) + \delta(t-\pi) - u_{3\pi/2}(t)$, $y(0) = 0$, $y'(0) = 0$.
4. $y'' + 4y = 4\delta\!\left(t-\frac{\pi}{6}\right)\sin t$, $y(0) = 0$, $y'(0) = 0$.
5. $y'' - 2y' = 1 + \delta(t-2)$, $y(0) = 0$, $y'(0) = 1$.
6. $y^{(4)} - y = \delta(t-1)$, $y(0) = y'(0) = y''(0) = y'''(0) = 0$.
1.13 The convolution integral.
The inverse of the product of two Laplace transforms is not the product of the separate inverses, i.e.,
L⁻¹{F(s)G(s)} ≠ L⁻¹{F(s)} L⁻¹{G(s)}.
The inverse of the product is given by the following theorem:
Theorem: If F(s) = L{f(t)} and G(s) = L{g(t)} both exist for s > a ≥ 0, then
H(s) = F(s)G(s) = L{h(t)},  s > a,
where
h(t) = ∫₀ᵗ f(t − τ) g(τ) dτ = ∫₀ᵗ f(τ) g(t − τ) dτ.    (1.33)
The function h(t) is the convolution of f(t) and g(t) and the integrals defining h(t) are the convolution integrals.
We write
h(t) = (f ∗ g)(t),
so that h(t) is a generalized product.
Proof:
Let F(s) = ∫₀^∞ e^{−sξ} f(ξ) dξ and G(s) = ∫₀^∞ e^{−sτ} g(τ) dτ;
then
F(s)G(s) = ∫₀^∞ e^{−sξ} f(ξ) dξ ∫₀^∞ e^{−sτ} g(τ) dτ
         = ∫₀^∞ g(τ) dτ ∫₀^∞ e^{−s(ξ+τ)} f(ξ) dξ.
Put ξ = t − τ (τ fixed) in the second integral; then
F(s)G(s) = ∫₀^∞ g(τ) dτ ∫_τ^∞ e^{−st} f(t − τ) dt.
Reversing the order of integration, this becomes
F(s)G(s) = ∫₀^∞ e^{−st} dt ∫₀ᵗ f(t − τ) g(τ) dτ
         = ∫₀^∞ e^{−st} h(t) dt = L{h(t)}.
Example 1. Evaluate L⁻¹{ 1/[s²(s² + 1)] }.
Choose F(s) = 1/s², G(s) = 1/(s² + 1), so that
f(t) = t,  g(t) = sin t.
Then
h(t) = ∫₀ᵗ (t − τ) sin τ dτ
     = [ (t − τ)(−cos τ) ]₀ᵗ − ∫₀ᵗ cos τ dτ
     = t − sin t.
Alternatively, we could have written
h(t) = ∫₀ᵗ τ sin(t − τ) dτ,
which leads to the same result.
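The integral above, and the fact that its transform really is the product F(s)G(s), can be confirmed with sympy. This sketch is an addition of this edit, not part of the text.

```python
# Evaluate the convolution integral of Example 1 and its transform.
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

h = sp.integrate((t - tau)*sp.sin(tau), (tau, 0, t))
assert sp.simplify(h - (t - sp.sin(t))) == 0

# and L{h} recovers F(s)G(s) = 1/(s^2 (s^2 + 1))
H = sp.laplace_transform(h, t, s, noconds=True)
assert sp.simplify(H - 1/(s**2*(s**2 + 1))) == 0
```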
Example 2. Evaluate L⁻¹{ 1/(s² + a²)² }.
Choose F(s) = G(s) = 1/(s² + a²).
Then f(t) = g(t) = (1/a) sin at,
h(t) = (1/a²) ∫₀ᵗ sin a(t − τ) sin aτ dτ.
Using the identity sin A sin B = ½[cos(A − B) − cos(A + B)] (with A = aτ, B = a(t − τ)),
h(t) = (1/2a²) ∫₀ᵗ [cos a(2τ − t) − cos at] dτ
     = (1/2a²) [ (1/2a) sin a(2τ − t) − τ cos at ]₀ᵗ
     = (1/2a²) [ (1/2a) sin at − t cos at + (1/2a) sin at ]
     = (1/2a³) (sin at − at cos at).
Example 3. Find the Laplace transform of
f(t) = ∫₀ᵗ (t − τ)² cos 2τ dτ.
L{f(t)} = L{t²} L{cos 2t} = (2/s³) · s/(s² + 4) = 2/[s²(s² + 4)].
Example 4. Find L⁻¹{ F(s)/(s² + 1) }.
Let 1/(s² + 1) = G(s); then g(t) = sin t.
L⁻¹{ F(s)/(s² + 1) } = h(t) = ∫₀ᵗ sin(t − τ) f(τ) dτ,
where f(t) = L⁻¹{F(s)}.
Example 5. Express, in terms of a convolution integral, the solution of the initial value problem
y″ + 4y′ + 4y = g(t),  y(0) = 2,  y′(0) = −3.
s²Y(s) − 2s + 3 + 4sY(s) − 8 + 4Y(s) = G(s).
Y(s) = (2s + 5)/(s + 2)² + G(s)/(s + 2)²
     = 2/(s + 2) + 1/(s + 2)² + G(s)/(s + 2)².
Since L⁻¹{1/(s + 2)²} = te^{−2t},
y(t) = 2e^{−2t} + te^{−2t} + ∫₀ᵗ (t − τ) e^{−2(t−τ)} g(τ) dτ.
Example 6. Find L⁻¹{ 1/[s⁴(s² + 1)] }.
Let G(s) = 1/(s² + 1) and F(s) = 1/s⁴, so that H(s) = F(s)G(s).
Then g(t) = sin t and f(t) = t³/6.
h(t) = ∫₀ᵗ (1/6) τ³ sin(t − τ) dτ.
Integrating by parts we obtain
h(t) = t³/6 − t + sin t.
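The repeated integration by parts can be delegated to sympy as a check. This sketch is an addition of this edit, not part of the text.

```python
# Evaluate the Example 6 convolution integral directly.
import sympy as sp

t, tau = sp.symbols('t tau', nonnegative=True)

h = sp.integrate(sp.Rational(1, 6)*tau**3*sp.sin(t - tau), (tau, 0, t))
assert sp.simplify(h - (t**3/6 - t + sp.sin(t))) == 0
```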
Problem Set 1.13
Evaluate the following using the convolution theorem.
1. L{ ∫₀ᵗ (t − τ) e^{τ} dτ }        2. L{ ∫₀ᵗ sin(t − τ) cos τ dτ }
3. L{ ∫₀ᵗ (t − τ) cos τ dτ }        4. L{ ∫₀ᵗ sin τ dτ }
5. L{ t² ∗ t³ }                     6. L⁻¹{ F(s)/(s + 5) }
7. L⁻¹{ s F(s)/(s² + 4) }           8. L⁻¹{ 1/[s(s + 1)] }
9. L⁻¹{ s/(s² + 4)² }               10. L⁻¹{ 1/(s + 1)² }
11. L⁻¹{ 1/(s² + 4s + 5)² }         12. L⁻¹{ 1/[(s − 3)(s² + 4)] }
13. Compute cos t ∗ cos t and thus show that f ∗ f is not necessarily nonnegative.
14. Solve y″ + 3y′ + 2y = cos t, y(0) = 1, y′(0) = …

m₁x₁″ = −(k₁ + k₂)x₁ + k₂x₂,
m₂x₂″ = k₂x₁ − k₂x₂.    (2.1)
Such a system may be supplemented by initial conditions, e.g., information that the masses start from their equilibrium positions with certain velocities.
The system (2.1) is a linear system of the second order, since second derivatives of the variables appear.
Example 2. Consider an electrical network with more than one loop, as shown in the diagram. The current i₁(t) splits into two directions at the branch point B₁. From Kirchhoff's first law,
i₁(t) = i₂(t) + i₃(t).
[Figure: a two-loop network with source E, resistors R₁ and R₂, inductors L₁ and L₂, and branch currents i₁, i₂, i₃.]
Applying Kirchhoff's second law to each loop, the voltage drops across each part of the loop give the following system of differential equations:
E(t) = i₁R₁ + L₁i₂′ + i₂R₂,
E(t) = i₁R₁ + L₂i₃′.
Eliminating i₁ from these equations we obtain
L₁i₂′ = −(R₁ + R₂)i₂ − R₁i₃ + E(t),
L₂i₃′ = −R₁i₂ − R₁i₃ + E(t).
This system can be supplemented by initial conditions such as i₂(0) = 0, i₃(0) = 0. This is a first order linear system.
2.2 Basic theory of systems of first order linear equations.
We first note that any nth order differential equation of the form
y⁽ⁿ⁾ = F(t, y, y′, …, y⁽ⁿ⁻¹⁾)    (2.2)
can be reduced to a system of n first-order equations of a special form. Introduce the variables x₁, x₂, …, xₙ defined by
x₁ = y, x₂ = y′, x₃ = y″, …, xₙ = y⁽ⁿ⁻¹⁾.    (2.3)
Then Eqs. (2.2) and (2.3) can be written in the form of a system:
x₁′ = x₂
x₂′ = x₃
⋮                          (2.4)
x′_{n−1} = xₙ
xₙ′ = F(t, x₁, x₂, …, xₙ).
Example 1. Consider the differential equation
y″ + 3y′ + 2y = 0.
Put x₁ = y, x₂ = y′; then x₂′ = y″ = −3x₂ − 2x₁, with x₁′ = x₂,
i.e., we have the system
x₁′ = x₂
x₂′ = −2x₁ − 3x₂,
which can be written in vector–matrix form as
x′ = Ax,
where
x = (x₁, x₂)ᵀ  and  A = [ 0   1
                          −2  −3 ].
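The reduction above is exactly how numerical ODE solvers are fed higher-order equations. The sketch below is an addition of this edit, not from the text; it integrates the first-order system with scipy and compares against the closed-form solution for the sample initial data y(0) = 1, y′(0) = 0 (an assumption of this check).

```python
# Integrate the first-order system x1' = x2, x2' = -2 x1 - 3 x2 numerically and
# compare with the closed form y = 2 e^{-t} - e^{-2t} for y(0) = 1, y'(0) = 0.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    x1, x2 = x
    return [x2, -2*x1 - 3*x2]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], rtol=1e-9, atol=1e-12,
                t_eval=np.linspace(0.0, 5.0, 11))
exact = 2*np.exp(-sol.t) - np.exp(-2*sol.t)
assert np.allclose(sol.y[0], exact, atol=1e-6)
```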
The system (2.4) is a special case of the more general system
x₁′ = F₁(t, x₁, …, xₙ)
⋮                          (2.5)
xₙ′ = Fₙ(t, x₁, …, xₙ).
However, we are interested in systems in which each of the functions F₁, …, Fₙ is linear in the variables x₁, …, xₙ; such a system is said to be linear. The most general system of n linear first order equations has the canonical form
x₁′ = a₁₁(t)x₁ + … + a₁ₙ(t)xₙ + f₁(t)
⋮                          (2.6)
xₙ′ = aₙ₁(t)x₁ + … + aₙₙ(t)xₙ + fₙ(t).
If the functions f₁(t), …, fₙ(t) are all zero the system is said to be homogeneous; otherwise the system is nonhomogeneous. In addition to the system of equations, there may also be given initial conditions of the form
x₁(t₀) = x₁⁰, x₂(t₀) = x₂⁰, …, xₙ(t₀) = xₙ⁰.    (2.7)
Theorem: If the functions a_{ij}(t), f_i(t) (i, j = 1, …, n) are continuous on an open interval α < t < β containing the point t = t₀, then there exists a unique solution x₁, …, xₙ of the system of differential equations (2.6) which also satisfies the initial conditions (2.7). This solution is valid throughout the interval α < t < β.
We write the system (2.6) in vector–matrix form, i.e.,
x′ = Ax + f(t),    (2.8)
where
x = (x₁, …, xₙ)ᵀ,  A = [ a₁₁ … a₁ₙ ; ⋮ ; aₙ₁ … aₙₙ ],  f = (f₁(t), …, fₙ(t))ᵀ.
A vector x is said to be a solution of the system (2.8) if its components satisfy the system (2.6). We assume that A and f(t) are continuous, i.e., all of their components are continuous, on some interval α < t < β. From the last theorem this guarantees the existence of a solution on the interval α < t < β.
We commence by considering the homogeneous system
x′ = Ax,    (2.9)
i.e., system (2.8) with f(t) = 0. Specific solutions of this system will be denoted by
x⁽¹⁾(t), x⁽²⁾(t), …, x⁽ᵏ⁾(t).
Theorem (Principle of Superposition): If the vector functions x⁽¹⁾, x⁽²⁾, …, x⁽ᵏ⁾ are solutions of the system (2.9), then the linear combination
x = c₁x⁽¹⁾(t) + c₂x⁽²⁾(t) + … + c_k x⁽ᵏ⁾(t)
is also a solution for any constants c₁, …, c_k.
Example 2. Consider the system
x′ = [ 3  2
       1  2 ] x.
It can be shown that
x⁽¹⁾(t) = (1, −1)ᵀ e^{t}  and  x⁽²⁾(t) = (2, 1)ᵀ e^{4t}
are solutions of this system. From the above theorem, the vector
x = c₁ (1, −1)ᵀ e^{t} + c₂ (2, 1)ᵀ e^{4t} = c₁x⁽¹⁾(t) + c₂x⁽²⁾(t)
also satisfies the system. Check this!
Definition. The set of solutions x⁽¹⁾, …, x⁽ᵏ⁾ is said to be linearly dependent on some interval α < t < β if there exist constants c₁, …, c_k, not all zero, such that
c₁x⁽¹⁾ + … + c_k x⁽ᵏ⁾ = 0
for every t in the interval. Otherwise the vectors are said to be linearly independent.
Suppose that x⁽¹⁾, …, x⁽ⁿ⁾ are n solutions of the nth order system (2.9). We write
x⁽ⁱ⁾ = ( x_{1i}, x_{2i}, …, x_{ni} )ᵀ,
i.e., x_{mi} is the mth component of the ith solution. Form a matrix by writing each solution vector as a column of the matrix, i.e.,
Ψ(t) = [ x₁₁(t)  x₁₂(t)  …  x₁ₙ(t)
         ⋮                  ⋮
         xₙ₁(t)  xₙ₂(t)  …  xₙₙ(t) ].    (2.10)
Now the columns of Ψ(t) are linearly independent for a given value of t if and only if det Ψ(t) ≠ 0 for that value of t. This determinant is denoted by W(x⁽¹⁾, …, x⁽ⁿ⁾) and is called the Wronskian of the n solutions x⁽¹⁾, …, x⁽ⁿ⁾,
i.e., W = det Ψ(t).
Hence, the solutions x⁽¹⁾, …, x⁽ⁿ⁾ are linearly independent at a point if and only if W ≠ 0 at that point.
In fact, it can be shown that, if x⁽¹⁾, …, x⁽ⁿ⁾ are solution vectors of the system, then either
W(x⁽¹⁾, …, x⁽ⁿ⁾) = 0
for every t in α < t < β, or
W(x⁽¹⁾, …, x⁽ⁿ⁾) ≠ 0
for every t in the interval. Hence, if we can show that W ≠ 0 for some t₀ in α < t < β, then W ≠ 0 for every t and so the solutions are linearly independent on the interval.
Example 3. Consider the system of Example 2.
The solutions
x⁽¹⁾ = (1, −1)ᵀ e^{t},  x⁽²⁾ = (2, 1)ᵀ e^{4t}
are clearly linearly independent in (−∞, ∞) since neither vector is a constant multiple of the other.
The Wronskian is given by
W(x⁽¹⁾, x⁽²⁾) = | e^{t}   2e^{4t} |
                | −e^{t}   e^{4t} | = 3e^{5t} ≠ 0
for all real values of t.
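The determinant is quickly confirmed symbolically. This sketch is an addition of this edit, not part of the text.

```python
# The Wronskian of the two solutions of Example 3.
import sympy as sp

t = sp.symbols('t')
Psi = sp.Matrix([[sp.exp(t), 2*sp.exp(4*t)],
                 [-sp.exp(t),  sp.exp(4*t)]])
assert sp.simplify(Psi.det() - 3*sp.exp(5*t)) == 0
```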
Definition. Any set x⁽¹⁾, …, x⁽ⁿ⁾ of n linearly independent solution vectors of the homogeneous system (2.9) is said to be a fundamental set of solutions.
Definition. If x⁽¹⁾, …, x⁽ⁿ⁾ is a fundamental set of solutions of the homogeneous system (2.9) then the general solution of the system is
x = c₁x⁽¹⁾ + c₂x⁽²⁾ + … + cₙx⁽ⁿ⁾,    (2.11)
where c₁, …, cₙ are arbitrary constants.
Note that the general solution (2.11) can be written in the form
x = c₁ (x₁₁, …, xₙ₁)ᵀ + c₂ (x₁₂, …, xₙ₂)ᵀ + … + cₙ (x₁ₙ, …, xₙₙ)ᵀ
  = ( c₁x₁₁ + c₂x₁₂ + … + cₙx₁ₙ, …, c₁xₙ₁ + c₂xₙ₂ + … + cₙxₙₙ )ᵀ
  = [ x₁₁  x₁₂ …  x₁ₙ
      ⋮            ⋮
      xₙ₁  xₙ₂ …  xₙₙ ] (c₁, …, cₙ)ᵀ,
i.e., x = Ψ(t)c,    (2.12)
where c is the column vector
c = (c₁, …, cₙ)ᵀ
and Ψ(t) is as defined in Eq. (2.10) with the solutions x⁽¹⁾(t), …, x⁽ⁿ⁾(t) linearly independent. The matrix Ψ(t) is said to be a fundamental matrix of the system on the interval α < t < β. Note that, since det Ψ(t) = W ≠ 0, the inverse Ψ⁻¹(t) exists for every value of t in the interval.
Example 4. For the problem of Examples 2 and 3, a fundamental matrix is
Ψ(t) = [ e^{t}   2e^{4t}
         −e^{t}   e^{4t} ]
and it follows that
Ψ⁻¹(t) = [ (1/3)e^{−t}   −(2/3)e^{−t}
           (1/3)e^{−4t}   (1/3)e^{−4t} ].
Suppose that x⁽¹⁾, …, x⁽ⁿ⁾ are solutions of the system (2.9) which satisfy the initial conditions
x⁽¹⁾(t₀) = e⁽¹⁾ = (1, 0, …, 0)ᵀ,  x⁽²⁾(t₀) = e⁽²⁾ = (0, 1, …, 0)ᵀ,  …,  x⁽ⁿ⁾(t₀) = e⁽ⁿ⁾ = (0, 0, …, 1)ᵀ,
where t₀ is some point in the interval α < t < β. The fundamental matrix of this system is a special case of Ψ(t) and is usually denoted by Φ(t). It has the property that
Φ(t₀) = I = [ 1 0 … 0
              0 1 … 0
              ⋮      ⋮
              0 0 … 1 ].    (2.13)
Example 5. Consider the system of Example 3. The general solution is
x = c₁ (1, −1)ᵀ e^{t} + c₂ (2, 1)ᵀ e^{4t}.
Choose t₀ = 0 and let the initial conditions be
x⁽¹⁾(0) = (1, 0)ᵀ,  x⁽²⁾(0) = (0, 1)ᵀ.
The first of these conditions leads to
(1, 0)ᵀ = c₁ (1, −1)ᵀ + c₂ (2, 1)ᵀ,
i.e., c₁ = c₂ = 1/3, so that
x⁽¹⁾(t) = (1/3)(1, −1)ᵀ e^{t} + (1/3)(2, 1)ᵀ e^{4t},
i.e., x⁽¹⁾(t) = ( (1/3)e^{t} + (2/3)e^{4t}, −(1/3)e^{t} + (1/3)e^{4t} )ᵀ.
The second initial condition leads to
(0, 1)ᵀ = c₁ (1, −1)ᵀ + c₂ (2, 1)ᵀ,
i.e., c₁ = −2/3, c₂ = 1/3, so that
x⁽²⁾(t) = ( −(2/3)e^{t} + (2/3)e^{4t}, (2/3)e^{t} + (1/3)e^{4t} )ᵀ.
Hence
Φ(t) = [ (1/3)e^{t} + (2/3)e^{4t}    −(2/3)e^{t} + (2/3)e^{4t}
         −(1/3)e^{t} + (1/3)e^{4t}    (2/3)e^{t} + (1/3)e^{4t} ]
and it is easily seen that Φ(0) = I.
Problem Set 2.2
In problems 1–6 verify that the vector x is a solution of the given system.
1. dx/dt = 3x − 4y,  dy/dt = 4x − 7y;  x = (1, 2)ᵀ e^{−5t}
2. dx/dt = −2x + 5y,  dy/dt = −2x + 4y;  x = (5 cos t, 3 cos t − sin t)ᵀ e^{t}
3. x′ = [ −1  1/4
           1  −1 ] x;  x = (1, −2)ᵀ e^{−(3/2)t}
4. x′ = [ −2  1
          −1  0 ] x;  x = (−1, 3)ᵀ e^{−t} + (4, 4)ᵀ t e^{−t}
5. x′ = [  1  2  1
           6 −1  0
          −1 −2 −1 ] x;  x = (1, 6, −13)ᵀ
6. x′ = [  1  0  1
           1  1  0
          −2  0 −1 ] x;  x = ( sin t, −½ sin t − ½ cos t, −sin t + cos t )ᵀ
In problems 7 and 8 the given vectors are solutions of a system x′ = Ax. Determine whether the vectors form a fundamental set on (−∞, ∞).
7. x⁽¹⁾ = (1, 1)ᵀ e^{t},  x⁽²⁾ = (2, 6)ᵀ e^{t} + (8, −8)ᵀ t e^{t}
8. x⁽¹⁾ = (1, 2, 4)ᵀ + t (1, 2, 2)ᵀ,  x⁽²⁾ = (1, 2, 4)ᵀ,  x⁽³⁾ = (3, 6, 12)ᵀ + t (2, 4, 4)ᵀ
9. Prove that the general solution of
x′ = [ 0 6 0
       1 0 1
       1 1 0 ] x
on the interval (−∞, ∞) is
x = c₁ (6, −1, −5)ᵀ e^{−t} + c₂ (−3, 1, 1)ᵀ e^{−2t} + c₃ (2, 1, 1)ᵀ e^{3t}.
In problems 10–12 the indicated column vectors form a fundamental set of solutions for the given system on (−∞, ∞). Form a fundamental matrix Ψ(t) and compute Ψ⁻¹(t).
10. x′ = [ 2 3 ; 3 2 ] x;  x⁽¹⁾ = (1, −1)ᵀ e^{−t},  x⁽²⁾ = (1, 1)ᵀ e^{5t}
11. x′ = [ 4 −1 ; 9 −2 ] x;  x⁽¹⁾ = (1, 3)ᵀ e^{t},  x⁽²⁾ = (1, 3)ᵀ t e^{t} + (0, −1)ᵀ e^{t}
12. x′ = [ 3 2 ; −5 −3 ] x;  x⁽¹⁾ = (2 cos t, −3 cos t − sin t)ᵀ,  x⁽²⁾ = (2 sin t, cos t − 3 sin t)ᵀ
13. Find the fundamental matrix Φ(t) satisfying Φ(0) = I for the system in problem 10.
14. Find the fundamental matrix Φ(t) satisfying Φ(0) = I for the system in problem 11.
15. Find the fundamental matrix Φ(t) satisfying Φ(π/2) = I for the system in problem 12.
16. Show that the fundamental matrices Ψ(t) and Φ(t) satisfy Φ(t) = Ψ(t)Ψ⁻¹(t₀).
2.3 Review of eigenvalues and eigenvectors.
Given the n×n matrix A, the equation Ax = y can be regarded as a linear transformation that maps (or transforms) a given vector x into a new vector y. In many applications it is useful to find a vector x which is transformed into a multiple of itself by the action of A, i.e., we need to find the solution vectors, x, of the linear system
Ax = λx,    (2.14)
where λ is the proportionality factor. A vector x satisfying Eq. (2.14) is called an eigenvector of A corresponding to the eigenvalue λ.
Eq. (2.14) can be rewritten in the form
(A − λI)x = 0,    (2.15)
where I is the n×n identity matrix. This is a homogeneous system of linear equations which will have nontrivial solutions if and only if
det(A − λI) = 0.    (2.16)
Eq. (2.16) is a polynomial equation of degree n known as the characteristic equation. Thus, the n×n matrix A has exactly n eigenvalues, some of which may be repeated. If a given eigenvalue appears m times as a root of Eq. (2.16) then that eigenvalue is said to be of multiplicity m. Each eigenvalue has at least one associated eigenvector; an eigenvalue of multiplicity m may have q linearly independent eigenvectors where 1 ≤ q ≤ m. If all of the eigenvalues of a matrix A are simple (i.e., of multiplicity one), then the n eigenvectors of A are linearly independent.
Given the eigenvalues, we use Gauss–Jordan elimination to solve the system of equations (2.15) to find the eigenvectors. These eigenvectors are determined only up to a multiplicative constant. We can normalize the eigenvector by choosing the constant appropriately.
We illustrate the above theory by presenting a number of examples. These examples will cover the following possibilities:
(a) All eigenvalues real and simple.
(b) Some eigenvalues complex. (Note that for a real matrix A, complex eigenvalues must occur in conjugate pairs.)
(c) Eigenvalues of multiplicity m with m linearly independent associated eigenvectors.
(d) Eigenvalues of multiplicity m with less than m linearly independent associated eigenvectors.
Example 1. Find the eigenvalues and eigenvectors of the matrix
A = [ 1 −1  4
      3  2 −1
      2  1 −1 ].
The characteristic equation is
det(A − λI) = 0,
i.e., λ³ − 2λ² − 5λ + 6 = 0,
i.e., (λ − 1)(λ + 2)(λ − 3) = 0.
Thus A has the three simple eigenvalues λ₁ = 1, λ₂ = −2 and λ₃ = 3. For λ = 1, Eq. (2.15) is, in augmented form,
[ 0 −1  4 | 0
  3  1 −1 | 0
  2  1 −2 | 0 ],
which row-reduces to give x₁ = −x₃, x₂ = 4x₃ and, putting x₃ = 1, the associated eigenvector is x⁽¹⁾ = (−1, 4, 1)ᵀ.
For λ = −2, Eq. (2.15) is
[ 3 −1  4 | 0
  3  4 −1 | 0
  2  1  1 | 0 ].
Hence x₂ = −x₁, x₃ = −x₁ and, putting x₁ = 1, the associated eigenvector is x⁽²⁾ = (1, −1, −1)ᵀ.
For λ = 3, Eq. (2.15) is
[ −2 −1  4 | 0
   3 −1 −1 | 0
   2  1 −4 | 0 ].
Hence x₃ = x₁, x₂ = 2x₁ and, putting x₁ = 1, the associated eigenvector is x⁽³⁾ = (1, 2, 1)ᵀ.
Note that any multiple of x⁽¹⁾, x⁽²⁾ or x⁽³⁾ is also an eigenvector.
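The eigenpairs above can be confirmed numerically. This sketch is an addition of this edit, not part of the text.

```python
# Check the eigenpairs of Example 1 numerically.
import numpy as np

A = np.array([[1., -1.,  4.],
              [3.,  2., -1.],
              [2.,  1., -1.]])

vals = np.linalg.eigvals(A)
assert np.allclose(sorted(vals.real), [-2., 1., 3.])

# each claimed pair satisfies A k = lambda k
for lam, k in [(1., [-1., 4., 1.]), (-2., [1., -1., -1.]), (3., [1., 2., 1.])]:
    k = np.array(k)
    assert np.allclose(A @ k, lam*k)
```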
Example 2. Find the eigenvalues and eigenvectors of the matrix
A = [ 3 −2
      4 −1 ].
The characteristic equation is
det(A − λI) = 0,
i.e., λ² − 2λ + 5 = 0,
i.e., λ = 1 + 2i, 1 − 2i.
Thus A has the conjugate complex pair of eigenvalues λ₁ = 1 + 2i, λ₂ = 1 − 2i. For λ = 1 + 2i, Eq. (2.15) is
[ 2 − 2i    −2     | 0
  4       −2 − 2i  | 0 ].
Hence x₂ = (1 − i)x₁ and, putting x₁ = 1, the associated eigenvector is x⁽¹⁾ = (1, 1 − i)ᵀ.
For λ = 1 − 2i it follows that the associated eigenvector is the complex conjugate of x⁽¹⁾, i.e., x⁽²⁾ = (1, 1 + i)ᵀ.
Example 3. Find the eigenvalues and eigenvectors of the matrix
A = [ 0 1 1
      1 0 1
      1 1 0 ].
The characteristic equation is
det(A − λI) = 0,
i.e., (λ + 1)²(λ − 2) = 0.
Thus A has the simple eigenvalue λ₁ = 2 and the eigenvalue of multiplicity two, λ₂ = −1.
For λ = 2, Eq. (2.15) is
[ −2  1  1 | 0
   1 −2  1 | 0
   1  1 −2 | 0 ].
Hence x₃ = x₂ = x₁ and the eigenvector is x⁽¹⁾ = (1, 1, 1)ᵀ. For λ = −1, Eq. (2.15) is
[ 1 1 1 | 0
  1 1 1 | 0
  1 1 1 | 0 ].
Hence, we have the single equation
x₁ + x₂ + x₃ = 0,
and thus two parameters that can be assigned arbitrary values. If we put x₂ = a, x₃ = b, then x₁ = −a − b and the eigenvector is of the form
( −a − b, a, b )ᵀ = a (−1, 1, 0)ᵀ + b (−1, 0, 1)ᵀ.
Hence, a pair of linearly independent eigenvectors associated with the repeated eigenvalue λ₂ = −1 is
x⁽²⁾ = (−1, 1, 0)ᵀ,  x⁽³⁾ = (−1, 0, 1)ᵀ.
Thus, in this case, the number of linearly independent eigenvectors equals the multiplicity of the repeated eigenvalue. Note that any linear combination of x⁽²⁾ and x⁽³⁾ is also an eigenvector associated with the eigenvalue λ₂ = −1.
Example 4. Find the eigenvalues and eigenvectors of the matrix
A = [ −5  5  9
      −8  9 18
       2 −3 −7 ].
The characteristic equation is
det(A − λI) = 0, i.e., (λ + 1)³ = 0.
Thus A has an eigenvalue, λ = −1, of multiplicity 3. For λ = −1, Eq. (2.15) is
[ −4  5  9 | 0
  −8 10 18 | 0
   2 −3 −6 | 0 ].
Hence x₂ = −3x₃, 2x₁ = −3x₃ and, putting x₃ = −2, we obtain the single linearly independent eigenvector
x⁽¹⁾ = (3, 6, −2)ᵀ.
Example 5. Find the eigenvalues and eigenvectors of the matrix
A = [ −1  3  −9
       0  5 −18
       0  2  −7 ].
The characteristic equation is
det(A − λI) = 0,
i.e., (1 + λ)(λ² + 2λ + 1) = 0,
i.e., (λ + 1)³ = 0.
Thus A has an eigenvalue, λ = −1, of multiplicity 3. For λ = −1, Eq. (2.15) is
[ 0 3  −9 | 0
  0 6 −18 | 0
  0 2  −6 | 0 ].
Hence x₁ is arbitrary and x₂ = 3x₃; i.e., put x₁ = a, x₃ = b and the eigenvector is
( a, 3b, b )ᵀ = a (1, 0, 0)ᵀ + b (0, 3, 1)ᵀ.
Thus the repeated eigenvalue λ = −1 has two linearly independent associated eigenvectors
x⁽¹⁾ = (1, 0, 0)ᵀ,  x⁽²⁾ = (0, 3, 1)ᵀ.
Example 6. Find the eigenvalues and eigenvectors of the matrix
A = [ 2 0 0
      0 2 0
      0 0 2 ].
The characteristic equation is
det(A − λI) = 0, i.e., (2 − λ)³ = 0.
Thus A has an eigenvalue, λ = 2, of multiplicity 3. For λ = 2, Eq. (2.15) reduces to the zero matrix, i.e., x₁, x₂, x₃ may have any values. Put x₁ = a, x₂ = b, x₃ = c and the eigenvector is
( a, b, c )ᵀ = a (1, 0, 0)ᵀ + b (0, 1, 0)ᵀ + c (0, 0, 1)ᵀ.
Hence the repeated eigenvalue λ = 2 has three linearly independent associated eigenvectors
x⁽¹⁾ = (1, 0, 0)ᵀ,  x⁽²⁾ = (0, 1, 0)ᵀ,  x⁽³⁾ = (0, 0, 1)ᵀ.
Problem Set 2.3
In each of the problems find the eigenvalues and eigenvectors of the given matrix.
1. [ 1 2 ; 7 8 ]          2. [ 2 1 ; 2 1 ]          3. [ 8 1 ; 16 0 ]
4. [ 1 1 ; 1/4 1 ]        5. [ 5 1 0 ; 0 5 9 ; 5 1 0 ]        6. [ 3 0 0 ; 0 2 0 ; 4 0 1 ]
7. [ 0 4 0 ; 1 4 0 ; 0 0 2 ]        8. [ 1 6 0 ; 0 2 1 ; 0 1 2 ]
9. [ 1 2 ; 5 1 ]          10. [ 2 1 0 ; 5 2 4 ; 0 1 2 ]
2.4 Homogeneous linear systems with constant coefficients.
We now show how to construct the general solution of a system of homogeneous linear equations with constant coefficients, i.e., a system of the form (2.9), x′ = Ax, where A is a constant n×n matrix. We have seen in Section 2.2 that the systems considered there all had solutions of the form
x = k e^{λt},    (2.17)
where k is a constant vector, so we look for solutions of this form. Since Eq. (2.17) implies that x′ = λk e^{λt}, substitution into Eq. (2.9) gives
λk e^{λt} = Ak e^{λt},
i.e., Ak = λk,
and nontrivial solutions of this equation exist if and only if
det(A − λI) = 0.
This is the characteristic equation of the matrix A. Hence it follows that x = k e^{λt} is a solution of the system (2.9) if and only if λ is an eigenvalue of A and k is the associated eigenvector.
If the n×n matrix A possesses n distinct real eigenvalues λ₁, …, λₙ, then the associated eigenvectors k⁽¹⁾, …, k⁽ⁿ⁾ are linearly independent and
x⁽¹⁾ = k⁽¹⁾e^{λ₁t},  x⁽²⁾ = k⁽²⁾e^{λ₂t},  …,  x⁽ⁿ⁾ = k⁽ⁿ⁾e^{λₙt}    (2.18)
form a fundamental set of solutions for the system and the general solution on the interval (−∞, ∞) is
x = c₁k⁽¹⁾e^{λ₁t} + c₂k⁽²⁾e^{λ₂t} + … + cₙk⁽ⁿ⁾e^{λₙt}.    (2.19)
Example 1. Consider the system x′ = Ax where
A = [ 1 −1  4
      3  2 −1
      2  1 −1 ].
This is the matrix of Ex. 1 of Section 2.3. The eigenvalues are 1, −2, and 3 and the associated eigenvectors are
k⁽¹⁾ = (−1, 4, 1)ᵀ,  k⁽²⁾ = (1, −1, −1)ᵀ,  k⁽³⁾ = (1, 2, 1)ᵀ.
Thus the solutions are
x⁽¹⁾ = (−1, 4, 1)ᵀ e^{t},  x⁽²⁾ = (1, −1, −1)ᵀ e^{−2t},  x⁽³⁾ = (1, 2, 1)ᵀ e^{3t},
and so a fundamental matrix is
Ψ(t) = [ −e^{t}    e^{−2t}    e^{3t}
          4e^{t}  −e^{−2t}  2e^{3t}
          e^{t}   −e^{−2t}    e^{3t} ].
The general solution is
x = c₁(−1, 4, 1)ᵀ e^{t} + c₂(1, −1, −1)ᵀ e^{−2t} + c₃(1, 2, 1)ᵀ e^{3t}
or, equivalently, x = Ψ(t)c.
Taking t₀ = 0, the special fundamental matrix Φ(t) is given by Φ(t) = Ψ(t)Ψ⁻¹(0), where
Ψ(0) = [ −1  1  1
          4 −1  2
          1 −1  1 ],   Ψ⁻¹(0) = [ −1/6  1/3  −1/2
                                   1/3  1/3  −1
                                   1/2   0    1/2 ].
Hence
Φ(t) = [ (1/6)e^{t} + (1/3)e^{−2t} + (1/2)e^{3t}     −(1/3)e^{t} + (1/3)e^{−2t}     (1/2)e^{t} − e^{−2t} + (1/2)e^{3t}
         −(2/3)e^{t} − (1/3)e^{−2t} + e^{3t}          (4/3)e^{t} − (1/3)e^{−2t}     −2e^{t} + e^{−2t} + e^{3t}
         −(1/6)e^{t} − (1/3)e^{−2t} + (1/2)e^{3t}     (1/3)e^{t} − (1/3)e^{−2t}     −(1/2)e^{t} + e^{−2t} + (1/2)e^{3t} ].
It is easily seen that Φ(0) = I.
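Since Φ(t) = Ψ(t)Ψ⁻¹(0) is the solution operator for constant A, it coincides with the matrix exponential e^{At}, which gives a compact numerical check. This sketch is an addition of this edit, not part of the text.

```python
# Phi(t) = Psi(t) Psi(0)^{-1} coincides with the matrix exponential e^{At}.
import numpy as np
from scipy.linalg import expm

A = np.array([[1., -1.,  4.],
              [3.,  2., -1.],
              [2.,  1., -1.]])

def Psi(t):
    # columns are the three eigen-solutions k^(i) e^{lambda_i t}
    return np.array([[-np.exp(t),    np.exp(-2*t),   np.exp(3*t)],
                     [4*np.exp(t),  -np.exp(-2*t), 2*np.exp(3*t)],
                     [np.exp(t),    -np.exp(-2*t),   np.exp(3*t)]])

Psi0_inv = np.linalg.inv(Psi(0.0))
for t in (0.0, 0.3, 1.0):
    assert np.allclose(Psi(t) @ Psi0_inv, expm(A*t))
```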
Note that if a matrix A possesses a repeated eigenvalue of multiplicity m to which correspond m linearly independent eigenvectors then the general solution of the system (2.9) is of the form (2.19). In particular, if the matrix A is real and symmetric, i.e., A = Aᵀ, then there is always a full set of n linearly independent eigenvectors even if some of the eigenvalues are repeated.
Example 2. Consider the system x′ = Ax where
A = [ 0 1 1
      1 0 1
      1 1 0 ].
This matrix is real and symmetric and so has real eigenvalues and three linearly independent eigenvectors. The eigenvalues are
λ₁ = 2,  λ₂ = −1,  λ₃ = −1,
i.e., there is an eigenvalue of multiplicity two. The associated eigenvectors are
k⁽¹⁾ = (1, 1, 1)ᵀ,  k⁽²⁾ = (−1, 0, 1)ᵀ,  k⁽³⁾ = (0, 1, −1)ᵀ.
(Check this!) Thus the eigenvalue of multiplicity two has two linearly independent associated eigenvectors and the general solution is
x = c₁(1, 1, 1)ᵀ e^{2t} + c₂(−1, 0, 1)ᵀ e^{−t} + c₃(0, 1, −1)ᵀ e^{−t}
or, in terms of a fundamental matrix,
x = Ψ(t)c = [ e^{2t}  −e^{−t}    0
              e^{2t}    0        e^{−t}
              e^{2t}    e^{−t}  −e^{−t} ] c.
2.5 Complex eigenvectors.
We assume that the coefficient matrix, A, is real but possesses a pair of conjugate complex eigenvalues
λ₁ = α + iβ,  λ₂ = λ̄₁ = α − iβ,    (2.20)
where α, β are real constants. The associated eigenvectors are also complex conjugates, i.e.,
k⁽¹⁾ = a + ib,  k⁽²⁾ = k̄⁽¹⁾ = a − ib,    (2.21)
where a and b are real vectors. Then the two linearly independent solutions of the system which correspond to these eigenvalues are k⁽¹⁾e^{λ₁t} and k⁽²⁾e^{λ₂t} = k̄⁽¹⁾e^{λ̄₁t}. However, the solutions are complex; we need to find real solutions of the system. Since the system x′ = Ax has real coefficients, the real and imaginary parts of the single complex solution x_c(t) = k⁽¹⁾e^{λ₁t} can be shown to be two linearly independent real solutions. Thus the general solution of the system can be written as x(t) = c₁x⁽¹⁾(t) + c₂x⁽²⁾(t), where
x⁽¹⁾(t) = Re(x_c(t)),  x⁽²⁾(t) = Im(x_c(t)).    (2.22)
The real and imaginary parts of k⁽²⁾e^{λ₂t} will not give rise to any additional linearly independent real solutions.
Example 1. Consider the system x′ = Ax where
A = [ 3 −2
      4 −1 ].
This is the matrix of Ex. 2 of Section 2.3. The eigenvalues are λ₁ = 1 + 2i, λ₂ = 1 − 2i and the associated eigenvectors are k⁽¹⁾ = (1, 1 − i)ᵀ, k⁽²⁾ = (1, 1 + i)ᵀ.
We need use only λ₁ and k⁽¹⁾. Thus
x_c(t) = (1, 1 − i)ᵀ e^{t}(cos 2t + i sin 2t),
which expands to yield
x_c(t) = ( e^{t} cos 2t + i e^{t} sin 2t,
           e^{t} cos 2t + e^{t} sin 2t + i(e^{t} sin 2t − e^{t} cos 2t) )ᵀ.
Hence, from Eq. (2.22), the solutions are
x⁽¹⁾(t) = e^{t} ( cos 2t, cos 2t + sin 2t )ᵀ,
x⁽²⁾(t) = e^{t} ( sin 2t, sin 2t − cos 2t )ᵀ,
and a fundamental matrix is
Ψ(t) = [ e^{t} cos 2t                    e^{t} sin 2t
         e^{t} cos 2t + e^{t} sin 2t     e^{t} sin 2t − e^{t} cos 2t ].
Problem Set 2.5
In each of the following problems the given matrix A is the coefficient matrix of a linear homogeneous system of equations x′ = Ax; find the general solution.
1.–3. …
4. A = [ 1/2 9 ; 1/2 2 ]      5. A = [ 10 5 ; 8 12 ]      6. A = [ 6 2 ; 3 1 ]
7. A = [ 1 1 1 ; 0 2 0 ; 0 1 1 ]      8. A = [ 1 1 0 ; 1 2 1 ; 0 3 1 ]
9. A = [ 3 1 1 ; 1 1 1 ; 1 1 1 ]      10. A = [ 1 0 1 ; 0 1 0 ; 1 0 1 ]
11. A = [ 6 1 ; 5 2 ]      12. A = [ 1 1 ; 2 1 ]      13. A = [ 5 1 ; 2 3 ]
14. A = [ 4 5 ; 2 6 ]      15. A = [ 4 5 ; 5 4 ]      16. A = [ 1 8 ; 1 3 ]
17. A = [ 0 0 1 ; 0 0 1 ; 0 1 0 ]      18. A = [ 1 1 2 ; 1 1 0 ; 1 0 1 ]
19. A = [ 4 0 1 ; 0 6 0 ; 4 0 4 ]      20. A = [ 2 5 1 ; 5 6 4 ; 0 0 2 ]
In the following problems solve the given system subject to the indicated initial conditions.
21. x′ = [ 1/2 0 ; 1 1/2 ] x,  x(0) = (3, 5)ᵀ      22. x′ = [ 6 1 ; 5 4 ] x,  x(0) = (2, 8)ᵀ
2.6 Repeated eigenvalues.
We have seen that if the coefficient matrix A of the system x′ = Ax has a repeated eigenvalue λ of multiplicity m and there are m linearly independent eigenvectors associated with λ, then there exist m linearly independent solutions of the system corresponding to λ. This case was covered in Section 2.4 and is essentially the same as the case of distinct eigenvalues. In this section we consider the case of a repeated eigenvalue of multiplicity m with less than m associated eigenvectors.
Suppose that the coefficient matrix A has a repeated eigenvalue λ₁ = λ₂ of multiplicity two and that there is only one associated eigenvector k. Then
x⁽¹⁾ = k e^{λ₁t}    (2.23)
is a solution and we need to find a second linearly independent solution. This can be achieved by assuming a solution of the form
x⁽²⁾ = m t e^{λ₁t} + n e^{λ₁t},    (2.24)
where m and n are constant vectors. Substituting the expression (2.24) into the equation of the system we find that
λ₁m t e^{λ₁t} + m e^{λ₁t} + λ₁n e^{λ₁t} = A(m t e^{λ₁t} + n e^{λ₁t}).
Equating the coefficients of te^{λ₁t} and e^{λ₁t} gives, respectively,
λ₁m = Am  and  m + λ₁n = An,
i.e., (A − λ₁I)m = 0,    (2.25)
(A − λ₁I)n = m.    (2.26)
Eq. (2.25) shows that m is the eigenvector associated with the eigenvalue λ₁, i.e., m = k, so that Eq. (2.26) becomes
(A − λ₁I)n = k.    (2.27)
By solving this equation for n we are able to complete the second solution given by
x⁽²⁾ = k t e^{λ₁t} + n e^{λ₁t}.    (2.28)
Example 1. Find the general solution of the system
x′ = [ 3 −4
       1 −1 ] x = Ax.
The eigenvalues of the coefficient matrix A are given by
det(A − λI) = 0,  i.e., λ² − 2λ + 1 = 0,
i.e., λ₁ = λ₂ = 1.
For λ = 1, the system is
[ 2 −4 | 0
  1 −2 | 0 ],
i.e., x₁ − 2x₂ = 0.
Hence, there is a single eigenvector k = (2, 1)ᵀ and the corresponding solution of the system is
x⁽¹⁾ = (2, 1)ᵀ e^{t}.
The second linearly independent solution is of the form
x⁽²⁾ = (2, 1)ᵀ t e^{t} + n e^{t},
where n is a solution of the equation (A − λ₁I)n = k, i.e.,
[ 2 −4 | 2
  1 −2 | 1 ],
i.e., n₁ − 2n₂ = 1.    (2.29)
If we put n₂ = a, where a is arbitrary, then
n = ( 2a + 1, a )ᵀ = a (2, 1)ᵀ + (1, 0)ᵀ.
Hence, x⁽²⁾ = (2, 1)ᵀ t e^{t} + (1, 0)ᵀ e^{t} + a (2, 1)ᵀ e^{t}. Note that the last term is simply a multiple of the first solution x⁽¹⁾, so the second linearly independent solution is
x⁽²⁾ = (2, 1)ᵀ t e^{t} + (1, 0)ᵀ e^{t} = ( 2t + 1, t )ᵀ e^{t}.
Note that the vector n could have been found by putting one of the components, n₁ or n₂, of n equal to zero in Eq. (2.29).
A fundamental matrix for the system is
Ψ(t) = [ 2e^{t}   (2t + 1)e^{t}
         e^{t}     t e^{t} ]
and the general solution is
x = c₁ (2, 1)ᵀ e^{t} + c₂ ( 2t + 1, t )ᵀ e^{t}.
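Both solutions of this example can be verified symbolically. This sketch is an addition of this edit, not part of the text.

```python
# Both solutions of Example 1 satisfy x' = A x.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3, -4],
               [1, -1]])
x1 = sp.Matrix([2, 1])*sp.exp(t)
x2 = sp.Matrix([2*t + 1, t])*sp.exp(t)
for x in (x1, x2):
    assert sp.simplify(sp.diff(x, t) - A*x) == sp.zeros(2, 1)
```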
Now suppose the coefficient matrix A has an eigenvalue of multiplicity three, i.e., λ₁ = λ₂ = λ₃, and there is only one associated eigenvector k. In this case it can be shown that the first solution is given by Eq. (2.23), a second solution by Eq. (2.28), and a third solution is of the form
x⁽³⁾ = ½ k t² e^{λ₁t} + n t e^{λ₁t} + p e^{λ₁t},    (2.30)
where k is the eigenvector, n satisfies Eq. (2.27) and p satisfies
(A − λ₁I)p = n.    (2.31)
Example 2. Consider the system
x′ = [ −5  5  9
       −8  9 18
        2 −3 −7 ] x.
There is a single repeated eigenvalue λ₁ = λ₂ = λ₃ = −1 and a single eigenvector k = (3, 6, −2)ᵀ.
Thus the first solution is
x⁽¹⁾ = (3, 6, −2)ᵀ e^{−t}.
The second solution is given by Eq. (2.28), i.e.,
x⁽²⁾ = (3, 6, −2)ᵀ t e^{−t} + n e^{−t},
where n is given by (A − λ₁I)n = k, i.e., the augmented system
[ −4  5  9 |  3
  −8 10 18 |  6
   2 −3 −6 | −2 ],
and a solution of this is n = (0, 0, 1/3)ᵀ, so that the second solution is
x⁽²⁾ = (3, 6, −2)ᵀ t e^{−t} + (0, 0, 1/3)ᵀ e^{−t}.
The third solution is given by Eq. (2.30), i.e.,
x⁽³⁾ = ½ (3, 6, −2)ᵀ t² e^{−t} + (0, 0, 1/3)ᵀ t e^{−t} + p e^{−t},
where p is given by (A − λ₁I)p = n, i.e.,
[ −4  5  9 | 0
  −8 10 18 | 0
   2 −3 −6 | 1/3 ].
Row operations on this augmented matrix lead to
[ 2 0 3 | −5/3
  0 1 3 | −2/3
  0 0 0 |  0 ]
and a solution of this is p = (−5/6, −2/3, 0)ᵀ, so that the third solution is
x⁽³⁾ = ½ (3, 6, −2)ᵀ t² e^{−t} + (0, 0, 1/3)ᵀ t e^{−t} + (−5/6, −2/3, 0)ᵀ e^{−t}.
Hence a fundamental matrix is
Ψ(t) = [ 3e^{−t}    3te^{−t}             ((3/2)t² − 5/6)e^{−t}
         6e^{−t}    6te^{−t}             (3t² − 2/3)e^{−t}
        −2e^{−t}   (−2t + 1/3)e^{−t}     (−t² + (1/3)t)e^{−t} ].
Another possibility is that the coefficient matrix A has an eigenvalue of multiplicity three, i.e., λ₁ = λ₂ = λ₃, with two linearly independent associated eigenvectors, k⁽¹⁾ and k⁽²⁾; then finding the solution is a little more complicated. In this case two linearly independent solutions are
x⁽¹⁾ = k⁽¹⁾e^{λ₁t},  x⁽²⁾ = k⁽²⁾e^{λ₁t},
and a third solution is of the form
x⁽³⁾ = k t e^{λ₁t} + n e^{λ₁t},    (2.32)
where n is a solution of Eq. (2.27) and k is a linear combination of k⁽¹⁾ and k⁽²⁾ chosen in such a way that Eq. (2.27) has a solution. We demonstrate this with the following example.
Example 3. Consider the system

    x′ = ( 1   −3    9 ; 0   −5   18 ; 0   −2    7 ) x.

There is a single repeated eigenvalue λ₁ = λ₂ = λ₃ = 1 and two linearly independent
associated eigenvectors

    k^(1) = (1, 0, 0)ᵀ,    k^(2) = (0, 3, 1)ᵀ.
Hence two linearly independent solutions are

    x^(1) = (1, 0, 0)ᵀ e^t,    x^(2) = (0, 3, 1)ᵀ e^t.

Eq. (2.27) is of the form

    ( 0   −3    9 | k₁ )
    ( 0   −6   18 | k₂ )
    ( 0   −2    6 | k₃ ),

where k = (k₁, k₂, k₃)ᵀ = a k^(1) + b k^(2) = (a, 3b, b)ᵀ for some constants a and b. Thus we have

    ( 0   −3    9 | a  )    R2 − 2R1        ( 0   −3   9 | a       )
    ( 0   −6   18 | 3b )   ============>    ( 0    0   0 | 3b − 2a )
    ( 0   −2    6 | b  )    R3 − (1/3)R2    ( 0    0   0 | 0       ).

For consistency we must have 2a − 3b = 0, so we take a = 3, b = 2 and so

    k = 3k^(1) + 2k^(2) = (3, 6, 2)ᵀ.
Thus the first row gives the equation −3n₂ + 9n₃ = 3, and a solution of this is n₁ = 0,
n₂ = −1, n₃ = 0, i.e., n = (0, −1, 0)ᵀ.
Hence the third solution is

    x^(3) = (3, 6, 2)ᵀ t e^t + (0, −1, 0)ᵀ e^t
and a fundamental matrix is

    Φ(t) = ( e^t       0          3t e^t       )
           ( 0        3e^t     (6t − 1) e^t    )
           ( 0         e^t        2t e^t       ).
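A fundamental matrix must satisfy Φ′ = AΦ with det Φ ≠ 0, and this is easy to test numerically with a central difference. A sketch, assuming NumPy; the signs in A follow my reading of Example 3:

```python
import numpy as np

def Phi(t):
    e = np.exp(t)
    return np.array([[e, 0.0, 3 * t * e],
                     [0.0, 3 * e, (6 * t - 1) * e],
                     [0.0, e, 2 * t * e]])

A = np.array([[1.0, -3.0, 9.0],
              [0.0, -5.0, 18.0],
              [0.0, -2.0, 7.0]])

t, h = 0.7, 1e-6
dPhi = (Phi(t + h) - Phi(t - h)) / (2 * h)          # numerical derivative
print(np.allclose(dPhi, A @ Phi(t), atol=1e-5))     # True: each column solves x' = Ax
print(abs(np.linalg.det(Phi(t))) > 1.0)             # True: det = e^(3t) never vanishes
```

The nonzero determinant shows the three columns really are linearly independent solutions.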
Problem Set 2.6
In the following problems find the general solution of the given system.

1. dx/dt = 3x − y,  dy/dt = 9x − 3y

2. dx/dt = −6x + 5y,  dy/dt = −5x + 4y

3. dx/dt = −x + 3y,  dy/dt = −3x + 5y

4. x′ = ( 5  −4  0 ; 1  0  2 ; 0  2  5 ) x

5. x′ = ( 1  0  0 ; 0  3  1 ; 0  −1  1 ) x

6. x′ = ( 1  0  0 ; 2  2  −1 ; 0  1  0 ) x

7. x′ = ( 4  1  0 ; 0  4  1 ; 0  0  4 ) x

8. x′ = ( 0  −4  0 ; 1  4  0 ; 0  0  2 ) x
2.7 Nonhomogeneous systems.
Suppose that we have a nonhomogeneous system of equations of the form

    x′ = Ax + f(t).    (2.33)

In solving this system, the first step is to find the solution of the homogeneous system
x′ = Ax, i.e., to find a fundamental matrix Φ(t). We then seek a vector of functions
u(t) = (u₁(t), ..., uₙ(t))ᵀ so that

    x = Φ(t)u(t)    (2.35)

is a particular solution of the system (2.33).
Differentiating Eq. (2.35) gives

    x′ = Φ′(t)u(t) + Φ(t)u′(t)

and, substituting into Eq. (2.33), we obtain

    Φ′(t)u(t) + Φ(t)u′(t) = AΦ(t)u(t) + f(t).

Since Φ′(t) = AΦ(t), this reduces to Φ(t)u′(t) = f(t),

    i.e., u(t) = ∫ Φ⁻¹(t)f(t) dt.

Hence, a particular solution of the system (2.33) is

    x_p = Φ(t) ∫ Φ⁻¹(t)f(t) dt,    (2.37)

and the general solution of the system is

    x = Φ(t)c + Φ(t) ∫ Φ⁻¹(t)f(t) dt.    (2.38)
If there is an initial condition of the form

    x(t₀) = x₀,    (2.39)

it is useful to rewrite the solution (2.38) in the form

    x = Φ(t)c + Φ(t) ∫ from t₀ to t of Φ⁻¹(s)f(s) ds,    (2.40)

so that the particular solution chosen is the specific one that is zero at t = t₀. In this case
the initial condition (2.39) becomes x₀ = Φ(t₀)c, i.e.,

    c = Φ⁻¹(t₀)x₀,    (2.41)

so that the solution of the initial value problem is

    x = Φ(t)Φ⁻¹(t₀)x₀ + Φ(t) ∫ from t₀ to t of Φ⁻¹(s)f(s) ds.    (2.42)

If we use as fundamental matrix the matrix Ψ(t) which satisfies Ψ(t₀) = I (see Section 2.2),
then this solution can be written as

    x = Ψ(t)x₀ + Ψ(t) ∫ from t₀ to t of Ψ⁻¹(s)f(s) ds.    (2.43)
Example 1. Find the general solution of the system

    x′ = ( 2  −1 ; 3  −2 ) x + ( e^t ; t ).

The matrix A = ( 2  −1 ; 3  −2 ) has two distinct eigenvalues λ₁ = 1, λ₂ = −1 with associated
eigenvectors k^(1) = (1, 1)ᵀ, k^(2) = (1, 3)ᵀ. Hence a fundamental matrix is

    Φ(t) = ( e^t    e^(−t)  ;  e^t    3e^(−t) ),

so that

    Φ⁻¹(t) = (1/2) ( 3e^(−t)   −e^(−t)  ;  −e^t   e^t ).
Now f(t) = (e^t, t)ᵀ and so

    Φ⁻¹(t)f(t) = (1/2) ( 3 − t e^(−t) ; −e^(2t) + t e^t ).

Hence

    ∫ Φ⁻¹(t)f(t) dt = (1/2) ( 3t + (t + 1)e^(−t) ; −(1/2)e^(2t) + (t − 1)e^t ), and

    x_p = Φ(t) ∫ Φ⁻¹(t)f(t) dt = ( (3/2)t e^t − (1/4)e^t + t ; (3/2)t e^t − (3/4)e^t + 2t − 1 ).
The general solution is

    x = c₁ (1 ; 1) e^t + c₂ (1 ; 3) e^(−t) + ( (3/2)t e^t − (1/4)e^t + t ; (3/2)t e^t − (3/4)e^t + 2t − 1 ).
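The particular solution just found can be checked by substituting it back into x′ = Ax + f(t). A numerical sketch using a central difference (assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, -1.0], [3.0, -2.0]])

def xp(t):
    # Particular solution from Example 1.
    return np.array([1.5 * t * np.exp(t) - np.exp(t) / 4 + t,
                     1.5 * t * np.exp(t) - 0.75 * np.exp(t) + 2 * t - 1])

def f(t):
    return np.array([np.exp(t), t])

t, h = 0.3, 1e-6
dxp = (xp(t + h) - xp(t - h)) / (2 * h)             # numerical derivative of xp
print(np.allclose(dxp, A @ xp(t) + f(t), atol=1e-6))  # True: xp solves x' = Ax + f
```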
Example 2. Find the general solution of the system

    x′ = ( 2  −5 ; 1  −2 ) x + ( −cos t ; sin t ).

The matrix A = ( 2  −5 ; 1  −2 ) has complex eigenvalues λ₁ = i, λ₂ = −i with associated
eigenvectors k^(1) = (2 + i, 1)ᵀ, k^(2) = (2 − i, 1)ᵀ. Hence, a complex solution of the
homogeneous system is given by

    (2 + i ; 1) e^(it) = (2 + i ; 1)(cos t + i sin t) = ( 2 cos t − sin t + i(cos t + 2 sin t) ; cos t + i sin t ),

i.e., two linearly independent real solutions are

    x^(1) = ( 2 cos t − sin t ; cos t ),    x^(2) = ( cos t + 2 sin t ; sin t )

and a fundamental matrix is

    Φ(t) = ( 2 cos t − sin t    cos t + 2 sin t  ;  cos t    sin t ).
Then Φ⁻¹(t) is given by

    Φ⁻¹(t) = ( −sin t    cos t + 2 sin t  ;  cos t    −2 cos t + sin t ).

Now f(t) = (−cos t, sin t)ᵀ and so

    Φ⁻¹(t)f(t) = ( 1 − cos 2t + sin 2t ; −cos 2t − sin 2t ).

Hence

    ∫ Φ⁻¹(t)f(t) dt = ( t − (1/2) sin 2t − (1/2) cos 2t ; −(1/2) sin 2t + (1/2) cos 2t ), and

    x_p = Φ(t) ∫ Φ⁻¹(t)f(t) dt = ( 2t cos t − t sin t − (3/2) sin t − (1/2) cos t ; t cos t − (1/2) sin t − (1/2) cos t ).
The general solution is

    x = c₁ ( 2 cos t − sin t ; cos t ) + c₂ ( cos t + 2 sin t ; sin t )
        + ( 2t cos t − t sin t − (3/2) sin t − (1/2) cos t ; t cos t − (1/2) sin t − (1/2) cos t ).
Problem Set 2.7
Find the general solution of the given system.

1. x′ = ( 3  −3 ; 2  −2 ) x + ( 4 ; −1 )

2. x′ = ( 1  √3 ; √3  −1 ) x + ( e^t ; √3 e^(−t) )

3. x′ = ( 1  −1 ; 1  1 ) x + ( cos t ; sin t ) e^t

4. x′ = ( 1  1 ; 4  −2 ) x + ( e^(−2t) ; −2e^t )

5. x′ = ( 4  −2 ; 8  −4 ) x + ( t^(−3) ; −t^(−2) )

6. x′ = ( −4  2 ; 2  −1 ) x + ( t^(−1) ; 2t^(−1) + 4 )

7. x′ = ( 1  1 ; 4  1 ) x + ( 2 ; −1 ) e^t

8. x′ = ( 2  −1 ; 3  −2 ) x + ( 1 ; −1 ) e^t

9. x′ = ( −5/4  3/4 ; 3/4  −5/4 ) x + ( 2t ; e^t )

10. x′ = ( 2  −5 ; 1  −2 ) x + ( 0 ; cos t )
2.8 Laplace transform method for systems.
If initial conditions are specified, Laplace transform methods can be used to reduce a
system of differential equations to a set of simultaneous algebraic equations and thus to find
the solution. The system need not be of the first order. We demonstrate the method with
two examples.

Example 1. Use the Laplace transform to solve the system of differential equations

    dx/dt = −x + y,    dy/dt = 2x,    x(0) = 0, y(0) = 1.

Let X(s) = L{x(t)} and Y(s) = L{y(t)}. Transforming each equation of the system,
we obtain

    sX(s) − x(0) = −X(s) + Y(s),
    sY(s) − y(0) = 2X(s),

i.e.,  (s + 1)X(s) − Y(s) = 0,
       −2X(s) + sY(s) = 1.

Multiplying the first equation by s and adding the second equation we obtain

    (s² + s − 2)X(s) = 1,

i.e., X(s) = 1/((s + 2)(s − 1)) = (1/3)/(s − 1) − (1/3)/(s + 2).

Hence  x(t) = (1/3)e^t − (1/3)e^(−2t).
Also

    Y(s) = (s + 1)X(s) = (s + 1)/((s + 2)(s − 1)) = (2/3)/(s − 1) + (1/3)/(s + 2).

Hence  y(t) = (2/3)e^t + (1/3)e^(−2t).

Hence, the solution of the given system is

    x(t) = (1/3)(e^t − e^(−2t)),    y(t) = (1/3)(2e^t + e^(−2t)).
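The solution of Example 1 is easy to verify by substituting it back into the system and the initial conditions. A short numerical sketch (assuming NumPy):

```python
import numpy as np

t = np.linspace(0.0, 2.0, 9)            # a few sample points
x = (np.exp(t) - np.exp(-2 * t)) / 3
y = (2 * np.exp(t) + np.exp(-2 * t)) / 3
dx = (np.exp(t) + 2 * np.exp(-2 * t)) / 3    # exact derivative of x
dy = (2 * np.exp(t) - 2 * np.exp(-2 * t)) / 3  # exact derivative of y

print(np.allclose(dx, -x + y))    # True: dx/dt = -x + y
print(np.allclose(dy, 2 * x))     # True: dy/dt = 2x
print(x[0] == 0.0, y[0] == 1.0)   # True True: initial conditions hold
```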
Example 2. Use the Laplace transform to solve the system of differential equations

    d²x/dt² + d²y/dt² = t²,    d²x/dt² − d²y/dt² = 4t,

    x(0) = 8,  x′(0) = 0,  y(0) = 0,  y′(0) = 0.

Taking the Laplace transform of each equation:

    s²X(s) − s x(0) − x′(0) + s²Y(s) − s y(0) − y′(0) = 2/s³,
    s²X(s) − s x(0) − x′(0) − s²Y(s) + s y(0) + y′(0) = 4/s²,

i.e.,  X(s) + Y(s) = 8/s + 2/s⁵,
       X(s) − Y(s) = 8/s + 4/s⁴,

so that  X(s) = 8/s + 2/s⁴ + 1/s⁵,    Y(s) = 1/s⁵ − 2/s⁴.

Hence

    x(t) = 8 + (1/3)t³ + (1/24)t⁴,    y(t) = (1/24)t⁴ − (1/3)t³.
Problem Set 2.8
Use the Laplace transform to solve the given systems of differential equations.

1. dx/dt = x − 2y,  dy/dt = 5x − y,  x(0) = 0, y(0) = 1.

2. 2 dx/dt + dy/dt − 2x = 1,  dx/dt + dy/dt − 3x − 3y = 2,  x(0) = 0, y(0) = 0.

3. d²x/dt² + x − y = 0,  d²y/dt² + y − x = 0,  x(0) = 0, x′(0) = −2, y(0) = 0, y′(0) = 1.

4. d²x/dt² + 3 dy/dt + 3y = 0,  d²x/dt² + 3y = t e^(−t),  x(0) = 0, x′(0) = 2, y(0) = 0.
Chapter 3
FOURIER SERIES
3.1 Orthogonal sets of functions.
Suppose we have two vectors u, v in an n-dimensional vector space V (which can be
the usual 3-space). The inner product (scalar product) is written as (u, v) or u · v and has
the following properties:
(i) (u, v) = (v, u)
(ii) (ku, v) = k(u, v) for any scalar k
(iii) (u, u) = 0 if u = 0 and (u, u) > 0 if u ≠ 0
(iv) (u + v, w) = (u, w) + (v, w)
The inner product of a vector with itself is

    (u, u) = u₁² + u₂² + ... + uₙ²

and the nonnegative square root of (u, u) is called the norm of u and is denoted by ‖u‖,
i.e.,

    ‖u‖ = √(u, u),    (3.1)

so that (u, u) = ‖u‖² is the squared norm of u.
Two vectors are said to be orthogonal if their inner product is zero, i.e., if (u, v) = 0. In
n-dimensional space we can find n mutually orthogonal vectors uᵢ (i = 1, ..., n) and these
are said to form an orthogonal set. If each vector is divided by its norm, i.e., if we form the
vectors

    eᵢ = uᵢ / ‖uᵢ‖,    (3.2)

then the unit vectors eᵢ satisfy

    (eᵢ, eⱼ) = δᵢⱼ  (i, j = 1, ..., n),    (3.3)

where δᵢⱼ is the Kronecker delta defined by

    δᵢⱼ = { 0 if i ≠ j ;  1 if i = j }.    (3.4)

The set of vectors eᵢ form an orthonormal set which is denoted by {eᵢ}. For example, the
basis vectors i, j, k of 3-space form an orthonormal set.
Every vector v in the n-dimensional space can be expressed as a linear combination of
the orthonormal vectors eᵢ, i.e.,

    v = c₁e₁ + c₂e₂ + ... + cₙeₙ,    (3.5)

where the coefficients cᵢ are given by

    (v, eᵢ) = cᵢ(eᵢ, eᵢ) = cᵢ  (i = 1, ..., n),    (3.6)

so that cᵢ is the projection of v on eᵢ.
Now we extend these ideas of inner product and orthogonality to functions. Suppose
that f_m(x) and f_n(x) are two real-valued functions defined on an interval [a, b] and which
are such that the integral

    (f_m, f_n) = ∫ from a to b of f_m(x) f_n(x) dx    (3.7)

exists. We make the following definitions in analogy with the concepts of vector theory.

Definition 1. The inner product of two functions f_m(x) and f_n(x) is the number (f_m, f_n)
defined by Eq. (3.7).

Definition 2. Two functions f_m(x) and f_n(x) are said to be orthogonal on an interval
[a, b] if (f_m, f_n) = 0.

Definition 3. The norm of a function f_m(x) is

    ‖f_m(x)‖ = √(f_m, f_m) = √( ∫ from a to b of [f_m(x)]² dx ).    (3.8)

Note that ‖f_m(x)‖ ≥ 0.
Our primary interest is in infinite sets of orthogonal functions. A set of real-valued
functions f₁(x), f₂(x), f₃(x), ... is called an orthogonal set of functions on an interval [a, b]
if the functions are defined on [a, b] and if the condition

    (f_m, f_n) = ∫ from a to b of f_m(x) f_n(x) dx = 0  (m ≠ n)    (3.9)

holds for all pairs of distinct functions in the set.
Assuming that none of the functions f_m(x) has zero norm, we can form a new set of
functions {φ_m(x)} defined by

    φ_m(x) = f_m(x) / ‖f_m(x)‖.    (3.10)

The set {φ_m(x)} is called an orthonormal set and satisfies

    ∫ from a to b of φ_m(x) φ_n(x) dx = δ_mn.    (3.11)
Example 1. Consider the functions f_m(x) = sin(mπx/ℓ), i.e.,

    f₁(x), f₂(x), f₃(x), ... = sin(πx/ℓ), sin(2πx/ℓ), sin(3πx/ℓ), ...

These functions form an orthogonal set on [−ℓ, ℓ] since, for all m and n,

    (f_m, f_n) = ∫ from −ℓ to ℓ of sin(mπx/ℓ) sin(nπx/ℓ) dx
               = (1/2) ∫ from −ℓ to ℓ of [cos((m − n)πx/ℓ) − cos((m + n)πx/ℓ)] dx
               = 0 if m ≠ n.

The norm of f_m(x) is given by ‖f_m‖² = (1/2) ∫ from −ℓ to ℓ of [1 − cos(2mπx/ℓ)] dx = ℓ,
so that ‖f_m‖ = √ℓ and the corresponding orthonormal set is { (1/√ℓ) sin(mπx/ℓ) }.
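The orthogonality relation and the value of the norm can be illustrated numerically. A sketch using a simple trapezoidal sum (the choice ℓ = 2 is arbitrary; assuming NumPy):

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

l = 2.0
x = np.linspace(-l, l, 20001)
inner = trap(np.sin(np.pi * x / l) * np.sin(2 * np.pi * x / l), x)  # (f_1, f_2)
norm_sq = trap(np.sin(3 * np.pi * x / l) ** 2, x)                   # ||f_3||^2

print(abs(inner) < 1e-8)        # True: distinct modes are orthogonal
print(abs(norm_sq - l) < 1e-6)  # True: the squared norm equals l
```

The trapezoidal rule is spectrally accurate here because the integrands are periodic over the full interval, so the thresholds are comfortably met.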
This concept of orthogonal functions may be generalized in two ways. First, we say that
a set of functions {f_m(x)} is orthogonal on [a, b] with respect to a weight function w(x),
where w(x) ≥ 0, if

    ∫ from a to b of w(x) f_m(x) f_n(x) dx = 0  (m ≠ n).    (3.12)

The norm ‖f_m(x)‖ is given by

    ‖f_m(x)‖² = ∫ from a to b of w(x)[f_m(x)]² dx    (3.13)

and the set {φ_m(x)}, where

    φ_m(x) = f_m(x) / ‖f_m(x)‖,    (3.14)

forms an orthonormal set.
Note that this type of orthogonality reduces to the ordinary type by using the product
functions √(w(x)) f_m(x).
Another type of orthogonality concerns complex-valued functions. A set {t_m} of complex-valued
functions of a real variable x is orthogonal in the hermitian sense on an interval [a, b]
if

    ∫ from a to b of t_m(x) conj(t_n(x)) dx = 0,  m ≠ n.    (3.15)

The integral is the hermitian inner product (t_m, t_n) corresponding to the definition (3.7).
The norm of t_m is real and nonnegative and is given by

    ‖t_m‖² = ∫ from a to b of t_m conj(t_m) dx = ∫ from a to b of ([u_m]² + [v_m]²) dx,

where t_m(x) = u_m(x) + i v_m(x), u_m(x) and v_m(x) being real-valued functions of x.

Example 2. Consider the functions

    t_n(x) = e^(inx) = cos nx + i sin nx  (n = 0, ±1, ±2, ...).

These functions form a set with hermitian orthogonality on the interval [−π, π] since

    (t_m, t_n) = ∫ from −π to π of [cos(m − n)x + i sin(m − n)x] dx = 0  (m ≠ n),

    (t_m, t_m) = ‖t_m‖² = ∫ from −π to π of e^(imx) e^(−imx) dx = ∫ from −π to π of 1 dx = 2π,

i.e., ‖t_m‖ = √(2π).
3.2 Expansion of functions in series of orthogonal functions.
Given a set {φ_m(x)} of functions which are orthogonal on an interval [a, b], and which
may be orthonormal but not necessarily so, we can, under certain conditions, expand a
given function f(x) in terms of a series of the functions φ_m(x), i.e.,

    f(x) = c₀φ₀(x) + c₁φ₁(x) + c₂φ₂(x) + ...,

i.e.,  f(x) = Σ from n = 0 to ∞ of cₙφₙ(x).    (3.16)

Assuming that this expansion exists we need to determine the coefficients cₙ. To do this we
multiply each side of Eq. (3.16) by φ_m(x), where φ_m(x) is the m-th element of the orthogonal
set, to obtain

    f(x)φ_m(x) = Σ from n = 0 to ∞ of cₙ φ_m(x)φₙ(x).

Integrating both sides of this equation over [a, b] and assuming that the integral of the
infinite sum is equivalent to the sum of the integrals, we obtain

    ∫ from a to b of f(x)φ_m(x) dx = Σ from n = 0 to ∞ of cₙ ∫ from a to b of φ_m(x)φₙ(x) dx.    (3.17)

Since {φ_m(x)} is an orthogonal set, all the integrals on the right side of (3.17) are zero
except the one for which m = n. Hence, Eq. (3.17) reduces to

    cₙ ∫ from a to b of [φₙ(x)]² dx = ∫ from a to b of f(x)φₙ(x) dx,

i.e.,  cₙ = ( ∫ from a to b of f(x)φₙ(x) dx ) / ‖φₙ(x)‖²,    (3.18)

which determines each constant cₙ.
The above analysis can be extended to the case when the set {φ_m(x)} is orthogonal with
respect to the weight function w(x). In this case we multiply Eq. (3.16) by w(x)φ_m(x) and
eventually obtain

    cₙ = ( ∫ from a to b of w(x)f(x)φₙ(x) dx ) / ‖φₙ(x)‖²,    (3.19)

where now

    ‖φₙ(x)‖² = ∫ from a to b of w(x)[φₙ(x)]² dx.

The series (3.16), with coefficients given by Eqs. (3.18) or (3.19), is called a generalized
Fourier series.
Note that, although we have found a formal series of the form (3.16), we have not shown
that this series actually does represent the function f(x) in (a, b) or even that it converges
in (a, b). One necessary condition for the expansion to converge to f(x) is that the set
{φ_m(x)} must be complete, i.e., there must be no function with positive norm which is
orthogonal to each of the functions φ_m(x).
In general, the functions with which we shall be concerned are sectionally continuous or
piecewise continuous. A function is said to be piecewise continuous on [a, b] if it is defined
on [a, b] and if it has only a finite number of finite discontinuities in that interval.
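Formula (3.18) can be tried out directly. A sketch expanding f(x) = x on [−π, π] in the orthogonal set φₙ(x) = sin nx, for which (3.18) gives cₙ = 2(−1)^(n+1)/n (assuming NumPy):

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

x = np.linspace(-np.pi, np.pi, 40001)
f = x
errs = []
for n in range(1, 4):
    phi = np.sin(n * x)
    c_n = trap(f * phi, x) / trap(phi * phi, x)   # Eq. (3.18)
    expected = 2 * (-1) ** (n + 1) / n
    errs.append(abs(c_n - expected))
print(max(errs) < 1e-6)  # True: numerics agree with the closed form
```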
Problem Set 3.2
1. Define (f, g) by (f, g) = ∫ from 0 to 1 of f(x)g(x) dx.
(i) Are f(x) = x and g(x) = x² orthogonal?
(ii) Find α, β, γ such that f(x) = 1, g(x) = x + α, h(x) = x² + βx + γ are orthogonal.
(iii) Form the orthonormal set corresponding to the above set of three vectors.
(iv) If the vectors of part (iii) are labelled φ₁, φ₂, φ₃, express F(x) = x² − x + 1 as a
combination F(x) = c₁φ₁(x) + c₂φ₂(x) + c₃φ₃(x).
2. Show that f₁(x) = e^x and f₂(x) = (x − 1)e^(−x) are orthogonal on the interval [0, 2].
3. Show that {f_m(x)} = {sin x, sin 3x, sin 5x, ...} is orthogonal on the interval [0, π/2] and
find the norm of each function f_m(x).
4. Show that

    {f_m(x)} = {1, cos(nπx/ℓ), sin(nπx/ℓ)},  n = 1, 2, 3, ...    (3.20)
is a complete orthogonal set on the interval [−ℓ, ℓ]. This follows from the integrals

    ∫ from −ℓ to ℓ of cos(nπx/ℓ) dx = 0,  ∫ from −ℓ to ℓ of sin(nπx/ℓ) dx = 0,    (3.21)

    ∫ from −ℓ to ℓ of sin(mπx/ℓ) sin(nπx/ℓ) dx = 0  (m ≠ n),    (3.22)

    ∫ from −ℓ to ℓ of cos(mπx/ℓ) cos(nπx/ℓ) dx = 0  (m ≠ n),    (3.23)

    ∫ from −ℓ to ℓ of sin(mπx/ℓ) cos(nπx/ℓ) dx = 0,    (3.24)

    ∫ from −ℓ to ℓ of cos²(mπx/ℓ) dx = ∫ from −ℓ to ℓ of sin²(mπx/ℓ) dx = ℓ.    (3.25)
3.3 Fourier series.
Hence, from Section 3.2 we can, under certain circumstances, expand a given function f(x)
in terms of the functions of the set (3.20), i.e.,

    f(x) = a₀/2 + Σ from n = 1 to ∞ of [ aₙ cos(nπx/ℓ) + bₙ sin(nπx/ℓ) ],    (3.26)

where the coefficients a₀, aₙ, bₙ are real and independent of x. Such a series may, or may
not, be convergent. If it does converge to the sum f(x), then for every integer k

    f(x + 2kℓ) = f(x)    (3.27)

and f(x) is a periodic function of period 2ℓ, so that we need only study the series in the
interval (−ℓ, ℓ), or some other interval of length 2ℓ, such as (0, 2ℓ). A series of the form
(3.26) is called a trigonometric series.
Suppose that f(x) is a periodic function of period 2ℓ which can be represented by the
trigonometric series (3.26). We need to find the coefficients a₀, aₙ, bₙ; this can be done by
using the results (3.21)–(3.25). First integrate Eq. (3.26) from −ℓ to ℓ:

    ∫ from −ℓ to ℓ of f(x) dx = ∫ from −ℓ to ℓ of (a₀/2) dx
        + Σ from n = 1 to ∞ of ∫ from −ℓ to ℓ of [ aₙ cos(nπx/ℓ) + bₙ sin(nπx/ℓ) ] dx
    = a₀ℓ + 0 + 0,

using Eq. (3.21). Hence

    a₀ = (1/ℓ) ∫ from −ℓ to ℓ of f(x) dx.    (3.28)

Next, multiply Eq. (3.26) by cos(mπx/ℓ) and integrate from −ℓ to ℓ:

    ∫ from −ℓ to ℓ of f(x) cos(mπx/ℓ) dx = ∫ from −ℓ to ℓ of (a₀/2) cos(mπx/ℓ) dx
        + Σ from n = 1 to ∞ of [ aₙ ∫ from −ℓ to ℓ of cos(nπx/ℓ) cos(mπx/ℓ) dx
        + bₙ ∫ from −ℓ to ℓ of sin(nπx/ℓ) cos(mπx/ℓ) dx ]
    = 0 + a_m ℓ + 0,

i.e.,  a_m = (1/ℓ) ∫ from −ℓ to ℓ of f(x) cos(mπx/ℓ) dx.    (3.29)

Similarly, multiplying by sin(mπx/ℓ) and integrating we find

    b_m = (1/ℓ) ∫ from −ℓ to ℓ of f(x) sin(mπx/ℓ) dx.    (3.30)
The expressions (3.28), (3.29), (3.30) are called the Euler formulae and the coefficients
a₀, a_m, b_m are called the Fourier coefficients of f(x). The series (3.26) is called the
Fourier series corresponding to f(x).
Even if the right-hand side of Eq. (3.26) does not converge to f(x) for all x in (−ℓ, ℓ) we
can still calculate the Fourier coefficients of f(x) from Eqs. (3.28)–(3.30) and then write

    f(x) ~ a₀/2 + Σ from n = 1 to ∞ of [ aₙ cos(nπx/ℓ) + bₙ sin(nπx/ℓ) ],

where the right-hand side is the Fourier series of f(x). The symbol ~ means that f(x) is
not necessarily equal to the right-hand side, which may be divergent or converge to some
function other than f(x).
In order to know whether the Fourier series does, in fact, represent the function we need
the following theorem:

Fourier's Theorem: If a periodic function f(x), with period 2ℓ, is piecewise continuous
on −ℓ < x < ℓ and has left-hand and right-hand derivatives at each point of (−ℓ, ℓ), then
the corresponding Fourier series (3.26), with coefficients (3.28)–(3.30), is convergent to
f(x) at a point of continuity. At a point of discontinuity, the Fourier series converges to
the sum

    (1/2)[f(x+) + f(x−)],

where f(x+) is the value of f(x) when x is approached from the right, and f(x−) is the
value of f(x) when x is approached from the left.
Summary. The Fourier series of a function f(x) defined on the interval (−ℓ, ℓ) is given by

[Figure: a function f(x) with a jump; at the jump the series converges to (1/2)[f(x+) + f(x−)].]

    f(x) = a₀/2 + Σ from n = 1 to ∞ of [ aₙ cos(nπx/ℓ) + bₙ sin(nπx/ℓ) ],    (3.26)

where  a₀ = (1/ℓ) ∫ from −ℓ to ℓ of f(x) dx,    (3.28)

    aₙ = (1/ℓ) ∫ from −ℓ to ℓ of f(x) cos(nπx/ℓ) dx,    (3.29)

    bₙ = (1/ℓ) ∫ from −ℓ to ℓ of f(x) sin(nπx/ℓ) dx.    (3.30)
Note that when ℓ = π, i.e., when the interval is of length 2π, the expressions (3.26),
(3.28)–(3.30) take the slightly simpler form

    f(x) = a₀/2 + Σ from n = 1 to ∞ of (aₙ cos nx + bₙ sin nx),    (3.31)

    a₀ = (1/π) ∫ from −π to π of f(x) dx,  aₙ = (1/π) ∫ from −π to π of f(x) cos nx dx,
    bₙ = (1/π) ∫ from −π to π of f(x) sin nx dx.    (3.32)

Example 1. Find the Fourier series of the function f(x) = x on the interval (−ℓ, ℓ).
Here

    a₀ = (1/ℓ) ∫ from −ℓ to ℓ of x dx = (1/ℓ)[x²/2] from −ℓ to ℓ = 0,

    aₙ = (1/ℓ) ∫ from −ℓ to ℓ of x cos(nπx/ℓ) dx
       = (1/ℓ){ [x (ℓ/nπ) sin(nπx/ℓ)] from −ℓ to ℓ − (ℓ/nπ) ∫ from −ℓ to ℓ of sin(nπx/ℓ) dx }
       = 0 + (ℓ/n²π²)[cos(nπx/ℓ)] from −ℓ to ℓ = 0,

    bₙ = (1/ℓ) ∫ from −ℓ to ℓ of x sin(nπx/ℓ) dx
       = (1/ℓ){ [−x (ℓ/nπ) cos(nπx/ℓ)] from −ℓ to ℓ + (ℓ/nπ) ∫ from −ℓ to ℓ of cos(nπx/ℓ) dx }
       = (1/ℓ){ −(2ℓ²/nπ) cos nπ + (ℓ²/n²π²)[sin(nπx/ℓ)] from −ℓ to ℓ }
       = −(2ℓ/nπ)(−1)ⁿ = (2ℓ/nπ)(−1)ⁿ⁺¹.

Hence the required series is

    f(x) = Σ from n = 1 to ∞ of (−1)ⁿ⁺¹ (2ℓ/nπ) sin(nπx/ℓ)
         = (2ℓ/π)[ sin(πx/ℓ) − (1/2) sin(2πx/ℓ) + (1/3) sin(3πx/ℓ) − ... ]

and this does converge to the function in (−ℓ, ℓ).
Example 2. Consider the step function of period 2π given by

    f(x) = { 0,  −π < x < −π/2 ;  1,  −π/2 < x < π/2 ;  0,  π/2 < x < π }.

This function has only a finite number of finite discontinuities and satisfies the conditions
of the theorem.

[Figure: the unit pulse of height 1 on (−π/2, π/2).]

The period is 2π, i.e., ℓ = π, so we use Eqs. (3.31) and (3.32):

    a₀ = (1/π) ∫ from −π to π of f(x) dx = (1/π) ∫ from −π/2 to π/2 of 1 dx = 1,
    aₙ = (1/π) ∫ from −π/2 to π/2 of cos nx dx = (1/π)[(1/n) sin nx] from −π/2 to π/2
       = (2/nπ) sin(nπ/2).

Now sin(nπ/2) = 0 if n is even. If n is odd, i.e., n = 2m + 1, then

    sin(nπ/2) = sin(mπ + π/2) = cos mπ = (−1)ᵐ.

Hence  aₙ = 0 (n even),  a₍₂ₘ₊₁₎ = (−1)ᵐ 2/((2m + 1)π)  (m = 0, 1, 2, ...),

    bₙ = (1/π) ∫ from −π/2 to π/2 of sin nx dx = (1/π)[−(1/n) cos nx] from −π/2 to π/2 = 0.

Hence the series is

    f(x) = 1/2 + (2/π) cos x − (2/3π) cos 3x + (2/5π) cos 5x − (2/7π) cos 7x + ...,

i.e.,  f(x) = 1/2 + (2/π) Σ from m = 0 to ∞ of ((−1)ᵐ/(2m + 1)) cos(2m + 1)x.

Note that when x = −π/2 and when x = π/2, cos(2m + 1)x = 0, so at these points f(x) = 1/2,
which is the average of the two values either side of those points.
The following diagrams illustrate how the Fourier series converges to the given function.
Diagrams 1, 3, 5 and 7 show each of the individual terms of the partial sums on the same
graph. Diagrams 2, 4, 6 and 8 show the terms summed together.

[Diagrams 1–2: 1/2 + (2/π) cos x.
Diagrams 3–4: 1/2 + (2/π) cos x − (2/3π) cos 3x.
Diagrams 5–6: 1/2 + (2/π) cos x − (2/3π) cos 3x + (2/5π) cos 5x.
Diagrams 7–8: 1/2 + (2/π) cos x − (2/3π) cos 3x + (2/5π) cos 5x − (2/7π) cos 7x.]
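The convergence shown in the diagrams can be reproduced numerically. A sketch of the partial sums of the series just derived, evaluated inside the pulse, at the jump x = π/2, and outside (assuming NumPy):

```python
import numpy as np

def partial_sum(x, M):
    # S_M(x) = 1/2 + (2/pi) * sum_{m=0}^{M} (-1)^m cos((2m+1)x) / (2m+1)
    s = 0.5 * np.ones_like(x)
    for m in range(M + 1):
        s += (2 / np.pi) * (-1) ** m * np.cos((2 * m + 1) * x) / (2 * m + 1)
    return s

pts = np.array([0.0, np.pi / 2, 2.0])
s = partial_sum(pts, 5000)
print(abs(s[0] - 1.0) < 1e-3)  # True: converges to f = 1 inside the pulse
print(abs(s[1] - 0.5) < 1e-9)  # True: the average value at the jump
print(abs(s[2]) < 1e-3)        # True: converges to f = 0 outside
```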
Example 3. Find the Fourier series of the function f(x) of period 2 defined by

    f(x) = { −1,  −1 < x < 0 ;  2x,  0 ≤ x < 1 }.

The period is 2 so that ℓ = 1 and hence

    a₀ = ∫ from −1 to 1 of f(x) dx = ∫ from −1 to 0 of (−1) dx + ∫ from 0 to 1 of 2x dx
       = [−x] from −1 to 0 + [x²] from 0 to 1 = −1 + 1 = 0,
    aₙ = ∫ from −1 to 1 of f(x) cos nπx dx
       = ∫ from −1 to 0 of (−1) cos nπx dx + ∫ from 0 to 1 of 2x cos nπx dx
       = −(1/nπ)[sin nπx] from −1 to 0 + [2x (1/nπ) sin nπx] from 0 to 1
         − (2/nπ) ∫ from 0 to 1 of sin nπx dx
       = 0 + 0 + (2/n²π²)(cos nπ − 1)
       = (2/n²π²)[(−1)ⁿ − 1]
       = { 0, n even ;  −4/(n²π²), n odd, i.e., −4/((2m + 1)²π²)  (m = 0, 1, 2, ...) },

    bₙ = ∫ from −1 to 1 of f(x) sin nπx dx
       = ∫ from −1 to 0 of (−1) sin nπx dx + ∫ from 0 to 1 of 2x sin nπx dx
       = (1/nπ)[cos nπx] from −1 to 0 − [2x (1/nπ) cos nπx] from 0 to 1
         + (2/nπ) ∫ from 0 to 1 of cos nπx dx
       = (1/nπ)(1 − cos nπ) − (2/nπ) cos nπ + (2/n²π²)[sin nπx] from 0 to 1
       = (1/nπ)(1 − 3 cos nπ) + 0
       = { −2/(nπ) (n even) = −1/(mπ)  (m = 1, 2, ...) ;
           4/(nπ) (n odd) = 4/((2m + 1)π)  (m = 0, 1, 2, ...) }.

Hence

    f(x) = −(4/π²)[ cos πx + (1/9) cos 3πx + ... ]
           + [ (4/π) sin πx − (1/π) sin 2πx + (4/3π) sin 3πx − ... ],

i.e.,  f(x) = −(4/π²) Σ from m = 0 to ∞ of (1/(2m + 1)²) cos(2m + 1)πx
             + (4/π) Σ from m = 0 to ∞ of (1/(2m + 1)) sin(2m + 1)πx
             − (1/π) Σ from m = 1 to ∞ of (1/m) sin 2mπx.
Problem Set 3.3
In the following problems find the Fourier series of f(x) on the given interval.

1. f(x) = { 0, −π < x < 0 ; 1, 0 ≤ x < π }
2. f(x) = { 1, −π < x < 0 ; 2, 0 ≤ x < π }
3. f(x) = { 1, −1 < x < 0 ; x, 0 ≤ x < 1 }
4. f(x) = { 0, −1 < x < 0 ; x, 0 ≤ x < 1 }
5. f(x) = { 0, −π < x < 0 ; x², 0 ≤ x < π }
6. f(x) = { π², −π < x < 0 ; π² − x², 0 ≤ x < π }
7. f(x) = x + π, −π < x < π
8. f(x) = 3 − 2x, −π < x < π
9. f(x) = { 0, −π < x < 0 ; sin x, 0 ≤ x < π }
10. f(x) = { 0, −π/2 < x < 0 ; cos x, 0 ≤ x < π/2 }
11. f(x) = { 0, −2 < x < 0 ; x, 0 ≤ x < 1 ; 1, 1 ≤ x < 2 }
12. f(x) = { 2 + x, −2 < x < 0 ; 2, 0 ≤ x < 2 }
13. f(x) = e^x, −π < x < π
14. f(x) = { 0, −π < x < 0 ; e^x − 1, 0 ≤ x < π }
3.4 Cosine and sine series.
A function f(x) defined in the interval (−ℓ, ℓ) is said to be an even function of x if, for
every value of x in the interval,

    f(−x) = f(x).    (3.33)

On the other hand, f(x) is said to be an odd function of x if, for every value of x in the
interval,

    f(−x) = −f(x).    (3.34)

For Eq. (3.34) to be consistent we must have f(0) = 0.
For example, f(x) = xⁿ is an even function if n is an even integer, including zero, and
is an odd function if n is odd. Also cos x is an even function while sin x is an odd function.
The graph of an even function is symmetric with respect to the y-axis, and the graph of an
odd function is symmetric with respect to the origin.
Some properties of even and odd functions are:
(i) The product of two even functions is even.
(ii) The product of two odd functions is even.
(iii) The product of an even function and an odd function is odd.
(iv) The sum (difference) of two even functions is even.
(v) The sum (difference) of two odd functions is odd.
(vi) If f is even, then ∫ from −a to a of f(x) dx = 2 ∫ from 0 to a of f(x) dx.
(vii) If f is odd, then ∫ from −a to a of f(x) dx = 0.
If f(x) is an even function on (−ℓ, ℓ), then from properties (i), (iii) and (vi), the Fourier
coefficients (3.28)–(3.30) become

    a₀ = (1/ℓ) ∫ from −ℓ to ℓ of f(x) dx = (2/ℓ) ∫ from 0 to ℓ of f(x) dx,

    aₙ = (1/ℓ) ∫ from −ℓ to ℓ of f(x) cos(nπx/ℓ) dx = (2/ℓ) ∫ from 0 to ℓ of f(x) cos(nπx/ℓ) dx,

    bₙ = (1/ℓ) ∫ from −ℓ to ℓ of f(x) sin(nπx/ℓ) dx = 0.
Hence, we have that the Fourier series of an even function f(x) on (−ℓ, ℓ) is the cosine series

    f(x) = a₀/2 + Σ from n = 1 to ∞ of aₙ cos(nπx/ℓ),    (3.35)

where  a₀ = (2/ℓ) ∫ from 0 to ℓ of f(x) dx,  aₙ = (2/ℓ) ∫ from 0 to ℓ of f(x) cos(nπx/ℓ) dx.    (3.36)

The Fourier series of an odd function on (−ℓ, ℓ) is the sine series

    f(x) = Σ from n = 1 to ∞ of bₙ sin(nπx/ℓ),    (3.37)

where  bₙ = (2/ℓ) ∫ from 0 to ℓ of f(x) sin(nπx/ℓ) dx.    (3.38)
Example 1. Find the Fourier series of the function f(x) = (1/4)x² on the interval (−π, π).
The function is an even function, so the Fourier series is the cosine series given by
Eqs. (3.35), (3.36). In this case ℓ = π, so

    f(x) = a₀/2 + Σ from n = 1 to ∞ of aₙ cos nx,

and

    a₀ = (2/π) ∫ from 0 to π of f(x) dx = (1/2π) ∫ from 0 to π of x² dx
       = (1/2π)[x³/3] from 0 to π = π²/6,

    aₙ = (2/π) ∫ from 0 to π of f(x) cos nx dx = (1/2π) ∫ from 0 to π of x² cos nx dx
       = (1/2π){ [x² (1/n) sin nx] from 0 to π − (2/n) ∫ from 0 to π of x sin nx dx }
       = (1/2π){ 0 − (2/n)( [−x (1/n) cos nx] from 0 to π + (1/n) ∫ from 0 to π of cos nx dx ) }
       = (1/2π){ (2/n)(π/n) cos nπ − (2/n)(1/n²)[sin nx] from 0 to π }
       = (1/n²) cos nπ = (−1)ⁿ/n².

Hence  a₁ = −1,  a₂ = 1/4,  a₃ = −1/9,  a₄ = 1/16,  etc.

    f(x) = π²/12 + Σ from n = 1 to ∞ of ((−1)ⁿ/n²) cos nx
         = π²/12 − cos x + (1/4) cos 2x − (1/9) cos 3x + (1/16) cos 4x − ....
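Setting x = π in this series (where the periodic extension is continuous, since f(−π) = f(π)) gives π²/4 = π²/12 + Σ 1/n², i.e. the classical value Σ from n = 1 to ∞ of 1/n² = π²/6. A quick numerical sanity check:

```python
import math

# Partial sum of sum 1/n^2; the tail beyond N is about 1/N.
s = sum(1.0 / n ** 2 for n in range(1, 200001))
print(abs(s - math.pi ** 2 / 6) < 1e-4)  # True
```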
Example 2. Find the Fourier series of the function

    f(x) = { −x²,  −π < x < 0 ;  x²,  0 ≤ x < π }

on the interval (−π, π).
This is an odd function, so the Fourier series is the sine series given by Eqs. (3.37),
(3.38).

    bₙ = (2/π) ∫ from 0 to π of x² sin nx dx
       = (2/π){ [−x² (1/n) cos nx] from 0 to π + (2/n) ∫ from 0 to π of x cos nx dx }
       = (2/π){ −(π²/n) cos nπ + (2/n)( [x (1/n) sin nx] from 0 to π − (1/n) ∫ from 0 to π of sin nx dx ) }
       = (2/π){ −(π²/n)(−1)ⁿ + 0 + (2/n²)(1/n)[cos nx] from 0 to π }
       = (2/π){ −(π²/n)(−1)ⁿ + (2/n³)(−1)ⁿ − 2/n³ }.

Hence  b₁ = (2/π)(π² − 2 − 2) = (2/π)(π² − 4);  b₂ = (2/π)(−π²/2 + 2/8 − 2/8) = −π;

    b₃ = (2/π)(π²/3 − 2/27 − 2/27) = (2/27π)(9π² − 4);  b₄ = (2/π)(−π²/4 + 2/64 − 2/64) = −π/2;  etc.

    f(x) = (2/π)(π² − 4) sin x − π sin 2x + (2/27π)(9π² − 4) sin 3x − (π/2) sin 4x + ....
3.5 Half-range expansions.
In many physical and engineering problems we need to find a Fourier series expansion
for a function f(x) which is defined on some finite interval such as (0, ℓ), and very often we
need consider only a cosine series or only a sine series. For a series of cosines only we extend
f(x) as an even function into the interval −ℓ < x < 0, and elsewhere periodically. For a
series of sines only we extend f(x) as an odd function into the interval −ℓ < x < 0, and
elsewhere periodically. These definitions of f(x) in the interval −ℓ < x < 0 are called,
respectively, the even periodic extension and the odd periodic extension of f(x).

[Figures: the even periodic extension and the odd periodic extension of a function f(x) defined on (0, ℓ).]

The even extension is given by

    f(x) = a₀/2 + Σ from n = 1 to ∞ of aₙ cos(nπx/ℓ),    (3.39)

    a₀ = (2/ℓ) ∫ from 0 to ℓ of f(x) dx,  aₙ = (2/ℓ) ∫ from 0 to ℓ of f(x) cos(nπx/ℓ) dx,    (3.40)

and the odd extension is given by

    f(x) = Σ from n = 1 to ∞ of bₙ sin(nπx/ℓ),    (3.41)

    bₙ = (2/ℓ) ∫ from 0 to ℓ of f(x) sin(nπx/ℓ) dx.    (3.42)

The series (3.39) and (3.41) are called the half-range expansions of the function f(x).
Example. Find both the even and the odd periodic extensions of the function

    f(x) = { π/2,  0 < x < π/2 ;  π − x,  π/2 ≤ x < π }.

[Figures: the graph of f(x) on (0, π) and its even and odd periodic extensions.]

The even extension is given by Eq. (3.39) with

    a₀ = (2/π) ∫ from 0 to π of f(x) dx
       = (2/π){ ∫ from 0 to π/2 of (π/2) dx + ∫ from π/2 to π of (π − x) dx }
       = (2/π){ [πx/2] from 0 to π/2 + [πx − x²/2] from π/2 to π }
       = (2/π)(π²/4 + π² − π²/2 − π²/2 + π²/8) = 3π/4,

    aₙ = (2/π) ∫ from 0 to π of f(x) cos nx dx
       = (2/π){ ∫ from 0 to π/2 of (π/2) cos nx dx + ∫ from π/2 to π of (π − x) cos nx dx }
       = (2/π)[(π/2n) sin nx] from 0 to π/2
         + (2/π){ [(π − x)(1/n) sin nx] from π/2 to π + (1/n) ∫ from π/2 to π of sin nx dx }
       = (1/n) sin(nπ/2) + (2/π){ 0 − (π/2n) sin(nπ/2) } − (2/n²π)[cos nx] from π/2 to π
       = −(2/n²π)( cos nπ − cos(nπ/2) ).
Hence

    a₁ = −(2/π)(cos π − cos(π/2)) = 2/π,
    a₂ = −(2/4π)(cos 2π − cos π) = −1/π,
    a₃ = −(2/9π)(cos 3π − cos(3π/2)) = 2/9π,
    a₄ = −(2/16π)(cos 4π − cos 2π) = 0,
    a₅ = −(2/25π)(cos 5π − cos(5π/2)) = 2/25π,
    a₆ = −(2/36π)(cos 6π − cos 3π) = −1/9π.

Hence, the even periodic extension is

    f(x) = 3π/8 + (1/π)[ 2 cos x − cos 2x + (2/9) cos 3x + (2/25) cos 5x − (1/9) cos 6x + ... ].
The odd extension is given by Eq. (3.41) with

    bₙ = (2/π) ∫ from 0 to π of f(x) sin nx dx
       = (2/π){ ∫ from 0 to π/2 of (π/2) sin nx dx + ∫ from π/2 to π of (π − x) sin nx dx }
       = (2/π)[−(π/2n) cos nx] from 0 to π/2
         + (2/π){ [−(π − x)(1/n) cos nx] from π/2 to π − (1/n) ∫ from π/2 to π of cos nx dx }
       = −(1/n) cos(nπ/2) + 1/n + (2/π){ 0 + (π/2n) cos(nπ/2) } − (2/n²π)[sin nx] from π/2 to π
       = 1/n + (2/n²π) sin(nπ/2).
Hence,

    b₁ = 1 + (2/π) sin(π/2) = 1 + 2/π,    b₂ = 1/2 + (1/2π) sin π = 1/2,
    b₃ = 1/3 + (2/9π) sin(3π/2) = 1/3 − 2/9π,    b₄ = 1/4 + (1/8π) sin 2π = 1/4,
    b₅ = 1/5 + (2/25π) sin(5π/2) = 1/5 + 2/25π,    b₆ = 1/6 + (1/18π) sin 3π = 1/6.

Hence, the odd periodic extension is

    f(x) = (1 + 2/π) sin x + (1/2) sin 2x + (1/3 − 2/9π) sin 3x + (1/4) sin 4x
           + (1/5 + 2/25π) sin 5x + ....
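The coefficient formula bₙ = 1/n + (2/n²π) sin(nπ/2) can be confirmed numerically against Eq. (3.42). A sketch (assuming NumPy):

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

x = np.linspace(0.0, np.pi, 40001)
fx = np.where(x < np.pi / 2, np.pi / 2, np.pi - x)  # the function of the example

ok = True
for n in range(1, 7):
    bn = (2 / np.pi) * trap(fx * np.sin(n * x), x)       # Eq. (3.42)
    formula = 1 / n + 2 * np.sin(n * np.pi / 2) / (n ** 2 * np.pi)
    ok = ok and abs(bn - formula) < 1e-5
print(ok)  # True: all six coefficients match
```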
Problem Set 3.5
In problems 1–12 determine whether f(x) is odd or even and expand the function in
an appropriate sine or cosine series.

1. f(x) = { −1, −π < x < 0 ; 1, 0 ≤ x < π }
2. f(x) = { 1, −2 < x < −1 ; 0, −1 < x < 1 ; 1, 1 < x < 2 }
3. f(x) = |x|, −π < x < π
4. f(x) = x, −π < x < π
5. f(x) = x², −1 < x < 1
6. f(x) = x|x|, −1 < x < 1
7. f(x) = π² − x², −π < x < π
8. f(x) = x³, −π < x < π
9. f(x) = { x − 1, −π < x < 0 ; x + 1, 0 ≤ x < π }
10. f(x) = { x + 1, −1 < x < 0 ; x − 1, 0 ≤ x < 1 }
11. f(x) = |sin x|, −π < x < π
12. f(x) = cos x, −π/2 < x < π/2

In problems 13–22 find the half-range cosine and sine expansions of f(x).

13. f(x) = { 1, 0 < x < 1/2 ; 0, 1/2 ≤ x < 1 }
14. f(x) = { 0, 0 < x < 1/2 ; 1, 1/2 ≤ x < 1 }
15. f(x) = cos x, 0 < x < π/2
16. f(x) = sin x, 0 < x < π
17. f(x) = { x, 0 < x < π/2 ; π − x, π/2 ≤ x < π }
18. f(x) = { 0, 0 < x < π ; x − π, π ≤ x < 2π }
19. f(x) = { x, 0 < x < 1 ; 1, 1 ≤ x < 2 }
20. f(x) = { 1, 0 < x < 1 ; 2 − x, 1 ≤ x < 2 }
21. f(x) = x² + x, 0 < x < 1
22. f(x) = x(2 − x), 0 < x < 2
3.6 Complex form of the Fourier series.
The complex exponential function is given by

    e^(ix) = cos x + i sin x,    e^(−ix) = cos x − i sin x.

Hence  cos(nπx/ℓ) = (1/2)(e^(inπx/ℓ) + e^(−inπx/ℓ))  and
       sin(nπx/ℓ) = (1/2i)(e^(inπx/ℓ) − e^(−inπx/ℓ)).

The Fourier series

    f(x) = a₀/2 + Σ from n = 1 to ∞ of ( aₙ cos(nπx/ℓ) + bₙ sin(nπx/ℓ) )

becomes

    f(x) = a₀/2 + Σ from n = 1 to ∞ of [ aₙ (1/2)(e^(inπx/ℓ) + e^(−inπx/ℓ))
           + bₙ (1/2i)(e^(inπx/ℓ) − e^(−inπx/ℓ)) ]
         = a₀/2 + Σ from n = 1 to ∞ of [ (1/2)(aₙ + (1/i)bₙ) e^(inπx/ℓ)
           + (1/2)(aₙ − (1/i)bₙ) e^(−inπx/ℓ) ]
         = h₀ + Σ from n = 1 to ∞ of [ hₙ e^(inπx/ℓ) + h₋ₙ e^(−inπx/ℓ) ],    (3.43)

where  h₀ = a₀/2,  hₙ = (1/2)(aₙ − ibₙ),  h₋ₙ = (1/2)(aₙ + ibₙ).

By writing h₋ₙ for the coefficient of e^(−inπx/ℓ), Eq. (3.43) can be written as

    f(x) = Σ from n = −∞ to ∞ of hₙ e^(inπx/ℓ).    (3.44)
The Fourier coefficients are given by

    h₀ = a₀/2 = (1/2ℓ) ∫ from −ℓ to ℓ of f(x) dx,    (3.45)

    hₙ = (1/2)(aₙ − ibₙ) = (1/2ℓ) ∫ from −ℓ to ℓ of f(x)[ cos(nπx/ℓ) − i sin(nπx/ℓ) ] dx,

i.e.,  hₙ = (1/2ℓ) ∫ from −ℓ to ℓ of f(x) e^(−inπx/ℓ) dx,    (3.46)

    h₋ₙ = (1/2)(aₙ + ibₙ) = (1/2ℓ) ∫ from −ℓ to ℓ of f(x) e^(inπx/ℓ) dx.    (3.47)

Hence, the complex Fourier series is given by Eq. (3.43) with Eqs. (3.45)–(3.47). Alternatively,
the Fourier series is given by Eq. (3.44) with hₙ given by Eq. (3.46).
If f(x) is an even function, then in its Fourier expansion bₙ = 0. Hence hₙ = h₋ₙ = (1/2)aₙ,
so that the complex Fourier coefficients are real. Similarly, if f(x) is an odd function, then
a₀ = aₙ = 0 and hₙ = −(1/2)ibₙ, so that the complex Fourier coefficients are pure imaginary.
Example. Find the complex Fourier series of the function

    f(x) = e^x,  −π < x < π,  f(x + 2π) = f(x).

We use the form (3.44), with ℓ = π, so that

    hₙ = (1/2π) ∫ from −π to π of e^x e^(−inx) dx = (1/2π)[ e^((1−in)x)/(1 − in) ] from −π to π
       = (1/2π)(1/(1 − in))( e^((1−in)π) − e^(−(1−in)π) )
       = (1/2π)((1 + in)/(1 + n²))( e^π e^(−inπ) − e^(−π) e^(inπ) ).

Now e^(inπ) = cos nπ + i sin nπ = (−1)ⁿ. Similarly e^(−inπ) = (−1)ⁿ. Hence

    hₙ = (1/2π)((1 + in)/(1 + n²))(−1)ⁿ( e^π − e^(−π) ) = ((1 + in)/(π(1 + n²)))(−1)ⁿ sinh π,

    f(x) = (sinh π/π) Σ from n = −∞ to ∞ of ((1 + in)/(1 + n²))(−1)ⁿ e^(inx).

Separating this expression for f(x) into real and imaginary parts we obtain

    f(x) = (sinh π/π) Σ from n = −∞ to ∞ of ((−1)ⁿ/(1 + n²))[ (cos nx − n sin nx) + i(n cos nx + sin nx) ].

The terms corresponding to n = −1, −2, ..., can be included in a sum from n = 1 to ∞
by replacing n by −n, and the n = 0 term can be listed separately, to give

    f(x) = (sinh π/π){ 1 + Σ from n = 1 to ∞ of ((−1)ⁿ/(1 + n²))[ (cos nx − n sin nx) + i(n cos nx + sin nx) ]
           + Σ from n = 1 to ∞ of ((−1)ⁿ/(1 + n²))[ (cos nx − n sin nx) − i(n cos nx + sin nx) ] }
         = (sinh π/π){ 1 + 2 Σ from n = 1 to ∞ of ((−1)ⁿ/(1 + n²)) cos nx
           − 2 Σ from n = 1 to ∞ of ((−1)ⁿ n/(1 + n²)) sin nx },

which is the corresponding real Fourier series.
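The coefficient formula for hₙ can be checked numerically from Eq. (3.46) with ℓ = π. A sketch (n = 3 is an arbitrary test index; assuming NumPy):

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule (works for complex integrands)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

n = 3
x = np.linspace(-np.pi, np.pi, 200001)
h_n = trap(np.exp(x) * np.exp(-1j * n * x), x) / (2 * np.pi)   # Eq. (3.46)
formula = (1 + 1j * n) / (np.pi * (1 + n ** 2)) * (-1) ** n * np.sinh(np.pi)

print(abs(h_n - formula) < 1e-6)  # True: quadrature agrees with the closed form
```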
Problem Set 3.6
Find the complex form of the Fourier series of the following functions f(x).

1. f(x) = x, −π < x < π.
2. f(x) = { −1, −π < x < 0 ; 1, 0 < x < π }.
3. f(x) = e^x, −2 < x < 2.
3.7 Separable partial differential equations.
Many problems in applied mathematics can be reduced to the solution of the partial
differential equation

    ∇²V = L ∂²V/∂t² + M ∂V/∂t + N,    (3.48)

where V is a physical quantity depending on the three cartesian space coordinates x, y, z
and on time t, ∇² is the differential operator

    ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²,    (3.49)

known as the Laplacian, and L, M, N are functions of x, y, z or constants. The following
four special cases of Eq. (3.48) are of particular importance.

(a) Laplace's equation. Here L = M = N = 0 and examples of quantities which can
be represented by the function V are:
(i) The gravitational potential in a region devoid of attracting matter.
(ii) The electrostatic potential in a uniform dielectric.
(iii) The magnetic potential.
(iv) The velocity potential in the irrotational motion of a homogeneous fluid.
(v) The steady state temperature in a uniform solid.

(b) Poisson's equation. In this equation L = M = 0 and N is a given function of x, y, z.
Examples of quantities represented by V are:
(i) The gravitational potential in a region in which N is proportional to the density
of the material at a point (x, y, z) in the region.
(ii) The electrostatic potential in a region in which N is proportional to the charge
distribution.
(iii) In the two-dimensional (z absent) form of the equation in which N is a constant,
V is a measure of the shear stress entailed by twisting a long bar of specified
cross-section.

(c) Heat conduction equation. When L = N = 0 and M = 1/k², where k² is the
diffusivity of a homogeneous isotropic body, Eq. (3.48) gives the temperature V at a
point (x, y, z) of the body. In certain circumstances the same equation can be used
in diffusion problems, the quantity V then being the concentration of the diffusing
substance.

(d) The wave equation. Eq. (3.48) with M = N = 0 and L = 1/c² arises in investigations
of waves propagated with velocity c independent of wave length. Typical examples of
quantities which can be represented by the function V are:
(i) Components of the displacement in vibrating systems.
(ii) The velocity potential of a gas in the theory of sound.
(iii) Components of the electric or magnetic vector in the electromagnetic theory of
light.
In the two-dimensional case, i.e., when there are two independent variables $x_1$ and $x_2$, the general second-order homogeneous linear partial differential equation for a function $u(x_1, x_2)$ can be written in the form
\[
A\frac{\partial^2 u}{\partial x_1^2} + B\frac{\partial^2 u}{\partial x_1\,\partial x_2} + C\frac{\partial^2 u}{\partial x_2^2} + D\frac{\partial u}{\partial x_1} + E\frac{\partial u}{\partial x_2} + Fu = 0, \tag{3.50}
\]
where $A, B, C, D, E, F$ are real constants. Such an equation can be classified into one of three types; the equation is said to be

(a) hyperbolic if $B^2 - 4AC > 0$,
(b) parabolic if $B^2 - 4AC = 0$,
(c) elliptic if $B^2 - 4AC < 0$.
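The standard discriminant test can be written as a one-line check. The following is an illustrative sketch (the function name is ours, not from the text); the three commented examples anticipate the equations discussed below.

```python
def classify(A, B, C):
    """Type of A u_{x1 x1} + B u_{x1 x2} + C u_{x2 x2} + (lower order) = 0,
    decided by the sign of the discriminant B^2 - 4AC."""
    disc = B * B - 4 * A * C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# wave equation  c^2 u_xx - u_tt = 0 : A = c^2, B = 0, C = -1 -> hyperbolic
# heat equation  k^2 u_xx - u_t  = 0 : A = k^2, B = 0, C = 0  -> parabolic
# Laplace        u_xx + u_yy     = 0 : A = 1,   B = 0, C = 1  -> elliptic
```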
Three important partial differential equations which are special cases of both Eqs. (3.48) and (3.50) are:
1. The one-dimensional heat conduction equation
\[
k^2\frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}, \tag{3.51}
\]
where $k^2$ is the thermal diffusivity and $u(x, t)$ is the temperature. This equation governs the temperature distribution in a straight bar of uniform cross-section and homogeneous material. It is a parabolic equation.

2. The one-dimensional wave equation
\[
c^2\frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 u}{\partial t^2} \tag{3.52}
\]
describes the motion of a vibrating string, where $c$ is the wave velocity for the string. It is a hyperbolic equation.

3. The two-dimensional Laplace equation
\[
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0. \tag{3.53}
\]
This equation arises in problems involving time-independent potential functions. It is an elliptic equation.
We shall first turn our attention to the one-dimensional heat conduction equation, applying it to the study of the temperature distribution in a bar of thermal diffusivity $k^2$ and of length $\ell$, with the ends of the bar denoted by $x = 0$ and $x = \ell$. In order to solve such a problem we require an initial condition, such as the temperature distribution in the bar at time $t = 0$. Only one such condition is required since the differential equation contains only the first derivative with respect to $t$. However, we also require two boundary conditions, i.e., conditions on $x$, since the differential equation contains the second derivative with respect to $x$. The boundary conditions may be that the ends of the bar are held at fixed temperatures, or that the ends are insulated, or that one end is at a fixed temperature while the other end is insulated. The initial and boundary conditions that we will use may be stated as follows:

Initial condition:
\[
u(x, 0) = f(x). \tag{3.54}
\]
This states that at time $t = 0$ the temperature distribution in the bar is given by $f(x)$.

Boundary conditions: Either
\[
u(0, t) = 0\,, \qquad u(\ell, t) = 0\,, \tag{3.55}
\]
which states that both ends of the bar are held at zero temperature, or
\[
u_x(0, t) = 0\,, \qquad u_x(\ell, t) = 0\,, \tag{3.56}
\]
which states that there is no heat flow at the ends of the bar, i.e., the ends are insulated.
In order to solve Eq. (3.51) we assume that $u(x, t)$ is a product function, i.e.,
\[
u(x, t) = X(x)T(t). \tag{3.57}
\]
Substituting this into Eq. (3.51) we obtain
\[
k^2 X''T = X\dot T, \tag{3.58}
\]
where the prime denotes differentiation with respect to $x$ and the dot denotes differentiation with respect to $t$. Dividing through by $XT$, Eq. (3.58) becomes
\[
\frac{X''}{X} = \frac{\dot T}{k^2 T}. \tag{3.59}
\]
The left side of this equation is a function of $x$ only while the right side is a function of $t$ only. Hence, each side must be equal to a constant, i.e.,
\[
\frac{X''}{X} = \frac{\dot T}{k^2 T} = \sigma, \tag{3.60}
\]
so that the single partial differential equation is replaced by the two ordinary differential equations
\[
X'' - \sigma X = 0, \tag{3.61}
\]
\[
\dot T - \sigma k^2 T = 0. \tag{3.62}
\]
The product of two solutions of Eqs. (3.61), (3.62), respectively, for any value of $\sigma$ is a solution of Eq. (3.51). However, we require solutions that satisfy the boundary conditions (3.55) or (3.56), and this severely restricts the possible values of $\sigma$.

First we consider the case of the boundary conditions (3.55). The first of these gives $u(0, t) = X(0)T(t) = 0$ and, since $T(t) = 0$ would imply that $u(x, t) = 0$ for all $x$, the only possibility is $X(0) = 0$. Similarly, the second condition of (3.55) gives $u(\ell, t) = X(\ell)T(t) = 0$, which implies that $X(\ell) = 0$. Hence, the boundary conditions (3.55) imply that
\[
X(0) = X(\ell) = 0. \tag{3.63}
\]
Now, in solving Eq. (3.61) there are three cases to consider, namely $\sigma = 0$, $\sigma > 0$ (i.e., $\sigma = \lambda^2$), and $\sigma < 0$ (i.e., $\sigma = -\lambda^2$).
Case 1. $\sigma = 0$. Eq. (3.61) becomes $X'' = 0$, i.e.,
\[
X = ax + b,
\]
where $a, b$ are constants. Substituting $x = 0$ and $x = \ell$, the boundary conditions (3.63) lead to $a = b = 0$, i.e., $X = 0$, so that $u(x, t) = 0$. This is not an acceptable solution so we discard it.

Case 2. $\sigma = \lambda^2$. Eq. (3.61) becomes $X'' - \lambda^2 X = 0$, i.e.,
\[
X = K_1 e^{\lambda x} + K_2 e^{-\lambda x}.
\]
The boundary conditions (3.63) give
\[
K_1 + K_2 = 0\,, \qquad K_1 e^{\lambda\ell} + K_2 e^{-\lambda\ell} = 0\,,
\]
and the solution of this system is $K_1 = K_2 = 0$, i.e., $X = 0$, so that $u(x, t) = 0$. Again, we discard this solution.

Case 3. $\sigma = -\lambda^2$. Eq. (3.61) becomes $X'' + \lambda^2 X = 0$, i.e.,
\[
X = K_1\cos\lambda x + K_2\sin\lambda x.
\]
The boundary conditions (3.63) give
\[
K_1 = 0\,, \qquad K_2\sin\lambda\ell = 0.
\]
We discard the possibility $K_2 = 0$, since this implies that $X = 0$, so the solution is given by $\sin\lambda\ell = 0$, i.e.,
\[
\lambda\ell = n\pi\,, \tag{3.64}
\]
where $n$ is a nonzero integer. Thus there are an infinite number of solutions. The constant $\sigma$ in Eq. (3.61) is given by
\[
\sigma = -\lambda^2 = -\frac{n^2\pi^2}{\ell^2} \tag{3.65}
\]
and $X$ is proportional to $\sin\dfrac{n\pi x}{\ell}$.
From Eq. (3.65), the differential equation (3.62) for $T$ becomes
\[
\dot T = -\frac{n^2\pi^2 k^2}{\ell^2}\,T, \tag{3.66}
\]
so that $T$ is proportional to $e^{-n^2\pi^2 k^2 t/\ell^2}$. Thus, neglecting constant multipliers, the functions
\[
u_n(x, t) = e^{-n^2\pi^2 k^2 t/\ell^2}\sin\frac{n\pi x}{\ell} \qquad (n = 1, 2, \ldots) \tag{3.67}
\]
are each solutions of Eq. (3.51) and satisfy the boundary conditions (3.55). Note that we consider only positive values of $n$ because negative values of $n$ give the same solutions. Since the differential equation (3.51) and the boundary conditions (3.55) are linear and homogeneous, it follows that any linear combination of the $u_n(x, t)$ also satisfies the differential equation and boundary conditions. Consequently, the solution of the differential equation can be written as
\[
u(x, t) = \sum_{n=1}^{\infty} c_n u_n(x, t) = \sum_{n=1}^{\infty} c_n e^{-n^2\pi^2 k^2 t/\ell^2}\sin\frac{n\pi x}{\ell}. \tag{3.68}
\]
To complete the solution we have to satisfy the initial condition (3.54). Putting $t = 0$ in Eq. (3.68) we obtain
\[
u(x, 0) = f(x) = \sum_{n=1}^{\infty} c_n\sin\frac{n\pi x}{\ell}\,,
\]
so that the $c_n$ are the Fourier coefficients for the Fourier sine series corresponding to $f(x)$, i.e.,
\[
c_n = \frac{2}{\ell}\int_0^{\ell} f(x)\sin\frac{n\pi x}{\ell}\,dx. \tag{3.69}
\]
This completes the solution for $u(x, t)$, which is Eq. (3.68) with $c_n$ given by Eq. (3.69).
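The recipe above — compute the sine coefficients (3.69), then sum the truncated series (3.68) — can be sketched numerically. This is an illustrative sketch only, using simple trapezoidal quadrature; the helper names are ours, not from the text.

```python
import math

def sine_coefficients(f, ell, N, panels=2000):
    """Approximate c_n = (2/ell) * integral_0^ell f(x) sin(n pi x/ell) dx
    (Eq. 3.69) with the trapezoidal rule."""
    h = ell / panels
    coeffs = []
    for n in range(1, N + 1):
        total = 0.0
        for j in range(panels + 1):
            x = j * h
            weight = 0.5 if j in (0, panels) else 1.0
            total += weight * f(x) * math.sin(n * math.pi * x / ell)
        coeffs.append(2.0 / ell * h * total)
    return coeffs

def heat_solution(x, t, coeffs, ell, k2):
    """Truncated series (3.68) for the temperature u(x, t)."""
    return sum(c * math.exp(-(n * math.pi / ell) ** 2 * k2 * t)
               * math.sin(n * math.pi * x / ell)
               for n, c in enumerate(coeffs, start=1))
```

For an initial temperature that is itself a single sine mode, the computed coefficients reduce to a single nonzero entry, as the series predicts.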
Example 1. Find the solution to the heat conduction problem
\[
k^2 u_{xx} = u_t\,, \qquad 0 < x < \ell\,,\ t > 0,
\]
\[
u(0, t) = 0\,, \quad u(\ell, t) = 0\,, \qquad t > 0,
\]
\[
u(x, 0) = x(\ell - x).
\]
The solution is of the form (3.68) with $c_n$ given by Eq. (3.69) with $f(x) = x(\ell - x)$, i.e.,
\[
c_n = \frac{2}{\ell}\int_0^{\ell} x(\ell - x)\sin\frac{n\pi x}{\ell}\,dx
\]
\[
= \frac{2}{\ell}\left\{\left[x(\ell - x)\left(-\frac{\ell}{n\pi}\cos\frac{n\pi x}{\ell}\right)\right]_0^{\ell} + \frac{\ell}{n\pi}\int_0^{\ell}(\ell - 2x)\cos\frac{n\pi x}{\ell}\,dx\right\}
\]
\[
= \frac{2}{\ell}\,[0] + \frac{2}{n\pi}\left\{\left[(\ell - 2x)\frac{\ell}{n\pi}\sin\frac{n\pi x}{\ell}\right]_0^{\ell} + \frac{2\ell}{n\pi}\int_0^{\ell}\sin\frac{n\pi x}{\ell}\,dx\right\}
\]
\[
= \frac{2}{n\pi}\,[0] + \frac{4\ell}{n^2\pi^2}\left[-\frac{\ell}{n\pi}\cos\frac{n\pi x}{\ell}\right]_0^{\ell}
= \frac{4\ell^2}{n^3\pi^3}\left(\cos 0 - \cos n\pi\right)
\]
\[
= \frac{4\ell^2}{n^3\pi^3}\left[1 - (-1)^n\right]
= \begin{cases} 0 & n \text{ even} \\[6pt] \dfrac{8\ell^2}{n^3\pi^3} & n \text{ odd} \end{cases}.
\]
Put $n = 2m + 1$ $(m = 0, 1, 2, \ldots)$; then
\[
c_{2m+1} = \frac{8\ell^2}{\pi^3(2m+1)^3}
\]
and the final solution is
\[
u(x, t) = \sum_{m=0}^{\infty}\frac{8\ell^2}{\pi^3(2m+1)^3}\,e^{-(2m+1)^2\pi^2 k^2 t/\ell^2}\sin\frac{(2m+1)\pi x}{\ell}
\]
\[
= \frac{8\ell^2}{\pi^3}\left[e^{-\pi^2 k^2 t/\ell^2}\sin\frac{\pi x}{\ell} + \frac{1}{27}e^{-9\pi^2 k^2 t/\ell^2}\sin\frac{3\pi x}{\ell} + \frac{1}{125}e^{-25\pi^2 k^2 t/\ell^2}\sin\frac{5\pi x}{\ell} + \cdots\right].
\]
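The closed form for $c_n$ just derived can be checked against direct numerical integration. A sketch with the illustrative value $\ell = 2$ (the helper names are ours):

```python
import math

ell = 2.0

def f(x):
    # initial temperature from Example 1
    return x * (ell - x)

def cn_numeric(n, panels=4000):
    # trapezoidal approximation of (2/ell) * integral_0^ell f(x) sin(n pi x/ell) dx
    h = ell / panels
    total = sum((0.5 if j in (0, panels) else 1.0)
                * f(j * h) * math.sin(n * math.pi * j * h / ell)
                for j in range(panels + 1))
    return 2.0 / ell * h * total

def cn_closed(n):
    # the closed form derived above: 4 ell^2 / (n^3 pi^3) * (1 - (-1)^n)
    return 4.0 * ell ** 2 / (n ** 3 * math.pi ** 3) * (1 - (-1) ** n)
```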
Example 2. Find the solution to the heat conduction problem
\[
100u_{xx} = u_t\,, \qquad 0 < x < 1\,,\ t > 0,
\]
\[
u(0, t) = 0\,, \quad u(1, t) = 0\,, \qquad t > 0,
\]
\[
u(x, 0) = \sin 2\pi x - 2\sin 5\pi x\,, \qquad 0 \le x \le 1.
\]
In this case $k^2 = 100$, $\ell = 1$, $f(x) = \sin 2\pi x - 2\sin 5\pi x$. The solution is of the form
\[
u(x, t) = \sum_{n=1}^{\infty} c_n e^{-100n^2\pi^2 t}\sin n\pi x.
\]
When $t = 0$,
\[
u(x, 0) = \sin 2\pi x - 2\sin 5\pi x = \sum_{n=1}^{\infty} c_n\sin n\pi x.
\]
Hence $c_2 = 1$, $c_5 = -2$, $c_n = 0$ $(n \ne 2, 5)$, and the solution is
\[
u(x, t) = e^{-400\pi^2 t}\sin 2\pi x - 2e^{-2500\pi^2 t}\sin 5\pi x.
\]
Now consider the case of the boundary conditions (3.56). The first of these gives
\[
u_x(0, t) = X'(0)T(t) = 0\,,
\]
which implies that $X'(0) = 0$. Similarly, the second condition implies that $X'(\ell) = 0$, so that
\[
X'(0) = X'(\ell) = 0. \tag{3.70}
\]
As in the previous case we consider the three cases $\sigma = 0$, $\sigma = \lambda^2$, and $\sigma = -\lambda^2$, respectively, in Eq. (3.61).

Case 1. $\sigma = 0$. We again obtain $X = ax + b$, and each of the conditions (3.70) implies that $a = 0$, so that $X$ is a constant, i.e.,
\[
X = \tfrac{1}{2}c_0. \tag{3.71}
\]

Case 2. $\sigma = \lambda^2$. We again obtain $X = K_1 e^{\lambda x} + K_2 e^{-\lambda x}$, so that $X' = K_1\lambda e^{\lambda x} - K_2\lambda e^{-\lambda x}$, and the boundary conditions (3.70) lead to
\[
K_1 - K_2 = 0\,, \qquad K_1 e^{\lambda\ell} - K_2 e^{-\lambda\ell} = 0\,,
\]
and the solution of this system is $K_1 = K_2 = 0$, i.e., $X = 0$, so that $u(x, t) = 0$, and we discard this solution.

Case 3. $\sigma = -\lambda^2$. As before we obtain $X = K_1\cos\lambda x + K_2\sin\lambda x$. Then $X' = -K_1\lambda\sin\lambda x + K_2\lambda\cos\lambda x$, and the boundary conditions (3.70) give
\[
K_2 = 0\,, \qquad K_1\sin\lambda\ell = 0.
\]
Hence $K_2 = 0$ and, since $K_1$ cannot also be zero, we must have $\sin\lambda\ell = 0$, i.e.,
\[
\lambda\ell = n\pi, \tag{3.72}
\]
where $n$ is a nonzero integer. Thus there is an infinite number of solutions of Eq. (3.51) satisfying the boundary conditions (3.56) of the form
\[
u_n(x, t) = e^{-n^2\pi^2 k^2 t/\ell^2}\cos\frac{n\pi x}{\ell} \qquad (n = 1, 2, \ldots) \tag{3.73}
\]
together with the solution of Case 1. Thus the solution of the differential equation can be written as
\[
u(x, t) = \frac{1}{2}c_0 + \sum_{n=1}^{\infty} c_n e^{-n^2\pi^2 k^2 t/\ell^2}\cos\frac{n\pi x}{\ell}. \tag{3.74}
\]
We now have to satisfy the initial condition (3.54). Putting $t = 0$ in Eq. (3.74) we obtain
\[
u(x, 0) = f(x) = \frac{1}{2}c_0 + \sum_{n=1}^{\infty} c_n\cos\frac{n\pi x}{\ell}\,, \tag{3.75}
\]
so that $c_0$ and $c_n$ are the Fourier coefficients for the Fourier cosine series corresponding to $f(x)$, i.e.,
\[
c_0 = \frac{2}{\ell}\int_0^{\ell} f(x)\,dx\,, \tag{3.76}
\]
\[
c_n = \frac{2}{\ell}\int_0^{\ell} f(x)\cos\frac{n\pi x}{\ell}\,dx. \tag{3.77}
\]
Thus, in the case of the boundary conditions (3.56), the complete solution is given by Eq. (3.74) with $c_0$, $c_n$ given by Eqs. (3.76), (3.77).
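Eqs. (3.76)–(3.77) can be evaluated numerically in the same way as the sine coefficients. A sketch using trapezoidal quadrature (the helper names are ours):

```python
import math

def cosine_coefficients(f, ell, N, panels=4000):
    """c_0 from Eq. (3.76) and c_1..c_N from Eq. (3.77), via the trapezoidal rule."""
    h = ell / panels

    def integrate(g):
        return h * sum((0.5 if j in (0, panels) else 1.0) * g(j * h)
                       for j in range(panels + 1))

    c0 = 2.0 / ell * integrate(f)
    cn = [2.0 / ell * integrate(lambda x, n=n: f(x) * math.cos(n * math.pi * x / ell))
          for n in range(1, N + 1)]
    return c0, cn
```

For $f(x) = x$ on $(0, 1)$, for instance, this reproduces the exact values $c_0 = 1$ and $c_n = 2\big[(-1)^n - 1\big]/(n^2\pi^2)$.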
Example 3. Consider a uniform rod of length $\ell$ with an initial temperature given by $\sin\dfrac{\pi x}{\ell}$, $0 \le x \le \ell$. Assume that both ends of the bar are insulated. Find a formal series expansion for the temperature $u(x, t)$. What is the steady-state temperature as $t \to \infty$?

The solution is of the form (3.74) with $c_0$ and $c_n$ given by Eqs. (3.76) and (3.77) with $f(x) = \sin\dfrac{\pi x}{\ell}$, i.e.,
\[
c_0 = \frac{2}{\ell}\int_0^{\ell}\sin\frac{\pi x}{\ell}\,dx
    = \frac{2}{\ell}\left[-\frac{\ell}{\pi}\cos\frac{\pi x}{\ell}\right]_0^{\ell}
    = -\frac{2}{\pi}\left(\cos\pi - \cos 0\right) = \frac{4}{\pi}\,.
\]
\[
c_n = \frac{2}{\ell}\int_0^{\ell}\sin\frac{\pi x}{\ell}\cos\frac{n\pi x}{\ell}\,dx
    = \frac{2}{\ell}\int_0^{\ell}\frac{1}{2}\left[\sin\frac{(n+1)\pi x}{\ell} - \sin\frac{(n-1)\pi x}{\ell}\right]dx
\]
\[
= \frac{1}{\ell}\left[-\frac{\ell}{(n+1)\pi}\cos\frac{(n+1)\pi x}{\ell} + \frac{\ell}{(n-1)\pi}\cos\frac{(n-1)\pi x}{\ell}\right]_0^{\ell} \qquad (n \ne 1)
\]
\[
= -\frac{1}{(n+1)\pi}\cos(n+1)\pi + \frac{1}{(n-1)\pi}\cos(n-1)\pi + \frac{1}{(n+1)\pi}\cos 0 - \frac{1}{(n-1)\pi}\cos 0
\]
\[
= \begin{cases} 0 & n \text{ odd},\ n \ne 1 \\[6pt] -\dfrac{4}{\pi(n^2 - 1)} & n \text{ even} \end{cases}. \tag{3.78}
\]
When $n = 1$,
\[
c_1 = \frac{2}{\ell}\int_0^{\ell}\sin\frac{\pi x}{\ell}\cos\frac{\pi x}{\ell}\,dx
    = \frac{1}{\ell}\int_0^{\ell}\sin\frac{2\pi x}{\ell}\,dx
    = \frac{1}{\ell}\left[-\frac{\ell}{2\pi}\cos\frac{2\pi x}{\ell}\right]_0^{\ell}
    = -\frac{1}{2\pi}\left(\cos 2\pi - \cos 0\right) = 0.
\]
Hence the solution is
\[
u(x, t) = \frac{2}{\pi} + \sum_{n=1}^{\infty} c_n e^{-n^2\pi^2 k^2 t/\ell^2}\cos\frac{n\pi x}{\ell}\,,
\]
where the $c_n$ are given by Eq. (3.78).

The steady-state temperature is the limit as $t \to \infty$ in the expression for $u(x, t)$, i.e.,
\[
u_{ss} = \frac{2}{\pi}\,.
\]
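The steady-state value can be checked directly: with insulated ends every cosine mode decays, so $u \to \tfrac{1}{2}c_0$, the average of the initial temperature over the bar. A numerical sketch with the illustrative value $\ell = 1$:

```python
import math

# With insulated ends all cosine modes decay, so u -> c0/2,
# the average of the initial temperature over the bar.
ell = 1.0
panels = 10000
h = ell / panels
average = (h / ell) * sum((0.5 if j in (0, panels) else 1.0)
                          * math.sin(math.pi * j * h / ell)
                          for j in range(panels + 1))
# for f(x) = sin(pi x / ell) this average is 2/pi, agreeing with u_ss above
```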
Now consider the one-dimensional wave equation (3.52) as applied to the vibrations of an elastic string tightly stretched between two points at the same horizontal level, distance $\ell$ apart, so that the $x$-axis lies along the string. These vibrations are described by Eq. (3.52) provided that damping effects, such as air resistance, are neglected and that the amplitude of the motion is not too large. The constant $c^2$ appearing in Eq. (3.52) is given by
\[
c^2 = T/\rho, \tag{3.79}
\]
where $T$ is the tension in the string and $\rho$ is the mass per unit length of the string. As we remarked earlier, $c$ is called the wave velocity for the string, i.e., the velocity at which waves are propagated along the string.

Since Eq. (3.52) is of second order with respect to $x$ and also with respect to $t$, it follows that we require two boundary conditions and two initial conditions. Assuming that the ends of the string are fixed, the boundary conditions are
\[
u(0, t) = 0\,, \qquad u(\ell, t) = 0\,, \tag{3.80}
\]
and the initial conditions are
\[
u(x, 0) = f(x)\,, \qquad 0 \le x \le \ell\,, \tag{3.81}
\]
which describes the initial position of the string, and
\[
u_t(x, 0) = g(x)\,, \qquad 0 \le x \le \ell\,, \tag{3.82}
\]
which describes the initial velocity, where $f(x)$ and $g(x)$ are given functions which, for the consistency of Eqs. (3.80) to (3.82), must satisfy
\[
f(0) = f(\ell) = 0\,, \qquad g(0) = g(\ell) = 0. \tag{3.83}
\]
The string may be set in motion by plucking, i.e., by pulling the string aside and letting it go from rest. In this case $f(x) \ne 0$ but $g(x) = 0$. Alternatively, the string may be struck while in an initial horizontal position, in which case $f(x) = 0$ but $g(x) \ne 0$. Of course, the actual initial conditions may be some combination of these two possibilities.
To solve Eq. (3.52) we assume a separable solution of the form
\[
u(x, t) = X(x)T(t) \tag{3.84}
\]
and substitution of this into Eq. (3.52) yields
\[
\frac{X''}{X} = \frac{1}{c^2}\,\frac{\ddot T}{T} = \sigma, \tag{3.85}
\]
where $\sigma$ is a constant. As in the case of the heat-conduction equation with boundary conditions (3.55), a nontrivial solution is obtained only if $\sigma < 0$, i.e., $\sigma = -\lambda^2$, in which case Eq. (3.85) leads to the two ordinary differential equations
\[
X'' + \lambda^2 X = 0\,, \tag{3.86}
\]
\[
\ddot T + c^2\lambda^2 T = 0\,, \tag{3.87}
\]
for which the solutions are
\[
X = K_1\cos\lambda x + K_2\sin\lambda x\,, \tag{3.88}
\]
\[
T = K_3\cos\lambda ct + K_4\sin\lambda ct\,, \tag{3.89}
\]
where $K_1, \ldots, K_4$ are arbitrary constants.

The boundary conditions (3.80) become
\[
X(0) = 0\,, \qquad X(\ell) = 0\,, \tag{3.90}
\]
which lead to $K_1 = 0$, $K_2\sin\lambda\ell = 0$. In order to avoid the trivial solution $u(x, t) = 0$ everywhere we require $\sin\lambda\ell = 0$, i.e.,
\[
\lambda = \frac{n\pi}{\ell}\,, \tag{3.91}
\]
so that the solution for $X(x)$ is
\[
X = K_2\sin\frac{n\pi x}{\ell} \qquad (n = 1, 2, 3, \ldots). \tag{3.92}
\]
From Eqs. (3.89), (3.91) and (3.92) we see that the solutions for $u$ satisfying the boundary conditions are
\[
u_n = \left(A_n\cos\frac{n\pi ct}{\ell} + B_n\sin\frac{n\pi ct}{\ell}\right)\sin\frac{n\pi x}{\ell} \tag{3.93}
\]
and the general solution is the sum of all such solutions, i.e.,
\[
u(x, t) = \sum_{n=1}^{\infty}\left(A_n\cos\frac{n\pi ct}{\ell} + B_n\sin\frac{n\pi ct}{\ell}\right)\sin\frac{n\pi x}{\ell}\,. \tag{3.94}
\]
Putting $t = 0$ in Eq. (3.94) we have
\[
u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{\ell}\,, \tag{3.95}
\]
so that the $A_n$ are the Fourier coefficients for the half-range sine series expansion for $f(x)$, i.e.,
\[
A_n = \frac{2}{\ell}\int_0^{\ell} f(x)\sin\frac{n\pi x}{\ell}\,dx. \tag{3.96}
\]
To determine $B_n$, we differentiate Eq. (3.94) with respect to $t$ to obtain
\[
u_t(x, t) = \sum_{n=1}^{\infty}\left(-A_n\frac{n\pi c}{\ell}\sin\frac{n\pi ct}{\ell} + B_n\frac{n\pi c}{\ell}\cos\frac{n\pi ct}{\ell}\right)\sin\frac{n\pi x}{\ell}\,,
\]
so that
\[
u_t(x, 0) = g(x) = \sum_{n=1}^{\infty} B_n\frac{n\pi c}{\ell}\sin\frac{n\pi x}{\ell}\,, \tag{3.97}
\]
and hence the coefficients satisfy
\[
B_n\frac{n\pi c}{\ell} = \frac{2}{\ell}\int_0^{\ell} g(x)\sin\frac{n\pi x}{\ell}\,dx\,,
\]
i.e.,
\[
B_n = \frac{2}{n\pi c}\int_0^{\ell} g(x)\sin\frac{n\pi x}{\ell}\,dx. \tag{3.98}
\]
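The full recipe — coefficients (3.96) and (3.98), then the series (3.94) — can be sketched numerically with trapezoidal quadrature (helper names are ours, not from the text):

```python
import math

def wave_solution(f, g, ell, c, N, panels=4000):
    """Truncated series (3.94) with A_n from Eq. (3.96) and B_n from Eq. (3.98)."""
    h = ell / panels

    def integrate(fun):
        return h * sum((0.5 if j in (0, panels) else 1.0) * fun(j * h)
                       for j in range(panels + 1))

    A = [2.0 / ell * integrate(lambda x, n=n: f(x) * math.sin(n * math.pi * x / ell))
         for n in range(1, N + 1)]
    B = [2.0 / (n * math.pi * c) * integrate(lambda x, n=n: g(x) * math.sin(n * math.pi * x / ell))
         for n in range(1, N + 1)]

    def u(x, t):
        return sum((A[n - 1] * math.cos(n * math.pi * c * t / ell)
                    + B[n - 1] * math.sin(n * math.pi * c * t / ell))
                   * math.sin(n * math.pi * x / ell)
                   for n in range(1, N + 1))

    return u
```

Releasing the string from the shape $\sin(\pi x/\ell)$ at rest gives the single-mode solution $\cos(\pi ct/\ell)\sin(\pi x/\ell)$, which the sketch reproduces.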
Example 4. Find the solution of the wave equation for a vibrating string of length $\ell$ subject to the boundary conditions
\[
u(0, t) = 0\,, \qquad u(\ell, t) = 0,
\]
and the initial conditions
\[
u(x, 0) = 0\,, \qquad u_t(x, 0) = x(\ell - x).
\]
In this problem $f(x) = 0$, $g(x) = x(\ell - x)$, so that $A_n = 0$ and
\[
B_n = \frac{2}{n\pi c}\int_0^{\ell} x(\ell - x)\sin\frac{n\pi x}{\ell}\,dx
\]
\[
= \frac{2}{n\pi c}\left\{\left[-\frac{\ell}{n\pi}\,x(\ell - x)\cos\frac{n\pi x}{\ell}\right]_0^{\ell} + \frac{\ell}{n\pi}\int_0^{\ell}(\ell - 2x)\cos\frac{n\pi x}{\ell}\,dx\right\}
\]
\[
= \frac{2\ell}{n^2\pi^2 c}\left\{\left[(\ell - 2x)\frac{\ell}{n\pi}\sin\frac{n\pi x}{\ell}\right]_0^{\ell} + \frac{2\ell}{n\pi}\int_0^{\ell}\sin\frac{n\pi x}{\ell}\,dx\right\}
\]
\[
= \frac{4\ell^2}{n^3\pi^3 c}\left[-\frac{\ell}{n\pi}\cos\frac{n\pi x}{\ell}\right]_0^{\ell}
= \frac{4\ell^3}{n^4\pi^4 c}\left[1 - (-1)^n\right].
\]
Hence the solution for $u(x, t)$ is
\[
u(x, t) = \frac{4\ell^3}{\pi^4 c}\sum_{n=1}^{\infty}\frac{1}{n^4}\left[1 - (-1)^n\right]\sin\frac{n\pi ct}{\ell}\sin\frac{n\pi x}{\ell}\,.
\]
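As with Example 1, the closed form for $B_n$ can be checked numerically. A sketch with the illustrative values $\ell = 1$, $c = 2$ (helper names are ours):

```python
import math

ell, c = 1.0, 2.0

def Bn_numeric(n, panels=4000):
    # trapezoidal approximation of (2/(n pi c)) * integral_0^ell x(ell-x) sin(n pi x/ell) dx
    h = ell / panels
    total = sum((0.5 if j in (0, panels) else 1.0)
                * (j * h) * (ell - j * h) * math.sin(n * math.pi * j * h / ell)
                for j in range(panels + 1))
    return 2.0 / (n * math.pi * c) * h * total

def Bn_closed(n):
    # the closed form obtained above: 4 ell^3 / (n^4 pi^4 c) * (1 - (-1)^n)
    return 4.0 * ell ** 3 / (n ** 4 * math.pi ** 4 * c) * (1 - (-1) ** n)
```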
When the tension $T$ in the string is large enough, the vibrating string will produce a musical sound. This sound is the result of standing waves. The solution (3.94) is a superposition of product solutions called standing waves or normal modes,
\[
u(x, t) = u_1(x, t) + u_2(x, t) + u_3(x, t) + \cdots.
\]
The product solutions (3.93) can be written as
\[
u_n(x, t) = C_n\sin\left(\frac{n\pi ct}{\ell} + \phi_n\right)\sin\frac{n\pi x}{\ell}\,, \tag{3.99}
\]
where $C_n = \sqrt{A_n^2 + B_n^2}$ and $\sin\phi_n = \dfrac{A_n}{C_n}$, $\cos\phi_n = \dfrac{B_n}{C_n}$. For $n = 1, 2, 3, \ldots$ the standing waves are essentially the graphs of $\sin\dfrac{n\pi x}{\ell}$, scaled by the time-varying amplitude $C_n\sin\left(\dfrac{n\pi ct}{\ell} + \phi_n\right)$. Alternatively, we see from Eq. (3.99) that, at a fixed value of $x$, each product function $u_n(x, t)$ represents simple harmonic motion with amplitude $C_n\left|\sin\dfrac{n\pi x}{\ell}\right|$ and frequency $f_n = \dfrac{nc}{2\ell}$, i.e., each point on a standing wave vibrates with a different amplitude but with the same frequency. When $n = 1$,
\[
u_1(x, t) = C_1\sin\left(\frac{\pi ct}{\ell} + \phi_1\right)\sin\frac{\pi x}{\ell}
\]
is called the first standing wave, the first normal mode, or the fundamental mode of vibration.

[Figures: the first standing wave on $(0, \ell)$; the second standing wave, with a node at $x = \ell/2$; the third standing wave, with nodes at $x = \ell/3$ and $x = 2\ell/3$.]

The first three standing waves, or normal modes, are shown in the figures. The dashed-line graphs represent the standing waves at various values of time. The points in the interval $(0, \ell)$ at which $\sin\dfrac{n\pi x}{\ell} = 0$, and at which the string is therefore at rest for all $t$, are called nodes.
Problem Set 3.7

1. The ends of a rod of length 100 cm are held at a temperature of $0^\circ$C for all $t > 0$. Find an expression for the temperature $u(x, t)$ if the initial temperature distribution in the rod is given by

(a) $u(x, 0) = \begin{cases} x & 0 \le x < 50 \\ 100 - x & 50 \le x \le 100 \end{cases}$.

(b) $u(x, 0) = \begin{cases} 0 & 0 \le x < 25 \\ 50 & 25 \le x < 75 \\ 0 & 75 \le x \le 100 \end{cases}$.
2. Consider a uniform rod of length $\ell$ with an initial temperature given by $\sin\dfrac{\pi x}{\ell}$, $0 \le x \le \ell$. Assume that both ends of the bar are insulated. Find a Fourier series expansion for the temperature $u(x, t)$. What is the steady-state temperature as $t \to \infty$?

3. Find the displacement $u(x, t)$ in an elastic string that is fixed at its ends, $x = 0$ and $x = \ell$, and is set in motion by plucking it at its centre. The initial displacement $f(x)$ is defined by
\[
f(x) = \begin{cases} Ax & 0 \le x \le \tfrac{1}{2}\ell \\[4pt] A(\ell - x) & \tfrac{1}{2}\ell < x \le \ell \end{cases},
\]
where $A$ is a constant.

4. Find the displacement $u(x, t)$ in an elastic string of length $\ell$, fixed at both ends, that is set in motion from its straight equilibrium position with the initial velocity $g(x)$ defined by
\[
g(x) = \begin{cases} x & 0 \le x \le \tfrac{1}{4}\ell \\[4pt] \tfrac{1}{4}\ell & \tfrac{1}{4}\ell < x < \tfrac{3}{4}\ell \\[4pt] \ell - x & \tfrac{3}{4}\ell \le x \le \ell \end{cases}.
\]
Appendix A
ANSWERS TO
ODD-NUMBERED PROBLEMS
AND TABLES
A.1 Answers for Chapter 1.
Problem Set 1.3 (p. 9)
1. $\dfrac{1}{s}\left(2e^{-s} - 1\right)$ \quad 3. $\dfrac{1}{s^2}\left(1 - e^{-s}\right)$ \quad 5. $\dfrac{e^{-\pi s} + 1}{s^2 + 1}$ \quad 7. $\dfrac{e^{7}}{s - 1}$

9. $\dfrac{1}{(s - 4)^2}$ \quad 11. $\dfrac{1}{(s + 1)^2 + 1}$ \quad 13. $\dfrac{s^2 - 1}{(s^2 + 1)^2}$ \quad 15. $\dfrac{48}{s^5}$

17. $\dfrac{4}{s^2} - \dfrac{10}{s}$ \quad 19. $\dfrac{2}{s^3} + \dfrac{6}{s^2} - \dfrac{3}{s}$ \quad 21. $\dfrac{6}{s^4} + \dfrac{6}{s^3} + \dfrac{3}{s^2} + \dfrac{1}{s}$

23. $\dfrac{1}{s} + \dfrac{1}{s - 4}$ \quad 25. $\dfrac{1}{s} + \dfrac{2}{s - 2} + \dfrac{1}{s - 4}$ \quad 27. $\dfrac{8}{s^3} - \dfrac{15}{s^2 + 9}$ \quad 29. $\dfrac{2s}{(s^2 - 1)^2}$

31. $\dfrac{s + 1}{s(s + 2)}$ \quad 33. $\dfrac{2}{s(s^2 + 4)}$ \quad 35. $\dfrac{1}{2}\left[\dfrac{s}{s^2 + 9} + \dfrac{s}{s^2 + 1}\right]$ \quad 37. $\dfrac{6}{(s^2 + 1)(s^2 + 9)}$ \quad 39. $\dfrac{\Gamma\!\left(\frac{5}{4}\right)}{s^{5/4}}$
Problem Set 1.4 (p. 13)

1. $3 - 2e^{4t}$ \quad 3. $2e^{4t} + e^{2t}$ \quad 5. $\dfrac{5}{8} - \dfrac{3}{4}t - \dfrac{1}{4}t^2 + \dfrac{2}{3}e^{t} - \dfrac{1}{24}e^{-2t}$
Problem Set 1.5 (p. 15)
1. $2(t - 2)e^{2t}$ \quad 3. $te^{t} - \dfrac{1}{2}t^2 e^{2t}$ \quad 5. $t + 5 - \dfrac{3}{2}t^2 e^{t} - 4te^{t} - 5e^{t}$

7. $\dfrac{1}{(s - 8)^2}$ \quad 9. $\dfrac{s + 2}{s^2 + 4s + 20}$ \quad 11. $\dfrac{2}{s^2 + 2s + 5}$ \quad 13. $\dfrac{2}{(s - 3)^3} + \dfrac{4}{(s - 3)^2} + \dfrac{4}{s - 3}$
Problem Set 1.6 (p. 24)
1. $\dfrac{1}{s^2}\,e^{-s}$ \quad 3. $\left(\dfrac{3}{s^2} + \dfrac{10}{s}\right)e^{-3s}$ \quad 5. $\left(\dfrac{1}{(s - 1)^2} + \dfrac{5}{s - 1}\right)e^{-5s}$ \quad 7. $\dfrac{1}{2}(t - 2)^2\,u_2(t)$

9. $-\sin t\;u_\pi(t)$ \quad 11. $\left[1 - e^{-(t - 1)}\right]u_1(t)$ \quad 13. $t - (t - 1)u_1(t)$ \quad 15. $\dfrac{2}{s}\left(1 - 2e^{-3s}\right)$, $\ f(t) = 2u_0 - 4u_3$

17. $\left(\dfrac{2}{s^3} + \dfrac{2}{s^2} + \dfrac{1}{s}\right)e^{-s}$, $\ f(t) = t^2 u_1$ \quad 19. $\dfrac{1}{s^2} - \dfrac{1}{s^2}\,e^{-2s} - \dfrac{2}{s}\,e^{-2s}$, $\ f(t) = t(u_0 - u_2)$
Problem Set 1.7 (p. 28)
1. $\dfrac{s^2 - 4}{(s^2 + 4)^2}$ \quad 3. $\dfrac{2(3s^2 + 1)}{(s^2 - 1)^3}$ \quad 5. $\dfrac{12(s - 2)}{(s^2 - 4s + 40)^2}$

7. $\dfrac{1}{2}t\sin t$ \quad 9. $\dfrac{1}{t}\left(e^{t} - e^{3t}\right)$ \quad 11. $1 + \dfrac{1}{t}\sin 4t$
Problem Set 1.8 (p. 30)
1. $\dfrac{s^2 + 2a^2}{s(s^2 + 4a^2)}$
Problem Set 1.9 (p. 35)
1. $e^t - 1$ \quad 3. $\dfrac{1}{3}\left(4e^{t} - e^{4t}\right)$ \quad 5. $\dfrac{1}{27}\left(2 + 3t - 2e^{-3t} + 30te^{-3t}\right)$ \quad 7. $\dfrac{1}{20}\,t^5 e^{2t}$

9. $\cos t - \dfrac{1}{2}\sin t - \dfrac{1}{2}t\cos t$ \quad 11. $\dfrac{1}{2}\left(1 + e^{t}\sin t - e^{t}\cos t\right)$
Problem Set 1.10 (p. 39)
1. $\dfrac{1}{4}\left[1 + 2\sin 2t - \cos 2t - u_1(t) + \cos 2(t - 1)\,u_1(t)\right]$

3. $\dfrac{1}{6}\left[1 - 3e^{2(t - 1)} + 2e^{3(t - 1)}\right]u_1(t) + e^{3t} - e^{2t}$

5. $\cos t + (1 + \cos t)\,u_\pi(t)$
Problem Set 1.11 (p. 44)
1. $\dfrac{1}{s}\tanh\dfrac{1}{2}as$ \quad 3. $\dfrac{1}{s^2 + 1}\cdot\dfrac{1}{1 - e^{-\pi s}}$

5. $\dfrac{E_0}{R}\left[\left(e^{-R(t - a)/L} - 1\right)u_a(t) - \left(e^{-R(t - 2a)/L} - 1\right)u_{2a}(t) + \left(e^{-R(t - 3a)/L} - 1\right)u_{3a}(t) - \cdots\right]$
Problem Set 1.12 (p. 48)
1. $\sin t\left(1 + u_{2\pi}(t)\right)$

3. $u_{\pi/2}(t) - u_{3\pi/2}(t) - \sin t\left[u_{\pi/2}(t) + u_\pi(t) + u_{3\pi/2}(t)\right]$

5. $\dfrac{1}{2}\sinh 2t + \dfrac{1}{2}\cosh 2t - \dfrac{1}{2} + \dfrac{1}{2}\sinh 2(t - 2)\,u_2(t)$
Problem Set 1.13 (p. 51)
1. $\dfrac{1}{s^2(s - 1)}$ \quad 3. $\dfrac{1}{s(s^2 + 1)}$ \quad 5. $\dfrac{12}{s^7}$

7. $\displaystyle\int_0^t f(t - \tau)\cos 2\tau\,d\tau$ \quad 9. $\dfrac{1}{4}\,t\sin 2t$

11. $\dfrac{1}{2}\,e^{-2t}\left(\sin t - t\cos t\right)$ \quad 13. $\dfrac{1}{2}\left(t\cos t + \sin t\right)$ \ $[\,< 0$ when $t = \pi\,]$
A.2 Answers for Chapter 2.
Problem Set 2.2 (p. 63)
7. Fundamental Set

11. $\Phi(t) = \begin{pmatrix} e^t & te^t \\ 3e^t & (3t + 1)e^t \end{pmatrix}$, $\qquad \Phi^{-1}(t) = \begin{pmatrix} (3t + 1)e^{-t} & -te^{-t} \\ -3e^{-t} & e^{-t} \end{pmatrix}$

13. $\Phi(t) = \begin{pmatrix} \frac{1}{2}e^{-t} + \frac{1}{2}e^{5t} & -\frac{1}{2}e^{-t} + \frac{1}{2}e^{5t} \\[4pt] -\frac{1}{2}e^{-t} + \frac{1}{2}e^{5t} & \frac{1}{2}e^{-t} + \frac{1}{2}e^{5t} \end{pmatrix}$

15. $\Phi(t) = \begin{pmatrix} \sin t - 3\cos t & 2\cos t \\ -5\cos t & \sin t + 3\cos t \end{pmatrix}$
Problem Set 2.3 (p. 70)
1. $\lambda_1 = 6$, $\lambda_2 = 1$, $\ \mathbf{x}^{(1)} = \begin{pmatrix} 2 \\ 7 \end{pmatrix}$, $\ \mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$

3. $\lambda_1 = \lambda_2 = 4$, $\ \mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$

5. $\lambda_1 = 0$, $\lambda_2 = 4$, $\lambda_3 = -4$, $\ \mathbf{x}^{(1)} = \begin{pmatrix} 9 \\ 45 \\ 25 \end{pmatrix}$, $\ \mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, $\ \mathbf{x}^{(3)} = \begin{pmatrix} 1 \\ 9 \\ 1 \end{pmatrix}$

7. $\lambda_1 = \lambda_2 = \lambda_3 = 2$, $\ \mathbf{x}^{(1)} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}$, $\ \mathbf{x}^{(2)} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$

9. $\lambda_1 = 3i$, $\lambda_2 = -3i$, $\ \mathbf{x}^{(1)} = \begin{pmatrix} 1 - 3i \\ 5 \end{pmatrix}$, $\ \mathbf{x}^{(2)} = \begin{pmatrix} 1 + 3i \\ 5 \end{pmatrix}$
Problem Set 2.5 (p. 75)
1. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{5t} + c_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{t}$

3. $\mathbf{x} = c_1\begin{pmatrix} 2 \\ 1 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 2 \\ 5 \end{pmatrix}e^{t}$

5. $\mathbf{x} = c_1\begin{pmatrix} 5 \\ 2 \end{pmatrix}e^{8t} + c_2\begin{pmatrix} 1 \\ 4 \end{pmatrix}e^{10t}$

7. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}e^{t} + c_2\begin{pmatrix} 2 \\ 3 \\ 1 \end{pmatrix}e^{2t} + c_3\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}e^{t}$

9. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}e^{t} + c_2\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}e^{2t} + c_3\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}e^{2t}$

11. $\mathbf{x} = c_1\begin{pmatrix} \cos t \\ 2\cos t + \sin t \end{pmatrix}e^{4t} + c_2\begin{pmatrix} \sin t \\ 2\sin t - \cos t \end{pmatrix}e^{4t}$

13. $\mathbf{x} = c_1\begin{pmatrix} \cos t \\ \cos t - \sin t \end{pmatrix}e^{4t} + c_2\begin{pmatrix} \sin t \\ \sin t + \cos t \end{pmatrix}e^{4t}$

15. $\mathbf{x} = c_1\begin{pmatrix} 5\cos 3t \\ 4\cos 3t + 3\sin 3t \end{pmatrix} + c_2\begin{pmatrix} 5\sin 3t \\ 4\sin 3t - 3\cos 3t \end{pmatrix}$

17. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c_2\begin{pmatrix} \cos t \\ \cos t \\ -\sin t \end{pmatrix} + c_3\begin{pmatrix} \sin t \\ \sin t \\ \cos t \end{pmatrix}$

19. $\mathbf{x} = c_1\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}e^{6t} + c_2\begin{pmatrix} \cos 2t \\ 0 \\ 2\sin 2t \end{pmatrix}e^{4t} + c_3\begin{pmatrix} \sin 2t \\ 0 \\ -2\cos 2t \end{pmatrix}e^{4t}$

21. $\mathbf{x} = 3\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{t/2} + 2\begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-t/2}$
Problem Set 2.6 (p. 83)
1. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 3 \end{pmatrix} + c_2\left[\begin{pmatrix} 1 \\ 3 \end{pmatrix}t + \begin{pmatrix} \tfrac{1}{4} \\[2pt] \tfrac{1}{4} \end{pmatrix}\right]$

3. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{2t} + c_2\left[\begin{pmatrix} 1 \\ 1 \end{pmatrix}t + \begin{pmatrix} \tfrac{1}{3} \\ 0 \end{pmatrix}\right]e^{2t}$

5. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}e^{t} + c_2\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}e^{2t} + c_3\left[\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}t + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\right]e^{2t}$

7. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}e^{4t} + c_2\left[\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}t + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\right]e^{4t} + c_3\left[\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\frac{t^2}{2} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}t + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\right]e^{4t}$
Problem Set 2.7 (p. 87)
1. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2\begin{pmatrix} 3 \\ 2 \end{pmatrix}e^{t} + \begin{pmatrix} 5t - 3 \\ 5t \end{pmatrix}$

3. $\mathbf{x} = c_1\begin{pmatrix} \cos t \\ \sin t \end{pmatrix}e^{t} + c_2\begin{pmatrix} \sin t \\ -\cos t \end{pmatrix}e^{t} + \begin{pmatrix} \cos t \\ \sin t \end{pmatrix}te^{t}$

5. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\left[\begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{2}\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right] - 2\begin{pmatrix} 1 \\ 2 \end{pmatrix}\ln t + \begin{pmatrix} 2 \\ 5 \end{pmatrix}t^{-1} - \begin{pmatrix} \tfrac{1}{2} \\ 0 \end{pmatrix}t^{-2}$

7. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t} + \frac{1}{4}\begin{pmatrix} 1 \\ 8 \end{pmatrix}e^{t}$

9. $\mathbf{x} = c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{t/2} + c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t} + \begin{pmatrix} \tfrac{5}{2} \\[2pt] \tfrac{3}{2} \end{pmatrix}t - \begin{pmatrix} \tfrac{17}{4} \\[2pt] \tfrac{15}{4} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{6} \\[2pt] \tfrac{1}{2} \end{pmatrix}e^{t}$
Problem Set 2.8 (p. 90)
1. $x = \dfrac{2}{3}\sin 3t$, $\qquad y = \dfrac{1}{3}\sin 3t + \cos 3t$

3. $x = \dfrac{1}{2}t - \dfrac{3\sqrt{2}}{4}\sin\sqrt{2}\,t$, $\qquad y = \dfrac{1}{2}t + \dfrac{3\sqrt{2}}{4}\sin\sqrt{2}\,t$
A.3 Answers for Chapter 3.
Problem Set 3.2 (p. 97)
1. (i) Not orthogonal
(ii) $\alpha = \dfrac{1}{2}$, $\ \beta = 1$, $\ \gamma = \dfrac{1}{6}$
(iii) $\varphi_1(x) = 1$, $\ \varphi_2(x) = 2\sqrt{3}\left(x - \dfrac{1}{2}\right)$, $\ \varphi_3(x) = 6\sqrt{5}\left(x^2 - x + \dfrac{1}{6}\right)$
(iv) $F(x) = \dfrac{5}{6}\varphi_1(x) + \dfrac{1}{6\sqrt{5}}\varphi_3(x)$

3. Norm of each function is $\sqrt{\dfrac{\pi}{2}}$

Problem Set 3.3

1. $f(x) = \dfrac{2}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{n}\left[1 - (-1)^n\right]\sin nx$

3. $f(x) = \dfrac{3}{4} + \displaystyle\sum_{n=1}^{\infty}\left[\dfrac{(-1)^n - 1}{n^2\pi^2}\cos n\pi x - \dfrac{1}{n\pi}\sin n\pi x\right]$

5. $f(x) = \dfrac{1}{6}\pi^2 + 2\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^n}{n^2}\cos nx + \dfrac{2}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^n - 1}{n^3}\sin nx + \displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^{n+1}}{n}\sin nx$

7. $f(x) = \pi + 2\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^{n+1}}{n}\sin nx$

9. $f(x) = \dfrac{1}{\pi} - \dfrac{1}{\pi}\displaystyle\sum_{n=2}^{\infty}\dfrac{1 - (-1)^{n+1}}{n^2 - 1}\cos nx + \dfrac{1}{2}\sin x$

11. $f(x) = \dfrac{3}{8} + \dfrac{2}{\pi^2}\displaystyle\sum_{n=1}^{\infty}\left[\dfrac{1}{n^2}\left(1 - \cos\dfrac{n\pi}{2}\right)\cos\dfrac{n\pi x}{2} + \left(\dfrac{1}{n^2}\sin\dfrac{n\pi}{2} - \dfrac{(-1)^n\pi}{2n}\right)\sin\dfrac{n\pi x}{2}\right]$

13. $f(x) = \dfrac{2\sinh\pi}{\pi}\left[\dfrac{1}{2} + \displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^n}{n^2 + 1}\left(\cos nx - n\sin nx\right)\right]$
Problem Set 3.5 (p. 114)
1. ODD $\quad f(x) = \dfrac{2}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{1 - (-1)^n}{n}\sin nx$

3. EVEN $\quad f(x) = \dfrac{\pi}{2} + \dfrac{2}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^n - 1}{n^2}\cos nx$

5. EVEN $\quad f(x) = \dfrac{1}{3}\pi^2 + 4\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^n}{n^2}\cos nx$

7. EVEN $\quad f(x) = \dfrac{2}{3}\pi^2 + 4\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^{n+1}}{n^2}\cos nx$

9. ODD $\quad f(x) = \dfrac{2}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{1 - (-1)^n(\pi + 1)}{n}\sin nx$

11. EVEN $\quad f(x) = \dfrac{2}{\pi} + \dfrac{2}{\pi}\displaystyle\sum_{n=2}^{\infty}\dfrac{1 + (-1)^n}{1 - n^2}\cos nx$

13. EVEN EXTENSION $\quad f(x) = \dfrac{1}{2} + \dfrac{2}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{n}\sin\dfrac{n\pi}{2}\cos nx$
ODD EXTENSION $\quad f(x) = \dfrac{2}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{n}\left(1 - \cos\dfrac{n\pi}{2}\right)\sin nx$

15. EVEN EXTENSION $\quad f(x) = \dfrac{2}{\pi} + \dfrac{4}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^n}{1 - 4n^2}\cos 2nx$
ODD EXTENSION $\quad f(x) = \dfrac{8}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{n}{4n^2 - 1}\sin 2nx$

17. EVEN EXTENSION $\quad f(x) = \dfrac{\pi}{4} + \dfrac{2}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{2\cos\frac{n\pi}{2} - 1 - (-1)^n}{n^2}\cos nx$
ODD EXTENSION $\quad f(x) = \dfrac{4}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{n^2}\sin\dfrac{n\pi}{2}\sin nx$

19. EVEN EXTENSION $\quad f(x) = \dfrac{3}{4} + \dfrac{4}{\pi^2}\displaystyle\sum_{n=1}^{\infty}\dfrac{\cos\frac{n\pi}{2} - 1}{n^2}\cos\dfrac{n\pi x}{2}$
ODD EXTENSION $\quad f(x) = \displaystyle\sum_{n=1}^{\infty}\left[\dfrac{4}{n^2\pi^2}\sin\dfrac{n\pi}{2} - \dfrac{2}{n\pi}(-1)^n\right]\sin\dfrac{n\pi x}{2}$

21. EVEN EXTENSION $\quad f(x) = \dfrac{5\pi^2}{6} + 2\displaystyle\sum_{n=1}^{\infty}\dfrac{3(-1)^n - 1}{n^2}\cos nx$
ODD EXTENSION $\quad f(x) = 4\displaystyle\sum_{n=1}^{\infty}\left[\dfrac{\pi(-1)^{n+1}}{n} + \dfrac{(-1)^n - 1}{\pi n^3}\right]\sin nx$
Problem Set 3.6 (p. 117)
1. $f(x) = -i\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n+1}}{n}\,e^{inx} \quad (n \ne 0)$

3. $f(x) = \dfrac{2\sinh\frac{\pi}{2}}{\pi}\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^n(1 - 2in)}{1 + 4n^2}\,e^{2inx}$
Problem Set 3.7 (p. 135)
1. (a) $u(x, t) = \dfrac{400}{\pi^2}\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{n^2}\sin\dfrac{n\pi}{2}\,\sin\dfrac{n\pi x}{100}\,e^{-n^2\pi^2 c^2 t/10^4}$

(b) $u(x, t) = \dfrac{100}{\pi}\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{n}\left(\cos\dfrac{n\pi}{4} - \cos\dfrac{3n\pi}{4}\right)\sin\dfrac{n\pi x}{100}\,e^{-n^2\pi^2 c^2 t/10^4}$

3. $u(x, t) = \dfrac{4A\ell}{\pi^2}\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{n^2}\sin\dfrac{n\pi}{2}\,\sin\dfrac{n\pi x}{\ell}\cos\dfrac{n\pi ct}{\ell}$