MSc Course
1999
H. G. ter Morsche
G. Meinsma
Contents

1  Signals, energy and power . . . . . . . . . . . . . . . . . . . . .   1
D  Exam examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
E  Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
1
Signals, energy and power
If a signal f(t) is defined only at discrete time instances $t_n$ ($t_n \in \mathbb{R}$),
then f(t) is said to be a discrete-time signal. A typical example of a discrete-time signal is a signal obtained through sampling of a continuous-time signal. The sampled signal of a continuous-time signal f(t) is the discrete-time signal $f(np)$, $n \in \mathbb{Z}$, defined at integer multiples of the
sampling time p > 0. Figure 1.1(a) shows the plot of a damped sinusoid (continuous-time) and
Figure 1.1(b) next to it shows the corresponding sampled signal (discrete-time) for a certain
sampling time. In plots, discrete-time signals are represented by a series of stems on the real
axis, such as in Figure 1.1(b). Often the values of the time instances tn are irrelevant, and in
such cases it is customary to denote the discrete-time signal by f [n], where f [n] := f (tn ), so
with square brackets around the time index n.
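To make the sampling operation concrete, here is a small sketch (in Python rather than the Matlab used in the problems of Section 1.7; the damping and frequency constants are invented for the illustration) that forms the discrete-time signal f[n] = f(np) from a damped sinusoid:

```python
import math

def f(t):
    # continuous-time damped sinusoid, as in Figure 1.1 (constants are arbitrary)
    return math.exp(-t / 2) * math.cos(5 * t)

p = 0.25  # sampling time p > 0 (arbitrary choice)

# discrete-time signal f[n] := f(np) for n = 0, 1, ..., 19
samples = [f(n * p) for n in range(20)]
print(samples[0])   # f(0) = 1.0
```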
Moreover, $\overline{f(t)} = f(t)$ is a different way of saying that f(t) is real. Also, $|e^{f(t)}| = e^{\operatorname{Re} f(t)}$, which is a consequence of the fact that $|e^{j\phi}| = 1$ for every $\phi \in \mathbb{R}$.
1.2.1. Example. Let $u, v \in \mathbb{R}$ and $a = u + jv$. Then $f(t) = e^{ut} e^{jvt} = e^{at}$ is a complex-valued signal.
To effectively work with complex signals we shall need to extend the analysis techniques of
real functions to complex functions. This is relatively straightforward. To begin with, consider
the notion of limit.
1.2.2. Definition. Let $f(t) = f_1(t) + j f_2(t)$ be a complex function and let $L = L_1 + j L_2 \in \mathbb{C}$. Then the complex-valued limit
\[ \lim_{t \to a} f(t) = L \]
is defined to mean that
\[ \lim_{t \to a} f_1(t) = L_1 \quad\text{and}\quad \lim_{t \to a} f_2(t) = L_2. \]
As a consequence the rules of calculus for limits that we know for real-valued functions may
also be applied to complex-valued functions.
1.2.3. Example. Let $f(t) = \dfrac{e^{jt}}{j + t}$. Then
\[ \lim_{t \to 0} f(t) = \frac{\lim_{t \to 0} e^{jt}}{\lim_{t \to 0} (j + t)} = \frac{1}{j} = -j. \]
For real-valued functions f(t) it is immediate that $\lim_{t\to a} f(t) = L$ if and only if $\lim_{t\to a} |f(t) - L| = 0$. For complex-valued functions this holds as well, with $|f(t) - L|$ now denoting the modulus of $f(t) - L$. In full:

1.2.4. Theorem. Let $f(t) = f_1(t) + j f_2(t)$ be a complex-valued function and $L = L_1 + j L_2 \in \mathbb{C}$. Then
\[ \lim_{t\to a} f(t) = L \quad\text{if and only if}\quad \lim_{t\to a} |f(t) - L| = 0. \]
Proof. Suppose first that $\lim_{t\to a} |f(t) - L| = 0$. We make use of the following inequalities, which hold for any complex $z = x + jy$:
\[ |\operatorname{Re} z| = |x| = \sqrt{x^2} \le \sqrt{x^2 + y^2} = |z|, \qquad |\operatorname{Im} z| = |y| = \sqrt{y^2} \le \sqrt{x^2 + y^2} = |z|. \]
Therefore $|f_1(t) - L_1| = |\operatorname{Re}(f(t) - L)| \le |f(t) - L|$, and since $|f(t) - L| \to 0$ as $t \to a$, it follows that $|f_1(t) - L_1| \to 0$ as $t \to a$ and hence that $\lim_{t\to a} f_1(t) = L_1$. In the same way it follows that $\lim_{t\to a} f_2(t) = L_2$.
Now suppose that $f_1(t) \to L_1$ and $f_2(t) \to L_2$ as $t \to a$. Then as $t \to a$ we get that
\[ |f(t) - L| = \sqrt{(f_1(t) - L_1)^2 + (f_2(t) - L_2)^2} \to \sqrt{0^2 + 0^2} = 0. \tag{1.1} \]
1.2.5. Example. Let $a \in \mathbb{C}$ with $\operatorname{Re} a > 0$. Then $\lim_{t\to\infty} e^{-at} = 0$. This is a consequence of the fact that $|e^{-at} - 0| = e^{\operatorname{Re}(-at)} = e^{-\operatorname{Re}(a)\, t} \to 0$ as $t \to \infty$.
Continuity and differentiability are notions that are defined by means of limits. Concretely, a function is said to be continuous at $t = a$ if $\lim_{t\to a} f(t) = f(a)$, and f is said to be differentiable at $t = a$ if $\lim_{t\to a} (f(t) - f(a))/(t - a)$ exists. It may be verified that continuity and differentiability of a complex function $f(t) = f_1(t) + j f_2(t)$ are equivalent, respectively, to continuity and differentiability of its real part $f_1(t)$ and imaginary part $f_2(t)$. Moreover, the derivative $f'(t)$ at $t = a$ then equals $f'(a) = f_1'(a) + j f_2'(a)$. It is important to note that for complex functions f(t) the time t is still real-valued, and in particular $f'(t)$ denotes the derivative with respect to a real-valued parameter (namely t).
The rules of calculus for differentiation of complex functions are the same as those for real functions. Complex numbers are in this respect to be regarded as constants. It may be shown for example that for any $a \in \mathbb{C}$ the derivative of $f(t) = e^{at}$ is indeed equal to $f'(t) = a e^{at}$.
Also integration as we know it for real-valued functions is easily extended to complex-valued functions. The integral of a complex function $f(t) = f_1(t) + j f_2(t)$ on an interval (a, b) is defined as
\[ \int_a^b f(t)\, dt = \int_a^b f_1(t)\, dt + j \int_a^b f_2(t)\, dt. \tag{1.2} \]
In effect this says that for complex-valued functions the integral exists if and only if both its real and imaginary part can be integrated. In the above, $a = -\infty$ and $b = \infty$ are allowed.
From Equation (1.2) it follows that
\[ \overline{\int_a^b f(t)\, dt} = \int_a^b \overline{f(t)}\, dt. \]
Like in the real case it is often possible to obtain an explicit function description of the primitive of f (t), also called the antiderivative of f (t). Also the rules of partial integration and
substitution remain valid for complex-valued functions. This is illustrated in the following
three examples.
1.2.6. Example. Let n be an integer and let $T > 0$ and $\omega_0 = 2\pi/T$. Then
\[ \frac{1}{T} \int_{-T/2}^{T/2} e^{jn\omega_0 t}\, dt = \begin{cases} 0 & \text{if } n \ne 0, \\ 1 & \text{if } n = 0. \end{cases} \tag{1.3} \]
This may be seen as follows. For $n = 0$ we have that $e^{jn\omega_0 t} = 1$, which immediately establishes the case $n = 0$. If $n \ne 0$ then
\[ \frac{1}{T} \int_{-T/2}^{T/2} e^{jn\omega_0 t}\, dt = \left[ \frac{e^{jn\omega_0 t}}{jn\omega_0 T} \right]_{-T/2}^{T/2} = \frac{1}{jn\omega_0 T} (e^{jn\pi} - e^{-jn\pi}) = \frac{1}{jn\omega_0 T} ((-1)^n - (-1)^n) = 0. \tag{1.4} \]
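Identity (1.3) is easy to confirm numerically. The sketch below is only an illustration (the step count and the value T = 2 are arbitrary); it approximates the integral with a midpoint Riemann sum:

```python
import cmath

def mean_harmonic(n, T=2.0, steps=20000):
    # midpoint-rule approximation of (1/T) * integral of e^{j n w0 t} over [-T/2, T/2]
    w0 = 2 * cmath.pi / T
    h = T / steps
    total = 0j
    for k in range(steps):
        t = -T / 2 + (k + 0.5) * h
        total += cmath.exp(1j * n * w0 * t) * h
    return total / T

print(abs(mean_harmonic(0) - 1))  # ~0: the case n = 0
print(abs(mean_harmonic(3)))      # ~0: any n != 0 averages out to zero
```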
1.2.7. Example. Let $a \in \mathbb{C}$ with $\operatorname{Re} a > 0$. Then $\int_0^\infty e^{-at}\, dt = \frac{1}{a}$. This is because
\[ \int_0^\infty e^{-at}\, dt = \lim_{M\to\infty} \int_0^M e^{-at}\, dt = \lim_{M\to\infty} \left[ \frac{e^{-at}}{-a} \right]_0^M = \lim_{M\to\infty} \left( \frac{-e^{-aM}}{a} + \frac{1}{a} \right) = \frac{1}{a}. \]
1.2.8. Example. Suppose $T > 0$, and let $\omega_0 = 2\pi/T$ and $n \in \mathbb{Z}$, $n \ne 0$. We shall establish that
\[ \int_0^T t e^{-jn\omega_0 t}\, dt = -\frac{T^2}{2\pi j n}. \tag{1.5} \]
Partial integration yields
\[ \int_0^T t e^{-jn\omega_0 t}\, dt = \frac{-1}{jn\omega_0} \left[ t e^{-jn\omega_0 t} \right]_0^T + \frac{1}{jn\omega_0} \int_0^T e^{-jn\omega_0 t}\, dt = \frac{-T e^{-jn\omega_0 T}}{jn\omega_0} - \frac{1}{(jn\omega_0)^2} \left( e^{-jn\omega_0 T} - 1 \right). \]
Since $e^{-jn\omega_0 T} = e^{-2\pi j n} = 1$, the second term vanishes and the integral equals $-T/(jn\omega_0) = -T^2/(2\pi j n)$.
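As a sanity check of (1.5), the following sketch (illustrative; the parameter values are arbitrary) compares a midpoint-rule approximation of the integral with the closed form:

```python
import cmath

def integral_t_exp(n, T, steps=100000):
    # midpoint-rule approximation of the integral of t e^{-j n w0 t} over [0, T]
    w0 = 2 * cmath.pi / T
    h = T / steps
    total = 0j
    for k in range(steps):
        t = (k + 0.5) * h
        total += t * cmath.exp(-1j * n * w0 * t) * h
    return total

n, T = 3, 2.0
closed_form = -T ** 2 / (2 * cmath.pi * 1j * n)
print(abs(integral_t_exp(n, T) - closed_form))  # small
```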
The following inequality is often used when only existence of integrals or bounds on integrals are needed and not so much their precise value:
\[ \left| \int_a^b f(t)\, dt \right| \le \int_a^b |f(t)|\, dt. \]
\[ f(c_i-) := \lim_{h \downarrow 0} f(c_i - h), \qquad f(c_i+) := \lim_{h \downarrow 0} f(c_i + h). \]
\[ f(t) = A \cos(\omega t + \phi), \qquad t \in \mathbb{R}. \]
Here A (A > 0) is the amplitude, $\omega$ the angular frequency and $\phi$ the initial phase of the signal f(t). If the time t expresses seconds, then the angular frequency $\omega$ is in units of radians per second (rad/s) and $\omega/(2\pi)$ is called the frequency and is in units of hertz (Hz). One hertz is one cycle per second. Similarly for the complex case we define
1.3.2. Definition (Harmonic signals). A signal f(t) that can be written in the form
\[ f(t) = c\, e^{j\omega t} \]
with $c \in \mathbb{C}$ and $\omega \in \mathbb{R}$, is called a (complex) harmonic signal with amplitude $|c|$, angular frequency $\omega$ and initial phase $\phi = \arg(c)$.
The connection between a real and a complex harmonic signal lies in Euler's formula $e^{j\omega t} = \cos(\omega t) + j \sin(\omega t)$, or, equivalently,
\[ \cos(\omega t) = \operatorname{Re} e^{j\omega t} = \frac{e^{j\omega t} + e^{-j\omega t}}{2}, \qquad \sin(\omega t) = \operatorname{Im} e^{j\omega t} = \frac{e^{j\omega t} - e^{-j\omega t}}{2j}. \tag{1.6} \]
Any linear combination of two real or complex harmonic signals with the same frequency is again a harmonic signal. For example, for any $a, b \in \mathbb{R}$,
\[ a \cos(\omega t) + b \sin(\omega t) = \operatorname{Re}\big( (a - jb) e^{j\omega t} \big) = \operatorname{Re}\big( |a - jb|\, e^{j \arg(a - jb)} e^{j\omega t} \big) = \operatorname{Re}\big( A e^{j(\omega t + \phi)} \big) = A \cos(\omega t + \phi), \]
with amplitude $A = |a - jb|$ and phase $\phi = \arg(a - jb)$.
Note that complex harmonic signals are allowed to have a negative frequency.
Sinusoids $f(t) = A \cos(\omega t + \phi)$ and complex harmonic signals $c e^{j\omega t}$ are important examples of periodic signals with period $T = 2\pi/\omega$. Periodic signals f(t) with period T, i.e., signals such that $f(t + T) = f(t)$ for all $t \in \mathbb{R}$, will be referred to as T-periodic signals.
In Chapter 2 we shall see that every piecewise smooth T-periodic signal can be written as a sum (possibly an infinite sum) of harmonic signals whose angular frequencies are multiples $n \omega_0 = n \frac{2\pi}{T}$, $n \in \mathbb{Z}$, of $\omega_0 = 2\pi/T$.
The following simple theorem will be of use later.

1.3.3. Theorem (Interval-shift). Suppose that f(t) is integrable on $[-T/2, T/2]$ and that f(t) is periodic with period $T > 0$. Then for every $a \in \mathbb{R}$, there holds that
\[ \int_a^{a+T} f(t)\, dt = \int_{-T/2}^{T/2} f(t)\, dt. \]
Proof. We write
\[ \int_a^{a+T} f(t)\, dt = \int_a^{-T/2} f(t)\, dt + \int_{-T/2}^{T/2} f(t)\, dt + \int_{T/2}^{a+T} f(t)\, dt. \]
The result now follows because the first and third integral on the right-hand side cancel each other, which follows by the substitution $t = \tau + T$:
\[ \int_{T/2}^{a+T} f(t)\, dt = \int_{-T/2}^{a} f(\tau + T)\, d\tau = \int_{-T/2}^{a} f(\tau)\, d\tau = -\int_a^{-T/2} f(\tau)\, d\tau. \tag{1.7} \]
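The interval-shift property is easy to check numerically; a small illustration (the test signal and the shift a are made up):

```python
import math

def trapezoid(f, a, b, steps=4096):
    # simple trapezoid rule; for smooth periodic integrands over a full period
    # this is extremely accurate
    h = (b - a) / steps
    s = 0.5 * (f(a) + f(b))
    for k in range(1, steps):
        s += f(a + k * h)
    return s * h

T = 2.0
f = lambda t: math.cos(2 * math.pi * t / T) ** 2 + math.sin(2 * math.pi * t / T)

a = 0.731  # arbitrary shift
lhs = trapezoid(f, a, a + T)
rhs = trapezoid(f, -T / 2, T / 2)
print(abs(lhs - rhs))  # ~0, as the interval-shift theorem predicts
```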
a) For a given $a > 0$ the rectangular pulse $\mathrm{rect}_a(t)$ is defined as
\[ \mathrm{rect}_a(t) = \begin{cases} 1 & \text{if } |t| < \frac{a}{2}, \\ 0 & \text{if } |t| > \frac{a}{2}, \\ \frac{1}{2} & \text{if } |t| = \frac{a}{2}. \end{cases} \]
b) For a given $a > 0$ the triangular pulse $\mathrm{trian}_a(t)$ is defined as
\[ \mathrm{trian}_a(t) = \begin{cases} 1 - |t|/a & \text{if } |t| < a, \\ 0 & \text{if } |t| \ge a. \end{cases} \]
c) The unit step $\mathbb{1}(t)$ is defined as
\[ \mathbb{1}(t) = \begin{cases} 1 & \text{if } t > 0, \\ 0 & \text{if } t < 0, \\ \frac{1}{2} & \text{if } t = 0. \end{cases} \]
Remark: Note that at the jump discontinuities the function value is here taken to be the mid-value $f(t) = (f(t-) + f(t+))/2$. This choice is somewhat arbitrary but circumvents certain technicalities when Fourier series and Fourier integrals are considered.
If $E_f < \infty$ (finite energy content) then the signal is said to be an energy signal. The rectangular and triangular pulses are examples of energy signals. Sinusoids and harmonic signals are not. For example, the harmonic signal $f(t) = c\, e^{j\omega_0 t}$ satisfies $|f(t)| = |c|$ and, hence, $E_f = \infty$.
For a signal f(t) to have a finite energy content it is necessary that the tail integrals $\int_t^{\infty} |f(\tau)|^2\, d\tau$ tend to zero as $t \to \infty$. Consequently, signals like sinusoids, periodic signals and the unit step, and many others, are not energy signals. In such cases it is customary to look at the average energy per unit time, i.e., to look at its (averaged) power.
1.5.2. Definition (Power). Let f(t) be a signal. The power $P_f$ of f(t) is defined as
\[ P_f = \lim_{M\to\infty} \frac{1}{M} \int_{-M/2}^{M/2} |f(t)|^2\, dt. \]
The power of a bounded T-periodic signal f(t) is finite, and you may wish to verify that its power equals the average power over one period,
\[ P_f = \frac{1}{T} \int_{-T/2}^{T/2} |f(t)|^2\, dt. \]
1.5.3. Example. The power of the sinusoid $f(t) = A \cos(\omega_0 t + \phi)$ with period $T = 2\pi/\omega_0$ is
\[ P_f = \frac{\omega_0}{2\pi} \int_0^{2\pi/\omega_0} A^2 \cos^2(\omega_0 t + \phi)\, dt = \{x = \omega_0 t\} = \frac{A^2}{2\pi} \int_0^{2\pi} \cos^2(x + \phi)\, dx = A^2/2. \]
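The value $A^2/2$ can be confirmed numerically; the sketch below (amplitude, frequency and phase chosen arbitrarily for the illustration) averages $|f(t)|^2$ over one period:

```python
import math

def power_over_period(f, T, steps=20000):
    # midpoint-rule approximation of (1/T) * integral of |f(t)|^2 over [-T/2, T/2]
    h = T / steps
    total = 0.0
    for k in range(steps):
        t = -T / 2 + (k + 0.5) * h
        total += abs(f(t)) ** 2 * h
    return total / T

A, w0, phi = 3.0, 2 * math.pi, 0.4
f = lambda t: A * math.cos(w0 * t + phi)
print(power_over_period(f, 2 * math.pi / w0))  # approximately A**2 / 2 = 4.5
```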
Based on Definition 1.2.2 about limits, it is not surprising that we define convergence of a complex sequence $a_n = u_n + j v_n$, $(n \in \mathbb{N})$, to $a = u + jv$ by
\[ \lim_{n\to\infty} a_n = a \quad\text{if and only if}\quad \lim_{n\to\infty} u_n = u \text{ and } \lim_{n\to\infty} v_n = v, \quad\text{if and only if}\quad \lim_{n\to\infty} |a_n - a| = 0. \tag{1.8} \]
The rules of calculus for limits of complex-valued sequences are the same as for real-valued sequences.
1.6.1. Example. Suppose $z \in \mathbb{C}$ and $|z| < 1$. Then for every $p \in \mathbb{R}$ we have that
\[ \lim_{n\to\infty} n^p z^n = 0. \]
The proof is straightforward, given that we know this result already for the real-valued case. Indeed, $|n^p z^n - 0| = n^p |z|^n \to 0$ as $n \to \infty$, so we may conclude from the property in (1.8) that $\lim_{n\to\infty} n^p z^n = 0$.
Consider next the series
\[ \sum_{k=0}^{\infty} a_k. \]
A series like this is said to converge if the sequence of partial sums $s_n = \sum_{k=0}^n a_k$ converges. It can be shown that for complex $a_k = u_k + j v_k$ the series $\sum_{k=0}^\infty a_k$ converges with limit $u + jv$ if and only if the two real series $\sum_{k=0}^\infty u_k$ and $\sum_{k=0}^\infty v_k$ converge with limits u and v respectively. So
\[ \sum_{k=0}^{\infty} a_k = \sum_{k=0}^{\infty} u_k + j \sum_{k=0}^{\infty} v_k. \]
Like in the real-valued case we have:
\[ \text{If } \sum_{k=0}^{\infty} |a_k| < \infty \text{ then } \sum_{k=0}^{\infty} a_k \text{ converges.} \tag{1.9} \]
1.6.2. Definition (Absolute convergence and absolutely summable sequences). If $\sum_{k=0}^\infty |a_k|$ is finite, then the sequence $a_k$, $k = 0, 1, 2, \ldots$ is said to be absolutely summable and the corresponding series $\sum_{k=0}^\infty a_k$ is said to be absolutely convergent.
The property in (1.9) thus states that absolutely convergent series converge. The advantage is that $\sum_{k=0}^\infty |a_k|$ is a real-valued series with nonnegative terms only, for which various convergence criteria are known. An important such criterion is the comparison test, which states that $\sum_{k=0}^\infty a_k$ is absolutely convergent if we can find a dominating sequence $b_k \ge |a_k|$ such that $\sum_{k=0}^\infty b_k$ converges.
1.6.3. Example (The geometric series). A geometric series is a series of the form
\[ \sum_{k=0}^{\infty} z^k, \]
in which z is some complex number, called the ratio of successive terms. A necessary condition for this series to converge is that $z^k \to 0$ as $k \to \infty$. This requires that $|z| < 1$. The condition $|z| < 1$ is also sufficient for convergence, which can be seen as follows. Suppose $|z| < 1$. For the partial sums we have that
\[ \sum_{k=0}^{N} z^k = \frac{1 - z^{N+1}}{1 - z}, \tag{1.10} \]
which is easily established by multiplying both sides of the equation with $1 - z$. As $|z| < 1$, the contribution of $z^{N+1}$ to the right-hand side of (1.10) goes to zero as $N \to \infty$. Hence as $N \to \infty$, the partial sums (1.10) converge with limit $1/(1 - z)$. Note that $|z| < 1$ also guarantees that the geometric series converges absolutely.
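Numerically the convergence of the partial sums (1.10) to 1/(1 - z) is easy to watch; a short illustration with an arbitrarily chosen ratio:

```python
# partial sums of the geometric series for a complex ratio with |z| < 1
z = 0.6 + 0.5j          # arbitrary ratio, |z| = sqrt(0.61) < 1
limit = 1 / (1 - z)     # the limit predicted by (1.10)

s, term = 0j, 1 + 0j
for k in range(200):
    s += term           # after this line s equals z^0 + z^1 + ... + z^k
    term *= z
print(abs(s - limit))   # tiny
```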
1.6.4. Example. Consider the series
\[ \sum_{k=1}^{\infty} \frac{e^{jk\theta}}{k + k^2}, \qquad \theta \in \mathbb{R}. \]
Since $|e^{jk\theta}/(k + k^2)| \le 1/k^2$ and $\sum_{k=1}^\infty 1/k^2$ converges, the comparison test shows that the series converges absolutely for every $\theta \in \mathbb{R}$.

1.6.5. Example. Consider the series
\[ \sum_{k=1}^{\infty} \frac{z^k}{k}, \qquad (z \in \mathbb{C}). \tag{1.11} \]
For $k \ge 1$ we have that $|z^k/k| = |z|^k / k \le |z|^k$. The dominating series $\sum_{k=1}^\infty |z|^k$ is a geometric series which converges absolutely if $|z| < 1$. The comparison test then shows that (1.11) converges absolutely for every $|z| < 1$. If on the other hand $|z| > 1$ then the sequence $z^k/k$ does not converge to zero as $k \to \infty$, so in that case the series (1.11) diverges. Convergence for the case that $|z| = 1$ is somewhat more complicated. If $|z| = 1$ then the series certainly does not converge absolutely because $\sum_{k=1}^\infty 1/k = \infty$, but it still leaves open the possibility that (1.11) converges, albeit not absolutely. For example if $z = -1$ the series reduces to one with general term $(-1)^k/k$ and this can be seen to converge. In fact it may be shown that the series converges for every $|z| = 1$ other than $z = 1$. For $z = 1$ the series diverges.
1.6.6. Example. We end this chapter with the derivation of the finite sum
\[ s_N(\theta) = \sum_{k=-N}^{N} e^{jk\theta}. \]
With $z = e^{j\theta}$ we have
\[ \sum_{k=-N}^{N} z^k = z^{-N} \sum_{k=0}^{2N} z^k = \{\text{finite geometric series}\} = \begin{cases} 2N + 1 & \text{if } z = 1, \\ z^{-N}\, \dfrac{1 - z^{2N+1}}{1 - z} = \dfrac{z^{-N} - z^{N+1}}{1 - z} & \text{if } z \ne 1. \end{cases} \]
For $z = e^{j\theta} \ne 1$ this gives
\[ \frac{e^{-jN\theta} - e^{j(N+1)\theta}}{1 - e^{j\theta}} = \frac{e^{-j(N+1/2)\theta} - e^{j(N+1/2)\theta}}{e^{-j\theta/2} - e^{j\theta/2}} = \frac{\sin((N + 1/2)\theta)}{\sin(\theta/2)}. \]
In conclusion,
\[ s_N(\theta) = \begin{cases} 2N + 1 & \text{if } \theta \text{ is a multiple of } 2\pi, \\ \dfrac{\sin((N + 1/2)\theta)}{\sin(\theta/2)} & \text{if } \theta \text{ is not a multiple of } 2\pi. \end{cases} \tag{1.12} \]
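Formula (1.12) can be verified directly against the defining sum; a small numerical illustration (N and θ are arbitrary):

```python
import cmath, math

def s_N(theta, N):
    # direct evaluation of the sum of e^{j k theta} for k = -N, ..., N
    return sum(cmath.exp(1j * k * theta) for k in range(-N, N + 1))

N, theta = 7, 1.3   # theta not a multiple of 2*pi
closed = math.sin((N + 0.5) * theta) / math.sin(theta / 2)
print(abs(s_N(theta, N) - closed))  # ~0
print(s_N(0.0, N).real)             # 15.0, i.e. 2N + 1 at theta = 0
```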
1.7 Problems

1.1 Show that $\lim_{t\to 0} \dfrac{e^{2jt} - 1}{t} = 2j$.

1.2 Show that $\lim_{t\to\infty} \dfrac{e^{jt}}{t} = 0$.

... with $g_i(t) = \int_t^{t+T} f_i(\tau)\, d\tau$ and $\omega_0 = \dfrac{2\pi}{T}$, $(T = 2\pi/\omega_0)$.

... the series $\sum_{n=0}^{\infty} \dfrac{z^n}{n!}$.

(d) De Moivre: Show that $(\cos(\theta) + j \sin(\theta))^n = \cos(n\theta) + j \sin(n\theta)$.

1.15 Determine $\sum_{n=1}^{N} \sin(n\theta)$.

1.16 Let $n \in \mathbb{N}$, $n > 1$, n even, and define $z = e^{jt}$. Show that
\[ \frac{\sin(nt)}{\sin(t)} = z^{n-1} + z^{n-3} + \cdots + z^{-(n-3)} + z^{-(n-1)}. \]
The first command defines a vector of time instances increasing from 0 to 10 with stepsize 1/10. The semicolon ; tells Matlab to execute the command silently. The second command defines a complex-valued vector y of function values $e^{0.3jt}$ for each t in the vector $[0, \frac{1}{10}, \frac{2}{10}, \ldots, 9\frac{9}{10}, 10]$. The vector y is given to the plot command, which plots the imaginary part of each entry of y against its real part.
Plot the curve $(\operatorname{Re} f(t), \operatorname{Im} f(t))$ of
(a) $f(t) = e^{(6j+1)t}$,
(b) $f(t) = e^{0.2jt} + e^{0.4jt}$.
1.18 In Matlab an integral like
\[ \int_0^1 f(t)\, dt \]
can be obtained numerically in the following way. First a file with a name, say, myfunction.m must be opened. Assuming the function f(t) to be integrated is $f(t) = t^3 + t^2$, we type in this file the two lines

function y = myfunction(t)
y=t.^3+t.^2;

Then the integral $\int_0^1 t^3 + t^2\, dt$ can be obtained by

quad('myfunction',0,1)
At first sight it may seem strange that we need to use .^ instead of ^. This has to do with Matlab's philosophy of array promotion, see the Matlab Mini Manual. Also, the name quad is not very descriptive. Try it, and use it to find the power of the signal
\[ f(t) = 2e^{2jt} + e^{jt} \]
and verify the result by direct calculation of the power.
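For readers who want to check the answer without Matlab: by a computation like the one in Example 1.5.3, harmonics at distinct frequencies contribute their squared amplitudes to the power, so the expected value here is $|2|^2 + |1|^2 = 5$. A Python sketch of the numerical check (quad is replaced by a plain midpoint rule; the signal is $2\pi$-periodic):

```python
import cmath, math

def power_over_period(f, T, steps=20000):
    # midpoint-rule approximation of (1/T) * integral of |f(t)|^2 over one period
    h = T / steps
    return sum(abs(f(-T / 2 + (k + 0.5) * h)) ** 2 for k in range(steps)) * h / T

f = lambda t: 2 * cmath.exp(2j * t) + cmath.exp(1j * t)
print(power_over_period(f, 2 * math.pi))  # approximately |2|**2 + |1|**2 = 5
```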
2
Periodic signals and their line spectra
Sinusoids and complex harmonic signals are the fundamental building blocks in signal analysis.
In this chapter we use the harmonic signals to describe arbitrary periodic signals. The discussion
culminates in the famous result that practically every T -periodic signal f (t) can be expressed
as a sum of harmonics, also called a superposition of harmonics,
\[ f(t) = \sum_{k \in \mathbb{Z}} c_k e^{jk\frac{2\pi}{T} t}, \qquad c_k \in \mathbb{C}. \tag{2.1} \]
Note that the harmonic signals in the sum all are T-periodic:
\[ e^{jk\frac{2\pi}{T}(t+T)} = e^{jk\frac{2\pi}{T}t + jk2\pi} = e^{jk\frac{2\pi}{T}t}\, e^{jk2\pi} = e^{jk\frac{2\pi}{T}t}, \qquad t \in \mathbb{R}. \]
Therefore the sum (2.1) is T -periodic as well. The incredible fact is that the converse is true
as well: Practically every T -periodic signal f (t) is of the form (2.1) for suitable choice of
coefficients ck .
If we define $\omega_0$ as
\[ \omega_0 = \frac{2\pi}{T}, \tag{2.2} \]
then (2.1) takes the form
\[ f(t) = \sum_{k \in \mathbb{Z}} c_k e^{jk\omega_0 t}. \]
It shows that the frequencies $k\omega_0$ of the harmonics that build up f(t) are an integer multiple of $\omega_0$. For this reason $\omega_0 = 2\pi/T$ is called the fundamental frequency of T-periodic signals.
Suppose that f(t) is a finite sum of sinusoids,
\[ f(t) = \tfrac{1}{2} a_0 + \sum_{k=1}^{N} \big( a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t) \big), \tag{2.3} \]
in which $a_k$ and $b_k$ are real numbers. This can be rewritten as a sum of complex harmonic signals as follows.
\begin{align*}
f(t) &= \tfrac{1}{2} a_0 + \sum_{k=1}^{N} \big( a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t) \big) \\
&= \tfrac{1}{2} a_0 + \sum_{k=1}^{N} \Big( \frac{a_k}{2} (e^{jk\omega_0 t} + e^{-jk\omega_0 t}) - j \frac{b_k}{2} (e^{jk\omega_0 t} - e^{-jk\omega_0 t}) \Big) \\
&= \tfrac{1}{2} a_0 + \sum_{k=1}^{N} \frac{a_k - j b_k}{2} e^{jk\omega_0 t} + \sum_{k=1}^{N} \frac{a_k + j b_k}{2} e^{-jk\omega_0 t} \\
&= \sum_{k=-N}^{N} c_k e^{jk\omega_0 t}. \tag{2.4}
\end{align*}
Here $c_k = (a_k - j b_k)/2$ and $c_{-k} = (a_k + j b_k)/2$ for $k = 0, 1, 2, \ldots, N$ and for consistency we put $b_0 = 0$. Note that $c_{-k} = \overline{c_k}$.
It will be clear that in this example for $N = \infty$ one would end up with an infinite sum of complex harmonics
\[ \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}. \tag{2.5} \]
A series like this is called a (complex) Fourier series, and the coefficients $c_k$ are the (complex) Fourier coefficients. The successive terms $c_k e^{jk\omega_0 t}$ generally are complex-valued functions of t, even if their sum is a real-valued function. Note that whereas the index k in the real case (2.3) goes from 1 to N, in the complex case (2.4) the index k goes from $-N$ to N. Inspired by this we define convergence of the infinite Fourier series (2.5) as follows.

2.1.2. Definition. For a given $t \in \mathbb{R}$ the Fourier series in (2.5) is said to converge with limit f(t) if
\[ f(t) = \lim_{N\to\infty} \sum_{k=-N}^{N} c_k e^{jk\omega_0 t}. \tag{2.6} \]
To avoid clutter we usually write $f(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}$ when we mean (2.6). The next example demonstrates a subtle point, which is that a Fourier series may be convergent even if its two sub-series $\sum_{k=-\infty}^{-1} c_k e^{jk\omega_0 t}$ and $\sum_{k=0}^{\infty} c_k e^{jk\omega_0 t}$ do not converge.
2.1.3. Example. The Fourier series $\sum_{k\ne 0} (1/k) e^{jk\omega_0 t}$ (which is the series $\sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}$ with $c_k = 1/k$ for $k \ne 0$ and $c_0 = 0$) converges for $t = 0$ to zero. This can be seen as follows. For $t = 0$,
\[ \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 \cdot 0} = \sum_{k=-\infty}^{\infty} c_k = \lim_{N\to\infty} \sum_{k=-N}^{N} c_k = \lim_{N\to\infty} \Big( \sum_{k=-N}^{-1} \frac{1}{k} + \sum_{k=1}^{N} \frac{1}{k} \Big) = \lim_{N\to\infty} \Big( -\sum_{k=1}^{N} \frac{1}{k} + \sum_{k=1}^{N} \frac{1}{k} \Big) = 0. \]
With a bit more theory (developed in the following pages) it may be shown that the Fourier series converges for every $t \in \mathbb{R}$, that it converges to $f(t) = \omega_0 (t - T/2)/j$ for $0 < t < T$ (see Example 2.2.4 and Figure 2.1), and that the series is T-periodic, piecewise smooth with discontinuities at $t = 0, \pm T, \pm 2T, \ldots$.
Note that for $t = 0$ the sub-series $\sum_{k=-\infty}^{-1} c_k e^{jk\omega_0 t} = -\sum_{k=1}^{\infty} 1/k$ and $\sum_{k=0}^{\infty} c_k e^{jk\omega_0 t} = \sum_{k=1}^{\infty} 1/k$ do not converge.
If in a Fourier series $f(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}$ only finitely many coefficients $c_k$ are nonzero, then obviously the Fourier series converges for every $t \in \mathbb{R}$. More generally, convergence for every $t \in \mathbb{R}$ is ensured if the $c_k$ are absolutely summable, that is, if $\sum_{k=-\infty}^{\infty} |c_k|$ converges. In this case the sum f(t) is in fact continuous everywhere.

2.1.4. Theorem. If $\sum_{k=-\infty}^{\infty} |c_k| < \infty$, then the Fourier series $\sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}$ converges and moreover is continuous at every $t \in \mathbb{R}$.
Proof. It follows from $\sum_{k=-\infty}^{\infty} |c_k| < \infty$ that both $\sum_{k=-\infty}^{-1} |c_k|$ and $\sum_{k=0}^{\infty} |c_k|$ converge. As $|c_k e^{jk\omega_0 t}| = |c_k|$ we conclude that the two sub-series of the Fourier series converge absolutely for any t, and, hence, that the Fourier series itself converges for any t.
Let f(t) denote the Fourier series $f(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}$. We show that f(t) is continuous at every time $a \in \mathbb{R}$, that is, we show that for every $\epsilon > 0$ there is a $\delta > 0$ such that $|t - a| < \delta$ implies that $|f(t) - f(a)| < \epsilon$.
Define the partial sums
\[ s_N(t) = \sum_{k=-N}^{N} c_k e^{jk\omega_0 t}. \]
Then
\[ |f(t) - s_N(t)| = \Big| \sum_{|k| > N} c_k e^{jk\omega_0 t} \Big| \le \sum_{|k| > N} |c_k|. \tag{2.7} \]
By assumption $\sum_{k=-\infty}^{\infty} |c_k| < \infty$, so
\[ \sum_{|k| > N} |c_k| = \sum_{k=-\infty}^{\infty} |c_k| - \sum_{k=-N}^{N} |c_k| \to 0 \quad \text{as } N \to \infty. \]
Consequently there is a large enough positive integer $N_1$ such that for every $N > N_1$ we get $\sum_{|k| > N} |c_k| < \epsilon/3$. Considering Equation (2.7) we conclude that $|f(t) - s_{N_1}(t)| \le \sum_{|k| > N_1} |c_k| < \epsilon/3$ for every t.
The partial sum $s_{N_1}(t)$ is a finite sum of continuous functions, hence is itself continuous. For the given $\epsilon > 0$, therefore, a $\delta > 0$ can be found such that $|s_{N_1}(t) - s_{N_1}(a)| < \epsilon/3$ whenever $|t - a| < \delta$. Finally then for all such t,
\begin{align*}
|f(t) - f(a)| &= |f(t) - s_{N_1}(t) + s_{N_1}(t) - s_{N_1}(a) - (f(a) - s_{N_1}(a))| \\
&\le |f(t) - s_{N_1}(t)| + |f(a) - s_{N_1}(a)| + |s_{N_1}(t) - s_{N_1}(a)| \\
&< \epsilon/3 + \epsilon/3 + \epsilon/3 = \epsilon.
\end{align*}
This completes the proof.
2.1.5. Example. The sum f(t) of the Fourier series in Example 2.1.3 is not continuous. By the above result, therefore, $\sum_{k=-\infty}^{\infty} |c_k| = \infty$, which may also be verified directly.
If $\sum_{k=-\infty}^{\infty} |c_k| = \infty$ (which is always the case if f(t) is not continuous on $\mathbb{R}$), then the partial sums $s_N(t) = \sum_{k=-N}^{N} c_k e^{jk\omega_0 t}$ may not provide a satisfactory approximation of f(t) near the points of discontinuity. A famous phenomenon in this respect is the Gibbs phenomenon discussed in Section 2.5.
We end this section with an important connection between f(t) and its Fourier coefficients $c_k$. So far we assumed the Fourier coefficients $c_k$ given and the signal f(t) to be the resulting sum. But what if we are given f(t) and want to find the corresponding Fourier series (assuming one exists)? For the sake of simplicity we restrict attention for the moment to signals f(t) that have a finite Fourier series expansion $f(t) = \sum_{k=-N}^{N} c_k e^{jk\omega_0 t}$. The claim is that the Fourier coefficients are uniquely determined by f(t) through the integral
\[ c_k = \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-jk\omega_0 t}\, dt. \tag{2.8} \]
Indeed,
\[ \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-jk\omega_0 t}\, dt = \frac{1}{T} \int_{-T/2}^{T/2} \sum_{n=-N}^{N} c_n e^{jn\omega_0 t} e^{-jk\omega_0 t}\, dt = \sum_{n=-N}^{N} c_n \frac{1}{T} \int_{-T/2}^{T/2} e^{j(n-k)\omega_0 t}\, dt = c_k. \]
In the last integral we made use of Example 1.2.6, which showed that in the above sum all terms for $n \ne k$ are zero. Only for $n = k$ does the above integral have a nonzero value. In that case the integrand $e^{j(n-k)\omega_0 t}$ is 1 for all t and its integral over $[-T/2, T/2]$ hence equals T.
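Formula (2.8) can be tried out numerically on a finite Fourier polynomial with made-up coefficients (an illustration; all values below are hypothetical):

```python
import cmath

T = 1.0
w0 = 2 * cmath.pi / T
coeffs = {-2: 0.5 - 0.25j, 0: 1.0 + 0j, 3: 2j}   # a finite Fourier polynomial

def f(t):
    return sum(c * cmath.exp(1j * k * w0 * t) for k, c in coeffs.items())

def fourier_coeff(k, steps=20000):
    # midpoint-rule approximation of (2.8)
    h = T / steps
    total = 0j
    for m in range(steps):
        t = -T / 2 + (m + 0.5) * h
        total += f(t) * cmath.exp(-1j * k * w0 * t) * h
    return total / T

print(abs(fourier_coeff(3) - 2j))  # ~0: c_3 is recovered
print(abs(fourier_coeff(1)))       # ~0: harmonics absent from f give zero
```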
For a general T-periodic signal f(t) we can in this way still form the series
\[ \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}, \]
where $\omega_0 = \frac{2\pi}{T}$ is the fundamental frequency and $c_k$ are the Fourier coefficients determined by (2.8). The Fourier series thus obtained from f(t) is called the (complex) Fourier series of f(t) and the coefficients $c_k$ are the (complex) Fourier coefficients of f(t). To emphasize the fact that the Fourier coefficients are determined by f(t), we shall from now on denote the Fourier coefficients of f(t) by $f_k$ rather than $c_k$. The question now is to which function the Fourier series of f(t) converges. We shall see that for piecewise smooth T-periodic f(t) the Fourier series converges for every t with limit $(f(t+) + f(t-))/2$. In particular for signals f(t) that are continuous the signal f(t) and its Fourier series are one and the same thing.
To begin with, we show that the Fourier coefficients $f_k$ of a piecewise smooth periodic signal f(t) tend to zero as $k \to \pm\infty$. This is based on the following more general result.
2.2.1. Lemma (Riemann–Lebesgue). If f(t) is piecewise smooth on [a, b], then
\[ \lim_{|\omega|\to\infty} \int_a^b f(t) e^{-j\omega t}\, dt = 0. \]
Proof. Suppose first that f(t) is continuously differentiable on [a, b]. Then we can use partial integration to obtain
\[ \int_a^b f(t) e^{-j\omega t}\, dt = \Big[ \frac{-1}{j\omega} f(t) e^{-j\omega t} \Big]_a^b + \frac{1}{j\omega} \int_a^b f'(t) e^{-j\omega t}\, dt. \]
Since $|e^{-j\omega t}| = 1$ we can derive from this the bound
\[ \Big| \int_a^b f(t) e^{-j\omega t}\, dt \Big| \le \frac{1}{|j\omega|} \big( |f(b)| + |f(a)| \big) + \frac{1}{|j\omega|} \int_a^b |f'(t)|\, dt. \]
It is immediate that the right-hand side goes to zero as $|\omega| \to \infty$, which proves the claim.
If f(t) is not continuously differentiable, then, since f(t) is piecewise smooth, we may split [a, b] into a finite set of subintervals $[t_i, t_{i+1}]$ $(i = 1, \ldots)$ such that f(t) is continuously differentiable on each of these subintervals. Similarly as done above (using partial integration) it follows that $\lim_{|\omega|\to\infty} \int_{t_i}^{t_{i+1}} f(t) e^{-j\omega t}\, dt = 0$. Hence $\lim_{|\omega|\to\infty} \int_a^b f(t) e^{-j\omega t}\, dt = 0$.
An immediate consequence is that piecewise smooth T-periodic signals f(t) satisfy
\[ \lim_{|k|\to\infty} \int_{-T/2}^{T/2} f(t) e^{-jk\omega_0 t}\, dt = 0, \]
i.e., that the Fourier coefficients $f_k$ tend to zero as $|k| \to \infty$. In fact, looking at the proof of the Riemann–Lebesgue lemma, we may conclude that $|f_k| \le C/|k|$. The Riemann–Lebesgue lemma also implies that $\lim_{|\omega|\to\infty} \int_a^b f(t) \cos(\omega t)\, dt = 0$ and $\lim_{|\omega|\to\infty} \int_a^b f(t) \sin(\omega t)\, dt = 0$.
2.2.2. Lemma. Let f(t) be T-periodic and piecewise smooth. Then
\[ \lim_{a\to\infty} \int_0^{T/2} f(t) \frac{\sin(at)}{t}\, dt = \frac{\pi}{2} f(0+) \quad\text{and}\quad \lim_{a\to\infty} \int_{-T/2}^{0} f(t) \frac{\sin(at)}{t}\, dt = \frac{\pi}{2} f(0-). \]
Indeed, if the first equality holds then replacing t with $-t$ readily gives
\[ \lim_{a\to\infty} \int_{-T/2}^{0} f(t) \frac{\sin(at)}{t}\, dt = \lim_{a\to\infty} \int_0^{T/2} f(-t) \frac{\sin(at)}{t}\, dt = \frac{\pi}{2} f(0-). \]
Define $I(a) = \int_0^{T/2} f(t) \frac{\sin(at)}{t}\, dt$ and express I(a) as a sum $I(a) = I_1(a) + f(0+) I_2(a)$ with
\[ I_1(a) = \int_0^{T/2} \frac{f(t) - f(0+)}{t} \sin(at)\, dt, \qquad I_2(a) = \int_0^{T/2} \frac{\sin(at)}{t}\, dt. \]
We will show that $\lim_{a\to\infty} I_1(a) = 0$ and that $\lim_{a\to\infty} I_2(a) = \pi/2$.
To calculate the limit of $I_2(a)$ we make use of the standard integral
\[ \int_0^{\infty} \frac{\sin t}{t}\, dt = \frac{\pi}{2}. \]
This gives
\[ \lim_{a\to\infty} I_2(a) = \lim_{a\to\infty} \int_0^{T/2} \frac{\sin(at)}{t}\, dt = \{\tau = at\} = \lim_{a\to\infty} \int_0^{aT/2} \frac{\sin \tau}{\tau}\, d\tau = \frac{\pi}{2}. \]
Now take an $\epsilon > 0$. We show that $|I_1(a)| < \epsilon$ for large enough a. Since $f'(0+) = \lim_{t\downarrow 0} (f(t) - f(0+))/t$ exists, the function $(f(t) - f(0+))/t$ is bounded on (0, T/2], that is,
\[ |f(t) - f(0+)| \le M t, \qquad t \in [0, T/2] \]
for some $M > 0$. Choose $\delta > 0$ such that $\delta < \epsilon/(2M)$ and $\delta < T/2$. Then
\[ \Big| \int_0^{\delta} \frac{f(t) - f(0+)}{t} \sin(at)\, dt \Big| \le M \int_0^{\delta} |\sin(at)|\, dt \le M\delta < \frac{\epsilon}{2}. \]
We have found that
\[ |I_1(a)| = \Big| \int_0^{\delta} \frac{f(t) - f(0+)}{t} \sin(at)\, dt + \int_{\delta}^{T/2} \frac{f(t) - f(0+)}{t} \sin(at)\, dt \Big| < \frac{\epsilon}{2} + \Big| \int_{\delta}^{T/2} \frac{f(t) - f(0+)}{t} \sin(at)\, dt \Big|. \]
On the interval $[\delta, T/2]$ the function $(f(t) - f(0+))/t$ is piecewise smooth, since the only possible singularity is at $t = 0$ and this is not in the interval. The Riemann–Lebesgue lemma therefore applies, which gives
\[ \lim_{a\to\infty} \int_{\delta}^{T/2} \frac{f(t) - f(0+)}{t} \sin(at)\, dt = 0. \]
Hence for large enough a,
\[ \Big| \int_{\delta}^{T/2} \frac{f(t) - f(0+)}{t} \sin(at)\, dt \Big| < \frac{\epsilon}{2}, \]
so that $|I_1(a)| < \epsilon$ for all large enough a.
The previous results allow us to state and prove the following fundamental result.

2.2.3. Theorem (The Fourier series theorem). Let f(t) be a T-periodic signal and suppose it is piecewise smooth on $[-T/2, T/2]$. Then for every $t \in \mathbb{R}$ there holds that
\[ \frac{f(t+) + f(t-)}{2} = \sum_{k=-\infty}^{\infty} f_k e^{jk\omega_0 t}, \qquad f_k = \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-jk\omega_0 t}\, dt. \tag{2.9} \]
Proof. We need to show that $(f(t+) + f(t-))/2 = \lim_{N\to\infty} s_N(t)$ for every t, where
\[ s_N(t) = \sum_{k=-N}^{N} f_k e^{jk\omega_0 t}. \tag{2.10} \]
First we derive an integral representation for $s_N(t)$ by substituting the defining integral for $f_k$, namely,
\[ f_k = \frac{1}{T} \int_{-T/2}^{T/2} f(\tau) e^{-jk\omega_0 \tau}\, d\tau. \]
This gives
\begin{align*}
s_N(t) = \sum_{k=-N}^{N} f_k e^{jk\omega_0 t}
&= \frac{1}{T} \int_{-T/2}^{T/2} f(\tau) \sum_{k=-N}^{N} e^{jk\omega_0 (t - \tau)}\, d\tau \\
&= \{\text{see (1.12)}\} = \frac{1}{T} \int_{-T/2}^{T/2} f(\tau)\, \frac{\sin((N + 1/2)\omega_0 (t - \tau))}{\sin(\omega_0 (t - \tau)/2)}\, d\tau \\
&= \{x = t - \tau\} = \frac{1}{T} \int_{t - T/2}^{t + T/2} f(t - x)\, \frac{\sin((N + 1/2)\omega_0 x)}{\sin(\omega_0 x / 2)}\, dx \\
&= \{\text{interval-shift}\} = \frac{1}{T} \int_{-T/2}^{T/2} f(t - x)\, \frac{\sin((N + 1/2)\omega_0 x)}{\sin(\omega_0 x / 2)}\, dx.
\end{align*}
Now define
\[ g(x) = f(t - x)\, \frac{x}{\sin(\omega_0 x / 2)}. \]
Since $\lim_{x\to 0} x / \sin(\omega_0 x / 2) = 2/\omega_0$ we have that $g(0+) = 2 f(t-)/\omega_0$ and $g(0-) = 2 f(t+)/\omega_0$. Therefore
\begin{align*}
\lim_{N\to\infty} s_N(t) &= \lim_{N\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} g(x)\, \frac{\sin((N + 1/2)\omega_0 x)}{x}\, dx \\
&= \{\text{Lemma 2.2.2 for } a = (N + 1/2)\omega_0\} = \frac{1}{T} \cdot \pi \cdot \frac{g(0+) + g(0-)}{2} \\
&= \frac{2\pi}{\omega_0 T} \cdot \frac{f(t+) + f(t-)}{2} = \frac{f(t+) + f(t-)}{2}.
\end{align*}
This completes the proof.
2.2.4. Example (The sawtooth). Figure 2.1 shows the graph of the sawtooth with period T. It is the T-periodic signal f(t) which on one period [0, T) is given by
\[ f(t) = t - T/2, \qquad (0 \le t < T). \]
[Figure 2.1: The sawtooth.]
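The Fourier coefficients of the sawtooth turn out to be $f_k = j/(k\omega_0)$ for $k \ne 0$ and $f_0 = 0$ (they are used again in Example 2.3.3). A numerical illustration of this claim (T = 2 is an arbitrary choice):

```python
import cmath

T = 2.0
w0 = 2 * cmath.pi / T

def sawtooth(t):
    # t - T/2 on [0, T), extended T-periodically
    return (t % T) - T / 2

def fourier_coeff(k, steps=40000):
    # midpoint-rule approximation of (1/T) * integral of f(t) e^{-j k w0 t} over [0, T]
    h = T / steps
    return sum(sawtooth((m + 0.5) * h) * cmath.exp(-1j * k * w0 * (m + 0.5) * h) * h
               for m in range(steps)) / T

for k in (1, 2, -3):
    print(abs(fourier_coeff(k) - 1j / (k * w0)))  # all ~0
print(abs(fourier_coeff(0)))                       # ~0
```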
\begin{align*}
f(t) = \lim_{N\to\infty} \sum_{k=-N}^{N} f_k e^{jk\omega_0 t}
&= f_0 + \lim_{N\to\infty} \sum_{k=1}^{N} \big( f_k e^{jk\omega_0 t} + f_{-k} e^{-jk\omega_0 t} \big) \\
&= f_0 + \sum_{k=1}^{\infty} \big( f_k e^{jk\omega_0 t} + \overline{f_k e^{jk\omega_0 t}} \big) \\
&= f_0 + 2 \sum_{k=1}^{\infty} \operatorname{Re}\big( f_k e^{jk\omega_0 t} \big).
\end{align*}
The equality $f_{-k} = \overline{f_k}$ for $k = 0$ states that $f_0 = \overline{f_0}$, i.e., that $f_0$ is real. Define $a_k$ and $b_k$ as
\[ a_k = 2 \operatorname{Re} f_k, \qquad b_k = -2 \operatorname{Im} f_k, \qquad (k = 0, 1, 2, \ldots). \tag{2.11} \]
So $2 f_k = a_k - j b_k$. Then $2 \operatorname{Re}(f_k e^{jk\omega_0 t}) = a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t)$, which finally establishes that f(t) is indeed a sum of sinusoids,
\[ f(t) = \tfrac{1}{2} a_0 + \sum_{k=1}^{\infty} \big( a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t) \big). \tag{2.12} \]
It will be clear that f(t) is a real-valued T-periodic signal with $T = 2\pi/\omega_0$. The series (2.12) is known as the real Fourier series and the coefficients $a_k$ and $b_k$ are the real Fourier coefficients. The term $a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t)$ is sometimes referred to as the k-th harmonic of f(t). In summary:
2.3.1. Theorem (Fourier series theorem, real-valued case). Let f(t) be a real-valued T-periodic signal and suppose it is piecewise smooth on $[-T/2, T/2]$. Then for every $t \in \mathbb{R}$ there holds that
\[ \frac{f(t-) + f(t+)}{2} = \tfrac{1}{2} a_0 + \sum_{k=1}^{\infty} \big( a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t) \big), \]
in which
\[ a_k = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \cos(k\omega_0 t)\, dt, \qquad k = 0, 1, \ldots, \tag{2.13} \]
\[ b_k = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin(k\omega_0 t)\, dt, \qquad k = 1, 2, \ldots. \tag{2.14} \]
Proof. Let $f_k$ be the complex Fourier coefficients of f(t). In the above we showed that $a_k = 2 \operatorname{Re} f_k$ and $b_k = -2 \operatorname{Im} f_k$. Therefore
\[ a_k = 2 \operatorname{Re} f_k = 2 \operatorname{Re} \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-jk\omega_0 t}\, dt = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \cos(k\omega_0 t)\, dt \]
and
\[ b_k = -2 \operatorname{Im} f_k = -2 \operatorname{Im} \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-jk\omega_0 t}\, dt = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin(k\omega_0 t)\, dt. \]
2.3.2. Example. Consider the signal $f(t) = |\sin(\pi t)|$ (see Figure 2.3). This signal is periodic with period $T = 1$. In addition the signal is even, which is to say that $f(-t) = f(t)$. This means that the functions $g(t) := f(t) \sin(k\omega_0 t)$ are odd, i.e., that $g(-t) = -g(t)$. Hence the integrals for $b_k$,
\[ b_k = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin(k\omega_0 t)\, dt, \qquad k = 1, 2, \ldots \]
are all zero. The Fourier series of f(t) apparently entails only cosine terms. With $\omega_0 = 2\pi/T = 2\pi$ we get
\begin{align*}
a_k = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \cos(k\omega_0 t)\, dt
&= 2 \int_{-1/2}^{1/2} |\sin(\pi t)| \cos(2\pi k t)\, dt \\
&= \{\text{integrand is even}\} = 4 \int_0^{1/2} \sin(\pi t) \cos(2\pi k t)\, dt \\
&= 2 \int_0^{1/2} \big( \sin((2k + 1)\pi t) - \sin((2k - 1)\pi t) \big)\, dt \\
&= \Big[ \frac{-2 \cos((2k + 1)\pi t)}{(2k + 1)\pi} \Big]_0^{1/2} - \Big[ \frac{-2 \cos((2k - 1)\pi t)}{(2k - 1)\pi} \Big]_0^{1/2} \\
&= \frac{2}{\pi} \Big( \frac{1}{2k + 1} - \frac{1}{2k - 1} \Big) = \frac{4}{\pi} \cdot \frac{1}{1 - 4k^2}.
\end{align*}
The signal f(t) is continuous and is piecewise smooth on $\mathbb{R}$, so we conclude that
\[ |\sin(\pi t)| = \frac{2}{\pi} + \frac{4}{\pi} \sum_{k=1}^{\infty} \frac{\cos(2\pi k t)}{1 - 4k^2} \]
for every $t \in \mathbb{R}$.
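The series converges rather quickly (the terms decay like $1/k^2$), which makes a numerical spot-check easy; an illustration:

```python
import math

def partial_sum(t, N):
    # partial sum of the cosine series derived above for |sin(pi t)|
    s = 2 / math.pi
    for k in range(1, N + 1):
        s += (4 / math.pi) * math.cos(2 * math.pi * k * t) / (1 - 4 * k ** 2)
    return s

for t in (0.0, 0.25, 0.7):
    print(abs(partial_sum(t, 2000) - abs(math.sin(math.pi * t))))  # all small, O(1/N)
```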
Incidentally, the trigonometric formula that was used in the above example, $\sin(\alpha)\cos(\beta) = \frac{1}{2}\sin(\alpha + \beta) + \frac{1}{2}\sin(\alpha - \beta)$, is easily obtained with the help of Euler's formula. Indeed,
\begin{align*}
\sin(\alpha)\cos(\beta) &= \operatorname{Im} e^{j\alpha} \cdot \operatorname{Re} e^{j\beta} = \frac{e^{j\alpha} - e^{-j\alpha}}{2j} \cdot \frac{e^{j\beta} + e^{-j\beta}}{2} \\
&= \frac{e^{j(\alpha+\beta)} + e^{j(\alpha-\beta)} - e^{-j(\alpha-\beta)} - e^{-j(\alpha+\beta)}}{4j} \\
&= \frac{\big( e^{j(\alpha+\beta)} - e^{-j(\alpha+\beta)} \big) + \big( e^{j(\alpha-\beta)} - e^{-j(\alpha-\beta)} \big)}{4j} \\
&= \frac{1}{2}\sin(\alpha + \beta) + \frac{1}{2}\sin(\alpha - \beta).
\end{align*}
The values that a line spectrum takes are usually complex numbers. These may be expressed in polar form as $f_k = |f_k| e^{j\phi_k}$. The sequence $|f_k|$ is then referred to as the amplitude spectrum of f(t), and the sequence $\phi_k$ as the phase spectrum of f(t). The phase, as always, is unique up to an integer multiple of $2\pi$. If f(t) is real-valued, then $f_{-k} = \overline{f_k}$, hence $|f_{-k}| = |f_k|$ and $\phi_{-k} = -\phi_k$. Real-valued signals hence have an amplitude spectrum that is even and a phase spectrum that is odd (up to multiples of $2\pi$).

[Figure 2.4: The amplitude spectrum and phase spectrum of the sawtooth.]

2.3.3. Example. In Example 2.2.4 we found the Fourier coefficients of the sawtooth to be $f_k = j/(k\omega_0)$ for $k \ne 0$ and $f_k = 0$ for $k = 0$. The amplitude spectrum is therefore $|f_k| = 1/|k\omega_0|$, except for $k = 0$ where it is zero. The phase spectrum $\phi_k = \arg f_k$ equals $\pi/2$ for positive k, and $-\pi/2$ for negative k. The phase of $f_0 = 0$ is not really defined, but in such cases we take it to be zero. Figure 2.4 shows the amplitude and phase spectrum of the sawtooth signal for a certain $\omega_0$. The sawtooth is a real-valued function and this is in accordance with the fact that the amplitude spectrum is an even function of k and that the phase spectrum is an odd function of k.
Certain operations on signals in the time domain are more conveniently described in the frequency domain. An important such operation is the convolution of two signals, discussed in the next section. We end this section with a list of time-domain signal operations and their corresponding operations in the frequency domain. Though each operation on its own is straightforward, the combination of these operations allows one to derive Fourier expansions of quite a wide class of signals (see the problems in Section 2.6).
Signal operations

1. Linearity: αf(t) + βg(t) ↔ αf_k + βg_k.

The line spectrum d_k of d(t) = αf(t) + βg(t) for any constants α, β ∈ C equals

    d_k = (1/T) ∫_{−T/2}^{T/2} (αf(t) + βg(t)) e^{−jkω_0 t} dt
        = α (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt + β (1/T) ∫_{−T/2}^{T/2} g(t) e^{−jkω_0 t} dt = αf_k + βg_k.
2. Time-shift: f(t − τ) ↔ e^{−jkω_0 τ} f_k.

The line spectrum d_k of the shifted signal f(t − τ) follows from

    d_k = (1/T) ∫_{−T/2}^{T/2} f(t − τ) e^{−jkω_0 t} dt
        = {v = t − τ} = (1/T) ∫_{−T/2−τ}^{T/2−τ} f(v) e^{−jkω_0 (v+τ)} dv
        = {interval-shift} = e^{−jkω_0 τ} (1/T) ∫_{−T/2}^{T/2} f(v) e^{−jkω_0 v} dv
        = e^{−jkω_0 τ} f_k.

3. Time-reversal: f(−t) ↔ f_{−k}.

    (1/T) ∫_{−T/2}^{T/2} f(−t) e^{−jkω_0 t} dt = {v = −t} = (1/T) ∫_{−T/2}^{T/2} f(v) e^{jkω_0 v} dv
        = (1/T) ∫_{−T/2}^{T/2} f(v) e^{−j(−k)ω_0 v} dv = f_{−k}.
4. Conjugation: f*(t) ↔ f*_{−k}.

    (1/T) ∫_{−T/2}^{T/2} f*(t) e^{−jkω_0 t} dt = ( (1/T) ∫_{−T/2}^{T/2} f(t) e^{jkω_0 t} dt )* = f*_{−k}.

Conjugation in the time domain results in the frequency domain in conjugation and frequency reversal.
Table 2.1: Properties of the line spectrum f_k = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt.

    Property           Time domain                  Frequency domain
    Linearity          αf(t) + βg(t)                αf_k + βg_k
    Time-shift         f(t − τ), (τ ∈ R)            e^{−jkω_0 τ} f_k
    Time-reversal      f(−t)                        f_{−k}
    Conjugation        f*(t)                        f*_{−k}
    Frequency-shift    e^{jnω_0 t} f(t), (n ∈ Z)    f_{k−n}

The frequency-shift rule follows from

    Σ_{k=−∞}^{∞} f_{k−n} e^{jkω_0 t} = {m = k − n} = Σ_{m=−∞}^{∞} f_m e^{j(m+n)ω_0 t}
        = e^{jnω_0 t} Σ_{m=−∞}^{∞} f_m e^{jmω_0 t} = e^{jnω_0 t} f(t).
Convolution products in the time domain reduce to ordinary products of the respective line spectra in the frequency domain:

2.4.2. Theorem (Convolution theorem for periodic signals). Let f(t) and g(t) be two T-periodic piecewise smooth signals with line spectra f_k and g_k respectively. Then (f ∗ g)(t) is piecewise smooth, continuous and its line spectrum (f ∗ g)_k satisfies

    (f ∗ g)_k = f_k g_k,    (k ∈ Z).
Proof. We omit the proof that (f ∗ g)(t) is piecewise smooth and continuous, since the proof is technical but otherwise straightforward. The line spectrum (f ∗ g)_k obeys

    (f ∗ g)_k = (1/T) ∫_{−T/2}^{T/2} ( (1/T) ∫_{−T/2}^{T/2} f(τ) g(t − τ) dτ ) e^{−jkω_0 t} dt
              = (1/T²) ∫_{−T/2}^{T/2} ∫_{−T/2}^{T/2} f(τ) g(t − τ) e^{−jkω_0 t} dτ dt,

and changing the order of integration together with the time-shift property yields (f ∗ g)_k = f_k g_k.
Figure 2.5: (a) a jumpy signal; (b) averaged with δ = 0.03; (c) averaged with δ = 0.09.
2.4.3. Example (Sliding window averaging). For a given T-periodic signal f(t) we construct the signal f̃(t) by averaging f(t) around t over an interval of a fixed length δT, δ ∈ (0, 1), i.e., we consider

    f̃(t) = (1/(δT)) ∫_{t−δT/2}^{t+δT/2} f(τ) dτ.

Averaging f(t) this way filters out high-frequency noise. It is to be expected, then, that f̃(t) is somewhat smoother than f(t), but as long as δ is not too large the graph of the averaged f̃(t) should retain roughly the same shape as the graph of f(t). Figure 2.5(a) shows an example of a jumpy signal f(t). Figure 2.5(b) shows f̃(t) for the case that δ = 0.03. In plot (c) of that figure the average was taken over a wider interval (δ = 0.09) and as expected the plot is smoother than the one in (b).
The signal f̃(t) can be considered as the convolution of f with a suitable function g(t):

    f̃(t) = (1/(δT)) ∫_{t−δT/2}^{t+δT/2} f(τ) dτ = {v = t − τ} = (1/(δT)) ∫_{−δT/2}^{δT/2} f(t − v) dv
          = (1/T) ∫_{−T/2}^{T/2} f(t − v) g(v) dv = (f ∗ g)(t),

for

    g(t) = { 1/δ   if |t| ≤ δT/2,
           { 0     if δT/2 < |t| ≤ T/2.

In the frequency domain the process of averaging hence means multiplying the line spectrum with the line spectrum g_k of g(t).
    g_k = (1/T) ∫_{−T/2}^{T/2} g(t) e^{−jkω_0 t} dt
        = (1/T) ∫_{−δT/2}^{δT/2} (1/δ) e^{−jkω_0 t} dt = {k ≠ 0} = (1/(δT)) · (e^{jkω_0 δT/2} − e^{−jkω_0 δT/2})/(jkω_0)
        = sin(kω_0 δT/2)/(kω_0 δT/2) = {ω_0 T/2 = π} = sin(kπδ)/(kπδ),

while

    g_0 = (1/(δT)) ∫_{−δT/2}^{δT/2} dt = 1.
Therefore

    f̃_k = (sin(kπδ)/(kπδ)) f_k.

Note that the function sin(kπδ)/(kπδ) tends to zero as k → ±∞. The high-frequency harmonics f_k e^{jkω_0 t} are therefore more attenuated than the lower frequency harmonics. This agrees with our understanding of averaging. Also, the greater the averaging interval, the smaller is sin(kπδ)/(kπδ) for large k, i.e., the more the high-frequency harmonics are attenuated. Again this agrees with our understanding of averaging.
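The attenuation factor sin(kπδ)/(kπδ) is easy to tabulate. A minimal sketch in plain Python (the values of δ match the figure; the harmonics k are arbitrary choices) confirms that a wider window suppresses a given harmonic more, and high harmonics most:

```python
import math

def attenuation(k, delta):
    # line-spectrum multiplier sin(k*pi*delta)/(k*pi*delta)
    # of the sliding-window average; equals 1 for k = 0
    if k == 0:
        return 1.0
    x = k * math.pi * delta
    return math.sin(x) / x

# a wider window (larger delta) attenuates a given harmonic more
for k in [5, 10, 20]:
    assert abs(attenuation(k, 0.09)) < abs(attenuation(k, 0.03))
# the factor tends to zero for high harmonics
assert abs(attenuation(1000, 0.03)) < 0.02
```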
As a T -periodic signal is fully determined by its line spectrum, it should be possible to express
any property that f (t) may have in terms of its line spectrum. It is for example possible to
express the power

    P_f = (1/T) ∫_{−T/2}^{T/2} |f(t)|² dt

of a T-periodic signal f(t) in terms of the f_k's.
2.4.4. Theorem (Parseval's theorem for periodic signals). If f(t) is a piecewise smooth T-periodic signal, then

    P_f = Σ_{k=−∞}^{∞} |f_k|².

Proof. Note that P_f may be obtained from the following expression for t = 0:

    (1/T) ∫_{−T/2}^{T/2} f(τ) f*(τ − t) dτ.

The above expression is the convolution product of f(t) and g(t) = f*(−t). We have that P_f = (f ∗ g)(0). The line spectrum g_k of g(t) = f*(−t) equals g_k = f*_k (see Table 2.1), so by the convolution theorem (f ∗ g) has line spectrum (f ∗ g)_k = f_k f*_k = |f_k|². Consequently

    (f ∗ g)(t) = Σ_{k=−∞}^{∞} |f_k|² e^{jkω_0 t}.
For the sawtooth of Example 2.2.4 we have on one hand

    P_f = (1/T) ∫_0^T (t − T/2)² dt = (1/12) T²,

and on the other hand, by Parseval's theorem and f_k = j/(kω_0),

    P_f = Σ_{k∈Z, k≠0} 1/(kω_0)² = Σ_{k∈Z, k≠0} T²/(4π²k²) = (T²/(2π²)) Σ_{k=1}^{∞} 1/k².

We conclude that

    (1/12) T² = (T²/(2π²)) Σ_{k=1}^{∞} 1/k²,

i.e.,

    Σ_{k=1}^{∞} 1/k² = π²/6.
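The value π²/6 ≈ 1.6449 is easy to confirm numerically. A minimal check in plain Python (the truncation point is an arbitrary choice; the tail of the series is of order 1/N):

```python
import math

# Partial sum of sum_{k>=1} 1/k^2; Parseval applied to the
# sawtooth says the limit is pi^2/6.
s = sum(1.0 / k**2 for k in range(1, 200001))
assert abs(s - math.pi**2 / 6) < 1e-5  # tail is about 1/200000
```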
and f(0) = 0, see Figure 2.6. The square wave is real-valued so that f_{−k} = f*_k. The f_k follow easily,

    f_k = (1/(2π)) ( −∫_{−π}^{0} e^{−jkt} dt + ∫_{0}^{π} e^{−jkt} dt )
        = (1/(2π)) ∫_{0}^{π} (e^{−jkt} − e^{jkt}) dt = (1 − cos(kπ))/(jπk)
        = { 0          if k is even,
          { 2/(jπk)    if k is odd.
The (2N − 1)-th partial sum s_{2N−1}(t) is equal to

    s_{2N−1}(t) = Σ_{l=0}^{N−1} (2/((2l+1)jπ)) (e^{(2l+1)jt} − e^{−(2l+1)jt}) = Σ_{l=0}^{N−1} (4/((2l+1)π)) sin((2l+1)t).

It may be shown that s_{2N−1}(t) takes on its maximal value at t = π/(2N). Figure 2.7 shows plots of the partial sums s_{2N−1}(t) for N = 4, 8, 12, 16 on the interval [−2π, 2π], and the peak values of s_{2N−1}(t) are collected in Table 2.2. Note that the overshoot does not converge to zero but to a value of about 0.17898.
Table 2.2: Peak values s_{2N−1}(π/(2N)) of the partial sums.

    N       s_{2N−1}(π/(2N))
    4       1.180284
    8       1.179305
    16      1.179061
    32      1.179000
    64      1.178985
    128     1.178981
    256     1.178980

Figure 2.7: The partial sums s_{2N−1}(t) for N = 4 and N = 8 (left) and N = 12 and N = 16 (right).
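The limiting behaviour of the overshoot can be checked in a few lines. The sketch below (plain Python) evaluates the partial sum at t = π/(2N) for large N and confirms that the peak settles near 1.17898 rather than at 1:

```python
import math

def s_peak(N):
    # partial sum s_{2N-1}(t) of the square wave, evaluated at t = pi/(2N)
    t = math.pi / (2 * N)
    return sum(4.0 / ((2 * l + 1) * math.pi) * math.sin((2 * l + 1) * t)
               for l in range(N))

# the peak exceeds 1 by roughly 0.179 however large N becomes
assert s_peak(256) > 1.17
assert abs(s_peak(4096) - 1.17898) < 1e-3
```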
If the Fourier coefficients f_k are absolutely summable, i.e., if Σ_{k=−∞}^{∞} |f_k| < ∞, then the Gibbs phenomenon does not occur. Indeed, if Σ_{k=−∞}^{∞} |f_k| < ∞, then the maximal approximation error,

    max_{t∈R} |f(t) − s_N(t)| = max_{t∈R} | Σ_{|k|>N} f_k e^{jkω_0 t} | ≤ Σ_{|k|>N} |f_k|,

converges to zero as N → ∞. In such cases one says that the convergence from s_N(t) to f(t) is uniform in t.
2.6 Problems
2.1 Express sin²(ω_0 t + π/3) as a superposition of complex harmonic signals and as a superposition of sinusoids.
2.2 Suppose a T-periodic signal f(t) is such that its Fourier coefficients f_k satisfy f_{−k} = −f*_k for all integers k. Show that f(t) is imaginary-valued (that is, that j f(t) is real-valued).
2.3 Given is a real T-periodic signal f(t). Suppose, in addition, that f(t) is even. Show that f_k is real and that f_k = f_{−k} for any integer k.
2.4 Let f(t) be a T-periodic signal that on period [0, T] is given by f(t) = rect_{T/2}(t − T/2).
(a) Sketch the graph of f (t).
(b) Determine the Fourier coefficients fk of f (t).
(c) Sketch the amplitude and phase spectrum of f (t).
(d) Determine the real Fourier series of f (t).
(e) Write f (t) as an infinite sum of sinusoids.
2.5 Suppose f(t) is a 2π-periodic signal with line spectrum f_k = 1/(k² + 1).

(a) Show that f(t) is real and even.

(b) Determine the line spectrum of f(t) cos²(ω_0 t).

(c) Determine the line spectrum of f(2t).

(d) Determine the phase spectrum of f(2t − T/2).
2.6 Let f(t) be the 2π-periodic signal such that

    f(t) = t²,    (−π ≤ t ≤ π).
(a) Determine the complex Fourier coefficients of f (t) and write down the Fourier
series of f (t).
(b) Determine the real Fourier series of f (t).
(c) What is the third harmonic of f (t)?
(d) What is the amplitude spectrum of f (t)?
(e) Calculate 1 − 1/4 + 1/9 − 1/16 + ··· .
2.17 Let f_n(t) be the T-periodic signal given by f_n(t) = e^{jnω_0 t}, where ω_0 = 2π/T. Determine (f_n ∗ f_m)(t) for every n, m ∈ Z.
2.18 The T-periodic signals f(t) and g(t) on the interval (0, T] are given by f(t) = rect_{T/2}(t − T/4) and g(t) = rect_{T/2}(t − 3T/4). Determine (f ∗ g)(t).
2.19 Let f(t) be a real T-periodic signal with real Fourier series

    f(t) = ½ a_0 + Σ_{k=1}^{∞} ( a_k cos(kω_0 t) + b_k sin(kω_0 t) ).
2.23 Suppose we are given a T-periodic signal f(t) that satisfies ∫_{−T/2}^{T/2} f(t) dt = 0. The signal g(t) is defined through integration of f(t) as g(t) = ∫_0^t f(τ) dτ.

(a) Show that g(t) is T-periodic.

Suppose in addition that ∫_0^T t f(t) dt = 0; then the line spectrum g_k of g(t) follows from the line spectrum f_k of f(t) as

    g_k = { f_k/(jkω_0)   if k ≠ 0,
          { 0             if k = 0.

(You do not need to show this.)

(b) Find a signal h(t) such that g(t) = (f ∗ h)(t).
2.24 Find Σ_{n=1}^{∞} 1/n⁴. (Hint: Use Problem 2.6.)
Matlab problems:
2.25 In Example 2.3.2 we found the real line spectrum of f(t) = |sin(πt)|,

    a_k = (4/π) · 1/(1 − 4k²),    b_k = 0.    (2.16)

To compute partial sums of the corresponding Fourier series we open a file with name, say, mysum.m and enter the following code
function sn = mysum(t,N)
w0=2*pi;
sn=2/pi;
for k=1:N,
sn=sn+ (4/pi)*(1/(1-4*k^2))*cos(k*w0*t);
end
Then the sum (2.16) can be computed by typing the following commands at the Matlab prompt.

t=0:0.01:1;                 % Discretized time
N=5;
sn=mysum(t,N);              % Calculate partial sum
plot(t,sn)                  % Plot it
hold on                     % Keep this plot
plot(t,abs(sin(pi*t)),'r')  % Add a plot of f(t)
hold off
Try this Matlab code and then similarly plot the sum of the first N harmonics for N =
2, 5, 10 of the Fourier series of
    f(t) = t(π − t),    (0 ≤ t ≤ π).
3
Non-periodic signals and their continuous
spectra
This chapter is centered around a version of the Fourier theorem for non-periodic signals. Under certain conditions, non-periodic signals f(t) can be seen as a continuous superposition of harmonic signals, that is to say, as an integral of weighted harmonic signals

    f(t) = ∫_{−∞}^{∞} c(ω) e^{jωt} dω

(cf. (2.2)). We assume throughout this chapter that the signals f(t) are piecewise smooth, and that

    f(t) = (f(t+) + f(t−))/2

for every t. This may always be achieved by redefining f(t) at the points of discontinuity, if necessary.
The proofs of the above two lemmas are almost identical to the proofs of Lemma 2.2.3 and Lemma 2.2.2, and are therefore omitted here. Be aware that the lemmas assume our standing assumption that f(t) = (f(t+) + f(t−))/2 and that f(t) is piecewise smooth. Also note that the above two lemmas reduce to Lemma 2.2.1 and Lemma 2.2.2 for the case that f(t) has finite duration¹. In the above lemmas we assume that f(t) is absolutely integrable, which is something we have not yet defined. A signal f(t) is said to be absolutely integrable if

    ∫_{−∞}^{∞} |f(t)| dt < ∞.

Roughly speaking this means that f(t) should go to zero fast enough as t → ±∞. This condition is needed in the proofs in order to make sense of the integrals (3.1) and (3.2). Usually, if f(t) is not absolutely integrable, then (3.1) is not defined. For example if f(t) = 1, then f(t) is not absolutely integrable, and (3.1) is not defined: ∫_{−∞}^{∞} e^{−jωt} dt = ?. With that out of the way we can prove the famous result:
3.1.3. Theorem (The Fourier integral theorem). Let f(t) be an absolutely integrable signal. Then

    f(t) = (1/(2π)) ∫_{−∞}^{∞} F(ω) e^{jωt} dω,    (3.3)

where F(ω) is the Fourier transform of f(t), defined as

    F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt.    (3.4)
Proof. Substituting the integral expression of F(ω) in ∫_{−∞}^{∞} F(ω) e^{jωt} dω gives

    (1/(2π)) ∫_{−∞}^{∞} F(ω) e^{jωt} dω = lim_{a→∞} (1/(2π)) ∫_{−a}^{a} ∫_{−∞}^{∞} f(τ) e^{jω(t−τ)} dτ dω
        = {change order of integr.} = lim_{a→∞} (1/(2π)) ∫_{−∞}^{∞} f(τ) ∫_{−a}^{a} e^{jω(t−τ)} dω dτ
        = lim_{a→∞} (1/(2π)) ∫_{−∞}^{∞} f(τ) (e^{ja(t−τ)} − e^{−ja(t−τ)})/(j(t−τ)) dτ
        = lim_{a→∞} (1/π) ∫_{−∞}^{∞} f(τ) sin(a(t−τ))/(t−τ) dτ.

That the order of integration may be changed is due to the fact that f(t) is absolutely integrable. Since

    ∫_{−∞}^{∞} f(τ) sin(a(t−τ))/(t−τ) dτ = {v = t−τ} = ∫_{−∞}^{∞} f(t−v) sin(av)/v dv,

we see that

    (1/(2π)) ∫_{−∞}^{∞} F(ω) e^{jωt} dω = lim_{a→∞} (1/π) ∫_{−∞}^{∞} f(t−v) sin(av)/v dv = {Lemma 3.1.2} = f(t).

¹Finite duration means that the signal is zero for all |t| large enough.
Note the striking symmetry between the expressions for f(t) and F(ω). As it turns out, absolute integrability of f(t) is enough to ensure that the Fourier integral theorem is valid, i.e., we need not impose anything similar on F(ω).
3.1.4. Example. The rectangular pulse rect_a(t) as defined in Definition 1.4.1 is bounded and of finite duration, so it is absolutely integrable. Its Fourier transform F(ω) equals

    F(ω) = ∫_{−∞}^{∞} rect_a(t) e^{−jωt} dt = ∫_{−a/2}^{a/2} e^{−jωt} dt = sin(aω/2)/(ω/2).    (3.5)
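Formula (3.5) is easy to verify numerically. The sketch below (plain Python; the value of a and the test frequencies are arbitrary choices) approximates the defining integral by the midpoint rule; by symmetry only the cosine part contributes:

```python
import math

def rect_ft(w, a, n=20000):
    # numerically integrate rect_a(t) e^{-jwt} dt over [-a/2, a/2];
    # the imaginary part vanishes by symmetry, so integrate cos(w t)
    h = a / n
    total = 0.0
    for i in range(n):
        t = -a / 2 + (i + 0.5) * h  # midpoint rule
        total += math.cos(w * t) * h
    return total

a = 2.0
for w in [0.5, 1.0, 3.0]:
    exact = math.sin(a * w / 2) / (w / 2)  # formula (3.5)
    assert abs(rect_ft(w, a) - exact) < 1e-6
```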
The function F(ω) is known under a variety of names. It is called the Fourier transform of f(t), and sometimes it is referred to as the spectrum or frequency spectrum of f(t). A plot of F(ω) as a function of ω is also quite often called the spectrum or frequency spectrum of f(t). Since F(ω) is generally complex-valued, a plot of F(ω) consists generally of two parts, one of its amplitude versus frequency, and one of its phase versus frequency. The Fourier transform F(ω) is said to describe f(t) in the frequency domain or the ω-domain. For T-periodic signals we found in the previous chapter that the frequency spectrum is a line spectrum which may be regarded as a discrete signal defined at the frequencies kω_0 in the ω-domain. Non-periodic
signals, as we see, have a frequency spectrum that is built up from a continuum of frequencies.
The Fourier transform can reveal properties of the signal f(t) that may not be apparent so easily from f(t) itself. Consider Example 3.1.4, where we computed the Fourier transform of the rectangular pulse rect_a(t). Figure 3.1 shows for three values of a the corresponding rect_a(t) and its Fourier transform. What we notice is that for small a the Fourier transform is smeared out over a wide frequency range (Figure 3.1a,b). More important for our understanding of the Fourier transform is to understand what happens when a is large, such as in Figure 3.1(e,f). In that case rect_a(t) is constant equal to 1 for a wide time-span. As we see from Figure 3.1(f), this apparently implies that the Fourier transform is practically built up from the single frequency ω = 0 only: for all other frequencies the Fourier transform F(ω) is very small. Stated differently, the signal rect_a(t) for large a has its frequency content concentrated around ω = 0. This, in hindsight, is actually not so surprising, since F(0) being relatively large means, loosely speaking, that

    f(t) = (1/(2π)) ∫_{−∞}^{∞} F(ω) e^{jωt} dω

approximately equals

    (1/(2π)) ∫_{−Δ/2}^{+Δ/2} F(ω) e^{jωt} dω ≈ (Δ/(2π)) F(0) e^{j0t}

for some small Δ.
Figure 3.1: The pulses rect_a(t) for a = 1/2, 2, 5 (panels a, c, e) and their Fourier transforms F_1(ω) = sin(ω/4)/(ω/2), F_2(ω) = sin(ω)/(ω/2), F_3(ω) = sin(5ω/2)/(ω/2) (panels b, d, f); panels (g, h): a modulated pulse and F_4(ω) = ½(F_3(ω + ω_0) + F_3(ω − ω_0)).
moon's gravitational pull. Also note the little humps in |F(ω)| at about ω/(2π) = 4 and ω/(2π) = 6. (We have something more to say about this in Chapter 4.) Can you explain the spike of |F(ω)| at precisely ω/(2π) = 1?
Figure 3.2: A tide signal f(t) [meters] as a function of t [days] (top), and its amplitude spectrum |F(ω)| (bottom).
F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt. Often 'Fourier transform' is also used to refer to the mapping F that sends f(t) to F(ω):

    F{f(t)} = ∫_{−∞}^{∞} f(t) e^{−jωt} dt.

Likewise, the inverse Fourier transform refers either to the mapping F⁻¹ that sends F(ω) to f(t), or refers to f(t) itself, seen as the result of a given F(ω). The connection between f(t) and F(ω) is conveniently expressed as a transform pair

    f(t) ↔ F(ω).
3.2.1. Example. The rectangular pulse rect_a(t), (a > 0) has Fourier transform (see Example 3.1.4)

    F(ω) = 2 sin(aω/2)/ω.

Note that F(ω) is not absolutely integrable. If we interchange ω and t and replace ω with −ω, then we get the odd looking result that

    ∫_{−∞}^{∞} (sin(at/2)/(πt)) e^{−jωt} dt = rect_a(−ω) = rect_a(ω).

The Fourier transform of the signal sin(at/2)/(πt) apparently is rect_a(ω) even though sin(at/2)/(πt) is not absolutely integrable. Moreover

    sin(at/2)/(πt) = (1/(2π)) ∫_{−∞}^{∞} rect_a(ω) e^{jωt} dω.

The formulas of the Fourier integral theorem remain valid in this case.
3. Conjugation: f*(t) ↔ F*(−ω).

    F{f*(t)} = ∫_{−∞}^{∞} f*(t) e^{−jωt} dt = ( ∫_{−∞}^{∞} f(t) e^{jωt} dt )* = F*(−ω).

4. Time-scaling: f(at) ↔ (1/|a|) F(ω/a), (a ≠ 0).

If a > 0, then

    F{f(at)} = ∫_{−∞}^{∞} f(at) e^{−jωt} dt = {τ = at} = (1/a) ∫_{−∞}^{∞} f(τ) e^{−jωτ/a} dτ = (1/a) F(ω/a).

If a < 0 then the integral gains a minus sign since the boundaries of integration swap.

5. Time-shift: f(t − τ) ↔ F(ω) e^{−jωτ}.

    F{f(t − τ)} = ∫_{−∞}^{∞} f(t − τ) e^{−jωt} dt = {v = t − τ} = ∫_{−∞}^{∞} f(v) e^{−jω(v+τ)} dv = F(ω) e^{−jωτ}.
6. Frequency-shift: f(t) e^{jω_0 t} ↔ F(ω − ω_0).

    F{f(t) e^{jω_0 t}} = ∫_{−∞}^{∞} f(t) e^{−j(ω−ω_0)t} dt = F(ω − ω_0).

7. Differentiation with respect to time: f′(t) ↔ jω F(ω). Integration by parts gives

    F{f′(t)} = ∫_{−∞}^{∞} f′(t) e^{−jωt} dt = [f(t) e^{−jωt}]_{−∞}^{∞} + jω ∫_{−∞}^{∞} f(t) e^{−jωt} dt = jω F(ω),

provided that lim_{t→±∞} f(t) = 0, which is usually the case if f(t) is absolutely integrable.

8. Integration with respect to time: ∫_{−∞}^{t} f(τ) dτ ↔ F(ω)/(jω), provided that F(0) = 0.

Let g(t) = ∫_{−∞}^{t} f(τ) dτ; then g′(t) = f(t) and

    lim_{t→∞} g(t) = ∫_{−∞}^{∞} f(τ) dτ = F(0) = 0,

so that integration by parts gives

    F{g(t)} = ∫_{−∞}^{∞} g(t) e^{−jωt} dt = [g(t) e^{−jωt}/(−jω)]_{−∞}^{∞} + (1/(jω)) ∫_{−∞}^{∞} f(t) e^{−jωt} dt = F(ω)/(jω).

9. Differentiation with respect to frequency: −jt f(t) ↔ F′(ω). Integration by parts on the inverse transform gives

    ∫_{−∞}^{∞} F′(ω) e^{jωt} dω = −jt ∫_{−∞}^{∞} F(ω) e^{jωt} dω = 2π (−jt) f(t),

provided that lim_{ω→±∞} F(ω) = 0, but this is always the case for absolutely integrable f(t) (see Lemma 3.1.1).
Note the symmetry between rules 5 and 6, and between rules 7 and 9. Table 3.1 collects
the various properties. Table 3.2 brings together some of the more standard Fourier transform
pairs. In the derivation of these transform pairs extensive use is made of the above properties.
The following examples clarify Table 3.2.
Table 3.1: Properties of the Fourier transform; f(t) = (1/(2π)) ∫_{−∞}^{∞} F(ω) e^{jωt} dω and F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt.

    Property                   Time domain               Freq. domain               Condition
    Linearity                  a_1 f_1(t) + a_2 f_2(t)   a_1 F_1(ω) + a_2 F_2(ω)
    Reciprocity                F(t)                      2π f(−ω)
    Conjugation                f*(t)                     F*(−ω)
    Time-scaling               f(at)                     (1/|a|) F(ω/a)             a ∈ R, a ≠ 0
    Time-shift                 f(t − τ)                  F(ω) e^{−jωτ}
    Frequency-shift            f(t) e^{jω_0 t}           F(ω − ω_0)
    Differentiation (time)     f′(t)                     jω F(ω)                    lim_{t→±∞} f(t) = 0
    Integration (time)         ∫_{−∞}^{t} f(τ) dτ        F(ω)/(jω)                  F(0) = 0
    Differentiation (freq.)    −jt f(t)                  F′(ω)
Table 3.2: Standard Fourier transform pairs.

    f(t)                                     F(ω)                      Condition
    rect_a(t) = { 1 if |t| < a/2,            2 sin(aω/2)/ω             a > 0
                { 0 if |t| > a/2
    trian_a(t) = { 1 − |t|/a if |t| ≤ a,     4 sin²(aω/2)/(aω²)        a ∈ R, a > 0
                 { 0         if |t| > a
    e^{−a|t|}                                2a/(a² + ω²)              Re a > 0
    (tⁿ/n!) e^{−at} 1(t)                     1/(a + jω)^{n+1}          Re a > 0
    −(tⁿ/n!) e^{−at} 1(−t)                   1/(a + jω)^{n+1}          Re a < 0
    e^{−at²}                                 √(π/a) e^{−ω²/(4a)}       a ∈ R, a > 0
    sin(at/2)/(πt)                           rect_a(ω)                 a ∈ R, a > 0
3.2.2. Example (Rectangular and triangular pulse). The Fourier transform of the rectangular pulse is derived in Example 3.1.4. The triangular pulse trian_a(t) is defined in Definition 1.4.1. Suppose f(t) is given as

    f(t) = (1/a)( rect_a(t + a/2) − rect_a(t − a/2) ).

Note that F(0) = ∫_{−∞}^{∞} f(τ) dτ = 0 and that

    trian_a(t) = ∫_{−∞}^{t} f(τ) dτ.

Recall that rect_a(t) ↔ 2 sin(aω/2)/ω. Hence based on integration in time (Rule 8) and time-shift (Rule 5) we get that

    trian_a(t) ↔ (1/(jω)) · (1/a) · (2 sin(aω/2)/ω) · (e^{jωa/2} − e^{−jωa/2}) = 4 sin²(aω/2)/(aω²).
3.2.3. Example. In Example 1.2.7 we saw that for Re a > 0 there holds that ∫_0^∞ e^{−at} dt = 1/a. An immediate consequence is the frequency spectrum of f(t) = e^{−at} 1(t). Since Re(a + jω) = Re(a) > 0 we have that

    F{e^{−at} 1(t)} = ∫_0^∞ e^{−at} e^{−jωt} dt = ∫_0^∞ e^{−(a+jω)t} dt = 1/(a + jω).

Differentiating with respect to frequency (Rule 9) n times gives

    (−jt)ⁿ e^{−at} 1(t) ↔ (dⁿ/dωⁿ) 1/(a + jω) = (−1)ⁿ jⁿ n! · 1/(a + jω)^{n+1}.

Therefore

    (tⁿ/n!) e^{−at} 1(t) ↔ 1/(a + jω)^{n+1},    Re a > 0.    (3.6)

These transform pairs are for the cases where Re a > 0. If Re a < 0 then similarly it may be shown that

    −(tⁿ/n!) e^{−at} 1(−t) ↔ 1/(a + jω)^{n+1},    Re a < 0.

The inverse Fourier transform of 1/(a + jω)^{n+1} hence depends rather dramatically on a. If Re a > 0 then the inverse Fourier transform is (tⁿ/n!) e^{−at} 1(t), which is zero for all negative time, but if Re a < 0 then the inverse Fourier transform is −(tⁿ/n!) e^{−at} 1(−t), which is zero for positive time.
3.2.4. Example. Suppose Re a > 0. Then

    e^{−a|t|} = e^{−at} 1(t) + e^{at} 1(−t).

By linearity,

    e^{−a|t|} ↔ 1/(a + jω) + 1/(a − jω) = ((a − jω) + (a + jω))/((a + jω)(a − jω)) = 2a/(a² + ω²).
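The pair e^{−a|t|} ↔ 2a/(a² + ω²) lends itself to a quick numerical sanity check. A minimal sketch in plain Python (a = 1; the truncation length and step count are arbitrary illustrative choices):

```python
import math

def exp_ft(w, L=30.0, n=150000):
    # numerically integrate e^{-|t|} cos(w t) dt over [-L, L];
    # the tail beyond |t| = 30 is about e^{-30}, negligible
    h = 2 * L / n
    total = 0.0
    for i in range(n):
        t = -L + (i + 0.5) * h  # midpoint rule
        total += math.exp(-abs(t)) * math.cos(w * t) * h
    return total

for w in [0.0, 1.0, 2.5]:
    exact = 2.0 / (1.0 + w * w)  # 2a/(a^2 + w^2) with a = 1
    assert abs(exp_ft(w) - exact) < 1e-4
```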
3.2.5. Example. To determine the frequency spectrum of the Gaussian function e^{−at²}, (a > 0) we make use of the following standard integral

    ∫_{−∞}^{∞} e^{−t²} dt = √π.

Let F(ω) = ∫_{−∞}^{∞} e^{−at²} e^{−jωt} dt. Differentiation under the integral sign, followed by an integration by parts, shows that

    F′(ω) = ∫_{−∞}^{∞} (−jt) e^{−at²} e^{−jωt} dt = −(ω/(2a)) ∫_{−∞}^{∞} e^{−at²} cos(ωt) dt = −(ω/(2a)) F(ω).

We end up with a first order differential equation

    F′(ω) = −(ω/(2a)) F(ω).

Next separate the variables,

    F′(ω)/F(ω) = −ω/(2a).

After integrating both sides from 0 to ω, we get that ln|F(ω)| − ln|F(0)| = −ω²/(4a), or,

    F(ω) = F(0) e^{−ω²/(4a)},

in which

    F(0) = ∫_{−∞}^{∞} e^{−at²} dt = {τ = t√a} = (1/√a) ∫_{−∞}^{∞} e^{−τ²} dτ = √(π/a).
3.3 Examples

Often the frequency spectrum F(ω) can be found through a combination of the rules of Table 3.1 and the transform pairs of Table 3.2.

3.3.1. Example. The frequency spectra of rect_a(t) cos(ω_0 t) and e^{−at} cos(ω_0 t) 1(t) may be obtained using the modulation theorem (Rule 6). Since cos(ω_0 t) = (e^{jω_0 t} + e^{−jω_0 t})/2,

    rect_a(t) cos(ω_0 t) ↔ sin(a(ω − ω_0)/2)/(ω − ω_0) + sin(a(ω + ω_0)/2)/(ω + ω_0),    (a > 0),

    e^{−at} cos(ω_0 t) 1(t) ↔ (a + jω)/((a + jω)² + ω_0²),    (Re a > 0).
3.3.2. Example. With help of the reciprocity rule (Rule 2) it follows from e^{−a|t|} ↔ 2a/(a² + ω²) that

    2a/(a² + t²) ↔ 2π e^{−a|ω|},

and with the modulation rule that

    (2a/(a² + t²)) cos(ω_0 t) ↔ π( e^{−a|ω−ω_0|} + e^{−a|ω+ω_0|} ).

Similarly, reciprocity applied to trian_{2a}(t) ↔ 2 sin²(aω)/(aω²) yields

    2 sin²(at)/(at²) ↔ 2π trian_{2a}(−ω) = 2π trian_{2a}(ω).
The most important family of Fourier transforms is that of the rational functions of frequency.
3.3.3. Example (Rational functions in frequency domain). In this example we calculate the
inverse Fourier transform of a rational function of the form
    F(ω) = P(jω)/Q(jω) = (p_m (jω)^m + p_{m−1} (jω)^{m−1} + ··· + p_1 (jω) + p_0) / (q_n (jω)^n + q_{n−1} (jω)^{n−1} + ··· + q_1 (jω) + q_0).

The coefficients p_i and q_i are assumed real. We shall further assume that the rational function is strictly proper, which means that the degree of the numerator P is less than that of the denominator Q. Additionally we assume that Q has no zeros on the imaginary axis, i.e., that Q(jω) ≠ 0 for every ω ∈ R. In Chapter 4, when we consider generalized Fourier transforms, we will have a way of coping with imaginary zeros of Q, and also the case that P/Q is not strictly proper can then be dealt with.

For rational functions there is a straightforward algorithm that always leads to an explicit form of the inverse Fourier transform f(t). Here we illustrate it by example. Suppose that

    F(ω) = 6jω / ((jω + 1)(4 + ω²)).    (3.7)
With s = jω we have 4 + ω² = 4 − s², and a partial fraction expansion gives

    6s/((s + 1)(4 − s²)) = 3/(s + 2) − 2/(s + 1) − 1/(s − 2).

A partial fraction expansion of a rational function is an expansion of the rational function as a sum of simple terms of the form α/(s − β)^k, such as done above. Appendix A discusses partial fraction expansion in more detail and shows for example that a partial fraction expansion of a strictly proper rational function always exists, and it explains how to construct a partial fraction expansion. Continuing with the example, we see that

    F(ω) = 3/(jω + 2) − 2/(jω + 1) − 1/(jω − 2).

The inverse Fourier transform now follows from Table 3.2 as being

    f(t) = (3e^{−2t} − 2e^{−t}) 1(t) + e^{2t} 1(−t).
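The partial fraction expansion can be spot-checked by evaluating both sides at a few complex points. A minimal sketch in plain Python (the sample points are arbitrary choices away from the poles):

```python
# Spot-check of the expansion used above:
# 6s/((s+1)(4 - s^2)) = 3/(s+2) - 2/(s+1) - 1/(s-2)
for s in [0.5 + 1.0j, -0.3 + 2.0j, 1.7 - 0.4j]:
    lhs = 6 * s / ((s + 1) * (4 - s * s))
    rhs = 3 / (s + 2) - 2 / (s + 1) - 1 / (s - 2)
    assert abs(lhs - rhs) < 1e-12
```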
3.3.4. Example (Maple example). In Maple-V4 the inverse Fourier transform of (3.7) can be
produced by the following three commands.
with(inttrans):
F:=6*I*omega/(I*omega+1)/(4+omega^2):
invfourier(F,omega,t);
    (f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ.

The convolution product is commutative:

    ∫_{−∞}^{∞} f(τ) g(t − τ) dτ = {v = t − τ} = ∫_{−∞}^{∞} g(v) f(t − v) dv = (g ∗ f)(t).

Sufficient for the existence of (f ∗ g)(t) is that f(t) is bounded while g(t) is absolutely integrable, or the other way around. Another important class of signals for which convolutions exist is the class of causal signals. These play a prominent role in the theory of systems, considered in Chapter 5.
Figure 3.3: A causal signal: f(t) = 0 for t < 0.
3.4.2. Definition. A signal f (t) is causal if f (t) = 0 for all t < 0. (See Figure 3.3.)
A signal of the form f (t) 1(t) is causal because 1(t) = 0 for t < 0. The name causal may
seem a bit unfortunate, since there is not really a cause or an effect. The justification for
this name will emerge later when we consider systems in Chapter 5. As claimed, convolutions
of causal signals are defined for every t. Indeed if f(t) and g(t) are causal, then

    (f ∗ g)(t) = ((f · 1) ∗ (g · 1))(t) = ∫_{−∞}^{∞} f(τ) 1(τ) g(t − τ) 1(t − τ) dτ
               = ∫_0^{∞} f(τ) g(t − τ) 1(t − τ) dτ
               = { 0                          if t < 0,
                 { ∫_0^t f(τ) g(t − τ) dτ     if t ≥ 0,
               = ( ∫_0^t f(τ) g(t − τ) dτ ) 1(t).

The convolution of two causal signals apparently is itself causal, and since for each t the integration above is over a finite interval [0, t] it follows that the convolution exists for every t and every piecewise smooth f(t) and g(t).
3.4.3. Example. Convolution with the unit step amounts to integration:

    (f ∗ 1)(t) = ∫_{−∞}^{∞} f(τ) 1(t − τ) dτ = ∫_{−∞}^{t} f(τ) dτ.
3.4.4. Example. Consider the rectangular pulse rect_a(t). The convolution of rect_a(t) with itself is given by

    (rect_a ∗ rect_a)(t) = ∫_{−∞}^{∞} rect_a(τ) rect_a(t − τ) dτ = ∫_{−a/2}^{a/2} rect_a(t − τ) dτ = ∫_{t−a/2}^{t+a/2} rect_a(v) dv.    (3.8)

The integrand rect_a(v) is 1 on the interval [−a/2, a/2] and it is zero elsewhere. If the interval [t − a/2, t + a/2] over which is integrated has no overlap with the interval [−a/2, a/2] where the integrand rect_a(v) is nonzero, then the integral (3.8) will be zero. We distinguish four cases for t corresponding to how the two intervals overlap: t < −a, t > a, t ∈ [−a, 0) and t ∈ [0, a].

If t < −a then t + a/2 < −a/2, so that [t − a/2, t + a/2] is entirely to the left of [−a/2, a/2]. The integral (3.8) is therefore zero for all t < −a. Similarly, if t > a then a/2 < t − a/2, so the interval [t − a/2, t + a/2] is entirely to the right of [−a/2, a/2], and (3.8) is therefore 0.

If t ∈ [−a, 0) then the interval [t − a/2, t + a/2] is to the left of [−a/2, a/2] but they overlap on the interval [−a/2, t + a/2]. The integral (3.8) then equals ∫_{−a/2}^{t+a/2} 1 dv = a + t.
Similarly, if t ∈ [0, a] the overlap is [t − a/2, a/2] and the integral equals a − t. Combining the four cases we find

    (rect_a ∗ rect_a)(t) = a trian_a(t).    (3.9)
Next we formulate and prove the convolution theorem for the Fourier integral. In fact we consider two versions of the convolution theorem, one for convolution in the time domain and one for convolution in the frequency domain. In the proofs we shall silently assume that changing the order of integration is allowed. It is allowed, but we do not prove it.

3.4.5. Theorem (Convolution theorem in the time domain). Suppose that f(t) ↔ F(ω) and g(t) ↔ G(ω). Then

    (f ∗ g)(t) ↔ F(ω)G(ω).
Proof. Determine the Fourier transform of (f ∗ g)(t) as follows:

    F{(f ∗ g)(t)} = ∫_{−∞}^{∞} e^{−jωt} ∫_{−∞}^{∞} f(τ) g(t − τ) dτ dt
                  = ∫_{−∞}^{∞} f(τ) ∫_{−∞}^{∞} e^{−jωt} g(t − τ) dt dτ
                  = {g(t − τ) ↔ G(ω) e^{−jωτ}} = ∫_{−∞}^{∞} f(τ) G(ω) e^{−jωτ} dτ = F(ω)G(ω).
3.4.6. Theorem (Convolution theorem in the frequency domain). Suppose that f(t) ↔ F(ω) and g(t) ↔ G(ω). Then

    f(t)g(t) ↔ (1/(2π)) (F ∗ G)(ω).

Proof. Substitute f(t) = (1/(2π)) ∫_{−∞}^{∞} F(u) e^{jut} du in F{f(t)g(t)} and change the order of integration:

    F{f(t)g(t)} = ∫_{−∞}^{∞} f(t) g(t) e^{−jωt} dt = (1/(2π)) ∫_{−∞}^{∞} F(u) ∫_{−∞}^{∞} g(t) e^{−j(ω−u)t} dt du
                = (1/(2π)) ∫_{−∞}^{∞} F(u) G(ω − u) du = (1/(2π)) (F ∗ G)(ω).    (3.10)
3.4.7. Example. In Example 3.4.4 it was shown that (rect_a ∗ rect_a)(t) = a trian_a(t). Application of the convolution theorem gives

    F{trian_a(t)} = (1/a) (F{rect_a(t)})² = 4 sin²(aω/2)/(aω²).
3.4.8. Example. Given that f(t) = e^{−at} 1(t) ↔ 1/(a + jω) for any Re a > 0, it follows by the convolution theorem that

    F⁻¹{1/(a + jω)²} = (f ∗ f)(t) = ( ∫_0^t e^{−aτ} e^{−a(t−τ)} dτ ) 1(t) = t e^{−at} 1(t).
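Because the integrand e^{−aτ} e^{−a(t−τ)} = e^{−at} does not depend on τ, this convolution is easy to check numerically. A minimal sketch in plain Python (a = 1; the sample times are arbitrary choices):

```python
import math

def conv_at(t, n=20000):
    # numerically evaluate (f*f)(t) = int_0^t e^{-tau} e^{-(t-tau)} dtau
    # for f(tau) = e^{-tau} 1(tau)
    h = t / n
    total = 0.0
    for i in range(n):
        tau = (i + 0.5) * h  # midpoint rule
        total += math.exp(-tau) * math.exp(-(t - tau)) * h
    return total

for t in [0.5, 1.0, 3.0]:
    assert abs(conv_at(t) - t * math.exp(-t)) < 1e-9  # t e^{-t}
```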
In the case of periodic signals we found a way to express the power of a periodic signal in terms of its line spectrum. The result was called Parseval's theorem. Similarly there is a Parseval's theorem for non-periodic signals that expresses the energy content of a non-periodic signal in terms of its frequency spectrum. The energy content of a signal f(t) was defined as

    E_f = ∫_{−∞}^{∞} |f(t)|² dt.

Parseval's identity for non-periodic signals reads

    ∫_{−∞}^{∞} |f(t)|² dt = (1/(2π)) ∫_{−∞}^{∞} |F(ω)|² dω.
Indeed, by the convolution theorem in the frequency domain,

    ∫_{−∞}^{∞} f(t) g(t) e^{−jωt} dt = (1/(2π)) (F ∗ G)(ω) = (1/(2π)) ∫_{−∞}^{∞} F(v) G(ω − v) dv.

Now take ω = 0,

    ∫_{−∞}^{∞} f(t) g(t) dt = (1/(2π)) ∫_{−∞}^{∞} F(v) G(−v) dv.

For a more symmetrical version, replace g(t) with g*(t) and the corresponding Fourier transform G(ω) with G*(−ω) (see Rule 3). Then we get

    ∫_{−∞}^{∞} f(t) g*(t) dt = (1/(2π)) ∫_{−∞}^{∞} F(v) G*(v) dv.    (3.11)
For example, application of (3.11) with f(t) = g(t) = sin²(t)/(πt²), whose frequency spectrum is trian_2(ω), gives

    ∫_{−∞}^{∞} sin⁴(t)/(π²t⁴) dt = (1/(2π)) ∫_{−2}^{2} (trian_2(ω))² dω = (1/π) ∫_0^2 (1 − ω/2)² dω = 2/(3π),

so that

    ∫_{−∞}^{∞} sin⁴(t)/t⁴ dt = 2π/3.
An operation closely related to the convolution product is the cross correlation of two signals.

3.4.11. Definition (Cross correlation). Let f_1(t) and f_2(t) be two signals with E_{f_1} < ∞ and E_{f_2} < ∞. The cross correlation ρ_{1,2}(t) of f_1(t) and f_2(t) is defined as

    ρ_{1,2}(t) = ∫_{−∞}^{∞} f_1(t + τ) f_2*(τ) dτ.

The Fourier transform of ρ_{1,2}(t) follows from the convolution theorem on noting that ρ_{1,2}(t) is the convolution product of the signals f(t) = f_1(t) and g(t) = f_2*(−t) with respective frequency spectra F_1(ω) and F_2*(ω). Hence

    ρ_{1,2}(t) ↔ F_1(ω) F_2*(ω).    (3.12)

If f_2(t) = f_1(t) = f(t) then ρ(t) = ρ_{1,1}(t) is called the autocorrelation of f(t). The spectrum of ρ(t) is therefore equal to F(ω)F*(ω) = |F(ω)|², which is called the energy spectrum or spectral density of f(t). The inverse Fourier transform now yields the formula

    ρ(t) = ∫_{−∞}^{∞} f(t + τ) f*(τ) dτ = (1/(2π)) ∫_{−∞}^{∞} |F(ω)|² e^{jωt} dω.

Substitute t = 0 and what follows is again Parseval's equality. Moreover it follows that

    |ρ(t)| ≤ (1/(2π)) ∫_{−∞}^{∞} |F(ω)|² |e^{jωt}| dω = (1/(2π)) ∫_{−∞}^{∞} |F(ω)|² dω = ρ(0).

In other words, the autocorrelation is maximal for t = 0.
Figure 3.4: Original signal f(t), sampled signal f[n], held signal g(t) (via a zero order hold).
Figure 3.5: The signals f(t) and g(t) = f(t) + sin(πt/T) have identical samples.
It is to be expected that with sampling some information of the original continuous-time signal f(t) is lost. It is unlikely that the samples f[n] are enough to reconstruct, by some sort of holding device, the signal f(t). For example if of the signal f(t) shown in Figure 3.5 we are only given its samples f(nT), then we cannot be sure that the samples come from f(t) and not from g(t) = f(t) + sin(πt/T), because g(t) and f(t) are identical at the sampling instants t = nT. However, in this example the signal g(t) contains a term sin(πt/T) which is a signal whose frequency may be unrealistically high if T is very small. If we know that the samples are taken from a signal that does not have such high frequencies, then we can discard g(t). In this section, we show that signals that are bandlimited can be reconstructed from their samples f(nT) provided that the sampling time is small enough.
This happens when the sampling frequency ω_s = 2π/T satisfies

    ω_s > 2ω_b.    (3.13)

The value 2ω_b is known as the Nyquist rate. To allow for reconstruction of a signal f(t) with bandwidth ω_b it thus suffices to take the sampling frequency higher than 2ω_b:
3.5.2. Theorem (Shannon's sampling theorem). Let f(t) ↔ F(ω) and suppose that F(ω) = 0 for all |ω| > ω_b. Let f[n] = f(nT), n ∈ Z be the sampled signal of f(t) with sampling frequency ω_s = 2π/T. If ω_s > 2ω_b, then f(t) is uniquely determined by its samples f[n] via

    f(t) = Σ_{n=−∞}^{∞} f[n] · sin(ω_s(t − nT)/2)/(ω_s(t − nT)/2).    (3.14)
Proof. We have

    f(t) = F⁻¹{F(ω)} = (1/(2π)) ∫_{−∞}^{∞} F(ω) e^{jωt} dω = (1/(2π)) ∫_{−ω_s/2}^{ω_s/2} F(ω) e^{jωt} dω.

On the interval [−ω_s/2, ω_s/2] we express F(ω) as a Fourier series with period ω_s,

    F(ω) = Σ_{n=−∞}^{∞} F_n e^{−jnTω},    ω ∈ (−ω_s/2, ω_s/2),

in which T = 2π/ω_s. Note that T here is precisely the sampling period. Since F(ω) has its support on (−ω_s/2, ω_s/2) we may multiply with the rectangular pulse rect_{ω_s}(ω) without changing the result,

    F(ω) = Σ_{n=−∞}^{∞} F_n e^{−jnTω} rect_{ω_s}(ω),    ω ∈ R.    (3.15)
The coefficients turn out to be F_n = T f[n], and term-by-term inverse transformation of (3.15) then yields (3.16):

    f(t) = Σ_{n=−∞}^{∞} f[n] T sin(ω_s(t − nT)/2)/(π(t − nT)) = Σ_{n=−∞}^{∞} f[n] sin(ω_s(t − nT)/2)/(ω_s(t − nT)/2).
Figure 3.6: Graph of the terms f[n] sin(π(t − n))/(π(t − n)).

For T = 1 (i.e., ω_s = 2π) the reconstruction formula (3.14) reads

    f(t) = Σ_{n=−∞}^{∞} f[n] sin(π(t − n))/(π(t − n)).
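A truncated version of (3.14) can be tried out directly. The sketch below (plain Python) samples the bandlimited signal f(t) = sin(πt)/(πt) with T = 0.5, so that ω_s = 4π exceeds the Nyquist rate 2π, and reconstructs it at a few off-grid times; the truncation bound N is an arbitrary choice:

```python
import math

def f(t):
    # a signal bandlimited to |w| <= pi
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

T = 0.5                     # sampling period; w_s = 2*pi/T = 4*pi
ws = 2 * math.pi / T

def reconstruct(t, N=4000):
    # truncated version of (3.14); exact only in the limit N -> infinity
    total = 0.0
    for n in range(-N, N + 1):
        x = ws * (t - n * T) / 2
        kernel = 1.0 if x == 0 else math.sin(x) / x
        total += f(n * T) * kernel
    return total

for t in [0.25, 0.7, 1.3]:
    assert abs(reconstruct(t) - f(t)) < 1e-2
```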
Figure 3.7: (a) the spectrum M(ω) of the bandlimited message m(t); (b) the spectrum X_am(ω) of the modulated signal, centered around ±ω_c; (c) the spectrum Y(ω) of the demodulated signal y(t).
    X_am(ω) = ½( M(ω − ω_c) + M(ω + ω_c) ).
Now, m(t) is bandlimited with bandwidth much smaller than ω_c, see Figure 3.7(a). The frequency spectrum of the modulated signal x_am(t) therefore looks like the one in Figure 3.7(b). It is a signal whose spectrum is more or less centered around ±ω_c. This signal now is transmitted. Incidentally, since the two lobes of X_am(ω) have the same shape as that of M(ω) it will be intuitively clear that amplitude modulation incurs no loss of information. Indeed, the original signal m(t) can be completely recovered from its modulated version x_am(t). This is done at the user's end. The receiver multiplies the received signal x_am(t) once again with cos(ω_c t). The resulting product

    y(t) = x_am(t) cos(ω_c t)

has, by the frequency-shift rule, a frequency spectrum

    Y(ω) = ½( X_am(ω − ω_c) + X_am(ω + ω_c) )
         = ¼ M(ω − 2ω_c) + ½ M(ω) + ¼ M(ω + 2ω_c).

The frequency spectrum of y(t) is shown in Figure 3.7(c). The original signal m(t) now follows as

    M(ω) = 2Y(ω) rect_{2ω_c}(ω),

that is, removal of the high-frequency components of y(t) and then doubling the signal brings back m(t). For reasons of exposition we glossed over several practical problems, such as how the receiver knows the carrier angular frequency ω_c.
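At the heart of the demodulation step is the identity cos²(ω_c t) = ½ + ½ cos(2ω_c t): multiplying x_am(t) = m(t) cos(ω_c t) once more by the carrier gives m(t)/2 plus a component at 2ω_c that the lowpass filter removes. A quick numerical check (plain Python; the carrier and message below are arbitrary illustrative choices):

```python
import math

wc = 2 * math.pi * 100.0   # assumed carrier frequency

def m(t):
    # assumed low-frequency message signal
    return math.cos(2 * math.pi * t)

for t in [0.01, 0.37, 1.23]:
    y = m(t) * math.cos(wc * t) ** 2
    expected = 0.5 * m(t) + 0.5 * m(t) * math.cos(2 * wc * t)
    assert abs(y - expected) < 1e-9
```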
3.7 Problems
3.1 Let f(t) be the signal

    f(t) = { e^{−t}   if 5 < t < 6,
           { 0        if t < 5 or t > 6.

Determine the frequency spectrum of f(t). Are f(t) and its frequency spectrum absolutely integrable?
3.2 Let f(t) ↔ F(ω) and ω_0 > 0. Determine the frequency spectra of the following signals.

(a) f(t) sin(ω_0 t),

(b) f(at) e^{jω_0 t}, (a ≠ 0).
3.4 Let f(t) ↔ F(ω). Determine the frequency spectra of the following signals.

(a) 2 f(3t − 1),

(b) e^{2jt} f(t − 2),

(c) t f(t),

(d) f(t/2),

(e) f(1 − t),

(f) f(t) cos²(ω_0 t).
3.6 Let f(t) ↔ F(ω). Determine f(t) for the cases that F(ω) equals

(a) F(ω) = rect_{2a}(ω − ω_0) + rect_{2a}(ω + ω_0),

(b) F(ω) = (2 + jω)/(4 + 5jω − ω²),

(c) F(ω) = 9/((1 + jω)²(2 + jω)).
(a) Is the signal f(t) uniquely determined by its samples at the time instants t = 0, 1/2, 1, 3/2, . . .? Motivate your answer.

(b) Determine f[n] for n ∈ Z.

(c) Determine the energy content of f(t).
3.9 Given are the signals

    f(t) = e^{−|t|}   and   h(t) = sin(at)/(πt),    (a > 0).
3.10 Suppose f(t) has frequency spectrum F(ω) = 1/(jω + b) with b a real constant. Determine the frequency spectrum G(ω) of the following signals g(t).

(a) g(t) = 1/(jt − b).
3.11 Suppose f(t) ↔ F(ω). Determine f(t) for the cases that F(ω) equals

(a) F(ω) = jω trian_2(ω),

(b) F(ω) = e^{−jωt_0} rect_8(ω),

(c) F(ω) = cos(ω) rect_2(ω).
More involved problems:
3.12 Let f(t) be an absolutely integrable signal and let the signal g(t) be given by

    g(t) = (1/a) ∫_{t−a/2}^{t+a/2} f(u) du.
is the most likely candidate. In the proof of Shannon's sampling theorem we derived an explicit expression for F(ω) of (3.14):
F(ω) = T Σ_{n=−∞}^{∞} f(nT) e^{−jnωT} rect_{ω_s}(ω).
The Matlab macro fouriertrans computes this F(ω) on an equidistant frequency grid in [0, ω_s/2].
function [w,F]=fouriertrans(t,f,N)
%[w,F]=fouriertrans(t,f,N). Continuous-time Fourier transform
%  IN: t, vector of equidistant sample times
%      f, vector of sampled function values f(t)
%      N, number of grid points (preferably N=2^k)
% OUT: w, row vector of sampled frequencies
%      F, row vector of sampled Fourier trans. F(w)
if N < length(f)
   disp('Not all data is used.');
end
T=t(2)-t(1);                    % sampling time
ws=2*pi/T;                      % sampling frequency
points=1:(N/2);
w=(points-1)*ws/N;              % N/2 grid points from [0,ws/2)
F=T*fft(f(:),N);                % note: f(:) is a column vector
F=F(points).'.*exp(-j*t(1)*w);  % T sum f[n]e^(-jnwT)rect_ws(w)
It makes use of the famous fast Fourier transform which is an ingenious trick to speed
up computation of discrete-to-discrete Fourier transforms (not considered in this course).
To plot the amplitude spectrum |F(ω)| of f(t) = e^{−t} 1(t) + sin(10t) we type at the Matlab prompt
T=0.1;
t=0:T:20;
f=exp(-t)+sin(10*t);
[w,F]=fouriertrans(t,f,512);
plot(w,abs(F));
Now let
f(t) = { t − 1  if t ∈ [0, 2),
         0      if t ∉ [0, 2).
Make plots of the amplitude spectrum |F(ω)| on the frequency interval [0, 20]. Do this numerically with the help of fouriertrans and compare it with the exact |F(ω)| (use Maple). You may need to experiment with T and the length of the time-interval over which f(t) is sampled.
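For readers working without Matlab, the same computation can be sketched in Python with numpy (our own transcription of fouriertrans, not part of the course material):

```python
import numpy as np

def fouriertrans(t, f, N):
    """F(w) = T*sum_n f(nT) e^{-j n w T} on N/2 grid points of [0, ws/2)."""
    T = t[1] - t[0]                # sampling time
    ws = 2*np.pi/T                 # sampling (angular) frequency
    w = np.arange(N//2)*ws/N       # grid points in [0, ws/2)
    F = T*np.fft.fft(f, N)[:N//2]*np.exp(-1j*t[0]*w)
    return w, F

T = 0.1
t = np.arange(0, 20 + T/2, T)
f = np.exp(-t) + np.sin(10*t)
w, F = fouriertrans(t, f, 512)
# |F| shows the flat-ish spectrum of e^{-t}1(t) plus a sharp peak near w = 10
print(w[np.argmax(np.abs(F))])
```

The dominant peak of |F| sits at the sinusoid's angular frequency ω = 10, up to the grid spacing ω_s/N.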
4
Generalized functions and Fourier transforms
The Fourier theory of the two preceding chapters is still rather limited in scope since the signals
to which it applies are required to be piecewise smooth and absolutely integrable, with the single
exception of f (t) = sin(t)/t. The unit step 1(t) for one could not be Fourier transformed. A
signal f (t) whose F() is identical to 1 for all frequencies (assuming this makes sense) is
another signal that could not be dealt with.
In this chapter we shall see that the introduction of the delta function or, more generally, of generalized functions gives us a way of dealing with the above-mentioned problems. A thorough treatment of delta functions is beyond the scope of this course. We have to limit the treatment at various stages and appeal to intuitive arguments. The properties in the end lead to a set of rules of calculus for delta functions that we may apply without running into problems.
r_n(t) = { n  if |t| < 1/(2n),
           0  if |t| > 1/(2n).        (4.1)
As n goes to infinity, the rectangular pulses r_n(t) become spikier and spikier, with their spike around t = 0, see Figure 4.1. However, the area between the spike and the x-axis, ∫_{−∞}^{∞} r_n(t) dt, equals 1. We now naively define the delta function δ(t) as the limit
δ(t) = lim_{n→∞} r_n(t).        (4.2)
The delta function is usually depicted as done in Figure 4.2, i.e., depicted by the zero function with a fat arrow pointing upwards at t = 0. The idea is to see the delta function as a spike at t = 0.
Proof. First replace δ(t) with r_n(t). The mean-value theorem gives that
∫_{−∞}^{∞} r_n(t) f(t) dt = n ∫_{−1/(2n)}^{1/(2n)} f(t) dt = n · (1/n) f(ξ_n) = f(ξ_n)
for some ξ_n ∈ (−1/(2n), 1/(2n)) depending on n. As a last step we take the limit n → ∞:
∫_{−∞}^{∞} δ(t) f(t) dt = lim_{n→∞} f(ξ_n) = f(0).
r_n(at − b) = { n  if |at − b| < 1/(2n),
                0  if |at − b| > 1/(2n).
This is very much like a shifted copy of r_n(t) with the difference that the spike does not have a unit area. The width of the spike of r_n(at − b) is easily seen to be 1/(|a|n), so the area of the spike is 1/|a|. In the limit as n → ∞ the spike therefore approaches 1/|a| times the delta function that has its spike at t = b/a:
δ(at − b) = (1/|a|) δ(t − b/a).        (4.3)
Indeed,
∫_{−∞}^{∞} δ(at − b) f(t) dt = {τ = at − b} = (1/|a|) ∫_{−∞}^{∞} δ(τ) f(τ/a + b/a) dτ = (1/|a|) f(b/a).
An immediate special case is that
∫_{−∞}^{∞} δ(t − b) f(t) dt = f(b),        (4.4)
(Figure: the shifted delta function δ(t − b), a spike at t = b.)
The sifting property (4.4) says that ∫_{−∞}^{∞} δ(t − b) f(t) dt equals f(t) at the t where δ(t − b) has its spike.
It is also possible to determine the convolution product (f ∗ δ)(t) of a signal f(t) and the delta function δ(t):
(f ∗ δ)(t) = ∫_{−∞}^{∞} f(τ) δ(t − τ) dτ = f(t).        (4.5)
Here we used the fact that δ(t − τ) = δ(τ − t) as a function of τ has its spike at τ = t. A final
useful property to have is the following.
4.1.2. Lemma. If f(t) is continuous at t = b, then
f(t) δ(t − b) = f(b) δ(t − b).        (4.6)
Proof (idea). This is another instance where we shall use the last-limit-you-take rule. The function f(t) r_n(t − b) is zero for all |t − b| > 1/(2n). On the interval [b − 1/(2n), b + 1/(2n)] it generally has a spike. Since lim_{n→∞} ∫_{−∞}^{∞} f(t) r_n(t − b) dt = f(b), the area of this spike approaches f(b) as n → ∞, so that f(t) r_n(t − b) approaches f(b) times the delta function δ(t − b).
4.1.3. Example.
1. t δ(t) = 0 · δ(t) = 0, i.e. the zero signal.
2. e^{jω_0 t} δ(t − b) = e^{jω_0 b} δ(t − b).
3. δ(−t) = δ(t). This is because of the scaling property for a = −1 (Table 4.1).
4. ∫_{−∞}^{t} δ(τ) dτ = (δ ∗ 1)(t) = 1(t).
Property       Rule                                         Condition
Sifting        ∫_{−∞}^{∞} δ(t − b) f(t) dt = f(b)           f(t) continuous at t = b
               f(t) δ(t − b) = f(b) δ(t − b)                f(t) continuous at t = b
Convolution    (f ∗ δ)(t) = f(t)
Scaling        δ(at − b) = (1/|a|) δ(t − b/a)               a ≠ 0
Integration    ∫_{−∞}^{t} δ(τ) dτ = 1(t)                    t ≠ 0
Table 4.1: Properties and rules of calculus for the delta function.
sgn(t) := {  1  if t > 0,
             0  if t = 0,
            −1  if t < 0.
The function sgn(t) is the generalized derivative of f(t) = |t|, because
|t| = ∫_{0}^{t} sgn(τ) dτ.
It may be shown that the product rule and chain rule of differentiation remain valid for derivatives in the generalized sense.
Figure 4.4: f(t) = 1(t) − 1(t − 1) + e^{−t} 1(t − 1) and its generalized derivative f′(t).
4.2.2. Example. Consider the signal f(t) depicted in Figure 4.4. It is the signal
f(t) = 1(t) − 1(t − 1) + e^{−t} 1(t − 1).
Its generalized derivative f′(t) equals
f′(t) = δ(t) − δ(t − 1) − e^{−t} 1(t − 1) + e^{−t} δ(t − 1)
      = δ(t) − δ(t − 1) − e^{−t} 1(t − 1) + e^{−1} δ(t − 1)
      = δ(t) − (1 − e^{−1}) δ(t − 1) − e^{−t} 1(t − 1).
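This computation can be double-checked symbolically. In the Python/sympy sketch below (our own illustration) Heaviside and DiracDelta play the roles of 1(t) and δ(t), and both expressions for f′(t) are applied to the same test function e^{−t²}:

```python
import sympy as sp

t = sp.symbols('t', real=True)
f = sp.Heaviside(t) - sp.Heaviside(t - 1) + sp.exp(-t)*sp.Heaviside(t - 1)
fprime = sp.diff(f, t)             # generalized derivative, via sympy

# The simplified form claimed in Example 4.2.2:
claim = (sp.DiracDelta(t) - (1 - sp.exp(-1))*sp.DiracDelta(t - 1)
         - sp.exp(-t)*sp.Heaviside(t - 1))

phi = sp.exp(-t**2)                # a smooth, rapidly decaying test function
a = sp.integrate(fprime*phi, (t, -sp.oo, sp.oo))
b = sp.integrate(claim*phi, (t, -sp.oo, sp.oo))
print(sp.simplify(a - b))          # 0: both derivatives act identically
```

Equality of the two integrals for a test function is exactly how equality of generalized functions is defined in Section 4.4.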
From Theorem 3.1.3 we know that absolutely integrable signals can be recovered from their
Fourier transform through an inverse Fourier transform which is in the form of an integral. In
the case of the delta function, however, this integral
(1/2π) ∫_{−∞}^{∞} e^{jωt} dω
diverges. In a proper setup the inverse Fourier transform can be given a meaning and can be
shown to equal δ(t), see Section 4.4. For now however we simply define F{δ(t)} = 1. Its implication, that δ(t) is built up from all frequencies with equal weight F(ω) = 1, is a bit difficult to interpret. Delta functions in the frequency domain, δ(ω), have a more appealing interpretation. Consider F(ω) = δ(ω) and apply it to the inverse Fourier transform (assuming this makes sense)
f(t) = (1/2π) ∫_{−∞}^{∞} δ(ω) e^{jωt} dω = {sifting property} = (1/2π) e^{j0t} = 1/(2π).
This is a constant signal. The Fourier transform ∫_{−∞}^{∞} (1/2π) e^{−jωt} dt is now not defined, but also in this case it is possible to give it a meaning and show that the Fourier transform equals F(ω) = δ(ω), see Section 4.4.
Summarizing, delta functions in one domain correspond to constant functions in the other.
4.3.1. Example.
a) δ(t − b) ↔ e^{−jωb}. This is a direct consequence of the time-shift rule and the fact that δ(t) ↔ 1.
b) e^{jω_0 t} ↔ 2π δ(ω − ω_0). This is a direct consequence of the frequency-shift rule and the fact that 1 ↔ 2π δ(ω).
c) cos(ω_0 t) ↔ π (δ(ω + ω_0) + δ(ω − ω_0)). It follows from the Modulation theorem (Page 47).
agrees with our understanding of what the Fourier transform F(ω) entails. The above function F(ω) consists of two spikes, one spike at frequency ω_0 and one at −ω_0. Its frequency content is therefore concentrated at the frequencies ±ω_0 only, and does not depend on any other frequency. Indeed, cos(ω_0 t) is like that. As a last example we consider the frequency spectrum of a periodic signal.
f(t)           F(ω)
δ(t)           1
1              2π δ(ω)
δ(t − b)       e^{−jωb}
e^{jω_0 t}     2π δ(ω − ω_0)
cos(ω_0 t)     π (δ(ω − ω_0) + δ(ω + ω_0))
sgn(t)         2/(jω)
1(t)           1/(jω) + π δ(ω)
Table 4.2: Some generalized Fourier transform pairs.
Consider the periodic signal
f(t) = Σ_{k=−∞}^{∞} f_k e^{jkω_0 t}   (−∞ < t < ∞).
With the help of Table 4.2 and the superposition principle, i.e. that the Fourier transform of a sum of f_k e^{jkω_0 t} is the sum of the Fourier transforms of f_k e^{jkω_0 t}, we find that
F(ω) = 2π Σ_{k=−∞}^{∞} f_k δ(ω − kω_0).
The Fourier transform of a periodic signal consists of a train of delta functions. That the
superposition principle applies will not be shown here.
Table 4.2 collects some generalized Fourier transform pairs, including some that we did not
yet treat. The rules that hold for the classical Fourier transform remain valid if we extend it
with the Fourier transform pairs of Table 4.2 (proof is omitted). In those rules any derivative
should now be understood to mean the generalized derivative.
4.3.2. Example. Let f(t) = e^{−t} 1(t). Then f′(t) = e^{−t} δ(t) − e^{−t} 1(t) = δ(t) − e^{−t} 1(t). The Fourier transform of f′(t) equals 1 − 1/(1 + jω) = jω/(1 + jω). Via the differentiation rule, the Fourier transform of f′(t) equals the Fourier transform F(ω) = 1/(1 + jω) of f(t) multiplied by jω. Indeed, this gives the same result.
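The algebra can be checked mechanically; a Python/sympy sketch of the comparison (our own illustration):

```python
import sympy as sp

w = sp.symbols('w', real=True)
F = 1/(1 + sp.I*w)                 # spectrum of f(t) = e^{-t} 1(t)
direct = 1 - 1/(1 + sp.I*w)        # spectrum of f'(t) = delta(t) - e^{-t} 1(t)
via_rule = sp.I*w*F                # differentiation rule: multiply F(w) by jw
print(sp.simplify(direct - via_rule))  # 0
```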
Even the convolution theorems remain valid. An important consequence of the convolution theorem has to do with Rule 8 on page 47 about integration with respect to time. In that rule it was assumed that F(0) = 0. Now, in the generalized sense, we may dispense with this assumption and generalize the rule as follows. First we express ∫_{−∞}^{t} f(τ) dτ as a convolution
∫_{−∞}^{t} f(τ) dτ = (f ∗ 1)(t).
Now let f(t) ↔ F(ω), then application of the convolution theorem in the time-domain (see Theorem 3.4.5) yields
∫_{−∞}^{t} f(τ) dτ ↔ F(ω) F{1(t)} = F(ω) (1/(jω) + π δ(ω)) = F(ω)/(jω) + π F(0) δ(ω).
for every φ(t) continuous at t = 0. That is, the delta function maps continuous φ(t) to φ(0). You may want to convince yourself that no other function can have this property. Since (4.8) characterizes δ(t) uniquely, we may take that as the definition of the delta function. That is, we define the delta function not by its function values as one would normally do, but by the way it acts on φ(t): the delta function is a linear mapping that sends continuous φ(t) to φ(0). When, like here, a function is defined through how it acts on φ(t), then we say that it is a generalized function or distribution. The delta function defined like this, as a generalized function, is properly defined. Any piecewise smooth function can be seen as a generalized function, but not every generalized function is a piecewise smooth function. This leads to the generalization of the concept of function that we alluded to in the beginning of this section. Formally:
4.4.1. Definition. A test function φ(t) is a function that is infinitely often continuously differentiable and has finite duration.
A generalized function or distribution is a linear continuous mapping that sends test functions φ(t) to complex numbers.
78
We shall not go into the type of continuity here. Up to this definition we allowed for general continuous φ(t), but restricting the φ(t) to test functions has advantages. The standard test function is
φ_0(t) = { e^{−1/(1−t²)}  if |t| < 1,
           0              elsewhere.
This is shown in Figure 4.5(a). By shifting and scaling it is possible to make many more test functions.
(Figure 4.5: (a) the standard test function φ_0(t), with maximal value 1/e; (b) a shifted and scaled version c φ_0(at − b).)
A piecewise smooth function f(t) may be identified with the generalized function that maps test functions φ(t) to
∫_{−∞}^{∞} f(t) φ(t) dt.        (4.9)
Such mappings identify f (t) uniquely for almost all t and this is the reason that one usually
makes no distinction between a function f (t) and the generalized function (4.9), even though
one is a function and the other is a mapping.
4.4.2. Example.
1. The unit step 1(t) maps φ(t) to
∫_{−∞}^{∞} 1(t) φ(t) dt = ∫_{0}^{∞} φ(t) dt.
2. The generalized function f(t) = t² maps φ(t) to ∫_{−∞}^{∞} t² φ(t) dt. Note that this integral is defined for every test function. (If we had allowed every continuous φ(t) then the integral does not always exist. This is an instance where we see that test functions are convenient.)
The game is next to generalize notions like sum, product and limit, etcetera, available for regular functions to generalized functions. Of course we want the generalization to be such that these notions for regular functions are the same as when seen as generalized functions. A complete list of all generalizations is too much for an introductory exposition like ours. We shall confine ourselves to the notion of limit, which is an intriguing one, and later we shall generalize the derivatives and Fourier transforms.
limit lim_{n→∞} ∫_{−∞}^{∞} f_n(t) φ(t) dt exists. The generalized limit f(t) is denoted as f(t) = "lim"_{n→∞} f_n(t).
If the f_n(t) have a limit in the regular sense to a regular function f(t), say,
lim_{n→∞} max_{t∈[a,b]} |f_n(t) − f(t)| = 0   for every a < b,
then f_n(t) also has a limit in the generalized sense, with the same limit f(t) = "lim"_{n→∞} f_n(t).
But with generalized limits we are able to take limits that hitherto could not be taken.
Consider for example the action of cos(at)/t on a test function φ(t), defined as the principal value lim_{ε↓0} ∫_{|t|>ε} (cos(at)/t) φ(t) dt. So
∫_{−∞}^{∞} (cos(at)/t) φ(t) dt
= lim_{ε↓0} ( ∫_{−∞}^{−ε} (cos(at)/t) φ(t) dt + ∫_{ε}^{∞} (cos(at)/t) φ(t) dt )
= {t := −t in the first integral} = lim_{ε↓0} ∫_{ε}^{∞} cos(at) ((φ(t) − φ(−t))/t) dt
= ∫_{0}^{∞} cos(at) ((φ(t) − φ(−t))/t) dt
= Re ∫_{−∞}^{∞} e^{jat} ((φ(t) − φ(−t))/t) 1(t) dt
= {Lemma 3.1.1 for f(t) = ((φ(t) − φ(−t))/t) 1(t)} → 0, as |a| → ∞.
Hence
"lim"_{|a|→∞} cos(at)/t = 0.
In the same fashion one finds the generalized limits
"lim"_{a→∞} sin(at)/(πt) = δ(t),
"lim"_{ε↓0} (1/(ε√π)) e^{−t²/ε²} = δ(t),
"lim"_{|a|→∞} e^{jat} = 0.
A definition like this is only then sensible if "lim"_{n→∞} f_n′(t) exists and only depends on f(t) and not on f_n(t). Every sequence of f_n(t) that converges to f(t) should give the same result f′(t). This is the case, which can be seen with partial integration,
lim_{n→∞} ∫_{−∞}^{∞} f_n′(t) φ(t) dt = lim_{n→∞} ( [f_n(t) φ(t)]_{−∞}^{∞} − ∫_{−∞}^{∞} f_n(t) φ′(t) dt )
= −lim_{n→∞} ∫_{−∞}^{∞} f_n(t) φ′(t) dt = −∫_{−∞}^{∞} f(t) φ′(t) dt,
where the boundary term vanishes because test functions have finite duration. This shows that the definition of f′(t) only depends on f(t) as it should, and it also shows that "lim"_{n→∞} f_n′(t) exists whenever "lim"_{n→∞} f_n(t) exists.
4.4.5. Example. The unit step 1(t) may be expressed as the generalized limit as n → ∞ of
f_n(t) = 1/2 + (1/π) arctan(nt).
Its derivative is
f_n′(t) = (n/π) · 1/(1 + (nt)²),
which may be shown to converge (in the generalized sense) to the delta function. We found once again that 1′(t) equals "lim"_{n→∞} (n/π)/(1 + (nt)²) = δ(t).
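The generalized convergence of f_n′(t) to δ(t) can be made plausible numerically. The sketch below (our own discretization) applies f_n′ to the test function φ(t) = e^{−t²} and watches the result approach φ(0) = 1:

```python
import numpy as np

def action(n, t, phi):
    """Riemann sum of  integral f_n'(t) phi(t) dt,  f_n'(t) = (n/pi)/(1+(n t)^2)."""
    dt = t[1] - t[0]
    return np.sum((n/np.pi)/(1 + (n*t)**2)*phi(t))*dt

t = np.linspace(-50, 50, 1000001)
phi = lambda x: np.exp(-x**2)
vals = [action(n, t, phi) for n in (1, 10, 100)]
print(vals)                        # increases towards phi(0) = 1
```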
A definition like this is only then sensible if "lim"_{n→∞} F_n(ω) exists and only depends on f(t) and not on f_n(t). Every sequence of f_n(t) that converges to f(t) should give the same "lim"_{n→∞} F_n(ω), and this limit should exist. As it stands this is not the case. The workaround is to replace the test functions by what are called tempered test functions φ(t), which are the infinitely often differentiable functions that are polynomially bounded in the sense that t^n φ^{(m)}(t) is bounded for every n, m ∈ ℕ. It may be shown that the Fourier transform is a bijection on the set of tempered test functions, and that is a property that we shall need. Consider
∫_{−∞}^{∞} F_n(ω) φ(ω) dω = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} f_n(t) e^{−jωt} dt ) φ(ω) dω = ∫_{−∞}^{∞} f_n(t) ψ(t) dt,
in which ψ(t) = ∫_{−∞}^{∞} φ(ω) e^{−jωt} dω.
Since the Fourier transform is a bijection on the set of tempered test functions, it follows that ψ(t) is a tempered test function as well, so that the above generalized limit exists,
∫_{−∞}^{∞} F(ω) φ(ω) dω = ∫_{−∞}^{∞} f(t) ψ(t) dt.
This expression for F(ω) only depends on f(t) as it should, and moreover is well defined whenever f(t) = "lim"_{n→∞} f_n(t) exists.
4.4.6. Example.
1. F{δ(t)} = "lim"_{n→∞} F{n rect_{1/n}(t)} = "lim"_{n→∞} sin(ω/(2n))/(ω/(2n)) = 1.
2. Since lim_{a→∞} ∫_{−∞}^{∞} rect_a(t) φ(t) dt = lim_{a→∞} ∫_{−a/2}^{a/2} φ(t) dt = ∫_{−∞}^{∞} φ(t) dt holds for every test function φ(t), we get that
"lim"_{a→∞} rect_a(t) = 1.
Consequently,
F{1} = "lim"_{a→∞} F{rect_a(t)} = "lim"_{a→∞} 2 sin(aω/2)/ω = 2π δ(ω).
3. The unit step may be written as 1(t) = "lim"_{a→∞} rect_a(t − a/2), and F{1(t)} = "lim"_{a→∞} F{rect_a(t − a/2)} then yields the pair
1(t) ↔ 1/(jω) + π δ(ω).
2. The Fourier transform of the signal f(t) = sgn(t) subsequently follows readily on noting that
sgn(t) = 2 · 1(t) − 1.
This directly gives the Fourier transform pair
sgn(t) ↔ 2/(jω) + 2π δ(ω) − 2π δ(ω) = 2/(jω).
For absolutely integrable functions the generalized Fourier transform and the normal Fourier
transform are the same. For this reason the adjective generalized is often omitted.
4.5 Problems
4.1 Given is a continuous function f (t).
(a) Let g_1(t) = δ(2t + 4). Determine (f ∗ g_1)(t).
(b) Let g_2(t) = δ(2t − 1). Determine f(t) g_2(t).
4.2 Determine the derivative in the generalized sense of the following signals.
(a) t rect_2(t),
(b) (sin t) 1(t),
(c) t rect_2(t − 1),
(d) e^{jt} 1(t − π),
(e) rect_1(t) trian_1(t).
4.3 Let f (t) and g(t) be two continuously differentiable signals. Determine the generalized
derivative of f (t) 1(t) + g(t) 1(t).
4.4 Determine the frequency spectrum of the following signals.
(a) sin(ω_0 t),
(b) e^{jω_0 t} 1(t),
(c) cos(ω_0 t) 1(t),
(d) sin(ω_0 t) 1(t),
(e) e^{jω_0 t} sgn(t).
f(t) = { 0  if t < 0,
         t  if 0 ≤ t < 1,
         1  if t ≥ 1.
4.6 Determine the frequency spectrum of the following signals.
(a) (rect_2 ∗ 1)(t),
(b) ∫_{−∞}^{t} (sin τ)/τ dτ.
4.7 Determine in the time-domain the signal f(t) whose frequency spectrum equals
(a) F(ω) = cos(ω)/(ω² + 1),
4.8 Let f(t) ↔ F(ω). Verify that the differentiation rule f′(t) ↔ jω F(ω) is also valid for the signals
1(t − b),   sgn(t),   e^{jω_0 t}.
4.9 Determine in the time-domain the convolution product (f ∗ g)(t) for the cases that
(a) f(t) = e^{−t} 1(t), g(t) = sgn(t),
(b) f(t) = 1(2t + 1), g(t) = e^{−|t|}.
4.10 Consider Example 3.1.5. Explain the little humps in |F(ω)| at ω ≈ 4 and ω ≈ 6.
5
Linear time-invariant systems
y(t) = T{u(t)},   or   u(t) ↦ y(t).
To emphasize that u and y are functions of time, we shall normally write y(t) = T {u(t)},
though this notation is debatable.
The idea of inputs causing outputs is not limited to filters. A wide variety of disciplines, including econometrics, physics, electrical engineering and chemical engineering, have a need for input signals and output signals, and the theory within which such problems are treated has the generic name system theory, not merely filter theory; a filter T is often referred to as a system. We are not concerned with a formal setup of system theory, save to say that there is a lot more to it than mentioned here. In this course we think of a system T
as being a mapping from inputs to outputs. At a later stage we shall amend this by allowing for
the effect of initial conditions.
In this chapter we restrict attention to continuous-time systems, which are systems whose
inputs and outputs are continuous-time signals. The output y(t) is often referred to as the
response of the system. In addition we assume that the system is linear and time-invariant,
two important properties defined shortly. Precisely these two properties allow a system to be
characterized in frequency domain.
T{a u_1(t) + b u_2(t)} = a T{u_1(t)} + b T{u_2(t)}   for all inputs u_1, u_2 and all constants a, b.        (5.1)
A system T with the property (5.1) is sometimes said to obey the superposition principle.
5.1.2. Definition (Time-invariant system). A continuous-time system T is time-invariant if
for every t0 R and every input u(t), the corresponding output y(t) = T {u(t)} satisfies
u(t − t_0) ↦ y(t − t_0).
Roughly speaking, a system is time-invariant if the response to an input does not depend on the
time the input is applied; an input applied today will yield the same response as when applied
tomorrow.
A system that is both linear and time-invariant, is said to be an LTI system. LTI is an
acronym of Linear Time-Invariant.
5.1.3. Example. Consider the RC-network shown in Figure 5.2. We interpret the RC-network
as a system with the voltage delivered by the voltage source as the input u(t) of the system,
and the voltage across the capacitor as the output y(t).
The input and output are related by a differential equation that may be obtained using Kirchhoff's voltage law and the voltage-current relations of resistors and capacitors. Kirchhoff's voltage law gives that
u(t) = v_R(t) + y(t) = R i(t) + y(t).        (5.2)
Furthermore the voltage y(t) = v_C(t) across the capacitor satisfies
y(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ.        (5.3)
(Figure 5.2: the RC-network, with source voltage u(t), voltage v_R(t) across the resistor R, current i(t), and output y(t) = v_C(t).)
Differentiating (5.3) gives
C dy(t)/dt = i(t).
Substitute this expression for i(t) in (5.2) and we get a differential equation in the input and output
dy(t)/dt + α y(t) = α u(t),        (5.4)
in which α = 1/(RC). This is a first-order ordinary linear differential equation with constant coefficients, and with the right-hand side u(t) known. Solutions y(t) of this differential equation are not unique. Indeed, the associated homogeneous equation
dy(t)/dt + α y(t) = 0
has the non-trivial solution e^{−αt}. Hence if y(t) is a solution of (5.4), then also y(t) + c e^{−αt} is a solution of (5.4) for any constant c. In fact, it may be shown that every solution of (5.4) is necessarily of the form y(t) + c e^{−αt} with c constant (see Appendix A). The solution is unique up to a constant c. However, because of (5.3) we have some more knowledge about y(t):
lim_{t→−∞} y(t) = lim_{t→−∞} (1/C) ∫_{−∞}^{t} i(τ) dτ = 0.
To find the solution y(t) that obeys the above condition, we shall use a handy trick: the left-hand side of (5.4) may be written in the form
dy(t)/dt + α y(t) = e^{−αt} d/dt ( e^{αt} y(t) ).
So the differential equation (5.4) becomes (with τ replacing t),
d/dτ ( e^{ατ} y(τ) ) = α e^{ατ} u(τ).
Now integrate the left-hand side and use that lim_{t→−∞} y(t) = 0,
∫_{−∞}^{t} d/dτ ( e^{ατ} y(τ) ) dτ = [ e^{ατ} y(τ) ]_{τ=−∞}^{τ=t} = e^{αt} y(t).
We found that
e^{αt} y(t) = α ∫_{−∞}^{t} e^{ατ} u(τ) dτ.        (5.5)
Apparently, the differential equation with the condition that lim_{t→−∞} y(t) = 0 has a unique solution. To beautify formula (5.5) we define the function h(t) = α e^{−αt} 1(t). Because of the unit step 1(t), we may write (5.5) as
y(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ = (h ∗ u)(t).
In particular the response to a shifted input u(t − t_0) is
∫_{−∞}^{∞} h(t − τ) u(τ − t_0) dτ = {v = τ − t_0} = ∫_{−∞}^{∞} h(t − t_0 − v) u(v) dv = (h ∗ u)(t − t_0) = y(t − t_0),
for any t_0.
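The convolution representation is easy to check numerically. The Python sketch below (our own discretization, with an arbitrary α) feeds a unit step through y = h ∗ u and compares the result with the known RC step response 1 − e^{−αt}:

```python
import numpy as np

alpha = 2.0                        # alpha = 1/(RC), arbitrary positive value
dt = 1e-3
t = np.arange(0, 5, dt)
h = alpha*np.exp(-alpha*t)         # impulse response h(t) = alpha e^{-alpha t} 1(t)
u = np.ones_like(t)                # unit step input
y = np.convolve(h, u)[:len(t)]*dt  # Riemann-sum approximation of (h * u)(t)
exact = 1 - np.exp(-alpha*t)       # closed-form response obtained from (5.5)
print(np.max(np.abs(y - exact)))   # discretization error only
```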
In the previous example we saw that the output y(t) could be expressed as a convolution of the
input u(t) and a fixed signal h(t). We shall now demonstrate that this is always the case for LTI
systems. This is quite a remarkable result.
Assume first that we know that the response y(t) to an input u(t) equals a convolution
y(t) = (h u)(t),
for some fixed but possibly unknown function h(t). If we apply as input the delta function
u(t) = δ(t), then the response y(t) to this input is precisely the function h(t),
y(t) = (h ∗ δ)(t) = ∫_{−∞}^{∞} h(t − τ) δ(τ) dτ = h(t).
This is interesting, because it shows that we can uncover h(t) from its input-output behavior.
The response to an input that is a finite sum of shifted delta functions,
u(t) = Σ_{k=1}^{M} c_k δ(t − t_k),   (c_k ∈ ℝ, t_k ∈ ℝ),
is
y(t) = T{u(t)} = T{ Σ_{k=1}^{M} c_k δ(t − t_k) }
= {linearity} = Σ_{k=1}^{M} c_k T{δ(t − t_k)}
= {time-invariance} = Σ_{k=1}^{M} c_k h(t − t_k),
where h(t) = T{δ(t)} is the impulse response of the system. Loosely speaking, integration is the same as summation, so, likewise, the response to an input of the form
u(t) = (c ∗ δ)(t) = ∫_{−∞}^{∞} c(τ) δ(t − τ) dτ        (5.6)
is
y(t) = T{u(t)} = T{ ∫_{−∞}^{∞} c(τ) δ(t − τ) dτ }
= {linearity} = ∫_{−∞}^{∞} c(τ) T{δ(t − τ)} dτ
= {time-invariance} = ∫_{−∞}^{∞} c(τ) h(t − τ) dτ = (c ∗ h)(t).
Here we assumed that the superposition principle applies over a continuum, which is practically always the case and will be silently assumed from now on. Note that the function c(t) in (5.6) is in fact equal to the input itself, c(t) = u(t), because (c ∗ δ)(t) = c(t). So we found the following important result.
5.1.5. Theorem. If T is a continuous-time LTI system, then for any input u(t), the output y(t)
is given as
y(t) = (h u)(t),
where h(t) = T {(t)} is the impulse response of the system.
5.1.7. Example (The integrator). The integrator system is the system whose output is the
input integrated,
y(t) = ∫_{−∞}^{t} u(τ) dτ.
Often it is possible to argue that a system is linear and time-invariant, without having much idea
of the inner workings of the system. Theorem 5.1.5 states that then the system is completely
determined once its impulse response is determined.
5.1.8. Example (The echo system). The echo system is a system with as input u(t) the transmitted signal (for example, the voice of a singer in a concert hall) and as output y(t) the sum
of reflected signals (what a person in the audience in the concert hall hears).
The echo system is time-invariant, after all, the time of day that the concert begins probably
has no effect on the performance. The echo system is also linear (within reason), since the
reflected sound of one singer will generally not depend on what another singer happens to be
singing at that time.
By Theorem 5.1.5 the echo system is, hence, completely determined by its impulse response T{δ(t)}. Thinking of the concert hall, the impulse response is something like the effect of a gun shot. A reasonable first model is to assume that the sound of a gun shot is reflected
Figure 5.3: The echo system's impulse response and step response.
(echoed) every t0 seconds, for some t0 > 0, and that with each reflection its amplitude is halved,
say. This means that the impulse response of the echo system is (see Figure 5.3),
h(t) = Σ_{k=0}^{∞} (1/2^k) δ(t − k t_0).
Now for any other less violent signal u(t) the total of reflections y(t) follows as
y(t) = (h ∗ u)(t) = Σ_{k=0}^{∞} (1/2^k) u(t − k t_0).
The impulse response and the response y(t) to the unit step u(t) = 1(t) are shown in Figure 5.3.
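A quick numerical sketch (Python, with our own toy values of t_0 and grid step) builds the step response by summing the echoes directly:

```python
import numpy as np

dt, t0 = 0.01, 1.0                 # grid step and echo delay (our toy values)
t = np.arange(0, 10, dt)
k0 = int(round(t0/dt))
u = np.ones_like(t)                # unit step input 1(t)
y = np.zeros_like(t)
for k in range(int(t[-1]/t0) + 1): # add the echoes 2^{-k} u(t - k t0)
    y[k*k0:] += u[:len(t) - k*k0]/2**k
print(y[-1])                       # approaches sum_k 2^{-k} = 2
```

As the figure suggests, the step response is a staircase that converges to the geometric sum 2.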
Systems in practical situations are real, and, in addition, they are causal or non-anticipating. Causality expresses that the output at any time t_0 does not depend on future input values u(t), t > t_0. Formally:
5.1.9. Definition. A system T is causal or non-anticipating, if for any two inputs u1 (t) and
u 2 (t) and corresponding outputs y1 (t) = T {u 1 (t)} and y2 (t) = T {u 2 (t)} there holds for every
fixed t0 R that
if u_1(t) = u_2(t) for all t < t_0,  then  y_1(t) = y_2(t) for all t < t_0.        (5.7)
5.1.10. Example. The system y(t) = u(t 1) is an example of a delay system. If t is in units
of seconds, then y(t) = u(t 1) expresses that the response y(t) equals the input but then one
second delayed. So this is a causal system. Causality may also be verified formally through the
definition. Indeed, if u1 (t) = u 2 (t) for all t < t0 , then for any t < t0 we have that
y_1(t) = u_1(t − 1) = {since t − 1 < t_0} = u_2(t − 1) = y_2(t).
Likewise the system y(t) = u(t + 1) expresses that at time t the output equals the input one
second into the future. This is not a causal system. If we want to formally show that the system
is noncausal, then it suffices to find one counterexample of (5.7). Suppose that u_1(t) = 0 and u_2(t) = 1(t), then u_1(t) = u_2(t) for all t < 0, but y_1(t) = 0 and y_2(t) = 1(t + 1) and they are not the same for t = −1/2 < 0: the system is not causal.
The response of an LTI system to the zero signal u(t) = 0 for all t is the zero signal y(t) = 0 for all t. As a causal signal is zero for all t < 0, we see that the response y(t) of a causal LTI system to a causal input is zero for all t < 0. Therefore the response y(t) is then causal as well.
We know that an LTI system is fully determined by its impulse response h(t). Therefore
it should be possible to express causality in terms of the impulse response. This can indeed be
done, in fact the condition is easy.
Figure 5.4: The impulse response h(t) of a causal system is a causal signal.
5.1.11. Theorem. An LTI system T is causal if and only if its impulse response h(t) is a causal
signal. (See Figure 5.4.)
Proof. Suppose T is a causal system. Then, as argued above, its response to a causal input is
causal. The delta function is considered a causal signal, hence its response h(t) is causal.
Conversely, suppose that h(t) is a causal signal. Consider any two inputs u1 (t) and u 2 (t)
and any t0 R and suppose that
u_1(t) = u_2(t)   for all t < t_0.
Since h(t − τ) = 0 for τ > t, the responses for t < t_0 are
y_1(t) = ∫_{−∞}^{t} u_1(τ) h(t − τ) dτ   and   y_2(t) = ∫_{−∞}^{t} u_2(τ) h(t − τ) dτ,
and these integrals only involve the inputs at times τ ≤ t < t_0, where the two inputs coincide. Hence y_1(t) = y_2(t) for all t < t_0, and the system is causal.
5.1.12. Definition. A system is real if u(t) ∈ ℝ for all t ∈ ℝ implies that y(t) = T{u(t)} ∈ ℝ for all t ∈ ℝ.
It is not difficult to see that an LTI system is real if and only if its impulse response is real-valued.
The next property that we consider concerns stability of a system. Stability can be defined
in many different ways, depending on what is deemed important. For LTI systems an often
used version of stability is BIBO-stability.
5.1.13. Definition. An LTI system T is BIBO-stable if each bounded input results in a
bounded output.
BIBO is an acronym of Bounded-Input-Bounded-Output. In this course, stability will always
be understood in the sense of BIBO-stability. The next theorem shows how BIBO-stability can
be characterized by the systems impulse response. For the moment we shall assume that the
impulse response has no delta function components.
5.1.14. Theorem. An LTI system whose impulse response h(t) has no delta function components, is BIBO-stable if and only if
∫_{−∞}^{∞} |h(t)| dt < ∞.        (5.8)
Proof. The condition (5.8) is that h(t) is absolutely integrable. First we show that absolute
integrability of h(t) is enough to ensure BIBO stability. Suppose u(t) is bounded, that is,
|u(t)| < M   for all t ∈ ℝ.
Then
|y(t)| = |(h ∗ u)(t)| ≤ ∫_{−∞}^{∞} |h(τ) u(t − τ)| dτ ≤ M ∫_{−∞}^{∞} |h(τ)| dτ = M I,
with I := ∫_{−∞}^{∞} |h(τ)| dτ < ∞. The bound M I is independent of time, hence y(t) is bounded.
Next we show that absolute integrability of h(t) is also necessary for BIBO-stability. Consider the input signal
u(t) = sgn(h(−t)),
that is, u(t) = 1 for the time instances where h(−t) > 0 and u(t) = −1 for the time instances where h(−t) < 0. This signal u(t) is bounded, and its response y(t) at t = 0 equals
y(0) = (h ∗ u)(0) = ∫_{−∞}^{∞} h(0 − τ) u(τ) dτ = ∫_{−∞}^{∞} |h(−τ)| dτ = ∫_{−∞}^{∞} |h(t)| dt.
If the system is BIBO-stable then |y(0)| is finite, so by the above, the impulse response h(t) is
necessarily absolutely integrable.
5.1.16. Example. The RC-network of Example 5.1.3 is stable. Indeed, its impulse response is h(t) = α e^{−αt} 1(t) with α = 1/(RC) a positive constant. Therefore
∫_{−∞}^{∞} |h(t)| dt = ∫_{0}^{∞} α e^{−αt} dt = α · (1/α) = 1 < ∞.
If h(t) contains delta function components, then it is still possible to characterize BIBO-stability. This can be done as follows. Write h(t) in the form
h(t) = h_1(t) + Σ_{n=0}^{N} a_n δ(t − t_n).        (5.9)
Here, h_1(t) is the signal obtained from h(t) by removing all delta function components, and h_2(t) = Σ_{n=0}^{N} a_n δ(t − t_n) is the sum of delta functions. Then the response y(t) to an input u(t) can be written as
y(t) = (h ∗ u)(t) = (h_1 ∗ u)(t) + Σ_{n=0}^{N} a_n u(t − t_n).
It may be shown that an LTI system, whose impulse response h(t) is of the form (5.9) with h_1(t) free of delta function components, is BIBO-stable if and only if ∫_{−∞}^{∞} |h_1(t)| dt < ∞ and Σ_{n=0}^{N} |a_n| < ∞.
In most applications the number N of delta function components is finite. In such cases Σ_{n=0}^{N} |a_n| is always finite, and then BIBO-stability of the system is the same as BIBO-stability of the system with the delta function components removed. Stated differently, adding a finite number of delta function components to the impulse response has no effect on the system's stability properties.
5.1.18. Example. The echo system of Example 5.1.8 has impulse response h(t) = Σ_{k=0}^{∞} (1/2)^k δ(t − k t_0). As Σ_{k=0}^{∞} (1/2)^k = 2 is finite, the echo system is BIBO-stable.
If the singer uses a microphone and if the microphone picks up its own reflected signal, delayed by t_0 and amplified by a factor 2, say, then the impulse response is h(t) = Σ_{k=0}^{∞} 2^k δ(t − k t_0). This system is not BIBO-stable since Σ_{k=0}^{∞} 2^k = ∞. This problem occurs all too often at pop concerts when bands try to test their equipment.
Once the impulse response of an LTI system is known, then we know the system completely
and we can calculate the response to any input signal. In particular it is possible to determine
the systems step response. This is the response to the unit step u(t) = 1(t). The step response
of a given system is in the course denoted by g(t), that is,
t
h( ) d.
(5.10)
g(t) = (h 1)(t) =
For the RC-network of Example 5.1.3, with h(t) = α e^{−αt} 1(t), the step response follows as
g(t) = ∫_{−∞}^{t} α e^{−ατ} 1(τ) dτ = ( ∫_{0}^{t} α e^{−ατ} dτ ) 1(t) = (1 − e^{−αt}) 1(t).
Hereby the signal u(t) is written as a continuous superposition of shifted step functions. By time-invariance the response to 1(t − τ) is g(t − τ). The response to u(t) can hence be derived from the (extended) superposition principle, as
y(t) = T{ ∫_{−∞}^{∞} u′(τ) 1(t − τ) dτ } = ∫_{−∞}^{∞} u′(τ) g(t − τ) dτ.
5.1.20. Example. Consider again the RC-network of Example 5.1.3. We shall calculate the
response to the causal input u(t) given as
u(t) = { 0  if t < 0,
         t  if 0 ≤ t ≤ 1,
         1  if t > 1.
Note that the generalized derivative of u(t) is u′(t) = 1(t) − 1(t − 1), i.e., the rectangular pulse on (0, 1).
The system is causal and the input signal is causal, hence so is its response y(t). The response y(t) follows as
y(t) = ∫_{−∞}^{∞} u′(τ) g(t − τ) dτ = ∫_{0}^{1} g(t − τ) dτ
= {v = t − τ} = ∫_{t−1}^{t} g(v) dv
= ∫_{t−1}^{t} (1 − e^{−αv}) 1(v) dv
= ( t + (1/α)(e^{−αt} − 1) ) 1(t) − ( t − 1 + (1/α)(e^{−α(t−1)} − 1) ) 1(t − 1),
where we used that ∫_{−∞}^{x} (1 − e^{−αv}) 1(v) dv = ( x + (1/α)(e^{−αx} − 1) ) 1(x).
Because of the step functions 1(t) and 1(t − 1) it is best to distinguish the three cases t < 0, t ∈ [0, 1] and t > 1:
         0                                      if t < 0,
y(t) =   t + (1/α)(e^{−αt} − 1)                 if 0 ≤ t ≤ 1,        (5.11)
         1 + (1/α)(e^{−α} − 1) e^{−α(t−1)}      if t > 1.
Figure 5.6 shows the response y(t). It may not be immediately clear, but the response is
continuously differentiable, while the input clearly is not.
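The piecewise formula (5.11) can be verified against a direct numerical convolution. The Python sketch below uses our own discretization, with α = 1/(RC) = 1:

```python
import numpy as np

alpha, dt = 1.0, 1e-3              # alpha = 1/(RC) = 1 for this check
t = np.arange(0, 5, dt)
u = np.clip(t, 0, 1)               # the ramp-then-constant input of the example
h = alpha*np.exp(-alpha*t)         # h(t) = alpha e^{-alpha t} 1(t)
y = np.convolve(h, u)[:len(t)]*dt  # numerical (h * u)(t)
exact = np.where(t <= 1,
                 t + (np.exp(-alpha*t) - 1)/alpha,
                 1 + (np.exp(-alpha) - 1)*np.exp(-alpha*(t - 1))/alpha)
print(np.max(np.abs(y - exact)))   # small: (5.11) matches the convolution
```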
In the frequency domain the convolution y = h ∗ u becomes, by the convolution theorem, the product
Y(ω) = H(ω) U(ω).        (5.12)
An LTI system, therefore, is uniquely determined by the spectrum of its impulse response, assuming the Fourier transform H(ω) of h(t) exists. In such cases H(ω) is called the frequency response of the system. An important class of systems that have a frequency response are the BIBO-stable systems.
5.2.1. Lemma. A BIBO-stable LTI system has a frequency response H(ω) defined for all ω.
Proof. BIBO-stable systems have an impulse response of the form h(t) = h_1(t) + Σ_{n=0}^{N} a_n δ(t − t_n) with h_1(t) absolutely integrable and Σ_{n=0}^{N} |a_n| < ∞. Now, the Fourier transform H_1(ω) of h_1(t) exists for all ω because h_1(t) is absolutely integrable. By superposition, then, the frequency response H(ω) of the system follows as H(ω) = H_1(ω) + Σ_{n=0}^{N} a_n e^{−jωt_n}. It is defined for all frequencies.
The (complex) frequency response H(ω) may be written in polar form as
H(ω) = |H(ω)| e^{jφ(ω)}.
We refer to |H(ω)| as the amplitude transfer of the system, and to φ(ω) as the system's phase transfer. For real systems, the impulse response h(t) is real-valued. In such cases the frequency response has the property that H(−ω) = H(ω)* (the complex conjugate). Hence the amplitude transfer is even as a function of frequency, and the phase transfer is an odd function of frequency. Consequently, in a graphical representation of amplitude transfer and phase transfer it suffices to plot only for non-negative values of the frequency, ω ≥ 0. This is common practice (have a look at the booklet
that came with your stereo equipment or set of loudspeakers and probably you will find one or
more of such plots). In the remainder of this section we review the response of an LTI system
to four special type of inputs.
1. Response to a harmonic signal. The response to a harmonic input signal u(t) = e^{jω₀ t} may be obtained via the convolution product (h ∗ u)(t),

y(t) = ∫_{−∞}^{∞} h(τ) e^{jω₀ (t−τ)} dτ = ( ∫_{−∞}^{∞} h(τ) e^{−jω₀ τ} dτ ) e^{jω₀ t} = H(ω₀) e^{jω₀ t}.
A prerequisite for the above to hold is of course that H(ω₀) exists, and this is not always the case. For example, the integrator has impulse response h(t) = 1(t) and (generalized) frequency response H(ω) = 1/(jω) + π δ(ω), and this is not defined for ω = ω₀ = 0. Indeed, the harmonic signal for ω₀ = 0 is the constant signal u(t) = 1, and it will be clear that then the output y(t) = ∫_{−∞}^{t} u(τ) dτ of the integrator is not defined.
For an LTI system T there apparently holds that

T{e^{jω₀ t}} = H(ω₀) e^{jω₀ t},   (5.13)

but only for those values ω₀ for which H(ω₀) exists. Because of (5.13) one sometimes refers to e^{jω₀ t} as an eigenfunction of the system, with eigenvalue H(ω₀). Each ω₀ ∈ R gives rise to an eigenfunction e^{jω₀ t} and eigenvalue H(ω₀). LTI systems have a continuum of eigenfunctions and eigenvalues.
2. Response to a sinusoid. Signals and systems are in practical situations always real. It is therefore more interesting to consider sinusoidal signals than the complex harmonic signals considered above.

Suppose the LTI system is real and take as input the sinusoid u(t) = cos(ω₀ t + φ₀). The sinusoid cos(ω₀ t + φ₀) may be written as

u(t) = ½ ( e^{jφ₀} e^{jω₀ t} + e^{−jφ₀} e^{−jω₀ t} ).

Then, by linearity of the system,

T{u(t)} = ½ e^{jφ₀} T{e^{jω₀ t}} + ½ e^{−jφ₀} T{e^{−jω₀ t}},

and, with the help of (5.13), we arrive at

T{u(t)} = ½ e^{jφ₀} H(ω₀) e^{jω₀ t} + ½ e^{−jφ₀} H(−ω₀) e^{−jω₀ t}.

By assumption the system is real, so that H(−ω₀) is the complex conjugate of H(ω₀). The second term is then the complex conjugate of the first, which finally allows us to write

T{u(t)} = Re( e^{jφ₀} H(ω₀) e^{jω₀ t} ) = |H(ω₀)| cos(ω₀ t + φ₀ + Φ(ω₀)).

The output is again a sinusoid, but with amplitude |H(ω₀)| and initial phase φ₀ + Φ(ω₀) = φ₀ + arg H(ω₀).
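As a numerical illustration (a sketch, assuming the RC-network with impulse response h(t) = e^{−t} 1(t), so H(ω) = 1/(1 + jω)), the convolution integral y(t) = ∫₀^∞ h(τ) cos(ω₀(t − τ) + φ₀) dτ can be compared with |H(ω₀)| cos(ω₀ t + φ₀ + arg H(ω₀)):

```python
import math
import cmath

def H(w):
    # frequency response of the assumed RC-network: H(w) = 1/(1 + jw)
    return 1.0 / (1.0 + 1j * w)

def y_convolution(t, w0, phi, T=40.0, n=200000):
    # y(t) = integral over tau in [0, inf) of e^{-tau} cos(w0 (t - tau) + phi),
    # truncated at tau = T, trapezoidal rule
    h = T / n
    total = 0.5 * (math.cos(w0 * t + phi)
                   + math.exp(-T) * math.cos(w0 * (t - T) + phi))
    for k in range(1, n):
        tau = k * h
        total += math.exp(-tau) * math.cos(w0 * (t - tau) + phi)
    return total * h

def y_sinusoid(t, w0, phi):
    # |H(w0)| cos(w0 t + phi + arg H(w0))
    return abs(H(w0)) * math.cos(w0 * t + phi + cmath.phase(H(w0)))
```

The two agree for every t, since for a harmonic (or sinusoidal) input defined for all time the convolution produces the steady-state response directly.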
3. Response to a periodic signal. Let u(t) be a T-periodic signal with line spectrum uₙ. Then u(t) has a Fourier series expansion

u(t) = Σ_{n=−∞}^{∞} uₙ e^{jnω₀ t}

with ω₀ = 2π/T. Because of (5.13) and the superposition principle of linear systems, we have that

y(t) = T{u(t)} = Σ_{n=−∞}^{∞} uₙ H(nω₀) e^{jnω₀ t}.

The output y(t) is also periodic with again period T, and its line spectrum yₙ equals yₙ = H(nω₀) uₙ.
4. Response to the unit step (the step response). The step response g(t) of an LTI system is introduced on page 95. If the LTI system is BIBO-stable, then its frequency response is defined for all frequencies and is in fact continuous in ω. In this case the frequency spectrum G(ω) of g(t) equals (in the generalized sense)

F{g(t)} = H(ω) ( 1/(jω) + π δ(ω) ) = H(ω)/(jω) + π H(0) δ(ω).

That is,

G(ω) = H(ω)/(jω) + π H(0) δ(ω).   (5.14)
Y(ω) = H(ω)U(ω).

For the RC-network the frequency response is

H(ω) = 1/(jω + 1).
The inverse Fourier transform is h(t) = e^{−t} 1(t), which is the impulse response as found earlier. The step response g(t) is the inverse Fourier transform of

G(ω) = {see (5.14)} = 1/(jω (jω + 1)) + π δ(ω)
     = 1/(jω) − 1/(jω + 1) + π δ(ω) = ( 1/(jω) + π δ(ω) ) − 1/(jω + 1).

From the table of standard and generalized Fourier transform pairs we read that

g(t) = 1(t) − e^{−t} 1(t) = (1 − e^{−t}) 1(t).

This is in accordance with what was established in Example 5.1.19.
A typical example of an ideal low-pass filter is the LTI system with frequency response

H(ω) = e^{−jω t₀} if |ω| < ω_c,   0 if |ω| > ω_c.

[Figure: amplitude transfer |H(ω)| (equal to 1 for |ω| < ω_c, zero elsewhere) and phase transfer arg H(ω) = −ω t₀ of the ideal low-pass filter.]
The value ω_c > 0 is the cut-off frequency of the filter. That the filter cannot be built follows from the inverse Fourier transform h(t) of H(ω),

h(t) = (1/2π) ∫_{−∞}^{∞} H(ω) e^{jω t} dω = (1/2π) ∫_{−ω_c}^{ω_c} e^{jω (t−t₀)} dω = sin(ω_c (t − t₀)) / (π (t − t₀)).

It is a non-causal signal, implying that the system is not causal (see Figure 5.8).

Figure 5.8: Graph of sin(ω_c (t − t₀))/(π (t − t₀)).

The impulse response h(t) has a maximum of ω_c/π at t = t₀. Finally, consider two special inputs to the ideal low-pass filter.
1. Let u(t) be a bandlimited signal whose frequency spectrum U(ω) is zero for all |ω| > ω_c. Then the frequency spectrum of y(t) satisfies

Y(ω) = H(ω)U(ω) = e^{−jω t₀} U(ω),

and, hence,

y(t) = u(t − t₀).

Apparently, the transfer of this type of input is distortionless.
2. Let u(t) be a T-periodic signal,

u(t) = Σ_{n=−∞}^{∞} uₙ e^{jnω₀ t}

with ω₀ = 2π/T. By Item 3 on page 98 we have that the response y(t) equals

y(t) = Σ_{n=−∞}^{∞} uₙ H(nω₀) e^{jnω₀ t}.

Let N be that (unique) integer number such that Nω₀ ≤ ω_c < (N + 1)ω₀. Then y(t) follows as

y(t) = Σ_{n=−N}^{N} uₙ e^{jnω₀ (t−t₀)} = u_N(t − t₀),

where u_N(t) denotes the N-th partial sum of the Fourier series expansion of u(t). All harmonic components of u(t) whose frequencies satisfy |ω| > ω_c are filtered out by the ideal low-pass filter.
5.4 Problems
5.1 A system T is described by
y(t) = T{u(t)} = a u(t) + b u(t − 1).
(a) Show that the system is an LTI system.
(b) Determine its frequency response.
(c) Determine the impulse response.
(d) Is the system causal?
(e) Is the system BIBO-stable?
5.2 A system T is described by
y(t) = T{u(t)} = ∫_{t−1}^{t+1} u(τ) dτ.
1/(jω (jω + 1)).

(1 − jω)/(1 + jω).

(f) Determine ∫_{−∞}^{∞} y(t) dt.

1/(1 + jk),   (k ∈ Z).
[Diagram: a cascade of two systems, u₁(t) → T₁ → y₁(t) = u₂(t) → T₂ → y₂(t).]
6
The Laplace transform
is that many signals that we wish to consider simply do not have a Fourier transform. The unit step 1(t), for example, only has a Fourier transform in the generalized sense, and e^{t} 1(t) does not have a Fourier transform at all. The Laplace transform can be seen as an extension of the Fourier transform. It is an extension that allows us to consider a much wider family of signals, but which still inherits most of the useful properties and insights of the Fourier transform. As it turns out, it gives rise to some useful new properties and insights as well. In accordance with the Fourier transform, the two-sided Laplace transform of a signal f(t) is defined as

F(s) = ∫_{−∞}^{∞} f(t) e^{−st} dt.
In contrast to the Fourier transform, however, in the Laplace transform we allow for general complex-valued s and not just imaginary-valued s = jω, ω ∈ R. This simple extension makes it possible to take Laplace transforms of signals that hitherto were not (Fourier) transformable.

In many cases we deal with causal signals, f(t) = 0 for t < 0. Assuming a causal signal f(t) contains no delta function components, the Laplace transform reduces to

F(s) = ∫_{−∞}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt,   (6.1)

which is the one-sided Laplace transform. In this course we only consider the one-sided Laplace transform, from now on referred to as the Laplace transform. It is important to realize that the Laplace transform may also be used for non-causal signals. Taking the Laplace transform of a non-causal signal, however, means that all values f(t), t < 0, are lost in the transformation.
The Laplace transform will therefore be of use only if we are not concerned with past time
function values f (t), t < 0; a situation that is often the case.
Later when functions with delta function components are allowed, we shall have to amend
the definition of the Laplace transform slightly. In the following section we consider piecewise
smooth signals.
Generally the Laplace transform of a given signal f(t) is defined only for a subset of the complex numbers s. If we know of f(t) that it is exponentially bounded for t > 0, that is, if we know that

|f(t)| ≤ C e^{at},   t > 0,

for some real numbers C and a, then the Laplace transform exists for every Re s > a. Indeed, for those s the integral of (6.1) converges even absolutely,

∫_0^∞ |f(t) e^{−st}| dt = ∫_0^∞ |f(t)| e^{−Re(s) t} dt ≤ ∫_0^∞ C e^{(a − Re(s)) t} dt = C/(Re(s) − a) < ∞.
All polynomials in t are exponentially bounded for t > 0, and all exponential functions of the form e^{bt} are exponentially bounded. All piecewise smooth periodic signals are bounded, hence exponentially bounded for t > 0 as well, and all products and sums of exponentially bounded signals are again exponentially bounded. The Laplace transform therefore applies to many signals, many more than can be handled with the Fourier transform.
6.1.2. Example. The Laplace transform of f(t) = 1 is F(s) = ∫_0^∞ e^{−st} dt. This Laplace transform exists iff Re s > 0, in which case F(s) = ∫_0^∞ e^{−st} dt = [−e^{−st}/s]_{t=0}^{∞} = 1/s.
This example is instructive in that it demonstrates a fundamental feature of Laplace transforms:
For a given signal f (t) the set of values of s for which the Laplace integral (6.1) exists, adheres
to one of the following three situations.
1. There is an α ∈ R such that the Laplace transform exists for every Re s > α, while for Re s < α the Laplace transform never exists. This number α is the abscissa of convergence.

2. The Laplace transform exists for every s ∈ C. Example: f(t) = e^{−t²}.

3. The Laplace transform exists for no s ∈ C. Example: f(t) = e^{t²}.

In the second of the three situations mentioned above, the abscissa of convergence is taken to be α = −∞; in the third situation the abscissa of convergence is taken to be α = +∞. The general statement is this:
6.1.3. Lemma. For any signal f(t) there is an α ∈ R, possibly α = ±∞, such that F(s) exists for all Re s > α, and does not exist for any Re s < α.
Proof. The statement is equivalent to saying that

if F(s₀) exists, then F(s) exists for all Re s > Re s₀.

This we shall prove. So, suppose s₀ is such that F(s₀) exists. Then Φ(t) defined as Φ(t) = ∫_0^t e^{−s₀ τ} f(τ) dτ converges to F(s₀) as t → ∞, which, in particular, means that Φ(t) is bounded on [0, ∞). This we soon need. Now suppose that Re s > Re s₀. Then

F(s) = lim_{M→∞} ∫_0^M e^{−st} f(t) dt
     = lim_{M→∞} ∫_0^M e^{−(s−s₀)t} e^{−s₀ t} f(t) dt
     = lim_{M→∞} ∫_0^M e^{−(s−s₀)t} Φ′(t) dt
     = lim_{M→∞} ( [e^{−(s−s₀)t} Φ(t)]_{t=0}^{t=M} + (s − s₀) ∫_0^M e^{−(s−s₀)t} Φ(t) dt ).

Since Φ(t) is bounded, and Re(s − s₀) > 0, we see that the last limit exists. Therefore F(s) exists whenever Re s > Re s₀.
6.1.4. Example.

1. The unit step 1(t) has Laplace transform

∫_0^∞ e^{−st} 1(t) dt = lim_{N→∞} [−e^{−st}/s]_{t=0}^{t=N} = {Re s > 0} = 1/s.

The above limit exists only if Re s > 0. The abscissa of convergence is therefore α = 0.
2. The causal exponential function e^{at} 1(t) (with a complex) has Laplace transform

∫_0^∞ e^{−st} e^{at} 1(t) dt = ∫_0^∞ e^{−(s−a)t} dt = 1/(s − a),   (Re(s) > Re(a)).

The abscissa of convergence is α = Re(a).
3. The Laplace transform of f(t) = e^{at} cos(bt) 1(t), with a complex and b real, follows similarly as above:

∫_0^∞ e^{−st} e^{at} cos(bt) dt = ½ ∫_0^∞ ( e^{−(s−a−jb)t} + e^{−(s−a+jb)t} ) dt
  = {Re(s − a ∓ jb) > 0} = ½ ( 1/(s − a − jb) + 1/(s − a + jb) ) = (s − a)/((s − a)² + b²).

The abscissa of convergence is α = Re(a ± jb) = Re a.
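The integral defining the transform can be checked directly. The sketch below truncates ∫₀^∞ e^{−st} e^{at} dt at a large T (valid when Re s > Re a, so the tail is negligible) and compares the result with 1/(s − a); the value a = −0.5 and the sample points s are illustrative choices, not from the text:

```python
import cmath

def laplace_numeric(f, s, T=60.0, n=60000):
    # truncated one-sided Laplace integral, trapezoidal rule:
    # F(s) ~ integral from 0 to T of f(t) e^{-st} dt
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * cmath.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * cmath.exp(-s * t)
    return total * h

a = -0.5  # illustrative value
def F_exact(s):
    return 1.0 / (s - a)
```

The agreement holds for complex s as well, as long as Re s stays to the right of the abscissa of convergence Re a.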
exists with s = jω. In these cases the Fourier transform exists and coincides with the Laplace transform for s = jω.
6.1.5. Example. Consider the causal signal f(t) = e^{−at} 1(t) with Re a > 0. The Laplace transform (see Example 6.1.4) equals 1/(s + a), with abscissa of convergence α = −Re(a) < 0. Hence the Fourier transform is 1/(jω + a).
The Laplace transform of 1(t) is 1/s with abscissa of convergence α = 0. The Fourier transform of 1(t), however, equals 1/(jω) + π δ(ω). This demonstrates that when the boundary of the region of convergence is the imaginary axis, the (generalized) Fourier transform cannot always be obtained from the Laplace transform through substitution of s = jω.
The mapping that sends a signal f(t) to its Laplace transform F(s) is denoted by L. That is,

L{f(t)} = ∫_0^∞ f(t) e^{−st} dt.
Both F(s) and the mapping L are referred to as the Laplace transform.
In Chapter 3 we saw that a time-domain signal f(t) can be recovered from its Fourier transform through what was called the inverse Fourier transform. The Laplace transform, too, has an inverse; however, since the Laplace transform only considers positive time values t > 0, the inverse Laplace transform can only recover f(t) for t > 0. Without proof we state that causal signals are uniquely determined by their Laplace transform, where uniqueness is in the sense of generalized functions. For piecewise smooth signals this means that the Laplace transform uniquely characterizes the signal except at its points of discontinuity.
We now allow signals of the form

f(t) = g(t) + Σ_{n=0}^{N} aₙ δ(t − tₙ).   (6.2)

Here g(t) denotes a piecewise smooth signal, the coefficients aₙ are (complex) numbers and the tₙ are arbitrary time instances tₙ ∈ R.
The Laplace transform of the signal f(t) of (6.2) is now taken to be

L{f(t)} = L{g(t)} + Σ_{n=0}^{N} aₙ L{δ(t − tₙ)}.
The Laplace transform of g(t) is the Laplace transform of a piecewise smooth signal as dealt with in the previous section. It will be no surprise that for the Laplace transform of δ(t − tₙ) we shall use the sifting property of delta functions: if tₙ ≠ 0, then

L{δ(t − tₙ)} = ∫_0^∞ δ(t − tₙ) e^{−st} dt = ∫_{−∞}^{∞} δ(t − tₙ) e^{−st} 1(t) dt
  = {1(t) is continuous at t = tₙ ≠ 0} = 0 if tₙ < 0,   e^{−s tₙ} if tₙ > 0.
If tₙ = 0, then we end up with the integral ∫_0^∞ δ(t) e^{−st} dt, which has no meaning since the delta function δ(t) has its spike precisely at t = 0, which is on the boundary of the interval over which is integrated. To accommodate this problem it is customary to adjust the definition of the Laplace transform by expanding slightly the interval over which is integrated. The Laplace transform will henceforth be understood as

F(s) = ∫_{0⁻}^{∞} f(t) e^{−st} dt := lim_{ε↓0} ∫_{−ε}^{∞} f(t) e^{−st} dt.   (6.3)

With this convention,

L{δ(t − tₙ)} = e^{−s tₙ} for every tₙ ≥ 0.   (6.4)

In particular the Laplace transform of the delta function δ(t) is equal to 1. For piecewise smooth signals f(t) it makes no difference whether integration in (6.3) begins at 0⁻ or 0 or even 0⁺, but for generalized functions it does make a difference, and opting for the choice 0⁻ means that we want to be able to take effects of delta function components δ(t) fully into account.
2. Time-scaling

L{f(at)} = (1/a) F(s/a),   (a > 0).

3. Time-shift

L{f(t − t₀) 1(t − t₀)} = F(s) e^{−s t₀},   (Re(s) > α).

4. Shift in s-domain

L{f(t) e^{s₀ t}} = F(s − s₀),   (Re(s) > Re(s₀) + α).

5. Differentiation with respect to time

L{f′(t)} = s F(s) − f(0),   (Re(s) > α).   (6.5)
Proof. We prove this only for the case that f(t) is differentiable in the classical sense. We shall further assume that on the region of convergence f(t) e^{−st} → 0 as t → ∞. This is practically always the case. Let ε > 0. Partial integration gives that

∫_ε^∞ f′(t) e^{−st} dt = ∫_ε^∞ e^{−st} df(t) = [f(t) e^{−st}]_{t=ε}^{∞} + s ∫_ε^∞ f(t) e^{−st} dt
  = −f(ε) e^{−sε} + s ∫_ε^∞ f(t) e^{−st} dt.

Now take the limit ε ↓ 0; then we see that L{f′(t)} = −f(0) + s F(s).

If f(t) is piecewise smooth, then the rule (6.5) remains valid, even if the derivative f′(t) exists only in the generalized sense.
6. Integration with respect to time. Let g(t) = ∫_0^t f(τ) dτ. Then

L{g(t)} = F(s)/s,   (Re(s) > max(0, α)).

The derivation of this rule is postponed till we treat the convolution theorem for Laplace transforms (see Example 6.3.6).
7. Differentiation with respect to s

L{t f(t)} = −F′(s),   (Re(s) > α).

6.3.1. Example. Let g(t) = f′(t). Then by Rule 5 we have that G(s) = s F(s) − f(0). The Laplace transform of the second derivative can be obtained by applying Rule 5 twice:

L{f″(t)} = s G(s) − g(0) = s² F(s) − s f(0) − f′(0).   (6.6)

More generally, for the n-th derivative,

L{f^{(n)}(t)} = sⁿ F(s) − s^{n−1} f(0) − s^{n−2} f′(0) − ⋯ − f^{(n−1)}(0),   (n = 0, 1, . . . ).   (6.7)
6.3.3. Example.

a) Consider the signal f₁(t) = e^{at} and the causal signal f₂(t) = e^{at} 1(t), and realize that they have the same Laplace transform F₁(s) = F₂(s) = 1/(s − a). The derivatives of f₁(t) and f₂(t) are

f₁′(t) = a e^{at},   f₂′(t) = a e^{at} 1(t) + δ(t).

Consistent with Rule 5,

L{f₁′(t)} = a/(s − a) = s F₁(s) − f₁(0),   L{f₂′(t)} = a/(s − a) + 1 = s/(s − a) = s F₂(s) − f₂(0⁻).

These findings may also be obtained from direct calculation of L{f₁′(t)} and L{f₂′(t)}.

b) We know that L{e^{at}} = 1/(s − a). Differentiate n times in the s-domain and we arrive at

L{(−t)ⁿ e^{at}} = (−1)ⁿ n! / (s − a)^{n+1},

hence

L{ tⁿ e^{at} / n! } = 1/(s − a)^{n+1}.
Some of the more commonly used Laplace transform pairs and properties are collected in
Tables 6.1 and 6.2.
6.3.4. Example. The inverse Laplace transform of rational functions may be determined with the help of partial fraction expansion (see Appendix A). The method is the same as for determining the inverse Fourier transform of rational functions. Let F(s) be given as

F(s) = 6s / ((s + 1)(s² − 4)).

The poles of this rational function are s₁ = −1, s₂ = 2 and s₃ = −2. Verify yourself that the partial fraction expansion of F(s) is

F(s) = 2/(s + 1) + 1/(s − 2) − 3/(s + 2).

Now, from Table 6.2 we can directly write down the inverse Laplace transform,

f(t) = 2 e^{−t} + e^{2t} − 3 e^{−2t},   (t > 0).
Table 6.1: Properties of the Laplace transform F(s) = ∫_{0⁻}^∞ f(t) e^{−st} dt.

Property              f(t)                   F(s)                  Condition
Linearity             a₁ f₁(t) + a₂ f₂(t)    a₁ F₁(s) + a₂ F₂(s)   Re s > max(α₁, α₂)
Time-scaling          f(at)                  (1/a) F(s/a)          a > 0, Re s > aα
Time-shift            f(t − t₀) 1(t − t₀)    F(s) e^{−s t₀}        t₀ > 0, Re s > α
Shift in s-domain     f(t) e^{s₀ t}          F(s − s₀)             Re s > Re s₀ + α
Differentiation (t)   f′(t)                  s F(s) − f(0)         Re s > α
Integration (t)       ∫₀^t f(τ) dτ           F(s)/s                Re s > max(0, α)
Differentiation (s)   t f(t)                 −F′(s)                Re s > α

Table 6.2: Standard Laplace transform pairs F(s) = ∫_{0⁻}^∞ f(t) e^{−st} dt.

f(t), (t > 0)                      F(s)                      Region of conv.
e^{at}                             1/(s − a)                 Re s > Re a
tⁿ/n!  (n = 0, 1, . . . )          1/s^{n+1}                 Re s > 0
tⁿ e^{at}/n!  (n = 0, 1, . . . )   1/(s − a)^{n+1}           Re s > Re(a)
cos(bt)                            s/(s² + b²)               Re s > 0
sin(bt)                            b/(s² + b²)               Re s > 0
e^{at} cos(bt)                     (s − a)/((s − a)² + b²)   Re s > Re a
e^{at} sin(bt)                     b/((s − a)² + b²)         Re s > Re a
δ(t)                               1                         s ∈ C
In Section 3.4 we saw that the convolution of two causal signals f(t) and g(t) is again causal,

(f ∗ g)(t) = ( ∫_0^t f(τ) g(t − τ) dτ ) 1(t).

Since the Laplace transform only deals with the causal part of a signal (i.e., the part f(t) for t ≥ 0), it is natural to define convolutions in this respect as

(f ∗ g)(t) = ∫_0^t f(τ) g(t − τ) dτ,   (t > 0).   (6.8)

We stress that the signals f(t) and g(t) are allowed to be non-causal. Also, we want to allow delta components in f(t) and g(t), and so we have to extend the interval [0, t] over which is integrated in (6.8) slightly,

(f ∗ g)(t) = ∫_{0⁻}^{t⁺} f(τ) g(t − τ) dτ.   (6.9)
6.3.5. Theorem (Convolution theorem for the Laplace transform). Let f(t) and g(t) be signals with Laplace transforms F(s) and G(s) respectively. Then

L{(f ∗ g)(t)} = F(s) G(s),

where (f ∗ g)(t) is the convolution product (6.9).

For piecewise smooth signals f(t) and g(t) the proof of this theorem is the same as the proof of the convolution theorem for the Fourier transform. It may be shown that the result is still valid when f(t) and g(t) contain delta function components.
6.3.6. Example.

a) Consider the unit step 1(t) and an arbitrary signal f(t). Then

L{(f ∗ 1)(t)} = L{ ∫_0^t f(τ) dτ } = F(s)/s.

(Compare with Rule 6 on page 113.)

b) Consider the delta function δ(t) and an arbitrary signal f(t). Then

L{(f ∗ δ)(t)} = F(s) · 1 = F(s).

In other words, (f ∗ δ)(t) = f(t) for all t > 0.
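The convolution theorem can be sanity-checked numerically. A sketch with the illustrative pair f(t) = e^{−t} and g(t) = 1 (so F(s) = 1/(s + 1) and G(s) = 1/s): first verify (f ∗ g)(t) = 1 − e^{−t} by quadrature, then verify that the truncated Laplace integral of 1 − e^{−t} matches F(s)G(s):

```python
import math

def convolve(f, g, t, n=2000):
    # (f*g)(t) = integral over tau in [0, t] of f(tau) g(t - tau), trapezoidal rule
    h = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for k in range(1, n):
        tau = k * h
        total += f(tau) * g(t - tau)
    return total * h

def laplace_numeric(fun, s, T=50.0, n=50000):
    # truncated one-sided Laplace integral (real s > 0), trapezoidal rule
    h = T / n
    total = 0.5 * (fun(0.0) + fun(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += fun(t) * math.exp(-s * t)
    return total * h

f = lambda t: math.exp(-t)
g = lambda t: 1.0
fg = lambda t: 1.0 - math.exp(-t)  # closed-form convolution (f*g)(t)
```

The product F(s)G(s) = 1/((s + 1)s) is recovered up to quadrature and truncation error.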
directly in terms of F(s). This can be done provided that the final value exists.
6.3.7. Theorem (Final value theorem). Let f(t) be a signal whose final value f(∞) exists, let F(s) denote the Laplace transform of f(t), and assume that F(s) is defined for all Re s > 0. Then

lim_{s↓0} s F(s) = f(∞).

Proof. (Sketch) Suppose that f(t) is piecewise smooth; then by the differentiation rule (Rule 5) it holds that

s F(s) = ∫_{0⁻}^∞ e^{−st} f′(t) dt + f(0).   (6.10)

Taking the limit s ↓ 0 gives lim_{s↓0} s F(s) = ∫_{0⁻}^∞ f′(t) dt + f(0) = (f(∞) − f(0)) + f(0) = f(∞). Delta function components aₙ δ(t − tₙ) in f(t) do not change the result, since

lim_{s↓0} s L{ f(t) + Σ aₙ δ(t − tₙ) } = lim_{s↓0} s F(s) + lim_{s↓0} s Σ aₙ e^{−s tₙ} = lim_{s↓0} s F(s).
As an example, consider

F(s) = 5 / (s (s² + 2s + 5)).

Partial fraction expansion gives

F(s) = 1/s − (s + 2)/(s² + 2s + 5) = 1/s − (s + 1)/((s + 1)² + 4) − 1/((s + 1)² + 4).

Since

½ L{e^{−t} sin(2t)} = 1/((s + 1)² + 4),   L{e^{−t} cos(2t)} = (s + 1)/((s + 1)² + 4),

it follows that

f(t) = 1 − e^{−t} ( ½ sin(2t) + cos(2t) ),   (t > 0).

From this the final value f(∞) can be seen to exist and it equals lim_{t→∞} f(t) = 1. This value is indeed equal to what the final value theorem states:

lim_{s↓0} s F(s) = lim_{s↓0} 5/(s² + 2s + 5) = 1.
We end this chapter with a discussion of Laplace transforms of periodic signals, at least, periodic on [0, ∞), which is to say that f(t + T) = f(t) for all t ≥ 0. Here T is the period of f(t). The Laplace transform of a T-periodic signal can be written as

F(s) = ∫_0^∞ f(t) e^{−st} dt = Σ_{m=0}^∞ ∫_{mT}^{mT+T} f(t) e^{−st} dt
     = {τ = t − mT} = Σ_{m=0}^∞ ∫_0^T f(mT + τ) e^{−s(mT+τ)} dτ
     = Σ_{m=0}^∞ e^{−msT} ∫_0^T f(τ) e^{−sτ} dτ.

Here we recognize a geometric series with ratio e^{−sT}. The geometric series converges provided that |e^{−sT}| < 1, i.e., Re s > 0, and then has limit

Σ_{m=0}^∞ e^{−msT} = 1/(1 − e^{−sT}).

Therefore

F(s) = L{f(t)} = F_T(s)/(1 − e^{−sT}),   (6.11)

where

F_T(s) = ∫_0^T f(t) e^{−st} dt.

The Laplace transform can apparently be expressed by a Laplace-type integral over one period.
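Formula (6.11) can be verified numerically for, say, a square wave of period T = 2 that equals 1 on [0, 1) and 0 on [1, 2) (an illustrative signal, not from the text). Then F_T(s) = (1 − e^{−s})/s, and a truncated Laplace integral over many periods should approach F_T(s)/(1 − e^{−2s}):

```python
import math

def f(t):
    # square wave, period T = 2: one on [0, 1), zero on [1, 2)
    return 1.0 if (t % 2.0) < 1.0 else 0.0

def laplace_numeric(fun, s, T=60.0, n=240000):
    # truncated one-sided Laplace integral, trapezoidal rule
    h = T / n
    total = 0.5 * (fun(0.0) + fun(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += fun(t) * math.exp(-s * t)
    return total * h

def F_formula(s):
    # (6.11): F(s) = F_T(s) / (1 - e^{-sT}), with F_T(s) = (1 - e^{-s})/s
    return (1.0 - math.exp(-s)) / (s * (1.0 - math.exp(-2.0 * s)))
```

The small residual error comes from the jump discontinuities of the square wave and the truncation of the integral.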
As an example,

F(s) = F_T(s)/(1 − e^{−sT}) = (1 − (1 + sT) e^{−sT}) / (s² T (1 − e^{−sT})) = 1/(s² T) − e^{−sT}/(s (1 − e^{−sT})).

See Figure 6.3. Every periodic signal has an abscissa of convergence equal to 0. Note that the denominator s(1 − e^{−sT}) is zero for every s = jkω₀, k ∈ Z, with ω₀ = 2π/T. This is perhaps not that surprising, since we know that periodic signals have a generalized Fourier transform whose spectrum is a series of infinite peaks at integer multiples of ω₀.
[Figure 6.3: graph of |F(s)| over the complex s-plane, with axes Re s and Im s.]

6.4 Problems

(a) …
(b) (s − 3)/(s − 2)³
(c) 1/(s² + 2s + 2)
(d) s/(s² + 2s + 2)
(e) (1 − e^{−as})/(s (1 − e^{−bs}))
(f) 1/((s + a)(1 − e^{−sT}))
6.6 Suppose f(t) is a periodic signal with period T and let F(s) be the Laplace transform of f(t). Determine lim_{s↓0} s F(s).

6.7 Use the Laplace transform to determine (f ∗ g)(t) for f(t) = tⁿ 1(t) and g(t) = t^m 1(t).
More involved problems:
6.8 Let α > 0. Determine all s ∈ C for which the Laplace transform of the signal f(t) = 1/(1 + t^α) exists. (Hint: distinguish various cases of α.)
7
Systems described by ordinary linear
differential equations
[Figure: a system with input u(t), output y(t) and internal variables x₁(t), x₂(t), . . . ]

[Figure: a mass m attached to a spring; q(t) is the position of the mass and F(t) the external force.]

m q̈(t) + k q(t) = F(t).   (7.1)

(For brevity, the derivative of q(t) with respect to time is denoted by q̇(t), and q̈(t) denotes the second derivative, etc.)
From t = t₀ an input force u(t) = F(t) is exerted on the mass. It may be intuitively clear that if we know at t = t₀ the position q(t₀) and the speed q̇(t₀) of the mass, then the position and speed of the mass are fully determined for all t ≥ t₀ for any given input force. Define the signals x₁(t) and x₂(t) as

x₁(t) = q(t),   x₂(t) = q̇(t).
The second-order differential equation (7.1) can alternatively be expressed by two first-order differential equations,

ẋ₁(t) = x₂(t),
ẋ₂(t) = −(k/m) x₁(t) + (1/m) u(t).

In matrix notation this is

ẋ(t) = [ 0  1 ; −k/m  0 ] x(t) + [ 0 ; 1/m ] u(t),
y(t) = [ 0  1 ] x(t),   (7.2)

where x(t) is the 2 × 1 vector x(t) = [x₁(t) ; x₂(t)], and the output y(t) = q̇(t) = x₂(t) is the speed of the mass.
This chapter considers systems with one input u(t) and one output y(t) that can be related via
a vector differential equation of the form (7.2). In the above example the variable x(t) is a
vector with two entries, x1 (t) and x2 (t). In this chapter we shall allow for variables x(t) with
an arbitrary number of entries,
x(t) = [x₁(t) ; x₂(t) ; ⋯ ; xₙ(t)].
This vector-valued signal x(t) is called a state of the system. The first set of equations in (7.2),

ẋ(t) = A x(t) + B u(t),
are called the state equations. Now in general A is an n × n matrix and B is a column vector of the same dimension as the state x(t). The second equation in (7.2) is y(t) = C x(t) and is called the output equation of the system. In fact we shall allow for a more general output
equation
y(t) = Cx(t) + Du(t).
Here C is a row vector with as many entries as the state x(t), and D is a scalar. Note that
the output equation involves no derivatives. In summary, then, the systems considered in this
chapter are the ones with representation
ẋ(t) = A x(t) + B u(t),
y(t) = C x(t) + D u(t).   (7.3)
[Figure: simulated response y(t) to the input u(t) = 1(t − 20) over the interval from t = 0 to t = 40.]

h=0.1;            % Integration step-size
t=0:h:40;         % Discretized time interval [0,40]
u=stepfun(t,20);  % Discretized unit step 1(t-20)
[Figure 7.4: Electrical circuit (Example 7.1.3), with voltage source u(t), inductor L, two resistors R, capacitors C₁ and C₂, currents i(t), i₁(t), i₂(t), and output voltage y(t).]
7.1.3. Example. Consider the electrical circuit of Figure 7.4. The voltage across the voltage
source is taken to be the input u(t), and as output y(t) we take the voltage across the resistor R
and capacitor C2 as shown in the figure.
With Kirchhoff's laws and the current-voltage relations of the various components, it is possible to determine the voltages and currents for any of the components. As state vector we take

x(t) = [x₁(t) ; x₂(t) ; x₃(t)] = [i(t) ; v_{C1}(t) ; v_{C2}(t)],

in which x₁(t) is the current i(t) = i_L(t) through the inductor L, x₂(t) is the voltage v_{C1}(t) across the capacitor C₁, and x₃(t) is the voltage v_{C2}(t) across the capacitor C₂.
Let i₁(t) and i₂(t) denote the currents through the capacitors C₁ and C₂ respectively, and let v_L(t) be the voltage across the inductor. The following equations in x₁(t), x₂(t) and x₃(t) are readily verified:

u(t) = R x₁(t) + v_L(t) + R i₁(t) + x₂(t),
0 = R i₁(t) + x₂(t) − R i₂(t) − x₃(t),   (7.4)
x₂(t) = x₂(t₀) + (1/C₁) ∫_{t₀}^{t} i₁(τ) dτ,
x₃(t) = x₃(t₀) + (1/C₂) ∫_{t₀}^{t} i₂(τ) dτ.

The second and third of these equations can be made into differential equations through differentiation.
With v_L(t) = L ẋ₁(t), i₁(t) = C₁ ẋ₂(t), i₂(t) = C₂ ẋ₃(t) and the current law x₁(t) = i₁(t) + i₂(t), the equations take the matrix form

[ L   RC₁    0   ]          [ −R  −1  0 ]        [ 1 ]
[ 0   RC₁  −RC₂  ] ẋ(t)  =  [  0  −1  1 ] x(t) + [ 0 ] u(t).
[ 0   C₁    C₂   ]          [  1   0  0 ]        [ 0 ]

Call the matrix on the left-hand side V. Finally, multiply both sides of the equality from the left with V⁻¹; then we end up with the state equations

ẋ(t) = A x(t) + B u(t),

in which (using Maple)

A = (1/2) [ −3R/L   −1/L       −1/L      ]
          [  1/C₁   −1/(RC₁)    1/(RC₁)  ]
          [  1/C₂    1/(RC₂)   −1/(RC₂)  ]

and

B = [ 1/L ; 0 ; 0 ].

The output is y(t) = R i₂(t) + v_{C2}(t) = R C₂ ẋ₃(t) + x₃(t). Since ẋ₃(t) = (1/(2C₂)) x₁(t) + (1/(2RC₂)) (x₂(t) − x₃(t)), we find the output equation y(t) = C x(t) with

C = (1/2) [ R  1  1 ].

Now we wrote the circuit in the input-state-output representation (7.3). Note that D = 0 in this example.
[Figure 7.5: simulation diagram of the differential equation (7.6), with gains 7, 8, −5, −6 and states x₁(t), x₂(t).]

As we shall see, every ordinary differential equation of the form

y⁽ⁿ⁾(t) + q_{n−1} y⁽ⁿ⁻¹⁾(t) + ⋯ + q₀ y(t) = pₙ u⁽ⁿ⁾(t) + ⋯ + p₀ u(t)   (7.5)

may be rearranged as an input-state-output equation. This partly explains the interest in input-state-output equations.
7.1.4. Example (Simulation diagrams). Consider the differential equation

ÿ(t) + 5 ẏ(t) + 6 y(t) = 7 u̇(t) + 8 u(t).   (7.6)
As explained, the input-state-output representations are a convenient form for simulation. For
ODEs like (7.6) there is another representation that is suitable for simulation purposes. These
are the so called simulation diagrams. Figure 7.5 shows an example of a simulation diagram.
A simulation diagram is built up from three blocks: adders, integrators and gains.
[Figure: the three building blocks: an adder with inputs a(t), b(t), c(t) and output a(t) + b(t) + c(t); an integrator with input ẋ(t) and output x(t); and a gain with input u(t) and output B u(t).]
Integrators and gains are simple LTI systems. Note that the integrator is represented by a
triangle instead of a box. An adder can have many inputs, but has one output, which is the
sum of the inputs. The name simulation diagram derives from the fact that each of the three
blocks can be well simulated on a computer. For adders and gains this is obvious. Integration
is also doable on a computer, as opposed to numeric differentiation, which is error prone.
In order to find a simulation diagram of (7.6) we rearrange the equation so that only the highest-order derivative of the output is on the left-hand side,

ÿ(t) = −5 ẏ(t) − 6 y(t) + 7 u̇(t) + 8 u(t).
Next we integrate twice so as to get rid of the derivatives on the left-hand side,

y(t) = ∫∫ ( −5 ẏ(t) − 6 y(t) + 7 u̇(t) + 8 u(t) ) = ∫ ( −5 y(t) + 7 u(t) + ∫ ( −6 y(t) + 8 u(t) ) ).

As a final step we assign to each integral on the right-hand side a state variable x_k(t), k = 1, 2:

y(t) = ∫ ( −5 y(t) + 7 u(t) + x₂(t) ),   where   x₂(t) = ∫ ( −6 y(t) + 8 u(t) ).

Calling the outer integral x₁(t), the signals satisfy

y(t) = x₁(t),
ẋ₁(t) = −5 y(t) + 7 u(t) + x₂(t),
ẋ₂(t) = −6 y(t) + 8 u(t).
Verify yourself that this agrees with the simulation diagram of Figure 7.5. From the simulation diagram we can directly write down an equivalent input-state-output equation:

ẋ(t) = [ −5  1 ; −6  0 ] x(t) + [ 7 ; 8 ] u(t),
y(t) = [ 1  0 ] x(t).
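A sketch of how such an input-state-output representation is simulated (forward Euler in Python rather than the MATLAB fragment shown earlier; the step size and horizon are arbitrary choices). For the unit step input the output should settle at the DC gain 8/6 = 4/3, since ÿ + 5ẏ + 6y = 7u̇ + 8u gives y(∞) = 8/6 when u ≡ 1:

```python
def simulate(h=1e-3, t_end=10.0):
    # forward-Euler simulation of x' = A x + B u, y = C x
    # with A = [[-5, 1], [-6, 0]], B = [7, 8], C = [1, 0], u(t) = 1(t)
    x1, x2 = 0.0, 0.0
    t = 0.0
    while t < t_end:
        u = 1.0
        dx1 = -5.0 * x1 + x2 + 7.0 * u
        dx2 = -6.0 * x1 + 8.0 * u
        x1, x2 = x1 + h * dx1, x2 + h * dx2
        t += h
    return x1  # y(t_end) = x1(t_end)
```

With eigenvalues −2 and −3 of A, the transient has died out long before t = 10 and simulate() returns a value close to 4/3.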
This way the signals y(t), u(t) and the state components satisfy

y(t) = p₃ u(t) + x₁(t),
ẋ₁(t) = p₂ u(t) − q₂ y(t) + x₂(t),
ẋ₂(t) = p₁ u(t) − q₁ y(t) + x₃(t),
ẋ₃(t) = p₀ u(t) − q₀ y(t).

This corresponds to the simulation diagram of Figure 7.6, and from the simulation diagram in turn we may form the input-state-output representation,

ẋ(t) = [ −q₂  1  0 ; −q₁  0  1 ; −q₀  0  0 ] x(t) + [ p₂ − q₂ p₃ ; p₁ − q₁ p₃ ; p₀ − q₀ p₃ ] u(t),
y(t) = [ 1  0  0 ] x(t) + p₃ u(t).
[Figure 7.6: simulation diagram with gains p₀, p₁, p₂, p₃ and −q₀, −q₁, −q₂, integrators with outputs x₃(t), x₂(t), x₁(t), input u(t) and output y(t).]
ẋ(t) = A x(t) + B u(t).
7.2.1. Example (Variation of constants). In this example we provide the general solution x(t) for the case that n = 1. That is, A and B are assumed scalar, A = a, B = b, and the state consists of only one entry, x(t) = x₁(t). Consider, then, the differential equation

ẋ₁(t) = a x₁(t) + b u(t).   (7.7)

For zero input this differential equation reduces to the homogeneous equation ẋ₁(t) = a x₁(t). It is immediate that

x₁(t) = z e^{at},   z ∈ C,

is a solution of the homogeneous equation ẋ₁(t) = a x₁(t) for any constant z. The trick of variation of constants is now to express the candidate solution of (7.7) as

x₁(t) = z(t) e^{at},

where now z(t) is an arbitrary function of time. Any signal x₁(t) can be expressed as x₁(t) = z(t) e^{at} because e^{at} is invertible. Now

ẋ₁(t) = a x₁(t) + b u(t) ⟺ ż(t) e^{at} + a z(t) e^{at} = a z(t) e^{at} + b u(t)
⟺ ż(t) e^{at} = b u(t)
⟺ ż(t) = e^{−at} b u(t)
⟺ z(t) = z₀ + ∫_{t₀}^{t} e^{−aτ} b u(τ) dτ,   (z₀ ∈ C),

so that

x₁(t) = e^{at} z₀ + ∫_{t₀}^{t} e^{a(t−τ)} b u(τ) dτ.   (7.8)
As we shall see, the variation of constants formula of the above example also works for the general n-dimensional case. In the above example, the general solution (7.8) relies on the exponential function e^{at}. In the n-dimensional case the role of the exponential function e^{at} is taken over by the matrix exponential e^{At}, where A ∈ R^{n×n}. Analogous to the Taylor series expansion of e^a,

e^a = Σ_{k=0}^∞ a^k/k! = 1 + a + a²/2! + a³/3! + ⋯,
the matrix exponential e^A is defined as

e^A = Σ_{k=0}^∞ A^k/k! = I + A + A²/2! + A³/3! + ⋯.   (7.9)
This series is well defined for every matrix A. In view of the scalar case it is tempting to think that for matrices A a solution of ẋ(t) = A x(t) is x(t) = e^{At} x(0), and this is indeed the case. Key properties of the matrix exponential are: (1) e^0 = I; (2) e^{A+F} = e^A e^F whenever AF = FA; and (3)

d/dt e^{At} = A e^{At} = e^{At} A.
Proof.

1. Trivial.

2. By definition,

e^A e^F = ( Σ_{k=0}^∞ A^k/k! ) ( Σ_{m=0}^∞ F^m/m! ) = Σ_{k=0}^∞ Σ_{m=0}^∞ A^k F^m / (k! m!).

On the other hand, since AF = FA we may expand (A + F)^n with the binomial theorem, so that

e^{A+F} = Σ_{n=0}^∞ (A + F)^n/n! = Σ_{n=0}^∞ (1/n!) Σ_{k=0}^n ( n!/(k!(n−k)!) ) A^k F^{n−k} = Σ_{n=0}^∞ Σ_{k=0}^n A^k F^{n−k} / (k! (n−k)!).

With m = n − k the two double sums are seen to be equal; therefore e^A e^F = e^{A+F}.

3. Differentiating the series (7.9) term by term gives

d/dt e^{At} = d/dt Σ_{k=0}^∞ A^k t^k/k! = Σ_{k=0}^∞ (A^k/k!) (d/dt t^k)
  = Σ_{k=1}^∞ A^k t^{k−1}/(k−1)! = {m = k − 1} = Σ_{m=0}^∞ A^{m+1} t^m/m! = e^{At} A.

Since A commutes with e^{At}, we further have that e^{At} A = Σ_{m=0}^∞ A^{m+1} t^m/m! = A e^{At}.
With these properties of the matrix exponential we can redo the variation of constants trick of Example 7.2.1. That is, we express the candidate solution x(t) of the state equation ẋ(t) = A x(t) + B u(t) as

x(t) = e^{At} z(t),

where z(t) is an arbitrary function of time. Any function x(t) can be expressed like this since e^{At} is invertible for every A ∈ R^{n×n} and t ∈ R. Now

ẋ(t) = A x(t) + B u(t) ⟺ e^{At} ż(t) = B u(t) ⟺ z(t) = z₀ + ∫_{t₀}^{t} e^{−Aτ} B u(τ) dτ,

so that

x(t) = e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ.   (7.10)

We shall have to be a bit careful with possible delta function components in the input. The way around this problem is to replace the initial state x(t₀) with

x(t₀⁻) := lim_{h↓0} x(t₀ − h).

In the event of a jump discontinuity at t₀ the initial state x(t₀⁻) is hence understood to be the state just before the jump. Summary:

x(t) = e^{A(t−t₀)} x(t₀⁻) + ∫_{t₀⁻}^{t} e^{A(t−τ)} B u(τ) dτ.   (7.11)
As an example, let

A = [ 1  0 ; 0  −2 ].

Then

e^{At} = (At)⁰ + (At)/1! + (At)²/2! + ⋯
  = [ 1  0 ; 0  1 ] + (1/1!) [ t  0 ; 0  −2t ] + (1/2!) [ t²  0 ; 0  (−2t)² ] + ⋯
  = [ 1 + t + t²/2! + ⋯   0 ; 0   1 + (−2t) + (−2t)²/2! + ⋯ ]
  = [ e^t  0 ; 0  e^{−2t} ].
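The series (7.9) is easy to evaluate numerically for a small matrix; a sketch for the diagonal example above:

```python
import math

def expm_series(A, t, terms=60):
    # e^{A t} via the truncated power series (7.9), for a 2x2 matrix A
    n = 2
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]  # running term (A t)^k / k!
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] * t / k for m in range(n))
                 for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(n):
                result[i][j] += term[i][j]
    return result

A = [[1.0, 0.0], [0.0, -2.0]]
E = expm_series(A, 0.7)
```

For this diagonal A the diagonal entries of E match e^{0.7} and e^{−1.4}, and the off-diagonal entries stay zero.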
In the above example the matrix exponential e^{At} could be found because the matrix A is diagonal. For general matrices there are other techniques to find the entries of the matrix exponential. One such technique uses the Laplace transform. The next section, among other things, explains how this works.
The Laplace transform of a vector-valued signal x(t) is defined entry-wise,

L{x(t)} = [ L{x₁(t)} ; L{x₂(t)} ; ⋯ ; L{xₙ(t)} ].

Because of the differentiation rule, there holds for the derivative that

L{ẋ(t)} = s L{x(t)} − x(0).

It is not difficult to verify that

L{A x(t)} = A L{x(t)}.   (7.12)
Now let X(s) denote the Laplace transform of x(t) and let U(s) denote the Laplace transform of u(t). Taking Laplace transforms of left- and right-hand side of the state equation ẋ(t) = A x(t) + B u(t) gives us

s X(s) − x(0) = A X(s) + B U(s),   (7.13)

that is,

(sI − A) X(s) = x(0) + B U(s).   (7.14)

Here I stands for the n × n identity matrix. Now, if s is not a root of the characteristic equation det(sI − A) = 0 of A, then sI − A has an inverse. Multiplying both left- and right-hand side of (7.14) from the left with (sI − A)⁻¹ results in

X(s) = (sI − A)⁻¹ x(0) + (sI − A)⁻¹ B U(s).   (7.15)

This is a set of algebraic equations, which is usually easier to work with than the state equation itself. Once the above algebraic equation is solved, the underlying time-domain signal x(t) follows from the inverse Laplace transform.
7.3.1. Corollary. Let A ∈ R^{n×n}. Then L{e^{At}} = (sIₙ − A)⁻¹. So for t > 0 the matrix exponential equals L⁻¹{(sIₙ − A)⁻¹}.

Proof. Let t₀ = 0 and let u(t) = 0 for all t; then (7.10) reduces to x(t) = e^{At} x(0) and (7.15) reduces to X(s) = (sIₙ − A)⁻¹ x(0). So necessarily L{e^{At}} = (sIₙ − A)⁻¹.
7.3.2. Example. We shall calculate the matrix exponential $e^{At}$ of $A = \begin{pmatrix}1 & -1\\ 1 & 1\end{pmatrix}$. In the Laplace domain we have that
$$(sI_2 - A) = \begin{pmatrix} s-1 & 1 \\ -1 & s-1 \end{pmatrix}.$$
Now, recall that the inverse of an arbitrary $2\times 2$ matrix $\begin{pmatrix}a & b\\ c & d\end{pmatrix}$ is $\frac{1}{ad-bc}\begin{pmatrix}d & -b\\ -c & a\end{pmatrix}$. So
$$(sI_2 - A)^{-1} = \frac{1}{(s-1)^2+1}\begin{pmatrix} s-1 & -1 \\ 1 & s-1 \end{pmatrix} = \begin{pmatrix} \frac{s-1}{(s-1)^2+1} & \frac{-1}{(s-1)^2+1} \\[4pt] \frac{1}{(s-1)^2+1} & \frac{s-1}{(s-1)^2+1} \end{pmatrix}.$$
In order to determine the matrix exponential we need only take the inverse Laplace transforms entry-by-entry:
$$e^{At} = \begin{pmatrix} e^{t}\cos(t) & -e^{t}\sin(t) \\ e^{t}\sin(t) & e^{t}\cos(t) \end{pmatrix}. \qquad (7.16)$$
Since we used the Laplace transform, which deals with positive time only, we formally derived the matrix exponential only for positive time. It is easy to believe, and indeed true, that (7.16) in fact holds for all time $t \in \mathbb{R}$.
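The series and Laplace computations above can be cross-checked numerically. The sketch below (pure Python; all names and the truncation order are our own, not from the text) sums the truncated series $e^{At} = \sum_k (At)^k/k!$ for the matrix $A$ of Example 7.3.2 and compares it with the closed form (7.16):

```python
import math

# Hypothetical numeric check of Example 7.3.2: for A = [[1, -1], [1, 1]] the
# matrix exponential should equal e^t * [[cos t, -sin t], [sin t, cos t]].

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=30):
    """Truncated series e^{At} = sum_k (At)^k / k! for a 2x2 matrix A."""
    At = [[a * t for a in row] for row in A]
    term = [[1.0, 0.0], [0.0, 1.0]]        # (At)^0 = I
    total = [row[:] for row in term]
    for k in range(1, terms):
        term = mat_mul(term, At)           # now (At)^k
        fact = math.factorial(k)
        for i in range(2):
            for j in range(2):
                total[i][j] += term[i][j] / fact
    return total

A = [[1.0, -1.0], [1.0, 1.0]]
t = 0.5
E = expm_series(A, t)
closed = [[math.exp(t) * math.cos(t), -math.exp(t) * math.sin(t)],
          [math.exp(t) * math.sin(t),  math.exp(t) * math.cos(t)]]
err = max(abs(E[i][j] - closed[i][j]) for i in range(2) for j in range(2))
print(err)  # should be essentially zero
```

The same series, truncated, is how one would evaluate $e^{At}$ when no closed form is at hand.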
If we are given the input u(t) and initial state x(0) then we need not determine the matrix
exponential first in order to find x(t).
7.3.3. Example. Suppose we have a continuous-time system with state $x(t) = \begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix}$ and state equations
$$\dot x(t) = \begin{pmatrix}1 & -1\\ 1 & 1\end{pmatrix}x(t) + \begin{pmatrix}1\\ 0\end{pmatrix}u(t).$$
As input we take $u(t) = \delta(t-1)$, and we suppose that the initial state is $x(0) = \begin{pmatrix}0\\ 1\end{pmatrix}$.

We shall calculate $x(t)$ for $t > 0$. Let, as always, $X(s)$ denote the Laplace transform of $x(t)$. The Laplace transform $U(s)$ of $u(t) = \delta(t-1)$ is $U(s) = e^{-s}$. Therefore $X(s) := \begin{pmatrix}X_1(s)\\ X_2(s)\end{pmatrix}$ equals
$$\begin{pmatrix}X_1(s)\\ X_2(s)\end{pmatrix} = (sI_2 - A)^{-1}\bigl(x(0) + BU(s)\bigr) = \begin{pmatrix}s-1 & 1\\ -1 & s-1\end{pmatrix}^{-1}\left(\begin{pmatrix}0\\ 1\end{pmatrix} + \begin{pmatrix}1\\ 0\end{pmatrix}e^{-s}\right).$$
So for $X_1(s)$ and $X_2(s)$ we find
$$\begin{pmatrix}X_1(s)\\ X_2(s)\end{pmatrix} = \frac{1}{(s-1)^2+1}\begin{pmatrix}s-1 & -1\\ 1 & s-1\end{pmatrix}\begin{pmatrix}e^{-s}\\ 1\end{pmatrix} = \begin{pmatrix}\dfrac{-1 + e^{-s}(s-1)}{(s-1)^2+1}\\[8pt] \dfrac{s-1+e^{-s}}{(s-1)^2+1}\end{pmatrix}.$$
Verify yourself that $X_1(s)$ and $X_2(s)$ are the Laplace transforms of the signals
$$x_1(t) = -e^{t}\sin t + e^{t-1}\cos(t-1)\,1(t-1),$$
$$x_2(t) = e^{t}\cos t + e^{t-1}\sin(t-1)\,1(t-1), \qquad (t > 0).$$
7.3.4. Lemma. Let $U(s)$, $X(s)$ and $Y(s)$ denote the Laplace transforms of $u(t)$, $x(t)$ and $y(t)$. Then $y(t)$ satisfies (7.3) for $t > 0$ if and only if
$$Y(s) = C(sI - A)^{-1}x(0) + \left(C(sI - A)^{-1}B + D\right)U(s). \qquad (7.17)$$

Now that the output is characterized in the s-domain, it is also possible to express the output in time-domain terms. For that we simply determine the inverse Laplace transform of (7.17). To this end define $K(s)$ and $H(s)$ as
$$K(s) = C(sI - A)^{-1}, \qquad (7.18)$$
$$H(s) = C(sI - A)^{-1}B + D. \qquad (7.19)$$
Realize that $K(s)$ is a row vector and that $H(s)$ for each $s$ is a scalar. The Laplace transform $Y(s)$ can now be written as
$$Y(s) = K(s)x(0) + H(s)U(s). \qquad (7.20)$$
It will be no surprise that we will need the inverse Laplace transforms of $K(s)$ and $H(s)$. Suppose we managed to find the inverse Laplace transforms $k(t)$ and $h(t)$ of $K(s)$ and $H(s)$. Then because of the convolution theorem we get the time-domain characterization of the output
$$y(t) = k(t)x(0) + (h * u)(t). \qquad (7.21)$$
As an example, consider the system with
$$A = \begin{pmatrix}0 & 1\\ -1 & -2\end{pmatrix}, \qquad B = \begin{pmatrix}0\\ 1\end{pmatrix}, \qquad C = \begin{pmatrix}1 & -1\end{pmatrix}, \qquad D = 1.$$
The inverse of $sI - A$ is
$$(sI - A)^{-1} = \{\det(sI-A) = s(s+2)+1 = (s+1)^2\} = \frac{1}{(s+1)^2}\begin{pmatrix}s+2 & 1\\ -1 & s\end{pmatrix}.$$
Therefore
$$K(s) = C(sI-A)^{-1} = \frac{1}{(s+1)^2}\begin{pmatrix}1 & -1\end{pmatrix}\begin{pmatrix}s+2 & 1\\ -1 & s\end{pmatrix} = \frac{1}{(s+1)^2}\begin{pmatrix}s+3 & 1-s\end{pmatrix},$$
and
$$H(s) = C(sI-A)^{-1}B + D = \frac{1-s}{(s+1)^2} + 1 = \{\text{partial fraction expansion}\} = 1 - \frac{1}{s+1} + \frac{2}{(s+1)^2}.$$
Since $K(s)$ and $H(s)$ are rational functions (this is always the case) it is easy to find the inverse Laplace transforms. The inverse Laplace transforms are
$$k(t) = \begin{pmatrix}(2t+1)e^{-t} & (2t-1)e^{-t}\end{pmatrix} 1(t),$$
and
$$h(t) = \delta(t) + (-1+2t)e^{-t}\,1(t).$$
For a given initial state $x(0) = \begin{pmatrix}x_1(0)\\ x_2(0)\end{pmatrix}$ and input $u(t)$ we finally obtain the general form of the output
$$y(t) = \left((2t+1)e^{-t}x_1(0) + (2t-1)e^{-t}x_2(0)\right)1(t) + (h*u)(t).$$
7.3.6. Example (Set-point change). Consider once more the RC-network described by the differential equation
$$y^{(1)}(t) + \alpha y(t) = \alpha u(t), \qquad \left(\alpha = \frac{1}{RC}\right). \qquad (7.22)$$
Suppose that the voltage $u(t)$ has long been equal to a constant value of one. It is easy to believe that $y(t)$ (the voltage across the capacitor) then settles to a constant value as well. In fact we show this to be true in Subsection 7.5.1 on steady-state behavior. Assuming that $y(t)$ is constant gives us that $y^{(1)}(t) = 0$, and so from (7.22) we see that necessarily the output settles to the value $y(t) = u(t) = 1$.

Now, at $t = 0$, we suddenly increase the voltage from 1 to 2, and keep it constant from that point onwards:
$$u(t) = 2, \qquad t > 0.$$
What will happen with $y(t)$? That is, what is $y(t)$ for $t > 0$? Intuition tells us that the response $y(t)$ is unique, but we also know that the general solution $y(t)$ of ODE (7.22) is not unique. It is readily verified (see Appendix B) that the general solution for $t > 0$ is
$$y(t) = 2 + ce^{-\alpha t}, \qquad c \in \mathbb{R}.$$
As argued, shortly before the set-point change at $t = 0$ we have $y(0) = 1$. This gives us
$$1 = y(0) = 2 + ce^{0} = 2 + c.$$
So $c = -1$, and we found the (unique) response to the set-point change,
$$y(t) = \begin{cases} 1 & \text{if } t < 0,\\ 2 - e^{-\alpha t} & \text{if } t > 0. \end{cases}$$
In words, after $u(t)$ is increased from 1 to 2, the output $y(t)$ grows continuously and exponentially from 1 upwards and settles to a final value of 2.
Another example of a set-point change is shown in Figure 7.3. Changing the reference temperature on your central heating system is another instance of a set-point change. Set-point changes are very common. With the Laplace transform we can redo the previous example, but now more succinctly. Indeed, initial conditions can then be taken into account right from the start and, importantly, we can solve the problem without having to find a particular solution.
7.3.7. Example (Set-point change via Laplace). To find the solution $y(t)$ of (7.22) for $u(t) = 2$ we shall use the Laplace transform. Recall that
$$\mathcal{L}\{y^{(1)}(t)\} = sY(s) - y(0).$$
So, taking the Laplace transform of Equation (7.22) gives
$$sY(s) - y(0) + \alpha Y(s) = \alpha U(s) = \frac{2\alpha}{s}.$$
With the initial condition $y(0) = 1$ this yields
$$Y(s) = \frac{1 + 2\alpha/s}{s+\alpha} = \frac{s+2\alpha}{s(s+\alpha)} = \{\text{partial fraction expansion}\} = \frac{2}{s} - \frac{1}{s+\alpha}.$$
Its inverse Laplace transform yields the time-domain $y(t)$ that we are after,
$$y(t) = 2 - e^{-\alpha t}, \qquad t > 0.$$
In Example 7.3.6 we found the solution by assuming that $y(t)$ is continuous at $t = 0$. Only then can we say that $y(0) = 2 + ce^{0} = 2 + c$, which we needed to determine $y(t)$. This assumption is not generally valid: in certain systems $y(t)$ may jump as the result of a jump in $u(t)$, so the procedure of Example 7.3.6 is not generally applicable. The use of the Laplace transform in Example 7.3.7 does not rely on any continuity assumption; it is generally applicable. The following example demonstrates a case where $y(t)$ jumps.
Consider the system described by
$$y^{(2)}(t) - 4y(t) = u^{(2)}(t), \qquad (7.23)$$
and suppose that $u(t) = 1(t)$ and that we are given the initial conditions $y(0) = 0$, $y'(0) = 1$. To find $y(t)$ for $t > 0$ we apply the Laplace transform to the above equation. Using the differentiation rule (Page 115) we find that
$$\mathcal{L}\{y^{(2)}(t)\} = s^2Y(s) - sy(0) - y^{(1)}(0) = s^2Y(s) - 1,$$
and
$$\mathcal{L}\{u^{(2)}(t)\} = s^2U(s) - su(0) - u^{(1)}(0) = s^2U(s) = s^2\,\frac{1}{s} = s.$$
Hence $(s^2 - 4)Y(s) = 1 + s$, so that
$$Y(s) = \frac{s+1}{s^2-4} = \frac{s+1}{(s-2)(s+2)} = \frac{3/4}{s-2} + \frac{1/4}{s+2}.$$
Its inverse Laplace transform is
$$y(t) = \frac{3}{4}e^{2t} + \frac{1}{4}e^{-2t}, \qquad (t > 0).$$
Note that $y(0^+) = 1$ while $y(0) = 0$: the output indeed jumps at $t = 0$.

[Figure: plot of the response $y(t)$ with initial conditions $y(0)=0$, $y'(0)=1$, together with the input $u(t)$.]
$u(t) \mapsto y(t)$. However, in input-state-output systems and systems described by ODEs the output not only depends on the input but also on initial conditions, whether initial conditions $x(0)$ from a state $x(t)$, or initial conditions $y(0)$, $y'(0)$ in the form of derivatives of the output. Nevertheless there still remains the desire to associate with ODEs and input-state-output equations a system in the old sense, that is, a system where the output $y(t)$ is uniquely determined by the input $u(t)$. In other words, where the system is a mapping from inputs to outputs.
7.4.1. Example. In Example 7.1.1 we considered a mechanical system consisting of a mass
connected to a wall via a spring. Due to friction (even though not modeled in the example) the
mass will eventually come to a stand-still if no force is applied for a long time. Just before we
start experimenting with the input force it is likely that we find the mass at rest. In other words,
if we apply a force F(t) from t = 0 onwards, then the state of the system will have been at rest
prior to that,
$$x(t) = \begin{pmatrix}q(t)\\ \dot q(t)\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix}, \qquad t < 0.$$
Intuitively it may be clear that when the system is initially at rest, the position and speed of the mass are fully determined by the input force. In particular, the output $y(t)$ (the speed) is fully determined by the input force $u(t) = F(t)$. That is, the output is a function of the input.
[Figure: a signal $f(t)$ that is initially at rest: $f(t) = 0$ for all $t < t_0$.]
Since $u(t) = 0$ for all $t < t_0$, the integral in (7.11) is zero for all $t < t_0$, so the output for $t < t_0$ is completely determined by the initial condition $x(t_0)$ as
$$y(t) = Ce^{A(t-t_0)}x(t_0), \qquad t < t_0.$$
The system is initially at rest if and only if $Ce^{A(t-t_0)}x(t_0)$ is identically zero for all $t$. We can therefore summarize as follows.
7.4.2. Lemma. In an initially at rest system described by the input-state-output equations (7.3), the response $y(t)$ to an input $u(t)$ initially at rest for $t < t_0$ is determined as
$$y(t) = \int_{t_0}^{t} Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t). \qquad (7.24)$$

We can take the result a bit further. For inputs that are initially at rest for $t < t_0$ the integral in (7.24) is the same as
$$y(t) = \int_{-\infty}^{t} Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t). \qquad (7.25)$$
The latter equation (7.25) has the advantage over (7.24) that it is independent of t0 , that is, we
do not need to know t0 for (7.25) to be valid. Moreover, from this integral (7.25) we can see
that it is a convolution. Indeed, define $h(t)$ as
$$h(t) = Ce^{At}B\,1(t) + D\delta(t),$$
then (7.25) is nothing but
$$y(t) = (h * u)(t). \qquad (7.26)$$
Now that we have written the system as a convolution, we may copy a whole series of results from Chapter 5 on systems described by convolutions. For one, they are linear and time-invariant. Also, since $h(t) = Ce^{At}B\,1(t) + D\delta(t)$ is a causal signal, we see that initially at rest systems (7.3) are causal (Theorem 5.1.11). In the following we shall assume that $t_0 = 0$.
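The convolution representation (7.26) lends itself to a simple numerical check. Assuming the first-order RC-network with impulse response $h(t) = \alpha e^{-\alpha t}1(t)$ and the step input $u = 1(t)$ (illustrative choices of ours, not prescribed by the text), a Riemann sum of $(h*u)(t)$ should reproduce the step response $1 - e^{-\alpha t}$:

```python
import math

# Riemann-sum sketch of y(t) = (h * u)(t) with h(t) = alpha*e^{-alpha t} 1(t)
# and u = 1(t); the exact step response is 1 - e^{-alpha t}.
alpha = 2.0
dt = 1e-3
t_end = 3.0
n = int(t_end / dt)
# int_0^t h(tau) * u(t - tau) dtau with u = 1 on the whole interval:
y = sum(alpha * math.exp(-alpha * k * dt) * dt for k in range(n))
exact = 1.0 - math.exp(-alpha * t_end)
print(abs(y - exact))  # discretization error of order dt
```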
In Lemma 7.3.4 we found that for the initially at rest case ($x(0) = 0$)
$$Y(s) = \left(C(sI-A)^{-1}B + D\right)U(s). \qquad (7.27)$$
Comparing (7.26) with (7.27) we see that the impulse response $h(t)$ has Laplace transform
$$H(s) = C(sI-A)^{-1}B + D.$$
7.4.3. Definition. The transfer function $H(s)$ of an LTI system is defined as the Laplace transform $H(s) = \mathcal{L}\{h(t)\}$ of the impulse response $h(t)$ of the system.

The transfer function is a convenient notion. As we saw in Chapter 5, it is in principle possible to express all properties of a system in terms of its impulse response. As we shall shortly see, some of the more important properties can be seen by inspecting the transfer function.
For systems described by an ODE
$$y^{(n)}(t) + q_{n-1}y^{(n-1)}(t) + \cdots + q_1y^{(1)}(t) + q_0y(t) = p_nu^{(n)}(t) + \cdots + p_1u^{(1)}(t) + p_0u(t), \qquad (7.28)$$
the initially at rest behavior is very compactly characterized by the transfer function. Formally we should first rewrite the ODE as an input-state-output equation, and then work out the necessary formulas. However, we know that the transfer function of the initially at rest system $H(s)$ relates an input $U(s)$ and output $Y(s)$ via
$$Y(s) = H(s)U(s),$$
and this allows us to take a shortcut.
7.4.4. Lemma. The initially at rest system described by (7.28) has rational transfer function
$$H(s) = \frac{p_ns^n + p_{n-1}s^{n-1} + \cdots + p_1s + p_0}{s^n + q_{n-1}s^{n-1} + \cdots + q_1s + q_0}.$$

Proof. Take any smooth causal input $u(t)$. Because of causality of $u(t)$ there holds that $u^{(k)}(0) = 0$ for all $k$. The response $y(t)$ is also causal, so for the output there likewise holds that $y^{(k)}(0) = 0$ for every $k$. The differentiation rule (Page 112) then gives
$$\mathcal{L}\{y^{(k)}(t)\} = s^kY(s), \qquad \mathcal{L}\{u^{(k)}(t)\} = s^kU(s),$$
so that taking Laplace transforms of (7.28) yields
$$Y(s) = \frac{p_ns^n + p_{n-1}s^{n-1} + \cdots + p_1s + p_0}{s^n + q_{n-1}s^{n-1} + \cdots + q_1s + q_0}\,U(s).$$
7.4.5. Example. The transfer function of the initially at rest system described by
$$y^{(2)}(t) - 4y(t) = u(t)$$
is
$$H(s) = \frac{1}{s^2 - 4}.$$
The impulse response can be obtained through partial fraction expansion of $H(s)$,
$$H(s) = \frac{1}{(s+2)(s-2)} = \frac{-1/4}{s+2} + \frac{1/4}{s-2}.$$
Hence
$$h(t) = -\tfrac14 e^{-2t} + \tfrac14 e^{2t} \qquad (t > 0).$$
We also know that $h(t)$ is zero for all $t < 0$, so $h(t) = \left(-\tfrac14 e^{-2t} + \tfrac14 e^{2t}\right)1(t)$.
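The partial fraction expansion used here is easy to verify numerically; the following check (ours, not from the text) evaluates both sides of the expansion at a few arbitrary points:

```python
# Numeric check that 1/((s+2)(s-2)) = (-1/4)/(s+2) + (1/4)/(s-2) identically.
for s in [0.5, 1.7, -3.2, 10.0]:      # arbitrary test points, s != +-2
    lhs = 1.0 / ((s + 2.0) * (s - 2.0))
    rhs = -0.25 / (s + 2.0) + 0.25 / (s - 2.0)
    assert abs(lhs - rhs) < 1e-12
print("partial fractions agree")
```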
7.4.6. Example. Consider again the RC-network as shown in Figure 5.2 on Page 87. In Example 5.1.3 we found that $u(t)$ and $y(t)$ are related by the differential equation
$$y^{(1)}(t) + \alpha y(t) = \alpha u(t), \qquad \left(\alpha = \frac{1}{RC} > 0\right).$$
We assume the system is initially at rest, i.e., there is no charge on the capacitor and no current in the circuit before an input is applied. Then the system is determined by its transfer function
$$H(s) = \frac{\alpha}{s+\alpha}.$$
7.4.7. Example. Consider the RCL-network of Figure 7.9. The voltage across the voltage
source is taken to be the input u(t) of the system and the voltage vR (t) across the resistor as
the output of the system. We assume the system is initially at rest, i.e., there is no charge on
the capacitor, no flux in the inductor and no current in the circuit before an input is applied.
Let x1 (t) be the voltage vC (t) across the capacitor, and as second state component x2 (t) we
take the current through the inductor. Verify yourself that the system is described by
$$\dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t),$$
in which
$$A = \begin{pmatrix}-\alpha & -1/C\\ 1/L & 0\end{pmatrix}, \qquad B = \begin{pmatrix}\alpha\\ 0\end{pmatrix}, \qquad C = \begin{pmatrix}-1 & 0\end{pmatrix}, \qquad D = 1, \qquad \left(\alpha := \frac{1}{RC}\right).$$
[Figure 7.9: the RCL-network with source voltage $u(t)$, currents $i_1(t)$, $i_2(t)$, resistor $R$, and output $y(t) = v_R(t)$.]
The transfer function works out to
$$H(s) = 1 - \frac{\alpha s}{s^2 + \alpha s + \omega_0^2}, \qquad \left(\omega_0 := \frac{1}{\sqrt{LC}}\right).$$
The transfer function is a rational function so the inverse Laplace transform can be found through partial fraction expansion. For simplicity suppose that $\omega_0 = \alpha/2$, that is, that $2R = \sqrt{L/C}$. Then
$$H(s) = 1 - \frac{\alpha s}{(s+\alpha/2)^2} = 1 - \frac{\alpha}{s+\alpha/2} + \frac{\alpha^2/2}{(s+\alpha/2)^2},$$
and hence
$$h(t) = \delta(t) - \alpha\left(1 - \frac{\alpha}{2}t\right)e^{-\alpha t/2}\,1(t).$$
In the above examples we saw that the transfer function $H(s)$ is a rational function of $s$. This is trivially the case for systems described by ODEs (7.28), and it is also always the case for systems that have an input-state-output representation. In addition these transfer functions are proper, meaning that the degree of the numerator is less than or equal to the degree of the denominator of $H(s)$.
7.4.8. Theorem. Let $T$ be the initially at rest system described by the input-state-output equations (7.3). Then the transfer function is of the form
$$H(s) = \frac{P(s)}{Q(s)},$$
where $P(s)$ and $Q(s)$ are polynomials in $s$ with real coefficients and such that $\deg P \le \deg Q$.
Proof. We know that $H(s) = C(sI-A)^{-1}B + D$. We shall determine $E(s) := (sI-A)^{-1}B$ with the help of Cramer's rule. The vector $E(s)$ obeys
$$(sI - A)E(s) = B.$$
By Cramer's rule the $k$-th entry $E_k(s)$ of $E(s)$ equals
$$E_k(s) = \frac{\det M_k(s)}{\det(sI-A)}.$$
Here $M_k(s)$ is derived from $sI - A$ with its $k$-th column replaced by $B$. The denominator is the characteristic polynomial of $A$. The entries $E_k(s)$ are therefore rational functions of $s$. Then also the transfer function
$$H(s) = C(sI-A)^{-1}B + D = CE(s) + D$$
is a rational function. Finally consider the limit
$$\lim_{|s|\to\infty} H(s) = \lim_{|s|\to\infty}\left(\frac{1}{s}\,C\left(I - \frac{1}{s}A\right)^{-1}B + D\right) = D.$$
This shows that $\lim_{|s|\to\infty}H(s)$ is bounded, but that means that $H(s)$ is proper.
So suppose that the transfer function is of the form
$$H(s) = \frac{P(s)}{Q(s)}, \qquad (7.29)$$
where $\deg(P) \le \deg(Q)$. We shall assume that the polynomials $P(s)$ and $Q(s)$ do not have common zeros. This may always be achieved by removing common factors of $P(s)$ and $Q(s)$.
The impulse response may be found through partial fraction expansion of H (s). In Appendix A it is explained how a partial fraction expansion may be constructed. The general form
of the partial fraction expansion of $P(s)/Q(s)$ is
$$\frac{P(s)}{Q(s)} = A_0 + \sum_{k=1}^{M}\left(\frac{A_{k,1}}{s-s_k} + \frac{A_{k,2}}{(s-s_k)^2} + \cdots + \frac{A_{k,m_k}}{(s-s_k)^{m_k}}\right),$$
where $A_0$, $A_{k,l}$ are constants (generally complex-valued) and the $s_k$ are the zeros of $Q(s)$ with multiplicity $m_k$. Assuming we found the coefficients $A_0$ and $A_{k,l}$, the impulse response follows from Table 6.2,
$$h(t) = A_0\delta(t) + \sum_{k=1}^{M}\left(A_{k,1}e^{s_kt} + A_{k,2}\frac{t}{1!}e^{s_kt} + \cdots + A_{k,m_k}\frac{t^{m_k-1}}{(m_k-1)!}e^{s_kt}\right)1(t).$$
This has some implications for BIBO-stability. We know that a system is BIBO-stable if and only if the impulse response (after removal of the delta function component) is absolutely integrable. It may be shown that this is equivalent to the zeros $s_k$ having negative real part. The reasoning is as follows. The function $h_1(t) = h(t) - A_0\delta(t)$ consists of terms of the form
$$A_{k,l}\,\frac{t^{l-1}}{(l-1)!}\,e^{s_kt}\,1(t). \qquad (7.30)$$
Now, if all $s_k$ have negative real part, then all functions (7.30) are absolutely integrable. In that case $h_1(t)$ is a sum of absolutely integrable functions and, hence, is itself absolutely integrable. If, on the other hand, some $s_k$ have zero or positive real part, then $\lim_{t\to\infty}\frac{t^{l-1}}{(l-1)!}e^{s_kt}1(t)$ is not zero, and in such cases it may be shown that $h_1(t)$ cannot converge to zero and is not absolutely integrable. These intuitive arguments may be properly proved, but we will not do that here.
7.5.1. Definition. The poles $s_k \in \mathbb{C}$ of a rational function $H(s)$ are the zeros of its inverse $1/H(s)$.

7.5.2. Theorem. An LTI system with proper rational transfer function $H(s)$ is BIBO-stable if and only if all poles $s_k$ of $H(s)$ have negative real part, $\operatorname{Re} s_k < 0$.
7.5.3. Example.
1. The initially at rest system $y^{(1)}(t) + 2y(t) = u(t)$ is BIBO-stable because the pole $s_1 = -2$ of $H(s) = \frac{1}{s+2}$ has negative real part.
2. The initially at rest system $y^{(2)}(t) - 2y(t) = u(t)$ is not BIBO-stable since the poles of $H(s) = \frac{1}{s^2-2}$ are $s_1 = \sqrt{2}$, $s_2 = -\sqrt{2}$, and one of them has nonnegative real part: $\operatorname{Re} s_1 \ge 0$.
3. The initially at rest system $y^{(1)}(t) = u(t)$ is not BIBO-stable because the pole $s_1 = 0$ of $H(s) = \frac{1}{s}$ has nonnegative real part, $\operatorname{Re} s_1 \ge 0$. Indeed, this is the integrator system: if $u(t) = 1(t)$ then $y(t) = t\,1(t)$, which is unbounded.
The RC-network of Examples 5.1.3 and 7.4.6 has transfer function $H(s) = \alpha/(s+\alpha)$, where $\alpha = 1/(RC) > 0$. The transfer function has one pole at $s = -\alpha$, which has negative real part; hence, by Theorem 7.5.2, the system is BIBO-stable.
Here $H(j\omega_0)$ is the Laplace transform of the impulse response of the system evaluated at $s = j\omega_0$, and the function $G(\omega) = H(j\omega)$ was called the frequency response of the system; it equals the Fourier transform of the impulse response.

Harmonic inputs $u(t) = e^{j\omega_0t}$ are somewhat artificial, since no input in practice will ever have been active since the beginning of time. It is more appropriate to consider harmonic inputs switched on at $t = 0$, i.e., $u(t) = e^{j\omega_0t}1(t)$.
Suppose the system is BIBO-stable. The response $y(t)$ to the input $u(t) = e^{j\omega_0t}1(t)$ is
$$y(t) = (h*u)(t) = \int_{-\infty}^{\infty} h(\tau)e^{j\omega_0(t-\tau)}1(t-\tau)\,d\tau = \int_{-\infty}^{t} h(\tau)e^{j\omega_0(t-\tau)}\,d\tau$$
$$= \left(\int_{-\infty}^{\infty} h(\tau)e^{-j\omega_0\tau}\,d\tau\right)e^{j\omega_0t} - \int_{t}^{\infty} h(\tau)e^{j\omega_0(t-\tau)}\,d\tau = H(j\omega_0)e^{j\omega_0t} + y_{\mathrm{tr}}(t),$$
where
$$y_{\mathrm{tr}}(t) = -\int_{t}^{\infty} h(\tau)e^{j\omega_0(t-\tau)}\,d\tau \to 0 \qquad \text{as } t \to \infty.$$
We conclude that in BIBO-stable systems, for large values of $t$, the response to $e^{j\omega_0t}1(t)$ looks like $H(j\omega_0)e^{j\omega_0t}$, and in the limit $t \to \infty$ is indistinguishable from $H(j\omega_0)e^{j\omega_0t}$. For this reason $y_{\mathrm{st}}(t) = H(j\omega_0)e^{j\omega_0t}$ is called the steady-state response. The difference between $y(t)$ and the steady-state response is $y_{\mathrm{tr}}(t)$, and it is referred to as the transient response. The transient response tends to zero as $t \to \infty$.
Any $T$-periodic input $u(t)$ switched on at $t = 0$ can be written as $u(t) = f(t)1(t)$, in which $f(t)$ is $T$-periodic over the whole time axis $\mathbb{R}$. Let $f_k$ denote the line spectrum of $f(t)$. Then
$$u(t) = \sum_{k=-\infty}^{\infty} f_ke^{jk\omega_0t}\,1(t).$$
We determine the steady-state response to this input for a given BIBO-stable system $T$. We assume that the infinite superposition principle applies,
$$y(t) = \sum_{k=-\infty}^{\infty} f_k\,T\{e^{jk\omega_0t}1(t)\},$$
so that the steady-state response is
$$y_{\mathrm{st}}(t) = \sum_{k=-\infty}^{\infty} f_kH(jk\omega_0)e^{jk\omega_0t}.$$
[Figure 7.10: the RCL-network with source voltage $u(t)$ and currents $i_1(t)$, $i_2(t)$, $i_3(t)$. Figure 7.11: its impedance equivalent with Laplace-domain source $U(s)$, currents $I_1(s)$, $I_2(s)$, $I_3(s)$, and impedances $R$, $Ls$ and $1/(Cs)$.]
In steady state, therefore, the response to a $T$-periodic input is again $T$-periodic, and the line spectrum of the steady-state response is $f_kH(jk\omega_0)$.
7.5.4. Example. Consider the RCL-network of Figure 7.10. We shall assume that the network is initially at rest. At $t = 0$ the voltage source is switched on, and the resulting current $i_3(t)$ through the capacitor $C$ is the output of the system.
To determine the transfer function $H(s)$ it is not necessary to write down an input-state-output representation of the system or some high-order differential equation that relates input and output. We may instead apply the Laplace transform directly to Kirchhoff's laws and the current-voltage relations of the various components. To this end we form an alternative network where each time-domain signal is replaced with its Laplace transform, and where each component (resistor, capacitor, etc.) is replaced with its impedance, see Figure 7.11. The impedance $Z(s)$ of a component is the ratio of the voltage across the component, $V(s)$, and the current through the component, $I(s)$. For resistors $R$, inductors $L$ and capacitors $C$ the respective impedances $Z_R(s)$, $Z_L(s)$ and $Z_C(s)$ are
$$\text{resistor: } Z_R(s) = \frac{V_R(s)}{I_R(s)} = \frac{RI_R(s)}{I_R(s)} = R,$$
$$\text{inductor: } Z_L(s) = \frac{V_L(s)}{I_L(s)} = \left\{v_L(t) = L\frac{d}{dt}i_L(t)\right\} = \frac{LsI_L(s)}{I_L(s)} = Ls,$$
$$\text{capacitor: } Z_C(s) = \frac{V_C(s)}{I_C(s)} = \left\{C\frac{d}{dt}v_C(t) = i_C(t)\right\} = \frac{V_C(s)}{CsV_C(s)} = \frac{1}{Cs}.$$
The impedance equivalent network of the network of Figure 7.10 is shown in Figure 7.11. The
advantage of working with impedances is that then all components act like resistors, that is, the
voltage over a component is simply the current through it multiplied by something (namely the
impedance).
The remaining equations that are needed to fully determine the system are the network equations: the sum of the voltages in each loop is zero, and the sum of the currents at each node is zero. Solving these for $I_3(s)$ yields the transfer function
$$H(s) = \frac{LCs^2}{2RLCs^2 + (R^2C + L)s + R},$$
which for the component values used in this example equals
$$H(s) = \frac{s^2}{s^2 + 2s + 2} = \frac{s^2}{(s+1)^2 + 1}.$$
The poles of $H(s)$ are the zeros of $(s+1)^2 + 1$, i.e., the poles are $s_{1,2} = -1 \pm j$. Note that the poles have negative real part; hence, the system is BIBO-stable.
We next calculate the response to a constant voltage $u(t) = 1(t)$ switched on at $t = 0$. The response $y(t) = i_3(t)$ in this case has Laplace transform
$$I_3(s) = H(s)U(s) = H(s)\frac{1}{s} = \frac{s}{s^2+2s+2} = \frac{s}{(s+1)^2+1} = \frac{s+1}{(s+1)^2+1} - \frac{1}{(s+1)^2+1}.$$
The current $i_3(t)$ follows from Table 6.2,
$$i_3(t) = e^{-t}(\cos t - \sin t)\,1(t).$$
Remark: the impulse response can similarly be obtained from a partial fraction expansion of $H(s)$. However, since we know the step response already, there is an easier way to find the impulse response $h(t)$: it is simply the derivative of the step response,
$$h(t) = \frac{d}{dt}\,\mathcal{L}^{-1}\!\left\{\frac{H(s)}{s}\right\}(t) = \frac{di_3(t)}{dt} = \frac{d}{dt}\left[e^{-t}(\cos t - \sin t)\,1(t)\right]$$
$$= -e^{-t}(\cos t - \sin t)\,1(t) + e^{-t}(-\sin t - \cos t)\,1(t) + e^{-t}(\cos t - \sin t)\,\delta(t)$$
$$= \delta(t) - 2e^{-t}\cos(t)\,1(t).$$
Next consider the input
$$u(t) = \begin{cases} 0 & \text{if } t < 0,\\ t & \text{if } 0 \le t \le 1,\\ 1 & \text{if } t > 1. \end{cases}$$
Its Laplace transform is $U(s) = \frac{1-e^{-s}}{s^2}$, so the response has Laplace transform
$$Y(s) = H(s)U(s) = \frac{1-e^{-s}}{s^2+2s+2}.$$
The inverse Laplace transform of $\frac{1}{s^2+2s+2} = \frac{1}{(s+1)^2+1}$ is $e^{-t}\sin(t)\,1(t)$; hence,
$$y(t) = e^{-t}\sin(t)\,1(t) - e^{-(t-1)}\sin(t-1)\,1(t-1).$$
Finally, consider the $T$-periodic square-wave input
$$u(t) = \begin{cases} 0 & \text{if } 0 \le t < T/4,\\ 1 & \text{if } T/4 \le t < 3T/4,\\ 0 & \text{if } 3T/4 \le t < T, \end{cases}$$
and periodically extended elsewhere. As is customary for causal periodic inputs, we write u(t)
as u(t) = f (t) 1(t), where f (t) is a T -periodic signal on the whole time axis R. The line
spectrum fk of f (t) follows from
$$f_k = \frac{1}{T}\int_{T/4}^{3T/4} e^{-jk\omega_0t}\,dt = \frac{1}{jk\omega_0T}\left(e^{-jk\omega_0T/4} - e^{-3jk\omega_0T/4}\right)$$
$$= \frac{1}{jk\omega_0T}\left(e^{-jk\pi/2} - e^{-3jk\pi/2}\right) = \frac{e^{-jk\pi/2}}{jk\omega_0T}\left(1 - (-1)^k\right) \qquad (k \ne 0),$$
and, for $k = 0$,
$$f_0 = \frac{1}{T}\int_{T/4}^{3T/4} 1\,dt = \frac{1}{2}.$$
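The closed form for $f_k$ can be cross-checked by approximating the defining integral numerically. The sketch below (our own discretization, with the illustrative choice $T = 2$) compares a midpoint Riemann sum with the formula derived above:

```python
import cmath, math

# Midpoint-rule approximation of f_k = (1/T) int_{T/4}^{3T/4} e^{-j k w0 t} dt,
# compared with e^{-j k pi/2} (1 - (-1)^k) / (j k w0 T).  T = 2 is illustrative.
T = 2.0
w0 = 2.0 * math.pi / T
N = 20000
dt = (T / 2) / N                   # the integral runs over [T/4, 3T/4]
for k in [1, 2, 3, 4, 5]:
    s = sum(cmath.exp(-1j * k * w0 * (T / 4 + (m + 0.5) * dt)) * dt
            for m in range(N)) / T
    formula = cmath.exp(-1j * k * math.pi / 2) * (1 - (-1) ** k) / (1j * k * w0 * T)
    assert abs(s - formula) < 1e-6
print("line spectrum formula confirmed")
```

Note that the check also confirms that $f_k = 0$ for even $k \ne 0$.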
Note that $f_k = 0$ for even $k \ne 0$, so only the odd-indexed coefficients $f_{2n+1}$ contribute. The line spectrum of the steady-state response $y_{\mathrm{st}}(t)$ is $y_k = f_kH(jk\omega_0)$, so we end up with the following expression for the steady state,
$$y_{\mathrm{st}}(t) = \sum_{k=-\infty}^{\infty} f_kH(jk\omega_0)e^{jk\omega_0t} = \underbrace{\tfrac12 H(0)}_{0} + \sum_{n=-\infty}^{\infty} \frac{(-1)^{n-1}}{(2n+1)\pi}\,H(j(2n+1)\omega_0)\,e^{j(2n+1)\omega_0t}.$$
For computation of $y_{\mathrm{st}}(t)$ a computer is needed. Figure 7.12(a) shows the actual response $y(t)$ to the square wave $u(t)$, found through simulation. Part (b) shows a plot of the partial-sum approximation of the steady state, $\sum_{n=-20}^{20} f_nH(jn\omega_0)e^{jn\omega_0t}$. Note the transient behavior of $y(t)$ in Part (a).
[Figure 7.12: (a) the simulated response $y(t)$ to the square wave; (b) the partial-sum approximation $\sum_{n=-20}^{20} f_nH(jn\omega_0)e^{jn\omega_0t}$ of the steady state.]
$$H_{\mathrm{ideal}}(j\omega) = \begin{cases} 1 & \text{if } |\omega| < 1,\\ 0 & \text{if } |\omega| > 1, \end{cases}$$
and it is approximated by filters $H_n$ with amplitude characteristic
$$|H_n(j\omega)|^2 = \frac{1}{1+\omega^{2n}}. \qquad (7.32)$$
[Figure 7.13: the amplitude characteristics $|H_n(j\omega)|$ for several values of $n$.]

Since the coefficients of $H_n$ are real, $|H_n(j\omega)|^2 = H_n(j\omega)H_n(-j\omega)$, and substituting $\omega^2 = -s^2$ for $s = j\omega$ turns (7.32) into
$$H_n(s)H_n(-s) = \frac{1}{1+(-s^2)^n}. \qquad (7.33)$$
This equation we shall solve for $H_n(s)$. For BIBO-stability we need $H_n(s)$ to have its poles to the left of the imaginary axis. To this end we factor the right-hand side of (7.33) as
$$\frac{1}{1+(-s^2)^n} = \prod_{k=1}^{2n}\frac{1}{s-s_k},$$
where the $s_k$ follow from
$$(-s_k^2)^n = -1 = e^{j\pi(2k-1)} \quad\Longrightarrow\quad -s_k^2 = e^{j\pi(2k-1)/n}, \qquad k = 1,2,\ldots,n,$$
so that
$$s_k = j\,e^{j\pi(k-1/2)/n}, \qquad k = 1,2,\ldots,2n.$$
We see that the poles $s_k$ lie evenly distributed on the unit circle in the complex plane, see Figure 7.14. For BIBO-stability we need that $H_n(s)$ has poles only to the left of the imaginary axis. It will be no surprise, then, that we choose $H_n(s)$ to be the product of factors $1/(s-s_k)$ for those $s_k$ that lie to the left of the imaginary axis,
$$H_n(s) = \prod_{k=1}^{n}\frac{1}{s-s_k}.$$
[Figure 7.14: the $2n$ poles $s_1, \ldots, s_{12}$ (here $n = 6$) evenly distributed over the unit circle.]
The first few of these filters are
$$H_1(s) = \frac{1}{s+1}, \qquad H_2(s) = \frac{1}{s^2+\sqrt{2}\,s+1}, \qquad H_3(s) = \frac{1}{(s+1)(s^2+s+1)},$$
$$H_4(s) = \frac{1}{\left(s^2+\sqrt{2+\sqrt 2}\,s+1\right)\left(s^2+\sqrt{2-\sqrt 2}\,s+1\right)}.$$
The filter with transfer function $H_n(s)$ obtained this way is known as the $n$-th order Butterworth filter, and since $H_n(s)$ is rational it means that Butterworth filters are described by a differential equation. For example, the second order Butterworth filter is described by
$$y^{(2)}(t) + \sqrt{2}\,y^{(1)}(t) + y(t) = u(t).$$
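The pole formula can be checked by multiplying out the left-half-plane factors. The sketch below (our own) computes the poles $s_k = je^{j\pi(k-1/2)/n}$ for $n = 2$ and confirms that $(s-s_1)(s-s_2)$ has the coefficients of $s^2 + \sqrt{2}\,s + 1$:

```python
import cmath, math

# Left-half-plane Butterworth poles for n = 2, and the resulting denominator.
n = 2
poles = [1j * cmath.exp(1j * math.pi * (k - 0.5) / n) for k in range(1, n + 1)]
assert all(p.real < 0 for p in poles)   # only stable poles are kept

# (s - s1)(s - s2) = s^2 - (s1 + s2) s + s1 s2
b = -(poles[0] + poles[1])
c = poles[0] * poles[1]
assert abs(b - math.sqrt(2)) < 1e-12
assert abs(c - 1) < 1e-12
print("H2 denominator: s^2 + %.6f s + %.6f" % (b.real, c.real))
```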
7.7 Problems
7.1 Construct the simulation diagrams and then determine the input-state-output representations of
(a) y (1) (t) + y(t) = 2u(t).
(b) y (1) (t) + y(t) = u (1) (t) + 2u(t).
(b) Suppose we are given the initial conditions x(2) = 10 and that u(t) = et for
t > 2. Find x(t) for t > 2.
7.3 Determine $e^{At}$ for $A = \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}$ and verify that $\frac{d}{dt}e^{At} = Ae^{At}$.
7.4 We are given an initially at rest system described by
$$\dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t),$$
in which
$$A = \begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}, \qquad B = \begin{pmatrix}0\\ 1\end{pmatrix}, \qquad C = \begin{pmatrix}1 & 1\end{pmatrix}, \qquad D = 1.$$
1 2 0
A = 1 0 0 ,
0 1 0
B = 0 ,
0
C= 1 0 0 ,
s2
.
s 2 + 2s + 5
D = 0.
$$\frac{L}{R}\,y^{(1)}(t) + y(t) = u(t).$$
$$y^{(4)}(t) + y(t) = u(t), \qquad t \in \mathbb{R}. \qquad (7.34)$$
(7.35)
(f) Determine the output y(t) for t > 0 with initial conditions y(0) = 0, y (0) = 1.
7.9 Consider the continuous-time system described by
y (2) (t) + 4y (1) (t) = u (1) (t) + 4u(t).
(7.36)
(7.37)
(e) Determine y(t) for t > 0 with initial conditions y(0) = 1 and y (0) = 4.
Consider the initially at rest system described by
$$y^{(2)}(t) + 2y^{(1)}(t) + 4y(t) = u^{(1)}(t) + 2u(t). \qquad (7.38)$$
In Matlab:
%  Compute the poles of the system
%  plot the impulse response h(t)
%  plot the step response (h*1)(t)
%  find an input-state-output repr.
t=0:0.1:20;
noise=rand(1,length(t))/5-1/10;
plot(t,noise);
x=sin(t)+noise;
y=lsim(num,den,x,t);
plot(t,y,t,x);
Explain what these commands do. What can you say about possible noise reduction by the system? Finally type in and interpret
[ampl,myphase,w]=bode(num,den); % Determine amplitude and
% phase of H(j*w) at a suitable vector w
plot(w,ampl);
% plot |H(jw)| against w
plot(w,myphase);
% plot arg H(jw) against w
7.14 Reproduce the two Matlab plots of Figure 7.12. (You probably want to have a look at the
Matlab example on page 124.)
7.15 Simulate the initially at rest system T2 described by
y (2) (t) + y (1) (t) + y(t) = u (1) (t) + u(t),
over the time interval [0, 10] for the inputs
(a) u(t) = 1(t),
(b) u(t) = (1 et ) 1(t).
Simulate the step response of the initially at rest system described by
y (3) (t) + 2y (2) (t) + 2y (1) (t) + y(t) = u (1) (t) + u(t).
Is this the same as the response found in part (b)? (Explain.) What is its steady-state response?
8
The z-transform and Discrete-time Systems
The Fourier and Laplace transforms are foremost of use in the analysis of continuous-time
signals and systems. In this chapter we treat a transform that plays a similar role in the analysis
of discrete-time signals and systems. The transform is the z-transform. The z-transform assigns
to a discrete-time signal f [n] a function F(z), defined in the complex z-plane as the two-sided
series
$$F(z) = \sum_{n=-\infty}^{\infty} f[n]z^{-n}.$$
The treatment of the z-transform follows the same lines as that of the other transforms introduced in this course. First we define the z-transform and comment on its convergence properties (Section 8.1). Next, a number of important properties and rules for the z-transform are reviewed (Section 8.2). In Section 8.4 we treat the convolution product in a form that suits the z-transform. The one-sided z-transform, analogous to the one-sided Laplace transform, is the topic of Section 8.5. Finally, in Section 8.6 the use of the z-transform in the analysis of LTI discrete-time systems is discussed.
Like with the Laplace transform, we shall not develop a general inverse z-transform. We
are only concerned with the reconstruction of a discrete-time signal whose given z-transform
is a rational function. This is the most useful class of z-transforms, and no general inverse
z-transform theory is needed to find the inverse z-transform of such functions. With the help of
partial fraction expansion the inverse z-transform of a rational function can be determined.
The two discrete-time signals most fundamental in signal theory are the discrete delta function $\delta[n]$, also known as the unit pulse, and the discrete unit step $1[n]$. The unit pulse $\delta[n]$ is defined as
$$\delta[n] = \begin{cases} 1 & \text{if } n = 0,\\ 0 & \text{if } n \ne 0, \end{cases}$$
and the discrete unit step as
$$1[n] = \begin{cases} 1 & \text{if } n \ge 0,\\ 0 & \text{if } n < 0. \end{cases}$$
With sampling period $T > 0$ we can associate with a discrete-time signal $f[n]$ the continuous-time signal
$$f(t)\sum_{n=-\infty}^{\infty}\delta(t-nT) = \sum_{n=-\infty}^{\infty}f(t)\delta(t-nT) = \sum_{n=-\infty}^{\infty}f[n]\delta(t-nT), \qquad (8.1)$$
and this is a continuous-time signal uniquely determined by the samples $f[n]$, see Figure 8.1. This way we can interpret every discrete-time signal as a continuous-time signal. For the frequency spectrum of (8.1) we find that
$$F(\omega) = \sum_{n=-\infty}^{\infty}f[n]\,\mathcal{F}\{\delta(t-nT)\} = \sum_{n=-\infty}^{\infty}f[n]e^{-j\omega nT}.$$
The frequency spectrum apparently is a Fourier series in the frequency domain with period $2\pi/T$, that is, with fundamental frequency $T$. If we now take $z = e^{j\omega T}$, then we arrive at $\sum_{n=-\infty}^{\infty}f[n]z^{-n}$. Remember that the Laplace transform was introduced as the Fourier transform for $s = j\omega$ and that we subsequently allowed arbitrary $s$, not just imaginary $s = j\omega$. We do the same thing here. Even though $z$ is introduced as $z = e^{j\omega T}$, we shall from now on allow arbitrary complex-valued $z$.
8.1.1. Definition. Let $f[n]$ be a discrete-time signal. The z-transform $F(z)$ of $f[n]$ is defined as
$$F(z) = \sum_{n=-\infty}^{\infty} f[n]z^{-n},$$
for those $z \in \mathbb{C}$ for which the series converges.
You may recognize in the z-transform a Laurent series, that is to say, a two-sided power series in which besides the positive powers also the negative powers of the variable $z$ occur. The convergence properties of these series are similar to those of power series. To analyze the convergence properties, we split the two-sided z-transform into two parts, an anti-causal part $\sum_{n=-\infty}^{-1}f[n]z^{-n}$ and a causal part $\sum_{n=0}^{\infty}f[n]z^{-n}$:
$$F(z) = \underbrace{\cdots + f[-3]z^3 + f[-2]z^2 + f[-1]z}_{\text{anti-causal part, } F_-(z)} + \underbrace{f[0] + f[1]z^{-1} + f[2]z^{-2} + \cdots}_{\text{causal part, } F_+(z)}.$$
Both the anti-causal part and the causal part are infinite sums, both of which have to exist in order for $F(z)$ to be defined. We shall denote the anti-causal part of the z-transform by $F_-(z)$, and the causal part by $F_+(z)$. So in case the z-transform converges, we have that
$$F(z) = F_-(z) + F_+(z). \qquad (8.2)$$
8.1.2. Example. Let
$$f[n] = \begin{cases} 1 & \text{if } n < 0,\\[2pt] \dfrac{1}{3^n} & \text{if } n \ge 0. \end{cases}$$
Then
$$F_-(z) = \sum_{n=-\infty}^{-1} z^{-n} = \sum_{n=1}^{\infty} z^n = \frac{z}{1-z},$$
and
$$F_+(z) = \sum_{n=0}^{\infty} \frac{1}{3^n}z^{-n} = \sum_{n=0}^{\infty}\left(\frac{1}{3z}\right)^n = \{\text{if } 1/|3z| < 1\} = \frac{1}{1-\frac{1}{3z}} = \frac{3z}{3z-1}.$$
The anti-causal part $F_-(z)$ exists for all $|z| < 1$, and the causal part $F_+(z)$ exists for all $|z| > \frac13$. The z-transform then exists whenever $1/3 < |z| < 1$. The set of such values $z$ forms a ring-shaped region in the complex plane, see Figure 8.2.
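The two closed forms can be checked numerically at a point inside the ring of convergence; the choice $z = 0.6$ below is our own illustrative value:

```python
# Partial sums of both halves of the two-sided z-transform of Example 8.1.2
# at z = 0.6, which lies inside the ring 1/3 < |z| < 1.
z = 0.6
anti_causal = sum(z ** k for k in range(1, 400))            # sum_{k>=1} z^k
causal = sum((1.0 / (3.0 * z)) ** n for n in range(400))    # sum_{n>=0} (3z)^{-n}
assert abs(anti_causal - z / (1.0 - z)) < 1e-9              # z/(1-z)
assert abs(causal - 3.0 * z / (3.0 * z - 1.0)) < 1e-9       # 3z/(3z-1)
print(anti_causal + causal)
```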
[Figure 8.2: a ring-shaped region $R_1 < |z| < R_2$ in the z-plane.]
In the above example the set of values $z$ where the z-transform is defined turned out to be a ring-shaped region. It may be shown that this is always the case. In other words, there are radii $R_1$ and $R_2$ such that $F(z)$ exists in the interior of the ring
$$\{\,z \in \mathbb{C} : R_1 < |z| < R_2\,\},$$
while it does not exist in the interior of its complement $\{\,z : |z| < R_1 \text{ or } |z| > R_2\,\}$. We call the ring $\{z \in \mathbb{C} : R_1 < |z| < R_2\}$ the region of convergence of $F(z)$. The z-transform converges on this region of convergence, and it may also converge for some points on the boundary of this region. If $R_1 = 0$ and $R_2 > 0$, then the region of convergence is the disc with radius $R_2$, possibly with the point $z = 0$ removed. If $R_2 = \infty$, then the region of convergence is the complement of the disc with radius $R_1$, see Figure 8.3. An example of such a case is when $f[n]$ is initially at rest. This is an important case, and one we shall prove.

8.1.3. Definition. A signal $f[n]$ is initially at rest if $f[n] = 0$ for all $n < n_0$ for a certain $n_0$ (see Figure 8.4).

8.1.4. Lemma. Let $f[n]$ be an initially at rest signal. The z-transform $F(z)$ of $f[n]$ exists either for all $z \in \mathbb{C}$, or there is an $R_1 \ge 0$ such that $F(z)$ exists for all $|z| > R_1$ and $F(z)$ does not exist for any $|z| < R_1$. (See Figure 8.3.)
Moreover, on the region of convergence $|z| > R_1$ the z-transform $F(z) = \sum_{n=n_0}^{\infty}f[n]z^{-n}$ converges absolutely.

Proof. Let $R_1 := \inf\{|z_0| : F(z_0)\text{ exists}\}$. By definition of $R_1$, the z-transform $F(z)$ does not exist for any $|z| < R_1$. Suppose now that $|z| > R_1$. Then there is a $z_0$ for which $R_1 \le |z_0| < |z|$ and such that $F(z_0)$ exists. Since $F(z_0) = \sum_{n=n_0}^{\infty}f[n]z_0^{-n}$ exists, we must have that $\lim_{n\to\infty}|f[n]z_0^{-n}| = 0$. In particular this shows that $|f[n]z_0^{-n}|$ is bounded by some $M$,
so that
$$\sum_{n=n_0}^{\infty}\left|f[n]z^{-n}\right| = \sum_{n=n_0}^{\infty}\left|f[n]z_0^{-n}\right|\left|\frac{z_0}{z}\right|^{n} \le M\sum_{n=n_0}^{\infty}\left|\frac{z_0}{z}\right|^{n} = \{\text{since } |z_0/z| < 1\} = M\,\frac{|z_0/z|^{n_0}}{1-|z_0/z|} < \infty.$$

[Figure 8.3: the region of convergence $|z| > R_1$ in the z-plane. Figure 8.4: an initially at rest signal $f[n]$, zero for all $n < n_0$.]
$$f[n] \ \longmapsto\ F(z), \qquad (z \in V).$$
We say that $F(z)$ describes $f[n]$ in the z-domain. The z-domain is the complex plane.
8.1.5. Example. Let
$$f[n] = \begin{cases} 2^{n} & \text{if } n < 0,\\ 0 & \text{if } n = 0,\\ 1/n & \text{if } n > 0. \end{cases}$$
The anti-causal part of the z-transform is
$$\sum_{n=-\infty}^{-1} 2^n z^{-n} = \{k = -n\} = \sum_{k=1}^{\infty}(z/2)^k.$$
This is a geometric series with ratio $z/2$, hence it converges for $|z| < 2$ and diverges for $|z| \ge 2$. The causal part is the series
$$\sum_{n=1}^{\infty} \frac{z^{-n}}{n}.$$
This can be seen as a power series in the variable $1/z$. Its convergence radius is 1 (see Example 1.6.5). Therefore the causal part converges for $|z| > 1$ and diverges for $|z| < 1$. The region of convergence of the z-transform of $f[n]$ is the ring-shaped region $1 < |z| < 2$.
8.1.6. Example. Let f[n] be the causal signal f[n] = a^n 1[n], with a ∈ C. This is an initially-
at-rest signal. The anti-causal part of f[n] is zero, so the z-transform F_-(z) of the anti-causal
part is identically zero as well and is defined for every z ∈ C. The z-transform is a geometric
series,

  F(z) = Σ_{n=-∞}^∞ a^n 1[n] z^{-n} = Σ_{n=0}^∞ a^n z^{-n} = Σ_{n=0}^∞ (a/z)^n.

The ratio in this geometric series is a/z, so for convergence we need that |z| > |a|. The region
of convergence is the complement of the disc with radius |a|. On the region of convergence the
geometric series converges with limit 1/(1 - a/z) = z/(z - a). We found the transform pair

  a^n 1[n] ↔ z/(z - a),  (|z| > |a|).        (8.3)

8.1.7. Example. Let f[n] be the anti-causal signal f[n] = -a^n 1[-n - 1], that is,

  f[n] = 0     if n ≥ 0,
         -a^n  if n < 0.
Here a is some nonzero complex number. The causal part of the z-transform consists of zero
elements only and hence is the zero function, well defined for any z ∈ C. For the anti-causal
part we find that

  F_-(z) = Σ_{n=-∞}^{-1} (-a^n) z^{-n} = -Σ_{n=1}^∞ (z/a)^n.

This is a geometric series with ratio z/a. It converges if and only if |z| < |a|, in which case its
limit is -(z/a)/(1 - z/a) = z/(z - a). The region of convergence is the disc with radius |a|,

  -a^n 1[-n - 1] ↔ z/(z - a),  (|z| < |a|).
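The geometric-series pair (8.3) can be checked numerically by truncating the series. A minimal sketch; the values a = 0.8 and z = 2 are illustrative choices satisfying |z| > |a|:

```python
# Numerical check of the pair a^n 1[n] <-> z/(z - a) for |z| > |a|.
def z_transform_partial(f, z, n_max):
    """Partial sum sum_{n=0}^{n_max} f(n) z^(-n) of a causal signal."""
    return sum(f(n) * z ** (-n) for n in range(n_max + 1))

a = 0.8
z = 2.0
approx = z_transform_partial(lambda n: a ** n, z, 200)
exact = z / (z - a)
print(abs(approx - exact))  # tiny truncation error
```

Outside the region of convergence (for instance z = 0.5 here) the partial sums grow without bound instead of settling.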
8.1.8. Example. The z-transform of the discrete-time unit pulse δ[n] is easily obtained. In-
deed, since δ[n] is zero except for n = 0 where it is 1, we find immediately that

  Σ_{n=-∞}^∞ δ[n] z^{-n} = 1.

Hence

  δ[n] ↔ 1,  (z ∈ C).        (8.4)
It is important to mention with a z-transform its region of convergence. This is evident from
Examples 8.1.6 and 8.1.7. In both examples the z-transform was found to be z/(z - a), even
though the discrete-time signals f[n] in these examples were anything but the same. From this
it is clear that for a given F(z) we cannot determine the inverse z-transform if we do not know
its region of convergence. For initially at rest signals, however, we know that the region of
convergence is necessarily the complement of a disc with a certain radius. In that case, as we
shall see, knowledge of F(z) is enough to find f[n]; no specific knowledge of the region of
convergence is needed. We return to this problem in Section 8.3.
8.2 Properties of the z-transform

Below we list some standard rules. Throughout we assume that f[n] ↔ F(z).

Conjugation:

  Z{f*[n]} = Σ_{n=-∞}^∞ f*[n] z^{-n} = (Σ_{n=-∞}^∞ f[n] (z*)^{-n})* = (F(z*))*.

Time-shift:

  Z{f[n - k]} = Σ_{n=-∞}^∞ f[n - k] z^{-n} = {m = n - k} = Σ_{m=-∞}^∞ f[m] z^{-m-k} = z^{-k} F(z).

Scaling in the z-domain:

  Z{a^n f[n]} = Σ_{n=-∞}^∞ a^n f[n] z^{-n} = Σ_{n=-∞}^∞ f[n] (z/a)^{-n} = F(z/a).

Differentiation:

  Z{n f[n]} = Σ_{n=-∞}^∞ n f[n] z^{-n} = Σ_{n=-∞}^∞ f[n] (-z) d/dz(z^{-n}) = -z F'(z).

In the last equality we interchanged summation and differentiation. It may be shown that
that is allowed for all z in the region of convergence, which has to do with the fact that
on the region of convergence the z-transform converges absolutely.
8.2.1. Example.
a) Any discrete-time signal f[n] can be expressed as a sum of shifted, weighted discrete-
time delta pulses as

  f[n] = Σ_{k=-∞}^∞ f[k] δ[n - k].        (8.5)

Convergence of this series is no problem. Indeed, for any n all terms f[k] δ[n - k],
(k ∈ Z) are zero except when k = n, in which case f[k] δ[n - k] = f[n] δ[0] = f[n].
By the time-shift rule applied to (8.4) we have

  δ[n - k] ↔ z^{-k}.

If we take in (8.5) the z-transform of both left and right-hand side, then we re-derive the
z-transform:

  Z{f[n]} = Z{Σ_{k=-∞}^∞ f[k] δ[n - k]} = Σ_{k=-∞}^∞ f[k] Z{δ[n - k]} = Σ_{k=-∞}^∞ f[k] z^{-k}.
b) The z-transform of the discrete-time unit step 1[n] equals z/(z - 1) with region of con-
vergence |z| > 1 (see Example 8.1.6 with a = 1). By the differentiation rule, we have
that

  n 1[n] ↔ -z d/dz (z/(z - 1)) = z/(z - 1)^2,  (|z| > 1).

By the time-shift rule, then, (n - 1) 1[n - 1] ↔ 1/(z - 1)^2, and another application of the
differentiation rule gives n(n - 1) 1[n - 1] ↔ -z d/dz (1/(z - 1)^2) = 2z/(z - 1)^3.
Note that n(n - 1) 1[n - 1] = n(n - 1) 1[n], so n(n - 1) 1[n] ↔ 2z/(z - 1)^3. We
may repeat this process any number of times, say k times, and this will bring us to

  n(n - 1) ⋯ (n - k + 1) 1[n] ↔ k! z/(z - 1)^{k+1},  (|z| > 1).
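The pair n 1[n] ↔ z/(z - 1)^2 can be checked numerically by a truncated sum; z = 3 below is an illustrative point inside the region of convergence |z| > 1:

```python
# Numerical check of n 1[n] <-> z/(z - 1)^2 on |z| > 1.
def partial_sum(coeffs, z):
    """Truncated z-transform sum_n coeffs[n] z^(-n)."""
    return sum(c * z ** (-n) for n, c in enumerate(coeffs))

z = 3.0
approx = partial_sum([n for n in range(400)], z)  # f[n] = n for n >= 0
exact = z / (z - 1) ** 2                          # = 3/4 for z = 3
assert abs(approx - exact) < 1e-9
```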
Table 8.1: Standard z-transform pairs.

  f[n]                       F(z)                     Region of convergence
  δ[n - k]                   z^{-k}                   z ≠ 0 (or every z if k ≤ 0)
  a^n 1[n]                   z/(z - a)                |z| > |a|
  (n over k) a^n 1[n]        a^k z/(z - a)^{k+1}      |z| > |a|
  a^{n-1} 1[n - 1]           1/(z - a)                |z| > |a|

Many z-transforms encountered in practice are rational, that is, of the form

  F(z) = (p_m z^m + p_{m-1} z^{m-1} + ⋯ + p_1 z + p_0) / (q_l z^l + q_{l-1} z^{l-1} + ⋯ + q_1 z + q_0).   (8.6)
The numerator and denominator are polynomials in the complex variable z. The numerator
polynomial we denote by P(z) and the denominator polynomial we denote by Q(z),

  F(z) = P(z)/Q(z).

Throughout we assume that P(z) and Q(z) have no common zeros. The zeros of F(z) are the
zeros of the numerator P(z), and the poles of F(z) are the zeros of the denominator Q(z).
Since we assume that P(z) and Q(z) have no common zeros, the poles are simply the zeros
of Q(z). The poles z_k are those numbers z where P(z)/Q(z) is not defined. Now, on the
region of convergence the z-transform F(z) = P(z)/Q(z) is by definition well-defined, so
we can conclude that the region of convergence does not contain any pole zk of F(z). If for
example the zk in Figure 8.5 are the poles of F(z), then the largest possible convergence region
is the complement of the disc with center at z = 0 and radius maxk |z k |, such as indicated
in Figure 8.5. Interestingly, this largest possible region of convergence is the actual region of
convergence if F(z) is the z-transform of an initially at rest signal.

[Figure 8.5: poles z_1, . . . , z_5 in the z-plane; the largest possible region of convergence is
the complement of the disc with radius max_k |z_k|.]

That the z-transform determines an initially at rest signal uniquely can be seen as follows.
Suppose that f[n] and g[n] are initially at rest signals whose z-transforms both equal F(z),
and suppose that h[n] := f[n] - g[n] is not identically zero. Let n_0 be the first index for
which h[n_0] ≠ 0, and consider

  Σ_{n=n_0}^∞ h[n] z^{-n}.
If z is large enough then z is in the convergence regions of both f[n] and g[n], so then the
left-hand side of the above equation is defined and then equal to F(z) - F(z) = 0. This leads
to a contradiction, since the right-hand side is not identically zero for large z, because

  lim_{z→∞} z^{n_0} Σ_{n=n_0}^∞ h[n] z^{-n} = lim_{z→∞} (h[n_0] + h[n_0 + 1] 1/z + h[n_0 + 2] 1/z^2 + ⋯) = h[n_0] ≠ 0.
This is a contradiction, so our assumption that g[n] differs from f [n] is wrong. They are the
same.
8.3.3. Example. Suppose

  F(z) = z^2 / ((2z - 1)(z - 1)).

There are two poles, z_1 = 1/2 and z_2 = 1. The region of convergence of F(z), seen as the
z-transform of an initially at rest signal, hence is |z| > 1.
To determine the initially at rest signal f [n] given its rational z-transform F(z) we shall
use partial fraction expansion. The following examples demonstrate the method. Table 8.1 is
indispensable to these examples.
8.3.4. Example.

a) Let F(z) be given by

  F(z) = z^4 / (z^2 - 1).

Motivated by the formulae of Table 8.1 where a factor z appears in the numerator, we
shall perform partial fraction expansion on F(z)/z rather than on F(z).
Now F(z)/z = z^3/(z^2 - 1) is not strictly proper. It is not even proper. So first we have
to separate a polynomial part (see Appendix A),

  F(z)/z = z^3/(z^2 - 1) = {division with remainder} = z + z/(z^2 - 1).

The term z/(z^2 - 1) is strictly proper, and so has a partial fraction expansion,

  F(z)/z = z + (1/2) 1/(z - 1) + (1/2) 1/(z + 1).

Therefore,

  F(z) = z^2 + (1/2) z/(z - 1) + (1/2) z/(z + 1).        (8.7)

The signal f[n] may now be determined by taking the inverse z-transform of each of the
three terms on the right-hand side of (8.7). Using Table 8.1 we get that

  f[n] = δ[n + 2] + (1/2) 1[n] + (1/2) (-1)^n 1[n].

b) Let F(z) be given by

  F(z) = 1 / (z^2 + 2z + 2).
The poles of F(z) are z = -1 ± j. Partial fraction expansion of F(z)/z gives

  F(z)/z = 1/(z(z + 1 + j)(z + 1 - j))
         = (1/2) 1/z - ((1 + j)/4) 1/(z + 1 + j) + ((-1 + j)/4) 1/(z + 1 - j).

Then F(z) follows as

  F(z) = 1/2 - ((1 + j)/4) z/(z + 1 + j) + ((-1 + j)/4) z/(z + 1 - j),

and then from Table 8.1 we may determine the inverse z-transform,

  f[n] = (1/2) δ[n] + (1/4) ((-1 - j)^{n+1} + (-1 + j)^{n+1}) 1[n].
For an initially at rest signal f[n] that is zero for all n < n_0 we have

  lim_{|z|→∞} z^{n_0} F(z) = {k = n - n_0} = lim_{|z|→∞} Σ_{k=0}^∞ f[k + n_0] z^{-k} = f[n_0].

For instance, for F(z) = z^4/(z^2 - 1) we find with n_0 = -2 that

  lim_{|z|→∞} z^{-2} z^4/(z^2 - 1) = lim_{|z|→∞} z^2/(z^2 - 1)
    = {divide numerator and denominator by the highest power z^2}
    = lim_{|z|→∞} 1/(1 - 1/z^2) = 1.

Verify yourself that no other n_0 will make lim_{|z|→∞} z^{n_0} z^4/(z^2 - 1) finite and
nonzero. It is now easy to show that:
8.3.5. Theorem (initial value theorem). The initially at rest f[n] with z-transform

  F(z) = (p_m z^m + p_{m-1} z^{m-1} + ⋯ + p_1 z + p_0) / (q_l z^l + q_{l-1} z^{l-1} + ⋯ + q_1 z + q_0),
  (p_m ≠ 0, q_l ≠ 0),

is zero for all n < l - m, and f[l - m] = lim_{|z|→∞} z^{l-m} F(z) = p_m/q_l.

Consider for example

  F(z) = 1 / (z(z - 2)^2).

Partial fraction expansion and Table 8.1 yield

  f[n] = (1/4) (δ[n] + δ[n - 1] - 2^n 1[n] + n 2^{n-1} 1[n]).

The degree of the denominator of F(z) is 3, and the degree of the numerator is 0. So f[n]
is zero for all n < 3 - 0 = 3. Indeed, f[n] is like that. Note that lim_{|z|→∞} z^3 F(z) =
lim_{|z|→∞} z^3/(z(z - 2)^2) = lim_{|z|→∞} 1/(1 - 2/z)^2 = 1, which equals f[3] = 1.
8.4 Convolution
With each of the transforms considered so far it was possible to define a convolution product
of two signals whose transform is the product of the transforms of the two signals. Also for
the z-transform there is such a convolution. We assume in the following that F(z) and G(z)
are the z-transforms of discrete-time signals f [n] and g[n]. We try to find a signal h[n] whose
z-transform is the product F(z)G(z). Written out, this is
  Σ_{n=-∞}^∞ h[n] z^{-n} = (Σ_{l=-∞}^∞ f[l] z^{-l}) (Σ_{k=-∞}^∞ g[k] z^{-k}).

The right-hand side is the product of two infinite series. On the intersection of the two regions
of convergence of F(z) and G(z) the two infinite series converge absolutely, which allows us
to interchange product and summation,

  Σ_{n=-∞}^∞ h[n] z^{-n} = Σ_{l=-∞}^∞ Σ_{k=-∞}^∞ f[l] g[k] z^{-l-k}.

To find h[n] we equate equal powers of z. The coefficient of z^{-n} on the left-hand side, h[n],
must equal the sum of the products f[l] g[k] for which l + k = n. That is,

  h[n] = Σ_{l=-∞}^∞ f[l] g[n - l].
It is now clear how we should define convolutions with respect to the z-transform.
8.4.1. Definition. The convolution or convolution product (f * g)[n] of two discrete-time sig-
nals f[n] and g[n] is defined as

  (f * g)[n] = Σ_{l=-∞}^∞ f[l] g[n - l].        (8.8)

Verify yourself that convolution products commute. The convolution product of Definition 8.4.1
is an infinite series. If f[n] and g[n] are causal signals then convergence of the convolution
product is trivial, since then the convolution product (f * g)[n] for each n is a finite sum,

  (f * g)[n] = Σ_{l=-∞}^∞ f[l] g[n - l] = Σ_{l=0}^n f[l] g[n - l].

By the above construction, the z-transform of a convolution product is the product of the
z-transforms. We formulate this as a theorem.
8.4.2. Theorem (Convolution theorem for the z-transform). Let f[n] and g[n] be two
discrete-time signals with z-transforms F(z) and G(z), respectively. Then

  (f * g)[n] ↔ F(z) G(z).

8.4.3. Example. Consider the two causal signals

  f[n] = 2^{-n} 1[n],    g[n] = 3^{-n} 1[n].

The z-transform of f[n] is F(z) = z/(z - 1/2) with region of convergence |z| > 1/2. The z-
transform of g[n] is G(z) = z/(z - 1/3) with region of convergence |z| > 1/3. The convolution
product has z-transform

  F(z)G(z) = z^2 / ((z - 1/2)(z - 1/3)).

After partial fraction expansion of F(z)G(z)/z, etcetera, the convolution may be shown to be

  (f * g)[n] = (3 · 2^{-n} - 2 · 3^{-n}) 1[n].
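The closed form above can be checked against the direct convolution sum, which for causal signals is finite:

```python
# Direct convolution of f[n] = 2^{-n} 1[n] and g[n] = 3^{-n} 1[n],
# compared with the closed form (3·2^{-n} - 2·3^{-n}) 1[n].
def convolve_causal(f, g, n):
    """(f * g)[n] for causal f and g: a finite sum over l = 0..n."""
    return sum(f(l) * g(n - l) for l in range(n + 1))

f = lambda n: 2.0 ** (-n)
g = lambda n: 3.0 ** (-n)
for n in range(10):
    closed = 3 * 2.0 ** (-n) - 2 * 3.0 ** (-n)
    assert abs(convolve_causal(f, g, n) - closed) < 1e-12
```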
The running sum

  Σ_{l=-∞}^n f[l]

may be seen as the convolution with the discrete unit step 1[n]. Indeed,

  (f * 1)[n] = Σ_{l=-∞}^∞ f[l] 1[n - l] = Σ_{l=-∞}^n f[l].

By the convolution theorem we therefore obtain the summation rule

  Σ_{l=-∞}^n f[l] ↔ z/(z - 1) F(z).
8.5 The one-sided z-transform

Given a discrete-time signal f[n], the sum

  F_+(z) = Σ_{n=0}^∞ f[n] z^{-n}

is known as the one-sided z-transform or the unilateral z-transform of f[n], and is sometimes
written as Z_+{f[n]}. Note that the one-sided z-transform equals the z-transform of f[n] 1[n].
The region of convergence of a one-sided transform, hence, equals the region of convergence
of the z-transform of a causal signal, of which we know that it is the complement of a disc with
a certain radius and center at z = 0.
It is clear that the inverse of the one-sided z-transform can only recover the nonnegative-time
function values f[n], n ≥ 0. In case F_+(z) is rational we can again use partial fraction
expansion in combination with Table 8.1 to find f[n], (n ≥ 0).
In the following we review some properties and rules of calculus for the one-sided z-
transform. These rules are the same as for the two-sided z-transform, except for the time-shift
rule.
Properties of the one-sided z-transform

Time-shift (k > 0):

  Z_+{f[n - k]} = Σ_{n=0}^∞ f[n - k] z^{-n} = {m = n - k} = Σ_{m=-k}^∞ f[m] z^{-m-k}
               = z^{-k} F_+(z) + Σ_{m=-k}^{-1} f[m] z^{-m-k}.

Unlike the two-sided transform, the one-sided transform of a delayed signal thus picks up the
initial values f[-1], . . . , f[-k].

Scaling in the z-domain (a ≠ 0):

  Z_+{a^n f[n]} = F_+(z/a).

Differentiation:

  Z_+{n f[n]} = Σ_{n=0}^∞ n f[n] z^{-n} = Σ_{n=0}^∞ f[n] (-z) d/dz(z^{-n}) = -z d/dz F_+(z).

Convolution of causal signals:

  Σ_{l=0}^n f[l] g[n - l] ↔ F_+(z) G_+(z).

Note that the convolution product used here coincides with the convolution product of
Definition 8.4.1 for the case of causal signals f[n] 1[n] and g[n] 1[n].
8.5.1. Example. Suppose f[n] satisfies the difference equation

  f[n] = 2 f[n - 1] - f[n - 2],  (n ≥ 0),        (8.9)

with initial conditions f[-1] = 0 and f[-2] = -1. Application of the one-sided z-transform
and its time-shift rule to (8.9) yields

  F_+(z) = 2(z^{-1} F_+(z) + f[-1]) - (z^{-2} F_+(z) + f[-2] + f[-1] z^{-1}),        (8.10)

and since the initial conditions are given, f[-1] = 0, f[-2] = -1, the equation (8.10)
becomes

  F_+(z) = 2 z^{-1} F_+(z) - (z^{-2} F_+(z) - 1).

The solution F_+(z) of this algebraic equation is

  F_+(z) = 1/(1 - 2z^{-1} + z^{-2}) = z^2/(z^2 - 2z + 1) = z^2/(z - 1)^2.

Partial fraction expansion of F_+(z)/z gives

  F_+(z) = z/(z - 1) + z/(z - 1)^2,

so that from Table 8.1 we obtain

  f[n] = 1 + n,  (n ≥ 0).

This agrees with the values that follow directly from the difference equation:

  f[0] = 2 f[-1] - f[-2] = 2·0 + 1 = 1,
  f[1] = 2 f[0] - f[-1] = 2·1 - 0 = 2,
  f[2] = 2 f[1] - f[0] = 2·2 - 1 = 3,
  ...
We have more to say about difference equations like (8.9) when we consider discrete-time
systems.
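The agreement claimed in Example 8.5.1 is easy to verify by running the recursion:

```python
# The recursion f[n] = 2 f[n-1] - f[n-2] with f[-1] = 0, f[-2] = -1,
# compared with the closed form f[n] = n + 1 found via the one-sided
# z-transform.
def solve(n_max):
    f = {-2: -1, -1: 0}
    for n in range(n_max + 1):
        f[n] = 2 * f[n - 1] - f[n - 2]
    return [f[n] for n in range(n_max + 1)]

print(solve(5))  # [1, 2, 3, 4, 5, 6]
```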
Table 8.2: Properties of the one-sided z-transform F_+(z) = Σ_{n=0}^∞ f[n] z^{-n}, valid on
some region |z| > ρ.

  Property           f[n]                       F_+(z)
  Linearity          a_1 f_1[n] + a_2 f_2[n]    a_1 F_{1+}(z) + a_2 F_{2+}(z)
  Conjugation        f*[n]                      (F_+(z*))*
  Time-shift         f[n - 1]                   z^{-1} F_+(z) + f[-1]
                     f[n - 2]                   z^{-2} F_+(z) + f[-2] + f[-1] z^{-1}
                     f[n + 1]                   z F_+(z) - z f[0]
                     f[n + 2]                   z^2 F_+(z) - z^2 f[0] - z f[1]
  Scaling (z-dom.)   a^n f[n]                   F_+(z/a)
  Differentiation    n f[n]                     -z d/dz F_+(z)
  Convolution        Σ_{l=0}^n f[l] g[n - l]    F_+(z) G_+(z)
8.6 Discrete-time systems
8.6.1. Definition. A discrete-time system T is linear if for all input signals u_1[n] and u_2[n]
and every complex a1 and a2 there holds that
T {a1 u 1 [n] + a2 u 2 [n]} = a1 T {u 1 [n]} + a2 T {u 2 [n]}.
8.6.2. Definition. A discrete-time system T is time-invariant if for each input u[n] and corre-
sponding output y[n] = T{u[n]} there holds that for any k ∈ Z,

  T{u[n - k]} = y[n - k].
Discrete-time systems that are both linear and time-invariant are called discrete-time LTI
systems. As with continuous-time systems, a discrete-time LTI system is causal if and only if
the response to every causal input is causal.
In the description of discrete-time LTI systems in the time-domain an important role is
played by the response to the unit pulse δ[n]. The response to the unit pulse is called the
(discrete-time) impulse response. Let h[n] be the impulse response. Then, by time-invariance,
the response to a shifted unit pulse δ[n - k] is h[n - k]. Now, any input signal u[n] can be seen
as a superposition of shifted pulses δ[n - k], (see Example 8.2.1a):

  u[n] = Σ_{k=-∞}^∞ u[k] δ[n - k].

By linearity and time-invariance, the response y[n] to this input is

  y[n] = Σ_{k=-∞}^∞ u[k] h[n - k].        (8.11)
Apparently every LTI system is described by a convolution (8.11) and as such is completely
characterized by its impulse response. This is a beautiful result. The description of LTI systems
is even simpler in the z-domain. Applying the z-transform to (8.11) namely gives

  Y(z) = H(z) U(z),        (8.12)

where Y(z), H(z) and U(z) are the z-transforms of y[n], h[n] and u[n], respectively. The
function H(z) is called the transfer function of the system.
Consider, for example, the system described by

  y[n] = (u[n] + 2u[n - 1] + u[n - 2]) / 4.

The system calculates for each n the weighted average of the input at times n, n - 1 and n - 2.
Verify yourself that the system is linear and time-invariant and that the system is causal. The
impulse response is the output y[n] for the case that the input u[n] is the unit pulse u[n] = δ[n].
The impulse response therefore is

  h[n] = (δ[n] + 2δ[n - 1] + δ[n - 2]) / 4,

that is, h[0] = 1/4, h[1] = 1/2, h[2] = 1/4 and h[n] = 0 elsewhere.
The discrete-time system considered in the above example has the property that only finitely
many function values h[n], (n ∈ Z) are nonzero. Systems like these are called finite impulse
response filters (FIR filters). Examples of infinite impulse response filters (IIR filters) are con-
sidered in the following subsection. An IIR filter is a filter whose impulse response h[n] has
infinitely many nonzero function values h[n] ≠ 0.

Difference equations

Many discrete-time systems are described by a linear difference equation with constant coeffi-
cients,

  y[n] + q_1 y[n - 1] + ⋯ + q_N y[n - N] = p_0 u[n] + p_1 u[n - 1] + ⋯ + p_N u[n - N].   (8.13)

Now, it may be clear that this way the output y[n] is not uniquely determined by the input u[n].
For example if u[n] is identically zero, then any constant y[n] = c is a solution of the difference
equation

  y[n] - y[n - 1] = u[n].
As in the continuous-time case it may however be argued that the most natural vantage point is
to assume the output is initially at rest when given an initially at rest input. To see more clearly
how this works we reorder the difference equation into an explicit recurrence relation

  y[n] = -q_1 y[n - 1] - ⋯ - q_N y[n - N] + p_0 u[n] + p_1 u[n - 1] + ⋯ + p_N u[n - N].   (8.14)
An initially at rest system described by a difference equation (8.13) refers to a system where
the output y[n] is computed as indicated by the above scheme. You may want to verify that
this way the output is uniquely determined by the input, and that in addition the system is
linear and time-invariant. We may therefore conclude that initially at rest systems described
by a difference equation (8.13) are LTI systems. In particular they have an impulse response
that fully characterizes them. Recall that the impulse response is the output y[n] if we take as
input the unit pulse, u[n] = [n]. The output computed in Example 8.6.5 therefore is nothing
but the systems impulse response. It is generally easier to find the impulse response via the
z-transform. Note that the impulse response h[n] as found in Example 8.6.5 turned out to
be a causal signal. Like in the continuous-time case this means that initially at rest systems
described by (8.13) are causal systems.
8.6.6. Lemma. The initially at rest system described by (8.13) is a causal LTI system and it
has a proper rational transfer function

  H(z) = (p_0 + p_1 z^{-1} + ⋯ + p_{N-1} z^{-N+1} + p_N z^{-N}) / (1 + q_1 z^{-1} + ⋯ + q_{N-1} z^{-N+1} + q_N z^{-N}).   (8.15)
Proof. The impulse response is the output when we take as input u[n] = δ[n]. That is,

  h[n] + q_1 h[n - 1] + ⋯ + q_{N-1} h[n - N + 1] + q_N h[n - N]
    = p_0 δ[n] + p_1 δ[n - 1] + ⋯ + p_{N-1} δ[n - N + 1] + p_N δ[n - N].

Next take z-transforms of left and right-hand side. The time-shift rule states that Z{f[n - k]} =
z^{-k} Z{f[n]}, and Z{δ[n - k]} = z^{-k}. So we obtain

  (1 + q_1 z^{-1} + ⋯ + q_N z^{-N}) H(z) = p_0 + p_1 z^{-1} + ⋯ + p_N z^{-N},

and the transfer function follows. By the Initial Value Theorem 8.3.5, the impulse response
h[n] is at rest up to n = 0 (and h[0] = p_0). The impulse response hence is causal, implying
that the system is causal.
8.6.7. Example. Suppose T is an initially at rest system in which the input and output are
related via

  y[n] - y[n - 1] + (1/4) y[n - 2] = u[n] - u[n - 1].        (8.16)

We shall determine the impulse response with the help of the z-transform. The transfer function
is

  H(z) = (1 - z^{-1}) / (1 - z^{-1} + (1/4) z^{-2}) = (z^2 - z)/(z^2 - z + 1/4) = (z^2 - z)/(z - 1/2)^2.

Partial fraction expansion of H(z)/z gives

  H(z)/z = (z - 1)/(z - 1/2)^2 = 1/(z - 1/2) - (1/2) 1/(z - 1/2)^2,

so that

  H(z) = z/(z - 1/2) - (1/2) z/(z - 1/2)^2.

From Table 8.1 we then obtain the impulse response

  h[n] = (1/2)^n 1[n] - (1/2) n (1/2)^{n-1} 1[n] = (1 - n)(1/2)^n 1[n].
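The impulse response found via the z-transform can be cross-checked by simulating the recurrence of (8.16) from rest:

```python
# Initially at rest simulation of y[n] - y[n-1] + (1/4) y[n-2] = u[n] - u[n-1]
# with u[n] = delta[n], compared with h[n] = (1 - n)(1/2)^n 1[n].
def impulse_response(n_max):
    y = {-2: 0.0, -1: 0.0}                  # initially at rest
    u = lambda n: 1.0 if n == 0 else 0.0    # unit pulse input
    for n in range(n_max + 1):
        y[n] = y[n - 1] - 0.25 * y[n - 2] + u(n) - u(n - 1)
    return [y[n] for n in range(n_max + 1)]

for n, hn in enumerate(impulse_response(10)):
    assert abs(hn - (1 - n) * 0.5 ** n) < 1e-12
```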
Indeed, if |u[n]| ≤ M for all n and Σ_{k=-∞}^∞ |h[k]| < ∞, then

  |y[n]| = |Σ_{k=-∞}^∞ h[k] u[n - k]| ≤ M Σ_{k=-∞}^∞ |h[k]| < ∞,

and the bound is independent of n.
That absolute summability of h[n] is also necessary for BIBO-stability can be seen as
follows. Assume the system is BIBO-stable and consider the bounded input u[n] = sgn h[-n].
Then, by BIBO-stability, we have that |y[0]| < ∞, but,

  y[0] = (h * u)[0] = Σ_{k=-∞}^∞ h[k] u[-k] = Σ_{k=-∞}^∞ |h[k]|,

so the impulse response is absolutely summable.
Consider as an example the initially at rest system described by

  y[n] - α y[n - 1] = u[n - 1].

The transfer function of this system is H(z) = z^{-1}/(1 - α z^{-1}) = 1/(z - α), so the impulse
response is h[n] = α^{n-1} 1[n - 1]. Now

  Σ_{n=-∞}^∞ |h[n]| = Σ_{n=1}^∞ |α|^{n-1} = {assuming |α| < 1} = 1/(1 - |α|).
From this expression we conclude that the system is BIBO-stable if and only if || < 1.
The same conclusion can be drawn, without doing any computation, by inspecting the
poles of H(z). The transfer function H(z) = 1/(z - α) has one pole, z_1 = α, and by the above
theorem, the system is therefore BIBO-stable if and only if this pole z_1 = α has absolute value
less than one.
8.6.11. Example. A typical example of an unstable system is that of multiplying rabbits. Let
y[n] denote the number of pairs of rabbits that a farmer has in month n. Now suppose that
rabbits have to be two months old before they are mature and that from that time onwards each
pair of rabbits produces one new pair every month! This we can capture by the difference
equation,
y[n] = y[n 1] + y[n 2] + u[n].
(8.17)
The difference equation expresses that in month n the farmer has as many pairs of rabbits as
she had the previous month, y[n 1], plus y[n 2] newborn pairs of rabbits due to the number
of pairs of rabbits y[n 2] that are mature in month n. The input u[n] is the number of pairs
that the farmer introduces to the group.
If the farmer would have been a mathematician, she would have calculated the poles of the
system first. To calculate the poles of the system, we rearrange Equation (8.17) as
y[n] y[n 1] y[n 2] = u[n].
The transfer function now follows as

  H(z) = 1/(1 - z^{-1} - z^{-2}) = z^2/(z^2 - z - 1).

The poles are the zeros of z^2 - z - 1, that is, z_{1,2} = (1 ± √5)/2. The pole (1 + √5)/2 ≈
1.618 has absolute value greater than one, so the system is not BIBO-stable: the rabbit
population explodes.
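Simulating the rabbit model shows both the Fibonacci pattern and the unstable growth governed by the pole (1 + √5)/2:

```python
# Rabbit model y[n] = y[n-1] + y[n-2] + u[n], with one pair introduced
# in month 0. The population follows the Fibonacci numbers, and the
# month-to-month growth ratio tends to the unstable pole (1 + sqrt(5))/2.
def rabbits(n_max):
    y = {-2: 0, -1: 0}
    for n in range(n_max + 1):
        y[n] = y[n - 1] + y[n - 2] + (1 if n == 0 else 0)
    return [y[n] for n in range(n_max + 1)]

pop = rabbits(20)
print(pop[:8])            # [1, 1, 2, 3, 5, 8, 13, 21]
print(pop[-1] / pop[-2])  # close to 1.618...
```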
8.7 Problems
8.1 Determine the one-sided z-transform of f[n] and its region of convergence for
(a) f[n] = δ[n],
(b) f[n] = δ[n - 5],
(c) f[n] = 1[n],
(d) f[n] = n b^n,
(e) f[n] = 1/n!.
(b) y[n] - y[n - 1] + (5/16) y[n - 2] = u[n],
(c) y[n] - (3/2) y[n - 1] - y[n - 2] = u[n - 1] + u[n - 2].
8.14 Consider a discrete-time system T. Let y[n] be the step response y[n] = T{1[n]}. Let
H(z) be the system's transfer function.
(a) Suppose that T is the initially at rest system (8.13). Show that Y(z) = H(z) z/(z - 1)
and determine the region of convergence of Y(z).
(b) Now suppose that all we know of T is that its impulse response is initially at rest
and that the region of convergence of H(z) is |z| > ρ. Prove that Y(z) exists for
all |z| > max(1, ρ).
A
Partial Fraction Expansion
In this appendix we briefly review the problem of partial fraction expansion. The partial
fraction expansion of a rational function, like

  1/((s + 1)(s + 2)(s + 3)),        (A.1)

is an expansion of the rational function as a sum of elementary terms of the form α/(s - β)^k,
such as,

  1/((s + 1)(s + 2)(s + 3)) = (1/2)/(s + 1) - 1/(s + 2) + (1/2)/(s + 3).        (A.2)

It is easy to verify that this identity is correct: multiply both left and right-hand side by
(s + 1)(s + 2)(s + 3), and the identity reduces to the polynomial identity

  1 = (1/2)(s + 2)(s + 3) - (s + 1)(s + 3) + (1/2)(s + 1)(s + 2),

whose validity is subsequently easily verified.
In general, for a rational function P(s)/Q(s) we seek an expansion of the form

  P(s)/Q(s) = A_0 + Σ_{k=1}^M ( A_{k,1}/(s - s_k) + A_{k,2}/(s - s_k)^2 + ⋯ + A_{k,m_k}/(s - s_k)^{m_k} )   (A.3)
with A_0, A_{k,l}, s_k ∈ C and m_k, M ∈ N. Two properties are immediate. Firstly, the right-
hand side of (A.3) is not defined for s = s_k, so the left-hand side, P(s_k)/Q(s_k), is not defined
as well. Therefore the s_k are necessarily zeros of the polynomial Q(s). Secondly, the limit as
|s| → ∞ of the right-hand side is finite,

  lim_{|s|→∞} ( A_0 + Σ_{k=1}^M ( A_{k,1}/(s - s_k) + A_{k,2}/(s - s_k)^2 + ⋯ + A_{k,m_k}/(s - s_k)^{m_k} ) ) = A_0,   (A.4)

so the left-hand side P(s)/Q(s) is also finite in the limit |s| → ∞. This is the case if and only
if the degree of P(s) is less than or equal to the degree of Q(s). Rational functions P(s)/Q(s)
with deg P(s) ≤ deg Q(s) are called proper rational functions.
A.0.1. Theorem. Every proper rational function P(s)/Q(s) has a partial fraction expansion.
More concretely, let s_k, (k = 1, 2, . . . , M) denote the zeros of Q(s). Then P(s)/Q(s) has
a partial fraction expansion of the form

  P(s)/Q(s) = A_0 + Σ_{k=1}^M ( A_{k,1}/(s - s_k) + A_{k,2}/(s - s_k)^2 + ⋯ + A_{k,m_k}/(s - s_k)^{m_k} ),

where M is the number of different zeros of Q(s), m_k is the multiplicity of zero s_k of Q(s), and
A_0 and the A_{k,l} are (complex) constants.
A rational function P(s)/Q(s) with deg P(s) < deg Q(s), where Q(s) = q_n s^n + ⋯ + q_0,
(q_n ≠ 0), is called strictly proper.
Strictly proper rational functions tend to zero as |s| → ∞, so in view of (A.4), we have that
A_0 = 0.
In this section we demonstrate partial fraction expansion techniques for strictly proper rational
functions.
A.1.1. Example. Let P(s)/Q(s) = 1/((s + 1)(s + 2)). The zeros of Q(s) are s_1 = -1
and s_2 = -2 and they both have multiplicity one. Therefore by the above theorem there are
constants A = A_{1,1} and B = A_{2,1} such that

  1/((s + 1)(s + 2)) = A/(s + 1) + B/(s + 2).

To determine the values of A and B we may multiply left and right-hand side by (s + 1)(s + 2)
to obtain,

  1 = (s + 2)A + (s + 1)B = s(A + B) + (2A + B).

Equating coefficients of equal powers of s gives A + B = 0 and 2A + B = 1, so A = 1 and
B = -1. We found the partial fraction expansion

  1/((s + 1)(s + 2)) = 1/(s + 1) - 1/(s + 2).
A.1.2. Example. The method of the previous example is generally applicable, but if Q(s) has
many zeros, then the method becomes unwieldy. In such cases it is often easier to work with a
direct method, such as the one demonstrated on the following example. The method assumes
that all poles of F(s) have multiplicity 1.
Suppose

  F(s) = (s + 4)/((s + 1)(s + 2)(s + 3)).

We see that F(s) is strictly proper, so F(s) has the partial fraction expansion,

  (s + 4)/((s + 1)(s + 2)(s + 3)) = A/(s + 1) + B/(s + 2) + C/(s + 3)

for some constants A, B and C. Note that A is the coefficient of 1/(s + 1), which has a pole at
s = -1. Now to find A we simply evaluate at this pole s = -1 the function F(s) with the term
(s + 1) removed:

  A = (s + 4)/((s + 2)(s + 3)) |_{s=-1} = (-1 + 4)/((-1 + 2)(-1 + 3)) = 3/2.

Likewise the coefficients B and C of 1/(s + 2) and 1/(s + 3) may be directly determined as

  B = (s + 4)/((s + 1)(s + 3)) |_{s=-2} = (-2 + 4)/((-2 + 1)(-2 + 3)) = -2,

and

  C = (s + 4)/((s + 1)(s + 2)) |_{s=-3} = (-3 + 4)/((-3 + 1)(-3 + 2)) = 1/2.
The exposition in the previous example was deliberately taken rather graphical as this makes
the method easier to perform by hand. Mathematically, we did nothing but compute

  A = lim_{s→-1} (s + 1)F(s) = 3/2,  B = lim_{s→-2} (s + 2)F(s) = -2,  C = lim_{s→-3} (s + 3)F(s) = 1/2.
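The cover-up computation above can be mimicked numerically: for a simple pole, (s - s_k)F(s) evaluated just outside the pole approximates the limit. A minimal sketch:

```python
# The coefficients of Example A.1.2 as limits: for simple poles,
# A_k = lim_{s -> s_k} (s - s_k) F(s), approximated just outside the pole.
def residue_simple(F, pole, eps=1e-7):
    s = pole + eps
    return (s - pole) * F(s)

F = lambda s: (s + 4) / ((s + 1) * (s + 2) * (s + 3))
print(residue_simple(F, -1.0))  # ~  3/2
print(residue_simple(F, -2.0))  # ~ -2
print(residue_simple(F, -3.0))  # ~  1/2
```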
If F(s) has a multiple pole then a similar result holds.

A.1.3. Lemma. Suppose s_k is a zero of a polynomial Q(s) with multiplicity m_k. Then the
coefficient A_{k,m_k} of the term of highest order

  A_{k,m_k}/(s - s_k)^{m_k}

in the partial fraction expansion of F(s) = P(s)/Q(s) equals

  A_{k,m_k} = lim_{s→s_k} (s - s_k)^{m_k} F(s).
A.1.4. Example. Suppose

  F(s) = (s + 4)/((s + 1)^2 (s + 2)).

Since the multiplicity of the zero s_1 = -1 is 2, the coefficient B of the highest order term
1/(s + 1)^2 equals

  B = lim_{s→-1} (s + 1)^2 F(s) = (s + 4)/(s + 2) |_{s=-1} = (-1 + 4)/(-1 + 2) = 3.

Subtracting this term from F(s) lowers the order of the pole at s = -1,

  F(s) - 3/(s + 1)^2 = ((s + 4) - 3(s + 2))/((s + 1)^2 (s + 2))
                     = (-2s - 2)/((s + 1)^2 (s + 2)) = -2/((s + 1)(s + 2)).

We have reduced the problem to one of lower order. We leave it to the reader to verify that

  F(s) - 3/(s + 1)^2 = -2/((s + 1)(s + 2)) = A/(s + 1) + C/(s + 2)

for A = -2 and C = 2. The partial fraction expansion of F(s) is now determined,

  (s + 4)/((s + 1)^2 (s + 2)) = -2/(s + 1) + 3/(s + 1)^2 + 2/(s + 2).
Suppose now that P(s)/Q(s) is proper but not necessarily strictly proper, with Q(s) =
q_n s^n + ⋯ + q_0, (q_n ≠ 0).        (A.6)
Partial fraction expansion of a proper rational function can easily be reduced to that of a strictly
proper rational function. Indeed, if P(s)/Q(s) is proper, then we may always express it as

  P(s)/Q(s) = (P(s) - A_0 Q(s) + A_0 Q(s))/Q(s) = (P(s) - A_0 Q(s))/Q(s) + A_0,

in which A_0 is chosen in such a way that

  (P(s) - A_0 Q(s))/Q(s)

is strictly proper.
A.2.1. Example. Suppose

  P(s)/Q(s) = s^2/(s^2 + 3s + 2).

The degree of the numerator P(s) is the same as the degree of the denominator Q(s). Now

  P(s)/Q(s) = (s^2 - A_0(s^2 + 3s + 2))/(s^2 + 3s + 2) + A_0.

For A_0 = 1 the numerator polynomial s^2 - A_0(s^2 + 3s + 2) drops degree,

  P(s)/Q(s) = (s^2 - (s^2 + 3s + 2))/(s^2 + 3s + 2) + 1 = (-3s - 2)/(s^2 + 3s + 2) + 1.

Now (-3s - 2)/(s^2 + 3s + 2) is strictly proper and it has partial fraction expansion
1/(s + 1) - 4/(s + 2) (verify this yourself). Then the partial fraction expansion of P(s)/Q(s)
follows as

  s^2/(s^2 + 3s + 2) = 1 + 1/(s + 1) - 4/(s + 2).
192
( pm = 0).
In cases like these we may express the rational function as a polynomial plus a strictly proper
part,
P(s)
= L polynomial (s) + Fstrictly proper (s).
Q(s)
On the strictly proper part we may again perform partial fraction expansion. The polynomial
part and strictly proper part may be obtained through long division.
A.3.1. Example. Suppose
s4
P(s)
= 2
.
Q(s)
s + 3s + 2
The degree of the numerator P(s) exceeds that of the denominator Q(s), i.e., P(s)/Q(s) is
non-proper. The polynomial part and strictly proper part follow from long division,
s 2 + 3s + 2 / s 4
s 4 + 3s 3 + 2s 2
\ s 2 3s + 7
3s 3 2s 2
3s 3 9s 2 6s
7s 2 + 6s
7s 2 + 21s +14
15s 14
Then, finally,
15s 14
s4
= s 2 3s + 7 + 2
2
s + 3s + 2
s + 3s + 2
15s 14
= s 2 3s + 7 +
(s + 1)(s + 2)
16
1
.
= s 2 3s + 7 +
s+1 s +2
B
Solution of ODEs and difference equations

This appendix reviews time-domain solutions of ordinary linear differential equations with
constant coefficients. In Section B.2 time-domain solutions of ordinary difference equations
are discussed.

Consider the ordinary differential equation (ODE)

  y^(n)(t) + q_{n-1} y^(n-1)(t) + ⋯ + q_1 y^(1)(t) + q_0 y(t)
    = p_n u^(n)(t) + p_{n-1} u^(n-1)(t) + ⋯ + p_0 u(t).        (B.1)

The associated homogeneous equation is obtained by setting the right-hand side to zero.
Substituting the trial solution y(t) = e^{λt} in the homogeneous equation leads to the charac-
teristic equation

  λ^n + q_{n-1} λ^{n-1} + ⋯ + q_1 λ + q_0 = 0.        (B.2)

The fundamental theorem of algebra states that a polynomial equation of degree n has exactly
n roots, counting multiplicities. Now if λ_1 is a root of the characteristic equation (B.2), then
y(t) = e^{λ_1 t} is a solution of the homogeneous equation. Indeed, if y(t) = e^{λ_1 t}, then

  y^(n)(t) + q_{n-1} y^(n-1)(t) + ⋯ + q_1 y^(1)(t) + q_0 y(t)
    = λ_1^n e^{λ_1 t} + q_{n-1} λ_1^{n-1} e^{λ_1 t} + ⋯ + q_1 λ_1 e^{λ_1 t} + q_0 e^{λ_1 t}
    = (λ_1^n + q_{n-1} λ_1^{n-1} + ⋯ + q_1 λ_1 + q_0) e^{λ_1 t} = 0.
B.1.1. Example.

1. The characteristic equation of

  y^(2)(t) - b^2 y(t) = 0

is λ^2 - b^2 = 0. Its roots are λ_1 = b and λ_2 = -b. Hence y_1(t) = e^{bt} and y_2(t) = e^{-bt}
are solutions of y^(2)(t) - b^2 y(t) = 0. By linearity, then,

  y(t) = α_1 e^{bt} + α_2 e^{-bt}

is a solution of y^(2)(t) - b^2 y(t) = 0 for any α_1, α_2 ∈ C.

2. The characteristic equation of

  y^(3)(t) - 3y^(2)(t) + 2y^(1)(t) = 0

is λ^3 - 3λ^2 + 2λ = 0. Since

  λ^3 - 3λ^2 + 2λ = λ(λ - 1)(λ - 2)

we see that λ_1 = 0, λ_2 = 1 and λ_3 = 2 are the characteristic roots. Then e^{0t} = 1, e^t
and e^{2t} are three solutions of the homogeneous equation, and then by linearity every
y(t) of the form

  y(t) = α_1 + α_2 e^t + α_3 e^{2t},  α_1, α_2, α_3 ∈ C,

is a solution as well.

3. The characteristic equation of

  y^(2)(t) - 2y^(1)(t) + y(t) = 0

is λ^2 - 2λ + 1 = (λ - 1)^2 = 0, with the single root λ_1 = 1 of multiplicity 2. Besides
y(t) = e^t, also y(t) = t e^t is a solution (substitute it to check), and t e^t is not a sum of
exponential functions.
In the last of the three examples we saw that not every solution of the homogeneous equation
is a sum of exponential functions. This has something to do with the fact that the multiplicity
of the characteristic root λ_1 in that example is more than 1. The general result is this:

B.1.2. Theorem. To each characteristic root λ_i of multiplicity m_i, the m_i functions

  y_{i,k}(t) = t^k e^{λ_i t},  (k = 0, 1, . . . , m_i - 1)

are solutions of the homogeneous equation. These solutions y_{i,k}(t) are called the basis solu-
tions. Furthermore, y(t) is a solution of the homogeneous equation if and only if it is a linear
combination of the basis solutions,

  y(t) = Σ_{i,k} α_{i,k} y_{i,k}(t),  α_{i,k} ∈ C.        (B.3)
Proof. (Idea only). That each basis solution satisfies the homogeneous equation is not difficult
to verify. There are exactly n basis solutions, and they can be seen to be linearly independent.
Therefore the general solutions (B.3) form an n-dimensional subspace. The homogeneous so-
lutions are the solutions for u(t) = 0, so from Theorem 7.2.3, Equation (7.11) we know that
the general solution of the homogeneous equation is C e^{At} x(0), x(0) ∈ R^n, and this forms
an n-dimensional subspace as well. The solution sets {y(t) : y(t) = C e^{At} x(0), x(0) ∈ R^n}
and (B.3) must therefore be the same.
B.1.3. Example.

1. The characteristic equation of

  y^(n)(t) = 0

is λ^n = 0. It has one root λ_1 = 0 with multiplicity n. The basis solutions hence are

  y_{1,1}(t) = 1,  y_{1,2}(t) = t,  y_{1,3}(t) = t^2,  . . . ,  y_{1,n}(t) = t^{n-1}.

The general solution of y^(n)(t) = 0 is therefore y(t) = α_{1,1} + α_{1,2} t + ⋯ + α_{1,n} t^{n-1}, that
is, the solutions are the polynomials in t of degree n - 1 or less.

2. The characteristic equation of

  y^(3)(t) - 4y^(2)(t) + 5y^(1)(t) - 2y(t) = 0        (B.4)

is λ^3 - 4λ^2 + 5λ - 2 = 0. Since

  λ^3 - 4λ^2 + 5λ - 2 = (λ - 1)^2 (λ - 2)

we obtain as basis solutions

  y_{1,1}(t) = e^t,  y_{1,2}(t) = t e^t,  y_{2,1}(t) = e^{2t}.
B.1.4. Lemma. Suppose u(t) is given and let ypart (t) be one solution of the ODE (B.1). Then
the general solution y(t) of (B.1) is
y(t) = ypart (t) + yhom (t)
where yhom (t) is any solution of the associated homogeneous equation.
Proof. Prove it yourself!
In our quest for the general solution it therefore suffices to find one solution of the ODE. All
others then follow by adding the general solution of the homogeneous equation. One solution
ypart (t) of the ODE is commonly called a particular solution. Generally it is difficult to find a
particular solution in which case one has to settle for the integral expression (7.11) or solve it
numerically. For certain input signals u(t) it is however possible to make an educated guess.
The following three examples demonstrate three such cases.
B.1.5. Example (Constant inputs). If the input is constant,

  u(t) = c,

then we may contemplate a constant particular solution y_part(t). As all derivatives of a constant
signal are zero, the ODE (B.1) for constant input and output reduces to

  q_0 y(t) = p_0 u(t).

If q_0 ≠ 0 then apparently

  y_part(t) = (p_0/q_0) c

is a particular solution. For instance, for the ODE y^(2)(t) - 4y(t) = u(t) with constant input
u(t) = c we obtain y_part(t) = -c/4, and the general solution is

  y(t) = -c/4 + α_{1,1} e^{2t} + α_{2,1} e^{-2t},  α_{1,1}, α_{2,1} ∈ C.
B.1.6. Example (Exponential inputs). The constant input of the previous example is a de-
generate case of an exponential input u(t) = e^{s_0 t}. For exponential inputs u(t) = e^{s_0 t}
we contemplate a particular solution of the form

  y_part(t) = A e^{s_0 t},  for some A ∈ C.

Substitution in the ODE (B.1) gives

  y_part^(n)(t) + q_{n-1} y_part^(n-1)(t) + ⋯ + q_0 y_part(t) = A (s_0^n + q_{n-1} s_0^{n-1} + ⋯ + q_0) e^{s_0 t},
  p_n u^(n)(t) + p_{n-1} u^(n-1)(t) + ⋯ + p_0 u(t) = (p_n s_0^n + p_{n-1} s_0^{n-1} + ⋯ + p_0) e^{s_0 t},

so that

  A = (p_n s_0^n + p_{n-1} s_0^{n-1} + ⋯ + p_0) / (s_0^n + q_{n-1} s_0^{n-1} + ⋯ + q_0).

For A to exist we shall need to assume that s_0 is not a characteristic root, otherwise the above
denominator is zero. For s_0 = 0 we recover the case of constant inputs.
Consider the ODE

  y^(2)(t) - 4y(t) = u(t)

with input u(t) = e^{s_0 t}. Then as long as s_0 is not a characteristic root, we obtain as particular
solution

  y_part(t) = e^{s_0 t} / (s_0^2 - 4).

Like in the previous example, the general solution then is

  y(t) = e^{s_0 t}/(s_0^2 - 4) + α_{1,1} e^{2t} + α_{2,1} e^{-2t},  α_{1,1}, α_{2,1} ∈ C.
B.1.7. Example (Polynomial inputs). If the input u(t) is a polynomial in t, then the right-
hand side of the ODE (B.1), p_n u^(n)(t) + p_{n-1} u^(n-1)(t) + ⋯ + p_0 u(t), is a polynomial
in t as well. The ODE is then of the form

  y^(n)(t) + q_{n-1} y^(n-1)(t) + ⋯ + q_0 y(t) = Σ_{k=0}^M β_k t^k,  (β_k ∈ C).

The claim is that there is a particular solution which is polynomial in t. The method is best
demonstrated on an example. Consider the ODE

  y^(2)(t) + y^(1)(t) + 2y(t) = u^(1)(t) + u(t).        (B.5)

Suppose that u(t) = t^2, so the ODE becomes y^(2)(t) + y^(1)(t) + 2y(t) = 2t + t^2. Differentiate
both sides as often as needed up to the point where the right-hand side becomes constant.

  Original equation:      y^(2)(t) + y^(1)(t) + 2y(t) = 2t + t^2        (B.6)
  Differentiated once:    y^(3)(t) + y^(2)(t) + 2y^(1)(t) = 2 + 2t      (B.7)
  Differentiated twice:   y^(4)(t) + y^(3)(t) + 2y^(2)(t) = 2           (B.8)
The last equation (B.8) has a solution y(2) (t) = 1. Now we use that in the preceding equation
(B.7) to solve for y(1) (t). Since y (3) (t) = 0 we obtain from (B.7) that
y (1) (t) =
1
1
(2 + 2t) y (3) (t) y (2) (t) = t + .
2
2
Now that y^{(1)}(t) is determined we return to Equation (B.6) and solve that for y(t),

   y(t) = (1/2)(2t + t^2 - y^{(2)}(t) - y^{(1)}(t)) = (1/2)(2t + t^2 - 1 - (t + 1/2)) = (1/2)t^2 + (1/2)t - 3/4.

The characteristic roots of (B.5) are λ_1 = -1/2 + (1/2)j√7 and λ_2 = -1/2 - (1/2)j√7. The general solution of (B.5) for u(t) = t^2 hence is

   y(t) = (1/2)t^2 + (1/2)t - 3/4 + μ_{1,1} e^{(-1/2 + (1/2)j√7)t} + μ_{2,1} e^{(-1/2 - (1/2)j√7)t},   μ_{1,1}, μ_{2,1} ∈ C.
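A quick check of the polynomial particular solution above (a sketch in plain Python; the helper name is ours): for y(t) = t^2/2 + t/2 - 3/4 the exact derivatives are y'(t) = t + 1/2 and y''(t) = 1, so y'' + y' + 2y should reproduce 2t + t^2 identically.

```python
def residual(t):
    # y_part(t) = t^2/2 + t/2 - 3/4, with its exact derivatives
    y = 0.5 * t ** 2 + 0.5 * t - 0.75
    y1 = t + 0.5          # y'
    y2 = 1.0              # y''
    # left-hand side minus right-hand side of (B.5) for u(t) = t^2
    return y2 + y1 + 2 * y - (2 * t + t ** 2)

# the residual vanishes for every t
for t in (-2.0, 0.0, 0.5, 3.0):
    assert abs(residual(t)) < 1e-12
```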
B.2 Difference equations

Entirely analogous results hold for difference equations

   y[n] + q_1 y[n-1] + ⋯ + q_N y[n-N] = p_0 u[n] + p_1 u[n-1] + ⋯ + p_N u[n-N].   (B.9)

The associated homogeneous equation is

   y[n] + q_1 y[n-1] + ⋯ + q_N y[n-N] = 0,   (B.10)

and substitution of y[n] = λ^n turns (B.10) into the characteristic equation

   1 + q_1 λ^{-1} + q_2 λ^{-2} + ⋯ + q_N λ^{-N} = 0.   (B.11)

Any root of this equation gives rise to a solution of the homogeneous equation (B.10). Indeed, if λ_1 is a root of (B.11), then y[n] = λ_1^n is a solution of the homogeneous equation:

   y[n] + q_1 y[n-1] + ⋯ + q_{N-1} y[n-N+1] + q_N y[n-N] = (λ_1^N + q_1 λ_1^{N-1} + ⋯ + q_{N-1} λ_1 + q_N) λ_1^{n-N} = 0.
B.2.1. Example. Consider the homogeneous difference equation

   y[n] - 3y[n-1] + 2y[n-2] = 0.

From the characteristic equation

   1 - 3λ^{-1} + 2λ^{-2} = λ^{-2}(λ^2 - 3λ + 2) = λ^{-2}(λ - 1)(λ - 2) = 0

we see that λ_1 = 1 and λ_2 = 2 are the characteristic roots. Then λ_1^n = 1 and λ_2^n = 2^n are two solutions of the homogeneous equation, and then by linearity every y[n] of the form

   y[n] = μ_1 + μ_2 2^n,   μ_1, μ_2 ∈ C,

is a solution as well.
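The linearity claim is easy to confirm numerically; the sketch below (plain Python, coefficient values arbitrary) checks that y[n] = μ_1 + μ_2 2^n satisfies y[n] - 3y[n-1] + 2y[n-2] = 0 for all n.

```python
def y(n, mu1=5.0, mu2=-1.5):
    # general homogeneous solution built from the characteristic roots 1 and 2
    return mu1 * 1 ** n + mu2 * 2 ** n

for n in range(2, 20):
    assert abs(y(n) - 3 * y(n - 1) + 2 * y(n - 2)) < 1e-9
```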
In the last of the three examples we saw that not every solution of the homogeneous equation is a sum of powers λ^n of the characteristic roots λ. This has something to do with the fact that the multiplicity of the characteristic root λ_1 in that example is more than 1. Without proof we state the general result.

B.2.2. Theorem. To each characteristic root λ_i with multiplicity m_i, the m_i functions

   y_{i,k}[n] = n^k λ_i^n,   (k = 0, 1, . . . , m_i - 1)

are solutions of the homogeneous equation. These solutions y_{i,k}[n] are called basis solutions. Furthermore, y[n] is a solution of the homogeneous equation if and only if it is a linear combination of the basis solutions,

   y[n] = Σ_{i,k} μ_{i,k} y_{i,k}[n],   μ_{i,k} ∈ C.   (B.12)
B.2.3. Example.

1. The characteristic equation of

      y[n] - 2y[n-1] + y[n-2] = 0

   is 1 - 2λ^{-1} + λ^{-2} = λ^{-2}(λ^2 - 2λ + 1) = 0. It has one root λ_1 = 1 with multiplicity 2. The basis solutions hence are

      y_{1,1}[n] = 1^n = 1,   y_{1,2}[n] = n 1^n = n.

   The general solution of y[n] - 2y[n-1] + y[n-2] = 0 is y[n] = μ_{1,1} + μ_{1,2} n, with μ_{1,1}, μ_{1,2} ∈ C.

2. The characteristic equation of

      y[n] - 5y[n-1] + 8y[n-2] - 4y[n-3] = 0   (B.13)

   is λ^{-3}(λ^3 - 5λ^2 + 8λ - 4) = 0. Since

      λ^3 - 5λ^2 + 8λ - 4 = (λ - 1)(λ - 2)^2

   we obtain as basis solutions

      y_{1,1}[n] = 1^n = 1,   y_{2,1}[n] = 2^n,   y_{2,2}[n] = n 2^n.
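The double root λ = 2 in part 2 can be checked directly; the short sketch below (plain Python; the list name is ours) confirms that the basis solutions 1, 2^n and n 2^n all satisfy y[n] - 5y[n-1] + 8y[n-2] - 4y[n-3] = 0.

```python
# basis solutions from Example B.2.3, part 2
basis = [lambda n: 1, lambda n: 2 ** n, lambda n: n * 2 ** n]

for y in basis:
    for n in range(3, 16):
        # integer arithmetic, so the recurrence must hold exactly
        assert y(n) - 5 * y(n - 1) + 8 * y(n - 2) - 4 * y(n - 3) == 0
```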
Particular solutions

Up to now we considered only the homogeneous version of (B.9), that is to say the case that u[n] is the zero function. Suppose now that for a given (nonzero) u[n] we found one solution y_part[n] of (B.9). Then, as in the continuous-time case, it may be shown that the general solution y[n] of (B.9) is

   y[n] = y_part[n] + y_hom[n],

where y_hom[n] is any solution of the associated homogeneous equation. In order to find all solutions of (B.9) it therefore suffices to find one solution y_part[n]. All others then follow by adding the general solution of the homogeneous equation. A solution y_part[n] of the difference equation is called a particular solution.

In general it is difficult to find an explicit formula for a particular solution, with a few exceptions.
B.2.4. Example (Exponential inputs). For exponential inputs u[n] = b^n we contemplate a particular solution of the form

   y_part[n] = A b^n,   for some A ∈ C.

Substituting this candidate into the left-hand and right-hand sides of (B.9) gives

   y_part[n] + q_1 y_part[n-1] + ⋯ + q_N y_part[n-N] = A b^n (1 + q_1 b^{-1} + ⋯ + q_N b^{-N}),
   p_0 b^n + p_1 b^{n-1} + ⋯ + p_N b^{n-N} = b^n (p_0 + p_1 b^{-1} + ⋯ + p_N b^{-N}).

Equating the two sides yields A,

   A = (p_0 + p_1 b^{-1} + p_2 b^{-2} + ⋯ + p_N b^{-N}) / (1 + q_1 b^{-1} + q_2 b^{-2} + ⋯ + q_N b^{-N}) = H(b).

For A to exist we shall need to assume that b is not a characteristic root, otherwise the above denominator is zero.

Consider the difference equation

   y[n] - 4y[n-2] = u[n-2]

with input u[n] = b^n. Then if b is not a characteristic root, b ≠ ±2, we obtain as particular solution

   y_part[n] = (b^{-2}/(1 - 4b^{-2})) b^n = (1/(b^2 - 4)) b^n.
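As a numerical spot-check of this last example (a sketch in plain Python; the names are ours), y_part[n] = b^n/(b^2 - 4) should satisfy y[n] - 4y[n-2] = u[n-2] for u[n] = b^n whenever b is not a characteristic root.

```python
def y_part(n, b):
    # particular solution of y[n] - 4 y[n-2] = u[n-2] for u[n] = b^n, b != +-2
    return b ** n / (b ** 2 - 4)

b = 3.0
for n in range(2, 15):
    # residual of the difference equation with u[n-2] = b^(n-2)
    assert abs(y_part(n, b) - 4 * y_part(n - 2, b) - b ** (n - 2)) < 1e-6
```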
If we know some initial conditions and we are only interested in solutions for positive time (n ≥ 0), then the one-sided z-transform comes in handy; see Chapter 8.

C Selected tables

For easy reference, the frequently needed tables that are scattered throughout the notes are collected in this appendix.
Property          Time domain: f(t)              Frequency domain: f_k
                                                 f_k = (1/T) ∫_{-T/2}^{T/2} f(t) e^{-jkω_0 t} dt
Linearity         f(t) + g(t)                    f_k + g_k
Time-shift        f(t - τ), (τ ∈ R)              e^{-jkω_0 τ} f_k
Time-reversal     f(-t)                          f_{-k}
Conjugation       f*(t)                          f*_{-k}
Frequency-shift   e^{jnω_0 t} f(t), (n ∈ Z)      f_{k-n}
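As an illustration of the time-shift rule in the table above, the sketch below (plain Python with midpoint-rule integration; all names and the chosen test signal are ours) computes Fourier coefficients numerically for a T-periodic signal and checks that shifting by τ multiplies f_k by e^{-jkω_0 τ}.

```python
import cmath
import math

T = 2.0
w0 = 2 * math.pi / T

def coeff(f, k, steps=4000):
    # f_k = (1/T) * integral over one period of f(t) e^{-j k w0 t} dt
    h = T / steps
    s = 0j
    for i in range(steps):
        t = -T / 2 + (i + 0.5) * h
        s += f(t) * cmath.exp(-1j * k * w0 * t) * h
    return s / T

f = lambda t: math.cos(w0 * t) + 0.5 * math.sin(2 * w0 * t)
tau = 0.3
g = lambda t: f(t - tau)   # time-shifted signal

for k in (-2, -1, 0, 1, 2):
    lhs = coeff(g, k)
    rhs = cmath.exp(-1j * k * w0 * tau) * coeff(f, k)
    assert abs(lhs - rhs) < 1e-6
```

Because the integrand is smooth and periodic, the midpoint rule converges very fast here, so a few thousand samples suffice.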
Property                  Time domain                        Freq. domain                 Condition
                          f(t) = (1/2π) ∫ F(ω) e^{jωt} dω    F(ω) = ∫ f(t) e^{-jωt} dt
Linearity                 a_1 f_1(t) + a_2 f_2(t)            a_1 F_1(ω) + a_2 F_2(ω)
Reciprocity               F(t)                               2π f(-ω)
Conjugation               f*(t)                              F*(-ω)
Time-scaling              f(at)                              (1/|a|) F(ω/a)               a ∈ R, a ≠ 0
Time-shift                f(t - τ)                           F(ω) e^{-jωτ}
Frequency-shift           f(t) e^{jω_0 t}                    F(ω - ω_0)
Differentiation (time)    f′(t)                              jω F(ω)                      lim_{|t|→∞} f(t) = 0
Integration (time)        ∫_{-∞}^{t} f(τ) dτ                 F(ω)/(jω)                    F(0) = 0
Differentiation (freq.)   -jt f(t)                           F′(ω)
f(t)                                         F(ω)                        Condition

rect_a(t) = { 1 if |t| < a/2,                2 sin(aω/2)/ω               a > 0
              0 if |t| > a/2 }

trian_a(t) = { 1 - |t|/a if |t| ≤ a,         4 sin²(aω/2)/(aω²)          a ∈ R, a > 0
               0 if |t| > a }

e^{-a|t|}                                    2a/(a² + ω²)                Re a > 0

(t^n/n!) e^{-at} 1(t)                        1/(a + jω)^{n+1}            Re a > 0

-(t^n/n!) e^{-at} 1(-t)                      1/(a + jω)^{n+1}            Re a < 0

e^{-at²}                                     √(π/a) e^{-ω²/4a}           a ∈ R, a > 0

sin(at/2)/(πt)                               rect_a(ω)                   a ∈ R, a > 0

"lim"_{a→∞} sin(at)/(πt) = δ(t).
"lim"_{ε↓0} (1/(ε√π)) e^{-t²/ε²} = δ(t).
"lim"_{a→∞} e^{jat} = 0.
"lim"_{a→∞} cos(at)/(πt) = 0.
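The pair e^{-a|t|} ↔ 2a/(a² + ω²) from the table is easy to confirm numerically; the sketch below (plain Python; the truncation length and step count are our choices) approximates the transform integral directly.

```python
import cmath
import math

def ft_numeric(a, w, L=30.0, steps=20000):
    # F(w) = integral of e^{-a|t|} e^{-j w t} dt, truncated to [-L, L]
    h = 2 * L / steps
    s = 0j
    for i in range(steps):
        t = -L + (i + 0.5) * h
        s += math.exp(-a * abs(t)) * cmath.exp(-1j * w * t) * h
    return s

a = 1.0
for w in (0.0, 0.5, 2.0):
    exact = 2 * a / (a ** 2 + w ** 2)
    assert abs(ft_numeric(a, w) - exact) < 1e-3
```

The exponential decay makes the truncation error negligible, so even a plain midpoint rule matches the closed form to three or four digits.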
Property                                                     Condition

Sifting        ∫ δ(t - b) f(t) dt = f(b)                     f(t) continuous at t = b
               f(t) δ(t - b) = f(b) δ(t - b)                 f(t) continuous at t = b
Convolution    (f ∗ δ)(t) = f(t)
Scaling        δ(at - b) = (1/|a|) δ(t - b/a)                a ≠ 0
               ∫_{-∞}^{t} δ(τ) dτ = 1(t)

Table 4.1: Properties and rules of the delta function. (Page 73.)
f(t)              F(ω)

δ(t)              1
1                 2π δ(ω)
δ(t - b)          e^{-jωb}
e^{jω_0 t}        2π δ(ω - ω_0)
cos(ω_0 t)        π(δ(ω - ω_0) + δ(ω + ω_0))
sgn(t)            2/(jω)
1(t)              1/(jω) + π δ(ω)
Property              f(t)                      F(s) = ∫_0^∞ f(t) e^{-st} dt       Condition

Linearity             a_1 f_1(t) + a_2 f_2(t)   a_1 F_1(s) + a_2 F_2(s)            Re s > max(σ_1, σ_2)
Time-scaling          f(at)                     (1/a) F(s/a)                       a > 0, Re s > aσ
Time-shift            f(t - t_0) 1(t - t_0)     F(s) e^{-st_0}                     t_0 > 0, Re s > σ
Shift in s-domain     f(t) e^{s_0 t}            F(s - s_0)                         Re s > Re s_0 + σ
Differentiation (t)   f′(t)                     s F(s) - f(0)                      Re s > σ
Integration (t)       ∫_0^t f(τ) dτ             F(s)/s                             Re s > max(0, σ)
Differentiation (s)   t f(t)                    -F′(s)                             Re s > σ

f(t), (t > 0)                      F(s) = ∫_0^∞ f(t) e^{-st} dt      Region of conv.

e^{at}                             1/(s - a)                          Re s > Re a
t^n/n!  (n = 0, 1, . . . )         1/s^{n+1}                          Re s > 0
(t^n/n!) e^{at}  (n = 0, 1, . . . ) 1/(s - a)^{n+1}                   Re s > Re(a)
cos(bt)                            s/(s² + b²)                        Re s > 0
sin(bt)                            b/(s² + b²)                        Re s > 0
e^{at} cos(bt)                     (s - a)/((s - a)² + b²)            Re s > Re a
e^{at} sin(bt)                     b/((s - a)² + b²)                  Re s > Re a
δ(t)                               1                                  s ∈ C
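As a spot-check of the Laplace table, the sketch below (plain Python; the truncation length and step count are our choices) integrates e^{at} cos(bt) e^{-st} numerically for real s inside the region of convergence and compares with (s - a)/((s - a)² + b²).

```python
import math

def laplace_numeric(f, s, T=40.0, steps=100000):
    # truncated integral of f(t) e^{-s t} over [0, T], midpoint rule
    h = T / steps
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h) * h
               for i in range(steps))

a, b = -0.5, 2.0
f = lambda t: math.exp(a * t) * math.cos(b * t)

for s in (0.5, 1.0, 2.0):
    exact = (s - a) / ((s - a) ** 2 + b ** 2)
    assert abs(laplace_numeric(f, s) - exact) < 1e-4
```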
Property            f[n]                        F_+(z) = Σ_{n=0}^∞ f[n] z^{-n}                Region of conv.

Linearity           a_1 f_1[n] + a_2 f_2[n]     a_1 F_{1+}(z) + a_2 F_{2+}(z)                 |z| > R
Conjugation         f*[n]                       (F_+(z*))*                                    |z| > R
Time-shift          f[n - 1]                    z^{-1} F_+(z) + f[-1]                         |z| > R
                    f[n - 2]                    z^{-2} F_+(z) + z^{-1} f[-1] + f[-2]          |z| > R
                    f[n + 1]                    z F_+(z) - z f[0]                             |z| > R
                    f[n + 2]                    z² F_+(z) - z² f[0] - z f[1]                  |z| > R
Scaling (z-dom.)    a^n f[n]                    F_+(z/a)                                      |z| > |a| R
Differentiation     n f[n]                      -z F_+′(z)                                    |z| > R
Convolution         Σ_{l=0}^{n} f[l] g[n-l]     F_+(z) G_+(z)                                 |z| > R

f[n]                       F(z)                   Region of convergence

δ[n - k]                   z^{-k}                 z ≠ 0 (or every z if k ≤ 0)
a^n 1[n]                   z/(z - a)              |z| > |a|
(n choose k) a^n 1[n]      a^k z/(z - a)^{k+1}    |z| > |a|
a^{n-1} 1[n - 1]           1/(z - a)              |z| > |a|
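The pair a^n 1[n] ↔ z/(z - a) in the table above can be verified by partial summation of the defining series; the sketch below (plain Python; names ours) does exactly that for points inside the region of convergence |z| > |a|.

```python
def z_transform_partial(f, z, N=200):
    # partial sum of F(z) = sum_{n >= 0} f[n] z^{-n}
    return sum(f(n) * z ** (-n) for n in range(N))

a = 0.5
f = lambda n: a ** n          # the signal a^n 1[n]

for z in (1.0, 2.0, -3.0):    # all satisfy |z| > |a|
    exact = z / (z - a)
    assert abs(z_transform_partial(f, z) - exact) < 1e-9
```

With |a/z| ≤ 1/2 the truncated tail is below 2^{-200}, so 200 terms already match the closed form to machine precision.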
D Exam examples

(D.1)

t ≥ 0.   (D.2)

(g) Determine the output y(t) for t > 0, with initial conditions y(0) = 0, y′(0) = α, where α is some constant.

2. Consider the difference equation

   y[n] + (5/6) y[n-1] + (1/6) y[n-2] = u[n],   n ∈ Z.   (D.3)

(c) Determine the impulse response of the initially at rest system (D.3).

(d) Is the initially at rest system (D.3) BIBO-stable?

Suppose now that

   u[n] = (1/2)^n 1[n],   n ∈ Z.

(e) Determine the output y[n] for n ≥ 0 with initial conditions y[-2] = 3 and y[-1] = 0.

3. (a) Let T = 2. Determine the Fourier coefficients f_k of the T-periodic signal f(t) that on [-1, 1] equals

      f(t) = 1 - |t|   (-1 ≤ t < 1).

   (b) Compute Σ_{m=0}^∞ 1/(2m+1)².

4. Suppose f(t) and g(t) are continuously differentiable functions with derivatives f′(t) and g′(t) respectively. What is the generalized derivative of f(t) 1(t) + g(t) 1(-t)?

5. In Problem 1g you determined an output y(t) for all t > 0. What is this output y(t) for all t ∈ R?

6.

problem: 1(a) 1(b) 1(c) 1(d) 1(e) 1(f) 1(g) 2(a) 2(b) 2(c) 2(d) 2(e) 3(a) 3(b) 4 5 6(a) 6(b)
points:

The grade is proportional to the sum of points earned. Total number of points: 42.
(D.4)

(D.5)

(g) Determine the output y(t) for t > 0, with initial conditions y(0) = 1, y′(0) = 0.

2. Consider the difference equation

   y[n] - y[n-2] = u[n] - u[n-1],   n ∈ Z.   (D.6)

(e) Determine the output y[n] for n ≥ 0 with initial conditions y[-2] = 1 and y[-1] = 0.

3. (a) Let T = 2. Determine the Fourier coefficients f_k of the T-periodic signal f(t) that on [0, 2) equals

      f(t) = { 1 if 0 ≤ t < 1,  -1 if 1 ≤ t < 2 }

problem: 1(a) 1(b) 1(c) 1(d) 1(e) 1(f) 1(g) 2(a) 2(b) 2(c) 2(d) 2(e) 3(a) 3(b) 3(c) 4(a) 4(b) 4(c) 5(a) 5(b)
points:

The grade is proportional to the sum of points earned. Total number of points: 42.

(0 ≤ t ≤ 2), with Fourier coefficients

   f_k = { 1/2 if k = 0,  0 if k even, k ≠ 0,  -2/(k²π²) if k odd }

4. Let f(t) = 1(t - 1) trian_4(t). Determine the generalized derivative f′(t) and sketch f(t) and f′(t).

5. Let τ ∈ R. Consider the system

   y(t) = ∫_{t+τ}^{t+τ+1} u(σ) dσ.

(a)
(b)
(c)
(d)

   u(t) = Σ_{k∈Z, k≠0} (j/(kω_0)) e^{jkω_0 t}   (D.7)
E Solutions

Chapter 1

1.1 -
1.2 No.
1.3 -
1.4 The amplitude is √2.
1.5 -
1.6 T/4
1.7 -
1.8 P_f = 4
1.9 (a) -  (b) P_f = …
1.10 (a) E_f = 4T.  (b) E_f = 7/6

(1/2)(cos(θ/2) - cos((N+1/2)θ))/sin(θ/2), or, equivalently, sin(Nθ/2) sin((N+1)θ/2)/sin(θ/2)

1.16 -
Chapter 2

2.1 sin²(ω_0 t + π/3) = 1/2 + (1/4) cos(2ω_0 t) + (√3/4) sin(2ω_0 t)

(a) -
(b) c_k = { 0 if k = 0 or k even,  1/(πk) if k = 4l+1 (l ∈ Z),  -1/(πk) if k = 4l+3 (l ∈ Z) }
(c) -
(d) 1/2 + Σ_{m=0}^∞ ((-1)^{m+1}/((2m+1)π)) cos((2m+1)ω_0 t)
(e) f(t) = 1/2 + Σ_{m=0}^∞ ((-1)^{m+1}/((2m+1)π)) cos((2m+1)ω_0 t)

2.5 (a) (k⁴ + 15)/(k⁶ - 5k⁴ + 19k² + 25)
(c) f_k = 1/(k² + 1)
(d) { 0 if k even,  2(-1)^k/(πk) if k odd }

2.6 (a) f_k = { 2(-1)^k/k² if k ≠ 0,  π²/3 if k = 0 }. The Fourier series hence is
    π²/3 + Σ_{k≠0} (2(-1)^k/k²) e^{jkω_0 t}
(b) π²/3 + Σ_{k=1}^∞ (4(-1)^k/k²) cos(kω_0 t)
(c) -(4/9) cos(3ω_0 t)
(d) |f_k| = { 2/k² if k ≠ 0,  π²/3 if k = 0 }
(e) -

2.7 f_k = { 4(-1)^k e^{-3jk}/k² if k ≠ 0,  π²/12 if k = 0 }

2.8 f_k = { 4(-1)^k/(k-4)² if k ≠ 4,  π²/3 if k = 4 }

2.9 f_k = √2(-1)^k sin(2√2π)/(π(2√2 - k)) + 2(-1)^k cos(2√2π)/(π(2√2 - k)²)

2.11 f_k = { 0 if k even,  1 if k = 4l+1 (l ∈ Z),  -1 if k = 4l+3 (l ∈ Z) }

2.12 (a) -  (b) 2T²/π⁴

2.13 (a) (1/4) f_{k-2} + (1/2) f_k + (1/4) f_{k+2}

(a) -
(b) g_0 = f_0 and g_k = (sin(kω_0 a/2)/(kω_0 a/2)) f_k
(c) In both cases g_0 = f_0, and g_k = (sin(ω_0 ka/2)/(ω_0 ka/2)) f_k.

2.15 (a) P_f = 5/2  (b) P_f = 1/2

2.19 π⁴/90

2.24 Σ_{k=1}^∞ sin(kω_0 t)/(kω_0)
Chapter 3

3.1 F(ω) = (1/(1 - jω))(e^{-6(jω-1)} - e^{-5(jω-1)}). f(t) is absolutely integrable, but F(ω) is not absolutely integrable.

3.2 -

3.3 (a) (1/2j)(F(ω - ω_0) - F(ω + ω_0)),
(b) (1/|a|) F(ω/a),
(c) (1/2)(F(ω) + F*(-ω)),
(d) (1/2j)(F(ω) - F*(-ω)),
(e) (1/2j)(1/(a + j(ω - ω_0)) - 1/(a + j(ω + ω_0))),
(f) πe^{-|ω|},
(g) πe^{-jω} e^{-|ω|}.

3.4 (a) (1/3) e^{-2jω/3} F(ω/3),
(b) F(ω + 2) e^{-(3/2)j(ω+2)},
(c) (1/j) F′(ω),
(d) 2F(2ω),
(e) e^{-jω} F(-ω),
(f) (1/2)F(ω) + (1/4)F(ω - 2ω_0) + (1/4)F(ω + 2ω_0).

3.5 (ωT sin(ωT) - 2 sin²(ωT/2))/(ω²T²)

3.6 (a) (2 sin(at)/(πt)) cos(ω_0 t),
(b) (1/3)(e^{-t} + 2e^{-4t}) 1(t),
(c) (-9e^{-t} + 9te^{-t} + 9e^{-2t}) 1(t).

3.7 (a) (f ∗ g)(t) = (1/(a+b))(e^{at} 1(-t) + e^{-bt} 1(t)),
(b) (f ∗ g)(t) = e^{-a(t-1)}/a,
(c) (f ∗ g)(t) = sin(t)/(πt),
(d) (f ∗ g)(t) = (t + 1) rect_2(t) + 1(t - 1).

3.8 (a) Yes,
(b) f[n] = { (2 + 4√2)/(πn) if n = 4l,  8/(πn²) if n = 4l+1,  (2 - 4√2)/(πn) if n = 4l+2,  8/(πn²) if n = 4l+3 },
(c) 2/3.

3.9 (a) a < …

3.10 (a) (1/(jω + 5b)) e^{-4jω/5},
(b) 2/(jω + b)³,
(c) 1/(j(ω - 2) + b),
(d) (1/2)(1/(j(ω - 4) + b)) + (1/2)(1/(j(ω + 4) + b)),
(e) 1/(jω + 2b),
(f) 1/(jω + b)²,
(g) sgn(b)/(jω + 2b),
(h) { 2πe^{bω} 1(-ω) if b > 0,  -2πe^{bω} 1(ω) if b < 0 }.

3.11 (a) 2πδ(t),
(b) (2 sin(t) cos(t))/(πt²) - (2 sin(t))/(πt³),
(c) (1/π) sin(4(t - t_0))/(t - t_0),  (t sin(t))/(π(t² - 1)).

3.12 (a) 2 sin(aω/2)/ω,
(b) (2 sin(aω/2)/(aω)) F(ω).

3.13 -
Chapter 4

4.1 (a) -  (b) -

4.2 (a) f(t+2)/2,
(b) (1/2) f(1/2) δ(t - 1/2).

4.3 (a) (π/j)(δ(ω - ω_0) - δ(ω + ω_0)),
(b) 1/(j(ω - ω_0)) + πδ(ω - ω_0),
(c) jω/(ω_0² - ω²) + (π/2)δ(ω - ω_0) + (π/2)δ(ω + ω_0),
(d) ω_0/(ω_0² - ω²) + (π/2j)δ(ω - ω_0) - (π/2j)δ(ω + ω_0),
(e) 2/(j(ω - ω_0)) + 2πδ(ω - ω_0).

4.5 (e^{-jω} - 1)/ω²

4.6 (a) 2 sin(ω)/(jω²),
(b) rect_2(ω)/(jω) + πδ(ω).

4.7 (a) sgn(t)(1 - e^{-|t|}),
(b) (1/2)(sgn(t+1) + sgn(t-1)),
(c) (j/2) e^{-jt} sgn(t).

4.8 -
4.9 (b) sgn(t + 1/2)(1 - e^{-|t+1/2|}) + 1.
Chapter 5

5.1 (a) -  (b) H(ω) = a + be^{-jω},  (c) h(t) = aδ(t) + bδ(t - 1),  (d) Yes,  (e) Yes.
5.2 (a) -  (b) H(ω) = 2 sin(ω)/ω,  (c) No,  (d) Yes,  (e) y(t) = 2 trian_2(t).
5.3 -
5.4 (a) No,  (b) Yes,  (c) Yes,  (d) No.
5.5 (a) -  (b) No.
5.6 (a) Yes,  (b) H(ω) = 1/(1 + jω)²,  (c) Yes,  (d) y(t) = (8/5) cos(t) + (4/5) sin(t).
5.10 (a) h(t) = 2 sin²(t/2)/(πt²),  (b) y(t) = 1/2 + (2/3) cos(2t/3).
5.11 -
Chapter 6

6.1 (a) e^{-2t}
(b) δ(t) - 2e^{-2t}
(c) (1 - 2t)e^{-2t}
(d) δ(t) + (4t² - 12t + 6)e^{-2t}
(e) sin(t)e^{-t}
(f) (cos(t) - sin(t))e^{-t}

6.2 -  6.3 -

6.4 (a) (ω_0/(s² + ω_0²)) · (1 + e^{-πs/ω_0})/(1 - e^{-πs/ω_0}),
(b) e^{-s}/(s(1 - e^{-s})),
(d) e^{-at} 1(t - t_0).

6.5 (a) Σ_{n=0}^∞ rect_a(t - nb - a/2) = Σ_{n=0}^∞ (1(t - nb) - 1(t - nb - a)),
(b) Σ_{n=0}^∞ e^{-a(t-nT)} 1(t - nT).

6.6 (1/T) ∫_0^T f(t) dt

6.7 (n! m!/(n+m+1)!) t^{n+m+1} 1(t).

6.8 F(s) exists iff Re s ≥ 0, except s = 0, for which F(s) exists iff α > 1.
Chapter 7

7.1 (a) -  (b) -  (c) (1/3)e^{2t} + (2/3)e^{-t}

7.2 (a) e^{At} = [ (1/3)e^{2t} + (2/3)e^{-t},  (1/3)e^{2t} - (1/3)e^{-t} ;
               (2/3)e^{2t} - (2/3)e^{-t},  (2/3)e^{2t} + (1/3)e^{-t} ]
(b) x(t) = (1/9) [ 1 - 4e^{-2(t-2)} + (6t - 7)e^{-(t-2)} ;  4e^{-2(t-2)} - (3t - 2)e^{-(t-2)} ]   for t > 2

7.3 e^{At} = [ e^t,  te^t ;  0,  e^t ].

7.4 (a) H(s) = (s - 1)/(s - 2),
(b) δ(t) + e^{2t} 1(t).

7.5 (a) H(s) = s/((s + 1)(s - 2)),
(b) No,
(c) h(t) = (1/3)(e^{-t} + 2e^{2t}) 1(t).

7.6 -

7.7 (a) e^{-t}
(b) e^{jt}, e^{j√2 t}, e^{-j√2 t}
(c) e^{-t}, te^{-t}, t²e^{-t}

7.8 (a) e^{t/2}, e^{-t/2}
(b) (s/2 - 1/2)/(s² - 1/4)
(c) No
(d) ẋ(t) = [ 0, 1 ; 1/4, 0 ] x(t) + [ 1/2 ; -1/2 ] u(t)
(e) y(t) = x_1(t)
(f) 2e^{-t/2} + 2t - 2,  (t ≥ 0)

7.9 (a) e^{-4t}, 1
(b) -
(c) H(s) = 1/s
(d) No.
(e) t + e^{-4t}

7.10 (a) e^{3t}, e^{-t}
(b) ẋ(t) = [ 2, 1 ; 3, 0 ] x(t) + [ 1 ; 0 ] u(t)
(c) y(t) = [ 1, 0 ] x(t)
(d) s/(s² - 2s - 3)
(e) s/(s² - 2s - 3)
Chapter 8

8.1 bz/(z - b)²;  z/(z - 1), |z| > 1

8.3 y_1[n] = (1/2 + (1/2)√5)^n,  y_2[n] = (1/2 - (1/2)√5)^n

8.4 (a) H(z) = 1/z^N, z ≠ 0, h[n] = δ[n - N]
(b) H(z) = z²/(z² - z - 5/16)
(c) H(z) = (z² + z)/(z² - 1/4), |z| > 1/2, h[n] = ((3/2)(1/2)^n - (1/2)(-1/2)^n) 1[n]

8.5 -

8.6 (a) Yes  (b) No  (c) Yes  (d) Yes!  (e) BIBO-stable iff α = 1

8.7 (a) (-2)^n, (1/2)^n
(b) -
(c) (z + 1)/(z² + (3/2)z - 1);  (-4/3 + (4/15)(-2)^n + (6/5)(1/2)^n) 1[n]
(d) No
(e) δ[n] + (7/5)(-2)^n + (11/10)(1/2)^n

8.8 (a) (3/2)^n, (1/2)^n
(b) (1/2)^{n-1} 1[n - 1]
(c) H(z) = 1/(z - 1/2), |z| > 1/2
(d) Yes
(e) (9/8)(1/2)^n - (5/8)(3/2)^n  (n ≥ 0)

8.9 (a) (-1)^n, 1^n = 1
(b) h[n] = (1/2 + (1/2)(-1)^n) 1[n]
(c) No

8.10 See Problem 8.3: (1/2 + (1/10)√5)(1/2 + (1/2)√5)^n + (1/2 - (1/10)√5)(1/2 - (1/2)√5)^n

8.11 (a) (1/(a² + 1))(a^{n+2} - a cos(πn/2) + a sin(πn/2)) 1[n]
(b) ((1/2)^n - 2(1/3)^n) 1[n]

8.13 -
8.14 -
Index
F , 43
L, 108
T , 83
Z, 163
Z+ , 174
ω-domain, 41
k-th harmonic, 24
z-domain, 163
z-transform, 161
mapping, 163
one-sided, 174
time-reversal, 165
unilateral, 174
T -periodic, 7
delta-train, 160
delta function, 67, 75
continuous-time, 67
discrete-time, 159
differentiable function, 4
differentiation rule
z-transform, 166
Fourier transform, 45
Laplace transform, 110
bandlimited, 57
bandwidth, 57
basis solution
continuous-time, 194
discrete-time, 199
BIBO-stable, 181
Butterworth filter, 153
causal, 52, 89
digital filter, 83
discrete-time
BIBO-stable, 181
causal system, 177
delta-function, 159
filter, 176
impulse response, 178
initially at rest system, 179
linear system, 176
LTI system, 177
non-anticipating system, 177
signal, 1
system, 176
time-invariant system, 177
transfer function, 178
unit pulse, 159
unit step, 159
discrete delta function, 159
distortionless, 88
energy, 9
energy content, 9
energy signal, 9
energy spectrum, 56
even signal, 25
exponentially bounded, 106
filter
continuous time, 83
discrete-time, 176
ideal, 98
final value theorem, 114
finite duration, 40
finite impulse response, 178
FIR filter, 178
Fourier coefficient, 16
Fourier integral theorem, 40
Fourier series, 16
complex, 19
real, 24
Fourier transform, 41, 43
generalized, 79
inverse, 43
frequency
fundamental, 15
frequency characteristic, 83
frequency domain, 26
frequency response, 95
frequency spectrum, 41
function
delta, 75
generalized, 75
tempered test, 79
test, 75
fundamental frequency, 15
gain, 125
generalized derivative, 71
generalized Fourier transform, 79
generalized function, 67, 75
geometric series, 10
ratio, 10
Gibbs phenomenon, 32
harmonic
k-th, 24
harmonic signal, 3
homogeneous equation
continuous-time, 193
discrete-time, 198
ideal filter, 98
IIR filter, 178
impedance, 146
impulse response
continuous-time system, 87
discrete-time system, 178
impulse train, 160
initially at rest, 179
continuous-time signal, 139
continuous-time system, 139
discrete-time signal, 162
discrete-time system, 179
initial value theorem, 171
input-state-output representation, 121
integrator, 125
integrator system, 88
Laplace transform, 105, 106, 108
one-sided, 105
two-sided, 105
Laurent series, 161
linearity
continuous-time system, 84
discrete-time system, 176
linear time-invariant, 84
line spectrum, 26
LTI system
Index
continuous-time, 84
discrete-time, 177
matrix exponential, 129
modulation theorem, 45
non-anticipating
continuous-time system, 89
discrete-time system, 177
non-proper, 192
Nyquist rate, 57
odd signal, 25
one-sided z-transform, 174
one-sided Laplace transform, 105
output equation, 121
Parseval theorem
T -periodic signals, 31
non-periodic case, 55
partial fraction expansion, 187
particular solution
continuous-time, 196
discrete-time, 200
periodic signal, 7
convolution, 28
phase, 6
phase transfer, 95
piecewise smooth, 6
power, 9
power signal, 9
primitive, 5
proper, 188, 191
strictly, 50
pulse
rectangular, 8
triangular, 8
unit, 67, 159
ratio, 10
rational function, 50
real Fourier series, 24
real harmonic signal, 6
real signal, 2
real system, 90
reciprocity, 44
rectangular pulse, 8
region of convergence
z-transform, 162