Collection Editor:
Marco F. Duarte
Authors:
Thanos Antoulas
Richard Baraniuk
Dan Calderon
Marco F. Duarte
Catherine Elder
Natesh Ganesh
Michael Haag
Don Johnson
Stephen Kruzick
Matthew Moravec
Justin Romberg
Louis Scharf
Melissa Selik
JP Slavinsky
Dante Soares
Online:
< http://legacy.cnx.org/content/col11557/1.10/ >
OpenStax-CNX
This selection and arrangement of content as a collection is copyrighted by Marco F. Duarte. It is licensed under the
Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).
Collection structure revised: September 13, 2014
PDF generated: December 6, 2014
For copyright and attribution information for the modules contained in this collection, see p. 198.
Table of Contents

1 Review of Prerequisites: Complex Numbers
1.1 Geometry of Complex Numbers
1.2 Complex Numbers: Algebra of Complex Numbers
1.3 Representing Complex Numbers in a Vector Space

2 Continuous-Time Signals
2.1 Signal Classifications and Properties
2.2 Common Continuous Time Signals
2.3 Signal Operations
2.4 Energy and Power of Continuous-Time Signals
2.5 Continuous Time Impulse Function
2.6 Continuous-Time Complex Exponential

3 Introduction to Systems
3.1 Introduction to Systems
3.2 System Classifications and Properties
3.3 Linear Time Invariant Systems

4 Time Domain Analysis of Continuous Time Systems
4.1 Continuous Time Systems
4.2 Continuous Time Impulse Response
4.3 Continuous-Time Convolution
4.4 Properties of Continuous Time Convolution
4.5 Causality and Stability of Continuous-Time Linear Time-Invariant Systems

5 Introduction to Fourier Analysis
5.1 Introduction to Fourier Analysis
5.2 Continuous Time Periodic Signals
5.3 Eigenfunctions of Continuous-Time LTI Systems
5.4 Continuous Time Fourier Series (CTFS)

6 Continuous Time Fourier Transform (CTFT)
6.1 Continuous Time Aperiodic Signals
6.2 Continuous Time Fourier Transform (CTFT)
6.3 Properties of the CTFT
6.4 Common Fourier Transforms
6.5 Continuous Time Convolution and the CTFT
6.6 Frequency-Domain Analysis of Linear Time-Invariant Systems
Solutions

7 Discrete-Time Signals
7.1 Common Discrete Time Signals
7.2 Energy and Power of Discrete-Time Signals
7.3 Discrete-Time Signal Operations
7.4 Discrete Time Impulse Function
7.5 Discrete Time Complex Exponential

8 Time Domain Analysis of Discrete Time Systems
8.1 Discrete Time Systems
8.2 Discrete Time Impulse Response
8.3 Discrete-Time Convolution
Chapter 1
Review of Prerequisites: Complex Numbers
1.1 Geometry of Complex Numbers
note: The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.
The most fundamental new idea in the study of complex numbers is the imaginary number j. The imaginary number j is defined to be the square root of −1:

j = √−1   (1.1)

j² = −1.   (1.2)

The imaginary number j is used to build complex numbers z of the form

z = x + jy.   (1.3)

We say that the complex number z = x + jy has Cartesian coordinates x and y. The real part of z is x, and the imaginary part of z is y:

Re[z] = x   (1.4)

Im[z] = y.   (1.5)

In MATLAB, the real part of z is denoted by real(z), and the imaginary part is denoted by imag(z).

The radius and angle of the point z = x + jy in the complex plane are

r = √(x² + y²)   (1.6)

θ = tan⁻¹(y/x).   (1.7)

See Figure 1.1. In MATLAB, the radius r is denoted by abs(z), and the angle θ is denoted by angle(z).

Figure 1.1: Cartesian and polar coordinates of the complex number z.

The Cartesian coordinates x and y may be recovered from the radius r and angle θ as follows:

x = r cos θ   (1.8)

y = r sin θ.   (1.9)

The complex number z = x + jy may therefore be written as

z = x + jy = r cos θ + j r sin θ = r (cos θ + j sin θ).   (1.10)

The complex number cos θ + j sin θ is, itself, a number that may be represented on the complex plane with Cartesian coordinates (cos θ, sin θ). This is illustrated in Figure 1.2. The radius and angle to the point z = cos θ + j sin θ are 1 and θ. Can you see why?

Figure 1.2: The complex number cos θ + j sin θ on the unit circle.

The complex number cos θ + j sin θ is given the special symbol e^{jθ}:

e^{jθ} = cos θ + j sin θ.   (1.11)
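Euler's relation is easy to check numerically. The following Python sketch (Python standing in for the MATLAB used in this course; this snippet is illustrative and not part of the original materials) confirms that e^{jθ} has real part cos θ, imaginary part sin θ, radius 1, and angle θ:

```python
import cmath
import math

theta = 0.7  # an arbitrary angle in radians
z = cmath.exp(1j * theta)

# Euler's formula: e^{j*theta} = cos(theta) + j*sin(theta)
assert abs(z.real - math.cos(theta)) < 1e-12
assert abs(z.imag - math.sin(theta)) < 1e-12

# The point lies on the unit circle: radius 1, angle theta
assert abs(abs(z) - 1.0) < 1e-12
assert abs(cmath.phase(z) - theta) < 1e-12
```

In MATLAB, `abs(z)` and `angle(z)` play the roles of `abs(z)` and `cmath.phase(z)` here.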
As illustrated in Figure 1.2, the complex number e^{jθ} has radius 1 and angle θ. With this symbol, we may write the complex number z as

z = re^{jθ}.   (1.12)

We call r the magnitude of z and θ the angle, or phase, of z:

|z| = r   (1.13)

arg(z) = θ.   (1.14)

With these definitions of magnitude and phase, we can write the complex number z as

z = |z| e^{j arg(z)}.   (1.15)

Let's summarize our ways of writing the complex number z:

z = x + jy = re^{jθ} = |z| e^{j arg(z)},   (1.16)

where e^{jθ} = cos θ + j sin θ.
Exercise 1.1.1
Prove that (j)^{2n} = (−1)^n and (j)^{2n+1} = (−1)^n j.

Exercise 1.1.2
Show that e^{j(θ + m2π)} = e^{jθ}; that is, e^{jm2π} = 1.

Exercise 1.1.3
Evaluate z = re^{jθ} for each of the following complex numbers, and plot these:
a. z = 1 + j0;
b. z = 0 + j1;
c. z = 1 + j1;
d. z = 1 − j1.

Exercise 1.1.4
Evaluate z = x + jy for each of the following complex numbers:
a. z = 2e^{jπ/2};
b. z = 2e^{jπ/4};
c. z = e^{j3π/4};
d. z = 2e^{j3π/2}.

Exercise 1.1.5
Find the missing representation of z in each case:
a. (x, y) = (0.7, 0.1), z = re^{jθ} = ?
b. (x, y) = (1.0, 0.5), z = re^{jθ} = ?
c. r = 1.6, θ = π/8, z = x + jy = ?
d. r = 0.47, θ = π/8, z = x + jy = ?

Exercise 1.1.6
Show that Im[jz] = Re[z] and Re[jz] = −Im[z].

Demo 1.1 (MATLAB). Plot e^{jθ} for θ = i2π/360, i = 1, 2, ..., 360:

j=sqrt(-1)
n=360
for i=1:n,circle(i)=exp(j*2*pi*i/n);end;
axis('square')
plot(circle)

Replace the explicit for loop of line 3 by the implicit loop

circle=exp(j*2*pi*[1:n]/n);

to speed up the calculation. You can see from Figure 1.3 that the complex number e^{jθ}, evaluated at the angles θ = 2π/360, 2(2π/360), ..., 2π, traces out the unit circle. We say that each such point has angle θ and radius 1.

Figure 1.3: The unit circle traced out by e^{jθ}.
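The vectorized MATLAB demo above can be mirrored in Python with NumPy (an illustrative translation, not part of the original course materials). The assertion confirms that every computed point has radius 1, so all of them lie on the unit circle of Figure 1.3:

```python
import numpy as np

n = 360
# Vectorized version of the MATLAB loop: one point e^{j*2*pi*i/n} per angle
circle = np.exp(1j * 2 * np.pi * np.arange(1, n + 1) / n)

# Every point has radius 1, i.e. lies on the unit circle
assert np.allclose(np.abs(circle), 1.0)

# Plotting with matplotlib would reproduce Figure 1.3:
# import matplotlib.pyplot as plt
# plt.plot(circle.real, circle.imag); plt.axis('square'); plt.show()
```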
The complex numbers form a mathematical field on which the usual operations of addition and multiplication are defined. Each of these operations has a simple geometric interpretation.
If z1 = x1 + jy1 and z2 = x2 + jy2 are two complex numbers, then their sum is

z1 + z2 = (x1 + x2) + j (y1 + y2).   (1.17)

We say that the real parts add and the imaginary parts add. As illustrated in Figure 1.4, the sum z1 + z2 is the complex number formed from z1 and z2 by the parallelogram rule.

Figure 1.4: The sum of the complex numbers z1 and z2.

Exercise 1.2.1
Let z1 = r1 e^{jθ1} and z2 = r2 e^{jθ2}. Write the sum z1 + z2 in a form that involves only the variables r1, r2, θ1, and θ2.

The product of z1 and z2 is

z1 z2 = (x1 x2 − y1 y2) + j (y1 x2 + x1 y2).   (1.18)

If the polar representations z1 = r1 e^{jθ1} and z2 = r2 e^{jθ2} are used, then the product is

z1 z2 = r1 e^{jθ1} r2 e^{jθ2} = r1 r2 e^{j(θ1 + θ2)}.   (1.19)

We say that the magnitudes multiply and the angles add. As illustrated in Figure 1.5, the product z1 z2 lies at the angle (θ1 + θ2).

Footnote 4: We have used the trigonometric identities cos(θ1 + θ2) = cos θ1 cos θ2 − sin θ1 sin θ2 and sin(θ1 + θ2) = sin θ1 cos θ2 + cos θ1 sin θ2 to derive this result.

Figure 1.5: The product of the complex numbers z1 and z2: magnitudes multiply and angles add.
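The rule "magnitudes multiply and angles add" can be verified numerically. This Python sketch (illustrative, not from the original text; the values of r1, r2, θ1, θ2 are arbitrary) multiplies two complex numbers given in polar form:

```python
import cmath

z1 = 2 * cmath.exp(1j * 0.5)   # r1 = 2, theta1 = 0.5
z2 = 3 * cmath.exp(1j * 0.9)   # r2 = 3, theta2 = 0.9
p = z1 * z2

# Magnitudes multiply: |z1 z2| = r1 * r2
assert abs(abs(p) - 2 * 3) < 1e-12

# Angles add: arg(z1 z2) = theta1 + theta2
assert abs(cmath.phase(p) - (0.5 + 0.9)) < 1e-12
```

Note that `cmath.phase` returns angles in (−π, π], so for larger angles the sum would wrap around by a multiple of 2π.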
Rotation. There is a special case of complex multiplication that will become very important in our study of phasors. When z1 = r1 e^{jθ1} is multiplied by z2 = e^{jθ2}, a complex number with magnitude 1, the product is the complex number

z1 z2 = z1 e^{jθ2} = r1 e^{j(θ1 + θ2)}.   (1.20)

As illustrated in Figure 1.6, the product z1 z2 is just a rotation of z1 through the angle θ2.

Figure 1.6: Rotation of z1 through the angle θ2.
Exercise 1.2.2
5 "Phasors:
Introduction" <http://legacy.cnx.org/content/m21469/latest/>
Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.10>
Powers. If the complex number z1 multiplies itself N times, then the result is

(z1)^N = r1^N e^{jNθ1}.   (1.21)
Complex Conjugate. The complex conjugate of the complex number z = x + jy is

z* = x − jy = re^{−jθ}.   (1.22)

The recipe for finding a complex conjugate is to change j to −j in the complex number.

Figure 1.7: The complex number z and its conjugate z*.
Magnitude Squared. The product of z and its complex conjugate z* is called the magnitude squared of z and is denoted by |z|²:

|z|² = z* z = (x − jy)(x + jy) = x² + y² = r².   (1.23)

Note that |z| = r is the radius, or magnitude, of z defined in "Geometry of Complex Numbers" (Section 1.1).
Exercise 1.2.3
Write z* as z* = zw. What is w?

Exercise 1.2.4

Exercise 1.2.5
Show that arg(z2 z1*) = θ2 − θ1.
The complex number z = x + jy and its conjugate z* may be used to compute the real and imaginary parts of z:

Re[z] = (1/2)(z + z*)   (1.24)

Im[z] = (1/2j)(z − z*).   (1.25)
Addition and multiplication of complex numbers are commutative, associative, and distributive:

z1 + z2 = z2 + z1   (1.26)

z1 z2 = z2 z1   (1.27)

(z1 + z2) + z3 = z1 + (z2 + z3)

z1 (z2 z3) = (z1 z2) z3

z1 (z2 + z3) = z1 z2 + z1 z3.

The complex number 0 + j0 (denoted by 0) is the additive identity, and 1 + j0 (denoted by 1) is the multiplicative identity:

z + 0 = 0 + z = z   (1.28)

z·1 = 1·z = z.   (1.29)

The additive inverse of z = x + jy is −z = −x + j(−y), the complex number for which z + (−z) = 0.

Exercise 1.2.6
Show that the additive inverse of z = re^{jθ} may be written as re^{j(θ+π)}.

Exercise 1.2.7
Show that the multiplicative inverse of z, the complex number z⁻¹ for which z z⁻¹ = 1, may be written as

z⁻¹ = (1/(z* z)) z* = (x − jy)/(x² + y²).   (1.30)

In polar form, the multiplicative inverse is

z⁻¹ = r⁻¹ e^{−jθ}.   (1.31)
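The Cartesian formula for the multiplicative inverse, z⁻¹ = (x − jy)/(x² + y²), can be checked numerically. This Python sketch (illustrative, not part of the original text; z = 3 + j4 is an arbitrary example) confirms that it agrees with direct division:

```python
z = complex(3.0, 4.0)          # z = x + jy with x = 3, y = 4
x, y = z.real, z.imag

# Multiplicative inverse, equation (1.30): z^{-1} = (x - jy)/(x^2 + y^2)
z_inv = complex(x, -y) / (x**2 + y**2)

assert abs(z * z_inv - 1) < 1e-12   # z * z^{-1} = 1
assert abs(z_inv - 1 / z) < 1e-12   # agrees with direct division
```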
Exercise 1.2.8
Prove that (j)* = −j.

Exercise 1.2.9
Find z⁻¹ when z = 1 + j1.

Exercise 1.2.10
Prove that (z⁻¹)* = (z*)⁻¹ = r⁻¹ e^{jθ}. Plot z and (z⁻¹)* for a representative z.

Exercise 1.2.11
Find all of the complex numbers z that satisfy jz = z*.
Demo 1.2 (MATLAB). Create and run the following script le (name it Complex Numbers)
Footnote 6: If you are using PC-MATLAB, you will need to name your file cmplxnos.m.
clear, clg
j=sqrt(-1)
z1=1+j*.5,z2=2+j*1.5
z3=z1+z2,z4=z1*z2
z5=conj(z1),z6=j*z2
axis([-4 4 -4 4]),axis('square'),plot([0 z1],'-o')
hold on
plot([0 z2],'-o'),plot([0 z3],'-+'),plot([0 z4],'-*'),
plot([0 z5],'x'),plot([0 z6],'-x')
Figure 1.8:
With the help of Appendix 1, you should be able to annotate each line of this program. View your graphics
display to verify the rules for add, multiply, conjugate, and perp. See Figure 1.8.
Exercise 1.2.12
Prove that
z 0 = 1.
Exercise 1.2.13
(MATLAB) Choose a complex number z and plot its powers z^n for n = 1, 2, .... See Figure 1.9.

Figure 1.9: Powers of z.
1.3 Representing Complex Numbers in a Vector Space
So far we have coded the complex number z = x + jy with the Cartesian pair (x, y) and with the polar pair (r, θ). We now show that z may also be coded with a two-dimensional vector z, and we show how this new code may be used to gain insight about complex numbers.
The Complex Number z as a Vector. We code the complex number z = x + jy with the two-dimensional vector

z = [x, y]ᵀ.   (1.32)

We say that the vector z codes the complex number z:

x + jy = z ↔ z = [x, y]ᵀ.

With this code, the sum of two complex numbers corresponds to the sum of the corresponding vectors,

z1 + z2 ↔ z1 + z2 = [x1 + x2, y1 + y2]ᵀ,   (1.33)

and multiplication by a real scalar a corresponds to scalar multiplication of the vector:

az ↔ az = [ax, ay]ᵀ.   (1.34)

Figure 1.10: The vector z that codes the complex number z = x + jy.

For the vector z, an additive inverse −z, an additive identity 0 = [0, 0]ᵀ, and a multiplicative identity 1 all exist:

z + (−z) = 0   (1.35)

1z = z.   (1.36)
Prove that vector addition and scalar multiplication satisfy these properties of commutation, association, and distribution:

z1 + z2 = z2 + z1   (1.37)

(z1 + z2) + z3 = z1 + (z2 + z3)   (1.38)

a(bz) = (ab)z   (1.39)

a(z1 + z2) = az1 + az2.   (1.40)

The inner product between two vectors z1 and z2 is the real number

(z1, z2) = x1 x2 + y1 y2.   (1.41)

We sometimes write this inner product as the vector product (more on this in Linear Algebra):

(z1, z2) = z1ᵀ z2 = [x1 y1][x2, y2]ᵀ = x1 x2 + y1 y2.

Exercise 1.3.1
Prove that (z1, z2) = (z2, z1).   (1.42)
When z1 = z2 = z, the inner product between z and itself is the norm squared of z:

||z||² = (z, z) = x² + y².   (1.43)

These properties of vectors seem abstract. However, as we now show, they may be used to develop a vector calculus for doing complex arithmetic.
The addition of two complex numbers z1 and z2 corresponds to the addition of the vectors z1 and z2:

z1 + z2 ↔ z1 + z2 = [x1 + x2, y1 + y2]ᵀ.   (1.44)

The multiplication of the complex number z2 by the real number x1 corresponds to scalar multiplication of the vector z2 by x1, and similarly for y1 z2:

x1 z2 ↔ x1 [x2, y2]ᵀ = [x1 x2, x1 y2]ᵀ;  y1 z2 ↔ y1 [x2, y2]ᵀ = [y1 x2, y1 y2]ᵀ.   (1.45)

The product of z1 and z2 is

z1 z2 = (x1 + jy1) z2,   (1.46)

which is therefore represented as

z1 z2 ↔ [x1 x2 − y1 y2, x1 y2 + y1 x2]ᵀ.   (1.47)
This result may also be written in terms of inner products,

z1 z2 = z2 z1 ↔ [(v, z1), (w, z1)]ᵀ,   (1.48)

where v and w are the vectors

v = [x2, y2]ᵀ and w = [−y2, x2]ᵀ.   (1.49)

Equivalently, the product may be written as the matrix-vector multiplication

z1 z2 = z2 z1 ↔ [x2 −y2; y2 x2] [x1, y1]ᵀ.   (1.50)

In the special case where z2 = e^{jθ}, a unit-magnitude complex number, the product z e^{jθ} = e^{jθ} z is coded as

z e^{jθ} = e^{jθ} z ↔ [cos θ −sin θ; sin θ cos θ] [x, y]ᵀ.   (1.51)
The matrix

R(θ) = [cos θ −sin θ; sin θ cos θ]   (1.52)

is called a rotation matrix: it rotates the vector z through the angle θ.

Exercise 1.3.2
Show that R(−θ) rotates by the angle (−θ). What can you say about R(−θ)w when w = R(θ)z?

Exercise 1.3.3
Represent the complex conjugate of z as

z* ↔ [a b; c d][x, y]ᵀ   (1.53)

and find the elements a, b, c, and d of the matrix.
Inner Product and Polar Representation. From the norm of a vector, we obtain the magnitude r of the polar representation z = re^{jθ}:

r = (x² + y²)^{1/2} = ||z|| = (z, z)^{1/2}.   (1.54)

If we define the unit coordinate vectors e1 = [1, 0]ᵀ and e2 = [0, 1]ᵀ, then we can represent the vector z as

z = (z, e1) e1 + (z, e2) e2.   (1.55)

See Figure 1.11. From the figure it is clear that the cosine and sine of the angle θ are

cos θ = (z, e1)/||z||;  sin θ = (z, e2)/||z||.   (1.56)

Figure 1.11: Representation of z in the basis {e1, e2}.

This gives us another representation for any vector z:

z = ||z|| cos θ e1 + ||z|| sin θ e2.   (1.57)

The inner product between two vectors z1 and z2 is now

(z1, z2) = (||z1|| cos θ1 e1 + ||z1|| sin θ1 e2, ||z2|| cos θ2 e1 + ||z2|| sin θ2 e2)
         = ||z1|| ||z2|| (cos θ1 cos θ2 + sin θ1 sin θ2)
         = ||z1|| ||z2|| cos (θ2 − θ1).   (1.58)

It follows that the cosine of the angle between the vectors z1 and z2 may be written as

cos (θ2 − θ1) = cos θ1 cos θ2 + sin θ1 sin θ2 = (z1, z2)/(||z1|| ||z2||).   (1.59)

This formula shows that the cosine of the angle between two vectors z1 and z2, which is, of course, the cosine of the angle of z2 z1*, may be computed from the inner product.

Exercise 1.3.4
Chapter 2
Continuous-Time Signals
2.1 Signal Classifications and Properties
2.1.1 Introduction
This module will begin our study of signals and systems by laying out some of the fundamentals of signal classification. It is essentially an introduction to the important definitions and properties that are fundamental to the discussion of signals and systems, with a brief discussion of each.
As the names suggest, this classification is determined by whether or not the time axis is discrete (countable) or continuous. A continuous-time signal will contain a value for all real numbers along the time axis. In contrast to this, a discrete-time signal, often created by sampling a continuous signal, will only have values at equally spaced intervals along the time axis.
Figure 2.1
The difference between analog and digital is similar to the difference between continuous-time and discrete-time; in this case, however, the difference involves the values of the function. Analog corresponds to a continuous set of possible function values, while digital corresponds to a discrete set of possible function values. A common example of a digital signal is a binary sequence, where the values of the function can only be one or zero.
Figure 2.2
Periodic signals: We can define a periodic function through the following mathematical expression, where t can be any number and T is a positive constant:

f(t) = f(T + t)   (2.1)

The fundamental period of our function, f(t), is the smallest value of T that still allows (2.1) to be true.
Figure 2.3: (a) A periodic signal; (b) an aperiodic signal.

A finite-length signal is a signal f(t) that is nonzero only over a finite interval t1 < t < t2, where t1 > −∞ and t2 < ∞. An infinite-length signal, f(t), by contrast, is defined over an infinite interval.

Figure 2.4: Finite-Length Signal. Note that it only has nonzero values on a finite interval.
An even signal is any signal f such that f(t) = f(−t). Even signals can be easily spotted as they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal f such that f(t) = −f(−t) (Figure 2.5).

Figure 2.5: (a) An even signal; (b) an odd signal.
Using the definitions of even and odd signals, we can show that any signal can be written as a combination of an even and odd signal. That is, every signal has an odd-even decomposition. To demonstrate this, we have to look no further than a single equation.

f(t) = (1/2)(f(t) + f(−t)) + (1/2)(f(t) − f(−t))   (2.2)

Also, it can be shown that (1/2)(f(t) + f(−t)) fulfills the requirement of an even function, while (1/2)(f(t) − f(−t)) fulfills the requirement of an odd function.

Example 2.1

Figure 2.6: (a) The signal we will decompose using odd-even decomposition (b) Even part: e(t) = (1/2)(f(t) + f(−t)) (c) Odd part: o(t) = (1/2)(f(t) − f(−t)) (d) Check: e(t) + o(t) = f(t)
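The odd-even decomposition of equation (2.2) can be checked on sampled data. In the Python sketch below (illustrative, not part of the original text), the test signal is arbitrary, and a symmetric time grid is used so that f(−t) is just a reversal of the sample array:

```python
import numpy as np

t = np.linspace(-5, 5, 1001)      # symmetric grid, so f(-t) is a flip of the samples
f = np.exp(-t) * (t > 0)          # an arbitrary test signal, neither even nor odd
f_rev = f[::-1]                   # samples of f(-t)

e = 0.5 * (f + f_rev)             # even part, from equation (2.2)
o = 0.5 * (f - f_rev)             # odd part

assert np.allclose(e + o, f)      # the decomposition reconstructs f
assert np.allclose(e, e[::-1])    # e(t) = e(-t): even
assert np.allclose(o, -o[::-1])   # o(t) = -o(-t): odd
```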
Example 2.2
Consider the signal defined for all real t described by

f(t) = { sin(2t)/t,  t ≥ 1
       { 0,          t < 1     (2.3)

This signal is continuous time, analog, aperiodic, infinite length, causal, and neither even nor odd.
2.2 Common Continuous Time Signals

2.2.1 Introduction
Before looking at this module, hopefully you have an idea of what a signal is and what basic classications
and properties a signal can have. In review, a signal is a function defined with respect to an independent
variable. This variable is often time but could represent any number of things. Mathematically, continuous
time analog signals have continuous independent and dependent variables. This module will describe some
useful continuous time analog signals.
One of the most important elemental signals is the sinusoid. In its continuous-time form, we can write the general expression as

A cos(ωt + φ)   (2.4)

where A is the amplitude, ω is the frequency, and φ is the phase. The period of the sinusoid is

T = 2π/ω.   (2.5)
Figure 2.7: A continuous-time sinusoid.

Another common continuous-time signal is the unit step function, defined as

u(t) = { 0  if t < 0
       { 1  if t ≥ 0     (2.6)

Figure 2.8: Continuous-time unit step function.
The step function is a useful tool for testing and for defining other signals. For example, when different
shifted versions of the step function are multiplied by other signals, one can select a certain portion of the
signal and zero out the rest.
The unit-pulse function can be thought of as turning a switch on and off after a unit of time. It is defined as

p(t) = { 1  if −0.5 ≤ t ≤ 0.5
       { 0  otherwise     (2.7)

Figure 2.9: Continuous-time unit pulse function.
Note that the pulse can be easily written in terms of unit step functions as p(t) = u(t + 1/2) − u(t − 1/2). A related signal is the unit triangle function, defined as

Δ(t) = { t + 1  if −1 ≤ t ≤ 0
       { 1 − t  if 0 ≤ t ≤ 1
       { 0      if t < −1 or t > 1     (2.8)

Figure 2.10: Continuous-time triangle function.
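The identity expressing the pulse as a difference of shifted steps can be checked on sampled data. This Python sketch (illustrative, not from the original text) uses a grid chosen so that no sample falls exactly on t = ±0.5, where edge conventions would otherwise matter:

```python
import numpy as np

# Grid chosen so no sample lands exactly on t = +/-0.5
t = np.linspace(-2, 2, 101)

u = lambda x: np.where(x >= 0, 1.0, 0.0)         # unit step, equation (2.6)
p = np.where(np.abs(t) <= 0.5, 1.0, 0.0)         # unit pulse, equation (2.7)

# The pulse is a difference of shifted steps: p(t) = u(t + 1/2) - u(t - 1/2)
assert np.allclose(p, u(t + 0.5) - u(t - 0.5))
```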
2.3 Signal Operations

2.3.1 Introduction
This module will look at two signal operations affecting the time parameter of the signal, time shifting and
time scaling. These operations are very common components to real-world systems and, as such, should be
understood thoroughly when learning about signals and systems.
Figure 2.11: f(t − T) moves (delays) f to the right by T.

Figure 2.12: f(at) compresses f by a.
Example 2.3
Given f(t), find the signal f(at − b), with both a > 0 and b > 0.

Figure 2.13: (a) Begin with f(t). (b) Then replace t with at to get f(at). (c) Finally, replace t with t − b/a to get f(a(t − b/a)) = f(at − b).

Figure 2.14:
2.4 Energy and Power of Continuous-Time Signals

From physics we've learned that energy is work and power is work per time unit. Energy is measured in joules (J) and power in watts (W). In signal processing, energy and power are defined more loosely without any necessary physical units, because the signals may represent very different physical entities. We can say that energy and power are a measure of the signal's "size".
The energy of an analog signal x(t) is

Ea = ∫_{−∞}^{∞} |x(t)|² dt

Note that we have used the squared magnitude (absolute value) in case the signal is complex valued. If the signal is real, we can leave out the magnitude operation.
Figure 2.16: Sketch of energy calculation. (a) Signal x(t). (b) The energy of x(t) is the shaded region.
Figure 2.17:

The power of an analog signal x(t) is

Pa = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt

For periodic analog signals, the power needs to only be measured across a single period T0:

Pa = (1/T0) ∫_{−T0/2}^{T0/2} |x(t)|² dt

Example 2.4
Given the sinusoidal signal shown in Figure 2.18, calculate its power.
Figure 2.18:
Analog sine.
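Both definitions are easy to approximate numerically on sampled signals. The Python sketch below (illustrative, not part of the original text) computes the power of sin(2πt) over one period, which should be 1/2 for a unit-amplitude sinusoid, and the energy of the decaying exponential e^{−t}u(t), whose exact energy is also 1/2:

```python
import numpy as np

# Power of the periodic signal x(t) = sin(2*pi*t) over one period T0 = 1
N = 100000
t = np.arange(N) * (1.0 / N)           # uniform samples over one period
x = np.sin(2 * np.pi * t)
Pa = np.mean(np.abs(x) ** 2)           # (1/T0) * integral of |x|^2, approximated
assert abs(Pa - 0.5) < 1e-6            # a unit-amplitude sinusoid has power 1/2

# Energy of the finite-energy signal x(t) = e^{-t} u(t)
dt = 1e-4
t2 = np.arange(0, 50, dt)
Ea = np.sum(np.exp(-2 * t2)) * dt      # Riemann sum of |x(t)|^2 dt
assert abs(Ea - 0.5) < 1e-3            # exact value is 1/2
```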
2.5 Continuous Time Impulse Function

2.5.1 Introduction
In engineering, we often deal with the idea of an action occurring at a point.
Whether it be a force at
a point in space or some other signal at a point in time, it becomes worthwhile to develop some way of
quantitatively dening this. This leads us to the idea of a unit impulse, probably the second most important
function, next to the complex exponential, in this systems and signals course.
The Dirac delta function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse in continuous-time. Informally, this function is one that is infinitesimally narrow, infinitely tall, yet integrates to one. Perhaps the simplest way to visualize this is as a rectangular pulse from −ε/2 to ε/2 with a height of 1/ε. As we take the limit of this setup as ε approaches 0, we see that the width tends to zero and the height tends to infinity as the total area remains constant at one. The impulse function is often written as δ(t).

∫_{−∞}^{∞} δ(t) dt = 1   (2.9)

Figure 2.19: This is one way to visualize the Dirac delta function.
Since it is quite difficult to draw something that is infinitely tall, we represent the Dirac delta with an arrow centered at the point it is applied. If we wish to scale it, we may write the value it is scaled by next to the point of the arrow. This is a unit impulse (no scaling).
Figure 2.20:
Below is a brief list of a few important properties of the unit impulse without going into detail of their proofs.

Unit Impulse Properties:

δ(αt) = (1/|α|) δ(t)

δ(t) = δ(−t)

δ(t) = (d/dt) u(t), where u(t) is the unit step

f(t) δ(t) = f(0) δ(t)

The last of these is especially important as it gives rise to the sifting property of the Dirac delta function, which selects the value of a function at a specific time and is especially important in studying the relationship of an operation called convolution to time domain analysis of linear time invariant systems. The sifting property is shown and derived below.

∫_{−∞}^{∞} f(t) δ(t) dt = ∫_{−∞}^{∞} f(0) δ(t) dt = f(0) ∫_{−∞}^{∞} δ(t) dt = f(0)   (2.10)
2.6 Continuous-Time Complex Exponential

2.6.1 Introduction
Complex exponentials are some of the most important functions in our study of signals and systems. Their
importance stems from their status as eigenfunctions of linear time invariant systems. Before proceeding,
you should be familiar with complex numbers.
The continuous-time complex exponential is written as

A e^{st}   (2.11)

where A is a constant and

s = σ + jω   (2.12)

is a complex number in terms of σ, the attenuation constant, and ω, the angular frequency.
The mathematical basis for this is Euler's formula, which can be proved using the Maclaurin series expansion of e^z about z = 0, evaluated at z = jx. The result is

e^{jx} = Σ_{k=0}^{∞} (jx)^k / k!
       = Σ_{k=0}^{∞} (−1)^k x^{2k} / (2k)!  +  j Σ_{k=0}^{∞} (−1)^k x^{2k+1} / (2k+1)!   (2.13)

The two series on the right, which converge for all real x, are the Maclaurin expansions of cos(x) and sin(x), so

e^{jx} = cos(x) + j sin(x).   (2.14)

Choosing x = ωt gives e^{jωt} = cos(ωt) + j sin(ωt), which breaks a continuous time complex exponential into its real part and imaginary part. Using this, we find that

cos(ωt) = (1/2) e^{jωt} + (1/2) e^{−jωt}   (2.15)

sin(ωt) = (1/2j) e^{jωt} − (1/2j) e^{−jωt}.   (2.16)
For the general complex exponential e^{st} with s = σ + jω, where σ is the attenuation factor and ω is the frequency, it follows that

e^{st} = e^{(σ+jω)t} = e^{σt} (cos(ωt) + j sin(ωt)).   (2.17)

The real and imaginary parts of a complex exponential with a phase shift φ appear below.

Re{e^{(σ+jω)t+jφ}} = e^{σt} cos(ωt + φ)   (2.18)

Im{e^{(σ+jω)t+jφ}} = e^{σt} sin(ωt + φ)   (2.19)
Using the real or imaginary parts of complex exponential to represent sinusoids with a phase delay multiplied
by real exponential is often useful and is called attenuated phasor notation.
We can see that both the real part and the imaginary part have a sinusoid times a real exponential. We
also know that sinusoids oscillate between one and negative one. From this it becomes apparent that the
real and imaginary parts of the complex exponential will each oscillate within an envelope dened by the
real exponential part.
Figure 2.22: The shapes possible for the real part of a complex exponential. Notice that the oscillations are the result of a cosine, as there is a local maximum at t = 0. (a) If σ is negative, we have the case of a decaying exponential window. (b) If σ is positive, we have the case of a growing exponential window. (c) If σ is zero, we have the case of a constant window.
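Equations (2.18) and (2.19), and the exponential envelope described above, can be checked on sampled data. The Python sketch below (illustrative, not part of the original text; σ, ω, and φ are arbitrary) evaluates a decaying complex exponential:

```python
import numpy as np

sigma, omega, phi = -0.5, 10.0, 0.3
t = np.linspace(0, 5, 2001)

# Complex exponential e^{(sigma + j*omega)t + j*phi}
x = np.exp((sigma + 1j * omega) * t + 1j * phi)

# Equation (2.18): the real part is a damped cosine
assert np.allclose(x.real, np.exp(sigma * t) * np.cos(omega * t + phi))
# Equation (2.19): the imaginary part is a damped sine
assert np.allclose(x.imag, np.exp(sigma * t) * np.sin(omega * t + phi))

# The real part oscillates inside the envelope defined by e^{sigma*t}
assert np.all(np.abs(x.real) <= np.exp(sigma * t) + 1e-12)
```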
Chapter 3
Introduction to Systems
3.1 Introduction to Systems
Definition of a system

x(t) → System → y(t)
The system depicted has input x (t) and output y (t). Mathematically, systems operate
on function(s) to produce other function(s). In many ways, systems are like functions, rules that yield a
value for the dependent variable (our output signal) for each value of its independent variable (its input
signal). The notation y (t) = S (x (t)) corresponds to this block diagram. We term S () the input-output
relation for the system.
Figure 3.1:
This notation mimics the mathematical symbology of a function: A system's input is analogous to an
independent variable and its output the dependent variable. For the mathematically inclined, a system is a
functional: a function of a function (signals are functions).
Simple systems can be connected togetherone system's output becomes another's inputto accomplish
some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of
three basic interconnection forms.
x(t) → S1[·] → w(t) → S2[·] → y(t)
The most rudimentary ways of interconnecting systems are shown in the gures in this
section. This is the cascade conguration.
Figure 3.2:
The simplest form is when one system's output is connected only to another's input. Mathematically, w(t) = S1(x(t)) and y(t) = S2(w(t)), with the information contained in x(t) processed by the first system, then the second. In some cases, the ordering of the systems matters; in others it does not.
Figure 3.3: The parallel configuration.

A signal x(t) is routed to two (or more) systems, with this signal appearing as the input to all systems. Signals routed to more than one system are not split into pieces along the way. Two or more systems operate on x(t) and their outputs are added together to create the output y(t). Thus, y(t) = S1(x(t)) + S2(x(t)).
x(t) → (+) → e(t) → S1[·] → y(t), with y(t) fed back through S2[·] to the summer

Figure 3.4: The feedback configuration.
The subtlest interconnection configuration has a system's output also contributing to its input. Engineers would say the output is "fed back" to the input through system 2, hence the terminology. The mathematical statement of the feedback interconnection (Figure 3.4) is that the feed-forward system produces the output: y(t) = S1(e(t)). The input e(t) equals the input signal minus the output of some other system's output to y(t): e(t) = x(t) − S2(y(t)). Feedback systems are omnipresent in control problems, with the error signal used to adjust the output to achieve some condition defined by the input (controlling) signal.
For example, in a car's cruise control system, x(t) is a constant representing the desired speed, and y(t) is the car's speed as measured by a speedometer. In this application, system 2 is the identity system (output equals input).
3.2 System Classifications and Properties

3.2.1 Introduction
In this module some of the basic classifications of systems will be briefly introduced and the most important properties of these systems explained. As can be seen, the properties of a system provide an easy way to distinguish one system from another. Understanding these basic differences between systems, and their properties, is a fundamental concept used in all signal and system courses. Once a set of systems can be identified as sharing particular properties, one no longer has to reprove a certain characteristic of a system each time; it can simply be known due to the system classification.
Figure 3.5:

Figure 3.6:

A system is considered linear if it satisfies the properties of scaling (homogeneity),

H(a f(t)) = a H(f(t)),   (3.1)

and superposition (additivity),

H(f1(t) + f2(t)) = H(f1(t)) + H(f2(t)).   (3.2)

It is possible to check a system for linearity in a single (though larger) step. To do this, simply combine the first two steps to get

H(a f1(t) + b f2(t)) = a H(f1(t)) + b H(f2(t)).   (3.3)
A system H is called time invariant if, for all signals f(t) and all real T, delaying the input delays the output by the same amount:

H(S_T(f(t))) = S_T(H(f(t))),

where S_T is the operator that shifts a signal by T, S_T(f(t)) = f(t − T); which is to say,

H S_T = S_T H for all real T.   (3.4)
Intuitively, that means that for any input function that produces some output function, any
time shift of that input function will produce an output function identical in every way except that it is
shifted by the same amount. Any system that does not have this property is said to be time varying.
Figure 3.7: This block diagram shows the condition for time invariance. The output is the same whether the delay is put on the input or the output.
Figure 3.8: (a) For a typical system to be causal... (b) ...the output at time t0, y(t0), can only depend on the portion of the input signal before t0.
Figure 3.9: A bounded signal is a signal for which there exists a constant A such that |f(t)| < A for all t.

Representing this mathematically, a stable system must have the following property, where x(t) is the input and y(t) is the output. The output must satisfy the condition

|y(t)| ≤ My < ∞   (3.5)

whenever we have an input to the system that satisfies the condition

|x(t)| ≤ Mx < ∞   (3.6)

Mx and My both represent a set of finite positive numbers, and these relationships hold for all of t. Otherwise, the system is unstable.
Systems can be
continuous time, discrete time, or neither. They can be linear or nonlinear, time invariant or time varying,
and stable or unstable. We can also divide them based on their causality properties. There are other ways
to classify systems, such as use of memory, that are not discussed here but will be described in subsequent
modules.
3.3 Linear Time Invariant Systems

3.3.1 Introduction
Linearity and time invariance are two system properties that greatly simplify the study of systems that exhibit
them. In our study of signals and systems, we will be especially interested in systems that demonstrate both
of these properties, which together allow the use of some of the most powerful tools of signal processing.
Linear Scaling

Figure 3.10

In Figure 3.10(a) above, an input x to the linear system gives the output y. If x is scaled by a value α and passed through this same system, as in Figure 3.10(b), the output will also be scaled by α.

A linear system also obeys the principle of superposition. This means that if two inputs are added together and passed through a linear system, the output will be the sum of the individual inputs' outputs.

Figure 3.11
Superposition Principle
If Figure 3.11 is true, then the principle of superposition says that Figure 3.12 (Superposition Principle) is true as well. This holds for linear systems.
Figure 3.12:
That is, if Figure 3.11 is true, then Figure 3.12 (Superposition Principle) is also true for a linear system.
The scaling property mentioned above still holds in conjunction with the superposition principle. Therefore, if the inputs x and y are scaled by factors α and β, respectively, then the sum of these scaled inputs will give the sum of the scaled outputs.

Figure 3.13

Given Figure 3.13 for a linear system, Figure 3.14 (Superposition Principle with Linear Scaling) holds as well.

Figure 3.14:
Example 3.1
Consider the system H1 in which

H1(f(t)) = t f(t)   (3.7)

for all signals f. Given any two signals f, g and scalars a, b,

H1(a f(t) + b g(t)) = t (a f(t) + b g(t)) = a t f(t) + b t g(t) = a H1(f(t)) + b H1(g(t))   (3.8)

for all real t. Thus, H1 is a linear system.
Example 3.2
Consider the system H2 in which

H2(f(t)) = (f(t))²   (3.9)

for all signals f. Because

H2(2 f(t)) = (2 f(t))² = 4 (f(t))² = 4 H2(f(t)) ≠ 2 H2(f(t))   (3.10)

for nonzero f, H2 is not a linear system.
Time-Invariant Systems

Figure 3.15: Figure 3.15(a) shows an input at time t while Figure 3.15(b) shows the same input t0 seconds later. In a time-invariant system both outputs would be identical except that the one in Figure 3.15(b) would be delayed by t0.

In this figure, x(t) and x(t − t0) are passed through the system. Because the system is time-invariant, the inputs x(t) and x(t − t0) produce the same output; the only difference is that the output due to x(t − t0) is shifted by a time t0.
Whether a system is time-invariant or time-varying can be seen in the differential equation (or difference equation) describing it. A constant coefficient differential (or difference) equation means that the parameters of the system are not changing over time, so an input now will give the same result as the same input later.
Example 3.3
Consider the system H1 in which

H1(f(t)) = t f(t)   (3.11)

for all signals f. Because

S_T(H1(f(t))) = S_T(t f(t)) = (t − T) f(t − T) ≠ t f(t − T) = H1(f(t − T)) = H1(S_T(f(t)))   (3.12)

for nonzero T, H1 is not a time invariant system.
Example 3.4
Consider the system H2 in which

H2(f(t)) = (f(t))²   (3.13)

for all signals f. For all real T and signals f,

S_T(H2(f(t))) = S_T((f(t))²) = (f(t − T))² = H2(f(t − T)) = H2(S_T(f(t)))   (3.14)

for all real t. Thus, H2 is a time invariant system.
Figure 3.16: This is a combination of the two cases above. Since the input to Figure 3.16(b) is a scaled, time-shifted version of the input in Figure 3.16(a), so is the output.
As LTI systems are a subset of linear systems, they obey the principle of superposition. In the gure
below, we see the eect of applying time-invariance to the superposition denition in the linear systems
section above.
48
(a)
(b)
Figure 3.17
Figure 3.18:
49
(a)
(b)
Figure 3.19:
eect.
The order of cascaded LTI systems can be interchanged without changing the overall
(a)
Figure 3.20:
(b)
50
Example 3.5
Consider the system H3 in which

H3(f(t)) = 2f(t)   (3.15)

for all signals f. For all signals f, g and scalars a, b,

H3(af(t) + bg(t)) = 2(af(t) + bg(t)) = a2f(t) + b2g(t) = aH3(f(t)) + bH3(g(t))   (3.16)

for all real t. Thus, H3 is a linear system. For all real T and signals f,

S_T(H3(f(t))) = S_T(2f(t)) = 2f(t − T) = H3(f(t − T)) = H3(S_T(f(t)))   (3.17)

for all real t. Thus, H3 is a time invariant system. Therefore, H3 is a linear time invariant system.
Example 3.6
As has been previously shown, each of the following systems is not linear or not time invariant.

H1(f(t)) = tf(t)   (3.18)

H2(f(t)) = (f(t))^2   (3.19)

Thus, neither is a linear time invariant system.

Figure 3.21: Interact (when online) with the Mathematica CDF above demonstrating Linear Time Invariant systems. To download, right click and save file as .cdf.
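The examples above can be checked numerically. The sketch below (not from the text; the grid and test signals are illustrative assumptions) applies the linearity and time-invariance tests to sampled versions of H1 and H3:

```python
import numpy as np

# Illustrative sampled time axis and signals (assumptions of this sketch).
t = np.linspace(-5, 5, 1001)
f = np.sin(t)
g = np.exp(-t**2)
a, b = 2.0, -3.0

H1 = lambda x: t * x   # H1(f(t)) = t f(t): linear but time-varying
H3 = lambda x: 2 * x   # H3(f(t)) = 2 f(t): linear and time invariant

# Linearity: H(af + bg) = aH(f) + bH(g) holds for both systems.
assert np.allclose(H1(a * f + b * g), a * H1(f) + b * H1(g))
assert np.allclose(H3(a * f + b * g), a * H3(f) + b * H3(g))

# Time invariance: shifting the input must shift the output identically.
shift = lambda x: np.roll(x, 100)                   # 100-sample (circular) shift
assert np.allclose(H3(shift(f)), shift(H3(f)))      # H3 commutes with shifts
assert not np.allclose(H1(shift(f)), shift(H1(f)))  # H1 does not
```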
Chapter 4
Time Domain Analysis of Continuous Time Systems

4.1 Continuous Time Systems
4.1.1 Introduction
As you already know, a continuous time system operates on a continuous time signal input and produces a continuous time signal output. There are numerous examples of useful continuous time systems in signal processing, as they essentially describe the world around us. The class of systems that are both linear and time invariant, known as continuous time LTI systems, is of particular interest as the properties of linearity and time invariance together allow the use of some of the most important and powerful tools in signal processing.

A system H is said to be linear if it satisfies two important conditions. The first, additivity, states that for every pair of signals x, y we have H(x + y) = H(x) + H(y). The second, homogeneity of degree one, states that for every signal x and scalar a we have H(ax) = aH(x). It is clear that these conditions can be combined into a single condition for linearity. Thus, a system is said to be linear if for every pair of signals x, y and scalars a, b we have that

H(ax + by) = aH(x) + bH(y).   (4.1)

Linearity is a particularly important property of systems as it allows us to leverage the powerful tools of linear algebra, such as bases, eigenvectors, and eigenvalues, in their study.

A system H is said to be time invariant if a time shift of an input produces the corresponding shifted output. That is, for every T ∈ R, H commutes with the time shift operator S_T:

S_T H = H S_T.   (4.2)

Time invariance is desirable because it eases computation while mirroring our intuition that, all else equal, physical systems should react the same to identical inputs at different times.

When a system exhibits both of these important properties, it allows for a more straightforward analysis than would otherwise be possible. As will be shown in subsequent modules, computation of the system output for a given input becomes a simple matter of convolving the input with the system's impulse response signal. Also proven later, the fact that complex exponentials are eigenvectors of linear time invariant systems will enable the use of frequency domain tools, such as the various Fourier transforms and associated transfer functions, to describe the behavior of linear time invariant systems.
Example 4.1
Consider the system H in which

H(f(t)) = 2f(t)   (4.3)

for all signals f. For all signals f, g and scalars a, b,

H(af(t) + bg(t)) = 2(af(t) + bg(t)) = a2f(t) + b2g(t) = aH(f(t)) + bH(g(t))   (4.4)

for all real t. Thus, H is a linear system. For all real T and signals f,

S_T(H(f(t))) = S_T(2f(t)) = 2f(t − T) = H(f(t − T)) = H(S_T(f(t)))   (4.5)

for all real t. Thus, H is a linear time invariant system.
4.2.1 Introduction
The output of an LTI system is completely determined by the input and the system's response to a unit impulse.

Figure 4.1: System Output. We can determine the system's output, y(t), if we know the system's impulse response, h(t), and the input, f(t).

The output for a unit impulse input is called the impulse response.

Figure 4.2

Any continuous time signal can be represented as an integral of shifted, scaled impulses:

f(t) = ∫_{−∞}^{∞} f(τ) δ(t − τ) dτ,   (4.6)

where δ(t − τ) peaks up where t = τ.

Figure 4.3

Since we know the response of the system to an impulse and any signal can be decomposed into impulses, all we need to do to find the response of the system to any signal is to decompose the signal into impulses, calculate the system's output for every impulse, and add the outputs back together. This operation is known as convolution; since we are in continuous time, it is the Convolution Integral.

Figure 4.4
4.3.1 Introduction
Convolution, one of the most important concepts in electrical engineering, can be used to determine the output a system produces for a given input signal. The sifting property of the continuous time impulse function tells us that the input signal to a system can be represented as an integral of scaled and shifted impulses and, therefore, as the limit of a sum of scaled and shifted approximate unit impulses. Thus, by linearity, it would seem reasonable to compute the output signal as the limit of a sum of scaled and shifted unit impulse responses and, therefore, as the integral of a scaled and shifted impulse response. That is exactly what the operation of convolution accomplishes. Hence, convolution can be used to determine a linear time invariant system's output from knowledge of the input and the impulse response.

Continuous time convolution is an operation on two continuous time signals defined by the integral

(f * g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ   (4.7)

for all signals f, g defined on R. It is important to note that the operation of convolution is commutative, meaning that

f * g = g * f   (4.8)

for all signals f, g defined on R. Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f * g)(t) = ∫_{−∞}^{∞} f(t − τ) g(τ) dτ   (4.9)

for all signals f, g defined on R. Convolution has several other important properties not listed here but explained and derived in a later module.

The above definition has been chosen to be particularly useful in the study of linear time invariant systems. In order to see this, consider a linear time invariant system H with unit impulse response h. Given a system input signal x, we would like to compute the system output signal H(x). First, we note that the input can be expressed as the convolution

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ   (4.10)

by the sifting property of the unit impulse function. Writing this integral as the limit of a summation,

x(t) = lim_{Δ→0} Σ_n x(nΔ) δ_Δ(t − nΔ) Δ   (4.11)

where

δ_Δ(t) = { 1/Δ   0 ≤ t < Δ
         { 0     otherwise   (4.12)

approximates the properties of δ(t). By linearity,

Hx(t) = lim_{Δ→0} Σ_n x(nΔ) Hδ_Δ(t − nΔ) Δ,   (4.13)

which evaluated as an integral gives

Hx(t) = ∫_{−∞}^{∞} x(τ) Hδ(t − τ) dτ.   (4.14)

Since Hδ(t − τ) is the shifted unit impulse response h(t − τ), this gives the result

Hx(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = (x * h)(t).   (4.15)

Hence, convolution has been defined such that the output of a linear time invariant system is given by the convolution of the system input with the system unit impulse response.

It is often helpful to be able to visualize the computation of a convolution in terms of graphical processes. Consider the convolution of two functions f, g given by

(f * g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ = ∫_{−∞}^{∞} f(t − τ) g(τ) dτ.   (4.16)
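The defining integral can be approximated numerically. The sketch below (an illustration, not from the text) approximates (4.7) by a Riemann sum on a uniform grid and compares against a closed form computable by hand:

```python
import numpy as np

# Approximate the convolution integral by a Riemann sum with spacing dt.
# The pair of exponentials is an illustrative choice.
dt = 0.01
t = np.arange(0, 5, dt)
f = np.exp(-t)          # f(t) = e^{-t} u(t)
g = np.exp(-2 * t)      # g(t) = e^{-2t} u(t)

# (f * g)(t) ≈ Σ_k f(k dt) g(t - k dt) dt; np.convolve computes the sum.
fg = np.convolve(f, g)[:len(t)] * dt

# Closed form for this pair: (f * g)(t) = e^{-t} - e^{-2t} for t ≥ 0.
exact = np.exp(-t) - np.exp(-2 * t)
assert np.max(np.abs(fg - exact)) < 5e-2
```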
The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the τ = 0 axis and shifted by t for each real t. The product of the two resulting plots is then constructed, and the area under that curve gives the value of the convolution at t.

Example 4.2
Recall that the impulse response for the capacitor voltage in a series RC circuit is given by

h(t) = (1/(RC)) e^{−t/RC} u(t),   (4.17)

and consider the response to the input voltage

x(t) = u(t).   (4.18)

We know that the output for this input voltage is given by the convolution of the impulse response with the input signal

y(t) = (x * h)(t).   (4.19)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. Thus, since x(t) = u(t) is the simpler of the two signals, it is desirable to select it for time reversal and shifting. Thus, we would like to compute

y(t) = ∫_{−∞}^{∞} (1/(RC)) e^{−τ/RC} u(τ) u(t − τ) dτ.   (4.20)

The step functions can be used to further simplify this integral by narrowing the region of integration to the nonzero region of the integrand. Therefore,

y(t) = ∫_0^{max{0,t}} (1/(RC)) e^{−τ/RC} dτ.   (4.21)

Evaluating the integral,

y(t) = { 0               t ≤ 0
       { 1 − e^{−t/RC}   t > 0   (4.22)

which can also be written as

y(t) = (1 − e^{−t/RC}) u(t).   (4.23)
4 http://www.jhu.edu/signals/convolve/index.html
5 http://www.ece.rice.edu/dsp/courses/elec301/demos/applets/Convo1/
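The result of Example 4.2 can be verified numerically. A minimal sketch (the time constant and grid are illustrative assumptions):

```python
import numpy as np

# Convolve h(t) = (1/RC) e^{-t/RC} u(t) with the unit step on a grid and
# compare with the closed form y(t) = (1 - e^{-t/RC}) u(t) from (4.23).
RC = 0.5                 # illustrative time constant
dt = 1e-3
t = np.arange(0, 5, dt)
h = (1 / RC) * np.exp(-t / RC)
x = np.ones_like(t)      # u(t) sampled on t >= 0

y = np.convolve(x, h)[:len(t)] * dt
exact = 1 - np.exp(-t / RC)
assert np.max(np.abs(y - exact)) < 1e-2
```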
4.4.1 Introduction
We have already shown the important role that continuous time convolution plays in signal processing. This
section provides discussion and proof of some of the important properties of continuous time convolution.
Analogous properties can be shown for continuous time circular convolution with trivial modification of the proofs provided except where explicitly noted otherwise.
4.4.2.1 Associativity
The operation of convolution is associative. That is, for all continuous time signals f1, f2, f3 the following relationship holds.

f1 * (f2 * f3) = (f1 * f2) * f3   (4.24)

In order to show this, note that

(f1 * (f2 * f3))(t) = ∫∫ f1(τ1) f2(τ2) f3((t − τ1) − τ2) dτ2 dτ1
                    = ∫∫ f1(τ1) f2((τ1 + τ2) − τ1) f3(t − (τ1 + τ2)) dτ2 dτ1
                    = ∫∫ f1(τ1) f2(τ3 − τ1) f3(t − τ3) dτ1 dτ3
                    = ((f1 * f2) * f3)(t)   (4.25)

proving the relationship as desired through the substitution τ3 = τ1 + τ2.
4.4.2.2 Commutativity
The operation of convolution is commutative. That is, for all continuous time signals f1, f2 the following relationship holds.

f1 * f2 = f2 * f1   (4.26)

In order to show this, note that

(f1 * f2)(t) = ∫ f1(τ1) f2(t − τ1) dτ1
             = ∫ f1(t − τ2) f2(τ2) dτ2
             = (f2 * f1)(t)   (4.27)

proving the relationship as desired through the substitution τ2 = t − τ1.
4.4.2.3 Distributivity
The operation of convolution is distributive over the operation of addition. That is, for all continuous time signals f1, f2, f3 the following relationship holds.

f1 * (f2 + f3) = f1 * f2 + f1 * f3   (4.28)

In order to show this, note that

(f1 * (f2 + f3))(t) = ∫ f1(τ) (f2(t − τ) + f3(t − τ)) dτ
                    = ∫ f1(τ) f2(t − τ) dτ + ∫ f1(τ) f3(t − τ) dτ
                    = (f1 * f2 + f1 * f3)(t)   (4.29)

proving the relationship as desired.
4.4.2.4 Multilinearity
The operation of convolution is linear in each of the two function variables. Additivity in each variable results from distributivity of convolution over addition. Homogeneity of order one in each variable results from the fact that for all continuous time signals f1, f2 and scalars a the following relationship holds.

a(f1 * f2) = (af1) * f2 = f1 * (af2)   (4.30)

In order to show this, note that

(a(f1 * f2))(t) = a ∫ f1(τ) f2(t − τ) dτ
                = ∫ (af1(τ)) f2(t − τ) dτ
                = ((af1) * f2)(t)
                = ∫ f1(τ) (af2(t − τ)) dτ
                = (f1 * (af2))(t)   (4.31)

proving the relationship as desired.
4.4.2.5 Conjugation
The operation of convolution has the following property for all continuous time signals f1, f2, where conj(·) denotes complex conjugation.

conj(f1 * f2) = conj(f1) * conj(f2)   (4.32)

In order to show this, note that

(conj(f1 * f2))(t) = conj( ∫ f1(τ) f2(t − τ) dτ )
                   = ∫ conj(f1(τ) f2(t − τ)) dτ
                   = ∫ conj(f1(τ)) conj(f2(t − τ)) dτ
                   = (conj(f1) * conj(f2))(t)   (4.33)

proving the relationship as desired.
4.4.2.6 Time Shift
The operation of convolution has the following property for all continuous time signals f1, f2, where S_T is the time shift operator with T ∈ R.

S_T(f1 * f2) = (S_T f1) * f2 = f1 * (S_T f2)   (4.34)

In order to show this, note that

(S_T(f1 * f2))(t) = ∫ f2(τ) f1((t − T) − τ) dτ
                  = ∫ f2(τ) (S_T f1)(t − τ) dτ
                  = ((S_T f1) * f2)(t)
                  = ∫ f1(τ) f2((t − T) − τ) dτ
                  = ∫ f1(τ) (S_T f2)(t − τ) dτ
                  = (f1 * (S_T f2))(t)   (4.35)

proving the relationship as desired.
4.4.2.7 Differentiation
The operation of convolution has the following property for all continuous time signals f1, f2.

d/dt (f1 * f2)(t) = (df1/dt * f2)(t) = (f1 * df2/dt)(t)   (4.36)

In order to show this, note that

d/dt (f1 * f2)(t) = ∫ f2(τ) d/dt f1(t − τ) dτ
                  = (df1/dt * f2)(t)
                  = ∫ f1(τ) d/dt f2(t − τ) dτ
                  = (f1 * df2/dt)(t)   (4.37)

proving the relationship as desired.
4.4.2.8 Impulse Convolution
The operation of convolution has the following property for all continuous time signals f, where δ is the Dirac delta function.

f * δ = f   (4.38)

In order to show this, note that

(f * δ)(t) = ∫ f(τ) δ(t − τ) dτ
           = f(t) ∫ δ(t − τ) dτ
           = f(t)   (4.39)

proving the relationship as desired.
4.4.2.9 Width
The operation of convolution has the following property for all continuous time signals f1, f2, where Duration(f) gives the duration of a signal f.

Duration(f1 * f2) = Duration(f1) + Duration(f2)   (4.40)

In order to show this in the general case, note that (f1 * f2)(t) is nonzero for all t for which there is a τ such that f1(τ) f2(t − τ) is nonzero. When viewing one function as reversed and sliding past the other, it is easy to see that such a τ exists for all t on an interval of length Duration(f1) + Duration(f2). Note that this is not always true of circular convolution of finite length and periodic signals, as there is then a maximum possible duration within a period.

4.4.3 Conclusion
As can be seen, the operation of continuous time convolution has several important properties that have been listed and proven in this module. With small modifications, these properties extend to continuous time circular convolution as well, and the cases in which exceptions occur have been noted above. These identities will be useful to keep in mind as the reader continues to study signals and systems.
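Several of the properties above can be checked numerically. The sketch below (an illustration with arbitrarily chosen box signals) uses discrete convolution as a stand-in for the integral and checks commutativity and the width property:

```python
import numpy as np

# Sampled box signals on a grid with spacing dt (illustrative choices).
dt = 0.01
f1 = np.ones(100)   # a box of duration 1.0 s
f2 = np.ones(250)   # a box of duration 2.5 s

c12 = np.convolve(f1, f2) * dt
c21 = np.convolve(f2, f1) * dt
assert np.allclose(c12, c21)     # commutativity (4.26)

# Width (4.40): the support of f1 * f2 has length Duration(f1) + Duration(f2).
support = np.flatnonzero(c12 > 1e-12)
duration = (support[-1] - support[0] + 1) * dt
assert abs(duration - (1.0 + 2.5)) < 2 * dt
```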
4.5.1 Introduction
We have previously defined the system properties of causality and bounded-input bounded-output (BIBO) stability. We have also determined that a linear time-invariant (LTI) system is completely determined by its impulse response h(t): the output y(t) is given by the convolution of the input x(t) with h(t). It should not be surprising then that one can determine whether an LTI system is causal or BIBO stable simply by inspecting its impulse response h(t).

4.5.2 Causality
Recall that a system is causal if its output y(t0) at time t0 depends only on the values of the input x(t) for times t ≤ t0. Consider the convolution integral

y(t) = x(t) * h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ.   (4.41)

We replace the time variable t by a fixed value t0:

y(t0) = ∫_{−∞}^{∞} x(τ) h(t0 − τ) dτ.   (4.42)

Any input value occurring after time t0 (where τ > t0) must have its contribution to the integral nulled out, which requires the value of the impulse response h(t0 − τ) = 0 for τ > t0. Equivalently, an LTI system is causal if and only if its impulse response obeys h(t) = 0 for values of t < 0.

4.5.3 BIBO Stability
Recall that a system is BIBO stable if every bounded input |x(t)| ≤ M produces a bounded output. Consider once again the convolution integral:

y(t) = x(t) * h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ.   (4.43)

We apply absolute value to both sides and use the straightforward bound on the absolute value of an integral:

|y(t)| = | ∫ x(τ) h(t − τ) dτ | ≤ ∫ |x(τ) h(t − τ)| dτ
       = ∫ |x(τ)| |h(t − τ)| dτ ≤ M ∫ |h(t − τ)| dτ,   (4.44)

which is bounded if and only if the impulse response is absolutely integrable. Hence, an LTI system with impulse response h(t) is BIBO stable if and only if ∫ |h(t)| dt is finite.
4.5.4 Summary
The derivations above show that it is significantly easier to verify whether a system is causal and/or BIBO stable when it is linear and time-invariant: both conditions reduce to simple evaluations of the system's impulse response h(t).
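Both tests can be applied directly to samples of an impulse response. A minimal sketch (the RC lowpass and the grid are illustrative assumptions):

```python
import numpy as np

# Check causality and BIBO stability from samples of h(t), per the two
# conditions derived above.
dt = 1e-3
t = np.arange(-1, 10, dt)
RC = 0.5
h = np.where(t >= 0, (1 / RC) * np.exp(-np.abs(t) / RC), 0.0)

assert np.all(h[t < 0] == 0)      # causal: h(t) = 0 for t < 0
l1 = np.sum(np.abs(h)) * dt       # ≈ ∫ |h(t)| dt
assert np.isfinite(l1) and abs(l1 - 1.0) < 1e-2   # BIBO stable; integral is 1
```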
Chapter 5
Introduction to Fourier Analysis
5.1 Introduction to Fourier Analysis

Fourier's work rests on the claim that the harmonic complex exponentials

B = { e^{j(2π/T)nt} }_{n=−∞}^{∞}   (5.1)

form a basis for the T-periodic functions, so that any finite-energy function x(t) on [0, T] can be expanded as

x(t) = Σ_{n=−∞}^{∞} c_n e^{j(2π/T)nt}.   (5.2)

Now, the issue of exact convergence did bring Fourier much criticism from the review committee (Laplace, Lagrange, Monge and LaCroix comprised the review committee) for several years after its presentation in 1807. It was not resolved for almost a century, and its resolution is interesting and important to understand from a practical viewpoint.

Fourier analysis is fundamental to understanding the behavior of signals and systems. This is a result of the fact that sinusoids are eigenfunctions (Section 5.3) of linear, time-invariant (LTI) systems. This is to say that if we pass any particular sinusoid through an LTI system, we get a scaled version of that same sinusoid on the output. Then, since Fourier analysis allows us to redefine the signals in terms of a combination of sinusoids, all we need to do is determine how any given system acts on each possible sinusoid (its transfer function); since we are able to define the passage of sinusoids through a system as the multiplication of that sinusoid by its scaling factor, we can convert the passage of any signal through a system from convolution (Section 4.3) (in time) to multiplication (in frequency). These ideas are what give Fourier analysis its power.

Now, after hopefully having sold you on the value of this method of analysis, we must examine exactly what we mean by Fourier analysis. The four Fourier transforms that comprise this analysis are the Fourier Series (Section 5.4), Continuous-Time Fourier Transform (Section 6.2), Discrete-Time Fourier Transform (Section 9.3) and Discrete Fourier Transform (Section 10.1). All of these transforms act essentially the same way, by converting a signal in time to an equivalent signal in frequency (sinusoids). However, depending on the nature of a specific signal (i.e., whether it is continuous-time or discrete-time, periodic or aperiodic), there is an appropriate transform to convert the signal into the frequency domain.
5.2.1 Introduction
This module describes the type of signals acted on by the Continuous Time Fourier Series: periodic signals.

5.2.2 Periodic Signals
When a function repeats itself exactly after some given period, or cycle, we say it's periodic. A periodic function can be mathematically defined as

f(t) = f(t + mT),  m ∈ Z   (5.3)

where T > 0 represents the fundamental period of the signal, which is the smallest positive amount of time needed for the signal to repeat. Because of this, you may also see a signal referred to as a T-periodic signal. We can think of periodic functions (with period T) in two different ways:

1. as functions on all of R

Figure 5.1: A periodic function with period T.

2. or, we can cut out all of the redundancy, and think of them as functions on an interval [0, T] (or, more generally, [a, a + T]). If we know the signal is T-periodic then all the information of the signal is captured by this interval.

Figure 5.2: Remove the redundancy of the periodic function so that f(t) is undefined outside [0, T].

An aperiodic CT function f(t), on the other hand, does not repeat for any T ∈ R; i.e., there exists no T for which (5.3) holds.
5.2.3 Demonstration
Here's an example demonstrating a periodic sinusoidal signal with various frequencies, amplitudes, and phase delays:

Figure 5.3: Interact (when online) with a Mathematica CDF demonstrating a Periodic Sinusoidal Signal with various frequencies, amplitudes, and phase delays. To download, right click and save file as .cdf.
To learn the full concept behind periodicity, see the video below.
<http://www.youtube.com/v/tJW_a6JeXD8&rel=0&color1=0xb1b1b1&color2=0xd0d0d0&hl=en_US&feature=player_em
Figure 5.4:
5.2.4 Conclusion
A periodic signal is completely defined by its values in one period, such as the interval [0,T].
5.3.1 Introduction
Prior to reading this module, the reader should already have some experience with linear algebra and should specifically be familiar with the eigenvectors and eigenvalues of square matrices. A linear time invariant system is a linear operator defined on a function space that commutes with every time shift operator on that function space. Thus, we can also consider the eigenvector functions, or eigenfunctions, of a system. The concept of an eigenfunction is closely tied to the concept of an eigenvector in linear algebra. Eigen is German for "self": the eigenfunction of a system is a function that, when fed to the system, produces in the output a copy of the function, perhaps rescaled. More concretely, f is an eigenfunction of the system H with eigenvalue λ if

H(f) = λf.

The eigenfunction is effectively unchanged, as the output is simply the eigenfunction scaled by the associated eigenvalue. As will be shown, continuous time complex exponentials serve as eigenfunctions of linear time invariant systems operating on continuous time signals.

5.3.2 Eigenfunctions of LTI Systems
Consider a linear time invariant system H with impulse response h. Recall that the output H(x(t)) for an input x(t) is given by the convolution of the input with the impulse response:

H(x(t)) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ.   (5.4)

Now consider the input x(t) = e^{st} where s ∈ C. Computing the output for this input,

H(e^{st}) = ∫ h(τ) e^{s(t−τ)} dτ
          = ∫ h(τ) e^{st} e^{−sτ} dτ
          = e^{st} ∫ h(τ) e^{−sτ} dτ.   (5.5)

Thus,

H(e^{st}) = λ_s e^{st}   (5.6)

where

λ_s = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ   (5.7)

is the eigenvalue corresponding to the eigenvector e^{st}. There are some additional points that should be mentioned concerning the spaces on which such a system operates. However, for our purposes, complex exponentials will be accepted as eigenfunctions of linear time invariant systems. A similar argument using continuous time circular convolution would also hold for spaces of finite length signals.

As shown above, continuous time complex exponential signals are eigenfunctions of continuous time linear time invariant systems. It is particularly easy to calculate the output of a linear time invariant system for a complex exponential input as the result is a complex exponential output scaled by the associated eigenvalue. Consequently, representations of continuous time signals in terms of continuous time complex exponentials provide an advantage when studying signals. As will be explained later, this is what is accomplished by the continuous time Fourier transform and continuous time Fourier series, which apply to aperiodic and periodic signals respectively.
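The eigenfunction identity (5.5)-(5.7) can be observed numerically. In the sketch below (an illustration; the RC lowpass system is an assumption), a complex exponential is passed through the system by direct summation, and the output matches the input scaled by the eigenvalue:

```python
import numpy as np

# Impulse response of an illustrative LTI system (RC lowpass), sampled.
dt = 1e-3
tau = np.arange(0, 10, dt)
RC = 0.5
h = (1 / RC) * np.exp(-tau / RC)

s = 2j * np.pi * 3.0                      # s = jω: a pure 3 Hz oscillation
lam = np.sum(h * np.exp(-s * tau)) * dt   # eigenvalue λ_s, as in (5.7)

# y(t) = ∫ h(τ) e^{s(t-τ)} dτ factors as λ_s e^{st}; check at a few times t.
for t0 in (0.5, 1.0, 1.5):
    y = np.sum(h * np.exp(s * (t0 - tau))) * dt
    assert abs(y - lam * np.exp(s * t0)) < 1e-9
```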
5.4.1 Introduction
In this module, we will derive an expansion for continuous-time, periodic functions, and in doing so, derive the Continuous Time Fourier Series (CTFS).

Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems, calculating the output of an LTI system H given e^{st} as an input amounts to simple multiplication, where H(s) ∈ C is the eigenvalue corresponding to s. As shown in the figure, a simple exponential input would yield the output

y(t) = H(s) e^{st}.   (5.8)

Figure 5.5: A simple exponential input yields a scaled exponential output.

Using this and the fact that H is linear, calculating y(t) for combinations of complex exponentials is also straightforward:

c_1 e^{s_1 t} + c_2 e^{s_2 t} → c_1 H(s_1) e^{s_1 t} + c_2 H(s_2) e^{s_2 t}

Σ_n c_n e^{s_n t} → Σ_n c_n H(s_n) e^{s_n t}

The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{s_n t} by a different complex number H(s_n) ∈ C. As such, if we can write a function f(t) as a combination of complex exponentials it allows us to easily calculate the output of a system.
5.4.2 Fourier Series Synthesis
Joseph Fourier demonstrated that an arbitrary T-periodic function x(t) can be written as a linear combination of harmonic complex sinusoids

x(t) = Σ_{n=−∞}^{∞} c_n e^{j2πf0 nt}   (5.9)

where f0 = 1/T is the fundamental frequency. For almost all x(t) of practical interest, there exists c_n to make (5.9) true. If x(t) is finite energy (x(t) ∈ L²[0, T]), then the equality in (5.9) holds in the sense of energy convergence; if x(t) meets some mild conditions (the Dirichlet conditions), then (5.9) holds pointwise everywhere except at points of discontinuity.

The c_n - called the Fourier coefficients - tell us "how much" of the sinusoid e^{j2πf0 nt} is in x(t). The formula shows x(t) as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{j2πf0 nt}, n ∈ Z} forms a basis for the space of T-periodic continuous time functions.

Example 5.1
We know from Euler's formula that

cos(2πft) + sin(2πft) = ((1 − j)/2) e^{j2πft} + ((1 + j)/2) e^{−j2πft}.
Figure 5.6: Interact(when online) with a Mathematica CDF demonstrating sinusoid synthesis. To
download, right click and save as .cdf.
Figure 5.7
5.4.3 Fourier Series Analysis
Finding the Fourier coefficients requires projecting onto each basis exponential. Multiply both sides of (5.9) by e^{−j2πf0 kt}, where k ∈ Z:

f(t) e^{−j2πf0 kt} = Σ_{n=−∞}^{∞} c_n e^{j2πf0 nt} e^{−j2πf0 kt}   (5.10)

Now integrate both sides over a given period, T:

∫_0^T f(t) e^{−j2πf0 kt} dt = ∫_0^T Σ_{n=−∞}^{∞} c_n e^{j2πf0 (n−k)t} dt   (5.11)

On the right-hand side we can switch the summation and integral and factor the constant out of the integral:

∫_0^T f(t) e^{−j2πf0 kt} dt = Σ_{n=−∞}^{∞} c_n ∫_0^T e^{j2πf0 (n−k)t} dt   (5.12)

Consider the integral ∫_0^T e^{j2πf0 (n−k)t} dt for the two cases n = k and n ≠ k. For n = k we will have:

∫_0^T e^{j2πf0 (n−k)t} dt = T,  n = k   (5.13)

For n ≠ k, we will have:

∫_0^T e^{j2πf0 (n−k)t} dt = ∫_0^T cos(2πf0 (n−k)t) dt + j ∫_0^T sin(2πf0 (n−k)t) dt,  n ≠ k   (5.14)

But cos(2πf0 (n−k)t) has an integer number of periods, n − k, between 0 and T. Imagine a graph of the cosine; because it has an integer number of periods, there are equal areas above and below the x-axis of the graph. This statement holds true for sin(2πf0 (n−k)t) as well. Therefore,

∫_0^T cos(2πf0 (n−k)t) dt = 0,   (5.15)

which also holds for the integral involving the sine function. Therefore, we conclude the following about our integral of interest:

∫_0^T e^{j2πf0 (n−k)t} dt = { T   if n = k
                            { 0   otherwise   (5.16)

We plug in this result in (5.12) to see if we can finish finding an equation for our Fourier coefficients. Using the facts that we have just proven above, we can see that the only time (5.12) will have a nonzero result is when k and n are equal:

∫_0^T f(t) e^{−j2πf0 nt} dt = T c_n,  n = k   (5.17)

Finally, we have our general equation for the Fourier coefficients:

c_n = (1/T) ∫_0^T f(t) e^{−j2πf0 nt} dt   (5.18)
Example 5.2
Consider the square wave function given by

x(t) = { 1/2    t ≤ 1/2
       { −1/2   t > 1/2   (5.19)

on the unit interval t ∈ [0, 1). Applying the analysis formula,

c_k = ∫_0^1 x(t) e^{−j2πkt} dt
    = ∫_0^{1/2} (1/2) e^{−j2πkt} dt − ∫_{1/2}^1 (1/2) e^{−j2πkt} dt
    = j(−1 + e^{−jπk}) / (2πk)   (5.20)

Thus, the Fourier coefficients of this function found using the Fourier series analysis formula are

c_k = { −j/(πk)   k odd
      { 0         k even   (5.21)

5.4.4 Fourier Series Summary
Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The continuous time Fourier series synthesis formula expresses a continuous time, periodic function as the sum of continuous time, discrete frequency complex exponentials:

f(t) = Σ_{n=−∞}^{∞} c_n e^{j2πf0 nt}   (5.22)

The continuous time Fourier series analysis formula gives the coefficients of the Fourier series expansion:

c_n = (1/T) ∫_0^T f(t) e^{−j2πf0 nt} dt   (5.23)

In both of these equations f0 = 1/T is the fundamental frequency.
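The coefficients of Example 5.2 can be checked by numerical integration of the analysis formula:

```python
import numpy as np

# Numerically approximate c_k = ∫_0^1 x(t) e^{-j2πkt} dt for the ±1/2 square
# wave and compare with c_k = -j/(πk) for odd k and 0 for even k.
N = 100000
t = (np.arange(N) + 0.5) / N               # midpoint samples of [0, 1)
x = np.where(t <= 0.5, 0.5, -0.5)
for k in (1, 2, 3, 4, 5):
    ck = np.sum(x * np.exp(-2j * np.pi * k * t)) / N
    exact = -1j / (np.pi * k) if k % 2 else 0.0
    assert abs(ck - exact) < 1e-4
```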
Chapter 6
Continuous Time Fourier Transform
(CTFT)
6.1 Continuous Time Aperiodic Signals
6.1.1 Introduction
This module describes the type of signals acted on by the Continuous Time Fourier Transform.

6.1.2 Periodicity
When a function repeats itself exactly after some given period, or cycle, we say it's periodic. A periodic function can be mathematically defined as

f(t) = f(t + mT),  m ∈ Z   (6.1)

where T > 0 represents the fundamental period of the signal, which is the smallest positive amount of time needed for the signal to repeat. Because of this, you may also see a signal referred to as a T-periodic signal.

6.1.3 Aperiodicity
An aperiodic CT function f(t) does not repeat for any T ∈ R; i.e., there exists no T for which (6.1) holds. Suppose we have such an aperiodic function f(t). We can construct a periodic extension of f(t), in which f(t) is repeated every T0 seconds. If we take the limit as T0 → ∞, we obtain a precise model of an aperiodic signal for which all rules that govern periodic signals can be applied, including Fourier Analysis (with an important modification). For more detail on this distinction, see the module on the Continuous Time Fourier Transform.
Figure 6.1: Interact (when online) with a Mathematica CDF demonstrating Periodic versus Aperiodic Signals. To download, right-click and save as .cdf.

6.1.4 Conclusion
Any aperiodic signal can be defined by an infinite sum of periodic functions, a useful definition that makes it possible to use Fourier Analysis on it by assuming all frequencies are present in the signal.
6.2.1 Introduction
In this module, we will derive an expansion for any arbitrary continuous-time function, and in doing so, derive the Continuous Time Fourier Transform (CTFT).

Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems, calculating the output of an LTI system H given e^{st} as an input amounts to simple multiplication, where H(s) ∈ C is the eigenvalue corresponding to s. As shown in the figure, a simple exponential input would yield the output

y(t) = H(s) e^{st}.   (6.2)

Using this and the fact that H is linear, calculating y(t) for combinations of complex exponentials is also straightforward. The action of H on an input Σ_n c_n e^{s_n t} is to independently scale each exponential component e^{s_n t} by a different complex number H(s_n) ∈ C, giving the output Σ_n c_n H(s_n) e^{s_n t}. As such, if we can write a function f(t) as a combination of complex exponentials it allows us to easily calculate the output of a system.

Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Continuous-Time Fourier Transform (CTFT), commonly known as just the Fourier Transform (FT).

Joseph Fourier demonstrated that an arbitrary T-periodic signal s(t) can be written as a linear combination of harmonic complex sinusoids

s(t) = Σ_{n=−∞}^{∞} c_n e^{j(2π/T)nt}   (6.3)

where f0 = 1/T is the fundamental frequency. For almost all s(t) of practical interest, there exists c_n to make (6.3) true. If s(t) is finite energy, then the equality in (6.3) holds in the sense of energy convergence; if s(t) meets mild conditions, then (6.3) holds pointwise except at discontinuities.

The c_n - called the Fourier coefficients - tell us "how much" of the sinusoid e^{j(2π/T)nt} is in s(t). The formula shows s(t) as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{j(2π/T)nt}, n ∈ Z} forms a basis for the space of T-periodic continuous time functions.

Since the CTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the CTFT we do this by replacing the summation over integer harmonics with an integral over all real frequencies, letting the period T go to infinity.

Example 6.1
We know from Euler's formula that

cos(2πft) + sin(2πft) = ((1 − j)/2) e^{j2πft} + ((1 + j)/2) e^{−j2πft}.

Notice that these sinusoids are complex exponentials of the form e^{j2πft}, and so they are eigenfunctions of LTI systems; recall as well that such eigenfunctions easily pass through LTI systems. Thus, we can use these two principles to easily obtain the output to any signal for any system: First, we obtain the Fourier coefficients of the signal to decompose it as the sum of scaled sinusoids; next, we run each scaled sinusoid through the system, which in essence scales (multiplies) it by the sinusoid's eigenvalue; and finally we sum together all the outputs (thanks to linearity and superposition) to obtain the output. What remains to be shown is the way to easily compute the coefficients and eigenvalues of a signal and a system. Both of these problems are solved using the Fourier Transform:

S(f) = ∫_{−∞}^{∞} s(t) e^{−j2πft} dt   (6.4)
Inverse CTFT

s(t) = ∫_{−∞}^{∞} S(f) e^{j2πft} df   (6.5)

For signals, the CTFT provides the Fourier coefficients that are attached to sinusoids to represent the signal as in (6.3). The inverse CTFT then provides the signal as the linear combination of all sinusoids with the corresponding weights, extending (6.2). For systems, the CTFT can provide the eigenvalues for all sinusoids when applied to the impulse response function h(t). This transform of the impulse response allows for very easy computation of the system output, as we will soon observe.

warning: It is not uncommon to see the above formula written slightly different. One of the most common differences is the way that the exponential is written. The above equations use the frequency variable f in the exponential; some authors use the angular frequency ω = 2πf instead, in which case the inverse transform carries a factor of 1/(2π).

Figure 6.2: Interact (when online) with a Mathematica CDF demonstrating Continuous Time Fourier Transform. To Download, right-click and save as .cdf.

Exercise 6.2.1
(Solution on p. 89.)
Find the Fourier transform of the signal

x(t) = { e^{−t}   if t ≥ 0
       { 0        otherwise   (6.6)

Exercise 6.2.2
(Solution on p. 89.)
Find the inverse Fourier transform of the ideal lowpass filter defined by

X(f) = { 1   if |f| ≤ M
       { 0   otherwise   (6.7)
6.3.1 Introduction
This module will look at some of the basic properties of the Continuous-Time Fourier Transform (CTFT).

6.3.2 Table of CTFT Properties

Operation Name                          Signal (x(t))       Transform (X(f))
Linearity (Section 6.3.3.1)             a x1(t) + b x2(t)   a X1(f) + b X2(f)
Scalar Multiplication                   a x(t)              a X(f)
Duality (Section 6.3.3.2)               X(t)                x(−f)
Time Scaling (Section 6.3.3.3)          x(αt)               (1/|α|) X(f/α)
Time Shift (Section 6.3.3.4)            x(t − τ)            X(f) e^{−j2πfτ}
Convolution in Time (Section 6.3.3.5)   x1(t) * x2(t)       X1(f) X2(f)
Multiplication in Time                  x1(t) x2(t)         X1(f) * X2(f)
Time Differentiation (Section 6.3.3.6)  dⁿ/dtⁿ x(t)         (j2πf)ⁿ X(f)
Parseval's Theorem (Section 6.3.3.7)    ∫ |x(t)|² dt        ∫ |X(f)|² df
Modulation / Frequency Shift
  (Section 6.3.3.8)                     x(t) e^{j2πφt}      X(f − φ)
Symmetry for Real Signals               x(t) is real        X(f) = conj(X(−f))

Table 6.1

6.3.3 Discussion of Fourier Transform Properties

6.3.3.1 Linearity
The combined addition and scalar multiplication properties in the table above demonstrate the basic property of linearity: the transform of a linear combination of signals is the same linear combination of the transforms of the individual signals.

Example 6.2
We will begin with the following signal:

z(t) = a x1(t) + b x2(t)   (6.8)

Now, after we take the Fourier transform, shown in the equation below, notice that the linear combination of the terms is unaffected by the transform.

Z(f) = a X1(f) + b X2(f)   (6.9)
6.3.3.2 Duality
Duality is a property that can make life quite easy when solving problems involving Fourier transforms. Basically what this property says is that since a rectangular function in time is a sinc function in frequency, then a sinc function in time will be a rectangular function in frequency. This is a direct result of the similarity between the forward CTFT and the inverse CTFT. The only difference is a frequency reversal, so whatever holds in one direction holds, with the frequency variable negated, in the other direction and vice versa.

6.3.3.3 Time Scaling
This property deals with the effect on the frequency-domain representation of a signal if the time variable is altered. The most important concept to understand for the time scaling property is that signals that are narrow in time will be broad in frequency and vice versa. The simplest example of this is a unit pulse with a very small duration in time, which has a very broad spectrum in frequency.

The table above shows this idea for the general transformation from the time-domain to the frequency-domain of a signal. You should be able to easily notice that these equations show the relationship mentioned previously: if the time variable is stretched then the frequency range will be compressed.

6.3.3.4 Time Shifting
Time shifting shows that a shift in time is equivalent to a linear phase shift in frequency. Since the magnitude spectrum depends only on the shape of a signal, which is unchanged in a time shift, only the phase spectrum will be altered.

Example 6.3
We will begin by letting z(t) = x(t − τ). Now let us take the Fourier transform with the previous expression substituted in for z(t). Define σ = t − τ. Through the calculations below, you can see that only the variable in the exponential is altered, thus only changing the phase in the frequency domain.

Z(f) = ∫ x(t − τ) e^{−j2πft} dt
     = ∫ x(σ) e^{−j2πf(σ + τ)} dσ
     = e^{−j2πfτ} ∫ x(σ) e^{−j2πfσ} dσ
     = e^{−j2πfτ} X(f)   (6.11)
6.3.3.5 Convolution
Convolution is one of the big reasons for converting signals to the frequency domain, since convolution in time becomes multiplication in frequency. This property is also another excellent example of symmetry between time and frequency. It also shows that there may be little to gain by changing to the frequency domain when multiplication in time is involved.

We will introduce the convolution integral here, but if you have not seen this before or need to refresh your memory, then look at the continuous-time convolution (Section 6.5) module for a more in depth explanation and derivation.

y(t) = x1(t) * x2(t) = ∫_{−∞}^{∞} x1(τ) x2(t − τ) dτ   (6.12)

6.3.3.6 Time Differentiation
Since LTI systems can be represented in terms of differential equations, it is apparent with this property that converting to the frequency domain may allow us to convert these complicated differential equations to simpler equations involving multiplication and addition. This is often looked at in more detail during the study of the Laplace Transform.
6.3.3.7 Parseval's Relation
Parseval's relation tells us that the energy of a signal is equal to the energy of its Fourier transform:

∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df   (6.13)

6.3.3.8 Modulation (Frequency Shift)
Multiplying x(t) by a complex sinusoid of frequency φ gives the signal

z(t) = x(t) e^{j2πφt},   (6.14)

whose transform is the shifted spectrum

Z(f) = X(f − φ).   (6.15)

6.3.3.9 Symmetry for Real Signals
If x(t) is real, then x(t) = conj(x(t)), and taking conjugates inside the transform integral gives

conj(X(f)) = ∫ conj(x(t)) e^{j2πft} dt = ∫ x(t) e^{−j2π(−f)t} dt = X(−f),   (6.16)

so the spectrum of a real signal obeys the conjugate symmetry X(f) = conj(X(−f)).
Figure 6.3: Interactive Signal Processing Laboratory Virtual Instrument created using NI's Labview.
http://www.jhu.edu/signals/ctftprops-mathml/index.htm
http://www.jhu.edu/signals/ctftprops/indexCTFTprops.htm
Time Domain Signal                Frequency Domain Signal                               Condition
e^{−at} u(t)                      1/(a + j2πf)                                          a > 0
e^{at} u(−t)                      1/(a − j2πf)                                          a > 0
e^{−a|t|}                         2a/(a² + (2πf)²)                                      a > 0
t e^{−at} u(t)                    1/(a + j2πf)²                                         a > 0
tⁿ e^{−at} u(t)                   n!/(a + j2πf)ⁿ⁺¹                                      a > 0
δ(t)                              1
1                                 δ(f)
e^{j2πf0 t}                       δ(f − f0)
cos(2πf0 t)                       (1/2)(δ(f − f0) + δ(f + f0))
sin(2πf0 t)                       (j/2)(δ(f + f0) − δ(f − f0))
u(t)                              (1/2)δ(f) + 1/(j2πf)
sgn(t)                            1/(jπf)
cos(2πf0 t) u(t)                  (1/4)(δ(f − f0) + δ(f + f0)) + jf/(2π(f0² − f²))
sin(2πf0 t) u(t)                  (1/(4j))(δ(f − f0) − δ(f + f0)) + f0/(2π(f0² − f²))
e^{−at} sin(2πf0 t) u(t)          2πf0/((a + j2πf)² + (2πf0)²)                          a > 0
e^{−at} cos(2πf0 t) u(t)          (a + j2πf)/((a + j2πf)² + (2πf0)²)                    a > 0
u(t + τ) − u(t − τ)               2τ sin(2πfτ)/(2πfτ) = 2τ sinc(2τf)
2f0 sinc(2f0 t)                   u(f + f0) − u(f − f0) = p(f/(2f0))
Λ(t/τ)                            τ sinc²(τf)
f0 sinc²(f0 t)                    Λ(f/f0)
Σ_{n=−∞}^{∞} δ(t − nT)            f0 Σ_{n=−∞}^{∞} δ(f − nf0),  f0 = 1/T
e^{−t²/(2σ²)}                     σ√(2π) e^{−2(πσf)²}

Table 6.2

Notes: sinc(x) = sin(πx)/(πx). p(t) is the unit pulse function centered at t = 0:

p(t) = { 1   if |t| ≤ 1/2
       { 0   otherwise   (6.17)

Λ(t) is the unit triangle function centered at t = 0:

Λ(t) = { 1 + t   if −1 ≤ t ≤ 0
       { 1 − t   if 0 < t ≤ 1
       { 0       otherwise   (6.18)
6.5.1 Introduction
This module discusses convolution of continuous signals in the time and frequency domains.

Recall the continuous time Fourier transform pair:

CTFT

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt   (6.19)

Inverse CTFT

x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df   (6.20)

The convolution integral expresses the output y(t) of an LTI system with impulse response h(t) for an input x(t):

y(t) = (x * h)(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ   (6.21)
     = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ   (6.22)

Convolution is commutative. For more information on the characteristics of the convolution integral, read about the Properties of Convolution (Section 4.4).
6.5.4 Demonstration
Interact (when online) with a Mathematica CDF demonstrating Use of the CTFT in signal
denoising. To Download, right-click and save target as .cdf.
Figure 6.4:
6.5.5 Convolution Theorem
Let f and g be two functions with convolution f * g. Let F denote the Fourier transform operator. Then

F(f * g) = F(f) · F(g)   (6.23)

F(f · g) = F(f) * F(g)   (6.24)

By applying the inverse Fourier transform F⁻¹, we can write:

f * g = F⁻¹(F(f) · F(g))   (6.25)
6.5.6 Conclusion
The Fourier transform of a convolution is the pointwise product of Fourier transforms.
In other words,
convolution in one domain (e.g., time domain) corresponds to point-wise multiplication in the other domain
(e.g., frequency domain).
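The theorem can be demonstrated in its discrete (DFT) form, where circular convolution corresponds to pointwise multiplication of DFTs. A minimal sketch:

```python
import numpy as np

# The DFT of a circular convolution equals the pointwise product of the DFTs.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Circular convolution computed directly from its definition ...
conv = np.array([sum(f[n] * g[(k - n) % 64] for n in range(64))
                 for k in range(64)])
# ... and via the Fourier domain.
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
assert np.allclose(conv, via_fft)
```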
Convolution is the standard tool for the analysis of linear time-invariant (LTI) systems. For an LTI system, the impulse response h(t) (the output observed when an impulse is applied at the input) completely characterizes the system: the output for an input x(t) is y(t) = (x * h)(t). We can also characterize the system in the frequency domain by applying a Fourier transform to the LTI system equation, which results in Y(f) = H(f) X(f). Written as a product of magnitudes, |Y(f)| = |H(f)| |X(f)|, this says that under function multiplication the effect of the LTI system is to scale the "magnitude" of each complex sinusoid in the input independently to obtain the output, with the scaling factors being given by the Fourier transform of the impulse response. An LTI system can remove a frequency entirely by setting H(f) = 0; however, an LTI system cannot "add" new frequencies to an input signal if they were not already present, and an LTI system cannot change the values of the frequencies present in the signal (that is, an LTI system cannot perform modulation; by the same token, a modulator is not an LTI system).

It is also worth noting that the delay of a complex sinusoid with a given phase is linearly dependent on its frequency. Thus, a delay system (which we know is LTI) is characterized by a phase ∠(H(f)) that will be linearly dependent on the value of the frequency f. A system whose phase is not linear in f instead gives each complex exponential a different delay that is inversely proportional to the frequency, distorting the shape of the signal.
Solutions to Exercises in Chapter 6

Solution to Exercise 6.2.1
In order to calculate the Fourier transform, all we need to use is (6.4), complex exponentials, and basic calculus.

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
     = ∫_0^{∞} e^{−t} e^{−j2πft} dt
     = ∫_0^{∞} e^{−t(1 + j2πf)} dt
     = 0 − (−1)/(1 + j2πf)   (6.26)

X(f) = 1/(1 + j2πf)   (6.27)

Solution to Exercise 6.2.2
Here we will use (6.5) to find the inverse FT. Given that t ≠ 0,

x(t) = ∫_{−M}^{M} e^{j2πft} df
     = (1/(j2πt)) e^{j2πft} |_{f=−M}^{f=M}
     = (1/(πt)) sin(2πMt)   (6.28)

x(t) = 2M sinc(2Mt)   (6.29)
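The closed form found in Exercise 6.2.1 can be checked by numerical integration (the grid and truncation point below are illustrative assumptions):

```python
import numpy as np

# Approximate X(f) = ∫_0^∞ e^{-t} e^{-j2πft} dt by a Riemann sum and compare
# with the closed form 1/(1 + j2πf) from (6.27).
dt = 1e-4
t = np.arange(0, 30, dt)   # truncate the tail; e^{-30} is negligible
x = np.exp(-t)
for f in (0.0, 0.5, 1.0, 2.0):
    Xf = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
    assert abs(Xf - 1 / (1 + 2j * np.pi * f)) < 5e-3
```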
Chapter 7
Discrete-Time Signals
7.1 Common Discrete Time Signals
7.1.1 Introduction
Before looking at this module, hopefully you have an idea of what a signal is and what basic classifications and properties a signal can have. In review, a signal is a function defined with respect to an independent variable. This variable is often time but could represent any number of things. Mathematically, discrete time analog signals have discrete independent variables and continuous dependent variables. This module will describe some useful discrete time analog signals.
Sinusoids
In its discrete-time form, we write a sinusoid as

x[n] = A cos(ωn + φ)   (7.1)

where A is the amplitude, ω is the angular frequency, and φ is the phase.

Figure 7.1: A discrete-time cosine signal.

Note that the equation representation for a discrete time sinusoid waveform is not unique.
Complex Exponentials
The discrete-time complex exponential is written as

x[n] = A e^{sn}   (7.2)

where s = σ + jω is a complex number in terms of σ, the attenuation constant, and ω, the angular frequency.

Discrete-time complex exponentials have the following property:

e^{jωn} = e^{j(ω + 2π)n}   (7.3)

Given this property, a complex exponential with frequency ω + 2π is identical to one with frequency ω, so the representation of a discrete complex exponential is likewise not unique.
Unit Sample
The unit sample is defined as

δ[n] = { 1   if n = 0
       { 0   otherwise   (7.4)

Figure 7.2: The unit sample.

More detail is provided in the section on the discrete time impulse function. For now, it suffices to say that this signal is crucially important in the study of discrete signals, as it allows the sifting property to be used in signal representation and signal decomposition.
Unit Step
Another important discrete-time signal is the unit step, defined as

u[n] = { 0   if n < 0
       { 1   if n ≥ 0   (7.5)

Figure 7.3: The discrete-time unit step function.
The step function is a useful tool for testing and for defining other signals. For example, when different shifted versions of the step function are multiplied by other signals, one can select a certain portion of the signal and zero out the rest.
Example 7.1
Given the sequence y[l] = b^l u[l], the multiplication by the unit step u[l] selects the portion of b^l with l ≥ 0 and zeroes out the rest.

7.2 Energy and Power of Discrete-Time Signals
The energy of a discrete-time signal is the sum of the squared magnitudes of its values:

Ed = Σ_{n=−∞}^{∞} |x[n]|²

We recognize Ed as the square of the ℓ2 norm of the sequence. When the energy is infinite, it is more informative to take a look at power...
Figure 7.4:
P_d = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} (|x[n]|)²

For periodic discrete-time signals, the sum need only be taken over one period of length N₀:

P_d = (1/N₀) Σ_{n=0}^{N₀−1} (|x[n]|)²

Example 7.2
Given x[n] = sin(πn/10), shown in Figure 7.5, calculate the power for one period. Using the definition of power for a periodic discrete sine, we get P_d = (1/20) Σ_{n=1}^{20} sin²(πn/10) = 0.500. Download power_sine.m
Figure 7.5:
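The power computation in Example 7.2 can be checked numerically; a minimal Python sketch using NumPy (power_sine.m is the MATLAB counterpart mentioned above):

```python
import numpy as np

# Power of one period of x[n] = sin(pi*n/10); the period is N0 = 20 samples.
N0 = 20
n = np.arange(N0)
x = np.sin(np.pi * n / 10)

# Average power over one period: (1/N0) * sum of |x[n]|^2
Pd = np.sum(np.abs(x) ** 2) / N0
print(Pd)  # -> 0.5 (up to floating-point rounding)
```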
7.3.1 Introduction
This module will look at two signal operations affecting the time parameter of the signal, time shifting and time scaling. While they appear at first to be straightforward extensions of the continuous-time signal operations, there are some intricacies that are particular to discrete-time signals.
Figure 7.6:
7.3.2.2.1 Decimation
In decimation, the input of the signal is changed to be f[cn], where c must be an integer so that the input takes values for which a discrete function is properly defined. The decimated signal f[cn] selects every c-th sample of f[n] (beginning with f[0]), and so we are throwing away samples of the signal (or decimating it).
Figure 7.7:
f [2n] decimates f by 2.
7.3.2.2.2 Expansion
In expansion, the input of the signal is changed to be f[n/c], which is defined only for integer values of the input n/c. Thus, in the expanded signal we can only place the entries of the original signal f at values of n that are multiples of c. In other words, we are spacing the values of the discrete-time signal c − 1 entries away from each other. Since the signal is undefined elsewhere, the standard convention is to set those entries to zero.

Figure 7.8: f[n/2] expands f by 2.
7.3.2.2.3 Interpolation
In practice, we may know specific information about the signal of interest that allows us to provide good estimates of the entries of f[n/c] that are missing after expansion. For example, knowing the values of the signal at the surrounding multiples of c allows us to fill in the missing entries by interpolation. The rule described above is known as linear interpolation; although more sophisticated rules exist for interpolating values, linear interpolation will suffice for our explanation in this module.
Figure 7.9: f[n/2] with interpolation fills in the missing values of the expansion using linear extensions.
Example 7.3
Given f[n], find f[an − b].

Figure 7.10: (a) Begin with f[n]. (b) Then replace n with n − b to get f[n − b]. (c) Finally, replace n with an to get f[an − b].

Figure 7.11: (a) Begin with f[n]. (b) Then replace n with an to get f[an]. (c) Finally, replace n with n − b/a to get f[a(n − b/a)] = f[an − b].
7.4.1 Introduction
In engineering, we often deal with the idea of an action occurring at a point. Whether it be a force at a point in space or some other signal at a point in time, it becomes worthwhile to develop some way of quantitatively defining this. This leads us to the idea of a unit impulse, probably the second most important function, next to the complex exponential, in this signals and systems course.

The unit sample function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse in discrete time. There are not nearly as many intricacies involved in its definition as there are in the definition of the Dirac delta function, the continuous time impulse function. The unit sample function simply takes a value of one at n = 0 and a value of zero elsewhere. The impulse function is often written as δ[n].
δ[n] = { 1 if n = 0; 0 otherwise }   (7.6)

Figure 7.12: The unit sample.
Below we will briefly list a few important properties of the unit impulse without going into detail of their proofs:

δ[αn] = (1/|α|) δ[n]
δ[n] = δ[−n]
δ[n] = u[n] − u[n − 1]
f[n] δ[n] = f[0] δ[n]
Σ_{n=−∞}^{∞} f[n] δ[n] = f[0]   (7.7)
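These properties can be spot-checked numerically; a small sketch using NumPy, with a finite window standing in for the infinite sum:

```python
import numpy as np

n = np.arange(-10, 11)            # finite window standing in for all of Z
delta = (n == 0).astype(float)    # unit sample: 1 at n = 0, 0 elsewhere
u = (n >= 0).astype(float)        # unit step

# delta[n] = u[n] - u[n-1]  (u shifted right by one sample within the window)
u_shift = np.concatenate(([0.0], u[:-1]))
assert np.array_equal(delta, u - u_shift)

# Sifting property: sum_n f[n] delta[n] = f[0], for an arbitrary test signal f
f = np.cos(0.3 * n) + n ** 2
assert np.isclose(np.sum(f * delta), f[n == 0][0])
```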
Figure 7.13: Interact (when online) with a Mathematica CDF demonstrating the Discrete Time Impulse Function.

The function takes a value of one at time n = 0 and a value of zero elsewhere. It has several important properties that will appear again when studying systems.
7.5.1 Introduction
Complex exponentials are some of the most important functions in our study of signals and systems. Their
importance stems from their status as eigenfunctions of linear time invariant systems; as such, it can be
both convenient and insightful to represent signals in terms of complex exponentials. Before proceeding, you
should be familiar with complex numbers.
The discrete time complex exponential is given by

z^n   (7.8)

where z is a complex number. Recalling the polar expression of complex numbers, z can be expressed in terms of its magnitude |z| and its angle (or argument) ω in the complex plane: z = |z| e^{jω}. Thus z^n = (|z|)^n e^{jωn}. In the context of complex exponentials, ω is referred to as frequency. For the time being, let's consider complex exponentials for which |z| = 1.
These discrete time complex exponentials have the following property, which will become evident through discussion of Euler's formula:

e^{jωn} = e^{j(ω+2π)n}   (7.9)

Given this property, a complex exponential with frequency ω is identical to one with frequency ω + 2π, so the distinct discrete-time frequencies lie in an interval of length 2π.
Euler's formula states that, for all real numbers x,

e^{jx} = cos(x) + j sin(x)   (7.10)

It can be proven by expanding the function e^z in a Maclaurin series about z = 0 and evaluating it at z = jx. The result is

e^{jx} = Σ_{k=0}^{∞} (jx)^k / k!
       = Σ_{k=0}^{∞} (−1)^k x^{2k} / (2k)! + j Σ_{k=0}^{∞} (−1)^k x^{2k+1} / (2k+1)!
       = cos(x) + j sin(x)   (7.11)

where the two series are the Maclaurin series for cos(x) and sin(x), which converge for all real x. Choosing x = ωn, we have:

e^{jωn} = cos(ωn) + j sin(ωn)   (7.12)
which breaks a discrete time complex exponential into its real part and imaginary part. Using this formula,
we can also derive the following relationships.
cos(ωn) = (1/2) e^{jωn} + (1/2) e^{−jωn}   (7.13)

sin(ωn) = (1/(2j)) e^{jωn} − (1/(2j)) e^{−jωn}   (7.14)
Now consider a general complex exponential z^n. Recall that z^n = (|z|)^n e^{jωn}. We can thus write

Re(z^n) = (|z|)^n cos(ωn)   (7.15)

Im(z^n) = (|z|)^n sin(ωn)   (7.16)

with the magnitude |z| controlling the envelope of the oscillation.
Figure 7.14: (a) If |z| < 1, we have the case of a decaying exponential envelope. (b) If |z| > 1, we have the case of a growing exponential envelope. (c) If |z| = 1, we have the case of a constant envelope.
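The three envelope cases can be illustrated numerically; a small sketch with NumPy (the magnitudes 0.8, 1.2, and 1.0 are arbitrary choices):

```python
import numpy as np

n = np.arange(20)
omega = np.pi / 5

# For each magnitude r, z^n = r^n e^{j omega n}, so |z^n| = r^n is the envelope.
for r in (0.8, 1.2, 1.0):       # decaying, growing, constant envelopes
    z = r * np.exp(1j * omega)
    x = z ** n
    envelope = np.abs(x)
    assert np.allclose(envelope, r ** n)
```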
Figure 7.15: Interact (when online) with a Mathematica CDF demonstrating the Discrete Time Complex Exponential. To download, right-click and save target as .cdf.
Chapter 8
Time Domain Analysis of Discrete Time
Systems
8.1 Discrete Time Systems
8.1.1 Introduction
As you already now know, a discrete time system operates on a discrete time signal input and produces a
discrete time signal output. There are numerous examples of useful discrete time systems in digital signal
processing, such as digital filters for images or sound. The class of systems that are both linear and time invariant, known as discrete time LTI systems, is of particular interest as the properties of linearity and time invariance together allow the use of some of the most important and powerful tools in signal processing.
A system H is said to be linear if it satisfies two important conditions. The first, additivity, states that for every pair of signals x, y we have H(x + y) = H(x) + H(y). The second, homogeneity of degree one, states that for every signal x and scalar a we have H(ax) = aH(x). It is clear that these conditions can be combined into a single condition for linearity. Thus, a system is said to be linear if for all signals x, y and scalars a, b we have that

H(ax + by) = aH(x) + bH(y).   (8.1)
Linearity is a particularly important property of systems as it allows us to leverage the powerful tools of
linear algebra, such as bases, eigenvectors, and eigenvalues, in their study.
A system H is said to be time invariant if a time shift of an input produces the correspondingly shifted output. Denoting by S_T the operator that shifts a signal in time by T ∈ Z, this means that for every T ∈ Z,

S_T H = H S_T.   (8.2)
Time invariance is desirable because it eases computation while mirroring our intuition that, all else equal,
physical systems should react the same to identical inputs at dierent times.
When a system exhibits both of these important properties, powerful analysis tools become available. As will be explained and proven in subsequent modules, computation of the system output for a given input becomes a simple matter of convolving the input with the system's impulse response signal. Also proven later, the fact that complex exponentials are eigenvectors of linear time invariant systems will encourage the use of frequency domain tools, such as the various Fourier transforms and associated transfer functions, to describe the behavior of linear time invariant systems.
Example 8.1
Consider the system H in which

H(f[n]) = 2f[n]   (8.3)

for all signals f. Given any two signals f, g and scalars a, b,

H(af[n] + bg[n]) = 2(af[n] + bg[n]) = a2f[n] + b2g[n] = aH(f[n]) + bH(g[n])   (8.4)

for all integers n. Thus, H is linear. Likewise, for every T ∈ Z and signal f,

S_T(H(f[n])) = S_T(2f[n]) = 2f[n − T] = H(S_T(f[n]))   (8.5)

for all integers n. Thus, H is time invariant. Therefore, H is a linear time invariant system.
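The two defining properties of the example system can also be checked numerically; a short sketch with NumPy (random test signals and a circular shift standing in for S_T):

```python
import numpy as np

# The example system H(f)[n] = 2 f[n], checked for linearity and time invariance.
def H(f):
    return 2 * f

rng = np.random.default_rng(0)
f = rng.standard_normal(16)
g = rng.standard_normal(16)
a, b = 3.0, -1.5

# Linearity: H(a f + b g) = a H(f) + b H(g)
assert np.allclose(H(a * f + b * g), a * H(f) + b * H(g))

# Time invariance: shifting then applying H equals applying H then shifting.
# np.roll implements a circular shift, standing in for S_T on a finite window.
T = 4
assert np.allclose(H(np.roll(f, T)), np.roll(H(f), T))
```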
8.1.2.2 Causality
The causality property requires that a system's output depends only on past and present values of the input. For a discrete-time system, this means that the value of the output y[n₀] at a specific time n₀ can only depend on values of the input x[n] for n ≤ n₀.
8.1.2.3 Stability
There are several definitions of stability, but the one that will be used most frequently in this course will be bounded input, bounded output (BIBO) stability. In this context, a stable system is one in which the output is bounded if the input is also bounded. Similarly, an unstable system is one in which at least one bounded input produces an unbounded output.

In order to understand this concept, we must first look more closely into exactly what we mean by bounded. A bounded signal is any signal such that there exists a value such that the absolute value of the signal is never greater than some value. Since this value is arbitrary, what we mean is that at no point can the signal tend to infinity, including the end behavior.
A bounded signal f[n] is one for which there exists a constant A such that |f[n]| < A for all n. Representing this mathematically, a stable system must have the following property, where x[n] is the input and y[n] is the output:

|y[n]| ≤ M_y < ∞   (8.6)

whenever

|x[n]| ≤ M_x < ∞,   (8.7)

where M_x and M_y both represent finite positive numbers and these relationships hold for all n. Otherwise, the system is unstable.
8.2.1 Introduction
The output of a discrete time LTI system is completely determined by the input and the system's response
to a unit impulse.
System Output
We can determine the system's output, y[n], if we know the system's impulse response,
h[n], and the input, x[n].
Figure 8.1:
The output for a unit impulse input is called the impulse response.
Figure 8.2
Figure 8.3: The unit impulse: a signal with value 1 at the point n = 0, and 0 everywhere else.

The function δ_k[n] = δ[n − k] peaks up where n = k, so scaled and shifted impulses can be used to build up any signal x[n]:

x[n] = Σ_{k=−∞}^{∞} x[k] δ_k[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]   (8.8)
Figure 8.4
Since we know the response of the system to an impulse and any signal can be decomposed into impulses,
all we need to do to nd the response of the system to any signal is to decompose the signal into impulses,
calculate the system's output for every impulse and add the outputs back together.
This process is known as Convolution. Since we are in discrete time, the output y[n] is obtained by convolving the input x[n] with the impulse response h[n].

Figure 8.5
8.3.1 Introduction
Convolution, one of the most important concepts in electrical engineering, can be used to determine the
output a system produces for a given input signal. It can be shown that a linear time invariant system is
completely characterized by its impulse response. The sifting property of the discrete time impulse function
tells us that the input signal to a system can be represented as a sum of scaled and shifted unit impulses.
Thus, by linearity, it would seem reasonable to compute the output signal as the sum of scaled and shifted
unit impulse responses. That is exactly what the operation of convolution accomplishes. Hence, convolution
can be used to determine a linear time invariant system's output from knowledge of the input and the impulse
response.
(f * g)[n] = Σ_{k=−∞}^{∞} f[k] g[n − k]   (8.9)

for all signals f, g defined on Z. It is important to note that the operation of convolution is commutative, meaning that

f * g = g * f   (8.10)

for all signals f, g defined on Z.
Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f * g)[n] = Σ_{k=−∞}^{∞} f[n − k] g[k]   (8.11)

for all signals f, g defined on Z. Convolution has several other important properties not listed here but explained and derived in a later module.
Consider a linear time invariant system H with unit impulse response h. Given a system input signal x, we would like to compute the system output H(x). First, note that

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]   (8.12)

by the sifting property of the unit impulse function. By linearity,

Hx[n] = Σ_{k=−∞}^{∞} x[k] Hδ[n − k].   (8.13)
Since Hδ[n − k] = h[n − k],

Hx[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = (x * h)[n].   (8.14)
Hence, convolution has been dened such that the output of a linear time invariant system is given by the
convolution of the system input with the system unit impulse response.
For all signals f, g defined on Z, the convolution is given by

(f * g)[n] = Σ_{k=−∞}^{∞} f[k] g[n − k] = Σ_{k=−∞}^{∞} f[n − k] g[k].   (8.15)
The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each value of n, that same function must be shifted by n, the product of the two resulting plots formed, and the values of the product summed.
Example 8.2
Recall that the impulse response for a discrete time echoing feedback system with gain a is

h[n] = a^n u[n],   (8.16)

and consider the input

x[n] = b^n u[n].   (8.17)

We know that the output for this input is given by the convolution of the impulse response with the input signal

y[n] = x[n] * h[n].   (8.18)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. However, in this case, each possible choice is equally simple. Thus, we would like to compute

y[n] = Σ_{k=−∞}^{∞} a^k u[k] b^{n−k} u[n − k].   (8.19)

The step functions can be used to further simplify this sum. Therefore,

y[n] = 0 for n < 0   (8.20)

and

y[n] = b^n Σ_{k=0}^{n} (ab^{−1})^k   (8.21)

for n ≥ 0. Hence, provided ab^{−1} ≠ 1, we have that

y[n] = { 0 for n < 0; b^n (1 − (ab^{−1})^{n+1}) / (1 − ab^{−1}) for n ≥ 0 }   (8.22)
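The closed-form answer of the echoing-feedback example can be verified against a direct numerical convolution; a sketch in Python (the values a = 0.5 and b = 0.8 are arbitrary choices with |a|, |b| < 1):

```python
import numpy as np

# h[n] = a^n u[n], x[n] = b^n u[n], y = x * h, compared with the geometric-sum
# closed form y[n] = b^n (1 - (a/b)^(n+1)) / (1 - a/b) for n >= 0.
a, b = 0.5, 0.8
N = 30
n = np.arange(N)

h = a ** n
x = b ** n
y = np.convolve(x, h)[:N]     # full convolution, truncated to n = 0..N-1

closed = b ** n * (1 - (a / b) ** (n + 1)) / (1 - a / b)
assert np.allclose(y, closed)
```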
(f ⊛ g)[n] = Σ_{k=0}^{N−1} f̂[k] ĝ[n − k]   (8.23)

for all signals f, g defined on Z[0, N − 1], where f̂, ĝ denote the periodic extensions of f and g. It is important to note that the operation of circular convolution is commutative, meaning that

f ⊛ g = g ⊛ f   (8.24)

for all signals f, g defined on Z[0, N − 1].
Alternatively, the circular convolution operation could have been just as easily stated using the equivalent definition

(f ⊛ g)[n] = Σ_{k=0}^{N−1} f̂[n − k] ĝ[k]   (8.25)

for all signals f, g defined on Z[0, N − 1], where f̂, ĝ denote the periodic extensions of f and g. Circular convolution has several other important properties not listed here but explained and derived in a later module.
Alternatively, discrete time circular convolution can be expressed as the sum of two summations given by

(f ⊛ g)[n] = Σ_{k=0}^{n} f[k] g[n − k] + Σ_{k=n+1}^{N−1} f[k] g[n − k + N]   (8.26)

for all signals f, g defined on Z[0, N − 1].
Meaningful examples of computing discrete time circular convolutions in the time domain would involve complicated algebraic manipulations dealing with the wrap-around behavior, which would ultimately be more confusing than helpful. Thus, none will be provided in this section. Of course, example computations in the time domain are easy to program and demonstrate. However, discrete time circular convolutions are more easily computed using frequency domain tools, as will be shown in the discrete time Fourier series section.
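As noted above, such computations are easy to program; a sketch of the two-summation (wrap-around) formula, checked against the frequency-domain route (pointwise multiplication of DFTs via NumPy's FFT):

```python
import numpy as np

# Circular convolution written directly from the two-summation formula (8.26).
def circ_conv(f, g):
    N = len(f)
    out = np.zeros(N)
    for n in range(N):
        out[n] = sum(f[k] * g[n - k] for k in range(n + 1)) \
               + sum(f[k] * g[n - k + N] for k in range(n + 1, N))
    return out

rng = np.random.default_rng(1)
f = rng.standard_normal(8)
g = rng.standard_normal(8)

# The DFT turns circular convolution into pointwise multiplication.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(circ_conv(f, g), via_fft)
```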
Consider a linear time invariant system H with unit impulse response h. Given a finite length system input signal x, we would like to compute the system output H(x). First, note that

x[n] = Σ_{k=0}^{N−1} x[k] δ[n − k]   (8.27)

by the sifting property of the unit impulse function. By linearity,

Hx[n] = Σ_{k=0}^{N−1} x[k] Hδ[n − k].   (8.28)
Since Hδ[n − k] = h[n − k],

Hx[n] = Σ_{k=0}^{N−1} x[k] h[n − k] = (x ⊛ h)[n].   (8.29)
Hence, circular convolution has been dened such that the output of a linear time invariant system is given
by the convolution of the system input with the system unit impulse response.
For all signals f, g defined on Z[0, N − 1], the circular convolution is given by

(f ⊛ g)[n] = Σ_{k=0}^{N−1} f[k] g[n − k] = Σ_{k=0}^{N−1} f[n − k] g[k].   (8.30)
The first step in graphically understanding the operation of circular convolution is to plot each of the periodic extensions of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each n ∈ Z[0, N − 1], that same function must be shifted by n, the product of the two resulting plots constructed, and the values of the product summed over Z[0, N − 1].
4 http://www.jhu.edu/signals/discreteconv/index.html
Figure 8.6: Interact (when online) with the Mathematica CDF demonstrating Discrete Linear Convolution. To download, right-click and save file as .cdf.
8.4.1 Introduction
We have already shown the important role that discrete time convolution plays in signal processing. This section provides discussion and proof of some of the important properties of discrete time convolution. Analogous properties can be shown for discrete time circular convolution with trivial modification of the proofs provided except where explicitly noted otherwise.
The operation of convolution is associative. That is, for all discrete time signals f1, f2, f3 the following relationship holds:

f1 * (f2 * f3) = (f1 * f2) * f3   (8.31)
In order to show this, note that

(f1 * (f2 * f3))[n] = Σ_{k1=−∞}^{∞} Σ_{k2=−∞}^{∞} f1[k1] f2[k2] f3[(n − k1) − k2]
                    = Σ_{k1=−∞}^{∞} Σ_{k2=−∞}^{∞} f1[k1] f2[(k1 + k2) − k1] f3[n − (k1 + k2)]
                    = Σ_{k3=−∞}^{∞} Σ_{k1=−∞}^{∞} f1[k1] f2[k3 − k1] f3[n − k3]
                    = ((f1 * f2) * f3)[n]   (8.32)

proving the relationship as desired through the substitution k3 = k1 + k2.
8.4.2.2 Commutativity
The operation of convolution is commutative. That is, for all discrete time signals f1, f2 the following relationship holds:

f1 * f2 = f2 * f1   (8.33)

In order to show this, note that

(f1 * f2)[n] = Σ_{k1=−∞}^{∞} f1[k1] f2[n − k1] = Σ_{k2=−∞}^{∞} f1[n − k2] f2[k2] = (f2 * f1)[n]   (8.34)

proving the relationship as desired through the substitution k2 = n − k1.
8.4.2.3 Distributivity
The operation of convolution is distributive over the operation of addition. That is, for all discrete time signals f1, f2, f3 the following relationship holds:

f1 * (f2 + f3) = f1 * f2 + f1 * f3   (8.35)

In order to show this, note that

(f1 * (f2 + f3))[n] = Σ_{k=−∞}^{∞} f1[k] (f2[n − k] + f3[n − k]) = Σ_{k=−∞}^{∞} f1[k] f2[n − k] + Σ_{k=−∞}^{∞} f1[k] f3[n − k] = (f1 * f2 + f1 * f3)[n]   (8.36)

proving the relationship as desired.
8.4.2.4 Multilinearity
The operation of convolution is linear in each of the two function variables. Additivity in each variable results from distributivity of convolution over addition. Homogeneity of order one in each variable results from the fact that for all discrete time signals f1, f2 and scalars a the following relationship holds:

a(f1 * f2) = (af1) * f2 = f1 * (af2)   (8.37)

In order to show this, note that

(a(f1 * f2))[n] = a Σ_{k=−∞}^{∞} f1[k] f2[n − k]
               = Σ_{k=−∞}^{∞} (af1[k]) f2[n − k] = ((af1) * f2)[n]
               = Σ_{k=−∞}^{∞} f1[k] (af2[n − k]) = (f1 * (af2))[n]   (8.38)

proving the relationship as desired.
8.4.2.5 Conjugation
The operation of convolution has the following property for all discrete time signals f1, f2, where conj(·) denotes complex conjugation:

conj(f1 * f2) = conj(f1) * conj(f2)   (8.39)

In order to show this, note that

(conj(f1 * f2))[n] = conj(Σ_{k=−∞}^{∞} f1[k] f2[n − k])
                   = Σ_{k=−∞}^{∞} conj(f1[k] f2[n − k])
                   = Σ_{k=−∞}^{∞} conj(f1[k]) conj(f2[n − k])
                   = (conj(f1) * conj(f2))[n]   (8.40)

proving the relationship as desired.
For all discrete time signals f1, f2, where S_T is the operator that shifts a signal in time by T ∈ Z, the following relationship holds:

S_T(f1 * f2) = (S_T f1) * f2 = f1 * (S_T f2)   (8.41)

In order to show this, note that

S_T(f1 * f2)[n] = Σ_{k=−∞}^{∞} f2[k] f1[(n − T) − k] = Σ_{k=−∞}^{∞} f2[k] (S_T f1)[n − k] = ((S_T f1) * f2)[n]

S_T(f1 * f2)[n] = Σ_{k=−∞}^{∞} f1[k] f2[(n − T) − k] = Σ_{k=−∞}^{∞} f1[k] (S_T f2)[n − k] = (f1 * (S_T f2))[n]   (8.42)

proving the relationship as desired.
The operation of convolution has the following property for all discrete time signals f, where δ is the unit sample function:

f * δ = f   (8.43)

In order to show this, note that

(f * δ)[n] = Σ_{k=−∞}^{∞} f[k] δ[n − k] = f[n] Σ_{k=−∞}^{∞} δ[n − k] = f[n]   (8.44)

proving the relationship as desired.
8.4.2.8 Width
The operation of convolution has the following property for all discrete time signals f1, f2, where Duration(f) gives the duration of a signal f:

Duration(f1 * f2) = Duration(f1) + Duration(f2) − 1   (8.45)

In order to show this, note that (f1 * f2)[n] is nonzero only for those n such that f1[k] f2[n − k] is nonzero for some k. When viewing one function as reversed and sliding past the other, it is easy to see that the two overlap on an interval of length Duration(f1) + Duration(f2) − 1. Note that this is not always true of circular convolution of finite length and periodic signals, as there is then a maximum possible duration within a period.
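Several of the properties above can be spot-checked numerically on finite-length signals; a sketch using NumPy's `np.convolve`, which implements the convolution sum for such signals:

```python
import numpy as np

rng = np.random.default_rng(2)
f1 = rng.standard_normal(5)
f2 = rng.standard_normal(7)
f3 = rng.standard_normal(4)

# Commutativity: f1 * f2 = f2 * f1
assert np.allclose(np.convolve(f1, f2), np.convolve(f2, f1))

# Associativity: f1 * (f2 * f3) = (f1 * f2) * f3
assert np.allclose(np.convolve(f1, np.convolve(f2, f3)),
                   np.convolve(np.convolve(f1, f2), f3))

# Width: Duration(f1 * f2) = Duration(f1) + Duration(f2) - 1
assert len(np.convolve(f1, f2)) == len(f1) + len(f2) - 1
```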
8.5.1 Introduction
We have previously defined the system properties of causality and bounded-input bounded-output (BIBO) stability. We have also determined that a linear time-invariant (LTI) system is completely determined by its impulse response h[n]: the output y[n] is given by the convolution of the input x[n] with h[n]. It should not be surprising, then, that one can determine whether an LTI system is causal or BIBO stable simply by inspecting its impulse response h[n].
8.5.2 Causality
Recall that a system is causal if its output y[n₀] at time n₀ depends only on values of the input x[n] for n ≤ n₀. We write the output using the convolution sum:

y[n] = x[n] * h[n] = Σ_{m=−∞}^{∞} x[m] h[n − m].   (8.46)

We replace the time variable n by a fixed value n₀:

y[n₀] = Σ_{m=−∞}^{∞} x[m] h[n₀ − m].   (8.47)

Every term with m > n₀ (where the input sample lies in the future of n₀) must have its contribution to the sum nulled out by requiring the value of the impulse response h[n₀ − m] = 0; equivalently, an LTI system is causal if and only if h[n] = 0 for n < 0.

8.5.3 Stability
To determine whether the system is BIBO stable, assume that the input is bounded, |x[m]| ≤ M < ∞ for all m, and write the output using the convolution sum:

y[n] = x[n] * h[n] = Σ_{m=−∞}^{∞} x[m] h[n − m].   (8.48)
We apply the absolute value to both sides and use the triangle inequality on the sum:

|y[n]| = |Σ_{m=−∞}^{∞} x[m] h[n − m]| ≤ Σ_{m=−∞}^{∞} |x[m] h[n − m]|
       = Σ_{m=−∞}^{∞} |x[m]| |h[n − m]| ≤ Σ_{m=−∞}^{∞} M |h[n − m]|
       = M Σ_{m=−∞}^{∞} |h[n − m]|.   (8.49)

Thus, we obtain that the output is bounded whenever Σ_{n=−∞}^{∞} |h[n]| is finite.
8.5.4 Summary
The derivations above show that it is significantly easier to verify whether a system is causal and/or BIBO stable when it is linear and time-invariant: both checks reduce to simple evaluations of the system's impulse response h[n].
Chapter 9
Discrete Time Fourier Transform
(DTFT)
9.1 Discrete Time Aperiodic Signals
9.1.1 Introduction
This module describes the type of signals acted on by the Discrete Time Fourier Transform. A signal is periodic if

f[n] = f[n + mN] for all m ∈ Z,   (9.1)

where N > 0 represents the fundamental period, the smallest number of samples required for the signal to repeat. Because of this, you may also see a signal referred to as an N-periodic signal. Signals that are periodic in time repeat themselves in each cycle; however, only integers are allowed as the time variable in discrete time, and we denote signals in such cases as f[n], where n ∈ Z. Here's an example of a discrete-time periodic signal:
Figure 9.1:
We can think of periodic functions (with period N) in two different ways:

1. as functions defined on all of Z;

Figure 9.2:

2. or, we can cut out all of the redundancy, and think of them as functions on an interval [0, N] (or, more generally, [a, a + N]).
Figure 9.3: An N-periodic signal: remove the redundancy of the periodic function so that f[n] is undefined outside [0, N].

In contrast, a signal is aperiodic if it is not periodic; i.e., there exists no N > 0 such that equation (9.1) holds. This broader class of signals can only be acted upon by the DTFT.
Suppose we have such an aperiodic function f[n]. We can construct a periodic extension of f[n], called f_{N0}[n], in which f[n] is repeated every N0 samples. If we take the limit as N0 → ∞, we obtain a precise model of an aperiodic signal for which all rules that govern periodic signals can be applied, including Fourier Analysis (with an important modification).
Click on the above thumbnail image (when online) to download an interactive Mathematica
Player testing Periodic versus Aperiodic Signals. To download, right-click and save as .cdf.
Figure 9.4:
9.1.4 Conclusion
A discrete periodic signal is completely defined by its values in one period, such as the interval [0, N]. Any aperiodic signal can be defined as an infinite sum of periodic functions, a useful definition that makes it possible to use Fourier Analysis on it by assuming all frequencies are present in the signal.
9.2.1 Introduction
Prior to reading this module, the reader should already have some experience with linear algebra and should specifically be familiar with the eigenvectors and eigenvalues of linear operators. A linear time invariant system is a linear operator defined on a function space that commutes with every time shift operator on that function space. Thus, we can also consider the eigenvector functions, or eigenfunctions, of a system. It is particularly easy to calculate the output of a system when an eigenfunction is the input, as the output is simply the eigenfunction scaled by the associated eigenvalue. As will be shown, discrete time complex exponentials serve as eigenfunctions of linear time invariant systems operating on discrete time signals.
Consider a linear time invariant system H with impulse response h operating on some space of infinite length discrete time signals. Recall that the output H(x[n]) for a given input x[n] is given by the discrete time convolution of the impulse response with the input:

H(x[n]) = Σ_{k=−∞}^{∞} h[k] x[n − k].   (9.2)

Now consider the input x[n] = e^{sn} where s ∈ C. Computing the output for this input,

H(e^{sn}) = Σ_{k=−∞}^{∞} h[k] e^{s(n−k)} = Σ_{k=−∞}^{∞} h[k] e^{sn} e^{−sk} = e^{sn} Σ_{k=−∞}^{∞} h[k] e^{−sk}.   (9.3)

Thus,

H(e^{sn}) = λ_s e^{sn}   (9.4)

where

λ_s = Σ_{k=−∞}^{∞} h[k] e^{−sk}   (9.5)

is the eigenvalue corresponding to the eigenvector e^{sn}.
Whether e^{sn} is a valid eigenvector for a particular s ∈ C depends on the space on which the operator operates. However, for our purposes, complex exponentials will be accepted as eigenvectors of linear time invariant systems. A similar argument using discrete time circular convolution would also hold for spaces of finite length signals.
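The eigenfunction property (9.4)-(9.5) can be illustrated numerically; a sketch using an arbitrary short FIR impulse response so that all sums are finite:

```python
import numpy as np

# For input x[n] = e^{j omega n}, the output of an LTI system equals the input
# scaled by lambda = sum_k h[k] e^{-j omega k}.
h = np.array([1.0, -0.5, 0.25])        # arbitrary impulse response h[0..2]
omega = np.pi / 3

x = np.exp(1j * omega * np.arange(40))
y = np.convolve(x, h)                  # y[n] = sum_k h[k] x[n-k]

lam = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))

# Compare on interior samples, away from the convolution's edge effects.
n = np.arange(10, 30)
assert np.allclose(y[n], lam * x[n])
```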
9.3.1 Introduction
In this module, we will derive an expansion for arbitrary discrete-time functions, and in doing so, derive the Discrete Time Fourier Transform (DTFT). Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems, calculating the output of an LTI system H given the complex exponential e^{jωn} as an input amounts to simple multiplication, where H(ω) ∈ C is the eigenvalue corresponding to e^{jωn}. As shown in the figure, a simple exponential input would yield the output

y[n] = H(ω) e^{jωn}   (9.6)

Figure 9.5:
Because H is linear, calculating y[n] for combinations of complex exponentials is also straightforward: an input Σ_l c_l e^{jω_l n} yields the output Σ_l c_l H(ω_l) e^{jω_l n}. The action of H on such an input is easy to explain: H independently scales each exponential component e^{jω_l n} by a different complex number H(ω_l) ∈ C. As such, if we can write a signal as a combination of complex exponentials, we can easily calculate the output of a system.
Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Discrete-Time Fourier Transform (DTFT). Because the DTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the DTFT we simply let N go to infinity. This will also change the summation over integers to an integration over real numbers.
Recall that any N-periodic signal f[n] can be written

f[n] = Σ_{k=0}^{N−1} c_k e^{jω₀kn}   (9.7)

where ω₀ = 2π/N is the fundamental frequency. For almost all f[n] of practical interest, there exist c_k to make (9.7) true. If f[n] is finite energy, then the equality in (9.7) holds in the sense of energy convergence; with discrete-time signals, there are no concerns for divergence as there are with continuous-time signals.
The c_k, called the Fourier coefficients, tell us "how much" of the sinusoid e^{jω₀kn} is in f[n]. The formula shows f[n] as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials e^{jω₀kn}, k ∈ Z, form a basis for the space of N-periodic discrete time functions.
9.3.2.1 Equations
Discrete-Time Fourier Transform

X(ω) = Σ_{n=−∞}^{∞} f[n] e^{−jωn}   (9.8)

Inverse DTFT

x[n] = (1/2π) ∫_{2π} X(ω) e^{jωn} dω   (9.9)
warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the radial frequency variable ω in the exponential, where ω = 2πf, but it is also common to include the more explicit expression e^{j2πfn} in the exponential, to make it clear which frequency variable is being used.

6 "DSP notation" <http://legacy.cnx.org/content/m10161/latest/>
Click on the above thumbnail image (when online) to download an interactive Mathematica
Player demonstrating Discrete Time Fourier Transform. To Download, right-click and save target as .cdf.
Figure 9.6:
X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}   (9.10)

x[n] = (1/2π) ∫_{2π} X(ω) e^{jωn} dω   (9.11)
9.4.1 Introduction
This module will look at some of the basic properties of the Discrete-Time Fourier Transform (DTFT).

note: We will be discussing these properties for aperiodic, discrete-time signals but understand that very similar properties hold for continuous-time signals and periodic signals as well.
Sequence Domain ↔ Frequency Domain

Linearity: a₁s₁[n] + a₂s₂[n] ↔ a₁S₁(ω) + a₂S₂(ω)
Conjugate Symmetry (s[n] real): s[n] ↔ S(ω) = conj(S(−ω))
Expansion: s_c[n] = { s[n/c] if n/c is an integer; 0 otherwise } ↔ S(cω)
Time Reversal: s[−n] ↔ S(−ω)
Time Delay: s[n − n₀] ↔ e^{−jωn₀} S(ω)
Multiplication by n: n s[n] ↔ j dS(ω)/dω
Sum: Σ_{n=−∞}^{∞} s[n] ↔ S(0)
Value at Origin: s[0] ↔ (1/2π) ∫_{2π} S(ω) dω
Time Convolution: s₁[n] * s₂[n] ↔ S₁(ω) S₂(ω)
Frequency Convolution: s₁[n] s₂[n] ↔ (1/2π) ∫_{2π} S₁(u) S₂(ω − u) du
Parseval's Theorem: Σ_{n=−∞}^{∞} (|s[n]|)² ↔ (1/2π) ∫_{2π} (|S(ω)|)² dω
Complex Modulation: e^{jω₀n} s[n] ↔ S(ω − ω₀)
Amplitude Modulation: s[n] cos(ω₀n) ↔ (S(ω − ω₀) + S(ω + ω₀))/2
                      s[n] sin(ω₀n) ↔ (S(ω − ω₀) − S(ω + ω₀))/(2j)
Example 9.1
We will begin with the following signal:

z[n] = af₁[n] + bf₂[n]   (9.12)

Now, after we take the Fourier transform, shown in the equation below, notice that the linear combination of the terms is unaffected by the transform:

Z(ω) = aF₁(ω) + bF₂(ω)   (9.13)
9.4.3.2 Symmetry
Symmetry is a property that can make life quite easy when solving problems involving Fourier transforms. Basically what this property says is that since a rectangular function in time is a sinc function in frequency, then a sinc function in time will be a rectangular function in frequency. This is a direct result of the similarity between the forward DTFT and the inverse DTFT. The only difference is the scaling by 2π and a frequency reversal.

9.4.3.3 Time Scaling
This property deals with the effect on the frequency-domain representation of a signal when the time variable is altered. The key idea is that a unit pulse with a very short duration in time corresponds to a wide spread in frequency, and vice versa.
In contrast to the CTFT property, the DTFT time scaling property is available only for expansion in
the time domain. This is because decimation discards samples of the original signal and therefore there is
no unique relationship between the original signal and a decimated signal (that is, a decimated signal could
correspond to many original signals) that would provide a single transformation between the original DTFT
and the "decimated" DTFT. The intuition from CTFT still holds for expansion: expanding the signal in
time compacts the DTFT in frequency.
Example 9.2
We will begin by letting z[n] = f[n − η]. Now let us take the Fourier transform of z[n]:

Z(ω) = Σ_{n=−∞}^{∞} f[n − η] e^{−jωn}   (9.14)

Now let us make a simple change of variables, where σ = n − η. Through the calculations below, you can see that only the variable in the exponential is altered, thus only changing the phase in the frequency domain:

Z(ω) = Σ_{σ=−∞}^{∞} f[σ] e^{−j(σ+η)ω} = e^{−jηω} Σ_{σ=−∞}^{∞} f[σ] e^{−jσω} = e^{−jηω} F(ω)   (9.15)
9.4.3.5 Convolution
Convolution is one of the big reasons for converting signals to the frequency domain, since convolution in time becomes multiplication in frequency. This property is also an excellent example of the symmetry between time and frequency. It also shows that there may be little to gain by changing to the frequency domain when multiplication in time is involved.
We will introduce the convolution sum here, but if you have not seen convolution before or need to refresh your memory, then look at the discrete-time convolution derivation:

y[n] = f₁[n] * f₂[n] = Σ_{η=−∞}^{∞} f₁[η] f₂[n − η]   (9.16)
Since LTI systems can be represented in terms of difference equations, it is apparent with this property that converting to the frequency domain may allow us to convert these complicated difference equations into simpler equations involving multiplication and addition. This is often looked at in more detail during the study of the Z Transform.
Σ_{n=−∞}^{∞} (|f[n]|)² = (1/2π) ∫_{2π} (|F(ω)|)² dω   (9.17)

Parseval's relation tells us that the energy of a signal is equal to the energy of its Fourier transform.
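Parseval's relation can be checked numerically for a finite-length signal; a sketch using the DFT (np.fft.fft), whose discrete form of the relation is sum |x[n]|² = (1/N) sum |X[k]|²:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
X = np.fft.fft(x)

# Energy in time equals (scaled) energy in frequency.
time_energy = np.sum(np.abs(x) ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)
assert np.isclose(time_energy, freq_energy)
```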
Figure 9.7

z[n] = (1/2π) ∫_{2π} F(ω − φ) e^{jωn} dω   (9.18)
Figure 9.8: Interactive Signal Processing Laboratory Virtual Instrument created using NI's Labview.
Time Domain x[n] ↔ Frequency Domain X(ω)   (Notes)

1 ↔ 2π Σ_{m=−∞}^{∞} δ(ω − 2πm)
e^{jω₀n} ↔ 2π Σ_{m=−∞}^{∞} δ(ω − ω₀ − 2πm)   (ω₀ a real number)
δ[n] ↔ 1
δ[n − M] ↔ e^{−jωM}   (M an integer)
Σ_{m=−∞}^{∞} δ[n − Mm] ↔ (2π/M) Σ_{k=−∞}^{∞} δ(ω − 2πk/M)   (M an integer)
u[n] ↔ 1/(1 − e^{−jω}) + Σ_{k=−∞}^{∞} π δ(ω + 2πk)
a^n u[n] ↔ 1/(1 − a e^{−jω})   (if |a| < 1)
−a^n u[−(n + 1)] ↔ 1/(1 − a e^{−jω})   (if |a| > 1)
a^{|n|} ↔ (1 − a²)/(1 − 2a cos(ω) + a²)   (if |a| < 1)
n a^n u[n] ↔ a e^{jω}/(e^{jω} − a)²   (if |a| < 1)
sin(an) ↔ jπ Σ_{m=−∞}^{∞} [δ(ω + a − 2πm) − δ(ω − a − 2πm)]   (a a real number)
cos(an) ↔ π Σ_{m=−∞}^{∞} [δ(ω − a − 2πm) + δ(ω + a − 2πm)]   (a a real number)
(ω_c/2π) sinc²(ω_c n/2π) ↔ Σ_{m=−∞}^{∞} τ((ω − 2πm)/(2ω_c))   (ω_c a real number, 0 < ω_c ≤ π)
14 http://www.jhu.edu/signals/dtftprops/indexDTFTprops.htm
15 This content is available online at <http://legacy.cnx.org/content/m47373/1.5/>.
(ω_c/π) sinc(ω_c n/π) ↔ Σ_{m=−∞}^{∞} p((ω − 2πm)/(2ω_c))   (ω_c a real number, 0 < ω_c ≤ π)
u[n] − u[n − M] ↔ e^{−jω(M−1)/2} sin(ωM/2)/sin(ω/2)   (M an integer)
(ω_c/(π(n + a))) {cos[ω_c(n + a)] − sinc[ω_c(n + a)]} ↔ jω p(ω/(2ω_c)) e^{jaω}   (ω_c, a real numbers, 0 < ω_c ≤ π)
(1/(πn²)) [(−1)^n − 1] for n ≠ 0 ↔ |ω|   (for |ω| ≤ π)
{ 0 for n = 0; (−1)^n/n elsewhere } ↔ jω   (differentiator filter)
{ 2/(πn) for n odd; 0 for n even } ↔ { j for ω < 0; 0 for ω = 0; −j for ω > 0 }   (Hilbert Transform)

Table 9.2
Notes
p(t) denotes the unit pulse function

p(t) = { 1 if |t| ≤ 1/2; 0 otherwise }   (9.20)

and τ(t) denotes the triangle function

τ(t) = { 1 + t if −1 ≤ t ≤ 0; 1 − t if 0 < t ≤ 1; 0 otherwise }   (9.21)
9.6.1 Introduction
This module discusses convolution of discrete signals in the time and frequency domains. Recall the transform pair, which maps an infinite-length discrete-time signal to a 2π-periodic function of frequency.

DTFT

X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}   (9.22)

Inverse DTFT

x[n] = (1/2π) ∫_{2π} X(ω) e^{jωn} dω   (9.23)
9.6.2.2 Demonstration
Figure 9.9: Interact (when online) with a Mathematica CDF demonstrating the Discrete Convolution. To download, right-click and save as .cdf.
The convolution sum is

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]   (9.24)

As with continuous time, convolution is represented by the symbol *, and can be written as

y[n] = x[n] * h[n]   (9.25)
Convolution is commutative. For more information on the characteristics of convolution, read about the
Properties of Convolution (Section 4.4).
Let f and g be two functions with convolution f * g. Let F be the Fourier transform operator. Then

F(f * g) = F(f) · F(g)   (9.26)

F(f · g) = F(f) * F(g)   (9.27)

By applying the inverse Fourier transform F⁻¹, we can write:

f * g = F⁻¹(F(f) · F(g))   (9.28)
9.6.3 Conclusion
The Fourier transform of a convolution is the pointwise product of Fourier transforms.
In other words,
convolution in one domain (e.g., time domain) corresponds to point-wise multiplication in the other domain
(e.g., frequency domain).
Chapter 10
Computing Fourier Transforms
10.1 Discrete Fourier Transform (DFT)
10.1.1 Introduction
The discrete-time Fourier transform (DTFT) can be evaluated when we have an analytic expression for the
signal. This is a good match to the digital world, where signals are discrete-time and quantized, and thus
it is crucial to implement transforms like the DTFT in a computer. The formula for the DTFT is a sum,
which conceptually can be easily computed save for two issues.
Signal duration: The sum extends over the signal's duration, which must be finite to compute the signal's spectrum.

Continuous frequency: Subtler than the signal duration issue is the fact that the frequency variable is continuous: it may only need to span one period, e.g. [−1/2, 1/2] or [0, 1], but the DTFT still must be evaluated at a continuum of frequency values.

To address these issues, consider a signal x[n] obtained by periodizing a signal xN[n] of finite length N, so that x[n] = x[n − N] for all n. One can also write this relationship using a convolution with a train of impulses:

x[n] = xN[n mod N] = xN[n] * Σ_{k=−∞}^{∞} δ[n − kN].   (10.1)

Recall that each convolution with a shifted impulse will contribute one copy of the original signal, therefore providing the periodization.
We denote the DTFTs of x[n] and xN[n] by X(ω) and XN(ω), respectively. Then

X(ω) = XN(ω) F{ Σ_{k=−∞}^{∞} δ[n − kN] } = XN(ω) Σ_{m=−∞}^{∞} δ(Nω − 2πm) = XN(ω) (1/N) Σ_{m=−∞}^{∞} δ(ω − 2πm/N).   (10.2)

Thus X(ω) consists of impulses spaced 2π/N apart, whose weights are samples of XN(ω) taken with a spacing of 2π/N. Moreover, XN(ω) is 2π-periodic; thus, X(ω) is uniquely defined by a set of N distinct values of XN(ω). Conceptually, we see that periodization in time is reflected as sampling in frequency, and that the DTFT of a finite-length signal of length N is uniquely determined by N distinct values of its DTFT. This discrete, finite-length relationship between a signal and its Fourier transform gives rise to the discrete Fourier transform (DFT).
The DFT of the signal x[n] is obtained by sampling one period of its DTFT with a spacing of 2π/N:

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j2πkn/N}   (10.3)

To recover x[n] from these samples of X(ω), the sampled nature of the transform turns the corresponding integral over the range of frequencies [0, 2π] into a sum:

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j2πkn/N}   (10.4)

This pair of equations provides the definition for the forward and inverse DFT, respectively.
N.
wN = ej2/N
X [k] =
PN 1
n=0
kn
x [n] (wN )
, x [n] =
PN 1
k=0
kn
(10.5)
X [k] (wN ) .
One may observe that the equation for these two transforms are very reminiscent of one another: in fact,
has many additional useful properties that signicantly simplify it computation and that are leveraged by
modern implementations such as the Fast Fourier Transform (FFT).
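As an illustration, the transform pair (10.3)-(10.4) can be computed directly with a few lines of Python. This is a sketch for exposition only, not an efficient implementation:

```python
import cmath

def dft(x):
    """Forward DFT: X[k] = sum_n x[n] * exp(-j 2 pi k n / N), per (10.3)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: x[n] = (1/N) sum_k X[k] * exp(+j 2 pi k n / N), per (10.4)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Round trip: the inverse DFT recovers the original signal.
x = [1.0, 2.0, 0.0, -1.0]
X = dft(x)
x_back = idft(X)
```

Note that the round trip works for any length-N sequence, reflecting that the N DFT values uniquely determine the signal.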
10.1.5 Summary

By restricting our attention to finite-length signals, it is possible to define a version of the Fourier transform that also has finite length and that can be easily obtained by a computer. This new discrete Fourier transform is intimately related to the discrete-time Fourier transform and can be interpreted as performing a sampling operation in the frequency domain.
We now have a way of computing the spectrum for an arbitrary signal: the Discrete Fourier Transform (DFT) computes the spectrum at N equally spaced frequencies from a length-N sequence. An issue that never arises in analog "computation," like that performed by a circuit, is how much work it takes to perform the signal processing operation such as filtering. In computation, this consideration translates to the number of basic computational steps required to perform the needed processing.
The number of steps, known as the complexity, becomes equivalent to how long the computation takes (how long must we wait for an answer). Complexity is not so much tied to specific computers or programming languages but to how many steps are required on any computer. Thus, a procedure's stated complexity says that the time taken will be proportional to some function of the amount of data used in the computation and the amount demanded.
For example, consider the formula for the discrete Fourier transform. For each frequency we choose, we must multiply each signal value by a complex number and add together the results. For a real-valued signal, each real-times-complex multiplication requires two real multiplications, meaning we have 2N multiplications to perform. To add the results together, we must keep the real and imaginary parts separate. Adding N numbers requires N − 1 additions, and we need 2(N − 1) of them to handle both parts. Consequently, each frequency requires 2N + 2(N − 1) = 4N − 2 basic computational steps. As we have N frequencies, the total number of basic computations is N(4N − 2).
In complexity calculations, we only worry about what happens as the data lengths increase, and take the dominant term, here the 4N² term, as reflecting how much work is involved in making the computation. As multiplicative constants don't matter since we are making a "proportional to" evaluation, we find the DFT is an O(N²) computational procedure, read "order N-squared". Thus, if we double the length of the data, we would expect the computation time to approximately quadruple.
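The operation count above can be checked with a short Python sketch, using the count N(4N − 2) derived in the text:

```python
def dft_real_ops(N):
    """Basic real operations to compute an N-point DFT of real data:
    per frequency, 2N real multiplications plus 2(N - 1) real additions,
    taken over N frequencies."""
    return N * (4 * N - 2)

# Doubling the data length approximately quadruples the work,
# as expected for an O(N^2) procedure.
ratio = dft_real_ops(2000) / dft_real_ops(1000)
```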
Exercise 10.2.1   (Solution on p. 145.)
In making the complexity evaluation for the DFT, we assumed the data to be real. Three questions emerge. First of all, the spectra of such signals have conjugate symmetry, meaning that negative frequency components are conjugates of the corresponding positive frequency components, so only about half of the spectral values need to be found. Does this symmetry change the DFT's complexity? Secondly, suppose the data are complex-valued; what is the DFT's complexity now? Finally, a less important but interesting question: suppose we want K frequency values instead of N; now what is the complexity?
10.3.1 Introduction

The Fast Fourier Transform (FFT) is an efficient O(N log N) algorithm for calculating DFTs. The FFT exploits symmetries in the DFT matrix to take a "divide and conquer" approach. We will first discuss deriving the actual FFT algorithm, some of its implications for the DFT, and a speed comparison to drive home the importance of this powerful algorithm.

To derive the FFT, we assume that the signal's duration is a power of two: N = 2^l. Consider what happens to the even-numbered and odd-numbered elements of the sequence in the DFT calculation.
140
S[k] = [ s[0] + s[2] e^{−j2π(2k)/N} + ... + s[N − 2] e^{−j2π(N−2)k/N} ]
       + [ s[1] e^{−j2πk/N} + s[3] e^{−j2π(3k)/N} + ... + s[N − 1] e^{−j2π(N−1)k/N} ]

     = [ s[0] + s[2] e^{−j2πk/(N/2)} + ... + s[N − 2] e^{−j2π(N/2 − 1)k/(N/2)} ]
       + [ s[1] + s[3] e^{−j2πk/(N/2)} + ... + s[N − 1] e^{−j2π(N/2 − 1)k/(N/2)} ] e^{−j2πk/N},
       k ∈ {0, ..., N − 1}.   (10.6)

Each term in brackets has the form of a length-N/2 DFT. The first one is a DFT of the even-numbered elements, and the second of the odd-numbered elements. The first DFT is combined with the second multiplied by the complex exponential e^{−j2πk/N}. The half-length transforms are each evaluated at frequency indices k ∈ {0, ..., N − 1}. Normally, the number of frequency indices in a DFT calculation ranges between zero and the transform length minus one.
The computational advantage of the FFT comes from recognizing the periodic nature of the discrete Fourier transform. The FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by e^{−j2πk/N}, which is not periodic over N/2, to rewrite the length-N DFT. Figure 10.1 (Length-8 DFT decomposition) illustrates this decomposition. As it stands, we now compute two length-N/2 transforms (complexity 2O(N²/4)), multiply one of them by the complex exponential (complexity O(N)), and add the results (complexity O(N)). At this point, the total complexity is still dominated by the half-length DFT calculations, but the proportionality coefficient has been reduced. Because the transform lengths are still powers of two, each of the half-length transforms can in turn be simplified into two quarter-length
transforms, each of these to two eighth-length ones, etc. This decomposition continues until we are left with
length-2 transforms. This transform is quite simple, involving only additions. Thus, the first stage of the FFT has N/2 length-2 transforms (see the bottom part of Figure 10.1 (Length-8 DFT decomposition)). Pairs of these transforms are combined by adding one to the other multiplied by a complex exponential. Each pair requires 4 additions and 4 multiplications, giving a total number of computations equaling 8 · (N/4) = 2N. This number of computations does not change from stage to stage. Because the number of stages, the number of times the length can be divided by two, equals log2 N, the FFT has O(N log N) complexity.
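The even/odd decomposition in (10.6) translates directly into a recursive program. The Python sketch below is illustrative rather than optimized; it applies the split at each stage and can be checked against the direct DFT sum:

```python
import cmath

def fft(s):
    """Radix-2 decimation-in-time FFT; len(s) must be a power of two."""
    N = len(s)
    if N == 1:
        return list(s)
    even = fft(s[0::2])          # length-N/2 DFT of even-indexed samples
    odd = fft(s[1::2])           # length-N/2 DFT of odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + twiddle            # butterfly: combine the two
        out[k + N // 2] = even[k] - twiddle   # halves, reusing both products
    return out

# Check against the direct O(N^2) DFT sum for a length-8 signal.
s = [0.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0, 0.0]
direct = [sum(s[n] * cmath.exp(-2j * cmath.pi * k * n / 8) for n in range(8))
          for k in range(8)]
fast = fft(s)
```

The subtraction in the second butterfly output uses the fact that e^{−j2π(k+N/2)/N} = −e^{−j2πk/N}, which is exactly the periodicity reuse described above.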
Figure 10.1: Length-8 DFT decomposition. (a) The initial decomposition of a length-8 DFT into the terms using even- and odd-indexed inputs marks the first phase of developing the FFT algorithm. (b) When these half-length transforms are successively decomposed, we are left with the diagram shown in the bottom panel that depicts the length-8 FFT computation.
Doing an example will make computational savings more obvious. Let's look at the details of a length-8 DFT. As shown in Figure 10.1 (Length-8 DFT decomposition), we first decompose the DFT into two length-4 DFTs, with the outputs added and subtracted together in pairs. Considering Figure 10.1 (Length-8 DFT decomposition) as the frequency index goes from 0 through 7, we recycle values from the length-4 DFTs into the final calculation because of the periodicity of the DFT output. Examining how pairs of outputs are collected together, we create the basic computational element known as a butterfly (Figure 10.2 (Butterfly)).
Figure 10.2: Butterfly. The basic computational element of the fast Fourier transform is the butterfly. It takes two complex numbers, represented by a and b, and forms the quantities shown. Each butterfly requires one complex multiplication and two complex additions.
By considering together the computations involving common output frequencies from the two half-length
DFTs, we see that the two complex multiplies are related to each other, and we can reduce our computational
work even further. By further decomposing the length-4 DFTs into two length-2 DFTs and combining their
outputs, we arrive at the diagram summarizing the length-8 fast Fourier transform (Figure 10.1 (Length-8
DFT decomposition)).
Although most of the complex multiplies are quite simple (multiplying by e^{−jπ} means negating real and imaginary parts), let's count those for purposes of evaluating the complexity as full complex multiplies. We have N/2 = 4 complex multiplies and 2N = 16 additions for each stage and log2 N = 3 stages, making the number of basic computations (3N/2) log2 N as predicted.

Exercise 10.3.1   (Solution on p. 145.)
Note that the ordering of the input sequence in the two parts of Figure 10.1 (Length-8 DFT decomposition) isn't quite the same. Why not? How is the ordering determined?
Figure 10.3: This figure shows how much slower the computation time of an O(N log N) process grows than that of an O(N²) process.
N:        10    100    1000    10^6      10^9
N²:       100   10^4   10^6    10^12     10^18
N log N:  10    200    3000    6·10^6    9·10^9

Table 10.1
Say you have a 1 MFLOP machine (a million "floating point" operations per second). Let N = 1 million = 10^6. An O(N²) algorithm takes 10^12 flops, or 10^6 seconds, about 11.5 days. An O(N log N) algorithm takes 6 · 10^6 flops, or 6 seconds.

note: N = 1 million is not unreasonable.
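The arithmetic in this example is easy to verify. The Python sketch below uses base-10 logarithms for the N log N count, matching the values in Table 10.1:

```python
import math

N = 10**6                 # one million points
machine_rate = 10**6      # a 1 MFLOP machine: 10^6 floating point ops/second

n_squared_seconds = N**2 / machine_rate        # 10^12 flops -> 10^6 seconds
n_squared_days = n_squared_seconds / 86400     # roughly 11.5 days

n_log_n_flops = N * math.log10(N)              # 6 * 10^6 flops
n_log_n_seconds = n_log_n_flops / machine_rate # 6 seconds
```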
Example 10.1
Suppose we want to convolve two length-N point sequences. Performing the convolution directly requires O(N²) computations. Using the transform instead, computing the FFT of each sequence requires O(N log N) computations, multiplying the FFTs requires O(N), and computing the inverse FFT requires another O(N log N); the total complexity is O(N log N).
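Transform-domain convolution can be sketched in a few lines of Python. For clarity, this sketch reuses the direct DFT sum from earlier in the chapter (substituting an FFT would give the O(N log N) cost); the zero-padding length len(a) + len(b) − 1 is the standard choice for obtaining a linear, rather than circular, convolution:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def fast_convolve(a, b):
    """Linear convolution via the transform: zero-pad both sequences,
    multiply their spectra pointwise, and invert."""
    L = len(a) + len(b) - 1
    A = dft(a + [0.0] * (L - len(a)))
    B = dft(b + [0.0] * (L - len(b)))
    return [round(c.real, 9) for c in idft([x * y for x, y in zip(A, B)])]

result = fast_convolve([1.0, 2.0, 3.0], [1.0, 1.0])
```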
10.3.4 Conclusion

Other "fast" algorithms have been discovered, most of which make use of how many common factors the transform length N has. In number theory, the number of prime factors a given integer has measures how composite it is. The numbers 16 and 81 are highly composite (equaling 2^4 and 3^4, respectively), the number 18 is less so (2^1 · 3^2), and 17 not at all (it's prime). Throughout Fourier transform algorithm development, the original Cooley-Tukey algorithm is far and away the most frequently used. It is so computationally efficient that power-of-two transform lengths are frequently used regardless of what the actual length of the data is. It is even well established that the FFT, alongside the digital computer, was almost completely responsible for the "explosion" of DSP in the 60's.
Solutions to Exercises in Chapter 10

Solution to Exercise 10.3.1
The upper panel has not used the FFT algorithm to compute the length-4 DFTs while the lower one has. The ordering is determined by the algorithm.

Solution to Exercise 10.2.1
When the signal is real-valued, we may only need half the spectral values, but the complexity remains unchanged. If the data are complex-valued, which demands retaining all frequency values, the complexity is again the same. When only K frequencies are needed, the complexity is O(KN).
Chapter 11
Sampling and Reconstruction
11.1 Signal Sampling
11.1.1 Introduction
Digital computers can process discrete time signals using extremely flexible and powerful algorithms. However, most signals of interest are continuous time signals, which is how data almost always appears in nature.
This module introduces the concepts behind converting continuous time signals into discrete time signals
through a process called sampling.
11.1.2 Sampling
Sampling a continuous time signal produces a discrete time signal by selecting the values of the continuous time signal at evenly spaced points in time. Thus, sampling a continuous time signal x with sampling period Ts gives the discrete time signal xs defined by xs[n] = x(nTs). The sampling rate is given by fs = 1/Ts.

It should be intuitively clear that multiple continuous time signals sampled at the same rate can produce the same discrete time signal, since uncountably many continuous time functions could be constructed that connect the points on the graph of any discrete time function. Thus, sampling at a given rate does not result in an injective relationship. Hence, sampling is, in general, not invertible.
Example 11.1
For instance, consider the signals x, y defined by

x(t) = sin(t)/t   (11.1)

y(t) = sin(5t)/t   (11.2)

and their sampled versions with sampling period Ts = π/2:

x[n] = sin(nπ/2)/(nπ/2)   (11.3)

y[n] = sin(5nπ/2)/(nπ/2).   (11.4)

Since

sin(5nπ/2) = sin(nπ/2 + 2πn) = sin(nπ/2),   (11.5)

it follows that

y[n] = sin(nπ/2)/(nπ/2) = x[n].   (11.6)

Hence, x and y are distinct continuous time signals whose samples agree.
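The agreement of the samples in this example can be checked numerically. In the Python sketch below, n = 0 is excluded because both signals are defined there only as limits (1 and 5, respectively):

```python
import math

Ts = math.pi / 2

def x(t):
    # x(t) = sin(t)/t, with the limiting value 1 at t = 0
    return math.sin(t) / t if t != 0 else 1.0

def y(t):
    # y(t) = sin(5t)/t, with the limiting value 5 at t = 0
    return math.sin(5 * t) / t if t != 0 else 5.0

# The two distinct continuous time signals agree at every nonzero sample point.
samples_x = [x(n * Ts) for n in range(1, 20)]
samples_y = [y(n * Ts) for n in range(1, 20)]
```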
Recall that the sampled signal is given by x[n] = x(nTs). To analyze sampling in the frequency domain, it is useful to multiply the continuous time signal by an impulse train with period Ts:

p(t) = Σ_{n=−∞}^{∞} δ(t − nTs).   (11.7)

Using Fourier series, it can be shown that

p(t) = (1/Ts) Σ_{k=−∞}^{∞} e^{j(2π/Ts)kt}.   (11.8)

Multiplying the continuous time signal xc(t) by p(t) produces the impulse-sampled signal xs(t):

xs(t) = xc(t) p(t)
      = xc(t) Σ_{n=−∞}^{∞} δ(t − nTs)
      = (1/Ts) xc(t) Σ_{k=−∞}^{∞} e^{j(2π/Ts)kt}.   (11.9)

Taking the CTFT of xs(t) yields Xs(f):

Xs(f) = ∫ (1/Ts) xc(t) Σ_{k=−∞}^{∞} e^{j(2π/Ts)kt} e^{−j2πft} dt
      = (1/Ts) Σ_{k=−∞}^{∞} ∫ xc(t) e^{−j2π(f − k/Ts)t} dt
      = (1/Ts) Σ_{k=−∞}^{∞} Xc(f − k/Ts).   (11.10)

Notice that Xs(f) is periodic with period 1/Ts and is made up of scaled, shifted copies of Xc(f).
Similarly, taking the DTFT of x[n],

X(Ω) = Σ_{n=−∞}^{∞} x[n] e^{−jΩn}
     = Σ_{n=−∞}^{∞} xc(nTs) e^{−jΩn}
     = Σ_{n=−∞}^{∞} ∫ xc(t) δ(t − nTs) dt · e^{−jΩn}
     = Σ_{n=−∞}^{∞} ∫ xc(t) δ(t − nTs) e^{−jΩn} dt,   (11.11)

where the exponent Ωn can be rewritten as (Ω/Ts)t at the impulse locations t = nTs. Hence,

X(Ω) = Σ_{n=−∞}^{∞} ∫ xc(t) δ(t − nTs) e^{−j(Ω/Ts)t} dt
     = ∫ xc(t) Σ_{n=−∞}^{∞} δ(t − nTs) e^{−j(Ω/Ts)t} dt
     = ∫ xs(t) e^{−j(Ω/Ts)t} dt,   (11.12)

where xc(t) Σ_{n=−∞}^{∞} δ(t − nTs) = xs(t), so that

X(Ω) = Xs(Ω/(2πTs)).   (11.13)

Combining this with (11.10) gives

X(Ω) = (1/Ts) Σ_{k=−∞}^{∞} Xc((Ω − 2πk)/(2πTs)).   (11.14)

Hence, the DTFT of x[n] is 2π-periodic and consists of scaled, shifted copies of the CTFT of xc(t). Consequently, the sampling process is not, in general, invertible. Nevertheless, as will be shown in the module concerning reconstruction, the continuous time signal can be recovered from its sampled version if some additional assumptions hold.
11.2.1 Introduction

With the introduction of the concept of signal sampling, which produces a discrete time signal by selecting the values of the continuous time signal at evenly spaced points in time, it is now possible to discuss one of the most important results in signal processing, the Nyquist-Shannon sampling theorem. Often simply called the sampling theorem, this theorem concerns signals, known as bandlimited signals, with spectra that are zero for all frequencies with absolute value greater than or equal to a certain level. The theorem implies that there is a sufficiently high sampling rate at which a bandlimited signal can be recovered exactly from its samples, which is an important step in the processing of continuous time signals using the tools of discrete time signal processing.

A signal is said to be bandlimited to (−B, B) if its spectrum is zero for all frequencies outside of (−B, B); the quantity B is called the bandlimit.
Essentially, the sampling theorem has already been implicitly introduced in the previous module. Given a continuous time signal x with CTFT X, recall that the spectrum Xs of the sampled signal xs with sampling period Ts is given by

Xs(Ω) = (1/Ts) Σ_{k=−∞}^{∞} X((Ω − 2πk)/(2πTs)).   (11.15)
It had previously been noted that if x is bandlimited to (−1/(2Ts), 1/(2Ts)), the period of Xs centered about the origin has the same form as X, so any two such bandlimited signals with the same samples would have the same continuous time Fourier transform and thus be identical. Thus, for each discrete time signal there is a unique (−1/(2Ts), 1/(2Ts)) bandlimited continuous time signal that samples to the discrete time signal with sampling period Ts. Therefore, this (−1/(2Ts), 1/(2Ts)) bandlimited signal can be found from the samples.

This observation suggests the sampling theorem: if a signal x is bandlimited to (−B, B), it is completely determined by its samples with sampling rate fs = 2B. That is to say, x can be reconstructed exactly from its samples xs with sampling rate fs = 2B. The angular frequency 4πB is often called the angular Nyquist rate. Equivalently, this can be stated in terms of the sampling period Ts = 1/fs. If a signal x is bandlimited to (−B, B), it is completely determined by its samples with sampling period Ts = 1/(2B). That is to say, x can be reconstructed exactly from its samples xs with sampling period Ts.
Figure 11.1: The spectrum of a bandlimited signal is shown, as well as the spectra of its samples at rates above and below the Nyquist frequency. As is shown, no aliasing occurs above the Nyquist frequency, and the period of the samples' spectrum centered about the origin has the same form as the spectrum of the original signal scaled in frequency. Below the Nyquist frequency, aliasing can occur and causes the spectrum to take a different shape than the original spectrum.
The proof of the sampling theorem, as given here, provides the interesting observation that the samples of a signal with sampling period Ts provide Fourier series coefficients for the signal spectrum on (−1/(2Ts), 1/(2Ts)). Let x be a (−1/(2Ts), 1/(2Ts)) bandlimited signal and xs be its samples with sampling period Ts. We can represent x in terms of its spectrum X using the inverse continuous time Fourier transform:

x(t) = ∫_{−1/(2Ts)}^{1/(2Ts)} X(f) e^{j2πft} df.   (11.16)

This representation of x can be sampled with period Ts to produce

xs[n] = x(nTs) = ∫_{−1/(2Ts)}^{1/(2Ts)} X(f) e^{j2πfnTs} df,   (11.17)

where xs[n] is the nth sample. Equation (11.17) shows that, up to scaling, the samples xs[n] are the Fourier series coefficients of X(f) on the interval (−1/(2Ts), 1/(2Ts)). Thus, the samples determine X(f) and, by the inverse Fourier transform, x itself. Hence, x can be recovered uniquely from its samples xs taken with sampling period Ts, provided x is bandlimited to (−1/(2Ts), 1/(2Ts)).
This is done explicitly in the module on perfect reconstruction. However, the result, known as the Whittaker-Shannon reconstruction formula, will be stated here. If the requisite conditions hold, then the perfect reconstruction is given by

x(t) = Σ_{n=−∞}^{∞} xs[n] sinc(t/Ts − n),   (11.18)

where the sinc function is defined as

sinc(t) = sin(πt)/(πt).   (11.19)

From this, it follows that the set

{sinc(t/Ts − n) | n ∈ Z}   (11.20)

forms an orthogonal basis for the set of (−1/(2Ts), 1/(2Ts)) bandlimited signals, in which the expansion coefficients of a (−1/(2Ts), 1/(2Ts)) bandlimited signal are its samples taken with sampling period Ts.
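The role of the samples as expansion coefficients can be illustrated numerically. The Python sketch below builds a bandlimited signal directly from a finite sinc expansion; the coefficient values and Ts are arbitrary choices for illustration:

```python
import math

def sinc(t):
    # Normalized sinc, per (11.19), with the limiting value 1 at t = 0
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

Ts = 0.5
coeffs = [1.0, -2.0, 3.0, 0.5, -1.0]

def x(t):
    # A bandlimited signal built directly from the sinc basis (11.20)
    return sum(c * sinc(t / Ts - n) for n, c in enumerate(coeffs))

# Sampling x with period Ts recovers exactly the expansion coefficients,
# because sinc(m - n) vanishes at every nonzero integer.
samples = [x(m * Ts) for m in range(len(coeffs))]

# Whittaker-Shannon reconstruction (11.18) from those samples reproduces x(t).
def x_hat(t):
    return sum(s * sinc(t / Ts - n) for n, s in enumerate(samples))
```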
11.2.3.2 Psychoacoustics

The properties of human physiology and psychology often inform design choices in technologies meant for interacting with people. For instance, digital devices dealing with sound use sampling rates related to the frequency range of human vocalizations and the frequency range of human auditory sensitivity. Because most of the sounds in human speech concentrate most of their signal energy between 5 Hz and 4 kHz, most telephone systems discard frequencies above 4 kHz and sample at a rate of 8 kHz. Discarding the frequencies greater than or equal to 4 kHz through use of an anti-aliasing filter is important to avoid aliasing, which would negatively impact the quality of the output sound, as is described in a later module. Similarly, human hearing is sensitive to frequencies between 20 Hz and 20 kHz. Therefore, sampling rates for general audio waveforms placed on CDs were chosen to be greater than 40 kHz, and all frequency content greater than or equal to some level is discarded. The particular value that was chosen, 44.1 kHz, was selected for other reasons, but the sampling theorem and the range of human hearing provided a lower bound for the range of choices.
In summary, the Nyquist-Shannon sampling theorem states that a signal bandlimited to (−1/(2Ts), 1/(2Ts)) can be reconstructed exactly from its samples taken with sampling period Ts. The Whittaker-Shannon interpolation formula, which will be further described in the section on perfect reconstruction, provides the reconstruction of the unique (−1/(2Ts), 1/(2Ts)) bandlimited continuous time signal that samples to a given discrete time signal with sampling period Ts. This enables discrete time processing of continuous time signals, which has many powerful applications.
11.3.1 Introduction
The sampling process produces a discrete time signal from a continuous time signal by examining the value
of the continuous time signal at equally spaced points in time. Reconstruction, also known as interpolation,
attempts to perform an opposite process that produces a continuous time signal coinciding with the points
of the discrete time signal. Because the sampling process for general sets of signals is not invertible, there
are numerous possible reconstructions from a given discrete time signal, each of which would sample to that
signal at the appropriate sampling rate. This module will introduce some of these reconstruction schemes.
11.3.2 Reconstruction
11.3.2.1 Reconstruction Process
The process of reconstruction, also commonly known as interpolation, produces a continuous time signal that would sample to a given discrete time signal at a specific sampling rate. Reconstruction can be mathematically understood by first generating a continuous time impulse train

ximp(t) = Σ_{n=−∞}^{∞} xs[n] δ(t − nTs)   (11.21)

from the sampled signal xs with sampling period Ts and then applying a lowpass filter G that satisfies certain conditions, detailed below. If G has impulse response g, then the result of the reconstruction process, illustrated in Figure 11.2, is given by the following computation, the final equation of which is used to perform reconstruction in practice.
x̃(t) = (ximp * g)(t)
     = ∫ ximp(τ) g(t − τ) dτ
     = ∫ Σ_{n=−∞}^{∞} xs[n] δ(τ − nTs) g(t − τ) dτ
     = Σ_{n=−∞}^{∞} xs[n] ∫ δ(τ − nTs) g(t − τ) dτ
     = Σ_{n=−∞}^{∞} xs[n] g(t − nTs)   (11.22)

Figure 11.2: Illustration of the reconstruction process.
The condition that the reconstruction x̃ samples to xs with sampling period Ts, from which it follows that the reconstruction is consistent with the data, can be expressed well in the time domain in terms of a condition on the impulse response g of the lowpass filter G.

The sufficient condition to be a reconstruction filter that we will require is that, for all n ∈ Z,

g(nTs) = { 1,  n = 0
         { 0,  n ≠ 0,        that is, g(nTs) = δ[n].   (11.23)

This means that sampling the reconstruction x̃ at rate Ts results in

x̃(nTs) = Σ_{m=−∞}^{∞} xs[m] g((n − m)Ts) = Σ_{m=−∞}^{∞} xs[m] δ[n − m] = xs[n],   (11.24)

which is the desired result for reconstruction filters.
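The condition (11.23) and the resampling identity (11.24) can be checked numerically for a simple kernel. The Python sketch below uses the triangular kernel that performs linear interpolation; the sample values and Ts are arbitrary illustrative choices:

```python
Ts = 0.25

def g(t):
    # Triangular kernel: linear interpolation between adjacent samples.
    u = abs(t) / Ts
    return 1.0 - u if u < 1.0 else 0.0

# Condition (11.23): g(n Ts) equals 1 at n = 0 and 0 at every other integer.
cond = [g(n * Ts) for n in range(-3, 4)]

# Identity (11.24): sampling the reconstruction returns the original signal.
xs = [2.0, -1.0, 0.5, 4.0]

def x_rec(t):
    return sum(xs[n] * g(t - n * Ts) for n in range(len(xs)))

resampled = [x_rec(n * Ts) for n in range(len(xs))]
```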
In some applications, we may want the reconstruction to yield a spline of a certain degree, which is a signal described in piecewise parts by polynomials not exceeding that degree. Additionally, we might want to guarantee that the function and a certain number of its derivatives are continuous.

This may be accomplished by restricting the result to the span of sets of certain splines, called basis splines or B-splines. Specifically, if an nth degree spline is desired, the result is restricted to the span of {Bn(t/Ts − k) | k ∈ Z}, where

Bn = B0 * Bn−1   (11.25)

for n ≥ 1, and

B0(t) = { 1,  |t| < 1/2
        { 0,  otherwise.   (11.26)
Figure 11.3: The basis splines Bn are shown in the above plots. Note that, except for the order 0 and order 1 functions, these functions do not satisfy the conditions to be reconstruction filters. Also notice that as the order increases, the functions approach the Gaussian function, which is exactly B∞.
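The caption's observation can be verified numerically: B0 and B1 vanish at every nonzero integer, but B2 does not. The Python sketch below writes B1 in its known closed form (the triangle B0 * B0) and evaluates B2 = B0 * B1 by a numerical convolution:

```python
def B0(t):
    # Order-0 B-spline: a unit box supported on (-1/2, 1/2)
    return 1.0 if -0.5 <= t < 0.5 else 0.0

def B1(t):
    # Order-1 B-spline (B0 * B0): the triangular kernel, in closed form
    return max(0.0, 1.0 - abs(t))

def B2(t):
    # Order-2 B-spline, computed numerically as B0 * B1 (Riemann sum)
    lo, hi, steps = -2.0, 2.0, 4000
    dt = (hi - lo) / steps
    return sum(B0(tau) * B1(t - tau) for tau in
               (lo + i * dt for i in range(steps))) * dt

# B0 and B1 satisfy g(n Ts) = delta[n], but B2 does not: B2(1) = 1/8.
checks = (B0(0.0), B0(1.0), B1(0.0), B1(1.0), B2(0.0), B2(1.0))
```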
Still, a reconstruction filter producing an nth degree spline can be built from the B-splines. Define the cardinal basis spline of order n as

ηn(t) = Σ_{k=−∞}^{∞} bn^{−1}[k] Bn(t/Ts − k),   (11.27)

where bn^{−1} is the discrete-time convolutional inverse of the sequence bn[k] = Bn(k), i.e., bn * bn^{−1} = δ. In order to confirm that this satisfies the condition to be a reconstruction filter, note that

ηn(mTs) = Σ_{k=−∞}^{∞} bn^{−1}[k] Bn(m − k) = (bn * bn^{−1})(m) = δ(m).   (11.28)
Thus, ηn is a valid reconstruction filter. Since ηn is an nth degree spline with continuous derivatives up to order n − 1, the result of the reconstruction will be an nth degree spline with continuous derivatives up to order n − 1.
Figure 11.4: The above plots show cardinal basis spline functions η0, η1, η2, and η∞. Note that the functions satisfy the conditions to be reconstruction filters. Also, notice that as the order increases, the cardinal basis splines approximate the sinc function, which is exactly η∞. Additionally, these filters are acausal.
The lowpass filter with impulse response equal to the cardinal basis spline η0 of order 0 is one of the simplest examples of a reconstruction filter. It simply extends the value of the discrete time signal for half the sampling period to each side of every sample, producing a piecewise constant reconstruction. Thus, the result is discontinuous for all nonconstant discrete time signals.

Likewise, the lowpass filter with impulse response equal to the cardinal basis spline η1 of order 1 is another of the simplest examples of a reconstruction filter. It simply joins the adjacent samples with a straight line, producing a piecewise linear reconstruction. In this case, the result is continuous for all discrete time signals. However, unless the samples are collinear, the result has discontinuous first derivatives.
In general, similar statements can be made for lowpass filters with impulse responses equal to cardinal basis splines of any order. Using the nth order cardinal basis spline ηn, the result is an nth degree spline with continuous derivatives up to order n − 1. However, derivatives of order n will, in general, be discontinuous.
Reconstructions of the discrete time signal given in Figure 11.5 using several of these filters are shown in Figure 11.6. As the order of the cardinal basis spline increases, notice that the reconstruction approaches that of the infinite order cardinal spline η∞. As will be shown later in the module on perfect reconstruction, the filters with impulse response equal to the sinc function play an especially important role in signal processing.
Figure 11.5: The above plot shows an example discrete time function. This discrete time function will be reconstructed using sampling period Ts using several cardinal basis splines in Figure 11.6.
Figure 11.6: The above plots show interpolations of the discrete time signal given in Figure 11.5 using lowpass filters with impulse responses given by the cardinal basis splines shown in Figure 11.4. Notice that the interpolations become increasingly smooth and approach the sinc interpolation as the order increases.
However, it is important to note that reconstruction is not the inverse of sampling and only produces one possible continuous time signal that samples to a given discrete time signal. As is covered in the subsequent module, perfect reconstruction of a bandlimited continuous time signal from its sampled version is possible using the Whittaker-Shannon reconstruction formula, which makes use of the ideal lowpass filter and its sinc function impulse response, if the sampling rate is sufficiently high.
11.4.1 Introduction

If certain additional assumptions about the original signal and sampling rate hold, then the original signal can be recovered exactly from its samples using a particularly important type of filter. More specifically, it will be shown that if a bandlimited signal is sampled at a rate greater than twice its bandlimit, the Whittaker-Shannon reconstruction formula perfectly reconstructs the original signal. This formula makes use of the ideal lowpass filter, which is related to the sinc function. This is extremely useful, as sampled versions of continuous time signals can be filtered using discrete time signal processing, often in a computer. The results may then be reconstructed to produce the same continuous time output as some desired continuous time system.

First, the conditions under which perfect reconstruction is possible will be discussed. Subsequently, the filter and process used for perfect reconstruction will be detailed.
Recall that the sampled version xs of a continuous time signal x with sampling period Ts has a spectrum given by

Xs(Ω) = (1/Ts) Σ_{k=−∞}^{∞} X((Ω − 2πk)/(2πTs)).   (11.29)

As before, note that if x is bandlimited to (−1/(2Ts), 1/(2Ts)), meaning that X is only nonzero on (−1/(2Ts), 1/(2Ts)), then each period of Xs has the same form as X. Thus, we can identify the original spectrum X from the spectrum of the samples Xs and, by extension, the original signal x from its samples xs at rate Ts if x is bandlimited to (−1/(2Ts), 1/(2Ts)).
If a signal x is bandlimited to (−B, B), then it is also bandlimited to (−fs/2, fs/2) provided that Ts < 1/(2B). Thus, if we ensure that x is sampled to xs with sufficiently high sampling frequency fs = 1/Ts > 2B and have a way of identifying the unique (−fs/2, fs/2) bandlimited signal corresponding to a discrete time signal at sampling period Ts, then xs can be used to reconstruct x̃ = x exactly. The frequency 2B is known as the Nyquist rate. Therefore, the condition that the sampling rate fs = 1/Ts > 2B be greater than the Nyquist rate is a sufficient condition for perfect reconstruction to be possible.
The correct filter must also be known in order to perform perfect reconstruction. The ideal lowpass filter with cutoff frequency fs/2, whose impulse response is given by

g(t) = sinc(t/Ts) = { 1,                   t = 0
                    { sin(πt/Ts)/(πt/Ts),  t ≠ 0,   (11.30)

satisfies the condition to be a reconstruction filter, since

g(nTs) = { 1,             n = 0
         { sin(πn)/(πn),  n ≠ 0        = δ[n].   (11.31)

Therefore, filtering the impulse train Σ_{n=−∞}^{∞} xs[n] δ(t − nTs) with this ideal lowpass filter yields the unique (−fs/2, fs/2) bandlimited signal that samples to a given discrete time sequence at sampling period Ts. The reconstruction is given by the formula

x(t) = Σ_{n=−∞}^{∞} xs[n] sinc(t/Ts − n).   (11.32)
This perfect reconstruction formula is known as the Whittaker-Shannon interpolation formula and is sometimes also called the cardinal series. In fact, the sinc function is the infinite order cardinal basis spline η∞. From this formula, we see that {sinc(t/Ts − n) | n ∈ Z} forms a basis for the vector space of (−fs/2, fs/2) bandlimited signals, where the signal samples provide the corresponding coefficients. It is a simple exercise to show that this basis is, in fact, an orthogonal basis.
Figure 11.7: The above plots show the ideal lowpass filter and its inverse Fourier transform, the sinc function.

Figure 11.8: The plots show an example discrete time signal and its Whittaker-Shannon sinc reconstruction.
In summary, a bandlimited signal sampled above its Nyquist rate can be recovered exactly from its samples. The Whittaker-Shannon reconstruction formula computes this perfect reconstruction using an ideal lowpass filter, with the resulting signal being a sum of shifted sinc functions that are scaled by the sample values. Sampling below the Nyquist rate can lead to aliasing, which makes the original signal irrecoverable, as is described in the subsequent module. The ability to perfectly reconstruct a bandlimited signal has important implications for the processing of continuous time signals using the tools of discrete time signal processing.
11.5.1 Introduction

Through discussion of the Nyquist-Shannon sampling theorem and Whittaker-Shannon reconstruction formula, it has already been shown that a (−B, B) bandlimited signal can be perfectly reconstructed from its samples taken at rate fs = 1/Ts ≥ 2B. This module will describe a phenomenon, called aliasing, that can occur if this sufficient condition for perfect reconstruction does not hold. When aliasing occurs, the spectrum of the samples has a different form than the original signal spectrum, so the samples cannot be used to reconstruct the original signal through Whittaker-Shannon interpolation.
11.5.2 Aliasing

Aliasing occurs when each period of the spectrum of the samples does not have the same form as the spectrum of the original signal. Given a continuous time signal x with CTFT X, recall that the spectrum Xs of the sampled signal xs with sampling period Ts is given by

Xs(Ω) = (1/Ts) Σ_{k=−∞}^{∞} X((Ω − 2πk)/(2πTs)).   (11.33)
If X is only nonzero on (−1/(2Ts), 1/(2Ts)), each period of Xs has the same form as X. However, if X has nonzero content at frequencies of absolute value greater than 1/(2Ts), the shifted copies X((Ω − 2πk)/(2πTs)) overlap and sum together. This is illustrated in Figure 11.9, in which sampling above the Nyquist frequency produces a samples spectrum of the same shape as the original signal, but sampling below the Nyquist frequency produces a samples spectrum with a very different shape. Whittaker-Shannon interpolation of each of these sequences produces different results. The low frequencies not affected by the overlap are the same, but there is noise content in the higher frequencies caused by aliasing. Higher frequency energy masquerades as low frequency energy content, a highly undesirable effect.
Figure 11.9: The spectrum of a bandlimited signal is shown, as well as the spectra of its samples at rates above and below the Nyquist frequency. As is shown, no aliasing occurs above the Nyquist frequency, and the period of the samples' spectrum centered about the origin has the same form as the spectrum of the original signal scaled in frequency. Below the Nyquist frequency, aliasing can occur and causes the spectrum to take a different shape than the original spectrum.
Unlike when sampling above the Nyquist frequency, sampling below the Nyquist frequency does not yield an injective (one-to-one) relationship between bandlimited continuous time signals and their samples: multiple bandlimited signals can sample to the same sequence when sampled below the Nyquist frequency, as is demonstrated in Figure 11.10. It is quite easy to construct uncountably infinite families of such signals.

Aliasing obtains its name from the fact that multiple, in fact infinitely many, (−B, B) bandlimited signals sample to the same discrete sequence if fs < 2B. Sampling below the Nyquist frequency is thus a noninvertible process, and these different signals effectively assume the same identity, an alias. Hence, under these conditions the Whittaker-Shannon interpolation formula will not produce a perfect reconstruction of the original signal but will instead give the unique (−fs/2, fs/2) bandlimited signal that samples to the given discrete sequence.
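Aliasing can be demonstrated with a short Python sketch: a 5 Hz cosine sampled at 8 Hz produces exactly the same samples as a 3 Hz cosine, since 5 Hz lies above half the sampling rate and aliases to 8 − 5 = 3 Hz (the frequencies here are arbitrary illustrative choices):

```python
import math

fs = 8.0              # sampling rate in Hz, below the 10 Hz Nyquist rate of
Ts = 1.0 / fs         # the 5 Hz signal

# The 5 Hz cosine and its 3 Hz alias are indistinguishable from the samples.
high = [math.cos(2 * math.pi * 5.0 * n * Ts) for n in range(32)]
alias = [math.cos(2 * math.pi * 3.0 * n * Ts) for n in range(32)]
```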
Figure 11.10: The spectrum of a discrete time signal xs, taken from Figure 11.9, is shown along with the spectra of three (−B, B) bandlimited signals that sample to it at rate fs < 2B. From the sampled signal alone, it is impossible to tell which, if any, of these was sampled at rate fs to produce xs. In fact, there are infinitely many (−B, B) bandlimited signals that sample to xs at a sampling rate below the Nyquist rate.
Figure 11.11: Interact (when online) with a Mathematica CDF demonstrating sampling and aliasing
for a sinusoid. To Download, right-click and save target as .cdf.
Aliasing occurs when a (−B, B) bandlimited signal is sampled at a rate fs = 1/Ts < 2B. In that case, infinitely many (−B, B) bandlimited signals sample to the given discrete time signal xs, and Whittaker-Shannon interpolation of xs yields the unique (−fs/2, fs/2) bandlimited signal that samples to xs rather than the original signal. Sampling at a rate fs ≥ 2B avoids this problem, as no aliasing occurs above the Nyquist frequency. Unfortunately, sufficiently high sampling rates cannot always be produced. Aliasing is detrimental to many signal processing applications, so in order to process continuous time signals using discrete time tools, it is often necessary to find ways to avoid it other than increasing the sampling rate. Thus, anti-aliasing filters are of practical importance.
11.6.1 Introduction

It has been shown that a (−B, B) bandlimited signal can be perfectly reconstructed from its samples taken at a rate fs = 1/Ts ≥ 2B. However, if the available sampling rate is below the Nyquist rate, aliasing, a difference in shape between the periods of the samples' signal spectrum and the original spectrum, would occur without any further measures to correct this. The deleterious effects of aliasing can be reduced by removing signal energy at frequencies above fs/2 before sampling. This is the purpose of the anti-aliasing filter, a lowpass filter applied before sampling to reduce aliasing when a signal bandlimited to (−B, B) is sampled at rate fs < 2B.
Thus, when sampling below the Nyquist frequency, it is desirable to remove as much signal energy outside the frequency range (−fs/2, fs/2) as possible while keeping as much signal energy in the frequency range (−fs/2, fs/2) as possible. This suggests that the ideal lowpass filter with cutoff frequency fs/2 would be the optimal anti-aliasing filter to apply before sampling. While this is true, the ideal lowpass filter can only be approximated in real situations.
In order to demonstrate the importance of anti-aliasing lters, consider the calculation of the error
energy between the original signal and its Whittaker-Shannon reconstruction from its samples taken with
and without the use of an anti-aliasing lter.
Let y = Gx be the anti-alias filtered signal, where G is the ideal lowpass filter with cutoff frequency fs/2. Without the anti-aliasing filter, the reconstructed spectrum is

\[ \hat{X}(f) = \begin{cases} T_s X_s(f) & |f| < f_s/2 \\ 0 & \text{otherwise} \end{cases} = \begin{cases} \sum_{k=-\infty}^{\infty} X(f - k f_s) & |f| < f_s/2 \\ 0 & \text{otherwise} \end{cases} \quad (11.34) \]

so the reconstruction error spectrum is

\[ \hat{X}(f) - X(f) = \begin{cases} \sum_{k=1}^{\infty} \left( X(f + k f_s) + X(f - k f_s) \right) & |f| < f_s/2 \\ -X(f) & \text{otherwise.} \end{cases} \quad (11.35) \]
Similarly, the reconstructed spectrum using the ideal lowpass anti-aliasing filter is given by

\[ \hat{Y}(f) = \begin{cases} X(f) & |f| < f_s/2 \\ 0 & \text{otherwise} \end{cases} \quad (11.36) \]

so the corresponding error spectrum is

\[ \hat{Y}(f) - X(f) = \begin{cases} 0 & |f| < f_s/2 \\ -X(f) & \text{otherwise.} \end{cases} \quad (11.37) \]

This is illustrated in Figure 11.12.
Figure 11.12: Illustration of the use of an anti-aliasing filter to improve the process of sampling and reconstruction when using a sampling frequency below the Nyquist frequency. Notice that when using an ideal lowpass anti-aliasing filter, the reconstructed signal spectrum has the same shape as the original signal spectrum for all frequencies below half the sampling rate. This results in a lower error energy when using the anti-aliasing filter, as can be seen by comparing the error spectra shown.
The optimal anti-aliasing filter would be the ideal lowpass filter with cutoff frequency at fs/2, which would ensure that the original signal spectrum and the reconstructed signal spectrum are equal on the interval (−fs/2, fs/2). However, the ideal lowpass filter is not possible to implement in practice, and approximations of it must be accepted instead. Anti-aliasing filters are also essential to discrete time processing of continuous time signals, as will be shown in the subsequent module.
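As a numerical check on this comparison (a sketch, not part of the original module; the test signal, rates, and helper names are assumptions), one can sample a two-tone signal below its Nyquist rate with and without an ideal pre-filter and compare the Whittaker-Shannon reconstruction errors:

```python
import math

# Signal: x(t) = cos(2*pi*1*t) + cos(2*pi*6*t), bandlimited to B = 6 Hz.
# Sample at fs = 10 Hz < 2B = 12 Hz, so the 6 Hz component aliases.
f1, f2, fs = 1.0, 6.0, 10.0
T = 1.0 / fs
N = 400  # samples on each side of t = 0

def x(t):           # original signal
    return math.cos(2*math.pi*f1*t) + math.cos(2*math.pi*f2*t)

def x_filtered(t):  # after an ideal lowpass pre-filter with cutoff fs/2
    return math.cos(2*math.pi*f1*t)  # the 6 Hz component is removed

def sinc(u):
    return 1.0 if u == 0 else math.sin(math.pi*u) / (math.pi*u)

def reconstruct(samples, t):
    # Whittaker-Shannon interpolation from samples g(nT), n = -N..N
    return sum(samples[n + N] * sinc(t/T - n) for n in range(-N, N + 1))

xs_raw  = [x(n*T) for n in range(-N, N+1)]
xs_filt = [x_filtered(n*T) for n in range(-N, N+1)]

# Mean-square reconstruction error against the ORIGINAL x(t) on a fine
# grid near t = 0 (away from truncation edge effects).
grid = [k*0.01 for k in range(-100, 101)]
err_raw  = sum((x(t) - reconstruct(xs_raw,  t))**2 for t in grid) / len(grid)
err_filt = sum((x(t) - reconstruct(xs_filt, t))**2 for t in grid) / len(grid)
print(err_filt < err_raw)  # True: pre-filtering lowers the error energy
```

The unfiltered samples reconstruct the 6 Hz component as a spurious 4 Hz alias, so the error energy roughly doubles compared to simply losing the 6 Hz component to the pre-filter.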
Up to this point, we have discussed the connection between continuous-time and discrete-time signals that is captured by the concepts of sampling and reconstruction. In particular cases, we might be interested in observing a signal under a variety of sampling rates. For example, the amount of memory or communication bandwidth available for transmission or storage of a discrete-time signal might fluctuate in time, which may require increasing or decreasing the sampling rate (or sampling period) accordingly.
Figure 11.13: Changing the sampling frequency by reconstructing and sampling always works, but sometimes it may be possible to do so working only in the discrete-time domain.
Naively, if we have sampled the signal sufficiently often to be able to recover it (according to the Nyquist criterion), then we can always return from the discrete-time signal to a continuous-time signal using reconstruction and then sample the signal at the new desired sampling rate. However, there are specific cases where it is possible to modify the sampling rate of the signal without having to switch back to a continuous-time representation. In other words, certain changes of sampling rate can be performed directly on the discrete-time signal. We discuss three specific cases: downsampling, upsampling, and rational scaling.
11.7.1 Downsampling
Assume that we have a signal sampled at frequency fs and are asked to reduce the sampling frequency by an integer factor k to fs' = fs/k. When this change is translated to the sampling period T (as fs = 1/T), it is easy to see that the new sampling period T' = kT will be k times larger than its original value. Therefore, the change in sampling frequency can be accounted for by taking the existing discrete signal x[n] (sampled at frequency fs) and keeping only every kth sample, x'[n] = x[kn], which corresponds to sampling at frequency fs'.
We know that for both the old and new sampling frequencies fs and fs' the discrete-time Fourier transform of the sampled signal will correspond to a periodized, frequency-scaled version of the continuous-time signal's Fourier transform XCT(f), with period equal to 2π. We now compare how these two discrete-time transforms match to one another:

\[ X(\omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} = \widetilde{X}_{CT}\!\left(\frac{\omega f_s}{2\pi}\right), \qquad X'(\omega) = \sum_{n=-\infty}^{\infty} x[kn] e^{-j\omega n} = \widetilde{X}_{CT}\!\left(\frac{\omega f_s}{k \cdot 2\pi}\right), \quad (11.38) \]

where \(\widetilde{X}_{CT}\) denotes the periodized version of XCT. By connecting the two equations through the right-most terms, it is easy to see that X'(ω) = X(ω/k), i.e., that the downsampling performed expands the DTFT of the original signal x[n]. The DTFT of x'[n] is still 2π-periodic, since we are still working with a discrete-time signal, and so this expansion occurs for each copy of the spectrum around its center, but the copies stay stationary at multiples of 2π.
Figure 11.14:
Note also that this result effectively provides us with a new property for the DTFT: decimation in the time domain corresponds to a qualified expansion in the frequency domain, where the expansion is around the center of each copy of the CT spectrum (ω = 0, ±2π, ±4π, ...).
Finally, notice that in downsampling there is the risk that aliasing may occur when the new frequency fs' does not satisfy the Nyquist criterion (fs' ≥ 2f0, where f0 is the bandwidth of the CT signal). Notably, this is the first time that we observe the possibility of aliasing directly in the discrete-time domain. Since aliasing may occur, it is good engineering practice to apply a (discrete-time) anti-aliasing lowpass filter with cutoff frequency ωc = π/k before downsampling.
Figure 11.15:
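The expansion of the DTFT, and the aliasing risk it creates, can be seen numerically. The following sketch (the signal frequencies and helper names are illustrative assumptions, not part of the original module) shows two distinct discrete-time sinusoids becoming identical after downsampling by k = 2:

```python
import math

k = 2  # downsampling factor

def downsample(x, k):
    # keep every k-th sample: x'[n] = x[k n]
    return x[::k]

N = 64
low  = [math.cos(0.3*math.pi*n) for n in range(N)]  # omega0 = 0.3*pi
high = [math.cos(0.7*math.pi*n) for n in range(N)]  # omega0 = 0.7*pi

d_low  = downsample(low, k)   # spectrum expands: 0.3*pi -> 0.6*pi
d_high = downsample(high, k)  # 0.7*pi -> 1.4*pi, which aliases back to 0.6*pi

# After downsampling, the two distinct sinusoids are indistinguishable:
same = all(abs(a - b) < 1e-9 for a, b in zip(d_low, d_high))
print(same)  # True
```

A discrete-time anti-aliasing filter with cutoff π/k applied before the decimation would have removed the 0.7π component and prevented this ambiguity.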
11.7.2 Upsampling
Now, consider the case where we start with a sampling frequency fs and wish to increase the sampling frequency by an integer factor to a new value fs' = k fs. When this change is translated to the sampling period (as fs = 1/T), the new sampling period T' = T/k will be k times smaller than the original sampling period. Therefore, the change in sampling frequency requires the acquisition of new samples in addition to those already available under the old sampling frequency. For this reason, this process is commonly known as upsampling.
While at first sight this may imply a demand to go back to the continuous-time signal, we must recall that samples obtained with a sampling frequency greater than the Nyquist frequency contain all information needed to recover the continuous-time signal, and so it should be possible to infer the new samples directly from the existing ones (as long as no aliasing has occurred). For this purpose, we will retrieve the concept of an expanded discrete-time signal:
\[ x_k[n] = \begin{cases} x[n/k] & \text{if } n/k \in \mathbb{Z}, \\ 0 & \text{otherwise.} \end{cases} \quad (11.39) \]

Notice that this signal xk[n] has the correct rate, but its zero-valued entries must be replaced by the missing samples of the upsampled signal x'[n]. To see how this can be done, we appeal to the DTFT of the expanded signal: recall that the time expansion property of the DTFT gives us that the DTFT of xk[n] compresses the DTFT of x[n] by a factor of k, so that k copies of the original spectrum appear over each 2π-length region.

Figure 11.16:

This suggests how to obtain x'[n] from xk[n]: apply a low-pass filter on the expanded signal xk[n] so that one of the k copies that appear over each 2π-length region of the DTFT is preserved. Such a filter will need a cutoff frequency of ωc = π/k. The combination of an expander and a low-pass filter is known as an upsampler, as shown below.

Figure 11.17:
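A minimal sketch of the expander of (11.39), together with an illustrative windowed-sinc approximation of the required lowpass filter (the tap count, Hamming window, and function names are assumptions, not part of the original module):

```python
import math

def expand(x, k):
    # x_k[n] = x[n/k] if n/k is an integer, 0 otherwise  (Eq. 11.39)
    out = []
    for sample in x:
        out.append(sample)
        out.extend([0] * (k - 1))
    return out

def lowpass_taps(k, num_taps=41):
    # Windowed-sinc lowpass with cutoff pi/k and gain k, used to fill in
    # the zero-valued entries (an approximation of the ideal filter).
    M = (num_taps - 1) // 2
    taps = []
    for n in range(-M, M + 1):
        h = 1.0/k if n == 0 else math.sin(math.pi*n/k) / (math.pi*n)
        w = 0.54 + 0.46*math.cos(math.pi*n/M)  # Hamming window
        taps.append(k * h * w)
    return taps

print(expand([1, 2, 3], 2))  # [1, 0, 2, 0, 3, 0]
```

Convolving the expanded sequence with these taps implements the upsampler described above.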
Finally, consider the case where the ratio between the new and old sampling frequencies is a rational number, that is, fs' = (a/b) fs. Intuitively, one can see that such a change can be obtained by combining an upsampling by a factor of a with a downsampling by a factor of b.
Essentially, if downsampling is applied before upsampling, there is a chance that the downsampling anti-aliasing filter will remove a portion of the signal's spectrum that would have been shrunk into the allowable region during the upsampling, and so the potential for unnecessary distortion is introduced. In contrast, if upsampling is performed before downsampling, the cascade of the two systems will yield a sequence of two low-pass filters, and implementing only the narrower filter would provide the same output as the original cascade. This is illustrated in an example below.
Consider, for example, a signal sampled at fs = 10 kHz (i.e., the minimum allowed sampling frequency under the Nyquist criterion). In this case, the order of the downsampling and upsampling operations matters, as the figures below illustrate.
Figure 11.18: Changing the sampling frequency by downsampling followed by upsampling. The downsampling anti-aliasing filter cut off part of the original spectrum.
Part of the original spectrum is lost in this process. In contrast, applying upsampling before downsampling allows the entire spectrum to go through, effectively meeting the bound given by the Nyquist criterion.
Figure 11.19: Changing the sampling frequency by upsampling followed by downsampling. The entire spectrum is preserved through the process, and one of the lowpass filters is redundant.
11.8.1 Introduction
Digital computers can process discrete time signals using extremely flexible and powerful algorithms. However, most signals of interest are continuous time signals, which is how data almost always appears in nature. Now that the theory supporting methods for generating a discrete time signal from a continuous time signal through sampling, and then perfectly reconstructing the original signal from its samples without error, has been discussed, it will be shown how this can be applied to implement continuous time, linear time invariant systems using discrete time, linear time invariant systems. This is of key importance to many modern technologies, as it allows the power of digital computing to be leveraged for processing of analog signals.
The overall approach is as follows: the continuous time input x, with spectrum X, is sampled to a discrete time signal xs with spectrum Xs; a discrete time filter H2 then processes xs to produce ys; finally, a continuous time output y is reconstructed from ys, so that the net effect is that of a desired continuous time filter H1.
This process is illustrated in Figure 11.20, and the spectra are shown for a specic case in Figure 11.21.
Figure 11.20: A block diagram for processing of continuous time signals using discrete time systems is shown.
Further discussion about each of these steps is necessary, and we will begin by discussing the analog to digital converter, often denoted by ADC or A/D. It is clear that in order to process a continuous time signal using discrete time techniques, we must sample the signal as an initial step. This is essentially the purpose of the ADC, although there are practical issues that will be discussed later. An ADC takes a continuous time analog signal as input and produces a discrete time digital signal as output, with the ideal infinite precision case corresponding to sampling. As stated by the Nyquist-Shannon sampling theorem, in order to retain all information about the original signal, we usually wish to sample above the Nyquist frequency, fs ≥ 2B, where the original signal is bandlimited to (−B, B).
Next comes the discrete time filter, which is typically implemented in digital hardware or software and is thus quite flexible in the filter that it implements. If sampling occurs above the Nyquist frequency, the spectrum of the samples takes the same shape as the original spectrum within each period. Any modifications that the discrete filter makes to this shape can be passed on to a continuous time signal assuming perfect reconstruction. Consequently, the process described will implement a continuous time, linear time invariant filter. This will be explained in more mathematical detail in the subsequent section. As usual, there are, of course, practical limitations that will be discussed later.
Finally, we will discuss the digital to analog converter, often denoted by DAC or D/A. Since continuous time filters have continuous time inputs and continuous time outputs, we must construct a continuous time signal from our filtered discrete time signal. Assuming that we have sampled a bandlimited signal at a sufficiently high rate, in the ideal case this would be done using perfect reconstruction through the Whittaker-Shannon interpolation formula. However, there are, once again, practical issues that prevent this from happening that will be discussed later.
Figure 11.21: Spectra are shown in black for each step in implementing a continuous time filter using a discrete time filter for a specific signal. The filter frequency responses are shown in blue, and both are meant to have maximum value 1 in spite of the vertical scale that is meant only for the signal spectra. Ideal ADCs and DACs are assumed.
We now show how a discrete time filter H2 can be chosen so that the overall system implements a desired continuous time filter H1. We will assume the use of ideal, infinite precision ADCs and DACs that perform sampling and perfect reconstruction, and that the input signal is bandlimited to (−B, B) and is sampled at a rate fs = 1/Ts ≥ 2B. Note that these arguments fail if this condition is not met and aliasing occurs. In that case, preapplication of an anti-aliasing filter is necessary for these arguments to hold.
Recall that we have already calculated the spectrum Xs of the samples xs given an input x with spectrum X as

\[ X_s(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X\!\left(\frac{\omega - 2\pi k}{2\pi T_s}\right). \quad (11.40) \]
Likewise, the spectrum Ys of the samples ys given an output y with spectrum Y is

\[ Y_s(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} Y\!\left(\frac{\omega - 2\pi k}{2\pi T_s}\right). \quad (11.41) \]
Because the desired output satisfies Y(f) = H1(f) X(f) and H1 is bandlimited to (−1/(2Ts), 1/(2Ts)), it follows that

\[ Y_s(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} H_1\!\left(\frac{\omega - 2\pi k}{2\pi T_s}\right) X\!\left(\frac{\omega - 2\pi k}{2\pi T_s}\right) = H_2(\omega) X_s(\omega), \quad (11.42) \]

where

\[ H_2(\omega) = \sum_{k=-\infty}^{\infty} H_1\!\left(\frac{\omega - 2\pi k}{2\pi T_s}\right) \left( u(\omega - (2k-1)\pi) - u(\omega - (2k+1)\pi) \right). \quad (11.43) \]

More simply stated, H2 is 2π-periodic and H2(ω) = H1(ω/(2πTs)) for ω ∈ [−π, π). Given a desired continuous time filter H1, the above equation solves the system design problem of choosing a discrete time filter H2 to implement it: the filter H2 must be chosen such that it has a periodic frequency response where each period has the same shape as the frequency response of H1 on (−1/(2Ts), 1/(2Ts)). Because H1 was assumed to be bandlimited, this fully specifies H2(ω) for ω ∈ [−π, π).
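This relationship can be sketched in code. Here H1 is an arbitrary illustrative continuous-time response and Ts an assumed sampling period (both are examples, not from the original module); H2 is built by frequency rescaling and 2π-periodic extension:

```python
import math

Ts = 1e-3  # assumed sampling period, so fs = 1 kHz

def H1(f):
    # an example continuous-time frequency response: first-order lowpass
    fc = 100.0
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

def H2(omega):
    # H2(omega) = H1(omega / (2*pi*Ts)) for omega in [-pi, pi),
    # extended 2*pi-periodically (the simple form of Eq. 11.43).
    omega = (omega + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return H1(omega / (2 * math.pi * Ts))

# each period of H2 has the shape of H1 on (-1/(2Ts), 1/(2Ts)):
print(abs(H2(0.0) - H1(0.0)) < 1e-12)               # True
print(abs(H2(1.0) - H2(1.0 + 2*math.pi)) < 1e-12)   # True (periodicity)
```

The wrap step makes the 2π-periodicity explicit; within one period, H2 is just H1 evaluated at the rescaled frequency ω/(2πTs).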
Figure 11.22: A more complete model of how discrete time processing of continuous time signals is implemented in practice. Notice the addition of anti-aliasing and anti-imaging filters to promote input and output bandlimitedness. The ADC is shown to perform sampling with quantization. The digital filter is further specified to be causal. The DAC is shown to perform imperfect reconstruction, a zero order hold in this case.
In practice, the input signal will not be perfectly bandlimited, so an anti-aliasing filter with cutoff at or below fs/2 must be used before the signal is fed into the ADC. The ideal lowpass filter with cutoff frequency fs/2 would be optimal. Of course, this is not achievable, so approximations of the ideal lowpass filter with low gain above fs/2 must be accepted. This means that some aliasing is inevitable, but it can be reduced to a tolerable level.
The data obtained by the ADC must be stored in finitely many bits inside a digital logic device. Thus, there are only finitely many values that a digital sample can take, specifically 2^N where N is the number of bits, while there are uncountably many values an analog sample can take. Hence something must be lost in the quantization process. The result is that quantization limits both the range and precision of the output of the ADC. Both are finite, and improving one at a constant number of bits requires sacrificing quality in the other.
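A minimal uniform-quantizer sketch (the parameter choices and helper names are assumptions, not part of the original module) makes the range/precision tradeoff concrete:

```python
def quantize(sample, n_bits, full_scale=1.0):
    # Uniform quantizer with 2**n_bits levels spanning [-full_scale, full_scale).
    # At a fixed number of bits, widening the range coarsens the precision.
    levels = 2 ** n_bits
    step = 2.0 * full_scale / levels
    clipped = max(-full_scale, min(sample, full_scale - step))  # finite range
    return step * round(clipped / step)                          # finite precision

n_bits = 3
step = 2.0 / 2 ** n_bits  # 0.25 for 3 bits over [-1, 1)
outputs = {quantize(v / 200.0, n_bits) for v in range(-200, 200)}
print(len(outputs) <= 2 ** n_bits)  # True: at most 8 distinct output values
```

Doubling full_scale doubles the representable range but also doubles the step size, illustrating the tradeoff described above.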
In practice, the discrete time filter H2 that approximates H1 must also be causal, since its output must be computed from samples as they arrive; the causal filter annotation in Figure 11.22 reflects this addition. If the desired system is not causal but has impulse response equal to zero before some time t0, a delay can be introduced to make it causal. However, if the delay is excessive or the impulse response has infinite length, a windowing scheme becomes necessary in order to practically solve the problem. Multiplying by a window to decrease the length of the impulse response can reduce the necessary delay and decrease computational requirements.
Take, for instance, the case of the ideal lowpass filter. It is acausal and infinite in length in both directions. Thus, we must satisfy ourselves with an approximation. One might suggest that these approximations could be achieved by truncating the sinc impulse response of the lowpass filter at one of its zeros, effectively windowing it with a rectangular pulse. However, this has detrimental consequences in the frequency domain, as the resulting convolution would significantly spread the signal energy. Other windowing functions, of which there are many, spread the signal less in the frequency domain and are thus much more useful for producing these approximations.
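The advantage of a gentler window can be verified numerically. The following sketch (the filter length, cutoff, and choice of the Hamming window are illustrative assumptions) compares the worst stopband leakage of a rectangularly truncated sinc against a Hamming-windowed one:

```python
import math

def sinc_taps(cutoff, M):
    # ideal lowpass impulse response sin(cutoff*n)/(pi*n), truncated to |n| <= M
    return [cutoff/math.pi if n == 0 else math.sin(cutoff*n)/(math.pi*n)
            for n in range(-M, M + 1)]

def hamming(M):
    return [0.54 + 0.46*math.cos(math.pi*n/M) for n in range(-M, M + 1)]

def magnitude(taps, omega, M):
    # DTFT magnitude of the FIR filter at frequency omega
    re = sum(h*math.cos(omega*n) for h, n in zip(taps, range(-M, M + 1)))
    im = sum(h*math.sin(omega*n) for h, n in zip(taps, range(-M, M + 1)))
    return math.hypot(re, im)

M, cutoff = 20, math.pi/4
rect = sinc_taps(cutoff, M)                     # rectangular window (truncation)
hamm = [h*w for h, w in zip(rect, hamming(M))]  # Hamming window

# worst leakage over a stopband region well above the cutoff
stopband = [0.5*math.pi + 0.01*j for j in range(100)]
worst_rect = max(magnitude(rect, w, M) for w in stopband)
worst_hamm = max(magnitude(hamm, w, M) for w in stopband)
print(worst_hamm < worst_rect)  # True: the Hamming window leaks far less
```

The rectangular truncation leaves large Gibbs sidelobes in the stopband, while the tapered window trades a wider transition band for much lower leakage.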
On the output side, the DAC performs an imperfect reconstruction, commonly a zero order hold, at the sampling rate fs used by the ADC. However, doing so will result in a function that is not bandlimited to (−fs/2, fs/2). Therefore, an additional lowpass filter, called an anti-imaging filter, must be applied to the output. The process illustrated in Figure 11.22 reflects these additions. The anti-imaging filter attempts to bandlimit the signal to (−fs/2, fs/2), so an ideal lowpass filter with cutoff frequency fs/2 would be optimal. However, as has already been stated, this is not possible. Therefore, approximations of the ideal lowpass filter with low gain above fs/2 must be accepted. The anti-imaging filter typically has the same characteristics as the anti-aliasing filter.
This brief tutorial on some key terms in linear algebra is not meant to replace or be very helpful to those of you trying to gain a deep insight into linear algebra. Rather, this brief introduction to some of the terms and ideas of linear algebra is meant to provide a little background to those trying to get a better understanding of, or learn about, eigenvectors and eigenfunctions, which play a big role in deriving a few important ideas on Signals and Systems. The goal of these concepts will be to provide a background for signal decomposition.
Definition 12.1: Linearly Independent
For a given set of vectors, {x_1, x_2, ..., x_n}, x_i ∈ C^n, they are linearly independent if

c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0

only when c_1 = c_2 = ... = c_n = 0.

Example
x_1 = [3, 2]^T, x_2 = [6, 4]^T. These are not linearly independent, as, by inspection (x_2 = 2 x_1), they can be seen to not adhere to the definition of linear independence stated above.
APPENDIX
Figure 12.1:
Example 12.1
We are given the following two vectors:

x_1 = [3, 2]^T, x_2 = [1, 2]^T.

These are linearly independent, since c_1 x_1 + c_2 x_2 = 0 only if c_1 = c_2 = 0. Based on the definition, this proof shows that these vectors are indeed linearly independent. Again, we could also graph these two vectors (see Figure 12.2) to check for linear independence.
Figure 12.2:
Exercise 12.1.1 (Solution on p. 187.)
Are {x_1, x_2, x_3} linearly independent?

x_1 = [3, 2]^T, x_2 = [1, 2]^T, x_3 = [−1, 0]^T
As we have seen in the two above examples, oftentimes the independence of vectors can be easily seen through a graph. However, this may not be as easy when we are given three or more vectors. Can you easily tell whether or not these vectors are independent from Figure 12.3? Probably not, which is why the method used in the above solution becomes important.

Figure 12.3: Plot of the three vectors. It can be shown that a linear combination exists among the three, and therefore they are not linearly independent.
Hint: A set of m vectors in C^n cannot be linearly independent if m > n.
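In practice, linear independence can be checked mechanically. The sketch below (helper names are assumptions; real-valued vectors are used for simplicity) applies Gaussian elimination to compute the dimension of the span:

```python
def rank(vectors, tol=1e-9):
    # Gaussian elimination on a list of row vectors; the number of pivot
    # rows found is the dimension of their span.
    rows = [list(map(float, v)) for v in vectors]
    r, col = 0, 0
    n_cols = len(rows[0])
    while r < len(rows) and col < n_cols:
        pivot = max(range(r, len(rows)), key=lambda i: abs(rows[i][col]))
        if abs(rows[pivot][col]) < tol:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r, col = r + 1, col + 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)

print(independent([[3, 2], [6, 4]]))           # False: x2 = 2*x1
print(independent([[3, 2], [1, 2]]))           # True
print(independent([[3, 2], [1, 2], [-1, 0]]))  # False: 3 vectors in C^2
```

The last call illustrates the hint above: three vectors in a two-dimensional space can never be linearly independent.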
12.1.2 Span
Definition 12.2: Span
The span of a set of vectors {x_1, x_2, ..., x_k} is the set of vectors that can be written as a linear combination of {x_1, x_2, ..., x_k}:

span({x_1, ..., x_k}) = {α_1 x_1 + α_2 x_2 + ... + α_k x_k, α_i ∈ C}

Example
Given the vector x_1 = [3, 2]^T, the span of x_1 is a line.

Example
Given the vectors x_1 = [3, 2]^T and x_2 = [1, 2]^T, the span of these vectors is C^2.
12.1.3 Basis
Definition 12.3: Basis
A basis for C^n is a set of vectors that: (1) spans C^n and (2) is linearly independent.

Example 12.2
We are given the following vectors:

e_i = [0, ..., 0, 1, 0, ..., 0]^T

where the 1 is always in the ith place and the remaining values are zero. Then the basis for C^n is {e_i, i = [1, 2, ..., n]}.

note: {e_i, i = [1, 2, ..., n]} is called the standard basis.
Example 12.3

h_1 = [1, 1]^T, h_2 = [1, −1]^T

{h_1, h_2} is a basis for C^2.
Figure 12.4:
If {b_1, ..., b_n} is a basis for C^n, then we can express any x ∈ C^n as a linear combination of the b_i's:

x = α_1 b_1 + α_2 b_2 + ... + α_n b_n, α_i ∈ C
Example 12.4
Given the following vector,

x = [1, 2]^T

writing x in terms of {e_1, e_2} gives us x = e_1 + 2 e_2.
Exercise 12.1.2 (Solution on p. 187.)
Try and write x in terms of {h_1, h_2}.

note: x is the same vector in both cases, but we can express it in many different ways (we give only two out of many, many possibilities). You can take this even further by extending this idea of a basis to function spaces.
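Expressing a vector in a different basis amounts to solving a small linear system. The following sketch (Cramer's rule, with assumed helper names) recovers the coefficients for x = [1, 2]^T in the basis {h_1, h_2} of Example 12.3:

```python
def coords_in_basis(x, b1, b2):
    # Solve [b1 b2] [a1; a2] = x for a 2x2 basis via Cramer's rule.
    det = b1[0]*b2[1] - b2[0]*b1[1]
    a1 = (x[0]*b2[1] - b2[0]*x[1]) / det
    a2 = (b1[0]*x[1] - x[0]*b1[1]) / det
    return a1, a2

x, h1, h2 = [1, 2], [1, 1], [1, -1]
a1, a2 = coords_in_basis(x, h1, h2)
print(a1, a2)  # 1.5 -0.5, i.e. x = (3/2) h1 - (1/2) h2
```

The result agrees with the solution to Exercise 12.1.2 given at the end of this appendix.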
As mentioned in the introduction, these concepts of linear algebra will help prepare you to understand the Fourier Series4, which tells us that we can express periodic functions, f(t), in terms of their basis functions, e^{jω_0 n t}.

[Media Object]5

4 "Fourier Series: Eigenfunction Approach" <http://legacy.cnx.org/content/m10496/latest/>
5 This media object is a LabVIEW VI. Please view or download it at <LinearAlgebraCalc3.llb>
Figure 12.5:
Consider, for example, a savings account that earns interest each period, so that the balance will increase by a factor of (1 + r) from one period to the next. This system of interest is described by the first order difference equation shown in (12.1):

y(n) = (1 + r) y(n − 1)    (12.1)
Given a sufficiently descriptive set of initial conditions or boundary conditions, if there is a solution to the difference equation, that solution is unique and describes the behavior of the system. Of course, the results are only accurate to the degree that the model mirrors reality.
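The recursion and its closed-form solution can be checked against each other in a few lines (the rate and initial balance below are illustrative assumptions):

```python
r = 0.05    # 5% interest per period (assumed)
y0 = 100.0  # initial balance (assumed)

# iterate the recursion y(n) = (1 + r) * y(n - 1)
y = y0
for _ in range(10):
    y = (1 + r) * y

# the unique solution for this initial condition is y(n) = (1 + r)**n * y(0)
closed_form = (1 + r) ** 10 * y0
print(abs(y - closed_form) < 1e-9)  # True
```

This is the simplest case of the uniqueness claim above: one initial condition pins down the entire trajectory of a first order equation.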
Difference equations of this type can be expressed in operator form as

Cy(n) = f(n)    (12.2)

where C is a difference operator of the form

C = c_N D^N + c_{N−1} D^{N−1} + ... + c_1 D + c_0    (12.3)

in which D is the first difference operator

D(y(n)) = y(n) − y(n − 1).    (12.4)

Note that operators of this type satisfy the linearity conditions, and c_0, ..., c_N are real constants.
However, (12.2) can easily be written as a linear constant coefficient recurrence equation without difference operators. Conversely, linear constant coefficient recurrence equations can also be written in the form of a difference equation, so the two types of equations are different representations of the same relationship. Although we will still call them linear constant coefficient difference equations in this course, we typically will not write them using difference operators. Instead, we will write them in the simpler recurrence relation form
\[ \sum_{k=0}^{N} a_k y(n-k) = \sum_{k=0}^{M} b_k x(n-k) \quad (12.5) \]
where we can solve for y(n) as

\[ y(n) = \frac{1}{a_0} \left( -\sum_{k=1}^{N} a_k y(n-k) + \sum_{k=0}^{M} b_k x(n-k) \right) \quad (12.6) \]
The forms provided by (12.5) and (12.6) will be used in the remainder of this course.
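The recurrence form (12.6) translates directly into code. The sketch below (assumed function name, with an initial-rest assumption of y(n) = 0 and x(n) = 0 for n < 0) evaluates such an equation sample by sample:

```python
def lccde(a, b, x):
    # Evaluate y(n) = (1/a0) * ( -sum_{k=1}^{N} a_k y(n-k)
    #                            + sum_{k=0}^{M} b_k x(n-k) )   (Eq. 12.6)
    # assuming initial rest: y(n) = 0 and x(n) = 0 for n < 0.
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y

# a = [1, -1], b = [1] gives y(n) = y(n-1) + x(n): a running sum
print(lccde([1, -1], [1], [1, 2, 3, 4]))  # [1.0, 3.0, 6.0, 10.0]
```

The a coefficients feed back past outputs while the b coefficients weight present and past inputs, exactly as in (12.6).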
A similar concept for the continuous time setting, differential equations, is discussed in the chapter on time domain analysis of continuous time systems. There are many parallels between linear constant coefficient ordinary differential equations and linear constant coefficient difference equations.
Example 12.5
Recall that the Fibonacci sequence describes a (very unrealistic) model of what happens when a pair of rabbits get left alone in a black box... and produce a pair of offspring every month starting on their second month of life. This system is defined by the recursion relation for the number of rabbit pairs y(n) at month n:

y(n) = y(n − 1) + y(n − 2)    (12.7)

with the initial conditions y(0) = 0 and y(1) = 1.
12.3.1 Introduction
The approach to solving linear constant coefficient difference equations is to find the general form of all possible solutions to the equation and then apply a number of conditions to find the appropriate solution. The two main types of problems are initial value problems, which involve constraints on the solution at several consecutive points, and boundary value problems, which involve constraints on the solution at nonconsecutive points.
The number of initial conditions needed for an Nth order difference equation, where N is the highest order difference or the largest delay parameter of the output in the equation, is N, and a unique solution is always guaranteed if these are supplied. Boundary value problems can be slightly more complicated and will not necessarily have a unique solution or even a solution at all for a given set of conditions. Thus, this section will focus exclusively on initial value problems.
Consider a linear constant coefficient difference equation given in operator form as Ay(n) = f(n), in which A is a difference operator of the form

A = a_N D^N + a_{N−1} D^{N−1} + ... + a_1 D + a_0    (12.8)

where D is the first difference operator

D(y(n)) = y(n) − y(n − 1).    (12.9)

By the linearity of A, note that the general solution y_g(n) to any linear constant coefficient difference equation is the sum of a homogeneous solution y_h(n) to the equation Ay(n) = 0 and a particular solution y_p(n) that is specific to the forcing function f(n).

We wish to determine the forms of the homogeneous and nonhomogeneous solutions in full generality in order to avoid incorrectly restricting the form of the solution before applying any conditions. Otherwise, a valid set of initial or boundary conditions might appear to have no corresponding solution trajectory. The following sections discuss how to accomplish this for linear constant coefficient difference equations.
In order to find the homogeneous solution to a difference equation \(\sum_{k=0}^{N} a_k y(n-k) = f(n)\), consider the homogeneous equation \(\sum_{k=0}^{N} a_k y(n-k) = 0\). We know that the solutions have the form \(c \lambda^n\) for some complex constants \(c, \lambda\). Since \(\sum_{k=0}^{N} a_k c \lambda^{n-k} = 0\) for a solution, it follows that

\[ c \lambda^{n-N} \sum_{k=0}^{N} a_k \lambda^{N-k} = 0, \quad (12.10) \]

so it also follows that

\[ a_0 \lambda^N + ... + a_N = 0. \quad (12.11) \]
Therefore, the solution exponentials are the roots of the above polynomial, called the characteristic polynomial. For equations of order two or more, there will be several roots. If all of the roots are distinct, then the general form of the homogeneous solution is simply

\[ y_h(n) = c_1 \lambda_1^n + ... + c_N \lambda_N^n. \quad (12.12) \]

If a root has multiplicity that is greater than one, the repeated solutions must be multiplied by each power of n from 0 to one less than the root multiplicity (in order to ensure linearly independent solutions). For instance, if \(\lambda_1\) had a multiplicity of two, its contribution to the homogeneous solution would be of the form

\[ c_1 \lambda_1^n + c_2 n \lambda_1^n. \quad (12.13) \]
Example 12.6
Recall that the Fibonacci sequence describes a (very unrealistic) model of what happens when a pair of rabbits get left alone in a black box... and produce a pair of offspring every month starting on their second month of life. This system is defined by the recursion relation for the number of rabbit pairs y(n) at month n:

y(n) − y(n − 1) − y(n − 2) = 0    (12.14)

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.10>
with the initial conditions y(0) = 0 and y(1) = 1.

Note that the forcing function is zero, so only the homogeneous solution is needed. It is easy to show that the characteristic polynomial is \(\lambda^2 - \lambda - 1 = 0\), so its roots are \(\lambda_1 = \frac{1+\sqrt{5}}{2}\) and \(\lambda_2 = \frac{1-\sqrt{5}}{2}\). Thus, the solution is of the form

\[ y(n) = c_1 \left(\frac{1+\sqrt{5}}{2}\right)^n + c_2 \left(\frac{1-\sqrt{5}}{2}\right)^n. \quad (12.15) \]

Using the initial conditions, we determine that

\[ c_1 = \frac{\sqrt{5}}{5} \quad (12.16) \]

and

\[ c_2 = -\frac{\sqrt{5}}{5}. \quad (12.17) \]

Hence, the Fibonacci sequence is given by

\[ y(n) = \frac{\sqrt{5}}{5} \left(\frac{1+\sqrt{5}}{2}\right)^n - \frac{\sqrt{5}}{5} \left(\frac{1-\sqrt{5}}{2}\right)^n. \quad (12.18) \]
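The closed-form solution (12.18) can be checked against the recursion directly (the helper names below are assumptions):

```python
import math

def fib_closed(n):
    # Eq. (12.18): y(n) = (sqrt(5)/5)*phi**n - (sqrt(5)/5)*psi**n
    s5 = math.sqrt(5)
    phi = (1 + s5) / 2
    psi = (1 - s5) / 2
    return (s5 / 5) * phi ** n - (s5 / 5) * psi ** n

def fib_recursive(n):
    y = [0, 1]  # initial conditions y(0) = 0, y(1) = 1
    for i in range(2, n + 1):
        y.append(y[i - 1] + y[i - 2])
    return y[n]

print(all(round(fib_closed(n)) == fib_recursive(n) for n in range(20)))  # True
```

Rounding absorbs the small floating point error in evaluating the irrational roots; the two computations agree exactly as integers.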
Finding the particular solution to a difference equation is discussed further in the chapter concerning the z-transform, which greatly simplifies the procedure for solving linear constant coefficient difference equations using frequency domain tools.
Example 12.7
Consider the following difference equation describing a system with feedback:

y(n) − a y(n − 1) = x(n).    (12.19)

In order to find the homogeneous solution, consider the equation

y(n) − a y(n − 1) = 0.    (12.20)

It is easy to see that the characteristic polynomial is λ − a = 0, so λ = a is the only root. Thus the homogeneous solution is of the form

y_h(n) = c_1 a^n.    (12.21)

In order to find the particular solution, consider the output for the unit impulse input x(n) = δ(n):

y(n) − a y(n − 1) = δ(n).    (12.22)

By inspection, it is clear that the impulse response is

h(n) = a^n u(n).    (12.23)

Hence, the particular solution for a given input x(n) is

y_p(n) = (x ∗ h)(n).    (12.24)

Initial conditions and a specific input can further tailor this solution to a specific situation. More generally, the solution is the sum of a homogeneous solution to the difference equation that does not depend on the forcing function input and a particular solution to the difference equation that does depend on the forcing function input.
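This example can be verified numerically. The sketch below (the coefficient a = 0.5 and helper names are assumptions) runs the recursion on a unit impulse and compares the output against a^n u(n):

```python
a = 0.5  # feedback coefficient (|a| < 1 for stability)

def system(x):
    # y(n) = a*y(n-1) + x(n), with initial rest (y(n) = 0 for n < 0)
    y, prev = [], 0.0
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return y

impulse = [1.0] + [0.0] * 9   # unit impulse delta(n)
h = system(impulse)
expected = [a ** n for n in range(10)]  # h(n) = a**n u(n)
print(all(abs(p - q) < 1e-12 for p, q in zip(h, expected)))  # True
```

Feeding the impulse through the feedback recursion reproduces the geometric impulse response found by inspection in (12.23).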
Solution to Exercise 12.1.1: x_1 − x_2 + 2 x_3 = 0. Thus, we have found a linear combination of these three vectors that equals zero without setting the coefficients equal to zero. Therefore, these vectors are not linearly independent.
Solution to Exercise 12.1.2: x = (3/2) h_1 − (1/2) h_2.
13.1.1 Introduction
In order to view LabVIEW content embedded in Connexions modules, you must install and enable the LabVIEW 8.0 and 8.5 Local VI Execution Browser Plug-in for Windows. Step-by-step installation instructions
are given below. Once installation is complete, the placeholder box at the bottom of this module should
display a demo LabVIEW virtual instrument (VI).
1. Download and install the LabVIEW 8.0 Runtime Engine found at: http://zone.ni.com/devzone/cda/tut/p/id/4346
2. Download and install the LabVIEW 8.5 Runtime Engine found at: http://zone.ni.com/devzone/cda/tut/p/id/6633
3. Download the LabVIEW browser plug-in and place it in your browser's plug-ins directory. (Create the directory if it does not already exist.)
4. Restart your computer to complete the installation.
5. The placeholder box at the bottom of this module should now display a demo LabVIEW virtual instrument (VI).
You can also design your own LabVIEW virtual instrument from scratch.
Mathematica is a computational software program developed by Wolfram Research. Mathematica makes it easy to visualize data and create GUIs in only a few lines of code. Interactive demonstrations can be run by downloading source files and running them on your computer, but the CDF-player also comes with a plug-in for viewing dynamic content online in your web browser!
To create your own CDFs, you need Mathematica, unless you already have a Mathematica license; Wolfram has a free, save-disabled 15-day trial version of Mathematica. Finding existing CDFs is easy:
Wolfram has thousands of Mathematica programs (including source code) available at the Wolfram Demonstrations Project10. Anyone can create and submit a Demonstration. Also, many other websites host CDF content. Mathematica's free CDF-player is available for Windows and Mac OS X, and is in development for Linux; the CDF-Player plugin is available for IE, Firefox, Chrome, Safari, and Opera.
Open your .cdf in Mathematica and left click on the bracket surrounding the manipulate command. Click on Cell->Convert To->Bitmap. Then click on File->Save Selection As, and save the image file in your desired image format. Embed the files into the module in any way you like. Some tags you may find helpful include image, figure, download, and link (if linking to a .cdf file on another website). The best method is to create an interactive figure, and include a fallback png image of the cdf file should the CDF image not render properly. See the interactive demo/image below.
Convolution Demo
<figure id="demoonline">
<media id="CNXdemoonline" alt="timeshiftDemo">
<image mime-type="image/png" src="Convolutiondisplay-4.cdf" thumbnail="Convolution4.0Display.png" width="400"/>
<object width="500" height="500" src="Convolutiondisplay-4.cdf" mime-type="application/vnd.wolfram.cdf"/>
<image mime-type="application/postscript" for="pdf" src="Convolution4.0Display.png" width="400"/>
</media>
<caption>Interact (when online) with a Mathematica CDF demonstrating Convolution. To Download, right-click and save target as .cdf.</caption>
</figure>
8 http://www.wolfram.com/products/mathematica/purchase.html
9 http://www.wolfram.com/products/mathematica/experience/request.cgi
10 http://demonstrations.wolfram.com/index.html
Figure 13.2: Interact (when online) with a Mathematica CDF demonstrating Convolution. To Download, right-click and save target as .cdf.

Alternatively, this is how it looks when you use a thumbnail link to a live online demo.

Figure 13.3: Click on the above thumbnail image (when online) to view an interactive Mathematica Player demonstrating Convolution.
When troubleshooting, the error messages are often unhelpful, so it's best to evaluate often so the problem can be easily located. Search engines like Google are useful when you're looking for an explanation of specific error messages.
11 http://www.wolfram.com/learningcenter/
GLOSSARY
Glossary
B  Basis
A basis for C^n is a set of vectors that: (1) spans C^n and (2) is linearly independent.
L  Linearly Independent
For a given set of vectors, {x_1, x_2, ..., x_n}, they are linearly independent if c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0 only when c_1 = c_2 = ... = c_n = 0.
Example: x_1 = [3, 2]^T, x_2 = [6, 4]^T. These are not linearly independent, as, by inspection, they can be seen to not adhere to the definition of linear independence stated above.12
S  Span13
The span of a set of vectors {x_1, x_2, ..., x_k} is the set of vectors that can be written as a linear combination of {x_1, x_2, ..., x_k}:
span({x_1, ..., x_k}) = {α_1 x_1 + α_2 x_2 + ... + α_k x_k, α_i ∈ C}
Example: x_1 = [3, 2]^T; the span of x_1 is a line.
12 http://legacy.cnx.org/content/m10734/latest/
13 http://legacy.cnx.org/content/m10734/latest/
Example: x_1 = [3, 2]^T, x_2 = [1, 2]^T; the span of these vectors is C^2.
INDEX
Keywords are listed by the section with that keyword (page numbers are in parentheses). Keywords do not necessarily appear in the text of the page. They are merely associated with that section.
apples, 1.1 (1)
acausal, 11.8(171)
ADC, 11.8(171)
algebra, 1.2(5)
6.4(83), 11.1(147)
analysis, 5.4(69)
anti-imaging, 11.8(171)
continuous-time, 4.1(51)
9.1(121), 123
converter, 11.8(171)
Convolution, 54, 4.3(55), 4.4(58), 6.3(79),
bandlimited, 11.8(171)
bases, 12.1(177)
convolution integral, 85
Cooley-Tukey, 10.2(138)
CTFS, 5.4(69)
Ex.
apples, 1
DAC, 11.8(171)
de, 4.1(51)
3.2(39)
decimation, 7.3(95)
buttery, 141
decompose, 7.1(91)
cardinal, 11.3(152)
cascade, 3.1(37), 3.3(43)
causal, 2.1(17), 3.2(39), 11.8(171)
causality, 3.2(39), 4.5(62), 8.5(118)
common, 9.5(132)
complex, 2.2(22), 7.1(91)
complex exponential, 2.6(33), 5.3(68), 92,
7.5(100), 9.2(124)
complex exponentials, 126
complex numbers, 1.1(1), 1.2(5), 1.3(11)
complex plane, 2.6(33), 7.5(100)
complex-valued, 2.2(22), 7.1(91)
complexity, 139, 142
composite, 144
computational advantage, 140
considerations, 11.8(171)
Constant Coecient, 12.2(182), 12.3(183)
continuous, 17, 6.5(85), 11.8(171)
homogeneous, 12.3(183)
imperfect, 11.8(171)
9.4(128)
implementability, 11.8(171)
downsampling, 11.7(165)
8.3(110), 12.3(183)
DT, 8.3(110)
independence, 12.1(177)
innite-length signal, 19
duality, 6.3(79)
input, 3.1(37)
invariant, 11.8(171)
1.3(11)
12.2(182), 12.3(183)
embedded, 13.1(190)
examples, 9.5(132)
9.2(124)
expansion, 7.3(95)
linearity, 6.3(79)
feedback, 3.1(37)
nite-length signal, 19
form, 140
fourier, 10.3(139)
Fourier methods, 54, 109
fourier series, 5.1(65), 5.4(69)
M Mathematica, 13.2(190)
modulation, 6.3(79)
nonlinear, 3.2(39)
not, 143
10.3(139)
11.8(171)
G
H
LabVIEW, 13.1(190)
Laplace transform, 54, 5.1(65)
geometry, 1.1(1)
hold, 11.8(171)
Q
R
sinusoid, 7.1(91)
solution, 12.3(183)
periodicity, 5.2(66)
Player, 13.2(190)
spectrum, 11.1(147)
plug-in, 13.1(190)
spline, 11.3(152)
stability, 3.2(39)
practical, 11.8(171)
stable, 3.2(39)
precision, 11.8(171)
processing, 11.8(171)
properties, 8.4(115)
property, 4.4(58)
synthesis, 5.4(69)
pulse, 2.2(22)
quantization, 11.8(171)
range, 11.8(171)
6.4(83), 7.1(91)
theorem, 11.2(149)
real-valued, 7.1(91)
time, 11.8(171)
6.3(79)
reconstruct, 11.4(157)
11.8(171)
t-periodic, 5.2(66)
time-invariant, 3.3(43)
triangle, 2.2(22)
11.8(171)
sampling rate, 11.7(165)
sampling theorem, 11.8(171)
Sequence-Domain, 9.4(128)
triangle function, 24
sequences, 7.1(91)
unit-pulse function, 24
11.4(157), 11.8(171)
upsampling, 11.7(165)
VI, 13.1(190)
11.8(171)
window, 11.8(171)
Wolfram, 13.2(190)
z transform, 5.1(65)
ATTRIBUTIONS
Attributions
Collection:
Module: "Linear Time Invariant Systems"
By: Thanos Antoulas, JP Slavinsky
URL: http://legacy.cnx.org/content/m2102/2.26/
Pages: 43-50
Copyright: Thanos Antoulas, JP Slavinsky
License: http://creativecommons.org/licenses/by/3.0/
Module: "Continuous Time Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47437/1.1/
Pages: 51-52
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Continuous Time Systems
By: Michael Haag, Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10855/2.8/
Module: "Continuous Time Impulse Response"
By: Dante Soares
URL: http://legacy.cnx.org/content/m34629/1.2/
Pages: 52-55
Copyright: Dante Soares
License: http://creativecommons.org/licenses/by/3.0/
Module: "Continuous-Time Convolution"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47482/1.2/
Pages: 55-58
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Continuous Time Convolution
By: Melissa Selik, Richard Baraniuk, Stephen Kruzick, Dan Calderon
URL: http://legacy.cnx.org/content/m10085/2.34/
Module: "Properties of Continuous Time Convolution"
By: Melissa Selik, Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10088/2.20/
Pages: 58-61
Copyright: Melissa Selik, Richard Baraniuk, Stephen Kruzick
License: http://creativecommons.org/licenses/by/4.0/
Module: "Causality and Stability of Continuous-Time Linear Time-Invariant Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50671/1.3/
Pages: 62-63
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Module: "Discrete Time Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47454/1.4/
Pages: 105-107
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Discrete Time Systems
By: Don Johnson, Stephen Kruzick
URL: http://legacy.cnx.org/content/m34614/1.2/
Module: "Discrete Time Impulse Response"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47363/1.2/
Pages: 107-109
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Impulse Response
By: Dante Soares
URL: http://legacy.cnx.org/content/m34626/1.1/
Module: "Discrete-Time Convolution"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47455/1.2/
Pages: 110-115
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Discrete Time Convolution
By: Ricardo Radaelli-Sanchez, Richard Baraniuk, Stephen Kruzick, Catherine Elder
URL: http://legacy.cnx.org/content/m10087/2.27/
Module: "Properties of Discrete Time Convolution"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47456/1.1/
Pages: 115-117
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Properties of Discrete Time Convolution
By: Stephen Kruzick
URL: http://legacy.cnx.org/content/m34625/1.2/
Module: "Causality and Stability of Discrete-Time Linear Time-Invariant Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50677/1.1/
Pages: 118-119
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Module: "Discrete Time Convolution and the DTFT"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47375/1.2/
Pages: 133-135
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Convolution and the DTFT
By: Stephen Kruzick, Dan Calderon
URL: http://legacy.cnx.org/content/m34851/1.6/
Module: "Discrete Fourier Transform (DFT)"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47468/1.2/
Pages: 137-138
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Discrete Fourier Transform (DFT)
By: Don Johnson
URL: http://legacy.cnx.org/content/m10249/2.28/
Module: "DFT: Fast Fourier Transform"
By: Don Johnson
URL: http://legacy.cnx.org/content/m0504/2.9/
Pages: 138-139
Copyright: Don Johnson
License: http://creativecommons.org/licenses/by/3.0/
Module: "The Fast Fourier Transform (FFT)"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47467/1.1/
Pages: 139-144
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: The Fast Fourier Transform (FFT)
By: Justin Romberg
URL: http://legacy.cnx.org/content/m10783/2.7/
Module: "Signal Sampling"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47377/1.2/
Pages: 147-149
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Signal Sampling
By: Stephen Kruzick, Justin Romberg
URL: http://legacy.cnx.org/content/m10798/2.8/
Module: "Changing Sampling Rates in Discrete Time"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m48038/1.2/
Pages: 165-171
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Module: "Discrete Time Processing of Continuous Time Signals"
By: Marco F. Duarte, Natesh Ganesh
URL: http://legacy.cnx.org/content/m47398/1.3/
Pages: 171-175
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Processing of Continuous Time Signals
By: Justin Romberg, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10797/2.11/
Module: "Linear Algebra: The Basics"
Used here as: "Basic Linear Algebra"
By: Michael Haag, Justin Romberg
URL: http://legacy.cnx.org/content/m10734/2.7/
Pages: 177-182
Copyright: Michael Haag, Justin Romberg
License: http://creativecommons.org/licenses/by/3.0/
Module: "Linear Constant Coefficient Difference Equations"
By: Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m12325/1.5/
Pages: 182-183
Copyright: Richard Baraniuk, Stephen Kruzick
License: http://creativecommons.org/licenses/by/3.0/
Module: "Solving Linear Constant Coefficient Difference Equations"
By: Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m12326/1.6/
Pages: 183-186
Copyright: Richard Baraniuk, Stephen Kruzick
License: http://creativecommons.org/licenses/by/3.0/
Module: "Viewing Embedded LabVIEW Content in Connexions"
By: Stephen Kruzick
URL: http://legacy.cnx.org/content/m34460/1.5/
Page: 190
Copyright: Stephen Kruzick
License: http://creativecommons.org/licenses/by/3.0/
Based on: Viewing Embedded LabVIEW Content
By: Matthew Hutchinson
URL: http://legacy.cnx.org/content/m13753/1.3/
About OpenStax-CNX
Rhaptos is a web-based collaborative publishing system for educational material.