
Signals and Systems

Collection Editor:
Marco F. Duarte

Authors:
Thanos Antoulas
Richard Baraniuk
Dan Calderon
Marco F. Duarte
Catherine Elder
Natesh Ganesh
Michael Haag
Don Johnson

Stephen Kruzick
Matthew Moravec
Justin Romberg
Louis Scharf
Melissa Selik
JP Slavinsky
Dante Soares

Online:
< http://legacy.cnx.org/content/col11557/1.10/ >

OpenStax-CNX

This selection and arrangement of content as a collection is copyrighted by Marco F. Duarte. It is licensed under the
Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).
Collection structure revised: September 13, 2014
PDF generated: December 6, 2014
For copyright and attribution information for the modules contained in this collection, see p. 198.

Table of Contents
1 Review of Prerequisites: Complex Numbers
1.1 Geometry of Complex Numbers . . . . . 1
1.2 Complex Numbers: Algebra of Complex Numbers . . . . . 5
1.3 Representing Complex Numbers in a Vector Space . . . . . 11
2 Continuous-Time Signals
2.1 Signal Classifications and Properties . . . . . 17
2.2 Common Continuous Time Signals . . . . . 22
2.3 Signal Operations . . . . . 25
2.4 Energy and Power of Continuous-Time Signals . . . . . 28
2.5 Continuous Time Impulse Function . . . . . 31
2.6 Continuous-Time Complex Exponential . . . . . 33
3 Introduction to Systems
3.1 Introduction to Systems . . . . . 37
3.2 System Classifications and Properties . . . . . 39
3.3 Linear Time Invariant Systems . . . . . 43
4 Time Domain Analysis of Continuous Time Systems
4.1 Continuous Time Systems . . . . . 51
4.2 Continuous Time Impulse Response . . . . . 52
4.3 Continuous-Time Convolution . . . . . 55
4.4 Properties of Continuous Time Convolution . . . . . 58
4.5 Causality and Stability of Continuous-Time Linear Time-Invariant Systems . . . . . 62
5 Introduction to Fourier Analysis
5.1 Introduction to Fourier Analysis . . . . . 65
5.2 Continuous Time Periodic Signals . . . . . 66
5.3 Eigenfunctions of Continuous-Time LTI Systems . . . . . 68
5.4 Continuous Time Fourier Series (CTFS) . . . . . 69
6 Continuous Time Fourier Transform (CTFT)
6.1 Continuous Time Aperiodic Signals . . . . . 75
6.2 Continuous Time Fourier Transform (CTFT) . . . . . 76
6.3 Properties of the CTFT . . . . . 79
6.4 Common Fourier Transforms . . . . . 83
6.5 Continuous Time Convolution and the CTFT . . . . . 85
6.6 Frequency-Domain Analysis of Linear Time-Invariant Systems . . . . . 87
Solutions . . . . . 89
7 Discrete-Time Signals
7.1 Common Discrete Time Signals . . . . . 91
7.2 Energy and Power of Discrete-Time Signals . . . . . 93
7.3 Discrete-Time Signal Operations . . . . . 95
7.4 Discrete Time Impulse Function . . . . . 98
7.5 Discrete Time Complex Exponential . . . . . 100
8 Time Domain Analysis of Discrete Time Systems
8.1 Discrete Time Systems . . . . . 105
8.2 Discrete Time Impulse Response . . . . . 107
8.3 Discrete-Time Convolution . . . . . 110
8.4 Properties of Discrete Time Convolution . . . . . 115
8.5 Causality and Stability of Discrete-Time Linear Time-Invariant Systems . . . . . 118
9 Discrete Time Fourier Transform (DTFT)
9.1 Discrete Time Aperiodic Signals . . . . . 121
9.2 Eigenfunctions of Discrete Time LTI Systems . . . . . 124
9.3 Discrete Time Fourier Transform (DTFT) . . . . . 125
9.4 Properties of the DTFT . . . . . 128
9.5 Common Discrete Time Fourier Transforms . . . . . 132
9.6 Discrete Time Convolution and the DTFT . . . . . 133
10 Computing Fourier Transforms
10.1 Discrete Fourier Transform (DFT) . . . . . 137
10.2 DFT: Fast Fourier Transform . . . . . 138
10.3 The Fast Fourier Transform (FFT) . . . . . 139
Solutions . . . . . 145
11 Sampling and Reconstruction
11.1 Signal Sampling . . . . . 147
11.2 Sampling Theorem . . . . . 149
11.3 Signal Reconstruction . . . . . 152
11.4 Perfect Reconstruction . . . . . 157
11.5 Aliasing Phenomena . . . . . 160
11.6 Anti-Aliasing Filters . . . . . 163
11.7 Changing Sampling Rates in Discrete Time . . . . . 165
11.8 Discrete Time Processing of Continuous Time Signals . . . . . 171
12 Appendix: Mathematical Pot-Pourri
12.1 Basic Linear Algebra . . . . . 177
12.2 Linear Constant Coefficient Difference Equations . . . . . 182
12.3 Solving Linear Constant Coefficient Difference Equations . . . . . 183
Solutions . . . . . 187
13 Appendix: Viewing Interactive Content
13.1 Viewing Embedded LabVIEW Content in Connexions . . . . . 190
13.2 Getting Started With Mathematica . . . . . 190
Glossary . . . . . 193
Index . . . . . 195
Attributions . . . . . 198

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.10>

Chapter 1
Review of Prerequisites: Complex
Numbers
1.1 Geometry of Complex Numbers¹

note: This module is part of the collection A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

The most fundamental new idea in the study of complex numbers is the imaginary number j. This imaginary number is defined to be the square root of −1:

j = √−1   (1.1)

j² = −1.   (1.2)

The imaginary number j is used to build complex numbers from the real numbers x and y in the following way:

z = x + jy.   (1.3)

We say that the complex number z = x + jy has real part x and imaginary part y:

z = Re[z] + jIm[z]   (1.4)

Re[z] = x; Im[z] = y.   (1.5)

In MATLAB, the variable x is denoted by real(z), and the variable y is denoted by imag(z). In communication theory, x is called the in-phase component of z, and y is called the quadrature component. We call z = x + jy the Cartesian representation of z, with real component x and imaginary component y. We say that the Cartesian pair (x, y) codes the complex number z.

We may plot the complex number z on the plane as in Figure 1.1. We call the horizontal axis the real axis and the vertical axis the imaginary axis. The plane is called the complex plane. The radius and angle of the line to the point z = x + jy are

r = √(x² + y²)   (1.6)

θ = tan⁻¹(y/x).   (1.7)

See Figure 1.1. In MATLAB, r is denoted by abs(z), and θ is denoted by angle(z).

Figure 1.1: Cartesian and Polar Representations of the Complex Number z

¹This content is available online at <http://legacy.cnx.org/content/m21411/1.6/>.
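For readers working in Python rather than MATLAB, the same Cartesian and polar codes are available in the standard cmath module; a minimal sketch (the sample value z = 3 + j4 is arbitrary, and cmath.polar/cmath.rect play the roles of the MATLAB functions named above):

```python
import cmath, math

z = 3 + 4j                      # z = x + jy; Python writes the imaginary unit as 1j

x, y = z.real, z.imag           # counterparts of MATLAB's real(z) and imag(z)
r, theta = cmath.polar(z)       # counterparts of MATLAB's abs(z) and angle(z)

assert (x, y) == (3.0, 4.0)
assert r == 5.0                 # sqrt(3^2 + 4^2), as in (1.6)
assert theta == math.atan2(4.0, 3.0)   # the angle of (1.7)

# cmath.rect inverts the polar code: r*(cos(theta) + j*sin(theta)) recovers z
assert abs(cmath.rect(r, theta) - z) < 1e-12
```

Note that cmath.polar computes the angle with atan2, so the correct quadrant is returned even when x < 0, something the plain tan⁻¹(y/x) of (1.7) does not resolve on its own.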

The original Cartesian representation is obtained from the radius r and angle θ as follows:

x = r cos θ   (1.8)

y = r sin θ.   (1.9)

The complex number z may therefore be written as

z = x + jy = r cos θ + jr sin θ = r(cos θ + j sin θ).   (1.10)

The complex number cos θ + j sin θ is, itself, a number that may be represented on the complex plane and coded with the Cartesian pair (cos θ, sin θ). This is illustrated in Figure 1.2. The radius and angle to the point z = cos θ + j sin θ are 1 and θ. Can you see why?

Figure 1.2: The Complex Number cos θ + j sin θ

The complex number cos θ + j sin θ is of such fundamental importance to our study of complex numbers that we give it the special symbol e^{jθ}:

e^{jθ} = cos θ + j sin θ.   (1.11)

As illustrated in Figure 1.2, the complex number e^{jθ} has radius 1 and angle θ. With the symbol e^{jθ}, we may write the complex number z as

z = re^{jθ}.   (1.12)

We call z = re^{jθ} a polar representation for the complex number z. We say that the polar pair r∠θ codes the complex number z. In this polar representation, we define |z| = r to be the magnitude of z and arg(z) = θ to be the angle, or phase, of z:

|z| = r   (1.13)

arg(z) = θ.   (1.14)

With these definitions of magnitude and phase, we can write the complex number z as

z = |z|e^{j arg(z)}.   (1.15)

Let's summarize our ways of writing the complex number z and record the corresponding geometric codes:

z = x + jy = re^{jθ} = |z|e^{j arg(z)}.   (1.16)

The corresponding geometric codes are the Cartesian pair (x, y) and the polar pair r∠θ.

In "Complex Numbers: Roots of Quadratic Equations"² we show that the definition e^{jθ} = cos θ + j sin θ is more than symbolic. We show, in fact, that e^{jθ} is just the familiar function e^x evaluated at the imaginary argument x = jθ. We call e^{jθ} a complex exponential, meaning that it is an exponential with an imaginary argument.

Exercise 1.1.1
Prove (j)^{2n} = (−1)^n and (j)^{2n+1} = (−1)^n j. Evaluate j³, j⁴, j⁵.

Exercise 1.1.2
Prove e^{j[(π/2)+m2π]} = j, e^{j[(3π/2)+m2π]} = −j, e^{j(0+m2π)} = 1, and e^{j(π+m2π)} = −1. Plot these identities on the complex plane. (Assume m is an integer.)

Exercise 1.1.3
Find the polar representation z = re^{jθ} for each of the following complex numbers:

a. z = 1 + j0;
b. z = 0 + j1;
c. z = 1 + j1;
d. z = −1 − j1.

Plot the points on the complex plane.

Exercise 1.1.4
Find the Cartesian representation z = x + jy for each of the following complex numbers:

a. z = √2 e^{jπ/2};
b. z = √2 e^{jπ/4};
c. z = e^{j3π/4};
d. z = √2 e^{j3π/2}.

Plot the points on the complex plane.

Exercise 1.1.5
The following geometric codes represent complex numbers. Decode each by writing down the corresponding complex number z:

a. (0.7, −0.1)  z = ?
b. (−1.0, 0.5)  z = ?
c. 1.6∠π/8  z = ?
d. 0.4∠7π/8  z = ?

Exercise 1.1.6
Show that Im[jz] = Re[z] and Re[jz] = −Im[z].

Demo 1.1 (MATLAB). Run the following MATLAB program in order to compute and plot the complex number e^{jθ} for θ = i2π/360, i = 1, 2, ..., 360:

j=sqrt(-1)
n=360
for i=1:n,circle(i)=exp(j*2*pi*i/n);end;
axis('square')
plot(circle)

Replace the explicit for loop of line 3 by the implicit loop

circle=exp(j*2*pi*[1:n]/n);

to speed up the calculation. You can see from Figure 1.3 that the complex number e^{jθ}, evaluated at angles θ = 2π/360, 2(2π/360), ..., turns out complex numbers that lie at angle θ and radius 1. We say that e^{jθ} is a complex number that lies on the unit circle. We will have much more to say about the unit circle in Chapter 2.

Figure 1.3: The Complex Numbers e^{jθ} for 0 ≤ θ ≤ 2π (Demo 1.1)

²"Complex Numbers: Roots of Quadratic Equations" <http://legacy.cnx.org/content/m21415/latest/>
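Readers without MATLAB can reproduce the computation of Demo 1.1 with Python's standard cmath module; a sketch (plotting the real parts against the imaginary parts with any plotting library then traces the circle of Figure 1.3):

```python
import cmath

# e^{j*theta} at theta = 2*pi*i/n, i = 1..n: the implicit-loop version of Demo 1.1
n = 360
circle = [cmath.exp(1j * 2 * cmath.pi * i / n) for i in range(1, n + 1)]

# every point lies on the unit circle: |e^{j*theta}| = 1 ...
assert all(abs(abs(z) - 1.0) < 1e-12 for z in circle)
# ... and the first point sits at angle 2*pi/n
assert abs(cmath.phase(circle[0]) - 2 * cmath.pi / n) < 1e-12
```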

1.2 Complex Numbers: Algebra of Complex Numbers³

note: This module is part of the collection A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

The complex numbers form a mathematical field on which the usual operations of addition and multiplication are defined. Each of these operations has a simple geometric interpretation.

1.2.1 Addition and Multiplication.

The complex numbers z1 and z2 are added according to the rule

z1 + z2 = (x1 + jy1) + (x2 + jy2) = (x1 + x2) + j(y1 + y2).   (1.17)

We say that the real parts add and the imaginary parts add. As illustrated in Figure 1.4, the complex number z1 + z2 is computed from a parallelogram rule, wherein z1 + z2 lies on the node of a parallelogram formed from z1 and z2.

Exercise 1.2.1
Let z1 = r1 e^{jθ1} and z2 = r2 e^{jθ2}. Find a polar formula z3 = r3 e^{jθ3} for z3 = z1 + z2 that involves only the variables r1, r2, θ1, and θ2. The formula for r3 is the law of cosines.

Figure 1.4: Adding Complex Numbers

The product of z1 and z2 is

z1 z2 = (x1 + jy1)(x2 + jy2) = (x1 x2 − y1 y2) + j(y1 x2 + x1 y2).   (1.18)

If the polar representations for z1 and z2 are used, then the product may be written as⁴

z1 z2 = r1 e^{jθ1} r2 e^{jθ2}
      = (r1 cos θ1 + jr1 sin θ1)(r2 cos θ2 + jr2 sin θ2)
      = (r1 cos θ1 r2 cos θ2 − r1 sin θ1 r2 sin θ2) + j(r1 sin θ1 r2 cos θ2 + r1 cos θ1 r2 sin θ2)
      = r1 r2 cos(θ1 + θ2) + jr1 r2 sin(θ1 + θ2)
      = r1 r2 e^{j(θ1+θ2)}.   (1.19)

We say that the magnitudes multiply and the angles add. As illustrated in Figure 1.5, the product z1 z2 lies at the angle (θ1 + θ2).

Figure 1.5: Multiplying Complex Numbers

³This content is available online at <http://legacy.cnx.org/content/m50948/1.1/>.
⁴We have used the trigonometric identities cos(θ1 + θ2) = cos θ1 cos θ2 − sin θ1 sin θ2 and sin(θ1 + θ2) = sin θ1 cos θ2 + cos θ1 sin θ2 to derive this result.
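The two product rules, the Cartesian rule (1.18) and the polar rule (1.19), can be checked against each other numerically in a few lines of Python; a sketch using the standard cmath module (the sample radii and angles are arbitrary):

```python
import cmath, math

z1 = cmath.rect(2.0, math.pi / 6)   # r1 = 2, theta1 = pi/6
z2 = cmath.rect(3.0, math.pi / 4)   # r2 = 3, theta2 = pi/4
p = z1 * z2                         # Python multiplies via the Cartesian rule (1.18)

# the polar rule (1.19): magnitudes multiply ...
assert abs(abs(p) - 2.0 * 3.0) < 1e-12
# ... and angles add
assert abs(cmath.phase(p) - (math.pi / 6 + math.pi / 4)) < 1e-12
```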

There is a special case of complex multiplication that will become very important in our study
5

of phasors in the chapter on Phasors . When


number

z2 = e

j2

, then the product of

z1

and

z1 is
z2 is

the complex number

z1 = r1 ej1

and

z2

is the complex

z1 z2 = z1 ej2 = r1 ej(1 +2 ) .
As illustrated in Figure 1.6,

Figure 1.6:

z1 z2

is just a rotation of

z1

through the angle

(1.20)

2 .

Rotation of Complex Numbers

Exercise 1.2.2

z1 = x + jy = rej . Compute the complex number z2 = jz1 in


Cartesian and polar forms. The complex number z2 is sometimes called perp(z1 ). Explain why
j
writing perp(z1 ) as z1 e 2 . What is 2 ? Repeat this problem for z3 = jz1 .
Begin with the complex number

5 "Phasors:

Introduction" <http://legacy.cnx.org/content/m21469/latest/>
Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.10>

its
by

CHAPTER 1. REVIEW OF PREREQUISITES: COMPLEX NUMBERS

Powers. If the complex number z1 multiplies itself N times, then the result is

(z1)^N = r1^N e^{jNθ1}.   (1.21)

This result may be proved with a simple induction argument. Assume z1^k = r1^k e^{jkθ1}. (The assumption is true for k = 1.) Then use the recursion z1^{k+1} = z1^k z1 = r1^{k+1} e^{j(k+1)θ1}. Iterate this recursion (or induction) until k + 1 = N. Can you see that, as n ranges from n = 1, ..., N, the angle of z1^n ranges from θ1 to 2θ1, ..., to Nθ1 and the radius ranges from r1 to r1², ..., to r1^N? This result is explored more fully in Problem 1.19.

Complex Conjugate. Corresponding to every complex number z = x + jy = re^{jθ} is the complex conjugate

z* = x − jy = re^{−jθ}.   (1.22)

The complex number z and its complex conjugate are illustrated in Figure 1.7. The recipe for finding complex conjugates is to change j to −j. This changes the sign of the imaginary part of the complex number.

Figure 1.7: A Complex Variable and Its Complex Conjugate

Magnitude Squared. The product of z and its complex conjugate is called the magnitude squared of z and is denoted by |z|²:

|z|² = z*z = (x − jy)(x + jy) = x² + y² = re^{−jθ} re^{jθ} = r².   (1.23)

Note that |z| = r is the radius, or magnitude, that we defined in "Geometry of Complex Numbers" (Section 1.1).
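The conjugation recipe and the identity z*z = x² + y² = r² of (1.23) can be verified directly in Python, where conjugation is the built-in complex method conjugate(); a small sketch with an arbitrary sample value:

```python
z = 3 + 4j

# conjugation changes j to -j: the sign of the imaginary part flips
assert z.conjugate() == 3 - 4j

# z* z is the real number x^2 + y^2 = r^2, as in (1.23)
mag2 = z.conjugate() * z
assert mag2.imag == 0.0
assert mag2.real == 3**2 + 4**2          # 25
assert abs(abs(z)**2 - mag2.real) < 1e-12   # |z|^2 agrees with z* z
```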

Exercise 1.2.3
Write z* as z* = zw. Find w in its Cartesian and polar forms.

Exercise 1.2.4
Prove that angle(z2 z1*) = θ2 − θ1.

Exercise 1.2.5
Show that the real and imaginary parts of z = x + jy may be written as

Re[z] = (1/2)(z + z*)   (1.24)

Im[z] = (1/2j)(z − z*).   (1.25)

Commutativity, Associativity, and Distributivity. The complex numbers commute, associate, and distribute under addition and multiplication as follows:

z1 + z2 = z2 + z1;  z1 z2 = z2 z1   (1.26)

(z1 + z2) + z3 = z1 + (z2 + z3);  (z1 z2) z3 = z1 (z2 z3)   (1.27)

z1 (z2 + z3) = z1 z2 + z1 z3.   (1.28)

Identities and Inverses. In the field of complex numbers, the complex number 0 + j0 (denoted by 0) plays the role of an additive identity, and the complex number 1 + j0 (denoted by 1) plays the role of a multiplicative identity:

z + 0 = z = 0 + z;  z1 = z = 1z.   (1.29)

In this field, the complex number −z = −x + j(−y) is the additive inverse of z, and the complex number z⁻¹ = x/(x² + y²) − j y/(x² + y²) is the multiplicative inverse:

z + (−z) = 0;  zz⁻¹ = 1.   (1.30)

Exercise 1.2.6
Show that the additive inverse of z = re^{jθ} may be written as re^{j(θ+π)}.

Exercise 1.2.7
Show that the multiplicative inverse of z may be written as

z⁻¹ = (1/(z*z)) z* = (x − jy)/(x² + y²).   (1.31)

Show that z*z is real. Show that z⁻¹ may also be written as z⁻¹ = r⁻¹ e^{−jθ}. Plot z and z⁻¹ for a representative z.

Exercise 1.2.8
Prove (j)⁻¹ = −j.

Exercise 1.2.9
Find z⁻¹ when z = 1 + j1.

Exercise 1.2.10
Prove (z⁻¹)* = (z*)⁻¹ = r⁻¹ e^{jθ} = (1/(z*z)) z. Plot z and (z⁻¹)* for a representative z.

Exercise 1.2.11
Find all of the complex numbers z with the property that jz = z*. Illustrate these complex numbers on the complex plane.

Demo 1.2 (MATLAB). Create and run the following script file (name it Complex Numbers)⁶:

clear, clf
j=sqrt(-1)
z1=1+j*.5,z2=2+j*1.5
z3=z1+z2,z4=z1*z2
z5=conj(z1),z6=j*z2
axis([-4 4 -4 4]),axis('square'),plot([0 z1],'-o')
hold on
plot([0 z2],'-o'),plot([0 z3],'-+'),plot([0 z4],'-*'),
plot([0 z5],'x'),plot([0 z6],'-x')

Figure 1.8: Complex Numbers (Demo 1.2)

With the help of Appendix 1, you should be able to annotate each line of this program. View your graphics display to verify the rules for add, multiply, conjugate, and perp. See Figure 1.8.

⁶If you are using PC-MATLAB, you will need to name your file cmplxnos.m.

Exercise 1.2.12
Prove that z⁰ = 1.

Exercise 1.2.13
(MATLAB) Choose z1 = 1.05 e^{j2π/16} and z2 = 0.95 e^{j2π/16}. Write a MATLAB program to compute and plot z1^n and z2^n for n = 1, 2, ..., 32. You should observe a figure like Figure 1.9.

Figure 1.9: Powers of z
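The same experiment can be sketched in Python with only the standard library (plot the real parts against the imaginary parts to reproduce the spirals of Figure 1.9); by the power rule (1.21), z1, with radius greater than 1, spirals outward, while z2 spirals inward:

```python
import cmath

z1 = 1.05 * cmath.exp(1j * 2 * cmath.pi / 16)   # |z1| > 1: powers spiral outward
z2 = 0.95 * cmath.exp(1j * 2 * cmath.pi / 16)   # |z2| < 1: powers spiral inward

powers1 = [z1**n for n in range(1, 33)]
powers2 = [z2**n for n in range(1, 33)]

# radii follow r^n, as promised by eq. (1.21)
assert abs(abs(powers1[-1]) - 1.05**32) < 1e-9
assert abs(abs(powers2[-1]) - 0.95**32) < 1e-9
```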

1.3 Representing Complex Numbers in a Vector Space⁷

note: This module is part of the collection A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

So far we have coded the complex number z = x + jy with the Cartesian pair (x, y) and with the polar pair r∠θ. We now show how the complex number z may be coded with a two-dimensional vector z and show how this new code may be used to gain insight about complex numbers.

Coding a Complex Number as a Vector. We code the complex number z = x + jy with the two-dimensional vector z = [x; y] (written here as a column, with its entries separated by a semicolon):

x + jy = z ↔ z = [x; y].   (1.32)

We plot this vector as in Figure 1.10. We say that the vector z belongs to a vector space. This means that vectors may be added and scaled according to the rules

z1 + z2 = [x1 + x2; y1 + y2]   (1.33)

az = [ax; ay].   (1.34)

Figure 1.10: The Vector z Coding the Complex Number z

⁷This content is available online at <http://legacy.cnx.org/content/m21414/1.6/>.

Furthermore, it means that an additive inverse −z, an additive identity 0, and a multiplicative identity 1 all exist. The vector 0 is 0 = [0; 0]:

z + (−z) = 0   (1.35)

1z = z.   (1.36)

Prove that vector addition and scalar multiplication satisfy these properties of commutation, association, and distribution:

z1 + z2 = z2 + z1   (1.37)

(z1 + z2) + z3 = z1 + (z2 + z3)   (1.38)

a(bz) = (ab)z   (1.39)

a(z1 + z2) = az1 + az2.   (1.40)

Inner Product and Norm. The inner product between two vectors z1 and z2 is defined to be the real number

(z1, z2) = x1 x2 + y1 y2.   (1.41)

We sometimes write this inner product as the vector product (more on this in Linear Algebra⁸)

(z1, z2) = z1^T z2 = [x1 y1][x2; y2] = (x1 x2 + y1 y2).   (1.42)

Exercise 1.3.1
Prove (z1, z2) = (z2, z1).

When z1 = z2 = z, then the inner product between z and itself is the norm squared of z:

||z||² = (z, z) = x² + y².   (1.43)

These properties of vectors seem abstract. However, as we now show, they may be used to develop a vector calculus for doing complex arithmetic.

⁸"Linear Algebra: Introduction" <http://legacy.cnx.org/content/m21454/latest/>

A Vector Calculus for Complex Arithmetic. The addition of two complex numbers z1 and z2 corresponds to the addition of the vectors z1 and z2:

z1 + z2 ↔ z1 + z2 = [x1 + x2; y1 + y2].   (1.44)

The scalar multiplication of the complex number z2 by the real number x1 corresponds to scalar multiplication of the vector z2 by x1:

x1 z2 ↔ x1 [x2; y2] = [x1 x2; x1 y2].   (1.45)

Similarly, the multiplication of the complex number z2 by the real number y1 is

y1 z2 ↔ y1 [x2; y2] = [y1 x2; y1 y2].   (1.46)

The complex product z1 z2 = (x1 + jy1) z2 is therefore represented as

z1 z2 ↔ [x1 x2 − y1 y2; x1 y2 + y1 x2].   (1.47)

This representation may be written as the inner product

z1 z2 = z2 z1 ↔ [(v, z1); (w, z1)]   (1.48)

where v and w are the vectors v = [x2; −y2] and w = [y2; x2]. By defining the matrix

[x2 −y2; y2 x2],   (1.49)

we can represent the complex product z1 z2 as a matrix-vector multiply (more on this in Linear Algebra⁹):

z1 z2 = z2 z1 ↔ [x2 −y2; y2 x2][x1; y1].   (1.50)

⁹"Linear Algebra: Introduction" <http://legacy.cnx.org/content/m21454/latest/>
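The matrix-vector code for the complex product can be checked numerically; a short Python sketch with arbitrary sample values (the helper mat_vec is introduced here only for illustration):

```python
def mat_vec(m, v):
    """2x2 matrix times 2-vector, both as nested lists."""
    return [m[0][0]*v[0] + m[0][1]*v[1],
            m[1][0]*v[0] + m[1][1]*v[1]]

x1, y1 = 1.0, 2.0    # codes z1 = 1 + j2
x2, y2 = 3.0, -1.0   # codes z2 = 3 - j1

# eq. (1.50): the matrix built from z2, applied to the vector coding z1
m = [[x2, -y2],
     [y2,  x2]]
vec = mat_vec(m, [x1, y1])

# compare with the ordinary complex product
z = complex(x1, y1) * complex(x2, y2)
assert vec == [z.real, z.imag]
```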

With this representation, we can represent rotation as

ze^{jθ} = e^{jθ}z ↔ [cos θ −sin θ; sin θ cos θ][x; y].   (1.51)

We call the matrix [cos θ −sin θ; sin θ cos θ] a rotation matrix.

Exercise 1.3.2
Call R(θ) the rotation matrix:

R(θ) = [cos θ −sin θ; sin θ cos θ].   (1.52)

Show that R(−θ) rotates by (−θ). What can you say about R(θ)w when w = R(φ)z?

Exercise 1.3.3
Represent the complex conjugate of z as

z* ↔ [a b; c d][x; y]   (1.53)

and find the elements a, b, c, and d of the matrix.

Inner Product and Polar Representation. From the norm of a vector, we derive a formula for the magnitude of z in the polar representation z = re^{jθ}:

r = (x² + y²)^{1/2} = ||z|| = (z, z)^{1/2}.   (1.54)

If we define the coordinate vectors e1 = [1; 0] and e2 = [0; 1], then we can represent the vector z as

z = (z, e1)e1 + (z, e2)e2.   (1.55)

See Figure 1.11. From the figure it is clear that the cosine and sine of the angle θ are

cos θ = (z, e1)/||z||;  sin θ = (z, e2)/||z||.   (1.56)

Figure 1.11: Representation of z in its Natural Basis
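Rotation by R(θ) in (1.51) and multiplication by e^{jθ} are the same operation in two codes, which is easy to confirm numerically; a Python sketch (the helper rotate and the sample values are illustrative only):

```python
import math, cmath

def rotate(theta, v):
    """Apply the rotation matrix R(theta) of eq. (1.51) to the vector v = [x, y]."""
    c, s = math.cos(theta), math.sin(theta)
    return [c*v[0] - s*v[1], s*v[0] + c*v[1]]

theta = math.pi / 3
z = 2 - 1j

# rotating the coding vector agrees with multiplying the complex number by e^{j*theta}
v = rotate(theta, [z.real, z.imag])
w = z * cmath.exp(1j * theta)
assert abs(v[0] - w.real) < 1e-12 and abs(v[1] - w.imag) < 1e-12
```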

This gives us another representation for any vector z:

z = ||z||cos θ e1 + ||z||sin θ e2.   (1.57)

The inner product between two vectors z1 and z2 is now

(z1, z2) = [(z1, e1)e1^T + (z1, e2)e2^T][(z2, e1)e1 + (z2, e2)e2]
         = (z1, e1)(z2, e1) + (z1, e2)(z2, e2)
         = ||z1||cos θ1 ||z2||cos θ2 + ||z1||sin θ1 ||z2||sin θ2.   (1.58)

It follows that cos(θ2 − θ1) = cos θ1 cos θ2 + sin θ1 sin θ2 may be written as

cos(θ2 − θ1) = (z1, z2)/(||z1|| ||z2||).   (1.59)

This formula shows that the cosine of the angle between two vectors z1 and z2, which is, of course, the cosine of the angle of z2 z1*, is the ratio of the inner product to the norms.

Exercise 1.3.4
Prove the Schwarz and triangle inequalities and interpret them:

(Schwarz) |(z1, z2)| ≤ ||z1|| ||z2||   (1.60)

(triangle) ||z1 − z2|| ≤ ||z1 − z3|| + ||z2 − z3||.   (1.61)


Chapter 2
Continuous-Time Signals
2.1 Signal Classifications and Properties¹

2.1.1 Introduction
This module will begin our study of signals and systems by laying out some of the fundamentals of signal classification. It is essentially an introduction to the important definitions and properties that are fundamental to the discussion of signals and systems, with a brief discussion of each.

2.1.2 Classifications of Signals

2.1.2.1 Continuous-Time vs. Discrete-Time
As the names suggest, this classification is determined by whether or not the time axis is discrete (countable) or continuous (Figure 2.1). A continuous-time signal will contain a value for all real numbers along the time axis. In contrast to this, a discrete-time signal², often created by sampling a continuous signal, will only have values at equally spaced intervals along the time axis.

Figure 2.1

¹This content is available online at <http://legacy.cnx.org/content/m47271/1.3/>.
²"Discrete-Time Signals" <http://legacy.cnx.org/content/m0009/latest/>


2.1.2.2 Analog vs. Digital
The difference between analog and digital is similar to the difference between continuous-time and discrete-time. However, in this case the difference involves the values of the function. Analog corresponds to a continuous set of possible function values, while digital corresponds to a discrete set of possible function values. A common example of a digital signal is a binary sequence, where the values of the function can only be one or zero.

Figure 2.2

2.1.2.3 Periodic vs. Aperiodic

Periodic signals³ repeat with some period T , while aperiodic, or nonperiodic, signals do not (Figure 2.3). We can define a periodic function through the following mathematical expression, where t can be any number and T is a positive constant:

f (t) = f (T + t)

(2.1)

The fundamental period of our function, f (t), is the smallest value of T that still allows (2.1) to be true.

3 "Continuous Time Periodic Signals" <http://legacy.cnx.org/content/m10744/latest/>
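The defining relation (2.1) can be checked numerically on a sampled grid. Below is a minimal sketch, assuming f (t) = sin (2πt) with fundamental period 1; the helper name is our own:

```python
import numpy as np

def is_period(f, T, t=np.linspace(-10, 10, 2001), tol=1e-9):
    """Numerically test f(t) = f(T + t) on a sampled grid (evidence, not a proof)."""
    return bool(np.allclose(f(t), f(t + T), atol=tol))

f = lambda t: np.sin(2 * np.pi * t)

assert is_period(f, 1.0)      # 1 is a period (the fundamental period here)
assert is_period(f, 2.0)      # any positive integer multiple is also a period
assert not is_period(f, 0.5)  # smaller than the fundamental period
```

Note that passing the test on samples is only numerical evidence; the fundamental period is the smallest positive T for which the test holds.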


Figure 2.3: (a) A periodic signal with period T0 (b) An aperiodic signal

2.1.2.4 Finite vs. Infinite Length

As the name implies, signals can be characterized as to whether they have a finite or infinite length set of values. Most finite length signals are used when dealing with discrete-time signals or a given sequence of values. Mathematically speaking, f (t) is a finite-length signal if it is nonzero over a finite interval

t1 < t < t2

where t1 > −∞ and t2 < ∞. An example can be seen in Figure 2.4. Similarly, an infinite-length signal, f (t), is defined as nonzero over all real numbers:

−∞ ≤ t ≤ ∞

Figure 2.4: Finite-Length Signal. Note that it only has nonzero values on a set, finite interval.


2.1.2.5 Even vs. Odd

An even signal is any signal f such that f (t) = f (−t). Even signals can be easily spotted as they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal f such that f (t) = −f (−t) (Figure 2.5).

Figure 2.5: (a) An even signal (b) An odd signal

Using the definitions of even and odd signals, we can show that any signal can be written as a combination of an even and odd signal. That is, every signal has an odd-even decomposition. To demonstrate this, we have to look no further than a single equation.

f (t) = 1/2 (f (t) + f (−t)) + 1/2 (f (t) − f (−t))

(2.2)

By multiplying and adding this expression out, it can be shown to be true. Also, it can be shown that f (t) + f (−t) fulfills the requirement of an even function, while f (t) − f (−t) fulfills the requirement of an odd function (Figure 2.6).

Example 2.1

Figure 2.6: (a) The signal we will decompose using odd-even decomposition (b) Even part: e (t) = 1/2 (f (t) + f (−t)) (c) Odd part: o (t) = 1/2 (f (t) − f (−t)) (d) Check: e (t) + o (t) = f (t)
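The odd-even decomposition is straightforward to compute for a sampled signal. A minimal sketch with our own helper names, using a one-sided exponential as the test signal:

```python
import numpy as np

def even_part(f, t):
    """e(t) = (f(t) + f(-t)) / 2"""
    return 0.5 * (f(t) + f(-t))

def odd_part(f, t):
    """o(t) = (f(t) - f(-t)) / 2"""
    return 0.5 * (f(t) - f(-t))

t = np.linspace(-5, 5, 1001)                 # symmetric grid, so t[::-1] = -t
f = lambda t: np.exp(-t) * (t >= 0)          # neither even nor odd

e, o = even_part(f, t), odd_part(f, t)
assert np.allclose(e, e[::-1])               # even part is symmetric: e(t) = e(-t)
assert np.allclose(o, -o[::-1])              # odd part is antisymmetric: o(t) = -o(-t)
assert np.allclose(e + o, f(t))              # check: e(t) + o(t) = f(t)
```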


Example 2.2
Consider the signal defined for all real t described by

f (t) = { sin (2πt) /t   if t ≥ 1
        { 0              if t < 1

(2.3)

This signal is continuous time, analog, aperiodic, infinite length, causal, and neither even nor odd.

2.1.3 Signal Classifications Summary

This module describes just some of the many ways in which signals can be classified. They can be continuous time or discrete time, analog or digital, periodic or aperiodic, finite or infinite, and deterministic or random. We can also divide them based on their causality and symmetry properties. There are other ways to classify signals, such as boundedness, handedness, and continuity, that are not discussed here but will be described in subsequent modules.

2.2 Common Continuous Time Signals

2.2.1 Introduction
Before looking at this module, hopefully you have an idea of what a signal is and what basic classifications and properties a signal can have. In review, a signal is a function defined with respect to an independent variable. This variable is often time but could represent any number of things. Mathematically, continuous time analog signals have continuous independent and dependent variables. This module will describe some useful continuous time analog signals.

2.2.2 Important Continuous Time Signals


2.2.2.1 Sinusoids
One of the most important elemental signals that you will deal with is the real-valued sinusoid. In its continuous-time form, we write the general expression as

A cos (ωt + φ)

(2.4)

where A is the amplitude, ω is the frequency, and φ is the phase. Thus, the period of the sinusoid is

T = 2π/ω

(2.5)

4 This content is available online at <http://legacy.cnx.org/content/m47606/1.4/>.

Figure 2.7: Sinusoid with A = 2, ω = 2, and φ = 0.

2.2.2.2 Unit Step

Another very basic signal is the unit-step function that is defined as

u (t) = { 0   if t < 0
        { 1   if t ≥ 0

(2.6)

Figure 2.8: Continuous-Time Unit-Step Function

The step function is a useful tool for testing and for defining other signals. For example, when different shifted versions of the step function are multiplied by other signals, one can select a certain portion of the signal and zero out the rest.
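The windowing idea just described (multiplying by shifted steps to keep a portion of a signal) can be sketched numerically; the helper names are our own:

```python
import numpy as np

def u(t):
    """Sampled unit step: 0 for t < 0, 1 for t >= 0."""
    return np.where(t >= 0, 1.0, 0.0)

t = np.linspace(-5, 5, 1001)
x = np.cos(t)

# Keep x(t) only on [1, 3) and zero out the rest:
window = u(t - 1) - u(t - 3)
y = x * window

assert np.all(y[t < 1] == 0)
assert np.all(y[t > 3] == 0)
assert np.allclose(y[(t >= 1) & (t < 3)], x[(t >= 1) & (t < 3)])
```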


2.2.2.3 Unit Pulse

Many engineers interpret the unit step function as the representation of turning on a switch and leaving it on. The unit-pulse function can be thought of as turning a switch on and off after a unit of time. It is defined as

p (t) = { 1   if −0.5 ≤ t ≤ 0.5
        { 0   if t < −0.5 or t > 0.5

(2.7)

so that it is an even function.

Figure 2.9: Continuous-Time Unit-Pulse Function

Note that the pulse can be easily written in terms of unit step functions as

p (t) = u (t + 0.5) − u (t − 0.5)
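The step-difference identity above is easy to check numerically. A minimal sketch (helper names are our own); note that with the sampled convention u (0) = 1, the difference equals 1 on [−0.5, 0.5) rather than the closed interval, a single-point discrepancy that does not affect areas:

```python
import numpy as np

def u(t):
    return np.where(t >= 0, 1.0, 0.0)

def p(t):
    """Unit pulse built from shifted steps: 1 on [-0.5, 0.5), 0 elsewhere."""
    return u(t + 0.5) - u(t - 0.5)

t = np.linspace(-2, 2, 4001)                 # grid spacing 0.001
dt = t[1] - t[0]

assert p(0.0) == 1.0 and p(-0.5) == 1.0
assert p(1.0) == 0.0 and p(-1.0) == 0.0

area = np.sum(p(t)) * dt                     # the pulse has unit area
assert abs(area - 1.0) < 1e-2
```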

2.2.2.4 Triangle Function

The last function we will introduce is the triangle function, which represents an input that increases and then decreases linearly with time. It is defined as

Λ (t) = { t + 1   if −1 ≤ t ≤ 0
        { 1 − t   if 0 ≤ t ≤ 1
        { 0       if t < −1 or t > 1

(2.8)
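An equivalent closed form for the piecewise definition above is Λ (t) = max (0, 1 − |t|); a quick numerical check (helper names are our own):

```python
import numpy as np

def tri_piecewise(t):
    """Triangle function as defined piecewise in (2.8)."""
    return np.where((t >= -1) & (t <= 0), t + 1,
           np.where((t > 0) & (t <= 1), 1 - t, 0.0))

def tri_compact(t):
    """Equivalent closed form: max(0, 1 - |t|)."""
    return np.maximum(0.0, 1.0 - np.abs(t))

t = np.linspace(-3, 3, 1201)
assert np.allclose(tri_piecewise(t), tri_compact(t))
assert tri_compact(0.0) == 1.0 and tri_compact(1.0) == 0.0
```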


Figure 2.10:

Continuous-Time Triangle Function

2.2.3 Common Continuous Time Signals Summary

Some of the most important and most frequently encountered signals have been discussed in this module. There are, of course, many other signals of significant consequence not discussed here. As you will see later, many of the other more complicated signals will be studied in terms of those listed here. Especially take note of the complex exponentials and unit impulse functions, which will be the key focus of several topics included in this course.

2.3 Signal Operations

2.3.1 Introduction
This module will look at two signal operations affecting the time parameter of the signal, time shifting and time scaling. These operations are very common components of real-world systems and, as such, should be understood thoroughly when learning about signals and systems.

2.3.2 Manipulating the Time Parameter


2.3.2.1 Time Shifting
Time shifting is, as the name suggests, the shifting of a signal in time. This is done by adding or subtracting a quantity of the shift to the time variable in the function. Subtracting a fixed positive quantity from the time variable will shift the signal to the right (delay) by the subtracted quantity, while adding a fixed positive amount to the time variable will shift the signal to the left (advance) by the added quantity.

5 This content is available online at <http://legacy.cnx.org/content/m50957/1.2/>.


Figure 2.11: f (t − T ) moves (delays) f to the right by T .

2.3.2.2 Time Scaling


Time scaling compresses or dilates a signal by multiplying the time variable by some quantity. If that quantity's absolute value is greater than one, the signal becomes narrower and the operation is called compression,
while if the quantity's absolute value is less than one, the signal becomes wider and is called dilation. Note
that if the quantity is negative, then one must also account for time reversal (described below).

Figure 2.12:

f (at) compresses f by a.

Example 2.3
Given f (t) we would like to plot f (at − b), with both a > 0 and b > 0. The figure below describes a method to accomplish this.

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.10>

Figure 2.13: (a) Begin with f (t) (b) Then replace t with at to get f (at) (c) Finally, replace t with t − b/a to get f (a (t − b/a)) = f (at − b)
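The two-step recipe of Figure 2.13 (scale first, then shift by b/a) can be verified numerically for a concrete signal; a minimal sketch with our own names, using the triangle function as f:

```python
import numpy as np

# Verify the two-step construction of f(at - b): first scale (t -> at),
# then shift the scaled signal (t -> t - b/a).
f = lambda t: np.maximum(0.0, 1.0 - np.abs(t))   # any test signal works

a, b = 2.0, 1.0
t = np.linspace(-5, 5, 1001)

step1 = lambda t: f(a * t)                  # (b) f(at)
step2 = lambda t: step1(t - b / a)          # (c) f(a(t - b/a)) = f(at - b)

assert np.allclose(step2(t), f(a * t - b))
```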

2.3.2.3 Time Reversal

A natural question to consider when learning about time scaling is: What happens when the time variable is multiplied by a negative number? The answer to this is time reversal, also known as time inversion. This operation is the reversal of the time axis, or flipping the signal over the y-axis.

Figure 2.14:

Reverse the time axis


2.3.3 Time Scaling and Shifting Demonstration

Figure 2.15: Download⁶ or Interact (when online) with a Mathematica CDF demonstrating time shifting and scaling.

2.3.4 Signal Operations Summary

Some common operations on signals affect the time parameter of the signal. One of these is time shifting, in which a quantity is added to the time parameter in order to advance or delay the signal. Another is time scaling, in which the time parameter is multiplied by a quantity in order to dilate or compress the signal in time. In the event that the quantity involved in the latter operation is negative, time reversal occurs.

2.4 Energy and Power of Continuous-Time Signals

From physics we've learned that energy is work and power is work per time unit. Energy is measured in joules (J) and power in watts (W). In signal processing, energy and power are defined more loosely without any necessary physical units, because the signals may represent very different physical entities. We can say that energy and power are a measure of the signal's "size".

6 See the file at <http://legacy.cnx.org/content/m50957/latest/TimeshifterDrill_display.cdf>
7 This content is available online at <http://legacy.cnx.org/content/m47273/1.4/>.


2.4.1 Signal Energy

2.4.1.1 Analog signals
Since we often think of a signal as a function of varying amplitude through time, it seems to reason that a good measurement of the strength of a signal would be the area under the curve. However, this area may have a negative part. This negative part does not have less strength than a positive signal of the same size. This suggests either squaring the signal or taking its absolute value, then finding the area under that curve. It turns out that what we call the energy of a signal is the area under the squared signal, see Figure 2.16.

Energy - Analog signal:

Ea = ∫_{−∞}^{∞} (|x (t) |)² dt

Note that we have used the squared magnitude (absolute value) in case the signal is complex valued. If the signal is real, we can leave out the magnitude operation.

Figure 2.16: Sketch of energy calculation (a) Signal x(t) (b) The energy of x(t) is the shaded region
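The energy integral can be approximated by a Riemann sum for a concrete signal. A sketch using x (t) = e^{−|t|}, whose exact energy is 1 (the integral of e^{−2|t|} over the real line):

```python
import numpy as np

# Numerically approximate Ea = integral of |x(t)|^2 dt for x(t) = exp(-|t|).
# The exact value is 1; the tail beyond |t| = 20 is negligible (~e^{-40}).
t = np.linspace(-20, 20, 400001)
dt = t[1] - t[0]
x = np.exp(-np.abs(t))

Ea = np.sum(np.abs(x) ** 2) * dt
assert abs(Ea - 1.0) < 1e-3
```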

2.4.2 Signal Power

Our definition of energy seems reasonable, and it is. However, what if the signal does not decay fast enough? In this case we have infinite energy for any such signal. Does this mean that a fifty hertz sine wave feeding into your headphones is as strong as the fifty hertz sine wave coming out of your outlet? Obviously not. This is what leads us to the idea of signal power, which in such cases is a more adequate description.


Figure 2.17: Signal with infinite energy

2.4.2.1 Analog signals

For analog signals we define power as energy per time interval.

Power - analog signal:

Pa = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} (|x (t) |)² dt

For periodic analog signals, the power needs to only be measured across a single period.

Power - periodic analog signal with period T0 :

Pa = (1/T0) ∫_{−T0/2}^{T0/2} (|x (t) |)² dt

Example 2.4
Given the signal x (t) = sin (2πt), shown in Figure 2.18, calculate the power for one period. For the analog sine we have

Pa = (1/1) ∫_0^1 sin² (2πt) dt = 1/2.

Figure 2.18:

Analog sine.
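Example 2.4 can be reproduced numerically by averaging |x (t) |² over one period; a minimal sketch:

```python
import numpy as np

# Power of x(t) = sin(2*pi*t) over one period (T0 = 1); exact answer is 1/2.
T0 = 1.0
t = np.linspace(0.0, T0, 100001)
dt = t[1] - t[0]
x = np.sin(2 * np.pi * t)

Pa = np.sum(np.abs(x) ** 2) * dt / T0
assert abs(Pa - 0.5) < 1e-3
```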


2.5 Continuous Time Impulse Function

2.5.1 Introduction
In engineering, we often deal with the idea of an action occurring at a point. Whether it be a force at a point in space or some other signal at a point in time, it becomes worthwhile to develop some way of quantitatively defining this. This leads us to the idea of a unit impulse, probably the second most important function, next to the complex exponential, in this systems and signals course.

2.5.2 Dirac Delta Function


The

Dirac delta function,

often referred to as the unit impulse or delta function, is the function that

denes the idea of a unit impulse in continuous-time. Informally, this function is one that is innitesimally
narrow, innitely tall, yet integrates to one. Perhaps the simplest way to visualize this is as a rectangular


1

2 to a + 2 with a height of  . As we take the limit of this setup as  approaches 0, we see
that the width tends to zero and the height tends to innity as the total area remains constant at one. The

pulse from

impulse function is often written as

(t).
Z

(t) dt = 1

Figure 2.19:

8 This

This is one way to visualize the Dirac Delta Function.

content is available online at <http://legacy.cnx.org/content/m10059/2.27/>.

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.10>

(2.9)


Figure 2.20: Since it is quite difficult to draw something that is infinitely tall, we represent the Dirac with an arrow centered at the point it is applied. If we wish to scale it, we may write the value it is scaled by next to the point of the arrow. This is a unit impulse (no scaling).

Below is a brief list of a few important properties of the unit impulse without going into detail of their proofs.

Unit Impulse Properties

δ (αt) = (1/|α|) δ (t)
δ (t) = δ (−t)
δ (t) = (d/dt) u (t), where u (t) is the unit step.
f (t) δ (t) = f (0) δ (t)

The last of these is especially important as it gives rise to the sifting property of the Dirac delta function, which selects the value of a function at a specific time and is especially important in studying the relationship of an operation called convolution to time domain analysis of linear time invariant systems. The sifting property is shown and derived below.

∫_{−∞}^{∞} f (t) δ (t) dt = ∫_{−∞}^{∞} f (0) δ (t) dt = f (0) ∫_{−∞}^{∞} δ (t) dt = f (0)

(2.10)
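The sifting property can be illustrated by replacing δ (t) with a narrow unit-area rectangle, as in the limiting construction above, and watching the integral approach f (0); a sketch with our own helper name:

```python
import numpy as np

def delta_approx(t, eps):
    """Unit-area rectangle of width eps and height 1/eps centered at 0."""
    return np.where(np.abs(t) <= eps / 2, 1.0 / eps, 0.0)

f = lambda t: np.cos(t) + t ** 3            # any smooth test function; f(0) = 1

t = np.linspace(-1.0, 1.0, 2000001)         # fine grid, dt = 1e-6
dt = t[1] - t[0]

# The integral of f(t) * delta(t) approaches f(0) = 1 as the pulse narrows.
sift = np.sum(f(t) * delta_approx(t, eps=0.001)) * dt
assert abs(sift - 1.0) < 0.01
```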


2.5.3 Unit Impulse Limiting Demonstration

Figure 2.21: Click on the above thumbnail image (when online) to download an interactive Mathematica
Player demonstrating the Continuous Time Impulse Function.

2.5.4 Continuous Time Unit Impulse Summary

The continuous time unit impulse function, also known as the Dirac delta function, is of great importance to the study of signals and systems. Informally, it is a function with infinite height and infinitesimal width that integrates to one, which can be viewed as the limiting behavior of a unit area rectangle as it narrows while preserving area. It has several important properties that will appear again when studying systems.

2.6 Continuous-Time Complex Exponential

2.6.1 Introduction
Complex exponentials are some of the most important functions in our study of signals and systems. Their
importance stems from their status as eigenfunctions of linear time invariant systems. Before proceeding,
you should be familiar with complex numbers.

9 This content is available online at <http://legacy.cnx.org/content/m50675/1.3/>.


2.6.2 The Continuous Time Complex Exponential

2.6.2.1 Complex Exponentials
The complex exponential function will become a critical part of your study of signals and systems. Its general continuous form is written as

Ae^{st}

(2.11)

where s = σ + jω is a complex number in terms of σ, the attenuation constant, and ω, the angular frequency.

2.6.2.2 Euler's Formula

The mathematician Euler proved an important identity relating complex exponentials to trigonometric functions. Specifically, he discovered the eponymously named identity, Euler's formula, which states that

e^{jx} = cos (x) + jsin (x)

(2.12)

which can be proven as follows.

In order to prove Euler's formula, we start by evaluating the Taylor series for e^z about z = 0, which converges for all complex z, at z = jx. The result is

e^{jx} = Σ_{k=0}^{∞} (jx)^k / k!
       = Σ_{k=0}^{∞} (−1)^k x^{2k} / (2k)! + j Σ_{k=0}^{∞} (−1)^k x^{2k+1} / (2k+1)!
       = cos (x) + jsin (x)

(2.13)

because the second expression contains the Taylor series for cos (x) and sin (x) about x = 0, which converge for all real x. Thus, the desired result is proven.

Choosing x = ωt this gives the result

e^{jωt} = cos (ωt) + jsin (ωt)

(2.14)

which breaks a continuous time complex exponential into its real part and imaginary part. Using this formula, we can also derive the following relationships.

cos (ωt) = (1/2) e^{jωt} + (1/2) e^{−jωt}

(2.15)

sin (ωt) = (1/2j) e^{jωt} − (1/2j) e^{−jωt}

(2.16)
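Euler's formula and the identities (2.15)-(2.16) are easy to confirm numerically; a minimal sketch with an arbitrary frequency:

```python
import numpy as np

w = 3.0                                     # arbitrary angular frequency
t = np.linspace(-2.0, 2.0, 1001)

# Euler's formula: e^{jwt} = cos(wt) + j sin(wt)
lhs = np.exp(1j * w * t)
rhs = np.cos(w * t) + 1j * np.sin(w * t)
assert np.allclose(lhs, rhs)

# The derived identities (2.15) and (2.16):
assert np.allclose(np.cos(w * t),
                   0.5 * np.exp(1j * w * t) + 0.5 * np.exp(-1j * w * t))
assert np.allclose(np.sin(w * t),
                   (np.exp(1j * w * t) - np.exp(-1j * w * t)) / 2j)
```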

2.6.2.3 Continuous Time Phasors

It has been shown how the complex exponential with purely imaginary frequency can be broken up into its real part and its imaginary part. Now consider a general complex frequency s = σ + jω where σ is the attenuation factor and ω is the frequency. Also consider a phase difference φ. It follows that

e^{(σ+jω)t+jφ} = e^{σt} (cos (ωt + φ) + jsin (ωt + φ)) .

(2.17)

Thus, the real and imaginary parts of e^{st} appear below.

Re{e^{(σ+jω)t+jφ}} = e^{σt} cos (ωt + φ)

(2.18)

Im{e^{(σ+jω)t+jφ}} = e^{σt} sin (ωt + φ)

(2.19)

Using the real or imaginary parts of a complex exponential to represent sinusoids with a phase delay multiplied by a real exponential is often useful and is called attenuated phasor notation.
We can see that both the real part and the imaginary part have a sinusoid times a real exponential. We also know that sinusoids oscillate between one and negative one. From this it becomes apparent that the real and imaginary parts of the complex exponential will each oscillate within an envelope defined by the real exponential part.
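The relations (2.18)-(2.19) and the envelope claim can be checked numerically; a minimal sketch with arbitrary parameter values:

```python
import numpy as np

sigma, w, phi = -0.5, 4.0, 0.3              # arbitrary attenuation, frequency, phase
t = np.linspace(0.0, 5.0, 2001)

z = np.exp((sigma + 1j * w) * t + 1j * phi)

# Real and imaginary parts are attenuated sinusoids, per (2.18)-(2.19):
assert np.allclose(z.real, np.exp(sigma * t) * np.cos(w * t + phi))
assert np.allclose(z.imag, np.exp(sigma * t) * np.sin(w * t + phi))

# Both parts stay inside the envelope defined by the real exponential:
env = np.exp(sigma * t)
assert np.all(np.abs(z.real) <= env + 1e-12)
assert np.all(np.abs(z.imag) <= env + 1e-12)
```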

Figure 2.22: The shapes possible for the real part of a complex exponential. Notice that the oscillations are the result of a cosine, as there is a local maximum at t = 0. (a) If σ is negative, we have the case of a decaying exponential window. (b) If σ is positive, we have the case of a growing exponential window. (c) If σ is zero, we have the case of a constant window.


2.6.3 Complex Exponential Demonstration

Interact (when online) with a Mathematica CDF demonstrating the Continuous Time
Complex Exponential. To Download, right-click and save target as .cdf.

Figure 2.23:

2.6.4 Continuous Time Complex Exponential Summary

Continuous time complex exponentials are signals of great importance to the study of signals and systems. They can be related to sinusoids through Euler's formula, which identifies the real and imaginary parts of purely imaginary complex exponentials. Euler's formula reveals that, in general, the real and imaginary parts of complex exponentials are sinusoids multiplied by real exponentials. Thus, attenuated phasor notation is often useful in studying these signals.


Chapter 3
Introduction to Systems
3.1 Introduction to Systems

Signals are manipulated by systems. Mathematically, we represent what a system does by the notation y (t) = S (x (t)), with x (t) representing the input signal and y (t) the output signal.

Definition of a system

x(t) → System → y(t)

Figure 3.1: The system depicted has input x (t) and output y (t). Mathematically, systems operate on function(s) to produce other function(s). In many ways, systems are like functions, rules that yield a value for the dependent variable (our output signal) for each value of its independent variable (its input signal). The notation y (t) = S (x (t)) corresponds to this block diagram. We term S (·) the input-output relation for the system.

This notation mimics the mathematical symbology of a function: A system's input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions).

Simple systems can be connected together (one system's output becomes another's input) to accomplish some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of three basic interconnection forms.

1 This content is available online at <http://legacy.cnx.org/content/m0005/2.19/>.


3.1.1 Cascade Interconnection

cascade

x(t) → S1[·] → w(t) → S2[·] → y(t)

Figure 3.2: The most rudimentary ways of interconnecting systems are shown in the figures in this section. This is the cascade configuration.

The simplest form is when one system's output is connected only to another's input. Mathematically, w (t) = S1 (x (t)), and y (t) = S2 (w (t)), with the information contained in x (t) processed by the first, then the second system. In some cases the ordering of the systems matters, in others it does not. For example, in the fundamental model of communication² the ordering most certainly matters.

3.1.2 Parallel Interconnection

parallel

x(t) → S1[·] →
               + → y(t)
x(t) → S2[·] →

Figure 3.3: The parallel configuration.

A signal x (t) is routed to two (or more) systems, with this signal appearing as the input to all systems simultaneously and with equal strength. Block diagrams have the convention that signals going to more than one system are not split into pieces along the way. Two or more systems operate on x (t) and their outputs are added together to create the output y (t). Thus, y (t) = S1 (x (t)) + S2 (x (t)), and the information in x (t) is processed separately by both systems.

2 "Structure of Communication Systems", Figure 1: Fundamental model of communication <http://legacy.cnx.org/content/m0002/latest/#commsys>


3.1.3 Feedback Interconnection

feedback

x(t) → (−) → e(t) → S1[·] → y(t)
        ↑                    |
        └────── S2[·] ←──────┘

Figure 3.4: The feedback configuration.

The subtlest interconnection configuration has a system's output also contributing to its input. Engineers would say the output is "fed back" to the input through system 2, hence the terminology. The mathematical statement of the feedback interconnection (Figure 3.4: feedback) is that the feed-forward system produces the output: y (t) = S1 (e (t)). The input e (t) equals the input signal minus a second system applied to the output y (t): e (t) = x (t) − S2 (y (t)). Feedback systems are omnipresent in control problems, with the error signal used to adjust the output to achieve some condition defined by the input (controlling) signal. For example, in a car's cruise control system, x (t) is a constant representing what speed you want, and y (t) is the car's speed as measured by a speedometer. In this application, system 2 is the identity system (output equals input).
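The cascade and parallel interconnections can be sketched as operations on systems modeled as Python functions (the names are our own). Feedback is omitted because it generally requires solving an implicit equation:

```python
import numpy as np

def cascade(S1, S2):
    """y = S2(S1(x)): one system's output feeds the next system's input."""
    return lambda x: S2(S1(x))

def parallel(S1, S2):
    """y = S1(x) + S2(x): both systems see the same input."""
    return lambda x: S1(x) + S2(x)

S1 = lambda x: 2.0 * x                      # a gain of 2
S2 = lambda x: x + 1.0                      # add a constant offset

x = np.linspace(-1.0, 1.0, 11)
assert np.allclose(cascade(S1, S2)(x), 2.0 * x + 1.0)
assert np.allclose(cascade(S2, S1)(x), 2.0 * (x + 1.0))   # ordering can matter
assert np.allclose(parallel(S1, S2)(x), 3.0 * x + 1.0)
```

Note that the two cascade orderings give different outputs here, illustrating the remark above that the ordering of the systems can matter.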

3.2 System Classications and Properties

3.2.1 Introduction
In this module some of the basic classifications of systems will be briefly introduced and the most important properties of these systems are explained. As can be seen, the properties of a system provide an easy way to distinguish one system from another. Understanding these basic differences between systems, and their properties, will be a fundamental concept used in all signal and system courses. Once a set of systems can be identified as sharing particular properties, one no longer has to reprove a certain characteristic of a system each time; it can simply be known due to the system classification.

3.2.2 Classification of Systems


3.2.2.1 Continuous vs. Discrete
One of the most important distinctions to understand is the difference between discrete time and continuous time systems. A system in which the input signal and output signal both have continuous domains is said to be a continuous system. One in which the input signal and output signal both have discrete domains is said to be a discrete system. Of course, it is possible to conceive of systems that belong to neither category, such as systems in which sampling of a continuous time signal or reconstruction from a discrete time signal take place.

3 This content is available online at <http://legacy.cnx.org/content/m50678/1.1/>.



3.2.2.2 Linear vs. Nonlinear

A linear system is any system that obeys the properties of scaling (first order homogeneity) and superposition (additivity) further described below. A nonlinear system is any system that does not have at least one of these properties.
To show that a system H obeys the scaling property is to show that

H (kf (t)) = kH (f (t))

(3.1)

Figure 3.5: A block diagram demonstrating the scaling property of linearity

To demonstrate that a system H obeys the superposition property of linearity is to show that

H (f1 (t) + f2 (t)) = H (f1 (t)) + H (f2 (t))

(3.2)

Figure 3.6: A block diagram demonstrating the superposition property of linearity

It is possible to check a system for linearity in a single (though larger) step. To do this, simply combine the first two steps to get

H (k1 f1 (t) + k2 f2 (t)) = k1 H (f1 (t)) + k2 H (f2 (t))

(3.3)

41

3.2.2.3 Time Invariant vs. Time Varying

A system H is said to be time invariant if it commutes with the parameter shift operator defined by S_T (f (t)) = f (t − T ) for all T , which is to say

H S_T = S_T H

(3.4)

for all real T . Intuitively, that means that for any input function that produces some output function, any time shift of that input function will produce an output function identical in every way except that it is shifted by the same amount. Any system that does not have this property is said to be time varying.

Figure 3.7: This block diagram shows the condition for time invariance. The output is the same whether the delay is put on the input or the output.

3.2.2.4 Causal vs. Noncausal


A causal system is one in which the output depends only on current or past inputs, but not future inputs. Similarly, an anticausal system is one in which the output depends only on current or future inputs, but not past inputs. Finally, a noncausal system is one in which the output depends on both past and future inputs. All "realtime" systems must be causal, since they can not have future inputs available to them.
One may think the idea of future inputs does not seem to make much physical sense; however, we have only been dealing with time as our independent variable so far, which is not always the case. Imagine rather that we wanted to do image processing. Then the independent variable might represent pixel positions to the left and right (the "future") of the current position on the image, and we would not necessarily have a causal system.


Figure 3.8: (a) For a typical system to be causal... (b) ...the output at time t0 , y (t0 ), can only depend on the portion of the input signal before t0 .

3.2.2.5 Stable vs. Unstable

There are several definitions of stability, but the one that will be used most frequently in this course will be bounded input, bounded output (BIBO) stability. In this context, a stable system is one in which the output is bounded if the input is also bounded. Similarly, an unstable system is one in which at least one bounded input produces an unbounded output.
In order to understand this concept, we must first look more closely into exactly what we mean by bounded. A bounded signal is any signal for which there exists a value A such that the absolute value of the signal is never greater than A. Since this value is arbitrary, what we mean is that at no point can the signal tend to infinity, including the end behavior.


Figure 3.9: A bounded signal is a signal for which there exists a constant A such that |f (t) | < A

Representing this mathematically, a stable system must have the following property, where x (t) is the input and y (t) is the output. The output must satisfy the condition

|y (t) | ≤ My < ∞

(3.5)

whenever we have an input to the system that satisfies

|x (t) | ≤ Mx < ∞

(3.6)

Mx and My both represent a set of finite positive numbers and these relationships hold for all of t. Otherwise, the system is unstable.

3.2.3 System Classifications Summary

This module describes just some of the many ways in which systems can be classified. Systems can be continuous time, discrete time, or neither. They can be linear or nonlinear, time invariant or time varying, and stable or unstable. We can also divide them based on their causality properties. There are other ways to classify systems, such as use of memory, that are not discussed here but will be described in subsequent modules.

3.3 Linear Time Invariant Systems

3.3.1 Introduction
Linearity and time invariance are two system properties that greatly simplify the study of systems that exhibit
them. In our study of signals and systems, we will be especially interested in systems that demonstrate both
of these properties, which together allow the use of some of the most powerful tools of signal processing.

4 This content is available online at <http://legacy.cnx.org/content/m2102/2.26/>.



3.3.2 Linear Time Invariant Systems

3.3.2.1 Linear Systems
If a system is linear, this means that when an input to a given system is scaled by a value, the output of the system is scaled by the same amount.

Linear Scaling

(a) (b)
Figure 3.10

In Figure 3.10(a) above, an input x to the linear system L gives the output y. If x is scaled by a value α and passed through this same system, as in Figure 3.10(b), the output will also be scaled by α.
A linear system also obeys the principle of superposition. This means that if two inputs are added together and passed through a linear system, the output will be the sum of the individual inputs' outputs.

(a) (b)
Figure 3.11


Superposition Principle

Figure 3.12: If Figure 3.11 is true, then the principle of superposition says that Figure 3.12 (Superposition Principle) is true as well. This holds for linear systems.

That is, if Figure 3.11 is true, then Figure 3.12 (Superposition Principle) is also true for a linear system. The scaling property mentioned above still holds in conjunction with the superposition principle. Therefore, if the inputs x and y are scaled by factors α and β respectively, then the sum of these scaled inputs will give the sum of the individual scaled outputs:

(a) (b)
Figure 3.13

Superposition Principle with Linear Scaling

Figure 3.14: Given Figure 3.13 for a linear system, Figure 3.14 (Superposition Principle with Linear Scaling) holds as well.


Example 3.1
Consider the system H1 in which

H1 (f (t)) = t f (t)    (3.7)

for all signals f. Given any two signals f, g and scalars a, b,

H1 (af (t) + bg (t)) = t (af (t) + bg (t)) = at f (t) + bt g (t) = aH1 (f (t)) + bH1 (g (t))    (3.8)

for all real t. Thus, H1 is a linear system.

Example 3.2
Consider the system H2 in which

H2 (f (t)) = (f (t))²    (3.9)

for all signals f. Because

H2 (2t) = 4t² ≠ 2t² = 2H2 (t)    (3.10)

for nonzero t, H2 is not a linear system.
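These two examples can be checked numerically. The sketch below samples continuous time on a finite grid and verifies that superposition holds for H1 but fails for H2 (Python with NumPy is assumed; the test signals and scalars are arbitrary choices):

```python
import numpy as np

t = np.linspace(-5, 5, 1001)       # sample grid standing in for continuous time
f = np.sin(t)                      # arbitrary test signals
g = np.exp(-t**2)
a, b = 2.0, -3.0                   # arbitrary scalars

H1 = lambda x: t * x               # H1(f(t)) = t f(t)
H2 = lambda x: x**2                # H2(f(t)) = (f(t))^2

# H1 satisfies superposition with scaling
print(np.allclose(H1(a*f + b*g), a*H1(f) + b*H1(g)))   # True

# H2 does not: squaring the sum produces cross terms
print(np.allclose(H2(a*f + b*g), a*H2(f) + b*H2(g)))   # False
```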

3.3.2.2 Time Invariant Systems


A time-invariant system has the property that a certain input will always give the same output (up to
timing), without regard to when the input was applied to the system.

Time-Invariant Systems

(a)

(b)

Figure 3.15: Figure 3.15(a) shows an input at time t while Figure 3.15(b) shows the same input t0 seconds later. In a time-invariant system both outputs would be identical except that the one in Figure 3.15(b) would be delayed by t0. In this figure, x (t) and x (t − t0) are passed through the system TI. Because the system TI is time-invariant, the inputs x (t) and x (t − t0) produce the same output. The only difference is that the output due to x (t − t0) is shifted by a time t0.

Whether a system is time-invariant or time-varying can be seen in the differential equation (or difference equation) describing it. Time-invariant systems are modeled with constant coefficient equations. A constant coefficient differential (or difference) equation means that the parameters of the system are not changing over time, so an input now will give the same result as the same input later.


Example 3.3
Consider the system H1 in which

H1 (f (t)) = t f (t)    (3.11)

for all signals f. Because

S_T (H1 (f (t))) = S_T (t f (t)) = (t − T) f (t − T) ≠ t f (t − T) = H1 (f (t − T)) = H1 (S_T (f (t)))    (3.12)

for nonzero T, H1 is not a time invariant system.

Example 3.4
Consider the system H2 in which

H2 (f (t)) = (f (t))²    (3.13)

for all signals f. For all real T and signals f,

S_T (H2 (f (t))) = S_T ((f (t))²) = (f (t − T))² = H2 (f (t − T)) = H2 (S_T (f (t)))    (3.14)

for all real t. Thus, H2 is a time invariant system.
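These two examples also admit a quick numerical check. The sketch below approximates the shift operator S_T by an integer-sample rotation of a signal that vanishes at the grid boundary (Python with NumPy; the signal and shift amount are arbitrary choices):

```python
import numpy as np

dt = 0.01
t = np.arange(-5, 5, dt)
f = np.exp(-t**2)                  # smooth test signal, essentially zero at the edges
k = 150                            # shift by T = k*dt = 1.5 seconds
shift = lambda x: np.roll(x, k)    # stands in for S_T since f vanishes at the boundary

H1 = lambda x: t * x               # H1(f(t)) = t f(t): depends explicitly on t
H2 = lambda x: x**2                # H2(f(t)) = (f(t))^2: memoryless, no explicit t

# H2 commutes with the shift (time invariant); H1 does not (time varying)
print(np.allclose(shift(H2(f)), H2(shift(f))))   # True
print(np.allclose(shift(H1(f)), H1(shift(f))))   # False
```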

3.3.2.3 Linear Time Invariant Systems


Certain systems are both linear and time-invariant, and are thus referred to as LTI systems.

Linear Time-Invariant Systems

(a)

(b)

Figure 3.16: This is a combination of the two cases above. Since the input to Figure 3.16(b) is a scaled, time-shifted version of the input in Figure 3.16(a), so is the output.

As LTI systems are a subset of linear systems, they obey the principle of superposition. In the figure below, we see the effect of applying time-invariance to the superposition definition in the linear systems section above.


(a)

(b)
Figure 3.17

Superposition in Linear Time-Invariant Systems

Figure 3.18: The principle of superposition applied to LTI systems

3.3.2.3.1 LTI Systems in Series


If two or more LTI systems are in series with each other, their order can be interchanged without affecting the overall output of the system. Systems in series are also called cascaded systems.


Cascaded LTI Systems

(a)

(b)
Figure 3.19: The order of cascaded LTI systems can be interchanged without changing the overall effect.

3.3.2.3.2 LTI Systems in Parallel


If two or more LTI systems are in parallel with one another, an equivalent system is one that is defined as the sum of these individual systems.

Parallel LTI Systems

(a)

(b)

Figure 3.20: Parallel systems can be condensed into the sum of systems.
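Both interconnection rules can be verified numerically by modeling each LTI system with an impulse response and using discrete convolution as a stand-in for the continuous-time operation (a sketch; the input and impulse responses below are made up for illustration):

```python
import numpy as np

x  = np.array([1.0, 2.0, 0.0, -1.0, 3.0])   # arbitrary input signal
h1 = np.array([0.5, 0.25])                   # impulse responses of two LTI systems
h2 = np.array([1.0, -1.0, 0.5])

# Cascade: interchanging the order of the two systems leaves the output unchanged
y_12 = np.convolve(np.convolve(x, h1), h2)
y_21 = np.convolve(np.convolve(x, h2), h1)
print(np.allclose(y_12, y_21))               # True

# Parallel: the combination acts as a single system with response h1 + h2
n = max(len(h1), len(h2))
h1p = np.pad(h1, (0, n - len(h1)))           # zero-pad to a common length
y_sum = np.convolve(x, h1p) + np.convolve(x, h2)
y_eq  = np.convolve(x, h1p + h2)
print(np.allclose(y_sum, y_eq))              # True
```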


Example 3.5
Consider the system H3 in which

H3 (f (t)) = 2f (t)    (3.15)

for all signals f. Given any two signals f, g and scalars a, b,

H3 (af (t) + bg (t)) = 2 (af (t) + bg (t)) = a2f (t) + b2g (t) = aH3 (f (t)) + bH3 (g (t))    (3.16)

for all real t. Thus, H3 is a linear system. For all real T and signals f,

S_T (H3 (f (t))) = S_T (2f (t)) = 2f (t − T) = H3 (f (t − T)) = H3 (S_T (f (t)))    (3.17)

for all real t. Thus, H3 is a time invariant system. Therefore, H3 is a linear time invariant system.

Example 3.6
As has been previously shown, each of the following systems is either not linear or not time invariant.

H1 (f (t)) = t f (t)    (3.18)

H2 (f (t)) = (f (t))²    (3.19)

Thus, they are not linear time invariant systems.

3.3.3 Linear Time Invariant Demonstration

Figure 3.21: Interact (when online) with the Mathematica CDF above demonstrating Linear Time Invariant systems. To download, right click and save file as .cdf.

3.3.4 LTI Systems Summary


Two very important and useful properties of systems have just been described in detail. The first of these, linearity, allows us the knowledge that a sum of input signals produces an output signal that is the sum of the original output signals, and that a scaled input signal produces an output signal scaled from the original output signal. The second of these, time invariance, ensures that time shifts commute with application of the system. In other words, the output signal for a time shifted input is the same as the output signal for the original input signal, except for an identical shift in time. Systems that demonstrate both linearity and time invariance, which are given the acronym LTI systems, are particularly simple to study as these properties allow us to leverage some of the most powerful tools in signal processing.


Chapter 4
Time Domain Analysis of Continuous
Time Systems
4.1 Continuous Time Systems

4.1.1 Introduction
As you already know, a continuous time system operates on a continuous time signal input and produces a continuous time signal output. There are numerous examples of useful continuous time systems in signal processing, as they essentially describe the world around us. The class of continuous time systems that are both linear and time invariant, known as continuous time LTI systems, is of particular interest as the properties of linearity and time invariance together allow the use of some of the most important and powerful tools in signal processing.

4.1.2 Continuous Time Systems


4.1.2.1 Linearity and Time Invariance
A system H is said to be linear if it satisfies two important conditions. The first, additivity, states that for every pair of signals x, y we have H (x + y) = H (x) + H (y). The second, homogeneity of degree one, states that for every signal x and scalar a we have H (ax) = aH (x). It is clear that these conditions can be combined together into a single condition for linearity. Thus, a system is said to be linear if for every pair of signals x, y and scalars a, b we have that

H (ax + by) = aH (x) + bH (y).    (4.1)

Linearity is a particularly important property of systems as it allows us to leverage the powerful tools of
linear algebra, such as bases, eigenvectors, and eigenvalues, in their study.
A system H is said to be time invariant if a time shift of an input produces the corresponding shifted output. In other, more precise words, the system H commutes with the time shift operator S_T for every T ∈ R. That is,

S_T H = H S_T.    (4.2)

Time invariance is desirable because it eases computation while mirroring our intuition that, all else equal,
physical systems should react the same to identical inputs at dierent times.
When a system exhibits both of these important properties, it allows for a more straightforward analysis than would otherwise be possible. As will be explained and proven in subsequent modules, computation of the system output for a given input becomes a simple matter of convolving the input with the system's impulse response signal. Also proven later, the fact that complex exponentials are eigenvectors of linear time invariant systems will enable the use of frequency domain tools, such as the various Fourier transforms and associated transfer functions, to describe the behavior of linear time invariant systems.

1 This content is available online at <http://legacy.cnx.org/content/m47437/1.1/>.

Example 4.1
Consider the system H in which

H (f (t)) = 2f (t)    (4.3)

for all signals f. Given any two signals f, g and scalars a, b,

H (af (t) + bg (t)) = 2 (af (t) + bg (t)) = a2f (t) + b2g (t) = aH (f (t)) + bH (g (t))    (4.4)

for all real t. Thus, H is a linear system. For all real T and signals f,

S_T (H (f (t))) = S_T (2f (t)) = 2f (t − T) = H (f (t − T)) = H (S_T (f (t)))    (4.5)

for all real t. Thus, H is a time invariant system. Therefore, H is a linear time invariant system.

4.1.3 Continuous Time Systems Summary


Many useful continuous time systems will be encountered in a study of signals and systems. This course
is most interested in those that demonstrate both the linearity property and the time invariance property,
which together enable the use of some of the most powerful tools of signal processing. It is often useful to
describe them in terms of rates of change through linear constant coefficient ordinary differential equations.

4.2 Continuous Time Impulse Response

4.2.1 Introduction
The output of an LTI system is completely determined by the input and the system's response to a unit
impulse.

System Output

Figure 4.1: We can determine the system's output, y(t), if we know the system's impulse response, h(t), and the input, f(t).

The output for a unit impulse input is called the impulse response.

2 This content is available online at <http://legacy.cnx.org/content/m34629/1.2/>.


Figure 4.2

4.2.1.1 Example Approximate Impulses


1. Hammer blow to a structure
2. Hand clap or gun blast in a room
3. Air gun blast underwater

4.2.2 LTI Systems and Impulse Responses


4.2.2.1 Finding System Outputs
By the sifting property of impulses, any signal can be decomposed in terms of an integral of shifted, scaled
impulses.

f (t) = ∫_{-∞}^{∞} f (τ) δ (t − τ) dτ    (4.6)

The impulse δ (t − τ) peaks up where t = τ.


Figure 4.3

Since we know the response of the system to an impulse and any signal can be decomposed into impulses, all we need to do to find the response of the system to any signal is to decompose the signal into impulses, calculate the system's output for every impulse and add the outputs back together. This process is known as Convolution. Since we are in Continuous Time, this is the Continuous Time Convolution Integral.

4.2.2.2 Finding Impulse Responses


Theory:
a. Solve the system's differential equation for y(t) with f (t) = δ (t)
b. Use the Laplace transform
Practice:
a. Apply an impulse-like input signal to the system and measure the output
b. Use Fourier methods
We will assume that h(t) is given for now.
The goal now is to compute the output y(t) given the impulse response h(t) and the input f(t).


Figure 4.4

4.2.3 Impulse Response Summary


When a system is "shocked" by a delta function, it produces an output known as its impulse response. For
an LTI system, the impulse response completely determines the output of the system given any arbitrary
input. The output can be found using continuous time convolution.

4.3 Continuous-Time Convolution

4.3.1 Introduction
Convolution, one of the most important concepts in electrical engineering, can be used to determine the output a system produces for a given input signal. It can be shown that a linear time invariant system is completely characterized by its impulse response. The sifting property of the continuous time impulse function tells us that the input signal to a system can be represented as an integral of scaled and shifted impulses and, therefore, as the limit of a sum of scaled and shifted approximate unit impulses. Thus, by linearity, it would seem reasonable to compute the output signal as the limit of a sum of scaled and shifted unit impulse responses and, therefore, as the integral of a scaled and shifted impulse response. That is exactly what the operation of convolution accomplishes. Hence, convolution can be used to determine a linear time invariant system's output from knowledge of the input and the impulse response.

4.3.2 Convolution and Circular Convolution


4.3.2.1 Convolution
4.3.2.1.1 Operation Definition
Continuous time convolution is an operation on two continuous time signals defined by the integral

(f ∗ g) (t) = ∫_{-∞}^{∞} f (τ) g (t − τ) dτ    (4.7)

for all signals f, g defined on R. It is important to note that the operation of convolution is commutative, meaning that

f ∗ g = g ∗ f    (4.8)

for all signals f, g defined on R. Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f ∗ g) (t) = ∫_{-∞}^{∞} f (t − τ) g (τ) dτ    (4.9)

for all signals f, g defined on R. Convolution has several other important properties not listed here but explained and derived in a later module.

3 This content is available online at <http://legacy.cnx.org/content/m47482/1.2/>.
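The defining integral can be approximated on a computer by a discrete convolution scaled by the sample spacing, which also makes the commutativity property easy to check (a sketch; the two signals are arbitrary choices):

```python
import numpy as np

dt = 0.001
tau = np.arange(0, 4, dt)
f = np.exp(-tau)                     # two arbitrary signals supported on [0, 4)
g = (tau < 1.0).astype(float)        # unit-width rectangular pulse

# Riemann-sum approximation of (f * g)(t): discrete convolution scaled by dt
fg = np.convolve(f, g) * dt
gf = np.convolve(g, f) * dt
print(np.allclose(fg, gf))           # True: convolution is commutative
```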

4.3.2.1.2 Definition Motivation

The above operation definition has been chosen to be particularly useful in the study of linear time invariant systems. In order to see this, consider a linear time invariant system H with unit impulse response h. Given a system input signal x, we would like to compute the system output signal H (x). First, we note that the input can be expressed as the convolution

x (t) = ∫_{-∞}^{∞} x (τ) δ (t − τ) dτ    (4.10)

by the sifting property of the unit impulse function. Writing this integral as the limit of a summation,

x (t) = lim_{Δ→0} Σ_n x (nΔ) δ_Δ (t − nΔ) Δ    (4.11)

where

δ_Δ (t) = { 1/Δ   0 ≤ t < Δ
            0     otherwise    (4.12)

approximates the properties of δ (t). By linearity,

H (x (t)) = lim_{Δ→0} Σ_n x (nΔ) H (δ_Δ (t − nΔ)) Δ    (4.13)

which evaluated as an integral gives

H (x (t)) = ∫_{-∞}^{∞} x (τ) H (δ (t − τ)) dτ.    (4.14)

Since H (δ (t − τ)) is the shifted unit impulse response h (t − τ), this gives the result

H (x (t)) = ∫_{-∞}^{∞} x (τ) h (t − τ) dτ = (x ∗ h) (t).    (4.15)

Hence, convolution has been defined such that the output of a linear time invariant system is given by the convolution of the system input with the system unit impulse response.
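The limit-of-sums construction above can be observed numerically: a Riemann sum of the convolution integral approaches the exact output as Δ shrinks. The sketch below uses the illustrative choices h(t) = e^{−t} u(t) and x(t) = u(t), for which the exact output is y(t) = 1 − e^{−t}:

```python
import numpy as np

# Illustrative system and input: h(t) = e^{-t} u(t), x(t) = u(t),
# with exact output y(t) = 1 - e^{-t}.
def y_approx(t, delta):
    """Riemann sum  sum_n x(n*delta) h(t - n*delta) delta  over the nonzero range."""
    tau = np.arange(0, t / delta) * delta
    return np.sum(np.exp(-(t - tau))) * delta    # x(tau) = 1 on this range

t = 2.0
exact = 1 - np.exp(-t)
for delta in (0.1, 0.01, 0.001):
    print(delta, abs(y_approx(t, delta) - exact))   # error shrinks as delta -> 0
```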

4.3.2.1.3 Graphical Intuition


It is often helpful to be able to visualize the computation of a convolution in terms of graphical processes. Consider the convolution of two functions f, g given by

(f ∗ g) (t) = ∫_{-∞}^{∞} f (τ) g (t − τ) dτ = ∫_{-∞}^{∞} f (t − τ) g (τ) dτ.    (4.16)

The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the τ = 0 axis. For each real t, that same function must be shifted right by t. The product of the two resulting plots is then constructed. Finally, the area under the resulting curve is computed.

Example 4.2

Recall that the impulse response for the capacitor voltage in a series RC circuit is given by

h (t) = (1/RC) e^{−t/RC} u (t),    (4.17)

and consider the response to the input voltage

x (t) = u (t).    (4.18)

We know that the output for this input voltage is given by the convolution of the impulse response with the input signal

y (t) = x (t) ∗ h (t).    (4.19)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. Thus, since x (t) = u (t) is the simpler of the two signals, it is desirable to select it for time reversal and shifting. Thus, we would like to compute

y (t) = ∫_{-∞}^{∞} (1/RC) e^{−τ/RC} u (τ) u (t − τ) dτ.    (4.20)

The step functions can be used to further simplify this integral by narrowing the region of integration to the nonzero region of the integrand. Therefore,

y (t) = ∫_{0}^{max{0,t}} (1/RC) e^{−τ/RC} dτ.    (4.21)

Hence, the output is

y (t) = { 0                t ≤ 0
          1 − e^{−t/RC}    t > 0    (4.22)

which can also be written as

y (t) = (1 − e^{−t/RC}) u (t).    (4.23)
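This closed-form answer can be checked against a direct numerical convolution (a sketch; the value of RC and the sampling grid are arbitrary choices):

```python
import numpy as np

RC = 0.5                                  # arbitrary illustrative time constant
dt = 1e-3
t = np.arange(0, 5, dt)

h = (1 / RC) * np.exp(-t / RC)            # impulse response sampled for t >= 0
x = np.ones_like(t)                       # unit step input on the grid

# Discrete convolution scaled by dt approximates the convolution integral
y_num = np.convolve(x, h)[: len(t)] * dt
y_exact = 1 - np.exp(-t / RC)
print(np.max(np.abs(y_num - y_exact)))    # small discretization error
```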

4.3.3 Online Resources


The following pages have interactive Java applets that demonstrate several aspects of continuous-time convolution.

Joy of Convolution (Johns Hopkins University)4
Step-by-Step Convolution (Rice University)5

4 http://www.jhu.edu/signals/convolve/index.html
5 http://www.ece.rice.edu/dsp/courses/elec301/demos/applets/Convo1/


4.3.4 Convolution Demonstration

Figure 4.5: Interact (when online) with a Mathematica CDF demonstrating Convolution. To download, right-click and save target as .cdf.

4.3.5 Convolution Summary


Convolution, one of the most important concepts in electrical engineering, can be used to determine the output signal of a linear time invariant system for a given input signal with knowledge of the system's unit impulse response. The operation of continuous time convolution is defined such that it performs this function for infinite length continuous time signals and systems. The operation of continuous time circular convolution is defined such that it performs this function for finite length and periodic continuous time signals. In each case, the output of the system is the convolution or circular convolution of the input signal with the unit impulse response.

4.4 Properties of Continuous Time Convolution

4.4.1 Introduction
We have already shown the important role that continuous time convolution plays in signal processing. This
section provides discussion and proof of some of the important properties of continuous time convolution.
Analogous properties can be shown for continuous time circular convolution with trivial modification of the proofs provided except where explicitly noted otherwise.

6 This content is available online at <http://legacy.cnx.org/content/m10088/2.20/>.

4.4.2 Continuous Time Convolution Properties


4.4.2.1 Associativity
The operation of convolution is associative. That is, for all continuous time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 ∗ f3) = (f1 ∗ f2) ∗ f3    (4.24)

In order to show this, note that

(f1 ∗ (f2 ∗ f3)) (t) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f1 (τ1) f2 (τ2) f3 ((t − τ1) − τ2) dτ2 dτ1
                     = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f1 (τ1) f2 ((τ1 + τ2) − τ1) f3 (t − (τ1 + τ2)) dτ2 dτ1
                     = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f1 (τ1) f2 (τ3 − τ1) f3 (t − τ3) dτ1 dτ3    (4.25)
                     = ((f1 ∗ f2) ∗ f3) (t)

proving the relationship as desired through the substitution τ3 = τ1 + τ2.

4.4.2.2 Commutativity
The operation of convolution is commutative. That is, for all continuous time signals f1, f2 the following relationship holds.

f1 ∗ f2 = f2 ∗ f1    (4.26)

In order to show this, note that

(f1 ∗ f2) (t) = ∫_{-∞}^{∞} f1 (τ1) f2 (t − τ1) dτ1
              = ∫_{-∞}^{∞} f1 (t − τ2) f2 (τ2) dτ2    (4.27)
              = (f2 ∗ f1) (t)

proving the relationship as desired through the substitution τ2 = t − τ1.

4.4.2.3 Distributivity
The operation of convolution is distributive over the operation of addition. That is, for all continuous time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 + f3) = f1 ∗ f2 + f1 ∗ f3    (4.28)

In order to show this, note that

(f1 ∗ (f2 + f3)) (t) = ∫_{-∞}^{∞} f1 (τ) (f2 (t − τ) + f3 (t − τ)) dτ
                     = ∫_{-∞}^{∞} f1 (τ) f2 (t − τ) dτ + ∫_{-∞}^{∞} f1 (τ) f3 (t − τ) dτ    (4.29)
                     = (f1 ∗ f2 + f1 ∗ f3) (t)

proving the relationship as desired.


4.4.2.4 Multilinearity
The operation of convolution is linear in each of the two function variables. Additivity in each variable results from distributivity of convolution over addition. Homogeneity of order one in each variable results from the fact that for all continuous time signals f1, f2 and scalars a the following relationship holds.

a (f1 ∗ f2) = (af1) ∗ f2 = f1 ∗ (af2)    (4.30)

In order to show this, note that

(a (f1 ∗ f2)) (t) = a ∫_{-∞}^{∞} f1 (τ) f2 (t − τ) dτ
                  = ∫_{-∞}^{∞} (a f1 (τ)) f2 (t − τ) dτ
                  = ((af1) ∗ f2) (t)
                  = ∫_{-∞}^{∞} f1 (τ) (a f2 (t − τ)) dτ    (4.31)
                  = (f1 ∗ (af2)) (t)

proving the relationship as desired.

4.4.2.5 Conjugation
The operation of convolution has the following property for all continuous time signals f1, f2, where conj(f) denotes the complex conjugate of f.

conj(f1 ∗ f2) = conj(f1) ∗ conj(f2)    (4.32)

In order to show this, note that

(conj(f1 ∗ f2)) (t) = conj( ∫_{-∞}^{∞} f1 (τ) f2 (t − τ) dτ )
                    = ∫_{-∞}^{∞} conj( f1 (τ) f2 (t − τ) ) dτ
                    = ∫_{-∞}^{∞} conj(f1 (τ)) conj(f2 (t − τ)) dτ    (4.33)
                    = (conj(f1) ∗ conj(f2)) (t)

proving the relationship as desired.

4.4.2.6 Time Shift

The operation of convolution has the following property for all continuous time signals f1, f2 where S_T is the time shift operator.

S_T (f1 ∗ f2) = (S_T f1) ∗ f2 = f1 ∗ (S_T f2)    (4.34)

In order to show this, note that

(S_T (f1 ∗ f2)) (t) = ∫_{-∞}^{∞} f2 (τ) f1 ((t − T) − τ) dτ
                    = ∫_{-∞}^{∞} f2 (τ) (S_T f1) (t − τ) dτ
                    = ((S_T f1) ∗ f2) (t)
                    = ∫_{-∞}^{∞} f1 (τ) f2 ((t − T) − τ) dτ
                    = ∫_{-∞}^{∞} f1 (τ) (S_T f2) (t − τ) dτ    (4.35)
                    = (f1 ∗ (S_T f2)) (t)

proving the relationship as desired.

4.4.2.7 Differentiation
The operation of convolution has the following property for all continuous time signals f1, f2.

d/dt (f1 ∗ f2) (t) = (df1/dt ∗ f2) (t) = (f1 ∗ df2/dt) (t)    (4.36)

In order to show this, note that

d/dt (f1 ∗ f2) (t) = ∫_{-∞}^{∞} f2 (τ) d/dt f1 (t − τ) dτ
                   = (df1/dt ∗ f2) (t)
                   = ∫_{-∞}^{∞} f1 (τ) d/dt f2 (t − τ) dτ    (4.37)
                   = (f1 ∗ df2/dt) (t)

proving the relationship as desired.

4.4.2.8 Impulse Convolution

The operation of convolution has the following property for all continuous time signals f where δ is the Dirac delta function.

f ∗ δ = f    (4.38)

In order to show this, note that

(f ∗ δ) (t) = ∫_{-∞}^{∞} f (τ) δ (t − τ) dτ
            = f (t) ∫_{-∞}^{∞} δ (t − τ) dτ    (4.39)
            = f (t)

proving the relationship as desired.

4.4.2.9 Width
The operation of convolution has the following property for all continuous time signals f1, f2 where Duration (f) gives the duration of a signal f.

Duration (f1 ∗ f2) = Duration (f1) + Duration (f2)    (4.40)

In order to show this informally, note that (f1 ∗ f2) (t) is nonzero for all t for which there is a τ such that f1 (τ) f2 (t − τ) is nonzero. When viewing one function as reversed and sliding past the other, it is easy to see that such a τ exists for all t on an interval of length Duration (f1) + Duration (f2). Note that this is not always true of circular convolution of finite length and periodic signals as there is then a maximum possible duration within a period.
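The width property is easy to observe with two rectangular pulses, whose convolution is supported on an interval whose length is the sum of the two durations (a sketch with arbitrarily chosen durations 2 and 3):

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
f1 = ((t >= 0) & (t < 2)).astype(float)    # pulse of duration 2
f2 = ((t >= 0) & (t < 3)).astype(float)    # pulse of duration 3

y = np.convolve(f1, f2) * dt
support = np.flatnonzero(y > 1e-9)         # indices where the output is nonzero
duration = (support[-1] - support[0] + 1) * dt
print(duration)                            # approximately 2 + 3 = 5
```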

4.4.3 Convolution Properties Summary


As can be seen, the operation of continuous time convolution has several important properties that have been listed and proven in this module. With slight modifications to the proofs, most of these also extend to continuous time circular convolution, and the cases in which exceptions occur have been noted above. These identities will be useful to keep in mind as the reader continues to study signals and systems.


4.5 Causality and Stability of Continuous-Time Linear Time-Invariant Systems

4.5.1 Introduction
We have previously defined the system properties of causality and bounded-input bounded-output (BIBO) stability. We have also determined that a linear time-invariant (LTI) system is completely determined by its impulse response h (t): the output y (t) for an arbitrary input x (t) is obtained via convolution as y (t) = x (t) ∗ h (t). It should not be surprising then that one can determine whether an LTI system is causal or BIBO stable simply by inspecting its impulse response h (t).

4.5.2 Causality
Recall that a system is causal if its output y (t0) at time t0 depends only on the input x (t) for values of t ≤ t0. Consider then the characterization of the system by the convolution integral:

y (t) = x (t) ∗ h (t) = ∫_{-∞}^{∞} x (τ) h (t − τ) dτ.    (4.41)

We replace the time variable t by a fixed value t0:

y (t0) = ∫_{-∞}^{∞} x (τ) h (t0 − τ) dτ.    (4.42)

For this output to depend only on values of time τ ≤ t0 (where τ represents a second time variable), we must have the contribution of x (τ) for τ > t0 to the integral being nulled out by requiring the value of the impulse response h (t0 − τ) = 0 for values of τ > t0. By making a change of variables t = t0 − τ (so that τ = t0 − t), this means that we require h (t) = 0 for t0 − t > t0; this means that we require h (t) = 0 for t < 0. Thus, we obtain that an LTI system with impulse response h (t) is causal if and only if h (t) = 0 for all t < 0.

4.5.3 BIBO Stability

Recall that a system is BIBO stable if whenever the input x (t) is bounded (that is, if there exists M < ∞ such that |x (t)| < M for all t) then we have that the system output is also bounded (that is, there exists N < ∞ such that |y (t)| < N for all t). As before, consider then the characterization of the system by the convolution integral:

y (t) = x (t) ∗ h (t) = ∫_{-∞}^{∞} x (τ) h (t − τ) dτ.    (4.43)

We apply the absolute value to both sides and use the straightforward bound on the absolute value of an integral:

|y (t)| = | ∫_{-∞}^{∞} x (τ) h (t − τ) dτ | ≤ ∫_{-∞}^{∞} |x (τ) h (t − τ)| dτ
        = ∫_{-∞}^{∞} |x (τ)| |h (t − τ)| dτ ≤ M ∫_{-∞}^{∞} |h (t − τ)| dτ,    (4.44)

where the next-to-last step uses the boundedness of x (t). Thus, if the integral ∫_{-∞}^{∞} |h (t)| dt has a finite value, then we have established a finite upper bound on |y (t)|, which implies BIBO stability. Thus, we obtain that an LTI system with impulse response h (t) is BIBO stable if and only if ∫_{-∞}^{∞} |h (t)| dt is finite.

7 This content is available online at <http://legacy.cnx.org/content/m50671/1.3/>.
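Both impulse-response tests lend themselves to quick numerical checks. The sketch below truncates the integral of |h(t)| on a finite grid, which can confirm convergence for a decaying response and suggest (though not prove) divergence for one that does not decay; the two impulse responses are illustrative choices:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 50, dt)              # finite grid truncating the integral

h_stable = np.exp(-t)                 # decaying impulse response: integral of |h| -> 1
h_unstable = np.ones_like(t)          # non-decaying response: integral grows without bound

print(np.sum(np.abs(h_stable)) * dt)    # close to 1: BIBO stable
print(np.sum(np.abs(h_unstable)) * dt)  # grows with the truncation length: not BIBO stable
```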


4.5.4 Summary
The derivations above show that it is significantly easier to verify whether a system is causal and/or BIBO stable when it is linear and time-invariant. In such a case, we simply need to perform straightforward evaluations of the system's impulse response h (t).


Chapter 5
Introduction to Fourier Analysis
5.1 Introduction to Fourier Analysis

5.1.1 Fourier's Daring Leap


Fourier postulated around 1807 that any periodic signal (equivalently, finite length signal) can be built up as an infinite linear combination of harmonic sinusoidal waves. Given the collection

B = {e^{j (2π/T) nt}}_{n=−∞}^{∞}    (5.1)

any finite-energy function x (t) can be approximated arbitrarily closely by

x (t) = Σ_{n=−∞}^{∞} C_n e^{j (2π/T) nt}.    (5.2)

Now, the issue of exact convergence did bring Fourier much criticism from the French Academy of Science (Laplace, Lagrange, Monge and LaCroix comprised the review committee) for several years after its presentation in 1807. It was not resolved for almost a century, and its resolution is interesting and important to understand from a practical viewpoint.
Fourier analysis is fundamental to understanding the behavior of signals and systems. This is a result of the fact that sinusoids are Eigenfunctions (Section 5.3) of linear, time-invariant (LTI) systems. This is to say that if we pass any particular sinusoid through an LTI system, we get a scaled version of that same sinusoid on the output. Then, since Fourier analysis allows us to redefine the signals in terms of a combination of sinusoids, all we need to do is determine how any given system acts on each possible sinusoid (its transfer function) and we have a complete understanding of the system. Furthermore, since we are able to define the passage of sinusoids through a system as the multiplication of that sinusoid by its scaling factor, we can convert the passage of any signal through a system from convolution (Section 4.3) (in time) to multiplication (in frequency). These ideas are what give Fourier analysis its power.

Now, after hopefully having sold you on the value of this method of analysis, we must examine exactly what we mean by Fourier analysis. The four Fourier transforms that comprise this analysis are the Fourier Series (Section 5.4), Continuous-Time Fourier Transform (Section 6.2), Discrete-Time Fourier Transform (Section 9.3) and Discrete Fourier Transform (Section 10.1). All of these transforms act essentially the same way, by converting a signal in time to an equivalent signal in frequency (sinusoids). However, depending on the nature of a specific signal (i.e., whether it is finite- or infinite-length and whether it is discrete- or continuous-time) there is an appropriate transform to convert the signal into the frequency domain.

1 This content is available online at <http://legacy.cnx.org/content/m47439/1.2/>.


2 http://www-groups.dcs.st-and.ac.uk/history/Mathematicians/Fourier.html
3 "Continuous Time Systems" <http://legacy.cnx.org/content/m10855/latest/>


5.2 Continuous Time Periodic Signals

5.2.1 Introduction
This module describes the type of signals acted on by the Continuous Time Fourier Series.

5.2.2 Periodic Signals


When a function repeats itself exactly after some given period, or cycle, we say it's periodic. A periodic function can be mathematically defined as:

f (t) = f (t + mT), m ∈ Z    (5.3)

where T > 0 represents the fundamental period of the signal, which is the smallest positive value of T for the signal to repeat. Because of this, you may also see a signal referred to as a T-periodic signal. Any function that satisfies this equation is said to be periodic with period T.

We can think of periodic functions (with period T) two different ways:

1. as functions on all of R

Figure 5.1: Continuous time periodic function over all of R where f (t0) = f (t0 + T)

2. or, we can cut out all of the redundancy, and think of them as functions on an interval [0, T] (or, more generally, [a, a + T]). If we know the signal is T-periodic then all the information of the signal is captured by the above interval.

Figure 5.2: Remove the redundancy of the periodic function so that f (t) is undefined outside [0, T].

An aperiodic CT function f (t), on the other hand, does not repeat for any T ∈ R; i.e., there exists no T such that this equation (5.3) holds.

4 This content is available online at <http://legacy.cnx.org/content/m47350/1.1/>.

5.2.3 Demonstration
Here's an example demonstrating a periodic sinusoidal signal with various frequencies, amplitudes and phase delays:

Figure 5.3: Interact (when online) with a Mathematica CDF demonstrating a Periodic Sinusoidal Signal with various frequencies, amplitudes, and phase delays. To download, right click and save file as .cdf.

To learn the full concept behind periodicity, see the video below.

Khan Lecture on Periodic Signals


This media object is a Flash object. Please view or download it at

<http://www.youtube.com/v/tJW_a6JeXD8&rel=0&color1=0xb1b1b1&color2=0xd0d0d0&hl=en_US&feature=player_em

Figure 5.4: Video from Khan Academy

5.2.4 Conclusion
A periodic signal is completely defined by its values in one period, such as the interval [0, T].
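Equation (5.3) can be verified directly for a sampled sinusoid (a sketch; the sine function has fundamental period T = 2π, and the sample points and integers m are arbitrary choices):

```python
import numpy as np

T = 2 * np.pi                  # fundamental period of the sine function
f = np.sin

t = np.linspace(0, 1, 100)     # sample points anywhere on the real line
for m in (1, -2, 5):           # f(t) = f(t + m*T) for every integer m
    print(np.allclose(f(t), f(t + m * T)))   # True each time
```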


5.3 Eigenfunctions of Continuous-Time LTI Systems

5.3.1 Introduction
Prior to reading this module, the reader should already have some experience with linear algebra and should specifically be familiar with the eigenvectors and eigenvalues of square matrices. A linear time invariant system is a linear operator defined on a function space that commutes with every time shift operator on that function space. Thus, we can also consider the eigenvector functions, or eigenfunctions, of a system. The concept of an eigenfunction is closely tied to the concept of an eigenvector in linear algebra. "Eigen" is German for "self": an eigenfunction of a system is a function that, when fed to the system, produces in the output a copy of the function, perhaps rescaled.

More concretely, f is an eigenfunction for a system H if H(f) = λf for some scalar λ. It is particularly easy to calculate the output of a system when an eigenfunction is the input, as the output is simply the eigenfunction scaled by the associated eigenvalue. As will be shown, continuous time complex exponentials serve as eigenfunctions of linear time invariant systems operating on continuous time signals.

5.3.2 Eigenfunctions of LTI Systems

Consider a linear time invariant system H with impulse response h operating on some space of infinite length continuous time signals. Recall that the output H(x(t)) of the system for a given input x(t) is given by the continuous time convolution of the impulse response with the input:

H(x(t)) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ.    (5.4)

Now consider the input x(t) = e^{st} where s ∈ C. Computing the output H(e^{st}) for this input,

H(e^{st}) = ∫_{−∞}^{∞} h(τ) e^{s(t−τ)} dτ
          = ∫_{−∞}^{∞} h(τ) e^{st} e^{−sτ} dτ
          = e^{st} ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ.    (5.5)

Thus,

H(e^{st}) = λs e^{st}    (5.6)

where

λs = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ    (5.7)

is the eigenvalue corresponding to the eigenfunction e^{st}.

There are some additional points that should be mentioned. Note that there still may be additional eigenfunctions of a linear time invariant system not described by e^{st} for some s ∈ C. Furthermore, the above discussion has been somewhat formally loose, as e^{st} may or may not belong to the space on which the system operates. However, for our purposes, complex exponentials will be accepted as eigenfunctions of linear time invariant systems. A similar argument using continuous time circular convolution would also hold for spaces of finite length signals.

5.3.3 Eigenfunction of LTI Systems Summary

As has been shown, continuous time complex exponentials are eigenfunctions of linear time invariant systems operating on continuous time signals. Thus, it is particularly simple to calculate the output of a linear time invariant system for a complex exponential input, as the result is a complex exponential output scaled by the associated eigenvalue. Consequently, representations of continuous time signals in terms of continuous time complex exponentials provide an advantage when studying signals. As will be explained later, this is what is accomplished by the continuous time Fourier transform and continuous time Fourier series, which apply to aperiodic and periodic signals respectively.

⁵This content is available online at <http://legacy.cnx.org/content/m47308/1.2/>.
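The eigenfunction relation (5.6) can be checked numerically. The sketch below uses an assumed impulse response h(t) = e^{−t}u(t) (an illustrative choice, not one from the text), convolves it with e^{st} for s = j2πf0, and compares the output against λs e^{st}, where λs = 1/(1 + s) is the eigenvalue integral (5.7) evaluated in closed form:

```python
import numpy as np

# Sketch: e^{st} through an LTI system comes out as lambda_s * e^{st}.
# Assumed system: h(t) = e^{-t} u(t), for which lambda_s = 1/(1 + s).
dt = 1e-3
t = np.arange(0, 20, dt)
h = np.exp(-t)                       # impulse response samples for t >= 0
s = 2j * np.pi * 1.5                 # s = j*2*pi*f0 with f0 = 1.5
x = np.exp(s * t)                    # input e^{st}

y = np.convolve(h, x)[: len(t)] * dt # numerical convolution H(x(t))
lam = 1.0 / (1.0 + s)                # eigenvalue from the integral (5.7)

mid = t > 5                          # skip the start-up transient of the
err = np.max(np.abs(y[mid] - lam * x[mid]))  # truncated convolution
print(err < 1e-2)                    # → True
```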

5.4 Continuous Time Fourier Series (CTFS)

5.4.1 Introduction
In this module, we will derive an expansion for continuous-time, periodic functions, and in doing so, derive
the

Continuous Time Fourier Series (CTFS).

Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems (Section 5.3), calculating the output of an LTI system H given e^{st} as an input amounts to simple multiplication, where H(s) ∈ C is the eigenvalue corresponding to s. As shown in the figure, a simple exponential input would yield the output

y(t) = H(s) e^{st}    (5.8)

Figure 5.5: Simple LTI system.

Using this and the fact that H is linear, calculating y(t) for combinations of complex exponentials is also straightforward.

c1 e^{s1 t} + c2 e^{s2 t}  →  c1 H(s1) e^{s1 t} + c2 H(s2) e^{s2 t}

Σn cn e^{sn t}  →  Σn cn H(sn) e^{sn t}

The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{sn t} by a different complex number H(sn) ∈ C. As such, if we can write a function f(t) as a combination of complex exponentials, it allows us to easily calculate the output of a system.

⁶This content is available online at <http://legacy.cnx.org/content/m47348/1.2/>.
⁷"Continuous Time Complex Exponential" <http://legacy.cnx.org/content/m10060/latest/>


5.4.2 Fourier Series Synthesis

Joseph Fourier⁸ demonstrated that an arbitrary periodic function x(t) can be written as a linear combination of harmonic complex sinusoids

x(t) = Σ_{n=−∞}^{∞} cn e^{j2πf0nt}    (5.9)

where f0 = 1/T is the fundamental frequency. For almost all x(t) of practical interest, there exists cn to make (5.9) true. If x(t) is finite energy (x(t) ∈ L²[0, T]), then the equality in (5.9) holds in the sense of energy convergence; if x(t) is continuous, then (5.9) holds pointwise. Also, if x(t) meets some mild conditions (the Dirichlet conditions), then (5.9) holds pointwise everywhere except at points of discontinuity.

The cn, called the Fourier coefficients, tell us "how much" of the sinusoid e^{j2πf0nt} is in x(t). The formula shows x(t) as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{j2πf0nt}, n ∈ Z} form a basis for the space of T-periodic continuous time functions.

Example 5.1
We know from Euler's formula that

cos(2πft) + sin(2πft) = ((1 − j)/2) e^{j2πft} + ((1 + j)/2) e^{−j2πft}.

⁸http://www-groups.dcs.st-and.ac.uk/history/Mathematicians/Fourier.html
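The synthesis sum (5.9) is direct to implement once the coefficients are known. The sketch below (the function name and the sample coefficients are illustrative choices) rebuilds cos(2πt) from its two nonzero Fourier coefficients c1 = c−1 = 1/2, matching Euler's formula:

```python
import numpy as np

# Sketch of the CTFS synthesis sum (5.9), truncated to the given harmonics.
def ctfs_synthesize(coeffs, f0, t):
    """coeffs maps harmonic index n to c_n; returns sum of c_n e^{j 2 pi f0 n t}."""
    x = np.zeros_like(t, dtype=complex)
    for n, cn in coeffs.items():
        x += cn * np.exp(2j * np.pi * f0 * n * t)
    return x

t = np.linspace(0, 2, 1000)
x = ctfs_synthesize({1: 0.5, -1: 0.5}, f0=1.0, t=t)  # c_1 = c_{-1} = 1/2
print(np.allclose(x.real, np.cos(2 * np.pi * t)))    # → True: x is cos(2*pi*t)
```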


5.4.3 Synthesis with Sinusoids Demonstration

Figure 5.6: Interact (when online) with a Mathematica CDF demonstrating sinusoid synthesis. To download, right click and save as .cdf.

Guitar Oscillations on an iPhone

Figure 5.7: This media object is a Flash object. Please view or download it at
<http://www.youtube.com/v/TKF6nFzpHBU?version=3&hl=en_US>


5.4.4 Fourier Series Analysis

Finding the coefficients of the Fourier series expansion involves some algebraic manipulation of the synthesis formula. First of all, we will multiply both sides of the equation by e^{−j2πf0kt}, where k ∈ Z.

f(t) e^{−j2πf0kt} = Σ_{n=−∞}^{∞} cn e^{j2πf0nt} e^{−j2πf0kt}    (5.10)

Now integrate both sides over a given period, T:

∫_0^T f(t) e^{−j2πf0kt} dt = ∫_0^T Σ_{n=−∞}^{∞} cn e^{j2πf0nt} e^{−j2πf0kt} dt    (5.11)

On the right-hand side we can switch the summation and integral and factor the constant out of the integral.

∫_0^T f(t) e^{−j2πf0kt} dt = Σ_{n=−∞}^{∞} cn ∫_0^T e^{j2πf0(n−k)t} dt    (5.12)

Now that we have made this seemingly more complicated, let us focus on just the integral, ∫_0^T e^{j2πf0(n−k)t} dt, on the right-hand side of the above equation. For this integral we will need to consider two cases: n = k and n ≠ k. For n = k we will have:

∫_0^T e^{j2πf0(n−k)t} dt = T,  n = k    (5.13)

For n ≠ k, we will have:

∫_0^T e^{j2πf0(n−k)t} dt = ∫_0^T cos(2πf0(n − k)t) dt + j ∫_0^T sin(2πf0(n − k)t) dt,  n ≠ k    (5.14)

But cos(2πf0(n − k)t) has an integer number of periods, n − k, between 0 and T. Imagine a graph of the cosine; because it has an integer number of periods, there are equal areas above and below the x-axis of the graph. This statement holds true for sin(2πf0(n − k)t) as well. What this means is

∫_0^T cos(2πf0(n − k)t) dt = 0    (5.15)

which also holds for the integral involving the sine function. Therefore, we conclude the following about our integral of interest:

∫_0^T e^{j2πf0(n−k)t} dt = { T   if n = k
                             0   otherwise }    (5.16)

We plug this result into (5.12) to see if we can finish finding an equation for our Fourier coefficients. Using the facts that we have just proven above, we can see that the only time (5.12) will have a nonzero result is when k and n are equal:

∫_0^T f(t) e^{−j2πf0nt} dt = T cn,  n = k    (5.17)

Finally, we have our general equation for the Fourier coefficients:

cn = (1/T) ∫_0^T f(t) e^{−j2πf0nt} dt    (5.18)

Example 5.2
Consider the square wave function given by

x(t) = { 1/2    if t ≤ 1/2
        −1/2    if t > 1/2 }    (5.19)

on the unit interval t ∈ [0, 1).

ck = ∫_0^1 x(t) e^{−j2πkt} dt
   = ∫_0^{1/2} (1/2) e^{−j2πkt} dt − ∫_{1/2}^1 (1/2) e^{−j2πkt} dt
   = −j(1 − e^{−jπk})/(2πk)    (5.20)

Thus, the Fourier coefficients of this function found using the Fourier series analysis formula are

ck = { −j/(πk)   if k odd
       0         if k even }    (5.21)
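The closed-form coefficients in (5.21) can be spot-checked by evaluating the analysis integral (5.18) with a Riemann sum. A minimal sketch (the grid size and tolerances are arbitrary choices):

```python
import numpy as np

# Sketch: numerically evaluate c_k = integral_0^1 x(t) e^{-j 2 pi k t} dt
# for the square wave of Example 5.2 and compare with c_k = -j/(pi k), k odd.
N = 100_000
t = (np.arange(N) + 0.5) / N                 # midpoint grid on [0, 1)
x = np.where(t <= 0.5, 0.5, -0.5)

def c(k):
    return np.mean(x * np.exp(-2j * np.pi * k * t))

ok = (np.allclose(c(1), -1j / np.pi, atol=1e-6)
      and np.allclose(c(3), -1j / (3 * np.pi), atol=1e-6)
      and np.allclose(c(2), 0, atol=1e-6))
print(ok)                                    # → True
```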

5.4.5 Fourier Series Summary

Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The continuous time Fourier series synthesis formula expresses a continuous time, periodic function as the sum of continuous time, discrete frequency complex exponentials:

f(t) = Σ_{n=−∞}^{∞} cn e^{j2πf0nt}    (5.22)

The continuous time Fourier series analysis formula gives the coefficients of the Fourier series expansion:

cn = (1/T) ∫_0^T f(t) e^{−j2πf0nt} dt    (5.23)

In both of these equations f0 = 1/T is the fundamental frequency.

Chapter 6
Continuous Time Fourier Transform
(CTFT)
6.1 Continuous Time Aperiodic Signals

6.1.1 Introduction
This module describes the type of signals acted on by the Continuous Time Fourier Transform.

6.1.2 Periodic and Aperiodic Signals

When a function repeats itself exactly after some given period, or cycle, we say it's periodic. A periodic function can be mathematically defined as:

f(t) = f(t + mT),  m ∈ Z    (6.1)

where T > 0 represents the fundamental period of the signal, which is the smallest positive value of T for the signal to repeat. Because of this, you may also see a signal referred to as a T-periodic signal. Any function that satisfies this equation is said to be periodic with period T.

An aperiodic CT function f(t) does not repeat for any T ∈ R; i.e., there exists no T such that this equation (6.1) holds.

Suppose we have such an aperiodic function f(t). We can construct a periodic extension of f(t) called fT0(t), where f(t) is repeated every T0 seconds. If we take the limit as T0 → ∞, we obtain a precise model of an aperiodic signal for which all rules that govern periodic signals can be applied, including Fourier analysis (with an important modification). For more detail on this distinction, see the module on the Continuous Time Fourier Transform.

¹This content is available online at <http://legacy.cnx.org/content/m47356/1.1/>.

6.1.3 Aperiodic Signal Demonstration

Figure 6.1: Interact (when online) with a Mathematica CDF demonstrating Periodic versus Aperiodic Signals. To download, right-click and save as .cdf.

6.1.4 Conclusion
Any aperiodic signal can be defined by an infinite sum of periodic functions, a useful definition that makes it possible to use Fourier analysis on it by assuming all frequencies are present in the signal.

6.2 Continuous Time Fourier Transform (CTFT)

6.2.1 Introduction
In this module, we will derive an expansion for any arbitrary continuous-time function, and in doing so, derive the Continuous Time Fourier Transform (CTFT).

Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems (Section 5.3), calculating the output of an LTI system H given e^{st} as an input amounts to simple multiplication, where H(s) ∈ C is the eigenvalue corresponding to s. As shown in the figure, a simple exponential input would yield the output

y(t) = H(s) e^{st}

Using this and the fact that H is linear, calculating y(t) for combinations of complex exponentials is also straightforward:

c1 e^{s1 t} + c2 e^{s2 t}  →  c1 H(s1) e^{s1 t} + c2 H(s2) e^{s2 t}    (6.2)

²This content is available online at <http://legacy.cnx.org/content/m47319/1.5/>.
³"Continuous Time Complex Exponential" <http://legacy.cnx.org/content/m10060/latest/>

Σn cn e^{sn t}  →  Σn cn H(sn) e^{sn t}

The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{sn t} by a different complex number H(sn) ∈ C. As such, if we can write a function f(t) as a combination of complex exponentials, it allows us to easily calculate the output of a system.

Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Continuous-Time Fourier Transform (CTFT), commonly referred to as just the Fourier Transform (FT).

6.2.2 Fourier Synthesis

Joseph Fourier⁴ demonstrated that an arbitrary periodic signal s(t) can be written as a linear combination of harmonic complex sinusoids

s(t) = Σ_{n=−∞}^{∞} cn e^{j(2π/T)nt}    (6.3)

where T is the fundamental period. For almost all s(t) of practical interest, there exists cn to make (6.3) true. If s(t) is finite energy, then the equality in (6.3) holds in the sense of energy convergence; if s(t) is continuous, then (6.3) holds pointwise. Also, if s(t) meets some mild conditions (the Dirichlet conditions), then (6.3) holds pointwise everywhere except at points of discontinuity.

The cn, called the Fourier coefficients, tell us "how much" of the sinusoid e^{j(2π/T)nt} is in s(t). The formula shows s(t) as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{j(2π/T)nt}, n ∈ Z} form a basis for the space of T-periodic continuous time functions. Because the CTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the CTFT we simply let T go to infinity. This will also change the summation over integers to an integration over real numbers.

Example 6.1
We know from Euler's formula that

cos(2πft) + sin(2πft) = ((1 − j)/2) e^{j2πft} + ((1 + j)/2) e^{−j2πft}.

6.2.3 The Fourier Transform

We have seen how all signals s(t) can be decomposed in terms of the sinusoids e^{j2πft}. We also know that these sinusoids are complex exponentials, and so they are eigenfunctions of LTI systems; recall as well that such eigenfunctions easily pass through LTI systems. Thus, we can use these two principles to easily obtain the output to any signal for any system: first, we obtain the Fourier coefficients of the signal to decompose it as the sum of scaled sinusoids; next, we run each scaled sinusoid through the system, which in essence scales (multiplies) it by the sinusoid's eigenvalue; and finally we sum together all the outputs (thanks to linearity and superposition) to obtain the output. What remains to be shown is the way to easily compute the coefficients and eigenvalues of a signal and a system. Both of these problems are solved using the Fourier Transform.

Continuous-Time Fourier Transform

S(f) = ∫_{−∞}^{∞} s(t) e^{−j2πft} dt    (6.4)

⁴http://www-groups.dcs.st-and.ac.uk/history/Mathematicians/Fourier.html

Inverse CTFT

s(t) = ∫_{−∞}^{∞} S(f) e^{j2πft} df    (6.5)

For signals, the CTFT provides the Fourier coefficients that are attached to sinusoids to represent the signal as in (6.3). The inverse CTFT then provides the signal as the linear combination of all sinusoids with the corresponding weights, extending (6.2). For systems, the CTFT can provide the eigenvalues for all sinusoids when applied to the impulse response function h(t). Applying the CTFT to the input signal and the impulse response allows for very easy computation of the system output, as we will soon observe.

warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the frequency variable f in the exponential. However, in some cases the radial frequency is used instead, where ω = 2πf. Therefore, you may often see the expressions above with ω in place of 2πf in the exponentials. Click here⁵ for an overview of the notation used in Connexions' DSP modules.
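The forward integral (6.4) can be approximated directly on a grid. As an illustrative sketch (the step size, truncation point, and test frequencies are arbitrary choices), the transform of e^{−t}u(t) is computed numerically and compared with its known closed form 1/(1 + j2πf):

```python
import numpy as np

# Sketch: Riemann-sum approximation of the CTFT integral (6.4),
# checked against the known pair e^{-t} u(t) <-> 1/(1 + j 2 pi f).
dt = 1e-4
t = np.arange(0, 50, dt) + dt / 2           # midpoint grid; e^{-50} ~ 0
s_sig = np.exp(-t)                           # s(t) = e^{-t} u(t)

def ctft(f):
    return np.sum(s_sig * np.exp(-2j * np.pi * f * t)) * dt

ok = all(abs(ctft(f) - 1 / (1 + 2j * np.pi * f)) < 1e-6 for f in (0.0, 0.5, 2.0))
print(ok)                                    # → True
```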

6.2.4 CTFT Definition Demonstration

Figure 6.2: Interact (when online) with a Mathematica CDF demonstrating the Continuous Time Fourier Transform. To download, right-click and save as .cdf.

⁵"DSP notation" <http://legacy.cnx.org/content/m10161/latest/>

6.2.5 Example Problems

Exercise 6.2.1    (Solution on p. 89.)
Find the Fourier Transform (CTFT) of the function

x(t) = { e^{−t}   if t ≥ 0
         0        otherwise }    (6.6)

Exercise 6.2.2    (Solution on p. 89.)
Find the inverse Fourier transform of the ideal lowpass filter defined by

X(f) = { 1   if |f| ≤ M
         0   otherwise }    (6.7)

6.3 Properties of the CTFT

6.3.1 Introduction
This module will look at some of the basic properties of the Continuous-Time Fourier Transform (CTFT).

6.3.2 Summary Table of Fourier Transform Properties

Table 6.1: Table of Fourier Transform Properties

Operation Name                                   Signal (x(t))             Transform (X(f))
Linearity (Section 6.3.3.1)                      a x1(t) + b x2(t)         a X1(f) + b X2(f)
Scalar Multiplication (Section 6.3.3.1)          α x(t)                    α X(f)
Duality (Section 6.3.3.2)                        X(t)                      x(−f)
Time Scaling (Section 6.3.3.3)                   x(αt)                     (1/|α|) X(f/α)
Time Shift (Section 6.3.3.4)                     x(t − τ)                  X(f) e^{−j2πfτ}
Convolution in Time (Section 6.3.3.5)            x1(t) * x2(t)             X1(f) X2(f)
Convolution in Frequency (Section 6.3.3.5)       x1(t) x2(t)               X1(f) * X2(f)
Differentiation (Section 6.3.3.6)                (d^n/dt^n) x(t)           (j2πf)^n X(f)
Parseval's Theorem (Section 6.3.3.7)             ∫_{−∞}^{∞} |x(t)|² dt     ∫_{−∞}^{∞} |X(f)|² df
Modulation (Frequency Shift) (Section 6.3.3.8)   x(t) e^{j2πφt}            X(f − φ)
Symmetry for Real Signals (Section 6.3.3.9)      x(t) is real              X(−f) = X(f)*

⁶This content is available online at <http://legacy.cnx.org/content/m47347/1.4/>.

6.3.3 Discussion of Fourier Transform Properties


6.3.3.1 Linearity
The combined addition and scalar multiplication properties in the table above demonstrate the basic property
of linearity. What you should see is that if one takes the Fourier transform of a linear combination of signals
then it will be the same as the linear combination of the Fourier transforms of each of the individual signals.
This is crucial when using a table (Section 6.4) of transforms to find the transform of a more complicated signal.

Example 6.2
We will begin with the following signal:

z(t) = a x1(t) + b x2(t)    (6.8)

Then, by linearity,

Z(f) = a X1(f) + b X2(f)    (6.9)
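Linearity is easy to confirm numerically. The sketch below uses the DFT (via `np.fft`) as a sampled stand-in for the CTFT, with arbitrary random test signals and weights:

```python
import numpy as np

# Sketch: transform of a linear combination equals the same linear
# combination of the transforms (checked with the DFT).
rng = np.random.default_rng(0)
x1 = rng.standard_normal(256)
x2 = rng.standard_normal(256)
a, b = 2.0, -3.0

lhs = np.fft.fft(a * x1 + b * x2)               # transform of the combination
rhs = a * np.fft.fft(x1) + b * np.fft.fft(x2)   # combination of the transforms
print(np.allclose(lhs, rhs))                    # → True
```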

6.3.3.2 Duality
Duality is a property that can make life quite easy when solving problems involving Fourier transforms.
Basically what this property says is that since a rectangular function in time is a sinc function in frequency,
then a sinc function in time will be a rectangular function in frequency. This is a direct result of the similarity
between the forward CTFT and the inverse CTFT. The only difference is a frequency reversal.

6.3.3.3 Time Scaling

This property deals with the effect on the frequency-domain representation of a signal if the time variable is altered. The most important concept to understand for the time scaling property is that signals that are narrow in time will be broad in frequency and vice versa. The simplest example of this is a delta function, a unit pulse⁷ with a very small duration, in time that becomes an infinite-length constant function in frequency.

The table above shows this idea for the general transformation from the time-domain to the frequency-domain of a signal. You should be able to easily notice that these equations show the relationship mentioned previously: if the time variable is increased then the frequency range will be decreased.

6.3.3.4 Time Shifting

Time shifting shows that a shift in time is equivalent to a linear phase shift in frequency. Since the frequency content depends only on the shape of a signal, which is unchanged in a time shift, then only the phase spectrum will be altered. This property is proven below:

Example 6.3
We will begin by letting z(t) = x(t − τ). Now let us take the Fourier transform with the previous expression substituted in for z(t).

Z(f) = ∫_{−∞}^{∞} x(t − τ) e^{−j2πft} dt    (6.10)

Define σ = t − τ. Through the calculations below, you can see that only the variable in the exponential is altered, thus only changing the phase in the frequency domain.

Z(f) = ∫_{−∞}^{∞} x(σ) e^{−j2πf(σ+τ)} dσ
     = e^{−j2πfτ} ∫_{−∞}^{∞} x(σ) e^{−j2πfσ} dσ
     = e^{−j2πfτ} X(f)    (6.11)
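The same manipulation can be checked numerically. In this sketch, the test signal x(t) = e^{−t²}, the shift τ = 0.7, and the test frequencies are all illustrative choices:

```python
import numpy as np

# Sketch: a shift in time shows up as the linear phase e^{-j 2 pi f tau}.
dt = 1e-3
t = np.arange(-10, 10, dt)
x = np.exp(-t**2)                   # assumed test signal
tau = 0.7

def ctft(sig, f):
    return np.sum(sig * np.exp(-2j * np.pi * f * t)) * dt

z = np.exp(-(t - tau) ** 2)         # z(t) = x(t - tau)
ok = True
for f in (0.3, 1.0):
    lhs = ctft(z, f)
    rhs = np.exp(-2j * np.pi * f * tau) * ctft(x, f)
    ok = ok and abs(lhs - rhs) < 1e-6
print(ok)                            # → True
```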

6.3.3.5 Convolution

Convolution is one of the big reasons for converting signals to the frequency domain, since convolution in time becomes multiplication in frequency. This property is also another excellent example of symmetry between time and frequency. It also shows that there may be little to gain by changing to the frequency domain when multiplication in time is involved.

We will introduce the convolution integral here, but if you have not seen this before or need to refresh your memory, then look at the continuous-time convolution (Section 6.5) module for a more in-depth explanation and derivation.

y(t) = x1(t) * x2(t) = ∫_{−∞}^{∞} x1(τ) x2(t − τ) dτ    (6.12)

6.3.3.6 Time Differentiation

Since LTI⁸ systems can be represented in terms of differential equations, it is apparent with this property that converting to the frequency domain may allow us to convert these complicated differential equations to simpler equations involving multiplication and addition. This is often looked at in more detail during the study of the Laplace Transform⁹.

⁷"Elemental Signals": Section Pulse <http://legacy.cnx.org/content/m0004/latest/#pulsedef>
⁸"System Classifications and Properties" <http://legacy.cnx.org/content/m10084/latest/>
⁹"The Laplace Transform" <http://legacy.cnx.org/content/m10110/latest/>

6.3.3.7 Parseval's Relation

∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df    (6.13)
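For sampled signals, the same energy statement holds for the DFT, up to NumPy's 1/N normalization convention. A quick sketch with an arbitrary random vector:

```python
import numpy as np

# Sketch: discrete Parseval check. With NumPy's unnormalized fft,
# sum |x[n]|^2 equals (1/N) * sum |X[k]|^2.
x = np.random.default_rng(1).standard_normal(512)
X = np.fft.fft(x)
ok = np.allclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / len(x))
print(ok)                            # → True
```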

6.3.3.8 Modulation (Frequency Shift)

Modulation is absolutely imperative to communications applications. Being able to shift a signal to a different frequency allows us to take advantage of different parts of the electromagnetic spectrum, and is what allows us to transmit television, radio, and other applications through the same space without significant interference.

The proof of the frequency shift property is very similar to that of the time shift (Section 6.3.3.4); however, here we would use the inverse Fourier transform in place of the Fourier transform. Since we went through the steps in the previous, time-shift proof, below we will just show the initial and final step to this proof:

z(t) = ∫_{−∞}^{∞} X(f − φ) e^{j2πft} df    (6.14)

z(t) = x(t) e^{j2πφt}    (6.15)

6.3.3.9 Symmetry for Real Signals

For signals that are real-valued, we have that x(t) = x(t)*. Thus, we can evaluate

X(−f) = ∫_{−∞}^{∞} x(t) e^{j2πft} dt = (∫_{−∞}^{∞} x(t) e^{−j2πft} dt)* = X(f)*    (6.16)
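The discrete analogue is conjugate symmetry of the DFT of a real vector, where bin −k (mod N) plays the role of −f. A quick sketch:

```python
import numpy as np

# Sketch: for real x, X(-f) = X(f)*; for the DFT, X[(-k) mod N] = conj(X[k]).
x = np.random.default_rng(2).standard_normal(64)
X = np.fft.fft(x)
k = np.arange(64)
ok = np.allclose(X[(-k) % 64], np.conj(X))
print(ok)                            # → True
```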

6.3.4 Online Resources

The following online resources provide interactive explanations of the properties of the CTFT:
Continuous-Time Fourier Transform Properties Applet (Internet Explorer)¹⁰
Continuous-Time Fourier Transform Properties Applet (Other Browsers)¹¹

6.3.5 Properties Demonstration

An interactive example demonstration of the properties is included below:

Figure 6.3: Interactive Signal Processing Laboratory Virtual Instrument created using NI's LabVIEW. This media object is a LabVIEW VI. Please view or download it at <CTFTSPlab.llb>

¹⁰http://www.jhu.edu/signals/ctftprops-mathml/index.htm
¹¹http://www.jhu.edu/signals/ctftprops/indexCTFTprops.htm

6.4 Common Fourier Transforms

6.4.1 Common CTFT Pairs

Table 6.2: Common CTFT pairs

Time Domain Signal                                Frequency Domain Signal                               Condition
e^{−at} u(t)                                      1/(a + j2πf)                                          a > 0
e^{at} u(−t)                                      1/(a − j2πf)                                          a > 0
e^{−a|t|}                                         2a/(a² + (2πf)²)                                      a > 0
t e^{−at} u(t)                                    1/(a + j2πf)²                                         a > 0
t^n e^{−at} u(t)                                  n!/(a + j2πf)^{n+1}                                   a > 0
δ(t)                                              1
1                                                 δ(f)
e^{j2πf0t}                                        δ(f − f0)
cos(2πf0t)                                        (1/2)(δ(f − f0) + δ(f + f0))
sin(2πf0t)                                        (j/2)(δ(f + f0) − δ(f − f0))
u(t)                                              (1/2)δ(f) + 1/(j2πf)
sgn(t)                                            1/(jπf)
cos(2πf0t) u(t)                                   (1/4)(δ(f − f0) + δ(f + f0)) + jf/(2π(f0² − f²))
sin(2πf0t) u(t)                                   (1/(4j))(δ(f − f0) − δ(f + f0)) + f0/(2π(f0² − f²))
e^{−at} sin(2πf0t) u(t)                           2πf0/((a + j2πf)² + (2πf0)²)                          a > 0
e^{−at} cos(2πf0t) u(t)                           (a + j2πf)/((a + j2πf)² + (2πf0)²)                    a > 0
p(t/(2τ)) = u(t + τ) − u(t − τ)                   2τ sin(2πfτ)/(2πfτ) = 2τ sinc(2πfτ)
(2f0) sin(2πf0t)/(2πf0t) = (2f0) sinc(2πf0t)      u(f + f0) − u(f − f0) = p(f/(2f0))
Δ(t/τ)                                            τ sinc²(πfτ)
f0 sinc²(πf0t)                                    Δ(f/f0)
Σ_{n=−∞}^{∞} δ(t − nT)                            f0 Σ_{n=−∞}^{∞} δ(f − nf0), f0 = 1/T
e^{−t²/(2σ²)}                                     σ√(2π) e^{−2(πfσ)²}

Notes
p(t) is the pulse function for arbitrary real-valued t:

p(t) = { 0   if |t| > 1/2
         1   if |t| ≤ 1/2 }    (6.17)

Δ(t) is the triangle function for arbitrary real-valued t:

Δ(t) = { 1 + t   if −1 ≤ t ≤ 0
         1 − t   if 0 < t ≤ 1
         0       otherwise }    (6.18)

¹²This content is available online at <http://legacy.cnx.org/content/m47344/1.5/>.
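Any entry of the table can be spot-checked by numerical integration of the forward transform (6.4). As a sketch (σ, the grid, and the tolerances are arbitrary choices), here is the Gaussian pair from the last row:

```python
import numpy as np

# Sketch: check e^{-t^2/(2 sigma^2)} <-> sigma*sqrt(2 pi)*e^{-2 (pi f sigma)^2}
# by Riemann-sum evaluation of the CTFT integral.
sigma = 0.8
dt = 1e-3
t = np.arange(-10, 10, dt)
x = np.exp(-t**2 / (2 * sigma**2))

ok = True
for f in (0.0, 0.4, 1.0):
    S = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
    expected = sigma * np.sqrt(2 * np.pi) * np.exp(-2 * (np.pi * f * sigma) ** 2)
    ok = ok and abs(S - expected) < 1e-6
print(ok)                                    # → True
```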

6.5 Continuous Time Convolution and the CTFT

6.5.1 Introduction
This module discusses convolution of continuous signals in the time and frequency domains.

6.5.2 Continuous Time Fourier Transform

The CTFT (Section 6.2) transforms an infinite-length continuous signal in the time domain into an infinite-length continuous signal in the frequency domain.

CTFT

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt    (6.19)

Inverse CTFT

x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df    (6.20)

6.5.3 Convolution Integral

The convolution integral expresses the output of an LTI system based on an input signal, x(t), and the system's impulse response, h(t). The convolution integral is expressed as

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ    (6.21)

Convolution is such an important tool that it is represented by the symbol *, and can be written as

y(t) = x(t) * h(t)    (6.22)

Convolution is commutative. For more information on the characteristics of the convolution integral, read about the Properties of Convolution (Section 4.4).

¹³This content is available online at <http://legacy.cnx.org/content/m47346/1.1/>.

6.5.4 Demonstration

Figure 6.4: Interact (when online) with a Mathematica CDF demonstrating use of the CTFT in signal denoising. To download, right-click and save target as .cdf.

6.5.5 Convolution Theorem

Let f and g be two functions with convolution f * g. Let F be the Fourier transform operator. Then

F(f * g) = F(f) · F(g)    (6.23)

F(f · g) = F(f) * F(g)    (6.24)

By applying the inverse Fourier transform F⁻¹, we can write:

f * g = F⁻¹(F(f) · F(g))    (6.25)

6.5.6 Conclusion
The Fourier transform of a convolution is the pointwise product of Fourier transforms. In other words, convolution in one domain (e.g., the time domain) corresponds to pointwise multiplication in the other domain (e.g., the frequency domain).
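A discrete sketch of the theorem uses the FFT: zero-padding both vectors to length len(f) + len(g) − 1 makes the circular convolution computed in the frequency domain match the linear convolution (the test vectors below are arbitrary):

```python
import numpy as np

# Sketch: F(f * g) = F(f) . F(g), checked with the DFT and zero-padding.
rng = np.random.default_rng(3)
f_sig = rng.standard_normal(100)
g_sig = rng.standard_normal(100)

direct = np.convolve(f_sig, g_sig)                    # linear convolution
n = len(f_sig) + len(g_sig) - 1
via_fft = np.fft.ifft(np.fft.fft(f_sig, n) * np.fft.fft(g_sig, n)).real
ok = np.allclose(direct, via_fft)
print(ok)                                             # → True
```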

6.6 Frequency-Domain Analysis of Linear Time-Invariant Systems

Convolution is the standard tool for the analysis of linear time-invariant (LTI) systems. Recall that for an LTI system, the impulse response h(t) (which, as its name indicates, is the output of the system for an impulse input δ(t)) is the only system knowledge required to compute the output to an arbitrary input as y(t) = x(t) * h(t). Recall, also, that convolution involves a nontrivial computation involving shifting, multiplication, and accumulation. Thus, it becomes highly convenient to compute the output of the LTI system in the frequency domain by applying a Fourier transform to the LTI system equation, which results in Y(f) = X(f) H(f). Here, Y(f), X(f), and H(f) denote the Fourier transforms of y(t), x(t), and h(t), respectively. In summary, the Fourier transform allows us to obtain the output of an LTI system by performing simple multiplication instead of a complicated convolution.

We can also obtain a significant amount of intuition about LTI systems by considering them from the frequency domain. Since Fourier transforms are complex-valued, we can rewrite the multiplication process as a product of magnitudes, |Y(f)| = |X(f) H(f)| = |X(f)| |H(f)|, and a sum of phases, ∠Y(f) = ∠X(f) + ∠H(f).

Recall that |X(f)|, intuitively, represents the "magnitude" of the presence of a complex sinusoid of frequency f in the input signal x(t); similarly, |Y(f)| represents the "magnitude" of the presence of the same complex sinusoid in the output signal y(t). Therefore, we can argue that due to the pointwise nature of function multiplication, the effect of the LTI system is to scale the "magnitude" of each complex sinusoid in the input independently to obtain the output, with the scaling factors being given by the Fourier transform of the impulse response |H(f)|, and these factors being distinct for each possible frequency f.

Similarly, recall that ∠X(f) represents the delay of the complex sinusoid of frequency f when it is present in the input signal x(t); similarly, ∠Y(f) represents the delay of the complex sinusoid of frequency f when it is present in the output signal y(t). Therefore, we can argue that the LTI system imposes an additional delay of ∠H(f) to each complex sinusoid present in the input to obtain the delayed output of the same complex sinusoid, and this factor ∠H(f) is specific to each possible frequency due to the pointwise nature of function addition.

Remarkably, these are the only two things that an LTI system can do to an input signal, and these operations are specific to each frequency of the input signal's Fourier decomposition. Note, for example, that an LTI system can remove a specific frequency f from an input signal by setting H(f) = 0; however, an LTI system cannot "add" new frequencies to an input signal if they were not already present, and an LTI system cannot change the values of the frequencies present in the signal (that is, an LTI system cannot perform modulation; by the same token, a modulator is not an LTI system).

It is also worth noting that the delay of a complex sinusoid with a given phase is linearly dependent on its frequency. Thus, a delay system (which we know is LTI) is characterized by a phase ∠H(f) that will be linearly dependent on the value of the frequency f. This is in contrast with an LTI system whose phase response is independent of the frequency, i.e., with constant ∠H(f); such a system will actually delay each complex exponential with a different delay that is inversely proportional to the frequency f.

¹⁴This content is available online at <http://legacy.cnx.org/content/m50661/1.1/>.


Solutions to Exercises in Chapter 6


Solution to Exercise 6.2.1 (p. 79)
In order to calculate the Fourier transform, all we need to use is (6.4) (Continuous-Time Fourier Transform), complex exponentials15, and basic calculus.

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
     = ∫_{0}^{∞} e^{−t} e^{−j2πft} dt
     = ∫_{0}^{∞} e^{−t(1+j2πf)} dt    (6.26)
     = 1/(1 + j2πf)

X(f) = 1/(1 + j2πf)    (6.27)

Solution to Exercise 6.2.2 (p. 79)
Here we will use (6.5) (Inverse CTFT) to find the inverse FT, given that t ≠ 0.

x(t) = ∫_{−M}^{M} e^{j2πft} df
     = (1/(j2πt)) e^{j2πft} |_{f=−M}^{M}    (6.28)
     = (1/(πt)) sin(2πMt)

x(t) = (2M)(sinc(2Mt))    (6.29)

15 "Continuous Time Complex Exponential" <http://legacy.cnx.org/content/m10060/latest/>
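Both results can be sanity-checked numerically. The sketch below (an addition, not part of the solutions) approximates each Fourier integral with a Riemann sum, evaluating the forward transform at the arbitrary frequency f = 2 and the inverse transform at the arbitrary time t = 0.37 with M = 3; note that np.sinc is the normalized sinc, sin(πu)/(πu), matching the convention used here.

```python
import numpy as np

# Forward transform of x(t) = e^{-t} u(t): should approach 1/(1 + j 2 pi f).
dt = 1e-4
t = np.arange(0, 40, dt)                          # e^{-40} is negligible
X_num = np.sum(np.exp(-t) * np.exp(-1j * 2 * np.pi * 2.0 * t)) * dt
X_closed = 1 / (1 + 1j * 2 * np.pi * 2.0)         # X(f) at f = 2
forward_ok = abs(X_num - X_closed) < 1e-3

# Inverse transform of the ideal lowpass spectrum: should give 2M sinc(2Mt).
M = 3.0
f = np.linspace(-M, M, 200001)
df = f[1] - f[0]
x_num = np.real(np.sum(np.exp(1j * 2 * np.pi * f * 0.37)) * df)
x_closed = 2 * M * np.sinc(2 * M * 0.37)          # np.sinc(u) = sin(pi u)/(pi u)
inverse_ok = abs(x_num - x_closed) < 1e-4
```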



Chapter 7
Discrete-Time Signals
7.1 Common Discrete Time Signals

7.1.1 Introduction
Before looking at this module, hopefully you have an idea of what a signal is and what basic classifications and properties a signal can have. In review, a signal is a function defined with respect to an independent variable. This variable is often time but could represent any number of things. Mathematically, discrete time analog signals have discrete independent variables and continuous dependent variables. This module will describe some useful discrete time analog signals.

7.1.2 Important Discrete Time Signals


7.1.2.1 Sinusoids
One of the most important elemental signals that you will deal with is the real-valued sinusoid. In its discrete-time form, we write the general expression as

A cos(ωn + φ)    (7.1)

where A is the amplitude, ω is the frequency, and φ is the phase. Because n only takes integer values, the resulting function is only periodic if ω/2π is a rational number.

Figure 7.1: A discrete-time cosine signal is plotted as a stem plot.

Note that the equation representation for a discrete time sinusoid waveform is not unique.
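The periodicity criterion can be checked directly. The sketch below (an addition) uses the arbitrary rational ratio ω/2π = 3/8, for which the period is the denominator N = 8, and contrasts it with ω = 1, whose ratio 1/2π is irrational so the sequence never repeats exactly.

```python
import numpy as np
from fractions import Fraction

# cos(omega n) repeats with period N exactly when omega/(2 pi) = k/N is
# rational (k/N in lowest terms); an irrational ratio never repeats exactly.
ratio = Fraction(3, 8)                      # omega / (2 pi)
omega = 2 * np.pi * ratio.numerator / ratio.denominator
N = ratio.denominator                       # candidate period: 8

n = np.arange(64)
x = np.cos(omega * n)
periodic = bool(np.allclose(x[:-N], x[N:]))

y = np.cos(1.0 * n)                         # omega = 1: omega/(2 pi) irrational
aperiodic = not any(np.allclose(y[:-m], y[m:]) for m in range(1, 32))
```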

1 This content is available online at <http://legacy.cnx.org/content/m47447/1.2/>.



7.1.2.2 Complex Exponentials
As important as the general sinusoid, the complex exponential function will become a critical part of your study of signals and systems. Its general discrete form is written as

A e^{sn}    (7.2)

where s = σ + jω is a complex number in terms of σ, the attenuation constant, and ω, the angular frequency.

The discrete time complex exponentials have the following property:

e^{jωn} = e^{j(ω+2π)n}    (7.3)

Given this property, if we have a complex exponential with frequency ω + 2π, then this signal "aliases" to a complex exponential with frequency ω, implying that the equation representations of discrete complex exponentials are not unique.
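This aliasing property is easy to demonstrate numerically; the sketch below (an addition) uses the arbitrary frequency ω = 0.7.

```python
import numpy as np

# Because n takes only integer values, e^{j omega n} and e^{j (omega + 2 pi) n}
# are the same sequence, sample for sample.
n = np.arange(20)
omega = 0.7
low = np.exp(1j * omega * n)
high = np.exp(1j * (omega + 2 * np.pi) * n)
aliased = bool(np.allclose(low, high))
```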

7.1.2.3 Unit Impulses
The second-most important discrete-time signal is the unit sample, which is defined as

δ[n] = { 1  if n = 0
         0  otherwise    (7.4)

Figure 7.2: The unit sample.

More detail is provided in the section on the discrete time impulse function. For now, it suffices to say that this signal is crucially important in the study of discrete signals, as it allows the sifting property to be used in signal representation and signal decomposition.

7.1.2.4 Unit Step
Another very basic signal is the unit-step function, defined as

u[n] = { 0  if n < 0
         1  if n ≥ 0    (7.5)

Figure 7.3: Discrete-Time Unit-Step Function

The step function is a useful tool for testing and for defining other signals. For example, when different shifted versions of the step function are multiplied by other signals, one can select a certain portion of the signal and zero out the rest.
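The windowing trick just described can be sketched in a few lines (an addition to the text); the signal x[n] = n² is an arbitrary choice.

```python
import numpy as np

# Minimal implementations of the unit sample and unit step, plus the trick
# above: a difference of shifted steps selects a portion of a signal.
def delta(n):
    n = np.asarray(n)
    return np.where(n == 0, 1, 0)

def u(n):
    n = np.asarray(n)
    return np.where(n >= 0, 1, 0)

n = np.arange(-5, 10)
x = n ** 2
window = u(n) - u(n - 4)          # one on n = 0, 1, 2, 3; zero elsewhere
selected = x * window             # keeps x on 0..3, zeroes the rest
```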

7.1.3 Common Discrete Time Signals Summary


Some of the most important and most frequently encountered signals have been discussed in this module. There are, of course, many other signals of significant consequence not discussed here. As you will see later, many of the other more complicated signals will be studied in terms of those listed here. Especially take note of the complex exponentials and unit impulse functions, which will be the key focus of several topics included in this course.

7.2 Energy and Power of Discrete-Time Signals

7.2.1 Signal Energy


7.2.1.1 Discrete signals
For discrete-time signals the "area under the squared signal" makes no sense, so we will have to use another energy definition. We define energy as the sum of the squared magnitudes of the samples. Mathematically,

Energy - Discrete time signal:

E_d = Σ_{n=−∞}^{∞} (|x[n]|)²

Example 7.1
Given the sequence y[l] = b^l u[l], where u[n] is the unit step function, find the energy of the sequence. We recognize y[l] as a geometric series. Thus we can use the formula for the sum of a geometric series and we obtain the energy,

E_d = Σ_{l=0}^{∞} y²[l] = 1/(1 − b²).

This expression is only valid for |b| < 1. If we have a larger |b|, the series will diverge. The signal y[l] then has infinite energy. So let's have a look at power...
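Example 7.1 can be verified numerically; the sketch below (an addition) uses the arbitrary value b = 0.8, for which the closed form gives 1/(1 − 0.64) ≈ 2.778.

```python
import numpy as np

# For y[l] = b^l u[l] with |b| < 1, the energy sum converges to 1/(1 - b^2).
b = 0.8
l = np.arange(200)                   # the tail beyond l = 200 is negligible
y = b ** l
E_partial = np.sum(np.abs(y) ** 2)   # partial energy sum
E_closed = 1 / (1 - b ** 2)          # geometric-series closed form
```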

7.2.2 Signal Power


Our definition of energy seems reasonable, and it is. However, what if the signal does not decay fast enough? In this case we have infinite energy for any such signal. Does this mean that a fifty hertz sine wave feeding into your headphones is as strong as the fifty hertz sine wave coming out of your outlet? Obviously not. This is what leads us to the idea of signal power, which in such cases is a more adequate description.

2 This content is available online at <http://legacy.cnx.org/content/m47357/1.3/>.



Figure 7.4: Signal with infinite energy.

7.2.2.1 Discrete signals
For discrete-time signals we define power as energy per sample:

Power - Discrete time:

P_d = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} (|x[n]|)²

For periodic discrete-time signals, the sum need only be taken over one period:

Power - Discrete time periodic signal with period N₀:

P_d = (1/N₀) Σ_{n=0}^{N₀−1} (|x[n]|)²

Example 7.2
Given the signal x[n] = sin(πn/10), shown in Figure 7.5, calculate the power for one period. For the discrete sine we get P_d = (1/20) Σ_{n=1}^{20} sin²(πn/10) = 0.500. Download power_sine.m3 for plots and calculation.

Figure 7.5: Discrete time sine.

3 See the file at <http://legacy.cnx.org/content/m11526/latest/power_sine.m>
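The one-period power computation of Example 7.2 is short enough to reproduce directly (an added sketch, standing in for the power_sine.m download):

```python
import numpy as np

# Power over one period of x[n] = sin(pi n / 10), which has period N0 = 20.
N0 = 20
n = np.arange(N0)
x = np.sin(np.pi * n / 10)
Pd = np.sum(np.abs(x) ** 2) / N0     # average squared magnitude per sample
```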

7.3 Discrete-Time Signal Operations

7.3.1 Introduction
This module will look at two signal operations affecting the time parameter of the signal: time shifting and time scaling. While they appear at first to be straightforward extensions of the continuous-time signal operations, there are some intricacies that are particular to discrete-time signals.

7.3.2 Manipulating the Time Parameter


7.3.2.1 Time Shifting
Time shifting is, as the name suggests, the shifting of a signal in time. This is done by adding or subtracting an integer quantity of the shift to the time variable in the function. Subtracting a fixed positive quantity from the time variable will shift the signal to the right (delay) by the subtracted quantity, while adding a fixed positive amount to the time variable will shift the signal to the left (advance) by the added quantity.

Figure 7.6: f[n − 3] moves (delays) f to the right by 3.

7.3.2.2 Time Scaling
Time scaling compresses or dilates a signal by multiplying the time variable by some quantity. If that quantity is greater than one, the signal becomes narrower and the operation is called decimation. In contrast, if the quantity is less than one, the signal becomes wider and the operation is called expansion or interpolation, depending on how the gaps between values are filled.

7.3.2.2.1 Decimation
In decimation, the input of the signal is changed to be f[cn]. The quantity c used for decimation must be an integer so that the input takes values for which a discrete function is properly defined. The decimated signal f[cn] corresponds to the original signal f[n] where only each c-th sample is preserved (including f[0]), and so we are throwing away samples of the signal (or decimating it).

4 This content is available online at <http://legacy.cnx.org/content/m47809/1.1/>.

Figure 7.7: f[2n] decimates f by 2.

7.3.2.2.2 Expansion
In expansion, the input of the signal is changed to be f[n/c]. We know that the signal f[n] is defined only for integer values of the input n. Thus, in the expanded signal we can only place the entries of the original signal f at values of n that are multiples of c. In other words, we are spacing the values of the discrete-time signal c − 1 entries away from each other. Since the signal is undefined elsewhere, the standard convention in expansion is to fill in the undetermined values with zeros.

Figure 7.8: f[n/2] expands f by 2.

7.3.2.2.3 Interpolation
In practice, we may know specific information about the signal of interest that allows us to provide good estimates of the entries of f[n/c] that are missing after expansion. For example, we may know that the signal is supposed to be piecewise linear, and so knowing the values of f[n/c] at n = mc and at n = (m+1)c allows us to infer the values for n between mc and (m+1)c. This process of inferring the undefined values is known as interpolation. The rule described above is known as linear interpolation; although more sophisticated rules exist for interpolating values, linear interpolation will suffice for our explanation in this module.

Figure 7.9: f[n/2] with interpolation fills in the missing values of the expansion using linear extensions.
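The three operations of this subsection can be sketched on a short signal (an addition; the signal f[n] = n² is an arbitrary choice, and c = 2):

```python
import numpy as np

# Decimation f[2n], expansion f[n/2] with zero-filling, and linear
# interpolation of the expanded signal's missing entries.
f = np.array([0, 1, 4, 9, 16, 25], dtype=float)

decimated = f[::2]                          # keep every 2nd sample: f[2n]

expanded = np.zeros(2 * len(f))             # f[n/2]: originals at even n,
expanded[::2] = f                           # zeros in between

interpolated = expanded.copy()              # fill each interior odd n with the
interpolated[1:-1:2] = (f[:-1] + f[1:]) / 2 # average of its two neighbors
```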

Example 7.3
Given f[n], we would like to plot f[an − b]. The figure below describes a method to accomplish this.

Figure 7.10: (a) Begin with f[n]. (b) Then replace n with an to get f[an]. (c) Finally, replace n with n − b/a to get f[a(n − b/a)] = f[an − b].
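For a concrete check of the combined shift-and-scale (an added sketch with the hypothetical values a = 2, b = 3 and f[n] = n²), the resulting signal g[n] = f[2n − 3] simply indexes f at 2n − 3 wherever that index is valid:

```python
import numpy as np

# g[n] = f[a n - b] evaluated by direct indexing, with a = 2, b = 3.
f = np.arange(10) ** 2                 # f[n] = n^2 on n = 0..9
a, b = 2, 3

n = np.arange(2, 7)                    # the n for which 0 <= 2n - 3 <= 9
g = f[a * n - b]                       # g[n] = (2n - 3)^2
```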

7.3.2.3 Time Reversal
A natural question to consider when learning about time scaling is: What happens when the time variable is multiplied by a negative number? The answer to this is time reversal. This operation is the reversal of the time axis, or flipping the signal over the y-axis.

Figure 7.11: Reverse the time axis.


7.3.3 Signal Operations Summary


Some common operations on signals affect the time parameter of the signal. One of these is time shifting, in which a quantity is added to the time parameter in order to advance or delay the signal. Another is time scaling, in which the time parameter is multiplied by a quantity in order to expand or decimate the signal in time. In the event that the quantity involved in the latter operation is negative, time reversal occurs.

7.4 Discrete Time Impulse Function

7.4.1 Introduction
In engineering, we often deal with the idea of an action occurring at a point. Whether it be a force at a point in space or some other signal at a point in time, it becomes worthwhile to develop some way of quantitatively defining this. This leads us to the idea of a unit impulse, probably the second most important function, next to the complex exponential, in this signals and systems course.

7.4.2 Unit Sample Function
The unit sample function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse in discrete time. There are not nearly as many intricacies involved in its definition as there are in the definition of the Dirac delta function, the continuous time impulse function. The unit sample function simply takes a value of one at n = 0 and a value of zero elsewhere. The impulse function is often written as δ[n].

δ[n] = { 1  if n = 0
         0  otherwise    (7.6)

Figure 7.12: The unit sample.

Below we will briefly list a few important properties of the unit impulse without going into detail of their proofs.

Unit Impulse Properties

δ[αn] = (1/|α|) δ[n]
δ[−n] = δ[n]
δ[n] = u[n] − u[n − 1]
f[n] δ[n] = f[0] δ[n]

Σ_{n=−∞}^{∞} f[n] δ[n] = Σ_{n=−∞}^{∞} f[0] δ[n] = f[0] Σ_{n=−∞}^{∞} δ[n] = f[0]    (7.7)

5 This content is available online at <http://legacy.cnx.org/content/m47448/1.1/>.
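Two of the properties above are easy to verify on a sample grid (an added sketch; the test signal is an arbitrary choice):

```python
import numpy as np

# Check delta[n] = u[n] - u[n-1], and the sifting sum: sum_n f[n] delta[n] = f[0].
n = np.arange(-10, 11)
delta = np.where(n == 0, 1, 0)
u = np.where(n >= 0, 1, 0)
u_shift = np.where(n - 1 >= 0, 1, 0)     # u[n - 1]

step_diff_ok = bool(np.array_equal(delta, u - u_shift))

f = np.cos(0.3 * n) + n                  # any test signal; f[0] = 1
sifted = np.sum(f * delta)               # should equal f at n = 0
```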

7.4.3 Discrete Time Impulse Response Demonstration

Figure 7.13: Interact (when online) with a Mathematica CDF demonstrating the Discrete Time Impulse Function.

7.4.4 Discrete Time Unit Impulse Summary


The discrete time unit impulse function, also known as the unit sample function, is of great importance to the study of signals and systems. The function takes a value of one at time n = 0 and a value of zero elsewhere. It has several important properties that will appear again when studying systems.

7.5 Discrete Time Complex Exponential

7.5.1 Introduction
Complex exponentials are some of the most important functions in our study of signals and systems. Their
importance stems from their status as eigenfunctions of linear time invariant systems; as such, it can be
both convenient and insightful to represent signals in terms of complex exponentials. Before proceeding, you
should be familiar with complex numbers.

7.5.2 The Discrete Time Complex Exponential


7.5.2.1 Complex Exponentials
The complex exponential function will become a critical part of your study of signals and systems. Its general discrete form is written as

z^n    (7.8)

where z is a complex number. Recalling the polar expression of complex numbers, z can be expressed in terms of its magnitude |z| and its angle (or argument) ω in the complex plane: z = |z| e^{jω}. Thus z^n = (|z|)^n e^{jωn}. In the context of complex exponentials, ω is referred to as frequency. For the time being, let's consider complex exponentials for which |z| = 1.

These discrete time complex exponentials have the following property, which will become evident through discussion of Euler's formula:

e^{jωn} = e^{j(ω+2π)n}    (7.9)

Given this property, if we have a complex exponential with frequency ω + 2π, then this signal "aliases" to a complex exponential with frequency ω, implying that the equation representations of discrete complex exponentials are not unique.

7.5.2.2 Euler's Formula
The mathematician Euler proved an important identity relating complex exponentials to trigonometric functions. Specifically, he discovered the eponymously named identity, Euler's formula, which states that for any real number x,

e^{jx} = cos(x) + j sin(x)    (7.10)

which can be proven as follows.

In order to prove Euler's formula, we start by evaluating the Taylor series for e^z about z = 0, which converges for all complex z, at z = jx. The result is

e^{jx} = Σ_{k=0}^{∞} (jx)^k / k!
       = Σ_{k=0}^{∞} (−1)^k x^{2k} / (2k)!  +  j Σ_{k=0}^{∞} (−1)^k x^{2k+1} / (2k+1)!    (7.11)
       = cos(x) + j sin(x)

because the second expression contains the Taylor series for cos(x) and sin(x) about t = 0, which converge for all real x. Thus, the desired result is proven.

Choosing x = ωn, we have:

e^{jωn} = cos(ωn) + j sin(ωn)    (7.12)

6 This content is available online at <http://legacy.cnx.org/content/m34573/1.6/>.

which breaks a discrete time complex exponential into its real part and imaginary part. Using this formula, we can also derive the following relationships:

cos(ωn) = (1/2) e^{jωn} + (1/2) e^{−jωn}    (7.13)

sin(ωn) = (1/2j) e^{jωn} − (1/2j) e^{−jωn}    (7.14)
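Euler's formula and the two identities above can be spot-checked over a grid of arguments (an added sketch):

```python
import numpy as np

# Check e^{j theta} = cos(theta) + j sin(theta) and the cos/sin identities
# derived from it, over a range of argument values.
theta = np.linspace(-10, 10, 101)

euler_ok = bool(np.allclose(np.exp(1j * theta),
                            np.cos(theta) + 1j * np.sin(theta)))
cos_ok = bool(np.allclose(np.cos(theta),
                          0.5 * np.exp(1j * theta) + 0.5 * np.exp(-1j * theta)))
sin_ok = bool(np.allclose(np.sin(theta),
                          np.exp(1j * theta) / 2j - np.exp(-1j * theta) / 2j))
```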

7.5.2.3 Real and Imaginary Parts of Complex Exponentials
Now let's return to the more general case of complex exponentials, z^n. Recall that z^n = (|z|)^n e^{jωn}. We can express this in terms of its real and imaginary parts:

Re{z^n} = (|z|)^n cos(ωn)    (7.15)

Im{z^n} = (|z|)^n sin(ωn)    (7.16)

We see now that the magnitude of z establishes an exponential envelope to the signal, with ω controlling the rate of the sinusoidal oscillation within the envelope.

Figure 7.14: (a) If |z| < 1, we have the case of a decaying exponential envelope. (b) If |z| > 1, we have the case of a growing exponential envelope. (c) If |z| = 1, we have the case of a constant envelope.
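The envelope behavior can be checked directly; the sketch below (an addition) uses the arbitrary value z = 0.9 e^{j0.4}, which falls in the |z| < 1 decaying case:

```python
import numpy as np

# The real part of z^n stays inside the envelope |z|^n, which decays
# when |z| < 1.
z = 0.9 * np.exp(1j * 0.4)
n = np.arange(40)
zn = z ** n
envelope = np.abs(z) ** n
inside = bool(np.all(np.abs(zn.real) <= envelope + 1e-12))
decaying = bool(envelope[-1] < envelope[0])
```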

7.5.3 Discrete Complex Exponential Demonstration

Figure 7.15: Interact (when online) with a Mathematica CDF demonstrating the Discrete Time Complex Exponential. To download, right-click and save target as .cdf.

7.5.4 Discrete Time Complex Exponential Summary


Discrete time complex exponentials are signals of great importance to the study of signals and systems. They can be related to sinusoids through Euler's formula, which identifies the real and imaginary parts of complex exponentials. Euler's formula reveals that, in general, the real and imaginary parts of complex exponentials are sinusoids multiplied by real exponentials.

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.10>

Chapter 8
Time Domain Analysis of Discrete Time
Systems
8.1 Discrete Time Systems

8.1.1 Introduction
As you already know, a discrete time system operates on a discrete time signal input and produces a discrete time signal output. There are numerous examples of useful discrete time systems in digital signal processing, such as digital filters for images or sound. The class of discrete time systems that are both linear and time invariant, known as discrete time LTI systems, is of particular interest, as the properties of linearity and time invariance together allow the use of some of the most important and powerful tools in signal processing.

8.1.2 Discrete Time Systems


8.1.2.1 Linearity and Time Invariance
A system H is said to be linear if it satisfies two important conditions. The first, additivity, states that for every pair of signals x, y we have H(x + y) = H(x) + H(y). The second, homogeneity of degree one, states that for every signal x and scalar a we have H(ax) = aH(x). It is clear that these conditions can be combined together into a single condition for linearity. Thus, a system is said to be linear if for every signals x, y and scalars a, b we have that

H(ax + by) = aH(x) + bH(y).    (8.1)

Linearity is a particularly important property of systems as it allows us to leverage the powerful tools of
linear algebra, such as bases, eigenvectors, and eigenvalues, in their study.
A system H is said to be time invariant if a time shift of an input produces the corresponding shifted output. In other, more precise words, the system H commutes with the time shift operator S_T for every T ∈ Z. That is,

S_T H = H S_T.    (8.2)

Time invariance is desirable because it eases computation while mirroring our intuition that, all else equal, physical systems should react the same to identical inputs at different times.

When a system exhibits both of these important properties, powerful analysis techniques become available. As will be explained and proven in subsequent modules, computation of the system output for a given input becomes a simple matter of convolving the input with the system's impulse response signal. Also proven later, the fact that complex exponentials are eigenvectors of linear time invariant systems will encourage the use of frequency domain tools, such as the various Fourier transforms and associated transfer functions, to describe the behavior of linear time invariant systems.

1 This content is available online at <http://legacy.cnx.org/content/m47454/1.4/>.

Example 8.1
Consider the system H in which

H(f[n]) = 2f[n]    (8.3)

for all signals f. Given any two signals f, g and scalars a, b,

H(af[n] + bg[n]) = 2(af[n] + bg[n]) = a2f[n] + b2g[n] = aH(f[n]) + bH(g[n])    (8.4)

for all integers n. Thus, H is a linear system. For all integers T and signals f,

S_T(H(f[n])) = S_T(2f[n]) = 2f[n − T] = H(f[n − T]) = H(S_T(f[n]))    (8.5)

for all integers n. Thus, H is a time invariant system. Therefore, H is a linear time invariant system.
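Example 8.1 can also be checked numerically on random signals (an added sketch; np.roll stands in for the shift operator S_T on a finite window, and squaring is included purely as a contrasting nonlinear system):

```python
import numpy as np

# H(f)[n] = 2 f[n] is linear and time invariant; squaring is not linear.
rng = np.random.default_rng(0)
f, g = rng.standard_normal(16), rng.standard_normal(16)
a, b = 2.0, -3.0

H = lambda x: 2 * x
shift = lambda x, T: np.roll(x, T)       # periodic stand-in for S_T

linear = bool(np.allclose(H(a * f + b * g), a * H(f) + b * H(g)))
time_invariant = bool(np.allclose(shift(H(f), 3), H(shift(f, 3))))
square_linear = bool(np.allclose((a * f + b * g) ** 2, a * f ** 2 + b * g ** 2))
```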

8.1.2.2 Causality
The causality property requires that a system's output depends only on past and present values of the input. For a discrete-time system, this means that the value of the output y[n₀] at a specific time n₀ can only depend on values of the input x[n] for n ≤ n₀.

8.1.2.3 Stability
There are several definitions of stability, but the one that will be used most frequently in this course will be bounded input, bounded output (BIBO) stability. In this context, a stable system is one in which the output is bounded if the input is also bounded. Similarly, an unstable system is one in which at least one bounded input produces an unbounded output.

In order to understand this concept, we must first look more closely into exactly what we mean by bounded. A bounded signal is any signal such that there exists a value such that the absolute value of the signal is never greater than some value. Since this value is arbitrary, what we mean is that at no point can the signal tend to infinity, including the end behavior. A bounded signal f[n] is a signal for which there exists a constant A such that for all n we have that

|f[n]| < A

Representing this mathematically, a stable system must have the following property, where x(n) is the input and y(n) is the output. The output must satisfy the condition

|y[n]| ≤ M_y < ∞    (8.6)

whenever the input satisfies

|x[n]| ≤ M_x < ∞    (8.7)

where M_x and M_y both represent a set of finite positive numbers and these relationships hold for all of n. Otherwise, the system is unstable.

8.1.3 Discrete Time Systems Summary


Many useful discrete time systems will be encountered in a study of signals and systems. This course is most interested in those that demonstrate both the linearity property and the time invariance property, which together enable the use of some of the most powerful tools of signal processing. It is often useful to describe them in terms of rates of change through linear constant coefficient difference equations, a type of recurrence relation.

8.2 Discrete Time Impulse Response

8.2.1 Introduction
The output of a discrete time LTI system is completely determined by the input and the system's response to a unit impulse.

Figure 8.1: System Output. We can determine the system's output, y[n], if we know the system's impulse response, h[n], and the input, x[n].

Figure 8.2: The output for a unit impulse input is called the impulse response.

Figure 8.3: (a), (b)

2 This content is available online at <http://legacy.cnx.org/content/m47363/1.2/>.

8.2.1.1 Example Impulses
Since we are considering discrete time signals and systems, an ideal impulse is easy to simulate on a computer or some other digital device. It is simply a signal that is 1 at the point n = 0, and 0 everywhere else.

8.2.2 LTI Systems and Impulse Responses

8.2.2.1 Finding System Outputs
By the sifting property of impulses, any signal can be decomposed in terms of an infinite sum of shifted, scaled impulses:

x[n] = Σ_{k=−∞}^{∞} x[k] δ_k[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]    (8.8)

The function δ_k[n] = δ[n − k] peaks up where n = k.

Figure 8.4: (a), (b)

Since we know the response of the system to an impulse and any signal can be decomposed into impulses, all we need to do to find the response of the system to any signal is to decompose the signal into impulses, calculate the system's output for every impulse, and add the outputs back together. This is the process known as Convolution. Since we are in Discrete Time, this is the Discrete Time Convolution Sum.

8.2.2.2 Finding Impulse Responses

a. Apply an impulse input signal to the system and record the output.
b. Use Fourier methods.

We will assume that h[n] is given for now. The goal is now to compute the output y[n] given the impulse response h[n] and the input x[n].

Figure 8.5

8.2.3 Impulse Response Summary


When a system is "shocked" by a delta function, it produces an output known as its impulse response. For
an LTI system, the impulse response completely determines the output of the system given any arbitrary
input. The output can be found using discrete time convolution.
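The decompose-filter-recombine procedure described in this section can be sketched directly (an addition; the signals are arbitrary choices). Each sample x[k] launches a scaled, shifted copy of the impulse response, and the copies add up to the convolution output:

```python
import numpy as np

# Decompose x into shifted, scaled impulses, pass each through the system via
# its impulse response h, and add the outputs; compare with np.convolve.
x = np.array([1.0, 2.0, -1.0, 0.5])
h = np.array([1.0, 0.5, 0.25])          # some known impulse response

y = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):              # x[k] delta[n-k] -> x[k] h[n-k]
    y[k:k + len(h)] += xk * h

matches = bool(np.allclose(y, np.convolve(x, h)))
```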


8.3 Discrete-Time Convolution

8.3.1 Introduction
Convolution, one of the most important concepts in electrical engineering, can be used to determine the output a system produces for a given input signal. It can be shown that a linear time invariant system is completely characterized by its impulse response. The sifting property of the discrete time impulse function tells us that the input signal to a system can be represented as a sum of scaled and shifted unit impulses. Thus, by linearity, it would seem reasonable to compute the output signal as the sum of scaled and shifted unit impulse responses. That is exactly what the operation of convolution accomplishes. Hence, convolution can be used to determine a linear time invariant system's output from knowledge of the input and the impulse response.

8.3.2 Convolution and Circular Convolution


8.3.2.1 Convolution

8.3.2.1.1 Operation Definition
Discrete time convolution is an operation on two discrete time signals defined by the sum

(f ∗ g)[n] = Σ_{k=−∞}^{∞} f[k] g[n − k]    (8.9)

for all signals f, g defined on Z. It is important to note that the operation of convolution is commutative, meaning that

f ∗ g = g ∗ f    (8.10)

for all signals f, g defined on Z. Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f ∗ g)[n] = Σ_{k=−∞}^{∞} f[n − k] g[k]    (8.11)

for all signals f, g defined on Z. Convolution has several other important properties not listed here but explained and derived in a later module.

8.3.2.1.2 Definition Motivation
The above operation definition has been chosen to be particularly useful in the study of linear time invariant systems. In order to see this, consider a linear time invariant system H with unit impulse response h. Given a system input signal x, we would like to compute the system output signal H(x). First, we note that the input can be expressed as the convolution

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]    (8.12)

by the sifting property of the unit impulse function. By linearity,

Hx[n] = Σ_{k=−∞}^{∞} x[k] Hδ[n − k].    (8.13)

Since Hδ[n − k] is the shifted unit impulse response h[n − k], this gives the result

Hx[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = (x ∗ h)[n].    (8.14)

Hence, convolution has been defined such that the output of a linear time invariant system is given by the convolution of the system input with the system unit impulse response.

3 This content is available online at <http://legacy.cnx.org/content/m47455/1.2/>.

8.3.2.1.3 Graphical Intuition
It is often helpful to be able to visualize the computation of a convolution in terms of graphical processes. Consider the convolution of two functions f, g given by

(f ∗ g)[n] = Σ_{k=−∞}^{∞} f[k] g[n − k] = Σ_{k=−∞}^{∞} f[n − k] g[k].    (8.15)

The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each n ∈ Z, that same function must then be shifted by n. The product of the two resulting plots is then constructed. Finally, the sum of the values of the resulting signal is computed.

Example 8.2
Recall that the impulse response for a discrete time echoing feedback system with gain a is

h[n] = a^n u[n],    (8.16)

and consider the response to an input signal that is another exponential,

x[n] = b^n u[n].    (8.17)

We know that the output for this input is given by the convolution of the impulse response with the input signal:

y[n] = x[n] ∗ h[n].    (8.18)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. However, in this case, each possible choice is equally simple. Thus, we would like to compute

y[n] = Σ_{k=−∞}^{∞} a^k u[k] b^{n−k} u[n − k].    (8.19)

The step functions can be used to further simplify this sum. Therefore,

y[n] = 0 for n < 0    (8.20)

and

y[n] = Σ_{k=0}^{n} a^k b^{n−k} = b^n Σ_{k=0}^{n} (a/b)^k    (8.21)

for n ≥ 0. Hence, provided a ≠ b, summing the geometric series gives

y[n] = { 0    for n < 0
         (b^{n+1} − a^{n+1}) / (b − a)    for n ≥ 0    (8.22)
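The exponential-times-exponential convolution of Example 8.2 can be cross-checked by brute force (an added sketch with the hypothetical values a = 0.5, b = 0.8): the sum y[n] = Σ_{k=0}^{n} a^k b^{n−k} has the geometric-series closed form (b^{n+1} − a^{n+1})/(b − a) whenever a ≠ b.

```python
import numpy as np

# Compare a direct convolution of a^n u[n] and b^n u[n] against the
# geometric-series closed form.
a, b = 0.5, 0.8
N = 30
k = np.arange(N)
h = a ** k                               # a^n u[n] on n = 0..N-1
x = b ** k                               # b^n u[n] on n = 0..N-1

y_conv = np.convolve(x, h)[:N]           # exact for n < N (both start at 0)
n = np.arange(N)
y_closed = (b ** (n + 1) - a ** (n + 1)) / (b - a)
matches = bool(np.allclose(y_conv, y_closed))
```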


8.3.2.2 Circular Convolution
Discrete time circular convolution is an operation on two finite length or periodic discrete time signals defined by the sum

(f ⊛ g)[n] = Σ_{k=0}^{N−1} f̂[k] ĝ[n − k]    (8.23)

for all signals f, g defined on Z[0, N−1], where f̂, ĝ are periodic extensions of f and g. It is important to note that the operation of circular convolution is commutative, meaning that

f ⊛ g = g ⊛ f    (8.24)

for all signals f, g defined on Z[0, N−1]. Thus, the circular convolution operation could have been just as easily stated using the equivalent definition

(f ⊛ g)[n] = Σ_{k=0}^{N−1} f̂[n − k] ĝ[k]    (8.25)

for all signals f, g defined on Z[0, N−1], where f̂, ĝ are periodic extensions of f and g. Circular convolution has several other important properties not listed here but explained and derived in a later module.

Alternatively, discrete time circular convolution can be expressed as the sum of two summations given by

(f ⊛ g)[n] = Σ_{k=0}^{n} f[k] g[n − k] + Σ_{k=n+1}^{N−1} f[k] g[n − k + N]    (8.26)

for all signals f, g defined on Z[0, N−1].

Meaningful examples of computing discrete time circular convolutions in the time domain would involve complicated algebraic manipulations dealing with the wrap-around behavior, which would ultimately be more confusing than helpful. Thus, none will be provided in this section. Of course, example computations in the time domain are easy to program and demonstrate. However, discrete time circular convolutions are more easily computed using frequency domain tools, as will be shown in the discrete time Fourier series section.
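The wrap-around computation is indeed easy to program. The sketch below (an addition) implements the two-summation form of circular convolution directly and cross-checks it against the frequency domain route, where circular convolution corresponds to pointwise multiplication of DFTs; the length-4 signals are arbitrary choices.

```python
import numpy as np

# Circular convolution via the two-summation wrap-around form, checked
# against ifft(fft(f) * fft(g)).
def circular_convolve(f, g):
    N = len(f)
    out = np.zeros(N)
    for n in range(N):
        out[n] = (sum(f[k] * g[n - k] for k in range(n + 1))
                  + sum(f[k] * g[n - k + N] for k in range(n + 1, N)))
    return out

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 0.0, -1.0, 0.5])
direct = circular_convolve(f, g)
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
matches = bool(np.allclose(direct, via_fft))
```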

8.3.2.2.1 Denition Motivation


The above operation denition has been chosen to be particularly useful in the study of linear time invariant
systems. In order to see this, consider a linear time invariant system
a nite or periodic system input signal

h.
H (x).

with unit impulse response

we would like to compute the system output signal

Given
First,

we note that the input can be expressed as the circular convolution

x (n) =

N
1
X

x [k] [n k]

(8.27)

k=0
by the sifting property of the unit impulse function. By linearity,

Hx[n] = Σ_{k=0}^{N−1} x[k] Hδ[n − k].    (8.28)

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.10>

Since Hδ[n − k] is the shifted unit impulse response h[n − k], this gives the result

Hx[n] = Σ_{k=0}^{N−1} x[k] h[n − k] = (x ⊛ h)[n].    (8.29)

Hence, circular convolution has been defined such that the output of a linear time invariant system is given by the circular convolution of the system input with the system unit impulse response.
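As a quick numerical sanity check of this result (not part of the original module), the sketch below, assuming NumPy, compares the circular convolution of one input period with the impulse response against a direct simulation of the system on a repeated input.

```python
import numpy as np

N = 8
x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.0])  # one period of the input
h = np.zeros(N)
h[:3] = [1.0, 0.5, 0.25]  # impulse response, shorter than the period

# Output via circular convolution, implemented in the frequency domain.
y_circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Output via direct simulation: linearly convolve h with several periods of x,
# then read off one period from the steady-state (periodic) region.
x_rep = np.tile(x, 4)
y_lin = np.convolve(x_rep, h)
y_direct = y_lin[2 * N:3 * N]
```

The steady-state segment of the linear convolution matches the circular convolution sample for sample.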

8.3.2.2.2 Graphical Intuition

It is often helpful to be able to visualize the computation of a circular convolution in terms of graphical processes. Consider the circular convolution of two finite length functions f, g given by

(f ⊛ g)[n] = Σ_{k=0}^{N−1} f[k] ĝ[n − k] = Σ_{k=0}^{N−1} f̂[n − k] g[k].    (8.30)

The first step in graphically understanding the operation of convolution is to plot each of the periodic extensions of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each n ∈ Z[0, N − 1], that same function must be shifted right by n. The product of the two resulting plots is then constructed. Finally, the area under the resulting curve on Z[0, N − 1] is computed.

8.3.3 Online Resources

The following website provides an interactive Java applet illustrating discrete-time convolution:

Discrete Joy of Convolution (Johns Hopkins University)

4 http://www.jhu.edu/signals/discreteconv/index.html

CHAPTER 8. TIME DOMAIN ANALYSIS OF DISCRETE TIME SYSTEMS

8.3.4 Interactive Element

Figure 8.6: Interact (when online) with the Mathematica CDF demonstrating Discrete Linear Convolution. To download, right-click and save file as .cdf.

8.3.5 Convolution Summary

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output signal of a linear time invariant system for a given input signal with knowledge of the system's unit impulse response. The operation of discrete time convolution is defined such that it performs this function for infinite length discrete time signals and systems. The operation of discrete time circular convolution is defined such that it performs this function for finite length and periodic discrete time signals. In each case, the output of the system is the convolution or circular convolution of the input signal with the unit impulse response.

8.4 Properties of Discrete Time Convolution

8.4.1 Introduction

We have already shown the important role that discrete time convolution plays in signal processing. This section provides discussion and proof of some of the important properties of discrete time convolution. Analogous properties can be shown for discrete time circular convolution with trivial modification of the proofs provided, except where explicitly noted otherwise.

8.4.2 Discrete Time Convolution Properties

8.4.2.1 Associativity

The operation of convolution is associative. That is, for all discrete time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 ∗ f3) = (f1 ∗ f2) ∗ f3    (8.31)

In order to show this, note that

(f1 ∗ (f2 ∗ f3))[n] = Σ_{k1=−∞}^{∞} Σ_{k2=−∞}^{∞} f1[k1] f2[k2] f3[(n − k1) − k2]
                    = Σ_{k1=−∞}^{∞} Σ_{k2=−∞}^{∞} f1[k1] f2[(k1 + k2) − k1] f3[n − (k1 + k2)]
                    = Σ_{k3=−∞}^{∞} Σ_{k1=−∞}^{∞} f1[k1] f2[k3 − k1] f3[n − k3]
                    = ((f1 ∗ f2) ∗ f3)[n]    (8.32)

proving the relationship as desired through the substitution k3 = k1 + k2.
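A quick numerical spot check of associativity for finite-length signals (a sketch assuming NumPy; np.convolve in its default 'full' mode keeps every nonzero output sample):

```python
import numpy as np

# Three arbitrary finite-length discrete time signals.
f1 = np.array([1.0, -2.0, 3.0])
f2 = np.array([0.5, 4.0])
f3 = np.array([2.0, 1.0, -1.0, 0.5])

# (f1 * f2) * f3 versus f1 * (f2 * f3); both groupings yield the same sequence.
lhs = np.convolve(np.convolve(f1, f2), f3)
rhs = np.convolve(f1, np.convolve(f2, f3))
```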

8.4.2.2 Commutativity

The operation of convolution is commutative. That is, for all discrete time signals f1, f2 the following relationship holds.

f1 ∗ f2 = f2 ∗ f1    (8.33)

In order to show this, note that

(f1 ∗ f2)[n] = Σ_{k1=−∞}^{∞} f1[k1] f2[n − k1]
             = Σ_{k2=−∞}^{∞} f1[n − k2] f2[k2]
             = (f2 ∗ f1)[n]    (8.34)

proving the relationship as desired through the substitution k2 = n − k1.

5 This content is available online at <http://legacy.cnx.org/content/m47456/1.1/>.


8.4.2.3 Distributivity

The operation of convolution is distributive over the operation of addition. That is, for all discrete time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 + f3) = f1 ∗ f2 + f1 ∗ f3    (8.35)

In order to show this, note that

(f1 ∗ (f2 + f3))[n] = Σ_{k=−∞}^{∞} f1[k] (f2[n − k] + f3[n − k])
                    = Σ_{k=−∞}^{∞} f1[k] f2[n − k] + Σ_{k=−∞}^{∞} f1[k] f3[n − k]
                    = (f1 ∗ f2 + f1 ∗ f3)[n]    (8.36)

proving the relationship as desired.

8.4.2.4 Multilinearity

The operation of convolution is linear in each of the two function variables. Additivity in each variable results from distributivity of convolution over addition. Homogeneity of order one in each variable results from the fact that for all discrete time signals f1, f2 and scalars a the following relationship holds.

a (f1 ∗ f2) = (a f1) ∗ f2 = f1 ∗ (a f2)    (8.37)

In order to show this, note that

(a (f1 ∗ f2))[n] = a Σ_{k=−∞}^{∞} f1[k] f2[n − k]
                 = Σ_{k=−∞}^{∞} (a f1[k]) f2[n − k] = ((a f1) ∗ f2)[n]
                 = Σ_{k=−∞}^{∞} f1[k] (a f2[n − k]) = (f1 ∗ (a f2))[n]    (8.38)

proving the relationship as desired.

8.4.2.5 Conjugation

The operation of convolution has the following property for all discrete time signals f1, f2, where conj(·) denotes complex conjugation.

conj(f1 ∗ f2) = conj(f1) ∗ conj(f2)    (8.39)

In order to show this, note that

conj(f1 ∗ f2)[n] = conj( Σ_{k=−∞}^{∞} f1[k] f2[n − k] )
                 = Σ_{k=−∞}^{∞} conj(f1[k] f2[n − k])
                 = Σ_{k=−∞}^{∞} conj(f1[k]) conj(f2[n − k])
                 = (conj(f1) ∗ conj(f2))[n]    (8.40)

proving the relationship as desired.

8.4.2.6 Time Shift

The operation of convolution has the following property for all discrete time signals f1, f2, where S_T is the time shift operator with T ∈ Z.

S_T (f1 ∗ f2) = (S_T f1) ∗ f2 = f1 ∗ (S_T f2)    (8.41)

In order to show this, note that

S_T (f1 ∗ f2)[n] = Σ_{k=−∞}^{∞} f2[k] f1[(n − T) − k]
                 = Σ_{k=−∞}^{∞} f2[k] (S_T f1)[n − k] = ((S_T f1) ∗ f2)[n]
                 = Σ_{k=−∞}^{∞} f1[k] f2[(n − T) − k]
                 = Σ_{k=−∞}^{∞} f1[k] (S_T f2)[n − k] = (f1 ∗ (S_T f2))[n]    (8.42)

proving the relationship as desired.

8.4.2.7 Impulse Convolution

The operation of convolution has the following property for all discrete time signals f, where δ is the unit sample function.

f ∗ δ = f    (8.43)

In order to show this, note that

(f ∗ δ)[n] = Σ_{k=−∞}^{∞} f[k] δ[n − k] = f[n]    (8.44)

since δ[n − k] is nonzero (and equal to one) only when k = n, proving the relationship as desired.

8.4.2.8 Width

The operation of convolution has the following property for all discrete time signals f1, f2, where Duration(f) gives the duration of a signal f.

Duration(f1 ∗ f2) = Duration(f1) + Duration(f2) − 1    (8.45)

In order to show this informally, note that (f1 ∗ f2)[n] is nonzero for all n for which there is a k such that f1[k] f2[n − k] is nonzero. When viewing one function as reversed and sliding past the other, it is easy to see that such a k exists for all n on an interval of length Duration(f1) + Duration(f2) − 1. Note that this is not always true of circular convolution of finite length and periodic signals, as there is then a maximum possible duration within a period.
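The width property can be observed directly from the length of a full linear convolution; a small sketch assuming NumPy:

```python
import numpy as np

f1 = np.array([1.0, 2.0, 3.0])          # duration 3
f2 = np.array([4.0, -1.0, 0.5, 2.0])    # duration 4

# 'full' mode keeps every (potentially) nonzero output sample.
y = np.convolve(f1, f2)
```

The output has duration 3 + 4 − 1 = 6, as equation (8.45) predicts.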

8.4.3 Convolution Properties Summary

As can be seen, the operation of discrete time convolution has several important properties that have been listed and proven in this module. With slight modifications to the proofs, most of these also extend to discrete time circular convolution, and the cases in which exceptions occur have been noted above. These identities will be useful to keep in mind as the reader continues to study signals and systems.

8.5 Causality and Stability of Discrete-Time Linear Time-Invariant Systems

8.5.1 Introduction

We have previously defined the system properties of causality and bounded-input bounded-output (BIBO) stability. We have also determined that a linear time-invariant (LTI) system is completely determined by its impulse response h[n]: the output y[n] for an arbitrary input x[n] is obtained via convolution as y[n] = x[n] ∗ h[n]. It should not be surprising, then, that one can determine whether an LTI system is causal or BIBO stable simply by inspecting its impulse response h[n].

8.5.2 Causality

Recall that a system is causal if its output y[n0] at time n0 depends only on the input x[n] for values of n ≤ n0. Consider then the characterization of the system by the convolution sum:

y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} x[m] h[n − m].    (8.46)

We replace the time variable n by a fixed value n0:

y[n0] = Σ_{m=−∞}^{∞} x[m] h[n0 − m].    (8.47)

For this output to depend only on the input at times m ≤ n0 (where m represents a second time variable), the contribution of each input sample with m > n0 must be nulled out of the sum; that is, we require the impulse response to satisfy h[n0 − m] = 0 for values of m > n0. By making the change of variables n = n0 − m (so that m = n0 − n), this means that we require h[n] = 0 for n0 − n > n0, i.e., h[n] = 0 for n < 0. Thus, we obtain that an LTI system with impulse response h[n] is causal if and only if h[n] = 0 for all n < 0.

8.5.3 BIBO Stability

Recall that a system is BIBO stable if whenever the input x[n] is bounded (that is, if there exists M < ∞ such that |x[n]| < M for all n) then the system output is also bounded (that is, there exists N < ∞ such that |y[n]| < N for all n). As before, consider the characterization of the system by the convolution sum:

y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} x[m] h[n − m].    (8.48)

We apply the absolute value to both sides and use the triangle inequality on the sum:

|y[n]| = | Σ_{m=−∞}^{∞} x[m] h[n − m] | ≤ Σ_{m=−∞}^{∞} |x[m] h[n − m]|
       = Σ_{m=−∞}^{∞} |x[m]| |h[n − m]| ≤ Σ_{m=−∞}^{∞} M |h[n − m]|
       = M Σ_{m=−∞}^{∞} |h[n − m]|,

where the next-to-last step uses the boundedness of x[n]. Thus, if the sum has a finite value, then we have established a finite upper bound on |y[n]|, which implies BIBO stability. Thus, we obtain that an LTI system with impulse response h[n] is BIBO stable if and only if

Σ_{m=−∞}^{∞} |h[m]| is finite.    (8.49)

6 This content is available online at <http://legacy.cnx.org/content/m50677/1.1/>.

8.5.4 Summary

The derivations above show that it is significantly easier to verify whether a system is causal and/or BIBO stable when it is linear and time-invariant. In such a case, we simply need to perform straightforward evaluations of the system's impulse response h[n].
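Both tests reduce to simple inspections of h[n]; a sketch assuming NumPy, with a truncated h[n] = (0.5)^n u[n] as an illustrative response:

```python
import numpy as np

# Truncated impulse response h[n] = (0.5)^n u[n] on the index grid n = -5, ..., 20.
n = np.arange(-5, 21)
h = np.where(n >= 0, 0.5 ** np.maximum(n, 0), 0.0)

# Causality: h[n] must vanish for every n < 0.
causal = bool(np.all(h[n < 0] == 0))

# BIBO stability: the absolute sum of h must be finite; for a^n u[n] with
# |a| < 1 it approaches the geometric series limit 1/(1 - |a|) = 2.
abs_sum = np.sum(np.abs(h))
```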


Chapter 9
Discrete Time Fourier Transform
(DTFT)
9.1 Discrete Time Aperiodic Signals

9.1.1 Introduction
This module describes the type of signals acted on by the Discrete Time Fourier Transform.

9.1.2 Periodic and Aperiodic Signals

When a function repeats itself exactly after some given period, or cycle, we say it's periodic. A periodic function can be mathematically defined as:

f[n] = f[n + mN] for all m ∈ Z    (9.1)

where N > 0 represents the fundamental period of the signal, which is the smallest positive value of N for which the signal repeats. Because of this, you may also see a signal referred to as an N-periodic signal. Any function that satisfies this equation is said to be periodic with period N.

Periodic signals in discrete time repeat themselves in each cycle. However, only integers are allowed as the time variable in discrete time. We denote signals in such cases as f[n], where n takes values over the integers. Here's an example of a discrete-time periodic signal with period N:

1 This content is available online at <http://legacy.cnx.org/content/m47369/1.3/>.


CHAPTER 9. DISCRETE TIME FOURIER TRANSFORM (DTFT)

Figure 9.1: A discrete-time periodic signal. Notice the function is the same after a time shift of N.

We can think of periodic functions (with period N) two different ways:

1. as functions on all of R:

Figure 9.2: A discrete time periodic function over all of R where f[n0] = f[n0 + N].

2. or, we can cut out all of the redundancy, and think of them as functions on an interval [0, N] (or, more generally, [a, a + N]). If we know the signal is N-periodic then all the information of the signal is captured by the above interval.

Figure 9.3: Remove the redundancy of the periodic function so that f[n] is undefined outside [0, N].

An aperiodic DT function, however, does not repeat for any N ∈ R; i.e., there exists no N such that equation (9.1) holds. This broader class of signals can only be acted upon by the DTFT.
Suppose we have such an aperiodic function f[n]. We can construct a periodic extension of f[n], called f_{N0}[n], in which f[n] is repeated every N0 samples. If we take the limit as N0 → ∞, we obtain a precise model of an aperiodic signal for which all rules that govern periodic signals can be applied, including Fourier analysis (with an important modification). For more detail on this distinction, see the module on the Discrete Time Fourier Transform.

9.1.3 Aperiodic Signal Demonstration

Figure 9.4: Click on the above thumbnail image (when online) to download an interactive Mathematica Player testing Periodic versus Aperiodic Signals. To download, right-click and save as .cdf.

9.1.4 Conclusion

A discrete periodic signal is completely defined by its values in one period, such as the interval [0, N]. Any aperiodic signal can be defined as an infinite sum of periodic functions, a useful definition that makes it possible to use Fourier analysis on it by assuming all frequencies are present in the signal.

9.2 Eigenfunctions of Discrete Time LTI Systems

9.2.1 Introduction

Prior to reading this module, the reader should already have some experience with linear algebra and should specifically be familiar with the eigenvectors and eigenvalues of linear operators. A linear time invariant system is a linear operator defined on a function space that commutes with every time shift operator on that function space. Thus, we can also consider the eigenvector functions, or eigenfunctions, of a system. It is particularly easy to calculate the output of a system when an eigenfunction is the input, as the output is simply the eigenfunction scaled by the associated eigenvalue. As will be shown, discrete time complex exponentials serve as eigenfunctions of linear time invariant systems operating on discrete time signals.

2 This content is available online at <http://legacy.cnx.org/content/m47459/1.1/>.

9.2.2 Eigenfunctions of LTI Systems

Consider a linear time invariant system H with impulse response h operating on some space of infinite length discrete time signals. Recall that the output H(x[n]) of the system for a given input x[n] is given by the discrete time convolution of the impulse response with the input

H(x[n]) = Σ_{k=−∞}^{∞} h[k] x[n − k].    (9.2)

Now consider the input x[n] = e^{sn} where s ∈ C. Computing the output for this input,

H(e^{sn}) = Σ_{k=−∞}^{∞} h[k] e^{s(n−k)}
          = Σ_{k=−∞}^{∞} h[k] e^{sn} e^{−sk}
          = e^{sn} Σ_{k=−∞}^{∞} h[k] e^{−sk}.    (9.3)

Thus,

H(e^{sn}) = λ_s e^{sn}    (9.4)

where

λ_s = Σ_{k=−∞}^{∞} h[k] e^{−sk}    (9.5)

is the eigenvalue corresponding to the eigenvector e^{sn}.

There are some additional points that should be mentioned. Note that there still may be additional eigenvalues of a linear time invariant system not described by e^{sn} for some s ∈ C. Furthermore, the above discussion has been somewhat formally loose, as e^{sn} may or may not belong to the space on which the system operates. However, for our purposes, complex exponentials will be accepted as eigenvectors of linear time invariant systems. A similar argument using discrete time circular convolution would also hold for spaces of finite length signals.

9.2.3 Eigenfunction of LTI Systems Summary

As has been shown, discrete time complex exponentials are eigenfunctions of linear time invariant systems operating on discrete time signals. Thus, it is particularly simple to calculate the output of a linear time invariant system for a complex exponential input, as the result is a complex exponential output scaled by the associated eigenvalue. Consequently, representations of discrete time signals in terms of discrete time complex exponentials provide an advantage when studying signals. As will be explained later, this is what is accomplished by the discrete time Fourier transform and discrete time Fourier series, which apply to aperiodic and periodic signals respectively.
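This eigenrelation is easy to observe numerically. The sketch below, assuming NumPy (the FIR response and frequency are arbitrary choices of ours), filters a complex exponential and compares the steady-state output with λ_s times the input:

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])    # a short FIR impulse response
omega = 0.7                        # an arbitrary frequency
n = np.arange(50)
x = np.exp(1j * omega * n)         # complex exponential input e^{j omega n}

# Predicted eigenvalue from equation (9.5) with s = j*omega.
lam = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))

# Direct convolution; samples with n >= len(h) - 1 are free of edge effects.
y = np.convolve(x, h)[: len(n)]
```

Away from the leading edge, the output is exactly the input scaled by the eigenvalue.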

9.3 Discrete Time Fourier Transform (DTFT)

9.3.1 Introduction

In this module, we will derive an expansion for arbitrary discrete-time functions, and in doing so, derive the Discrete Time Fourier Transform (DTFT).

Since complex exponentials e^{jωn} are eigenfunctions of linear time-invariant (LTI) systems, calculating the output of an LTI system H given e^{jωn} as an input amounts to simple multiplication, where H(ω) ∈ C is the eigenvalue corresponding to ω. As shown in the figure, a simple exponential input would yield the output

y[n] = H(ω) e^{jωn}.    (9.6)

Figure 9.5: Simple LTI system.

3 This content is available online at <http://legacy.cnx.org/content/m47370/1.2/>.

Using this and the fact that H is linear, calculating y[n] for combinations of complex exponentials is also straightforward:

c1 e^{jω1 n} + c2 e^{jω2 n} → c1 H(ω1) e^{jω1 n} + c2 H(ω2) e^{jω2 n}

Σ_l cl e^{jωl n} → Σ_l cl H(ωl) e^{jωl n}

The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{jωl n} by a different complex number H(ωl) ∈ C. As such, if we can write an input signal as a combination of complex exponentials it allows us to easily calculate the output of a system.

Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Discrete-Time Fourier Transform (DTFT). Because the DTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the DTFT we simply let N go to infinity. This will also change the summation over integers to an integration over real numbers.

9.3.2 DTFT synthesis

It can be demonstrated that an arbitrary N-periodic discrete time function f[n] can be written as a linear combination of harmonic complex sinusoids

f[n] = Σ_{k=0}^{N−1} ck e^{jω0 kn}    (9.7)

where ω0 = 2π/N is the fundamental frequency. For almost all f[n] of practical interest, there exist ck to make (9.7) true. If f[n] is finite energy, then the equality in (9.7) holds in the sense of energy convergence; with discrete-time signals, there are no concerns for divergence as there are with continuous-time signals.

The ck, called the Fourier coefficients, tell us "how much" of the sinusoid e^{jω0 kn} is in f[n]. The formula shows f[n] as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials e^{jω0 kn}, k ∈ Z, form a basis for the space of N-periodic discrete time functions.

4 "Continuous Time Complex Exponential" <http://legacy.cnx.org/content/m10060/latest/>
5 "Eigenfunctions of LTI Systems" <http://legacy.cnx.org/content/m10500/latest/>
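The claim can be illustrated numerically. The sketch below assumes NumPy and assumes the standard analysis formula ck = (1/N) Σ_n f[n] e^{−jω0 kn}, which is not derived in this excerpt; it rebuilds a period-N signal from its Fourier coefficients via (9.7):

```python
import numpy as np

# A period-N discrete time signal (one period) and its Fourier coefficients c_k.
N = 8
n = np.arange(N)
f = np.cos(2 * np.pi * n / N) + 0.5 * np.random.default_rng(0).standard_normal(N)

omega0 = 2 * np.pi / N
# Analysis (assumed normalization): c_k = (1/N) sum_n f[n] e^{-j omega0 k n}.
c = np.array([np.sum(f * np.exp(-1j * omega0 * k * n)) / N for k in range(N)])

# Synthesis, equation (9.7): rebuild f[n] from the N harmonic sinusoids.
f_rebuilt = np.real(sum(c[k] * np.exp(1j * omega0 * k * n) for k in range(N)))
```

The reconstruction matches the original period to machine precision, consistent with the basis claim.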

9.3.2.1 Equations

Discrete-Time Fourier Transform

X(ω) = Σ_{n=−∞}^{∞} f[n] e^{−jωn}    (9.8)

Inverse DTFT

x[n] = (1/2π) ∫_{2π} X(ω) e^{jωn} dω    (9.9)

warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the radial frequency variable ω in the exponential, where ω = 2πF, but it is also common to include the more explicit expression, j2πFn, in the exponential. Sometimes DTFT notation is expressed as X(e^{jω}), to make it clear that it is not a CTFT (which is denoted as X(Ω)). Click here for an overview of the notation used in Connexion's DSP modules.

6 "DSP notation" <http://legacy.cnx.org/content/m10161/latest/>

9.3.3 DTFT Definition demonstration

Figure 9.6: Click on the above thumbnail image (when online) to download an interactive Mathematica Player demonstrating the Discrete Time Fourier Transform. To download, right-click and save target as .cdf.

9.3.4 DTFT Summary

Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The discrete time Fourier transform synthesis formula expresses a discrete time, aperiodic function as the infinite sum of continuous frequency complex exponentials.

X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}    (9.10)

x[n] = (1/2π) ∫_{2π} X(ω) e^{jωn} dω    (9.11)
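In practice the analysis sum (9.10) can be evaluated on a frequency grid. The sketch below, assuming NumPy, does so for a truncated x[n] = (0.8)^n u[n] and compares against the known closed-form pair 1/(1 − 0.8 e^{−jω}):

```python
import numpy as np

# Finite truncation of x[n] = (0.8)^n u[n], kept on n = 0..30.
n = np.arange(31)
x = 0.8 ** n

# Evaluate X(omega) = sum_n x[n] e^{-j omega n} on a grid over one period.
omega = np.linspace(-np.pi, np.pi, 201)
X = np.array([np.sum(x * np.exp(-1j * w * n)) for w in omega])

# Closed form for the full (infinite) geometric series.
X_closed = 1.0 / (1.0 - 0.8 * np.exp(-1j * omega))
```

The truncated sum agrees with the closed form up to the tail of the geometric series, which is below 0.005 here.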

9.4 Properties of the DTFT

9.4.1 Introduction

This module will look at some of the basic properties of the Discrete-Time Fourier Transform (DTFT).

note: We will be discussing these properties for aperiodic, discrete-time signals, but understand that very similar properties hold for continuous-time signals and periodic signals as well.

7 This content is available online at <http://legacy.cnx.org/content/m47374/1.10/>.
8 "Discrete Time Fourier Transform (DTFT)" <http://legacy.cnx.org/content/m10108/latest/>

9.4.2 Table of DTFT Properties

Discrete-Time Fourier Transform Properties

Property                 | Sequence Domain                                  | Frequency Domain
Linearity                | a1 s1[n] + a2 s2[n]                              | a1 S1(ω) + a2 S2(ω)
Conjugate Symmetry       | s[n] real                                        | S(ω) = conj(S(−ω))
Time Scaling (Expansion) | sc[n] = s[n/c] if n/c is an integer, 0 otherwise | S(cω)
Time Reversal            | s[−n]                                            | S(−ω)
Time Delay               | s[n − n0]                                        | e^{−jωn0} S(ω)
Multiplication by n      | n s[n]                                           | j dS(ω)/dω
Sum                      | Σ_{n=−∞}^{∞} s[n]                                | S(0)
Value at Origin          | s[0]                                             | (1/2π) ∫_{2π} S(ω) dω
Time Convolution         | s1[n] ∗ s2[n]                                    | S1(ω) S2(ω)
Frequency Convolution    | s1[n] s2[n]                                      | (1/2π) ∫_{2π} S1(u) S2(ω − u) du
Parseval's Theorem       | Σ_{n=−∞}^{∞} |s[n]|²                             | (1/2π) ∫_{2π} |S(ω)|² dω
Complex Modulation       | e^{jω0 n} s[n]                                   | S(ω − ω0)
Amplitude Modulation     | s[n] cos(ω0 n)                                   | (S(ω − ω0) + S(ω + ω0))/2
                         | s[n] sin(ω0 n)                                   | (S(ω − ω0) − S(ω + ω0))/(2j)

Table 9.1: Discrete-time Fourier transform properties and relations.

9.4.3 Discussion of Fourier Transform Properties

9.4.3.1 Linearity

The combined addition and scalar multiplication properties in the table above demonstrate the basic property of linearity. What you should see is that if one takes the Fourier transform of a linear combination of signals then it will be the same as the linear combination of the Fourier transforms of each of the individual signals. This is crucial when using a table of transforms to find the transform of a more complicated signal.

Example 9.1
We will begin with the following signal:

z[n] = a f1[n] + b f2[n]    (9.12)

Now, after we take the Fourier transform, shown in the equation below, notice that the linear combination of the terms is unaffected by the transform.

Z(ω) = a F1(ω) + b F2(ω)    (9.13)

9 "Common Fourier Transforms" <http://legacy.cnx.org/content/m10099/latest/>

9.4.3.2 Symmetry

Symmetry is a property that can make life quite easy when solving problems involving Fourier transforms. Basically what this property says is that since a rectangular function in time is a sinc function in frequency, then a sinc function in time will be a rectangular function in frequency. This is a direct result of the similarity between the forward DTFT and the inverse DTFT. The only difference is the scaling by 2π and a frequency reversal.

9.4.3.3 Time Scaling

This property deals with the effect on the frequency-domain representation of a signal if the time variable is altered. The most important concept to understand for the time scaling property is that signals that are narrow in time will be broad in frequency and vice versa. The simplest example of this is a delta function, a unit pulse with a very small duration, in time that becomes an infinite-length constant function in frequency.

In contrast to the CTFT property, the DTFT time scaling property is available only for expansion in the time domain. This is because decimation discards samples of the original signal and therefore there is no unique relationship between the original signal and a decimated signal (that is, a decimated signal could correspond to many original signals) that would provide a single transformation between the original DTFT and the "decimated" DTFT. The intuition from CTFT still holds for expansion: expanding the signal in time compacts the DTFT in frequency.

9.4.3.4 Time Shifting

Time shifting shows that a shift in time is equivalent to a linear phase shift in frequency. Since the frequency content depends only on the shape of a signal, which is unchanged in a time shift, then only the phase spectrum will be altered. This property is proven below:

Example 9.2
We will begin by letting z[n] = f[n − η]. Now let us take the Fourier transform with the previous expression substituted in for z[n].

Z(ω) = Σ_{n=−∞}^{∞} f[n − η] e^{−jωn}    (9.14)

Now let us make a simple change of variables, where σ = n − η. Through the calculations below, you can see that only the variable in the exponential is altered, thus only changing the phase in the frequency domain.

Z(ω) = Σ_{σ=−∞}^{∞} f[σ] e^{−jω(σ+η)}
     = e^{−jωη} Σ_{σ=−∞}^{∞} f[σ] e^{−jωσ}
     = e^{−jωη} F(ω)    (9.15)
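The time-shift property can be verified numerically by evaluating the DTFT sum on a frequency grid; a sketch assuming NumPy (the helper name dtft is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(16)    # finite-support signal on n = 0..15
n = np.arange(16)
eta = 3                        # shift amount

omega = np.linspace(-np.pi, np.pi, 101)

def dtft(sig, idx, w):
    """Evaluate X(w) = sum_n sig[n] e^{-j w n} on the grid w."""
    return np.array([np.sum(sig * np.exp(-1j * wk * idx)) for wk in w])

F = dtft(f, n, omega)
# z[n] = f[n - eta] has the same values, supported on n = eta .. eta + 15.
Z = dtft(f, n + eta, omega)
```

The shifted signal's transform differs from F(ω) only by the linear phase factor e^{−jωη}.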

9.4.3.5 Convolution

Convolution is one of the big reasons for converting signals to the frequency domain, since convolution in time becomes multiplication in frequency. This property is also another excellent example of symmetry between time and frequency. It also shows that there may be little to gain by changing to the frequency domain when multiplication in time is involved.

We will introduce the convolution sum here, but if you have not seen this before or need to refresh your memory, then look at the discrete-time convolution module for a more in-depth explanation and derivation.

y[n] = f1[n] ∗ f2[n] = Σ_{η=−∞}^{∞} f1[η] f2[n − η]    (9.16)

9.4.3.6 Time Differentiation

Since LTI systems can be represented in terms of difference equations, it is apparent with this property that converting to the frequency domain may allow us to convert these complicated difference equations into simpler equations involving multiplication and addition. This is often looked at in more detail during the study of the Z Transform.

9.4.3.7 Parseval's Relation

Σ_{n=−∞}^{∞} |f[n]|² = (1/2π) ∫_{2π} |F(ω)|² dω    (9.17)

Parseval's relation tells us that the energy of a signal is equal to the energy of its Fourier transform.
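Parseval's relation can be checked numerically by approximating the frequency-side integral with a Riemann sum over one period; a sketch assuming NumPy:

```python
import numpy as np

f = np.array([1.0, -2.0, 0.5, 3.0, -1.5])    # finite-energy signal on n = 0..4
n = np.arange(len(f))

# Time-domain energy.
E_time = np.sum(np.abs(f) ** 2)

# Frequency-domain energy: (1/2pi) times the integral of |F(w)|^2 over one
# period, approximated on a dense uniform grid (last point dropped because
# F(-pi) = F(pi) for a 2pi-periodic spectrum).
omega = np.linspace(-np.pi, np.pi, 4001)
F = np.array([np.sum(f * np.exp(-1j * w * n)) for w in omega])
E_freq = np.sum(np.abs(F[:-1]) ** 2) * (omega[1] - omega[0]) / (2 * np.pi)
```

The two energies agree, here to well below 1e-3.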

9.4.3.8 Modulation (Frequency Shift)

Modulation is absolutely imperative to communications applications. Being able to shift a signal to a different frequency allows us to take advantage of different parts of the electromagnetic spectrum, and is what allows us to transmit television, radio and other applications through the same space without significant interference.

The proof of the frequency shift property is very similar to that of the time shift (Section 9.4.3.4: Time Shifting); however, here we would use the inverse Fourier transform in place of the Fourier transform. Since we went through the steps in the previous, time-shift proof, below we will just show the initial and final step to this proof:

z[n] = (1/2π) ∫_{2π} F(ω − φ) e^{jωn} dω    (9.18)

z[n] = f[n] e^{jφn}    (9.19)

11 "Discrete Time Convolution" <http://legacy.cnx.org/content/m10087/latest/>
12 "System Classifications and Properties" <http://legacy.cnx.org/content/m10084/latest/>
13 "The Laplace Transform" <http://legacy.cnx.org/content/m10110/latest/>

9.4.4 Online Resources

The following online resource provides an interactive explanation of the properties of the DTFT:

Discrete-Time Fourier Transform Properties (Johns Hopkins University)

14 http://www.jhu.edu/signals/dtftprops/indexDTFTprops.htm

9.4.5 Properties Demonstration

An interactive example demonstration of the properties is included below:

This media object is a LabVIEW VI. Please view or download it at <CTFTSPlab.llb>

Figure 9.8: Interactive Signal Processing Laboratory Virtual Instrument created using NI's Labview.

9.5 Common Discrete Time Fourier Transforms

9.5.1 Common DTFTs

Time Domain x[n]                                     | Frequency Domain X(ω)                                      | Notes
1                                                    | 2π Σ_{m=−∞}^{∞} δ(ω − 2πm)                                 |
e^{jω0 n}                                            | 2π Σ_{m=−∞}^{∞} δ(ω − ω0 − 2πm)                            | real number ω0
δ[n]                                                 | 1                                                          |
δ[n − M]                                             | e^{−jωM}                                                   | integer M
Σ_{m=−∞}^{∞} δ[n − Mm]                               | Σ_{m=−∞}^{∞} e^{−jωMm} = (2π/M) Σ_{k=−∞}^{∞} δ(ω − 2πk/M)  | integer M
u[n]                                                 | 1/(1 − e^{−jω}) + π Σ_{k=−∞}^{∞} δ(ω − 2πk)                |
a^n u[n]                                             | 1/(1 − a e^{−jω})                                          | if |a| < 1
−a^n u[−(n + 1)]                                     | 1/(1 − a e^{−jω})                                          | if |a| > 1
a^{|n|}                                              | (1 − a²)/(1 − 2a cos(ω) + a²)                               | if |a| < 1
n a^n u[n]                                           | a e^{jω}/(e^{jω} − a)²                                     | if |a| < 1
sin(an)                                              | jπ Σ_{m=−∞}^{∞} [δ(ω + a − 2πm) − δ(ω − a − 2πm)]          | real number a
cos(an)                                              | π Σ_{m=−∞}^{∞} [δ(ω − a − 2πm) + δ(ω + a − 2πm)]           | real number a
(ωc/π) sinc²(ωc n)                                   | Σ_{m=−∞}^{∞} Δ((ω − 2πm)/(2ωc))                            | real number ωc, 0 < ωc ≤ π
(ωc/π) sinc(ωc n)                                    | Σ_{m=−∞}^{∞} p((ω − 2πm)/(2ωc))                            | real number ωc, 0 < ωc ≤ π
u[n] − u[n − M]                                      | (sin(ωM/2)/sin(ω/2)) e^{−jω(M−1)/2}                        | integer M
(ωc/(π(n + a))) {cos[ωc(n + a)] − sinc[ωc(n + a)]}   | jω p(ω/(2ωc)) e^{jaω}                                      | real numbers a, ωc, 0 < ωc ≤ π
{ 0 if n = 0; (−1)^n/n elsewhere }                   | jω                                                         | differentiator filter
{ π/2 if n = 0; ((−1)^n − 1)/(πn²) elsewhere }       | |ω|                                                        |
{ 0 if n even; 2/(πn) if n odd }                     | { j if ω < 0; 0 if ω = 0; −j if ω > 0 }                    | Hilbert Transform

Table 9.2

Notes

Here sinc(x) = sin(x)/x. p(t) is the pulse function for arbitrary real-valued t.

p(t) = { 1 if |t| ≤ 1/2; 0 if |t| > 1/2 }    (9.20)

Δ(t) is the triangle function for arbitrary real-valued t.

Δ(t) = { 1 + t if −1 ≤ t ≤ 0; 1 − t if 0 < t ≤ 1; 0 otherwise }    (9.21)

15 This content is available online at <http://legacy.cnx.org/content/m47373/1.5/>.

9.6 Discrete Time Convolution and the DTFT

9.6.1 Introduction

This module discusses convolution of discrete signals in the time and frequency domains.

9.6.2 The Discrete-Time Convolution

9.6.2.1 Discrete Time Fourier Transform

The DTFT transforms an infinite-length discrete signal in the time domain into a finite-length (or 2π-periodic) continuous signal in the frequency domain.

DTFT

X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}    (9.22)

Inverse DTFT

x[n] = (1/2π) ∫_{2π} X(ω) e^{jωn} dω    (9.23)

16 This content is available online at <http://legacy.cnx.org/content/m47375/1.2/>.

9.6.2.2 Demonstration

Figure 9.9: Interact (when online) with a Mathematica CDF demonstrating the Discrete Convolution. To download, right-click and save as .cdf.

9.6.2.3 Convolution Sum


As mentioned above, the convolution sum provides a concise, mathematical way to express the output of
an LTI system based on an arbitrary discrete-time input signal and the system's impulse response.
convolution sum is expressed as

y [n] =

x [k] h [n k]

The

(9.24)

k=
As with continuous-time, convolution is represented by the symbol *, and can be written as

y [n] = x [n] h [n]

(9.25)

Convolution is commutative. For more information on the characteristics of convolution, read about the
Properties of Convolution (Section 4.4).
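The convolution sum (9.24) can be evaluated directly for finite-length sequences. A minimal sketch, assuming both signals are zero outside their listed supports (the function name is illustrative):

```python
import numpy as np

def conv_sum(x, h):
    # y[n] = sum_k x[k] h[n-k]: each input sample x[k] contributes a
    # shifted, scaled copy of h to the output.
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        y[k:k + len(h)] += xk * np.asarray(h, dtype=float)
    return y

print(conv_sum([1, 2, 3], [1, 1]))     # [1. 3. 5. 3.]
print(np.convolve([1, 2, 3], [1, 1]))  # NumPy's built-in agrees
```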

9.6.2.4 Convolution Theorem


Let f and g be two functions with convolution f * g. Let F be the Fourier transform operator. Then

F(f * g) = F(f) · F(g)   (9.26)

F(f · g) = F(f) * F(g)   (9.27)

By applying the inverse Fourier transform F⁻¹, we can write:

f * g = F⁻¹(F(f) · F(g))   (9.28)

9.6.3 Conclusion
The Fourier transform of a convolution is the pointwise product of Fourier transforms.

In other words,

convolution in one domain (e.g., time domain) corresponds to point-wise multiplication in the other domain
(e.g., frequency domain).
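The theorem can be checked numerically. In this sketch the DFT (via NumPy's FFT, zero-padded to the length of the linear convolution) stands in for the Fourier transform operator:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])
h = np.array([0.5, 1.0, 0.25])
N = len(x) + len(h) - 1                    # length of the linear convolution
lhs = np.fft.fft(np.convolve(x, h), N)     # transform of the convolution
rhs = np.fft.fft(x, N) * np.fft.fft(h, N)  # pointwise product of transforms
print(np.allclose(lhs, rhs))               # True
```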



Chapter 10
Computing Fourier Transforms
10.1 Discrete Fourier Transform (DFT)

10.1.1 Introduction
The discrete-time Fourier transform (DTFT) can be evaluated when we have an analytic expression for the
signal. This is a good match to the digital world, where signals are discrete-time and quantized, and thus
it is crucial to implement transforms like the DTFT in a computer. The formula for the DTFT is a sum,
which conceptually can be easily computed save for two issues.

Signal duration: The sum extends over the signal's duration, which must be finite to compute the signal's spectrum. It is exceedingly difficult to store an infinite-length signal in any case, so we'll assume that the signal extends over [0, N − 1].

Continuous frequency: Subtler than the signal duration issue is the fact that the frequency variable is continuous: it may only need to span one period, like [−1/2, 1/2] or [0, 1], but the DTFT formula as it stands requires evaluating the spectrum at all frequencies within a period.

10.1.2 Periodic Discrete-Time Signals


As a way to bridge finite and infinite-length signals, we can consider the periodization of a signal x_N[n] of finite length N. The signal is periodized to x̃[n] by repeating its values so that x̃[n] = x̃[n − N] for all n. One can also write this relationship using a convolution with a train of impulses:

x̃[n] = x_N[n mod N] = x_N[n] * Σ_{k=−∞}^{∞} δ[n − kN].   (10.1)

Recall that each convolution with a shifted impulse will contribute one copy of the original signal, therefore providing periodization. Let us consider the effect of periodization in the frequency domain. We denote the DTFTs of x̃[n] and x_N[n] by X̃(ω) and X_N(ω), respectively. Note also that the impulse train Σ_{k=−∞}^{∞} δ[n − kN] can be interpreted as a constant function f[n] = 1 that has been expanded in time by a factor of N. Using the properties of the DTFT, we have:

X̃(ω) = X_N(ω) · F(Σ_{k=−∞}^{∞} δ[n − kN]) = X_N(ω) · Σ_{m=−∞}^{∞} δ(Nω − 2πm) = X_N(ω) · (1/N) Σ_{m=−∞}^{∞} δ(ω − 2πm/N).   (10.2)

In words, the DTFT of the periodic signal x̃[n] is equal to the product of the DTFT of the finite-length signal X_N(ω) and a train of impulses with spacing 2π/N. This means that X̃(ω) is nonzero only for values of ω that are multiples of 2π/N. Additionally, recall that a DTFT is 2π-periodic; thus, X̃(ω) is uniquely defined by a set of N values of the DTFT X_N(ω).

Conceptually, we see that periodization in time is reflected as sampling in frequency, and that the DTFT of an N-periodic signal (completely determined by its N values in time) is completely determined by N distinct values of its DTFT. This discrete, finite-length relationship between a signal and its Fourier transform gives rise to the discrete Fourier transform (DFT).

1 This content is available online at <http://legacy.cnx.org/content/m47468/1.2/>.
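This frequency-sampling relationship can be checked numerically: the N DFT values of a finite-length signal are exactly its DTFT evaluated at ω = 2πk/N (the test signal below is arbitrary):

```python
import numpy as np

x = np.array([1.0, -0.5, 2.0, 0.25])
N = len(x)
n = np.arange(N)
# DTFT of x evaluated only at the frequencies w_k = 2*pi*k/N
dtft_at_wk = np.array([np.sum(x * np.exp(-1j * (2 * np.pi * k / N) * n))
                       for k in range(N)])
print(np.allclose(dtft_at_wk, np.fft.fft(x)))   # True
```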

10.1.3 Discrete Fourier Transform


Following this narrative, we can consider the DFT as a sampling of the DTFT of the nite-length signal

x [n]

with a spacing of

2/N

between samples. Instead of using the DTFT frequency

we will index the

values of the DFT with integers:

X [k] =

PN 1
n=0

x [n] e

j2kn
N

(10.3)

X ()

Similarly, we can dene the inverse transform by plugging in

into the inverse DTFT, which due to

its sampled nature turns the corresponding integral over the range of frequencies

x [n] =

1
N

PN 1
n=0

X [k] e

j2kn
N

[0, 2]

into a sum:
(10.4)

This pair of equations provide the denition for the forward and inverse DFT, respectively.
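The pair (10.3)-(10.4) can be implemented directly as matrix-vector products. This sketch is the straightforward O(N²) evaluation, not an optimized implementation:

```python
import numpy as np

def dft(x):
    # X[k] = sum_n x[n] e^{-j 2 pi k n / N}, computed as a matrix-vector product
    N = len(x)
    k = n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(k, n) / N)
    return W @ x

def idft(X):
    # x[n] = (1/N) sum_k X[k] e^{+j 2 pi k n / N}
    N = len(X)
    k = n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, k) / N)
    return (W @ X) / N

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(idft(dft(x)), x))        # round trip recovers the signal
print(np.allclose(dft(x), np.fft.fft(x)))  # matches NumPy's FFT
```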

10.1.4 Computing the Discrete Fourier Transform


One would like to simplify the computation process as much as possible, since DFTs are computed so often in science and engineering. One step in this direction can be made by realizing that both the forward and inverse DFT transforms relate to a complex number wN = e^{−j2π/N} that is dependent only on the length of the signal N. With this notation, we can then write the pair of transforms as

X[k] = Σ_{n=0}^{N−1} x[n] (wN)^{kn},   x[n] = (1/N) Σ_{k=0}^{N−1} X[k] (wN)^{−kn}.   (10.5)

One may observe that the equations for these two transforms are very reminiscent of one another: in fact, if we write the DFT transform as a "system" using the notation X[k] = DFT(x[n]), then one could write the inverse transform as x[n] = IDFT(X[k]) = (1/N) DFT(X[−k]), that is, one only needs to invert X[k] in time, run it through the DFT, and divide the output by N to obtain an equivalent IDFT. The DFT has many additional useful properties that significantly simplify its computation and that are leveraged by modern implementations such as the Fast Fourier Transform (FFT).
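The IDFT-via-DFT identity can be verified numerically: time-reverse X[k] modulo N, apply the forward transform, and divide by N.

```python
import numpy as np

# Time-reverse X[k] modulo N, apply the *forward* DFT, divide by N:
# this reproduces the inverse DFT, as claimed in the text.
X = np.fft.fft(np.array([1.0, -2.0, 0.5, 3.0]))
N = len(X)
X_rev = X[(-np.arange(N)) % N]     # X[0], X[3], X[2], X[1]
print(np.allclose(np.fft.fft(X_rev) / N, np.fft.ifft(X)))   # True
```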

10.1.5 Summary
By restricting our attention to finite-length signals, it is possible to define a version of the Fourier transform that also has finite length and that can be easily obtained by a computer. This new discrete Fourier transform is intimately related with the discrete-time Fourier transform and can be interpreted as performing a sampling operation in the frequency domain.

10.2 DFT: Fast Fourier Transform

We now have a way of computing the spectrum for an arbitrary signal: The Discrete Fourier Transform (DFT) computes the spectrum at N equally spaced frequencies from a length-N sequence. An issue that never arises in analog "computation," like that performed by a circuit, is how much work it takes to perform the signal processing operation such as filtering. In computation, this consideration translates to the number of basic computational steps required to perform the needed processing. The number of steps, known as the complexity, becomes equivalent to how long the computation takes (how long must we wait for an answer). Complexity is not so much tied to specific computers or programming languages but to how many steps are required on any computer. Thus, a procedure's stated complexity says that the time taken will be proportional to some function of the amount of data used in the computation and the amount demanded.

2 This content is available online at <http://legacy.cnx.org/content/m0504/2.9/>.
3 "Discrete Fourier Transform": Discrete Fourier transform <http://legacy.cnx.org/content/m0502/latest/#eqn1>

For example, consider the formula for the discrete Fourier transform. For each frequency we chose, we must multiply each signal value by a complex number and add together the results. For a real-valued signal, each real-times-complex multiplication requires two real multiplications, meaning we have 2N multiplications to perform. To add the results together, we must keep the real and imaginary parts separate. Adding N numbers requires N − 1 additions. Consequently, each frequency requires 2N + 2(N − 1) = 4N − 2 basic computational steps. As we have N frequencies, the total number of computations is N(4N − 2).

In complexity calculations, we only worry about what happens as the data lengths increase, and take the dominant term (here, the 4N² term) as reflecting how much work is involved in making the computation. As multiplicative constants don't matter since we are making a "proportional to" evaluation, we find the DFT is an O(N²) computational procedure. This notation is read "order N-squared". Thus, if we double the length of the data, we would expect the computation time to approximately quadruple.
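The operation tally above can be written out as a small helper (the function name is illustrative):

```python
def direct_dft_steps(N):
    # 2N real multiplications + 2(N-1) real additions per frequency,
    # times N frequencies, as tallied in the text: N(4N - 2).
    return N * (2 * N + 2 * (N - 1))

print(direct_dft_steps(8))                          # 240
print(direct_dft_steps(16) / direct_dft_steps(8))   # roughly 4x: O(N^2) growth
```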

Exercise 10.2.1

(Solution on p. 145.)

In making the complexity evaluation for the DFT, we assumed the data to be real. Three questions emerge. First of all, the spectra of such signals have conjugate symmetry, meaning that negative frequency components (k = N/2 + 1, ..., N + 1 in the DFT) can be computed from the corresponding positive frequency components. Does this symmetry change the DFT's complexity?

Secondly, suppose the data are complex-valued; what is the DFT's complexity now?

Finally, a less important but interesting question: suppose we want K frequency values instead of N; now what is the complexity?

10.3 The Fast Fourier Transform (FFT)

10.3.1 Introduction
The Fast Fourier Transform (FFT) is an efficient O(N log N) algorithm for calculating DFTs. The FFT exploits symmetries in the DFT matrix to take a "divide and conquer" approach. We will first discuss deriving the actual FFT algorithm, some of its implications for the DFT, and a speed comparison to drive home the importance of this powerful algorithm.

10.3.2 Deriving the FFT


To derive the FFT, we assume that the signal's duration is a power of two: N = 2^l. Consider what happens to the even-numbered and odd-numbered elements of the sequence in the DFT calculation.

4 "Discrete Fourier Transform": Discrete Fourier transform <http://legacy.cnx.org/content/m0502/latest/#eqn1>
5 This content is available online at <http://legacy.cnx.org/content/m47467/1.1/>.
6 "Fast Fourier Transform (FFT)" <http://legacy.cnx.org/content/m10250/latest/>

S[k] = [ s[0] + s[2] e^{−j2π·2k/N} + ··· + s[N−2] e^{−j2π(N−2)k/N} ] + [ s[1] e^{−j2πk/N} + s[3] e^{−j2π·3k/N} + ··· + s[N−1] e^{−j2π(N−1)k/N} ]
     = [ s[0] + s[2] e^{−j2πk/(N/2)} + ··· + s[N−2] e^{−j2π(N/2−1)k/(N/2)} ] + [ s[1] + s[3] e^{−j2πk/(N/2)} + ··· + s[N−1] e^{−j2π(N/2−1)k/(N/2)} ] e^{−j2πk/N}   (10.6)

Each term in square brackets has the form of an N/2-length DFT. The first one is a DFT of the even-numbered elements, and the second of the odd-numbered elements. The first DFT is combined with the second multiplied by the complex exponential e^{−j2πk/N}. The half-length transforms are each evaluated at frequency indices k ∈ {0, . . . , N − 1}. Normally, the number of frequency indices in a DFT calculation ranges between zero and the transform length minus one. The computational advantage of the FFT comes from recognizing the periodic nature of the discrete Fourier transform. The FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by e^{−j2πk/N}, which is not periodic over N/2, to rewrite the length-N DFT. Figure 10.1 (Length-8 DFT decomposition) illustrates this decomposition. As it stands, we now compute two length-N/2 transforms (complexity 2O(N²/4)), multiply one of them by the complex exponential (complexity O(N)), and add the results (complexity O(N)). At this point, the total complexity is still dominated by the half-length DFT calculations, but the proportionality coefficient has been reduced.


Now for the fun. Because N = 2^l, each of the half-length transforms can be reduced to two quarter-length transforms, each of these to two eighth-length ones, etc. This decomposition continues until we are left with length-2 transforms. This transform is quite simple, involving only additions. Thus, the first stage of the FFT has N/2 length-2 transforms (see the bottom part of Figure 10.1 (Length-8 DFT decomposition)). Pairs of these transforms are combined by adding one to the other multiplied by a complex exponential. Each pair requires 4 additions and 4 multiplications, giving a total number of computations equaling 8 · (N/4) = 2N. This number of computations does not change from stage to stage. Because the number of stages, the number of times the length can be divided by two, equals log₂ N, the complexity of the FFT is O(N log N).
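The even/odd split in (10.6) translates directly into a recursive implementation. This is a sketch of the textbook decimation-in-time algorithm for power-of-two lengths, not an optimized library routine:

```python
import numpy as np

def fft_radix2(s):
    # Decimation in time, following (10.6): split into even/odd half-length
    # DFTs, then combine them with the twiddle factors e^{-j 2 pi k / N}.
    N = len(s)
    if N == 1:
        return np.asarray(s, dtype=complex)
    even = fft_radix2(s[0::2])
    odd = fft_radix2(s[1::2])
    k = np.arange(N // 2)
    twiddle = np.exp(-2j * np.pi * k / N)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

s = np.arange(8, dtype=float)
print(np.allclose(fft_radix2(s), np.fft.fft(s)))   # True
```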


Length-8 DFT decomposition

Figure 10.1: (a) The initial decomposition of a length-8 DFT into the terms using even- and odd-indexed inputs marks the first phase of developing the FFT algorithm. (b) When these half-length transforms are successively decomposed, we are left with the diagram shown in the bottom panel that depicts the length-8 FFT computation.

Doing an example will make the computational savings more obvious. Let's look at the details of a length-8 DFT. As shown on Figure 10.1 (Length-8 DFT decomposition), we first decompose the DFT into two length-4 DFTs, with the outputs added and subtracted together in pairs. Considering Figure 10.1 (Length-8 DFT decomposition) as the frequency index goes from 0 through 7, we recycle values from the length-4 DFTs into the final calculation because of the periodicity of the DFT output. Examining how pairs of outputs are collected together, we create the basic computational element known as a butterfly (Figure 10.2 (Butterfly)).


Butterfly

Figure 10.2: The basic computational element of the fast Fourier transform is the butterfly. It takes two complex numbers, represented by a and b, and forms the quantities shown. Each butterfly requires one complex multiplication and two complex additions.

By considering together the computations involving common output frequencies from the two half-length DFTs, we see that the two complex multiplies are related to each other, and we can reduce our computational work even further. By further decomposing the length-4 DFTs into two length-2 DFTs and combining their outputs, we arrive at the diagram summarizing the length-8 fast Fourier transform (Figure 10.1 (Length-8 DFT decomposition)). Although most of the complex multiplies are quite simple (multiplying by e^{−jπ} means negating real and imaginary parts), let's count those for purposes of evaluating the complexity as full complex multiplies. We have N/2 = 4 complex multiplies and 2N = 16 additions for each stage and log₂ N = 3 stages, making the number of basic computations (3N/2) log₂ N as predicted.

Exercise 10.3.1

(Solution on p. 145.)

Note that the ordering of the input sequence in the two parts of Figure 10.1 (Length-8 DFT decomposition) aren't quite the same. Why not? How is the ordering determined?


10.3.3 Speed Comparison


How much better is O(N log N) than O(N²)?

Figure 10.3: This figure shows how much slower the computation time of an O(N log N) process grows.

N:        10     100    1000    10⁶      10⁹
N²:       100    10⁴    10⁶     10¹²     10¹⁸
N log N:  10     200    3000    6·10⁶    9·10⁹

Table 10.1

Say you have a 1 MFLOP machine (a million "floating point" operations per second). Let N = 1 million = 10⁶.

An O(N²) algorithm takes 10¹² Flops, or 10⁶ seconds ≈ 11.5 days.
An O(N log N) algorithm takes 6·10⁶ Flops, or 6 seconds.

note: N = 1 million is not unreasonable.

Example 10.1
A 3 megapixel digital camera spits out 3·10⁶ numbers for each picture. So for two N-point sequences f[n] and h[n], if computing f[n] ⊛ h[n] directly: O(N²) operations.
Taking FFTs: O(N log N);
multiplying FFTs: O(N);
inverse FFTs: O(N log N);
the total complexity is O(N log N).

8 "Discrete Fourier Transform": Discrete Fourier transform <http://legacy.cnx.org/content/m0502/latest/#eqn1>
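The FFT-based convolution recipe in Example 10.1 can be sketched as follows; zero-padding to the full output length avoids the circular wrap-around of the DFT:

```python
import numpy as np

def fast_conv(f, h):
    # FFT both sequences (O(N log N)), multiply pointwise (O(N)),
    # inverse FFT (O(N log N)); zero-padding avoids circular wrap-around.
    N = len(f) + len(h) - 1
    return np.fft.ifft(np.fft.fft(f, N) * np.fft.fft(h, N)).real

f = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, -1.0])
print(np.allclose(fast_conv(f, h), np.convolve(f, h)))   # True
```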

10.3.4 Conclusion
Other "fast" algorithms have been discovered, most of which make use of how many common factors the transform length N has. In number theory, the number of prime factors a given integer has measures how composite it is. The numbers 16 and 81 are highly composite (equaling 2⁴ and 3⁴ respectively), the number 18 is less so (2¹ · 3²), and 17 not at all (it's prime). In over thirty years of Fourier transform algorithm development, the original Cooley-Tukey algorithm is far and away the most frequently used. It is so computationally efficient that power-of-two transform lengths are frequently used regardless of what the actual length of the data is. It is even well established that the FFT, alongside the digital computer, were almost completely responsible for the "explosion" of DSP in the 60's.


Solutions to Exercises in Chapter 10


Solution to Exercise 10.2.1 (p. 139)
When the signal is real-valued, we may only need half the spectral values, but the complexity remains unchanged. If the data are complex-valued, which demands retaining all frequency values, the complexity is again the same. When only K frequencies are needed, the complexity is O(KN).

Solution to Exercise 10.3.1 (p. 142)
The upper panel has not used the FFT algorithm to compute the length-4 DFTs while the lower one has. The ordering is determined by the algorithm.



Chapter 11
Sampling and Reconstruction
11.1 Signal Sampling

11.1.1 Introduction
Digital computers can process discrete time signals using extremely flexible and powerful algorithms. However, most signals of interest are continuous time signals, which is how data almost always appears in nature.
This module introduces the concepts behind converting continuous time signals into discrete time signals
through a process called sampling.

11.1.2 Sampling
Sampling a continuous time signal produces a discrete time signal by selecting the values of the continuous time signal at evenly spaced points in time. Thus, sampling a continuous time signal x with sampling period Ts gives the discrete time signal xs defined by xs[n] = x(nTs). The sampling frequency is then given by fs = 1/Ts.

It should be intuitively clear that multiple continuous time signals sampled at the same rate can produce
the same discrete time signal since uncountably many continuous time functions could be constructed that
connect the points on the graph of any discrete time function. Thus, sampling at a given rate does not result
in an injective relationship. Hence, sampling is, in general, not invertible.

Example 11.1
For instance, consider the signals x, y defined by

x(t) = sin(t)/t   (11.1)

y(t) = sin(5t)/t   (11.2)

and their sampled versions xs, ys with sampling period Ts = π/2:

xs[n] = sin(nπ/2)/(nπ/2)   (11.3)

ys[n] = sin(5nπ/2)/(nπ/2)   (11.4)

Notice that since

sin(5nπ/2) = sin(2πn + nπ/2) = sin(nπ/2)   (11.5)

it follows that

ys[n] = sin(nπ/2)/(nπ/2) = xs[n].   (11.6)

Hence, x and y provide an example of distinct functions with the same sampled versions at a specific sampling rate.

1 This content is available online at <http://legacy.cnx.org/content/m47377/1.2/>.
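This non-invertibility is easy to observe numerically: the two distinct signals of Example 11.1 yield identical sample sequences (n = 0 is skipped here to avoid the 0/0 form in this sketch):

```python
import numpy as np

# x(t) = sin(t)/t and y(t) = sin(5t)/t sampled with Ts = pi/2 agree exactly.
Ts = np.pi / 2
n = np.arange(1, 50)
xs = np.sin(n * Ts) / (n * Ts)
ys = np.sin(5 * n * Ts) / (n * Ts)
print(np.allclose(xs, ys))   # True
```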


It is also useful to consider the relationship between the frequency domain representations of the continuous time function and its sampled versions. Consider a signal xc(t) sampled with sampling period Ts to produce the discrete time signal x[n] = xc(nTs). Let us define the continuous-time "impulse train"

p(t) = Σ_{n=−∞}^{∞} δ(t − nTs)   (11.7)

Using Fourier series, it can be shown that

p(t) = (1/Ts) Σ_{k=−∞}^{∞} e^{j(2π/Ts)kt}   (11.8)

Multiplying xc(t) by the impulse train yields the "continuous-time sampled signal" xs(t):

xs(t) = xc(t) Σ_{n=−∞}^{∞} δ(t − nTs) = xc(t) (1/Ts) Σ_{k=−∞}^{∞} e^{j(2π/Ts)kt}   (11.9)

Taking the CTFT of xs(t):

Xs(f) = ∫_{−∞}^{∞} (1/Ts) Σ_{k=−∞}^{∞} xc(t) e^{j(2π/Ts)kt} e^{−j2πft} dt
      = (1/Ts) Σ_{k=−∞}^{∞} ∫_{−∞}^{∞} xc(t) e^{−j2π(f − k/Ts)t} dt
      = (1/Ts) Σ_{k=−∞}^{∞} Xc(f − k/Ts)   (11.10)

Notice that Xs(f) is a summation of scaled and shifted copies of Xc(f). We now investigate the relationship between the CTFT and the DTFT:

X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}
     = Σ_{n=−∞}^{∞} xc(nTs) e^{−jωn}
     = Σ_{n=−∞}^{∞} [∫_{−∞}^{∞} xc(t) δ(t − nTs) dt] e^{−jωn}
     = Σ_{n=−∞}^{∞} ∫_{−∞}^{∞} xc(t) δ(t − nTs) e^{−jωn} dt   (11.11)

where xc(t) δ(t − nTs) is nonzero if and only if t = nTs, so that

X(ω) = Σ_{n=−∞}^{∞} ∫_{−∞}^{∞} xc(t) δ(t − nTs) e^{−jω(t/Ts)} dt
     = ∫_{−∞}^{∞} xc(t) [Σ_{n=−∞}^{∞} δ(t − nTs)] e^{−jω(t/Ts)} dt
     = ∫_{−∞}^{∞} xs(t) e^{−jω(t/Ts)} dt   (11.12)

where xc(t) Σ_{n=−∞}^{∞} δ(t − nTs) = xs(t), so that

X(ω) = Xs(ω/(2πTs))   (11.13)

Plugging (11.10) into (11.13) yields:

X(ω) = (1/Ts) Σ_{k=−∞}^{∞} Xc((ω − 2πk)/(2πTs))   (11.14)

Notice that the DTFT of the sampled signal x[n] is a summation of scaled, stretched, and shifted copies of the CTFT of the continuous signal xc(t).
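Relation (11.14) can be checked numerically for a signal whose CTFT is known in closed form; the Gaussian, its frequency, and the truncation limits below are illustrative choices:

```python
import numpy as np

# Check (11.14) for a Gaussian: xc(t) = exp(-pi t^2)  <-->  Xc(f) = exp(-pi f^2).
# The infinite sums are truncated where the terms are negligibly small.
Ts = 0.5
w = 1.3                                    # an arbitrary DTFT frequency
n = np.arange(-50, 51)
k = np.arange(-20, 21)
X_dtft = np.sum(np.exp(-np.pi * (n * Ts) ** 2) * np.exp(-1j * w * n))
X_copies = np.sum(np.exp(-np.pi * ((w - 2 * np.pi * k) / (2 * np.pi * Ts)) ** 2)) / Ts
print(abs(X_dtft - X_copies) < 1e-10)      # True
```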

11.1.3 Sampling Summary


Sampling a continuous time signal produces a discrete time signal by selecting the values of the continuous
time signal at equally spaced points in time. However, we have shown that this relationship is not injective as
multiple continuous time signals can be sampled at the same rate to produce the same discrete time signal.
This is related to a phenomenon called aliasing which will be discussed in later modules.

Consequently,

the sampling process is not, in general, invertible. Nevertheless, as will be shown in the module concerning
reconstruction, the continuous time signal can be recovered from its sampled version if some additional
assumptions hold.

11.2 Sampling Theorem

11.2.1 Introduction
With the introduction of the concept of signal sampling, which produces a discrete time signal by selecting
the values of the continuous time signal at evenly spaced points in time, it is now possible to discuss one
of the most important results in signal processing, the Nyquist-Shannon sampling theorem. Often simply
called the sampling theorem, this theorem concerns signals, known as bandlimited signals, with spectra that
are zero for all frequencies with absolute value greater than or equal to a certain level. The theorem implies
that there is a sufficiently high sampling rate at which a bandlimited signal can be recovered exactly from its
samples, which is an important step in the processing of continuous time signals using the tools of discrete
time signal processing.

11.2.2 Nyquist-Shannon Sampling Theorem


11.2.2.1 Statement of the Sampling Theorem
The Nyquist-Shannon sampling theorem concerns signals with continuous time Fourier transforms that are only nonzero on the interval (−B, B) for some constant B. Such a function is said to be bandlimited to (−B, B). Essentially, the sampling theorem has already been implicitly introduced in the previous module concerning sampling. Given a continuous time signal x with continuous time Fourier transform X, recall that the spectrum Xs of the sampled signal xs with sampling period Ts is given by

Xs(ω) = (1/Ts) Σ_{k=−∞}^{∞} X((ω − 2πk)/(2πTs)).   (11.15)

2 This content is available online at <http://legacy.cnx.org/content/m47378/1.2/>.


It had previously been noted that if x is bandlimited to (−1/(2Ts), 1/(2Ts)), the period of Xs centered about the origin has the same form as X scaled in frequency, since no aliasing occurs. This is illustrated in Figure 11.1. Hence, if any two (−1/(2Ts), 1/(2Ts)) bandlimited continuous time signals sampled to the same signal, they would have the same continuous time Fourier transform and thus be identical. Thus, for each discrete time signal there is a unique (−1/(2Ts), 1/(2Ts)) bandlimited continuous time signal that samples to the discrete time signal with sampling period Ts. Therefore, this (−1/(2Ts), 1/(2Ts)) bandlimited signal can be found from the samples by inverting this bijection.

This is the essence of the sampling theorem. More formally, the sampling theorem states the following. If a signal x is bandlimited to (−B, B), it is completely determined by its samples with sampling rate fs = 2B, known as the Nyquist rate. That is to say, x can be reconstructed exactly from its samples xs with sampling rate fs = 2B. The angular frequency 4πB is often called the angular Nyquist rate. Equivalently, this can be stated in terms of the sampling period Ts = 1/fs. If a signal x is bandlimited to (−B, B), it is completely determined by its samples with sampling period Ts = 1/(2B). That is to say, x can be reconstructed exactly from its samples xs with sampling period Ts.

Figure 11.1: The spectrum of a bandlimited signal is shown as well as the spectra of its samples at rates above and below the Nyquist frequency. As is shown, no aliasing occurs above the Nyquist frequency, and the period of the samples' spectrum centered about the origin has the same form as the spectrum of the original signal scaled in frequency. Below the Nyquist frequency, aliasing can occur and causes the spectrum to take a different form than the original spectrum.

11.2.2.2 Proof of the Sampling Theorem


The above discussion has already shown the sampling theorem in an informal and intuitive way that could easily be refined into a formal proof. However, the original proof of the sampling theorem, which will be given here, provides the interesting observation that the samples of a signal with period Ts provide Fourier series coefficients for the original signal spectrum on (−1/(2Ts), 1/(2Ts)).

Let x be a (−1/(2Ts), 1/(2Ts)) bandlimited signal and xs be its samples with sampling period Ts. We can represent x in terms of its spectrum X using the inverse continuous time Fourier transform and the fact that x is bandlimited. The result is

x(t) = ∫_{−1/(2Ts)}^{1/(2Ts)} X(f) e^{j2πft} df   (11.16)

This representation of x may then be sampled with sampling period Ts to produce

xs[n] = xs(nTs) = ∫_{−1/(2Ts)}^{1/(2Ts)} X(f) e^{j2πfnTs} df   (11.17)

Noticing that this indicates that xs[n] is the nth continuous time Fourier series coefficient for X(f) on the interval (−1/(2Ts), 1/(2Ts)), it is shown that the samples determine the original spectrum X(f) and, by extension, the original signal itself.

11.2.2.3 Perfect Reconstruction


Another way to show the sampling theorem is to derive the reconstruction formula that gives the original signal x̃ = x from its samples xs with sampling period Ts, provided x is bandlimited to (−1/(2Ts), 1/(2Ts)). This is done in the module on perfect reconstruction. However, the result, known as the Whittaker-Shannon reconstruction formula, will be stated here. If the requisite conditions hold, then the perfect reconstruction is given by

x(t) = Σ_{n=−∞}^{∞} xs[n] sinc(t/Ts − n)   (11.18)

where the sinc function is defined as

sinc(t) = sin(πt)/(πt).   (11.19)

From this, it is clear that the set

{sinc(t/Ts − n) | n ∈ Z}   (11.20)

forms an orthogonal basis for the set of (−1/(2Ts), 1/(2Ts)) bandlimited signals, where the coefficients of a (−1/(2Ts), 1/(2Ts)) signal in this basis are its samples with sampling period Ts.
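The reconstruction formula (11.18) can be tried numerically with a truncated sum; the test signal, sampling period, and truncation length below are illustrative choices (an exact reconstruction would need all n):

```python
import numpy as np

# x(t) = sinc^2(t) is bandlimited to (-1, 1) (its CTFT is a triangle),
# so Ts = 0.25 satisfies the sampling theorem. A truncated version of
# (11.18) then recovers x at an off-grid point. Note that NumPy's
# np.sinc(t) = sin(pi t)/(pi t), matching (11.19).
x = lambda t: np.sinc(t) ** 2
Ts = 0.25
n = np.arange(-400, 401)
samples = x(n * Ts)
t0 = 0.37                                   # not a multiple of Ts
rec = np.sum(samples * np.sinc(t0 / Ts - n))
print(abs(rec - x(t0)) < 1e-4)              # True
```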

11.2.3 Practical Implications


11.2.3.1 Discrete Time Processing of Continuous Time Signals
The Nyquist-Shannon Sampling Theorem and the Whittaker-Shannon Reconstruction formula enable discrete time processing of continuous time signals. Because any linear time invariant filter performs a multiplication in the frequency domain, the result of applying a linear time invariant filter to a bandlimited signal is an output signal with the same bandlimit. Since sampling a bandlimited continuous time signal above the Nyquist rate produces a discrete time signal with a spectrum of the same form as the original spectrum, a discrete time filter could modify the samples' spectrum and perfectly reconstruct the output to produce the same result as a continuous time filter. This allows the use of digital computing power and flexibility to be leveraged in continuous time signal processing as well. This is more thoroughly described in the final module of this chapter.


11.2.3.2 Psychoacoustics
The properties of human physiology and psychology often inform design choices in technologies meant for interacting with people. For instance, digital devices dealing with sound use sampling rates related to the frequency range of human vocalizations and the frequency range of human auditory sensitivity. Because most of the sounds in human speech concentrate most of their signal energy between 5 Hz and 4 kHz, most telephone systems discard frequencies above 4 kHz and sample at a rate of 8 kHz. Discarding the frequencies greater than or equal to 4 kHz through use of an anti-aliasing filter is important to avoid aliasing, which would negatively impact the quality of the output sound as is described in a later module. Similarly, human hearing is sensitive to frequencies between 20 Hz and 20 kHz. Therefore, sampling rates for general audio waveforms placed on CDs were chosen to be greater than 40 kHz, and all frequency content greater than or equal to some level is discarded. The particular value that was chosen, 44.1 kHz, was selected for other reasons, but the sampling theorem and the range of human hearing provided a lower bound for the range of choices.

11.2.4 Sampling Theorem Summary

The Nyquist-Shannon sampling theorem states that a signal bandlimited to (-1/(2T_s), 1/(2T_s)) can be reconstructed exactly from its samples with sampling period T_s. The Whittaker-Shannon interpolation formula, which will be further described in the section on perfect reconstruction, provides the reconstruction of the unique (-1/(2T_s), 1/(2T_s)) bandlimited continuous time signal that samples to a given discrete time signal with sampling period T_s. This enables discrete time processing of continuous time signals, which has many powerful applications.

11.3 Signal Reconstruction

11.3.1 Introduction
The sampling process produces a discrete time signal from a continuous time signal by examining the value
of the continuous time signal at equally spaced points in time. Reconstruction, also known as interpolation,
attempts to perform an opposite process that produces a continuous time signal coinciding with the points
of the discrete time signal. Because the sampling process for general sets of signals is not invertible, there
are numerous possible reconstructions from a given discrete time signal, each of which would sample to that
signal at the appropriate sampling rate. This module will introduce some of these reconstruction schemes.

3: This content is available online at <http://legacy.cnx.org/content/m47463/1.1/>.

11.3.2 Reconstruction

11.3.2.1 Reconstruction Process

The process of reconstruction, also commonly known as interpolation, produces a continuous time signal that would sample to a given discrete time signal at a specific sampling rate. Reconstruction can be mathematically understood by first generating a continuous time impulse train

x_{imp}(t) = \sum_{n=-\infty}^{\infty} x_s[n] \, \delta(t - nT_s)    (11.21)

from the sampled signal x_s with sampling period T_s, and then applying a lowpass filter G that satisfies certain conditions to produce an output signal \hat{x}. If G has impulse response g, then the result of the reconstruction

process, illustrated in Figure 11.2, is given by the following computation, the final equation of which is used to perform reconstruction in practice:

\hat{x}(t) = (x_{imp} * g)(t)
           = \int_{-\infty}^{\infty} x_{imp}(\tau) g(t - \tau) \, d\tau
           = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} x_s[n] \, \delta(\tau - nT_s) g(t - \tau) \, d\tau
           = \sum_{n=-\infty}^{\infty} x_s[n] \int_{-\infty}^{\infty} \delta(\tau - nT_s) g(t - \tau) \, d\tau
           = \sum_{n=-\infty}^{\infty} x_s[n] \, g(t - nT_s)    (11.22)

Figure 11.2: Block diagram of reconstruction process for a given lowpass filter G.

11.3.2.2 Reconstruction Filters

In order to guarantee that the reconstructed signal \hat{x} samples to the discrete time signal x_s from which it was reconstructed using the sampling period T_s, the lowpass filter G must satisfy certain conditions. These can be expressed well in the time domain in terms of a condition on the impulse response g of the lowpass filter G. The sufficient condition to be a reconstruction filter that we will require is that, for all n \in \mathbb{Z},

g(nT_s) = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases} = \delta[n].    (11.23)

This means that sampling g with sampling period T_s produces a discrete time unit impulse signal. Therefore, it follows that sampling \hat{x} with sampling period T_s results in

\hat{x}(nT_s) = \sum_{m=-\infty}^{\infty} x_s[m] \, g(nT_s - mT_s)
             = \sum_{m=-\infty}^{\infty} x_s[m] \, g((n - m) T_s)
             = \sum_{m=-\infty}^{\infty} x_s[m] \, \delta[n - m]
             = x_s[n],    (11.24)

which is the desired result for reconstruction filters.
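This condition and its consequence are easy to verify numerically. The sketch below uses the triangular (linear-interpolation) kernel, a hypothetical but standard example of a filter satisfying g(nT_s) = δ[n]:

```python
import numpy as np

# Check that a kernel with g(n*Ts) = delta[n] makes sampling of the
# reconstruction xhat(n*Ts) return the original samples xs[n] exactly.
Ts = 0.5
g = lambda t: np.maximum(0.0, 1.0 - np.abs(t) / Ts)  # g(0)=1, g(n*Ts)=0 otherwise

rng = np.random.default_rng(0)
xs = rng.standard_normal(8)                # arbitrary discrete-time samples
m = np.arange(len(xs))

def xhat(t):
    # xhat(t) = sum_m xs[m] * g(t - m*Ts), as in (11.22)
    return np.sum(xs * g(t - m * Ts))

resampled = np.array([xhat(n * Ts) for n in range(len(xs))])
print(np.max(np.abs(resampled - xs)))      # 0.0: sampling xhat recovers xs
```

Since g(nT_s − mT_s) reduces to δ[n − m] for this kernel, the recovery is exact, mirroring the derivation in (11.24).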

11.3.2.3 Cardinal Basis Splines


Since there are many continuous time signals that sample to a given discrete time signal, additional constraints are required in order to identify a particular one of these.

For instance, we might require our

reconstruction to yield a spline of a certain degree, which is a signal described in piecewise parts by polynomials not exceeding that degree. Additionally, we might want to guarantee that the function and a certain
number of its derivatives are continuous.


This may be accomplished by restricting the result to the span of sets of certain splines, called basis splines or B-splines. Specifically, if an nth degree spline with continuous derivatives up to at least order n - 1 is required, then the desired function for a given T_s belongs to the span of

\{ B_n(t/T_s - k) \mid k \in \mathbb{Z} \}    (11.25)

where

B_n = B_0 * B_{n-1}

for n \geq 1 and

B_0(t) = \begin{cases} 1 & -1/2 < t < 1/2 \\ 0 & \text{otherwise.} \end{cases}    (11.26)
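The recursion B_n = B_0 * B_{n-1} is straightforward to sketch numerically by repeatedly convolving a discretized box with itself; the grid spacing and orders below are arbitrary choices. One easily checked property is that every B_n integrates to 1, since convolution multiplies areas.

```python
import numpy as np

# Build discretized basis splines B_1, B_2, B_3 from the box B_0 by repeated
# convolution, and record their areas (each should stay approximately 1).
dt = 0.001
t = np.arange(-3, 3, dt)
box = np.where(np.abs(t) < 0.5, 1.0, 0.0)   # B_0: unit box on (-1/2, 1/2)
B = box.copy()
areas = []
for order in range(1, 4):                   # B_1, B_2, B_3
    B = np.convolve(B, box, mode="same") * dt  # approximate continuous convolution
    areas.append(dt * np.sum(B))            # numerical integral of B_n
print(areas)                                # each close to 1.0
```

The successive convolutions also widen the support by 1 each time and smooth the function, consistent with B_n approaching a Gaussian shape as the order grows.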

Figure 11.3: The basis splines B_n are shown in the above plots. Note that, except for the order 0 and order 1 functions, these functions do not satisfy the conditions to be reconstruction filters. Also notice that as the order increases, the functions approach the Gaussian function, which is exactly B_\infty.

However, the basis splines B_n do not satisfy the conditions to be a reconstruction filter for n \geq 2, as is shown in Figure 11.3. Still, the B_n are useful in defining the cardinal basis splines, which do satisfy the conditions to be reconstruction filters. If we let b_n be the samples of B_n on the integers, it turns out that b_n has an inverse b_n^{-1} with respect to the operation of convolution for each n. This is to say that b_n * b_n^{-1} = \delta. The cardinal basis spline of order n for reconstruction with sampling period T_s is defined as

\eta_n(t) = \sum_{k=-\infty}^{\infty} b_n^{-1}[k] \, B_n(t/T_s - k).    (11.27)

In order to confirm that this satisfies the condition to be a reconstruction filter, note that

\eta_n(mT_s) = \sum_{k=-\infty}^{\infty} b_n^{-1}[k] \, B_n(m - k) = (b_n^{-1} * b_n)(m) = \delta(m).    (11.28)

Thus, \eta_n is a valid reconstruction filter. Since \eta_n is an nth degree spline with continuous derivatives up to order n - 1, the result of the reconstruction will be an nth degree spline with continuous derivatives up to order n - 1.

Figure 11.4: The above plots show cardinal basis spline functions \eta_0, \eta_1, \eta_2, and \eta_\infty. Note that the functions satisfy the conditions to be reconstruction filters. Also, notice that as the order increases, the cardinal basis splines approximate the sinc function, which is exactly \eta_\infty. Additionally, these filters are acausal.

The lowpass filter with impulse response equal to the cardinal basis spline \eta_0 of order 0 is one of the simplest examples of a reconstruction filter. It simply extends the value of the discrete time signal for half the sampling period to each side of every sample, producing a piecewise constant reconstruction. Thus, the result is discontinuous for all nonconstant discrete time signals.

Likewise, the lowpass filter with impulse response equal to the cardinal basis spline \eta_1 of order 1 is another of the simplest examples of a reconstruction filter. It simply joins the adjacent samples with a straight line, producing a piecewise linear reconstruction. In this way, the reconstruction is continuous for all possible discrete time signals. However, unless the samples are collinear, the result has discontinuous first derivatives.

In general, similar statements can be made for lowpass filters with impulse responses equal to cardinal basis splines of any order. Using the nth order cardinal basis spline \eta_n, the result is a piecewise degree n polynomial. Furthermore, it has continuous derivatives up to at least order n - 1. However, unless all samples are points on a polynomial of degree at most n, the derivative of order n will be discontinuous.
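The order-0 (zero-order hold) and order-1 (linear interpolation) reconstructions can be sketched directly; the sample values below are arbitrary illustrative choices.

```python
import numpy as np

# Order-0 (box, zero-order hold) and order-1 (triangle, linear interpolation)
# reconstruction kernels applied to a short, arbitrary sample sequence.
Ts = 1.0
lam0 = lambda t: np.where(np.abs(t) < 0.5 * Ts, 1.0, 0.0)  # hold each sample
lam1 = lambda t: np.maximum(0.0, 1.0 - np.abs(t) / Ts)     # connect samples

xs = np.array([0.0, 2.0, 1.0, -1.0])
n = np.arange(len(xs))
rec = lambda t, g: np.sum(xs * g(t - n * Ts))

zoh_val = rec(0.3, lam0)   # piecewise constant: equals the nearest sample, 0.0
lin_val = rec(0.5, lam1)   # piecewise linear: midway between 0.0 and 2.0 -> 1.0
print(zoh_val, lin_val)
```

The first result shows the staircase behavior (and hence the discontinuities) of the zero-order hold, while the second shows the straight-line joining performed by the linear kernel.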

Reconstructions of the discrete time signal given in Figure 11.5 using several of these filters are shown in Figure 11.6. As the order of the cardinal basis spline increases, notice that the reconstruction approaches that of the infinite order cardinal spline, the sinc function. As will be shown in the subsequent section on perfect reconstruction, the filters with impulse response equal to the sinc function play an especially important role in signal processing.

Figure 11.5: The above plot shows an example discrete time function. This discrete time function will be reconstructed using sampling period T_s using several cardinal basis splines in Figure 11.6.

Figure 11.6: The above plots show interpolations of the discrete time signal given in Figure 11.5 using lowpass filters with impulse responses given by the cardinal basis splines shown in Figure 11.4. Notice that the interpolations become increasingly smooth and approach the sinc interpolation as the order increases.

11.3.3 Reconstruction Summary

Reconstruction of a continuous time signal from a discrete time signal can be accomplished through several schemes. However, it is important to note that reconstruction is not the inverse of sampling and only produces one possible continuous time signal that samples to a given discrete time signal. As is covered in the subsequent module, perfect reconstruction of a bandlimited continuous time signal from its sampled version is possible using the Whittaker-Shannon reconstruction formula, which makes use of the ideal lowpass filter and its sinc function impulse response, if the sampling rate is sufficiently high.

11.4 Perfect Reconstruction

11.4.1 Introduction

If certain additional assumptions about the original signal and sampling rate hold, then the original signal can be recovered exactly from its samples using a particularly important type of filter. More specifically, it will be shown that if a bandlimited signal is sampled at a rate greater than twice its bandlimit, the Whittaker-Shannon reconstruction formula perfectly reconstructs the original signal. This formula makes use of the ideal lowpass filter, which is related to the sinc function. This is extremely useful, as sampled versions of continuous time signals can be filtered using discrete time signal processing, often in a computer. The results may then be reconstructed to produce the same continuous time output as some desired continuous time system.

4: This content is available online at <http://legacy.cnx.org/content/m47379/1.3/>.

11.4.2 Perfect Reconstruction

In order to understand the conditions for perfect reconstruction and the filter it employs, consider the following. As a beginning, a sufficient condition under which perfect reconstruction is possible will be discussed. Subsequently, the filter and process used for perfect reconstruction will be detailed.

Recall that the sampled version x_s of a continuous time signal x with sampling period T_s has a spectrum given by

X_s(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X\!\left( \frac{\omega - 2\pi k}{2\pi T_s} \right).    (11.29)

As before, note that if x is bandlimited to (-1/(2T_s), 1/(2T_s)), meaning that X is only nonzero on (-1/(2T_s), 1/(2T_s)), then each period of X_s has the same form as X. Thus, we can identify the original spectrum X from the spectrum of the samples X_s and, by extension, the original signal x from its samples x_s at rate T_s if x is bandlimited to (-1/(2T_s), 1/(2T_s)).

If a signal x is bandlimited to (-B, B), then it is also bandlimited to (-f_s/2, f_s/2) provided that T_s < 1/(2B). Thus, if we ensure that x is sampled to x_s with sufficiently high sampling frequency f_s = 1/T_s > 2B and have a way of identifying the unique (-f_s/2, f_s/2) bandlimited signal corresponding to a discrete time signal at sampling period T_s, then x_s can be used to reconstruct \hat{x} = x exactly. The frequency 2B is known as the Nyquist rate. Therefore, the condition that the sampling rate f_s = 1/T_s > 2B be greater than the Nyquist rate is a sufficient condition for perfect reconstruction to be possible.
The correct filter must also be known in order to perform perfect reconstruction. The ideal lowpass filter defined by G(f) = T_s (u(f + f_s/2) - u(f - f_s/2)), which is shown in Figure 11.7, removes all signal content not in the frequency range (-f_s/2, f_s/2). Therefore, application of this filter to the impulse train \sum_{n=-\infty}^{\infty} x_s[n] \, \delta(t - nT_s) results in an output bandlimited to (-f_s/2, f_s/2).

We now only need to confirm that the impulse response g of the filter G satisfies our sufficient condition to be a reconstruction filter. The inverse Fourier transform of G(f) is

g(t) = \mathrm{sinc}(t/T_s) = \begin{cases} 1 & t = 0 \\ \frac{\sin(\pi t / T_s)}{\pi t / T_s} & t \neq 0 \end{cases}    (11.30)
which is shown in Figure 11.7. Hence,

g(nT_s) = \mathrm{sinc}(n) = \begin{cases} 1 & n = 0 \\ \frac{\sin(\pi n)}{\pi n} & n \neq 0 \end{cases} = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases} = \delta[n].    (11.31)

Therefore, the ideal lowpass filter G is a valid reconstruction filter. Since it is a valid reconstruction filter and always produces an output that is bandlimited to (-f_s/2, f_s/2), this filter always produces the unique (-f_s/2, f_s/2) bandlimited signal that samples to a given discrete time sequence at sampling period T_s when the impulse train \sum_{n=-\infty}^{\infty} x_s[n] \, \delta(t - nT_s) is input. Therefore, we can always reconstruct any (-f_s/2, f_s/2) bandlimited signal from its samples at sampling period T_s by the formula

x(t) = \sum_{n=-\infty}^{\infty} x_s[n] \, \mathrm{sinc}(t/T_s - n).    (11.32)

This perfect reconstruction formula is known as the Whittaker-Shannon interpolation formula and is sometimes also called the cardinal series. In fact, the sinc function is the infinite order cardinal basis spline \eta_\infty. Consequently, the set \{ \mathrm{sinc}(t/T_s - n) \mid n \in \mathbb{Z} \} forms a basis for the vector space of (-f_s/2, f_s/2) bandlimited signals, where the signal samples provide the corresponding coefficients. It is a simple exercise to show that this basis is, in fact, an orthogonal basis.
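The orthogonality can also be checked numerically. The sketch below uses T_s = 1 and a truncated integration interval, both hypothetical discretization choices:

```python
import numpy as np

# Numerical check of orthogonality: the inner product of sinc(t - n) and
# sinc(t - m) over the real line is 1 for n == m and 0 otherwise (T_s = 1).
dt = 0.01
t = np.arange(-200.0, 200.0, dt)              # truncated integration range
ip = lambda n, m: float(np.sum(np.sinc(t - n) * np.sinc(t - m)) * dt)

same = ip(0, 0)       # approximately 1
cross = ip(0, 3)      # approximately 0
print(same, cross)
```

The small residuals come from truncating the integration range, since the sinc tails decay only like 1/t.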

Figure 11.7: The above plots show the ideal lowpass filter and its inverse Fourier transform, the sinc function.

Figure 11.8: The plots show an example discrete time signal and its Whittaker-Shannon sinc reconstruction.

11.4.3 Perfect Reconstruction Summary

This module has shown that bandlimited continuous time signals can be reconstructed exactly from their samples provided that the sampling rate exceeds the Nyquist rate, which is twice the bandlimit. The Whittaker-Shannon reconstruction formula computes this perfect reconstruction using an ideal lowpass filter, with the resulting signal being a sum of shifted sinc functions that are scaled by the sample values. Sampling below the Nyquist rate can lead to aliasing, which makes the original signal irrecoverable, as is described in the subsequent module. The ability to perfectly reconstruct bandlimited signals has important practical implications for the processing of continuous time signals using the tools of discrete time signal processing.

11.5 Aliasing Phenomena

11.5.1 Introduction

Through discussion of the Nyquist-Shannon sampling theorem and the Whittaker-Shannon reconstruction formula, it has already been shown that a (-B, B) bandlimited continuous time signal can be reconstructed from its samples at rate f_s = 1/T_s via the sinc interpolation filter if f_s > 2B. Now, this module will investigate a problematic phenomenon, called aliasing, that can occur if this sufficient condition for perfect reconstruction does not hold. When aliasing occurs, the spectrum of the samples has a different form than the original signal spectrum, so the samples cannot be used to reconstruct the original signal through Whittaker-Shannon interpolation.

11.5.2 Aliasing

Aliasing occurs when each period of the spectrum of the samples does not have the same form as the spectrum of the original signal. Given a continuous time signal x with continuous time Fourier transform X, recall that the spectrum X_s of the sampled signal x_s with sampling period T_s is given by

X_s(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X\!\left( \frac{\omega - 2\pi k}{2\pi T_s} \right).    (11.33)

As has already been mentioned several times, if x is bandlimited to (-f_s/2, f_s/2), then each period of X_s has the same form as X. However, if x is not bandlimited to (-f_s/2, f_s/2), then the shifted copies X((\omega - 2\pi k)/(2\pi T_s)) can overlap and sum together. This is illustrated in Figure 11.9, in which sampling above the Nyquist frequency produces a sample spectrum of the same shape as the original signal, but sampling below the Nyquist frequency produces a sample spectrum with a very different shape. Whittaker-Shannon interpolation of each of these sequences produces different results. The low frequencies not affected by the overlap are the same, but there is noise content in the higher frequencies caused by aliasing. Higher frequency energy masquerades as low frequency energy content, a highly undesirable effect.
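A minimal numerical sketch makes this masquerading concrete; the frequencies below are hypothetical, chosen so that the out-of-band sinusoid lands exactly on an in-band alias.

```python
import numpy as np

# A 9 Hz cosine sampled at fs = 10 Hz (below its 18 Hz Nyquist rate) produces
# exactly the same samples as a 1 Hz cosine: the two signals are aliases.
fs = 10.0
n = np.arange(20)
high = np.cos(2 * np.pi * 9.0 * n / fs)  # samples of the 9 Hz signal
low = np.cos(2 * np.pi * 1.0 * n / fs)   # samples of its 1 Hz alias
gap = np.max(np.abs(high - low))
print(gap)                               # ~0: indistinguishable from the samples
```

This works because cos(2π · 9n/10) = cos(2πn − 2πn/10) = cos(2π · 1 · n/10), so from the samples alone the two frequencies cannot be told apart.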

5: This content is available online at <http://legacy.cnx.org/content/m47380/1.3/>.


Figure 11.9: The spectrum of a bandlimited signal is shown, as well as the spectra of its samples at rates above and below the Nyquist frequency. As is shown, no aliasing occurs above the Nyquist frequency, and the period of the sample spectrum centered about the origin has the same form as the spectrum of the original signal scaled in frequency. Below the Nyquist frequency, aliasing can occur and causes the spectrum to take a different shape than the original spectrum.

Unlike when sampling above the Nyquist frequency, sampling below the Nyquist frequency does not yield an injective (one-to-one) function from the (-B, B) bandlimited continuous time signals to the discrete time signals. Any signal x with spectrum X which overlaps and sums to X_s samples to x_s. It should be intuitively clear that there are very many (-B, B) bandlimited signals that sample to a given discrete time signal below the Nyquist frequency, as is demonstrated in Figure 11.10. It is quite easy to construct uncountably infinite families of such signals.

Aliasing gets its name from the fact that multiple, in fact infinitely many, (-B, B) bandlimited signals sample to the same discrete sequence if f_s < 2B. Thus, information about the original signal is lost in this noninvertible process, and these different signals effectively assume the same identity, an alias. Hence, under these conditions the Whittaker-Shannon interpolation formula will not produce a perfect reconstruction of the original signal but will instead give the unique (-f_s/2, f_s/2) bandlimited signal that samples to the discrete sequence.


Figure 11.10: The spectrum of a discrete time signal x_s, taken from Figure 11.9, is shown along with the spectra of three (-B, B) signals that sample to it at rate f_s < 2B. From the sampled signal alone, it is impossible to tell which, if any, of these was sampled at rate f_s to produce x_s. In fact, there are infinitely many (-B, B) bandlimited signals that sample to x_s at a sampling rate below the Nyquist rate.

11.5.3 Online Resources

The following online resources provide interactive explanations of sampling, reconstruction, and aliasing:

Sampling and Reconstruction with Sound Output (6)
Sampling and Reconstruction (Rice University) (7)
An Introduction to Sampling (University of Houston) (8)

6: http://cwx.prenhall.com/bookbind/pubbooks/cyganski/chapter0/medialib/SAMPLING_RECONS_SOUND/sampling.html
7: http://www.ece.rice.edu/dsp/courses/elec301/demos/applets/Reconst/index.html
8: http://www2.egr.uh.edu/glover/applets/Sampling/Sampling.html


11.5.4 Aliasing Demonstration

Figure 11.11: Interact (when online) with a Mathematica CDF demonstrating sampling and aliasing
for a sinusoid. To Download, right-click and save target as .cdf.

11.5.5 Aliasing Summary

Aliasing, essentially the signal processing version of identity theft, occurs when each period of the spectrum of the samples does not have the same form as the spectrum of the original signal. As has been shown, there can be infinitely many (-B, B) bandlimited signals that sample to a given discrete time signal x_s at a rate f_s = 1/T_s < 2B below the Nyquist frequency. However, there is a unique (-B, B) bandlimited signal that samples to x_s, which is given by the Whittaker-Shannon interpolation of x_s, at rate f_s \geq 2B, as no aliasing occurs above the Nyquist frequency. Unfortunately, sufficiently high sampling rates cannot always be produced. Aliasing is detrimental to many signal processing applications, so in order to process continuous time signals using discrete time tools, it is often necessary to find ways to avoid it other than increasing the sampling rate. Thus, anti-aliasing filters are of practical importance.

11.6 Anti-Aliasing Filters

11.6.1 Introduction

It has been shown that a (-B, B) bandlimited signal can be perfectly reconstructed from its samples at a rate f_s = 1/T_s \geq 2B. However, it is not always practically possible to produce sufficiently high sampling rates or to ensure that the input is bandlimited in real situations. Aliasing, which manifests itself as a difference in shape between the periods of the sampled signal spectrum and the original spectrum, would occur without any further measures to correct this. Thus, it often becomes necessary to filter out signal energy at frequencies above f_s/2 in order to avoid the detrimental effects of aliasing. This is the role of the anti-aliasing filter, a lowpass filter applied before sampling to ensure that the signal is (-f_s/2, f_s/2) bandlimited, or at least nearly so.

11.6.2 Anti-Aliasing Filters

Aliasing can occur when a signal with energy at frequencies outside (-B, B) is sampled at rate f_s < 2B. Thus, when sampling below the Nyquist frequency, it is desirable to remove as much signal energy outside the frequency range (-B, B) as possible while keeping as much signal energy in the frequency range (-B, B) as possible. This suggests that the ideal lowpass filter with cutoff frequency f_s/2 would be the optimal anti-aliasing filter to apply before sampling. While this is true, the ideal lowpass filter can only be approximated in real situations.

9: This content is available online at <http://legacy.cnx.org/content/m47392/1.1/>.


In order to demonstrate the importance of anti-aliasing filters, consider the calculation of the error energy between the original signal and its Whittaker-Shannon reconstruction from its samples taken with and without the use of an anti-aliasing filter. Let x be the original signal and y = Gx be the anti-alias filtered signal, where G is the ideal lowpass filter with cutoff frequency f_s/2. It is easy to show that the reconstructed spectrum using no anti-aliasing filter is given by

\hat{X}(f) = \begin{cases} T_s X_s(2\pi T_s f) & |f| < f_s/2 \\ 0 & \text{otherwise} \end{cases} = \begin{cases} \sum_{k=-\infty}^{\infty} X(f - k f_s) & |f| < f_s/2 \\ 0 & \text{otherwise.} \end{cases}    (11.34)

Thus, the reconstruction error spectrum for this case is

X(f) - \hat{X}(f) = \begin{cases} -\sum_{k=1}^{\infty} \left( X(f + k f_s) + X(f - k f_s) \right) & |f| < f_s/2 \\ X(f) & \text{otherwise.} \end{cases}    (11.35)

Similarly, the reconstructed spectrum using the ideal lowpass anti-aliasing filter is given by

\hat{Y}(f) = Y(f) = \begin{cases} X(f) & |f| < f_s/2 \\ 0 & \text{otherwise.} \end{cases}    (11.36)

Thus, the reconstruction error spectrum for this case is

X(f) - \hat{Y}(f) = \begin{cases} 0 & |f| < f_s/2 \\ X(f) & \text{otherwise.} \end{cases}    (11.37)

Hence, by Parseval's theorem, it follows that \|x - y\| \leq \|x - \hat{x}\|. Also note that the spectrum of \hat{Y} is identical to that of the original signal X in the frequency range (-f_s/2, f_s/2). This is graphically shown in Figure 11.12.

Figure 11.12: The figure above illustrates the use of an anti-aliasing filter to improve the process of sampling and reconstruction when using a sampling frequency below the Nyquist frequency. Notice that when using an ideal lowpass anti-aliasing filter, the reconstructed signal spectrum has the same shape as the original signal spectrum for all frequencies below half the sampling rate. This results in a lower error energy when using the anti-aliasing filter, as can be seen by comparing the error spectra shown.
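The error-energy comparison can be sketched numerically in a simple two-tone case; all frequencies below are hypothetical, and the ideal anti-aliasing filter is simulated by simply removing the out-of-band component before sampling.

```python
import numpy as np

# Components at 2 Hz (in band) and 9 Hz (out of band) sampled at fs = 10 Hz.
# Without anti-aliasing, the 9 Hz part aliases down and corrupts the samples;
# removing it first leaves samples of the in-band part untouched.
fs = 10.0
t = np.arange(40) / fs
inband = np.cos(2 * np.pi * 2.0 * t)     # below fs/2: survives sampling intact
outband = np.cos(2 * np.pi * 9.0 * t)    # above fs/2: aliases down to 1 Hz

no_filter = inband + outband             # samples taken without anti-aliasing
with_filter = inband                     # out-of-band energy removed first

err_no = np.sum((no_filter - inband) ** 2)      # energy of the aliased error
err_with = np.sum((with_filter - inband) ** 2)  # zero in-band error
print(err_no, err_with)
```

The anti-aliased samples carry no error in the band of interest, while the unfiltered samples contain the full energy of the aliased component, mirroring the error spectra derived above.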

11.6.3 Anti-Aliasing Filters Summary

As can be seen, anti-aliasing filters ensure that the signal is (-f_s/2, f_s/2) bandlimited, or at least nearly so. The optimal anti-aliasing filter would be the ideal lowpass filter with cutoff frequency at f_s/2, which would ensure that the original signal spectrum and the reconstructed signal spectrum are equal on the interval (-f_s/2, f_s/2). However, the ideal lowpass filter is not possible to implement in practice, and approximations must be accepted instead. Anti-aliasing filters are an important component of systems that implement discrete time processing of continuous time signals, as will be shown in the subsequent module.

11.7 Changing Sampling Rates in Discrete Time

Up to this point, we have discussed the connection between continuous-time and discrete-time signals that are captured by the concepts of sampling and reconstruction. In particular cases, we might be interested in observing a signal under a variety of sampling rates. For example, the amount of memory or communication bandwidth available for transmission or storage of a discrete-time signal might fluctuate in time, which may require increasing or decreasing the sampling rate (or sampling period) accordingly.

10: This content is available online at <http://legacy.cnx.org/content/m48038/1.2/>.


Figure 11.13: Changing the sampling frequency by reconstructing and sampling always works, but sometimes it may be possible to do so working only in the discrete-time domain.

Naively, if we have sampled the signal sufficiently often to be able to recover it (according to the Nyquist criterion), then we can always return from the discrete-time signal to a continuous-time signal using reconstruction and then sample the signal at the new desired sampling rate. However, there are specific cases where it is possible to modify the sampling rate of the signal without having to switch back to a continuous-time representation. In other words, certain changes of sampling rate can be performed directly on the discrete-time signal. We discuss three specific cases: downsampling, upsampling, and rational scaling.

11.7.1 Downsampling

Consider the case where we start with a sampling frequency f_s and are asked to reduce the sampling frequency by an integer factor to a new value f_s' = f_s / k. When this change is translated to the sampling period T (as f_s = 1/T), it is easy to see that the new sampling period T' = kT will be k times larger than its original value. Therefore, the change in sampling frequency can be accounted for by taking the existing discrete signal x[n] (sampled at frequency f_s) and decimating it by a factor of k to obtain the new signal x'[n] = x[kn], re-sampled with sampling frequency f_s'.

We know that for both the old and new sampling frequencies the discrete-time Fourier transform of the sampled signal will correspond to a periodized, frequency-scaled version of the continuous-time signal's Fourier transform X_{CT}(f), where the respective periods/sampling frequencies f_s and f_s' each gets mapped to \Omega = 2\pi. We now compare how these two discrete-time transforms match to one another:

X(\Omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\Omega n} = X_{CT}\!\left( f = \frac{f_s}{2\pi} \Omega \right),
X'(\Omega) = \sum_{n=-\infty}^{\infty} x[kn] e^{-j\Omega n} = X_{CT}\!\left( f = \frac{f_s}{k \cdot 2\pi} \Omega \right).    (11.38)

By connecting the two equations through the right-most terms, it is easy to see that X'(\Omega) = X(\Omega / k), i.e., that the downsampling performed expands the DTFT of the original signal x[n]. Note, however, that since we are still working with a discrete-time signal x'[n], the new transform must remain 2\pi-periodic, and so this expansion occurs for each copy of the spectrum around its center, but the copies stay stationary at multiples of 2\pi.


Figure 11.14: The downsampling operation in time, continuous frequency, and discrete frequency.

Note also that this result effectively provides us with a new property for the DTFT: decimation in the time domain corresponds to a qualified expansion in the frequency domain, where the expansion is around the center of each copy of the CT spectrum (\Omega = 0, \pm 2\pi, \pm 4\pi, \ldots).

Finally, notice that in downsampling there is the risk that aliasing may occur when the new frequency f_s' does not satisfy the Nyquist criterion (f_s' \geq 2 f_0, where f_0 is the bandwidth of the CT signal). Noticeably, this is the first time that we observe the possibility of aliasing directly in the discrete-time domain. Since aliasing may occur, it is good engineering practice to apply a (discrete-time) anti-aliasing filter to the signal before decimation so that aliasing is prevented. Such a filter will ideally be a perfect low-pass filter with cutoff frequency \Omega_c = \pi / k. The combination of an anti-aliasing filter and a decimator is known in the community as a downsampler, as shown below.

Figure 11.15: A downsampler consists of an anti-aliasing filter and a decimator.
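The decimation step itself can be sketched as follows (all parameters hypothetical). The test signal is well below the new Nyquist frequency f_s/(2k), so the anti-aliasing filter is not needed in this particular sketch; in general the downsampler applies it first.

```python
import numpy as np

# Decimation x'[n] = x[k*n]: keep every k-th sample. For a narrowband signal
# this matches sampling the underlying continuous-time signal at rate fs/k.
k = 3
fs, f0 = 30.0, 2.0                           # 2 Hz signal; new Nyquist is 5 Hz
n = np.arange(90)
x = np.cos(2 * np.pi * f0 * n / fs)          # sampled at fs = 30 Hz
xd = x[::k]                                  # decimated: effective rate 10 Hz
direct = np.cos(2 * np.pi * f0 * np.arange(30) / (fs / k))
gap = np.max(np.abs(xd - direct))
print(gap)                                   # ~0: matches direct sampling at fs/k
```

If the signal had energy above π/k in discrete frequency, the decimated samples would instead exhibit the aliasing described above, which is exactly what the preceding anti-aliasing filter prevents.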

11.7.2 Upsampling

Now, consider the case where we start with a sampling frequency f_s and are asked to increase the sampling frequency by an integer factor to a new value f_s' = k f_s. When this change is translated to the sampling period (as f_s = 1/T), it is easy to see that the new sampling period T' = T/k will be a fraction (1/k) of the original sampling period. Therefore, the change in sampling frequency requires the acquisition of new samples in addition to those already available under the old sampling frequency. For this reason, this process is commonly known as upsampling.

While at first sight this may imply a demand to go back to the continuous-time signal, we must recall that samples that are obtained with a sampling frequency greater than the Nyquist frequency contain all information needed to recover the continuous-time signal, and so it should be possible to infer the new samples directly from existing ones (as long as no aliasing has occurred). For this purpose, we will retrieve the concept of an expanded discrete-time signal:

x_k[n] = \begin{cases} x[n/k] & \text{if } n/k \in \mathbb{Z} \\ 0 & \text{otherwise.} \end{cases}    (11.39)

Notice that this signal x_k[n] should match the upsampled signal x'[n] for indices that are multiples of k, and our goal is to fill in the missing samples in x_k[n] currently having value zero. To see how this can be done, we appeal to the DTFT of the expanded signal: recall that the time expansion property of the DTFT gives us that X_k(\Omega) = X(k\Omega), which in practice compresses the DTFT in frequency by a factor of k and makes it 2\pi/k-periodic. In contrast, the DTFT of the upsampled signal would remain 2\pi-periodic, while simultaneously compressing each copy of the signal's spectrum by a factor of k.

Figure 11.16: The upsampling operation in time, continuous frequency, and discrete frequency.

This comparison illuminates a method to retrieve the upsampled signal x'[n] from the expanded signal x_k[n]: to apply a low-pass filter on the expanded signal x_k[n] so that one of the k copies that appear over each 2π-length region of the DTFT is preserved. Such a filter will need a cutoff frequency of ω_c = π/k. The combination of an expander and a low-pass filter is known as an upsampler, as shown below.

Figure 11.17: An upsampler consists of an expander and a low-pass filter.
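As an illustrative sketch (not code from the text), the expander-plus-filter structure can be implemented numerically; the cutoff π/k and the passband gain of k come from the discussion above, while the windowed-sinc filter length is an arbitrary choice.

```python
import numpy as np

def upsample(x, k, taps=101):
    """Upsample x by an integer factor k: zero-insertion expander
    followed by a low-pass filter with cutoff pi/k and passband gain k."""
    # Expander: x_k[n] = x[n/k] when n/k is an integer, 0 otherwise
    xk = np.zeros(len(x) * k)
    xk[::k] = x
    # Windowed-sinc low-pass filter; h[n] = sinc(n/k) has cutoff pi/k
    # and DC gain k, which restores the amplitude lost to zero-insertion
    n = np.arange(taps) - (taps - 1) // 2
    h = np.sinc(n / k) * np.hamming(taps)
    return np.convolve(xk, h, mode="same")
```

For example, upsampling a slowly varying sinusoid by k = 2 yields samples that closely track the same sinusoid on the twice-as-fine grid, away from the edge effects of the finite filter.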

11.7.3 Rational Sampling Frequency Scaling

A third case where changes in the sampling frequency can be resolved in the discrete-time domain is when the ratio between the new and old sampling frequencies provides a rational number. That is, fs' = (a/b) fs. Intuitively, one can see that such a change can be obtained by combining an upsampling by a factor of a with a downsampling by a factor of b. However, the order of these two operations is crucial.

Essentially, if downsampling is applied before upsampling, there is a chance that the downsampling anti-aliasing filter will remove a portion of the signal's spectrum that would alias but would have been shrunk into the allowable region during the upsampling, and so the potential for unnecessary distortion is introduced. In contrast, if upsampling is performed before downsampling, the cascade of the two systems will yield a sequence of two low-pass filters, and implementing only the narrower filter would provide the same output as the original cascade. This is illustrated in an example below.

Consider the case where the original sampling frequency is fs = 15 kHz, the signal's bandwidth is f0 = 5 kHz (so that the DTFT bandwidth is ω0 = 2π(5 kHz/15 kHz) = 2π/3), and we are interested in resampling to fs' = 10 kHz (i.e., the minimum allowed sampling frequency under the Nyquist criterion). In this case, applying downsampling before upsampling results in the following:

Figure 11.18: Changing the sampling frequency by downsampling followed by upsampling. The downsampling anti-aliasing filter cut off part of the original spectrum.

and so only the portion of the signal's spectrum below 2.5 kHz (π/3 in the original spectrum) survives the process. In contrast, applying upsampling before downsampling allows the entire spectrum to go through, effectively meeting the bound given by the Nyquist criterion.

Figure 11.19: Changing the sampling frequency by upsampling followed by downsampling. The entire spectrum is preserved through the process, and one of the lowpass filters is redundant.
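A minimal numerical sketch of rational resampling in the recommended order (upsample first, then downsample), assuming a single shared low-pass filter with the narrower cutoff min(π/a, π/b); the filter length is an arbitrary choice and this is not code from the text.

```python
import numpy as np

def resample_rational(x, a, b, taps=151):
    """Resample x by a/b: expand by a, apply one low-pass filter with
    cutoff min(pi/a, pi/b) (the narrower of the two cascaded filters,
    with gain a for interpolation), then keep every b-th sample."""
    xa = np.zeros(len(x) * a)
    xa[::a] = x                        # zero-insertion expander
    wc = min(np.pi / a, np.pi / b)     # shared cutoff frequency
    n = np.arange(taps) - (taps - 1) // 2
    h = a * (wc / np.pi) * np.sinc(wc * n / np.pi) * np.hamming(taps)
    y = np.convolve(xa, h, mode="same")
    return y[::b]                      # decimator
```

With a = 2 and b = 3 this takes a 15 kHz stream to 10 kHz, and a 1 kHz tone (well inside the passband) survives the process essentially unchanged.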

11.8 Discrete Time Processing of Continuous Time Signals

11.8.1 Introduction

Digital computers can process discrete time signals using extremely flexible and powerful algorithms. However, most signals of interest are continuous time signals, which is how data almost always appears in nature. Now that the theory supporting methods for generating a discrete time signal from a continuous time signal through sampling, and then perfectly reconstructing the original signal from its samples without error, has been discussed, it will be shown how this can be applied to implement continuous time, linear time invariant systems using discrete time, linear time invariant systems. This is of key importance to many modern technologies as it allows the power of digital computing to be leveraged for processing of analog signals.

11 This content is available online at <http://legacy.cnx.org/content/m47398/1.3/>.


11.8.2 Discrete Time Processing of Continuous Time Signals

11.8.2.1 Process Structure

With the aim of processing continuous time signals using a discrete time system, we will now examine one of the most common structures of digital signal processing technologies. As an overview of the approach taken, the original continuous time signal x is sampled to a discrete time signal xs in such a way that the periods of the samples spectrum Xs are as close as possible in shape to the spectrum of X. Then a discrete time, linear time invariant filter H2 is applied, which modifies the shape of the samples spectrum Xs but cannot increase the bandlimit of Xs, to produce another signal ys. This is reconstructed with a suitable reconstruction filter to produce a continuous time output signal y, thus effectively implementing some continuous time system H1. This process is illustrated in Figure 11.20, and the spectra are shown for a specific case in Figure 11.21.

Figure 11.20: A block diagram for processing of continuous time signals using discrete time systems is shown.

Further discussion about each of these steps is necessary, and we will begin by discussing the analog to digital converter, often denoted by ADC or A/D. It is clear that in order to process a continuous time signal using discrete time techniques, we must sample the signal as an initial step. This is essentially the purpose of the ADC, although there are practical issues which will be discussed later. An ADC takes a continuous time analog signal as input and produces a discrete time digital signal as output, with the ideal infinite precision case corresponding to sampling. As stated by the Nyquist-Shannon sampling theorem, in order to retain all information about the original signal, we usually wish to sample above the Nyquist frequency fs ≥ 2B, where the original signal is bandlimited to (−B, B). When it is not possible to guarantee this condition, an anti-aliasing filter should be used.


The discrete time filter is where the intentional modifications to the signal information occur. This is commonly done in digital computer software after the signal has been sampled by a hardware ADC and before it is used by a hardware DAC to construct the output. This allows the above setup to be quite flexible in the filter that it implements. If sampling above the Nyquist frequency, each period of the samples spectrum has the shape of the original signal's spectrum. Any modifications that the discrete filter makes to this shape can be passed on to a continuous time signal assuming perfect reconstruction. Consequently, the process described will implement a continuous time, linear time invariant filter. This will be explained in more mathematical detail in the subsequent section. As usual, there are, of course, practical limitations that will be discussed later.
Finally, we will discuss the digital to analog converter, often denoted by DAC or D/A. Since continuous time filters have continuous time inputs and continuous time outputs, we must construct a continuous time signal from our filtered discrete time signal. Assuming that we have sampled a bandlimited signal at a sufficiently high rate, in the ideal case this would be done using perfect reconstruction through the Whittaker-Shannon interpolation formula. However, there are, once again, practical issues that prevent this from happening that will be discussed later.

Figure 11.21: Spectra are shown in black for each step in implementing a continuous time filter using a discrete time filter for a specific signal. The filter frequency responses are shown in blue, and both are meant to have maximum value 1 in spite of the vertical scale that is meant only for the signal spectra. Ideal ADCs and DACs are assumed.

11.8.2.2 Discrete Time Filter

With some initial discussion of the process illustrated in Figure 11.20 complete, the relationship between the continuous time, linear time invariant filter H1 and the discrete time, linear time invariant filter H2 can be explored. We will assume the use of ideal, infinite precision ADCs and DACs that perform sampling and perfect reconstruction, respectively, using a sampling rate fs = 1/Ts ≥ 2B where the input signal x is bandlimited to (−B, B). Note that these arguments fail if this condition is not met and aliasing occurs. In that case, preapplication of an anti-aliasing filter is necessary for these arguments to hold.

Recall that we have already calculated the spectrum Xs of the samples xs given an input x with spectrum X as

Xs(ω) = (1/Ts) Σ_{k=−∞}^{∞} X((ω − 2πk)/(2πTs)).          (11.40)

Likewise, the spectrum Ys of the samples ys given an output y with spectrum Y is

Ys(ω) = (1/Ts) Σ_{k=−∞}^{∞} Y((ω − 2πk)/(2πTs)).          (11.41)

From the knowledge that ys = (H1 x)s = H2(xs), it follows that

Σ_{k=−∞}^{∞} H1((ω − 2πk)/(2πTs)) X((ω − 2πk)/(2πTs)) = H2(ω) Σ_{k=−∞}^{∞} X((ω − 2πk)/(2πTs)).          (11.42)

Because H1 is bandlimited to (−1/(2Ts), 1/(2Ts)), we may conclude that

H2(ω) = Σ_{k=−∞}^{∞} H1((ω − 2πk)/(2πTs)) (u(ω − (2k − 1)π) − u(ω − (2k + 1)π)).          (11.43)

More simply stated, H2 is 2π-periodic and H2(ω) = H1(ω/(2πTs)) for ω ∈ [−π, π).

Given a specific continuous time, linear time invariant filter H1, the above equation solves the system design problem provided we know how to implement H2. The filter H2 must be chosen such that it has a frequency response where each period has the same shape as the frequency response of H1 on (−1/(2Ts), 1/(2Ts)). This is illustrated in the frequency responses shown in Figure 11.21.

We might also want to consider the system analysis problem in which a specific discrete time, linear time invariant filter H2 is given, and we wish to describe the filter H1. There are many such filters, but we can describe their frequency responses on (−fs/2, fs/2) using the above equation. Isolating one period of H2(ω) yields the conclusion that H1(f) = H2(2πf/fs) for f ∈ (−fs/2, fs/2). Because x was assumed to be bandlimited to (−fs/2, fs/2), the value of the frequency response elsewhere is irrelevant.
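The analysis relation H1(f) = H2(2πf/fs) can be checked numerically. The two-tap averaging filter and the sampling rate below are assumptions chosen for illustration, not values from the text.

```python
import numpy as np

fs = 8000.0                      # assumed sampling rate in Hz
h2 = np.array([0.5, 0.5])        # impulse response of H2 (moving average)

def H2(omega):
    """DTFT of h2 at discrete frequency omega (radians per sample)."""
    n = np.arange(len(h2))
    return np.sum(h2 * np.exp(-1j * omega * n))

def H1(f):
    """Equivalent continuous-time response for f in (-fs/2, fs/2)."""
    return H2(2 * np.pi * f / fs)

# The moving average passes DC with gain 1 and rejects f = fs/2 entirely.
print(abs(H1(0.0)), abs(H1(fs / 2)))
```

Evaluating H1 at a few frequencies shows the expected lowpass behavior of the discrete average, now expressed on a continuous-frequency axis.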

11.8.3 Practical Considerations

As mentioned before, there are several practical considerations that need to be addressed at each stage of the process shown in Figure 11.20. Some of these will be briefly addressed here, and a more complete model of how discrete time processing of continuous time signals is implemented appears in Figure 11.22.

Figure 11.22: A more complete model of how discrete time processing of continuous time signals is implemented in practice. Notice the addition of anti-aliasing and anti-imaging filters to promote input and output bandlimitedness. The ADC is shown to perform sampling with quantization. The digital filter is further specified to be causal. The DAC is shown to perform imperfect reconstruction, a zero order hold in this case.

11.8.3.1 Anti-Aliasing Filter

In reality, we cannot typically guarantee that the input signal will have a specific bandlimit, and sufficiently high sampling rates cannot necessarily be produced. Since it is imperative that the higher frequency components not be allowed to masquerade as lower frequency components through aliasing, anti-aliasing filters with cutoff frequency less than or equal to fs/2 must be used before the signal is fed into the ADC. The block diagram in Figure 11.22 reflects this addition.


As described in the previous section, an ideal lowpass lter removing all energy at frequencies above

fs /2

would be optimal. Of course, this is not achievable, so approximations of the ideal lowpass lter with low
gain above

fs /2

must be accepted. This means that some aliasing is inevitable, but it can be reduced to a

mostly insignicant level.

11.8.3.2 Signal Quantization

In our preceding discussion of discrete time processing of continuous time signals, we had assumed an ideal case in which the ADC performs sampling exactly. However, while an ADC does convert a continuous time signal to a discrete time signal, it also must convert analog values to digital values for use in a digital logic device, a phenomenon called quantization. The ADC subsystem of the block diagram in Figure 11.22 reflects this addition.

The data obtained by the ADC must be stored in finitely many bits inside a digital logic device. Thus, there are only finitely many values that a digital sample can take, specifically 2^N where N is the number of bits, while there are uncountably many values an analog sample can take. Hence, something must be lost in the quantization process. The result is that quantization limits both the range and precision of the output of the ADC. Both are finite, and improving one at a constant number of bits requires sacrificing quality in the other.
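The range/precision trade-off can be seen in a toy uniform quantizer, a sketch of an idealized ADC rather than the text's own model; the mid-rise convention and the input range are assumptions made for illustration.

```python
import numpy as np

def quantize(x, n_bits, x_range=1.0):
    """Uniform mid-rise quantizer sketch: clip x to the finite range
    and round to one of 2**n_bits levels (finite precision)."""
    levels = 2 ** n_bits
    step = 2 * x_range / levels                  # quantization step size
    x = np.clip(x, -x_range, x_range - step)     # finite range
    return step * np.floor(x / step) + step / 2  # finite precision

x = np.linspace(-1, 1, 1000)
xq = quantize(x, n_bits=4)
print(len(np.unique(xq)))   # at most 2**4 = 16 distinct output values
```

Within the clipped range, the error never exceeds half a step; shrinking the step (more precision) at a fixed number of bits would shrink the representable range, and vice versa.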

11.8.3.3 Filter Implementability

In real world circumstances, if the input signal is a function of time, the future values of the signal cannot be used to calculate the output. Thus, the digital filter H2 and the overall system H1 must be causal. The filter annotation in Figure 11.22 reflects this addition. If the desired system is not causal but has impulse response equal to zero before some time t0, a delay can be introduced to make it causal. However, if this delay is excessive or the impulse response has infinite length, a windowing scheme becomes necessary in order to practically solve the problem. Multiplying by a window to decrease the length of the impulse response can reduce the necessary delay and decrease computational requirements.

Take, for instance, the case of the ideal lowpass filter. It is acausal and infinite in length in both directions. Thus, we must satisfy ourselves with an approximation. One might suggest that these approximations could be achieved by truncating the sinc impulse response of the lowpass filter at one of its zeros, effectively windowing it with a rectangular pulse. However, doing so would produce poor results in the frequency domain as the resulting convolution would significantly spread the signal energy. Other windowing functions, of which there are many, spread the signal less in the frequency domain and are thus much more useful for producing these approximations.
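The effect of the window choice can be quantified by comparing stopband leakage for a rectangular truncation versus a smoother window (Hamming here, one common choice; the cutoff, lengths, and stopband edge are arbitrary illustration values, not from the text).

```python
import numpy as np

taps = 101
wc = np.pi / 4                                   # cutoff frequency
n = np.arange(taps) - (taps - 1) // 2
ideal = (wc / np.pi) * np.sinc(wc * n / np.pi)   # truncated ideal lowpass

h_rect = ideal                                   # rectangular window
h_hamm = ideal * np.hamming(taps)                # Hamming window

def stopband_peak(h, w_start=1.0):
    """Largest frequency-response magnitude above w_start rad/sample."""
    w = np.linspace(w_start, np.pi, 2000)
    E = np.exp(-1j * np.outer(w, np.arange(len(h))))
    return np.max(np.abs(E @ h))

# The Hamming-windowed filter leaks far less energy into the stopband.
print(stopband_peak(h_rect), stopband_peak(h_hamm))
```

The rectangular truncation shows the familiar Gibbs ripples near the cutoff, while the Hamming-windowed version trades a wider transition band for much deeper stopband attenuation.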

11.8.3.4 Anti-Imaging Filter

In our preceding discussion of discrete time processing of continuous time signals, we had assumed an ideal case in which the DAC performs perfect reconstruction. However, when considering practical matters, it is important to remember that the sinc function, which is used for Whittaker-Shannon interpolation, is infinite in length and acausal. Hence, it would be impossible for a DAC to implement perfect reconstruction. Instead, the DAC implements a causal zero order hold or other simple reconstruction scheme with respect to the sampling rate fs used by the ADC. However, doing so will result in a function that is not bandlimited to (−fs/2, fs/2). Therefore, an additional lowpass filter, called an anti-imaging filter, must be applied to the output. The process illustrated in Figure 11.22 reflects these additions. The anti-imaging filter attempts to bandlimit the signal to (−fs/2, fs/2), so an ideal lowpass filter would be optimal. However, as has already been stated, this is not possible. Therefore, approximations of the ideal lowpass filter with low gain above fs/2 must be accepted. The anti-imaging filter typically has the same characteristics as the anti-aliasing filter.

11.8.4 Discrete Time Processing of Continuous Time Signals Summary

As has been shown, sampling and reconstruction can be used to implement continuous time systems using discrete time systems, which is very powerful due to the versatility, flexibility, and speed of digital computers. However, there are a large number of practical considerations that must be taken into account when attempting to accomplish this, including quantization noise and anti-aliasing in the analog to digital converter, filter implementability in the discrete time filter, and reconstruction windowing and associated issues in the digital to analog converter. Many modern technologies address these issues and make use of this process.


Appendix: Mathematical Pot-Pourri

12.1 Basic Linear Algebra

This brief tutorial on some key terms in linear algebra is not meant to replace or be very helpful to those of you trying to gain a deep insight into linear algebra. Rather, this brief introduction to some of the terms and ideas of linear algebra is meant to provide a little background to those trying to get a better understanding of, or learn about, eigenvectors and eigenfunctions, which play a big role in deriving a few important ideas on Signals and Systems. The goal of these concepts will be to provide a background for signal decomposition and to lead up to the derivation of the Fourier Series.

12.1.1 Linear Independence

A set of vectors {x1, x2, ..., xk}, xi ∈ C^n, is linearly independent if none of them can be written as a linear combination of the others.

Definition 12.1: Linearly Independent
For a given set of vectors, {x1, x2, ..., xn}, they are linearly independent if

c1 x1 + c2 x2 + ··· + cn xn = 0

only when c1 = c2 = ··· = cn = 0.

Example
We are given the following two vectors:

x1 = [3, 2]^T,  x2 = [−6, −4]^T

These are not linearly independent, as proven by the following statement, which, by inspection, can be seen to not adhere to the definition of linear independence stated above.

(x2 = −2x1) ⇒ (2x1 + x2 = 0)

Another approach to reveal the vectors' independence is by graphing the vectors. Looking at these two vectors geometrically (as in Figure 12.1), one can again prove that these vectors are not linearly independent.

1 This content is available online at <http://legacy.cnx.org/content/m10734/2.7/>.
2 "Fourier Series: Eigenfunction Approach" <http://legacy.cnx.org/content/m10496/latest/>

Figure 12.1: Graphical representation of two vectors that are not linearly independent.

Example 12.1
We are given the following two vectors:

x1 = [3, 2]^T,  x2 = [1, 2]^T

These are linearly independent since

c1 x1 = −(c2 x2)

only if c1 = c2 = 0. Based on the definition, this proof shows that these vectors are indeed linearly independent. Again, we could also graph these two vectors (see Figure 12.2) to check for linear independence.

Figure 12.2: Graphical representation of two vectors that are linearly independent.

Exercise 12.1.1   (Solution on p. 187.)
Are {x1, x2, x3} linearly independent?

x1 = [3, 2]^T,  x2 = [1, 2]^T,  x3 = [−1, 0]^T

As we have seen in the two above examples, oftentimes the independence of vectors can be easily seen through a graph. However, this may not be as easy when we are given three or more vectors. Can you easily tell whether or not these vectors are independent from Figure 12.3? Probably not, which is why the method used in the above solution becomes important.

Figure 12.3: Plot of the three vectors. It can be shown that a linear combination exists among the three, and therefore they are not linearly independent.

Hint: A set of m vectors in C^n cannot be linearly independent if m > n.
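A mechanical check (not part of the original text): k vectors are linearly independent exactly when the matrix having them as columns has rank k. The vector values follow the examples above as reconstructed here, so treat them as illustrative.

```python
import numpy as np

x1, x2, x3 = np.array([3, 2]), np.array([1, 2]), np.array([-1, 0])

A = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(A))   # rank 2 < 3 columns: dependent

B = np.column_stack([x1, x2])
print(np.linalg.matrix_rank(B))   # rank 2 = 2 columns: independent
```

This also agrees with the hint: three vectors in C^2 can never be independent, since the rank of a 2 x 3 matrix is at most 2.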

12.1.2 Span

Definition 12.2: Span
The span of a set of vectors {x1, x2, ..., xk} is the set of vectors that can be written as a linear combination of {x1, x2, ..., xk}:

span({x1, ..., xk}) = {α1 x1 + α2 x2 + ··· + αk xk, αi ∈ C}

Example
Given the vector

x1 = [3, 2]^T

the span of x1 is a line.

Example
Given the vectors

x1 = [3, 2]^T,  x2 = [1, 2]^T

the span of these vectors is C^2.

3 "Subspaces", Definition 2: "Span" <http://legacy.cnx.org/content/m10297/latest/#defn2>

12.1.3 Basis

Definition 12.3: Basis
A basis for C^n is a set of vectors that: (1) spans C^n and (2) is linearly independent.

Clearly, any set of n linearly independent vectors is a basis for C^n.

Example 12.2
We are given the following vector

e_i = [0, ..., 0, 1, 0, ..., 0]^T

where the 1 is always in the ith place and the remaining values are zero. Then the basis for C^n is

{e_i, i = [1, 2, ..., n]}

note: {e_i, i = [1, 2, ..., n]} is called the standard basis.

Example 12.3

h1 = [1, 1]^T,  h2 = [1, −1]^T

{h1, h2} is a basis for C^2.

Figure 12.4: Plot of basis for C^2

If {b1, ..., bn} is a basis for C^n, then we can express any x ∈ C^n as a linear combination of the bi's:

x = α1 b1 + α2 b2 + ··· + αn bn,  αi ∈ C

Example 12.4
Given the following vector,

x = [1, 2]^T

writing x in terms of {e1, e2} gives us

x = e1 + 2e2

Exercise 12.1.2   (Solution on p. 187.)
Try and write x in terms of {h1, h2} (defined in the previous example).

In the two basis examples above, x is the same vector in both cases, but we can express it in many different ways (we give only two out of many, many possibilities). You can take this even further by extending this idea of a basis to function spaces.

note: As mentioned in the introduction, these concepts of linear algebra will help prepare you to understand the Fourier Series, which tells us that we can express periodic functions, f(t), in terms of their basis functions, e^{jω0 nt}.

4 "Fourier Series: Eigenfunction Approach" <http://legacy.cnx.org/content/m10496/latest/>

Khan Lecture on Basis of a Subspace

Figure 12.5: Video from Khan Academy, Basis of a Subspace - 20 min.
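Finding the coefficients of a vector in a given basis amounts to solving a small linear system: stack the basis vectors as columns and solve for the weights. This sketch uses the values from the examples above; treat the numbers as illustrative.

```python
import numpy as np

h1, h2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
x = np.array([1.0, 2.0])

B = np.column_stack([h1, h2])
alpha = np.linalg.solve(B, x)   # coordinates of x in the {h1, h2} basis
print(alpha)                    # alpha[0]*h1 + alpha[1]*h2 reconstructs x
```

The same vector x therefore has different coordinate lists in different bases, which is exactly the point made in the text.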

12.2 Linear Constant Coefficient Difference Equations

12.2.1 Introduction: Difference Equations

In our study of signals and systems, it will often be useful to describe systems using equations involving the rate of change in some quantity. In discrete time, this is modeled through difference equations, which are a specific type of recurrence relation. For instance, recall that the funds in an account with discretely compounded interest rate r will increase by r times the previous balance. Thus, a discretely compounded interest system is described by the first order difference equation shown in (12.1).

y(n) = (1 + r) y(n − 1)          (12.1)

Given a sufficiently descriptive set of initial conditions or boundary conditions, if there is a solution to the difference equation, that solution is unique and describes the behavior of the system. Of course, the results are only accurate to the degree that the model mirrors reality.
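Equation (12.1) can be stepped directly; the rate and starting balance below are arbitrary illustration values, not from the text.

```python
def balance(y0, r, months):
    """Iterate the difference equation y(n) = (1 + r) * y(n - 1)
    for the given number of months, starting from y(0) = y0."""
    y = y0
    for _ in range(months):
        y = (1 + r) * y
    return y

print(balance(100.0, 0.01, 12))   # one year of 1% monthly interest
```

Iterating the recurrence agrees with the closed form y(n) = (1 + r)^n y(0), as expected for this first order equation.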

12.2.2 Linear Constant Coefficient Difference Equations

An important subclass of difference equations is the set of linear constant coefficient difference equations. These equations are of the form

C y(n) = f(n)          (12.2)

where C is a difference operator of the form

C = c_N D^N + c_{N−1} D^{N−1} + ... + c_1 D + c_0          (12.3)

in which D is the first difference operator

D(y(n)) = y(n) − y(n − 1).          (12.4)

Note that operators of this type satisfy the linearity conditions, and c_0, ..., c_N are real constants. However, (12.2) can easily be written as a linear constant coefficient recurrence equation without difference operators. Conversely, linear constant coefficient recurrence equations can also be written in the form of a difference equation, so the two types of equations are different representations of the same relationship. Although we will still call them linear constant coefficient difference equations in this course, we typically will not write them using difference operators. Instead, we will write them in the simpler recurrence relation form

Σ_{k=0}^{N} a_k y(n − k) = Σ_{k=0}^{M} b_k x(n − k)          (12.5)

where x is the input to the system and y is the output. This can be rearranged to find y(n) as

y(n) = (1/a_0) ( − Σ_{k=1}^{N} a_k y(n − k) + Σ_{k=0}^{M} b_k x(n − k) )          (12.6)

The forms provided by (12.5) and (12.6) will be used in the remainder of this course. A similar concept for the continuous time setting, differential equations, is discussed in the chapter on time domain analysis of continuous time systems. There are many parallels between the discussion of linear constant coefficient ordinary differential equations and linear constant coefficient difference equations.

6 This content is available online at <http://legacy.cnx.org/content/m12325/1.5/>.
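The recurrence form (12.6) translates directly into code. This sketch assumes zero initial conditions (initial rest), which is one common convention rather than anything mandated by the text.

```python
def lccde(a, b, x):
    """Evaluate sum(a[k] y[n-k]) = sum(b[k] x[n-k]) for an input list x,
    assuming y(n) = 0 for n < 0 (initial rest)."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        # Feed-forward part: current and past inputs
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        # Feedback part: past outputs, moved to the right-hand side
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# The accumulator y(n) - y(n-1) = x(n) turns an impulse into a step.
print(lccde([1.0, -1.0], [1.0], [1.0, 0.0, 0.0, 0.0]))
```

For example, choosing a = [1, −1] and b = [1] implements a running sum, whose impulse response is the unit step.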

12.2.3 Applications of Difference Equations

Difference equations can be used to describe many useful digital filters as described in the chapter discussing the z-transform. An additional illustrative example is provided here.

Example 12.5
Recall that the Fibonacci sequence describes a (very unrealistic) model of what happens when a pair of rabbits get left alone in a black box... The assumptions are that a pair of rabbits never die and produce a pair of offspring every month starting on their second month of life. This system is defined by the recursion relation for the number of rabbit pairs y(n) at month n

y(n) = y(n − 1) + y(n − 2)          (12.7)

with the initial conditions y(0) = 0 and y(1) = 1. The result is a very fast growth in the sequence. This is why we do not open black boxes.
12.2.4 Linear Constant Coefficient Difference Equations Summary

Difference equations are an important mathematical tool for modeling discrete time systems. An important subclass of these is the class of linear constant coefficient difference equations. Linear constant coefficient difference equations are often particularly easy to solve, as will be described in the module on solutions to linear constant coefficient difference equations, and are useful in describing a wide range of situations that arise in electrical engineering and in other fields.

12.3 Solving Linear Constant Coefficient Difference Equations

12.3.1 Introduction

The approach to solving linear constant coefficient difference equations is to find the general form of all possible solutions to the equation and then apply a number of conditions to find the appropriate solution. The two main types of problems are initial value problems, which involve constraints on the solution at several consecutive points, and boundary value problems, which involve constraints on the solution at nonconsecutive points.

The number of initial conditions needed for an Nth order difference equation, which is the order of the highest order difference or the largest delay parameter of the output in the equation, is N, and a unique solution is always guaranteed if these are supplied. Boundary value problems can be slightly more complicated and will not necessarily have a unique solution or even a solution at all for a given set of conditions. Thus, this section will focus exclusively on initial value problems.

7 This content is available online at <http://legacy.cnx.org/content/m12326/1.6/>.

12.3.2 Solving Linear Constant Coefficient Difference Equations

Consider some linear constant coefficient difference equation given by A y(n) = f(n), in which A is a difference operator of the form

A = a_N D^N + a_{N−1} D^{N−1} + ... + a_1 D + a_0          (12.8)

where D is the first difference operator

D(y(n)) = y(n) − y(n − 1).          (12.9)

Let y_h(n) and y_p(n) be two functions such that A y_h(n) = 0 and A y_p(n) = f(n). By the linearity of A, note that A(y_h(n) + y_p(n)) = 0 + f(n) = f(n). Thus, the form of the general solution y_g(n) to any linear constant coefficient difference equation is the sum of a homogeneous solution y_h(n) to the equation A y(n) = 0 and a particular solution y_p(n) that is specific to the forcing function f(n).

We wish to determine the forms of the homogeneous and nonhomogeneous solutions in full generality in order to avoid incorrectly restricting the form of the solution before applying any conditions. Otherwise, a valid set of initial or boundary conditions might appear to have no corresponding solution trajectory. The following sections discuss how to accomplish this for linear constant coefficient difference equations.

12.3.2.1 Finding the Homogeneous Solution

In order to find the homogeneous solution to a difference equation described by the recurrence relation Σ_{k=0}^{N} a_k y(n − k) = f(n), consider the difference equation Σ_{k=0}^{N} a_k y(n − k) = 0. We know that the solutions have the form c λ^n for some complex constants c, λ. Since Σ_{k=0}^{N} a_k c λ^{n−k} = 0 for a solution, it follows that

c λ^{n−N} Σ_{k=0}^{N} a_k λ^{N−k} = 0          (12.10)

so it also follows that

a_0 λ^N + ... + a_N = 0.          (12.11)

Therefore, the solution exponentials are the roots of the above polynomial, called the characteristic polynomial. For equations of order two or more, there will be several roots. If all of the roots are distinct, then the general form of the homogeneous solution is simply

y_h(n) = c_1 λ_1^n + ... + c_N λ_N^n.          (12.12)

If a root has multiplicity that is greater than one, the repeated solutions must be multiplied by each power of n from 0 to one less than the root multiplicity (in order to ensure linearly independent solutions). For instance, if λ_1 had a multiplicity of 2 and λ_2 had multiplicity 3, the homogeneous solution would be

y_h(n) = c_1 λ_1^n + c_2 n λ_1^n + c_3 λ_2^n + c_4 n λ_2^n + c_5 n^2 λ_2^n.          (12.13)

Example 12.6
Recall that the Fibonacci sequence describes a (very unrealistic) model of what happens when a pair of rabbits get left alone in a black box... The assumptions are that a pair of rabbits never die and produce a pair of offspring every month starting on their second month of life. This system is defined by the recursion relation for the number of rabbit pairs y(n) at month n

y(n) − y(n − 1) − y(n − 2) = 0          (12.14)

with the initial conditions y(0) = 0 and y(1) = 1.

Note that the forcing function is zero, so only the homogeneous solution is needed. It is easy to see that the characteristic polynomial is λ^2 − λ − 1 = 0, so there are two roots with multiplicity one. These are λ_1 = (1 + √5)/2 and λ_2 = (1 − √5)/2. Thus, the solution is of the form

y(n) = c_1 ((1 + √5)/2)^n + c_2 ((1 − √5)/2)^n.          (12.15)

Using the initial conditions, we determine that

c_1 = √5/5          (12.16)

and

c_2 = −√5/5.          (12.17)

Hence, the Fibonacci sequence is given by

y(n) = (√5/5) ((1 + √5)/2)^n − (√5/5) ((1 − √5)/2)^n.          (12.18)
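The closed form (12.18) can be checked numerically against the recursion (12.14) with the stated initial conditions:

```python
import math

def fib_closed(n):
    """Closed-form Fibonacci value from equation (12.18)."""
    s5 = math.sqrt(5)
    return (s5 / 5) * ((1 + s5) / 2) ** n - (s5 / 5) * ((1 - s5) / 2) ** n

# Build the sequence from the recursion y(n) = y(n-1) + y(n-2)
fib = [0, 1]
for n in range(2, 15):
    fib.append(fib[-1] + fib[-2])

# Rounding absorbs tiny floating-point error in the closed form
print([round(fib_closed(n)) for n in range(15)] == fib)
```

The agreement confirms the constants c_1 and c_2 found from y(0) = 0 and y(1) = 1.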

12.3.2.2 Finding the Particular Solution

Finding the particular solution is a slightly more complicated task than finding the homogeneous solution. It can be found through convolution of the input with the unit impulse response once the unit impulse response is known. Finding the particular solution to a difference equation is discussed further in the chapter concerning the z-transform, which greatly simplifies the procedure for solving linear constant coefficient difference equations using frequency domain tools.

Example 12.7
Consider the following difference equation describing a system with feedback

y(n) − a y(n − 1) = x(n).          (12.19)

In order to find the homogeneous solution, consider the difference equation

y(n) − a y(n − 1) = 0.          (12.20)

It is easy to see that the characteristic polynomial is λ − a = 0, so λ = a is the only root. Thus the homogeneous solution is of the form

y_h(n) = c_1 a^n.          (12.21)

In order to find the particular solution, consider the output for the unit impulse case x(n) = δ(n)

y(n) − a y(n − 1) = δ(n).          (12.22)

By inspection, it is clear that the impulse response is a^n u(n). Hence, the particular solution for a given x(n) is

y_p(n) = x(n) ∗ (a^n u(n)).          (12.23)

Therefore, the general solution is

y_g(n) = y_h(n) + y_p(n) = c_1 a^n + x(n) ∗ (a^n u(n)).          (12.24)

Initial conditions and a specific input can further tailor this solution to a specific situation.
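The convolution form of the particular solution (12.23) can be verified by checking that it satisfies (12.19) sample by sample; the value of a and the random input are arbitrary illustration choices.

```python
import numpy as np

a = 0.5
N = 20
h = a ** np.arange(N)                  # impulse response a^n u(n), truncated
x = np.random.default_rng(0).standard_normal(N)

# Particular solution y_p(n) = (x * a^n u(n))(n), kept to the first N samples
yp = np.convolve(x, h)[:N]

# Verify y_p(n) - a y_p(n-1) = x(n), taking y_p(-1) = 0
lhs = yp - a * np.concatenate(([0.0], yp[:-1]))
print(np.allclose(lhs, x))
```

The check holds for any input, since yp(n) = Σ_{k=0}^{n} x(k) a^{n−k} telescopes under the feedback relation.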

12.3.3 Solving Dierence Equations Summary


Linear constant coefficient difference equations are useful for modeling a wide variety of discrete time systems. The approach to solving them is to find the general form of all possible solutions to the equation and then apply a number of conditions to find the appropriate solution. This is done by finding the homogeneous solution to the difference equation that does not depend on the forcing function input and a particular solution to the difference equation that does depend on the forcing function input.


Solutions to Exercises in Chapter 12


Solution to Exercise 12.1.1 (p. 178)
By playing around with the vectors and doing a little trial and error, we will discover the following relationship:

x₁ − x₂ + 2x₃ = 0

Thus we have found a linear combination of these three vectors that equals zero without setting the coefficients equal to zero. Therefore, these vectors are not linearly independent!

Solution to Exercise 12.1.2 (p. 181)

x = (1/2) h₁ + (3/2) h₂


Appendix: Viewing Interactive Content


13.1 Viewing Embedded LabVIEW Content in Connexions

13.1.1 Introduction
In order to view LabVIEW content embedded in Connexions modules, you must install and enable the LabVIEW 8.0 and 8.5 Local VI Execution Browser Plug-in for Windows. Step-by-step installation instructions
are given below. Once installation is complete, the placeholder box at the bottom of this module should
display a demo LabVIEW virtual instrument (VI).

13.1.2 Installing the LabVIEW Run-time Engine on Microsoft Windows


1. Download and install the LabVIEW 8.0 Runtime Engine, found at: http://zone.ni.com/devzone/cda/tut/p/id/4346 .
2. Download and install the LabVIEW 8.5 Runtime Engine, found at: http://zone.ni.com/devzone/cda/tut/p/id/6633 .
3. Download the LVBrowserPlugin.ini file from http://zone.ni.com/devzone/cda/tut/p/id/8288 , and place it in the My Documents\LabVIEW Data folder. (You may have to create this folder if it doesn't already exist.)
4. Restart your computer to complete the installation.
5. The placeholder box at the bottom of this module should now display a demo LabVIEW virtual instrument (VI).

13.1.3 Example Virtual Instrument

This media object is a LabVIEW VI. Please view or download it at <DFD_Utility.llb>

Figure 13.1: Digital filter design LabVIEW virtual instrument from http://cnx.org/content/m13115/latest/ .

13.2 Getting Started With Mathematica

13.2.1 What is Mathematica?


Mathematica is a computational software program used in technical fields. It is developed by Wolfram Research. Mathematica makes it easy to visualize data and create GUIs in only a few lines of code.

13.2.2 How can I run, create, and find Mathematica files?


Run
The free CDF Player⁷ is available for running non-commercial Mathematica programs. The option exists of downloading source files and running them on your computer, but the CDF Player comes with a plug-in for viewing dynamic content online in your web browser!

1 This content is available online at <http://legacy.cnx.org/content/m34460/1.5/>.
2 http://zone.ni.com/devzone/cda/tut/p/id/4346
3 http://zone.ni.com/devzone/cda/tut/p/id/6633
4 http://zone.ni.com/devzone/cda/tut/p/id/8288
5 http://cnx.org/content/m13115/latest/
6 This content is available online at <http://legacy.cnx.org/content/m36728/1.13/>.
7 http://www.wolfram.com/cdf-player/

Create
Mathematica 8 is available for purchase⁸ from Wolfram. Many universities (including Rice) and companies already have a Mathematica license. Wolfram has a free, save-disabled 15-day trial version⁹ of Mathematica.

Find
Wolfram has thousands of Mathematica programs (including source code) available at the Wolfram Demonstrations Project¹⁰. Anyone can create and submit a Demonstration. Also, many other websites (including Connexions) have a lot of Mathematica content.

13.2.3 What do I need to run interactive content?


Mathematica 8 is supported on Linux, Microsoft Windows, Mac OS X, and Solaris.

Mathematica's free

CDF-player is available for Windows and Mac OS X, and is in development for Linux; the CDF-Player
plugin is available for IE, Firefox, Chrome, Safari, and Opera.

13.2.4 How can I upload a Mathematica file to a Connexions module?

Go to the Files tab at the top of the module and upload your .cdf file, along with an (optional) screenshot of the file in use. In order to generate a clean bracket-less screenshot, you should do the following:

Open your .cdf in Mathematica and left click on the bracket surrounding the manipulate command.
Click on Cell->Convert To->Bitmap.
Then click on File->Save Selection As, and save the image file in your desired image format.

Embed the files into the module in any way you like. Some tags you may find helpful include image, figure, download, and link (if linking to a .cdf file on another website). The best method is to create an interactive figure, and include a fallback png image of the cdf file should the CDF image not render properly. See the interactive demo/image below.

Convolution Demo

<figure id="demoonline">
<media id="CNXdemoonline" alt="timeshiftDemo">
<image mime-type="image/png" src="Convolutiondisplay-4.cdf" thumbnail="Convolution4.0Display.png" width
<object width="500" height="500" src="Convolutiondisplay-4.cdf" mime-type="application/vnd.wolfram.cdf"
<image mime-type="application/postscript" for="pdf" src="Convolution4.0Display.png" width="400"/>
</media>
<caption>Interact (when online) with a Mathematica CDF demonstrating Convolution. To Download, right-cl
</figure>
8 http://www.wolfram.com/products/mathematica/purchase.html
9 http://www.wolfram.com/products/mathematica/experience/request.cgi
10 http://demonstrations.wolfram.com/index.html


Figure 13.2: Interact (when online) with a Mathematica CDF demonstrating Convolution. To Download, right-click and save target as .cdf.

Figure 13.3: Alternatively, this is how it looks when you use a thumbnail link to a live online demo. Click on the above thumbnail image (when online) to view an interactive Mathematica Player demonstrating Convolution.

13.2.5 How can I learn Mathematica?


Open Mathematica and go to the Getting Started section of the "Welcome to Mathematica" screen, or check out Help: Documentation Center.

The Mathematica Learning Center¹¹ has lots of screencasts, how-tos, and tutorials.

When troubleshooting, the error messages are often unhelpful, so it's best to evaluate often so the problem can be easily located. Search engines like Google are useful when you're looking for an explanation of specific error messages.

11 http://www.wolfram.com/learningcenter/


GLOSSARY

193

Glossary
B Basis
A basis for Cⁿ is a set of vectors that: (1) spans Cⁿ and (2) is linearly independent.

L Linearly Independent
For a given set of vectors, {x₁, x₂, . . . , xₙ}, they are linearly independent if

c₁x₁ + c₂x₂ + · · · + cₙxₙ = 0

only when

c₁ = c₂ = · · · = cₙ = 0

Example: We are given the following two vectors:

x₁ = (3, 2)
x₂ = (−6, −4)

These are not linearly independent, as proven by the following statement, which, by inspection, can be seen to not adhere to the definition of linear independence stated above.

(x₂ = −2x₁) ⇒ (2x₁ + x₂ = 0)

Another approach to reveal a vector's independence is by graphing the vectors. Looking at these two vectors geometrically (as in the figure at http://legacy.cnx.org/content/m10734/latest/¹²), one can again prove that these vectors are not linearly independent.

S Span
The span¹³ of a set of vectors {x₁, x₂, . . . , xₖ} is the set of vectors that can be written as a linear combination of {x₁, x₂, . . . , xₖ}:

span({x₁, . . . , xₖ}) = {α₁x₁ + α₂x₂ + · · · + αₖxₖ, αᵢ ∈ Cⁿ}

Example: Given the vector

x₁ = (3, 2)

the span of x₁ is a line.

Example: Given the vectors

x₁ = (3, 2)
x₂ = (1, 2)

the span of these vectors is C².

12 http://legacy.cnx.org/content/m10734/latest/
13 http://legacy.cnx.org/content/m10734/latest/
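The linear-independence and span claims in the glossary examples above can be checked numerically with a matrix-rank computation. The sketch below uses numpy (an assumption; any linear algebra package would do) with the same vectors as the glossary examples.

```python
# Rank checks for the glossary examples: {x1, x2} with x2 = -2*x1 is
# linearly dependent (rank 1); {(3,2), (1,2)} spans the plane (rank 2).
import numpy as np

x1 = np.array([3.0, 2.0])
x2 = -2 * x1                              # the dependent vector (-6, -4)
assert np.linalg.matrix_rank(np.column_stack([x1, x2])) == 1
assert np.allclose(2 * x1 + x2, 0)        # the dependence relation

y1 = np.array([3.0, 2.0])
y2 = np.array([1.0, 2.0])
assert np.linalg.matrix_rank(np.column_stack([y1, y2])) == 2
```

A rank equal to the number of vectors means they are linearly independent; a full-rank pair in two dimensions spans all of C² (here checked over the reals).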

INDEX

195

Index of Keywords and Terms


Keywords are listed by the section with that keyword (page numbers are in parentheses). Ex. apples, 1.1 (1). Keywords do not necessarily appear in the text of the page. They are merely associated with that section. Terms are referenced by the page they appear on. Ex. apples, 1.

A acausal, 11.8(171)
ADC, 11.8(171)
algebra, 1.2(5)
alias, 11.5(160), 11.6(163)
aliasing, 11.5(160), 11.6(163), 11.8(171)
analog, 2.1(17), 18, 11.8(171)
analysis, 5.4(69)
Anti-Aliasing, 11.6(163), 11.8(171)
anti-imaging, 11.8(171)
anticausal, 2.1(17), 11.8(171)
aperiodic, 2.1(17), 66, 6.1(75), 75, 9.1(121), 123

B bandlimited, 11.8(171)
bases, 12.1(177)
basis, 12.1(177), 180, 180, 180
BIBO stability, 3.2(39), 4.5(62), 8.5(118)
BIBO stable, 3.2(39)
bits, 11.8(171)
block diagram, 3.1(37)
bounded input bounded output stable, 3.2(39)
butterfly, 141

C cardinal, 11.3(152)
cascade, 3.1(37), 3.3(43)
causal, 2.1(17), 3.2(39), 11.8(171)
causality, 3.2(39), 4.5(62), 8.5(118)
common, 9.5(132)
complex, 2.2(22), 7.1(91)
complex exponential, 2.6(33), 5.3(68), 92, 7.5(100), 9.2(124)
complex exponentials, 126
complex numbers, 1.1(1), 1.2(5), 1.3(11)
complex plane, 2.6(33), 7.5(100)
complex-valued, 2.2(22), 7.1(91)
complexity, 139, 142
composite, 144
computational advantage, 140
considerations, 11.8(171)
Constant Coefficient, 12.2(182), 12.3(183)
continuous, 17, 6.5(85), 11.8(171)
continuous frequency, 6.2(76), 9.3(125)
continuous time, 2.1(17), 2.2(22), 4.1(51), 4.2(52), 54, 4.3(55), 5.1(65), 5.3(68), 5.4(69), 6.1(75), 6.2(76), 6.3(79), 6.4(83), 11.1(147)
Continuous Time Convolution Integral, 54
Continuous Time Fourier Series, 69
Continuous Time Fourier Transform, 75, 76
continuous-time, 4.1(51)
Continuous-Time Fourier Transform, 77
Continuous-Time Systems, 4.5(62)
converter, 11.8(171)
Convolution, 54, 4.3(55), 4.4(58), 6.3(79), 6.5(85), 109, 8.3(110), 8.4(115)
convolution integral, 85
Cooley-Tukey, 10.2(138)
CTFS, 5.4(69)
CTFT, 6.2(76), 6.5(85)

D DAC, 11.8(171)
de, 4.1(51)
decimation, 7.3(95)
decompose, 7.1(91)
delta function, 8.2(107)
DFT, 10.1(137), 10.3(139)
difference equation, 8.1(105)
Difference Equations, 12.2(182), 12.3(183)
differential, 4.1(51)
differential equation, 54
differential equations, 4.1(51)
digital, 2.1(17), 18, 11.8(171)
digital signal processing, 8.1(105)
dirac delta function, 2.2(22), 2.5(31), 31
Discrete Time Fourier Transform, 123
discrete, 17, 11.8(171)
discrete convolution, 9.6(133)
Discrete Fourier Transform, 10.1(137)
discrete time, 2.1(17), 5.1(65), 8.2(107), 109, 8.3(110), 8.4(115), 9.1(121), 9.2(124), 9.3(125), 9.5(132), 9.6(133), 11.1(147)
Discrete Time Convolution Sum, 109
Discrete Time Fourier Transform, 125
discrete-time, 7.1(91), 9.4(128)
Discrete-Time Fourier Transform, 126
Discrete-Time Fourier Transform properties, 9.4(128)
discrete-time periodic signal, 121
discrete-time signals, 7.3(95), 11.7(165)
discrete-time systems, 8.1(105), 8.5(118)
downsampling, 11.7(165)
DSP, 8.1(105), 11.2(149)
DT, 8.3(110)
DTFT, 9.3(125), 9.5(132), 9.6(133)
duality, 6.3(79)
dynamic content, 13.1(190)

E eigenfunction, 5.3(68), 5.4(69), 9.2(124)
eigenvalue, 5.3(68), 9.2(124)
eigenvector, 5.3(68), 9.2(124)
ELEC 301, 12.2(182)
electrical engineering, 1.1(1), 1.2(5), 1.3(11)
embedded, 13.1(190)
Energy, 2.4(28), 7.2(93)
engineering, 1.1(1), 1.2(5), 1.3(11)
even signal, 2.1(17), 20
examples, 9.5(132)
expansion, 7.3(95)
exponential, 2.2(22), 7.1(91)

F fast Fourier transform, 10.2(138), 10.3(139)
feedback, 3.1(37)
FFT, 10.2(138), 10.3(139)
filter, 11.3(152), 11.6(163), 11.8(171)
finite-length signal, 19
form, 140
fourier, 10.3(139)
Fourier methods, 54, 109
fourier series, 5.1(65), 5.4(69)
fourier transform, 5.1(65), 6.2(76), 6.3(79), 6.4(83), 6.5(85), 6.6(87), 9.3(125), 9.4(128), 9.6(133), 10.1(137), 10.3(139)
fourier transforms, 9.5(132)
frequency, 11.5(160), 11.6(163), 11.8(171)
Frequency Domain, 9.4(128)
FT, 6.5(85)
function spaces, 181
functional, 37
fundamental period, 18, 66, 75, 121

G geometry, 1.1(1)

H hold, 11.8(171)
homogeneous, 12.3(183)

I ideal, 11.3(152), 11.4(157)
imaging, 11.8(171)
imperfect, 11.8(171)
implementability, 11.8(171)
impulse, 2.2(22), 2.5(31), 4.2(52), 7.4(98), 8.2(107), 109
impulse response, 4.2(52), 8.2(107), 8.3(110), 12.3(183)
impulse-like input signal, 54
independence, 12.1(177)
infinite-length signal, 19
initial value, 12.3(183)
input, 3.1(37)
interpolation, 7.3(95), 11.3(152)
invariant, 11.8(171)

L LabVIEW, 13.1(190)
Laplace transform, 54, 5.1(65)
linear, 3.2(39), 3.3(43), 11.8(171), 12.2(182), 12.3(183)
linear algebra, 12.1(177)
linear independence, 12.1(177)
linear system, 4.1(51)
linear time invariant, 4.3(55), 5.3(68), 9.2(124)
Linear Time-Invariant Systems, 8.5(118)
linearity, 6.3(79)
linearly independent, 177, 177
lowpass, 11.3(152), 11.4(157), 11.6(163)
LTI, 4.3(55), 5.3(68), 9.2(124)
LTI Systems, 4.5(62), 6.6(87)

M Mathematica, 13.2(190)
modulation, 6.3(79)

N noncausal, 2.1(17), 3.2(39)
nonhomogeneous, 12.3(183)
nonlinear, 3.2(39)
not, 143
Nyquist, 11.1(147), 11.2(149), 11.6(163), 11.8(171)
Nyquist frequency, 11.2(149)
Nyquist theorem, 11.2(149)

O odd signal, 2.1(17), 20
output, 3.1(37)

P parallel, 3.1(37), 3.3(43)
perfect, 11.4(157)
period, 18, 5.2(66)
periodic, 2.1(17), 5.2(66), 66, 67, 75, 121
periodic function, 5.2(66), 66, 75, 121
periodic functions, 66, 122
periodicity, 5.2(66)
Player, 13.2(190)
plug-in, 13.1(190)
Power, 2.4(28), 7.2(93)
practical, 11.8(171)
precision, 11.8(171)
processing, 11.8(171)
properties, 8.4(115)
property, 4.4(58)
proportional, 139, 142
pulse, 2.2(22)

Q quantization, 11.8(171)

R range, 11.8(171)
real-valued, 7.1(91)
real-valued signal conjugate symmetry, 6.3(79)
reconstruct, 11.4(157)
reconstruction, 11.1(147), 11.2(149), 11.3(152), 11.4(157), 11.5(160), 11.8(171)

S sample, 11.2(149), 11.5(160)
sampling, 11.1(147), 11.2(149), 11.4(157), 11.5(160), 11.6(163), 11.7(165), 11.8(171)
sampling rate, 11.7(165)
sampling theorem, 11.8(171)
Sequence-Domain, 9.4(128)
sequences, 7.1(91)
Shannon, 11.1(147), 11.2(149), 11.3(152), 11.4(157), 11.8(171)
shift-invariant systems, 8.1(105)
sifting property, 2.2(22)
signal, 3.1(37), 5.2(66)
signals, 2.2(22), 2.3(25), 2.5(31), 2.6(33), 3.2(39), 4.3(55), 4.4(58), 5.1(65), 6.1(75), 6.4(83), 7.1(91), 7.4(98), 7.5(100), 8.3(110), 9.1(121), 11.8(171)
signals and systems, 2.1(17), 8.3(110)
sinc, 11.3(152), 11.4(157), 11.8(171)
sine, 7.1(91)
sinusoid, 7.1(91)
solution, 12.3(183)
span, 12.1(177), 179
spectrum, 11.1(147)
spline, 11.3(152)
stability, 3.2(39)
stable, 3.2(39)
standard basis, 180
superposition, 3.3(43), 8.1(105)
symbolic-valued signals, 7.1(91)
synthesis, 5.4(69)
system theory, 3.1(37)
systems, 4.3(55), 4.4(58), 5.1(65), 6.4(83), 7.1(91)

T theorem, 11.2(149)
time, 11.8(171)
time differentiation, 6.3(79)
time domain, 8.1(105), 8.4(115)
time invariant, 3.2(39)
time reversal, 2.3(25), 7.3(95)
time scaling, 2.3(25), 6.3(79), 7.3(95)
t-periodic, 5.2(66)
time shifting, 2.3(25), 6.3(79), 7.3(95)
time varying, 3.2(39)
time-invariant, 3.3(43)
transfer function, 65, 5.4(69)
triangle, 2.2(22)
triangle function, 24

U unit sample, 7.1(91), 92, 7.4(98)
unit sample function, 98
unit step, 2.2(22)
unit-pulse function, 24
unit-step function, 23, 92
upsampling, 11.7(165)

V vector space, 1.3(11)
VI, 13.1(190)
virtual instrument, 13.1(190)

W Whittaker, 11.3(152), 11.4(157), 11.8(171)
window, 11.8(171)
Wolfram, 13.2(190)

Z z transform, 5.1(65)

ATTRIBUTIONS

198

Attributions
Collection: Signals and Systems
Edited by: Marco F. Duarte


URL: http://legacy.cnx.org/content/col11557/1.10/
License: http://creativecommons.org/licenses/by/4.0/
Module: "Complex Numbers: Geometry of Complex Numbers"
Used here as: "Geometry of Complex Numbers"
By: Louis Scharf
URL: http://legacy.cnx.org/content/m21411/1.6/
Pages: 1-5
Copyright: Louis Scharf
License: http://creativecommons.org/licenses/by/3.0/
Module: "Complex Numbers: Algebra of Complex Numbers"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50948/1.1/
Pages: 5-11
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Complex Numbers: Algebra of Complex Numbers
By: Louis Scharf
URL: http://legacy.cnx.org/content/m21408/1.6/
Module: "Complex Numbers: Representing Complex Numbers in a Vector Space"
Used here as: "Representing Complex Numbers in a Vector Space"
By: Louis Scharf
URL: http://legacy.cnx.org/content/m21414/1.6/
Pages: 11-15
Copyright: Louis Scharf
License: http://creativecommons.org/licenses/by/3.0/
Module: "Signal Classifications and Properties"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47271/1.3/
Pages: 17-22
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Signal Classifications and Properties
By: Melissa Selik, Richard Baraniuk, Michael Haag
URL: http://legacy.cnx.org/content/m10057/2.21/
Module: "Common Continuous Time Signals"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47606/1.4/
Pages: 22-25
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Common Continuous Time Signals
By: Melissa Selik, Richard Baraniuk
URL: http://legacy.cnx.org/content/m10058/2.16/


Module: "Signal Operations"


By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50957/1.2/
Pages: 25-28
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Signal Operations
By: Richard Baraniuk
URL: http://legacy.cnx.org/content/m10125/2.18/
Module: "Energy and Power of Continuous-Time Signals"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47273/1.4/
Pages: 28-30
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Energy and Power
By: Anders Gjendemsjø, Melissa Selik, Richard Baraniuk
URL: http://legacy.cnx.org/content/m11526/1.20/
Module: "Continuous Time Impulse Function"
By: Melissa Selik, Richard Baraniuk
URL: http://legacy.cnx.org/content/m10059/2.27/
Pages: 31-33
Copyright: Melissa Selik, Richard Baraniuk, Adam Blair
License: http://creativecommons.org/licenses/by/3.0/
Module: "Continuous-Time Complex Exponential"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50675/1.3/
Pages: 33-36
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Continuous Time Complex Exponential
By: Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10060/2.25/
Module: "Introduction to Systems"
By: Don Johnson
URL: http://legacy.cnx.org/content/m0005/2.19/
Pages: 37-39
Copyright: Don Johnson
License: http://creativecommons.org/licenses/by/1.0
Module: "System Classifications and Properties"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50678/1.1/
Pages: 39-43
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: System Classifications and Properties
By: Melissa Selik, Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10084/2.24/

Module: "Linear Time Invariant Systems"
By: Thanos Antoulas, JP Slavinsky
URL: http://legacy.cnx.org/content/m2102/2.26/
Pages: 43-50
Copyright: Thanos Antoulas, JP Slavinsky
License: http://creativecommons.org/licenses/by/3.0/
Module: "Continuous Time Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47437/1.1/
Pages: 51-52
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Continuous Time Systems
By: Michael Haag, Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10855/2.8/
Module: "Continuous Time Impulse Response"
By: Dante Soares
URL: http://legacy.cnx.org/content/m34629/1.2/
Pages: 52-55
Copyright: Dante Soares
License: http://creativecommons.org/licenses/by/3.0/
Module: "Continuous-Time Convolution"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47482/1.2/
Pages: 55-58
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Continuous Time Convolution
By: Melissa Selik, Richard Baraniuk, Stephen Kruzick, Dan Calderon
URL: http://legacy.cnx.org/content/m10085/2.34/
Module: "Properties of Continuous Time Convolution"
By: Melissa Selik, Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10088/2.20/
Pages: 58-61
Copyright: Melissa Selik, Richard Baraniuk, Stephen Kruzick
License: http://creativecommons.org/licenses/by/4.0/
Module: "Causality and Stability of Continuous-Time Linear Time-Invariant Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50671/1.3/
Pages: 62-63
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/


Module: "Introduction to Fourier Analysis"


By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47439/1.2/
Page: 65
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Introduction to Fourier Analysis
By: Richard Baraniuk
URL: http://legacy.cnx.org/content/m10096/2.12/
Module: "Continuous Time Periodic Signals"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47350/1.1/
Pages: 66-67
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Continuous Time Periodic Signals
By: Michael Haag, Justin Romberg
URL: http://legacy.cnx.org/content/m10744/2.13/
Module: "Eigenfunctions of Continuous-Time LTI Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47308/1.2/
Pages: 68-69
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Eigenfunctions of Continuous Time LTI Systems
By: Justin Romberg, Stephen Kruzick
URL: http://legacy.cnx.org/content/m34639/1.1/
Module: "Continuous Time Fourier Series (CTFS)"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47348/1.2/
Pages: 69-73
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Continuous Time Fourier Series (CTFS)
By: Stephen Kruzick, Dan Calderon
URL: http://legacy.cnx.org/content/m34531/1.9/
Module: "Continuous Time Aperiodic Signals"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47356/1.1/
Pages: 75-76
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Continuous Time Aperiodic Signals
By: Stephen Kruzick
URL: http://legacy.cnx.org/content/m34848/1.5/


Module: "Continuous Time Fourier Transform (CTFT)"


By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47319/1.5/
Pages: 76-79
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Continuous Time Fourier Transform (CTFT)
By: Richard Baraniuk, Melissa Selik
URL: http://legacy.cnx.org/content/m10098/2.16/
Module: "Properties of the CTFT"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47347/1.4/
Pages: 79-82
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Properties of the CTFT
By: Melissa Selik, Richard Baraniuk
URL: http://legacy.cnx.org/content/m10100/2.15/
Module: "Common Fourier Transforms"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47344/1.5/
Pages: 83-85
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Common Fourier Transforms
By: Melissa Selik, Richard Baraniuk
URL: http://legacy.cnx.org/content/m10099/2.11/
Module: "Continuous Time Convolution and the CTFT"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47346/1.1/
Pages: 85-87
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Continuous Time Convolution and the CTFT
By: Stephen Kruzick
URL: http://legacy.cnx.org/content/m34849/1.5/
Module: "Frequency-Domain Analysis of Linear Time-Invariant Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50661/1.1/
Pages: 87-88
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/


Module: "Common Discrete Time Signals"


By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47447/1.2/
Pages: 91-93
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Common Discrete Time Signals
By: Don Johnson, Stephen Kruzick
URL: http://legacy.cnx.org/content/m34575/1.2/
Module: "Energy and Power of Discrete-Time Signals"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47357/1.3/
Pages: 93-94
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Energy and Power
By: Anders Gjendemsjø, Melissa Selik, Richard Baraniuk
URL: http://legacy.cnx.org/content/m11526/1.20/
Module: "Discrete-Time Signal Operations"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47809/1.1/
Pages: 95-98
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Signal Operations
By: Richard Baraniuk
URL: http://legacy.cnx.org/content/m10125/2.17/
Module: "Discrete Time Impulse Function"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47448/1.1/
Pages: 98-100
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Impulse Function
By: Dan Calderon
URL: http://legacy.cnx.org/content/m34566/1.6/
Module: "Discrete Time Complex Exponential"
By: Dan Calderon, Richard Baraniuk, Stephen Kruzick, Matthew Moravec
URL: http://legacy.cnx.org/content/m34573/1.6/
Pages: 100-104
Copyright: Dan Calderon, Stephen Kruzick
License: http://creativecommons.org/licenses/by/3.0/
Based on: The Complex Exponential
By: Richard Baraniuk
URL: http://legacy.cnx.org/content/m10060/2.21/

Module: "Discrete Time Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47454/1.4/
Pages: 105-107
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Discrete Time Systems
By: Don Johnson, Stephen Kruzick
URL: http://legacy.cnx.org/content/m34614/1.2/
Module: "Discrete Time Impulse Response"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47363/1.2/
Pages: 107-109
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Impulse Response
By: Dante Soares
URL: http://legacy.cnx.org/content/m34626/1.1/
Module: "Discrete-Time Convolution"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47455/1.2/
Pages: 110-115
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Discrete Time Convolution
By: Ricardo Radaelli-Sanchez, Richard Baraniuk, Stephen Kruzick, Catherine Elder
URL: http://legacy.cnx.org/content/m10087/2.27/
Module: "Properties of Discrete Time Convolution"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47456/1.1/
Pages: 115-117
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Properties of Discrete Time Convolution
By: Stephen Kruzick
URL: http://legacy.cnx.org/content/m34625/1.2/
Module: "Causality and Stability of Discrete-Time Linear Time-Invariant Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m50677/1.1/
Pages: 118-119
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/


Module: "Discrete Time Aperiodic Signals"


By: Marco F. Duarte, Natesh Ganesh
URL: http://legacy.cnx.org/content/m47369/1.3/
Pages: 121-124
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Aperiodic Signals
By: Stephen Kruzick, Dan Calderon
URL: http://legacy.cnx.org/content/m34850/1.4/
Module: "Eigenfunctions of Discrete Time LTI Systems"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47459/1.1/
Pages: 124-125
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Eigenfunctions of Discrete Time LTI Systems
By: Justin Romberg, Stephen Kruzick
URL: http://legacy.cnx.org/content/m34640/1.1/
Module: "Discrete Time Fourier Transform (DTFT)"
By: Marco F. Duarte, Natesh Ganesh
URL: http://legacy.cnx.org/content/m47370/1.2/
Pages: 125-128
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Fourier Transform (DTFT)
By: Richard Baraniuk
URL: http://legacy.cnx.org/content/m10108/2.18/
Module: "Properties of the DTFT"
By: Marco F. Duarte, Natesh Ganesh
URL: http://legacy.cnx.org/content/m47374/1.10/
Pages: 128-132
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Properties of the DTFT
By: Don Johnson
URL: http://legacy.cnx.org/content/m0506/2.7/
Module: "Common Discrete Time Fourier Transforms"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47373/1.5/
Pages: 132-133
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Common Discrete Time Fourier Transforms
By: Stephen Kruzick
URL: http://legacy.cnx.org/content/m34771/1.3/

Module: "Discrete Time Convolution and the DTFT"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47375/1.2/
Pages: 133-135
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Convolution and the DTFT
By: Stephen Kruzick, Dan Calderon
URL: http://legacy.cnx.org/content/m34851/1.6/
Module: "Discrete Fourier Transform (DFT)"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47468/1.2/
Pages: 137-138
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Discrete Fourier Transform (DFT)
By: Don Johnson
URL: http://legacy.cnx.org/content/m10249/2.28/
Module: "DFT: Fast Fourier Transform"
By: Don Johnson
URL: http://legacy.cnx.org/content/m0504/2.9/
Pages: 138-139
Copyright: Don Johnson
License: http://creativecommons.org/licenses/by/3.0/
Module: "The Fast Fourier Transform (FFT)"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47467/1.1/
Pages: 139-144
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: The Fast Fourier Transform (FFT)
By: Justin Romberg
URL: http://legacy.cnx.org/content/m10783/2.7/
Module: "Signal Sampling"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47377/1.2/
Pages: 147-149
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Signal Sampling
By: Stephen Kruzick, Justin Romberg
URL: http://legacy.cnx.org/content/m10798/2.8/


Module: "Sampling Theorem"


By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47378/1.2/
Pages: 149-152
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Sampling Theorem
By: Justin Romberg, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10791/2.7/
Module: "Signal Reconstruction"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47463/1.1/
Pages: 152-157
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Signal Reconstruction
By: Stephen Kruzick, Justin Romberg
URL: http://legacy.cnx.org/content/m10788/2.8/
Module: "Perfect Reconstruction"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47379/1.3/
Pages: 157-159
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Perfect Reconstruction
By: Stephen Kruzick, Roy Ha, Justin Romberg
URL: http://legacy.cnx.org/content/m10790/2.6/
Module: "Aliasing Phenomena"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47380/1.3/
Pages: 160-163
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Based on: Aliasing Phenomena
By: Justin Romberg, Don Johnson, Stephen Kruzick
URL: http://legacy.cnx.org/content/m34847/1.5/
Module: "Anti-Aliasing Filters"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m47392/1.1/
Pages: 163-165
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Anti-Aliasing Filters
By: Justin Romberg, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10794/2.6/
Module: "Changing Sampling Rates in Discrete Time"
By: Marco F. Duarte
URL: http://legacy.cnx.org/content/m48038/1.2/
Pages: 165-171
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/4.0/
Module: "Discrete Time Processing of Continuous Time Signals"
By: Marco F. Duarte, Natesh Ganesh
URL: http://legacy.cnx.org/content/m47398/1.3/
Pages: 171-175
Copyright: Marco F. Duarte
License: http://creativecommons.org/licenses/by/3.0/
Based on: Discrete Time Processing of Continuous Time Signals
By: Justin Romberg, Stephen Kruzick
URL: http://legacy.cnx.org/content/m10797/2.11/
Module: "Linear Algebra: The Basics"
Used here as: "Basic Linear Algebra"
By: Michael Haag, Justin Romberg
URL: http://legacy.cnx.org/content/m10734/2.7/
Pages: 177-182
Copyright: Michael Haag, Justin Romberg
License: http://creativecommons.org/licenses/by/3.0/
Module: "Linear Constant Coefficient Difference Equations"
By: Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m12325/1.5/
Pages: 182-183
Copyright: Richard Baraniuk, Stephen Kruzick
License: http://creativecommons.org/licenses/by/3.0/
Module: "Solving Linear Constant Coefficient Difference Equations"
By: Richard Baraniuk, Stephen Kruzick
URL: http://legacy.cnx.org/content/m12326/1.6/
Pages: 183-186
Copyright: Richard Baraniuk, Stephen Kruzick
License: http://creativecommons.org/licenses/by/3.0/
Module: "Viewing Embedded LabVIEW Content in Connexions"
By: Stephen Kruzick
URL: http://legacy.cnx.org/content/m34460/1.5/
Page: 190
Copyright: Stephen Kruzick
License: http://creativecommons.org/licenses/by/3.0/
Based on: Viewing Embedded LabVIEW Content
By: Matthew Hutchinson
URL: http://legacy.cnx.org/content/m13753/1.3/

Module: "Getting Started With Mathematica"
By: Catherine Elder, Dan Calderon
URL: http://legacy.cnx.org/content/m36728/1.13/
Pages: 190-192
Copyright: Catherine Elder, Dan Calderon
License: http://creativecommons.org/licenses/by/3.0/

Signals and Systems


This course deals with signals, systems, and transforms, from their theoretical mathematical foundations to
practical implementation in circuits and computer algorithms. At the conclusion of ECE 313, you should
have a deep understanding of the mathematics and practical issues of signals in continuous and discrete time,
linear time invariant systems, convolution, and Fourier transforms.

About OpenStax-CNX
Rhaptos is a web-based collaborative publishing system for educational material.
