
Signals and Systems

Simon Haykin
McMaster University

Barry Van Veen
University of Wisconsin

JOHN WILEY & SONS, INC.

New York · Chichester · Weinheim · Brisbane · Singapore · Toronto


To Nancy and Kathy, Emily, David, and Jonathan

EDITOR Bill Zobrist
MARKETING MANAGER Katherine Hepburn
SENIOR PRODUCTION MANAGER Lucille Buonocore
SENIOR PRODUCTION EDITOR Monique Calello
SENIOR DESIGNER Laura Boucher
TEXT DESIGNER Nancy Field
COVER DESIGNER Laura Boucher
COVER PHOTO Courtesy of NASA
ILLUSTRATION EDITOR Sigmund Malinowski
ILLUSTRATION Wellington Studios

This book was set in Times Roman by the UG division of GGS Information Services and printed and bound by
Quebecor Printing, Kingsport. The cover was printed by Phoenix Color Corporation.

This book is printed on acid-free paper.

The paper in this book was manufactured by a mill whose forest management programs include sustained
yield harvesting of its timberlands. Sustained yield harvesting principles ensure that the number of trees cut
each year does not exceed the amount of new growth.

Copyright © 1999, John Wiley & Sons, Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by
any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under
Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of
the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance
Center, 222 Rosewood Drive, Danvers, MA 01923, (508) 750-8400, fax (508) 750-4470. Requests to the
Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc.,
605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail:
PERMREQ@WILEY.COM.

Library of Congress Cataloging-in-Publication Data

Haykin, Simon
  Signals and systems / Simon Haykin, Barry Van Veen.
    p. cm.
  Includes index.
  ISBN 0-471-13820-7 (cloth : alk. paper)
  1. Signal processing. 2. System analysis. 3. Linear time
invariant systems. 4. Telecommunication systems. I. Van Veen,
Barry. II. Title.
  TK5102.5.H37 1999
  621.382'2-dc21                                   97-52090
                                                   CIP

Printed in the United States of America
10 9 8 7 6

Ts          settling time
X(τ, jω)    short-time Fourier transform of x(t)
Wx(τ, a)    wavelet transform of x(t)

Abbreviations

A/D       analog-to-digital (converter)
AM        amplitude modulation
BIBO      bounded input bounded output
CW        continuous wave
D/A       digital-to-analog (converter)
dB        decibel
DOF       degree of freedom
DSB-SC    double sideband-suppressed carrier
DTFS      discrete-time Fourier series
DTFT      discrete-time Fourier transform
FDM       frequency-division multiplexing
FFT       fast Fourier transform
FIR       finite-duration impulse response
FM        frequency modulation
FS        Fourier series
FT        Fourier transform
Hz        hertz
IIR       infinite-duration impulse response
LTI       linear time-invariant (system)
MRI       magnetic resonance image
MSE       mean squared error
PAM       pulse-amplitude modulation
PCM       pulse-code modulation
PM        phase modulation
QAM       quadrature-amplitude modulation
ROC       region of convergence
rad       radian(s)
s         second
SSB       single sideband modulation
STFT      short-time Fourier transform
TDM       time-division multiplexing
VSB       vestigial sideband modulation
WT        wavelet transform


Each ''Exploring Concepts with MATLAB'' section is designed to instruct the student
on the proper application of the relevant MATLAB commands and develop additional
insight into the concepts introduced in the chapter. Minimal previous exposure to MATLAB
is assumed. The MATLAB code for all the computations performed in the book, including
the last chapter, is available on the Wiley Web Site: http://www.wiley.com/college
There are 10 chapters in the book, organized as follows:
• Chapter 1 begins by motivating the reader as to what signals and systems are and
how they arise in communication systems, control systems, remote sensing, biomed-
ical signal processing, and the auditory system. It then describes the different classes
of signals, defines certain elementary signals, and introduces the basic notions in-
volved in the characterization of systems.
• Chapter 2 presents a detailed treatment of time-domain representations of linear
time-invariant (LTI) systems. It develops convolution from the representation of the
input signal as a superposition of impulses. The notions of causality, memory, sta-
bility, and invertibility that were briefly introduced in Chapter 1 are then revisited
in terms of the impulse response description for LTI systems. The steady-state re-
sponse of an LTI system to a sinusoidal input is used to introduce the concept of
frequency response. Differential- and difference-equation representations for linear
time-invariant systems are also presented. Next, block diagram representations of
LTI systems are introduced. The chapter finishes with a discussion of the state-
variable description of LTI systems.
• Chapter 3 deals with the Fourier representation of signals. In particular, the Fourier
representations of four fundamental classes of signals are thoroughly discussed in a
unified manner:
• Discrete-time periodic signals: the discrete-time Fourier series
• Continuous-time periodic signals: the Fourier series
• Discrete-time nonperiodic signals: the discrete-time Fourier transform
• Continuous-time nonperiodic signals: the Fourier transform
A novel feature of the chapter is the way in which similarities between these four
representations are exploited and the differences between them are highlighted. The
fact that complex sinusoids are eigenfunctions of LTI systems is used to motivate the
representation of signals in terms of complex sinusoids. The basic form of the Fourier
representation for each signal class is introduced and the four representations are
developed in sequence. Next, the properties of all four representations are studied
side by side. A strict separation between signal classes and the corresponding Fourier
representations is maintained throughout the chapter. It is our conviction that this
parallel, yet separate, treatment minimizes confusion between representations and
aids later mastery of proper application for each. Mixing of Fourier representations
occurs naturally in the context of analysis and computational applications and is
thus deferred to Chapter 4.
• Chapter 4 presents a thorough treatment of the applications of Fourier represen-
tations to the study of signals and LTI systems. Links between the frequency-domain
and time-domain system representations presented in Chapter 2 are established.
Analysis and computational applications are then used to motivate derivation of the
relationships between the four Fourier representations and develop the student's skill
in applying these tools. The continuous-time and discrete-time Fourier transform
representations of periodic signals are introduced for analyzing problems in which
there is a mixture of periodic and nonperiodic signals, such as application of a pe-
riodic input to an LTI system. The Fourier transform representation for discrete-time
signals is then developed as a tool for analyzing situations in which there is a mixture
of continuous-time and discrete-time signals. The sampling process and continu-
ous-time signal reconstruction from samples are studied in detail within this context.
Systems for discrete-time processing of continuous-time signals are also discussed,
including the issues of oversampling, decimation, and interpolation. The chapter
concludes by developing relationships between the discrete-time Fourier series and
the discrete-time and continuous-time Fourier transforms in order to introduce the
computational aspects of the Fourier analysis of signals.
• Chapter 5 presents an introductory treatment of linear modulation systems applied
to communication systems. Practical reasons for using modulation are described.
Amplitude modulation and its variants, namely, double sideband-suppressed carrier
modulation, single sideband modulation, and vestigial sideband modulation, are dis-
cussed. The chapter also includes a discussion of pulse-amplitude modulation and
its role in digital communications to again highlight a natural interaction between
continuous-time and discrete-time signals. The chapter includes a discussion of
frequency-division and time-division multiplexing techniques. It finishes with a treat-
ment of phase and group delays that arise when a modulated signal is transmitted
through a linear channel.
• Chapter 6 discusses the Laplace transform and its use for the complex exponential
representations of continuous-time signals and the characterization of systems. The
eigenfunction property of LTI systems and the existence of complex exponential
representations for signals that have no Fourier representation are used to motivate
the study of Laplace transforms. The unilateral Laplace transform is studied first and
applied to the solution of differential equations with initial conditions to reflect the
dominant role of the Laplace transform in engineering applications. The bilateral
Laplace transform is introduced next and is used to study issues of causality, stability,
invertibility, and the relationship between poles and zeros and frequency response.
The relationships between the transfer function description of LTI systems and the
time-domain descriptions introduced in Chapter 2 are developed.
• Chapter 7 is devoted to the z-transform and its use in the complex exponential rep-
resentation of discrete-time signals and the characterization of systems. As in Chapter
6, the z-transform is motivated as a more general representation than that of the
discrete-time Fourier transform. Consistent with its primary role as an analysis tool,
we begin with the bilateral z-transform. The properties of the z-transform and tech-
niques for inversion are introduced. Next, the z-transform is used for transform
analysis of systems. Relationships between the transfer function and time-domain
descriptions introduced in Chapter 2 are developed. Issues of invertibility, stability,
causality, and the relationship between the frequency response and poles and zeros
are revisited. The use of the z-transform for deriving computational structures for
implementing discrete-time systems on computers is introduced. Lastly, use of the
unilateral z-transform for solving difference equations is presented.
• Chapter 8 discusses the characterization and design of linear filters and equalizers.
The approximation problem, with emphasis on Butterworth functions and brief men-
tion of Chebyshev functions, is introduced. Direct and indirect methods for the design
of analog (i.e., continuous-time) and digital (i.e., discrete-time) types of filters are
presented. The window method for the design of finite-duration impulse response
digital filters and the bilinear transform method for the design of infinite-duration
impulse response digital filters are treated in detail. Filter design offers another op-
portunity to reinforce the links between continuous-time and discrete-time systems.
The chapter builds on material presented in Chapter 4 in developing a method for the
equalization of a linear channel using a discrete-time filter of finite impulse response.
Filters and equalizers provide a natural vehicle for developing an appreciation for how
to design systems required to meet prescribed frequency-domain specifications.
• Chapter 9 presents an introductory treatment of the many facets of linear feedback
systems. The various practical advantages of feedback and the cost of its application
are emphasized. The applications of feedback in the design of operational amplifiers
and feedback control systems are discussed in detail. The stability problem, basic to
the study of feedback systems, is treated in detail by considering the following
methods:
• The root-locus method, related to the closed-loop transient response of the system
• The Nyquist stability criterion, related to the open-loop frequency response of the
system
The Nyquist stability criterion is studied using both the Nyquist locus and the Bode
diagram. The chapter also includes a discussion of sampled-data systems to illustrate
the natural interaction between continuous-time and discrete-time signals that occurs
in control applications.
• Chapter 10, the final chapter in the book, takes a critical look at limitations of the
representations of signals and systems presented in the previous chapters of the book.
It highlights other advanced tools, namely, time-frequency analysis (the short-time
Fourier transform and wavelets) and chaos, for the characterization of signals. It
also highlights the notions of nonlinearity and adaptivity in the study of systems. In
so doing, the student is made aware of the very broad nature of the subject of sig-
nals and systems and reminded of the limitations of the linear, time-invariance
assumption.
In organizing the material as described, we have tried to follow theoretical material
by appropriate applications drawn from the fields of communication systems, design of
filters, and control systems. This has been done in order to provide a source of motivation
for the reader.
The material in this book can be used for either a one- or two-semester course se-
quence on signals and systems. A two-semester course sequence would cover most, if not
all, of the topics in the book. The material for a one-semester course can be arranged in a
variety of ways, depending on the preference of the instructor. We have attempted to
maintain maximum teaching flexibility in the selection and order of topics, subject to our
philosophy of truly integrating continuous-time and discrete-time concepts. Some sections
of the book include material that is considered to be of an advanced nature; these sections
are marked with an asterisk. The material covered in these sections can be omitted without
disrupting the continuity of the subject matter presented in the pertinent chapter.
The book finishes with the following appendices:
• Selected mathematical identities
• Partial fraction expansions
• Tables of Fourier representations and properties
• Tables of Laplace transforms and properties
• Tables of z-transforms and properties
A consistent set of notations is used throughout the book. Except for a few places, the
derivations of all the formulas are integrated into the text.
The book is accompanied by a detailed Solutions Manual for all the end-of-chapter
problems in the book. A copy of the Manual is only available to instructors adopting this
book for use in classrooms and may be obtained by writing to the publisher.


Acknowledgments
In writing this book over a period of four years, we have benefited enormously from the
insightful suggestions and constructive inputs received from many colleagues and reviewers:
• Professor Rajeev Agrawal, University of Wisconsin
• Professor Richard Baraniuk, Rice University
• Professor Jim Bucklew, University of Wisconsin
• Professor C. Sidney Burrus, Rice University
• Professor Dan Cobb, University of Wisconsin
• Professor Chris DeMarco, University of Wisconsin
• Professor John Gubner, University of Wisconsin
• Professor Yu Hu, University of Wisconsin
• Professor John Hung, Auburn University
• Professor Steve Jacobs, University of Pittsburgh
• Dr. James F. Kaiser, Bellcore
• Professor Joseph Kahn, University of California-Berkeley
• Professor Ramdas Kumaresan, University of Rhode Island
• Professor Truong Nguyen, Boston University
• Professor Robert Nowak, Michigan State University
• Professor S. Pasupathy, University of Toronto
• Professor John Platt, McMaster University
• Professor Naresh K. Sinha, McMaster University
• Professor Mike Thomson, University of Texas-Pan American
• Professor Anthony Vaz, McMaster University
We extend our gratitude to them all for helping us in their own individual ways to shape
the book into its final form.
Barry Van Veen is indebted to his colleagues at the University of Wisconsin, and
Professor Willis Tompkins, Chair of the Department of Electrical and Computer Engi-
neering, for allowing him to teach the Signals and Systems classes repeatedly while in the
process of working on this text.
We thank the many students at both McMaster and Wisconsin, whose suggestions
and questions have helped us over the years to refine and in some cases rethink the pre-
sentation of the material in this book. In particular, we thank Hugh Pasika, Eko Onggo
Sanusi, Dan Sebald, and Gil Raz for their invaluable help in preparing some of the com-
puter experiments, the solutions manual, and in reviewing page proofs.
The idea of writing this book was conceived when Steve Elliott was the Editor of
Electrical Engineering at Wiley. We are deeply grateful to him. We also wish to express
our gratitude to Charity Robey for undertaking the many helpful reviews of the book, and
Bill Zobrist, the present editor of Electrical Engineering at Wiley, for his strong support.
We wish to thank Monique Calello for dextrously managing the production of the book,
and Katherine Hepburn for her creative promotion of the book.
Lastly, Simon Haykin thanks his wife Nancy, and Barry Van Veen thanks his wife
Kathy and children Emily and David, for their support and understanding throughout the
long hours involved in writing this book.

Simon Haykin
Barry Van Veen
Contents


Notation xvi

CHAPTER 1  Introduction 1

1.1 What Is a Signal? 1
1.2 What Is a System? 2
1.3 Overview of Specific Systems 2
1.4 Classification of Signals 15
1.5 Basic Operations on Signals 22
1.6 Elementary Signals 29
1.7 Systems Viewed as Interconnections of Operations 42
1.8 Properties of Systems 44
1.9 Exploring Concepts with MATLAB 54
1.10 Summary 60
Further Reading 61
Problems 62

CHAPTER 2  Time-Domain Representations
for Linear Time-Invariant Systems 70

2.1 Introduction 70
2.2 Convolution: Impulse Response Representation for LTI Systems 71
2.3 Properties of the Impulse Response Representation for LTI Systems 94
2.4 Differential and Difference Equation Representations for LTI Systems 108
2.5 Block Diagram Representations 121
2.6 State-Variable Descriptions for LTI Systems 125
2.7 Exploring Concepts with MATLAB 133
2.8 Summary 142
Further Reading 143
Problems 144

CHAPTER 3  Fourier Representations for Signals 155

3.1 Introduction 155
3.2 Discrete-Time Periodic Signals: The Discrete-Time Fourier Series 160
3.3 Continuous-Time Periodic Signals: The Fourier Series 171
3.4 Discrete-Time Nonperiodic Signals: The Discrete-Time Fourier Transform 182
3.5 Continuous-Time Nonperiodic Signals: The Fourier Transform 190
3.6 Properties of Fourier Representations 196
3.7 Exploring Concepts with MATLAB 237
3.8 Summary 241
Further Reading 242
Problems 243

CHAPTER 4  Applications of Fourier Representations 256

4.1 Introduction 256
4.2 Frequency Response of LTI Systems 257
4.3 Fourier Transform Representations for Periodic Signals 266
4.4 Convolution and Modulation with Mixed Signal Classes 272
4.5 Fourier Transform Representation for Discrete-Time Signals 279
4.6 Sampling 283
4.7 Reconstruction of Continuous-Time Signals from Samples 291
*4.8 Discrete-Time Processing of Continuous-Time Signals 301
4.9 Fourier Series Representations for Finite-Duration Nonperiodic Signals 311
*4.10 Computational Applications of the Discrete-Time Fourier Series 316
*4.11 Efficient Algorithms for Evaluating the DTFS 327
4.12 Exploring Concepts with MATLAB 331
4.13 Summary 336
Further Reading 336
Problems 337

CHAPTER 5  Application to Communication Systems 349

5.1 Introduction 349
5.2 Types of Modulation 349
5.3 Benefits of Modulation 353
5.4 Full Amplitude Modulation 354
5.5 Double Sideband-Suppressed Carrier Modulation 362
5.6 Quadrature-Carrier Multiplexing 366
5.7 Other Variants of Amplitude Modulation 367
5.8 Pulse-Amplitude Modulation 372
5.9 Multiplexing 376

*5.10 Phase and Group Delays 381


5.11 Exploring Concepts with MATLAB 385
5.12 Summary 395
Further Reading 396
Problems 397

CHAPTER 6  Representation of Signals Using Continuous-
Time Complex Exponentials: The Laplace Transform 401

6.1 Introduction 401
6.2 The Laplace Transform 401
6.3 The Unilateral Laplace Transform 407
6.4 Inversion of the Laplace Transform 412
6.5 Solving Differential Equations with Initial Conditions 416
6.6 The Bilateral Laplace Transform 423
6.7 Transform Analysis of Systems 432
6.8 Exploring Concepts with MATLAB 446
6.9 Summary 449
Further Reading 450
Problems 450

CHAPTER 7  Representation of Signals Using Discrete-
Time Complex Exponentials: The z-Transform 455

7.1 Introduction 455
7.2 The z-Transform 455
*7.3 Properties of the Region of Convergence 463
7.4 Properties of the z-Transform 468
7.5 Inversion of the z-Transform 472
7.6 Transform Analysis of LTI Systems 479
7.7 Computational Structures for Implementing Discrete-Time Systems 489
7.8 The Unilateral z-Transform 493
7.9 Exploring Concepts with MATLAB 479
7.10 Summary 500
Further Reading 501
Problems 501

CHAPTER 8  Application to Filters and Equalizers 508

8.1 Introduction 508
8.2 Conditions for Distortionless Transmission 508
8.3 Ideal Lowpass Filters 510
8.4 Design of Filters 517

8.5 Approximating Functions 518
8.6 Frequency Transformations 524
8.7 Passive Filters 526
8.8 Digital Filters 527
8.9 FIR Digital Filters 528
8.10 IIR Digital Filters 538
8.11 Linear Distortion 542
8.12 Equalization 543
8.13 Exploring Concepts with MATLAB 546
8.14 Summary 551
Further Reading 551
Problems 552

CHAPTER 9  Application to Feedback Systems 556

9.1 Introduction 556
9.2 Basic Feedback Concepts 557
9.3 Sensitivity Analysis 559
9.4 Effect of Feedback on Disturbances or Noise 561
9.5 Distortion Analysis 562
9.6 Cost of Feedback 564
9.7 Operational Amplifiers 564
9.8 Control Systems 569
9.9 Transient Response of Low-Order Systems 576
9.10 Time-Domain Specifications 579
9.11 The Stability Problem 581
9.12 Routh-Hurwitz Criterion 585
9.13 Root Locus Method 588
9.14 Reduced-Order Models 597
*9.15 Nyquist Stability Criterion 600
9.16 Bode Diagram 600
*9.17 Sampled-Data Systems 607
9.18 Design of Control Systems 625
9.19 Exploring Concepts with MATLAB 633
9.20 Summary 639
Further Reading 640
Problems 640

CHAPTER 10  Epilogue 648

10.1 Physical Properties of Real-Life Signals 648
10.2 Time-Frequency Analysis 652
10.3 Departures from the ''Linear Time-Invariant System'' Model 659
10.4 Concluding Remarks 665
Further Reading 666

APPENDIX A  Selected Mathematical Identities 667

A.1 Trigonometry 667
A.2 Complex Numbers 668
A.3 Geometric Series 669
A.4 Definite Integrals 669
A.5 Matrices 670

APPENDIX B  Partial Fraction Expansions 671

B.1 Partial Fraction Expansions for Continuous-Time Representations 671
B.2 Partial Fraction Expansions for Discrete-Time Representations 674

APPENDIX C  Tables of Fourier Representations and Properties 676

C.1 Basic Discrete-Time Fourier Series Pairs 676
C.2 Basic Fourier Series Pairs 677
C.3 Basic Discrete-Time Fourier Transform Pairs 677
C.4 Basic Fourier Transform Pairs 678
C.5 Fourier Transform Pairs for Periodic Signals 678
C.6 Discrete-Time Fourier Transform Pairs for Periodic Signals 679
C.7 Properties of Fourier Representations 680
C.8 Relating the Four Fourier Representations 682
C.9 Sampling and Aliasing Relationships 682

APPENDIX D  Tables of Laplace Transforms and Properties 684

D.1 Basic Laplace Transforms 684
D.2 Laplace Transform Properties 685

APPENDIX E  Tables of z-Transforms and Properties 687

E.1 Basic z-Transforms 687
E.2 z-Transform Properties 688
Notation

[·]   indicates discrete-valued independent variable, for example, x[n]
(·)   indicates continuous-valued independent variable, for example, x(t)
• Lowercase functions denote time-domain quantities, for example, x(t), w[n]
• Uppercase functions denote frequency- or transform-domain quantities
  X[k]       discrete-time Fourier series coefficients for x[n]
  X[k]       Fourier series coefficients for x(t)
  X(e^jΩ)    discrete-time Fourier transform of x[n]
  X(jω)      Fourier transform of x(t)
  X(s)       Laplace transform of x(t)
  X(z)       z-transform of x[n]
• Boldface lowercase symbols denote vector quantities, for example, q
• Boldface uppercase symbols denote matrix quantities, for example, A
• Subscript δ indicates continuous-time representation for a discrete-time signal
  x_δ(t)     continuous-time representation for x[n]
  X_δ(jω)    Fourier transform of x_δ(t)
• Sans serif type indicates MATLAB variables or commands, for example,
  X = fft(x,n)
• 0^0 is defined as 1 for convenience
• arctan refers to the four-quadrant function and produces a value between -π and
  π radians.

Symbols

|c|          magnitude of complex quantity c
arg{c}       phase angle of complex quantity c
Re{c}        real part of c
Im{c}        imaginary part of c
c*           complex conjugate of c
j            square root of -1
i            square root of -1 as used by MATLAB
𝒯 (script T) sampling interval in seconds
T            fundamental period for continuous-time signal in seconds
N            fundamental period for discrete-time signal in samples
ω            (angular) frequency for continuous-time signal in radians/second

Ω            (angular) frequency for discrete-time signal in radians
ω_0          fundamental (angular) frequency for continuous-time periodic signal
             in radians/second
Ω_0          fundamental (angular) frequency for discrete-time periodic signal in
             radians
u(t), u[n]   step function of unit amplitude
δ[n], δ(t)   impulse function of unit strength
H{·}         representation of a system as an operator H
S^T{·}       time shift of T units
H^-1, h^-1   superscript -1 denotes inverse system
*            denotes convolution operation
H(e^jΩ)      discrete-time system frequency response
H(jω)        continuous-time system frequency response
h[n]         discrete-time system impulse response
h(t)         continuous-time system impulse response
y^(n)        superscript (n) denotes natural response
y^(f)        superscript (f) denotes forced response
y^(p)        superscript (p) denotes particular solution
←DTFS; Ω_0→  discrete-time Fourier series pair with fundamental frequency Ω_0
←FS; ω_0→    Fourier series pair with fundamental frequency ω_0
←DTFT→       discrete-time Fourier transform pair
←FT→         Fourier transform pair
←L→          Laplace transform pair
←L_u→        unilateral Laplace transform pair
←z→          z-transform pair
←z_u→        unilateral z-transform pair
sinc(u)      sin(πu)/(πu)
⊛            periodic convolution of two periodic signals
∩            intersection
T(s)         closed-loop transfer function
F(s)         return difference
L(s)         loop transfer function
e_ss         steady-state error
K_p          position error constant
K_v          velocity error constant
K_a          acceleration error constant
P.O.         percentage overshoot
T_p          peak time
T_r          rise time
Preface

The study of signals and systems is basic to the discipline of electrical engineering at all
levels. It is an extraordinarily rich subject with diverse applications. Indeed, a thorough
understanding of signals and systems is essential for a proper appreciation and application
of other parts of electrical engineering, such as signal processing, communication systems,
and control systems.
This book is intended to provide a modern treatment of signals and systems at an
introductory level. As such, it is intended for use in electrical engineering curricula in the
sophomore or junior years and is designed to prepare students for upper-level courses in
communication systems, control systems, and digital signal processing.
The book provides a balanced and integrated treatment of continuous-time and
discrete-time forms of signals and systems intended to reflect their roles in engineering
practice. Specifically, these two forms of signals and systems are treated side by side. This
approach has the pedagogical advantage of helping the student see the fundamental sim-
ilarities and differences between discrete-time and continuous-time representations. Real-
world problems often involve mixtures of continuous-time and discrete-time forms, so the
integrated treatment also prepares the student for practical usage of these concepts. This
integrated philosophy is carried over to the chapters of the book that deal with applications
of signals and systems in modulation, filtering, and feedback systems.
Abundant use is made of examples and drill problems with answers throughout the
book. All of these are designed to help the student understand and master the issues under
consideration. The last chapter is the only one without drill problems. Each chapter, except
for the last chapter, includes a large number of end-of-chapter problems designed to test
the student on the material covered in the chapter. Each chapter also includes a list of
references for further reading and a collection of historical remarks.
Another feature of the book is the emphasis given to design. In particular, the chap-
ters dealing with applications include illustrative design examples.
MATLAB, an acronym for MATrix LABoratory and a product of The MathWorks, Inc.,
has emerged as a powerful environment for the experimental study of signals and systems.
We have chosen to integrate MATLAB in the text by including a section entitled ''Ex-
ploring Concepts with MATLAB'' in every chapter, except for the concluding chapter. In
making this choice, we have been guided by the conviction that MATLAB provides a
computationally efficient basis for a ''Software Laboratory,'' where concepts are explored
and system designs are tested. Accordingly, we have placed the section on MATLAB before
the ''Summary'' section, thereby relating to and building on the entire body of material
discussed in the preceding sections of the pertinent chapter. This approach also offers the
instructor flexibility to either formally incorporate MATLAB exploration into the class-
room or leave it for the students to pursue on their own.
688 APPENDIX E • TABLES OF z-TRANSFORMS AND PROPERTIES

• BILATERAL TRANSFORMS FOR SIGNALS THAT ARE NONZERO FOR n < 0

Signal               Bilateral Transform        ROC
-u[-n - 1]           1/(1 - z^-1)               |z| < 1
-a^n u[-n - 1]       1/(1 - az^-1)              |z| < |a|
-n a^n u[-n - 1]     az^-1/(1 - az^-1)^2        |z| < |a|

E.2  z-Transform Properties

Signal             Unilateral Transform    Bilateral Transform    ROC
x[n]               X(z)                    X(z)                   R_x
y[n]               Y(z)                    Y(z)                   R_y
ax[n] + by[n]      aX(z) + bY(z)           aX(z) + bY(z)          At least R_x ∩ R_y
x[n - k]           See below               z^-k X(z)              R_x, except possibly |z| = 0, ∞
a^n x[n]           X(z/a)                  X(z/a)                 |a|R_x
x[-n]              —                       X(1/z)                 1/R_x
x[n] * y[n]        X(z)Y(z)                X(z)Y(z)               At least R_x ∩ R_y
nx[n]              -z (d/dz)X(z)           -z (d/dz)X(z)          R_x, except possibly addition
                                                                  or deletion of z = 0

• UNILATERAL z-TRANSFORM TIME-SHIFT PROPERTY

x[n - k]  ←z_u→  x[-k] + x[-k + 1]z^-1 + ··· + x[-1]z^-(k-1) + z^-k X(z)   for k > 0
x[n + k]  ←z_u→  -x[0]z^k - x[1]z^(k-1) - ··· - x[k - 1]z + z^k X(z)       for k > 0
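As a brief worked instance of the time-shift property (an added illustration, not part of the original table), take x[n] = a^n u[n], whose unilateral z-transform is X(z) = 1/(1 - az^{-1}). For k = 1 the property gives

x[n-1] \;\longleftrightarrow\; x[-1] + z^{-1}X(z) = \frac{z^{-1}}{1 - az^{-1}}, \qquad
x[n+1] \;\longleftrightarrow\; -x[0]\,z + zX(z) = \frac{a}{1 - az^{-1}},

which agrees with transforming a^{n-1}u[n-1] and a^{n+1}u[n+1] directly.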

single variable, the signal is said to be one-dimensional. A speech signal is an example of
a one-dimensional signal whose amplitude varies with time, depending on the spoken word
and who speaks it. When the function depends on two or more variables, the signal is said
to be multidimensional. An image is an example of a two-dimensional signal, with the
horizontal and vertical coordinates of the image representing the two dimensions.

1.2  What Is a System?

In the examples of signals mentioned above, there is always a system associated with the
generation of each signal and another system associated with the extraction of information
from the signal. For example, in speech communication, a sound source or signal excites
the vocal tract, which represents a system. The processing of speech signals usually relies
on the use of our ears and auditory pathways in the brain. In the situation described here,
the systems responsible for the production and reception of signals are biological in nature.
They could also be performed using electronic systems that try to emulate or mimic their
biological counterparts. For example, the processing of a speech signal may be performed
by an automatic speech recognition system in the form of a computer program that rec-
ognizes words or phrases.
There is no unique purpose for a system. Rather, the purpose depends on the appli-
cation of interest. In an automatic speaker recognition system, the function of the system
is to extract information from an incoming speech signal for the purpose of recognizing
or identifying the speaker. In a communication system, the function of the system is to
transport the information content of a message signal over a communication channel and
deliver it to a destination in a reliable fashion. In an aircraft landing system, the require-
ment is to keep the aircraft on the extended centerline of a runway.
A system is formally defined as an entity that manipulates one or more signals to
accomplish a function, thereby yielding new signals. The interaction between a system and
its associated signals is illustrated schematically in Fig. 1.1. The descriptions of the input
and output signals naturally depend on the intended application of the system:
• In an automatic speaker recognition system, the input signal is a speech (voice) signal,
the system is a computer, and the output signal is the identity of the speaker.
• In a communication system, the input signal could be a speech signal or computer
data, the system itself is made up of the combination of a transmitter, channel, and
receiver, and the output signal is an estimate of the original message signal.
• In an aircraft landing system, the input signal is the desired position of the aircraft
relative to the runway, the system is the aircraft, and the output signal is a correction
to the lateral position of the aircraft.

FIGURE 1.1  Block diagram representation of a system.

1.3  Overview of Specific Systems

In describing what we mean by signals and systems in the previous two sections, we men-
tioned several applications of signals and systems. In this section we will expand on five
of those application systems, namely, communication systems, control systems, remote
sensing, biomedical signal processing, and auditory systems.

FIGURE 1.2  Elements of a communication system. The transmitter changes the message signal
into a form suitable for transmission over the channel. The receiver processes the channel output
(i.e., the received signal) to produce an estimate of the message signal.

• COMMUNICATION SYSTEMS

There are three basic elements to every communication system, namely, transmitter, chan-
nel, and receiver, as depicted in Fig. 1.2. The transmitter is located at one point in space,
the receiver is located at some other point separate from the transmitter, and the channel
is the physical medium that connects them together. Each of these three elements may be
viewed as a system with associated signals of its own. The purpose of the transmitter is to
convert the message signal produced by a source of information into a form suitable for
transmission over the channel. The message signal could be a speech signal, television
(video) signal, or computer data. The channel may be an optical fiber, coaxial cable, sat-
ellite channel, or mobile radio channel; each of these channels has its specific area of
application.
As the transmitted signal propagates over the channel, it is distorted due to the
physical characteristics of the channel. Moreover, noise and interfering signals (originating
from other sources) contaminate the channel output, with the result that the received signal
is a corrupted version of the transmitted signal. The function of the receiver is to operate
on the received signal so as to reconstruct a recognizable form (i.e., produce an estimate)
of the original message signal and deliver it to the user destination. The signal-processing
role of the receiver is thus the reverse of that of the transmitter; in addition, the receiver
reverses the effects of the channel.
Details of the operations performed in the transmitter and receiver depend on the
type of communication system being considered. The communication system can be of an
analog or digital type. In signal-processing terms, the design of an analog communication
system is relatively simple. Specifically, the transmitter consists of a modulator and the
receiver consists of a demodulator. Modulation is the process of converting the message
signal into a form that is compatible with the transmission characteristics of the channel.
Ordinarily, the transmitted signal is represented as amplitude, phase, or frequency varia-
tion of a sinusoidal carrier wave. We thus speak of amplitude modulation, phase modu-
lation, or frequency modulation, respectively. Correspondingly, through the use of ampli-
tude demodulation, phase demodulation, or frequency demodulation, an estimate of the
original message signal is produced at the receiver output. Each one of these analog mod-
ulation/demodulation techniques has its own advantages and disadvantages.
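For reference, the three analog modulation formats just named can be written compactly. This is a standard parameterization rather than one introduced at this point in the text; A_c denotes the carrier amplitude, ω_c the carrier frequency, m(t) the message signal, and k_a, k_p, k_f are illustrative sensitivity constants:

s_{AM}(t) = A_c\,[1 + k_a m(t)]\cos(\omega_c t)
s_{PM}(t) = A_c\cos\bigl(\omega_c t + k_p m(t)\bigr)
s_{FM}(t) = A_c\cos\Bigl(\omega_c t + 2\pi k_f \int_0^t m(\tau)\,d\tau\Bigr)

In each case the message rides on the sinusoidal carrier wave through its amplitude, phase, or instantaneous frequency, respectively.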
In contrast, a digital communication system is considerably more complex, as de-
scribed here. If the message signal is of analog form, as in speech and video signals, the
transmitter performs the following operations to convert it into digital form:
• Sampling, which converts the message signal into a sequence of numbers, with each
number representing the amplitude of the message signal at a particular instant of
time.

• Quantization, which involves representing each number produced by the sampler to
the nearest level selected from a finite number of discrete amplitude levels. For ex-
ample, we may represent each sample as a 16-bit binary number, in which case there
are 2^16 amplitude levels. After the combination of sampling and quantization, we
have a representation of the message signal that is discrete in both time and
amplitude.
• Coding, the purpose of which is to represent each quantized sample by a codeword
made up of a finite number of symbols. For example, in a binary code the symbols
may be 1's or 0's.
Unlike the operations of sampling and coding, quantization is completely irreversible;
that is, a loss of information is always incurred by its application. However, this loss
can be made small, and nondiscernible for all practical purposes, by using a quantizer
with a sufficiently large number of discrete amplitude levels. As the number of discrete
amplitude levels increases, the length of the codeword must also increase in a correspond-
ing way.
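The sampling and quantization steps are easy to mimic numerically. The following MATLAB fragment is a minimal sketch of the idea; the test signal, sampling rate, and word length are illustrative choices, not values taken from the text:

f0 = 100;                      % message frequency in hertz (illustrative)
Ts = 1/8000;                   % sampling interval: 8000 samples per second
n  = 0:999;                    % discrete-time index
m  = cos(2*pi*f0*n*Ts);        % sampling: m[n] = m(n*Ts)

B     = 16;                    % bits per sample, giving 2^16 amplitude levels
delta = 2/2^B;                 % quantizer step size for amplitudes in [-1, 1]
mq    = delta*round(m/delta);  % quantization to the nearest level
qerr  = mean((m - mq).^2);     % average power of the (irreversible) quantization error

With B = 16 the error power qerr is tiny, which is the sense in which the loss incurred by quantization can be made nondiscernible for practical purposes.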
If, however, the source of information is discrete to begin with, as in the case of a
digital computer, none of the above operations would be needed.
The transmitter may involve additional operations, namely, data compression and
channel encoding. The purpose of data compression is to remove redundant information
from the message signal and thereby provide for efficient utilization of the channel by
reducing the number of bits/sample required for transmission. Channel encoding, on the
other hand, involves the insertion of redundant elements (e.g., extra symbols) into the
codeword in a controlled manner; this is done to provide protection against noise and
interfering signals picked up during the course of transmission through the channel. Fi-
nally, the coded signal is modulated onto a carrier wave (usually sinusoidal) for transmis-
sion over the channel.
At the receiver, the above operations are performed in reverse order. An estimate of
the original message signal is thereby produced and delivered to the user destination. How-
ever, as mentioned previously, quantization is irreversible and therefore has no counterpart
in the receiver.
It is apparent from this discussion that the use of digital communications may require
a considerable amount of electronic circuitry. This is not a significant problem since the
electronics are relatively inexpensive, due to the ever-increasing availability of very-
large-scale-integrated (VLSI) circuits in the form of silicon chips. Indeed, with continuing
improvements in the semiconductor industry, digital communications are often more cost
effective than analog communications.
There are two basic modes of communication:
1. Broadcasting, which involves the use of a single powerful transmitter and numerous
receivers that are relatively cheap to build. Here information-bearing signals flow
only in one direction.
2. Point-to-point communication, in which the communication process takes place over
a link between a single transmitter and a single receiver. In this case, there is usually
a bidirectional flow of information-bearing signals. There is a transmitter and re-
ceiver at each end of the link.
The broadcasting mode of communication is exemplified by the radio and television
that are integral parts of our daily lives. On the other hand, the ubiquitous telephone
provides the means for one form of point-to-point communication. Note, however, that
in this case the link is part of a highly complex telephone network designed to accom-
modate a large number of users on demand.

Another example of point-to-point communication is the deep-space communica-
tions link between an Earth station and a robot navigating the surface of a distant planet.
Unlike telephonic communication, the composition of the message signal depends on the
direction of the communication process. The message signal may be in the form of com-
puter-generated instructions transmitted from an Earth station that command the robot
to perform specific maneuvers, or it may contain valuable information about the chemical
composition of the soil on the planet that is sent back to Earth for analysis. In order to
reliably communicate over such great distances, it is necessary to use digital communica-
tions. Figure 1.3(a) shows a photograph of the robot, named Pathfinder, which landed on

FIGURE 1.3  (a) Snapshot of Pathfinder exploring the surface of Mars. (b) The 70-meter
(230-foot) diameter antenna located at Canberra, Australia. The surface of the 70-meter reflector
must remain accurate within a fraction of the signal wavelength. (Courtesy of Jet Propulsion
Laboratory.)

Mars on July 4, 1997, a historic day in the National Aeronautics and Space Administra-
tion's (NASA's) scientific investigation of the solar system. Figure 1.3(b) shows a photo-
graph of the high-precision, 70-meter antenna located at Canberra, Australia, which is an
integral part of NASA's worldwide Deep Space Network (DSN). The DSN provides the
vital two-way communications link that guides and controls (unmanned) planetary ex-
plorers and brings back images and new scientific information collected by them. The
successful use of DSN for planetary exploration represents a triumph of communication
theory and technology over the challenges presented by the unavoidable presence of noise.
Unfortunately, every communication system suffers from the presence of channel
noise in the received signal. Noise places severe limits on the quality of received messages.
Owing to the enormous distance between our own planet Earth and Mars, for example,
the average power of the information-bearing component of the received signal, at either
end of the link, is relatively small compared to the average power of the noise component.
Reliable operation of the link is achieved through the combined use of (1) large antennas
as part of the DSN and (2) error control. For a parabolic-reflector antenna (i.e., the type
of antenna portrayed in Fig. 1.3(b)), the effective area is generally between 50% and 65%
of the physical area of the antenna. The received power available at the terminals of the
antenna is equal to the effective area times the power per unit area carried by the incident
electromagnetic wave. Clearly, the larger the antenna, the larger the received signal power
will be, hence the use of large antennas in DSN.
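Stated as an equation (a small worked sketch; the symbols are introduced here only for illustration): if a reflector has physical aperture area A and aperture efficiency η, so that its effective area is A_eff = ηA with η typically between 0.5 and 0.65 as noted above, and the incident wave carries a power density S (power per unit area), then the received power is

P_r = A_{\mathrm{eff}}\,S = \eta A S .

For the 70-meter dish of Fig. 1.3(b), A = \pi (35\ \mathrm{m})^2 \approx 3.85 \times 10^3\ \mathrm{m}^2, so doubling the reflector diameter would quadruple A, and with it the received signal power, for the same incident power density.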
Turning next to the issue of error control, it involves the use of a channel encoder
at the transmitter and a channel decoder at the receiver. The channel encoder accepts
message bits and adds redundancy according to a prescribed rule, thereby producing en-
coded data at a higher bit rate. The redundant bits are added for the purpose of protection
against channel noise. The channel decoder exploits the redundancy to decide which mes-
sage bits were actually sent. The combined goal of the channel encoder and decoder is to
minimize the effect of channel noise; that is, the number of errors between the channel
encoder input (derived from the source of information) and the decoder output (delivered
to the user by the receiver) is minimized on average.
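The simplest concrete illustration of this encoder/decoder pairing is a repetition code. The MATLAB fragment below is a minimal sketch of that idea; it is an illustrative example, not the coding scheme used by the DSN:

bits  = randi([0 1], 1, 100);                 % message bits from the source
coded = kron(bits, [1 1 1]);                  % channel encoder: send each bit three times
noisy = xor(coded, rand(size(coded)) < 0.1);  % channel flips roughly 10% of the bits
rx    = reshape(noisy, 3, []);                % group the received bits by codeword
decoded = sum(rx, 1) >= 2;                    % channel decoder: majority vote
numErrs = sum(decoded ~= bits);               % residual message-bit errors after decoding

Tripling the bit rate buys a decoder that corrects any single flipped bit per codeword, which is the essential trade of controlled redundancy for protection against channel noise.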

• CONTROL SYSTEMS

Control of physical systems is widespread in the application of signals and systems in our
industrial society. As some specific examples where control is applied, we mention aircraft
autopilots, mass-transit vehicles, automobile engines, machine tools, oil refineries, paper
mills, nuclear reactors, power plants, and robots. The object to be controlled is commonly
referred to as a plant; in this context, an aircraft is a plant.
There are many reasons for using control systems. From an engineering viewpoint,
the two most important ones are the attainment of a satisfactory response and robust
performance, as described here:
1. Response. A plant is said to produce a satisfactory response if its output follows or
tracks a specified reference input. The process of holding the plant output close to
the reference input is called regulation.
2. Robustness. A control system is said to be robust if it exhibits good regulation,
despite the presence of external disturbances (e.g., turbulence affecting the flight of
an aircraft) and in the face of changes in the plant parameters due to varying envi-
ronmental conditions.
The attainment of these desirable properties usually requires the use of feedback, as
illustrated in Fig. 1.4. The system in Fig. 1.4 contains the abstract elements of a control


FIGURE 1.4  Block diagram of a feedback control system. The controller drives the plant, whose
disturbed output drives the sensor(s). The resulting feedback signal is subtracted from the refer-
ence input to produce an error signal e(t), which, in turn, drives the controller. The feedback loop
is thereby closed.

system and is referred to as a closed-loop control system or feedback control system. For
example, in an aircraft landing system the plant is represented by the aircraft body and
actuator, the sensors are used by the pilot to determine the lateral position of the aircraft,
and the controller is a digital computer.
In any event, the plant is described by mathematical operations that generate the
output y(t) in response to the plant input v(t) and the external disturbance ν(t). The sensor
included in the feedback loop measures the plant output y(t) and converts it into another
form, usually electrical. The sensor output r(t) constitutes the feedback signal. It is com-
pared against the reference input x(t) to produce a difference or error signal e(t). This latter
signal is applied to a controller, which, in turn, generates the actuating signal v(t) that
performs the controlling action on the plant. A control system with a single input and
single output, as illustrated in Fig. 1.4, is referred to as a single-input/single-output (SISO)
system. When the number of plant inputs and/or the number of plant outputs is more than
one, the system is referred to as a multiple-input/multiple-output (MIMO) system.
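The signals in Fig. 1.4 can also be related explicitly. As a minimal sketch, assume a single-loop SISO configuration in which the controller and plant together have forward-path transfer function G(s) and the sensor has transfer function H(s); G(s) and H(s) are labels introduced here only for illustration. Then

e(t) = x(t) - r(t), \qquad T(s) = \frac{Y(s)}{X(s)} = \frac{G(s)}{1 + G(s)H(s)},

where L(s) = G(s)H(s) is the loop transfer function and F(s) = 1 + L(s) the return difference, the quantities listed under these names in the Notation section; these descriptions are developed in Chapter 9.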
In either case, the controller may be in the form of a digital computer or micropro-
cessor, in which case we speak of a digital control system. The use of digital control systems
is becoming more and more common because of the flexibility and high degree of accuracy
afforded by the use of a digital computer as the controller. Because of its very nature, the
use of a digital control system involves the operations of sampling, quantization, and
coding that were described previously.
Figure 1.5 shows the photograph of a NASA (National Aeronautics and Space Ad-
ministration) space shuttle launch, which relies on the use of a digital computer for its
control.

• REMOTE SENSING

Remote sensing is defined as the process of acquiring information about an object of
interest without being in physical contact with it. Basically, the acquisition of information
is accomplished by detecting and measuring the changes that the object imposes on the
surrounding field. The field can be electromagnetic, acoustic, magnetic, or gravitational,
depending on the application of interest. The acquisition of information can be performed
in a passive manner, by listening to the field (signal) that is naturally emitted by the object
and processing it, or in an active manner, by purposely illuminating the object with a well-
defined field (signal) and processing the echo (i.e., signal returned) from the object.
The above definition of remote sensing is rather broad in that it applies to every
possible field. In practice, however, the term ''remote sensing'' is commonly used in the

FIGURE 1.5  NASA space shuttle launch. (Courtesy of NASA.)

context of electromagnetic fields, with the techniques used for information acquisition
covering the whole electromagnetic spectrum. It is this specialized form of remote sensing
that we are concerned with here.
The scope of remote sensing has expanded enormously since the 1960s due to the
advent of satellites and planetary probes as space platforms for the sensors, and the avail-
ability of sophisticated digital signal-processing techniques for extracting information from
the data gathered by the sensors. In particular, sensors on Earth-orbiting satellites provide
highly valuable information about global weather patterns and dynamics of clouds, surface
vegetation cover and its seasonal variations, and ocean surface temperatures. Most im-
portantly, they do so in a reliable way and on a continuing basis. In planetary studies,
spaceborne sensors have provided us with high-resolution images of planetary surfaces;
the images, in turn, have uncovered for us new kinds of physical phenomena, some similar
to and others completely different from what we are familiar with on our planet Earth.
The electromagnetic spectrum extends from low-frequency radio waves through mi-
crowave, submillimeter, infrared, visible, ultraviolet, x-ray, and gamma-ray regions of the
spectrum. Unfortunately, a single sensor by itself can cover only a small part of the elec-
tromagnetic spectrum, with the mechanism responsible for wave-matter interaction being
influenced by a limited number of physical properties of the object of interest. If, therefore,
we are to undertake a detailed study of a planetary surface or atmosphere, then the si-
multaneous use of multiple sensors covering a large part of the electromagnetic spectrum
is required. For example, to study a planetary surface, we may require a suite of sensors
covering selected bands as follows:

• Radar sensors to provide information on the surface physical properties of the planet
under study (e.g., topography, roughness, moisture, and dielectric constant)
• Infrared sensors to measure the near-surface thermal properties of the planet
• Visible and near-infrared sensors to provide information about the surface chemical
composition of the planet
• X-ray sensors to provide information on radioactive materials contained in the planet
The data gathered by these highly diverse sensors are then processed on a computer to
generate a set of images that can be used collectively to enhance the knowledge of a scientist
studying the planetary surface.
Among the electromagnetic sensors mentioned above, a special type of radar known
as synthetic aperture radar (SAR) stands out as a unique imaging system in remote sensing.
It offers the following attractive features:
• Satisfactory operation day and night and under all weather conditions
• High-resolution imaging capability that is independent of sensor altitude or
wavelength
The realization of a high-resolution image with radar requires the use of an antenna with
large aperture. From a practical perspective, however, there is a physical limit on the size
of an antenna that can be accommodated on an airborne or spaceborne platform. In a
SAR system, a large antenna aperture is synthesized by signal-processing means, hence the
name ''synthetic aperture radar.'' The key idea behind SAR is that an array of antenna
elements equally spaced along a straight line is equivalent to a single antenna moving along
the array line at a uniform speed. This is true provided that we satisfy the following
requirement: the signals received by the single antenna at equally spaced points along the
array line are coherently recorded; that is, amplitude and phase relationships among the
received signals are maintained. Coherent recording ensures that signals received from
the single antenna correspond to the signals received from the individual elements of an
equivalent antenna array. In order to obtain a high-resolution image from the single-
antenna signals, highly sophisticated signal-processing operations are necessary. A central
operation in the signal processing is the Fourier transform, which is implemented efficiently
on a digital computer using an algorithm known as the fast Fourier transform (FFT) al-
gorithm. Fourier analysis of signals is one of the main focal points of this book.
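In MATLAB, the FFT is available directly through the fft command listed in the Notation section. The fragment below is a minimal sketch of FFT-based Fourier analysis applied to an illustrative two-tone test signal; the signal and parameter values are chosen here purely for illustration:

N  = 512;                        % number of samples (illustrative)
Ts = 1e-3;                       % sampling interval of 1 millisecond
n  = 0:N-1;                      % discrete-time index
x  = cos(2*pi*50*n*Ts) + 0.5*cos(2*pi*120*n*Ts);   % 50 Hz and 120 Hz tones
X  = fft(x, N);                  % Fourier analysis via the FFT algorithm
f  = (0:N-1)/(N*Ts);             % frequency axis in hertz
plot(f(1:N/2), abs(X(1:N/2)))    % magnitude spectrum up to half the sampling rate

The two tones appear as peaks near 50 Hz and 120 Hz, computed in a number of operations that grows only as N log N rather than N^2.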
The photograph in Fig. 1.6 shows a perspective view of Mt. Shasta (California),
which was derived from a stereo pair of SAR images acquired from Earth orbit with the
Shuttle Imaging Radar (SIR-B). The color version of this photograph appears on the color
plate.

• BIOMEDICAL SIGNAL PROCESSING

The goal of biomedical signal processing is to extract information from a biological signal
that helps us to further improve our understanding of basic mechanisms of biological
function or aids us in the diagnosis or treatment of a medical condition. The generation
of many biological signals found in the human body is traced to the electrical activity of
large groups of nerve cells or muscle cells. Nerve cells in the brain are commonly referred
to as neurons. Figure 1.7 shows morphological types of neurons identifiable in a monkey
cerebral cortex, based on studies of primary somatic sensory and motor cortex. This figure
illustrates the many different shapes and sizes of neurons that exist.
Irrespective of the signal origin, biomedical signal processing begins with a temporal
record of the biological event of interest. For example, the electrical activity of the heart

.": . ,,
..... .... . ....... . ,.,...
+

~
' . ~--
....:..)IQ. '
·· ... ",
. .,. -
. .. .......,
. ' ..
'
.,

., .
,.
. ...~· . . ,.

FIGURE 1.6 Pcrspcctive vie\v <,f l\1ount Shasta (Californía) derived from a pair of stereo radar
images acquired from orbít ,,vith the Shuttle lmaging Radar (SIR-B). (Courtesy of Jet Propulsi<)11
Laboratorv.)
, Scc Color Plate.
.

is represented by a record called the electrocardiogram (ECG). The ECG represents changes in the potential (voltage) due to electrochemical processes involved in the formation and spatial spread of electrical excitations in the heart cells. Accordingly, detailed inferences about the heart can be made from the ECG.
Another important example of a biological signal is the electroencephalogram (EEG). The EEG is a record of fluctuations in the electrical activity of large groups of neurons in the brain.


FIGURE 1.7 Morphological types of nerve cells (neurons) identifiable in a monkey cerebral cortex, based on studies of primary somatic sensory and motor cortex. (Reproduced from E. R. Kandel, J. H. Schwartz, and T. M. Jessel, Principles of Neural Science, Third Edition, 1991; courtesy of Appleton and Lange.)

Specifically, the EEG measures the electrical field associated with the current flowing through a group of neurons. To record the EEG (or the ECG for that matter) at least two electrodes are needed. An active electrode is placed over the particular site of neuronal activity that is of interest, and a reference electrode is placed at some remote distance from this site; the EEG is measured as the voltage or potential difference between the active and reference electrodes. Figure 1.8 shows three examples of EEG signals recorded from the hippocampus of a rat.
A major issue of concern in biomedical signal processing, in the context of ECG, EEG, or some other biological signal, is the detection and suppression of artifacts. An artifact refers to that part of the signal produced by events that are extraneous to the biological event of interest. Artifacts arise in a biological signal at different stages of processing and in many different ways, as summarized here:
• Instrumental artifacts, generated by the use of an instrument. An example of an instrumental artifact is the 60-Hz interference picked up by the recording instruments from the electrical mains power supply.
• Biological artifacts, in which one biological signal contaminates or interferes with another. An example of a biological artifact is the electrical potential shift that may be observed in the EEG due to heart activity.
• Analysis artifacts, which may arise in the course of processing the biological signal to produce an estimate of the event of interest.
Analysis artifacts are, in a way, controllable. For example, roundoff errors due to quantization of signal samples, which arise from the use of digital signal processing, can be made nondiscernible for all practical purposes by making the number of discrete amplitude levels in the quantizer large enough.
What about instrumental and biological artifacts? A common method of reducing their effects is through the use of filtering.


FIGURE 1.8 The traces shown in (a), (b), and (c) are three examples of EEG signals recorded from the hippocampus of a rat. Neurobiological studies suggest that the hippocampus plays a key role in certain aspects of learning or memory.

A filter is a system that performs a desired operation on a signal or signals. It passes signals containing frequencies in one frequency range, termed the filter passband, and removes signals containing frequencies in other frequency ranges. Assuming that we have a priori knowledge concerning the signal of interest, we may estimate the range of frequencies inside which the significant components of the desired signal are located. Then, by designing a filter whose passband corresponds to the frequencies of the desired signal, artifacts with frequency components outside this passband are removed by the filter. The assumption made here is that the desired signal and the artifacts contaminating it occupy essentially nonoverlapping frequency bands. If, however, the frequency bands overlap each other, then the filtering problem becomes more difficult and requires a solution beyond the scope of the present book.
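To make the idea concrete, the following minimal sketch in Python suppresses a 60-Hz instrumental artifact in a synthetic record with a band-stop filter. The sampling rate, the signal content, and the filter order are illustrative assumptions, not values taken from the text, and the scipy routines are used here only as one possible realization of such a filter.

import numpy as np
from scipy import signal

fs = 500.0                                   # assumed sampling rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)            # two seconds of samples
desired = np.sin(2 * np.pi * 4 * t)          # stand-in for a low-frequency biological signal
artifact = 0.5 * np.sin(2 * np.pi * 60 * t)  # 60-Hz interference from the mains supply
x = desired + artifact                       # recorded signal = desired signal + artifact

# Band-stop filter whose stopband brackets 60 Hz; the passband preserves the
# band where the desired signal is assumed to live.
b, a = signal.butter(4, [55.0, 65.0], btype='bandstop', fs=fs)
y = signal.filtfilt(b, a, x)                 # filtered record with the artifact suppressed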

• AUDITORY SYSTEM

For our last example of a system, we turn to the mammalian auditory system, the function of which is to discriminate and recognize complex sounds on the basis of their frequency content.
Sound is produced by vibrations such as the movements of vocal cords or violin strings. These vibrations result in the compression and rarefaction (i.e., increased or reduced pressure) of the surrounding air. The disturbance so produced radiates outward from the source of sound as an acoustical wave with alternating highs and lows of pressure.
The ear, the organ of hearing, responds to incoming acoustical waves. It has three main parts, with their functions summarized as follows:

• The outer ear aids in the collection of sounds.


• The middle ear provides an acoustic impedance match between the air and the cochlear fluids, thereby conveying the vibrations of the tympanic membrane (eardrum) due to the incoming sounds to the inner ear in an efficient manner.
• The inner ear converts the mechanical vibrations from the middle ear to an "electrochemical" or "neural" signal for transmission to the brain.

The inner ear consists of a bony spiral-shaped, fluid-filled tube, called the cochlea. Sound-induced vibrations of the tympanic membrane are transmitted into the oval window of the cochlea by a chain of bones, called ossicles. The lever action of the ossicles provides some amplification of the mechanical vibrations of the tympanic membrane. The cochlea tapers in size like a cone toward a tip, so that there is a base at the oval window, and an apex at the tip. Through the middle of the cochlea stretches the basilar membrane, which gets wider as the cochlea gets narrower.
The vibratory movement of the tympanic membrane is transmitted as a traveling wave along the length of the basilar membrane, starting from the oval window to the apex at the far end of the cochlea. The wave propagates along the basilar membrane, much like the snapping of a rope tied at one end causes a wave to propagate along the rope from the snapped end to the fixed end. As illustrated in Fig. 1.9, the wave attains its peak amplitude at a specific location along the basilar membrane that depends on the frequency of the incoming sound. Thus, although the wave itself travels along the basilar membrane, the envelope of the wave is "stationary" for a given frequency. The peak displacements for high frequencies occur toward the base (where the basilar membrane is narrowest and stiffest). The peak displacements for low frequencies occur toward the apex (where the basilar membrane is widest and most flexible). That is, as the wave propagates along the basilar membrane, a resonance phenomenon takes place, with the end of the basilar membrane at the base of the cochlea resonating at about 20,000 Hz and its other end at the


FIGURE 1.9 (a) In this diagram, the basilar membrane in the cochlea is depicted as if it were uncoiled and stretched out flat; the "base" and "apex" refer to the cochlea, but the remarks "stiff region" and "flexible region" refer to the basilar membrane. (b) This diagram illustrates the traveling waves along the basilar membrane, showing their envelopes induced by incoming sound at three different frequencies.

apex of the cochlea resonating at about 20 Hz; the resonance frequency of the basilar membrane decreases gradually with distance from base to apex. Consequently, the spatial axis of the cochlea is said to be tonotopically ordered, because each location is associated with a particular resonance frequency or tone.
The basilar membrane is a dispersive medium, in that higher frequencies propagate more slowly than do lower frequencies. In a dispersive medium, we distinguish two different velocities, namely, phase velocity and group velocity. The phase velocity is the velocity at which a crest or valley of the wave propagates along the basilar membrane. The group velocity is the velocity at which the envelope of the wave and its energy propagate.
The mechanical vibrations of the basilar membrane are transduced into electrochemical signals by hair cells that rest in an orderly fashion on the basilar membrane. There are two main types of hair cells: inner hair cells and outer hair cells, with the latter being by far the most numerous type. The outer hair cells are motile elements. That is, they are capable of altering their length, and perhaps other mechanical characteristics, which is believed to be responsible for the compressive nonlinear effect seen in the basilar membrane vibrations. There is also evidence that the outer hair cells contribute to the sharpening of tuning curves from the basilar membrane and on up the system. However, the inner hair cells are the main sites of auditory transduction. Specifically, each auditory neuron synapses with an inner hair cell at a particular location on the basilar membrane. The neurons that synapse with inner hair cells near the base of the basilar membrane are found in the periphery of the auditory nerve bundle, and there is an orderly progression toward synapsing at the apex end of the basilar membrane with movement toward the center of the bundle. The tonotopic organization of the basilar membrane is therefore anatomically preserved in the auditory nerve. The inner hair cells also perform rectification and com-

pression. The mechanical signal is approximately half-wave rectified, thereby responding to motion of the basilar membrane in one direction only. Moreover, the mechanical signal is compressed nonlinearly, such that a large range of incoming sound intensities is reduced to a manageable excursion of electrochemical potential. The electrochemical signals so produced are carried over to the brain, where they are further processed to become our hearing sensations.
In summary, in the cochlea we have a wonderful example of a biological system that operates as a bank of filters tuned to different frequencies and uses nonlinear processing to reduce dynamic range. It enables us to discriminate and recognize complex sounds, despite the enormous differences in intensity levels that can arise in practice.

• ANALOG VERSUS DIGITAL SIGNAL PROCESSING

The signal processing operations involved in building communication systems, control systems, instruments for remote sensing, and instruments for the processing of biological signals, among the many applications of signal processing, can be implemented in two fundamentally different ways: (1) an analog or continuous-time approach and (2) a digital or discrete-time approach. The analog approach to signal processing was dominant for many years, and it remains a viable option for many applications. As the name implies, analog signal processing relies on the use of analog circuit elements such as resistors, capacitors, inductors, transistor amplifiers, and diodes. Digital signal processing, on the other hand, relies on three basic digital computer elements: adders and multipliers (for arithmetic operations) and memory (for storage).
The main attribute of the analog approach is a natural ability to solve differential equations that describe physical systems, without having to resort to approximate solutions for them. These solutions are also obtained in real time irrespective of the input signal's frequency range, since the underlying mechanisms responsible for the operations of the analog approach are all physical in nature. In contrast, the digital approach relies on numerical computations for its operation. The time required to perform these computations determines whether the digital approach is able to operate in real time, that is, to keep up with the changes in the input signal. In other words, the analog approach is assured of real-time operation, but there is no such guarantee for the digital approach.
However, the digital approach has the following important advantages over analog signal processing:
• Flexibility, whereby the same digital machine (hardware) can be used for implementing different versions of a signal-processing operation of interest (e.g., filtering) merely by making changes to the software (program) read into the machine. On the other hand, in the case of an analog machine, the system has to be redesigned every time the signal-processing specifications are changed.
• Repeatability, which refers to the fact that a prescribed signal-processing operation (e.g., control of a robot) can be repeated exactly over and over again when it is implemented by digital means. In contrast, analog systems suffer from parameter variations that can arise due to changes in the supply voltage or room temperature.
For a given signal-processing operation, however, we usually find that the use of a digital approach requires greater circuit complexity than an analog approach. This was an issue of major concern in years past, but this is no longer so. As remarked earlier, the ever-increasing availability of VLSI circuits in the form of silicon chips has made digital electronics relatively cheap. Consequently, we are now able to build digital signal processors that are cost competitive with respect to their analog counterparts over a wide frequency range that includes both speech and video signals. In the final analysis, however, the choice of an analog or digital approach for the solution of a signal-processing problem can only be determined by the application of interest, the resources available, and the cost involved in building the system. It should also be noted that the vast majority of systems built in practice are "mixed" in nature, combining the desirable features of both analog and digital approaches to signal processing.

1.4 Classification of Signals


In this book we will restrict our attention to one-dimensional signals defined as single-valued functions of time. "Single-valued" means that for every instant of time there is a unique value of the function. This value may be a real number, in which case we speak of a real-valued signal, or it may be a complex number, in which case we speak of a complex-valued signal. In either case, the independent variable, namely, time, is real valued.
The most useful method of signal representation for a given situation hinges on the particular type of signal being considered. We may identify five methods of classifying signals based on different features:

1. Continuous-time and discrete-time signals.


One way of classifying signals is on the basis of how they are defined as a function of time. In this context, a signal x(t) is said to be a continuous-time signal if it is defined for all time t. Figure 1.10 represents an example of a continuous-time signal whose amplitude or value varies continuously with time. Continuous-time signals arise naturally when a physical waveform such as an acoustic wave or light wave is converted into an electrical signal. The conversion is effected by means of a transducer; examples include the microphone, which converts sound pressure variations into corresponding voltage or current variations, and the photocell, which does the same for light-intensity variations.
On the other hand, a discrete-time signal is defined only at discrete instants of time. Thus, in this case, the independent variable has discrete values only, which are usually uniformly spaced. A discrete-time signal is often derived from a continuous-time signal by sampling it at a uniform rate. Let 𝒯 denote the sampling period and n denote an integer that may assume positive and negative values. Sampling a continuous-time signal x(t) at time t = n𝒯 yields a sample of value x(n𝒯). For convenience of presentation, we write
x[n] = x(n𝒯),   n = 0, ±1, ±2, ...   (1.1)
Thus a discrete-time signal is represented by the sequence of numbers ..., x[-2], x[-1], x[0], x[1], x[2], ..., which can take on a continuum of values. Such a sequence of numbers is referred to as a time series, written as {x[n], n = 0, ±1, ±2, ...} or simply x[n]. The


FIGURE 1.10 Continuous-time signal.




FIGURE 1.11 (a) Continuous-time signal x(t). (b) Representation of x(t) as a discrete-time signal x[n].

latter notation is used throughout this book. Figure 1.11 illustrates the relationship between a continuous-time signal x(t) and the discrete-time signal x[n] derived from it, as described above.
Throughout this book, we use the symbol t to denote time for a continuous-time signal and the symbol n to denote time for a discrete-time signal. Similarly, parentheses (·) are used to denote continuous-valued quantities, while brackets [·] are used to denote discrete-valued quantities.
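The following minimal Python sketch illustrates Eq. (1.1): a continuous-time signal, here an assumed 5-Hz cosine, is sampled at a uniform rate to produce the sequence x[n] = x(n𝒯). The choice of signal and sampling period is purely illustrative.

import numpy as np

def x_continuous(t):
    # stand-in for a continuous-time signal x(t); a 5-Hz cosine is assumed here
    return np.cos(2 * np.pi * 5 * t)

Ts = 0.01                       # sampling period (the quantity 𝒯 in Eq. (1.1)), assumed value
n = np.arange(-20, 21)          # integer time index n = -20, ..., 20
x_n = x_continuous(n * Ts)      # discrete-time signal x[n] = x(n𝒯)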

2. Even and odd signals.


A continuous-time signal x(t) is said to be an even signal if it satisfies the condition
x(-t) = x(t)   for all t   (1.2)
The signal x(t) is said to be an odd signal if it satisfies the condition
x(-t) = -x(t)   for all t   (1.3)
In other words, even signals are symmetric about the vertical axis or time origin, whereas odd signals are antisymmetric (asymmetric) about the time origin. Similar remarks apply to discrete-time signals.

EXAMPLE 1.1 Develop the even/odd decomposition of a general signal x(t) by applying the definitions of Eqs. (1.2) and (1.3).
Solution: Let the signal x(t) be expressed as the sum of two components x_e(t) and x_o(t) as follows:
x(t) = x_e(t) + x_o(t)
Define x_e(t) to be even and x_o(t) to be odd; that is,
x_e(-t) = x_e(t)
and
x_o(-t) = -x_o(t)
Replacing t with -t in the expression for x(t), we may then write
x(-t) = x_e(-t) + x_o(-t)
      = x_e(t) - x_o(t)
Solving for x_e(t) and x_o(t), we thus obtain
x_e(t) = (1/2)[x(t) + x(-t)]
and
x_o(t) = (1/2)[x(t) - x(-t)]
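A minimal numerical sketch of this decomposition, written in Python, is given below. The test signal and the symmetric time grid are illustrative assumptions; the essential point is that the reflected samples x(-t) are available so that the formulas of Example 1.1 can be applied directly.

import numpy as np

t = np.linspace(-1.0, 1.0, 201)      # time grid symmetric about t = 0 (assumed)
x = np.exp(t)                        # example signal x(t) = e^t (assumed)

x_reflected = x[::-1]                # samples of x(-t) on the same grid
x_even = 0.5 * (x + x_reflected)     # x_e(t) = (1/2)[x(t) + x(-t)]
x_odd  = 0.5 * (x - x_reflected)     # x_o(t) = (1/2)[x(t) - x(-t)]

# The two components are even and odd, respectively, and add back to x(t).
assert np.allclose(x_even, x_even[::-1])
assert np.allclose(x_odd, -x_odd[::-1])
assert np.allclose(x_even + x_odd, x)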

The above definitions of even and odd signals assume that the signals are real valued. Care has to be exercised, however, when the signal of interest is complex valued. In the case of a complex-valued signal, we may speak of conjugate symmetry. A complex-valued signal x(t) is said to be conjugate symmetric if it satisfies the condition
x(-t) = x*(t)   (1.4)
where the asterisk denotes complex conjugation. Let
x(t) = a(t) + jb(t)
where a(t) is the real part of x(t), b(t) is the imaginary part, and j is the square root of -1. The complex conjugate of x(t) is
x*(t) = a(t) - jb(t)
From Eqs. (1.2) to (1.4), it follows therefore that a complex-valued signal x(t) is conjugate symmetric if its real part is even and its imaginary part is odd. A similar remark applies to a discrete-time signal.

• Drill Problem 1.1 Consider the pair of signals shown in Fig. 1.12. Which of these two signals is even, and which one is odd?
Answer: x₁(t) is even, and x₂(t) is odd. •
• Drill Problem 1.2 The signals x₁(t) and x₂(t) shown in Figs. 1.12(a) and (b) constitute the real and imaginary parts of a complex-valued signal x(t). What form of symmetry does x(t) have?
Answer: x(t) is conjugate symmetric. •


FIGURE 1.12 (a) One example of a continuous-time signal. (b) Another example of a continuous-time signal.

3. Periodic signals, nonperiodic signals.


A periodic signal x(t) is a function that satisfies the condition
x(t) = x(t + T)   for all t   (1.5)
where T is a positive constant. Clearly, if this condition is satisfied for T = T₀, say, then it is also satisfied for T = 2T₀, 3T₀, 4T₀, .... The smallest value of T that satisfies Eq. (1.5) is called the fundamental period of x(t). Accordingly, the fundamental period T defines the duration of one complete cycle of x(t). The reciprocal of the fundamental period T is called the fundamental frequency of the periodic signal x(t); it describes how frequently the periodic signal x(t) repeats itself. We thus formally write
f = 1/T   (1.6)
The frequency f is measured in hertz (Hz) or cycles per second. The angular frequency, measured in radians per second, is defined by
ω = 2π/T   (1.7)
since there are 2π radians in one complete cycle. To simplify terminology, ω is often referred to simply as frequency.
Any signal x(t) for which there is no value of T to satisfy the condition of Eq. (1.5) is called an aperiodic or nonperiodic signal.
Figures 1.13(a) and (b) present examples of periodic and nonperiodic signals, respectively. The periodic signal shown here represents a square wave of amplitude A = 1 and period T, and the nonperiodic signal represents a rectangular pulse of amplitude A and duration T₁.

• Drill Problem 1.3 Figure 1.14 shows a triangular wave. What is the fundamental frequency of this wave? Express the fundamental frequency in units of Hz or rad/s.
Answer: 5 Hz, or 10π rad/s. •
The classification of signals into periodic and nonperiodic signals presented thus far applies to continuous-time signals. We next consider the case of discrete-time signals. A discrete-time signal x[n] is said to be periodic if it satisfies the condition
x[n] = x[n + N]   for all integers n   (1.8)


FIGURE 1.13 (a) Square wave with amplitude A = 1 and period T = 0.2 s. (b) Rectangular pulse of amplitude A and duration T₁.


FIGURE 1.14 Triangular wave alternating between -1 and +1 with fundamental period of 0.2 second.

where N is a positive integer. The smallest value of integer N for which Eq. (1.8) is satisfied is called the fundamental period of the discrete-time signal x[n]. The fundamental angular frequency or, simply, fundamental frequency of x[n] is defined by
Ω = 2π/N   (1.9)
which is measured in radians.
The differences between the defining equations (1.5) and (1.8) should be carefully noted. Equation (1.5) applies to a periodic continuous-time signal whose fundamental period T has any positive value. On the other hand, Eq. (1.8) applies to a periodic discrete-time signal whose fundamental period N can only assume a positive integer value.
Two examples of discrete-time signals are shown in Figs. 1.15 and 1.16; the signal of Fig. 1.15 is periodic, whereas that of Fig. 1.16 is aperiodic.
• Drill Problem 1.4 What is the fundamental frequency of the discrete-time square wave shown in Fig. 1.15?
Answer: π/4 radians. •
4. Deterministic signals, random signals.
A deterministic signal is a signal about which there is no uncertainty with respect to its
value at any time. Accordingly, we find that deterministic signals may be modeled as

FIGURE 1.15 Discrete-time square wave alternating between -1 and +1.



FIGURE 1.16 Aperiodic discrete-time signal consisting of three nonzero samples.

completely specified functions of time. The square wave shown in Fig. 1.13(a) and the rectangular pulse shown in Fig. 1.13(b) are examples of deterministic signals, and so are the signals shown in Figs. 1.15 and 1.16.
On the other hand, a random signal is a signal about which there is uncertainty before its actual occurrence. Such a signal may be viewed as belonging to an ensemble or group of signals, with each signal in the ensemble having a different waveform. Moreover, each signal within the ensemble has a certain probability of occurrence. The ensemble of such signals is referred to as a random process. The noise generated in the amplifier of a radio or television receiver is an example of a random signal. Its amplitude fluctuates between positive and negative values in a completely random fashion. The EEG signal, exemplified by the waveforms shown in Fig. 1.8, is another example of a random signal.

5. Energy signals, power signals.


In electrical systems, a signal may represent a voltage or a current. Consider a voltage v(t) developed across a resistor R, producing a current i(t). The instantaneous power dissipated in this resistor is defined by
p(t) = v²(t)/R   (1.10)
or, equivalently,
p(t) = R i²(t)   (1.11)
In both cases, the instantaneous power p(t) is proportional to the squared amplitude of the signal. Furthermore, for a resistance R of 1 ohm, we see that Eqs. (1.10) and (1.11) take on the same mathematical form. Accordingly, in signal analysis it is customary to define power in terms of a 1-ohm resistor, so that, regardless of whether a given signal x(t) represents a voltage or a current, we may express the instantaneous power of the signal as
p(t) = x²(t)   (1.12)
Based on this convention, we define the total energy of the continuous-time signal x(t) as
E = lim_{T→∞} ∫_{-T/2}^{T/2} x²(t) dt = ∫_{-∞}^{∞} x²(t) dt   (1.13)
and its average power as
P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x²(t) dt   (1.14)

From Eq. (1.14) we readily see that the average power of a periodic signal x(t) of fundamental period T is given by
P = (1/T) ∫_{-T/2}^{T/2} x²(t) dt   (1.15)
The square root of the average power P is called the root mean-square (rms) value of the signal x(t).
In the case of a discrete-time signal x[n], the integrals in Eqs. (1.13) and (1.14) are replaced by corresponding sums. Thus the total energy of x[n] is defined by
E = Σ_{n=-∞}^{∞} x²[n]   (1.16)
and its average power is defined by
P = lim_{N→∞} (1/(2N)) Σ_{n=-N}^{N} x²[n]   (1.17)
Here again we see from Eq. (1.17) that the average power in a periodic signal x[n] with fundamental period N is given by
P = (1/N) Σ_{n=0}^{N-1} x²[n]

A signal is referred to as an energy signal if and only if the total energy of the signal satisfies the condition
0 < E < ∞
On the other hand, it is referred to as a power signal if and only if the average power of the signal satisfies the condition
0 < P < ∞
The energy and power classifications of signals are mutually exclusive. In particular, an energy signal has zero average power, whereas a power signal has infinite energy. It is also of interest to note that periodic signals and random signals are usually viewed as power signals, whereas signals that are both deterministic and nonperiodic are energy signals.

• Drill Problem 1.5
(a) What is the total energy of the rectangular pulse shown in Fig. 1.13(b)?
(b) What is the average power of the square wave shown in Fig. 1.13(a)?
Answer: (a) A²T₁. (b) 1. •
• Drill Problem 1.6 What is the average power of the triangular wave shown in Fig. 1.14?
Answer: 1/3. •
• Drill Problem 1.7 What is the total energy of the discrete-time signal shown in Fig. 1.16?
Answer: 3. •

• Drill Problem 1.8 What is the average power of the periodic discrete-time signal shown in Fig. 1.15?
Answer: 1. •
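As a check on these definitions, the following minimal Python sketch approximates the integrals of Eqs. (1.13) and (1.15) by Riemann sums for a rectangular pulse like that of Fig. 1.13(b) and for the triangular wave of Fig. 1.14. The amplitude, duration, and step size are illustrative assumptions; the results should approach A²T₁ and 1/3, in agreement with Drill Problems 1.5 and 1.6.

import numpy as np

dt = 1e-4                                   # integration step (assumed)

# Total energy of a rectangular pulse of amplitude A and duration T1, Eq. (1.13).
A, T1 = 2.0, 0.5                            # assumed values
t1 = np.arange(-T1, T1, dt)
pulse = np.where(np.abs(t1) < T1 / 2, A, 0.0)
energy = np.sum(pulse ** 2) * dt            # approaches A**2 * T1 = 2.0

# Average power of one period of the triangular wave of Fig. 1.14, Eq. (1.15).
T = 0.2                                     # fundamental period, seconds
t2 = np.arange(0.0, T, dt)
tri = 4.0 * np.abs(t2 / T - 0.5) - 1.0      # triangle between -1 and +1 over one period
avg_power = np.sum(tri ** 2) * dt / T       # approaches 1/3

print(energy, avg_power)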
1.5 Basic Operations on Signals
An issue of fundamental importance in the study of signals and systems is the use of systems to process or manipulate signals. This issue usually involves a combination of some basic operations. In particular, we may identify two classes of operations, as described here.
1. Operations performed on dependent variables.
Amplitude scaling. Let x(t) denote a continuous-time signal. The signal y(t) resulting from amplitude scaling applied to x(t) is defined by
y(t) = cx(t)   (1.18)
where c is the scaling factor. According to Eq. (1.18), the value of y(t) is obtained by multiplying the corresponding value of x(t) by the scalar c. A physical example of a device that performs amplitude scaling is an electronic amplifier. A resistor also performs amplitude scaling when x(t) is a current, c is the resistance, and y(t) is the output voltage.
In a manner similar to Eq. (1.18), for discrete-time signals we write
y[n] = cx[n]
Addition. Let x₁(t) and x₂(t) denote a pair of continuous-time signals. The signal y(t) obtained by the addition of x₁(t) and x₂(t) is defined by
y(t) = x₁(t) + x₂(t)   (1.19)
A physical example of a device that adds signals is an audio mixer, which combines music and voice signals.
In a manner similar to Eq. (1.19), for discrete-time signals we write
y[n] = x₁[n] + x₂[n]
Multiplication. Let x₁(t) and x₂(t) denote a pair of continuous-time signals. The signal y(t) resulting from the multiplication of x₁(t) by x₂(t) is defined by
y(t) = x₁(t)x₂(t)   (1.20)
That is, for each prescribed time t the value of y(t) is given by the product of the corresponding values of x₁(t) and x₂(t). A physical example of y(t) is an AM radio signal, in which x₁(t) consists of an audio signal plus a dc component, and x₂(t) consists of a sinusoidal signal called a carrier wave.
In a manner similar to Eq. (1.20), for discrete-time signals we write
y[n] = x₁[n]x₂[n]
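The three operations on dependent variables described so far map directly onto elementwise array operations. The short Python sketch below applies them to two assumed discrete-time sequences; the particular signals and the scaling factor are illustrative only.

import numpy as np

n  = np.arange(-3, 4)                # time index n = -3, ..., 3
x1 = np.cos(0.2 * np.pi * n)         # an assumed first signal x1[n]
x2 = np.sin(0.2 * np.pi * n)         # an assumed second signal x2[n]

y_scaled = 2.5 * x1                  # amplitude scaling: y[n] = c x[n] with c = 2.5
y_sum    = x1 + x2                   # addition:       y[n] = x1[n] + x2[n]
y_prod   = x1 * x2                   # multiplication: y[n] = x1[n] x2[n]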


FIGURE 1.17 Inductor with current i(t), inducing voltage v(t) across its terminals.


FIGURE 1.18 Capacitor with voltage v(t) across its terminals, inducing current i(t).

Differentiation. Let x(t) denote a continuous-time signal. The derivative of x(t) with respect to time is defined by
y(t) = (d/dt) x(t)   (1.21)
For example, an inductor performs differentiation. Let i(t) denote the current flowing through an inductor of inductance L, as shown in Fig. 1.17. The voltage v(t) developed across the inductor is defined by
v(t) = L (d/dt) i(t)   (1.22)

Integration. Let x(t) denote a continuous-time signal. The integral of x(t) with respect to time t is defined by
y(t) = ∫_{-∞}^{t} x(τ) dτ   (1.23)
where τ is the integration variable. For example, a capacitor performs integration. Let i(t) denote the current flowing through a capacitor of capacitance C, as shown in Fig. 1.18. The voltage v(t) developed across the capacitor is defined by
v(t) = (1/C) ∫_{-∞}^{t} i(τ) dτ   (1.24)

2. Operations performed on the independent variable.


Time scaling. Let x(t) denote a continuous-time signal. The signal y(t) obtained by scaling the independent variable, time t, by a factor a is defined by
y(t) = x(at)
If a > 1, the signal y(t) is a compressed version of x(t). If, on the other hand, 0 < a < 1, the signal y(t) is an expanded (stretched) version of x(t). These two operations are illustrated in Fig. 1.19.

FIGURE 1.19 Time-scaling operation: (a) continuous-time signal x(t), (b) compressed version of x(t) by a factor of 2, and (c) expanded version of x(t) by a factor of 2.


FIGURE 1.20 Effect of time scaling on a discrete-time signal: (a) discrete-time signal x[n], and (b) compressed version of x[n] by a factor of 2, with some values of the original x[n] lost as a result of the compression.

In the discrete-time case, we write
y[n] = x[kn],   k > 0
which is defined only for integer values of k. If k > 1, then some values of the discrete-time signal y[n] are lost, as illustrated in Fig. 1.20 for k = 2.
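For a discrete-time sequence stored as an array, the compression y[n] = x[kn] amounts to keeping every kth sample, which is one line of Python. The sequence below is an illustrative assumption; with k = 2, the odd-indexed samples are discarded, just as in Fig. 1.20.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # assumed x[n] for n = 0, ..., 6
k = 2
y = x[::k]                                           # y[n] = x[kn] -> array([0., 2., 4., 6.])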
Reflection. Let x(t) denote a continuous-time signal. Let y(t) denote the signal obtained by replacing time t with -t, as shown by
y(t) = x(-t)
The signal y(t) represents a reflected version of x(t) about the amplitude axis.
The following two cases are of special interest:
• Even signals, for which we have x(-t) = x(t) for all t; that is, an even signal is the same as its reflected version.
• Odd signals, for which we have x(-t) = -x(t) for all t; that is, an odd signal is the negative of its reflected version.
Similar observations apply to discrete-time signals.

EXAMPLE 1.2 Consider the triangular pulse x(t) shown in Fig. 1.21(a). Find the reflected version of x(t) about the amplitude axis.
Solution: Replacing the independent variable t in x(t) with -t, we get the result y(t) = x(-t) shown in Fig. 1.21(b).
Note that for this example, we have
x(t) = 0   for t < -T₁ and t > T₂
Correspondingly, we find that
y(t) = 0   for t > T₁ and t < -T₂


FIGURE 1.21 Operation of reflection: (a) continuous-time signal x(t) and (b) reflected version of x(t) about the origin.

• Drill Problem 1.9 The discrete-time signal x[n] is defined by
x[n] = {1 for n = 1; -1 for n = -1; 0 for n = 0 and |n| > 1}
Find the composite signal y[n] defined in terms of x[n] by
y[n] = x[n] + x[-n]
Answer: y[n] = 0 for all integer values of n. •
• Drill Problem 1.10 Repeat Drill Problem 1.9 for
x[n] = {1 for n = -1 and n = 1; 0 for n = 0 and |n| > 1}
Answer: y[n] = {2 for n = -1 and n = 1; 0 for n = 0 and |n| > 1} •
Time shifting. Let x(t) denote a continuous-time signal. The time-shifted version of x(t) is defined by
y(t) = x(t - t₀)
where t₀ is the time shift. If t₀ > 0, the waveform representing x(t) is shifted intact to the right, relative to the time axis. If t₀ < 0, it is shifted to the left.

EXAMPLE 1.3 Figure 1.22(a) shows a rectangular pulse x(t) of unit amplitude and unit duration. Find y(t) = x(t - 2).
Solution: In this example, the time shift t₀ equals 2 time units. Hence, by shifting x(t) to the right by 2 time units we get the rectangular pulse y(t) shown in Fig. 1.22(b). The pulse y(t) has exactly the same shape as the original pulse x(t); it is merely shifted along the time axis.

FIGURE 1.22 Time-shifting operation: (a) continuous-time signal in the form of a rectangular pulse of amplitude 1.0 and duration 1.0, symmetric about the origin; and (b) time-shifted version of x(t) by 2 time units.

In the case of a discrete-time signal x[n], we define its time-shifted version as follows:
y[n] = x[n - m]
where the shift m must be an integer; it can be positive or negative.

• Drill Problem 1.11 The discrete-time signal x[n] is defined by
x[n] = {1 for n = 1, 2; -1 for n = -1, -2; 0 for n = 0 and |n| > 2}
Find the time-shifted signal y[n] = x[n + 3].
Answer: y[n] = {1 for n = -1, -2; -1 for n = -4, -5; 0 for n = -3, n < -5, and n > -1} •
• PRECEDENCE RULE FOR TIME SHIFTING AND TIME SCALING

Let y(t) denote a continuous-time signal that is derived from another continuous-time signal x(t) through a combination of time shifting and time scaling, as described here:
y(t) = x(at - b)   (1.25)
This relation between y(t) and x(t) satisfies the following conditions:
y(0) = x(-b)   (1.26)
and
y(b/a) = x(0)   (1.27)
which provide useful checks on y(t) in terms of corresponding values of x(t).
To correctly obtain y(t) from x(t), the time-shifting and time-scaling operations must be performed in the correct order. The proper order is based on the fact that the scaling operation always replaces t by at, while the time-shifting operation always replaces t by t - b. Hence the time-shifting operation is performed first on x(t), resulting in an intermediate signal v(t) defined by
v(t) = x(t - b)
The time shift has replaced t in x(t) by t - b. Next, the time-scaling operation is performed on v(t). This replaces t by at, resulting in the desired output
y(t) = v(at) = x(at - b)
To illustrate how the operation described in Eq. (1.25) can arise in a real-life situation, consider a voice signal recorded on a tape recorder. If the tape is played back at a rate faster than the original recording rate, we get compression (i.e., a > 1). If, on the


FIGURE 1.23 The proper order in which the operations of time scaling and time shifting should be applied for the case of a continuous-time signal. (a) Rectangular pulse x(t) of amplitude 1.0 and duration 2.0, symmetric about the origin. (b) Intermediate pulse v(t), representing the time-shifted version of x(t). (c) Desired signal y(t), resulting from the compression of v(t) by a factor of 2.

other hand, the tape is played back at a rate slower than the original recording rate, we get expansion (i.e., a < 1). The constant b, assumed to be positive, accounts for a delay in playing back the tape.

EXAMPLE 1.4 Consider the rectangular pulse x(t) of unit amplitude and duration of 2 time units depicted in Fig. 1.23(a). Find y(t) = x(2t + 3).
Solution: In this example, we have a = 2 and b = -3. Hence shifting the given pulse x(t) to the left by 3 time units relative to the time axis gives the intermediate pulse v(t) shown in Fig. 1.23(b). Finally, scaling the independent variable t in v(t) by a = 2, we get the solution y(t) shown in Fig. 1.23(c).
Note that the solution presented in Fig. 1.23(c) satisfies both of the conditions defined in Eqs. (1.26) and (1.27).
Suppose next that we purposely do not follow the precedence rule; that is, we first apply time scaling, followed by time shifting. For the given signal x(t), shown in Fig. 1.24(a), the waveforms resulting from the application of these two operations are shown in Figs. 1.24(b) and (c), respectively. The signal y(t) so obtained fails to satisfy the condition of Eq. (1.27). •

This example clearly illustrates that if y(t) is defined in terms of x(t) by Eq. (1.25), then y(t) can only be obtained from x(t) correctly by adhering to the precedence rule for time shifting and time scaling.
Similar remarks apply to the case of discrete-time signals.
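A minimal numerical check of this precedence rule is sketched below in Python for the signal of Example 1.4, y(t) = x(2t + 3), i.e., a = 2 and b = -3. The dense time grid is an assumption of the sketch; shifting first and then scaling reproduces x(2t + 3), whereas scaling first and then shifting t by 3 produces x(2t + 6) instead.

import numpy as np

def x(t):
    # rectangular pulse of unit amplitude and duration 2, symmetric about the origin
    return np.where(np.abs(t) <= 1.0, 1.0, 0.0)

t = np.linspace(-5.0, 5.0, 2001)

v = lambda tt: x(tt + 3.0)          # step 1: time shift, v(t) = x(t - b) = x(t + 3)
y_correct = v(2.0 * t)              # step 2: time scale, y(t) = v(at) = x(2t + 3)

y_wrong = x(2.0 * (t + 3.0))        # scale first, then shift: gives x(2t + 6), not x(2t + 3)

assert np.allclose(y_correct, x(2.0 * t + 3.0))   # agrees with the desired y(t)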


FIGURE 1.24 The incorrect way of applying the precedence rule. (a) Signal x(t). (b) Time-scaled signal x(2t). (c) Signal y(t) obtained by shifting x(2t) by 3 time units.

EXAMPLE 1.5 A discrete-time signal x[n] is defined by
x[n] = {1 for n = 1, 2; -1 for n = -1, -2; 0 for n = 0 and |n| > 2}
Find y[n] = x[2n + 3].
Solution: The signal x[n] is displayed in Fig. 1.25(a). Time shifting x[n] to the left by 3 yields the intermediate signal v[n] shown in Fig. 1.25(b). Finally, scaling n in v[n] by 2, we obtain the solution y[n] shown in Fig. 1.25(c).
Note that as a result of the compression performed in going from v[n] to y[n] = v[2n], the samples of v[n] at n = -5 and n = -1 (i.e., those contained in the original signal at n = -2 and n = 2) are lost.

• Drill Problem 1.12 Consider a discrete-time signal x[n] defined by
x[n] = {1 for -2 ≤ n ≤ 2; 0 for |n| > 2}
Find y[n] = x[3n - 2].
Answer: y[n] = {1 for n = 0, 1; 0 otherwise} •


FIGURE 1.25 The proper order of applying the operations of time scaling and time shifting for the case of a discrete-time signal. (a) Discrete-time signal x[n], antisymmetric about the origin. (b) Intermediate signal v[n] obtained by shifting x[n] to the left by 3 samples. (c) Discrete-time signal y[n] resulting from the compression of v[n] by a factor of 2, as a result of which two samples of the original x[n] are lost.

1.6 Elementary Signals


There are several elementary signals that feature prominently in the study of signals and systems. The list of elementary signals includes exponential and sinusoidal signals, the step function, impulse function, and ramp function. These elementary signals serve as building blocks for the construction of more complex signals. They are also important in their own right, in that they may be used to model many physical signals that occur in nature. In what follows, we will describe the above-mentioned elementary signals, one by one.

• EXPONENTIAL SIGNALS

A real exponential signal, in its most general form, is written as
x(t) = Be^{at}   (1.28)
where both B and a are real parameters. The parameter B is the amplitude of the exponential signal measured at time t = 0. Depending on whether the other parameter a is positive or negative, we may identify two special cases:
• Decaying exponential, for which a < 0
• Growing exponential, for which a > 0
These two forms of an exponential signal are illustrated in Fig. 1.26. Part (a) of the figure was generated using a = -6 and B = 5. Part (b) of the figure was generated using a = 5 and B = 1. If a = 0, the signal x(t) reduces to a dc signal equal to the constant B.
For a physical example of an exponential signal, consider a "lossy" capacitor, as depicted in Fig. 1.27. The capacitor has capacitance C, and the loss is represented by shunt resistance R. The capacitor is charged by connecting a battery across it, and then the battery is removed at time t = 0. Let V₀ denote the initial value of the voltage developed


FIGURE 1.26 (a) Decaying exponential form of continuous-time signal. (b) Growing exponential form of continuous-time signal.


FIGURE 1.27 Lossy capacitor, with the loss represented by shunt resistance R.

across the capacitor. From Fig. 1.27 we readily see that the operation of the capacitor for t ≥ 0 is described by
RC (d/dt) v(t) + v(t) = 0   (1.29)
where v(t) is the voltage measured across the capacitor at time t. Equation (1.29) is a differential equation of order one. Its solution is given by
v(t) = V₀e^{-t/RC}   (1.30)
where the product term RC plays the role of a time constant. Equation (1.30) shows that the voltage across the capacitor decays exponentially with time at a rate determined by the time constant RC. The larger the resistor R (i.e., the less lossy the capacitor), the slower will be the rate of decay of v(t) with time.
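The decaying exponential of Eq. (1.30) is easy to evaluate numerically. The short Python sketch below does so for assumed component values chosen to give a time constant RC of one second; the values themselves are illustrative, not taken from the text.

import numpy as np

V0 = 5.0                          # assumed initial capacitor voltage, volts
R, C = 1.0e3, 1.0e-3              # assumed shunt resistance (ohms) and capacitance (farads)
tau = R * C                       # time constant RC = 1 second

t = np.linspace(0.0, 5.0, 501)    # evaluate over five time constants
v = V0 * np.exp(-t / tau)         # Eq. (1.30): v(t) = V0 e^{-t/RC}, so v(tau) = V0 / e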
The discussion thus far has been in the context of continuous time. In discrete time, it is common practice to write a real exponential signal as
x[n] = Br^n   (1.31)
The exponential nature of this signal is readily confirmed by defining
r = e^a
for some a. Figure 1.28 illustrates the decaying and growing forms of a discrete-time exponential signal corresponding to 0 < r < 1 and r > 1, respectively. This is where the case of discrete-time exponential signals is distinctly different from continuous-time exponential signals. Note also that when r < 0, a discrete-time exponential signal assumes alternating signs.


FIGURE 1.28 (a) Decaying exponential form of discrete-time signal. (b) Growing exponential form of discrete-time signal.

The exponential signals shown in Figs. 1.26 and 1.28 are all real valued. It is possible for an exponential signal to be complex valued. The mathematical forms of complex exponential signals are the same as those shown in Eqs. (1.28) and (1.31), with some differences as explained here. In the continuous-time case, the parameter B or parameter a or both in Eq. (1.28) assume complex values. Similarly, in the discrete-time case, the parameter B or parameter r or both in Eq. (1.31) assume complex values. Two commonly encountered examples of complex exponential signals are e^{jωt} and e^{jΩn}.

• SINUSOIDAL SIGNALS

The continuous-time version of a sinusoidal signal, in its most general form, may be written as
x(t) = A cos(ωt + φ)   (1.32)
where A is the amplitude, ω is the frequency in radians per second, and φ is the phase angle in radians. Figure 1.29(a) presents the waveform of a sinusoidal signal for A = 4 and φ = +π/6. A sinusoidal signal is an example of a periodic signal, the period of which is
T = 2π/ω
We may readily prove this property of a sinusoidal signal by using Eq. (1.32) to write
x(t + T) = A cos(ω(t + T) + φ)
         = A cos(ωt + ωT + φ)
         = A cos(ωt + 2π + φ)
         = A cos(ωt + φ)
         = x(t)
which satisfies the defining condition of Eq. (1.5) for a periodic signal.


FIGURE 1.29 (a) Sinusoidal signal A cos(ωt + φ) with phase φ = +π/6 radians. (b) Sinusoidal signal A sin(ωt + φ) with phase φ = +π/6 radians.

To illustrate the generation of a sinusoidal signal, consider the circuit of Fig. 1.30 consisting of an inductor and capacitor connected in parallel. It is assumed that the losses in both components of the circuit are small enough for them to be considered "ideal." The voltage developed across the capacitor at time t = 0 is equal to V₀. The operation of the circuit in Fig. 1.30 for t ≥ 0 is described by
LC (d²/dt²) v(t) + v(t) = 0   (1.33)
where v(t) is the voltage across the capacitor at time t, C is its capacitance, and L is the inductance of the inductor. Equation (1.33) is a differential equation of order two. Its solution is given by
v(t) = V₀ cos(ω₀t),   t ≥ 0   (1.34)
where ω₀ is the natural angular frequency of oscillation of the circuit:
ω₀ = 1/√(LC)   (1.35)
Equation (1.34) describes a sinusoidal signal of amplitude A = V₀, frequency ω = ω₀, and phase angle φ = 0.
Consider next the discrete-time version of a sinusoidal signal, written as
x[n] = A cos(Ωn + φ)   (1.36)
The period of a periodic discrete-time signal is measured in samples. Thus for x[n] to be periodic with a period of N samples, say, it must satisfy the condition of Eq. (1.8) for all integer n and some integer N. Substituting n + N for n in Eq. (1.36) yields
x[n + N] = A cos(Ωn + ΩN + φ)
For the condition of Eq. (1.8) to be satisfied, in general, we require that
ΩN = 2πm radians
or
Ω = 2πm/N radians/cycle,   m and N integers   (1.37)
The important point to note here is that, unlike continuous-time sinusoidal signals, not all discrete-time sinusoidal signals with arbitrary values of Ω are periodic. Specifically, for the discrete-time sinusoidal signal described in Eq. (1.36) to be periodic, the angular frequency Ω must be a rational multiple of 2π, as indicated in Eq. (1.37). Figure 1.31 illustrates a discrete-time sinusoidal signal for A = 1, φ = 0, and N = 12.


FIGURE 1.30 Parallel LC circuit, assuming that the inductor L and capacitor C are both ideal.


FIGURE 1.31 Discrete-time sinusoidal signal.

EXAMPLE 1.6 A pair of sinusoidal signals with a common angular frequency is defined by
x₁[n] = sin[5πn]
and
x₂[n] = √3 cos[5πn]
(a) Specify the condition which the period N of both x₁[n] and x₂[n] must satisfy for them to be periodic.
(b) Evaluate the amplitude and phase angle of the composite sinusoidal signal
y[n] = x₁[n] + x₂[n]
Solution:
(a) The angular frequency of both x₁[n] and x₂[n] is
Ω = 5π radians/cycle
Solving Eq. (1.37) for the period N, we get
N = 2πm/Ω = 2πm/(5π) = 2m/5
For x₁[n] and x₂[n] to be periodic, their period N must be an integer. This can only be satisfied for m = 5, 10, 15, ..., which results in N = 2, 4, 6, ....
(b) We wish to express y[n] in the form
y[n] = A cos(Ωn + φ)
Recall the trigonometric identity
A cos(Ωn + φ) = A cos(Ωn) cos(φ) - A sin(Ωn) sin(φ)
Identifying Ω = 5π, we see that the right-hand side of this identity is of the same form as x₁[n] + x₂[n]. We may therefore write
A sin(φ) = -1   and   A cos(φ) = √3
Hence
tan(φ) = sin(φ)/cos(φ) = -(amplitude of x₁[n])/(amplitude of x₂[n]) = -1/√3
from which we find that φ = -π/6 radians. Similarly, the amplitude A is given by
A = √((amplitude of x₁[n])² + (amplitude of x₂[n])²) = √(1 + 3) = 2
Accordingly, we may express y[n] as
y[n] = 2 cos(5πn - π/6)

• Drill Problem 1.13 Consider the following sinusoidal signals:
(a) x[n] = 5 sin[2n]
(b) x[n] = 5 cos[0.2πn]
(c) x[n] = 5 cos[6πn]
(d) x[n] = 5 sin[6πn/35]
Determine whether each x[n] is periodic, and if it is, find its fundamental period.
Answer: (a) Nonperiodic. (b) Periodic, fundamental period = 10. (c) Periodic, fundamental period = 1. (d) Periodic, fundamental period = 35. •
• Drill Problem 1.14 Find the smallest angular frequencies for which discrete-time sinusoidal signals with the following fundamental periods would be periodic: (a) N = 8, (b) N = 32, (c) N = 64, (d) N = 128.
Answer: (a) Ω = π/4. (b) Ω = π/16. (c) Ω = π/32. (d) Ω = π/64. •
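The periodicity test of Eq. (1.37) is easy to mechanize. In the minimal Python sketch below, the angular frequency is passed as the exact ratio Ω/π (an integer or a Fraction), an implementation assumption made to avoid floating-point round-off; the fundamental period N is then the denominator of Ω/(2π) in lowest terms.

from fractions import Fraction

def fundamental_period(omega_over_pi):
    # Return the fundamental period N of cos(Omega*n), given Omega/pi exactly.
    ratio = Fraction(omega_over_pi) / 2        # Omega / (2*pi) = m / N in lowest terms
    if ratio == 0:
        return 1                               # a constant signal repeats every sample
    return ratio.denominator                   # N

print(fundamental_period(Fraction(1, 5)))      # Omega = 0.2*pi  -> N = 10, Drill 1.13(b)
print(fundamental_period(6))                   # Omega = 6*pi    -> N = 1,  Drill 1.13(c)
print(fundamental_period(Fraction(6, 35)))     # Omega = 6*pi/35 -> N = 35, Drill 1.13(d)
# Omega = 2 rad, as in Drill 1.13(a), is not a rational multiple of pi, so that signal
# is nonperiodic; it has no exact representation in this scheme.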


• RELATION BETWEEN SINUSOIDAL AND COMPLEX EXPONENTIAL SIGNALS

Consider the complex exponential e^{jθ}. Using Euler's identity, we may expand this term as
e^{jθ} = cos θ + j sin θ   (1.38)
This result indicates that we may express the continuous-time sinusoidal signal of Eq. (1.32) as the real part of the complex exponential signal Be^{jωt}, where B is itself a complex quantity defined by
B = Ae^{jφ}   (1.39)
That is, we may write
A cos(ωt + φ) = Re{Be^{jωt}}   (1.40)
where Re{ } denotes the real part of the complex quantity enclosed inside the braces. We may readily prove this relation by noting that
Be^{jωt} = Ae^{jφ}e^{jωt}
         = Ae^{j(ωt + φ)}
         = A cos(ωt + φ) + jA sin(ωt + φ)
from which Eq. (1.40) follows immediately. The sinusoidal signal of Eq. (1.32) is defined in terms of a cosine function. Of course, we may also define a continuous-time sinusoidal signal in terms of a sine function, as shown by
x(t) = A sin(ωt + φ)   (1.41)
which is represented by the imaginary part of the complex exponential signal Be^{jωt}. That is, we may write
A sin(ωt + φ) = Im{Be^{jωt}}   (1.42)
where B is defined by Eq. (1.39), and Im{ } denotes the imaginary part of the complex quantity enclosed inside the braces. The sinusoidal signal of Eq. (1.41) differs from that of Eq. (1.32) by a phase shift of 90°. That is, the sinusoidal signal A cos(ωt + φ) lags behind the sinusoidal signal A sin(ωt + φ), as illustrated in Fig. 1.29(b) for φ = π/6.
Similarly, in the discrete-time case we may write
A cos(Ωn + φ) = Re{Be^{jΩn}}   (1.43)
and
A sin(Ωn + φ) = Im{Be^{jΩn}}   (1.44)
where B is defined in terms of A and φ by Eq. (1.39). Figure 1.32 shows the two-dimensional representation of the complex exponential e^{jΩn} for Ω = π/4 and n = 0, 1, ..., 7. The projection of each value on the real axis is cos(Ωn), while the projection on the imaginary axis is sin(Ωn).
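Relations (1.39), (1.40), and (1.42) can be verified numerically with complex arithmetic, as in the minimal Python sketch below. The particular amplitude, frequency, and phase are illustrative assumptions.

import numpy as np

A, w, phi = 4.0, 2 * np.pi * 5, np.pi / 6      # assumed amplitude, frequency (rad/s), and phase
t = np.linspace(0.0, 1.0, 1001)

B = A * np.exp(1j * phi)                       # complex amplitude B = A e^{j phi}, Eq. (1.39)
rotating = B * np.exp(1j * w * t)              # B e^{j w t}

assert np.allclose(np.real(rotating), A * np.cos(w * t + phi))   # Eq. (1.40)
assert np.allclose(np.imag(rotating), A * np.sin(w * t + phi))   # Eq. (1.42)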


FIGURE 1.32 Complex plane, showing eight points uniformly distributed on the unit circle.

FIGURE 1.33 Exponentially damped sinusoidal signal e^{-at} sin ωt, with a > 0.

• EXPONENTIALLY DAMPED SINUSOIDAL SIGNALS

The multiplication of a sinusoidal signal by a real-valued decaying exponential signal results in a new signal referred to as an exponentially damped sinusoidal signal. Specifically, multiplying the continuous-time sinusoidal signal A sin(ωt + φ) by the exponential e^{-at} results in the exponentially damped sinusoidal signal
x(t) = Ae^{-at} sin(ωt + φ),   a > 0   (1.45)
Figure 1.33 shows the waveform of this signal for A = 60, a = 6, and φ = 0. For increasing time t, the amplitude of the sinusoidal oscillations decreases in an exponential fashion, approaching zero for infinite time.
To illustrate the generation of an exponentially damped sinusoidal signal, consider the parallel circuit of Fig. 1.34, consisting of a capacitor of capacitance C, an inductor of inductance L, and a resistor of resistance R. The resistance R represents the combined effect of losses associated with the inductor and the capacitor. Let V₀ denote the voltage developed across the capacitor at time t = 0. The operation of the circuit in Fig. 1.34 is described by
C (d/dt) v(t) + (1/R) v(t) + (1/L) ∫_{-∞}^{t} v(τ) dτ = 0   (1.46)

FIGURE 1.34 Parallel LCR circuit, with inductor L, capacitor C, and resistor R all assumed to be ideal.

where v(t) is the voltage across the capacitor at time t ≥ 0. Equation (1.46) is an integro-differential equation. Its solution is given by
v(t) = V₀e^{-t/(2CR)} cos(ω₀t)   (1.47)
where
ω₀ = √(1/(LC) - 1/(4C²R²))   (1.48)
In Eq. (1.48) it is assumed that 4CR² > L. Comparing Eq. (1.47) with (1.45), we have A = V₀, a = 1/(2CR), ω = ω₀, and φ = π/2.
The circuits of Figs. 1.27, 1.30, and 1.34 served as examples in which an exponential signal, a sinusoidal signal, and an exponentially damped sinusoidal signal, respectively, arose naturally as solutions to physical problems. The operations of these circuits are described by the differential equations (1.29), (1.33), and (1.46), whose solutions were simply stated. Methods for solving these differential equations are presented in subsequent chapters.
Returning to the subject matter at hand, the discrete-time version of the exponentially damped sinusoidal signal of Eq. (1.45) is described by
x[n] = Br^n sin[Ωn + φ]   (1.49)
For the signal of Eq. (1.49) to decay exponentially with time, the parameter r must lie in the range 0 < |r| < 1.
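Both forms are straightforward to generate numerically, as in the minimal Python sketch below. The continuous-time parameters A = 60, a = 6, and φ = 0 follow Fig. 1.33; the oscillation frequency there is not stated, so the value of ω used here, like the discrete-time parameters B, r, and Ω, is an illustrative assumption.

import numpy as np

# Continuous-time exponentially damped sinusoid, Eq. (1.45).
A, a, phi = 60.0, 6.0, 0.0
w = 2 * np.pi * 5                                 # assumed oscillation frequency, rad/s
t = np.linspace(0.0, 1.0, 1001)
x_ct = A * np.exp(-a * t) * np.sin(w * t + phi)   # envelope decays toward zero

# Discrete-time counterpart, Eq. (1.49), with 0 < |r| < 1 for a decaying envelope.
B, r, Omega = 1.0, 0.9, np.pi / 8                 # assumed values
n = np.arange(0, 60)
x_dt = B * (r ** n) * np.sin(Omega * n + phi)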

• Drill Problem 1.15 Is it possible for an exponentially damped sinusoidal signal of whatever kind to be periodic?
Answer: No. •
• STEP FUNCTION

The discrete-time version of the step function, commonly denoted by u[n], is defined by
u[n] = {1 for n ≥ 0; 0 for n < 0}   (1.50)
which is illustrated in Fig. 1.35.
The continuous-time version of the step function, commonly denoted by u(t), is defined by
u(t) = {1 for t ≥ 0; 0 for t < 0}   (1.51)
Figure 1.36 presents a portrayal of the step function u(t). It is said to exhibit a discontinuity at t = 0, since the value of u(t) changes instantaneously from 0 to 1 when t = 0.
The step function u(t) is a particularly simple signal to apply. Electrically, a battery or dc source is applied at t = 0 by closing a switch, for example. As a test signal, it is
FIGURE 1.35 Discrete-time version of step function of unit amplitude.

FIGURE 1.36 Continuous-time version of step function of unit amplitude.

useful because the output of a system due to a step input reveals a great deal about how quickly the system responds to an abrupt change in the input signal. A similar remark applies to u[n] in the context of a discrete-time system.
The step function u(t) may also be used to construct other discontinuous waveforms, as illustrated in the following example.

EXAMPLE 1.7 Consider the rectangular pulse x(t) shown in Fig. 1.37(a). This pulse has an amplitude A and duration T. Express x(t) as a weighted sum of two step functions.
Solution: The rectangular pulse x(t) may be written in mathematical terms as follows:
x(t) = {A for 0 ≤ |t| < T/2; 0 for |t| > T/2}   (1.52)
where |t| denotes the magnitude of time t. The rectangular pulse x(t) is represented as the difference between two time-shifted step functions, as illustrated in Fig. 1.37(b). On the basis of this figure, we may express x(t) as
x(t) = Au(t + T/2) - Au(t - T/2)   (1.53)
where u(t) is the step function. For the purpose of illustration, we have set T = 1 s in Fig. 1.37.

• Drill Problem 1.16   A discrete-time signal x[n] is defined by

x[n] = { 1,  0 ≤ n ≤ 9
         0,  otherwise

FIGURE 1.37  (a) Rectangular pulse x(t) of amplitude A and duration T = 1 s, symmetric about
the origin. (b) Representation of x(t) as the superposition of two step functions of amplitude A,
with one step function shifted to the left by T/2 and the other shifted to the right by T/2; these
two shifted signals are denoted by x₁(t) and x₂(t), respectively.

FIGURE 1.38  Discrete-time form of impulse.

Using u[n], describe x[n] as the superposition of two step functions.

Answer:  x[n] = u[n] - u[n - 10]. •
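This answer is easily checked numerically, again anticipating Section 1.9. The sketch below uses
an arbitrary finite window of time indices; the variable names are not part of the text.

n = -5:15;                 % finite observation window (arbitrary)
u0  = (n >= 0);            % u[n]
u10 = (n >= 10);           % u[n - 10]
x   = u0 - u10;            % equals 1 for 0 <= n <= 9 and 0 otherwise
stem(n, x)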
• IMPULSE FUNCTION

The discrete-time version of the impulse, commonly denoted by δ[n], is defined by

δ[n] = { 1,  n = 0
         0,  n ≠ 0      (1.54)

which is illustrated in Fig. 1.38.
The continuous-time version of the unit impulse, commonly denoted by δ(t), is de-
fined by the following pair of relations:

δ(t) = 0   for t ≠ 0      (1.55)

and

∫_{-∞}^{∞} δ(t) dt = 1      (1.56)

Equation (1.55) says that the impulse δ(t) is zero everywhere except at the origin. Equation
(1.56) says that the total area under the unit impulse is unity. The impulse δ(t) is also
referred to as the Dirac delta function. Note that the impulse δ(t) is the derivative of the
step function u(t) with respect to time t. Conversely, the step function u(t) is the integral
of the impulse δ(t) with respect to time t.
A graphical description of the impulse δ[n] for discrete time is straightforward, as
shown in Fig. 1.38. In contrast, visualization of the unit impulse δ(t) for continuous time
requires more detailed attention. One way to visualize δ(t) is to view it as the limiting form
of a rectangular pulse of unit area, as illustrated in Fig. 1.39(a). Specifically, the duration
of the pulse is decreased and its amplitude is increased such that the area under the pulse

FIGURE 1.39  (a) Evolution of a rectangular pulse of unit area into an impulse of unit strength.
(b) Graphical symbol for an impulse of strength a.

is maintained constant at unity. As the duration decreases, the rectangular pulse better
approximates the impulse. Indeed, we may generalize this result by stating that

δ(t) = lim_{T→0} g_T(t)      (1.57)

where g_T(t) is any pulse that is an even function of time t, with duration T, and unit area.
The area under the pulse defines the strength of the impulse. Thus when we speak of the
impulse function δ(t), in effect we are saying that its strength is unity. The graphical symbol
for an impulse is depicted in Fig. 1.39(b). The strength of the impulse is denoted by the
label next to the arrow.
From the defining equation (1.55), it immediately follows that the unit impulse δ(t)
is an even function of time t, as shown by

δ(-t) = δ(t)      (1.58)

For the unit impulse δ(t) to have mathematical meaning, however, it has to appear
as a factor in the integrand of an integral with respect to time and then, strictly speaking,
only when the other factor in the integrand is a continuous function of time at which the
impulse occurs. Let x(t) be such a function, and consider the product of x(t) and the time-
shifted delta function δ(t - t₀). In light of the two defining equations (1.55) and (1.56),
we may express the integral of this product as follows:

∫_{-∞}^{∞} x(t) δ(t - t₀) dt = x(t₀)      (1.59)

The operation indicated on the left-hand side of Eq. (1.59) sifts out the value x(t₀) of the
function x(t) at time t = t₀. Accordingly, Eq. (1.59) is referred to as the sifting property
of the unit impulse. This property is sometimes used as the definition of a unit impulse; in
effect, it incorporates Eqs. (1.55) and (1.56) into a single relation.
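The sifting property lends itself to a simple numerical illustration: replace δ(t - t₀) by the
rectangular pulse of width T and height 1/T shown in Fig. 1.39(a), and evaluate the integral for
decreasing T. The MATLAB fragment below is a sketch under that approximation; the test function
x(t) = cos(t) and the sifting point t₀ = 1 are arbitrary choices.

t0 = 1;                                  % sifting point (arbitrary)
for T = [0.5 0.1 0.01]
    t = (t0 - T/2):(T/1000):(t0 + T/2);  % support of the unit-area pulse
    g = (1/T)*ones(size(t));             % rectangular approximation to delta(t - t0)
    approx = trapz(t, cos(t).*g);        % numerical value of the sifting integral
    disp([T, approx, cos(t0)])           % approx tends to x(t0) = cos(1) as T shrinks
end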
Another useful property of the unit impulse δ(t) is the time-scaling property, de-
scribed by

δ(at) = (1/a) δ(t),   a > 0      (1.60)

To prove this property, we replace t in Eq. (1.57) with at and so write

δ(at) = lim_{T→0} g_T(at)      (1.61)

FIGURE 1.40  Steps involved in proving the time-scaling property of the unit impulse. (a) Rec-
tangular pulse g_T(t) of amplitude 1/T and duration T, symmetric about the origin. (b) Pulse g_T(t)
compressed by factor a. (c) Amplitude scaling of the compressed pulse, restoring it to unit area.

To represent the function g_T(t), we use the rectangular pulse shown in Fig. 1.40(a), which
has duration T, amplitude 1/T, and therefore unit area. Correspondingly, the time-scaled

function g_T(at) is shown in Fig. 1.40(b) for a > 1. The amplitude of g_T(at) is left unchanged
by the time-scaling operation. Therefore, in order to restore the area under this pulse to
unity, the amplitude of g_T(at) is scaled by the same factor a, as indicated in Fig. 1.40(c).
The time function in Fig. 1.40(c) is denoted by g'_T(at); it is related to g_T(at) by

g'_T(at) = a g_T(at)      (1.62)

Substituting Eq. (1.62) in (1.61), we get

δ(at) = (1/a) lim_{T→0} g'_T(at)      (1.63)

Since, by design, the area under the function g'_T(at) is unity, it follows that

δ(t) = lim_{T→0} g'_T(at)      (1.64)

Accordingly, the use of Eq. (1.64) in (1.63) results in the time-scaling property described
in Eq. (1.60).
Having defined what a unit impulse is and described its properties, there is one more
question that needs to be addressed: What is the practical use of a unit impulse? We cannot
generate a physical impulse function, since that would correspond to a signal of infinite
amplitude at t = 0 and that is zero elsewhere. However, the impulse function serves a
mathematical purpose by providing an approximation to a physical signal of extremely
short duration and high amplitude. The response of a system to such an input reveals much
about the character of the system. For example, consider the parallel LCR circuit of Fig.
1.34, assumed to be initially at rest. Suppose now a voltage signal approximating an
impulse function is applied to the circuit at time t = 0. The current through an inductor
cannot change instantaneously, but the voltage across a capacitor can. It follows therefore
that the voltage across the capacitor suddenly rises to a value equal to V₀, say, at time
t = 0⁺. Here t = 0⁺ refers to the instant of time just after the energy in the input signal has
been expended. Thereafter, the circuit operates without additional input. The resulting value
of the voltage v(t) across the capacitor is defined by Eq. (1.47). The response v(t) is called
the transient or natural response of the circuit, the evaluation of which is facilitated by the
application of an impulse function as the test signal.

• RAMP FUNCTION

The impulse function δ(t) is the derivative of the step function u(t) with respect to time.
By the same token, the integral of the step function u(t) is a ramp function of unit slope.
This latter test signal is commonly denoted by r(t), which is formally defined as follows:

r(t) = { t,  t ≥ 0
         0,  t < 0      (1.65)

FIGURE 1.41  Ramp function of unit slope.

FIGURE 1.42  Discrete-time version of the ramp function.

Equivalently, we may write

r(t) = t u(t)      (1.66)

The ramp function r(t) is shown graphically in Fig. 1.41.
In mechanical terms, a ramp function may be visualized as follows. If the input
variable is represented as the angular displacement of a shaft, then the constant-speed
rotation of the shaft provides a representation of the ramp function. As a test signal, the
ramp function enables us to evaluate how a continuous-time system would respond to a
signal that increases linearly with time.
The discrete-time version of the ramp function is defined by

r[n] = { n,  n ≥ 0
         0,  n < 0      (1.67)

or, equivalently,

r[n] = n u[n]      (1.68)

It is illustrated in Fig. 1.42.
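All three test signals of this section are conveniently generated and compared numerically; the
sketch below, in the spirit of Section 1.9, uses an arbitrary window of time indices.

n = -5:5;                     % arbitrary observation window
u = double(n >= 0);           % step u[n], Eq. (1.50)
d = double(n == 0);           % impulse delta[n], Eq. (1.54)
r = n.*u;                     % ramp r[n] = n u[n], Eq. (1.68)
subplot(3,1,1), stem(n, u)
subplot(3,1,2), stem(n, d)
subplot(3,1,3), stem(n, r)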

1.7  Systems Viewed as Interconnections of Operations

In mathematical terms, a system may be viewed as an interconnection of operations that
transforms an input signal into an output signal with properties different from those of
the input signal. The signals may be of the continuous-time or discrete-time variety, or a
mixture of both. Let the overall operator H denote the action of a system. Then the ap-
plication of a continuous-time signal x(t) to the input of the system yields the output signal
described by

y(t) = H{x(t)}      (1.69)

Figure 1.43(a) shows a block diagram representation of Eq. (1.69). Correspondingly, for
the discrete-time case, we may write

y[n] = H{x[n]}      (1.70)

where the discrete-time signals x[n] and y[n] denote the input and output signals, respec-
tively, as depicted in Fig. 1.43(b).

FIGURE 1.43  Block diagram representation of operator H for (a) continuous time and (b) dis-
crete time.

FIGURE 1.44  Discrete-time shift operator Sᵏ, operating on the discrete-time signal x[n] to pro-
duce x[n - k].
EXAMPLE 1.8   Consider a discrete-time system whose output signal y[n] is the average of the
three most recent values of the input signal x[n], as shown by

y[n] = ⅓(x[n] + x[n - 1] + x[n - 2])

Such a system is referred to as a moving-average system for two reasons. First, y[n] is the
average of the sample values x[n], x[n - 1], and x[n - 2]. Second, the value of y[n] changes
as n moves along the discrete-time axis. Formulate the operator H for this system; hence,
develop a block diagram representation for it.

Solution:  Let the operator Sᵏ denote a system that time shifts the input x[n] by k time units
to produce an output equal to x[n - k], as depicted in Fig. 1.44. Accordingly, we may define
the overall operator H for the moving-average system as

H = ⅓(1 + S + S²)

Two different implementations of the operator H (i.e., the moving-average system) that suggest
themselves are presented in Fig. 1.45. The implementation shown in part (a) of the figure uses
the cascade connection of two identical unity time shifters, namely, S¹ = S. On the other hand,
the implementation shown in part (b) of the figure uses two different time shifters, S and S²,
connected in parallel. In both cases, the moving-average system is made up of an intercon-
nection of three functional blocks, namely, two time shifters, an adder, and a scalar
multiplication.

FIGURE 1.45  Two different (but equivalent) implementations of the moving-average system:
(a) cascade form of implementation, and (b) parallel form of implementation.
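The equivalence of the two implementations in Fig. 1.45 can be confirmed with a short numerical
experiment. The fragment below is only a sketch: the input sequence is arbitrary, and the shift
operator S is realized by simple indexing with zero initial conditions rather than by any dedicated
MATLAB function.

x  = [1 2 3 4 5 4 3 2 1];               % arbitrary test input
N  = length(x);
x1 = [0, x(1:N-1)];                     % S{x[n]}    = x[n-1]
x2 = [0, x1(1:N-1)];                    % S{S{x[n]}} = x[n-2]  (cascade form)
x2p = [0, 0, x(1:N-2)];                 % S^2{x[n]} applied directly (parallel form)
y_cascade  = (x + x1 + x2)/3;
y_parallel = (x + x1 + x2p)/3;
disp(max(abs(y_cascade - y_parallel)))  % prints 0: the two forms agree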

• Drill Problem 1.17   Express the operator that describes the input-output relation

y[n] = ⅓(x[n + 1] + x[n] + x[n - 1])

in terms of the time-shift operator S.

Answer:  H = ⅓(S⁻¹ + 1 + S). •


ln the interconnected systems shown in Figs. 1.45(a) and (b), the sígnal flows through
each one of them in the forward direction only. Another possible way of combining systems
is through the use of feedback connections. Figure 1.4 shows an example of a feedback
system, which is characterized by two paths. The forward path involves the cascade con-
nection of the controller and plant. The feedback path is made possible through rhe use
of a sensor C(lnr1ected to the output of the system at one end and the input at the <>ther
end. The use of feedback has many desirable bene.fits and gives rise to problems <>Í its own
that require special attention; the subject of feedback is discussed in Chapter 9.

11.8__Properties of Systems
The properties of a system describe the characteristics of the operator H representing the
system. ln what follows, we study some of the most basic properties of systems .

• STABILlTY

A system is said to be h<>unded input-bounded <)utput (BIBO) stable if and only if every
bounded input results in a bounded output. The output of such a system does not diverge
if the input does not diverge.
To put the condition for BIBO stability on a formal basis, consider a continuous-
time system whose input-output relarion is as descríbed in Eq. (1.69). The operator H is
B1B() stable if the output signal y(t) satisfies the condition

ly(t) 1 :S My < 00 for ali t

whenever the input signals x(t) satisfy the conditic)n

lx(t) 1 :S Mx < 00 for ali t

Both Mx and My represent some finite positive numbers. We may describe the Cl1ndition
for the BlBO stability of a discrete-time system 1n a similar manner.
From an engineeríng perspective, it is important rhat a system of ínterest remains
stable under ali possible ()perating conditions. lt is only then that the system is guaranteed
to produce a bounded output for a bounded input. Unstable systems are usually to be
avoided, unless some mechanism can be found t(> stabilize them.
One famous example of an unstable system is the first Tacoma Narrows suspension
bridge that cc>llapsed on November 7, 1940, at approximately 11 :00 a.m., due to wind-
induced vibrations. Situated on the Tacoma Narrows in Puget Sound, near the city of
Tacoma, Washington, the bridge had only been open for traffic a few months before it
collapsed; see Fig. 1.46 for photographs taken just prior to failure of the bridge and soon
thereafter.
FIGURE 1.46  Dramatic photographs showing the collapse of the Tacoma Narrows suspension
bridge on November 7, 1940. (a) Photograph showing the twisting motion of the bridge's center
span just before failure. (b) A few minutes after the first piece of concrete fell, this second photo-
graph shows a 600-ft section of the bridge breaking out of the suspension span and turning upside
down as it crashed in Puget Sound, Washington. Note the car in the top right-hand corner of the
photograph. (Courtesy of the Smithsonian Institution.)

EXAMPLE 1.9   Show that the moving-average system described in Example 1.8 is BIBO
stable.

Solution:  Assume that

|x[n]| ≤ M_x < ∞   for all n

Using the given input-output relation

y[n] = ⅓(x[n] + x[n - 1] + x[n - 2])

we may write

|y[n]| = ⅓|x[n] + x[n - 1] + x[n - 2]|
       ≤ ⅓(|x[n]| + |x[n - 1]| + |x[n - 2]|)
       ≤ ⅓(M_x + M_x + M_x)
       = M_x

Hence the absolute value of the output signal y[n] never exceeds the maximum absolute
value of the input signal x[n] for all n, which shows that the moving-average system is stable.
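A numerical spot check of this bound is straightforward; the random bounded input used below is
an arbitrary choice.

x = 2*rand(1, 100) - 1;                      % bounded input with |x[n]| <= 1
y = (x + [0, x(1:99)] + [0, 0, x(1:98)])/3;  % moving average with zero initial conditions
disp([max(abs(y)), max(abs(x))])             % max|y[n]| never exceeds max|x[n]|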

• Drill Problem 1.18   Show that the moving-average system described by the in-
put-output relation

y[n] = ⅓(x[n + 1] + x[n] + x[n - 1])

is BIBO stable. •

EXAMPLE 1.10   Consider a discrete-time system whose input-output relation is defined by

y[n] = rⁿ x[n]

where r > 1. Show that this system is unstable.

Solution:  Assume that the input signal x[n] satisfies the condition

|x[n]| ≤ M_x < ∞   for all n

We then find that

|y[n]| = |rⁿ x[n]|
       = |rⁿ| · |x[n]|

With r > 1, the multiplying factor rⁿ diverges for increasing n. Accordingly, the condition
that the input signal is bounded is not sufficient to guarantee a bounded output signal, and
so the system is unstable. To prove stability, we need to establish that all bounded inputs
produce a bounded output.
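The divergence is easy to observe numerically. In the sketch below, the value r = 1.1 and the
constant input x[n] = 1 are arbitrary illustrative choices.

r = 1.1;                      % any r > 1
n = 0:60;
x = ones(size(n));            % bounded input, |x[n]| <= 1
y = (r.^n).*x;                % y[n] = r^n x[n]
stem(n, y)                    % the output grows without bound as n increases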

• MEMORY

A system is said to possess memory if its output signal depends on past values of the input
signal. The temporal extent of past values on which the output depends defines how far
the memory of the system extends into the past. In contrast, a system is said to be me-
moryless if its output signal depends only on the present value of the input signal.

For example, a resistor is memoryless since the current i(t) flowing through it in
response to the applied voltage v(t) is defined by

1
i(t) =R v(t)

where Ris the resistance of the resistor. On the other hand, an inductor has memory, since
the current i(t) flowing through it is related to the applied voltage v(t) as follows:

i(t)
ft 1
= L _"" v( T) dT

where L is the inductance of the inductor. That is, unlike a resistor, the current through
an inductor at time t depends on ali past values of the voltage v(t); the memory c>f an
inductor extends into the infinite past.
The moving-average system of Example 1.8 described by the input-output relation

y[n] = 1(x(n] + xln - 11 + x[n - 2])


has memory, since the value of the output signal y[1t] at time 11 depends on the present and
two past values of the input signal x[nj. On the other hand, a system described by the
input-output relation

y[n] = x 2 [n]

is memoryless, since the value of the output signal y[n] at time n depends <>nly <)D the
present value of the input signal x[nj.

• Drill Problem 1.19   How far does the memory of the moving-average system de-
scribed by the input-output relation

y[n] = ⅓(x[n] + x[n - 1] + x[n - 2])

extend into the past?

Answer:  Two time units. •

• Drill Problem 1.20   The input-output relation of a semiconductor diode is repre-
sented by

i(t) = a₀ + a₁v(t) + a₂v²(t) + a₃v³(t) + ···

where v(t) is the applied voltage, i(t) is the current flowing through the diode, and a₀, a₁,
a₂, a₃, ... are constants. Does this diode have memory?

Answer:  No. •

• Drill Problem 1.21   The input-output relation of a capacitor is described by

v(t) = (1/C) ∫_{-∞}^{t} i(τ) dτ

What is its memory?

Answer:  Memory extends from time t to the infinite past. •


• CAUSALITY

A system is said to be causal if the present value of the output signal depends only on the
present and/or past values of the input signal. In contrast, the output signal of a noncausal
system depends on future values of the input signal.
For example, the moving-average system described by

y[n] = ⅓(x[n] + x[n - 1] + x[n - 2])

is causal. On the other hand, the moving-average system described by

y[n] = ⅓(x[n + 1] + x[n] + x[n - 1])

is noncausal, since the output signal y[n] depends on a future value of the input signal,
namely x[n + 1].

• Drill Problem 1.22   Consider the RC circuit shown in Fig. 1.47. Is it causal or
noncausal?

Answer:  Causal. •

• Drill Problem 1.23   Suppose k in the operator of Fig. 1.44 is replaced by -k. Is the
resulting system causal or noncausal for positive k?

Answer:  Noncausal. •
• INVERTIBILITY

A system is said to be invertible if the input of the system can be recovered from the system
output. We may view the set of operations needed to recover the input as a second system
connected in cascade with the given system, such that the output signal of the second
system is equal to the input signal applied to the given system. To put the notion of
invertibility on a formal basis, let the operator H represent a continuous-time system, with
the input signal x(t) producing the output signal y(t). Let the output signal y(t) be applied
to a second continuous-time system represented by the operator H⁻¹, as illustrated in Fig.
1.48. The output signal of the second system is defined by

H⁻¹{y(t)} = H⁻¹{H{x(t)}}
          = H⁻¹H{x(t)}

where we have made use of the fact that two operators H and H⁻¹ connected in cascade
are equivalent to a single operator H⁻¹H. For this output signal to equal the original input
signal x(t), we require that

H⁻¹H = I      (1.71)

FIGURE 1.47  Series RC circuit driven from an ideal voltage source v₁(t), producing output volt-
age v₂(t).

FIGURE 1.48  The notion of system invertibility. The second operator H⁻¹ is the inverse of the
first operator H. Hence the input x(t) is passed through the cascade connection of H and H⁻¹ com-
pletely unchanged.

where I denotes the identity operator. The output of a system described by the identity
operator is exactly equal to the input. Equation (1.71) is the condition that the new op-
erator H⁻¹ must satisfy in relation to the given operator H for the original input signal
x(t) to be recovered from y(t). The operator H⁻¹ is called the inverse operator, and the
associated system is called the inverse system. Note that H⁻¹ is not the reciprocal of the
operator H; rather, the use of the superscript -1 is intended to be merely a flag indicating
"inverse." In general, the problem of finding the inverse of a given system is a difficult
one. In any event, a system is not invertible unless distinct inputs applied to the system
produce distinct outputs. That is, there must be a one-to-one mapping between input and
output signals for a system to be invertible. Identical conditions must hold for a discrete-
time system to be invertible.
The property of invertibility is of particular importance in the design of communi-
cation systems. As remarked in Section 1.3, when a transmitted signal propagates through
a communication channel, it becomes distorted due to the physical characteristics of the
channel. A widely used method of compensating for this distortion is to include in the
receiver a network called an equalizer, which is connected in cascade with the channel in
a manner similar to that described in Fig. 1.48. By designing the equalizer to be the inverse
of the channel, the transmitted signal is restored to its original form, assuming ideal
conditions.

EXAMPLE 1.11   Consider the time-shift system described by the input-output relation

y(t) = x(t - t₀) = S^{t₀}{x(t)}

where the operator S^{t₀} represents a time shift of t₀ seconds. Find the inverse of this system.

Solution:  For this example, the inverse of a time shift of t₀ seconds is a time shift of -t₀
seconds. We may represent the time shift of -t₀ by the operator S^{-t₀}. Thus applying S^{-t₀} to
the output signal of the given time-shift system, we get

S^{-t₀}{y(t)} = S^{-t₀}{S^{t₀}{x(t)}}
             = S^{-t₀}S^{t₀}{x(t)}

For this output signal to equal the original input signal x(t), we require that

S^{-t₀}S^{t₀} = I

which is in perfect accord with the condition for invertibility described in Eq. (1.71).
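The same idea can be checked numerically for the discrete-time shift operator Sᵏ of Fig. 1.44:
delaying a sequence by k samples and then advancing it by k samples returns the original samples.
The sketch below is illustrative only; the sequence and the value k = 3 are arbitrary, the shifts
are realized by indexing over a finite window, and samples pushed past the edges of the window
are simply lost (hence the zero padding).

x  = [3 1 4 1 5 9 2 6];          % arbitrary input sequence
k  = 3;                          % shift in samples
N  = length(x);
y  = [zeros(1,k), x(1:N-k)];     % S^k : delay by k samples
xr = [y(k+1:N), zeros(1,k)];     % S^{-k}: advance by k samples
disp(xr(1:N-k) - x(1:N-k))       % all zeros: the input is recovered where defined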

• Drill Problem 1.24   An inductor is described by the input-output relation

y(t) = (1/L) ∫_{-∞}^{t} x(τ) dτ

Find the operation representing the inverse system.

Answer:  L d/dt. •

EXAMPLE 1.12   Show that a square-law system described by the input-output relation

y(t) = x²(t)

is not invertible.

Solution:  We note that the square-law system violates a necessary condition for invertibility,
which postulates that distinct inputs must produce distinct outputs. Specifically, the distinct
inputs x(t) and -x(t) produce the same output y(t). Accordingly, the square-law system is not
invertible.

• TIME INVARIANCE

A system is said to be time invariant if a time delay or time advance of the input signal
leads to an identical time shift in the output signal. This implies that a time-invariant system
responds identically no matter when the input signal is applied. Stated in another way, the
characteristics of a time-invariant system do not change with time. Otherwise, the system
is said to be time variant.
Consider a continuous-time system whose input-output relation is described by Eq.
(1.69), reproduced here for convenience of presentation:

y(t) = H{x(t)}

Suppose the input signal x(t) is shifted in time by t₀ seconds, resulting in the new input
x(t - t₀). This operation may be described by writing

x(t - t₀) = S^{t₀}{x(t)}

where the operator S^{t₀} represents a time shift equal to t₀ seconds. Let y_i(t) denote the
output signal of the system produced in response to the time-shifted input x(t - t₀). We
may then write

y_i(t) = H{x(t - t₀)}
       = H{S^{t₀}{x(t)}}      (1.72)
       = HS^{t₀}{x(t)}
which is represented by the block diagram shown in Fig. 1.49(a). Now suppose y₀(t)
represents the output of the original system shifted in time by t₀ seconds, as shown by

y₀(t) = S^{t₀}{y(t)}
      = S^{t₀}{H{x(t)}}      (1.73)
      = S^{t₀}H{x(t)}

which is represented by the block diagram shown in Fig. 1.49(b). The system is time
invariant if the outputs y_i(t) and y₀(t) defined in Eqs. (1.72) and (1.73) are equal for any
identical input signal x(t). Hence we require

HS^{t₀} = S^{t₀}H      (1.74)

That is, for a system described by the operator H to be time invariant, the system operator
H and the time-shift operator S^{t₀} must commute with each other for all t₀. A similar relation
must hold for a discrete-time system to be time invariant.

FIGURE 1.49  The notion of time invariance. (a) Time-shift operator S^{t₀} preceding operator H.
(b) Time-shift operator S^{t₀} following operator H. These two situations are equivalent, provided that
H is time invariant.

EXAMPLE 1.13   Use the voltage v(t) across an inductor to represent the input signal x(t), and
the current i(t) flowing through it to represent the output signal y(t). Thus the inductor is
described by the input-output relation

y(t) = (1/L) ∫_{-∞}^{t} x(τ) dτ

where L is the inductance. Show that the inductor so described is time invariant.

Solution:  Let the input x(t) be shifted by t₀ seconds, yielding x(t - t₀). The response y_i(t) of
the inductor to x(t - t₀) is

y_i(t) = (1/L) ∫_{-∞}^{t} x(τ - t₀) dτ

Next, let y₀(t) denote the original output of the inductor shifted by t₀ seconds, as shown by

y₀(t) = y(t - t₀)
      = (1/L) ∫_{-∞}^{t-t₀} x(τ) dτ

Though at first examination y_i(t) and y₀(t) look different, they are in fact equal, as shown by
a simple change in the variable for integration. Let

τ' = τ - t₀

For a constant t₀, we have dτ' = dτ. Hence changing the limits of integration, the expression
for y_i(t) may be rewritten as

y_i(t) = (1/L) ∫_{-∞}^{t-t₀} x(τ') dτ'

which, in mathematical terms, is identical to y₀(t). It follows therefore that an ordinary in-
ductor is time invariant.

EXAMPLE 1.14   A thermistor has a resistance that varies with time due to temperature
changes. Let R(t) denote the resistance of the thermistor, expressed as a function of time.
Associating the input signal x(t) with the voltage applied across the thermistor, and the output
signal y(t) with the current flowing through it, we may express the input-output relation of
the thermistor as

y(t) = x(t)/R(t)

Show that the thermistor so described is time variant.

Solution:  Let y_i(t) denote the response of the thermistor produced by a time-shifted version
x(t - t₀) of the original input signal. We may then write

y_i(t) = x(t - t₀)/R(t)

Next, let y₀(t) denote the original output of the thermistor shifted in time by t₀, as shown by

y₀(t) = y(t - t₀)
      = x(t - t₀)/R(t - t₀)

We now see that since, in general, R(t) ≠ R(t - t₀) for t₀ ≠ 0, then

y₀(t) ≠ y_i(t)   for t₀ ≠ 0

Hence a thermistor is time variant, which is intuitively satisfying.
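This conclusion can be visualized with a small numerical experiment: shift the input first, or
shift the output first, and compare. The sketch below is illustrative only; the resistance profile
R(t) = 1 + t and the input x(t) are arbitrary assumptions chosen to make the two results visibly
different.

t  = 0:0.001:2;
t0 = 0.5;                                  % time shift in seconds (arbitrary)
x  = @(tau) cos(2*pi*tau).*(tau >= 0);     % assumed input, zero for negative time
yi = x(t - t0)./(1 + t);                   % response to the shifted input, x(t-t0)/R(t)
y0 = x(t - t0)./(1 + (t - t0));            % original output shifted in time, y(t-t0)
plot(t, yi, t, y0)                         % the two curves differ: the system is time variant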

• Drill Problem 1.25   Is a discrete-time system described by the input-output relation

y[n] = rⁿ x[n]

time invariant?

Answer:  No. •
• LINEARITY

A system is said to be linear if it satisfies the principle of superposition. That is, the response
of a linear system to a weighted sum of input signals is equal to the same weighted sum
of output signals, each output signal being associated with a particular input signal acting
on the system independently of all the other input signals. A system that violates the
principle of superposition is said to be nonlinear.
Let the operator H represent a continuous-time system. Let the signal applied to the
system input be defined by the weighted sum

x(t) = Σ_{i=1}^{N} a_i x_i(t)      (1.75)

where x₁(t), x₂(t), ..., x_N(t) denote a set of input signals, and a₁, a₂, ..., a_N denote the
corresponding weighting factors. The resulting output signal is written as

y(t) = H{x(t)}
     = H{ Σ_{i=1}^{N} a_i x_i(t) }      (1.76)

If the system is linear, we may (in accordance with the principle of superposition) express
the output signal y(t) of the system as

y(t) = Σ_{i=1}^{N} a_i y_i(t)      (1.77)

where y_i(t) is the output of the system in response to the input x_i(t) acting alone; that is,

y_i(t) = H{x_i(t)}      (1.78)

FIGURE 1.50  The linearity property of a system. (a) The combined operation of amplitude scal-
ing and summation precedes the operator H for multiple inputs. (b) The operator H precedes
amplitude scaling for each input; the resulting outputs are summed to produce the overall output
y(t). If these two configurations produce the same output y(t), the operator H is linear.

The weighted sum of Eq. (1.77) describing the output signal y(t) is of the same mathe-
matical form as that of Eq. (1.75), describing the input signal x(t). Substituting Eq. (1.78)
into (1.77), we get

y(t) = Σ_{i=1}^{N} a_i H{x_i(t)}      (1.79)

In order to write Eq. (1.79) in the same form as Eq. (1.76), the system operation described
by H must commute with the summation and amplitude scaling in Eq. (1.79), as illustrated
in Fig. 1.50. Indeed, Eqs. (1.78) and (1.79), viewed together, represent a mathematical
statement of the principle of superposition. For a linear discrete-time system, the principle
of superposition is described in a similar manner.

EXAMPLE 1.15   Consider a discrete-time system described by the input-output relation

y[n] = n x[n]

Show that this system is linear.

Solution:  Let the input signal x[n] be expressed as the weighted sum

x[n] = Σ_{i=1}^{N} a_i x_i[n]

We may then express the resulting output signal of the system as

y[n] = n Σ_{i=1}^{N} a_i x_i[n]
     = Σ_{i=1}^{N} a_i n x_i[n]
     = Σ_{i=1}^{N} a_i y_i[n]

where

y_i[n] = n x_i[n]

is the output due to each input acting independently. We thus see that the given system satisfies
the principle of superposition and is therefore linear.
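A numerical spot check of this superposition argument is easily made; the two input sequences
and the weights a1 and a2 below are arbitrary.

n  = 0:9;
x1 = cos(0.2*pi*n);   x2 = (-0.5).^n;   % two arbitrary input sequences
a1 = 2;   a2 = -3;                      % arbitrary weights
ylhs = n.*(a1*x1 + a2*x2);              % response to the weighted sum of inputs
yrhs = a1*(n.*x1) + a2*(n.*x2);         % weighted sum of the individual responses
disp(max(abs(ylhs - yrhs)))             % zero (to rounding): superposition holds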

EXAMPLE 1.16   Consider next the continuous-time system described by the input-output
relation

y(t) = x(t) x(t - 1)

Show that this system is nonlinear.

Solution:  Let the input signal x(t) be expressed as the weighted sum

x(t) = Σ_{i=1}^{N} a_i x_i(t)

Correspondingly, the output signal of the system is given by the double summation

y(t) = ( Σ_{i=1}^{N} a_i x_i(t) ) ( Σ_{j=1}^{N} a_j x_j(t - 1) )
     = Σ_{i=1}^{N} Σ_{j=1}^{N} a_i a_j x_i(t) x_j(t - 1)

The form of this equation is radically different from that describing the input signal x(t). That
is, here we cannot write y(t) = Σ_{i=1}^{N} a_i y_i(t). Thus the system violates the principle of superpos-
ition and is therefore nonlinear.
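The failure of superposition can likewise be confirmed numerically. The sketch below approximates
the continuous-time system on a sampling grid, realizing the 1-second delay as a shift by 1/Ts
samples; the two test inputs and the weights are arbitrary assumptions.

Ts = 0.001;   t = 0:Ts:3;   d = round(1/Ts);   % 1-second delay measured in samples
shift = @(v) [zeros(1,d), v(1:end-d)];         % maps x(t) to x(t-1) on the grid
x1 = sin(2*pi*t);   x2 = exp(-t);              % arbitrary test inputs
a1 = 1;   a2 = 2;                              % arbitrary weights
x  = a1*x1 + a2*x2;
ylhs = x.*shift(x);                            % response to the weighted sum
yrhs = a1*(x1.*shift(x1)) + a2*(x2.*shift(x2));
disp(max(abs(ylhs - yrhs)))                    % clearly nonzero: superposition fails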

• Drill Problem 1.26   Show that the moving-average system described by

y[n] = ⅓(x[n] + x[n - 1] + x[n - 2])

is a linear system. •

• Drill Problem 1.27   Is it possible for a linear system to be noncausal?

Answer:  Yes. •

• Drill Problem 1.28   The hard limiter is a memoryless device whose output y is
related to the input x by

y = { 1,  x > 0
      0,  x < 0

Is the hard limiter linear?

Answer:  No. •
1.9  Exploring Concepts with MATLAB

The basic object used in MATLAB is a rectangular numerical matrix with possibly complex
elements. The kinds of data objects encountered in the study of signals and systems are all
well suited to matrix representations. In this section we use MATLAB to explore the
generation of elementary signals described in previous sections. The exploration of systems
and more advanced signals is deferred to subsequent chapters.
The MATLAB Signal Processing Toolbox has a large variety of functions for gen-
erating signals, most of which require that we begin with the vector representation of time
t or n. To generate a vector t of time values with a sampling interval of 1 ms on the
interval from 0 to 1 s, for example, we use the command:

t = 0:.001:1;

This corresponds to 1000 time samples for each second, or a sampling rate of 1000 Hz.
To generate a vector n of time values for discrete-time signals, say, from n = 0 to n = 1000,
we use the command:

n = 0:1000;

Given t or n, we may then proceed to generate the signal of interest.
In MATLAB, a discrete-time signal is represented exactly, because the values of the
signal are described as the elements of a vector. On the other hand, MATLAB provides
only an approximation to a continuous-time signal. The approximation consists of a vector
whose individual elements are samples of the underlying continuous-time signal. When
using this approximate approach, it is important that we choose the sampling interval
sufficiently small so as to ensure that the samples capture all the details of the signal.
In this section, we consider the generation of both continuous-time and discrete-time
signals of various kinds.

• PERIODIC SJGNALS

lt is an easy 1natter to generate peric)dic signals such as square waves and triangular waves
using MATLAB. Consider first the generation of a square wave <>Í amplitude A, funda-
mental freqt1ency wO (measured in radians per second), and duty cycle r h o. That is, r h o
is the fracrion of each peric>d for which the signal is positive. To generate such a sígnal,
we use the basic command:
A*square(wO*t + rho);
Thc sqttare wave sh(>wn in Fig. 1.13(a) was thus generated using the following complete
set <)f commands:
>> = 1;
A
>> wO = 10*pi;
>> rho = 0.5;
>> t = 0:.001:1;
» sq = A*square(wO*t + rho);
>> plot(t, sq)

ln the second command, pi is a built~in MATLAB function that returns the floating-pc>i11t
number clc>sest to '71'. The last con1mand is used to view the square wave. The comma11d
p lo t draws lines C<>nnecti11g the successive values of the signal a11d thus gives the ap-
pearance of a continuc)us-time signal.
Consider next the generacion of a triangular wave <>f amplitude A, fundamental fre-
quency wO (measured in radians per second), and width W. Ler che peric)d of the triangular
wave be T, with the first maximum value occurring at t = WT. The basic command for
generating this second periodic signal is
A*sawtooth(wO*t + W);
Thus to generate the symmetric triangular wave show11 in Fig. 1.14, we t1sed the fclllowing
commands:
>> A = 1;
>> wO = 10*pi;
>>W=0.5;
>> t = 0:0.001:1;
» tri = A*sawtooth(wO*t + W);
>> plot(t, tri>
56 CHAPTER l • INTRODlJCTION

As mentioned previously, a signal generated on MATLAB is inherently of a discret


time nature. To visualize a discrete-time signal, we may use the s tem command. Speci:
cally, s tem< n, x) depícts the data contained in vector x as a discrete-time sígnal at tf
time values defined by n. The vectors n and x must, of course, have compatible dimension
Consíder, for example, the discrete-time square wave shown in Fig. 1.15. This sig11
is generated using the following commands:
>>A= 1;
>>omega= pi/4;
>> rho = 0.5;
>> n = -10:10;
» x = A*square(omega*n + rho);
>> s t em ( n , x )

• Drill Problem 1 .. 29 Use MATLAB to generare the triangular wave depicted in Fi1
1.14. ·

• EXPONENTIAL SIGNALS

Moving on to exponential signals, we have decaying exponentials and growing exponen-
tials. The MATLAB command for generating a decaying exponential is

B*exp(-a*t);

To generate a growing exponential, we use the command

B*exp(a*t);

In both cases, the exponential parameter a is positive. The following commands were used
to generate the decaying exponential signal shown in Fig. 1.26(a):

>> B = 5;
>> a = 6;
>> t = 0:.001:1;
>> x = B*exp(-a*t);   % decaying exponential
>> plot(t, x)

The growing exponential signal shown in Figure 1.26(b) was generated using the
commands

>> B = 1;
>> a = 5;
>> t = 0:0.001:1;
>> x = B*exp(a*t);   % growing exponential
>> plot(t, x)

Consider next the exponential sequence defined in Eq. (1.31). The growing form of
this exponential is shown in Fig. 1.28(b). This figure was generated using the following
commands:

>> B = 1;
>> r = 0.85;
>> n = -10:10;
>> x = B*r.^n;
>> stem(n, x)

Note that, in this example, the base r is a scalar but the exponent is a vector, hence the
use of the symbol .^ to denote element-by-element powers.

• Drill Problem 1.30   Use MATLAB to generate the decaying exponential sequence
depicted in Fig. 1.28(a). •
• SINUSOIDAL SIGNALS

MATLAB also contains trigonometric functions that can be used to generate sinusoidal
signals. A cosine signal of amplitude A, frequency wO (measured in radians per second),
and phase angle phi (in radians) is obtained by using the command

A*cos(wO*t + phi);

Alternatively, we may use the sine function to generate a sinusoidal signal by using the
command

A*sin(wO*t + phi);

These two commands were used as the basis of generating the sinusoidal signals shown in
Fig. 1.29. Specifically, for the cosine signal shown in Fig. 1.29(a), we used the following
commands:

>> A = 4;
>> wO = 20*pi;
>> phi = pi/6;
>> t = 0:.001:1;
>> cosine = A*cos(wO*t + phi);
>> plot(t, cosine)

• Drill Problem 1.31   Use MATLAB to generate the sine signal shown in Fig.
1.29(b). •

Consider next the discrete-time sinusoidal signal defined in Eq. (1.36). This periodic
signal is plotted in Fig. 1.31. The figure was generated using the following commands:

>> A = 1;
>> omega = 2*pi/12;   % angular frequency
>> phi = 0;
>> n = -10:10;
>> y = A*cos(omega*n);
>> stem(n, y)

• EXPONENTIALLY DAMPED SINUSOIDAL SIGNALS

In all of the signal-generation commands described above, we have generated the desired
amplitude by multiplying a scalar, A, into a vector representing a unit-amplitude signal
(e.g., sin(wO*t + phi)). This operation is described by using an asterisk. We next
consider the generation of a signal that requires element-by-element multiplication of two
vectors.
Suppose we multiply a sinusoidal signal by an exponential signal to produce an
exponentially damped sinusoidal signal. With each signal component being represented
by a vector, the generation of such a product signal requires the multiplication of one
vector by another vector on an element-by-element basis. MATLAB represents element-
by-element multiplication by using a dot followed by an asterisk. Thus the command for
generating the exponentially damped sinusoidal signal

x(t) = A sin(ω₀t + φ) exp(-at)

is as follows:

A*sin(wO*t + phi).*exp(-a*t);

For a decaying exponential, a is positive. This command was used in the generation of
the waveform shown in Fig. 1.33. The complete set of commands is as follows:

>> A = 60;
>> wO = 20*pi;
>> phi = 0;
>> a = 6;
>> t = 0:.001:1;
>> expsin = A*sin(wO*t + phi).*exp(-a*t);
>> plot(t, expsin)

Consider next the exponentially damped sinusoidal sequence depicted in Fig. 1.51.
This sequence is obtained by multiplying the sinusoidal sequence x[n] of Fig. 1.31 by the
decaying exponential sequence y[n] of Fig. 1.28(a). Both of these sequences are defined for
n = -10:10. Thus using z[n] to denote this product sequence, we may use the following
commands to generate and visualize it:

>> z = x.*y;   % elementwise multiplication
>> stem(n, z)

Note that there is no need to include the definition of n in the generation of z as it is
already included in the commands for both x and y.

FIGURE 1.51  Exponentially damped sinusoidal sequence.

• Drill Problem 1.32   Use MATLAB to generate a signal defined as the product of
the growing exponential of Fig. 1.28(b) and the sinusoidal signal of Fig. 1.31. •

• STEP, IMPULSE, AND RAMP FUNCTIONS

In MATLAB, ones(M, N) is an M-by-N matrix of ones, and zeros(M, N) is an
M-by-N matrix of zeros. We may use these two matrices to generate two commonly used
signals, as follows:

• Step function. A unit-amplitude step function is generated by writing

u = [zeros(1, 50), ones(1, 50)];

• Discrete-time impulse. A unit-amplitude discrete-time impulse is generated by writing

delta = [zeros(1, 49), 1, zeros(1, 49)];

To generate a ramp sequence, we simply write

ramp = n;

In Fig. 1.37, we illustrated how a pair of step functions shifted in time relative to
each other may be used to produce a rectangular pulse. In light of the procedure illustrated
therein, we may formulate the following set of commands for generating a rectangular
pulse centered on the origin:

t = -1:1/500:1;
u1 = [zeros(1, 250), ones(1, 751)];
u2 = [zeros(1, 751), ones(1, 250)];
u = u1 - u2;

The first command defines time running from -1 second to 1 second in increments of 2
milliseconds. The second command generates a step function u1 of unit amplitude, onset
at time t = -0.5 second. The third command generates a second step function u2, onset
at time t = 0.5 second. The fourth command subtracts u2 from u1 to produce a rectan-
gular pulse of unit amplitude and unit duration centered on the origin.

• USER-DEFINED FUNCTION

An important feature of the MATLAB environment is that it permits us to create our own
M-files or subroutines. Two types of M-files exist, namely, scripts and functions. Scripts,
or script files, automate long sequences of commands. On the other hand, functions, or
function files, provide extensibility to MATLAB by allowing us to add new functions. Any
variables used in function files do not remain in memory. For this reason, input and output
variables must be declared explicitly.
We may thus say that a function M-file is a separate entity characterized as follows:

1. It begins with a statement defining the function name, its input arguments, and its
output arguments.
2. It also includes additional statements that compute the values to be returned.
3. The inputs may be scalars, vectors, or matrices.

Consider, for example, the generation of the rectangular pulse depicted in Fig. 1.37
using an M-file. This pulse has unit amplitude and unit duration. To generate it, we create
a file called rect.m containing the following statements:

function g = rect(x)
g = zeros(size(x));
set1 = find(abs(x) <= 0.5);
g(set1) = ones(size(set1));

In the last three statements of this M-file, we have introduced two useful functions:

• The function size returns a two-element vector containing the row and column
dimensions of a matrix.
• The function find returns the indices of a vector or matrix that satisfy a prescribed
relational condition. For the example at hand, find(abs(x) <= T) returns the
indices of the vector x where the absolute value of x is less than or equal to T.

The new function rect.m can be used like any other MATLAB function. In particular,
we may use it to generate a rectangular pulse, as follows:

t = -1:1/500:1;
plot(t, rect(t))

1.10  Summary

In this chapter we presented an overview of signals and systems, setting the stage for the
rest of the book. A particular theme that stands out in the discussion presented herein is
that signals may be of the continuous-time or discrete-time variety, and likewise for sys-
tems, as summarized here:

• A continuous-time signal is defined for all values of time. In contrast, a discrete-time
signal is defined only for discrete instants of time.
• A continuous-time system is described by an operator that changes a continuous-
time input signal into a continuous-time output signal. In contrast, a discrete-time
system is described by an operator that changes a discrete-time input signal into a
discrete-time output signal.

In practice, many systems mix continuous-time and discrete-time components. Analysis of
mixed systems is an important part of the material presented in Chapters 4, 5, 8, and 9.
In discussing the various properties of signals and systems, we took special care in
treating these two classes of signals and systems side by side. In so doing, much is gained
by emphasizing the similarities and differences between continuous-time signals/systems
and their discrete-time counterparts. This practice is followed in later chapters too, as
appropriate.
Another noteworthy point is that, in the study of systems, particular attention is
given to the analysis of linear time-invariant systems. Linearity means that the system obeys
the principle of superposition. Time invariance means that the characteristics of the system
do not change with time. By invoking these two properties, the analysis of systems becomes
mathematically tractable. Indeed, a rich set of tools has been developed for the analysis of
linear time-invariant systems, which provides direct motivation for much of the material
on system analysis presented in this book.
In this chapter, we also explored the use of MATLAB for the generation of elemen-
tary waveforms, representing the continuous-time and discrete-time variety. MATLAB
provides a powerful environment for exploring concepts and testing system designs, as
will be illustrated in subsequent chapters.

FURTHER READING

1. For a readable account of signals, their representations, and use in communication systems,
see the book:
• Pierce, J. R., and A. M. Noll, Signals: The Science of Telecommunications (Scientific American
Library, 1990)

2. For examples of control systems, see Chapter 1 of the book:
• Kuo, B. C., Automatic Control Systems, Seventh Edition (Prentice-Hall, 1995)
and Chapters 1 and 2 of the book:
• Phillips, C. L., and R. D. Harbor, Feedback Control Systems, Third Edition (Prentice-Hall, 1996)

3. For a general discussion of remote sensing, see the book:
• Hord, R. M., Remote Sensing: Methods and Applications (Wiley, 1986)
For material on the use of spaceborne radar for remote sensing, see the book:
• Elachi, C., Introduction to the Physics and Techniques of Remote Sensing (Wiley, 1987)
For detailed description of synthetic aperture radar and the role of signal processing in its
implementation, see the book:
• Curlander, J. C., and R. N. McDonough, Synthetic Aperture Radar: Systems and Signal
Processing (Wiley, 1991)

4. For a collection of essays on biological signal processing, see the book:
• Weitkunat, R., editor, Digital Biosignal Processing (Elsevier, 1991)

5. For detailed discussion of the auditory system, see the following:
• Dallos, P., A. N. Popper, and R. R. Fay, editors, The Cochlea (Springer-Verlag, 1996)
• Hawkins, H. L., and T. McMullen, editors, Auditory Computation (Springer-Verlag, 1996)
• Kelly, J. P., "Hearing." In E. R. Kandel, J. H. Schwartz, and T. M. Jessell, Principles of Neural
Science, Third Edition (Elsevier, 1991)
The cochlea has provided a source of motivation for building an electronic version of it,
using silicon integrated circuits. Such an artificial implementation is sometimes referred to
as a "silicon cochlea." For a discussion of the silicon cochlea, see:
• Lyon, R. F., and C. Mead, "Electronic Cochlea." In C. Mead, Analog VLSI and Neural Systems
(Addison-Wesley, 1989)

6. For an account of the legendary story of the first Tacoma Narrows suspension bridge, see
the report:
• Smith, D., "A Case Study and Analysis of the Tacoma Narrows Bridge Failure," 99.497 En-
gineering Project, Department of Mechanical Engineering, Carleton University, March 29,
1974 (supervised by Professor G. Kardos)

7. For a textbook treatment of MATLAB, see:
• Etter, D. M., Engineering Problem Solving with MATLAB (Prentice-Hall, 1993)
PROBLEMS

1.1 Find the even and odd components of each of the following signals:
(a) x(t) = cos(t) + sin(t) + sin(t) cos(t)
(b) x(t) = 1 + t + 3t² + 5t³ + 9t⁴
(c) x(t) = 1 + t cos(t) + t² sin(t) + t³ sin(t) cos(t)
(d) x(t) = (1 + t³) cos³(10t)
1.2 Determine whether the following signals are periodic. If they are periodic, find the fundamental period.
(a) x(t) = (cos(2πt))²
(b) x(t) = Σ_{k=-∞}^{∞} w(t - 2k) for w(t) depicted in Fig. P1.2b.
(c) x(t) = Σ_{k=-∞}^{∞} w(t - 3k) for w(t) depicted in Fig. P1.2b.
(d) x[n] = (-1)ⁿ
(e) x[n] = (-1)^{n²}
(f) x[n] depicted in Fig. P1.2f.
(g) x(t) depicted in Fig. P1.2g.
(h) x[n] = cos(2n)
(i) x[n] = cos(2πn)
1.3 The sinusoidal signal
x(t) = 3 cos(200t + π/6)
is passed through a square-law device defined by the input-output relation
y(t) = x²(t)
Using the trigonometric identity
cos²θ = ½(cos 2θ + 1)
show that the output y(t) consists of a dc component and a sinusoidal component.
(a) Specify the dc component.
(b) Specify the amplitude and fundamental frequency of the sinusoidal component in the output y(t).
1.4 Categorize each of the following signals as an energy or power signal, and find the energy or power of the signal.
(a) x(t) = { t, 0 ≤ t ≤ 1;  2 - t, 1 ≤ t ≤ 2;  0, otherwise }
(b) x[n] = { n, 0 ≤ n ≤ 5;  10 - n, 5 ≤ n ≤ 10;  0, otherwise }
(c) x(t) = 5 cos(πt) + sin(5πt),  -∞ < t < ∞
(d) x(t) = { 5 cos(πt), -1 ≤ t ≤ 1;  0, otherwise }
(e) x(t) = { 5 cos(πt), -0.5 ≤ t ≤ 0.5;  0, otherwise }
(f) x[n] = { sin((π/2)n), -4 ≤ n ≤ 4;  0, otherwise }
(g) x[n] = { cos(πn), -4 ≤ n ≤ 4;  0, otherwise }
(h) x[n] = { cos(πn), n ≥ 0;  0, otherwise }

FIGURE P1.2
1.5 Consider the sinusoidal signal
x(t) = A cos(ωt + φ)
Determine the average power of x(t).
1.6 The angular frequency Ω of the sinusoidal signal
x[n] = A cos(Ωn + φ)
satisfies the condition for x[n] to be periodic. Determine the average power of x[n].
1.7 The raised-cosine pulse x(t) shown in Fig. P1.7 is defined as
x(t) = { ½[cos(ωt) + 1], -π/ω < t < π/ω;  0, otherwise }
Determine the total energy of x(t).

FIGURE P1.7

1.8 The trapezoidal pulse x(t) shown in Fig. P1.8 is defined by
x(t) = { 5 - t, 4 < t < 5;  1, -4 < t < 4;  t + 5, -5 < t < -4;  0, otherwise }
Determine the total energy of x(t).

FIGURE P1.8

1.9 The trapezoidal pulse x(t) of Fig. P1.8 is applied to a differentiator, defined by
y(t) = (d/dt) x(t)
(a) Determine the resulting output y(t) of the differentiator.
(b) Determine the total energy of y(t).
1.10 A rectangular pulse x(t) is defined by
x(t) = { A, 0 < t < T;  0, otherwise }
The pulse x(t) is applied to an integrator defined by
y(t) = ∫₀ᵗ x(τ) dτ
Find the total energy of the output y(t).
1.11 The trapezoidal pulse x(t) of Fig. P1.8 is time scaled, producing
y(t) = x(at)
Sketch y(t) for (a) a = 5 and (b) a = 0.2.
1.12 A triangular pulse signal x(t) is depicted in Fig. P1.12. Sketch each of the following signals derived from x(t):
(a) x(3t)
(b) x(3t + 2)
(c) x(-2t - 1)
(d) x(2(t + 2))
(e) x(2(t - 2))
(f) x(3t) + x(3t + 2)

FIGURE P1.12

1.13 Sketch the trapezoidal pulse y(t) that is related to that of Fig. P1.8 as follows:
y(t) = x(10t - 5)
1.14 Let x(t) and y(t) be given in Figs. P1.14(a) and (b), respectively. Carefully sketch the following signals:
(a) x(t)y(t - 1)
(b) x(t - 1)y(-t)
(c) x(t + 1)y(t - 2)
(d) x(t)y(-1 - t)
(e) x(t)y(2 - t)
(f) x(2t)y(½t + 1)
(g) x(4 - t)y(t)
FIGURE P1.14

1.15 Figure P1.15(a) shows a staircase-like signal x(t) that may be viewed as the superposition of four rectangular pulses. Starting with the rectangular pulse g(t) shown in Fig. P1.15(b), construct this waveform, and express x(t) in terms of g(t).

FIGURE P1.15

1.16 Sketch the waveforms of the following signals:
(a) x(t) = u(t) - u(t - 2)
(b) x(t) = u(t + 1) - 2u(t) + u(t - 1)
(c) x(t) = -u(t + 3) + 2u(t + 1) - 2u(t - 1) + u(t - 3)
(d) y(t) = r(t + 1) - r(t) + r(t - 2)
(e) y(t) = r(t + 2) - r(t + 1) - r(t - 1) + r(t - 2)
1.17 Figure P1.17(a) shows a pulse x(t) that may be viewed as the superposition of three rectangular pulses. Starting with the rectangular pulse g(t) of Fig. P1.17(b), construct this waveform, and express x(t) in terms of g(t).

FIGURE P1.17

1.18 Let x[n] and y[n] be given in Figs. P1.18(a) and (b), respectively. Carefully sketch the following signals:
(a) x[2n]
(b) x[3n - 1]
(c) y[1 - n]
(d) y[2 - 2n]
(e) x[n - 2] + y[n + 2]
(f) x[2n] + y[n - 4]
(g) x[n + 2]y[n - 2]
(h) x[3 - n]y[n]
(i) x[-n]y[-n]
(j) x[n]y[-2 - n]
(k) x[n + 2]y[6 - n]

FIGURE P1.18
Problems 65

1.19 Consider the sinusoidal signal
x[n] = 10 cos((4π/31)n + π/5)
Determine the fundamental period of x[n].
1.20 The sinusoidal signal x[n] has fundamental period N = 10 samples. Determine the smallest angular frequency Ω for which x[n] is periodic.
1.21 Determine whether the following signals are periodic. If they are periodic, find the fundamental period.
(a) x[n] = cos((1/5)πn)
(b) x[n] = cos(nπn)
(c) x(t) = cos(2t) + sin(3t)
(d) x(t) = Σ_{k=-∞}^{∞} (-1)^k δ(t - 2k)
(e) x[n] = Σ_{k=-∞}^{∞} {δ[n - 3k] + δ[n - k²]}
(f) x(t) = cos(t)u(t)
(g) x(t) = v(t) + v(-t), where v(t) = cos(t)u(t)
(h) x(t) = v(t) + v(-t), where v(t) = sin(t)u(t)
(i) x[n] = cos((1/2)πn) sin((1/8)πn)
1.22 A complex sinusoidal signal x(t) has the following components:
Re{x(t)} = xR(t) = A cos(ωt + φ)
Im{x(t)} = xI(t) = A sin(ωt + φ)
The amplitude of x(t) is defined by the square root of xR²(t) + xI²(t). Show that this amplitude equals A, independent of the phase angle φ.
1.23 Consider the complex-valued exponential signal
x(t) = Ae^(αt + jωt), α > 0
Evaluate the real and imaginary components of x(t).
1.24 Consider the continuous-time signal
x(t) = t/T + 0.5 for -T/2 ≤ t ≤ T/2, x(t) = 1 for t > T/2, and x(t) = 0 for t < -T/2,
which is applied to a differentiator. Show that the output of the differentiator approaches the unit impulse δ(t) as T approaches zero.
1.25 In this problem, we explore what happens when a unit impulse is applied to a differentiator. Consider a triangular pulse x(t) of duration T and amplitude 2/T, as depicted in Fig. P1.25. The area under the pulse is unity. Hence as the duration T approaches zero, the triangular pulse approaches a unit impulse.
(a) Suppose the triangular pulse x(t) is applied to a differentiator. Determine the output y(t) of the differentiator.
(b) What happens to the differentiator output y(t) as T approaches zero? Use the definition of a unit impulse δ(t) to express your answer.
(c) What is the total area under the differentiator output y(t) for all T? Justify your answer.
Based on your findings in parts (a) to (c), describe in succinct terms the result of differentiating a unit impulse.

FIGURE P1.25 Triangular pulse x(t).

1.26 The derivative of the impulse function δ(t) is referred to as a doublet. It is denoted by δ'(t). Show that δ'(t) satisfies the sifting property
∫_{-∞}^{∞} δ'(t - t0) f(t) dt = f'(t0)
where
f'(t0) = (d/dt) f(t) evaluated at t = t0
Assume that the function f(t) has a continuous derivative at time t = t0.
1.27 A system consists of several subsystems connected as shown in Fig. P1.27. Find the operator H relating x(t) to y(t) for the subsystem operators given by:
H1: y1(t) = x1(t)x1(t - 1)
H2: y2(t) = |x2(t)|
H3: y3(t) = 1 + 2x3(t)
H4: y4(t) = cos(x4(t))

FIGURE P1.27
FIGURE P1.39 (a), (b) Input-output pairs for Problem 1.39.

1.28 The systems given below have input x(t) or x[n] and output y(t) or y[n], respectively. Determine whether each of them is (i) memoryless, (ii) stable, (iii) causal, (iv) linear, and (v) time invariant.
(a) y(t) = cos(x(t))
(b) y[n] = 2x[n]u[n]
(c) y[n] = log10(|x[n]|)
(d) y(t) = ∫_{-∞}^{t} x(τ) dτ
(e) y[n] = Σ_{k=-∞}^{n} x[k + 2]
(f) y(t) = (d/dt) x(t)
(g) y[n] = cos(2πx[n + 1]) + x[n]
(h) y(t) = (d/dt){e^(-t) x(t)}
(i) y(t) = x(2 - t)
(j) y[n] = x[n] Σ_{k=-∞}^{∞} δ[n - 2k]
(k) y(t) = x(t/2)
(l) y[n] = 2x[2^n]
1.29 The output of a discrete-time system is related to its input x[n] as follows:
y[n] = a0 x[n] + a1 x[n - 1] + a2 x[n - 2] + a3 x[n - 3]
Let the operator S^k denote a system that shifts the input x[n] by k time units to produce x[n - k]. Formulate the operator H for the system relating y[n] to x[n]. Hence develop a block diagram representation for H, using (a) cascade implementation and (b) parallel implementation.
1.30 Show that the system described in Problem 1.29 is BIBO stable for all a0, a1, a2, and a3.
1.31 How far does the memory of the discrete-time system described in Problem 1.29 extend into the past?
1.32 Is it possible for a noncausal system to possess memory? Justify your answer.
1.33 The output signal y[n] of a discrete-time system is related to its input signal x[n] as follows:
y[n] = x[n] + x[n - 1] + x[n - 2]
Let the operator S denote a system that shifts its input by one time unit.
(a) Formulate the operator H for the system relating y[n] to x[n].
(b) The operator H^(-1) denotes a discrete-time system that is the inverse of this system. How is H^(-1) defined?
1.34 Show that the discrete-time system described in Problem 1.29 is time invariant, independent of the coefficients a0, a1, a2, and a3.
1.35 Is it possible for a time-variant system to be linear? Justify your answer.
1.36 Show that an Nth power-law device defined by the input-output relation
y(t) = x^N(t), N integer and N ≠ 0, 1
is nonlinear.
1.37 A linear time-invariant system may be causal or noncausal. Give an example for each one of these two possibilities.
1.38 Figure 1.50 shows two equivalent system configurations on condition that the system operator H is linear. Which of these two configurations is simpler to implement? Justify your answer.
1.39 A system H has its input-output pairs given. Determine whether the system could be memoryless, causal, linear, and time invariant for (a) signals depicted in Fig. P1.39(a) and (b) signals depicted in Fig. P1.39(b). For all cases, justify your answers.
1.40 A linear system H has the input-output pairs depicted in Fig. P1.40(a). Determine the following and explain your answers:
(a) Is this system causal?
(b) Is this system time invariant?
(c) Is this system memoryless?
(d) Find the output for the input depicted in Fig. P1.40(b).
1.41 A discrete-time system is both linear and time invariant. Suppose the output due to an input x[n] = δ[n] is given in Fig. P1.41(a).
(a) Find the output due to an input x[n] = δ[n - 1].
(b) Find the output due to an input x[n] = 2δ[n] - δ[n - 2].
(c) Find the output due to the input depicted in Fig. P1.41(b).

• Computer Experiments

1.42 Write a set of MATLAB commands for approximating the following continuous-time periodic waveforms:
(a) Square wave of amplitude 5 volts, fundamental frequency 20 Hz, and duty cycle 0.6.
(b) Sawtooth wave of amplitude 5 volts and fundamental frequency 20 Hz.
Hence plot five cycles of each of these two waveforms.
1.43 (a) The solution to a linear differential equation is given by
x(t) = 10e^(-t) - 5e^(-0.5t)
Using MATLAB, plot x(t) versus t for t = 0:0.01:5.
(b) Repeat the problem for
x(t) = 10e^(-t) + 5e^(-0.5t)

FIGURE P1.40 (a) Input-output pairs for Problem 1.40. (b) Input signal for Problem 1.40(d).
FIGURE P1.41 (a) Impulse response for Problem 1.41. (b) Input signal for Problem 1.41(c).

1.44 An exponentially damped sinusoidal signal is defined by
x(t) = 20 sin(2π × 1000t - π/3) exp(-at)
where the exponential parameter a is variable; it takes on the following set of values: a = 500,
750, 1000. Using MATLAB, investigate the effect of varying a on the signal x(t) for -2 ≤ t ≤ 2 milliseconds.
1.45 A raised-cosine sequence is defined by
w[n] = cos(2πFn) for -1/(2F) ≤ n ≤ 1/(2F), and w[n] = 0 otherwise
Use MATLAB to plot w[n] versus n for F = 0.1.
1.46 A rectangular pulse x(t) is defined by
x(t) = 10 for 0 ≤ t ≤ 5, and x(t) = 0 otherwise
Generate x(t) using:
(a) A pair of time-shifted step functions.
(b) An M-file.

Time-Domain Representations
for Linear Time-Invariant Systems

2.1 Introduction
In this chapter we consider several methods for describing the relationship between the input and output of linear time-invariant (LTI) systems. The focus here is on system descriptions that relate the output signal to the input signal when both signals are represented as functions of time, hence the terminology "time domain" in the chapter title. Methods for relating system output and input in domains other than time are presented in later chapters. The descriptions developed in this chapter are useful for analyzing and predicting the behavior of LTI systems and for implementing discrete-time systems on a computer.

We begin by characterizing a LTI system in terms of its impulse response. The impulse response is the system output associated with an impulse input. Given the impulse response, we determine the output due to an arbitrary input by expressing the input as a weighted superposition of time-shifted impulses. By linearity and time invariance, the output must be a weighted superposition of time-shifted impulse responses. The term "convolution" is used to describe the procedure for determining the output from the input and the impulse response.

The second method considered for characterizing the input-output behavior of LTI systems is the linear constant-coefficient differential or difference equation. Differential equations are used to represent continuous-time systems, while difference equations represent discrete-time systems. We focus on characterizing differential and difference equation solutions with the goal of developing insight into system behavior.

The third system representation we discuss is the block diagram. A block diagram represents the system as an interconnection of three elementary operations: scalar multiplication, addition, and either a time shift for discrete-time systems or integration for continuous-time systems.

The final time-domain representation discussed in this chapter is the state-variable description. The state-variable description is a series of coupled first-order differential or difference equations that represent the behavior of the system's "state" and an equation that relates the state to the output. The state is a set of variables associated with energy storage or memory devices in the system.

All four of these time-domain system representations are equivalent in the sense that identical outputs result from a given input. However, each relates the input and output in a different manner. Different representations offer different views of the system, with each offering different insights into system behavior. Each representation has advantages and disadvantages for analyzing and implementing systems. Understanding how different representations are related and determining which offers the most insight and straightforward solution in a particular problem are important skills to develop.

2.2 Convolution: Impulse Response Representation for LTI Systems
The impulse response is the output of a LTI system due to an impulse input applied at time t = 0 or n = 0. The impulse response completely characterizes the behavior of any LTI system. This may seem surprising, but it is a basic property of all LTI systems. The impulse response is often determined from knowledge of the system configuration and dynamics or, in the case of an unknown system, can be measured by applying an approximate impulse to the system input. Generation of a discrete-time impulse sequence for testing an unknown system is straightforward. In the continuous-time case, a true impulse of zero width and infinite amplitude cannot actually be generated and usually is physically approximated as a pulse of large amplitude and narrow width. Thus the impulse response may be interpreted as the system behavior in response to a high-amplitude, extremely short-duration input.

If the input to a linear system is expressed as a weighted superposition of time-shifted impulses, then the output is a weighted superposition of the system response to each time-shifted impulse. If the system is also time invariant, then the system response to a time-shifted impulse is a time-shifted version of the system response to an impulse. Hence the output of a LTI system is given by a weighted superposition of time-shifted impulse responses. This weighted superposition is termed the convolution sum for discrete-time systems and the convolution integral for continuous-time systems.

We begin by considering the discrete-time case. First an arbitrary signal is expressed as a weighted superposition of time-shifted impulses. The convolution sum is then obtained by applying a signal represented in this manner to a LTI system. A similar procedure is used to obtain the convolution integral for continuous-time systems later in this section.

• THE CONVOLUTION SUM

Consider the product of a signal x[n] and the impulse sequence δ[n], written as

x[n]δ[n] = x[0]δ[n]

Generalize this relationship to the product of x[n] and a time-shifted impulse sequence to obtain

x[n]δ[n - k] = x[k]δ[n - k]

In this expression n represents the time index; hence x[n] denotes a signal, while x[k] represents the value of the signal x[n] at time k. We see that multiplication of a signal by a time-shifted impulse results in a time-shifted impulse with amplitude given by the value of the signal at the time the impulse occurs. This property allows us to express x[n] as the following weighted sum of time-shifted impulses:

x[n] = ··· + x[-2]δ[n + 2] + x[-1]δ[n + 1] + x[0]δ[n] + x[1]δ[n - 1] + x[2]δ[n - 2] + ···

We may rewrite this representation for x[n] in concise form as

x[n] = Σ_{k=-∞}^{∞} x[k]δ[n - k]    (2.1)

A graphical illustration of Eq. (2.1) is given in Fig. 2.1.

Let the operator H denote the system to which the input x[n] is applied. Then using Eq. (2.1) to represent the input x[n] to the system results in the output

y[n] = H{ Σ_{k=-∞}^{∞} x[k]δ[n - k] }
FIGURE 2.1 Graphical example illustrating the representation of a signal x[n] as a weighted sum of time-shifted impulses.

Now use the linearity property to interchange the system operator H with the summation and signal values x[k] to obtain

y[n] = Σ_{k=-∞}^{∞} x[k]H{δ[n - k]}
     = Σ_{k=-∞}^{∞} x[k]hk[n]    (2.2)

where hk[n] = H{δ[n - k]} is the response of the system to a time-shifted impulse. If we further assume the system is time invariant, then a time shift in the input results in a time shift in the output. This implies that the output due to a time-shifted impulse is a time-shifted version of the output due to an impulse; that is, hk[n] = h0[n - k]. Letting h[n] = h0[n] be the impulse response of the LTI system H, Eq. (2.2) is rewritten as

y[n] = Σ_{k=-∞}^{∞} x[k]h[n - k]    (2.3)

Thus the output of a LTI system is given by a weighted sum of time-shifted impulse responses. This is a direct consequence of expressing the input as a weighted sum of time-shifted impulses. The sum in Eq. (2.3) is termed the convolution sum and is denoted by the symbol *; that is,

x[n] * h[n] = Σ_{k=-∞}^{∞} x[k]h[n - k]
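When both sequences have finite duration, the convolution sum is easily evaluated numerically. The following MATLAB fragment is only an illustrative sketch (the sample values, and the assumption that both sequences begin at n = 0, are chosen here for convenience); the built-in command conv carries out the sum in Eq. (2.3):

% Convolution sum of two finite-duration sequences (illustrative sketch).
x = [1 2 1 1];          % samples of x[n], assumed to start at n = 0
h = [1 -1 2];           % samples of h[n], assumed to start at n = 0
y = conv(x, h);         % y[n] = sum over k of x[k] h[n-k]
n = 0:length(y)-1;      % time index of each output sample
stem(n, y)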
The convolution process is illustrated in Fig. 2.2. Figure 2.2(a) depicts the impulse response of an arbitrary LTI system. In Fig. 2.2(b) the input is represented as a sum of weighted and time-shifted impulses, pk[n] = x[k]δ[n - k]. The output of the system associated with each input pk[n] is

vk[n] = x[k]h[n - k]

Here vk[n] is obtained by time-shifting the impulse response k units and multiplying by x[k]. The output y[n] in response to the input x[n] is obtained by summing all the sequences vk[n]:

y[n] = Σ_{k=-∞}^{∞} vk[n]

That is, for each value of n, we sum the values along the k axis indicated on the right side of Fig. 2.2(b). The following example illustrates this process.
EXAMPLE 2.1 Assume a LTI system H has impulse response

h[n] = 1 for n = ±1, 2 for n = 0, and 0 otherwise

Determine the output of this system in response to the input

x[n] = 2 for n = 0, 3 for n = 1, -2 for n = 2, and 0 otherwise
FIGURE 2.2 Illustration of the convolution sum. (a) Impulse response of a system. (b) Decomposition of the input x[n] into a weighted sum of time-shifted impulses results in an output y[n] given by a weighted sum of time-shifted impulse responses. Here pk[n] is the weighted (by x[k]) and time-shifted (by k) impulse input, and vk[n] is the weighted and time-shifted impulse response output. The dependence of both pk[n] and vk[n] on k is depicted by the k axis shown on the left- and right-hand sides of the figure. The output is obtained by summing vk[n] over all values of k.
FIGURE 2.2 (c) The signals wn[k] used to compute the output at time n for several values of n. Here we have redrawn the right-hand side of Fig. 2.2(b) so that the k axis is horizontal. The output is obtained for n = n0 by summing wn0[k] over all values of k.

Solution: First write x[n] as the weighted sum of time-shifted impulses

x[n] = 2δ[n] + 3δ[n - 1] - 2δ[n - 2]

Here p0[n] = 2δ[n], p1[n] = 3δ[n - 1], and p2[n] = -2δ[n - 2]. All other time-shifted pk[n] are zero because the input is zero for n < 0 and n > 2. Since a weighted, time-shifted, impulse input, aδ[n - k], results in a weighted, time-shifted, impulse response output, ah[n - k], the system output may be written as

y[n] = 2h[n] + 3h[n - 1] - 2h[n - 2]

Here v0[n] = 2h[n], v1[n] = 3h[n - 1], v2[n] = -2h[n - 2], and all other vk[n] = 0. Summation of the weighted and time-shifted impulse responses over k gives

y[n] = 0 for n ≤ -2, 2 for n = -1, 7 for n = 0, 6 for n = 1, -1 for n = 2, -2 for n = 3, and 0 for n ≥ 4
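The result of Example 2.1 is easy to check numerically. In the sketch below (the bookkeeping convention is an illustration, not part of the example itself) the vectors hold only the nonzero samples of x[n] and h[n], and the starting time of the convolution is the sum of the starting times of the two sequences:

% Numerical check of Example 2.1 using conv.
x  = [2 3 -2];      % x[0], x[1], x[2]
nx = 0;             % time index of the first sample of x
h  = [1 2 1];       % h[-1], h[0], h[1]
nh = -1;            % time index of the first sample of h
y  = conv(x, h);    % nonzero portion of y[n]
ny = nx + nh;       % y starts at the sum of the starting indices
n  = ny : ny + length(y) - 1;
disp([n; y])        % displays y[-1],...,y[3] = 2, 7, 6, -1, -2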

In Example 2.1, we found all the vk[n] and then summed over k to determine y[n]. This approach illustrates the principles that underlie convolution and is very effective when the input is of short duration so that only a small number of signals vk[n] need to be determined. When the input has a long duration, then a very large, possibly infinite, number of signals vk[n] must be evaluated before y[n] can be found and this procedure can be cumbersome.

An alternative approach for evaluating the convolution sum is obtained by a slight change in perspective. Consider evaluating the output at a fixed time n0

y[n0] = Σ_{k=-∞}^{∞} vk[n0]

That is, we sum along the k or vertical axis on the right-hand side of Fig. 2.2(b) at a fixed time n = n0. Suppose we define a signal representing the values at n = n0 as a function of the independent variable k, wn0[k] = vk[n0]. The output is now obtained by summing over the independent variable k:

y[n0] = Σ_{k=-∞}^{∞} wn0[k]

Note that here we need only determine one signal, wn0[k], to evaluate the output at n = n0. Figure 2.2(c) depicts wn0[k] for several different values of n0 and the corresponding output. Here the horizontal axis corresponds to k and the vertical axis corresponds to n. We may view vk[n] as representing the kth row on the right-hand side of Fig. 2.2(b), while wn[k] represents the nth column. In Fig. 2.2(c), wn[k] is the nth row, while vk[n] is the kth column.

We have defined the intermediate sequence wn[k] = x[k]h[n - k] as the product of x[k] and h[n - k]. Here k is the independent variable and n is treated as a constant. Hence h[n - k] = h[-(k - n)] is a reflected and time-shifted (by -n) version of h[k]. The time shift n determines the time at which we evaluate the output of the system, since

y[n] = Σ_{k=-∞}^{∞} wn[k]    (2.4)

Note that now we need only determine one signal, wn[k], for each time at which we desire to evaluate the output.
EXAMPLE 2.2 A LTI system has the impulse response

h[n] = (3/4)^n u[n]

Use Eq. (2.4) to determine the output of the system at times n = -5, n = 5, and n = 10 when the input is x[n] = u[n].

Solution: Here the impulse response and input are of infinite duration so the procedure followed in Example 2.1 would require determining an infinite number of signals vk[n]. By using Eq. (2.4) we only form one signal, wn[k], for each n of interest. Figure 2.3(a) depicts x[k], while Fig. 2.3(b) depicts the reflected and time-shifted impulse response h[n - k]. We see that

h[n - k] = (3/4)^(n-k) for k ≤ n, and 0 otherwise

Figures 2.3(c), (d), and (e) depict the product wn[k] for n = -5, n = 5, and n = 10, respectively. We have

w-5[k] = 0

and thus Eq. (2.4) gives y[-5] = 0. For n = 5, we have

w5[k] = (3/4)^(5-k) for 0 ≤ k ≤ 5, and 0 otherwise

and so Eq. (2.4) gives

y[5] = Σ_{k=0}^{5} (3/4)^(5-k)
FIGURE 2.3 Evaluation of Eq. (2.4) in Example 2.2. (a) The input signal x[k] depicted as a function of k. (b) The reflected and time-shifted impulse response, h[n - k], as a function of k. (c) The product signal w-5[k] used to evaluate y[-5]. (d) The product signal w5[k] used to evaluate y[5]. (e) The product signal w10[k] used to evaluate y[10].
Factor (3/4)^5 from the sum and apply the formula for the sum of a finite geometric series to obtain

y[5] = (3/4)^5 Σ_{k=0}^{5} (4/3)^k
     = (3/4)^5 (1 - (4/3)^6) / (1 - (4/3))

Lastly, for n = 10 we see that

w10[k] = (3/4)^(10-k) for 0 ≤ k ≤ 10, and 0 otherwise

and Eq. (2.4) gives

y[10] = Σ_{k=0}^{10} (3/4)^(10-k)
      = (3/4)^10 Σ_{k=0}^{10} (4/3)^k
      = (3/4)^10 (1 - (4/3)^11) / (1 - (4/3))

Note that in this example wn[k] has only two different functional forms. For n < 0 we have wn[k] = 0 since there is no overlap between the nonzero portions of x[k] and h[n - k]. When n ≥ 0 the nonzero portions of x[k] and h[n - k] overlap on the interval 0 ≤ k ≤ n and we may write

wn[k] = (3/4)^(n-k) for 0 ≤ k ≤ n, and 0 otherwise

Hence we may determine the output for an arbitrary n by using the appropriate functional form for wn[k] in Eq. (2.4).

This example suggests that in general we may determine y[n] for all n without evaluating Eq. (2.4) at an infinite number of distinct time shifts n. This is accomplished by identifying intervals of n on which wn[k] has the same functional form. We then only need to evaluate Eq. (2.4) using the wn[k] associated with each interval. Often it is very helpful to graph both x[k] and h[n - k] when determining wn[k] and identifying the appropriate intervals of time shifts. This procedure is now summarized:

1. Graph both x[k] and h[n - k] as a function of the independent variable k. To determine h[n - k], first reflect h[k] about k = 0 to obtain h[-k] and then time shift h[-k] by -n.
2. Begin with the time shift n large and negative.
3. Write the functional form for wn[k].
4. Increase the time shift n until the functional form for wn[k] changes. The value n at which the change occurs defines the end of the current interval and the beginning of a new interval.
5. Let n be in the new interval. Repeat steps 3 and 4 until all intervals of time shifts n and the corresponding functional forms for wn[k] are identified. This usually implies increasing n to a very large positive number.
6. For each interval of time shifts n, sum all the values of the corresponding wn[k] to obtain y[n] on that interval.

The effect of varying n from - oo to oc is to slide h[-k] past x[ k] fr<)m lcft tt> right.
Transitions in the intervals of n identified in step 4 generally t>ccur when a change point
in the representation for h[-kJ slides through a change point in the representati{>n for
xlk]. Alternatively, we can sum all the values in w,,[k] as each interval ()Í time shifts is
identífied, that is, after step 4, rather than waiting until ali intervals are identified. The
following examples illustrate this procedure for evaluacing the cc>nv<>lution sun1 .

.•

ExAMPLE 2.3 A LTI system has impulse response gíven by
h[n] = u[n] - u[n - 10]
and depicted in Fig. 2.4(a). Determine the output of this system when the input is the rectan~
gular pulse defined as
x[n] = u[n - 2] - u[n - 7]
and shown in Fig. 2.4(b).
Solution: First we graph x[k] and h[n - k], treating nas a constant and k as the independent
variable as depicted in Figs. 2.4(c) and (d). Now identífy intervals of time shifts n on which
the product signal wn[k] has the sarne functional form. Begin with n large and negative, in
which case w,.[k] = O because there is no overlap in the nonzero portio11s of x[k] and
h[11 - k]. By increasing n, we see that w,,[k] = O províded n < 2. Hence the first interval of
time shifts is n < 2.
. . .
....,,. ···"· ~- .. ... ...
.:.
,;,,..,.

FIGURE 2.4 Evaluation of the convolution sum for Example 2.3. (a) The system impulse response h[n]. (b) The input signal x[n]. (c) The input depicted as a function of k. (d) The reflected and time-shifted impulse response h[n - k] depicted as a function of k. (e) The product signal wn[k] for the interval of time shifts 2 ≤ n ≤ 6. (f) The product signal wn[k] for the interval of time shifts 6 < n ≤ 11. (g) The product signal wn[k] for the interval of time shifts 12 ≤ n ≤ 15. (h) The output y[n].
When n = 2 the right edge of h[n - k] slides past the left edge of x[k] and a transition occurs in the functional form for wn[k]. For n ≥ 2,

wn[k] = 1 for 2 ≤ k ≤ n, and 0 otherwise

This functional form is correct until n > 6 and is depicted in Fig. 2.4(e). When n > 6 the right edge of h[n - k] slides past the right edge of x[k] so the form of wn[k] changes. Hence our second interval of time shifts is 2 ≤ n ≤ 6.

For n > 6, the functional form of wn[k] is given by

wn[k] = 1 for 2 ≤ k ≤ 6, and 0 otherwise

as depicted in Fig. 2.4(f). This form holds until n - 9 = 2, or n = 11, since at that value of n the left edge of h[n - k] slides past the left edge of x[k]. Hence our third interval of time shifts is 6 < n ≤ 11.

Next, for n > 11, the functional form for wn[k] is given by

wn[k] = 1 for n - 9 ≤ k ≤ 6, and 0 otherwise

as depicted in Fig. 2.4(g). This form holds until n - 9 = 6, or n = 15, since for n > 15 the left edge of h[n - k] lies to the right of x[k] and the functional form for wn[k] again changes. Hence the fourth interval of time shifts is 11 < n ≤ 15.

For all values of n > 15, we see that wn[k] = 0. Thus the last interval of time shifts in this problem is n > 15.

The output of the system on each interval of n is obtained by summing the values of the corresponding wn[k] according to Eq. (2.4). Beginning with n < 2 we have y[n] = 0. Next, for 2 ≤ n ≤ 6, we have

y[n] = Σ_{k=2}^{n} 1 = n - 1

On the third interval, 6 < n ≤ 11, Eq. (2.4) gives

y[n] = Σ_{k=2}^{6} 1 = 5

For 11 < n ≤ 15, Eq. (2.4) gives

y[n] = Σ_{k=n-9}^{6} 1 = 16 - n

Lastly, for n > 15, we see that y[n] = 0. Figure 2.4(h) depicts the output y[n] obtained by combining the results on each interval.
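The trapezoidal output obtained in this example can be confirmed numerically; the sketch below is illustrative only, with the starting index of the result taken as the sum of the starting indices of the two pulses:

% Numerical check of Example 2.3: convolution of two rectangular pulses.
x  = ones(1, 5);    nx = 2;    % x[n] = u[n-2] - u[n-7]:  ones for n = 2,...,6
h  = ones(1, 10);   nh = 0;    % h[n] = u[n] - u[n-10]:   ones for n = 0,...,9
y  = conv(x, h);
n  = (nx + nh) : (nx + nh) + length(y) - 1;
stem(n, y)                     % trapezoid: rises to 5, stays at 5, falls back to 0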

EXAMPLE 2.4 Let the input, x[n], to a LTI system H be given by

x[n] = α^n {u[n] - u[n - 10]}

and the impulse response of the system be given by

h[n] = β^n u[n]

where 0 < β < 1. Find the output of this system.

Solution: First we graph x[k] and h[n - k], treating n as a constant and k as the independent variable as depicted in Figs. 2.5(a) and (b). We see that

x[k] = α^k for 0 ≤ k ≤ 9, and 0 otherwise
h[n - k] = β^(n-k) for k ≤ n, and 0 otherwise

Now identify intervals of time shifts n on which the functional form of wn[k] is the same. Begin by considering n large and negative. We see that for n < 0, wn[k] = 0 since there are no values k such that x[k] and h[n - k] are both nonzero. Hence the first interval is n < 0.

When n = 0 the right edge of h[n - k] slides past the left edge of x[k] so a transition occurs in the form of wn[k]. For n ≥ 0,

wn[k] = α^k β^(n-k) for 0 ≤ k ≤ n, and 0 otherwise

This form is correct provided 0 ≤ n ≤ 9 and is depicted in Fig. 2.5(c). When n = 9 the right edge of h[n - k] slides past the right edge of x[k] so the form of wn[k] again changes.

Now for n > 9 we have a third form for wn[k],

wn[k] = α^k β^(n-k) for 0 ≤ k ≤ 9, and 0 otherwise

Figure 2.5(d) depicts this wn[k] for the third and last interval in this problem, n > 9.
FIGURE 2.5 Evaluation of the convolution sum for Example 2.4. (a) The input signal x[k] depicted as a function of k. (b) Reflected and time-shifted impulse response, h[n - k]. (c) The product signal wn[k] for 0 ≤ n ≤ 9. (d) The product signal wn[k] for n > 9.
We now determine the output y[n] for each of these three sets of time shifts by summing wn[k] over all k. Starting with the first interval, n < 0, we have wn[k] = 0, and thus y[n] = 0. For the second interval, 0 ≤ n ≤ 9, we have

y[n] = Σ_{k=0}^{n} α^k β^(n-k)

Here the index of summation is limited from k = 0 to n because these are the only times k for which wn[k] is nonzero. Combining terms raised to the kth power, we have

y[n] = β^n Σ_{k=0}^{n} (α/β)^k

Next, apply the formula for summing a geometric series of (n + 1) terms to obtain

y[n] = β^n (1 - (α/β)^(n+1)) / (1 - α/β)

Now considering the third interval, n ≥ 10, we have

y[n] = Σ_{k=0}^{9} α^k β^(n-k)
     = β^n Σ_{k=0}^{9} (α/β)^k
     = β^n (1 - (α/β)^10) / (1 - α/β)

where, again, the index of summation is limited from k = 0 to 9 because these are the only times for which wn[k] is nonzero. The last equality also follows from the formula for a finite geometric series. Combining the solutions for each interval of shifts gives the system output as

y[n] = 0 for n < 0
y[n] = β^n (1 - (α/β)^(n+1)) / (1 - α/β) for 0 ≤ n ≤ 9
y[n] = β^n (1 - (α/β)^10) / (1 - α/β) for n > 9
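For particular values of α and β the closed-form expression can be compared with a numerical convolution. In the sketch below, α = 0.9 and β = 0.8 are arbitrary choices made only for illustration, and h[n] is truncated at a length long enough that the truncation does not affect the samples being compared:

% Numerical check of Example 2.4 for sample values of alpha and beta.
a = 0.9;  b = 0.8;                 % arbitrary choices with 0 < b < 1
x = a.^(0:9);                      % x[n] = a^n (u[n] - u[n-10])
h = b.^(0:99);                     % h[n] = b^n u[n], truncated at n = 99
y = conv(x, h);                    % numerical convolution, starting at n = 0
n = 0:30;                          % compare over a modest range of times
yc = zeros(size(n));               % closed-form result from the example
i1 = (n >= 0) & (n <= 9);
yc(i1) = b.^n(i1) .* (1 - (a/b).^(n(i1)+1)) / (1 - a/b);
i2 = n > 9;
yc(i2) = b.^n(i2) .* (1 - (a/b)^10) / (1 - a/b);
max(abs(y(n+1) - yc))              % difference is at roundoff level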

• Drill Problem 2.1 Repeat the convolution in Example 2.1 by directly evaluating the convolution sum.

Answer: See Example 2.1. •

• Drill Problem 2.2 Let the input to a LTI system with impulse response h[n] = α^n {u[n - 2] - u[n - 13]} be x[n] = 2{u[n + 2] - u[n - 12]}. Find the output y[n].

Answer:
y[n] = 0 for n < 0
y[n] = 2α^(n+2) (1 - α^(-n-1)) / (1 - α^(-1)) for 0 ≤ n ≤ 10
y[n] = 2α^12 (1 - α^(-11)) / (1 - α^(-1)) for 11 ≤ n ≤ 13
y[n] = 2α^12 (1 - α^(n-24)) / (1 - α^(-1)) for 14 ≤ n ≤ 23
y[n] = 0 for n ≥ 24 •
• Drill Problem 2.3 Suppose the input x[n] and impulse response h[n] of a LTI system H are given by

x[n] = -u[n] + 2u[n - 3] - u[n - 6]
h[n] = u[n + 1] - u[n - 10]

Find the output of this system, y[n].

Answer:
y[n] = 0 for n < -1
y[n] = -(n + 2) for -1 ≤ n ≤ 1
y[n] = n - 4 for 2 ≤ n ≤ 4
y[n] = 0 for 5 ≤ n ≤ 9
y[n] = n - 9 for 10 ≤ n ≤ 11
y[n] = 15 - n for 12 ≤ n ≤ 14
y[n] = 0 for n > 14 •
The next example in this subsection uses the convolution sum to obtain an equation directly relating the input and output of a system with a finite-duration impulse response.

EXAMPLE 2.5 Consider a LTI system with impulse response

h[n] = 1/4 for 0 ≤ n ≤ 3, and 0 otherwise

Find an expression that directly relates an arbitrary input x[n] to the output of this system, y[n].

Solution: Figures 2.6(a) and (b) depict an arbitrary input x[k] and the reflected, time-shifted impulse response h[n - k]. For any time shift n we have

wn[k] = (1/4)x[k] for n - 3 ≤ k ≤ n, and 0 otherwise

Summing wn[k] over all k gives the output

y[n] = (1/4)(x[n] + x[n - 1] + x[n - 2] + x[n - 3])

The output of the system in Example 2.5 is the arithmetic average of the four most recent inputs. In Chapter 1 such a system was termed a moving-average system. The effect of the averaging in this system is to smooth out short-term fluctuations in the input data. Such systems are often used to identify trends in data.

FIGURE 2.6 Evaluation of the convolution sum for Example 2.5. (a) An arbitrary input signal depicted as a function of k. (b) Reflected and time-shifted impulse response, h[n - k].
EXAMPLE 2.6 Apply the average January temperature data depicted in Fig. 2.7 to the following moving-average systems:

(a) h[n] = 1/2 for 0 ≤ n ≤ 1, and 0 otherwise
(b) h[n] = 1/4 for 0 ≤ n ≤ 3, and 0 otherwise
(c) h[n] = 1/8 for 0 ≤ n ≤ 7, and 0 otherwise

Solution: In case (a) the output is the average of the two most recent inputs, in case (b) the four most recent inputs, and in case (c) the eight most recent inputs. The system output for cases (a), (b), and (c) is depicted in Figs. 2.8(a), (b), and (c), respectively. As the impulse response duration increases, the degree of smoothing introduced by the system increases because the output is computed as an average of a larger number of inputs. The input to the system prior to 1900 is assumed to be zero, so the output near 1900 involves an average with some of the values zero. This leads to low values of the output, a phenomenon most evident in case (c).

In general, the output of any discrete-time system with a finite-duration impulse response is given by a weighted sum of the input signal values. Such weighted sums can easily be implemented in a computer to process discrete-time signals. The effect of the system on the signal depends on the weights or values of the system impulse response. The weights are usually chosen to enhance some feature of the data, such as an underlying trend, or to impart a particular characteristic. These issues are discussed throughout later chapters of the text.
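As a sketch of such an implementation (the data vector below is a stand-in, since the January temperature values themselves are not reproduced here), a four-point moving average such as the one in Example 2.5 can be computed with either conv or filter; filter assumes the input is zero before the first sample, which matches the convention used in Example 2.6:

% Four-point moving average of a data record (cf. Examples 2.5 and 2.6).
data = 40 + 5*randn(1, 95);          % stand-in for a measured record
h  = ones(1, 4)/4;                   % impulse response of Example 2.5
y1 = conv(data, h);                  % full convolution, length 98
y2 = filter(h, 1, data);             % same running averages, truncated to length(data)
plot(1:length(data), data, 1:length(data), y2)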

• THE CONVOLUTION INTEGRAL

The output of a continuous-time LTI system may also be determined solely from knowledge of the input and the system's impulse response. The approach and result are analogous to the discrete-time case. We first express an arbitrary input signal as a weighted superposition of time-shifted impulses.

FIGURE 2.7 Average January temperature from 1900 to 1994.

FIGURE 2.8 Result of passing average January temperature data through several moving-average systems. (a) Output of two-point moving-average system. (b) Output of four-point moving-average system. (c) Output of eight-point moving-average system.

Here the superposition is an integral instead of a sum due to the continuous nature of the input. We then apply this input to a LTI system to write the output as a weighted superposition of time-shifted impulse responses, an expression termed the convolution integral.

The convolution sum was derived by expressing the input signal x[n] as a weighted sum of time-shifted impulses as shown by

x[n] = Σ_{k=-∞}^{∞} x[k]δ[n - k]

Similarly, we may express a continuous-time signal as the weighted superposition of time-shifted impulses:

x(t) = ∫_{-∞}^{∞} x(τ)δ(t - τ) dτ    (2.5)

Here the superposition is an integral and the time shifts are given by the continuous variable τ. The weights x(τ) dτ are derived from the value of the signal x(t) at the time at which each impulse occurs, τ. Equation (2.5) is a statement of the sifting property of the impulse function.

Define the impulse response h(t) = H{δ(t)} as the output of the system in response to an impulse input. If the system is time invariant, then H{δ(t - τ)} = h(t - τ). That is, a time-shifted impulse input generates a time-shifted impulse response output. Now consider the system output in response to a general input expressed as the weighted superposition in Eq. (2.5), as shown by

y(t) = H{ ∫_{-∞}^{∞} x(τ)δ(t - τ) dτ }

Using the linearity property of the system we obtain

y(t) = ∫_{-∞}^{∞} x(τ)H{δ(t - τ)} dτ

and, since the system is time invariant, we have

y(t) = ∫_{-∞}^{∞} x(τ)h(t - τ) dτ

Hence the output of a LTI system in response to an input of the form of Eq. (2.5) may be expressed as

y(t) = ∫_{-∞}^{∞} x(τ)h(t - τ) dτ    (2.6)

The output y(t) is given as a weighted superposition of impulse responses time shifted by τ. The weights are x(τ) dτ. Equation (2.6) is termed the convolution integral and, as before, is denoted by the symbol *; that is,

x(t) * h(t) = ∫_{-∞}^{∞} x(τ)h(t - τ) dτ
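Although the convolution integral is an analytical tool, it can be approximated numerically by sampling the signals on a fine grid and scaling a discrete convolution by the sampling interval, which amounts to a Riemann-sum approximation of Eq. (2.6). The sketch below is purely illustrative; the two rectangular pulses and the step size dt are arbitrary choices:

% Numerical approximation of the convolution integral, Eq. (2.6).
dt = 0.001;
t  = 0:dt:2;
x  = ones(size(t));            % rectangular pulse on 0 <= t <= 2
h  = ones(size(t));            % a second rectangular pulse on 0 <= t <= 2
y  = conv(x, h) * dt;          % approximates the integral of x(tau) h(t-tau)
ty = (0:length(y)-1)*dt;       % time axis of the result
plot(ty, y)                    % triangle of height approximately 2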

The convolution process is illustrated in Fig. 2.9. Figure 2.9(a) depicts the impulse response of a system. In Fig. 2.9(b) the input to this system is represented as an integral of weighted and time-shifted impulses, pτ(t) = x(τ)δ(t - τ). These weighted and time-shifted impulses are depicted for several values of τ on the left-hand side of Fig. 2.9. The output associated with each input pτ(t) is the weighted and time-shifted impulse response:

vτ(t) = x(τ)h(t - τ)

The right-hand side of Fig. 2.9(b) depicts vτ(t) for several values of τ. Note that vτ(t) is a function of two independent variables, τ and t.
2.2 Convolution: Impulse Response Representationfor LTI Systems 87

h(t)
1

t
-1
(a)
• •




P-i<t) V -1 (t)

""':''. ..... t
-1 -1 2
..
-1 .. ..

-0.5
t
··-~
11.(ty )1

·-0.5 4
t -1

Po(t)

0.5 0.5
o ----+-------t .. o

l ----+--+------t
1
........
..,,..,. ·,
"· l

2 -----1----.J-----t -• h(t) ., _ _LL--+-,--1-----i,c.~- t 2


2 2

0.5 ;;i.:

..t;í.
•> 0.5 ·
3 -----l--------t
3
_,..• l,(t). ..

1' • • T
• •
• •

-----+-,..+--+-+---+-;- , 1 4 , - - - t
2 3 4 l 2 3 4 5

(b)

FIGURE 2.9
Illustration of the co11volutic>n integral. (a) Impulse response of a continuous-timc
system. (b) Decc>mpc)sition of x(t) into a weightcd integral <>Í tin1e-shiftecl impulses results in an
output y(t) givcn by a \veighted integral of time-shifted impulse rcsponscs. Here pT(t) is the
weighted (by x(T)) and time-shifted (by 'T} impulse input, and vT(t) is the weighted and time-shifted
impulse respe.>nse c,ulpL1l. Both Pr(t) and v.,.(t) are depicted <.>nly at integer values of T. The depen-
dence of both p.,.(t) and v.,.(t) e.>n Tis depicted by the T axis sho,vn on the lcft- and right-hand sides
of the figt1re. The output is obtained by integrating vT(t) over T.
88 CHAPTER 2 • TiME-DOMAIN REPRESENTATIONS FOR LINEAR TIME•INVARIANT SYSTEMS



FIGURE 2.9 (c) The signals wt(τ) used to compute the output at time t correspond to vertical slices of vτ(t). Here we have redrawn the right-hand side of Fig. 2.9(b) so that the τ axis is horizontal. The output is obtained for t = t0 by integrating wt(τ) over τ.

On the right-hand side of Fig. 2.9(b), the variation with t is shown on the horizontal axis, while the variation with τ occurs vertically, as shown by the vertical axis on the right-hand side. The system output at time t = t0 is obtained by integrating over τ, as shown by

y(t0) = ∫_{-∞}^{∞} vτ(t0) dτ

That is, we integrate along the vertical or τ axis on the right-hand side of Fig. 2.9(b) at a fixed time, t = t0.

Define a signal wt0(τ) to represent the variation of vτ(t) along the τ axis for a fixed time t = t0. This implies wt0(τ) = vτ(t0). Examples of this signal for several values of t0 are depicted in Fig. 2.9(c). The corresponding system output is now obtained by integrating wt0(τ) over τ from -∞ to ∞. Note that the horizontal axis in Fig. 2.9(b) is t and the vertical axis is τ. In Fig. 2.9(c) we have in effect redrawn the right-hand side of Fig. 2.9(b) with τ as the horizontal axis and t as the vertical axis.

We have defined the intermediate signal wt(τ) = x(τ)h(t - τ) as the product of x(τ) and h(t - τ). In this definition τ is the independent variable and t is treated as a constant. This is explicitly indicated by writing t as a subscript and τ within the parentheses of wt(τ). Hence h(t - τ) = h(-(τ - t)) is a reflected and time-shifted (by -t) version of h(τ). The
time shift t determines the time at which we evaluate the output of the system since Eq. (2.6) becomes

y(t) = ∫_{-∞}^{∞} wt(τ) dτ    (2.7)

The system output at any time t is the area under the signal wt(τ).

In general, the functional form for wt(τ) will depend on the value of t. As in the discrete-time case, we may avoid evaluating Eq. (2.7) at an infinite number of values of t by identifying intervals of t on which wt(τ) has the same functional form. We then only need to evaluate Eq. (2.7) using the wt(τ) associated with each interval. Often it is very helpful to graph both x(τ) and h(t - τ) when determining wt(τ) and identifying the appropriate interval of time shifts. This procedure is summarized as follows:
1. Graph x(τ) and h(t - τ) as a function of the independent variable τ. To obtain h(t - τ), reflect h(τ) about τ = 0 to obtain h(-τ) and then time shift h(-τ) by -t.
2. Begin with the time shift t large and negative.
3. Write the functional form for wt(τ).
4. Increase the time shift t until the functional form for wt(τ) changes. The value t at which the change occurs defines the end of the current interval and the beginning of a new interval.
5. Let t be in the new interval. Repeat steps 3 and 4 until all intervals of time shifts t and the corresponding functional forms for wt(τ) are identified. This usually implies increasing t to a large and positive value.
6. For each interval of time shifts t, integrate wt(τ) from τ = -∞ to τ = ∞ to obtain y(t) on that interval.

The effect of increasing t from a large negative value to a large positive value is to slide h(-τ) past x(τ) from left to right. Transitions in the intervals of t associated with the same form of wt(τ) generally occur when a transition in h(-τ) slides through a transition in x(τ). Alternatively, we can integrate wt(τ) as each interval of time shifts is identified, that is, after step 4, rather than waiting until all intervals are identified. The following examples illustrate this procedure for evaluating the convolution integral.

EXAMPLE 2.7 Consider the RC circuit depicted in Fig. 2.10 and assume the circuit's time constant is RC = 1 s. Determine the voltage across the capacitor, y(t), resulting from an input voltage x(t) = e^(-3t){u(t) - u(t - 2)}.

Solution: The circuit is linear and time invariant, so the output is the convolution of the input and the impulse response. That is, y(t) = x(t) * h(t). The impulse response of this circuit is

h(t) = e^(-t) u(t)

FIGURE 2.10 RC circuit system with the voltage source x(t) as input and the voltage measured across the capacitor, y(t), as output.
To evaluate the convolution integral, first graph x(τ) and h(t - τ) as a function of the independent variable τ while treating t as a constant. We see from Figs. 2.11(a) and (b) that

x(τ) = e^(-3τ) for 0 ≤ τ ≤ 2, and 0 otherwise

and

h(t - τ) = e^(-(t-τ)) for τ < t, and 0 otherwise

Now identify the intervals of time shifts t for which the functional form of wt(τ) does not change. Begin with t large and negative. Provided t < 0, we have wt(τ) = 0 since there are no values τ for which x(τ) and h(t - τ) are both nonzero. Hence the first interval of time shifts is t < 0.

Note that at t = 0 the right edge of h(t - τ) intersects the left edge of x(τ). For t ≥ 0,

wt(τ) = e^(-t-2τ) for 0 ≤ τ ≤ t, and 0 otherwise

This form for wt(τ) is depicted in Fig. 2.11(c). It does not change until t > 2, at which point the right edge of h(t - τ) passes through the right edge of x(τ). The second interval of time shifts t is thus 0 ≤ t < 2.
i
l {

e-<' -1")
- . .O. . . ··················-==~-
l 2
; ____'.=-~·····~····~··~·····.::_:±=._::::._. . . . . . ··········------
oii t
7'

(a) (b)

e-r .. ·.1

---····-·····--· ; T
'
o t 2 o 1 2 3
1

(e) (d)

y(t)

- - ·---~---+········-···--··--+--············••i••········
; ; ·······-·-- t
o l 2 3

(e)

FIGURE 2.11 Evaluation of the convolution integral Í<)í Example 2. 7. (a) 'fhe input <lcpicte
a functíon <>f T. (b) Reflectcd and time-shifted impulse rcsponse, h(t - -r). (e) The producl si!
w 1 ( r) for O < t < 2. (d) 'l'he product sígnal u11 ( r) for t ::2: 2. (e) Systc1n output }{t).
2.2 Convolution: Impulse Response Representationfor LTI Systems 91

For t ≥ 2 we have a third form for wt(τ), which is written as

wt(τ) = e^(-t-2τ) for 0 ≤ τ ≤ 2, and 0 otherwise

Figure 2.11(d) depicts wt(τ) for this third interval of time shifts, t ≥ 2.

We now determine the output y(t) for each of these three intervals of time shifts by integrating wt(τ) from τ = -∞ to τ = ∞. Starting with the first interval, t < 0, we have wt(τ) = 0 and thus y(t) = 0. For the second interval, 0 ≤ t < 2, we have

y(t) = ∫_{0}^{t} e^(-t-2τ) dτ
     = e^(-t){-(1/2)e^(-2τ)} evaluated from 0 to t
     = (1/2)(e^(-t) - e^(-3t))

For the third interval, t ≥ 2, we have

y(t) = ∫_{0}^{2} e^(-t-2τ) dτ
     = e^(-t){-(1/2)e^(-2τ)} evaluated from 0 to 2
     = (1/2)(1 - e^(-4)) e^(-t)

Combining the solutions for each interval of time shifts gives the output

y(t) = 0 for t < 0
y(t) = (1/2)(1 - e^(-2t)) e^(-t) for 0 ≤ t < 2
y(t) = (1/2)(1 - e^(-4)) e^(-t) for t ≥ 2

as depicted in Fig. 2.11(e).
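The closed-form output of the RC circuit can be checked with the same sampling approach described earlier; in the sketch below the step size dt is an arbitrary choice made for illustration, and the analytical expression derived above is plotted on the same axes for comparison:

% Numerical check of Example 2.7 (RC circuit with RC = 1 s).
dt = 0.001;
t  = 0:dt:6;
x  = exp(-3*t) .* (t <= 2);          % x(t) = e^(-3t)[u(t) - u(t-2)]
h  = exp(-t);                        % h(t) = e^(-t) u(t) on the grid t >= 0
y  = conv(x, h) * dt;                % Riemann-sum approximation of the integral
y  = y(1:length(t));                 % keep the portion on 0 <= t <= 6
ya = 0.5*(exp(-t) - exp(-3*t)).*(t < 2) + 0.5*(1 - exp(-4))*exp(-t).*(t >= 2);
plot(t, y, t, ya, '--')              % the two curves nearly overlay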

EXAMPLE 2.8 Suppose the input x(t) and impulse response h(t) of a LTI system are given by

x(t) = 2u(t - 1) - 2u(t - 3)
h(t) = u(t + 1) - 2u(t - 1) + u(t - 3)

Find the output of this system.

Solution: Graphical representations for x(τ) and h(t - τ) are given in Figs. 2.12(a) and (b). From these we can determine the intervals of time shifts t on which the functional form of wt(τ) is the same. Begin with t large and negative. For t + 1 < 1 or t < 0 the right edge of h(t - τ) is to the left of the nonzero portion of x(τ) and consequently wt(τ) = 0.

For t ≥ 0 the right edge of h(t - τ) overlaps with the nonzero portion of x(τ) and we have

wt(τ) = 2 for 1 < τ < t + 1, and 0 otherwise

This form for wt(τ) holds provided t + 1 < 3, or t < 2, and is depicted in Fig. 2.12(c).

FIGURE 2.12 Evaluation of the convolution integral for Example 2.8. (a) The input depicted as a function of τ. (b) Reflected and time-shifted impulse response, h(t - τ). (c) The product signal wt(τ) for 0 ≤ t < 2. (d) The product signal wt(τ) for 2 ≤ t < 4. (e) The product signal wt(τ) for 4 ≤ t < 6. (f) System output y(t).

For t > 2 the right edge of h(t - τ) is to the right of the nonzero portion of x(τ). In this case we have

wt(τ) = -2 for 1 < τ < t - 1, 2 for t - 1 < τ < 3, and 0 otherwise

This form for wt(τ) holds provided t - 1 < 3, or t < 4, and is depicted in Fig. 2.12(d).

For t > 4 the leftmost edge of h(t - τ) is within the nonzero portion of x(τ) and we have

wt(τ) = -2 for t - 3 < τ < 3, and 0 otherwise

This form for wt(τ) is depicted in Fig. 2.12(e) and holds provided t - 3 < 3, or t < 6.

For t > 6, no nonzero portions of x(τ) and h(t - τ) overlap and consequently wt(τ) = 0.

The system output y(t) is obtained by integrating wt(τ) from τ = -∞ to τ = ∞ for each interval of time shifts identified above. Beginning with t < 0, we have y(t) = 0 since wt(τ) = 0. For 0 ≤ t < 2 we have

y(t) = ∫_{1}^{t+1} 2 dτ = 2t

On the next interval, 2 ≤ t < 4, we have

y(t) = ∫_{1}^{t-1} (-2) dτ + ∫_{t-1}^{3} 2 dτ = -4t + 12

Now considering 4 ≤ t < 6 the output is

y(t) = ∫_{t-3}^{3} (-2) dτ = 2t - 12

Lastly, for t ≥ 6, we have y(t) = 0 since wt(τ) = 0. Combining the outputs for each interval of time shifts gives the result

y(t) = 0 for t < 0
y(t) = 2t for 0 ≤ t < 2
y(t) = -4t + 12 for 2 ≤ t < 4
y(t) = 2t - 12 for 4 ≤ t < 6
y(t) = 0 for t ≥ 6

as depicted in Fig. 2.12(f).

• Drill Problem 2.4 Let the impulse response of a LTI system be h(t) = e^(-2(t+1)) u(t + 1). Find the output y(t) if the input is x(t) = e^(-|t|).

Answer: For t < -1,

wt(τ) = e^(-2(t+1)) e^(3τ) for -∞ < τ < t + 1, and 0 otherwise
y(t) = (1/3)e^(t+1)

For t ≥ -1,

wt(τ) = e^(-2(t+1)) e^(3τ) for -∞ < τ < 0, e^(-2(t+1)) e^(τ) for 0 < τ < t + 1, and 0 otherwise
y(t) = e^(-(t+1)) - (2/3)e^(-2(t+1)) •
• Drill Problem 2.5 Let the impulse response of a LTI system be given by h(t) = u(t - 1) - u(t - 4). Find the output of this system in response to an input x(t) = u(t) + u(t - 1) - 2u(t - 2).

Answer:
y(t) = 0 for t < 1
y(t) = t - 1 for 1 ≤ t < 2
y(t) = 2t - 3 for 2 ≤ t < 3
y(t) = 3 for 3 ≤ t < 4
y(t) = 7 - t for 4 ≤ t < 5
y(t) = 12 - 2t for 5 ≤ t < 6
y(t) = 0 for t ≥ 6 •
The convolution integral describes the behavior of a continuous-time system. The system impulse response can provide insight into the nature of the system. We will develop this insight in the next section and subsequent chapters. To glimpse some of the insight offered by the impulse response, consider the following example.

EXAMPLE 2.9 Let the impulse response of a LTI system be h(t) = δ(t - a). Determine the output of this system in response to an input x(t).

Solution: Consider first obtaining h(t - τ). Reflecting h(τ) = δ(τ - a) about τ = 0 gives h(-τ) = δ(τ + a) since the impulse function has even symmetry. Now shift the independent variable τ by -t to obtain h(t - τ) = δ(τ - (t - a)). Substitute this expression for h(t - τ) in the convolution integral of Eq. (2.6) and use the sifting property of the impulse function to obtain

y(t) = ∫_{-∞}^{∞} x(τ)δ(τ - (t - a)) dτ
     = x(t - a)

Note that the identity system is represented for a = 0 since in this case the output is equal to the input. When a ≠ 0, the system time shifts the input. If a is positive the input is delayed, and if a is negative the input is advanced. Hence the location of the impulse response relative to the time origin determines the amount of delay introduced by the system.

2.3 Properties of the Impulse Response Representation for LTI Systems

The impulse response completely characterizes the input-output behavior of a LTI system. Hence properties of a system, such as memory, causality, and stability, are related to its impulse response. Also, the impulse response of an interconnection of LTI systems is related to the impulse response of the constituent systems. In this section we examine the impulse response of interconnected systems and relate the impulse response to system properties. These relationships tell us how the impulse response characterizes system behavior. The results for continuous- and discrete-time systems are obtained using nearly identical approaches, so we derive one and simply state the results for the other.

• PARALLEL CONNECTION OF SYSTEMS

Consider two LTI systems with impulse responses h1(t) and h2(t) connected in parallel as illustrated in Fig. 2.13(a). The output of this connection of systems, y(t), is the sum of the outputs of each system

y(t) = y1(t) + y2(t)
     = x(t) * h1(t) + x(t) * h2(t)

Substitute the integral representation for each convolution
FIGURE 2.13 Interconnection of two systems. (a) Parallel connection of two systems. (b) Equivalent system.

and combine the integrals to obtain

y(t) = ∫_{-∞}^{∞} x(τ){h1(t - τ) + h2(t - τ)} dτ
     = ∫_{-∞}^{∞} x(τ)h(t - τ) dτ
     = x(t) * h(t)

where h(t) = h1(t) + h2(t). We identify h(t) as the impulse response of the parallel connection of two systems. This equivalent system is depicted in Fig. 2.13(b). The impulse response of two systems connected in parallel is the sum of the individual impulse responses.

Mathematically, this implies that convolution possesses the distributive property:

x(t) * h1(t) + x(t) * h2(t) = x(t) * {h1(t) + h2(t)}    (2.8)

Identical results hold for the discrete-time case:

x[n] * h1[n] + x[n] * h2[n] = x[n] * {h1[n] + h2[n]}    (2.9)
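The distributive property is easy to confirm numerically for finite-duration sequences. In the sketch below the test signals are arbitrary choices made only for illustration; the two sides of Eq. (2.9) agree exactly:

% Numerical illustration of the distributive property, Eq. (2.9).
x  = [1 2 3 4];                     % arbitrary finite-duration signals
h1 = [1 -1 1];
h2 = [2 0 1];
lhs = conv(x, h1) + conv(x, h2);    % parallel connection: sum of the two outputs
rhs = conv(x, h1 + h2);             % single equivalent system h1 + h2
disp(max(abs(lhs - rhs)))           % prints 0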

• CASCADE CONNECTION OF SYSTEMS

Now consider the cascade connection of two LTI systems illustrated in Fig. 2.14(a). Let z(t) be the output of the first system and the input to the second system in the cascade. The output y(t) is expressed in terms of z(t) as

y(t) = z(t) * h2(t)    (2.10)
     = ∫_{-∞}^{∞} z(τ)h2(t - τ) dτ    (2.11)

FIGURE 2.14 Interconnection of two systems. (a) Cascade connection of two systems. (b) Equivalent system. (c) Equivalent system; interchange system order.

However, z(τ) is the output of the first system and is expressed in terms of the input x(τ) as

z(τ) = x(τ) * h1(τ)
     = ∫_{-∞}^{∞} x(v)h1(τ - v) dv    (2.12)

Here v is used as the variable of integration in the convolution integral. Substituting Eq. (2.12) for z(τ) in Eq. (2.11) gives

y(t) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x(v)h1(τ - v)h2(t - τ) dv dτ

Now perform the change of variable η = τ - v and interchange integrals to obtain

y(t) = ∫_{-∞}^{∞} x(v) { ∫_{-∞}^{∞} h1(η)h2(t - v - η) dη } dv    (2.13)

The inner integral is identified as the convolution of h1(t) with h2(t) evaluated at t - v. That is, if we define h(t) = h1(t) * h2(t), then

∫_{-∞}^{∞} h1(η)h2(t - v - η) dη = h(t - v)

Substituting this relationship into Eq. (2.13) yields

y(t) = ∫_{-∞}^{∞} x(v)h(t - v) dv
     = x(t) * h(t)    (2.14)

Hence the impulse response of two LTI systems connected in cascade is the convolution of the individual impulse responses. The cascade connection is input-output equivalent to the single system represented by the impulse response h(t) as shown in Fig. 2.14(b).

Substituting z(t) = x(t) * h1(t) into the expression for y(t) given in Eq. (2.10) and h(t) = h1(t) * h2(t) into the alternative expression for y(t) given in Eq. (2.14) establishes that convolution possesses the associative property

{x(t) * h1(t)} * h2(t) = x(t) * {h1(t) * h2(t)}    (2.15)

A second important property for the cascade connection of systems concerns the ordering of the systems. Write h(t) = h1(t) * h2(t) as the integral

h(t) = ∫_{-∞}^{∞} h1(τ)h2(t - τ) dτ

and perform the change of variable v = t - τ to obtain

h(t) = ∫_{-∞}^{∞} h1(t - v)h2(v) dv
     = h2(t) * h1(t)    (2.16)

Hence the convolution of h1(t) and h2(t) can be performed in either order. This corresponds to interchanging the order of the systems in the cascade as shown in Fig. 2.14(c). Since

we conclude that the output of a cascade combination of LTI systems is independent of
the order in which the systems are connected. Mathematically, we say that the convolution
operation possesses the commutative property

h1(t) * h2(t) = h2(t) * h1(t)     (2.17)

The commutative property is often used to simplify evaluation or interpretation of the
convolution integral.
Discrete-time systems and convolution have identical properties to their continuous-time
counterparts. The impulse response of a cascade connection of LTI systems is
given by the convolution of the individual impulse responses and the output of a cascade
combination of LTI systems is independent of the order in which the systems are connected.
Discrete-time convolution is associative

{x[n] * h1[n]} * h2[n] = x[n] * {h1[n] * h2[n]}     (2.18)

and commutative

x[n] * h[n] = h[n] * x[n]     (2.19)
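These properties are easy to confirm numerically for finite-length sequences. The short sketch below is an added illustration, not part of the original text; it assumes Python with NumPy, and the sequences are arbitrary choices. It checks the distributive, associative, and commutative properties of Eqs. (2.9), (2.18), and (2.19) with np.convolve.

import numpy as np

x  = np.array([1.0, 2.0, -1.0, 0.5])
h1 = np.array([1.0, 0.5, 0.25])
h2 = np.array([0.5, -0.5, 1.0])

# Distributive: x*h1 + x*h2 equals x*(h1 + h2)
print(np.allclose(np.convolve(x, h1) + np.convolve(x, h2),
                  np.convolve(x, h1 + h2)))                      # True

# Associative: (x*h1)*h2 equals x*(h1*h2)
print(np.allclose(np.convolve(np.convolve(x, h1), h2),
                  np.convolve(x, np.convolve(h1, h2))))          # True

# Commutative: x*h1 equals h1*x
print(np.allclose(np.convolve(x, h1), np.convolve(h1, x)))       # True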

The following example demonstrates the use of convolution properties for finding a single
system that is input-output equivalent to an interconnected system.

EXAMPLE 2.10 Consider the interconnection of LTI systems depicted in Fig. 2.15. The
impulse response of each system is given by

h1[n] = u[n]
h2[n] = u[n + 2] − u[n]
h3[n] = δ[n − 2]
h4[n] = a^n u[n]

Find the impulse response of the overall system, h[n].

Solution: We first derive an expression for the overall impulse response in terms of the
impulse response of each system. Begin with the parallel combination of h1[n] and h2[n].
The equivalent system has impulse response h12[n] = h1[n] + h2[n]. This system is in series
with h3[n], so the equivalent system for the upper branch has impulse response h123[n] =
h12[n] * h3[n]. Substituting for h12[n], we have h123[n] = {h1[n] + h2[n]} * h3[n]. The upper
branch is in parallel with the lower branch, characterized by h4[n]; hence the overall system
impulse response is h[n] = h123[n] − h4[n]. Substituting for h123[n] yields

h[n] = {h1[n] + h2[n]} * h3[n] − h4[n]

Now substitute the specific forms of h1[n] and h2[n] to obtain

h12[n] = u[n] + u[n + 2] − u[n]

       = u[n + 2]

Convolving h12[n] with h3[n] gives

h123[n] = u[n + 2] * δ[n − 2]

        = u[n]

Lastly, we obtain the overall impulse response by subtracting h4[n] from h123[n]:

h[n] = {1 − a^n} u[n]

FIGURE 2.15 Interconnection of systems for Example 2.10.
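The result of Example 2.10 can be checked numerically. The sketch below is an added illustration (Python with NumPy is assumed; the value a = 0.5 and the truncation length are arbitrary choices). It builds the interconnection from the individual impulse responses, tracking the starting index of each sequence so that the noncausal h2[n] is handled correctly, and compares the result with (1 − a^n)u[n].

import numpy as np

N = 20                                       # samples of u[n] retained in the check
a = 0.5
u = np.ones(N)                               # u[n] for n = 0, ..., N-1

h1 = (u.copy(), 0)                           # u[n]
h2 = (np.array([1.0, 1.0]), -2)              # u[n+2] - u[n]: ones at n = -2, -1
h3 = (np.array([1.0]), 2)                    # delta[n - 2]
h4 = (a ** np.arange(N), 0)                  # a^n u[n]

def conv(f, g):
    # convolution of two indexed sequences (values, index of first sample)
    return (np.convolve(f[0], g[0]), f[1] + g[1])

def add(f, g, n_lo, n_hi):
    # sum of two indexed sequences on the common index range [n_lo, n_hi]
    out = np.zeros(n_hi - n_lo + 1)
    for vals, start in (f, g):
        for k, v in enumerate(vals):
            n = start + k
            if n_lo <= n <= n_hi:
                out[n - n_lo] += v
    return (out, n_lo)

h12  = add(h1, h2, -2, N - 1)                # parallel branch: h1 + h2
h123 = conv(h12, h3)                         # in series with h3
h = add(h123, (-h4[0], h4[1]), 0, N - 1)     # minus the lower branch h4

print(np.allclose(h[0], 1.0 - a ** np.arange(N)))   # True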

Interconnections of systems occur naturally in analysis. Often it is easier to break a
complex system into simpler subsystems, analyze each subsystem, and then study the entire
system as an interconnection of subsystems than it is to analyze the overall system directly.
This is an example of the ''divide-and-conquer'' approach to problem solving and is possible
due to the assumptions of linearity and time invariance. Interconnections of systems
are also useful in system implementation, since systems that are equivalent in the input-output
sense are not necessarily equivalent in other senses. For example, the computational
complexity of two input-output equivalent systems for processing data in a computer may
differ significantly. The fact that many different interconnections of LTI systems are input-output
equivalent can be exploited to optimize some other implementation criterion such
as computation.

• MEMORYLESS SYSTEMS

Recall that the output of a memoryless system depends only on the present input. Exploiting
the commutative property of convolution, the output of a LTI discrete-time system
may be expressed as

y[n] = h[n] * x[n]

     = Σ_{k=−∞}^{∞} h[k]x[n − k]

For this system to be memoryless, y[n] must depend only on x[n] and cannot depend on
x[n − k] for k ≠ 0. This condition implies that h[k] = 0 for k ≠ 0. Hence a LTI discrete-time
system is memoryless if and only if h[k] = cδ[k], where c is an arbitrary constant.
Writing the output of a continuous-time system as

y(t) = ∫_{−∞}^{∞} h(τ)x(t − τ) dτ

we see that, analogous to the discrete-time case, a continuous-time system is memoryless
if and only if h(τ) = cδ(τ) for c an arbitrary constant.
The memoryless condition places severe restrictions on the form of the impulse response.
All memoryless LTI systems perform scalar multiplication on the input.

• CAUSAL SYSTEMS

The output of a causal system depends only on past or present values of the input. Again
write the convolution sum as

y[n] = Σ_{k=−∞}^{∞} h[k]x[n − k]

Past and present values of the input, x[n], x[n − 1], x[n − 2], ..., are associated with
indices k ≥ 0 in the convolution sum, while future values of the input are associated with
indices k < 0. In order for y[n] to depend only on past or present values of the input, we
require h[k] = 0 for k < 0. Hence, for a causal system, h[k] = 0 for k < 0, and the
convolution sum is rewritten

y[n] = Σ_{k=0}^{∞} h[k]x[n − k]

The causality condition for a continuous-time system follows in an analogous manner
from the convolution integral

y(t) = ∫_{−∞}^{∞} h(τ)x(t − τ) dτ

A causal continuous-time system has an impulse response that satisfies h(τ) = 0 for τ < 0.
The output of a causal system is thus expressed as the convolution integral

y(t) = ∫_{0}^{∞} h(τ)x(t − τ) dτ

The causality condition is intuitively satisfying. Recall that the impulse response is
the output of a system in response to an impulse input applied at time t = 0. Causal
systems are nonanticipative: that is, they cannot generate an output before the input is
applied. Requiring the impulse response to be zero for negative time is equivalent to saying
the system cannot respond prior to application of the impulse.

• STABLE SYSTEMS

Recall from Chapter 1 that a system is bounded input-bounded output (BIBO) stable if
the output is guaranteed to be bounded for every bounded input. Formally, if the input
to a stable discrete-time system satisfies |x[n]| ≤ Mx < ∞, then the output must satisfy
|y[n]| ≤ My < ∞. We shall derive conditions on h[n] that guarantee stability of the system
by bounding the convolution sum. The magnitude of the output is given by

|y[n]| = |h[n] * x[n]|

       = |Σ_{k=−∞}^{∞} h[k]x[n − k]|

We seek an upper bound on |y[n]| that is a function of the upper bound on |x[n]| and the
impulse response. Since the magnitude of a sum of numbers is less than or equal to the
sum of the magnitudes, that is, |a + b| ≤ |a| + |b|, we may write

|y[n]| ≤ Σ_{k=−∞}^{∞} |h[k]x[n − k]|

Furthermore, the magnitude of a product is equal to the product of the magnitudes, that
is, |ab| = |a||b|, and so we have

|y[n]| ≤ Σ_{k=−∞}^{∞} |h[k]||x[n − k]|

If we assume that the input is bounded, |x[n]| ≤ Mx < ∞, then |x[n − k]| ≤ Mx and

|y[n]| ≤ Mx Σ_{k=−∞}^{∞} |h[k]|     (2.20)

Hence the output is bounded, |y[n]| < ∞, provided that the impulse response of the system
is absolutely summable. We conclude that the impulse response of a stable system satisfies
the bound

Σ_{k=−∞}^{∞} |h[k]| < ∞

Our derivation so far has established absolute summability of the impulse response as a
sufficient condition for BIBO stability. The reader is asked to show that this is also a
necessary condition for BIBO stability in Problem 2.13.
A similar set of steps may be used to establish that a continuous-time system is BIBO
stable if and only if the impulse response is absolutely integrable, that is,

∫_{−∞}^{∞} |h(τ)| dτ < ∞
..
EXAMPLE 2.11 A discrete-time system has impulse response

h[n] = a^n u[n + 2]

Is this system BIBO stable, causal, and memoryless?

Solution: Stability is determined by checking whether the impulse response is absolutely
summable, as shown by

Σ_{n=−∞}^{∞} |h[n]| = Σ_{n=−2}^{∞} |a|^n

                  = |a|^{−2} + |a|^{−1} + Σ_{n=0}^{∞} |a|^n

The infinite geometric sum in the second line converges only if |a| < 1. Hence the system is
stable provided 0 < |a| < 1. The system is not causal, since the impulse response h[n] is
nonzero for n = −1, −2. The system is not memoryless because h[n] is nonzero for some
values n ≠ 0.
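A quick numerical illustration of Example 2.11 follows (an added sketch assuming Python with NumPy; the values of a and the truncation lengths are arbitrary). The partial sums of |h[n]| settle for |a| < 1 and grow without bound for |a| > 1, consistent with the stability condition derived above.

import numpy as np

def partial_sum(a, N):
    # partial sum of |h[n]| = |a|^n over the support n = -2, ..., N-1
    n = np.arange(-2, N)
    return np.sum(np.abs(float(a) ** n))

for a in (0.5, 1.5):
    print(a, partial_sum(a, 50), partial_sum(a, 100))
# For a = 0.5 the two partial sums are nearly identical (the series converges);
# for a = 1.5 the second is vastly larger (the series diverges).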

• Drill Problem 2.6 Determine the conditions on a such that the continuous-time
system with impulse response h(t) = e^{at}u(t) is stable, causal, and memoryless.

Answer: The system is stable provided a < 0, causal for all a, and there is no a for which
the system is memoryless. •

We emphasize that a system can be unstable even though the impulse response is
finite valued. For example, the impulse response h[n] = u[n] is never greater than one, but
is not absolutely summable and thus the system is unstable. To demonstrate this, use the
convolution sum to express the output of this system in terms of the input as

y[n] = Σ_{k=−∞}^{n} x[k]

Although the output is bounded for some bounded inputs x[n], it is not bounded for every
bounded x[n]. In particular, the constant input x[n] = c clearly results in an unbounded
output.

FIGURE 2.16 Cascade of LTI system with impulse response h(t) and inverse system with impulse response h^{-1}(t).

• INVERTIBLE SYSTEMS AND DECONVOLUTION

A system is invertible if the input to the system can be recovered from the output. This
implies existence of an inverse system that takes the output of the original system as its
input and produces the input of the original system. We shall limit ourselves here to
consideration of inverse systems that are LTI. Figure 2.16 depicts the cascade of a LTI system
having impulse response h(t) with a LTI inverse system whose impulse response is denoted
as h^{-1}(t).
The process of recovering x(t) from h(t) * x(t) is termed deconvolution, since it
corresponds to reversing or undoing the convolution operation. An inverse system has
output x(t) in response to input y(t) = h(t) * x(t) and thus solves the deconvolution problem.
Deconvolution and inverse systems play an important role in many signal-processing
and systems problems. A common problem is that of reversing or ''equalizing'' the distortion
introduced by a nonideal system. For example, consider using a high-speed modem
to communicate over telephone lines. Distortion introduced by the telephone network
places severe restrictions on the rate at which information can be transmitted, so an equalizer
is incorporated into the modem. The equalizer reverses the telephone network distortion
and permits much higher data rates to be achieved. In this case the equalizer represents
an inverse system for the telephone network. We will discuss equalization in more detail
in Chapters 5 and 8.
The relationship between the impulse response of a system, h(t), and the corresponding
inverse system, h^{-1}(t), is easily derived. The impulse response of the cascade connection
in Fig. 2.16 is the convolution of h(t) and h^{-1}(t). We require the output of the cascade to
equal the input, or

x(t) * {h(t) * h^{-1}(t)} = x(t)

This implies that

h(t) * h^{-1}(t) = δ(t)     (2.21)

Similarly, the impulse response of a discrete-time LTI inverse system, h^{-1}[n], must satisfy

h[n] * h^{-1}[n] = δ[n]     (2.22)

In many equalization applications an exact inverse system may be difficult to find or implement.
Determination of an approximate solution to Eq. (2.21) or Eq. (2.22) is often
sufficient in such cases. The following example illustrates a case where an exact inverse
system is obtained by directly solving Eq. (2.22).

EXAMPLE 2.12 Consider designing a discrete-time inverse system to eliminate the distortion
associated with an undesired echo in a data transmission problem. Assume the echo is represented
as attenuation by a constant a and a delay corresponding to one time unit of the
input sequence. Hence the distorted received signal, y[n], is expressed in terms of the transmitted
signal x[n] as

y[n] = x[n] + ax[n − 1]

Find a causal inverse system that recovers x[n] from y[n]. Check if this inverse system is stable.

Solution: First we identify the impulse response of the system relating y[n] and x[n]. Writing
the convolution sum as

y[n] = Σ_{k=−∞}^{∞} h[k]x[n − k]

we identify

h[k] = 1,  k = 0
       a,  k = 1
       0,  otherwise

as the impulse response of the system that models direct transmission plus the echo. The inverse
system h^{-1}[n] must satisfy h[n] * h^{-1}[n] = δ[n]. Substituting for h[n], we desire to find h^{-1}[n]
that satisfies the equation

h^{-1}[n] + ah^{-1}[n − 1] = δ[n]     (2.23)

Consider solving this equation for several different values of n. For n < 0, we must have
h^{-1}[n] = 0 in order to obtain a causal inverse system. For n = 0, δ[n] = 1 and Eq. (2.23)
implies

h^{-1}[0] + ah^{-1}[−1] = 1

so h^{-1}[0] = 1. For n > 0, δ[n] = 0 and Eq. (2.23) implies

h^{-1}[n] + ah^{-1}[n − 1] = 0

or h^{-1}[n] = −ah^{-1}[n − 1]. Since h^{-1}[0] = 1, we have h^{-1}[1] = −a, h^{-1}[2] = a^2, h^{-1}[3] =
−a^3, and so on. Hence the inverse system has the impulse response

h^{-1}[n] = (−a)^n u[n]

To check for stability, we determine whether h^{-1}[n] is absolutely summable, as shown
by

Σ_{k=0}^{∞} |h^{-1}[k]| = Σ_{k=0}^{∞} |a|^k

This geometric series converges and hence the system is stable provided |a| < 1. This implies
that the inverse system is stable if the echo attenuates the transmitted signal x[n], but unstable
if the echo amplifies x[n].
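The following sketch illustrates the deconvolution idea of Example 2.12 (added for illustration; Python with NumPy is assumed, and the test signal and echo strength a = 0.6 are arbitrary choices). Applying a truncated version of h^{-1}[n] = (−a)^n u[n] to the received signal recovers the transmitted signal.

import numpy as np

rng = np.random.default_rng(0)
a = 0.6
x = rng.standard_normal(50)               # transmitted signal

# distorted received signal y[n] = x[n] + a*x[n-1]
y = x + a * np.concatenate(([0.0], x[:-1]))

# truncated inverse impulse response (-a)^n for n = 0, ..., L-1
L = 60
h_inv = (-a) ** np.arange(L)
x_hat = np.convolve(y, h_inv)[: len(x)]

# essentially zero: the truncation artifact falls beyond the samples kept
print(np.max(np.abs(x_hat - x)))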

Obtaining an inverse system by directly solving Eq. (2.21) or Eq. (2.22) is difficult
in general. Furthermore, not every LTI system has a stable and causal inverse. Methods
developed in later chapters provide additional insight into the existence and determination
of inverse systems.

• STEP RESPONSE

The response of a LTI system to a step characterizes how the system responds to sudden
changes in the input. The step response is easily expressed in terms of the impulse response
using convolution by assuming that the input is a step function. Let a discrete-time system
have impulse response h[n] and denote the step response as s[n]. We have

s[n] = h[n] * u[n]

     = Σ_{k=−∞}^{∞} h[k]u[n − k]

Now, since u[n − k] = 0 for k > n and u[n − k] = 1 for k ≤ n, we have

s[n] = Σ_{k=−∞}^{n} h[k]

That is, the step response is the running sum of the impulse response. Similarly, the step
response, s(t), for a continuous-time system is expressed as the running integral of the
impulse response

s(t) = ∫_{−∞}^{t} h(τ) dτ     (2.24)

Note that we may invert these relationships to express the impulse response in terms of
the step response as

h[n] = s[n] − s[n − 1]

h(t) = (d/dt) s(t)

EXAMPLE 2.13 Find the step response of the RC circuit depicted in Fig. 2.10 having impulse
response

h(t) = (1/RC) e^{−t/RC} u(t)

Solution: Apply Eq. (2.24) to obtain

s(t) = ∫_{−∞}^{t} (1/RC) e^{−τ/RC} u(τ) dτ

Now simplify the integral as

s(t) = 0,                              t ≤ 0
       (1/RC) ∫_{0}^{t} e^{−τ/RC} dτ,  t > 0

     = 0,               t ≤ 0
       1 − e^{−t/RC},   t > 0
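The closed-form step response of Example 2.13 can be cross-checked by approximating the running integral of Eq. (2.24) with a cumulative sum, as in the sketch below (added for illustration; Python with NumPy is assumed, and RC = 0.1 s and the time grid are arbitrary choices).

import numpy as np

RC = 0.1
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
h = (1.0 / RC) * np.exp(-t / RC)          # h(t) for t >= 0 (zero before t = 0)

s_numeric = np.cumsum(h) * dt             # running integral of h from 0 to t
s_exact = 1.0 - np.exp(-t / RC)

print(np.max(np.abs(s_numeric - s_exact)))   # small, on the order of dt/RC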

• Drill Problem 2.7 Find the step response of a discrete-time system with impulse
response

h[n] = (−a)^n u[n]

assuming |a| < 1.

Answer:
s[n] = {[1 − (−a)^{n+1}]/(1 + a)} u[n] •

• SINUSOIDAL STEADY-STATE RESPONSE

Sinusoidal input signals are often used to characterize the response of a system. Here we
examine the relationship between the impulse response and the steady-state response of a

LTI system to a complex sinusoidal input. This relationship is easily established using
convolution and a complex sinusoid input signal. Consider the output of a discrete-time
system with impulse response h[n] and unit-amplitude complex sinusoidal input
x[n] = e^{jΩn}, given by

y[n] = Σ_{k=−∞}^{∞} h[k]x[n − k]

     = Σ_{k=−∞}^{∞} h[k]e^{jΩ(n−k)}

Factor e^{jΩn} from the sum to obtain

y[n] = e^{jΩn} Σ_{k=−∞}^{∞} h[k]e^{−jΩk}

where we have defined

H(e^{jΩ}) = Σ_{k=−∞}^{∞} h[k]e^{−jΩk}     (2.25)

Hence the output of the system is a complex sinusoid of the same frequency as the input
multiplied by the complex number H(e^{jΩ}). This relationship is depicted in Fig. 2.17. The
quantity H(e^{jΩ}) is not a function of time, n, but is only a function of frequency, Ω, and is
termed the frequency response of the discrete-time system.
Similar results are obtained for continuous-time systems. Let the impulse response
of a system be h(t) and the input be x(t) = e^{jωt}. The convolution integral gives the output
as

y(t) = ∫_{−∞}^{∞} h(τ)e^{jω(t−τ)} dτ

     = e^{jωt} ∫_{−∞}^{∞} h(τ)e^{−jωτ} dτ     (2.26)

     = H(jω)e^{jωt}

where we define

H(jω) = ∫_{−∞}^{∞} h(τ)e^{−jωτ} dτ     (2.27)

The output of the system is a complex sinusoid of the same frequency as the input multiplied
by the complex constant H(jω). H(jω) is a function of only frequency, ω, and not
time, t. It is termed the frequency response of the continuous-time system.
An intuitive interpretation of the sinusoidal steady-state response is obtained by writing
the complex number H(jω) in polar form. Recall that if c = a + jb is a complex
number, then we may write c in polar form as c = |c|e^{j arg{c}}, where |c| = √(a^2 + b^2) and
arg{c} = arctan(b/a). Hence we have H(jω) = |H(jω)|e^{j arg{H(jω)}}. Here |H(jω)| is termed

FIGURE 2.17 A complex sinusoidal input to a LTI system results in a complex sinusoidal output
of the same frequency multiplied by the frequency response of the system.

the magnitude response and arg{H(jω)} is termed the phase response of the system. Substituting
this polar form in Eq. (2.26), the output y(t) is expressed as

y(t) = |H(jω)| e^{j(ωt + arg{H(jω)})}

The system modifies the amplitude of the input by |H(jω)| and the phase by arg{H(jω)}.
The sinusoidal steady-state response has a similar interpretation for real-valued sinusoids.
Write

x(t) = A cos(ωt + φ)

     = (A/2) e^{j(ωt+φ)} + (A/2) e^{−j(ωt+φ)}

and use linearity to obtain the output as

y(t) = |H(jω)| (A/2) e^{j(ωt+φ+arg{H(jω)})} + |H(−jω)| (A/2) e^{−j(ωt+φ−arg{H(−jω)})}

Assuming that h(t) is real valued, Eq. (2.27) implies that H(jω) possesses conjugate symmetry,
that is, H*(jω) = H(−jω). This implies that |H(jω)| is an even function of ω while
arg{H(jω)} is odd. Exploiting these symmetry conditions and simplifying yields

y(t) = |H(jω)| A cos(ωt + φ + arg{H(jω)})

As with a complex sinusoidal input, the system modifies the input sinusoid's amplitude by
|H(jω)| and the phase by arg{H(jω)}. This modification is illustrated in Fig. 2.18.
Similar results are obtained for discrete-time systems using the polar form for H(e^{jΩ}).
Specifically, if x[n] = e^{jΩn} is the input, then

y[n] = |H(e^{jΩ})| e^{j(Ωn + arg{H(e^{jΩ})})}

Furthermore, if x[n] = A cos(Ωn + φ) is the input to a discrete-time system with real-valued
impulse response, then

y[n] = |H(e^{jΩ})| A cos(Ωn + φ + arg{H(e^{jΩ})})

Once again, the system changes the amplitude of the sinusoidal input by |H(e^{jΩ})| and its
phase by arg{H(e^{jΩ})}.
The frequency response characterizes the steady-state response of the system to sinusoidal
inputs as a function of the sinusoid's frequency. We say this is a steady-state
response because the input sinusoid is assumed to exist for all time and thus the system is
in an equilibrium or steady-state condition. The frequency response provides a great deal
of information about the system and is useful for both understanding and analyzing systems,
topics that are explored in depth in later chapters. It is easily measured with a

FIGURE 2.18 A sinusoidal input to a LTI system results in a sinusoidal output of the same frequency
with the amplitude and phase modified by the system's frequency response.

sinusoidal oscillator and oscilloscope by using the oscilloscope to measure the amplitude
and phase change between the input and output sinusoids for different oscillator
frequencies.
It is standard practice to represent the frequency response graphically by separately
displaying the magnitude and phase response as functions of frequency, as illustrated in
the following examples.

EXAMPLE 2.14 The impulse responses of two discrete-time systems are given by

h1[n] = ½(δ[n] + δ[n − 1])
h2[n] = ½(δ[n] − δ[n − 1])

Find the frequency response of each system and plot the magnitude responses.

Solution: Substitute h1[n] into Eq. (2.25) to obtain

H1(e^{jΩ}) = ½(1 + e^{−jΩ})

which may be rewritten as

H1(e^{jΩ}) = e^{−jΩ/2} (e^{jΩ/2} + e^{−jΩ/2})/2

           = e^{−jΩ/2} cos(Ω/2)

Hence the magnitude response is expressed as

|H1(e^{jΩ})| = |cos(Ω/2)|

and the phase response is expressed as

arg{H1(e^{jΩ})} = −Ω/2         for cos(Ω/2) > 0
                  −Ω/2 − π     for cos(Ω/2) < 0

Similarly, the frequency response of the second system is given by

H2(e^{jΩ}) = ½(1 − e^{−jΩ})

           = je^{−jΩ/2} (e^{jΩ/2} − e^{−jΩ/2})/(2j)

           = je^{−jΩ/2} sin(Ω/2)

In this case the magnitude response is expressed as

|H2(e^{jΩ})| = |sin(Ω/2)|

and the phase response is expressed as

arg{H2(e^{jΩ})} = −Ω/2 + π/2   for sin(Ω/2) > 0
                  −Ω/2 − π/2   for sin(Ω/2) < 0

Figures 2.19(a) and (b) depict the magnitude response of each system on the interval
−π < Ω < π. This interval is chosen because it corresponds to the range of frequencies for
which the complex sinusoid e^{jΩn} is a unique function of frequency. The convolution sum
indicates that h1[n] averages successive inputs, while h2[n] takes the difference of successive
inputs. Thus we expect h1[n] to pass low-frequency signals while attenuating high frequencies.
This characteristic is reflected by the magnitude response. In contrast, the differencing operation
implemented by h2[n] has the effect of attenuating low frequencies and passing high
frequencies, as indicated by its magnitude response.

FIGURE 2.19 The magnitude responses of two simple discrete-time systems. (a) A system that
averages successive inputs tends to attenuate high frequencies. (b) A system that forms the difference
of successive inputs tends to attenuate low frequencies.
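The magnitude responses derived in Example 2.14 are easily verified by evaluating Eq. (2.25) directly on a frequency grid, as in the following sketch (added for illustration; Python with NumPy is assumed).

import numpy as np

Omega = np.linspace(-np.pi, np.pi, 1001)

def freq_resp(h, n, Omega):
    # H(e^{jOmega}) = sum_k h[k] e^{-j Omega k} for a finite-length h on indices n
    return np.sum(h[:, None] * np.exp(-1j * np.outer(n, Omega)), axis=0)

H1 = freq_resp(np.array([0.5, 0.5]),  np.array([0, 1]), Omega)
H2 = freq_resp(np.array([0.5, -0.5]), np.array([0, 1]), Omega)

print(np.allclose(np.abs(H1), np.abs(np.cos(Omega / 2))))   # True
print(np.allclose(np.abs(H2), np.abs(np.sin(Omega / 2))))   # True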

EXAMPLE 2.15 The impulse response of the system relating the input voltage to the voltage
across the capacitor in Fig. 2.10 is given by

h(t) = (1/RC) e^{−t/RC} u(t)

Find an expression for the frequency response and plot the magnitude and phase response.

Solution: Substituting h(t) into Eq. (2.27) gives

H(jω) = ∫_{−∞}^{∞} (1/RC) e^{−τ/RC} u(τ) e^{−jωτ} dτ

      = (1/RC) ∫_{0}^{∞} e^{−(jω + 1/RC)τ} dτ

      = (1/RC) [−1/(jω + 1/RC)] e^{−(jω + 1/RC)τ} evaluated from τ = 0 to τ = ∞

      = (1/RC) [−1/(jω + 1/RC)] (0 − 1)

      = (1/RC) / (jω + 1/RC)

The magnitude response is

|H(jω)| = (1/RC) / √(ω^2 + (1/RC)^2)

while the phase response is

arg{H(jω)} = −arctan(ωRC)

The magnitude response and phase response are presented in Figs. 2.20(a) and (b), respectively.
The magnitude response indicates that the RC circuit tends to attenuate high-frequency sinusoids.
This agrees with our intuition from circuit analysis. The circuit cannot respond to
rapid changes in the input voltage. High-frequency sinusoids also experience a phase shift of
nearly π/2 radians. Low-frequency sinusoids are passed by the circuit with much higher gain and
experience relatively little phase shift.

FIGURE 2.20 Frequency response of the RC circuit in Fig. 2.10. (a) Magnitude response.
(b) Phase response.
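The expressions obtained in Example 2.15 can also be evaluated numerically, as in the sketch below (added for illustration; Python with NumPy is assumed, and RC = 1e-3 s is an arbitrary value). The magnitude falls to 1/√2 at ω = 1/RC and the phase approaches −π/2 at high frequencies.

import numpy as np

RC = 1e-3
w = np.logspace(1, 6, 501)                # frequency grid in rad/s
H = (1.0 / RC) / (1j * w + 1.0 / RC)

mag = np.abs(H)
phase = np.angle(H)                       # equals -arctan(w*RC)

k = np.argmin(np.abs(w - 1.0 / RC))       # grid point closest to the corner frequency
print(mag[k])                             # approximately 0.707
print(phase[-1])                          # close to -pi/2 for w >> 1/RC
print(np.allclose(phase, -np.arctan(w * RC)))   # True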

• Drill Problem 2.8 Find an expression for the frequency response of the discrete-time
system with impulse response

h[n] = (−a)^n u[n]

assuming |a| < 1.

Answer:
H(e^{jΩ}) = 1 / (1 + a e^{−jΩ}) •
2.4 Differential and Difference Equation Representations for LTI Systems
Linear constant-coefficient difference and differential equations provide another representation
for the input-output characteristics of LTI systems. Difference equations are used
to represent discrete-time systems, while differential equations represent continuous-time
systems. The general form of a linear constant-coefficient differential equation is

Σ_{k=0}^{N} a_k (d^k/dt^k) y(t) = Σ_{k=0}^{M} b_k (d^k/dt^k) x(t)     (2.28)

Here x(t) is the input to the system and y(t) is the output. A linear constant-coefficient
difference equation has a similar form, with the derivatives replaced by delayed values of
the input x[n] and output y[n], as shown by

Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k]     (2.29)

The integer N is termed the order of the differential or difference equation and corresponds
to the highest derivative or maximum memory involving the system output, respectively.
The order represents the number of energy storage devices in the system.
As an example of a differential equation that describes the behavior of a physical
system, consider the RLC circuit depicted in Fig. 2.21(a). Assume the input is the voltage

FIGURE 2.21 Examples of systems described by differential equations. (a) RLC circuit.
(b) Spring-mass-damper system.

source x(t) and the output is the current around the loop, y(t). Summing the voltage drops
around the loop gives

Ry(t) + L (dy(t)/dt) + (1/C) ∫_{−∞}^{t} y(τ) dτ = x(t)

Differentiating both sides of this equation with respect to t gives

(1/C) y(t) + R (dy(t)/dt) + L (d^2 y(t)/dt^2) = dx(t)/dt

This differential equation describes the relationship between the current y(t) and voltage
x(t) in the circuit. In this example, the order is N = 2 and we note that the circuit contains
two energy storage devices, a capacitor and an inductor.
Mechanical systems may also be described in terms of differential equations using
Newton's laws. In the system depicted in Fig. 2.21(b), the applied force, x(t), is the input
and the position of the mass, y(t), is the output. The force associated with the spring is
directly proportional to position, the force due to friction is directly proportional to velocity,
and the force due to mass is proportional to acceleration. Equating the forces on
the mass gives

m (d^2/dt^2) y(t) + f (d/dt) y(t) + ky(t) = x(t)

This differential equation relates position to the applied force. The system contains two
energy storage mechanisms, a spring and a mass, and the order is N = 2.
An example of a second-order difference equation is

y[n] + y[n − 1] + ¼y[n − 2] = x[n] + 2x[n − 1]     (2.30)

This difference equation might represent the relationship between the input and output
signals for a system that processes data in a computer. In this example the order is N = 2
because the difference equation involves y[n − 2], implying a maximum memory in the
system output of 2. Memory in a discrete-time system is analogous to energy storage in a
continuous-time system.
Difference equations are easily rearranged to obtain recursive formulas for computing
the current output of the system from the input signal and past outputs. Rewrite Eq.
(2.29) so that y[n] is alone on the left-hand side, as shown by

y[n] = (1/a_0) Σ_{k=0}^{M} b_k x[n − k] − (1/a_0) Σ_{k=1}^{N} a_k y[n − k]

This equation indicates how to obtain y[n] from the input and past values of the output.
Such equations are often used to implement discrete-time systems in a computer. Consider

computing y[n] for n ≥ 0 from x[n] for the example second-order difference equation
given in Eq. (2.30). We have

y[n] = x[n] + 2x[n − 1] − y[n − 1] − ¼y[n − 2]

Beginning with n = 0, we may determine the output by evaluating the sequence of equations

y[0] = x[0] + 2x[−1] − y[−1] − ¼y[−2]
y[1] = x[1] + 2x[0] − y[0] − ¼y[−1]
y[2] = x[2] + 2x[1] − y[1] − ¼y[0]
y[3] = x[3] + 2x[2] − y[2] − ¼y[1]

In each equation the current output is computed from the input and past values of the
output. In order to begin this process at time n = 0, we must know the two most recent
past values of the output, namely, y[−1] and y[−2]. These values are known as initial
conditions. This technique for finding the output of a system is very useful for computation
but does not provide much insight into the relationship between the difference equation
description and system characteristics.
The initial conditions summarize all the information about the system's past that is
needed to determine future outputs. No additional information about the past output or
input is necessary. Note that in general the number of initial conditions required to determine
the output is equal to the order of the system. Initial conditions are also required to
solve differential equations. In this case, the initial conditions are the values of the first N
derivatives of the output

y(t), dy(t)/dt, d^2 y(t)/dt^2, ..., d^{N−1} y(t)/dt^{N−1}

evaluated at the time t_0 after which we desire to determine y(t). The initial conditions in
a differential-equation description for a LTI system are directly related to the initial values
of the energy storage devices in the system, such as initial voltages on capacitors and initial
currents through inductors. As in the discrete-time case, the initial conditions summarize
all information about the past of the system that can impact future outputs. Hence initial
conditions also represent the ''memory'' of continuous-time systems.
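The recursive evaluation described above is straightforward to carry out on a computer. The sketch below (added for illustration; Python with NumPy is assumed, and the step input and zero initial conditions are arbitrary choices) implements the recursion for Eq. (2.30).

import numpy as np

def recurse(x, y_m1, y_m2, N):
    # evaluate y[n] = x[n] + 2x[n-1] - y[n-1] - (1/4)y[n-2] for n = 0, ..., N-1,
    # with initial conditions y[-1] = y_m1, y[-2] = y_m2 and x[n] = 0 for n < 0
    y = np.zeros(N)
    for n in range(N):
        xn   = x[n]     if n < len(x) else 0.0
        xnm1 = x[n - 1] if n - 1 >= 0 else 0.0
        ynm1 = y[n - 1] if n - 1 >= 0 else y_m1
        ynm2 = y[n - 2] if n - 2 >= 0 else (y_m1 if n - 2 == -1 else y_m2)
        y[n] = xn + 2.0 * xnm1 - ynm1 - 0.25 * ynm2
    return y

x = np.ones(10)                     # a step input, as an example
print(recurse(x, y_m1=0.0, y_m2=0.0, N=10))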
EXAMPLE 2.16 A system is described by the difference equation

y[n] − 1.143y[n − 1] + 0.4128y[n − 2] = 0.0675x[n] + 0.1349x[n − 1] + 0.0675x[n − 2]

Write a recursive formula to compute the present output from the past outputs and current
inputs. Determine the step response of the system, the system output when the input is zero
and the initial conditions are y[−1] = 1, y[−2] = 2, and the output in response to the sinusoidal
inputs x1[n] = cos(πn/10), x2[n] = cos(πn/5), and x3[n] = cos(7πn/10) assuming zero initial
conditions. Lastly, find the output of the system if the input is the average January temperature
data depicted in Fig. 2.22(f).

Solution: We rewrite the difference equation as shown by

y[n] = 1.143y[n − 1] − 0.4128y[n − 2] + 0.0675x[n] + 0.1349x[n − 1] + 0.0675x[n − 2]

This equation is evaluated in a recursive manner to determine the system output from the
input and initial conditions y[−1] and y[−2].
The step response of the system is evaluated by assuming the input is a step, x[n] =
u[n], and that the system is initially at rest, so the initial conditions are zero. Figure 2.22(a)
2.4 Di.fferential and Difference Equation Representationsfor LTI Systems 111

- ....
! i l ' ! l ! ! •
1 r

·- ) ) r r ..., r r r ...
l '
r
' 1 } >

)
slnJ !'
0.5 l .....

1
r

o
o 5 10 15 20 25 30 35 40 45 50
(a)

0.4 ! i ! 1
; i '; ! i 1

0.2 ~-··· .......

y[nJ

o1 ~
·-· -0-0...0 O -000-00-00-0-00- - - - - -0--0-0 O 0-0--0--0 000-0-0-00-0--0-o-
... A'"'

i i l 1 i 1 ! !
-0.2 ·-··- 1

o 5 10 15 20 25 30 35 40 45 50
(b)
....
1 l
··-· : ..._. . . . . j
·····-··
! i--· i i--· ! ' .õW.- !
~~ ~~
'-''
r r

ylnJ 00
ljr
1
..,
-1
......
i i
-
i-· l ~-
.•
~

l
-i-·
l
,n•
í
IC

o 10 20 30 40 50 60 70 80 90 100
(e)

l f -· -,-
\
ylnl

j
-1 - - - - - - - - ' - - - ~ - - - - - ~ - - - - - - - - - - - - - - · - - - - '
O 1O 20 30 40 50 60 70 80 90 l00
(d)

1.----,----,---------------,----,----,-----,----,

-] ,.__
o
_____ ____________ __
10 20
;,...._

30 40
__..__
50 60
.__~-'-------'~----~--'
70 80 90 100
(e)

FIGURE 2.22 lllustration of the solt1tion to Example 2.16. (a) Stcp resi)onse of system. (h) Out-
pt1t dueto nonzero initial conditions with zero input. (e) Output dt1e to x 1 [n] ~ cos(-to1T1i).
(d) ()uti)ut <.-lue t<.> x 2(nJ = cos(¾1r1i), (e) ()t1t1)ut due t<> x 3 (n] = cos(jfi1Tn).

FIGURE 2.22 (f) Input signal consisting of average January temperature data. (g) Output associated
with average January temperature data.

depicts the first 50 values of the step response. This system responds to a step by initially
rising to a value slightly greater than the input amplitude and then decreasing to the value of
the input at about n = 13. For n sufficiently large, we may consider the step to be a dc or
constant input. Since the output amplitude is equal to the input amplitude, we see that this
system has unit gain to constant inputs.
The response of the system to the initial conditions y[−1] = 1, y[−2] = 2 and zero
input is shown in Fig. 2.22(b). Although the recursive nature of the difference equation suggests
that the initial conditions affect all future values of the output, we see that the significant
portion of the output due to the initial conditions lasts until about n = 13.
The outputs due to the sinusoidal inputs x1[n], x2[n], and x3[n] are depicted in Figs.
2.22(c), (d), and (e), respectively. Once we are distant from the initial conditions and enter a
steady-state condition, we see that the system output is a sinusoid of the same frequency as
the input. Recall that the ratio of the steady-state output to input sinusoid amplitude is the
magnitude response of the system. The magnitude response at frequency π/10 is near unity, is about
0.7 at frequency π/5, and is near zero at frequency 7π/10. These results suggest that the magnitude
response of this system decreases as frequency increases: that is, the system attenuates the
components of the input that vary rapidly, while passing with unit gain those that vary slowly.
This characteristic is evident in the output of the system in response to the average January
temperature input shown in Fig. 2.22(g). We see that the output initially increases gradually
in the same manner as the step response. This is a consequence of assuming the input is zero
prior to 1900. After about 1906, the system has a smoothing effect since it attenuates rapid
fluctuations in the input and passes constant terms with unit gain.
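The computations behind Example 2.16 can be reproduced with a short program. The sketch below (added for illustration; Python with NumPy is assumed) runs the recursive form of the difference equation on the three sinusoidal inputs and estimates the steady-state gains, which come out near 0.97, 0.71, and 0.03, consistent with the discussion above.

import numpy as np

def filt(x, y_m1=0.0, y_m2=0.0):
    # recursive form of the difference equation in Example 2.16
    y = np.zeros(len(x))
    xp = np.concatenate(([0.0, 0.0], x))          # x[n] = 0 for n < 0
    for n in range(len(x)):
        ynm1 = y[n - 1] if n >= 1 else y_m1
        ynm2 = y[n - 2] if n >= 2 else (y_m1 if n == 1 else y_m2)
        y[n] = (1.143 * ynm1 - 0.4128 * ynm2
                + 0.0675 * xp[n + 2] + 0.1349 * xp[n + 1] + 0.0675 * xp[n])
    return y

n = np.arange(300)
for Omega in (np.pi / 10, np.pi / 5, 7 * np.pi / 10):
    y = filt(np.cos(Omega * n))
    amplitude = np.sqrt(2.0 * np.mean(y[200:] ** 2))   # steady-state amplitude
    print(round(float(amplitude), 3))
# prints approximately 0.974, 0.707, and 0.027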

FIGURE 2.23 RC circuit.

• Drill Problem 2.9 Write a differential equation describing the relationship between
the input voltage x(t) and voltage y(t) across the capacitor in Fig. 2.23.

Answer:
RC (dy(t)/dt) + y(t) = x(t) •
• SOLVING DIFFERENTIAL AND DIFFERENCE EQUATIONS

We now briefly review a method for solving differential and difference equations. This
offers a general characterization of solutions that provides insight into system behavior.
It is convenient to express the output of a system described by a differential or difference
equation as a sum of two components: one associated only with initial conditions,
and a second due only to the input. We shall term the component of the output associated
with the initial conditions the natural response of the system and denote it as y^{(n)}. The
component of the output due only to the input is termed the forced response of the system
and denoted as y^{(f)}. The natural response is the system output for zero input, while the
forced response is the system output assuming zero initial conditions. A system with zero
initial conditions is said to be at rest, since there is no stored energy or memory in the
system. The natural response describes the manner in which the system dissipates any
energy or memory of the past represented by nonzero initial conditions. The forced response
describes the system behavior that is ''forced'' by the input when the system is at
rest.

The Natural Response

The natural response is the system output when the input is zero. Hence for a continuous-time
system the natural response, y^{(n)}(t), is the solution to the homogeneous
equation

Σ_{k=0}^{N} a_k (d^k/dt^k) y^{(n)}(t) = 0

The natural response for a continuous-time system is of the form

y^{(n)}(t) = Σ_{i=1}^{N} c_i e^{r_i t}     (2.31)

where the r_i are the N roots of the system's characteristic equation

Σ_{k=0}^{N} a_k r^k = 0     (2.32)

Substitution of Eq. (2.31) into the homogeneous equation establishes that y^{(n)}(t) is a solution
for any set of constants c_i.

In discrete time the natural response, y^{(n)}[n], is the solution to the homogeneous
equation

Σ_{k=0}^{N} a_k y^{(n)}[n − k] = 0

It is of the form

y^{(n)}[n] = Σ_{i=1}^{N} c_i r_i^n     (2.33)

where the r_i are the N roots of the discrete-time system's characteristic equation

Σ_{k=0}^{N} a_k r^{N−k} = 0     (2.34)

Again, substitution of Eq. (2.33) into the homogeneous equation establishes that y^{(n)}[n] is
a solution. In both cases, the c_i are determined so that the solution y^{(n)} satisfies the initial
conditions. Note that the continuous-time and discrete-time characteristic equations differ.
The form of the natural response changes slightly when the characteristic equation
described by Eq. (2.32) or Eq. (2.34) has repeated roots. If a root r_i is repeated p times,
then we include p distinct terms in the solutions Eqs. (2.31) and (2.33) associated with r_i.
They involve the p functions

e^{r_i t}, t e^{r_i t}, ..., t^{p−1} e^{r_i t}

and

r_i^n, n r_i^n, ..., n^{p−1} r_i^n

respectively.
The nature of each term in the natural response depends on whether the roots r_i are
real, imaginary, or complex. Real roots lead to real exponentials, imaginary roots to sinusoids,
and complex roots to exponentially damped sinusoids.

EXAMPLE 2.17 Consider the RL circuit depicted in Fig. 2.24 as a system whose input is the
applied voltage x(t) and output is the current y(t). Find a differential equation that describes
this system and determine the natural response of the system for t > 0 assuming the current
through the inductor at t = 0 is y(0) = 2 A.

Solution: Summing the voltages around the loop gives the differential equation

Ry(t) + L (dy(t)/dt) = x(t)

The natural response is the solution of the homogeneous equation

Ry(t) + L (dy(t)/dt) = 0

The solution is given by Eq. (2.31) for N = 1,

y^{(n)}(t) = c_1 e^{r_1 t} A

where r_1 is the root of the equation

R + Lr = 0

FIGURE 2.24 RL circuit.

Hence r_1 = −R/L. The coefficient c_1 is determined so that the response satisfies the initial
condition y(0) = 2. This implies c_1 = 2 and the natural response of this system is

y^{(n)}(t) = 2e^{−(R/L)t} A,   t ≥ 0

• Drill Problem 2.10 Determine the form of the natural response for the system
described by the difference equation

y[n] + ¼y[n − 2] = x[n] + 2x[n − 2]

Answer:
y^{(n)}[n] = c_1 (½)^n cos(πn/2) + c_2 (½)^n sin(πn/2) •
• Drill Problem 2.11 Determine the form of the natural response for the RLC circuit
depicted in Fig. 2.21(a) as a function of R, L, and C. Indicate the conditions on R, L, and
C so that the natural response consists of real exponentials, complex sinusoids, and exponentially
damped sinusoids.

Answer: For R^2 ≠ 4L/C,

y^{(n)}(t) = c_1 e^{r_1 t} + c_2 e^{r_2 t}

where

r_1 = [−R + √(R^2 − 4L/C)]/(2L),   r_2 = [−R − √(R^2 − 4L/C)]/(2L)

For R^2 = 4L/C,

y^{(n)}(t) = c_1 e^{r_1 t} + c_2 t e^{r_1 t},   with r_1 = −R/(2L)

For real exponentials R^2 > 4L/C, for complex sinusoids R = 0, and for exponentially
damped sinusoids R^2 < 4L/C. •
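The three cases in Drill Problem 2.11 can be explored numerically by finding the roots of the characteristic equation Lr^2 + Rr + 1/C = 0, as in the sketch below (added for illustration; Python with NumPy is assumed, and the component values are arbitrary).

import numpy as np

def char_roots(R, L, C):
    # roots of L*r**2 + R*r + 1/C = 0
    return np.roots([L, R, 1.0 / C])

L, C = 1e-3, 1e-6                       # 1 mH and 1 uF, so 4L/C = 4000
for R in (100.0, 0.0, 10.0):            # R^2 > 4L/C, R = 0, R^2 < 4L/C
    print(R, char_roots(R, L, C))
# R = 100: two real negative roots (real exponentials)
# R = 0:   purely imaginary roots (complex sinusoids)
# R = 10:  complex conjugate roots with negative real part (damped sinusoids)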
The Forced Response

The forced response is the solution to the differential or difference equation for the
given input assuming the initial conditions are zero. It consists of the sum of two components:
a term of the same form as the natural response, and a particular solution.
The particular solution is denoted as y^{(p)} and represents any solution to the differential
or difference equation for the given input. It is usually obtained by assuming the
system output has the same general form as the input. For example, if the input to a
discrete-time system is x[n] = a^n, then we assume the output is of the form y^{(p)}[n] = ca^n
and find the constant c so that y^{(p)}[n] is a solution to the system's difference equation. If
the input is x[n] = A cos(Ωn + φ), then we assume a general sinusoidal response of the

TABLE 2.1 Form of a Particular Solution Corresponding to Several Common Inputs

Continuous Time                                  Discrete Time
Input           Particular Solution              Input           Particular Solution
1               c                                1               c
e^{−at}         ce^{−at}                         a^n             ca^n
cos(ωt + φ)     c_1 cos(ωt) + c_2 sin(ωt)        cos(Ωn + φ)     c_1 cos(Ωn) + c_2 sin(Ωn)

form y^{(p)}[n] = c_1 cos(Ωn) + c_2 sin(Ωn), where c_1 and c_2 are determined so that y^{(p)}[n]
satisfies the system's difference equation. Assuming an output of the same form as the
input is consistent with our expectation that the output of the system be directly related
to the input.
The form of the particular solution associated with common input signals is given
in Table 2.1. More extensive tables are given in books devoted to solving difference and
differential equations, such as those listed at the end of this chapter. The procedure for
identifying a particular solution is illustrated in the following example.
EXAMPLE 2.18 Consider the RL circuit of Example 2.17 and depicted in Fig. 2.24. Find a
particular solution for this system with an input x(t) = cos(ω_0 t) V.

Solution: The differential equation describing this system was obtained in Example 2.17 as

Ry(t) + L (dy(t)/dt) = x(t)

We assume a particular solution of the form y^{(p)}(t) = c_1 cos(ω_0 t) + c_2 sin(ω_0 t). Replacing y(t)
in the differential equation by y^{(p)}(t) and x(t) by cos(ω_0 t) gives

Rc_1 cos(ω_0 t) + Rc_2 sin(ω_0 t) − Lω_0 c_1 sin(ω_0 t) + Lω_0 c_2 cos(ω_0 t) = cos(ω_0 t)

The coefficients c_1 and c_2 are obtained by separately equating the coefficients of cos(ω_0 t) and
sin(ω_0 t). This gives a system of two equations in two unknowns, as shown by

Rc_1 + Lω_0 c_2 = 1
−Lω_0 c_1 + Rc_2 = 0

Solving these for c_1 and c_2 gives

c_1 = R/(R^2 + L^2 ω_0^2),   c_2 = Lω_0/(R^2 + L^2 ω_0^2)

Hence the particular solution is

y^{(p)}(t) = [R/(R^2 + L^2 ω_0^2)] cos(ω_0 t) + [Lω_0/(R^2 + L^2 ω_0^2)] sin(ω_0 t) A

This approach for finding a particular solution is modified when the input is of the
same form as one of the components of the natural response. In this case we must assume
a particular solution that is independent of all terms in the natural response in order to

obtain the forced response of the system. This is accomplished analogously to the procedure
for generating independent natural response components when there are repeated
roots in the characteristic equation. Specifically, we multiply the form of the particular
solution by the lowest power of t or n that will give a response component not included
in the natural response. For example, if the natural response contains the terms e^{−at} and
te^{−at} due to a second-order root at −a, and the input is x(t) = e^{−at}, then we assume a
particular solution of the form y^{(p)}(t) = ct^2 e^{−at}.
The forced response of the system is obtained by summing the particular solution
with the form of the natural response and finding the unspecified coefficients in the natural
response so that the combined response satisfies zero initial conditions. Assuming the input
is applied at time t = 0 or n = 0, this procedure is as follows:
1. Find the form of the natural response y^{(n)} from the roots of the characteristic
equation.
2. Find a particular solution y^{(p)} by assuming it is of the same form as the input yet
independent of all terms in the natural response.
3. Determine the coefficients in the natural response so that the forced response y^{(f)} =
y^{(p)} + y^{(n)} has zero initial conditions at t = 0 or n = 0. The forced response is valid
for t ≥ 0 or n ≥ 0.
In the discrete-time case, the zero initial conditions, y^{(f)}[−N], ..., y^{(f)}[−1], must be translated
to times n ≥ 0, since the forced response is valid only for times n ≥ 0. This is
accomplished by using the recursive form of the difference equation, the input, and the at-rest
conditions y^{(f)}[−N] = 0, ..., y^{(f)}[−1] = 0 to obtain translated initial conditions
y^{(f)}[0], y^{(f)}[1], ..., y^{(f)}[N − 1]. These are then used to determine the unknown coefficients
in the natural response component of y^{(f)}[n].

EXAMPLE 2.19 Find the forced response of the RL circuit depicted in Fig. 2.24 to an input
x(t) = cos(t) V assuming normalized values R = 1 Ω and L = 1 H.

Solution: The form of the natural response was obtained in Example 2.17 as

y^{(n)}(t) = ce^{−(R/L)t} A

A particular solution was obtained in Example 2.18 for this input as

y^{(p)}(t) = [R/(R^2 + L^2)] cos(t) + [L/(R^2 + L^2)] sin(t) A

where we have used ω_0 = 1. Substituting R = 1 Ω and L = 1 H, the forced response for
t > 0 is

y^{(f)}(t) = ce^{−t} + ½ cos t + ½ sin t A

The coefficient c is now determined from the initial condition y(0) = 0:

0 = ce^{0} + ½ cos 0 + ½ sin 0
  = c + ½

and so we find that c = −½.

• Drill Problem 2.12 A system described by the difference equation

y[n] − ¼y[n − 2] = 2x[n] + x[n − 1]

has input signal x[n] = u[n]. Find the forced response of the system. Hint: Use y[n] =
¼y[n − 2] + 2x[n] + x[n − 1] with x[n] = u[n] and y^{(f)}[−2] = 0, y^{(f)}[−1] = 0 to determine
y^{(f)}[0] and y^{(f)}[1].

Answer:
y^{(f)}[n] = (−2(½)^n + 4) u[n] •
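The answer to Drill Problem 2.12 is easily confirmed by running the recursion from rest and comparing with the closed form, as in the following sketch (added for illustration; Python with NumPy is assumed).

import numpy as np

N = 20
y = np.zeros(N)
for n in range(N):
    ynm2 = y[n - 2] if n >= 2 else 0.0           # at rest: y[-1] = y[-2] = 0
    xn, xnm1 = 1.0, (1.0 if n >= 1 else 0.0)     # x[n] = u[n]
    y[n] = 0.25 * ynm2 + 2.0 * xn + xnm1

closed_form = -2.0 * 0.5 ** np.arange(N) + 4.0
print(np.allclose(y, closed_form))               # True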
The Complete Response

The complete response of the system is the sum of the natural response and the forced
response. If there is no need to separately obtain the natural and the forced response, then
the complete response of the system may be obtained directly by repeating the three-step
procedure for determining the forced response using the actual initial conditions instead
of zero initial conditions. This is illustrated in the following example.

EXAMPLE 2.20 Find the current through the RL circuit depicted in Fig. 2.24 for an applied
voltage x(t) = cos(t) V assuming normalized values R = 1 Ω, L = 1 H and that the initial
condition is y(0) = 2 A.

Solution: The form of the forced response was obtained in Example 2.19 as

y(t) = ce^{−t} + ½ cos t + ½ sin t A

We obtain the complete response of the system by solving for c so that the initial condition
y(0) = 2 is satisfied. This implies

2 = c + ½(1) + ½(0)

or c = 3/2. Hence

y(t) = (3/2)e^{−t} + ½ cos t + ½ sin t A,   t ≥ 0

Note that this corresponds to the sum of the natural and forced responses. In Example 2.17
we obtained

y^{(n)}(t) = 2e^{−t} A,   t ≥ 0

while in Example 2.19 we obtained

y^{(f)}(t) = −½e^{−t} + ½ cos t + ½ sin t A,   t ≥ 0

The sum, y(t) = y^{(n)}(t) + y^{(f)}(t), is given by

y(t) = (3/2)e^{−t} + ½ cos t + ½ sin t A,   t ≥ 0

and is exactly equal to the response we obtained by directly solving for the complete response.
Figure 2.25 depicts the natural, forced, and complete responses of the system.
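The complete response found in Example 2.20 can be cross-checked against a direct numerical integration of the differential equation, as in the sketch below (added for illustration; Python with NumPy is assumed, and the forward-Euler step size is an arbitrary choice).

import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
y_num = np.zeros_like(t)
y_num[0] = 2.0
for k in range(len(t) - 1):
    dydt = np.cos(t[k]) - y_num[k]          # dy/dt = x(t) - y for R = L = 1
    y_num[k + 1] = y_num[k] + dt * dydt

y_exact = 1.5 * np.exp(-t) + 0.5 * np.cos(t) + 0.5 * np.sin(t)
print(np.max(np.abs(y_num - y_exact)))      # small, on the order of dt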

• Drill Problem 2.13 Find the response of the RC circuit depicted in Fig. 2.23 to x(t)
= u(t), assuming the initial voltage across the capacitor is y(0) = −1 V.

Answer:
y(t) = (1 − 2e^{−t/RC}) V,   t ≥ 0 •

The Impulse Response

The method described thus far for solving differential and difference equations cannot
be used to find the impulse response directly. However, the impulse response is easily

FIGURE 2.25 Response of RL circuit depicted in Fig. 2.24 to input x(t) = cos(t) V when y(0) =
2 A. (See Example 2.20.) (a) Natural response. (b) Forced response. (c) Complete response.

determined by first finding the step response and then exploiting the relationship between
the impulse and step response. The definition of the step response assumes the system is
at rest, so it represents the forced response of the system to a step input. For a continuous-time
system, the impulse response, h(t), is related to the step response, s(t), as h(t) =
(d/dt)s(t). For a discrete-time system we have h[n] = s[n] − s[n − 1]. Thus the impulse response
is obtained by differentiating or differencing the step response. The differentiation and
differencing operations eliminate the constant term associated with the particular solution
in the step response and change only the constants associated with the exponential terms
in the natural response component. This implies that the impulse response is only a function
of the terms in the natural response.

• CHARACTERISTICS OF SYSTEMS DESCRIBED
BY DIFFERENTIAL AND DIFFERENCE EQUATIONS

The forced response of a LTI system described by a differential or difference equation is
linear with respect to the input. If y_1^{(f)} is the forced response associated with an input x_1
and y_2^{(f)} is the forced response associated with an input x_2, then the input αx_1 + βx_2

generates a forced response given by αy_1^{(f)} + βy_2^{(f)}. Similarly, the natural response is linear
with respect to the initial conditions. If y_1^{(n)} is the natural response associated with initial
conditions I_1 and y_2^{(n)} is the natural response associated with initial conditions I_2, then the
initial conditions αI_1 + βI_2 result in a natural response αy_1^{(n)} + βy_2^{(n)}. The forced response
is also time invariant. A time shift in the input results in a time shift in the output since
the system is initially at rest. In general, the complete response of a system described by a
differential or difference equation is not time invariant, since the initial conditions will
result in an output term that does not shift with a time shift of the input. Lastly, we observe
that the forced response is also causal. Since the system is initially at rest, the output does
not begin prior to the time at which the input is applied to the system.
The forced response depends on both the input and the roots of the characteristic
equation since it involves both the basic form of the natural response and a particular
solution to the differential or difference equation. The basic form of the natural response
is dependent entirely on the roots of the characteristic equation. The impulse response of
the system also depends on the roots of the characteristic equation since it contains the
identical terms as the natural response. Thus the roots of the characteristic equation provide
considerable information about the system behavior.
For example, the stability characteristics of a system are directly related to the roots
of the system's characteristic equation. To see this, note that the output of a stable system
in response to zero input must be bounded for any set of initial conditions. This follows
from the definition of BIBO stability and implies that the natural response of the system
must be bounded. Thus each term in the natural response must be bounded. In the discrete-time
case we must have |r_i^n| bounded, or |r_i| ≤ 1. When |r_i| = 1, the natural response does
not decay and the system is said to be on the verge of instability. For continuous-time
systems we require that |e^{r_i t}| be bounded, which implies Re{r_i} ≤ 0. Here again, when
Re{r_i} = 0, the system is said to be on the verge of instability. These results imply that a
discrete-time system is unstable if any root of the characteristic equation has magnitude
greater than unity, and a continuous-time system is unstable if the real part of any root of
the characteristic equation is positive.
This discussion establishes that the roots of the characteristic equation indicate when
a system is unstable. In later chapters we establish that a discrete-time causal system is
stable if and only if all roots of the characteristic equation have magnitude less than unity,
and a continuous-time causal system is stable if and only if the real parts of all roots of
the characteristic equation are negative. These stability conditions imply that the natural
response of a system goes to zero as time approaches infinity since each term in the natural
response is a decaying exponential. This ''decay to zero'' is consistent with our intuitive
concept of a system's zero input behavior. We expect a zero output when the input is zero.
The initial conditions represent any energy present in the system; in a stable system with
zero input this energy eventually dissipates and the output approaches zero.
The response time of a system is also determined by the roots of the characteristic
equation. Once the natural response has decayed to zero, the system behavior is governed
only by the particular solution, which is of the same form as the input. Thus the natural
response component describes the transient behavior of the system: that is, it describes the
transition of the system from its initial condition to an equilibrium condition determined
by the input. Hence the transient response time of a system is determined by the time it
takes the natural response to decay to zero. Recall that the natural response contains terms of
the form r_i^n for a discrete-time system and e^{r_i t} for a continuous-time system. The transient
response time of a discrete-time system is therefore proportional to the magnitude of the
largest root of the characteristic equation, while that of a continuous-time system is determined
by the root whose real component is closest to zero. In order to have a

continuous-time system with a fast response time, all the roots of the characteristic equation must
have large and negative real parts.
The impulse response of the system can be determined directly from the differential-
or difference-equation description of a system, although it is generally much easier to
obtain the impulse response indirectly using methods described in later chapters. Note that
there is no provision for initial conditions when using the impulse response; it applies only
to systems that are initially at rest or when the input is known for all time. Differential
and difference equation system descriptions are more flexible in this respect, since they
apply to systems either at rest or with nonzero initial conditions.

2.5 Block Diagram Representations


ln thís section we examine block diagram representations for I.TI systems described by
differentíal and difference equations. A bl<Jck diag1'am is an interconnection of elementary
operatic>ns that act on the input signal. The blc>ck diagram is a n1ore detailed representation
c>f the syscem tha11 the impulse respt)nse <>r diffcrence- and differentia]-equation descrip-
tions since it describes how the system's internai cc>mputations or operations are ordered.
The impulse response and difference- or differential-equatíon descriptions represent only
the input-output behaví<1r of a system. We shall show that a system with a given input-
output characteristic can be represented with different block diagrams. Each block diagram
represe11tatic>n describes a different set of internai computations used to determine the
system output.
Block diagran, representatit>ns cc>nsist <>Í a11 interconnection of three elementary t>p-
erations on signals:
1. Scalar multiplication: y(t) = cx(t) or y(n] = cx[n], where e is a scalar.
2. Addition: y(t) = x(t) + w(t) or ylnl = xlnl + w[n].
3. lntegratíon for c<)ntinuc>t1s-time systems: y( t) = f 1 "" x(-r) d-r; or a time shift fc>r
discrete-tin1e systems: yln] = x[n - 1].
Figure 2.26 depicts the block diagram symbols used to represent each of these operations.
ln order to express a c<>11ti11uc1us-time system ín terms of integration, we shall co11vert the
differential eqttation intt> an integral equation. The operation of integratit>n is usually uscd
in block diagrams for continuc>us-timc systerns instead of dífferentiation because integra-
tors are more easily bt1ilt from analc>g cc>mponents than are differentiators. Also, integra-
tc1rs s111ootl1 out noise in the system, while differentiators accentuate nc>ise.
The i11tegral or difference equatic>n cc>rresponding to the system behavic>r is <>l1tai11ed
by exprcssing the sequence of operatic.>ns represented by the block diagram in equati<>n

FIGURE 2.26 Symbols for elementary operations in block diagram descriptions for systems.
(a) Scalar multiplication. (b) Addition. (c) Integration for continuous-time systems and time shift
for discrete-time systems.

FIGURE 2.27 Block diagram representation for a discrete-time system described by a second-order
difference equation.

form. Begin with the discrete-time case. A discrete-time system is depicted in Fig. 2.27.
Consider writing an equation corresponding to the portion of the system within the dashed
box. The output of the first time shift is x[n - 1]. The second time shift has output
x[n - 2]. The scalar multiplications and summations imply
w[n] = b0x[n] + b1x[n - 1] + b2x[n - 2]     (2.35)
Now we may write an expression for y[n] in terms of w[n]. The block diagram indicates
that
y[n] = w[n] - a1y[n - 1] - a2y[n - 2]     (2.36)
The output of this system may be expressed as a function of the input x[n] by substituting
Eq. (2.35) for w[n] in Eq. (2.36). We have
y[n] = -a1y[n - 1] - a2y[n - 2] + b0x[n] + b1x[n - 1] + b2x[n - 2]
or
y[n] + a1y[n - 1] + a2y[n - 2] = b0x[n] + b1x[n - 1] + b2x[n - 2]     (2.37)
Thus the block diagram in Fig. 2.27 describes a system whose input-output characteristic
is represented by a second-order difference equation.
Note that the block diagram explicitly represents the operations involved in computing
the output from the input and tells us how to simulate the system on a computer.
The operations of scalar multiplication and addition are easily evaluated using a computer.
The outputs of the time-shift operations correspond to memory locations in a computer.
In order to compute the current output from the current input, we must have saved the
past values of the input and output in memory. To begin a computer simulation at a
specified time we must know the input and the past two values of the output. The past
values of the output are the initial conditions required to solve the difference equation
directly.
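A minimal MATLAB sketch of such a simulation of Eq. (2.37) is given below. The coefficient values, the input, and the particular loop structure are illustrative assumptions, not a program from the text; the vectors yp and xp play the role of the memory locations associated with the time-shift blocks in Fig. 2.27.

>> a1 = 0.5; a2 = 0.25; b0 = 1; b1 = 0.5; b2 = 0.25;   % hypothetical coefficients
>> x = ones(1,20); y = zeros(1,20);                     % illustrative input and output storage
>> yp = [0, 0]; xp = [0, 0];                            % past values [y[n-1], y[n-2]] and [x[n-1], x[n-2]]
>> for n = 1:20
     y(n) = -a1*yp(1) - a2*yp(2) + b0*x(n) + b1*xp(1) + b2*xp(2);   % Eq. (2.37) rearranged for y[n]
     yp = [y(n), yp(1)]; xp = [x(n), xp(1)];                        % update the memory
   end

Nonzero initial conditions would simply be placed in yp (and xp) before the loop begins.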

• Drill Problem 2.14 Determine the difference equation corresponding to the block
diagram description of the system depicted in Fig. 2.28.
Answer:
y[n] + ½y[n - 1] - ⅓y[n - 3] = x[n] + 2x[n - 2]
FIGURE 2.28 Block diagram representation for Drill Problem 2.14.


The block diagram description for a system is not unique. We illustrate this by developing
a second block diagram description for the system described by the second-order
difference equation given by Eq. (2.37). We may view the system in Fig. 2.27 as a cascade
of two systems: one with input x[n] and output w[n] described by Eq. (2.35) and a second
with input w[n] and output y[n] described by Eq. (2.36). Since these are LTI systems, we
may interchange their order without changing the input-output behavior of the cascade.
Interchange their order and denote the output of the new first system as f[n]. This output
is obtained from Eq. (2.36) and the input x[n] as shown by
f[n] = -a1f[n - 1] - a2f[n - 2] + x[n]     (2.38)
The signal f[n] is also the input to the second system. The output of the second system,
y[n], is obtained from Eq. (2.35) as
y[n] = b0f[n] + b1f[n - 1] + b2f[n - 2]     (2.39)
Both systems involve time-shifted versions of f[n]. Hence only one set of time shifts is
needed in the block diagram for this system. We may represent the system described by
Eqs. (2.38) and (2.39) as the block diagram illustrated in Fig. 2.29.
The block diagrams in Figs. 2.27 and 2.29 represent different implementations for a
system with input-output behavior described by Eq. (2.37). The system in Fig. 2.27 is
termed a "direct form I" implementation. The system in Fig. 2.29 is termed a "direct form

FIGURE 2.29 Alternative block diagram representation for a system described by a second-order
difference equation.

II" implementation. The direct form II implementation uses memory more efficiently, since
for this example it requires only two memory locations compared to the four required for
the direct form I.
There are many different implementations for a system whose input-output behavior
is described by a difference equation. They are obtained by manipulating either the difference
equation or the elements in a block diagram representation. While these different
systems are equivalent from an input-output perspective, they will generally differ with
respect to other criteria such as memory requirements, the number of computations required
per output value, or numerical accuracy.
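The memory saving of the direct form II structure can be seen in a sketch paralleling the one given after Eq. (2.37); only the delayed values of the single signal f[n] need be stored. The fragment below assumes the same illustrative coefficients a1, a2, b0, b1, b2 and input x used in that earlier sketch.

>> fp = [0, 0];                           % past values [f[n-1], f[n-2]]: the only memory required
>> y2 = zeros(1,20);
>> for n = 1:20
     f = -a1*fp(1) - a2*fp(2) + x(n);     % Eq. (2.38)
     y2(n) = b0*f + b1*fp(1) + b2*fp(2);  % Eq. (2.39)
     fp = [f, fp(1)];                     % update the single set of time shifts
   end

With zero initial conditions, y2 agrees with the direct form I output computed earlier, as the two structures have identical input-output behavior.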
Analogous results hold for continuous-time systems. We may simply replace the time-shift
operations in Figs. 2.27 and 2.29 with time differentiation to obtain block diagram
representations for systems described by differential equations. However, in order to depict
the continuous-time system in terms of the more easily implemented integration operation,
we must first rewrite the differential equation description
Σ_{k=0}^{N} a_k (d^k/dt^k) y(t) = Σ_{k=0}^{M} b_k (d^k/dt^k) x(t)     (2.40)
as an integral equation.
We define the integration operation in a recursive manner to simplify the notation.
Let v^(0)(t) = v(t) be an arbitrary signal and set
v^(n)(t) = ∫_{-∞}^{t} v^(n-1)(τ) dτ,    n = 1, 2, 3, ...
Hence v^(n)(t) is the n-fold integral of v(t) with respect to time. This definition integrates
over all past values of time. We may rewrite this in terms of an initial condition on the
integrator as
v^(n)(t) = ∫_{0}^{t} v^(n-1)(τ) dτ + v^(n)(0),    n = 1, 2, 3, ...
If we assume zero initial conditions, then integration and differentiation are inverse operations;
that is,
(d^n/dt^n) v^(n)(t) = v(t),    t > 0 and n = 1, 2, 3, ...
Hence if N ≥ M and we integrate Eq. (2.40) N times, we obtain the integral equation
description for the system:
Σ_{k=0}^{N} a_k y^(N-k)(t) = Σ_{k=0}^{M} b_k x^(N-k)(t)     (2.41)
For a second-order system with a2 = 1, Eq. (2.41) may be written
y(t) = -a1 y^(1)(t) - a0 y^(2)(t) + b2 x(t) + b1 x^(1)(t) + b0 x^(2)(t)     (2.42)
Direct form I and direct form II implementations of this system are depicted in Figs. 2.30(a)
and (b). The reader is asked to show that these block diagrams implement the integral
equation in Problem 2.25. Note that the direct form II implementation uses fewer integrators
than the direct form I implementation.
Block diagram representations for continuous-time systems may be used to specify
analog computer simulations of systems. In such a simulation, signals are represented as
FIGURE 2.30 Block diagram representations for a continuous-time system described by a second-order
integral equation. (a) Direct form I. (b) Direct form II.

voltages, resistors are used to implement scalar multiplication, and the integrators are
constructed using operational amplifiers, resistors, and capacitors. Initial conditions are
specified as initial voltages on integrators. Analog computer simulations are much more
cumbersome than digital computer simulations and suffer from drift, however, so it is
common to simulate continuous-time systems on digital computers by using numerical
approximations to either integration or differentiation operations.
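As a rough illustration of one such numerical approximation (a sketch only, not the method developed later in the text), each integrator may be replaced by a running sum with a small step size Ts; the step size and input below are hypothetical.

>> Ts = 0.01;                         % hypothetical step size
>> t = 0:Ts:5; x = ones(size(t));     % time axis and illustrative input
>> v = 0; y = zeros(size(t));         % integrator state, output storage
>> for k = 2:length(t)
     v = v + Ts*x(k-1);               % Euler approximation of the integral of x up to time t(k)
     y(k) = v;
   end

Approximating every integrator in Figs. 2.30(a) and (b) in this way is the basic idea behind simulating a continuous-time system on a digital computer.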

2.6 State-Variable Descriptions
for LTI Systems

The state-variable description for a LTI system consists of a series of coupled first-order
differential or difference equations that describe how the state of the system evolves and
an equation that relates the output of the system to the current state variables and input.
These equations are written in matrix form. The state of a system may be defined as a
minimal set of signals that represent the system's entire memory of the past. That is, given
only the value of the state at a point in time n0 (or t0) and the input for times n ≥ n0 (or
t ≥ t0), we can determine the output for all times n ≥ n0 (or t ≥ t0). We shall see that the
selection of signals comprising the state of a system is not unique and that there are many
possible state-variable descriptions corresponding to a system with a given input-output
characteristic. The ability to represent a system with different state-variable descriptions
is a powerful attribute that finds application in advanced methods for control system
analysis and discrete-time system implementation.

• THE STATE-VARIABLE DESCRIPTION

We shall develop the general state-variable description by starting with the direct form II
implementation for a second-order LTI system depicted in Fig. 2.31. In order to determine
the output of the system for n ≥ n0, we must know the input for n ≥ n0 and the outputs
of the time-shift operations labeled q1[n] and q2[n] at time n = n0. This suggests that we
may choose q1[n] and q2[n] as the state of the system. Note that since q1[n] and q2[n]
are the outputs of the time-shift operations, the next value of the state, q1[n + 1] and
q2[n + 1], must correspond to the variables at the input to the time-shift operations.
FIGURE 2.31 Direct form II representation for a second-order discrete-time system depicting
state variables q1[n] and q2[n].

The block diagram indicates that the next value of the state is obtained from the current
state and the input via the equations
q1[n + 1] = -a1q1[n] - a2q2[n] + x[n]     (2.43)
q2[n + 1] = q1[n]     (2.44)
The block diagram also indicates that the system output is expressed in terms of the input
and state as
y[n] = b1q1[n] + b2q2[n] - a1q1[n] - a2q2[n] + x[n]
or
y[n] = (b1 - a1)q1[n] + (b2 - a2)q2[n] + x[n]     (2.45)
We write Eqs. (2.43) and (2.44) in matrix form as
[q1[n + 1]; q2[n + 1]] = [-a1  -a2; 1  0][q1[n]; q2[n]] + [1; 0]x[n]     (2.46)
while Eq. (2.45) is expressed as
y[n] = [b1 - a1   b2 - a2][q1[n]; q2[n]] + [1]x[n]     (2.47)
If we define the state vector as the column vector
q[n] = [q1[n]; q2[n]]
then we can rewrite Eqs. (2.46) and (2.47) as
q[n + 1] = Aq[n] + bx[n]     (2.48)
y[n] = cq[n] + Dx[n]     (2.49)
where the matrix A, vectors b and c, and scalar D are given by
A = [-a1  -a2; 1  0],   b = [1; 0],   c = [b1 - a1   b2 - a2],   D = [1]
Equations (2.48) and (2.49) are the general form for a state-variable description corresponding
to a discrete-time system. The matrix A, vectors b and c, and scalar D represent
another description for the system. Systems having different internal structures will be
represented by different A, b, c, and D. The state-variable description is the only analytic
system representation capable of specifying the internal structure of the system. Thus the
state-variable description is used in any problem in which the internal system structure
needs to be considered.
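The recursive structure of Eqs. (2.48) and (2.49) is straightforward to evaluate numerically. The MATLAB lines below are only a sketch; the matrices, input, and initial state are hypothetical and serve simply to show how the state is propagated forward in time.

>> A = [-0.5, -0.25; 1, 0]; b = [1; 0];    % hypothetical A and b
>> c = [0.5, 0.25]; D = 1;                 % hypothetical c and D
>> x = ones(1,20); q = [0; 0];             % illustrative input and zero initial state
>> y = zeros(1,20);
>> for n = 1:20
     y(n) = c*q + D*x(n);                  % Eq. (2.49): output from current state and input
     q = A*q + b*x(n);                     % Eq. (2.48): state update
   end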
If the input-output characteristics of the system are described by an Nth-order difference
equation, then the state vector q[n] is N-by-1, A is N-by-N, b is N-by-1, and c is
1-by-N. Recall that solution of the difference equation requires N initial conditions. The
N initial conditions represent the system's memory of the past, as does the N-dimensional
state vector. Also, an Nth-order system contains at least N time-shift operations in its block
diagram representation. If the block diagram for a system has a minimal number of time
shifts, then a natural choice for the states is the outputs of the unit delays, since the unit
delays embody the memory of the system. This choice is illustrated in the following
example.

EXAMPLE 2.21 Find the state-variable description corresponding to the second-order system
depicted in Fig. 2.32 by choosing the state variables to be the outputs of the unit delays.
Solution: The block diagram indicates that the states are updated according to the equations
q1[n + 1] = αq1[n] + δ1x[n]
q2[n + 1] = γq1[n] + βq2[n] + δ2x[n]
and the output is given by
y[n] = η1q1[n] + η2q2[n]
These equations are expressed in the state-variable forms of Eqs. (2.48) and (2.49) if we define
q[n] = [q1[n]; q2[n]]
and
A = [α  0; γ  β],   b = [δ1; δ2],   c = [η1   η2],   D = [0]

FIGURE 2.32 Block diagram of system for Example 2.21.


FIGURE 2.33 Block diagram of system for Drill Problem 2.15.

• Drill Problem 2.15 Find the state-variable description corresponding to the block
diagram in Fig. 2.33. Choose the state variables to be the outputs of the unit delays, q1[n]
and q2[n], as indicated in the figure.
Answer:
A = [-1/2  0; 1  -1/3],   b = [1; 1],   c = [0   1],   D = [2] •
The state-variable description for continuous-time systems is analogous to that for
discrete-time systems, with the exception that the state equation given by Eq. (2.48) is
expressed in terms of a derivative. We thus write
(d/dt)q(t) = Aq(t) + bx(t)     (2.50)
y(t) = cq(t) + Dx(t)     (2.51)
Once again, the matrix A, vectors b and c, and scalar D describe the internal structure of
the system.
The memory of a continuous-time system is contained within the system's energy
storage devices. Hence state variables are usually chosen as the physical quantities associated
with the energy storage devices. For example, in electrical systems the energy storage
devices are capacitors and inductors. We may choose state variables to correspond to the
voltage across capacitors or the current through inductors. In a mechanical system the
energy-storing devices are springs and masses. State variables may be chosen as spring
displacement or mass velocity. In a block diagram representation energy storage devices
are integrators. The state-variable equations represented by Eqs. (2.50) and (2.51) are
obtained from the equations that relate the behavior of the energy storage devices to the
input and output. This procedure is demonstrated in the following examples.

EXAMPLE 2.22 Consider the electrical circuit depicted in Fig. 2.34. Derive a state-variable
description for this system if the input is the applied voltage x(t) and the output is the current
through the resistor labeled y(t).
Solution: Choose the state variables as the voltage across each capacitor. Summing the
voltage drops around the loop involving x(t), R1, and C1 gives
x(t) = y(t)R1 + q1(t)
FIGURE 2.34 Circuit diagram of system for Example 2.22.

or
y(t) = -(1/R1)q1(t) + (1/R1)x(t)     (2.52)
This equation expresses the output as a function of the state variables and input. Let i2(t) be
the current through R2. Summing the voltage drops around the loop involving C1, R2, and C2,
we obtain
q1(t) = R2i2(t) + q2(t)
or
i2(t) = (1/R2)q1(t) - (1/R2)q2(t)     (2.53)
However, we also know that
i2(t) = C2 (d/dt)q2(t)
Substitute Eq. (2.53) for i2(t) to obtain
(d/dt)q2(t) = (1/(C2R2))q1(t) - (1/(C2R2))q2(t)     (2.54)
Lastly, we need a state equation for q1(t). This is obtained by applying Kirchhoff's current
law to the node between R1 and R2. Letting i1(t) be the current through C1, we have
y(t) = i1(t) + i2(t)
Now substitute Eq. (2.52) for y(t), Eq. (2.53) for i2(t), and
i1(t) = C1 (d/dt)q1(t)
for i1(t), and rearrange to obtain
(d/dt)q1(t) = -(1/(C1R1) + 1/(C1R2))q1(t) + (1/(C1R2))q2(t) + (1/(C1R1))x(t)     (2.55)
The state-variable description is now obtained from Eqs. (2.52), (2.54), and (2.55) as
A = [-(1/(C1R1) + 1/(C1R2))   1/(C1R2); 1/(C2R2)   -1/(C2R2)],   b = [1/(C1R1); 0]
c = [-1/R1   0],   D = [1/R1]
FIGURE 2.35 Circuit diagram of system for Drill Problem 2.16.

• Drill Problem 2.16 Find the state-variable description for the circuit depicted in
Fig. 2.35. Choose state variables q1(t) and q2(t) as the voltage across the capacitor and the
current through the inductor, respectively.
Answer:
A = [-1/((R1 + R2)C)   -R1/((R1 + R2)C); R1/((R1 + R2)L)   -R1R2/((R1 + R2)L)],
b = [1/((R1 + R2)C); R2/((R1 + R2)L)]
c = [-1/(R1 + R2)   -R1/(R1 + R2)],   D = [1/(R1 + R2)] •
In a block diagram representation for a continuous-time system the state variables
correspond to the outputs of the integrators. Thus the input to the integrator is the derivative
of the corresponding state variable. The state-variable description is obtained by
writing equations that correspond to the operations in the block diagram. This procedure
is illustrated in the following example.

EXAMPLE 2.23 Determine the state-variable description corresponding to the block diagram
in Fig. 2.36. The choice of state variables is indicated on the diagram.
Solution: The block diagram indicates that
(d/dt)q1(t) = 2q1(t) - q2(t) + x(t)
(d/dt)q2(t) = q1(t)
y(t) = 3q1(t) + q2(t)
Hence the state-variable description is
A = [2  -1; 1  0],   b = [1; 0],   c = [3   1],   D = [0]
FIGURE 2.36 Block diagram of system for Example 2.23.

• TRANSFORMATIONS OF THE STATE

We have claimed that there is no unique state-variable description for a system with a
given input-output characteristic. Different state-variable descriptions may be obtained

by transforming the state variables. This transformation is accomplished by defining a new
set of state variables that are a weighted sum of the original state variables. This changes
the form of A, b, c, and D but does not change the input-output characteristics of the
system. To illustrate this, reconsider Example 2.23. Define new states q1'(t) = q2(t) and
q2'(t) = q1(t). Here we simply have interchanged the state variables: q2'(t) is the output of
the first integrator and q1'(t) is the output of the second integrator. We have not changed
the structure of the block diagram, so clearly the input-output characteristic of the system
remains the same. The state-variable description is different, however, since now we have
A' = [0  1; -1  2],   b' = [0; 1],   c' = [1   3],   D' = [0]
The example in the previous paragraph employs a particularly simple transformation
of the original state. In general, we may define a new state vector as a transformation of
the original state vector, or q' = Tq. We define T as the state transformation matrix. Here
we have dropped the time index (t) or [n] in order to treat both continuous- and discrete-time
cases simultaneously. In order for the new state to represent the entire system's
memory, the relationship between q' and q must be one to one. This implies that T must
be a nonsingular matrix, or that the inverse matrix T^{-1} exists. Hence q = T^{-1}q'. The
original state-variable description is
q̇ = Aq + bx
y = cq + Dx
where the dot over q denotes differentiation in continuous time or time advance in discrete
time. The new state-variable description A', b', c', and D' is derived by noting
q̇' = Tq̇
    = TAq + Tbx
    = TAT^{-1}q' + Tbx
and
y = cq + Dx
  = cT^{-1}q' + Dx
Hence if we set
A' = TAT^{-1},   b' = Tb,   c' = cT^{-1},   D' = D     (2.56)
then

q̇' = A'q' + b'x
y = c'q' + D'x

is the new state-variable description.

EXAMPLE 2.24 A discrete-time system has the state-variable description
A = (1/10)[-1  4; 4  -1],   b = [2; 4],   c = ½[1   1],   D = [2]
Find the state-variable description A', b', c', and D' corresponding to the new states
q1'[n] = -½q1[n] + ½q2[n] and q2'[n] = ½q1[n] + ½q2[n].
Solution: Write the new state vector as q' = Tq, where
T = ½[-1  1; 1  1]
This matrix is nonsingular, and its inverse is
T^{-1} = [-1  1; 1  1]
Hence substituting for T and T^{-1} in Eq. (2.56) gives
A' = [-1/2  0; 0  3/10],   b' = [1; 3],   c' = [0   1],   D' = [2]
Note that this choice for T results in A' being a diagonal matrix and thus separates the state
update into the two decoupled first-order difference equations as shown by
q1'[n + 1] = -½q1'[n] + x[n]
q2'[n + 1] = (3/10)q2'[n] + 3x[n]
The decoupled form of the state-variable description is particularly useful for analyzing systems
because of its simple structure.
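This transformation is easy to verify numerically. The following MATLAB lines are a sketch of such a check (the variable names are arbitrary); they reproduce the matrices obtained above via Eq. (2.56).

>> A = [-0.1, 0.4; 0.4, -0.1]; b = [2; 4]; c = [0.5, 0.5]; D = 2;
>> T = 0.5*[-1, 1; 1, 1];
>> Ap = T*A*inv(T)     % returns the diagonal matrix [-0.5, 0; 0, 0.3]
>> bp = T*b            % returns [1; 3]
>> cp = c*inv(T)       % returns [0, 1]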

• Drill Problem 2.17 A continuous-time system has the state-variable description
A = [-2  0; 1  -1],   b = [1; 1],   c = [0   2],   D = [1]
Find the state-variable description A', b', c', and D' corresponding to the new states
q1'(t) = 2q1(t) + q2(t) and q2'(t) = q1(t) - q2(t).
Answer:
A' = (1/3)[-4  -1; -2  -5],   b' = [3; 0],   c' = (1/3)[2   -4],   D' = [1] •
Note that each nonsingular transformation T generates a different state-variable description
for a system with a given input-output behavior. The ability to transform the
state-variable description without changing the input-output characteristics of the system
is a powerful tool. It is used to analyze systems and identify implementations of systems
that optimize some performance criteria not directly related to input-output behavior,
such as the numerical effects of roundoff in a computer-based system implementation.

2.7 Exploring Concepts with MATLAB

Digital computers are ideally suited to implementing time-domain descriptions of discrete-time
systems, because computers naturally store and manipulate sequences of numbers.
For example, the convolution sum describes the relationship between the input and output
of a discrete-time system and is easily evaluated with a computer as the sum of products
of numbers. In contrast, continuous-time systems are described in terms of continuous
functions, which are not easily represented or manipulated in a digital computer. For
example, the output of a continuous-time system is described by the convolution integral.
Evaluation of the convolution integral with a computer requires use of either numerical
integration or symbolic manipulation techniques, both of which are beyond the scope of
this book. Hence our exploration with MATLAB focuses on discrete-time systems.
A second limitation on exploring signals and systems is imposed by the finite memory
or storage capacity and nonzero computation times inherent to all digital computers. Consequently,
we can only manipulate finite-duration signals. For example, if the impulse
response of a system has infinite duration and the input is of infinite duration, then the
convolution sum involves summing an infinite number of products. Of course, even if we
could store the infinite-length signals in the computer, the infinite sum could not be computed
in a finite amount of time. In spite of this limitation, the behavior of a system in
response to an infinite-length signal may often be inferred from its response to a carefully
chosen finite-length signal.
Both the MATLAB Signal Processing Toolbox and Control System Toolbox are used
in this section.
• CONVOLUTION

Recall that the convolution sum expresses the output of a discrete-time system in terms of
the input and impulse response of the system. MATLAB has a function named conv that
evaluates the convolution of finite-duration discrete-time signals. If x and h are vectors
representing signals, then the MATLAB command y = conv(x, h) generates a vector
y representing the convolution of the signals represented by x and h. The number of
elements in y is given by the sum of the number of elements in x and h minus one. Note
that we must know the time origin of the signals represented by x and h in order to
determine the time origin of their convolution. In general, if the first element of x corresponds
to time n = kx and the first element of h corresponds to time n = kh, then the first
element of y corresponds to time n = kx + kh.

To illustrate this, consider repeating Example 2.1 using MATLAB. Here the first
nonzero value in the impulse response occurs at time n = -1 and the first element of the
input x occurs at time n = 0. We evaluate this convolution in MATLAB as follows:
>> h = [1, 2, 1];
>> x = [2, 3, -2];
>> y = conv(x, h)
y =
     2     7     6    -1    -2
The first element in the vector y corresponds to time n = 0 + (-1) = -1.
In Example 2.3 we used hand calculation to determine the output of a system with
impulse response given by
h[n] = u[n] - u[n - 10]
and input
x[n] = u[n - 2] - u[n - 7]
We may use the MATLAB command conv to perform the convolution as follows. In this
case, the impulse response consists of ten consecutive ones beginning at time n = 0, and
the input consists of five consecutive ones beginning at time n = 2. These signals may be
defined in MATLAB using the commands
>> h = ones(1,10);
>> x = ones(1,5);
The output is obtained and graphed using the commands
>> n = 2:15;
>> y = conv(x,h);
>> stem(n,y); xlabel('Time'); ylabel('Amplitude')
Here the first element of the vector y corresponds to time n = 2 + 0 = 2 as depicted in
Fig. 2.37.

FIGURE 2.37 Convolution sum computed using MATLAB.

• Drill Problem 2.18 Use MATLAB to solve Drill Problem 2.2 for a = 0.9. That is,
find the output of the system with input x[n] = 2{u[n + 2] - u[n - 12]} and impulse
response h[n] = 0.9^n{u[n - 2] - u[n - 13]}.
Answer: See Fig. 2.38. •
• STEP AND SINUSOIDAL STEADY-STATE RESPONSES

The step response is the output of a system in response to a step input and is infinite in
duration in general. However, we can evaluate the first p values of the step response using
the conv function, if the system impulse response is zero for times n < kh, by convolving
the first p values of h[n] with a finite-duration step of length p. That is, we construct a
vector h from the first p nonzero values of the impulse response, define the step u =
ones(1,p), and evaluate s = conv(u,h). The first element of s corresponds to
time kh, and the first p values of s represent the first p values of the step response. The
remaining values of s do not correspond to the step response, but are an artifact of convolving
finite-duration signals.
For example, we may determine the first 50 values of the step response of the system
with impulse response given in Drill Problem 2.7:
h[n] = (-a)^n u[n]
with a = 0.9 by using the MATLAB commands
>> h = (-0.9).^[0:49];
>> u = ones(1,50);
>> s = conv(u,h);
The vector s has 99 values, the first 50 of which represent the step response and
are depicted in Fig. 2.39. This figure is obtained using the MATLAB command
stem([0:49], s(1:50)).
The sinusoidal steady-state response of a discrete-time system is given by the amplitude
and phase change experienced by the infinite-duration complex sinusoidal input signal

FIGURE 2.38 Solution to Drill Problem 2.18.
FIGURE 2.39 Step response computed using MATLAB.

x[n] = e^{jΩn}. The sinusoidal steady-state response of a system with finite-duration impulse
response may be determined using a finite-duration sinusoid provided the sinusoid is sufficiently
long to drive the system to a steady-state condition. To show this, suppose
h[n] = 0 for n < n1 and n > n2, and let the system input be the finite-duration sinusoid
v[n] = e^{jΩn}(u[n] - u[n - nv]). We may write the system output as
y[n] = h[n] * v[n]
     = h[n] * e^{jΩn},    n2 ≤ n < n1 + nv
Hence the system output in response to a finite-duration sinusoidal input corresponds to
the sinusoidal steady-state response on the interval n2 ≤ n < n1 + nv. The magnitude and
phase response of the system may be determined from y[n], n2 ≤ n < n1 + nv, by noting
that
y[n] = H(e^{jΩ})e^{jΩn},    n2 ≤ n < n1 + nv
Take the magnitude and phase of y[n] to obtain
|y[n]| = |H(e^{jΩ})|,    n2 ≤ n < n1 + nv
and
arg{y[n]} - Ωn = arg{H(e^{jΩ})},    n2 ≤ n < n1 + nv
We may use this approach to evaluate the sinusoidal steady-state response of one of
the systems given in Example 2.14. Consider the system with impulse response
h[n] = { 1/2,   n = 0
        -1/2,   n = 1
         0,     otherwise

We shall determine the frequency response and 50 values of the sinusoidal steady-state
response of this system for input frequencies Ω = π/4 and 3π/4.
Here n1 = 0 and n2 = 1, so to obtain 50 values of the sinusoidal steady-state response
we require nv ≥ 51. The sinusoidal steady-state responses are obtained by the MATLAB
commands
>> Omega1 = pi/4; Omega2 = 3*pi/4;
>> v1 = exp(j*Omega1*[0:50]);
>> v2 = exp(j*Omega2*[0:50]);
>> h = [0.5, -0.5];
>> y1 = conv(v1,h); y2 = conv(v2,h);
Figures 2.40(a) and (b) depict the real and imaginary components of y1, respectively, and
may be obtained with the commands
>> subplot(2,1,1)
>> stem([0:51],real(y1))
>> xlabel('Time'); ylabel('Amplitude');
>> title('Real(y1)')
>> subplot(2,1,2)
>> stem([0:51],imag(y1))
>> xlabel('Time'); ylabel('Amplitude');
>> title('Imag(y1)')

FIGURE 2.40 Sinusoidal steady-state response computed using MATLAB. The values at times 1
through 50 represent the sinusoidal steady-state response.

The sinusoidal steady-state response is represented by the values at time indices 1 through
50.
We may now obtain the magnitude and phase responses from any element of the
vectors y1 and y2 except for the first one or the last one. Using the fifth element, we use
the commands
>> H1mag = abs(y1(5))
H1mag =
    0.3827
>> H2mag = abs(y2(5))
H2mag =
    0.9239
>> H1phs = angle(y1(5)) - Omega1*5
H1phs =
   -5.8905
>> H2phs = angle(y2(5)) - Omega2*5
H2phs =
  -14.5299
The phase response is measured in radians. Note that the angle command always returns
a value between -π and π radians. Hence measuring phase with the command
angle(y1(n)) - Omega1*n may result in answers that differ by integer multiples
of 2π when different values of n are used.
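One way to remove this ambiguity of a multiple of 2π, offered here only as a sketch rather than a procedure from the text, is to wrap the measured value back toward the principal interval around zero:

>> phs = angle(y1(5)) - Omega1*5;
>> phs = phs - 2*pi*round(phs/(2*pi))   % removes the extra multiples of 2*pi; here about 0.3927 rad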

• Drill Problem 2.19 Evaluate the frequency response and 50 values of the sinusoidal
steady-state response of the system with impulse response
h[n] = { 1/4,   0 ≤ n ≤ 3
         0,     otherwise
at frequency Ω = π/3.
Answer: The steady-state response is given by the values at time indices 3 through 52 in
Fig. 2.41. Using the fourth element of the steady-state response gives |H(e^{jπ/3})| = 0.4330
and arg{H(e^{jπ/3})} = -1.5708 radians. •
• SIMULATING DIFFERENCE EQUATIONS

In Section 2.4, we expressed the difference-equation description for a system in a recursive
form that allowed the system output to be computed from the input signal and past outputs.
The filter command performs a similar function. Define vectors a = [a0, a1, ..., aN]
and b = [b0, b1, ..., bM] representing the coefficients of the difference equation given
by Eq. (2.29). If x is a vector representing the input signal, then the command y =
filter(b,a,x) results in a vector y representing the output of the system for zero
initial conditions. The number of output values in y corresponds to the number of input
values in x. Nonzero initial conditions are incorporated by using the alternative command
syntax y = filter(b,a,x,zi), where zi represents the initial conditions required
by filter. The initial conditions used by filter are not the past values of the
output, since filter uses a modified form of the difference equation to determine the
output. These initial conditions are obtained from knowledge of the past outputs using
FIGURE 2.41 Sinusoidal steady-state response for Drill Problem 2.19.

the command zi = filtic(b,a,yi), where yi is a vector containing the initial
conditions in the order [y[-1], y[-2], ..., y[-N]].
We illustrate use of the filter command by revisiting Example 2.16. The system
of interest is described by the difference equation
y[n] - 1.143y[n - 1] + 0.4128y[n - 2] = 0.0675x[n]
                                        + 0.1349x[n - 1] + 0.0675x[n - 2]     (2.57)
We determine the output in response to zero input and initial conditions y[-1] = 1,
y[-2] = 2 using the commands
>> a = [1, -1.143, 0.4128];
>> b = [0.0675, 0.1349, 0.0675];
>> x = zeros(1, 50);
>> zi = filtic(b,a,[1, 2]);
>> y = filter(b,a,x,zi);
The result is depicted in Fig. 2.22(b). We may determine the system response to an input
consisting of the average January temperature data with the commands
>> load Jantemp;
>> filttemp = filter(b,a,Jantemp);
Here we have assumed the average January temperature data are in the file
Jantemp.mat. The result is depicted in Fig. 2.22(g).

• Drill Problem 2.20 Use filter to determine the first 50 values of the step response
of the system described by Eq. (2.57) and the first 100 values of the response to
the input x[n] = cos(πn/4) assuming zero initial conditions.
Answer: See Figs. 2.22(a) and (d). •

The command [h, t] = impz(b,a,n) evaluates n values of the impulse response
of a system described by a difference equation. The difference-equation coefficients
are contained in the vectors b and a as for filter. The vector h contains the values of
the impulse response and t contains the corresponding time indices.

• STATE-VARIABLE DESCRIPTIONS

The MATLAB Control System Toolbox contains numerous routines for manipulating
state-variable descriptions. A key feature of the Control System Toolbox is the use of LTI
objects, which are customized data structures that enable manipulation of LTI system
descriptions as single MATLAB variables. If a, b, c, and d are MATLAB arrays representing
the A, b, c, and D matrices in the state-variable description, then the command
sys = ss(a,b,c,d,-1) produces a LTI object sys that represents the discrete-time
system in state-variable form. Note that a continuous-time system is obtained by omitting
the -1, that is, using sys = ss(a,b,c,d). LTI objects corresponding to other system
representations are discussed in Sections 6.9 and 7.10.
Systems are manipulated in MATLAB by operations on their LTI objects. For example,
if sys1 and sys2 are objects representing two systems in state-variable form,
then sys = sys1 + sys2 produces the state-variable description for the parallel
combination of sys1 and sys2, while sys = sys1 * sys2 represents the cascade
combination.
The function lsim simulates the output of a system in response to a specified input.
For a discrete-time system, the command has the form y = lsim(sys,x), where x
is a vector containing the input and y represents the output. The command h =
impulse(sys,N) places the first N values of the impulse response in h. Both of these
may also be used for continuous-time systems, although the command syntax changes
slightly. In the continuous-time case, numerical methods are used to approximate the
continuous-time system response.
Recall that there is no unique state-variable description for a given system. Different
state-variable descriptions for the same system are obtained by transforming the state.
Transformations of the state may be computed in MATLAB using the routine ss2ss.
The state transformation is identical for both continuous- and discrete-time systems, so
the same command is used for transforming either type of system. The command is of the
form sysT = ss2ss(sys,T), where sys represents the original state-variable description,
T is the state transformation matrix, and sysT represents the transformed state-variable
description.
Consider using ss2ss to transform the state-variable description of Example 2.24
A = (1/10)[-1  4; 4  -1],   b = [2; 4],   c = ½[1   1],   D = [2]
using the state transformation matrix
T = ½[-1  1; 1  1]
The following commands produce the desired result:
>> a = [-0.1, 0.4; 0.4, -0.1]; b = [2; 4];
>> c = [0.5, 0.5]; d = 2;
>> sys = ss(a,b,c,d,-1);   % define the state-space object sys
>> T = 0.5*[-1, 1; 1, 1];
>> sysT = ss2ss(sys,T)
a =
             x1         x2
   x1  -0.50000          0
   x2         0    0.30000
b =
             u1
   x1   1.00000
   x2   3.00000
c =
             x1         x2
   y1         0    1.00000
d =
             u1
   y1   2.00000
Sampling time: unspecified
Discrete-time system.
This result agrees with Example 2.24. We may verify that the two systems represented by
sys and sysT have identical input-output characteristics by comparing their impulse
responses via the following commands:
>> h = impulse(sys,10); hT = impulse(sysT,10);
>> subplot(2,1,1)
>> stem([0:9],h)
>> title('Original System Impulse Response');
>> xlabel('Time'); ylabel('Amplitude')
>> subplot(2,1,2)
>> stem([0:9],hT)
>> title('Transformed System Impulse Response');
>> xlabel('Time'); ylabel('Amplitude')
Figure 2.42 depicts the first 10 values of the impulse responses of the original and transformed
systems produced by this sequence of commands. We may verify that the original
and transformed systems have the (numerically) identical impulse response by computing
the error err = h - hT.

• Drill Problem 2.21 Solve Drill Problem 2.17 using MATLAB. •


FIGURE 2.42 Impulse responses associated with the original and transformed state-variable
descriptions computed using MATLAB.

2.8 Summary

There are many different methods for describing the action of a LTI system on an input
signal. In this chapter we have examined four different descriptions for LTI systems: the
impulse response, difference- and differential-equation, block diagram, and state-variable
descriptions. All four are equivalent in the input-output sense; for a given input, each
description will produce the identical output. However, different descriptions offer different
insights into system characteristics and use different techniques for obtaining the output
from the input. Thus each description has its own advantages and disadvantages for solving
a particular system problem.
The impulse response is the output of a system when the input is an impulse. The
output of a linear time-invariant system in response to an arbitrary input is expressed in
terms of the impulse response as a convolution operation. System properties, such as causality
and stability, are directly related to the impulse response. The impulse response also
offers a convenient framework for analyzing interconnections of systems. The input must
be known for all time in order to determine the output of a system using the impulse
response and convolution.
The input and output of a LTI system may also be related using either a differential
or difference equation. Differential equations often follow directly from the physical principles
that define the behavior and interaction of continuous-time system components. The
order of a differential equation reflects the maximum number of energy storage devices in
the system, while the order of a difference equation represents the system's maximum
memory of past outputs. In contrast to impulse response descriptions, the output of a
system from a given point in time forward can be determined without knowledge of all
past inputs provided initial conditions are known. Initial conditions are the initial values
of energy storage or system memory and summarize the effect of all past inputs up to the

starting time of interest. The solution to a differential or difference equation can be separated
into a natural and forced response. The natural response describes the behavior of
the system due to the initial conditions. The forced response describes the behavior of the
system in response to the input alone.
The block diagram represents the system as an interconnection of elementary operations
on signals. The manner in which these operations are interconnected defines the
internal structure of the system. Different block diagrams can represent systems with identical
input-output characteristics.
The state-variable description is a series of coupled first-order differential or difference
equations representing the system behavior, which are written in matrix form. It
consists of two equations: one equation describes how the state of the system evolves and
a second equation relates the state to the output. The state represents the system's entire
memory of the past. The number of states corresponds to the number of energy storage
devices or maximum memory of past outputs present in the system. The choice of state is
not unique; an infinite number of different state-variable descriptions can be used to represent
systems with the same input-output characteristic. The state-variable description
can be used to represent the internal structure of a physical system and thus provides a
more detailed characterization of systems than the impulse response or differential (difference)
equations.

FURTHER READING

1. A concise summary and many worked problems for much of the material presented in this
and later chapters is found in:
• Hsu, H. P., Signals and Systems, Schaum's Outline Series (McGraw-Hill, 1995)
2. The notation H(e^{jΩ}) and H(jω) for the sinusoidal steady-state response of a discrete- and a
continuous-time system, respectively, may seem unnatural at first glance. Indeed, the alternative
notations H(Ω) and H(ω) are sometimes used in engineering practice. However,
our notation is more commonly used as it allows the sinusoidal steady-state response
to be defined naturally in terms of the z-transform (Chapter 7) and the Laplace transform
(Chapter 6).
3. A general treatment of differential equations is given in:
• Boyce, W. E., and R. C. DiPrima, Elementary Differential Equations, Sixth Edition (Wiley,
1997)
4. The role of difference equations and block diagram descriptions for discrete-time systems
in signal processing is described in:
• Proakis, J. G., and D. G. Manolakis, Introduction to Digital Signal Processing (Macmillan,
1988)
• Oppenheim, A. V., and R. W. Schafer, Discrete-Time Signal Processing (Prentice Hall, 1989)
5. The role of differential equations, block diagram descriptions, and state-variable descriptions
in control systems is described in:
• Dorf, R. C., and R. H. Bishop, Modern Control Systems, Seventh Edition (Addison-Wesley,
1995)
• Phillips, C. L., and R. D. Harbor, Feedback Control Systems, Third Edition (Prentice Hall,
1996)
6. State-variable descriptions in control systems are discussed in:
• Chen, C. T., Linear System Theory and Design (Holt, Rinehart, and Winston, 1984)
• Friedland, B., Control System Design: An Introduction to State-Space Methods (McGraw-Hill,
1986)
A thorough, yet advanced, treatment of state-variable descriptions in the context of signal
processing is given in:
• Roberts, R. A., and C. T. Mullis, Digital Signal Processing (Addison-Wesley, 1987)
PROBLEMS
2. 1 A discrete-tíme LTI system has the impulse re- (k) y[nl = (u[n + 10] - 2u[n + 51 + uln - 6})
sponse l1[11J depicted in Fig. P2.1 (a). Use linear- :1< cos( ½í'Tn)
ity and time invariance to determine the system {l) = u[n] * ~p=oô[n - 2p]
y[n]
output y[,i] if the input x[ n] is: (m) y[n] = j3'1u[n] * Lp oô[n - 2pl, 1J31 < 1
(a) x[n] = 2o[n] - 8[n - 1] (n) yf nl = u[n - 2] * h[n], where hfn] =
(b) x[n] = u[n] - u[n - 31 '}'n, n < O, YI > 1 1

(e) x[n] as given in Fig. P2. l(b) T/


11
, n > O, 1111 < 1
(o) y[n] (½) 11u[n + 21 * hln], where h[n] is
=
h[n] defined in part (n)
3 2.3 Consider the discrete-time signals depicted in
2
Fig. P2.3. Evaluate the convolution sums indi-
l --
cated below.
•••
;
'
. ..
•••
n (a) m[nJ = xf nl * z[n]
-2 -1_ ,
1
l (b) m[n] = xln 1 * y{n]
i
(e) m[n] = x[nl * f[n]
(a)
(d) mini= x[n] * g[n]
x[n] (e) m[n] = y[n] * z[n]
(f) m[n] = y[n] * g[n]
2 (g) m[n] = y[n] * w[nl
(h) m[n] = y[n] * /[n]
-1 1 •••
(i) m[n] = z[n] * g[n]
l 2 3
-1 (j) m[n] = w[n] * g[n]
(b) (k) m[n] = f[n] * g[n]
2.4 A LTI system has impulse response h(t) depicted
FIGURE P2.l in Fig. P2.4. Use linearity and time invariance
to determine the system output y(t) if the input
2.2 Evaluate the discrete-timc convolution sums x(t) is:
given below. (a) x(t) = 28(t + 1) - 8(t - 1)
(a) y[n] = u[nl * uf n - 3] (b) x(t) = 5(t - +
é5(t - 2) + S(t - 3)
1)
{b) y[n} = lnu[ -n + 2] * u[n - 3] (e} x(t) = Lp--oo(-l)PS(t - 2p)
(e) y[nJ = (f}»u[n - 2j * ufnJ 2.5 EvaJuate the continuous-time convolution inte-
(d) ylnl = cos(½1rn)u[n] * u[n - 1l grais given below.
(e) ylnl = cos(½1rn) * 2"u[-n + 2j (a} y(t) = u(t + 1) * u(t - 2)
{f) y[n] = cos(½1rn) * (½)nu[n - 2] (b) y(t) = e- 2 tu(t) * u(t + 2)
(g) y[n] = J3nu[nl * u[n - 3], 1J31 < 1 (e) y(t) = cos(1rt)(u(t + 1) - u(t - 3)) * u(t)
(h) y[n] = J3nuln] * anu[n], IJ31 < 1, lal < 1 (d) y(t) = (u(t + 2) - u(t - 1)) * u(-t + 2)
(i) y[n} = (u[n + 10] - 2u[n + 51 + u[n - 6]) (e) y(t) = (tu(t} + (10 - 2t)u(t - 5) -
* u[n - 2] (10 - t)u(t - 10)) * u(t)
(j) yfn] = (u[n + 10] - 2uf n + 5] + u[n - 6]) (f) y(t) = (t + 2t2 )(u(t + 1) - u(t - 1)) *
* J3 u[n},
11
IJ31 < 1 lu(t + 2)

FIGURE P2.3 Signals x[n], y[n], z[n], w[n], f[n], and g[n] for Problem 2.3.

h(t) 2.6 Consider the continuous-time signals depicted


in Fig. P2.6. Evaluate the convolution integrais
indicated below.
(a) m(t) = x(t) * y(t)
-----1,~--'--~--t
-1 1 (b) 1n(t) = x(t) * z(t)
FIGURE P2.4
(e) m(t) = x(t) * f(t)
(d) m(t} = x(t) * b(t)
(e) m(t) = * a(t)
x(t)
(g} y(t) = cos( 1rt)(u(t + 1) - u(t - 3)) *
(u(t + 2) - u(t - 1)) (f) m(t) = y(t) * z(t)
(h) )'(t) = cos( m)(u(t + 1) - tJ.(t - 3)) *
(g) m(t) = y(t) * w(t)
e- 2 tu(t) (h) m(t) = y(t) * g(t)
(i} y(t) = (2ô(t) + ô(t - 5)) * u(t + 1) (i) m(t) = y(t) * c(t)
(j) y(t) = (5(t + 2) +
õ(t - 5)) * (tu{t) + (j) m(t) = z(t) * f(t)
(10 - 2t)u(t - 5) - (10 - t)u(t - 10)) (k) m(t) = z(t) * g(t)
(k) y(t) = e-ytu(t) * (u(t + 2) - u(t - 2)) (1) m(t) = z(t) * b(t)
{J} y(t} = e-ytu(t) * }:; 0 {½)P8(t - p) (rn) m(t) = zv(t) * g(t)
(m) y(t) = (2ô(t) + ô(t - 5)) * Lp=-o(½)PB(t ·- p) (n) m(t} = w(t) * a(t)
(n) y(t) = e--Y'u(t) * e13tu(-t) y > O, f3 > O (o) m(t) = /(t) * g(t)
(o) y(t) = u(t - 1) * h(t), where h(t) = (p) m(t) = /(t) * c(t)
e 2 ', t < O (q) m(t) = f(t) * d(t)
e- 3 ,, t > O (r) m(t) = x(t) * d(t)

FIGURE P2.6 Signals x(t), y(t), z(t), w(t), f(t), g(t), b(t), c(t), d(t), and a(t) for Problem 2.6.

2.7 Use the definirion of the convolution sum to


h(t)
prove the following properties:
(a) Distributive: x{n] * (h[nl + g[n]) = 1/A
x[n) * h[n) + x[n] * g[n)
(b) Associarive: x[n] * (h[n) * g[n]} = -------t
(x[n] * h[n]) * g[n]
(e) Commutative: x[n] * h[n] = h[n] * x[n] -1/A
2.8 A LTI system has the impulse response depicted
in Fig. P2.8. FIGURE P2.8

~~·.----
·j; ...

(a)

"·'· • 1. ••

..
x[nJ _,...., . h 1[n] ----.
• h fn] ___:'.j
.: ......... •$>'

e:·· .,........•-··· 1...--

.,. 3, ;:· '


"·· ..,;......:: .:......

(b)

x(t) --+-...,._. ~··~ ..


h1(t)
+
·" ._... -
., 'h3(t) · ,. h4(t) ~ •• y[nl
+ •
~ ... :~ .:.

(e)

FIGURE P2.9

(a) Express the system output y(t) as a function (b) h(t) = h1(t) * h2(t) + h3 (t) * h4 (t)
of the input x(t). (e) h(t) = h1(t) * {h2(t) + h_-,,(t) + h 4 (t)}
(b) Identify the mathematical operation per- 2 .11 An interconnection of LTI systems is depicted
formed by this system in the limít as ~---'), O. in Fig. P2.11. The impulse responses are h1 [n]
(e) Let g(t) = lim,l_.0 h(t). Use the results of = (½)n(u(n + 21 - u[n - 3]), h 2 [n] = 8[n], and
(b} to express the output of a LTI system h3 [n] = u[n - 1]. Let the impulse response of
with impulse response the overall system from x[ n] to yl n] be denoted
hn(t) = g(t) * g(t) * · · · * g(t) as h[n].
n times (a) Express h[n] in terms of h 1 ln], h2 [n], and
as a function of the input x(t). h3 [nJ.
2.9 Find the expression for the impulse response re- (b) Evaluate h[n] using the results of (a).
lating the input xf nl or x(t) to the output yf nl ln parts (c)-{e) determine whether the system
or y(t) in terms of the impulse response of each corresponding to each impulse response is (i)
subsystem for the LTI systems depicted in: stable, (ii) causal, and (iii) memoryless.
(a) Fig. P2.9(a) (e) h 1 [n]
(b) Fig. P2.9(b)
{e) Fig. P2.9(c)
2.10 Let h 1(t), h 2 (t), h3 (t), and h 4 (t) be impulse re-
sponses of LTI systems. Construct a system with
impulse response h(t) using h 1(t), h 2 (t), h 3 (t),
and h 4 (t) as subsystems. Draw the interconnec-
tion of systems required to obtain:
(a) h(t) = h 1(t) + {h 2 (t) + h3 (t)} * h4(t) FIGURE P2.l 1

2.12 For each impulse response lístcd belo"'·, deter-


mine whether the corresponding systern is (i)
x(t) +
- y(t) ! L ::::!::::C
memoryless, (ii) causal, and (iii) stable.
(a) h(t) = e- 21 '
(a)
(b) h(t) = e 2 ,u(t - 1)
(e) h(t) = u(t + 1) - 2u(t - 1)
(d) h(t) = 38(t) +
(e) h(t) = cos( 1rt)u(t) x(t) +
(f} h[nJ = 2 u[-nj
11

(g) h[n] = e2 ,,u[n - lJ


(h) h{n] = cos(~?Tn){ulnl - u[n - 10]} (b)
(i) h[n] = 2u[n] - 2uln - 1] FIGURE P2.) 6
(j) h[nl = sin(½1rn)
(k) h[nl = B[n] + sin(1rn)
2.17 Determine the natural response for the sys-
'~2.13 Prove that absolute summability of the impulse tems described by the followíng differential
response is a necessary co11dition for stability of .
equat1ons:
a discrete-time system. Hint: Finda bounded in-
d
put xln] such that the output ar some rjme n0 (a) Sdt J1(t) + 1O),{t) = 2x(t), J'(O) = 3
satisfies [y[no] l = Lk--oolhlk 11.
2.14 Evaluate the step response for the LTI systems d2 d
represented by the following impulse responses: (b) dt2 y(t) + 5 dt y(t) + 6y(t) = 2x(t) +
(a) h{nl = (½}nu[n] d d
(b) h In] = 8[ n] - B[n - 1] dt x(t}, y{O} = 2, dt y(t} t=O = 1
(e) h[n] == (-1 )''{u[n + 21 - u[n - 3 \}
d2 d
(d) l1ln] == ulnl (e) dt2 y(t) + 3 dt y(t) + 2y(t) = x(t) +
(e) h(t) = e- 1 1
'
d d
(f) h(t) = 5(t) - 5{t - 1) dt x(t), y(O) = O, dt y(t) r==o = 1
(g) h(t) = u(t + 1) - u{t - 1)
d2 d d
(h) h(t) = tu(t) (d) dt 2 y(t) + 2 dt y(t) + y(t} = dt x(t),
2.15 Evalt1ate rhe frequency response for the LTI
d
systems represe11ted by the following impulse y(O) = 1, dt y(t) = 1
responses: t=O

(a) h[n] = (i)1'u[n] d2 d


(b) h[ n] = BI n] - 5[ n - 1] (e) dt 2 y(t) + 4y(t) = 3 dt x(t), y(O) = -1,

(e} hln] = (-1 )n{t1[n + 2} - tt[n - 3]} d


(d) h[n] = (.9)nei<rr12 )nu[n)
dt y(t) =1
t=O

(e) h(t) = e-lrl d2 d d


(f) h(t) = -5(t + 1) + S(t) - S(t - 1) (f) dt2 y(t) + 2 dt y(t) + 2y(t) - dt x(t),
(g) h(t) == cos( ?Tt){1,(t + 3) - u(t - 3)} d
y(O) = 1, dt y(t) =O
(h) h(t) == e21u(-t) t=O
2.16 Write a differential equation description relat- 2.18 Determine the natural response for the systems
ing the output to the input of the following elec- described by the following difference equations:
trical circuits: (a) yínJ - ay[n - 1] = 2x[n], y[-1] = 3
(a) Fig. P2.16(a) (b) yln] -tgy[n - 2] = x[n - 1], y[-1] = 1,
(b) Fig. P2.16(b) y[-2] = -1

(e) y[n] =
-!yín - 1] - ½y[n - 2] = x[n] + (ií) x[n] = (})nu[n]
xln - 1], y[-1] = O, y[-2] = 1 (iii) xf n] = ei<rrt4 )nufn]
(d) y[n] + {6 y[n - 2] = xln - 11, y[-1] = 1, (iv) x[n] = (l)nu[n]
y[-2] = -1 (d) y[n] + y[n - 1] + ½yln - 2] = x[n] +
2x[n - 1}
(e) y[n] + y[n - 1] + ½y[n - 2J = x[n] +
2x[n - 1], y[-1] = -1,y[-21 = 1 . (i} x[n] = u[11]
2.19 Determine the forced response for the systems (ií) x[n] = (-½)»u[nl
described by the following differential equa- 2.21 Determine the output of the systems described
tions for the given inputs: by the following differential equations with in-
put and initial conditions as specified:
d
(a) 5 dt y(t) + 10y(t) = 2x(t) d
(a) dt y(t) + 10y(t} = 2x(t), y(O) = 1,
(i} x(t) = 2u(t)
x(t) = u(t}
(ii) x(t) = e- 1u(t)
(iii) x(t) = cos(3t)u(t} d2 d d
(b) dt 2 y(t) + 5 dt y(t) + 4y(t) = dt x(t),
d2 d d
(b) dt 2 y(t) + 5 dt y(t) + 6y(t} = 2x(t) + dt x(t) d
y(O) = O, dt y(t) = 1, .x-(f) = e 2
tu(t)
(i) x(t) = -2u(t) t=O

(íi} x(t) = 2e-tu(t} d2 d


(e) dt 2 y(t) + 3 dt y(t} + 2y(t) = 2x(t),
(iii) x(t) = sin(3t)u(t)
(iv) x(t) = se- 2 tu(t) d
. y(O} = -1, dt y(t) = 1, x(t) = cos(t)u(t)
d2 d d t=O
(e) dt 2 y(t} + 3 dt y(t) + 2y(t) = x(t) + dt x(t)
d2 d
(i) x(t) = Su(t)
(d) dt 2 y(t} + y(t) = 3 dt x(t), y(O) = -1,
(ii) x(t) = e21u(t) d
-d y(t) = 1, x(t} = 2e- u(t)1

(iii} x(t} = (cos(t) + sin(t))u(t) t t=O

(iv) x(t) = e-'u(t) 2.22 Determine the output of the systems described
by the following difference equations with input
d2 d d
(d) dt 2 y(t) + 2 dt y(t) + y(t) = dt x(t} and initial conditions as specified:
(a) y[n] - 2l y[n - 1] = 2x[n], y[-1} = 3,
(i) x(t) = e- 'u(t)
3
x[n] = 2(-½)nu[n]
(ii) x(t} = 2e- 1u(t)
(b) y[n] - ¼y[n - 21 = x[n - l], y[-11 = 1,
(iii) x(t) = 2 sin(t)u(t) y[-2] = O, x[n] = u[nl
2.20 Determine the forced response for the systems
described by the following difference equations (e) y[n] - Jy[n - 1] - ½y[n - 21 =
x[n] +
for the given inputs: x[n - 1], y[-1] = 2, y[-2] = -1,
x[nl = 2 11u[n]
(a) y[n] - ~yín - 1] = 2x[n]
(i) x [n] = 2u [ n] (d) y[n] - + !y[n - 2] =
¾y[n - 1]
(ii) x[n] = -(½)nu[nl 2x[n], y[-1] = 1, y[-2] = -1, xln] =
2u[n]
(iii) x[n] = cos(½1rn)ul11]
9 2.23 Find difference-equation descriptions for the
(b) y[1tJ - 16y[11 - 21 = x[n - 1]
four systems depicted in Fig. P2.23.
(i) x[n) = u[n} 2.24 Draw direct form I and direcr form li im-
(ii) x[n] = -(½)11u[n] plementations for the following difference
.
(iií) x[n] = (¾) u[nl 11
equat1ons:
(e) y[n] - ¼y[n - 1] - !y[n - 2] = x[nj + (a) y[n} - lyln - 1] = 2x[n}
x[n - 1] (b) y[n] + ¼y[n - 1] - ½yln - 2] = x[n} +
(i) x[n] = -2u[n] x[n - 1]
150 CHAPTER 2 • TtME-D01\IAIN REPRESENTATIONS FOR LINEAR Til\tE-INVARl,\.JU SYSTEI\IS

x(t) .l: .. y(t)

--;.-v[n]
, t ::
7
~
J
(a)
-2 ..............,
(a) x(t) ··"'f. "11 • y(t)

x[nl I:
••::.:>,

s •· ~
..
-: -:•:
s
....
• yfnl
tf·
t • J t • J 2

--4l l
4
(b)

,2 (b)

x[nl 1• s l
.. :;[ y[n] -•
...
•·J ...............
l
,.. .
--2l ;
.i x(t} -2 -y(t)

-?
- -1
4 ~·s- '11--J
(e)
-3
-21 (e)
• .. s P2.27
l s s
l FIGURE

}:= l
x[nJ •· l: .. ylnJ
t • •
3

--8 l
x[n] l .... I: s
l
.. lo;. y[nl
(d)
t •
]
'

FIGURE Pl.23 -2
(a)

(e) y[n] - iy[n - 2] = 2x[n] + x[n - l J


(d) y[n] + !)'[n - 1] - y/n - 3] = 3xtn - 1]
--yínl
+ 2xín - 21
2.25 Shc>\V that the direct form l and 11 implemen- - -,
E- S - -
-
tations depicted in Fig. 2.27 implement the
second-order integral equarion given by Eq. (b)
(2.42).
2.26 Convert rhe fc.>llowing differential equatíons to
integral equatio11s and draw direcr forn1 I and
xln1 fl
'
l" ~ ! 3

'f:-~-t- s --1:-- s -yftt]


direct form II implementations ,.1f thc corre- 1
....... _ -1 - -l
spondíng systems: 8 4 2
d
(a) dt y(t) + 10y(t) = 2x(t) (e)

d2 d
(b) dt 2 y(t) + 5 dt y(t) + 4y(t)
d
= dt x(t) -
? -1

xln1-:t S r • E- s
(e)
d2
dt1. y(t) + y(t) = 3 dt x(t)
d "'. ~

l l
-~-y[nl

4
3
d d d
(d) dt 3 y(t) + 2 dt y(t) + 3y(t) = x(t) + 3 dt x(t) 1
6
(d)
2.27 Find differential-equation descriptions for rhe
three systems depicted in Fig. P2.27. FIGURE P2.28
Problems 151

2.28 Determine a state-variable description for the l


3
Ü -1
four discrete-time systems depicted in Fig. (a) A = 1
, b = ,
O -2 2
P2.28.
e= [1 1], D= [O]
2.29 Dra w block diagram system representations
corresponding to the following discrete-time 1 1 -1
stare-variable descriptions. (b) A= b =
1 O ' 2 '

(a) A=
O -½ , b=
2
, e= [1 -1],
e= [O -1], D= [O]
-.1 o
1
0 1 -1 o
D= [O] (e) A= O -1 ' b = 5 '
1 _l1 1 e= [1 O], D= [O]
(h) A= b= e= [1 -1],
13 o ' 2 '
(d) A= 1 -2 b = 2
D= [O] 1 1 , 3 '

O -½ b = O e = [1 1 ], D= [O]
(e) A = • -1 ' 1 , 2.32 Let a discrete-time system have the state-
3
e= [1 O], D= [1] variable description

O O 2 1 --21 1
(d) A= b = A= h=
O 1 ' 3 ' - o '
1
3 2 '
e= [1 -1], D= [O] e= [1 -1], D= [O]
2.30 Deter1nine a state-variable description for the (a) Define new states q; [n] = 2q1 lnl, qíln l
five continuous-time systems dcpicted in Fig. 3q2 [n]. Find the new state-variahle descrip-
P2.30. . A' , b' , e ' , D' .
t1on
2.31 Draw block diagram system representations (b) Define new states qi[nl = 3q2[n], qílnl =
corresponding to the following continuous-time 2q 1 [n ]. Find the new state-variable descrip-
state-variable descriprions: tion A', b', e', D'.

3 x(t)

J f½ t2
x(t) l • l: J • •E
2 y{t)
f
l ~ • • f y(t)

l
- L,
t •
-1
~ •
(a)
-2 3
(b)

3
2

• { J',
x(t)
1€.\__f - I: f • ..l;
. w;.
- y(t)
.
-2
t •4 f '
-1

•-3
(e)

R
+ y(t) -
L
-
y(t)

R
x(t) t e e L

(d) (e)

FIGURE P2.30
152 CHAPTER 2 • TIME•DOMAIN REPRESENTATIONS FOR LINEAR TIME-INVARIANT SYSTEMS

(e) Define new states q;[n] = q 1 [n] + q2 [n], pression for the system output derived in (b)
qí[nl = q1[nl - q2 [n]. Find the new state- reduces to x(t) * h(t) in the limit as â goes
variable description A', b', e'; D'. to zero.
2.33 Consider the continuous-time system depicted
in Fig. P2.33.
gt:,(t)
{a) Find the state variable description for this
system assuming the states q 1(t) and q2 (t) 1/.ó.
are as labeled.
(b) Define new states q;(t) = q 1(t) - q2(t),
qí(t) = 2q 1 (t). Find the new state-variable ' t
-t::./2 D,.12
description A', b', e', D'.
(a)
(e) Draw a block diagram corresponding to the
new state-variable descriptiort in (b).
x(t)
(d) Define new states qi(t) = (l/b 1)q1(t), q2(t)
= b2q1 (t) - h1q2 (t). Find the riew state-vari-
able description A', b', e', D'.
(e) Draw a block díagram corresponding to the
new state-variable description in (d).

x<t>
x(-t::.)
x(O)
~t,
x(A)
Í .. y(t) x(2/l)
-.L....---J_--l----l----i.~-+---1,....!!~--- t
-.1.

(b)

FIGURE P2.34

FIGURE P2.33
)~2.35 ln this problem we use linearity, time invari-
ance, and representation of an impulse as the
*2.34 We may develop the convolution integral using limiting form of a pulse to obtain the impulse
linearity, time invariance, and the limiting form response of a simple RC circuit. The voltage
of a stairstep approximation to the input signal. across the capacitor, y(t), in the RC circuit of
Define gll.(t) as the unir area rectangular pulse Fig. P2.35{a) in response to an applied voltage
depicted in Fig. P2.34(a). x(t) = u(t) is given by
(a) A stairstep approximation to a signal x{t) is s(t) = {1 - e- ttRc}u(t)
depicted in Fig. P2.34(b). Express x(t) as a
weighted sum of shifted pulses g~(t). Does (See Drill Problems 2.8 and 2.12.) We wish to
the approximation quality improve as ~ find the impulse response of the system relating
decreases? the input voltage x(t) to the voltage across the
(b) Ler the response of a LTI system to an input capacitor y(t).
gt:..(t) be ha(t). If the input to .this system is (a) Write the pulse input x(t) = gt:..{t) depicted
.x{t), find an expression for the output of this in Fig. P2.35(b) as a weighted sum of step
system in terms of ht:..(t). functions.
(e) ln the limit as ti goes to zero; g,i(t) satisfies (b} Use linearity, time invariance, and knowl-
the properties of an impulse. and we may edge of the step response of this circuit to
interpret h(t) = lima-oha(t) as the impulse express the output of the circuit in response
response of the system. Show that the ex- to the input x(t) = g 6 (t) in terms of s(t).
Problenis 153

(e) ln the limit as A ~ O the pulse input g~(t) (iii) x(t) = u(t) - 2u(t - l) + u(t - 2)
approaches an impulse. Obtain the impulse (iv) x(t) = u(t - a) - u(t - a - 1)
response of the circuit by taking the limitas
(e) Show that rx,,(t) = r,,x(-t).
~~O of thc c>utpt1t obtained i11 (b). Hint:
Use the definition of the derivative (f) Show that r,..,..(t) = rxx(-t).

.l .:i
zt+- -zt--
d 2 2
- z(t) = lim - - - - - - - -
dt ~. . . o ~
• Computer Experiments

2.37 Repeat Problcm 2.3 usi11g the MATLAB com-


gc,.(t) mand conv.
R
2.38 Use MATI,AB to repear Example 2.6.
1/L\ i
+ 2.39 Use MATLAB to evaluate the first 20 values of
x(t) + e: y(t) the step response for the systems in Problem
2.14(a)-(d).
;
'
t
-L\/2 1112 2.40 Consider the three moving-average systems de-
fined in Example 2.6.
(a) {b) (a) Use MATLAB to evaluate and pior 50 val-
fIGlJRE P2.35 ues of the sinusoidal steady-state response
ar frequencics of n = rr/3 and íl = 2rr/3 for
each system.
::-2.36 The cross-correlation l)et,veen r,vo real signals (h) Use the results of (a) to determine the mag-
x(t) and y(t) is defined as nitude and phase response of each system at
frequencies n = rr/3 and !l = 21r/3.
fxy(t) = J"'.,_ x( T)y( T - t) dT (e) Obtai11 a closed-form expressic>11 for the
magnitude response of each system and plot
This is the area under thc product of x(t) a11d a it on - rr < il s 7T t1sing MATLAB.
shifted version of y(t}. Note that the indepe11-
2.41 Considcr the two systems having impulse
dent varial)le T - t is the negative of rhat found
responses
in the dcfi11itio11 of convolution. The autocor-
relatio11, rx_"(t), of a signal x(t) is obraí11ed by !, Os; n < 3
replacing y(t) wírh x(t). O, otherwise
(a) Show that rxy(t) = x(t) * y(-t).
¼, n = O, 2
(b) Derive a step-l)y-ster) procedure for evalu-
ating the cross-correlatio11 a11alogous to
-¼, n = 1, 3
the one for evaluating convolution íntegral O, otherwise
given in Section 2.2.
(a) Use thc MATLAB co1nmand conv to plot
(e) Evaluate the cross-correlatio11 between the the first 20 values of the step response.
following signals:
(b) Obtai11 a closed-form expression for the
(i) x(t) = e- 1u(t}, y(t} = e- 31u(t)
mag11itude responsc and plot it on - rr < n
(ii) x(t} = cos( 1rt){u(t + 2) - s; 1T llSÍIJg MATLAB.
u(t - 2)], y(t) = cos(21rt)[u(t + 2)
2.42 Use the MATLAB commands f i l ter and
- u(t - 2)1
f i l ti e to repeat Example 2.16.
(iíi) x(t) = u(t) - 2u(t - 1} +
2.43 Use the MATLAB commands f i l ter and
u(t - 2), y(t) =
u(t + 1) -
u(t)
f i l ti e to determine the first 50 output values
(iv) x(t) = u(t - a) - u(t - a - 1), in Problem 2.22.
y(t) = u(t) - u(t - 1)
2.44 The magnitude response of a system described
(d) Evaluate the autocorrelation of the follow- by a differe11ce equation may be obtained from
ing sig11als: the output y[n] by applying an input x[nl =
(i) x(t) = e- 1u(t) e)!lnutnl to a syster11 thar is inítially at rest. 011ce
(ii) x(t) = cos( 1rt)lu(t + 2) - u(t - 2)1 the natural response of the system has decayed
154 CHAPTER 2 • TIME•DOMAIN REPRESENTATIONS FOR LINEAR TIME•INVARIANT SYSTEMS

to a negligible value, y[n] is due only to the input 2.46 Use the MATLAB command s s 2 s s to solve
and we have y[n] ""' H(ei!2)e;nn. Problem 2.32.
(a) Determine the value n for which each term
0 2.47 A system has the state-variable description
in the natural response of the system in Ex-
ample 2 .16 is a factor of 1000 smaller than -l
2 --2l 1
A= b=
its value at time n = O. --~ o '
l
2 '
(b) Show that I H(ei0 ) 1 = 1y[no] I. · e= [1 -1], D= [O]
(e) Use the results in (a) and (b) to experimen-
tally determine the magnitude response of (a) Use the MATLAB commands L sim and
this system with the MATLAB command impulse to determine the first 30 values
f i l ter. Plot the magnitude response for of the step and impulse responses of this
n
input frequencies in the range - 7T < s; 7T. system.
2.45 Use the MATLAB command i mp z to determine (b) Define new states q 1 [n] = q 1 [n] + q2 [n] and
the first 30 values of the impulse response for q 2 [n] = 2q 1[n] - q 2 [nJ. Repeat part (a) for
the systems described in Problem 2.22. the transformed system.
Fourier Representations for Signals

··•:· .. ,..
...
;:.,.:~. . ,
~,··
. .:,\ .~d ·,, .
~: •>
·*'
....,.,,.. . ,.· . >'
: ..'

3.1 Introduction
ln this chapter we consider representing a signal as a weighted superpositic,n of con1plex
sinusoids. If such a signal is applied to a linear system, then the system <>utpttt is a weighted
superposition of the system response to cach complex sinusoid. A similar application <>f
the Jinearity property was exploited in the previous chapter to develop the conv<.>lution
integral and convolution sum. ln Chapter 2, the input signal was expressed as a weighted
superposition of time-shifted impulses; the output was then given by a weighted super-
position of time-shifted versions <>Í che sysrem's impulse response. The expressíon f()f the
output that resulted from expressing signals in terms of impulses ,vas termed ''cc>nvolu-
ti<>n.'' By rcpresenting signals in terms of sinusoids, we \vill obtaín an alternative expression
for the input-output behavior of a LTI system.
Representatit)n of signals as superpositions of complex sinusoids not only leads to a
useful expression for the system output but also provides a very insightful characrerization
of signals and systems. The focus of this chapter is representation of sígnals using complex
sinusoids and the properties of such representations. Applications t>f these representations
to system and sígnal analysis are emphasized in the ft)llowing chapter.
Thc srudy <1Í signals and systems using sinusoidal representations is termed Fourier
analysís after J<>seph Fc>urier (1768-1830) for his contributions to the cheory {>f reprc-
senting functions as weighted superpc)sitions <>Í sinus<>ids. Fourier methods have widc-
spread applicati<>n beyond síg11aJs and systems; they are used in every branch of engineering
and science.

• CoMPLEX StNUSOIDS AND LTI SYSTEMS

The sinusc)idal steady-state respc>11se of a L TI systen1 was intr<>duced ir1 Secti{>n 2.3. W'e
showed that a complex sinusoid input to a LTI system ge11erates an outpt1t eqt1al to the
sinusoidal input multiplied by the system frequency response. That is, in discrete time, the
input x[nl = eiihi results in the output
y[nl = H(e;11)eifl11
where the frequency respc)nse H(eiº) is defined in terms of the impt1lse response h[11J as
X

H(ei11 ) = I h[kJe-i!!k
k :e - ""
156 CHAPTER 3 • f OlJRIER REPRESENTATIONS FOR SIGNALS

ln continuous time, the input x(t) = eiwt results in the <>utput


)'( t) = H( jw)e''"'
where the frequency response H( iw) is defined in rerms of rhe impulse response h(t) as

H( jw} = J: ?C h(-r)e-;,.. -r d-r

We say that che complex sinusoid lj,(t) = eit.,n is an eigenfunction of the system H
associated with the eigenvalue À = H( jw) beca use it satisfies an eigenvalue pr<>blen1 de-
scribed by
H{lf,(t)} = Alf,(t)
This eigenrelation is illustrated in Fig. 3 .1. The effect of the system on an eigenfunction
input signal is one of sca1ar multiplication-the output is given by rhe product of the input
anda compJex number. This eigenrelation is analogous te> the more familiar macrix eiger1-
value prc>blem. If ck is an eigenvector of a matrix A with cigenvalue Ak, then we have
Aek = Àkek

Multiplying ek by the matrix A is equivalent to multiplying ek by the scalar Àk.


Signals that are eigenfunctions of syscems play an imp<)rta11t role ín systems rheory.
By representing arbitrary signals as weighted superp<>Sitions of cigenfunctions, we trans-
Í<)rm the operation of convolutíon to one of m11ltiplication. To see this, consider expressing
the input to a LTI system as the weighted sun1 of M complex sinusoids
i\.1

x(t) = L akeicokt
k=l

If eiwkt is an eigenfunccion of the system with eige11value H( jwk), then each term in the
input, akeiwkt, produces an output term, akH( iwk)eiwkt. Hence we express rhe output of the
system as
M
y(t) = L akH( jwk)eiwkt
k=I

The <>utput is a weíghted sum of M complex sinusoids, with the weights, a1.,, modi.fied by
the system frequency response, H( jwk). The operation of convolution, h(t) * x(t}, becomes
multiplication, akH( jwk}, because x(t) is expressed as a sum c)f eigenfunccions. The ana1-
ogous rclationship holds in the discrete-time case.
This property is a powerful motivatic)n for representing signals as weighted super-
positions of complex sinusoids. ln addition, the weights provide an alternative interpre-
tarion of thc signal. Rather than describing the signal behavior as a function of time, the

--• H(iw)eil,,)t eifln_.,


..... H

(a) (b) (e)

fIGlJRE: 3.1 Jllustratíc>n of the cigenfunctíon pro1lcrty of linear systems. The action of the
system on an eigcnft1nction input is one of multiplication by the corresponding eigenvalue.
(a) General cigenft1ncti<>n iJ,(t) or it,[n J anel eigenvalue À. (b) C<>mplex sinusoid eigenfunction e_;,.,,,
and eigenvalue H(jú>). (e) Cornple.'!í sjnusoid eigenfunctjon e.iíh, and eigenvalue H(ei!l),
3.1 lntroduction 157

weights describe the signal as a function of frequency. The general notion (>f describing
complicated signals as a function of frequency is commonly encountered in music. For
example, the musical score for an orchestra contains parts for instruments having different
frequency ranges, such as a string bass, which produces very low frequency sound, and a
piccolo, which produces very high frequency sound. The sound that we hear when listeníng
t<> an orchestra represents the superposition (JÍ sounds generated by each instrument. Sim-
ilarly, the score fc>r a choir contains bass, tenor, alto, and soprano parts, each <>f which
contributes to a different frequency range in the overall sound. The signal representations
developed in this chapter can l)e viewed analogously: the weight associated with a si11usoid
of a given frequency represents rhe contributíon of that sinusoid t(> the overall sígnal. A
frequency-do1nain view of signals is very informative, as we shall see in what foll<)ws.

• FOURIER REPRESENTATJONS FOR FOUR SIGNAL CLASSES

There are four distinct Fourier representations, each applicable to a different class <>f sig-
nals. These four classes are defined by the peri<><lícity properties of a signal and whether
it is continuous or discrete time. Periodic signals have Fourier series represcntations. The
Fc>urier series (FS) applies to C<)ntinuous-time periodic signals and the discrete-time Fourier
series (DTFS) applies to discrete-time periodic signals. N()nperiodic signals have Fourier
transform representations. If the signal is continuous time and nonperiodic, the represen-
tation is termed the Fourier transform (FT). If the signal is discrete time and nonperiodic,
then the discrete-time Fourier transform (DTFT) is used. Table 3.1 illustrates the relatic>n-
ship between the time properties of a signal and the appropriate Fourier representation.
The DTFS is often referred to as. the discrete Fourier transform or DFT; however, this
termínc)l<1gy does not correctly reflect the series nature of the DTFS and often leads to
cc>nfusion with the DTFT S<J we adopt the mc>re descriptive DTFS terminc>logy.

TABLE 3.1 Relationship Between Time Properties


of a Signal a1id the Appropriate Fourier Representation
Time
Pr<>fJerty Periodic Nonperiollic

e
o
n
t
.
l Fourier Series Fourier Transf<>rm
n (FS) (FT)
u
{)

u
s
1)
1
s
e Discrete-Timc l-'ourier Series Discrete-Time Fourier T ransform
r (DTFS} (DTt'T)
e
;
'
158 CHAPTER 3 • fOlJRll:'.R REPRESENTATIONS FOR StGNALS

Periodic Signals: Fourier Series Representations


Consider representing a periodic signal as a weighted superposition of complex si-
nusoids. Since the weighted superpositíon must have the sarne period as the signal, each
sinusoid in the superposition must have the sarne period as the signal. This implies that
the frequency of each sinusoid must be an integer multiplc of the signal's fundamental
frequency. lf xfn] is a discrete-time signal of fundamental period N, then we seek to rep-
resent x[n] by rhe DTFS
x[n] = L A[k Jeikílon (3.1)
k

where 0 0 = 21TIN is the fundamental frequency of x[n]. The frequency of the kth sinusoid
in the superposition is kfi0 • Similarly, if x(t} is a continuous-time signal of fundamental
period T, we represent x(t) by the FS
x(t) = L A[k]eikwut (3.2)
k

where w 0 = 211'/T is the fundamental frequency of x(t). Here the frequency of the kth
sinusoid is kw 0 • ln both Eqs. (3.1) and (3.2), Alk] is the weight applied to the kth complex
sinusoid and the hat ~ denotes approximate value, since we do not yet assume that either
x[n] or x(t) can be represented exactly by a series of this form.
How many terms and weights should we use in each sum? Beginning with the DTFS
described in Eq. (3.1), the answer to this question becomes apparent if we recai! that
complex sinusoids with distinct frequencies are not always distinct. ln particular, the com-
plex sinusoids eikfl,,n are N periodic in the frequency índex k. We have
ei(N+k)!l 0 n = ejNn0 ne;kn0 n

= eí2-rr•reik!i0 n

= eikíl n 0

Thus there are only N distinct complex sinusoids of the form eikílºn. A unique set of N
dístinct complex sinus()ids is obtained by letting the frequency índex k take on any N
consecurive values. Hence we may rewrite Eq. (3.1) as
x[ n] = ~ A[k]eik!lon (3.3)
k=(N>

where the notation k = (N) ímplies letting k range over any N consecutive values. The set
of N consecutive values <>ver which k varies is arbítrary and is usually chosen to simplify
the problem by exploiting symmetries in the signal x[n]. Common choices are k = O to
N - 1 and, for N even, k = -N/2 to N/2 - 1.
ln order to determine the weights or coefficients A[k], we shall minimize the mean-
squared error (MSE) between the signal and its series representatíon. The construction of
the series representation ensures that both the signal and the representation are periodic
with the sarne period. Hence the MSE is the average of the squared difference between the
signal and its representation over any one period. ln the discrete-time case only N consec-
utive values of x[n] and x[n] are required since both are N periodic. We have

MSE = -h 1t~N> lx[n] - xlnJ[


2
(3.4}

where we agaín use the notation n = (N) to indicate summation over any N consecutive
values. We leave the interval for evaluating the MSE unspecified since it will later prove
convenient to choose different intervals in different problems.
3.1 l1itroduction 159

ln C<>ntrast to the discrete-time case, C()ntinuous-timc complex sinusoids eikw"t with


distinct frequencies kw are always distinct. Hence there are potentially an infinite number
0

of disti11ct terrns in thc series of Eq. (3.2) and we approximate x(t) as

. x(t} = L A[k]eikc.>,,, (,) .5)


k=-,;,;,

We seek c<)efficients A lk I s<, that x(t) is a good approxin1ati<)n to x(t).

No1iperiodic Signals: Fourier Transforni Represe1itatio1is


ln co11trasr to the peri()dic signal case, there are no restrictions on the period of the
sínusoids used to represent nonperiodic signals. Hence the Fourier transform representa-
tions empl(>y complex si11usoids. having a continuum of frequencies. The signal is repre-
sented as a weighted integral of complex sinusoids where the variable of integration is the
sinusoid's frequc11cy. Discrete-time sinus<)ids are used to represent díscrete-time signals in
the DTFT, whilc continuous-time sinus{>ids are used to represent continuous-time signals
in the FT. C<>11tínuous-time sinusoids with distinct frequencies are distinct, s<, the FT in-
volves sinusoidal frequencies from - oo t<> co. Discrete-time sinus(>ids are only uni que over
a 2TT interval of frequency, since discrete-time sinusoids with frequencies separated by an
ínteger multiple of 2 TT are ídentical. Hence the DTFT involves sinusoidal frequencies within
a 2 TT interval.
The next four sections of this chapter develop, in sequence, the DTFS, FS, DTFT,
and Ff. The remainder of rhe chapter explores tl1e properties of these four representations.
All four representati<>11s are based <Jn complex sinusoidal basis functions and thus have
analogous prc>perties.

• 0RTH<>GONALJTY <>F COJVIPLEX SINUSOIDS

The orthog<,11ality of cc>mplex sinusc>ids plays a key role in Fc>urier representations. We


say rhat two signals are orth<)gc)nal if their inner product is zero. Fc.>r discrete-time periodic
signals, the inner prc>duct is defined as the sum of values in their product. lf cf,k[nl and
cf>,11 [n] are tw<) N periodic signals, their inner product is

. rk,,n =
.
I
n=(l\J)
<Pk[111 <P ,:[n]

Nc>te tl1at the i1111er product is defined using complex conjugatic>n when the signals are
cc>1nplex value(i. If lk.,n = O fc,r k -=!= m, then <Pk[n] and q>,,1 [11] are <>rthogonal. c:cJrrespond-
íngly, Í(>f co11tinuc)us-time signals with period T> rhe inner product is defined in terms of
an integraJ, as sho,vn by

.lk,,n = J (T)
<Pk(t}<f>:! (t) dt
1

where the r1c>tation (T) in1plies integration over any interval <)f length T. As in discrete
time, if l 1l,,,, = O for k -=!= r11, chen we say <Pk (t) and tj), 11 (t) are c>rthogonal.
Begínning witl1 the discrete-cime case, let <Pklnl = eikíl"11 be a complex sinusoid with
frequency k!1 Choosing the interval n = O to n = N - 1, the inner product is given L1y
0

N-1

Ik,rn = ""'
L.J
ei!k-1n)!t0 11
11=0
160 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS

Assuming k and m are restricted to the sarne interval of N consecutive values, rhis is a
finite geometric series whose sum depends on whether k = m or k * m, as shown by

N-1 N, k=m
L ei(k-m)non ==
1 _ eik2Tr
n=O
1 - e;kno' ki=m

Now use e-;k 2rr = 1 to obtain


N-1
N, k = m
L ei< k - m'ª"n = (3.6)
n=O O, k *m
This result indicates that complex sinusoids with frequencies separated by an integer mul-
tiple of the fundamental frequency are orthogonal. We shall use this result in deriving the
DTFS representation. .
Continuous-time complex sinusoids with frequencies separated by an integer multi-
ple of the fundamental frequency are also orthogonal. Letting <!>k (t) = eikw,.,r, the inner
product between eikw,,t and eimw"t is expressed as

1 = (T ei<k-m)wc,t dt
k,m )
0

This integral takes on two values, depending on the value k - m, as shown by

T, k=m
ki=m
Using the fact ei<k-m)w,.,T = ei(k-m)2 = 1, we obtain
'TT

T, k =m
(3.7)
O k-:f=m
This pr<)perty is central to determining the FS coefficients.

3.2 Discrete-Time Periodic Signals:


The Discrete-Time Fourier Series
*• DERIVATION
The DTFS represents an N periodic discrete-time signal x[n] as the series of Eq. (3.3):
x[n] = L A[k]eikOun
k=(N)

where !1 = 21r/N.
0 .
ln order to choose the DTFS coefficients A[k], we now minimize the MSE defined in
Eq. (3.4 ), rewritten as

MSE = .!_ L lx[n] - x[n] 12


N n=(N>
2

= ..!_ L x[n] - L A[k]eik!l,,n


N n=(N> k:::(N)
3.2 Discrete-Time Periodic Signals: The Discrete-Time Fourier Series 161

Minimization of the MSE is instructive, although it involves tedious algebraíc manipula-


tion. The end result is an expression for A[k] in terms of x[12J. Also, by examining the
minimum value of the MSE, we are able to establish the accuracy with which x[11] ap-
proximates x[n ].
The magnitude sc.1uared ()Í a complex num ber e is given by e 2 = cc*. Expanding I 1

the magnitude squared in the sum using la + b 12


== (a + b){a + h) * yields

1
MSE = - L x[nJ - .L A[kJejk!l 0

1
x[ t1_I - L A 1111Jei11112'' 11
N ,1=(.~) k=<N) n1=(N>

Now multiply each term to obtai11

MSE = _!_ L lx[n] 12 - L A,., [m l .!_ L x[n]e-i"ifl,,n


N n=(N) n1=(N) N n=(N)

- L k=(N)
A[k]

Define

Xlk l = .!_ L x[n]e-;k! 10" (3.8)


N 11=<·"''>

and apply the <)rthogonality property <>f discrete-time complex sinusoids, Eq. {3.6), to the
last term in the MSE. Hence we may write the MSE as

MSE = t n~N)
2
]xlnll - k~N) A'~[k]Xlkl - ki) Alk]X'~[k] + k¾.,i IA1kll
2
Now use the technic.1ue of ''cc>mpleting the square'' to write the MSE as a perfect
square in the DTFS coefficients AlkJ. Add and sul)tract ~k=<N> 1X[kj 12 t<) rhe right-hand
side of the MSE> so that it may he written as

MSE = ~ ,, ~~, lxlnl 1 + , ~N> ( IA[k] I' -


2
A ''[k]X[kl - AlkJX>.-{k] + 1Xlk] 1 2
)

- L k=<N)
IX!kl 12
Rewrite the middle sum as a square t<) <)btain

MSE = .!_
N
L
liº (N}
lx[n]l 2 + L
k=(N)
IAlkl - Xlkll 2 - L
k=<N)
IX[k]l2 {3.9)

The depende11ce of the MSE on the unkn<>wn DTFS coefficients A[kj is confi11ed t<) rhe
middle term <>f Eq. (3.9), and rhis term is always nonnegative. Hence the MSE is minimized
by forcing the middle term t<> zero \Vith the choice
A[kl = X[k]
These coef.ficients mi11imize the MSE l-,erween xf ,zl a11d x[11J.
Note that XI k] is N periodic ín k, si11ce

Xlk + N] = .!_ L xlnle-j(k f ,",')flnll

N 11=<-"N>
162 CHAPTER 3 lll f OlJRIE.R REPRF.:SENTATIONS FOR SIGNALS

Usíng the fact that e- iN!l,,n = e-il-rrn = t we <)btain

X[k + Nl = _!_ L x[111e-·ikílon


N 11=(1\l)
= X[kJ
which establishes that X[k] is N pcriodic.
The value of thc minímum MSE determines how \vell x[n] apprc)ximates x[n). We
determine the minimum MSE by substituting Alk] = X[kJ into Eq. (3.9) t(> obtain

· MSE = h n~N> lx[n)l 2


- k ~>IX[kj 1 2
(3.10)

We next substitute Eq. (3.8).into the second ter1n of Eq. (3.10) to obtain
. 1 .
L IX[k]l 2 = L L L x[n]x*[m]e 11111
-n)fl0 k
k=(N) . k=(N) N2 n=(N) m=(N)

lnterchange the arder of summatic>n to write

(3.11)

Equation (.3.11) is simplified by recalling that e;,n!l.,k and ein!l,,k are orthogonal. Referring
to Eq. (3.6), we have
n = m
- L
1 .
e'(,n-nHlc,k =
1
'
N k=(l\l} O, n =fa m
This redttces the doub)e sum c>ver m and n <>11 the right-hand side of Eq. (3.11) to the single
sum

k~N> 1X[kl I' = h"tN, lx[n]l 2

Substituting this resuJr into Eq. (3.1 O) gives MSE = O. That is, if rhe DTFS Cí>efficients are
givcn by Eq. (3.8), then the MSE between x[n] and xlnl is zero. Since the MSE is zero, the
err<>r is zero for each value of n a11d thus xlnl = x[n].

m THE DTFS REPRESENTATION


The DTFS represe11tation for xlnl is given by
x[n] = L X[k]eik!lºn (3.12)
k=(N)

X[k}=1_ L xfnle-;knºn (3.13)


N n=<,\i>
where x[n] has fundamental period N and n,, = 21r/N. We say that x[11J and X[k] are a
DTFS pair and denote this relatíonship as

xlnl DTFS;íl,, Xfkl

from N values of X[k] we may determine xlnl using Eq. (3.12), and from N values of
xfnl we may determine X[k] using Eq. (3.13). Either X[kJ or x[n] provides a complete
description of the signal. We shall see that in some pr<Jblems it is advantageous to represent
the signal using its time values x[n l, while in <>thers the DTFS coefficients X[kj <Jffer a
3.2 Discrete-Time Periodic Signals: The Discrete-Titne Fourier Series 163

more convenient description of the signal. The DTFS coefficient representation is also
known as a frequency-domain representation because each DTFS coefficient is associated
with a complex sinusoíd of a different frequency.
Before presenting several examples illustrati11g the DTFS, we remind the reader that
the starting values of the índices k and n in Eqs. (3.12} and (3.13) are arbitrary beca use
both x[n] and X[k] are N periodic. The range for the índices may thus be chosen to simplify
the problem at hand.
.. . . .,. . ..·4 -t·:... •• > ' • ·~.., :·:· •• ·- .~.. •><'><llt: , ,_ _, . ...,.. "·· ·:·· ..,,...... ,.,:.,. .,•;,.:~

ExAMPLE 3.1 Find the DTFS representation for


x[nJ = cos(fn + </>)
Solution: The fundamental períod of x[ n] is N = 16. Hence 0 0 = 21r/16. We could determine
the DTFS coefficients using Eq. (3.13); however, in this case it is easier to find them by in-
spection. Write
,
. ~: . eil< 111s},i+ <!> l + e - i[ (1t/8 )n +<I> J

,,,: ,.
x[n) = 2
..., (3.14)
and compare this to the DTFS of Eq. (3.12) written using a starting index k = -7
8
x[n) = L X[k]eik(rrl8)n (3.15)
k=-7

Equating the terms in Eq. (3.14) and Eq. (3.15) having equal frequencies, k1rl8, gives
12e-;tf,, k= -1
DTFS; 21r/l6 X[k] =
x[n).,_____ leitl>
2 '
k= 1
O, -7 < k :S;. 8 and k =!:- ±1
Since X[k) has period N = 16, we have X[15] = X[31] = · · · = ½e-;,t, and similarly X[l 7] =
X[33] = · · · = fei<f> wíth ali other values of X[k] equal to zero. Plots of the magnitude a11d
phase of X[k] are depicted in Fig. 3.2.
ln general it is easíest to determine the DTFS coefficients by inspection when the signal
consists of a sum of sinusoids. ....
·
• · . ·,; ·. ·..·
•.~·. , •~ - •; ~
··:!"~':· · · ,mr·:.··
.,,.-,.. • •• • ...,.• • •·· · ···· • , .•.,•...
......, .. •,ri;..;~,~ ..~•x~ • ~'"· ,._,,.. ........::,, .........:

1X[k] 1

1/2 ~

... ...
-----<>-o •~ - ~ - ~ "'!" ~ - - ~ -- - -- -- - o-k
-20 -10 10 20 30

arg{ X[k] 1

4>
... •• •

, . - - - - . -
<

'
k
-20 -10 10 20 30

'
. . . -4>
FIGlJRI:'. 3.2 l\:lagnitu<le and phase of DTFS coefficie11Ls for Example 3.1.
164 CHAPTER 3 • FOlJRIER REPRESE.NTATIONS FOR SIGNALS

The magnitude of X[k], IX[kJI, is known as the magnitude spectrum of x[n]. Simi-
larly, the phase of Xf kl, arg{X[k}}, is known as the phase spectrum of x[n}. ln the previous
example ali the components of x[n] are concentrated at two frequencies, 0 0 (k = 1) and
-nº (k = -1).
• Drill Problem 3.1 Determine the DTFS cc>efficients by inspection for the signal

1 37T
x[n] = 1 + sin 12 7T n + 8

Answer:
e-i(3,.,,1s)
k= -1
2j '
DTFS; 2-rr/24 1, k=O
x[n} X[k} ei(J,.,,18)
k = l
2j '
o, otherwise on -11 s k s 12 •
The next example directly evaluates Eq. (3.13) to determine the DTFS coefficients.

EXAMPLE 3.2 Find the DTFS coefficients for the N periodíc square wave depicted in
Fig. 3.3.
Solution: The period is N, so fl = 2TTIN. It is convenient in this case to evaluate Eq. (3.13)
0

over indíces n = -M to n = N - M - 1. We thus have


1 N-M-1 _ ..
X[k] = - x[n]e-,kn.,n L
N n=-M

1
·.'·. <•
=- IM .
e-,kOon
N n=-M

Perform the change of variable on the índex of summation, m = n + M, to obtain


1 . 2M _
X[k] =- e'kn,.M L e-,kfiom
.·.
·. N m=O

Summing the geometric series yields

k =I= O, ±N, ±2N, ...


'
~.
. >. •

x[nl

r J •>
'
••• ••• ... • •• ••• • ••

-N+M -M M N-M N N+M

FIGURE 3.3 Square wave for Example 3.2.


3.2 Discrete-Time Periodic Sig1ials: The Discrete-Time Fourier Series 165

'. . .
. ,., - ,.. ·:: . :
'· ..
•,:,;,~:·:· .,:,i_::.:,,._
..-;.; '··' .· ,.
which may be rewritren as
.
1 eik!l0 (2M + 1 )12 1 _ e-ik!l0 (2M+1)
X[k] =N ejkfi</2 . 1 - e-ikfl,,
..,,
eikfi.,(2,'\,f + t )/2 _ e-ik0(1(2M+1)/2
..
: f
.. =-
1
N eikfi,/ :!. _ e- ;kfl,/2 , k * O, ±N, +2N, ...
At this point we may divide the ~umerator and denomínator by 2j to express X[k] as a ratio
of two sine functions, as shown by

. k
sin !1 (2M + 1 ).
2
X[k] ~ ~------, k * O, -:!:N, ±2N, ...
. . ... • k fiº
: .,

·-~-
/ .. s1n
. .
2
-~··

An alternative expression for X[k] is obtained by substituting fi 0 = 21r/N, yielding

sin k~ (2M + 1)
X[k] = -h-----,
!!.
k * O, ±N, -:±:2N, ...
s1n k

The technique used here to write the finite geometric sum expression for X[k] as a ratio of
sine functic>ns involves symmetrizing both the numerator, 1 - e-ik0,,(2 ,\.1+ 1 >, and denominator,
1 - e-ik!iº, with the appropriate power of eik110• Now, for k = O, ±N, -:±:2N, ... , we have

1 M
X[k) =- L 1
f ., N m=--M
..
. 2M + 1
,;,':.. N

and the expression for X[k] is

.
~.:). :. . ~: ··li{:: .. /.'; .\"
!
. ,.;.~
. ~ . . . .;
: . .
\,

1

s1n k; (2M + 1)
.. -~------, k =fo. O, + N, ±2N, ...
N • k 'lT
s1n
X(k] = N
2M + 1
k == O, ±N, ±2N, ...
N '
Using L'Hopital's rule, it is easy to show that

1T
k N (2M + 1

Slll
1 2M + 1
lim - --------
k-o, + N,:!:2N.... N . N
s1n k~
N
....,,... . ('::
166 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS

0.2 ...------~-----,.----,.----.----.----~

0.15

0.1

X[k]

0.05 ·

-0.05 L..-_._______.:_----L_ ___.._ _..;._-...i.._--.L._.....L.._ _ _ _ _ _...J

-50 -40 -30 -20 -10 O 10 20 30 40 50


k
(a)

0.5 ,----,,--------,.-----.-----0-----.------,---,----0

0.4 . .

0.3

0.2
X[k]
0.1

-0.l

-0.2 ..____.____'--___._ __._ _..._-'-_ __.__ _.__ _.__


-50 -40 -30 -20 -10 O 10 20 30 40 50
k
(b)

FIGURE 3.4 The DTFS coefficients for a square wave: (a) 1\1 = 4 and (b) J\1 = 12.

For this reason, it is common to write the expression for X[k] as


1T'
. .. 1 Slfl k N (2M + 1)
.,. •

. .. .
.·•
X[k] = N - - - - -

:. . ..
sin k ~
ln this form it is understood that the value X[k] for k = O, ±N, :!:2N, ... is obtaíned from
the limitas k ~O.A plot of two periods of X[k] as a function of k is depicted in Fig. 3.4 for
M = 4 and M = 12 assuming N = 50. Note that in this example X[k] is real; hence the
magnitude spectrum is the absolute value of X[k] and the phase spectrum is O when X[k] is
positive and 1r when X[k] is negative.
' .. ·'.
3.2 Discrete-Time Periodic Signals: The Discrete-Time Fourier Series 167

x{n]

2 !'
/

- ~ _l ,-~ . . . . L<>-<:H>-+-1L-O-<O-O-.L.J.... ~-- n


-5 5 10

FIGURE 3.5 Signal x[n] for Drill Problem 3.2.

• Drill Problem 3.2 Determine the DTFS coefficients for the periodic signal depictcd
inFig.3.5.
Answer:
l)TFS; lrr/6 X[kl 1 2 k 7r
l l ---- = -6 + -3 COS -3
X 11

Each term in the DTFS c>f Eq. (3.12) associated \-Vith a nonzer<> coefficient X[k]
contributes to the represenrati<>n of rhe signal. We now examine this rcpresentation by
considering the contribution of each term for the square wave in Example 3.2. ln this
example the DTFS coefficíents have even symmetry, Xf k] = XJ-k], and we may rewrite
the DTFS of Eq. (3.12) as a series involving harmonically related cosines. General cc>ndi-
tions under which the DTFS coefficicnts have even or <>dd symmctry are discussed in
Section 3.6. Assume for convenience that N is even so that N/2 is integer and let k range
frorn - N/2 + 1 to N/2, and thus write
N/2
x[n] = I xr k]eikíl<>n
k:c-N/l+I
N/2-l
= X[Ol + L (X[m]eini{}()n + X[-m]e-i•nil,,11) + X[N/2.lei(Nll)il,.n

Now exploit Xf ml = X[-m] and N0 0 = 21r to obtain


N/2-1 · · n + e-11n!!
e1mtt · n 0 0

x[nJ = XfO] + n~l 2X[,nl


2
+ XlN/2]e'7Tn
N/2-1

= XfOJ + L
171= 1
2X[m] cos(míl n) + X[N/2] cos(1rn) 0

where we have also used ei7Tn = cc>s( 1rn). If we define the new sct of coefficients
X[kj, k = O, N/2
Blkl =
2X[k], k = l, 2, ... , N/2 - 1
then wc may write the DTr'S in terms of a series of harmonically related cosines as
N/2
x[n] =I Blk) cos(kil n) 0
k=O
{' ·JIC· .·>1,;: • ••• i,. ·* . ,; ....;; ·~ ·,.. . ,. . ·<1: .Jt ·:,. . . ...,·. i> ;••• • • •••• ."l· ;;., ••

EXAMPLE 3.3 Define a parcial sum approximation to x[n} as


. ·,·
J
xj[nJ = L
k=O
B[kJ cos(kflon)

where J s N/2. This approximarion contains the first 2J + 1 terms centered on k = O in Eq.
(3.12). Evaluate one period of the Jth term in the sum and x1(n] for J = 1, 3, 5, 23, and 25,
assumíng N = 50 and M == 12 for the square wave in Example 3.2. ..•
168 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS

. ..
.: "j. . .,.-~:;;,. ·~1;~~:,. /{{:: \Y,~: ;.. ,;i ..:; ;:. ..

Solution: Figure 3.6 depicts the Jth term in the sum, BU] cos(Jfi n), and one period of x/[n] 0

for the specífied values of ]. Only odd values for J are considered because the even indexed
coefficients B[k] are zero. Note that the approximation improves as J increases, with exact
representation of x[n] when J = N/2 = 25. ln general, the coefficients B[k] associated with
values of k near zero represent the low-frequency or slowly varying features in the signal,
while the coefficients associated with the values of k near ±.N/2 represent the high-frequency
or rapidly varyíng features in the signal.

l ...----,-----,----.-!--.,---...-
,--,,----,-----.--.,----,

..-..
l::~ -
cf
'-'
:ll m·-r-··••,-

.....8
-
~ -O.'.,___[_·_·___1.___......i _ 1_
____.._ _...... ___._ _ _ _ _____,__1_ _ _ ...._i_ ' _ _ _ _ , •

-25 -20 -15 -10 -5 o 5 10 15 20 25


n
1.5 ~ - ~ - - - ~ - - - - - - - - - - - - - - ~

l -
~

;:
~
(~- 0.5 ..

o
-0.5 ....._____________......___...,_____~---------~-'1
-25 -20 -15 -10 -5 o 5 10 15 20 25
n
(a)
l ,-···-·--······. -·····-·"·~--,-------,---,..........-...-----.----,...---.
! i i

a~ o.s l,
§
,-,
0 tii-001,! !!!Aºo?f ttfj~TI!r-9 yf i 2-,A!!!!A-.-()--L-L-Y rr
~ -0.5 . . .
-1 ,________________l_._----L____ !__ -~···-·-··--L.._.__ .-t....___L_. __~
-25 -20 -15 -10 -5 O 5 10 15 20 25
n
l .5 1 ! ; í - ' í 1

....
1 t- > >
,

- -
o12..00-~ 0i_0 J tJ_ ......... ·-~ -~ ......... -~ - ~
_l11 º6bbbõ -~ >

1
1 i i i ! ; 1
-0.5 ª i ;

-25 -20 -15 -10 -5 o 5 10 15 20 25


n
(b)

FIGURE 3.6 Individual terms in the DTFS expansion f<)r a square \-Vave (top panei) and the cor-
respc,11cling partia) SLlm approximations x1 [1i] (bottom panei). 1·he J = O term is x0[1i] = ½and is
not shown. (a) J = l. (b) J = 3.
1.0

-a-
;: ~
0.5
.._.,
li")

'.Jl
ou o
-
...... -0.5
l i")

-1.0 L---...J....-__.J_ _ ...J.__ _,,,1.._ _ ...__...J...._ ___J_ _.,;.__ _,,,1.._ ___,J

o 25
-25 -20 -15 -10 -5

1.5 - - - - ~ - - - - . . . . - - - ~ - ~ - - - - ~ - - - ~ - - - - - ,
n
5
'º 15 20

1.0 '
r-,

......
~
.,., 0.5
<~

o ~~il .
-0.5
-25 -20 -15 -10 -5 o 5 10 15 20 25
n
(e)
1 1 1 1 1 1 1

-
0.5

r-,
("'l
...... -0.5
N - -
~
1 1 i 1 !
-1
-25 -20 -15 -10 -5 o 5 10 15 20 25
n
1.5 1 1

1 1-
' '
\ > ' -

0.5 ~ -

O 1--o-t; ··o·º·o··º··o·º o o o,.o·-LL-'-L~- ~ ··· ·· ·· ··· •- _,,IJ.-«...l--'-1.-'-'...L..J'..LJ< ·· ···º·o·.o-o·ºo o o-0-o·.O-·o-

1 1 1 1
-0.5 · 1

-25 -20 -15 -10 -5 o 5 10 15 20 25


n
(d)
1 - -........- - ~ - - - , 1 - - - , . - - - , - , ,- - ~ 1 - - . . . . . 1 - - ~ 7 - - - - - - ,

"' O ···º o O o ·º··o º··o·º ·o··º o·º··o Oo 0 o O o·º o··º·o-º··o·0 •o··º ·o·º o 0 o 0 ··o 0··0·º·0··º ·o·º o··º o O o 0

-~
8
__,
~
-0.5 ~ -
-1 L - - - - ' - - - . . L I_ _..1l_ _ _ _...._1_ _,__1_
~ __.__ __,_,_ _--11_ _...J

-25 -20 -15 -10 -5 O 5 10 15 20 25


n

1.5 í \ i i 1 ! ; i l

'. -
1 ~ > ' >
,....,
1-:!
~ 0.5 ..... -
·~"' o . . - - - 0-0-000-0-00·0-0-
i

i ! • i i ! 1
-0.5 i

-25 -20 -15 -10 -5 o 5 10 15 20 25


n
(e)

FIGURE 3.6 (continued} (e} J = 5. (d) J = 23. (e) J = 25.


} 70 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS

The DTFS is the only Fourier representation that can be numerically evaluated and
manipulated in a computer. This is because both the time-domain, x[n], and frequency-
domain, Xlk], representations of the signal are exactly characterized by a finite set of N
numbers. The computational tractability of the DTFS is of great significance. The DTFS
finds extensive use in numerical signal analysis and system implementation and is often

Nonnal
3 .----,----,-----,------------.!--....
,. ----,----,----,

2- -
1 >- -
x[n]
o ~..,,.....J"", --"'-l.__.........,-~__,,-...i""-11-~""i _..,...~ ,__--4
-1 ... ..

-2 '------------------'--------'--'---......__ _._..._ _,
O 200 400 600 800 1000 1200 1400 1600 1800 2000
Time indcx (n)
(a)
Ventricular tachycardia
3 .---..-----,-----,------,----,.--.---....---,---.....
2 ..

l
y[n]
oi..····~
-1 .

-2'------------~---'-------~-------
0 200 400 600 800 l 000 1200 1400 J600 1800 2000
Time index (n)
(b)

Nonnal
0.25 ,------.----...-------,------,------,----,

0.2
0.15
1X[k] 1
0.1 . . .

0.05
oL.L&.&..l~_._._._._~flllJ.lll~~~=~=~0ooo.~
o 10 20 30 40 50
Frequency index (k)
(e)
Ventricular tachycardia
0.25 . - - - - ~ - - - - , - . . . - - - - , - - - - ~ - - - - - . - - - - ,
0.2 . .

0.15
1 Y(k] 1
0.1
0.05
o 1..LLJ..J..1.,,U..J,J.....LJLJ...LLLI.J..J..1...LU..o.L(l.O.O,Jl)0.0,~.oD-CL().0().06JO,O.O.c,Oo.OO.:,ó-o(:.O.O.C)Q-OOÓ
o 10 20 30 40 50
Frequency index (k)
(d)

FIGURE 3. 7 Electrocardiograms for two clifferent heartbeats and the fírst 60 coefficients of their
magnitude spectra. (a) Normal heartbeat. (b) Ventricular tachycardia. (e) .lvlagnitude spectrum for
the normal heartheat. (d) Magnitude spectrum for ventricular tachycardia.
3.3 Continuous-Time Periodic Signals: The Fourier Series 171

usec.l numerically approximaté the other three Fourier representations. These issues are
t<)
explored in rhe next chapter.

.:,:~< . : . , .. . .,. ,·,;,;.


·i- t ,.': •, ; • : ,••~• • ~. .::> ·~ • ..,. :·;,:. ••,.,,. •••'>•• ....

EXAMPLE 3.4 ln this example we evaluate the DTFS representations of rwo different elec-
trocardiogram (ECG) waveforms. Figures 3.7(a) and (b) depict the ECG of a normal heart
and one experiencing ventricular tachycardia, respectively. These sequences are drawn as con-
tinuous functions due to the dif.ficulty of depicting ali 2000 values in each case. Both of these
appear nearly periodic, with very slight variations in the amplitude and length of each period.
The DTFS of one period of each ECG may be computed numerically. The period of the normal
ECG is N == 305, while thc period of the ventricular rachycardia ECG is N = 421. One period
of each waveform is available. Evaluate the DTFS coefficients for each and pior their 1nagni-
rude spectrum. . ·., :•·
::
'

Solution: The magnitude spectrum of the first 60 DTFS coefficients is depicted in Figs. 3.7{c)
a11d (d). The higher indexed coefficients are very small and thus not shown.
The time waveforms differ, as do the DTFS coefficíents. The normal ECG is dominated
by a sharp spike or impulsive feature. Recall that the DTFS coefficients for a unit impulse have
constant magnitude. The DTFS coefficients of the normal ECG are approximately constant,
showing a gradual decrease in amplitude as the frequency íncreases. They also have a fairly
small magnitude, since there is relatively little pc>wer in the impulsive signal. ln contrast, the
ventricular tachycardia ECG is not as impulsive but has smoother features. Consequently, the
DTFS coefficíents have greater dynamic range with the low-frequency coefficients dominating.
The ventricular tachycardia ECG has greater power than the normal ECG and thus the DTFS
coefficients have larger amplítude.
. ,,,.

3.3 Continuous-Time Periodic Signals:


The Fourier Series
*• DERIVATION

We begin otir derivatic>n <)f the fS by approximating a signal x(t) having fundamental
peric>d T t1sir1g the series of Eq. (3.5):

x(t) L Af.k]eikwut (3.16)


k=-oc

where w() = 2 Tr!T.


We shall now use the orthogonality property, Eq. (3. 7), t<> find the FS C(>efficients.
We begin by assuming we can fi11d coefficients A[kJ so that x(t) = x(t). If x(t) = x(t), then

f(/"}
X ( t )e - jinw,,t dt = f (T)
x( t )e . jin«Jot dt

Substit11te the series expression for x(t) in this equality te> obtain the expression

i,, x(t)e-fmw,,i dt =in i. k A[k]efkw,/e-fmw,,, dt

= i AlkJ J. eik<tJ,,te-j111w,.t dt
k=-,,,, (f}
1 72 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS

The orthogonality property of Eq. (3.7) implies that the integral on the right-hand side is
zero except for k = m, and so we have

f(T)
x(t}e-jmwol dt = A[m]T

We conclude that if x(t) = x(t), then the mth coefficient is given by

A[m] = -1 J, '
x(t)e- 1111"' 0
t dt (3.17)
T (T)

Problem 3.32 establishes that this value also minimizes the MSE between x(t) and the
2] + 1 term, truncated approximation
.,
X1(t) = 2: A[kJeikwot
k=-J

Suppose we choose the coefficients according to Eq. (3.17). Under what conditions
does the infinite series of Eq. (3.16) actually converge to x(t)? A detailed analysis of this
question is beyond the scope of this book. However, we can state severa( results. First, if
x(t) is square integrable, that is,

_!_
T
f (T)
lx(t) 12 dt < oo

then the MSE between x(t) and x(t) is zero. This is a useful result that applíes te> a very
broad class of signals encountered in engineering practice. Note that in contrast t(> the
discrete-time case, zero MSE does not imply that x(t) and x(t} are equal pointwise (at each
value of t); it simply implies that there is zero energy in their difference.
Pointwise convergence is guaranteed at ali values of t except those corresponding to
discontinuities if the Dirichlet conditions are satisfied:

• x(t) is bounded ..
• x(t) has a finite number of local maxima and minima in one period.
• x(t) has a finite number of discontinuitíes in one peric>d.

If a signal x(t) satisfies the Dirichlet conditions and is not C<)11tinuous, then the FS repre-
sentati<>n of Eq. (3.16) converges to the midpoint of x(t) at each discontinuity.

• THE FS REPRESENTATION

We may write the FS as


CC

(3.18)
k=-oo

X[k] = _!_
T
f (T>
x(t)e-ikwot dt (3.19)
3.3 Continuous•Time Periodic Signals: The Fourier Series 173

where x(t) has fundamental period T and w 0 = 27r/T. We say that x(t) a11d Xf kl are a FS
pair and denote this relatíonship as

· x(t) - - -
X[kl
FS; Wr,

From the FS coefficients X[k1 we may determine x(t) using Eq. (3.18) and from x(t) we
may determine Xlkl using Eq. (3~19). We shall see later that in some problems it is ad-
vantageous to represent the signal in the time domain as x(t}, while in others the FS co-
ef.ficients X[kj offer a more convenient description. The FS coefficient representation is
also known as a frequency-domairi representation because each FS coefficient is associated
with a complex sinusoid of a different frequency. The follc)wi11g examples illustrate deter-
mination of the FS representation.
. ,.

ExAMPLE 3.5 Determine the FS representation for the signal


1T 1T
x(t) = 3 cos t +
2 4
Solution: The fundamental period of x(t) is T = 4. Hence w = 2,.,,.14 = 7T/2 and we seek to
0

express x(t) as
,, ...... 00

:',: . x(t) = L X[k]eik('ITtl)t


k= -,,,,

One approach to finding X[k] is to use Eq. (3.19). However, in this case x(t) is expressed in
terms of sinusoids, so it is easier to obtain X[k] by inspection. Write
1T 1T
x(t) = 3 cos 2 t + 4
ei('ff'l2)t+'ff'l4 + e-[;(-n-/2)t+'ff'l4]
= 3 ---------
2

This last expression is in the form of the Fourier series. We may thus identify
le-;'"14 k= -1
2 '

,.·,.
X[k] = 1efrrl4
2 '
k= 1
o, otherwise
The magnitude and phase of X[kl ·are depicted in Fig. 3.8. ., :
. <'

X[kJ 1
1 arg{ X[kJ 1
1
3/2 'ff/4 ......

-1T/4

FIGURE 3.8 l\ilagnitude and l)hasc spectra for Example 3.5.


174 CHAPTl:::'.R 3 • FOURIER REPRE:SENTA'l'IONS t'OR StGNALS

• Drill Problem 3.3 Determine the FS representation for

x(t) = 2 sin(2m - 3) + sin( 6m}


Answer:
j/2, k= -3
jei3, k = -1
F.S; 21r ,,
x(t) X[kj = -;e-'·', k = 1
-j/2, k = 3
o, otherwise •
As in the DTFS, the magnitude of X[k l is known as the magnitude spectrum of x(t),
while the phase c)f XlkJ is known as the phase spectrun1 of x(t). ln the previous exa1nple
aJI the power in x(t) is concentrated at two frequencíes, úJ and -úJ ln the next example 0 0 •

the p<>wer in x(t) is distributed acr<>Ss many frequcncies.

ExAMPLE 3.6 Determine the FS representation for the square wave depicted in Fig. 3.9.

Solution: The period is T, so w0 = 21r/T. lt is convenient in this problem to use the integral
formula Eq. (3.19) to determine rhe FS coefficients. We integrate over the period t = -T/2 to
t = T/2 to exploit the even symmetry of x(t) and obtain for k =/:- O
l
X[k] = -
JT/2 .
x(t)e-ikw 01
dt
T -TIZ

Tkw 0 2; '
_ 2 sin(kw0 Ts)
, k *O
For k = O, we have
l
X[O] = -
JT, dt
T -T$

'. .· ··' •:
·'·,/· .,::,~: .-~''" ' "'
•<, .,.,

x(t)

•••
i
l
7· • ..
-T .
'
'
t
-T-Ts -T+Ts

FIGURE 3.9 Square wave for Example 3.6.


3.3 Continuous-Time Periodic Signals: The Fourier Series 175

' ~·.

Using L'Hopítal's rule it is straightforward to show that ..'

. 2 sin(kw T5 )02Ts
11 m - - - - - =
•-o Tkü>o T
and thus we write .' .

. X[k] = 2 sin(kw0 T5 )
Tkw 0

with the understanding that X[O] is obtained as a limit. ln this problem X[k] is real valued.
Substituting w0 = 27r/T gives X[k] as a functíon of the racio T 5 /T, as shown by
• >
. ,. ·.
. . . . k 21rT5
' 2 Slll T
.,
X[k] = - - - - (3.20)
k21r
Figure 3.10 depicts X{k], - 50 ~ k ~ 50, for T 5 /T = ¼and T 5 /T = ft. Note that as T 5 /T
decreases, the signal becomes more concentrated in time within each period while the FS
representation becomes less concentrated in frequency. We shall explore the inverse relation-
ship between time- and frequency-domain concentrations of signals more fully in the sections
that follow.

0.6 .---------.-----r----.----....----..---~----,-----.----,

0.4

X[k]
0,2

-0.2 ...__ ___.__ _ _ _ _.......__ _......____ _.......__ _,....__ _ _ ___..__ ___...._ __,
-50 -40 -30 -20 -10 o 10 20 30 40 50
k
(a)
0.15 ..------.----.-------r----,.-----.-----..------,-------,----,

0.1

X[k]
0.05 . . . .

-0.05 ..__________....__ ___.__ _.....__ _ _ _ _..___________________,


-50 -40 -30 -20 -10 o 10 20 30 40 50
k
(b)

FIGURE 3.10 The FS coefficjents, X[k], -50 < k < 50, for tw<> square waves: {a) T,IT = ¼and
(b) TslT = ft.
176 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS

The functional form sin( 1ru)/1ru occurs sufficiently often in Fourier analysis that we
give it a specíal name:

. ( ) sin( 1ru}
s1nc u = (3.21)
1TU

A graph of sinc(u) is depicted in Fig. 3.11. The maximum of the sinc function is unity at
u = O, the zero crossings occur at integer values of u, and the magnitude dies off as 1/u.
The portion of the sínc function between the zero crossings at u = :::t: 1 is known as the
mainl<>he of the sinc function. The smaller ripples outside the mainlobe are termed side-
lobes. The FS coefficients in Eq. (3.20) are expressed using the sinc function notation as

X[kl
= 2Ts . k 2Ts
T s1nc T

Each term in the FS of Eq. (3 .18) associated with a nonzero coefficient X[k] contri butes
t(> the representation of the signal. The square wave of the previous example provides a
convenient illustration of how the individual terms in the FS contribute to the representation
of x(t). As with the DTFS square wave representation, we exploit the even symmetry of X[k]
to write the FS as a sum of harmonically related cosines. Since X[k] = X[-k], we have
00

x(t) = L X[k]eikwot
k=-oo
00

rn=l

= XfO] + L
m=l
2X[m] cos(mw t) 0

If we define B[O] = X[O] and B[k] = 2X[k], k -::/= O, then


""
x(t) = I
kc:O
B[k] cos(kw0 t)

0.8

0.6

0.4
sinc (u)
0.2

-0.2

-0.4 .______.____________.__ __.___~_......__.__~-----


-10 -8 -6 -4 -2 O 2 4 6 8 10
u
FIGVHE 3.11 Sínc functit>n.
3.3 Continuous-Ti1ne Periodic Signals: The Fourier Series 177

ExAMPLE 3. 7 We define the partial sum approximation to the FS representation for the
square wave, as shown by
J
X;(t) == L
k=O
B[k] cos(kw t) 0

Assume T = 1 and T /T = ¼. Note that in this case we have


5

.!
2' k=O
,,, .. . 2( -1 )lk-1)/2
.....

B[k] = k'TT , k odd .·

o, k even
so the even indexed coefficients are zero. Depict one period of the Jth term in this sum and
x1(t) for J = 1, 3, 7, 29, and 99.
Solution: The individual terms arid partia! sum approximations are depicted in Fig. 3.12.
The behavior of the partial sum approximation in the vicinity of the square wave disconti-
nuities at t = ±¼ is of particular interest. We note that each partial sum approxímation passes
through the average value (½) of the discontinuity, as stated in our convergence discussion.
On each side of rhe discontinuity the approximation exhibits ripple. As J increases, the max-
imum height of the ripples does not appear to change. ln fact, it can be shown for any finite
J that the maximum ripple is 9% of the discontinuity. This ripple near díscontinuities in partial
sum FS approximatíons is termed the Gibbs phenomenon in hont>r of rhe mathematical phys-
icíst J. Willard Gibbs for his explanation of this phenomenon in 1899. The square wave
satisfies the Dirichlet conditions and so we know that the FS approximation ultimately con-
verges to the square wave for ali values of t except at the discontinuities. However, for finite
J the ripple is always present. As J increases, the ripple in the partia! sum approximations
becomes more and more concentrated near the discontinuitíes. Hence, for any given J, the
accuracy of the partial sum approximation is best at times distant from discontinuities and
worst near the discontinuities•
..
. ..,::

~ 0.5
-!
cS
u
o
-
~ -0.5 i::-_.,.-
-1 '---------'---_,___ _ _ _ __,__-'-_~_ __.__ ___.
-0.5 -0.4 -0.3 -0.2 -0. l O 0.1 0.2 0.3 0.4 0.5
t
1.5 .----..----,---.......---r---.---~---,----.----.----,

l -
~
--:::: 0.5 ..

oi=
. . . ___._..
-0.5 ,__-~- .L._ _,__ _ _ _ _ _ __.__ __.___ _..l. _ _,
-0.5 -0.4 -0.3 -0.2 -0. l O 0.1 0.2 0.3 0.4 0.5
t
(a)

FIGURE 3.12 Individual terms in FS expansion for a square \vave (top panei) and the corre-
sponding partial sum approximations x1(t) (bottom panei). The J = O term is .x0 (t) = ½and is not
shown. (a) J = 1.
1 .---~---------~-------,------.---

~
u
o
,.....
~ -0.5 -
-1 .___...__....__....,__..........._ __.__ _ ____.__ ___.__ ___,__ _
-0.5 -0.4 -0.3 -0.2 -0. 1
O 0.1 0.2 0.3 0.4 0.5
t
1.5 .-----......----.....---,------------.--~

1
e-
~ 0.5 ....
( lo<
o ...
-0.5 L----'---...i.---'--...L..-.........-....L.._ ___.__ ____.__ ___,______

-0.5 -0.4 -0.3 -0.2 -0.1 O 0.1 0.2 0.3 0.4 0.5
t
(b)

l .-----~-------.....--~------.-----.-----.
~
a 05
.
-
r---

.....,
~-0.5 ... ..
-11---~-...1.--...1..--....L.---'--.....__ _.__-1-_-1-_--1
-0.5 -0.4 -0.3 -0.2 -0.1 O 0.1 0.2 0.3 0.4 0.5
t
1.5 ....---..---~-...-----,---.---..----,--......,.----,

1
-·~....
._.,
r--- 0.5 -
o
-0.5 '---..L----'----'---...L..-~---'----'----'--__.._---'
-0.5 -0.4 -0.3 -0.2 -0. l O 0.1 0.2 0.3 0.4 0.5
t
(e)

-.a...-
~

0.5
l
.....
1 ! ! 1 1

-
-
°'
N
:r.
ou
,.....
o
O\
~-0.5
._ -
CCi
i f r r r
-1
-0.5 -0.4 -0.3 -0.2 -0.l o 0.1 0.2 0.3 0.4 0.5
t
1.5 ! 1

,.._ ,..•
1 ,_ -

-....
.._,
0.5 ._

-
·~ °'
N

o - - -
V

-0.5 1 1
' 1 1

-0.5 -0.4 -0.3 -0.2 -0.1 o 0.1 0.2 0.3 0.4 0.5
t
(d)

FIGlJRI:: 3. 12 (continued) (b) J = 3. (e) J = 7. (d) J = 29.

FIGURE 3.12 (continued) (e) J = 99.

• Drill Problem 3.4 Find the FS representation for the sawtooth wave depicted in Fig. 3.13. Hint: Use integration by parts.
Answer: Integrate t from -½ to 1 in Eq. (3.19) to obtain
x(t) <---FS; ω0 = 4π/3--->  X[k] = 1/4,  k = 0
                            X[k] = −(2e^{−jkω0} + e^{jkω0/2}) / (3jkω0),  otherwise


The following example exploits linearity and the FS representation for the square wave to determine the output of an LTI system.

EXAMPLE 3.8 Here we wish to find the FS representation for the output, y(t), of the RC circuit depicted in Fig. 3.14 in response to the square wave input depicted in Fig. 3.9, assuming Ts/T = ¼, T = 1 s, and RC = 0.1 s.


Solution: If the input to an LTI system is expressed as a weighted sum of sinusoids, then the output is also a weighted sum of sinusoids. The kth weight in the output sum is given by

FIGURE 3.13 Periodic signal for Drill Problem 3.4.

FIGURE 3.14 RC circuit for Example 3.8.

the product of the kth weight in the input sum and the system frequency response evaluated at the kth sinusoid's frequency. Hence if

x(t) = Σ_{k=−∞}^{∞} X[k] e^{jkω0t}

then the output y(t) is

y(t) = Σ_{k=−∞}^{∞} H(jkω0) X[k] e^{jkω0t}

where H(jw) is the frequency response of the system. Thus


y(t) <---FS; ω0--->  Y[k] = H(jkω0) X[k]

The frequency response of the RC circuit was computed in Example 2.15 as

H(jω) = (1/RC) / (jω + 1/RC)

and the FS coefficients for the square wave are given in Eq. (3.20). Substituting for H(jkω0) with RC = 0.1 s and ω0 = 2π, and using Ts/T = ¼, gives

Y[k] = [10 / (j2πk + 10)] · [sin(kπ/2) / (kπ)]

The magnitude spectrum |Y[k]| goes to zero in proportion to 1/k² as k increases, so a reasonably accurate representation for y(t) may be determined using a modest number of terms in the FS. We determine y(t) using

y(t) ≈ Σ_{k=−100}^{100} Y[k] e^{jkω0t}

The magnitude and phase of Y[k] for −25 ≤ k ≤ 25 are depicted in Figs. 3.15(a) and (b), respectively. Comparing Y[k] to X[k] as depicted in Fig. 3.10(a), we see that the circuit attenuates the amplitude of X[k] when k ≥ 1. The degree of attenuation increases as frequency, kω0, increases. The circuit also introduces a frequency-dependent phase shift. One period of the time waveform y(t) is shown in Fig. 3.15(c). This result is consistent with our intuition from circuit analysis. When the input switches from 0 to 1, the charge on the capacitor increases and the voltage exhibits an exponential rise. When the input switches from 1 to 0, the capacitor discharges and the voltage exhibits an exponential decay.
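This computation is straightforward to reproduce. The following Python sketch is an added illustration, not the authors' code; NumPy is assumed. It forms Y[k] = H(jkω0)X[k] for |k| ≤ 100 and synthesizes one period of y(t), which displays the exponential rise and decay described above.

import numpy as np

T, Ts, RC = 1.0, 0.25, 0.1
w0 = 2 * np.pi / T
k = np.arange(-100, 101)

# Square-wave FS coefficients: X[k] = sin(k*pi/2)/(k*pi), with X[0] = 2*Ts/T = 1/2
X = np.where(k == 0, 2 * Ts / T,
             np.sin(k * np.pi / 2) / np.where(k == 0, 1, k * np.pi))

H = (1 / RC) / (1j * k * w0 + 1 / RC)                 # frequency response at the harmonics
Y = H * X

t = np.linspace(-0.5, 0.5, 1001)
y = np.real(np.exp(1j * np.outer(t, k) * w0) @ Y)     # truncated FS synthesis of y(t)
print("y(t) ranges from", y.min().round(3), "to", y.max().round(3))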
FIGURE 3.15 The FS coefficients Y[k], −25 ≤ k ≤ 25, for the RC circuit output in response to a square wave input. (a) Magnitude spectrum. (b) Phase spectrum. (c) One period of the output, y(t).

3.4 Discrete-Time Nonperiodic Signals: The Discrete-Time Fourier Transform

• DERIVATION

A rigorous derivation of the DTFT is complex, so we employ an intuitive approach. We develop the DTFT from the DTFS by describing a nonperiodic signal as the limit of a periodic signal whose period, N, approaches infinity. For this approach to be meaningful, we assume that the nonperiodic signal is represented by a single period of the periodic signal that is centered on the origin, and that the limit as N approaches infinity is taken in a symmetric manner. Let x̃[n] be a periodic signal with period N = 2M + 1. Define the finite-duration nonperiodic signal x[n] as one period of x̃[n], as shown by

x[n] = x̃[n],  −M ≤ n ≤ M
x[n] = 0,     |n| > M

This relationship is illustrated in Fig. 3.16. Note that as M increases, the periodic replicates of x[n] that are present in x̃[n] move farther and farther away from the origin. Eventually, as M → ∞, these replicates are removed to infinity. Thus we may write

x[n] = lim_{M→∞} x̃[n]          (3.22)

Begin with the DTFS representation for the periodic signal x̃[n]. We have the DTFS pair

x̃[n] = Σ_{k=−M}^{M} X[k] e^{jkΩ0n}          (3.23)

X[k] = (1/(2M + 1)) Σ_{n=−M}^{M} x̃[n] e^{−jkΩ0n}          (3.24)

FIGURE 3.16 Approximation of a nonperiodic signal with a periodic signal. (a) Nonperiodic signal x[n]. (b) Periodic approximation x̃[n].

Since x̃[n] = x[n] for −M ≤ n ≤ M, we may rewrite Eq. (3.24) in terms of x[n] as

X[k] = (1/(2M + 1)) Σ_{n=−M}^{M} x[n] e^{−jkΩ0n}
     = (1/(2M + 1)) Σ_{n=−∞}^{∞} x[n] e^{−jkΩ0n}

where the second line follows from the fact that x[n] = 0 for |n| > M. We now define a continuous function of frequency, X(e^{jΩ}), whose samples at kΩ0 are equal to the DTFS coefficients normalized by 2M + 1. That is,

X(e^{jΩ}) = Σ_{n=−∞}^{∞} x[n] e^{−jΩn}          (3.25)

so that X[k] = X(e^{jkΩ0})/(2M + 1). Substituting this definition for X[k] into Eq. (3.23), we obtain

x̃[n] = (1/(2M + 1)) Σ_{k=−M}^{M} X(e^{jkΩ0}) e^{jkΩ0n}

Using the relationship Ω0 = 2π/(2M + 1), we write

x̃[n] = (1/2π) Σ_{k=−M}^{M} X(e^{jkΩ0}) e^{jkΩ0n} Ω0          (3.26)

At this point we invoke the fact that x[n] is the limiting value of x̃[n] as M → ∞. However, let us first consider the effect of M → ∞ on the fundamental frequency, Ω0. As M increases, Ω0 decreases and the spacing between harmonics in the DTFS decreases. This decrease in harmonic spacing is illustrated in Fig. 3.17 by depicting X(e^{jkΩ0}) for increasing values of M. Note that X(e^{jΩ}) is 2π periodic in Ω. This follows from Eq. (3.25) and the 2π periodicity of e^{−jΩn}. Combining Eq. (3.22) with Eq. (3.26), we have

x[n] = lim_{M→∞} (1/2π) Σ_{k=−M}^{M} X(e^{jkΩ0}) e^{jkΩ0n} Ω0          (3.27)

In Eq. (3.27) we are summing values of a function X(e^{jΩ})e^{jΩn} evaluated at Ω = kΩ0, multiplied by the width between samples, Ω0. This is the rectangular rule approximation to an integral. Taking the limit and identifying Ω = kΩ0 so that dΩ = Ω0, the sum in Eq. (3.27) passes to the integral

x[n] = (1/2π) ∫_{−π}^{π} X(e^{jΩ}) e^{jΩn} dΩ

The limits on the integral are obtained by noting that lim_{M→∞} MΩ0 = π. We have thus expressed x[n] as a weighted superposition of discrete-time sinusoids. In this case the superposition is an integral and the weighting on each sinusoid is (1/2π)X(e^{jΩ}) dΩ.
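The limiting argument can be checked numerically. The Python sketch below is added for illustration (NumPy assumed): it takes a short pulse x[n], forms the DTFS coefficients of its period-(2M+1) periodic extension, and compares (2M+1)X[k] with the DTFT X(e^{jΩ}) evaluated at Ω = kΩ0. The agreement is exact once the period contains the pulse, and the frequency samples simply become denser as M grows.

import numpy as np

def dtft(x, n, Omega):
    # Direct evaluation of X(e^{jOmega}) = sum_n x[n] e^{-j Omega n}
    return np.array([np.sum(x * np.exp(-1j * W * n)) for W in Omega])

n = np.arange(-2, 3)
x = np.ones(5)                                   # pulse: x[n] = 1 for |n| <= 2, zero elsewhere

for M in (4, 8, 16):
    N = 2 * M + 1
    Omega0 = 2 * np.pi / N
    n_per = np.arange(-M, M + 1)
    x_per = np.where(np.abs(n_per) <= 2, 1.0, 0.0)          # one period of the extension
    k = np.arange(-M, M + 1)
    Xk = np.array([np.sum(x_per * np.exp(-1j * kk * Omega0 * n_per)) for kk in k]) / N
    err = np.max(np.abs(N * Xk - dtft(x, n, k * Omega0)))
    print("M =", M, " max |(2M+1)X[k] - X(e^{jk Omega0})| =", err.round(12))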

• THE DTFT REPRESENTATION


The DTFT representation is expressed as

x[n] = (1/2π) ∫_{−π}^{π} X(e^{jΩ}) e^{jΩn} dΩ          (3.28)
FIGURE 3.17 Example spectra X(e^{jkΩ0}) for increasing values of M. M increases from (a) to (c), so Ω0 decreases.

where

X(e^{jΩ}) = Σ_{n=−∞}^{∞} x[n] e^{−jΩn}          (3.29)

We say that X(e^{jΩ}) and x[n] are a DTFT pair and write

x[n] <---DTFT---> X(e^{jΩ})

The transform X(e^{jΩ}) describes the signal x[n] as a function of sinusoidal frequency Ω and is termed the frequency-domain representation of x[n]. We say that Eq. (3.29) is the DTFT of x[n] since it converts the time-domain signal into its frequency-domain representation. Equation (3.28) is termed the inverse DTFT since it converts the frequency-domain representation back into the time domain.
In deriving the DTFT we assumed that x[n] has finite duration. We may apply these results to infinite-duration signals, but must then address the conditions under which the infinite sum in Eq. (3.29) converges. If x[n] is absolutely summable, that is,

Σ_{n=−∞}^{∞} |x[n]| < ∞

then the sum in Eq. (3.29) converges uniformly to a continuous function of Ω. If x[n] is not absolutely summable, but does have finite energy, that is,

Σ_{n=−∞}^{∞} |x[n]|² < ∞

then it can be shown that the sum in Eq. (3.29) converges in a mean-squared error sense but does not converge pointwise.

Many physical signals encountered in engineering practice satisfy these conditions. However, several common nonperiodic signals, such as the unit step, u[n], do not. In some of these cases we can define a transform pair that behaves like the DTFT by including impulses in the transform. This enables us to use the DTFT as a problem-solving tool even though strictly speaking it does not converge. One example of this is given later in the section; others are presented in Chapter 4.

We now consider several examples illustrating determination of the DTFT for common signals.

EXAMPLE 3.9 Exponential Sequence. Find the DTFT of the sequence x[n] = a^n u[n].

Solution: Using Eq. (3.29), we have

X(e^{jΩ}) = Σ_{n=−∞}^{∞} a^n u[n] e^{−jΩn}
          = Σ_{n=0}^{∞} (a e^{−jΩ})^n

This sum diverges for |a| ≥ 1. For |a| < 1 we have the convergent geometric series

X(e^{jΩ}) = 1 / (1 − a e^{−jΩ}),   |a| < 1

If a is real valued, we may write the magnitude and phase of X(e^{jΩ}) as

|X(e^{jΩ})| = 1 / ((1 − a cos Ω)² + a² sin² Ω)^{1/2}
            = 1 / (a² + 1 − 2a cos Ω)^{1/2}

arg{X(e^{jΩ})} = −arctan( a sin Ω / (1 − a cos Ω) )

The magnitude and phase of X(e^{jΩ}) are depicted graphically in Fig. 3.18 for a = 0.5. Note that both are 2π periodic.
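A quick numerical check of this result is possible by truncating the defining sum. The sketch below is illustrative only and not part of the original text (NumPy assumed); it compares a long partial sum of a^n e^{−jΩn} against the closed form 1/(1 − a e^{−jΩ}) for a = 0.5.

import numpy as np

a = 0.5
Omega = np.linspace(-np.pi, np.pi, 9)
n = np.arange(0, 200)                          # truncate the sum; a^n is negligible beyond this

X_sum = np.array([np.sum(a ** n * np.exp(-1j * W * n)) for W in Omega])
X_closed = 1.0 / (1.0 - a * np.exp(-1j * Omega))
print("max error:", np.max(np.abs(X_sum - X_closed)))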

As in the other Fourier representations, the magnitude spectrum of a signal is the magnitude of X(e^{jΩ}) depicted as a function of Ω. The phase spectrum is the phase of X(e^{jΩ}).

• Drill Problem 3.5 Find the DTFT of x[n] = 2(3)^n u[−n].

Answer:

X(e^{jΩ}) = 2 / (1 − (1/3)e^{jΩ})
FIGURE 3.18 The DTFT of x[n] = (½)^n u[n]. (a) Magnitude spectrum. (b) Phase spectrum.

EXAMPLE 3.10 Rectangular Pulse. Let

x[n] = 1,  |n| ≤ M
x[n] = 0,  |n| > M

as depicted in Fig. 3.19(a). Find the DTFT of x[n].

Solution: Substitute for x[n] in Eq. (3.29) to obtain

X(e^{jΩ}) = Σ_{n=−M}^{M} e^{−jΩn}
FIGURE 3.19 (a) Rectangular pulse in time. (b) DTFT.

Now perform the change of variable, m = n + M, obtaining

X(e^{jΩ}) = Σ_{m=0}^{2M} e^{−jΩ(m−M)}
          = e^{jΩM} Σ_{m=0}^{2M} e^{−jΩm}
          = e^{jΩM} (1 − e^{−jΩ(2M+1)}) / (1 − e^{−jΩ}),   Ω ≠ 0, ±2π, ±4π, ...
          = 2M + 1,   Ω = 0, ±2π, ±4π, ...

The expression for X(e^{jΩ}), when Ω ≠ 0, ±2π, ±4π, ..., may be simplified by symmetrizing the powers of the exponential in the numerator and denominator, as shown by

X(e^{jΩ}) = e^{jΩM} · e^{−jΩ(2M+1)/2}(e^{jΩ(2M+1)/2} − e^{−jΩ(2M+1)/2}) / [e^{−jΩ/2}(e^{jΩ/2} − e^{−jΩ/2})]
          = sin(Ω(2M+1)/2) / sin(Ω/2)

Note that L'Hopital's rule gives

lim_{Ω→0, ±2π, ±4π, ...} sin(Ω(2M+1)/2) / sin(Ω/2) = 2M + 1

Hence rather than writing X(e^{jΩ}) as two forms dependent on the value of Ω, we simply write

X(e^{jΩ}) = sin(Ω(2M+1)/2) / sin(Ω/2)

with the understanding that X(e^{jΩ}), for Ω = 0, ±2π, ±4π, ..., is obtained as a limit. In this example X(e^{jΩ}) is purely real. A graph of X(e^{jΩ}) as a function of Ω is given in Fig. 3.19(b). We see that as M increases, the time extent of x[n] increases while the energy in X(e^{jΩ}) becomes more concentrated near Ω = 0.
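The closed form can again be verified directly. The Python sketch below is an added illustration (NumPy assumed); it sums e^{−jΩn} over |n| ≤ M and compares the result with sin(Ω(2M+1)/2)/sin(Ω/2), avoiding the points Ω = 0, ±2π where the limit 2M + 1 applies.

import numpy as np

M = 10
n = np.arange(-M, M + 1)
Omega = np.linspace(0.05, np.pi, 200)                     # stay away from Omega = 0

X_sum = np.array([np.sum(np.exp(-1j * W * n)) for W in Omega])
X_closed = np.sin(Omega * (2 * M + 1) / 2) / np.sin(Omega / 2)
print("max error:", np.max(np.abs(X_sum - X_closed)))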
EXAMPLE 3.11 Discrete-Time sinc Function. Find the inverse DTFT of

X(e^{jΩ}) = 1,  |Ω| ≤ W
X(e^{jΩ}) = 0,  W < |Ω| < π

as depicted in Fig. 3.20(a).

Solution: First note that X(e^{jΩ}) is specified only for −π < Ω ≤ π. This is all that is needed, since X(e^{jΩ}) is always 2π periodic and the inverse DTFT depends only on the values in the interval −π < Ω ≤ π. Substituting for X(e^{jΩ}) in Eq. (3.28) gives

x[n] = (1/2π) ∫_{−W}^{W} e^{jΩn} dΩ
     = (1/(2πjn)) e^{jΩn} |_{−W}^{W},   n ≠ 0
     = (1/(πn)) sin(Wn),   n ≠ 0

For n = 0, the integrand is unity and we have x[0] = W/π. It is easy to show using L'Hopital's rule that

lim_{n→0} (1/(πn)) sin(Wn) = W/π

and thus we usually write

x[n] = (1/(πn)) sin(Wn)

FIGURE 3.20 (a) Rectangular spectrum X(e^{jΩ}). (b) Inverse DTFT of X(e^{jΩ}).

as the inverse DTFT of X(e^{jΩ}), with the understanding that the value at n = 0 is obtained as the limit. We may also write

x[n] = (W/π) sinc(Wn/π)

using the sinc function notation defined in Eq. (3.21). A graph depicting x[n] is given in Fig. 3.20(b).

EXAMPLE 3.12 The Impulse. Find the DTFT of x[n] = δ[n].

Solution: For x[n] = δ[n], we have

X(e^{jΩ}) = Σ_{n=−∞}^{∞} δ[n] e^{−jΩn}
          = 1

Hence

δ[n] <---DTFT---> 1

This DTFT pair is depicted in Fig. 3.21.

. :,: ... .-.. ,...


, .. ...,
..,,, ..: .

EXAMPLE 3.13 Find the inverse DTFT of X(ein) = 5(0), - '1T < ns 1r.

Solution: By definition,

1 J7t
x[nJ = l'TT _ o(f!)e;nn d(!
71

Use the sifting property of the impulse function to obtain x[n] = 1/21r, and thus write
"

-
1 ~
DTFT
) u ~l,).
~(º -1T <o$ 1T
21T ,
ln thís example we have again de.fined only one period of X(eiº). We can alternatively define
X(ei11) over ali n by writing it as an infinite sum of delta functions shifted by integer multiples
of 21r
··.· ~

· ··•· ··· X(ei0 ) = L 5(fi - k21T)


·, k=-oo

Both definitions are common. This DTFT pair is depicted ín Fig. 3.22.
,,. ; •~.. . • •:: <., ·- -«"•....,,. ,. ••

FIGURE 3.21 (a) Impulse in time. (b) DTFT.


FIGURE 3.22 (a) Impulse in frequency. (b) Inverse DTFT.

This last example presents an interesting dilemma. The DTFT of x[n] = 1/(2π) does not converge, since it is not a square summable signal, yet x[n] is a valid inverse DTFT. This is a direct consequence of allowing impulses in X(e^{jΩ}). We shall treat x[n] and X(e^{jΩ}) as a DTFT pair despite this apparent quandary, because they do satisfy all the properties of a DTFT pair. Indeed, we can greatly expand the class of signals that can be represented by the DTFT if we allow impulses in the transform. Strictly speaking, the DTFTs of these signals do not exist since the sum in Eq. (3.29) does not converge. However, as in this example, we can identify transform pairs using the inverse transform of Eq. (3.28) and thus use the DTFT as a problem-solving tool. Additional examples illustrating the use of impulses in the DTFT are presented in Chapter 4.

• Drill Problem 3.6 Find the inverse DTFT of

X(e^{jΩ}) = 2 cos(2Ω)

Answer:

x[n] = 1,  n = ±2
x[n] = 0,  otherwise

• Drill Problem 3.7 Find the DTFT of

x[n] = 2^n,  0 ≤ n ≤ 9
x[n] = 0,   otherwise

Answer:

X(e^{jΩ}) = (1 − 2^{10} e^{−j10Ω}) / (1 − 2e^{−jΩ})

3.5 Continuous-Time Nonperiodic Signals: The Fourier Transform
The Fourier transform (FT) is used to represent a continuous-time nonperiodic signal as a superposition of complex sinusoids. We shall simply present the FT in this section. An interpretation of the FT as the limiting form of the FS is developed in Problem 3.34. Recall from Section 3.1 that the continuous nonperiodic nature of a time signal implies that the superposition of complex sinusoids involves a continuum of frequencies ranging from −∞ to ∞. Thus the FT representation for a time signal involves an integral over frequency, as shown by

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω          (3.30)

where

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt          (3.31)

In Eq. (3.30) we have expressed x(t) as a weighted superposition of sinusoids. The superposition is an integral and the weight on each sinusoid is (1/2π)X(jω) dω. We say that x(t) and X(jω) are a FT pair and write

x(t) <---FT---> X(jω)

The transform X(jω) describes the signal x(t) as a function of sinusoidal frequency ω and is termed the frequency-domain representation for x(t). Equation (3.31) is termed the FT of x(t) since it converts the time-domain signal into its frequency-domain representation. Equation (3.30) is termed the inverse FT since it converts the frequency-domain representation X(jω) back into the time domain.
The integrals in Eqs. (3.30) and (3.31) may not converge for all functions x(t) and X(jω). An analysis of convergence is beyond the scope of this book, so we simply state several convergence conditions on the time-domain signal x(t). Define

x̂(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω

where X(jω) is given in terms of x(t) by Eq. (3.31). It can be shown that the MSE between x(t) and x̂(t), given by

∫_{−∞}^{∞} |x(t) − x̂(t)|² dt

is zero if x(t) is square integrable, that is, if

∫_{−∞}^{∞} |x(t)|² dt < ∞

Zero MSE does not imply pointwise convergence, or x̂(t) = x(t) at all values of t, but rather that there is zero energy in their difference.
Pointwise convergence is guaranteed at all values of t except those corresponding to discontinuities if x(t) satisfies the Dirichlet conditions for nonperiodic signals:

• x(t) is absolutely integrable:

∫_{−∞}^{∞} |x(t)| dt < ∞

• x(t) has a finite number of local maxima, minima, and discontinuities in any finite interval.

• The size of each discontinuity is finite.

Almost all physical signals encountered in engineering practice satisfy the second and third conditions. However, many common signals, such as the unit step, are not absolutely or square integrable. In some of these cases we can define a transform pair that satisfies FT properties through the use of impulses. In this way we can still use the FT as a problem-solving tool, even though the FT does not converge for such signals in a strict sense.

The following examples illustrate determination of the FT and inverse FT for several common signals.

EXAMPLE 3.14 Real Exponential. Find the FT of x(t) = e^{−at}u(t).

Solution: The FT does not converge for a ≤ 0 since x(t) is not absolutely integrable, as shown by

∫_{0}^{∞} e^{−at} dt = ∞,   a ≤ 0

For a > 0, we have

X(jω) = ∫_{−∞}^{∞} e^{−at}u(t) e^{−jωt} dt
      = ∫_{0}^{∞} e^{−(a+jω)t} dt
      = −(1/(a + jω)) e^{−(a+jω)t} |_{0}^{∞}
      = 1/(a + jω)

Converting to polar form, the magnitude and phase of X(jω) are given by

|X(jω)| = 1/(a² + ω²)^{1/2}
arg{X(jω)} = −arctan(ω/a)

and are depicted in Fig. 3.23.

As before, the magnitude of X(jω) is termed the magnitude spectrum and the phase of X(jω) is termed the phase spectrum of the signal.

• Drill Problem 3.8 Find the FT of x(t) = e^{at}u(−t), assuming a > 0.

Answer: X(jω) = −1/(jω − a).


EXAMPLE 3.15 Rectangular Pulse. Consider the rectangular pulse depicted in Fig. 3.24(a) and defined as

x(t) = 1,  −T ≤ t ≤ T
x(t) = 0,  |t| > T

Find the FT of x(t).

Solution: The rectangular pulse x(t) is absolutely integrable provided T < ∞. For ω ≠ 0 we have

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt
      = ∫_{−T}^{T} e^{−jωt} dt
      = −(1/jω) e^{−jωt} |_{−T}^{T}
      = (2/ω) sin(ωT),   ω ≠ 0
FIGURE 3.23 (a) Real exponential signal. (b) Magnitude spectrum. (c) Phase spectrum.

For ω = 0, the integral simplifies to 2T. It is straightforward to show using L'Hopital's rule that

lim_{ω→0} (2/ω) sin(ωT) = 2T

Thus we write for all ω

X(jω) = (2/ω) sin(ωT)

X(jw)
x(t) 2T

------+--'"'--t
' '
-T T 31T
T T
(a) (b)

FIGlJRE 3.24 (a) Rectangular pulse in time. (b) FT.


194 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS

with the understanding that the value at ω = 0 is obtained by evaluating a limit. In this case X(jω) is real. It is depicted in Fig. 3.24(b). The magnitude spectrum is

|X(jω)| = 2|sin(ωT)/ω|

and the phase spectrum is

arg{X(jω)} = 0,  sin(ωT)/ω > 0
arg{X(jω)} = π,  sin(ωT)/ω < 0

We may write X(jω) using the sinc function notation as

X(jω) = 2T sinc(ωT/π)

This example illustrates a very important property of the Fourier transform. Consider the effect of changing T. As T increases, x(t) becomes less concentrated about the time origin, while X(jω) becomes more concentrated about the frequency origin. Conversely, as T decreases, x(t) becomes more concentrated about the time origin, while X(jω) becomes less concentrated about the frequency origin. In a certain sense, the "width" of x(t) is inversely related to the "width" of X(jω). As a general principle, we shall see that signals concentrated in one domain are spread out in the other domain.
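This inverse relationship between the two "widths" can be seen numerically. The sketch below is an added illustration (Python with NumPy assumed): it approximates X(jω) for the rectangular pulse by a Riemann sum and compares it with 2 sin(ωT)/ω for two values of T. Doubling T doubles X(j0) = 2T while the first zero crossing at ω = π/T moves inward by the same factor.

import numpy as np

def ft_numeric(T, w):
    # Riemann-sum approximation of the FT of the pulse x(t) = 1 for |t| <= T
    t = np.linspace(-T, T, 20001)
    dt = t[1] - t[0]
    return np.array([np.sum(np.exp(-1j * wk * t)) * dt for wk in w])

w = np.linspace(0.1, 20, 50)
for T in (1.0, 2.0):
    X_num = ft_numeric(T, w)
    X_closed = 2 * np.sin(w * T) / w
    print("T =", T, " max error:", np.max(np.abs(X_num - X_closed)).round(4),
          " first zero near w =", np.pi / T)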

EXAMPLE 3.16 The sinc Function. Find the inverse FT of the rectangular spectrum depicted in Fig. 3.25(a) and given by

X(jω) = 1,  −W ≤ ω ≤ W
X(jω) = 0,  |ω| > W

Solution: Using Eq. (3.30) for the inverse FT gives, for t ≠ 0,

x(t) = (1/2π) ∫_{−W}^{W} e^{jωt} dω
     = (1/(2jπt)) e^{jωt} |_{−W}^{W}
     = (1/(πt)) sin(Wt)
     = (W/π) sinc(Wt/π),   t ≠ 0

When t = 0, the integral simplifies to W/π. Since

lim_{t→0} (1/(πt)) sin(Wt) = W/π

we write for all t

x(t) = (1/(πt)) sin(Wt)

with the understanding that the value at t = 0 is obtained as a limit. Figure 3.25(b) depicts x(t).
FIGURE 3.25 (a) Rectangular pulse in frequency. (b) Inverse FT.

Note again the inverse relationship between the concentration of the signal about the origin in the time and frequency domains. As W increases, the frequency-domain representation becomes less concentrated about ω = 0, while the time-domain representation becomes more concentrated about t = 0. Another interesting observation can be made by considering this and the previous example. In the previous example a rectangular time-domain pulse is transformed to a sinc function in frequency. In this example, a sinc function in time is transformed to a rectangular function in frequency. This "duality" is a consequence of the similarity between the forward transform in Eq. (3.31) and the inverse transform in Eq. (3.30) and is studied further in Section 3.6. The next two examples also exhibit this property.
EXAMPLE 3.17 The Impulse. Find the FT of x(t) = δ(t).

Solution: This x(t) does not satisfy the Dirichlet conditions, since the impulse δ(t) is only defined within an integral. We attempt to proceed in spite of this potential problem, using Eq. (3.31) to write

X(jω) = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt
      = 1

The second line follows from the sifting property of the impulse function. Hence

δ(t) <---FT---> 1

and the impulse contains unity contributions from complex sinusoids of all frequencies, from ω = −∞ to ω = ∞.

EXAMPLE 3.18 DC Signal. Find the inverse FT of X(jω) = 2πδ(ω).

Solution: Here again we may expect convergence irregularities since X(jω) has an infinite discontinuity at the origin. Using Eq. (3.30), we find

x(t) = (1/2π) ∫_{−∞}^{∞} 2πδ(ω) e^{jωt} dω
     = 1

Hence we identify

1 <---FT---> 2πδ(ω)

as a FT pair. This implies that the frequency content of a dc signal is concentrated entirely at ω = 0, which is intuitively satisfying.
FIGURE 3.26 Time-domain signal for Drill Problem 3.10.

While convergence of the FT cannot be guaranteed in the previous two examples, the transform pairs do satisfy the properties of a FT pair and are thus useful for analysis. In both cases the transform pairs are consequences of the properties of the impulse function. By permitting the use of impulse functions we greatly expand the class of signals that are representable by the FT and thus enhance the power of the FT as a problem-solving tool. In Chapter 4 we shall use impulses to obtain FT representations for both periodic and discrete-time signals.

• Drill Problem 3.9 Find the inverse FT of

X(jω) = 2 cos ω,  |ω| ≤ π
X(jω) = 0,       |ω| > π

Answer:

x(t) = sin(π(t + 1)) / (π(t + 1)) + sin(π(t − 1)) / (π(t − 1))

• Drill Problem 3.10 Find the FT of the time signal depicted in Fig. 3.26 and given by

x(t) = t,  |t| ≤ 1
x(t) = 0,  |t| > 1

Hint: Use integration by parts.

Answer:

X(jω) = j(2/ω) cos ω − j(2/ω²) sin ω

3.6 Properties of Fourier Representations
The four Fourier representations discussed in this chapter are summarized in Table 3.2. This table provides a convenient reference for both the definition of each transform and identification of the class of signals to which each applies. All four Fourier representations are based on the complex sinusoid. Consequently, they all share a set of common properties that follow from the characteristics of complex sinusoids. This section examines the properties of Fourier representations. In many cases, we derive a property for one representation and simply state it for the other representations. The reader is asked to prove some of these properties in the problem section of this chapter. A comprehensive table of all properties is given in Appendix C.
TABLE 3.2 The Four Fourier Representations
(Borders of the original table: the top and left sides give time-domain characteristics, periodic or nonperiodic signals and continuous or discrete time; the bottom and right sides give the corresponding frequency-domain characteristics, discrete or continuous and nonperiodic or periodic frequency.)

Continuous time, periodic signal - Fourier Series (FS):
  x(t) = Σ_{k=−∞}^{∞} X[k] e^{jkω0t}
  X[k] = (1/T) ∫_{(T)} x(t) e^{−jkω0t} dt
  x(t) has period T, ω0 = 2π/T. Frequency domain: discrete, nonperiodic.

Continuous time, nonperiodic signal - Fourier Transform (FT):
  x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω
  X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt
  Frequency domain: continuous, nonperiodic.

Discrete time, periodic signal - Discrete-Time Fourier Series (DTFS):
  x[n] = Σ_{k=(N)} X[k] e^{jkΩ0n}
  X[k] = (1/N) Σ_{n=(N)} x[n] e^{−jkΩ0n}
  x[n] and X[k] have period N, Ω0 = 2π/N. Frequency domain: discrete, 2π periodic.

Discrete time, nonperiodic signal - Discrete-Time Fourier Transform (DTFT):
  x[n] = (1/2π) ∫_{−π}^{π} X(e^{jΩ}) e^{jΩn} dΩ
  X(e^{jΩ}) = Σ_{n=−∞}^{∞} x[n] e^{−jΩn}
  X(e^{jΩ}) has period 2π. Frequency domain: continuous, 2π periodic.
• PERIODICITY PROPERTIES

The borders of Table 3.2 summarize the periodicity properties of the four representations by denoting time-domain characteristics on the top and left sides with the corresponding frequency-domain characteristics on the bottom and right sides.

Continuous- or discrete-time periodic signals have a Fourier series representation. In a Fourier series the signal is represented as a weighted sum of complex sinusoids having the same period as the signal. A discrete set of frequencies is involved in the series; hence the frequency-domain representation involves a discrete set of weights or coefficients. In contrast, for nonperiodic signals both continuous- and discrete-time Fourier transform representations involve weighted integrals of complex sinusoids over a continuum of frequencies. Hence the frequency-domain representation for nonperiodic signals is a continuous function of frequency. Signals that are periodic in time have discrete frequency-domain representations, while nonperiodic time signals have continuous frequency-domain representations. This is the correspondence indicated on the top and bottom of Table 3.2.

We also observe that the Fourier representations for discrete-time signals, either the DTFS or DTFT, are periodic functions of frequency. This is because the discrete-time complex sinusoids used to represent discrete-time signals are 2π periodic functions of frequency: that is, discrete-time sinusoids whose frequencies differ by integer multiples of 2π are identical. In contrast, Fourier representations for continuous-time signals involve superpositions of continuous-time sinusoids. Continuous-time sinusoids with distinct frequencies are always distinct; thus the frequency-domain representations for continuous-time signals are nonperiodic. Summarizing, discrete-time signals have periodic frequency-domain representations, while continuous-time signals have nonperiodic frequency-domain representations. This is the correspondence indicated on the left and right sides of Table 3.2.

TABLE 3.3 Fourier Representation Periodicity Properties

Time-Domain Property        Frequency-Domain Property
Continuous                  Nonperiodic
Discrete                    Periodic
Periodic                    Discrete
Nonperiodic                 Continuous
In general, representations that are continuous in one domain, either time or frequency, are nonperiodic in the other domain. Conversely, representations that are discrete in one domain, either time or frequency, are periodic in the other domain. These relationships are indicated in Table 3.3.

• LINEARITY

It is a straightforward exercise to show that all four Fourier representations involve linear operations. Specifically, they satisfy the linearity property

z(t) = ax(t) + by(t) <---FT--->        Z(jω) = aX(jω) + bY(jω)
z(t) = ax(t) + by(t) <---FS; ω0--->    Z[k] = aX[k] + bY[k]
z[n] = ax[n] + by[n] <---DTFT--->      Z(e^{jΩ}) = aX(e^{jΩ}) + bY(e^{jΩ})
z[n] = ax[n] + by[n] <---DTFS; Ω0--->  Z[k] = aX[k] + bY[k]

In the above relationships we assume that the uppercase symbols denote the Fourier representation of the corresponding lowercase symbol. Furthermore, in the FS and DTFS cases the signals being summed are assumed to have the same fundamental period.

EXAMPLE 3.19 Suppose z(t) is the periodic signal depicted in Fig. 3.27(a). Use the linearity property and the results of Example 3.6 to determine the FS coefficients Z[k].

Solution: Write z(t) as the sum of signals

z(t) = (3/2)x(t) + (1/2)y(t)

where x(t) and y(t) are depicted in Figs. 3.27(b) and (c), respectively. From Example 3.6 we have

x(t) <---FS; 2π--->  X[k] = (1/(kπ)) sin(kπ/4)
y(t) <---FS; 2π--->  Y[k] = (1/(kπ)) sin(kπ/2)

The linearity property implies that

z(t) <---FS; 2π--->  Z[k] = (3/(2kπ)) sin(kπ/4) + (1/(2kπ)) sin(kπ/2)
FIGURE 3.27 Representation of the periodic signal z(t) as a weighted sum of periodic square waves: z(t) = (3/2)x(t) + (1/2)y(t). (a) z(t). (b) x(t). (c) y(t).

The linearity property is also the basis of the partial fraction method for determining the inverse FT of frequency-domain representations X(jω) given by a ratio of polynomials in jω. Let

X(jω) = [b_M(jω)^M + ··· + b_1(jω) + b_0] / [(jω)^N + a_{N−1}(jω)^{N−1} + ··· + a_1(jω) + a_0]
      = B(jω) / A(jω)

Frequency-domain representations of this form occur frequently in analysis of systems that are described by linear constant-coefficient differential equations. The idea is to write X(jω) as a sum of terms for which the inverse FT is known. We may accomplish this using a partial fraction expansion of X(jω).

We assume that M < N. If M ≥ N, then we may use long division to express X(jω) in the form

X(jω) = Σ_{k=0}^{M−N} f_k (jω)^k + B̃(jω)/A(jω)

The numerator polynomial B̃(jω) now has order one less than that of the denominator, and the partial fraction expansion is applied to determine the inverse Fourier transform of B̃(jω)/A(jω). The inverse Fourier transform of the terms in the sum are obtained from the pair δ(t) <---FT---> 1 and the differentiation property, which is introduced later in this section.
Let the roots of the denominator polynomial in (jω) be d_k, k = 1, 2, ..., N. These roots are found by replacing jω with a generic variable u and determining the roots of the polynomial

u^N + a_{N−1}u^{N−1} + ··· + a_1 u + a_0 = 0

We may then write

X(jω) = [Σ_{k=0}^{M} b_k(jω)^k] / [Π_{k=1}^{N} (jω − d_k)]

Assuming all the roots d_k, k = 1, 2, ..., N are distinct, we may express X(jω) as the sum

X(jω) = Σ_{k=1}^{N} C_k / (jω − d_k)

where the coefficients C_k, k = 1, 2, ..., N are determined by either solving a system of linear equations or by the method of residues. These methods and the expansion for repeated roots are reviewed in Appendix B. In Example 3.14 we derived the FT pair

e^{dt}u(t) <---FT---> 1/(jω − d),   for d < 0

The reader may verify that this pair is valid even if d is complex, provided Re{d} < 0. Assuming the real part of each d_k, k = 1, 2, ..., N, is negative, we use linearity to write

x(t) = Σ_{k=1}^{N} C_k e^{d_k t}u(t) <---FT---> X(jω) = Σ_{k=1}^{N} C_k / (jω − d_k)

The following example illustrates this technique.

EXAMPLE 3.20 Find the inverse FT of

X(jω) = (jω + 1) / ((jω)² + 5jω + 6)

Solution: First find the partial fraction expansion of X(jω). The roots of the denominator polynomial are d_1 = −2 and d_2 = −3. Hence we write X(jω) as the sum

(jω + 1) / ((jω)² + 5jω + 6) = C_1/(jω + 2) + C_2/(jω + 3)

We may solve for C_1 and C_2 using the method of residues, as shown by

C_1 = (jω + 2)(jω + 1) / ((jω)² + 5jω + 6) |_{jω=−2}
    = (jω + 1) / (jω + 3) |_{jω=−2}
    = −1

C_2 = (jω + 3)(jω + 1) / ((jω)² + 5jω + 6) |_{jω=−3}
    = (jω + 1) / (jω + 2) |_{jω=−3}
    = 2
Thus the partial fraction expansion of X(jω) is

X(jω) = −1/(jω + 2) + 2/(jω + 3)

Now using linearity, we obtain

X(jω) <---FT---> x(t) = 2e^{−3t}u(t) − e^{−2t}u(t)
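For higher-order rational functions the residue calculation is convenient to automate. The Python sketch below is added here for illustration (NumPy assumed); it reproduces the coefficients of this example using the residue formula C_k = B(d_k)/A'(d_k) for distinct roots, where B and A are the numerator and denominator polynomials in the variable jω.

import numpy as np

B = np.array([1.0, 1.0])            # numerator:   (jw) + 1
A = np.array([1.0, 5.0, 6.0])       # denominator: (jw)^2 + 5(jw) + 6

d = np.roots(A)                     # poles d_k (here -2 and -3)
Aprime = np.polyder(A)
C = np.polyval(B, d) / np.polyval(Aprime, d)    # residues for distinct roots

for dk, Ck in zip(d, C):
    print("pole", dk.round(3), " residue", Ck.round(3))
# Each term C_k/(jw - d_k) inverts to C_k e^{d_k t} u(t), giving x(t) = 2e^{-3t}u(t) - e^{-2t}u(t)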

• Drill Problem 3.11 Use partial fraction expansion and linearity to determine the inverse FT of

X(jω) = −jω / ((jω)² + 3jω + 2)

Answer:

x(t) = e^{−t}u(t) − 2e^{−2t}u(t)
Partial fraction expansions and linearity are also used to determine the inverse DTFT of frequency-domain representations given as a ratio of polynomials in e^{−jΩ}. Let

X(e^{jΩ}) = [b_M e^{−jΩM} + ··· + b_1 e^{−jΩ} + b_0] / [a_N e^{−jΩN} + a_{N−1} e^{−jΩ(N−1)} + ··· + a_1 e^{−jΩ} + 1]

Representations of this form occur frequently in the study of systems described by linear constant-coefficient difference equations. Note that the constant term in the denominator polynomial has been normalized to unity. As before, we rewrite X(e^{jΩ}) as a sum of terms whose inverse DTFT is known using a partial fraction expansion. We factor the denominator polynomial as

a_N e^{−jΩN} + a_{N−1} e^{−jΩ(N−1)} + ··· + a_1 e^{−jΩ} + 1 = Π_{k=1}^{N} (1 − d_k e^{−jΩ})

Partial fraction expansions based on this factorization are reviewed in Appendix B. In this case, the d_k are roots of the polynomial

u^N + a_1 u^{N−1} + a_2 u^{N−2} + ··· + a_{N−1} u + a_N = 0

Assuming M < N, we may express X(e^{jΩ}) as the sum

X(e^{jΩ}) = Σ_{k=1}^{N} C_k / (1 − d_k e^{−jΩ})

This form again assumes all of the d_k are distinct. Expansions for repeated roots are treated in Appendix B. Since

(d_k)^n u[n] <---DTFT---> 1/(1 − d_k e^{−jΩ})

the linearity property implies

x[n] = Σ_{k=1}^{N} C_k (d_k)^n u[n] <---DTFT---> X(e^{jΩ}) = Σ_{k=1}^{N} C_k / (1 − d_k e^{−jΩ})
EXAMPLE 3.21 Find the inverse DTFT of

X(e^{jΩ}) = (−(5/6)e^{−jΩ} + 5) / (1 + (1/6)e^{−jΩ} − (1/6)e^{−j2Ω})

Solution: The roots of the polynomial

u² + (1/6)u − 1/6 = 0

are d_1 = −1/2 and d_2 = 1/3. We seek coefficients C_1 and C_2 so that

(−(5/6)e^{−jΩ} + 5) / (1 + (1/6)e^{−jΩ} − (1/6)e^{−j2Ω}) = C_1/(1 + (1/2)e^{−jΩ}) + C_2/(1 − (1/3)e^{−jΩ})

Using the method of residues, we obtain

C_1 = (1 + (1/2)e^{−jΩ}) (−(5/6)e^{−jΩ} + 5) / (1 + (1/6)e^{−jΩ} − (1/6)e^{−j2Ω}) |_{e^{−jΩ}=−2}
    = (−(5/6)e^{−jΩ} + 5) / (1 − (1/3)e^{−jΩ}) |_{e^{−jΩ}=−2}
    = 4

C_2 = (1 − (1/3)e^{−jΩ}) (−(5/6)e^{−jΩ} + 5) / (1 + (1/6)e^{−jΩ} − (1/6)e^{−j2Ω}) |_{e^{−jΩ}=3}
    = (−(5/6)e^{−jΩ} + 5) / (1 + (1/2)e^{−jΩ}) |_{e^{−jΩ}=3}
    = 1

Hence

x[n] = 4(−1/2)^n u[n] + (1/3)^n u[n]
• SYMMETRY PROPERTIES-REAL AND IMAGINARY SIGNALS

We develop the symn1etry properties using the FT. Results for the other three Fourier
representations may be obtained in an anal(>g{)US manner and are simply stated.
First, suppose x(t) is real. This implies that x(t) = x'~ (t). Consider X'~ ( jw}, defined
by

X*( jw) f". x(t)e-;w, dt *

= Jx"" x~·(t)e;wr dt
We now may substitute x(t) f<>r x';(t) since x(t) is real, <>btaining

X';( jw) = f" ~ x(t)e-,<-(vJt dt


= X(-jw)
This shc>\vs that X( jw) is cc)mplex-conjugate symmetric or X'i-(jw) = X(-jw). Taking the
real and imaginary parts of this expression gives Re{X(jw)} = Re{X(-jw)} and lm{X(jw)}
= -lm{X(-jw)}. ln \V()rds, if x(t) is real valued, then the real part c)f the tra11sfc)rm is an
even function of frequency, while the imaginary part is an odd function of frequency. This
also implies that the magnitude spectrum is an even function while the phase spectrum is
an odd function. The symmetry conditions in all four Fourier represe11tations of real-valued
TABLE 3.4 Fourier Representation Symmetry Properties for Real-Valued Time Signals
(Columns: complex form; rectangular form; polar form.)

FT:    X*(jω) = X(−jω);   Re{X(jω)} = Re{X(−jω)}, Im{X(jω)} = −Im{X(−jω)};   |X(jω)| = |X(−jω)|, arg{X(jω)} = −arg{X(−jω)}
FS:    X*[k] = X[−k];     Re{X[k]} = Re{X[−k]}, Im{X[k]} = −Im{X[−k]};       |X[k]| = |X[−k]|, arg{X[k]} = −arg{X[−k]}
DTFT:  X*(e^{jΩ}) = X(e^{−jΩ});  Re{X(e^{jΩ})} = Re{X(e^{−jΩ})}, Im{X(e^{jΩ})} = −Im{X(e^{−jΩ})};  |X(e^{jΩ})| = |X(e^{−jΩ})|, arg{X(e^{jΩ})} = −arg{X(e^{−jΩ})}
DTFS:  X*[k] = X[−k];     Re{X[k]} = Re{X[−k]}, Im{X[k]} = −Im{X[−k]};       |X[k]| = |X[−k]|, arg{X[k]} = −arg{X[−k]}

In each case the real part of the Fourier representation has even symmetry and the imaginary part has odd symmetry. Hence the magnitude spectrum has even symmetry and the phase spectrum has odd symmetry. Note that the conjugate symmetry property for the DTFS may also be written as X*[k] = X[N − k], because the DTFS coefficients are N periodic, satisfying X[k] = X[N + k].
Now suppose x(t) is purely imaginary so that x(t) = −x*(t). In this case, we may write

X*(jω) = ∫_{−∞}^{∞} x*(t)e^{jωt} dt
       = −∫_{−∞}^{∞} x(t)e^{−j(−ω)t} dt
       = −X(−jω)

Examining the real and imaginary parts of this relationship gives Re{X(jω)} = −Re{X(−jω)} and Im{X(jω)} = Im{X(−jω)}. That is, if x(t) is purely imaginary, then the real part of the FT has odd symmetry and the imaginary part has even symmetry. The corresponding symmetry relationships for all four Fourier representations are given in Table 3.5. Note that the magnitude and phase spectra have the same symmetry as given in Table 3.4.

TABLE 3.5 Fourier Representation Symmetry Properties for Imaginary-Valued Time Signals
(Columns: complex form; rectangular form.)

FT:    X*(jω) = −X(−jω);          Re{X(jω)} = −Re{X(−jω)}, Im{X(jω)} = Im{X(−jω)}
FS:    X*[k] = −X[−k];            Re{X[k]} = −Re{X[−k]}, Im{X[k]} = Im{X[−k]}
DTFT:  X*(e^{jΩ}) = −X(e^{−jΩ});  Re{X(e^{jΩ})} = −Re{X(e^{−jΩ})}, Im{X(e^{jΩ})} = Im{X(e^{−jΩ})}
DTFS:  X*[k] = −X[−k];            Re{X[k]} = −Re{X[−k]}, Im{X[k]} = Im{X[−k]}
• SYMMETRY PROPERTIES-EVEN AND ODD SIGNALS

Assume x(t) is real valued and has even symmetry. These conditions imply x*(t) = x(t) and x(−t) = x(t), respectively. Using these relationships we may write

X*(jω) = ∫_{−∞}^{∞} x*(t)e^{jωt} dt
       = ∫_{−∞}^{∞} x(t)e^{jωt} dt
       = ∫_{−∞}^{∞} x(−t)e^{−jω(−t)} dt

Now perform the change of variable τ = −t to obtain

X*(jω) = ∫_{−∞}^{∞} x(τ)e^{−jωτ} dτ
       = X(jω)

The only way that the condition X*(jω) = X(jω) holds is for the imaginary part of X(jω) to be zero. Hence if x(t) is real and even, then X(jω) is real. Similarly, we may show that if x(t) is real and odd, then X*(jω) = −X(jω) and X(jω) is imaginary.

The identical symmetry relationships hold for all four Fourier representations. If the time signal is real and even, then the frequency-domain representation is also real. If the time signal is real and odd, then the frequency-domain representation is imaginary. Note that since we have assumed real-valued time signals in deriving these symmetry properties, we may combine the results of this subsection with those of the previous subsection. That is, real and even time signals have real and even frequency-domain representations, and real and odd time signals have imaginary and odd frequency-domain representations.

• TIME-SHIFT PROPERTIES

In this section we consider the effect of a time shift on the Fourier representation. As before, we derive the result for the FT and state the results for the other three representations.

Let z(t) = x(t − t0) be a time-shifted version of x(t). The goal is to relate the FT of z(t) to the FT of x(t). We have

Z(jω) = ∫_{−∞}^{∞} z(t)e^{−jωt} dt
      = ∫_{−∞}^{∞} x(t − t0)e^{−jωt} dt

Now perform the change of variable τ = t − t0, obtaining

Z(jω) = ∫_{−∞}^{∞} x(τ)e^{−jω(τ + t0)} dτ
      = e^{−jωt0} ∫_{−∞}^{∞} x(τ)e^{−jωτ} dτ
      = e^{−jωt0} X(jω)

The result of time shifting by t0 is to multiply the transform by e^{−jωt0}. Note that |Z(jω)| = |X(jω)| and arg{Z(jω)} = arg{X(jω)} − ωt0. Hence a shift in time leaves the magnitude spectrum unchanged and introduces a phase shift that is a linear function of frequency. The slope of this linear phase term is equal to the time shift. A similar property
holds for the other three Fourier representations, as indicated in Table 3.6. These properties are a direct consequence of the time-shift properties of complex sinusoids used in Fourier representations. Time shifting a complex sinusoid results in a complex sinusoid of the same frequency whose phase is shifted by the product of the time shift and the sinusoid's frequency.

TABLE 3.6 Time-Shift Properties of Fourier Representations

x(t − t0) <---FT--->         e^{−jωt0} X(jω)
x(t − t0) <---FS; ω0--->     e^{−jkω0t0} X[k]
x[n − n0] <---DTFT--->       e^{−jΩn0} X(e^{jΩ})
x[n − n0] <---DTFS; Ω0--->   e^{−jkΩ0n0} X[k]
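The entries of Table 3.6 are simple to confirm numerically. The sketch below is illustrative only and not part of the original text (Python with NumPy assumed); it checks the DTFT row for a finite-duration signal by comparing the transform of a shifted sequence with e^{−jΩn0} times the transform of the original.

import numpy as np

def dtft(x, n, Omega):
    # X(e^{jOmega}) = sum_n x[n] e^{-j Omega n} for a finite-duration signal
    return np.array([np.sum(x * np.exp(-1j * W * n)) for W in Omega])

n = np.arange(0, 8)
x = np.array([1.0, 2.0, 3.0, 4.0, 0.0, -1.0, 0.5, 2.0])
n0 = 3                                            # shift amount

Omega = np.linspace(-np.pi, np.pi, 101)
Z = dtft(x, n + n0, Omega)                        # z[n] = x[n - n0]: same samples at indices n + n0
Z_pred = np.exp(-1j * Omega * n0) * dtft(x, n, Omega)
print("max error:", np.max(np.abs(Z - Z_pred)))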

EXAMPLE 3.22 Use the FT of the rectangular pulse x(t) depicted in Fig. 3.28(a) to determine the FT of the time-shifted rectangular pulse z(t) depicted in Fig. 3.28(b).

Solution: First we note that z(t) = x(t − T), so the time-shift property implies that Z(jω) = e^{−jωT}X(jω). In Example 3.15 we obtained

X(jω) = (2/ω) sin(ωT)

Thus we have

Z(jω) = e^{−jωT} (2/ω) sin(ωT)

• Drill Problem 3.12 Use the DTFS of the periodic square wave depicted in Fig. 3.29(a), as derived in Example 3.2, to determine the DTFS of the periodic square wave depicted in Fig. 3.29(b).

Answer:

Z[k] = e^{−jk(6π/7)} sin(k5π/7) / (7 sin(kπ/7))
FIGURE 3.28 Application of the time-shift property for Example 3.22.


FIGURE 3.29 Original and time-shifted square waves for Drill Problem 3.12.

• FREQUENCY-SHIFT PROPERTIES

In the previous subsection we considered the effect of a time shift on the frequency-domain representation. In this section we consider the effect of a frequency shift on the time-domain signal. Suppose x(t) <---FT---> X(jω). The problem is to express the inverse FT of Z(jω) = X(j(ω − γ)) in terms of x(t). Let z(t) <---FT---> Z(jω). By the FT definition, we have

z(t) = (1/2π) ∫_{−∞}^{∞} Z(jω)e^{jωt} dω
     = (1/2π) ∫_{−∞}^{∞} X(j(ω − γ))e^{jωt} dω

Perform the substitution of variables η = ω − γ, obtaining

z(t) = (1/2π) ∫_{−∞}^{∞} X(jη)e^{j(η+γ)t} dη
     = e^{jγt} (1/2π) ∫_{−∞}^{∞} X(jη)e^{jηt} dη
     = e^{jγt} x(t)

Hence a frequency shift corresponds to multiplication in the time domain by a complex sinusoid whose frequency is equal to the shift.

This property is a consequence of the frequency-shift properties of the complex sinusoid. A shift in the frequency of a complex sinusoid is equivalent to multiplication of the original complex sinusoid by another complex sinusoid whose frequency is equal to the shift. Since all the Fourier representations are based on complex sinusoids, they all share this property, as summarized in Table 3.7. Note that the frequency shift must be integer valued in both Fourier series cases. This leads to multiplication by a complex sinusoid whose frequency is an integer multiple of the fundamental frequency. The other observation is that the frequency-shift property is the "dual" of the time-shift property. We may summarize both properties by stating that a shift in one domain, either frequency or time, leads to multiplication by a complex sinusoid in the other domain.
TABLE 3.7 Frequency-Shift Properties of Fourier Representations

e^{jγt} x(t) <---FT--->           X(j(ω − γ))
e^{jk0ω0t} x(t) <---FS; ω0--->    X[k − k0]
e^{jΓn} x[n] <---DTFT--->         X(e^{j(Ω − Γ)})
e^{jk0Ω0n} x[n] <---DTFS; Ω0--->  X[k − k0]

EXAMPLE 3.23 Use the frequency-shift property to determine the FT of the complex sinusoidal pulse

z(t) = e^{j10t},  |t| ≤ π
z(t) = 0,        otherwise

Solution: We may express z(t) as the product of a complex sinusoid, e^{j10t}, and a rectangular pulse

x(t) = 1,  |t| ≤ π
x(t) = 0,  otherwise

Using the results of Example 3.15, we write

x(t) <---FT---> X(jω) = (2/ω) sin(ωπ)

and using the frequency-shift property

e^{j10t} x(t) <---FT---> X(j(ω − 10))

we obtain

z(t) <---FT---> Z(jω) = (2/(ω − 10)) sin((ω − 10)π)

• Drill Problem 3.13 Use the frequency-shift property to find the inverse DTFT of

Z(e^{jΩ}) = 1 / (1 − a e^{j(Ω + π/4)})

Assume |a| < 1.

Answer:

z[n] = e^{−jnπ/4} a^{−n} u[−n]
• SCALING PROPERTIES

Now consider the effect of scaling the time variable on the frequency-domain representation of a signal. Beginning with the FT, let z(t) = x(at). By definition, we have

Z(jω) = ∫_{−∞}^{∞} z(t)e^{−jωt} dt
      = ∫_{−∞}^{∞} x(at)e^{−jωt} dt

Perform the substitution τ = at to obtain

Z(jω) = (1/a) ∫_{−∞}^{∞} x(τ)e^{−j(ω/a)τ} dτ,    a > 0
Z(jω) = −(1/a) ∫_{−∞}^{∞} x(τ)e^{−j(ω/a)τ} dτ,   a < 0

These two integrals may be combined into the single integral

Z(jω) = (1/|a|) ∫_{−∞}^{∞} x(τ)e^{−j(ω/a)τ} dτ          (3.32)
      = (1/|a|) X(jω/a)

Hence scaling the signal in time introduces inverse scaling in the frequency-domain representation and an amplitude scaling.

This effect may be experienced by playing a recorded sound at a speed different from that at which it was recorded. If we play the sound back at a higher speed, corresponding to a > 1, we compress the time signal. The inverse scaling in the frequency domain expands the Fourier representation over a broader frequency band and explains the increase in the perceived pitch of the sound. Conversely, playing the sound back at a slower speed corresponds to expanding the time signal, since 0 < a < 1. The inverse scaling in the frequency domain compresses the Fourier representation and explains the decrease in the perceived pitch of the sound.

EXAMPLE 3.24 Let x(t) be the rectangular pulse

x(t) = 1,  |t| ≤ 1
x(t) = 0,  |t| > 1

Use the FT of x(t) and the scaling property to find the FT of the scaled rectangular pulse

y(t) = 1,  |t| ≤ 2
y(t) = 0,  |t| > 2

Solution: Substituting T = 1 into the result of Example 3.15 gives

X(jω) = (2/ω) sin(ω)

Note that y(t) = x(t/2). Hence application of the scaling property of Eq. (3.32) with a = ½ gives

Y(jω) = 2X(j2ω)
      = (2/ω) sin(2ω)

This answer may also be obtained by substituting T = 2 into the result of Example 3.15. Figure 3.30 illustrates the scaling between time and frequency that occurs in this example.

If x(t) is a periodic signal, then z(t) = x(at) is also periodic and the FS is the appropriate Fourier representation. For convenience we assume that a is positive. In this case, scaling changes the fundamental period of the signal. If x(t) has fundamental period T, then z(t) has fundamental period T/a.
FIGURE 3.30 Illustration of the FT scaling property. (a) Original time signal. (b) Original FT. (c) Scaled time signal y(t) = x(t/2). (d) Scaled FT Y(jω) = 2X(j2ω).

Hence if the fundamental frequency of x(t) is ω0, then the fundamental frequency of z(t) is aω0. By definition, the FS coefficients for z(t) are given by

Z[k] = (a/T) ∫_{(T/a)} z(t)e^{−jkaω0t} dt

Substituting x(at) for z(t) and performing the change of variable as in the FT case, we obtain

x(at) = z(t) <---FS; aω0--->  Z[k] = X[k],   a > 0

That is, the FS coefficients of x(t) and x(at) are identical; the scaling operation simply changes the harmonic spacing from ω0 to aω0.

The scaling operation has a slightly different character in discrete time than in continuous time. First, z[n] = x[pn] is defined only for integer values of p. Second, if |p| > 1, then the scaling operation discards information since it retains only every pth value of x[n]. This loss of information prevents us from expressing the DTFT or DTFS of z[n] in terms of the DTFT or DTFS of x[n] in a manner similar to the continuous-time results derived above.
FIGURE 3.31 Example signal that is zero except at multiples of the scaling parameter p = 3.

To proceed, we define a sequence that is zero except at integer multiples of the scaling parameter p. That is, let

x_z[n] = 0,  unless n/p is integer

An example of such a sequence is given in Fig. 3.31 for p = 3.

If x_z[n] is nonperiodic, then the scaled sequence z[n] = x_z[pn] is also nonperiodic and the DTFT is the appropriate representation. Proceeding from the definition of the DTFT, we may show that

Z(e^{jΩ}) = X_z(e^{jΩ/p})

In the special case of a reflection, p = −1, the assumption of zeros at intermediate values in the time signal is unnecessary and we have

z[n] = x[−n] <---DTFT---> Z(e^{jΩ}) = X(e^{−jΩ})

• Drill Problem 3.14 Use the DTFT of the signal w[n] depicted in Fig. 3.32(a) and the scaling property to determine the DTFT of the signal f[n] depicted in Fig. 3.32(b).

Answer:

F(e^{jΩ}) = sin(7Ω) / sin(Ω)

If x_z[n] is periodic with fundamental period N, then z[n] = x_z[pn] has fundamental period N/p. For convenience we assume p is positive. Note that N is always an integer multiple of p as a consequence of the definition of x_z[n], and thus the fundamental period of z[n] will always be an integer.
FIGURE 3.32 Drill Problem 3.14. (a) w[n]. (b) f[n].

Let Ω0 be the fundamental frequency of x_z[n]. In this case we have

x_z[pn] = z[n] <---DTFS; pΩ0--->  Z[k] = pX_z[k],   p > 0

The scaling operation changes the harmonic spacing from Ω0 to pΩ0 and amplifies the DTFS coefficients by p.
• DIFFERENTIATION AND INTEGRATION

Differentiation and integration are operations that apply to continuous functions. Hence we may consider the effect of differentiation and integration with respect to time for a continuous-time signal, or with respect to frequency in the FT and DTFT since these are continuous functions of frequency. We derive integration and differentiation properties for several of these cases. The analogous discrete-valued operations of differencing and summation are much less commonly used and only briefly discussed.

Differentiation in Time

Consider the effect of differentiating a nonperiodic signal x(t). First, recall that x(t) and its FT, X(jω), are related by

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω)e^{jωt} dω

Differentiating both sides of this equation with respect to t yields

(d/dt) x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) jω e^{jωt} dω

which implies that

(d/dt) x(t) <---FT---> jω X(jω)

That is, differentiating in time corresponds to multiplying by jω in the frequency domain. This operation accentuates the high-frequency components of the signal. Note that differentiation destroys any dc component of x(t) and, consequently, the FT of the differentiated signal at ω = 0 is zero.

EXAMPLE 3.25 The differentiation property implies that

(d/dt)(e^{−at}u(t)) <---FT---> jω / (a + jω)

Verify this result by differentiating and finding the FT directly.

Solution: Using the product rule for differentiation, we have

(d/dt)(e^{−at}u(t)) = −a e^{−at}u(t) + e^{−at}δ(t)
                    = −a e^{−at}u(t) + δ(t)

Taking the FT of each term and using linearity, we may write

(d/dt)(e^{−at}u(t)) <---FT---> −a/(a + jω) + 1
                               = jω/(a + jω)

If x(t) is a periodic signal, then we have the FS representation

x(t) = Σ_{k=−∞}^{∞} X[k]e^{jkω0t}

Differentiating both sides of this equation gives

(d/dt) x(t) = Σ_{k=−∞}^{∞} X[k] jkω0 e^{jkω0t}

and thus we conclude

(d/dt) x(t) <---FS; ω0---> jkω0 X[k]

Once again, differentiation forces the average value of the differentiated signal to be zero; hence the FS coefficient for k = 0 is zero.

. .. '·..

ExAMPLE 3.26 Use the dífferentiation property to find the FS representation for the trian~
gular wave depicted in Fig. 3.33(a).
Solu.tion: Define a waveform
d
. '· .- z(t) = -dt f(t)

Figure- 3.33(b) illustrates z(t). The FS coef.ficients for a periodic square wave were derived
in Example 3.6. The signal z(t) corresponds to the square ,vave x(t) of Example 3.6 provided
we subtract a constant term of two units, scale the amplitude of z(t) by a factor of 4, and set
T5 /T = ¼- That is, z(t) = 4x(t) - 2. Thus Z[k] = 4X[k] - 28[kJ and we may write
,.
....., . ~· . , ..

FS; W [k
O, k =O
z(t}---z ]
0
= 4 sin(k1r/2)
k1t , k =I= o

The differentiation property implies that Z[k] = jkw0 F[k]. Hence we may determine F{k] from
Z[kJ as F[kJ = (1/jkw0 )Z[kJ, except for k = O. The quanrity FfOJ is the average value of x(t)
and is determined by inspection of Fig. 3.33(a) to be T/2 = 1TÍw0 • Therefore
1T
, .
k =O
'..
-1(t) ( FS; W 0

F[k] =
4 sin(k1r/2)
..
• '
k *O
,.··
'
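The coefficients obtained from the differentiation property can be checked by synthesizing the triangular wave from a truncated FS. The sketch below is an added illustration (Python with NumPy assumed); it uses T = 1, so ω0 = 2π and F[0] = 1/2, and compares the synthesized waveform with a directly constructed triangle.

import numpy as np

T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 2001)

f = np.full_like(t, T / 2)                              # F[0] = T/2
for k in range(1, 100):
    F_k = 4 * np.sin(k * np.pi / 2) / (1j * np.pi * w0 * k ** 2)
    f += 2 * np.real(F_k * np.exp(1j * k * w0 * t))     # add the k and -k terms together

# Direct construction: slope +2 on |t| < T/4, slope -2 on the rest of the period
tri = np.where(np.abs(t) <= T / 4, T / 2 + 2 * t,
               np.where(t > T / 4, T - 2 * (t - T / 4), -T / 2 - 2 * t))
print("max synthesis error:", np.max(np.abs(f - tri)).round(4))

The error falls off as the truncation order grows, reflecting the 1/k² decay of the triangle's FS coefficients.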
FIGURE 3.33 Signals for Example 3.26. (a) Triangular wave f(t). (b) Derivative of f(t) is the square wave z(t).

Differentiation in Frequency

Next consider the effect of differentiating the frequency-domain representation of a signal. Beginning with the FT,

X(jω) = ∫_{−∞}^{∞} x(t)e^{−jωt} dt

we differentiate both sides of this equation with respect to ω, and obtain

(d/dω) X(jω) = ∫_{−∞}^{∞} −jt x(t)e^{−jωt} dt

which implies

−jt x(t) <---FT---> (d/dω) X(jω)

Differentiation in frequency corresponds to multiplication in time by −jt.

EXAMPLE 3.27 Use the differentiation in time and differentiation in frequency properties to determine the FT of the Gaussian pulse, defined by g(t) = (1/√(2π))e^{−t²/2} and depicted in Fig. 3.34.

Solution: We note that the derivative of g(t) with respect to time is given by

(d/dt) g(t) = −(t/√(2π)) e^{−t²/2}          (3.33)
            = −t g(t)

FIGURE 3.34 Gaussian pulse g(t).

The differentiation in time property says that

\frac{d}{dt} g(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; j\omega\, G(j\omega)

and thus Eq. (3.33) implies that

-t\, g(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; j\omega\, G(j\omega) \qquad (3.34)

The differentiation in frequency property, as shown by

-jt\, g(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{d}{d\omega} G(j\omega)

indicates that

-t\, g(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{1}{j}\frac{d}{d\omega} G(j\omega) \qquad (3.35)

Since the left-hand sides of Eqs. (3.34) and (3.35) are equal, the right-hand sides must also be equal and thus

\frac{d}{d\omega} G(j\omega) = -\omega\, G(j\omega)

This is a differential-equation description for G(j\omega) that has the same form as the differential-equation description for g(t), given in Eq. (3.33). This implies that the functional form of G(j\omega) is the same as that of g(t), and thus

G(j\omega) = c\, e^{-\omega^2/2}

The constant c is determined by noting (see Appendix A.4) that

G(j0) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt = 1

This indicates that c = 1 and we conclude that a Gaussian pulse is its own Fourier transform, as shown by

\frac{1}{\sqrt{2\pi}}\, e^{-t^2/2} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; e^{-\omega^2/2}
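This self-transform property is easy to check numerically in the spirit of Section 3.7. The MATLAB sketch below approximates the FT integral by a Riemann sum; the step size dt = 0.01, the truncation to |t| <= 10, and the test frequencies are arbitrary choices made only for this check:

>> dt = 0.01; t = -10:dt:10;
>> g = exp(-t.^2/2)/sqrt(2*pi);       % Gaussian pulse g(t)
>> w = -3:0.5:3;                      % frequencies at which to test
>> G = dt*(g*exp(-1j*t'*w));          % Riemann-sum approximation of the FT integral
>> max(abs(G - exp(-w.^2/2)))         % small; reflects only the discretization

The discrepancy reported by the last command reflects only the truncation and discretization of the integral.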

• Drill Problem 3.15 Use the frequency-differentiation property to find the FT of

x(t) = t\, e^{-t} u(t)

Answer:

X(j\omega) = \frac{1}{(1 + j\omega)^2}

The operation of differentiation does not apply to discrete-valued quantities, and thus a frequency-domain differentiation property for the FS or DTFS does not exist. However, a frequency-domain differentiation property does exist for the DTFT. By definition,

X(e^{j\Omega}) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\Omega n}

Differentiation of both sides of this expression with respect to frequency leads to the property

-jn\, x[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; \frac{d}{d\Omega} X(e^{j\Omega})

Integration

The operation of integration applies only to continuous independent variables. Hence we may integrate with respect to time in both the FT and FS and with respect to frequency in the FT and DTFT. We limit our consideration here to integration of nonperiodic signals with respect to time. Define

y(t) = \int_{-\infty}^{t} x(\tau)\, d\tau

That is, the value of y at time t is the integral of x over all time prior to t. Note that

\frac{d}{dt} y(t) = x(t)

so the differentiation property would suggest

Y(j\omega) = \frac{1}{j\omega} X(j\omega) \qquad (3.36)

This relationship is indeterminate at \omega = 0, a consequence of the differentiation operation destroying any dc component of y(t) and implying X(j0) must be zero. Hence Eq. (3.36) applies only to signals with zero average value, that is, X(j0) = 0.

In general, we desire to apply the integration property to signals that do not have zero average value. However, if the average value of x(\tau) is not zero, then it is possible that y(t) is not square integrable and consequently the FT of y(t) does not converge. We may get around this problem by including impulses in the transform. We know Eq. (3.36) holds for all \omega except possibly \omega = 0. The value at \omega = 0 is modified by adding a term c\delta(\omega), where the constant c depends on the average value of x(\tau). The correct result is obtained by setting c = \pi X(j0). This gives the integration property:

\int_{-\infty}^{t} x(\tau)\, d\tau \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{1}{j\omega} X(j\omega) + \pi X(j0)\,\delta(\omega) \qquad (3.37)

where it is understood that the first term on the right-hand side is zero at \omega = 0. Integration may be viewed as an averaging operation and thus it tends to smooth signals in time. This smoothing in time corresponds to deemphasizing the high-frequency components of the signal, as indicated in Eq. (3.37) by the \omega term in the denominator.
We may demonstrate this property by deriving the FT of the unit step. The unit step may be expressed as the integral of the impulse function

u(t) = \int_{-\infty}^{t} \delta(\tau)\, d\tau

Since \delta(t) \stackrel{\mathrm{FT}}{\longleftrightarrow} 1, Eq. (3.37) suggests

u(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; U(j\omega) = \frac{1}{j\omega} + \pi\delta(\omega)

Let us check this result by independently deriving U(j\omega). First, express the unit step as the sum of two functions

u(t) = \frac{1}{2} + \frac{1}{2}\,\mathrm{sgn}(t)

where the signum function, sgn(t), is defined as

\mathrm{sgn}(t) = \begin{cases} -1, & t < 0 \\ 0, & t = 0 \\ 1, & t > 0 \end{cases}

This representation is illustrated in Fig. 3.35. Using the results of Example 3.18, we have 1/2 \stackrel{\mathrm{FT}}{\longleftrightarrow} \pi\delta(\omega). The transform of sgn(t) is derived using the differentiation property. Let sgn(t) \stackrel{\mathrm{FT}}{\longleftrightarrow} S(j\omega). We have

\frac{d}{dt}\,\mathrm{sgn}(t) = 2\delta(t)

Hence

j\omega\, S(j\omega) = 2

We know that S(j0) = 0 because sgn(t) is an odd function and thus has zero average value. This knowledge removes the indeterminacy at \omega = 0 associated with the differentiation property, and we conclude

S(j\omega) = \begin{cases} 2/j\omega, & \omega \neq 0 \\ 0, & \omega = 0 \end{cases}

It is common to write this as S(j\omega) = 2/j\omega with the understanding that S(j0) = 0. Now use linearity to obtain the FT of u(t) as

u(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{1}{j\omega} + \pi\delta(\omega)

This agrees exactly with the transform of the step function obtained using the integration property.

Summation and Differencing

The discrete-time analog of integration is summation: that is, define

y[n] = \sum_{k=-\infty}^{n} x[k]

FIGURE 3.35 Representation of the step function as the sum of a constant and a signum function.

The discrete-time analog of differentiation is differencing. The differencing operation recovers x[n] from y[n], as shown by

x[n] = y[n] - y[n-1]

This gives the differencing property, assuming that x[k] is nonperiodic, as shown by

x[n] = y[n] - y[n-1] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; X(e^{j\Omega}) = \left(1 - e^{-j\Omega}\right) Y(e^{j\Omega})

We may invert this equation to express Y(e^{j\Omega}) as a function of X(e^{j\Omega}); however, we cannot determine Y(e^{j0}). As with the integration property, we may add an impulse to account for a nonzero average value in x[k] and obtain the summation property as

y[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; Y(e^{j\Omega}) = \frac{X(e^{j\Omega})}{1 - e^{-j\Omega}} + \pi X(e^{j0})\,\delta(\Omega), \qquad -\pi < \Omega \le \pi

where the first term in Y(e^{j\Omega}) is assumed zero for \Omega = 0. The quantity Y(e^{j\Omega}) is 2\pi-periodic, and so we may alternatively express this property for all values of \Omega as

y[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; Y(e^{j\Omega}) = \frac{X(e^{j\Omega})}{1 - e^{-j\Omega}} + \pi X(e^{j0}) \sum_{k=-\infty}^{\infty} \delta(\Omega - k2\pi)

Table 3.8 summarizes the differentiation, integration, and summation properties of Fourier representations.

EXAMPLE 3.28 This example illustrates the use of multiple Fourier representation properties. Find x(t) if

X(j\omega) = j\frac{d}{d\omega}\left\{ \frac{e^{j2\omega}}{1 + j\omega/3} \right\}

Solution: We identify three different properties that may be of use in finding x(t): differentiation in frequency, time shifting, and scaling. These must be applied in the proper order to obtain the correct result. Use the transform pair

s(t) = e^{-t}u(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; S(j\omega) = \frac{1}{1 + j\omega}

TABLE 3.8 Commonly Used Differentiation, Integration, and Summation Properties

\frac{d}{dt}x(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; j\omega\, X(j\omega)

\frac{d}{dt}x(t) \;\stackrel{\mathrm{FS};\,\omega_0}{\longleftrightarrow}\; jk\omega_0\, X[k]

-jt\, x(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{d}{d\omega} X(j\omega)

-jn\, x[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; \frac{d}{d\Omega} X(e^{j\Omega})

\int_{-\infty}^{t} x(\tau)\, d\tau \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{1}{j\omega} X(j\omega) + \pi X(j0)\,\delta(\omega)

\sum_{k=-\infty}^{n} x[k] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; \frac{X(e^{j\Omega})}{1 - e^{-j\Omega}} + \pi X(e^{j0}) \sum_{k=-\infty}^{\infty} \delta(\Omega - k2\pi)


to express X(j\omega) as

X(j\omega) = j\frac{d}{d\omega}\left\{ e^{j2\omega}\, S(j\omega/3) \right\}

Performing the innermost property first, we scale, then time shift, and lastly apply the differentiation property. If we define Y(j\omega) = S(j\omega/3), then the scaling property gives

y(t) = 3s(3t) = 3e^{-3t}u(3t) = 3e^{-3t}u(t)

Now define W(j\omega) = e^{j2\omega}\, Y(j\omega) and apply the time-shift property to obtain

w(t) = y(t + 2) = 3e^{-3(t+2)}u(t + 2)

Lastly, since

X(j\omega) = j\frac{d}{d\omega} W(j\omega)

the differentiation property yields

x(t) = t\, w(t) = 3te^{-3(t+2)}u(t + 2)
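A numerical cross-check of this result can be sketched in MATLAB in the spirit of Section 3.7: approximate the FT of x(t) by a Riemann sum and compare it with a finite-difference evaluation of j d/d\omega applied to the given expression. The step sizes, truncation interval, and test frequencies below are arbitrary choices made only for this sketch:

>> dt = 0.001; t = -2:dt:20;
>> x = 3*t.*exp(-3*(t+2));                   % x(t) on its support t >= -2
>> w = -5:0.5:5;
>> Xnum = dt*(x*exp(-1j*t'*w));              % Riemann-sum approximation of the FT
>> S = @(w) exp(1j*2*w)./(1 + 1j*w/3);
>> dw = 1e-4;
>> Xform = 1j*(S(w+dw) - S(w-dw))/(2*dw);    % central-difference estimate of j dS/dw
>> max(abs(Xnum - Xform))                    % small, up to discretization error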

• Drill Problem 3.16 Show that the DTFT of

x[n] = n\, e^{j\pi n/8}\, a^{\,n-3}\, u[n-3]

is

X(e^{j\Omega}) = j\frac{d}{d\Omega}\left\{ \frac{e^{-j3(\Omega - \pi/8)}}{1 - a e^{-j(\Omega - \pi/8)}} \right\}

• CONVOLUTION AND MODULATION PROPERTIES

Two of the most important properties of Fourier representations are the convolution and modulation properties. An important form of modulation refers to multiplication of two signals; one of the signals changes or "modulates" the amplitude of the other. We shall show that convolution in the time domain is transformed to modulation in the frequency domain, and that modulation in the time domain is transformed to convolution in the frequency domain. Hence we may analyze the input-output behavior of a linear system in the frequency domain using multiplication of transforms instead of convolving time signals. This can significantly simplify system analysis and offers considerable insight into system behavior. Both the convolution and modulation properties are a consequence of complex sinusoids being eigenfunctions of LTI systems. We now present these properties for each of the four Fourier representations, beginning with nonperiodic signals.

Nonperiodic Convolution

Consider the convolution of two nonperiodic continuous-time signals x(t) and h(t). Define

y(t) = h(t) * x(t) = \int_{-\infty}^{\infty} h(\tau)\, x(t-\tau)\, d\tau

Now express x(t - \tau) in terms of its FT, as shown by

x(t-\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega(t-\tau)}\, d\omega

Substitute this expression into the convolution integral to obtain

y(t) = \int_{-\infty}^{\infty} h(\tau)\, \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t} e^{-j\omega\tau}\, d\omega\, d\tau = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left\{ \int_{-\infty}^{\infty} h(\tau)\, e^{-j\omega\tau}\, d\tau \right\} X(j\omega)\, e^{j\omega t}\, d\omega

We recognize the inner integral over \tau as the FT of h(\tau), or H(j\omega). Hence y(t) may be rewritten as

y(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} H(j\omega)\, X(j\omega)\, e^{j\omega t}\, d\omega

so y(t) is the inverse FT of H(j\omega)X(j\omega). We conclude that convolution of signals in time corresponds to multiplication of transforms in the frequency domain, as described by

y(t) = h(t) * x(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; Y(j\omega) = X(j\omega)H(j\omega) \qquad (3.38)

The following examples illustrate applications of this property.


EXAMPLE 3.29 Let x(t) = (1/\pi t)\sin(\pi t) and h(t) = (1/\pi t)\sin(2\pi t). Find y(t) = x(t) * h(t).

Solution: This problem is extremely difficult to solve in the time domain. However, it is simple to solve in the frequency domain using the convolution property. We have

x(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X(j\omega) = \begin{cases} 1, & |\omega| \le \pi \\ 0, & \text{otherwise} \end{cases}

h(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; H(j\omega) = \begin{cases} 1, & |\omega| < 2\pi \\ 0, & \text{otherwise} \end{cases}

Since y(t) = x(t) * h(t) \stackrel{\mathrm{FT}}{\longleftrightarrow} Y(j\omega) = X(j\omega)H(j\omega), we have

Y(j\omega) = \begin{cases} 1, & |\omega| \le \pi \\ 0, & \text{otherwise} \end{cases}

and conclude that y(t) = (1/\pi t)\sin(\pi t).



EXAMPLE 3.30 Use the convolution property to find x(t), where

x(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X(j\omega) = \frac{4}{\omega^2}\sin^2(\omega)

Solution: We may write X(j\omega) as the product Z(j\omega)Z(j\omega), where

Z(j\omega) = \frac{2}{\omega}\sin(\omega)

The convolution property states that z(t) * z(t) \stackrel{\mathrm{FT}}{\longleftrightarrow} Z(j\omega)Z(j\omega), so x(t) = z(t) * z(t). We have

z(t) = \begin{cases} 1, & |t| \le 1 \\ 0, & \text{otherwise} \end{cases} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; Z(j\omega)

as depicted in Fig. 3.36(a). Performing the convolution of z(t) with itself gives the triangular waveform depicted in Fig. 3.36(b) as the solution for x(t).
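The triangular result is easy to confirm numerically; the MATLAB sketch below approximates the continuous-time convolution by a scaled discrete convolution of samples (the step size dt = 0.001 is an arbitrary choice):

>> dt = 0.001; t = -1:dt:1;
>> z = ones(size(t));                 % rectangular pulse of height 1 on |t| <= 1
>> x = conv(z, z)*dt;                 % approximates the continuous-time convolution
>> tx = -2:dt:2;                      % support of the result
>> max(abs(x - max(2 - abs(tx), 0)))  % compare with the triangle 2 - |t| on |t| <= 2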

• Drill Problem 3.17 Let the input to a system with impulse response h(t) = 2e^{-2t}u(t) be x(t) = 3e^{-t}u(t). Use the convolution property to find the output of the system, y(t).

Answer:

y(t) = 6\left(e^{-t} - e^{-2t}\right)u(t)

A similar property holds for convolution of discrete-time nonperiodic signals. If x[n] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} X(e^{j\Omega}) and h[n] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} H(e^{j\Omega}), then

y[n] = x[n] * h[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; Y(e^{j\Omega}) = X(e^{j\Omega})H(e^{j\Omega}) \qquad (3.39)

The proof of this result closely parallels that of the continuous-time case.
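For finite-length sequences this property can be checked with the fft, since sampling the DTFT product at L points, with L at least length(x)+length(h)-1, is enough to recover the convolution; the sequences below are arbitrary test data:

>> x = [1 2 3 0 -1]; h = [1 -1 2];
>> y = conv(x, h);
>> L = length(x) + length(h) - 1;
>> y2 = ifft(fft(x, L).*fft(h, L));   % multiply DTFT samples, then invert
>> max(abs(y - y2))                   % zero up to rounding error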


EXAMPLE 3.31 Reconsider the problem addressed in Example 2.12. In this problem a distorted, received signal y[n] is expressed in terms of a transmitted signal x[n] as

y[n] = x[n] + a x[n-1], \qquad |a| < 1

Find the impulse response of an inverse system that will recover x[n] from y[n].

FIGURE 3.36 Signals for Example 3.30. (a) Rectangular pulse z(t). (b) Convolution of z(t) with itself gives x(t).


Solution: In Example 2.12 we solved this problem using convolution. We now solve it in the frequency domain using the convolution property. Write the output as the convolution of the input with the system impulse response as y[n] = x[n] * h[n], where the impulse response h[n] is given by

h[n] = \begin{cases} 1, & n = 0 \\ a, & n = 1 \\ 0, & \text{otherwise} \end{cases}

The impulse response of an inverse system, h^{-1}[n], must satisfy

h^{-1}[n] * h[n] = \delta[n]

Taking the DTFT of both sides of this equation and using the convolution property gives

H^{-1}(e^{j\Omega})\, H(e^{j\Omega}) = 1

which implies the frequency response of the inverse system is given by

H^{-1}(e^{j\Omega}) = \frac{1}{H(e^{j\Omega})}

Substitution of h[n] into the definition of the DTFT yields

h[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; H(e^{j\Omega}) = 1 + a e^{-j\Omega}

Hence

H^{-1}(e^{j\Omega}) = \frac{1}{1 + a e^{-j\Omega}}

Taking the inverse DTFT of H^{-1}(e^{j\Omega}) gives the impulse response of the inverse system

h^{-1}[n] = (-a)^n u[n]
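A quick MATLAB check of the inverse system convolves h[n] with a truncated version of h^{-1}[n]; the value a = 0.5 and the truncation length are arbitrary choices made only for this sketch:

>> a = 0.5; n = 0:30;
>> hinv = (-a).^n;                    % truncated inverse-system impulse response
>> d = conv([1 a], hinv);             % should approximate a unit impulse
>> d(1:5)

The interior samples vanish (up to rounding), leaving only a negligible residual of size a^{31} at the very end, which is the effect of truncating h^{-1}[n].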

• Drill Problem 3.18 Let the impulse response of a discrete-time system be given by h[n] = (1/\pi n)\sin((\pi/4)n). Find the output y[n] in response to the input (a) x[n] = (1/\pi n)\sin((\pi/8)n), and (b) x[n] = (1/\pi n)\sin((\pi/2)n).

Answer:

(a) y[n] = \frac{1}{\pi n}\sin\!\left(\frac{\pi}{8}n\right)

(b) y[n] = \frac{1}{\pi n}\sin\!\left(\frac{\pi}{4}n\right)

Modulation

If x(t) and z(t) are nonperiodic signals, then we wish to express the FT of the product y(t) = x(t)z(t) in terms of the FT of x(t) and z(t). Represent x(t) and z(t) in terms of their FTs as

x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\nu)\, e^{j\nu t}\, d\nu

z(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} Z(j\eta)\, e^{j\eta t}\, d\eta

The product term, y(t), may thus be written in the form

y(t) = \frac{1}{(2\pi)^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} X(j\nu)\, Z(j\eta)\, e^{j(\eta+\nu)t}\, d\eta\, d\nu

Now perform a change of variable on \eta, substituting \eta = \omega - \nu, to obtain

y(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\nu)\, Z(j(\omega - \nu))\, d\nu \right\} e^{j\omega t}\, d\omega

The inner integral over \nu represents the convolution of Z(j\omega) and X(j\omega), while the outer integral over \omega is of the form of the Fourier representation for y(t). Hence we identify this convolution, scaled by 1/2\pi, as Y(j\omega), shown by

y(t) = x(t)z(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; Y(j\omega) = \frac{1}{2\pi} X(j\omega) * Z(j\omega) \qquad (3.40)

where

X(j\omega) * Z(j\omega) = \int_{-\infty}^{\infty} X(j\nu)\, Z(j(\omega - \nu))\, d\nu

Multiplication in the time domain leads to convolution in the frequency domain.


Similarly, if x[n] and z[n] are discrete-time nonperiodic signals, then the DTFT of the product y[n] = x[n]z[n] is given by the convolution of X(e^{j\Omega}) and Z(e^{j\Omega}), although the definition of convolution changes slightly because X(e^{j\Omega}) and Z(e^{j\Omega}) are periodic. Specifically,

y[n] = x[n]z[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; Y(e^{j\Omega}) = \frac{1}{2\pi} X(e^{j\Omega}) \circledast Z(e^{j\Omega}) \qquad (3.41)

where the symbol \circledast denotes periodic convolution. The difference between periodic and nonperiodic convolution is that the integration in a periodic convolution is performed over a single period of the signals involved, whereas in the nonperiodic case it is performed over the entire interval. Here X(e^{j\Omega}) and Z(e^{j\Omega}) are 2\pi-periodic, so we evaluate the convolution over a 2\pi interval, as shown by

X(e^{j\Omega}) \circledast Z(e^{j\Omega}) = \int_{\langle 2\pi \rangle} X(e^{j\theta})\, Z(e^{j(\Omega - \theta)})\, d\theta

An important application of the modulation property is understanding the effects of truncating a signal on its frequency-domain representation. The process of truncating a signal is also known as windowing, since it corresponds to viewing the signal through a window. The portion of the signal that is not visible through the window is truncated. The windowing operation is represented mathematically by multiplying the signal, say, x(t), by a window function w(t) that is zero outside the time range of interest. Denoting the windowed signal by y(t), we have y(t) = x(t)w(t). This operation is illustrated in Fig. 3.37 for a window function that truncates x(t) to the time interval -T < t < T. The FT of y(t) is related to the FTs of x(t) and w(t) through the modulation property:

y(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; Y(j\omega) = \frac{1}{2\pi} X(j\omega) * W(j\omega)

If w(t) is the rectangular window depicted in Fig. 3.37, then

W(j\omega) = \frac{2}{\omega}\sin(\omega T)

FIGURE 3.37 Truncating a signal using a window function w(t).

Figure 3.38 illustrates the effect of windowing with a rectangular window in the frequency domain. The general effect of the window is to smooth detail in X(j\omega) and introduce oscillations near discontinuities in X(j\omega). The smoothing is a consequence of the 2\pi/T width of the mainlobe of W(j\omega), while the oscillations are due to the oscillations in the sidelobes of W(j\omega). The following example illustrates the effect of windowing the impulse response of an ideal discrete-time system.

FIGURE 3.38 Convolution of signal and window FTs resulting from truncation in time.

EXAMPLE 3.32 The frequency response H(e^{j\Omega}) of an ideal discrete-time system is depicted in Fig. 3.39(a). Describe the frequency response of a system whose impulse response is the ideal impulse response truncated to the interval -M \le n \le M.

Solution: The ideal impulse response is the inverse DTFT of H(e^{j\Omega}), as shown by

h[n] = \frac{1}{\pi n}\sin\!\left(\frac{\pi}{2}n\right)

This response is infinite in extent. Let h_t[n] be the truncated impulse response,

h_t[n] = \begin{cases} h[n], & |n| \le M \\ 0, & \text{otherwise} \end{cases}

FIGURE 3.39 Effect of truncating the impulse response of a discrete-time system. (a) Ideal system frequency response. (b) f_\Omega(\theta) for \Omega near zero. (c) f_\Omega(\theta) for \Omega slightly greater than \pi/2. (d) Frequency response of system with truncated impulse response.

We may express h_t[n] as the product of h[n] and a window function w[n], h_t[n] = h[n]w[n], where

w[n] = \begin{cases} 1, & |n| \le M \\ 0, & \text{otherwise} \end{cases}

Let h_t[n] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} H_t(e^{j\Omega}), and use the modulation property to obtain

H_t(e^{j\Omega}) = \frac{1}{2\pi} \int_{\langle 2\pi \rangle} H(e^{j\theta})\, W(e^{j(\Omega - \theta)})\, d\theta

Choose the 2\pi interval of integration to be -\pi < \theta \le \pi. Now use

H(e^{j\theta}) = \begin{cases} 1, & |\theta| \le \pi/2 \\ 0, & \pi/2 < |\theta| < \pi \end{cases}

and

W(e^{j(\Omega - \theta)}) = \frac{\sin\!\left((\Omega - \theta)\,\dfrac{2M+1}{2}\right)}{\sin\!\left(\dfrac{\Omega - \theta}{2}\right)}

to obtain

H_t(e^{j\Omega}) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f_\Omega(\theta)\, d\theta

where we have defined

f_\Omega(\theta) = \begin{cases} W(e^{j(\Omega - \theta)}), & |\theta| < \pi/2 \\ 0, & \text{otherwise} \end{cases}
Figure 3.39(b) depicts f_\Omega(\theta) for \Omega < \pi/2. H_t(e^{j\Omega}) is the area under f_\Omega(\theta) between \theta = -\pi/2 and \theta = \pi/2. To visualize the behavior of H_t(e^{j\Omega}), consider the area under f_\Omega(\theta) as \Omega increases starting from \Omega = 0. As \Omega increases, the small oscillations in f_\Omega(\theta) move through the boundary at \theta = \pi/2. When a positive oscillation moves through the boundary, the net area under f_\Omega(\theta) decreases; the net area increases when a negative oscillation moves through this boundary. Oscillations also move through the boundary at \theta = -\pi/2. However, these are smaller than those on the right because they are farther away from \Omega and thus have much less of an effect. The effect of the oscillations in f_\Omega(\theta) moving through the boundary at \theta = \pi/2 is to introduce oscillations in H_t(e^{j\Omega}). These increase in size as \Omega increases. As \Omega approaches \pi/2, the area under f_\Omega(\theta) decreases rapidly because the mainlobe moves through \theta = \pi/2. Figure 3.39(c) depicts f_\Omega(\theta) for \Omega slightly larger than \pi/2. As \Omega continues to increase, the oscillations to the left of the mainlobe move through the boundary at \theta = \pi/2, causing additional oscillations in the area under f_\Omega(\theta). However, now the net area oscillates about zero because the mainlobe of f_\Omega(\theta) is no longer included.

Thus H_t(e^{j\Omega}) takes on the form depicted in Fig. 3.39(d). Truncation of the ideal impulse response introduces ripple into the frequency response and widens the transitions at \Omega = \pm\pi/2. These effects decrease as M increases, since then the mainlobe of W(e^{j\Omega}) becomes narrower and the oscillations decay more quickly.

• Drill Problem 3.19 Use the modulation property to find the FT of

x(t) = \frac{4}{\pi^2 t^2}\sin^2(2t)

Answer: See Fig. 3.40.



FIGURE 3.40 Solution to Drill Problem 3.19.

Convolution and Modulation for Signals That Are Periodic in Time

This subsection addresses the convolution and modulation properties for signals that are periodic functions of time. Convolution of periodic signals does not occur naturally in the context of evaluating the input-output relationships for systems, since any system with a periodic impulse response is unstable. However, convolution of periodic signals is a useful signal analysis and manipulation tool. We have already encountered an example of periodic convolution in the modulation property for the DTFT.

Define the periodic convolution of two continuous-time signals x(t) and z(t), each having period T, as

y(t) = x(t) \circledast z(t) = \int_{\langle T \rangle} x(\tau)\, z(t-\tau)\, d\tau

Here again the \circledast symbol denotes that integration is performed over a single period of the signals involved. y(t) is also periodic with period T; hence the FS is the appropriate representation for all three signals, x(t), z(t), and y(t).

Substitution of the FS representation for z(t) into the convolution integral leads to the property

y(t) = x(t) \circledast z(t) \;\stackrel{\mathrm{FS};\,2\pi/T}{\longleftrightarrow}\; Y[k] = T\, X[k]Z[k] \qquad (3.42)
Again we see that convolution in time transforms to multiplication of the frequency-domain representations. This property explains the origin of the Gibbs phenomenon that was observed in Example 3.7. A partial sum approximation to the FS representation for x(t) may be obtained using FS coefficients Y[k] given by the product of X[k] and a function Z[k] that is 1 for -J \le k \le J and zero otherwise. In the time domain, y(t) is the periodic convolution of x(t) and z(t), where

z(t) = \frac{\sin\!\left(\dfrac{2J+1}{2}t\right)}{\sin\!\left(\dfrac{t}{2}\right)}

The signal z(t) corresponds exactly to X(e^{j\Omega}) depicted in Fig. 3.19(b) if we replace \Omega by t and M by J. The periodic convolution of x(t) and z(t) is the area under shifted versions of z(t) on |t| < 1/2. The ripples in the partial sum approximation of x(t) are a consequence of the variations in this area associated with shifting sidelobes of z(t) into and out of the interval |t| < 1/2.
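A plot of this kernel makes the mechanism visible; the MATLAB sketch below uses an arbitrary choice J = 10 and simply displays z(t) over one 2\pi interval of its argument:

>> J = 10; t = linspace(-pi, pi, 1000);
>> z = sin((2*J+1)*t/2)./sin(t/2);
>> z(abs(sin(t/2)) < 1e-12) = 2*J + 1;   % limiting value where the denominator vanishes
>> plot(t, z)                            % tall mainlobe with oscillating sidelobes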

The discrete-time convolution of two N-periodic sequences x[n] and z[n] is defined as

y[n] = x[n] \circledast z[n] = \sum_{k=\langle N \rangle} x[k]\, z[n-k]

This is the periodic convolution of x[n] and z[n]. The signal y[n] is N-periodic, so the DTFS is the appropriate representation for all three signals, x[n], z[n], and y[n]. Substitution of the DTFS representation for z[n] results in the property

y[n] = x[n] \circledast z[n] \;\stackrel{\mathrm{DTFS};\,2\pi/N}{\longleftrightarrow}\; Y[k] = N\, X[k]Z[k] \qquad (3.43)

Convolution of time signals is transformed to multiplication of DTFS coefficients.

The modulation property for periodic signals is also analogous to that of nonperiodic signals. Multiplication of periodic time signals corresponds to convolution of the Fourier representations. Specifically, in continuous time we have

y(t) = x(t)z(t) \;\stackrel{\mathrm{FS};\,2\pi/T}{\longleftrightarrow}\; Y[k] = X[k] * Z[k] \qquad (3.44)

where

X[k] * Z[k] = \sum_{m=-\infty}^{\infty} X[m]\, Z[k-m]

is the nonperiodic convolution of the FS coefficients. All three time-domain signals have a common fundamental period, T. In discrete time

y[n] = x[n]z[n] \;\stackrel{\mathrm{DTFS};\,2\pi/N}{\longleftrightarrow}\; Y[k] = X[k] \circledast Z[k] \qquad (3.45)

where

X[k] \circledast Z[k] = \sum_{m=\langle N \rangle} X[m]\, Z[k-m]

is the periodic convolution of DTFS coefficients. Again, all three time-domain signals have a common fundamental period, N.
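Equation (3.43) is easy to verify numerically with the DTFS convention of Section 3.7 (X = fft(x)/N); the sequences below are arbitrary test data:

>> N = 8; x = randn(1,N); z = randn(1,N);
>> y = zeros(1,N);
>> for nn = 0:N-1
     y(nn+1) = sum(x .* z(mod(nn-(0:N-1), N) + 1));   % y[n] = sum over k of x[k] z[n-k]
   end
>> X = fft(x)/N; Z = fft(z)/N; Y = fft(y)/N;
>> max(abs(Y - N*X.*Z))                               % zero up to rounding error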

EXAMPLE 3.33 Evaluate the periodic convolution of the sinusoidal signal

x(t) = 2\cos(2\pi t) + \sin(4\pi t)

with the T-periodic square wave z(t) depicted in Fig. 3.41.

Solution: Both x(t) and z(t) have fundamental period T = 1. Let y(t) = x(t) \circledast z(t). The convolution property indicates that y(t) \stackrel{\mathrm{FS};\,2\pi}{\longleftrightarrow} Y[k] = X[k]Z[k]. The FS representation for x(t) has coefficients

X[k] = \begin{cases} 1, & k = \pm 1 \\ 1/2j, & k = 2 \\ -1/2j, & k = -2 \\ 0, & \text{otherwise} \end{cases}

FIGURE 3.41 Square wave for Example 3.33.


The FS coefficients for z(t) may be obtained from Example 3.6 as

Z[k] = \frac{2\sin(k\pi/2)}{k2\pi}

Hence the FS coefficients for y(t) are

Y[k] = \begin{cases} 1/\pi, & k = \pm 1 \\ 0, & \text{otherwise} \end{cases}

which implies

y(t) = \frac{2}{\pi}\cos(2\pi t)

The convolution and modulation properties are summarized for all four Fourier representations in Table 3.9. In general, convolution in one domain corresponds to multiplication or modulation in the other domain. Periodic convolution is used for periodic signals and nonperiodic convolution is used for nonperiodic signals.

We have not yet considered several important cases of convolution and modulation that occur when there is a mixing of signal classes. Examples include the modulation of a nonperiodic signal with a periodic signal and the convolution of a periodic and a nonperiodic signal, such as occurs when applying a periodic signal to a linear system. The properties derived here can be applied to these cases if we use a Fourier transform representation for periodic signals. This representation is developed in Chapter 4.

TABLE 3.9 Convolution and Modulation Properties

FT: \quad x(t) * z(t) \stackrel{\mathrm{FT}}{\longleftrightarrow} X(j\omega)Z(j\omega); \qquad x(t)z(t) \stackrel{\mathrm{FT}}{\longleftrightarrow} \frac{1}{2\pi} X(j\omega) * Z(j\omega)

FS; \omega_0: \quad x(t) \circledast z(t) \stackrel{\mathrm{FS};\,\omega_0}{\longleftrightarrow} T\, X[k]Z[k]; \qquad x(t)z(t) \stackrel{\mathrm{FS};\,\omega_0}{\longleftrightarrow} X[k] * Z[k]

DTFT: \quad x[n] * z[n] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} X(e^{j\Omega})Z(e^{j\Omega}); \qquad x[n]z[n] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} \frac{1}{2\pi} X(e^{j\Omega}) \circledast Z(e^{j\Omega})

DTFS; \Omega_0: \quad x[n] \circledast z[n] \stackrel{\mathrm{DTFS};\,\Omega_0}{\longleftrightarrow} N\, X[k]Z[k]; \qquad x[n]z[n] \stackrel{\mathrm{DTFS};\,\Omega_0}{\longleftrightarrow} X[k] \circledast Z[k]


EXAMPLE 3.34 Find the FT of the signal

x(t) = \frac{d}{dt}\left\{ \left(e^{-3t}u(t)\right) * \left(e^{-2t}u(t-2)\right) \right\}

Solution: First we break the problem up into a series of simpler problems. Let w(t) = e^{-3t}u(t) and v(t) = e^{-2t}u(t-2) so that we may write

x(t) = \frac{d}{dt}\left\{ w(t) * v(t) \right\}

Hence applying the differentiation and convolution properties, we obtain

X(j\omega) = j\omega\, W(j\omega)V(j\omega)

The transform pair

e^{-at}u(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{1}{a + j\omega}

implies

W(j\omega) = \frac{1}{3 + j\omega}

We use the same transform pair and the time-shift property to find V(j\omega) by first writing

v(t) = e^{-4}\, e^{-2(t-2)}u(t-2)

Thus

V(j\omega) = e^{-4}\, \frac{e^{-j2\omega}}{2 + j\omega}

and

X(j\omega) = \frac{j\omega\, e^{-4}\, e^{-j2\omega}}{(2 + j\omega)(3 + j\omega)}
• Drill Problem 3.20 Find x[n] if

X(e^{j\Omega}) = \frac{e^{-j3\Omega}}{1 + \tfrac{1}{2}e^{-j\Omega}} \circledast \frac{\sin(21\Omega/2)}{\sin(\Omega/2)}

Answer:

x[n] = 2\pi\left(-\tfrac{1}{2}\right)^{n-3}\left(u[n-3] - u[n-11]\right)
• PARSEVAL RELATIONSHIPS

The Parseval relationships state that the energy or power in the time-domain representation of a signal is equal to the energy or power in the frequency-domain representation. Hence energy or power is conserved in the Fourier representation. We shall derive this result for the FT and simply state it for the other three cases.

The energy in a continuous-time nonperiodic signal is

E_x = \int_{-\infty}^{\infty} |x(t)|^2\, dt

where it is assumed that x(t) may be complex valued in general. Note that |x(t)|^2 = x(t)x^*(t) and that x^*(t) is expressed in terms of its FT X(j\omega) as

x^*(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X^*(j\omega)\, e^{-j\omega t}\, d\omega

Substitute this into the expression for E_x to obtain

E_x = \int_{-\infty}^{\infty} x(t) \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} X^*(j\omega)\, e^{-j\omega t}\, d\omega \right\} dt

Now interchange the order of integration:

E_x = \frac{1}{2\pi} \int_{-\infty}^{\infty} X^*(j\omega) \left\{ \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt \right\} d\omega

Observing that the term in braces is the FT of x(t), we obtain

E_x = \frac{1}{2\pi} \int_{-\infty}^{\infty} X^*(j\omega)\, X(j\omega)\, d\omega

and so conclude

\int_{-\infty}^{\infty} |x(t)|^2\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(j\omega)|^2\, d\omega \qquad (3.46)

Hence the energy in the time-domain representation of the signal is equal to the energy in the frequency-domain representation normalized by 2\pi. The quantity |X(j\omega)|^2 is termed the energy spectrum of the signal. Equation (3.46) is also referred to as Rayleigh's energy theorem.

Analogous results hold for the other three Fourier representations as summarized in Table 3.10. The energy or power in the time-domain representation is equal to the energy or power in the frequency-domain representation. Energy is used for nonperiodic time-domain signals, while power applies to periodic time-domain signals. Recall that power is defined as the integral or sum of the magnitude squared over one period, normalized by the length of the period. The power or energy spectrum of a signal is defined as the square of the magnitude spectrum. They indicate how the power or energy in the signal is distributed as a function of frequency.

TABLE 3.10 Parseval Relationships for the Four Fourier Representations

FT: \quad \int_{-\infty}^{\infty} |x(t)|^2\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(j\omega)|^2\, d\omega

FS: \quad \frac{1}{T} \int_{\langle T \rangle} |x(t)|^2\, dt = \sum_{k=-\infty}^{\infty} |X[k]|^2

DTFT: \quad \sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi} \int_{\langle 2\pi \rangle} |X(e^{j\Omega})|^2\, d\Omega

DTFS: \quad \frac{1}{N} \sum_{n=\langle N \rangle} |x[n]|^2 = \sum_{k=\langle N \rangle} |X[k]|^2


EXAMPLE 3.35 Use Parseval's theorem to evaluate

x = \sum_{n=-\infty}^{\infty} \frac{\sin^2(Wn)}{\pi^2 n^2}

Solution: Let

x[n] = \frac{\sin(Wn)}{\pi n}

so that x = \sum_{n=-\infty}^{\infty} |x[n]|^2. By Parseval's theorem, we have

x = \frac{1}{2\pi} \int_{\langle 2\pi \rangle} |X(e^{j\Omega})|^2\, d\Omega

Since

x[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; X(e^{j\Omega}) = \begin{cases} 1, & |\Omega| \le W \\ 0, & W < |\Omega| < \pi \end{cases}

we have

x = \frac{1}{2\pi} \int_{-W}^{W} d\Omega = \frac{W}{\pi}

Note that direct calculation of x is very difficult.
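As a rough numerical confirmation in the spirit of Section 3.7, the sum can be truncated and compared with W/\pi; the choice W = \pi/4 and the truncation to |n| <= 10^5 are arbitrary:

>> W = pi/4; n = -1e5:1e5;
>> xn = sin(W*n)./(pi*n);
>> xn(n == 0) = W/pi;                 % the n = 0 term is the limiting value W/pi
>> [sum(xn.^2), W/pi]                 % the two values agree closely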

• Drill Problem 3.21 Use Parseval's theorem to evaluate

x = \int_{-\infty}^{\infty} \frac{2}{|j\omega + 2|^2}\, d\omega

Answer:

x = \pi
• DUALITY

Throughout this chapter, we have observed a consistent symmetry between the time- and frequency-domain representations of signals. For example, a continuous rectangular pulse in either time or frequency corresponds to a sinc function in either frequency or time, as illustrated in Fig. 3.42. An impulse in time transforms to a constant in frequency, while a constant in time transforms to an impulse in frequency. We have also observed symmetries in Fourier representation properties: convolution in one domain corresponds to modulation in the other domain, differentiation in one domain corresponds to multiplication by the independent variable in the other domain, and so on. These symmetries are a consequence of the symmetry in the definitions of time- and frequency-domain representations. If we are careful, we may interchange time and frequency. This interchangeability property is termed duality.

FIGURE 3.42 Duality of rectangular pulses and sinc functions: the rectangular pulse x(t) on |t| \le T has X(j\omega) = 2\sin(\omega T)/\omega, while x(t) = \sin(Wt)/(\pi t) has a rectangular X(j\omega) on |\omega| \le W.

Begin with the FT, and recall

x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t}\, d\omega

X(j\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt

The difference between the expression for x(t) and that for X(j\omega) is the factor 2\pi and the sign change in the complex sinusoid. Both can be expressed in terms of the general equation

y(\nu) = \frac{1}{2\pi} \int_{-\infty}^{\infty} z(\eta)\, e^{j\nu\eta}\, d\eta \qquad (3.47)

If we choose \nu = t and \eta = \omega, then Eq. (3.47) implies that

y(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} z(\omega)\, e^{j\omega t}\, d\omega

Therefore we conclude that

y(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; z(\omega) \qquad (3.48)

Conversely, if we interchange the roles of time and frequency by setting \nu = -\omega and \eta = t, then Eq. (3.47) implies that

y(-\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} z(t)\, e^{-j\omega t}\, dt

and we have

z(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; 2\pi\, y(-\omega) \qquad (3.49)

The relationships of Eqs. (3.48) and (3.49) imply a certain symmetry between the roles of time and frequency. Specifically, if we are given a FT pair,

f(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; F(j\omega) \qquad (3.50)

we may interchange the roles of time and frequency to obtain the new FT pair,

F(jt) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; 2\pi f(-\omega) \qquad (3.51)

The notation F(jt) implies evaluation of F(j\omega) in Eq. (3.50) with frequency \omega replaced by time t, while f(-\omega) means we evaluate f(t) as a function of reflected frequency -\omega. The duality relationship described by Eqs. (3.50) and (3.51) is illustrated in Fig. 3.43.

EXAMPLE 3.36 Use duality to evaluate the FT of

x(t) = \frac{1}{1 + jt}

Solution: First recognize that

f(t) = e^{-t}u(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; F(j\omega) = \frac{1}{1 + j\omega}

Replacing \omega by t we obtain

F(jt) = \frac{1}{1 + jt}

Hence we have expressed x(t) as F(jt), and using the duality property

F(jt) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; 2\pi f(-\omega)

which indicates that

X(j\omega) = 2\pi f(-\omega) = 2\pi\, e^{\omega} u(-\omega)

FIGURE 3.43 The FT duality property.



• Drill Problem 3.22 Use duality to evaluate the inverse FT of the step function in frequency, X(j\omega) = u(\omega).

Answer:

x(t) = \frac{-1}{j2\pi t} + \frac{\delta(-t)}{2}
The FT stays entirely within its signal class; it maps a continuous-time nonperiodic function into a continuous-frequency nonperiodic function. The DTFS also stays entirely within its signal class, since discrete periodic functions are mapped into discrete periodic functions. The DTFS possesses a duality property analogous to the FT. Recall that

x[n] = \sum_{k=\langle N \rangle} X[k]\, e^{jk\Omega_0 n}

and

X[k] = \frac{1}{N} \sum_{n=\langle N \rangle} x[n]\, e^{-jk\Omega_0 n}

Here the difference between the forms of the forward and inverse transforms is the factor N and the change in sign of the complex sinusoidal frequencies. The DTFS duality property is stated as follows. If

x[n] \;\stackrel{\mathrm{DTFS};\,2\pi/N}{\longleftrightarrow}\; X[k] \qquad (3.52)

then

X[n] \;\stackrel{\mathrm{DTFS};\,2\pi/N}{\longleftrightarrow}\; \frac{1}{N} x[-k] \qquad (3.53)

Here n is the time index and k is the frequency index. The notation X[n] implies evaluation of X[k] in Eq. (3.52) as a function of the time index n, while x[-k] means we evaluate x[n] in Eq. (3.52) as a function of frequency index -k.
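This duality is easy to check with the fft-based DTFS convention of Section 3.7; the sequence below is arbitrary test data, and the modulo indexing implements the reflection x[-k]:

>> N = 8; x = randn(1,N) + 1j*randn(1,N);
>> X = fft(x)/N;                           % DTFS coefficients X[k]
>> Xdual = fft(X)/N;                       % DTFS of the sequence X[n]
>> xrefl = x(mod(-(0:N-1), N) + 1);        % x[-k], with the index taken modulo N
>> max(abs(Xdual - xrefl/N))               % zero up to rounding error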
The DTFT and FS do not stay within their signal class. The FS maps a continuous periodic function into a discrete nonperiodic function. Conversely, the DTFT maps a discrete nonperiodic function into a continuous periodic function. Compare the FS expansion of a periodic continuous-time signal z(t), as shown by

z(t) = \sum_{k=-\infty}^{\infty} Z[k]\, e^{jk\omega_0 t}

and the DTFT of a nonperiodic discrete-time signal x[n], as shown by

X(e^{j\Omega}) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\Omega n}

If we force z(t) to have the same period as X(e^{j\Omega}), 2\pi, then \omega_0 = 1 and we see that \Omega in the DTFT corresponds to t in the FS, while n in the DTFT corresponds to -k in the FS. Similarly, the expression for the FS coefficients parallels the expression for the DTFT representation of x[n], as we now show:

Z[k] = \frac{1}{2\pi} \int_{\langle 2\pi \rangle} z(t)\, e^{-jkt}\, dt

x[n] = \frac{1}{2\pi} \int_{\langle 2\pi \rangle} X(e^{j\Omega})\, e^{j\Omega n}\, d\Omega

The roles of \Omega and n in the DTFT again correspond to those of t and -k in the FS. The duality property here is between the FS and the DTFT. If

x[n] \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; X(e^{j\Omega}) \qquad (3.54)

then

X(e^{jt}) \;\stackrel{\mathrm{FS};\,1}{\longleftrightarrow}\; x[-k] \qquad (3.55)

EXAMPLE 3.37 Use the FS-DTFT duality property and the results of Example 3.26 to determine the inverse DTFT of the triangular spectrum Y(e^{j\Omega}) depicted in Fig. 3.44(a).

Solution: Define a time function z(t) = Y(e^{jt}). The duality property of Eq. (3.55) implies that if z(t) \stackrel{\mathrm{FS};\,1}{\longleftrightarrow} Z[k], then y[n] = Z[-n]. Hence we seek the FS coefficients, Z[k], associated with z(t). z(t) is a time-shifted version of the triangular wave f(t) considered in Example 3.26, assuming T = 2\pi. Specifically, z(t) = f(t + \pi/2). Using the time-shift property we have

Z[k] = e^{jk\pi/2} F[k] = \begin{cases} \pi, & k = 0 \\ e^{j(k-1)\pi/2}\, \dfrac{4\sin(k\pi/2)}{\pi k^2}, & k \neq 0 \end{cases}

Consequently, using y[n] = Z[-n], we have

y[n] = \begin{cases} \pi, & n = 0 \\ e^{-j(n+1)\pi/2}\, \dfrac{4\sin(n\pi/2)}{\pi n^2}, & n \neq 0 \end{cases}

Figure 3.44(b) depicts y[n].

• TIME-BANDWIDTH PRODUCT

We have observed an inverse relationship between the time and frequency extent of a signal. For example, recall that

x(t) = \begin{cases} 1, & |t| \le T \\ 0, & |t| > T \end{cases} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X(j\omega) = \frac{2\sin(\omega T)}{\omega}

FIGURE 3.44 Example 3.37. (a) Triangular spectrum Y(e^{j\Omega}). (b) Inverse DTFT of Y(e^{j\Omega}).

As depicted in Fig. 3.45, the signal x(t) has time extent 2T. Its FT, X(j\omega), is actually of infinite extent in frequency but has the majority of its energy contained in the interval associated with the mainlobe of the sinc function, |\omega| < \pi/T. As T decreases, the signal's time extent decreases, while the frequency extent increases. In fact, the product of the time extent and mainlobe width is a constant.

The general nature of the inverse relationship between time and frequency extent is demonstrated by the scaling property. Compressing a signal in time leads to expansion in the frequency domain and vice versa. This inverse relationship may formally be stated in terms of the signal's time-bandwidth product.

The bandwidth of a signal refers to the significant frequency content of the signal for positive frequencies. It is difficult to define bandwidth, especially for signals having infinite frequency extent, because the meaning of the term "significant" is not mathematically precise. In spite of this difficulty, there are several definitions for bandwidth in common use. One such definition applies to signals that have a frequency-domain representation characterized by a mainlobe bounded by nulls. If the signal is lowpass, that is, the mainlobe is centered on the origin, then the bandwidth is defined as one-half the mainlobe width. Using this definition, the signal depicted in Fig. 3.45 has bandwidth \pi/T. If the signal is bandpass, meaning the mainlobe is centered on \pm\omega_c, then the bandwidth is equal to the mainlobe width. Another commonly used definition of bandwidth is based on the frequency at which the magnitude spectrum is 1/\sqrt{2} times its peak value. At this frequency the energy spectrum has a value of one-half its peak value. Note that similar difficulty is encountered in precisely defining the time extent or duration of a signal.
The above definitions of bandwidth and duration are not well suited for analytic evaluation. We may analytically describe the inverse relationship between the duration and bandwidth of arbitrary signals by defining root mean-square measures of effective duration and bandwidth. We shall formally define the duration of a signal x(t) as

T_d = \left[ \frac{\int_{-\infty}^{\infty} t^2\, |x(t)|^2\, dt}{\int_{-\infty}^{\infty} |x(t)|^2\, dt} \right]^{1/2} \qquad (3.56)

and the bandwidth as

B_w = \left[ \frac{\int_{-\infty}^{\infty} \omega^2\, |X(j\omega)|^2\, d\omega}{\int_{-\infty}^{\infty} |X(j\omega)|^2\, d\omega} \right]^{1/2} \qquad (3.57)

These definitions assume x(t) is centered about the origin and is lowpass. The interpretation of T_d as an effective duration follows from examination of Eq. (3.56). The integral in the numerator is the second moment of the signal about the origin. The integrand weights the square of the value of x(t) at each time instant by the square of its distance from t = 0. Hence if x(t) is large for large values of t, then the duration will be larger than if x(t) is large for small values of t. This integral is normalized by the total energy in x(t). A similar interpretation applies to B_w. Note that while the root mean-square definitions offer certain

FIGURE 3.45 Rectangular pulse illustrating the inverse relationship between the time and frequency extent of a signal.

analytic tractability, they are not easily measured from a given signal and its magnitude spectrum.

The time-bandwidth product for any signal is lower bounded, as shown by

T_d B_w \ge \frac{1}{2} \qquad (3.58)

This bound indicates that we cannot simultaneously decrease the duration and bandwidth of a signal. Gaussian pulses are the only signals that satisfy this relationship with equality. Equation (3.58) is also known as the uncertainty principle for its application in modern physics, which states that the exact position and exact momentum of an electron cannot be determined simultaneously. This result generalizes to alternative definitions of bandwidth and duration. The product of bandwidth and duration is always lower bounded by a constant, with the value of this constant dependent on the definitions of bandwidth and duration.

Analogous results can be derived for the other Fourier representations.
3.7 Exploring Concepts with MATLAB

• THE DTFS

The DTFS is the only Fourier representation that is discrete valued in both time and frequency and hence suited for direct MATLAB implementation. While Eqs. (3.12) and (3.13) are easily implemented as M-files, the built-in MATLAB commands fft and ifft may also be used to evaluate the DTFS. Given a length-N vector x representing one period of an N-periodic signal x[n], the command

>> X = fft(x)/N
produces a length-N vector X containing the DTFS coefficients, X[k]. MATLAB assumes the summations in Eqs. (3.12) and (3.13) run from 0 to N - 1, so the first elements of x and X correspond to x[0] and X[0], respectively, while the last elements correspond to x[N - 1] and X[N - 1]. Note that division by N is necessary because fft evaluates the sum in Eq. (3.13) without dividing by N. Similarly, given DTFS coefficients in a vector X, the command

>> x = ifft(X)*N

produces a vector x that represents one period for the time waveform. Note that ifft must be multiplied by N to evaluate Eq. (3.12). Both fft and ifft are computed using a numerically efficient or fast algorithm termed the fast Fourier transform. Development of this algorithm is discussed in Section 4.11.

Consider using MATLAB to solve Drill Problem 3.1 for the DTFS coefficients. The signal is

x[n] = 1 + \sin\!\left(\frac{\pi}{12}n + \frac{3\pi}{8}\right)

This signal has period 24, so we define one period and evaluate the DTFS coefficients using the commands

>> x = ones(1,24) + sin([0:23]*pi/12 + 3*pi/8);
>> X = fft(x)/24
X =
  Columns 1 through 4
   1.0000             0.4619 - 0.1913i   0.0000 + 0.0000i  -0.0000 + 0.0000i
  Columns 5 through 8
   0.0000 - 0.0000i  -0.0000 - 0.0000i   0.0000 - 0.0000i  -0.0000 - 0.0000i
  Columns 9 through 12
  -0.0000 - 0.0000i  -0.0000 - 0.0000i  -0.0000 - 0.0000i   0.0000 - 0.0000i
  Columns 13 through 16
   0.0000 + 0.0000i   0.0000 + 0.0000i  -0.0000 + 0.0000i   0.0000 - 0.0000i
  Columns 17 through 20
  -0.0000 - 0.0000i  -0.0000 - 0.0000i   0.0000 + 0.0000i  -0.0000 + 0.0000i
  Columns 21 through 24
  -0.0000 + 0.0000i  -0.0000 - 0.0000i   0.0000 - 0.0000i   0.4619 + 0.1913i
Note that MATLAB uses i to denote the square root of -1. Hence we conclude that

X[k] = \begin{cases} 1, & k = 0 \\ 0.4619 - j0.1913, & k = 1 \\ 0.4619 + j0.1913, & k = 23 \\ 0, & \text{otherwise on } 0 \le k \le 23 \end{cases}

which corresponds to the answer to Drill Problem 3.1 expressed in rectangular form. We may reconstruct the time-domain signal using ifft and evaluate the first four values of the reconstructed signal using the commands

>> xrecon = ifft(X)*24;
>> xrecon(1:4)
ans =
   1.9239 - 0.0000i   1.9914 + 0.0000i   1.9914 + 0.0000i   1.9239 - 0.0000i

Note that the reconstructed signal has an imaginary component, albeit a very small one, even though the original signal was purely real. The imaginary component is an artifact of numerical rounding errors in the computations performed by fft and ifft and may be ignored.

• Drill Problem 3.23 Repeat Drill Problem 3.2 using MATLAB. •


The partial sum approximation used in Example 3.3 is easily evaluated using MATLAB as follows:

>> k = 1:24;
>> n = -24:25;
>> B(1) = 25/50;                       % coeff for k = 0
>> B(2:25) = 2*sin(k*pi*25/50)./(50*sin(k*pi/50));
>> B(26) = sin(25*pi*25/50)/(50*sin(25*pi/50));
>> xJhat(1,:) = B(1)*cos(n*0*pi/25);   % term in sum for k = 0
>> % accumulate partial sums
>> for k = 2:26
     xJhat(k,:) = xJhat(k-1,:) + B(k)*cos(n*(k-1)*pi/25);
   end

This set of commands produces a matrix xJhat whose (J+1)st row corresponds to x_J[n].

• THE FS

The partial sum approximation to the FS in Example 3.7 is evaluated analogously to that of the DTFS with one important additional consideration. The signal x_J(t) and the cosines in the partial sum approximation are continuous functions of time. Since MATLAB represents these functions as vectors consisting of discrete points, we must use sufficiently closely spaced samples to capture the details in x_J(t). This is assured by sampling the functions closely enough so that the highest frequency term in the sum, cos(J_max \omega_0 t), is well approximated by the sampled signal, cos(J_max \omega_0 n\Delta). The sampled cosine provides a visually pleasing approximation to the continuous cosine using MATLAB's plot command if there are on the order of 20 samples per period. Using 20 samples per period, we obtain \Delta = T/(20 J_max). Note that the total number of samples in one period is then 20 J_max. Assuming J_max = 99 and T = 1, we may compute the partial sums given B[k] by using the commands:

>> t = [-(10*Jmax-1):10*Jmax]*Delta;
>> xJhat(1,:) = B(1)*cos(t*0*2*pi/T);
>> for k = 2:100
     xJhat(k,:) = xJhat(k-1,:) + B(k)*cos(t*(k-1)*2*pi/T);
   end

Since the rows of xJhat represent samples of a continuous-valued function, we display them using plot instead of stem. For example, the partial sum for J = 5 is displayed with the command plot(t, xJhat(6,:)).

• TIME-BANDWIDTH PRODUCT

The fft command may be used to evaluate the DTFS and explore the time-bandwidth product property for discrete-time periodic signals. Since the DTFS applies to signals that are periodic in both time and frequency, we define both duration and bandwidth based on the extent of the signal within one period. For example, consider the period-N square wave studied in Example 3.2. One period of the time-domain signal is defined as

x[n] = \begin{cases} 1, & |n| \le M \\ 0, & \text{otherwise on } -M \le n \le N - M - 1 \end{cases}

and the DTFS coefficients are given by

X[k] = \frac{1}{N}\, \frac{\sin\!\left(k\dfrac{\pi}{N}(2M+1)\right)}{\sin\!\left(k\dfrac{\pi}{N}\right)}

If we define the time duration T_d as the nonzero portion of one period, then T_d = 2M + 1. If we further define the bandwidth B_w as the width of the mainlobe of X[k] over one period, then we have B_w = 2N/(2M+1) and we see that the time-bandwidth product for the square wave is independent of M: T_d B_w = 2N.

The following set of MATLAB commands may be used to verify this result:

>> x = [ones(1,M+1), zeros(1,N-2*M-1), ones(1,M)];
>> X = fft(x)/N;
>> k = [-N/2+1:N/2];      % frequency index for N even
>> stem(k, real(fftshift(X)))

Here we define one period of an even square wave on the interval 0 \le n \le N - 1, find the DTFS coefficients using the fft command, and display them using stem. The command fftshift swaps the left and right halves of the vector X so that the zero-frequency index is in the center of the vector. That is, the frequency indices are changed from 0 to N - 1 to -N/2 + 1 to N/2 (assuming N is even). The real command is used to suppress any small imaginary components resulting from numerical rounding. We then determine the effective bandwidth by counting the number of DTFS coefficients in the mainlobe. One of the computer experiments evaluates the time-bandwidth product in this fashion.
The formal definitions of duration and bandwidth given in Eqs. (3.56) and (3.57) may be generalized to discrete-time periodic signals by replacing the integrals with sums over one period as follows:

T_d = \left[ \frac{\sum_{n=-(N-1)/2}^{(N-1)/2} n^2\, |x[n]|^2}{\sum_{n=-(N-1)/2}^{(N-1)/2} |x[n]|^2} \right]^{1/2} \qquad (3.59)

B_w = \left[ \frac{\sum_{k=-(N-1)/2}^{(N-1)/2} k^2\, |X[k]|^2}{\sum_{k=-(N-1)/2}^{(N-1)/2} |X[k]|^2} \right]^{1/2} \qquad (3.60)

Here we assume that N is odd and the majority of the energy in x[n] and X[k] is centered on the origin.

The following MATLAB function evaluates the product T_d B_w based on Eqs. (3.59) and (3.60):

function TBP = TdBw(x)
% Compute the time-bandwidth product using the DTFS.
% One period must be less than 1025 points.
N = 1025;
M = (N - max(size(x)))/2;
xc = [zeros(1,M), x, zeros(1,M)];   % center pulse within a period
n = [-(N-1)/2:(N-1)/2];
n2 = n.*n;
Td = sqrt((xc.*xc)*n2'/(xc*xc'));
X = fftshift(fft(xc)/N);            % evaluates DTFS and centers
Bw = sqrt(real((X.*conj(X))*n2'/(X*X')));
TBP = Td*Bw;

This function assumes that the length of the input signal x is odd and centers x within a 1025-point period before computing Td and Bw. Note that .* is used to perform the element-by-element product. The * operation computes the inner product when placed between a row vector and a column vector, and the apostrophe ' indicates complex conjugate transpose. Hence the command X*X' performs the inner product of X and the complex conjugate of X, that is, the sum of the magnitude squared of each element of X.

We may use the function TdBw to evaluate the time-bandwidth product for rectangular, raised cosine, and Gaussian pulse trains as follows:

>> x = ones(1,101);          % 101 point rectangular pulse
>> TdBw(x)
ans =
  788.0303
>> x = ones(1,301);          % 301 point rectangular pulse
>> TdBw(x)
ans =
  1.3604e+03
>> x = 0.5*ones(1,101) + cos(2*pi*[-50:50]/101);     % 101 point raised cosine
>> TdBw(x)
ans =
  277.7327
>> x = 0.5*ones(1,301) + cos(2*pi*[-150:150]/301);   % 301 point raised cosine
>> TdBw(x)
ans =
  443.0992
>> n = [-500:500];
>> x = exp(-0.001*(n.*n));   % narrow Gaussian pulse
>> TdBw(x)
ans =
  81.5669
>> x = exp(-0.0001*(n.*n));  % broad Gaussian pulse
>> TdBw(x)
ans =
  81.5669
Note that the Gaussian pulse trains have the smallest time-bandwidth product. Furthermore, the time-bandwidth product is identical for both the narrow and broad Gaussian pulse trains. These observations offer evidence that the time-bandwidth product for periodic discrete-time signals is lower bounded by that of a Gaussian pulse train. Such a result would not be too surprising given that the Gaussian pulses attain the lower bound for continuous-time nonperiodic signals. This issue is revisited as a computer experiment in Chapter 4.

3.8 Summary

In this chapter we have developed techniques for representing signals as weighted superpositions of complex sinusoids. The weights are a function of the complex sinusoidal frequencies and provide a frequency-domain description of the signal. There are four distinct representations applicable to four different signal classes.

• The DTFS applies to discrete-time N-periodic signals and represents the signal as a weighted sum of N discrete-time complex sinusoids whose frequencies are integer multiples of the fundamental frequency of the signal. This frequency-domain representation is a discrete and N-periodic function of frequency. The DTFS is the only Fourier representation that can be computed numerically.

• The FS applies to continuous-time periodic signals. It represents the signal as a weighted sum of an infinite number of continuous-time complex sinusoids whose frequencies are integer multiples of the signal's fundamental frequency. Here the frequency-domain representation is a discrete and nonperiodic function of frequency.

• The DTFT represents nonperiodic discrete-time signals as a weighted integral of discrete-time complex sinusoids whose frequencies vary continuously over a 2\pi interval. This frequency-domain representation is a continuous and 2\pi-periodic function of frequency.

• The FT represents nonperiodic continuous-time signals as a weighted integral of continuous-time complex sinusoids whose frequencies vary continuously from -\infty to \infty. Here the frequency-domain representation is a continuous and nonperiodic function of frequency.

Fourier representation properties relate the effect of an action on the time-domain signal to the corresponding change in the frequency-domain representation. They are a consequence of the properties of complex sinusoids. Since all four representations employ complex sinusoids, all four share similar properties. The properties provide insight into the nature of both time- and frequency-domain signal representations. They also provide a powerful set of tools for manipulating signals in both the time and frequency domain. Often it is much simpler to use the properties to determine a time- or frequency-domain signal representation than it is to use the defining equation.

The frequency domain offers an alternative perspective of signals and the systems with which they interact. Certain signal characteristics are more easily identified in the frequency domain than in the time domain, and vice versa. Also, some systems problems are more easily solved in the frequency domain than in the time domain, and vice versa. Both the time- and frequency-domain representations have their own advantages and disadvantages. Where one may excel, the other may be very cumbersome. Determining which domain is the most advantageous for solving a particular problem is an important skill to develop and can only be done through experience. The next chapter offers a start on this journey by examining applications of Fourier representations in signal and systems problems.

FURTHER READING

1. Joseph Fourier studied the flow of heat in the early nineteenth century. Understanding heat flow was a problem of both practical and scientific significance at that time and required solving a partial differential equation called the heat equation. Fourier developed a technique for solving partial differential equations based on assuming the solution was a weighted sum of harmonically related sinusoids with unknown coefficients, which we now term the Fourier series. Fourier's initial work on heat conduction was submitted as a paper to the Academy of Sciences of Paris in 1807 and rejected after review by Lagrange, Laplace, and Legendre. Fourier persisted in developing his ideas in spite of being criticized for a lack of rigor by his contemporaries. Eventually, in 1822, he published a book containing much of his work, Theorie analytique de la chaleur, which is now regarded as one of the classics of mathematics.

2. The DTFS is most often referred to in the literature as the discrete Fourier transform or DFT. We have adopted the DTFS terminology in this text because it is more descriptive and less likely to lead to confusion with the DTFT. The reader should be aware that they will likely encounter the DFT terminology in other texts and references.

3. A general treatment of Fourier transforms is offered in:
• Bracewell, R. N., The Fourier Transform and Its Applications, Second Edition (McGraw-Hill, 1978)
• Papoulis, A., The Fourier Integral and Its Applications (McGraw-Hill, 1962)

4. An introductory treatment of the FS, FT, and DTFS is offered in:
• Morrison, N., Introduction to Fourier Analysis (Wiley, 1994)

5. The role of the FS and FT in solving the heat equation, wave equation, and potential equation is described in:
• Powers, D. L., Boundary Value Problems, Second Edition (Academic Press, 1979)

6. The uncertainty principle, Eq. (3.58), is proved in:
• Bracewell, R. N., The Fourier Transform and Its Applications, Second Edition (McGraw-Hill, 1978)

PROBLEMS
3.1 Use the defining equation for the DTFS coefficients to evaluate the DTFS coefficients for the following signals. Sketch the magnitude and phase spectra.
(a) x[n] = \cos\!\left(\dfrac{6\pi}{13}n + \dfrac{\pi}{6}\right)
(b) x[n] = \sin\!\left(\dfrac{4\pi}{21}n\right) + \cos\!\left(\dfrac{10\pi}{21}n\right) + 1
(c) x[n] = \sum_{m=-\infty}^{\infty} \left(\delta[n - 2m] + \delta[n + 3m]\right)
(d) x[n] as depicted in Fig. P3.1(a)
(e) x[n] as depicted in Fig. P3.1(b)
(f) x[n] as depicted in Fig. P3.1(c)

FIGURE P3.1 (a)-(c) Signals x[n] for Problem 3.1.

3.2 Use the definition of the DTFS to determine the time signals represented by the following DTFS coefficients.
(a) X[k] = \cos(\,\cdots\,)
(b) X[k] = \cos\!\left(\dfrac{10\pi}{21}k\right) + j\sin\!\left(\dfrac{\pi}{21}k\right)
(c) X[k] = \sum_{m=-\infty}^{\infty} \left(\delta[k - 2m] - 2\delta[k + 3m]\right)
(d) X[k] as depicted in Fig. P3.2(a)
(e) X[k] as depicted in Fig. P3.2(b)
(f) X[k] as depicted in Fig. P3.2(c)

3.3 Use the defining equation for the FS coefficients to evaluate the FS coefficients for the following signals. Sketch the magnitude and phase spectra.
(a) x(t) = \sin(2\pi t) + \cos(3\pi t)
(b) x(t) = \sum_{m=-\infty}^{\infty} \left(\delta(t - \tfrac{1}{2}m) + \delta(t - \cdots\, m)\right)
(c) x(t) = \sum_{m=-\infty}^{\infty} e^{j(2\pi/3)m}\, \delta(t - 2m)
(d) x(t) as depicted in Fig. P3.3(a)
(e) x(t) as depicted in Fig. P3.3(b)
(f) x(t) as depicted in Fig. P3.3(c)

FIGURE P3.2 (a)-(c) DTFS coefficients X[k] (magnitude and phase) for Problem 3.2.

[Figure P3.3: the periodic signals x(t) for Problem 3.3(d)-(f), panels (a)-(c).]
(e) x(t) as depicted in Fig. P3.3(b)
(f) x(t) as depicted in Fig. P3.3(c)
3.4 Use the definition of the FS to determine the time signals represented by the following FS coefficients.
(a) X[k] = jδ[k - 1] - jδ[k + 1] + δ[k - 3] + δ[k + 3], ω₀ = π
(b) X[k] = jδ[k - 1] - jδ[k + 1] + δ[k - 3] + δ[k + 3], ω₀ = 3π
(c) X[k] = (-1/2)^{|k|}, ω₀ = 1
(d) X[k] as depicted in Fig. P3.4(a), ω₀ = π
(e) X[k] as depicted in Fig. P3.4(b), ω₀ = 2π
(f) X[k] as depicted in Fig. P3.4(c), ω₀ = π
[Figure P3.4: the FS coefficients X[k] for Problem 3.4(d)-(f), panels (a)-(c).]
3.5 Use the defining equation for the DTFT to evaluate the frequency-domain representations for the following signals. Sketch the magnitude and phase spectra.
(a) x[n] = (1/2)^n u[n - 4]
(b) x[n] = a^{|n|}, |a| < 1
(c) x[n] = 1/2 + (1/2) cos(πn/N) for |n| ≤ N, 0 otherwise
(d) x[n] = δ[6 - 3n]
(e) x[n] as depicted in Fig. P3.5(a)
(f) x[n] as depicted in Fig. P3.5(b)
3.6 Use the equation describing the DTFT representation to determine the time-domain signals corresponding to the following DTFTs.
(a) X(e^{jΩ}) = cos(Ω) + j sin(Ω)
(b) X(e^{jΩ}) = sin(Ω) + cos(Ω/2)
(c) |X(e^{jΩ})| = 1 for π/2 < |Ω| < π, 0 otherwise; arg{X(e^{jΩ})} = -4Ω
(d) X(e^{jΩ}) as depicted in Fig. P3.6(a)
[Figure P3.5: the signals x[n] for Problem 3.5(e) and (f), panels (a) and (b).]

[Figure P3.6: the DTFTs X(e^{jΩ}) for Problem 3.6(d)-(f), panels (a)-(c).]

(e) X(e^{jΩ}) as depicted in Fig. P3.6(b)
(f) X(e^{jΩ}) as depicted in Fig. P3.6(c)
3.7 Use the defining equation for the FT to evaluate the frequency-domain representations for the following signals. Sketch the magnitude and phase spectra.
(a) x(t) = e^{-3t} u(t - 1)
(b) x(t) = e^{-|t|}
(c) x(t) = t e^{-2t} u(t)
(d) x(t) = Σ_{m=0}^{∞} a^m δ(t - m), |a| < 1
(e) x(t) as depicted in Fig. P3.7(a)
(f) x(t) as depicted in Fig. P3.7(b)
[Figure P3.7: the signals x(t) for Problem 3.7(e) and (f), panels (a) and (b).]
3.8 Use the equation describing the FT representation to determine the time-domain signals corresponding to the following FTs.
(a) X(jω) = cos(ω) for |ω| < π/2, 0 otherwise
(b) X(jω) = e^{-2ω} u(ω)
(c) X(jω) = e^{-|ω|}
(d) X(jω) as depicted in Fig. P3.8(a)
(e) X(jω) as depicted in Fig. P3.8(b)
(f) X(jω) as depicted in Fig. P3.8(c)
3.9 Determine the appropriate Fourier representation for the following time-domain signals using the defining equations.
(a) x(t) = e^{-3t} cos(πt) u(t)
(b) x[n] = cos(πn/5) + j sin(πn/5) for |n| < 10, 0 otherwise
(c) x[n] as depicted in Fig. P3.9(a)
(d) x(t) = e^{1+t} u(-t + 2)
(e) x(t) = |sin(2πt)|
(f) x[n] as depicted in Fig. P3.9(b)
(g) x(t) as depicted in Fig. P3.9(c)
3.10 The following are frequency-domain representations for signals. Determine the time-domain signal corresponding to each.
(a) X[k] = e^{jkπ} for |k| < 10, 0 otherwise. The fundamental period of the time-domain signal is T = 1.
(b) X[k] as depicted in Fig. P3.10(a)
(c) X(jω) = cos(ω/2) + j sin(ω/2) for |ω| < π, 0 otherwise
(d) X(jω) as depicted in Fig. P3.10(b)
[Figure P3.8: the magnitude and phase spectra X(jω) for Problem 3.8(d)-(f), panels (a)-(c).]
[Figure P3.9: the signals for Problem 3.9(c), (f), and (g), panels (a)-(c).]
[Figure P3.10: the frequency-domain representations for Problem 3.10(b), (d), (e), and (f), panels (a)-(d).]

(e) X(eit 1) as depicted in Fig. P3.1 O(c) 3.14 Use the tables of transforms and properties to
(f) X[k] as depictcd in Fig. P3.1 O(d) find the inverse FTs of che following signals:
.
(g) X(ei1l);;;: )sjn(fi))
(a) X(jw) = (2 ~~w}2
3.11 Use partia) fraccion expansions to determine the
inverse FT for the following signals:
b)
X(. = 4 sin(2w - 2) + 4 sjn(2w + 2)
( /W) 2w - 2 2w + 2
. Sjw + 12
(a) X(1w} = (jw)2 + 5jw + 6 1
(e) X(jw) = . (. l) + 27í<'5(w)
4 JW JW +
(h) X(jw) = -w2 + 4.
JW +
3
. (d} X(jw) = d~ 4 cos(3w) sin~w)
( ) X(. ) -1w
e /W = (jw) 2 + 3jw + 2
e) X(. _ 2 sin(w)
(
. -(jw) 2 - 4jw - 6 Jw) - w(jw + 1)
(d) X(/w) = [(jw) 2 + 3jw + 2](jw + 4)
·3 1
2 (f) X(jw) = Im e 1· w - .- -
( ) X(. ) = 2(j<.tl) + 12jw + 14 JW + 2
e JW (jw) 2 + 6jw + 5 2
(g) X(jw) = 4 sin ( w)
. 2jw + 1 w2
(f) X(1w) = (jw + l)2
3.15 Use the tables of transforms and properties to
3.12 Use parcial fraction expansions to determine the find the DTFTs of the following signals:
inverse DTFT for the foilowing signals: (a) x[nl = (½)nuln - 2]
1 -1fl
3 - 4e =
(a) X(ei!l) =-
__
----
1 e-;211 +
(b} x[n] (n - 2)(uln - 5] - u[n - 6])
16 1 n

(b) X(e ) - 1
j!l -
-;211
3 -
5 -jfl
4e
3 -;11 +
(e) x[n] = sín fn 1
4
u[n - 1]
se - 4e 1
6 • 7T • 7T
(e) X(eifl) = Slll -n s1n - (n - 2}
e ·121l - 5e i!.l +6 4 4
(d) x[n] = *
6 - 2e-;!1
;t1 - + ½e-;2n 7Tn 7T(n - 2)
(d) X(e ) - (-¼e-;2{1 + 1)(1 - ¼e-;'1)
2
6 _ 1, -iSl _ .! -/2!.l . 1T
s1n -n
(e ) X( e;11) -- _le-;211
3e
+ le-;11
6e
+1 2
6 6 (e) x[n] =
7Tn
3.13 Use the tables of· transforms and propertíes to
find rhe FTs of the followjng sígnals: 3.16 Use the tables of transforms and properties to
(a) x(t) = sin( m)e- 21u(t) find the inverse DTFTs of the following signals:
(b) x(t} = e 3 t-2 1 (a) X(eiº) = cos(20) + 1
2 sin( m) sin(277t} ) sin(1fíl} a-. _13 n sin(iO)
(e) x(t ) = m m (b ) X( e'11 = --- ~ e
sin(fl/2} sin(0/2)

(d) x(t) = ~ (te- 2 t sin(t)u(t)) n sin(!ü}


(e) X(e' ) = cos{ü) sín(fl/l)

(e) X (t ) = f' sin( 7TT) dT


-oc - - - - - ' - {1 1, 7T/4 < < 37T/4 l!ll
7TT (d ) X(e' ) =
O, otherwise for 1!l l < 7T
t - 4
(f) x(t} = e- 2
t+ 1
u
(e) X(ein) = e-í(4fl+rr/2) d 2
2 1+ ¼e-i(f} +
dü -rr/8)

d sin(t) sin(2t) 2
(g} .'t"(t) =- -- * 1+
dt m m le-i(.0+1TISJ
4

3.17 UsetheFTpair DTFT


3.18 We have x[n] =
n(-½)nu[n] - - - X(e;n).

X(t) =
1, ltl < 1
-~
FT x(·)
/W =
2 sin(w) Without evaluating X(eiº), find y[n] if Y(e;íl) is
O, otherwise
--+)

w given by:
(a) Y(ei' 1) = ei3 ºX(ei!t)
and the FT properties to evaluate the frequency-
(b) Y(e;n) = Re{X(eiº)}
domain representations for the signals depicted
in Fig. P3.17(a}-(i). (e) Y(eiíl) = d X(eiº)
d!l
(d) Y(ei!l} = X(e;n) © X(ei(il-rr))
y(t)
(e) Y(eiíl) = d {e;2I1X(e;11}]
cos (1rt) dfi
y(t)
J.....,__ __ (f) Y(ei.íl) = X(eiº) + X(e-;! 1)
t
-1 1 (g) Y(ei!l) = d {e-i2!1fX(ei(!l+rr/4))
___._ _ __...,_ _ t dO.
-1 . . .. _ X(ei(!l-rr/4))]}
2
(a) (b) 3.19 A periodic signal has FS representation
FS· 1r
y(t) x(t) X[kJ = -k2-ikJ_ Wirhouc deter-
'
y(t) 2- - mining x(t), find the FS representation (Y[kl and
--'·· 1
w if y(t) is given by:
0
)

1 (a) y(t) = x(2t)


1 5
_....,__-+---+-- t ' t d
-1
'
2 4 (b) y(t) = dt x(t)
-1- ~

-1..-- (e) y(t) = x(t - ¼)


1
(e) (d)
(d) y(t) = Re{x(t)}
(e) y(t) = cos(2m)x(t)
y(t) y(t)
-1 l (f) y(t) = x(t} © x(t - ½)
1
-~-~-~--t 3.20Given
-1
-1
. 11 ?T
1 Slil lO n
DTFS; rr/10 X[k}
-1 - -5 x[n] = - - - -
. '1T
SlO 2() n
(e) {0
y(t) y(t) evaluate the time-domain signal y[n] with the
following DTFS coefficients using only DTFS
-
propert1es.
t t (a) Y[k] = X[k - 51 + X[k + 5]
-2 2 -1 1 2 (b) Y[k] = cos{kTr/5} X[k]
(g) (h)
(e) Y[k] = X[k] G) X[k]
y(t)
(d) Y[k] = Re{X[k]}
1 3.21 Evaluate the following quantities:
1 1
t (a) fn- 11 - !e-;n 12 dfl
1
?T
-1
-1 ".,, sin2 (k7r/4)
(b) "'"'k = - co k2
(i) 4
FIGURE P3. l 7 (e) fco °" (w2 + l) 2 dw

(b) arg{xlnl}
sin2 (e) I;;""_"° x[n]
(d) Lk 9 O - - - - (d) I:=--,., lx[n] 1
2

. z
s1n 7T k (e) ~_:=--oo x[n]ei(1Tl4)n
20 3.25 Prove the following properries:
2
«. sin ( m) d
(e ) f _ 00
2
t (a) The FS symrnetry propertíes for:
1Tt
(i) Real-valued time signals.
3.22 Use rhe duality property of Fourier representa-
tions to evaluate the following: (ii) Real ar1d even rin1e sigr1als.
FT lw
(h) The DTFT tirne-shift property.
(a) x(t} ~ > e- u(w)
(e) The DTFS frequenc}r-shift propert}'·
1 FT
(d) Linearity for the FT.
(b) - -2 - - X(7'w)
1 + t
(e) The DTFT convolution property.
• 11 '17' (f) The DTFT modulation property .
s1n 20 n
DTFS; 1r/lO (g) The DTFS convolution property.
(e) X[kJ
. 7T (h) The FS modulation property.
Stn
20 n (i) The Parseval relationship for the FS.
3.23 For the signal x(t) shown in Fig. P3.23, evaluate 3.26 ln this problem we show that Gaussian pulses
the following quantities without explicitly corn- achieve the lower bound in the tirne-bandwidth
puting X(j(J:)); product. Hint: Use the definíte integrais in Ap-
(a} J~,,.,
X(jw) dw pendix A.4.
,
1 12
(b) f,,, )X(jw) !2 dw
o,;
(a) Let x(t) = e- - • Find the effective duration,
{e) f~,., X(jw)ei2 (" dw Td, and bandwidth, B1,., and evaluate the
time-bandwidth product.
(d) arg{X{jw)} 21 2
(b) Ler x(t) = e-t 2.' • fi11d the effectivc dt1ra-
(e) X(jO)
tion, Td, and bandwidth, B11,., and evaluate
the time-bandwidth product. What hap-
x(t)
pens to Td, Bu,, and TdBu, as a increases?
3.27 Let

x(t) =
1, 1 ti < T
-1 3 O, otherwise
Use the uncertainty pri11ciple to placc a bound
on the effective bandwidth of the following
FIGURE P3.23 signals:
(a) x(t)
D1'FT
3.24 Ler xlnJ · X(ei! 1), where X(ei11 ) is de- (b) x(t) * x(t)
picted in Fig. P3.24. Evaluate the following 3.28 Sho,IV' that the time-bandwidth product, Tt1Bu,,
without explicitly cornputíng x(n]: of a signal x(t) is invariant to scaling. That is,
(a) x(O] use the definitions of Td and Bw to show that

[Figure P3.24: the DTFT X(e^{jΩ}) for Problem 3.24.]

x(t) and x(at) have the sarne time-. bandwidth :.•3.31 ln this problem we explore a matrix represen-
product. tation for the DTFS. The DTFS expresses the N
3.29 The forn1 of the FS representation presented in time-dornain values of an N periodic signalx[n]
this chapcer, as a function of N frequency-domain values,
00
X[k]. Define vectors
x[O] X[O]
x(t) = L X[k]eikw"t
k=-oo x[l] X[1 J
is termed the exponential FS. ln this problem we
x= • X= •


' •

explore several alterna tive, yet equivalent, ways
of expressing the FS representation for real- x[N - 1] X[N - 1)
valued períodíc sígnals. (a) Show thar the DTFS representation
(a) Trigonometric form.
N-1
(i) Show that the FS for a real-valued sig- x[n] = Í:. X[k]eikíl,,n,
nal x(t) can be written as k=O

x(t) = a0 + n = O, 1, ... , N - 1
oc
can be written in matrix form as x = VX,
L
k=l
ak cos(kw t) 0 + bk
·
sin(kw 0 t)
where V is an N by N matrix. Find the el-
ements of V.
,vhere ak and bk are rea1-va1ued
coefficients. (b) Show that the expression for the DTFS
coefficients
(ii) Express ak and bk in terms of X[k]. Also
express XlkJ in terms of ak and bk. } N-- l .

(iii) Sho\v that


Xíkl =- I x[n]e-,kn.,n,
N n=O

ao =
1
-T f (T)
x(t) dt
k = O, 1, ... , N - 1
can be written in matríx vector form as
ak
2i
= T
(T)
x(t) cos kwot
. dt
X = Wx, where W is an N by N matrix.
Find the elemenrs of W.
2f
bk = -T x(t)
. kwot dt
Slll
(e) The expression x == VX implies that X =
v- •x provided V is a nonsingular matrix.
<T) .
Comparing this equation to the results of
(iv) Show that bk = O if x(t) is even and (b) we conclude that W = v- 1 • Show that
ak = O if x(t) is odd. this is true by establishing WV = I where I
( b) Polar form. is the identity matrix. Hint: Use the defini-
(i) Show that the FS for a real-valued sig- tions of V and W determined in (a) and (b)
nal x(t) can be written as to obtain an expression for the element in
the 1th row and mth coJumn of WV.
oc

x(t) = c0 + L ck
*3.32 ln this problem we find the FS coefficients Xlk]
cos(kw0 t + 0k)
k= l · by minimízíng the mean squared error (MSE)
between the sígnal x(t) and its FS approxima-
where ck is the magnitude (positive)
tion. Define the J term FS
and 0k is the phase of the kth harmonic.
(ii) Express ck and 0k as funccions of X[k]. J

(iii) Express ck and 0k as functions of ak and


X1(t} = A[k]eikw,,t L
k=-J
bk from (a).
and the J term MSE as the average squared dif-
*3.30 A signal wirh fundamental period T js said ference over one period
to possess half-wave symmetry if it satisfies
x(t) = -x(t - T/2). That is, half of one period
of the s_ignal is the negative of the other half.
1
MSE1 = T <T> Jx(t) - x 1(t) 1 dt
2 i A

Show that the FS coefficients associated with


even harmonics, X[2k l, are zero for ali signals (a) Substitute the series representation for x1(t)
wich half-wave symmetry. and expand the magnitude squared using
. . .
the identity la+ bl 2
={a+ b)(a* + b*) to (d} Find the value for A[k] that m1n1m1zes
ohtain MSEJ.

MSEJ = _Tl f<T>


lx(t}j 2 dt
{e) Express the minimum MSE 1 as a funccíon of
x(t) and X[k]. What happens to MSE1 as J

- k--J
I..I A*[kl -1
T
f x(t)e-jkw.,t dt
increases?
*3.33 Generalized Fourier Series. The concept of the
(T) Fourier series may be generalized to sums of sig-
- L-J J A[k] -1 f x*(t)eikw"t dt
nals other than complex sinusoids. That is,
given a sígnal x(t) we may approximate x(t) on

+
k-

LJ LJ
T
A* [k]A[m] -
(T)

1 f . . e-1kwote1mw,,t dt
an interval [t 1 , ! 2 ] as a weighted sum of N func-
tions cf>o(t), cf>2(t), •.• , <PN-1(t):
,,., = - J k- - J T (1)

N-1
(b} Define
x(t) = L Ck<Pk(t)

X[k] = _!_
T
f (T}
x(t)e-ikw"t dt
We shall assume that these N functions are mu-
k~o

and use the orthogonality of eikw"t and tually orthogonal on [t 1, t 2 ]. This means rhat
e;mw,,t to show that
(tz * O, k* l
MSE1 = ½ir, lx(t) 1 dt - 2
.± 1
A *[k]X[k]
Jt1 <l>k(t)<J> I (t) dt = Ík, k = l

J J ,vhere [k is some constant. The MSE using this


- L k=-.f
A[kJX*[k] + L IA[k]
k=-J
1
2 . .
approx1mat1on 1s
.

N-1 2
(e) Use the techníque of completíng the square MSE = 1 rlz x(t) - L Ck<Pk(t) dt
to show that ti - t1 Jt, k~O

MSE1 =
1
-T f(T)
lx(t) 1 dt
2
(a) Show that the MSE is minimized by choos-
ing ck = (1/fk) f!! x(t)</J t(t) dt. Hint: Gen-
J J
eralize steps outlíned in Problem 3.32(a)-
+ L
k=-J
IA[kJ - X[k] 1 2
- I.
k=-J
IX[k] 2
1
(d) to this problem.

[Figure P3.33: the first six Walsh functions φ₀(t), ..., φ₅(t) on the interval [0, 1].]

(b) Show that the MSE is zero if ally, as T approaches infiníty, these repli-
t2
cates are removed to infinity. Thus we write

f,, lx(t)l 2 dt = x(t) = lim x(t)


T--+oo

If this relationship holds for all x(t) in a (b) The FS representa tion for the periodic signal
given class of functions, then the basis func- x(t) is
tions </>0 (t), cp 2(t), ... , <PN-i(t) are said to 00

be ''complete'' for that class.

f
k=-oo
(e) The Walsh functions are one set of orthog-
onal functions that are used for signal rep- l Til .
X[k] = - x(t)e-,kwot dt
resentation on [O, 1]. Determine the ck and T -T/2
MSE obtained by approximating the fol-
lowing signals with the first six W alsh func-
tions depicted in Fig. P3.33. Sketch the sig-
Show that X[k] = ½X(jkw 0 ), where

nal and the Walsh function approximation.


· 2, l2<
- t<
-4l
X(jw) = f 00 x(t)e-iwt dt
(1) x(t) = 00

o, O< t < ½, ¾< t < 1 (e) Substitute this definition for X[k] into the
(ii) x(t) = sín{2m) expression for x(t) in (b) and show that
(d) The Legendre polynomials are another set
of orthogonal functions on the interval
[-1, 1). They are obtained from the differ-
.i(t) = l
21T
I
k=-oc
X(jkwo)efkw,,tWo


ence equat1on (d) Use the lirniting expressíon for x(t) in (a)
. .
and de.fine w = kw to express the lirniting
2k - 1 k -. 1 0

<Pk(t) = k t<f>k-1(t) - k <Pk-2(t) form of the sum in (e) as the integral

using the initial functions </>0 (t) = 1 and x(t) 1


= -21T f "'°
-;,o
.
X(jw)e'wz dw
</> 1 (t) = t. Determine the ck and MSE ob-
tained by approximating the following sig-
nals with the first six Legendre polynomíals.
. 2, o< t < l
2
• Computer Experiments
(1) x(t) =
o,
-1 < t < o, ½< t < 1 3.35 Use MATLAB to repeat Example 3.3 for
(ii) x(t) = sin( m) N = 50 and (a) M = 12, (b) M = 5, and
(e) M = 20.
*3.34 We may derive the FT frorn the FS by describing
a nonperiodic signal as the limiting forrn of a 3.36 Use the MATLAB command f f t to repeat
periodic signal whose period, T, approaches in- Problem 3 .1.
finity. ln order to take this approach, we assume 3.37 Use the MATLAB command i f f t to repeat
that the FS of the periodic version of the signal Problem 3.2.
exísts, that the nonperiodic signal is zero for 3.38 Use the MATLAB command f f t to repeat Ex-
1 t 1 > T/2, and that the lirnit as T approaches ample 3.4.
infinity is taken in a symrnetric manner. Define 3.39 Use MATLAB to repeat Example 3. 7. Evaluate
the finite-duration nonperiodíc signal x(t) as the peak overshoot for J = 29, 59, and 99.
one period of the T periodic signal x(t}
3.40 Let x(t) be the triangular wave depicted in Fig.
x(t), -T/2 < t <· T/2 P3.40.
x(t} = O, ltl > Til · (a) Find the FS coefficients, X[k].
(b} Show that the FS representation for x(t) can
(a) Graph an exarnple of x(t) and x(t) to dem- be expressed in the form
onstrate that as T increases, the periodic
00
replicares of x(t) in x(t) are moved farther
and farther away from the origin. Eventu-
x(t) = L
k=O
B[k] cos(kw t) 0

[Figure P3.40: the triangular wave x(t) for Problem 3.40.]
(b) M = 10
(c) M = 25
(d) M = 50
Discuss the effect of increasing M on the accuracy with which H_t(e^{jΩ}) approximates H(e^{jΩ}).
3.44 Use MATLAB to verify that the time-bandwidth product for a discrete-time square wave
is approximately independent of the number of
nonzero values in each period when duration is
(e) Define the J-term parcial sum approxima-
defined as the number of nonzero values in the
tion to x(t) as
square wave and bandwidth is defined as the
J mainlobe width. Define one period of the square
x1(t) = L
k=O
BlkJ cos(kwºt) wave as

Use MATLAB to evaluate and pior one pe- 1, O< n < M


riod of the Jrh term in rhis sum and x1(t) for xfnl =
O, M s n s 999
J = 1, 3, 7, 29, and 99.
3.41 Repeat Problem 3.40 for rhe impulse train given Evaluate the bandwidth by first using the com-
by rnands f f t and a b s to obtain the magnitude
oc, spectrun1 and then count the 11umber of DTFS
x(t) = L D(t - n) coefficients in the mainlobe for M = 10, 20,
l t - -oc 40, 50, 100, and 200.
3.42 Use MATLAB to repeat Example 3.8 usi11g the 3.45 Use the MATLAB function T d B w introduced
following values for the time constant: in Section 3. 7 to evaluate and pior the time-
(a) RC = 0.01 s. bandwidth product as a function <1Í duratíc,n
(b) RC = 0.1 s. for the following classes c>f signals:
(e) RC = 1 s. (a) Rectangular pulse trains. Ler the pulse in a
3.43 Evaluate rhe frequency response of the trunca- single period be of length M and vary M
ted filter in Example 3.32. You may do this in from 51 to 701 in steps of 50.
MATLAB by writing an M-file to evaluate (b) Raised cosíne pulse trains. l.et the pulse in
/1,1 a single period be of length M and vary M
H 1(eif!) = L hlnJe-;nn from 51 to 701 in steps of 50.
n~-M 2
(e) Gaussian pulse trains. Let xlnl = e-ªn ,

for a large number (> 1000) of values of O in - 500 s n s 500, represent the Gaussian
the interval - 1r < O < 1r. Plot the freq uency pulse in a single period. Vary the p11lse du-
response magnitude in dB (20 log 10 1H,(ei11 ) 1) rarion by letting a take the following values:
for thc following values of M: 0.00005, 0.0001, 0.0002, 0.0005, 0.001,
(a) M = 4 0.002, and 0.005.
Applications of
Fourier Representations


4.1 Introduction
ln the previous chapter we developed Fourier representations for four distinct signal
classes: the discrete-time Fourier series (DTFS) for periodic discrete-time signals, the Fou-
rier series (FS) for periodic continuous-time signals, the discrete-time Fourier transform
(DTFT) for nonperiodic discrete-time signals, and the Fourier transform (FT) for nonper-
iodic continuous-time signals. We now focus on applications of these Fourier representa-
tions. The two most common applications are (1) analysis of the interaction between
signals and systems, and (2) numerical evaluation of signal properties or system behavior.
The FT and DTFT are most commonly used for analysis applications, while the DTFS is
the primary representation used for computational applications. The first and major por-
tion of this chapter is devoted to the presentation of analysis applications; computational
applications are discussed briefly at the end of the chapter.
An important aspect of applying Fourier representations is dealing with situations in
which there is a mixing of signal classes. For example, if we apply a periodic signal to a
stable system, the convolution representation for the system output involves a mixing of
nonperiodic (impulse response) and periodic (input) signal classes. A system that samples
continuous-time signals involves both continuous- and discrete-time signals. ln order to
use Fourier methods to analyze such interactions, we must build bridges between the Fou-
rier representations for different signal classes. We establish these relationships in this
chapter. Specifically, we develop FT and DTFT representations for continuous- and dis-
crete-time periodic signals, respectively. We may then use the FT to analyze continuous-
time applications that involve a mixture of periodic and nonperiodic signals. Similarly, the
DTFT may be used to analyze mixtures of discrete-time periodic and nonperiodic signals.
Lastly, we develop a FT representation for discrete-time signals to address problems in-
volving mixtures of continuous- and discrete-time signals.
We begin this chapter by relating the frequency response description of a LTI system
to the time-domain descriptions presented in Chapter 2. We then revisit convolution and
modulation, considering applications in which periodic and nonperiodic signals interact
using FT and DTFT representations for periodic signals. Next, we analyze the process of
sampling signals and reconstruction of continuous-time signals from samples using the FT
representation for discrete signals. These issues are of fundamental importance whenever
a computer is used to manipulate continuous-time signals. Computers are used to manip-
ulate signals in communication systems (Chapter 5), for the purpose of filtering (Chapter
8), and control (Chapter 9). Our analysis reveals the limitations associated with discrete-

time processing of continuous-time signals and suggests a practical system that minimizes
them.
Recall that the DTFS is the only Fourier representation that can be evaluated exactly
on a computer. Consequently, it finds extensive use in numerical algorithms for signal
processing. We conclude the chapter by examining two common uses of the DTFS: nu-
merical approximation of the FT and efficient implementation of discrete-time convolu-
tion. In both of these, a clear understanding of the relationship between the Fourier rep-
resentations for different signal classes is essential for correct interpretation of the results.
In the course of applying Fourier representations, we discover the relationships be-
tween all four. A thorough understanding of these relationships is a critical first step in
using Fourier methods to solve problems involving signals and systems.

4.2 Frequency Response of LTI Systems


In this section we use the FT and DTFT to explore the relationships between the time-
domain system descriptions introduced in Chapter 2 and the corresponding frequency
response of the system. The frequency response offers a useful and intuitive characteriza-
tion of the input-output behavior of the system. This is because convolution in the time
domain transforms to multiplication in the frequency domain: the output of a system is
obtained simply by multiplying the Fourier representation of the input with the system
frequency response. It is easy to visualize and interpret the operation of multiplying two
frequency-domain functions. We begin the discussion by examining the relationship be-
tween the frequency response and the impulse response, and then proceed to derive the
relationships between the frequency response and differential/difference-equation and
state-variable description representations.

• IMPULSE RESPONSE

We established in the previous chapter that the impulse response and frequency response
of a continuous- or discrete-time system constitute a FT or DTFT pair, respectively. Recall
that the impulse response of a bounded input, bounded output stable system h(t) is ab-
solutely integrable, as defined by

∫_{-∞}^{∞} |h(t)| dt < ∞

Thus the Dirichlet conditions are satisfied and the FT of h(t) exists if h(t) has a finite
number of local maxima, minima, and discontinuities with each discontinuity of finite size.
These conditions are met by many physical systems. Similarly, a stable discrete-time system
has an absolutely summable impulse response, as defined by

Σ_{n=-∞}^{∞} |h[n]| < ∞

This condition is sufficient to guarantee the existence of the DTFT. We conclude that the
frequency response exists for stable systems.
The convolution property relates the input and output of a system as follows:

y(t) = x(t) * h(t)  ⟷  Y(jω) = H(jω)X(jω)



The multiplication that occurs in the frequency-domain representation gives rise to the
notion of filtering. The system filters the input signal by presenting a different response to
components of the input at different frequencies. We often describe systems in terms of
the type of filtering that they perform on the input signal. A lowpass filter attenuates high-
frequency components of the input and passes the lower-frequency components. In con-
trast, a highpass filter attenuates low frequencies and passes the high frequencies. A band-
pass filter passes signals within a certain frequency band and attenuates signals outside
this band. Figures 4.1(a)-(c) illustrate ideal lowpass, highpass, and bandpass filters, re-
spectively, corresponding to both continuous- and discrete-time systems. Note that char-
acterization of the discrete-time filter is based on its behavior in the frequency range
-π < Ω ≤ π because its frequency response is 2π periodic. Hence a highpass discrete-
time filter passes frequencies near π and attenuates frequencies near zero.
The passband of a filter is the band of frequencies that are passed by the system,
while the stopband refers to the range of frequencies that are attenuated by the system. It
is impossible to build a practical system that has the discontinuous frequency response
characteristics of the ideal systems depicted in Fig. 4.1. Realistic filters always have a
gradual transition from the passband to the stopband. The range of frequencies over which
this occurs is known as the transition band. Furthermore, realistic filters do not have zero
gain over the entire stopband, but instead have a very small gain relative to that of the
passband. In general, filters with sharp transitions from passband to stopband are more
difficult to implement. Detailed treatment of filters is deferred to Chapter 8.
The magnitude response of a filter is commonly described in units of decibels, or dB,
defined as

20 log₁₀ |H(jω)|   or   20 log₁₀ |H(e^{jΩ})|

The magnitude response in the stopband is normally much smaller than that in the pass-
band and the details of the stopband response are difficult to visualize on a linear scale.

FIGURE 4.1 Frequency response of ideal continuous- and discrete-time filters. (a) Lowpass characteristic. (b) Highpass characteristic. (c) Bandpass characteristic.

By using units of dB, we display the magnitude response on a logarithmic scale and are
able to examine the details of the response in both the passband and the stopband. Note
that unit gain corresponds to 0 dB. Hence the magnitude response in the filter passband
is normally close to 0 dB. The edge of the passband is usually defined by the frequencies
for which the response is -3 dB. At these frequencies the magnitude response is 1/√2.
Since the energy spectrum of the filter output is given by

|Y(jω)|² = |H(jω)|² |X(jω)|²

we see that the -3-dB point corresponds to frequencies at which the filter passes only
½ of the input power. These are often termed the cutoff frequencies of the filter.

EXAMPLE 4.1 The impulse response of the RC circuit in Fig. 4.2(a) was derived as

h(t) = (1/(RC)) e^{-t/(RC)} u(t)

Plot the magnitude response of this system on a linear scale and in dB, and characterize this
system as a filter.
Solution: The frequency response of this system is

H(jω) = 1 / (jωRC + 1)

Figure 4.2(b) depicts the magnitude response, |H(jω)|. Figure 4.2(c) illustrates the magnitude
response in dB. This system has unit gain at low frequencies and tends to attenuate high
frequencies. Hence it has a lowpass filtering characteristic. We see that the cutoff frequency
is ωc = 1/(RC) since the magnitude response is -3 dB at ωc. Therefore the filter passband is
from 0 to 1/(RC).
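A quick numerical check of this result is easy to set up. The following MATLAB sketch evaluates H(jω) = 1/(jωRC + 1) on a frequency grid and plots the magnitude on a linear scale and in dB; the value RC = 1 s is an assumed illustrative choice, not taken from the example, and the -3-dB point appears at ωRC = 1 as expected.

% Magnitude response of the RC circuit, H(jw) = 1/(jwRC + 1).
% RC = 1 s is an assumed value used only for illustration.
RC = 1;
w  = linspace(-10/RC, 10/RC, 1000);   % frequency grid (rad/s)
H  = 1 ./ (1j*w*RC + 1);              % frequency response

subplot(2,1,1)
plot(w*RC, abs(H)), grid on
xlabel('\omegaRC'), ylabel('|H(j\omega)|')

subplot(2,1,2)
plot(w*RC, 20*log10(abs(H))), grid on
xlabel('\omegaRC'), ylabel('20 log_{10}|H(j\omega)| (dB)')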

The convolution property implies that the frequency response of a system may be
expressed as the ratio of the FT or DTFT of the output to that of the input. Specifically,
for a continuous-time system we may write

H(jω) = Y(jω) / X(jω)     (4.1)

and for a discrete-time system

H(e^{jΩ}) = Y(e^{jΩ}) / X(e^{jΩ})     (4.2)

Both of these expressions are of the indeterminate form 0/0 at frequencies where X(jω) or
X(e^{jΩ}) is zero. Hence if the input spectra are nonzero at all frequencies, we can determine
the frequency response of a system from knowledge of the input and output spectra.
Note that if H(jω) and H(e^{jΩ}) are nonzero, then Eqs. (4.1) and (4.2) also imply that
we can recover the input of the system from the output as

X(jω) = H⁻¹(jω)Y(jω)

and

X(e^{jΩ}) = H⁻¹(e^{jΩ})Y(e^{jΩ})
FIGURE 4.2 (a) RC circuit with input x(t) and output y(t). (b) Magnitude response of RC circuit. (c) Magnitude response of RC circuit in dB.

where H- 1( jw) = 1/H( jw) and H- 1 (ei!l) = 1/H(ei!l) are the frequency responses of the
respective inverse systems. An inverse system is also known as an equalizer and the process
of recovering the input from the output is known as equalization. ln practice it is often
difficult or impossible to build an exact inverse system, so an approximate inverse is used.
For example, a communication channel may introduce a time delay in addition to dis-
torting the signal's magnitude and phase spectra. An exact equalizer would have to intro-
duce a time advance, which implies it is noncausal and cannot be implemented. However,
we may choose to build an approximate equalizer, one that compensates for all the
distortion except for the time delay. An introduction to equalizer design is given in
Chapter 8.
.; ..

EXAMPLE 4.2 The output of a system in response to an input x(t) = e^{-2t}u(t) is y(t) = e^{-t}u(t).
Find the frequency response and the impulse response of this system.
Solution: Take the FT of x(t) and y(t), obtaining

X(jω) = 1 / (jω + 2)

and

Y(jω) = 1 / (jω + 1)

Now use the definition

H(jω) = Y(jω) / X(jω)

to obtain the system frequency response

H(jω) = (jω + 2) / (jω + 1)

which may be rewritten as

H(jω) = (jω + 1)/(jω + 1) + 1/(jω + 1)
      = 1 + 1/(jω + 1)

Take the inverse FT of each term to obtain the impulse response of the system:

h(t) = δ(t) + e^{-t}u(t)

• DIFFERENTIAL- AND DIFFERENCE-EQUATION DESCRIPTIONS

By definition, the frequency response is the amplitude and phase change the system imparts
to a complex sinusoid. The sinusoid is assumed to exist for all time; it does not have a
starting or ending time. This implies that the frequency response is the system's steady-
state response to a sinusoid. In contrast to differential- and difference-equation descriptions
for a system, the frequency response description cannot represent initial conditions; it can
only describe a system in a steady-state condition.
The differential-equation representation for a continuous-time system is

Σ_{k=0}^{N} a_k (d^k/dt^k) y(t) = Σ_{k=0}^{M} b_k (d^k/dt^k) x(t)

Take the FT of both sides of this equation and repeatedly apply the differentiation property

(d/dt) g(t)  ⟷  jω G(jω)

to obtain

Σ_{k=0}^{N} a_k (jω)^k Y(jω) = Σ_{k=0}^{M} b_k (jω)^k X(jω)

Rearrange this equation as the ratio of the FT of the output to the input, obtaining

Y(jω)/X(jω) = [Σ_{k=0}^{M} b_k (jω)^k] / [Σ_{k=0}^{N} a_k (jω)^k]

Hence Eq. (4.1) implies that the frequency response of the system is

H(jω) = [Σ_{k=0}^{M} b_k (jω)^k] / [Σ_{k=0}^{N} a_k (jω)^k]     (4.3)

The frequency response of a system described by a linear constant-coefficient differential
equation is a ratio of two polynomials in jω. Note that we can reverse this process and
determine a differential-equation description for the system from the frequency response
provided the frequency response is expressed as a ratio of polynomials in jω.
The difference-equation representation for a discrete-time system is of the form

Σ_{k=0}^{N} a_k y[n - k] = Σ_{k=0}^{M} b_k x[n - k]

Take the DTFT of both sides of this equation, using the time-shift property

x[n - k]  ⟷  e^{-jkΩ} X(e^{jΩ})

to obtain

Σ_{k=0}^{N} a_k (e^{-jΩ})^k Y(e^{jΩ}) = Σ_{k=0}^{M} b_k (e^{-jΩ})^k X(e^{jΩ})

Rewrite this equation as the ratio

Y(e^{jΩ})/X(e^{jΩ}) = [Σ_{k=0}^{M} b_k (e^{-jΩ})^k] / [Σ_{k=0}^{N} a_k (e^{-jΩ})^k]

Identifying this ratio with Eq. (4.2), we have

H(e^{jΩ}) = [Σ_{k=0}^{M} b_k (e^{-jΩ})^k] / [Σ_{k=0}^{N} a_k (e^{-jΩ})^k]     (4.4)

In the discrete-time case, the frequency response is a ratio of polynomials in e^{-jΩ}. Given a
frequency response of the form described in Eq. (4.4), we may reverse our derivation to
determine a difference-equation description for the system, if so desired.
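Equation (4.4) also gives a direct recipe for numerical evaluation: treat the coefficients a_k and b_k as polynomials in e^{-jΩ} and evaluate the ratio on a grid of frequencies. The MATLAB sketch below does this for the arbitrary illustrative coefficients a = [1, -0.5] and b = [1, 2]; these values are not from the text, and the code is simply a transcription of Eq. (4.4).

% Evaluate H(e^{jOmega}) of Eq. (4.4) on a frequency grid.
% The coefficient vectors below are illustrative values only.
a = [1 -0.5];                 % a_0, a_1, ... (denominator)
b = [1  2];                   % b_0, b_1, ... (numerator)
Omega = linspace(-pi, pi, 500);

num = zeros(size(Omega));
den = zeros(size(Omega));
for k = 0:length(b)-1
    num = num + b(k+1) * exp(-1j*k*Omega);   % sum of b_k (e^{-jOmega})^k
end
for k = 0:length(a)-1
    den = den + a(k+1) * exp(-1j*k*Omega);   % sum of a_k (e^{-jOmega})^k
end
H = num ./ den;

plot(Omega, 20*log10(abs(H))), grid on
xlabel('\Omega (rad)'), ylabel('|H(e^{j\Omega})| (dB)')

Essentially the same computation is performed by the Signal Processing Toolbox command freqz(b, a).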


EXAMPLE 4.3 Find the frequency response and impulse response of the system described by
the differential equation

(d²/dt²) y(t) + 3 (d/dt) y(t) + 2y(t) = 2 (d/dt) x(t) + x(t)

Solution: Here we have N = 2, M = 1. Substituting the coefficients of this differential
equation into Eq. (4.3), we obtain the frequency response

H(jω) = (2jω + 1) / ((jω)² + 3jω + 2)

The impulse response is given by the inverse FT of H(jω). Rewrite H(jω) using the partial
fraction expansion:

(2jω + 1) / ((jω)² + 3jω + 2) = A/(jω + 1) + B/(jω + 2)

Solving for A and B we obtain A = -1 and B = 3. Hence

H(jω) = -1/(jω + 1) + 3/(jω + 2)

The inverse FT gives the impulse response

h(t) = 3e^{-2t}u(t) - e^{-t}u(t)

EXAMPLE 4.4 The mechanical system depicted in Fig. 4.3(a) has the applied force x(t) as its
input and position y(t) as its output. The relationship between x(t) and y(t) is governed by the
differential equation

m (d²/dt²) y(t) + f (d/dt) y(t) + ky(t) = x(t)

Find the frequency response of this system and plot the magnitude response in dB for
m = 0.5 kg, f = 0.1 N·s/m, and k = 50 N/m.
Solution: Application of Eq. (4.3) gives

H(jω) = 1 / (m(jω)² + f jω + k)

Figure 4.3(b) depicts the magnitude response in dB for the specified values of m, f, and k.
This system tends to attenuate frequencies both below and above 10 rad/s. Hence it has a very
narrow bandpass filtering characteristic and effectively only responds to input frequency com-
ponents near 10 rad/s.
FIGURE 4.3 (a) Mechanical system with input x(t) and output y(t). (b) System magnitude response.
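The magnitude response of Fig. 4.3(b) is straightforward to reproduce from Eq. (4.3). The sketch below, offered only as a minimal illustration, evaluates H(jω) for the given m, f, and k and plots 20 log₁₀|H(jω)|; the resonance near ω = √(k/m) = 10 rad/s is clearly visible.

% Magnitude response (in dB) of the mechanical system of Example 4.4,
% H(jw) = 1/(m(jw)^2 + f*jw + k), with m = 0.5 kg, f = 0.1 N*s/m, k = 50 N/m.
m = 0.5;  f = 0.1;  k = 50;
w = linspace(0, 20, 2000);            % frequency grid (rad/s)
H = 1 ./ (m*(1j*w).^2 + f*(1j*w) + k);

plot(w, 20*log10(abs(H))), grid on
xlabel('Frequency (rad/s)'), ylabel('20 log_{10}|H(j\omega)| (dB)')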

• Drill Problem 4.1 Find the frequency response and the impulse response of a dis-
crete-time system described by the difference equation

y[n - 2] + 5y[n - 1] + 6y[n] = 8x[n - 1] + 18x[n]

Answer:

H(e^{jΩ}) = (8e^{-jΩ} + 18) / ((e^{-jΩ})² + 5e^{-jΩ} + 6)

h[n] = 2(-1/3)^n u[n] + (-1/2)^n u[n]  •
• Drill Problem 4.2 Write the differential equation relating the input and output of
the RC circuit depicted in Fig. 4.2(a). Use the results of this subsection to identify the
frequency response and impulse response from the differential equation.
Answer: See Example 4.1. •
• STATE-VARIABLE DESCRIPTION

The state-variable description for a continuous-time system is

(d/dt) q(t) = Aq(t) + bx(t)     (4.5)

y(t) = cq(t) + Dx(t)     (4.6)

We shall determine the frequency response in terms of (A, b, c, D) by taking the FT of
both sides of these equations using the differentiation property. Beginning with Eq. (4.5),
we have

jωq(jω) = Aq(jω) + bX(jω)     (4.7)

where we have defined the FT of the state vector as the vector containing the FT of each
state variable. That is, the ith entry in q(jω) is the FT of the ith state variable, q_i(t) ⟷ Q_i(jω).
We may rewrite Eq. (4.7) as

jωq(jω) - Aq(jω) = bX(jω)
(jωI - A)q(jω) = bX(jω)

and thus write

q(jω) = (jωI - A)⁻¹ bX(jω)     (4.8)

Here I is the N by N identity matrix. Now take the FT of Eq. (4.6) to obtain

Y(jω) = cq(jω) + DX(jω)

and substitute Eq. (4.8) to obtain

Y(jω) = (c(jωI - A)⁻¹b + D)X(jω)     (4.9)

Since by definition

H(jω) = Y(jω) / X(jω)

we therefore conclude that

H(jω) = c(jωI - A)⁻¹b + D     (4.10)

is the expression for the frequency response defined in terms of the state-variable descrip-
tion (A, b, c, D).
We may derive the frequency response of a discrete-time system in terms of (A, b, c,
D) by following an analogous set of steps and using the time-shift property in place of the
differentiation property. The result is

H(e^{jΩ}) = c(e^{jΩ}I - A)⁻¹b + D     (4.11)

EXAMPLE 4.5 Determine the frequency response of the continuous-time system with state-
variable description

A = [2  -1; 1  0],   b = [1; 0]
c = [3  1],   D = [0]

Solution: The frequency response is determined by substituting (A, b, c, D) into Eq. (4.10).
Begin by evaluating (jωI - A)⁻¹. For this example we have

(jωI - A)⁻¹ = [jω - 2   1; -1   jω]⁻¹
            = (1/((jω)² - 2jω + 1)) [jω   -1; 1   jω - 2]

Now substitute c, (jωI - A)⁻¹, b, and D into Eq. (4.10) to obtain

H(jω) = [3  1] (1/((jω)² - 2jω + 1)) [jω   -1; 1   jω - 2] [1; 0] + 0
      = (1/((jω)² - 2jω + 1)) [3jω + 1   jω - 5] [1; 0]
      = (3jω + 1) / ((jω)² - 2jω + 1)

• Drill Problem 4.3 Find the frequency response of a discrete-time system with state-
variable description

A = [-2  0; 1  -1],   b = [1; 1]
c = [0  2],   D = [1]

Answer:

H(e^{jΩ}) = 1 + 2(e^{jΩ} + 3) / ((e^{jΩ} + 2)(e^{jΩ} + 1))  •
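Equations (4.10) and (4.11) translate directly into a few lines of numerical linear algebra. As a hedged illustration, the MATLAB sketch below evaluates Eq. (4.11) at each frequency on a grid, using the matrices of Drill Problem 4.3; the same loop with jωI in place of e^{jΩ}I evaluates Eq. (4.10).

% Frequency response from a discrete-time state-variable description,
% H(e^{jOmega}) = c(e^{jOmega} I - A)^{-1} b + D  (Eq. 4.11).
% Matrices taken from Drill Problem 4.3.
A = [-2 0; 1 -1];  b = [1; 1];  c = [0 2];  D = 1;

Omega = linspace(-pi, pi, 400);
H = zeros(size(Omega));
I = eye(size(A,1));
for m = 1:length(Omega)
    z = exp(1j*Omega(m));
    H(m) = c * ((z*I - A) \ b) + D;   % backslash solves (zI - A)x = b
end

plot(Omega, abs(H)), grid on
xlabel('\Omega (rad)'), ylabel('|H(e^{j\Omega})|')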

In Chapter 2 we noted that there are many different state-variable descriptions for a
system with a given input-output characteristic. These different state-variable descriptions
are obtained by transforming the system's state vector with a nonsingular matrix T. Since
the frequency response of a system is an input-output description, it should also be in-
variant to transformations of the state vector. We now prove this important property of
a LTI system.
Let

H(jω) = c(jωI - A)⁻¹b + D

be the frequency response of a continuous-time system with state-variable description
(A, b, c, D). Now transform the state vector with a nonsingular matrix T to obtain a new
state-variable description (A', b', c', D'). The frequency response of the transformed sys-
tem is

H'(jω) = c'(jωI - A')⁻¹b' + D'

In Chapter 2 we established that A' = TAT⁻¹, b' = Tb, c' = cT⁻¹, and D' = D. Substitute
these relationships into the expression for H'(jω) to obtain

H'(jω) = cT⁻¹(jωI - TAT⁻¹)⁻¹Tb + D

Write I = TT⁻¹ and substitute it in the above equation to obtain

H'(jω) = cT⁻¹(jωTT⁻¹ - TAT⁻¹)⁻¹Tb + D
       = cT⁻¹(T(jωI - A)T⁻¹)⁻¹Tb + D

Now use the identity for the inverse of a product of invertible matrices, (FGH)⁻¹ =
H⁻¹G⁻¹F⁻¹, to write

H'(jω) = cT⁻¹T(jωI - A)⁻¹T⁻¹Tb + D
       = c(jωI - A)⁻¹b + D

Hence we have shown that H'(jω) = H(jω) and conclude that the frequency response of
a LTI system is invariant to transformations of its state vector. An analogous result holds
for discrete-time systems.
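This invariance is easy to spot-check numerically. The sketch below builds a nonsingular T, forms the transformed description (A', b', c', D'), and compares the two frequency responses of Eq. (4.10) on a grid; the matrices used are arbitrary illustrative values, not a system from the text.

% Numerical check that H'(jw) = H(jw) under a state transformation T.
% A, b, c, D below are arbitrary illustrative values.
A = [0 1; -2 -3];  b = [0; 1];  c = [1 0];  D = 0;
T = [1 2; 0 1];                              % any nonsingular transformation
A2 = T*A/T;  b2 = T*b;  c2 = c/T;  D2 = D;   % A' = TAT^{-1}, b' = Tb, c' = cT^{-1}

w = linspace(-10, 10, 200);
I = eye(2);
H1 = zeros(size(w));  H2 = zeros(size(w));
for m = 1:length(w)
    H1(m) = c  * ((1j*w(m)*I - A ) \ b ) + D;
    H2(m) = c2 * ((1j*w(m)*I - A2) \ b2) + D2;
end
max(abs(H1 - H2))    % on the order of machine precision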

4.3 Fourier Transform Representations for Periodic Signals
Recall that the FS and DTFS have been derived as the Fourier representations for periodic
signals. Strictly speaking, neither the FT nor DTFT converges for periodic signals. How-
ever, by incorporating impulses into the FT and DTFT in the appropriate manner, we may
develop FT and DTFT representations for periodic signals. These representations satisfy
the properties expected of the FT and DTFT. Hence we may use these representations and
the properties of the FT or DTFT to analyze problems involving mixtures of periodic and
nonperiodic signals. Our derivation also indicates the relationship between Fourier series
representations and Fourier transform representations. We begin with the continuous-time
case.

• RELATING THE FT TO THE FS


The FS representation for a periodic signal x(t) is

x(t) = Σ_{k=-∞}^{∞} X[k] e^{jkω₀t}     (4.12)

where ω₀ is the fundamental frequency of x(t). Now note that the inverse FT of a fre-
quency-shifted impulse, δ(ω - kω₀), is a complex sinusoid with frequency kω₀, as shown
by

(1/(2π)) e^{jkω₀t}  ⟷  δ(ω - kω₀)     (4.13)

Although e^{jkω₀t} is a periodic function and thus does not have a convergent FT, we obtain
this FT pair as a consequence of the sifting property of the impulse function.
Substitute the FT pair Eq. (4.13) into the FS representation Eq. (4.12) and use the
linearity property of the FT to obtain

x(t) = Σ_{k=-∞}^{∞} X[k] e^{jkω₀t}  ⟷  X(jω) = 2π Σ_{k=-∞}^{∞} X[k] δ(ω - kω₀)     (4.14)

Hence the FT of a periodic signal is a series of impulses spaced by the fundamental fre-
quency ω₀. The kth impulse has strength 2πX[k], where X[k] is the kth FS coefficient.
Figure 4.4 illustrates this relationship. Throughout this chapter we denote the strength of
impulses in the figures by their height as indicated by the labels on the vertical axis. This
is done solely for convenience in presenting the large number of impulses that occur in
this material. Using this convention we see that the shape of X(jω) is identical to that
of X[k].
Equation (4.14) also indicates how to convert between FT and FS representations
for periodic signals. The FT is obtained from the FS by placing impulses at integer multiples
of ω₀ and weighting them by 2π times the corresponding FS coefficient. Given a FT con-
sisting of impulses that are uniformly spaced in ω, we obtain FS coefficients by dividing
the impulse strengths by 2π. The fundamental frequency corresponds to the spacing be-
tween impulses.

FIGURE 4.4 FS and FT representations for a periodic continuous-time signal.

EXAMPLE 4.6 Find the FT representation for x(t) = cos(ω₀t).
Solution: The FS representation for x(t) is

cos(ω₀t)  ⟷  X[k] = 1/2 for k = ±1,  0 for k ≠ ±1

Substituting these coefficients into Eq. (4.14) gives

cos(ω₀t)  ⟷  X(jω) = πδ(ω - ω₀) + πδ(ω + ω₀)

This pair is depicted graphically in Fig. 4.5.

EXAMPLE 4.7 Find the FT of the impulse train

p(t) = Σ_{n=-∞}^{∞} δ(t - nT)

Solution: We note that p(t) is periodic with fundamental period T, so ω₀ = 2π/T and the
FS coefficients are given by

P[k] = (1/T) ∫_{-T/2}^{T/2} δ(t) e^{-jkω₀t} dt = 1/T

Substitute these values into Eq. (4.14) to obtain

P(jω) = (2π/T) Σ_{k=-∞}^{∞} δ(ω - kω₀)

Hence the FT of an impulse train is another impulse train. The spacing between the impulses
in frequency is inversely related to the spacing between impulses in time. This FT pair is
depicted in Fig. 4.6.

• Drill Problem 4.4 Determine the FT representation for the periodic square wave
depicted in Fig. 4.7.
Answer:

X(jω) = Σ_{k=-∞}^{∞} (2 sin(kπ/2)/k) δ(ω - kπ/2)  •

FIGURE 4.5 FT of a cosine function.

FIGURE 4.6 An impulse train and its FT.

• RELATING THE DTFT TO THE DTFS


The method for deriving the DTFT of a discrete-time periodic signal parallels that of the
continuous-time case. The DTFS expression for an N periodic signal x[n] is

x[n] = Σ_{k=⟨N⟩} X[k] e^{jkΩ₀n}     (4.15)

As in the FS case, the key observation is that the inverse DTFT of a frequency-shifted
impulse is a discrete-time complex sinusoid. The DTFT is a 2π periodic function of fre-
quency, and so we may express a frequency-shifted impulse either by expressing one period

δ(Ω - kΩ₀),  -π < Ω ≤ π,  -π < kΩ₀ ≤ π

or by using an infinite series of shifted impulses separated by an interval of 2π to obtain
the 2π periodic function

Σ_{m=-∞}^{∞} δ(Ω - kΩ₀ - m2π)     (4.16)

which is depicted in Fig. 4.8. The inverse DTFT of Eq. (4.16) is evaluated using the sifting
property of the impulse function. We have

(1/(2π)) e^{jkΩ₀n}  ⟷  Σ_{m=-∞}^{∞} δ(Ω - kΩ₀ - m2π)     (4.17)

FIGURE 4.7 Square wave for Drill Problem 4.4.



FIGURE 4.8 Infinite series of frequency-shifted impulses that is 2π periodic in frequency Ω.

Hence we identify the complex sinusoid and the frequency-shifted impulse as a DTFT pair.
This relationship is a direct consequence of the properties of impulse functions.
Use linearity and substitute Eq. (4.17) into Eq. (4.15) to obtain the DTFT of the
periodic signal x[n] as

X(e^{jΩ}) = 2π Σ_{k=⟨N⟩} X[k] Σ_{m=-∞}^{∞} δ(Ω - kΩ₀ - m2π)

Since X[k] is N periodic and NΩ₀ = 2π, we may rewrite the DTFT of x[n] as

X(e^{jΩ}) = 2π Σ_{k=-∞}^{∞} X[k] δ(Ω - kΩ₀)     (4.18)

Thus the DTFT representation for a periodic signal is a series of impulses spaced by the
fundamental frequency Ω₀. The kth impulse has strength 2πX[k], where X[k] is the kth
DTFS coefficient for x[n]. Figure 4.9 depicts both DTFS and DTFT representations for a
periodic discrete-time signal. Here again we see that the DTFS X[k] and the corresponding
DTFT X(e^{jΩ}) are similar.
Equation (4.18) establishes the relationship between DTFS and DTFT. Given the
DTFS coefficients and the fundamental frequency Ω₀, we obtain the DTFT representation
by placing impulses at integer multiples of Ω₀ and weighting them by 2π times the cor-
responding DTFS coefficient. We reverse this process to obtain the DTFS coefficients from
the DTFT representation. If the DTFT consists of impulses that are uniformly spaced in
Ω, then we obtain DTFS coefficients by dividing the impulse strengths by 2π. The fun-
damental frequency is the spacing between the impulses.

FIGURE 4.9 DTFS and DTFT representations for a periodic discrete-time signal.

FIGURE 4.10 DTFT of periodic signal for Example 4.8.

EXAMPLE 4.8 Determine the inverse DTFT of the frequency-domain representation depicted
in Fig. 4.10.
Solution: Express one period of X(e^{jΩ}) as

X(e^{jΩ}) = (1/(2j)) δ(Ω - Ω₁) - (1/(2j)) δ(Ω + Ω₁),  -π < Ω ≤ π

Now take the inverse DTFT of each frequency-shifted impulse to obtain

x[n] = (1/(2π))(1/(2j)) e^{jΩ₁n} - (1/(2π))(1/(2j)) e^{-jΩ₁n} = (1/(2π)) sin(Ω₁n)
• Drill Problem 4.5 Find both the DTFS and DTFT representations for the periodic
signal

x[n] = 2 cos(3πn/8 + π/3) + 4 sin(πn/2)

Answer: DTFS: Ω₀ = 2π/16

X[k] = -2/j for k = -4,  e^{-jπ/3} for k = -3,  e^{jπ/3} for k = 3,  2/j for k = 4,
       0 otherwise for -7 ≤ k ≤ 8

DTFT: one period

X(e^{jΩ}) = -(4π/j) δ(Ω + π/2) + 2π e^{-jπ/3} δ(Ω + 3π/8) + 2π e^{jπ/3} δ(Ω - 3π/8)
            + (4π/j) δ(Ω - π/2),  -π < Ω ≤ π  •


4.4 Convolution and Modulation with Mixed Signal Classes
In this section we use the FT and DTFT representations of periodic signals to analyze
problems involving mixtures of periodic and nonperiodic signals. It is common to have
mixing of periodic and nonperiodic signals in convolution and modulation problems. For
example, if a periodic signal is applied to a stable filter, the output is expressed as the
convolution of the periodic input signal and the nonperiodic impulse response. The tool
we use to analyze problems involving mixtures of periodic and nonperiodic continuous-
time signal classes is the FT. The DTFT applies to mixtures of periodic and nonperiodic
discrete-time signal classes. This analysis is possible since we now have FT and DTFT
representations for both periodic and nonperiodic signals. We begin examining convolu-
tion of periodic and nonperiodic signals and then focus on modulation applications.
• CONVOLUTION OF PERIODIC AND NONPERIODIC SIGNALS

In Section 3.6 we established that convolution in the time domain corresponds to multi-
plication in the frequency domain: that is,

y(t) = x(t) * h(t)  ⟷  Y(jω) = X(jω)H(jω)

This property may be applied to problems in which one of the time-domain signals, say,
x(t), is periodic by using its FT representation. Recall that the FT of a periodic signal
x(t) is

x(t)  ⟷  X(jω) = 2π Σ_{k=-∞}^{∞} X[k] δ(ω - kω₀)

where X[k] are the FS coefficients. Substitute this representation into the convolution prop-
erty to obtain

y(t) = x(t) * h(t)  ⟷  Y(jω) = 2π Σ_{k=-∞}^{∞} X[k] δ(ω - kω₀) H(jω)     (4.19)

                    ⟷  Y(jω) = 2π Σ_{k=-∞}^{∞} H(jkω₀) X[k] δ(ω - kω₀)     (4.20)

where in the last line we have used the sifting property of the impulse function. Figure
4.11 illustrates the multiplication of X(jω) and H(jω) that occurs in Eq. (4.20). The
strength of the kth impulse is adjusted by the value of H(jω) evaluated at the frequency
at which it is located, that is, H(jkω₀). The form of Y(jω) corresponds to a periodic signal.
Hence y(t) is periodic with the same period as x(t). The most common application of this
property is in determining the output of a filter with impulse response h(t) and periodic
input x(t).

EXAMPLE 4.9 Let the input to a system with impulse response h(t) = sin(πt)/(πt) be the
periodic square wave depicted in Fig. 4.7. Use the convolution property to find the output of
this system.
Solution: The frequency response of the system is

h(t)  ⟷  H(jω) = 1 for |ω| ≤ π,  0 otherwise

FIGURE 4.11 Convolution property for a mixture of periodic and nonperiodic signals.

The FT of the periodic input signal was obtained in Drill Problem 4.4 and is given by

X(jω) = Σ_{k=-∞}^{∞} (2 sin(kπ/2)/k) δ(ω - kπ/2)

The FT of the system output is Y(jω) = H(jω)X(jω). This product is depicted in Fig. 4.12,
where we see that

Y(jω) = 2δ(ω + π/2) + πδ(ω) + 2δ(ω - π/2)

In effect, the system described by H(jω) acts as a lowpass filter, passing the discrete-frequency
components at -π/2, 0, and π/2, while suppressing all others. Taking the inverse FT of Y(jω)
gives the output

y(t) = 1/2 + (2/π) cos(πt/2)
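As a hedged numerical illustration of Eq. (4.20), the MATLAB sketch below sums the surviving harmonics for this example: the square-wave FS coefficients X[k] = sin(kπ/2)/(kπ) from Drill Problem 4.4 are weighted by the ideal lowpass response H(jkω₀), and the resulting partial sum reproduces y(t) = 1/2 + (2/π)cos(πt/2).

% Output of the ideal lowpass filter of Example 4.9 via Eq. (4.20):
% only harmonics with |k*w0| <= pi pass, where w0 = pi/2.
w0 = pi/2;
k  = -10:10;
Xk = zeros(size(k));
Xk(k ~= 0) = sin(k(k ~= 0)*pi/2) ./ (k(k ~= 0)*pi);  % FS coefficients of the square wave
Xk(k == 0) = 1/2;
Hk = double(abs(k*w0) <= pi);                        % ideal lowpass, cutoff pi

t = linspace(-4, 4, 1000);
y = zeros(size(t));
for m = 1:length(k)
    y = y + Hk(m)*Xk(m)*exp(1j*k(m)*w0*t);
end
y = real(y);

plot(t, y), grid on            % matches 1/2 + (2/pi)*cos(pi*t/2)
xlabel('t'), ylabel('y(t)')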

An analogous result is obtained in the discrete-time case. The convolution property
is

y[n] = x[n] * h[n]  ⟷  Y(e^{jΩ}) = X(e^{jΩ}) H(e^{jΩ})

FIGURE 4.12 Application of the convolution property in Example 4.9.

We may use this property when x[n] is periodic by substituting the DTFT representation
given in Eq. (4.18) for periodic signals,

X(e^{jΩ}) = 2π Σ_{k=-∞}^{∞} X[k] δ(Ω - kΩ₀)

and so obtain

y[n] = x[n] * h[n]  ⟷  Y(e^{jΩ}) = 2π Σ_{k=-∞}^{∞} H(e^{jkΩ₀}) X[k] δ(Ω - kΩ₀)     (4.21)

The form of Y(e^{jΩ}) indicates that y[n] is also periodic with the same period as x[n]. This
property finds application in evaluating the input-output behavior of LTI systems.

• Drill Problem 4.6 Let the impulse response of a discrete-time system be

h[n] = (1/2)^n u[n]

Determine the output of this system in response to the input

x[n] = 3 + cos(πn + π/3)

Answer:

y[n] = 6 + (2/3) cos(πn + π/3)  •
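Results like this can always be checked by filtering a long stretch of the periodic input numerically and comparing the steady-state output with the prediction of Eq. (4.21). The sketch below does this for Drill Problem 4.6 using MATLAB's filter command with the difference-equation coefficients corresponding to h[n] = (1/2)^n u[n].

% Verify Drill Problem 4.6: x[n] = 3 + cos(pi*n + pi/3) applied to
% h[n] = (1/2)^n u[n], i.e., y[n] - (1/2) y[n-1] = x[n].
n = 0:99;
x = 3 + cos(pi*n + pi/3);
y = filter(1, [1 -0.5], x);            % numerical output

yPred = 6 + (2/3)*cos(pi*n + pi/3);    % predicted steady-state output
max(abs(y(20:end) - yPred(20:end)))    % small once the transient has died out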
• MODULATION OF PERIODIC AND NONPERIODIC SIGNALS

Now consider the modulation property of the FT described by

y(t) = g(t)x(t)  ⟷  Y(jω) = (1/(2π)) G(jω) * X(jω)

We may use this property even if x(t) is periodic by employing its FT representation.
Substituting Eq. (4.14) for X(jω) gives

y(t) = g(t)x(t)  ⟷  Y(jω) = G(jω) * Σ_{k=-∞}^{∞} X[k] δ(ω - kω₀)

The sifting property of the impulse function implies that convolution of any function with
a shifted impulse results in a shifted version of the original function. Hence we have

y(t) = g(t)x(t)  ⟷  Y(jω) = Σ_{k=-∞}^{∞} X[k] G(j(ω - kω₀))     (4.22)

Modulation of g(t) with the periodic function x(t) gives a FT consisting of a weighted sum
of shifted versions of G(jω). This result is illustrated in Fig. 4.13. Note that the form of
Y(jω) corresponds to the FT of a nonperiodic signal. The product of periodic and non-
periodic signals is nonperiodic.
EXAMPLE 4.10 Consider a system with output y(t) = g(t)x(t). Assume x(t) is the square
wave depicted in Fig. 4.7. (a) Find Y(jω) in terms of G(jω). (b) Sketch Y(jω) if g(t) =
cos(t/2).
Solution: The square wave has the FS representation (with ω₀ = π/2)

x(t)  ⟷  X[k] = sin(kπ/2)/(kπ)

FIGURE 4.13 Modulation property for the combination of a periodic and nonperiodic signal.

(a) Substituting this result into Eq. (4.22) gives

Y(jω) = Σ_{k=-∞}^{∞} (sin(kπ/2)/(kπ)) G(j(ω - kπ/2))

(b) Here we have

G(jω) = πδ(ω - 1/2) + πδ(ω + 1/2)

and thus Y(jω) may be expressed as

Y(jω) = Σ_{k=-∞}^{∞} (sin(kπ/2)/k) [δ(ω - 1/2 - kπ/2) + δ(ω + 1/2 - kπ/2)]

Figure 4.14 depicts the terms constituting Y(jω) in the sum near k = 0.
• Drill Problem 4.7 Use the modulation property to determine the frequency response
of a system with impulse response

h(t) = (sin(πt)/(πt)) cos(3πt)

Answer:

H(jω) = 1/2 for 2π < |ω| < 4π,  0 otherwise  •
The discrete-time modulation property is

y[n] = x[n]g[n] \;\stackrel{DTFT}{\longleftrightarrow}\; Y(e^{j\Omega}) = \frac{1}{2\pi} \int_{\langle 2\pi\rangle} X(e^{j\theta})\, G(e^{j(\Omega-\theta)})\, d\theta

If x[n] is periodic, then this property is still applicable provided that we use the DTFT representation for x[n] given in Eq. (4.18), as shown by

X(e^{j\Omega}) = 2\pi \sum_{k=-\infty}^{\infty} X[k]\,\delta(\Omega - k\Omega_0)

where X[k] are the DTFS coefficients. Substitute X(e^{j\theta}) into the definition of periodic convolution to obtain

Y(e^{j\Omega}) = \int_{\langle 2\pi\rangle} \sum_{k=-\infty}^{\infty} X[k]\,\delta(\theta - k\Omega_0)\, G(e^{j(\Omega-\theta)})\, d\theta

FIGURE 4.14  Solution for Example 4.10(b). (The figure shows the impulses that constitute Y(jω) near k = 0.)



In any 2π interval of θ there are exactly N impulses of the form δ(θ - kΩ_0). This is because Ω_0 = 2π/N. Hence we can reduce the infinite sum to any N consecutive values of k. Interchanging the sum and integral gives

Y(e^{j\Omega}) = \sum_{k=\langle N\rangle} X[k] \int_{\langle 2\pi\rangle} \delta(\theta - k\Omega_0)\, G(e^{j(\Omega-\theta)})\, d\theta

Now apply the sifting property of the impulse function to evaluate the integral and obtain

y[n] = x[n]g[n] \;\stackrel{DTFT}{\longleftrightarrow}\; Y(e^{j\Omega}) = \sum_{k=\langle N\rangle} X[k]\, G(e^{j(\Omega - k\Omega_0)}) \qquad (4.23)

Modulation of g[n] with the periodic sequence x[n] results in a DTFT consisting of a weighted sum of shifted versions of G(e^{jΩ}). Note that y[n] is nonperiodic since the product of a periodic signal and a nonperiodic signal is nonperiodic. Hence the form of Y(e^{jΩ}) corresponds to a nonperiodic signal.

EXAMPLE 4.11  Consider the following signal:

x[n] = \cos\!\left(\frac{7\pi}{16}n\right) + \cos\!\left(\frac{9\pi}{16}n\right)

Use the modulation property to evaluate the effect of computing the DTFT using only the 2M + 1 values x[n], |n| ≤ M.

Solution:  The DTFT of x[n] is

X(e^{j\Omega}) = \pi\delta\!\left(\Omega + \frac{9\pi}{16}\right) + \pi\delta\!\left(\Omega + \frac{7\pi}{16}\right) + \pi\delta\!\left(\Omega - \frac{7\pi}{16}\right) + \pi\delta\!\left(\Omega - \frac{9\pi}{16}\right), \quad -\pi < \Omega \le \pi

X(e^{jΩ}) consists of impulses at ±7π/16 and ±9π/16. Now define a signal y[n] = x[n]w[n], where w[n] is the window function

w[n] = \begin{cases} 1, & |n| \le M \\ 0, & |n| > M \end{cases}

This window function selects the 2M + 1 values of x[n] centered on n = 0. Comparison of the DTFTs of y[n] and x[n] establishes the effect of using only 2M + 1 values. The modulation property implies

Y(e^{j\Omega}) = \tfrac{1}{2}\left\{ W(e^{j(\Omega+9\pi/16)}) + W(e^{j(\Omega+7\pi/16)}) + W(e^{j(\Omega-7\pi/16)}) + W(e^{j(\Omega-9\pi/16)}) \right\}

where

W(e^{j\Omega}) = \frac{\sin\!\left(\frac{\Omega}{2}(2M+1)\right)}{\sin\!\left(\frac{\Omega}{2}\right)}
We see that windowing introduces replicas of W(e^{jΩ}) at the frequencies 7π/16 and 9π/16 instead of the impulses that are present in X(e^{jΩ}). We may view this as a smearing or broadening of the original impulses. The energy in Y(e^{jΩ}) is now smeared over a band centered on the frequencies of the cosines. The extent of the smearing depends on the width of the mainlobe of W(e^{jΩ}), which is given by 4π/(2M + 1) (see Fig. 3.19).

FIGURE 4.15  Effect of windowing a data record. Y(e^{jΩ}) is shown for different values of M, assuming Ω_1 = 7π/16 and Ω_2 = 9π/16. (a) M = 80. (b) M = 12. (c) M = 8.

Figures 4.15(a)-(c) depict Y(e^{jΩ}) for several decreasing values of M. When M is large enough so that the width of the mainlobe of W(e^{jΩ}) is small relative to the separation between the frequencies 7π/16 and 9π/16, then Y(e^{jΩ}) is a fairly good approximation to X(e^{jΩ}). This case is depicted in Fig. 4.15(a) using M = 80. However, as M decreases and the mainlobe width becomes about the same as the separation between the frequencies 7π/16 and 9π/16, the peaks associated with each individual shifted version of W(e^{jΩ}) begin to overlap and merge into a single peak. This is illustrated in Figs. 4.15(b) and (c) by using values M = 12 and M = 8, respectively.
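The smearing described in this example is easy to reproduce. The following sketch (an added illustration, not part of the text; the frequency grid size is an arbitrary choice) evaluates the windowed DTFT directly for the three values of M used in Fig. 4.15 and compares the window mainlobe width 4π/(2M + 1) with the 2π/16 separation of the two cosines.

```python
import numpy as np

# Windowed DTFT of x[n] = cos(7*pi*n/16) + cos(9*pi*n/16)  (illustrative sketch).
def windowed_dtft(M, n_freq=2048):
    n = np.arange(-M, M + 1)
    x = np.cos(7 * np.pi * n / 16) + np.cos(9 * np.pi * n / 16)
    Omega = np.linspace(-np.pi, np.pi, n_freq)
    Y = x @ np.exp(-1j * np.outer(n, Omega))   # direct DTFT sum over the 2M+1 samples
    return Omega, Y

for M in (80, 12, 8):
    Omega, Y = windowed_dtft(M)
    at = lambda w0: abs(Y[np.argmin(np.abs(Omega - w0))])
    # mainlobe width 4*pi/(2M+1) vs. the 2*pi/16 separation of the cosines;
    # when the dip at the midpoint 8*pi/16 disappears, the two peaks have merged
    print(M, 4 * np.pi / (2 * M + 1), at(7 * np.pi / 16), at(np.pi / 2), at(9 * np.pi / 16))
```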

The problem of identifying sinusoidal signals of different frequencies in data is very important and occurs frequently in signal analysis. The preceding example illustrates that our ability to distinguish distinct sinusoids is limited by the length of the data record. If the number of available data points is small relative to the frequency separation, the DTFT is unable to distinguish the presence of two distinct sinusoids.

FIGURE 4.16  Drill Problem 4.8. (a) System: the output is the sum of x[n] and (-1)^n x[n]. (b) Input spectrum. (c) Output spectrum.

In practice, we are always restricted to finite-length data records in any signal analysis application. Thus it is important to recognize the effects of windowing.

• Drill Problem 4.8  Consider the system depicted in Fig. 4.16(a). Determine an expression for Y(e^{jΩ}), the DTFT of the output y[n], and sketch Y(e^{jΩ}) assuming that X(e^{jΩ}) is given in Fig. 4.16(b).

Answer:

Y(e^{j\Omega}) = X(e^{j\Omega}) + X(e^{j(\Omega-\pi)})

See Fig. 4.16(c) for the sketch.


4.5 Fourier Transform Representation for Discrete-Time Signals

In this section we derive a FT representation for discrete-time signals by incorporating impulses into the signal description in the appropriate manner. This representation satisfies all the properties of the FT and thus converts the FT into a powerful tool for analyzing problems involving mixtures of discrete- and continuous-time signals. Our derivation also indicates the relationship between the FT and DTFT. Combining the results of this section with the Fourier transform representations for periodic signals derived in Section 4.3 enables the FT to be used as an analysis tool for any of the four signal classes.

We begin the discussion by establishing a correspondence between continuous-time frequency ω and discrete-time frequency Ω. Define complex sinusoids x(t) = e^{jωt} and g[n] = e^{jΩn}. A connection between the frequencies of these sinusoids is established by requiring g[n] to correspond to x(t). Suppose we force g[n] to be equal to the samples of x(t) taken at intervals of 𝒯, that is, g[n] = x(n𝒯). This implies

e^{j\Omega n} = e^{j\omega \mathcal{T} n}

and we may define Ω = ω𝒯. In words, discrete-time frequency Ω corresponds to continuous-time frequency ω multiplied by the sampling interval 𝒯.

• RELATING THE FT TO THE DTFT

Now consider the DTFT of an arbitrary discrete-time signal x[n]. We have

X(e^{j\Omega}) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\Omega n} \qquad (4.24)

We seek a FT pair x_\delta(t) \stackrel{FT}{\longleftrightarrow} X_\delta(j\omega) that corresponds to the DTFT pair x[n] \stackrel{DTFT}{\longleftrightarrow} X(e^{j\Omega}). Substitute Ω = ω𝒯 into Eq. (4.24) to obtain the function of continuous-time frequency ω:

X_\delta(j\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega\mathcal{T} n} \qquad (4.25)

Take the inverse FT of X_δ(jω) using linearity and the FT pair

\delta(t - n\mathcal{T}) \;\stackrel{FT}{\longleftrightarrow}\; e^{-j\omega\mathcal{T} n}

to obtain the continuous-time signal description

x_\delta(t) = \sum_{n=-\infty}^{\infty} x[n]\,\delta(t - n\mathcal{T}) \qquad (4.26)

Hence

x_\delta(t) = \sum_{n=-\infty}^{\infty} x[n]\,\delta(t - n\mathcal{T}) \;\stackrel{FT}{\longleftrightarrow}\; X_\delta(j\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega\mathcal{T} n} \qquad (4.27)

where x_δ(t) is a continuous-time signal that corresponds to x[n], and the Fourier transform X_δ(jω) corresponds to the discrete-time Fourier transform X(e^{jΩ}). We refer to Eq. (4.26) as the continuous-time representation of x[n]. This representation has an associated sampling interval 𝒯 that determines the relationship between continuous- and discrete-time frequency: Ω = ω𝒯.

FIGURE 4.17  Relationship between FT and DTFT representations of a discrete-time signal. (Impulse conversion relates x[n] and its 2π-periodic DTFT to x_δ(t) and its FT via Ω = ω𝒯.)



Figure 4.17 illustrates the relationships between the signals x[n] and x_δ(t) and the corresponding Fourier representations X(e^{jΩ}) and X_δ(jω). The DTFT X(e^{jΩ}) is 2π periodic in Ω, while the FT X_δ(jω) is 2π/𝒯 periodic in ω. The discrete-time signal has values x[n], while the corresponding continuous-time signal consists of a series of impulses separated by 𝒯, with the nth impulse having strength x[n].


EXAMPLE 4.12  Determine the FT pair associated with a signal whose DTFT is

X(e^{j\Omega}) = \frac{1}{1 - a e^{-j\Omega}}

Solution:  Take the inverse DTFT to obtain

x[n] = a^{n} u[n]

Now use Eq. (4.26) to define the continuous-time signal

x_\delta(t) = \sum_{n=0}^{\infty} a^{n}\,\delta(t - n\mathcal{T})

Substituting Ω = ω𝒯 gives

x_\delta(t) \;\stackrel{FT}{\longleftrightarrow}\; X_\delta(j\omega) = \frac{1}{1 - a e^{-j\omega\mathcal{T}}}
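The correspondence X_δ(jω) = X(e^{jΩ})|_{Ω=ω𝒯} in this example is easy to confirm numerically. The sketch below (an added illustration, not part of the text; the values of a and 𝒯 and the truncation length are arbitrary choices) compares a truncated version of the impulse-train sum with the closed-form expression.

```python
import numpy as np

# Check the FT pair of Example 4.12 by truncating the sum over n (illustrative sketch).
a, T = 0.6, 0.25
n = np.arange(0, 200)                      # a**n is negligible beyond this for |a| < 1

def X_delta(w):                            # truncated sum of a^n * exp(-j*w*T*n)
    return np.sum(a ** n * np.exp(-1j * w * T * n))

for w in (0.0, 3.0, 10.0):
    closed_form = 1.0 / (1.0 - a * np.exp(-1j * w * T))
    print(w, X_delta(w), closed_form)      # the two values agree closely
```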

Note the many parallels between the continuous-time representation of a discrete-time signal given in Eq. (4.26) and the FT representation of a periodic signal given in Eq. (4.14). The FT representation is obtained from the FS coefficients by introducing impulses at integer multiples of the fundamental frequency ω_0, with the strength of the kth impulse determined by the kth FS coefficient. The FS representation X[k] is discrete valued, while the corresponding FT representation X(jω) is continuous in frequency. Here x[n] is discrete valued, while x_δ(t) is continuous. The parameter 𝒯 determines the separation between impulses in x_δ(t), just as ω_0 does in X(jω). These parallels between x_δ(t) and X(jω) are a direct consequence of the FS-DTFT duality property discussed in Section 3.6. Duality states that the roles of time and frequency in Fourier analysis are interchangeable. Here x_δ(t) is a continuous-time signal whose FT is a 2π/𝒯 periodic function of frequency, while X(jω) is a continuous frequency signal whose inverse FT is a 2π/ω_0 periodic function of time.

• RELATING THE FT TO THE DTFS

In Section 4.3 we derived the FT representation for a periodic continuous-time signal. Previously in this section we have shown how to represent a discrete-time nonperiodic signal with the FT. The remaining case, representation of a discrete-time periodic signal with the FT, is obtained by combining the DTFT representation for a discrete-time periodic signal derived in Section 4.3 with the results of the previous subsection. Once this is accomplished we may use the FT to represent any of the four signal classes.

Recall that the DTFT representation for an N periodic signal x[n] is given in Eq. (4.18) as

X(e^{j\Omega}) = \sum_{k=-\infty}^{\infty} 2\pi X[k]\,\delta(\Omega - k\Omega_0)

where X[k] are the DTFS coefficients. Perform the substitution Ω = ω𝒯 to obtain the FT representation

X_\delta(j\omega) = \sum_{k=-\infty}^{\infty} 2\pi X[k]\,\delta(\omega\mathcal{T} - k\Omega_0) = \sum_{k=-\infty}^{\infty} 2\pi X[k]\,\delta\!\left(\mathcal{T}\!\left(\omega - k\frac{\Omega_0}{\mathcal{T}}\right)\right)

Now use the scaling property of the impulse function, δ(av) = (1/a)δ(v), derived in Chapter 1 to rewrite X_δ(jω) as

X_\delta(j\omega) = \frac{2\pi}{\mathcal{T}} \sum_{k=-\infty}^{\infty} X[k]\,\delta\!\left(\omega - k\frac{\Omega_0}{\mathcal{T}}\right) \qquad (4.28)

Recall that X[k] is an N periodic function. This implies that X_δ(jω) is periodic with period NΩ_0/𝒯 = 2π/𝒯. The signal x_δ(t) corresponding to this FT is most easily obtained by substituting the periodic signal x[n] into Eq. (4.26), that is,

x_\delta(t) = \sum_{n=-\infty}^{\infty} x[n]\,\delta(t - n\mathcal{T}) \qquad (4.29)

Note that the N periodic nature of x[n] implies that x_δ(t) is also periodic with fundamental period N𝒯. Hence both x_δ(t) and X_δ(jω) are N periodic impulse trains, as depicted in Fig. 4.18.

• Drill Problem 4.9  Determine the FT pair associated with the discrete-time periodic signal

x[n] = \cos\!\left(\frac{2\pi}{N}n\right)

FIGURE 4.18  Relationship between FT and DTFS representations of a discrete-time periodic signal. (The figure shows x[n] and its DTFS X[k], together with the impulse-converted counterparts x_δ(t) and X_δ(jω).)

Answer:

x_\delta(t) = \sum_{n=-\infty}^{\infty} \cos\!\left(\frac{2\pi}{N}n\right)\delta(t - n\mathcal{T}) \;\stackrel{FT}{\longleftrightarrow}\; X_\delta(j\omega) = \frac{\pi}{\mathcal{T}} \sum_{m=-\infty}^{\infty} \left[\delta\!\left(\omega + \frac{2\pi}{N\mathcal{T}} - m\frac{2\pi}{\mathcal{T}}\right) + \delta\!\left(\omega - \frac{2\pi}{N\mathcal{T}} - m\frac{2\pi}{\mathcal{T}}\right)\right]

4.6 Sampling

In this section we use the FT representation of discrete-time signals to analyze the effects of uniformly sampling a signal. The sampling operation generates a discrete-time signal from a continuous-time signal. Sampling of continuous-time signals is often performed in order to manipulate the signal with a computer or microprocessor. Such manipulations are common in communication, control, and signal-processing systems. We shall show how the DTFT of the sampled signal is related to the FT of the continuous-time signal. Sampling is also frequently performed on discrete-time signals to change the effective data rate, an operation termed subsampling. In this case the sampling process discards values of the signal. We examine the impact of subsampling by comparing the DTFT of the sampled signal to the DTFT of the original signal.

• SAMPLING CONTINUOUS-TIME SIGNALS

Let x(t) be a continuous-time signal. We define a discrete-time signal x[n] that is equal to the ''samples'' of x(t) at integer multiples of a sampling interval 𝒯, that is, x[n] = x(n𝒯). The effect of sampling is evaluated by relating the DTFT of x[n] to the FT of x(t). Our tool for exploring this relationship is the FT representation of discrete-time signals.

Begin with the continuous-time representation for the discrete-time signal x[n] given in Eq. (4.26),

x_\delta(t) = \sum_{n=-\infty}^{\infty} x[n]\,\delta(t - n\mathcal{T})

Now substitute x(n𝒯) for x[n] to obtain

x_\delta(t) = \sum_{n=-\infty}^{\infty} x(n\mathcal{T})\,\delta(t - n\mathcal{T})

Since x(t)δ(t - n𝒯) = x(n𝒯)δ(t - n𝒯), we may rewrite x_δ(t) as a product of time functions

x_\delta(t) = x(t)p(t) \qquad (4.30)

where

p(t) = \sum_{n=-\infty}^{\infty} \delta(t - n\mathcal{T}) \qquad (4.31)

Hence Eq. (4.30) implies that we may mathematically represent the sampled signal as the product of the original continuous-time signal and an impulse train, as depicted in Fig. 4.19. This representation is commonly termed impulse sampling.

FIGURE 4.19  Mathematical representation of sampling as the product of a given time signal and an impulse train.

The effect of sampling is determined by relating the FT of x_δ(t) to the FT of x(t). Since multiplication in the time domain corresponds to convolution in the frequency domain, we have

X_\delta(j\omega) = \frac{1}{2\pi}\, X(j\omega) * P(j\omega)

Substituting the value for P(jω) determined in Example 4.7 into this relationship, we obtain

X_\delta(j\omega) = \frac{1}{2\pi}\, X(j\omega) * \frac{2\pi}{\mathcal{T}} \sum_{k=-\infty}^{\infty} \delta(\omega - k\omega_s)

where ω_s = 2π/𝒯 is the sampling frequency. Now convolve X(jω) with each of the shifted impulses to obtain

X_\delta(j\omega) = \frac{1}{\mathcal{T}} \sum_{k=-\infty}^{\infty} X(j(\omega - k\omega_s)) \qquad (4.32)

The FT of the sampled signal is given by an infinite sum of shifted versions of the original signal's FT. The shifted versions are offset by integer multiples of ω_s. The shifted versions of X(jω) may overlap with each other if ω_s is not large enough compared to the frequency extent of X(jω). This effect is demonstrated in Fig. 4.20 by depicting Eq. (4.32) for several different values of 𝒯. The frequency content of the signal x(t) is assumed to lie within the frequency band -W < ω < W for purposes of illustration. In Figs. 4.20(b)-(d) we depict the cases ω_s = 3W, ω_s = 2W, and ω_s = (3/2)W, respectively. The shifted replicates of X(jω) associated with the kth term in Eq. (4.32) are labeled. Note that as 𝒯 increases and ω_s decreases, the shifted replicates of X(jω) move closer together. They overlap one another when ω_s < 2W.

Overlap in the shifted replicas of the original spectrum is termed aliasing, which refers to the phenomenon of a high-frequency component taking on the identity of a low-frequency one. Aliasing distorts the spectrum of the original continuous-time signal. This effect is illustrated in Fig. 4.20(d). Overlap between the replicas of X(jω) at k = 0 and k = 1 occurs for frequencies between ω_s - W and W. These replicas add, and thus the basic shape of the spectrum changes from portions of a triangle to a constant. The spectrum of the sampled signal no longer has a one-to-one correspondence to that of the original continuous-time signal.

FIGURE 4.20  FT of a sampled signal for different sampling frequencies. (a) Spectrum of continuous-time signal. (b) Spectrum of sampled signal when ω_s = 3W. (c) Spectrum of sampled signal when ω_s = 2W. (d) Spectrum of sampled signal when ω_s = (3/2)W.

This means that we cannot use the spectrum of the sampled signal to analyze the continuous-time signal, and we cannot uniquely reconstruct the continuous-time signal from its samples. The reconstruction problem is addressed in the following section. As Fig. 4.20 illustrates, aliasing is prevented by choosing the sampling interval 𝒯 so that ω_s > 2W, where W is the highest frequency component in the signal. This implies that we must satisfy the condition 𝒯 < π/W.

The DTFT of the sampled signal is obtained from X_δ(jω) using the relationship Ω = ω𝒯, that is,

x[n] \;\stackrel{DTFT}{\longleftrightarrow}\; X(e^{j\Omega}) = X_\delta(j\omega)\big|_{\omega=\Omega/\mathcal{T}}

This scaling of the independent variable implies that ω = ω_s corresponds to Ω = 2π. Figures 4.21(a)-(c) depict the DTFTs of the sampled signals corresponding to the FTs in Figs. 4.20(b)-(d). Note that the shape is the same in each case; the only difference is a scaling of the frequency axis. The FTs have period ω_s, while the DTFTs have period 2π.

FIGURE 4.21  DTFTs corresponding to the FTs depicted in Figs. 4.20(b)-(d). (a) ω_s = 3W. (b) ω_s = 2W. (c) ω_s = (3/2)W.


EXAMPLE 4.13  Consider the effect of sampling the sinusoidal signal

x(t) = \cos(\pi t)

Determine the FT of the sampled signal for the following sampling intervals: (a) 𝒯 = 1/4, (b) 𝒯 = 1, and (c) 𝒯 = 3/2.

Solution:  Use Eq. (4.32) for each value of 𝒯. In particular, note that

x(t) \;\stackrel{FT}{\longleftrightarrow}\; X(j\omega) = \pi\delta(\omega + \pi) + \pi\delta(\omega - \pi)

Substitution of X(jω) into Eq. (4.32) gives

X_\delta(j\omega) = \frac{\pi}{\mathcal{T}} \sum_{k=-\infty}^{\infty} \left[\delta(\omega + \pi - k\omega_s) + \delta(\omega - \pi - k\omega_s)\right]

Hence X_δ(jω) consists of pairs of impulses separated by 2π centered on integer multiples of the sampling frequency, ω_s. The sampling frequency is different in each case. Using ω_s = 2π/𝒯 gives (a) ω_s = 8π, (b) ω_s = 2π, and (c) ω_s = 4π/3, respectively. The sampled continuous-time signals and their FTs are depicted in Fig. 4.22.

In case (a) the impulses are clearly paired about multiples of 8π, as depicted in Fig. 4.22(b). As 𝒯 increases and ω_s decreases, pairs of impulses associated with different values of k become closer together. In case (b), impulses associated with adjacent indices k superimpose on one another, as illustrated in Fig. 4.22(c). This corresponds to a sampling interval of one-half period.

FIGURE 4.22  Effect of sampling a sinusoid at different rates. (a) Original signal and FT. (b) Sampled signal and FT for 𝒯 = 1/4. (c) Sampled signal and FT for 𝒯 = 1. (d) Sampled signal and FT for 𝒯 = 3/2.

There is an ambiguity here, since we cannot uniquely determine the original signal from either x_δ(t) or X_δ(jω). For example, both x_1(t) = cos(πt) and x_2(t) = e^{jπt} result in the same sequence x[n] = (-1)^n for 𝒯 = 1. In case (c), shown in Fig. 4.22(d), the pairs of impulses associated with each index k are interspersed. In this case we also have an ambiguity. Both the original signal x(t) and the signal x_1(t) = cos((π/3)t) are consistent with the sampled signal x_δ(t) and spectrum X_δ(jω). Sampling has caused the original sinusoid with frequency π to alias, or appear as a new sinusoid of frequency π/3.
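The aliasing in case (c) can be seen directly from the samples themselves. The sketch below (an added illustration, not part of the text) samples x(t) = cos(πt) with the three intervals of this example and confirms that, for 𝒯 = 3/2, the samples are identical to those of the lower-frequency sinusoid cos((π/3)t).

```python
import numpy as np

# Sample cos(pi*t) at the three intervals of Example 4.13 (illustrative sketch).
n = np.arange(0, 12)
for T in (0.25, 1.0, 1.5):
    print(T, np.round(np.cos(np.pi * n * T), 3))

# For T = 3/2, cos(pi*t) and cos((pi/3)*t) produce the same samples (aliasing).
T = 1.5
print(np.allclose(np.cos(np.pi * n * T), np.cos((np.pi / 3) * n * T)))   # True
```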


• Drill Problem 4.10  Draw the FT of a sampled version of the continuous-time signal having the FT depicted in Fig. 4.23(a) for (a) 𝒯 = 1/2 and (b) 𝒯 = 2.

Answer:  See Figs. 4.23(b) and (c).
"' • SUBSAMPLING: SAMPLING DISCRETE-TIME StGNALS

The FT is also very helpful in analyzing the effect of sampling a discrete-time signal, or
subsampling. Let y[n] = x[qn} be a subsampled version of xlnJ. We require q to be a

FIGURE 4.23  (a) Spectrum of the original signal for Drill Problem 4.10. (b) Spectrum of the sampled signal for 𝒯 = 1/2. (c) Spectrum of the sampled signal for 𝒯 = 2.

Our goal is to relate the DTFT of y[n] to the DTFT of x[n]. We accomplish this by using the FT to represent x[n] as a sampled version of a continuous-time signal x(t). We then express y[n] as a sampled version of the same underlying continuous-time signal x(t), obtained using a sampling interval q times that associated with x[n].

Use Eq. (4.26) to represent x[n] as the impulse-sampled continuous-time signal with sampling interval 𝒯, and thus write

x_\delta(t) = \sum_{n=-\infty}^{\infty} x[n]\,\delta(t - n\mathcal{T})

Suppose x[n] are the samples of a continuous-time signal x(t), obtained at integer multiples of 𝒯. That is, x[n] = x(n𝒯). Let x(t) \stackrel{FT}{\longleftrightarrow} X(jω) and apply Eq. (4.32) to obtain

X_\delta(j\omega) = \frac{1}{\mathcal{T}} \sum_{k=-\infty}^{\infty} X(j(\omega - k\omega_s))

Since y[n] is formed using every qth sample of x[n], we may also express y[n] as a sampled version of x(t). We have

y[n] = x[qn] = x(nq\mathcal{T})

Hence the effective sampling interval for y[n] is 𝒯' = q𝒯. Applying Eq. (4.32) to y[n] gives

y_\delta(t) = x(t) \sum_{n=-\infty}^{\infty} \delta(t - n\mathcal{T}') \;\stackrel{FT}{\longleftrightarrow}\; Y_\delta(j\omega) = \frac{1}{\mathcal{T}'} \sum_{k=-\infty}^{\infty} X(j(\omega - k\omega_s'))

Substituting 𝒯' = q𝒯 and ω_s' = ω_s/q in the right-hand member of this pair gives

Y_\delta(j\omega) = \frac{1}{q\mathcal{T}} \sum_{k=-\infty}^{\infty} X\!\left(j\!\left(\omega - \frac{k}{q}\omega_s\right)\right) \qquad (4.33)

We have expressed both X_δ(jω) and Y_δ(jω) as functions of X(jω). However, X(jω) is unknown, since we only know x[n] and do not know x(t). Hence we shall further manipulate the expression for Y_δ(jω) in order to express Y_δ(jω) as a function of X_δ(jω), the FT of x[n]. We begin by writing k/q in Eq. (4.33) as a proper fraction, and thus set

\frac{k}{q} = l + \frac{m}{q}

where l is the integer portion of k/q and m is the remainder. Allowing k to range from -∞ to ∞ corresponds to having l range from -∞ to ∞ and m range from 0 to q - 1. Hence we may rewrite Eq. (4.33) as the double sum

Y_\delta(j\omega) = \frac{1}{q} \sum_{m=0}^{q-1} \left\{ \frac{1}{\mathcal{T}} \sum_{l=-\infty}^{\infty} X\!\left(j\!\left(\omega - l\omega_s - \frac{m}{q}\omega_s\right)\right) \right\}

Recognizing that the term in braces corresponds to X_δ(j(ω - (m/q)ω_s)), we rewrite Y_δ(jω) as

Y_\delta(j\omega) = \frac{1}{q} \sum_{m=0}^{q-1} X_\delta\!\left(j\!\left(\omega - \frac{m}{q}\omega_s\right)\right) \qquad (4.34)

which represents a sum of shifted versions of X_δ(jω) normalized by q.

At this point we convert from the FT representation back to the DTFT in order to express Y(e^{jΩ}) as a function of X(e^{jΩ}). The sampling interval associated with Y_δ(jω) is 𝒯'. Using the relationship Ω = ω𝒯' in Eq. (4.34), we have

Y(e^{j\Omega}) = Y_\delta(j\omega)\big|_{\omega=\Omega/\mathcal{T}'} = \frac{1}{q} \sum_{m=0}^{q-1} X_\delta\!\left(j\!\left(\frac{\Omega}{\mathcal{T}'} - \frac{m}{q}\omega_s\right)\right)

Now substitute 𝒯' = q𝒯 to obtain

Y(e^{j\Omega}) = \frac{1}{q} \sum_{m=0}^{q-1} X_\delta\!\left(j\!\left(\frac{\Omega}{q\mathcal{T}} - \frac{m}{q}\omega_s\right)\right) = \frac{1}{q} \sum_{m=0}^{q-1} X_\delta\!\left(j\frac{1}{\mathcal{T}}\!\left(\frac{\Omega}{q} - \frac{m2\pi}{q}\right)\right)

The sampling interval associated with X_δ(jω) is 𝒯, and so X(e^{jΩ}) = X_δ(jΩ/𝒯). Hence we may substitute X(e^{j(Ω/q - m2π/q)}) for X_δ(j(1/𝒯)(Ω/q - m2π/q)) and obtain

Y(e^{j\Omega}) = \frac{1}{q} \sum_{m=0}^{q-1} X(e^{j(\Omega/q - m2\pi/q)}) = \frac{1}{q} \sum_{m=0}^{q-1} X(e^{j(\Omega - m2\pi)/q}) \qquad (4.35)

This equation indicates that Y(e^{jΩ}) is obtained by summing versions of the scaled DTFT X_q(e^{jΩ}) = X(e^{jΩ/q}) that are shifted by integer multiples of 2π. We may write this result explicitly as

Y(e^{j\Omega}) = \frac{1}{q} \sum_{m=0}^{q-1} X_q(e^{j(\Omega - m2\pi)})

Figure 4.24 illustrates the relationship between Y(e^{jΩ}) and X(e^{jΩ}) described in Eq. (4.35). Figure 4.24(a) depicts X(e^{jΩ}).

FIGURE 4.24  Effect of subsampling on the DTFT. (a) Original signal spectrum. (b) m = 0 term, X_q(e^{jΩ}), in Eq. (4.35). (c) m = 1 term in Eq. (4.35). (d) m = q - 1 term in Eq. (4.35). (e) Y(e^{jΩ}) assuming W < π/q. (f) Y(e^{jΩ}) assuming W > π/q.

FIGURE 4.25  Solution to Drill Problem 4.11. (a) q = 2. (b) q = 5.

Figures 4.24(b)-(d) show the individual terms in the sum of Eq. (4.35) corresponding to m = 0, m = 1, and m = q - 1. In Fig. 4.24(e) we depict Y(e^{jΩ}) assuming that W < π/q, while Fig. 4.24(f) shows Y(e^{jΩ}) assuming that W > π/q. In this last case there is overlap between the scaled and shifted versions of X(e^{jΩ}) involved in Eq. (4.35) and aliasing occurs. We conclude that aliasing can be prevented if W, the highest frequency component of X(e^{jΩ}), is less than π/q.

• Drill Problem 4.11  Depict the DTFT of the subsampled signal y[n] = x[qn] for q = 2 and q = 5, assuming

x[n] = 2\cos\!\left(\frac{\pi}{3}n\right)

Answer:  See Figs. 4.25(a) and (b).
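The locations of the spectral peaks in Figs. 4.25(a) and (b) can be checked numerically. The sketch below (an added illustration, not part of the text; the sequence length and frequency grid are arbitrary choices) estimates the DTFT of the subsampled sequences and reports the frequency of the dominant peak.

```python
import numpy as np

# Subsample x[n] = 2cos(pi*n/3) by q and locate the DTFT peak (illustrative sketch).
def dtft_peak(x):
    Omega = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
    X = x @ np.exp(-1j * np.outer(np.arange(x.size), Omega))
    return abs(Omega[np.argmax(np.abs(X))])

n = np.arange(0, 600)
x = 2 * np.cos(np.pi * n / 3)
for q in (1, 2, 5):
    print(q, dtft_peak(x[::q]))
# q = 1: peak near pi/3;  q = 2: peak near 2*pi/3;  q = 5: peak back near pi/3 (aliased)
```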
4.7 Reconstruction of Continuous-Time Signals from Samples

The problem of reconstructing a continuous-time signal from samples also involves a mixture of continuous- and discrete-time signals. As illustrated in the block diagram of Fig. 4.26, a system that performs this operation has a discrete-time input signal and a continuous-time output signal. The FT is an ideal tool for analyzing this reconstruction problem, since it may be used to represent both continuous- and discrete-time signals. In this section we first consider the conditions that must be met in order to uniquely reconstruct a continuous-time signal from its samples.

FIGURE 4.26  Block diagram illustrating conversion of a discrete-time signal to a continuous-time signal.

Assuming that these conditions are satisfied, we establish a method for perfect reconstruction. Unfortunately, the ideal reconstruction approach cannot be implemented in any practical system. Hence the section concludes with an analysis of practical reconstruction techniques and their limitations.

• SAMPLING THEOREM

Our discussion of sampling indicated that the samples of a signal do not always uniquely determine the corresponding continuous-time signal. For example, if we sample a sinusoid at intervals of a period, then the sampled signal appears to be a constant and we cannot determine whether the original signal was a constant or the sinusoid. Figure 4.27 illustrates this problem by depicting two different continuous-time signals having the same set of samples. We have

x[n] = x_1(n\mathcal{T}) = x_2(n\mathcal{T})

The samples do not tell us anything about the behavior of the signal in between the sample times. In order to determine how the signal behaves in between the samples, we must specify additional constraints on the continuous-time signal. One such set of constraints that is very useful in practice involves requiring the signal to make smooth transitions from one sample to another. The smoothness, or rate at which the time-domain signal changes, is directly related to the maximum frequency present in the signal. Hence constraining smoothness in the time domain corresponds to limiting the signal bandwidth.

There is a one-to-one correspondence between the time-domain and frequency-domain representations for a signal. Thus we may also view the problem of reconstructing the continuous-time signal in the frequency domain. In order to uniquely reconstruct a continuous-time signal from its samples there must be a unique correspondence between the FTs of the continuous-time signal and the sampled signal. These FTs are uniquely related if the sampling process does not introduce aliasing. Aliasing distorts the spectrum of the original signal, as we discovered in the previous section, and destroys the one-to-one relationship between the continuous-time and sampled signal FTs. This suggests that a condition for unique correspondence between the continuous-time signal and its samples is equivalent to the condition for the prevention of aliasing. This requirement is formally stated as follows:

Sampling Theorem  Let x(t) \stackrel{FT}{\longleftrightarrow} X(jω) represent a bandlimited signal so that X(jω) = 0 for |ω| > ω_m. If ω_s > 2ω_m, where ω_s = 2π/𝒯 is the sampling frequency, then x(t) is uniquely determined by its samples x(n𝒯), n = 0, ±1, ±2, ....

FIGURE 4.27  Two continuous-time signals, x_1(t) (dashed line) and x_2(t) (solid line), that have the same set of samples.

The minimum sampling frequency, 2ω_m, is often termed the Nyquist sampling rate or Nyquist rate. The actual sampling frequency, ω_s, is commonly referred to as the Nyquist frequency when discussing the FT of either the continuous-time or sampled signal. We note that in many problems it is more convenient to evaluate the sampling theorem using frequency expressed in units of hertz. If f_m = ω_m/(2π) is the highest frequency present in the signal and f_s denotes the sampling frequency, both expressed in units of hertz, then the sampling theorem states that f_s > 2f_m, where f_s = 1/𝒯. Equivalently, we must have 𝒯 < 1/(2f_m) to satisfy the conditions of the sampling theorem.

EXAMPLE 4.14  Suppose x(t) = sin(10πt)/(πt). Determine the conditions on the sampling interval 𝒯 so that x(t) is uniquely represented by the discrete-time sequence x[n] = x(n𝒯).

Solution:  In order to apply the sampling theorem, we must first determine the maximum frequency, ω_m, present in x(t). Taking the FT, we have

X(j\omega) = \begin{cases} 1, & |\omega| \le 10\pi \\ 0, & |\omega| > 10\pi \end{cases}

as depicted in Fig. 4.28. We have ω_m = 10π. Hence we require

\omega_s > 20\pi, \quad \text{that is,} \quad \frac{2\pi}{\mathcal{T}} > 20\pi

or

\mathcal{T} < \frac{1}{10}

• Drill Problem 4.12  Determine the conditions on the sampling interval 𝒯 so that x(t) = cos(2πt) sin(πt)/(πt) + 3 sin(6πt) sin(2πt)/(πt) is uniquely represented by the discrete-time sequence x[n] = x(n𝒯).

Answer:  𝒯 < 1/8.
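The highest frequency present in this signal can also be estimated numerically, which supports the condition above. The sketch below (an added illustration, not part of the text; the time window, spacing, and amplitude threshold are arbitrary choices) approximates the magnitude spectrum of x(t) and reports the highest frequency at which significant energy is found.

```python
import numpy as np

# Estimate the highest frequency in the Drill Problem 4.12 signal (illustrative sketch).
dt = 1e-3
t = np.arange(-200, 200, dt) + 0.5 * dt        # offset avoids division by zero at t = 0
x = (np.cos(2 * np.pi * t) * np.sin(np.pi * t) / (np.pi * t)
     + 3 * np.sin(6 * np.pi * t) * np.sin(2 * np.pi * t) / (np.pi * t))

X = np.abs(np.fft.rfft(x)) * dt                # approximate |X(j 2*pi*f)|
f = np.fft.rfftfreq(t.size, d=dt)              # frequency axis in hertz
print(f[X > 0.05].max())                       # close to 4 Hz, i.e., w_m = 8*pi rad/s
# The sampling theorem then requires 1/T > 8, that is, T < 1/8.
```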
We are often interested in only the lower-frequency components of a signal and would like to sample the signal at a rate ω_s less than twice the highest frequency actually present. A reduced sampling rate can be used if the signal is passed through a continuous-time lowpass filter prior to sampling. Ideally, this filter passes frequency components below ω_s/2 without distortion and suppresses any frequency components above ω_s/2. Such a filter prevents aliasing and is thus termed an anti-aliasing filter. A practical anti-aliasing filter will change from passband to stopband gradually. To compensate for the filter's transition band, the passband is usually chosen to include the maximum signal frequency of interest, and the sampling frequency, ω_s, is chosen so that ω_s/2 is in the anti-aliasing filter stopband.
FIGURE 4.28  FT of the continuous-time signal for Example 4.14.



This issue is discussed further in Section 4.8. An anti-aliasing filter is normally used even if the signal of interest is bandlimited to less than ω_s/2, to avoid aliasing associated with the presence of measurement or electronic noise.

• IDEAL RECONSTRUCTION

The sampling theorem indicates how fast we must sample a signal so that the samples uniquely represent the continuous-time signal. Now we consider the problem of reconstructing the continuous-time signal from these samples. This problem is solved most easily in the frequency domain using the FT. Recall that if x(t) \stackrel{FT}{\longleftrightarrow} X(jω), then the FT representation for the sampled signal is given by Eq. (4.32), reproduced here as

X_\delta(j\omega) = \frac{1}{\mathcal{T}} \sum_{k=-\infty}^{\infty} X(j\omega - jk\omega_s)

Figures 4.29(a) and (b) depict X(jω) and X_δ(jω), respectively, assuming that the conditions of the sampling theorem are satisfied.

The goal of reconstruction is to apply some operation to X_δ(jω) that converts it back to X(jω). Any such operation must eliminate the replicates of X(jω) that appear at kω_s. This is accomplished by multiplying X_δ(jω) by H_r(jω), where

H_r(j\omega) = \begin{cases} \mathcal{T}, & |\omega| < \omega_s/2 \\ 0, & |\omega| > \omega_s/2 \end{cases} \qquad (4.36)

as depicted in Fig. 4.29(c). We now have

X(j\omega) = X_\delta(j\omega)\, H_r(j\omega) \qquad (4.37)

Note that multiplication by H_r(jω) will not recover X(jω) from X_δ(jω) if the conditions of the sampling theorem are not met and aliasing occurs.

Multiplication in the frequency domain transforms to convolution in the time domain, and so Eq. (4.37) implies

x(t) = x_\delta(t) * h_r(t)

FIGURE 4.29  Ideal reconstruction. (a) Original signal spectrum. (b) Sampled signal spectrum. (c) Frequency response of reconstruction filter.

where h_r(t) \stackrel{FT}{\longleftrightarrow} H_r(jω). Substituting for x_δ(t) in this relation gives

x(t) = h_r(t) * \sum_{n=-\infty}^{\infty} x[n]\,\delta(t - n\mathcal{T}) = \sum_{n=-\infty}^{\infty} x[n]\, h_r(t - n\mathcal{T})

Now use

h_r(t) = \frac{\mathcal{T}\sin\!\left(\frac{\omega_s}{2}t\right)}{\pi t}

to obtain

x(t) = \sum_{n=-\infty}^{\infty} x[n]\, \mathrm{sinc}\!\left(\frac{\omega_s(t - n\mathcal{T})}{2\pi}\right) \qquad (4.38)

In the time domain we reconstruct x(t) as a weighted sum of sinc functions shifted by the sampling interval. The weights correspond to the values of the discrete-time sequence. This reconstruction operation is illustrated in Fig. 4.30. The value of x(t) at t = n𝒯 is given by x[n] because all of the shifted sinc functions go through zero at n𝒯 except the nth one, and its value is x[n]. The value of x(t) in between integer multiples of 𝒯 is determined by all of the values of the sequence x[n].

The operation described in Eq. (4.38) is commonly referred to as ideal bandlimited interpolation, since it indicates how to interpolate in between the samples of a bandlimited signal. In practice this equation cannot be implemented. First, it represents a noncausal system. The output, x(t), depends on past and future values of the input, x[n]. Second, the influence of each sample extends over an infinite amount of time because h_r(t) has infinite time duration.
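A truncated form of Eq. (4.38) is nonetheless easy to evaluate. The sketch below (an added illustration, not part of the text; the test signal, sampling interval, and truncation length are arbitrary choices) interpolates the samples of a bandlimited sinusoid with a finite number of shifted sinc functions and checks the result against the original signal.

```python
import numpy as np

# Truncated ideal bandlimited interpolation, Eq. (4.38)  (illustrative sketch).
T = 0.1                                    # sampling interval; satisfies the sampling theorem
n = np.arange(-50, 51)
x_n = np.cos(2 * np.pi * n * T)            # samples of x(t) = cos(2*pi*t), w_m = 2*pi < pi/T

t = np.linspace(-1, 1, 1000)
# x(t) ~ sum_n x[n] sinc((t - nT)/T), with np.sinc(u) = sin(pi u)/(pi u)
x_hat = np.sum(x_n[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

# the error is small near the center of the record; it is set by truncating the sum
print(np.max(np.abs(x_hat - np.cos(2 * np.pi * t))))
```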

• PRACTICAL RECONSTRUCTION-THE ZERO-ORDER HOLD

Practical reconstruction of continuous-time signals is often implemented with a device known as a zero-order hold, which simply maintains or holds the value x[n] for 𝒯 seconds, as depicted in Fig. 4.31. This causes sharp transitions in x_0(t) at integer multiples of 𝒯 and produces a stairstep approximation to the continuous-time signal. Once again, the FT offers a means for analyzing the quality of this approximation.

The zero-order hold is represented mathematically as a weighted sum of rectangular pulses shifted by integer multiples of the sampling interval. Let

h_0(t) = \begin{cases} 1, & 0 < t < \mathcal{T} \\ 0, & \text{otherwise} \end{cases}

as depicted in Fig. 4.32. The output of the zero-order hold is expressed in terms of h_0(t) as

x_0(t) = \sum_{n=-\infty}^{\infty} x[n]\, h_0(t - n\mathcal{T}) \qquad (4.39)

FIGURE 4.30  Ideal reconstruction in the time domain.
FIGURE 4.31  Zero-order hold.
FIGURE 4.32  Rectangular pulse used to analyze the zero-order hold.

We recognize Eq. (4.39) as the convolution of the impulse-sampled signal x_δ(t) with h_0(t), as shown by

x_0(t) = h_0(t) * \sum_{n=-\infty}^{\infty} x[n]\,\delta(t - n\mathcal{T}) = h_0(t) * x_\delta(t)

Now take the FT of x_0(t) using the convolution-multiplication property of the FT to obtain

X_0(j\omega) = H_0(j\omega)\, X_\delta(j\omega)

where

H_0(j\omega) = 2 e^{-j\omega\mathcal{T}/2}\, \frac{\sin(\omega\mathcal{T}/2)}{\omega}

Figure 4.33 depicts the effect of the zero-order hold in the frequency domain, assuming that 𝒯 is chosen to satisfy the sampling theorem. Comparing X_0(jω) to X(jω), we see that the zero-order hold introduces three forms of modification:

1. A linear phase shift corresponding to a time delay of 𝒯/2 seconds.
2. The portion of X_δ(jω) between -ω_m and ω_m is distorted by the curvature of the mainlobe of H_0(jω).
3. Distorted and attenuated versions of X(jω) remain centered at nonzero multiples of ω_s.

By holding each value x[n] for 𝒯 seconds, we are introducing a time shift of 𝒯/2 seconds into x_0(t). This is the source of modification 1. The sharp transitions in x_0(t) associated with the stairstep approximation suggest the presence of high-frequency components and are consistent with modification 3. Both modifications 1 and 2 are reduced by increasing ω_s or, equivalently, decreasing 𝒯.

In some applications the modifications associated with the zero-order hold may be acceptable. In others, further processing of x_0(t) may be desirable to reduce the distortion associated with modifications 2 and 3. Generally, a delay of 𝒯/2 seconds is of no real consequence. The second and third modifications listed above are eliminated by passing x_0(t) through a continuous-time compensation filter with frequency response

H_c(j\omega) = \begin{cases} \dfrac{\omega\mathcal{T}}{2\sin(\omega\mathcal{T}/2)}, & |\omega| \le \omega_m \\ 0, & |\omega| > \omega_s - \omega_m \end{cases}

This frequency response is depicted in Fig. 4.34. On |ω| < ω_m, the compensation filter reverses the distortion introduced by the mainlobe curvature of H_0(jω). On |ω| > ω_s - ω_m, H_c(jω) removes the energy in X_0(jω) centered at nonzero multiples of ω_s. The value of H_c(jω) does not matter on the frequency band ω_m < |ω| < ω_s - ω_m, since X_0(jω) is zero there. H_c(jω) is often termed an anti-imaging filter because it eliminates the distorted ''images'' of X(jω) present at nonzero multiples of ω_s. A block diagram representing the compensated zero-order hold reconstruction process is depicted in Fig. 4.35. The anti-imaging filter smooths out the step discontinuities in x_0(t).
There are several practical issues that arise in designing and building an anti-imaging filter. We cannot obtain a causal anti-imaging filter that has zero phase.

FIGURE 4.33  Effect of the zero-order hold in the frequency domain. (a) Original continuous-time signal spectrum. (b) FT of sampled signal. (c) Magnitude and phase of H_0(jω). (d) Magnitude spectrum of signal reconstructed using the zero-order hold.
FIGURE 4.34  Frequency response of a compensation filter for eliminating some of the distortion introduced by the zero-order hold.

FIGURE 4.35  Block diagram of a practical reconstruction system.

Hence a practical filter will introduce some phase distortion. In many cases linear phase in the passband, |ω| < ω_m, is acceptable, since linear phase distortion corresponds to an additional time delay. The difficulty of approximating |H_c(jω)| depends on the separation between ω_m and ω_s - ω_m. If this distance, ω_s - 2ω_m, is large, then the mainlobe curvature of H_0(jω) is very small, and a very good approximation is obtained by simply setting |H_c(jω)| = 1. Second, the region ω_m < ω < ω_s - ω_m is used to make the transition from passband to stopband. If ω_s - 2ω_m is large, then the transition band of the filter is large. Filters with large transition bands are much easier to design and build than those with small transition bands. Hence the requirements on an anti-imaging filter are greatly reduced by choosing 𝒯 sufficiently small so that ω_s >> 2ω_m. A more detailed discussion of filter design is given in Chapter 8.

It is common in practical reconstruction schemes to increase the effective sampling rate of the discrete-time signal prior to the zero-order hold. This technique is known as oversampling. It is done to relax the requirements on the anti-imaging filter, as illustrated in the following example. Although this increases the complexity of the discrete-time hardware, it usually produces a decrease in overall system cost for a given level of reconstruction quality.


EXAMPLE 4.15  In this example we explore the benefits of oversampling in an audio compact disc player. The maximum signal frequency is 20 kHz because we recall from Chapter 1 that the ear is sensitive to frequencies up to 20 kHz. Consider two cases: (a) basic sampling rate of 1/𝒯_1 = 44.1 kHz; (b) 8 times oversampling, for an effective sampling rate of 1/𝒯_2 = 352.8 kHz. In each case, determine the constraints on the magnitude response of an anti-imaging filter so that the overall magnitude response of the zero-order hold reconstruction system is between 0.99 and 1.01 in the signal passband and less than 10^{-3} at the images of the signal spectrum that are located at multiples of the sampling frequency.

Solution:  In this example, it is convenient to express frequency in units of hertz rather than radians per second. This is explicitly indicated by replacing ω with f and by representing the frequency responses H_0(jω) and H_c(jω) as H_0'(jf) and H_c'(jf). The overall magnitude response of the zero-order hold followed by an anti-imaging filter H_c'(jf) is |H_0'(jf)||H_c'(jf)|. Our goal is to find the acceptable range of |H_c'(jf)| so that the product |H_0'(jf)||H_c'(jf)| satisfies the response constraints. Figures 4.36(a) and (b) depict |H_0'(jf)| assuming sampling rates of 44.1 kHz and 352.8 kHz, respectively. The dashed lines in each figure denote the signal passband and its images. At the lower sampling rate we see that the signal and its images occupy the majority of the spectrum; they are separated by 4.1 kHz. In the 8 times oversampling case the signal and its images occupy a very small portion of the spectrum; they are separated by 312.8 kHz.

FIGURE 4.36  Anti-imaging filter design with and without oversampling. (a) Magnitude of H_0'(jf) for the 44.1-kHz sampling rate. Dashed lines denote the signal passband and its images. (b) Magnitude of H_0'(jf) for 8 times oversampling (352.8-kHz sampling rate). Dashed lines denote the signal passband and its images. (c) Normalized constraints on the passband response of the anti-imaging filter. Solid lines assume the 44.1-kHz sampling rate, dashed lines assume 8 times oversampling. The normalized filter response must lie between each pair of lines.

The passband constraint is 0.99 < |H_0'(jf)||H_c'(jf)| < 1.01, which implies

\frac{0.99}{|H_0'(jf)|} < |H_c'(jf)| < \frac{1.01}{|H_0'(jf)|}, \qquad -20\ \text{kHz} < f < 20\ \text{kHz}

Figure 4.36(c) depicts these constraints for both cases. Here we have multiplied |H_c'(jf)| by the sampling intervals 𝒯_1 and 𝒯_2 so that both cases may be displayed with the same scale. Note that case (a) requires substantial curvature in |H_c'(jf)| to eliminate the passband distortion introduced by the mainlobe of H_0'(jf). At the edge of the passband the bounds are

1.4257 < \mathcal{T}_1 |H_c'(jf_m)| < 1.4545, \qquad f_m = 20\ \text{kHz}

for case (a) and

0.9953 < \mathcal{T}_2 |H_c'(jf_m)| < 1.0154, \qquad f_m = 20\ \text{kHz}

for case (b).

The image rejection constraint implies |H_0'(jf)||H_c'(jf)| < 0.001 for all frequencies at which images are present. This condition is simplified somewhat by considering only the frequency at which |H_0'(jf)| is largest. The maximum value of |H_0'(jf)| in the image frequency bands occurs at the smallest frequency in the first image: 24.1 kHz in case (a) and 332.8 kHz in case (b). The values of |H_0'(jf)|/𝒯_1 and |H_0'(jf)|/𝒯_2 at these frequencies are 0.5763 and 0.0598, respectively. This implies the bounds

\mathcal{T}_1 |H_c'(jf)| < 0.0017, \qquad f > 24.1\ \text{kHz}

and

\mathcal{T}_2 |H_c'(jf)| < 0.0167, \qquad f > 332.8\ \text{kHz}

for cases (a) and (b), respectively.

Hence the anti-imaging filter for case (a) must show a transition from a value of 1.4257/𝒯_1 to 0.0017/𝒯_1 over an interval of 4.1 kHz. In contrast, with 8 times oversampling the filter must show a transition from 0.9953/𝒯_2 to 0.0167/𝒯_2 over a frequency interval of 312.8 kHz. Thus oversampling not only increases the transition width by a factor of almost 80, but also relaxes the stopband attenuation constraint by a factor of more than 10.
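The numerical values quoted in this example follow directly from the zero-order hold magnitude response, which satisfies |H_0'(jf)|/𝒯 = |sinc(f𝒯)|. The sketch below (an added illustration, not part of the text) reproduces the passband and stopband bounds for both sampling rates.

```python
import numpy as np

# Reproduce the bounds of Example 4.15; np.sinc(u) = sin(pi*u)/(pi*u)  (illustrative sketch).
for name, fs in (("(a) 44.1 kHz", 44.1e3), ("(b) 352.8 kHz", 352.8e3)):
    T = 1.0 / fs
    fm = 20e3                                  # edge of the audio passband
    zoh_pass = abs(np.sinc(fm * T))            # |H0'(j fm)| / T
    lo, hi = 0.99 / zoh_pass, 1.01 / zoh_pass  # bounds on T|Hc'(j fm)|
    f_img = fs - fm                            # lowest frequency in the first image
    zoh_img = abs(np.sinc(f_img * T))          # |H0'(j f_img)| / T
    stop = 0.001 / zoh_img                     # bound on T|Hc'(jf)| at the image
    print(name, round(lo, 4), round(hi, 4), round(zoh_img, 4), round(stop, 4))
# expected: (a) 1.4257 1.4545 0.5763 0.0017    (b) 0.9953 1.0154 0.0598 0.0167
```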

4.8 Discrete-Time Processing of Continuous-Time Signals

In this section we use Fourier methods to discuss and analyze a typical system for discrete-time processing of continuous-time signals. There are several advantages to processing a continuous-time signal with a discrete-time system. These advantages result from the power and flexibility of discrete-time computing devices. First, a broad class of signal manipulations are more easily performed using the arithmetic operations of a computer than using analog components. Second, implementing a system in a computer only involves writing a set of instructions or a program for the computer to execute. Third, the discrete-time system is easily changed by modifying the program. Often the system can be modified in real time to optimize some criterion associated with the processed signal. Yet another advantage of discrete-time processing is the direct dependence of the dynamic range and signal-to-noise ratio on the number of bits used to represent the discrete-time signal. These advantages have led to a proliferation of computing devices designed specifically for discrete-time signal processing.

A minimal system for discrete-time processing of continuous-time signals must contain a sampling device and a computing device for implementing the discrete-time system. If the processed signal is to be converted back to continuous time, then reconstruction is also necessary. More sophisticated systems may also utilize oversampling, decimation, and interpolation. Decimation and interpolation are methods for changing the effective sampling rate of a discrete-time signal. Decimation reduces the effective sampling rate, while interpolation increases the effective sampling rate. Judicious use of these methods can reduce the cost of the overall system. We begin with an analysis of a basic system for processing continuous-time signals. We conclude by revisiting oversampling and examining the role of interpolation and decimation in systems that process continuous-time signals.

• BASIC DISCRETE-TIME SIGNAL-PROCESSING SYSTEM

A typical system for processing continuous-time signals in discrete time is illustrated in Fig. 4.37(a).

FIGURE 4.37  Block diagram for discrete-time processing of continuous-time signals. (a) A basic system: anti-aliasing filter, sampling at intervals 𝒯, discrete-time processing, then sample-and-hold and anti-imaging filter. (b) Equivalent continuous-time system.

A continuous-time signal is first passed through a lowpass anti-aliasing filter and then sampled at intervals of 𝒯 to convert it to a discrete-time signal. The sampled signal is then processed by a discrete-time system to impart some desired effect to the signal. For example, the discrete-time system may represent a filter designed to have a specific frequency response, such as an equalizer. After processing, the signal is converted back to continuous-time format. A zero-order hold device converts the discrete-time signal back to continuous time, and an anti-imaging filter removes the distortion introduced by the zero-order hold.

This combination of operations may be reduced to an equivalent continuous-time filter using the FT as an analysis tool. The idea is to find a continuous-time system g(t) \stackrel{FT}{\longleftrightarrow} G(jω) such that Y(jω) = G(jω)X(jω), as depicted in Fig. 4.37(b). Hence G(jω) has the same effect on the input as the system in Fig. 4.37(a). We assume for this analysis that the discrete-time processing operation is represented by a discrete-time system with frequency response H(e^{jΩ}). Recall that Ω = ω𝒯, where 𝒯 is the sampling interval, so the discrete-time system has a continuous-time frequency response H(e^{jω𝒯}). Also, the frequency response associated with the zero-order hold device is

H_0(j\omega) = 2 e^{-j\omega\mathcal{T}/2}\, \frac{\sin(\omega\mathcal{T}/2)}{\omega}

The first operation applied to x(t) is the continuous-time anti-aliasing filter, whose output has FT given by

X_a(j\omega) = H_a(j\omega)\, X(j\omega)

After sampling, Eq. (4.32) indicates that the FT representation for x[n] is

X_\delta(j\omega) = \frac{1}{\mathcal{T}} \sum_{k=-\infty}^{\infty} X_a(j(\omega - k\omega_s)) = \frac{1}{\mathcal{T}} \sum_{k=-\infty}^{\infty} H_a(j(\omega - k\omega_s))\, X(j(\omega - k\omega_s)) \qquad (4.40)

where ω_s = 2π/𝒯 is the sampling frequency. The discrete-time system modifies X_δ(jω) by H(e^{jω𝒯}), producing

Y_\delta(j\omega) = H(e^{j\omega\mathcal{T}})\, X_\delta(j\omega)

The reconstruction process modifies Y_δ(jω) by the product H_0(jω)H_c(jω), and thus we may write

Y(j\omega) = H_c(j\omega) H_0(j\omega) H(e^{j\omega\mathcal{T}})\, \frac{1}{\mathcal{T}} \sum_{k=-\infty}^{\infty} H_a(j(\omega - k\omega_s))\, X(j(\omega - k\omega_s))

Assuming that aliasing does not occur, the anti-imaging filter H_c(jω) eliminates frequency components above ω_s/2 and thus eliminates all the terms in the infinite sum except for the k = 0 term. We therefore have

Y(j\omega) = \frac{1}{\mathcal{T}}\, H_c(j\omega) H_0(j\omega) H(e^{j\omega\mathcal{T}}) H_a(j\omega)\, X(j\omega)

This expression indicates that the overall system is equivalent to a continuous-time LTI system having the frequency response

G(j\omega) = \frac{1}{\mathcal{T}}\, H_c(j\omega) H_0(j\omega) H(e^{j\omega\mathcal{T}}) H_a(j\omega) \qquad (4.41)

Note that this correspondence to a continuous-time LTI system assumes the absence of aliasing.

• OVERSAMPLING

In Section 4.7 we noted that increasing the effective sampling rate associated with a discrete-time signal, prior to the use of a zero-order hold for converting the discrete-time signal back to continuous time, relaxes the requirements on the anti-imaging filter. Similarly, the requirements on the anti-aliasing filter are relaxed if the sampling rate is chosen significantly greater than the Nyquist rate. This allows a wide transition band in the anti-aliasing filter.
An anti-aliasing filter prevents aliasing by limiting the signal bandwidth prior to sampling. While the signal of interest may have maximum frequency W, in general the continuous-time signal will have energy at higher frequencies due to the presence of noise and other nonideal characteristics. Such a situation is illustrated in Fig. 4.38(a). The shaded area of the spectrum represents energy at frequencies above the maximum signal frequency; we shall refer to this component as noise. The anti-aliasing filter is chosen to prevent this noise from aliasing back down into the band of interest. The magnitude response of a practical anti-aliasing filter cannot go from unit gain to zero at frequency W, but instead goes from passband to stopband over a range of frequencies, as depicted in Fig. 4.38(b). Here the stopband of the filter begins at W_s, and W_t = W_s - W denotes the width of the transition band. The spectrum of the filtered signal X_a(jω) now has maximum frequency W_s, as depicted in Fig. 4.38(c). This signal is sampled at rate ω_s, resulting in the spectrum X_δ(jω) depicted in Fig. 4.38(d). We have drawn X_δ(jω) assuming that ω_s is large enough to prevent aliasing. As ω_s decreases, replicas of the original signal spectrum begin to overlap and aliasing occurs.

In order to prevent the noise from aliasing with itself, we require ω_s - W_s > W_s, or ω_s > 2W_s, as predicted by the sampling theorem. However, because of the subsequent discrete-time processing we often do not care if the noise aliases with itself, but rather wish to prevent the noise from aliasing back into the signal band -W < ω < W. This implies that we must have

\omega_s - W_s > W

FIGURE 4.38  Effect of oversampling on anti-aliasing filter specifications. (a) Original signal spectrum. (b) Anti-aliasing filter frequency response magnitude. (c) Spectrum of the signal at the anti-aliasing filter output. (d) Spectrum of the anti-aliasing filter output after sampling. The graph depicts the case ω_s > 2W_s.

Substituting W_s = W_t + W in this inequality, we may write

\omega_s - (W_t + W) > W

Rearranging terms to obtain the relationship between the transition band of the anti-aliasing filter and the sampling frequency, we have

W_t < \omega_s - 2W

Hence the transition band of the anti-aliasing filter must be less than the sampling frequency minus twice the highest signal frequency component of interest. Filters with small transition bands are difficult to design and expensive. By oversampling, or choosing ω_s >> 2W, we can greatly relax the requirements on the anti-aliasing filter transition band and consequently reduce its complexity and cost.

In both sampling and reconstruction, the difficulties of implementing practical analog filters suggest using the highest possible sampling rate. However, if this data set is processed with a discrete-time system, as depicted in Fig. 4.37(a), then high sampling rates lead to increased discrete-time system cost. The higher cost results because the discrete-time system must perform its computations at a faster rate. This conflict over the sampling rate is mitigated if we can somehow change the sampling rate such that a high rate is used for sampling and reconstruction, and a lower rate is used for discrete-time processing. Decimation and interpolation offer such a capability, as discussed next.

• DECIMATION

We begin the discussion by considering the DTFTs obtained by sampling the identical continuous-time signal at different intervals, 𝒯_1 and 𝒯_2. Let the sampled signals be denoted as x_1[n] and x_2[n]. We assume that 𝒯_1 = q𝒯_2, where q is an integer, and that aliasing does not occur at either sampling rate. Figure 4.39 depicts the FT of a representative continuous-time signal and the DTFTs X_1(e^{jΩ}) and X_2(e^{jΩ}) associated with the sampling intervals 𝒯_1 and 𝒯_2. Decimation corresponds to changing X_2(e^{jΩ}) to X_1(e^{jΩ}). One way to do this is to convert the discrete-time sequence back to a continuous-time signal and then resample. Such an approach is subject to distortion introduced in the reconstruction operation. We can avoid this problem by using methods that operate directly on the discrete-time signals to change the sampling rate.

Subsampling is the key to reducing the sampling rate. If the sampling interval is 𝒯_2 and we wish to increase it to 𝒯_1 = q𝒯_2, we may accomplish this by selecting every qth sample of the sequence x_2[n], that is, set g[n] = x_2[qn]. Equation (4.35) indicates that the relationship between G(e^{jΩ}) and X_2(e^{jΩ}) is

G(e^{j\Omega}) = \frac{1}{q} \sum_{m=0}^{q-1} X_2(e^{j(\Omega - m2\pi)/q})

That is, G(e^{jΩ}) is a sum of shifted versions of X_2(e^{jΩ/q}). The scaling spreads out X_2(e^{jΩ}) by the factor q.

FIGURE 4.39 Effect of changing the sampling rate. (a) Underlying continuous-time signal FT. (b) DTFT of sampled data at sampling interval 𝒯₁. (c) DTFT of sampled data at sampling interval 𝒯₂.

FIGURE 4.40 Spectrum that results from subsampling X₂(e^{jΩ}) depicted in Fig. 4.39 by a factor of q.

Fig. 4.40. Identifying 𝒯₁ = q𝒯₂, we see that G(e^{jΩ}) corresponds to X₁(e^{jΩ}) in Fig. 4.39(b). Hence subsampling by q changes the effective sampling rate by q.
This analysis assumes that the maximum frequency component of X₂(e^{jΩ}) satisfies W𝒯₂ < π/q, so that aliasing does not occur as a consequence of the subsampling process. This assumption is rarely satisfied in practice; even if the signal of interest is bandlimited in this way, there will often be noise or other components present at higher frequencies. For example, if oversampling is used to obtain x₂[n], then noise that passed through the transition band of the anti-aliasing filter will be present at frequencies above π/q. If we subsample x₂[n] directly, then this noise will alias into frequencies |Ω| < W𝒯₁ and distort the signal of interest. This aliasing problem is prevented by applying a lowpass discrete-time filter to x₂[n] prior to subsampling.
Figure 4.41(a) depicts a decimation system including a lowpass discrete-time filter. The input signal x[n] with DTFT shown in Fig. 4.41(b) corresponds to the oversampled signal whose FT is depicted in Fig. 4.38(d). The shaded regions indicate noise energy. The lowpass filter shown in Fig. 4.41(c) removes most of the noise in producing the output signal depicted in Fig. 4.41(d). After subsampling, the noise does not alias into the signal band, as illustrated in Fig. 4.41(e). Note that this procedure is effective only if the discrete-time filter has a rapid transition from passband to stopband. Fortunately, a discrete-time filter with a narrow transition band is much easier to design and implement than a comparable continuous-time filter.
Decimation is also known as downsampling. It is often denoted by a downward arrow followed by the decimation factor, as illustrated in the block diagram of Fig. 4.42.
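As a concrete illustration, the following MATLAB sketch decimates a signal by a factor q by lowpass filtering to |Ω| < π/q and then retaining every qth sample. The windowed-sinc filter produced by fir1 (from the Signal Processing Toolbox) and the test signal are illustrative choices, not quantities defined in the text.

>> q = 3;                                     % decimation factor
>> n = 0:199;
>> x2 = cos(0.1*pi*n) + 0.05*randn(size(n));  % oversampled signal plus wideband noise
>> h = fir1(50, 1/q);                         % lowpass filter with cutoff pi/q
>> xf = filter(h, 1, x2);                     % remove energy above pi/q before subsampling
>> x1 = xf(1:q:end);                          % keep every qth sample; effective interval is q times larger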

• INTERPOLATION

Interpolation is the process of increasing the sampling rate and requires that we somehow "interpolate" or assign values between the samples of the signal. In the frequency domain we seek to convert X₁(e^{jΩ}) of Fig. 4.39(b) into X₂(e^{jΩ}) of Fig. 4.39(c). We shall assume that we are increasing the sampling rate by an integer factor, that is, 𝒯₁ = q𝒯₂.
The DTFT scaling property presented in Section 3.6 is the key to developing an interpolation procedure. Assume x₁[n] is the sequence to be interpolated by the factor q. Define a new sequence

x_z[n] = x₁[n/q],   n/q integer
         0,         otherwise        (4.42)

With this definition, we have x₁[n] = x_z[qn], and the DTFT scaling property implies X_z(e^{jΩ}) = X₁(e^{jqΩ}).

FIGURE 4.41 Frequency-domain interpretation of decimation. (a) Block diagram of decimation system. (b) Spectrum of oversampled input signal. Noise is depicted as the shaded portions of the spectrum. (c) Filter frequency response. (d) Spectrum of filter output. (e) Spectrum after subsampling.

FIGURE 4.42 Symbol for decimation by a factor of q.



That is, X_z(e^{jΩ}) is a scaled version of X₁(e^{jΩ}), as illustrated in Figs. 4.43(a) and (b). Identifying 𝒯₂ = 𝒯₁/q, we find that X_z(e^{jΩ}) corresponds to X₂(e^{jΩ}) in Fig. 4.39(c), except for the spectrum replicates at ±2π/q, ±4π/q, . . . , ±(q − 1)2π/q. These can be removed by passing the signal x_z[n] through a lowpass filter whose frequency response is depicted in Fig. 4.43(c). The passband of this filter is defined by |Ω| < W𝒯₂, and the transition band must lie in the region W𝒯₂ < |Ω| < 2π/q − W𝒯₂. The passband gain is chosen as q so that the interpolated signal has the correct amplitude. Figure 4.43(d) illustrates the spectrum of the filter output, X_i(e^{jΩ}).

FIGURE 4.43 Frequency-domain interpretation of interpolation. (a) Spectrum of original sequence. (b) Spectrum after inserting q − 1 zeros in between every value of the original sequence. (c) Frequency response of a filter for removing undesired replicates located at ±2π/q, ±4π/q, . . . , ±(q − 1)2π/q. (d) Spectrum of interpolated sequence.

Hence interpolation by the factor q is accomplished by inserting q − 1 zeros in between each sample of x₁[n] and then lowpass filtering. A block diagram illustrating this procedure is depicted in Fig. 4.44(a). It is for this reason that interpolation is also known as upsampling; it is denoted by an upward arrow followed by the interpolation factor, as depicted in the block diagram of Fig. 4.44(b).
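The following MATLAB sketch carries out this procedure for an illustrative sequence; the windowed-sinc filter produced by fir1 is a stand-in for the ideal filter H_i(e^{jΩ}), not the filter of the text, and it delays the output by half its order.

>> q = 4;                           % interpolation factor
>> n = 0:49;
>> x1 = sin(0.2*pi*n);              % sequence to be interpolated
>> xz = zeros(1, q*length(x1));
>> xz(1:q:end) = x1;                % insert q-1 zeros between samples, as in Eq. (4.42)
>> h = q*fir1(64, 1/q);             % lowpass filter with cutoff pi/q and passband gain q
>> xi = filter(h, 1, xz);           % interpolated sequence, delayed by 32 samples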
This interpolation procedure has a time-domain interpretation analogous to that of the continuous-time signal reconstruction process. Let h_i[n] ⟷ DTFT ⟷ H_i(e^{jΩ}) be the lowpass filter impulse response. We may then write

x_i[n] = x_z[n] * h_i[n]
       = Σ_{k=−∞}^{∞} x_z[k] h_i[n − k]        (4.43)

Now suppose H_i(e^{jΩ}) is an ideal lowpass filter with transition band of zero width. That is,

H_i(e^{jΩ}) = q,   |Ω| < π/q
              0,   π/q < |Ω| < π

Taking the inverse DTFT of H_i(e^{jΩ}), we obtain

h_i[n] = q sin((π/q)n) / (πn)

Substituting h_i[n] in Eq. (4.43) yields

x_i[n] = Σ_{k=−∞}^{∞} x_z[k] q sin((π/q)(n − k)) / (π(n − k))

Now Eq. (4.42) indicates that x_z[k] = 0 unless k = qm, where m is an integer. Rewriting x_i[n] by using only the nonzero terms in the sum gives

x_i[n] = Σ_{m=−∞}^{∞} x_z[qm] q sin((π/q)(n − qm)) / (π(n − qm))

FIGURE 4.44 (a) Block diagram of an interpolation system with signals depicted assuming q = 2. (b) Symbol denoting interpolation by a factor of q.

Now substitute x₁[m] = x_z[qm] and use sinc function notation to obtain

x_i[n] = Σ_{m=−∞}^{∞} x₁[m] sinc((1/q)(n − qm))        (4.44)

The interpolated sequence is given by a weighted sum of shifted discrete-time sinc functions. Equation (4.44) is the discrete-time version of Eq. (4.38).
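A brief MATLAB sketch can evaluate Eq. (4.44) directly for a finite-length sequence; MATLAB's sinc(u) = sin(πu)/(πu) matches the notation used here, and the sum is truncated to the available samples of the illustrative sequence.

>> q = 4;
>> m = 0:19;
>> x1 = cos(0.3*pi*m);                          % sequence to be interpolated
>> xi = zeros(1, q*length(m));
>> for n = 0:length(xi)-1
>>     xi(n+1) = sum(x1 .* sinc((n - q*m)/q));  % weighted sum of shifted discrete-time sincs
>> end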
A discrete-time signal-processing system that includes decimation and interpolation is depicted in Fig. 4.45. Consider the effect of the decimation and interpolation operations. Starting with decimation, the decimation filter output has FT

X′_δ(jω) = H_d(e^{jω𝒯₁}) X_δ(jω)

Equation (4.34) implies the effect of the subsampling operation is

X_d(jω) = (1/q₁) Σ_{m=0}^{q₁−1} X′_δ(j(ω − (m/q₁)ω_s))

Substitute for X′_δ(jω) to obtain

X_d(jω) = (1/q₁) Σ_{m=0}^{q₁−1} H_d(e^{j(ω − mω_s/q₁)𝒯₁}) X_δ(j(ω − (m/q₁)ω_s))        (4.45)
        = (1/q₁) Σ_{m=0}^{q₁−1} H_d(e^{j(ω𝒯₁ − m2π/q₁)}) X_δ(j(ω − (m/q₁)ω_s))

The interpolation operation relates Y_δ(jω) and Y_i(jω) as follows. First, the insertion of q₂ − 1 zeros in between each value of y[n] corresponds to frequency-domain scaling, that is,

Y_z(jω) = Y_δ(jq₂ω)

FIGURE 4.45 Block diagram of a system for discrete-time processing of continuous-time signals including decimation and interpolation. The chain consists of an anti-aliasing filter, sampling at intervals 𝒯₁, a decimation filter H_d(e^{jΩ}) followed by ↓q₁, discrete-time processing, interpolation (↑q₂ followed by H_i(e^{jΩ})), a sample-and-hold at 𝒯₂, and an anti-imaging filter H_c(jω) producing y(t).

The interpolation filter operates at the higher sampling rate 2π/𝒯₂, so that the mapping between discrete- and continuous-time frequency is Ω = ω𝒯₂. We may therefore write

Y_i(jω) = H_i(e^{jω𝒯₂}) Y_z(jω)        (4.46)
        = H_i(e^{jω𝒯₂}) Y_δ(jq₂ω)

Although the expression for the equivalent continuous-time filter frequency response is too complex to provide much insight when interpolation and decimation are present, we may still use Eqs. (4.45) and (4.46) along with Eq. (4.40), Y_δ(jω) = H(e^{jωq₁𝒯₁}) X_d(jω), and Y(jω) = H_o(jω, 𝒯₂) H_c(jω) Y_i(jω) to evaluate the equivalent continuous-time filtering operation represented by the system in Fig. 4.45.

4.9 Fourier Series Representations for Finite-Duration Nonperiodic Signals
The DTFS and FS are the Fourier representations for periodic signals. In this section we explore their use for representing finite-duration nonperiodic signals. The primary motivation for doing this is numerical computation of Fourier representations. Recall that the DTFS is the only Fourier representation that can be evaluated numerically. As a result, we often apply the DTFS to signals that are not periodic. It is important to understand the implications of applying a periodic representation to nonperiodic signals. A secondary benefit is further understanding of the relationship between the Fourier transform and corresponding Fourier series representations. We begin the discussion with the discrete-time case.

• RELATING THE DTFS TO THE DTFT


Let x[n] be a finite-duration signal of length M, that is,

x[n] = 0,   n < 0 or n ≥ M

The DTFT of this signal is

X(e^{jΩ}) = Σ_{n=0}^{M−1} x[n] e^{−jΩn}

Now suppose we evaluate N ≥ M DTFS coefficients using x[n], 0 ≤ n ≤ N − 1. We have Ω₀ = 2π/N and

X[k] = (1/N) Σ_{n=0}^{N−1} x[n] e^{−jkΩ₀n}        (4.47)

Since x[n] = 0 for n ≥ M, we have

X[k] = (1/N) Σ_{n=0}^{M−1} x[n] e^{−jkΩ₀n}

Comparison of X[k] and X(e^{jΩ}) reveals that

X[k] = (1/N) X(e^{jΩ}) |_{Ω=kΩ₀}        (4.48)

Hence the DTFS coefficients are samples of the DTFT divided by N and evaluated at intervals of 2π/N. We may write the DTFS of x[n] as (1/N)X(e^{jkΩ₀}).

Now suppose we convert the DTFS coefficients in Eq. (4.48) back to a time-domain signal x̃[n] using the DTFS expansion

x̃[n] = Σ_{k=⟨N⟩} X[k] e^{jkΩ₀n}        (4.49)

The complex sinusoids in Eq. (4.49) are all N periodic, so x̃[n] is an N-periodic signal. One period of x̃[n] is obtained by recognizing that Eqs. (4.49) and (4.47) are inverses of each other. Hence

x̃[n] = x[n],   0 ≤ n ≤ N − 1

This implies that the DTFS coefficients of x[n] correspond to the DTFS coefficients of a periodically extended signal x̃[n]. In other words, the effect of sampling the DTFT of a finite-duration nonperiodic signal is to periodically extend the signal in the time domain. That is,

x̃[n] = Σ_{m=−∞}^{∞} x[n + mN]   ⟷ DTFS; Ω₀ ⟷   X[k] = (1/N) X(e^{jkΩ₀})
Figure 4.46 illustrates these relationships in both the time and frequency domains. They are the dual to sampling in time. Recall that sampling a signal in time generates shifted replicas
FIGURE 4.46 DTFS of a finite-duration nonperiodic signal.



of the original signal spectrum in the frequency domain. Here we sample in frequency. This generates shifted replicas of the original time signal in the time-domain representation. In order to prevent overlap, or "aliasing," of these shifted replicas in time we require the frequency sampling interval Ω₀ to be less than or equal to 2π/M. Basically, this result corresponds to the sampling theorem applied in the frequency domain.
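A short MATLAB sketch can make this concrete: sampling the DTFT of a length-M signal at N points and inverting the samples returns one period of the periodic extension, which reproduces x[n] only when N ≥ M. The length-8 exponential used here is an arbitrary illustration.

>> M = 8; x = (0.9).^(0:M-1);               % length-M nonperiodic signal
>> N = 6; k = (0:N-1).';                    % only N < M frequency samples
>> Xs = exp(-1j*2*pi/N*k*(0:M-1))*x.';      % X(e^{jk*Omega_0}) with Omega_0 = 2*pi/N
>> xt = real(ifft(Xs));                     % one period of sum_m x[n + mN]
>> xcheck = zeros(N,1);
>> for n = 0:N-1
>>     xcheck(n+1) = sum(x(n+1:N:end));     % direct periodic extension for comparison
>> end
>> max(abs(xt - xcheck))                    % agrees to within roundoff error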


EXAMPLE 4.16 Consider the signal

x[n] = cos((3π/8)n),   0 ≤ n ≤ 31
       0,              otherwise

Derive both the DTFT, X(e^{jΩ}), and the DTFS, X[k], assuming N > 31. Evaluate and plot |X(e^{jΩ})| and N|X[k]| for N = 32, 60, and 120.
Solution: First we evaluate the DTFT. Write x[n] = g[n]w[n], where g[n] = cos((3π/8)n) and w[n] is the window function

w[n] = 1,   0 ≤ n ≤ 31
       0,   otherwise

We have

G(e^{jΩ}) = πδ(Ω + 3π/8) + πδ(Ω − 3π/8)

as one 2π period of G(e^{jΩ}) and

W(e^{jΩ}) = e^{−j(31/2)Ω} sin(16Ω) / sin(Ω/2)

The modulation property implies that X(e^{jΩ}) = (1/2π) G(e^{jΩ}) ⊛ W(e^{jΩ}); for the problem at hand, this property yields

X(e^{jΩ}) = (e^{−j(31/2)(Ω + 3π/8)}/2) [sin(16(Ω + 3π/8)) / sin((1/2)(Ω + 3π/8))]
          + (e^{−j(31/2)(Ω − 3π/8)}/2) [sin(16(Ω − 3π/8)) / sin((1/2)(Ω − 3π/8))]

Let Ω₀ = 2π/N, so the N DTFS coefficients are

X[k] = (1/N) Σ_{n=0}^{31} cos((3π/8)n) e^{−jkΩ₀n}
     = (1/2N) Σ_{n=0}^{31} e^{−j(kΩ₀ + 3π/8)n} + (1/2N) Σ_{n=0}^{31} e^{−j(kΩ₀ − 3π/8)n}

Summing each geometric series gives

X[k] = (1/2N) [1 − e^{−j(kΩ₀ + 3π/8)32}] / [1 − e^{−j(kΩ₀ + 3π/8)}] + (1/2N) [1 − e^{−j(kΩ₀ − 3π/8)32}] / [1 − e^{−j(kΩ₀ − 3π/8)}]

which we rewrite, by factoring e^{−j(kΩ₀ ± 3π/8)16}/e^{−j(kΩ₀ ± 3π/8)/2} out of each term, as

X[k] = (e^{−j(31/2)(kΩ₀ + 3π/8)}/2N) [sin(16(kΩ₀ + 3π/8)) / sin((1/2)(kΩ₀ + 3π/8))]
     + (e^{−j(31/2)(kΩ₀ − 3π/8)}/2N) [sin(16(kΩ₀ − 3π/8)) / sin((1/2)(kΩ₀ − 3π/8))]

Comparison of X[k] and X(e^{jΩ}) indicates that Eq. (4.48) holds for this example.
Figures 4.47(a)-(c) depict |X(e^{jΩ})| as the dashed line and N|X[k]| as the stems for N = 32, 60, and 120. As N increases, X[k] samples X(e^{jΩ}) more densely, and the shape of the DTFS coefficients resembles the underlying DTFT more closely.

In many applications, only M values of a signal x[n] are available, and we have no knowledge of the signal's behavior outside this set of M values. The DTFS provides samples of the DTFT of this length-M sequence. The practice of choosing N > M when evaluating the DTFS is known as zero-padding, since it can be viewed as augmenting or padding the M available values of x[n] with N − M zeros. We emphasize that zero-padding does not overcome any of the limitations associated with knowing only M values of x[n]; it simply samples the underlying length-M DTFT more densely, as illustrated in the previous example.

• Drill Problem 4.13 Use the DTFT of the finite-duration nonperiodic signal

x[n] = 1,   0 ≤ n ≤ 31
       0,   otherwise

to find the DTFS coefficients of the period-64 signal x̃[n] with one period given by

x̃[n] = 1,   0 ≤ n ≤ 31
       0,   32 ≤ n ≤ 63

Answer:

x̃[n]  ⟷ DTFS; π/32 ⟷  X[k]

where

X[k] = e^{−jk31π/64} sin(kπ/2) / (64 sin(kπ/64))
FIGURE 4.47 DTFT and length-N DTFS of a 32-point cosine. The dashed line denotes |X(e^{jΩ})|, the stems represent N|X[k]|. (a) N = 32. (b) N = 60. (c) N = 120.


• RELATING THE FS TO THE FT

The relationship between the FS coefficients and the FT of a finite-duration nonperiodic signal is analogous to the discrete-time case discussed above. Let x(t) have duration T₀ so that

x(t) = 0,   t < 0 or t > T₀

Construct a periodic signal x̃(t) with period T ≥ T₀ by periodically extending x(t), as shown by

x̃(t) = Σ_{m=−∞}^{∞} x(t + mT)

The FS coefficients of x̃(t) are

X[k] = (1/T) ∫₀^{T} x̃(t) e^{−jkω₀t} dt
     = (1/T) ∫₀^{T₀} x(t) e^{−jkω₀t} dt

The FT of x(t) is defined by

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt

Hence, comparing X[k] with X(jω), we conclude that

X[k] = (1/T) X(jω) |_{ω=kω₀}

The FS coefficients are samples of the FT normalized by T.
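As a simple illustration, let x(t) = 1 for 0 ≤ t ≤ T₀ and zero otherwise. Then X(jω) = (1 − e^{−jωT₀})/(jω) = e^{−jωT₀/2} [2 sin(ωT₀/2)]/ω, so the FS coefficients of the period-T extension are X[k] = e^{−jkω₀T₀/2} [2 sin(kω₀T₀/2)]/(kω₀T), with X[0] = T₀/T, the average of one period, as expected.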

*4.10 Computational Applications of the Discrete-Time Fourier Series

The DTFS involves a finite number of discrete-valued coefficients in both the frequency and time domains. All the other Fourier representations are continuous in either the time or frequency domain. Hence the DTFS is the only Fourier representation that can be evaluated in a computer, and it is widely applied as a computational tool for manipulating signals. We begin this section by examining two common applications of the DTFS: approximating the FT and computing discrete-time convolution. In both of these the characteristics of the DTFS play a central role in correct interpretation of the result.

• APPROXIMATING THE FT

The FT applies to continuous-time nonperiodic signals. The DTFS coefficients are computed using N values of a discrete-time signal. In order to use the DTFS to approximate the FT, we must sample the continuous-time signal and retain at most N samples. We assume that the sampling interval is 𝒯 and that M ≤ N samples of the continuous-time signal are retained. Figure 4.48 depicts this sequence of steps. The problem at hand is to determine how well the DTFS coefficients Y[k] approximate X(jω), the FT of x(t). Both the sampling and windowing operations are potential sources of error in the approximation.

FIGURE 4.48 Block diagram depicting the sequence of operations involved in approximating the FT with the DTFS: x(t) is sampled at intervals 𝒯, windowed to length M by w[n], zero padded to length N, and transformed by an N-point DTFS to produce Y[k].

The potential error introduced by sampling is due to aliasing. Let x[n] ⟷ FT ⟷ X_δ(jω), where x[n] = x(n𝒯) are the samples of x(t). Equation (4.32) indicates that

X_δ(jω) = (1/𝒯) Σ_{k=−∞}^{∞} X(j(ω − kω_s))        (4.50)

where ω_s = 2π/𝒯. Suppose we wish to approximate X(jω) on the interval −ω_a < ω < ω_a, and that x(t) is bandlimited with maximum frequency ω_m, where ω_m ≥ ω_a. Aliasing in the band −ω_a < ω < ω_a is prevented by choosing 𝒯 such that ω_s > ω_m + ω_a, as illustrated in Fig. 4.49. That is, we require

𝒯 < 2π/(ω_m + ω_a)        (4.51)

The windowing operation of length M corresponds to the periodic convolution

Y(e^{jΩ}) = (1/2π) X(e^{jΩ}) ⊛ W(e^{jΩ})

where x[n] ⟷ DTFT ⟷ X(e^{jΩ}) and W(e^{jΩ}) is the window frequency response. We may rewrite this periodic convolution in terms of continuous-time frequency ω by performing the change of variable Ω = ω𝒯 in the convolution integral. We thus have

Y_δ(jω) = (1/ω_s) X_δ(jω) ⊛ W_δ(jω)        (4.52)

Here X_δ(jω) is given in Eq. (4.50), y[n] ⟷ FT ⟷ Y_δ(jω), and w[n] ⟷ FT ⟷ W_δ(jω). Both X_δ(jω) and W_δ(jω) have the same period, ω_s; hence the periodic convolution is performed over an interval of length ω_s. Since

w[n] = 1,   0 ≤ n ≤ M − 1
       0,   otherwise

FIGURE 4.49 Effect of aliasing.



we have

W_δ(jω) = e^{−jω𝒯(M−1)/2} sin(Mω𝒯/2) / sin(ω𝒯/2)        (4.53)

A plot of one period of |W_δ(jω)| is given in Fig. 4.50. The effect of the convolution is to smear or smooth the spectrum of X_δ(jω). This smearing limits our ability to resolve details in the spectrum. The degree of smearing depends on the mainlobe width of W_δ(jω). It is difficult to precisely quantify the loss in resolution resulting from windowing. Since we cannot resolve details in the spectrum that are closer than a mainlobe width apart, we define resolution as the mainlobe width ω_s/M. Hence to achieve a specified resolution ω_r we require

M ≥ ω_s/ω_r        (4.54)

Substituting for ω_s, we may rewrite this explicitly as

M𝒯 ≥ 2π/ω_r

Recognizing that M𝒯 is the total time over which we sample x(t), we see that this time interval must exceed 2π/ω_r.
The DTFS y[n] ⟷ DTFS; 2π/N ⟷ Y[k] samples the DTFT Y(e^{jΩ}) at intervals of 2π/N. That is, Y[k] = (1/N) Y(e^{jk2π/N}). In terms of continuous-time frequency ω the samples are spaced at intervals of 2π/(N𝒯) = ω_s/N, and so

Y[k] = (1/N) Y_δ(jk ω_s/N)        (4.55)

If the desired frequency sampling interval is at most Δω, then we require

N ≥ ω_s/Δω        (4.56)

Hence if aliasing does not occur, and M is chosen large enough to prevent resolution loss due to windowing, then the DTFS approximation is related to the original signal spectrum according to

Y[k] ≈ (1/(N𝒯)) X(jk ω_s/N)
FIGURE 4.50 Magnitude response of the M-point window.

The following example illustrates use of the guidelines given in Eqs. (4.51), (4.54), and (4.56) to approximate the FT with the DTFS.

EXAMPLE 4.17 Use the DTFS to approximate the FT of the signal

x(t) = e^{−t/10} u(t) (cos(10t) + cos(12t))

Assume the frequency band of interest is −20 < ω < 20 and the desired sampling interval is Δω = π/20. Compare the DTFS approximation to the underlying FT for resolutions of (a) ω_r = 2π, (b) ω_r = 2π/5, and (c) ω_r = 2π/25.

Solution: First evaluate the FT of x(t). Let f(t) = e^{−t/10} u(t) and g(t) = cos(10t) + cos(12t) so that x(t) = f(t)g(t). Use

F(jω) = 1/(jω + 1/10)

G(jω) = πδ(ω + 10) + πδ(ω − 10) + πδ(ω + 12) + πδ(ω − 12)

and the modulation property to obtain

X(jω) = (1/2) [ 1/(j(ω + 10) + 1/10) + 1/(j(ω − 10) + 1/10) + 1/(j(ω + 12) + 1/10) + 1/(j(ω − 12) + 1/10) ]

Now put the first two and last two terms of X(jω) over common denominators:

X(jω) = (1/10 + jω)/((1/10 + jω)² + 10²) + (1/10 + jω)/((1/10 + jω)² + 12²)        (4.57)

The maximum frequency of interest is given as 20, so ω_a = 20. In order to use Eq. (4.51) to find the sampling interval we must also determine ω_m, the highest frequency present in x(t). While X(jω) in Eq. (4.57) is not strictly bandlimited, for ω >> 12 the magnitude spectrum |X(jω)| decreases as 1/ω. We shall assume that X(jω) is effectively bandlimited to ω_m = 500, since |X(j500)| is more than a factor of 10 less than |X(j20)|, the highest frequency of interest and nearest frequency at which aliasing occurs. This will not prevent aliasing in −20 < ω < 20 but will ensure that the effect of aliasing in this region is small for all practical purposes. We require

𝒯 < 2π/520 = 0.0121

To satisfy this requirement, we choose 𝒯 = 0.01.
Given 𝒯, the number of samples, M, is determined using Eq. (4.54):

M ≥ 200π/ω_r

Hence for (a), ω_r = 2π, we choose M = 100; for (b), ω_r = 2π/5, we choose M = 500; and for (c), ω_r = 2π/25, we choose M = 2500.
Finally, the length of the DTFS, N, must satisfy Eq. (4.56) as shown by

N ≥ 200π/Δω

Substitution of Δω = π/20 gives N ≥ 4000, and so we choose N = 4000.
We compute the DTFS coefficients Y[k] using these values of 𝒯, M, and N. Figure 4.51 compares the FT to the DTFS approximation. The solid line in each plot is |X(jω)| and the stems represent the DTFS approximation, N𝒯|Y[k]|. Both |X(jω)| and |Y[k]| have even symmetry because x(t) is real, so we only need to depict the interval 0 < ω < 20. Figure 4.51(a)

FIGURE 4.51 DTFS approximation to the FT of x(t) = e^{−t/10}u(t)(cos(10t) + cos(12t)). The solid line is the FT |X(jω)|, the stems denote the DTFS approximation N𝒯|Y[k]|. Both |X(jω)| and N𝒯|Y[k]| have even symmetry, so only 0 < ω < 20 is displayed. (a) M = 100, N = 4000. (b) M = 500, N = 4000. (c) M = 2500, N = 4000. (d) M = 2500, N = 16,000 for 9 < ω < 13.

depicts M = 100, (b) depicts M = 500, and (c) depicts M = 2500. As M increases and the resolution ω_r decreases, the quality of the approximation improves. In the M = 100 case, the resolution (2π ≈ 6) is larger than the separation between the two peaks, and we cannot distinguish the presence of separate peaks. The only portions of the spectrum that are reasonably well approximated are the smooth sections away from the peaks. When M = 500, the resolution (2π/5 ≈ 1.25) is less than the separation between the peaks, and distinct peaks are evident, although each peak is still blurred. As we move away from the peaks the quality of the approximation improves. In case (c), the resolution (2π/25 ≈ 0.25) is much less than the peak separation, and a much better approximation is obtained over the entire frequency range.
It appears that the values at each peak are still not represented accurately in case (c). This could be due to the resolution limit imposed by M, or because we have not sampled the DTFT at small enough intervals. In Fig. 4.51(d) we increase N to 16,000 while keeping M = 2500. The region of the spectrum near the peaks, 9 < ω < 13, is depicted. Increasing N by a factor of 4 reduces the frequency sampling interval by a factor of 4. We see that there is still some error in representing each peak value, although less than suggested by Fig. 4.51(c).

The quality of the DTFS approximation to the FT improves as 𝒯 decreases, M𝒯 increases, and N increases. However, practical considerations such as memory limitations and hardware cost generally limit the range over which we can choose these parameters and force compromises. For example, if memory is limited, then we can increase M𝒯 to obtain better resolution only if we increase 𝒯 and reduce the frequency range over which the approximation is valid.
Recall that the FT of periodic signals contains continuous-valued impulse functions whose areas are proportional to the values of the corresponding FS coefficients. The nature of the DTFS approximation to the FT of a periodic signal differs slightly from the nonperiodic case because the DTFS coefficients are discrete valued and thus are not well suited to approximating continuous-valued impulses. In this case, the DTFS coefficients are proportional to the area under the impulses in the FT.
To illustrate this, consider using the DTFS to approximate the FT of a complex sinusoid with amplitude a, x(t) = a e^{jω₀t}. We have

x(t) ⟷ FT ⟷ X(jω) = 2πa δ(ω − ω₀)

Substitution for X(jω) in Eq. (4.50) yields

X_δ(jω) = (2πa/𝒯) Σ_{k=−∞}^{∞} δ(ω − ω₀ − kω_s)

Recognizing that ω_s = 2π/𝒯 and substituting for X_δ(jω) in Eq. (4.52) gives the FT of the sampled and windowed complex sinusoid as

Y_δ(jω) = a W_δ(j(ω − ω₀))        (4.58)

where W_δ(jω) is given by Eq. (4.53).
Application of Eq. (4.55) indicates that the DTFS coefficients associated with the sampled and windowed complex sinusoid are given by

Y[k] = (a/N) W_δ(j(k ω_s/N − ω₀))        (4.59)

Hence the DTFS approximation to the FT of a complex sinusoid consists of samples of the FT of the window frequency response centered on ω₀, with amplitude proportional to a.
If we choose N = M (no zero-padding) and the frequency of the complex sinusoid satisfies ω₀ = mω_s/M, then the DTFS samples W_δ(j(ω − ω₀)) at the peak of its mainlobe and at its zero crossings. Consequently, we have

Y[k] = a,   k = m
       0,   otherwise,   for 0 ≤ k ≤ M − 1

In this special case, the continuous-valued impulse with strength 2πa in the FT is approximated by a discrete-valued impulse of amplitude a.
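A brief MATLAB check of this special case, using arbitrary values of a, M, and m, confirms the result (recall from Section 3.7 that fft omits the 1/M factor of the DTFS):

>> a = 3; M = 64; m = 5;
>> Y = fft(a*exp(1j*2*pi*m*(0:M-1)/M))/M;   % DTFS of the sampled complex sinusoid
>> % abs(Y) equals a at k = m and is zero (to within roundoff) at every other k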

An arbitrary periodic signal is represented by the FS as a weighted sum of harmonically related complex sinusoids, so in general the DTFS approximation to the FT consists of samples of a weighted sum of shifted window frequency responses. The following example illustrates this effect.
EXAMPLE 4.18 Use the DTFS to approximate the FT of the periodic signal

x(t) = cos(2π(0.4)t) + ½ cos(2π(0.45)t)

Assume the frequency band of interest is −10π < ω < 10π and that the desired sampling interval is Δω = 20π/M. Evaluate the DTFS approximation for resolutions of (a) ω_r = π/2 and (b) ω_r = π/100.

Solution: First note that the FT of x(t) is given by

X(jω) = πδ(ω + 0.8π) + πδ(ω − 0.8π) + (π/2)δ(ω + 0.9π) + (π/2)δ(ω − 0.9π)

The maximum frequency of interest is given as ω_a = 10π, and this is much larger than the highest frequency in X(jω), so aliasing is not a concern and we choose ω_s = 2ω_a. This gives 𝒯 = 0.1. The number of samples, M, is determined by substituting ω_s into Eq. (4.54):

M ≥ 20π/ω_r

To obtain the resolution specified in case (a) we require M ≥ 40 samples, while in case (b) we require M ≥ 2000 samples. We shall choose M = 40 for case (a) and M = 2000 for case (b). The desired sampling interval in frequency indicates that N = M and thus there is no zero-padding.
The signal is a weighted sum of complex sinusoids, so the underlying FT, Y_δ(jω), which will be sampled by the DTFS, is a weighted sum of shifted window frequency responses given by

Y_δ(jω) = ½W_δ(j(ω + 0.8π)) + ½W_δ(j(ω − 0.8π)) + ¼W_δ(j(ω + 0.9π)) + ¼W_δ(j(ω − 0.9π))

In case (a), W_δ(jω) is given by

W_δ(jω) = e^{−jω39/20} sin(2ω) / sin(ω/20)

In case (b), it is given by

W_δ(jω) = e^{−jω1999/20} sin(100ω) / sin(ω/20)

The stems in Fig. 4.52(a) depict |Y[k]| for M = 40, while the solid line depicts (1/M)|Y_δ(jω)| for positive frequencies. We have chosen to label the axis in units of Hz rather than rad/s for convenience. In this case, the minimum resolution of ω_r = π/2 rad/s, or 0.25 Hz, is five times greater than the separation between the two sinusoidal components. Hence we cannot identify the presence of two sinusoids in either |Y[k]| or (1/M)|Y_δ(jω)|.
Figure 4.52(b) illustrates |Y[k]| for M = 2000. We zoom in on the frequency band containing the sinusoids in Fig. 4.52(c), depicting |Y[k]| with the stems and (1/M)|Y_δ(jω)| with the solid line. In this case, the minimum resolution is a factor of 10 times smaller than the separation between the two sinusoidal components, and we clearly see the presence of two

FIGURE 4.52 DTFS approximation to the FT of x(t) = cos(2π(0.4)t) + ½cos(2π(0.45)t). The stems denote |Y[k]| while the solid line denotes (1/M)|Y_δ(jω)|. The frequency axis is displayed in units of hertz for convenience and only positive frequencies are illustrated. (a) M = 40. (b) M = 2000. (c) Behavior in the vicinity of the sinusoidal frequencies for M = 2000. (d) Behavior in the vicinity of the sinusoidal frequencies for M = 2010.

sinusoids. The interval at which the DTFS samples Y_δ(jω) is 2π/200 rad/s, or 0.005 Hz. The frequency of each sinusoid is an integer multiple of the sampling interval, so Y[k] samples Y_δ(jω) once at the peak of each mainlobe, with the remainder of the samples occurring at the zero crossings. Hence the amplitude of each component is correctly reflected in |Y[k]|.
Figure 4.52(d) depicts |Y[k]| and (1/M)|Y_δ(jω)| assuming M = 2010. This results in slightly better resolution than M = 2000. However, in this case the frequency of each sinusoid is not an integer multiple of the interval at which the DTFS samples Y_δ(jω). Consequently, Y_δ(jω) is not sampled at the peak of each mainlobe and zero crossings. While the resolution is sufficient to reveal the presence of two components, we can no longer determine the amplitude of each component directly from |Y[k]|.

• COMPUTING DISCRETE-TIME CONVOLUTION WITH A FINITE-LENGTH SEQUENCE

The DTFS may be used to efficiently compute the output of a system characterized by a finite-length impulse response. Discrete-time systems with finite-length impulse responses are often used to filter signals and are termed finite-duration impulse response (FIR) filters. The output of the filter is related to the input through the convolution sum. Instead of directly computing the convolution sum, the DTFS may be used to compute the output by performing multiplication in the frequency domain. If a fast algorithm is used to evaluate the DTFS (see the following section), then this approach offers significant savings in computation. While straightforward in concept, care must be exercised because we are using periodic convolution to compute ordinary or linear convolution: the filter output is given by the linear convolution of the impulse response with the input, but multiplication of DTFS coefficients corresponds to a periodic convolution.
Let h[n] be an impulse response of length M; that is, assume h[n] = 0 for n < 0 and n ≥ M. The system output y[n] is related to the input x[n] via the convolution sum

y[n] = h[n] * x[n]
     = Σ_{k=0}^{M−1} h[k] x[n − k]        (4.60)

The goal is to evaluate this sum using a set of periodic convolutions computed via the DTFS.
Consider the N-point periodic convolution of h[n] with N consecutive values of the input sequence x[n], and assume N ≥ M. Let x̃[n] and h̃[n] be N-point periodic versions of x[n] and h[n], respectively,

x̃[n] = x[n],          for 0 ≤ n ≤ N − 1
x̃[n + mN] = x̃[n],     for all integer m, 0 ≤ n ≤ N − 1
h̃[n] = h[n],          for 0 ≤ n ≤ N − 1
h̃[n + mN] = h̃[n],     for all integer m, 0 ≤ n ≤ N − 1

The periodic convolution between h̃[n] and x̃[n] is

ỹ[n] = h̃[n] ⊛ x̃[n]
     = Σ_{k=⟨N⟩} h̃[k] x̃[n − k]        (4.61)
     = Σ_{k=0}^{N−1} h̃[k] x̃[n − k]

Using the relationship between h̃[n] and h[n] to rewrite Eq. (4.61), we have

ỹ[n] = Σ_{k=0}^{M−1} h[k] x̃[n − k]        (4.62)

Now, since x̃[n] = x[n], 0 ≤ n ≤ N − 1, we know that

x̃[n − k] = x[n − k],   0 ≤ n − k ≤ N − 1


In the sum of Eq. (4.62) k varies from 0 to M − 1, and so the condition 0 ≤ n − k ≤ N − 1 is always satisfied provided M − 1 ≤ n ≤ N − 1. Substituting x̃[n − k] = x[n − k], M − 1 ≤ n ≤ N − 1, into Eq. (4.62), we obtain

ỹ[n] = Σ_{k=0}^{M−1} h[k] x[n − k],   M − 1 ≤ n ≤ N − 1
     = y[n],   M − 1 ≤ n ≤ N − 1

Hence the periodic convolution is equal to the linear convolution at L = N − M + 1 values of n.
We may obtain values of y[n] other than those on the interval M − 1 ≤ n ≤ N − 1 by shifting x[n] prior to defining x̃[n]. Let

x̃_p[n] = x[n + pL],        0 ≤ n ≤ N − 1
x̃_p[n + mN] = x̃_p[n],     for all integer m, 0 ≤ n ≤ N − 1

and define

ỹ_p[n] = h̃[n] ⊛ x̃_p[n]

In this case we have

ỹ_p[n] = y[n + pL],   M − 1 ≤ n ≤ N − 1

That is, the last L values in one period of ỹ_p[n] correspond to y[n] for M − 1 + pL ≤ n ≤ N − 1 + pL. Each time we increment p, the N-point periodic convolution gives us L new values of the linear convolution.
The relationship between periodic and linear convolution leads to the following procedure, called the overlap and save method, for evaluating a linear convolution with the DTFS. We assume that we desire to compute y[n] = h[n] * x[n] for n ≥ 0 and that h[n] = 0 for n < 0 and n > M − 1.

Overlap and Save Method of Implementing Convolution
1. Compute the N DTFS coefficients H[k]: h[n] ⟷ DTFS; 2π/N ⟷ H[k].
2. Set p = 0 and L = N − M + 1.
3. Define x_p[n] = x[n − (M − 1) + pL], 0 ≤ n ≤ N − 1.
4. Compute the N DTFS coefficients X_p[k]: x_p[n] ⟷ DTFS; 2π/N ⟷ X_p[k].
5. Compute the product Y_p[k] = N H[k] X_p[k].
6. Compute the time signal y_p[n] from the DTFS coefficients Y_p[k]: y_p[n] ⟷ DTFS; 2π/N ⟷ Y_p[k].
7. Save the L output points: y[n + pL] = y_p[n + M − 1], 0 ≤ n ≤ L − 1.
8. Set p = p + 1 and return to step 3.
This overlap and save algorithm for computing convolution is so called because there is overlap in the input blocks x_p[n] and a portion of the output y_p[n] is saved. Other algorithms for evaluating linear convolution using circular convolution may also be derived. They are known collectively as fast convolution algorithms because they require significantly less computation than direct evaluation of convolution when a fast algorithm is used to compute the required DTFS relationships. The exact savings depends on M and N.
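The following MATLAB sketch implements the eight steps above using fft and ifft for the DTFS computations. The impulse response, the input, and the block length are illustrative choices, not quantities defined in the text.

>> M = 50; N = 141; L = N - M + 1;           % L = 92 output points per block
>> h = 0.02*ones(1,M);                       % a length-M impulse response
>> x = randn(1,1000);                        % a long input sequence
>> H = fft(h,N)/N;                           % step 1: N DTFS coefficients of h[n]
>> xpad = [zeros(1,M-1), x, zeros(1,N)];     % so x_p[n] = x[n-(M-1)+p*L] is a slice of xpad
>> y = zeros(1,length(x)); p = 0;            % step 2
>> while p*L < length(x)
>>     xp = xpad(p*L + (1:N));               % step 3: next block of N input values
>>     Xp = fft(xp)/N;                       % step 4: DTFS coefficients of the block
>>     Yp = N*H.*Xp;                         % step 5
>>     yp = ifft(N*Yp);                      % step 6: ifft supplies the 1/N of the inverse DTFS
>>     keep = min(L, length(x) - p*L);       % step 7: save the last L points (fewer in the final block)
>>     y(p*L + (1:keep)) = real(yp(M-1 + (1:keep)));
>>     p = p + 1;                            % step 8
>> end
>> % y now agrees with the first 1000 values of conv(h,x)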

FIGURE 4.53 Example of using periodic convolution to compute linear convolution. (a) |H[k]|. (b) x̃₀[n]. (c) |X̃₀[k]|. (d) |Ỹ₀[k]| = |NH[k]X̃₀[k]|. (e) ỹ₀[n]. (f) y[n].

EXAMPLE 4.19 Use periodic convolution and the DTFS to determine the output y[n] of the system with impulse response

h[n] = (−15/16)ⁿ (u[n] − u[n − 50])

in response to the input

x[n] = 0.9ⁿ cos((π/8)n) (u[n] − u[n − 42])

Solution: The output is the linear convolution of h[n] and x[n]. Both h[n] and x[n] have a finite duration. The linear convolution of two finite-length signals is a finite-length signal whose length is at most the sum of the lengths of x[n] and h[n]. Both x[n] and h[n] are zero for n < 0, so y[n] is zero for n < 0 and n ≥ 50 + 42. Our goal is to use the DTFS to determine y[n], 0 ≤ n ≤ 91. We know that the periodic convolution is equal to the linear convolution at L = N − M + 1 values. Here M = 50 and we desire L ≥ 92 to determine the 92 values of y[n] on this interval. We therefore choose the DTFS length N = 141 so that all values of interest are determined with one pass through the overlap and save procedure.
In step 1 we use the computer to evaluate H[k] from h[n], where

h[n] ⟷ DTFS; 2π/141 ⟷ H[k]

The magnitude of H[k] is depicted in Fig. 4.53(a). Now set p = 0 and L = 92 to complete step 2. In step 3 we define x̃₀[n] as a shifted periodic version of x[n]. We have

x̃₀[n] = x[n − 49]
      = 0.9^{n−49} cos((π/8)(n − 49)) (u[n − 49] − u[n − 91]),   0 ≤ n ≤ 140
      = 0.9^{n−49} cos((π/8)(n − 49)),   49 ≤ n ≤ 90
        0,                               0 ≤ n ≤ 48 and 91 ≤ n ≤ 140

as one period of x̃₀[n]. This signal is depicted in Fig. 4.53(b). Complete step 4 by using the computer to evaluate X̃₀[k] described by

x̃₀[n] ⟷ DTFS; 2π/141 ⟷ X̃₀[k]

Figure 4.53(c) illustrates |X̃₀[k]|. The next step is to compute Ỹ₀[k] = NH[k]X̃₀[k]. Figure 4.53(d) depicts |Ỹ₀[k]|. We then compute ỹ₀[n] in step 6, where

ỹ₀[n] ⟷ DTFS; 2π/141 ⟷ Ỹ₀[k]

The signal ỹ₀[n] is depicted in Fig. 4.53(e). Finally, set y[n] = ỹ₀[n + 49], 0 ≤ n ≤ 91, as illustrated in Fig. 4.53(f).

*4.11 Efficient Algorithms for Evaluating the DTFS

The role of the DTFS as a computational tool is greatly expanded by the availability of efficient algorithms for evaluating the forward and the inverse DTFS. Such algorithms are collectively termed fast Fourier transform or FFT algorithms. These fast algorithms

operate on the "divide and conquer" principle by splitting the DTFS into a series of lower order DTFS and exploiting the symmetry and periodicity properties of the complex sinusoid e^{jkΩ₀n}. Less computation is required to evaluate and combine the lower order DTFS than to evaluate the original DTFS. We shall demonstrate the computational savings that accrue from this splitting process.
Recall that the DTFS pair may be evaluated using the expressions

X[k] = (1/N) Σ_{n=0}^{N−1} x[n] e^{−jkΩ₀n}

and

x[n] = Σ_{k=0}^{N−1} X[k] e^{jkΩ₀n}        (4.63)

These expressions are virtually identical, differing only in the normalization by N and the sign of the complex exponential. Hence the same basic algorithm can be used to compute either relationship; only minor changes are required. We shall consider evaluating Eq. (4.63).
Direct evaluation of Eq. (4.63) for a single value of n requires N complex multiplications and N − 1 complex additions. Thus computation of x[n], 0 ≤ n ≤ N − 1, requires N² complex multiplications and N² − N complex additions. In order to demonstrate how this can be reduced, we assume N is even. Split X[k], 0 ≤ k ≤ N − 1, into even- and odd-indexed signals, as shown by

X_e[k] = X[2k],        0 ≤ k ≤ N′ − 1

and

X_o[k] = X[2k + 1],    0 ≤ k ≤ N′ − 1

where N′ = N/2 and

x_e[n] ⟷ DTFS; Ω₀′ ⟷ X_e[k],    x_o[n] ⟷ DTFS; Ω₀′ ⟷ X_o[k]

with Ω₀′ = 2π/N′. Now express Eq. (4.63) as a combination of the N′ DTFS coefficients X_e[k] and X_o[k], as shown by

x[n] = Σ_{k=0}^{N−1} X[k] e^{jkΩ₀n}
     = Σ_{k even} X[k] e^{jkΩ₀n} + Σ_{k odd} X[k] e^{jkΩ₀n}

Write the even and odd indices as 2m and 2m + 1, respectively, to obtain

x[n] = Σ_{m=0}^{N′−1} X[2m] e^{jm2Ω₀n} + Σ_{m=0}^{N′−1} X[2m + 1] e^{j(m2Ω₀n + Ω₀n)}

Substitute the definitions of X_e[k], X_o[k], and Ω₀′ = 2Ω₀ so that

x[n] = Σ_{m=0}^{N′−1} X_e[m] e^{jmΩ₀′n} + e^{jΩ₀n} Σ_{m=0}^{N′−1} X_o[m] e^{jmΩ₀′n}

This indicates that x[n] is a weighted combination of x_e[n] and x_o[n].


We may further simplify our result by exploiting the periodicity properties of x_e[n] and x_o[n]. Using x_e[n + N′] = x_e[n], x_o[n + N′] = x_o[n], and e^{j(n+N′)Ω₀} = −e^{jnΩ₀}, we obtain

x[n] = x_e[n] + e^{jΩ₀n} x_o[n],   0 ≤ n ≤ N′ − 1        (4.64)

as the first N′ values of x[n] and

x[n + N′] = x_e[n] − e^{jΩ₀n} x_o[n],   0 ≤ n ≤ N′ − 1        (4.65)

as the second N′ values of x[n]. Figure 4.54(a) depicts the computation described in Eqs. (4.64) and (4.65) graphically for N = 8. We see that we need only multiply by

FIGURE 4.54 Block diagrams depicting the decomposition of an inverse DTFS as a combination of lower-order inverse DTFS. (a) Eight-point inverse DTFS represented in terms of two 4-point inverse DTFS. (b) Four-point inverse DTFS represented in terms of 2-point inverse DTFS. (c) Two-point inverse DTFS.

e^{jΩ₀n} once in computing both equations. The remaining operations are addition and subtraction.
Let us consider the computation required to evaluate Eqs. (4.64) and (4.65). Evaluation of x_e[n] and x_o[n] each requires (N′)² complex multiplications, for a total of N²/2 complex multiplications. An additional N′ multiplications are required to compute e^{jΩ₀n} x_o[n]. Thus the total number of complex multiplications is N²/2 + N/2. For large N this is approximately N²/2, about one-half the number of multiplications required to directly evaluate x[n]. Additional reductions in computational requirements are obtained if we further split X_e[k] and X_o[k] into even- and odd-indexed sequences. For example, Fig. 4.54(b) illustrates how to split the 4-point inverse DTFS used to calculate x_e[n] into two 2-point inverse DTFSs for N = 8. The greatest savings occurs when N is a power of 2. In that case we can continue subdividing until the size of each inverse DTFS is 2. The 2-point inverse DTFS requires no multiplications, as illustrated in Fig. 4.54(c).
Figure 4.55 illustrates the FFT computation for N = 8. The repeated partitioning into even- and odd-indexed sequences permutes the order of the DTFS coefficients at the input. This permutation is termed bit-reversal, since the location of X[k] may be determined by reversing the bits in a binary representation of the index k. For example, X[6] has index k = 6. Representing k = 6 in binary form gives k = 110₂. Now reversing the bits gives k_r = 011₂, or k_r = 3, so X[6] appears in the third position. The basic two-input, two-output structure depicted in Fig. 4.54(c) that is duplicated in each stage of the FFT (see Fig. 4.55) is termed a butterfly because of its appearance.
FFT algorithms for N a power of 2 require on the order of N log₂(N) complex multiplications. This can represent an extremely large savings in computation relative to N² when N is large. For example, if N = 8192 = 2¹³, the direct approach requires approximately 630 times as many arithmetic operations as the FFT algorithm.
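The splitting in Eqs. (4.64) and (4.65) is easy to verify numerically in MATLAB for an arbitrary set of coefficients; since ifft includes a 1/N factor, multiplying by the transform length recovers the summation of Eq. (4.63).

>> N = 8; Np = N/2;
>> X = randn(1,N) + 1j*randn(1,N);        % arbitrary DTFS coefficients X[k]
>> xe = Np*ifft(X(1:2:N));                % x_e[n]: N/2-point inverse DTFS of X[2k]
>> xo = Np*ifft(X(2:2:N));                % x_o[n]: N/2-point inverse DTFS of X[2k+1]
>> tw = exp(1j*2*pi*(0:Np-1)/N);          % e^{j*Omega_0*n}
>> xsplit = [xe + tw.*xo, xe - tw.*xo];   % Eqs. (4.64) and (4.65)
>> xdirect = N*ifft(X);                   % direct evaluation of Eq. (4.63)
>> max(abs(xsplit - xdirect))             % zero to within roundoff error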
Many software packages contain routines that implement FFT algorithms. Unfortunately, the location of the 1/N factor is not standardized. Some routines place the 1/N in the expression for the DTFS coefficients X[k], as we have here, while others place the

FIGURE 4.55 Diagram of the FFT algorithm for computing x[n] from X[k] for N = 8.

1/N in the expression for the time signal x[n]. Yet another convention is to place 1/√N in both the expression for X[k] and that for x[n]. The only effect of these alternate conventions is to multiply the DTFS coefficients X[k] by either N or √N.

4.12 Exploring Concepts with MATLAB


• FREQUENCY RESPONSE OF LTI SYSTEMS

The MATLAB Signal Processing and Control System Toolboxes contain routines for evaluating the frequency response of LTI systems. Frequency response is a continuous function of frequency. However, numerically we can evaluate the frequency response only at discrete values of frequency. Thus a sufficiently large number of values is normally used to capture the detail in the system's frequency response.
Recall that the impulse and frequency responses of a continuous-time system are related through the FT, while the DTFT relates impulse and frequency responses of discrete-time systems. Hence determining the frequency response from the impulse response description requires approximating either the FT or DTFT using the DTFS, a topic that is discussed in Sections 4.9 and 4.10. We noted in Section 3.7 that the DTFS may be implemented in MATLAB with the fft command.
The commands freqs and freqz evaluate the frequency response for systems described by differential and difference equations, respectively. The command H = freqs(b,a,w) returns the values of the continuous-time system frequency response given by Eq. (4.3) at the frequencies specified in the vector w. Here we assume that the vectors b = [b_M, b_{M-1}, ..., b_0] and a = [a_N, a_{N-1}, ..., a_0] represent the coefficients of the differential equation. The syntax for freqz is different in a subtle way. The command H = freqz(b,a,w) evaluates the discrete-time system frequency response given by Eq. (4.4) at the frequencies specified in the vector w. In the discrete-time case, the entries of w must lie between 0 and 2π, and the vectors b = [b_0, b_1, ..., b_M] and a = [a_0, a_1, ..., a_N] contain the difference-equation coefficients in the reverse order of that required by freqs.
The frequency response of either a continuous- or discrete-time system described in state-variable form (see Eqs. (4.10) and (4.11)) may be computed using freqresp. The syntax is H = freqresp(sys,w), where sys is the object containing the state-variable description (see Section 2.7) and w is a vector containing the frequencies at which to evaluate the frequency response. freqresp applies in general to multiple-input, multiple-output systems, so the output H is a multidimensional array. For the class of single-input, single-output systems considered in this text and N frequency points in w, H is a multidimensional array of size 1 by 1 by N. The command squeeze(H) converts H to a length-N vector that may be displayed with the plot command.
For example, we may numerically evaluate and plot the magnitude response for the system described in Example 4.5 at 500 points evenly spaced on the interval 0 ≤ ω < 10 by using the commands:
>> a = [2, -1; 1, 0]; b = [1; 0]; c = [3, 1]; d = 0;
>> sys = ss(a,b,c,d);
>> w = [0:499]*10/500;
>> H = freqresp(sys,w);
>> Hmag = abs(squeeze(H));
>> plot(w,Hmag)
>> title('System Magnitude Response')
>> xlabel('Frequency (rad/s)'); ylabel('Magnitude')
Figure 4.56 depicts the system's magnitude response.

FIGURE 4.56 Magnitude response evaluated from a state-variable description using MATLAB.

• DECIMATION AND INTERPOLATION

Recall that decimation reduces the effective sampling rate of a discrete-time signal, while interpolation increases the effective sampling rate. Decimation is accomplished by subsampling a lowpass-filtered version of the signal, while interpolation is performed by inserting zeros in between samples and then applying a lowpass filter. The Signal Processing Toolbox contains several routines for performing decimation and interpolation. All of them automatically design and apply the lowpass filter required for both decimation and interpolation. The command y = decimate(x,r) decimates the signal represented by x by a positive integer factor r to produce y. The vector y is a factor of r shorter than x. Similarly, y = interp(x,r) interpolates x by a positive integer factor r, producing a vector y that is r times as long as x. The command y = resample(x,p,q) resamples the signal in vector x at p/q times the original sampling rate, where p and q are positive integers. This is conceptually equivalent to first interpolating by a factor p and then decimating by a factor q. The vector y is p/q times the length of x. The values of the resampled sequence may be inaccurate near the beginning and end of y if x contains large deviations from zero at its beginning and end.
Suppose the discrete-time signal

x[n] = e^{−n/15} sin((2π/13)n + π/8),   0 ≤ n ≤ 59

results from sampling a continuous-time signal at a rate of 45 kHz and that we wish to find the discrete-time signal resulting from sampling the underlying continuous-time signal at 30 kHz. This corresponds to changing the sampling rate by the factor 30/45 = 2/3. The resample command is used to effect this change as follows:
>> x = exp(-[0:59]/15).*sin([0:59]*2*pi/13 + pi/8);
>> y = resample(x,2,3);
>> subplot(2,1,1)
>> stem([0:59],x);
>> title('Signal sampled at 45 kHz'); xlabel('Time'); ylabel('Amplitude')
>> subplot(2,1,2)
>> stem([0:39],y);
>> title('Signal sampled at 30 kHz'); xlabel('Time'); ylabel('Amplitude')

The original and resampled signals resulting from these commands are depicted in Fig.
4.57.

• RELATING THE DTFS TO THE DTFT

Equation (4.48) states that the DTFS coefficients of a finite-duration signal correspond to samples of the DTFT divided by the number of DTFS coefficients, N. As discussed in Section 3.7, the MATLAB command fft calculates N times the DTFS coefficients. Hence fft directly evaluates samples of the DTFT of a finite-duration signal. The process of appending zeros to the finite-duration signal before computing the DTFS is called zero-padding and results in a denser sampling of the underlying DTFT. Zero-padding is easily accomplished with fft by adding an argument that specifies the number of coefficients to compute. If x is a length-M vector representing a finite-duration time signal and n is greater than M, then the command X = fft(x,n) evaluates n samples of the DTFT of

FIGURE 4.57 Original and resampled signals obtained using MATLAB.



x by first padding x with trailing zeros to length n. If n is less than M, then fft(x,n)
first truncates x to length n.
The frequency values corresponding to the samples in X are represented by a vector
n points long with first element zero and remaining entries spaced at intervals of 2π/n.
For example, the command w = [0:(n-1)]*2*pi/n generates the appropriate
vector of frequencies. Note that X describes the DTFT for frequencies on the interval
0 ≤ Ω < 2π. It is sometimes more convenient to view the DTFT over a frequency range
centered on zero, that is, -π ≤ Ω < π. The MATLAB command Y = fftshift(X)
swaps the left and right halves of X in order to put the zero-frequency value in the center.
The vector of frequency values corresponding to the values in Y may be generated using
w = [-n/2:(n/2-1)]*2*pi/n.
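For instance, the following fragment is a minimal sketch of these mechanics: the length-10 signal and the choice n = 50 are arbitrary values used only for illustration. It evaluates a zero-padded DTFT and plots its magnitude on the centered frequency axis.
>> x = ones(1,10);                 % a length M = 10 finite-duration signal
>> n = 50;                         % number of DTFT samples; n > M pads x with zeros
>> X = fft(x,n);                   % samples of the DTFT on 0 <= Omega < 2*pi
>> Y = fftshift(X);                % reorder samples to -pi <= Omega < pi
>> w = [-n/2:(n/2-1)]*2*pi/n;      % frequencies corresponding to the values in Y
>> plot(w,abs(Y))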
Suppose we revisit Example 4.16 using MATLAB to evaluate |X(e^jΩ)| at intervals in
frequency of (a) 2π/32, (b) 2π/60, and (c) 2π/120. Recall that

x[n] = { cos(3πn/8),   0 ≤ n ≤ 31
       { 0,            otherwise

For case (a) we use a 32-point DTFS computed from the 32 nonzero values of the signal.
In cases (b) and (c) we zero pad to length 60 and 120, respectively, to sample the DTFT at
the specified intervals. We evaluate and display the results on -π ≤ Ω < π using the
following commands:
>> n = [0:31];
>> x = cos(3*pi*n/8);
>> X32 = abs(fftshift(fft(x)));        % magnitude for 32 point DTFS
>> X60 = abs(fftshift(fft(x,60)));     % magnitude for 60 point DTFS
>> X120 = abs(fftshift(fft(x,120)));   % magnitude for 120 point DTFS
>> w32 = [-16:15]*2*pi/32; w60 = [-30:29]*2*pi/60;
   w120 = [-60:59]*2*pi/120;
>> stem(w32,X32);      % stem plot for Fig. 4.47(a)
>> stem(w60,X60);      % stem plot for Fig. 4.47(b)
>> stem(w120,X120);    % stem plot for Fig. 4.47(c)
The results are depicted as the stem plots in Figs. 4.47(a)-(c).

• COMPUTATIONAL APPLICATIONS OF THE DTFS


As previously noted, MATLAB's fft command may be used to evaluate the DTFS and
thus is used for approximating the FT. In particular, the fft is used to generate the DTFS
approximations in Examples 4.17 and 4.18. To repeat Example 4.18 we use the commands
>> ta = 0:0.1:3.9;      % time samples for case (a)
>> tb = 0:0.1:199.9;    % time samples for case (b)
>> xa = cos(0.8*pi*ta) + 0.5*cos(0.9*pi*ta);
>> xb = cos(0.8*pi*tb) + 0.5*cos(0.9*pi*tb);
>> Ya = abs(fft(xa)/40); Yb = abs(fft(xb)/2000);
>> Ydela = abs(fft(xa,8192)/40);      % evaluate 1/M Y_delta(j omega) for case (a)
>> Ydelb = abs(fft(xb,16000)/2000);   % evaluate 1/M Y_delta(j omega) for case (b)
>> fa = [0:19]*5/20; fb = [0:999]*5/1000;
>> fdela = [0:4095]*5/4096; fdelb = [0:7999]*5/8000;
>> plot(fdela,Ydela(1:4096))    % Fig. 4.52(a)
>> hold on
>> stem(fa,Ya(1:20))
>> xlabel('Frequency (Hz)'); ylabel('Amplitude')
>> hold off
>> plot(fdelb(560:800),Ydelb(560:800))    % Fig. 4.52(c)
>> hold on
>> stem(fb(71:100),Yb(71:100))
>> xlabel('Frequency (Hz)'); ylabel('Amplitude')
Note that here we evaluated (1/M)Y_δ(jω) using fft by zero-padding with a large number
of zeros relative to the length of x[n]. Recall that zero-padding decreases the spacing be-
tween the samples of the DTFT that are obtained by the DTFS. Hence by padding with a
large number of zeros, we capture sufficient detail such that plot provides a smooth
approximation to the underlying DTFT. If plot is used to display the DTFS coefficients
without zero-padding, then a much coarser approximation to the underlying DTFT is
obtained. Figure 4.58 depicts the DTFS coefficients for case (b) of Example 4.18 using
both plot and stem. It is obtained using the commands
>> plot(fb(71:100),Yb(71:100))
>> hold on
>> stem(fb(71:100),Yb(71:100))
Here the plot command produces triangles centered on the frequencies associated with
the sinusoids. The triangles are a consequence of plot drawing straight lines in between
the values in Yb.
The fft command is implemented using a numerically efficient, or fast Fourier
transform, algorithm based on the divide and conquer principle discussed in Section 4.11.
The Signal Processing Toolbox routine fftfilt employs a fast Fourier transform-based

FIGURE 4.58 Use of the MATLAB command plot for displaying the DTFS coefficients in
case (b) of Example 4.18 (amplitude versus frequency in hertz).
algorithm for computing convolution with a finite-length sequence. It is based on the
overlap and add method, which is similar in principle to the overlap and save approach
discussed in Section 4.10. Both implement convolution by multiplication of DTFS coeffi-
cients. They use different approaches for relating the desired linear convolution to the
periodic convolution associated with the DTFS.
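As a simple check of this routine, the following fragment compares the FFT-based result with direct convolution. It is a minimal sketch: the moving-average impulse response and the random input are chosen only for illustration, and it assumes the Signal Processing Toolbox is available.
>> h = ones(1,8)/8;                 % finite-length impulse response (moving average)
>> x = randn(1,1000);               % long input sequence
>> y1 = fftfilt(h,x);               % FFT-based (overlap and add) convolution
>> y2 = conv(h,x);                  % direct convolution for comparison
>> max(abs(y1 - y2(1:length(x))))   % agreement to within round-off error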

4.13 Summary
Fourier representations find application in analyzing the interaction between signals and
systems and in numerical evaluation of signal characteristics or system behavior. The DTFS
is used for numerical computation since it is the only representation that can be evaluated
in a computer. The FT and DTFT are most commonly used for analysis. In the course of
applying these representations we frequently encounter situations in which there is a mix-
ing of signal classes. This mixing occurs naturally in the interaction between signals and
systems and the manipulation of signals. We have established relationships between dif-
ferent Fourier representations in this chapter in order to address mixing of signal classes.
The FT is the most versatile representation for analysis, since there are FT represen-
tations for all four signal classes. It is most often used to analyze continuous-time LTI
systems, and systems that sample continuous-time signals or reconstruct continuous-time
signals from samples. The primary use of the DTFT is to analyze discrete-time systems.
We have developed a DTFT representation for discrete-time periodic signals to facilitate
this role. The DTFS is used to approximate both the FT and DTFT in computational
applications. The existence of computationally efficient or "fast" algorithms for evaluating
the DTFS greatly expands the range of problems in which it may be used. We have estab-
lished the relationships between the DTFS and the FT as well as between the DTFS and
DTFT in order to correctly interpret the results of numerical computations.
Fourier methods provide a powerful set of analytic and numerical tools for solving
problems involving signals and systems. They provide an important set of tools for the
study of communication systems, as we see in the next chapter. They also have extensive
applications in the context of filtering, the topic of Chapter 8.

Further Reading

1. The topics of sampling, reconstruction, discrete-time signal processing systems, computa-
tional applications of the DTFS, and fast algorithms for the DTFS are discussed in greater
detail in the following signal processing texts:
• Porat, B., A Course in Digital Signal Processing (Wiley, 1997)
• Proakis, J. G., and D. G. Manolakis, Introduction to Digital Signal Processing (Macmillan,
1988)
• Oppenheim, A. V., and R. W. Schafer, Discrete-Time Signal Processing, Second Edition (Prentice Hall, 1999)
• Jackson, L. B., Digital Filters and Signal Processing, Third Edition (Kluwer, 1996)
• Roberts, R. A., and C. T. Mullis, Digital Signal Processing (Addison-Wesley, 1987)

2. In numerical computation applications, the DTFS is usually termed the discrete Fourier
transform, or DFT. We have chosen to retain the DTFS terminology for consistency and to
avoid confusion with the DTFT.
3. Rediscovery of the FFT algorithm for evaluating the DTFS is attributed to J. W. Cooley and
J. W. Tukey for their 1965 publication "An algorithm for the machine calculation of com-
plex Fourier series," Math. Comput., vol. 19, pp. 297-301. This paper greatly accelerated
the development of a field called digital signal processing, which was in its infancy in the
mid-1960s. The availability of a fast algorithm for computing the DTFS opened a tremen-
dous number of new applications for digital signal processing, and resulted in explosive
growth of this new field. Indeed, the majority of this chapter and a substantial portion of
Chapter 8 concern the field of digital signal processing.
Carl Friedrich Gauss, the eminent German mathematician, has been credited with
developing an equivalent, efficient algorithm for computing DTFS coefficients as early as
1805, predating Joseph Fourier's work on harmonic analysis. Additional reading on the
history of the FFT and its impact on digital signal processing is found in the following
articles:
• M. T. Heideman, D. H. Johnson, and C. S. Burrus, "Gauss and the history of the fast Fourier
transform," IEEE ASSP Magazine, vol. 1, no. 4, pp. 14-21, October 1984.
• J. W. Cooley, "How the FFT gained acceptance," IEEE Signal Processing Magazine, vol. 9,
no. 1, pp. 10-13, January 1992.

Problems
4.1 Sketch the frequency response of the systerhs de- 4.4 Determine the differenrial- or difference-equa-
scri bed by the following impulse responses. tion descriptions for the systems with the fol-
Characterize each system as \owpass, bandpass, lowing impulse rcsponses:
or híghpass. (a) h(t) = ¾e ··t1ªu(t)
(a) h(t) = 8(t) - 2e- 2 tu(t) (b) h(t) = 2e 21u(t) - 2te- 21u(t)
(b) h(t) = 4e 21 cos(20t)u(t)
(e) h[nl = a u[n], lal < 1
11

(e) h[n] = ~(~) u[n]11

(d) h[nl = ô[n] + 2(½)''u[nl + (-1l2) u[n] 11

(d) h[n] = (-1)'1, lnl ~ ~o 4.5 Determine the differential- or difference-equa-


O, otherw1se tion descriptíons for the systems with the fol-
4.2 Find the frequency response and rhe impulse re- lowing frequency responses:
sponse of the systems havíng the output y(t) or H(. ) =2 + 3j(JJ - 3(j(JJ)
2
( )
yln] for the input x(t) or x[n]. ª /(JJ 1 + 2jw
(a) x(t) = e-tu(t), y(t) = e- 2 tu(t) + e- 3 tu(t)
1 - iw
(b) x(t) = e- 1 ru(t), y(t) = e 3 (t- 2 )u(t - 2) {b) H(j(JJ) = -w2 - 4
(e) x(t) = e 2 ti-t(t), y(t) = 2te- 2 tu(t)
1 +.
(d) x[n] = (~)nuf n], y[n] = ¼(1)"uln] + (¼) u[nl
11
(e) H(jw) =+ 2)(:: + 1}
(jw
(e) x[n] = (~)nt,t[n],
. 1 + e-i!l
y[n] = (¼)11ii[n] - (!)"- 1 u[n - 1] (d) H(e'n) = e. ;211 + 3
4.3 Determine the frequency response and the i1n-
-;íl
pulse response for the systems described by rhe ifl - e
follo\vÍng differential and difference equations: (e) H(e ) - 1 + (1 - ½e- ;11)(1 + ¼e-;íl)
d 4.6 Determine the frequency response, impulse re-
(a) dt y(t) + 3y(t) = x(t)
sponse, and differenríal-equation descriptions
d2 d d • for the continuous-time systcms described by
(b) dt 2 y(t) + 5 dt y(t} + 6y(t) = -dtx(t) the following state-variable descriptions:
(e) y[nl - ¾y[n - 1] - ½y[n - 2] = -2 O b = O
(a) A=
3x[n] - ¾x[n - 11 O -1 ' 2 '
(d) y[n] + fy[n - 1J = x[n] - 2x[n - 1] e= [1 1J, D = [O]

1 2 1 (b) x(t) = Ll=ol(-1 )k/2k] cos((2k + l)1Tt)


(b) A = -3 -4 ' b = 2 ' (e) x(t) = lsin(1Tt)I
e = lO lJ, D = lO] (d) x(t) as depicted in Fig. P4.9(a)
4.7 Determine the frequency response, impulse re- (e) x(t} as depicted in Fig. P4.9(b)
sponse, and difference-equation descriptions for 4.10 Find the DTFTrepresentations for the following
the discrete-time systems described by the fol- periodíc signals. Sketch the magnitude and
lowing state-variable descriptions: phase specrra.
--1 1 o (a) x[n] = cos( 1Tnl4) + sin( 1Tn!S)
(a) A = 02 t ' b = , e = 11 O],
-4 1 (b) x[n] = 1 + 2::=-00 cos( mn/2)ô[n - m]
D= {1 t (e) x[n 1as depicted in Fig. P4.1 O(a)
-4l -43 1 (d) x[nJ as depicted in Fig. P4.10(b)
b
()A= b= , e = [O 1],
-4 --4
1 1,
1 (e) x[n] as depicted in Fig. P4.10(c)
D= [O]
x[n]
4.8 A co11ti11uous-time system is described by the
state-variable description ) ) , l! '

-1 o o ••• ...
A= • ' n
o -3 ' b = 2 ' -8
'
-4 4
'
8
'
12
e= ro 1], D == [OI (a)
Transform the state vector associated with this
x[n]
system using che matrix
l
1 -1
T= •••
-8 8 • ••
1 1 , . '
n
'

to find a new state-variable description for the -4 4


system. Show that the frequency responses of - ,-
-1
the original and transformed systems are equal.
(b)
4.9 find the FT representations for the following
periodic signals. Sketch the magnitude and xfnJ
phase spectra.
1
(a) x{t) = 2 sin( 1Tt) + cos(21Tt)
••• •••

' n
x(t) '
-8 -4 4 8
3 "
~'
"
'

•••
- .__

l
t-
1

....
- - •••
-1
(e)

FIGURE P4. l O
''
'
·-
'
''
' t
-3 -2 -1 l 2 3 4 5 6
4.11 A LTI system has impulse response
(a)
sin( 1Tt)
x(t) h(t) = 2 - - cos(4m)
1Tt
4
... Use the FT to determine the system output if the
-9 -1 7
••• . .
input 1s:
'
-7 -3 1 9 (a) x(t) = 1 + cos( 1Tt) + sin{41Tt)
-4 ''''' (b) x(t) = L:=-ocô(t - m)
(e) x(t) as depicted in Fig. P4.11(a)
(b)
(d) x(t) as depicted in Fig. P4.11(b)
FIG\.JRE P4.9 (e) x(t) as depicted in Fig. P4.11(c}

x(t) (iii) Find the range for the time constant RC


11 such that rhe first harmonic of the rip-
' ple in y(t) is less than 1 °/o of the average
••• - -l ' -1 -3 •••
2 2; ;
2 value.
' ' '
' ; ; ; ' t
'
-l -7 1
4. 13 Consider the syste1n depicted in Fig. P4. l 3 (a).
--8l -l
The FT of the input signal is depicted i11 Fig.
-l
- - 8 8
FT FT
(a) 4.13(b). Ler g(t) G(jw) and y(t) ~
+------,,~
Y(jw). Sketch G(jw) and Y(jw) for the following
x(t)
cases:
2 sin(61rt)
(a) w(t) = cos(51rt) and h(t) = - - -
•• • ••• 1rt
sin(51rt)
(b) w(t) = cos(51rt) and h(t) = - - -
1rt
-2
(e) w(t) depicted in Fig. P4.13(c) and
(b) sin(21rt) (S )
h (t ) = - - - cos 7ft
1Tt
x(t)
! e-1
1
... ... X(jw)

; • t.
x(t}
)1,. ~ g(t~
h(t) --~ • y(t)
-2 -1 1

(e)
2 3
t
w(t)
t
cos 571't
-71' 1T'
FIGURE P4.1 l
(a) (b)

w(t)

4.12 We may design a de power supply by cascading 5 1


a full-wave rectifier and an RC circuit as de- ;

picted in Fig. P4.12. The full wave rectifier out- ... ! ...

put is given by · ' '' ' ' ' 1
t
1
' 1
' ' '
-3 -2 -1 l 2 3 4
g(t) = lx(t) 1 ..
-5
Let H(jw) = Y(jw)IG(jw) be the frequency re-
sponse of the RC circuit. (e)

(a) Show that FIGlJRE P4.13


1
H(jw) = jwRC +1
(b) Suppose the input is x(t) = cos(120m}. 4.14 Consider the systen1 depicted in Fig. P4.14. The
impulse response h(t) is given by
(i) Find the FT representation for g(t).
(ii) Find the FT representation for y(t). sin( 101rt)
h( t) = - - -
1Tt
and we have
oc 1
+ R +
x(t) =~
1
k cos{k41rt)
x(t} + g(t) y(t)
rectifier 10

g(t) = L cos(k81rt)
k=l

FIGURE P4. l 2 Use the FT to determine y(t).



x(t) ••
---i h(t) _.....,.. y[n]
,.
7 'v .. ' .,.(t ;~•_....,. y(t)
_J A - ~ . ; . ~~i
....·
•• FIGURE P4. l 6
g(t) ---i h(t)
. -
FIGURE P4.14
4.17 Determine and sketch the FT representation,
X;;(iw), for the following discrete-time signals.
The sampling interval '!J is given for each case.
4.15 The input to a discrete-time system is given by
, 7i
s1n n
1T • 31T 4
x [ n] = cos n + s1n n (a) x[n] = - - - , 2T = 1
4 4 1rn

Use the DTFT to find the output of the system, , 1T


y[n], if the impt1lse response is given by s1n n
4
l
. 1T
(b) x[nl = - - - , '!J -- 4
1Tn
SIIl ·- n
2
(a) h[nJ = --- • 1T
s1n - n
1T 4
(e) xln 1 = cos n 1rn , '!J =2
.
Slil -
7T
n 2
2 (d) x[n] depicted in Fig. P4.10(a) with '!J =4
(b) h[n] = (-1 )" - - -
(e) x[n] = "Z;=-ocó[n - 4p], Y = ½
• 7T 4.18 Consider sarnpling rhe signal x(t) =
s1n - n
7T 8 (1/1rt) si11(1rt).
(e) h[nl = cos n
2 1T11 (a) Sketch the FT of the san1pled signal for the
4.16 Co11sider the discrete-time system depicted in
following samplíng intervals:
Fig. P4.16. Assume h[n] = [sin(1rn/2)]/7Tn. Use (i) 3 =¼
the DTFT to determine the output, ylnl, for che (ii) 3 =1
following cases. Also sketch G(ei11 ), the DTFT
of g[n]. (iíi) 3 = 1
. ) =
( IV j
4
=}
. 1T
Sl11 - n (h) Let x[n] = x(n~). Sketch the DTFT of xlnl,
2
(a) x[n] = - - - , w[nl = (-1) 11 X(eiº), for each of the sampling intervals
1i11 given in (a).
, 1T 4.19 The co11tinuous-ti1ne signal x(t) with FT as de-
s1n - n picted in Fig. P4.19 is sampled.
2
(h) xf nl = S[n] - - - - , w[n] = (-1)" (a) Sketch the FT of the sampled signal for the
1Tn
followíng sampling intervals. Identify in

s1n - n
1T each case if alíasing occurs.
2 7i (i) '!J= /.s
(e) x[n] =---, w[n] = cos 11
1Tn 2 (ii) 3 = i15
7i 377' (iil) 3 = ½
(d) x[n] = 1 + sin -n + 2 cos n ,
8 4 (b) Let x[n] = x(n2T). Sketch the DTFT of x[n],
w[n] = cos
1T
n X(eií 1), for each of the sampling intervals
2 given in (a).

X(jw)

1-
'
-l l1r 111T
-------------+---------,
-I01r\
,.__~•
/ I01r
,--- 1
w

-91T 91T
-1 --

FIGVRE P4. l 9

1
--. •••
-~t-----"ft--......,---+------+---'íF-___,........_ _+-~ n
51T -1T 31T 31T 1T 51r 2'1T
4 - 4 4 4
-1 .....

FIGURE P4.2 l

*4.20 Consider subsampling the signal x[nl = *4.23 A continuous-tin1e signal lies in the frequency
[sin(1rn/6)]/1rn so that ylnl = x[qn]. Sketch band Iwl < 51r. Thís signal is contaminated by
Y(e;11 ) for the following choíces of q: a large sinusoidal signal of frequency 1201r. The
(a) q = 2 contaminated signal is sampled at a sampling
(b) q = 4 rate of W 5 = 131r.
(e) q == 8 (a) After satnpling, at what frequency does the
sinusoidal interfering signal appear?
*4.21 The discrete-time signal x[n] with DTFT de-
picted in Fig. P4.21 is subsampled to obtain (b) The contaminated signal is passed through
y[n] = x[qn]. Sketch Y(ein) for the following an anti-aliasing filter consisting of the RC
choices of q: circuit depicted in Fig. P4.23. Find the value
of the time constant RC required so that the
(a) q = 3
contaminated sinusoid is attenuated by a
(b) q == 4 factor of 1000 prior to sampling.
(e) q = 8 (e) Sketch the magnitude response in dB thar
4.22 The signals below are sampled with sampling the anti-aliasing filter presents to the signal
interval <!f. Determine the bounds on 9J that of interest for the value of R C identified
guarantee there wíll be no aliasing. in (b).
(a) x(t) = (1/t) sin 1rt + cos(21rt)
(b) x(t) = cos( 101rt)[sin( 1rt)]/2t
(e) x(t) = e- 4 tu(t) * lsin( Wt)]/1rt + R +
(d) x(t) = w(t)z(t)t where the FTs W(jw) and Input e:~ Oulput
Z(jw) are depicted in Fig. P4.22

W(jw) Z(íú)) FIGURE P4.23

4.24 Consider the system depicted ín Fig. P4.24. As-


_....,__ _-+-_ _....,__ w ----1t:....._-4--~- w sume lX(iw) 1= O for Iwl > Wm. Find the largest
-wa 5?T value of T such that x(t) can be reconstructed
from y(t). Determine a system that will perform
FIGURE P4.22 the reconstruction for this maximum value of T.

x(t) • .~ y(t) versíon of x{t) and determine the relation-


ship betvveen the scaling factor and 21.
t w(t)
4.28 A reconstruction system consists of a zero-order
liold followed by a contir1uous-cime anti-í111ag-
ing filter wíth frequency response Hc(jw). The
w(t)
original signal x(t) is bandlimited to w,.,, that is,
1 .. X(júJ) = O for I wl > w,n, and is sampled with a
••• ... sampling interval of Y. Determine the con-
straints on the magnitude response of the anti-
<
'
t
-T T T 2T
'
' 3T imaging filter so that rhe overall magnitude re-
-
2 sponse of this reconstruction system is l1etween
0.99 and 1.01 in the signal passband and less
FIGURE P4.24 than 10- 4 to the images of the signal spectrum
for the following values:
(a) Wm = 1Ü7r, 2J = 0.08
4.25 Ler I X(jw) 1 = O for I wl > w,,,. Form the signal (b) W 111 = 1Ü?T, 2J = 0.05
y(t) .,-c(t}[cos(21rt) + sin(101rt)J. Determjne
=
the maximum value of wm for which x(t) can be (e) W,n = 101r, 2J = 0.01
reconstructed from y(t) and specify a system (d) úJ,11 = 21r, 2J = 0.08
that \Vill perform rhe reconstruction. ,., 4.29 ln this problem we reconstruct a signal x( t) from
*4.26 A bandlimited signal x(t) satisfies I X(jw) 1 = O its samples x[n] = x(n:íf) using pulses of width
I
for wl < w, and Iwl
> w 2 • Assume w 1 > w1 - w 1 • less than 2J followed by an anti-ímagir1g filter
ln this case we can sample x(t) ar a rate less than with frequency response Hc:(iú)). Specifically, we
that indicated by the sampling theorem and still apply
perform perfect reconstruction by using a 00

bandpass reconstruction filter Hr(jw). Let xp(t) = L, x[n]hp(t - n2T)


xln] = x(n5'). Determine the maximum san1- n~-x

pling interval 2J such that x(t) can be perfectly to the anti-ímaging filter, where hp(t) is a pulse
reconstructed from xínJ. Sketch the frequer1cy of width T 0 as depicted ín Fig. P4.29(a). An ex-
response of the reconstrucrion filter required for ampJe of Xp(t) is depicted in Fig. P4.29{l">). De-
this case. termine the constraints on I Hc(iw) 1 so that the
"'4.27 Suppose a períodic sígnal x(t) has FS coefficients overall magnitude response of this reconstruc-
tion system is hetween 0.99 and 1.01 in the síg-
(¾)k, lkl < 4 nal passband ar1d less than 10- 4 to rhe images
X[k] =
O, otherwise of the signal spectrum for the following values.
Assume x(t) is bandlimíted to 101r, that is,
The period of this signal is T = l.
X(jw) = O for Iwl > 101r.
(a) Determine the minimum sampling interval
(a) ?J = 0.08, T 0 = 0.04
for thís signal that will prevent aliasing.
(b) ~ = 0.08, T 0 = 0.02
(b) The constraints of the sampling theorem
can be relaxed somewhat in the case of pe- (e) ~ = 0.04, T 0 = 0.02
riodic signals if we allow the reconstructed (d) 2J = 0.04, T 0 = 0.01
signal to be a time-scaled versioi:1 of the orig-
inal. Suppose we choosc a sampling interval
'3° = ~ and use a reconstruction filter
t, lwl<1r
O, otherwise
• •• -'!! 3'!! •• •
Show that the reconstructed signal is a time- ' '
t
scaled version of x(t) and idenrífy the scal- Tº y.f 2q
1
ing factor. '

(e) Find the constraints on the sampling inter-


(a) (b)
val 5" so that use of Hr(jw) in (b) results in
the reconstruction filter being a time-scaled FIGURE P4.29

4.30 The zero-order hold produces a stairstep ap- bandlimited to 101r, that is, X(jw) = O
proximation to the sampled signal x(t) from for Iwl > 107T.
samples x[1zl = x(n9T). A <levice termed a first- (i) 91 = 0.08
order hold línearly interpolates between the (ii) '?J = 0.04
san1ples x[11] and thus produces a smoother ap- 1
proximation to x(t). Thc output of the first-
~4.31 A nonideal sampling operation obtains x[n]
from x(t) as
order hold may be described as

f:
3

x[n] = :i x(t) dt
11

where h 1(t) is the triangular pulse shown in Fig. (a) Show that this can be written as ideal sam-
P4.30(a). The relatíonshíp between x[n] and pling of a filtered signal y(t) = x(t) * h(t),
x 1 (t) is depicted in f'ig. P4.30(b). that is, x[n] = y(n:?l), and find h(t).
(a) Identify the distortions íntroduced by the (b) Express the FT of x[n] in terms of X(jw),
first-order hold and compare them to H(jw), and '?J.
those introduced bv , the zero-order hold. (e) Assume that x(t) is bandlimited to the fre-
Hint: h,(t) = ho(t) * ho(t). quency range Iwl < 31r/(4;f). Determine the
(b) Considera reconstruction system consisting frequency response of a discrete-time system
of a first-order hold followed by an anti- that will correcr the distortion ín x[n] intro-
imagíng filter with freqt1ency response duced by nonideal sampling.
H.-(iw). Find Hc(iw) so that perfect recon- '~4.32 The system depicted in Fig. P4.32(a) converrs a
structio11 is obtained. concinuous-time sígnal x(t) to a discrete-time
(e) Determine the co11straints on IHc::(jw) 1so signal y[n]. We have
rhat the overall n1agnitude response of this 1, 1n1 < 1r/4
reconstruction system is between 0.99 and
O, orherwise
1.01 in the signal passband and less than
10-- 4 to the images of the signal spectrum Find the sampling frequency W 5 = 2 'TTl:f and the
for the following values. Assume x(t) is constraínts on the anti-aliasing filter frequency
response Ha(iw) so that an input signal wirh FT
X(jw) shown in Fig. P4.32(b) results i11 the out-
put sígnal with DTFT Y(ei!1).

·'-',,(!)
-···---
Sample at
x[nJ =
x,/n?[) :o
• •...
x(t)- . :, ), ws = 21r/(ff
---i • H(e'"'.") -. y[n]
.~ ,·

(a) (a)

x[nJ X(jw)
1

••• • ••
_.....___.__..___._______.__..L-__.___ _ _ _ n ---------f~-l--~·--+---l--4-- w
-1 1 2 3 -TT 1T 21T

•••

-27T 21r
--'1T4 1T
4
(b) (b)

FIGlJRE P4.30 FIGURE P4.32



... • ••

-'IT' '1T' 1T
6 3

FIGlJRE P4.33

*4.33 Determine the maximum factor q by which a depicted in Fig. P4.35. Determine the minimum
x[n] with DTFT X(eiº) depicted in Fig. P4.33 value of !lp and maximum value of Os so that
can be decimated without aliasing. Sketch the ideal interpolatíon is obtained for the following
DTFT of the sequence that results when x[nl is cases. Also sketch the DTFT of the ínterpolared
decimared by this amounr. signal.
::•4.34 The discrete-tin1e signal x[nl with DTFT X(eiíl) (a) q = 2, W= w/2
shown in Fig. P4.34(a) is decimated by first (b) q = 2, W = 37T/4
passing x[n] through the filter with frequency (e} q = 3, W = 3 7T/4
response H(ei!l) shown in Fig. P4.34(b) and then
subsampling by a factor of q. For the follo\.ving
values of q and W, determine the minimum
value of OP and maximum value of íls so that
the subsampling operation does not change the
shape of the portion of X(ei11 ) on lf11 < W.
... . ..
Sketch the DTFT of the subsampled signal.
(a) q = 2, W = 7r/3
(b) q = 2, W = 7r/4
-21T -W w
(e) q = 3, W = 7T/4 FIGURE P4.3 5

'~4.36 Consider interpolating a signal x[n] by repeat-


ing each value q times as depícted in Fig. P4.36.
That is, we define x [n] = x[floor(n/q)], where
0
... ... floor(w) is the largest integer less than or equal
~--+--...____ _ ___._--+__,,_ _ _ __..__ __.__ _ n to w. Letting Xz[n] be derived from x[n] by in-
-21T -W W serting q - l zeros between each value of x[nj,
(a) that is,

x [n] = x[n/q], nlq integer


z O, otherwise
l
we may write x 0 [n] = Xz [n] * h0 [n], where
... •••
1, O ::::; n ==:; q - 1
-21T -0.5 -{Lp flp ~ls O, otherwise
(b) Note that this is the discrete-tíme analog of the
zero-order hold. The inrerpolation process is
FIGURE P4.34 completed by passing x [n] through a filter with
0

frequency response H(e;n).


*4.35 A signal x[nl is interpolated by a factor of q by (a) Express X (ei1l) in terms of X(e;n)
0

first inserting q - l zeros between each sample and H 0 (eiº). Sketch I X 0 (ei0 ) 1 if xlnl =
and next passing the zero-stuffed sequence 1sin( 3'Trnl4) ]/( wn ).
through a filter with frequency response H(eifl) (6) Assume X(ei11 ) is as shown in Fig. P4.35.
depicted in Fig. P4.34(b). The DTFT of x[n] is Specífy the constraínts on H(eifl) so that

xínl the frequency response magnitude of an


1 equivalent continuous-time sysrem for the fol-
' lowing cases:
(a) 0 1 = Trl4,
: ;
(b) 0 1 = 3Trl4,
-1 l 2 3
o (e) n1
= Trl4, WC = 271'
'~4.38 The system shown in Fig. P4.38 is used to imple-

·· Repeat each•
i ment a bandpass filter. The discrete-time filter
H(ei!l) has frequency response on -Tr < n < Tr
vaJue q ·n 1, Ü,1 < 1fll < fib
~" times 1• H(e' ) =
O, otherwise
Find the sampling interval '?I, fia, Ob, W 1 , W2 ,
W 3 , and W4 , so that the equivalent continu-
ous-time frequency response G(jw) satisfies
0.9 < 1 G(jw) 1 < 1.1, for 10071' < w < 20071'
' '
G(jw) = O elsewhere
)
'
' ' '
n
}
-3-2 -1 >
! 1 2 3 4 5 6 ln solving this problem, choose W1 and W_~ as
o small as possible and choose rzf, W2 , and W4 as
large as possible.
FIGURE P4.36
4.39 Let X(ei11 ) = sin( 1 lü/2)/sin(ü/2) and define
X[k] = X(e;kn.,). Find and sketch x[n], where
ideal interpolarion is obtai11ed for the fol-
DTFS; !!,, - f
lowing cases: x[n] - - - - Xlkl or the following values
(i) q = 2, W = 3Trl4 of nu:
(ii) q = 4, W = 3Trl4 (a) !10= 2Tr/15
4.37 A discrete-time system for processing co11tinu- (b) íl = TrllO
0

ous-rime signals is shown in Fig. P4.37. Sketch (e) fl = rr/3


0

x(t) ..
~t. ·:~~·

Ha(iú>)
i::~1;

)1,
Sample
fJ.,, .....=,... 0.05
no

.- .itlllli'l: l ru!ij!III
H(efll)
__
..


--
Zero-order
hold
.. ..........H,(jw) • y(t)
. ' ... ....
-· _,.,....... "··.
,,.
.,....... -···- ,.._ ..,,,,_,.. ,,

!
11 1

_ ..._- · • - - - - - w ~--,.._--1--_ _..,_...,.:_ (t)

-IO?T 10-rr 1T -We


FIGURE P4.3 7

x(t) - •• Ha(j'w) .-)li • San~~le - • H(e ;n) -•• Zero-order


hold
__
.,,
He (Jw.
• ) _,,,... y (t )
at j

l 1

---f~-f----+---+------- w ~~-i----+----+--~-w

FIGURE P4.38

4.40 Ler X(jw) = [sin(2w)J/w and define X[k] = where


X(jkw 0
). Find and sketch .x(t), where ;n _ 0.02426(1 - e-iíl) 2
Hi(e ) - (1 + 1.10416e-i11 + 0.4019e-i2.r1)
x(t) - - - Xf k ] f or rhe fo 11 owing values o f
1-s; w,, -
. (1 - e-;n)2
Wo: H (e'n} = - - - - - - - - - - -2-
2 (1 + 0.56616e-;n + 0.7657e-; n)
(a) w0 = rr/8
(b) w 0 = 7Tl4
4 .44 Use the MATLAB command f r e q r e s p to plot
(e} w,) = 'TT/2 the magnitude and phase response for the sys-
"'"4.41 The continuous-time representation for a peri- tems with state-variable descriptions given in
D7'fS; 21r!N Problems 4.6 and 4.7.
odic discrete-time signal x[ n1 - - - - - X[ k l
4.45 Repeat Example 4.11 usíng zero-padding a11d
is periodic and thus has a FS representation.
the MATLAB commands f f t and f f t s h i f t
This FS representation is a function of the DTFS
to sample and pior Y(ei!l) at 512 poinrs on
coefficients Xfk], as we show in this problem.
-1r :s- !l < 1r for each case.
The result establishes the relationshíp between
the FS and DTFS representations. Let x[n] have 4.46 The rectangular wíndow ís defined as
period N and let x 8 (t} = L:--cx,x[n]5(t - n~). 1, O::; n < M
w,[n] =
(a) Show x 8 (t) is periodic and find the period, T. O, otherwise
(b) Begín with the definition of the FS We may truncate a signal to the interval O < n
coefficients :5 M by multiplying the signal with w[n]. ln the

Xs[k] = -1
T
f <T}
x 6 (t)e-
. .dt
ikw,,t
frequency domain we convolve the DTFT of the
signal with
. fi(M + 1)
Substitute for T, w 0 , and one períod of x 8 (t) s1n
2
to show W,(ei!l) = e-J(A1121n _ _ _ _ __

n
Xslk] = 'j 1X[kJ •
Slll
2
The effect of this convolution is to smear derail
4.42 A signal x(t) is sampled ar intervals of ~ = 0.01.
and introduce ripple in the vicinity of disconti-
One hundred samples are collected and a 200-
nuities. The smearing is proporcional to the
poínt DTFS is taken in an atcempt to approxí-
mainlobe width, while the ripple is proportional
mate X(jw). Assume IX(jw} 1 = O for I wl >
to the size of the sidelobes. A variety of alter-
1201r. Determine the frequency range -wª < w
native windows are used in pracrice to reduce
< wª over which the DTFS offers a ieasonable
sidelobe height in return for increased mainlobe
approximation to X(jw), the effective resolution
width. ln this problem we evaluate the effect of
of this approximation, wr, and the frequency in-
windowing rjme-domain signals on their DTFT.
terval between each DTFS coefficient~ 6-w.
The role of windowing in filter design is ex-
plored ín Chapter 8.

• Computer Experiments
The Hanning window is defined as
0.5 - 0.5 cos(21rn/M), O< n :5 M
W1, [n ] =
4.43 Use the 11ATLAB command f reqs or f reqz O, otherwise
to pior che magnitude response of rhe followíng (a) Assume M = 50 and use the MATLAB com-
systems. Determine whether the system has a mand f f t to evaluate the magnitude spec-
lowpass, highpass, or bandpass characteristic. trum of the rectangular window in dB at fre-
8 quency intervals of 'TT/50, rr/100, and 1r/200.
(a) H(jw) = (jw} 3 + 4(jw} 2 + Bjw + 8 (b) Assume M = 50 and use the MATLAB com-
. (jw)J . mand f f t to evaluate the magnitude spec-
(b) H(7w) = (jw)3 + 2(jw)2 + 2jw +1 trum of the Hanning window in dB at fre-
quency intervals of 1r/50, rr/100, and 7T/200.
. 1 + 3e-;n + 3e-;2n + e-;3!l
(e) H(e'11) = 6 + 2e-;211 (e) Use the results from (a) and (b) to evaluate
the mainlobe width and peak sidelobe
(d) H(eiíl) = H1 (ein) H2(e;f1) height in dB for each window.

{d) Let Yr[n] = x[n]wr[n] and y,,[n] 4.50 Use the MATLAB comma11d f f t to repcat Ex-
x[n]whlnl, where x[n] = cos{261r11/l 00) + ample 4.16.
cos(291rn/100) and M = 50. Use the 4.51 Use the MATLAB command f f t to repeat Ex-
MATLAB command f f t to evaluate ample 4.17.
1 Yr(e;n) 1 in dB and I Y1,(eiíl) 1 in dB at inter-
4.52 Use the MATLAB command f f t to repeat Ex-
vals of 1r/200. Does the window choice af- ample 4.18. Also depict the DTFS approxi1na-
fect whether you can identify the presence tion and thc underlying DTFT for M = 2001
of two sinusoids? Why? and M = 2005.
(e) Ler Yr[n] = x[n]wr[n] and yh[n] = 4.5 3 Consider the sum of sinusoids
xln1w11[n], where xlnl = cos{261rn/100} +
0.02 cos(,51 '7'n/100) and M = 50. Use the x(t) = cos(21Tt) + 2 cos(2'7'(0.8)t)
MATLAB command f f t to evaluate + ½cos(21r(l.l)t)
1 Yr(eifl) 1 in dB and I Yh(ei!l) 1 in dB at inter-

vais of '7'/200. Does the window choice af- Assume the freque11cy band of interest is
fect whether you can identify the presence -51T < w < S1r.
of two sinusoids? Why? (a) Determine the samplír1g ir1terval '!J so that
4.47 Let a discrete-time signal x[n] be defined as the DTFS approxiniation to the FT of x(t)
spans the desired freque11cy band.
e-i0,ln)2/2, lnl < 50
(b) Dctermi11e the minimu,n number of samples
x[n] =
· O, otherwise M so that the DTFS approximation con-
0

Use the MATLAB commands f f t and sists of discrete-valued impulses locared ar


f f t s h; f t to numerically evaluatc and plot the the frequency corresponding to each
DTFT of x[n] and the following subsampled sinusoid.
sígnals at 500 values of fl on the interval -1T < (e) Use MATLAB to pior (1/M)IY,5 (jw)I and
n < '7': . 1 Y[k] 1 for the value of Y chosen in pare (a)

(a) y[n] = x[ln] and M = M 0 •

(b) g[n] = x[4nl (d) Repeat part (e) using M = M + 5 and 0

M = M + 8.
4.48 Repeat Problem 4.47 assuming 0

4.54 We desire to use the DTFS to approxiniate the


cos(1Tn/2)e-(O.lnl
2 2
' , lnl < 50 FT of a continuous-time sígnal x(t) on the band
xln] = .
O, otherw1se -wª < w < Wa with resolution Wr and a maxi-
mun1 sa1npling interval in frequency of Llw. Find
4.49 A signal x(t) is defir1ed as
the sampling interval d, number of samples M,
31T - t 2l2 and DTFS length N. You may assume that the
x(t) = cos t e
2 signal is effectively bandlimited to a frequency
Wm for which IX(jwª) 1 ~ 1 OI X(jw) 1, w > w,,,.
(a) Evaluate the FT X(jw) and show that
Pior the fT and the DTfS approxitnatío11 for
IX(jw) 1 :::.< O for I wj > 31T. .
each of the followi11g cases t1sing the MATLAB
ln parts (b)-(d), we compare X(jw} to the FT of command f f t. Hínt: Be sure to sample the
the sampled signal, x[nl = x(n91), for several pulses in (a) a11d (b) symmetrically about t = O.
sampling intervals. Let x[n] +.--
X 6 (jw) be
the FT of the sampled version of x(t). Use MAT-
f1'

(a) x(t) =
1, ltl<l
h . , w,1 =
31T
,
0, ot erw1se 2
LAB to numerically determine X 8 (jw) by 31T 1T
evaluating Wr = 4 ' and ~w = 8.
25
Xõ(jw) = L xln}e-;(,,.,n
2
(b) x(t) = l/(21r) e-' 2 , w,1 = 3, Wr =
' and l,
n=-25 à ú) = ¼.
at 500 values of w on the interval (e) x(t} = cos(201Tt) + cos(211rt), w,1 = 401r,
-31r < w < 31r. ln each case, compare X(jw) wr = 1Tl3, a11d ~w = 1Tl10.
and Xõ(jw) and explain any differences .. (d) Repeat case (e) using w, = 1r/l O.
(b) ?.T = ½ 4.55 Repeat Example 4.19 using MATI,AB.
(e) 91 = ~ 4.56 Write a MATLAB M-file that in1plen1ents the
(d) 91 = ½ overlap and save method usi11g f f t to evaluate

che convolution y[n] = hlnl * xln] on O< 11 < L 4.59 ln this experiment \.ve ínvestigate cvaltiation of
for the follo\ving signals: the time-bandwidth product with the DTFS.
(a) h[nl = i(u[n] - u[n - .51), FT
Let x(t) - - X(iw).
x[n] = cos( 1rn/6), T~ = 30
(a) Use the Reimann sum approximation to an
(b) h[n] = -~(ulnl - i,[n - 5]),
integral
xln J = (½)nu[n], L = 20
h 1n1,
4.57 Plor the ratio of thc nt1mber of mt1ltiplications
in the dircct 1nethod for computing the DTFS
coefficients ro that of the FFT approacl1 when
f

1
f(u) du = ,n~z
"
f(m!lu) !lu

to show that
N = 2P for p = 2, 3, 4, ... , 16. 112
2 2
4.58 ln this problem we compare the number of n1ul- f~"" t jx(t) dt 1

tiplications required to evaluate h[n] * x[n l us- Td = f""'"' lx(t)l 2 dt


1/2
ing the overlap and save algorithm to that re- ~A~- n lx(n]l 2 2

quired for direct evaluation of the convolution :::::: j n~ A1 1 12


L,,
-A-1 x[n]
sum when the impulse respo11sc /::,[n] = O for
n < O, n > M. ln ordcr to maximize the com- provided x[nl = x(n5') represents the sam-
puratil)nal efficiency of the overlap and save al- ples of x(t) a11d x(nY) = O for n > M. I 1

gorithm, a FFT algorithm is used to evaluate the (l)) Use the l)TFS approximatÍ<.)n to the FT and
DTfS in step 4 and inverse DTt·s ín step 6. the Reiman11 su1n approxímation to a11 in-
(a) Show that M multiplicatio11s are required tegral to show that
per output point if the convolutio11 sum is 2
f"",., w I X(jw) 1 dw 2
112

eva lua ted dírectly. B


w
= f"',., IX(iw)l 2 dw
(b) ln countír1g mL1lriplicarions per output point 2 2 1/2
for the ovcrlap and save algorithm, we need If=-M lkl 1Xlkll
Ws
==
only consider steps 4, 5, and 6, since we as- 2M + 1 If -M IX[k] 2 1

sume Hík] is precomputed. Show that /)TFS; 2rr/(2A-1 + 1) k


2N lc>g2 (N) + N w h ere xl1z] - - - - - - X[ 1, Ws =
21r/~ is the san1pling frequency, and
N- M + 1 X(jkw5 /(2M + 1)) = O for k > M. I 1

multiplications per output poi11t are (e) Use the results from (a) and (b) and Eq.
required. (3.58) to show rhar rhe tín1e-band\vidth
(e) Let R be the ratio of multíplications re- product co1nputed using the DTFS approx-
quired for direct evaluarion of the convo- imatio11 satifies
lution sum to that for the overlap and save T,1Bu = 1

algorithm. Evalt1ate R for the followíng 112

cases: I!~-M n 2 lx[nl J2


(i) M = 10; N = 16, 256, and 1024. 2'.Z1--M lx[n] 12
(ii) M = 20; N = 32, 256, and 1024. 2M + 1
>---
(iii) M = 100; N = 128, 256, and 1024. 41T
(d) Show that for N >> M, the overlap a11d save (d) Repeat Computer Experiment 3.4,5 to dem-
algorithm requires fewer multiplications onstrate that the bound i11 (e) is satisfied and
than direct evaluation of the convolution that Gaussian pulses satisfy the bou11d with
sum if M > 2 log2 N + 1. equa1ity.
Application to
Communication Systems

5.1 Introduction
The purpose of a communication system is to transport a message signal (generated by a
source of information) over a channel and deliver an estimate of that message signal to a
user. For example, the message signal may be a speech signal. The channel may be a cellular
telephone channel or satellite channel. As mentioned in Chapter 1, modulation is basic to
the operation of a communication system. Modulation provides the means for shifting the
range of frequencies contained in the message signal into another frequency range suitable
for transmission over the channel, and a corresponding shift back to the original frequency
range after reception. Formally, modulation is defined as the process by which some char-
acteristic of a carrier wave is varied in accordance with the message signal. The message
signal is referred to as the modulating wave, and the result of the modulation process is
referred to as the modulated wave. In the receiver, demodulation is used to recover the
message signal from the modulated wave. Demodulation is the inverse of the modulation
process.
In this chapter we present an introductory treatment of modulation from a system-
theoretic viewpoint, building on Fourier analysis as discussed in the previous two chapters.
We begin the discussion with a description of the basic types of modulation, followed by
the practical benefits derived from their use. This sets the stage for a discussion of the
so-called amplitude modulation, which is widely used in practice for analog communica-
tions by virtue of its simplicity. One common application of amplitude modulation is in
radio broadcasting. We then discuss some important variants of amplitude modulation.
The counterpart of amplitude modulation that is used in digital communications is known
as pulse-amplitude modulation, which is discussed in the latter part of the chapter. In
reality, pulse-amplitude modulation is another manifestation of the sampling process that
we studied in Chapter 4.

5.2 Types of Modulation


The specific type of modulation employed in a communication system is determined by
the form of carrier wave used to perform the modulation. The two most commonly used
forms of carrier are:
• Sinusoidal wave
• Periodic pulse train
Correspondingly, we may identify two main classes of modulation as described here.

1. Continuous-wave (CW) modulation

Consider the sinusoidal carrier wave

c(t) = A_c cos(φ(t))                                        (5.1)

which is uniquely defined by the carrier amplitude A_c and angle φ(t). Depending on which
of these parameters are chosen for modulation, we may identify two subclasses of CW
modulation:
• Amplitude modulation, in which the carrier amplitude is varied with the message
signal
• Angle modulation, in which the angle of the carrier is varied with the message signal
Figure 5.1 shows examples of amplitude-modulated and angle-modulated waves for a
sinusoidal modulating wave.
Amplitude modulation can itself be implemented in several different forms. For a
given message signal, the frequency content of the modulated wave depends on the form
of amplitude modulation used. Specifically, we have:
• Full amplitude modulation (double sideband-transmitted carrier)
• Double sideband-suppressed carrier modulation

FIGURE 5.1 Amplitude- and angle-modulated waves for sinusoidal modulation. (a) Carrier wave.
(b) Sinusoidal modulating wave. (c) Amplitude-modulated wave. (d) Angle-modulated wave.
• Single sideband modulation
• Vestigial sideband modulation
The last three types of amplitude modulation are examples of linear modulation, in the
sense that if the amplitude of the message signal is scaled by a certain factor, then the
amplitude of the modulated wave is scaled by exactly the same factor. In this strict sense,
full amplitude modulation fails to meet the definition of linear modulation with respect to
the message signal for reasons that will become apparent later. Nevertheless, the departure
from linearity in the case of full amplitude modulation is of a rather mild sort, such that
many of the mathematical procedures applicable to the analysis of linear modulation may
be retained. Most importantly from our present perspective, all four different forms of
amplitude modulation mentioned here lend themselves to mathematical analysis using the
tools presented in this book. Subsequent sections of this chapter develop the details of this
analysis.
In contrast, angle modulation is a nonlinear modulation process. To describe it in a
formal manner, we need to introduce the notion of instantaneous radian frequency, de-
noted by ω_i(t). It is defined as the derivative of the angle φ(t) with respect to time t, as
shown by

ω_i(t) = dφ(t)/dt                                           (5.2)

Equivalently, we may write (ignoring the constant of integration)

φ(t) = ∫_0^t ω_i(τ) dτ                                      (5.3)

where it is assumed that the initial value

φ(0) = ∫_{-∞}^0 ω_i(τ) dτ

is zero.
Equation (5.2) includes the usual definition of radian frequency as a special case.
Consider the ordinary form of a sinusoidal wave written as

c(t) = A_c cos(ω_c t + θ)

where A_c is the amplitude, ω_c is the radian frequency, and θ is the phase. For this simple
case, the angle φ(t) is

φ(t) = ω_c t + θ

in which case the use of Eq. (5.2) yields the expected result

ω_i(t) = ω_c   for all t

Returning to the general definition of Eq. (5.2), when the instantaneous radian fre-
quency ω_i(t) is varied in accordance with a message signal denoted by m(t), we may write

ω_i(t) = ω_c + k_f m(t)                                     (5.4)

where k_f is the frequency sensitivity factor of the modulator. Hence substituting Eq. (5.4)
into (5.3), we get

φ(t) = ω_c t + k_f ∫_0^t m(τ) dτ
The resulting form of angle modulation is known as frequency modulation (FM), written
as

s_FM(t) = A_c cos(ω_c t + k_f ∫_0^t m(τ) dτ)                (5.5)

where the carrier amplitude is maintained constant.


When the angle φ(t) is varied in accordance with the message signal m(t), we may
write

φ(t) = ω_c t + k_p m(t)

where k_p is the phase sensitivity factor of the modulator. This time we have a different
form of angle modulation known as phase modulation (PM), defined by

s_PM(t) = A_c cos(ω_c t + k_p m(t))                         (5.6)

where the carrier amplitude is again maintained constant.


Although the formulas of Eqs. (5.5) and (5.6) for FM and PM signals look different,
they are in fact intimately related to each other. For the present, it suffices to say that both
of them are nonlinear functions of the message signal m(t), which makes their mathemat-
ical analysis more difficult than that of amplitude modulation. Since the primary emphasis
in this book is on a linear analysis of signals and systems, we will devote much of the
discussion in this chapter to amplitude modulation and its variants.
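To make Eqs. (5.5) and (5.6) concrete, the following MATLAB fragment is a minimal sketch that generates FM and PM waves for a sinusoidal message; the carrier frequency, message frequency, and sensitivity factors are assumed values chosen only for illustration, and the integral in Eq. (5.5) is approximated by a running sum.
>> t = 0:1e-4:0.1;                  % time axis with spacing dt = 1e-4 s
>> Ac = 1; wc = 2*pi*200;           % carrier amplitude and radian frequency
>> m = cos(2*pi*20*t);              % sinusoidal message signal m(t)
>> kf = 2*pi*50; kp = pi/2;         % frequency and phase sensitivity factors
>> sFM = Ac*cos(wc*t + kf*cumsum(m)*1e-4);   % Eq. (5.5); cumsum approximates the integral
>> sPM = Ac*cos(wc*t + kp*m);                % Eq. (5.6)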

2. Pulse modulation
Consider next a carrier wave that consists of a periodic train of narrow pulses, as shown
by

c(t) = Σ_{n=-∞}^{∞} p(t - nT)

where T is the period, and p(t) denotes a pulse of relatively short duration (compared to
the period T) and centered on the origin. When some characteristic parameter of p(t) is

FIGURE 5.2 Pulse-amplitude modulation. (a) Train of rectangular pulses as the carrier wave.
(b) Sinusoidal modulating wave. (c) Pulse-amplitude modulated wave.
varied in accordance with the message signal, we have pulse modulation. Figure 5.2 shows
an example of pulse-amplitude modulation for the case of a sinusoidal modulating wave.
Depending on how pulse modulation is actually accomplished, we may distinguish
the following two subclasses:
• Analog pulse modulation, in which a characteristic parameter such as the amplitude,
duration, or position of a pulse is varied continuously with the message signal. We
thus speak of pulse-amplitude modulation, pulse-duration modulation, and pulse-
position modulation as different realizations of analog pulse modulation. This type
of pulse modulation may be viewed as the counterpart of CW modulation.
• Digital pulse modulation, in which the modulated signal is represented in coded
form. This representation can be accomplished in a number of different ways. The
standard method involves two operations. First, the amplitude of each modulated
pulse is approximated by the nearest member of a set of discrete levels that occupies
a compatible range of values. This operation is called quantization, and the device
for performing it is called a quantizer. Second, the quantizer output is coded (e.g.,
in binary form). This particular form of digital pulse modulation is known as
pulse-code modulation (PCM). Quantization is a nonlinear process that results in a
loss of information, but the loss is under the designer's control in that it can be made
as small as desired simply by using a large enough number of discrete (quantization)
levels. In any event, PCM has no CW counterpart. As with angle modulation, a
complete discussion of PCM is beyond the scope of this book. Insofar as pulse mod-
ulation is concerned, the primary emphasis in this chapter is on pulse-amplitude
modulation, which is a linear process; a brief illustrative sketch follows.
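The following fragment is a rough sketch of flat-top pulse-amplitude modulation of a sinusoidal message, in the spirit of Fig. 5.2; the pulse period, pulse width, and message frequency are assumed values used only for illustration.
>> t = 0:1e-5:0.02;                 % fine time grid
>> m = sin(2*pi*100*t);             % sinusoidal modulating wave
>> Ts = 1e-3; tau = 2e-4;           % pulse period and pulse width (tau << Ts)
>> s = zeros(size(t));
>> for k = 0:floor(t(end)/Ts)
     idx = t >= k*Ts & t < k*Ts + tau;   % extent of the kth pulse
     s(idx) = sin(2*pi*100*k*Ts);        % pulse height equals the message sample m(kTs)
   end
>> plot(t,m,'--',t,s)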

5.3 Benefits of Modulation
The use of modulation is not confined exclusively to communication systems. Rather,
modulation in one form or another is used in signal processing, radiotelemetry, radar,
sonar, control systems, and general-purpose instruments such as spectrum analyzers and
frequency synthesizers. However, it is in the study of communication systems that we find
modulation playing a dominant role.
In the context of communication systems, we may identify three practical benefits
that result from the use of modulation:

1. Modulation is used to shift the spectral content of a message signal so that it lies inside
the operating frequency band of a communication channel
Consider, for example, telephonic communication over a cellular radio channel. For such
an application, the frequency components of a speech signal from about 300 to 3100 Hz
are considered to be adequate for the purpose of communication. In North America, the
band of frequencies assigned to cellular radio systems is 800-900 MHz. The subband
824-849 MHz is used to receive signals from the mobile units, and the subband 869-894
MHz is used for transmitting signals to the mobile units. For this form of telephonic
communication to be feasible, we clearly need to do two things: shift the essential spectral
content of a speech signal so that it lies inside the prescribed subband for transmission,
and shift it back to its original frequency band on reception. The first of these two oper-
ations is one of modulation, and the second is one of demodulation.

2. Modulation provides a mechanism for putting the information content of a message
signal into a form that may be less vulnerable to noise or interference
In a communication system the received signal is ordinarily corrupted by noise generated
at the front end of the receiver or by interference picked up in the course of transmission.
Some specific forms of modulation such as frequency modulation and pulse-code modu-
lation have the inherent ability to trade off increased transmission bandwidth for improved
system performance in the presence of noise. We are careful here to say that this important
property is not shared by all modulation techniques. In particular, those modulation tech-
niques that vary the amplitude of a CW or pulsed carrier provide absolutely no protection
against noise or interference in the received signal.
3. Modulation permits the use of multiplexing
A communication channel (e.g., telephone channel, mobile radio channel, satellite com-
munications channel) represents a major capital investment and must therefore be de-
ployed in a cost-effective manner. Multiplexing is a signal-processing operation that makes
this possible. In particular, it permits the simultaneous transmission of information-bearing
signals from a number of independent sources over the channel and on to their respective
destinations. It can take the form of frequency-division multiplexing for use with CW
modulation techniques, or time-division multiplexing for use with digital pulse modulation
techniques.
In this chapter we will discuss the frequency-shifting and multiplexing aspects of
modulation. However, a study of the issues relating to noise in modulation systems is
beyond the scope of this book.

5.4 Full Amplitude Modulation

Consider a sinusoidal carrier wave c(t) defined by

c(t) = A_c cos(ω_c t)                                       (5.7)

For convenience of presentation, we have assumed that the phase of the carrier wave is
zero in Eq. (5.7). We are justified in making this assumption as the primary emphasis here
is on variations imposed on the carrier amplitude. Let m(t) denote a message signal of
interest. Amplitude modulation (AM) is defined as a process in which the amplitude of the
carrier is varied proportionately to a message signal m(t), as shown by

s(t) = A_c[1 + k_a m(t)] cos(ω_c t)                         (5.8)

where k_a is a constant called the amplitude sensitivity factor of the modulator. The mod-
ulated wave s(t) so defined is said to be a "full" AM wave for reasons explained later in
the section. Note that the radian frequency ω_c of the carrier is maintained constant.
The amplitude of the time function multiplying cos(ω_c t) in Eq. (5.8) is called the
envelope of the AM wave s(t). Using a(t) to denote this envelope, we may thus write

a(t) = A_c |1 + k_a m(t)|                                   (5.9)
Two cases arise, depending on the magnitude of k_a m(t), compared to unity:
1. |k_a m(t)| ≤ 1 for all t
Under this condition, the term 1 + k_a m(t) is always nonnegative. We may therefore
simplify the expression for the envelope of the AM wave by writing
a(t) = A_c[1 + k_a m(t)]   for all t                        (5.10)
2. |k_a m(t)| > 1 for some t
Under this second condition, we must use Eq. (5.9) for evaluating the envelope of
the AM wave.

The maximum absolute value of k_a m(t) multiplied by 100 is referred to as the percentage
modulation. Accordingly, case 1 corresponds to a percentage modulation less than or equal
to 100%, whereas case 2 corresponds to a percentage modulation in excess of 100%.
The waveforms of Fig. 5.3 illustrate the amplitude modulation process. Part (a) of
the figure depicts the waveform of a message signal m(t). Part (b) of the figure depicts an
AM wave produced by this message signal for a value of k_a for which the percentage
modulation is 66.7% (i.e., case 1).
On the other hand, the AM wave shown in Fig. 5.3(c) corresponds to a value of k_a
for which the percentage modulation is 166.7% (i.e., case 2). Comparing the waveforms
of these two AM waves with that of the message signal, we draw an important conclusion.
Specifically, the envelope of the AM wave has a waveform that bears a one-to-one corre-
spondence with that of the message signal if and only if the percentage modulation is less
than or equal to 100%. This correspondence is destroyed if the percentage modulation is
permitted to exceed 100%. In the latter case, the modulated wave is said to suffer from
envelope distortion, and the wave itself is said to be overmodulated.
• Drill Problem 5.1 For 100% modulation, the envelope a(t) becomes zero for some
time t. Why?

Answer: If k_a m(t) = -1 for some time t, then a(t) = 0.                              •


s(t)
m(t)
.,. / - - --
~- -..... -

(a)
(b)
s(t)

/
,,. -
/

-- \ Phase reversais

I
I ' -
/
/

(e)

FIGURE 5.3 Amplitt1de modu1ation for varying percentage of modu1ation. (a) i\!lessage signal
m(t ). (b) Al\,'l \Vave fc)r I k.,»i(t) 1 < I for a]] t, \vhere k,, is the amplitude sensitivity of the modulat{,r.
Thís case reprcsents undermc>dulation. (e) AM wave for lk"m(t) 1 > I some of the time. This sec-
ond case represents overmodulation.
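The two modulated waveforms of Fig. 5.3 are easy to reproduce numerically. The following MATLAB fragment is a minimal sketch in which the carrier frequency, message frequency, and amplitude sensitivities are assumed values chosen only for illustration.
>> t = 0:1e-5:0.01;                 % time axis
>> Ac = 1; wc = 2*pi*5000;          % carrier amplitude and radian frequency
>> m = cos(2*pi*200*t);             % message signal m(t)
>> s1 = Ac*(1 + 0.667*m).*cos(wc*t);   % 66.7% modulation (undermodulated)
>> s2 = Ac*(1 + 1.667*m).*cos(wc*t);   % 166.7% modulation (overmodulated)
>> subplot(2,1,1); plot(t,s1)
>> subplot(2,1,2); plot(t,s2)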
• MORE ON THE TIME-DOMAIN DESCRIPTION OF AN AM WAVE


Earlier we defined linear modulation to be that form of a modulation process in which if
the amplitude of the message signal (i.e., modulating wave) is scaled by a certain factor,
then the amplitude of the modulated wave is scaled by exactly the same factor. This def-
inition of linear modulation is consistent with the notion of linearity of a system that was
introduced in Section 1.8. Amplitude modulation, as defined in Eq. (5.8), fails the linearity
test in a strict sense. To demonstrate this, suppose the message signal m(t) consists of the
sum of two components, m_1(t) and m_2(t). Let s_1(t) and s_2(t) denote the AM waves produced
by these two components acting separately. With the operator H denoting the amplitude
modulation process, we may then write

H{m_1(t) + m_2(t)} = A_c[1 + k_a(m_1(t) + m_2(t))] cos(ω_c t)
                   ≠ s_1(t) + s_2(t)

where

s_1(t) = A_c[1 + k_a m_1(t)] cos(ω_c t)

and

s_2(t) = A_c[1 + k_a m_2(t)] cos(ω_c t)

The presence of the carrier wave A_c cos(ω_c t) in the AM wave causes the principle of
superposition to be violated.
However, as pointed out earlier, the failure of amplitude modulation to meet the
criterion for linearity is of a rather mild sort. From the definition given in Eq. (5.8), we
see that the AM signal s(t) is, in fact, a linear combination of the carrier component
A_c cos(ω_c t) and the modulated component k_a A_c cos(ω_c t)m(t). Accordingly, amplitude mod-
ulation does permit the use of Fourier analysis without difficulty, as discussed next.

• FREQUENCY-DOMAIN DESCRIPTION OF AMPLITUDE MODULATION

Equation (5.8) defines the full AM wave s(t) as a function of time. To develop the frequency
description of this AM wave, we take the Fourier transform of both sides of Eq. (5.8). Let
S(jω) denote the Fourier transform of s(t), and M(jω) denote the Fourier transform of
m(t); we refer to M(jω) as the message spectrum. Accordingly, using the Fourier transform
representation of the cosine function A_c cos(ω_c t) and the frequency-shifting property of
the Fourier transform, we may write

S(jω) = πA_c[δ(ω - ω_c) + δ(ω + ω_c)]
        + (1/2) k_a A_c[M(j(ω - ω_c)) + M(j(ω + ω_c))]      (5.11)

Let the message signal m(t) be bandlimited to the interval -ω_m ≤ ω ≤ ω_m, as in Fig. 5.4(a).
We refer to the highest frequency component ω_m of m(t) as the message bandwidth, which
is measured in rad/s. The shape of the spectrum shown in this figure is intended for the
purpose of illustration only. We find from Eq. (5.11) that the spectrum S(jω) of the AM
wave is as shown in Fig. 5.4(b) for the case when ω_c > ω_m. This spectrum consists of two
impulse functions weighted by the factor πA_c and occurring at ±ω_c, and two versions of
the message spectrum shifted in frequency by ±ω_c and scaled in amplitude by (1/2) k_a A_c. The
spectrum of Fig. 5.4(b) may be described as follows:
1. For positive frequencies, the portion of the spectrum of the modulated wave lying
above the carrier frequency ω_c is called the upper sideband, whereas the symmetric
portion below ω_c is called the lower sideband. For negative frequencies, the image


FIGURE 5.4 Spectral content of AM wave. (a) Magnitude spectrum of message signal. (b) Mag-
nitude spectrum of the AM wave, showing the compositions of the carrier, upper and lower
sidebands.

of the upper sideband is represented by the portion of the spectrum below -wc and
the image of the lower sideband by the portion above -wc. The condition wc > wm
ensures that the sidebands do not overlap. Otherwise, the modulated wave exhibits
spectral overlap and, therefore, frequency distortion.
2. For positive frequencies, the highest frequency component of the AM wave is
wc + wm, and the lowest frequency component is wc - wm. The difference between
these two frequencies defines the transmission bandwidth wT for an AM wave, which
is exactly twice the message bandwidth wm; that is,

wT = 2wm     (5.12)

The spectrum of the AM wave as depicted in Fig. 5.4(b) is full in that the carrier, the upper
sideband, and the lower sideband are all completely represented. It is for this reason that
we refer to this form of modulation as ''full amplitude modulation.''
The upper sideband of the AM wave represents the positive frequency components
of the message spectrum M(jw), shifted upward in frequency by the carrier frequency wc.
The lower sideband of the AM wave represents the negative frequency components of the
message spectrum M(jw), also shifted upward in frequency by wc. Herein lies the impor-
tance of admitting the use of negative frequencies in the Fourier analysis of signals. In
particular, the use of amplitude modulation reveals the negative frequency components of
M(jw) completely, provided that wc > wm.
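The spectral picture just described is easy to check numerically. The short Python sketch below is an illustration only (the carrier frequency, message tones, and threshold are assumed values, not taken from the text); it builds a full AM wave from a bandlimited message and uses the FFT to show the carrier line at wc together with upper and lower sidebands occupying a total bandwidth of 2wm.

import numpy as np

# Assumed illustrative parameters (not from the text)
fc = 20e3          # carrier frequency, Hz (wc = 2*pi*fc)
fm = 1e3           # message bandwidth, Hz (wm = 2*pi*fm), with fc >> fm
ka, Ac = 0.5, 1.0  # amplitude sensitivity and carrier amplitude
fs = 200e3         # simulation sampling rate
t = np.arange(0, 50e-3, 1/fs)

# Bandlimited message: a sum of two tones at and below fm
m = 0.6*np.cos(2*np.pi*600*t) + 0.4*np.cos(2*np.pi*fm*t)

# Full AM wave, Eq. (5.8): s(t) = Ac[1 + ka m(t)] cos(wc t)
s = Ac*(1 + ka*m)*np.cos(2*np.pi*fc*t)

# Magnitude spectrum: carrier line at fc plus upper and lower sidebands
S = np.fft.rfft(s)/len(s)
f = np.fft.rfftfreq(len(s), 1/fs)
band = (f > fc - 2*fm) & (f < fc + 2*fm)
peaks = f[band][np.abs(S[band]) > 0.01]
print("significant components near fc (Hz):", peaks)
print("occupied bandwidth approx.:", peaks.max() - peaks.min(), "Hz (= 2*fm)")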

.: . ::, .
.. .,. .....
.:.~.;
..
·i,::>' ::SW<•. • : ::r; '
: ' .
...,~,
EXAMPLE 5.1 Consider a modulating wave m(t) that consists of a single tone or frequency
component, that is,

m(t) = A0 cos(w0t)

where A0 is the amplitude of the modulating wave and w0 is its radian frequency (see Fig.
5.5(a)). The sinusoidal carrier wave c(t) has amplitude Ac and radian frequency wc (see Fig.
5.5(b)). Evaluate the time-domain and frequency-domain characteristics of the AM wave.

Solution: The AM wave is described by

s(t) = Ac[1 + µ cos(w0t)] cos(wct)     (5.13)

where

µ = kaA0


FIGURE 5.5 Time-domain (on the left) and frequency-domain (on the right) characteristics of
AM produced by a sinusoidal modulating wave. (a) Modulating wave. (b) Carrier wave. (c) AM
wave.


The dimensionless constant µ for a sinusoidal modulating wave is called the modulation
factor; it equals the percentage modulation when it is expressed numerically as a percentage.
To avoid envelope distortion due to overmodulation, the modulation factor µ must be kept
below unity. Figure 5.5(c) is a sketch of s(t) for µ less than unity.
Let Amax and Amin denote the maximum and minimum values of the envelope of the
modulated wave. Then, from Eq. (5.13), we get

Amax = Ac(1 + µ)
Amin = Ac(1 - µ)

Solving for µ:

µ = (Amax - Amin)/(Amax + Amin)

Expressing the product of the two cosines in Eq. (5.13) as the sum of two sinusoidal
waves, one having frequency wc + w0 and the other having frequency wc - w0, we get

s(t) = Ac cos(wct) + ½µAc cos[(wc + w0)t] + ½µAc cos[(wc - w0)t]


The Fourier transform of s(t) is therefore

S(jw) = πAc[δ(w - wc) + δ(w + wc)]
        + ½πµAc[δ(w - wc - w0) + δ(w + wc + w0)]
        + ½πµAc[δ(w - wc + w0) + δ(w + wc - w0)]

Thus, in ideal terms, the spectrum of a full AM wave, for the special case of sinusoidal mod-
ulation, consists of impulse functions at ±wc, wc ± w0, and -wc ± w0, as depicted in Fig.
5.5(c).
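As a quick numerical check of the relation µ = (Amax - Amin)/(Amax + Amin), the sketch below (an illustration only; the tone frequencies and amplitudes are assumed values, not those of the example) generates a single-tone AM wave and recovers the modulation factor from the extremes of its envelope.

import numpy as np

# Assumed illustrative values
Ac, A0, ka = 1.0, 0.8, 0.75           # so mu = ka*A0 = 0.6
fc, f0, fs = 10e3, 100.0, 200e3
t = np.arange(0, 0.1, 1/fs)

mu = ka * A0
s = Ac * (1 + mu*np.cos(2*np.pi*f0*t)) * np.cos(2*np.pi*fc*t)

# Envelope of the AM wave: a(t) = Ac|1 + mu*cos(w0 t)|
a = Ac * np.abs(1 + mu*np.cos(2*np.pi*f0*t))
Amax, Amin = a.max(), a.min()
mu_est = (Amax - Amin) / (Amax + Amin)
print(f"Amax = {Amax:.3f}, Amin = {Amin:.3f}, estimated mu = {mu_est:.3f}")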

EXAMPLE 5.2 Continuing with Example 5.1, investigate the effect of varying the modulation
factor µ on the power content of the AM wave.

Solution: In practice, the AM wave s(t) is a voltage or current signal. In either case, the
average power delivered to a 1-ohm load resistor by s(t) is comprised of three components:

Carrier power = ½Ac²
Upper side-frequency power = ⅛µ²Ac²
Lower side-frequency power = ⅛µ²Ac²

The ratio of the total sideband power to the total power in the modulated wave is therefore
equal to µ²/(2 + µ²), which depends only on the modulation factor µ. If µ = 1, that is, 100%
modulation is used, the total power in the two side-frequencies of the resulting AM wave is
only one-third of the total power in the modulated wave.
Figure 5.6 shows the percentage of total power in both side-frequencies and in the carrier
plotted versus the percentage modulation.

• Drill Problem 5.2 For a particular case of sinusoidal modulation, the percentage
modulation is less than 20%. Show that the power in one side-frequency is less than 1%
of the total power in the AM wave. •
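The power split between carrier and side-frequencies is easy to tabulate. The sketch below (my own illustration, not part of the text) evaluates ½Ac², ⅛µ²Ac², and the sideband-to-total ratio µ²/(2 + µ²) for a few modulation factors, including the µ = 0.2 case of Drill Problem 5.2.

import numpy as np

Ac = 1.0  # carrier amplitude (assumed value)
for mu in (0.2, 0.5, 1.0):
    carrier = 0.5 * Ac**2                 # carrier power
    one_side = mu**2 * Ac**2 / 8          # power in one side-frequency
    total = carrier + 2 * one_side        # total power in the AM wave
    print(f"mu = {mu:.1f}: one side-frequency / total = {one_side/total:.4f}, "
          f"sidebands / total = {mu**2/(2 + mu**2):.4f}")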


FIGURE 5.6 Variations of carrier power and sideband power with percentage modulation of AM
wave for the case of sinusoidal modulation.

• GENERATION OF AM WAVE
Various schemes have been devised for the generation of an AM wave. Here we consider
a simple circuit that follows from the defining equation (5.8). First, we rewrite this equation
in the equivalent form:

s(t) = ka[m(t) + B]Ac cos(wct)     (5.14)

The constant B, equal to 1/ka, represents a bias that is added to the message signal m(t)
before modulation. Equation (5.14) suggests the scheme described in the block diagram
of Fig. 5.7 for generating an AM wave. Basically, it consists of two functional blocks:

• An adder, which adds the bias B to the incoming message signal m(t).
• A multiplier, which multiplies the adder output (m(t) + B) by the carrier wave
Ac cos(wct), producing the AM wave s(t). The constant ka is a proportionality con-
stant associated with the multiplier.

The percentage modulation is controlled simply by adjusting the bias B.
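A minimal simulation of the adder-multiplier generator of Fig. 5.7 is given below; it is only an illustrative sketch (the message, tone frequencies, and constants are assumed values) showing that Eq. (5.14) reproduces the defining equation (5.8) and that the bias B sets the percentage modulation.

import numpy as np

# Assumed illustrative parameters
ka = 0.5                      # multiplier/amplitude-sensitivity constant
B = 1.0 / ka                  # bias chosen as 1/ka, per Eq. (5.14)
Ac, fc, fs = 1.0, 5e3, 100e3
t = np.arange(0, 0.02, 1/fs)
m = 0.8 * np.cos(2*np.pi*200*t)   # message signal

# Adder followed by multiplier, Eq. (5.14)
s = ka * (m + B) * Ac * np.cos(2*np.pi*fc*t)

# The same wave from the defining equation (5.8), for comparison
s_direct = Ac * (1 + ka*m) * np.cos(2*np.pi*fc*t)
print("max |difference| between the two forms:", np.max(np.abs(s - s_direct)))
print("percentage modulation:", 100 * ka * np.max(np.abs(m)), "%")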

• Drill Problem 5.3 Assuming that Mmax is the maximum absolute value of the mes-
sage signal, what is the condition which the bias B must satisfy to avoid overmodulation?

Answer: B ≥ Mmax. •

• DEMODULATION OF AM WAVE
The so-called envelope detector provides a simple and yet effective device for the demod-
ulation of a narrowband AM wave for which the percentage modulation is less than 100%.
By ''narrowband'' we mean that the carrier frequency is large compared with the message
bandwidth. Ideally, an envelope detector produces an output signal that follows the en-
velope of the input signal waveform exactly, hence the name. Some version of this circuit
is used in almost all commercial AM radio receivers.
Figure 5.8(a) shows the circuit diagram of an envelope detector that consists of a
diode and a resistor-capacitor filter. The operation of this envelope detector is as follows.
On the positive half-cycle of the input signal, the diode is forward-biased and the capacitor
C charges up rapidly to the peak value of the input signal. When the input signal falls
below this value, the diode becomes reverse-biased and the capacitor C discharges slowly
through the load resistor Rl. The discharging process continues until the next positive half-
cycle. When the input signal becomes greater than the voltage across the capacitor, the
diode conducts again and the process is repeated. We assume that the diode is ideal,
presenting zero impedance to current flow in the forward-biased region, and infinite im-
pedance in the reverse-biased region. We further assume that the AM wave applied to the
FIGURE 5.7 System involving an adder and multiplier, for generating an AM wave.


FIGURE 5.8 Envelope detector, illustrated by (a) circuit diagram, (b) AM wave input, and
(c) envelope detector output, assuming ideal conditions.

envelope detector is supplied by a voltage source of internal resistance Rs. The charging
time constant RsC must be short compared with the carrier period 2π/wc, that is,

RsC << 2π/wc

Accordingly, the capacitor C charges rapidly and thereby follows the applied voltage up
to the positive peak when the diode is conducting. On the other hand, the discharging time
constant RlC must be long enough to ensure that the capacitor discharges slowly through
the load resistor Rl between positive peaks of the carrier wave, but not so long that the
capacitor voltage will not discharge at the maximum rate of change of the modulating
wave, that is,

2π/wc << RlC << 2π/wm

where wm is the message bandwidth. The result is that the capacitor voltage or detector
output is very nearly the same as the envelope of the AM wave, as we can see from Figs.
5.8(b) and (c). The detector output usually has a small ripple (not shown in Fig. 5.8(c)) at
the carrier frequency; this ripple is easily removed by lowpass filtering.

• Drill Problem 5.4 An envelope detector has a source resistance Rs = 75 Ω and a
load resistance Rl = 10 kΩ. You are given wc = 2π × 10^5 rad/s and wm = 2π × 10^3 rad/s.
Suggest a suitable value for the capacitor C.

Answer: C = 0.01 µF. •
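The two time-constant conditions can be checked directly. The snippet below is an illustrative sketch only; the geometric-mean rule used to pick C is my own assumption, not a design rule stated in the text. It verifies that C = 0.01 µF satisfies 2π/wc << RlC << 2π/wm for the values of Drill Problem 5.4.

import numpy as np

Rs, Rl = 75.0, 10e3                  # ohms
wc, wm = 2*np.pi*1e5, 2*np.pi*1e3    # rad/s
C = 0.01e-6                          # farads (the suggested answer)

Tc, Tm = 2*np.pi/wc, 2*np.pi/wm      # carrier period and slowest message period
print(f"Rs*C = {Rs*C:.2e} s  (should be << {Tc:.1e} s)")
print(f"Rl*C = {Rl*C:.2e} s  (should lie well inside [{Tc:.1e}, {Tm:.1e}] s)")

# One (assumed) way to pick C: place Rl*C at the geometric mean of the two bounds
C_geo = np.sqrt(Tc*Tm)/Rl
print(f"geometric-mean choice: C ≈ {C_geo*1e6:.3f} µF")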

5.5 Double Sideband-Suppressed Carrier Modulation
In full AM, the carrier wave c(t) is completely independent of the message signal m(t),
which means that the transmission of the carrier wave represents a waste of power. This
points to a shortcoming of amplitude modulation, namely, that only a fraction of the total
transmitted power is affected by m(t), which was well demonstrated in Example 5.2. To
overcome this shortcoming, we may suppress the carrier component from the modulated
wave, resulting in double sideband-suppressed carrier (DSB-SC) modulation. By suppress-
ing the carrier, we obtain a modulated wave that is proportional to the product of the
carrier wave and the message signal. Thus to describe a DSB-SC modulated wave as a
function of time, we simply write

s(t) = c(t)m(t)     (5.15)
     = Ac cos(wct)m(t)

This modulated wave undergoes a phase reversal whenever the message signal m(t) crosses
zero, as illustrated in Fig. 5.9; part (a) of the figure depicts the waveform of a message
signal, and part (b) depicts the corresponding DSB-SC modulated wave. Accordingly, un-
like amplitude modulation, the envelope of a DSB-SC modulated wave is entirely different
from the message signal.

• Drill Problem 5.5 Sketch the envelope of the DSB-SC modulated wave shown in
Fig. 5.9(b) and compare it to the message signal depicted in Fig. 5.9(a). •
• FREQUENCY-DOMAIN DESCRIPTION

The suppression of the carrier from the modulated wave of Eq. (5.15) is well appreciated
by examining its spectrum. Specifically, by taking the Fourier transform of both sides of
Eq. (5.15), we get

S(jw) = ½Ac[M(j(w - wc)) + M(j(w + wc))]     (5.16)

where, as before, S(jw) is the Fourier transform of the modulated wave s(t), and M(jw) is
the Fourier transform of the message signal m(t). When the message signal m(t) is limited
to the interval -wm ≤ w ≤ wm as in Fig. 5.10(a), we find that the spectrum S(jw) is as
illustrated in part (b) of the figure. Except for a change in scale factor, the modulation
process simply translates the spectrum of the message signal by ±wc. Of course, the trans-


FIGURE 5.9 Double sideband-suppressed carrier modulation. (a) Message signal. (b) DSB-SC
modulated wave, resulting from multiplication of the message signal by the sinusoidal carrier
wave.


FIGURE 5.10 Spectral content of DSB-SC modulated wave. (a) Magnitude spectrum of message
signal. (b) Magnitude spectrum of DSB-SC modulated wave, consisting of upper and lower side-
bands only.

mission bandwidth required by DSB-SC modulation is the same as that for full amplitude
modulation, namely, 2wm. However, comparing the spectrum of Fig. 5.10(b) for DSB-SC
modulation with that of Fig. 5.4(b) for full AM, we clearly see that the carrier is suppressed
in the DSB-SC case, whereas it is present in the full AM case as exemplified by the existence
of the pair of impulse functions at ±wc.
The generation of a DSB-SC modulated wave consists simply of the product of the
message signal m(t) and the carrier wave Ac cos(wct), as indicated in Eq. (5.15). A device
for achieving this requirement is called a product modulator, which is another term for a
straightforward multiplier. Figure 5.11(a) shows the block diagram representation of a
product modulator.

• COHERENT DETECTION

The message signal m(t) may be recovered from a DSB-SC modulated wave s(t) by first
multiplying s(t) with a locally generated sinusoidal wave and then lowpass filtering the
product, as depicted in Fig. 5.11(b). It is assumed that the local oscillator output is exactly
coherent or synchronized, in both frequency and phase, with the carrier wave c(t) used in
the product modulator to generate s(t). This method of demodulation is known as coherent
detection or synchronous demodulation.
It is instructive to derive coherent detection as a special case of the more general
demodulation process using a local oscillator signal of the same frequency but arbitrary
phase difference φ, measured with respect to the carrier wave c(t). Thus denoting the local
oscillator signal in the receiver by cos(wct + φ), assumed to be of unit amplitude for


FIGURE 5.11 (a) Product modulator for generating the DSB-SC modulated wave. (b) Coherent
detector for demodulation of the DSB-SC modulated wave.

convenience, and using Eq. (5.15) for the DSB-SC modulated wave s(t), we find that the
product modulator output in Fig. 5.11(b) is given by

v(t) = cos(wct + φ)s(t)
     = Ac cos(wct) cos(wct + φ)m(t)     (5.17)
     = ½Ac cos(φ)m(t) + ½Ac cos(2wct + φ)m(t)

The first term on the right-hand side of Eq. (5.17), namely, ½Ac cos(φ)m(t), represents a
scaled version of the original message signal m(t). The second term, ½Ac cos(2wct + φ)m(t),
represents a new DSB-SC modulated wave with carrier frequency 2wc. Figure 5.12 shows
the magnitude spectrum of v(t). The clear separation between the spectra of the two com-
ponents of v(t) indicated in Fig. 5.12 hinges on the assumption that the original carrier
frequency wc satisfies the following condition:

2wc - wm > wm

or, equivalently,

wc > wm     (5.18)

where wm is the message bandwidth. Provided that this condition is satisfied, we may then
use a lowpass filter to suppress the unwanted second term of v(t). To accomplish this, the
passband of the lowpass filter must extend over the entire message spectrum and no more.
More precisely, its specifications must satisfy two requirements:

1. Cutoff frequency: wm
2. Transition band: wm ≤ w ≤ 2wc - wm

Thus the overall output v0(t) in Fig. 5.11(b) is given by

v0(t) = ½Ac cos(φ)m(t)     (5.19)

The demodulated signal v0(t) is proportional to m(t) when the phase error φ is a constant.
The amplitude of this demodulated signal is maximum when φ = 0 and has a minimum
of zero when φ = ±π/2. The zero demodulated signal, which occurs for φ = ±π/2,
represents the quadrature null effect of the coherent detector. The phase error φ in the
local oscillator causes the detector output to be attenuated by a factor equal to cos φ. As
long as the phase error φ is constant, the detector output provides an undistorted version
of the original message signal m(t). In practice, however, we usually find that the phase
error φ varies randomly with time, owing to random variations in the communication
channel. The result is that at the detector output, the multiplying factor cos φ also varies
randomly with time, which is obviously undesirable. Therefore circuitry must be provided


FIGURE 5.12 Magnitude spectrum of the product modulator output v(t) in the coherent detec-
tor of Fig. 5.11(b).

in the receiver to maintain the local oscillator in perfect synchronism, in both frequency
and phase, with the carrier wave used to generate the DSB-SC modulated wave in the
transmitter. The resulting increase in receiver complexity is the price that must be paid for
suppressing the carrier wave to save transmitter power.
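The effect of the phase error on coherent detection is easy to reproduce numerically. The sketch below is an illustration only (the tone frequencies, filter order, and cutoff are assumed choices): it multiplies a DSB-SC wave by a local oscillator offset in phase by φ, lowpass filters the product, and shows the output amplitude scaling as cos φ, vanishing at φ = π/2 (the quadrature null effect).

import numpy as np
from scipy.signal import butter, filtfilt

# Assumed illustrative parameters
fc, fm, fs, Ac = 20e3, 500.0, 200e3, 1.0
t = np.arange(0, 0.05, 1/fs)
m = np.cos(2*np.pi*fm*t)                       # message
s = Ac*np.cos(2*np.pi*fc*t)*m                  # DSB-SC wave, Eq. (5.15)

b, a = butter(4, 2*fm/(fs/2))                  # lowpass filter, cutoff above message band

for phi in (0.0, np.pi/4, np.pi/2):
    v = np.cos(2*np.pi*fc*t + phi)*s           # product modulator output, Eq. (5.17)
    v0 = filtfilt(b, a, v)                     # lowpass-filtered output, Eq. (5.19)
    gain = np.max(np.abs(v0))/(0.5*Ac*np.max(np.abs(m)))
    print(f"phi = {phi:.3f} rad: output gain ≈ {gain:.3f}, cos(phi) = {np.cos(phi):.3f}")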

• Drill Problem 5.6 For the coherent detector of Fig. 5.11(b) to operate properly, the
condition of Eq. (5.18) must be satisfied. What would happen if this condition is violated?

Answer: The lower and upper sidebands overlap, in which case the coherent detector
fails to operate properly. •

EXAMPLE 5.3 Consider again the sinusoidal modulating signal

m(t) = A0 cos(w0t)

with amplitude A0 and frequency w0; see Fig. 5.13(a). The carrier wave is

c(t) = Ac cos(wct)

with amplitude Ac and frequency wc; see Fig. 5.13(b). Investigate the time-domain and fre-
quency-domain characteristics of the corresponding DSB-SC modulated wave.


FIGURE 5.13 Time-domain (on the left) and frequency-domain (on the right) characteristics of
DSB-SC modulation produced by a sinusoidal modulating wave. (a) Modulating wave. (b) Carrier
wave. (c) DSB-SC modulated wave. Note that w = 2πf.

Solution: The DSB-SC modulated wave is defined by

s(t) = AcA0 cos(wct) cos(w0t)
     = ½AcA0 cos[(wc + w0)t] + ½AcA0 cos[(wc - w0)t]

The Fourier transform of s(t) is given by

S(jw) = ½πAcA0[δ(w - wc - w0) + δ(w + wc + w0)
        + δ(w - wc + w0) + δ(w + wc - w0)]

Thus, in ideal terms, the spectrum of the DSB-SC modulated wave, for the special case of a
sinusoidal modulating wave, consists of impulse functions located at wc ± w0 and -wc ± w0.
Figure 5.13(c) presents a depiction of the modulated wave s(t) and its Fourier transform S(jw).
Comparison of Fig. 5.13 for DSB-SC modulation with Fig. 5.5 for full AM is noteworthy.
Suppression of the carrier has a profound impact on the waveform of the modulated signal
and its spectrum.

It is informative to continue the analysis of DSB-SC modulation for a sinusoidal
modulating wave. When there is perfect synchronism between the local oscillator in the
receiver of Fig. 5.11(b) and the carrier wave c(t) in the transmitter of Fig. 5.11(a), we find
that the product modulator output is

v(t) = cos(wct){½AcA0 cos[(wc - w0)t] + ½AcA0 cos[(wc + w0)t]}
     = ¼AcA0 cos[(2wc - w0)t] + ¼AcA0 cos(w0t)
       + ¼AcA0 cos[(2wc + w0)t] + ¼AcA0 cos(w0t)

The first two terms of v(t) are produced by the lower side-frequency, and the last two
terms are produced by the upper side-frequency. The first and third terms, of frequencies
2wc - w0 and 2wc + w0, respectively, are removed by the lowpass filter in Fig. 5.11(b).
The coherent detector output thus reproduces the original modulating wave. Note, how-
ever, that this detector output appears as two equal terms, one derived from the upper
side-frequency and the other from the lower side-frequency. We therefore conclude that,
for the transmission of information, only one side-frequency is necessary. This issue is
discussed further in Section 5.7.

• Drill Problem 5.7 For the sinusoidal modulation considered in Example 5.3, what
is the average power in the lower or upper side-frequency, expressed as a percentage of
the total power in the DSB-SC modulated wave?

Answer: 50%. •

A quadrature-carrier multiplexing or quadrature-amplitude modulation (QAM) system
enables two DSB-SC modulated waves (resulting from the application of two independent
message signals) to occupy the same transmission bandwidth, and yet it allows for their
separation at the receiver output. It is therefore a bandwidth-conservation scheme.
Figure 5.14 is a block diagram of the quadrature-carrier multiplexing system. The
transmitter of the system, shown in part (a) of the figure, involves the use of two separate
product modulators that are supplied with two carrier waves of the same frequency but


FIGURE 5.14 Quadrature-carrier multiplexing system, exploiting the quadrature null effect.
(a) Transmitter. (b) Receiver, assuming perfect synchronization with the transmitter.

differing in phase by -90°. The multiplexed signal s(t) consists of the sum of these two
product modulator outputs, as shown by

s(t) = Acm1(t) cos(wct) + Acm2(t) sin(wct)     (5.20)

where m1(t) and m2(t) denote the two different message signals applied to the product
modulators. Since each term in Eq. (5.20) has a transmission bandwidth of 2wm and is
centered on wc, we see that the multiplexed signal s(t) occupies a transmission bandwidth
of 2wm centered on the carrier frequency wc, where wm is the common message bandwidth
of m1(t) and m2(t).
The receiver of the system is shown in Fig. 5.14(b). The multiplexed signal s(t) is
applied simultaneously to two separate coherent detectors that are supplied with two local
carriers of the same frequency, but differing in phase by -90°. The output of the top
detector is ½Acm1(t), whereas the output of the bottom detector is ½Acm2(t).
For the quadrature-carrier multiplexing system to operate satisfactorily, it is impor-
tant to maintain the correct phase and frequency relationships between the local oscillators
used in the transmitter and receiver parts of the system. This increase in system complexity
is the price that must be paid for the practical benefit gained from bandwidth conservation.

• Drill Problem 5.8 Verify that the outputs of the receiver in Fig. 5.14 in response
to the s(t) of Eq. (5.20) are as indicated therein. •
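The quadrature null effect is what keeps the two messages separable. The following sketch is an illustration only (the message waveforms, carrier frequency, and lowpass filter are assumed choices); it multiplexes two messages as in Eq. (5.20) and recovers them with the two coherent detectors of Fig. 5.14(b).

import numpy as np
from scipy.signal import butter, filtfilt

# Assumed illustrative parameters
fc, fs, Ac = 25e3, 200e3, 1.0
t = np.arange(0, 0.05, 1/fs)
m1 = np.cos(2*np.pi*300*t)                     # message 1
m2 = 0.7*np.sin(2*np.pi*450*t)                 # message 2

# Transmitter, Eq. (5.20): in-phase and quadrature DSB-SC waves
s = Ac*m1*np.cos(2*np.pi*fc*t) + Ac*m2*np.sin(2*np.pi*fc*t)

# Receiver: two coherent detectors with local carriers 90 degrees apart
b, a = butter(4, 2e3/(fs/2))                   # lowpass filter, cutoff above message band
out1 = filtfilt(b, a, s*np.cos(2*np.pi*fc*t))  # ~ (Ac/2) m1(t)
out2 = filtfilt(b, a, s*np.sin(2*np.pi*fc*t))  # ~ (Ac/2) m2(t)

err1 = np.max(np.abs(out1 - 0.5*Ac*m1))
err2 = np.max(np.abs(out2 - 0.5*Ac*m2))
print(f"max recovery error, channel 1: {err1:.3e}, channel 2: {err2:.3e}")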
5.7 Other Variants of Amplitude Modulation

The full AM and DSB-SC forms of modulation are wasteful of bandwidth because they
both require a transmission bandwidth equal to twice the message bandwidth. In either
case, one-half the transmission bandwidth is occupied by the upper sideband of the mod-
ulated wave, whereas the other half is occupied by the lower sideband. Indeed, the upper
and lower sidebands are uniquely related to each other by virtue of their symmetry about
the carrier frequency as illustrated in Example 5.3. That is, given the amplitude and phase
spectra of either sideband, we can uniquely determine the other. This means that insofar

as the transmission of information is concerned, only one sideband is necessary, and if
both the carrier and the other sideband are suppressed at the transmitter, no information
is lost. In this way the channel needs to provide only the same bandwidth as the message
signal, a conclusion that is intuitively satisfying. When only one sideband is transmitted,
the modulation is referred to as single sideband (SSB) modulation.

• FREQUENCY-DOMAIN DESCRIPTION OF SSB MODULATION


The precise frequency-domain description of a SSB modulated wave depends on which
sideband is transmitted. To investigate this issue, consider a message signal m(t) with a
spectrum M(jw) limited to the band wa ≤ |w| ≤ wb, as in Fig. 5.15(a). The spectrum of
the DSB-SC modulated wave, obtained by multiplying m(t) by the carrier wave Ac cos(wct),
is as shown in Fig. 5.15(b). The upper sideband is represented in duplicate by the fre-
quencies above wc and those below -wc; and when only the upper sideband is transmitted,
the resulting SSB modulated wave has the spectrum shown in Fig. 5.15(c). Likewise, the
lower sideband is represented in duplicate by the frequencies below wc (for positive fre-
quencies) and those above -wc (for negative frequencies); and when only the lower side-
band is transmitted, the spectrum of the corresponding SSB modulated wave is as shown
in Fig. 5.15(d). Thus the essential function of SSB modulation is to translate the spectrum
of the modulating wave, either with or without inversion, to a new location in the fre-
quency domain. Moreover, the transmission bandwidth requirement of a SSB modulation


FIGURE 5.15 Frequency-domain characteristics of SSB modulation. (a) Magnitude spectrum of message signal,
with energy gap from -wa to wa. (b) Magnitude spectrum of DSB-SC signal. (c) Magnitude spectrum of SSB
modulated wave, containing upper sideband only. (d) Magnitude spectrum of SSB modulated wave, containing
lower sideband only.

system is one-half that of a standard AM or DSB-SC modulation system. The benefit of
using SSB modulation is therefore derived principally from the reduced bandwidth require-
ment and the elimination of the high-power carrier wave, two features that make SSB
modulation the optimum (and therefore most desired) form of linear CW modulation. The
principal disadvantage of SSB modulation, however, is the cost and complexity of imple-
menting both the transmitter and the receiver. Here again we have a tradeoff between
increased system complexity and improved system performance.
Using the frequency-domain descriptions in Fig. 5.15, we may readily deduce the
frequency-discrimination scheme shown in Fig. 5.16 for producing SSB modulation. The
scheme consists of a product modulator followed by a bandpass filter. The filter is designed
to pass the sideband selected for transmission and suppress the remaining sideband. For a
filter to be physically realizable, the transition band separating the passband from the
stopband must have a finite width. In the context of the scheme shown in Fig. 5.16, this
requirement demands that there be an adequate separation between the lower sideband
and upper sideband of the DSB-SC modulated wave produced at the output of the product
modulator. Such a requirement can only be satisfied if the message signal m(t) applied to
the product modulator has an energy gap in its spectrum as indicated in Fig. 5.15(a).
Fortunately, speech signals for telephonic communication do exhibit an energy gap ex-
tending from -300 to 300 Hz. It is this feature of speech signals that makes SSB modulation
well suited for their transmission. Indeed, analog telephony, which was dominant for a good
part of the twentieth century, relied on SSB modulation for its transmission needs.

• Drill Problem 5.9 A SSB modulated wave s(t) is generated using a carrier of fre-
quency wc and a sinusoidal modulating wave of frequency w0. The carrier amplitude is
Ac, and that of the modulating wave is A0. Define s(t), assuming that (a) only the upper
side-frequency is transmitted and (b) only the lower side-frequency is transmitted.

Answer:
(a) s(t) = ½AcA0 cos[(wc + w0)t]
(b) s(t) = ½AcA0 cos[(wc - w0)t] •

• Drill Problem 5.10 The spectrum of a speech signal lies inside the band
w1 ≤ |w| ≤ w2. The carrier frequency is wc. Specify the passband, transition band, and
stopband of the bandpass filter in Fig. 5.16 so as to transmit (a) the lower sideband and
(b) the upper sideband.

Answer:
(a) Passband: wc - w2 ≤ |w| ≤ wc - w1
    Transition band: wc - w1 ≤ |w| ≤ wc + w1
    Stopband: wc + w1 ≤ |w| ≤ wc + w2


FIGURE 5.16 System consisting of product modulator and bandpass filter, for generating SSB
modulated wave.

(b) Passband: wc + w1 ≤ |w| ≤ wc + w2
    Transition band: wc - w1 ≤ |w| ≤ wc + w1
    Stopband: wc - w2 ≤ |w| ≤ wc - w1

*• TIME-DOMAIN DESCRIPTION OF SSB MODULATION

The frequency-d(>main descripti<)n (>f SSB modulation depicted in Fig. 5 .1 ,S and its gen-
eration using the frequency-discrimination scheme shown in Fig. 5 .16 build on our knowl-
edge of DSB-SC modulation in a straightforward fashion. However, unlike DSB-SC mod-
ulation, the time-domain description of SSB modulation is not as straightforward. To
develop the time-domain description of SSB modulation, we need a mathematical tool
known as the Hilbert transform. The <levice used to perform this transformation is known
as the Hilbert transformer, the frequency response of which is characterized as follows:

• The magnitude response is unity for all frequencies, both positive and negative.
• The phase response is -90º for positive frequencies and +90º for negative
frequencies.

The Hilbert transfc>rmer may therefore be viewed as a wideband - 90º phase shifter, wide-
band in the sense that its frequency response occupies a band of frequencies that, in theory,
is infinite in extent. Further consideration of the time-domain description of SSB modu-
lation is beyond the scope of this book.
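Although the text stops short of the time-domain formula, a common construction (stated here without derivation, and with assumed parameter values) expresses an upper-sideband SSB wave as s(t) = ½Ac[m(t) cos(wct) - m̂(t) sin(wct)], where m̂(t) is the Hilbert transform of m(t); for a single tone this reduces to the answer of Drill Problem 5.9(a). The sketch below uses scipy.signal.hilbert to form this wave and checks that only the upper side-frequency survives.

import numpy as np
from scipy.signal import hilbert

# Assumed illustrative parameters
Ac, A0 = 1.0, 1.0
fc, f0, fs = 10e3, 500.0, 100e3
t = np.arange(0, 0.1, 1/fs)
m = A0*np.cos(2*np.pi*f0*t)

# Hilbert transform of m(t): imaginary part of the analytic signal
m_hat = np.imag(hilbert(m))

# Upper-sideband SSB wave (phase-discrimination form, an assumption, not from the text)
s = 0.5*Ac*(m*np.cos(2*np.pi*fc*t) - m_hat*np.sin(2*np.pi*fc*t))

# Inspect the spectrum: only the component at fc + f0 should remain
S = np.abs(np.fft.rfft(s))/len(s)
f = np.fft.rfftfreq(len(s), 1/fs)
for freq in (fc - f0, fc + f0):
    k = np.argmin(np.abs(f - freq))
    print(f"|S| at {freq/1e3:.1f} kHz: {S[k]:.4f}")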

• VESTIGIAL SIDEBAND MODULATION

Single sideband modulation is well suited for the transmission of speech because of the
energy gap that exists in the spectrum of speech signals between zero and a few hundred
hertz. When the message signal contains significant components at extremely low fre-
quencies (as in the case of television signals and wideband data), the upper and lower
sidebands meet at the carrier frequency. This means that the use of SSB modulation is
inappropriate for the transmission of such message signals owing to the practical difficulty
of building a filter to isolate one sideband completely. This difficulty suggests another
scheme known as vestigial sideband (VSB) modulation, which is a compromise between
SSB and DSB-SC forms of modulation. In VSB modulation, one sideband is passed almost
completely whereas just a trace, or vestige, of the other sideband is retained.
Figure 5.17 illustrates the spectrum of a VSB modulated wave s(t) in relation to that
of the message signal m(t), assuming that the lower sideband is modified into the vestigial


FIGURE 5.17 Spectral content of VSB modulated wave. (a) Magnitude spectrum of message sig-
nal. (b) Magnitude spectrum of VSB modulated wave containing a vestige of the lower sideband.


FIGURE 5.18 System consisting of product modulator and sideband-shaping filter, for generat-
ing VSB modulated wave.

sideband. Specifically, the transmitted vestige of the lower sideband compensates for the
amount removed from the upper sideband. The transmission bandwidth required by the
VSB modulated wave is therefore given by

wT = wm + wv     (5.21)

where wm is the message bandwidth and wv is the width of the vestigial sideband.
To generate a VSB modulated wave, we pass a DSB-SC modulated wave through a
sideband-shaping filter, as in Fig. 5.18. Unlike the bandpass filter used for SSB modulation,
the filter in Fig. 5.18 does not have a ''flat'' magnitude response in its passband, because
the upper and lower sidebands have to be shaped differently. The filter response is designed
so that the original message spectrum M(jw) (i.e., the Fourier transform of the message
signal m(t)) is reproduced on demodulation as a result of the superposition of two spectra:

• The positive-frequency part of S(jw) (that is, the Fourier transform of the transmitted
signal s(t)), shifted downward in frequency by wc.
• The negative-frequency part of S(jw), shifted upward in frequency by wc.

The magnitudes of these two spectral contributions are illustrated in Figs. 5.19(a) and (b),
respectively. In effect, a reflection of the vestige of the lower sideband makes up for the
missing part of the upper sideband.
The design requirement described herein makes the implementation of the sideband-
shaping filter a challenging task.
Vestigial sideband modulation has the virtue of conserving bandwidth almost
as efficiently as single sideband modulation, while retaining the excellent low-frequency
characteristics of double sideband modulation. Thus VSB modulation has become stan-
dard for the analog transmission of television and similar signals, where good phase

FIGURE 5.19 Superposition of the two spectra shown in parts (a) and (b) of the figure results in
the original message spectrum (shown dashed) upon demodulation.

characteristics and transmission of low-frequency components are important, but the
bandwidth required for double sideband transmission is unavailable or uneconomical.
In the transmission of television signals in practice, a controlled amount of carrier is
added to the VSB modulated signal. This is done to permit the use of an envelope detector
for demodulation. The design of the receiver is thereby considerably simplified.

5.8 Pulse-Amplitude Modulation

Having familiarized ourselves with continuous-wave AM and its variants, we now turn
our attention to pulse-amplitude modulation (PAM). PAM represents a widely used form
of pulse modulation. Whereas frequency shifting plays a basic role in the operation of AM
systems, the basic operation in PAM systems is that of sampling.

• SAMPLING REVISITED

The sampling process, including a derivation of the sampling theorem and related issues
of aliasing and reconstructing the message signal from its sampled version, is covered in
detail in Sections 4.6 and 4.7. In this subsection, we tie the discussion of sampling for
PAM to the material covered therein. To begin with, we may restate the sampling theorem
in the context of PAM in two equivalent parts as follows:

1. A bandlimited signal of finite energy, which has no radian frequency components
higher than wm, is uniquely determined by the values of the signal at instants of time
separated by π/wm seconds.
2. A bandlimited signal of finite energy, which has no radian frequency components
higher than wm, may be completely recovered from a knowledge of its samples taken
at the rate of wm/π per second.

Part 1 of the sampling theorem is exploited in the transmitter of a PAM system; part 2 of
the theorem is exploited in the receiver of the system. The special value of the sampling
rate wm/π is referred to as the Nyquist rate, in recognition of the pioneering work done
by Harry Nyquist on data transmission.
Typically, the spectrum of a message signal is not strictly bandlimited, as required
by the sampling theorem. Rather, it approaches zero asymptotically as the frequency ap-
proaches infinity, which gives rise to aliasing and therefore signal distortion. Recall that
aliasing refers to a high-frequency component in the spectrum of the message signal ap-
parently taking on the identity of a lower frequency in the spectrum of a sampled version
of the message signal. To combat the effects of aliasing in practice, we use two corrective
measures:

• Prior to sampling, a lowpass anti-aliasing filter is used to attenuate those high-
frequency components of the signal that lie outside the band of interest.
• The filtered signal is sampled at a rate higher than the Nyquist rate.

On this basis, the generation of a PAM signal as a sequence of flat-topped pulses, whose
amplitudes are determined by the corresponding signal samples, follows the block diagram
shown in Fig. 5.20.


FIGURE 5.20 System consisting of anti-alias filter and sample-and-hold circuit, for converting a
message signal into a flat-top PAM signal.

EXAMPLE 5.4 The highest frequency component of a speech signal needed for telephonic
communications is about 3.1 kHz. Suggest a suitable value for the sampling rate.

Solution: The highest frequency component of 3.1 kHz corresponds to

wm = 6.2π × 10^3 rad/s

Correspondingly, the Nyquist rate is

wm/π = 6.2 kHz

For a suitable value for the sampling rate, slightly higher than the Nyquist rate, we may suggest
8 kHz. Indeed, this sampling rate is the international standard for speech signals.

• MATHEMATICAL DESCRIPTION OF PAM


The carrier wave used in PAM consists of a sequence of short pulses of fixed duration, in
terms of which PAM is formalfy defined as folfc>ws. PAM is a form of pulse modulation
in which the amplitude of the pulsed carrier is varied in accordance with instantaneous
sample values of the message signal; the duration of the pulsed carrier is maintained con-
stant throughour. Figure 5.21 iJlusrraces the waveform of such a PAM signal. Note that
the fundamental frequency of the carrier wave (i.e., the pulse repetition frequency) is the
sarne as the sampling rate.
For a mathematical represeotarion of rhe PAM signal s(t) for a message signal m(t),
,
we may wr1te
00

s(t} ~ m[n]h(t - n?J) (5.22)


11=-00

FIGURE 5.21 Waveform of flat-top PAM signal with pulse duration T0 and sampling period 𝒯.

where 𝒯 is the sampling period. The term m[n] is the value of the message signal m(t) at
time t = n𝒯. The term h(t) is a rectangular pulse of unit amplitude and duration T0, defined
as follows (see Fig. 5.22(a)):

h(t) = 1,  0 < t < T0
       0,  otherwise     (5.23)

In physical terms, Eq. (5.22) represents a sample-and-hold operation. This operation
is analogous to the zero-order hold-based reconstruction described in Section 4.7. These
two operations differ from each other in that the impulse response h(t) in Eq. (5.22) is T0
wide instead of 𝒯. Bearing this difference in mind, we may follow the material presented
in Section 4.7 to derive the spectrum of the PAM signal s(t).
The impulse-sampled version of the message signal m(t) is given by

mδ(t) = Σ_{n=-∞}^{∞} m[n] δ(t - n𝒯)     (5.24)

The PAM signal s(t) is itself expressed as

s(t) = Σ_{n=-∞}^{∞} m[n] h(t - n𝒯)
     = mδ(t) * h(t)     (5.25)

Equation (5.25) states that s(t) is mathematically equivalent to the convolution of mδ(t),
the impulse-sampled version of m(t), and the pulse h(t).
Taking the Fourier transform of both sides of Eq. (5.25) and recognizing that the
convolution of two time functions is transformed into the multiplication of their respective
Fourier transforms, we get

S(jw) = Mδ(jw)H(jw)     (5.26)


FIGURE 5.22 (a) Rectangular pulse h(t) of unit amplitude and duration T0. (b) Magnitude spec-
trum |H(jw)| and phase spectrum arg{H(jw)} of pulse h(t).

where S(jw) ↔ s(t), Mδ(jw) ↔ mδ(t), and H(jw) ↔ h(t) denote Fourier-transform pairs. From Chapter 4
we recall that impulse sampling of the message signal m(t) introduces periodicity into the
spectrum as shown by

Mδ(jw) = (1/𝒯) Σ_{k=-∞}^{∞} M(j(w - 2πk/𝒯))     (5.27)

where 1/𝒯 is the sampling rate. Therefore substitution of Eq. (5.27) into (5.26) yields

S(jw) = (1/𝒯) Σ_{k=-∞}^{∞} M(j(w - 2πk/𝒯)) H(jw)     (5.28)

where M(jw) ↔ m(t).
Finally, suppose that m(t) is strictly bandlimited and that the sampling rate 1/𝒯 is
greater than the Nyquist rate. Then passing s(t) through a reconstruction filter chosen as
an ideal lowpass filter with cutoff frequency wm, we find that the spectrum of the resulting
filter output is equal to M(jw)H(jw). This result is equivalent to that which would be
obtained by passing the original message signal m(t) through a lowpass filter of frequency
response H(jw).
From Eq. (5.23) we find that

H(jw) = T0 sinc(wT0/2π)e^{-jwT0/2}     (5.29)

whose magnitude and phase components are plotted in Fig. 5.22(b). Hence in light of Eqs.
(5.26) and (5.29) we see that by using PAM to represent a continuous-time message signal,
we introduce amplitude distortion as well as a delay of T0/2. Both of these effects are also
present in the sample-and-hold reconstruction scheme described in Section 4.7. A similar
form of amplitude distortion is caused by the finite size of the scanning aperture in tele-
vision and facsimile. Accordingly, the frequency distortion caused by the use of flat-top
samples in the generation of a PAM wave, as in Fig. 5.22(b), is referred to as the aperture
effect.

• Drill Problem 5.11 What happens to the scaled frequency response H(jw)/T0 of
Eq. (5.29) as the pulse duration T0 approaches zero?

Answer: lim_{T0→0} H(jw)/T0 = 1. •
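The aperture effect of Eq. (5.29) is easy to visualize numerically. The snippet below is an illustrative sketch (the 8 kHz sampling rate and the set of duty cycles are assumed values); it evaluates |H(jw)|/T0 at the edge of the message band and shows the droop shrinking toward unity as T0 → 0, in line with Drill Problem 5.11.

import numpy as np

Ts = 125e-6                    # sampling period (assumed, e.g., 8 kHz sampling)
wm = np.pi / Ts                # message bandwidth for Nyquist-rate sampling
w = np.linspace(0, wm, 5)      # a few frequencies across the message band

for duty in (0.5, 0.1, 0.01):  # pulse duration as a fraction of the sampling period
    T0 = duty * Ts
    H_scaled = np.sinc(w * T0 / (2*np.pi))   # |H(jw)|/T0, since np.sinc(x) = sin(pi x)/(pi x)
    print(f"T0/Ts = {duty:4.2f}: |H|/T0 at band edge = {H_scaled[-1]:.4f}")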

• DEMODULATION OF PAM SIGNAL
Given a sequence of flat-topped samples, s(t), we may reconstruct the original message
signal m(t) using the scheme shown in Fig. 5.23. It consists of two components connected
in cascade. The first component is a lowpass filter with a cutoff frequency that equals the
highest frequency component wm of the message signal. The second component is an equal-


FIGURE 5.23 System consisting of lowpass interpolation filter and equalizer, for reconstructing
a message signal from its flat-top sampled version.


FIGURE 5.24 Normalized equalization (to compensate for aperture effect) plotted versus the
duty cycle T0/𝒯.

izer that corrects for the aperture effect due to flat-top sampling in the sample-and-hold
circuit. The equalizer has the effect of decreasing the in-band loss of the interpolation filter
as the frequency increases in such a manner as to compensate for the aperture effect.
Ideally, the amplitude response of the equalizer is given by

1/|H(jw)| = 1/[T0 sinc(wT0/2π)] = wT0/[2T0 sin(wT0/2)]

where H(jw) is the frequency response defined in Eq. (5.29). The amount of equalization
needed in practice is usually small.

EXAMPLE 5.5 The duty cycle in a PAM signal, namely, T0/𝒯, is 10%. Evaluate the equali-
zation required at w = wm, where wm is the modulation frequency.

Solution: At wm = π/𝒯, which corresponds to the highest frequency component of the
message signal for a sampling rate equal to the Nyquist rate, we find from Eq. (5.29) that the
magnitude response of the equalizer at wm, normalized to that at zero frequency, is equal to

1/sinc(0.5T0/𝒯) = (π/2)(T0/𝒯) / sin[(π/2)(T0/𝒯)]

where the ratio T0/𝒯 is equal to the duty cycle of the sampling pulses. In Fig. 5.24, this result
is plotted as a function of T0/𝒯. Ideally, it should be equal to 1 for all values of T0/𝒯. For a
duty cycle of 10%, it is equal to 1.0041. It follows therefore that for duty cycles of less than
10%, the magnitude equalization required is less than 1.0041, and the aperture effect is usually
considered to be negligible.
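The normalized equalization factor of Example 5.5 is a one-line computation. The sketch below (an illustration only; the list of duty cycles is my own choice) evaluates (π/2)(T0/𝒯)/sin[(π/2)(T0/𝒯)] for several duty cycles and reproduces the value 1.0041 quoted for a 10% duty cycle.

import numpy as np

for duty in (0.05, 0.10, 0.25, 0.50, 0.80):
    x = 0.5 * np.pi * duty                 # (pi/2)(T0/T)
    equalization = x / np.sin(x)           # 1 / sinc(0.5 * T0/T)
    print(f"duty cycle T0/T = {duty:.2f}: required equalization = {equalization:.4f}")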
5.9 Multiplexing
In Section 5.3 we pointed out that modulation provides a method for multiplexing,
whereby message signals derived from independent sources are combined into a composite
signal suitable for transmission over a common channel. In a telephone system, for ex-
ample, multiplexing is used to transmit multiple conversations over a single long-distance
line. The signals associated with different speakers are combined in such a way as to not
interfere with each other during transmission and so that they can be separated at the

receiving end of the system. Multiplexing can be accomplished by separating the different
message signals either in frequency or in time, or through the use of coding techniques.
We thus have three basic types of multiplexing:

1. Frequency-division multiplexing, where the signals are separated by allocating them
to different frequency bands. This is illustrated in Fig. 5.25(a) for the case of six
different message signals. Frequency-division multiplexing favors the use of CW
modulation, where each message signal is able to use the channel on a continuous-
time basis.
2. Time-division multiplexing, where the signals are separated by allocating them dif-
ferent time slots within a sampling interval. This second type of multiplexing is il-
lustrated in Fig. 5.25(b) for the case of six different message signals. Time-division
multiplexing favors the use of pulse modulation, where each message signal has
access to the complete frequency response of the channel.
3. Code-division multiplexing, which relies on the assignment of different codes to the
individual users of the channel.

The first two methods of multiplexing are described in the sequel; discussion of code-
division multiplexing is beyond the scope of this book.

• FREQUENCY-DIVISION MULTIPLEXING (FDM)

A block diagram of a FDM system is shown in Fig. 5.26. The incoming message signals
are assumed to be of the lowpass type, but their spectra do not necessarily have nonzero
values all the way down to zero frequency. Following each signal input, we have shown
a lowpass filter, which is designed to remove high-frequency components that do not
contribute significantly to signal representation but are capable of disturbing other message
signals that share the common channel. These lowpass filters may be omitted only if the
input signals are sufficiently bandlimited initially. The filtered signals are applied to mod-
ulators that shift the frequency ranges of the signals so as to occupy mutually exclusive
frequency intervals. The necessary carrier frequencies needed to perform these frequency

FIGURE 5.25 Two basic forms of multiplexing. (a) Frequency-division multiplexing (with guard-
bands). (b) Time-division multiplexing; no provision is made here for synchronizing pulses.


FIGURE 5.26 Block diagram of FDM system, showing the important constituents of the trans-
mitter and receiver.

translations are obtained from a carrier supply. For the modulation, we may use any one
of the methods described in previous sections of this chapter. However, the most widely
used method of modulation in frequency-division multiplexing is single sideband modu-
lation, which, in the case of voice signals, requires a bandwidth that is approximately
equal to that of the original voice signal. In practice, each voice input is usually assigned
a bandwidth of 4 kHz. The bandpass filters following the modulators are used to restrict
the band of each modulated wave to its prescribed range. The resulting bandpass filter
outputs are next combined in parallel to form the input to the common channel. At the
receiving terminal, a bank of bandpass filters, with their inputs connected in parallel, is
used to separate the message signals on a frequency-occupancy basis. Finally, the original
message signals are recovered by individual demodulators. Note that the FDM system
shown in Fig. 5.26 operates in only one direction. To provide for two-way transmission,
as in telephony, for example, we have to completely duplicate the multiplexing facilities,
with the components connected in reverse order and with the signal waves proceeding
from right to left.

EXAMPLE 5.6 A FDM system is used to multiplex 24 independent voice signals. SSB mod-
ulation is used for the transmission. Given that each voice signal is allotted a bandwidth of 4
kHz, calculate the overall transmission bandwidth of the channel.

Solution: With each voice signal allotted a bandwidth of 4 kHz, the use of SSB modulation
requires a bandwidth of 4 kHz for its transmission. Accordingly, the overall transmission
bandwidth provided by the channel is 24 × 4 = 96 kHz.

• TIME-DIVISION MULTIPLEXING (TDM)

Basic to the operation of a TDM system is the sampling theorem. It states that we can
transmit all the information contained in a bandlimited message signal by using samples
of the message signal taken uniformly at a rate that is usually slightly higher than the
Nyquist rate. An important feature of the sampling process is a conservation of time. That
is, the transmission of the message samples engages the transmission channel for only a
fraction of the sampling interval on a periodic basis, equal to the width T0 of the PAM
modulating wave. In this way some of the time interval between adjacent samples is cleared
for use by other independent message sources on a time-shared basis.
The concept of TDM is illustrated by the block diagram shown in Fig. 5.27. Each
input message signal is first restricted in bandwidth by a lowpass filter to remove the
frequencies that are nonessential to an adequate signal representation. The lowpass filter
outputs are then applied to a commutator that is usually implemented using electronic
switching circuitry. The function of the commutator is twofold: (1) to take a narrow sample
of each of the N input message signals at a rate 1/𝒯 that is slightly higher than wc/π, where
wc is the cutoff frequency of the input lowpass filter; and (2) to sequentially interleave these
N samples inside a sampling interval 𝒯. Indeed, this latter function is the essence of the
time-division multiplexing operation. Following the commutation process, the multiplexed
signal is applied to a pulse modulator (e.g., pulse-amplitude modulator), the purpose of
which is to transform the multiplexed signal into a form suitable for transmission over the
common channel. The use of time-division multiplexing introduces a bandwidth expansion
factor N, because the scheme must squeeze N samples derived from N independent message
sources into a time slot equal to one sampling interval. At the receiving end of the system,
the received signal is applied to a pulse demodulator, which performs the inverse operation
of the pulse modulator. The narrow samples produced at the pulse demodulator output are
distributed to the appropriate lowpass reconstruction filters by means of a decommutator,
which operates in synchronism with the commutator in the transmitter.
Synchronization between the timing operations of the transmitter and receiver in a
TDM system is essential for satisfactory performance of the system. In the case of a TDM
system using PAM, synchronization may be achieved by inserting an extra pulse into each
sampling interval on a regular basis. The combination of N PAM signals and synchroni-

FIGURE 5.27 Block diagram of TDM system, showing the important constituents of the transmitter and
receiver.

zation pulse contained in a single sampling period is referred to as a frame. In PAM, the
feature of a message signal that is used for modulation is its amplitude. Accordingly, a
simple way of identifying the synchronizing pulse train at the receiver is to make sure that
its constant amplitude is large enough to stand above every one of the PAM signals. On
this basis, the synchronizing pulse train is identified at the receiver by using a threshold
device set at the appropriate level. Note that the use of time synchronization in the manner
described here increases the bandwidth expansion factor to N + 1, where N is the number
of message signals being multiplexed.
The TDM system is highly sensitive to dispersion in the common transmission chan-
nel, that is, to variations of amplitude with frequency or nonlinear phase response. Ac-
cordingly, accurate equalization of both the amplitude and phase responses of the channel
is necessary to ensure a satisfactory operation of the system. Equalization of a communi-
cation channel is discussed in Chapter 8.

ExM-tPLE 5. 7 A TDM system is used to multiplex four independent voice signals using
PAM. Each voice signal is sampled at the rate of 8 kHz. The system incorporares a synchro-
nizing pulse train for its proper operation.
(a) Determine the timing relatíonshíps between the synchronízíng pulse traín and the im-
pulse traíns used to sample the four voice signals.
(b) Calcula te the transmission bandwidth of the channel for the TDM system, and compare
the result with a corresponding FDM system using SSB rnodularion.
Solution:
(a) The sampling period is

𝒯s = 1/(8 × 10³) s = 125 µs

In this example, the number of voice signals is N = 4. Hence, dividing the sampling
period of 125 µs among these voice signals and the synchronizing pulse train, the time
slot allocated to each one of them is

T0 = 𝒯s/(N + 1) = 125/5 µs = 25 µs

Figure 5.28 shows the timing relationships between the synchronizing pulse train and
the four impulse trains used to sample the different voice signals in a single frame. Each
frame includes time slots of common duration T0 = 25 µs, which are allocated to the
corresponding PAM signals or synchronizing pulse train.
(b) As a consequence of the time-bandwidth product discussion in Section 3.6, there is an
inverse relationship between the duration of a pulse and the bandwidth (i.e., cutoff frequency) of the channel needed for its transmission. Accordingly, the overall transmission
bandwidth of the channel is

B_T ≈ 1/T0 = (1/25) MHz = 40 kHz

FIGURE 5.28 Composition of one frame of a multiplexed PAM signal, incorporating four voice signals and synchronizing pulse. One sampling period 𝒯s = 125 µs is divided into five 25-µs slots (voice channels 1 to 4 followed by the synchronizing pulse).

In contrast, the use of an FDM system based on SSB modulation requires a channel
bandwidth equal to N times that of a single voice signal, that is, 4 × 4 = 16 kHz. Thus
the use of PAM-TDM requires a channel bandwidth that is 40/16 = 2.5 times that of
SSB-FDM.
In practice, pulse-code modulation is commonly used as the method of modulation for TDM; this results in a further increase in channel bandwidth, depending on the
codeword length used in the digital representation of each pulse in the PAM signal.
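As a quick numerical check of this example, the slot duration and the two bandwidths may be computed directly in MATLAB. The commands below are an illustrative sketch added here (they are not part of the original example) and assume a 4-kHz bandwidth per voice channel, as quoted above:

N  = 4;                  % number of voice signals
Ts = 1/8e3;              % sampling period: 125 microseconds
T0 = Ts/(N + 1);         % slot duration with one sync pulse: 25 microseconds
BT_tdm = 1/T0;           % approximate PAM-TDM transmission bandwidth: 40 kHz
BT_fdm = N*4e3;          % SSB-FDM bandwidth for four 4-kHz voice channels: 16 kHz
ratio  = BT_tdm/BT_fdm   % = 2.5, as found in the example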

*5.10 Phase and Group Delays


Whenever a signal is transmitted through a dispersive (i.e., frequency-selective) system,
such as a communication channel, some delay is introduced into the output signal in
relation to the input signal. The delay is determined by the phase response of the system,
as discussed here.
For convenience of presentation, let φ(ω) denote the phase response of a dispersive
communication channel, as shown by

φ(ω) = arg{H(jω)}     (5.30)

where H(jω) is the frequency response of the channel. Suppose that a sinusoidal signal at
frequency ωc is transmitted through the channel. The signal received at the channel output
lags the transmitted signal by φ(ωc) radians. The time delay corresponding to this phase
lag is simply equal to −φ(ωc)/ωc, where the minus sign accounts for the phase lag. This
time delay is called the phase delay of the channel. The phase delay, denoted by τp, is
formally defined by

τp = −φ(ωc)/ωc     (5.31)

It is important to realize, however, that the phase delay is not necessarily the true
signal delay. This follows from the fact that a sinusoidal signal has infinite duration, with
each cycle exactly like the preceding cycle. Such a signal does not convey information
except for the fact that it is there, so to speak. It would therefore be incorrect to deduce
from the above reasoning that the phase delay is the true signal delay. In actual fact, as
we have seen from the material presented in this chapter, information can only be transmitted through a channel by applying some form of modulation to a carrier.

Assume that we have a transmitted signal that consists of a DSB-SC modulated wave
with carrier frequency ωc and sinusoidal modulation frequency ω0, as shown by

s(t) = A cos(ωct) cos(ω0t)     (5.32)

which corresponds to the signal considered in Example 5.3. (For convenience of presentation, we have set A = AcA0.) Expressing the modulated signal s(t) in terms of its upper
and lower side-frequencies, we may write

s(t) = ½A cos(ω1t) + ½A cos(ω2t)

where

ω1 = ωc + ω0     (5.33)

and

ω2 = ωc − ω0     (5.34)

Now let the signal s(t) be transmitted through the channel with phase response φ(ω).
For illustrative purposes, we assume that the magnitude response of the channel is essentially constant (equal to unity) over the frequency range from ω1 to ω2. Accordingly, the
signal received at the channel output is

y(t) = ½A cos(ω1t + φ(ω1)) + ½A cos(ω2t + φ(ω2))

where φ(ω1) and φ(ω2) are the phase shifts produced by the channel at frequencies ω1
and ω2, respectively. Equivalently, we may express y(t) as

y(t) = A cos(ωct + ½[φ(ω1) + φ(ω2)]) cos(ω0t + ½[φ(ω1) − φ(ω2)])     (5.35)

where we have invoked the definitions of ω1 and ω2 given in Eqs. (5.33) and (5.34),
respectively. Comparing the sinusoidal carrier and message components of the received
signal y(t) in Eq. (5.35) with those of the transmitted signal s(t) in Eq. (5.32), we can make
the following two statements:
1. The carrier component at frequency ωc in y(t) lags its counterpart in s(t) by
½[φ(ω1) + φ(ω2)], which represents a time delay equal to

−[φ(ω1) + φ(ω2)]/(ω1 + ω2)     (5.36)

2. The message component at frequency ω0 in y(t) lags its counterpart in s(t) by
½[φ(ω1) − φ(ω2)], which represents a time delay equal to

−[φ(ω1) − φ(ω2)]/(ω1 − ω2)     (5.37)

Suppose that the modulation frequency ω0 is small compared with the carrier frequency ωc, which implies that the side-frequencies ω1 and ω2 are close together with ωc
between them. Such a modulated signal is said to be a narrowband signal. Then we may
approximate the phase response φ(ω) in the vicinity of ω = ωc, using the two-term Taylor
series expansion

φ(ω) ≈ φ(ωc) + dφ(ω)/dω |ω=ωc · (ω − ωc)     (5.38)

Using this expansion to evaluate φ(ω1) and φ(ω2) for substitution in Eq. (5.36), we see
that the carrier delay is equal to −φ(ωc)/ωc, which is identical to the formula given in Eq.
(5.31) for phase delay. Treating Eq. (5.37) in a similar way, we find that the time delay
incurred by the message signal (i.e., the "envelope" of the modulated signal) is given by

τg = −dφ(ω)/dω |ω=ωc     (5.39)

The time delay τg is called the envelope delay or group delay. Thus group delay is defined
as the negative of the derivative of the phase response φ(ω) of the channel with respect to
ω, evaluated at the carrier frequency ωc.
In general, we thus find that when a modulated signal is transmitted through a communication channel, there are two different delays to be considered:
1. Carrier or phase delay, τp, defined by Eq. (5.31)
2. Envelope or group delay, τg, defined by Eq. (5.39)
The group delay is the true signal delay.
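The two definitions are easily checked numerically. The short MATLAB sketch below is an illustrative addition (it does not appear in the text); the particular phase response and carrier frequency are arbitrary assumed values, and the derivative in Eq. (5.39) is approximated by a central difference:

phi = @(w) -0.1*w - 0.02*w.^3;    % assumed phase response (rad), for illustration only
wc  = 2;                          % assumed carrier frequency (rad/s)
tau_p = -phi(wc)/wc               % phase delay, Eq. (5.31)
dw = 1e-6;                        % small frequency increment
tau_g = -(phi(wc+dw) - phi(wc-dw))/(2*dw)   % group delay, Eq. (5.39)

Because the assumed phase response is not linear in ω, the two delays returned by these commands differ, in agreement with the discussion above.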

• Drill Problem 5.12 What are the conditions for which the phase delay and group
delay assume a common value?
Answer: The phase response φ(ω) must be linear in ω, and φ(ωc) = 0. •

EXAMPLE 5.8 The phase response of a bandpass communication channel is defined by

φ(ω) = −tan⁻¹[(ω² − ωc²)/(ωωc)]

The signal s(t) defined in Eq. (5.32) is transmitted through this channel with

ωc = 4.75 rad/s and ω0 = 0.25 rad/s

Calculate (a) the phase delay and (b) the group delay.
Solution:
(a) At ω = ωc, φ(ωc) = 0. According to Eq. (5.31), the phase delay τp is zero.
(b) Differentiating φ(ω) with respect to ω, we get

dφ(ω)/dω = −ωc(ω² + ωc²)/[ωc²ω² + (ω² − ωc²)²]

Using this result in Eq. (5.39), we find that the group delay is

τg = 2/ωc = 2/4.75 = 0.4211 s
To display the results obtained in parts (a) and (b) in graphical form, Fig. 5.29 shows
a superposition of two waveforms obtained as follows:
1. One waveform, shown as a solid curve, was obtained by multiplying the transmitted
signal s(t) by the carrier wave cos(ωct).
2. The second waveform, shown as a dotted curve, was obtained by multiplying the received signal y(t) by the carrier wave cos(ωct).
Figure 5.29 clearly shows that the carrier (phase) delay τp is zero, and the envelope of the
received signal y(t) is lagging behind that of the transmitted signal by τg seconds.

FIGURE 5.29 Highlighting the zero carrier delay (solid curve) and the group delay τg (dotted curve), which are determined in accordance with Example 5.8.


For the presentation of waveforms in this figure, we purposely did not use a filter to suppress the high-frequency components resulting from the multiplications described under points 1 and 2 because of the desire to retain a contribution due to the carrier for display.
Note also that the separation between the upper side-frequency ω1 = ωc + ω0 = 5.00
rad/s and the lower side-frequency ω2 = ωc − ω0 = 4.50 rad/s is about 10% of the carrier
frequency ωc = 4.75 rad/s, which justifies referring to the modulated signal in this example
as a narrowband signal.

• SOME PRACTICAL CONSIDERATIONS

Having established that group delay is the true signal delay when a modulated signal is
transmitted through a communication channel, we now need to address the following
question: What is the practical importance of group delay? To deal with this question, we
first have to realize that the formula of Eq. (5.39) for determining group delay applies
strictly to modulated signals that are narrowband, that is, the bandwidth of the message
signal is small compared to the carrier frequency. It is only when this condition is satisfied
that we would be justified to use the two-term approximation of Eq. (5.38) for the phase
response φ(ω), on the basis of which Eq. (5.39) was derived.
However, there are many practical situations where this narrowband assumption is
not satisfied because the message bandwidth is comparable to the carrier frequency.

FIGURE 5.30 Group delay (in milliseconds) of a voice-grade telephone channel as a function of frequency (0 to 5 kHz). (Adapted from J. C. Bellamy, Digital Telephony, Wiley, 1982.)

In situations of this kind, the group delay is formulated as a frequency-dependent parameter,
as shown by

τg(ω) = −dφ(ω)/dω     (5.40)

which includes Eq. (5.39) as a special case. Now we begin to see the real importance of
group delay. Specifically, when a wideband modulated signal is transmitted through a
dispersive channel, the frequency components of the message signal are delayed by different
amounts at the channel output. Consequently, the message signal experiences a form of
linear distortion known as delay distortion. To reconstruct a faithful version of the original
message signal in the receiver, we have to use a delay equalizer. This equalizer has to be
designed in such a way that when it is connected in cascade with the channel, the overall
group delay is constant (i.e., the overall phase is linear with frequency).
As an illustrative example, consider the ubiquitous telephone channel, the useful
frequency band of which extends from about 0.1 to 3.1 kHz. Over this band of frequencies,
the magnitude response of the channel is considered to be essentially constant, so that there
is little amplitude distortion. In contrast, the group delay of the channel is highly dependent
on frequency, as shown in Fig. 5.30. Insofar as telephonic communication is concerned,
the variation of group delay in the channel with frequency is of no real consequence
because our ears are relatively insensitive to delay distortion. The story is dramatically
different, however, when wideband data are transmitted over a telephone channel. For
example, for a data rate of 16 kilobits per second, the bit duration is about 60 µs. From
Fig. 5.30 we see that over the useful frequency band of the telephone channel, the group
delay varies from zero to several milliseconds. Accordingly, delay distortion is extremely
harmful to wideband data transmission over a telephone channel. In such an application,
delay equalization is essential for satisfactory operation.

5.11 Exploring Concepts with MATLAB

In this chapter, we discussed the idea of modulation for the transmission of a message
signal over a bandpass channel. To illustrate this idea, we used a sinusoidal wave as the

message (modulating) signal. In particular, we used Examples 5.1 and 5.3 to illustrate the
spectra of sinusoidally modulated waves based on full AM and DSB-SC modulation, assuming ideal conditions. In this section, we use MATLAB to expand on those examples
by considering modulated waves of finite duration, which is how they always are in real-life situations. In particular, we build on the results presented in Example 4.18, where we
used the DTFS to approximate the Fourier transform of a finite-duration signal consisting
of a pair of sinusoidal components.

• FULL AM

In the time-domain description of amplitude modulation, the modulated wave consists of
the carrier plus a product of the message signal (i.e., the modulating wave) and the carrier.
Thus for the case of sinusoidal modulation considered in Example 5.1, we have

s(t) = Ac[1 + µ cos(ω0t)] cos(ωct)

where µ is the modulation factor. The term 1 + µ cos(ω0t) is a modified version of the
modulating signal and Ac cos(ωct) is the carrier.
For the AM experiment described here, we have
Carrier amplitude, Ac = 1
Carrier frequency, ωc = 0.8π rad/s
Modulation frequency, ω0 = 0.1π rad/s
We wish to display and analyze 10 full cycles of the AM wave. This corresponds to a total
duration of 200 s. Choosing a sampling rate 1/𝒯s = 10 Hz, we have a total of N = 2000
time samples. The frequency band of interest is −10π ≤ ω ≤ 10π. Since the separation
between the carrier and either side-frequency is equal to the modulation frequency ω0 =
0.1π rad/s, we would like to have a frequency resolution ωr = 0.01π rad/s. Accordingly,
to achieve this resolution, we require the following number of frequency samples (see Eq.
(4.54)):

M ≥ ωs/ωr = 20π/(0.01π) = 2000

where ωs = 2π/𝒯s = 20π rad/s is the sampling frequency.
We therefore choose M = 2000. To approximate the Fourier transform of the AM wave
s(t), we may use a 2000-point DTFS. The only variable in the AM experiment is the
modulation factor µ. Specifically, we wish to investigate three different situations:
• µ = 0.5, corresponding to undermodulation
• µ = 1.0, for which the AM system is on the verge of overmodulation
• µ = 2.0, corresponding to overmodulation
Putting all of these points together, we may now formulate the MATLAB commands
for generating the AM wave and analyzing its frequency content as follows:
Ac = 1;          % carrier amplitude
wc = 0.8*pi;     % carrier frequency
w0 = 0.1*pi;     % modulation frequency
mu = 0.5;        % modulation factor
t = 0:0.1:199.9;
s = Ac*(1 + mu*cos(w0*t)).*cos(wc*t);
plot(t,s)

Smag = abs(fftshift(fft(s,2000)))/2000;
% Smag denotes the magnitude spectrum of the AM wave
w = 10*[-1000:999]*2*pi/2000;
plot(w,Smag)
The fourth command is written for µ = 0.5. The computations are repeated for µ = 1, 2.
In what follows, we describe the effect of varying the modulation factor µ on the
time-domain and frequency-domain characteristics of the AM wave:

1. µ = 0.5.
Figure 5.31(a) shows 10 cycles of the full AM wave s(t) corresponding to µ = 0.5. The
envelope of s(t) is clearly seen to faithfully follow the sinusoidal modulating wave.
FIGURE 5.31 Amplitude modulation with 50% modulation. (a) AM wave, (b) magnitude spectrum of the AM wave, and (c) expanded spectrum around the carrier frequency.

This means that we can use an envelope detector for demodulation. Figure 5.31(b) shows the
magnitude spectrum of s(t). In Fig. 5.31(c), we have zoomed in on the fine structure of the
spectrum of s(t) around the carrier frequency. This latter figure clearly displays the exact
relationships between the side-frequencies and the carrier in accordance with modulation
theory. In particular, the lower side-frequency, the carrier, and the upper side-frequency
in Fig. 5.31(c) are located at ωc − ω0 = ±0.7π rad/s, ωc = ±0.8π rad/s, and ωc + ω0 =
±0.9π rad/s, respectively. Moreover, the amplitude of both sidebands is (µ/2) = 0.25
times that of the carrier; see Fig. 5.5 for comparison.
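The envelope referred to here can also be extracted numerically. The two commands below are an illustrative addition (they are not part of the experiment described in the text); they use the magnitude of the analytic signal, computed with the Signal Processing Toolbox function hilbert, as a stand-in for an envelope detector applied to the 50% AM wave s generated above:

env = abs(hilbert(s));    % envelope estimate: magnitude of the analytic signal of s
plot(t,s,t,env)           % the envelope should track Ac*(1 + mu*cos(w0*t))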

2. µ = 1.0.
Figure 5.32(a) shows 10 cycles of the AM wave s(t) with the same parameters as in Fig.
5.31(a), except for the fact that µ = 1.0.
FIGURE 5.32 Amplitude modulation with 100% modulation. (a) AM wave, (b) magnitude spectrum of the AM wave, and (c) expanded spectrum around the carrier frequency.

This figure shows that the AM wave is now on the verge of overmodulation. The magnitude spectrum of s(t) is shown in Fig. 5.32(b), and
its zoomed version (around the carrier frequency) is shown in Fig. 5.32(c). Here again we
see that the basic structure of the magnitude spectrum of the full AM wave is in perfect
accord with the theory.

3. µ = 2.0.
Figure 5.33(a) demonstrates the effect of overmodulation by using a modulation factor of
µ = 2. Here we see that there is no clear relationship between the envelope of the overmodulated wave s(t) and the sinusoidal modulating wave. This implies that an envelope
detector will not work, and we must use a coherent detector to perform the process of
demodulation. Note, however, that the basic spectral content of the AM wave displayed
in Figs. 5.33(b) and (c) follows exactly what the theory predicts.

FIGURE 5.33 Amplitude modulation with 200% modulation. (a) AM wave, (b) magnitude spectrum of the AM wave, and (c) expanded spectrum around the carrier frequency.

• DSB-SC MODULATION

In a DSB-SC modulated wave, the carrier is suppressed and both sidebands are transmitted
in full. It is produced simply by multiplying the modulating wave by the carrier wave.
Thus for the case of sinusoidal modulation, we have

s(t) = AcA0 cos(ωct) cos(ω0t)

The MATLAB commands for generating s(t) and analyzing its frequency content are as
follows:
Ac = 1;          % carrier amplitude
wc = 0.8*pi;     % carrier frequency in rad/s
A0 = 1;          % amplitude of modulating signal
w0 = 0.1*pi;     % frequency of modulating signal
t = 0:0.1:199.9;
s = Ac*A0*cos(wc*t).*cos(w0*t);
plot(t,s)
Smag = abs(fftshift(fft(s,2000)))/2000;
w = 10*[-1000:999]*2*pi/2000;
plot(w,Smag)

FIGURE 5.34 DSB-SC modulation. (a) DSB-SC modulated wave, (b) magnitude spectrum of the modulated wave, and (c) expanded spectrum around the carrier frequency.
These commands were used to investigate different aspects of DSB-SC modulation, as
described here.

1. Figure 5.34(a) shows 10 cycles of the DSB-SC modulated wave s(t) for the sinusoidal
modulating wave. As expected, the envelope of the modulated wave bears no clear
relationship to the sinusoidal modulating wave. Accordingly, we must use coherent
detection for demodulation, which is discussed further under point 2. Figure 5.34(b)
shows the magnitude spectrum of s(t). An expanded view of the spectrum around
the carrier frequency is shown in Fig. 5.34(c). These two figures clearly show that
the carrier is indeed suppressed, and that the upper and lower side-frequencies are
located exactly where they should be, namely, 0.9π and 0.7π rad/s, respectively.
2. To perform coherent detection, we multiply the DSB-SC modulated wave s(t) by a
replica of the carrier, and then pass the result through a lowpass filter, as described
in Section 5.5. The output of the product modulator in Fig. 5.11(b) is defined by
(assuming perfect synchronism between the transmitter and receiver)

v(t) = s(t) cos(ωct)

Correspondingly, the MATLAB command is

v = s.*cos(wc*t);

where s is itself as computed previously. Figure 5.35(a) shows the waveform of v(t).
Applying the fft command to v and taking the absolute value of the result, we
obtain the magnitude spectrum shown in Fig. 5.35(b). This latter figure readily shows
that v(t) consists of the following components:
• A sinusoidal component with frequency 0.1π rad/s, representing the modulating
wave.
• A new DSB-SC modulated wave with double carrier frequency of 1.6π rad/s; in
actual fact, the side-frequencies of this modulated wave are located at 1.5π and
1.7π rad/s.
Accordingly, we may recover the sinusoidal modulating signal by passing v(t)
through a lowpass filter with the following requirements:
• The frequency of the modulating wave lies inside the passband of the filter.
• The upper and lower side-frequencies of the new DSB-SC modulated wave lie
inside the stopband of the filter.
The issue of how to design a filter with these requirements will be considered in detail
in Chapter 8. For the present, it suffices to say that the above requirements can be
met by using the MATLAB commands:

[b,a] = butter(3,0.025);
output = filter(b,a,v);

FIGURE 5.35 Coherent detection of DSB-SC modulated wave. (a) and (b): waveform and magnitude spectrum of the signal produced at the output of the product modulator; (c) and (d): waveform and magnitude spectrum of the lowpass filter output.

The first command produces a special type of filter called a Butterworth filter. For
the experiment considered here, the filter order is 3 and its normalized cutoff frequency of 0.025 is calculated as follows:

normalized cutoff = (actual cutoff frequency of filter)/(half the sampling rate)
                  = (0.25π rad/s)/(10π rad/s)
                  = 0.025

The second command computes the filter's output in response to the product modulator output v(t). (We will revisit the design of this filter in Chapter 8.) Figure 5.35(c)
displays the waveform of the lowpass filter output; it represents a sinusoidal signal
of frequency 0.05 Hz. This observation is confirmed by using the fft command to
approximate the spectrum of the filter output; the result of the computation is shown
in Fig. 5.35(d).

3. In Fig. 5.36, we explore another aspect of DSB-SC modulation, namely, the effect of
varying the modulation frequency. Figure 5.36(a) shows five cycles of a DSB-SC
modulated wave that has the same carrier frequency as that in Fig. 5.34(a), but the
modulation frequency has been reduced to 0.025 Hz (i.e., radian frequency of 0.05π).
Figure 5.36(b) shows the magnitude spectrum of this second DSB-SC modulated
wave. Its zoomed-in version is shown in Fig. 5.36(c). Comparing this latter figure
with Fig. 5.34(c), we clearly see that decreasing the modulation frequency has the
effect of moving the upper and lower side-frequencies closer together, which is exactly consistent with modulation theory.

FIGURE 5.36 Effect of varying the modulation frequency. (a) and (b): waveform and magnitude spectrum of a DSB-SC modulated wave with a modulation frequency one-half that used in Fig. 5.34; (c) expanded spectrum around the carrier frequency.

• Drill Problem 5.13 A radiofrequency (RF) pulse is defined as the product of a
rectangular pulse and a sinusoidal carrier wave. Using MATLAB, plot the waveform of
this pulse for each of the following two cases:
(a) Pulse duration = 1 s, carrier frequency = 5 Hz
(b) Pulse duration = 1 s, carrier frequency = 25 Hz
Use a sampling frequency of 1 kHz. •
• Drill Problem 5.14 Using the fft command, plot the magnitude spectrum of the
RF pulse for each of the two cases described in Drill Problem 5.13. Hence, demonstrate
the following:
(a) For case (a), corresponding to carrier frequency 5 Hz, the lower sidebands for positive
and negative frequencies overlap each other. This effect is known as sideband
overlap.
(b) For case (b), corresponding to carrier frequency 25 Hz, the spectrum is essentially
free from sideband overlap. •
• *PHASE AND GROUP DELAYS

In Example 5.8, we studied the phase and group delays for a bandpass channel with phase
response

φ(ω) = −tan⁻¹[(ω² − ωc²)/(ωωc)]

At ω = ωc, the phase delay is τp = 0 and the group delay is τg = 0.4211 s. The two
waveforms displayed in Fig. 5.29 are defined as follows:
(1) x1(t) = s(t) cos(ωct)
         = (A/2)[cos((ωc + ω0)t) + cos((ωc − ω0)t)] cos(ωct)
         = (A/2)[cos(ω1t) + cos(ω2t)] cos(ωct)
where ω1 = ωc + ω0 and ω2 = ωc − ω0. The waveform shown in Fig. 5.29 as a solid curve
is a plot of x1(t).
(2) x2(t) = y(t) cos(ωct)
         = (A/2)[cos(ω1t + φ(ω1)) + cos(ω2t + φ(ω2))] cos(ωct)
where the angles φ(ω1) and φ(ω2) are the values of the phase response φ(ω) at ω = ω1
and ω = ω2, respectively. The waveform shown as a dotted curve in Fig. 5.29 is a plot of
x2(t).
The generation of x1(t) and x2(t) in MATLAB is achieved using the following
commands:
wc = 4.75;  w0 = 0.25;          % parameters from Example 5.8 (rad/s)
w1 = wc + w0;  w2 = wc - w0;    % upper and lower side-frequencies
t = -10:0.01:10;                % time axis used for the display
o1 = -atan((w1^2 - wc^2)/(w1*wc));
o2 = -atan((w2^2 - wc^2)/(w2*wc));
s = cos(w1*t) + cos(w2*t);
y = cos(w1*t + o1) + cos(w2*t + o2);
x1 = s.*cos(wc*t);
x2 = y.*cos(wc*t);
where we have set (A/2) = 1 for convenience of presentation. The function atan in the
commands for o1 and o2 returns the arctangent. Note also that both x1 and x2 involve
element-by-element multiplications, hence the use of a period followed by an asterisk.
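To reproduce a display in the style of Fig. 5.29 from these commands, the two waveforms may simply be overlaid; the plotting commands below are an illustrative addition:

plot(t,x1,'-',t,x2,':')   % solid curve: x1(t); dotted curve: x2(t)
xlabel('Time')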

5.12 Summary
In this chapter we presented a discussion of linear modulation techniques for the transmission of a message signal over a communication channel.
In particular, we described amplitude modulation (AM) and its variants, as summarized here:
• In full AM, the spectrum consists of two sidebands (one termed the upper sideband
and the other termed the lower sideband) and the carrier. The primary advantage of
full AM is the simplicity of its implementation, which explains its popular use for
radio broadcasting. Its disadvantages include a wastage of transmission bandwidth
and transmit power.
• In double sideband-suppressed carrier (DSB-SC) modulation, the carrier is suppressed, saving transmit power. However, the transmission bandwidth for DSB-SC
modulation is the same as that of full AM, that is, twice the message bandwidth.
• In single sideband (SSB) modulation, only one of the sidebands is transmitted. SSB
modulation is therefore the optimum form of continuous-wave (CW) modulation,
in that it requires the least amount of channel bandwidth and power for its transmission. The use of SSB modulation requires the presence of an energy gap in the
spectrum of the message signal around zero frequency.
• In vestigial sideband (VSB) modulation, one sideband and a vestige of the other
sideband are transmitted. It is well suited for transmission of wideband signals whose
spectra extend down to zero frequency. VSB modulation is the standard method for
the transmission of television signals.
The other form of linear modulation discussed in the chapter was that of pulse-amplitude modulation (PAM). PAM represents the simplest form of pulse modulation. It
may be viewed as a direct manifestation of the sampling process. As such, PAM is commonly used as a method of modulation in its own right. Moreover, it constitutes an operation that is basic to all the other forms of pulse modulation, including pulse-code
modulation.
We then discussed the notion of multiplexing, which permits the sharing of a common communication channel among a number of independent users. In frequency-division
multiplexing (FDM), the sharing is performed in the frequency domain. In time-division
multiplexing (TDM), the sharing is performed in the time domain.
The other topic discussed in the chapter was that of phase (carrier) delay and group
(envelope) delay, both of which are defined in terms of the phase response of a channel
over which a modulated signal is transmitted. The group delay is the true signal delay; it
becomes of paramount importance when a wideband modulated signal is transmitted over
the channel.

One final comment is in order. In discussing the modulation systems presented in
this chapter, we made use of two functional blocks:
• Filters for the suppression of spurious signals
• Equalizers for correcting signal distortion produced by physical transmission systems
The approach taken herein was from a system-theoretic viewpoint, and we did not concern
ourselves with the design of these functional blocks. Design considerations of filters and
equalizers are taken up in Chapter 8.

FURTHER READING

1. Communications technology has an extensive history that dates back to the invention of the
telegraph (the predecessor to digital communications) by Samuel Morse in 1837. This was
followed by the invention of the telephone by Alexander Graham Bell in 1875, in whose
honor the decibel is named. Other notable contributors to the subject include Harry Nyquist,
who published a classic paper on the theory of signal transmission in telegraphy in 1928,
and Claude Shannon, who laid down the foundations of information theory in 1948. Information theory is a broad subject, encompassing the transmission, processing, and utilization
of information.
For a historical account of communication systems, see Chapter 1 of the book:
• Haykin, S., Communication Systems, Third Edition (Wiley, 1994)
2. For a more complete treatment of modulation theory, see the books:
• Carlson, A. B., Communication Systems: An Introduction to Signals and Noise in Electrical
Communications, Third Edition (McGraw-Hill, 1986)
• Couch, L. W. III, Digital and Analog Communication Systems, Third Edition (Prentice Hall,
1990)
• Haykin, S., Communication Systems, Third Edition (Wiley, 1994)
• Schwartz, M., Information Transmission, Modulation, and Noise: A Unified Approach, Third
Edition (McGraw-Hill, 1980)
• Stremler, F. G., Introduction to Communication Systems, Third Edition (Addison-Wesley,
1990)
• Ziemer, R. E., and W. H. Tranter, Principles of Communication Systems, Third Edition
(Houghton Mifflin, 1990)
These books cover both continuous-wave modulation and pulse modulation techniques.
*3. The Hilbert transform of a signal x(t) is defined by

x̂(t) = (1/π) ∫ from −∞ to ∞ of x(τ)/(t − τ) dτ

Equivalently, we may define the Hilbert transform x̂(t) as the convolution of x(t) with 1/(πt).
The Fourier transform of 1/(πt) is −j times the signum function, where
sgn(ω) = +1 for ω > 0, 0 for ω = 0, and −1 for ω < 0
Passing x(t) through a Hilbert transformer is therefore equivalent to the combination of the
following two operations in the frequency domain:
• Keeping |X(jω)| (i.e., the magnitude spectrum of x(t)) unchanged for all ω
• Shifting arg{X(jω)} (i.e., the phase spectrum of x(t)) by +90° for negative frequencies and −90°
for positive frequencies

For a more complete discussion of the Hilbert transform and its use in the time-domain
description of single sideband modulation, see Chapters 2 and 3 of the book:
• Haykin, S., Communication Systems, Third Edition (Wiley, 1994)
4. The books cited above also include the study of how noise affects the performance of modulation systems.
5. For an advanced treatment of phase delay and group delay, see Chapter 2 of the book:
• Haykin, S., Communication Systems, Third Edition (Wiley, 1994)

" -- -. . - . - .. .. ''

5.1 You are given a 11onlinear <levice whose input- each component of the modulared signal s(t) is
output relation is described by uniquely determined by m(t)?
.
lo -
-
a1Vi + 2
a2Vi
5.3 Figure PS.3 shows the circuit diagra1n of a bal-
anced moduJator. The input applied to the top
where a 1 and a2 are constants, V; is the input AM modulator is m(t), whereas that applied to
voltage, and i is the output current. Let
0 the lower AM modulator is -m(t); these two
V;(t) = Ac cos(wc:t) + A,., cos(w,.,t) . modulators have the sarne amplitude sensítivity.
Show that the output s(t) of the balanced mod-
where the first term represents a sinusoidal car- ulator consists of a DSB-SC modulated signal.
rier and the second term represents a sinusoidal
modulating signal.
(a) Determine the frequency content of i (t). 0

(h) The output current i0 (t) contains an AM sig-


nal produced by the two components of
vi(t). Describe the specification of a filter +
...,__ Oscillator ~· Jlo s(t)
that extracts this AM signal from i (t). 0

5.2 Consider a message signal m(t} wíth the spec-


trum shown in Fig. P5.2. The message band-
S2(t)
width wm = 27T X 1O·, rad/s. This signal is ap- -m(t) - •• ~ ,;.· :,.· · · _ _ _ _ __.
1nodulator
plied to a product modulator, together with a
carrier wave Ac cos(wct), producing the DSB-SC
FIGURE P5.3
modulated signal s(t). The modulated sígnal is
next a pplied to a coherent detector. Assuming
perfect synchronism between rhe carrier waves 5.4 Phase-shift keying (PSK) is a form of phase mod-
in thc modulator and detector, determine the ulation wide)y used for digital co1nmunications.
spectrum of the detector output when (a) the The simplest form of PSK, known as binary
carrier frequency wc: = 2.51T X 103 rad/s and PSK, arises when the following representation
( b) the ca rrier frequency wc = 1.5,,.,. X 10 3 rad/s. for the modulation signal s(t) is adopted:
Wh€1r is rhe Jowesr carrier frequency for which
Ac cos(w.t}, symbol 1
s(t) =
symbol O
1 M(jw)I Binary PSK may also be viewed as double side-
band-suppressed carrier (DSB-SC) modulation.
Specifically, we may redefine s(t) as
s(t} = Ac cos(wct)b(t)
where b(t) represents the sequence of 1's and
O's; symbol 1 is represented by + 1 a11d symbol
O is represented by -1.
-wm o
Consider then a binary sequence l,(t) co11-
FIGURE P5.2 sisting of a square wave with fundamental pe-
period 2T0; that is, symbols 1 and 0 alternate. For this special sequence, do the following:
(a) Sketch the waveform of the binary PSK signal s(t) for ωcT0 = 4π.
(b) Derive a general formula for the spectrum of s(t).
(c) Plot the magnitude spectrum of s(t) for ωcT0 = 4π, 10π, 20π.

5.5 The so-called radiofrequency (RF) pulse s(t) is defined by

s(t) = Ac cos(ωct) for −T0/2 ≤ t ≤ T0/2, and s(t) = 0 otherwise

(a) Derive a formula for the spectrum of s(t), assuming that ωcT0 >> 2π.
(b) Sketch the magnitude spectrum of s(t) for ωcT0 = 20π.

5.6 The transmitted signal s(t) of a radar system consists of a periodic sequence of short RF pulses. The fundamental period of the sequence is T0. Each RF pulse has duration T1 and frequency ωc. Typical values are:
T0 = 1 ms
T1 = 1 µs
ωc = 2π × 10⁹ rad/s
Using the results of Problem 5.5, sketch the magnitude spectrum of s(t).

5.7 A DSB-SC modulated signal is demodulated by applying it to a coherent detector. Evaluate the effect of a frequency error Δω in the local carrier frequency of the detector, measured with respect to the carrier frequency of the incoming DSB-SC signal.

*5.8 Consider the quadrature-carrier multiplex system of Fig. 5.14. The multiplexed signal s(t) produced at the transmitter input in Fig. 5.14(a) is applied to a communication channel with frequency response H(jω). The output of this channel is in turn applied to the receiver input in Fig. 5.14(b). Prove that the condition

H(jωc + jω) = H*(jωc − jω),  0 < ω < ωm

is necessary for recovery of the message signals m1(t) and m2(t) at the receiver outputs; ωc is the carrier frequency, and ωm is the message bandwidth.
Hint: Evaluate the spectra of the two receiver outputs.

5.9 Using the message signal

m(t) = 1/(1 + t²)

sketch the modulated waves for the following methods of modulation:
(a) Amplitude modulation with 50% modulation
(b) Double sideband-suppressed carrier modulation

*5.10 The spectrum of a voice signal m(t) is zero outside the interval ωa < |ω| < ωb. In order to ensure communication privacy, this signal is applied to a scrambler that consists of the following cascade of components: a product modulator, a highpass filter, a second product modulator, and a lowpass filter. The carrier wave applied to the first product modulator has a frequency equal to ωc, whereas that applied to the second product modulator has a frequency equal to ωb + ωc; both of them have unit amplitude. The highpass and lowpass filters have the same cutoff frequency at ωc. Assume that ωc > ωb.
(a) Derive an expression for the scrambler output s(t), and sketch its spectrum.
(b) Show that the original voice signal m(t) may be recovered from s(t) by using an unscrambler that is identical to the unit described above.

5.11 A single sideband modulated wave s(t) is applied to the coherent detector shown in Fig. P5.11. The cutoff frequency of the lowpass filter is set equal to the highest frequency component of the message signal. Using frequency-domain ideas, show that this detector produces an output that is a scaled version of the original message signal. You may assume that the carrier frequency ωc satisfies the condition ωc > ωm.

FIGURE P5.11 Coherent detector: the SSB modulated signal s(t) is applied to a product modulator driven by the carrier Ac cos(ωct), followed by a lowpass filter that produces the output.

5.12 Figure P5.12 shows the block diagram of a frequency synthesizer, which enables the generation of many frequencies, each with the same high accuracy as the master oscillator. The master oscillator of frequency 1 MHz feeds two spectrum generators, one directly and the other through a frequency divider. Spectrum generator 1 produces a signal rich in the following
harmonics: 1, 2, 3, 4, 5, 6, 7, 8, and 9 MHz. The frequency divider provides a 100-kHz output, in response to which spectrum generator 2 produces a second signal rich in the following harmonics: 100, 200, 300, 400, 500, 600, 700, 800, and 900 kHz. The harmonic selectors are designed to feed two signals into the mixer, one from spectrum generator 1 and the other from spectrum generator 2. Find the range of possible frequency outputs of this synthesizer and its resolution (i.e., the separation between adjacent frequency outputs).

FIGURE P5.12 Frequency synthesizer: a 1-MHz master oscillator drives spectrum generator 1 directly and spectrum generator 2 through a divide-by-10 frequency divider; harmonic selectors feed the two selected harmonics to a mixer that forms the output.

*5.13 Consider a multiplex system in which four input signals m1(t), m2(t), m3(t), and m4(t) are respectively multiplied by the carrier waves

[cos(ωat) + cos(ωbt)]
[cos(ωat + α1) + cos(ωbt + β1)]
[cos(ωat + α2) + cos(ωbt + β2)]
[cos(ωat + α3) + cos(ωbt + β3)]

and the resulting DSB-SC signals are summed and then transmitted over a common channel. In the receiver, demodulation is achieved by multiplying the sum of the DSB-SC signals by the four carrier waves separately and then using filtering to remove the unwanted components. Determine the conditions that the phase angles α1, α2, α3 and β1, β2, β3 must satisfy in order that the output of the kth demodulator is mk(t), where k = 1, 2, 3, 4.

*5.14 In this problem we study the idea of mixing used in a superheterodyne receiver. To be specific, consider the block diagram of the mixer shown in Fig. P5.14 that consists of a product modulator with a local oscillator of variable frequency, followed by a bandpass filter. The input signal is an AM wave of bandwidth 10 kHz and carrier frequency that may lie anywhere in the range 0.535-1.605 MHz; these parameters are typical of AM radio broadcasting. It is required to translate this signal to a frequency band centered at a fixed intermediate frequency (IF) of 0.455 MHz. Find the range of tuning that must be provided in the local oscillator in order to achieve this requirement.

FIGURE P5.14 Mixer: the AM wave s(t) is applied to a product modulator driven by a tunable oscillator, and the product is passed through a bandpass filter to form the mixer output.

5.15 Compare full AM with PAM, emphasizing their similarities and differences.

5.16 In natural sampling, an analog signal g(t) is multiplied by a periodic train of rectangular pulses c(t). You are given that the pulse repetition frequency of this periodic train is ωs and the duration of each rectangular pulse is T (with ωsT >> 2π). Find the spectrum of the signal s(t) that results from the use of natural sampling; you may assume that time t = 0 corresponds to the midpoint of a rectangular pulse in c(t).

5.17 Specify the Nyquist rate for each of the following signals:
(a) g(t) = sinc(200t)
(b) g(t) = sinc²(200t)
(c) g(t) = sinc(200t) + sinc²(200t)

5.18 Twenty-four voice signals are sampled uniformly and then time-division multiplexed using PAM. The PAM signal is reconstructed from flat-top pulses with 1-µs duration. The multiplexing operation includes provision for synchronization by adding an extra pulse of sufficient amplitude and also 1-µs duration. The highest frequency component of each voice signal is 3.4 kHz.
(a) Assuming a sampling rate of 8 kHz, calculate the spacing between successive pulses of the multiplexed signal.
(b) Repeat your calculation assuming the use of Nyquist rate sampling.

5.19 Twelve different message signals, each with a bandwidth of 10 kHz, are to be multiplexed and transmitted. Determine the minimum bandwidth required for each method if the multiplexing/modulation method used is
(a) FDM, SSB
(b) TDM, PAM

5.20 A PAM telemetry system involves the multiplexing of four input signals si(t), i = 1, 2, 3, 4. Two of the signals s1(t) and s2(t) have bandwidths of 80 Hz each, whereas the remaining two signals s3(t) and s4(t) have bandwidths of 1 kHz each. The signals s3(t) and s4(t) are each sampled at the rate of 2400 samples per second. This sampling rate is divided by 2^R (i.e., an integer power of 2) in order to derive the sampling rate for s1(t) and s2(t).
(a) Find the maximum value of R.
(b) Using the value of R found in part (a), design a multiplexing system that first multiplexes s1(t) and s2(t) into a new sequence, s5(t), and then multiplexes s3(t), s4(t), and s5(t).

• Computer Experiments

Note: The reader is expected to choose sampling rates for the computer experiments described here. Thorough understanding of the material presented in Chapter 4 is needed.

5.21 Use MATLAB to generate and display an AM wave with the following specifications:
Modulating wave: sinusoidal
Modulation frequency: 1 kHz
Carrier frequency: 20 kHz
Percentage modulation: 75%
Compute and display the magnitude spectrum of the AM wave.

5.22 (a) Generate a symmetric triangular wave m(t) with fundamental frequency of 1 Hz, alternating between −1 and +1.
(b) Use m(t) to modulate a carrier of frequency fc = 25 Hz, generating a full AM wave with 80% modulation. Compute the magnitude spectrum of the AM wave.

5.23 Continuing with Problem 5.22, investigate the effect of varying the carrier frequency fc on the spectrum of the AM wave. Determine the minimum value of fc that is necessary to ensure that there is no overlap between the lower and upper sidebands of the AM wave.

5.24 The triangular wave described in Problem 5.22(a) is used to perform DSB-SC modulation on a carrier of frequency fc = 25 Hz.
(a) Generate and display the DSB-SC modulated wave so produced.
(b) Compute and display the spectrum of the modulated wave. Investigate the use of coherent detection for demodulation.

5.25 Use MATLAB to do the following:
(a) Generate a PAM wave using a sinusoidal modulating signal of frequency ωm = 0.5π rad/s, sampling period 𝒯s = 1 s, and pulse duration T = 0.05 s.
(b) Compute and display the magnitude spectrum of the PAM wave.
(c) Repeat the experiment for pulse duration T = 0.1, 0.2, 0.3, 0.4, 0.5 s.
Comment on the results of your experiment.

5.26 Natural sampling involves the multiplication of a message signal by a rectangular pulse train, as discussed in Problem 5.16. The fundamental period of the pulse train is Tc and the pulse duration is T.
(a) Generate and display the modulated wave for a sinusoidal modulating wave, with the following specifications:
Modulation frequency: 1 kHz
Pulse-repetition frequency (1/Tc): 10 kHz
Pulse duration T: 10 µs
(b) Compute and display the spectrum of the modulated wave. Hence verify that the original modulating wave can be recovered without distortion by passing the modulated wave through a lowpass filter. Specify the requirements that this filter must satisfy.
Selected Mathematical Identities

:;k.

A.1 Trigonometry

Consider the right triangle depicted in Fig. A.1. The following relationships hold:

sin θ = y/r
cos θ = x/r
tan θ = y/x = sin θ / cos θ
cos²θ + sin²θ = 1
cos²θ = ½(1 + cos 2θ)
sin²θ = ½(1 − cos 2θ)
cos 2θ = 2 cos²θ − 1 = 1 − 2 sin²θ

Other identities include:

sin(θ ± φ) = sin θ cos φ ± cos θ sin φ
cos(θ ± φ) = cos θ cos φ ∓ sin θ sin φ
sin θ sin φ = ½[cos(θ − φ) − cos(θ + φ)]
cos θ cos φ = ½[cos(θ − φ) + cos(θ + φ)]
sin θ cos φ = ½[sin(θ − φ) + sin(θ + φ)]

FIGURE A.1 Right triangle with legs x and y, hypotenuse r, and angle θ.



A.2 Complex Numbers

Let w be a complex number expressed in rectangular coordinates as w = x + jy, where
j = √−1, x = Re{w} is the real part of w, and y = Im{w} is the imaginary part. We express
w in polar coordinates as w = re^{jθ}, where r = |w| is the magnitude of w and θ = arg{w}
is the phase of w. The rectangular and polar representations for the number w are depicted
in the complex plane of Fig. A.2.

• CONVERTING FROM RECTANGULAR TO POLAR COORDINATES

r = √(x² + y²)
θ = arctan(y/x)

• CONVERTING FROM POLAR TO RECTANGULAR COORDINATES

x = r cos θ
y = r sin θ

• COMPLEX CONJUGATE

If w = x + jy = re^{jθ}, then using the asterisk to denote complex conjugation,

w* = x − jy = re^{−jθ}

e^{jθ} = cos θ + j sin θ

• OTHER IDENTITIES

ww* = r²
x = Re{w} = (w + w*)/2
y = Im{w} = (w − w*)/(2j)
cos θ = (e^{jθ} + e^{−jθ})/2
sin θ = (e^{jθ} − e^{−jθ})/(2j)

FIGURE A.2 The complex plane, showing the point w with rectangular coordinates (x, y) on the Re and Im axes and polar coordinates (r, θ).
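These conversions map directly onto built-in MATLAB functions. The following lines are a small illustrative sketch (not part of the original appendix), using the arbitrary number w = 3 + 4j:

w = 3 + 4j;                  % complex number in rectangular form
r = abs(w);                  % magnitude: sqrt(x^2 + y^2) = 5
theta = angle(w);            % phase: arctan(y/x)
x = real(w);  y = imag(w);   % real and imaginary parts
wconj = conj(w);             % complex conjugate x - jy
wpolar = r*exp(1j*theta);    % back to rectangular form from polar coordinates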


A.4 Defi.nite lntegrals 669

A.3 Geometric Series

If β is a complex number, then the following relationships hold:

Σ from n=0 to N−1 of β^n = (1 − β^N)/(1 − β) for β ≠ 1, and = N for β = 1

Σ from n=0 to ∞ of β^n = 1/(1 − β),  |β| < 1

Σ from n=k to ∞ of β^n = β^k/(1 − β),  |β| < 1

Σ from n=0 to ∞ of nβ^n = β/(1 − β)²,  |β| < 1

A.4 Definite Integrals

∫ from a to b of x^n dx = x^(n+1)/(n + 1), evaluated from a to b,  n ≠ −1

∫ from a to b of e^(cx) dx = (1/c) e^(cx), evaluated from a to b

∫ from a to b of x e^(cx) dx = (1/c²) e^(cx)(cx − 1), evaluated from a to b

∫ from a to b of cos(cx) dx = (1/c) sin(cx), evaluated from a to b

∫ from a to b of sin(cx) dx = −(1/c) cos(cx), evaluated from a to b

∫ from a to b of x cos(cx) dx = (1/c²)[cos(cx) + cx sin(cx)], evaluated from a to b

∫ from a to b of x sin(cx) dx = (1/c²)[sin(cx) − cx cos(cx)], evaluated from a to b

∫ from a to b of e^(gx) cos(cx) dx = e^(gx)/(g² + c²) [g cos(cx) + c sin(cx)], evaluated from a to b

∫ from a to b of e^(gx) sin(cx) dx = e^(gx)/(g² + c²) [g sin(cx) − c cos(cx)], evaluated from a to b

• GAUSSIAN PULSES

∫ from −∞ to ∞ of e^(−x²/(2σ²)) dx = σ√(2π),  σ > 0

∫ from −∞ to ∞ of x² e^(−x²/(2σ²)) dx = σ³√(2π),  σ > 0



• INTEGRATION BY PARTS

∫ from a to b of u(x) dv(x) = u(x)v(x), evaluated from a to b, minus ∫ from a to b of v(x) du(x)

A.5 Matrices

A matrix is a set of numbers arranged in a rectangular array. For example,

A = [2 3; −1 4]

is a matrix with two columns and two rows. We thus say that A is a two by two matrix.
The first and second rows of A are given by [2 3] and [−1 4], respectively. We index
the elements of the matrix in terms of their location, which is measured by the row and
column in which the element lies. For example, the element in the first row and second
column of A is 3. Boldface uppercase symbols are used to denote matrix quantities.
A vector is a matrix containing a single column or a single row. A column vector
consists of a single column. For example,

b = [3; −2]

is a two-dimensional column vector. A row vector consists of a single row. For example,

c = [2 −1]

is a two-dimensional row vector. Vectors are denoted with lowercase boldface symbols.

• ADDITION

If a_ij and b_ij are the elements in the ith row and jth column of matrices A and B, respectively,
then the matrix C = A + B has elements c_ij = a_ij + b_ij.

• MULTIPLICATION

If a_ik is the element in the ith row and kth column of an M by N matrix A and b_kj is the
element in the kth row and jth column of an N by L matrix B, then the M by L matrix
C = AB has elements c_ij = Σ from k=1 to N of a_ik b_kj.

Two by Two Matrix Inverse

[a b; c d]^(−1) = 1/(ad − bc) · [d −b; −c a]

Inverse of Product of Matrices

If A and B are invertible, then

(AB)^(−1) = B^(−1) A^(−1)
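A brief numerical illustration of the last two identities (an added sketch, using MATLAB's built-in inv function and arbitrary invertible matrices):

A = [2 3; -1 4];  B = [1 -2; 0 3];   % arbitrary invertible matrices
inv(A)                               % equals [4 -3; 1 2]/11, i.e., (1/(ad-bc))*[d -b; -c a]
norm(inv(A*B) - inv(B)*inv(A))       % zero to machine precision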
Partial Fraction Expansions

Partial fraction expansions are used to express a ratio of polynomials as a sum of ratios
of lower order polynomials. In essence, the partial fraction expansion is the inverse operation to placing a sum of fractions over a common denominator. The partial fraction
expansion is used in signals and systems problems to determine inverse Fourier, Laplace,
and z-transforms. In this context we use a partial fraction expansion to express an arbitrary
ratio of polynomials as a sum of terms for which the inverse transform is known.
There are two different standard forms for ratios of polynomials that occur in our
study of signals and systems. One arises in the context of representations for continuous-time signals and systems, while the other arises in the context of discrete-time signals
and systems. We shall treat these separately, since the method for performing the partial
fraction expansion differs slightly in each case.

B.1 Partial Fraction Expansions
for Continuous-Time Representations

In the study of continuous-time signals and systems we generally encounter ratios of polynomials of the form

W(u) = B(u)/A(u)
     = (bM u^M + b(M−1) u^(M−1) + ··· + b1 u + b0)/(u^N + a(N−1) u^(N−1) + ··· + a1 u + a0)     (B.1)

In a Fourier transform problem the variable u represents jω and in a Laplace transform
problem u represents s. Note that the coefficient of u^N in A(u) is unity. We assume that
W(u) is a proper rational function: that is, the order of B(u) is less than that of A(u)
(M < N). If this condition is not satisfied, then long division of A(u) into B(u) is used to
write W(u) as the sum of a polynomial in u and a proper rational function representing
the remainder of the division. The partial fraction expansion is then applied to the
remainder.
The first step in performing a partial fraction expansion is to factor the denominator
polynomial. If the N roots di are distinct, then we may rewrite W(u) as shown by

W(u) = B(u)/[(u − d1)(u − d2) ··· (u − dN)]

In this case the partial fraction expansion for W(u) takes the form

W(u) = C1/(u − d1) + C2/(u − d2) + ··· + CN/(u − dN)     (B.2)

If a root u = r occurs with multiplicity L, then W(u) is of the form

W(u) = B(u)/[(u − r)^L (u − d1)(u − d2) ··· (u − d(N−L))]

and the partial fraction expansion for W(u) is given as

W(u) = C1/(u − d1) + C2/(u − d2) + ··· + C(N−L)/(u − d(N−L))
       + K(L−1)/(u − r) + K(L−2)/(u − r)² + ··· + K0/(u − r)^L     (B.3)

Note that as the power to which the denominator terms (u − r) are raised increases, the
indices i of the corresponding coefficients Ki decrease.
The constants Ci and Ki are called residues. We may obtain the residues using two
different approaches. In the method of linear equations we place all the terms in the partial
fraction expansion for W(u) over a common denominator and equate the coefficient of
each power of u to the corresponding coefficient in B(u). This gives a system of N linear
equations that may be solved to obtain the residues, as illustrated in the following example.
For hand calculations this approach is generally limited to N = 2 or N = 3.
EXAMPLE B.1 Determine the partial fraction expansion for the function

W(u) = (3u + 5)/(u³ + 4u² + 5u + 2)

Solution: The roots of the denominator polynomial are u = −2 and u = −1 with multiplicity
two. Hence the partial fraction expansion for W(u) is of the form

W(u) = K1/(u + 1) + K0/(u + 1)² + C1/(u + 2)

The residues K1, K0, and C1 may be determined by placing the terms in the partial fraction
expansion over a common denominator, as shown by

W(u) = [K1(u + 1)(u + 2) + K0(u + 2) + C1(u + 1)²]/[(u + 1)²(u + 2)]
     = [(K1 + C1)u² + (3K1 + K0 + 2C1)u + (2K1 + 2K0 + C1)]/(u³ + 4u² + 5u + 2)

Equating the coefficient of each power of u in the numerator on the right-hand side of this
equation to those in B(u) gives the system of three equations in the three unknowns K1, K0,
and C1 shown by

0 = K1 + C1
3 = 3K1 + K0 + 2C1
5 = 2K1 + 2K0 + C1

Solving these equations we obtain K1 = 1, K0 = 2, and C1 = −1, so the partial fraction
expansion of W(u) is given by

W(u) = 1/(u + 1) + 2/(u + 1)² − 1/(u + 2)

The method of residues is based on manipulating the partial fraction expansion so
as to isolate each residue. Hence this method is usually easier to use than solving linear
equations. Consider multiplying each side of Eq. (B.3) by (u − di), as shown by

(u − di)W(u) = C1(u − di)/(u − d1) + C2(u − di)/(u − d2) + ··· + Ci + ··· + C(N−L)(u − di)/(u − d(N−L))
               + K(L−1)(u − di)/(u − r) + K(L−2)(u − di)/(u − r)² + ··· + K0(u − di)/(u − r)^L

On the left-hand side the multiplication by (u − di) cancels the (u − di) term in the denominator of W(u). If we now evaluate this expression at u = di, then all the terms on the
right-hand side are zero except for Ci, and we obtain the expression for Ci given by

Ci = (u − di)W(u) evaluated at u = di     (B.4)

Isolation of the residues associated with the repeated root u = r requires multiplying both
sides of Eq. (B.3) by (u − r)^L and differentiating. We have

Ki = (1/i!) d^i/du^i {(u − r)^L W(u)} evaluated at u = r     (B.5)

The following example uses Eqs. (B.4) and (B.5) to obtain the residues.


EXAMPLE B.2 Find the partial fraction expansion for

W(u) = (3u^3 + 15u^2 + 29u + 21) / [(u + 1)^2(u + 2)(u + 3)]

Solution: Here we have a root of multiplicity two at u = -1 and distinct roots at u = -2
and u = -3. Hence the partial fraction expansion for W(u) is of the form

W(u) = K_1/(u + 1) + K_0/(u + 1)^2 + C_1/(u + 2) + C_2/(u + 3)

We obtain C_1 and C_2 using Eq. (B.4) as shown by

C_1 = (u + 2)(3u^3 + 15u^2 + 29u + 21) / [(u + 1)^2(u + 2)(u + 3)] |_{u = -2}
    = -1

C_2 = (u + 3)(3u^3 + 15u^2 + 29u + 21) / [(u + 1)^2(u + 2)(u + 3)] |_{u = -3}
    = 3

Now we may obtain K_1 and K_0 using Eq. (B.5) as follows:

K_0 = (u + 1)^2 (3u^3 + 15u^2 + 29u + 21) / [(u + 1)^2(u + 2)(u + 3)] |_{u = -1}
    = 2

K_1 = (1/1!) (d/du) { (u + 1)^2 (3u^3 + 15u^2 + 29u + 21) / [(u + 1)^2(u + 2)(u + 3)] } |_{u = -1}
    = [(9u^2 + 30u + 29)(u^2 + 5u + 6) - (3u^3 + 15u^2 + 29u + 21)(2u + 5)] / (u^2 + 5u + 6)^2 |_{u = -1}
    = 1

Hence the partial fraction expansion for W(u) is given by

W(u) = 1/(u + 1) + 2/(u + 1)^2 - 1/(u + 2) + 3/(u + 3)

We may draw the following conclusions about the residues from Eqs. (B.4) and (B.5):
• The residue associated with a real root is real.
• The residues associated with a pair of complex-conjugate roots are the complex
conjugates of each other, and thus only one of them needs to be computed.
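
Such expansions are also easy to check by machine. The sketch below is an added illustration, not part of the original text; it assumes the SymPy library is available and reproduces Example B.2 symbolically.

import sympy as sp

u = sp.symbols('u')
W = (3*u**3 + 15*u**2 + 29*u + 21) / ((u + 1)**2 * (u + 2) * (u + 3))

# apart() returns the partial fraction expansion; up to term ordering it
# matches Example B.2: 1/(u+1) + 2/(u+1)**2 - 1/(u+2) + 3/(u+3)
print(sp.apart(W, u))
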

B.2 Partial Fraction Expansions for Discrete-Time Representations
In the study of discrete-time signals and systems we frequently encounter ratios of poly-
nomials having the form shown by

W(u) = B(u)/A(u)
     = (b_M u^M + b_{M-1} u^{M-1} + ··· + b_1 u + b_0) / (a_N u^N + a_{N-1} u^{N-1} + ··· + a_1 u + 1)        (B.6)

In a discrete-time Fourier transform problem the variable u represents e^{-jΩ}, while in a
z-transform problem u represents z^{-1}. Note that the coefficient of the zeroth power of u
in A(u) is unity here. We again assume that W(u) is a proper rational function; that is, the
order of B(u) is less than that of A(u) (M < N). If this condition is not satisfied, then long
division of A(u) into B(u) is used to write W(u) as the sum of a polynomial in u and a
proper rational function representing the remainder of the division. The partial fraction
expansion is then applied to the remainder.
Here we write the denominator polynomial as a product of first-order terms, as shown
by

A(u) = (1 - d_1 u)(1 - d_2 u) ··· (1 - d_N u)        (B.7)

where d_i^{-1} is a root of A(u). Equivalently, d_i is a root of the polynomial Ã(u) constructed
by reversing the order of the coefficients in A(u). That is, d_i is a root of

Ã(u) = u^N + a_1 u^{N-1} + ··· + a_{N-1} u + a_N

If all the d_i are distinct, then the partial fraction expansion is given by

W(u) = C_1/(1 - d_1 u) + C_2/(1 - d_2 u) + ··· + C_N/(1 - d_N u)        (B.8)

If a term 1 - ru occurs with multiplicity L in Eq. (B.7), then the partial fraction expansion
has the form shown by

W(u) = C_1/(1 - d_1 u) + C_2/(1 - d_2 u) + ··· + C_{N-L}/(1 - d_{N-L} u)
       + K_{L-1}/(1 - ru) + K_{L-2}/(1 - ru)^2 + ··· + K_0/(1 - ru)^L        (B.9)

The residues C_i and K_i may be determined analogously to the continuous-time case.
We may place the right-hand side of Eq. (B.8) or (B.9) over a common denominator and
obtain a system of N linear equations by equating coefficients of like powers of u in the
numerator polynomials. Alternatively, we may solve for the residues directly by manipu-
lating the partial fraction expansion in such a way as to isolate each coefficient. This yields
the following two relationships:

C_i = (1 - d_i u)W(u) |_{u = 1/d_i}        (B.10)

K_i = (1/i!) (-1/r)^i (d^i/du^i) {(1 - ru)^L W(u)} |_{u = 1/r}        (B.11)


EXAMPLE B.3 Find the partial fraction expansion for the discrete-time function

W(u) = (-14u - 4) / (8u^3 - 6u - 2)

Solution: The constant term is not unity in the denominator, and so we first divide the
denominator and numerator by -2 to express W(u) in standard form. We may then write

W(u) = (7u + 2) / (-4u^3 + 3u + 1)

The denominator polynomial A(u) is factored by rooting the related polynomial

Ã(u) = u^3 + 3u^2 - 4

This polynomial has a single root at u = 1 and a root of multiplicity two at u = -2. Hence
W(u) can be expressed as

W(u) = (7u + 2) / [(1 - u)(1 + 2u)^2]

and the partial fraction expansion has the form given by

W(u) = C_1/(1 - u) + K_1/(1 + 2u) + K_0/(1 + 2u)^2

The residues are evaluated using Eqs. (B.10) and (B.11) as follows:

C_1 = (1 - u)W(u) |_{u = 1}
    = 1

K_0 = (1 + 2u)^2 W(u) |_{u = -1/2}
    = -1

K_1 = (1/1!)(1/2) (d/du) {(1 + 2u)^2 W(u)} |_{u = -1/2}
    = [7(1 - u) + (7u + 2)] / [2(1 - u)^2] |_{u = -1/2}
    = 2

We conclude that the partial fraction expansion is shown by

W(u) = 1/(1 - u) + 2/(1 + 2u) - 1/(1 + 2u)^2
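
The discrete-time expansion can likewise be checked numerically. The following sketch is an added illustration, not part of the original text; it assumes the SciPy library is available. scipy.signal.residuez expects the numerator and denominator coefficients in ascending powers of z^{-1}, which is the variable u used here.

from scipy import signal

# W(u) = (-14u - 4)/(8u^3 - 6u - 2) with u = z**-1:
b = [-4, -14]          # -4 - 14*z**-1
a = [-2, -6, 0, 8]     # -2 - 6*z**-1 + 0*z**-2 + 8*z**-3

r, p, k = signal.residuez(b, a)
print(r)   # residues: expected {1, 2, -1} up to ordering and round-off
print(p)   # poles d_i: expected {1, -2, -2}; the repeated pole -2 carries
           # the first- and second-order terms in (1 + 2u)
print(k)   # direct polynomial term: expected empty here since M < N
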
Tables of Fourier Representations
and Properties


C.1 Basic Discrete-Time Fourier Series Pairs
(time domain ⟷ frequency domain; period N, Ω₀ = 2π/N)

x[n] = Σ_{k=⟨N⟩} X[k] e^{jkΩ₀n}  ⟷  X[k] = (1/N) Σ_{n=⟨N⟩} x[n] e^{-jkΩ₀n}

x[n] = {1, |n| ≤ M; 0, M < |n| < N/2}, x[n] = x[n + N]  ⟷  X[k] = sin(kΩ₀(2M + 1)/2) / [N sin(kΩ₀/2)]

x[n] = e^{jpΩ₀n}  ⟷  X[k] = {1, k = p, p ± N, p ± 2N, ...; 0, otherwise}

x[n] = cos(pΩ₀n)  ⟷  X[k] = {1/2, k = ±p, ±p ± N, ±p ± 2N, ...; 0, otherwise}

x[n] = sin(pΩ₀n)  ⟷  X[k] = {1/(2j), k = p, p ± N, p ± 2N, ...; -1/(2j), k = -p, -p ± N, -p ± 2N, ...; 0, otherwise}

x[n] = 1  ⟷  X[k] = {1, k = 0, ±N, ±2N, ...; 0, otherwise}

x[n] = Σ_{p=-∞}^{∞} δ[n - pN]  ⟷  X[k] = 1/N
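
As an added numerical illustration (not part of the original table), the cosine entry above can be verified with the FFT, which evaluates the DTFS analysis sum directly; the sketch assumes the NumPy library is available.

import numpy as np

N, p = 8, 2
n = np.arange(N)
x = np.cos(p * 2 * np.pi / N * n)

# DTFS coefficients X[k] = (1/N) * sum_n x[n] * exp(-j*k*Omega0*n) = fft(x)/N.
# Expect 1/2 at k = p and at k = N - p (i.e., k = -p), and 0 elsewhere.
X = np.fft.fft(x) / N
print(np.round(X.real, 10))
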

C.2 Basic Fourier Series Pairs
(time domain ⟷ frequency domain; period T, ω₀ = 2π/T)

x(t) = Σ_{k=-∞}^{∞} X[k] e^{jkω₀t}  ⟷  X[k] = (1/T) ∫_{⟨T⟩} x(t) e^{-jkω₀t} dt

x(t) = {1, |t| ≤ T_s; 0, T_s < |t| < T/2}, x(t + T) = x(t)  ⟷  X[k] = 2 sin(kω₀T_s) / (T kω₀)

x(t) = e^{jpω₀t}  ⟷  X[k] = δ[k - p]

x(t) = cos(pω₀t)  ⟷  X[k] = (1/2) δ[k - p] + (1/2) δ[k + p]

x(t) = sin(pω₀t)  ⟷  X[k] = (1/(2j)) δ[k - p] - (1/(2j)) δ[k + p]

x(t) = Σ_{p=-∞}^{∞} δ(t - pT)  ⟷  X[k] = 1/T

C.3 Basic Discrete-Time Fourier Transform Pairs
(time domain ⟷ frequency domain)

x[n] = (1/2π) ∫_{2π} X(e^{jΩ}) e^{jΩn} dΩ  ⟷  X(e^{jΩ}) = Σ_{n=-∞}^{∞} x[n] e^{-jΩn}

x[n] = {1, |n| ≤ M; 0, otherwise}  ⟷  X(e^{jΩ}) = sin(Ω(2M + 1)/2) / sin(Ω/2)

x[n] = a^n u[n], |a| < 1  ⟷  X(e^{jΩ}) = 1 / (1 - a e^{-jΩ})

x[n] = δ[n]  ⟷  X(e^{jΩ}) = 1

x[n] = u[n]  ⟷  X(e^{jΩ}) = 1/(1 - e^{-jΩ}) + Σ_{k=-∞}^{∞} π δ(Ω - k2π)

x[n] = sin(Wn)/(πn)  ⟷  X(e^{jΩ}) = {1, |Ω| ≤ W; 0, W < |Ω| ≤ π}, X(e^{jΩ}) is 2π periodic

x[n] = (n + 1) a^n u[n]  ⟷  X(e^{jΩ}) = 1 / (1 - a e^{-jΩ})^2

C.4 Basic Fourier Transform Pairs
(time domain ⟷ frequency domain)

x(t) = (1/2π) ∫_{-∞}^{∞} X(jω) e^{jωt} dω  ⟷  X(jω) = ∫_{-∞}^{∞} x(t) e^{-jωt} dt

x(t) = {1, |t| ≤ T; 0, otherwise}  ⟷  X(jω) = 2 sin(ωT)/ω

x(t) = sin(Wt)/(πt)  ⟷  X(jω) = {1, |ω| < W; 0, otherwise}

x(t) = δ(t)  ⟷  X(jω) = 1

x(t) = 1  ⟷  X(jω) = 2π δ(ω)

x(t) = u(t)  ⟷  X(jω) = 1/(jω) + π δ(ω)

x(t) = e^{-at} u(t), Re{a} > 0  ⟷  X(jω) = 1/(a + jω)

x(t) = t e^{-at} u(t), Re{a} > 0  ⟷  X(jω) = 1/(a + jω)^2

x(t) = e^{-a|t|}, a > 0  ⟷  X(jω) = 2a/(a^2 + ω^2)

x(t) = (1/√(2π)) e^{-t^2/2}  ⟷  X(jω) = e^{-ω^2/2}

C.5 Fourier Transform Pairs for Periodic Signals
(periodic time-domain signal ⟷ Fourier transform)

x(t) = Σ_{k=-∞}^{∞} X[k] e^{jkω₀t}  ⟷  X(jω) = 2π Σ_{k=-∞}^{∞} X[k] δ(ω - kω₀)

x(t) = cos(ω₀t)  ⟷  X(jω) = π δ(ω - ω₀) + π δ(ω + ω₀)

x(t) = sin(ω₀t)  ⟷  X(jω) = (π/j) δ(ω - ω₀) - (π/j) δ(ω + ω₀)

x(t) = e^{jω₀t}  ⟷  X(jω) = 2π δ(ω - ω₀)

x(t) = Σ_{n=-∞}^{∞} δ(t - n𝒯)  ⟷  X(jω) = (2π/𝒯) Σ_{k=-∞}^{∞} δ(ω - k 2π/𝒯)

x(t) = {1, |t| ≤ T_s; 0, T_s < |t| < T/2}, x(t + T) = x(t)  ⟷  X(jω) = Σ_{k=-∞}^{∞} [2 sin(kω₀T_s)/k] δ(ω - kω₀)

C.6 Discrete-Time Fourier Transform Pairs for Periodic Signals
(periodic time-domain signal ⟷ discrete-time Fourier transform)

x[n] = Σ_{k=⟨N⟩} X[k] e^{jkΩ₀n}  ⟷  X(e^{jΩ}) = 2π Σ_{k=-∞}^{∞} X[k] δ(Ω - kΩ₀)

x[n] = cos(Ω₁n)  ⟷  X(e^{jΩ}) = π Σ_{k=-∞}^{∞} [δ(Ω - Ω₁ - k2π) + δ(Ω + Ω₁ - k2π)]

x[n] = sin(Ω₁n)  ⟷  X(e^{jΩ}) = (π/j) Σ_{k=-∞}^{∞} [δ(Ω - Ω₁ - k2π) - δ(Ω + Ω₁ - k2π)]

x[n] = e^{jΩ₁n}  ⟷  X(e^{jΩ}) = 2π Σ_{k=-∞}^{∞} δ(Ω - Ω₁ - k2π)

x[n] = Σ_{k=-∞}^{∞} δ[n - kN]  ⟷  X(e^{jΩ}) = (2π/N) Σ_{k=-∞}^{∞} δ(Ω - k 2π/N)

C.7 Properties of Fourier Representations

Fourier Transform (FT) and Fourier Series (FS) properties. Let x(t) ⟷ X(jω) and y(t) ⟷ Y(jω) denote FT pairs, and let x(t) ⟷ X[k] and y(t) ⟷ Y[k] denote FS pairs with fundamental frequency ω₀ and period T.

Linearity:
FT: ax(t) + by(t) ⟷ aX(jω) + bY(jω)
FS: ax(t) + by(t) ⟷ aX[k] + bY[k]

Time shift:
FT: x(t - t₀) ⟷ e^{-jωt₀} X(jω)
FS: x(t - t₀) ⟷ e^{-jkω₀t₀} X[k]

Frequency shift:
FT: e^{jγt} x(t) ⟷ X(j(ω - γ))
FS: e^{jk₀ω₀t} x(t) ⟷ X[k - k₀]

Scaling:
FT: x(at) ⟷ (1/|a|) X(jω/a)
FS: x(at) ⟷ X[k], with fundamental frequency aω₀

Differentiation in time:
FT: (d/dt) x(t) ⟷ jω X(jω)
FS: (d/dt) x(t) ⟷ jkω₀ X[k]

Differentiation in frequency:
FT: -jt x(t) ⟷ (d/dω) X(jω)

Integration:
FT: ∫_{-∞}^{t} x(τ) dτ ⟷ X(jω)/(jω) + π X(j0) δ(ω)

Convolution:
FT: ∫_{-∞}^{∞} x(τ) y(t - τ) dτ ⟷ X(jω) Y(jω)
FS: ∫_{⟨T⟩} x(τ) y(t - τ) dτ ⟷ T X[k] Y[k]

Modulation:
FT: x(t) y(t) ⟷ (1/2π) ∫_{-∞}^{∞} X(jν) Y(j(ω - ν)) dν
FS: x(t) y(t) ⟷ Σ_{l=-∞}^{∞} X[l] Y[k - l]

Parseval's theorem:
FT: ∫_{-∞}^{∞} |x(t)|^2 dt = (1/2π) ∫_{-∞}^{∞} |X(jω)|^2 dω
FS: (1/T) ∫_{⟨T⟩} |x(t)|^2 dt = Σ_{k=-∞}^{∞} |X[k]|^2

Duality:
FT: X(jt) ⟷ 2π x(-ω)
FS-DTFT: if x[n] ⟷ (DTFT) X(e^{jΩ}), then X(e^{jt}) ⟷ (FS; ω₀ = 1) x[-k]

Symmetry:
FT: x(t) real ⟷ X*(jω) = X(-jω);  x(t) imaginary ⟷ X*(jω) = -X(-jω);
    x(t) real and even ⟷ Im{X(jω)} = 0;  x(t) real and odd ⟷ Re{X(jω)} = 0
FS: x(t) real ⟷ X*[k] = X[-k];  x(t) imaginary ⟷ X*[k] = -X[-k];
    x(t) real and even ⟷ Im{X[k]} = 0;  x(t) real and odd ⟷ Re{X[k]} = 0

Discrete-Time Fourier Transform (DTFT) and Discrete-Time Fourier Series (DTFS) properties. Let x[n] ⟷ X(e^{jΩ}) and y[n] ⟷ Y(e^{jΩ}) denote DTFT pairs, and let x[n] ⟷ X[k] and y[n] ⟷ Y[k] denote DTFS pairs with fundamental frequency Ω₀ and period N.

Linearity:
DTFT: ax[n] + by[n] ⟷ aX(e^{jΩ}) + bY(e^{jΩ})
DTFS: ax[n] + by[n] ⟷ aX[k] + bY[k]

Time shift:
DTFT: x[n - n₀] ⟷ e^{-jΩn₀} X(e^{jΩ})
DTFS: x[n - n₀] ⟷ e^{-jkΩ₀n₀} X[k]

Frequency shift:
DTFT: e^{jΓn} x[n] ⟷ X(e^{j(Ω - Γ)})
DTFS: e^{jk₀Ω₀n} x[n] ⟷ X[k - k₀]

Scaling (for x_z[n] = 0 unless n is an integer multiple of p):
DTFT: x_z[pn] ⟷ X_z(e^{jΩ/p})
DTFS: x_z[pn] ⟷ p X_z[k], with fundamental frequency pΩ₀

Differentiation in frequency:
DTFT: -jn x[n] ⟷ (d/dΩ) X(e^{jΩ})

Summation:
DTFT: Σ_{k=-∞}^{n} x[k] ⟷ X(e^{jΩ})/(1 - e^{-jΩ}) + π X(e^{j0}) Σ_{k=-∞}^{∞} δ(Ω - k2π)

Convolution:
DTFT: Σ_{l=-∞}^{∞} x[l] y[n - l] ⟷ X(e^{jΩ}) Y(e^{jΩ})
DTFS: Σ_{l=⟨N⟩} x[l] y[n - l] ⟷ N X[k] Y[k]

Modulation:
DTFT: x[n] y[n] ⟷ (1/2π) ∫_{2π} X(e^{jΓ}) Y(e^{j(Ω - Γ)}) dΓ
DTFS: x[n] y[n] ⟷ Σ_{l=⟨N⟩} X[l] Y[k - l]

Parseval's theorem:
DTFT: Σ_{n=-∞}^{∞} |x[n]|^2 = (1/2π) ∫_{2π} |X(e^{jΩ})|^2 dΩ
DTFS: (1/N) Σ_{n=⟨N⟩} |x[n]|^2 = Σ_{k=⟨N⟩} |X[k]|^2

Duality:
DTFS: X[n] ⟷ (1/N) x[-k]

Symmetry:
DTFT: x[n] real ⟷ X*(e^{jΩ}) = X(e^{-jΩ});  x[n] imaginary ⟷ X*(e^{jΩ}) = -X(e^{-jΩ});
      x[n] real and even ⟷ Im{X(e^{jΩ})} = 0;  x[n] real and odd ⟷ Re{X(e^{jΩ})} = 0
DTFS: x[n] real ⟷ X*[k] = X[-k];  x[n] imaginary ⟷ X*[k] = -X[-k];
      x[n] real and even ⟷ Im{X[k]} = 0;  x[n] real and odd ⟷ Re{X[k]} = 0

C.8 Relating the Four Fourier Representations

Let

g(t)  ⟷ (FS; ω₀ = 2π/T)  G[k]
v[n]  ⟷ (DTFT)  V(e^{jΩ})
w[n]  ⟷ (DTFS; Ω₀ = 2π/N)  W[k]

• FT REPRESENTATION FOR A CONTINUOUS-TIME PERIODIC SIGNAL

g(t)  ⟷ (FT)  G(jω) = 2π Σ_{k=-∞}^{∞} G[k] δ(ω - kω₀)

• DTFT REPRESENTATION FOR A DISCRETE-TIME PERIODIC SIGNAL

w[n]  ⟷ (DTFT)  W(e^{jΩ}) = 2π Σ_{k=-∞}^{∞} W[k] δ(Ω - kΩ₀)

• FT REPRESENTATION FOR A DISCRETE-TIME APERIODIC SIGNAL

v_δ(t) = Σ_{n=-∞}^{∞} v[n] δ(t - n𝒯)  ⟷ (FT)  V_δ(jω) = V(e^{jω𝒯})

where 𝒯 denotes the sampling interval.

• FT REPRESENTATION FOR A DISCRETE-TIME PERIODIC SIGNAL

w_δ(t) = Σ_{n=-∞}^{∞} w[n] δ(t - n𝒯)  ⟷ (FT)  W_δ(jω) = (2π/𝒯) Σ_{k=-∞}^{∞} W[k] δ(ω - kΩ₀/𝒯)
C.9 Sampling and Aliasing Relationships

Let

x(t)  ⟷ (FT)  X(jω)
v[n]  ⟷ (DTFT)  V(e^{jΩ})

• IMPULSE SAMPLING FOR CONTINUOUS-TIME SIGNALS

x_δ(t) = Σ_{n=-∞}^{∞} x(n𝒯) δ(t - n𝒯)  ⟷ (FT)  X_δ(jω) = (1/𝒯) Σ_{k=-∞}^{∞} X(j(ω - k 2π/𝒯))

X_δ(jω) is 2π/𝒯 periodic.

• SAMPLING A DISCRETE-TIME SIGNAL

y[n] = v[qn]  ⟷ (DTFT)  Y(e^{jΩ}) = (1/q) Σ_{m=0}^{q-1} V(e^{j(Ω - m2π)/q})

Y(e^{jΩ}) is 2π periodic.

• SAMPLING THE DTFT IN FREQUENCY

w[n] = Σ_{m=-∞}^{∞} v[n + mN]  ⟷ (DTFS; Ω₀ = 2π/N)  W[k] = (1/N) V(e^{jkΩ₀})

w[n] is N periodic.

• SAMPLING THE FT IN FREQUENCY

g(t) = Σ_{m=-∞}^{∞} x(t + mT)  ⟷ (FS; ω₀ = 2π/T)  G[k] = (1/T) X(jkω₀)

g(t) is T periodic.
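
The subsampling relation above lends itself to a quick numerical check. The following sketch is an added illustration, not part of the original tables; it assumes the NumPy library is available and evaluates both sides of the relation on a finite-length signal by direct DTFT sums.

import numpy as np

def dtft(x, Omega):
    # Evaluate X(e^{jW}) = sum_n x[n] exp(-j*W*n) at the frequencies in Omega.
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * w * n)) for w in Omega])

rng = np.random.default_rng(0)
v = rng.standard_normal(12)          # arbitrary finite-length signal
q = 3
y = v[::q]                           # y[n] = v[q*n]

Omega = np.linspace(-np.pi, np.pi, 101)
lhs = dtft(y, Omega)
rhs = sum(dtft(v, (Omega - 2 * np.pi * m) / q) for m in range(q)) / q

print(np.max(np.abs(lhs - rhs)))     # ~1e-15: the two sides agree
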
Tables of Laplace Transforms
and Properties


D.1 Basic Laplace Transforms
(signal ⟷ transform; ROC)

u(t)  ⟷  1/s;  Re{s} > 0

t u(t)  ⟷  1/s^2;  Re{s} > 0

δ(t - τ), τ > 0  ⟷  e^{-sτ};  for all s

e^{-at} u(t)  ⟷  1/(s + a);  Re{s} > -a

t e^{-at} u(t)  ⟷  1/(s + a)^2;  Re{s} > -a

[cos(ω₁t)] u(t)  ⟷  s/(s^2 + ω₁^2);  Re{s} > 0

[sin(ω₁t)] u(t)  ⟷  ω₁/(s^2 + ω₁^2);  Re{s} > 0

[e^{-at} cos(ω₁t)] u(t)  ⟷  (s + a)/[(s + a)^2 + ω₁^2];  Re{s} > -a

[e^{-at} sin(ω₁t)] u(t)  ⟷  ω₁/[(s + a)^2 + ω₁^2];  Re{s} > -a

• BILATERAL LAPLACE TRANSFORMS FOR SIGNALS THAT ARE NONZERO FOR t ≤ 0

(signal ⟷ bilateral transform; ROC)

δ(t - τ), τ ≤ 0  ⟷  e^{-sτ};  for all s

-u(-t)  ⟷  1/s;  Re{s} < 0

-t u(-t)  ⟷  1/s^2;  Re{s} < 0

-e^{-at} u(-t)  ⟷  1/(s + a);  Re{s} < -a

-t e^{-at} u(-t)  ⟷  1/(s + a)^2;  Re{s} < -a
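
Individual entries in these tables are easy to confirm symbolically. The sketch below is an added illustration, not part of the original tables; it assumes the SymPy library is available and checks the e^{-at} cos(ω₁t) u(t) entry of Table D.1.

import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w1 = sp.symbols('a omega1', positive=True)

# Unilateral Laplace transform of e**(-a*t)*cos(w1*t); laplace_transform
# returns (transform, convergence abscissa, conditions).
X, plane, cond = sp.laplace_transform(sp.exp(-a*t) * sp.cos(w1*t), t, s)
print(sp.simplify(X))   # expect a form equivalent to (s + a)/((s + a)**2 + w1**2)
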

D.2 Laplace Transform Properties

Let x(t) ⟷ X(s) with ROC R_x and y(t) ⟷ Y(s) with ROC R_y. Unless noted otherwise, the unilateral and bilateral transforms of the listed signal coincide; the stated ROC applies to the bilateral transform.

ax(t) + by(t)  ⟷  aX(s) + bY(s);  ROC: at least R_x ∩ R_y

x(t - τ)  ⟷  e^{-sτ} X(s);  ROC: R_x
(for the unilateral transform this requires x(t - τ)u(t) = x(t - τ)u(t - τ))

e^{s₀t} x(t)  ⟷  X(s - s₀);  ROC: R_x shifted by Re{s₀}

x(at)  ⟷  (1/|a|) X(s/a);  ROC: s/a ∈ R_x

x(t) * y(t)  ⟷  X(s) Y(s);  ROC: at least R_x ∩ R_y

-t x(t)  ⟷  (d/ds) X(s);  ROC: R_x

(d/dt) x(t)  ⟷  sX(s) (bilateral);  sX(s) - x(0⁻) (unilateral);  ROC: at least R_x

∫_{-∞}^{t} x(τ) dτ  ⟷  X(s)/s (bilateral);  X(s)/s + (1/s) ∫_{-∞}^{0⁻} x(τ) dτ (unilateral);  ROC: at least R_x ∩ {Re{s} > 0}

• INITIAL VALUE THEOREM

lim_{s→∞} sX(s) = x(0⁺)

This result does not apply to rational functions X(s) in which the order of the numerator
polynomial is equal to or greater than the order of the denominator polynomial, since this
implies that X(s) contains terms of the form cs^k, k ≥ 0. Such terms correspond to impulses
and their derivatives located at time t = 0 and are excluded from the unilateral Laplace
transform.

• FINAL VALUE THEOREM

lim_{s→0} sX(s) = lim_{t→∞} x(t)

This result requires that all the poles of sX(s) be in the left half of the s-plane.
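
As an added illustration (not part of the original tables), both theorems can be checked symbolically for a simple transform. The sketch below assumes the SymPy library is available and uses x(t) = e^{-2t}u(t) + u(t), for which X(s) = 1/(s + 2) + 1/s and all poles of sX(s) lie in the left half-plane.

import sympy as sp

s = sp.symbols('s')
X = 1/(s + 2) + 1/s                  # transform of x(t) = e**(-2*t)*u(t) + u(t)

initial = sp.limit(s*X, s, sp.oo)    # initial value theorem: expect x(0+) = 2
final = sp.limit(s*X, s, 0)          # final value theorem: expect lim x(t) = 1
print(initial, final)
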

• UNILATERAL DIFFERENTIATION PROPERTY - GENERAL FORM

d^n/dt^n x(t)  ⟷  s^n X(s) - d^{n-1}/dt^{n-1} x(t)|_{t=0⁻} - s d^{n-2}/dt^{n-2} x(t)|_{t=0⁻} - ··· - s^{n-2} (d/dt) x(t)|_{t=0⁻} - s^{n-1} x(0⁻)

where each derivative on the right-hand side is evaluated at t = 0⁻.
Tables of z-Transforms and Properties


E.1 Basic z-Transforms
(signal ⟷ transform; ROC)

δ[n]  ⟷  1;  all z

u[n]  ⟷  1/(1 - z^{-1});  |z| > 1

a^n u[n]  ⟷  1/(1 - a z^{-1});  |z| > |a|

n a^n u[n]  ⟷  a z^{-1}/(1 - a z^{-1})^2;  |z| > |a|

[cos(Ω₁n)] u[n]  ⟷  (1 - z^{-1} cos Ω₁)/(1 - 2 z^{-1} cos Ω₁ + z^{-2});  |z| > 1

[sin(Ω₁n)] u[n]  ⟷  (z^{-1} sin Ω₁)/(1 - 2 z^{-1} cos Ω₁ + z^{-2});  |z| > 1

[r^n cos(Ω₁n)] u[n]  ⟷  (1 - r z^{-1} cos Ω₁)/(1 - 2 r z^{-1} cos Ω₁ + r^2 z^{-2});  |z| > r

[r^n sin(Ω₁n)] u[n]  ⟷  (r z^{-1} sin Ω₁)/(1 - 2 r z^{-1} cos Ω₁ + r^2 z^{-2});  |z| > r
Introduction


1.1 What Is a Signal?
Signals, in one form or another, constitute a basic ingredient of our daily lives. For ex-
ample, a common form of human communication takes place through the use of speech
signals, be that in a face-to-face conversation or over a telephone channel. Another com-
mon form of human communication is visual in nature, with the signals taking the form
of images of people or objects around us.
Yet another form of human communication is through electronic mail over the In-
ternet. In addition to mail, the Internet provides a powerful medium for searching for
information of general interest, advertising, telecommuting, education, and games. All of
these forms of communication over the Internet involve the use of information-bearing
signals of one kind or another. Other real-life examples where signals of interest arise are
discussed in what follows.
By listening to the heartbeat of a patient and monitoring his/her blood pressure and
temperature, a doctor is able to diagnose the presence or absence of an illness or disease.
These quantities represent signals that convey information to the doctor about the state
of health of the patient.
In listening to a weather forecast over the radio, we hear references made to daily
variations in temperature, humidity, and the speed and direction of prevalent winds. The
signals represented by these quantities help us, for example, to form an opinion about
whether to stay indoors or go out for a walk.
The daily fluctuations in the prices of stocks and commodities on world markets, in
their own ways, represent signals that convey information on how the shares in a particular
company or corporation are doing. On the basis of this information, decisions are made
on whether to make new investments or sell old ones.
A probe exploring outer space sends valuable information about a faraway planet
back to an Earth station. The information may take the form of radar images representing
surface profiles of the planet, infrared images conveying information on how hot the planet
is, or optical images revealing the presence of clouds around the planet. By studying these
images, our knowledge of the unique characteristics of the planet in question is enhanced
significantly.
Indeed, the list of what constitutes a signal is almost endless.
A signal is formally defined as a function of one or more variables, which conveys
information on the nature of a physical phenomenon. When the function depends on a
