
Signals and Systems

Simon Haykin
McMaster University

Barry Van Veen

University of Wisconsin


New York • Chichester • Weinheim • Brisbane • Singapore • Toronto

To Nancy and Kathy, Emily, David, and Jonathan

EDITOR Bill Zobrist

SENIOR PRODUCTION MANAGER Lucille Buonocore
SENIOR PRODUCTION EDITOR Monique Calello
SENIOR DESIGNER Laura Boucher
COVER DESIGNER Laura Boucher
COVER PHOTO Courtesy of NASA
ILLUSTRATION EDITOR Sigmund Malinowski
ILLUSTRATION Wellington Studios

This book was set in Times Roman by the UG division of GGS Information Services and printed and bound by
Quebecor Printing, Kingsport. The cover was printed by Phoenix Color Corporation.

This book is printed on acid-free paper.

The paper in this book was manufactured by a mill whose forest management programs include sustained
yield harvesting of its timberlands. Sustained yield harvesting principles ensure that the number of trees cut
each year does not exceed the amount of new growth.

Copyright © 1999, John Wiley & Sons, Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by
any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under
Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of
the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance
Center, 222 Rosewood Drive, Danvers, MA 01923, (508) 750-8400, fax (508) 750-4470. Requests to the
Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc.,
605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail:

Library of Congress Cataloging-in-Publication Data

Haykin, Simon
Signals and systems / Simon Haykin, Barry Van Veen.
p. cm.
Includes index.
ISBN 0-471-13820-7 (cloth : alk. paper)
1. Signal processing. 2. System analysis. 3. Linear time-
invariant systems. 4. Telecommunication systems. I. Van Veen,
Barry. II. Title.
TK5102.5.H37 1999
621.382'2-dc21 97-52090

Printed in the United States of America

10 9 8 7 6

Ts settling time
X(τ, jω) short-time Fourier transform of x(t)
Wx(τ, a) wavelet transform of x(t)


A/D analog-to-digital (converter)

AM amplitude modulation
BIBO bounded input bounded output
CW continuous wave
D/A digital-to-analog (converter)
dB decibel
DOF degree of freedom
DSB-SC double sideband-suppressed carrier
DTFS discrete-time Fourier series
DTFT discrete-time Fourier transform
FDM frequency-division multiplexing
FFT fast Fourier transform
FIR finite-duration impulse response
FM frequency modulation
FS Fourier series
FT Fourier transform
Hz hertz
IIR infinite-duration impulse response
LTI linear time-invariant (system)
MRI magnetic resonance image
MSE mean squared error
PAM pulse-amplitude modulation
PCM pulse-code modulation
PM phase modulation
QAM quadrature-amplitude modulation
ROC region of convergence
rad radian(s)
s second
SSB single sideband modulation
STFT short-time Fourier transform
TDM time-division multiplexing
VSB vestigial sideband modulation
WT wavelet transform


Each ''Exploring Concepts with MATLAB'' section is designed to instruct the student
on the proper application of the relevant MATLAB commands and develop additional
insight into the concepts introduced in the chapter. Minimal previous exposure to MATLAB
is assumed. The MATLAB code for all the computations performed in the book, including
the last chapter, is available on the Wiley Web Site: http://www.wiley.com/college
There are 10 chapters in the book, organized as follows:
• Chapter 1 begins by motivating the reader as to what signals and systems are and
how they arise in communication systems, control systems, remote sensing, biomed-
ical signal processing, and the auditory system. It then describes the different classes
of signals, defines certain elementary signals, and introduces the basic notions in-
volved in the characterization of systems.
• Chapter 2 presents a detailed treatment of time-domain representations of linear
time-invariant (LTI) systems. It develops convolution from the representation of an
input signal as a superposition of impulses. The notions of causality, memory, sta-
bility, and invertibility that were briefly introduced in Chapter 1 are then revisited
in terms of the impulse response description for LTI systems. The steady-state re-
sponse of an LTI system to a sinusoidal input is used to introduce the concept of
frequency response. Differential- and difference-equation representations for linear
time-invariant systems are also presented. Next, block diagram representations of
LTI systems are introduced. The chapter finishes with a discussion of the state-
variable description of LTI systems.
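The superposition-of-impulses development of convolution summarized above is easy to sketch numerically. The book's own computations use MATLAB; the Python/NumPy fragment below is an illustrative stand-in (the signal values are made up), checking that summing shifted, weighted copies of the impulse response reproduces the direct convolution:

```python
import numpy as np

# Hypothetical impulse response of an FIR (finite-duration) LTI system
h = np.array([1.0, 0.5, 0.25])
# An arbitrary input signal
x = np.array([2.0, -1.0, 3.0])

# Direct convolution: y[n] = sum_k x[k] h[n - k]
y = np.convolve(x, h)

# Superposition of impulses: add x[k] copies of h shifted to position k
y_super = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y_super[k:k + len(h)] += xk * h

assert np.allclose(y, y_super)  # the two constructions agree
```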
• Chapter 3 deals with the Fourier representation of signals. In particular, the Fourier
representations of four fundamental classes of signals are thoroughly discussed in a
unified manner:
• Discrete-time periodic signals: the discrete-time Fourier series
• Continuous-time periodic signals: the Fourier series
• Discrete-time nonperiodic signals: the discrete-time Fourier transform
• Continuous-time nonperiodic signals: the Fourier transform
A novel feature of the chapter is the way in which similarities between these four
representations are exploited and the differences between them are highlighted. The
fact that complex sinusoids are eigenfunctions of LTI systems is used to motivate the
representation of signals in terms of complex sinusoids. The basic form of the Fourier
representation for each signal class is introduced and the four representations are
developed in sequence. Next, the properties of all four representations are studied
side by side. A strict separation between signal classes and the corresponding Fourier
representations is maintained throughout the chapter. It is our conviction that this
parallel, yet separate, treatment minimizes confusion between representations and
aids later mastery of proper application for each. Mixing of Fourier representations
occurs naturally in the context of analysis and computational applications and is
thus deferred to Chapter 4.
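The eigenfunction property invoked above can be checked numerically. The following Python/NumPy sketch (illustrative only, not from the book; the impulse response and frequency are arbitrary) passes a complex sinusoid through an FIR system and confirms that the steady-state output is the input scaled by the frequency response H(e^jΩ):

```python
import numpy as np

# Complex sinusoids are eigenfunctions of discrete-time LTI systems:
# for input x[n] = e^{jΩn}, the output past the start-up transient of a
# causal FIR system is H(e^{jΩ}) e^{jΩn}.
h = np.array([1.0, 0.5, 0.25])   # hypothetical FIR impulse response
Omega = 0.3 * np.pi              # hypothetical test frequency (radians)
n = np.arange(200)
x = np.exp(1j * Omega * n)

y = np.convolve(x, h)[:len(n)]   # system output, truncated to input length

# Frequency response H(e^{jΩ}) = sum_k h[k] e^{-jΩk}
H = np.sum(h * np.exp(-1j * Omega * np.arange(len(h))))

# Beyond the first len(h) - 1 samples the eigenfunction relation holds
assert np.allclose(y[len(h):], H * x[len(h):])
```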
• Chapter 4 presents a thorough treatment of the applications of Fourier represen-
tations to the study of signals and LTI systems. Links between the frequency-domain
and time-domain system representations presented in Chapter 2 are established.
Analysis and computational applications are then used to motivate derivation of the
relationships between the four Fourier representations and develop the student's skill
in applying these tools. The continuous-time and discrete-time Fourier transform
representations of periodic signals are introduced for analyzing problems in which
there is a mixture of periodic and nonperiodic signals, such as application of a pe-
riodic input to an LTI system. The Fourier transform representation for discrete-time
signals is then developed as a tool for analyzing situations in which there is a mixture
of continuous-time and discrete-time signals. The sampling process and continu-
ous-time signal reconstruction from samples are studied in detail within this context.
Systems for discrete-time processing of continuous-time signals are also discussed,
including the issues of oversampling, decimation, and interpolation. The chapter
concludes by developing relationships between the discrete-time Fourier series and
the discrete-time and continuous-time Fourier transforms in order to introduce the
computational aspects of the Fourier analysis of signals.
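The sampling issues mentioned above lend themselves to a one-line numerical demonstration. This Python/NumPy sketch (illustrative only; the book's computations use MATLAB, and the rates here are hypothetical) shows aliasing: sampled at 100 Hz, a 130 Hz sinusoid is indistinguishable from a 30 Hz one.

```python
import numpy as np

# Sampling a continuous-time sinusoid: any frequency above half the
# sampling rate aliases to a lower frequency.
fs = 100.0                    # hypothetical sampling rate, Hz
t = np.arange(0, 1, 1 / fs)   # one second of sample instants

f1, f2 = 30.0, 130.0          # 130 Hz aliases to 130 - 100 = 30 Hz
x1 = np.cos(2 * np.pi * f1 * t)
x2 = np.cos(2 * np.pi * f2 * t)

# The two sampled sequences are identical sample by sample
assert np.allclose(x1, x2)
```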
• Chapter 5 presents an introductory treatment of linear modulation systems applied
to communication systems. Practical reasons for using modulation are described.
Amplitude modulation and its variants, namely, double sideband-suppressed carrier
modulation, single sideband modulation, and vestigial sideband modulation, are dis-
cussed. The chapter also includes a discussion of pulse-amplitude modulation and
its role in digital communications to again highlight a natural interaction between
continuous-time and discrete-time signals. The chapter includes a discussion of
frequency-division and time-division multiplexing techniques. It finishes with a treat-
ment of phase and group delays that arise when a modulated signal is transmitted
through a linear channel.
• Chapter 6 discusses the Laplace transform and its use for the complex exponential
representations of continuous-time signals and the characterization of systems. The
eigenfunction property of LTI systems and the existence of complex exponential
representations for signals that have no Fourier representation are used to motivate
the study of Laplace transforms. The unilateral Laplace transform is studied first and
applied to the solution of differential equations with initial conditions to reflect the
dominant role of the Laplace transform in engineering applications. The bilateral
Laplace transform is introduced next and is used to study issues of causality, stability,
invertibility, and the relationship between poles and zeros and frequency response.
The relationships between the transfer function description of LTI systems and the
time-domain descriptions introduced in Chapter 2 are developed.
• Chapter 7 is devoted to the z-transform and its use in the complex exponential rep-
resentation of discrete-time signals and the characterization of systems. As in Chapter
6, the z-transform is motivated as a more general representation than that of the
discrete-time Fourier transform. Consistent with its primary role as an analysis tool,
we begin with the bilateral z-transform. The properties of the z-transform and tech-
niques for inversion are introduced. Next, the z-transform is used for transform
analysis of systems. Relationships between the transfer function and time-domain
descriptions introduced in Chapter 2 are developed. Issues of invertibility, stability,
causality, and the relationship between the frequency response and poles and zeros
are revisited. The use of the z-transform for deriving computational structures for
implementing discrete-time systems on computers is introduced. Lastly, use of the
unilateral z-transform for solving difference equations is presented.
• Chapter 8 discusses the characterization and design of linear filters and equalizers.
The approximation problem, with emphasis on Butterworth functions and brief men-
tion of Chebyshev functions, is introduced. Direct and indirect methods for the design
of analog (i.e., continuous-time) and digital (i.e., discrete-time) types of filters are
presented. The window method for the design of finite-duration impulse response
digital filters and the bilinear transform method for the design of infinite-duration
impulse response digital filters are treated in detail. Filter design offers another op-
portunity to reinforce the links between continuous-time and discrete-time systems.
The chapter builds on material presented in Chapter 4 in developing a method for the
equalization of a linear channel using a discrete-time filter of finite impulse response.

Filters and equalizers provide a natural vehicle for developing an appreciation for how
to design systems required to meet prescribed frequency-domain specifications.
• Chapter 9 presents an introductory treatment of the many facets of linear feedback
systems. The various practical advantages of feedback and the cost of its application
are emphasized. The applications of feedback in the design of operational amplifiers
and feedback control systems are discussed in detail. The stability problem, basic to
the study of feedback systems, is treated in detail by considering the following:
• The root-locus method, related to the closed-loop transient response of the system
• The Nyquist stability criterion, related to the open-loop frequency response of the
system
The Nyquist stability criterion is studied using both the Nyquist locus and the Bode
diagram. The chapter also includes a discussion of sampled-data systems to illustrate
the natural interaction between continuous-time and discrete-time signals that occurs
in control applications.
• Chapter 10, the final chapter in the book, takes a critical look at limitations of the
representations of signals and systems presented in the previous chapters of the book.
It highlights other advanced tools, namely, time-frequency analysis (the short-time
Fourier transform and wavelets) and chaos, for the characterization of signals. It
also highlights the notions of nonlinearity and adaptivity in the study of systems. In
so doing, the student is made aware of the very broad nature of the subject of sig-
nals and systems and reminded of the limitations of the linear time-invariance
assumption.
In organizing the material as described, we have tried to follow theoretical material
by appropriate applications drawn from the fields of communication systems, design of
filters, and control systems. This has been done in order to provide a source of motivation
for the reader.
The material in this book can be used for either a one- or two-semester course se-
quence on signals and systems. A two-semester course sequence would cover most, if not
all, of the topics in the book. The material for a one-semester course can be arranged in a
variety of ways, depending on the preference of the instructor. We have attempted to
maintain maximum teaching flexibility in the selection and order of topics, subject to our
philosophy of truly integrating continuous-time and discrete-time concepts. Some sections
of the book include material that is considered to be of an advanced nature; these sections
are marked with an asterisk. The material covered in these sections can be omitted without
disrupting the continuity of the subject matter presented in the pertinent chapter.
The book finishes with the following appendices:
• Selected mathematical identities
• Partial fraction expansions
• Tables of Fourier representations and properties
• Tables of Laplace transforms and properties
• Tables of z-transforms and properties
A consistent set of notations is used throughout the book. Except for a few places, the
derivations of all the formulas are integrated into the text.
The book is accompanied by a detailed Solutions Manual for all the end-of-chapter
problems in the book. A copy of the Manual is only available to instructors adopting this
book for use in classrooms and may be obtained by writing to the publisher.

Acknowledgments

In writing this book over a period of four years, we have benefited enormously from the
insightful suggestions and constructive inputs received from many colleagues and reviewers:
• Professor Rajeev Agrawal, University of Wisconsin
• Professor Richard Baraniuk, Rice University
• Professor Jim Bucklew, University of Wisconsin
• Professor C. Sidney Burrus, Rice University
• Professor Dan Cobb, University of Wisconsin
• Professor Chris DeMarco, University of Wisconsin
• Professor John Gubner, University of Wisconsin
• Professor Yu Hu, University of Wisconsin
• Professor John Hung, Auburn University
• Professor Steve Jacobs, University of Pittsburgh
• Dr. James F. Kaiser, Bellcore
• Professor Joseph Kahn, University of California-Berkeley
• Professor Ramdas Kumaresan, University of Rhode Island
• Professor Truong Nguyen, Boston University
• Professor Robert Nowak, Michigan State University
• Professor S. Pasupathy, University of Toronto
• Professor John Platt, McMaster University
• Professor Naresh K. Sinha, McMaster University
• Professor Mike Thomson, University of Texas-Pan American
• Professor Anthony Vaz, McMaster University
We extend our gratitude to them all for helping us in their own individual ways to shape
the book into its final form.
Barry Van Veen is indebted to his colleagues at the University of Wisconsin, and
Professor Willis Tompkins, Chair of the Department of Electrical and Computer Engi-
neering, for allowing him to teach the Signals and Systems classes repeatedly while in the
process of working on this text.
We thank the many students at both McMaster and Wisconsin, whose suggestions
and questions have helped us over the years to refine and in some cases rethink the pre-
sentation of the material in this book. In particular, we thank Hugh Pasika, Eko Onggo
Sanusi, Dan Sebald, and Gil Raz for their invaluable help in preparing some of the com-
puter experiments, the solutions manual, and in reviewing page proofs.
The idea of writing this book was conceived when Steve Elliott was the Editor of
Electrical Engineering at Wiley. We are deeply grateful to him. We also wish to express
our gratitude to Charity Robey for undertaking the many helpful reviews of the book, and
Bill Zobrist, the present editor of Electrical Engineering at Wiley, for his strong support.
We wish to thank Monique Calello for dexterously managing the production of the book,
and Katherine Hepburn for her creative promotion of the book.
Lastly, Simon Haykin thanks his wife Nancy, and Barry Van Veen thanks his wife
Kathy and children Emily and David, for their support and understanding throughout the
long hours involved in writing this book.

Simon Haykin
Barry Van Veen

Notation xvi

CHAPTER 1 Introduction 1

1.1 What Is a Signal? 1

1.2 What Is a System? 2
1.3 Overview of Specific Systems 2
1.4 Classification of Signals 15
1.5 Basic Operations on Signals 22
1.6 Elementary Signals 29
1.7 Systems Viewed as Interconnections of Operations 42
1.8 Properties of Systems 44
1.9 Exploring Concepts with MATLAB 54
1.10 Summary 60
Further Reading 61
Problems 62

CHAPTER 2 Time-Domain Representations for Linear Time-Invariant Systems 70

2.1 Introduction 70
2.2 Convolution: Impulse Response Representation for LTI Systems 71
2.3 Properties of the Impulse Response Representation for LTI Systems 94
2.4 Differential and Difference Equation Representations for LTI Systems 108
2.5 Block Diagram Representations 121
2.6 State-Variable Descriptions for LTI Systems 125
2.7 Exploring Concepts with MATLAB 133
2.8 Summary 142
Further Reading 143
Problems 144

CHAPTER 3 Fourier Representations for Signals 155

3.1 Introduction 155

3.2 Discrete-Time Periodic Signals: The Discrete-Time Fourier Series 160
3.3 Continuous-Time Periodic Signals: The Fourier Series 171
3.4 Discrete-Time Nonperiodic Signals: The Discrete-Time Fourier Transform 182
3.5 Continuous-Time Nonperiodic Signals: The Fourier Transform 190
3.6 Properties of Fourier Representations 196
3.7 Exploring Concepts with MATLAB 237
3.8 Summary 241
Further Reading 242
Problems 243

CHAPTER 4 Applications of Fourier Representations 256

4.1 Introduction 256

4.2 Frequency Response of LTI Systems 257
4.3 Fourier Transform Representations for Periodic Signals 266
4.4 Convolution and Modulation with Mixed Signal Classes 272
4.5 Fourier Transform Representation for Discrete-Time Signals 279
4.6 Sampling 283
4.7 Reconstruction of Continuous-Time Signals from Samples 291
*4.8 Discrete-Time Processing of Continuous-Time Signals 301
4.9 Fourier Series Representations for Finite-Duration Nonperiodic Signals 311
*4.10 Computational Applications of the Discrete-Time Fourier Series 316
*4.11 Efficient Algorithms for Evaluating the DTFS 327
4.12 Exploring Concepts with MATLAB 331
4.13 Summary 336
Further Reading 336
Problems 337

CHAPTER 5 Application to Communication Systems 349

5.1 Introduction 349
5.2 Types of Modulation 349
5.3 Benefits of Modulation 353
5.4 Full Amplitude Modulation 354
5.5 Double Sideband-Suppressed Carrier Modulation 362
5.6 Quadrature-Carrier Multiplexing 366
5.7 Other Variants of Amplitude Modulation 367
5.8 Pulse-Amplitude Modulation 372
5.9 Multiplexing 376

*5.10 Phase and Group Delays 381

5.11 Exploring Concepts with MATLAB 385
5.12 Summary 395
Further Reading 396
Problems 397

CHAPTER 6 Representation of Signals Using Continuous-Time Complex Exponentials: The Laplace Transform 401

6.1 Introduction 401

6.2 The Laplace Transform 401
6.3 The Unilateral Laplace Transform 407
6.4 Inversion of the Laplace Transform 412
6.5 Solving Differential Equations with Initial Conditions 416
6.6 The Bilateral Laplace Transform 423
6.7 Transform Analysis of Systems 432
6.8 Exploring Concepts with MATLAB 446
6.9 Summary 449
Further Reading 450
Problems 450

CHAPTER 7 Representation of Signals Using Discrete-Time Complex Exponentials: The z-Transform 455

7.1 Introduction 455

7.2 The z-Transform 455
*7.3 Properties of the Region of Convergence 463
7.4 Properties of the z-Transform 468
7.5 Inversion of the z-Transform 472
7.6 Transform Analysis of LTI Systems 479
7.7 Computational Structures for Implementing Discrete-Time Systems 489
7.8 The Unilateral z-Transform 493
7.9 Exploring Concepts with MATLAB 479
7.10 Summary 500
Further Reading 501
Problems 501

CHAPTER 8 Application to Filters and Equalizers 508

8.1 Introduction 508

8.2 Conditions for Distortionless Transmission 508
8.3 Ideal Lowpass Filters 510
8.4 Design of Filters 517

8.5 Approximating Functions 518

8.6 Frequency Transformations 524
8.7 Passive Filters 526
8.8 Digital Filters 527
8.9 FIR Digital Filters 528
8.10 IIR Digital Filters 538
8.11 Linear Distortion 542
8.12 Equalization 543
8.13 Exploring Concepts with MATLAB 546
8.14 Summary 551
Further Reading 551
Problems 552

CHAPTER 9 Application to Feedback Systems 556

9.1 Introduction 556
9.2 Basic Feedback Concepts 557
9.3 Sensitivity Analysis 559
9.4 Effect of Feedback on Disturbances or Noise 561
9.5 Distortion Analysis 562
9.6 Cost of Feedback 564
9.7 Operational Amplifiers 564
9.8 Control Systems 569
9.9 Transient Response of Low-Order Systems 576
9.10 Time-Domain Specifications 579
9.11 The Stability Problem 581
9.12 Routh-Hurwitz Criterion 585
9.13 Root Locus Method 588
9.14 Reduced-Order Models 597
*9.15 Nyquist Stability Criterion 600
9.16 Bode Diagram 600
*9.17 Sampled-Data Systems 607
9.18 Design of Control Systems 625
9.19 Exploring Concepts with MATLAB 633
9.20 Summary 639
Further Reading 640
Problems 640

CHAPTER 10 Epilogue 648

10.1 Physical Properties of Real-Life Signals 648

10.2 Time-Frequency Analysis 652

10.3 Departures from the ''Linear Time-Invariant System'' Model 659

10.4 Concluding Remarks 665
Further Reading 666

APPENDIX A Selected Mathematical Identities 667

A.1 Trigonometry 667

A.2 Complex Numbers 668
A.3 Geometric Series 669
A.4 Definite Integrals 669
A.5 Matrices 670

APPENDIX B Partial Fraction Expansions 671

B.1 Partial Fraction Expansions for Continuous-Time Representations 671

B.2 Partial Fraction Expansions for Discrete-Time Representations 674

APPENDIX C Tables of Fourier Representations and Properties 676

C.1 Basic Discrete-Time Fourier Series Pairs 676

C.2 Basic Fourier Series Pairs 677
C.3 Basic Discrete-Time Fourier Transform Pairs 677
C.4 Basic Fourier Transform Pairs 678
C.5 Fourier Transform Pairs for Periodic Signals 678
C.6 Discrete-Time Fourier Transform Pairs for Periodic Signals 679
C.7 Properties of Fourier Representations 680
C.8 Relating the Four Fourier Representations 682
C.9 Sampling and Aliasing Relationships 682

APPENDIX D Tables of Laplace Transforms and Properties 684

D.1 Basic Laplace Transforms 684

D.2 Laplace Transform Properties 685

APPENDIX E Tables of z-Transforms and Properties 687
E.1 Basic z-Transforms 687
E.2 z-Transform Properties 688

[·] indicates discrete-valued independent variable, for example, x[n]

(·) indicates continuous-valued independent variable, for example, x(t)
• Lowercase functions denote time-domain quantities, for example, x(t), w[n]
• Uppercase functions denote frequency- or transform-domain quantities
X[k] discrete-time Fourier series coefficients for x[n]
X[k] Fourier series coefficients for x(t)
X(e^jΩ) discrete-time Fourier transform of x[n]
X(jω) Fourier transform of x(t)
X(s) Laplace transform of x(t)
X(z) z-transform of x[n]
• Boldface lowercase symbols denote vector quantities, for example, q
• Boldface uppercase symbols denote matrix quantities, for example, A
• Subscript δ indicates continuous-time representation for a discrete-time signal
x_δ(t) continuous-time representation for x[n]
X_δ(jω) Fourier transform of x_δ(t)
• Sans serif type indicates MATLAB variables or commands, for example,
X = fft(x,n)
• 0^0 is defined as 1 for convenience
• arctan refers to the four-quadrant function and produces a value between -π and
π radians
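The four-quadrant convention above can be illustrated with a small sketch (not from the book; Python's math.atan2 plays the role of the four-quadrant arctan here):

```python
import math

# The four-quadrant arctangent uses the signs of both arguments to
# resolve the quadrant, so its range is (-pi, pi] rather than the
# principal-value range (-pi/2, pi/2).
assert math.isclose(math.atan2(1, 1), math.pi / 4)         # first quadrant
assert math.isclose(math.atan2(1, -1), 3 * math.pi / 4)    # second quadrant
assert math.isclose(math.atan2(-1, -1), -3 * math.pi / 4)  # third quadrant
assert -math.pi < math.atan2(-1, 1) <= math.pi             # within (-pi, pi]
```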


|c| magnitude of complex quantity c

arg{c} phase angle of complex quantity c
Re{c} real part of c
Im{c} imaginary part of c
c* complex conjugate of c
j square root of -1

i square root of -1 used by MATLAB
𝒯 sampling interval in seconds
T fundamental period for continuous-time signal in seconds
N fundamental period for discrete-time signal in samples
ω (angular) frequency for continuous-time signal in radians/second

Ω (angular) frequency for discrete-time signal in radians

ω₀ fundamental (angular) frequency for continuous-time periodic signal
in radians/second
Ω₀ fundamental (angular) frequency for discrete-time periodic signal in
radians
u(t), u[n] step function of unit amplitude
δ[n], δ(t) impulse function of unit strength
H{·} representation of a system as an operator H
S^τ{·} time shift of τ units
H^-1, h^-1 superscript -1 denotes inverse system
* denotes convolution operation
H(e^jΩ) discrete-time system frequency response
H(jω) continuous-time system frequency response
h[n] discrete-time system impulse response
h(t) continuous-time system impulse response
y^(n) superscript (n) denotes natural response
y^(f) superscript (f) denotes forced response
y^(p) superscript (p) denotes particular solution
DTFS;Ω₀ ↔ discrete-time Fourier series pair with fundamental frequency Ω₀
FS;ω₀ ↔ Fourier series pair with fundamental frequency ω₀
DTFT ↔ discrete-time Fourier transform pair
FT ↔ Fourier transform pair
ℒ ↔ Laplace transform pair
ℒu ↔ unilateral Laplace transform pair
z ↔ z-transform pair
zu ↔ unilateral z-transform pair

sinc(u) sin(πu)/(πu)

⊛ periodic convolution of two periodic signals

T(s) closed-loop transfer function

F(s) return difference
L(s) loop transfer function
ess steady-state error
Kp position error constant
Kv velocity error constant
Ka acceleration error constant
P.O. percentage overshoot
Tp peak time
Tr rise time

The study of signals and systems is basic to the discipline of electrical engineering at all
levels. It is an extraordinarily rich subject with diverse applications. Indeed, a thorough
understanding of signals and systems is essential for a proper appreciation and application
of other parts of electrical engineering, such as signal processing, communication systems,
and control systems.
This book is intended to provide a modern treatment of signals and systems at an
introductory level. As such, it is intended for use in electrical engineering curricula in the
sophomore or junior years and is designed to prepare students for upper-level courses in
communication systems, control systems, and digital signal processing.
The book provides a balanced and integrated treatment of continuous-time and
discrete-time forms of signals and systems intended to reflect their roles in engineering
practice. Specifically, these two forms of signals and systems are treated side by side. This
approach has the pedagogical advantage of helping the student see the fundamental sim-
ilarities and differences between discrete-time and continuous-time representations. Real-
world problems often involve mixtures of continuous-time and discrete-time forms, so the
integrated treatment also prepares the student for practical usage of these concepts. This
integrated philosophy is carried over to the chapters of the book that deal with applications
of signals and systems in modulation, filtering, and feedback systems.
Abundant use is made of examples and drill problems with answers throughout the
book. All of these are designed to help the student understand and master the issues under
consideration. The last chapter is the only one without drill problems. Each chapter, except
for the last chapter, includes a large number of end-of-chapter problems designed to test
the student on the material covered in the chapter. Each chapter also includes a list of
references for further reading and a collection of historical remarks.
Another feature of the book is the emphasis given to design. In particular, the chap-
ters dealing with applications include illustrative design examples.
MATLAB, an acronym for MATrix LABoratory and a product of The MathWorks, Inc.,
has emerged as a powerful environment for the experimental study of signals and systems.
We have chosen to integrate MATLAB in the text by including a section entitled ''Ex-
ploring Concepts with MATLAB'' in every chapter, except for the concluding chapter. In
making this choice, we have been guided by the conviction that MATLAB provides a
computationally efficient basis for a ''Software Laboratory,'' where concepts are explored
and system designs are tested. Accordingly, we have placed the section on MATLAB before
the ''Summary'' section, thereby relating to and building on the entire body of material
discussed in the preceding sections of the pertinent chapter. This approach also offers the
instructor flexibility to either formally incorporate MATLAB exploration into the class-
room or leave it for the students to pursue on their own.


Signal           Bilateral Transform    ROC

-u[-n - 1]       1/(1 - z⁻¹)            |z| < 1

-aⁿu[-n - 1]     1/(1 - az⁻¹)           |z| < |a|

-naⁿu[-n - 1]    az⁻¹/(1 - az⁻¹)²       |z| < |a|

E.2 z-Transform Properties

Signal           Unilateral Transform   Bilateral Transform   ROC

x[n]             X(z)                   X(z)                  Rx
y[n]             Y(z)                   Y(z)                  Ry
ax[n] + by[n]    aX(z) + bY(z)          aX(z) + bY(z)         At least Rx ∩ Ry
x[n - k]         See below              z⁻ᵏX(z)               Rx, except possibly |z| = 0, ∞
aⁿx[n]           X(z/a)                 X(z/a)                |a|Rx
x[-n]            —                      X(1/z)                1/Rx
x[n] * y[n]      X(z)Y(z)               X(z)Y(z)              At least Rx ∩ Ry
nx[n]            -z dX(z)/dz            -z dX(z)/dz           Rx, except possibly addition
                                                              or deletion of z = 0

Unilateral transform of shifted signals (k > 0):

x[n - k] ↔ x[-k] + x[-k + 1]z⁻¹ + ··· + x[-1]z⁻ᵏ⁺¹ + z⁻ᵏX(z)
x[n + k] ↔ -x[0]zᵏ - x[1]zᵏ⁻¹ - ··· - x[k - 1]z + zᵏX(z)
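Pairs like those tabulated above can be checked numerically. The sketch below (an illustration, not part of the book's appendix; the test point and term count are arbitrary) verifies the right-sided counterpart pair aⁿu[n] ↔ 1/(1 - az⁻¹), ROC |z| > |a|, by direct summation of the defining series:

```python
import numpy as np

# Verify a^n u[n] <-> 1/(1 - a z^{-1}), ROC |z| > |a|, by summing
# the series X(z) = sum_{n >= 0} a^n z^{-n} at a point inside the ROC.
a = 0.5
z = 1.5 * np.exp(1j * 0.7)       # hypothetical test point with |z| > |a|

n = np.arange(200)               # geometric series; 200 terms is ample
X_series = np.sum(a**n * z**(-n))
X_closed = 1.0 / (1.0 - a / z)

assert np.isclose(X_series, X_closed)
```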

single variable, the signal is said to be one-dimensional. A speech signal is an example of
a one-dimensional signal whose amplitude varies with time, depending on the spoken word
and who speaks it. When the function depends on two or more variables, the signal is said
to be multidimensional. An image is an example of a two-dimensional signal, with the
horizontal and vertical coordinates of the image representing the two dimensions.

1.2 What Is a System?

In the examples of signals mentioned above, there is always a system associated with the generation of each signal and another system associated with the extraction of information from the signal. For example, in speech communication, a sound source or signal excites the vocal tract, which represents a system. The processing of speech signals usually relies on the use of our ears and auditory pathways in the brain. In the situation described here, the systems responsible for the production and reception of signals are biological in nature. They could also be performed using electronic systems that try to emulate or mimic their biological counterparts. For example, the processing of a speech signal may be performed by an automatic speech recognition system in the form of a computer program that recognizes words or phrases.
There is no unique purpose for a system. Rather, the purpose depends on the application of interest. In an automatic speaker recognition system, the function of the system is to extract information from an incoming speech signal for the purpose of recognizing or identifying the speaker. In a communication system, the function of the system is to transport the information content of a message signal over a communication channel and deliver it to a destination in a reliable fashion. In an aircraft landing system, the requirement is to keep the aircraft on the extended centerline of a runway.
A system is formally defined as an entity that manipulates one or more signals to accomplish a function, thereby yielding new signals. The interaction between a system and its associated signals is illustrated schematically in Fig. 1.1. The descriptions of the input and output signals naturally depend on the intended application of the system:
• In an automatic speaker recognition system, the input signal is a speech (voice) signal, the system is a computer, and the output signal is the identity of the speaker.
• In a communication system, the input signal could be a speech signal or computer data, the system itself is made up of the combination of a transmitter, channel, and receiver, and the output signal is an estimate of the original message signal.
• In an aircraft landing system, the input signal is the desired position of the aircraft relative to the runway, the system is the aircraft, and the output signal is a correction to the lateral position of the aircraft.

1.3 Overview of Specific Systems

In describing what we mean by signals and systems in the previous two sections, we mentioned several applications of signals and systems. In this section we will expand on five

FIGURE 1.1 Block diagram representation of a system.


FIGURE 1.2 Elements of a communication system. The transmitter changes the message signal into a form suitable for transmission over the channel. The receiver processes the channel output (i.e., the received signal) to produce an estimate of the message signal.

of those applications, namely, communication systems, control systems, remote sensing, biomedical signal processing, and auditory systems.


Communication Systems

There are three basic elements to every communication system, namely, transmitter, channel, and receiver, as depicted in Fig. 1.2. The transmitter is located at one point in space, the receiver is located at some other point separate from the transmitter, and the channel is the physical medium that connects them together. Each of these three elements may be viewed as a system with associated signals of its own. The purpose of the transmitter is to convert the message signal produced by a source of information into a form suitable for transmission over the channel. The message signal could be a speech signal, television (video) signal, or computer data. The channel may be an optical fiber, coaxial cable, satellite channel, or mobile radio channel; each of these channels has its specific area of application.
As the transmitted signal propagates over the channel, it is distorted due to the physical characteristics of the channel. Moreover, noise and interfering signals (originating from other sources) contaminate the channel output, with the result that the received signal is a corrupted version of the transmitted signal. The function of the receiver is to operate on the received signal so as to reconstruct a recognizable form (i.e., produce an estimate) of the original message signal and deliver it to the user destination. The signal-processing role of the receiver is thus the reverse of that of the transmitter; in addition, the receiver reverses the effects of the channel.
Details of the operations performed in the transmitter and receiver depend on the type of communication system being considered. The communication system can be of an analog or digital type. In signal-processing terms, the design of an analog communication system is relatively simple. Specifically, the transmitter consists of a modulator and the receiver consists of a demodulator. Modulation is the process of converting the message signal into a form that is compatible with the transmission characteristics of the channel. Ordinarily, the transmitted signal is represented as amplitude, phase, or frequency variation of a sinusoidal carrier wave. We thus speak of amplitude modulation, phase modulation, or frequency modulation, respectively. Correspondingly, through the use of amplitude demodulation, phase demodulation, or frequency demodulation, an estimate of the original message signal is produced at the receiver output. Each one of these analog modulation/demodulation techniques has its own advantages and disadvantages.
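As a concrete glimpse of the modulation idea, the short sketch below generates an amplitude-modulated carrier whose envelope follows a low-frequency message. All of the numerical parameters (sampling rate, carrier and message frequencies, modulation depth) are illustrative choices, not values from the text.

```python
import numpy as np

# Amplitude modulation: the carrier's envelope tracks the message signal.
# All parameter values here are assumed for illustration.
fs = 8000.0                        # sampling rate in Hz
t = np.arange(0, 0.01, 1.0 / fs)   # 10 ms of samples (80 points)
fc, fm = 1000.0, 100.0             # carrier and message frequencies
m = 0.5 * np.cos(2 * np.pi * fm * t)         # message, |m| <= 0.5
s = (1.0 + m) * np.cos(2 * np.pi * fc * t)   # AM signal
```

A demodulator would recover m from the envelope of s, for example by rectifying s and then lowpass filtering.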
In contrast, a digital communication system is considerably more complex, as described here. If the message signal is of analog form, as in speech and video signals, the transmitter performs the following operations to convert it into digital form:
• Sampling, which converts the message signal into a sequence of numbers, with each number representing the amplitude of the message signal at a particular instant of time.

• Quantization, which involves representing each number produced by the sampler to the nearest level selected from a finite number of discrete amplitude levels. For example, we may represent each sample as a 16-bit binary number, in which case there are 2^16 amplitude levels. After the combination of sampling and quantization, we have a representation of the message signal that is discrete in both time and amplitude.
• Coding, the purpose of which is to represent each quantized sample by a codeword made up of a finite number of symbols. For example, in a binary code the symbols may be 1's or 0's.
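The three steps above can be sketched in a few lines; the signal, sampling rate, and use of a 16-bit uniform quantizer are illustrative assumptions rather than prescriptions from the text.

```python
import numpy as np

# Sampling: take the message amplitude at the instants n/fs (fs assumed).
fs = 8000.0
n = np.arange(16)
x = np.sin(2 * np.pi * 200.0 * n / fs)

# Quantization: round each sample to one of 2^16 = 65536 levels.
bits = 16
half = 2 ** (bits - 1) - 1                 # 32767
q = np.round(x * half) / half

# Coding: represent each quantized sample as a 16-bit binary codeword.
codewords = [format(int(v), "016b") for v in np.round((q + 1) * half)]
```

Note that the quantization error here is at most half a level, which is the "small and nondiscernible" loss the next paragraph refers to.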
Unlike the operations of sampling and coding, quantization is completely irreversible: that is, a loss of information is always incurred by its application. However, this loss can be made small, and nondiscernible for all practical purposes, by using a quantizer with a sufficiently large number of discrete amplitude levels. As the number of discrete amplitude levels increases, the length of the codeword must also increase in a corresponding way.
If, however, the source of information is discrete to begin with, as in the case of a digital computer, none of the above operations would be needed.
The transmitter may involve additional operations, namely, data compression and channel encoding. The purpose of data compression is to remove redundant information from the message signal and thereby provide for efficient utilization of the channel by reducing the number of bits/sample required for transmission. Channel encoding, on the other hand, involves the insertion of redundant elements (e.g., extra symbols) into the codeword in a controlled manner; this is done to provide protection against noise and interfering signals picked up during the course of transmission through the channel. Finally, the coded signal is modulated onto a carrier wave (usually sinusoidal) for transmission over the channel.
At the receiver, the above operations are performed in reverse order. An estimate of the original message signal is thereby produced and delivered to the user destination. However, as mentioned previously, quantization is irreversible and therefore has no counterpart in the receiver.
It is apparent from this discussion that the use of digital communications may require a considerable amount of electronic circuitry. This is not a significant problem since the electronics are relatively inexpensive, due to the ever-increasing availability of very-large-scale-integrated (VLSI) circuits in the form of silicon chips. Indeed, with continuing improvements in the semiconductor industry, digital communications are often more cost effective than analog communications.
There are two basic modes of communication:
1. Broadcasting, which involves the use of a single powerful transmitter and numerous receivers that are relatively cheap to build. Here information-bearing signals flow only in one direction.
2. Point-to-point communication, in which the communication process takes place over a link between a single transmitter and a single receiver. In this case, there is usually a bidirectional flow of information-bearing signals. There is a transmitter and receiver at each end of the link.
The broadcasting mode of communication is exemplified by the radio and television that are integral parts of our daily lives. On the other hand, the ubiquitous telephone provides the means for one form of point-to-point communication. Note, however, that in this case the link is part of a highly complex telephone network designed to accommodate a large number of users on demand.

Another example of point-to-point communication is the deep-space communications link between an Earth station and a robot navigating the surface of a distant planet. Unlike telephonic communication, the composition of the message signal depends on the direction of the communication process. The message signal may be in the form of computer-generated instructions transmitted from an Earth station that command the robot to perform specific maneuvers, or it may contain valuable information about the chemical composition of the soil on the planet that is sent back to Earth for analysis. In order to reliably communicate over such great distances, it is necessary to use digital communications. Figure 1.3(a) shows a photograph of the robot, named Pathfinder, which landed on

FIGURE 1.3 (a) Snapshot of Pathfinder exploring the surface of Mars. (b) The 70-meter (230-foot) diameter antenna located at Canberra, Australia. The surface of the 70-meter reflector must remain accurate within a fraction of the signal wavelength. (Courtesy of Jet Propulsion Laboratory.)

Mars on July 4, 1997, a historic day in the National Aeronautics and Space Administration's (NASA's) scientific investigation of the solar system. Figure 1.3(b) shows a photograph of the high-precision, 70-meter antenna located at Canberra, Australia, which is an integral part of NASA's worldwide Deep Space Network (DSN). The DSN provides the vital two-way communications link that guides and controls (unmanned) planetary explorers and brings back images and new scientific information collected by them. The successful use of DSN for planetary exploration represents a triumph of communication theory and technology over the challenges presented by the unavoidable presence of noise.
Unfortunately, every communication system suffers from the presence of channel noise in the received signal. Noise places severe limits on the quality of received messages. Owing to the enormous distance between our own planet Earth and Mars, for example, the average power of the information-bearing component of the received signal, at either end of the link, is relatively small compared to the average power of the noise component. Reliable operation of the link is achieved through the combined use of (1) large antennas as part of the DSN and (2) error control. For a parabolic-reflector antenna (i.e., the type of antenna portrayed in Fig. 1.3(b)), the effective area is generally between 50% and 65% of the physical area of the antenna. The received power available at the terminals of the antenna is equal to the effective area times the power per unit area carried by the incident electromagnetic wave. Clearly, the larger the antenna, the larger the received signal power will be, hence the use of large antennas in DSN.
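The received-power relation just stated lends itself to a back-of-envelope check; the efficiency and incident power density used below are assumed numbers for illustration only.

```python
import math

# Received power = effective area x incident power per unit area.
diameter = 70.0                            # DSN dish diameter in meters
physical_area = math.pi * (diameter / 2.0) ** 2
efficiency = 0.6                           # effective area is 50-65% of physical
effective_area = efficiency * physical_area

flux = 1e-18                               # assumed incident power density, W/m^2
received_power = effective_area * flux
print(received_power)                      # on the order of 1e-15 W
```

Doubling the dish diameter quadruples the physical (and hence effective) area, which is why large apertures matter at planetary distances.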
Turning next to the issue of error control, it involves the use of a channel encoder at the transmitter and a channel decoder at the receiver. The channel encoder accepts message bits and adds redundancy according to a prescribed rule, thereby producing encoded data at a higher bit rate. The redundant bits are added for the purpose of protection against channel noise. The channel decoder exploits the redundancy to decide which message bits were actually sent. The combined goal of the channel encoder and decoder is to minimize the effect of channel noise: that is, the number of errors between the channel encoder input (derived from the source of information) and the encoder output (delivered to the user by the receiver) is minimized on average.
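The encoder/decoder pairing described above can be illustrated with the simplest possible channel code, a rate-1/3 repetition code with majority-vote decoding. This toy scheme stands in for the far more powerful codes used in practice; it is not the method used on the DSN.

```python
# Channel encoder: add redundancy by repeating every message bit 3 times.
def encode(bits):
    return [b for b in bits for _ in range(3)]

# Channel decoder: exploit the redundancy with a majority vote per trio.
def decode(received):
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

msg = [1, 0, 1, 1]
tx = encode(msg)          # 12 coded bits at 3x the message bit rate
tx[4] = 1 - tx[4]         # channel noise flips one transmitted bit
assert decode(tx) == msg  # a single flip per trio is corrected
```

The price of the protection is bandwidth: three coded bits are sent for every message bit, which is the "higher bit rate" mentioned above.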


Control Systems

Control of physical systems is widespread in the application of signals and systems in our industrial society. As some specific examples where control is applied, we mention aircraft autopilots, mass-transit vehicles, automobile engines, machine tools, oil refineries, paper mills, nuclear reactors, power plants, and robots. The object to be controlled is commonly referred to as a plant; in this context, an aircraft is a plant.
There are many reasons for using control systems. From an engineering viewpoint, the two most important ones are the attainment of a satisfactory response and robust performance, as described here:
1. Response. A plant is said to produce a satisfactory response if its output follows or tracks a specified reference input. The process of holding the plant output close to the reference input is called regulation.
2. Robustness. A control system is said to be robust if it exhibits good regulation, despite the presence of external disturbances (e.g., turbulence affecting the flight of an aircraft) and in the face of changes in the plant parameters due to varying environmental conditions.
The attainment of these desirable properties usually requires the use of feedback, as
illustrated in Fig. 1.4. The system in Fig. 1.4 contains the abstract elements of a control


FIGURE 1.4 Block diagram of a feedback control system. The controller drives the plant, whose disturbed output drives the sensor(s). The resulting feedback signal is subtracted from the reference input to produce an error signal e(t), which, in turn, drives the controller. The feedback loop is thereby closed.

system and is referred to as a closed-loop control system or feedback control system. For example, in an aircraft landing system the plant is represented by the aircraft body and actuator, the sensors are used by the pilot to determine the lateral position of the aircraft, and the controller is a digital computer.
In any event, the plant is described by mathematical operations that generate the output y(t) in response to the plant input v(t) and the external disturbance. The sensor included in the feedback loop measures the plant output y(t) and converts it into another form, usually electrical. The sensor output r(t) constitutes the feedback signal. It is compared against the reference input x(t) to produce a difference or error signal e(t). This latter signal is applied to a controller, which, in turn, generates the actuating signal v(t) that performs the controlling action on the plant. A control system with a single input and single output, as illustrated in Fig. 1.4, is referred to as a single-input/single-output (SISO) system. When the number of plant inputs and/or the number of plant outputs is more than one, the system is referred to as a multiple-input/multiple-output (MIMO) system.
In either case, the controller may be in the form of a digital computer or microprocessor, in which case we speak of a digital control system. The use of digital control systems is becoming more and more common because of the flexibility and high degree of accuracy afforded by the use of a digital computer as the controller. Because of its very nature, the use of a digital control system involves the operations of sampling, quantization, and coding that were described previously.
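A minimal discrete-time version of the loop in Fig. 1.4 can make these ideas concrete. The first-order plant model and the proportional controller gain below are hypothetical choices, not a system from the text.

```python
x_ref = 1.0    # reference input x
y = 0.0        # plant output, initially at rest
K = 10.0       # proportional controller gain (assumed)
a = 0.9        # assumed plant dynamics: y[n+1] = a*y[n] + (1 - a)*v[n]

for _ in range(500):
    e = x_ref - y                # error signal: reference minus fed-back output
    v = K * e                    # controller generates the actuating signal
    y = a * y + (1.0 - a) * v    # plant responds to the actuating signal

print(round(y, 3))               # settles near 0.909: good, but imperfect, regulation
```

A pure proportional controller leaves a steady-state offset (here y settles at K/(K + 1) of the reference); adding integral action would drive the error to zero.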
Figure 1.5 shows the photograph of a NASA (National Aeronautics and Space Administration) space shuttle launch, which relies on the use of a digital computer for its


Remote Sensing

Remote sensing is defined as the process of acquiring information about an object of interest without being in physical contact with it. Basically, the acquisition of information is accomplished by detecting and measuring the changes that the object imposes on the surrounding field. The field can be electromagnetic, acoustic, magnetic, or gravitational, depending on the application of interest. The acquisition of information can be performed in a passive manner, by listening to the field (signal) that is naturally emitted by the object and processing it, or in an active manner, by purposely illuminating the object with a well-defined field (signal) and processing the echo (i.e., signal returned) from the object.
The above definition of remote sensing is rather broad in that it applies to every possible field. In practice, however, the term ''remote sensing'' is commonly used in the

FIGURE 1.5 NASA space shuttle launch. (Courtesy of NASA.)

context of electromagnetic fields, with the techniques used for information acquisition covering the whole electromagnetic spectrum. It is this specialized form of remote sensing that we are concerned with here.
The scope of remote sensing has expanded enormously since the 1960s due to the advent of satellites and planetary probes as space platforms for the sensors, and the availability of sophisticated digital signal-processing techniques for extracting information from the data gathered by the sensors. In particular, sensors on Earth-orbiting satellites provide highly valuable information about global weather patterns and dynamics of clouds, surface vegetation cover and its seasonal variations, and ocean surface temperatures. Most importantly, they do so in a reliable way and on a continuing basis. In planetary studies, spaceborne sensors have provided us with high-resolution images of planetary surfaces; the images, in turn, have uncovered for us new kinds of physical phenomena, some similar to and others completely different from what we are familiar with on our planet Earth.
The electromagnetic spectrum extends from low-frequency radio waves through microwave, submillimeter, infrared, visible, ultraviolet, x-ray, and gamma-ray regions of the spectrum. Unfortunately, a single sensor by itself can cover only a small part of the electromagnetic spectrum, with the mechanism responsible for wave-matter interaction being influenced by a limited number of physical properties of the object of interest. If, therefore, we are to undertake a detailed study of a planetary surface or atmosphere, then the simultaneous use of multiple sensors covering a large part of the electromagnetic spectrum is required. For example, to study a planetary surface, we may require a suite of sensors covering selected bands as follows:

• Radar sensors to provide information on the surface physical properties of the planet under study (e.g., topography, roughness, moisture, and dielectric constant)
• Infrared sensors to measure the near-surface thermal properties of the planet
• Visible and near-infrared sensors to provide information about the surface chemical composition of the planet
• X-ray sensors to provide information on radioactive materials contained in the planet
The data gathered by these highly diverse sensors are then processed on a computer to generate a set of images that can be used collectively to enhance the knowledge of a scientist studying the planetary surface.
Among the electromagnetic sensors mentioned above, a special type of radar known as synthetic aperture radar (SAR) stands out as a unique imaging system in remote sensing. It offers the following attractive features:
• Satisfactory operation day and night and under all weather conditions
• High-resolution imaging capability that is independent of sensor altitude or speed
The realization of a high-resolution image with radar requires the use of an antenna with large aperture. From a practical perspective, however, there is a physical limit on the size of an antenna that can be accommodated on an airborne or spaceborne platform. In a SAR system, a large antenna aperture is synthesized by signal-processing means, hence the name ''synthetic aperture radar.'' The key idea behind SAR is that an array of antenna elements equally spaced along a straight line is equivalent to a single antenna moving along the array line at a uniform speed. This is true provided that we satisfy the following requirement: the signals received by the single antenna at equally spaced points along the array line are coherently recorded; that is, amplitude and phase relationships among the received signals are maintained. Coherent recording ensures that signals received from the single antenna correspond to the signals received from the individual elements of an equivalent antenna array. In order to obtain a high-resolution image from the single-antenna signals, highly sophisticated signal-processing operations are necessary. A central operation in the signal processing is the Fourier transform, which is implemented efficiently on a digital computer using an algorithm known as the fast Fourier transform (FFT) algorithm. Fourier analysis of signals is one of the main focal points of this book.
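Since the FFT plays such a central role later in the book, here is a minimal glimpse of it: recovering the frequency of a sampled tone with NumPy's FFT routines. The signal parameters are illustrative.

```python
import numpy as np

fs = 1024.0                               # sampling rate in Hz
n = np.arange(1024)                       # one second of samples
x = np.cos(2 * np.pi * 50.0 * n / fs)     # a 50 Hz tone

X = np.fft.fft(x)                         # fast Fourier transform of the record
freqs = np.fft.fftfreq(len(x), d=1.0 / fs)
peak = freqs[np.argmax(np.abs(X[:512]))]  # strongest positive-frequency bin
print(peak)                               # -> 50.0
```

The FFT computes the same quantity as the discrete Fourier transform, but in on the order of N log N operations instead of N^2, which is what makes SAR-scale processing practical.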
The photograph in Fig. 1.6 shows a perspective view of Mt. Shasta (California), which was derived from a stereo pair of SAR images acquired from Earth orbit with the Shuttle Imaging Radar (SIR-B). The color version of this photograph appears on the color plates.


Biomedical Signal Processing

The goal of biomedical signal processing is to extract information from a biological signal that helps us to further improve our understanding of basic mechanisms of biological function or aids us in the diagnosis or treatment of a medical condition. The generation of many biological signals found in the human body is traced to the electrical activity of large groups of nerve cells or muscle cells. Nerve cells in the brain are commonly referred to as neurons. Figure 1.7 shows morphological types of neurons identifiable in a monkey cerebral cortex, based on studies of primary somatic sensory and motor cortex. This figure illustrates the many different shapes and sizes of neurons that exist.
Irrespective of the signal origin, biomedical signal processing begins with a temporal record of the biological event of interest. For example, the electrical activity of the heart

FIGURE 1.6 Perspective view of Mount Shasta (California) derived from a pair of stereo radar images acquired from orbit with the Shuttle Imaging Radar (SIR-B). (Courtesy of Jet Propulsion Laboratory.) See Color Plate.

is represented by a record called the electrocardiogram (ECG). The ECG represents changes in the potential (voltage) due to electrochemical processes involved in the formation and spatial spread of electrical excitations in the heart cells. Accordingly, detailed inferences about the heart can be made from the ECG.
Another important example of a biological signal is the electroencephalogram (EEG). The EEG is a record of fluctuations in the electrical activity of large groups of neurons in

FIGURE 1.7 Morphological types of nerve cells (neurons) identifiable in a monkey cerebral cortex, based on studies of primary somatic sensory and motor cortex. (Reproduced from E. R. Kandel, J. H. Schwartz, and T. M. Jessel, Principles of Neural Science, Third Edition, 1991; courtesy of Appleton and Lange.)

the brain. Specifically, the EEG measures the electrical field associated with the current flowing through a group of neurons. To record the EEG (or the ECG for that matter) at least two electrodes are needed. An active electrode is placed over the particular site of neuronal activity that is of interest, and a reference electrode is placed at some remote distance from this site; the EEG is measured as the voltage or potential difference between the active and reference electrodes. Figure 1.8 shows three examples of EEG signals recorded from the hippocampus of a rat.
A major issue of concern in biomedical signal processing, in the context of ECG, EEG, or some other biological signal, is the detection and suppression of artifacts. An artifact refers to that part of the signal produced by events that are extraneous to the biological event of interest. Artifacts arise in a biological signal at different stages of processing and in many different ways, as summarized here:
• Instrumental artifacts, generated by the use of an instrument. An example of an instrumental artifact is the 60-Hz interference picked up by the recording instruments from the electrical mains power supply.
• Biological artifacts, in which one biological signal contaminates or interferes with another. An example of a biological artifact is the electrical potential shift that may be observed in the EEG due to heart activity.
• Analysis artifacts, which may arise in the course of processing the biological signal to produce an estimate of the event of interest.
Analysis artifacts are, in a way, controllable. For example, roundoff errors due to quantization of signal samples, which arise from the use of digital signal processing, can be made nondiscernible for all practical purposes by making the number of discrete amplitude levels in the quantizer large enough.
What about instrumental and biological artifacts? A common method of reducing their effects is through the use of filtering. A filter is a system that performs a desired




FIGURE 1.8 The traces shown in (a), (b), and (c) are three examples of EEG signals recorded from the hippocampus of a rat. Neurobiological studies suggest that the hippocampus plays a key role in certain aspects of learning or memory.

operation on a signal or signals. It passes signals containing frequencies in one frequency range, termed the filter passband, and removes signals containing frequencies in other frequency ranges. Assuming that we have a priori knowledge concerning the signal of interest, we may estimate the range of frequencies inside which the significant components of the desired signal are located. Then, by designing a filter whose passband corresponds to the frequencies of the desired signal, artifacts with frequency components outside this passband are removed by the filter. The assumption made here is that the desired signal and the artifacts contaminating it occupy essentially nonoverlapping frequency bands. If, however, the frequency bands overlap each other, then the filtering problem becomes more difficult and requires a solution beyond the scope of the present book.
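Under the nonoverlapping-bands assumption just stated, the artifact-removal idea can be sketched in a few lines: a windowed-sinc lowpass FIR filter removes a 60 Hz instrumental artifact from a slower signal of interest. All parameters (rates, frequencies, filter length) are illustrative assumptions.

```python
import numpy as np

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
desired = np.sin(2 * np.pi * 5.0 * t)          # 5 Hz signal of interest
artifact = 0.5 * np.sin(2 * np.pi * 60.0 * t)  # 60 Hz mains interference
x = desired + artifact

# Windowed-sinc lowpass FIR with a 20 Hz cutoff: the passband keeps the
# 5 Hz component, while 60 Hz falls well into the stopband.
M = 101
n = np.arange(M) - (M - 1) / 2.0
fc = 20.0 / fs                                 # cutoff in cycles per sample
h = 2.0 * fc * np.sinc(2.0 * fc * n) * np.hamming(M)
h /= h.sum()                                   # unity gain at DC
y = np.convolve(x, h, mode="same")             # filtered record, close to desired
```

Away from the record edges, y closely tracks the desired 5 Hz component; the residual 60 Hz content is attenuated by the filter's stopband.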


Auditory System

For our last example of a system, we turn to the mammalian auditory system, the function of which is to discriminate and recognize complex sounds on the basis of their frequency content.
Sound is produced by vibrations such as the movements of vocal cords or violin strings. These vibrations result in the compression and rarefaction (i.e., increased or reduced pressure) of the surrounding air. The disturbance so produced radiates outward from the source of sound as an acoustical wave with alternating highs and lows of pressure.
The ear, the organ of hearing, responds to incoming acoustical waves. It has three main parts, with their functions summarized as follows:

• The outer ear aids in the collection of sounds.

• The middle ear provides an acoustic impedance match between the air and the cochlear fluids, thereby conveying the vibrations of the tympanic membrane (eardrum) due to the incoming sounds to the inner ear in an efficient manner.
• The inner ear converts the mechanical vibrations from the middle ear to an ''electrochemical'' or ''neural'' signal for transmission to the brain.

The inner ear consists of a bony spiral-shaped, fluid-filled tube, called the cochlea. Sound-induced vibrations of the tympanic membrane are transmitted into the oval window of the cochlea by a chain of bones, called ossicles. The lever action of the ossicles provides some amplification of the mechanical vibrations of the tympanic membrane. The cochlea tapers in size like a cone toward a tip, so that there is a base at the oval window, and an apex at the tip. Through the middle of the cochlea stretches the basilar membrane, which gets wider as the cochlea gets narrower.
The vibratory movement of the tympanic membrane is transmitted as a traveling wave along the length of the basilar membrane, starting from the oval window to the apex at the far end of the cochlea. The wave propagates along the basilar membrane, much like the snapping of a rope tied at one end causes a wave to propagate along the rope from the snapped end to the fixed end. As illustrated in Fig. 1.9, the wave attains its peak amplitude at a specific location along the basilar membrane that depends on the frequency of the incoming sound. Thus, although the wave itself travels along the basilar membrane, the envelope of the wave is ''stationary'' for a given frequency. The peak displacements for high frequencies occur toward the base (where the basilar membrane is narrowest and stiffest). The peak displacements for low frequencies occur toward the apex (where the basilar membrane is widest and most flexible). That is, as the wave propagates along the basilar membrane, a resonance phenomenon takes place, with the end of the basilar membrane at the base of the cochlea resonating at about 20,000 Hz and its other end at the


FIGURE 1.9 (a) In this diagram, the basilar membrane in the cochlea is depicted as if it were uncoiled and stretched out flat; the ''base'' and ''apex'' refer to the cochlea, but the remarks ''stiff region'' and ''flexible region'' refer to the basilar membrane. (b) This diagram illustrates the traveling waves along the basilar membrane, showing their envelopes induced by incoming sound at three different frequencies.

apex of the cochlea resonating at about 20 Hz; the resonance frequency of the basilar membrane decreases gradually with distance from base to apex. Consequently, the spatial axis of the cochlea is said to be tonotopically ordered, because each location is associated with a particular resonance frequency or tone.
The basilar membrane is a dispersive medium, in that higher frequencies propagate more slowly than do lower frequencies. In a dispersive medium, we distinguish two different velocities, namely, phase velocity and group velocity. The phase velocity is the velocity at which a crest or valley of the wave propagates along the basilar membrane. The group velocity is the velocity at which the envelope of the wave and its energy propagate.
The mechanical vibrations of the basilar membrane are transduced into electrochemical signals by hair cells that rest in an orderly fashion on the basilar membrane. There are two main types of hair cells: inner hair cells and outer hair cells, with the latter being by far the most numerous type. The outer hair cells are motile elements. That is, they are capable of altering their length, and perhaps other mechanical characteristics, which is believed to be responsible for the compressive nonlinear effect seen in the basilar membrane vibrations. There is also evidence that the outer hair cells contribute to the sharpening of tuning curves from the basilar membrane and on up the system. However, the inner hair cells are the main sites of auditory transduction. Specifically, each auditory neuron synapses with an inner hair cell at a particular location on the basilar membrane. The neurons that synapse with inner hair cells near the base of the basilar membrane are found in the periphery of the auditory nerve bundle, and there is an orderly progression toward synapsing at the apex end of the basilar membrane with movement toward the center of the bundle. The tonotopic organization of the basilar membrane is therefore anatomically preserved in the auditory nerve. The inner hair cells also perform rectification and compression. The mechanical signal is approximately half-wave rectified, thereby responding to motion of the basilar membrane in one direction only. Moreover, the mechanical signal is compressed nonlinearly, such that a large range of incoming sound intensities is reduced to a manageable excursion of electrochemical potential. The electrochemical signals so produced are carried over to the brain, where they are further processed to become our hearing sensations.
ln summary, in the cc>chlea we have a wonderfttl example of a biological system that
operares as a bank of filters tuned to different frequencies and uses nonlinear processing
to reduce dynamic range. It enables us to discriminate and recognize complex sounds,
despite the enormous differences in intensity leveis that can arise in practice.


The signal processing operations involved in building communication systems, control
systems, instruments for remote sensing, and instruments for the processing of biological
signals, among the many applications of signal processing, can be implemented in two
fundamentally different ways: (1) analog or continuous-time approach and (2) digital or
discrete-time approach. The analog approach to signal processing was dominant for many
years, and it remains a viable option for many applications. As the name implies, analog
signal processing relies on the use of analog circuit elements such as resistors, capacitors,
inductors, transistor amplifiers, and diodes. Digital signal processing, on the other hand,
relies on three basic digital computer elements: adders and multipliers (for arithmetic op-
erations) and memory (for storage).
The main attribute of the analog approach is a natural ability to solve differential
equations that describe physical systems, without having to resort to approximate solu-
tions for them. These solutions are also obtained in real time irrespective of the input
signal's frequency range, since the underlying mechanisms responsible for the operations
of the analog approach are all physical in nature. In contrast, the digital approach relies
on numerical computations for its operation. The time required to perform these com-
putations determines whether the digital approach is able to operate in real time, that is,
to keep up with the changes in the input signal. In other words, the analog approach is
assured of real-time operation, but there is no such guarantee for the digital approach.
However, the digital approach has the following important advantages over analog
signal processing:
• Flexibility, whereby the same digital machine (hardware) can be used for imple-
menting different versions of a signal-processing operation of interest (e.g., filtering)
merely by making changes to the software (program) read into the machine. On the
other hand, in the case of an analog machine, the system has to be redesigned every
time the signal-processing specifications are changed.
• Repeatability, which refers to the fact that a prescribed signal-processing operation
(e.g., control of a robot) can be repeated exactly over and over again when it is
implemented by digital means. In contrast, analog systems suffer from parameter
variations that can arise due to changes in the supply voltage or room temperature.
For a given signal-processing operation, however, we usually find that the use of a
digital approach requires greater circuit complexity than an analog approach. This was
an issue of major concern in years past, but this is no longer so. As remarked earlier, the
ever-increasing availability of VLSI circuits in the form of silicon chips has made digital
electronics relatively cheap. Consequently, we are now able to build digital signal proces-
sors that are cost competitive with respect to their analog counterparts over a wide fre-
quency range that includes both speech and video signals. In the final analysis, however,
the choice of an analog or digital approach for the solution of a signal-processing problem
can only be determined by the application of interest, the resources available, and the cost
involved in building the system. It should also be noted that the vast majority of systems
built in practice are "mixed" in nature, combining the desirable features of both analog
and digital approaches to signal processing.

1.4 Classification of Signals

In this book we will restrict our attention to one-dimensional signals defined as single-
valued functions of time. "Single-valued" means that for every instant of time there is a
unique value of the function. This value may be a real number, in which case we speak of
a real-valued signal, or it may be a complex number, in which case we speak of a complex-
valued signal. In either case, the independent variable, namely, time, is real valued.
The most useful method of signal representation for a given situation hinges on the
particular type of signal being considered. We may identify five methods of classifying
signals based on different features:

1. Continuous-time and discrete-time signals.

One way of classifying signals is on the basis of how they are defined as a function of time.
In this context, a signal x(t) is said to be a continuous-time signal if it is defined for all
time t. Figure 1.10 represents an example of a continuous-time signal whose amplitude or
value varies continuously with time. Continuous-time signals arise naturally when a phys-
ical waveform such as an acoustic wave or light wave is converted into an electrical signal.
The conversion is effected by means of a transducer; examples include the microphone,
which converts sound pressure variations into corresponding voltage or current variations,
and the photocell, which does the same for light-intensity variations.
On the other hand, a discrete-time signal is defined only at discrete instants of time.
Thus, in this case, the independent variable has discrete values only, which are usually
uniformly spaced. A discrete-time signal is often derived from a continuous-time signal by
sampling it at a uniform rate. Let 𝒯 denote the sampling period and n denote an integer
that may assume positive and negative values. Sampling a continuous-time signal x(t) at
time t = n𝒯 yields a sample of value x(n𝒯). For convenience of presentation, we write

x[n] = x(n𝒯),  n = 0, ±1, ±2, ...   (1.1)

Thus a discrete-time signal is represented by the sequence of numbers ..., x[-2], x[-1],
x[0], x[1], x[2], ..., which can take on a continuum of values. Such a sequence of numbers
is referred to as a time series, written as {x[n], n = 0, ±1, ±2, ...} or simply x[n]. The


FIGURE 1.10 Continuous-time signal.



FIGURE 1.11 (a) Continuous-time signal x(t). (b) Representation of x(t) as a discrete-time signal.

latter notation is used throughout this book. Figure 1.11 illustrates the relationship be-
tween a continuous-time signal x(t) and the discrete-time signal x[n] derived from it, as de-
scribed above.
Throughout this book, we use the symbol t to denote time for a continuous-time
signal and the symbol n to denote time for a discrete-time signal. Similarly, parentheses (·)
are used to denote continuous-valued quantities, while brackets [·] are used to denote
discrete-valued quantities.
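The sampling relation of Eq. (1.1) can be sketched in a few lines of Python; the 5 Hz cosine and the 25 ms sampling period used here are illustrative choices, not values taken from the text.

```python
import math

def sample(x, Ts, n_range):
    """Sample a continuous-time signal x(t) at t = n*Ts, as in Eq. (1.1)."""
    return [x(n * Ts) for n in n_range]

# Illustrative example: a 5 Hz cosine sampled every 25 ms.
x = lambda t: math.cos(2 * math.pi * 5 * t)
xn = sample(x, 0.025, range(0, 5))
# n = 0 gives x(0) = 1; n = 4 gives x(0.1) = cos(pi) = -1
```

Note that the samples x[n] still take on a continuum of values; sampling discretizes time only, not amplitude.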

2. Even and odd signals.

A continuous-time signal x(t) is said to be an even signal if it satisfies the condition

x(-t) = x(t) for all t   (1.2)

The signal x(t) is said to be an odd signal if it satisfies the condition

x(-t) = -x(t) for all t   (1.3)

In other words, even signals are symmetric about the vertical axis or time origin, whereas
odd signals are antisymmetric about the time origin. Similar remarks apply
to discrete-time signals.

EXAMPLE 1.1 Develop the even/odd decomposition of a general signal x(t) by applying the
definitions of Eqs. (1.2) and (1.3).
Solution: Let the signal x(t) be expressed as the sum of two components x_e(t) and x_o(t) as

x(t) = x_e(t) + x_o(t)

Define x_e(t) to be even and x_o(t) to be odd; that is,

x_e(-t) = x_e(t)
x_o(-t) = -x_o(t)

Putting t = -t in the expression for x(t), we may then write

x(-t) = x_e(-t) + x_o(-t)
      = x_e(t) - x_o(t)

Solving for x_e(t) and x_o(t), we thus obtain

x_e(t) = (1/2)[x(t) + x(-t)]

and

x_o(t) = (1/2)[x(t) - x(-t)]
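The decomposition just derived is easy to verify numerically; the cubic-plus-quadratic test signal below is an arbitrary choice used only to exercise the formulas.

```python
def even_part(x, t):
    # x_e(t) = (1/2)[x(t) + x(-t)]
    return 0.5 * (x(t) + x(-t))

def odd_part(x, t):
    # x_o(t) = (1/2)[x(t) - x(-t)]
    return 0.5 * (x(t) - x(-t))

# Test signal (neither even nor odd): x(t) = t^3 + t^2.
x = lambda t: t**3 + t**2
for t in (-2.0, -0.5, 1.0, 3.0):
    xe, xo = even_part(x, t), odd_part(x, t)
    assert abs(xe - t**2) < 1e-9       # even part is t^2
    assert abs(xo - t**3) < 1e-9       # odd part is t^3
    assert abs(xe + xo - x(t)) < 1e-9  # the two parts reconstruct x(t)
```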

The above definitions of even and odd signals assume that the signals are real valued.
Care has to be exercised, however, when the signal of interest is complex valued. In the
case of a complex-valued signal, we may speak of conjugate symmetry. A complex-valued
signal x(t) is said to be conjugate symmetric if it satisfies the condition

x(-t) = x*(t)   (1.4)

where the asterisk denotes complex conjugation. Let

x(t) = a(t) + jb(t)

where a(t) is the real part of x(t), b(t) is the imaginary part, and j is the square root of -1.
The complex conjugate of x(t) is

x*(t) = a(t) - jb(t)

From Eqs. (1.2) to (1.4), it follows therefore that a complex-valued signal x(t) is conjugate
symmetric if its real part is even and its imaginary part is odd. A similar remark applies
to a discrete-time signal.
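A quick numerical check of Eq. (1.4) can be sketched as follows, using the complex exponential e^{j2t} as an illustrative conjugate-symmetric signal (its real part, cos 2t, is even and its imaginary part, sin 2t, is odd).

```python
import cmath

def is_conjugate_symmetric(x, ts, tol=1e-9):
    """Check x(-t) == conj(x(t)), Eq. (1.4), at the sample points ts."""
    return all(abs(x(-t) - x(t).conjugate()) < tol for t in ts)

x = lambda t: cmath.exp(1j * 2.0 * t)   # real part even, imaginary part odd
print(is_conjugate_symmetric(x, [0.3, 1.0, 2.5]))   # True
```

The check is only performed at a handful of sample points, so it is a spot test rather than a proof of symmetry.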

• Drill Problem 1.1 Consider the pair of signals shown in Fig. 1.12. Which of these
two signals is even, and which one is odd?
Answer: x1(t) is even, and x2(t) is odd. •
• Drill Problem 1.2 The signals x1(t) and x2(t) shown in Figs. 1.12(a) and (b) con-
stitute the real and imaginary parts of a complex-valued signal x(t). What form of sym-
metry does x(t) have?
Answer: x(t) is conjugate symmetric. •

FIGURE 1.12 (a) One example of continuous-time signal. (b) Another example of continuous-
time signal.

3. Periodic signals, nonperiodic signals.

A periodic signal x(t) is a function that satisfies the condition

x(t) = x(t + T) for all t   (1.5)

where T is a positive constant. Clearly, if this condition is satisfied for T = T0, say, then
it is also satisfied for T = 2T0, 3T0, 4T0, .... The smallest value of T that satisfies Eq.
(1.5) is called the fundamental period of x(t). Accordingly, the fundamental period T
defines the duration of one complete cycle of x(t). The reciprocal of the fundamental period
T is called the fundamental frequency of the periodic signal x(t); it describes how frequently
the periodic signal x(t) repeats itself. We thus formally write

f = 1/T   (1.6)

The frequency f is measured in hertz (Hz) or cycles per second. The angular frequency,
measured in radians per second, is defined by

ω = 2πf   (1.7)

since there are 2π radians in one complete cycle. To simplify terminology, ω is often
referred to simply as frequency.
Any signal x(t) for which there is no value of T to satisfy the condition of Eq. (1.5)
is called an aperiodic or nonperiodic signal.
Figures 1.13(a) and (b) present examples of periodic and nonperiodic signals, re-
spectively. The periodic signal shown here represents a square wave of amplitude A = 1
and period T, and the nonperiodic signal represents a rectangular pulse of amplitude A
and duration T1.

• Drill Problem 1.3 Figure 1.14 shows a triangular wave. What is the fundamental
frequency of this wave? Express the fundamental frequency in units of Hz or rad/s.
Answer: 5 Hz, or 10π rad/s. •
The classification of signals into periodic and nonperiodic signals presented thus far
applies to continuous-time signals. We next consider the case of discrete-time signals. A
discrete-time signal x[n] is said to be periodic if it satisfies the condition

x[n] = x[n + N] for all integers n   (1.8)

FIGURE 1.13 (a) Square wave with amplitude A = 1 and period T = 0.2 s. (b) Rectangular
pulse of amplitude A and duration T1.

FIGURE 1.14 Triangular wave alternating between -1 and +1 with fundamental period of 0.2 s.

where N is a positive integer. The smallest value of integer N for which Eq. (1.8) is satisfied
is called the fundamental period of the discrete-time signal x[n]. The fundamental angular
frequency or, simply, fundamental frequency of x[n] is defined by

Ω = 2π/N   (1.9)

which is measured in radians.
The differences between the defining equations (1.5) and (1.8) should be carefully
noted. Equation (1.5) applies to a periodic continuous-time signal whose fundamental
period T has any positive value. On the other hand, Eq. (1.8) applies to a periodic discrete-
time signal whose fundamental period N can only assume a positive integer value.
Two examples of discrete-time signals are shown in Figs. 1.15 and 1.16; the signal
of Fig. 1.15 is periodic, whereas that of Fig. 1.16 is aperiodic.
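A brute-force search for the fundamental period of Eq. (1.8) can be sketched as follows; it checks candidate values of N over a finite stretch of samples only, so it is illustrative rather than a general-purpose tool.

```python
import math

def fundamental_period(x):
    """Smallest N >= 1 with x[n] = x[n + N] over the available samples."""
    L = len(x)
    for N in range(1, L):
        if all(x[n] == x[n + N] for n in range(L - N)):
            return N
    return None

# Discrete-time square wave in the spirit of Fig. 1.15: four samples at +1,
# then four at -1, repeating.
square = ([1] * 4 + [-1] * 4) * 4
N = fundamental_period(square)   # 8
Omega = 2 * math.pi / N          # fundamental frequency, Eq. (1.9)
```

With N = 8 this gives Ω = π/4 radians, in agreement with Drill Problem 1.4 below.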

• Drill Problem 1.4 What is the fundamental frequency of the discrete-time square
wave shown in Fig. 1.15?
Answer: π/4 radians. •
4. Deterministic signals, random signals.
A deterministic signal is a signal about which there is no uncertainty with respect to its
value at any time. Accordingly, we find that deterministic signals may be modeled as


· · · · ····-··· ··-·-··· · · · · · · ··-· - · · ··· · -·········~~~.. · · · ·····~-~- ·, - · · · Time n

-8 O 8

..... -1

FIGURE 1.15 Discretc-time squarc wave altcrnating bet,-veen - 1 and + J.



-ó--o--o-- ·--'--...__--'-----('1--- <>---<>-· n

-4 -3 -2 -1 O 1 2 3 4
FIGURE 1. 16 1\periodíc discrctc-time signal consisting oi' three nonzcrc) samples.

completely specified functions of time. The square wave shown in Fig. 1.13(a) and the
rectangular pulse shown in Fig. 1.13(b) are examples of deterministic signals, and so are
the signals shown in Figs. 1.15 and 1.16.
On the other hand, a random signal is a signal about which there is uncertainty
before its actual occurrence. Such a signal may be viewed as belonging to an ensemble or
group of signals, with each signal in the ensemble having a different waveform. Moreover,
each signal within the ensemble has a certain probability of occurrence. The ensemble of
such signals is referred to as a random process. The noise generated in the amplifier of a
radio or television receiver is an example of a random signal. Its amplitude fluctuates
between positive and negative values in a completely random fashion. The EEG signal,
exemplified by the waveforms shown in Fig. 1.8, is another example of a random signal.

5. Energy signals, power signals.

In electrical systems, a signal may represent a voltage or a current. Consider a voltage v(t)
developed across a resistor R, producing a current i(t). The instantaneous power dissipated
in this resistor is defined by

p(t) = v²(t)/R   (1.10)

or, equivalently,

p(t) = Ri²(t)   (1.11)

In both cases, the instantaneous power p(t) is proportional to the squared amplitude of
the signal. Furthermore, for a resistance R of 1 ohm, we see that Eqs. (1.10) and (1.11)
take on the same mathematical form. Accordingly, in signal analysis it is customary to
define power in terms of a 1-ohm resistor, so that, regardless of whether a given signal
x(t) represents a voltage or a current, we may express the instantaneous power of the
signal as

p(t) = x²(t)   (1.12)

Based on this convention, we define the total energy of the continuous-time signal x(t) as

E = lim_{T→∞} ∫_{-T/2}^{T/2} x²(t) dt
  = ∫_{-∞}^{∞} x²(t) dt   (1.13)

and its average power as

P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x²(t) dt   (1.14)

From Eq. (1.14) we readily see that the average power of a periodic signal x(t) of funda-
mental period T is given by

P = (1/T) ∫_{-T/2}^{T/2} x²(t) dt   (1.15)

The square root of the average power P is called the root mean-square (rms) value of the
signal x(t).
In the case of a discrete-time signal x[n], the integrals in Eqs. (1.13) and (1.14) are
replaced by corresponding sums. Thus the total energy of x[n] is defined by

E = Σ_{n=-∞}^{∞} x²[n]   (1.16)

and its average power is defined by

P = lim_{N→∞} (1/(2N)) Σ_{n=-N}^{N} x²[n]   (1.17)

Here again we see from Eq. (1.17) that the average power in a periodic signal x[n] with
fundamental period N is given by

P = (1/N) Σ_{n=0}^{N-1} x²[n]

A signal is referred to as an energy signal, if and only if the total energy of the signal
satisfies the condition

0 < E < ∞

On the other hand, it is referred to as a power signal, if and only if the average power of
the signal satisfies the condition

0 < P < ∞

The energy and power classifications of signals are mutually exclusive. In particular, an
energy signal has zero average power, whereas a power signal has infinite energy. It is also
of interest to note that periodic signals and random signals are usually viewed as power
signals, whereas signals that are both deterministic and nonperiodic are energy signals.
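These definitions are easy to exercise numerically. The sketch below evaluates Eq. (1.16) for a finite-length signal and the periodic-power formula above for one period of a periodic signal; the sample values are chosen to match the signals of Figs. 1.15 and 1.16 under the assumption that their nonzero samples all have magnitude 1.

```python
def total_energy(x):
    # Eq. (1.16), restricted to the finitely many nonzero samples of x[n].
    return sum(v * v for v in x)

def average_power_periodic(one_period):
    # P = (1/N) * sum of x^2[n] over one fundamental period.
    return sum(v * v for v in one_period) / len(one_period)

# Energy signal: the three-sample pulse of Fig. 1.16 (unit samples assumed).
E = total_energy([1, 1, 1])                      # 3, as in Drill Problem 1.7
# Power signal: one period of the square wave of Fig. 1.15.
P = average_power_periodic([1] * 4 + [-1] * 4)   # 1.0, as in Drill Problem 1.8
```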

• Drill Problem 1.5
(a) What is the total energy of the rectangular pulse shown in Fig. 1.13(b)?
(b) What is the average power of the square wave shown in Fig. 1.13(a)?
Answer: (a) A²T1. (b) 1. •
• Drill Problem 1.6 What is the average power of the triangular wave shown in
Fig. 1.14?
Answer: 1/3. •
• Drill Problem 1.7 What is the total energy of the discrete-time signal shown in
Fig. 1.16?
Answer: 3. •
• Drill Problem 1.8 What is the average power of the periodic discrete-time signal
shown in Fig. 1.15?
Answer: 1. •
1.5 Basic Operations on Signals

An issue of fundamental importance in the study of signals and systems is the use of systems
to process or manipulate signals. This issue usually involves a combination of some basic
operations. In particular, we may identify two classes of operations, as described here.
1. Operations performed on dependent variables.
Amplitude scaling. Let x(t) denote a continuous-time signal. The signal y(t) resulting
from amplitude scaling applied to x(t) is defined by

y(t) = cx(t)   (1.18)

where c is the scaling factor. According to Eq. (1.18), the value of y(t) is obtained by
multiplying the corresponding value of x(t) by the scalar c. A physical example of a device
that performs amplitude scaling is an electronic amplifier. A resistor also performs ampli-
tude scaling when x(t) is a current, c is the resistance, and y(t) is the output voltage.
In a manner similar to Eq. (1.18), for discrete-time signals we write

y[n] = cx[n]
Addition. Let x1(t) and x2(t) denote a pair of continuous-time signals. The signal y(t)
obtained by the addition of x1(t) and x2(t) is defined by

y(t) = x1(t) + x2(t)   (1.19)

A physical example of a device that adds signals is an audio mixer, which combines music
and voice signals.
In a manner similar to Eq. (1.19), for discrete-time signals we write

y[n] = x1[n] + x2[n]

Multiplication. Let x1(t) and x2(t) denote a pair of continuous-time signals. The signal
y(t) resulting from the multiplication of x1(t) by x2(t) is defined by

y(t) = x1(t)x2(t)   (1.20)

That is, for each prescribed time t the value of y(t) is given by the product of the corre-
sponding values of x1(t) and x2(t). A physical example of y(t) is an AM radio signal, in
which x1(t) consists of an audio signal plus a dc component, and x2(t) consists of a sinu-
soidal signal called a carrier wave.
In a manner similar to Eq. (1.20), for discrete-time signals we write

y[n] = x1[n]x2[n]
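The three dependent-variable operations above reduce to elementwise arithmetic in discrete time; a minimal sketch, with arbitrary short sequences as input:

```python
def scale(x, c):
    # Amplitude scaling: y[n] = c * x[n]
    return [c * v for v in x]

def add(x1, x2):
    # Addition: y[n] = x1[n] + x2[n]
    return [a + b for a, b in zip(x1, x2)]

def multiply(x1, x2):
    # Multiplication: y[n] = x1[n] * x2[n]
    return [a * b for a, b in zip(x1, x2)]

x1, x2 = [1, 2, 3], [4, 5, 6]
print(scale(x1, 2))      # [2, 4, 6]
print(add(x1, x2))       # [5, 7, 9]
print(multiply(x1, x2))  # [4, 10, 18]
```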


FIGURE 1.17 Inductor with current i(t), inducing voltage v(t) across its terminals.



FIGURE 1.18 Capacitor with voltage v(t) across its terminals, inducing current i(t).

Differentiation. Let x(t) denote a continuous-time signal. The derivative of x(t) with
respect to time is defined by

y(t) = (d/dt) x(t)   (1.21)

For example, an inductor performs differentiation. Let i(t) denote the current flowing
through an inductor of inductance L, as shown in Fig. 1.17. The voltage v(t) developed
across the inductor is defined by

v(t) = L (d/dt) i(t)   (1.22)

Integration. Let x(t) denote a continuous-time signal. The integral of x(t) with respect
to time t is defined by

y(t) = ∫_{-∞}^{t} x(τ) dτ   (1.23)

where τ is the integration variable. For example, a capacitor performs integration. Let i(t)
denote the current flowing through a capacitor of capacitance C, as shown in Fig. 1.18.
The voltage v(t) developed across the capacitor is defined by

v(t) = (1/C) ∫_{-∞}^{t} i(τ) dτ   (1.24)
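Discrete approximations make the inductor and capacitor relations of Eqs. (1.22) and (1.24) concrete: a forward difference stands in for the derivative and a running sum for the integral. The step size and component values below are illustrative only.

```python
dt, L, C = 1e-3, 0.5, 1e-2          # illustrative values (s, H, F)

# Current ramp i(t) = 2t amperes, sampled every dt seconds.
i = [2.0 * k * dt for k in range(5)]

# Inductor, Eq. (1.22): v = L di/dt, approximated by a forward difference.
v_L = [L * (i[k + 1] - i[k]) / dt for k in range(len(i) - 1)]
# di/dt = 2, so every entry of v_L is 2 * L = 1.0 volt.

# Capacitor, Eq. (1.24): v = (1/C) * integral of i, as a running sum.
v_C, acc = [], 0.0
for cur in i:
    acc += cur * dt
    v_C.append(acc / C)
```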

2. Operations performed on the independent variable.

Time scaling. Let x(t) denote a continuous-time signal. The signal y(t) obtained by
scaling the independent variable, time t, by a factor a is defined by

y(t) = x(at)

If a > 1, the signal y(t) is a compressed version of x(t). If, on the other hand, 0 < a < 1,
the signal y(t) is an expanded (stretched) version of x(t). These two operations are illus-
trated in Fig. 1.19.

FIGURE 1.19 Time-scaling operation: (a) continuous-time signal x(t), (b) compressed version of
x(t) by a factor of 2, and (c) expanded version of x(t) by a factor of 2.

FIGURE 1.20 Effect of time scaling on a discrete-time signal: (a) discrete-time signal x[n], and
(b) compressed version of x[n] by a factor of 2, with some values of the original x[n] lost as a
result of the compression.

In the discrete-time case, we write

y[n] = x[kn],  k > 0

which is defined only for integer values of k. If k > 1, then some values of the discrete-
time signal y[n] are lost, as illustrated in Fig. 1.20 for k = 2.
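For a finite-length signal stored with index n starting at 0, the compression y[n] = x[kn] is just a stride over the samples; a minimal sketch:

```python
def time_scale(x, k):
    """y[n] = x[k*n] for integer k > 0: keep every k-th sample."""
    return x[::k]

x = [0, 1, 2, 3, 4, 5, 6]
print(time_scale(x, 2))   # [0, 2, 4, 6]; the samples at odd n are lost
```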
Reflection. Let x(t) denote a continuous-time signal. Let y(t) denote the signal ob-
tained by replacing time t with -t, as shown by

y(t) = x(-t)

The signal y(t) represents a reflected version of x(t) about the amplitude axis.
The following two cases are of special interest:
• Even signals, for which we have x(-t) = x(t) for all t; that is, an even signal is the
same as its reflected version.
• Odd signals, for which we have x(-t) = -x(t) for all t; that is, an odd signal is the
negative of its reflected version.
Similar observations apply to discrete-time signals.

EXAMPLE 1.2 Consider the triangular pulse x(t) shown in Fig. 1.21(a). Find the reflected
version of x(t) about the amplitude axis.
Solution: Replacing the independent variable t in x(t) with -t, we get the result y(t) = x(-t)
shown in Fig. 1.21(b).
Note that for this example, we have

x(t) = 0 for t < -T1 and t > T2

Correspondingly, we find that

y(t) = 0 for t > T1 and t < -T2

FIGURE 1.21 Operation of reflection: (a) continuous-time signal x(t) and (b) reflected version of
x(t) about the origin.

• Drill Problem 1.9 The discrete-time signal x[n] is defined by

x[n] =  1,  n = 1
       -1,  n = -1
        0,  n = 0 and |n| > 1

Find the composite signal y[n] defined in terms of x[n] by

y[n] = x[n] + x[-n]

Answer: y[n] = 0 for all integer values of n. •

• Drill Problem 1.10 Repeat Drill Problem 1.9 for

x[n] =  1,  n = -1 and n = 1
        0,  n = 0 and |n| > 1

Answer: y[n] =  2,  n = -1 and n = 1
                0,  n = 0 and |n| > 1 •
Time shifting. Let x(t) denote a continuous-time signal. The time-shifted version of
x(t) is defined by

y(t) = x(t - t0)

where t0 is the time shift. If t0 > 0, the waveform representing x(t) is shifted intact to the
right, relative to the time axis. If t0 < 0, it is shifted to the left.

EXAMPLE 1.3 Figure 1.22(a) shows a rectangular pulse x(t) of unit amplitude and unit
duration. Find y(t) = x(t - 2).

Solution: In this example, the time shift t0 equals 2 time units. Hence, by shifting x(t) to
the right by 2 time units we get the rectangular pulse y(t) shown in Fig. 1.22(b). The pulse
y(t) has exactly the same shape as the original pulse x(t); it is merely shifted along the time
axis.

FIGURE 1.22 Time-shifting operation: (a) continuous-time signal in the form of a rectangular
pulse of amplitude 1.0 and duration 1.0, symmetric about the origin; and (b) time-shifted version
of x(t) by 2 time units.

In the case of a discrete-time signal x[n], we define its time-shifted version as follows:

y[n] = x[n - m]

where the shift m must be an integer; it can be positive or negative.
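Representing x[n] as a dictionary from index n to sample value (zero elsewhere) makes the shift easy to express even for negative indices; a minimal sketch with an arbitrary two-sided signal:

```python
def time_shift(x, m):
    """y[n] = x[n - m]; x maps index n to sample value, zero elsewhere."""
    return {n + m: v for n, v in x.items()}

# An arbitrary two-sided signal, shifted right by m = 2 samples.
x = {-1: -1, 0: 2, 1: 1}
y = time_shift(x, 2)   # y[1] = -1, y[2] = 2, y[3] = 1
```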

• Drill Problem 1.11 The discrete-time signal x[n] is defined by

x[n] =  1,  n = 1, 2
       -1,  n = -1, -2
        0,  n = 0 and |n| > 2

Find the time-shifted signal y[n] = x[n + 3].

Answer: y[n] =  1,  n = -1, -2
               -1,  n = -4, -5
                0,  n = -3, n < -5, and n > -1 •

Let y(t) denote a continuous-time signal that is derived from another continuous-time
signal x(t) through a combination of time shifting and time scaling, as described here:

y(t) = x(at - b)   (1.25)

This relation between y(t) and x(t) satisfies the following conditions:

y(0) = x(-b)   (1.26)

y(b/a) = x(0)   (1.27)

which provide useful checks on y(t) in terms of corresponding values of x(t).

To correctly obtain y(t) from x(t), the time-shifting and time-scaling operations must
be performed in the correct order. The proper order is based on the fact that the scaling
operation always replaces t by at, while the time-shifting operation always replaces t by
t - b. Hence the time-shifting operation is performed first on x(t), resulting in an inter-
mediate signal v(t) defined by

v(t) = x(t - b)

The time shift has replaced t in x(t) by t - b. Next, the time-scaling operation is performed
on v(t). This replaces t by at, resulting in the desired output

y(t) = v(at)
     = x(at - b)
To illustrate how the operation described in Eq. (1.25) can arise in a real-life situa-
tion, consider a voice signal recorded on a tape recorder. If the tape is played back at a
rate faster than the original recording rate, we get compression (i.e., a > 1). If, on the
other hand, the tape is played back at a rate slower than the original recording rate, we
get expansion (i.e., a < 1). The constant b, assumed to be positive, accounts for a delay
in playing back the tape.

FIGURE 1.23 The proper order in which the operations of time scaling and time shifting should
be applied for the case of a continuous-time signal. (a) Rectangular pulse x(t) of amplitude 1.0
and duration 2.0, symmetric about the origin. (b) Intermediate pulse v(t), representing time-
shifted version of x(t). (c) Desired signal y(t), resulting from the compression of v(t) by a factor
of 2.

EXAMPLE 1.4 Consider the rectangular pulse x(t) of unit amplitude and duration of 2 time
units depicted in Fig. 1.23(a). Find y(t) = x(2t + 3).
Solution: In this example, we have a = 2 and b = -3. Hence shifting the given pulse x(t)
to the left by 3 time units relative to the time axis gives the intermediate pulse v(t) shown in
Fig. 1.23(b). Finally, scaling the independent variable t in v(t) by a = 2, we get the solution
y(t) shown in Fig. 1.23(c).
Note that the solution presented in Fig. 1.23(c) satisfies both of the conditions defined
in Eqs. (1.26) and (1.27).
Suppose next that we purposely do not follow the precedence rule; that is, we first apply
time scaling, followed by time shifting. For the given signal x(t), shown in Fig. 1.24(a), the
waveforms resulting from the application of these two operations are shown in Figs. 1.24(b)
and (c), respectively. The signal y(t) so obtained fails to satisfy the condition of Eq. (1.27). •

This example clearly illustrates that if y(t) is defined in terms of x(t) by Eq. (1.25),
then y(t) can only be obtained from x(t) correctly by adhering to the precedence rule for
time shifting and time scaling.
Similar remarks apply to the case of discrete-time signals.

FIGURE 1.24 The incorrect way of applying the precedence rule. (a) Signal x(t). (b) Time-scaled
signal x(2t). (c) Signal y(t) obtained by shifting x(2t) by 3 time units.

EXAMPLE 1.5 A discrete-time signal x[n] is defined by

x[n] =  1,  n = 1, 2
       -1,  n = -1, -2
        0,  n = 0 and |n| > 2

Find y[n] = x[2n + 3].
Solution: The signal x[n] is displayed in Fig. 1.25(a). Time shifting x[n] to the left by 3 yields
the intermediate signal v[n] shown in Fig. 1.25(b). Finally, scaling n in v[n] by 2, we obtain
the solution y[n] shown in Fig. 1.25(c).
Note that as a result of the compression performed in going from v[n] to y[n] = v[2n],
the samples of v[n] at n = -5 and n = -1 (i.e., those contained in the original signal at
n = -2 and n = 2) are lost.
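Example 1.5 can be reproduced directly in a few lines; the code follows the precedence rule by shifting first and scaling second.

```python
def x(n):
    # The signal of Example 1.5.
    if n in (1, 2):
        return 1
    if n in (-1, -2):
        return -1
    return 0

v = lambda n: x(n + 3)   # intermediate signal: shift left by 3
y = lambda n: v(2 * n)   # then compress by 2, giving y[n] = x[2n + 3]

print([y(n) for n in range(-4, 1)])   # [0, 0, -1, 1, 0] for n = -4, ..., 0
```

The nonzero samples of v[n] at n = -5 and n = -1 never appear in y[n], matching the loss of samples noted in the example.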

• Drill Problem 1.12 Consider a discrete-time signal x[n] defined by

x[n] =  1,  -2 ≤ n ≤ 2
        0,  |n| > 2

Find y[n] = x[3n - 2].

Answer: y[n] =  1,  n = 0, 1
                0,  otherwise •

FIGURE 1.25 The proper order of applying the operations of time scaling and time shifting for
the case of a discrete-time signal. (a) Discrete-time signal x[n], antisymmetric about the origin.
(b) Intermediate signal v[n] obtained by shifting x[n] to the left by 3 samples. (c) Discrete-time
signal y[n] resulting from the compression of v[n] by a factor of 2, as a result of which two sam-
ples of the original x[n] are lost.

1.6 Elementary Signals

There are several elementary signals that feature prominently in the study of signals
and systems. The list of elementary signals includes exponential and sinusoidal signals, the
step function, impulse function, and ramp function. These elementary signals serve as
building blocks for the construction of more complex signals. They are also important in
their own right, in that they may be used to model many physical signals that occur in
nature. In what follows, we will describe the above-mentioned elementary signals, one by
one.

A real exponential signal, in its most general form, is written as

x(t) = Be^{at}   (1.28)

where both B and a are real parameters. The parameter B is the amplitude of the expo-
nential signal measured at time t = 0. Depending on whether the other parameter a is
positive or negative, we may identify two special cases:

• Decaying exponential, for which a < 0
• Growing exponential, for which a > 0

These two forms of an exponential signal are illustrated in Fig. 1.26. Part (a) of the figure
was generated using a = -6 and B = 5. Part (b) of the figure was generated using a = 5
and B = 1. If a = 0, the signal x(t) reduces to a dc signal equal to the constant B.
For a physical example of an exponential signal, consider a ''lossy'' capacit()r, as
depicted in Fig. 1.27. The capacitar has capacitance C, and the loss is represented L1y shunt
resistance R. The capaciror is charged by connecting a battery across it, and then the
battery is removed at time t = O. Lct V O denote the initial value of the voltage deveJoped


FIGURE 1.26  (a) Decaying exponential form of continuous-time signal. (b) Growing exponential
form of continuous-time signal.


FIGURE 1.27  Lossy capacitor, with the loss represented by shunt resistance R.

across the capacitor. From Fig. 1.27 we readily see that the operation of the capacitor for
t ≥ 0 is described by

    RC dv(t)/dt + v(t) = 0                                        (1.29)

where v(t) is the voltage measured across the capacitor at time t. Equation (1.29) is a
differential equation of order one. Its solution is given by

    v(t) = V_0 e^{-t/RC}                                          (1.30)

where the product term RC plays the role of a time constant. Equation (1.30) shows that
the voltage across the capacitor decays exponentially with time at a rate determined by
the time constant RC. The larger the resistor R (i.e., the less lossy the capacitor), the slower
will be the rate of decay of v(t) with time.
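The decay law of Eq. (1.30) is easy to check numerically. The following sketch evaluates v(t) = V_0 e^{-t/RC} for illustrative component values (V_0 = 5 V, R = 1 kΩ, C = 1 µF are assumptions, not values from the text):

```python
import math

def lossy_capacitor_voltage(t, V0=5.0, R=1000.0, C=1e-6):
    """Voltage across the lossy capacitor of Fig. 1.27 for t >= 0,
    v(t) = V0 * exp(-t / (R*C)), as in Eq. (1.30)."""
    return V0 * math.exp(-t / (R * C))

tau = 1000.0 * 1e-6                      # time constant RC = 1 ms
v_at_tau = lossy_capacitor_voltage(tau)  # voltage one time constant later
```

After one time constant RC the voltage has fallen to e^{-1} ≈ 36.8% of its initial value; a larger R (a less lossy capacitor) stretches the time constant and slows the decay, exactly as the text observes.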
The discussion thus far has been in the context of continuous time. In discrete time,
it is common practice to write a real exponential signal as

    x[n] = B r^n                                                  (1.31)

The exponential nature of this signal is readily confirmed by defining

    r = e^a

for some a. Figure 1.28 illustrates the decaying and growing forms of a discrete-time
exponential signal corresponding to 0 < r < 1 and r > 1, respectively. Note also that when
r < 0, a discrete-time exponential signal assumes alternating signs; this is where the case
of discrete-time exponential signals is distinctly different from continuous-time exponential
signals.
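A quick sketch of Eq. (1.31) shows all three regimes at once; B = 1 and the particular values of r below are arbitrary choices for illustration:

```python
def discrete_exponential(B, r, n_max):
    """Samples of x[n] = B * r**n, Eq. (1.31), for n = 0, 1, ..., n_max."""
    return [B * r**n for n in range(n_max + 1)]

decaying    = discrete_exponential(1.0,  0.8, 5)   # 0 < r < 1: decays
growing     = discrete_exponential(1.0,  1.2, 5)   # r > 1: grows
alternating = discrete_exponential(1.0, -0.8, 5)   # r < 0: signs alternate
```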


FIGURE 1.28  (a) Decaying exponential form of discrete-time signal. (b) Growing exponential
form of discrete-time signal.

The exponential signals shown in Figs. 1.26 and 1.28 are all real valued. It is possible
for an exponential signal to be complex valued. The mathematical forms of complex ex-
ponential signals are the same as those shown in Eqs. (1.28) and (1.31), with some differ-
ences as explained here. In the continuous-time case, the parameter B or parameter a or
both in Eq. (1.28) assume complex values. Similarly, in the discrete-time case, the param-
eter B or parameter r or both in Eq. (1.31) assume complex values. Two commonly en-
countered examples of complex exponential signals are e^{jωt} and e^{jΩn}.


■ SINUSOIDAL SIGNALS

The continuous-time version of a sinusoidal signal, in its most general form, may be written as

    x(t) = A cos(ωt + φ)                                          (1.32)

where A is the amplitude, ω is the frequency in radians per second, and φ is the phase angle
in radians. Figure 1.29(a) presents the waveform of a sinusoidal signal for A = 4 and
φ = +π/6. A sinusoidal signal is an example of a periodic signal, the period of which is

    T = 2π/ω

We may readily prove this property of a sinusoidal signal by using Eq. (1.32) to write

    x(t + T) = A cos(ω(t + T) + φ)
             = A cos(ωt + ωT + φ)
             = A cos(ωt + 2π + φ)
             = A cos(ωt + φ)
             = x(t)

which satisfies the defining condition of Eq. (1.5) for a periodic signal.


FIGURE 1.29  (a) Sinusoidal signal A cos(ωt + φ) with phase φ = +π/6 radians. (b) Sinusoidal
signal A sin(ωt + φ) with phase φ = +π/6 radians.

To illustrate the generation of a sinusoidal signal, consider the circuit of Fig. 1.30
consisting of an inductor and capacitor connected in parallel. It is assumed that the losses
in both components of the circuit are small enough for them to be considered "ideal." The
voltage developed across the capacitor at time t = 0 is equal to V_0. The operation of the
circuit in Fig. 1.30 for t ≥ 0 is described by

    LC d²v(t)/dt² + v(t) = 0                                      (1.33)

where v(t) is the voltage across the capacitor at time t, C is its capacitance, and L is the
inductance of the inductor. Equation (1.33) is a differential equation of order two. Its
solution is given by

    v(t) = V_0 cos(ω_0 t),   t ≥ 0                                (1.34)

where ω_0 is the natural angular frequency of oscillation of the circuit:

    ω_0 = 1/√(LC)                                                 (1.35)

Equation (1.34) describes a sinusoidal signal of amplitude A = V_0, frequency ω = ω_0, and
phase angle φ = 0.
Consider next the discrete-time version of a sinusoidal signal, written as

    x[n] = A cos(Ωn + φ)                                          (1.36)

The period of a periodic discrete-time signal is measured in samples. Thus for x[n] to be
periodic with a period of N samples, say, it must satisfy the condition of Eq. (1.8) for all
integer n and some integer N. Substituting n + N for n in Eq. (1.36) yields

    x[n + N] = A cos(Ωn + ΩN + φ)

For the condition of Eq. (1.8) to be satisfied, in general, we require that

    ΩN = 2πm radians,   m an integer

or

    Ω = 2πm/N radians/cycle,   m, N integers                      (1.37)

The important point to note here is that, unlike continuous-time sinusoidal signals, not all
discrete-time sinusoidal signals with arbitrary values of Ω are periodic. Specifically, for
the discrete-time sinusoidal signal described in Eq. (1.36) to be periodic, the angular fre-
quency Ω must be a rational multiple of 2π, as indicated in Eq. (1.37). Figure 1.31 illus-
trates a discrete-time sinusoidal signal for A = 1, φ = 0, and N = 12.
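Equation (1.37) gives a mechanical test for periodicity: if Ω/(2π) reduces to the fraction p/q in lowest terms, the smallest admissible period is N = q (with m = p); if Ω/(2π) is irrational, no N works at all. A minimal sketch, assuming the ratio is supplied as an exact fraction:

```python
from fractions import Fraction

def fundamental_period(omega_over_two_pi):
    """Smallest N satisfying Omega*N = 2*pi*m for integers m, N (Eq. 1.37),
    given Omega/(2*pi) as an exact Fraction p/q. The answer is the reduced
    denominator q. (An irrational ratio means no period exists at all.)"""
    return Fraction(omega_over_two_pi).denominator

# Omega = 5*pi, as in Example 1.6 below: Omega/(2*pi) = 5/2, so N = 2
N_example = fundamental_period(Fraction(5, 2))
# Omega = 0.2*pi, as in Drill Problem 1.13(b): Omega/(2*pi) = 1/10, so N = 10
N_drill = fundamental_period(Fraction(1, 10))
```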


FIGURE 1.30  Parallel LC circuit, assuming that the inductor L and capacitor C are both ideal.


FIGURE 1.31  Discrete-time sinusoidal signal.

EXAMPLE 1.6  A pair of sinusoidal signals with a common angular frequency is defined by

    x1[n] = sin[5πn]
    x2[n] = √3 cos[5πn]

(a) Specify the condition which the period N of both x1[n] and x2[n] must satisfy for them
to be periodic.
(b) Evaluate the amplitude and phase angle of the composite sinusoidal signal

    y[n] = x1[n] + x2[n]

Solution:
(a) The angular frequency of both x1[n] and x2[n] is

    Ω = 5π radians/cycle

Solving Eq. (1.37) for the period N, we get

    N = 2πm/(5π) = 2m/5

For x1[n] and x2[n] to be periodic, their period N must be an integer. This can only be
satisfied for m = 5, 10, 15, ..., which results in N = 2, 4, 6, ....
(b) We wish to express y[n] in the form

    y[n] = A cos(Ωn + φ)

Recall the trigonometric identity

    A cos(Ωn + φ) = A cos(Ωn) cos(φ) - A sin(Ωn) sin(φ)

Identifying Ω = 5π, we see that the right-hand side of this identity is of the same form as
x1[n] + x2[n]. We may therefore write

    A sin(φ) = -1   and   A cos(φ) = √3

Hence

    tan(φ) = sin(φ)/cos(φ) = -(amplitude of x1[n])/(amplitude of x2[n]) = -1/√3

from which we find that φ = -π/6 radians. Similarly, the amplitude A is given by

    A = √((amplitude of x1[n])² + (amplitude of x2[n])²)
      = √(1 + 3)
      = 2

Accordingly, we may express y[n] as

    y[n] = 2 cos(5πn - π/6)
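The result of Example 1.6 can be spot-checked numerically: sin(5πt) + √3 cos(5πt) and 2 cos(5πt − π/6) should agree at every point, not just at the integer sample instants. A small verification sketch:

```python
import math

# Verify that sin(5*pi*t) + sqrt(3)*cos(5*pi*t) equals 2*cos(5*pi*t - pi/6)
# on an arbitrary grid of points, confirming the amplitude and phase
# computed in Example 1.6.
for k in range(200):
    t = -1.0 + k * 0.0137   # arbitrary, non-integer sampling grid
    lhs = math.sin(5 * math.pi * t) + math.sqrt(3) * math.cos(5 * math.pi * t)
    rhs = 2.0 * math.cos(5 * math.pi * t - math.pi / 6)
    assert abs(lhs - rhs) < 1e-9
```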

• Drill Problem 1.13  Consider the following sinusoidal signals:

(a) x[n] = 5 sin[2n]
(b) x[n] = 5 cos[0.2πn]
(c) x[n] = 5 cos[6πn]
(d) x[n] = 5 sin[6πn/35]

Determine whether each x[n] is periodic, and if it is, find its fundamental period.

Answer: (a) Nonperiodic. (b) Periodic, fundamental period = 10. (c) Periodic, funda-
mental period = 1. (d) Periodic, fundamental period = 35.                    •
• Drill Problem 1.14  Find the smallest angular frequencies for which discrete-time
sinusoidal signals with the following fundamental periods would be periodic: (a) N = 8,
(b) N = 32, (c) N = 64, (d) N = 128.

Answer: (a) Ω = π/4. (b) Ω = π/16. (c) Ω = π/32. (d) Ω = π/64.              •


■ RELATION BETWEEN SINUSOIDAL AND COMPLEX EXPONENTIAL SIGNALS

Consider the complex exponential e^{jθ}. Using Euler's identity, we may expand this term as

    e^{jθ} = cos θ + j sin θ                                      (1.38)

This result indicates that we may express the continuous-time sinusoidal signal of Eq.
(1.32) as the real part of the complex exponential signal B e^{jωt}, where B is itself a complex
quantity defined by

    B = A e^{jφ}                                                  (1.39)

That is, we may write

    A cos(ωt + φ) = Re{B e^{jωt}}                                 (1.40)
1.6 Elementary Signals 35

where Re{ } denotes the real part of the complex quantity enclosed inside the braces. We
may readily prove this relation by noting that

    B e^{jωt} = A e^{jφ} e^{jωt}
              = A e^{j(ωt + φ)}
              = A cos(ωt + φ) + jA sin(ωt + φ)

from which Eq. (1.40) follows immediately. The sinusoidal signal of Eq. (1.32) is defined
in terms of a cosine function. Of course, we may also define a continuous-time sinusoidal
signal in terms of a sine function, as shown by

    x(t) = A sin(ωt + φ)                                          (1.41)

which is represented by the imaginary part of the complex exponential signal B e^{jωt}. That
is, we may write

    A sin(ωt + φ) = Im{B e^{jωt}}                                 (1.42)

where B is defined by Eq. (1.39), and Im{ } denotes the imaginary part of the complex
quantity enclosed inside the braces. The sinusoidal signal of Eq. (1.41) differs from that
of Eq. (1.32) by a phase shift of 90°. That is, the sinusoidal signal A cos(ωt + φ) leads
the sinusoidal signal A sin(ωt + φ), as illustrated in Fig. 1.29(b) for φ = π/6.
Similarly, in the discrete-time case we may write

    A cos(Ωn + φ) = Re{B e^{jΩn}}                                 (1.43)

and

    A sin(Ωn + φ) = Im{B e^{jΩn}}                                 (1.44)

where B is defined in terms of A and φ by Eq. (1.39). Figure 1.32 shows the two-
dimensional representation of the complex exponential e^{jΩn} for Ω = π/4 and n = 0, 1, ...,
7. The projection of each value on the real axis is cos(Ωn), while the projection on the
imaginary axis is sin(Ωn).
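Equations (1.39)–(1.42) are straightforward to confirm with complex arithmetic; the numerical values of A, φ, ω, and t below are arbitrary illustrations:

```python
import cmath
import math

A, phi = 4.0, math.pi / 6        # amplitude and phase, as in Fig. 1.29
omega, t = 2.0 * math.pi, 0.37   # arbitrary frequency and time instant

B = A * cmath.exp(1j * phi)      # complex amplitude, Eq. (1.39)
z = B * cmath.exp(1j * omega * t)

# The real and imaginary parts recover the cosine and sine forms,
# Eqs. (1.40) and (1.42) respectively.
assert abs(z.real - A * math.cos(omega * t + phi)) < 1e-12
assert abs(z.imag - A * math.sin(omega * t + phi)) < 1e-12
```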


FIGURE 1.32  Complex plane, showing eight points uniformly distributed on the unit circle.


FIGURE 1.33  Exponentially damped sinusoidal signal e^{-at} sin ωt, with a > 0.


■ EXPONENTIALLY DAMPED SINUSOIDAL SIGNALS

The multiplication of a sinusoidal signal by a real-valued decaying exponential signal
results in a new signal referred to as an exponentially damped sinusoidal signal. Specifi-
cally, multiplying the continuous-time sinusoidal signal A sin(ωt + φ) by the exponential
e^{-at} results in the exponentially damped sinusoidal signal

    x(t) = A e^{-at} sin(ωt + φ),   a > 0                         (1.45)

Figure 1.33 shows the waveform of this signal for A = 60, a = 6, and φ = 0. For increasing
time t, the amplitude of the sinusoidal oscillations decreases in an exponential fashion,
approaching zero for infinite time.
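A sketch of Eq. (1.45) with the parameters of Fig. 1.33 (A = 60, a = 6, φ = 0; the value of ω is an assumption, since the figure does not state it):

```python
import math

def damped_sinusoid(t, A=60.0, a=6.0, omega=2.0 * math.pi * 5.0, phi=0.0):
    """Exponentially damped sinusoid x(t) = A*exp(-a*t)*sin(omega*t + phi),
    Eq. (1.45), for a > 0."""
    return A * math.exp(-a * t) * math.sin(omega * t + phi)

# The oscillation always stays inside the decaying envelope A*exp(-a*t).
inside = all(
    abs(damped_sinusoid(k * 0.01)) <= 60.0 * math.exp(-6.0 * k * 0.01) + 1e-9
    for k in range(100)
)
```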
To illustrate the generation of an exponentially damped sinusoidal signal, consider
the parallel circuit of Fig. 1.34, consisting of a capacitor of capacitance C, an inductor of
inductance L, and a resistor of resistance R. The resistance R represents the combined
effect of losses associated with the inductor and the capacitor. Let V_0 denote the voltage
developed across the capacitor at time t = 0. The operation of the circuit in Fig. 1.34 is
described by

    C dv(t)/dt + (1/R) v(t) + (1/L) ∫_{-∞}^{t} v(τ) dτ = 0        (1.46)


FIGURE 1.34  Parallel LCR circuit, with inductor L, capacitor C, and resistor R all assumed to
be ideal.

where v(t) is the voltage across the capacitor at time t ≥ 0. Equation (1.46) is an integro-
differential equation. Its solution is given by

    v(t) = V_0 e^{-t/(2CR)} cos(ω_0 t)                            (1.47)

where

    ω_0 = √(1/(LC) - 1/(4C²R²))                                   (1.48)

In Eq. (1.48) it is assumed that 4CR² > L. Comparing Eq. (1.47) with (1.45), we have
A = V_0, a = 1/(2CR), ω = ω_0, and φ = π/2.
The circuits of Figs. 1.27, 1.30, and 1.34 served as examples in which an exponential
signal, a sinusoidal signal, and an exponentially damped sinusoidal signal, respectively,
arose naturally as solutions to physical problems. The operations of these circuits are
described by the differential equations (1.29), (1.33), and (1.46), whose solutions were
simply stated. Methods for solving these differential equations are presented in subsequent
chapters.
Returning to the subject matter at hand, the discrete-time version of the exponentially
damped sinusoidal signal of Eq. (1.45) is described by

    x[n] = B r^n sin[Ωn + φ]                                      (1.49)

For the signal of Eq. (1.49) to decay exponentially with time, the parameter r must lie in
the range 0 < |r| < 1.

• Drill Problem 1.15  Is it possible for an exponentially damped sinusoidal signal of
whatever kind to be periodic?

Answer: No.                                                                  •

■ STEP FUNCTION

The discrete-time version of the step function, commonly denoted by u[n], is defined by

    u[n] = 1,  n ≥ 0
           0,  n < 0                                              (1.50)

which is illustrated in Fig. 1.35.
The continuous-time version of the step function, commonly denoted by u(t), is de-
fined by

    u(t) = 1,  t ≥ 0
           0,  t < 0                                              (1.51)
Figure 1.36 presents a portrayal of the step function u(t). It is said to exhibit a discontinuity
at t = 0, since the value of u(t) changes instantaneously from 0 to 1 when t = 0.
The step function u(t) is a particularly simple signal to apply. Electrically, a battery
or dc source is applied at t = 0 by closing a switch, for example. As a test signal, it is

FIGURE 1.35  Discrete-time version of step function of unit amplitude.



FIGURE 1.36  Continuous-time version of step function of unit amplitude.

useful because the output of a system due to a step input reveals a great deal about how
quickly the system responds to an abrupt change in the input signal. A similar remark
applies to u[n] in the context of a discrete-time system.
The step function u(t) may also be used to construct other discontinuous waveforms,
as illustrated in the following example.


EXAMPLE 1.7  Consider the rectangular pulse x(t) shown in Fig. 1.37(a). This pulse has an
amplitude A and duration T. Express x(t) as a weighted sum of two step functions.

Solution: The rectangular pulse x(t) may be written in mathematical terms as follows:

    x(t) = A,  0 ≤ |t| < T/2
           0,  |t| > T/2                                          (1.52)

where |t| denotes the magnitude of time t. The rectangular pulse x(t) is represented as the
difference between two time-shifted step functions, as illustrated in Fig. 1.37(b). On the basis
of this figure, we may express x(t) as

    x(t) = A u(t + T/2) - A u(t - T/2)                            (1.53)

where u(t) is the step function. For the purpose of illustration, we have set T = 1 s in Fig. 1.37.

• Drill Problem 1.16  A discrete-time signal x[n] is defined by

    x[n] = 1,  0 ≤ n ≤ 9
           0,  otherwise


FIGURE 1.37  (a) Rectangular pulse x(t) of amplitude A and duration T = 1 s, symmetric about
the origin. (b) Representation of x(t) as the superposition of two step functions of amplitude A,
with one step function shifted to the left by T/2 and the other shifted to the right by T/2; these
two shifted signals are denoted by x1(t) and x2(t), respectively.



FIGURE 1.38  Discrete-time form of impulse.

Using u[n], describe x[n] as the superposition of two step functions.

Answer: x[n] = u[n] - u[n - 10].                                             •
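The construction in Drill Problem 1.16 can be checked directly; u below is a literal implementation of the discrete-time step of Eq. (1.50):

```python
def u(n):
    """Discrete-time unit step, Eq. (1.50)."""
    return 1 if n >= 0 else 0

def x(n):
    """Drill Problem 1.16: x[n] = u[n] - u[n - 10], which equals 1 for
    0 <= n <= 9 and 0 otherwise."""
    return u(n) - u(n - 10)

pulse = [x(n) for n in range(-3, 13)]   # nonzero exactly on n = 0, ..., 9
```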

■ IMPULSE FUNCTION

The discrete-time version of the impulse, commonly denoted by δ[n], is defined by

    δ[n] = 1,  n = 0
           0,  n ≠ 0                                              (1.54)

which is illustrated in Fig. 1.38.
The continuous-time version of the unit impulse, commonly denoted by δ(t), is de-
fined by the following pair of relations:

    δ(t) = 0   for t ≠ 0                                          (1.55)

    ∫_{-∞}^{∞} δ(t) dt = 1                                        (1.56)

Equation (1.55) says that the impulse δ(t) is zero everywhere except at the origin. Equation
(1.56) says that the total area under the unit impulse is unity. The impulse δ(t) is also
referred to as the Dirac delta function. Note that the impulse δ(t) is the derivative of the
step function u(t) with respect to time t. Conversely, the step function u(t) is the integral
of the impulse δ(t) with respect to time t.
A graphical description of the impulse δ[n] for discrete time is straightforward, as
shown in Fig. 1.38. In contrast, visualization of the unit impulse δ(t) for continuous time
requires more detailed attention. One way to visualize δ(t) is to view it as the limiting form
of a rectangular pulse of unit area, as illustrated in Fig. 1.39(a). Specifically, the duration
of the pulse is decreased and its amplitude is increased such that the area under the pulse


FIGURE 1.39  (a) Evolution of a rectangular pulse of unit area into an impulse of unit strength.
(b) Graphical symbol for an impulse of strength a.

is maintained constant at unity. As the duration decreases, the rectangular pulse better
approximates the impulse. Indeed, we may generalize this result by stating that

    δ(t) = lim_{T→0} g_T(t)                                       (1.57)

where g_T(t) is any pulse that is an even function of time t, with duration T, and unit area.
The area under the pulse defines the strength of the impulse. Thus when we speak of the
impulse function δ(t), in effect we are saying that its strength is unity. The graphical symbol
for an impulse is depicted in Fig. 1.39(b). The strength of the impulse is denoted by the
label next to the arrow.
From the defining equation (1.55), it immediately follows that the unit impulse δ(t)
is an even function of time t, as shown by

    δ(-t) = δ(t)                                                  (1.58)
For the unit impulse δ(t) to have mathematical meaning, however, it has to appear
as a factor in the integrand of an integral with respect to time and then, strictly speaking,
only when the other factor in the integrand is a continuous function of time at which the
impulse occurs. Let x(t) be such a function, and consider the product of x(t) and the time-
shifted delta function δ(t - t_0). In light of the two defining equations (1.55) and (1.56),
we may express the integral of this product as follows:

    ∫_{-∞}^{∞} x(t) δ(t - t_0) dt = x(t_0)                        (1.59)

The operation indicated on the left-hand side of Eq. (1.59) sifts out the value x(t_0) of the
function x(t) at time t = t_0. Accordingly, Eq. (1.59) is referred to as the sifting property
of the unit impulse. This property is sometimes used as the definition of a unit impulse; in
effect, it incorporates Eqs. (1.55) and (1.56) into a single relation.
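The sifting property can be seen numerically by replacing δ(t − t_0) with the narrow rectangular pulse of Fig. 1.39(a): as the width T shrinks, the integral approaches x(t_0). The test function cos and the width chosen below are illustrative assumptions:

```python
import math

def sift(x, t0, T=1e-4, steps=1000):
    """Approximate integral of x(t)*g_T(t - t0) dt by the midpoint rule,
    where g_T is a rectangular pulse of width T and height 1/T centered
    on t0. As T -> 0 this tends to x(t0), the sifting property (Eq. 1.59)."""
    dt = T / steps
    total = 0.0
    for k in range(steps):
        t = t0 - T / 2 + (k + 0.5) * dt
        total += x(t) * (1.0 / T) * dt
    return total

approx = sift(math.cos, 0.5)   # should be very close to cos(0.5)
```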
Another useful property of the unit impulse δ(t) is the time-scaling property, de-
scribed by

    δ(at) = (1/a) δ(t),   a > 0                                   (1.60)

To prove this property, we replace t in Eq. (1.57) with at and so write

    δ(at) = lim_{T→0} g_T(at)                                     (1.61)

To represent the function g_T(t), we use the rectangular pulse shown in Fig. 1.40(a), which
has duration T, amplitude 1/T, and therefore unit area. Correspondingly, the time-scaled



FIGURE 1.40  Steps involved in proving the time-scaling property of the unit impulse. (a) Rec-
tangular pulse g_T(t) of amplitude 1/T and duration T, symmetric about the origin. (b) Pulse g_T(t)
compressed by factor a. (c) Amplitude scaling of the compressed pulse, restoring it to unit area.

function g_T(at) is shown in Fig. 1.40(b) for a > 1. The amplitude of g_T(at) is left unchanged
by the time-scaling operation. Therefore, in order to restore the area under this pulse to
unity, the amplitude of g_T(at) is scaled by the same factor a, as indicated in Fig. 1.40(c).
The time function in Fig. 1.40(c) is denoted by g'_T(at); it is related to g_T(at) by

    g'_T(at) = a g_T(at)                                          (1.62)

Substituting Eq. (1.62) in (1.61), we get

    δ(at) = (1/a) lim_{T→0} g'_T(at)                              (1.63)

Since, by design, the area under the function g'_T(at) is unity, it follows that

    δ(t) = lim_{T→0} g'_T(at)                                     (1.64)

Accordingly, the use of Eq. (1.64) in (1.63) results in the time-scaling property described
in Eq. (1.60).
Having defined what a unit impulse is and described its properties, there is one more
question that needs to be addressed: What is the practical use of a unit impulse? We cannot
generate a physical impulse function, since that would correspond to a signal of infinite
amplitude at t = 0 and that is zero elsewhere. However, the impulse function serves a
mathematical purpose by providing an approximation to a physical signal of extremely
short duration and high amplitude. The response of a system to such an input reveals much
about the character of the system. For example, consider the parallel LCR circuit of Fig.
1.34, assumed to be initially at rest. Suppose now a voltage signal approximating an
impulse function is applied to the circuit at time t = 0. The current through an inductor
cannot change instantaneously, but the voltage across a capacitor can. It follows therefore
that the voltage across the capacitor suddenly rises to a value equal to V_0, say, at time
t = 0+. Here t = 0+ refers to the instant of time when energy in the input signal is expired.
Thereafter, the circuit operates without additional input. The resulting value of the voltage
v(t) across the capacitor is defined by Eq. (1.47). The response v(t) is called the transient
or natural response of the circuit, the evaluation of which is facilitated by the application
of an impulse function as the test signal.

■ RAMP FUNCTION

The impulse function δ(t) is the derivative of the step function u(t) with respect to time.
By the same token, the integral of the step function u(t) is a ramp function of unit slope.
This latter test signal is commonly denoted by r(t), which is formally defined as follows:

    r(t) = t,  t ≥ 0
           0,  t < 0                                              (1.65)

FIGURE 1.41  Ramp function of unit slope.


FIGURE 1.42  Discrete-time version of the ramp function.

Equivalently, we may write

    r(t) = t u(t)                                                 (1.66)

The ramp function r(t) is shown graphically in Fig. 1.41.
In mechanical terms, a ramp function may be visualized as follows. If the input
variable is represented as the angular displacement of a shaft, then the constant-speed
rotation of the shaft provides a representation of the ramp function. As a test signal, the
ramp function enables us to evaluate how a continuous-time system would respond to a
signal that increases linearly with time.
The discrete-time version of the ramp function is defined by

    r[n] = n,  n ≥ 0
           0,  n < 0                                              (1.67)

or, equivalently,

    r[n] = n u[n]                                                 (1.68)

It is illustrated in Fig. 1.42.
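Both ramps are one-liners; a minimal sketch of Eqs. (1.66) and (1.68):

```python
def ramp(t):
    """Continuous-time ramp r(t) = t*u(t), Eq. (1.66)."""
    return t if t >= 0 else 0.0

def ramp_d(n):
    """Discrete-time ramp r[n] = n*u[n], Eq. (1.68)."""
    return n if n >= 0 else 0
```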

1.7 Systems Viewed as Interconnections of Operations

In mathematical terms, a system may be viewed as an interconnection of operations that
transforms an input signal into an output signal with properties different from those of
the input signal. The signals may be of the continuous-time or discrete-time variety, or a
mixture of both. Let the overall operator H denote the action of a system. Then the ap-
plication of a continuous-time signal x(t) to the input of the system yields the output signal
described by

    y(t) = H{x(t)}                                                (1.69)

Figure 1.43(a) shows a block diagram representation of Eq. (1.69). Correspondingly, for
the discrete-time case, we may write

    y[n] = H{x[n]}                                                (1.70)


FIGURE 1.43  Block diagram representation of operator H for (a) continuous time and (b) dis-
crete time.


FIGURE 1.44  Discrete-time shift operator S^k, operating on the discrete-time signal x[n] to pro-
duce x[n - k].

where the discrete-time signals x[n] and y[n] denote the input and output signals, respec-
tively, as depicted in Fig. 1.43(b).

EXAMPLE 1.8  Consider a discrete-time system whose output signal y[n] is the average of the
three most recent values of the input signal x[n], as shown by

    y[n] = (1/3)(x[n] + x[n - 1] + x[n - 2])

Such a system is referred to as a moving-average system for two reasons. First, y[n] is the
average of the sample values x[n], x[n - 1], and x[n - 2]. Second, the value of y[n] changes
as n moves along the discrete-time axis. Formulate the operator H for this system; hence,
develop a block diagram representation for it.

Solution: Let the operator S^k denote a system that time shifts the input x[n] by k time units
to produce an output equal to x[n - k], as depicted in Fig. 1.44. Accordingly, we may define
the overall operator H for the moving-average system as

    H = (1/3)(1 + S + S²)

Two different implementations of the operator H (i.e., the moving-average system) that suggest
themselves are presented in Fig. 1.45. The implementation shown in part (a) of the figure uses
the cascade connection of two identical unity time shifters, namely, S¹ = S. On the other hand,
the implementation shown in part (b) of the figure uses two different time shifters, S and S²,
connected in parallel. In both cases, the moving-average system is made up of an intercon-
nection of three functional blocks, namely, two time shifters, an adder, and a scalar
multiplier.


FIGURE 1.45  Two different (but equivalent) implementations of the moving-average system:
(a) cascade form of implementation, and (b) parallel form of implementation.

• Drill Problem 1.17  Express the operator that describes the input-output relation

    y[n] = (1/3)(x[n + 1] + x[n] + x[n - 1])

in terms of the time-shift operator S.

Answer: H = (1/3)(S⁻¹ + 1 + S).                                              •
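The operator algebra of Example 1.8 can be sketched in code by representing a signal as a dictionary from time index to value, with absent indices read as zero; the shift operator S^k then just relabels the indices. This representation is an illustrative choice, not something prescribed by the text:

```python
def S(x, k):
    """Time-shift operator S^k of Fig. 1.44: (S^k x)[n] = x[n - k].
    Signals are dicts mapping index n to x[n]; missing entries are 0."""
    return {n + k: v for n, v in x.items()}

def moving_average(x):
    """H = (1/3)(1 + S + S^2), the moving-average system of Example 1.8."""
    x1, x2 = S(x, 1), S(x, 2)
    indices = set(x) | set(x1) | set(x2)
    return {n: (x.get(n, 0) + x1.get(n, 0) + x2.get(n, 0)) / 3
            for n in indices}

y = moving_average({0: 3, 1: 6, 2: 9})   # x[0]=3, x[1]=6, x[2]=9
```

For instance, y[2] = (x[2] + x[1] + x[0])/3 = (9 + 6 + 3)/3 = 6.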

In the interconnected systems shown in Figs. 1.45(a) and (b), the signal flows through
each one of them in the forward direction only. Another possible way of combining systems
is through the use of feedback connections. Figure 1.4 shows an example of a feedback
system, which is characterized by two paths. The forward path involves the cascade con-
nection of the controller and plant. The feedback path is made possible through the use
of a sensor connected to the output of the system at one end and the input at the other
end. The use of feedback has many desirable benefits and gives rise to problems of its own
that require special attention; the subject of feedback is discussed in Chapter 9.

1.8 Properties of Systems

The properties of a system describe the characteristics of the operator H representing the
system. In what follows, we study some of the most basic properties of systems.


■ STABILITY

A system is said to be bounded input-bounded output (BIBO) stable if and only if every
bounded input results in a bounded output. The output of such a system does not diverge
if the input does not diverge.
To put the condition for BIBO stability on a formal basis, consider a continuous-
time system whose input-output relation is as described in Eq. (1.69). The operator H is
BIBO stable if the output signal y(t) satisfies the condition

    |y(t)| ≤ M_y < ∞   for all t

whenever the input signals x(t) satisfy the condition

    |x(t)| ≤ M_x < ∞   for all t

Both M_x and M_y represent some finite positive numbers. We may describe the condition
for the BIBO stability of a discrete-time system in a similar manner.
From an engineering perspective, it is important that a system of interest remains
stable under all possible operating conditions. It is only then that the system is guaranteed
to produce a bounded output for a bounded input. Unstable systems are usually to be
avoided, unless some mechanism can be found to stabilize them.
One famous example of an unstable system is the first Tacoma Narrows suspension
bridge that collapsed on November 7, 1940, at approximately 11:00 a.m., due to wind-
induced vibrations. Situated on the Tacoma Narrows in Puget Sound, near the city of
Tacoma, Washington, the bridge had only been open for traffic a few months before it
collapsed; see Fig. 1.46 for photographs taken just prior to failure of the bridge and soon
after.


FIGURE 1.46  Dramatic photographs showing the collapse of the Tacoma Narrows suspension
bridge on November 7, 1940. (a) Photograph showing the twisting motion of the bridge's center
span just before failure. (b) A few minutes after the first piece of concrete fell, this second photo-
graph shows a 600-ft section of the bridge breaking out of the suspension span and turning upside
down as it crashed in Puget Sound, Washington. Note the car in the top right-hand corner of the
photograph. (Courtesy of the Smithsonian Institution.)

EXAMPLE 1.9  Show that the moving-average system described in Example 1.8 is BIBO
stable.

Solution: Assume that

    |x[n]| ≤ M_x < ∞   for all n

Using the given input-output relation

    y[n] = (1/3)(x[n] + x[n - 1] + x[n - 2])

we may write

    |y[n]| = (1/3)|x[n] + x[n - 1] + x[n - 2]|
           ≤ (1/3)(|x[n]| + |x[n - 1]| + |x[n - 2]|)
           ≤ (1/3)(M_x + M_x + M_x)
           = M_x

Hence the absolute value of the output signal y[n] never exceeds the maximum absolute
value of the input signal x[n] for all n, which shows that the moving-average system is stable.

• Drill Problem 1.18  Show that the moving-average system described by the in-
put-output relation

    y[n] = (1/3)(x[n + 1] + x[n] + x[n - 1])

is BIBO stable.                                                              •
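The bound derived in Example 1.9 can be exercised on a random bounded input; the choice of M_x, the input length, and the random seed below are arbitrary:

```python
import random

random.seed(0)
Mx = 2.5
x = [random.uniform(-Mx, Mx) for _ in range(1000)]   # a bounded input
y = [(x[n] + x[n - 1] + x[n - 2]) / 3 for n in range(2, len(x))]

# As Example 1.9 guarantees, the output never exceeds the input bound Mx.
peak = max(abs(v) for v in y)
assert peak <= Mx
```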

EXAMPLE 1.10  Consider a discrete-time system whose input-output relation is defined by

    y[n] = r^n x[n]

where r > 1. Show that this system is unstable.

Solution: Assume that the input signal x[n] satisfies the condition

    |x[n]| ≤ M_x < ∞   for all n

We then find that

    |y[n]| = |r^n x[n]|
           = |r^n| · |x[n]|

With r > 1, the multiplying factor r^n diverges for increasing n. Accordingly, the condition
that the input signal is bounded is not sufficient to guarantee a bounded output signal, and
so the system is unstable. To prove stability, we need to establish that all bounded inputs
produce a bounded output.
■ MEMORY

A system is said to possess memory if its output signal depends on past values of the input
signal. The temporal extent of past values on which the output depends defines how far
the memory of the system extends into the past. In contrast, a system is said to be me-
moryless if its output signal depends only on the present value of the input signal.

For example, a resistor is memoryless since the current i(t) flowing through it in
response to the applied voltage v(t) is defined by

    i(t) = (1/R) v(t)

where R is the resistance of the resistor. On the other hand, an inductor has memory, since
the current i(t) flowing through it is related to the applied voltage v(t) as follows:

    i(t) = (1/L) ∫_{-∞}^{t} v(τ) dτ

where L is the inductance of the inductor. That is, unlike a resistor, the current through
an inductor at time t depends on all past values of the voltage v(t); the memory of an
inductor extends into the infinite past.
The moving-average system of Example 1.8 described by the input-output relation

y[n] = (1/3)(x[n] + x[n - 1] + x[n - 2])

has memory, since the value of the output signal y[n] at time n depends on the present and two past values of the input signal x[n]. On the other hand, a system described by the input-output relation

y[n] = x^2[n]

is memoryless, since the value of the output signal y[n] at time n depends only on the present value of the input signal x[n].

• Drill Problem 1.19 How far does the memory of the moving-average system described by the input-output relation

y[n] = (1/3)(x[n] + x[n - 1] + x[n - 2])

extend into the past?

Answer: Two time units. •

• Drill Problem 1.20 The input-output relation of a semiconductor diode is represented by

i(t) = a0 + a1 v(t) + a2 v^2(t) + a3 v^3(t) + ···

where v(t) is the applied voltage, i(t) is the current flowing through the diode, and a0, a1, a2, ... are constants. Does this diode have memory?

Answer: No. •
• Drill Problem 1.21 The input-output relation of a capacitor is described by

v(t) = (1/C) ∫_{-∞}^{t} i(τ) dτ

What is its memory?

Answer: Memory extends from time t to the infinite past. •
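The capacitor's memory can be seen numerically: two input currents that agree at the present instant but differ over their pasts produce different present outputs. The following MATLAB sketch is an illustration added here (not from the text); it approximates the running integral by a cumulative sum with C = 1:

```matlab
% Two input currents that agree at t = 1 s but have different pasts.
C = 1; dt = 0.001; t = 0:dt:1;
i1 = ones(size(t));        % constant current; i1 at t = 1 s equals 1
i2 = t;                    % ramp current;     i2 at t = 1 s equals 1
v1 = cumsum(i1)*dt/C;      % v(t) = (1/C) * running integral of i
v2 = cumsum(i2)*dt/C;
disp([v1(end), v2(end)])   % about 1.0 and 0.5: same present input, different outputs
```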



■ CAUSALITY

A system is said to be causal if the present value of the output signal depends only on the present and/or past values of the input signal. In contrast, the output signal of a noncausal system depends on future values of the input signal.
For example, the moving-average system described by

y[n] = (1/3)(x[n] + x[n - 1] + x[n - 2])

is causal. On the other hand, the moving-average system described by

y[n] = (1/3)(x[n + 1] + x[n] + x[n - 1])

is noncausal, since the output signal y[n] depends on a future value of the input signal, namely x[n + 1].

• Drill Problem 1.22 Consider the RC circuit shown in Fig. 1.47. Is it causal or noncausal?

Answer: Causal. •
• Drill Problem 1.23 Suppose k in the operator of Fig. 1.44 is replaced by -k. Is the resulting system causal or noncausal for positive k?

Answer: Noncausal. •

■ INVERTIBILITY

A system is said to be invertible if the input of the system can be recovered from the system output. We may view the set of operations needed to recover the input as a second system connected in cascade with the given system, such that the output signal of the second system is equal to the input signal applied to the given system. To put the notion of invertibility on a formal basis, let the operator H represent a continuous-time system, with the input signal x(t) producing the output signal y(t). Let the output signal y(t) be applied to a second continuous-time system represented by the operator H^{-1}, as illustrated in Fig. 1.48. The output signal of the second system is defined by

H^{-1}{y(t)} = H^{-1}{H{x(t)}}
= H^{-1}H{x(t)}

where we have made use of the fact that two operators H and H^{-1} connected in cascade are equivalent to a single operator H^{-1}H. For this output signal to equal the original input signal x(t), we require that

H^{-1}H = I    (1.71)


FIGURE 1.47 Series RC circuit driven from an ideal voltage source v1(t), producing output voltage v2(t).

FIGURE 1.48 The notion of system invertibility. The second operator H^{-1} is the inverse of the first operator H. Hence the input x(t) is passed through the cascade connection of H and H^{-1} completely unchanged.

where I denotes the identity operator. The output of a system described by the identity operator is exactly equal to the input. Equation (1.71) is the condition that the new operator H^{-1} must satisfy in relation to the given operator H for the original input signal x(t) to be recovered from y(t). The operator H^{-1} is called the inverse operator, and the associated system is called the inverse system. Note that H^{-1} is not the reciprocal of the operator H; rather, the use of the superscript -1 is intended to be merely a flag indicating ''inverse.'' In general, the problem of finding the inverse of a given system is a difficult one. In any event, a system is not invertible unless distinct inputs applied to the system produce distinct outputs. That is, there must be a one-to-one mapping between input and output signals for a system to be invertible. Identical conditions must hold for a discrete-time system to be invertible.
The property of invertibility is of particular importance in the design of communication systems. As remarked in Section 1.3, when a transmitted signal propagates through a communication channel, it becomes distorted due to the physical characteristics of the channel. A widely used method of compensating for this distortion is to include in the receiver a network called an equalizer, which is connected in cascade with the channel in a manner similar to that described in Fig. 1.48. By designing the equalizer to be the inverse of the channel, the transmitted signal is restored to its original form, assuming ideal conditions.

EXAMPLE 1.11 Consider the time-shift system described by the input-output relation

y(t) = x(t - t0) = S^{t0}{x(t)}

where the operator S^{t0} represents a time shift of t0 seconds. Find the inverse of this system.

Solution: For this example, the inverse of a time shift of t0 seconds is a time shift of -t0 seconds. We may represent the time shift of -t0 by the operator S^{-t0}. Thus applying S^{-t0} to the output signal of the given time-shift system, we get

S^{-t0}{y(t)} = S^{-t0}{S^{t0}{x(t)}}
= S^{-t0}S^{t0}{x(t)}

For this output signal to equal the original input signal x(t), we require that

S^{-t0}S^{t0} = I

which is in perfect accord with the condition for invertibility described in Eq. (1.71).
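In discrete time, the same idea can be demonstrated with vector indexing: a delay of k samples followed by an advance of k samples recovers the input. The MATLAB sketch below is illustrative, with zero-padding standing in for the samples that shift out of view:

```matlab
% Delay a sequence by k samples, then advance it by k samples.
k = 3;
x = [1 2 3 4 5 0 0 0];            % trailing zeros absorb the shifts
y = [zeros(1,k), x(1:end-k)];     % S^k : delay by k
z = [y(k+1:end), zeros(1,k)];     % S^-k: advance by k
disp(isequal(z, x))               % displays 1: the input is recovered
```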

• Drill Problem 1.24 An inductor is described by the input-output relation

y(t) = (1/L) ∫_{-∞}^{t} x(τ) dτ

Find the operation representing the inverse system.

Answer: L d/dt. •


EXAMPLE 1.12 Show that a square-law system described by the input-output relation
' y(t) = x 2(t)
~,;. . .· .,.

is not invertible.
Solution: We note that the square-law system violates a necessary condition for invertibility,
which postulates that distinct inputs must produce distinct outputs. Specifically, the distinct
inputs x(t} and -x(t) produce the sarne output y(t). Accordingly,.the square-law system is not
·:·, ..


■ TIME INVARIANCE

A system is said to be time invariant if a time delay or time advance of the input signal leads to an identical time shift in the output signal. This implies that a time-invariant system responds identically no matter when the input signal is applied. Stated in another way, the characteristics of a time-invariant system do not change with time. Otherwise, the system is said to be time variant.
Consider a continuous-time system whose input-output relation is described by Eq. (1.69), reproduced here for convenience of presentation:

y(t) = H{x(t)}

Suppose the input signal x(t) is shifted in time by t0 seconds, resulting in the new input x(t - t0). This operation may be described by writing

x(t - t0) = S^{t0}{x(t)}

where the operator S^{t0} represents a time shift equal to t0 seconds. Let y_i(t) denote the output signal of the system produced in response to the time-shifted input x(t - t0). We may then write

y_i(t) = H{x(t - t0)}
= H{S^{t0}{x(t)}}
= HS^{t0}{x(t)}    (1.72)
which is represented by the block diagram shown in Fig. 1.49(a). Now suppose y_o(t) represents the output of the original system shifted in time by t0 seconds, as shown by

y_o(t) = S^{t0}{y(t)}
= S^{t0}{H{x(t)}}
= S^{t0}H{x(t)}    (1.73)

which is represented by the block diagram shown in Fig. 1.49(b). The system is time invariant if the outputs y_i(t) and y_o(t) defined in Eqs. (1.72) and (1.73) are equal for an identical input signal x(t). Hence we require

HS^{t0} = S^{t0}H    (1.74)

That is, for a system described by the operator H to be time invariant, the system operator H and the time-shift operator S^{t0} must commute with each other for all t0. A similar relation must hold for a discrete-time system to be time invariant.

FIGURE 1.49 The notion of time invariance. (a) Time-shift operator S^{t0} preceding operator H. (b) Time-shift operator S^{t0} following operator H. These two situations are equivalent, provided that H is time invariant.

EXAMPLE 1.13 Use the voltage v(t) across an inductor to represent the input signal x(t), and the current i(t) flowing through it to represent the output signal y(t). Thus the inductor is described by the input-output relation

y(t) = (1/L) ∫_{-∞}^{t} x(τ) dτ

where L is the inductance. Show that the inductor so described is time invariant.
Solution: Let the input x(t) be shifted by t0 seconds, yielding x(t - t0). The response y_i(t) of the inductor to x(t - t0) is

y_i(t) = (1/L) ∫_{-∞}^{t} x(τ - t0) dτ

Next, let y_o(t) denote the original output of the inductor shifted by t0 seconds, as shown by

y_o(t) = y(t - t0)
= (1/L) ∫_{-∞}^{t-t0} x(τ) dτ

Though at first examination y_i(t) and y_o(t) look different, they are in fact equal, as shown by a simple change in the variable for integration. Let

τ' = τ - t0

For a constant t0, we have dτ' = dτ. Hence changing the limits of integration, the expression for y_i(t) may be rewritten as

y_i(t) = (1/L) ∫_{-∞}^{t-t0} x(τ') dτ'

which, in mathematical terms, is identical to y_o(t). It follows therefore that an ordinary inductor is time invariant.


EXAMPLE 1.14 A thermistor has a resistance that varies with time due to temperature changes. Let R(t) denote the resistance of the thermistor, expressed as a function of time. Associating the input signal x(t) with the voltage applied across the thermistor, and the output signal y(t) with the current flowing through it, we may express the input-output relation of the thermistor as

y(t) = x(t)/R(t)

Show that the thermistor so described is time variant.

Solution: Let y_i(t) denote the response of the thermistor produced by a time-shifted version x(t - t0) of the original input signal. We may then write

y_i(t) = x(t - t0)/R(t)

Next, let y_o(t) denote the original output of the thermistor shifted in time by t0, as shown by

y_o(t) = y(t - t0)
= x(t - t0)/R(t - t0)

We now see that since, in general, R(t) ≠ R(t - t0) for t0 ≠ 0, then

y_o(t) ≠ y_i(t) for t0 ≠ 0

Hence a thermistor is time variant, which is intuitively satisfying.

• Drill Problem 1.25 Is a discrete-time system described by the input-output relation

y[n] = r^n x[n]

time invariant?

Answer: No. •
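The conclusion of Drill Problem 1.25 can be confirmed numerically by comparing the response to a shifted input with the shifted response. The MATLAB sketch below is an added illustration; the test signal is arbitrary:

```matlab
% Compare the response to a delayed input with the delayed response
% for the system y[n] = r^n x[n].
r = 0.9; n0 = 2;
n = 0:20;
x  = cos(0.4*n);                   % arbitrary test input
xs = [zeros(1,n0), x(1:end-n0)];   % x[n - n0]
yi = (r.^n).*xs;                   % response to the shifted input
y  = (r.^n).*x;
yo = [zeros(1,n0), y(1:end-n0)];   % shifted response
disp(max(abs(yi - yo)))            % nonzero, so the system is time variant
```

The two outputs differ by the factor r^{n0}, which is exactly the dependence on the shift that time invariance forbids.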

■ LINEARITY

A system is said to be linear if it satisfies the principle of superposition. That is, the response of a linear system to a weighted sum of input signals is equal to the same weighted sum of output signals, each output signal being associated with a particular input signal acting on the system independently of all the other input signals. A system that violates the principle of superposition is said to be nonlinear.
Let the operator H represent a continuous-time system. Let the signal applied to the system input be defined by the weighted sum

x(t) = Σ_{i=1}^{N} a_i x_i(t)    (1.75)

where x_1(t), x_2(t), ..., x_N(t) denote a set of input signals, and a_1, a_2, ..., a_N denote the corresponding weighting factors. The resulting output signal is written as

y(t) = H{x(t)}
= H{Σ_{i=1}^{N} a_i x_i(t)}    (1.76)

If the system is linear, we may (in accordance with the principle of superposition) express the output signal y(t) of the system as

y(t) = Σ_{i=1}^{N} a_i y_i(t)    (1.77)

where y_i(t) is the output of the system in response to the input x_i(t) acting alone; that is,

y_i(t) = H{x_i(t)}    (1.78)


FIGURE 1.50 The linearity property of a system. (a) The combined operation of amplitude scaling and summation precedes the operator H for multiple inputs. (b) The operator H precedes amplitude scaling for each input; the resulting outputs are summed to produce the overall output y(t). If these two configurations produce the same output y(t), the operator H is linear.

The weighted sum of Eq. (1.77) describing the output signal y(t) is of the same mathematical form as that of Eq. (1.75), describing the input signal x(t). Substituting Eq. (1.78) into (1.77), we get

y(t) = Σ_{i=1}^{N} a_i H{x_i(t)}    (1.79)

In order to write Eq. (1.79) in the same form as Eq. (1.76), the system operation described by H must commute with the summation and amplitude scaling in Eq. (1.79), as illustrated in Fig. 1.50. Indeed, Eqs. (1.78) and (1.79), viewed together, represent a mathematical statement of the principle of superposition. For a linear discrete-time system, the principle of superposition is described in a similar manner.

EXAMPLE 1.15 Consider a discrete-time system described by the input-output relation

y[n] = n x[n]

Show that this system is linear.


Solution: Let the input signal x[n] be expressed as the weighted sum

x[n] = Σ_{i=1}^{N} a_i x_i[n]

We may then express the resulting output signal of the system as

y[n] = n Σ_{i=1}^{N} a_i x_i[n]
= Σ_{i=1}^{N} a_i n x_i[n]
= Σ_{i=1}^{N} a_i y_i[n]

where

y_i[n] = n x_i[n]

is the output due to each input acting independently. We thus see that the given system satisfies the principle of superposition and is therefore linear.

EXAMPLE 1.16 Consider next the continuous-time system described by the input-output relation

y(t) = x(t)x(t - 1)

Show that this system is nonlinear.

Solution: Let the input signal x(t) be expressed as the weighted sum

x(t) = Σ_{i=1}^{N} a_i x_i(t)

Correspondingly, the output signal of the system is given by the double summation

y(t) = Σ_{i=1}^{N} a_i x_i(t) Σ_{j=1}^{N} a_j x_j(t - 1)
= Σ_{i=1}^{N} Σ_{j=1}^{N} a_i a_j x_i(t) x_j(t - 1)

The form of this equation is radically different from that describing the input signal x(t). That is, here we cannot write y(t) = Σ_{i=1}^{N} a_i y_i(t). Thus the system violates the principle of superposition and is therefore nonlinear.
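Superposition lends itself to a direct numerical test. The added MATLAB sketch below applies a weighted-sum input to each system and compares the result with the weighted sum of the individual responses; H2 is a discrete-time stand-in for y(t) = x(t)x(t - 1), and the inputs and weights are arbitrary:

```matlab
% Superposition check for y[n] = n*x[n] (linear) and a discrete
% stand-in for y(t) = x(t)*x(t-1) (nonlinear).
n  = 0:19;
x1 = sin(0.2*n); x2 = cos(0.5*n);   % arbitrary inputs
a1 = 2; a2 = -3;                    % arbitrary weights
H1 = @(x) n.*x;                     % the system of Example 1.15
y  = H1(a1*x1 + a2*x2);
ys = a1*H1(x1) + a2*H1(x2);
disp(max(abs(y - ys)))              % zero to rounding: superposition holds
H2 = @(x) x.*[0, x(1:end-1)];       % x[n]*x[n-1], stand-in for Example 1.16
y  = H2(a1*x1 + a2*x2);
ys = a1*H2(x1) + a2*H2(x2);
disp(max(abs(y - ys)))              % clearly nonzero: superposition fails
```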

• Drill Problem 1.26 Show that the moving-average system described by

y[n] = (1/3)(x[n] + x[n - 1] + x[n - 2])

is a linear system. •
• Drill Problem 1.27 Is it possible for a linear system to be noncausal?

Answer: Yes. •
• Drill Problem 1.28 The hard limiter is a memoryless device whose output y is related to the input x by

y = 1 for x > 0
y = 0 for x < 0

Is the hard limiter linear?

Answer: No. •
1.9 Exploring Concepts with MATLAB

The basic object used in MATLAB is a rectangular numerical matrix with possibly complex elements. The kinds of data objects encountered in the study of signals and systems are all well suited to matrix representations. In this section we use MATLAB to explore the generation of elementary signals described in previous sections. The exploration of systems and more advanced signals is deferred to subsequent chapters.
The MATLAB Signal Processing Toolbox has a large variety of functions for generating signals, most of which require that we begin with the vector representation of time t or n. To generate a vector t of time values with a sampling interval of 1 ms on the interval from 0 to 1 s, for example, we use the command:

t = 0:.001:1;

This corresponds to 1000 time samples for each second, or a sampling rate of 1000 Hz. To generate a vector n of time values for discrete-time signals, say, from n = 0 to n = 1000, we use the command:

n = 0:1000;

Given t or n, we may then proceed to generate the signal of interest.
In MATLAB, a discrete-time signal is represented exactly, because the values of the signal are described as the elements of a vector. On the other hand, MATLAB provides only an approximation to a continuous-time signal. The approximation consists of a vector whose individual elements are samples of the underlying continuous-time signal. When using this approximate approach, it is important that we choose the sampling interval sufficiently small so as to ensure that the samples capture all the details of the signal.
In this section, we consider the generation of both continuous-time and discrete-time signals of various kinds.


■ PERIODIC SIGNALS

It is an easy matter to generate periodic signals such as square waves and triangular waves using MATLAB. Consider first the generation of a square wave of amplitude A, fundamental frequency wO (measured in radians per second), and duty cycle rho. That is, rho is the fraction of each period for which the signal is positive. To generate such a signal, we use the basic command:

A*square(wO*t + rho);
The square wave shown in Fig. 1.13(a) was thus generated using the following complete set of commands:

>> A = 1;
>> wO = 10*pi;
>> rho = 0.5;
>> t = 0:.001:1;
>> sq = A*square(wO*t + rho);
>> plot(t, sq)

In the second command, pi is a built-in MATLAB function that returns the floating-point number closest to π. The last command is used to view the square wave. The command plot draws lines connecting the successive values of the signal and thus gives the appearance of a continuous-time signal.
Consider next the generation of a triangular wave of amplitude A, fundamental frequency wO (measured in radians per second), and width W. Let the period of the triangular wave be T, with the first maximum value occurring at t = WT. The basic command for generating this second periodic signal is

A*sawtooth(wO*t + W);
Thus to generate the symmetric triangular wave shown in Fig. 1.14, we used the following commands:

>> A = 1;
>> wO = 10*pi;
>> W = 0.5;
>> t = 0:0.001:1;
>> tri = A*sawtooth(wO*t + W);
>> plot(t, tri)

As mentioned previously, a signal generated on MATLAB is inherently of a discrete-time nature. To visualize a discrete-time signal, we may use the stem command. Specifically, stem(n, x) depicts the data contained in vector x as a discrete-time signal at the time values defined by n. The vectors n and x must, of course, have compatible dimensions.
Consider, for example, the discrete-time square wave shown in Fig. 1.15. This signal is generated using the following commands:
is generated using the following commands:
>> A = 1;
>> omega = pi/4;
>> rho = 0.5;
>> n = -10:10;
>> x = A*square(omega*n + rho);
>> stem(n, x)

• Drill Problem 1.29 Use MATLAB to generate the triangular wave depicted in Fig. 1.14. •


■ EXPONENTIAL SIGNALS

Moving on to exponential signals, we have decaying exponentials and growing exponentials. The MATLAB command for generating a decaying exponential is

B*exp(-a*t);

To generate a growing exponential, we use the command

B*exp(a*t);

In both cases, the exponential parameter a is positive. The following commands were used to generate the decaying exponential signal shown in Fig. 1.26(a):

>> B = 5;
>> a = 6;
>> t = 0:.001:1;
>> x = B*exp(-a*t); % decaying exponential
>> plot(t, x)
The growing exponential signal shown in Figure 1.26(b) was generated using the commands:

>> B = 1;
>> a = 5;
>> t = 0:0.001:1;
>> x = B*exp(a*t); % growing exponential
>> plot(t, x)

Consider next the exponential sequence defined in Eq. (1.31). The growing form of this exponential is shown in Fig. 1.28(b). This figure was generated using the following commands:

>> B = 1;
>> r = 1.85;
>> n = -10:10;
>> x = B*r.^n; % growing exponential
>> stem(n, x)
1.9 Exploring Concepts with MATIAB 57

Note that, in this example, the base r is a scalar but the exponent is a vector, hence the use of the symbol .^ to denote element-by-element powers.

• Drill Problem 1.30 Use MATLAB to generate the decaying exponential sequence depicted in Fig. 1.28(a). •

■ SINUSOIDAL SIGNALS

MATLAB also contains trigonometric functions that can be used to generate sinusoidal signals. A cosine signal of amplitude A, frequency wO (measured in radians per second), and phase angle phi (in radians) is obtained by using the command

A*cos(wO*t + phi);

Alternatively, we may use the sine function to generate a sinusoidal signal by using the command

A*sin(wO*t + phi);
These two commands were used as the basis of generating the sinusoidal signals shown in Fig. 1.29. Specifically, for the cosine signal shown in Fig. 1.29(a), we used the following commands:

>> A = 4;
>> wO = 20*pi;
>> phi = pi/6;
>> t = 0:.001:1;
>> cosine = A*cos(wO*t + phi);
>> plot(t, cosine)

• Drill Problem 1.31 Use MATLAB to generate the sine signal shown in Fig. 1.29(b). •

Consider next the discrete-time sinusoidal signal defined in Eq. (1.36). This periodic signal is plotted in Fig. 1.31. The figure was generated using the following commands:

>> A = 1;
>> omega = 2*pi/12; % angular frequency
>> phi = 0;
>> n = -10:10;
>> y = A*cos(omega*n + phi);
>> stem(n, y)


■ EXPONENTIALLY DAMPED SINUSOIDAL SIGNALS

In all of the signal-generation commands described above, we have generated the desired amplitude by multiplying a scalar, A, into a vector representing a unit-amplitude signal (e.g., sin(wO*t + phi)). This operation is described by using an asterisk. We next consider the generation of a signal that requires element-by-element multiplication of two vectors.
Suppose we multiply a sinusoidal signal by an exponential signal to produce an exponentially damped sinusoidal signal. With each signal component being represented by a vector, the generation of such a product signal requires the multiplication of one vector by another vector on an element-by-element basis. MATLAB represents element-by-element multiplication by using a dot followed by an asterisk. Thus the command for generating the exponentially damped sinusoidal signal

x(t) = A sin(ω0t + φ) exp(-at)

is as follows:

A*sin(wO*t + phi).*exp(-a*t);

For a decaying exponential, a is positive. This command was used in the generation of the waveform shown in Fig. 1.33. The complete set of commands is as follows:

>> A = 60;
>> wO = 20*pi;
>> phi = 0;
>> a = 6;
>> t = 0:.001:1;
>> expsin = A*sin(wO*t + phi).*exp(-a*t);
>> plot(t, expsin)
Consider next the exponentially damped sinusoidal sequence depicted in Fig. 1.51. This sequence is obtained by multiplying the sinusoidal sequence x[n] of Fig. 1.31 by the decaying exponential sequence y[n] of Fig. 1.28(a). Both of these sequences are defined for n = -10:10. Thus using z[n] to denote this product sequence, we may use the following commands to generate and visualize it:

>> z = x.*y; % elementwise multiplication
>> stem(n, z)

Note that there is no need to include the definition of n in the generation of z as it is already included in the commands for both x and y.

FIGURE 1.51 Exponentially damped sinusoidal sequence.

• Drill Problem 1.32 Use MATLAB to generate a signal defined as the product of the growing exponential of Fig. 1.28(b) and the sinusoidal signal of Fig. 1.31. •

■ STEP, IMPULSE, AND RAMP FUNCTIONS

In MATLAB, ones(M, N) is an M-by-N matrix of ones, and zeros(M, N) is an M-by-N matrix of zeros. We may use these two matrices to generate two commonly used signals, as follows:

• Step function. A unit-amplitude step function is generated by writing

u = [zeros(1, 50), ones(1, 50)];

• Discrete-time impulse. A unit-amplitude discrete-time impulse is generated by writing

delta = [zeros(1, 49), 1, zeros(1, 49)];

To generate a ramp sequence, we simply write

ramp = n;
In Fig. 1.37, we illustrated how a pair of step functions shifted in time relative to each other may be used to produce a rectangular pulse. In light of the procedure illustrated therein, we may formulate the following set of commands for generating a rectangular pulse centered on the origin:

t = -1:1/500:1;
u1 = [zeros(1, 250), ones(1, 751)];
u2 = [zeros(1, 751), ones(1, 250)];
u = u1 - u2;

The first command defines time running from -1 second to 1 second in increments of 2 milliseconds. The second command generates a step function u1 of unit amplitude, onset at time t = -0.5 second. The third command generates a second step function u2, onset at time t = 0.5 second. The fourth command subtracts u2 from u1 to produce a rectangular pulse of unit amplitude and unit duration centered on the origin.


■ USER-DEFINED M-FILES

An important feature of the MATLAB environment is that it permits us to create our own M-files or subroutines. Two types of M-files exist, namely, scripts and functions. Scripts, or script files, automate long sequences of commands. On the other hand, functions, or function files, provide extensibility to MATLAB by allowing us to add new functions. Any variables used in function files do not remain in memory. For this reason, input and output variables must be declared explicitly.
We may thus say that a function M-file is a separate entity characterized as follows:

1. It begins with a statement defining the function name, its input arguments, and its output arguments.
2. It also includes additional statements that compute the values to be returned.
3. The inputs may be scalars, vectors, or matrices.

Consider, for example, the generation of the rectangular pulse depicted in Fig. 1.37 using an M-file. This pulse has unit amplitude and unit duration. To generate it, we create a file called rect.m containing the following statements:

function g = rect(x)
g = zeros(size(x));
set1 = find(abs(x) <= 0.5);
g(set1) = ones(size(set1));
In the last three statements of this M-file, we have introduced two useful functions:

• The function size returns a two-element vector containing the row and column dimensions of a matrix.
• The function find returns the indices of a vector or matrix that satisfy a prescribed relational condition. For the example at hand, find(abs(x) <= T) returns the indices of the vector x where the absolute value of x is less than or equal to T.

The new function rect.m can be used like any other MATLAB function. In particular, we may use it to generate a rectangular pulse, as follows:

t = -1:1/500:1;
plot(t, rect(t))

1.10 Summary
In this chapter we presented an overview of signals and systems, setting the stage for the rest of the book. A particular theme that stands out in the discussion presented herein is that signals may be of the continuous-time or discrete-time variety, and likewise for systems, as summarized here:

• A continuous-time signal is defined for all values of time. In contrast, a discrete-time signal is defined only for discrete instants of time.
• A continuous-time system is described by an operator that changes a continuous-time input signal into a continuous-time output signal. In contrast, a discrete-time system is described by an operator that changes a discrete-time input signal into a discrete-time output signal.

In practice, many systems mix continuous-time and discrete-time components. Analysis of mixed systems is an important part of the material presented in Chapters 4, 5, 8, and 9.
In discussing the various properties of signals and systems, we took special care in treating these two classes of signals and systems side by side. In so doing, much is gained by emphasizing the similarities and differences between continuous-time signals/systems and their discrete-time counterparts. This practice is followed in later chapters too, as appropriate.
Another noteworthy point is that, in the study of systems, particular attention is given to the analysis of linear time-invariant systems. Linearity means that the system obeys the principle of superposition. Time invariance means that the characteristics of the system do not change with time. By invoking these two properties, the analysis of systems becomes mathematically tractable. Indeed, a rich set of tools has been developed for the analysis of linear time-invariant systems, which provides direct motivation for much of the material on system analysis presented in this book.
In this chapter, we also explored the use of MATLAB for the generation of elementary waveforms, representing the continuous-time and discrete-time variety. MATLAB provides a powerful environment for exploring concepts and testing system designs, as will be illustrated in subsequent chapters.

Further Readings
1. For a readable account of signals, their representations, and use in communication systems, see the book:
• Pierce, J. R., and A. M. Noll, Signals: The Science of Telecommunications (Scientific American Library, 1990)

2. For examples of control systems, see Chapter 1 of the book:
• Kuo, B. C., Automatic Control Systems, Seventh Edition (Prentice-Hall, 1995)
and Chapters 1 and 2 of the book:
• Phillips, C. L., and R. D. Harbor, Feedback Control Systems, Third Edition (Prentice-Hall)
3. For a general discussion of remote sensing, see the book:
• Hord, R. M., Remote Sensing: Methods and Applications (Wiley, 1986)
For material on the use of spaceborne radar for remote sensing, see the book:
• Elachi, C., Introduction to the Physics and Techniques of Remote Sensing (Wiley, 1987)
For detailed description of synthetic aperture radar and the role of signal processing in its implementation, see the book:
• Curlander, J. C., and R. N. McDonough, Synthetic Aperture Radar: Systems and Signal Processing (Wiley, 1991)

4. For a collection of essays on biological signal processing, see the book:
• Weitkunat, R., editor, Digital Biosignal Processing (Elsevier, 1991)
5. For detailed discussion of the auditory system, see the following:
• Dallos, P., A. N. Popper, and R. R. Fay, editors, The Cochlea (Springer-Verlag, 1996)
• Hawkins, H. L., and T. McMullen, editors, Auditory Computation (Springer-Verlag, 1996)
• Kelly, J. P., ''Hearing.'' In E. R. Kandel, J. H. Schwartz, and T. M. Jessell, Principles of Neural Science, Third Edition (Elsevier, 1991)
The cochlea has provided a source of motivation for building an electronic version of it, using silicon integrated circuits. Such an artificial implementation is sometimes referred to as a ''silicon cochlea.'' For a discussion of the silicon cochlea, see:
• Lyon, R. F., and C. Mead, ''Electronic Cochlea.'' In C. Mead, Analog VLSI and Neural Systems (Addison-Wesley, 1989)

6. For an account of the legendary story of the first Tacoma Narrows suspension bridge, see the report:
• Smith, D., ''A Case Study and Analysis of the Tacoma Narrows Bridge Failure,'' 99.497 Engineering Project, Department of Mechanical Engineering, Carleton University, March 29, 1974 (supervised by Professor G. Kardos)
7. For a textbook treatment of MATLAB, see:
• Etter, D. M., Engineering Problem Solving with MATLAB (Prentice-Hall, 1993)

• - - • • -
1.1 Find the even and odd components of each of the following signals:
(a) x(t) = cos(t) + sin(t) + sin(t) cos(t)
(b) x(t) = 1 + t + 3t^2 + 5t^3 + 9t^4
(c) x(t) = 1 + t cos(t) + t^2 sin(t) + t^3 sin(t) cos(t)
(d) x(t) = (1 + t^3) cos^3(10t)

1.2 Determine whether the following signals are periodic. If they are periodic, find the fundamental period.
(a) x(t) = (cos(2πt))^2
(b) x(t) = Σ_{k=-∞}^{∞} w(t - 2k) for w(t) depicted in Fig. P1.2b.
(c) x(t) = Σ_{k=-∞}^{∞} w(t - 3k) for w(t) depicted in Fig. P1.2b.
(d) x[n] = (-1)^n
(e) x[n] = (-1)^(n^2)
(f) x[n] depicted in Fig. P1.2f.
(g) x(t) depicted in Fig. P1.2g.
(h) x[n] = cos(2n)
(i) x[n] = cos(2πn)

1.3 The sinusoidal signal

x(t) = 3 cos(200t + π/6)

is passed through a square-law device defined by the input-output relation

y(t) = x^2(t)

Using the trigonometric identity

cos^2 θ = ½(cos 2θ + 1)

show that the output y(t) consists of a dc component and a sinusoidal component.
(a) Specify the dc component.
(b) Specify the amplitude and fundamental frequency of the sinusoidal component in the output y(t).

1.4 Categorize each of the following signals as an energy or power signal, and find the energy or power of the signal.
(a) x(t) = t, 0 < t ≤ 1
         = 2 - t, 1 ≤ t ≤ 2
         = 0, otherwise
(b) x[n] = n, 0 ≤ n ≤ 5
         = 10 - n, 5 ≤ n ≤ 10
         = 0, otherwise
(c) x(t) = 5 cos(πt) + sin(5πt), -∞ < t < ∞
(d) x(t) = 5 cos(πt), -1 ≤ t ≤ 1
         = 0, otherwise
(e) x(t) = 5 cos(πt), -0.5 < t ≤ 0.5
         = 0, otherwise
(f) x[n] = sin((π/2)n), -4 < n ≤ 4
         = 0, otherwise
(g) x[n] = cos(πn), -4 < n ≤ 4
         = 0, otherwise
(h) x[n] = cos(πn), n > 0
         = 0, otherwise

FIGURE P1.2 The signals (b) w(t), (f) x[n], and (g) x(t) referred to in Problem 1.2.


1.5 Consider the sinusoidal signal

x(t) = A cos(ωt + φ)

Determine the average power of x(t).

1.6 The angular frequency Ω of the sinusoidal signal

x[n] = A cos(Ωn + φ)

satisfies the condition for x[n] to be periodic. Determine the average power of x[n].

1.7 The raised-cosine pulse x(t) shown in Fig. P1.7 is defined as

x(t) = ½[cos(ωt) + 1], -π/ω < t < π/ω
     = 0, otherwise

Determine the total energy of x(t).

FIGURE P1.7 The raised-cosine pulse x(t), nonzero for -π/ω < t < π/ω.

1.8 The trapezoidal pulse x(t) shown in Fig. P1.8 is defined by

x(t) = 5 - t, 4 < t < 5
     = 1, -4 < t < 4
     = t + 5, -5 < t < -4
     = 0, otherwise

Determine the total energy of x(t).

FIGURE P1.8 The trapezoidal pulse x(t), of unit height for -4 < t < 4.

1.9 The trapezoidal pulse x(t) of Fig. P1.8 is applied to a differentiator, defined by

y(t) = (d/dt) x(t)

(a) Determine the resulting output y(t) of the differentiator.
(b) Determine the total energy of y(t).

1.10 A rectangular pulse x(t) is defined by

x(t) = A, 0 < t < T
     = 0, otherwise

The pulse x(t) is applied to an integrator defined by

y(t) = ∫₀ᵗ x(τ) dτ

Find the total energy of the output y(t).

1.11 The trapezoidal pulse x(t) of Fig. P1.8 is time scaled, producing

y(t) = x(at)

Sketch y(t) for (a) a = 5 and (b) a = 0.2.

1.12 A triangular pulse signal x(t) is depicted in Fig. P1.12. Sketch each of the following signals derived from x(t):
(a) x(3t)
(b) x(3t + 2)
(c) x(-2t - 1)
(d) x(2(t + 2))
(e) x(2(t - 2))
(f) x(3t) + x(3t + 2)

FIGURE P1.12 The triangular pulse x(t), nonzero for -1 ≤ t ≤ 1.

1.13 Sketch the trapezoidal pulse y(t) that is related to that of Fig. P1.8 as follows:

y(t) = x(10t - 5)

1.14 Let x(t) and y(t) be given in Figs. P1.14(a) and (b), respectively. Carefully sketch the following signals:
(a) x(t)y(t - 1)
(b) x(t - 1)y(-t)
(c) x(t + 1)y(t - 2)
(d) x(t)y(-1 - t)
(e) x(t)y(2 - t)
(f) x(2t)y((1/2)t + 1)
(g) x(4 - t)y(t)

FIGURE P1.14 (a) The signal x(t). (b) The signal y(t).

1.15 Figure P1.15(a) shows a staircase-like signal x(t) that may be viewed as the superposition of four rectangular pulses. Starting with the rectangular pulse g(t) shown in Fig. P1.15(b), construct this waveform, and express x(t) in terms of g(t).

FIGURE P1.15 (a) The staircase-like signal x(t). (b) The rectangular pulse g(t).

1.16 Sketch the waveforms of the following signals:
(a) x(t) = u(t) - u(t - 2)
(b) x(t) = u(t + 1) - 2u(t) + u(t - 1)
(c) x(t) = -u(t + 3) + 2u(t + 1) - 2u(t - 1) + u(t - 3)
(d) y(t) = r(t + 1) - r(t) + r(t - 2)
(e) y(t) = r(t + 2) - r(t + 1) - r(t - 1) + r(t - 2)

1.17 Figure P1.17(a) shows a pulse x(t) that may be viewed as the superposition of three rectangular pulses. Starting with the rectangular pulse g(t) of Fig. P1.17(b), construct this waveform, and express x(t) in terms of g(t).

1.18 Let x[n] and y[n] be given in Figs. P1.18(a) and (b), respectively. Carefully sketch the following signals:
(a) x[2n]
(b) x[3n - 1]
(c) y[1 - n]
(d) y[2 - 2n]
(e) x[n - 2] + y[n + 2]
(f) x[2n] + y[n - 4]
(g) x[n + 2]y[n - 2]
(h) x[3 - n]y[n]
(i) x[-n]y[-n]
(j) x[n]y[-2 - n]
(k) x[n + 2]y[6 - n]

FIGURE P1.18 (a) The sequence x[n]. (b) The sequence y[n].

1.19 Consider the sinusoidal signal

x[n] = 10 cos((4π/31)n + π/5)

Determine the fundamental period of x[n].

1.20 The sinusoidal signal x[n] has fundamental period N = 10 samples. Determine the smallest angular frequency Ω for which x[n] is periodic.

1.21 Determine whether the following signals are periodic. If they are periodic, find the fundamental period.
(a) x[n] = cos((1/5)πn)
(b) x[n] = cos(nπn)
(c) x(t) = cos(2t) + sin(3t)
(d) x(t) = Σ_{k=-∞}^{∞} (-1)^k δ(t - 2k)
(e) x[n] = Σ_{k=-∞}^{∞} {δ[n - 3k] + δ[n - k^2]}
(f) x(t) = cos(t)u(t)
(g) x(t) = v(t) + v(-t), where v(t) = cos(t)u(t)
(h) x(t) = v(t) + v(-t), where v(t) = sin(t)u(t)
(i) x[n] = cos((1/2)πn) sin((1/3)πn)

1.22 A complex sinusoidal signal x(t) has the following components:

x₁(t) = A cos(ωt + φ)
x₂(t) = A sin(ωt + φ)

The amplitude of x(t) is defined by the square root of x₁^2(t) + x₂^2(t). Show that this amplitude equals A, independent of the phase angle φ.

1.23 Consider the complex-valued exponential signal

x(t) = A e^(αt + jωt), α > 0

Evaluate the real and imaginary components of x(t).

1.24 Consider the continuous-time signal

x(t) = t/T + 0.5, -T/2 < t < T/2
     = 1, t > T/2
     = 0, t < -T/2

which is applied to a differentiator. Show that the output of the differentiator approaches the unit impulse δ(t) as T approaches zero.

1.25 In this problem, we explore what happens when a unit impulse is applied to a differentiator. Consider a triangular pulse x(t) of duration T and amplitude 2/T, as depicted in Fig. P1.25. The area under the pulse is unity. Hence as the duration T approaches zero, the triangular pulse approaches a unit impulse.
(a) Suppose the triangular pulse x(t) is applied to a differentiator. Determine the output y(t) of the differentiator.
(b) What happens to the differentiator output y(t) as T approaches zero? Use the definition of a unit impulse δ(t) to express your answer.
(c) What is the total area under the differentiator output y(t) for all T? Justify your answer.
Based on your findings in parts (a) to (c), describe in succinct terms the result of differentiating a unit impulse.

FIGURE P1.25 The triangular pulse x(t), nonzero for -T/2 < t < T/2.

1.26 The derivative of the impulse function δ(t) is referred to as a doublet. It is denoted by δ′(t). Show that δ′(t) satisfies the sifting property

∫_{-∞}^{∞} δ′(t - t₀) f(t) dt = -f′(t₀)

where

f′(t₀) = (d/dt) f(t) evaluated at t = t₀

Assume that the function f(t) has a continuous derivative at time t = t₀.

1.27 A system consists of several subsystems connected as shown in Fig. P1.27. Find the operator H relating x(t) to y(t) for the subsystem operators given by:

H₁: y₁(t) = x₁(t) x₁(t - 1)
H₂: y₂(t) = |x₂(t)|
H₃: y₃(t) = 1 + 2x₃(t)
H₄: y₄(t) = cos(x₄(t))

FIGURE P1.27 Interconnection of the subsystems H₁ through H₄.


1.28 The systems given below have input x(t) or x[n] and output y(t) or y[n], respectively. Determine whether each of them is (i) memoryless, (ii) stable, (iii) causal, (iv) linear, and (v) time invariant.
(a) y(t) = cos(x(t))
(b) y[n] = 2x[n]u[n]
(c) y[n] = log₁₀(|x[n]|)
(d) y(t) = ∫_{-∞}^{t} x(τ) dτ
(e) y[n] = Σ_{k=-∞}^{n} x[k + 2]
(f) y(t) = (d/dt) x(t)
(g) y[n] = cos(2πx[n + 1]) + x[n]
(h) y(t) = (d/dt) {e^(-t) x(t)}
(i) y(t) = x(2 - t)
(j) y[n] = x[n] Σ_{k=-∞}^{∞} δ[n - 2k]
(k) y(t) = x(t/2)
(l) y[n] = 2x[2^n]

1.29 The output of a discrete-time system is related to its input x[n] as follows:

y[n] = a₀x[n] + a₁x[n - 1] + a₂x[n - 2] + a₃x[n - 3]

Let the operator S^k denote a system that shifts the input x[n] by k time units to produce x[n - k]. Formulate the operator H for the system relating y[n] to x[n]. Hence develop a block diagram representation for H, using (a) cascade implementation and (b) parallel implementation.

1.30 Show that the system described in Problem 1.29 is BIBO stable for all a₀, a₁, a₂, and a₃.

1.31 How far does the memory of the discrete-time system described in Problem 1.29 extend into the past?

1.32 Is it possible for a noncausal system to possess memory? Justify your answer.

1.33 The output signal y[n] of a discrete-time system is related to its input signal x[n] as follows:

y[n] = x[n] + x[n - 1] + x[n - 2]

Let the operator S denote a system that shifts its input by one time unit.
(a) Formulate the operator H for the system relating y[n] to x[n].
(b) The operator H⁻¹ denotes a discrete-time system that is the inverse of this system. How is H⁻¹ defined?

1.34 Show that the discrete-time system described in Problem 1.29 is time invariant, independent of the coefficients a₀, a₁, a₂, and a₃.

1.35 Is it possible for a time-variant system to be linear? Justify your answer.

1.36 Show that an Nth power-law device defined by the input-output relation

y(t) = x^N(t), N integer and N ≠ 0, 1

is nonlinear.

1.37 A linear time-invariant system may be causal or noncausal. Give an example for each one of these two possibilities.

1.38 Figure 1.50 shows two equivalent system configurations on condition that the system operator H is linear. Which of these two configurations is simpler to implement? Justify your answer.

1.39 A system H has its input-output pairs given. Determine whether the system could be memoryless, causal, linear, and time invariant for (a) signals depicted in Fig. P1.39(a) and (b) signals depicted in Fig. P1.39(b). For all cases, justify your answers.

1.40 A linear system H has the input-output pairs depicted in Fig. P1.40(a). Determine the following and explain your answers:
(a) Is this system causal?
(b) Is this system time invariant?
(c) Is this system memoryless?
(d) Find the output for the input depicted in Fig. P1.40(b).

1.41 A discrete-time system is both linear and time invariant. Suppose the output due to an input x[n] = δ[n] is given in Fig. P1.41(a).
(a) Find the output due to an input x[n] = δ[n - 1].
(b) Find the output due to an input x[n] = 2δ[n] - δ[n - 2].
(c) Find the output due to the input depicted in Fig. P1.41(b).

FIGURE P1.39 The input-output pairs for Problem 1.39.

FIGURE P1.40 The input-output pairs for Problem 1.40.

FIGURE P1.41 (a) The output due to the input x[n] = δ[n]. (b) The input for part (c).

• Computer Experiments

1.42 Write a set of MATLAB commands for approximating the following continuous-time periodic waveforms:
(a) Square wave of amplitude 5 volts, fundamental frequency 20 Hz, and duty cycle 0.6.
(b) Sawtooth wave of amplitude 5 volts, and fundamental frequency 20 Hz.
Hence plot five cycles of each of these two waveforms.

1.43 (a) The solution to a linear differential equation is given by

x(t) = 10e^(-t) - 5e^(-0.5t)

Using MATLAB, plot x(t) versus t for t = 0:0.01:5.
(b) Repeat the problem for

x(t) = 10e^(-t) + 5e^(-0.5t)

1.44 An exponentially damped sinusoidal signal is defined by

x(t) = 20 sin(2π × 1000t - π/3) exp(-at)

where the exponential parameter a is variable; it takes on the following set of values: a = 500, 750, 1000. Using MATLAB, investigate the effect of varying a on the signal x(t) for -2 < t < 2 milliseconds.

1.45 A raised-cosine sequence is defined by

w[n] = ½[cos(2πFn) + 1], -1/(2F) < n < 1/(2F)
     = 0, otherwise

Use MATLAB to plot w[n] versus n for F = 0.1.

1.46 A rectangular pulse x(t) is defined by

x(t) = 10, 0 < t < 5
     = 0, otherwise

Generate x(t) using:
(a) A pair of time-shifted step functions.
(b) An M-file.




Time-Domain Representations
for Linear Time-Invariant Systems



2.1 Introduction
In this chapter we consider several methods for describing the relationship between the input and output of linear time-invariant (LTI) systems. The focus here is on system descriptions that relate the output signal to the input signal when both signals are represented as functions of time, hence the terminology "time domain" in the chapter title. Methods for relating system output and input in domains other than time are presented in later chapters. The descriptions developed in this chapter are useful for analyzing and predicting the behavior of LTI systems and for implementing discrete-time systems on a computer.

We begin by characterizing an LTI system in terms of its impulse response. The impulse response is the system output associated with an impulse input. Given the impulse response, we determine the output due to an arbitrary input by expressing the input as a weighted superposition of time-shifted impulses. By linearity and time invariance, the output must be a weighted superposition of time-shifted impulse responses. The term "convolution" is used to describe the procedure for determining the output from the input and the impulse response.

The second method considered for characterizing the input-output behavior of LTI systems is the linear constant-coefficient differential or difference equation. Differential equations are used to represent continuous-time systems, while difference equations represent discrete-time systems. We focus on characterizing differential and difference equation solutions with the goal of developing insight into system behavior.

The third system representation we discuss is the block diagram. A block diagram represents the system as an interconnection of three elementary operations: scalar multiplication, addition, and either a time shift for discrete-time systems or integration for continuous-time systems.

The final time-domain representation discussed in this chapter is the state-variable description. The state-variable description is a series of coupled first-order differential or difference equations that represent the behavior of the system's "state" and an equation that relates the state to the output. The state is a set of variables associated with energy storage or memory devices in the system.

All four of these time-domain system representations are equivalent in the sense that identical outputs result from a given input. However, each relates the input and output in a different manner. Different representations offer different views of the system, with each offering different insights into system behavior. Each representation has advantages and

disadvantages for analyzing and implementing systems. Understanding how different representations are related and determining which offers the most insight and straightforward solution in a particular problem are important skills to develop.

2.2 Convolution: Impulse Response Representation for LTI Systems
The impulse response is the output of an LTI system due to an impulse input applied at time t = 0 or n = 0. The impulse response completely characterizes the behavior of any LTI system. This may seem surprising, but it is a basic property of all LTI systems. The impulse response is often determined from knowledge of the system configuration and dynamics or, in the case of an unknown system, can be measured by applying an approximate impulse to the system input. Generation of a discrete-time impulse sequence for testing an unknown system is straightforward. In the continuous-time case, a true impulse of zero width and infinite amplitude cannot actually be generated and usually is physically approximated as a pulse of large amplitude and narrow width. Thus the impulse response may be interpreted as the system behavior in response to a high-amplitude, extremely short-duration input.

If the input to a linear system is expressed as a weighted superposition of time-shifted impulses, then the output is a weighted superposition of the system response to each time-shifted impulse. If the system is also time invariant, then the system response to a time-shifted impulse is a time-shifted version of the system response to an impulse. Hence the output of an LTI system is given by a weighted superposition of time-shifted impulse responses. This weighted superposition is termed the convolution sum for discrete-time systems and the convolution integral for continuous-time systems.

We begin by considering the discrete-time case. First an arbitrary signal is expressed as a weighted superposition of time-shifted impulses. The convolution sum is then obtained by applying a signal represented in this manner to an LTI system. A similar procedure is used to obtain the convolution integral for continuous-time systems later in this section.


Consider the product of a signal x[n] and the impulse sequence δ[n], written as

x[n]δ[n] = x[0]δ[n]

Generalize this relationship to the product of x[n] and a time-shifted impulse sequence to obtain

x[n]δ[n - k] = x[k]δ[n - k]

In this expression n represents the time index; hence x[n] denotes a signal, while x[k] represents the value of the signal x[n] at time k. We see that multiplication of a signal by a time-shifted impulse results in a time-shifted impulse with amplitude given by the value of the signal at the time the impulse occurs. This property allows us to express x[n] as the following weighted sum of time-shifted impulses:

x[n] = ··· + x[-2]δ[n + 2] + x[-1]δ[n + 1] + x[0]δ[n] + x[1]δ[n - 1] + x[2]δ[n - 2] + ···

We may rewrite this representation for x[n] in concise form as

x[n] = Σ_{k=-∞}^{∞} x[k]δ[n - k]    (2.1)

A graphical illustration of Eq. (2.1) is given in Fig. 2.1.
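The sifting representation of Eq. (2.1) is easy to verify numerically. The following sketch (in Python rather than the MATLAB referenced elsewhere in the book; the particular signal values and the finite axis are my own choices for the demonstration) rebuilds a finite-length x[n] from weighted, time-shifted impulses:

```python
# A minimal numerical check of Eq. (2.1): on a finite axis, any signal
# x[n] equals the sum over k of x[k] * delta[n - k].

def delta(n):
    """Unit impulse: delta[0] = 1, delta[n] = 0 otherwise."""
    return 1 if n == 0 else 0

axis = range(-5, 6)                  # finite time axis for the demo
x = {-2: 2, -1: -1, 0: 3, 2: 1}      # nonzero samples of an arbitrary x[n]
x_at = lambda n: x.get(n, 0)

# Eq. (2.1): x[n] = sum over k of x[k] * delta[n - k]
rebuilt = [sum(x_at(k) * delta(n - k) for k in axis) for n in axis]

assert rebuilt == [x_at(n) for n in axis]
print("Eq. (2.1) reproduces x[n] exactly")
```

Each term of the sum contributes a single sample, scaled by the signal value at the time the impulse occurs, which is exactly the property derived above.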

Let the operator H denote the system to which the input x[n] is applied. Then using Eq. (2.1) to represent the input x[n] to the system results in the output

y[n] = H{ Σ_{k=-∞}^{∞} x[k]δ[n - k] }


FIGURE 2.1 Graphical example illustrating the representation of a signal x[n] as a weighted sum of time-shifted impulses.

Now use the linearity property to interchange the system operator H with the summation and signal values x[k] to obtain

y[n] = Σ_{k=-∞}^{∞} x[k] H{δ[n - k]}
     = Σ_{k=-∞}^{∞} x[k] h_k[n]    (2.2)

where h_k[n] = H{δ[n - k]} is the response of the system to a time-shifted impulse. If we further assume the system is time invariant, then a time shift in the input results in a time shift in the output. This implies that the output due to a time-shifted impulse is a time-shifted version of the output due to an impulse; that is, h_k[n] = h_0[n - k]. Letting h[n] = h_0[n] be the impulse response of the LTI system H, Eq. (2.2) is rewritten as

y[n] = Σ_{k=-∞}^{∞} x[k] h[n - k]    (2.3)

Thus the output of an LTI system is given by a weighted sum of time-shifted impulse responses. This is a direct consequence of expressing the input as a weighted sum of time-shifted impulses. The sum in Eq. (2.3) is termed the convolution sum and is denoted by the symbol *; that is,

x[n] * h[n] = Σ_{k=-∞}^{∞} x[k] h[n - k]

The convolution process is illustrated in Fig. 2.2. Figure 2.2(a) depicts the impulse response of an arbitrary LTI system. In Fig. 2.2(b) the input is represented as a sum of weighted and time-shifted impulses, p_k[n] = x[k]δ[n - k]. The output of the system associated with each input p_k[n] is

v_k[n] = x[k] h[n - k]

Here v_k[n] is obtained by time-shifting the impulse response k units and multiplying by x[k]. The output y[n] in response to the input x[n] is obtained by summing all the sequences v_k[n]:

y[n] = Σ_{k=-∞}^{∞} v_k[n]

That is, for each value of n, we sum the values along the k axis indicated on the right side of Fig. 2.2(b). The following example illustrates this process.
EXAMPLE 2.1 Assume an LTI system H has impulse response

h[n] = 1, n = ±1
     = 2, n = 0
     = 0, otherwise

Determine the output of this system in response to the input

x[n] = 2, n = 0
     = 3, n = 1
     = -2, n = 2
     = 0, otherwise

FIGURE 2.2 Illustration of the convolution sum. (a) Impulse response of a system. (b) Decomposition of the input x[n] into a weighted sum of time-shifted impulses results in an output y[n] given by a weighted sum of time-shifted impulse responses. Here p_k[n] is the weighted (by x[k]) and time-shifted (by k) impulse input, and v_k[n] is the weighted and time-shifted impulse response output. The dependence of both p_k[n] and v_k[n] on k is depicted by the k axis shown on the left- and right-hand sides of the figure. The output is obtained by summing v_k[n] over all values of k.

FIGURE 2.2 (c) The signals w_n[k] used to compute the output at time n for several values of n. Here we have redrawn the right-hand side of Fig. 2.2(b) so that the k axis is horizontal. The output is obtained for n = n_0 by summing w_{n_0}[k] over all values of k.


Solution: First write x[n] as the weighted sum of time-shifted impulses

x[n] = 2δ[n] + 3δ[n - 1] - 2δ[n - 2]

Here p_0[n] = 2δ[n], p_1[n] = 3δ[n - 1], and p_2[n] = -2δ[n - 2]. All other time-shifted p_k[n] are zero because the input is zero for n < 0 and n > 2. Since a weighted, time-shifted impulse input, aδ[n - k], results in a weighted, time-shifted impulse response output, ah[n - k], the system output may be written as

y[n] = 2h[n] + 3h[n - 1] - 2h[n - 2]


Here v_0[n] = 2h[n], v_1[n] = 3h[n - 1], v_2[n] = -2h[n - 2], and all other v_k[n] = 0. Summation of the weighted and time-shifted impulse responses over k gives

y[n] = 0, n ≤ -2
     = 2, n = -1
     = 7, n = 0
     = 6, n = 1
     = -1, n = 2
     = -2, n = 3
     = 0, n ≥ 4
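The numbers in Example 2.1 can be cross-checked with a direct implementation of the convolution sum. This is a sketch in Python (not the book's MATLAB), with each finite-length signal represented as a dictionary of its nonzero samples, a convention of my own:

```python
# Direct evaluation of Eq. (2.3) for finite-length signals, checked
# against the output worked out in Example 2.1.

def conv(x, h):
    """Convolution sum: y[n] = sum over k of x[k] * h[n - k]."""
    y = {}
    for k, xk in x.items():
        for m, hm in h.items():
            # the sample x[k] * h[m] lands at output time n = k + m
            y[k + m] = y.get(k + m, 0) + xk * hm
    return y

h = {-1: 1, 0: 2, 1: 1}        # impulse response of Example 2.1
x = {0: 2, 1: 3, 2: -2}        # input of Example 2.1

y = conv(x, h)
print(sorted(y.items()))   # -> [(-1, 2), (0, 7), (1, 6), (2, -1), (3, -2)]
```

The printed pairs reproduce the piecewise result of the example: y[-1] = 2, y[0] = 7, y[1] = 6, y[2] = -1, y[3] = -2, and zero elsewhere.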

In Example 2.1, we found all the v_k[n] and then summed over k to determine y[n]. This approach illustrates the principles that underlie convolution and is very effective when the input is of short duration so that only a small number of signals v_k[n] need to be determined. When the input has a long duration, then a very large, possibly infinite, number of signals v_k[n] must be evaluated before y[n] can be found and this procedure can be cumbersome.

An alternative approach for evaluating the convolution sum is obtained by a slight change in perspective. Consider evaluating the output at a fixed time n_0:

y[n_0] = Σ_{k=-∞}^{∞} v_k[n_0]

That is, we sum along the k or vertical axis on the right-hand side of Fig. 2.2(b) at a fixed time n = n_0. Suppose we define a signal representing the values at n = n_0 as a function of the independent variable k, w_{n_0}[k] = v_k[n_0]. The output is now obtained by summing over the independent variable k:

y[n_0] = Σ_{k=-∞}^{∞} w_{n_0}[k]

Note that here we need only determine one signal, w_{n_0}[k], to evaluate the output at n = n_0. Figure 2.2(c) depicts w_{n_0}[k] for several different values of n_0 and the corresponding output. Here the horizontal axis corresponds to k and the vertical axis corresponds to n. We may view v_k[n] as representing the kth row on the right-hand side of Fig. 2.2(b), while w_n[k] represents the nth column. In Fig. 2.2(c), w_n[k] is the nth row, while v_k[n] is the kth column.

We have defined the intermediate sequence w_n[k] = x[k]h[n - k] as the product of x[k] and h[n - k]. Here k is the independent variable and n is treated as a constant. Hence h[n - k] = h[-(k - n)] is a reflected and time-shifted (by -n) version of h[k]. The time shift n determines the time at which we evaluate the output of the system, since

y[n] = Σ_{k=-∞}^{∞} w_n[k]    (2.4)

Note that now we need only determine one signal, w_n[k], for each time at which we desire to evaluate the output.
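Equation (2.4) reduces each output time to a single intermediate signal. As a sketch (Python; the signals are those of Example 2.1, and the finite summation axis is an assumption chosen to cover their support):

```python
# Eq. (2.4): to get the output at one time n0, form the single signal
# w_n0[k] = x[k] * h[n0 - k] and sum it over k.

def h(n):                      # impulse response of Example 2.1, for concreteness
    return {-1: 1, 0: 2, 1: 1}.get(n, 0)

def x(n):                      # input of Example 2.1
    return {0: 2, 1: 3, 2: -2}.get(n, 0)

def y_at(n0, axis=range(-10, 11)):
    w = [x(k) * h(n0 - k) for k in axis]   # w_n0[k]: one signal per output time
    return sum(w)

print([y_at(n) for n in range(-1, 4)])   # -> [2, 7, 6, -1, -2]
```

Only one sequence w_{n_0}[k] is built per output sample, matching the change in perspective described above; the printed values agree with Example 2.1.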

EXAMPLE 2.2 An LTI system has the impulse response

h[n] = (3/4)^n u[n]

Use Eq. (2.4) to determine the output of the system at times n = -5, n = 5, and n = 10 when the input is x[n] = u[n].
Solution: Here the impulse response and input are of infinite duration so the procedure followed in Example 2.1 would require determining an infinite number of signals v_k[n]. By using Eq. (2.4) we only form one signal, w_n[k], for each n of interest. Figure 2.3(a) depicts x[k], while Fig. 2.3(b) depicts the reflected and time-shifted impulse response h[n - k]. We see that

h[n - k] = (3/4)^(n-k), k ≤ n
         = 0, otherwise

Figures 2.3(c), (d), and (e) depict the product w_n[k] for n = -5, n = 5, and n = 10, respectively. We have

w_{-5}[k] = 0

and thus Eq. (2.4) gives y[-5] = 0. For n = 5, we have

w_5[k] = (3/4)^(5-k), 0 ≤ k ≤ 5
       = 0, otherwise

and so Eq. (2.4) gives

y[5] = Σ_{k=0}^{5} (3/4)^(5-k)
FIGURE 2.3 Evaluation of Eq. (2.4) in Example 2.2. (a) The input signal x[k] depicted as a function of k. (b) The reflected and time-shifted impulse response h[n - k], as a function of k. (c) The product signal w_{-5}[k] used to evaluate y[-5]. (d) The product signal w_5[k] used to evaluate y[5]. (e) The product signal w_10[k] used to evaluate y[10].
Factor (3/4)^5 from the sum and apply the formula for the sum of a finite geometric series to obtain

y[5] = (3/4)^5 Σ_{k=0}^{5} (4/3)^k
     = (3/4)^5 (1 - (4/3)^6) / (1 - (4/3))

Lastly, for n = 10 we see that

w_10[k] = (3/4)^(10-k), 0 ≤ k ≤ 10
        = 0, otherwise

and Eq. (2.4) gives

y[10] = Σ_{k=0}^{10} (3/4)^(10-k)
      = (3/4)^10 Σ_{k=0}^{10} (4/3)^k
      = (3/4)^10 (1 - (4/3)^11) / (1 - (4/3))

Note that in this example w_n[k] has only two different functional forms. For n < 0 we have w_n[k] = 0 since there is no overlap between the nonzero portions of x[k] and h[n - k]. When n ≥ 0 the nonzero portions of x[k] and h[n - k] overlap on the interval 0 ≤ k ≤ n and we may write

w_n[k] = (3/4)^(n-k), 0 ≤ k ≤ n
       = 0, otherwise

Hence we may determine the output for an arbitrary n by using the appropriate functional form for w_n[k] in Eq. (2.4).
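Both functional forms can be checked numerically. A sketch (Python; the decimal values below are computed here and are not quoted from the text) compares the direct sum of w_n[k] with the closed-form geometric-series result:

```python
# Example 2.2 numerically: x[n] = u[n], h[n] = (3/4)^n u[n].

def y(n, beta=3/4):
    # direct sum of w_n[k] = beta**(n - k) over 0 <= k <= n; zero for n < 0
    return sum(beta ** (n - k) for k in range(0, n + 1)) if n >= 0 else 0.0

def y_closed(n, beta=3/4):
    # finite geometric series: (1 - beta**(n + 1)) / (1 - beta)
    return (1 - beta ** (n + 1)) / (1 - beta) if n >= 0 else 0.0

for n in (-5, 5, 10):
    assert abs(y(n) - y_closed(n)) < 1e-12

print(y(-5), round(y(5), 4), round(y(10), 4))   # -> 0.0 3.2881 3.8311
```

So the three requested outputs are y[-5] = 0, y[5] ≈ 3.288, and y[10] ≈ 3.831, consistent with the factored expressions derived in the example.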

This example suggests that in general we may determine y[n] for all n without evaluating Eq. (2.4) at an infinite number of distinct time shifts n. This is accomplished by identifying intervals of n on which w_n[k] has the same functional form. We then only need to evaluate Eq. (2.4) using the w_n[k] associated with each interval. Often it is very helpful to graph both x[k] and h[n - k] when determining w_n[k] and identifying the appropriate intervals of time shifts. This procedure is now summarized:

1. Graph both x[k] and h[n - k] as a function of the independent variable k. To determine h[n - k], first reflect h[k] about k = 0 to obtain h[-k] and then time shift h[-k] by -n.
2. Begin with the time shift n large and negative.
3. Write the functional form for w_n[k].
4. Increase the time shift n until the functional form for w_n[k] changes. The value n at which the change occurs defines the end of the current interval and the beginning of a new interval.
5. Let n be in the new interval. Repeat steps 3 and 4 until all intervals of time shifts n and the corresponding functional forms for w_n[k] are identified. This usually implies increasing n to a very large positive number.
6. For each interval of time shifts n, sum all the values of the corresponding w_n[k] to obtain y[n] on that interval.
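As a quick check on this procedure, here is a sketch (Python, with signals of my own choosing rather than ones from the text: x[n] = u[n] - u[n - 3] and h[n] = (1/2)^n u[n]) in which the interval-by-interval closed forms produced by steps 3-6 are compared against brute-force evaluation of the convolution sum:

```python
# Interval-based evaluation versus brute force for x[n] = u[n] - u[n - 3]
# and h[n] = (1/2)^n u[n].

def x(n): return 1 if 0 <= n <= 2 else 0
def h(n): return 0.5 ** n if n >= 0 else 0

def y_brute(n):
    # direct convolution sum over an axis wide enough to cover the support
    return sum(x(k) * h(n - k) for k in range(-20, 21))

def y_intervals(n):
    if n < 0:                       # no overlap between x[k] and h[n - k]
        return 0.0
    if n <= 2:                      # partial overlap: k runs from 0 to n
        return 2 * (1 - 0.5 ** (n + 1))
    return 7 * 0.5 ** n             # full overlap: k runs from 0 to 2

for n in range(-3, 12):
    assert abs(y_brute(n) - y_intervals(n)) < 1e-12
print("interval-based closed forms match the convolution sum")
```

The three intervals (n < 0, 0 ≤ n ≤ 2, n > 2) arise exactly as in step 4: the functional form of w_n[k] changes when an edge of h[n - k] slides past an edge of x[k].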

The effect of varying n from -∞ to ∞ is to slide h[-k] past x[k] from left to right. Transitions in the intervals of n identified in step 4 generally occur when a change point in the representation for h[-k] slides through a change point in the representation for x[k]. Alternatively, we can sum all the values in w_n[k] as each interval of time shifts is identified, that is, after step 4, rather than waiting until all intervals are identified. The following examples illustrate this procedure for evaluating the convolution sum.


EXAMPLE 2.3 An LTI system has impulse response given by

h[n] = u[n] - u[n - 10]

and depicted in Fig. 2.4(a). Determine the output of this system when the input is the rectangular pulse defined as

x[n] = u[n - 2] - u[n - 7]

and shown in Fig. 2.4(b).

Solution: First we graph x[k] and h[n - k], treating n as a constant and k as the independent variable as depicted in Figs. 2.4(c) and (d). Now identify intervals of time shifts n on which the product signal w_n[k] has the same functional form. Begin with n large and negative, in which case w_n[k] = 0 because there is no overlap in the nonzero portions of x[k] and h[n - k]. By increasing n, we see that w_n[k] = 0 provided n < 2. Hence the first interval of time shifts is n < 2.

FIGURE 2.4 Evaluation of the convolution sum for Example 2.3. (a) The system impulse response h[n]. (b) The input signal x[n]. (c) The input depicted as a function of k. (d) The reflected and time-shifted impulse response h[n - k] depicted as a function of k. (e) The product signal w_n[k] for the interval of time shifts 2 ≤ n ≤ 6. (f) The product signal w_n[k] for the interval of time shifts 6 < n ≤ 11. (g) The product signal w_n[k] for the interval of time shifts 11 < n ≤ 15. (h) The output y[n].


When n = 2 the right edge of h[n − k] slides past the left edge of x[k] and a transition occurs in the functional form for w_n[k]. For n ≥ 2,

w_n[k] = 1,  2 ≤ k ≤ n
         0,  otherwise

This functional form is correct until n > 6 and is depicted in Fig. 2.4(e). When n > 6 the right edge of h[n − k] slides past the right edge of x[k] so the form of w_n[k] changes. Hence our second interval of time shifts is 2 ≤ n ≤ 6.

For n > 6, the functional form of w_n[k] is given by

w_n[k] = 1,  2 ≤ k ≤ 6
         0,  otherwise

as depicted in Fig. 2.4(f). This form holds until n − 9 = 2, or n = 11, since at that value of n the left edge of h[n − k] slides past the left edge of x[k]. Hence our third interval of time shifts is 6 < n ≤ 11.
Next, for n > 11, the functional form for w_n[k] is given by

w_n[k] = 1,  n − 9 ≤ k ≤ 6
         0,  otherwise

as depicted in Fig. 2.4(g). This form holds until n − 9 = 6, or n = 15, since for n > 15 the left edge of h[n − k] lies to the right of x[k] and the functional form for w_n[k] again changes. Hence the fourth interval of time shifts is 11 < n ≤ 15.

For all values of n > 15, we see that w_n[k] = 0. Thus the last interval of time shifts in this problem is n > 15.
The output of the system on each interval of n is obtained by summing the values of the corresponding w_n[k] according to Eq. (2.4). Beginning with n < 2 we have y[n] = 0. Next, for 2 ≤ n ≤ 6, we have

y[n] = Σ_{k=2}^{n} 1 = n − 1

On the third interval, 6 < n ≤ 11, Eq. (2.4) gives

y[n] = Σ_{k=2}^{6} 1 = 5

For 11 < n ≤ 15, Eq. (2.4) gives

y[n] = Σ_{k=n−9}^{6} 1 = 16 − n

Lastly, for n > 15, we see that y[n] = 0. Figure 2.4(h) depicts the output y[n] obtained by combining the results on each interval.
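The interval-by-interval result can be checked numerically by evaluating the convolution sum directly. The sketch below (a Python check, with h[n] = u[n] − u[n − 10] as read off Fig. 2.4(a)) compares the direct sum against the piecewise answer derived above.

```python
def u(n):
    """Discrete-time unit step u[n]."""
    return 1 if n >= 0 else 0

def x(n):
    # Rectangular pulse from Example 2.3: u[n - 2] - u[n - 7]
    return u(n - 2) - u(n - 7)

def h(n):
    # Impulse response read off Fig. 2.4(a): u[n] - u[n - 10]
    return u(n) - u(n - 10)

def y(n):
    """Direct evaluation of the convolution sum over the finite support of x."""
    return sum(x(k) * h(n - k) for k in range(2, 7))

def y_closed(n):
    """Piecewise result derived in the text."""
    if n < 2:
        return 0
    if n <= 6:
        return n - 1
    if n <= 11:
        return 5
    if n <= 15:
        return 16 - n
    return 0

# The direct sum and the interval-by-interval answer agree everywhere.
assert all(y(n) == y_closed(n) for n in range(-5, 25))
```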
2.2 Convolution: Impulse Response Representation for LTI Systems 81

EXAMPLE 2.4 Let the input, x[n], to a LTI system H be given by

x[n] = αⁿ{u[n] − u[n − 10]}

and the impulse response of the system be given by

h[n] = βⁿ u[n]

where 0 < β < 1. Find the output of this system.
Solution: First we graph x[k] and h[n − k], treating n as a constant and k as the independent variable as depicted in Figs. 2.5(a) and (b). We see that

x[k] = α^k,  0 ≤ k ≤ 9
       0,    otherwise

h[n − k] = β^{n−k},  k ≤ n
           0,        otherwise

Now identify intervals of time shifts n on which the functional form of w_n[k] is the same. Begin by considering n large and negative. We see that for n < 0, w_n[k] = 0 since there are no values k such that x[k] and h[n − k] are both nonzero. Hence the first interval is n < 0.
When n = 0 the right edge of h[n − k] slides past the left edge of x[k], so a transition occurs in the form of w_n[k]. For n ≥ 0,

w_n[k] = α^k β^{n−k},  0 ≤ k ≤ n
         0,            otherwise

This form is correct provided 0 ≤ n ≤ 9 and is depicted in Fig. 2.5(c). When n = 9 the right edge of h[n − k] slides past the right edge of x[k] so the form of w_n[k] again changes. Now for n > 9 we have a third form for w_n[k],

w_n[k] = α^k β^{n−k},  0 ≤ k ≤ 9
         0,            otherwise

Figure 2.5(d) depicts this w_n[k] for the third and last interval in this problem, n > 9.
We now determine the output y[n] for each of these three sets of time shifts by summing

FIGURE 2.5 Evaluation of the convolution sum for Example 2.4. (a) The input signal x[k] depicted as a function of k. (b) Reflected and time-shifted impulse response, h[n − k]. (c) The product signal w_n[k] for 0 ≤ n ≤ 9. (d) The product signal w_n[k] for n > 9.

w_n[k] over all k. Starting with the first interval, n < 0, we have w_n[k] = 0, and thus y[n] = 0. For the second interval, 0 ≤ n ≤ 9, we have

y[n] = Σ_{k=0}^{n} α^k β^{n−k}

Here the index of summation is limited from k = 0 to n because these are the only times k for which w_n[k] is nonzero. Combining terms raised to the kth power, we have

y[n] = β^n Σ_{k=0}^{n} (α/β)^k

Next, apply the formula for summing a geometric series of (n + 1) terms to obtain

y[n] = β^n (1 − (α/β)^{n+1}) / (1 − α/β)


Now considering the third interval, n ≥ 10, we have

y[n] = Σ_{k=0}^{9} α^k β^{n−k}
     = β^n Σ_{k=0}^{9} (α/β)^k
     = β^n (1 − (α/β)^{10}) / (1 − α/β)

where, again, the index of summation is limited from k = 0 to 9 because these are the only times for which w_n[k] is nonzero. The last equality also follows from the formula for a finite geometric series. Combining the solutions for each interval of shifts gives the system output

y[n] = 0,                                      n < 0
       β^n (1 − (α/β)^{n+1}) / (1 − α/β),      0 ≤ n ≤ 9
       β^n (1 − (α/β)^{10}) / (1 − α/β),       n ≥ 10
• Drill Problem 2.1 Repeat the convolution in Example 2.1 by directly evaluating the convolution sum.

Answer: See Example 2.1. •

• Drill Problem 2.2 Let the input to a LTI system with impulse response h[n] = aⁿ{u[n − 2] − u[n − 13]} be x[n] = 2{u[n + 2] − u[n − 12]}. Find the output y[n].

Answer:

y[n] = 0,                                          n < 0
       2a^{n+2} (1 − a^{−1−n}) / (1 − a^{−1}),     0 ≤ n ≤ 10
       2a^{12} (1 − a^{−11}) / (1 − a^{−1}),       11 ≤ n ≤ 13
       2a^{12} (1 − a^{n−24}) / (1 − a^{−1}),      14 ≤ n ≤ 23
       0,                                          n ≥ 24 •

• Drill Problem 2.3 Suppose the input x[n] and impulse response h[n] of a LTI system H are given by

x[n] = −u[n] + 2u[n − 3] − u[n − 6]
h[n] = u[n + 1] − u[n − 10]

Find the output of this system, y[n].

Answer:

y[n] = 0,           n < −1
       −(n + 2),    −1 ≤ n ≤ 1
       n − 4,       2 ≤ n ≤ 4
       0,           5 ≤ n ≤ 9
       n − 9,       10 ≤ n ≤ 11
       15 − n,      12 ≤ n ≤ 14
       0,           n > 14 •
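A short script can verify this piecewise answer by brute force; the sketch below evaluates the convolution sum directly over the finite support of x[n].

```python
def u(n):
    """Discrete-time unit step u[n]."""
    return 1 if n >= 0 else 0

def x(n):
    # Drill Problem 2.3 input: -1 for 0 <= n <= 2, +1 for 3 <= n <= 5.
    return -u(n) + 2 * u(n - 3) - u(n - 6)

def h(n):
    # Impulse response: 1 for -1 <= n <= 9.
    return u(n + 1) - u(n - 10)

def y(n):
    # x[n] is nonzero only for 0 <= n <= 5, so the sum is finite.
    return sum(x(k) * h(n - k) for k in range(0, 6))

# Spot-check the piecewise answer given in the text.
expected = {-2: 0, -1: -1, 0: -2, 1: -3, 2: -2, 3: -1, 4: 0,
            5: 0, 9: 0, 10: 1, 11: 2, 12: 3, 13: 2, 14: 1, 15: 0}
assert all(y(n) == v for n, v in expected.items())
```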
The next example in this subsection uses the convolution sum to obtain an equation directly relating the input and output of a system with a finite-duration impulse response.

EXAMPLE 2.5 Consider a LTI system with impulse response

h[n] = 1/4,  0 ≤ n ≤ 3
       0,    otherwise

Find an expression that directly relates an arbitrary input x[n] to the output of this system, y[n].

Solution: Figures 2.6(a) and (b) depict an arbitrary input x[k] and the reflected, time-shifted impulse response h[n − k]. For any time shift n we have

w_n[k] = (1/4) x[k],  n − 3 ≤ k ≤ n
         0,           otherwise

Summing w_n[k] over all k gives the output

y[n] = (1/4)(x[n] + x[n − 1] + x[n − 2] + x[n − 3])

The output of the system in Example 2.5 is the arithmetic average of the four most recent inputs. In Chapter 1 such a system was termed a moving-average system. The

FIGURE 2.6 Evaluation of the convolution sum for Example 2.5. (a) An arbitrary input signal depicted as a function of k. (b) Reflected and time-shifted impulse response, h[n − k].

effect of the averaging in this system is to smooth out short-term fluctuations in the input data. Such systems are often used to identify trends in data.
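A moving-average system of this kind is straightforward to implement as a weighted sum. The sketch below is a minimal Python version (the data values are made up for illustration), assuming, as in the text, that the input is zero before the first sample.

```python
def moving_average(x, M):
    """y[n] = (1/M)(x[n] + x[n-1] + ... + x[n-M+1]), with x[n] = 0 for n < 0."""
    return [sum(x[n - k] for k in range(M) if n - k >= 0) / M
            for n in range(len(x))]

# Made-up noisy ramp, just to illustrate the smoothing effect.
data = [1.0, 2.3, 2.9, 4.2, 4.8, 6.1, 7.0, 8.4]
smoothed = moving_average(data, 4)

# Each output is the average of the four most recent inputs; near the start
# the assumed zero inputs pull the output low, as noted in the text.
assert abs(smoothed[3] - (1.0 + 2.3 + 2.9 + 4.2) / 4) < 1e-12
```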
EXAMPLE 2.6 Apply the average January temperature data depicted in Fig. 2.7 to the following moving-average systems:

(a) h[n] = 1/2,  0 ≤ n ≤ 1;  0, otherwise

(b) h[n] = 1/4,  0 ≤ n ≤ 3;  0, otherwise

(c) h[n] = 1/8,  0 ≤ n ≤ 7;  0, otherwise

Solution: In case (a) the output is the average of the two most recent inputs, in case (b) the four most recent inputs, and in case (c) the eight most recent inputs. The system output for cases (a), (b), and (c) is depicted in Figs. 2.8(a), (b), and (c), respectively. As the impulse response duration increases, the degree of smoothing introduced by the system increases because the output is computed as an average of a larger number of inputs. The input to the system prior to 1900 is assumed to be zero, so the output near 1900 involves an average with some of the values zero. This leads to low values of the output, a phenomenon most evident in case (c).

In general, the output of any discrete-time system with a finite-duration impulse response is given by a weighted sum of the input signal values. Such weighted sums can easily be implemented in a computer to process discrete-time signals. The effect of the system on the signal depends on the weights or values of the system impulse response. The weights are usually chosen to enhance some feature of the data, such as an underlying trend, or to impart a particular characteristic. These issues are discussed throughout later chapters of the text.


The output of a continuous-time LTI system may also be determined solely from knowledge of the input and the system's impulse response. The approach and result are analogous to the discrete-time case. We first express an arbitrary input signal as a weighted superposition of time-shifted impulses. Here the superposition is an integral instead of a sum

FIGURE 2.7 Average January temperature from 1900 to 1994.

FIGURE 2.8 Result of passing average January temperature data through several moving-average systems. (a) Output of two-point moving-average system. (b) Output of four-point moving-average system. (c) Output of eight-point moving-average system.

due to the continuous nature of the input. We then apply this input to a LTI system to write the output as a weighted superposition of time-shifted impulse responses, an expression termed the convolution integral.

The convolution sum was derived by expressing the input signal x[n] as a weighted sum of time-shifted impulses as shown by

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]

Similarly, we may express a continuous-time signal as the weighted superposition of time-shifted impulses:

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ    (2.5)

Here the superposition is an integral and the time shifts are given by the continuous variable τ. The weights x(τ) dτ are derived from the value of the signal x(t) at the time at which each impulse occurs, τ. Equation (2.5) is a statement of the sifting property of the impulse function.
Define the impulse response h(t) = H{δ(t)} as the output of the system in response to an impulse input. If the system is time invariant, then H{δ(t − τ)} = h(t − τ). That is, a time-shifted impulse input generates a time-shifted impulse response output. Now consider the system output in response to a general input expressed as the weighted superposition in Eq. (2.5), as shown by

y(t) = H{∫_{−∞}^{∞} x(τ) δ(t − τ) dτ}

Using the linearity property of the system we obtain

y(t) = ∫_{−∞}^{∞} x(τ) H{δ(t − τ)} dτ

and, since the system is time invariant, we have

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ

Hence the output of a LTI system in response to an input of the form of Eq. (2.5) may be expressed as

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ    (2.6)

The output y(t) is given as a weighted superposition of impulse responses time shifted by τ. The weights are x(τ) dτ. Equation (2.6) is termed the convolution integral and, as before, is denoted by the symbol *; that is,

x(t) * h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
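Although the convolution integral is defined over a continuum, it can be approximated numerically by sampling both signals and replacing the integral with a Riemann sum. The sketch below does this for a simple pulse-and-exponential pair (chosen here for illustration, not taken from the text) and compares against the exact integral.

```python
import math

def x(t):
    # Unit-height pulse: x(t) = u(t) - u(t - 1)
    return 1.0 if 0 <= t < 1 else 0.0

def h(t):
    # One-sided exponential: h(t) = e^{-t} u(t)
    return math.exp(-t) if t >= 0 else 0.0

def y_approx(t, dt=1e-3, T=5.0):
    # Riemann-sum approximation: y(t) ~ sum_k x(k*dt) h(t - k*dt) dt
    return sum(x(k * dt) * h(t - k * dt) * dt for k in range(int(T / dt)))

def y_exact(t):
    # For this pair: y(t) = 1 - e^{-t} on 0 <= t < 1, (e - 1) e^{-t} for t >= 1.
    if t < 0:
        return 0.0
    if t < 1:
        return 1 - math.exp(-t)
    return (math.e - 1) * math.exp(-t)

for t in [0.25, 0.5, 1.5, 2.0]:
    assert abs(y_approx(t) - y_exact(t)) < 1e-2
```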

The convolution process is illustrated in Fig. 2.9. Figure 2.9(a) depicts the impulse response of a system. In Fig. 2.9(b) the input to this system is represented as an integral of weighted and time-shifted impulses, p_τ(t) = x(τ)δ(t − τ). These weighted and time-shifted impulses are depicted for several values of τ on the left-hand side of Fig. 2.9. The output associated with each input p_τ(t) is the weighted and time-shifted impulse response:

v_τ(t) = x(τ)h(t − τ)

The right-hand side of Fig. 2.9(b) depicts v_τ(t) for several values of τ. Note that v_τ(t) is a function of two independent variables, τ and t. On the right-hand side of Fig. 2.9(b), the variation with t is shown on the horizontal axis, while the variation with τ occurs vertically,


FIGURE 2.9 Illustration of the convolution integral. (a) Impulse response of a continuous-time system. (b) Decomposition of x(t) into a weighted integral of time-shifted impulses results in an output y(t) given by a weighted integral of time-shifted impulse responses. Here p_τ(t) is the weighted (by x(τ)) and time-shifted (by τ) impulse input, and v_τ(t) is the weighted and time-shifted impulse response output. Both p_τ(t) and v_τ(t) are depicted only at integer values of τ. The dependence of both p_τ(t) and v_τ(t) on τ is depicted by the τ axis shown on the left- and right-hand sides of the figure. The output is obtained by integrating v_τ(t) over τ.

FIGURE 2.9 (c) The signals w_t(τ) used to compute the output at time t correspond to vertical slices of v_τ(t). Here we have redrawn the right-hand side of Fig. 2.9(b) so that the τ axis is horizontal. The output is obtained for t = t₀ by integrating w_{t₀}(τ) over τ.

as shown by the vertical axis on the right-hand side. The system output at time t = t₀ is obtained by integrating over τ, as shown by

y(t₀) = ∫_{−∞}^{∞} v_τ(t₀) dτ

That is, we integrate along the vertical or τ axis on the right-hand side of Fig. 2.9(b) at a fixed time, t = t₀.

Define a signal w_{t₀}(τ) to represent the variation of v_τ(t) along the τ axis for a fixed time t = t₀. This implies w_{t₀}(τ) = v_τ(t₀). Examples of this signal for several values of t₀ are depicted in Fig. 2.9(c). The corresponding system output is now obtained by integrating w_{t₀}(τ) over τ from −∞ to ∞. Note that the horizontal axis in Fig. 2.9(b) is t and the vertical axis is τ. In Fig. 2.9(c) we have in effect redrawn the right-hand side of Fig. 2.9(b) with τ as the horizontal axis and t as the vertical axis.

We have defined the intermediate signal w_t(τ) = x(τ)h(t − τ) as the product of x(τ) and h(t − τ). In this definition τ is the independent variable and t is treated as a constant. This is explicitly indicated by writing t as a subscript and τ within the parentheses of w_t(τ). Hence h(t − τ) = h(−(τ − t)) is a reflected and time-shifted (by −t) version of h(τ). The time shift t determines the time at which we evaluate the output of the system since Eq. (2.6) becomes

y(t) = ∫_{−∞}^{∞} w_t(τ) dτ    (2.7)

The system output at any time t is the area under the signal w_t(τ).
In general, the functional form for w_t(τ) will depend on the value of t. As in the discrete-time case, we may avoid evaluating Eq. (2.7) at an infinite number of values of t by identifying intervals of t on which w_t(τ) has the same functional form. We then only need to evaluate Eq. (2.7) using the w_t(τ) associated with each interval. Often it is very helpful to graph both x(τ) and h(t − τ) when determining w_t(τ) and identifying the appropriate interval of time shifts. This procedure is summarized as follows:

1. Graph x(τ) and h(t − τ) as a function of the independent variable τ. To obtain h(t − τ), reflect h(τ) about τ = 0 to obtain h(−τ) and then time shift h(−τ) by −t.
2. Begin with the time shift t large and negative.
3. Write the functional form for w_t(τ).
4. Increase the time shift t until the functional form for w_t(τ) changes. The value t at which the change occurs defines the end of the current interval and the beginning of a new interval.
5. Let t be in the new interval. Repeat steps 3 and 4 until all intervals of time shifts t and the corresponding functional forms for w_t(τ) are identified. This usually implies increasing t to a large and positive value.
6. For each interval of time shifts t, integrate w_t(τ) from τ = −∞ to τ = ∞ to obtain y(t) on that interval.

The effect of increasing t from a large negative value to a large positive value is to slide h(t − τ) past x(τ) from left to right. Transitions in the intervals of t associated with the same form of w_t(τ) generally occur when a transition in h(t − τ) slides through a transition in x(τ). Alternatively, we can integrate w_t(τ) as each interval of time shifts is identified, that is, after step 4, rather than waiting until all intervals are identified. The following examples illustrate this procedure for evaluating the convolution integral.

EXAMPLE 2.7 Consider the RC circuit depicted in Fig. 2.10 and assume the circuit's time constant is RC = 1 s. Determine the voltage across the capacitor, y(t), resulting from an input voltage x(t) = e^{−3t}{u(t) − u(t − 2)}.

Solution: The circuit is linear and time invariant, so the output is the convolution of the input and the impulse response. That is, y(t) = x(t) * h(t). The impulse response of this circuit is
h(t) = (1/RC) e^{−t/RC} u(t) = e^{−t} u(t)

FIGURE 2.10 RC circuit system with the voltage source x(t) as input and the voltage measured across the capacitor, y(t), as output.

To evaluate the convolution integral, first graph x(τ) and h(t − τ) as a function of the independent variable τ while treating t as a constant. We see from Figs. 2.11(a) and (b) that

x(τ) = e^{−3τ},  0 < τ < 2
       0,        otherwise

h(t − τ) = e^{−(t−τ)},  τ < t
           0,           otherwise

Now identify the intervals of time shifts t for which the functional form of w_t(τ) does not change. Begin with t large and negative. Provided t < 0, we have w_t(τ) = 0 since there are no values τ for which both x(τ) and h(t − τ) are nonzero. Hence the first interval of time shifts is t < 0.

Note that at t = 0 the right edge of h(t − τ) intersects the left edge of x(τ). For t > 0,

w_t(τ) = e^{−t−2τ},  0 < τ < t
         0,          otherwise

This form for w_t(τ) is depicted in Fig. 2.11(c). It does not change until t > 2, at which point the right edge of h(t − τ) passes through the right edge of x(τ). The second interval of time shifts t is thus 0 ≤ t < 2.

FIGURE 2.11 Evaluation of the convolution integral for Example 2.7. (a) The input depicted as a function of τ. (b) Reflected and time-shifted impulse response, h(t − τ). (c) The product signal w_t(τ) for 0 ≤ t < 2. (d) The product signal w_t(τ) for t ≥ 2. (e) System output y(t).


For t > 2 we have a third form for w_t(τ), which is written as

w_t(τ) = e^{−t−2τ},  0 < τ < 2
         0,          otherwise

Figure 2.11(d) depicts w_t(τ) for this third interval of time shifts, t ≥ 2.

We now determine the output y(t) for each of these three intervals of time shifts by integrating w_t(τ) from τ = −∞ to τ = ∞. Starting with the first interval, t < 0, we have w_t(τ) = 0 and thus y(t) = 0. For the second interval, 0 ≤ t < 2, we have

y(t) = ∫_0^t e^{−t−2τ} dτ
     = e^{−t} [−(1/2) e^{−2τ}]_0^t
     = (1/2)(e^{−t} − e^{−3t})

For the third interval, t ≥ 2, we have

y(t) = ∫_0^2 e^{−t−2τ} dτ
     = e^{−t} [−(1/2) e^{−2τ}]_0^2
     = (1/2)(1 − e^{−4}) e^{−t}

Combining the solutions for each interval of time shifts gives the output

y(t) = 0,                          t < 0
       (1/2)(e^{−t} − e^{−3t}),    0 ≤ t < 2
       (1/2)(1 − e^{−4}) e^{−t},   t ≥ 2

as depicted in Fig. 2.11(e).
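The piecewise result for the RC circuit can be verified numerically by approximating the convolution integral with a Riemann sum over the finite support of x(τ):

```python
import math

def x(t):
    # Input: e^{-3t} (u(t) - u(t - 2))
    return math.exp(-3 * t) if 0 <= t < 2 else 0.0

def h(t):
    # RC impulse response with RC = 1: e^{-t} u(t)
    return math.exp(-t) if t >= 0 else 0.0

def y_numeric(t, dt=1e-3):
    # Riemann-sum approximation of the convolution integral; x is zero
    # outside 0 <= tau < 2, so the sum covers only that range.
    return sum(x(k * dt) * h(t - k * dt) * dt for k in range(int(2 / dt) + 1))

def y_analytic(t):
    # Piecewise answer derived in Example 2.7.
    if t < 0:
        return 0.0
    if t < 2:
        return 0.5 * (math.exp(-t) - math.exp(-3 * t))
    return 0.5 * (1 - math.exp(-4)) * math.exp(-t)

for t in [0.5, 1.0, 1.9, 3.0]:
    assert abs(y_numeric(t) - y_analytic(t)) < 1e-2
```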

EXAMPLE 2.8 Suppose the input x(t) and impulse response h(t) of a LTI system are given by

x(t) = 2u(t − 1) − 2u(t − 3)
h(t) = u(t + 1) − 2u(t − 1) + u(t − 3)

Find the output of this system.

Solution: Graphical representations for x(τ) and h(t − τ) are given in Figs. 2.12(a) and (b). From these we can determine the intervals of time shifts t on which the functional form of w_t(τ) is the same. Begin with t large and negative. For t + 1 < 1, or t < 0, the right edge of h(t − τ) is to the left of the nonzero portion of x(τ), and consequently w_t(τ) = 0.

For t > 0 the right edge of h(t − τ) overlaps with the nonzero portion of x(τ) and we have

w_t(τ) = 2,  1 < τ < t + 1
         0,  otherwise

This form for w_t(τ) holds provided t + 1 < 3, or t < 2, and is depicted in Fig. 2.12(c).

FIGURE 2.12 Evaluation of the convolution integral for Example 2.8. (a) The input depicted as a function of τ. (b) Reflected and time-shifted impulse response, h(t − τ). (c) The product signal w_t(τ) for 0 < t < 2. (d) The product signal w_t(τ) for 2 ≤ t < 4. (e) The product signal w_t(τ) for 4 ≤ t < 6. (f) System output y(t).

For t > 2 the right edge of h(t − τ) is to the right of the nonzero portion of x(τ). In this case we have

w_t(τ) = −2,  1 < τ < t − 1
         2,   t − 1 < τ < 3
         0,   otherwise

This form for w_t(τ) holds provided t − 1 < 3, or t < 4, and is depicted in Fig. 2.12(d).

For t > 4 the leftmost edge of h(t − τ) is within the nonzero portion of x(τ) and we have

w_t(τ) = −2,  t − 3 < τ < 3
         0,   otherwise

This form for w_t(τ) is depicted in Fig. 2.12(e) and holds provided t − 3 < 3, or t < 6.

For t > 6, no nonzero portions of x(τ) and h(t − τ) overlap, and consequently w_t(τ) = 0.

The system output y(t) is obtained by integrating w_t(τ) from τ = −∞ to τ = ∞ for each interval of time shifts identified above. Beginning with t < 0, we have y(t) = 0 since w_t(τ) = 0. For 0 < t < 2 we have

y(t) = ∫_1^{t+1} 2 dτ = 2t

On the next interval, 2 ≤ t < 4, we have

y(t) = ∫_1^{t−1} (−2) dτ + ∫_{t−1}^3 2 dτ = −4t + 12

Now considering 4 ≤ t < 6 the output is

y(t) = ∫_{t−3}^3 (−2) dτ = 2t − 12

Lastly, for t > 6, we have y(t) = 0 since w_t(τ) = 0. Combining the outputs for each interval of time shifts gives the result

y(t) = 0,           t < 0
       2t,          0 ≤ t < 2
       −4t + 12,    2 ≤ t < 4
       2t − 12,     4 ≤ t < 6
       0,           t ≥ 6

as depicted in Fig. 2.12(f).
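Because x(t) and h(t) here are piecewise constant, the integral is easy to approximate numerically; the sketch below checks the piecewise-linear answer at a few sample points.

```python
def x(t):
    # 2u(t-1) - 2u(t-3): height 2 on (1, 3)
    return 2.0 if 1 <= t < 3 else 0.0

def h(t):
    # u(t+1) - 2u(t-1) + u(t-3): +1 on (-1, 1), -1 on (1, 3)
    if -1 <= t < 1:
        return 1.0
    if 1 <= t < 3:
        return -1.0
    return 0.0

def y_numeric(t, dt=1e-3):
    # Riemann sum over the support of x(tau), which is (1, 3).
    return sum(x(k * dt) * h(t - k * dt) * dt
               for k in range(int(1 / dt), int(3 / dt) + 1))

def y_analytic(t):
    # Piecewise answer derived in Example 2.8.
    if t < 0:
        return 0.0
    if t < 2:
        return 2 * t
    if t < 4:
        return -4 * t + 12
    if t < 6:
        return 2 * t - 12
    return 0.0

for t in [0.5, 1.0, 2.5, 3.0, 4.5, 5.5, 7.0]:
    assert abs(y_numeric(t) - y_analytic(t)) < 2e-2
```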

• Drill Problem 2.4 Let the impulse response of a LTI system be h(t) = e^{−2(t+1)} u(t + 1). Find the output y(t) if the input is x(t) = e^{−|t|}.

Answer: For t < −1,

w_t(τ) = e^{−2(t+1)} e^{3τ},  −∞ < τ < t + 1
         0,                   otherwise

y(t) = (1/3) e^{t+1}

For t ≥ −1,

w_t(τ) = e^{−2(t+1)} e^{3τ},  τ < 0
         e^{−2(t+1)} e^{τ},   0 ≤ τ < t + 1
         0,                   otherwise

y(t) = e^{−(t+1)} − (2/3) e^{−2(t+1)} •
• Drill Problem 2.5 Let the impulse response of a LTI system be given by h(t) = u(t − 1) − u(t − 4). Find the output of this system in response to an input x(t) = u(t) + u(t − 1) − 2u(t − 2).

Answer:

y(t) = 0,          t < 1
       t − 1,      1 ≤ t < 2
       2t − 3,     2 ≤ t < 3
       3,          3 ≤ t < 4
       7 − t,      4 ≤ t < 5
       12 − 2t,    5 ≤ t < 6
       0,          t ≥ 6 •
The convolution integral describes the behavior of a continuous-time system. The system impulse response can provide insight into the nature of the system. We will develop this insight in the next section and subsequent chapters. To glimpse some of the insight offered by the impulse response, consider the following example.

EXAMPLE 2.9 Let the impulse response of a LTI system be h(t) = δ(t − a). Determine the output of this system in response to an input x(t).

Solution: Consider first obtaining h(t − τ). Reflecting h(τ) = δ(τ − a) about τ = 0 gives h(−τ) = δ(τ + a), since the impulse function has even symmetry. Now shift the independent variable τ by −t to obtain h(t − τ) = δ(τ − (t − a)). Substitute this expression for h(t − τ) in the convolution integral of Eq. (2.6) and use the sifting property of the impulse function to obtain

y(t) = ∫_{−∞}^{∞} x(τ) δ(τ − (t − a)) dτ
     = x(t − a)

Note that the identity system is represented for a = 0 since in this case the output is equal to the input. When a ≠ 0, the system time shifts the input. If a is positive the input is delayed, and if a is negative the input is advanced. Hence the location of the impulse response relative to the time origin determines the amount of delay introduced by the system.
2.3 Properties of the Impulse Response Representation for LTI Systems
The impulse response completely characterizes the input-output behavior of a LTI system. Hence properties of a system, such as memory, causality, and stability, are related to its impulse response. Also, the impulse response of an interconnection of LTI systems is related to the impulse response of the constituent systems. In this section we examine the impulse response of interconnected systems and relate the impulse response to system properties. These relationships tell us how the impulse response characterizes system behavior. The results for continuous- and discrete-time systems are obtained using nearly identical approaches, so we derive one and simply state the results for the other.


Consider two LTI systems with impulse responses h1(t) and h2(t) connected in parallel as illustrated in Fig. 2.13(a). The output of this connection of systems, y(t), is the sum of the outputs of each system:

y(t) = y1(t) + y2(t)
     = x(t) * h1(t) + x(t) * h2(t)

Substitute the integral representation for each convolution

FIGURE 2.13 Interconnection of two systems. (a) Parallel connection of two systems. (b) Equivalent system.

and combine the integrals to obtain

y(t) = ∫_{−∞}^{∞} x(τ){h1(t − τ) + h2(t − τ)} dτ
     = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
     = x(t) * h(t)

where h(t) = h1(t) + h2(t). We identify h(t) as the impulse response of the parallel connection of two systems. This equivalent system is depicted in Fig. 2.13(b). The impulse response of two systems connected in parallel is the sum of the individual impulse responses. Mathematically, this implies that convolution possesses the distributive property:

x(t) * h1(t) + x(t) * h2(t) = x(t) * {h1(t) + h2(t)}    (2.8)
Identical results hold for the discrete-time case:

x[n] * h1[n] + x[n] * h2[n] = x[n] * {h1[n] + h2[n]}    (2.9)
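The distributive property is easy to confirm numerically on finite-length sequences; the sketch below uses a small hand-rolled convolution routine and arbitrary test sequences.

```python
def conv(x, h):
    """Finite-length convolution sum: y[n] = sum_k x[k] h[n-k]."""
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Arbitrary test sequences (h1 and h2 have equal length so they can be added).
x  = [1, 2, 3, 4]
h1 = [1, -1, 2]
h2 = [0, 1, 1]

# Left side: convolve with each system and sum the outputs (parallel connection).
lhs = [a + b for a, b in zip(conv(x, h1), conv(x, h2))]
# Right side: convolve once with the summed impulse response.
rhs = conv(x, [a + b for a, b in zip(h1, h2)])
assert lhs == rhs
```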
Now consider the cascade connection of two LTI systems illustrated in Fig. 2.14(a). Let z(t) be the output of the first system and the input to the second system in the cascade. The output y(t) is expressed in terms of z(t) as

y(t) = z(t) * h2(t)    (2.10)
     = ∫_{−∞}^{∞} z(τ) h2(t − τ) dτ    (2.11)

FIGURE 2.14 Interconnection of two systems. (a) Cascade connection of two systems. (b) Equivalent system. (c) Equivalent system; interchange system order.

However, z(τ) is the output of the first system and is expressed in terms of the input x(τ) as

z(τ) = x(τ) * h1(τ)
     = ∫_{−∞}^{∞} x(ν) h1(τ − ν) dν    (2.12)

Here ν is used as the variable of integration in the convolution integral. Substituting Eq. (2.12) for z(τ) in Eq. (2.11) gives

y(t) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(ν) h1(τ − ν) h2(t − τ) dν dτ

Now perform the change of variable η = τ − ν and interchange integrals to obtain

y(t) = ∫_{−∞}^{∞} x(ν) {∫_{−∞}^{∞} h1(η) h2(t − ν − η) dη} dν    (2.13)

The inner integral is identified as the convolution of h1(t) with h2(t) evaluated at t − ν. That is, if we define h(t) = h1(t) * h2(t), then

∫_{−∞}^{∞} h1(η) h2(t − ν − η) dη = h(t − ν)

Substituting this relationship into Eq. (2.13) yields

y(t) = ∫_{−∞}^{∞} x(ν) h(t − ν) dν
     = x(t) * h(t)    (2.14)

Hence the impulse response of two LTI systems connected in cascade is the convolution of the individual impulse responses. The cascade connection is input-output equivalent to the single system represented by the impulse response h(t) as shown in Fig. 2.14(b).

Substituting z(t) = x(t) * h1(t) into the expression for y(t) given in Eq. (2.10) and h(t) = h1(t) * h2(t) into the alternative expression for y(t) given in Eq. (2.14) establishes that convolution possesses the associative property

{x(t) * h1(t)} * h2(t) = x(t) * {h1(t) * h2(t)}    (2.15)

A second important property for the cascade connection of systems concerns the ordering of the systems. Write h(t) = h1(t) * h2(t) as the integral

h(t) = ∫_{−∞}^{∞} h1(τ) h2(t − τ) dτ

and perform the change of variable ν = t − τ to obtain

h(t) = ∫_{−∞}^{∞} h1(t − ν) h2(ν) dν
     = h2(t) * h1(t)    (2.16)

Hence the convolution of h1(t) and h2(t) can be performed in either order. This corresponds to interchanging the order of the systems in the cascade as shown in Fig. 2.14(c). Since

we conclude that the output of a cascade combination of LTI systems is independent of the order in which the systems are connected. Mathematically, we say that the convolution operation possesses the commutative property

x(t) * h(t) = h(t) * x(t)    (2.17)

The commutative property is often used to simplify evaluation or interpretation of the convolution integral.
Discrete-time systems and convolution have identical properties to their continuous-time counterparts. The impulse response of a cascade connection of LTI systems is given by the convolution of the individual impulse responses, and the output of a cascade combination of LTI systems is independent of the order in which the systems are connected. Discrete-time convolution is associative

{x[n] * h1[n]} * h2[n] = x[n] * {h1[n] * h2[n]}    (2.18)

and commutative

x[n] * h[n] = h[n] * x[n]    (2.19)
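The associative and commutative properties can likewise be confirmed on finite-length test sequences, for which the convolution sum is exact:

```python
def conv(x, h):
    """Finite-length convolution sum: y[n] = sum_k x[k] h[n-k]."""
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Arbitrary test sequences.
x  = [1, 0, -2, 3]
h1 = [2, 1]
h2 = [1, 1, 1]

# Associative: cascading h1 then h2 equals convolving with h1 * h2.
assert conv(conv(x, h1), h2) == conv(x, conv(h1, h2))
# Commutative: the order of the operands does not matter.
assert conv(h1, h2) == conv(h2, h1)
```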
The following example demonstrates the use of convolution properties for finding a single system that is input-output equivalent to an interconnected system.

EXAMPLE 2.10 Consider the interconnection of LTI systems depicted in Fig. 2.15. The impulse response of each system is given by

h1[n] = u[n]
h2[n] = u[n + 2] − u[n]
h3[n] = δ[n − 2]
h4[n] = aⁿ u[n]

Find the impulse response of the overall system, h[n].

Solution: We first derive an expression for the overall impulse response in terms of the impulse response of each system. Begin with the parallel combination of h1[n] and h2[n]. The equivalent system has impulse response h12[n] = h1[n] + h2[n]. This system is in series with h3[n], so the equivalent system for the upper branch has impulse response h123[n] = h12[n] * h3[n]. Substituting for h12[n], we have h123[n] = {h1[n] + h2[n]} * h3[n]. The upper branch is in parallel with the lower branch, characterized by h4[n]; hence the overall system impulse response is h[n] = h123[n] − h4[n]. Substituting for h123[n] yields

h[n] = {h1[n] + h2[n]} * h3[n] − h4[n]

Now substitute the specific forms of h1[n] and h2[n] to obtain

h12[n] = u[n] + u[n + 2] − u[n]
       = u[n + 2]

Convolving h12[n] with h3[n] gives

h123[n] = u[n + 2] * δ[n − 2]
        = u[n]

Lastly, we obtain the overall impulse response by subtracting h4[n] from h123[n]:

h[n] = {1 − aⁿ} u[n]


,.....~ · · ~

x[n] ~.,...

,....__ _ _ _....,._ . h [n] _ _ _ _ _____,


FIGURE 2.15 1nterc<>n11ecti<>n of systems for Example 2.1 O.
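The overall impulse response derived in Example 2.10 can be verified numerically. In the sketch below, a = 0.5 is an arbitrary test value, u[n] is truncated to N samples, and sequences carry an explicit origin offset so that h2[n], which begins at n = −2, can be represented.

```python
def conv(a_sig, b_sig):
    """Convolution sum of two finite sequences stored as (start_index, samples)."""
    (sa, xa), (sb, xb) = a_sig, b_sig
    y = [0.0] * (len(xa) + len(xb) - 1)
    for i, va in enumerate(xa):
        for j, vb in enumerate(xb):
            y[i + j] += va * vb
    return (sa + sb, y)

def sample(sig, n):
    """Value of a sequence at index n (zero outside its stored range)."""
    s, xs = sig
    return xs[n - s] if 0 <= n - s < len(xs) else 0.0

a, N = 0.5, 20
h1 = (0, [1.0] * N)       # u[n], truncated after N samples
h2 = (-2, [1.0, 1.0])     # u[n + 2] - u[n]: ones at n = -2 and n = -1
h3 = (2, [1.0])           # delta[n - 2]

# Upper branch: (h1 + h2) * h3, computed via the distributive property.
upper = lambda n: sample(conv(h1, h3), n) + sample(conv(h2, h3), n)
h4 = lambda n: a**n if n >= 0 else 0.0

# The overall impulse response upper[n] - h4[n] should equal (1 - a^n) u[n].
for n in range(N):
    assert abs((upper(n) - h4(n)) - (1 - a**n)) < 1e-12
```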

Interconnections of systems occur naturally in analysis. Often it is easier to break a complex system into simpler subsystems, analyze each subsystem, and then study the entire system as an interconnection of subsystems than it is to analyze the overall system directly. This is an example of the "divide-and-conquer" approach to problem solving and is possible due to the assumptions of linearity and time invariance. Interconnections of systems are also useful in system implementation, since systems that are equivalent in the input-output sense are not necessarily equivalent in other senses. For example, the computational complexity of two input-output equivalent systems for processing data in a computer may differ significantly. The fact that many different interconnections of LTI systems are input-output equivalent can be exploited to optimize some other implementation criterion such as computation.


Memoryless Systems

Recall that the output of a memoryless system depends only on the present input. Exploiting
the commutative property of convolution, the output of a LTI discrete-time system
may be expressed as

y[n] = h[n] * x[n]

     = Σ_{k=-∞}^{∞} h[k]x[n - k]

For this system to be memoryless, y[n] must depend only on x[n] and cannot depend on
x[n - k] for k ≠ 0. This condition implies that h[k] = 0 for k ≠ 0. Hence a LTI discrete-time
system is memoryless if and only if h[k] = cδ[k], where c is an arbitrary constant.
Writing the output of a continuous-time system as

y(t) = ∫_{-∞}^{∞} h(τ)x(t - τ) dτ

we see that, analogous to the discrete-time case, a continuous-time system is memoryless
if and only if h(τ) = cδ(τ) for c an arbitrary constant.
The memoryless condition places severe restrictions on the form of the impulse response.
All memoryless LTI systems perform scalar multiplication on the input.
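Since h[k] = cδ[k], convolution with a memoryless impulse response reduces to scaling. A minimal numerical sketch (using NumPy; the constant c = 3 and the input sequence are arbitrary illustrative choices, not from the text):

```python
import numpy as np

# A memoryless LTI system: impulse response is a scaled impulse, h[k] = c*delta[k].
c = 3.0
h = np.array([c])          # h[0] = c, zero elsewhere

x = np.array([1.0, -2.0, 0.5, 4.0])
y = np.convolve(h, x)      # full convolution; output length is len(h)+len(x)-1 = 4

# The output is just scalar multiplication of the input, sample by sample.
print(np.allclose(y, c * x))   # True
```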


Causal Systems

The output of a causal system depends only on past or present values of the input. Again
write the convolution sum as

y[n] = Σ_{k=-∞}^{∞} h[k]x[n - k]

2.3 Properties of the Impulse Response Representation for LTI Systems 99

Past and present values of the input, x[n], x[n - 1], x[n - 2], ..., are associated with
indices k ≥ 0 in the convolution sum, while future values of the input are associated with
indices k < 0. In order for y[n] to depend only on past or present values of the input, we
require h[k] = 0 for k < 0. Hence, for a causal system, h[k] = 0 for k < 0, and the
convolution sum is rewritten

y[n] = Σ_{k=0}^{∞} h[k]x[n - k]

The causality condition for a continuous-time system follows in an analogous manner
from the convolution integral

y(t) = ∫_{-∞}^{∞} h(τ)x(t - τ) dτ

A causal continuous-time system has impulse response that satisfies h(τ) = 0 for τ < 0.
The output of a causal system is thus expressed as the convolution integral

y(t) = ∫_{0}^{∞} h(τ)x(t - τ) dτ

The causality condition is intuitively satisfying. Recall that the impulse response is
the output of a system in response to an impulse input applied at time t = 0. Causal
systems are nonanticipative; that is, they cannot generate an output before the input is
applied. Requiring the impulse response to be zero for negative time is equivalent to saying
the system cannot respond prior to application of the impulse.


Stable Systems

Recall from Chapter 1 that a system is bounded input-bounded output (BIBO) stable if
the output is guaranteed to be bounded for every bounded input. Formally, if the input
to a stable discrete-time system satisfies |x[n]| ≤ Mx < ∞, then the output must satisfy
|y[n]| ≤ My < ∞. We shall derive conditions on h[n] that guarantee stability of the system
by bounding the convolution sum. The magnitude of the output is given by

|y[n]| = |h[n] * x[n]|

       = |Σ_{k=-∞}^{∞} h[k]x[n - k]|

We seek an upper bound on |y[n]| that is a function of the upper bound on |x[n]| and the
impulse response. Since the magnitude of a sum of numbers is less than or equal to the
sum of the magnitudes, that is, |a + b| ≤ |a| + |b|, we may write

|y[n]| ≤ Σ_{k=-∞}^{∞} |h[k]x[n - k]|

Furthermore, the magnitude of a product is equal to the product of the magnitudes, that
is, |ab| = |a||b|, and so we have

|y[n]| ≤ Σ_{k=-∞}^{∞} |h[k]||x[n - k]|

If we assume that the input is bounded, |x[n]| ≤ Mx < ∞, then |x[n - k]| ≤ Mx and

|y[n]| ≤ Mx Σ_{k=-∞}^{∞} |h[k]|     (2.20)

Hence the output is bounded, |y[n]| < ∞, provided that the impulse response of the system
is absolutely summable. We conclude that the impulse response of a stable system satisfies
the bound

Σ_{k=-∞}^{∞} |h[k]| < ∞

Our derivation so far has established absolute summability of the impulse response as a
sufficient condition for BIBO stability. The reader is asked to show that this is also a
necessary condition for BIBO stability in Problem 2.13.
A similar set of steps may be used to establish that a continuous-time system is BIBO
stable if and only if the impulse response is absolutely integrable, that is,

∫_{-∞}^{∞} |h(τ)| dτ < ∞
EXAMPLE 2.11 A discrete-time system has impulse response

h[n] = a^n u[n + 2]

Is this system BIBO stable, causal, and memoryless?

Solution: Stability is determined by checking whether the impulse response is absolutely
summable, as shown by

Σ_{n=-∞}^{∞} |h[n]| = Σ_{n=-2}^{∞} |a|^n

                = |a|^{-2} + |a|^{-1} + Σ_{n=0}^{∞} |a|^n

The infinite geometric sum in the second line converges only if |a| < 1. Hence the system is
stable provided 0 < |a| < 1. The system is not causal, since the impulse response h[n] is
nonzero for n = -1, -2. The system is not memoryless because h[n] is nonzero for some
values n ≠ 0.
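The convergence argument above is easy to check numerically. The following sketch (NumPy; a = 0.5 is an arbitrary choice satisfying 0 < |a| < 1) compares a truncated sum of |h[n]| against the closed-form value of the geometric series:

```python
import numpy as np

a = 0.5
n = np.arange(-2, 200)            # truncate the infinite sum at n = 199
partial = np.sum(np.abs(a) ** n)  # sum of |h[n]| = |a|^n for n >= -2

# Closed form: |a|^{-2} / (1 - |a|) for |a| < 1 (geometric series starting at n = -2)
closed = np.abs(a) ** -2 / (1 - np.abs(a))
print(abs(partial - closed) < 1e-10)   # True: the sum converges, so the system is stable
```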

• Drill Problem 2.6 Determine the conditions on a such that the continuous-time
system with impulse response h(t) = e^{at}u(t) is stable, causal, and memoryless.

Answer: The system is stable provided a < 0, causal for all a, and there is no a for which
the system is memoryless. •
We emphasize that a system can be unstable even though the impulse response is
finite valued. For example, the impulse response h[n] = u[n] is never greater than one, but
is not absolutely summable and thus the system is unstable. To demonstrate this, use the
convolution sum to express the output of this system in terms of the input as

y[n] = Σ_{k=-∞}^{n} x[k]

Although the output is bounded for some bounded inputs x[n], it is not bounded for every
bounded x[n]. In particular, the constant input x[n] = c clearly results in an unbounded
output.
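The growth of the accumulator's output under a constant input is easy to see numerically (a NumPy sketch; the constant c = 1 and the signal length are arbitrary illustrative choices):

```python
import numpy as np

# Accumulator: h[n] = u[n], so y[n] is the running sum of the input.
c = 1.0
x = c * np.ones(100)       # bounded input: |x[n]| <= 1 for all n
y = np.cumsum(x)           # running sum implements convolution with u[n]

print(y[9], y[99])         # 10.0 100.0 -- the output grows without bound
```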

FIGURE 2.16 Cascade of LTI system with impulse response h(t) and inverse system with impulse
response h^{-1}(t).


Invertible Systems and Deconvolution

A system is invertible if the input to the system can be recovered from the output. This
implies existence of an inverse system that takes the output of the original system as its
input and produces the input of the original system. We shall limit ourselves here to consideration
of inverse systems that are LTI. Figure 2.16 depicts the cascade of a LTI system
having impulse response h(t) with a LTI inverse system whose impulse response is denoted
as h^{-1}(t).
The process of recovering x(t) from h(t) * x(t) is termed deconvolution, since it
corresponds to reversing or undoing the convolution operation. An inverse system has
output x(t) in response to input y(t) = h(t) * x(t) and thus solves the deconvolution problem.
Deconvolution and inverse systems play an important role in many signal-processing
and systems problems. A common problem is that of reversing or "equalizing" the distortion
introduced by a nonideal system. For example, consider using a high-speed modem
to communicate over telephone lines. Distortion introduced by the telephone network
places severe restrictions on the rate at which information can be transmitted, so an equalizer
is incorporated into the modem. The equalizer reverses the telephone network distortion
and permits much higher data rates to be achieved. In this case the equalizer represents
an inverse system for the telephone network. We will discuss equalization in more detail
in Chapters 5 and 8.
The relationship between the impulse response of a system, h(t), and the corresponding
inverse system, h^{-1}(t), is easily derived. The impulse response of the cascade connection
in Fig. 2.16 is the convolution of h(t) and h^{-1}(t). We require the output of the cascade to
equal the input, or

x(t) * (h(t) * h^{-1}(t)) = x(t)

This implies that

h(t) * h^{-1}(t) = δ(t)     (2.21)

Similarly, the impulse response of a discrete-time LTI inverse system, h^{-1}[n], must satisfy

h[n] * h^{-1}[n] = δ[n]     (2.22)

In many equalization applications an exact inverse system may be difficult to find or implement.
Determination of an approximate solution to Eq. (2.21) or Eq. (2.22) is often
sufficient in such cases. The following example illustrates a case where an exact inverse
system is obtained by directly solving Eq. (2.22).

EXAMPLE 2.12 Consider designing a discrete-time inverse system to eliminate the distortion
associated with an undesired echo in a data transmission problem. Assume the echo is represented
as attenuation by a constant a and a delay corresponding to one time unit of the
input sequence. Hence the distorted received signal, y[n], is expressed in terms of the transmitted
signal x[n] as

y[n] = x[n] + ax[n - 1]

Find a causal inverse system that recovers x[n] from y[n]. Check if this inverse system is stable.

Solution: First we identify the impulse response of the system relating y[n] and x[n]. Writing
the convolution sum as

y[n] = Σ_{k=-∞}^{∞} h[k]x[n - k]

we identify

h[k] = 1,  k = 0
       a,  k = 1
       0,  otherwise

as the impulse response of the system that models direct transmission plus the echo. The inverse
system h^{-1}[n] must satisfy h[n] * h^{-1}[n] = δ[n]. Substituting for h[n], we desire to find h^{-1}[n]
that satisfies the equation

h^{-1}[n] + ah^{-1}[n - 1] = δ[n]     (2.23)

Consider solving this equation for several different values of n. For n < 0, we must have
h^{-1}[n] = 0 in order to obtain a causal inverse system. For n = 0, δ[n] = 1 and Eq. (2.23) gives

h^{-1}[0] + ah^{-1}[-1] = 1

so h^{-1}[0] = 1. For n > 0, δ[n] = 0 and Eq. (2.23) implies

h^{-1}[n] + ah^{-1}[n - 1] = 0

or h^{-1}[n] = -ah^{-1}[n - 1]. Since h^{-1}[0] = 1, we have h^{-1}[1] = -a, h^{-1}[2] = a^2, h^{-1}[3] =
-a^3, and so on. Hence the inverse system has the impulse response

h^{-1}[n] = (-a)^n u[n]

To check for stability, we determine whether h^{-1}[n] is absolutely summable, as shown by

Σ_{k=0}^{∞} |h^{-1}[k]| = Σ_{k=0}^{∞} |a|^k

This geometric series converges and hence the system is stable provided |a| < 1. This implies
that the inverse system is stable if the echo attenuates the transmitted signal x[n], but unstable
if the echo amplifies x[n].
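The recursion h^{-1}[n] = -a h^{-1}[n - 1] derived above can be applied directly to deconvolve the echo. A sketch (NumPy; the value a = 0.5 and the random test signal are illustrative assumptions, not from the text):

```python
import numpy as np

a = 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(50)            # transmitted signal

# Channel with echo: y[n] = x[n] + a*x[n-1]
y = x + a * np.concatenate(([0.0], x[:-1]))

# Causal recursive inverse from Eq. (2.23): xhat[n] = y[n] - a*xhat[n-1]
xhat = np.zeros_like(y)
for n in range(len(y)):
    xhat[n] = y[n] - a * (xhat[n - 1] if n > 0 else 0.0)

print(np.allclose(xhat, x))            # True: the echo is removed exactly
```

Running the same recursion with |a| > 1 would amplify rounding errors without bound, mirroring the instability noted in the example.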

Obtaining an inverse system by directly solving Eq. (2.21) or Eq. (2.22) is difficult
in general. Furthermore, not every LTI system has a stable and causal inverse. Methods
developed in later chapters provide additional insight into the existence and determination
of inverse systems.


Step Response

The response of a LTI system to a step characterizes how the system responds to sudden
changes in the input. The step response is easily expressed in terms of the impulse response
using convolution by assuming that the input is a step function. Let a discrete-time system
have impulse response h[n] and denote the step response as s[n]. We have

s[n] = h[n] * u[n]

     = Σ_{k=-∞}^{∞} h[k]u[n - k]

Now, since u[n - k] = 0 for k > n and u[n - k] = 1 for k ≤ n, we have

s[n] = Σ_{k=-∞}^{n} h[k]

That is, the step response is the running sum of the impulse response. Similarly, the step
response, s(t), for a continuous-time system is expressed as the running integral of the
impulse response

s(t) = ∫_{-∞}^{t} h(τ) dτ     (2.24)

Note that we may invert these relationships to express the impulse response in terms of
the step response as

h[n] = s[n] - s[n - 1]

h(t) = (d/dt) s(t)

EXAMPLE 2.13 Find the step response of the RC circuit depicted in Fig. 2.10 having impulse
response

h(t) = (1/RC) e^{-t/RC} u(t)

Solution: Apply Eq. (2.24) to obtain

s(t) = ∫_{-∞}^{t} (1/RC) e^{-τ/RC} u(τ) dτ

Now simplify the integral as

s(t) = 0,                              t ≤ 0
       ∫_{0}^{t} (1/RC) e^{-τ/RC} dτ,  t > 0

     = 0,               t ≤ 0
       1 - e^{-t/RC},   t > 0

• Drill Problem 2.7 Find the step response of a discrete-time system with impulse
response

h[n] = (-a)^n u[n]

assuming |a| < 1.

Answer:

s[n] = [(1 - (-a)^{n+1}) / (1 + a)] u[n]  •
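The running-sum relation s[n] = Σ_{k=-∞}^{n} h[k] gives a direct numerical check of this answer (a NumPy sketch; a = 0.5 is an arbitrary choice satisfying |a| < 1):

```python
import numpy as np

a = 0.5
n = np.arange(0, 20)
h = (-a) ** n                          # impulse response for n >= 0; zero for n < 0

s = np.cumsum(h)                       # step response: running sum of h[n]
closed = (1 - (-a) ** (n + 1)) / (1 + a)   # closed form from Drill Problem 2.7

print(np.allclose(s, closed))          # True
```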

Sinusoidal Steady-State Response

Sinusoidal input signals are often used to characterize the response of a system. Here we
examine the relationship between the impulse response and the steady-state response of a
LTI system to a complex sinusoidal input. This relationship is easily established using
convolution and a complex sinusoid input signal. Consider the output of a discrete-time
system with impulse response h[n] and unit-amplitude complex sinusoidal input
x[n] = e^{jΩn}, given by

y[n] = Σ_{k=-∞}^{∞} h[k]x[n - k]

     = Σ_{k=-∞}^{∞} h[k]e^{jΩ(n-k)}

Factor e^{jΩn} from the sum to obtain

y[n] = e^{jΩn} Σ_{k=-∞}^{∞} h[k]e^{-jΩk}

     = H(e^{jΩ}) e^{jΩn}

where we have defined

H(e^{jΩ}) = Σ_{k=-∞}^{∞} h[k]e^{-jΩk}     (2.25)

Hence the output of the system is a complex sinusoid of the same frequency as the input
multiplied by the complex number H(e^{jΩ}). This relationship is depicted in Fig. 2.17. The
quantity H(e^{jΩ}) is not a function of time, n, but is only a function of frequency, Ω, and is
termed the frequency response of the discrete-time system.
termed the frequency response of the discrete-time system.
Similar results are obtained for continuous-time systems. Let the impulse response
of a system be h(t) and the input be x(t) = e^{jωt}. The convolution integral gives the output

y(t) = ∫_{-∞}^{∞} h(τ)e^{jω(t-τ)} dτ

     = e^{jωt} ∫_{-∞}^{∞} h(τ)e^{-jωτ} dτ     (2.26)

     = H(jω)e^{jωt}

where we define

H(jω) = ∫_{-∞}^{∞} h(τ)e^{-jωτ} dτ     (2.27)

The output of the system is a complex sinusoid of the same frequency as the input multiplied
by the complex constant H(jω). H(jω) is a function of only frequency, ω, and not
time, t. It is termed the frequency response of the continuous-time system.
An intuitive interpretation of the sinusoidal steady-state response is obtained by writing
the complex number H(jω) in polar form. Recall that if c = a + jb is a complex
number, then we may write c in polar form as c = |c|e^{j arg{c}}, where |c| = √(a² + b²) and
arg{c} = arctan(b/a). Hence we have H(jω) = |H(jω)|e^{j arg{H(jω)}}. Here |H(jω)| is termed

FIGURE 2.17 A complex sinusoidal input to a LTI system results in a complex sinusoidal output
of the same frequency multiplied by the frequency response of the system.

the magnitude response and arg{H(jω)} is termed the phase response of the system. Substituting
this polar form in Eq. (2.26), the output y(t) is expressed as

y(t) = |H(jω)| e^{j(ωt + arg{H(jω)})}

The system modifies the amplitude of the input by |H(jω)| and the phase by arg{H(jω)}.
The sinusoidal steady-state response has a similar interpretation for real-valued sinusoids. Write

x(t) = A cos(ωt + φ)

     = (A/2) e^{j(ωt+φ)} + (A/2) e^{-j(ωt+φ)}

and use linearity to obtain the output as

y(t) = |H(jω)| (A/2) e^{j(ωt+φ+arg{H(jω)})} + |H(-jω)| (A/2) e^{-j(ωt+φ-arg{H(-jω)})}

Assuming that h(t) is real valued, Eq. (2.27) implies that H(jω) possesses conjugate symmetry,
that is, H*(jω) = H(-jω). This implies that |H(jω)| is an even function of ω while
arg{H(jω)} is odd. Exploiting these symmetry conditions and simplifying yields

y(t) = |H(jω)| A cos(ωt + φ + arg{H(jω)})

As with a complex sinusoidal input, the system modifies the input sinusoid's amplitude by
|H(jω)| and the phase by arg{H(jω)}. This modification is illustrated in Fig. 2.18.
Similar results are obtained for discrete-time systems using the polar form for H(e^{jΩ}).
Specifically, if x[n] = e^{jΩn} is the input, then

y[n] = |H(e^{jΩ})| e^{j(Ωn + arg{H(e^{jΩ})})}

Furthermore, if x[n] = A cos(Ωn + φ) is the input to a discrete-time system with real-valued
impulse response, then

y[n] = |H(e^{jΩ})| A cos(Ωn + φ + arg{H(e^{jΩ})})

Once again, the system changes the amplitude of the sinusoidal input by |H(e^{jΩ})| and its
phase by arg{H(e^{jΩ})}.
The frequency response characterizes the steady-state response of the system to sinusoidal
inputs as a function of the sinusoid's frequency. We say this is a steady-state
response because the input sinusoid is assumed to exist for all time and thus the system is
in an equilibrium or steady-state condition. The frequency response provides a great deal
of information about the system and is useful for both understanding and analyzing systems,
topics that are explored in depth in later chapters. It is easily measured with a
FIGURE 2.18 A sinusoidal input to a LTI system results in a sinusoidal output of the same frequency
with the amplitude and phase modified by the system's frequency response.

sinusoidal oscillator and oscilloscope by using the oscilloscope to measure the amplitude
and phase change between the input and output sinusoids for different oscillator frequencies.
It is standard practice to represent the frequency response graphically by separately
displaying the magnitude and phase response as functions of frequency, as illustrated in
the following examples.
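The same measurement can be simulated numerically: filter a sinusoid, then compare the steady-state output against |H(e^{jΩ})| A cos(Ωn + φ + arg{H(e^{jΩ})}). The sketch below (NumPy) uses a simple two-point averager as the system; the frequency, phase, and amplitude values are arbitrary illustrative choices:

```python
import numpy as np

h = np.array([0.5, 0.5])                     # two-point averaging system
Omega, phi, A = np.pi / 4, 0.3, 2.0

n = np.arange(200)
x = A * np.cos(Omega * n + phi)
y = np.convolve(h, x)[: len(n)]              # keep the first len(n) output samples

# Frequency response H(e^{jOmega}) = sum_k h[k] e^{-j Omega k}   (Eq. 2.25)
H = np.sum(h * np.exp(-1j * Omega * np.arange(len(h))))
y_pred = np.abs(H) * A * np.cos(Omega * n + phi + np.angle(H))

# Skip n = 0, where the finite-length convolution has not yet "filled up".
print(np.allclose(y[1:], y_pred[1:]))        # True
```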


EXAMPLE 2.14 The impulse responses of two discrete-time systems are given by

h1[n] = ½(δ[n] + δ[n - 1])

h2[n] = ½(δ[n] - δ[n - 1])

Find the frequency response of each system and plot the magnitude responses.

Solution: Substitute h1[n] into Eq. (2.25) to obtain

H1(e^{jΩ}) = ½(1 + e^{-jΩ})

which may be rewritten as

H1(e^{jΩ}) = e^{-jΩ/2} (e^{jΩ/2} + e^{-jΩ/2})/2

           = e^{-jΩ/2} cos(Ω/2)

Hence the magnitude response is expressed as

|H1(e^{jΩ})| = |cos(Ω/2)|

and the phase response is expressed as

arg{H1(e^{jΩ})} = -Ω/2,        for cos(Ω/2) > 0
                  -Ω/2 - π,    for cos(Ω/2) < 0

Similarly, the frequency response of the second system is given by

H2(e^{jΩ}) = ½(1 - e^{-jΩ})

           = je^{-jΩ/2} (e^{jΩ/2} - e^{-jΩ/2})/(2j)

           = je^{-jΩ/2} sin(Ω/2)

In this case the magnitude response is expressed as

|H2(e^{jΩ})| = |sin(Ω/2)|

and the phase response is expressed as

arg{H2(e^{jΩ})} = -Ω/2 + π/2,   for sin(Ω/2) > 0
                  -Ω/2 - π/2,   for sin(Ω/2) < 0

Figures 2.19(a) and (b) depict the magnitude response of each system on the interval
-π < Ω < π. This interval is chosen because it corresponds to the range of frequencies for
which the complex sinusoid e^{jΩn} is a unique function of frequency. The convolution sum
indicates that h1[n] averages successive inputs, while h2[n] takes the difference of successive
inputs. Thus we expect h1[n] to pass low-frequency signals while attenuating high frequencies.
This characteristic is reflected by the magnitude response. In contrast, the differencing operation
implemented by h2[n] has the effect of attenuating low frequencies and passing high
frequencies, as indicated by its magnitude response.

FIGURE 2.19 The magnitude responses of two simple discrete-time systems. (a) A system that
averages successive inputs tends to attenuate high frequencies. (b) A system that forms the difference
of successive inputs tends to attenuate low frequencies.
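These closed-form magnitude responses can be verified by evaluating Eq. (2.25) numerically on a grid of frequencies (a NumPy sketch; the grid density is an arbitrary choice):

```python
import numpy as np

h1 = np.array([0.5, 0.5])    # averaging: h1[n] = (delta[n] + delta[n-1])/2
h2 = np.array([0.5, -0.5])   # differencing: h2[n] = (delta[n] - delta[n-1])/2

Omega = np.linspace(-np.pi, np.pi, 101)
k = np.arange(2)
E = np.exp(-1j * np.outer(Omega, k))     # e^{-j Omega k} for k = 0, 1

H1 = E @ h1                              # H1(e^{jOmega}) on the frequency grid
H2 = E @ h2

print(np.allclose(np.abs(H1), np.abs(np.cos(Omega / 2))))   # True
print(np.allclose(np.abs(H2), np.abs(np.sin(Omega / 2))))   # True
```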

EXAMPLE 2.15 The impulse response of the system relating the input voltage to the voltage
across the capacitor in Fig. 2.10 is given by

h(t) = (1/RC) e^{-t/RC} u(t)

Find an expression for the frequency response and plot the magnitude and phase response.

Solution: Substituting h(t) into Eq. (2.27) gives

H(jω) = ∫_{-∞}^{∞} (1/RC) e^{-τ/RC} u(τ) e^{-jωτ} dτ

      = (1/RC) ∫_{0}^{∞} e^{-(jω+1/RC)τ} dτ

      = (1/RC) [-1/(jω + 1/RC)] e^{-(jω+1/RC)τ} |_{0}^{∞}

      = (1/RC) [-1/(jω + 1/RC)] (0 - 1)

      = (1/RC) / (jω + 1/RC)

The magnitude response is

|H(jω)| = (1/RC) / √(ω² + (1/RC)²)

while the phase response is

arg{H(jω)} = -arctan(ωRC)

The magnitude response and phase response are presented in Figs. 2.20(a) and (b), respectively.
The magnitude response indicates that the RC circuit tends to attenuate high-frequency sinusoids.
This agrees with our intuition from circuit analysis. The circuit cannot respond to
rapid changes in the input voltage. High-frequency sinusoids also experience a phase shift
approaching π/2 radians. Low-frequency sinusoids are passed by the circuit with much higher
gain and experience relatively little phase shift.

FIGURE 2.20 Frequency response of the RC circuit in Fig. 2.10. (a) Magnitude response.
(b) Phase response.
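The expressions for |H(jω)| and arg{H(jω)} can be spot-checked numerically (a NumPy sketch; the time constant RC = 1 ms is an arbitrary illustrative value):

```python
import numpy as np

RC = 1e-3                               # assumed time constant of 1 ms
w = np.array([0.0, 1.0 / RC, 10.0 / RC])

H = (1.0 / RC) / (1j * w + 1.0 / RC)    # H(jw) from Example 2.15

print(np.abs(H[0]))                                   # 1.0 -- unit gain at dc
print(np.isclose(np.abs(H[1]), 1 / np.sqrt(2)))       # True: -3 dB at w = 1/RC
print(np.allclose(np.angle(H), -np.arctan(w * RC)))   # True: phase matches -arctan(wRC)
```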

• Drill Problem 2.8 Find an expression for the frequency response of the discrete-time
system with impulse response

h[n] = (-a)^n u[n]

assuming |a| < 1.

Answer:

H(e^{jΩ}) = 1 / (1 + ae^{-jΩ})  •

2.4 Differential and Difference Equation
Representations for LTI Systems

Linear constant-coefficient difference and differential equations provide another representation
for the input-output characteristics of LTI systems. Difference equations are used
to represent discrete-time systems, while differential equations represent continuous-time
systems. The general form of a linear constant-coefficient differential equation is

Σ_{k=0}^{N} a_k (d^k/dt^k) y(t) = Σ_{k=0}^{M} b_k (d^k/dt^k) x(t)     (2.28)

Here x(t) is the input to the system and y(t) is the output. A linear constant-coefficient
difference equation has a similar form, with the derivatives replaced by delayed values of
the input x[n] and output y[n], as shown by

Σ_{k=0}^{N} a_k y[n - k] = Σ_{k=0}^{M} b_k x[n - k]     (2.29)

The integer N is termed the order of the differential or difference equation and corresponds
to the highest derivative or maximum memory involving the system output, respectively.
The order represents the number of energy storage devices in the system.
As an example of a differential equation that describes the behavior of a physical
system, consider the RLC circuit depicted in Fig. 2.21(a). Assume the input is the voltage

FIGURE 2.21 Examples of systems described by differential equations. (a) RLC circuit.
(b) Spring-mass-damper system.

source x(t) and the output is the current around the loop, y(t). Summing the voltage drops
around the loop gives

Ry(t) + L (dy(t)/dt) + (1/C) ∫_{-∞}^{t} y(τ) dτ = x(t)

Differentiating both sides of this equation with respect to t gives

(1/C) y(t) + R (dy(t)/dt) + L (d²y(t)/dt²) = dx(t)/dt

This differential equation describes the relationship between the current y(t) and voltage
x(t) in the circuit. In this example, the order is N = 2 and we note that the circuit contains
two energy storage devices, a capacitor and an inductor.
two energy storage devices, a capacitor and an inductor.
Mechanical systems may also be described in terms of differential equations using
Newton's laws. In the system depicted in Fig. 2.21(b), the applied force, x(t), is the input
and the position of the mass, y(t), is the output. The force associated with the spring is
directly proportional to position, the force due to friction is directly proportional to velocity,
and the force due to mass is proportional to acceleration. Equating the forces on
the mass gives

m (d²y(t)/dt²) + f (dy(t)/dt) + ky(t) = x(t)

This differential equation relates position to the applied force. The system contains two
energy storage mechanisms, a spring and a mass, and the order is N = 2.
An example of a second-order difference equation is

y[n] + y[n - 1] + ¼y[n - 2] = x[n] + 2x[n - 1]     (2.30)

This difference equation might represent the relationship between the input and output
signals for a system that processes data in a computer. In this example the order is N = 2
because the difference equation involves y[n - 2], implying a maximum memory in the
system output of 2. Memory in a discrete-time system is analogous to energy storage in a
continuous-time system.
Difference equations are easily rearranged to obtain recursive formulas for computing
the current output of the system from the input signal and past outputs. Rewrite Eq.
(2.29) so that y[n] is alone on the left-hand side, as shown by

y[n] = (1/a₀) Σ_{k=0}^{M} b_k x[n - k] - (1/a₀) Σ_{k=1}^{N} a_k y[n - k]

This equation indicates how to obtain y[n] from the input and past values of the output.
Such equations are often used to implement discrete-time systems in a computer. Consider
computing y[n] for n ≥ 0 from x[n] for the example second-order difference equation
given in Eq. (2.30). We have

y[n] = x[n] + 2x[n - 1] - y[n - 1] - ¼y[n - 2]

Beginning with n = 0, we may determine the output by evaluating the sequence of equations

y[0] = x[0] + 2x[-1] - y[-1] - ¼y[-2]
y[1] = x[1] + 2x[0] - y[0] - ¼y[-1]
y[2] = x[2] + 2x[1] - y[1] - ¼y[0]
y[3] = x[3] + 2x[2] - y[2] - ¼y[1]

In each equation the current output is computed from the input and past values of the
output. In order to begin this process at time n = 0, we must know the two most recent
past values of the output, namely, y[-1] and y[-2]. These values are known as initial
conditions. This technique for finding the output of a system is very useful for computation
but does not provide much insight into the relationship between the difference equation
description and system characteristics.
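The recursive evaluation just described translates directly into code. A sketch in Python for Eq. (2.30) (the step input and the initial conditions y[-1] = 1, y[-2] = 2 are illustrative choices; the input is assumed zero before n = 0):

```python
def solve_difference(x, y_init):
    """Evaluate y[n] = x[n] + 2x[n-1] - y[n-1] - 0.25*y[n-2] recursively.

    y_init = (y[-1], y[-2]) are the initial conditions."""
    ym1, ym2 = y_init
    xm1 = 0.0                      # input assumed zero before n = 0
    y = []
    for xn in x:
        yn = xn + 2 * xm1 - ym1 - 0.25 * ym2
        y.append(yn)
        ym2, ym1, xm1 = ym1, yn, xn
    return y

# First few outputs for a unit step input with y[-1] = 1, y[-2] = 2:
print(solve_difference([1.0, 1.0, 1.0], (1.0, 2.0)))   # [-0.5, 3.25, -0.125]
```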
The initial conditions summarize all the information about the system's past that is
needed to determine future outputs. No additional information about the past output or
input is necessary. Note that in general the number of initial conditions required to determine
the output is equal to the order of the system. Initial conditions are also required to
solve differential equations. In this case, the initial conditions are the values of the first N
derivatives of the output

y(t), dy(t)/dt, d²y(t)/dt², ..., d^{N-1}y(t)/dt^{N-1}

evaluated at the time t₀ after which we desire to determine y(t). The initial conditions in
a differential-equation description for a LTI system are directly related to the initial values
of the energy storage devices in the system, such as initial voltages on capacitors and initial
currents through inductors. As in the discrete-time case, the initial conditions summarize
all information about the past of the system that can impact future outputs. Hence initial
conditions also represent the "memory" of continuous-time systems.

EXAMPLE 2.16 A system is described by the difference equation

y[n] - 1.143y[n - 1] + 0.4128y[n - 2] = 0.0675x[n] + 0.1349x[n - 1] + 0.0675x[n - 2]

Write a recursive formula to compute the present output from the past outputs and current
inputs. Determine the step response of the system, the system output when the input is zero
and the initial conditions are y[-1] = 1, y[-2] = 2, and the output in response to the sinusoidal
inputs x1[n] = cos(πn/10), x2[n] = cos(πn/5), and x3[n] = cos(7πn/10), assuming zero initial
conditions. Lastly, find the output of the system if the input is the average January temperature
data depicted in Fig. 2.22(f).

Solution: We rewrite the difference equation as shown by

y[n] = 1.143y[n - 1] - 0.4128y[n - 2] + 0.0675x[n] + 0.1349x[n - 1] + 0.0675x[n - 2]

This equation is evaluated in a recursive manner to determine the system output from the
input and initial conditions y[-1] and y[-2].
The step response of the system is evaluated by assuming the input is a step, x[n] =
u[n], and that the system is initially at rest, so the initial conditions are zero. Figure 2.22(a)

FIGURE 2.22 Illustration of the solution to Example 2.16. (a) Step response of system. (b) Output
due to nonzero initial conditions with zero input. (c) Output due to x1[n] = cos(πn/10).
(d) Output due to x2[n] = cos(πn/5). (e) Output due to x3[n] = cos(7πn/10).

(f) Input signal consisting of average January temperature data. (g) Output associated
with average January temperature data.

depicts the first 50 values of the step response. This system responds to a step by initially
rising to a value slightly greater than the input amplitude and then decreasing to the value of
the input at about n = 13. For n sufficiently large, we may consider the step to be a dc or
constant input. Since the output amplitude is equal to the input amplitude, we see that this
system has unit gain to constant inputs.
The response of the system to the initial conditions y[-1] = 1, y[-2] = 2 and zero
input is shown in Fig. 2.22(b). Although the recursive nature of the difference equation suggests
that the initial conditions affect all future values of the output, we see that the significant
portion of the output due to the initial conditions lasts until about n = 13.
The outputs due to the sinusoidal inputs x1[n], x2[n], and x3[n] are depicted in Figs.
2.22(c), (d), and (e), respectively. Once we are distant from the initial conditions and enter a
steady-state condition, we see that the system output is a sinusoid of the same frequency as
the input. Recall that the ratio of the steady-state output to input sinusoid amplitude is the
magnitude response of the system. The magnitude response at frequency π/10 is unity, is about
0.7 at frequency π/5, and is near zero at frequency 7π/10. These results suggest that the magnitude
response of this system decreases as frequency increases; that is, the system attenuates the
components of the input that vary rapidly, while passing with unit gain those that vary slowly.
This characteristic is evident in the output of the system in response to the average January
temperature input shown in Fig. 2.22(g). We see that the output initially increases gradually
in the same manner as the step response. This is a consequence of assuming the input is zero
prior to 1900. After about 1906, the system has a smoothing effect since it attenuates rapid
fluctuations in the input and passes constant terms with unit gain.
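The claim of unit gain to constant inputs can be confirmed by running the recursion of Example 2.16 (a NumPy sketch; the simulation length of 60 samples is an arbitrary choice):

```python
import numpy as np

b = [0.0675, 0.1349, 0.0675]           # input coefficients from Example 2.16
a = [1.0, -1.143, 0.4128]              # output coefficients (recursive form)

N = 60
x = np.ones(N)                         # unit step input, system initially at rest
y = np.zeros(N)
for n in range(N):
    y[n] = (b[0] * x[n]
            + (b[1] * x[n - 1] if n >= 1 else 0.0)
            + (b[2] * x[n - 2] if n >= 2 else 0.0)
            - (a[1] * y[n - 1] if n >= 1 else 0.0)
            - (a[2] * y[n - 2] if n >= 2 else 0.0))

# The dc gain is sum(b)/sum(a) = 0.2699/0.2698, so the step response settles near 1.
print(abs(y[-1] - sum(b) / sum(a)) < 1e-6)   # True
```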

FIGURE 2.23 RC circuit.

• Drill Problem 2.9 Write a differential equation describing the relationship between
the input voltage x(t) and voltage y(t) across the capacitor in Fig. 2.23.

Answer:

RC (dy(t)/dt) + y(t) = x(t)  •


Solving Differential and Difference Equations

We now briefly review a method for solving differential and difference equations. This
offers a general characterization of solutions that provides insight into system behavior.
It is convenient to express the output of a system described by a differential or difference
equation as a sum of two components: one associated only with initial conditions,
and a second due only to the input. We shall term the component of the output associated
with the initial conditions the natural response of the system and denote it as y^{(n)}. The
component of the output due only to the input is termed the forced response of the system
and denoted as y^{(f)}. The natural response is the system output for zero input, while the
forced response is the system output assuming zero initial conditions. A system with zero
initial conditions is said to be at rest, since there is no stored energy or memory in the
system. The natural response describes the manner in which the system dissipates any
energy or memory of the past represented by nonzero initial conditions. The forced response
describes the system behavior that is "forced" by the input when the system is at rest.

The Natural Response

The natural response is the system output when the input is zero. Hence for a con-
tinuous-time system the natural response, y^(n)(t), is the solution to the homogeneous equation

∑_{k=0}^{N} aₖ (dᵏ/dtᵏ) y^(n)(t) = 0
The natural response for a continuous-time system is of the form

y^(n)(t) = ∑_{i=1}^{N} cᵢ e^{rᵢt}    (2.31)

where the rᵢ are the N roots of the system's characteristic equation

∑_{k=0}^{N} aₖ rᵏ = 0    (2.32)
Substitution of Eq. (2.31) into the homogeneous equation establishes that y^(n)(t) is a so-
lution for any set of constants cᵢ.

In discrete time the natural response, y^(n)[n], is the solution to the homogeneous equation

∑_{k=0}^{N} aₖ y^(n)[n − k] = 0

It is of the form

y^(n)[n] = ∑_{i=1}^{N} cᵢ rᵢⁿ    (2.33)

where the rᵢ are the N roots of the discrete-time system's characteristic equation

∑_{k=0}^{N} aₖ r^{N−k} = 0    (2.34)

Again, substitution of Eq. (2.33) into the homogeneous equation establishes that y^(n)[n] is
a solution. In both cases, the cᵢ are determined so that the solution y^(n) satisfies the initial
conditions. Note that the continuous-time and discrete-time characteristic equations differ.
The form of the natural response changes slightly when the characteristic equation
described by Eq. (2.32) or Eq. (2.34) has repeated roots. If a root rᵢ is repeated p times,
then we include p distinct terms in the solutions Eqs. (2.31) and (2.33) associated with rᵢ.
In continuous time they involve the p functions

e^{rᵢt}, t e^{rᵢt}, ..., t^{p−1} e^{rᵢt}

and in discrete time the p functions

rᵢⁿ, n rᵢⁿ, ..., n^{p−1} rᵢⁿ

The nature of each term in the natural response depends on whether the roots rᵢ are
real, imaginary, or complex. Real roots lead to real exponentials, imaginary roots to si-
nusoids, and complex roots to exponentially damped sinusoids.
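This classification is easy to check numerically. The following sketch (ours, not from the text; it assumes NumPy is available) finds the roots of a continuous-time characteristic equation and labels the corresponding natural-response terms. Coefficients are listed highest power first, as `np.roots` expects.

```python
import numpy as np

def natural_response_terms(coeffs):
    """Classify the natural-response terms of a continuous-time system from
    its characteristic polynomial (coefficients highest power first).
    Illustrative helper; the function name is ours."""
    roots = np.roots(coeffs)
    labels = []
    for r in roots:
        if abs(np.imag(r)) < 1e-12:
            labels.append("real exponential")                # real root
        elif abs(np.real(r)) < 1e-12:
            labels.append("sinusoid")                        # imaginary root
        else:
            labels.append("exponentially damped sinusoid")   # complex root
    return roots, labels
```

For the RL circuit of Example 2.17, the characteristic equation Lr + R = 0 gives the single real root −R/L, and the helper reports a real exponential.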


EXAMPLE 2.17 Consider the RL circuit depicted in Fig. 2.24 as a system whose input is the
applied voltage x(t) and output is the current y(t). Find a differential equation that describes
this system and determine the natural response of the system for t > 0, assuming the current
through the inductor at t = 0 is y(0) = 2 A.
Solution: Summing the voltages around the loop gives the differential equation

R y(t) + L dy(t)/dt = x(t)

The natural response is the solution of the homogeneous equation

R y(t) + L dy(t)/dt = 0

The solution is given by Eq. (2.31) for N = 1,

y^(n)(t) = c₁ e^{r₁t} A

where r1 is the root of the equation

R + L r₁ = 0

FIGURE 2.24 RL circuit: input voltage x(t), output current y(t).

Hence r₁ = −R/L. The coefficient c₁ is determined so that the response satisfies the initial
condition y(0) = 2. This implies c₁ = 2, and the natural response of this system is

y^(n)(t) = 2 e^{−(R/L)t} A,  t ≥ 0

• Drill Problem 2.10 Determine the form of the natural response for the system
described by the difference equation
y[n] + ¼ y[n − 2] = x[n] + 2x[n − 2]

• Drill Problem 2.11 Determine the form of the natural response for the RLC circuit
depicted in Fig. 2.21(a) as a function of R, L, and C. Indicate the conditions on R, L, and
C so that the natural response consists of real exponentials, complex sinusoids, and ex-
ponentially damped sinusoids.
Answer: For R² ≠ 4L/C, the roots are

r₁,₂ = (−R ± √(R² − 4L/C)) / (2L)

For R² = 4L/C, the roots are repeated: r₁ = r₂ = −R/(2L).

For real exponentials R² > 4L/C, for complex sinusoids R = 0, and for exponentially
damped sinusoids R² < 4L/C. •
The Forced Response
The forced response is the solution to the differential or difference equation for the
given input assuming the initial conditions are zero. It consists of the sum of two com-
ponents: a term of the same form as the natural response, and a particular solution.
The particular solution is denoted as y^(p) and represents any solution to the differ-
ential or difference equation for the given input. It is usually obtained by assuming the
system output has the same general form as the input. For example, if the input to a
discrete-time system is x[n] = αⁿ, then we assume the output is of the form y^(p)[n] = cαⁿ
and find the constant c so that y^(p)[n] is a solution to the system's difference equation. If
the input is x[n] = A cos(Ωn + φ), then we assume a general sinusoidal response of the

TABLE 2.1 Form of a Particular Solution Corresponding to Several Common Inputs

        Continuous Time                             Discrete Time
  Input          Particular Solution          Input          Particular Solution
  1              c                            1              c
  e^{−at}        c e^{−at}                    αⁿ             c αⁿ
  cos(ωt + φ)    c₁ cos(ωt) + c₂ sin(ωt)      cos(Ωn + φ)    c₁ cos(Ωn) + c₂ sin(Ωn)

form y^(p)[n] = c₁ cos(Ωn) + c₂ sin(Ωn), where c₁ and c₂ are determined so that y^(p)[n]
satisfies the system's difference equation. Assuming an output of the same form as the
input is consistent with our expectation that the output of the system be directly related
to the input.
The form of the particular solution associated with common input signals is given
in Table 2.1. More extensive tables are given in books devoted to solving difference and
differential equations, such as those listed at the end of this chapter. The procedure for
identifying a particular solution is illustrated in the following example.

EXAMPLE 2.18 Consider the RL circuit of Example 2.17 and depicted in Fig. 2.24. Find a
particular solution for this system with an input x(t) = cos(ω₀t) V.
Solution: The differential equation describing this system was obtained in Example 2.17 as

R y(t) + L dy(t)/dt = x(t)

We assume a particular solution of the form y^(p)(t) = c₁ cos(ω₀t) + c₂ sin(ω₀t). Replacing y(t)
in the differential equation by y^(p)(t) and x(t) by cos(ω₀t) gives

R c₁ cos(ω₀t) + R c₂ sin(ω₀t) − Lω₀c₁ sin(ω₀t) + Lω₀c₂ cos(ω₀t) = cos(ω₀t)

The coefficients c₁ and c₂ are obtained by separately equating the coefficients of cos(ω₀t) and
sin(ω₀t). This gives a system of two equations in two unknowns, as shown by

R c₁ + Lω₀ c₂ = 1
−Lω₀ c₁ + R c₂ = 0
. .
Solving these for c₁ and c₂ gives

c₁ = R / (R² + L²ω₀²),  c₂ = Lω₀ / (R² + L²ω₀²)

Hence the particular solution is

y^(p)(t) = [R / (R² + L²ω₀²)] cos(ω₀t) + [Lω₀ / (R² + L²ω₀²)] sin(ω₀t) A
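The two-by-two system above can also be solved numerically as a check. A short sketch (ours, not from the text; NumPy assumed, and the values of R, L, and ω₀ used in testing are arbitrary):

```python
import numpy as np

def particular_solution_coeffs(R, L, w0):
    """Solve  R*c1 + L*w0*c2 = 1  and  -L*w0*c1 + R*c2 = 0
    for the particular-solution coefficients of Example 2.18."""
    M = np.array([[R, L * w0], [-L * w0, R]])
    c1, c2 = np.linalg.solve(M, [1.0, 0.0])
    return c1, c2
```

The numerical result matches the closed form c₁ = R/(R² + L²ω₀²) and c₂ = Lω₀/(R² + L²ω₀²) derived in the example.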

This approach for finding a particular solution is modified when the input is of the
same form as one of the components of the natural response. In this case we must assume
a particular solution that is independent of all terms in the natural response in order to
obtain the forced response of the system. This is accomplished analogously to the proce-
dure for generating independent natural response components when there are repeated
roots in the characteristic equation. Specifically, we multiply the form of the particular
solution by the lowest power of t or n that will give a response component not included
in the natural response. For example, if the natural response contains the terms e^{−at} and
t e^{−at} due to a second-order root at −a, and the input is x(t) = e^{−at}, then we assume a
particular solution of the form y^(p)(t) = c t² e^{−at}.
The forced response of the system is obtained by summing the particular solution
with the form of the natural response and finding the unspecified coefficients in the natural
response so that the combined response satisfies zero initial conditions. Assuming the input
is applied at time t = 0 or n = 0, this procedure is as follows:
1. Find the form of the natural response y^(n) from the roots of the characteristic equation.
2. Find a particular solution y^(p) by assuming it is of the same form as the input yet
independent of all terms in the natural response.
3. Determine the coefficients in the natural response so that the forced response y^(f) =
y^(p) + y^(n) has zero initial conditions at t = 0 or n = 0. The forced response is valid
for t ≥ 0 or n ≥ 0.
In the discrete-time case, the zero initial conditions y^(f)[−N], ..., y^(f)[−1] must be trans-
lated to times n ≥ 0, since the forced response is valid only for times n ≥ 0. This is
accomplished by using the recursive form of the difference equation, the input, and the at-
rest conditions y^(f)[−N] = 0, ..., y^(f)[−1] = 0 to obtain translated initial conditions
y^(f)[0], y^(f)[1], ..., y^(f)[N − 1]. These are then used to determine the unknown coefficients
in the natural response component of y^(f)[n].
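The translation of initial conditions is purely mechanical, so it can be sketched in code. The function below and its signature are ours, not the text's; it runs the recursion from the at-rest conditions and returns the translated values:

```python
def translated_initial_conditions(a, b, x, N):
    """Run  y[n] = (-a[1]*y[n-1] - ... - a[N]*y[n-N]
                    + b[0]*x[n] + ... + b[M]*x[n-M]) / a[0]
    for n = 0, ..., N-1 starting from the at-rest conditions
    y[-N] = ... = y[-1] = 0, and return y[0], ..., y[N-1]."""
    y = {n: 0.0 for n in range(-N, 0)}   # at-rest conditions
    for n in range(N):
        acc = sum(bk * (x[n - k] if 0 <= n - k < len(x) else 0.0)
                  for k, bk in enumerate(b))
        acc -= sum(ak * y[n - k] for k, ak in enumerate(a) if k > 0)
        y[n] = acc / a[0]
    return [y[n] for n in range(N)]
```

For the system of Drill Problem 2.12, a = [1, 0, −¼], b = [2, 1], and x[n] = u[n] give the translated conditions y^(f)[0] = 2 and y^(f)[1] = 3.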


EXAMPLE 2.19 Find the forced response of the RL circuit depicted in Fig. 2.24 to an input
x(t) = cos(t) V, assuming normalized values R = 1 Ω and L = 1 H.
Solution: The form of the natural response was obtained in Example 2.17 as

y^(n)(t) = c e^{−(R/L)t} A

A particular solution was obtained in Example 2.18 for this input as

y^(p)(t) = [R / (R² + L²)] cos(t) + [L / (R² + L²)] sin(t) A

where we have used ω₀ = 1. Substituting R = 1 Ω and L = 1 H, the forced response for
t ≥ 0 is

y^(f)(t) = c e^{−t} + ½ cos t + ½ sin t A

The coefficient c is now determined from the initial condition y(0) = 0:

0 = c e^{−0} + ½ cos 0 + ½ sin 0

and so we find that c = −½.

• Drill Problem 2.12 A system described by the difference equation

y[n] − ¼ y[n − 2] = 2x[n] + x[n − 1]

has input signal x[n] = u[n]. Find the forced response of the system. Hint: Use y[n] =
¼ y[n − 2] + 2x[n] + x[n − 1] with x[n] = u[n] and y^(f)[−2] = 0, y^(f)[−1] = 0 to determine
y^(f)[0] and y^(f)[1].

Answer:
y^(f)[n] = (−2(½)ⁿ + 4) u[n] •
The Complete Response

The complete response of the system is the sum of the natural response and the forced
response. If there is no need to separately obtain the natural and the forced response, then
the complete response of the system may be obtained directly by repeating the three-step
procedure for determining the forced response using the actual initial conditions instead
of zero initial conditions. This is illustrated in the following example.


EXAMPLE 2.20 Find the current through the RL circuit depicted in Fig. 2.24 for an applied
voltage x(t) = cos(t) V, assuming normalized values R = 1 Ω, L = 1 H and that the initial
condition is y(0) = 2 A.
Solution: The form of the forced response was obtained in Example 2.19 as

y(t) = c e^{−t} + ½ cos t + ½ sin t A

We obtain the complete response of the system by solving for c so that the initial condition
y(0) = 2 is satisfied. This implies

2 = c + ½(1) + ½(0)

or c = 3/2. Hence

y(t) = (3/2) e^{−t} + ½ cos t + ½ sin t A,  t ≥ 0

Note that this corresponds to the sum of the natural and forced responses. In Example 2.17
we obtained

y^(n)(t) = 2 e^{−t} A,  t ≥ 0

while in Example 2.19 we obtained

y^(f)(t) = −½ e^{−t} + ½ cos t + ½ sin t A,  t ≥ 0

The sum, y(t) = y^(n)(t) + y^(f)(t), is given by

y(t) = (3/2) e^{−t} + ½ cos t + ½ sin t A,  t ≥ 0

and is exactly equal to the response we obtained by directly solving for the complete response.
Figure 2.25 depicts the natural, forced, and complete responses of the system.
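The complete response found analytically can be cross-checked by solving the differential equation numerically. A crude forward-Euler sketch (ours, not from the text; the step size is an arbitrary choice):

```python
import math

def simulate_rl(R=1.0, L=1.0, y0=2.0, T=5.0, dt=1e-4):
    """Forward-Euler integration of  R*y(t) + L*dy(t)/dt = cos(t)
    from y(0) = y0, returning the approximate value y(T)."""
    y, t = y0, 0.0
    for _ in range(int(T / dt)):
        y += dt * (math.cos(t) - R * y) / L
        t += dt
    return y

def complete_response(t):
    """Closed form from Example 2.20 (R = 1 ohm, L = 1 H, y(0) = 2 A)."""
    return 1.5 * math.exp(-t) + 0.5 * math.cos(t) + 0.5 * math.sin(t)
```

With a small step size the numerical solution agrees with (3/2)e^{−t} + ½ cos t + ½ sin t to within the Euler discretization error.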

• Drill Problem 2.13 Find the response of the RC circuit depicted in Fig. 2.23 to x(t)
= u(t), assuming the initial voltage across the capacitor is y(0) = −1 V.

Answer:
y(t) = (1 − 2e^{−t/RC}) V,  t ≥ 0 •
The Impulse Response
The method described thus far for solving differential and difference equations can-
not be used to find the impulse response directly. However, the impulse response is easily

FIGURE 2.25 Response of RL circuit depicted in Fig. 2.24 to input x(t) = cos(t) V when y(0) =
2 A. (See Example 2.20.) (a) Natural response. (b) Forced response. (c) Complete response.

determined by first finding the step response and then exploiting the relationship between
the impulse and step response. The definition of the step response assumes the system is
at rest, so it represents the forced response of the system to a step input. For a continuous-
time system, the impulse response, h(t), is related to the step response, s(t), as h(t) =
(d/dt) s(t). For a discrete-time system we have h[n] = s[n] − s[n − 1]. Thus the impulse response
is obtained by differentiating or differencing the step response. The differentiation and
differencing operations eliminate the constant term associated with the particular solution
in the step response and change only the constants associated with the exponential terms
in the natural response component. This implies that the impulse response is only a func-
tion of the terms in the natural response.
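The step-then-difference route is easy to demonstrate on a simple recursive system. The first-order system below is our illustration, not one from the text; its impulse response is known in closed form to be h[n] = ρⁿ:

```python
def step_response(rho, N):
    """Step response of y[n] = rho*y[n-1] + x[n], computed recursively
    from rest (an illustrative first-order system)."""
    s, y = [], 0.0
    for _ in range(N):
        y = rho * y + 1.0          # x[n] = u[n] = 1 for n >= 0
        s.append(y)
    return s

def impulse_from_step(s):
    """h[n] = s[n] - s[n-1], taking s[-1] = 0."""
    return [s[n] - (s[n - 1] if n > 0 else 0.0) for n in range(len(s))]
```

Differencing the step response of this system recovers ρⁿ exactly, while the constant (particular-solution) part of the step response is eliminated, as the text describes.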



The forced response of an LTI system described by a differential or difference equation is
linear with respect to the input. If y₁^(f) is the forced response associated with an input x₁
and y₂^(f) is the forced response associated with an input x₂, then the input αx₁ + βx₂
generates a forced response given by αy₁^(f) + βy₂^(f). Similarly, the natural response is linear
with respect to the initial conditions. If y₁^(n) is the natural response associated with initial
conditions I₁ and y₂^(n) is the natural response associated with initial conditions I₂, then the
initial condition αI₁ + βI₂ results in a natural response αy₁^(n) + βy₂^(n). The forced response
is also time invariant. A time shift in the input results in a time shift in the output since
the system is initially at rest. In general, the complete response of a system described by a
differential or difference equation is not time invariant, since the initial conditions will
result in an output term that does not shift with a time shift of the input. Lastly, we observe
that the forced response is also causal. Since the system is initially at rest, the output does
not begin prior to the time at which the input is applied to the system.
The forced response depends on both the input and the roots of the characteristic
equation since it involves both the basic form of the natural response and a particular
solution to the differential or difference equation. The basic form of the natural response
is dependent entirely on the roots of the characteristic equation. The impulse response of
the system also depends on the roots of the characteristic equation since it contains the
identical terms as the natural response. Thus the roots of the characteristic equation pro-
vide considerable information about the system behavior.
For example, the stability characteristics of a system are directly related to the roots
of the system's characteristic equation. To see this, note that the output of a stable system
in response to zero input must be bounded for any set of initial conditions. This follows
from the definition of BIBO stability and implies that the natural response of the system
must be bounded. Thus each term in the natural response must be bounded. In the discrete-
time case we must have |rᵢⁿ| bounded, or |rᵢ| < 1. When |rᵢ| = 1, the natural response does
not decay and the system is said to be on the verge of instability. For continuous-time
systems we require that |e^{rᵢt}| be bounded, which implies Re{rᵢ} < 0. Here again, when
Re{rᵢ} = 0, the system is said to be on the verge of instability. These results imply that a
discrete-time system is unstable if any root of the characteristic equation has magnitude
greater than unity, and a continuous-time system is unstable if the real part of any root of
the characteristic equation is positive.
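These stability tests reduce to one line of code once the roots are in hand. A sketch (ours, not from the text; NumPy assumed, with polynomial coefficients listed highest power first as `np.roots` expects):

```python
import numpy as np

def discrete_time_stable(coeffs):
    """Stable when every characteristic root satisfies |r| < 1."""
    return bool(np.all(np.abs(np.roots(coeffs)) < 1))

def continuous_time_stable(coeffs):
    """Stable when every characteristic root satisfies Re{r} < 0."""
    return bool(np.all(np.roots(coeffs).real < 0))
```

For instance, the characteristic equation r² − ¼ = 0 of Drill Problem 2.12 has roots ±½, both inside the unit circle, so that system is stable.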
This discussion establishes that the roots of the characteristic equation indicate when
a system is unstable. In later chapters we establish that a discrete-time causal system is
stable if and only if all roots of the characteristic equation have magnitude less than unity,
and a continuous-time causal system is stable if and only if the real parts of all roots of
the characteristic equation are negative. These stability conditions imply that the natural
response of a system goes to zero as time approaches infinity since each term in the natural
response is a decaying exponential. This ''decay to zero'' is consistent with our intuitive
concept of a system's zero input behavior. We expect a zero output when the input is zero.
The initial conditions represent any energy present in the system; in a stable system with
zero input this energy eventually dissipates and the output approaches zero.
The response time of a system is also determined by the roots of the characteristic
equation. Once the natural response has decayed to zero, the system behavior is governed
only by the particular solution, which is of the same form as the input. Thus the natural
response component describes the transient behavior of the system: that is, it describes the
transition of the system from its initial condition to an equilibrium condition determined
by the input. Hence the transient response time of a system is determined by the time it
takes the natural response to decay to zero. Recall that the natural response contains terms of
the form rᵢⁿ for a discrete-time system and e^{rᵢt} for a continuous-time system. The transient
response time of a discrete-time system is therefore proportional to the magnitude of the
largest root of the characteristic equation, while that of a continuous-time system is de-
termined by the root whose real component is closest to zero. In order to have a continuous-time
system with a fast response time, all the roots of the characteristic equation must
have large and negative real parts.
The impulse response of the system can be determined directly from the differential-
or difference-equation description of a system, although it is generally much easier to
obtain the impulse response indirectly using methods described in later chapters. Note that
there is no provision for initial conditions when using the impulse response; it applies only
to systems that are initially at rest or when the input is known for all time. Differential
and difference equation system descriptions are more flexible in this respect, since they
apply to systems either at rest or with nonzero initial conditions.

2.5 Block Diagram Representations

In this section we examine block diagram representations for LTI systems described by
differential and difference equations. A block diagram is an interconnection of elementary
operations that act on the input signal. The block diagram is a more detailed representation
of the system than the impulse response or difference- and differential-equation descrip-
tions since it describes how the system's internal computations or operations are ordered.
The impulse response and difference- or differential-equation descriptions represent only
the input-output behavior of a system. We shall show that a system with a given input-
output characteristic can be represented with different block diagrams. Each block diagram
representation describes a different set of internal computations used to determine the
system output.
Block diagram representations consist of an interconnection of three elementary op-
erations on signals:
1. Scalar multiplication: y(t) = cx(t) or y[n] = cx[n], where c is a scalar.
2. Addition: y(t) = x(t) + w(t) or y[n] = x[n] + w[n].
3. Integration for continuous-time systems: y(t) = ∫_{−∞}^{t} x(τ) dτ; or a time shift for
discrete-time systems: y[n] = x[n − 1].
Figure 2.26 depicts the block diagram symbols used to represent each of these operations.
In order to express a continuous-time system in terms of integration, we shall convert the
differential equation into an integral equation. The operation of integration is usually used
in block diagrams for continuous-time systems instead of differentiation because integra-
tors are more easily built from analog components than are differentiators. Also, integra-
tors smooth out noise in the system, while differentiators accentuate noise.
The integral or difference equation corresponding to the system behavior is obtained
by expressing the sequence of operations represented by the block diagram in equation

FIGURE 2.26 Symbols for elementary operations in block diagram descriptions for systems.
(a) Scalar multiplication. (b) Addition. (c) Integration for continuous-time systems and time shift
for discrete-time systems.

FIGURE 2.27 Block diagram representation for a discrete-time system described by a second-
order difference equation.

form. Begin with the discrete-time case. A discrete-time system is depicted in Fig. 2.27.
Consider writing an equation corresponding to the portion of the system within the dashed
box. The output of the first time shift is x[n − 1]. The second time shift has output
x[n − 2]. The scalar multiplications and summations imply

w[n] = b₀x[n] + b₁x[n − 1] + b₂x[n − 2]    (2.35)

Now we may write an expression for y[n] in terms of w[n]. The block diagram indicates

y[n] = w[n] − a₁y[n − 1] − a₂y[n − 2]    (2.36)

The output of this system may be expressed as a function of the input x[n] by substituting
Eq. (2.35) for w[n] in Eq. (2.36). We have

y[n] = −a₁y[n − 1] − a₂y[n − 2] + b₀x[n] + b₁x[n − 1] + b₂x[n − 2]

or, equivalently,

y[n] + a₁y[n − 1] + a₂y[n − 2] = b₀x[n] + b₁x[n − 1] + b₂x[n − 2]    (2.37)

Thus the block diagram in Fig. 2.27 describes a system whose input-output characteristic
is represented by a second-order difference equation.
Note that the block diagram explicitly represents the operations involved in com-
puting the output from the input and tells us how to simulate the system on a computer.
The operations of scalar multiplication and addition are easily evaluated using a computer.
The outputs of the time-shift operations correspond to memory locations in a computer.
In order to compute the current output from the current input, we must have saved the
past values of the input and output in memory. To begin a computer simulation at a
specified time we must know the input and the past two values of the output. The past
values of the output are the initial conditions required to solve the difference equation.
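The simulation just described can be written down directly from the block diagram. A sketch of the direct form I computation in Python (the function name is ours, and for brevity the system is assumed to start at rest):

```python
def direct_form_I(a1, a2, b0, b1, b2, x):
    """Simulate y[n] = -a1*y[n-1] - a2*y[n-2] + b0*x[n] + b1*x[n-1] + b2*x[n-2]
    from rest, exactly as the block diagram of Fig. 2.27 prescribes."""
    x1 = x2 = 0.0   # x[n-1], x[n-2]
    y1 = y2 = 0.0   # y[n-1], y[n-2] -- the initial conditions
    out = []
    for xn in x:
        w = b0 * xn + b1 * x1 + b2 * x2   # Eq. (2.35)
        yn = w - a1 * y1 - a2 * y2        # Eq. (2.36)
        out.append(yn)
        x2, x1 = x1, xn                   # update the four memory locations
        y2, y1 = y1, yn
    return out
```

Note that the four shift variables play exactly the role of the four memory locations in the block diagram: two past inputs and two past outputs.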

• Drill Problem 2.14 Determine the difference equation corresponding to the block
diagram description of the system depicted in Fig. 2.28.

Answer:
y[n] + ½y[n − 1] − ¼y[n − 3] = x[n] + 2x[n − 2]
2. 5 Bloch Diagram Representations 123

FIGURE 2.28 Block diagram representation for Drill Problem 2.14. •

The block diagram description for a system is not unique. We illustrate this by de-
veloping a second block diagram description for the system described by the second-order
difference equation given by Eq. (2.37). We may view the system in Fig. 2.27 as a cascade
of two systems: one with input x[n] and output w[n] described by Eq. (2.35) and a second
with input w[n] and output y[n] described by Eq. (2.36). Since these are LTI systems, we
may interchange their order without changing the input-output behavior of the cascade.
Interchange their order and denote the output of the new first system as f[n]. This output
is obtained from Eq. (2.36) and the input x[n] as shown by

f[n] = −a₁f[n − 1] − a₂f[n − 2] + x[n]    (2.38)

The signal f[n] is also the input to the second system. The output of the second system,
y[n], is obtained from Eq. (2.35) as

y[n] = b₀f[n] + b₁f[n − 1] + b₂f[n − 2]    (2.39)

Both systems involve time-shifted versions of f[n]. Hence only one set of time shifts is
needed in the block diagram for this system. We may represent the system described by
Eqs. (2.38) and (2.39) as the block diagram illustrated in Fig. 2.29.
The block diagrams in Figs. 2.27 and 2.29 represent different implementations for a
system with input-output behavior described by Eq. (2.37). The system in Fig. 2.27 is
termed a ''direct form I'' implementation. The system in Fig. 2.29 is termed a ''direct form
II'' implementation. The direct form II implementation uses memory more efficiently, since
for this example it requires only two memory locations compared to the four required for
the direct form I.

FIGURE 2.29 Alternative block diagram representation for a system described by a second-order
difference equation.
There are many different implementations for a system whose input-output behavior
is described by a difference equation. They are obtained by manipulating either the differ-
ence equation or the elements in a block diagram representation. While these different
systems are equivalent from an input-output perspective, they will generally differ with
respect to other criteria such as memory requirements, the number of computations re-
quired per output value, or numerical accuracy.
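The direct form II structure of Eqs. (2.38) and (2.39) translates just as directly into code, now with only the two memory locations f[n − 1] and f[n − 2]. A sketch (ours; the system is again assumed to start at rest):

```python
def direct_form_II(a1, a2, b0, b1, b2, x):
    """Simulate Eq. (2.37) via the cascade of Eqs. (2.38)-(2.39),
    using two memory locations instead of direct form I's four."""
    f1 = f2 = 0.0                          # f[n-1], f[n-2]
    out = []
    for xn in x:
        fn = xn - a1 * f1 - a2 * f2        # Eq. (2.38)
        yn = b0 * fn + b1 * f1 + b2 * f2   # Eq. (2.39)
        out.append(yn)
        f2, f1 = f1, fn
    return out
```

Starting from rest, this structure produces exactly the same output sequence as the direct form I computation, illustrating that the two block diagrams are input-output equivalent.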
Analogous results hold for continuous-time systems. We may simply replace the time-
shift operations in Figs. 2.27 and 2.29 with time differentiation to obtain block diagram
representations for systems described by differential equations. However, in order to depict
the continuous-time system in terms of the more easily implemented integration operation,
we must first rewrite the differential equation description

∑_{k=0}^{N} aₖ (dᵏ/dtᵏ) y(t) = ∑_{k=0}^{M} bₖ (dᵏ/dtᵏ) x(t)    (2.40)

as an integral equation.
We define the integration operation in a recursive manner to simplify the notation.
Let v^(0)(t) = v(t) be an arbitrary signal and set

v^(n)(t) = ∫_{−∞}^{t} v^(n−1)(τ) dτ,  n = 1, 2, 3, ...

Hence v^(n)(t) is the n-fold integral of v(t) with respect to time. This definition integrates
over all past values of time. We may rewrite this in terms of an initial condition on the
integrator as

v^(n)(t) = ∫_{0}^{t} v^(n−1)(τ) dτ + v^(n)(0),  n = 1, 2, 3, ...

If we assume zero initial conditions, then integration and differentiation are inverse op-
erations; that is,

(d/dt) v^(n)(t) = v^(n−1)(t),  t > 0 and n = 1, 2, 3, ...

Hence if N ≥ M and we integrate Eq. (2.40) N times, we obtain the integral equation
description for the system:

∑_{k=0}^{N} aₖ y^(N−k)(t) = ∑_{k=0}^{M} bₖ x^(N−k)(t)    (2.41)

For a second-order system with a₂ = 1, Eq. (2.41) may be written as

y(t) = −a₁y^(1)(t) − a₀y^(2)(t) + b₂x(t) + b₁x^(1)(t) + b₀x^(2)(t)    (2.42)

Direct form I and direct form II implementations of this system are depicted in Figs. 2.30(a)
and (b). The reader is asked to show that these block diagrams implement the integral
equation in Problem 2.25. Note that the direct form II implementation uses fewer integra-
tors than the direct form I implementation.
Block diagram representations for continuous-time systems may be used to specify
analog computer simulations of systems. In such a simulation, signals are represented as

FIGURE 2.30 Block diagram representations for continuous-time system described by a second-
order integral equation. (a) Direct form I. (b) Direct form II.

voltages, resistors are used to implement scalar multiplication, and the integrators are
constructed using operational amplifiers, resistors, and capacitors. Initial conditions are
specified as initial voltages on integrators. Analog computer simulations are much more
cumbersome than digital computer simulations and suffer from drift, however, so it is
common to simulate continuous-time systems on digital computers by using numerical
approximations to either integration or differentiation operations.

2.6 State-Variable Descriptions for LTI Systems
The state-variable description for an LTI system consists of a series of coupled first-order
differential or difference equations that describe how the state of the system evolves and
an equation that relates the output of the system to the current state variables and input.
These equations are written in matrix form. The state of a system may be defined as a
minimal set of signals that represent the system's entire memory of the past. That is, given
only the value of the state at a point in time n₀ (or t₀) and the input for times n ≥ n₀ (or
t ≥ t₀), we can determine the output for all times n ≥ n₀ (or t ≥ t₀). We shall see that the
selection of signals comprising the state of a system is not unique and that there are many
possible state-variable descriptions corresponding to a system with a given input-output
characteristic. The ability to represent a system with different state-variable descriptions
is a powerful attribute that finds application in advanced methods for control system
analysis and discrete-time system implementation.


We shall develop the general state-variable description by starting with the direct form II
implementation for a second-order LTI system depicted in Fig. 2.31. In order to determine
the output of the system for n ≥ n₀, we must know the input for n ≥ n₀ and the outputs
of the time-shift operations labeled q₁[n] and q₂[n] at time n = n₀. This suggests that we
may choose q₁[n] and q₂[n] as the state of the system. Note that since q₁[n] and q₂[n]
are the outputs of the time-shift operations, the next value of the state, q₁[n + 1] and
q₂[n + 1], must correspond to the variables at the input to the time-shift operations.

FIGURE 2.31 Direct form II representation for a second-order discrete-time system depicting
state variables q₁[n] and q₂[n].

The block diagram indicates that the next value of the state is obtained from the current
state and the input via the equations

q₁[n + 1] = −a₁q₁[n] − a₂q₂[n] + x[n]    (2.43)
q₂[n + 1] = q₁[n]    (2.44)

The block diagram also indicates that the system output is expressed in terms of the input
and state as

y[n] = (b₁ − a₁)q₁[n] + (b₂ − a₂)q₂[n] + x[n]    (2.45)
We write Eqs. (2.43) and (2.44) in matrix form (semicolons separate the rows of a matrix
or column vector) as

[q₁[n + 1]; q₂[n + 1]] = [−a₁ −a₂; 1 0] [q₁[n]; q₂[n]] + [1; 0] x[n]    (2.46)

while Eq. (2.45) is expressed as

y[n] = [b₁ − a₁  b₂ − a₂] [q₁[n]; q₂[n]] + [1] x[n]    (2.47)

If we define the state vector as the column vector

q[n] = [q₁[n]; q₂[n]]

then we can rewrite Eqs. (2.46) and (2.47) as

q[n + 1] = A q[n] + b x[n]    (2.48)
y[n] = c q[n] + D x[n]    (2.49)

where the matrix A, vectors b and c, and scalar D are given by

A = [−a₁ −a₂; 1 0],  b = [1; 0],  c = [b₁ − a₁  b₂ − a₂],  D = [1]
2.6 State-Variable Descriptions for LTI Systems 127

Equations (2.48) and (2.49) are the general form for a state-variable description corresponding
to a discrete-time system. The matrix A, vectors b and c, and scalar D represent
another description for the system. Systems having different internal structures will be
represented by different A, b, c, and D. The state-variable description is the only analytic
system representation capable of specifying the internal structure of the system. Thus the
state-variable description is used in any problem in which the internal system structure
needs to be considered.
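The iteration implied by Eqs. (2.48) and (2.49) is straightforward to carry out numerically. As an illustrative sketch (not part of the text, which uses MATLAB), the following Python fragment steps a discrete-time state-variable description forward in time; the helper name simulate_ss and the test system are hypothetical:

```python
# Hypothetical sketch: iterate the state-variable equations (2.48)-(2.49),
# q[n+1] = A q[n] + b x[n] and y[n] = c q[n] + D x[n], with plain Python lists.

def simulate_ss(A, b, c, D, x, q0):
    """Return outputs y[0..len(x)-1] for input sequence x and initial state q0."""
    q = list(q0)
    y = []
    for xn in x:
        # output from the current state and input
        y.append(sum(ci * qi for ci, qi in zip(c, q)) + D * xn)
        # state update: q[n+1] = A q[n] + b x[n]
        q = [sum(aij * qj for aij, qj in zip(row, q)) + bi * xn
             for row, bi in zip(A, b)]
    return y

# Direct form II of Fig. 2.31 with a1 = a2 = 0, b1 = 1, b2 = 0, so that
# y[n] = x[n] + x[n - 1]; an impulse input then produces [1, 1, 0, 0].
y = simulate_ss([[0, 0], [1, 0]], [1, 0], [1, 0], 1, [1, 0, 0, 0], [0, 0])
```

With those coefficients the description reduces to a two-tap moving sum, which makes the state update easy to check by hand.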
If the input-output characteristics of the system are described by an Nth-order difference
equation, then the state vector q[n] is N-by-1, A is N-by-N, b is N-by-1, and c is
1-by-N. Recall that solution of the difference equation requires N initial conditions. The
N initial conditions represent the system's memory of the past, as does the N-dimensional
state vector. Also, an Nth-order system contains at least N time-shift operations in its block
diagram representation. If the block diagram for a system has a minimal number of time
shifts, then a natural choice for the states is the outputs of the unit delays, since the unit
delays embody the memory of the system. This choice is illustrated in the following example.

EXAMPLE 2.21 Find the state-variable description corresponding to the second-order system
depicted in Fig. 2.32 by choosing the state variables to be the outputs of the unit delays.

Solution: The block diagram indicates that the states are updated according to the equations

    q1[n + 1] = α q1[n] + δ1 x[n]
    q2[n + 1] = γ q1[n] + β q2[n] + δ2 x[n]

and the output is given by

    y[n] = η1 q1[n] + η2 q2[n]

These equations are expressed in the state-variable forms of Eqs. (2.48) and (2.49) if we define

    q[n] = [ q1[n] ],   A = [ α  0 ],   b = [ δ1 ],   c = [ η1  η2 ],   D = [0]
           [ q2[n] ]        [ γ  β ]        [ δ2 ]

FIGURE 2.32 Block diagram of system for Example 2.21.


FIGURE 2.33 Block diagram of system for Drill Problem 2.15.

• Drill Problem 2.15 Find the state-variable description corresponding to the block
diagram in Fig. 2.33. Choose the state variables to be the outputs of the unit delays, q1[n]
and q2[n], as indicated in the figure.

Answer:

    A = [ -1/2     0 ],   b = [ 1 ],   c = [ 0  1 ],   D = [2]    •
        [    1  -1/3 ]        [ 1 ]
The state-variable description for continuous-time systems is analogous to that for
discrete-time systems, with the exception that the state equation given by Eq. (2.48) is
expressed in terms of a derivative. We thus write

    (d/dt) q(t) = A q(t) + b x(t)                                         (2.50)
    y(t) = c q(t) + D x(t)                                                (2.51)

Once again, the matrix A, vectors b and c, and scalar D describe the internal structure of
the system.
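Unlike the discrete-time case, Eq. (2.50) cannot be iterated exactly, but it can be approximated by small time steps. A hypothetical Python sketch using the simple forward-Euler rule q(t + dt) ≈ q(t) + dt·(Aq(t) + bx(t)) (MATLAB's solvers use more sophisticated methods; this is only illustrative):

```python
import math

# Hypothetical sketch: forward-Euler approximation of Eqs. (2.50)-(2.51).
# A, b, c, D, the step size dt, and the helper name euler_ss are illustrative.

def euler_ss(A, b, c, D, x_of_t, q0, dt, steps):
    q = list(q0)
    y = []
    for k in range(steps):
        xt = x_of_t(k * dt)
        # output equation (2.51)
        y.append(sum(ci * qi for ci, qi in zip(c, q)) + D * xt)
        # state derivative from (2.50), then one Euler step
        dq = [sum(aij * qj for aij, qj in zip(row, q)) + bi * xt
              for row, bi in zip(A, b)]
        q = [qi + dt * dqi for qi, dqi in zip(q, dq)]
    return q, y

# First-order check: dq/dt = -q with q(0) = 1, so q(1) should approach e^{-1}.
q, _ = euler_ss([[-1.0]], [0.0], [1.0], 0.0, lambda t: 0.0, [1.0], 0.001, 1000)
```

Shrinking dt drives the final state toward the exact value exp(-1) ≈ 0.3679.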
The memory of a continuous-time system is contained within the system's energy
storage devices. Hence state variables are usually chosen as the physical quantities associated
with the energy storage devices. For example, in electrical systems the energy storage
devices are capacitors and inductors. We may choose state variables to correspond to the
voltage across capacitors or the current through inductors. In a mechanical system the
energy-storing devices are springs and masses. State variables may be chosen as spring
displacement or mass velocity. In a block diagram representation energy storage devices
are integrators. The state-variable equations represented by Eqs. (2.50) and (2.51) are
obtained from the equations that relate the behavior of the energy storage devices to the
input and output. This procedure is demonstrated in the following examples.


EXAMPLE 2.22 Consider the electrical circuit depicted in Fig. 2.34. Derive a state-variable
description for this system if the input is the applied voltage x(t) and the output is the current
through the resistor labeled y(t).

Solution: Choose the state variables as the voltage across each capacitor. Summing the
voltage drops around the loop involving x(t), R1, and C1 gives

    x(t) = y(t) R1 + q1(t)

or
FIGURE 2.34 Circuit diagram of system for Example 2.22.

    y(t) = -(1/R1) q1(t) + (1/R1) x(t)                                    (2.52)

This equation expresses the output as a function of the state variables and input. Let i2(t) be
the current through R2. Summing the voltage drops around the loop involving C1, R2, and C2,
we obtain

    q1(t) = i2(t) R2 + q2(t)

or

    i2(t) = (1/R2) q1(t) - (1/R2) q2(t)                                   (2.53)
However, we also know that

    i2(t) = C2 (d/dt) q2(t)

Substitute Eq. (2.53) for i2(t) to obtain

    (d/dt) q2(t) = 1/(C2 R2) q1(t) - 1/(C2 R2) q2(t)                      (2.54)

Lastly, we need a state equation for q1(t). This is obtained by applying Kirchhoff's current
law to the node between R1 and R2. Letting i1(t) be the current through C1, we have

    y(t) = i1(t) + i2(t)

Now substitute Eq. (2.52) for y(t), Eq. (2.53) for i2(t), and

    i1(t) = C1 (d/dt) q1(t)

for i1(t), and rearrange to obtain

    (d/dt) q1(t) = -(1/(C1 R1) + 1/(C1 R2)) q1(t) + 1/(C1 R2) q2(t) + 1/(C1 R1) x(t)   (2.55)

The state-variable description is now obtained from Eqs. (2.52), (2.54), and (2.55) as

    A = [ -(1/(C1 R1) + 1/(C1 R2))    1/(C1 R2) ],   b = [ 1/(C1 R1) ]
        [        1/(C2 R2)           -1/(C2 R2) ]        [     0     ]

    c = [ -1/R1   0 ],   D = [1/R1]


FIGURE 2.35 Circuit diagram of system for Drill Problem 2.16.

• Drill Problem 2.16 Find the state-variable description for the circuit depicted in
Fig. 2.35. Choose state variables q1(t) and q2(t) as the voltage across the capacitor and the
current through the inductor, respectively.

Answer:

    A = [    -1/((R1 + R2)C)       -R1/((R1 + R2)C)   ],   b = [  1/((R1 + R2)C) ]
        [   R1/((R1 + R2)L)     -R1 R2/((R1 + R2)L)   ]        [ R2/((R1 + R2)L) ]

    c = [ -1/(R1 + R2)   -R1/(R1 + R2) ],   D = [1/(R1 + R2)]    •
In a block diagram representation for a continuous-time system the state variables
correspond to the outputs of the integrators. Thus the input to the integrator is the derivative
of the corresponding state variable. The state-variable description is obtained by
writing equations that correspond to the operations in the block diagram. This procedure
is illustrated in the following example.

EXAMPLE 2.23 Determine the state-variable description corresponding to the block diagram
in Fig. 2.36. The choice of state variables is indicated on the diagram.

Solution: The block diagram indicates that

    (d/dt) q1(t) = 2 q1(t) - q2(t) + x(t)
    (d/dt) q2(t) = q1(t)
    y(t) = 3 q1(t) + q2(t)

Hence the state-variable description is

    A = [ 2  -1 ],   b = [ 1 ],   c = [ 3  1 ],   D = [0]
        [ 1   0 ]        [ 0 ]

We have claimed that there is no unique state-variable description for a system with a
given input-output characteristic. Different state-variable descriptions may be obtained

FIGURE 2.36 Block diagram of system for Example 2.23.

by transforming the state variables. This transformation is accomplished by defining a new
set of state variables that are a weighted sum of the original state variables. This changes
the form of A, b, c, and D but does not change the input-output characteristics of the
system. To illustrate this, reconsider Example 2.23. Define new states q1'(t) = q2(t) and
q2'(t) = q1(t). Here we simply have interchanged the state variables: q1'(t) is the output of
the second integrator and q2'(t) is the output of the first integrator. We have not changed
the structure of the block diagram, so clearly the input-output characteristic of the system
remains the same. The state-variable description is different, however, since now we have

    A' = [  0  1 ],   b' = [ 0 ],   c' = [ 1  3 ],   D' = [0]
         [ -1  2 ]         [ 1 ]
The example in the previous paragraph employs a particularly simple transformation
of the original state. In general, we may define a new state vector as a transformation of
the original state vector, or q' = Tq. We define T as the state transformation matrix. Here
we have dropped the time index (t) or [n] in order to treat both continuous- and discrete-time
cases simultaneously. In order for the new state to represent the entire system's
memory, the relationship between q' and q must be one to one. This implies that T must
be a nonsingular matrix, or that the inverse matrix T⁻¹ exists. Hence q = T⁻¹q'. The
original state-variable description is

    q̇ = Aq + bx
    y = cq + Dx

where the dot over q denotes differentiation in continuous time or time advance in discrete
time. The new state-variable description A', b', c', and D' is derived by noting

    q̇' = Tq̇
       = TAq + Tbx
       = TAT⁻¹q' + Tbx
    y = cq + Dx
      = cT⁻¹q' + Dx

Hence if we set

    A' = TAT⁻¹,   b' = Tb,   c' = cT⁻¹,   D' = D                          (2.56)

then


    q̇' = A'q' + b'x
    y = c'q' + D'x

is the new state-variable description.
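That the transformation leaves the input-output behavior unchanged can be checked numerically: for a discrete-time system, h[0] = D and h[n] = cAⁿ⁻¹b for n ≥ 1, and substituting the primed quantities from Eq. (2.56) gives the same values. A Python sketch (hypothetical 2x2 helpers and an arbitrary example system, not from the text):

```python
# Hypothetical check that (A', b', c', D') from Eq. (2.56) has the same
# impulse response as (A, b, c, D), for a 2x2 discrete-time example.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

def impulse(A, b, c, D, n):
    # h[0] = D, h[k] = c A^{k-1} b for k >= 1
    h, q = [D], list(b)
    for _ in range(n - 1):
        h.append(sum(ci * qi for ci, qi in zip(c, q)))
        q = matvec(A, q)
    return h

A, b, c, D = [[0.0, 1.0], [-0.2, 0.9]], [0.0, 1.0], [1.0, 0.5], 0.0
T = [[2.0, 1.0], [1.0, -1.0]]          # any nonsingular T works
Ti = inv2(T)
Ap = matmul(matmul(T, A), Ti)          # A' = T A T^{-1}
bp = matvec(T, b)                      # b' = T b
cp = [sum(c[k] * Ti[k][j] for k in range(2)) for j in range(2)]  # c' = c T^{-1}
h1 = impulse(A, b, c, D, 8)
h2 = impulse(Ap, bp, cp, D, 8)
```

Up to floating-point roundoff, h1 and h2 agree term by term, because c'(A')ⁿ⁻¹b' = cT⁻¹(TAT⁻¹)ⁿ⁻¹Tb = cAⁿ⁻¹b.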

EXAMPLE 2.24 A discrete-time system has the state-variable description

    A = (1/10) [ -1   4 ],   b = [ 2 ],   c = (1/2) [ 1  1 ],   D = [2]
               [  4  -1 ]        [ 4 ]

Find the state-variable description A', b', c', and D' corresponding to the new states
q1'[n] = -½ q1[n] + ½ q2[n] and q2'[n] = ½ q1[n] + ½ q2[n].

Solution: Write the new state vector as q' = Tq, where

    T = (1/2) [ -1  1 ]
              [  1  1 ]

This matrix is nonsingular, and its inverse is

    T⁻¹ = [ -1  1 ]
          [  1  1 ]

Hence substituting for T and T⁻¹ in Eq. (2.56) gives

    A' = [ -1/2    0  ],   b' = [ 1 ],   c' = [ 0  1 ],   D' = [2]
         [   0   3/10 ]         [ 3 ]

Note that this choice for T results in A' being a diagonal matrix and thus separates the state
update into the two decoupled first-order difference equations shown by

    q1'[n + 1] = -½ q1'[n] + x[n]
    q2'[n + 1] = (3/10) q2'[n] + 3 x[n]

The decoupled form of the state-variable description is particularly useful for analyzing systems
because of its simple structure.
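The diagonalization in Example 2.24 can be verified with exact rational arithmetic. A brief Python check (not part of the text) using the fractions module:

```python
from fractions import Fraction as F

# Check of Example 2.24: T A T^{-1} should be diagonal with entries -1/2, 3/10,
# and T b should equal [1, 3].
A = [[F(-1, 10), F(4, 10)], [F(4, 10), F(-1, 10)]]
T = [[F(-1, 2), F(1, 2)], [F(1, 2), F(1, 2)]]
Ti = [[F(-1), F(1)], [F(1), F(1)]]      # inverse of T, as given in the example

TA = [[sum(T[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
Ap = [[sum(TA[i][k] * Ti[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
bp = [sum(T[i][k] * [F(2), F(4)][k] for k in range(2)) for i in range(2)]
```

Working in Fractions rather than floats makes the diagonal structure of A' exact rather than approximate.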

• Drill Problem 2.17 A continuous-time system has the state-variable description

    A = [ -2   0 ],   b = [ 1 ],   c = [ 0  2 ],   D = [1]
        [  1  -1 ]        [ 1 ]

Find the state-variable description A', b', c', and D' corresponding to the new states
q1'(t) = 2 q1(t) + q2(t) and q2'(t) = q1(t) - q2(t).

Answer:

    A' = (1/3) [ -4  -1 ],   b' = [ 3 ],   c' = (1/3) [ 2  -4 ],   D' = [1]    •
               [ -2  -5 ]         [ 0 ]
Note that each nonsingular transformation T generates a different state-variable description
for a system with a given input-output behavior. The ability to transform the
state-variable description without changing the input-output characteristics of the system
is a powerful tool. It is used to analyze systems and identify implementations of systems
that optimize some performance criteria not directly related to input-output behavior,
such as the numerical effects of roundoff in a computer-based system implementation.

2.7 Exploring Concepts with MATLAB

Digital computers are ideally suited to implementing time-domain descriptions of discrete-time
systems, because computers naturally store and manipulate sequences of numbers.
For example, the convolution sum describes the relationship between the input and output
of a discrete-time system and is easily evaluated with a computer as the sum of products
of numbers. In contrast, continuous-time systems are described in terms of continuous
functions, which are not easily represented or manipulated in a digital computer. For
example, the output of a continuous-time system is described by the convolution integral.
Evaluation of the convolution integral with a computer requires use of either numerical
integration or symbolic manipulation techniques, both of which are beyond the scope of
this book. Hence our exploration with MATLAB focuses on discrete-time systems.
    A second limitation on exploring signals and systems is imposed by the finite memory
or storage capacity and nonzero computation times inherent to all digital computers. Consequently,
we can only manipulate finite-duration signals. For example, if the impulse
response of a system has infinite duration and the input is of infinite duration, then the
convolution sum involves summing an infinite number of products. Of course, even if we
could store the infinite-length signals in the computer, the infinite sum could not be computed
in a finite amount of time. In spite of this limitation, the behavior of a system in
response to an infinite-length signal may often be inferred from its response to a carefully
chosen finite-length signal.
    Both the MATLAB Signal Processing Toolbox and Control System Toolbox are used
in this section.


Recall that the convolution sum expresses the output of a discrete-time system in terms of
the input and impulse response of the system. MATLAB has a function named conv that
evaluates the convolution of finite-duration discrete-time signals. If x and h are vectors
representing signals, then the MATLAB command y = conv(x, h) generates a vector
y representing the convolution of the signals represented by x and h. The number of
elements in y is given by the sum of the number of elements in x and h minus one. Note
that we must know the time origin of the signals represented by x and h in order to
determine the time origin of their convolution. In general, if the first element of x corresponds
to time n = kx and the first element of h corresponds to time n = kh, then the first
element of y corresponds to time n = kx + kh.
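The same length and index bookkeeping can be sketched in a few lines of Python (an illustrative stand-in for conv, not MATLAB's implementation):

```python
# Hypothetical Python equivalent of conv for finite-duration sequences.
# If x starts at n = kx and h at n = kh, the result starts at n = kx + kh
# and has len(x) + len(h) - 1 elements.

def conv(x, h):
    y = [0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Example 2.1 data: h starts at n = -1, x at n = 0, so y starts at n = -1.
y = conv([2, 3, -2], [1, 2, 1])   # -> [2, 7, 6, -1, -2]
```

The five output values match the MATLAB session that follows, with the first value located at time n = 0 + (-1) = -1.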

To illustrate this, consider repeating Example 2.1 using MATLAB. Here the first
nonzero value in the impulse response occurs at time n = -1 and the first element of the
input x occurs at time n = 0. We evaluate this convolution in MATLAB as follows:

>> h = [1, 2, 1];
>> x = [2, 3, -2];
>> y = conv(x, h)
y =
     2     7     6    -1    -2

The first element in the vector y corresponds to time n = 0 + (-1) = -1.
In Example 2.3 we used hand calculation to determine the output of a system with
impulse response given by

    h[n] = u[n] - u[n - 10]

and input

    x[n] = u[n - 2] - u[n - 7]

We may use the MATLAB command conv to perform the convolution as follows. In this
case, the impulse response consists of ten consecutive ones beginning at time n = 0, and
the input consists of five consecutive ones beginning at time n = 2. These signals may be
defined in MATLAB using the commands

>> h = ones(1,10);
>> x = ones(1,5);

The output is obtained and graphed using the commands

>> n = 2:15;
>> y = conv(x,h);
>> stem(n,y); xlabel('Time'); ylabel('Amplitude')

Here the first element of the vector y corresponds to time n = 2 + 0 = 2 as depicted in
Fig. 2.37.

FIGURE 2.37 Convolution sum computed using MATLAB.

• Drill Problem 2.18 Use MATLAB to solve Drill Problem 2.2 for a = 0.9. That is,
find the output of the system with input x[n] = 2{u[n + 2] - u[n - 12]} and impulse
response h[n] = 0.9^n {u[n - 2] - u[n - 13]}.

Answer: See Fig. 2.38. •

The step response is the output of a system in response to a step input and is infinite in
duration in general. However, we can evaluate the first p values of the step response using
the conv function if the system impulse response is zero for times n < kh, by convolving
the first p values of h[n] with a finite-duration step of length p. That is, we construct a
vector h from the first p nonzero values of the impulse response, define the step u =
ones(1, p), and evaluate s = conv(u, h). The first element of s corresponds to
time kh, and the first p values of s represent the first p values of the step response. The
remaining values of s do not correspond to the step response, but are an artifact of convolving
finite-duration signals.
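The truncation argument above amounts to saying that the first p step-response values are the running sums of the first p impulse-response values. A small Python sketch (hypothetical helper, not from the text) makes this explicit:

```python
# Sketch: the first p values of the step response equal the cumulative sum of
# the first p impulse-response values, s[n] = h[0] + h[1] + ... + h[n].

def step_response(h, p):
    s, acc = [], 0.0
    for n in range(p):
        acc += h[n]
        s.append(acc)
    return s

h = [(-0.9) ** n for n in range(50)]   # h[n] = (-a)^n u[n] with a = 0.9
s = step_response(h, 50)               # s[0] = 1.0, s[1] = 0.1, ...
```

For this h[n] the running sum is a geometric series, so the values settle toward 1/(1 + 0.9) ≈ 0.526, consistent with the plot in Fig. 2.39.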
For example, we may determine the first 50 values of the step response of the system
with impulse response given in Drill Problem 2.7:

    h[n] = (-a)^n u[n]

with a = 0.9 by using the MATLAB commands

>> h = (-0.9).^[0:49];
>> u = ones(1,50);
>> s = conv(u,h);

The vector s has 99 values, the first 50 of which represent the step response and
are depicted in Fig. 2.39. This figure is obtained using the MATLAB command
stem([0:49], s(1:50)).
The sinusoidal steady-state response of a discrete-time system is given by the amplitude
and phase change experienced by the infinite-duration complex sinusoidal input signal
FIGURE 2.38 Solution to Drill Problem 2.18.

FIGURE 2.39 Step response computed using MATLAB.

x[n] = e^{jΩn}. The sinusoidal steady-state response of a system with finite-duration impulse
response may be determined using a finite-duration sinusoid provided the sinusoid is sufficiently
long to drive the system to a steady-state condition. To show this, suppose
h[n] = 0 for n < n1 and n > n2, and let the system input be the finite-duration sinusoid
v[n] = e^{jΩn} (u[n] - u[n - nv]). We may write the system output as

    y[n] = h[n] * v[n]
         = h[n] * e^{jΩn} = H(e^{jΩ}) e^{jΩn},    n2 ≤ n < n1 + nv

Hence the system output in response to a finite-duration sinusoidal input corresponds to
the sinusoidal steady-state response on the interval n2 ≤ n < n1 + nv. The magnitude and
phase response of the system may be determined from y[n], n2 ≤ n < n1 + nv.
Take the magnitude and phase of y[n] to obtain

    |y[n]| = |H(e^{jΩ})|,    arg{y[n]} - Ωn = arg{H(e^{jΩ})},    n2 ≤ n < n1 + nv
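The estimate above can be sketched outside MATLAB as well. A hypothetical Python fragment convolves a finite complex sinusoid with the two-tap impulse response used below and reads H(e^{jΩ}) off one steady-state sample, comparing it with the direct evaluation H(e^{jΩ}) = ½ - ½e^{-jΩ}:

```python
import cmath, math

# Sketch: recover H(e^{jW}) from one steady-state sample of the convolution
# of a finite sinusoid with h[n] = {1/2, -1/2}.

def conv(x, h):
    y = [0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

W = math.pi / 4
v = [cmath.exp(1j * W * n) for n in range(51)]   # v[n] = e^{jWn}, 0 <= n <= 50
y = conv(v, [0.5, -0.5])
H_est = y[5] / cmath.exp(1j * W * 5)             # any steady-state sample works
H_exact = 0.5 - 0.5 * cmath.exp(-1j * W)         # direct evaluation of H(e^{jW})
```

Since h[n] is nonzero only for n = 0, 1, samples y[1] through y[50] all lie in the steady-state interval, and the two quantities agree to machine precision.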

We may use this approach to evaluate the sinusoidal steady-state response of one of
the systems given in Example 2.14. Consider the system with impulse response

    h[n] = {  1/2,   n = 0
           { -1/2,   n = 1
           {   0,    otherwise

We shall determine the frequency response and 50 values of the sinusoidal steady-state
response of this system for input frequencies Ω = π/4 and 3π/4.
    Here n1 = 0 and n2 = 1, so to obtain 50 values of the sinusoidal steady-state response
we require nv ≥ 51. The sinusoidal steady-state responses are obtained by the MATLAB commands

>> Omega1 = pi/4; Omega2 = 3*pi/4;
>> v1 = exp(j*Omega1*[0:50]);
>> v2 = exp(j*Omega2*[0:50]);
>> h = [0.5, -0.5];
>> y1 = conv(v1,h); y2 = conv(v2,h);
Figures 2.40(a) and (b) depict the real and imaginary components of y1, respectively, and
may be obtained with the commands

>> subplot(2,1,1)
>> stem([0:51],real(y1))
>> xlabel('Time'); ylabel('Amplitude');
>> title('Real(y1)')
>> subplot(2,1,2)
>> stem([0:51],imag(y1))
>> xlabel('Time'); ylabel('Amplitude');
>> title('Imag(y1)')

FIGURE 2.40 Sinusoidal steady-state response computed using MATLAB. The values at times 1
through 50 represent the sinusoidal steady-state response.

The sinusoidal steady-state response is represented by the values at time indices 1 through
50. We may now obtain the magnitude and phase responses from any element of the
vectors y1 and y2 except for the first one or the last one. Using the fifth element, we use
the commands

>> H1mag = abs(y1(5))
>> H2mag = abs(y2(5))
>> H1phs = angle(y1(5)) - Omega1*5
>> H2phs = angle(y2(5)) - Omega2*5

The phase response is measured in radians. Note that the angle command always returns
a value between -π and π radians. Hence measuring phase with the command
angle(y1(n)) - Omega1*n may result in answers that differ by integer multiples
of 2π when different values of n are used.

• Drill Problem 2.19 Evaluate the frequency response and 50 values of the sinusoidal
steady-state response of the system with impulse response

    h[n] = { 1/4,   0 ≤ n ≤ 3
           {  0,    otherwise

at frequency Ω = π/3.

Answer: The steady-state response is given by the values at time indices 3 through 52 in
Fig. 2.41. Using the fourth element of the steady-state response gives |H(e^{jπ/3})| = 0.4330
and arg{H(e^{jπ/3})} = -1.5708 radians. •
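The quoted numbers can be confirmed directly, since here H(e^{jΩ}) is just a four-term sum. A short Python check (not part of the text) of the drill-problem answer:

```python
import cmath, math

# Direct check of the Drill Problem 2.19 answer: for h[n] = 1/4, 0 <= n <= 3,
# H(e^{j*pi/3}) = (1/4) * sum over n of e^{-j*pi*n/3}.
W = math.pi / 3
H = sum(0.25 * cmath.exp(-1j * W * n) for n in range(4))
mag, phase = abs(H), cmath.phase(H)
```

The sum works out to -j*sqrt(3)/4, so mag ≈ 0.4330 and phase ≈ -π/2 ≈ -1.5708 radians, matching the answer above.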

In Section 2.4, we expressed the difference-equation description for a system in a recursive
form that allowed the system output to be computed from the input signal and past outputs.
The filter command performs a similar function. Define vectors a = [a0, a1, ..., aN]
and b = [b0, b1, ..., bM] representing the coefficients of the difference equation given
by Eq. (2.29). If x is a vector representing the input signal, then the command y =
filter(b, a, x) results in a vector y representing the output of the system for zero
initial conditions. The number of output values in y corresponds to the number of input
values in x. Nonzero initial conditions are incorporated by using the alternative command
syntax y = filter(b, a, x, zi), where zi represents the initial conditions required
by filter. The initial conditions used by filter are not the past values of the
output since filter uses a modified form of the difference equation to determine the
output. These initial conditions are obtained from knowledge of the past outputs using

FIGURE 2.41 Sinusoidal steady-state response for Drill Problem 2.19.

the command zi = filtic(b, a, yi), where yi is a vector containing the initial
conditions in the order [y[-1], y[-2], ..., y[-N]].
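The recursion that filter evaluates can be written out directly from the difference equation of Eq. (2.29): y[n] = (1/a0)(Σk bk x[n-k] - Σk ak y[n-k]). The Python sketch below (hypothetical helper difference_eq, not MATLAB's internal algorithm, which uses a different state convention via filtic) carries explicit past outputs:

```python
# Sketch of direct recursion for a difference equation:
# y[n] = (1/a0) * (sum_k b_k x[n-k] - sum_{k>=1} a_k y[n-k]).

def difference_eq(a, b, x, past_y):
    """past_y = [y[-1], y[-2], ...]; the input x is assumed zero before n = 0."""
    y = []
    for n in range(len(x)):
        acc = sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
        for k, ak in enumerate(a[1:], start=1):
            yk = y[n - k] if n - k >= 0 else past_y[k - n - 1]
            acc -= ak * yk
        y.append(acc / a[0])
    return y

# The second-order system revisited below, with zero input and
# initial conditions y[-1] = 1, y[-2] = 2:
a = [1, -1.143, 0.4128]
b = [0.0675, 0.1349, 0.0675]
y = difference_eq(a, b, [0.0] * 5, [1.0, 2.0])
```

With zero input, y[0] = 1.143(1) - 0.4128(2) = 0.3174, the start of the natural response.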
    We illustrate use of the filter command by revisiting Example 2.16. The system
of interest is described by the difference equation

    y[n] - 1.143 y[n - 1] + 0.4128 y[n - 2] = 0.0675 x[n]
                  + 0.1349 x[n - 1] + 0.0675 x[n - 2]                     (2.57)

We determine the output in response to zero input and initial conditions y[-1] = 1,
y[-2] = 2 using the commands

>> a = [1, -1.143, 0.4128];
>> b = [0.0675, 0.1349, 0.0675];
>> x = zeros(1, 50);
>> zi = filtic(b,a,[1, 2]);
>> y = filter(b,a,x,zi);

The result is depicted in Fig. 2.22(b). We may determine the system response to an input
consisting of the average January temperature data with the commands

>> load Jantemp;
>> filttemp = filter(b,a,Jantemp);

Here we have assumed the average January temperature data are in the file
Jantemp.mat. The result is depicted in Fig. 2.22(g).

• Drill Problem 2.20 Use filter to determine the first 50 values of the step response
of the system described by Eq. (2.57) and the first 100 values of the response to
the input x[n] = cos(¼πn) assuming zero initial conditions.

Answer: See Figs. 2.22(a) and (d). •

The command [h, t] = impz(b, a, n) evaluates n values of the impulse response
of a system described by a difference equation. The difference-equation coefficients
are contained in the vectors b and a as for filter. The vector h contains the values of
the impulse response and t contains the corresponding time indices.


The MATLAB Control System Toolbox contains numerous routines for manipulating
state-variable descriptions. A key feature of the Control System Toolbox is the use of LTI
objects, which are customized data structures that enable manipulation of LTI system
descriptions as single MATLAB variables. If a, b, c, and d are MATLAB arrays representing
the A, b, c, and D matrices in the state-variable description, then the command
sys = ss(a, b, c, d, -1) produces an LTI object sys that represents the discrete-time
system in state-variable form. Note that a continuous-time system is obtained by omitting
the -1, that is, using sys = ss(a, b, c, d). LTI objects corresponding to other system
representations are discussed in Sections 6.9 and 7.10.
    Systems are manipulated in MATLAB by operations on their LTI objects. For example,
if sys1 and sys2 are objects representing two systems in state-variable form,
then sys = sys1 + sys2 produces the state-variable description for the parallel
combination of sys1 and sys2, while sys = sys1 * sys2 represents the cascade
combination.
    The function lsim simulates the output of a system in response to a specified input.
For a discrete-time system, the command has the form y = lsim(sys, x), where x
is a vector containing the input and y represents the output. The command h =
impulse(sys, N) places the first N values of the impulse response in h. Both of these
may also be used for continuous-time systems, although the command syntax changes
slightly. In the continuous-time case, numerical methods are used to approximate the
continuous-time system response.
    Recall that there is no unique state-variable description for a given system. Different
state-variable descriptions for the same system are obtained by transforming the state.
Transformations of the state may be computed in MATLAB using the routine ss2ss.
The state transformation is identical for both continuous- and discrete-time systems, so
the same command is used for transforming either type of system. The command is of the
form sysT = ss2ss(sys, T), where sys represents the original state-variable description,
T is the state transformation matrix, and sysT represents the transformed state-variable
description.
    Consider using ss2ss to transform the state-variable description of Example 2.24,

    A = (1/10) [ -1   4 ],   b = [ 2 ],   c = (1/2) [ 1  1 ],   D = [2]
               [  4  -1 ]        [ 4 ]

using the state transformation matrix

    T = (1/2) [ -1  1 ]
              [  1  1 ]
The following commands produce the desired result:

>> a = [-0.1, 0.4; 0.4, -0.1]; b = [2; 4];
>> c = [0.5, 0.5]; d = 2;
>> sys = ss(a,b,c,d,-1);   % define the state-space object sys
>> T = 0.5*[-1, 1; 1, 1];
>> sysT = ss2ss(sys,T)

a =
              x1          x2
    x1  -0.50000           0
    x2         0     0.30000
b =
    x1   1.00000
    x2   3.00000
c =
              x1          x2
    y1         0     1.00000
d =
    y1   2.00000

Sampling time: unspecified
Discrete-time system.
This result agrees with Example 2.24. We may verify that the two systems represented by
sys and sysT have identical input-output characteristics by comparing their impulse
responses via the following commands:

>> h = impulse(sys,10); hT = impulse(sysT,10);
>> subplot(2,1,1)
>> stem([0:9],h)
>> title('Original System Impulse Response');
>> xlabel('Time'); ylabel('Amplitude')
>> subplot(2,1,2)
>> stem([0:9],hT)
>> title('Transformed System Impulse Response');
>> xlabel('Time'); ylabel('Amplitude')

Figure 2.42 depicts the first 10 values of the impulse responses of the original and transformed
systems produced by this sequence of commands. We may verify that the original
and transformed systems have the (numerically) identical impulse response by computing
the error err = h - hT.

• Drill Problem 2.21 Solve Drill Problem 2.17 using MATLAB. •


FIGURE 2.42 Impulse responses associated with the original and transformed state-variable
descriptions computed using MATLAB.

2.8 Summary
There are many different methods for describing the action of an LTI system on an input
signal. In this chapter we have examined four different descriptions for LTI systems: the
impulse response, difference- and differential-equation, block diagram, and state-variable
descriptions. All four are equivalent in the input-output sense; for a given input, each
description will produce the identical output. However, different descriptions offer different
insights into system characteristics and use different techniques for obtaining the output
from the input. Thus each description has its own advantages and disadvantages for solving
a particular system problem.
    The impulse response is the output of a system when the input is an impulse. The
output of a linear time-invariant system in response to an arbitrary input is expressed in
terms of the impulse response as a convolution operation. System properties, such as causality
and stability, are directly related to the impulse response. The impulse response also
offers a convenient framework for analyzing interconnections of systems. The input must
be known for all time in order to determine the output of a system using the impulse
response and convolution.
    The input and output of an LTI system may also be related using either a differential
or difference equation. Differential equations often follow directly from the physical principles
that define the behavior and interaction of continuous-time system components. The
order of a differential equation reflects the maximum number of energy storage devices in
the system, while the order of a difference equation represents the system's maximum
memory of past outputs. In contrast to impulse response descriptions, the output of a
system from a given point in time forward can be determined without knowledge of all
past inputs provided initial conditions are known. Initial conditions are the initial values
of energy storage or system memory and summarize the effect of all past inputs up to the

starting time of interest. The solution to a differential or difference equation can be separated
into a natural and forced response. The natural response describes the behavior of
the system due to the initial conditions. The forced response describes the behavior of the
system in response to the input alone.
    The block diagram represents the system as an interconnection of elementary operations
on signals. The manner in which these operations are interconnected defines the
internal structure of the system. Different block diagrams can represent systems with identical
input-output characteristics.
    The state-variable description is a series of coupled first-order differential or difference
equations representing the system behavior, which are written in matrix form. It
consists of two equations: one equation describes how the state of the system evolves and
a second equation relates the state to the output. The state represents the system's entire
memory of the past. The number of states corresponds to the number of energy storage
devices or maximum memory of past outputs present in the system. The choice of state is
not unique; an infinite number of different state-variable descriptions can be used to represent
systems with the same input-output characteristic. The state-variable description
can be used to represent the internal structure of a physical system and thus provides a
more detailed characterization of systems than the impulse response or differential (difference)
equations.


Further Reading

1. A concise summary and many worked problems for much of the material presented in this
and later chapters is found in:
   • Hsu, H. P., Signals and Systems, Schaum's Outline Series (McGraw-Hill, 1995)
2. The notation H(e^{jΩ}) and H(jω) for the sinusoidal steady-state response of a discrete- and a
continuous-time system, respectively, may seem unnatural at first glance. Indeed, the alternative
notations H(Ω) and H(ω) are sometimes used in engineering practice. However,
our notation is more commonly used as it allows the sinusoidal steady-state response
to be defined naturally in terms of the z-transform (Chapter 7) and the Laplace transform
(Chapter 6).
3. A general treatment of differential equations is given in:
• Boyce, W. E., and R. C. DiPrima, Elementary Differential Equations, Sixrh Edition (Wiley,
4. The role of difference equations and block diagram descriptions for discrete-time systems
in signal processing are described in:
• Proakis, J. G., and D. G. Manolakis, Introductíon to Digital Signal Processing (Macmillan,
• Oppenheím, A. V., and R. W. Schafer, Discrete-Time Signal Pr<Jcessíng (Prentice Hall, 1989)
5. The role of differential equations, block diagram descriptions, and state-variable descriptions
in control systems is described in:
• Dorf, R. C., and R. H. Bishop, Modern Control Systems, Seventh Edition (Addison-Wesley)
• Phillips, C. L., and R. D. Harbor, Feedback Control Systems, Third Edition (Prentice Hall)
6. State-variable descriptions in control systems are discussed in:
• Chen, C. T., Linear System Theory and Design (Holt, Rinehart, and Winston, 1984)

• Friedland, B., Control System Design: An Introduction to State-Space Methods (McGraw-Hill)

A thorough, yet advanced, treatment of state-variable descriptions in the context of signal
processing is given in:
• Roberts, R. A., and C. T. Mullis, Digital Signal Processing (Addison-Wesley, 1987)

2.1 A discrete-time LTI system has the impulse response h[n] depicted in Fig. P2.1(a). Use linearity and time invariance to determine the system output y[n] if the input x[n] is:
(a) x[n] = 2δ[n] − δ[n − 1]
(b) x[n] = u[n] − u[n − 3]
(c) x[n] as given in Fig. P2.1(b)

FIGURE P2.1 (a) The impulse response h[n]. (b) The input x[n]. (Plot details are not recoverable from the scan.)

2.2 Evaluate the discrete-time convolution sums given below.
(a) y[n] = u[n] * u[n − 3]
(b) y[n] = 3^n u[−n + 2] * u[n − 3]
(c) y[n] = (1/4)^n u[n − 2] * u[n]
(d) y[n] = cos((1/2)πn)u[n] * u[n − 1]
(e) y[n] = cos((1/2)πn) * 2^n u[−n + 2]
(f) y[n] = cos((1/2)πn) * (1/2)^n u[n − 2]
(g) y[n] = β^n u[n] * u[n − 3], |β| < 1
(h) y[n] = β^n u[n] * α^n u[n], |β| < 1, |α| < 1
(i) y[n] = (u[n + 10] − 2u[n + 5] + u[n − 6]) * u[n − 2]
(j) y[n] = (u[n + 10] − 2u[n + 5] + u[n − 6]) * β^n u[n], |β| < 1
(k) y[n] = (u[n + 10] − 2u[n + 5] + u[n − 6]) * cos((1/2)πn)
(l) y[n] = u[n] * Σ_{p=0}^{∞} δ[n − 2p]
(m) y[n] = β^n u[n] * Σ_{p=0}^{∞} δ[n − 2p], |β| < 1
(n) y[n] = u[n − 2] * h[n], where h[n] = γ^n for n < 0 with |γ| > 1, and h[n] = η^n for n ≥ 0 with |η| < 1
(o) y[n] = (1/2)^n u[n + 2] * h[n], where h[n] is defined in part (n)
2.3 Consider the discrete-time signals depicted in Fig. P2.3. Evaluate the convolution sums indicated below.
(a) m[n] = x[n] * z[n]
(b) m[n] = x[n] * y[n]
(c) m[n] = x[n] * f[n]
(d) m[n] = x[n] * g[n]
(e) m[n] = y[n] * z[n]
(f) m[n] = y[n] * g[n]
(g) m[n] = y[n] * w[n]
(h) m[n] = y[n] * f[n]
(i) m[n] = z[n] * g[n]
(j) m[n] = w[n] * g[n]
(k) m[n] = f[n] * g[n]
2.4 A LTI system has impulse response h(t) depicted in Fig. P2.4. Use linearity and time invariance to determine the system output y(t) if the input x(t) is:
(a) x(t) = 2δ(t + 1) − δ(t − 1)
(b) x(t) = δ(t − 1) + δ(t − 2) + δ(t − 3)
(c) x(t) = Σ_{p=−∞}^{∞} (−1)^p δ(t − 2p)
2.5 Evaluate the continuous-time convolution integrals given below.
(a) y(t) = u(t + 1) * u(t − 2)
(b) y(t) = e^{−2t}u(t) * u(t + 2)
(c) y(t) = cos(πt)(u(t + 1) − u(t − 3)) * u(t)
(d) y(t) = (u(t + 2) − u(t − 1)) * u(−t + 2)
(e) y(t) = (tu(t) + (10 − 2t)u(t − 5) − (10 − t)u(t − 10)) * u(t)
(f) y(t) = (t + 2t^2)(u(t + 1) − u(t − 1)) * u(t + 2)
Problems 145

FIGURE P2.3 The discrete-time signals x[n], y[n], z[n], w[n], f[n], and g[n] used in Problem 2.3. (Plot details are not recoverable from the scan.)

FIGURE P2.4 The impulse response h(t) used in Problem 2.4. (Plot details are not recoverable from the scan.)

(g) y(t) = cos(πt)(u(t + 1) − u(t − 3)) * (u(t + 2) − u(t − 1))
(h) y(t) = cos(πt)(u(t + 1) − u(t − 3)) * e^{−2t}u(t)
(i) y(t) = (2δ(t) + δ(t − 5)) * u(t + 1)
(j) y(t) = (δ(t + 2) + δ(t − 5)) * (tu(t) + (10 − 2t)u(t − 5) − (10 − t)u(t − 10))
(k) y(t) = e^{−γt}u(t) * (u(t + 2) − u(t − 2))
(l) y(t) = e^{−γt}u(t) * Σ_{p=0}^{∞} (1/2)^p δ(t − p)
(m) y(t) = (2δ(t) + δ(t − 5)) * Σ_{p=0}^{∞} (1/2)^p δ(t − p)
(n) y(t) = e^{−γt}u(t) * e^{βt}u(−t), γ > 0, β > 0
(o) y(t) = u(t − 1) * h(t), where
    h(t) = e^{2t}, t < 0
           e^{−3t}, t > 0
2.6 Consider the continuous-time signals depicted in Fig. P2.6. Evaluate the convolution integrals indicated below.
(a) m(t) = x(t) * y(t)
(b) m(t) = x(t) * z(t)
(c) m(t) = x(t) * f(t)
(d) m(t) = x(t) * b(t)
(e) m(t) = x(t) * a(t)
(f) m(t) = y(t) * z(t)
(g) m(t) = y(t) * w(t)
(h) m(t) = y(t) * g(t)
(i) m(t) = y(t) * c(t)
(j) m(t) = z(t) * f(t)
(k) m(t) = z(t) * g(t)
(l) m(t) = z(t) * b(t)
(m) m(t) = w(t) * g(t)
(n) m(t) = w(t) * a(t)
(o) m(t) = f(t) * g(t)
(p) m(t) = f(t) * c(t)
(q) m(t) = f(t) * d(t)
(r) m(t) = x(t) * d(t)


FIGURE P2.6 The continuous-time signals x(t), y(t), z(t), w(t), f(t), g(t), b(t), c(t), d(t), and a(t) used in Problem 2.6. (Plot details are not recoverable from the scan.)

2.7 Use the definition of the convolution sum to prove the following properties:
(a) Distributive: x[n] * (h[n] + g[n]) = x[n] * h[n] + x[n] * g[n]
(b) Associative: x[n] * (h[n] * g[n]) = (x[n] * h[n]) * g[n]
(c) Commutative: x[n] * h[n] = h[n] * x[n]
2.8 A LTI system has the impulse response depicted in Fig. P2.8.

FIGURE P2.8 A pulse taking the values 1/Δ and −1/Δ. (Plot details are not recoverable from the scan.)
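The three convolution-sum properties in Problem 2.7 can be checked numerically before proving them. A sketch in Python/NumPy (the sequences are arbitrary test values, not from the text):

```python
import numpy as np

# Arbitrary finite-length test sequences (illustrative values only).
x = np.array([1.0, -2.0, 3.0])
h = np.array([0.5, 1.0])
g = np.array([2.0, 0.0, -1.0])

# Distributive: x * (h + g) = x * h + x * g.
# Zero-pad h so it can be added to g sample by sample.
h_padded = np.pad(h, (0, len(g) - len(h)))
dist_lhs = np.convolve(x, h_padded + g)
xh = np.convolve(x, h)
dist_rhs = np.pad(xh, (0, len(dist_lhs) - len(xh))) + np.convolve(x, g)

# Associative: x * (h * g) = (x * h) * g.
assoc_lhs = np.convolve(x, np.convolve(h, g))
assoc_rhs = np.convolve(np.convolve(x, h), g)

# Commutative: x * h = h * x.
comm_lhs = np.convolve(x, h)
comm_rhs = np.convolve(h, x)
```

Because the sequences are finite and the full convolution is computed, each pair of results agrees exactly up to rounding.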

FIGURE P2.9 Interconnections of LTI subsystems (h1, h2, h3, h4) for Problem 2.9, parts (a)-(c). (Block-diagram details are not recoverable from the scan.)



(a) Express the system output y(t) as a function of the input x(t).
(b) Identify the mathematical operation performed by this system in the limit as Δ → 0.
(c) Let g(t) = lim_{Δ→0} h(t). Use the results of (b) to express the output of a LTI system with impulse response
    h^n(t) = g(t) * g(t) * ··· * g(t)   (n times)
as a function of the input x(t).
2.9 Find the expression for the impulse response relating the input x[n] or x(t) to the output y[n] or y(t) in terms of the impulse response of each subsystem for the LTI systems depicted in:
(a) Fig. P2.9(a)
(b) Fig. P2.9(b)
(c) Fig. P2.9(c)
2.10 Let h1(t), h2(t), h3(t), and h4(t) be impulse responses of LTI systems. Construct a system with impulse response h(t) using h1(t), h2(t), h3(t), and h4(t) as subsystems. Draw the interconnection of systems required to obtain:
(a) h(t) = h1(t) + {h2(t) + h3(t)} * h4(t)
(b) h(t) = h1(t) * h2(t) + h3(t) * h4(t)
(c) h(t) = h1(t) * {h2(t) + h3(t) + h4(t)}
2.11 An interconnection of LTI systems is depicted in Fig. P2.11. The impulse responses are h1[n] = (1/2)^n (u[n + 2] − u[n − 3]), h2[n] = δ[n], and h3[n] = u[n − 1]. Let the impulse response of the overall system from x[n] to y[n] be denoted as h[n].
(a) Express h[n] in terms of h1[n], h2[n], and h3[n].
(b) Evaluate h[n] using the results of (a).
In parts (c)-(e) determine whether the system corresponding to each impulse response is (i) stable, (ii) causal, and (iii) memoryless.
(c) h1[n]
(d) h2[n]
(e) h3[n]

FIGURE P2.11 The interconnection of LTI systems for Problem 2.11. (Block-diagram details are not recoverable from the scan.)

2.12 For each impulse response listed below, determine whether the corresponding system is (i) memoryless, (ii) causal, and (iii) stable.
(a) h(t) = e^{−2|t|}
(b) h(t) = e^{2t}u(t − 1)
(c) h(t) = u(t + 1) − 2u(t − 1)
(d) h(t) = 3δ(t)
(e) h(t) = cos(πt)u(t)
(f) h[n] = 2^n u[−n]
(g) h[n] = e^{2n}u[n − 1]
(h) h[n] = cos((1/8)πn){u[n] − u[n − 10]}
(i) h[n] = 2u[n] − 2u[n − 1]
(j) h[n] = sin((1/2)πn)
(k) h[n] = δ[n] + sin(πn)

FIGURE P2.16 The electrical circuits (a) and (b) for Problem 2.16. (Circuit details are not recoverable from the scan.)
2.13 Prove that absolute summability of the impulse response is a necessary condition for stability of a discrete-time system. Hint: Find a bounded input x[n] such that the output at some time n0 satisfies |y[n0]| = Σ_{k=−∞}^{∞} |h[k]|.
2.14 Evaluate the step response for the LTI systems represented by the following impulse responses:
(a) h[n] = (1/2)^n u[n]
(b) h[n] = δ[n] − δ[n − 1]
(c) h[n] = (−1)^n {u[n + 2] − u[n − 3]}
(d) h[n] = u[n]
(e) h(t) = e^{−|t|}
(f) h(t) = δ(t) − δ(t − 1)
(g) h(t) = u(t + 1) − u(t − 1)
(h) h(t) = tu(t)
2.15 Evaluate the frequency response for the LTI systems represented by the following impulse responses:
(a) h[n] = (1/3)^n u[n]
(b) h[n] = δ[n] − δ[n − 1]
(c) h[n] = (−1)^n {u[n + 2] − u[n − 3]}
(d) h[n] = (0.9)^n e^{j(π/2)n} u[n]
(e) h(t) = e^{−|t|}
(f) h(t) = −δ(t + 1) + δ(t) − δ(t − 1)
(g) h(t) = cos(πt){u(t + 3) − u(t − 3)}
(h) h(t) = e^{2t}u(−t)
2.16 Write a differential equation description relating the output to the input of the following electrical circuits:
(a) Fig. P2.16(a)
(b) Fig. P2.16(b)
2.17 Determine the natural response for the systems described by the following differential equations:
(a) 5(d/dt)y(t) + 10y(t) = 2x(t), y(0) = 3
(b) (d²/dt²)y(t) + 5(d/dt)y(t) + 6y(t) = 2x(t) + (d/dt)x(t), y(0) = 2, (d/dt)y(t)|_{t=0} = 1
(c) (d²/dt²)y(t) + 3(d/dt)y(t) + 2y(t) = x(t) + (d/dt)x(t), y(0) = 0, (d/dt)y(t)|_{t=0} = 1
(d) (d²/dt²)y(t) + 2(d/dt)y(t) + y(t) = (d/dt)x(t), y(0) = 1, (d/dt)y(t)|_{t=0} = 1
(e) (d²/dt²)y(t) + 4y(t) = 3(d/dt)x(t), y(0) = −1, (d/dt)y(t)|_{t=0} = 1
(f) (d²/dt²)y(t) + 2(d/dt)y(t) + 2y(t) = (d/dt)x(t), y(0) = 1, (d/dt)y(t)|_{t=0} = 0
2.18 Determine the natural response for the systems described by the following difference equations:
(a) y[n] − ay[n − 1] = 2x[n], y[−1] = 3
(b) y[n] − (9/16)y[n − 2] = x[n − 1], y[−1] = 1, y[−2] = −1

(e) y[n] =
-!yín - 1] - ½y[n - 2] = x[n] + (ií) x[n] = (})nu[n]
xln - 1], y[-1] = O, y[-2] = 1 (iii) xf n] = ei<rrt4 )nufn]
(d) y[n] + {6 y[n - 2] = xln - 11, y[-1] = 1, (iv) x[n] = (l)nu[n]
y[-2] = -1 (d) y[n] + y[n - 1] + ½yln - 2] = x[n] +
2x[n - 1}
(e) y[n] + y[n - 1] + ½y[n - 2J = x[n] +
2x[n - 1], y[-1] = -1,y[-21 = 1 . (i} x[n] = u[11]
2.19 Determine the forced response for the systems (ií) x[n] = (-½)»u[nl
described by the following differential equa- 2.21 Determine the output of the systems described
tions for the given inputs: by the following differential equations with in-
put and initial conditions as specified:
(a) 5 dt y(t) + 10y(t) = 2x(t) d
(a) dt y(t) + 10y(t} = 2x(t), y(O) = 1,
(i} x(t) = 2u(t)
x(t) = u(t}
(ii) x(t) = e- 1u(t)
(iii) x(t) = cos(3t)u(t} d2 d d
(b) dt 2 y(t) + 5 dt y(t) + 4y(t) = dt x(t),
d2 d d
(b) dt 2 y(t) + 5 dt y(t) + 6y(t} = 2x(t) + dt x(t) d
y(O) = O, dt y(t) = 1, .x-(f) = e 2
(i) x(t) = -2u(t) t=O

(íi} x(t) = 2e-tu(t} d2 d

(e) dt 2 y(t) + 3 dt y(t} + 2y(t) = 2x(t),
(iii) x(t) = sin(3t)u(t)
(iv) x(t) = se- 2 tu(t) d
. y(O} = -1, dt y(t) = 1, x(t) = cos(t)u(t)
d2 d d t=O
(e) dt 2 y(t} + 3 dt y(t) + 2y(t) = x(t) + dt x(t)
d2 d
(i) x(t) = Su(t)
(d) dt 2 y(t} + y(t) = 3 dt x(t), y(O) = -1,
(ii) x(t) = e21u(t) d
-d y(t) = 1, x(t} = 2e- u(t)1

(iii} x(t} = (cos(t) + sin(t))u(t) t t=O

(iv) x(t) = e-'u(t) 2.22 Determine the output of the systems described
by the following difference equations with input
d2 d d
(d) dt 2 y(t) + 2 dt y(t) + y(t) = dt x(t} and initial conditions as specified:
(a) y[n] - 2l y[n - 1] = 2x[n], y[-1} = 3,
(i) x(t) = e- 'u(t)
x[n] = 2(-½)nu[n]
(ii) x(t} = 2e- 1u(t)
(b) y[n] - ¼y[n - 21 = x[n - l], y[-11 = 1,
(iii) x(t) = 2 sin(t)u(t) y[-2] = O, x[n] = u[nl
2.20 Determine the forced response for the systems
described by the following difference equations (e) y[n] - Jy[n - 1] - ½y[n - 21 =
x[n] +
for the given inputs: x[n - 1], y[-1] = 2, y[-2] = -1,
x[nl = 2 11u[n]
(a) y[n] - ~yín - 1] = 2x[n]
(i) x [n] = 2u [ n] (d) y[n] - + !y[n - 2] =
¾y[n - 1]
(ii) x[n] = -(½)nu[nl 2x[n], y[-1] = 1, y[-2] = -1, xln] =
(iii) x[n] = cos(½1rn)ul11]
9 2.23 Find difference-equation descriptions for the
(b) y[1tJ - 16y[11 - 21 = x[n - 1]
four systems depicted in Fig. P2.23.
(i) x[n) = u[n} 2.24 Draw direct form I and direcr form li im-
(ii) x[n] = -(½)11u[n] plementations for the following difference
(iií) x[n] = (¾) u[nl 11
(e) y[n] - ¼y[n - 1] - !y[n - 2] = x[nj + (a) y[n} - lyln - 1] = 2x[n}
x[n - 1] (b) y[n] + ¼y[n - 1] - ½yln - 2] = x[n} +
(i) x[n] = -2u[n] x[n - 1]

FIGURE P2.23 Block diagrams of the four discrete-time systems for Problem 2.23. FIGURE P2.27 Block diagrams for Problem 2.27. (Diagram details are not recoverable from the scan.)

(e) y[n] - iy[n - 2] = 2x[n] + x[n - l J

(d) y[n] + !)'[n - 1] - y/n - 3] = 3xtn - 1]
+ 2xín - 21
2.25 Shc>\V that the direct form l and 11 implemen- - -,
E- S - -
tations depicted in Fig. 2.27 implement the
second-order integral equarion given by Eq. (b)
2.26 Convert rhe fc.>llowing differential equatíons to
integral equatio11s and draw direcr forn1 I and
xln1 fl
l" ~ ! 3

'f:-~-t- s --1:-- s -yftt]

direct form II implementations ,.1f thc corre- 1
....... _ -1 - -l
spondíng systems: 8 4 2
(a) dt y(t) + 10y(t) = 2x(t) (e)

d2 d
(b) dt 2 y(t) + 5 dt y(t) + 4y(t)
= dt x(t) -
? -1

xln1-:t S r • E- s
dt1. y(t) + y(t) = 3 dt x(t)
d "'. ~

l l

d d d
(d) dt 3 y(t) + 2 dt y(t) + 3y(t) = x(t) + 3 dt x(t) 1
2.27 Find differential-equation descriptions for rhe
three systems depicted in Fig. P2.27. FIGURE P2.28

2.28 Determine a state-variable description for the l

Ü -1
four discrete-time systems depicted in Fig. (a) A = 1
, b = ,
O -2 2
e= [1 1], D= [O]
2.29 Dra w block diagram system representations
corresponding to the following discrete-time 1 1 -1
stare-variable descriptions. (b) A= b =
1 O ' 2 '

(a) A=
O -½ , b=
, e= [1 -1],
e= [O -1], D= [O]
-.1 o
0 1 -1 o
D= [O] (e) A= O -1 ' b = 5 '
1 _l1 1 e= [1 O], D= [O]
(h) A= b= e= [1 -1],
13 o ' 2 '
(d) A= 1 -2 b = 2
D= [O] 1 1 , 3 '

O -½ b = O e = [1 1 ], D= [O]
(e) A = • -1 ' 1 , 2.32 Let a discrete-time system have the state-
e= [1 O], D= [1] variable description

O O 2 1 --21 1
(d) A= b = A= h=
O 1 ' 3 ' - o '
3 2 '
e= [1 -1], D= [O] e= [1 -1], D= [O]
2.30 Deter1nine a state-variable description for the (a) Define new states q; [n] = 2q1 lnl, qíln l
five continuous-time systems dcpicted in Fig. 3q2 [n]. Find the new state-variahle descrip-
P2.30. . A' , b' , e ' , D' .
2.31 Draw block diagram system representations (b) Define new states qi[nl = 3q2[n], qílnl =
corresponding to the following continuous-time 2q 1 [n ]. Find the new state-variable descrip-
state-variable descriprions: tion A', b', e', D'.

FIGURE P2.30 Block diagrams of the five continuous-time systems, (a)-(e), for Problem 2.30. (Diagram details are not recoverable from the scan.)


(e) Define new states q;[n] = q 1 [n] + q2 [n], pression for the system output derived in (b)
qí[nl = q1[nl - q2 [n]. Find the new state- reduces to x(t) * h(t) in the limit as â goes
variable description A', b', e'; D'. to zero.
2.33 Consider the continuous-time system depicted
in Fig. P2.33.
{a) Find the state variable description for this
system assuming the states q 1(t) and q2 (t) 1/.ó.
are as labeled.
(b) Define new states q;(t) = q 1(t) - q2(t),
qí(t) = 2q 1 (t). Find the new state-variable ' t
-t::./2 D,.12
description A', b', e', D'.
(e) Draw a block diagram corresponding to the
new state-variable descriptiort in (b).
(d) Define new states qi(t) = (l/b 1)q1(t), q2(t)
= b2q1 (t) - h1q2 (t). Find the riew state-vari-
able description A', b', e', D'.
(e) Draw a block díagram corresponding to the
new state-variable description in (d).

Í .. y(t) x(2/l)
-.L....---J_--l----l----i.~-+---1,....!!~--- t



)~2.35 ln this problem we use linearity, time invari-
ance, and representation of an impulse as the
*2.34 We may develop the convolution integral using limiting form of a pulse to obtain the impulse
linearity, time invariance, and the limiting form response of a simple RC circuit. The voltage
of a stairstep approximation to the input signal. across the capacitor, y(t), in the RC circuit of
Define gll.(t) as the unir area rectangular pulse Fig. P2.35{a) in response to an applied voltage
depicted in Fig. P2.34(a). x(t) = u(t) is given by
(a) A stairstep approximation to a signal x{t) is s(t) = {1 - e- ttRc}u(t)
depicted in Fig. P2.34(b). Express x(t) as a
weighted sum of shifted pulses g~(t). Does (See Drill Problems 2.8 and 2.12.) We wish to
the approximation quality improve as ~ find the impulse response of the system relating
decreases? the input voltage x(t) to the voltage across the
(b) Ler the response of a LTI system to an input capacitor y(t).
gt:..(t) be ha(t). If the input to .this system is (a) Write the pulse input x(t) = gt:..{t) depicted
.x{t), find an expression for the output of this in Fig. P2.35(b) as a weighted sum of step
system in terms of ht:..(t). functions.
(e) ln the limit as ti goes to zero; g,i(t) satisfies (b} Use linearity, time invariance, and knowl-
the properties of an impulse. and we may edge of the step response of this circuit to
interpret h(t) = lima-oha(t) as the impulse express the output of the circuit in response
response of the system. Show that the ex- to the input x(t) = g 6 (t) in terms of s(t).

(c) In the limit as Δ → 0 the pulse input g_Δ(t) approaches an impulse. Obtain the impulse response of the circuit by taking the limit as Δ → 0 of the output obtained in (b). Hint: Use the definition of the derivative

(d/dt)z(t) = lim_{Δ→0} [z(t + Δ/2) − z(t − Δ/2)] / Δ

FIGURE P2.35 (a) The RC circuit with applied voltage x(t) and capacitor voltage y(t). (b) The pulse input g_Δ(t) of height 1/Δ for −Δ/2 < t < Δ/2. (Remaining details are not recoverable from the scan.)

*2.36 The cross-correlation between two real signals x(t) and y(t) is defined as

r_xy(t) = ∫_{−∞}^{∞} x(τ)y(τ − t) dτ

This is the area under the product of x(t) and a shifted version of y(t). Note that the independent variable τ − t is the negative of that found in the definition of convolution. The autocorrelation, r_xx(t), of a signal x(t) is obtained by replacing y(t) with x(t).
(a) Show that r_xy(t) = x(t) * y(−t).
(b) Derive a step-by-step procedure for evaluating the cross-correlation analogous to the one for evaluating the convolution integral given in Section 2.2.
(c) Evaluate the cross-correlation between the following signals:
(i) x(t) = e^{−t}u(t), y(t) = e^{−3t}u(t)
(ii) x(t) = cos(πt)[u(t + 2) − u(t − 2)], y(t) = cos(2πt)[u(t + 2) − u(t − 2)]
(iii) x(t) = u(t) − 2u(t − 1) + u(t − 2), y(t) = u(t + 1) −
(iv) x(t) = u(t − a) − u(t − a − 1), y(t) = u(t) − u(t − 1)
(d) Evaluate the autocorrelation of the following signals:
(i) x(t) = e^{−t}u(t)
(ii) x(t) = cos(πt)[u(t + 2) − u(t − 2)]
(iii) x(t) = u(t) − 2u(t − 1) + u(t − 2)
(iv) x(t) = u(t − a) − u(t − a − 1)
(e) Show that r_xy(t) = r_yx(−t).
(f) Show that r_xx(t) = r_xx(−t).

• Computer Experiments
2.37 Repeat Problem 2.3 using the MATLAB command conv.
2.38 Use MATLAB to repeat Example 2.6.
2.39 Use MATLAB to evaluate the first 20 values of the step response for the systems in Problem
2.40 Consider the three moving-average systems defined in Example 2.6.
(a) Use MATLAB to evaluate and plot 50 values of the sinusoidal steady-state response at frequencies of Ω = π/3 and Ω = 2π/3 for each system.
(b) Use the results of (a) to determine the magnitude and phase response of each system at frequencies Ω = π/3 and Ω = 2π/3.
(c) Obtain a closed-form expression for the magnitude response of each system and plot it on −π < Ω ≤ π using MATLAB.
2.41 Consider the two systems having impulse responses

h1[n] = 1/4, 0 ≤ n ≤ 3
        0, otherwise

h2[n] = 1/4, n = 0, 2
        −1/4, n = 1, 3
        0, otherwise

(a) Use the MATLAB command conv to plot the first 20 values of the step response.
(b) Obtain a closed-form expression for the magnitude response and plot it on −π < Ω ≤ π using MATLAB.
2.42 Use the MATLAB commands filter and filtic to repeat Example 2.16.
2.43 Use the MATLAB commands filter and filtic to determine the first 50 output values in Problem 2.22.
2.44 The magnitude response of a system described by a difference equation may be obtained from the output y[n] by applying an input x[n] = e^{jΩn}u[n] to a system that is initially at rest. Once the natural response of the system has decayed to a negligible value, y[n] is due only to the input and we have y[n] ≈ H(e^{jΩ})e^{jΩn}.
(a) Determine the value n0 for which each term in the natural response of the system in Example 2.16 is a factor of 1000 smaller than its value at time n = 0.
(b) Show that |H(e^{jΩ})| = |y[n0]|.
(c) Use the results in (a) and (b) to experimentally determine the magnitude response of this system with the MATLAB command filter. Plot the magnitude response for input frequencies in the range −π < Ω ≤ π.
2.45 Use the MATLAB command impz to determine the first 30 values of the impulse response for the systems described in Problem 2.22.
2.46 Use the MATLAB command ss2ss to solve Problem 2.32.
2.47 A system has the state-variable description

A = (2 × 2 matrix; entries not recoverable from the scan), b = (entries not recoverable),
c = [1 −1], D = [0]

(a) Use the MATLAB commands lsim and impulse to determine the first 30 values of the step and impulse responses of this system.
(b) Define new states q1′[n] = q1[n] + q2[n] and q2′[n] = 2q1[n] − q2[n]. Repeat part (a) for the transformed system.
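The computer experiments above use MATLAB's conv, filter, and related commands; equivalent checks can be sketched in Python/NumPy instead (an assumption of this sketch, since the text itself uses MATLAB). For example, the discrete-time analog of the cross-correlation identity in Problem 2.36(a), r_xy = x * y(−·):

```python
import numpy as np

# Arbitrary finite-length test sequences (illustrative values only).
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5])

# Cross-correlation over all lags ...
r_corr = np.correlate(x, y, mode="full")
# ... equals the convolution of x with the time-reversed y (for real signals).
r_conv = np.convolve(x, y[::-1])
```

The two sequences agree sample for sample, which is exactly the relationship Problem 2.36(a) asks you to prove in continuous time.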
Fourier Representations for Signals


3.1 Introduction
In this chapter we consider representing a signal as a weighted superposition of complex
sinusoids. If such a signal is applied to a linear system, then the system output is a weighted
superposition of the system response to each complex sinusoid. A similar application of
the linearity property was exploited in the previous chapter to develop the convolution
integral and convolution sum. In Chapter 2, the input signal was expressed as a weighted
superposition of time-shifted impulses; the output was then given by a weighted super-
position of time-shifted versions of the system's impulse response. The expression for the
output that resulted from expressing signals in terms of impulses was termed "convolu-
tion." By representing signals in terms of sinusoids, we will obtain an alternative expression
for the input-output behavior of a LTI system.
Representation of signals as superpositions of complex sinusoids not only leads to a
useful expression for the system output but also provides a very insightful characterization
of signals and systems. The focus of this chapter is representation of signals using complex
sinusoids and the properties of such representations. Applications of these representations
to system and signal analysis are emphasized in the following chapter.
The study of signals and systems using sinusoidal representations is termed Fourier
analysis after Joseph Fourier (1768-1830) for his contributions to the theory of repre-
senting functions as weighted superpositions of sinusoids. Fourier methods have wide-
spread application beyond signals and systems; they are used in every branch of engineering
and science.


The sinusoidal steady-state response of a LTI system was introduced in Section 2.3. We
showed that a complex sinusoid input to a LTI system generates an output equal to the
sinusoidal input multiplied by the system frequency response. That is, in discrete time, the
input x[n] = e^{jΩn} results in the output

y[n] = H(e^{jΩ})e^{jΩn}

where the frequency response H(e^{jΩ}) is defined in terms of the impulse response h[n] as

H(e^{jΩ}) = Σ_{k=−∞}^{∞} h[k]e^{−jΩk}

In continuous time, the input x(t) = e^{jωt} results in the output

y(t) = H(jω)e^{jωt}

where the frequency response H(jω) is defined in terms of the impulse response h(t) as

H(jω) = ∫_{−∞}^{∞} h(τ)e^{−jωτ} dτ
We say that the complex sinusoid ψ(t) = e^{jωt} is an eigenfunction of the system H
associated with the eigenvalue λ = H(jω) because it satisfies an eigenvalue problem de-
scribed by

H{ψ(t)} = λψ(t)

This eigenrelation is illustrated in Fig. 3.1. The effect of the system on an eigenfunction
input signal is one of scalar multiplication: the output is given by the product of the input
and a complex number. This eigenrelation is analogous to the more familiar matrix eigen-
value problem. If e_k is an eigenvector of a matrix A with eigenvalue λ_k, then we have

Ae_k = λ_k e_k

Multiplying e_k by the matrix A is equivalent to multiplying e_k by the scalar λ_k.

Signals that are eigenfunctions of systems play an important role in systems theory.
By representing arbitrary signals as weighted superpositions of eigenfunctions, we trans-
form the operation of convolution to one of multiplication. To see this, consider expressing
the input to a LTI system as the weighted sum of M complex sinusoids

x(t) = Σ_{k=1}^{M} a_k e^{jω_k t}

If e^{jω_k t} is an eigenfunction of the system with eigenvalue H(jω_k), then each term in the
input, a_k e^{jω_k t}, produces an output term, a_k H(jω_k)e^{jω_k t}. Hence we express the output of the
system as

y(t) = Σ_{k=1}^{M} a_k H(jω_k)e^{jω_k t}

The output is a weighted sum of M complex sinusoids, with the weights, a_k, modified by
the system frequency response, H(jω_k). The operation of convolution, h(t) * x(t), becomes
multiplication, a_k H(jω_k), because x(t) is expressed as a sum of eigenfunctions. The anal-
ogous relationship holds in the discrete-time case.
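The discrete-time counterpart of this multiplication property can be verified numerically. In the sketch below (Python/NumPy; the FIR impulse response, the frequencies Ω_k, and the weights a_k are all arbitrary illustrative values), the convolution sum applied to a weighted sum of complex sinusoids matches the eigenfunction prediction Σ_k a_k H(e^{jΩ_k}) e^{jΩ_k n}:

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])      # arbitrary FIR impulse response
omegas = np.array([0.3, 1.1])       # sinusoid frequencies Omega_k (arbitrary)
a = np.array([2.0, -1.0 + 0.5j])    # weights a_k (arbitrary)

def H(omega):
    """Frequency response of h: sum_k h[k] e^{-j Omega k}."""
    k = np.arange(len(h))
    return np.sum(h * np.exp(-1j * omega * k))

def x(n):
    """Input: weighted sum of complex sinusoids, x[n] = sum_k a_k e^{j Omega_k n}."""
    return np.sum(a * np.exp(1j * omegas * n))

n0 = 10
# Convolution sum evaluated directly at time n0: y[n0] = sum_k h[k] x[n0 - k].
y_direct = sum(h[k] * x(n0 - k) for k in range(len(h)))
# Eigenfunction prediction: y[n0] = sum_k a_k H(Omega_k) e^{j Omega_k n0}.
y_pred = np.sum(a * np.array([H(w) for w in omegas]) * np.exp(1j * omegas * n0))
```

The two results agree to floating-point precision: convolution in the time domain has become multiplication of each weight by the corresponding frequency-response value.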
This property is a powerful motivation for representing signals as weighted super-
positions of complex sinusoids. In addition, the weights provide an alternative interpre-
tation of the signal. Rather than describing the signal behavior as a function of time, the

FIGURE 3.1 Illustration of the eigenfunction property of linear systems. The action of the
system on an eigenfunction input is one of multiplication by the corresponding eigenvalue.
(a) General eigenfunction ψ(t) or ψ[n] and eigenvalue λ. (b) Complex sinusoid eigenfunction e^{jωt}
and eigenvalue H(jω). (c) Complex sinusoid eigenfunction e^{jΩn} and eigenvalue H(e^{jΩ}).

weights describe the signal as a function of frequency. The general notion of describing
complicated signals as a function of frequency is commonly encountered in music. For
example, the musical score for an orchestra contains parts for instruments having different
frequency ranges, such as a string bass, which produces very low frequency sound, and a
piccolo, which produces very high frequency sound. The sound that we hear when listening
to an orchestra represents the superposition of sounds generated by each instrument. Sim-
ilarly, the score for a choir contains bass, tenor, alto, and soprano parts, each of which
contributes to a different frequency range in the overall sound. The signal representations
developed in this chapter can be viewed analogously: the weight associated with a sinusoid
of a given frequency represents the contribution of that sinusoid to the overall signal. A
frequency-domain view of signals is very informative, as we shall see in what follows.


There are four distinct Fourier representations, each applicable to a different class <>f sig-
nals. These four classes are defined by the peri<><lícity properties of a signal and whether
it is continuous or discrete time. Periodic signals have Fourier series represcntations. The
Fc>urier series (FS) applies to C<)ntinuous-time periodic signals and the discrete-time Fourier
series (DTFS) applies to discrete-time periodic signals. N()nperiodic signals have Fourier
transform representations. If the signal is continuous time and nonperiodic, the represen-
tation is termed the Fourier transform (FT). If the signal is discrete time and nonperiodic,
then the discrete-time Fourier transform (DTFT) is used. Table 3.1 illustrates the relatic>n-
ship between the time properties of a signal and the appropriate Fourier representation.
The DTFS is often referred to as. the discrete Fourier transform or DFT; however, this
termínc)l<1gy does not correctly reflect the series nature of the DTFS and often leads to
cc>nfusion with the DTFT S<J we adopt the mc>re descriptive DTFS terminc>logy.

TABLE 3.1 Relationship Between Time Properties
of a Signal and the Appropriate Fourier Representation

Time Property     Periodic                               Nonperiodic
Continuous (t)    Fourier Series (FS)                    Fourier Transform (FT)
Discrete (n)      Discrete-Time Fourier Series (DTFS)    Discrete-Time Fourier Transform (DTFT)

Periodic Signals: Fourier Series Representations

Consider representing a periodic signal as a weighted superposition of complex si-
nusoids. Since the weighted superposition must have the same period as the signal, each
sinusoid in the superposition must have the same period as the signal. This implies that
the frequency of each sinusoid must be an integer multiple of the signal's fundamental
frequency. If x[n] is a discrete-time signal of fundamental period N, then we seek to rep-
resent x[n] by the DTFS

x̂[n] = Σ_k A[k]e^{jkΩ₀n}   (3.1)

where Ω₀ = 2π/N is the fundamental frequency of x[n]. The frequency of the kth sinusoid
in the superposition is kΩ₀. Similarly, if x(t) is a continuous-time signal of fundamental
period T, we represent x(t) by the FS

x̂(t) = Σ_k A[k]e^{jkω₀t}   (3.2)

where ω₀ = 2π/T is the fundamental frequency of x(t). Here the frequency of the kth
sinusoid is kω₀. In both Eqs. (3.1) and (3.2), A[k] is the weight applied to the kth complex
sinusoid and the hat ^ denotes approximate value, since we do not yet assume that either
x[n] or x(t) can be represented exactly by a series of this form.
How many terms and weights should we use in each sum? Beginning with the DTFS
described in Eq. (3.1), the answer to this question becomes apparent if we recall that
complex sinusoids with distinct frequencies are not always distinct. In particular, the com-
plex sinusoids e^{jkΩ₀n} are N periodic in the frequency index k. We have

e^{j(N+k)Ω₀n} = e^{jNΩ₀n} e^{jkΩ₀n}
             = e^{j2πn} e^{jkΩ₀n}
             = e^{jkΩ₀n}

Thus there are only N distinct complex sinusoids of the form e^{jkΩ₀n}. A unique set of N
distinct complex sinusoids is obtained by letting the frequency index k take on any N
consecutive values. Hence we may rewrite Eq. (3.1) as

x̂[n] = Σ_{k=⟨N⟩} A[k]e^{jkΩ₀n}   (3.3)

where the notation k = ⟨N⟩ implies letting k range over any N consecutive values. The set
of N consecutive values over which k varies is arbitrary and is usually chosen to simplify
the problem by exploiting symmetries in the signal x[n]. Common choices are k = 0 to
N − 1 and, for N even, k = −N/2 to N/2 − 1.
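The N-periodicity of e^{jkΩ₀n} in the frequency index k is easy to confirm numerically (a sketch in Python/NumPy; the choices N = 8 and k = 3 are arbitrary):

```python
import numpy as np

N = 8                        # arbitrary period
Omega0 = 2 * np.pi / N       # fundamental frequency
n = np.arange(N)
k = 3                        # any frequency index

s_k = np.exp(1j * k * Omega0 * n)          # e^{j k Omega0 n}
s_kN = np.exp(1j * (k + N) * Omega0 * n)   # e^{j (N + k) Omega0 n}
# Since e^{j N Omega0 n} = e^{j 2 pi n} = 1, the two sequences are identical.
```

Any other pair of indices separated by a multiple of N gives the same agreement, which is why only N consecutive values of k are needed in Eq. (3.3).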
In order to determine the weights or coefficients A[k], we shall minimize the mean-
squared error (MSE) between the signal and its series representation. The construction of
the series representation ensures that both the signal and the representation are periodic
with the same period. Hence the MSE is the average of the squared difference between the
signal and its representation over any one period. In the discrete-time case only N consec-
utive values of x[n] and x̂[n] are required since both are N periodic. We have

MSE = (1/N) Σ_{n=⟨N⟩} |x[n] − x̂[n]|²

where we again use the notation n = ⟨N⟩ to indicate summation over any N consecutive
values. We leave the interval for evaluating the MSE unspecified since it will later prove
convenient to choose different intervals in different problems.

In contrast to the discrete-time case, continuous-time complex sinusoids e^{jkω₀t} with
distinct frequencies kω₀ are always distinct. Hence there are potentially an infinite number
of distinct terms in the series of Eq. (3.2) and we approximate x(t) as

x̂(t) = Σ_{k=−∞}^{∞} A[k]e^{jkω₀t}   (3.5)

We seek coefficients A[k] so that x̂(t) is a good approximation to x(t).

Nonperiodic Signals: Fourier Transform Representations

In contrast to the periodic signal case, there are no restrictions on the period of the
sinusoids used to represent nonperiodic signals. Hence the Fourier transform representa-
tions employ complex sinusoids having a continuum of frequencies. The signal is repre-
sented as a weighted integral of complex sinusoids where the variable of integration is the
sinusoid's frequency. Discrete-time sinusoids are used to represent discrete-time signals in
the DTFT, while continuous-time sinusoids are used to represent continuous-time signals
in the FT. Continuous-time sinusoids with distinct frequencies are distinct, so the FT in-
volves sinusoidal frequencies from −∞ to ∞. Discrete-time sinusoids are only unique over
a 2π interval of frequency, since discrete-time sinusoids with frequencies separated by an
integer multiple of 2π are identical. Hence the DTFT involves sinusoidal frequencies within
a 2π interval.
The next four sections of this chapter develop, in sequence, the DTFS, FS, DTFT,
and FT. The remainder of the chapter explores the properties of these four representations.
All four representations are based on complex sinusoidal basis functions and thus have
analogous properties.


The orthogonality of complex sinusoids plays a key role in Fourier representations. We
say that two signals are orthogonal if their inner product is zero. For discrete-time periodic
signals, the inner product is defined as the sum of values in their product. If φ_k[n] and
φ_m[n] are two N periodic signals, their inner product is

I_{k,m} = Σ_{n=⟨N⟩} φ_k[n] φ*_m[n]

Note that the inner product is defined using complex conjugation when the signals are
complex valued. If I_{k,m} = 0 for k ≠ m, then φ_k[n] and φ_m[n] are orthogonal. Correspond-
ingly, for continuous-time signals with period T, the inner product is defined in terms of
an integral, as shown by

I_{k,m} = ∫_{⟨T⟩} φ_k(t) φ*_m(t) dt

where the notation ⟨T⟩ implies integration over any interval of length T. As in discrete
time, if I_{k,m} = 0 for k ≠ m, then we say φ_k(t) and φ_m(t) are orthogonal.
Beginning with the discrete-time case, let φ_k[n] = e^{jkΩ₀n} be a complex sinusoid with
frequency kΩ₀. Choosing the interval n = 0 to n = N - 1, the inner product is given by

I_{k,m} = Σ_{n=0}^{N-1} e^{j(k-m)Ω₀n}

Assuming k and m are restricted to the same interval of N consecutive values, this is a
finite geometric series whose sum depends on whether k = m or k ≠ m, as shown by

Σ_{n=0}^{N-1} e^{j(k-m)Ω₀n} = N,  k = m
                             = (1 - e^{j(k-m)2π}) / (1 - e^{j(k-m)Ω₀}),  k ≠ m

Now use e^{j(k-m)2π} = 1 to obtain

Σ_{n=0}^{N-1} e^{j(k-m)Ω₀n} = N,  k = m
                             = 0,  k ≠ m        (3.6)

This result indicates that complex sinusoids with frequencies separated by an integer mul-
tiple of the fundamental frequency are orthogonal. We shall use this result in deriving the
DTFS representation.
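The orthogonality relation of Eq. (3.6) is easy to confirm numerically. The sketch below (our own function name, with an arbitrarily chosen N) evaluates the inner product directly:

```python
import numpy as np

def inner_product(k, m, N):
    """I_{k,m} = sum over one period of phi_k[n] * conj(phi_m[n])."""
    n = np.arange(N)
    Omega0 = 2 * np.pi / N
    return np.sum(np.exp(1j * k * Omega0 * n) * np.conj(np.exp(1j * m * Omega0 * n)))

N = 16
assert np.isclose(inner_product(3, 3, N), N)       # k = m: inner product is N
assert np.isclose(inner_product(3, 5, N), 0)       # k != m: orthogonal
assert np.isclose(inner_product(3, 3 + N, N), N)   # frequencies separated by N*Omega0 coincide
```

The last assertion illustrates the 2π-periodicity of discrete-time sinusoids noted earlier: indices k and k + N describe the same sinusoid.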
Continuous-time complex sinusoids with frequencies separated by an integer multi-
ple of the fundamental frequency are also orthogonal. Letting φ_k(t) = e^{jkω₀t}, the inner
product between e^{jkω₀t} and e^{jmω₀t} is expressed as

I_{k,m} = ∫_{(T)} e^{j(k-m)ω₀t} dt

This integral takes on two values, depending on the value of k - m, as shown by

I_{k,m} = T,  k = m
        = (1/(j(k-m)ω₀)) (e^{j(k-m)ω₀T} - 1),  k ≠ m

Using the fact that e^{j(k-m)ω₀T} = e^{j(k-m)2π} = 1, we obtain

I_{k,m} = T,  k = m
        = 0,  k ≠ m        (3.7)

This property is central to determining the FS coefficients.

3.2 Discrete-Time Periodic Signals:

The Discrete-Time Fourier Series
The DTFS represents an N periodic discrete-time signal x[n] as the series of Eq. (3.3):

x̂[n] = Σ_{k=(N)} A[k] e^{jkΩ₀n}

where Ω₀ = 2π/N.
In order to choose the DTFS coefficients A[k], we now minimize the MSE defined in
Eq. (3.4), rewritten as

MSE = (1/N) Σ_{n=(N)} |x[n] - x̂[n]|²

    = (1/N) Σ_{n=(N)} |x[n] - Σ_{k=(N)} A[k] e^{jkΩ₀n}|²

Minimization of the MSE is instructive, although it involves tedious algebraic manipula-
tion. The end result is an expression for A[k] in terms of x[n]. Also, by examining the
minimum value of the MSE, we are able to establish the accuracy with which x̂[n] ap-
proximates x[n].
The magnitude squared of a complex number c is given by |c|² = cc*. Expanding
the magnitude squared in the sum using |a + b|² = (a + b)(a + b)* yields

MSE = (1/N) Σ_{n=(N)} ( x[n] - Σ_{k=(N)} A[k] e^{jkΩ₀n} ) ( x[n] - Σ_{m=(N)} A[m] e^{jmΩ₀n} )*

Now multiply each term to obtain

MSE = (1/N) Σ_{n=(N)} |x[n]|² - Σ_{m=(N)} A*[m] ( (1/N) Σ_{n=(N)} x[n] e^{-jmΩ₀n} )
      - Σ_{k=(N)} A[k] ( (1/N) Σ_{n=(N)} x*[n] e^{jkΩ₀n} )
      + Σ_{m=(N)} Σ_{k=(N)} A[k] A*[m] ( (1/N) Σ_{n=(N)} e^{j(k-m)Ω₀n} )

Define

X[k] = (1/N) Σ_{n=(N)} x[n] e^{-jkΩ₀n}    (3.8)

and apply the orthogonality property of discrete-time complex sinusoids, Eq. (3.6), to the
last term in the MSE. Hence we may write the MSE as

MSE = (1/N) Σ_{n=(N)} |x[n]|² - Σ_{k=(N)} A*[k] X[k] - Σ_{k=(N)} A[k] X*[k] + Σ_{k=(N)} |A[k]|²
Now use the technique of "completing the square" to write the MSE as a perfect
square in the DTFS coefficients A[k]. Add and subtract Σ_{k=(N)} |X[k]|² on the right-hand
side of the MSE, so that it may be written as

MSE = (1/N) Σ_{n=(N)} |x[n]|² + Σ_{k=(N)} ( |A[k]|² - A*[k]X[k] - A[k]X*[k] + |X[k]|² )
      - Σ_{k=(N)} |X[k]|²
Rewrite the middle sum as a square to obtain

MSE = (1/N) Σ_{n=(N)} |x[n]|² + Σ_{k=(N)} |A[k] - X[k]|² - Σ_{k=(N)} |X[k]|²    (3.9)

The dependence of the MSE on the unknown DTFS coefficients A[k] is confined to the
middle term of Eq. (3.9), and this term is always nonnegative. Hence the MSE is minimized
by forcing the middle term to zero with the choice

A[k] = X[k]

These coefficients minimize the MSE between x̂[n] and x[n].
Note that X[k] is N periodic in k, since

X[k + N] = (1/N) Σ_{n=(N)} x[n] e^{-j(k+N)Ω₀n}

Using the fact that e^{-jNΩ₀n} = e^{-j2πn} = 1, we obtain

X[k + N] = (1/N) Σ_{n=(N)} x[n] e^{-jkΩ₀n}
         = X[k]

which establishes that X[k] is N periodic.
The value of the minimum MSE determines how well x̂[n] approximates x[n]. We
determine the minimum MSE by substituting A[k] = X[k] into Eq. (3.9) to obtain

MSE = (1/N) Σ_{n=(N)} |x[n]|² - Σ_{k=(N)} |X[k]|²    (3.10)

We next substitute Eq. (3.8) into the second term of Eq. (3.10) to obtain

Σ_{k=(N)} |X[k]|² = Σ_{k=(N)} (1/N²) Σ_{n=(N)} Σ_{m=(N)} x[n] x*[m] e^{j(m-n)Ω₀k}

Interchange the order of summation to write

Σ_{k=(N)} |X[k]|² = (1/N) Σ_{n=(N)} Σ_{m=(N)} x[n] x*[m] ( (1/N) Σ_{k=(N)} e^{j(m-n)Ω₀k} )    (3.11)

Equation (3.11) is simplified by recalling that e^{jmΩ₀k} and e^{jnΩ₀k} are orthogonal. Referring
to Eq. (3.6), we have

(1/N) Σ_{k=(N)} e^{j(m-n)Ω₀k} = 1,  n = m
                               = 0,  n ≠ m

This reduces the double sum over m and n on the right-hand side of Eq. (3.11) to the single
sum

Σ_{k=(N)} |X[k]|² = (1/N) Σ_{n=(N)} |x[n]|²

Substituting this result into Eq. (3.10) gives MSE = 0. That is, if the DTFS coefficients are
given by Eq. (3.8), then the MSE between x̂[n] and x[n] is zero. Since the MSE is zero, the
error is zero for each value of n and thus x̂[n] = x[n].
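The identity just derived, Σ_{k=(N)} |X[k]|² = (1/N) Σ_{n=(N)} |x[n]|², can be checked numerically for an arbitrary period. A minimal sketch (the test signal is our own choice):

```python
import numpy as np

N = 12
rng = np.random.default_rng(0)
x = rng.standard_normal(N)                   # one period of an arbitrary real signal

n = np.arange(N)
k = np.arange(N).reshape(-1, 1)
Omega0 = 2 * np.pi / N
X = np.sum(x * np.exp(-1j * k * Omega0 * n), axis=1) / N   # Eq. (3.8)

# zero MSE implies sum_k |X[k]|^2 = (1/N) sum_n |x[n]|^2
assert np.isclose(np.sum(np.abs(X) ** 2), np.mean(np.abs(x) ** 2))
```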


The DTFS representation for x[n] is given by

x[n] = Σ_{k=(N)} X[k] e^{jkΩ₀n}    (3.12)

X[k] = (1/N) Σ_{n=(N)} x[n] e^{-jkΩ₀n}    (3.13)

where x[n] has fundamental period N and Ω₀ = 2π/N. We say that x[n] and X[k] are a
DTFS pair and denote this relationship as

x[n] ←—DTFS; Ω₀—→ X[k]

From N values of X[k] we may determine x[n] using Eq. (3.12), and from N values of
x[n] we may determine X[k] using Eq. (3.13). Either X[k] or x[n] provides a complete
description of the signal. We shall see that in some problems it is advantageous to represent
the signal using its time values x[n], while in others the DTFS coefficients X[k] offer a
more convenient description of the signal. The DTFS coefficient representation is also
known as a frequency-domain representation because each DTFS coefficient is associated
with a complex sinusoid of a different frequency.
Before presenting several examples illustrating the DTFS, we remind the reader that
the starting values of the indices k and n in Eqs. (3.12) and (3.13) are arbitrary because
both x[n] and X[k] are N periodic. The range for the indices may thus be chosen to simplify
the problem at hand.
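Equations (3.12) and (3.13) translate directly into a few lines of code. A sketch (function names are our own) confirming that analysis followed by synthesis recovers the original period:

```python
import numpy as np

def dtfs_analysis(x):
    """Eq. (3.13): X[k] = (1/N) sum_n x[n] exp(-j k Omega0 n)."""
    N = len(x)
    n = np.arange(N)
    k = np.arange(N).reshape(-1, 1)
    return np.sum(x * np.exp(-2j * np.pi * k * n / N), axis=1) / N

def dtfs_synthesis(X):
    """Eq. (3.12): x[n] = sum_k X[k] exp(j k Omega0 n)."""
    N = len(X)
    k = np.arange(N)
    n = np.arange(N).reshape(-1, 1)
    return np.sum(X * np.exp(2j * np.pi * k * n / N), axis=1)

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5])    # one period, N = 6
X = dtfs_analysis(x)
assert np.allclose(dtfs_synthesis(X), x)          # either description is complete
```

For large N the same coefficients are obtained more efficiently as `np.fft.fft(x) / N`, since NumPy's FFT uses the same sign convention without the 1/N factor.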

EXAMPLE 3.1 Find the DTFS representation for

x[n] = cos((π/8)n + φ)

Solution: The fundamental period of x[n] is N = 16. Hence Ω₀ = 2π/16. We could determine
the DTFS coefficients using Eq. (3.13); however, in this case it is easier to find them by in-
spection. Write

x[n] = ( e^{j[(π/8)n+φ]} + e^{-j[(π/8)n+φ]} ) / 2    (3.14)

and compare this to the DTFS of Eq. (3.12) written using a starting index k = -7:

x[n] = Σ_{k=-7}^{8} X[k] e^{jk(π/8)n}    (3.15)

Equating the terms in Eq. (3.14) and Eq. (3.15) having equal frequencies, kπ/8, gives

x[n] ←—DTFS; 2π/16—→ X[k] = (1/2)e^{-jφ},  k = -1
                            = (1/2)e^{jφ},   k = 1
                            = 0,             -7 ≤ k ≤ 8 and k ≠ ±1

Since X[k] has period N = 16, we have X[15] = X[31] = ··· = (1/2)e^{-jφ}, and similarly X[17] =
X[33] = ··· = (1/2)e^{jφ}, with all other values of X[k] equal to zero. Plots of the magnitude and
phase of X[k] are depicted in Fig. 3.2.
In general it is easiest to determine the DTFS coefficients by inspection when the signal
consists of a sum of sinusoids.
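The inspection result can be confirmed by evaluating Eq. (3.13) directly. A sketch, with an arbitrarily chosen phase φ:

```python
import numpy as np

N, phi = 16, 0.3                     # phi is an arbitrary phase, our choice
n = np.arange(N)
x = np.cos(np.pi / 8 * n + phi)

k = np.arange(N).reshape(-1, 1)
X = np.sum(x * np.exp(-2j * np.pi * k * n / N), axis=1) / N   # Eq. (3.13)

# only k = 1 and k = 15 (i.e., k = -1 modulo 16) are nonzero
assert np.isclose(X[1], 0.5 * np.exp(1j * phi))
assert np.isclose(X[15], 0.5 * np.exp(-1j * phi))
assert np.allclose(np.delete(X, [1, 15]), 0)
```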

FIGURE 3.2 Magnitude and phase of DTFS coefficients for Example 3.1.

The magnitude of X[k], |X[k]|, is known as the magnitude spectrum of x[n]. Simi-
larly, the phase of X[k], arg{X[k]}, is known as the phase spectrum of x[n]. In the previous
example all the components of x[n] are concentrated at two frequencies, Ω₀ (k = 1) and
-Ω₀ (k = -1).
• Drill Problem 3.1 Determine the DTFS coefficients by inspection for the signal

x[n] = 1 + sin((π/12)n + 3π/8)

Answer:

x[n] ←—DTFS; 2π/24—→ X[k] = -(1/(2j)) e^{-j3π/8},  k = -1
                            = 1,                     k = 0
                            = (1/(2j)) e^{j3π/8},    k = 1
                            = 0,                     otherwise on -11 ≤ k ≤ 12  •
The next example directly evaluates Eq. (3.13) to determine the DTFS coefficients.

EXAMPLE 3.2 Find the DTFS coefficients for the N periodic square wave depicted in
Fig. 3.3.

Solution: The period is N, so Ω₀ = 2π/N. It is convenient in this case to evaluate Eq. (3.13)
over the indices n = -M to n = N - M - 1. We thus have

X[k] = (1/N) Σ_{n=-M}^{N-M-1} x[n] e^{-jkΩ₀n}

     = (1/N) Σ_{n=-M}^{M} e^{-jkΩ₀n}

Perform the change of variable on the index of summation, m = n + M, to obtain

X[k] = (1/N) e^{jkΩ₀M} Σ_{m=0}^{2M} e^{-jkΩ₀m}

Summing the geometric series yields

X[k] = (1/N) e^{jkΩ₀M} (1 - e^{-jkΩ₀(2M+1)}) / (1 - e^{-jkΩ₀}),    k ≠ 0, ±N, ±2N, ...

FIGURE 3.3 Square wave for Example 3.2.


which may be rewritten as

X[k] = (1/N) ( e^{jkΩ₀(2M+1)/2} / e^{jkΩ₀/2} ) · (1 - e^{-jkΩ₀(2M+1)}) / (1 - e^{-jkΩ₀})

     = (1/N) ( e^{jkΩ₀(2M+1)/2} - e^{-jkΩ₀(2M+1)/2} ) / ( e^{jkΩ₀/2} - e^{-jkΩ₀/2} ),    k ≠ 0, ±N, ±2N, ...

At this point we may divide the numerator and denominator by 2j to express X[k] as a ratio
of two sine functions, as shown by

X[k] = (1/N) sin(kΩ₀(2M+1)/2) / sin(kΩ₀/2),    k ≠ 0, ±N, ±2N, ...

An alternative expression for X[k] is obtained by substituting Ω₀ = 2π/N, yielding

X[k] = (1/N) sin(kπ(2M+1)/N) / sin(kπ/N),    k ≠ 0, ±N, ±2N, ...

The technique used here to write the finite geometric sum expression for X[k] as a ratio of
sine functions involves symmetrizing both the numerator, 1 - e^{-jkΩ₀(2M+1)}, and the denominator,
1 - e^{-jkΩ₀}, with the appropriate power of e^{jkΩ₀}. Now, for k = 0, ±N, ±2N, ..., we have

X[k] = (1/N) Σ_{n=-M}^{M} 1 = (2M + 1)/N

and the expression for X[k] is

X[k] = (1/N) sin(kπ(2M+1)/N) / sin(kπ/N),    k ≠ 0, ±N, ±2N, ...
     = (2M + 1)/N,                           k = 0, ±N, ±2N, ...

Using L'Hôpital's rule, it is easy to show that

lim_{k→0, ±N, ±2N, ...} (1/N) sin(kπ(2M+1)/N) / sin(kπ/N) = (2M + 1)/N
FIGURE 3.4 The DTFS coefficients for a square wave: (a) M = 4 and (b) M = 12.

For this reason, it is common to write the expression for X[k] as

X[k] = (1/N) sin(kπ(2M+1)/N) / sin(kπ/N)

In this form it is understood that the value of X[k] for k = 0, ±N, ±2N, ... is obtained from
the limit as k → 0. A plot of two periods of X[k] as a function of k is depicted in Fig. 3.4 for
M = 4 and M = 12, assuming N = 50. Note that in this example X[k] is real; hence the
magnitude spectrum is the absolute value of X[k] and the phase spectrum is 0 when X[k] is
positive and π when X[k] is negative.
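The ratio-of-sines form can be checked against a direct evaluation of Eq. (3.13). A sketch for the square wave of Fig. 3.3, with parameters chosen to match Fig. 3.4(a):

```python
import numpy as np

N, M = 50, 4                           # the parameters of Fig. 3.4(a)
n = np.arange(-M, N - M)               # one period, n = -M, ..., N - M - 1
x = (np.abs(n) <= M).astype(float)     # square wave: 1 for |n| <= M, else 0

def X_direct(k):
    return np.sum(x * np.exp(-2j * np.pi * k * n / N)) / N   # Eq. (3.13)

def X_closed(k):
    if k % N == 0:
        return (2 * M + 1) / N                               # the limit value
    return np.sin(k * np.pi * (2 * M + 1) / N) / (N * np.sin(k * np.pi / N))

for k in range(-2 * N, 2 * N):
    assert np.isclose(X_direct(k), X_closed(k))
```

The loop also exercises the periodic points k = 0, ±N, where the closed form falls back to the limit (2M + 1)/N.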


FIGURE 3.5 Signal x[n] for Drill Problem 3.2.

• Drill Problem 3.2 Determine the DTFS coefficients for the periodic signal depicted
in Fig. 3.5.

Answer:

x[n] ←—DTFS; 2π/6—→ X[k] = 1/6 + (2/3) cos(kπ/3)  •

Each term in the DTFS of Eq. (3.12) associated with a nonzero coefficient X[k]
contributes to the representation of the signal. We now examine this representation by
considering the contribution of each term for the square wave in Example 3.2. In this
example the DTFS coefficients have even symmetry, X[k] = X[-k], and we may rewrite
the DTFS of Eq. (3.12) as a series involving harmonically related cosines. General condi-
tions under which the DTFS coefficients have even or odd symmetry are discussed in
Section 3.6. Assume for convenience that N is even so that N/2 is integer, let k range
from -N/2 + 1 to N/2, and thus write

x[n] = Σ_{k=-N/2+1}^{N/2} X[k] e^{jkΩ₀n}

     = X[0] + Σ_{m=1}^{N/2-1} ( X[m] e^{jmΩ₀n} + X[-m] e^{-jmΩ₀n} ) + X[N/2] e^{j(N/2)Ω₀n}

Now exploit X[m] = X[-m] and NΩ₀ = 2π to obtain

x[n] = X[0] + Σ_{m=1}^{N/2-1} 2X[m] ( (e^{jmΩ₀n} + e^{-jmΩ₀n}) / 2 ) + X[N/2] e^{jπn}

     = X[0] + Σ_{m=1}^{N/2-1} 2X[m] cos(mΩ₀n) + X[N/2] cos(πn)

where we have also used e^{jπn} = cos(πn). If we define the new set of coefficients

B[k] = X[k],   k = 0, N/2
     = 2X[k],  k = 1, 2, ..., N/2 - 1

then we may write the DTFS in terms of a series of harmonically related cosines as

x[n] = Σ_{k=0}^{N/2} B[k] cos(kΩ₀n)

EXAMPLE 3.3 Define a partial sum approximation to x[n] as

x̂_J[n] = Σ_{k=0}^{J} B[k] cos(kΩ₀n)

where J ≤ N/2. This approximation contains the first 2J + 1 terms centered on k = 0 in Eq.
(3.12). Evaluate one period of the Jth term in the sum and x̂_J[n] for J = 1, 3, 5, 23, and 25,
assuming N = 50 and M = 12 for the square wave in Example 3.2.

Solution: Figure 3.6 depicts the Jth term in the sum, B[J] cos(JΩ₀n), and one period of x̂_J[n]
for the specified values of J. Only odd values of J are considered because the even indexed
coefficients B[k] are zero. Note that the approximation improves as J increases, with exact
representation of x[n] when J = N/2 = 25. In general, the coefficients B[k] associated with
values of k near zero represent the low-frequency or slowly varying features in the signal,
while the coefficients associated with values of k near ±N/2 represent the high-frequency
or rapidly varying features in the signal.
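The partial sum approximations of Example 3.3 are easily reproduced in code. A sketch confirming that J = N/2 gives an exact representation while smaller J only approximates:

```python
import numpy as np

N, M = 50, 12                           # the square wave of Example 3.3
n = np.arange(-25, 25)                  # one period
x = (np.abs(n) <= M).astype(float)

def B(k):
    # DTFS coefficient X[k] from Example 3.2, then the cosine-series weights B[k]
    if k % N == 0:
        Xk = (2 * M + 1) / N
    else:
        Xk = np.sin(k * np.pi * (2 * M + 1) / N) / (N * np.sin(k * np.pi / N))
    return Xk if k in (0, N // 2) else 2 * Xk

Omega0 = 2 * np.pi / N

def partial_sum(J):
    return sum(B(k) * np.cos(k * Omega0 * n) for k in range(J + 1))

assert np.allclose(partial_sum(N // 2), x)                 # J = N/2 is exact
err = [np.max(np.abs(partial_sum(J) - x)) for J in (1, 3, 5, 23)]
assert all(e > 1e-3 for e in err)                          # smaller J leaves residual error
```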

FIGURE 3.6 Individual terms in the DTFS expansion for a square wave (top panel) and the cor-
responding partial sum approximations x̂_J[n] (bottom panel). The J = 0 term is x̂₀[n] = ½ and is
not shown. (a) J = 1. (b) J = 3.

FIGURE 3.6 (continued) (c) J = 5. (d) J = 23. (e) J = 25.


The DTFS is the only Fourier representation that can be numerically evaluated and
manipulated in a computer. This is because both the time-domain, x[n], and frequency-
domain, X[k], representations of the signal are exactly characterized by a finite set of N
numbers. The computational tractability of the DTFS is of great significance. The DTFS
finds extensive use in numerical signal analysis and system implementation and is often
FIGURE 3.7 Electrocardiograms for two different heartbeats and the first 60 coefficients of their
magnitude spectra. (a) Normal heartbeat. (b) Ventricular tachycardia. (c) Magnitude spectrum for
the normal heartbeat. (d) Magnitude spectrum for ventricular tachycardia.

used to numerically approximate the other three Fourier representations. These issues are
explored in the next chapter.


EXAMPLE 3.4 In this example we evaluate the DTFS representations of two different elec-
trocardiogram (ECG) waveforms. Figures 3.7(a) and (b) depict the ECG of a normal heart
and one experiencing ventricular tachycardia, respectively. These sequences are drawn as con-
tinuous functions due to the difficulty of depicting all 2000 values in each case. Both of these
appear nearly periodic, with very slight variations in the amplitude and length of each period.
The DTFS of one period of each ECG may be computed numerically. The period of the normal
ECG is N = 305, while the period of the ventricular tachycardia ECG is N = 421. One period
of each waveform is available. Evaluate the DTFS coefficients for each and plot their magni-
tude spectrum.

Solution: The magnitude spectrum of the first 60 DTFS coefficients is depicted in Figs. 3.7(c)
and (d). The higher indexed coefficients are very small and thus not shown.
The time waveforms differ, as do the DTFS coefficients. The normal ECG is dominated
by a sharp spike or impulsive feature. Recall that the DTFS coefficients for a unit impulse have
constant magnitude. The DTFS coefficients of the normal ECG are approximately constant,
showing a gradual decrease in amplitude as the frequency increases. They also have a fairly
small magnitude, since there is relatively little power in the impulsive signal. In contrast, the
ventricular tachycardia ECG is not as impulsive but has smoother features. Consequently, the
DTFS coefficients have greater dynamic range, with the low-frequency coefficients dominating.
The ventricular tachycardia ECG has greater power than the normal ECG and thus the DTFS
coefficients have larger amplitude.

3.3 Continuous-Time Periodic Signals:

The Fourier Series

We begin our derivation of the FS by approximating a signal x(t) having fundamental
period T using the series of Eq. (3.5):

x̂(t) = Σ_{k=-∞}^{∞} A[k] e^{jkω₀t}    (3.16)

where ω₀ = 2π/T.

We shall now use the orthogonality property, Eq. (3.7), to find the FS coefficients.
We begin by assuming we can find coefficients A[k] so that x(t) = x̂(t). If x(t) = x̂(t), then

∫_{(T)} x(t) e^{-jmω₀t} dt = ∫_{(T)} x̂(t) e^{-jmω₀t} dt

Substitute the series expression for x̂(t) in this equality to obtain the expression

∫_{(T)} x(t) e^{-jmω₀t} dt = ∫_{(T)} Σ_{k=-∞}^{∞} A[k] e^{jkω₀t} e^{-jmω₀t} dt

                           = Σ_{k=-∞}^{∞} A[k] ∫_{(T)} e^{jkω₀t} e^{-jmω₀t} dt

The orthogonality property of Eq. (3.7) implies that the integral on the right-hand side is
zero except for k = m, and so we have

∫_{(T)} x(t) e^{-jmω₀t} dt = A[m] T

We conclude that if x(t) = x̂(t), then the mth coefficient is given by

A[m] = (1/T) ∫_{(T)} x(t) e^{-jmω₀t} dt    (3.17)

Problem 3.32 establishes that this value also minimizes the MSE between x(t) and the
2J + 1 term truncated approximation

x̂_J(t) = Σ_{k=-J}^{J} A[k] e^{jkω₀t}

Suppose we choose the coefficients according to Eq. (3.17). Under what conditions
does the infinite series of Eq. (3.16) actually converge to x(t)? A detailed analysis of this
question is beyond the scope of this book. However, we can state several results. First, if
x(t) is square integrable, that is,

∫_{(T)} |x(t)|² dt < ∞

then the MSE between x(t) and x̂(t) is zero. This is a useful result that applies to a very
broad class of signals encountered in engineering practice. Note that in contrast to the
discrete-time case, zero MSE does not imply that x(t) and x̂(t) are equal pointwise (at each
value of t); it simply implies that there is zero energy in their difference.
Pointwise convergence is guaranteed at all values of t except those corresponding to
discontinuities if the Dirichlet conditions are satisfied:

• x(t) is bounded.
• x(t) has a finite number of local maxima and minima in one period.
• x(t) has a finite number of discontinuities in one period.

If a signal x(t) satisfies the Dirichlet conditions and is not continuous, then the FS repre-
sentation of Eq. (3.16) converges to the midpoint of x(t) at each discontinuity.


We may write the FS as

x(t) = Σ_{k=-∞}^{∞} X[k] e^{jkω₀t}    (3.18)

X[k] = (1/T) ∫_{(T)} x(t) e^{-jkω₀t} dt    (3.19)

where x(t) has fundamental period T and ω₀ = 2π/T. We say that x(t) and X[k] are a FS
pair and denote this relationship as

x(t) ←—FS; ω₀—→ X[k]

From the FS coefficients X[k] we may determine x(t) using Eq. (3.18), and from x(t) we
may determine X[k] using Eq. (3.19). We shall see later that in some problems it is ad-
vantageous to represent the signal in the time domain as x(t), while in others the FS co-
efficients X[k] offer a more convenient description. The FS coefficient representation is
also known as a frequency-domain representation because each FS coefficient is associated
with a complex sinusoid of a different frequency. The following examples illustrate deter-
mination of the FS representation.

EXAMPLE 3.5 Determine the FS representation for the signal

x(t) = 3 cos((π/2)t + π/4)

Solution: The fundamental period of x(t) is T = 4. Hence ω₀ = 2π/4 = π/2, and we seek to
express x(t) as

x(t) = Σ_{k=-∞}^{∞} X[k] e^{jk(π/2)t}

One approach to finding X[k] is to use Eq. (3.19). However, in this case x(t) is expressed in
terms of sinusoids, so it is easier to obtain X[k] by inspection. Write

x(t) = 3 cos((π/2)t + π/4)

     = 3 ( e^{j[(π/2)t+π/4]} + e^{-j[(π/2)t+π/4]} ) / 2

This last expression is in the form of the Fourier series. We may thus identify

X[k] = (3/2)e^{-jπ/4},  k = -1
     = (3/2)e^{jπ/4},   k = 1
     = 0,               otherwise

The magnitude and phase of X[k] are depicted in Fig. 3.8.

FIGURE 3.8 Magnitude and phase spectra for Example 3.5.
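When a closed form is not convenient, Eq. (3.19) can be approximated by a Riemann sum over one finely sampled period. The sketch below confirms the inspection result of Example 3.5 this way:

```python
import numpy as np

T = 4.0
w0 = 2 * np.pi / T
t = np.linspace(0.0, T, 4000, endpoint=False)   # one finely sampled period
dt = t[1] - t[0]
x = 3 * np.cos(np.pi / 2 * t + np.pi / 4)

def X(k):
    # Eq. (3.19) approximated as a Riemann sum over the period
    return np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T

assert np.isclose(X(1), 1.5 * np.exp(1j * np.pi / 4))
assert np.isclose(X(-1), 1.5 * np.exp(-1j * np.pi / 4))
assert abs(X(0)) < 1e-8 and abs(X(2)) < 1e-8
```

For a pure sinusoid sampled uniformly over an exact period, the Riemann sum is exact up to floating-point error, by the discrete orthogonality of Eq. (3.6).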


• Drill Problem 3.3 Determine the FS representation for

x(t) = 2 sin(2πt - 3) + sin(6πt)

Answer:

x(t) ←—FS; 2π—→ X[k] = j/2,        k = -3
                      = je^{j3},   k = -1
                      = -je^{-j3}, k = 1
                      = -j/2,      k = 3
                      = 0,         otherwise  •
As in the DTFS, the magnitude of X[k] is known as the magnitude spectrum of x(t),
while the phase of X[k] is known as the phase spectrum of x(t). In the previous example
all the power in x(t) is concentrated at two frequencies, ω₀ and -ω₀. In the next example
the power in x(t) is distributed across many frequencies.

EXAMPLE 3.6 Determine the FS representation for the square wave depicted in Fig. 3.9.

Solution: The period is T, so ω₀ = 2π/T. It is convenient in this problem to use the integral
formula Eq. (3.19) to determine the FS coefficients. We integrate over the period t = -T/2 to
t = T/2 to exploit the even symmetry of x(t) and obtain, for k ≠ 0,

X[k] = (1/T) ∫_{-T_s}^{T_s} e^{-jkω₀t} dt

     = (2/(Tkω₀)) ( (e^{jkω₀T_s} - e^{-jkω₀T_s}) / 2j )

     = 2 sin(kω₀T_s) / (Tkω₀),    k ≠ 0

For k = 0, we have

X[0] = (1/T) ∫_{-T_s}^{T_s} dt = 2T_s/T

FIGURE 3.9 Square wave for Example 3.6.


Using L'Hôpital's rule it is straightforward to show that

lim_{k→0} 2 sin(kω₀T_s)/(Tkω₀) = 2T_s/T

and thus we write

X[k] = 2 sin(kω₀T_s) / (Tkω₀)

with the understanding that X[0] is obtained as a limit. In this problem X[k] is real valued.
Substituting ω₀ = 2π/T gives X[k] as a function of the ratio T_s/T, as shown by

X[k] = 2 sin(k2πT_s/T) / (k2π)    (3.20)

Figure 3.10 depicts X[k], -50 ≤ k ≤ 50, for T_s/T = ¼ and T_s/T = 1/16. Note that as T_s/T
decreases, the signal becomes more concentrated in time within each period while the FS
representation becomes less concentrated in frequency. We shall explore the inverse relation-
ship between time- and frequency-domain concentrations of signals more fully in the sections
that follow.

FIGURE 3.10 The FS coefficients, X[k], -50 ≤ k ≤ 50, for two square waves: (a) T_s/T = ¼ and
(b) T_s/T = 1/16.

The functional form sin(πu)/(πu) occurs sufficiently often in Fourier analysis that we
give it a special name:

sinc(u) = sin(πu) / (πu)    (3.21)

A graph of sinc(u) is depicted in Fig. 3.11. The maximum of the sinc function is unity at
u = 0, the zero crossings occur at integer values of u, and the magnitude dies off as 1/u.
The portion of the sinc function between the zero crossings at u = ±1 is known as the
mainlobe of the sinc function. The smaller ripples outside the mainlobe are termed side-
lobes. The FS coefficients in Eq. (3.20) are expressed using the sinc function notation as

X[k] = (2T_s/T) sinc(k 2T_s/T)
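As a check, NumPy's `np.sinc` uses the same normalized convention, sinc(u) = sin(πu)/(πu), so the sinc form can be compared directly against Eq. (3.20). A sketch:

```python
import numpy as np

T, Ts = 1.0, 0.25
k = np.arange(1, 51)                               # k = 0 is handled by the limit 2*Ts/T

X_eq320 = 2 * np.sin(k * 2 * np.pi * Ts / T) / (k * 2 * np.pi)   # Eq. (3.20)
X_sinc = (2 * Ts / T) * np.sinc(k * 2 * Ts / T)                  # np.sinc(u) = sin(pi u)/(pi u)

assert np.allclose(X_eq320, X_sinc)
assert np.isclose((2 * Ts / T) * np.sinc(0.0), 2 * Ts / T)       # the k = 0 limit comes for free
```

Note that `np.sinc` evaluates the 0/0 points (integer arguments, including 0) by their limits, which is exactly the convention adopted for X[k] above.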

Each term in the FS of Eq. (3.18) associated with a nonzero coefficient X[k] contributes
to the representation of the signal. The square wave of the previous example provides a
convenient illustration of how the individual terms in the FS contribute to the representation
of x(t). As with the DTFS square wave representation, we exploit the even symmetry of X[k]
to write the FS as a sum of harmonically related cosines. Since X[k] = X[-k], we have

x(t) = Σ_{k=-∞}^{∞} X[k] e^{jkω₀t}

     = X[0] + Σ_{m=1}^{∞} 2X[m] cos(mω₀t)

If we define B[0] = X[0] and B[k] = 2X[k], k ≠ 0, then

x(t) = Σ_{k=0}^{∞} B[k] cos(kω₀t)


FIGURE 3.11 Sinc function.

EXAMPLE 3.7 We define the partial sum approximation to the FS representation for the
square wave, as shown by

x̂_J(t) = Σ_{k=0}^{J} B[k] cos(kω₀t)

Assume T = 1 and T_s/T = ¼. Note that in this case we have

B[k] = 1/2,                    k = 0
     = 2(-1)^{(k-1)/2}/(kπ),   k odd
     = 0,                      k even

so the even indexed coefficients are zero. Depict one period of the Jth term in this sum and
x̂_J(t) for J = 1, 3, 7, 29, and 99.

Solution: The individual terms and partial sum approximations are depicted in Fig. 3.12.
The behavior of the partial sum approximation in the vicinity of the square wave disconti-
nuities at t = ±¼ is of particular interest. We note that each partial sum approximation passes
through the average value (½) of the discontinuity, as stated in our convergence discussion.
On each side of the discontinuity the approximation exhibits ripple. As J increases, the max-
imum height of the ripples does not appear to change. In fact, it can be shown for any finite
J that the maximum ripple is 9% of the discontinuity. This ripple near discontinuities in partial
sum FS approximations is termed the Gibbs phenomenon in honor of the mathematical phys-
icist J. Willard Gibbs for his explanation of this phenomenon in 1899. The square wave
satisfies the Dirichlet conditions and so we know that the FS approximation ultimately con-
verges to the square wave for all values of t except at the discontinuities. However, for finite
J the ripple is always present. As J increases, the ripple in the partial sum approximations
becomes more and more concentrated near the discontinuities. Hence, for any given J, the
accuracy of the partial sum approximation is best at times distant from discontinuities and
worst near the discontinuities.
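The Gibbs overshoot is easy to observe numerically: for the T = 1, T_s/T = ¼ square wave, the maximum of the partial sum near the discontinuity stays roughly 9% above the unit jump no matter how large J is. A sketch:

```python
import numpy as np

def partial_sum(t, J):
    """x_J(t) for the T = 1, Ts/T = 1/4 square wave: B[0] = 1/2, B[k] = 2 sin(k pi/2)/(k pi)."""
    x = np.full_like(t, 0.5)
    for k in range(1, J + 1):
        x += (2 * np.sin(k * np.pi / 2) / (k * np.pi)) * np.cos(2 * np.pi * k * t)
    return x

t = np.linspace(0.2, 0.3, 20001)          # fine grid around the discontinuity at t = 1/4
for J in (29, 99, 299):
    overshoot = partial_sum(t, J).max() - 1.0
    assert 0.07 < overshoot < 0.10        # roughly 9% of the unit jump, independent of J
```

The ripple location moves toward t = ¼ as J grows, but its height does not shrink, which is the Gibbs phenomenon described above.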

FIGURE 3.12 Individual terms in the FS expansion for a square wave (top panel) and the corre-
sponding partial sum approximations x̂_J(t) (bottom panel). The J = 0 term is x̂₀(t) = ½ and is not
shown. (a) J = 1.
FIGURE 3.12 (continued) (b) J = 3. (c) J = 7. (d) J = 29.


FIGURE 3.12 (continued) (e) J = 99.

• Drill Problem 3.4 Find the FS representation for the sawtooth wave depicted in
Fig. 3.13. Hint: Use integration by parts.

Answer: Integrate t from -½ to 1 in Eq. (3.19) to obtain

x(t) ←—FS; 4π/3—→ X[k] = 1/4,                                            k = 0
                        = (-2/(3jkω₀)) ( e^{-jkω₀} + (1/2)e^{jkω₀/2} ),  otherwise  •

The following example exploits linearity and the FS representation of the square
wave to determine the output of an LTI system.

EXAMPLE 3.8 Here we wish to find the FS representation for the output, y(t), of the RC
circuit depicted in Fig. 3.14 in response to the square wave input depicted in Fig. 3.9, assuming
T_s/T = ¼, T = 1 s, and RC = 0.1 s.

Solution: If the input to an LTI system is expressed as a weighted sum of sinusoids, then the
output is also a weighted sum of sinusoids. The kth weight in the output sum is given by


1 j
••• • ••
-2 -1 2 3
FIGURE 3.13 Períodic signal for Orill Problem 3.4.

FIGURE 3.14 RC circuit for Example 3.8.

the product of the kth weight in the input sum and the system's frequency response evaluated at
the kth sinusoid's frequency. Hence if


x(t) = Σ_{k=-∞}^{∞} X[k] e^{jkω₀t}

then the output y(t) is

y(t) = Σ_{k=-∞}^{∞} H(jkω₀) X[k] e^{jkω₀t}

where H(jω) is the frequency response of the system. Thus

y(t) <--- FS; ω₀ ---> Y[k] = H(jkω₀) X[k]
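This sinusoidal steady-state property can be verified numerically. The sketch below (an illustration, not from the text) simulates the RC circuit of Fig. 3.14 with a complex-sinusoid input and checks that, once the transient decays, the output equals H(jω) times the input; the value RC = 0.1 s anticipates Example 3.8.

```python
import numpy as np

# Simulate the RC circuit (RC*dy/dt + y = x) by forward-Euler integration,
# drive it with exp(j*w*t), and check that the steady-state output is
# H(jw)*exp(j*w*t), as the FS relation above asserts.
RC = 0.1                       # time constant used in Example 3.8
w = 2 * np.pi                  # drive at the square wave's fundamental, w0
dt = 1e-4
t = np.arange(0.0, 2.0, dt)    # 2 s = 20 time constants: transient dies out
x = np.exp(1j * w * t)
y = np.zeros(len(t), dtype=complex)
for n in range(1, len(t)):     # Euler step of dy/dt = (x - y)/RC
    y[n] = y[n - 1] + dt * (x[n - 1] - y[n - 1]) / RC
H = (1 / RC) / (1j * w + 1 / RC)    # frequency response from Example 2.15
steady_err = np.max(np.abs(y[-500:] - H * x[-500:]))
assert steady_err < 1e-2       # output ~ H(jw) * input after the transient
```

The step size dt is an arbitrary simulation choice; it only needs to be small relative to both RC and the drive period.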

The frequency response of the RC circuit was computed in Example 2.15 as

H(jω) = (1/RC) / (jω + 1/RC)

and the FS coefficients for the square wave are given in Eq. (3.20). Substituting for H(jkω₀)
with RC = 0.1 s, ω₀ = 2π, and using Ts/T = 1/4 gives

Y[k] = [10 / (j2πk + 10)] · [sin(kπ/2) / (kπ)]

The magnitude spectrum |Y[k]| goes to zero in proportion to 1/k² as k increases, so a reason-
ably accurate representation for y(t) may be determined using a modest number of terms in
the FS. That is, we determine y(t) using the truncated sum

y(t) = Σ_k Y[k] e^{jkω₀t}

taken over a finite range of k.

The magnitude and phase of Y[k] for -25 ≤ k ≤ 25 are depicted in Figs. 3.15(a) and
(b), respectively. Comparing Y[k] to X[k] as depicted in Fig. 3.10(a), we see that the circuit
attenuates the amplitude of X[k] when |k| ≥ 1. The degree of attenuation increases as the fre-
quency, kω₀, increases. The circuit also introduces a frequency-dependent phase shift. One
period of the time waveform y(t) is shown in Fig. 3.15(c). This result is consistent with our
intuition from circuit analysis. When the input switches from 0 to 1, the charge on the capac-
itor increases and the voltage exhibits an exponential rise. When the input switches from 1 to
0, the capacitor discharges and the voltage exhibits an exponential decay.


FIGURE 3.15 The FS coefficients Y[k], -25 ≤ k ≤ 25, for the RC circuit output in response to
a square wave input. (a) Magnitude spectrum. (b) Phase spectrum. (c) One period of the output,
y(t).
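The synthesis carried out in Example 3.8 is easy to reproduce. The sketch below (an illustration, not from the text) forms Y[k] = H(jkω₀)X[k] for |k| ≤ 25, the range plotted in Fig. 3.15, and reconstructs one period of y(t); truncating at 25 simply mirrors the figure.

```python
import numpy as np

# Reconstruct one period of y(t) from the truncated FS of Example 3.8:
# X[k] = sin(k*pi/2)/(k*pi) for the square wave (Ts/T = 1/4, so X[0] = 1/2)
# and H(jw) = 10/(jw + 10) for RC = 0.1 s, with w0 = 2*pi rad/s.
w0 = 2 * np.pi
t = np.linspace(-0.5, 0.5, 1001)
y = np.zeros(len(t), dtype=complex)
for k in range(-25, 26):
    Xk = 0.5 if k == 0 else np.sin(k * np.pi / 2) / (k * np.pi)
    Yk = 10 / (1j * k * w0 + 10) * Xk        # Y[k] = H(jk*w0) * X[k]
    y += Yk * np.exp(1j * k * w0 * t)
y = y.real       # y(t) is real, so the imaginary part is pure roundoff
# the waveform swings between ~0 and ~1 with exponential rise and decay,
# matching Fig. 3.15(c)
assert abs(y.max() - 1.0) < 0.05 and abs(y.min()) < 0.05
```

Plotting y against t reproduces the exponential charge/discharge waveform of Fig. 3.15(c).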

3.4 Discrete-Time Nonperiodic Signals:

The Discrete-Time Fourier Transform

A rigorous derivation of the DTFT is complex, so we employ an intuitive approach. We
develop the DTFT from the DTFS by describing a nonperiodic signal as the limit of a
periodic signal whose period, N, approaches infinity. For this approach to be meaningful,
we assume that the nonperiodic signal is represented by a single period of the periodic
signal that is centered on the origin, and that the limit as N approaches infinity is taken
in a symmetric manner. Let x̃[n] be a periodic signal with period N = 2M + 1. Define the
finite-duration nonperiodic signal x[n] as one period of x̃[n], as shown by

x[n] = { x̃[n],  -M ≤ n ≤ M
       { 0,      |n| > M

This relationship is illustrated in Fig. 3.16. Note that as M increases, the periodic replicates
of x[n] that are present in x̃[n] move farther and farther away from the origin. Eventually,
as M → ∞, these replicates are removed to infinity. Thus we may write

x[n] = lim_{M→∞} x̃[n]                                              (3.22)


Begin with the DTFS representation for the periodic signal x̃[n]. We have the DTFS pair

x̃[n] = Σ_{k=-M}^{M} X[k] e^{jkΩ₀n}                                 (3.23)

X[k] = (1/(2M+1)) Σ_{n=-M}^{M} x̃[n] e^{-jkΩ₀n}                     (3.24)
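The limiting process can be visualized numerically. In the sketch below (an illustration, not from the text), a width-three pulse plays the role of x[n] (an arbitrary choice); its scaled DTFS coefficients (2M + 1)X[k] trace out the same function of the frequency kΩ₀ for every M, and only the frequency grid becomes denser as M grows. That underlying function is what the limit M → ∞ will identify as the DTFT.

```python
import numpy as np

# Embed the finite-duration pulse x[n] = 1 for |n| <= 1 (arbitrary choice)
# in a periodic signal of period N = 2M + 1 and evaluate the DTFS of
# Eq. (3.24).  The scaled coefficients N*X[k] depend on M only through the
# frequency grid k*Omega0: for this pulse they all sample 1 + 2*cos(Omega).
def scaled_dtfs(M):
    N = 2 * M + 1
    Omega0 = 2 * np.pi / N
    n = np.arange(-M, M + 1)
    x = np.where(np.abs(n) <= 1, 1.0, 0.0)   # one period: the pulse
    k = np.arange(-M, M + 1)
    X = np.array([np.sum(x * np.exp(-1j * kk * Omega0 * n)) for kk in k]) / N
    return k * Omega0, N * X                  # frequencies and (2M+1)*X[k]

# same curve for every M; the grid k*Omega0 simply refines as M grows
for M in (2, 10, 100):
    Omega, NX = scaled_dtfs(M)
    assert np.allclose(NX, 1 + 2 * np.cos(Omega), atol=1e-9)
```

Here 1 + 2 cos(Ω) is exactly the sum Σₙ x[n]e^{-jΩn} for this pulse, anticipating the DTFT that Section 3.4 goes on to define.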


••• o ... i
---0...o-0-o-0,o.-o-0-0-0-0-0-0-0-,--,--o-__._i+..LL.LL_ __o-o--,_o-o-~~=>-<>--------- n