Simon Haykin
McMaster University
This book was set in Times Roman by UG division of GGS Information Services and printed and bound by Quebecor Printing, Kingsport. The cover was printed by Phoenix Color Corporation.
The paper in this book was manufactured by a mill whose forest management programs include sustained yield harvesting of its timberlands. Sustained yield harvesting principles ensure that the numbers of trees cut each year does not exceed the amount of new growth.
Copyright © 1999, John Wiley & Sons, Inc. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (508) 750-8400, fax (508) 750-4470. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ@WILEY.COM.
10 9 8 7 6
Ts          settling time
X(τ, jω)    short-time Fourier transform of x(t)
Wx(τ, a)    wavelet transform of x(t)
Abbreviations
Each ''Exploring Concepts with MATLAB'' section is designed to instruct the student on the proper application of the relevant MATLAB commands and develop additional insight into the concepts introduced in the chapter. Minimal previous exposure to MATLAB is assumed. The MATLAB code for all the computations performed in the book, including the last chapter, is available on the Wiley Web Site: http://www.wiley.com/college
There are 10 chapters in the book, organized as follows:
• Chapter 1 begins by motivating the reader as to what signals and systems are and how they arise in communication systems, control systems, remote sensing, biomedical signal processing, and the auditory system. It then describes the different classes of signals, defines certain elementary signals, and introduces the basic notions involved in the characterization of systems.
• Chapter 2 presents a detailed treatment of time-domain representations of linear time-invariant (LTI) systems. It develops convolution from the representation of an input signal as a superposition of impulses. The notions of causality, memory, stability, and invertibility that were briefly introduced in Chapter 1 are then revisited in terms of the impulse response description for LTI systems. The steady-state response of an LTI system to a sinusoidal input is used to introduce the concept of frequency response. Differential- and difference-equation representations for linear time-invariant systems are also presented. Next, block diagram representations of LTI systems are introduced. The chapter finishes with a discussion of the state-variable description of LTI systems.
• Chapter 3 deals with the Fourier representation of signals. In particular, the Fourier representations of four fundamental classes of signals are thoroughly discussed in a unified manner:
• Discrete-time periodic signals: the discrete-time Fourier series
• Continuous-time periodic signals: the Fourier series
• Discrete-time nonperiodic signals: the discrete-time Fourier transform
• Continuous-time nonperiodic signals: the Fourier transform
A novel feature of the chapter is the way in which similarities between these representations are exploited and the differences between them are highlighted. The fact that complex sinusoids are eigenfunctions of LTI systems is used to motivate the representation of signals in terms of complex sinusoids. The basic form of the Fourier representation for each signal class is introduced and the four representations are developed in sequence. Next, the properties of all four representations are studied side by side. A strict separation between signal classes and the corresponding Fourier representations is maintained throughout the chapter. It is our conviction that this parallel, yet separate, treatment minimizes confusion between representations and aids later mastery of proper application for each. Mixing of Fourier representations occurs naturally in the context of analysis and computational applications and is thus deferred to Chapter 4.
• Chapter 4 presents a thorough treatment of the applications of Fourier representations to the study of signals and LTI systems. Links between the frequency-domain and time-domain system representations presented in Chapter 2 are established. Analysis and computational applications are then used to motivate derivation of the relationships between the four Fourier representations and develop the student's skill in applying these tools. The continuous-time and discrete-time Fourier transform representations of periodic signals are introduced for analyzing problems in which there is a mixture of periodic and nonperiodic signals, such as application of a periodic input to an LTI system. The Fourier transform representation for discrete-time
signals is then developed as a tool for analyzing situations in which there is a mixture of continuous-time and discrete-time signals. The sampling process and continuous-time signal reconstruction from samples are studied in detail within this context. Systems for discrete-time processing of continuous-time signals are also discussed, including the issues of oversampling, decimation, and interpolation. The chapter concludes by developing relationships between the discrete-time Fourier series and the discrete-time and continuous-time Fourier transforms in order to introduce the computational aspects of the Fourier analysis of signals.
• Chapter 5 presents an introductory treatment of linear modulation systems applied to communication systems. Practical reasons for using modulation are described. Amplitude modulation and its variants, namely, double sideband-suppressed carrier modulation, single sideband modulation, and vestigial sideband modulation, are discussed. The chapter also includes a discussion of pulse-amplitude modulation and its role in digital communications to again highlight a natural interaction between continuous-time and discrete-time signals. The chapter includes a discussion of frequency-division and time-division multiplexing techniques. It finishes with a treatment of phase and group delays that arise when a modulated signal is transmitted through a linear channel.
• Chapter 6 discusses the Laplace transform and its use for the complex exponential representations of continuous-time signals and the characterization of systems. The eigenfunction property of LTI systems and the existence of complex exponential representations for signals that have no Fourier representation are used to motivate the study of Laplace transforms. The unilateral Laplace transform is studied first and applied to the solution of differential equations with initial conditions to reflect the dominant role of the Laplace transform in engineering applications. The bilateral Laplace transform is introduced next and is used to study issues of causality, stability, invertibility, and the relationship between poles and zeros and frequency response. The relationships between the transfer function description of LTI systems and the time-domain descriptions introduced in Chapter 2 are developed.
• Chapter 7 is devoted to the z-transform and its use in the complex exponential representation of discrete-time signals and the characterization of systems. As in Chapter 6, the z-transform is motivated as a more general representation than that of the discrete-time Fourier transform. Consistent with its primary role as an analysis tool, we begin with the bilateral z-transform. The properties of the z-transform and techniques for inversion are introduced. Next, the z-transform is used for transform analysis of systems. Relationships between the transfer function and time-domain descriptions introduced in Chapter 2 are developed. Issues of invertibility, stability, causality, and the relationship between the frequency response and poles and zeros are revisited. The use of the z-transform for deriving computational structures for implementing discrete-time systems on computers is introduced. Lastly, use of the unilateral z-transform for solving difference equations is presented.
• Chapter 8 discusses the characterization and design of linear filters and equalizers. The approximation problem, with emphasis on Butterworth functions and brief mention of Chebyshev functions, is introduced. Direct and indirect methods for the design of analog (i.e., continuous-time) and digital (i.e., discrete-time) types of filters are presented. The window method for the design of finite-duration impulse response digital filters and the bilinear transform method for the design of infinite-duration impulse response digital filters are treated in detail. Filter design offers another opportunity to reinforce the links between continuous-time and discrete-time systems. The chapter builds on material presented in Chapter 4 in developing a method for the
Acknowledgments
In writing this book over a period of four years, we have benefited enormously from the insightful suggestions and constructive inputs received from many colleagues and reviewers:
• Professor Rajeev Agrawal, University of Wisconsin
• Professor Richard Baraniuk, Rice University
• Professor Jim Bucklew, University of Wisconsin
• Professor C. Sidney Burrus, Rice University
• Professor Dan Cobb, University of Wisconsin
• Professor Chris DeMarco, University of Wisconsin
• Professor John Gubner, University of Wisconsin
• Professor Yu Hu, University of Wisconsin
• Professor John Hung, Auburn University
• Professor Steve Jacobs, University of Pittsburgh
• Dr. James F. Kaiser, Bellcore
• Professor Joseph Kahn, University of California-Berkeley
• Professor Ramdas Kumaresan, University of Rhode Island
• Professor Truong Nguyen, Boston University
• Professor Robert Nowak, Michigan State University
• Professor S. Pasupathy, University of Toronto
• Professor John Platt, McMaster University
• Professor Naresh K. Sinha, McMaster University
• Professor Mike Thomson, University of Texas-Pan American
• Professor Anthony Vaz, McMaster University
We extend our gratitude to them all for helping us in their own individual ways to shape the book into its final form.
Barry Van Veen is indebted to his colleagues at the University of Wisconsin, and Professor Willis Tompkins, Chair of the Department of Electrical and Computer Engineering, for allowing him to teach the Signals and Systems classes repeatedly while in the process of working on this text.
We thank the many students at both McMaster and Wisconsin, whose suggestions and questions have helped us over the years to refine and in some cases rethink the presentation of the material in this book. In particular, we thank Hugh Pasika, Eko Onggosanusi, Dan Sebald, and Gil Raz for their invaluable help in preparing some of the computer experiments, the solutions manual, and in reviewing page proofs.
The idea of writing this book was conceived when Steve Elliott was the Editor of Electrical Engineering at Wiley. We are deeply grateful to him. We also wish to express our gratitude to Charity Robey for undertaking the many helpful reviews of the book, and Bill Zobrist, the present editor of Electrical Engineering at Wiley, for his strong support. We wish to thank Monique Calello for dextrously managing the production of the book, and Katherine Hepburn for her creative promotion of the book.
Lastly, Simon Haykin thanks his wife Nancy, and Barry Van Veen thanks his wife Kathy and children Emily and David, for their support and understanding throughout the long hours involved in writing this book.
Simon Haykin
Barry Van Veen
To Nancy and Kathy, Emily, David, and Jonathan
Contents
Notation xvi
CHAPTER 1 Introduction 1
2.1 Introduction 70
2.2 Convolution: Impulse Response Representation for LTI Systems 71
2.3 Properties of the Impulse Response Representation for LTI Systems 94
2.4 Differential and Difference Equation Representations for LTI Systems 108
2.5 Block Diagram Representations 121
2.6 State-Variable Descriptions for LTI Systems 125
2.7 Exploring Concepts with MATLAB 133
2.8 Summary 142
Further Reading 143
Problems 144
CHAPTER 5 Application to Communication Systems 349
5.1 Introduction 349
5.2 Types of Modulation 349
5.3 Benefits of Modulation 353
5.4 Full Amplitude Modulation 354
5.5 Double Sideband-Suppressed Carrier Modulation 362
5.6 Quadrature-Carrier Multiplexing 366
5.7 Other Variants of Amplitude Modulation 367
5.8 Pulse-Amplitude Modulation 372
5.9 Multiplexing 376
9.1 Introduction 556
9.2 Basic Feedback Concepts 557
9.3 Sensitivity Analysis 559
9.4 Effect of Feedback on Disturbances or Noise 561
9.5 Distortion Analysis 562
9.6 Cost of Feedback 564
9.7 Operational Amplifiers 564
9.8 Control Systems 569
9.9 Transient Response of Low-Order Systems 576
9.10 Time-Domain Specifications 579
9.11 The Stability Problem 581
9.12 Routh-Hurwitz Criterion 585
9.13 Root Locus Method 588
9.14 Reduced-Order Models 597
*9.15 Nyquist Stability Criterion 600
9.16 Bode Diagram 600
*9.17 Sampled-Data Systems 607
9.18 Design of Control Systems 625
9.19 Exploring Concepts with MATLAB 633
9.20 Summary 639
Further Reading 640
Problems 640
APPENDIX C Tables of Fourier Representations and Properties 676
Symbols
←L→     Laplace transform pair
←Lu→    unilateral Laplace transform pair
←z→     z-transform pair
The study of signals and systems is basic to the discipline of electrical engineering at all levels. It is an extraordinarily rich subject with diverse applications. Indeed, a thorough understanding of signals and systems is essential for a proper appreciation and application of other parts of electrical engineering, such as signal processing, communication systems, and control systems.
This book is intended to provide a modern treatment of signals and systems at an introductory level. As such, it is intended for use in electrical engineering curricula in the sophomore or junior years and is designed to prepare students for upper-level courses in communication systems, control systems, and digital signal processing.
The book provides a balanced and integrated treatment of continuous-time and discrete-time forms of signals and systems intended to reflect their roles in engineering practice. Specifically, these two forms of signals and systems are treated side by side. This approach has the pedagogical advantage of helping the student see the fundamental similarities and differences between discrete-time and continuous-time representations. Real-world problems often involve mixtures of continuous-time and discrete-time forms, so the integrated treatment also prepares the student for practical usage of these concepts. This integrated philosophy is carried over to the chapters of the book that deal with applications of signals and systems in modulation, filtering, and feedback systems.
Abundant use is made of examples and drill problems with answers throughout the book. All of these are designed to help the student understand and master the issues under consideration. The last chapter is the only one without drill problems. Each chapter, except for the last chapter, includes a large number of end-of-chapter problems designed to test the student on the material covered in the chapter. Each chapter also includes a list of references for further reading and a collection of historical remarks.
Another feature of the book is the emphasis given to design. In particular, the chapters dealing with applications include illustrative design examples.
MATLAB, acronym for MATrix LABoratory and product of The MathWorks, Inc., has emerged as a powerful environment for the experimental study of signals and systems. We have chosen to integrate MATLAB in the text by including a section entitled ''Exploring Concepts with MATLAB'' in every chapter, except for the concluding chapter. In making this choice, we have been guided by the conviction that MATLAB provides a computationally efficient basis for a ''Software Laboratory,'' where concepts are explored and system designs are tested. Accordingly, we have placed the section on MATLAB before the ''Summary'' section, thereby relating to and building on the entire body of material discussed in the preceding sections of the pertinent chapter. This approach also offers the instructor flexibility to either formally incorporate MATLAB exploration into the classroom or leave it for the students to pursue on their own.
−u[−n − 1]  ↔  1/(1 − z⁻¹),  |z| < 1
−aⁿ u[−n − 1]  ↔  1/(1 − az⁻¹),  |z| < |a|
−naⁿ u[−n − 1]  ↔  az⁻¹/(1 − az⁻¹)²,  |z| < |a|
x[n − k]  ↔ (unilateral)  x[−k] + x[−k + 1]z⁻¹ + ··· + x[−1]z^(−k+1) + z^(−k) X(z),  for k > 0
x[n + k]  ↔ (unilateral)  −x[0]z^k − x[1]z^(k−1) − ··· − x[k − 1]z + z^k X(z),  for k > 0
In describing what we mean by signals and systems in the previous two sections, we mentioned several applications of signals and systems. In this section we will expand on five of these applications.
FIGURE 1.2 Elements of a communication system. The transmitter changes the message signal into a form suitable for transmission over the channel. The receiver processes the channel output (i.e., the received signal) to produce an estimate of the message signal.
• COMMUNICATION SYSTEMS
There are three basic elements to every communication system, namely, transmitter, channel, and receiver, as depicted in Fig. 1.2. The transmitter is located at one point in space, the receiver is located at some other point separate from the transmitter, and the channel is the physical medium that connects them together. Each of these three elements may be viewed as a system with associated signals of its own. The purpose of the transmitter is to convert the message signal produced by a source of information into a form suitable for transmission over the channel. The message signal could be a speech signal, television (video) signal, or computer data. The channel may be an optical fiber, coaxial cable, satellite channel, or mobile radio channel; each of these channels has its specific area of application.
As the transmitted signal propagates over the channel, it is distorted due to the physical characteristics of the channel. Moreover, noise and interfering signals (originating from other sources) contaminate the channel output, with the result that the received signal is a corrupted version of the transmitted signal. The function of the receiver is to operate on the received signal so as to reconstruct a recognizable form (i.e., produce an estimate) of the original message signal and deliver it to the user destination. The signal-processing role of the receiver is thus the reverse of that of the transmitter; in addition, the receiver reverses the effects of the channel.
Details of the operations performed in the transmitter and receiver depend on the type of communication system being considered. The communication system can be of an analog or digital type. In signal-processing terms, the design of an analog communication system is relatively simple. Specifically, the transmitter consists of a modulator and the receiver consists of a demodulator. Modulation is the process of converting the message signal into a form that is compatible with the transmission characteristics of the channel. Ordinarily, the transmitted signal is represented as amplitude, phase, or frequency variation of a sinusoidal carrier wave. We thus speak of amplitude modulation, phase modulation, or frequency modulation, respectively. Correspondingly, through the use of amplitude demodulation, phase demodulation, or frequency demodulation, an estimate of the original message signal is produced at the receiver output. Each one of these analog modulation/demodulation techniques has its own advantages and disadvantages.
In contrast, a digital communication system is considerably more complex, as described here. If the message signal is of analog form, as in speech and video signals, the transmitter performs the following operations to convert it into digital form:
• Sampling, which converts the message signal into a sequence of numbers, with each number representing the amplitude of the message signal at a particular instant of time.
FIGURE 1.3 (a) Snapshot of Pathfinder exploring the surface of Mars. (b) The 70-meter (230-foot) diameter antenna located at Canberra, Australia. The surface of the 70-meter reflector must remain accurate within a fraction of the signal wavelength. (Courtesy of Jet Propulsion Laboratory.)
Mars on July 4, 1997, a historic day in the National Aeronautics and Space Administration's (NASA's) scientific investigation of the solar system. Figure 1.3(b) shows a photograph of the high-precision, 70-meter antenna located at Canberra, Australia, which is an integral part of NASA's worldwide Deep Space Network (DSN). The DSN provides the vital two-way communications link that guides and controls (unmanned) planetary explorers and brings back images and new scientific information collected by them. The successful use of DSN for planetary exploration represents a triumph of communication theory and technology over the challenges presented by the unavoidable presence of noise.
Unfortunately, every communication system suffers from the presence of channel noise in the received signal. Noise places severe limits on the quality of received messages. Owing to the enormous distance between our own planet Earth and Mars, for example, the average power of the information-bearing component of the received signal, at either end of the link, is relatively small compared to the average power of the noise component. Reliable operation of the link is achieved through the combined use of (1) large antennas as part of the DSN and (2) error control. For a parabolic-reflector antenna (i.e., the type of antenna portrayed in Fig. 1.3(b)), the effective area is generally between 50% and 65% of the physical area of the antenna. The received power available at the terminals of the antenna is equal to the effective area times the power per unit area carried by the incident electromagnetic wave. Clearly, the larger the antenna, the larger the received signal power will be, hence the use of large antennas in DSN.
Turning next to the issue of error control, it involves the use of a channel encoder at the transmitter and a channel decoder at the receiver. The channel encoder accepts message bits and adds redundancy according to a prescribed rule, thereby producing encoded data at a higher bit rate. The redundant bits are added for the purpose of protection against channel noise. The channel decoder exploits the redundancy to decide which message bits were actually sent. The combined goal of the channel encoder and decoder is to minimize the effect of channel noise: that is, the number of errors between the channel encoder input (derived from the source of information) and the decoder output (delivered to the user by the receiver) is minimized on average.
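To make the encoder/decoder idea concrete, the following is a minimal MATLAB sketch of a rate-1/3 repetition code with majority-vote decoding. It is only an illustration of how redundancy protects against bit errors; the message bits and the flip probability are arbitrary assumptions, and the codes actually used on deep-space links are far more sophisticated.

    % Minimal illustration of channel coding: a rate-1/3 repetition code
    % with majority-vote decoding (hypothetical example, not the codes
    % used on actual deep-space links).
    msgBits = [1 0 1 1 0 0 1];           % message bits from the source
    coded   = repmat(msgBits, 3, 1);     % encoder: repeat each bit 3 times
    coded   = coded(:)';                 % serialize the encoded stream

    flips    = rand(size(coded)) < 0.1;  % channel: flip each bit w.p. 0.1
    received = xor(coded, flips);

    rx      = reshape(received, 3, []);  % decoder: regroup bits in threes
    decoded = sum(rx, 1) >= 2;           % majority vote within each group

    numErrors = sum(decoded ~= msgBits)  % residual errors after decoding

Increasing the amount of redundancy lowers the residual error rate at the price of a higher transmitted bit rate, which is exactly the trade-off described above.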
• CONTROL SYSTEMS
Control of physical systems is widespread in the application of signals and systems in our industrial society. As some specific examples where control is applied, we mention aircraft autopilots, mass-transit vehicles, automobile engines, machine tools, oil refineries, paper mills, nuclear reactors, power plants, and robots. The object to be controlled is commonly referred to as a plant; in this context, an aircraft is a plant.
There are many reasons for using control systems. From an engineering viewpoint, the two most important ones are the attainment of a satisfactory response and robust performance, as described here:
1. Response. A plant is said to produce a satisfactory response if its output follows or tracks a specified reference input. The process of holding the plant output close to the reference input is called regulation.
2. Robustness. A control system is said to be robust if it exhibits good regulation, despite the presence of external disturbances (e.g., turbulence affecting the flight of an aircraft) and in the face of changes in the plant parameters due to varying environmental conditions.
The attainment of these desirable properties usually requires the use of feedback, as illustrated in Fig. 1.4. The system in Fig. 1.4 contains the abstract elements of a control
FIGURE 1.4 Block diagram of a feedback control system. The controller drives the plant, whose disturbed output drives the sensor(s). The resulting feedback signal is subtracted from the reference input to produce an error signal e(t), which, in turn, drives the controller. The feedback loop is thereby closed.
system and is referred to as a closed-loop control system or feedback control system. For example, in an aircraft landing system the plant is represented by the aircraft body and actuator, the sensors are used by the pilot to determine the lateral position of the aircraft, and the controller is a digital computer.
In any event, the plant is described by mathematical operations that generate the output y(t) in response to the plant input v(t) and the external disturbance ν(t). The sensor included in the feedback loop measures the plant output y(t) and converts it into another form, usually electrical. The sensor output r(t) constitutes the feedback signal. It is compared against the reference input x(t) to produce a difference or error signal e(t). This latter signal is applied to a controller, which, in turn, generates the actuating signal v(t) that performs the controlling action on the plant. A control system with a single input and single output, as illustrated in Fig. 1.4, is referred to as a single-input/single-output (SISO) system. When the number of plant inputs and/or the number of plant outputs is more than one, the system is referred to as a multiple-input/multiple-output (MIMO) system.
In either case, the controller may be in the form of a digital computer or microprocessor, in which case we speak of a digital control system. The use of digital control systems is becoming more and more common because of the flexibility and high degree of accuracy afforded by the use of a digital computer as the controller. Because of its very nature, the use of a digital control system involves the operations of sampling, quantization, and coding that were described previously.
Figure 1.5 shows the photograph of a NASA (National Aeronautics and Space Administration) space shuttle launch, which relies on the use of a digital computer for its control.
• REMOTE SENSING
FIGURE 1.5 NASA space shuttle launch. (Courtesy of NASA.)
context of electromagnetic fields, with the techniques used for information acquisition covering the whole electromagnetic spectrum. It is this specialized form of remote sensing that we are concerned with here.
The scope of remote sensing has expanded enormously since the 1960s due to the advent of satellites and planetary probes as space platforms for the sensors, and the availability of sophisticated digital signal-processing techniques for extracting information from the data gathered by the sensors. In particular, sensors on Earth-orbiting satellites provide highly valuable information about global weather patterns and dynamics of clouds, surface vegetation cover and its seasonal variations, and ocean surface temperatures. Most importantly, they do so in a reliable way and on a continuing basis. In planetary studies, spaceborne sensors have provided us with high-resolution images of planetary surfaces; the images, in turn, have uncovered for us new kinds of physical phenomena, some similar to and others completely different from what we are familiar with on our planet Earth.
The electromagnetic spectrum extends from low-frequency radio waves through microwave, submillimeter, infrared, visible, ultraviolet, x-ray, and gamma-ray regions of the spectrum. Unfortunately, a single sensor by itself can cover only a small part of the electromagnetic spectrum, with the mechanism responsible for wave-matter interaction being influenced by a limited number of physical properties of the object of interest. If, therefore, we are to undertake a detailed study of a planetary surface or atmosphere, then the simultaneous use of multiple sensors covering a large part of the electromagnetic spectrum is required. For example, to study a planetary surface, we may require a suite of sensors covering selected bands as follows:
• Radar sensors to provide information on the surface physical properties of the planet under study (e.g., topography, roughness, moisture, and dielectric constant)
• Infrared sensors to measure the near-surface thermal properties of the planet
• Visible and near-infrared sensors to provide information about the surface chemical composition of the planet
• X-ray sensors to provide information on radioactive materials contained in the planet
The data gathered by these highly diverse sensors are then processed on a computer to generate a set of images that can be used collectively to enhance the knowledge of a scientist studying the planetary surface.
Among the electromagnetic sensors mentioned above, a special type of radar known as synthetic aperture radar (SAR) stands out as a unique imaging system in remote sensing. It offers the following attractive features:
• Satisfactory operation day and night and under all weather conditions
• High-resolution imaging capability that is independent of sensor altitude or wavelength
The realization of a high-resolution image with radar requires the use of an antenna with large aperture. From a practical perspective, however, there is a physical limit on the size of an antenna that can be accommodated on an airborne or spaceborne platform. In a SAR system, a large antenna aperture is synthesized by signal-processing means, hence the name ''synthetic aperture radar.'' The key idea behind SAR is that an array of antenna elements equally spaced along a straight line is equivalent to a single antenna moving along the array line at a uniform speed. This is true provided that we satisfy the following requirement: the signals received by the single antenna at equally spaced points along the array line are coherently recorded; that is, amplitude and phase relationships among the received signals are maintained. Coherent recording ensures that signals received from the single antenna correspond to the signals received from the individual elements of an equivalent antenna array. In order to obtain a high-resolution image from the single-antenna signals, highly sophisticated signal-processing operations are necessary. A central operation in the signal processing is the Fourier transform, which is implemented efficiently on a digital computer using an algorithm known as the fast Fourier transform (FFT) algorithm. Fourier analysis of signals is one of the main focal points of this book.
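As a small, self-contained illustration of how the FFT is invoked in MATLAB (the radar processing itself is far more elaborate), the sketch below computes the spectrum of a short record containing two sinusoids; the sampling rate and tone frequencies are arbitrary assumptions.

    % Illustrative use of MATLAB's fft; the sampling rate and the two
    % tone frequencies are arbitrary choices for this sketch.
    fs = 1000;                        % sampling rate in Hz (assumed)
    t  = (0:999)/fs;                  % 1 second of samples
    x  = cos(2*pi*50*t) + 0.5*cos(2*pi*120*t);

    X = fft(x);                       % DFT computed with the FFT algorithm
    f = (0:length(x)-1)*fs/length(x); % frequency axis in Hz
    plot(f(1:500), abs(X(1:500)))     % magnitude spectrum up to fs/2
    xlabel('Frequency (Hz)'), ylabel('|X|')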
The photograph in Fig. 1.6 shows a perspective view of Mt. Shasta (California), which was derived from a stereo pair of SAR images acquired from Earth orbit with the Shuttle Imaging Radar (SIR-B). The color version of this photograph appears on the color plate.
• BIOMEDICAL SIGNAL PROCESSING
The goal of biomedical signal processing is to extract information from a biological signal that helps us to further improve our understanding of basic mechanisms of biological function or aids us in the diagnosis or treatment of a medical condition. The generation of many biological signals found in the human body is traced to the electrical activity of large groups of nerve cells or muscle cells. Nerve cells in the brain are commonly referred to as neurons. Figure 1.7 shows morphological types of neurons identifiable in a monkey cerebral cortex, based on studies of primary somatic sensory and motor cortex. This figure illustrates the many different shapes and sizes of neurons that exist.
Irrespective of the signal origin, biomedical signal processing begins with a temporal record of the biological event of interest. For example, the electrical activity of the heart
.": . ,,
..... .... . ....... . ,.,...
+
~
' . ~--
....:..)IQ. '
·· ... ",
. .,. -
. .. .......,
. ' ..
'
.,
., .
,.
. ...~· . . ,.
FIGURE 1.6 Perspective view of Mount Shasta (California) derived from a pair of stereo radar images acquired from orbit with the Shuttle Imaging Radar (SIR-B). (Courtesy of Jet Propulsion Laboratory.) See Color Plate.
is represented by a record called the electrocardiogram (ECG). The ECG represents changes in the potential (voltage) due to electrochemical processes involved in the formation and spatial spread of electrical excitations in the heart cells. Accordingly, detailed inferences about the heart can be made from the ECG.
Another important example of a biological signal is the electroencephalogram (EEG). The EEG is a record of fluctuations in the electrical activity of large groups of neurons in
FIGURE 1.7 Morphological types of nerve cells (neurons) identifiable in a monkey cerebral cortex, based on studies of primary somatic sensory and motor cortex. (Reproduced from E. R. Kandel, J. H. Schwartz, and T. M. Jessel, Principles of Neural Science, Third Edition, 1991; courtesy of Appleton and Lange.)
1.3 Ove"7iew of Speciflc Systems 11
the brain. Specifically, the EEG measures the electrical field associated with the current flowing through a group of neurons. To record the EEG (or the ECG for that matter) at least two electrodes are needed. An active electrode is placed over the particular site of neuronal activity that is of interest, and a reference electrode is placed at some remote distance from this site; the EEG is measured as the voltage or potential difference between the active and reference electrodes. Figure 1.8 shows three examples of EEG signals recorded from the hippocampus of a rat.
A major issue of concern in biomedical signal processing-in the context of ECG, EEG, or some other biological signal-is the detection and suppression of artifacts. An artifact refers to that part of the signal produced by events that are extraneous to the biological event of interest. Artifacts arise in a biological signal at different stages of processing and in many different ways, as summarized here:
• Instrumental artifacts, generated by the use of an instrument. An example of an instrumental artifact is the 60-Hz interference picked up by the recording instruments from the electrical mains power supply.
• Biological artifacts, in which one biological signal contaminates or interferes with another. An example of a biological artifact is the electrical potential shift that may be observed in the EEG due to heart activity.
• Analysis artifacts, which may arise in the course of processing the biological signal to produce an estimate of the event of interest.
Analysis artifacts are, in a way, controllable. For example, roundoff errors due to quantization of signal samples, which arise from the use of digital signal processing, can be made nondiscernible for all practical purposes by making the number of discrete amplitude levels in the quantizer large enough.
What about instrumental and biological artifacts? A common method of reducing their effects is through the use of filtering. A filter is a system that performs a desired
FIGURE 1.8 The traces shown in (a), (b), and (c) are three examples of EEG signals recorded from the hippocampus of a rat. Neurobiological studies suggest that the hippocampus plays a key role in certain aspects of learning or memory.
operation on a signal or signals. It passes signals containing frequencies in one frequency range, termed the filter passband, and removes signals containing frequencies in other frequency ranges. Assuming that we have a priori knowledge concerning the signal of interest, we may estimate the range of frequencies inside which the significant components of the desired signal are located. Then, by designing a filter whose passband corresponds to the frequencies of the desired signal, artifacts with frequency components outside this passband are removed by the filter. The assumption made here is that the desired signal and the artifacts contaminating it occupy essentially nonoverlapping frequency bands. If, however, the frequency bands overlap each other, then the filtering problem becomes more difficult and requires a solution beyond the scope of the present book.
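The following MATLAB sketch illustrates the filtering idea with a deliberately crude moving-average lowpass filter that suppresses a simulated 60-Hz instrumental artifact; the sampling rate, signals, and filter length are assumptions chosen only for this example, not a clinically used design.

    % Crude suppression of a 60-Hz artifact with a moving-average
    % lowpass filter; the sampling rate, signals, and filter length
    % are assumptions made only for this illustration.
    fs = 600;                          % samples per second (assumed)
    t  = (0:2*fs-1)/fs;                % 2 seconds of data
    desired  = sin(2*pi*2*t);          % slow 2-Hz component of interest
    artifact = 0.5*sin(2*pi*60*t);     % 60-Hz mains interference
    x = desired + artifact;

    M = fs/60;                         % window spans one 60-Hz period (10 samples)
    b = ones(1, M)/M;                  % moving-average filter coefficients
    y = filter(b, 1, x);               % filtered output

Averaging over exactly one period of the artifact drives its contribution toward zero while leaving the slowly varying component of interest largely intact; this works only because the two occupy essentially nonoverlapping frequency bands.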
• AUDITORY SYSTEM
For our last example of a system, we turn to the mammalian auditory system, the function of which is to discriminate and recognize complex sounds on the basis of their frequency content.
Sound is produced by vibrations such as the movements of vocal cords or violin strings. These vibrations result in the compression and rarefaction (i.e., increased or reduced pressure) of the surrounding air. The disturbance so produced radiates outward from the source of sound as an acoustical wave with alternating highs and lows of pressure.
The ear, the organ of hearing, responds to incoming acoustical waves. It has three main parts, with their functions summarized as follows:
The inner ear consists of a bony spiral-shaped, fluid-filled tube, called the cochlea. Sound-induced vibrations of the tympanic membrane are transmitted into the oval window of the cochlea by a chain of bones, called ossicles. The lever action of the ossicles provides some amplification of the mechanical vibrations of the tympanic membrane. The cochlea tapers in size like a cone toward a tip, so that there is a base at the oval window, and an apex at the tip. Through the middle of the cochlea stretches the basilar membrane, which gets wider as the cochlea gets narrower.
The vibratory movement of the tympanic membrane is transmitted as a traveling wave along the length of the basilar membrane, starting from the oval window to the apex at the far end of the cochlea. The wave propagates along the basilar membrane, much like the snapping of a rope tied at one end causes a wave to propagate along the rope from the snapped end to the fixed end. As illustrated in Fig. 1.9, the wave attains its peak amplitude at a specific location along the basilar membrane that depends on the frequency of the incoming sound. Thus, although the wave itself travels along the basilar membrane, the envelope of the wave is ''stationary'' for a given frequency. The peak displacements for high frequencies occur toward the base (where the basilar membrane is narrowest and stiffest). The peak displacements for low frequencies occur toward the apex (where the basilar membrane is widest and most flexible). That is, as the wave propagates along the basilar membrane, a resonance phenomenon takes place, with the end of the basilar membrane at the base of the cochlea resonating at about 20,000 Hz and its other end at the
FIGURE 1.9 (a) In this diagram, the basilar membrane in the cochlea is depicted as if it were uncoiled and stretched out flat; the ''base'' and ''apex'' refer to the cochlea, but the remarks ''stiff region'' and ''flexible region'' refer to the basilar membrane. (b) This diagram illustrates the traveling waves along the basilar membrane, showing their envelopes induced by incoming sound at three different frequencies.
apex of the cochlea resonating at about 20 Hz; the resonance frequency of the basilar membrane decreases gradually with distance from base to apex. Consequently, the spatial axis of the cochlea is said to be tonotopically ordered, because each location is associated with a particular resonance frequency or tone.
The basilar membrane is a dispersive medium, in that higher frequencies propagate more slowly than do lower frequencies. In a dispersive medium, we distinguish two different velocities, namely, phase velocity and group velocity. The phase velocity is the velocity at which a crest or valley of the wave propagates along the basilar membrane. The group velocity is the velocity at which the envelope of the wave and its energy propagate.
The mechanical vibrations of the basilar membrane are transduced into electrochemical signals by hair cells that rest in an orderly fashion on the basilar membrane. There are two main types of hair cells: inner hair cells and outer hair cells, with the latter being by far the most numerous type. The outer hair cells are motile elements. That is, they are capable of altering their length, and perhaps other mechanical characteristics, which is believed to be responsible for the compressive nonlinear effect seen in the basilar membrane vibrations. There is also evidence that the outer hair cells contribute to the sharpening of tuning curves from the basilar membrane and on up the system. However, the inner hair cells are the main sites of auditory transduction. Specifically, each auditory neuron synapses with an inner hair cell at a particular location on the basilar membrane. The neurons that synapse with inner hair cells near the base of the basilar membrane are found in the periphery of the auditory nerve bundle, and there is an orderly progression toward synapsing at the apex end of the basilar membrane with movement toward the center of the bundle. The tonotopic organization of the basilar membrane is therefore anatomically preserved in the auditory nerve. The inner hair cells also perform rectification and com-
quency range that includes both speech and video signals. In the final analysis, however, the choice of an analog or digital approach for the solution of a signal-processing problem can only be determined by the application of interest, the resources available, and the cost involved in building the system. It should also be noted that the vast majority of systems built in practice are ''mixed'' in nature, combining the desirable features of both analog and digital approaches to signal processing.
FIGURE 1.11 (a) Continuous-time signal x(t). (b) Representation of x(t) as a discrete-time signal x[n].
latter notation is used throughout this book. Figure 1.11 illustrates the relationship between a continuous-time signal x(t) and discrete-time signal x[n] derived from it, as described above.
Throughout this book, we use the symbol t to denote time for a continuous-time signal and the symbol n to denote time for a discrete-time signal. Similarly, parentheses (·) are used to denote continuous-valued quantities, while brackets [·] are used to denote discrete-valued quantities.
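A minimal MATLAB sketch of this relationship, with an arbitrarily chosen sinusoid and sampling interval, is given below; x[n] is simply x(t) evaluated at the instants t = nTs.

    % Deriving a discrete-time signal x[n] from a continuous-time x(t)
    % by uniform sampling; the sinusoid and the sampling interval Ts
    % are assumed values for this sketch.
    x  = @(t) cos(2*pi*5*t);        % "continuous-time" signal x(t)
    Ts = 0.01;                      % sampling interval in seconds
    n  = 0:99;                      % sample index
    xn = x(n*Ts);                   % x[n] = x(nTs)

    stem(n, xn), xlabel('n'), ylabel('x[n]')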
EXAMPLE 1.1 Develop the even/odd decomposition of a general signal x(t) by applying the definitions of Eqs. (1.2) and (1.3).
Solution: Let the signal x(t) be expressed as the sum of two components x_e(t) and x_o(t) as follows:
x(t) = x_e(t) + x_o(t)
Define x_e(t) to be even and x_o(t) to be odd; that is,
x_e(−t) = x_e(t)
and
x_o(−t) = −x_o(t)
Solving for x_e(t) and x_o(t), we thus obtain
x_e(t) = ½[x(t) + x(−t)]
and
x_o(t) = ½[x(t) − x(−t)]
The above definitions of even and odd signals assume that the signals are real valued. Care has to be exercised, however, when the signal of interest is complex valued. In the case of a complex-valued signal, we may speak of conjugate symmetry. A complex-valued signal x(t) is said to be conjugate symmetric if it satisfies the condition
x(−t) = x*(t)     (1.4)
where the asterisk denotes complex conjugation. Let
x(t) = a(t) + jb(t)
where a(t) is the real part of x(t), b(t) is the imaginary part, and j is the square root of −1. The complex conjugate of x(t) is
x*(t) = a(t) − jb(t)
From Eqs. (1.2) to (1.4), it follows therefore that a complex-valued signal x(t) is conjugate symmetric if its real part is even and its imaginary part is odd. A similar remark applies to a discrete-time signal.
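The decomposition of Example 1.1 can be checked numerically; the MATLAB sketch below uses an arbitrary real-valued test signal on a time grid that is symmetric about the origin (both are assumptions made only for this illustration).

    % Even/odd decomposition of Example 1.1, checked numerically.
    % The test signal and the time grid are arbitrary assumptions.
    t = -5:0.01:5;                   % time grid symmetric about t = 0
    x = exp(-0.3*t).*sin(2*t);       % an arbitrary real-valued signal

    xr = fliplr(x);                  % samples of x(-t) on the same grid
    xe = 0.5*(x + xr);               % even part: (x(t) + x(-t))/2
    xo = 0.5*(x - xr);               % odd part:  (x(t) - x(-t))/2

    max(abs(x - (xe + xo)))          % reconstruction error (essentially 0)
    max(abs(xe - fliplr(xe)))        % evenness check: xe(-t) = xe(t)
    max(abs(xo + fliplr(xo)))        % oddness check:  xo(-t) = -xo(t)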
FIGURE 1.12 (a) One example of continuous-time signal. (b) Another example of continuous-time signal.
f = 1/T     (1.6)
The frequency f is measured in hertz (Hz) or cycles per second. The angular frequency, measured in radians per second, is defined by
ω = 2π/T     (1.7)
since there are 2π radians in one complete cycle. To simplify terminology, ω is often referred to simply as frequency.
Any signal x(t) for which there is no value of T to satisfy the condition of Eq. (1.5) is called an aperiodic or nonperiodic signal.
Figures 1.13(a) and (b) present examples of periodic and nonperiodic signals, respectively. The periodic signal shown here represents a square wave of amplitude A = 1 and period T, and the nonperiodic signal represents a rectangular pulse of amplitude A and duration T1.
• Drill Problem 1.3 Figure 1.14 shows a triangular wave. What is the fundamental frequency of this wave? Express the fundamental frequency in units of Hz or rad/s.
Answer: 5 Hz, or 10π rad/s. •
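As a quick numerical check of Drill Problem 1.3 in MATLAB, assuming the fundamental period T = 0.2 s read off Fig. 1.14:

    % Numerical check of Drill Problem 1.3 (T = 0.2 s assumed from Fig. 1.14).
    T = 0.2;          % fundamental period in seconds
    f = 1/T           % Eq. (1.6): 5 Hz
    w = 2*pi/T        % Eq. (1.7): 10*pi, about 31.4 rad/s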
The classification of signals into periodic and nonperiodic signals presented thus far applies to continuous-time signals. We next consider the case of discrete-time signals. A discrete-time signal x[n] is said to be periodic if it satisfies the condition
x[n] = x[n + N] for all integers n     (1.8)
FIGURE 1.13 (a) Square wave with amplitude A = 1, and period T = 0.2 s. (b) Rectangular pulse of amplitude A and duration T1.
FIGURE 1.14 Triangular wave alternating between −1 and +1 with fundamental period of 0.2 second.
where N is a positive integer. The smallest value of integer N for which Eq. (1.8) is satisfied is called the fundamental period of the discrete-time signal x[n]. The fundamental angular frequency or, simply, fundamental frequency of x[n] is defined by
Ω = 2π/N     (1.9)
which is measured in radians.
The differences between the defining equations (1.5) and (1.8) should be carefully noted. Equation (1.5) applies to a periodic continuous-time signal whose fundamental period T has any positive value. On the other hand, Eq. (1.8) applies to a periodic discrete-time signal whose fundamental period N can only assume a positive integer value.
Two examples of discrete-time signals are shown in Figs. 1.15 and 1.16; the signal of Fig. 1.15 is periodic, whereas that of Fig. 1.16 is aperiodic.
• Drill Problem 1.4 What is the fundamental frequency of the discrete-time square wave shown in Fig. 1.15?
Answer: π/4 radians. •
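One brute-force way to find the fundamental period N of a record of a periodic discrete-time signal is to test Eq. (1.8) for increasing candidate values of N. The MATLAB sketch below does this for an assumed square-wave record standing in for the signal of Fig. 1.15.

    % Brute-force search for the fundamental period N of a discrete-time
    % periodic signal; the square-wave record below is an assumed
    % stand-in for the signal of Fig. 1.15.
    oneCycle = [1 1 1 1 -1 -1 -1 -1];     % one cycle of the square wave
    x = repmat(oneCycle, 1, 6);           % several cycles of x[n]

    N = 1;
    while any(x(1:end-N) ~= x(1+N:end))   % test x[n] = x[n + N], Eq. (1.8)
        N = N + 1;
    end
    N                                     % smallest N that works: 8
    Omega = 2*pi/N                        % Eq. (1.9): pi/4 radians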
4. Deterministic signals, random signals.
A deterministic signal is a signal about which there is no uncertainty with respect to its
value at any time. Accordingly, we find that deterministic signals may be modeled as
completely specified functions of time. The square wave shown in Fig. 1.13(a) and the rectangular pulse shown in Fig. 1.13(b) are examples of deterministic signals, and so are the signals shown in Figs. 1.15 and 1.16.
On the other hand, a random signal is a signal about which there is uncertainty before its actual occurrence. Such a signal may be viewed as belonging to an ensemble or group of signals, with each signal in the ensemble having a different waveform. Moreover, each signal within the ensemble has a certain probability of occurrence. The ensemble of such signals is referred to as a random process. The noise generated in the amplifier of a radio or television receiver is an example of a random signal. Its amplitude fluctuates between positive and negative values in a completely random fashion. The EEG signal, exemplified by the waveforms shown in Fig. 1.8, is another example of a random signal.
p(t) = v²(t)/R     (1.10)
or, equivalently,
p(t) = Ri²(t)     (1.11)
In both cases, the instantaneous power p(t) is proportional to the squared amplitude of the signal. Furthermore, for a resistance R of 1 ohm, we see that Eqs. (1.10) and (1.11) take on the same mathematical form. Accordingly, in signal analysis it is customary to define power in terms of a 1-ohm resistor, so that, regardless of whether a given signal x(t) represents a voltage or a current, we may express the instantaneous power of the signal as
p(t) = x²(t)     (1.12)
Based on this convention, we define the total energy of the continuous-time signal x(t) as
E = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt = ∫_{−∞}^{∞} x²(t) dt     (1.13)
and its time-averaged, or average, power as
P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt     (1.14)
From Eq. (1.14) we readily see that the average power of a periodic signal x(t) of fundamental period T is given by
P = (1/T) ∫_{−T/2}^{T/2} x²(t) dt     (1.15)
The square root of the average power P is called the root mean-square (rms) value of the signal x(t).
In the case of a discrete-time signal x[n], the integrals in Eqs. (1.13) and (1.14) are replaced by corresponding sums. Thus the total energy of x[n] is defined by
E = Σ_{n=−∞}^{∞} x²[n]     (1.16)
and its average power is defined by
P = lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} x²[n]     (1.17)
Here again we see from Eq. (1.17) that the average power in a periodic signal x[n] with fundamental period N is given by
P = (1/N) Σ_{n=0}^{N−1} x²[n]
A signal is referred to as an energy signal, if and only if the total energy of the signal satisfies the condition
0 < E < ∞
On the other hand, it is referred to as a power signal, if and only if the average power of the signal satisfies the condition
0 < P < ∞
The energy and power classifications of signals are mutually exclusive. In particular, an energy signal has zero average power, whereas a power signal has infinite energy. It is also of interest to note that periodic signals and random signals are usually viewed as power signals, whereas signals that are both deterministic and nonperiodic are energy signals.
• Drill Problem 1.5
(a) What is the total energy of the rectangular pulse shown in Fig. 1.13(b)?
(b) What is the average power of the square wave shown in Fig. 1.13(a)?
Answer: (a) A²T1. (b) 1. •
• Drill Problem 1.6 What is the average power of the triangular wave shown in Fig. 1.14?
Answer: 1/3. •
• Drill Problem 1.7 What is the total energy of the discrete-time signal shown in Fig. 1.16?
Answer: 3. •
• Drill Problem 1.8 What is the average power of the periodic discrete-time signal shown in Fig. 1.15?
Answer: 1. •
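These definitions translate directly into sums and numerical integrals. The MATLAB sketch below checks the drill-problem answers using assumed stand-ins for the signals of Figs. 1.13, 1.15, and 1.16.

    % Numerical checks of the energy and power definitions; the signals
    % below are assumed stand-ins for those of Figs. 1.13, 1.15, and 1.16.
    A = 2;  T1 = 0.5;  dt = 1e-4;
    t  = -T1/2:dt:T1/2;              % rectangular pulse of amplitude A and
    xp = A*ones(size(t));            % duration T1, centered on the origin
    E  = trapz(t, xp.^2)             % total energy: A^2*T1, as in Drill 1.5(a)

    oneCycle = [1 1 1 1 -1 -1 -1 -1];% one period (N = 8) of a +/-1 square wave
    P = sum(oneCycle.^2)/8           % average power over one period: 1

    xn  = [1 -1 1];                  % finite-duration discrete-time signal
    E_d = sum(xn.^2)                 % total energy: 3, as in Drill 1.7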
1.5 Basic Operations on Signals
An issue of fundamental importance in the study of signals and systems is the use of systems to process or manipulate signals. This issue usually involves a combination of some basic operations. In particular, we may identify two classes of operations, as described here.
1. Operations performed on dependent variables.
Amplitude scaling. Let x(t) denote a continuous-time signal. The signal y(t) resulting from amplitude scaling applied to x(t) is defined by
y(t) = cx(t)     (1.18)
where c is the scaling factor. According to Eq. (1.18), the value of y(t) is obtained by multiplying the corresponding value of x(t) by the scalar c. A physical example of a device that performs amplitude scaling is an electronic amplifier. A resistor also performs amplitude scaling when x(t) is a current, c is the resistance, and y(t) is the output voltage.
In a manner similar to Eq. (1.18), for discrete-time signals we write
y[n] = cx[n]
Addition. Let x1(t) and x2(t) denote a pair of continuous-time signals. The signal y(t) obtained by the addition of x1(t) and x2(t) is defined by
y(t) = x1(t) + x2(t)     (1.19)
A physical example of a device that adds signals is an audio mixer, which combines music and voice signals.
In a manner similar to Eq. (1.19), for discrete-time signals we write
y[n] = x1[n] + x2[n]
Multiplication. Let x1(t) and x2(t) denote a pair of continuous-time signals. The signal y(t) resulting from the multiplication of x1(t) by x2(t) is defined by
y(t) = x1(t)x2(t)     (1.20)
That is, for each prescribed time t the value of y(t) is given by the product of the corresponding values of x1(t) and x2(t). A physical example of y(t) is an AM radio signal, in which x1(t) consists of an audio signal plus a dc component, and x2(t) consists of a sinusoidal signal called a carrier wave.
In a manner similar to Eq. (1.20), for discrete-time signals we write
y[n] = x1[n]x2[n]
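In MATLAB, these three operations on discrete-time signals reduce to elementwise array operations; the signals in the sketch below are arbitrary assumptions.

    % Amplitude scaling, addition, and multiplication of discrete-time
    % signals as elementwise array operations (the signals are assumed).
    n  = 0:49;
    x1 = cos(2*pi*0.05*n);          % an "audio-like" signal
    x2 = cos(2*pi*0.25*n);          % a "carrier-like" signal

    c = 3;
    y_scale = c*x1;                 % Eq. (1.18): y[n] = c x[n]
    y_add   = x1 + x2;              % Eq. (1.19): y[n] = x1[n] + x2[n]
    y_mult  = (1 + 0.5*x1).*x2;     % Eq. (1.20)-style product, reminiscent
                                    % of an AM signal: (dc + audio) * carrier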
FIGURE 1.17 Inductor with current i(t), inducing voltage v(t) across its terminals.
FIGURE 1.18 Capacitor with voltage v(t) across its terminals, inducing current i(t).
Differentiation. Let x(t) denote a continuous-time signal. The derivative of x(t) with respect to time is defined by
y(t) = (d/dt) x(t)     (1.21)
For example, an inductor performs differentiation. Let i(t) denote the current flowing through an inductor of inductance L, as shown in Fig. 1.17. The voltage v(t) developed across the inductor is defined by
v(t) = L (d/dt) i(t)     (1.22)
Integration. Let x(t) denote a continuous-time signal. The integral of x(t) with respect to time t is defined by
y(t) = ∫_{−∞}^{t} x(τ) dτ     (1.23)
where τ is the integration variable. For example, a capacitor performs integration. Let i(t) denote the current flowing through a capacitor of capacitance C, as shown in Fig. 1.18. The voltage v(t) developed across the capacitor is defined by
v(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ     (1.24)
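Numerically, the inductor and capacitor relations can be approximated with a finite difference and a running sum, as in the MATLAB sketch below; the element values and the current waveform are assumptions made for illustration.

    % Numerical approximations of the inductor and capacitor relations;
    % the element values and the current waveform are assumed.
    dt = 1e-4;  t = 0:dt:0.1;
    i  = sin(2*pi*50*t);             % current i(t) through the element
    L  = 1e-3;  C = 1e-6;

    vL = L*[0 diff(i)]/dt;           % inductor:  v(t) ~ L di/dt, Eq. (1.22)
    vC = (1/C)*cumsum(i)*dt;         % capacitor: v(t) ~ (1/C) * running
                                     % integral of i(t), Eq. (1.24),
                                     % assuming zero initial voltage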
FIGURE 1.19 Time-scaling operation: (a) continuous-time signal x(t), (b) compressed version of x(t) by a factor of 2, and (c) expanded version of x(t) by a factor of 2.
FIGURE 1.20 Effect of time scaling on a discrete-time signal: (a) discrete-time signal x[n], and (b) compressed version of x[n] by a factor of 2, with some values of the original x[n] lost as a result of the compression.
EXAMPLE 1.2 Consider the triangular pulse x(t) shown in Fig. 1.21(a). Find the reflected version of x(t) about the amplitude axis.
Solution: Replacing the independent variable t in x(t) with −t, we get the result y(t) = x(−t) shown in Fig. 1.21(b).
Note that for this example, we have
x(t) = 0 for t < −T1 and t > T2
Correspondingly, we find that
y(t) = 0 for t > T1 and t < −T2
FIGURE 1.21 Operation of reflection: (a) continuous-time signal x(t) and (b) reflected version of x(t) about the origin.
x[n] = 1 for n = 1, −1 for n = −1, and 0 for n = 0 and |n| > 1
Answer: y[n] = 2 for n = −1 and n = 1, and 0 for n = 0 and |n| > 1 •
Time shifting. Let x(t) denote a continuous-time signal. The time-shifted version of x(t) is defined by
y(t) = x(t − t0)
where t0 is the time shift. If t0 > 0, the waveform representing x(t) is shifted intact to the right, relative to the time axis. If t0 < 0, it is shifted to the left.
EXAMPLE 1.3 Figure 1.22(a) shows a rectangular pulse x(t) of unit amplitude and unit duration. Find y(t) = x(t − 2).
Solution: In this example, the time shift t0 equals 2 time units. Hence, by shifting x(t) to the right by 2 time units we get the rectangular pulse y(t) shown in Fig. 1.22(b). The pulse y(t) has exactly the same shape as the original pulse x(t); it is merely shifted along the time axis.
FIGURE 1.22 Time-shifting operation: (a) continuous-time signal in the form of a rectangular pulse of amplitude 1.0 and duration 1.0, symmetric about the origin; and (b) time-shifted version of x(t) by 2 time units.
In the case of a discrete-time signal x[n], we define its time-shifted version as follows:

y[n] = x[n − m]

where the shift m must be an integer; it can be positive or negative.

x[n] = 1 for n = 1, 2; −1 for n = −1, −2; and 0 for n = 0 and |n| > 2
Let y(t) denote a continuous-time signal that is derived from another continuous-time signal x(t) through a combination of time shifting and time scaling, as described here:

y(t) = x(at − b)   (1.25)

This relation between y(t) and x(t) satisfies the following conditions:

y(0) = x(−b)   (1.26)

y(b/a) = x(0)   (1.27)

To obtain y(t) from x(t), the time-shifting operation must be performed first, yielding an intermediate signal

v(t) = x(t − b)

The time shift has replaced t in x(t) by t − b. Next, the time-scaling operation is performed on v(t). This replaces t by at, resulting in the desired output

y(t) = v(at)
     = x(at − b)

To illustrate how the operation described in Eq. (1.25) can arise in a real-life situation, consider a voice signal recorded on a tape recorder. If the tape is played back at a rate faster than the original recording rate, we get compression (i.e., a > 1).
FIGURE 1.23 The proper order in which the operations of time scaling and time shifting should be applied for the case of a continuous-time signal. (a) Rectangular pulse x(t) of amplitude 1.0 and duration 2.0, symmetric about the origin. (b) Intermediate pulse v(t), representing a time-shifted version of x(t). (c) Desired signal y(t), resulting from the compression of v(t) by a factor of 2.
If, on the other hand, the tape is played back at a rate slower than the original recording rate, we get expansion (i.e., a < 1). The constant b, assumed to be positive, accounts for a delay in playing back the tape.
EXAMPLE 1.4 Consider the rectangular pulse x(t) of unit amplitude and duration of 2 time units depicted in Fig. 1.23(a). Find y(t) = x(2t + 3).
Solution: In this example, we have a = 2 and b = −3. Hence shifting the given pulse x(t) to the left by 3 time units relative to the time axis gives the intermediate pulse v(t) shown in Fig. 1.23(b). Finally, scaling the independent variable t in v(t) by a = 2, we get the solution y(t) shown in Fig. 1.23(c).
Note that the solution presented in Fig. 1.23(c) satisfies both of the conditions defined in Eqs. (1.26) and (1.27).
Suppose next that we purposely do not follow the precedence rule; that is, we first apply time scaling, followed by time shifting. For the given signal x(t), shown in Fig. 1.24(a), the waveforms resulting from the application of these two operations are shown in Figs. 1.24(b) and (c), respectively. The signal y(t) so obtained fails to satisfy the condition of Eq. (1.27).
This example clearly illustrates that if y(t) is defined in terms of x(t) by Eq. (1.25), then y(t) can only be obtained from x(t) correctly by adhering to the precedence rule for time shifting and time scaling.
Similar remarks apply to the case of discrete-time signals.
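The precedence rule can also be checked numerically. The sketch below is not from the text; it models the pulse of Fig. 1.23(a) as a unit-amplitude pulse of duration 2 centered on the origin and compares the shift-then-scale route with direct substitution:

x = @(t) double(abs(t) <= 1);   % assumed model of the pulse in Fig. 1.23(a)
t = -5:0.001:2;
v = x(t + 3);                   % step 1: time shift, v(t) = x(t - b) with b = -3
y = x(2*t + 3);                 % desired signal y(t) = x(at - b) with a = 2
plot(t, v, '--', t, y)
% spot checks of Eqs. (1.26) and (1.27): y(0) = x(3) = 0 and y(-3/2) = x(0) = 1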
FIGURE 1.24 The incorrect way of applying the precedence rule. (a) Signal x(t). (b) Time-scaled signal x(2t). (c) Signal y(t) obtained by shifting x(2t) by 3 time units.
FIGURE 1.25 The proper order of applying the operations of time scaling and time shifting for the case of a discrete-time signal. (a) Discrete-time signal x[n], antisymmetric about the origin. (b) Intermediate signal v[n] obtained by shifting x[n] to the left by 3 samples. (c) Discrete-time signal y[n] resulting from the compression of v[n] by a factor of 2, as a result of which two samples of the original x[n] are lost.
• EXPONENTIAL SIGNALS
A real exponential signal, in its general form, is written as

x(t) = Be^{at}   (1.28)

where both B and a are real parameters. The parameter B is the amplitude of the exponential measured at time t = 0; depending on whether a is negative or positive, the exponential decays or grows with time, as illustrated in Fig. 1.26.
FIGURE 1.26 (a) Decaying exponential form of continuous-time signal. (b) Growing exponential form of continuous-time signal.
FIGURE 1.27 Lossy capacitor, with the loss represented by shunt resistance R.
From Fig. 1.27 we readily see that the operation of the capacitor for t ≥ 0 is described by

RC (d/dt) v(t) + v(t) = 0   (1.29)

where v(t) is the voltage measured across the capacitor at time t. Equation (1.29) is a differential equation of order one. Its solution is given by

v(t) = V0 e^{−t/RC}   (1.30)

where V0 is the initial voltage across the capacitor and the product term RC plays the role of a time constant. Equation (1.30) shows that the voltage across the capacitor decays exponentially with time at a rate determined by the time constant RC. The larger the resistor R (i.e., the less lossy the capacitor), the slower will be the rate of decay of v(t) with time.
The discussion thus far has been in the context of continuous time. In discrete time it is common practice to write a real exponential signal as

x[n] = Br^n   (1.31)

The exponential nature of this signal is readily confirmed by defining

r = e^a

for some a. Figure 1.28 illustrates the decaying and growing forms of a discrete-time exponential signal corresponding to 0 < r < 1 and r > 1, respectively. This is where the case of discrete-time exponential signals is distinctly different from continuous-time exponential signals. Note also that when r < 0, a discrete-time exponential signal assumes alternating signs.
FIGURE 1.28 (a) Decaying exponential form of discrete-time signal. (b) Growing exponential form of discrete-time signal.
The exponential signals shown in Figs. 1.26 and 1.28 are all real valued. It is possible for an exponential signal to be complex valued. The mathematical forms of complex exponential signals are the same as those shown in Eqs. (1.28) and (1.31), with some differences as explained here. In the continuous-time case, the parameter B or parameter a or both in Eq. (1.28) assume complex values. Similarly, in the discrete-time case, the parameter B or parameter r or both in Eq. (1.31) assume complex values. Two commonly encountered examples of complex exponential signals are e^{jωt} and e^{jΩn}.
• SINUSOIDAL SIGNALS
The continuous-time version of a sinusoidal signal, in its most general form, may be written as

x(t) = A cos(ωt + φ)   (1.32)

where A is the amplitude, ω is the frequency in radians per second, and φ is the phase angle in radians. Figure 1.29(a) presents the waveform of a sinusoidal signal for A = 4 and φ = +π/6. A sinusoidal signal is an example of a periodic signal, the period of which is

T = 2π/ω

We may readily prove this property of a sinusoidal signal by using Eq. (1.32) to write

x(t + T) = A cos(ω(t + T) + φ)
         = A cos(ωt + ωT + φ)
         = A cos(ωt + 2π + φ)
         = A cos(ωt + φ)
         = x(t)

which satisfies the defining condition of Eq. (1.5) for a periodic signal.
FIGURE 1.29 (a) Sinusoidal signal A cos(ωt + φ) with phase φ = +π/6 radians. (b) Sinusoidal signal A sin(ωt + φ) with phase φ = +π/6 radians.
To illustrate the generation of a sinusoidal signal, consider the circuit of Fig. 1.30 consisting of an inductor and capacitor connected in parallel. It is assumed that the losses in both components of the circuit are small enough for them to be considered "ideal." The voltage developed across the capacitor at time t = 0 is equal to V0. The operation of the circuit in Fig. 1.30 for t ≥ 0 is described by

LC (d²/dt²) v(t) + v(t) = 0   (1.33)

where v(t) is the voltage across the capacitor at time t, C is its capacitance, and L is the inductance of the inductor. Equation (1.33) is a differential equation of order two. Its solution is given by

v(t) = V0 cos(ω0 t),  t ≥ 0   (1.34)

where ω0 is the natural angular frequency of oscillation of the circuit:

ω0 = 1/√(LC)   (1.35)

FIGURE 1.30 Parallel LC circuit, assuming that the inductor L and capacitor C are both ideal.
The discrete-time version of a sinusoidal signal is written as

x[n] = A cos(Ωn + φ)   (1.36)

For x[n] to be periodic with a period of N samples, the angular frequency Ω must satisfy

Ω = 2πm/N radians/cycle, for integer m and N   (1.37)

The important point to note here is that, unlike continuous-time sinusoidal signals, not all discrete-time sinusoidal signals with arbitrary values of Ω are periodic. Specifically, for the discrete-time sinusoidal signal described in Eq. (1.36) to be periodic, the angular frequency Ω must be a rational multiple of 2π, as indicated in Eq. (1.37). Figure 1.31 illustrates a discrete-time sinusoidal signal for A = 1, φ = 0, and N = 12.
FIGURE 1.31 Discrete-time sinusoidal signal.
EXAMPLE 1.6 A pair of sinusoidal signals with a common angular frequency is defined by

x1[n] = sin[5πn]

and

x2[n] = √3 cos[5πn]

(a) Specify the condition which the period N of both x1[n] and x2[n] must satisfy for them to be periodic.
(b) Evaluate the amplitude and phase angle of the composite sinusoidal signal

y[n] = x1[n] + x2[n]

Solution:
(a) The angular frequency of both x1[n] and x2[n] is

Ω = 5π radians/cycle

Solving Eq. (1.37) for the period N, we get

N = 2πm/Ω = 2πm/(5π) = 2m/5

For x1[n] and x2[n] to be periodic, their period N must be an integer. This can only be satisfied for m = 5, 10, 15, ..., which results in N = 2, 4, 6, ....
(b) We wish to express y[n] in the form A cos(Ωn + φ). Recall the trigonometric identity

A cos(Ωn + φ) = A cos(Ωn) cos(φ) − A sin(Ωn) sin(φ)

Identifying Ω = 5π, we see that the right-hand side of this identity is of the same form as x1[n] + x2[n]. We may therefore write

A cos(φ) = √3  and  A sin(φ) = −1

Hence

tan(φ) = sin(φ)/cos(φ) = −1/√3

from which we find that φ = −π/6 radians. Similarly, the amplitude A is given by

A = √((amplitude of x1[n])² + (amplitude of x2[n])²)
  = √(1 + 3) = 2

Accordingly, we may express y[n] as

y[n] = 2 cos(5πn − π/6)
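A quick numerical check of this result (not part of the text) can be carried out in MATLAB:

n  = 0:11;                                 % a few samples; the fundamental period is N = 2
y1 = sin(5*pi*n) + sqrt(3)*cos(5*pi*n);    % x1[n] + x2[n]
y2 = 2*cos(5*pi*n - pi/6);                 % claimed closed form
max(abs(y1 - y2))                          % on the order of round-off error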
Consider the complex exponential e^{jθ}. Using Euler's identity, we may expand this term as

e^{jθ} = cos θ + j sin θ   (1.38)

This result indicates that we may express the continuous-time sinusoidal signal of Eq. (1.32) as the real part of the complex exponential signal Be^{jωt}, where B is itself a complex quantity defined by

B = Ae^{jφ}   (1.39)

That is, we may write

A cos(ωt + φ) = Re{Be^{jωt}}   (1.40)

where Re{ } denotes the real part of the complex quantity enclosed inside the braces. We may readily prove this relation by noting that

Be^{jωt} = Ae^{jφ}e^{jωt}
         = Ae^{j(ωt + φ)}
         = A cos(ωt + φ) + jA sin(ωt + φ)

from which Eq. (1.40) follows immediately. The sinusoidal signal of Eq. (1.32) is defined in terms of a cosine function. Of course, we may also define a continuous-time sinusoidal signal in terms of a sine function, as shown by

x(t) = A sin(ωt + φ)   (1.41)

which is represented by the imaginary part of the complex exponential signal Be^{jωt}. That is, we may write

A sin(ωt + φ) = Im{Be^{jωt}}   (1.42)

where B is defined by Eq. (1.39), and Im{ } denotes the imaginary part of the complex quantity enclosed inside the braces. The sinusoidal signal of Eq. (1.41) differs from that of Eq. (1.32) by a phase shift of 90°. That is, the sinusoidal signal A sin(ωt + φ) lags behind the sinusoidal signal A cos(ωt + φ), as illustrated in Fig. 1.29(b) for φ = π/6.
Similarly, in the discrete-time case we may write

A cos(Ωn + φ) = Re{Be^{jΩn}}   (1.43)

and

A sin(Ωn + φ) = Im{Be^{jΩn}}   (1.44)

where B is defined in terms of A and φ by Eq. (1.39). Figure 1.32 shows the two-dimensional representation of the complex exponential e^{jΩn} for Ω = π/4 and n = 0, 1, ..., 7. The projection of each value on the real axis is cos(Ωn), while the projection on the imaginary axis is sin(Ωn).
FIGURE 1.32 Complex plane, showing eight points uniformly distributed on the unit circle.
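The eight points of Fig. 1.32 and their two projections can be reproduced with a short MATLAB sketch (not from the text):

Omega = pi/4;
n = 0:7;
z = exp(1j*Omega*n);               % the points e^{j*Omega*n} on the unit circle
plot(real(z), imag(z), 'o'), axis equal
% real(z) gives cos(Omega*n) and imag(z) gives sin(Omega*n), the two projections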
• EXPONENTIALLY DAMPED SINUSOIDAL SIGNALS
Multiplying a sinusoidal signal by a decaying real exponential produces an exponentially damped sinusoidal signal, written as

x(t) = Ae^{−αt} sin(ωt + φ),  α > 0   (1.45)

FIGURE 1.33 Exponentially damped sinusoidal signal e^{−at} sin(ωt), with a > 0.
As an illustration, consider the parallel LCR circuit of Fig. 1.34. The operation of the circuit for t ≥ 0 is described by

C (d/dt) v(t) + (1/R) v(t) + (1/L) ∫_{−∞}^{t} v(τ) dτ = 0   (1.46)

FIGURE 1.34 Parallel LCR circuit, with inductor L, capacitor C, and resistor R all assumed to be ideal.
where v(t) is the voltage across the capacitor at time t ≥ 0. Equation (1.46) is an integro-differential equation. Its solution is given by

v(t) = V0 e^{−t/(2CR)} cos(ω0 t)   (1.47)

where

ω0 = √(1/(LC) − 1/(4C²R²))   (1.48)

In Eq. (1.48) it is assumed that 4CR² > L. Comparing Eq. (1.47) with (1.45), we have A = V0, α = 1/(2CR), ω = ω0, and φ = π/2.
The circuits of Figs. 1.27, 1.30, and 1.34 served as examples in which an exponential signal, a sinusoidal signal, and an exponentially damped sinusoidal signal, respectively, arose naturally as solutions to physical problems. The operations of these circuits are described by the differential equations (1.29), (1.33), and (1.46), whose solutions were simply stated. Methods for solving these differential equations are presented in subsequent chapters.
Returning to the subject matter at hand, the discrete-time version of the exponentially damped sinusoidal signal of Eq. (1.45) is described by

x[n] = Br^n sin[Ωn + φ]   (1.49)

For the signal of Eq. (1.49) to decay exponentially with time, the parameter r must lie in the range 0 < |r| < 1.
• STEP FUNCTION
The discrete-time version of the step function, commonly denoted by u[n], is defined by

u[n] = 1 for n ≥ 0, and 0 for n < 0   (1.50)

which is illustrated in Fig. 1.35.
The continuous-time version of the step function, commonly denoted by u(t), is defined by

u(t) = 1 for t ≥ 0, and 0 for t < 0   (1.51)

Figure 1.36 presents a portrayal of the step function u(t). It is said to exhibit a discontinuity at t = 0, since the value of u(t) changes instantaneously from 0 to 1 when t = 0.
The step function u(t) is a particularly simple signal to apply. Electrically, a battery or dc source is applied at t = 0 by closing a switch, for example. As a test signal, it is useful because the output of a system due to a step input reveals a great deal about how quickly the system responds to an abrupt change in the input signal. A similar remark applies to u[n] in the context of a discrete-time system.
FIGURE 1.35 Discrete-time version of step function of unit amplitude.
FIGURE 1.36 Continuous-time version of step function of unit amplitude.
The step function u(t) may also be used to construct other discontinuous waveforms, as illustrated in the following example.
EXAMPLE 1.7 Consider the rectangular pulse x(t) shown in Fig. 1.37(a). This pulse has an amplitude A and duration T. Express x(t) as a weighted sum of two step functions.
Solution: The rectangular pulse x(t) may be written in mathematical terms as follows:

x(t) = A for |t| ≤ T/2, and 0 for |t| > T/2

Equivalently, x(t) may be expressed as the difference of two time-shifted step functions:

x(t) = A u(t + T/2) − A u(t − T/2)

FIGURE 1.37 (a) Rectangular pulse x(t) of amplitude A and duration T = 1 s, symmetric about the origin. (b) Representation of x(t) as the superposition of two step functions of amplitude A, with one step function shifted to the left by T/2 and the other shifted to the right by T/2; these two shifted signals are denoted by x1(t) and x2(t), respectively.
• IMPULSE FUNCTION
The discrete-time version of the impulse, commonly denoted by δ[n], is defined by

δ[n] = 1 for n = 0, and 0 for n ≠ 0   (1.54)

which is illustrated in Fig. 1.38.
FIGURE 1.38 Discrete-time form of impulse.
The continuous-time version of the unit impulse, commonly denoted by δ(t), is defined by the following pair of relations:

δ(t) = 0 for t ≠ 0   (1.55)

and

∫_{−∞}^{∞} δ(t) dt = 1   (1.56)

Equation (1.55) says that the impulse δ(t) is zero everywhere except at the origin. Equation (1.56) says that the total area under the unit impulse is unity. The impulse δ(t) is also referred to as the Dirac delta function. Note that the impulse δ(t) is the derivative of the step function u(t) with respect to time t. Conversely, the step function u(t) is the integral of the impulse δ(t) with respect to time t.
A graphical description of the impulse δ[n] for discrete time is straightforward, as shown in Fig. 1.38. In contrast, visualization of the unit impulse δ(t) for continuous time requires more detailed attention. One way to visualize δ(t) is to view it as the limiting form of a rectangular pulse of unit area, as illustrated in Fig. 1.39(a). Specifically, the duration of the pulse is decreased and its amplitude is increased such that the area under the pulse is maintained constant at unity.
FIGURE 1.39 (a) Evolution of a rectangular pulse of unit area into an impulse of unit strength. (b) Graphical symbol for an impulse of strength a.
As the duration decreases, the rectangular pulse better approximates the impulse. Indeed, we may generalize this result by stating that

δ(t) = lim_{T→0} g_T(t)   (1.57)

where g_T(t) is any pulse that is an even function of time t, with duration T, and unit area. The area under the pulse defines the strength of the impulse. Thus when we speak of the impulse function δ(t), in effect we are saying that its strength is unity. The graphical symbol for an impulse is depicted in Fig. 1.39(b). The strength of the impulse is denoted by the label next to the arrow.
From the defining equation (1.55), it immediately follows that the unit impulse δ(t) is an even function of time t, as shown by

δ(−t) = δ(t)   (1.58)
For the unit impulse δ(t) to have mathematical meaning, however, it has to appear as a factor in the integrand of an integral with respect to time and then, strictly speaking, only when the other factor in the integrand is a continuous function of time at which the impulse occurs. Let x(t) be such a function, and consider the product of x(t) and the time-shifted delta function δ(t − t0). In light of the two defining equations (1.55) and (1.56), we may express the integral of this product as follows:

∫_{−∞}^{∞} x(t) δ(t − t0) dt = x(t0)   (1.59)

The operation indicated on the left-hand side of Eq. (1.59) sifts out the value x(t0) of the function x(t) at time t = t0. Accordingly, Eq. (1.59) is referred to as the sifting property of the unit impulse. This property is sometimes used as the definition of a unit impulse; in effect, it incorporates Eqs. (1.55) and (1.56) into a single relation.
Another useful property of the unit impulse δ(t) is the time-scaling property, described by

δ(at) = (1/a) δ(t),  a > 0   (1.60)

To prove this property, we replace t in Eq. (1.57) with at and so write

δ(at) = lim_{T→0} g_T(at)   (1.61)

To represent the function g_T(t), we use the rectangular pulse shown in Fig. 1.40(a), which has duration T, amplitude 1/T, and therefore unit area. Correspondingly, the time-scaled function g_T(at) is shown in Fig. 1.40(b) for a > 1.
FIGURE 1.40 Steps involved in proving the time-scaling property of the unit impulse. (a) Rectangular pulse g_T(t) of amplitude 1/T and duration T, symmetric about the origin. (b) Pulse g_T(t) compressed by factor a. (c) Amplitude scaling of the compressed pulse, restoring it to unit area.
The amplitude of g_T(at) is left unchanged by the time-scaling operation. Therefore, in order to restore the area under this pulse to unity, the amplitude of g_T(at) is scaled by the same factor a, as indicated in Fig. 1.40(c). The time function in Fig. 1.40(c) is denoted by g′_T(at); it is related to g_T(at) by

g′_T(at) = a g_T(at)   (1.62)

Substituting Eq. (1.62) in (1.61), we get

δ(at) = lim_{T→0} (1/a) g′_T(at)   (1.63)

Since, by design, the area under the function g′_T(at) is unity, it follows that

δ(t) = lim_{T→0} g′_T(at)   (1.64)

Accordingly, the use of Eq. (1.64) in (1.63) results in the time-scaling property described in Eq. (1.60).
Having defined what a unit impulse is and described its properties, there is one more question that needs to be addressed: What is the practical use of a unit impulse? We cannot generate a physical impulse function, since that would correspond to a signal of infinite amplitude at t = 0 that is zero elsewhere. However, the impulse function serves a mathematical purpose by providing an approximation to a physical signal of extremely short duration and high amplitude. The response of a system to such an input reveals much about the character of the system. For example, consider the parallel LCR circuit of Fig. 1.34, assumed to be initially at rest. Suppose now a voltage signal approximating an impulse function is applied to the circuit at time t = 0. The current through an inductor cannot change instantaneously, but the voltage across a capacitor can. It follows therefore that the voltage across the capacitor suddenly rises to a value equal to V0, say, at time t = 0+. Here t = 0+ refers to the instant of time at which the energy in the input signal has been expended. Thereafter, the circuit operates without additional input. The resulting value of the voltage v(t) across the capacitor is defined by Eq. (1.47). The response v(t) is called the transient or natural response of the circuit, the evaluation of which is facilitated by the application of an impulse function as the test signal.
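The sifting property of Eq. (1.59) can be made concrete numerically by replacing δ(t − t0) with a rectangular pulse of width T and height 1/T and letting T shrink. The sketch below is an illustration only; the test function and the value of t0 are arbitrary:

x  = @(t) cos(2*pi*t);             % a smooth test function
t0 = 0.3;                          % the point to be sifted out
for T = [0.1 0.01 0.001]
    dt = T/100;
    t  = (t0 - T/2):dt:(t0 + T/2); % support of the unit-area pulse
    g  = ones(size(t))/T;          % rectangular pulse of width T and height 1/T
    disp([T, trapz(t, x(t).*g), x(t0)])   % the integral approaches x(t0)
end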
• RAMP FUNCTION
The impulse function δ(t) is the derivative of the step function u(t) with respect to time. By the same token, the integral of the step function u(t) is a ramp function of unit slope. This latter test signal is commonly denoted by r(t), which is formally defined as follows:

r(t) = t for t ≥ 0, and 0 for t < 0   (1.65)

FIGURE 1.41 Ramp function of unit slope.
FIGURE 1.42 Discrete-time version of the ramp function.
1.7 Systems Viewed as Interconnections of Operations
In mathematical terms, a system may be viewed as an interconnection of operations that transforms an input signal into an output signal with properties different from those of the input signal. The signals may be of the continuous-time or discrete-time variety, or a mixture of both. Let the overall operator H denote the action of a system. Then the application of a continuous-time signal x(t) to the input of the system yields the output signal described by

y(t) = H{x(t)}   (1.69)

Figure 1.43(a) shows a block diagram representation of Eq. (1.69).
FIGURE 1.43 Block diagram representation of operator H for (a) continuous time and (b) discrete time.
Correspondingly, for the discrete-time case, we may write

y[n] = H{x[n]}   (1.70)
where the discrete-time signals x[n] and y[n] denote the input and output signals, respectively, as depicted in Fig. 1.43(b).
FIGURE 1.44 Discrete-time shift operator S^k, operating on the discrete-time signal x[n] to produce x[n − k].
EXAMPLE 1.8 Consider a discrete-time system whose output signal y[n] is the average of the three most recent values of the input signal x[n], as shown by

y[n] = (1/3)(x[n] + x[n − 1] + x[n − 2])

Formulate the operator H for this system; hence, develop a block diagram representation for it.
Solution: Let the operator S^k denote a system that time shifts the input x[n] by k time units to produce an output equal to x[n − k], as depicted in Fig. 1.44. Accordingly, we may define the overall operator H for the moving-average system as

H = (1/3)(1 + S + S²)

Two different implementations of the operator H (i.e., the moving-average system) that suggest themselves are presented in Fig. 1.45. The implementation shown in part (a) of the figure uses the cascade connection of two identical unity time shifters, namely, S¹ = S. On the other hand, the implementation shown in part (b) of the figure uses two different time shifters, S and S², connected in parallel.
FIGURE 1.45 Two different (but equivalent) implementations of the moving-average system: (a) cascade form of implementation, and (b) parallel form of implementation.
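In MATLAB, this moving-average system can be simulated in one line with the built-in filter function; the test input below is an arbitrary noisy cosine chosen only to show the smoothing effect:

n = 0:20;
x = cos(0.2*pi*n) + 0.5*randn(size(n));   % illustrative noisy input
y = filter([1 1 1]/3, 1, x);              % y[n] = (x[n] + x[n-1] + x[n-2])/3
stem(n, x), hold on, stem(n, y, 'filled'), hold off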
• Drill Problem 1.17 Express the operator that describes the input-output relation

1.8 Properties of Systems
The properties of a system describe the characteristics of the operator H representing the system. In what follows, we study some of the most basic properties of systems.
• STABILITY
A system is said to be bounded input-bounded output (BIBO) stable if and only if every bounded input results in a bounded output. The output of such a system does not diverge if the input does not diverge.
To put the condition for BIBO stability on a formal basis, consider a continuous-time system whose input-output relation is as described in Eq. (1.69). The operator H is BIBO stable if the output signal y(t) satisfies the condition

|y(t)| ≤ My < ∞ for all t

whenever the input signals x(t) satisfy the condition

|x(t)| ≤ Mx < ∞ for all t

Both Mx and My represent some finite positive numbers. We may describe the condition for the BIBO stability of a discrete-time system in a similar manner.
From an engineering perspective, it is important that a system of interest remains stable under all possible operating conditions. It is only then that the system is guaranteed to produce a bounded output for a bounded input. Unstable systems are usually to be avoided, unless some mechanism can be found to stabilize them.
One famous example of an unstable system is the first Tacoma Narrows suspension bridge that collapsed on November 7, 1940, at approximately 11:00 a.m., due to wind-induced vibrations. Situated on the Tacoma Narrows in Puget Sound, near the city of Tacoma, Washington, the bridge had only been open for traffic a few months before it collapsed; see Fig. 1.46 for photographs taken just prior to failure of the bridge and soon thereafter.
FIGURE 1.46 Dramatic photographs showing the collapse of the Tacoma Narrows suspension bridge on November 7, 1940. (a) Photograph showing the twisting motion of the bridge's center span just before failure. (b) A few minutes after the first piece of concrete fell, this second photograph shows a 600-ft section of the bridge breaking out of the suspension span and turning upside down as it crashed in Puget Sound, Washington. Note the car in the top right-hand corner of the photograph. (Courtesy of the Smithsonian Institution.)
EXAMPLE 1.9 Show that the moving-average system described in Example 1.8 is BIBO stable.
Solution: Assume that the input signal satisfies |x[n]| ≤ Mx < ∞ for all n. Then

|y[n]| = (1/3)|x[n] + x[n − 1] + x[n − 2]|
       ≤ (1/3)(|x[n]| + |x[n − 1]| + |x[n − 2]|)
       ≤ Mx

Hence the output is bounded whenever the input is bounded, and the moving-average system is BIBO stable.
• Drill Problem 1.18 Show that the moving-average system described by the input-output relation

EXAMPLE 1.10 Consider a discrete-time system whose input-output relation is defined by

y[n] = r^n x[n]

where r > 1. Show that this system is unstable.
Solution: Assume that the input signal x[n] satisfies the condition |x[n]| ≤ Mx < ∞ for all n. We then find that

|y[n]| = |r^n x[n]| = |r^n| · |x[n]|

With r > 1, the multiplying factor r^n diverges for increasing n. Accordingly, the condition that the input signal is bounded is not sufficient to guarantee a bounded output signal, and so the system is unstable. To prove stability, we need to establish that all bounded inputs produce a bounded output.
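The instability in Example 1.10 is easy to see numerically; the sketch below (not from the text) drives the system with a bounded input of unit amplitude and plots the growing output:

r = 1.1;                           % any r > 1 will do (illustrative value)
n = 0:60;
x = ones(size(n));                 % bounded input, |x[n]| <= 1 for all n
y = (r.^n).*x;                     % y[n] = r^n x[n] grows without bound
stem(n, y)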
• MEMORY
A system is said to possess memory if its output signal depends on past values of the input signal. The temporal extent of past values on which the output depends defines how far the memory of the system extends into the past. In contrast, a system is said to be memoryless if its output signal depends only on the present value of the input signal.
For example, a resistor is memoryless since the current i(t) flowing through it in response to the applied voltage v(t) is defined by

i(t) = (1/R) v(t)

where R is the resistance of the resistor. On the other hand, an inductor has memory, since the current i(t) flowing through it is related to the applied voltage v(t) as follows:

i(t) = (1/L) ∫_{−∞}^{t} v(τ) dτ

where L is the inductance of the inductor. That is, unlike a resistor, the current through an inductor at time t depends on all past values of the voltage v(t); the memory of an inductor extends into the infinite past.
The moving-average system of Example 1.8 has memory, since the value of the output y[n] at time n depends on the present and two past values of the input. In contrast, a system described by the input-output relation

y[n] = x²[n]

is memoryless, since the value of the output signal y[n] at time n depends only on the present value of the input signal x[n].
• Drill Problem 1.19 How far does the memory of the moving-average system described by the input-output relation

v(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ
• CAUSALITY
A system is said to be causal if the present value of the output signal depends only on the present and/or past values of the input signal. In contrast, the output signal of a noncausal system depends on future values of the input signal.
For example, the moving-average system described by
• Drill Problem 1.22 Consider the RC circuit shown in Fig. 1.47. Is it causal or noncausal?
Answer: Causal.
• Drill Problem 1.23 Suppose k in the operator of Fig. 1.44 is replaced by −k. Is the resulting system causal or noncausal for positive k?
Answer: Noncausal.
FIGURE 1.47 Series RC circuit driven from an ideal voltage source v1(t), producing output voltage v2(t).
• INVERTIBILITY
A system is said to be invertible if the input of the system can be recovered from the system output. We may view the set of operations needed to recover the input as a second system connected in cascade with the given system, such that the output signal of the second system is equal to the input signal applied to the given system. To put the notion of invertibility on a formal basis, let the operator H represent a continuous-time system, with the input signal x(t) producing the output signal y(t). Let the output signal y(t) be applied to a second continuous-time system represented by the operator H⁻¹, as illustrated in Fig. 1.48. The output signal of the second system is defined by

H⁻¹{y(t)} = H⁻¹{H{x(t)}}
          = H⁻¹H{x(t)}

where we have made use of the fact that two operators H and H⁻¹ connected in cascade are equivalent to a single operator H⁻¹H. For this output signal to equal the original input signal x(t), we require that

H⁻¹H = I   (1.71)
where I denotes the identity operator. The output of a system described by the identity operator is exactly equal to the input. Equation (1.71) is the condition that the new operator H⁻¹ must satisfy in relation to the given operator H for the original input signal x(t) to be recovered from y(t). The operator H⁻¹ is called the inverse operator, and the associated system is called the inverse system. Note that H⁻¹ is not the reciprocal of the operator H; rather, the use of the superscript −1 is intended to be merely a flag indicating "inverse." In general, the problem of finding the inverse of a given system is a difficult one. In any event, a system is not invertible unless distinct inputs applied to the system produce distinct outputs. That is, there must be a one-to-one mapping between input and output signals for a system to be invertible. Identical conditions must hold for a discrete-time system to be invertible.
FIGURE 1.48 The notion of system invertibility. The second operator H⁻¹ is the inverse of the first operator H. Hence the input x(t) is passed through the cascade connection of H and H⁻¹ completely unchanged.
The property of invertibility is of particular importance in the design of communication systems. As remarked in Section 1.3, when a transmitted signal propagates through a communication channel, it becomes distorted due to the physical characteristics of the channel. A widely used method of compensating for this distortion is to include in the receiver a network called an equalizer, which is connected in cascade with the channel in a manner similar to that described in Fig. 1.48. By designing the equalizer to be the inverse of the channel, the transmitted signal is restored to its original form, assuming ideal conditions.
EXAMPLE 1.11 Consider the time-shift system described by the input-output relation

y(t) = x(t − t0) = S^{t0}{x(t)}

where the operator S^{t0} represents a time shift of t0 seconds. Find the inverse of this system.
Solution: For this example, the inverse of a time shift of t0 seconds is a time shift of −t0 seconds. We may represent the time shift of −t0 by the operator S^{−t0}. Thus applying S^{−t0} to the output signal of the given time-shift system, we get

S^{−t0}{y(t)} = S^{−t0}{S^{t0}{x(t)}}
             = S^{−t0}S^{t0}{x(t)}

For this output signal to equal the original input signal x(t), we require that

S^{−t0}S^{t0} = I

which is in perfect accord with the condition for invertibility described in Eq. (1.71).
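A discrete-time version of this cascade is easy to try out numerically; the sketch below (an illustration, not the text's code) delays a sequence by k samples and then advances it by k samples, recovering the original samples apart from the edges of the vector:

k  = 3;
n  = 0:10;
x  = sin(0.4*pi*n);
y  = [zeros(1,k), x(1:end-k)];       % y[n] = x[n - k], a delay of k samples
xr = [y(k+1:end), zeros(1,k)];       % advance by k samples
max(abs(x(1:end-k) - xr(1:end-k)))   % zero: the cascade acts as the identity here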
y(t) = (1/L) ∫_{−∞}^{t} x(τ) dτ

Answer: L (d/dt)
EXAMPLE 1.12 Show that a square-law system described by the input-output relation

y(t) = x²(t)

is not invertible.
Solution: We note that the square-law system violates a necessary condition for invertibility, which postulates that distinct inputs must produce distinct outputs. Specifically, the distinct inputs x(t) and −x(t) produce the same output y(t). Accordingly, the square-law system is not invertible.
• TIME INVARIANCE
A system is said to be time invariant if a time delay or time advance of the input signal leads to an identical time shift in the output signal. This implies that a time-invariant system responds identically no matter when the input signal is applied. Stated in another way, the characteristics of a time-invariant system do not change with time. Otherwise, the system is said to be time variant.
Consider a continuous-time system whose input-output relation is described by Eq. (1.69), reproduced here for convenience of presentation:

y(t) = H{x(t)}

Suppose the input signal x(t) is shifted in time by t0 seconds, resulting in the new input x(t − t0). This operation may be described by writing

x(t − t0) = S^{t0}{x(t)}

where the operator S^{t0} represents a time shift equal to t0 seconds. Let y_i(t) denote the output signal of the system produced in response to the time-shifted input x(t − t0). We may then write

y_i(t) = H{x(t − t0)}
       = H{S^{t0}{x(t)}}
       = HS^{t0}{x(t)}   (1.72)

which is represented by the block diagram shown in Fig. 1.49(a). Now suppose y_0(t) represents the output of the original system shifted in time by t0 seconds, as shown by

y_0(t) = S^{t0}{y(t)}
       = S^{t0}{H{x(t)}}
       = S^{t0}H{x(t)}   (1.73)

which is represented by the block diagram shown in Fig. 1.49(b). The system is time invariant if the outputs y_i(t) and y_0(t) defined in Eqs. (1.72) and (1.73) are equal for any input signal x(t). Hence we require

HS^{t0} = S^{t0}H   (1.74)

That is, for a system described by the operator H to be time invariant, the system operator H and the time-shift operator S^{t0} must commute with each other for all t0. A similar relation must hold for a discrete-time system to be time invariant.
FIGURE 1.49 The notion of time invariance. (a) Time-shift operator S^{t0} preceding operator H. (b) Time-shift operator S^{t0} following operator H. These two situations are equivalent, provided that H is time invariant.
EXAMPLE 1.13 Use the voltage v(t) across an inductor to represent the input signal x(t), and the current i(t) flowing through it to represent the output signal y(t). Thus the inductor is described by the input-output relation

y(t) = (1/L) ∫_{−∞}^{t} x(τ) dτ

where L is the inductance. Show that the inductor so described is time invariant.
Solution: Let the input x(t) be shifted by t0 seconds, yielding x(t − t0). The response y_i(t) of the inductor to x(t − t0) is

y_i(t) = (1/L) ∫_{−∞}^{t} x(τ − t0) dτ

Next, let y_0(t) denote the original output of the inductor shifted by t0 seconds, as shown by

y_0(t) = y(t − t0)
       = (1/L) ∫_{−∞}^{t−t0} x(τ) dτ

Though at first examination y_i(t) and y_0(t) look different, they are in fact equal, as shown by a simple change in the variable for integration. Let

τ′ = τ − t0

For a constant t0, we have dτ′ = dτ. Hence changing the limits of integration, the expression for y_i(t) may be rewritten as

y_i(t) = (1/L) ∫_{−∞}^{t−t0} x(τ′) dτ′

which, in mathematical terms, is identical to y_0(t). It follows therefore that an ordinary inductor is time invariant.
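A discrete-time analog of this check can be run in MATLAB by replacing the integral with a cumulative sum; the sketch below (not from the text) shifts the input first and the output first and compares the two results:

L  = 1; k = 4;                                 % illustrative inductance and shift
x  = [zeros(1,5), ones(1,10), zeros(1,16)];    % a test input that starts at zero
y1 = cumsum([zeros(1,k), x(1:end-k)])/L;       % shift the input, then apply the system
y  = cumsum(x)/L;
y2 = [zeros(1,k), y(1:end-k)];                 % apply the system, then shift the output
max(abs(y1 - y2))                              % zero, consistent with time invariance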
EXAMPLE 1.14 A thermistor has a resistance that varies with time due to temperature changes. Let R(t) denote the resistance of the thermistor, expressed as a function of time. Associating the input signal x(t) with the voltage applied across the thermistor, and the output signal y(t) with the current flowing through it, we may express the input-output relation of the thermistor as

y(t) = x(t)/R(t)

Is the thermistor so described time invariant?
Solution: Let y_i(t) denote the response of the thermistor produced by a time-shifted version x(t − t0) of the original input signal. We may then write

y_i(t) = x(t − t0)/R(t)

Next, let y_0(t) denote the original output of the thermistor shifted in time by t0, as shown by

y_0(t) = y(t − t0)
       = x(t − t0)/R(t − t0)

We now see that since, in general, R(t) ≠ R(t − t0) for t0 ≠ 0, then

y_0(t) ≠ y_i(t) for t0 ≠ 0

Hence a thermistor is time variant, which is intuitively satisfying.
Answer: No. •
• LINEARITY
A system is said to be linear if it satisfies the principle of superposition. That is, the response of a linear system to a weighted sum of input signals is equal to the same weighted sum of output signals, each output signal being associated with a particular input signal acting on the system independently of all the other input signals. A system that violates the principle of superposition is said to be nonlinear.
Let the operator H represent a continuous-time system. Let the signal applied to the system input be defined by the weighted sum

x(t) = Σ_{i=1}^{N} a_i x_i(t)   (1.75)

where x1(t), x2(t), ..., xN(t) denote a set of input signals, and a1, a2, ..., aN denote the corresponding weighting factors. The resulting output signal is written as

y(t) = H{x(t)}
     = H{Σ_{i=1}^{N} a_i x_i(t)}   (1.76)

If the system is linear, we may (in accordance with the principle of superposition) express the output signal y(t) of the system as

y(t) = Σ_{i=1}^{N} a_i y_i(t)   (1.77)

where y_i(t) is the output of the system in response to the input x_i(t) acting alone; that is,

y_i(t) = H{x_i(t)}   (1.78)
FIGURE 1.50 The linearity property of a system. (a) The combined operation of amplitude scaling and summation precedes the operator H for multiple inputs. (b) The operator H precedes amplitude scaling for each input; the resulting outputs are summed to produce the overall output y(t). If these two configurations produce the same output y(t), the operator H is linear.
The weighted sum of Eq. (1.77) describing the output signal y(t) is of the same mathematical form as that of Eq. (1.75), describing the input signal x(t). Substituting Eq. (1.78) into (1.77), we get

y(t) = Σ_{i=1}^{N} a_i H{x_i(t)}   (1.79)

In order to write Eq. (1.79) in the same form as Eq. (1.76), the system operation described by H must commute with the summation and amplitude scaling in Eq. (1.79), as illustrated in Fig. 1.50. Indeed, Eqs. (1.78) and (1.79), viewed together, represent a mathematical statement of the principle of superposition. For a linear discrete-time system, the principle of superposition is described in a similar manner.
Solution: Let the input signal x[n] be expressed as the weighted sum

x[n] = Σ_{i=1}^{N} a_i x_i[n]

The corresponding output signal is then given by the weighted sum

y[n] = Σ_{i=1}^{N} a_i y_i[n]

where y_i[n] is the output due to each input x_i[n] acting independently. We thus see that the given system satisfies the principle of superposition and is therefore linear.
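The superposition test itself is easy to automate. The sketch below (an illustration, not the text's code) applies it to the moving-average system of Example 1.8, which is known to be linear, using two arbitrary inputs and weights:

h  = [1 1 1]/3;                            % moving-average system of Example 1.8
n  = 0:19;
x1 = cos(0.3*pi*n); x2 = randn(size(n));   % two arbitrary inputs
a1 = 2; a2 = -0.5;                         % arbitrary weights
lhs = filter(h, 1, a1*x1 + a2*x2);         % response to the weighted sum of inputs
rhs = a1*filter(h, 1, x1) + a2*filter(h, 1, x2);   % weighted sum of the responses
max(abs(lhs - rhs))                        % zero to round-off, as Eq. (1.77) requires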
EXAMPLE 1.16 Consider next the continuous-time system described by the input-output relation

y(t) = x(t)x(t − 1)

Let the input signal x(t) be the weighted sum of Eq. (1.75). Correspondingly, the output signal of the system is given by the double summation

y(t) = Σ_{i=1}^{N} a_i x_i(t) Σ_{j=1}^{N} a_j x_j(t − 1)
     = Σ_{i=1}^{N} Σ_{j=1}^{N} a_i a_j x_i(t) x_j(t − 1)

The form of this equation is radically different from that describing the input signal x(t). That is, here we cannot write y(t) = Σ_{i=1}^{N} a_i y_i(t). Thus the system violates the principle of superposition and is therefore nonlinear.
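Applying the same numerical superposition test to a sampled version of this system (an illustration only; the sampling interval and test inputs are arbitrary) shows the violation directly:

Ts  = 0.01; t = 0:Ts:2;
d   = round(1/Ts);                           % samples corresponding to a delay of 1 second
sys = @(x) x.*[zeros(1,d), x(1:end-d)];      % y(t) = x(t) x(t - 1) acting on samples
x1  = sin(2*pi*t); x2 = cos(3*pi*t);
lhs = sys(x1 + x2);
rhs = sys(x1) + sys(x2);
max(abs(lhs - rhs))                          % clearly nonzero: superposition fails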
1.9 Exploring Concepts with MATLAB
The basic object used in MATLAB is a rectangular numerical matrix with possibly complex elements. The kinds of data objects encountered in the study of signals and systems are all well suited to matrix representations. In this section we use MATLAB to explore the generation of elementary signals described in previous sections. The exploration of systems and more advanced signals is deferred to subsequent chapters.
The MATLAB Signal Processing Toolbox has a large variety of functions for generating signals, most of which require that we begin with the vector representation of time t or n. To generate a vector t of time values with a sampling interval of 1 ms on the interval from 0 to 1 s, for example, we use the command:

t = 0:.001:1;

This corresponds to 1000 time samples for each second, or a sampling rate of 1000 Hz. To generate a vector n of time values for discrete-time signals, say, from n = 0 to n = 1000, we use the command:

n = 0:1000;

Given t or n, we may then proceed to generate the signal of interest.
In MATLAB, a discrete-time signal is represented exactly, because the values of the signal are described as the elements of a vector. On the other hand, MATLAB provides only an approximation to a continuous-time signal. The approximation consists of a vector whose individual elements are samples of the underlying continuous-time signal. When using this approximate approach, it is important that we choose the sampling interval sufficiently small so as to ensure that the samples capture all the details of the signal.
In this section, we consider the generation of both continuous-time and discrete-time signals of various kinds.
• PERIODIC SIGNALS
It is an easy matter to generate periodic signals such as square waves and triangular waves using MATLAB. Consider first the generation of a square wave of amplitude A, fundamental frequency w0 (measured in radians per second), and duty cycle rho. That is, rho is the fraction of each period for which the signal is positive. To generate such a signal, we use the basic command:

A*square(w0*t, rho*100);

where the second argument of square is the duty cycle expressed as a percentage. The square wave shown in Fig. 1.13(a) was thus generated using the following complete set of commands:

>> A = 1;
>> w0 = 10*pi;
>> rho = 0.5;
>> t = 0:.001:1;
>> sq = A*square(w0*t, rho*100);
>> plot(t, sq)

In the second command, pi is a built-in MATLAB function that returns the floating-point number closest to π. The last command is used to view the square wave. The command plot draws lines connecting the successive values of the signal and thus gives the appearance of a continuous-time signal.
Consider next the generation of a triangular wave of amplitude A, fundamental frequency w0 (measured in radians per second), and width W. Let the period of the triangular wave be T, with the first maximum value occurring at t = WT. The basic command for generating this second periodic signal is

A*sawtooth(w0*t, W);

Thus to generate the symmetric triangular wave shown in Fig. 1.14, we used the following commands:

>> A = 1;
>> w0 = 10*pi;
>> W = 0.5;
>> t = 0:0.001:1;
>> tri = A*sawtooth(w0*t, W);
>> plot(t, tri)
• Drill Problem 1.29 Use MATLAB to generate the triangular wave depicted in Fig. 1.14.
• EXPONENTIAL SIGNALS
The decaying exponential of Fig. 1.26(a) and the growing exponential of Fig. 1.26(b) are generated with the commands B*exp(-a*t) and B*exp(a*t), respectively. In both cases, the exponential parameter a is positive. The following commands were used to generate the decaying exponential signal shown in Fig. 1.26(a):

>> B = 5;
>> a = 6;
>> t = 0:.001:1;
>> x = B*exp(-a*t);   % decaying exponential
>> plot(t, x)

The growing exponential signal shown in Figure 1.26(b) was generated using the commands

>> B = 1;
>> a = 5;
>> t = 0:0.001:1;
>> x = B*exp(a*t);   % growing exponential
>> plot(t, x)
Consider next the exponential sequence defined in Eq. (1.31). The decaying form of this exponential, shown in Fig. 1.28(a), was generated using the following commands:

>> B = 1;
>> r = 0.85;
>> n = -10:10;
>> x = B*r.^n;   % decaying exponential
>> stem(n, x)

Note that, in this example, the base r is a scalar but the exponent is a vector, hence the use of the symbol .^ to denote element-by-element powers.
• Drill Problem 1.30 Use MATLAB to generate the decaying exponential sequence depicted in Fig. 1.28(a).
• SINUSOIDAL SIGNALS
MATLAB also contains trigonometric functions that can be used to generate sinusoidal signals. A cosine signal of amplitude A, frequency w0 (measured in radians per second), and phase angle phi (in radians) is obtained by using the command

A*cos(w0*t + phi);

Alternatively, we may use the sine function to generate a sinusoidal signal by using the command

A*sin(w0*t + phi);

These two commands were used as the basis of generating the sinusoidal signals shown in Fig. 1.29. Specifically, for the cosine signal shown in Fig. 1.29(a), we used the following commands:

>> A = 4;
>> w0 = 20*pi;
>> phi = pi/6;
>> t = 0:.001:1;
>> cosine = A*cos(w0*t + phi);
>> plot(t, cosine)

• Drill Problem 1.31 Use MATLAB to generate the sine signal shown in Fig. 1.29(b).
Consider next the discrete-time sinusoidal signal defined in Eq. (1.36). This periodic signal is plotted in Fig. 1.31. The figure was generated using the following commands:

>> A = 1;
>> omega = 2*pi/12;   % angular frequency
>> phi = 0;
>> n = -10:10;
>> y = A*cos(omega*n);
>> stem(n, y)

In all of the signal-generation commands described above, we have generated the desired amplitude by multiplying a scalar, A, into a vector representing a unit-amplitude signal (e.g., sin(w0*t + phi)). This operation is described by using an asterisk. We next consider the generation of a signal that requires element-by-element multiplication of two vectors.
Suppose we multiply a sinusoidal signal by an exponential signal to produce an exponentially damped sinusoidal signal. With each signal component being represented by a vector, the generation of such a product signal requires the multiplication of one vector by another vector on an element-by-element basis. MATLAB represents element-by-element multiplication by using a dot followed by an asterisk. Thus the command for generating the exponentially damped sinusoidal signal

x(t) = A sin(ω0 t + φ) exp(−at)

is as follows:

A*sin(w0*t + phi).*exp(-a*t);

For a decaying exponential, a is positive. This command was used in the generation of the waveform shown in Fig. 1.33. The complete set of commands is as follows:

>> A = 60;
>> w0 = 20*pi;
>> phi = 0;
>> a = 6;
>> t = 0:.001:1;
>> expsin = A*sin(w0*t + phi).*exp(-a*t);
>> plot(t, expsin)

Consider next the exponentially damped sinusoidal sequence depicted in Fig. 1.51. This sequence is obtained by multiplying the sinusoidal sequence x[n] of Fig. 1.31 by the decaying exponential sequence y[n] of Fig. 1.28(a). Both of these sequences are defined for n = -10:10. Thus using z[n] to denote this product sequence, we may use the following commands to generate and visualize it:

>> z = x.*y;   % elementwise multiplication
>> stem(n, z)
FIGURE 1.51 Exponentially damped sinusoidal sequence.
• Drill Problem 1.32 Use MATLAB to generate a signal defined as the product of the growing exponential of Fig. 1.28(b) and the sinusoidal signal of Fig. 1.31.
• STEP, IMPULSE, AND RAMP FUNCTIONS
ramp = n;
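A minimal way to generate the unit step, unit impulse, and ramp of Eqs. (1.50), (1.54), and (1.65) as discrete-time vectors is sketched below; the index range and variable names are choices made here for illustration rather than the text's own:

n     = 0:24;                        % time index (illustrative range)
u     = ones(size(n));               % u[n] = 1 for n >= 0
delta = [1, zeros(1, length(n)-1)];  % delta[n] = 1 at n = 0 and 0 elsewhere
ramp  = n;                           % r[n] = n for n >= 0
stem(n, delta)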
In Fig. 1.37, we illustrated how a pair of step functions shifted in time relative to each other may be used to produce a rectangular pulse. In light of the procedure illustrated therein, we may formulate the following set of commands for generating a rectangular pulse centered on the origin:

t = -1:1/500:1;
u1 = [zeros(1, 250), ones(1, 751)];
u2 = [zeros(1, 751), ones(1, 250)];
u = u1 - u2;

The first command defines time running from -1 second to 1 second in increments of 2 milliseconds. The second command generates a step function u1 of unit amplitude, onset at time t = -0.5 second. The third command generates a second step function u2, onset at time t = 0.5 second. The fourth command subtracts u2 from u1 to produce a rectangular pulse of unit amplitude and unit duration centered on the origin.
• USER-DEFINED FUNCTION
An important feature of the MATLAB environment is that it permits us to create our own M-files or subroutines. Two types of M-files exist, namely, scripts and functions. Scripts, or script files, automate long sequences of commands. On the other hand, functions, or function files, provide extensibility to MATLAB by allowing us to add new functions. Any variables used in function files do not remain in memory. For this reason, input and output variables must be declared explicitly.
We may thus say that a function M-file is a separate entity characterized as follows:
1. It begins with a statement defining the function name, its input arguments, and its output arguments.
2. It also includes additional statements that compute the values to be returned.
3. The inputs may be scalars, vectors, or matrices.
Consider, for example, the generation of the rectangular pulse depicted in Fig. 1.37 using an M-file. This pulse has unit amplitude and unit duration. To generate it, we create a file called rect.m containing the following statements:

function g = rect(x)
g = zeros(size(x));
set1 = find(abs(x) <= 0.5);
g(set1) = ones(size(set1));

In the last three statements of this M-file, we have introduced two useful functions:
• The function size returns a two-element vector containing the row and column dimensions of a matrix.
• The function find returns the indices of a vector or matrix that satisfy a prescribed relational condition. For the example at hand, find(abs(x) <= T) returns the indices of the vector x where the absolute value of x is less than or equal to T.
The new function rect.m can be used like any other MATLAB function. In particular, we may use it to generate a rectangular pulse, as follows:

t = -1:1/500:1;
plot(t, rect(t))
1.10 Summary
In this chapter we presented an overview of signals and systems, setting the stage for the rest of the book. A particular theme that stands out in the discussion presented herein is that signals may be of the continuous-time or discrete-time variety, and likewise for systems, as summarized here:
• A continuous-time signal is defined for all values of time. In contrast, a discrete-time signal is defined only for discrete instants of time.
• A continuous-time system is described by an operator that changes a continuous-time input signal into a continuous-time output signal. In contrast, a discrete-time system is described by an operator that changes a discrete-time input signal into a discrete-time output signal.
In practice, many systems mix continuous-time and discrete-time components. Analysis of mixed systems is an important part of the material presented in Chapters 4, 5, 8, and 9.
In discussing the various properties of signals and systems, we took special care in treating these two classes of signals and systems side by side. In so doing, much is gained by emphasizing the similarities and differences between continuous-time signals/systems and their discrete-time counterparts. This practice is followed in later chapters too, as appropriate.
Another noteworthy point is that, in the study of systems, particular attention is given to the analysis of linear time-invariant systems. Linearity means that the system obeys the principle of superposition. Time invariance means that the characteristics of the system do not change with time. By invoking these two properties, the analysis of systems becomes mathematically tractable. Indeed, a rich set of tools has been developed for the analysis of linear time-invariant systems, which provides direct motivation for much of the material on system analysis presented in this book.
In this chapter, we also explored the use of MATLAB for the generation of elementary waveforms, representing the continuous-time and discrete-time variety. MATLAB provides a powerful environment for exploring concepts and testing system designs, as will be illustrated in subsequent chapters.
FURTHER READING
1. For a readable account of signals, their representations, and use in communication systems, see the book:
• Pierce, J. R., and A. M. Noll, Signals: The Science of Telecommunications (Scientific American Library, 1990)
6. For an account of the legendary story of the first Tacoma Narrows suspension bridge, see the report:
• Smith, D., "A Case Study and Analysis of the Tacoma Narrows Bridge Failure," 99.497 Engineering Project, Department of Mechanical Engineering, Carleton University, March 29, 1974 (supervised by Professor G. Kardos)
PROBLEMS
1.1 Find the even and odd components of each of the following signals:
(a) x(t) = cos(t) + sin(t) + sin(t) cos(t)
(b) x(t) = 1 + t + 3t² + 5t³ + 9t⁴
(c) x(t) = 1 + t cos(t) + t² sin(t) + t³ sin(t) cos(t)
(d) x(t) = (1 + t³) cos³(10t)
1.2 Determine whether the following signals are periodic. If they are periodic, find the fundamental period.
(a) x(t) = (cos(2πt))²
(b) x(t) = Σ_{k=−∞}^{∞} w(t − 2k) for w(t) depicted in Fig. P1.2b.
(c) x(t) = Σ_{k=−∞}^{∞} w(t − 3k) for w(t) depicted in Fig. P1.2b.
(d) x[n] = (−1)^n
(e) x[n] = (−1)^{n²}
(f) x[n] depicted in Fig. P1.2f.
(g) x(t) depicted in Fig. P1.2g.
(h) x[n] = cos(2n)
(i) x[n] = cos(2πn)
1.3 The sinusoidal signal
x(t) = 3 cos(200t + π/6)
is passed through a square-law device defined by the input-output relation
y(t) = x²(t)
Using the trigonometric identity
cos²θ = ½(cos 2θ + 1)
show that the output y(t) consists of a dc component and a sinusoidal component.
(a) Specify the dc component.
(b) Specify the amplitude and fundamental frequency of the sinusoidal component in the output y(t).
1.4 Categorize each of the following signals as an energy or power signal, and find the energy or power of the signal.
(a) x(t) = { t, 0 ≤ t ≤ 1; 2 − t, 1 ≤ t ≤ 2; 0, otherwise }
(b) x[n] = { n, 0 ≤ n ≤ 5; 10 − n, 5 ≤ n ≤ 10; 0, otherwise }
(c) x(t) = 5 cos(πt) + sin(5πt), −∞ < t < ∞
(d) x(t) = { 5 cos(πt), −1 ≤ t ≤ 1; 0, otherwise }
(e) x(t) = { 5 cos(πt), −0.5 ≤ t ≤ 0.5; 0, otherwise }
(f) x[n] = { sin(πn/2), −4 < n ≤ 4; 0, otherwise }
(g) x[n] = { cos(πn), −4 < n ≤ 4; 0, otherwise }
(h) x[n] = { cos(πn), n ≥ 0; 0, otherwise }
FIGURE P1.2  The signal w(t) (panel b) and the signals x[n] (panel f) and x(t) (panel g) referred to in Problem 1.2.
1.5 Consider the sinusoidal signal
x(t) = A cos(ωt + φ)
Determine the average power of x(t).
1.6 The angular frequency Ω of the sinusoidal signal
x[n] = A cos(Ωn + φ)
satisfies the condition for x[n] to be periodic. Determine the average power of x[n].
1.7 The raised-cosine pulse x(t) shown in Fig. P1.7 is defined as
x(t) = { ½[cos(ωt) + 1], −π/ω < t < π/ω; 0, otherwise }
Determine the total energy of x(t).
FIGURE P1.7  The raised-cosine pulse x(t), nonzero for −π/ω < t < π/ω.
1.8 The trapezoidal pulse x(t) shown in Fig. P1.8 is defined by
x(t) = { 5 − t, 4 < t < 5; 1, −4 < t < 4; t + 5, −5 < t < −4; 0, otherwise }
Determine the total energy of x(t).
FIGURE P1.8  The trapezoidal pulse x(t), nonzero for −5 < t < 5.
1.9 The trapezoidal pulse x(t) of Fig. P1.8 is applied to a differentiator, defined by
y(t) = (d/dt) x(t)
1.10 A rectangular pulse x(t) is defined by
x(t) = { A, 0 < t < T; 0, otherwise }
The pulse x(t) is applied to an integrator defined by
y(t) = ∫₀ᵗ x(τ) dτ
Find the total energy of the output y(t).
1.11 The trapezoidal pulse x(t) of Fig. P1.8 is time scaled, producing
y(t) = x(at)
Sketch y(t) for (a) a = 5 and (b) a = 0.2.
1.12 A triangular pulse signal x(t) is depicted in Fig. P1.12. Sketch each of the following signals derived from x(t):
(a) x(3t)
(b) x(3t + 2)
(c) x(−2t − 1)
(d) x(2(t + 2))
(e) x(2(t − 2))
(f) x(3t) + x(3t + 2)
FIGURE P1.12  The triangular pulse x(t), nonzero for −1 < t < 1.
1.13  y(t) = x(10t − 5)
1.14 Let x(t) and y(t) be given in Figs. P1.14(a) and (b), respectively. Carefully sketch the following signals:
(a) x(t)y(t − 1)
(b) x(t − 1)y(−t)
(c) x(t + 1)y(t − 2)
(d) x(t)y(−1 − t)
FIGURE P1.14  (a) The signal x(t); (b) the signal y(t). These signals are used in Problem 1.14.
1.15 Figure P1.15(a) shows a staircase-like signal x(t) that may be viewed as the superposition of four rectangular pulses. Starting with the rectangular pulse g(t) shown in Fig. P1.15(b), construct this waveform, and express x(t) in terms of g(t).
FIGURE P1.15  (a) The staircase-like signal x(t); (b) the rectangular pulse g(t).
1.16 Sketch the waveforms of the following signals:
(a) x(t) = u(t) − u(t − 2)
(b) x(t) = u(t + 1) − 2u(t) + u(t − 1)
(c) x(t) = −u(t + 3) + 2u(t + 1) − 2u(t − 1) + u(t − 3)
(d) y(t) = r(t + 1) − r(t) + r(t − 2)
(e) y(t) = r(t + 2) − r(t + 1) − r(t − 1) + r(t − 2)
1.17 Figure P1.17(a) shows a pulse x(t) that may be viewed as the superposition of three rectangular pulses. Starting with the rectangular pulse g(t) of Fig. P1.17(b), construct this waveform, and express x(t) in terms of g(t).
FIGURE P1.17  (a) The pulse x(t); (b) the rectangular pulse g(t).
1.18 Let x[n] and y[n] be the signals depicted in Figs. P1.18(a) and (b), respectively. Sketch the following signals:
(b) x[3n − 1]
(c) y[1 − n]
(d) y[2 − 2n]
(e) x[n − 2] + y[n + 2]
(f) x[2n] + y[n − 4]
(g) x[n + 2]y[n − 2]
(h) x[3 − n]y[n]
(i) x[−n]y[−n]
(j) x[n]y[−2 − n]
(k) x[n + 2]y[6 − n]
FIGURE P1.18  (a) The signal x[n]; (b) the signal y[n].
1.19 Consider the sinusoidal signal
x[n] = 10 cos(4πn/31 + π/5)
Determine the fundamental period of x[n].
1.20 The sinusoidal signal x[n] has fundamental period N = 10 samples. Determine the smallest angular frequency Ω for which x[n] is periodic.
1.21 Determine whether the following signals are periodic. If they are periodic, find the fundamental period.
(a) x[n] = cos(πn/5)
(b) x[n] = cos(πn²)
(c) x(t) = cos(2t) + sin(3t)
(d) x(t) = Σ_{k=−∞}^{∞} (−1)^k δ(t − 2k)
(e) x[n] = Σ_{k=−∞}^{∞} {δ[n − 3k] + δ[n − k²]}
(f) x(t) = cos(t)u(t)
(g) x(t) = v(t) + v(−t), where v(t) = cos(t)u(t)
(h) x(t) = v(t) + v(−t), where v(t) = sin(t)u(t)
(i) x[n] = cos(πn/2) sin(πn/8)
1.22 A complex sinusoidal signal x(t) has the following components:
Re{x(t)} = xR(t) = A cos(ωt + φ)
Im{x(t)} = xI(t) = A sin(ωt + φ)
The amplitude of x(t) is defined by the square root of xR²(t) + xI²(t). Show that this amplitude equals A, independent of the phase angle φ.
1.23 Consider the complex-valued exponential signal
x(t) = Ae^{αt + jωt},  α > 0
Evaluate the real and imaginary components of x(t).
1.25 (b) What happens to the differentiator output y(t) as T approaches zero? Use the definition of a unit impulse δ(t) to express your answer.
(c) What is the total area under the differentiator output y(t) for all T? Justify your answer.
Based on your findings in parts (a) to (c), describe in succinct terms the result of differentiating a unit impulse.
FIGURE P1.25  The pulse x(t), nonzero for −T/2 < t < T/2.
1.26 The derivative of the impulse function δ(t) is referred to as a doublet. It is denoted by δ'(t). Show that δ'(t) satisfies the sifting property
∫_{−∞}^{∞} δ'(t − t₀) f(t) dt = −f'(t₀)
where
f'(t₀) = (d/dt) f(t) |_{t = t₀}
Assume that the function f(t) has a continuous derivative at time t = t₀.
1.27 A system consists of several subsystems connected ...
FIGURE P1.39  (a), (b) Input signals and the corresponding outputs y₁(t), y₂(t), y₃(t), and y₄(t).
1.28 The systems given below have input x(t) or x[n] and output y(t) or y[n], respectively. Determine whether each of them is (i) memoryless, (ii) stable, (iii) causal, (iv) linear, and (v) time invariant.
(a) y(t) = cos(x(t))
(b) y[n] = 2x[n]u[n]
(c) y[n] = log₁₀(|x[n]|)
(d) y(t) = ∫_{−∞}^{t} x(τ) dτ
(e) y[n] = Σ_{k=−∞}^{n} x[k + 2]
(f) y(t) = (d/dt) x(t)
(g) y[n] = cos(2πx[n + 1]) + x[n]
(h) y(t) = (d/dt){e^{−t} x(t)}
1.34 Show that the discrete-time system described in Problem 1.29 is time invariant, independent of the coefficients a₀, a₁, a₂, and a₃.
1.35 Is it possible for a time-variant system to be linear? Justify your answer.
1.36 Show that an Nth power-law device defined by the input-output relation
y(t) = x^N(t),  N integer and N ≠ 0, 1
is nonlinear.
1.37 A linear time-invariant system may be causal or noncausal. Give an example for each one of these two possibilities.
1.38 Figure 1.50 shows two equivalent system configurations on condition that the system ...
FIGURE P1.40  (a) Input-output signal pairs x₁(t), y₁(t) and x₂(t), y₂(t); (b) the signal x(t).
1.43 (a) The solution to a linear differential equation is given by
x(t) = 10e^{−t} − 5e^{−0.5t}
Using MATLAB, plot x(t) versus t for t = 0:0.01:5.
(b) Repeat the problem for
x(t) = 10e^{−t} + 5e^{−0.5t}
1.44 ... 750, 1000. Using MATLAB, investigate the effect of varying a on the signal x(t) for −2 < t < 2 milliseconds.
1.45 A raised-cosine sequence is defined by
w[n] = { ½[cos(2πFn) + 1], −1/(2F) ≤ n ≤ 1/(2F); 0, otherwise }
Use MATLAB to plot w[n] versus n for F = 0.1.
1.46 A rectangular pulse x(t) is defined by
x(t) = { 10, 0 ≤ t ≤ 5; 0, otherwise }
Generate x(t) using:
(a) A pair of time-shifted step functions.
(b) An M-file.
Time-Domain Representations
for Linear Time-Invariant Systems
2.1 Introduction
In this chapter we consider several methods for describing the relationship between the input and output of linear time-invariant (LTI) systems. The focus here is on system descriptions that relate the output signal to the input signal when both signals are represented as functions of time, hence the terminology "time domain" in the chapter title. Methods for relating system output and input in domains other than time are presented in later chapters. The descriptions developed in this chapter are useful for analyzing and predicting the behavior of LTI systems and for implementing discrete-time systems on a computer.
We begin by characterizing a LTI system in terms of its impulse response. The impulse response is the system output associated with an impulse input. Given the impulse response, we determine the output due to an arbitrary input by expressing the input as a weighted superposition of time-shifted impulses. By linearity and time invariance, the output must be a weighted superposition of time-shifted impulse responses. The term "convolution" is used to describe the procedure for determining the output from the input and the impulse response.
The second method considered for characterizing the input-output behavior of LTI systems is the linear constant-coefficient differential or difference equation. Differential equations are used to represent continuous-time systems, while difference equations represent discrete-time systems. We focus on characterizing differential and difference equation solutions with the goal of developing insight into system behavior.
The third system representation we discuss is the block diagram. A block diagram represents the system as an interconnection of three elementary operations: scalar multiplication, addition, and either a time shift for discrete-time systems or integration for continuous-time systems.
The final time-domain representation discussed in this chapter is the state-variable description. The state-variable description is a series of coupled first-order differential or difference equations that represent the behavior of the system's "state" and an equation that relates the state to the output. The state is a set of variables associated with energy storage or memory devices in the system.
All four of these time-domain system representations are equivalent in the sense that identical outputs result from a given input. However, each relates the input and output in a different manner. Different representations offer different views of the system, with each offering different insights into system behavior. Each representation has advantages and
disadvantages for analyzing and implementing systems. Understanding how different representations are related and determining which offers the most insight and straightforward solution in a particular problem are important skills to develop.

2.2 Convolution: Impulse Response Representation for LTI Systems
Consider the product of a signal x[n] and the impulse sequence δ[n], written as
x[n]δ[n] = x[0]δ[n]
Generalize this relationship to the product of x[n] and a time-shifted impulse sequence to obtain
x[n]δ[n − k] = x[k]δ[n − k]
In this expression n represents the time index; hence x[n] denotes a signal, while x[k] represents the value of the signal x[n] at time k. We see that multiplication of a signal by a time-shifted impulse results in a time-shifted impulse with amplitude given by the value of the signal at the time the impulse occurs. This property allows us to express x[n] as the following weighted sum of time-shifted impulses:
x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k]     (2.1)
Applying the system, represented by the operator H, to an input x[n] written in the form of Eq. (2.1) gives the output
y[n] = H{ Σ_{k=−∞}^{∞} x[k]δ[n − k] }
FIGURE 2.1  Graphical example illustrating the representation of a signal x[n] as a weighted sum of time-shifted impulses.
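The decomposition in Eq. (2.1) is easy to verify numerically. The short MATLAB sketch below rebuilds a finite-length signal as a sum of weighted, time-shifted impulses; the particular signal used is an arbitrary illustrative choice, not one taken from the text.

    n = -3:5;
    x = [0 0 1 2 3 2 1 0 0];          % an arbitrary finite-duration signal
    xr = zeros(size(x));               % will hold the reconstruction
    for k = 1:length(n)
        xr = xr + x(k) * (n == n(k));  % x[k] times the shifted impulse delta[n - k]
    end
    max(abs(x - xr))                   % returns 0: the superposition reproduces x[n]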
Now use the linearity property to interchange the system operator H with the summation and signal values x[k] to obtain
y[n] = Σ_{k=−∞}^{∞} x[k]h_k[n]     (2.2)
where h_k[n] = H{δ[n − k]} is the response of the system to a time-shifted impulse. If we further assume the system is time invariant, then a time shift in the input results in a time shift in the output. This implies that the output due to a time-shifted impulse is a time-shifted version of the output due to an impulse; that is, h_k[n] = h₀[n − k]. Letting h[n] = h₀[n] be the impulse response of the LTI system H, Eq. (2.2) is rewritten as
y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]     (2.3)
Thus the output of a LTI system is given by a weighted sum of time-shifted impulse responses. This is a direct consequence of expressing the input as a weighted sum of time-shifted impulses. The sum in Eq. (2.3) is termed the convolution sum and is denoted by the symbol *; that is,
x[n] * h[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]
The convolution process is illustrated in Fig. 2.2. Figure 2.2(a) depicts the impulse response of an arbitrary LTI system. In Fig. 2.2(b) the input is represented as a sum of weighted and time-shifted impulses, p_k[n] = x[k]δ[n − k]. The output of the system associated with each input p_k[n] is
v_k[n] = x[k]h[n − k]
Here v_k[n] is obtained by time-shifting the impulse response k units and multiplying by x[k]. The output y[n] in response to the input x[n] is obtained by summing all the sequences v_k[n]:
y[n] = Σ_{k=−∞}^{∞} v_k[n]
That is, for each value of n, we sum the values along the k axis indicated on the right side of Fig. 2.2(b). The following example illustrates this process.
EXAMPLE 2.1  Assume a LTI system H has impulse response
h[n] = { 1, n = ±1; 2, n = 0; 0, otherwise }
Determine the output of this system in response to the input
x[n] = { 2, n = 0; 3, n = 1; −2, n = 2; 0, otherwise }
FIGURE 2.2  Illustration of the convolution sum. (a) Impulse response of a system. (b) Decomposition of the input x[n] into a weighted sum of time-shifted impulses results in an output y[n] given by a weighted sum of time-shifted impulse responses. Here p_k[n] is the weighted (by x[k]) and time-shifted (by k) impulse input, and v_k[n] is the weighted and time-shifted impulse response output. The dependence of both p_k[n] and v_k[n] on k is depicted by the k axis shown on the left- and right-hand sides of the figure. The output is obtained by summing v_k[n] over all values of k.
FIGURE 2.2  (c) The signals w_n[k] used to compute the output at time n for several values of n. Here we have redrawn the right-hand side of Fig. 2.2(b) so that the k axis is horizontal. The output is obtained for n = n₀ by summing w_{n₀}[k] over all values of k.
Here p₀[n] = 2δ[n], p₁[n] = 3δ[n − 1], and p₂[n] = −2δ[n − 2]. All other time-shifted p_k[n] are zero because the input is zero for n < 0 and n > 2. Since a weighted, time-shifted, impulse input, aδ[n − k], results in a weighted, time-shifted, impulse response output, ah[n − k], the system output may be written as
y[n] = Σ_{k=−∞}^{∞} v_k[n]
Here v₀[n] = 2h[n], v₁[n] = 3h[n − 1], v₂[n] = −2h[n − 2], and all other v_k[n] = 0. Summation of the weighted and time-shifted impulse responses over k gives
y[n] = { 0, n ≤ −2; 2, n = −1; 7, n = 0; 6, n = 1; −1, n = 2; −2, n = 3; 0, n ≥ 4 }
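A quick numerical check of this result can be made with MATLAB's conv function, which evaluates the convolution sum for finite-length sequences. The sketch below uses the signals of Example 2.1; the index bookkeeping (the output starts at the sum of the two starting indices) is the only detail that conv leaves to the user.

    h = [1 2 1];                 % h[n] for n = -1, 0, 1
    x = [2 3 -2];                % x[n] for n = 0, 1, 2
    y = conv(x, h);              % y[n] for n = -1, 0, ..., 3
    ny = (0 + (-1)) : (2 + 1);   % output index range
    disp([ny; y])                % displays 2 7 6 -1 -2, as found above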
In Example 2.1, we found all the v_k[n] and then summed over k to determine y[n]. This approach illustrates the principles that underlie convolution and is very effective when the input is of short duration so that only a small number of signals v_k[n] need to be determined. When the input has a long duration, then a very large, possibly infinite, number of signals v_k[n] must be evaluated before y[n] can be found and this procedure can be cumbersome.
An alternative approach for evaluating the convolution sum is obtained by a slight change in perspective. Consider evaluating the output at a fixed time n₀:
y[n₀] = Σ_{k=−∞}^{∞} v_k[n₀]
That is, we sum along the k or vertical axis on the right-hand side of Fig. 2.2(b) at a fixed time n = n₀. Suppose we define a signal representing the values at n = n₀ as a function of the independent variable k, w_{n₀}[k] = v_k[n₀]. The output is now obtained by summing over k:
y[n₀] = Σ_{k=−∞}^{∞} w_{n₀}[k]
Note that here we need only determine one signal, w_{n₀}[k], to evaluate the output at n = n₀. Figure 2.2(c) depicts w_{n₀}[k] for several different values of n₀ and the corresponding output. Here the horizontal axis corresponds to k and the vertical axis corresponds to n. We may view v_k[n] as representing the kth row on the right-hand side of Fig. 2.2(b), while w_n[k] represents the nth column. In Fig. 2.2(c), w_n[k] is the nth row, while v_k[n] is the kth column.
We have defined the intermediate sequence w_n[k] = x[k]h[n − k] as the product of x[k] and h[n − k]. Here k is the independent variable and n is treated as a constant. Hence h[n − k] = h[−(k − n)] is a reflected and time-shifted (by −n) version of h[k]. The time shift n determines the time at which we evaluate the output of the system, since
y[n] = Σ_{k=−∞}^{∞} w_n[k]     (2.4)
Note that now we need only determine one signal, w_n[k], for each time at which we desire to evaluate the output.
EXAMPLE 2.2  The impulse response of a LTI system is given by h[n] = (3/4)ⁿu[n]. Use Eq. (2.4) to determine the output of the system at times n = −5, n = 5, and n = 10 when the input is x[n] = u[n].
Solution: Here the impulse response and input are of infinite duration so the procedure followed in Example 2.1 would require determining an infinite number of signals v_k[n]. By using Eq. (2.4) we only form one signal, w_n[k], for each n of interest. Figure 2.3(a) depicts x[k], while Fig. 2.3(b) depicts the reflected and time-shifted impulse response h[n − k]. We see that
h[n − k] = { (3/4)^{n−k}, k ≤ n; 0, otherwise }
Figures 2.3(c), (d), and (e) depict the product w_n[k] for n = −5, n = 5, and n = 10, respectively. We have
w₋₅[k] = 0
FIGURE 2.3  Evaluation of Eq. (2.4) in Example 2.2. (a) The input signal x[k] depicted as a function of k. (b) The reflected and time-shifted impulse response, h[n − k], as a function of k. (c) The product signal w₋₅[k] used to evaluate y[−5]. (d) The product signal w₅[k] used to evaluate y[5]. (e) The product signal w₁₀[k] used to evaluate y[10].
so that y[−5] = 0. For n = 5, we see from Fig. 2.3(d) that w₅[k] = (3/4)^{5−k} for 0 ≤ k ≤ 5 and zero otherwise, so Eq. (2.4) gives
y[5] = Σ_{k=0}^{5} (3/4)^{5−k}
Factor (3/4)^5 from the sum and apply the formula for the sum of a finite geometric series to obtain
y[5] = (3/4)^5 Σ_{k=0}^{5} (4/3)^k
     = (3/4)^5 [1 − (4/3)^6] / [1 − (4/3)]
Lastly, for n = 10 we see that
w₁₀[k] = { (3/4)^{10−k}, 0 ≤ k ≤ 10; 0, otherwise }
and Eq. (2.4) gives
y[10] = Σ_{k=0}^{10} (3/4)^{10−k}
      = (3/4)^{10} Σ_{k=0}^{10} (4/3)^k
      = (3/4)^{10} [1 − (4/3)^{11}] / [1 − (4/3)]
Note that in this example w_n[k] has only two different functional forms. For n < 0, we have w_n[k] = 0 since there is no overlap between the nonzero portions of x[k] and h[n − k]. When n ≥ 0 the nonzero portions of x[k] and h[n − k] overlap on the interval 0 ≤ k ≤ n and we may write
w_n[k] = { (3/4)^{n−k}, 0 ≤ k ≤ n; 0, otherwise }
Hence we may determine the output for an arbitrary n by using the appropriate functional form for w_n[k] in Eq. (2.4).
This example suggests that in general we may determine y[n] for all n without evaluating Eq. (2.4) at an infinite number of distinct time shifts n. This is accomplished by identifying intervals of n on which w_n[k] has the same functional form. We then only need to evaluate Eq. (2.4) using the w_n[k] associated with each interval. Often it is very helpful to graph both x[k] and h[n − k] when determining w_n[k] and identifying the appropriate intervals of time shifts. This procedure is now summarized:
1. Graph both x[k] and h[n − k] as a function of the independent variable k. To determine h[n − k], first reflect h[k] about k = 0 to obtain h[−k] and then time shift h[−k] by −n.
2. Begin with the time shift n large and negative.
3. Write the functional form for w_n[k].
4. Increase the time shift n until the functional form for w_n[k] changes. The value n at which the change occurs defines the end of the current interval and the beginning of a new interval.
5. Let n be in the new interval. Repeat steps 3 and 4 until all intervals of time shifts n and the corresponding functional forms for w_n[k] are identified. This usually implies increasing n to a very large positive number.
6. For each interval of time shifts n, sum all the values of the corresponding w_n[k] to obtain y[n] on that interval.
The effect of varying n from −∞ to ∞ is to slide h[−k] past x[k] from left to right. Transitions in the intervals of n identified in step 4 generally occur when a change point in the representation for h[−k] slides through a change point in the representation for x[k]. Alternatively, we can sum all the values in w_n[k] as each interval of time shifts is identified, that is, after step 4, rather than waiting until all intervals are identified. The following examples illustrate this procedure for evaluating the convolution sum.
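Before turning to the examples, note that for finite-length signals the same reasoning can be mimicked numerically by forming w_n[k] = x[k]h[n − k] explicitly for each n. The following MATLAB sketch does this for two arbitrary illustrative sequences assumed to start at index 0; it is not taken from the text, but it reproduces what the built-in conv function computes.

    x = [1 2 3 2 1];                  % x[k], assumed to start at k = 0
    h = [1 -1 2];                     % h[k], assumed to start at k = 0
    N = length(x) + length(h) - 1;    % duration of the output
    y = zeros(1, N);
    xp = [x, zeros(1, length(h)-1)];  % zero-pad so the indexing below stays in range
    for n = 1:N
        k = 1:n;                      % candidate indices for the product
        hk = zeros(1, n);
        idx = n - k + 1;              % h[n - k] in MATLAB's one-based indexing
        valid = idx >= 1 & idx <= length(h);
        hk(valid) = h(idx(valid));
        wn = xp(k) .* hk;             % w_n[k] = x[k] h[n - k]
        y(n) = sum(wn);               % Eq. (2.4): sum w_n[k] over k
    end
    % y now matches conv(x, h)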
EXAMPLE 2.3  A LTI system has impulse response given by
h[n] = u[n] − u[n − 10]
and depicted in Fig. 2.4(a). Determine the output of this system when the input is the rectangular pulse defined as
x[n] = u[n − 2] − u[n − 7]
and shown in Fig. 2.4(b).
Solution: First we graph x[k] and h[n − k], treating n as a constant and k as the independent variable as depicted in Figs. 2.4(c) and (d). Now identify intervals of time shifts n on which the product signal w_n[k] has the same functional form. Begin with n large and negative, in which case w_n[k] = 0 because there is no overlap in the nonzero portions of x[k] and h[n − k]. By increasing n, we see that w_n[k] = 0 provided n < 2. Hence the first interval of time shifts is n < 2.
FIGURE 2.4  Evaluation of the convolution sum for Example 2.3. (a) The system impulse response h[n]. (b) The input signal x[n]. (c) The input depicted as a function of k. (d) The reflected and time-shifted impulse response h[n − k] depicted as a function of k. (e) The product signal w_n[k] for the interval of time shifts 2 ≤ n ≤ 6. (f) The product signal w_n[k] for the interval of time shifts 6 < n ≤ 11. (g) The product signal w_n[k] for the interval of time shifts 12 ≤ n ≤ 15. (h) The output y[n].
When n = 2 the right edge of h[n − k] slides past the left edge of x[k] and a transition occurs in the functional form for w_n[k]. For n ≥ 2,
w_n[k] = { 1, 2 ≤ k ≤ n; 0, otherwise }
This functional form is correct until n > 6 and is depicted in Fig. 2.4(e). When n > 6 the right edge of h[n − k] slides past the right edge of x[k] so the form of w_n[k] changes. Hence our second interval of time shifts is 2 ≤ n ≤ 6.
For n > 6, the functional form of w_n[k] is given by
w_n[k] = { 1, 2 ≤ k ≤ 6; 0, otherwise }
as depicted in Fig. 2.4(f). This form holds until n − 9 = 2, or n = 11, since at that value of n the left edge of h[n − k] slides past the left edge of x[k]. Hence our third interval of time shifts is 6 < n ≤ 11.
Next, for n > 11, the functional form for w_n[k] is given by
w_n[k] = { 1, n − 9 ≤ k ≤ 6; 0, otherwise }
as depicted in Fig. 2.4(g). This form holds until n − 9 = 6, or n = 15, since for n > 15 the left edge of h[n − k] lies to the right of x[k] and the functional form for w_n[k] again changes. Hence the fourth interval of time shifts is 11 < n ≤ 15.
For all values of n > 15, we see that w_n[k] = 0. Thus the last interval of time shifts in this problem is n > 15.
The output of the system on each interval of n is obtained by summing the values of the corresponding w_n[k] according to Eq. (2.4). Beginning with n < 2 we have y[n] = 0. Next, for 2 ≤ n ≤ 6, we have
y[n] = Σ_{k=2}^{n} 1
     = n − 1
For the third interval, 6 < n ≤ 11, we have
y[n] = Σ_{k=2}^{6} 1
     = 5
For the fourth interval, 11 < n ≤ 15, we have
y[n] = Σ_{k=n−9}^{6} 1
     = 16 − n
Lastly, for n > 15, we see that y[n] = 0. Figure 2.4(h) depicts the output y[n] obtained by combining the results on each interval.
EXAMPLE 2.4  Suppose the input x[n] and impulse response h[n] of a LTI system are given by
x[n] = αⁿ(u[n] − u[n − 10])
h[n] = βⁿu[n]
Find the output of this system.
Solution: First we graph x[k] and the reflected and time-shifted impulse response h[n − k], as depicted in Figs. 2.5(a) and (b).
Now identify intervals of time shifts n on which the functional form of w_n[k] is the same. Begin by considering n large and negative. We see that for n < 0, w_n[k] = 0 since there are no values k such that x[k] and h[n − k] are both nonzero. Hence the first interval is n < 0.
When n = 0 the right edge of h[n − k] slides past the left edge of x[k] so a transition occurs in the form of w_n[k]. For n ≥ 0,
w_n[k] = { α^k β^{n−k}, 0 ≤ k ≤ n; 0, otherwise }
This form is correct provided 0 ≤ n ≤ 9 and is depicted in Fig. 2.5(c). When n = 9 the right edge of h[n − k] slides past the right edge of x[k] so the form of w_n[k] again changes.
Now for n > 9 we have a third form for w_n[k],
w_n[k] = { α^k β^{n−k}, 0 ≤ k ≤ 9; 0, otherwise }
Figure 2.5(d) depicts this w_n[k] for the third and last interval in this problem, n > 9.
We now determine the output y[n] for each of these three sets of time shifts by summing
FIGURE 2.5  Evaluation of the convolution sum for Example 2.4. (a) The input signal x[k] depicted as a function of k. (b) Reflected and time-shifted impulse response, h[n − k]. (c) The product signal w_n[k] for 0 ≤ n ≤ 9. (d) The product signal w_n[k] for n > 9.
w_n[k] over all k. Starting with the first interval, n < 0, we have w_n[k] = 0, and thus y[n] = 0. For the second interval, 0 ≤ n ≤ 9, we have
y[n] = Σ_{k=0}^{n} α^k β^{n−k}
Here the index of summation is limited from k = 0 to n because these are the only times k for which w_n[k] is nonzero. Combining terms raised to the kth power, we have
y[n] = β^n Σ_{k=0}^{n} (α/β)^k
Next, apply the formula for summing a geometric series of (n + 1) terms to obtain
y[n] = β^n [1 − (α/β)^{n+1}] / (1 − α/β)
For the third interval, n > 9, we have
y[n] = β^n Σ_{k=0}^{9} (α/β)^k
     = β^n [1 − (α/β)^{10}] / (1 − α/β)
where, again, the index of summation is limited from k = 0 to 9 because these are the only times for which w_n[k] is nonzero. The last equality also follows from the formula for a finite geometric series. Combining the solutions for each interval of shifts gives the system output as
y[n] = { 0, n < 0; β^n [1 − (α/β)^{n+1}] / (1 − α/β), 0 ≤ n ≤ 9; β^n [1 − (α/β)^{10}] / (1 − α/β), n > 9 }
• Drill Problem 2.2  Let the input to a LTI system with impulse response h[n] = aⁿ{u[n − 2] − u[n − 13]} be x[n] = 2{u[n + 2] − u[n − 12]}. Find the output y[n].
Answer:
y[n] = { 0, n < 0;
         2a^{n+2} [1 − a^{−1−n}] / (1 − a^{−1}), 0 ≤ n ≤ 10;
         2a^{12} [1 − a^{−11}] / (1 − a^{−1}), 11 ≤ n ≤ 13;
         2a^{12} [1 − a^{n−24}] / (1 − a^{−1}), 14 ≤ n ≤ 23;
         0, n ≥ 24 }
• Drill Problem 2.3  Suppose the input x[n] and impulse response h[n] of a LTI system H are given by
x[n] = −u[n] + 2u[n − 3] − u[n − 6]
h[n] = u[n + 1] − u[n − 10]
Find the output of this system, y[n].
Answer:
y[n] = { 0, n < −1;
         −(n + 2), −1 ≤ n ≤ 1;
         n − 4, 2 ≤ n ≤ 4;
         0, 5 ≤ n ≤ 9;
         n − 9, 10 ≤ n ≤ 11;
         15 − n, 12 ≤ n ≤ 14;
         0, n > 14 }
The next example in this subsection uses the convolution sum to obtain an equation directly relating the input and output of a system with a finite-duration impulse response.
EXAMPLE 2.5  Let the impulse response of a LTI system be
h[n] = ¼(u[n] − u[n − 4])
Figure 2.6 depicts h[n − k] together with an arbitrary input x[k]. Since w_n[k] = x[k]h[n − k] is nonzero only for n − 3 ≤ k ≤ n, summing w_n[k] over k in Eq. (2.4) gives
y[n] = ¼(x[n] + x[n − 1] + x[n − 2] + x[n − 3])
The output of the system in Example 2.5 is the arithmetic average of the four most recent inputs. In Chapter 1 such a system was termed a moving-average system. The
FIGURE 2.6  Evaluation of the convolution sum for Example 2.5. (a) An arbitrary input signal depicted as a function of k. (b) Reflected and time-shifted impulse response, h[n − k].
effect of the averaging in this system is to smooth out short-term fluctuations in the input data. Such systems are often used to identify trends in data.
EXAMPLE 2.6  Apply the average January temperature data depicted in Fig. 2.7 to the following moving-average systems:
(a) h[n] = { ½, 0 ≤ n ≤ 1; 0, otherwise }
(b) h[n] = { ¼, 0 ≤ n ≤ 3; 0, otherwise }
(c) h[n] = { ⅛, 0 ≤ n ≤ 7; 0, otherwise }
Solution: In case (a) the output is the average of the two most recent inputs, in case (b) the four most recent inputs, and in case (c) the eight most recent inputs. The system output for cases (a), (b), and (c) is depicted in Figs. 2.8(a), (b), and (c), respectively. As the impulse response duration increases, the degree of smoothing introduced by the system increases because the output is computed as an average of a larger number of inputs. The input to the system prior to 1900 is assumed to be zero, so the output near 1900 involves an average with some of the values zero. This leads to low values of the output, a phenomenon most evident in case (c).
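A moving-average system like those in Example 2.6 is conveniently applied in MATLAB with conv (or, equivalently, filter). The sketch below smooths a synthetic noisy record; the variable temp is a stand-in for a data vector such as the January temperatures of Fig. 2.7, which are not reproduced here.

    temp = 35 + 5*randn(1, 95);        % stand-in data; replace with the actual record
    L = 4;                             % four-point moving average, case (b)
    h = ones(1, L) / L;                % impulse response h[n] = 1/L for 0 <= n <= L-1
    y = conv(temp, h);                 % convolution sum; early outputs average in zeros
    plot(1900:1994, y(1:length(temp)))
    xlabel('Year'); ylabel('Smoothed average January temperature')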
The output of a continuous-time LTI system may also be determined solely from knowledge of the input and the system's impulse response. The approach and result are analogous to the discrete-time case. We first express an arbitrary input signal as a weighted superposition of time-shifted impulses. Here the superposition is an integral instead of a sum
FIGURE 2.7  Average January temperature from 1900 to 1994.
50 .....
-
L~ -- ,.
)
--
r rl li" ,.
-~
~
"}
,... r '
t
,. r
,.,o j
í - IW ~
,.
,1
1 '
i IC
j
j
1 >
.. ..
10
o . - . .. .. .. .. ~ .. .. ..
'
1900 1910 1920 1930 1940 1950 1960 1970 1980 1990
Ycar
(a)
60 r-----,------,--------,-----------..---------r----.
:
'
50 . . . . ···-
401 - -
r
,. -
30 1·...." 1
'
j
'l
10 1
1
O u_._,t..1..1._. J
1900 l 91() 192() 193() 1940 \950 1960 \970 198() 199()
Year
(b)
,.
. ..J
i
!
i
1910 1920 1930 1940 1950 1960 1970 1980 1990
Ycar
(e)
FIGURE 2.8 Result of passing average January temperaturc data throttgh severa! n1oving-averagc
systen1s. (a) Output of two-point moving-average systcm. (b) ()utput ()f four-poinl 111c,ving-average
system. (e) Output of eight-point mc,ving-a,'erage system.
due to the continuous nature of the input. We then apply this input to a LTI system to write the output as a weighted superposition of time-shifted impulse responses, an expression termed the convolution integral.
The convolution sum was derived by expressing the input signal x[n] as a weighted sum of time-shifted impulses as shown by
x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k]
Similarly, we may express a continuous-time signal as a weighted superposition of time-shifted impulses:
x(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ     (2.5)
Here the superposition is an integral and the time shifts are given by the continuous variable τ. The weights x(τ) dτ are derived from the value of the signal x(t) at the time at which each impulse occurs, τ. Equation (2.5) is a statement of the sifting property of the impulse function.
Define the impulse response h(t) = H{δ(t)} as the output of the system in response to an impulse input. If the system is time invariant, then H{δ(t − τ)} = h(t − τ). That is, a time-shifted impulse input generates a time-shifted impulse response output. Now consider the system output in response to a general input expressed as the weighted superposition in Eq. (2.5). Applying H and using linearity and time invariance gives
y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ     (2.6)
Hence the output of a LTI system in response to an input of the form of Eq. (2.5) may be expressed as in Eq. (2.6). The output y(t) is given as a weighted superposition of impulse responses time shifted by τ. The weights are x(τ) dτ. Equation (2.6) is termed the convolution integral and, as before, is denoted by the symbol *; that is,
x(t) * h(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ
The convolution process is illustrated in Fig. 2.9. Figure 2.9(a) depicts the impulse response of a system. In Fig. 2.9(b) the input to this system is represented as an integral of weighted and time-shifted impulses, p_τ(t) = x(τ)δ(t − τ). These weighted and time-shifted impulses are depicted for several values of τ on the left-hand side of Fig. 2.9. The output associated with each input p_τ(t) is the weighted and time-shifted impulse response:
v_τ(t) = x(τ)h(t − τ)
The right-hand side of Fig. 2.9(b) depicts v_τ(t) for several values of τ. Note that v_τ(t) is a function of two independent variables, τ and t. On the right-hand side of Fig. 2.9(b), the variation with t is shown on the horizontal axis, while the variation with τ occurs vertically,
FIGURE 2.9  Illustration of the convolution integral. (a) Impulse response of a continuous-time system. (b) Decomposition of x(t) into a weighted integral of time-shifted impulses results in an output y(t) given by a weighted integral of time-shifted impulse responses. Here p_τ(t) is the weighted (by x(τ)) and time-shifted (by τ) impulse input, and v_τ(t) is the weighted and time-shifted impulse response output. Both p_τ(t) and v_τ(t) are depicted only at integer values of τ. The dependence of both p_τ(t) and v_τ(t) on τ is depicted by the τ axis shown on the left- and right-hand sides of the figure. The output is obtained by integrating v_τ(t) over τ.
FIGURE 2.9  (c) The signals w_t(τ) used to compute the output at time t correspond to vertical slices of v_τ(t). Here we have redrawn the right-hand side of Fig. 2.9(b) so that the τ axis is horizontal. The output is obtained for t = t₀ by integrating w_{t₀}(τ) over τ.
as shown by the vertical axis on the right-hand side. The system output at time t = t₀ is obtained by integrating over τ, as shown by
y(t₀) = ∫_{−∞}^{∞} v_τ(t₀) dτ
That is, we integrate along the vertical or τ axis on the right-hand side of Fig. 2.9(b) at a fixed time, t = t₀.
Define a signal w_{t₀}(τ) to represent the variation of v_τ(t) along the τ axis for a fixed time t = t₀. This implies w_{t₀}(τ) = v_τ(t₀). Examples of this signal for several values of t₀ are depicted in Fig. 2.9(c). The corresponding system output is now obtained by integrating w_{t₀}(τ) over τ from −∞ to ∞. Note that the horizontal axis in Fig. 2.9(b) is t and the vertical axis is τ. In Fig. 2.9(c) we have in effect redrawn the right-hand side of Fig. 2.9(b) with τ as the horizontal axis and t as the vertical axis.
We have defined the intermediate signal w_t(τ) = x(τ)h(t − τ) as the product of x(τ) and h(t − τ). In this definition τ is the independent variable and t is treated as a constant. This is explicitly indicated by writing t as a subscript and τ within the parentheses of w_t(τ). Hence h(t − τ) = h(−(τ − t)) is a reflected and time-shifted (by −t) version of h(τ). The
time shift t determines the time at which we evaluate the output of the system since Eq. (2.6) becomes
y(t) = ∫_{−∞}^{∞} w_t(τ) dτ     (2.7)
The system output at any time t is the area under the signal w_t(τ).
In general, the functional form for w_t(τ) will depend on the value of t. As in the discrete-time case, we may avoid evaluating Eq. (2.7) at an infinite number of values of t by identifying intervals of t on which w_t(τ) has the same functional form. We then only need to evaluate Eq. (2.7) using the w_t(τ) associated with each interval. Often it is very helpful to graph both x(τ) and h(t − τ) when determining w_t(τ) and identifying the appropriate interval of time shifts. This procedure is summarized as follows:
1. Graph x(τ) and h(t − τ) as a function of the independent variable τ. To obtain h(t − τ), reflect h(τ) about τ = 0 to obtain h(−τ) and then time shift h(−τ) by −t.
2. Begin with the time shift t large and negative.
3. Write the functional form for w_t(τ).
4. Increase the time shift t until the functional form for w_t(τ) changes. The value t at which the change occurs defines the end of the current interval and the beginning of a new interval.
5. Let t be in the new interval. Repeat steps 3 and 4 until all intervals of time shifts t and the corresponding functional forms for w_t(τ) are identified. This usually implies increasing t to a large and positive value.
6. For each interval of time shifts t, integrate w_t(τ) from τ = −∞ to τ = ∞ to obtain y(t) on that interval.
The effect of increasing t from a large negative value to a large positive value is to slide h(−τ) past x(τ) from left to right. Transitions in the intervals of t associated with the same form of w_t(τ) generally occur when a transition in h(−τ) slides through a transition in x(τ). Alternatively, we can integrate w_t(τ) as each interval of time shifts is identified, that is, after step 4, rather than waiting until all intervals are identified. The following examples illustrate this procedure for evaluating the convolution integral.
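Although the convolution integral is evaluated analytically in the examples that follow, it can also be approximated numerically by sampling the signals and scaling a convolution sum by the sampling interval. The following MATLAB sketch illustrates the idea for an arbitrary pair of signals; the sampling interval dt and the particular x(t) and h(t) are illustrative choices, not values taken from the text.

    dt = 0.001;                       % sampling interval for the Riemann-sum approximation
    t  = 0:dt:10;                     % time axis; both signals assumed zero for t < 0
    x  = exp(-3*t);                   % illustrative input
    h  = exp(-t);                     % illustrative impulse response
    y  = conv(x, h) * dt;             % approximates the integral of x(tau) h(t - tau) dtau
    ty = 0:dt:20;                     % time axis for the result
    plot(ty, y); xlabel('t'); ylabel('y(t)')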
EXAMPLE 2.7  Consider the RC circuit depicted in Fig. 2.10 and assume the circuit's time constant is RC = 1 s. Determine the voltage across the capacitor, y(t), resulting from an input voltage x(t) = e^{−3t}{u(t) − u(t − 2)}.
Solution: The circuit is linear and time invariant, so the output is the convolution of the input and the impulse response. That is, y(t) = x(t) * h(t). The impulse response of this circuit is
h(t) = e^{−t}u(t)
FIGURE 2.10  RC circuit system with the voltage source x(t) as input and the voltage measured across the capacitor, y(t), as output.
To evaluate the convolution integral, first graph x(τ) and h(t − τ) as a function of the independent variable τ while treating t as a constant. We see from Figs. 2.11(a) and (b) that
x(τ) = { e^{−3τ}, 0 < τ < 2; 0, otherwise }
and
h(t − τ) = { e^{−(t−τ)}, τ < t; 0, otherwise }
Now identify the intervals of time shifts t for which the functional form of w_t(τ) does not change. Begin with t large and negative. Provided t < 0, we have w_t(τ) = 0 since there are no values τ for which both x(τ) and h(t − τ) are nonzero. Hence the first interval of time shifts is t < 0.
Note that at t = 0 the right edge of h(t − τ) intersects the left edge of x(τ). For t > 0,
w_t(τ) = { e^{−t−2τ}, 0 < τ < t; 0, otherwise }
This form for w_t(τ) is depicted in Fig. 2.11(c). It does not change until t > 2, at which point the right edge of h(t − τ) passes through the right edge of x(τ). The second interval of time shifts t is thus 0 ≤ t < 2.
FIGURE 2.11  Evaluation of the convolution integral for Example 2.7. (a) The input depicted as a function of τ. (b) Reflected and time-shifted impulse response, h(t − τ). (c) The product signal w_t(τ) for 0 ≤ t < 2. (d) The product signal w_t(τ) for t ≥ 2. (e) System output y(t).
For t ≥ 2 the right edge of h(t − τ) lies to the right of the nonzero portion of x(τ), so that
w_t(τ) = { e^{−t−2τ}, 0 < τ < 2; 0, otherwise }
Figure 2.11(d) depicts w_t(τ) for this third interval of time shifts, t ≥ 2.
We now determine the output y(t) for each of these three intervals of time shifts by integrating w_t(τ) from τ = −∞ to τ = ∞. Starting with the first interval, t ≤ 0, we have w_t(τ) = 0 and thus y(t) = 0. For the second interval, 0 ≤ t < 2, we have
y(t) = ∫₀ᵗ e^{−t−2τ} dτ
     = e^{−t}[−½e^{−2τ}]₀ᵗ
     = ½(e^{−t} − e^{−3t})
For the third interval, t ≥ 2, we have
y(t) = ∫₀² e^{−t−2τ} dτ
     = e^{−t}[−½e^{−2τ}]₀²
     = ½(1 − e^{−4})e^{−t}
Combining the solutions for each interval of time shifts gives the output
y(t) = { 0, t < 0; ½(1 − e^{−2t})e^{−t}, 0 ≤ t < 2; ½(1 − e^{−4})e^{−t}, t ≥ 2 }
EXAMPLE 2.8  Suppose the input x(t) and impulse response h(t) of a LTI system are given by
x(t) = 2u(t − 1) − 2u(t − 3)
h(t) = u(t + 1) − 2u(t − 1) + u(t − 3)
Find the output of this system.
Solution: Graph x(τ) and h(t − τ) as depicted in Figs. 2.12(a) and (b). For t + 1 < 1, that is, t < 0, the nonzero portions of x(τ) and h(t − τ) do not overlap and w_t(τ) = 0. For t ≥ 0,
w_t(τ) = { 2, 1 < τ < t + 1; 0, otherwise }
This form for w_t(τ) holds provided t + 1 < 3, or t < 2, and is depicted in Fig. 2.12(c).
FIGURE 2.12  Evaluation of the convolution integral for Example 2.8. (a) The input depicted as a function of τ. (b) Reflected and time-shifted impulse response, h(t − τ). (c) The product signal w_t(τ) for 0 ≤ t < 2. (d) The product signal w_t(τ) for 2 ≤ t < 4. (e) The product signal w_t(τ) for 4 ≤ t < 6. (f) System output y(t).
For t ≥ 2 the right edge of h(t − τ) is to the right of the nonzero portion of x(τ). In this case we have
w_t(τ) = { −2, 1 < τ < t − 1; 2, t − 1 < τ < 3; 0, otherwise }
This form for w_t(τ) holds provided t − 1 < 3, or t < 4, and is depicted in Fig. 2.12(d).
For t ≥ 4 the leftmost edge of h(t − τ) is within the nonzero portion of x(τ) and we have
w_t(τ) = { −2, t − 3 < τ < 3; 0, otherwise }
This form for w_t(τ) is depicted in Fig. 2.12(e) and holds provided t − 3 < 3, or t < 6.
For t ≥ 6, no nonzero portions of x(τ) and h(t − τ) overlap and consequently w_t(τ) = 0.
The system output y(t) is obtained by integrating w_t(τ) from τ = −∞ to τ = ∞ for each interval of time shifts identified above. Beginning with t < 0, we have y(t) = 0 since w_t(τ) = 0. For 0 ≤ t < 2 we have
y(t) = ∫₁^{t+1} 2 dτ
     = 2t
On the next interval, 2 ≤ t < 4, we have
y(t) = ∫₁^{t−1} (−2) dτ + ∫_{t−1}^{3} 2 dτ
     = −4t + 12
On the interval 4 ≤ t < 6, we have
y(t) = ∫_{t−3}^{3} (−2) dτ
     = 2t − 12
Combining the solutions for each interval of time shifts gives the output
y(t) = { 0, t < 0; 2t, 0 ≤ t < 2; −4t + 12, 2 ≤ t < 4; 2t − 12, 4 ≤ t < 6; 0, t ≥ 6 }
as depicted in Fig. 2.12(f).
• Drill Problem 2.4  Let the impulse response of a LTI system be h(t) = e^{−2(t+1)}u(t + 1). Find the output y(t) if the input is x(t) = e^{−|t|}.
Answer: For t < −1,
w_t(τ) = { e^{−2(t−τ+1)}e^{τ}, −∞ < τ < t + 1; 0, otherwise }
y(t) = ⅓e^{t+1}
For t ≥ −1,
w_t(τ) = { e^{−2(t−τ+1)}e^{τ}, −∞ < τ < 0; e^{−2(t−τ+1)}e^{−τ}, 0 < τ < t + 1; 0, otherwise }
y(t) = e^{−(t+1)} − ⅔e^{−2(t+1)}
• Drill Problem 2.5  Let the impulse response of a LTI system be given by h(t) = u(t − 1) − u(t − 4). Find the output of this system in response to an input x(t) = u(t) + u(t − 1) − 2u(t − 2).
Answer:
y(t) = { 0, t < 1;
         t − 1, 1 ≤ t < 2;
         2t − 3, 2 ≤ t < 3;
         3, 3 ≤ t < 4;
         7 − t, 4 ≤ t < 5;
         12 − 2t, 5 ≤ t < 6;
         0, t ≥ 6 }
The convolution integral describes the behavior of a continuous-time system. The system impulse response can provide insight into the nature of the system. We will develop this insight in the next section and subsequent chapters. To glimpse some of the insight offered by the impulse response, consider the following example.
EXAMPLE 2.9  Let the impulse response of a LTI system be h(t) = δ(t − a). Determine the output of this system in response to an input x(t).
Solution: Consider first obtaining h(t − τ). Reflecting h(τ) = δ(τ − a) about τ = 0 gives h(−τ) = δ(τ + a) since the impulse function has even symmetry. Now shift the independent variable τ by −t to obtain h(t − τ) = δ(τ − (t − a)). Substitute this expression for h(t − τ) in the convolution integral of Eq. (2.6) and use the sifting property of the impulse function to obtain
y(t) = ∫_{−∞}^{∞} x(τ)δ(τ − (t − a)) dτ
     = x(t − a)
Note that the identity system is represented for a = 0 since in this case the output is equal to the input. When a ≠ 0, the system time shifts the input. If a is positive the input is delayed, and if a is negative the input is advanced. Hence the location of the impulse response relative to the time origin determines the amount of delay introduced by the system.
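The discrete-time counterpart of this observation is easy to check numerically: convolving a sequence with a shifted unit impulse simply delays the sequence. The MATLAB lines below are a minimal sketch using an arbitrary test sequence; they are not taken from the text.

    x = [1 2 3 4 5];          % arbitrary test sequence, x[n] for n = 0,...,4
    a = 2;                    % delay of two samples
    h = [zeros(1, a), 1];     % h[n] = delta[n - a]
    y = conv(x, h);           % y = [0 0 1 2 3 4 5]: x[n] delayed by a samples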
Consider two LTI systems with impulse responses h₁(t) and h₂(t) connected in parallel as illustrated in Fig. 2.13(a). The output of this connection of systems, y(t), is the sum of the outputs of each system:
FIGURE 2.13  Interconnection of two systems. (a) Parallel connection of two systems. (b) Equivalent system.
y(t) = x(t) * h₁(t) + x(t) * h₂(t)
     = x(t) * h(t)
where h(t) = h₁(t) + h₂(t). We identify h(t) as the impulse response of the parallel connection of two systems. This equivalent system is depicted in Fig. 2.13(b). The impulse response of two systems connected in parallel is the sum of the individual impulse responses.
Mathematically, this implies that convolution possesses the distributive property:
x(t) * h₁(t) + x(t) * h₂(t) = x(t) * {h₁(t) + h₂(t)}     (2.8)
Identical results hold for the discrete-time case:
x[n] * h₁[n] + x[n] * h₂[n] = x[n] * {h₁[n] + h₂[n]}     (2.9)
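For finite-length sequences the distributive property of Eq. (2.9) can be confirmed directly with conv. The sketch below uses arbitrary illustrative sequences; equality holds to within numerical round-off.

    x  = randn(1, 8);                 % arbitrary input
    h1 = randn(1, 5);                 % impulse responses of the two parallel branches
    h2 = randn(1, 5);
    lhs = conv(x, h1) + conv(x, h2);  % sum of the branch outputs
    rhs = conv(x, h1 + h2);           % single equivalent system h1[n] + h2[n]
    max(abs(lhs - rhs))               % essentially zero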
Now consider the cascade connection of two LTI systems illustrated in Fig. 2.14(a). Let z(t) be the output of the first system and the input to the second system in the cascade. The output y(t) is expressed in terms of z(t) as
y(t) = z(t) * h₂(t)     (2.10)
     = ∫_{−∞}^{∞} z(τ)h₂(t − τ) dτ     (2.11)
FIGURE 2.14  Interconnection of two systems. (a) Cascade connection of two systems. (b) Equivalent system. (c) Equivalent system: interchange system order.
However, z(τ) is the output of the first system and is expressed in terms of the input x(τ) as
z(τ) = x(τ) * h₁(τ)
     = ∫_{−∞}^{∞} x(ν)h₁(τ − ν) dν     (2.12)
Here ν is used as the variable of integration in the convolution integral. Substituting Eq. (2.12) for z(τ) in Eq. (2.11) gives
y(t) = ∫_{−∞}^{∞} x(ν){ ∫_{−∞}^{∞} h₁(τ − ν)h₂(t − τ) dτ } dν     (2.13)
The inner integral is identified as the convolution of h₁(t) with h₂(t) evaluated at t − ν. That is, if we define h(t) = h₁(t) * h₂(t), then
y(t) = ∫_{−∞}^{∞} x(ν)h(t − ν) dν
     = x(t) * h(t)
A second important property for the cascade connection of systems concerns the ordering of the systems. Write h(t) = h₁(t) * h₂(t) as the integral
h(t) = ∫_{−∞}^{∞} h₁(τ)h₂(t − τ) dτ     (2.17)
Performing the change of variable ν = t − τ gives
h(t) = ∫_{−∞}^{∞} h₁(t − ν)h₂(ν) dν
     = h₂(t) * h₁(t)
Hence the impulse response of the cascade connection is unchanged if the order of the two systems is interchanged. This implies that convolution is associative
{x(t) * h₁(t)} * h₂(t) = x(t) * {h₁(t) * h₂(t)}
and commutative
h₁(t) * h₂(t) = h₂(t) * h₁(t)     (2.19)
The following example demonstrates the use of convolution properties for finding a single system that is input-output equivalent to an interconnected system.
EXAMPLE 2.10  Consider the interconnection of LTI systems depicted in Fig. 2.15. The impulse response of each system is given by
h₁[n] = u[n]
h₂[n] = u[n + 2] − u[n]
h₃[n] = δ[n − 2]
h₄[n] = αⁿu[n]
Find the impulse response of the overall system, h[n].
Solution: We first derive an expression for the overall impulse response in terms of the impulse response of each system. Begin with the parallel combination of h₁[n] and h₂[n]. The equivalent system has impulse response h₁₂[n] = h₁[n] + h₂[n]. This system is in series with h₃[n], so the equivalent system for the upper branch has impulse response h₁₂₃[n] = h₁₂[n] * h₃[n]. Substituting for h₁₂[n], we have h₁₂₃[n] = (h₁[n] + h₂[n]) * h₃[n]. The upper branch is in parallel with the lower branch, characterized by h₄[n]; hence the overall system impulse response is h[n] = h₁₂₃[n] − h₄[n]. Substituting for h₁₂₃[n] yields
h[n] = (h₁[n] + h₂[n]) * h₃[n] − h₄[n]
Now substitute the specified forms of h₁[n] and h₂[n] to obtain
h₁[n] + h₂[n] = u[n] + u[n + 2] − u[n]
              = u[n + 2]
Convolving u[n + 2] with h₃[n] = δ[n − 2] simply applies a delay of two samples, so
h₁₂₃[n] = u[n]
Lastly, we obtain the overall impulse response by subtracting h₄[n] from h₁₂₃[n]:
h[n] = u[n] − αⁿu[n]
FIGURE 2.15  Interconnection of systems for Example 2.10.

2.3 Properties of the Impulse Response Representation for LTI Systems
• MEMORYLESS SYSTEMS
Recall that the output of a memoryless system depends only on the present input. Exploiting the commutative property of convolution, the output of a LTI discrete-time system may be expressed as
y[n] = h[n] * x[n]
     = Σ_{k=−∞}^{∞} h[k]x[n − k]
For this system to be memoryless, y[n] must depend only on x[n] and cannot depend on x[n − k] for k ≠ 0. This condition implies that h[k] = 0 for k ≠ 0. Hence a LTI discrete-time system is memoryless if and only if h[k] = cδ[k], where c is an arbitrary constant.
Writing the output of a continuous-time system as
y(t) = ∫_{−∞}^{∞} h(τ)x(t − τ) dτ
we see, by analogous reasoning, that a continuous-time LTI system is memoryless if and only if h(τ) = cδ(τ) for an arbitrary constant c.
• CAUSAL SYSTEMS
The output of a causal system depends only on past or present values of the input. Again write the convolution sum as
y[n] = Σ_{k=−∞}^{∞} h[k]x[n − k]
Past and present values of the input, x[n], x[n − 1], x[n − 2], ..., are associated with indices k ≥ 0 in the convolution sum, while future values of the input are associated with indices k < 0. In order for y[n] to depend only on past or present values of the input, we require h[k] = 0 for k < 0. Hence, for a causal system, h[k] = 0 for k < 0, and the convolution sum is rewritten
y[n] = Σ_{k=0}^{∞} h[k]x[n − k]
The causality condition for a continuous-time system follows in an analogous manner from the convolution integral
y(t) = ∫_{−∞}^{∞} h(τ)x(t − τ) dτ
A causal continuous-time system has impulse response that satisfies h(τ) = 0 for τ < 0. The output of a causal system is thus expressed as the convolution integral
y(t) = ∫₀^{∞} h(τ)x(t − τ) dτ
The causality condition is intuitively satisfying. Recall that the impulse response is the output of a system in response to an impulse input applied at time t = 0. Causal systems are nonanticipative: that is, they cannot generate an output before the input is applied. Requiring the impulse response to be zero for negative time is equivalent to saying the system cannot respond prior to application of the impulse.
• STABLE SYSTEMS
Recall from Chapter 1 that a system is bounded input-bounded output (BIBO) stable if the output is guaranteed to be bounded for every bounded input. Formally, if the input to a stable discrete-time system satisfies |x[n]| ≤ Mₓ < ∞, then the output must satisfy |y[n]| ≤ M_y < ∞. We shall derive conditions on h[n] that guarantee stability of the system by bounding the convolution sum. The magnitude of the output is given by
|y[n]| = |h[n] * x[n]|
       = | Σ_{k=−∞}^{∞} h[k]x[n − k] |
We seek an upper bound on |y[n]| that is a function of the upper bound on |x[n]| and the impulse response. Since the magnitude of a sum of numbers is less than or equal to the sum of the magnitudes, that is, |a + b| ≤ |a| + |b|, we may write
|y[n]| ≤ Σ_{k=−∞}^{∞} |h[k]x[n − k]|
Furthermore, the magnitude of a product is equal to the product of the magnitudes, that is, |ab| = |a||b|, and so we have
|y[n]| ≤ Σ_{k=−∞}^{∞} |h[k]||x[n − k]|
If we assume that the input is bounded, |x[n]| ≤ Mₓ < ∞, then |x[n − k]| ≤ Mₓ and
|y[n]| ≤ Mₓ Σ_{k=−∞}^{∞} |h[k]|     (2.20)
Hence the output is bounded, |y[n]| < ∞, provided that the impulse response of the system is absolutely summable. We conclude that the impulse response of a stable system satisfies the bound
Σ_{k=−∞}^{∞} |h[k]| < ∞
Our derivation so far has established absolute summability of the impulse response as a sufficient condition for BIBO stability. The reader is asked to show that this is also a necessary condition for BIBO stability in Problem 2.13.
A similar set of steps may be used to establish that a continuous-time system is BIBO stable if and only if the impulse response is absolutely integrable, that is,
∫_{−∞}^{∞} |h(τ)| dτ < ∞
EXAMPLE 2.11  A discrete-time system has impulse response
h[n] = aⁿu[n + 2]
Is this system BIBO stable, causal, and memoryless?
Solution: The system is stable if the impulse response is absolutely summable. We have
Σ_{k=−∞}^{∞} |h[k]| = Σ_{k=−2}^{∞} |a|^k
                    = |a|^{−2} + |a|^{−1} + Σ_{k=0}^{∞} |a|^k
The infinite geometric sum in the second line converges only if |a| < 1. Hence the system is stable provided 0 < |a| < 1. The system is not causal, since the impulse response h[n] is nonzero for n = −1, −2. The system is not memoryless because h[n] is nonzero for some values n ≠ 0.
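The absolute-summability test is also easy to probe numerically by summing |h[k]| over a long but finite range and watching whether the partial sums settle. The sketch below does this for the impulse response of Example 2.11 with an illustrative value a = 0.5; a value with |a| ≥ 1 makes the partial sums grow without bound instead.

    a = 0.5;                          % illustrative value with |a| < 1
    k = -2:200;                       % truncated index range; h[k] = a^k for k >= -2
    S = cumsum(abs(a.^k));            % partial sums of |h[k]|
    S(end)                            % approaches |a|^-2 + |a|^-1 + 1/(1 - |a|) = 8 here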
• Drill Problem 2.6  Determine the conditions on a such that the continuous-time system with impulse response h(t) = e^{at}u(t) is stable, causal, and memoryless.
Answer: The system is stable provided a < 0, causal for all a, and there is no a for which the system is memoryless.
We emphasize that a system can be unstable even though the impulse response is finite valued. For example, the impulse response h[n] = u[n] is never greater than one, but is not absolutely summable and thus the system is unstable. To demonstrate this, use the convolution sum to express the output of this system in terms of the input as
y[n] = Σ_{k=−∞}^{n} x[k]
Although the output is bounded for some bounded inputs x[n], it is not bounded for every bounded x[n]. In particular, the constant input x[n] = c clearly results in an unbounded output.
FIGURE 2.16  Cascade of a LTI system with impulse response h(t) and the inverse system with impulse response h⁻¹(t).

• INVERTIBLE SYSTEMS
A system is invertible if the input to the system can be recovered from the output. This
implies the existence of an inverse system that takes the output of the original system as its
input and produces the input of the original system. We shall limit ourselves here to con-
sideration of inverse systems that are LTI. Figure 2.16 depicts the cascade of a LTI system
having impulse response h(t) with a LTI inverse system whose impulse response is denoted
as h^{-1}(t).
The process of recovering x(t) from h(t) * x(t) is termed deconvolution, since it
corresponds to reversing or undoing the convolution operation. An inverse system has
output x(t) in response to input y(t) = h(t) * x(t) and thus solves the deconvolution prob-
lem. Deconvolution and inverse systems play an important role in many signal-processing
and systems problems. A common problem is that of reversing or ''equalizing'' the distor-
tion introduced by a nonideal system. For example, consider using a high-speed modem
to communicate over telephone lines. Distortion introduced by the telephone network
places severe restrictions on the rate at which information can be transmitted, so an equal-
izer is incorporated into the modem. The equalizer reverses the telephone network distor-
tion and permits much higher data rates to be achieved. In this case the equalizer represents
an inverse system for the telephone network. We will discuss equalization in more detail
in Chapters 5 and 8.
The relationship between the impulse response of a system, h(t), and the correspond-
ing inverse system, h^{-1}(t), is easily derived. The impulse response of the cascade connection
in Fig. 2.16 is the convolution of h(t) and h^{-1}(t). We require the output of the cascade to
equal the input, or
x(t) * (h(t) * h^{-1}(t)) = x(t)
This implies that
h(t) * h^{-1}(t) = δ(t)     (2.21)
Similarly, the impulse response of a discrete-time LTI inverse system, h^{-1}[n], must satisfy
h[n] * h^{-1}[n] = δ[n]     (2.22)
In many equalization applications an exact inverse system may be difficult to find or im-
plement. Determination of an approximate solution to Eq. (2.21) or Eq. (2.22) is often
sufficient in such cases. The following example illustrates a case where an exact inverse
system is obtained by directly solving Eq. (2.22).
EXAMPLE 2.12 Consider designing a discrete-time inverse system to eliminate the distortion
associated with an undesired echo in a data transmission problem. Assume the echo is rep-
resented as attenuation by a constant a and a delay corresponding to one time unit of the
input sequence. Hence the distorted received signal, y[n], is expressed in terms of the trans-
mitted signal x[n] as
y[n] = x[n] + ax[n - 1]
Find a causal inverse system that recovers x[n] from y[n]. Check whether this inverse system is stable.
Solution: First we identify the impulse response of the system relating y[n] and x[n]. Writing
the convolution sum as
y[n] = Σ_{k=-∞}^{∞} h[k] x[n - k]
we identify
h[k] = 1,   k = 0
       a,   k = 1
       0,   otherwise
as the impulse response of the system that models direct transmission plus the echo. The inverse
system h^{-1}[n] must satisfy h[n] * h^{-1}[n] = δ[n]. Substituting for h[n], we desire to find h^{-1}[n]
that satisfies the equation
h^{-1}[n] + a h^{-1}[n - 1] = δ[n]     (2.23)
Consider solving this equation for several different values of n. For n < 0, we must have
h^{-1}[n] = 0 in order to obtain a causal inverse system. For n = 0, δ[n] = 1 and Eq. (2.23)
implies
h^{-1}[0] + a h^{-1}[-1] = 1
so h^{-1}[0] = 1. For n > 0, δ[n] = 0 and Eq. (2.23) implies
h^{-1}[n] + a h^{-1}[n - 1] = 0
or h^{-1}[n] = -a h^{-1}[n - 1]. Since h^{-1}[0] = 1, we have h^{-1}[1] = -a, h^{-1}[2] = a^2, h^{-1}[3] =
-a^3, and so on. Hence the inverse system has the impulse response
h^{-1}[n] = (-a)^n u[n]
To check for stability, we determine whether h^{-1}[n] is absolutely summable, as shown
by
Σ_{k=-∞}^{∞} |h^{-1}[k]| = Σ_{k=0}^{∞} |a|^k
This geometric series converges and hence the system is stable provided |a| < 1. This implies
that the inverse system is stable if the echo attenuates the transmitted signal x[n], but unstable
if the echo amplifies x[n].
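The result of Example 2.12 is easy to check numerically. The sketch below assumes an echo attenuation a = 0.5 and an arbitrary test signal; it applies the echo with MATLAB's filter command and then removes it with the recursive inverse system h^{-1}[n] = (-a)^n u[n]. The recovered signal matches the transmitted one to within rounding error.

a = 0.5;                         % assumed echo attenuation (|a| < 1 for a stable inverse)
x = [1 -2 3 0 4 -1 2];           % arbitrary transmitted signal
y = filter([1 a], 1, x);         % distorted signal y[n] = x[n] + a*x[n-1]
xhat = filter(1, [1 a], y);      % inverse system: xhat[n] = y[n] - a*xhat[n-1]
max(abs(x - xhat))               % recovery error is at the level of roundoff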
Obtaining an inverse system by directly solving Eq. (2.21) or Eq. (2.22) is difficult
in general. Furthermore, not every LTI system has a stable and causal inverse. Methods
developed in later chapters provide additional insight into the existence and determination
of inverse systems.
• STEP RESPONSE
The response of a LTI system to a step characterizes how the system responds to sudden
changes in the input. The step response is easily expressed in terms of the impulse response
using convolution by assuming that the input is a step function. Let a discrete-time system
have impulse response h[n] and denote the step response as s[n]. We have
s[n] = h[n] * u[n]
     = Σ_{k=-∞}^{∞} h[k] u[n - k]
Now, since u[n - k] = 0 for k > n and u[n - k] = 1 for k ≤ n, we have
s[n] = Σ_{k=-∞}^{n} h[k]
That is, the step response is the running sum of the impulse response. Similarly, the step
response, s(t), for a continuous-time system is expressed as the running integral of the
impulse response
s(t) = ∫_{-∞}^{t} h(τ) dτ     (2.24)
Note that we may invert these relationships to express the impulse response in terms of
the step response as
h[n] = s[n] - s[n - 1]
and
h(t) = (d/dt) s(t)
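These running-sum and first-difference relationships map directly onto MATLAB's cumsum and diff operations. The short sketch below uses an assumed truncated impulse response h[n] = (1/2)^n u[n] purely for illustration.

n = 0:20;
h = 0.5.^n;                 % first 21 values of an assumed impulse response h[n] = (1/2)^n u[n]
s = cumsum(h);              % step response s[n] as the running sum of h[n]
h_rec = [s(1), diff(s)];    % recover h[n] = s[n] - s[n-1], with s[-1] = 0
max(abs(h - h_rec))         % zero up to roundoff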
EXAMPLE 2.13 Find the step response of the RC circuit depicted in Fig. 2.10 having impulse
response
h(t) = (1/RC) e^{-t/RC} u(t)
Solution: Apply Eq. (2.24) to obtain
s(t) = ∫_{-∞}^{t} (1/RC) e^{-τ/RC} u(τ) dτ
Now simplify the integral as
s(t) = 0,   t ≤ 0
s(t) = (1/RC) ∫_{0}^{t} e^{-τ/RC} dτ = 1 - e^{-t/RC},   t > 0
• Drill Problem 2.7 Find the step response of a discrete-time system with impulse
response
h[n] = (-a)^n u[n]
assuming |a| < 1.
Answer:
s[n] = [(1 - (-a)^{n+1}) / (1 + a)] u[n]     •
• SINUSOIDAL STEADY-STATE RESPONSE
Sinusoidal input signals are often used to characterize the response of a system. Here we
examine the relationship between the impulse response and the steady-state response of a
LTI system to a complex sinusoidal input. This relationship is easily established using
convolution and a complex sinusoidal input signal. Consider the output of a discrete-time
system with impulse response h[n] and unit-amplitude complex sinusoidal input
x[n] = e^{jΩn}, given by
y[n] = Σ_{k=-∞}^{∞} h[k] x[n - k]
     = Σ_{k=-∞}^{∞} h[k] e^{jΩ(n-k)}
     = e^{jΩn} Σ_{k=-∞}^{∞} h[k] e^{-jΩk}
We define
H(e^{jΩ}) = Σ_{k=-∞}^{∞} h[k] e^{-jΩk}     (2.25)
so that y[n] = H(e^{jΩ}) e^{jΩn}. Hence the output of the system is a complex sinusoid of the same frequency as the input
multiplied by the complex number H(e^{jΩ}). This relationship is depicted in Fig. 2.17. The
quantity H(e^{jΩ}) is not a function of time, n, but is only a function of frequency, Ω, and is
termed the frequency response of the discrete-time system.
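Because H(e^{jΩ}) is just a sum over the impulse response values, it can be evaluated numerically on a grid of frequencies. The fragment below is a minimal sketch for a finite-duration impulse response beginning at n = 0; the particular three-point h used here is an arbitrary assumption for illustration.

h = [1 2 1]/4;                        % assumed finite-duration impulse response, h[n] for n = 0,1,2
Omega = linspace(-pi, pi, 512);       % frequency grid on -pi <= Omega <= pi
n = (0:length(h)-1).';                % time indices as a column vector
H = h * exp(-1j*n*Omega);             % H(e^{jOmega}) = sum_k h[k] e^{-j Omega k}
plot(Omega, abs(H)); xlabel('\Omega'); ylabel('|H(e^{j\Omega})|')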
Similar results are obtained for continuous-time systems. Let the impulse response
of a system be h(t) and the input be x(t) = e^{jωt}. The convolution integral gives the output
as
y(t) = ∫_{-∞}^{∞} h(τ) e^{jω(t-τ)} dτ
     = e^{jωt} ∫_{-∞}^{∞} h(τ) e^{-jωτ} dτ
     = H(jω) e^{jωt}     (2.26)
where we define
H(jω) = ∫_{-∞}^{∞} h(τ) e^{-jωτ} dτ     (2.27)
The output of the system is a complex sinusoid of the same frequency as the input mul-
tiplied by the complex constant H(jω). H(jω) is a function of only frequency, ω, and not
time, t. It is termed the frequency response of the continuous-time system.
FIGURE 2.17 A complex sinusoidal input to a LTI system results in a complex sinusoidal output of the same frequency multiplied by the frequency response of the system.
An intuitive interpretation of the sinusoidal steady-state response is obtained by writ-
ing the complex number H(jω) in polar form. Recall that if c = a + jb is a complex
number, then we may write c in polar form as c = |c| e^{j arg{c}}, where |c| = sqrt(a² + b²) and
arg{c} = arctan(b/a). Hence we have H(jω) = |H(jω)| e^{j arg{H(jω)}}. Here |H(jω)| is termed
the magnitude response and arg{H(jω)} is termed the phase response of the system. Sub-
stituting this polar form in Eq. (2.26), the output y(t) is expressed as
y(t) = |H(jω)| e^{j(ωt + arg{H(jω)})}
The system modifies the amplitude of the input by |H(jω)| and the phase by arg{H(jω)}.
The sinusoidal steady-state response has a similar interpretation for real-valued sinusoids.
Write
x(t) = A cos(ωt + φ)
     = (A/2) e^{j(ωt+φ)} + (A/2) e^{-j(ωt+φ)}
and use linearity to obtain the output as
y(t) = A |H(jω)| cos(ωt + φ + arg{H(jω)})
The frequency response of a system can therefore be measured in the laboratory with a
sinusoidal oscillator and oscilloscope by using the oscilloscope to measure the amplitude
and phase change between the input and output sinusoids for different oscillator
frequencies.
It is standard practice to represent the frequency response graphically by separately
displaying the magnitude and phase response as functions of frequency, as illustrated in
the following examples.
EXAMPLE 2.14 The impulse responses of two discrete-time systems are given by
h1[n] = ½(δ[n] + δ[n - 1])
h2[n] = ½(δ[n] - δ[n - 1])
Find the frequency response of each system and plot the magnitude responses.
Solution: Substitute h1[n] into Eq. (2.25) to obtain
H1(e^{jΩ}) = ½(1 + e^{-jΩ})
           = e^{-jΩ/2} (e^{jΩ/2} + e^{-jΩ/2})/2
           = e^{-jΩ/2} cos(Ω/2)
so the magnitude response is
|H1(e^{jΩ})| = |cos(Ω/2)|
Similarly, substituting h2[n] into Eq. (2.25) gives
H2(e^{jΩ}) = ½(1 - e^{-jΩ})
           = e^{-jΩ/2} (e^{jΩ/2} - e^{-jΩ/2})/2
           = j e^{-jΩ/2} sin(Ω/2)
In this case the magnitude response is expressed as
|H2(e^{jΩ})| = |sin(Ω/2)|
and the phase response is -Ω/2 + π/2 for sin(Ω/2) > 0 and -Ω/2 - π/2 for sin(Ω/2) < 0.
Figures 2.19(a) and (b) depict the magnitude response of each system on the interval
-π < Ω < π. This interval is chosen because it corresponds to the range of frequencies for
which the complex sinusoid e^{jΩn} is a unique function of frequency. The convolution sum
indicates that h1[n] averages successive inputs, while h2[n] takes the difference of successive
inputs. Thus we expect h1[n] to pass low-frequency signals while attenuating high frequencies.
This characteristic is reflected by the magnitude response. In contrast, the differencing oper-
ation implemented by h2[n] has the effect of attenuating low frequencies and passing high
frequencies, as indicated by its magnitude response.
FIGURE 2.19 The magnitude responses of two simple discrete-time systems. (a) A system that averages successive inputs tends to attenuate high frequencies. (b) A system that forms the difference of successive inputs tends to attenuate low frequencies.
EXAMPLE 2.15 The impulse response of the RC circuit depicted in Fig. 2.10 is
h(t) = (1/RC) e^{-t/RC} u(t)
Find an expression for the frequency response and plot the magnitude and phase response.
Solution: Substituting h(t) into Eq. (2.27) gives
H(jω) = (1/RC) ∫_{0}^{∞} e^{-(jω + 1/RC)τ} dτ
      = (1/RC) [-1/(jω + 1/RC)] e^{-(jω + 1/RC)τ} evaluated from 0 to ∞
      = (1/RC) [-1/(jω + 1/RC)] (0 - 1)
      = (1/RC) / (jω + 1/RC)
The magnitude response is
|H(jω)| = (1/RC) / sqrt(ω² + (1/RC)²)
and the phase response is
arg{H(jω)} = -arctan(ωRC)
The magnitude and phase response are depicted in Fig. 2.20.
FIGURE 2.20 Frequency response of the RC circuit in Fig. 2.10. (a) Magnitude response. (b) Phase response.
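Plots such as those in Fig. 2.20 are easily generated numerically. The sketch below assumes the normalized value RC = 1 and an arbitrary frequency range; the expressions follow directly from the frequency response derived above.

RC = 1;                                  % assumed normalized time constant
w = linspace(-10, 10, 1001);             % frequency axis in rad/s (arbitrary range)
H = (1/RC) ./ (1j*w + 1/RC);             % frequency response of the RC circuit
subplot(2,1,1); plot(w, abs(H));   ylabel('|H(j\omega)|')
subplot(2,1,2); plot(w, angle(H)); ylabel('arg H(j\omega)'); xlabel('\omega')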
• Drill Problem 2.8 Find an expression for the frequency response of the discrete-
time system with impulse response
h[n] = (-a)^n u[n]
assuming |a| < 1.
Answer:
H(e^{jΩ}) = 1 / (1 + a e^{-jΩ})     •

2.4 Differential and Difference Equation Representations for LTI Systems

Linear constant-coefficient differential and difference equations provide another representation
for the input-output characteristics of LTI systems. A linear constant-coefficient differential
equation has the general form
Σ_{k=0}^{N} a_k (d^k/dt^k) y(t) = Σ_{k=0}^{M} b_k (d^k/dt^k) x(t)     (2.28)
Here x(t) is the input to the system and y(t) is the output. A linear constant-coefficient
difference equation has a similar form, with the derivatives replaced by delayed values of
the input x[n] and output y[n], as shown by
Σ_{k=0}^{N} a_k y[n - k] = Σ_{k=0}^{M} b_k x[n - k]     (2.29)
The integer N is termed the order of the differential or difference equation and corresponds
to the highest derivative or maximum memory involving the system output, respectively.
The order represents the number of energy storage devices in the system.
As an example of a differential equation that describes the behavior of a physical
system, consider the RLC circuit depicted in Fig. 2.21(a). Assume the input is the voltage
FIGURE 2.21 Examples of systems described by differential equations. (a) RLC circuit. (b) Spring-mass-damper system.
source x(t) and the output is the current around the loop, y(t). Summing the voltage drops
around the loop gives
R y(t) + L (d/dt) y(t) + (1/C) ∫_{-∞}^{t} y(τ) dτ = x(t)
This differential equation describes the relationship between the current y(t) and voltage
x(t) in the circuit. In this example, the order is N = 2 and we note that the circuit contains
two energy storage devices, a capacitor and an inductor.
Mechanical systems may also be described in terms of differential equations using
Newton's laws. In the system depicted in Fig. 2.21(b), the applied force, x(t), is the input
and the position of the mass, y(t), is the output. The force associated with the spring is
directly proportional to position, the force due to friction is directly proportional to ve-
locity, and the force due to mass is proportional to acceleration. Equating the forces on
the mass gives
m (d²/dt²) y(t) + f (d/dt) y(t) + k y(t) = x(t)
This differential equation relates position to the applied force. The system contains two
energy storage mechanisms, a spring and a mass, and the order is N = 2.
An example of a second-order difference equation is
y[n] + y[n - 1] + ¼ y[n - 2] = x[n] + 2x[n - 1]     (2.30)
This difference equation might represent the relationship between the input and output
signals for a system that processes data in a computer. In this example the order is N = 2
because the difference equation involves y[n - 2], implying a maximum memory in the
system output of 2. Memory in a discrete-time system is analogous to energy storage in a
continuous-time system.
Difference equations are easily rearranged to obtain recursive formulas for comput-
ing the current output of the system from the input signal and past outputs. Rewrite Eq.
(2.29) so that y[n] is alone on the left-hand side, as shown by
y[n] = (1/a_0) Σ_{k=0}^{M} b_k x[n - k] - (1/a_0) Σ_{k=1}^{N} a_k y[n - k]
This equation indicates how to obtain y[n] from the input and past values of the output.
Such equations are often used to implement discrete-time systems in a computer. Consider
computing y[n] for n ≥ 0 from x[n] for the example second-order difference equation
given in Eq. (2.30). We have
y[n] = x[n] + 2x[n - 1] - y[n - 1] - ¼ y[n - 2]
Beginning with n = 0, we may determine the output by evaluating the sequence of equations
y[0] = x[0] + 2x[-1] - y[-1] - ¼ y[-2]
y[1] = x[1] + 2x[0] - y[0] - ¼ y[-1]
y[2] = x[2] + 2x[1] - y[1] - ¼ y[0]
y[3] = x[3] + 2x[2] - y[2] - ¼ y[1]
...
In each equation the current output is computed from the input and past values of the
output. In order to begin this process at time n = 0, we must know the two most recent
past values of the output, namely, y[-1] and y[-2]. These values are known as initial
conditions. This technique for finding the output of a system is very useful for computation
but does not provide much insight into the relationship between the difference equation
description and system characteristics.
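The recursion is straightforward to carry out on a computer. The following sketch evaluates the second-order difference equation of Eq. (2.30) for an assumed step input and assumed initial conditions y[-1] = 1 and y[-2] = 2; the loop mirrors the sequence of equations written out above.

N = 50;                          % number of output values to compute
x = ones(1, N);                  % assumed input x[n] = u[n] for n = 0,...,N-1 (zero before n = 0)
y = zeros(1, N);
ym1 = 1; ym2 = 2;                % assumed initial conditions y[-1] and y[-2]
xm1 = 0;                         % x[-1] = 0
for n = 1:N                      % MATLAB index n corresponds to time n-1
    y(n) = x(n) + 2*xm1 - ym1 - 0.25*ym2;   % Eq. (2.30) rearranged for the current output
    ym2 = ym1; ym1 = y(n);       % update past outputs
    xm1 = x(n);                  % update past input
end
stem(0:N-1, y)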
The initial conditions summarize all the information about the system's past that is
needed to determine future outputs. No additional information about the past output or
input is necessary. Note that in general the number of initial conditions required to deter-
mine the output is equal to the order of the system. Initial conditions are also required to
solve differential equations. In this case, the initial conditions are the values of the first N
derivatives of the output,
y(t), (d/dt) y(t), (d²/dt²) y(t), ..., (d^{N-1}/dt^{N-1}) y(t)
evaluated at the time t_0 after which we desire to determine y(t). The initial conditions in
a differential-equation description for a LTI system are directly related to the initial values
of the energy storage devices in the system, such as initial voltages on capacitors and initial
currents through inductors. As in the discrete-time case, the initial conditions summarize
all information about the past of the system that can impact future outputs. Hence initial
conditions also represent the ''memory'' of continuous-time systems.
FIGURE 2.22 Illustration of the solution to Example 2.16. (a) Step response of system. (b) Output due to nonzero initial conditions with zero input. (c)-(e) Outputs due to the sinusoidal inputs x1[n], x2[n], and x3[n].
FIGURE 2.22 (f) Input signal consisting of average January temperature data. (g) Output associated with average January temperature data.
Figure 2.22(a) depicts the first 50 values of the step response. This system responds to a step by initially
rising to a value slightly greater than the input amplitude and then decreasing to the value of
the input at about n = 13. For n sufficiently large, we may consider the step to be a dc or
constant input. Since the output amplitude is equal to the input amplitude, we see that this
system has unit gain to constant inputs.
The response of the system to the initial conditions y[-1] = 1, y[-2] = 2 and zero
input is shown in Fig. 2.22(b). Although the recursive nature of the difference equation sug-
gests that the initial conditions affect all future values of the output, we see that the significant
portion of the output due to the initial conditions lasts until about n = 13.
The outputs due to the sinusoidal inputs x1[n], x2[n], and x3[n] are depicted in Figs.
2.22(c), (d), and (e), respectively. Once we are distant from the initial conditions and enter a
steady-state condition, we see that the system output is a sinusoid of the same frequency as
the input. Recall that the ratio of the steady-state output to input sinusoid amplitude is the
magnitude response of the system. The magnitude response is unity at the frequency of x1[n],
about 0.7 at the frequency of x2[n], and near zero at the frequency of x3[n]. These results suggest that the magnitude
response of this system decreases as frequency increases: that is, the system attenuates the
components of the input that vary rapidly, while passing with unit gain those that vary slowly.
This characteristic is evident in the output of the system in response to the average January
temperature input shown in Fig. 2.22(g). We see that the output initially increases gradually
in the same manner as the step response. This is a consequence of assuming the input is zero
prior to 1900. After about 1906, the system has a smoothing effect since it attenuates rapid
fluctuations in the input and passes constant terms with unit gain.
FIGURE 2.23 RC circuit with voltage source x(t), series resistor R, and capacitor C; the output y(t) is the voltage across the capacitor.
• Drill Problem 2.9 Write a differential equation describing the relationship between
the input voltage x(t) and voltage y(t) across the capacitor in Fig. 2.23.
Answer:
RC (d/dt) y(t) + y(t) = x(t)     •
• SOLVING DIFFERENTIAL AND DIFFERENCE EQUATIONS
We now briefly review a method for solving differential and difference equations. This
offers a general characterization of solutions that provides insight into system behavior.
It is convenient to express the output of a system described by a differential or dif-
ference equation as a sum of two components: one associated only with initial conditions,
and a second due only to the input. We shall term the component of the output associated
with the initial conditions the natural response of the system and denote it as y^{(n)}. The
component of the output due only to the input is termed the forced response of the system
and denoted as y^{(f)}. The natural response is the system output for zero input, while the
forced response is the system output assuming zero initial conditions. A system with zero
initial conditions is said to be at rest, since there is no stored energy or memory in the
system. The natural response describes the manner in which the system dissipates any
energy or memory of the past represented by nonzero initial conditions. The forced re-
sponse describes the system behavior that is ''forced'' by the input when the system is at
rest.
The Natural Response
In continuous time the natural response, y^{(n)}(t), is the solution to the homogeneous
equation
Σ_{k=0}^{N} a_k (d^k/dt^k) y^{(n)}(t) = 0
It is of the form
y^{(n)}(t) = Σ_{i=1}^{N} c_i e^{r_i t}     (2.31)
where the r_i are the N roots of the system's characteristic equation
Σ_{k=0}^{N} a_k r^k = 0     (2.32)
Substitution of Eq. (2.31) into the homogeneous equation establishes that y^{(n)}(t) is a so-
lution for any set of constants c_i.
In discrete time the natural response, y^{(n)}[n], is the solution to the homogeneous
equation
Σ_{k=0}^{N} a_k y^{(n)}[n - k] = 0
It is of the form
y^{(n)}[n] = Σ_{i=1}^{N} c_i r_i^n     (2.33)
where the r_i are the N roots of the discrete-time system's characteristic equation
Σ_{k=0}^{N} a_k r^{N-k} = 0     (2.34)
Again, substitution of Eq. (2.33) into the homogeneous equation establishes that y^{(n)}[n] is
a solution. In both cases, the c_i are determined so that the solution y^{(n)} satisfies the initial
conditions. Note that the continuous-time and discrete-time characteristic equations differ.
The form of the natural response changes slightly when the characteristic equation
described by Eq. (2.32) or Eq. (2.34) has repeated roots. If a root r_i is repeated p times,
then we include p distinct terms in the solutions Eqs. (2.31) and (2.33) associated with r_i.
They involve the p functions
e^{r_i t},  t e^{r_i t},  ...,  t^{p-1} e^{r_i t}
and
r_i^n,  n r_i^n,  ...,  n^{p-1} r_i^n
respectively.
The nature of each term in the natural response depends on whether the roots r_i are
real, imaginary, or complex. Real roots lead to real exponentials, imaginary roots to si-
nusoids, and complex roots to exponentially damped sinusoids.
EXAMPLE 2.17 Consider the RL circuit depicted in Fig. 2.24 as a system whose input is the
applied voltage x(t) and output is the current y(t). Find a differential equation that describes
this system and determine the natural response of the system for t > 0 assuming the current
through the inductor at t = 0 is y(0) = 2 A.
FIGURE 2.24 RL circuit with voltage source x(t), resistor R, and inductor L; the output y(t) is the loop current.
Solution: Summing the voltages around the loop gives the differential equation
R y(t) + L (d/dt) y(t) = x(t)
The natural response is the solution to the homogeneous equation
R y(t) + L (d/dt) y(t) = 0
It is of the form y^{(n)}(t) = c1 e^{r1 t}, where r1 is the root of the characteristic equation L r + R = 0.
Hence r1 = -R/L. The coefficient c1 is determined so that the response satisfies the initial
condition y(0) = 2. This implies c1 = 2 and the natural response of this system is
y^{(n)}(t) = 2 e^{-(R/L)t} A,   t ≥ 0
• Drill Problem 2.10 Determine the form of the natural response for the system
described by the difference equation
y[n] + ¼ y[n - 2] = x[n] + 2x[n - 2]
Answer:
y^{(n)}[n] = c1 (j/2)^n + c2 (-j/2)^n     •
• Drill Problem 2.11 Determine the form of the natural response for the RLC circuit
depicted in Fig. 2.21(a) as a function of R, L, and C. Indicate the conditions on R, L, and
C so that the natural response consists of real exponentials, complex sinusoids, and ex-
ponentially damped sinusoids.
Answer: For R² ≠ 4L/C,
y^{(n)}(t) = c1 e^{r1 t} + c2 e^{r2 t}
where
r1 = [-R + sqrt(R² - 4L/C)] / (2L),   r2 = [-R - sqrt(R² - 4L/C)] / (2L)
For R² = 4L/C,
y^{(n)}(t) = c1 e^{-(R/2L)t} + c2 t e^{-(R/2L)t}
For real exponentials R² > 4L/C, for complex sinusoids R = 0, and for exponentially
damped sinusoids R² < 4L/C. •
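The roots in Drill Problem 2.11 are the solutions of the characteristic equation Lr² + Rr + 1/C = 0, so they may also be found numerically. The sketch below uses assumed component values chosen so that R² < 4L/C, giving exponentially damped sinusoids.

R = 1; L = 1; C = 1;             % assumed component values (here R^2 < 4L/C)
r = roots([L, R, 1/C])           % roots of L*r^2 + R*r + 1/C = 0
% Complex-conjugate roots with negative real part correspond to
% an exponentially damped sinusoidal natural response.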
The Forced Response
The forced response is the solution to the differential or difference equation for the
given input assuming the initial conditions are zero. It consists of the sum of two com-
ponents: a term of the same form as the natural response, and a particular solution.
The particular solution is denoted as y^{(p)} and represents any solution to the differ-
ential or difference equation for the given input. It is usually obtained by assuming the
system output has the same general form as the input. For example, if the input to a
discrete-time system is x[n] = α^n, then we assume the output is of the form y^{(p)}[n] = cα^n
and find the constant c so that y^{(p)}[n] is a solution to the system's difference equation. If
the input is x[n] = A cos(Ωn + φ), then we assume a general sinusoidal response of the
TABLE 2.1 Form of the Particular Solution Corresponding to Common Input Signals
Continuous Time                                    Discrete Time
Input            Particular Solution               Input            Particular Solution
1                c                                 1                c
e^{-at}          c e^{-at}                         α^n              c α^n
cos(ωt + φ)      c1 cos(ωt) + c2 sin(ωt)           cos(Ωn + φ)      c1 cos(Ωn) + c2 sin(Ωn)
form y^{(p)}[n] = c1 cos(Ωn) + c2 sin(Ωn), where c1 and c2 are determined so that y^{(p)}[n]
satisfies the system's difference equation. Assuming an output of the same form as the
input is consistent with our expectation that the output of the system be directly related
to the input.
The form of the particular solution associated with common input signals is given
in Table 2.1. More extensive tables are given in books devoted to solving difference and
differential equations, such as those listed at the end of this chapter. The procedure for
identifying a particular solution is illustrated in the following example.
EXAMPLE 2.18 Consider the RL circuit of Example 2.17 and depicted in Fig. 2.24. Find a
particular solution for this system with an input x(t) = cos(ω0 t) V.
Solution: The differential equation describing this system was obtained in Example 2.17 as
R y(t) + L (d/dt) y(t) = x(t)
We assume a particular solution of the form y^{(p)}(t) = c1 cos(ω0 t) + c2 sin(ω0 t). Replacing y(t)
in the differential equation by y^{(p)}(t) and x(t) by cos(ω0 t) gives
R c1 cos(ω0 t) + R c2 sin(ω0 t) - L ω0 c1 sin(ω0 t) + L ω0 c2 cos(ω0 t) = cos(ω0 t)
The coefficients c1 and c2 are obtained by separately equating the coefficients of cos(ω0 t) and
sin(ω0 t). This gives a system of two equations in two unknowns, as shown by
R c1 + L ω0 c2 = 1
-L ω0 c1 + R c2 = 0
Solving these for c1 and c2 gives
c1 = R / (R² + L² ω0²),   c2 = L ω0 / (R² + L² ω0²)
Hence the particular solution is
y^{(p)}(t) = [R / (R² + L² ω0²)] cos(ω0 t) + [L ω0 / (R² + L² ω0²)] sin(ω0 t) A
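The coefficient-matching step amounts to solving a 2-by-2 linear system, which MATLAB handles directly with the backslash operator. The sketch below assumes the normalized values R = 1, L = 1, and ω0 = 1 that are used later in Example 2.19; it reproduces c1 = c2 = 1/2.

R = 1; L = 1; w0 = 1;                    % assumed normalized circuit and input values
M = [R, L*w0; -L*w0, R];                 % coefficient matrix from matching cos and sin terms
c = M \ [1; 0]                           % c(1) = c1, c(2) = c2; here both equal 1/2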
This approach for finding a particular solution is modified when the input is of the
same form as one of the components of the natural response. In this case we must assume
a particular solution that is independent of all terms in the natural response in order to
obtain the forced response of the system. This is accomplished analogously to the proce-
dure for generating independent natural response components when there are repeated
roots in the characteristic equation. Specifically, we multiply the form of the particular
solution by the lowest power of t or n that will give a response component not included
in the natural response. For example, if the natural response contains the terms e^{-at} and
t e^{-at} due to a second-order root at -a, and the input is x(t) = e^{-at}, then we assume a
particular solution of the form y^{(p)}(t) = c t² e^{-at}.
The forced response of the system is obtained by summing the particular solution
with the form of the natural response and finding the unspecified coefficients in the natural
response so that the combined response satisfies zero initial conditions. Assuming the input
is applied at time t = 0 or n = 0, this procedure is as follows:
1. Find the form of the natural response y^{(n)} from the roots of the characteristic
equation.
2. Find a particular solution y^{(p)} by assuming it is of the same form as the input yet
independent of all terms in the natural response.
3. Determine the coefficients in the natural response so that the forced response y^{(f)} =
y^{(p)} + y^{(n)} has zero initial conditions at t = 0 or n = 0. The forced response is valid
for t ≥ 0 or n ≥ 0.
In the discrete-time case, the zero initial conditions, y^{(f)}[-N], ..., y^{(f)}[-1], must be trans-
lated to times n ≥ 0, since the forced response is valid only for times n ≥ 0. This is
accomplished by using the recursive form of the difference equation, the input, and the at-
rest conditions y^{(f)}[-N] = 0, ..., y^{(f)}[-1] = 0 to obtain translated initial conditions
y^{(f)}[0], y^{(f)}[1], ..., y^{(f)}[N - 1]. These are then used to determine the unknown coefficients
in the natural response component of y^{(f)}[n].
EXAMPLE 2.19 Find the forced response of the RL circuit depicted in Fig. 2.24 to an input
x(t) = cos(t) V assuming normalized values R = 1 Ω and L = 1 H.
Solution: The form of the natural response was obtained in Example 2.17 as
y^{(n)}(t) = c e^{-(R/L)t} A
The particular solution was obtained in Example 2.18 as
y^{(p)}(t) = [R/(R² + L²)] cos(t) + [L/(R² + L²)] sin(t) A
which, for R = 1 and L = 1, reduces to ½ cos(t) + ½ sin(t) A.
The coefficient c is now determined from the initial condition y(0) = 0:
0 = c e^{-0} + ½ cos 0 + ½ sin 0
  = c + ½
and so we find that c = -½. Hence the forced response is
y^{(f)}(t) = -½ e^{-t} + ½ cos t + ½ sin t A,   t ≥ 0
• Drill Problem 2.12 A system described by the difference equation
y[n] = ¼ y[n - 2] + 2x[n] + x[n - 1]
has input signal x[n] = u[n]. Find the forced response of the system. Hint: Use y[n] =
¼ y[n - 2] + 2x[n] + x[n - 1] with x[n] = u[n] and y^{(f)}[-2] = 0, y^{(f)}[-1] = 0 to determine
y^{(f)}[0] and y^{(f)}[1].
Answer:
y[n] = (-2(½)^n + 4) u[n]     •
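As a check, the forced response found in Drill Problem 2.12 can be compared against the output computed directly from the recursion with at-rest conditions. This is a minimal verification sketch, not part of the drill itself.

N = 20;
x = ones(1, N);                          % step input x[n] = u[n]
y = zeros(1, N);
ym1 = 0; ym2 = 0;                        % at-rest conditions y[-1] = 0, y[-2] = 0
xm1 = 0;                                 % x[-1] = 0
for n = 1:N
    y(n) = 0.25*ym2 + 2*x(n) + xm1;      % y[n] = (1/4)y[n-2] + 2x[n] + x[n-1]
    ym2 = ym1; ym1 = y(n); xm1 = x(n);
end
yf = -2*(0.5).^(0:N-1) + 4;              % closed-form forced response from the drill answer
max(abs(y - yf))                         % agreement to within roundoff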
The Complete Response
The complete response of the system is the sum of the natural response and the forced
response. If there is no need to separately obtain the natural and the forced response, then
the complete response of the system may be obtained directly by repeating the three-step
procedure for determining the forced response using the actual initial conditions instead
of zero initial conditions. This is illustrated in the following example.
EXAMPLE 2.20 Find the current through the RL circuit depicted in Fig. 2.24 for an applied
voltage x(t) = cos(t) V assuming normalized values R = 1 Ω, L = 1 H and that the initial
condition is y(0) = 2 A.
Solution: The form of the forced response was obtained in Example 2.19 as
y(t) = c e^{-t} + ½ cos t + ½ sin t A
We obtain the complete response of the system by solving for c so that the initial condition
y(0) = 2 is satisfied. This implies
2 = c + ½(1) + ½(0)
or c = 3/2. Hence
y(t) = (3/2) e^{-t} + ½ cos t + ½ sin t A,   t ≥ 0
Note that this corresponds to the sum of the natural and forced responses. In Example 2.17
we obtained
y^{(n)}(t) = 2 e^{-t} A,   t ≥ 0
while in Example 2.19 we obtained
y^{(f)}(t) = -½ e^{-t} + ½ cos t + ½ sin t A,   t ≥ 0
The sum, y(t) = y^{(n)}(t) + y^{(f)}(t), is given by
y(t) = (3/2) e^{-t} + ½ cos t + ½ sin t A,   t ≥ 0
and is exactly equal to the response we obtained by directly solving for the complete response.
Figure 2.25 depicts the natural, forced, and complete responses of the system.
• Drill Problem 2.13 Find the response of the RC circuit depicted in Fig. 2.23 to x(t)
= u(t), assuming the initial voltage across the capacitor is y(0) = -1 V.
Answer:
y(t) = (1 - 2e^{-t/RC}) V,   t ≥ 0     •
The Impulse Response
The method described thus far for solving differential and difference equations can-
not be used to find the impulse response directly. However, the impulse response is easily
FIGURE 2.25 Response of RL circuit depicted in Fig. 2.24 to input x(t) = cos(t) V when y(0) = 2 A. (See Example 2.20.) (a) Natural response. (b) Forced response. (c) Complete response.
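Plots of the kind shown in Fig. 2.25 can be reproduced directly from the closed-form expressions obtained in Examples 2.17, 2.19, and 2.20; the sketch below simply evaluates them on an assumed time grid.

t = 0:0.01:20;                                 % time axis in seconds
yn = 2*exp(-t);                                % natural response (Example 2.17)
yf = -0.5*exp(-t) + 0.5*cos(t) + 0.5*sin(t);   % forced response (Example 2.19)
y  = yn + yf;                                  % complete response (Example 2.20)
plot(t, yn, t, yf, t, y)
xlabel('Time (seconds)'); legend('natural', 'forced', 'complete')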
determined by first finding the step response and then exploiting the relationship between
the impulse and step response. The definition of the step response assumes the system is
at rest, so it represents the forced response of the system to a step input. For a continuous-
time system, the impulse response, h(t), is related to the step response, s(t), as h(t) =
(d/dt) s(t). For a discrete-time system we have h[n] = s[n] - s[n - 1]. Thus the impulse response
is obtained by differentiating or differencing the step response. The differentiation and
differencing operations eliminate the constant term associated with the particular solution
in the step response and change only the constants associated with the exponential terms
in the natural response component. This implies that the impulse response is only a func-
tion of the terms in the natural response.
The forced response is linear with respect to the input. If y1^{(f)} is the forced response
associated with an input x1 and y2^{(f)} is the forced response associated with an input x2, then
the input αx1 + βx2 generates a forced response given by αy1^{(f)} + βy2^{(f)}. Similarly, the natural response is linear
with respect to the initial conditions. If y1^{(n)} is the natural response associated with initial
conditions I1 and y2^{(n)} is the natural response associated with initial conditions I2, then the
initial condition αI1 + βI2 results in a natural response αy1^{(n)} + βy2^{(n)}. The forced response
is also time invariant. A time shift in the input results in a time shift in the output since
the system is initially at rest. In general, the complete response of a system described by a
differential or difference equation is not time invariant, since the initial conditions will
result in an output term that does not shift with a time shift of the input. Lastly, we observe
that the forced response is also causal. Since the system is initially at rest, the output does
not begin prior to the time at which the input is applied to the system.
The forced response depends on both the input and the roots of the characteristic
equation since it involves both the basic form of the natural response and a particular
solution to the differential or difference equation. The basic form of the natural response
is dependent entirely on the roots of the characteristic equation. The impulse response of
the system also depends on the roots of the characteristic equation since it contains the
identical terms as the natural response. Thus the roots of the characteristic equation pro-
vide considerable information about the system behavior.
For example, the stability characteristics of a system are directly related to the roots
of the system's characteristic equation. To see this, note that the output of a stable system
in response to zero input must be bounded for any set of initial conditions. This follows
from the definition of BIBO stability and implies that the natural response of the system
must be bounded. Thus each term in the natural response must be bounded. In the discrete-
time case we must have |r_i^n| bounded, or |r_i| ≤ 1. When |r_i| = 1, the natural response does
not decay and the system is said to be on the verge of instability. For continuous-time
systems we require that |e^{r_i t}| be bounded, which implies Re{r_i} ≤ 0. Here again, when
Re{r_i} = 0, the system is said to be on the verge of instability. These results imply that a
discrete-time system is unstable if any root of the characteristic equation has magnitude
greater than unity, and a continuous-time system is unstable if the real part of any root of
the characteristic equation is positive.
This discussion establishes that the roots of the characteristic equation indicate when
a system is unstable. In later chapters we establish that a discrete-time causal system is
stable if and only if all roots of the characteristic equation have magnitude less than unity,
and a continuous-time causal system is stable if and only if the real parts of all roots of
the characteristic equation are negative. These stability conditions imply that the natural
response of a system goes to zero as time approaches infinity since each term in the natural
response is a decaying exponential. This ''decay to zero'' is consistent with our intuitive
concept of a system's zero input behavior. We expect a zero output when the input is zero.
The initial conditions represent any energy present in the system; in a stable system with
zero input this energy eventually dissipates and the output approaches zero.
The response time of a system is also determined by the roots of the characteristic
equation. Once the natural response has decayed to zero, the system behavior is governed
only by the particular solution, which is of the same form as the input. Thus the natural
response component describes the transient behavior of the system: that is, it describes the
transition of the system from its initial condition to an equilibrium condition determined
by the input. Hence the transient response time of a system is determined by the time it
takes the natural response to decay to zero. Recall that the natural response contains terms of
the form r_i^n for a discrete-time system and e^{r_i t} for a continuous-time system. The transient
response time of a discrete-time system is therefore proportional to the magnitude of the
largest root of the characteristic equation, while that of a continuous-time system is de-
termined by the root whose real component is closest to zero. In order to have a contin-
uous-time system with a fast response time, all the roots of the characteristic equation must
have large and negative real parts.
The impulse response of the system can be determined directly from the differential-
or difference-equation description of a system, although it is generally much easier to
obtain the impulse response indirectly using methods described in later chapters. Note that
there is no provision for initial conditions when using the impulse response; it applies only
to systems that are initially at rest or when the input is known for all time. Differential
and difference equation system descriptions are more flexible in this respect, since they
apply to systems either at rest or with nonzero initial conditions.
2.5 Block Diagram Representations

FIGURE 2.26 Symbols for elementary operations in block diagram descriptions for systems. (a) Scalar multiplication. (b) Addition. (c) Integration for continuous-time systems and time shift for discrete-time systems.
FIGURE 2.27 Block diagram representation for a discrete-time system described by a second-order difference equation.
We begin with the discrete-time case. A discrete-time system is depicted in Fig. 2.27.
Consider writing an equation corresponding to the portion of the system within the dashed
box. The output of the first time shift is x[n - 1]. The second time shift has output
x[n - 2]. The scalar multiplications and summations imply
w[n] = b0 x[n] + b1 x[n - 1] + b2 x[n - 2]     (2.35)
Now we may write an expression for y[n] in terms of w[n]. The block diagram indicates
that
y[n] = w[n] - a1 y[n - 1] - a2 y[n - 2]     (2.36)
The output of this system may be expressed as a function of the input x[n] by substituting
Eq. (2.35) for w[n] in Eq. (2.36). We have
y[n] = -a1 y[n - 1] - a2 y[n - 2] + b0 x[n] + b1 x[n - 1] + b2 x[n - 2]
or
y[n] + a1 y[n - 1] + a2 y[n - 2] = b0 x[n] + b1 x[n - 1] + b2 x[n - 2]
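For specific coefficient values, this input-output behavior can be simulated either by coding the recursion directly or by calling MATLAB's filter command with the coefficient vectors [b0 b1 b2] and [1 a1 a2]. The sketch below uses arbitrary assumed coefficients and confirms that the two computations agree.

b = [1, 0.5, 0.25];          % assumed coefficients b0, b1, b2
a = [1, -0.3, 0.2];          % assumed coefficients 1, a1, a2
x = randn(1, 30);            % arbitrary input signal
y1 = filter(b, a, x);        % built-in implementation of the difference equation

% Direct evaluation of y[n] = -a1*y[n-1] - a2*y[n-2] + b0*x[n] + b1*x[n-1] + b2*x[n-2]
y2 = zeros(size(x));
for n = 1:length(x)
    xm1 = 0; xm2 = 0; ym1 = 0; ym2 = 0;
    if n > 1, xm1 = x(n-1); ym1 = y2(n-1); end
    if n > 2, xm2 = x(n-2); ym2 = y2(n-2); end
    y2(n) = -a(2)*ym1 - a(3)*ym2 + b(1)*x(n) + b(2)*xm1 + b(3)*xm2;
end
max(abs(y1 - y2))            % agreement to within roundoff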
• Drill Problem 2.14 Determine the difference equation corresponding to the block
diagram description of the system depicted in Fig. 2.28.
Answer:
y[n] + ½ y[n - 1] - (1/3) y[n - 3] = x[n] + 2x[n - 2]
FIGURE 2.29 Alternative block diagram representation for a system described by a second-order difference equation.
124 CHAP'l'l::R 2 li TIME-DOMAIN REPRESENTATIONS 1-·oR LINEAR TIME~INVARIANT SYSTEMS
11'' impleme11tation. The direct form II implementation uses memory more efficic11tly, since
for this example it requires only two memory locations co1npared t<) the four required for
the dírect form I.
There are many different i111pleme11tatio11s fc>r a system whose input-c>utput behavíor
is described by a difference equation. They are <)btained by manipularing cither the differ-
ence equation or the elen1ents in a bl<)Ck diagram representation. \Xfhile these different
systems are equivalent from an input-output perspectíve, they will gcnerally differ with
respect to other criteria such as memory requírements, the number clf comptttations re-
quired per output value, or numerical accuracy.
Analogous results hold for continuous-time systems. We may simply replace the time-
shift operations in Figs. 2.27 and 2.29 with time differentiation to obtain block diagram
representations for systems described by differential equations. However, in order to depict
the continuous-time system in terms of the more easily implemented integration operation,
we must first rewrite the differential equation description
Σ_{k=0}^{N} a_k (d^k/dt^k) y(t) = Σ_{k=0}^{M} b_k (d^k/dt^k) x(t)     (2.40)
as an integral equation.
We define the integration operation in a recursive manner to simplify the notation.
Let v^{(0)}(t) = v(t) be an arbitrary signal and set
v^{(n)}(t) = ∫_{-∞}^{t} v^{(n-1)}(τ) dτ,   n = 1, 2, 3, ...
Hence v^{(n)}(t) is the n-fold integral of v(t) with respect to time. This definition integrates
over all past values of time. We may rewrite this in terms of an initial condition on the
integrator as
v^{(n)}(t) = ∫_{0}^{t} v^{(n-1)}(τ) dτ + v^{(n)}(0),   n = 1, 2, 3, ...
If we assume zero initial conditions, then integration and differentiation are inverse op-
erations; that is,
(d/dt) v^{(1)}(t) = v(t)
Hence if N ≥ M and we integrate Eq. (2.40) N times, we obtain the integral equation
description for the system:
Σ_{k=0}^{N} a_k y^{(N-k)}(t) = Σ_{k=0}^{M} b_k x^{(N-k)}(t)     (2.41)
For example, a second-order system with a2 = 1 is described by the integral equation
y(t) = -a1 y^{(1)}(t) - a0 y^{(2)}(t) + b2 x(t) + b1 x^{(1)}(t) + b0 x^{(2)}(t)     (2.42)
Direct form I and direct form II implementations of this system are depicted in Figs. 2.30(a)
and (b). The reader is asked to show that these block diagrams implement the integral
equation in Problem 2.25. Note that the direct form II implementation uses fewer integra-
tors than the direct form I implementation.
Block diagram representations for continuous-time systems may be used to specify
analog computer simulations of systems. In such a simulation, signals are represented as
FIGURE 2.30 (a) Direct form I and (b) direct form II implementations of the system described by the integral equation (2.42).
voltages, resistors are used to implement scalar multiplication, and the integrators are
constructed using operational amplifiers, resistors, and capacitors. Initial conditions are
specified as initial voltages on integrators. Analog computer simulations are much more
cumbersome than digital computer simulations and suffer from drift, however, so it is
common to simulate continuous-time systems on digital computers by using numerical
approximations to either integration or differentiation operations.
2.6 State-Variable Descriptions for LTI Systems

The state of a system may be defined as a minimal set of signals that represent the system's
entire memory of the past: given only the value of the state at time n0 (or t0) and the input
for n ≥ n0 (or t ≥ t0), we can determine the output for all times n ≥ n0 (or t ≥ t0). We shall see that the
selection of signals comprising the state of a system is not unique and that there are many
possible state-variable descriptions corresponding to a system with a given input-output
characteristic. The ability to represent a system with different state-variable descriptions
is a powerful attribute that finds application in advanced methods for control system
analysis and discrete-time system implementation.
We shall develop the general state-variable description by starting with the direct form II
implementation for a second-order LTI system depicted in Fig. 2.31. In order to determine
the output of the system for n ≥ n0, we must know the input for n ≥ n0 and the outputs
of the time-shift operations labeled q1[n] and q2[n] at time n = n0. This suggests that we
may choose q1[n] and q2[n] as the state of the system. Note that since q1[n] and q2[n]
are the outputs of the time-shift operations, the next value of the state, q1[n + 1] and
q2[n + 1], must correspond to the variables at the input to the time-shift operations.
FIGURE 2.31 Direct form II representation for a second-order discrete-time system depicting state variables q1[n] and q2[n].
The block diagram indicates that the next value of the state is obtained from the current
state and the input via the equations
q1[n + 1] = -a1 q1[n] - a2 q2[n] + x[n]     (2.43)
q2[n + 1] = q1[n]     (2.44)
The block diagram also indicates that the system output is expressed in terms of the input
and state as
y[n] = b1 q1[n] + b2 q2[n] + (x[n] - a1 q1[n] - a2 q2[n])
or
y[n] = (b1 - a1) q1[n] + (b2 - a2) q2[n] + x[n]     (2.45)
We write Eqs. (2.43) and (2.44) in matrix form as
[ q1[n + 1] ]   [ -a1  -a2 ] [ q1[n] ]   [ 1 ]
[ q2[n + 1] ] = [   1    0 ] [ q2[n] ] + [ 0 ] x[n]     (2.46)
while Eq. (2.45) is expressed as
y[n] = [ b1 - a1   b2 - a2 ] [ q1[n] ]
                             [ q2[n] ] + [1] x[n]     (2.47)
If we define the state vector as the column vector q[n] = [q1[n]  q2[n]]^T, then Eqs. (2.46)
and (2.47) may be rewritten compactly as
q[n + 1] = A q[n] + b x[n]     (2.48)
y[n] = c q[n] + D x[n]     (2.49)
where the matrix A, the vectors b and c, and the scalar D are given by the corresponding
quantities in Eqs. (2.46) and (2.47). A different choice of state variables results in different
A, b, c, and D and thus another description for the system. Systems having different internal structures will be
represented by different A, b, c, and D. The state-variable description is the only analytic
system representation capable of specifying the internal structure of the system. Thus the
state-variable description is used in any problem in which the internal system structure
needs to be considered.
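Equations (2.48) and (2.49) can be simulated with a simple loop once A, b, c, and D are specified. The sketch below uses arbitrary assumed matrices for a second-order system and a zero initial state; it is meant only to illustrate the state update, not any particular system in the text.

A = [-0.5, 0.2; 1, 0];       % assumed state matrix
b = [1; 0];                  % assumed input vector
c = [0.3, -0.1];             % assumed output vector
D = 1;                       % assumed direct term
x = [1, zeros(1, 29)];       % impulse input, so y is the impulse response of this realization
q = [0; 0];                  % zero initial state (system at rest)
y = zeros(1, length(x));
for n = 1:length(x)
    y(n) = c*q + D*x(n);     % output equation, Eq. (2.49)
    q = A*q + b*x(n);        % state update, Eq. (2.48)
end
stem(0:length(x)-1, y)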
If the input-output characteristics of the system are described by an Nth-order dif-
ference equation, then the state vector q[n] is N-by-1, A is N-by-N, b is N-by-1, and c is
1-by-N. Recall that solution of the difference equation requires N initial conditions. The
N initial conditions represent the system's memory of the past, as does the N-dimensional
state vector. Also, an Nth-order system contains at least N time-shift operations in its block
diagram representation. If the block diagram for a system has a minimal number of time
shifts, then a natural choice for the states are the outputs of the unit delays, since the unit
delays embody the memory of the system. This choice is illustrated in the following
example.
EXAMPLE 2.21 Find the state-variable description corresponding to the second-order system
depicted in Fig. 2.32 by choosing the state variables to be the outputs of the unit delays.
Solution: The block diagram indicates that the states are updated according to the equations
q1[n + 1] = α q1[n] + δ1 x[n]
q2[n + 1] = γ q1[n] + β q2[n] + δ2 x[n]
and the output is given by
y[n] = η1 q1[n] + η2 q2[n]
These equations are expressed in the state-variable forms of Eqs. (2.48) and (2.49) if we define
q[n] = [q1[n]  q2[n]]^T
and
A = [ α  0 ]      b = [ δ1 ]
    [ γ  β ],         [ δ2 ]
c = [η1  η2],   D = [0]
FIGURE 2.33 Block diagram of system for Drill Problem 2.15.
• Drill Problem 2.15 Find the state-variable description corresponding to the block
diagram in Fig. 2.33. Choose the state variables to be the outputs of the unit delays, q1[n]
and q2[n], as indicated in the figure.
Answer:
A = [ -1/2    0  ]      b = [ 1 ]
    [  1/3  -1/3 ],         [ 1 ]
c = [0  1],   D = [2]     •
The state-variable description for continuous-time systems is analogous to that for
discrete-time systems, with the exception that the state equation given by Eq. (2.48) is
expressed in terms of a derivative. We thus write
(d/dt) q(t) = A q(t) + b x(t)     (2.50)
while the output equation retains the form of Eq. (2.49),
y(t) = c q(t) + D x(t)     (2.51)
Once again, the matrix A, vectors b and c, and scalar D describe the internal structure of
the system.
The memory of a continuous-time system is contained within the system's energy
storage devices. Hence state variables are usually chosen as the physical quantities asso-
ciated with the energy storage devices. For example, in electrical systems the energy storage
devices are capacitors and inductors. We may choose state variables to correspond to the
voltage across capacitors or the current through inductors. In a mechanical system the
energy-storing devices are springs and masses. State variables may be chosen as spring
displacement or mass velocity. In a block diagram representation energy storage devices
are integrators. The state-variable equations represented by Eqs. (2.50) and (2.51) are
obtained from the equations that relate the behavior of the energy storage devices to the
input and output. This procedure is demonstrated in the following examples.
EXAMPLE 2.22 Consider the electrical circuit depicted in Fig. 2.34. Derive a state-variable
description for this system if the input is the applied voltage x(t) and the output is the current
through the resistor labeled y(t).
FIGURE 2.34 Circuit for Example 2.22, with the capacitor voltages q1(t) and q2(t) chosen as state variables.
Solution: Choose the state variables as the voltage across each capacitor. Summing the
voltage drops around the loop involving x(t), R1, and C1 gives
x(t) = R1 y(t) + q1(t)
or
y(t) = -(1/R1) q1(t) + (1/R1) x(t)     (2.52)
.
This equation expresses the output as a function of the state variables and input. Let i2 (t) be
the current through R 2 • Summing the voltage drops around the loop involving C 1, R2 , and C2
we obtain ·
. 1.
.. . ' .
... ,:·
'
: ·,..
. or·· .
;, .. ... ·. : ~ (2.53)
i2(t) = C2 ~ q2(t) ·,
Lastly, we need a state equation for q1(t). This is obtained by applying Kirchhoff's current
law to the node between R1 and R2. Letting i1(t) be the current through C1, we have
y(t) = i1(t) + i2(t)
Now substitute Eq. (2.52) for y(t), Eq. (2.53) for i2(t), and
i1(t) = C1 (d/dt) q1(t)
for i1(t), and rearrange to obtain
(d/dt) q1(t) = -(1/(C1 R1) + 1/(C1 R2)) q1(t) + (1/(C1 R2)) q2(t) + (1/(C1 R1)) x(t)     (2.55)
The state-variable description is now obtained from Eqs. (2.52), (2.54), and (2.55) as
A = [ -(1/(C1 R1) + 1/(C1 R2))     1/(C1 R2)  ]      b = [ 1/(C1 R1) ]
    [        1/(C2 R2)            -1/(C2 R2)  ],         [     0     ]
c = [ -1/R1    0 ],   D = [ 1/R1 ]
FIGURE 2.35 Circuit for Drill Problem 2.16, with state variables q1(t) (the voltage across the capacitor) and q2(t) (the current through the inductor).
• Drill Problem 2.16 Find the state-variable description for the circuit depicted in
Fig. 2.35. Choose state variables q 1 (t) and q 2 (t) as the voltage across the capacitor and the
current through the inductor, respectively.
Answer:
A = [ -1/((R1 + R2)C)        -R1/((R1 + R2)C)     ]      b = [ 1/((R1 + R2)C) ]
    [  R1/((R1 + R2)L)       -R1 R2/((R1 + R2)L)  ],         [ R2/((R1 + R2)L) ]
c = [ -1/(R1 + R2)    -R1/(R1 + R2) ],   D = [ 1/(R1 + R2) ]     •
In a block diagram representation for a continuous-time system the state variables
correspond to the outputs of the integrators. Thus the input to each integrator is the deriv-
ative of the corresponding state variable. The state-variable description is obtained by
writing equations that correspond to the operations in the block diagram. This procedure
is illustrated in the following example.
EXAMPLE 2.23 Determine the state-variable description corresponding to the block diagram
in Fig. 2.36. The choice of state variables is indicated on the diagram.
Solution: The block diagram indicates that
(d/dt) q1(t) = 2 q1(t) - q2(t) + x(t)
(d/dt) q2(t) = q1(t)
y(t) = 3 q1(t) + q2(t)
Hence the state-variable description is
A = [ 2  -1 ]      b = [ 1 ]
    [ 1   0 ],         [ 0 ]
c = [3  1],   D = [0]
FIGURE 2.36 Block diagram of system for Example 2.23, with the state variables q1(t) and q2(t) chosen as the integrator outputs.
We have claimed that there is no unique state-variable description for a system with a
given input-output characteristic. Different state-variable descriptions may be obtained
by transforming the state vector. Let T be a nonsingular matrix that defines the new state
vector q' = Tq. Then q̇' = Tq̇, where the dot over q denotes differentiation in continuous
time or time advance in discrete time. The new state-variable description A', b', c', and D' is derived by noting
q̇' = Tq̇
   = TAq + Tbx
   = TAT^{-1} q' + Tbx
and
y = cq + Dx
  = cT^{-1} q' + Dx
Hence if we set
A' = TAT^{-1},   b' = Tb
c' = cT^{-1},    D' = D     (2.56)
then the state-variable description A', b', c', and D' represents the same input-output behavior
as A, b, c, and D.
For example, consider a system with the state-variable description
A = (1/10) [ -1   4 ]      b = [ 2 ]
           [  4  -1 ],         [ 4 ]
c = (1/2) [1  1],   D = [2]
and the new state vector defined by the transformation
T = (1/2) [ -1  1 ]
          [  1  1 ]
Applying Eq. (2.56) gives
A' = [ -1/2    0   ]      b' = [ 1 ]
     [   0   3/10  ],          [ 3 ]
c' = [0  1],   D' = [2]
Note that this choice for T results in A' being a diagonal matrix and thus separates the state
update into the two decoupled first-order difference equations
q'1[n + 1] = -½ q'1[n] + x[n]
q'2[n + 1] = (3/10) q'2[n] + 3x[n]
The decoupled form of the state-variable description is particularly useful for analyzing sys-
tems because of its simple structure.
• Drill Problem 2.17 A system has the state-variable description
A = [ -2   0 ]      b = [ 1 ]
    [  1  -1 ],         [ 1 ]
c = [0  2],   D = [1]
Find the state-variable description A', b', c', and D' corresponding to the new states
q'1(t) = 2q1(t) + q2(t) and q'2(t) = q1(t) - q2(t).
Answer:
A' = (1/3) [ -4  -1 ]      b' = [ 3 ]
           [ -2  -5 ],          [ 0 ]
c' = (1/3) [2  -4],   D' = [1]     •
Note that each nonsingular transformation T generates a different state-variable de-
scription for a system with a given input-output behavior. The ability to transform the
state-variable description without changing the input-output characteristics of the system
is a powerful tool. It is used to analyze systems and identify implementations of systems
that optimize some performance criteria not directly related to input-output behavior,
such as the numerical effects of roundoff in a computer-based system implementation.
2.7 Exploring Concepts with MATLAB

• CONVOLUTION
Recall that the convolution sum expresses the output of a discrete-time system in terms of
the input and impulse response of the system. MATLAB has a function named conv that
evaluates the convolution of finite-duration discrete-time signals. If x and h are vectors
representing signals, then the MATLAB command y = conv(x, h) generates a vector
y representing the convolution of the signals represented by x and h. The number of
elements in y is given by the sum of the number of elements in x and h minus one. Note
that we must know the time origin of the signals represented by x and h in order to
determine the time origin of their convolution. In general, if the first element of x corre-
sponds to time n = kx and the first element of h corresponds to time n = kh, then the first
element of y corresponds to time n = kx + kh.
To illustrate this, consider repeating Example 2.1 using MATLAB. Here the first
nonzero value in the impulse response occurs at time n = -1 and the first element of the
input x occurs at time n = 0. We evaluate this convolution in MATLAB as follows:
>> h = [1, 2, 1];
>> x = [2, 3, -2];
>> y = conv(x, h)
y =
     2     7     6    -1    -2
The first element in the vector y corresponds to time n = 0 + (-1) = -1.
In Example 2.3 we used hand calculation to determine the output of a system with
impulse response given by
h[n] = u[n] - u[n - 10]
and input
x[n] = u[n - 2] - u[n - 7]
We may use the MATLAB command conv to perform the convolution as follows. In this
case, the impulse response consists of ten consecutive ones beginning at time n = 0, and
the input consists of five consecutive ones beginning at time n = 2. These signals may be
defined in MATLAB using the commands
>> h = ones(1,10);
>> x = ones(1,5);
The output, obtained with y = conv(x,h), is depicted in Fig. 2.37.
FIGURE 2.37 Convolution sum computed using MATLAB.
• Drill Problem 2.18 Use MATLAB to solve Drill Problem 2.2 for a = 0.9. That is,
find the output of the system with input x[n] = 2(u[n + 2] - u[n - 12]) and impulse
response h[n] = 0.9^n (u[n - 2] - u[n - 13]).
Answer: See Fig. 2.38. •
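One way to arrive at the answer shown in Fig. 2.38 is sketched below; the key bookkeeping step is that the first element of the result corresponds to time n = kx + kh = -2 + 2 = 0.

x = 2*ones(1, 14);           % x[n] = 2 for n = -2,...,11, so the first element is at n = -2
h = 0.9.^(2:12);             % h[n] = 0.9^n for n = 2,...,12, so the first element is at n = 2
y = conv(x, h);              % output; its first element corresponds to n = -2 + 2 = 0
stem(0:length(y)-1, y)       % reproduces a plot like Fig. 2.38
xlabel('Time'); ylabel('Amplitude')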
• STEP AND SINUSOIDAL STEADY-STATE RESPONSES
The step response is the output of a system in response to a step input and is, in general,
infinite in duration. However, we can evaluate the first p values of the step response using
the conv function if the system impulse response is zero for times n < kh, by convolving
the first p values of h[n] with a finite-duration step of length p. That is, we construct a
vector h from the first p nonzero values of the impulse response, define the step u =
ones(1,p), and evaluate s = conv(u,h). The first element of s corresponds to
time kh, and the first p values of s represent the first p values of the step response. The
remaining values of s do not correspond to the step response, but are an artifact of con-
volving finite-duration signals.
For example, we may determine the first 50 values of the step response of the system
with impulse response given in Drill Problem 2.7:
h[n] = (-a)^n u[n]
with a = 0.9 by using the MATLAB commands
>> h = (-0.9).^[0:49];
>> u = ones(1,50);
>> s = conv(u,h);
The vector s has 99 values, the first 50 of which represent the step response and
are depicted in Fig. 2.39. This figure is obtained using the MATLAB command
stem([0:49], s(1:50)).
The sinusoidal steady-state response of a discrete~time system is given by the ampli-
tude and phase change experienced by the infinite-duration complex sinusoidal input signal
System Output
12 1 ! 1 !
'
1
10 - -
1
8 ··- -
...... -
4 ...._ -
)
'
2 ···- -
~
o
o 5 10 15 20 25
Tin1e
FIGURE 2.38 Solution to Drill Problem 2.18.
136 CHAPTER 2 ~ Til\-'1E-DOMAIN REPRESENTATIONS FOR LINEAR TIME-INVARIANT SYSTEMS
Step Response
l \ l 1 '! 'i l
;
; j ·1
'
0.9 -· -
•
0.8 -
0.7 -
(!.)
-g
0.6 ···- -
-
;.g_ 0.5
E
r
'
r
..
0.2 ...
0.1 ...
o ~ ~ ... .. . .. . . ..• .. ~ .
o 5 10 15 20 25 30 35 40 45 50
Time
x[n 1 = ei!ln_ The sinusoidal steady-state response of a system with finite-duration impulse
response may be determined using a finite-duration sinusoid prc>vided the sinusoid is suf-
.ficiently long to drive the system to a steady-state condition. To show this, suppose
h[n} = O for n < n 1 and n > n 2 , and let the system input be the finite-duration sinusoid
v[n 1 = ei!ln(u[n1- u[n - nv]). We may write the system output as
= h[nJ * e;nn,
Hence the system output in response to a finite-duration sinusoidal input corresponds to
the sinusoidal steady-state response on the interval n 2 ::s n < n 1 + nv. The magnitude and
phase response of the system may be determined from y[n], n 2 < n < n 1 + n 11 , by noting
that
and
arg{y[n]} - nn = arg{H(eiº)},
\Y/e may use this approach to evaluate the sinusc)idal steady-state response of one of
the sysrems given in Exa1nple 2.14. Consider the system with impulse response
-1 n=O
2,
h[n] --2,1 n=1
o, otherwise
2. 7 Exploring Concepts 1vith MATIAB 137
We shall determine the frequency response and 50 values of the sinusoidal steady-state
response of this system for input frequencíes !l = ¼1r and 1r. J
Here n 1 = O and n 2 = 1, so to obtain 50 values of the sinusoidal steady-state response
we require nv > 51. The sinusoidal steady-state responses are obtained by MATLAB
commands
» Omega1 = pi/4; Omega2 = 3*pi/4;
» v1 = exp(j*Omega1*[0:50J);
» v2 = exp(j*Omega2*[0:50]);
>> h = [O. 5, -O. 5];
>> y1 = conv(v1,h); y2 = conv(v2,h);
Figures 2.40(a) and (b) depict the real and imagínary components of y 1, respectively, and
may be obtained with the commands
>> subplot(2,1,1)
» stem([0:51J,real(y1))
» xlabel('Time'); ylabel('Amplitude');
>> title( 'Real(y1) 1 )
>> subplotC2,1,2)
» stem([0:51],imag(y1))
>> xlabel('Time'); ylabel('Amplitude');
>> title('Imag(y1)')
Real(y 1)
0.6 , - - - - - - - - . l - - - - - - - - . - ! - - - - - . . - - 1 - - - - - - - . 1 - - - - - - - - - - - - ,
0.4 - -
<t>
] 0.2 ..__ . -
---a.
~
-0.2 - -
' _ ____.' _
-0.4 ,...___ ' ___' __._i _ _ _
' ___._
i ____________
' l ' ...,___ _ ____,
o 10 20 30 40 50 60
Time
0.4 - - - - - - - - - - - - - - - - , - - - - - - , - - - - - - . . . . - - - - - - - - ,
0.2
.g:, o
-
....
~ -0.2 . . . .
-0.4
The sinusoidal steady-state response is represented by the values at time índices 1 through
50.
We may now obtain the magnitude and phase responses from any element of the
vectors y 1 and y 2 except for the first one or the last one. Using the fifth element, we use
the commands
» H1mag = abs(y1(5))
H1mag =
0.3287
» H2mag = abs(y2(5))
H2mag =
0.9239
» H1phs = angle(y1(5)) - Omega1*5
H1phs =
-5.8905
» H2phs = angle(y2C5)) - Omega2*5
H2phs =
-14.5299
The phase response is measured in radians. Note that the a n g l e command always returns
a value between - 1T and Tr radians. Hence measuring phase wíth the command
a n g l e ( y 1 { n) ) - Omega 1 * n may result in answers that differ by integer multiples
of 2Tr when different values of n are used.
• Drill Problem 2.19 Evaluate the frequency response and 50 values of the sinusoidal
steady-state response of the system with impulse response
O :5 n < 3
h[n] =
O, otherwise
at frequency n = ½1r.
Answer: The steady-state response is given by the values at time indices 3 through 52 in
Fig. 2.41. Usíng the fourth element of the steady-state response gives I H(ei7r/J) 1 = 0.4330
and arg{H(e;,"13 )} = -1.5708 radians. •
• SIMULATING DIFFERENCE EQUATIONS
Real(y)
0.4 1 1 1
0.2 ... -
CL)
-e
·--o-. ºl
::,
o .. o o o o o o o o o o o o o o o o o ·-
E
<C
.......
-0.2 1--·····
i " i .
'' :
-0.4
o 10 20 30 40 50 60
Time
lmage(y)
0.5 ,---------.------,-------.--------r------,-------,
~
a
·--s o
<C
• Drill Problem 2.20 Use f i l ter to determine the first 50 values of the step re-
sponse of the system descríbed by Eq. (2.57) and the first 100 values of the response to
the input xfnl = cos( ¼7Tn) assuming zero initial conditions.
• STATE-VARIABLE DESCRIPTIONS
The MATLAB Control System Toolbox contains numerous routines for manipulating
state-variable descriptions. A key feature of the Control System Toolbox is the use of LTI
objecrs, which are customized data structures that enable manipulation of LTI system
descri ptíons as single MATLAB varia bles. If a, b, e, and d are MATLAB arra ys repre-
senting the A, b, e, and D matrices in the state-variable description, then the command
s y s = s s (a, b, e, d, -1 ) produces a LTI object s y s that represents the discrete-time
system in state-variable form. Note that a continuous-time system is obtained by omitting
the -1, that is, using s y s = s s (a, b, e, d>. LTI objects corresponding to other system
representations are discussed in Sections 6.9 and 7.1 O.
Systems are manipulated in MATLAB by operations on their LTI objects. For ex-
ample, if s y s 1 and s y s 2 are objects representing two systems in state-variable form,
then s y s = s y s 1 + s y s 2 produces the state-variable description for the parallel
combination of s y s 1 and s y s 2, while s y s = s y s 1 * s y s 2 represents thc casca de
combina tion.
The functic>n L s i m sin1ulates the output of a system in response to a specified input.
For a discrete-time system, the command has the form y = l sim ( s y s, x), where x
is a vector containing the input and y represents the output. The command h =
i mp u l se <s y s, N) places the first N values of the impulse response in h. Both of these
may also be used for continuous-time systems, although the command syntax changes
slightly. ln the continuous-time case, numerical methods are used to approximate the con-
• • •
t1nuous-t1me system response.
Recall that there is no unique state-variable description for a given system. Different
state-variable descriptions for the sarne system are obtained by transforming the state.
Transforma tions of the state ma y be com pu ted in MATLAB using the rou tine s s 2 s s.
The state transformation is identical for both continuous- and discrete-time systems, so
the sarne command is used for transforming either type of system. The command is of the
form s y s T = s s 2 s s ( s y s, T), where s y s represents the original state-variable de-
scription, T is the state transformation matrix, and s y s T represents the transformed state-
variable description.
Consider using s s 2 s s to transform the state-variable description of Example
2.24
-1 4 2
A= 1 h=
10 4 -1 ' 4
e= 1[1 1],
2 D= [2]
2. 7 Exploring Concepts with MATIAB 141
-1 1
1 1
The following commands prc>duce the desíred result:
>> a --
[-0.1, 0.4; 0.4, -0.1J; b - [2; 4 J ,.
>> e - [0.5, 0.5]; d - 2; -
>> sys -
- ss(a,b,c,d,-1); % define the state-space object sys
>> T -
- 0.5*[-1, 1 ; 1 , 1 ] ,.
>> sysT - ss2ss(sys,T)
a --
x1 x2
x1 -0.50000 o
x2 o 0.30000
b --
u1
x1 1.00000
x2 3.00000
e - -
x1 x2
y1 o 1 .00000
d --
u1
y1 2.00000
Sampling time: unspecified
Discrete-time system.
This result agrees with Example 2.24. We may verify that the two systems represented by
s y s and s y s T have identical input-output characteristics by comparing their impulse
responses via the following commands:
» h = impulse(sys,10); hT = impulse(sysT,10);
» subplot(2,1,1)
>> stem([0:9],h)
>> title( 'Original System Impulse Response');
>> xlabel( 'Time'>; ylabel('Amplitude')
» subplot(2,1,2)
>> stem([0:9],hT)
>> title('Transformed System Impulse Response');
>> xlabel('Time'); ylabel('Amplitude')
Figure 2.42 depicts the first 10 values of the impulse responses of the original and crans-
formed systems produced by rhis sequence of commands. We may verify that the original
and transformed systems have the (numerically) identical impulse response by computing
the error e r r = h - h T.
o
o 1 2 3 4 5 6 7 8 9
Time
Transforrned System Impulse Response
3
~ 2.5 ....
...·-=
-o.
E 1.5
2
<!'.
1 ···--
0.5
o
o l 2 3 4 5 6 7 8 9
Time
FIGURE 2.42 lm1lulse responses associated with the original and transformed state-variahle dc-
scriptions computed t1sing MATLAB.
12-8 Su1nmary
- .. .
There are many different methc>ds fc>r describing the actic>n c>f a l,TI system on an input
signal. ln this chapter we have examined four different descriptions for LTI systems: the
impulse response, difference- and differential-equation, block diagram, and state-variable
descriptions. All four are equivalent in the input-output sense; for a given input, each
description will produce the identical output. However, different descriptions offer differ-
ent insights into system characteristics and use different techniques for obtaining the output
from the input. Thus each description has its own advantages and disadvantages for solving
a particular system problem.
The impulse response is the output c>f a system when the input is an impulse. The
output of a linear time-invariant sysrem in response to an arbitrary input is expressed in
terms of rhe impulse response as a convolution operation. System properties, such as caus-
ality and stability, are directly related to the impulse response. The impulse response also
offers a convenient framew<>rk for analyzing intercl)nnections of systems. The input must
be know11 fc">r all time in order to determine the <lutput of a system using the impulse
response and convolution.
The input and <)utput of a LTI system may als<> be related using either a differential
or difference equati<>n. Differential equations ofren follow directly from the physical prin-
cipies that define the behavior and interaction of continuous-time system components. The
order of a differenrial equation reflects the maximum number of energy storage <levices in
the system, while the order of a difference equation represents the system's maximum
memory of past outputs. ln contrast to impulse response descriptions, the <>utput of a
system frt>m a given point in time forward can be determined withc>ut knowledge of ali
past inputs provided initial conditions are known. Initial ct>nditions are the initial values
of energy storage or system memory and summarize the effect of all past inputs up to the
Further Reading 143
starting time of interest. The solution to a differential or difference equation can be sep-
arated into a natural and forced response. The natural response describes the behavior of
the system dueto the initial conditions. The forced response describes the behavior of the
system in response to the input alone.
The block diagram represents the system as an interconnection of elementary oper-
ations on signals. The manner in which these operations are intercon11ected defines the
internai structure of the system. Different block diagrams can represent systems with iden-
tical input-output characteristícs.
The stare-variable description is a series of coupled first-order differential or differ-
ence equations representing the system behavior, which are written in matrix form. It
consists of two equations: one equation describes how che state of the system evolves and
a second equation relates the state to the output. The state represents the system's entire
memory of the past. The number of states corresponds to the number of energy storage
<levices or maximum memory of past outputs present in the system. The choice of state is
not unique; an infinite number of dífferent state-variable descriptions can be used to rep-
resent systems wíth the sarne input-output characteristic. The state-variable description
can be used to represent the internai structure of a physical system and chus provides a
more detailed characterization of systems than the impulse response or differentíal (dif~
ference) equations.
fURTHER READING
1. A concise summary and many worked problems for much of the material presented in this
and later chapters is found in:
• Hsu, H. P., Sígnals and Systems, Schaum's Outline Series (McGraw-Hill, 1995)
2. The notation H(eifl) and H(jw) for the sinusoidal steady-state response of a discrete- anda
continuous~time system, respectively, may seem unnatural at first glance. Indeed, the alter-
native notations H(fl) and H(w) are sometimes used in engineering pracrice. However,
our notation is more commonly used as it allows the sinusoidal steady-state response
to be de.fined naturally in terms of the z-transform (Chapter 7) and the Laplace transform
(Chapter 6).
3. A general treatment of differential equations is given in:
• Boyce, W. E., and R. C. DiPrima, Elementary Differential Equations, Sixrh Edition (Wiley,
1997)
4. The role of difference equations and block diagram descriptions for discrete-time systems
in signal processing are described in:
• Proakis, J. G., and D. G. Manolakis, Introductíon to Digital Signal Processing (Macmillan,
1988)
• Oppenheím, A. V., and R. W. Schafer, Discrete-Time Signal Pr<Jcessíng (Prentice Hall, 1989)
5. The role of differential equations, block diagram descriptions, and state-variable descriptions
in control systems ís described in:
• Dorf, R. C., and R. H. Bishop, Modern Control Systems, Seventh Edition (Addison-Wesley,
1995)
• Phíllips, C. L., and R. D. Harbor, Feedback Contrai Systems, Third Editíon (Prentice Hall,
1996)
6. State variable descriptíons in control systems are discussed in:
• Chen, C. T., Linear System TIJeory and Design (Holt, Rinehart, and Winston, 1984)
144 CHAPTER 2 • TIJ\ill::'.-DOMAIN REPRESENTATIONS FOR LINEAR TIME-INVARIANT SYSTEMS
1PROBL~~S
2. 1 A discrete-tíme LTI system has the impulse re- (k) y[nl = (u[n + 10] - 2u[n + 51 + uln - 6})
sponse l1[11J depicted in Fig. P2.1 (a). Use linear- :1< cos( ½í'Tn)
ity and time invariance to determine the system {l) = u[n] * ~p=oô[n - 2p]
y[n]
output y[,i] if the input x[ n] is: (m) y[n] = j3'1u[n] * Lp oô[n - 2pl, 1J31 < 1
(a) x[n] = 2o[n] - 8[n - 1] (n) yf nl = u[n - 2] * h[n], where hfn] =
(b) x[n] = u[n] - u[n - 31 '}'n, n < O, YI > 1 1
x[n] y[n]
1
1 1
1
'
' ; " ,.' ,. n •' n
-4 -1
-1 - ~
z[n]
2 ~
l l
wfn]
1 3- ~
• ' ' ,n
,.
' 2- ~
2 3
l
f[n]
-0-~,--o--+--+--+-~,-1-.---<>-~-()-T-(>-•--(>---n
2- -
-3 3
1- f--
-4
.
; ;
' '
..,' n
' 4
-2- ...
'
g[n]
-4- f--
-<>----..;>--t-,--+--+--+--+--<>--~-<>-+-+--f--~1--<>-<>-- n
-5 5
FIGURE Pl.3
y(t)
x(t)
2 ....
1
l --
-----+------ t -.-----+---+--_...,_ _ t
-l 1 1 2 3 4
z(t) w(t)
1
1- 1
-,,,.., -1 1 2
'
' ' '
t t
1 ·2
-1 -1- -
/(t) g(t)
1 e-t 1
_...,.__ __..,_ _ _ _ t -1
l l
-)
b(t) c(t)
_ _ _ _"'-1
1
5
1----
-1 2
--~-"i'---+---t---31---t ----+-----+---+---+-- t
-2 -l . l 5 l
--
2
-1
d(t) a(t)
l .
••• •••
•• • .. .
-----,---------+:-- t t
1 2 3 4 -2 -1 1 2 3
FIGURE P2.6
~~·.----
·j; ...
(a)
"·'· • 1. ••
..
x[nJ _,...., . h 1[n] ----.
• h fn] ___:'.j
.: ......... •$>'
(b)
(e)
FIGURE P2.9
(a) Express the system output y(t) as a function (b) h(t) = h1(t) * h2(t) + h3 (t) * h4 (t)
of the input x(t). (e) h(t) = h1(t) * {h2(t) + h_-,,(t) + h 4 (t)}
(b) Identify the mathematical operation per- 2 .11 An interconnection of LTI systems is depicted
formed by this system in the limít as ~---'), O. in Fig. P2.11. The impulse responses are h1 [n]
(e) Let g(t) = lim,l_.0 h(t). Use the results of = (½)n(u(n + 21 - u[n - 3]), h 2 [n] = 8[n], and
(b} to express the output of a LTI system h3 [n] = u[n - 1]. Let the impulse response of
with impulse response the overall system from x[ n] to yl n] be denoted
hn(t) = g(t) * g(t) * · · · * g(t) as h[n].
n times (a) Express h[n] in terms of h 1 ln], h2 [n], and
as a function of the input x(t). h3 [nJ.
2.9 Find the expression for the impulse response re- (b) Evaluate h[n] using the results of (a).
lating the input xf nl or x(t) to the output yf nl ln parts (c)-{e) determine whether the system
or y(t) in terms of the impulse response of each corresponding to each impulse response is (i)
subsystem for the LTI systems depicted in: stable, (ii) causal, and (iii) memoryless.
(a) Fig. P2.9(a) (e) h 1 [n]
(b) Fig. P2.9(b)
{e) Fig. P2.9(c)
2.10 Let h 1(t), h 2 (t), h3 (t), and h 4 (t) be impulse re-
sponses of LTI systems. Construct a system with
impulse response h(t) using h 1(t), h 2 (t), h 3 (t),
and h 4 (t) as subsystems. Draw the interconnec-
tion of systems required to obtain:
(a) h(t) = h 1(t) + {h 2 (t) + h3 (t)} * h4(t) FIGURE P2.l 1
148 CHAPTER 2 • nME-DOMAIN REPRESENTATIONS FOR LINEAR TIME-INVARIANT SYS1'EMS
(e) y[n] =
-!yín - 1] - ½y[n - 2] = x[n] + (ií) x[n] = (})nu[n]
xln - 1], y[-1] = O, y[-2] = 1 (iii) xf n] = ei<rrt4 )nufn]
(d) y[n] + {6 y[n - 2] = xln - 11, y[-1] = 1, (iv) x[n] = (l)nu[n]
y[-2] = -1 (d) y[n] + y[n - 1] + ½yln - 2] = x[n] +
2x[n - 1}
(e) y[n] + y[n - 1] + ½y[n - 2J = x[n] +
2x[n - 1], y[-1] = -1,y[-21 = 1 . (i} x[n] = u[11]
2.19 Determine the forced response for the systems (ií) x[n] = (-½)»u[nl
described by the following differential equa- 2.21 Determine the output of the systems described
tions for the given inputs: by the following differential equations with in-
put and initial conditions as specified:
d
(a) 5 dt y(t) + 10y(t) = 2x(t) d
(a) dt y(t) + 10y(t} = 2x(t), y(O) = 1,
(i} x(t) = 2u(t)
x(t) = u(t}
(ii) x(t) = e- 1u(t)
(iii) x(t) = cos(3t)u(t} d2 d d
(b) dt 2 y(t) + 5 dt y(t) + 4y(t) = dt x(t),
d2 d d
(b) dt 2 y(t) + 5 dt y(t) + 6y(t} = 2x(t) + dt x(t) d
y(O) = O, dt y(t) = 1, .x-(f) = e 2
tu(t)
(i) x(t) = -2u(t) t=O
(iv) x(t) = e-'u(t) 2.22 Determine the output of the systems described
by the following difference equations with input
d2 d d
(d) dt 2 y(t) + 2 dt y(t) + y(t) = dt x(t} and initial conditions as specified:
(a) y[n] - 2l y[n - 1] = 2x[n], y[-1} = 3,
(i) x(t) = e- 'u(t)
3
x[n] = 2(-½)nu[n]
(ii) x(t} = 2e- 1u(t)
(b) y[n] - ¼y[n - 21 = x[n - l], y[-11 = 1,
(iii) x(t) = 2 sin(t)u(t) y[-2] = O, x[n] = u[nl
2.20 Determine the forced response for the systems
described by the following difference equations (e) y[n] - Jy[n - 1] - ½y[n - 21 =
x[n] +
for the given inputs: x[n - 1], y[-1] = 2, y[-2] = -1,
x[nl = 2 11u[n]
(a) y[n] - ~yín - 1] = 2x[n]
(i) x [n] = 2u [ n] (d) y[n] - + !y[n - 2] =
¾y[n - 1]
(ii) x[n] = -(½)nu[nl 2x[n], y[-1] = 1, y[-2] = -1, xln] =
2u[n]
(iii) x[n] = cos(½1rn)ul11]
9 2.23 Find difference-equation descriptions for the
(b) y[1tJ - 16y[11 - 21 = x[n - 1]
four systems depicted in Fig. P2.23.
(i) x[n) = u[n} 2.24 Draw direct form I and direcr form li im-
(ii) x[n] = -(½)11u[n] plementations for the following difference
.
(iií) x[n] = (¾) u[nl 11
equat1ons:
(e) y[n] - ¼y[n - 1] - !y[n - 2] = x[nj + (a) y[n} - lyln - 1] = 2x[n}
x[n - 1] (b) y[n] + ¼y[n - 1] - ½yln - 2] = x[n} +
(i) x[n] = -2u[n] x[n - 1]
150 CHAPTER 2 • TtME-D01\IAIN REPRESENTATIONS FOR LINEAR Til\tE-INVARl,\.JU SYSTEI\IS
--;.-v[n]
, t ::
7
~
J
(a)
-2 ..............,
(a) x(t) ··"'f. "11 • y(t)
x[nl I:
••::.:>,
s •· ~
..
-: -:•:
s
....
• yfnl
tf·
t • J t • J 2
--4l l
4
(b)
,2 (b)
x[nl 1• s l
.. :;[ y[n] -•
...
•·J ...............
l
,.. .
--2l ;
.i x(t} -2 -y(t)
-?
- -1
4 ~·s- '11--J
(e)
-3
-21 (e)
• .. s P2.27
l s s
l FIGURE
}:= l
x[nJ •· l: .. ylnJ
t • •
3
--8 l
x[n] l .... I: s
l
.. lo;. y[nl
(d)
t •
]
'
FIGURE Pl.23 -2
(a)
d2 d
(b) dt 2 y(t) + 5 dt y(t) + 4y(t)
d
= dt x(t) -
? -1
xln1-:t S r • E- s
(e)
d2
dt1. y(t) + y(t) = 3 dt x(t)
d "'. ~
l l
-~-y[nl
4
3
d d d
(d) dt 3 y(t) + 2 dt y(t) + 3y(t) = x(t) + 3 dt x(t) 1
6
(d)
2.27 Find differential-equation descriptions for rhe
three systems depicted in Fig. P2.27. FIGURE P2.28
Problems 151
(a) A=
O -½ , b=
2
, e= [1 -1],
e= [O -1], D= [O]
-.1 o
1
0 1 -1 o
D= [O] (e) A= O -1 ' b = 5 '
1 _l1 1 e= [1 O], D= [O]
(h) A= b= e= [1 -1],
13 o ' 2 '
(d) A= 1 -2 b = 2
D= [O] 1 1 , 3 '
O -½ b = O e = [1 1 ], D= [O]
(e) A = • -1 ' 1 , 2.32 Let a discrete-time system have the state-
3
e= [1 O], D= [1] variable description
O O 2 1 --21 1
(d) A= b = A= h=
O 1 ' 3 ' - o '
1
3 2 '
e= [1 -1], D= [O] e= [1 -1], D= [O]
2.30 Deter1nine a state-variable description for the (a) Define new states q; [n] = 2q1 lnl, qíln l
five continuous-time systems dcpicted in Fig. 3q2 [n]. Find the new state-variahle descrip-
P2.30. . A' , b' , e ' , D' .
t1on
2.31 Draw block diagram system representations (b) Define new states qi[nl = 3q2[n], qílnl =
corresponding to the following continuous-time 2q 1 [n ]. Find the new state-variable descrip-
state-variable descriprions: tion A', b', e', D'.
3 x(t)
•
J f½ t2
x(t) l • l: J • •E
2 y{t)
f
l ~ • • f y(t)
l
- L,
t •
-1
~ •
(a)
-2 3
(b)
3
2
• { J',
x(t)
1€.\__f - I: f • ..l;
. w;.
- y(t)
.
-2
t •4 f '
-1
•-3
(e)
R
+ y(t) -
L
-
y(t)
R
x(t) t e e L
(d) (e)
FIGURE P2.30
152 CHAPTER 2 • TIME•DOMAIN REPRESENTATIONS FOR LINEAR TIME-INVARIANT SYSTEMS
(e) Define new states q;[n] = q 1 [n] + q2 [n], pression for the system output derived in (b)
qí[nl = q1[nl - q2 [n]. Find the new state- reduces to x(t) * h(t) in the limit as â goes
variable description A', b', e'; D'. to zero.
2.33 Consider the continuous-time system depicted
in Fig. P2.33.
gt:,(t)
{a) Find the state variable description for this
system assuming the states q 1(t) and q2 (t) 1/.ó.
are as labeled.
(b) Define new states q;(t) = q 1(t) - q2(t),
qí(t) = 2q 1 (t). Find the new state-variable ' t
-t::./2 D,.12
description A', b', e', D'.
(a)
(e) Draw a block diagram corresponding to the
new state-variable descriptiort in (b).
x(t)
(d) Define new states qi(t) = (l/b 1)q1(t), q2(t)
= b2q1 (t) - h1q2 (t). Find the riew state-vari-
able description A', b', e', D'.
(e) Draw a block díagram corresponding to the
new state-variable description in (d).
x<t>
x(-t::.)
x(O)
~t,
x(A)
Í .. y(t) x(2/l)
-.L....---J_--l----l----i.~-+---1,....!!~--- t
-.1.
(b)
FIGURE P2.34
FIGURE P2.33
)~2.35 ln this problem we use linearity, time invari-
ance, and representation of an impulse as the
*2.34 We may develop the convolution integral using limiting form of a pulse to obtain the impulse
linearity, time invariance, and the limiting form response of a simple RC circuit. The voltage
of a stairstep approximation to the input signal. across the capacitor, y(t), in the RC circuit of
Define gll.(t) as the unir area rectangular pulse Fig. P2.35{a) in response to an applied voltage
depicted in Fig. P2.34(a). x(t) = u(t) is given by
(a) A stairstep approximation to a signal x{t) is s(t) = {1 - e- ttRc}u(t)
depicted in Fig. P2.34(b). Express x(t) as a
weighted sum of shifted pulses g~(t). Does (See Drill Problems 2.8 and 2.12.) We wish to
the approximation quality improve as ~ find the impulse response of the system relating
decreases? the input voltage x(t) to the voltage across the
(b) Ler the response of a LTI system to an input capacitor y(t).
gt:..(t) be ha(t). If the input to .this system is (a) Write the pulse input x(t) = gt:..{t) depicted
.x{t), find an expression for the output of this in Fig. P2.35(b) as a weighted sum of step
system in terms of ht:..(t). functions.
(e) ln the limit as ti goes to zero; g,i(t) satisfies (b} Use linearity, time invariance, and knowl-
the properties of an impulse. and we may edge of the step response of this circuit to
interpret h(t) = lima-oha(t) as the impulse express the output of the circuit in response
response of the system. Show that the ex- to the input x(t) = g 6 (t) in terms of s(t).
Problenis 153
(e) ln the limit as A ~ O the pulse input g~(t) (iii) x(t) = u(t) - 2u(t - l) + u(t - 2)
approaches an impulse. Obtain the impulse (iv) x(t) = u(t - a) - u(t - a - 1)
response of the circuit by taking the limitas
(e) Show that rx,,(t) = r,,x(-t).
~~O of thc c>utpt1t obtained i11 (b). Hint:
Use the definition of the derivative (f) Show that r,..,..(t) = rxx(-t).
.l .:i
zt+- -zt--
d 2 2
- z(t) = lim - - - - - - - -
dt ~. . . o ~
• Computer Experiments
to a negligible value, y[n] is due only to the input 2.46 Use the MATLAB command s s 2 s s to solve
and we have y[n] ""' H(ei!2)e;nn. Problem 2.32.
(a) Determine the value n for which each term
0 2.47 A system has the state-variable description
in the natural response of the system in Ex-
ample 2 .16 is a factor of 1000 smaller than -l
2 --2l 1
A= b=
its value at time n = O. --~ o '
l
2 '
(b) Show that I H(ei0 ) 1 = 1y[no] I. · e= [1 -1], D= [O]
(e) Use the results in (a) and (b) to experimen-
tally determine the magnitude response of (a) Use the MATLAB commands L sim and
this system with the MATLAB command impulse to determine the first 30 values
f i l ter. Plot the magnitude response for of the step and impulse responses of this
n
input frequencies in the range - 7T < s; 7T. system.
2.45 Use the MATLAB command i mp z to determine (b) Define new states q 1 [n] = q 1 [n] + q2 [n] and
the first 30 values of the impulse response for q 2 [n] = 2q 1[n] - q 2 [nJ. Repeat part (a) for
the systems described in Problem 2.22. the transformed system.
Fourier Representations for Signals
··•:· .. ,..
...
;:.,.:~. . ,
~,··
. .:,\ .~d ·,, .
~: •>
·*'
....,.,,.. . ,.· . >'
: ..'
3.1 Introduction
ln this chapter we consider representing a signal as a weighted superpositic,n of con1plex
sinusoids. If such a signal is applied to a linear system, then the system <>utpttt is a weighted
superposition of the system response to cach complex sinusoid. A similar application <>f
the Jinearity property was exploited in the previous chapter to develop the conv<.>lution
integral and convolution sum. ln Chapter 2, the input signal was expressed as a weighted
superposition of time-shifted impulses; the output was then given by a weighted super-
position of time-shifted versions <>Í che sysrem's impulse response. The expressíon f()f the
output that resulted from expressing signals in terms of impulses ,vas termed ''cc>nvolu-
ti<>n.'' By rcpresenting signals in terms of sinusoids, we \vill obtaín an alternative expression
for the input-output behavior of a LTI system.
Representatit)n of signals as superpositions of complex sinusoids not only leads to a
useful expression for the system output but also provides a very insightful characrerization
of signals and systems. The focus of this chapter is representation of sígnals using complex
sinusoids and the properties of such representations. Applications t>f these representations
to system and sígnal analysis are emphasized in the ft)llowing chapter.
Thc srudy <1Í signals and systems using sinusoidal representations is termed Fourier
analysís after J<>seph Fc>urier (1768-1830) for his contributions to the cheory {>f reprc-
senting functions as weighted superpc)sitions <>Í sinus<>ids. Fourier methods have widc-
spread applicati<>n beyond síg11aJs and systems; they are used in every branch of engineering
and science.
The sinusc)idal steady-state respc>11se of a L TI systen1 was intr<>duced ir1 Secti{>n 2.3. W'e
showed that a complex sinusoid input to a LTI system ge11erates an outpt1t eqt1al to the
sinusoidal input multiplied by the system frequency response. That is, in discrete time, the
input x[nl = eiihi results in the output
y[nl = H(e;11)eifl11
where the frequency respc)nse H(eiº) is defined in terms of the impt1lse response h[11J as
X
H(ei11 ) = I h[kJe-i!!k
k :e - ""
156 CHAPTER 3 • f OlJRIER REPRESENTATIONS FOR SIGNALS
We say that che complex sinusoid lj,(t) = eit.,n is an eigenfunction of the system H
associated with the eigenvalue À = H( jw) beca use it satisfies an eigenvalue pr<>blen1 de-
scribed by
H{lf,(t)} = Alf,(t)
This eigenrelation is illustrated in Fig. 3 .1. The effect of the system on an eigenfunction
input signal is one of sca1ar multiplication-the output is given by rhe product of the input
anda compJex number. This eigenrelation is analogous te> the more familiar macrix eiger1-
value prc>blem. If ck is an eigenvector of a matrix A with cigenvalue Ak, then we have
Aek = Àkek
x(t) = L akeicokt
k=l
If eiwkt is an eigenfunccion of the system with eige11value H( jwk), then each term in the
input, akeiwkt, produces an output term, akH( iwk)eiwkt. Hence we express rhe output of the
system as
M
y(t) = L akH( jwk)eiwkt
k=I
The <>utput is a weíghted sum of M complex sinusoids, with the weights, a1.,, modi.fied by
the system frequency response, H( jwk). The operation of convolution, h(t) * x(t}, becomes
multiplication, akH( jwk}, because x(t) is expressed as a sum c)f eigenfunccions. The ana1-
ogous rclationship holds in the discrete-time case.
This property is a powerful motivatic)n for representing signals as weighted super-
positions of complex sinusoids. ln addition, the weights provide an alternative interpre-
tarion of thc signal. Rather than describing the signal behavior as a function of time, the
fIGlJRE: 3.1 Jllustratíc>n of the cigenfunctíon pro1lcrty of linear systems. The action of the
system on an eigcnft1nction input is one of multiplication by the corresponding eigenvalue.
(a) General cigenft1ncti<>n iJ,(t) or it,[n J anel eigenvalue À. (b) C<>mplex sinusoid eigenfunction e_;,.,,,
and eigenvalue H(jú>). (e) Cornple.'!í sjnusoid eigenfunctjon e.iíh, and eigenvalue H(ei!l),
3.1 lntroduction 157
weights describe the signal as a function of frequency. The general notion (>f describing
complicated signals as a function of frequency is commonly encountered in music. For
example, the musical score for an orchestra contains parts for instruments having different
frequency ranges, such as a string bass, which produces very low frequency sound, and a
piccolo, which produces very high frequency sound. The sound that we hear when listeníng
t<> an orchestra represents the superposition (JÍ sounds generated by each instrument. Sim-
ilarly, the score fc>r a choir contains bass, tenor, alto, and soprano parts, each <>f which
contributes to a different frequency range in the overall sound. The signal representations
developed in this chapter can l)e viewed analogously: the weight associated with a si11usoid
of a given frequency represents rhe contributíon of that sinusoid t(> the overall sígnal. A
frequency-do1nain view of signals is very informative, as we shall see in what foll<)ws.
There are four distinct Fourier representations, each applicable to a different class <>f sig-
nals. These four classes are defined by the peri<><lícity properties of a signal and whether
it is continuous or discrete time. Periodic signals have Fourier series represcntations. The
Fc>urier series (FS) applies to C<)ntinuous-time periodic signals and the discrete-time Fourier
series (DTFS) applies to discrete-time periodic signals. N()nperiodic signals have Fourier
transform representations. If the signal is continuous time and nonperiodic, the represen-
tation is termed the Fourier transform (FT). If the signal is discrete time and nonperiodic,
then the discrete-time Fourier transform (DTFT) is used. Table 3.1 illustrates the relatic>n-
ship between the time properties of a signal and the appropriate Fourier representation.
The DTFS is often referred to as. the discrete Fourier transform or DFT; however, this
termínc)l<1gy does not correctly reflect the series nature of the DTFS and often leads to
cc>nfusion with the DTFT S<J we adopt the mc>re descriptive DTFS terminc>logy.
e
o
n
t
.
l Fourier Series Fourier Transf<>rm
n (FS) (FT)
u
{)
u
s
1)
1
s
e Discrete-Timc l-'ourier Series Discrete-Time Fourier T ransform
r (DTFS} (DTt'T)
e
;
'
158 CHAPTER 3 • fOlJRll:'.R REPRESENTATIONS FOR StGNALS
where 0 0 = 21TIN is the fundamental frequency of x[n]. The frequency of the kth sinusoid
in the superposition is kfi0 • Similarly, if x(t} is a continuous-time signal of fundamental
period T, we represent x(t) by the FS
x(t) = L A[k]eikwut (3.2)
k
where w 0 = 211'/T is the fundamental frequency of x(t). Here the frequency of the kth
sinusoid is kw 0 • ln both Eqs. (3.1) and (3.2), Alk] is the weight applied to the kth complex
sinusoid and the hat ~ denotes approximate value, since we do not yet assume that either
x[n] or x(t) can be represented exactly by a series of this form.
How many terms and weights should we use in each sum? Beginning with the DTFS
described in Eq. (3.1), the answer to this question becomes apparent if we recai! that
complex sinusoids with distinct frequencies are not always distinct. ln particular, the com-
plex sinusoids eikfl,,n are N periodic in the frequency índex k. We have
ei(N+k)!l 0 n = ejNn0 ne;kn0 n
= eí2-rr•reik!i0 n
= eikíl n 0
Thus there are only N distinct complex sinusoids of the form eikílºn. A unique set of N
dístinct complex sinus()ids is obtained by letting the frequency índex k take on any N
consecurive values. Hence we may rewrite Eq. (3.1) as
x[ n] = ~ A[k]eik!lon (3.3)
k=(N>
where the notation k = (N) ímplies letting k range over any N consecutive values. The set
of N consecutive values <>ver which k varies is arbítrary and is usually chosen to simplify
the problem by exploiting symmetries in the signal x[n]. Common choices are k = O to
N - 1 and, for N even, k = -N/2 to N/2 - 1.
ln order to determine the weights or coefficients A[k], we shall minimize the mean-
squared error (MSE) between the signal and its series representatíon. The construction of
the series representation ensures that both the signal and the representation are periodic
with the sarne period. Hence the MSE is the average of the squared difference between the
signal and its representation over any one period. ln the discrete-time case only N consec-
utive values of x[n] and x[n] are required since both are N periodic. We have
where we agaín use the notation n = (N) to indicate summation over any N consecutive
values. We leave the interval for evaluating the MSE unspecified since it will later prove
convenient to choose different intervals in different problems.
3.1 l1itroduction 159
. rk,,n =
.
I
n=(l\J)
<Pk[111 <P ,:[n]
Nc>te tl1at the i1111er product is defined using complex conjugatic>n when the signals are
cc>1nplex value(i. If lk.,n = O fc,r k -=!= m, then <Pk[n] and q>,,1 [11] are <>rthogonal. c:cJrrespond-
íngly, Í(>f co11tinuc)us-time signals with period T> rhe inner product is defined in terms of
an integraJ, as sho,vn by
.lk,,n = J (T)
<Pk(t}<f>:! (t) dt
1
where the r1c>tation (T) in1plies integration over any interval <)f length T. As in discrete
time, if l 1l,,,, = O for k -=!= r11, chen we say <Pk (t) and tj), 11 (t) are c>rthogonal.
Begínning witl1 the discrete-cime case, let <Pklnl = eikíl"11 be a complex sinusoid with
frequency k!1 Choosing the interval n = O to n = N - 1, the inner product is given L1y
0
•
N-1
Ik,rn = ""'
L.J
ei!k-1n)!t0 11
11=0
160 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS
Assuming k and m are restricted to the sarne interval of N consecutive values, rhis is a
finite geometric series whose sum depends on whether k = m or k * m, as shown by
N-1 N, k=m
L ei(k-m)non ==
1 _ eik2Tr
n=O
1 - e;kno' ki=m
1 = (T ei<k-m)wc,t dt
k,m )
0
T, k=m
ki=m
Using the fact ei<k-m)w,.,T = ei(k-m)2 = 1, we obtain
'TT
T, k =m
(3.7)
O k-:f=m
This pr<)perty is central to determining the FS coefficients.
where !1 = 21r/N.
0 .
ln order to choose the DTFS coefficients A[k], we now minimize the MSE defined in
Eq. (3.4 ), rewritten as
1
MSE = - L x[nJ - .L A[kJejk!l 0
•
1
x[ t1_I - L A 1111Jei11112'' 11
N ,1=(.~) k=<N) n1=(N>
- L k=(N)
A[k]
Define
and apply the <)rthogonality property <>f discrete-time complex sinusoids, Eq. {3.6), to the
last term in the MSE. Hence we may write the MSE as
MSE = t n~N)
2
]xlnll - k~N) A'~[k]Xlkl - ki) Alk]X'~[k] + k¾.,i IA1kll
2
Now use the technic.1ue of ''cc>mpleting the square'' to write the MSE as a perfect
square in the DTFS coefficients AlkJ. Add and sul)tract ~k=<N> 1X[kj 12 t<) rhe right-hand
side of the MSE> so that it may he written as
- L k=<N)
IX!kl 12
Rewrite the middle sum as a square t<) <)btain
MSE = .!_
N
L
liº (N}
lx[n]l 2 + L
k=(N)
IAlkl - Xlkll 2 - L
k=<N)
IX[k]l2 {3.9)
The depende11ce of the MSE on the unkn<>wn DTFS coefficients A[kj is confi11ed t<) rhe
middle term <>f Eq. (3.9), and rhis term is always nonnegative. Hence the MSE is minimized
by forcing the middle term t<> zero \Vith the choice
A[kl = X[k]
These coef.ficients mi11imize the MSE l-,erween xf ,zl a11d x[11J.
Note that XI k] is N periodic ín k, si11ce
N 11=<-"N>
162 CHAPTER 3 lll f OlJRIE.R REPRF.:SENTATIONS FOR SIGNALS
We next substitute Eq. (3.8).into the second ter1n of Eq. (3.10) to obtain
. 1 .
L IX[k]l 2 = L L L x[n]x*[m]e 11111
-n)fl0 k
k=(N) . k=(N) N2 n=(N) m=(N)
(3.11)
Equation (.3.11) is simplified by recalling that e;,n!l.,k and ein!l,,k are orthogonal. Referring
to Eq. (3.6), we have
n = m
- L
1 .
e'(,n-nHlc,k =
1
'
N k=(l\l} O, n =fa m
This redttces the doub)e sum c>ver m and n <>11 the right-hand side of Eq. (3.11) to the single
sum
Substituting this resuJr into Eq. (3.1 O) gives MSE = O. That is, if rhe DTFS Cí>efficients are
givcn by Eq. (3.8), then the MSE between x[n] and xlnl is zero. Since the MSE is zero, the
err<>r is zero for each value of n a11d thus xlnl = x[n].
from N values of X[k] we may determine xlnl using Eq. (3.12), and from N values of
xfnl we may determine X[k] using Eq. (3.13). Either X[kJ or x[n] provides a complete
description of the signal. We shall see that in some pr<Jblems it is advantageous to represent
the signal using its time values x[n l, while in <>thers the DTFS coefficients X[kj <Jffer a
3.2 Discrete-Time Periodic Signals: The Discrete-Titne Fourier Series 163
more convenient description of the signal. The DTFS coefficient representation is also
known as a frequency-domain representation because each DTFS coefficient is associated
with a complex sinusoíd of a different frequency.
Before presenting several examples illustrati11g the DTFS, we remind the reader that
the starting values of the índices k and n in Eqs. (3.12} and (3.13) are arbitrary beca use
both x[n] and X[k] are N periodic. The range for the índices may thus be chosen to simplify
the problem at hand.
.. . . .,. . ..·4 -t·:... •• > ' • ·~.., :·:· •• ·- .~.. •><'><llt: , ,_ _, . ...,.. "·· ·:·· ..,,...... ,.,:.,. .,•;,.:~
,,,: ,.
x[n) = 2
..., (3.14)
and compare this to the DTFS of Eq. (3.12) written using a starting index k = -7
8
x[n) = L X[k]eik(rrl8)n (3.15)
k=-7
Equating the terms in Eq. (3.14) and Eq. (3.15) having equal frequencies, k1rl8, gives
12e-;tf,, k= -1
DTFS; 21r/l6 X[k] =
x[n).,_____ leitl>
2 '
k= 1
O, -7 < k :S;. 8 and k =!:- ±1
Since X[k) has period N = 16, we have X[15] = X[31] = · · · = ½e-;,t, and similarly X[l 7] =
X[33] = · · · = fei<f> wíth ali other values of X[k] equal to zero. Plots of the magnitude a11d
phase of X[k] are depicted in Fig. 3.2.
ln general it is easíest to determine the DTFS coefficients by inspection when the signal
consists of a sum of sinusoids. ....
·
• · . ·,; ·. ·..·
•.~·. , •~ - •; ~
··:!"~':· · · ,mr·:.··
.,,.-,.. • •• • ...,.• • •·· · ···· • , .•.,•...
......, .. •,ri;..;~,~ ..~•x~ • ~'"· ,._,,.. ........::,, .........:
1X[k] 1
1/2 ~
... ...
-----<>-o •~ - ~ - ~ "'!" ~ - - ~ -- - -- -- - o-k
-20 -10 10 20 30
arg{ X[k] 1
4>
... •• •
, . - - - - . -
<
'
k
-20 -10 10 20 30
'
. . . -4>
FIGlJRI:'. 3.2 l\:lagnitu<le and phase of DTFS coefficie11Ls for Example 3.1.
164 CHAPTER 3 • FOlJRIER REPRESE.NTATIONS FOR SIGNALS
The magnitude of X[k], IX[kJI, is known as the magnitude spectrum of x[n]. Simi-
larly, the phase of Xf kl, arg{X[k}}, is known as the phase spectrum of x[n}. ln the previous
example ali the components of x[n] are concentrated at two frequencies, 0 0 (k = 1) and
-nº (k = -1).
• Drill Problem 3.1 Determine the DTFS cc>efficients by inspection for the signal
1 37T
x[n] = 1 + sin 12 7T n + 8
Answer:
e-i(3,.,,1s)
k= -1
2j '
DTFS; 2-rr/24 1, k=O
x[n} X[k} ei(J,.,,18)
k = l
2j '
o, otherwise on -11 s k s 12 •
The next example directly evaluates Eq. (3.13) to determine the DTFS coefficients.
EXAMPLE 3.2 Find the DTFS coefficients for the N periodíc square wave depicted in
Fig. 3.3.
Solution: The period is N, so fl = 2TTIN. It is convenient in this case to evaluate Eq. (3.13)
0
1
·.'·. <•
=- IM .
e-,kOon
N n=-M
x[nl
r J •>
'
••• ••• ... • •• ••• • ••
'. . .
. ,., - ,.. ·:: . :
'· ..
•,:,;,~:·:· .,:,i_::.:,,._
..-;.; '··' .· ,.
which may be rewritren as
.
1 eik!l0 (2M + 1 )12 1 _ e-ik!l0 (2M+1)
X[k] =N ejkfi</2 . 1 - e-ikfl,,
..,,
eikfi.,(2,'\,f + t )/2 _ e-ik0(1(2M+1)/2
..
: f
.. =-
1
N eikfi,/ :!. _ e- ;kfl,/2 , k * O, ±N, +2N, ...
At this point we may divide the ~umerator and denomínator by 2j to express X[k] as a ratio
of two sine functions, as shown by
. k
sin !1 (2M + 1 ).
2
X[k] ~ ~------, k * O, -:!:N, ±2N, ...
. . ... • k fiº
: .,
•
·-~-
/ .. s1n
. .
2
-~··
sin k~ (2M + 1)
X[k] = -h-----,
!!.
k * O, ±N, -:±:2N, ...
s1n k
•
The technique used here to write the finite geometric sum expression for X[k] as a ratio of
sine functic>ns involves symmetrizing both the numerator, 1 - e-ik0,,(2 ,\.1+ 1 >, and denominator,
1 - e-ik!iº, with the appropriate power of eik110• Now, for k = O, ±N, -:±:2N, ... , we have
1 M
X[k) =- L 1
f ., N m=--M
..
. 2M + 1
,;,':.. N
.
~.:). :. . ~: ··li{:: .. /.'; .\"
!
. ,.;.~
. ~ . . . .;
: . .
\,
1
•
s1n k; (2M + 1)
.. -~------, k =fo. O, + N, ±2N, ...
N • k 'lT
s1n
X(k] = N
2M + 1
k == O, ±N, ±2N, ...
N '
Using L'Hopital's rule, it is easy to show that
1T
k N (2M + 1
•
Slll
1 2M + 1
lim - --------
k-o, + N,:!:2N.... N . N
s1n k~
N
....,,... . ('::
166 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS
0.2 ...------~-----,.----,.----.----.----~
0.15
0.1
X[k]
0.05 ·
0.5 ,----,,--------,.-----.-----0-----.------,---,----0
0.4 . .
0.3
0.2
X[k]
0.1
-0.l
FIGURE 3.4 The DTFS coefficients for a square wave: (a) 1\1 = 4 and (b) J\1 = 12.
. .. .
.·•
X[k] = N - - - - -
:. . ..
sin k ~
ln this form it is understood that the value X[k] for k = O, ±N, :!:2N, ... is obtaíned from
the limitas k ~O.A plot of two periods of X[k] as a function of k is depicted in Fig. 3.4 for
M = 4 and M = 12 assuming N = 50. Note that in this example X[k] is real; hence the
magnitude spectrum is the absolute value of X[k] and the phase spectrum is O when X[k] is
positive and 1r when X[k] is negative.
' .. ·'.
3.2 Discrete-Time Periodic Signals: The Discrete-Time Fourier Series 167
x{n]
2 !'
/
• Drill Problem 3.2 Determine the DTFS coefficients for the periodic signal depictcd
inFig.3.5.
Answer:
l)TFS; lrr/6 X[kl 1 2 k 7r
l l ---- = -6 + -3 COS -3
X 11
•
Each term in the DTFS c>f Eq. (3.12) associated \-Vith a nonzer<> coefficient X[k]
contributes to the represenrati<>n of rhe signal. We now examine this rcpresentation by
considering the contribution of each term for the square wave in Example 3.2. ln this
example the DTFS coefficíents have even symmetry, Xf k] = XJ-k], and we may rewrite
the DTFS of Eq. (3.12) as a series involving harmonically related cosines. General cc>ndi-
tions under which the DTFS coefficicnts have even or <>dd symmctry are discussed in
Section 3.6. Assume for convenience that N is even so that N/2 is integer and let k range
frorn - N/2 + 1 to N/2, and thus write
N/2
x[n] = I xr k]eikíl<>n
k:c-N/l+I
N/2-l
= X[Ol + L (X[m]eini{}()n + X[-m]e-i•nil,,11) + X[N/2.lei(Nll)il,.n
= XfOJ + L
171= 1
2X[m] cos(míl n) + X[N/2] cos(1rn) 0
where we have also used ei7Tn = cc>s( 1rn). If we define the new sct of coefficients
X[kj, k = O, N/2
Blkl =
2X[k], k = l, 2, ... , N/2 - 1
then wc may write the DTr'S in terms of a series of harmonically related cosines as
N/2
x[n] =I Blk) cos(kil n) 0
k=O
{' ·JIC· .·>1,;: • ••• i,. ·* . ,; ....;; ·~ ·,.. . ,. . ·<1: .Jt ·:,. . . ...,·. i> ;••• • • •••• ."l· ;;., ••
where J s N/2. This approximarion contains the first 2J + 1 terms centered on k = O in Eq.
(3.12). Evaluate one period of the Jth term in the sum and x1(n] for J = 1, 3, 5, 23, and 25,
assumíng N = 50 and M == 12 for the square wave in Example 3.2. ..•
168 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS
. ..
.: "j. . .,.-~:;;,. ·~1;~~:,. /{{:: \Y,~: ;.. ,;i ..:; ;:. ..
Solution: Figure 3.6 depicts the Jth term in the sum, BU] cos(Jfi n), and one period of x/[n] 0
for the specífied values of ]. Only odd values for J are considered because the even indexed
coefficients B[k] are zero. Note that the approximation improves as J increases, with exact
representation of x[n] when J = N/2 = 25. ln general, the coefficients B[k] associated with
values of k near zero represent the low-frequency or slowly varying features in the signal,
while the coefficients associated with the values of k near ±.N/2 represent the high-frequency
or rapidly varyíng features in the signal.
l ...----,-----,----.-!--.,---...-
,--,,----,-----.--.,----,
..-..
l::~ -
cf
'-'
:ll m·-r-··••,-
.....8
-
~ -O.'.,___[_·_·___1.___......i _ 1_
____.._ _...... ___._ _ _ _ _____,__1_ _ _ ...._i_ ' _ _ _ _ , •
l -
~
;:
~
(~- 0.5 ..
o
-0.5 ....._____________......___...,_____~---------~-'1
-25 -20 -15 -10 -5 o 5 10 15 20 25
n
(a)
l ,-···-·--······. -·····-·"·~--,-------,---,..........-...-----.----,...---.
! i i
a~ o.s l,
§
,-,
0 tii-001,! !!!Aºo?f ttfj~TI!r-9 yf i 2-,A!!!!A-.-()--L-L-Y rr
~ -0.5 . . .
-1 ,________________l_._----L____ !__ -~···-·-··--L.._.__ .-t....___L_. __~
-25 -20 -15 -10 -5 O 5 10 15 20 25
n
l .5 1 ! ; í - ' í 1
....
1 t- > >
,
- -
o12..00-~ 0i_0 J tJ_ ......... ·-~ -~ ......... -~ - ~
_l11 º6bbbõ -~ >
1
1 i i i ! ; 1
-0.5 ª i ;
FIGURE 3.6 Individual terms in the DTFS expansion f<)r a square \-Vave (top panei) and the cor-
respc,11cling partia) SLlm approximations x1 [1i] (bottom panei). 1·he J = O term is x0[1i] = ½and is
not shown. (a) J = l. (b) J = 3.
1.0
-a-
;: ~
0.5
.._.,
li")
'.Jl
ou o
-
...... -0.5
l i")
o 25
-25 -20 -15 -10 -5
1.5 - - - - ~ - - - - . . . . - - - ~ - ~ - - - - ~ - - - ~ - - - - - ,
n
5
'º 15 20
1.0 '
r-,
......
~
.,., 0.5
<~
o ~~il .
-0.5
-25 -20 -15 -10 -5 o 5 10 15 20 25
n
(e)
1 1 1 1 1 1 1
-
0.5
r-,
("'l
...... -0.5
N - -
~
1 1 i 1 !
-1
-25 -20 -15 -10 -5 o 5 10 15 20 25
n
1.5 1 1
1 1-
' '
\ > ' -
0.5 ~ -
1 1 1 1
-0.5 · 1
"' O ···º o O o ·º··o º··o·º ·o··º o·º··o Oo 0 o O o·º o··º·o-º··o·0 •o··º ·o·º o 0 o 0 ··o 0··0·º·0··º ·o·º o··º o O o 0
-~
8
__,
~
-0.5 ~ -
-1 L - - - - ' - - - . . L I_ _..1l_ _ _ _...._1_ _,__1_
~ __.__ __,_,_ _--11_ _...J
'. -
1 ~ > ' >
,....,
1-:!
~ 0.5 ..... -
·~"' o . . - - - 0-0-000-0-00·0-0-
i
i ! • i i ! 1
-0.5 i
The DTFS is the only Fourier representation that can be numerically evaluated and
manipulated in a computer. This is because both the time-domain, x[n], and frequency-
domain, Xlk], representations of the signal are exactly characterized by a finite set of N
numbers. The computational tractability of the DTFS is of great significance. The DTFS
finds extensive use in numerical signal analysis and system implementation and is often
Nonnal
3 .----,----,-----,------------.!--....
,. ----,----,----,
2- -
1 >- -
x[n]
o ~..,,.....J"", --"'-l.__.........,-~__,,-...i""-11-~""i _..,...~ ,__--4
-1 ... ..
-2 '------------------'--------'--'---......__ _._..._ _,
O 200 400 600 800 1000 1200 1400 1600 1800 2000
Time indcx (n)
(a)
Ventricular tachycardia
3 .---..-----,-----,------,----,.--.---....---,---.....
2 ..
l
y[n]
oi..····~
-1 .
-2'------------~---'-------~-------
0 200 400 600 800 l 000 1200 1400 J600 1800 2000
Time index (n)
(b)
Nonnal
0.25 ,------.----...-------,------,------,----,
0.2
0.15
1X[k] 1
0.1 . . .
0.05
oL.L&.&..l~_._._._._~flllJ.lll~~~=~=~0ooo.~
o 10 20 30 40 50
Frequency index (k)
(e)
Ventricular tachycardia
0.25 . - - - - ~ - - - - , - . . . - - - - , - - - - ~ - - - - - . - - - - ,
0.2 . .
0.15
1 Y(k] 1
0.1
0.05
o 1..LLJ..J..1.,,U..J,J.....LJLJ...LLLI.J..J..1...LU..o.L(l.O.O,Jl)0.0,~.oD-CL().0().06JO,O.O.c,Oo.OO.:,ó-o(:.O.O.C)Q-OOÓ
o 10 20 30 40 50
Frequency index (k)
(d)
FIGURE 3. 7 Electrocardiograms for two clifferent heartbeats and the fírst 60 coefficients of their
magnitude spectra. (a) Normal heartbeat. (b) Ventricular tachycardia. (e) .lvlagnitude spectrum for
the normal heartheat. (d) Magnitude spectrum for ventricular tachycardia.
3.3 Continuous-Time Periodic Signals: The Fourier Series 171
usec.l numerically approximaté the other three Fourier representations. These issues are
t<)
explored in rhe next chapter.
EXAMPLE 3.4 ln this example we evaluate the DTFS representations of rwo different elec-
trocardiogram (ECG) waveforms. Figures 3.7(a) and (b) depict the ECG of a normal heart
and one experiencing ventricular tachycardia, respectively. These sequences are drawn as con-
tinuous functions due to the dif.ficulty of depicting ali 2000 values in each case. Both of these
appear nearly periodic, with very slight variations in the amplitude and length of each period.
The DTFS of one period of each ECG may be computed numerically. The period of the normal
ECG is N == 305, while thc period of the ventricular rachycardia ECG is N = 421. One period
of each waveform is available. Evaluate the DTFS coefficients for each and pior their 1nagni-
rude spectrum. . ·., :•·
::
'
Solution: The magnitude spectrum of the first 60 DTFS coefficients is depicted in Figs. 3.7{c)
a11d (d). The higher indexed coefficients are very small and thus not shown.
The time waveforms differ, as do the DTFS coefficíents. The normal ECG is dominated
by a sharp spike or impulsive feature. Recall that the DTFS coefficients for a unit impulse have
constant magnitude. The DTFS coefficients of the normal ECG are approximately constant,
showing a gradual decrease in amplitude as the frequency íncreases. They also have a fairly
small magnitude, since there is relatively little pc>wer in the impulsive signal. ln contrast, the
ventricular tachycardia ECG is not as impulsive but has smoother features. Consequently, the
DTFS coefficíents have greater dynamic range with the low-frequency coefficients dominating.
The ventricular tachycardia ECG has greater power than the normal ECG and thus the DTFS
coefficients have larger amplítude.
. ,,,.
We begin otir derivatic>n <)f the fS by approximating a signal x(t) having fundamental
peric>d T t1sir1g the series of Eq. (3.5):
f(/"}
X ( t )e - jinw,,t dt = f (T)
x( t )e . jin«Jot dt
Substit11te the series expression for x(t) in this equality te> obtain the expression
= i AlkJ J. eik<tJ,,te-j111w,.t dt
k=-,,,, (f}
1 72 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS
The orthogonality property of Eq. (3.7) implies that the integral on the right-hand side is
zero except for k = m, and so we have
f(T)
x(t}e-jmwol dt = A[m]T
A[m] = -1 J, '
x(t)e- 1111"' 0
t dt (3.17)
T (T)
Problem 3.32 establishes that this value also minimizes the MSE between x(t) and the
2] + 1 term, truncated approximation
.,
X1(t) = 2: A[kJeikwot
k=-J
Suppose we choose the coefficients according to Eq. (3.17). Under what conditions
does the infinite series of Eq. (3.16) actually converge to x(t)? A detailed analysis of this
question is beyond the scope of this book. However, we can state severa( results. First, if
x(t) is square integrable, that is,
_!_
T
f (T)
lx(t) 12 dt < oo
then the MSE between x(t) and x(t) is zero. This is a useful result that applíes te> a very
broad class of signals encountered in engineering practice. Note that in contrast t(> the
discrete-time case, zero MSE does not imply that x(t) and x(t} are equal pointwise (at each
value of t); it simply implies that there is zero energy in their difference.
Pointwise convergence is guaranteed at ali values of t except those corresponding to
discontinuities if the Dirichlet conditions are satisfied:
• x(t) is bounded ..
• x(t) has a finite number of local maxima and minima in one period.
• x(t) has a finite number of discontinuitíes in one peric>d.
If a signal x(t) satisfies the Dirichlet conditions and is not C<)11tinuous, then the FS repre-
sentati<>n of Eq. (3.16) converges to the midpoint of x(t) at each discontinuity.
• THE FS REPRESENTATION
(3.18)
k=-oo
X[k] = _!_
T
f (T>
x(t)e-ikwot dt (3.19)
3.3 Continuous•Time Periodic Signals: The Fourier Series 173
where x(t) has fundamental period T and w 0 = 27r/T. We say that x(t) a11d Xf kl are a FS
pair and denote this relatíonship as
· x(t) - - -
X[kl
FS; Wr,
From the FS coefficients X[k1 we may determine x(t) using Eq. (3.18) and from x(t) we
may determine Xlkl using Eq. (3~19). We shall see later that in some problems it is ad-
vantageous to represent the signal in the time domain as x(t}, while in others the FS co-
ef.ficients X[kj offer a more convenient description. The FS coefficient representation is
also known as a frequency-domairi representation because each FS coefficient is associated
with a complex sinusoid of a different frequency. The follc)wi11g examples illustrate deter-
mination of the FS representation.
. ,.
express x(t) as
,, ...... 00
One approach to finding X[k] is to use Eq. (3.19). However, in this case x(t) is expressed in
terms of sinusoids, so it is easier to obtain X[k] by inspection. Write
1T 1T
x(t) = 3 cos 2 t + 4
ei('ff'l2)t+'ff'l4 + e-[;(-n-/2)t+'ff'l4]
= 3 ---------
2
This last expression is in the form of the Fourier series. We may thus identify
le-;'"14 k= -1
2 '
,.·,.
X[k] = 1efrrl4
2 '
k= 1
o, otherwise
The magnitude and phase of X[kl ·are depicted in Fig. 3.8. ., :
. <'
X[kJ 1
1 arg{ X[kJ 1
1
3/2 'ff/4 ......
-1T/4
ExAMPLE 3.6 Determine the FS representation for the square wave depicted in Fig. 3.9.
Solution: The period is T, so w0 = 21r/T. lt is convenient in this problem to use the integral
formula Eq. (3.19) to determine rhe FS coefficients. We integrate over the period t = -T/2 to
t = T/2 to exploit the even symmetry of x(t) and obtain for k =/:- O
l
X[k] = -
JT/2 .
x(t)e-ikw 01
dt
T -TIZ
Tkw 0 2; '
_ 2 sin(kw0 Ts)
, k *O
For k = O, we have
l
X[O] = -
JT, dt
T -T$
'. .· ··' •:
·'·,/· .,::,~: .-~''" ' "'
•<, .,.,
x(t)
•••
i
l
7· • ..
-T .
'
'
t
-T-Ts -T+Ts
' ~·.
. 2 sin(kw T5 )02Ts
11 m - - - - - =
•-o Tkü>o T
and thus we write .' .
. X[k] = 2 sin(kw0 T5 )
Tkw 0
with the understanding that X[O] is obtained as a limit. ln this problem X[k] is real valued.
Substituting w0 = 27r/T gives X[k] as a functíon of the racio T 5 /T, as shown by
• >
. ,. ·.
. . . . k 21rT5
' 2 Slll T
.,
X[k] = - - - - (3.20)
k21r
Figure 3.10 depicts X{k], - 50 ~ k ~ 50, for T 5 /T = ¼and T 5 /T = ft. Note that as T 5 /T
decreases, the signal becomes more concentrated in time within each period while the FS
representation becomes less concentrated in frequency. We shall explore the inverse relation-
ship between time- and frequency-domain concentrations of signals more fully in the sections
that follow.
0.6 .---------.-----r----.----....----..---~----,-----.----,
0.4
X[k]
0,2
-0.2 ...__ ___.__ _ _ _ _.......__ _......____ _.......__ _,....__ _ _ ___..__ ___...._ __,
-50 -40 -30 -20 -10 o 10 20 30 40 50
k
(a)
0.15 ..------.----.-------r----,.-----.-----..------,-------,----,
0.1
X[k]
0.05 . . . .
FIGURE 3.10 The FS coefficjents, X[k], -50 < k < 50, for tw<> square waves: {a) T,IT = ¼and
(b) TslT = ft.
176 CHAPTER 3 • FOURIER REPRESENTATIONS FOR SIGNALS
The functional form sin( 1ru)/1ru occurs sufficiently often in Fourier analysis that we
give it a specíal name:
. ( ) sin( 1ru}
s1nc u = (3.21)
1TU
A graph of sinc(u) is depicted in Fig. 3.11. The maximum of the sinc function is unity at
u = O, the zero crossings occur at integer values of u, and the magnitude dies off as 1/u.
The portion of the sínc function between the zero crossings at u = :::t: 1 is known as the
mainl<>he of the sinc function. The smaller ripples outside the mainlobe are termed side-
lobes. The FS coefficients in Eq. (3.20) are expressed using the sinc function notation as
X[kl
= 2Ts . k 2Ts
T s1nc T
Each term in the FS of Eq. (3 .18) associated with a nonzero coefficient X[k] contri butes
t(> the representation of the signal. The square wave of the previous example provides a
convenient illustration of how the individual terms in the FS contribute to the representation
of x(t). As with the DTFS square wave representation, we exploit the even symmetry of X[k]
to write the FS as a sum of harmonically related cosines. Since X[k] = X[-k], we have
00
x(t) = L X[k]eikwot
k=-oo
00
rn=l
= XfO] + L
m=l
2X[m] cos(mw t) 0
0.8
0.6
0.4
sinc (u)
0.2
-0.2
ExAMPLE 3. 7 We define the partial sum approximation to the FS representation for the
square wave, as shown by
J
X;(t) == L
k=O
B[k] cos(kw t) 0
.!
2' k=O
,,, .. . 2( -1 )lk-1)/2
.....
o, k even
so the even indexed coefficients are zero. Depict one period of the Jth term in this sum and
x1(t) for J = 1, 3, 7, 29, and 99.
Solution: The individual terms and partial sum approximations are depicted in Fig. 3.12. The behavior of the partial sum approximation in the vicinity of the square wave discontinuities at t = ±1/4 is of particular interest. We note that each partial sum approximation passes through the average value (1/2) of the discontinuity, as stated in our convergence discussion. On each side of the discontinuity the approximation exhibits ripple. As J increases, the maximum height of the ripples does not appear to change. In fact, it can be shown for any finite J that the maximum ripple is 9% of the discontinuity. This ripple near discontinuities in partial sum FS approximations is termed the Gibbs phenomenon in honor of the mathematical physicist J. Willard Gibbs for his explanation of this phenomenon in 1899. The square wave satisfies the Dirichlet conditions and so we know that the FS approximation ultimately converges to the square wave for all values of t except at the discontinuities. However, for finite J the ripple is always present. As J increases, the ripple in the partial sum approximations becomes more and more concentrated near the discontinuities. Hence, for any given J, the accuracy of the partial sum approximation is best at times distant from discontinuities and worst near the discontinuities. •
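The partial sums of Fig. 3.12 and the roughly 9% ripple are straightforward to reproduce. The MATLAB sketch below is an illustration, not code from the text; it assumes the square wave has period T = 1 (so ω₀ = 2π), which is consistent with the time axes of Fig. 3.12:

% Sketch: partial sum approximations x_J(t) to the square wave FS, assuming T = 1.
t  = linspace(-0.5, 0.5, 4001);
w0 = 2*pi;
for J = [1 3 7 29 99]
    xJ = 0.5*ones(size(t));                       % the B[0] = 1/2 term
    for k = 1:2:J                                 % even-indexed coefficients are zero
        xJ = xJ + (2*(-1)^((k-1)/2)/(k*pi))*cos(k*w0*t);
    end
    fprintf('J = %3d: max of x_J(t) = %.4f\n', J, max(xJ));
end
% For large J the maximum approaches roughly 1.09, a ripple of about 9% of the
% unit discontinuity: the Gibbs phenomenon.
plot(t, xJ), xlabel('t'), ylabel('x_{99}(t)')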
FIGURE 3.12 Individual terms in the FS expansion for a square wave (top panels) and the corresponding partial sum approximations x_J(t) (bottom panels). The J = 0 term is x_0(t) = 1/2 and is not shown. (a) J = 1. (b) J = 3. (c) J = 7. (d) J = 29. (e) J = 99.
• Drill Problem 3.4 Find the FS representation for the sawtooth wave depicted in Fig. 3.13. Hint: Use integration by parts.
Answer: Integrate t from −1/2 to 1 in Eq. (3.19) to obtain

x(t) ←FS; ω₀ = 4π/3→ X[k] = { 1/4,                                      k = 0
                              −(2e^{−jkω₀} + e^{jkω₀/2})/(3jkω₀),      otherwise
•
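A numerical sanity check of this answer is possible if we assume, consistent with the hint, that the sawtooth of Fig. 3.13 equals t over the period −1/2 < t < 1, so that T = 3/2 and ω₀ = 4π/3; this reading of the figure is an assumption. The sketch below approximates Eq. (3.19) by trapezoidal integration and compares it with the closed form above:

% Sketch: numerical check of the Drill Problem 3.4 answer, ASSUMING the
% sawtooth of Fig. 3.13 is x(t) = t on -1/2 < t < 1, repeated with T = 3/2.
T = 3/2;  w0 = 2*pi/T;                                   % w0 = 4*pi/3
t = linspace(-1/2, 1, 30000);
x = t;                                                   % assumed waveform over one period
k = -5:5;
Xnum = zeros(size(k));
for i = 1:length(k)
    Xnum(i) = (1/T)*trapz(t, x.*exp(-1j*k(i)*w0*t));     % Eq. (3.19), evaluated numerically
end
Xform = -(2*exp(-1j*k*w0) + exp(1j*k*w0/2))./(3j*k*w0);  % closed form, k ~= 0
Xform(k == 0) = 1/4;
max(abs(Xnum - Xform))                                   % small (limited by the integration accuracy)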
The following example exploits linearity and the FS representation for the square
wave to determine the output of a LTI system.
EXAMPLE 3.8 Here we wish to find the FS representation for the output, y(t), of the RC circuit depicted in Fig. 3.14 in response to the square wave input depicted in Fig. 3.9, assuming RC = 0.1 s, ω₀ = 2π, and T_s/T = 1/4.
FIGURE 3.13 Periodic signal for Drill Problem 3.4.
FIGURE 3.14 RC circuit with input x(t) applied through the resistor R and output y(t) taken as the voltage across the capacitor C.
Solution: If the input to a LTI system is expressed as a weighted sum of complex sinusoids, then the output is also a weighted sum of complex sinusoids, where the kth weight in the output sum is given by the product of the kth weight in the input sum and the system frequency response evaluated at the kth sinusoid's frequency. Hence if
x(t) = Σ_{k=−∞}^{∞} X[k] e^{jkω₀t}

then the output is

y(t) = Σ_{k=−∞}^{∞} H(jkω₀) X[k] e^{jkω₀t}

that is,

y(t) ←FS; ω₀→ Y[k] = H(jkω₀) X[k]

The frequency response of the RC circuit is

H(jω) = (1/RC)/(jω + 1/RC)
and the FS coefficients for the square wave are given in Eq. (3.20). Substituting for H(jkω₀) with RC = 0.1 s, ω₀ = 2π, and using T_s/T = 1/4 gives

Y[k] = [10/(j2πk + 10)] [sin(kπ/2)/(kπ)]
The magnitude spectrum |Y[k]| goes to zero in proportion to 1/k² as k increases, so a reasonably accurate representation for y(t) may be determined using a modest number of terms in the FS. We determine y(t) using

y(t) = Σ_{k=−100}^{100} Y[k] e^{jkω₀t}
The magnitude and phase of Y[k] for −25 ≤ k ≤ 25 are depicted in Figs. 3.15(a) and (b), respectively. Comparing Y[k] to X[k] as depicted in Fig. 3.10(a), we see that the circuit attenuates the amplitude of X[k] when |k| ≥ 1. The degree of attenuation increases as the frequency, kω₀, increases. The circuit also introduces a frequency-dependent phase shift. One period of the time waveform y(t) is shown in Fig. 3.15(c). This result is consistent with our intuition from circuit analysis. When the input switches from 0 to 1, the charge on the capacitor increases and the voltage exhibits an exponential rise. When the input switches from 1 to 0, the capacitor discharges and the voltage exhibits an exponential decay.
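The computation described above takes only a few lines of MATLAB. The sketch below is illustrative rather than code from the text; it forms Y[k] = H(jkω₀)X[k] for −100 ≤ k ≤ 100 and synthesizes one period of y(t):

% Sketch: FS coefficients of the RC circuit output and one period of y(t).
RC = 0.1;  w0 = 2*pi;  k = -100:100;
X = sin(k*pi/2)./(k*pi);  X(k == 0) = 1/2;      % square wave coefficients, Ts/T = 1/4
H = (1/RC)./(1j*k*w0 + 1/RC);                   % frequency response at the harmonics k*w0
Y = H.*X;                                       % Y[k] = H(jk*w0) X[k]
t = linspace(-0.5, 0.5, 1001);
y = real(exp(1j*w0*t.'*k)*Y.');                 % y(t) = sum over k of Y[k] e^{jk*w0*t}
subplot(3, 1, 1), stem(k, abs(Y)),   xlabel('k'), ylabel('|Y[k]|')
subplot(3, 1, 2), stem(k, angle(Y)), xlabel('k'), ylabel('arg Y[k]')
subplot(3, 1, 3), plot(t, y),        xlabel('t'), ylabel('y(t)')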
FIGURE 3.15 The FS coefficients, Y[k], −25 ≤ k ≤ 25, for the RC circuit output in response to a square wave input. (a) Magnitude spectrum. (b) Phase spectrum. (c) One period of the output, y(t).
To develop a Fourier representation for a nonperiodic discrete-time signal x[n], we view x[n] as the limit of a periodic signal x̃[n] with period 2M + 1, where

x̃[n] = x[n],   −M ≤ n ≤ M

and x[n] = 0 for |n| > M. This relationship is illustrated in Fig. 3.16. Note that as M increases, the periodic replicates of x[n] that are present in x̃[n] move farther and farther away from the origin. Eventually, as M → ∞, these replicates are removed to infinity. Thus we may write

x[n] = lim_{M→∞} x̃[n]
Begin with the DTFS representation for the periodic signal x̃[n]. We have the DTFS pair
x̃[n] = Σ_{k=−M}^{M} X[k] e^{jkΩ₀n}    (3.23)

X[k] = (1/(2M + 1)) Σ_{n=−M}^{M} x̃[n] e^{−jkΩ₀n}    (3.24)

where Ω₀ = 2π/(2M + 1) is the fundamental frequency of x̃[n].
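To see the effect of increasing M, Eq. (3.24) can be evaluated for a simple finite-duration signal. The sketch below uses an illustrative rectangular pulse for x[n]; the pulse is an assumption made for this sketch, not a signal taken from the text:

% Sketch: DTFS coefficients (3.24) of the periodic extension of a finite-duration
% pulse x[n] (illustrative choice), for two values of the half-period M.
for M = [10 40]
    n = -M:M;
    Om0 = 2*pi/(2*M + 1);                    % fundamental frequency of xtilde[n]
    x = double(abs(n) <= 4);                 % x[n] = 1 for |n| <= 4, 0 otherwise
    k = -M:M;
    X = (x*exp(-1j*Om0*n.'*k))/(2*M + 1);    % Eq. (3.24)
    figure, stem(k*Om0, real(X))             % coefficients plotted against frequency k*Om0
    xlabel('k\Omega_o'), ylabel('X[k]')
end
% As M increases the frequencies k*Om0 become more closely spaced, while the
% envelope (2M + 1)X[k], viewed as a function of k*Om0, stays fixed.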
FIGURE 3.16 The periodic signal x̃[n] obtained by replicating the finite-duration signal x[n] with period 2M + 1.