Chapter Contents
1. Signals and Vectors, Orthogonal Functions
2. Energy and Power Signals
3. Fourier Series, Fourier Transforms, Signal Spectrum
4. Correlation, Spectral Density

Objectives:
1. Signals and vectors, orthogonal functions
   - What is meant by the component of one vector/signal in another?
   - What is the minimum mean square error (MMSE) criterion?
   - What is the condition for two signals to be orthogonal?
   - Examples of some commonly encountered orthogonal functions.
2. Energy and power signals
   - The difference between energy and power signals.
   - Familiarization with terms related to power and energy.
3. Fourier series, Fourier transforms, signal spectrum
   - What is a Fourier series representation? What is a generalized Fourier series representation?
   - Frequency analysis of periodic signals: find the Fourier series representation of a function in trigonometric or exponential form.
   - Understand the concept of the frequency content of a signal; single- and double-sided plots of the amplitude and phase spectrum.
   - Frequency analysis of aperiodic signals: find the Fourier transform of a signal.
   - Understand the concept of the impulse, defined by its properties.
   - Recognize some important properties of the Fourier transform.
4. Correlation, spectral density
   - What is the correlation of power and energy signals?
   - What is power and energy spectral density?
In this chapter, we review the basics of signals and linear systems in the frequency domain. The motivation for studying these fundamental concepts stems from the basic role they play in modeling various types of communication systems. In transmission of an information signal over a communication channel, the shape of the signal is changed, or distorted, by the channel. The communication channel is an example of a system; i.e., an entity that produces an output signal when excited by an input signal. A large number of communication channels can be modeled closely by a subclass of systems called linear systems. Linear systems are a large subclass of systems which arise naturally in many practical applications. We have devoted this entire chapter to the study of the basics of signals and linear systems in the frequency domain.
Consider two vectors V1 and V2, as shown in Figure 2.1.

Figure 2.1: Vectors

a) V1 can be approximated in terms of V2 as V1 ≈ C12 V2, with error vector Ve = V1 − C12 V2.

b) Physical interpretation: the larger the component of a vector along the other vector, the more closely the two vectors resemble each other in their directions, and the smaller is the error vector.

c) C12 is given by:

    C12 = (V1 · V2) / |V2|²        (2.2)

Two vectors V1 and V2 are orthogonal if

    V1 · V2 = |V1| |V2| cos θ = 0        (2.3)

in which case the vectors are independent of each other.
Similarly, a signal f1(t) can be approximated over an interval (t1, t2) by another signal f2(t):

a) The approximation is

    f1(t) ≈ C12 f2(t)        (2.4)

b) The error function is given by

    fe(t) = f1(t) − C12 f2(t)        (2.5)

This error in the approximation can be minimized by minimizing the average (mean) of the square of the error:

    ε = [1/(t2 − t1)] ∫_{t1}^{t2} fe²(t) dt        (2.6)

c) Setting dε/dC12 = 0, the solution is:

    C12 = [∫_{t1}^{t2} f1(t) f2(t) dt] / [∫_{t1}^{t2} f2²(t) dt]        (2.7)

d) From the previous section on vectors, we say that the signal f1(t) contains a component C12 f2(t); in vector terminology, C12 V2 is the projection of V1 on V2. Continuing with that analogy, if the component of f1(t) of the form f2(t) is zero (that is, C12 = 0), the signals are orthogonal. Therefore, we define the real signals f1(t) and f2(t) to be orthogonal over an interval (t1, t2) if

    C12 = 0, i.e.,  ∫_{t1}^{t2} f1(t) f2(t) dt = 0        (2.8)
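The MMSE coefficient of equation (2.7) is easy to check numerically. The sketch below (our own illustration, not part of the notes: the square-wave choice of f1 and the grid are assumptions) computes C12 for a ±1 square wave approximated by sin t over (0, 2π); the classical closed-form answer for this choice is 4/π.

```python
import numpy as np

# MMSE component of f1 in f2 over (0, 2*pi), equation (2.7):
# C12 = integral(f1*f2) / integral(f2^2).
# f1 is a hypothetical test signal: +1 on (0, pi), -1 on (pi, 2*pi).
t = np.linspace(0, 2 * np.pi, 100_001)
dt = t[1] - t[0]
f1 = np.where(t < np.pi, 1.0, -1.0)
f2 = np.sin(t)

c12 = (np.sum(f1 * f2) * dt) / (np.sum(f2 ** 2) * dt)
print(c12)   # close to 4/pi for this choice of f1
```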
Example 2.1: Minimum mean square error
A rectangular function f(t) is defined over the interval (0, 2π). Approximate this function by the waveform sin t over (0, 2π) such that the mean square error is minimal.

Solution: Let f(t) ≈ C12 sin t. From equation (2.7),

    C12 = [∫_{0}^{2π} f(t) sin t dt] / [∫_{0}^{2π} sin²t dt]

Thus, f(t) ≈ C12 sin t, with C12 evaluated from the given f(t).
A set of functions {φn(t)} is orthogonal over an interval (a, b) if

    ∫_{a}^{b} φn(t) φm*(t) dt = 0,  where n ≠ m        (2.9)

The zero result implies that these functions are independent. If the result is not zero, the two functions have some dependence on each other. If the functions of the set {φn(t)} are orthogonal, then they also satisfy the relation

    ∫_{a}^{b} φn(t) φm*(t) dt = { 0 for n ≠ m;  Kn for n = m } = Kn δnm        (2.10)

where

    δnm = { 1 for n = m;  0 for n ≠ m }

δnm is called the Kronecker delta. If the constants Kn all equal 1, the functions φn(t) are said to be orthonormal functions.

Examples of orthogonal functions: trigonometric functions [sin(kω0t), cos(kω0t)] and complex exponential functions [exp(jkω0t)].
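The orthogonality relations above can be verified numerically. A minimal sketch (the choice of harmonics and interval is ours) checks that sin(2t) and sin(3t), and sin(2t) and cos(2t), are orthogonal over one period, and that Kn = π for sin(nt) over (0, 2π):

```python
import numpy as np

# Riemann-sum approximation of the inner product integral on (0, 2*pi).
t = np.linspace(0, 2 * np.pi, 200_001)
dt = t[1] - t[0]

def inner(a, b):
    return np.sum(a * b) * dt

z1 = inner(np.sin(2 * t), np.sin(3 * t))   # ~0: different harmonics
z2 = inner(np.sin(2 * t), np.cos(2 * t))   # ~0: sin vs cos, same harmonic
kn = inner(np.sin(2 * t), np.sin(2 * t))   # ~pi: the constant Kn
print(z1, z2, kn)
```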
A signal x(t) is periodic with period T0 if

    x(t + T0) = x(t),   −∞ < t < ∞        (2.11)

An important periodic signal is the rotating phasor

    x̃(t) = A e^{j(ω0t + θ)},   −∞ < t < ∞        (2.12)

We will refer to equation (2.12) as a rotating phasor to distinguish it from the phasor A e^{jθ}. Using Euler's theorem, we may show that equation (2.12) is periodic with period T0 = 2π/ω0. The rotating phasor A e^{j(ω0t + θ)} can be related to a real sinusoidal signal in two ways. The first is

    x(t) = A cos(ω0t + θ) = Re[x̃(t)] = Re[A e^{j(ω0t + θ)}]        (2.13)

and the second is

    A cos(ω0t + θ) = (1/2) x̃(t) + (1/2) x̃*(t) = (1/2) A e^{j(ω0t + θ)} + (1/2) A e^{−j(ω0t + θ)}        (2.14)
The equivalent representation of equation (2.13) in frequency domain is shown in Figure 2.3(a). The resulting plots are referred to as the amplitude line spectrum and the phase line spectrum for x(t), or the single-sided amplitude and phase spectra of x(t). The spectrum of equation (2.14) is shown in Figure 2.3(b) and referred to as the double-sided amplitude and phase spectra.
Figure 2.3: Amplitude and phase spectra for the signal A cos( 0 t + ) (a) Single-sided. (b) Double-sided.
The unit step function is defined as

    u(t) = { 1 for t > 0;  0 for t < 0;  undefined at t = 0 }        (2.15)

The unit impulse δ(t) is defined by its properties: δ(t) = 0 for t ≠ 0 and ∫_{−∞}^{∞} δ(τ) dτ = 1.        (2.16)

The unit rectangular pulse of width T is

    rect(t/T) = { 1 for |t| ≤ T/2;  0 otherwise }        (2.17)

The unit triangular pulse Λ(t/T) rises linearly from 0 at t = −T to 1 at t = 0 and falls back to 0 at t = T. Note that this is twice as wide (2T) as rect(t/T), which is only T wide.
For a current i(t) flowing through a 1-Ω resistor, the total energy and the average power are

    E = lim_{T→∞} ∫_{−T/2}^{T/2} i²(t) dt        (2.18)

and

    P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} i²(t) dt        (2.19)

For an arbitrary signal x(t), which may in general be complex, we define total (normalized) energy as

    E = lim_{T→∞} ∫_{−T/2}^{T/2} |x(t)|² dt        (2.20)

and (normalized) power as

    P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt        (2.21)

Based on these definitions, we can define two distinct classes of signals:
1. x(t) is an energy signal if and only if 0 < E < ∞ (finite and nonzero), so that P = 0. An aperiodic signal is usually an energy waveform (the energy is finite, so averaged over an infinite interval the power is zero).
2. x(t) is a power signal if and only if 0 < P < ∞ (finite and nonzero), implying that E = ∞. A periodic signal is a power waveform (its energy is infinite because of its infinite duration).

Note: every signal that can be generated in a lab has finite energy. In other words, every signal observed in real life is an energy signal. A power signal must necessarily have an infinite duration.

Example 2.2: Energy and power signals
As an example of determining the classification of a signal, consider

    x1(t) = A e^{−αt} u(t),   α > 0

where A and α are constants. Using equation (2.20), we may readily verify that x1(t) is an energy signal, since E = A²/2α. Letting α → 0, we obtain the signal x2(t) = A u(t), which has infinite energy. Applying equation (2.21), we find that P = A²/2, thus verifying that x2(t) is a power signal.
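The classification in Example 2.2 can be confirmed numerically. This is a sketch with our own parameter choices (A = 2, α = 0.5, and a finite truncation length standing in for the limits): the energy of A e^{−αt}u(t) should approach A²/2α, and the power of A u(t) should approach A²/2.

```python
import numpy as np

A, alpha = 2.0, 0.5
t = np.linspace(0, 200, 2_000_001)   # u(t) restricts the range to t >= 0
dt = t[1] - t[0]

# Energy of x1(t) = A*exp(-alpha*t)*u(t): expect ~ A^2/(2*alpha) = 4.0
E1 = np.sum((A * np.exp(-alpha * t)) ** 2) * dt
print(E1)

# Power of x2(t) = A*u(t): (1/(2T)) * integral over (-T, T) of A^2*u(t) dt
# reduces to (1/(2T)) * integral over (0, T); expect ~ A^2/2 = 2.0
T = t[-1]
P2 = (1.0 / (2 * T)) * np.sum(A ** 2 * np.ones_like(t)) * dt
print(P2)
```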
2.4 FOURIER SERIES
Consider the set of complex exponential functions φn(t) = e^{jnω0t}, where n ranges over all possible integer values (negative, positive, and zero) and ω0 = 2π/T0. A signal x(t) can be expressed over an interval of duration T0 seconds as an exponential Fourier series

    x(t) = Σ_{n=−∞}^{∞} cn e^{jnω0t}        (2.22)

where the coefficients are

    cn = (1/T0) ∫_{T0} x(t) e^{−jnω0t} dt        (2.23)

and

    c0 = (1/T0) ∫_{T0} x(t) dt        (2.24)

when n = 0. In general, the cn are complex. For real x(t), cn* = c−n; therefore |cn| = |c−n| and θ−n = −θn. If x(t) is real and even, then Im[cn] = 0, and if x(t) is real and odd, then Re[cn] = 0.
Example 2.3: Exponential Fourier series
Let x(t) denote the periodic signal depicted in Figure 2.4 and described analytically by

    x(t) = Σ_{n=−∞}^{∞} rect((t − nT0)/τ)

where rect(t/τ) is a rectangular pulse of width τ. Determine the Fourier series expansion for this signal.

Figure 2.4: Periodic train of rectangular pulses of width τ and period T0

Solution: Over one period, x(t) = 1 for |t| ≤ τ/2 and zero elsewhere. Hence

    cn = (1/T0) ∫_{−T0/2}^{T0/2} x(t) e^{−jnω0t} dt
       = (1/T0) ∫_{−τ/2}^{τ/2} (1) e^{−j2πnt/T0} dt
       = [1/(−j2πn)] [e^{−jπnτ/T0} − e^{jπnτ/T0}]
       = (1/πn) sin(πnτ/T0)
       = (τ/T0) sinc(nτ/T0)

where sinc(x) = sin(πx)/(πx). Therefore

    x(t) = Σ_{n=−∞}^{∞} (τ/T0) sinc(nτ/T0) e^{jnω0t}
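The coefficient formula for the pulse train can be checked by direct numerical integration of equation (2.23). A minimal sketch (τ and T0 are our choices; numpy's np.sinc already uses the sin(πx)/(πx) convention used here):

```python
import numpy as np

T0, tau = 1.0, 0.25
t = np.linspace(-T0 / 2, T0 / 2, 400_001)
dt = t[1] - t[0]
x = (np.abs(t) <= tau / 2).astype(float)   # one period of the pulse train

results = []
for n in range(5):
    # cn = (1/T0) * integral over one period of x(t) exp(-j n w0 t) dt
    cn = np.sum(x * np.exp(-1j * n * 2 * np.pi * t / T0)) * dt / T0
    results.append((cn.real, (tau / T0) * np.sinc(n * tau / T0)))
    print(n, results[-1])
```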
For real x(t), taking the complex conjugate inside the integral of equation (2.23) shows that cn* = c−n. Writing cn = |cn| e^{jθn}, we obtain

    |cn| = |c−n|  and  θ−n = −θn

We can now regroup the complex exponential Fourier series by pairs of terms of the form cn e^{jnω0t} + c−n e^{−jnω0t} = 2|cn| cos(nω0t + θn), giving

    x(t) = c0 + Σ_{n=1}^{∞} 2|cn| cos(nω0t + θn)        (2.25)

Expanding the cosine in equation (2.25), we obtain still another equivalent series of the form

    x(t) = c0 + Σ_{n=1}^{∞} [An cos(nω0t) + Bn sin(nω0t)]        (2.26)

where

    An = 2|cn| cos(θn) = (2/T0) ∫_{t0}^{t0+T0} x(t) cos(nω0t) dt        (2.27)

and

    Bn = −2|cn| sin(θn) = (2/T0) ∫_{t0}^{t0+T0} x(t) sin(nω0t) dt        (2.28)

and also

    c0 = (1/T0) ∫_{t0}^{t0+T0} x(t) dt        (2.29)

The constant c0 is the average (dc) component of x(t). The term for n = 1 is called the fundamental, the term for n = 2 is called the second harmonic, and so on.
The coefficients are related by

    |cn| = (1/2) √(An² + Bn²)  and  θn = −tan⁻¹(Bn/An)

For some waveforms, c0, An, or Bn become zero after integration. Computing coefficients that turn out to be zero is time consuming and can be avoided: with a knowledge of even and odd functions, a zero coefficient may be predicted without performing the integration. A function x(t) is said to be even if x(−t) = x(t) for all values of t; the graph of an even function is symmetrical about the y-axis (a mirror image). For an even function x(t), Bn = 0. A function x(t) is said to be odd if x(−t) = −x(t) for all values of t; the graph of an odd function is symmetrical about the origin. For an odd function x(t), c0 = 0 and An = 0.
Each frequency component is represented by a line at its frequency f = nf0, together with a corresponding phase line θn. For the single-sided (trigonometric) spectrum of x(t) = c0 + Σ_{n=1}^{∞} 2|cn| cos(nω0t + θn), the amplitude lines are c0 at f = 0 and 2|Cn| at f = nf0. For the double-sided (exponential) spectrum of x(t) = Σ_{n=−∞}^{∞} cn e^{jnω0t}, the amplitude lines are |Cn| at f = ±nf0; the amplitude spectrum is even and the phase spectrum is odd.

[Figure: single-sided amplitude and phase line spectra]
Figure 2.7: Double-sided line spectra

Example 2.4: Trigonometric Fourier series
Evaluate the trigonometric Fourier series expansion for the periodic function x(t), a triangular waveform with period T0 = 1 and x(0) = 1. Sketch the magnitude spectrum up to n = 3.
Solution: Over one period the waveform is x(t) = 1 − 2|t| for |t| ≤ 1/2, with T0 = 1 and ω0 = 2π. We seek

    x(t) = c0 + Σ_{n=1}^{∞} [An cos(nω0t) + Bn sin(nω0t)]

The dc term is

    c0 = (1/T0) ∫_{−1/2}^{1/2} x(t) dt = 2 ∫_{0}^{1/2} (1 − 2t) dt = 2 [t − t²]_{0}^{1/2} = 1/2

Since x(t) is even, Bn = (2/T0) ∫ x(t) sin(nω0t) dt = 0. The cosine coefficients are

    An = (2/T0) ∫_{−1/2}^{1/2} x(t) cos(2πnt) dt = 4 ∫_{0}^{1/2} (1 − 2t) cos(2πnt) dt

Using ∫_{0}^{1/2} cos(2πnt) dt = 0 and ∫_{0}^{1/2} t cos(2πnt) dt = [cos(πn) − 1]/(2πn)², we get

    An = 2[1 − (−1)ⁿ]/(nπ)² = { 4/(nπ)² for n odd;  0 for n even }

Hence:

    x(t) = 1/2 + Σ_{n odd} [4/(nπ)²] cos(2πnt) = 1/2 + (4/π²) cos 2πt + (4/9π²) cos 6πt + (4/25π²) cos 10πt + …

Figure 2.9: Amplitude (or magnitude) spectrum. (a) Single-sided: c0 = 1/2 at f = 0 and lines An = 4/(nπ)² at f = nf0 for odd n. (b) Double-sided: lines of height An/2 = 2/(nπ)² at f = ±nf0 for odd n.

Revised November 2008
Parseval's theorem for periodic signals relates the average power to the Fourier coefficients:

    P = (1/T) ∫_{T} |x(t)|² dt = Σ_{n=−∞}^{∞} |cn|²        (2.31)
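Parseval's relation can be checked against Example 2.4. This sketch assumes our reading of that waveform (x(t) = 1 − 2|t| on |t| ≤ 1/2, T0 = 1, c0 = 1/2, |cn| = 2/(nπ)² for odd n); both sides should come out near 1/3.

```python
import numpy as np

# Time-domain power: (1/T0) * integral of x^2 over one period, T0 = 1.
t = np.linspace(-0.5, 0.5, 200_001)
dt = t[1] - t[0]
x = 1 - 2 * np.abs(t)
P_time = np.sum(x ** 2) * dt

# Frequency-domain power: |c0|^2 + 2 * sum over odd n of |cn|^2.
n = np.arange(1, 2001, 2)                       # odd harmonics (truncated)
P_freq = 0.25 + 2 * np.sum((2 / (n * np.pi) ** 2) ** 2)

print(P_time, P_freq)   # both ~ 1/3
```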
The Fourier transform of an energy signal x(t) is

    X(f) = F[x(t)] = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt        (2.32)

and the inverse transform is

    x(t) = F⁻¹[X(f)] = ∫_{−∞}^{∞} X(f) e^{j2πft} df        (2.33)

Equations (2.32) and (2.33) are called the Fourier transform pair, denoted by x(t) ↔ X(f). Using the Fourier transform, an energy signal x(t) is represented by X(f), which is a function of the frequency variable f. In polar form,

    X(f) = |X(f)| e^{jθ(f)},  where θ(f) = ∠X(f)        (2.34)

where |X(f)| is called the continuous amplitude spectrum of x(t) and θ(f) is called the continuous phase spectrum of x(t). For real x(t),

    X(−f) = X*(f) = |X(f)| e^{−jθ(f)}        (2.35)

or

    |X(−f)| = |X(f)|  and  θ(−f) = −θ(f)        (2.36)

Thus, just as for the complex Fourier series, the amplitude spectrum |X(f)| is an even function of f and the phase spectrum θ(f) is an odd function of f.
Example 2.5: Fourier transform of a rectangular pulse
Find the Fourier transform of

    g(t) = { 1 for |t| ≤ τ/2;  0 for |t| > τ/2 }

Solution:

    G(f) = F[g(t)] = ∫_{−∞}^{∞} g(t) e^{−j2πft} dt = ∫_{−τ/2}^{τ/2} (1) e^{−j2πft} dt = sin(πfτ)/(πf) = τ sinc(fτ)

Therefore, rect(t/τ) ↔ τ sinc(fτ).

[Figure: G(f) = sinc(f) for τ = 1, plotted over −5 ≤ f ≤ 5]
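The rect ↔ sinc pair can be checked by evaluating the transform integral numerically at a few frequencies. A minimal sketch (τ and the frequency points are our choices):

```python
import numpy as np

# G(f) = integral over (-tau/2, tau/2) of exp(-j 2 pi f t) dt = tau*sinc(f*tau)
tau = 1.0
t = np.linspace(-tau / 2, tau / 2, 200_001)
dt = t[1] - t[0]

results = []
for f in [0.0, 0.5, 1.0, 1.5]:
    G = np.sum(np.exp(-1j * 2 * np.pi * f * t)) * dt   # numeric integral
    results.append((G.real, tau * np.sinc(f * tau)))
    print(f, results[-1])
```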
Example 2.6: Fourier transform of an exponential function
Find the Fourier transform of x(t) = e^{−at} u(t), where a > 0.
Rayleigh's energy theorem states that the energy can be computed in either domain:

    E = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df        (2.37)

Equation (2.37) gives an alternative method for evaluating the energy by using the frequency-domain description instead of the time-domain definition. Examining |X(f)|², we note that it has units of (volt-seconds)² or, since we are considering power on a per-ohm basis, watt-seconds per hertz = joules per hertz. Thus, we can see that |X(f)|² has the units of energy density. So we can now define the energy spectral density (ESD), with units of joules per hertz, for an energy signal by

    G(f) = |X(f)|²        (2.38)

By integrating G(f) over all frequency, we can obtain the total energy as

    E = ∫_{−∞}^{∞} G(f) df        (2.39)
Solution (Example 2.6): The Fourier transform of x(t) = e^{−at} u(t) is

    X(f) = ∫_{0}^{∞} e^{−at} e^{−j2πft} dt = ∫_{0}^{∞} e^{−(a + j2πf)t} dt = [−e^{−(a + j2πf)t}/(a + j2πf)]_{0}^{∞} = 1/(a + j2πf)

In polar form,

    X(f) = |X(f)| e^{jθ(f)},  where |X(f)| = 1/√(a² + (2πf)²)  and  θ(f) = −tan⁻¹(2πf/a)
Frequency-Translation Theorem
If x(t) ↔ X(f), then

    x(t) e^{j2πf0t} ↔ X(f − f0)        (2.41)

Proof:

    F[x(t) e^{j2πf0t}] = ∫_{−∞}^{∞} x(t) e^{j2πf0t} e^{−j2πft} dt = ∫_{−∞}^{∞} x(t) e^{−j2π(f − f0)t} dt = X(f − f0)

Time-Delay Theorem
If x(t) ↔ X(f), then

    x(t − t0) ↔ X(f) e^{−j2πf t0}        (2.42)

Proof: With the substitution y = t − t0,

    F[x(t − t0)] = ∫_{−∞}^{∞} x(t − t0) e^{−j2πft} dt = ∫_{−∞}^{∞} x(y) e^{−j2πf(y + t0)} dy = e^{−j2πf t0} ∫_{−∞}^{∞} x(y) e^{−j2πfy} dy = X(f) e^{−j2πf t0}

Scale-Change Theorem
If x(t) ↔ X(f), then

    x(at) ↔ (1/|a|) X(f/a)        (2.43)

Proof: Let y = at (for a > 0); then

    F[x(at)] = ∫_{−∞}^{∞} x(at) e^{−j2πft} dt = ∫_{−∞}^{∞} x(y) e^{−j2π(f/a)y} (dy/a) = (1/a) X(f/a)

A similar argument for a < 0 gives the factor 1/|a| in general.

Duality Theorem
If x(t) ↔ X(f), then

    X(t) ↔ x(−f)        (2.44)

Proof: The proof of this theorem follows by virtue of the fact that the only difference between the Fourier transform integral and the inverse Fourier transform integral is a minus sign in the exponent of the integrand.
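The scale-change theorem can be checked numerically with a pair whose transform is known in closed form. This sketch uses the Gaussian pair exp(−πt²) ↔ exp(−πf²) (our choice, not from the notes; a and the test frequency are also ours) and compares a direct numerical transform of x(at) against (1/|a|)X(f/a).

```python
import numpy as np

a = 2.0
t = np.linspace(-10, 10, 400_001)
dt = t[1] - t[0]

def ft(x, f):
    # Numerical Fourier transform integral at a single frequency f
    return np.sum(x * np.exp(-1j * 2 * np.pi * f * t)) * dt

f = 0.7
lhs = ft(np.exp(-np.pi * (a * t) ** 2), f)          # F[x(at)] at f
rhs = (1 / abs(a)) * np.exp(-np.pi * (f / a) ** 2)  # (1/|a|) X(f/a)
print(lhs.real, rhs)   # the two values agree
```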
Modulation Theorem
If x(t) ↔ X(f), then

    x(t) cos(2πf0t) ↔ (1/2) X(f − f0) + (1/2) X(f + f0)        (2.45)

Proof: The proof follows by writing cos(2πf0t) in exponential form as (1/2)(e^{j2πf0t} + e^{−j2πf0t}) and applying the frequency-translation theorem and superposition.

Differentiation Theorem
If x(t) ↔ X(f), then

    dⁿx(t)/dtⁿ ↔ (j2πf)ⁿ X(f)        (2.46)
Proof: We prove the theorem for n = 1 by using integration by parts on the defining Fourier transform integral as follows:

    F[dx(t)/dt] = ∫_{−∞}^{∞} (dx/dt) e^{−j2πft} dt = [x(t) e^{−j2πft}]_{−∞}^{∞} + j2πf ∫_{−∞}^{∞} x(t) e^{−j2πft} dt = j2πf X(f)

where the definitions u = e^{−j2πft} and dv = (dx/dt) dt have been used in the integration-by-parts formula, and the first term of the middle equation vanishes at each end point by virtue of x(t) being an energy signal. The proof for values of n > 1 follows by induction.

Convolution Theorem
If x1(t) ↔ X1(f) and x2(t) ↔ X2(f), then

    x1(t) * x2(t) = ∫_{−∞}^{∞} x1(λ) x2(t − λ) dλ = ∫_{−∞}^{∞} x1(t − λ) x2(λ) dλ ↔ X1(f) X2(f)        (2.47)

When the convolution (denoted by *) of two signals x1(t) and x2(t) is conducted, a new function of time x(t) is produced, obtained by:
1. time reversal to obtain x2(−λ);
2. time shifting to obtain x2(t − λ);
3. multiplication of x1(λ) and x2(t − λ) to form the integrand, followed by integration over λ.
In communication systems, the output (response) of a (stationary, or time- or space-invariant) linear system is the convolution of the input (excitation) with the system's response to an impulse, or Dirac delta function.
Proof (convolution theorem):

    F[x1(t) * x2(t)] = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x1(λ) x2(t − λ) dλ] e^{−j2πft} dt = ∫_{−∞}^{∞} x1(λ) [∫_{−∞}^{∞} x2(t − λ) e^{−j2πft} dt] dλ

By the time-delay theorem, the inner integral is X2(f) e^{−j2πfλ}, so

    F[x1(t) * x2(t)] = ∫_{−∞}^{∞} x1(λ) X2(f) e^{−j2πfλ} dλ = [∫_{−∞}^{∞} x1(λ) e^{−j2πfλ} dλ] X2(f) = X1(f) X2(f)

Multiplication Theorem
If x1(t) ↔ X1(f) and x2(t) ↔ X2(f), then

    x1(t)·x2(t) ↔ X1(f) * X2(f) = ∫_{−∞}^{∞} X1(λ) X2(f − λ) dλ        (2.48)

Proof: The proof of the multiplication theorem proceeds in a manner analogous to the proof of the convolution theorem.
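The convolution theorem has an exact discrete counterpart that is easy to verify: with enough zero padding, the DFT of a linear convolution equals the product of the DFTs. A minimal sketch (test signals and lengths are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.standard_normal(64)
x2 = rng.standard_normal(64)

y = np.convolve(x1, x2)          # linear convolution, length 64+64-1 = 127
N = len(y)
Y = np.fft.fft(y)                            # DFT of the convolution
X1X2 = np.fft.fft(x1, N) * np.fft.fft(x2, N) # product of zero-padded DFTs

err = np.max(np.abs(Y - X1X2))
print(err)   # ~ 0 (numerical round-off only)
```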
The Dirac delta function δ(t) is not a true function. It can be obtained as the limit of a rectangular pulse of width T and height 1/T (unit area: A1T1 = A2T2 = 1) as the pulse width T goes to zero. The normal concept of amplitude does not apply here; instead, the concept of weight (area) is introduced.

Figure 2.13: Rectangular pulse → impulse (when T → 0)

The impulse δ(t) has the following features:

i) unit area, which depends on location:

    ∫_{t1}^{t2} δ(t − t0) dt = 1  for t1 < t0 < t2        (2.49)

ii) the ability to weight the impulse, with a resulting area other than unity:

    ∫_{t1}^{t2} A δ(t − t0) dt = A  for t1 < t0 < t2        (2.50)

iii) sampling (multiplication) property:

    x(t) δ(t − t0) = x(t0) δ(t − t0)        (2.51)

iv) shifting (sifting) property:

    ∫_{−∞}^{∞} x(t) δ(t − t0) dt = x(t0)        (2.52)

v) scaling property:

    ∫_{−∞}^{∞} δ(at) dt = (1/|a|) ∫_{−∞}^{∞} δ(t) dt = 1/|a|        (2.53)

The Fourier transform of a shifted impulse follows from the shifting property:

    F{δ(t − t0)} = ∫_{−∞}^{∞} δ(t − t0) e^{−j2πft} dt = e^{−j2πf t0},  which equals 1 when t0 = 0        (2.54)
Example 2.7: Fourier transform properties
Find the Fourier transform for the following signals:
1) v(t) = δ(t)   2) v(t) = 1   3) v(t) = cos ω0t   4) v(t) = m(t) cos ω0t, where m(t) ↔ M(f)

Solution:
1) By the shifting property,

    V(f) = ∫_{−∞}^{∞} δ(t) e^{−j2πft} dt = 1

2) v(t) = 1. Since δ(t) = ∫_{−∞}^{∞} e^{j2πft} df, we have

    V(f) = ∫_{−∞}^{∞} (1) e^{−j2πft} dt = ∫_{−∞}^{∞} e^{j2π(−f)t} dt = δ(−f) = δ(f)

where the evenness (scaling property) of the impulse function has been applied in the last step.

3) v(t) = cos ω0t = (1/2)(e^{jω0t} + e^{−jω0t}). Since 1 ↔ δ(f), the frequency-translation theorem gives e^{jω0t} = e^{j2πf0t} ↔ δ(f − f0) and e^{−jω0t} ↔ δ(f + f0). Therefore

    V(f) = (1/2)[δ(f − f0) + δ(f + f0)]

4) v(t) = m(t) cos ω0t = (1/2)(e^{jω0t} + e^{−jω0t}) m(t). From the frequency-shift property, e^{jω0t} m(t) ↔ M(f − f0) and e^{−jω0t} m(t) ↔ M(f + f0). From the linearity property,

    V(f) = (1/2)[M(f − f0) + M(f + f0)]

which is exactly the modulation theorem applied to m(t).
Table 2.2: Fourier transform pairs

    x(t)                                  X(f)
    rect(t/τ)                             τ sinc(fτ)
    Λ(t/τ) (triangular pulse)             τ sinc²(fτ)
    e^{−at} u(t),  a > 0                  1/(a + j2πf)
    e^{at} u(−t),  a > 0                  1/(a − j2πf)
    e^{−a|t|},  a > 0                     2a/(a² + (2πf)²)
    t e^{−at} u(t),  a > 0                1/(a + j2πf)²
    e^{−at} cos(2πf0t) u(t),  a > 0       (a + j2πf)/[(a + j2πf)² + (2πf0)²]
    e^{−at} sin(2πf0t) u(t),  a > 0       2πf0/[(a + j2πf)² + (2πf0)²]
    δ(t)                                  1
    1                                     δ(f)
    δ(t − t0)                             e^{−j2πf t0}
    e^{j2πfc t}                           δ(f − fc)
    cos(2πfc t)                           (1/2)[δ(f − fc) + δ(f + fc)]
    sin(2πfc t)                           (1/2j)[δ(f − fc) − δ(f + fc)]
    x(t) cos(2πfc t)                      (1/2)[X(f − fc) + X(f + fc)]
    sgn(t)                                1/(jπf)
    u(t)                                  (1/2)δ(f) + 1/(j2πf)
    sinc(2Wt)                             (1/2W) rect(f/2W)
    Σ_{i=−∞}^{∞} δ(t − iT0)               (1/T0) Σ_{n=−∞}^{∞} δ(f − n/T0)
A periodic signal x(t) with period T0 can be written as

    x(t) = Σ_{n=−∞}^{∞} cn e^{jnω0t},  where ω0 = 2π/T0        (2.55)

Taking the Fourier transform term by term (using 1 ↔ δ(f) and the frequency-translation theorem) gives

    X(f) = Σ_{n=−∞}^{∞} cn δ(f − nf0)        (2.56)
Note that the Fourier transform of a periodic signal consists of a sequence of equidistant impulses located at the harmonic frequencies of the signal.
The cross-correlation of two energy signals x(t) and y(t) is defined as

    ψxy(τ) = ∫_{−∞}^{∞} x*(t) y(t + τ) dt = ∫_{−∞}^{∞} x*(t − τ) y(t) dt        (2.57)
Correlation of a signal with a shifted copy of itself does not single out one frequency component, but implies it together with many harmonic frequencies. The time-autocorrelation function for a given energy signal x(t) is defined as

    ψx(τ) = ∫_{−∞}^{∞} x*(t) x(t + τ) dt        (2.58)

which, for real x(t), becomes

    ψx(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt        (2.58a)

With the substitution y = t + τ,

    ψx(τ) = ∫_{−∞}^{∞} x(y) x(y − τ) dy        (2.59)

This shows that for a real x(t), the autocorrelation function is an even function of τ, that is,

    ψx(τ) = ψx(−τ)        (2.60)

Note that the autocorrelation function is a function of τ, not t. Hence, its Fourier transform is

    F[ψx(τ)] = ∫_{−∞}^{∞} ψx(τ) e^{−j2πfτ} dτ = ∫_{−∞}^{∞} x(t) [∫_{−∞}^{∞} x(t + τ) e^{−j2πfτ} dτ] dt

where the inner integral, as a transform in τ, is F[x(t + τ)] = e^{j2πft} X(f). Hence

    F[ψx(τ)] = X(f) ∫_{−∞}^{∞} x(t) e^{j2πft} dt = X(f) X(−f) = |X(f)|²

Therefore,

    F[ψx(τ)] = G(f) = |X(f)|²        (2.61)

where the energy spectral density (ESD), G(f), is the Fourier transform of the autocorrelation function. Although this result is proved here for a real signal, it is valid for complex signals also.
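Equation (2.61) has an exact discrete analogue that can be verified directly: with zero padding, the DFT of the (linear) autocorrelation sequence equals |X[k]|². A minimal sketch with our own random test signal:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
L = len(x)
N = 2 * L - 1                            # pad so circular = linear correlation

phi = np.correlate(x, x, mode="full")    # autocorrelation, lags -(L-1)..(L-1)
# Rotate so lag 0 comes first, matching the DFT's origin convention
phi_shifted = np.roll(phi, -(L - 1))

esd = np.abs(np.fft.fft(x, N)) ** 2      # discrete |X(f)|^2
err = np.max(np.abs(np.fft.fft(phi_shifted) - esd))
print(err)   # ~ 0 (round-off only)
```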
Example 2.8: Autocorrelation of energy signals
Find the time-autocorrelation function of the signal x(t) = e^{−at} u(t) (where a > 0), and from it determine the ESD of x(t).

Solution: We have x(t) = e^{−at} u(t) and x(t + τ) = e^{−a(t + τ)} u(t + τ). The autocorrelation function ψx(τ) is given by the area under the product x(t)·x(t + τ), as shown in Figure 2.15. For τ ≥ 0,

    ψx(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt = ∫_{0}^{∞} e^{−at} e^{−a(t + τ)} dt = e^{−aτ} ∫_{0}^{∞} e^{−2at} dt = (1/2a) e^{−aτ}

This is valid for positive τ. We could perform a similar procedure for negative τ; however, we know that for a real x(t), ψx(τ) is an even function of τ. Therefore,

    ψx(τ) = (1/2a) e^{−a|τ|}

Figure 2.15 shows the autocorrelation function ψx(τ). The ESD, G(f), is the Fourier transform of ψx(τ). From Table 2.2,

    G(f) = F[(1/2a) e^{−a|τ|}] = (1/2a) · 2a/(a² + (2πf)²) = 1/(a² + (2πf)²)

To develop a frequency-domain description of a power signal, we would need the Fourier transform of the signal x(t). However, this poses a problem, because power signals have infinite energy and may therefore not be Fourier transformable.
To overcome the problem, we consider a truncated version xT(t) of the signal x(t) (Figure 2.16):

    xT(t) = x(t)·rect(t/T) = { x(t) for −T/2 < t < T/2;  0 elsewhere }        (2.62)

Figure 2.16: A power signal x(t) and its truncated version xT(t)

As long as T is finite, xT(t) has finite energy ET and a Fourier transform XT(f). Using equation (2.21), Rayleigh's energy theorem, and Figure 2.16, we obtain the average normalized power:

    P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt = lim_{T→∞} ET/T = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |XT(f)|² df        (2.63)

The integrand of the right-hand integral has units of watts per hertz and can be defined as the PSD. Therefore, the power spectral density (PSD), with units of watts per hertz, for a deterministic power signal is defined as

    S(f) = lim_{T→∞} |XT(f)|²/T

Note that the PSD is always a real, non-negative function of frequency. By integrating S(f) over all frequency, we obtain the normalized average power:

    P = ∫_{−∞}^{∞} S(f) df        (2.64)

This result is parallel to the result obtained for energy signals in equation (2.39). The area under the PSD function is the normalized average power. Observe that the PSD is the time average of the ESD of xT(t) (equation 2.63). As is the case with the ESD, the PSD is also a positive, real, and even function of f. If x(t) is a voltage signal, the units of the PSD are volts squared per hertz.
The time-autocorrelation function of a power signal is defined as the time average

    Rx(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x*(t) x(t + τ) dt        (2.65)

which, for real x(t), becomes

    Rx(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt        (2.65a)

Using the same argument as that used for energy signals (derived in the previous section), we can show that Rx(τ) is an even function of τ. This means that for a real x(t),

    Rx(τ) = Rx(−τ)        (2.65b)

For a periodic signal of period T, the limit may be replaced by an average over one period:

    Rx(τ) = (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt        (2.66)

For energy signals, the ESD G(f) is the Fourier transform of the autocorrelation function ψx(τ). A similar result applies to power signals, where the PSD S(f) is the Fourier transform of the autocorrelation function Rx(τ). From equation (2.65a) and Figure 2.16, we have

    Rx(τ) = lim_{T→∞} (1/T) ∫_{−∞}^{∞} xT(t) xT(t + τ) dt = lim_{T→∞} ψ_{xT}(τ)/T

so that

    F[Rx(τ)] = lim_{T→∞} |XT(f)|²/T = S(f)        (2.67)
The above proofs are also valid for complex signals. The autocorrelation function and the power spectral density are important tools for system analysis involving random signals.
Example 2.9: Autocorrelation of power signals
Find the average autocorrelation function for the sinusoidal signal x(t) = A sin(ω1t + θ), where ω1 = 2π/T1.

Solution: Since x(t) is periodic, we average over one period T1:

    Rx(τ) = (1/T1) ∫_{−T1/2}^{T1/2} A² sin(ω1t + θ) sin(ω1(t + τ) + θ) dt
          = (A²/2T1) ∫_{−T1/2}^{T1/2} [cos(ω1τ) − cos(2ω1t + ω1τ + 2θ)] dt
          = (A²/2) cos(ω1τ)

since the second cosine integrates to zero over a full period. Note that Rx(0) = A²/2 is the average power of the sinusoid. The power spectral density is the Fourier transform of the autocorrelation. Therefore,

    S(f) = F[Rx(τ)] = F[(A²/2) cos(ω1τ)] = (A²/2)·(1/2)[δ(f − f1) + δ(f + f1)] = (A²/4)[δ(f − f1) + δ(f + f1)]
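The result Rx(τ) = (A²/2) cos(ω1τ), and its independence of the phase θ, can be checked by averaging over one period numerically. A minimal sketch (A, f1, θ, and the lag values are our choices):

```python
import numpy as np

A, f1, theta = 3.0, 2.0, 0.8
T1 = 1 / f1
t = np.linspace(0, T1, 100_001)[:-1]   # one full period, endpoint excluded
dt = t[1] - t[0]

results = []
for tau in [0.0, 0.1, 0.25]:
    x = A * np.sin(2 * np.pi * f1 * t + theta)
    x_shift = A * np.sin(2 * np.pi * f1 * (t + tau) + theta)
    Rx = np.sum(x * x_shift) * dt / T1              # period average
    results.append((Rx, (A ** 2 / 2) * np.cos(2 * np.pi * f1 * tau)))
    print(tau, results[-1])
```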
Question 1
(a) A periodic signal g(t) is given (see the accompanying figure).
(i) Draw the trigonometric Fourier spectra of g(t), |Cn| and ∠Cn.
(ii) Sketch the exponential Fourier spectra of g(t).
(iii) By inspection of the exponential Fourier spectra obtained in part a(ii), find the exponential Fourier series for g(t).
(b) The Fourier transform is the common method used to analyze a non-periodic signal in the frequency domain.
(i) If g(t) has the Fourier transform G(f), show that

    g(t) cos(2πfc t) ↔ (1/2) G(f − fc) + (1/2) G(f + fc)

(ii) Name the theorem for the Fourier transform property in part b(i). Using this theorem, find the Fourier transform of the signal

    p(t) = rect(t/T) cos(2πfc t)

(iii) Is the signal p(t) in part b(ii) a power or an energy signal? Explain.
(iv) Using the result in part b(iii), determine the power or energy of p(t) if T = 1/(2fc).
Solution Question 1
a(i) From the figure, the trigonometric spectral values are C0 = 2, C1 = 0, C2 = 4, C3 = 2, C4 = 0 and θ0 = 0, θ1 = 0, θ2 = −π, θ3 = −π/2, θ4 = 0.

[Figure: single-sided spectra |Cn| and ∠Cn for n = 0, …, 4]

a(ii) The exponential (double-sided) spectra follow by splitting each line: |Vn| = Cn/2 at f = ±nf0 (with V0 = C0), with the phase spectrum odd in n.

[Figure: double-sided spectra |Vn| and ∠Vn for n = −4, …, 4]

a(iii) By inspection of the exponential Fourier spectra,

    g(t) = Σ_{n=−4}^{4} Vn e^{jnω0t}

b(i) Since cos(2πfc t) = (1/2)(e^{j2πfc t} + e^{−j2πfc t}) ↔ (1/2)[δ(f − fc) + δ(f + fc)],

    F{g(t) cos(2πfc t)} = G(f) * (1/2)[δ(f − fc) + δ(f + fc)] = (1/2) G(f − fc) + (1/2) G(f + fc)

b(ii) This is the modulation theorem. With rect(t/T) ↔ T sinc(fT),

    P(f) = (T/2) sinc((f − fc)T) + (T/2) sinc((f + fc)T)

b(iii) p(t) is an energy signal, since it is time-limited: it has finite total energy and zero average power.

b(iv)

    E = ∫_{−T/2}^{T/2} cos²(2πfc t) dt = T/2 + sin(2πfc T)/(4πfc)

With T = 1/(2fc), sin(2πfc T) = sin(π) = 0, so E = T/2 = 1/(4fc).
Question 2
(a) Determine the Fourier transforms of the signals shown in Figure S2(i) and Figure S2(ii).
(b) Determine whether the following signals are energy-type or power-type. In each case, determine the energy or power spectral density and also the energy or power content of the signal.
(i) x(t) = sinc(t)
(ii) x(t) = Σ_{n=−∞}^{∞} Λ(t − 2n)