
Chapter 2: Spectral Analysis

Chapter Contents
1. Signals and Vectors, Orthogonal Functions
2. Energy and Power Signals
3. Fourier Series, Fourier Transforms, Signal Spectrum
4. Correlation, Spectral Density

Objective:

To answer the following queries and study related topics:

1. Signals and vectors, orthogonal functions
   - What is meant by the component of one vector / signal in another?
   - What is the minimum mean square error (MMSE) criterion?
   - What is the condition for two signals to be orthogonal?
   - Examples of some commonly encountered orthogonal functions
2. Energy and power signals
   - The difference between energy and power signals
   - Familiarization with terms related to power and energy
3. Fourier series, Fourier transforms, signal spectrum
   - What is the Fourier series representation? What is the generalized Fourier series representation?
   - Frequency analysis of periodic signals: find the Fourier series representation of a function in trigonometric or exponential form
   - Understand the concept of the frequency content of a signal
   - Single- and double-sided plots of the amplitude and phase spectrum
   - Frequency analysis of aperiodic signals: find the Fourier transform of a signal
   - Understand the concept of the impulse, defined by its properties
   - Recognize some important properties of the Fourier transform
4. Correlation, spectral density
   - What is the correlation of power and energy signals?
   - What is power and energy spectral density?

Learning outcome of Chapter 2:


At the completion of the subject, students should be able to understand the concepts of signals and vectors, orthogonal functions, Fourier series, Fourier transforms, convolution and correlation for spectral analysis.

Revised November 2008


In this chapter, we review the basics of signals and linear systems in the frequency domain. The motivation for studying these fundamental concepts stems from the basic role they play in modeling various types of communication systems. In transmission of an information signal over a communication channel, the shape of the signal is changed, or distorted, by the channel. The communication channel is an example of a system, i.e., an entity that produces an output signal when excited by an input signal. A large number of communication channels can be modeled closely by a subclass of systems called linear systems, which arise naturally in many practical applications. We have devoted this entire chapter to the study of the basics of signals and linear systems in the frequency domain.

2.1 SIGNALS AND VECTORS


2.1.1 Component of a Vector
A vector, V, is specified by its magnitude and direction. Consider two vectors, V1 and V2.
a) Suppose we want to write V1 in terms of V2. The component of V1 along V2 is C12V2, where C12 is chosen such that the error vector Ve is minimum:

V1 = C12 V2 + Ve    (2.1)

Figure 2.1: Vectors V1, V2, the component C12V2, and the error vector Ve

b) Physical interpretation: the larger the component of a vector along the other vector, the more closely do the two vectors resemble each other in their directions, and the smaller is the error vector. c) C12 is given by :

C12 = (V1 · V2) / |V2|²    (2.2)


d) Orthogonal vectors: if C12 = 0, or

V1 · V2 = 0 = |V1| |V2| cos θ    (2.3)

then the vectors are independent of each other.

2.1.2 Component of a Signal


Consider two signals, f1(t) and f2(t).
a) Suppose we want to approximate f1(t) in terms of f2(t) over a certain interval t1 < t < t2:

f1(t) ≈ C12 f2(t)    (2.4)

b) The error function is given by

fe(t) = f1(t) − C12 f2(t)    (2.5)

This error in the approximation can be minimized by minimizing the average (mean) of the square of the error:

ε = (1/(t2 − t1)) ∫_{t1}^{t2} fe²(t) dt    (2.6)

c) To find the value of C12 which will minimize ε, we must have dε/dC12 = 0. The solution is:

C12 = ∫_{t1}^{t2} f1(t) f2(t) dt / ∫_{t1}^{t2} f2²(t) dt    (2.7)

d) From the previous section on vectors, we saw that a vector V1 contains a component C12V2; in vector terminology, C12V2 is the projection of V1 on V2. Continuing with that analogy, we say that if the component of a signal f1(t) of the form f2(t) is zero (that is, C12 = 0), the signals f1(t) and f2(t) are orthogonal. Therefore, we define the real signals f1(t) and f2(t) to be orthogonal over an interval (t1, t2) if:

C12 = 0, or equivalently ∫_{t1}^{t2} f1(t) f2(t) dt = 0    (2.8)


Examples 2.1: Minimum mean square error
A rectangular function f(t) is defined by

f(t) = { 1,  0 < t < π;  −1,  π < t < 2π }

Approximate this function by the waveform sin t over the interval (0, 2π) such that the mean square error is minimal.

Solution: Let f(t) ≈ C12 sin t. To minimize the mean square error,

C12 = ∫_0^{2π} f(t) sin t dt / ∫_0^{2π} sin²t dt = (1/π) [∫_0^{π} sin t dt − ∫_{π}^{2π} sin t dt] = 4/π

Thus, f(t) ≈ (4/π) sin t represents the best approximation of f(t) by the function sin t in the minimum mean square error (MMSE) sense.

Figure 2.2: Example of MMSE approximation
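To make the MMSE computation concrete, here is a minimal numerical sketch (an addition to these notes, not part of the original text); the grid step and the use of NumPy are assumptions made purely for illustration.

```python
import numpy as np

# Numerical version of Example 2.1: approximate the square function by
# C12*sin(t) over (0, 2*pi) in the MMSE sense; C12 should be close to 4/pi.
dt = 1e-4
t = np.arange(0, 2 * np.pi, dt)
f = np.where(t < np.pi, 1.0, -1.0)        # f(t) = 1 on (0, pi), -1 on (pi, 2*pi)

c12 = (np.sum(f * np.sin(t)) * dt) / (np.sum(np.sin(t)**2) * dt)
print(c12, 4 / np.pi)                     # both approximately 1.2732
```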

2.1.3 Orthogonal Functions


Functions φn(t) and φm(t) are said to be orthogonal with respect to each other over the interval (a, b) if they satisfy the condition

∫_a^b φn(t) φm*(t) dt = 0,  n ≠ m    (2.9)

The zero result implies that these functions are independent. If the result is not zero, the two functions have some dependence on each other. If the functions of the set {φn(t)} are orthogonal, then they also satisfy the relation

∫_a^b φn(t) φm*(t) dt = { 0, n ≠ m;  Kn, n = m } = Kn δnm    (2.10)

where

δnm = { 1, n = m;  0, n ≠ m }

is called the Kronecker delta. If the constants Kn all equal 1, the functions φn(t) are said to be orthonormal functions.

Examples of orthogonal functions: trigonometric functions [sin(kω₀t), cos(kω₀t)] and complex exponential functions [exp(jkω₀t)].
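As a quick numerical illustration of equations (2.9) and (2.10), the following sketch (an added example; T0 = 2 and the step size are arbitrary assumed values) integrates products of sinusoidal harmonics over one period.

```python
import numpy as np

# Harmonics sin(n*w0*t) are mutually orthogonal over one period (a, b) = (0, T0);
# when n == m the integral gives the constant Kn = T0/2.
T0 = 2.0
w0 = 2 * np.pi / T0
dt = 1e-5
t = np.arange(0, T0, dt)

def inner(n, m):
    return np.sum(np.sin(n * w0 * t) * np.sin(m * w0 * t)) * dt

print(inner(2, 3))   # ~0   -> orthogonal (n != m)
print(inner(2, 2))   # ~1.0 -> Kn = T0/2
```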

2.2 SIGNAL MODELS


2.2.1 Deterministic and Random Signals
In communication systems we are concerned with two broad classes of signals, referred to as deterministic and random. Deterministic signals can be modeled as completely specified functions of time; for example, the signal x(t) = A cos(ω₀t), where A and ω₀ are constants. Random signals take on random values at any given time instant and must be modeled probabilistically.

2.2.2 Periodic and Aperiodic Signals


A signal x(t) is periodic if and only if

x(t + T₀) = x(t),   −∞ < t < ∞    (2.11)

Any signal not satisfying equation (2.11) is called aperiodic.

2.2.3 Phasor Signals and Spectra


An important periodic signal in system analysis is the complex signal

x̃(t) = A e^{j(ω₀t + θ)},   −∞ < t < ∞    (2.12)

We will refer to equation (2.12) as a rotating phasor to distinguish it from the phasor Ae^{jθ}. Using Euler's theorem, we may show that equation (2.12) is periodic with period T₀ = 2π/ω₀. The rotating phasor Ae^{j(ω₀t + θ)} can be related to a real sinusoidal signal in two ways. The first is:

x(t) = A cos(ω₀t + θ) = Re x̃(t) = Re[A e^{j(ω₀t + θ)}]    (2.13)

The second is:

A cos(ω₀t + θ) = ½ x̃(t) + ½ x̃*(t) = ½ A e^{j(ω₀t + θ)} + ½ A e^{−j(ω₀t + θ)}    (2.14)
The equivalent representation of equation (2.13) in frequency domain is shown in Figure 2.3(a). The resulting plots are referred to as the amplitude line spectrum and the phase line spectrum for x(t), or the single-sided amplitude and phase spectra of x(t). The spectrum of equation (2.14) is shown in Figure 2.3(b) and referred to as the double-sided amplitude and phase spectra.


Figure 2.3: Amplitude and phase spectra for the signal A cos(ω₀t + θ): (a) single-sided; (b) double-sided

2.2.4 Singularity Functions


An important subclass of aperiodic signals is the singularity functions. Here, we will be concerned with only two: the unit impulse function δ(t) and the unit step function u(t). The unit impulse function is defined in terms of the integral

∫_{−∞}^{∞} x(t) δ(t) dt = x(0)    (2.15)

A change of variables and redefinition of x(t) results in the shifting property

∫_{−∞}^{∞} x(t) δ(t − t₀) dt = x(t₀)    (2.16)

The other singularity function may be defined as

u(t) = ∫_{−∞}^{t} δ(λ) dλ = { 0, t < 0;  1, t > 0;  undefined, t = 0 }    (2.17)

Table 2.1: Some useful function definitions

sgn(t) = 1 for t > 0, −1 for t < 0   (signum function)
u(t) = 1 for t > 0, 0 for t < 0   (unit step)
rect(t/T) = Π(t/T) = 1 for |t| ≤ T/2, 0 for |t| > T/2   (rectangular pulse of width T)
Λ(t/T) = 1 − |t|/T for |t| ≤ T, 0 for |t| > T   (triangular pulse; note this is twice as wide (2T) as rect(t/T), which is only T wide)


2.3 SIGNAL CLASSIFICATIONS


2.3.1 Energy and Power Signals
In this chapter we will be considering two signal classes, those with finite energy and those with finite power. As a specific example, suppose e(t) is the voltage across a resistance R producing a current i(t). The instantaneous power per ohm is p(t) = e(t)i(t)/R = i²(t). The total energy and the average power, on a per-ohm basis, are obtained as the limits

E = lim_{T→∞} ∫_{−T/2}^{T/2} i²(t) dt    (2.18)

and

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} i²(t) dt    (2.19)

For an arbitrary signal x(t), which may in general be complex, we define the total (normalized) energy as

E = lim_{T→∞} ∫_{−T/2}^{T/2} |x(t)|² dt = ∫_{−∞}^{∞} |x(t)|² dt    (2.20)

and the (normalized) power as

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt    (2.21)
Based on these definitions, we can define two distinct classes of signals:
1. x(t) is an energy signal if and only if 0 < E < ∞ (finite and nonzero), so that P = 0. An aperiodic signal is usually an energy signal (its energy is finite, so averaging over an infinite duration gives zero power).
2. x(t) is a power signal if and only if 0 < P < ∞ (finite and nonzero), implying that E = ∞. A periodic signal is a power signal (its energy is infinite because of its infinite duration).
Note: every signal that can be generated in a lab has finite energy; in other words, every signal observed in real life is an energy signal. A power signal must necessarily have an infinite duration.

Examples 2.2: Energy and Power Signals
As an example of determining the classification of a signal, consider

x₁(t) = A e^{−αt} u(t),   α > 0

where A and α are constants. Using equation (2.20), we may readily verify that x₁(t) is an energy signal, since E = A²/(2α). Letting α → 0, we obtain the signal x₂(t) = A u(t), which has infinite energy. Applying equation (2.21), we find that P = A²/2, thus verifying that x₂(t) is a power signal.
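The classification in Example 2.2 can also be checked numerically. The sketch below is an illustrative addition to the notes; the values A = 2 and alpha = 0.5, the integration step, and the truncation window are all assumed.

```python
import numpy as np

# x1(t) = A*exp(-alpha*t)*u(t): energy signal, E = A**2/(2*alpha).
# x2(t) = A*u(t): power signal, P = A**2/2.
A, alpha = 2.0, 0.5
dt = 1e-3

t = np.arange(0, 100, dt)                  # u(t) restricts the integral to t >= 0
E = np.sum((A * np.exp(-alpha * t))**2) * dt
print(E, A**2 / (2 * alpha))               # ~4.0 and 4.0

T = 1000.0                                 # large window (-T/2, T/2) for the power
tw = np.arange(-T / 2, T / 2, dt)
x2 = np.where(tw >= 0, A, 0.0)
P = np.sum(x2**2) * dt / T
print(P, A**2 / 2)                         # ~2.0 and 2.0
```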



2.4 FOURIER SERIES

2.4.1 Fourier analysis of signals


The analysis of signals and linear systems in the frequency domain is based on representing signals in terms of frequency; this is done by employing the Fourier series and the Fourier transform. The Fourier series applies to periodic signals, whereas the Fourier transform can be applied to both periodic and aperiodic signals.

2.4.2 Complex Exponential Fourier Series


The complex Fourier series uses the orthogonal exponential functions φn(t) = e^{jnω₀t}, where n ranges over all possible integer values (negative, positive, and zero) and ω₀ = 2π/T₀. A signal x(t) can be expressed over an interval of duration T₀ seconds as the exponential Fourier series

x(t) = Σ_{n=−∞}^{∞} cn e^{jnω₀t}    (2.22)

where the complex Fourier coefficients cn are given by

cn = (1/T₀) ∫_{t₀}^{t₀+T₀} x(t) e^{−jnω₀t} dt    (2.23)

for some arbitrary t₀. By setting t₀ = −T₀/2, we have

cn = (1/T₀) ∫_{−T₀/2}^{T₀/2} x(t) e^{−jnω₀t} dt    (2.24)

and ω₀ = 2πf₀ = 2π/T₀. The frequency f₀ = 1/T₀ is said to be the fundamental frequency, and nf₀ = n/T₀ is said to be the nth harmonic frequency when n > 1.


The Fourier coefficient

co is equivalent to the dc value of the waveform x(t)

when n = 0. In general,

cn

is a complex number and can be expressed as

cn = c n e j n that gives the magnitude and

phase angle of nth harmonic.

Some properties of the complex Fourier series:
- If x(t) is real, then cn* = c₋n; therefore |cn| = |c₋n| and θn = −θ₋n.
- If x(t) is real and even, then Im[cn] = 0; if x(t) is real and odd, then Re[cn] = 0.


Example 2.3 : Exponential Fourier series Let x(t) denote the periodic signal depicted in Figure 2.4 and described analytically by

x(t) = Σ_{n=−∞}^{∞} Π((t − nT₀)/τ) = Σ_{n=−∞}^{∞} rect((t − nT₀)/τ)

where Π(t/τ) or rect(t/τ) is a rectangular pulse of width τ. Determine the Fourier series expansion for this signal.

Figure 2.4: A periodic rectangular waveform

Solution: We observe that the period is T₀, and

cn = (1/T₀) ∫_{−T₀/2}^{T₀/2} x(t) e^{−jnω₀t} dt = (1/T₀) ∫_{−τ/2}^{τ/2} (1) e^{−j2πnt/T₀} dt
   = (1/(j2πn)) [e^{jnπτ/T₀} − e^{−jnπτ/T₀}] = (1/(nπ)) sin(nπτ/T₀) = (τ/T₀) sinc(nτ/T₀)

where sinc(x) ≜ sin(πx)/(πx).

Figure 2.5: Fourier series coefficients for the periodic rectangular waveform
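The coefficient formula of Example 2.3 can be verified by direct numerical integration of equation (2.24). The following sketch is an added illustration; T0 = 1 and tau = 0.25 are assumed values, and NumPy's sinc is the normalized sin(pi x)/(pi x).

```python
import numpy as np

# Fourier coefficients of the rectangular pulse train: cn = (tau/T0)*sinc(n*tau/T0).
T0, tau = 1.0, 0.25
w0 = 2 * np.pi / T0
dt = 1e-5
t = np.arange(-T0 / 2, T0 / 2, dt)
x = np.where(np.abs(t) <= tau / 2, 1.0, 0.0)

for n in range(5):
    cn = np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T0
    print(n, np.round(cn.real, 4), np.round((tau / T0) * np.sinc(n * tau / T0), 4))
```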


2.4.3 Fourier Series for Real Signals: Trigonometric Fourier Series


For real x(t), we have cn* = c₋n; taking the complex conjugate inside the integral, and noting that the same result is obtained by replacing n by −n in cn = |cn| e^{jθn}, we obtain

|cn| = |c₋n|  and  θ₋n = −θn

We can now regroup the complex exponential Fourier series by pairs of terms of the form

cn e^{jnω₀t} + c₋n e^{−jnω₀t} = |cn| e^{j(nω₀t + θn)} + |cn| e^{−j(nω₀t + θn)} = 2|cn| cos(nω₀t + θn)

Hence equation (2.22) can be written in the equivalent compact trigonometric form

x(t) = c₀ + Σ_{n=1}^{∞} 2|cn| cos(nω₀t + θn)    (2.25)

Expanding the cosine in equation (2.25), we obtain still another equivalent series of the form

x(t) = c₀ + Σ_{n=1}^{∞} [An cos(nω₀t) + Bn sin(nω₀t)]    (2.26)

where

An = 2|cn| cos(θn) = (2/T₀) ∫_{t₀}^{t₀+T₀} x(t) cos(nω₀t) dt    (2.27)

and

Bn = −2|cn| sin(θn) = (2/T₀) ∫_{t₀}^{t₀+T₀} x(t) sin(nω₀t) dt    (2.28)

and also

c₀ = (1/T₀) ∫_{t₀}^{t₀+T₀} x(t) dt    (2.29)

|cn| and θn are also related to An and Bn by the equations

|cn| = ½ √(An² + Bn²)  and  θn = −tan⁻¹(Bn/An)    (2.30a & 2.30b)

In either the trigonometric or the exponential form of the Fourier series, c₀ represents the average or dc component of x(t). The term for n = 1 is called the fundamental, the term for n = 2 is called the second harmonic, and so on.


2.4.3.1 Trigonometric Fourier series for Even and Odd functions


In some of the problems encountered, the Fourier coefficients c₀, An or Bn become zero after integration. Finding zero coefficients is time consuming and can be avoided: with a knowledge of even and odd functions, a zero coefficient may be predicted without performing the integration. A function x(t) is said to be even if x(−t) = x(t) for all values of t. The graph of an even function is always symmetrical about the y-axis (a mirror image). For an even function x(t) defined over the range (−T, T), the zero coefficient is Bn = 0. A function x(t) is said to be odd if x(−t) = −x(t) for all values of t. The graph of an odd function is always symmetrical about the origin. For an odd function x(t) defined over the range (−T, T), the zero coefficients are c₀ = 0 and An = 0.

2.4.4 Frequency or Line Spectra


Plots of |cn| and θn versus the frequency f are called the amplitude spectrum and the phase spectrum of the periodic signal x(t), respectively. These plots are referred to as the frequency spectra of x(t). Since the index n assumes only integer values, the frequency spectra of a periodic signal exist only at the discrete frequencies nf₀. They are therefore referred to as discrete frequency spectra or line spectra. (i) Single-sided plot of Fourier series coefficients

Each frequency component is represented by a line at frequency f = nf₀. To obtain the single-sided spectrum, write the function in the compact trigonometric form

x(t) = c₀ + Σ_{n=1}^{∞} 2|cn| cos(nω₀t + θn)

so that the line at f = nf₀ has amplitude 2|cn| (c₀ for the dc term) and phase θn.

Figure 2.6: Single-sided line spectra

(ii) Double-sided plot of Fourier series coefficients

To obtain the double-sided spectrum, write the function in the exponential form

x(t) = Σ_{n=−∞}^{∞} cn e^{jnω₀t}

so that lines of amplitude |cn| and phase θn appear at f = nf₀ for both positive and negative n.

Figure 2.7: Double-sided line spectra

Example 2.4: Trigonometric Fourier series
Evaluate the trigonometric Fourier series expansion for the periodic function x(t), a triangular waveform with period T₀ = 1 and x(0) = 1. Sketch the magnitude spectrum for n = 3.

Figure 2.8: Triangular waveform, x(t)

Solution:

x(t) = c₀ + Σ_{n=1}^{∞} An cos(nω₀t) + Σ_{n=1}^{∞} Bn sin(nω₀t),   with T₀ = 1, ω₀ = 2π

Over one period the waveform is x(t) = 1 − 2|t| for −1/2 ≤ t ≤ 1/2, so

c₀ = (1/T₀) ∫_{−T₀/2}^{T₀/2} x(t) dt = 2 ∫_0^{1/2} (1 − 2t) dt = 2 [t − t²]_0^{1/2} = 1/2

Bn = (2/T₀) ∫_{−T₀/2}^{T₀/2} x(t) sin(2πnt/T₀) dt = 0   (even function)

An = (2/T₀) ∫_{−T₀/2}^{T₀/2} x(t) cos(2πnt/T₀) dt = 4 ∫_0^{1/2} (1 − 2t) cos(2πnt) dt
   = 4 [sin(2πnt)/(2πn)]_0^{1/2} − (8/(2πn)²) [cos(2πnt) + 2πnt sin(2πnt)]_0^{1/2}
   = (2/(nπ)²) [1 − cos(nπ)] = { 4/(nπ)², n odd;  0, n even }

Hence:

x(t) = c₀ + Σ_{n=1}^{∞} An cos(2πnt) = 1/2 + Σ_{n odd} (4/(nπ)²) cos(2πnt)
     = 1/2 + (4/π²) cos(2πt) + (4/9π²) cos(6πt) + (4/25π²) cos(10πt) + ...

Figure 2.9: Amplitude (or magnitude) spectrum: (a) single-sided; (b) double-sided
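The series obtained in Example 2.4 can be checked by summing a few harmonics and comparing with the triangular waveform directly. The sketch below is an added illustration; keeping harmonics up to n = 19 is an arbitrary choice.

```python
import numpy as np

# Partial sums of x(t) = 1/2 + sum over odd n of 4/(n*pi)**2 * cos(2*pi*n*t)
# should approach the triangular wave 1 - 2|t| on |t| <= 1/2.
t = np.linspace(-0.5, 0.5, 1001)
x_true = 1 - 2 * np.abs(t)

x_hat = 0.5 * np.ones_like(t)
for n in range(1, 20, 2):                      # odd harmonics only
    x_hat += 4 / (n * np.pi)**2 * np.cos(2 * np.pi * n * t)

print(np.max(np.abs(x_true - x_hat)))          # small residual (~0.01)
```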

2.4.5 Parseval's Theorem


A periodic signal is a power signal, and every term in its Fourier series is also a power signal. The power of x(t) equals the power of its Fourier series, which is the sum of the powers of its Fourier components. Parseval's theorem for the Fourier series states that the average power of a periodic signal x(t) is the sum of the powers in the phasor components of its Fourier series:

P = (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt = Σ_{n=−∞}^{∞} |cn|²    (2.31)
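Parseval's theorem (2.31) can be illustrated with the triangular wave of Example 2.4, whose coefficients are c₀ = 1/2 and |cn| = 2/(nπ)² for odd n. The sketch below is an added numerical check, not part of the original notes.

```python
import numpy as np

# Time-averaged power of the triangular wave vs. the sum of |cn|**2.
dt = 1e-5
t = np.arange(-0.5, 0.5, dt)
x = 1 - 2 * np.abs(t)                          # one period, T0 = 1
P_time = np.sum(x**2) * dt / 1.0               # = 1/3

# |c0|**2 plus the two-sided harmonic terms, cn = 2/(n*pi)**2 for odd n
P_freq = 0.5**2 + 2 * sum((2 / (n * np.pi)**2)**2 for n in range(1, 200, 2))
print(P_time, P_freq)                          # both approximately 0.3333
```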


2.5 FOURIER TRANSFORMS


2.5.1 Frequency Domain Representation of Aperiodic Signals
A non-periodic signal can be viewed as a limiting case of a periodic signal whose period approaches infinity. As the period approaches infinity, the fundamental frequency f₀ approaches zero. The harmonics get closer and closer together, and in the limit the Fourier series summation becomes an integral. In this manner, we can develop the Fourier integral (transform) theory. The Fourier transform (FT) of a waveform x(t), symbolized by F, is defined by:

X(f) = F[x(t)] = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt    (2.32)

The inverse Fourier transform (IFT) of X(f), symbolized by F⁻¹, is defined by:

x(t) = F⁻¹[X(f)] = ∫_{−∞}^{∞} X(f) e^{j2πft} df    (2.33)

Equations (2.32) and (2.33) are called the Fourier transform pair, denoted by x(t) ↔ X(f). By using the Fourier transformation, an energy signal x(t) is represented by its Fourier transform X(f), which is a function of the frequency variable f.

2.5.2 Amplitude and Phase Spectra


In general, the Fourier transform X(f) is a complex function of the continuous frequency f. We may therefore express it in the form

X(f) = |X(f)| e^{jθ(f)},  where θ(f) = ∠X(f)    (2.34)

where |X(f)| is called the continuous amplitude spectrum of x(t), and θ(f) is called the continuous phase spectrum of x(t). If x(t) is a real function of time, we have

X(−f) = X*(f) = |X(f)| e^{−jθ(f)}    (2.35)

or

|X(−f)| = |X(f)|  and  θ(−f) = −θ(f)    (2.36)

Thus, just as for the complex Fourier series, the amplitude spectrum |X(f)| is an even function of f and the phase spectrum θ(f) is an odd function of f.


Example 2.5: Fourier transform of a rectangular function
Find the Fourier transform of the following gate function:

g(t) = rect(t/τ) = { 1, |t| ≤ τ/2;  0, |t| > τ/2 }

Figure 2.10: A rectangular gate function

Solution:

G(f) = F[g(t)] = ∫_{−∞}^{∞} g(t) e^{−j2πft} dt = ∫_{−τ/2}^{τ/2} (1) e^{−j2πft} dt = sin(πfτ)/(πf) = τ sinc(fτ)

Therefore, rect(t/τ) ↔ τ sinc(fτ).

Figure 2.11: A sinc function, G(f) = sinc(f) (for τ = 1)

Example 2.6: Fourier transform of an exponential function
Find the Fourier transform of x(t) = e^{−at} u(t), where a > 0.

Solution: The Fourier transform of x(t) is

X(f) = ∫_{−∞}^{∞} e^{−at} u(t) e^{−j2πft} dt = ∫_0^{∞} e^{−(a + j2πf)t} dt = [−e^{−(a + j2πf)t}/(a + j2πf)]_0^{∞} = 1/(a + j2πf)

Expressing X(f) in polar form, we obtain

X(f) = (1/√(a² + (2πf)²)) e^{−j tan⁻¹(2πf/a)},  where |X(f)| = 1/√(a² + (2πf)²) and θ(f) = −tan⁻¹(2πf/a)
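The result of Example 2.6 can be verified by evaluating the Fourier integral numerically at a few frequencies. This sketch is an added illustration; a = 1, the grid step, and the truncation of the integral at t = 60 are assumptions.

```python
import numpy as np

# X(f) of x(t) = exp(-a*t)*u(t) should equal 1/(a + j*2*pi*f).
a = 1.0
dt = 1e-4
t = np.arange(0, 60, dt)                       # exp(-a*t) is negligible beyond t ~ 60
x = np.exp(-a * t)

for f in (0.0, 0.5, 2.0):
    Xf = np.sum(x * np.exp(-1j * 2 * np.pi * f * t)) * dt
    print(f, np.round(Xf, 4), np.round(1 / (a + 1j * 2 * np.pi * f), 4))
```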

2.5.3 Energy Spectral Density (ESD) and Parseval's Theorem


Parseval's theorem for the Fourier transform states that if x(t) is an energy signal, then

E = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df    (2.37)

Equation (2.37) gives an alternative method for evaluating the energy by using the frequency-domain description instead of the time-domain definition. Examining |X(f)|², we note that it has units of (volts·seconds)² or, since we are considering power on a per-ohm basis, (watt·seconds) per hertz = joules per hertz. Thus |X(f)|² has the units of energy density, and we can define the energy spectral density (ESD), with units of joules per hertz, for an energy signal by:

G(f) = |X(f)|²    (2.38)

By integrating G(f) over all frequencies, we obtain the total energy:

E = ∫_{−∞}^{∞} G(f) df    (2.39)


2.5.4 Properties of Fourier Transform


Superposition Theorem
If x1(t) ↔ X1(f) and x2(t) ↔ X2(f), then

a1 x1(t) + a2 x2(t) ↔ a1 X1(f) + a2 X2(f)    (2.40)

Proof: trivial.

Frequency Translation Theorem
If x(t) ↔ X(f), then

x(t) e^{j2πf₀t} ↔ X(f − f₀)    (2.41)

Proof:
F[x(t) e^{j2πf₀t}] = ∫_{−∞}^{∞} x(t) e^{j2πf₀t} e^{−j2πft} dt = ∫_{−∞}^{∞} x(t) e^{−j2π(f − f₀)t} dt = X(f − f₀)

Time-Delay Theorem
If x(t) ↔ X(f), then

x(t − t₀) ↔ X(f) e^{−j2πf t₀}    (2.42)

Proof: Let y = t − t₀; then
F[x(t − t₀)] = ∫_{−∞}^{∞} x(t − t₀) e^{−j2πft} dt = ∫_{−∞}^{∞} x(y) e^{−j2πf(y + t₀)} dy = e^{−j2πf t₀} ∫_{−∞}^{∞} x(y) e^{−j2πfy} dy = X(f) e^{−j2πf t₀}

Scale-Change Theorem
If x(t) ↔ X(f), then

x(at) ↔ (1/|a|) X(f/a)    (2.43)

Proof: Assume a > 0 and let y = at; then
F[x(at)] = ∫_{−∞}^{∞} x(at) e^{−j2πft} dt = ∫_{−∞}^{∞} x(y) e^{−j2πf y/a} (dy/a) = (1/a) X(f/a)
A similar development for a < 0 gives (1/|a|) X(f/a).

Duality Theorem
If x(t) ↔ X(f), then

X(t) ↔ x(−f)    (2.44)

Proof: The proof of this theorem follows by virtue of the fact that the only difference between the Fourier transform integral and the inverse Fourier transform integral is a minus sign in the exponent of the integrand.

Modulation Theorem
If x(t) ↔ X(f), then

x(t) cos(2πf₀t) ↔ ½ X(f − f₀) + ½ X(f + f₀)    (2.45)

Proof: The proof of this theorem follows by writing cos(2πf₀t) in exponential form as ½ e^{j2πf₀t} + ½ e^{−j2πf₀t} and applying the superposition and frequency translation theorems.

Differentiation Theorem
If x(t) ↔ X(f), then

dⁿx(t)/dtⁿ ↔ (j2πf)ⁿ X(f)    (2.46)

Proof: We prove the theorem for n = 1 by using integration by parts on the defining Fourier transform integral:

F[dx(t)/dt] = ∫_{−∞}^{∞} (dx(t)/dt) e^{−j2πft} dt = [x(t) e^{−j2πft}]_{−∞}^{∞} + j2πf ∫_{−∞}^{∞} x(t) e^{−j2πft} dt = j2πf X(f)

where the definitions u = e^{−j2πft} and dv = (dx/dt)dt have been used in the integration-by-parts formula, and the first term of the middle expression vanishes at each end point by virtue of x(t) being an energy signal. The proof for values of n > 1 follows by induction.

Convolution Theorem
If x1(t) ↔ X1(f) and x2(t) ↔ X2(f), then

x1(t) * x2(t) = ∫_{−∞}^{∞} x1(λ) x2(t − λ) dλ = ∫_{−∞}^{∞} x1(t − λ) x2(λ) dλ ↔ X1(f) X2(f)    (2.47)

When the convolution (denoted by *) of two signals x1(t) and x2(t) is carried out, a new function of time x(t) is produced. The integrand is formed from x1 and x2 by three operations:
1. Time reversal to obtain x2(−λ)
2. Time shifting to obtain x2(t − λ)
3. Multiplication of x1(λ) and x2(t − λ) to form the integrand

Figure 2.12: The convolution process for two exponentially decaying signals

Proof:
F[x1(t) * x2(t)] = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x1(λ) x2(t − λ) dλ] e^{−j2πft} dt = ∫_{−∞}^{∞} x1(λ) [∫_{−∞}^{∞} x2(t − λ) e^{−j2πft} dt] dλ

By the time-delay theorem, ∫_{−∞}^{∞} x2(t − λ) e^{−j2πft} dt = X2(f) e^{−j2πfλ}. Thus, we have

F[x1(t) * x2(t)] = ∫_{−∞}^{∞} x1(λ) X2(f) e^{−j2πfλ} dλ = [∫_{−∞}^{∞} x1(λ) e^{−j2πfλ} dλ] X2(f) = X1(f) X2(f)

In a communication system, the output (response) of a (stationary, or time- or space-invariant) linear system is the convolution of the input (excitation) with the system's response to an impulse or Dirac delta function.

Multiplication Theorem
If x1(t) ↔ X1(f) and x2(t) ↔ X2(f), then

x1(t)·x2(t) ↔ X1(f) * X2(f) = ∫_{−∞}^{∞} X1(λ) X2(f − λ) dλ    (2.48)

Proof: The proof of the multiplication theorem proceeds in a manner analogous to the proof of the convolution theorem.
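The convolution theorem (2.47) has a direct discrete counterpart that is easy to check numerically: the DFT of a zero-padded linear convolution equals the product of the individual DFTs. The sketch below is an added illustration; the two exponential signals and the sampling step are assumed examples.

```python
import numpy as np

# Discrete check of F[x1 * x2] = X1(f) X2(f).
dt = 0.01
t = np.arange(0, 5, dt)
x1 = np.exp(-t)                                # samples of e^{-t} u(t)
x2 = np.exp(-2 * t)                            # samples of e^{-2t} u(t)

y = np.convolve(x1, x2) * dt                   # approximates the continuous convolution
N = len(y)                                     # = 2*len(t) - 1

Y  = np.fft.fft(y) * dt
XX = (np.fft.fft(x1, N) * dt) * (np.fft.fft(x2, N) * dt)
print(np.max(np.abs(Y - XX)))                  # ~0 (floating-point level)
```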

2.5.5 Impulse or Dirac Delta Function, δ(t)

Figure 2.13: A rectangular pulse becomes an impulse as T → 0 (with A₁T₁ = A₂T₂ = 1)

The Dirac delta function is not a true function. It can be obtained by letting the width T of a rectangular pulse of unit area go to zero (so that its amplitude 1/T goes to infinity). The normal concept of amplitude does not apply here; instead, the concept of weight is introduced. The impulse δ(t) has the following features:

i) unit area, which depends on location:
∫_{t1}^{t2} δ(t − t₀) dt = { 1, t1 < t₀ < t2;  0, elsewhere }    (2.49)

ii) ability to weight the impulse with a resulting area other than unity:
∫_{t1}^{t2} A δ(t − t₀) dt = { A, t1 < t₀ < t2;  0, elsewhere }    (2.50)

iii) sampling property:
x(t) δ(t − t₀) = x(t₀) δ(t − t₀)    (2.51)

iv) shifting property:
∫_{−∞}^{∞} x(t) δ(t − t₀) dt = x(t₀)    (2.52)

v) scaling property:
∫_{−∞}^{∞} δ(at) dt = (1/|a|) ∫_{−∞}^{∞} δ(t) dt = 1/|a|    (2.53)

vi) Fourier transform pair:
F{δ(t − t₀)} = ∫_{−∞}^{∞} δ(t − t₀) e^{−j2πft} dt = e^{−j2πft₀};  = 1 for t₀ = 0    (2.54)

Figure 2.14: Impulse representation


Example 2.7: Fourier transform properties
Find the Fourier transforms of the following signals: 1) v(t) = δ(t); 2) v(t) = 1; 3) v(t) = cos(ω₀t); 4) v(t) = m(t) cos(ω₀t), where m(t) ↔ M(f).

Solutions

1) v(t) = δ(t):
V(f) = ∫_{−∞}^{∞} δ(t) e^{−j2πft} dt = 1

2) v(t) = 1: using δ(t) = ∫_{−∞}^{∞} e^{j2πft} df, we have
V(f) = ∫_{−∞}^{∞} e^{−j2πft} dt = ∫_{−∞}^{∞} e^{j2π(−f)t} dt = δ(−f) = δ(f)
where the scaling (even) property of the impulse function has been applied in the last step.

3) v(t) = cos(ω₀t) = ½(e^{jω₀t} + e^{−jω₀t}).
From the frequency shift property, since 1 ↔ δ(f):
e^{jω₀t} = e^{j2πf₀t} ↔ δ(f − f₀)   (centered at f₀)
e^{−jω₀t} = e^{−j2πf₀t} ↔ δ(f + f₀)   (centered at −f₀)
From the linearity property: V(f) = ½[δ(f − f₀) + δ(f + f₀)]

4) v(t) = m(t) cos(ω₀t) = ½(e^{jω₀t} + e^{−jω₀t}) m(t), with m(t) ↔ M(f).
From the frequency shift property:
e^{jω₀t} m(t) ↔ M(f − f₀)   (centered at f₀)
e^{−jω₀t} m(t) ↔ M(f + f₀)   (centered at −f₀)
From the linearity property: V(f) = ½[M(f − f₀) + M(f + f₀)]
This is the modulation theorem: m(t) cos(ω₀t) ↔ ½[M(f − f₀) + M(f + f₀)]

Table 2.2: Short Table of Fourier Transform Pairs, x(t) ↔ X(f)

rect(t/T) ↔ T sinc(fT)
Λ(t/T) ↔ T sinc²(fT)
sinc(2Wt) ↔ (1/2W) rect(f/2W)
sinc²(t) ↔ Λ(f)
δ(t) ↔ 1
1 ↔ δ(f)
δ(t − t₀) ↔ e^{−j2πf t₀}
e^{j2πf_c t} ↔ δ(f − f_c)
cos(2πf_c t) ↔ ½[δ(f − f_c) + δ(f + f_c)]
sin(2πf_c t) ↔ (1/2j)[δ(f − f_c) − δ(f + f_c)]
x(t) cos(2πf_c t) ↔ ½[X(f − f_c) + X(f + f_c)]
sgn(t) ↔ 1/(jπf)
1/(πt) ↔ −j sgn(f)
u(t) ↔ ½δ(f) + 1/(j2πf)
Σ_{i=−∞}^{∞} δ(t − iT₀) ↔ (1/T₀) Σ_{n=−∞}^{∞} δ(f − n/T₀)
e^{−at} u(t) ↔ 1/(a + j2πf),  a > 0
e^{at} u(−t) ↔ 1/(a − j2πf),  a > 0
e^{−a|t|} ↔ 2a/(a² + (2πf)²),  a > 0
t e^{−at} u(t) ↔ 1/(a + j2πf)²,  a > 0
e^{−at} cos(2πf₀t) u(t) ↔ (a + j2πf)/[(a + j2πf)² + (2πf₀)²],  a > 0
e^{−at} sin(2πf₀t) u(t) ↔ 2πf₀/[(a + j2πf)² + (2πf₀)²],  a > 0

2.5.6 Fourier Transform of Periodic Signals


The Fourier transform of a periodic signal, in a strict mathematical sense, does not exist because periodic signals are not energy signals. However, we could, in a formal sense write down the Fourier transform of a periodic signal by Fourier transforming its complex Fourier series term by term (using the properties of impulse function). From a periodic signal x(t) with a period T, we express x(t) as

x(t) = Σ_{n=−∞}^{∞} cn e^{jnω₀t},  where ω₀ = 2π/T    (2.55)

Taking the Fourier transform of both sides and using e^{jω₀t} = e^{j2πf₀t} ↔ δ(f − f₀), we obtain

X(f) = Σ_{n=−∞}^{∞} cn δ(f − nf₀)    (2.56)

Note that the Fourier transform of a periodic signal consists of a sequence of equidistant impulses located at the harmonic frequencies of the signal.

2.6 CORRELATION FUNCTIONS


2.6.1 Correlation of Energy Signals
Correlation is a process of comparing two signals in order to measure the degree of similarity (agreement or alignment) between them. It is widely used in signal processing applications in radar, sonar, digital communications, electronic warfare and many others. In signal processing, the cross-correlation (or sometimes "cross-covariance") is a measure of the similarity of two signals, commonly used to find features in an unknown signal by comparing it to a known one. The cross-correlation function of two energy signals x(t) and y(t) is defined as

φ_xy(τ) = ∫_{−∞}^{∞} x*(t) y(t + τ) dt = ∫_{−∞}^{∞} x*(t − τ) y(t) dt    (2.57)
Autocorrelation is a mathematical tool used frequently in signal processing for analysing


functions or series of values, such as time domain signals. It is the cross-correlation of a signal with itself. Autocorrelation is useful for finding repeating patterns in a signal, such as determining the presence of a periodic signal which has been buried under noise, or identifying the fundamental frequency of a signal which doesn't actually contain that


frequency component, but implies it with many harmonic frequencies. The timeautocorrelation function for a given energy signal x(t) is defined as

φ_x(τ) = ∫_{−∞}^{∞} x*(t) x(t + τ) dt = ∫_{−∞}^{∞} x(t) x*(t − τ) dt    (2.58)

For a real signal x(t), the autocorrelation function is given by

φ_x(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt    (2.58a)

Setting y = t + τ in equation (2.58a) yields

φ_x(τ) = ∫_{−∞}^{∞} x(y − τ) x(y) dy

where y is a dummy variable and could be replaced by t. Thus

φ_x(τ) = ∫_{−∞}^{∞} x(t) x(t − τ) dt    (2.59)

This shows that for a real x(t), the autocorrelation function is an even function of τ, that is,

φ_x(τ) = φ_x(−τ)    (2.60)

The Fourier transform of φ_x(τ) in equation (2.58a) is

F[φ_x(τ)] = ∫ φ_x(τ) e^{−j2πfτ} dτ = ∫ e^{−j2πfτ} [∫ x(t) x(t + τ) dt] dτ = ∫ x(t) [∫ x(t + τ) e^{−j2πfτ} dτ] dt

where F[x(t + τ)], taken as a function of τ, equals e^{j2πft} X(f). Therefore

F[φ_x(τ)] = ∫ x(t) X(f) e^{j2πft} dt = X(f) ∫ x(t) e^{j2πft} dt = X(f) X(−f) = |X(f)|²

This shows that

F[φ_x(τ)] = G(f) = |X(f)|²    (2.61)

where the energy spectral density (ESD) G(f) is the Fourier transform of the autocorrelation function. Although this result is proved here for a real signal, it is valid for complex signals also. Note that the autocorrelation function is a function of τ, not t; hence, its Fourier transform is ∫ φ_x(τ) e^{−j2πfτ} dτ.

Example 2.8: Autocorrelation of energy signals
Find the time autocorrelation function of the signal x(t) = e^{−at} u(t) (where a > 0), and from it determine the ESD of x(t).

Solution: We have x(t) = e^{−at} u(t) and x(t + τ) = e^{−a(t + τ)} u(t + τ). The autocorrelation function φ_x(τ) is given by the area under the product x(t)·x(t + τ), as shown in Figure 2.15. Therefore,

φ_x(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt = ∫_0^{∞} e^{−at} e^{−a(t + τ)} dt = e^{−aτ} ∫_0^{∞} e^{−2at} dt = (1/2a) e^{−aτ}

This is valid for τ positive. We can perform a similar procedure for τ negative. However, we know that for a real x(t), φ_x(τ) is an even function of τ. Therefore,

φ_x(τ) = (1/2a) e^{−a|τ|}

Figure 2.15 shows the autocorrelation function φ_x(τ). The ESD G(f) is the Fourier transform of φ_x(τ). From Table 2.2,

G(f) = F[φ_x(τ)] = F[(1/2a) e^{−a|τ|}] = (1/2a) · 2a/(a² + (2πf)²) = 1/(a² + (2πf)²)

Figure 2.15: Computation of the time autocorrelation function
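The autocorrelation of Example 2.8 can also be computed numerically and compared with (1/2a)e^{−a|τ|}. The sketch below is an added illustration with a = 1 assumed.

```python
import numpy as np

# Autocorrelation of x(t) = exp(-a*t)*u(t) at a few lags tau >= 0.
a = 1.0
dt = 1e-3
t = np.arange(0, 30, dt)
x = np.exp(-a * t)

for tau in (0.0, 0.5, 2.0):
    shift = int(round(tau / dt))
    phi = np.sum(x[:len(x) - shift] * x[shift:]) * dt   # integral of x(t)*x(t + tau)
    print(tau, np.round(phi, 4), np.round(np.exp(-a * tau) / (2 * a), 4))
```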

2.6.2 Power Spectral Density (PSD)


The power spectral density (PSD) is very useful in describing how the power content of signals and noise is affected by filters and other communication systems. It is very useful in solving communication problems since power-type models are usually used.

To develop a frequency-domain description of a power signal, we need to know the Fourier transform of the signal x(t). However, this may pose a problem, because power signals have infinite energy and may therefore not be Fourier transformable.


To overcome the problem, we consider a truncated version xT(t) of the signal x(t) (Figure 2.16):

xT(t) = x(t)·rect(t/T) = { x(t), −T/2 < t < T/2;  0, elsewhere }    (2.62)

Figure 2.16: Limiting process in the derivation of PSD

Using equation (2.21) and Figure 2.16, we obtain the average normalized power:

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt = lim_{T→∞} E_T/T = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |xT(t)|² dt

From Parseval's theorem, equation (2.37), with xT(t) ↔ XT(f), the average normalized power is

P = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |XT(f)|² df = ∫_{−∞}^{∞} lim_{T→∞} (|XT(f)|²/T) df

The integrand of the right-hand integral has units of watts per hertz and can be defined as the PSD. Therefore, the PSD, with units of watts per hertz, for a deterministic power signal is defined as

S(f) = lim_{T→∞} |XT(f)|²/T    (2.63)

Note that the PSD is always a real, non-negative function of frequency. By integrating S(f) over all frequencies, we obtain the normalized average power:

P = ∫_{−∞}^{∞} S(f) df    (2.64)

This result is parallel to the result obtained for energy signals in equation (2.39). The area under the PSD function is the normalized average power. Observe that the PSD is the time average of ESD of
xT(t) (equation 2.63).

As is the case with ESD, the PSD is also a positive, real and even function of f. If x(t) is a voltage signal, the units of PSD are volts squared per hertz.

2.6.3 Time-Autocorrelation Function of Power Signals


The autocorrelation function R_x(τ) of a power signal x(t) is defined as

R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x*(t) x(t + τ) dt = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x*(t − τ) dt    (2.65)

For a real signal x(t), the autocorrelation function is given by

R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt    (2.65a)

Using the same argument as that used for energy signals (derived in the previous section), we can show that R_x(τ) is an even function of τ. This means that for a real x(t)

R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t − τ) dt    (2.65b)

and

R_x(τ) = R_x(−τ)    (2.66)

If x(t) is periodic with period T, then

R_x(τ) = (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt

For energy signals, the ESD G(f) is the Fourier transform of the autocorrelation function φ_x(τ). A similar result applies to power signals, where the PSD S(f) is the Fourier transform of the autocorrelation function R_x(τ). From equation (2.65a) and Figure 2.16, we have

R_x(τ) = lim_{T→∞} (1/T) ∫_{−∞}^{∞} xT(t) xT(t + τ) dt = lim_{T→∞} φ_{xT}(τ)/T

Recalling that F[φ_{xT}(τ)] = G_T(f) = |XT(f)|², the Fourier transform of the preceding equation yields

F[R_x(τ)] = lim_{T→∞} |XT(f)|²/T = S(f)    (2.67)

The above proofs are also valid for complex signals. The autocorrelation function and the power spectral density are important tools for system analysis involving random signals. Some properties of the autocorrelation function R_x(τ):

R_x(0) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} [x(t)]² dt = P_x, the average power    (2.68)
|R_x(τ)| ≤ R_x(0); a relative maximum exists at the origin    (2.69)
R_x(−τ) = R_x*(τ)    (2.70)
Example 2.9: Autocorrelation of power signals
Find the average autocorrelation function for the sinusoidal signal x(t) = A sin(ω₁t + θ), where ω₁ = 2π/T₁. Determine also the power spectral density.

Solution: From equation (2.65b), the autocorrelation function is given by

R_x(τ) = (1/T₁) ∫_{−T₁/2}^{T₁/2} x(t) x(t − τ) dt = (A²/T₁) ∫_{−T₁/2}^{T₁/2} sin(ω₁t + θ) sin[ω₁(t − τ) + θ] dt
       = (A²/2T₁) ∫_{−T₁/2}^{T₁/2} [cos(ω₁τ) − cos(2ω₁t − ω₁τ + 2θ)] dt = (A²/2T₁) cos(ω₁τ) ∫_{−T₁/2}^{T₁/2} dt
       = (A²/2) cos(ω₁τ)

The power spectral density is the Fourier transform of the autocorrelation. Therefore,

S(f) = F[R_x(τ)] = F[(A²/2) cos(ω₁τ)] = (A²/2)·[½ δ(f − f₁) + ½ δ(f + f₁)] = (A²/4)[δ(f − f₁) + δ(f + f₁)]
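The result of Example 2.9 can be checked by averaging the product x(t)x(t − τ) over one period numerically. The sketch below is an added illustration; A = 3, T1 = 1 and theta = 0.7 are assumed values (the result should not depend on theta).

```python
import numpy as np

# Time-averaged autocorrelation of A*sin(w1*t + theta) vs. (A**2/2)*cos(w1*tau).
A, T1, theta = 3.0, 1.0, 0.7
w1 = 2 * np.pi / T1
dt = 1e-4
t = np.arange(-T1 / 2, T1 / 2, dt)
x = A * np.sin(w1 * t + theta)

for tau in (0.0, 0.1, 0.25):
    x_shift = A * np.sin(w1 * (t - tau) + theta)
    Rx = np.sum(x * x_shift) * dt / T1
    print(tau, np.round(Rx, 4), np.round((A**2 / 2) * np.cos(w1 * tau), 4))
```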

Selected Questions and Answers Question 1


(a) A periodic signal g(t) is represented by its Fourier series
g(t) = 2 + 4 cos(2t − π) + 2 cos(3t − π/2)

(i) Draw the trigonometric Fourier spectra of g(t), |Cn| and ∠Cn.
(ii) Sketch the exponential Fourier spectra of g(t).
(iii) By inspection of the exponential Fourier spectra obtained in part a(ii), find the exponential Fourier series for g(t).
(b) The Fourier transform is the common method used to analyze a non-periodic signal in the frequency domain.
(i) If g(t) has the Fourier transform G(f), show that

g(t) cos(2πf_c t) ↔ ½ G(f − f_c) + ½ G(f + f_c)

(ii) Name the theorem for the Fourier transform property in Part b(i). Using this theorem, find the Fourier transform of the signal

p(t) = Π(t/T) cos(2πf_c t)

(iii) Is the signal p(t) in Part b(ii) a power or an energy signal? Explain.
(iv) Using the result in Part b(iii), determine the power or energy of p(t) if T = 1/(2f_c).


Solution Question 1

a(i) Trigonometric (compact) Fourier spectra:
C₀ = 2, C₁ = 0, C₂ = 4, C₃ = 2, C₄ = 0
θ₀ = 0, θ₁ = 0, θ₂ = −π, θ₃ = −π/2, θ₄ = 0
(amplitude lines Cn and phase lines θn drawn at n = 0, 1, 2, 3, 4)

a(ii) Exponential Fourier spectra:
V₀ = 2, V₁ = V₋₁ = 0, V₂ = 2e^{−jπ}, V₋₂ = 2e^{jπ}, V₃ = e^{−jπ/2}, V₋₃ = e^{jπ/2}, V₄ = V₋₄ = 0
(amplitude lines |Vn| and phase lines ∠Vn drawn at n = −4, ..., 4)

a(iii) g(t) = Σ_{n=−4}^{4} Vn e^{jnt}, therefore
g(t) = e^{jπ/2} e^{−j3t} + 2e^{jπ} e^{−j2t} + 2 + 2e^{−jπ} e^{j2t} + e^{−jπ/2} e^{j3t}

b(i) F{g(t) cos(2πf_c t)} = G(f) * F{½(e^{j2πf_c t} + e^{−j2πf_c t})}
   = G(f) * ½{δ(f − f_c) + δ(f + f_c)}
   = ½ G(f − f_c) + ½ G(f + f_c)

b(ii) The theorem is the modulation theorem. Since Π(t/T) ↔ T sinc(fT),
P(f) = (T/2) sinc(T(f − f_c)) + (T/2) sinc(T(f + f_c))

b(iii) p(t) is an energy signal, since it has finite total energy and zero average power.

b(iv) E = ∫_{−T/2}^{T/2} cos²(2πf_c t) dt = T/2 = 1/(4f_c)

Question 2 (a) Determine the Fourier transforms of the signals shown in Figure S2(i) and Figure S2(ii).

Figure S2: (i), (ii)

(b) Determine whether the following signals are energy-type or power-type. In each case determine the energy or power spectral density, and also the energy or power content of the signal.
(i) x(t) = sinc(t)
(ii) x(t) = Σ_{n=−∞}^{∞} Λ(t − 2n)

Solution Question 2

(a) (i), (ii)

(b) (i) Energy signal. Since sinc(t) ↔ Π(f), the ESD is G(f) = |Π(f)|², and the energy content of the signal is
E = ∫_{−∞}^{∞} sinc²(t) dt = ∫_{−1/2}^{1/2} |Π(f)|² df = 1

(ii)
