
Digital Communications

Instructor: Dr. Phan Van Ca
Lecture 06: Optimal Receiver Design

Modulation

We want to modulate digital data using signal sets which are:

- bandwidth efficient
- energy efficient

A signal space representation is a convenient form for viewing modulation which allows us to:

- design energy and bandwidth efficient signal constellations
- determine the form of the optimal receiver for a given constellation
- evaluate the performance of a modulation type

Problem Statement

We transmit a signal $s(t) \in \{s_1(t), s_2(t), \ldots, s_M(t)\}$, where $s(t)$ is nonzero only on $t \in [0, T]$. Let the various signals be transmitted with probabilities

$p_1 = \Pr[s_1(t)], \ldots, p_M = \Pr[s_M(t)]$

The received signal is corrupted by noise:

$r(t) = s(t) + n(t)$

Given $r(t)$, the receiver forms an estimate $\hat{s}(t)$ of the signal $s(t)$ with the goal of minimizing the symbol error probability

$P_s = \Pr[\hat{s}(t) \neq s(t)]$

Noise Model

The signal is corrupted by Additive White Gaussian Noise (AWGN) n(t)


The noise $n(t)$ has autocorrelation $\phi_{nn}(\tau) = \frac{N_0}{2}\delta(\tau)$ and power spectral density $\Phi_{nn}(f) = \frac{N_0}{2}$

Any linear function of n(t) will be a Gaussian random variable

Channel

[Block diagram: the transmitted signal $s(t)$ is added to the noise $n(t)$ to produce the received signal $r(t)$]

Signal Space Representation

The transmitted signal can be represented as:


$s_m(t) = \sum_{k=1}^{K} s_{m,k} f_k(t)$, where $s_{m,k} = \int_0^T s_m(t) f_k(t)\,dt$

The noise can be represented as

$n(t) = n'(t) + \sum_{k=1}^{K} n_k f_k(t)$

where $n_k = \int_0^T n(t) f_k(t)\,dt$

and

$n'(t) = n(t) - \sum_{k=1}^{K} n_k f_k(t)$
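The projection that produces the coefficients $s_{m,k}$ (and $n_k$) can be checked numerically. A minimal sketch, assuming for illustration the two rectangular basis functions on $[0, 2]$ that appear in the worked example later in this lecture:

```python
import numpy as np

# Time grid on [0, T] with T = 2
dt = 0.001
t = np.arange(0, 2, dt)

# Two orthonormal rectangular basis functions: f1 on [0,1), f2 on [1,2)
f1 = np.where(t < 1, 1.0, 0.0)
f2 = np.where(t >= 1, 1.0, 0.0)

# A signal built from the basis: s(t) = 1*f1(t) - 1*f2(t)
s = 1.0 * f1 - 1.0 * f2

# Recover the coefficients s_k = integral of s(t) f_k(t) dt
s1 = np.sum(s * f1) * dt   # ≈ 1
s2 = np.sum(s * f2) * dt   # ≈ -1
print(s1, s2)
```

Any finite-energy component of $s(t)$ outside the span of the $f_k$ would simply not appear in the recovered coefficients, which is exactly the role played by $n'(t)$ for the noise.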

Signal Space Representation (continued)

The received signal can be represented as:

$r(t) = \sum_{k=1}^{K} s_{m,k} f_k(t) + \sum_{k=1}^{K} n_k f_k(t) + n'(t) = \sum_{k=1}^{K} r_k f_k(t) + n'(t)$

where $r_k = s_{m,k} + n_k$

The Orthogonal Noise: $n'(t)$

The noise $n'(t)$ can be disregarded by the receiver, since it is orthogonal to every signal:

$\int_0^T s_m(t) n'(t)\,dt = \int_0^T s_m(t)\left[n(t) - \sum_{k=1}^{K} n_k f_k(t)\right]dt$

$= \int_0^T \left[\sum_{k=1}^{K} s_{m,k} f_k(t)\right]\left[n(t) - \sum_{k=1}^{K} n_k f_k(t)\right]dt$

$= \sum_{k=1}^{K} s_{m,k}\int_0^T f_k(t)\,n(t)\,dt - \sum_{k=1}^{K} s_{m,k} n_k \int_0^T f_k^2(t)\,dt$

$= \sum_{k=1}^{K} s_{m,k} n_k - \sum_{k=1}^{K} s_{m,k} n_k = 0$

We can reduce the decision to a finite dimensional space!

We transmit a K-dimensional signal vector $\mathbf{s} = [s_1, s_2, \ldots, s_K] \in \{\mathbf{s}_1, \ldots, \mathbf{s}_M\}$. We receive a vector $\mathbf{r} = [r_1, \ldots, r_K] = \mathbf{s} + \mathbf{n}$, which is the sum of the signal vector and the noise vector $\mathbf{n} = [n_1, \ldots, n_K]$. Given $\mathbf{r}$, we wish to form an estimate $\hat{\mathbf{s}}$ of the transmitted signal vector which minimizes $P_s = \Pr[\hat{\mathbf{s}} \neq \mathbf{s}]$

[Block diagram: the signal vector $\mathbf{s}$ passes through the channel, the noise vector $\mathbf{n}$ is added, and the receiver maps $\mathbf{r}$ to the estimate $\hat{\mathbf{s}}$]

MAP (Maximum a posteriori probability) Decision Rule

Suppose that the signal vectors $\{\mathbf{s}_1, \ldots, \mathbf{s}_M\}$ are transmitted with probabilities $\{p_1, \ldots, p_M\}$ respectively, and the vector $\mathbf{r}$ is received. We minimize the symbol error probability by choosing the signal $\mathbf{s}_m$ which satisfies

$\Pr(\mathbf{s}_m \mid \mathbf{r}) \geq \Pr(\mathbf{s}_i \mid \mathbf{r}), \quad \forall i \neq m$

Equivalently,

$\frac{p(\mathbf{r} \mid \mathbf{s}_m)\Pr(\mathbf{s}_m)}{p(\mathbf{r})} \geq \frac{p(\mathbf{r} \mid \mathbf{s}_i)\Pr(\mathbf{s}_i)}{p(\mathbf{r})}, \quad \forall i \neq m$

or

$p(\mathbf{r} \mid \mathbf{s}_m)\Pr(\mathbf{s}_m) \geq p(\mathbf{r} \mid \mathbf{s}_i)\Pr(\mathbf{s}_i), \quad \forall i \neq m$

Maximum Likelihood (ML) Decision Rule

If $p_1 = \cdots = p_M$ or the a priori probabilities are unknown, then the MAP rule simplifies to the ML rule. We minimize the symbol error probability by choosing the signal $\mathbf{s}_m$ which satisfies

$p(\mathbf{r} \mid \mathbf{s}_m) \geq p(\mathbf{r} \mid \mathbf{s}_i), \quad \forall i \neq m$
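The difference between the two rules is only whether the priors enter the metric. A minimal sketch, using the Gaussian likelihood that is derived in the following slides (the signal vectors, priors, and $N_0$ below are illustrative choices, not fixed by the lecture):

```python
import numpy as np

def map_decide(r, signals, priors, N0):
    # MAP: maximize p(r|s_m) * p_m; with Gaussian noise this is
    # equivalent to maximizing ln(p_m) - ||r - s_m||^2 / N0
    metrics = [np.log(p) - np.sum((r - s) ** 2) / N0
               for s, p in zip(signals, priors)]
    return int(np.argmax(metrics))

def ml_decide(r, signals):
    # ML: equal (or unknown) priors -> choose the nearest signal vector
    return int(np.argmin([np.sum((r - s) ** 2) for s in signals]))

signals = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
r = np.array([0.05, 0.0])                           # barely on the positive side
print(ml_decide(r, signals))                        # 0: nearest point
print(map_decide(r, signals, [0.1, 0.9], N0=2.0))   # 1: strong prior flips it
```

Note how a sufficiently lopsided prior can overrule the distance term, which is exactly the behavior illustrated by the unequal-probability decision regions later in the lecture.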

Evaluation of Probabilities

In order to apply either the MAP or ML rule, we need to evaluate $p(\mathbf{r} \mid \mathbf{s}_m)$. Since $\mathbf{r} = \mathbf{s}_m + \mathbf{n}$ where $\mathbf{s}_m$ is constant, it is equivalent to evaluate

$p(\mathbf{n}) = p(n_1, \ldots, n_K)$

$n(t)$ is a Gaussian random process:

- Therefore $n_k = \int_0^T n(t) f_k(t)\,dt$ is a Gaussian random variable
- Therefore $p(n_1, \ldots, n_K)$ will be a Gaussian p.d.f.

The Noise p.d.f


$E[n_i n_k] = E\left[\int_0^T n(t) f_i(t)\,dt \int_0^T n(s) f_k(s)\,ds\right]$

$= \int_0^T \int_0^T E[n(t)n(s)]\, f_i(t) f_k(s)\,ds\,dt$

$= \int_0^T \int_0^T \phi_{nn}(t - s) f_i(t) f_k(s)\,ds\,dt$

$= \frac{N_0}{2}\int_0^T \int_0^T \delta(t - s) f_i(t) f_k(s)\,ds\,dt$

$= \frac{N_0}{2}\int_0^T f_i(t) f_k(t)\,dt = \begin{cases} N_0/2, & i = k \\ 0, & i \neq k \end{cases}$

The Noise p.d.f (continued)

Since $E[n_i n_k] = 0$ for $i \neq k$, the individual noise components are uncorrelated (and, being jointly Gaussian, therefore independent). Since $E[n_k^2] = N_0/2$, each noise component has a variance of $N_0/2$. Thus

$p(n_1, \ldots, n_K) = p(n_1)\cdots p(n_K) = \prod_{k=1}^{K} \frac{1}{\sqrt{\pi N_0}}\exp\left(-\frac{n_k^2}{N_0}\right) = (\pi N_0)^{-K/2}\exp\left(-\frac{1}{N_0}\sum_{k=1}^{K} n_k^2\right)$
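The uncorrelated, variance-$N_0/2$ structure can be checked by simulation. In discrete time, white noise of two-sided PSD $N_0/2$ sampled at spacing $dt$ has per-sample variance $N_0/(2\,dt)$; the rectangular basis pair below is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
N0, dt = 2.0, 0.01
t = np.arange(0, 2, dt)                 # T = 2
f1 = np.where(t < 1, 1.0, 0.0)          # orthonormal rectangular basis
f2 = np.where(t >= 1, 1.0, 0.0)

trials = 20000
# Sampled AWGN: per-sample variance N0/(2*dt), so that the integral
# n_k = ∫ n(t) f_k(t) dt comes out with variance N0/2
noise = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))
n1 = noise @ f1 * dt
n2 = noise @ f2 * dt

print(np.var(n1))        # ≈ N0/2 = 1.0
print(np.var(n2))        # ≈ 1.0
print(np.mean(n1 * n2))  # ≈ 0 (uncorrelated components)
```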

Conditional pdf of Received Signal

The transmitted signal values in each dimension are the mean values of the corresponding received components:

$p(\mathbf{r} \mid \mathbf{s}_m) = (\pi N_0)^{-K/2}\exp\left(-\frac{1}{N_0}\sum_{k=1}^{K}(r_k - s_{m,k})^2\right)$

Structure of Optimum Receiver

MAP rule: $\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} p_m\,p(\mathbf{r} \mid \mathbf{s}_m)$

Substituting the Gaussian conditional p.d.f.:

$\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} p_m (\pi N_0)^{-K/2}\exp\left(-\frac{1}{N_0}\sum_{k=1}^{K}(r_k - s_{m,k})^2\right)$

Since the logarithm is monotonically increasing, maximizing the logarithm of the metric is equivalent:

$\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} \ln\left[p_m (\pi N_0)^{-K/2}\exp\left(-\frac{1}{N_0}\sum_{k=1}^{K}(r_k - s_{m,k})^2\right)\right]$

$\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} \ln[p_m] - \frac{K}{2}\ln[\pi N_0] - \frac{1}{N_0}\sum_{k=1}^{K}(r_k - s_{m,k})^2$

Structure of Optimum Receiver (continued)


Expanding the square:

$\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} \ln[p_m] - \frac{K}{2}\ln[\pi N_0] - \frac{1}{N_0}\left[\sum_{k=1}^{K} r_k^2 - 2\sum_{k=1}^{K} r_k s_{m,k} + \sum_{k=1}^{K} s_{m,k}^2\right]$

Eliminating terms which are identical for all choices of $m$:

$\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} \ln[p_m] + \frac{2}{N_0}\sum_{k=1}^{K} r_k s_{m,k} - \frac{1}{N_0}\sum_{k=1}^{K} s_{m,k}^2$

Final Form of MAP Receiver

Multiplying through by the constant $N_0/2$:

$\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} \frac{N_0}{2}\ln[p_m] + \sum_{k=1}^{K} r_k s_{m,k} - \frac{1}{2}\sum_{k=1}^{K} s_{m,k}^2$
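This final decision metric maps directly onto signal-space vectors. A minimal sketch, using an illustrative four-point constellation with equal priors:

```python
import numpy as np

def map_receiver(r, signals, priors, N0):
    """Final MAP metric: (N0/2) ln p_m + r . s_m - ||s_m||^2 / 2."""
    signals = np.asarray(signals, dtype=float)
    metrics = ((N0 / 2) * np.log(priors)
               + signals @ r
               - 0.5 * np.sum(signals ** 2, axis=1))
    return int(np.argmax(metrics))

signals = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
priors = np.array([0.25, 0.25, 0.25, 0.25])
r = np.array([0.9, 1.2])
print(map_receiver(r, signals, priors, N0=1.0))   # 0: closest to [1, 1]
```

With equal priors and equal energies, the bias terms are common to all branches and the decision reduces to maximizing the correlation $\mathbf{r}\cdot\mathbf{s}_m$, as the following slides note.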

Interpreting This Result

- $\frac{N_0}{2}\ln[p_m]$ weights the a priori probabilities:
  - If the noise is large, $p_m$ counts a lot
  - If the noise is small, our received signal will be an accurate estimate and $p_m$ counts less
- $\sum_{k=1}^{K} r_k s_{m,k} = \int_0^T s_m(t) r(t)\,dt$ represents the correlation with the received signal
- $\frac{1}{2}\sum_{k=1}^{K} s_{m,k}^2 = \frac{1}{2}\int_0^T s_m^2(t)\,dt = \frac{E_m}{2}$ represents the signal energy

An Implementation of the Optimal Receiver: the Correlation Receiver

[Block diagram: the received signal $r(t)$ feeds a bank of $M$ correlators; the $m$th branch computes $\int_0^T s_m(t) r(t)\,dt$, adds the bias term $-\frac{E_m}{2} + \frac{N_0}{2}\ln(p_m)$, and the receiver chooses the branch with the largest result]

Simplifications for Special Cases

ML case: all signals are equally likely ($p_1 = \cdots = p_M$), so the a priori probability terms can be ignored. If all signals have equal energy ($E_1 = \cdots = E_M$), the energy terms can be ignored as well. We can also reduce the number of correlations from $M$ to $K$ by directly implementing

$\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} \frac{N_0}{2}\ln[p_m] + \sum_{k=1}^{K} r_k s_{m,k} - \frac{1}{2}\sum_{k=1}^{K} s_{m,k}^2$

in signal space: correlate $r(t)$ against the $K$ basis functions rather than the $M$ signals.

Reduced Complexity Implementation: Correlation Stage


[Block diagram: $r(t)$ is correlated with each basis function, $r_k = \int_0^T r(t) f_k(t)\,dt$, producing the vector $\mathbf{r} = [r_1 \;\cdots\; r_K]$]

Reduced Complexity Implementation Processing Stage


[Block diagram: the vector $\mathbf{r}$ is multiplied against each signal vector to form $\sum_{k=1}^{K} s_{m,k} r_k$, the bias $-\frac{E_m}{2} + \frac{N_0}{2}\ln(p_m)$ is added in each branch, and the largest result is chosen]

Matched Filter Implementation


Assume $f_k(t)$ is time-limited to $t \in [0, T]$, and let $h_k(t) = f_k(T - t)$. Then

$r_k = \int_0^T r(t) f_k(t)\,dt = \int_0^T r(t) f_k(T - (T - t))\,dt = \int_0^T r(t) h_k(T - t)\,dt = \left. r(t) * h_k(t) \right|_{t=T}$

where $r(t) * h_k(t)|_{t=T}$ denotes the convolution of the signals $r(t)$ and $h_k(t)$ evaluated at time $T$.

We can therefore implement each correlation by passing $r(t)$ through a filter with impulse response $h_k(t)$ and sampling the output at $t = T$.
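The equivalence can be checked numerically: time-reversing the sampled basis function, convolving, and sampling at $t = T$ reproduces the correlator output. A sketch with an arbitrary received waveform:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 2, dt)                  # T = 2
f = np.where(t < 1, 1.0, 0.0)            # basis function f_k(t)
h = f[::-1]                              # matched filter h_k(t) = f_k(T - t)

rng = np.random.default_rng(2)
r = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size)  # some received signal

corr = np.sum(r * f) * dt                # correlator: ∫ r(t) f_k(t) dt
mf = np.convolve(r, h)[t.size - 1] * dt  # filter output sampled at t = T

print(corr, mf)                          # the two agree
```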

Matched Filter Implementation of Correlation


[Block diagram: $r(t)$ is passed through a bank of $K$ filters with impulse responses $h_1(t), \ldots, h_K(t)$; each output is sampled at $t = T$ to produce $\mathbf{r} = [r_1 \;\cdots\; r_K]$]

Example of Optimal Receiver Design

Consider the signal set (each waveform is piecewise $\pm 1$ on $t \in [0, 2]$):

[Figure: $s_1(t) = +1$ on $[0, 2]$; $s_2(t) = +1$ on $[0, 1]$ and $-1$ on $[1, 2]$; $s_3(t) = -1$ on $[0, 1]$ and $+1$ on $[1, 2]$; $s_4(t) = -1$ on $[0, 2]$]

Example of Optimal Receiver Design (continued)

Suppose we use the basis functions:

[Figure: $f_1(t)$ has amplitude $+1$ on $[0, 1]$ and is zero elsewhere; $f_2(t)$ has amplitude $+1$ on $[1, 2]$ and is zero elsewhere]

$s_1(t) = 1 \cdot f_1(t) + 1 \cdot f_2(t)$   $s_3(t) = -1 \cdot f_1(t) + 1 \cdot f_2(t)$
$s_2(t) = 1 \cdot f_1(t) - 1 \cdot f_2(t)$   $s_4(t) = -1 \cdot f_1(t) - 1 \cdot f_2(t)$

$T = 2$, $E_1 = E_2 = E_3 = E_4 = 2$
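The stated energies follow directly from the coefficients, and can be confirmed by integrating the reconstructed waveforms (using rectangular $f_1$, $f_2$ as drawn above):

```python
import numpy as np

dt = 0.01
t = np.arange(0, 2, dt)
f1 = np.where(t < 1, 1.0, 0.0)
f2 = np.where(t >= 1, 1.0, 0.0)

# Signal-space coefficients (a, b) for s_1 .. s_4
coeffs = {1: (1, 1), 2: (1, -1), 3: (-1, 1), 4: (-1, -1)}
for m, (a, b) in coeffs.items():
    s = a * f1 + b * f2
    E = np.sum(s ** 2) * dt          # E_m = ∫ s_m^2(t) dt
    print(m, E)                      # each energy is 2
```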

1st Implementation of Correlation Receiver


[Block diagram: $r(t)$ is correlated with each of $s_1(t), \ldots, s_4(t)$ over $[0, 2]$; since the energies are equal, only the bias terms $\frac{N_0}{2}\ln(p_m)$ are added before choosing the largest]

Reduced Complexity Correlation Receiver Correlation Stage


[Block diagram: $r(t)$ is correlated with $f_1(t)$ and $f_2(t)$ over $[0, 2]$ to produce $\mathbf{r} = [r_1 \; r_2]$]

Reduced Complexity Correlation Receiver Processing Stage


[Block diagram: the four metrics $r_1 + r_2 + \frac{N_0}{2}\ln(p_1)$, $r_1 - r_2 + \frac{N_0}{2}\ln(p_2)$, $-r_1 + r_2 + \frac{N_0}{2}\ln(p_3)$, and $-r_1 - r_2 + \frac{N_0}{2}\ln(p_4)$ are formed, and the largest is chosen]

Matched Filter Implementation of Correlations


$h_k(t) = f_k(2 - t)$

[Figure: $h_1(t)$ has amplitude $+1$ on $[1, 2]$; $h_2(t)$ has amplitude $+1$ on $[0, 1]$. Block diagram: $r(t)$ is passed through $h_1(t)$ and $h_2(t)$, and the outputs are sampled at $t = 2$ to produce $\mathbf{r} = [r_1 \; r_2]$]

Summary of Optimal Receiver Design

The optimal coherent receiver for AWGN has three parts:

- correlate the received signal with each possible transmitted signal
- normalize the correlation to account for signal energy
- weight the a priori probabilities according to the noise power

This receiver is completely general for any signal set, and simplifications are possible under many circumstances.

Decision Regions

Optimal Decision Rule:


$\hat{\mathbf{s}} = \arg\max_{\{\mathbf{s}_1,\ldots,\mathbf{s}_M\}} \frac{N_0}{2}\ln[p_m] + \sum_{k=1}^{K} r_k s_{m,k} - \frac{1}{2}\sum_{k=1}^{K} s_{m,k}^2$

Let $R_i \subset \mathbb{R}^K$ be the region in which

$\frac{N_0}{2}\ln[p_i] + \sum_{k=1}^{K} r_k s_{i,k} - \frac{1}{2}\sum_{k=1}^{K} s_{i,k}^2 \geq \frac{N_0}{2}\ln[p_j] + \sum_{k=1}^{K} r_k s_{j,k} - \frac{1}{2}\sum_{k=1}^{K} s_{j,k}^2, \quad \forall j \neq i$

Then $R_i$ is the $i$th decision region.
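The region computation itself is only a few lines in any language. A hypothetical Python analogue (not the course's Matlab script) that labels points of a grid by their decision region:

```python
import numpy as np

def decision_regions(signals, priors, N0, grid):
    """Label each grid point with the index of the maximizing metric."""
    signals = np.asarray(signals, dtype=float)
    priors = np.asarray(priors, dtype=float)
    # Per-signal bias: (N0/2) ln p_m - E_m / 2
    bias = (N0 / 2) * np.log(priors) - 0.5 * np.sum(signals ** 2, axis=1)
    metrics = grid @ signals.T + bias   # one metric per signal per point
    return np.argmax(metrics, axis=1)

# QPSK with equal priors
signals = [[1, 0], [0, 1], [-1, 0], [0, -1]]
pts = np.array([[0.6, 0.5], [-0.4, 0.7], [0.1, -0.9]])
labels = decision_regions(signals, [0.25] * 4, N0=1.0, grid=pts)
print(labels)   # [0 1 3]
```

Evaluating this over a fine grid and plotting the labels reproduces the kind of region pictures that sigspace.m generates.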

A Matlab Function for Visualizing Decision Regions

The Matlab script file sigspace.m (on the course web page) can be used to visualize two-dimensional signal spaces and decision regions. The function is called with the following syntax:

sigspace( [x1 y1 p1; x2 y2 p2; ... ; xM yM pM], EbN0 )

- xi and yi are the coordinates of the ith signal point
- pi is the probability of the ith signal (omitting it gives ML)
- EbN0 is the signal-to-noise ratio of the digital system in dB

Average Energy Per Bit: $E_b$

$E_i = \sum_{k=1}^{K} s_{i,k}^2$ is the energy of the $i$th signal

$E_s = \frac{1}{M}\sum_{i=1}^{M} E_i$ is the average energy per symbol

$\log_2 M$ is the number of bits transmitted per symbol

$E_b = \frac{E_s}{\log_2 M}$ is the average energy per bit

- $E_b$ allows fair comparisons of the energy requirements of different sized signal constellations

Signal to Noise Ratio for Digital Systems


N0 2

is the (two-sided) power spectral density of the background noise The ratio Eb N 0 measures the relative strength of signal and noise at the receiver has units of Joules = Watts *sec has units of Watts/Hz = Watts*sec

Eb N0

The unitless quantity Eb N 0 is frequently expressed in dB

Examples of Decision Regions - QPSK

sigspace( [1 0; 0 1; -1 0; 0 -1], 20)

[Figure: the four QPSK signal points at $(\pm 1, 0)$ and $(0, \pm 1)$, with decision boundaries along the diagonals]

QPSK with Unequal Signal Probabilities

sigspace( [1 0 0.4; 0 1 0.1; -1 0 0.4; 0 -1 0.1], 5)

[Figure: the same four points; the decision regions of the more probable signals at $(\pm 1, 0)$ expand, while those of the less probable signals at $(0, \pm 1)$ shrink]

QPSK with Unequal Signal Probabilities Extreme Case

sigspace([0.5 0 0.4; 0 0.5 0.1; -0.5 0 0.4; 0 -0.5 0.1], -6)

[Figure: at this low $E_b/N_0$, the regions of the less probable points at $(0, \pm 0.5)$ shrink so far that those points no longer lie inside their own decision regions]

Unequal Signal Powers

sigspace( [1 1; 2 2; 3 3; 4 4], 10)

[Figure: four collinear signal points at (1,1) through (4,4) with unequal energies; the decision boundaries are perpendicular to the line through the points]

Signal Constellation for 16-ary QAM

sigspace( [1.5 -1.5; 1.5 -0.5; 1.5 0.5; 1.5 1.5; 0.5 -1.5; 0.5 -0.5; 0.5 0.5; 0.5 1.5; -1.5 -1.5; -1.5 -0.5; -1.5 0.5; -1.5 1.5; -0.5 -1.5; -0.5 -0.5; -0.5 0.5; -0.5 1.5], 10)

[Figure: the 16-QAM constellation, a 4-by-4 grid of points with coordinates $x, y \in \{\pm 0.5, \pm 1.5\}$, with square decision regions]

Notes on Decision Regions

- Boundaries are perpendicular to a line drawn between two signal points.
- If signal probabilities are equal, decision boundaries lie exactly halfway between signal points.
- If signal probabilities are unequal, the region of the less probable signal shrinks.
- Signal points need not lie within their decision regions in the case of low $E_b/N_0$ and unequal probabilities.
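The first notes above can be checked numerically: with equal priors the decision metric is tied at the midpoint between two signal points, and unequal priors break the tie in favor of the more probable signal. A small check with illustrative points:

```python
import numpy as np

def metric(r, s, p, N0):
    # MAP decision metric: (N0/2) ln p + r . s - ||s||^2 / 2
    return (N0 / 2) * np.log(p) + r @ s - 0.5 * (s @ s)

s_i, s_j = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mid = (s_i + s_j) / 2

# Equal priors: the midpoint lies exactly on the boundary
print(metric(mid, s_i, 0.5, N0=1.0) - metric(mid, s_j, 0.5, N0=1.0))  # 0.0

# Unequal priors: the midpoint falls inside the more probable region
m1 = metric(mid, s_i, 0.8, N0=1.0)
m2 = metric(mid, s_j, 0.2, N0=1.0)
print(m1 > m2)
```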
