
Whereas a hard-decision decoder operates on data that take on a fixed set of possible values (typically 0 or 1 in a binary code), the inputs to a soft-decision decoder may take on a whole range of values in between. This extra information indicates the reliability of each input data point, and is used to form better estimates of the original data. Therefore, a soft-decision decoder will typically perform better in the presence of corrupted data than its hard-decision counterpart.[1]

Exercise Session 5: Orthogonal Frequency Division Multiplexing
(OFDM)
Solution

Exercise 1: Single-carrier OFDM transmission

We are interested in a transmission chain using a single carrier with a cyclic prefix (CP) and a linear equalisation of the multipath channel in the frequency domain. A Zero-Forcing (ZF) equaliser is used in this exercise. The transmission chain is represented in Figure 5.1.


[Figure: d → Add CP → x → channel h (+ ñ) → ỹ → Remove CP → y → Equaliser F → z → Decision → d̂]

Figure 5.1: Communication chain considered for the single-carrier OFDM. The vectors y and n are obtained by removing the CP part of ỹ and ñ.

The system parameters are as follows:


• $K$ = number of symbols in one block;

• $d$ = symbol vector with $E[d] = 0$, $E[dd^H] = \sigma_d^2 I$;

• $h$ = channel vector of length $L$;

• $n$ = additive white Gaussian noise vector with $E[nn^H] = \sigma_n^2 I$;

• $W$ = the normalised FFT matrix¹ (i.e. $W^H W = I$);

• $\lambda$ = DFT of $h$, $\lambda = \sqrt{K}\, W \begin{bmatrix} h \\ 0_{(K-L)\times 1} \end{bmatrix}$;

• $H_c$ = channel circulant matrix, $H_c = W^H \Lambda W$, with $\Lambda = \mathrm{diag}(\lambda)$.


1. Compare the SC-OFDM to the usual OFDM.
The difference between these two modulations lies in the fact that in SC-OFDM, the symbols are not precoded by the FFT matrix. Hence, while in OFDM the symbols are assigned to different frequency tones and then translated to the time domain, SC-OFDM directly assigns the symbols to different time instants. These two modulations nevertheless look alike, as they both take advantage of a cyclic prefix, which leads to the circulant-matrix description of the channel.
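To make the difference concrete, here is a minimal numpy sketch of the two transmitters (illustrative only: the block length, CP length and QPSK symbols are arbitrary choices, not values from the exercise):

```python
import numpy as np

K, Lcp = 8, 3                      # block length and CP length (arbitrary)
rng = np.random.default_rng(0)
d = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)  # QPSK

# OFDM: symbols live on frequency tones, the normalised IFFT (W^H) maps
# them to the time domain
x_ofdm = np.fft.ifft(d) * np.sqrt(K)      # W^H d

# SC-OFDM: symbols are already time-domain samples, no precoding
x_sc = d

# Both prepend the same cyclic prefix before transmission
tx_ofdm = np.concatenate([x_ofdm[-Lcp:], x_ofdm])
tx_sc = np.concatenate([x_sc[-Lcp:], x_sc])
```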

2. Give the expression of y as a function of W, Λ, d and n.


The cyclic prefix converts the linear convolution with the channel impulse response into a cyclic one. The channel matrix $H_c$ is a circulant matrix, which can be decomposed as

$$H_c = W^H \Lambda W.$$

Hence, we obtain

$$y = H_c d + n = W^H \Lambda W d + n.$$

¹The normalised FFT matrix of size $K \times K$ is defined as $W_{kl} = \frac{1}{\sqrt{K}} \exp\left(-\frac{2\pi j k l}{K}\right)$, with $k, l \in \{0, \dots, K-1\}$.
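The decomposition used in this question can be checked numerically. A small sketch, with arbitrary channel taps, that builds $H_c$ from $h$ and verifies $H_c = W^H \Lambda W$:

```python
import numpy as np
from scipy.linalg import circulant

K, L = 8, 3
h = np.array([1.0, 0.5, 0.25])                 # arbitrary channel taps
h_pad = np.concatenate([h, np.zeros(K - L)])

Hc = circulant(h_pad)                          # circulant channel matrix
W = np.fft.fft(np.eye(K)) / np.sqrt(K)         # normalised DFT matrix
lam = np.sqrt(K) * W @ h_pad                   # lambda = sqrt(K) W [h; 0]

# Hc = W^H diag(lambda) W, up to numerical precision
assert np.allclose(Hc, W.conj().T @ np.diag(lam) @ W)
```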
3. Compute the covariance matrix of the useful part of the signal as a function of the $\lambda_k$'s, and especially the average power (the diagonal elements).
The average powers of the elements of the useful part of y are the diagonal elements of the covariance matrix, computed as

$$\begin{aligned}
R = E\left[ \left(W^H \Lambda W d\right) \left(W^H \Lambda W d\right)^H \right] &= W^H \Lambda W\, E\left[dd^H\right] W^H \Lambda^H W, \\
&= \sigma_d^2\, W^H \Lambda W I W^H \Lambda^H W, \\
&= \sigma_d^2\, W^H |\Lambda|^2 W,
\end{aligned}$$

where we have used the unitary property of W. Note however that we cannot simplify the remaining W and $W^H$, as $|\Lambda|^2$ is present in between. Instead, we need to compute the diagonal elements explicitly.
$$R = \sigma_d^2 \begin{bmatrix} W_{00}^* & \cdots & W_{K-1,0}^* \\ \vdots & \ddots & \vdots \\ W_{0,K-1}^* & \cdots & W_{K-1,K-1}^* \end{bmatrix} \begin{bmatrix} |\lambda_0|^2 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & |\lambda_{K-1}|^2 \end{bmatrix} \begin{bmatrix} W_{00} & \cdots & W_{0,K-1} \\ \vdots & \ddots & \vdots \\ W_{K-1,0} & \cdots & W_{K-1,K-1} \end{bmatrix},$$

with

$$W_{mn} = \frac{1}{\sqrt{K}} \exp\left(-\frac{2\pi j m n}{K}\right).$$

The element $(m, n)$ of the covariance matrix is thus equal to

$$R_{mn} = \frac{\sigma_d^2}{K} \sum_{l=0}^{K-1} |\lambda_l|^2 \exp\left(-\frac{2\pi j l (n-m)}{K}\right),$$

which simplifies for the diagonal elements to

$$R_{kk} = \frac{\sigma_d^2}{K} \sum_{l=0}^{K-1} |\lambda_l|^2 = \frac{\sigma_d^2}{K} \|\lambda\|_2^2.$$
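As a sanity check, a short sketch (again with arbitrary $\lambda_k$'s and symbol power) comparing the diagonal of $R$ against $\sigma_d^2 \|\lambda\|_2^2 / K$:

```python
import numpy as np

K = 8
rng = np.random.default_rng(1)
lam = rng.normal(size=K) + 1j * rng.normal(size=K)   # arbitrary lambda_k's
W = np.fft.fft(np.eye(K)) / np.sqrt(K)
sigma_d2 = 2.0                                       # arbitrary symbol power

R = sigma_d2 * W.conj().T @ np.diag(np.abs(lam) ** 2) @ W
# every diagonal element equals sigma_d^2 * ||lambda||^2 / K
assert np.allclose(np.diag(R), sigma_d2 * np.sum(np.abs(lam) ** 2) / K)
```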

4. Express the above average power as a function of h.



From the circulant matrix factorisation, we have that $\lambda = \sqrt{K}\, W \begin{bmatrix} h \\ 0_{(K-L)\times 1} \end{bmatrix}$. Hence,

$$\begin{aligned}
\|\lambda\|_2^2 = \lambda^H \lambda &= \begin{bmatrix} h^H & 0_{1\times(K-L)} \end{bmatrix} W^H \sqrt{K}\, \sqrt{K}\, W \begin{bmatrix} h \\ 0_{(K-L)\times 1} \end{bmatrix}, \\
&= K \begin{bmatrix} h^H & 0_{1\times(K-L)} \end{bmatrix} \begin{bmatrix} h \\ 0_{(K-L)\times 1} \end{bmatrix}, \\
&= K\, h^H h = K\, \|h\|_2^2.
\end{aligned}$$

Therefore, the average power becomes

$$R = \sigma_d^2\, \|h\|_2^2,$$

where we have dropped the $k$ subscript, as this power is independent of $k$. From the above equation, we see that the power of the useful part of the received signal is simply the product of the channel power and the input power.
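This Parseval-type identity is easy to confirm numerically, under the same kind of arbitrary setup:

```python
import numpy as np

K, L = 8, 3
h = np.array([1.0, 0.5, 0.25])           # arbitrary channel taps
lam = np.fft.fft(h, n=K)                 # zero-pads h, i.e. sqrt(K) W [h; 0]
assert np.isclose(np.sum(np.abs(lam) ** 2), K * np.sum(np.abs(h) ** 2))
```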

5. Give the expression of the ZF equaliser and of the decision variables z.
Starting from

$$y = W^H \Lambda W d + n,$$

it directly appears that the ZF equaliser is

$$F = W^H \Lambda^{-1} W,$$

giving

$$z = Fy = d + W^H \Lambda^{-1} W n.$$

One could try to compute the SNR of the decision variables. Doing so, the developments of question 3 would be very useful, as the covariance of the noise has exactly the same structure (yet with $\sigma_n^2$ instead of $\sigma_d^2$, and with $\frac{1}{|\lambda_l|^2}$ instead of $|\lambda_l|^2$). However, the result of question 4 cannot be applied here, as we cannot relate $\sum_{l=0}^{K-1} \frac{1}{|\lambda_l|^2}$ to the norm of h.
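A short illustrative simulation of the ZF equaliser, with an arbitrary two-tap channel, showing that $F$ removes the channel exactly while amplifying the noise:

```python
import numpy as np

K = 64
rng = np.random.default_rng(2)
h = np.array([1.0, 0.9])                      # arbitrary channel with a deep fade
lam = np.fft.fft(h, n=K)
W = np.fft.fft(np.eye(K)) / np.sqrt(K)

F = W.conj().T @ np.diag(1 / lam) @ W         # ZF equaliser F = W^H Λ^{-1} W
Hc = W.conj().T @ np.diag(lam) @ W            # circulant channel

d = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)
n = (rng.normal(size=K) + 1j * rng.normal(size=K)) * np.sqrt(0.01 / 2)

z = F @ (Hc @ d + n)                          # z = d + W^H Λ^{-1} W n
print(np.allclose(z, d + F @ n))              # True: channel removed exactly
print(np.mean(1 / np.abs(lam) ** 2))          # average noise power gain of F (> 1 here)
```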

6. Bonus: Assuming a disjoint decision is performed, compute the SNR of y and z as a function of $\lambda$ and compare them.
In this supplementary material, we are interested in computing the SNR of y and z. From the above developments, we readily obtain

$$\mathrm{SNR}_y = \frac{\sigma_d^2}{\sigma_n^2} \frac{\sum_{l=0}^{K-1} |\lambda_l|^2}{K}, \qquad \mathrm{SNR}_z = \frac{\sigma_d^2}{\sigma_n^2} \frac{K}{\sum_{l=0}^{K-1} \frac{1}{|\lambda_l|^2}}.$$

Note that we have only considered the diagonal elements, as the decision is performed symbol by symbol. The above expressions say that $\mathrm{SNR}_y$ is driven by the arithmetic mean of the $|\lambda_l|^2$, which we denote by $A(|\lambda|^2)$, while $\mathrm{SNR}_z$ is driven by their harmonic mean $H(|\lambda|^2)$. A well-known inequality² states that $H(|\lambda|^2) \le A(|\lambda|^2)$, and therefore we get

$$\mathrm{SNR}_z \le \mathrm{SNR}_y.$$

This last inequality clearly highlights the noise enhancement phenomenon due to the equalisation.
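A quick numerical illustration of the gap between the two means, with randomly drawn $|\lambda_l|^2$ values (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
gains = rng.exponential(size=64)          # |lambda_l|^2 for a Rayleigh-like channel
arith = gains.mean()                      # drives SNR_y
harm = len(gains) / np.sum(1 / gains)     # drives SNR_z
print(harm <= arith)                      # always True (AM-HM inequality)
print(10 * np.log10(arith / harm), "dB SNR loss due to ZF equalisation")
```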

Exercise 2: Decoding of coded OFDM transmission

We are interested in the derivation of a Viterbi decoder for a coded OFDM transmission. This coded OFDM transmission works as follows: based on the input bit sequence, a convolutional encoder outputs a sequence of bits (which corresponds to a path in the code trellis). These bits are then mapped to QPSK symbols, and groups of $K$ such symbols are built. Then, each of these groups (called codewords) is split over the $K$ carriers of the OFDM modulation. At the receiver side, each element of the codeword is observed with a different SNR (depending on the carrier). The block diagram of the transmission is represented in Figure 5.2. We want to design a Maximum Likelihood (ML) decoder for this transmission scheme based on the output of a ZF equaliser. This decoder can work either with soft inputs or with hard ones.

²See Generalized mean inequality on Wikipedia.

[Figure: u → Encoder → c → Symbol mapping → s → W^H → x → Add CP → x̃ → channel h (+ ñ) → r̃ → Remove CP → r → W → y → ZF equaliser Λ^{-1} → z → Soft decoder → ûs
         z → Decision → v → Hard decoder → ûh]

Figure 5.2: Communication chain considered for the coded OFDM. The vectors x, n and r are obtained by removing the CP part of x̃, ñ and r̃.

1. Give the matrix expression of the x, r, y and z signals and determine the distribution of the decision variable z.
The expressions of the different signals follow readily from the block diagram and the fact that, thanks to the CP, the channel acts as a circulant matrix. Letting $H_c = W^H \Lambda W$ with $\Lambda$ a diagonal matrix, we have

$$\begin{aligned}
x &= W^H s, \\
r &= H_c x + n, \\
y &= Wr = WH_c x + Wn = \Lambda s + Wn, \\
z &= \Lambda^{-1} y = s + \Lambda^{-1} W n = s + \nu.
\end{aligned}$$

The decision variable is the vector z, which follows a multivariate complex normal distribution

$$z \sim \mathcal{CN}(s, \Sigma_\nu),$$

with $\Sigma_\nu$ computed as

$$\begin{aligned}
\Sigma_\nu &= E\left[\nu \nu^H\right] = E\left[\Lambda^{-1} W n n^H W^H \Lambda^{-H}\right] = \Lambda^{-1} W\, E\left[nn^H\right] W^H \Lambda^{-H} \\
&= \sigma_n^2\, \Lambda^{-1} W W^H \Lambda^{-H} = \sigma_n^2\, |\Lambda|^{-2}.
\end{aligned}$$

This shows that

$$\sigma_{\nu_k}^2 = \frac{\sigma_n^2}{|\lambda_k|^2}, \qquad \mathrm{Cov}(\nu_k, \nu_{k'}) = 0 \text{ if } k \neq k'.$$

Hence, all elements of z are independent, and therefore the multivariate complex normal distribution boils down to the product of K univariate complex normal distributions.
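The per-carrier noise variances $\sigma_n^2 / |\lambda_k|^2$ can be verified with a small Monte-Carlo sketch (arbitrary channel and noise power, chosen for illustration):

```python
import numpy as np

K, trials = 8, 200_000
rng = np.random.default_rng(4)
h = np.array([1.0, 0.6, -0.3])           # arbitrary channel taps
lam = np.fft.fft(h, n=K)
sigma_n2 = 0.5

# nu = Lambda^{-1} W n for many independent noise realisations
n = (rng.normal(size=(trials, K)) + 1j * rng.normal(size=(trials, K))) \
    * np.sqrt(sigma_n2 / 2)
W = np.fft.fft(np.eye(K)) / np.sqrt(K)
nu = (n @ W.T) / lam                     # row-wise W n, then divide by lambda_k

var_emp = np.mean(np.abs(nu) ** 2, axis=0)
print(np.allclose(var_emp, sigma_n2 / np.abs(lam) ** 2, rtol=0.05))  # ~True
```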

2. Provide an expression of the metric that should be minimised by an ML decoder with soft decisions.
In the case of the soft decision, the likelihood function is given by

$$L(s) = p(z\,|\,s, \Lambda) = p(\nu = z - s).$$

As the $\nu_k$ are independent, this can be rewritten as

$$L(s) = \prod_{k=1}^{K} p(\nu_k = z_k - s_k) = \prod_{k=1}^{K} \frac{1}{\pi \sigma_{\nu_k}^2} \exp\left(-\frac{|z_k - s_k|^2}{\sigma_{\nu_k}^2}\right).$$

Notice that, as the complex normal distribution is the product of two normal distributions with half the variance, the above equation is correct (no factor 2 is missing for the variance). Hence, letting $\mathcal{S}$ be the set of all sequences belonging to the code, the ML detector should select s such that

$$\hat{s} = \arg\max_{s \in \mathcal{S}} L(s) = \arg\max_{s \in \mathcal{S}} \prod_{k=1}^{K} \frac{1}{\pi \sigma_{\nu_k}^2} \exp\left(-\frac{|z_k - s_k|^2}{\sigma_{\nu_k}^2}\right).$$

Using the log-likelihood, this becomes

$$\hat{s} = \arg\min_{s \in \mathcal{S}} \sum_{k=1}^{K} \frac{1}{\sigma_{\nu_k}^2} |z_k - s_k|^2 = \arg\min_{s \in \mathcal{S}} \sum_{k=1}^{K} |\lambda_k|^2\, |z_k - s_k|^2.$$

The decoder thus has to minimise a Euclidean distance weighted by a coefficient depending on $|\lambda_k|^2$ (which is known). Intuitively, the elements of the code that benefit from a high SNR (high $|\lambda_k|^2$) are weighted more, as it is less likely that an error occurs for these elements. The Viterbi algorithm, relying on the fact that the metric is a sum of independent terms, finds the best sequence efficiently. Note however that it directly outputs the bits ûs instead of the sequence of symbols ŝ.
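A minimal sketch of this soft metric, using a brute-force search over a tiny hypothetical codebook instead of a real Viterbi trellis (codebook, channel and noise draw are all made up for illustration):

```python
import numpy as np

K = 4
lam = np.fft.fft([1.0, 0.5], n=K)              # arbitrary channel DFT
w = np.abs(lam) ** 2                           # soft-metric weights |lambda_k|^2

# Hypothetical codebook: four valid QPSK codewords of length K
S = (np.array([[ 1,  1,  1,  1], [ 1, -1,  1, -1],
               [-1,  1, -1,  1], [-1, -1, -1, -1]])
     + 1j * np.array([[ 1,  1, -1, -1], [ 1, -1, -1,  1],
                      [-1, -1,  1,  1], [ 1,  1,  1,  1]])) / np.sqrt(2)

s_true = S[1]
nu = np.array([0.2 - 0.1j, -0.1 + 0.2j, 0.3 + 0.3j, -0.2 - 0.1j])  # fixed noise draw
z = s_true + nu                                # ZF output z = s + nu

# Soft ML metric: Euclidean distance weighted by |lambda_k|^2
metrics = np.sum(w * np.abs(z - S) ** 2, axis=1)
print(np.array_equal(S[np.argmin(metrics)], s_true))   # True
```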

3. Provide an expression of the metric that should be minimised by an ML decoder with hard decisions.
In the case of the hard decision, the likelihood function is given by

$$L(s) = p(v\,|\,s, \Lambda) = \prod_{k=1}^{K} p(v_{r,k}\,|\,s_k, \lambda_k) \prod_{k=1}^{K} p(v_{i,k}\,|\,s_k, \lambda_k),$$

where the subscripts r and i respectively stand for the real and the imaginary part. This expression can be rewritten as

$$L(s) = \prod_{k=1}^{K} (p_{e,r,k})^{d_{r,k}} (1 - p_{e,r,k})^{1 - d_{r,k}} \prod_{k=1}^{K} (p_{e,i,k})^{d_{i,k}} (1 - p_{e,i,k})^{1 - d_{i,k}},$$

where

$$d_{r,k} = \begin{cases} 1 & \text{if } v_{r,k} \neq s_{r,k} \\ 0 & \text{if } v_{r,k} = s_{r,k} \end{cases}, \qquad d_{i,k} = \begin{cases} 1 & \text{if } v_{i,k} \neq s_{i,k} \\ 0 & \text{if } v_{i,k} = s_{i,k} \end{cases},$$

and, as

$$\sigma_{\nu_{r,k}}^2 = \sigma_{\nu_{i,k}}^2 = \frac{\sigma_{\nu_k}^2}{2} = \frac{\sigma_n^2}{2|\lambda_k|^2},$$

$$p_{e,k} \triangleq p_{e,r,k} = p_{e,i,k} = p_{e,\mathrm{PAM}} = \frac{1}{2}\, \mathrm{erfc}\left(\sqrt{\mathrm{SNR}_k}\right) = \frac{1}{2}\, \mathrm{erfc}\left(\sqrt{\frac{|\lambda_k|^2 \sigma_d^2}{2 \sigma_n^2}}\right).$$
Finally, using the log-likelihood,

$$\begin{aligned}
\hat{s} &= \arg\max_{s \in \mathcal{S}} \sum_{k=1}^{K} \left[ d_{r,k} \ln p_{e,k} + (1 - d_{r,k}) \ln(1 - p_{e,k}) \right] + \sum_{k=1}^{K} \left[ d_{i,k} \ln p_{e,k} + (1 - d_{i,k}) \ln(1 - p_{e,k}) \right] \\
&= \arg\max_{s \in \mathcal{S}} \sum_{k=1}^{K} \left( d_{r,k} \ln\frac{p_{e,k}}{1 - p_{e,k}} + \ln(1 - p_{e,k}) \right) + \sum_{k=1}^{K} \left( d_{i,k} \ln\frac{p_{e,k}}{1 - p_{e,k}} + \ln(1 - p_{e,k}) \right) \\
&= \arg\max_{s \in \mathcal{S}} \sum_{k=1}^{K} d_{r,k} \ln\frac{p_{e,k}}{1 - p_{e,k}} + \sum_{k=1}^{K} d_{i,k} \ln\frac{p_{e,k}}{1 - p_{e,k}} \\
&= \arg\min_{s \in \mathcal{S}} \sum_{k=1}^{K} d_{r,k} \ln\frac{1 - p_{e,k}}{p_{e,k}} + \sum_{k=1}^{K} d_{i,k} \ln\frac{1 - p_{e,k}}{p_{e,k}} \\
&= \arg\min_{s \in \mathcal{S}} \sum_{k=1}^{K} \left( d_{r,k} + d_{i,k} \right) \ln\left(\frac{1 - p_{e,k}}{p_{e,k}}\right).
\end{aligned}$$
The logarithm is always positive for $p_{e,k} < 0.5$. Hence, this is again a minimisation of a Hamming distance weighted by a coefficient depending on $|\lambda_k|^2$ (which is known). The Viterbi algorithm can again be used in order to find the sequence ûh that minimises this distance.
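Similarly, a sketch of the weighted Hamming metric, with the weight $\ln\frac{1-p_{e,k}}{p_{e,k}}$ precomputed from the same hypothetical channel (the two-codeword set is made up for illustration):

```python
import numpy as np
from scipy.special import erfc

K = 4
lam = np.fft.fft([1.0, 0.5], n=K)      # arbitrary channel DFT
sigma_d2, sigma_n2 = 1.0, 0.3

# Per-dimension error probability on each carrier, then the branch weight
p_e = 0.5 * erfc(np.sqrt(np.abs(lam) ** 2 * sigma_d2 / (2 * sigma_n2)))
w = np.log((1 - p_e) / p_e)            # positive as long as p_e < 0.5

def hard_metric(v, s):
    """Weighted Hamming distance between hard decisions v and codeword s."""
    d_r = (np.sign(v.real) != np.sign(s.real)).astype(float)
    d_i = (np.sign(v.imag) != np.sign(s.imag)).astype(float)
    return np.sum((d_r + d_i) * w)

# Toy two-codeword set; v is S[0] with one flipped real bit on carrier 2
S = np.array([[1 + 1j, 1 + 1j, 1 + 1j, 1 + 1j],
              [1 - 1j, -1 + 1j, 1 - 1j, -1 + 1j]]) / np.sqrt(2)
v = np.array([1 + 1j, 1 + 1j, -1 + 1j, 1 + 1j]) / np.sqrt(2)
s_hat = S[np.argmin([hard_metric(v, s) for s in S])]
print(np.array_equal(s_hat, S[0]))     # True: the single weighted flip is cheapest
```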
