Introduction to Communication Systems
by Upamanyu Madhow

Chapter 5: Probability and Random Processes
Minimal coverage required to access Chapter 6

Topics:
Conditional probabilities
Law of total probability
Bayes rule and posterior probabilities
Gaussian random variables; example error probability computation
Bare bones facts on joint Gaussianity and white Gaussian noise


Digital comm: the big picture

Transmitter picks one of M signals to send
Receiver gets a noisy, possibly distorted signal
Receiver must guess which transmitted signal was sent
We use probabilistic modeling and inference to derive algorithms for this purpose

Role of probability

Provides models for uncertainty
Receiver does not know what bits/symbols the transmitter has sent, otherwise there would be no information transfer → models them as random
Uncontrolled sources of uncertainty modeled as random: receiver noise, interference
Channel often modeled as randomly chosen from a class of channels
Once we have a statistical model, we can develop a systematic framework for inference


Outline

Review of probability and random variables: provided in the chapter, but not covered in lectures
Lectures review the concepts most relevant to comm:
Conditional probabilities and Bayes rule
Gaussian random variables
White Gaussian noise (minimal operational definition)


Conditional probabilities

Conditional probability of A given B (P[B] > 0): P[A|B] = P[A ∩ B] / P[B]
Can be used to describe a communication channel:
A: what the receiver observes
B: what the transmitter sends
Specifies P[A|B] and P[A|Bc]: the channel transition probabilities


Law of total probability

Unconditioning by averaging out conditional probabilities: P[A] = P[A|B] P[B] + P[A|Bc] P[Bc]
Generalization: {Bi} form a partition (disjoint, union covers the sample space): P[A] = Σi P[A|Bi] P[Bi]
Specify P[B] and P[Bc], then apply the law of total probability to the channel model above
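As a quick illustration (not from the text), a minimal Python sketch of unconditioning over a binary input; the prior and transition probabilities below are made up:

# Minimal sketch of the law of total probability for a binary channel.
# All numbers below are made up for illustration.
p_B = 0.5                  # prior P[B]: transmitter sends 1
p_Bc = 1 - p_B             # P[Bc]: transmitter sends 0
p_A_given_B = 0.9          # channel transition probability P[A|B]
p_A_given_Bc = 0.2         # channel transition probability P[A|Bc]

# P[A] = P[A|B] P[B] + P[A|Bc] P[Bc]
p_A = p_A_given_B * p_B + p_A_given_Bc * p_Bc
print(p_A)                 # 0.55 for these numbers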

Bayes rule

Flipping the conditioning (figuring out what was sent, from what is received):
P[B|A] = P[A|B] P[B] / P[A]
Generalization (M-ary signaling): P[Bi|A] = P[A|Bi] P[Bi] / Σj P[A|Bj] P[Bj]


Bayes Rule (contd)

We often apply it in mixed settings: continuous and discrete random variables
Discrete: 0 or 1 sent
Continuous: conditional density of the decision statistic given 0 or 1 sent
FAQ on a continuous random variable X: P[X = a] = 0, but we can condition on {X = a}.
Essentially, we are conditioning on X being in a small interval around a.
Just use the density in a naive way.
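A sketch (my own, not the text's) of Bayes rule in a mixed setting, assuming for illustration the on-off model used later in these slides: a discrete bit with a continuous decision statistic whose conditional densities are N(0, v²) and N(m, v²). The values of m, v², the priors, and y are assumptions:

# Sketch of Bayes rule in a mixed setting: discrete bit, continuous observation.
# Model and numbers (m, v2, priors, y) are assumptions for illustration only.
from math import exp, pi, sqrt

def gauss_pdf(y, mean, var):
    # density of N(mean, var) evaluated at y
    return exp(-(y - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

m, v2 = 4.0, 1.0           # signal level and noise variance (assumed)
pi0, pi1 = 0.5, 0.5        # priors: P[0 sent], P[1 sent]
y = 1.7                    # observed value of the decision statistic

# Posterior P[1 sent | Y = y] = pi1 p(y|1) / (pi0 p(y|0) + pi1 p(y|1))
num = pi1 * gauss_pdf(y, m, v2)
den = pi0 * gauss_pdf(y, 0.0, v2) + num
print(num / den)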


Bayes Rule application / Bayes rule in mixed setting

In-class group exercise

Gaussian random variables

X ~ N(m, v²): Gaussian random variable with mean m and variance v²
If X ~ N(m, v²), then (X − m)/v ~ N(0, 1), the standard Gaussian


Using standard Gaussian

Can express probabilities involving any Gaussian random variable in terms of a standard Gaussian
→ Useful to define special notation for the standard Gaussian CDF and CCDF


Q and Phi functions

CDF of the standard Gaussian: Φ(x) = P[X ≤ x], X ~ N(0, 1)
Complementary CDF (CCDF): Q(x) = P[X > x], X ~ N(0, 1)


Properties of Q and Phi functions

CDF and CCDF relationship: Φ(x) + Q(x) = 1, i.e., Φ(x) = 1 − Q(x)
Symmetry of the Gaussian density: Φ(−x) = Q(x), and hence Q(−x) = 1 − Q(x)
→ Can express everything in terms of the Q function with positive arguments
(most relevant for comm applications, where we deal with tail probabilities when computing probabilities of error)
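A quick numerical sanity check of these identities (my own sketch, using scipy's standard normal CDF/CCDF):

# Numerical check of the Phi/Q identities using scipy's standard normal CDF and CCDF.
from scipy.stats import norm

def Q(x):
    return norm.sf(x)            # complementary CDF of N(0,1)

x = 1.3
print(norm.cdf(x) + Q(x))        # Phi(x) + Q(x) = 1
print(norm.cdf(-x), Q(x))        # Phi(-x) = Q(x)
print(Q(-x), 1 - Q(x))           # Q(-x) = 1 - Q(x)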

Example computations

X ~ N(−5, 4): (X − m)/v = (X + 5)/2 ~ N(0, 1)
Expressing probabilities involving X in terms of the Q function with positive arguments (in-class group exercise)


Gaussian probabilities involving quadratics

Just for fun: will not be using this in the main flow of the material
Can factor the quadratic to get an analytical handle (or can evaluate by simulation without factoring)
Decompose the event involving a quadratic into two mutually exclusive events involving linear functions of X:
X^2 − 2X > 15  <=>  X^2 − 2X − 15 > 0  <=>  (X − 5)(X + 3) > 0
<=>  X > 5 (both factors positive) or X < −3 (both factors negative)
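A sketch of both routes (factoring versus brute-force simulation), assuming, as in the example on the left-hand slide, that X ~ N(−5, 4), i.e., mean −5 and standard deviation 2:

# P[X^2 - 2X > 15] for X ~ N(-5, 4): analytical route via factoring vs. simulation.
import numpy as np
from scipy.stats import norm

m, v = -5.0, 2.0                     # mean and standard deviation (from the example on the left)

# Analytical: X^2 - 2X > 15  <=>  X > 5 or X < -3 (mutually exclusive)
p_analytical = norm.sf((5 - m) / v) + norm.cdf((-3 - m) / v)

# Simulation: no factoring needed
rng = np.random.default_rng(0)
x = rng.normal(m, v, size=10**6)
p_simulated = np.mean(x**2 - 2 * x > 15)

print(p_analytical, p_simulated)     # the two should agree to a few decimal places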


What the transformation to standard Gaussian means

Tail probabilities: all that matters is how many standard deviations we are away from the mean
Probability of small intervals: depends on the normalized distance from the mean and the normalized length of the interval (normalized by the standard deviation)


Computing the Q function

Available in the Matlab Communications Toolbox as qfunc()
Alternatively, can do without the specialized toolbox by relating Q to the commonly tabulated complementary error function
Can relate erfc to a Gaussian probability as follows: for x > 0, erfc(x) = 2 P[X > x], X ~ N(0, 1/2)
Inverting the relationship, we have Q(x) = (1/2) erfc(x/√2)
Forms the basis for code fragment 5.6.1
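The referenced code fragment is the book's (Matlab); as a stand-in, here is a short Python sketch of the same relationship Q(x) = (1/2) erfc(x/√2):

# Python stand-in for computing the Q function via erfc: Q(x) = 0.5 * erfc(x / sqrt(2)).
from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

print(qfunc(0.0))      # 0.5
print(qfunc(3.0))      # roughly 1.35e-3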

Example: binary on-off keying

Model: Y = m + N if 1 is sent; Y = N if 0 is sent (Y is the receiver decision statistic)
Gaussian noise N ~ N(0, v²)
Signal level m > 0 (without loss of generality)
Simple decision rule: Y > m/2 → say that 1 is sent; Y ≤ m/2 → say that 0 is sent
Signal power = (1/2) m² + (1/2) 0² = m²/2 (assuming 1 and 0 equally likely to be sent)
Noise power = E[N²] = v²
Signal-to-noise ratio: SNR = m² / (2 v²)


Conditional error probabilities

Conditional error probability given 0 sent: P[error | 0 sent] = P[N > m/2] = Q(m/(2v))
Conditional error probability given 1 sent: P[error | 1 sent] = P[m + N ≤ m/2] = P[N ≤ −m/2] = Q(m/(2v))
For the symmetric decision rule considered here, the conditional error probabilities are equal
(this need not always be the case)
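A simulation sketch (my own, with assumed values m = 2 and v = 1) checking both conditional error probabilities against Q(m/(2v)):

# Simulation check of the conditional error probabilities for on-off keying
# with the rule "decide 1 if Y > m/2". Values of m and v are assumed.
import numpy as np
from scipy.stats import norm

m, v = 2.0, 1.0
rng = np.random.default_rng(1)
n = rng.normal(0.0, v, size=10**6)

p_err_given_0 = np.mean(n > m / 2)        # 0 sent: Y = N, error if Y > m/2
p_err_given_1 = np.mean(m + n <= m / 2)   # 1 sent: Y = m + N, error if Y <= m/2
print(p_err_given_0, p_err_given_1, norm.sf(m / (2 * v)))   # both should be near Q(m/(2v))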


Unconditional error probability

If we know the conditional error probabilities and the probabilities of sending 0/1, then we can average out to find the unconditional error probability
Unconditional error probability: Pe = π0 P[error | 0 sent] + (1 − π0) P[error | 1 sent], where π0 = probability that 0 is sent
For our symmetric rule, the answer does not depend on π0:
Pe = Q(m/(2v)) = Q(√(SNR/2)) for on-off keying
Evaluate at an SNR of 13 dB: SNR(raw) = 10^(SNR(dB)/10) = 10^1.3 ≈ 20
Pe = Q(√10) ≈ 7.8 × 10^-4 for an SNR of 13 dB


Typical error probability curve

Error probability on a log scale, plotted against SNR in dB
Generated using code fragment 5.6.2
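Code fragment 5.6.2 is the book's (Matlab); a Python sketch that produces the same kind of curve, Pe = Q(√(SNR/2)) on a log scale versus SNR in dB, might look like this:

# Error probability curve for on-off keying: Pe = Q(sqrt(SNR/2)), log scale vs. SNR in dB.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

snr_db = np.arange(0.0, 20.5, 0.5)
snr = 10 ** (snr_db / 10)               # raw SNR from dB
pe = norm.sf(np.sqrt(snr / 2))          # Q(sqrt(SNR/2))

print(norm.sf(np.sqrt(10)))             # about 7.8e-4, matching the 13 dB example above

plt.semilogy(snr_db, pe)
plt.xlabel("SNR (dB)")
plt.ylabel("Probability of error")
plt.grid(True)
plt.show()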

OOK model: where it comes from

Continuous-time received signal:
y(t) = s(t) + n(t) if 1 sent
y(t) = n(t) if 0 sent
n(t) is white Gaussian noise (WGN)
WGN n(t) is not a deterministic signal. It is what is called a random process.
We are going to try and get by without really defining random processes.
We'll just focus on a minimal description of WGN.
WGN is a mathematical idealization which only makes sense when we filter it or correlate against it. So we will describe what happens when we perform those operations on WGN.


PSD of WGN

[Figure: WGN n(t) passed through an ideal unit-gain bandpass filter |H(f)| centered at f0, followed by a power meter; the measured power is N0/2 times the two-sided filter bandwidth, independent of f0]
Two-sided PSD of WGN: σ² = N0/2
WGN is so important that we reserve two notations for it!
σ² is also called the variance per dimension, for reasons that will be clear soon.
PSD flat over frequency → infinite power (integrating over all frequencies)
→ only useful as a model when we agree that we will subsequently process it to obtain finite power (e.g., by filtering or correlation)


One-sided PSD

WGN passed through a filter with real-valued impulse response and (physical) bandwidth W → due to conjugate symmetry, the two-sided filter transfer function has bandwidth 2W
Power at filter output = (N0/2) × 2W = N0 W
N0 is the one-sided PSD
What is the value of the PSD? Depends on the quality of the receive front end.
Quantified in terms of the receiver noise figure.
Can be discussed just in time for link budget analysis.


WGN through a correlator

Key result: WGN passed through a correlator yields a zero-mean Gaussian random variable whose variance scales with the PSD and the correlator energy
⟨n, u⟩ = ∫ n(t) u(t) dt is Gaussian with mean zero and variance σ² ||u||²
More succinctly, ⟨n, u⟩ ~ N(0, σ² ||u||²)
If u(t) is a "unit vector" (||u||² = 1), then the variance of ⟨n, u⟩, the projection of n(t) along it, is σ²
Holds for any "direction," i.e., any choice of unit-norm u(t)
This is why we say that σ² = variance per dimension
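A discrete-time approximation sketch of the key result (my own, not the text's): approximate WGN by i.i.d. Gaussian samples of variance σ²/dt and check that the correlator output has variance σ²||u||². The waveform u(t), σ², and dt are assumptions for illustration:

# Discrete-time approximation of WGN through a correlator: check that <n,u> has
# mean 0 and variance sigma^2 * ||u||^2. Waveform u(t), sigma2, and dt are assumed.
import numpy as np

sigma2 = 0.5                        # PSD sigma^2 = N0/2 (assumed)
dt = 0.01                           # time step of the approximation
t = np.arange(0.0, 2.0, dt)
u = np.where(t < 1.0, 1.0, -0.5)    # some correlator waveform (assumed)
u_energy = np.sum(u**2) * dt        # ||u||^2

rng = np.random.default_rng(2)
trials = 20000
# Approximate WGN: i.i.d. samples with variance sigma^2 / dt
n = rng.normal(0.0, np.sqrt(sigma2 / dt), size=(trials, t.size))
z = n @ u * dt                      # <n,u> = integral n(t) u(t) dt, one value per trial

print(np.mean(z), np.var(z), sigma2 * u_energy)   # mean ~ 0, variance ~ sigma^2 ||u||^2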

Back to OOK

Continuous-time model:
y(t) = s(t) + n(t) if 1 sent
y(t) = n(t) if 0 sent
n(t) is white Gaussian noise (WGN) with PSD σ² = N0/2
Template matching for decisions: compare the received signal against the signal transmitted if 1 is sent
Decision statistic Y = ⟨y, s⟩ = ∫ y(t) s(t) dt
(This happens to be optimal, as we will see in Chapter 6)
Let's first find the conditional distributions of the decision statistic, assuming that 0 and 1 are sent, respectively.
Hopefully this will give us insight into how to actually make the decision.


OOK: suggested group exercise

Conditional distribution given 0 sent: y(t) = n(t), so Y = ⟨y, s⟩ = ⟨n, s⟩ ~ N(0, σ² ||s||²)
Conditional distribution given 1 sent: y(t) = s(t) + n(t), so Y = ⟨y, s⟩ = ||s||² + ⟨n, s⟩ ~ N(||s||², σ² ||s||²)
Reduces to the earlier example based on a single observation:
Y = m + N if 1 is sent, Y = N if 0 is sent, with Gaussian noise N ~ N(0, v²) and signal level m > 0
Here m = ||s||² and v² = σ² ||s||²


OOK: suggested group exercise (contd)

Now apply results from the earlier example to conclude that:
Simple decision rule (split the difference in means): Y > ||s||²/2 → say that 1 is sent; Y ≤ ||s||²/2 → say that 0 is sent
Performance of the simple decision rule


What about multiple correlators?

If the same noise waveform is passed through multiple correlators in parallel, we expect the outputs to be related.
Actually, what we get are jointly Gaussian random variables.
We do a bare-bones treatment of joint Gaussianity, starting with a useful result for an important special case:
If u(t) and v(t) are orthogonal waveforms (i.e., ⟨u, v⟩ = 0), then the correlator outputs ⟨n, u⟩ and ⟨n, v⟩ are independent Gaussian random variables.
Group exercise: what are the distributions of ⟨n, u⟩ and ⟨n, v⟩ here?
[Figure: two example piecewise-constant waveforms u(t) and v(t) on the interval [0, 3], used for the group exercise]
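A sketch checking the boxed result empirically with two assumed orthogonal waveforms (not the ones in the figure, which are left to the exercise): each correlator output should be N(0, σ²||·||²), and their sample correlation should be near zero:

# Same approximate WGN through two correlators with orthogonal waveforms u and v:
# the outputs should be (approximately) independent Gaussians. Waveforms are assumed,
# not the ones in the figure.
import numpy as np

sigma2, dt = 0.5, 0.01
t = np.arange(0.0, 2.0, dt)
u = np.where(t < 1.0, 1.0, 0.0)            # supported on [0, 1)
v = np.where(t >= 1.0, 1.0, 0.0)           # supported on [1, 2): orthogonal to u
assert abs(np.sum(u * v) * dt) < 1e-12     # <u, v> = 0

rng = np.random.default_rng(3)
n = rng.normal(0.0, np.sqrt(sigma2 / dt), size=(20000, t.size))
zu, zv = n @ u * dt, n @ v * dt            # the two correlator outputs per noise realization

print(np.var(zu), np.var(zv))              # each ~ sigma^2 * ||.||^2 = 0.5
print(np.corrcoef(zu, zv)[0, 1])           # ~ 0; uncorrelated + jointly Gaussian => independent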




A little more on joint Gaussianity

Just enough to know basic terminology


Covariance

Recall that a single Gaussian random variable is characterized by its mean and variance:
X ~ N(m, v²), with m = E[X] and v² = var(X) = E[(X − E[X])²] = E[X²] − (E[X])²
Jointly Gaussian random variables are characterized by means and covariances
Covariance is a natural generalization of variance:
cov(U, V) = E[(U − E[U])(V − E[V])] = E[UV] − E[U] E[V]
Note: cov(U, U) = var(U)


Mean vector and covariance matrix

Random vector (column vectors by our convention): X = (X1, ..., Xm)^T
Just a compact way to represent random variables X1, X2, ..., Xm defined on a common probability space
Mean vector (entry-by-entry expectation): mX = E[X] = (E[X1], ..., E[Xm])^T
Covariance matrix (matrix of pairwise covariances): CX, with (i, j)th entry cov(Xi, Xj)
Lots of example computations in the text; skipping those.


Jointly Gaussian random variables

Definition (two different ways of saying the same thing):
X1, X2, ..., Xm are jointly Gaussian random variables, i.e., X = (X1, X2, ..., Xm)^T is a Gaussian random vector,
IF any linear combination of its elements is a Gaussian random variable.
That is, for any scalars a1, ..., am, the linear combination a1 X1 + ... + am Xm is a Gaussian random variable.
Fact: a Gaussian random vector is completely characterized by its mean vector and covariance matrix.
Notation: X ~ N(m, C)
See Problem 5.48 for a proof, and the text for an intuitive argument.
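A sketch illustrating the definition and the characterization fact; the particular mean vector, covariance matrix, and combining weights below are assumptions for illustration:

# A Gaussian random vector X ~ N(m, C): any linear combination a^T X is Gaussian with
# mean a^T m and variance a^T C a. The particular m, C, a below are assumptions.
import numpy as np

m = np.array([1.0, -2.0, 0.5])
C = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])                  # symmetric, positive definite
a = np.array([1.0, 2.0, -1.0])

rng = np.random.default_rng(4)
X = rng.multivariate_normal(m, C, size=200000)   # rows are realizations of X
Y = X @ a                                        # a1*X1 + a2*X2 + a3*X3

print(np.mean(Y), a @ m)                         # mean of the linear combination
print(np.var(Y), a @ C @ a)                      # variance of the linear combination
print(np.cov(X, rowvar=False))                   # sample covariance matrix, close to C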

Some important facts

Independence → uncorrelatedness:
If U, V are independent, then cov(U, V) = 0 (i.e., U, V are uncorrelated)
Reverse not true in general.
BUT, joint Gaussianity + uncorrelatedness → independence:
If U, V are jointly Gaussian and uncorrelated (i.e., cov(U, V) = 0), then U, V are independent.


Discrete-time WGN

N ~ N(0, σ² I): random vector whose components are i.i.d. N(0, σ²) random variables
White: noise samples are uncorrelated and have equal variance
These might arise from passing continuous-time WGN through multiple correlators (or by filtering and sampling; not needed right now).
Recall: ⟨n, u⟩ is Gaussian with mean zero and variance σ² ||u||²,
and if u(t) and v(t) are orthogonal waveforms (i.e., ⟨u, v⟩ = 0), then the correlator outputs ⟨n, u⟩ and ⟨n, v⟩ are independent Gaussian random variables.
We can reduce continuous-time WGN to discrete-time WGN by using unit-norm, orthogonal correlators. We'll do this on our way to deriving optimal receivers in Chapter 6.
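A sketch tying the pieces together (my own, under the same discrete-time approximation as above): correlate the same approximate WGN against an assumed bank of unit-norm, orthogonal waveforms and check that the resulting vector behaves like N(0, σ²I):

# Reducing (approximate) continuous-time WGN to discrete-time WGN N ~ N(0, sigma^2 I)
# using unit-norm, orthogonal correlators; here four non-overlapping unit pulses (assumed).
import numpy as np

sigma2, dt = 0.5, 0.01
t = np.arange(0.0, 4.0, dt)
# Four orthonormal waveforms: indicator of [k, k+1), each with unit energy ||u_k||^2 = 1.
U = np.stack([((t >= k) & (t < k + 1)).astype(float) for k in range(4)])

rng = np.random.default_rng(5)
n = rng.normal(0.0, np.sqrt(sigma2 / dt), size=(20000, t.size))
N = n @ U.T * dt                       # each row: (<n,u_0>, ..., <n,u_3>)

print(np.cov(N, rowvar=False))         # close to sigma^2 * I: uncorrelated, equal variances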
