
1

2
k-D Detection Problem / M Hypotheses
Given a communication system with the following $M$ hypotheses:

$$H_i:\quad \mathbf{R} = \mathbf{S}_i + \mathbf{n}, \qquad i = 1,\ldots,M$$

The $\mathbf{S}_i,\ i = 1,\ldots,M$ are known vectors transmitted by the source, with

$$P(\text{symbol } \mathbf{S}_i \text{ was transmitted}) = P(H_i), \qquad i \in \{1,\ldots,M\}$$

$\mathbf{n}$ is a $k$-dimensional random noise vector with a known probability density function, whose components are independent:

$$P(x_j < n_j < x_j + dx_j,\ j = 1,\ldots,k) = \prod_{j=1}^{k} f_n(x_j)\,dx_j$$
3
Vector Notation
The hypothesis $H_i$ is given by

$$H_i:\quad \begin{pmatrix} r_1 \\ \vdots \\ r_k \end{pmatrix} = \begin{pmatrix} s_{i1} \\ \vdots \\ s_{ik} \end{pmatrix} + \begin{pmatrix} n_1 \\ \vdots \\ n_k \end{pmatrix}$$

or in vector notation

$$H_i:\quad \mathbf{R} = \mathbf{S}_i + \mathbf{n}; \qquad \mathbf{R}^T = (r_1,\ldots,r_k), \quad \mathbf{n}^T = (n_1,\ldots,n_k), \quad \mathbf{S}_i^T = (s_{i1},\ldots,s_{ik})$$
4
The optimal receiver
(Block diagram: Source and Noise enter the channel $g(\mathbf{s}, \mathbf{n})$, whose output feeds the receiver.)

$$H_i:\quad \mathbf{R} = g(\mathbf{s}_i, \mathbf{n}) = \mathbf{s}_i + \mathbf{n}, \qquad i \in \{1,\ldots,M\}$$

A solution for the M-ary decision case: the optimal receiver chooses the hypothesis that maximizes

$$P(H_j)\, f(\mathbf{R} \mid \mathbf{S}_j)$$
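As a sketch of this decision rule, the following picks the hypothesis maximizing $P(H_j) f(\mathbf{R}\mid\mathbf{S}_j)$ for a Gaussian channel using the equivalent log-domain metric; the signal vectors, priors, and noise variance are assumed toy values, not numbers from the slides.

```python
import numpy as np

# Sketch of the MAP rule: decide the j maximizing P(H_j) f(R | S_j).
# Signal vectors, priors, and noise variance are assumed toy values.
S = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])  # S_j as rows
priors = np.array([0.5, 0.25, 0.25])                 # P(H_j)
sigma2 = 0.5                                         # noise variance per dim

def map_decide(R):
    # log(P(H_j) f(R|S_j)) = log P(H_j) - ||R - S_j||^2 / (2 sigma2) + const
    metric = np.log(priors) - np.sum((R - S) ** 2, axis=1) / (2 * sigma2)
    return int(np.argmax(metric))

print(map_decide(np.array([0.9, 0.1])))              # decides hypothesis 0
```

For equal priors the log-prior term is constant and the rule reduces to minimum Euclidean distance.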
5
Error Probability (cont.)
Assuming $P(H_i) = 1/M,\ 1 \le i \le M$, the overall error probability is the average of the message error probabilities:

$$P_E = \frac{1}{M}\sum_{i=1}^{M}\sum_{\substack{j=1 \\ j \ne i}}^{M} \int_{A_j} f(\mathbf{R} \mid H_i)\, d\mathbf{R} = \frac{1}{M}\sum_{i=1}^{M}\sum_{\substack{j=1 \\ j \ne i}}^{M} \int G_j(\mathbf{R})\, f(\mathbf{R} \mid H_i)\, d\mathbf{R}$$

where

$$G_j(\mathbf{R}) = \begin{cases} 1 & \mathbf{R} \in A_j \\ 0 & \mathbf{R} \notin A_j \end{cases}$$
6
Error Probability (cont.)
The trick:

a. $\mathbf{R} \in A_j \;\Rightarrow\; f(\mathbf{R} \mid H_j) \ge f(\mathbf{R} \mid H_k)$ for all $k \ne j$.

Therefore, for a given $i$,

$$\mathbf{R} \in A_j \;\Rightarrow\; \frac{f(\mathbf{R} \mid H_j)}{f(\mathbf{R} \mid H_i)} \ge 1$$

b. $\displaystyle \left(\frac{f(\mathbf{R} \mid H_j)}{f(\mathbf{R} \mid H_i)}\right)^{s} \ge 1$ for $\mathbf{R} \in A_j$, $0 \le s \le 1$.

Thus

$$G_j(\mathbf{R}) \le \left(\frac{f(\mathbf{R} \mid H_j)}{f(\mathbf{R} \mid H_i)}\right)^{s}, \qquad 0 \le s \le 1$$
7
8
Upper bound: the Bhattacharyya bound
For $s = 1/2$ we obtain

$$P_E = \frac{1}{M}\sum_{i=1}^{M}\sum_{\substack{j=1 \\ j \ne i}}^{M} \int G_j(\mathbf{R})\, f(\mathbf{R} \mid H_i)\, d\mathbf{R}
\le \frac{1}{M}\sum_{i=1}^{M}\sum_{\substack{j=1 \\ j \ne i}}^{M} \int \left(\frac{f(\mathbf{R} \mid H_j)}{f(\mathbf{R} \mid H_i)}\right)^{1/2} f(\mathbf{R} \mid H_i)\, d\mathbf{R}
= \frac{1}{M}\sum_{i=1}^{M}\sum_{\substack{j=1 \\ j \ne i}}^{M} \int \left(f(\mathbf{R} \mid H_j)\, f(\mathbf{R} \mid H_i)\right)^{1/2} d\mathbf{R}$$

Thus we get the bound

$$P_E \le (M-1)\, \max_{i \ne j} \int \left(f(\mathbf{R} \mid H_j)\, f(\mathbf{R} \mid H_i)\right)^{1/2} d\mathbf{R}$$
9
Example: Gaussian Channel, M Hypotheses, K Dimensions

$$f(\mathbf{R} \mid H_j) = \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\!\left(-\frac{\|\mathbf{R}-\mathbf{S}_j\|^2}{N_0}\right) = \prod_{k=1}^{K} \frac{1}{\sqrt{\pi N_0}} \exp\!\left(-\frac{(r_k - s_{jk})^2}{N_0}\right)$$

Thus,

$$I = \left(f(\mathbf{R} \mid H_j)\, f(\mathbf{R} \mid H_i)\right)^{1/2} = \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\!\left(-\frac{\|\mathbf{R}-\mathbf{S}_j\|^2 + \|\mathbf{R}-\mathbf{S}_i\|^2}{2N_0}\right)$$
10
It is easy to show that

$$\|\mathbf{R}-\mathbf{S}_j\|^2 + \|\mathbf{R}-\mathbf{S}_i\|^2 = 2\left\|\mathbf{R} - \frac{\mathbf{S}_i+\mathbf{S}_j}{2}\right\|^2 + \frac{\|\mathbf{S}_i-\mathbf{S}_j\|^2}{2}$$

Substituting this we get

$$I = \exp\!\left(-\frac{\|\mathbf{s}_i-\mathbf{s}_j\|^2}{4N_0}\right) \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\!\left(-\frac{\left\|\mathbf{R} - \frac{\mathbf{s}_i+\mathbf{s}_j}{2}\right\|^2}{N_0}\right)$$
11
Integrating over all the dimensions we get

$$P_E \le (M-1)\, \max_{i \ne j} \exp\!\left(-\frac{\|\mathbf{S}_i-\mathbf{S}_j\|^2}{4N_0}\right) \int \left(\frac{1}{\pi N_0}\right)^{K/2} \exp\!\left(-\frac{\left\|\mathbf{R} - \frac{\mathbf{S}_i+\mathbf{S}_j}{2}\right\|^2}{N_0}\right) d\mathbf{R}$$

or

$$P_E \le (M-1)\, \max_{i \ne j} \exp\!\left(-\frac{\|\mathbf{S}_i-\mathbf{S}_j\|^2}{4N_0}\right) \prod_{k=1}^{K} \int_{-\infty}^{\infty} \frac{1}{\sqrt{\pi N_0}} \exp\!\left(-\frac{\left(r_k - \frac{s_{ik}+s_{jk}}{2}\right)^2}{N_0}\right) dr_k$$
12
Since the inner integral equals one, we get a simple bound:

$$P_E \le (M-1)\, \max_{i \ne j} \exp\!\left(-\frac{\|\mathbf{S}_i-\mathbf{S}_j\|^2}{4N_0}\right)$$
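This bound is easy to evaluate numerically; a minimal sketch with an assumed one-dimensional 4-amplitude signal set (the amplitudes and $N_0$ are illustrative values, not numbers from the slides):

```python
import numpy as np

# P_E <= (M - 1) * max_{i != j} exp(-||S_i - S_j||^2 / (4 N0)),
# evaluated for an assumed 4-point amplitude set in K = 1 dimension.
S = np.array([[-3.0], [-1.0], [1.0], [3.0]])
N0 = 1.0
M = len(S)

# The max of exp(-d^2 / 4 N0) is attained at the minimum pairwise distance.
d2_min = min(np.sum((S[i] - S[j]) ** 2)
             for i in range(M) for j in range(M) if i != j)
bound = (M - 1) * np.exp(-d2_min / (4 * N0))
print(bound)   # 3 * exp(-1), about 1.10 (an upper bound may exceed 1)
```

At low SNR the bound is loose (it can exceed 1, as here); it becomes tight as the minimum distance grows relative to $N_0$.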
13
The Theorem of Irrelevance
Given a channel with pdf

$$f(\mathbf{R}_1, \mathbf{R}_2 \mid \mathbf{S}_j) = \lim_{d\mathbf{x} \to 0} \frac{\mathrm{Prob}\left[\mathbf{x} < \mathbf{R} < \mathbf{x} + d\mathbf{x} \mid \mathbf{S}_j\right]}{d\mathbf{x}}$$

the output of the channel is composed of two sub-vectors $\mathbf{R}_1, \mathbf{R}_2$. (Block diagram: the source $\{\mathbf{S}_j\},\ j = 1,\ldots,M$ enters the channel $f(\mathbf{R}_1, \mathbf{R}_2 \mid \mathbf{S}_j)$, whose outputs $\mathbf{R}_1$ and $\mathbf{R}_2$ feed the receiver.)

An optimum receiver may disregard the vector $\mathbf{R}_2$ if and only if

$$f(\mathbf{R}_2 \mid \mathbf{S}_j, \mathbf{R}_1) = f(\mathbf{R}_2 \mid \mathbf{R}_1)$$

Proof: in general $p(a, b) = p(a)\,p(b \mid a)$. Thus,

$$f(\mathbf{R}_1, \mathbf{R}_2 \mid \mathbf{S}_j) = f(\mathbf{R}_1 \mid \mathbf{S}_j)\, f(\mathbf{R}_2 \mid \mathbf{S}_j, \mathbf{R}_1) = f(\mathbf{R}_1 \mid \mathbf{S}_j)\, f(\mathbf{R}_2 \mid \mathbf{R}_1)$$
14
Examples
Case A (block diagram): $\mathbf{R}_1 = \mathbf{S}_j + \mathbf{n}_1$, while $\mathbf{R}_2 = \mathbf{n}_2$ is a separate noise vector.
Case B (block diagram): $\mathbf{R}_1 = \mathbf{S}_j + \mathbf{n}_1$, while $\mathbf{R}_2$ is obtained from $\mathbf{R}_1$ and a second noise $\mathbf{n}_2$.
15
What about this case?
(Block diagram: $\mathbf{R}_1 = \mathbf{S}_j + \mathbf{n}_1$, while $\mathbf{R}_2$ involves the signal $\mathbf{S}_j$ together with a second noise $\mathbf{n}_2$.)
16
Sufficient Statistic: Definition
We say that a mapping $T: \mathbb{R}^D \to \mathbb{R}^{D'}$ forms a sufficient statistic for the densities $f_{\mathbf{r}|1}(\cdot),\ldots,f_{\mathbf{r}|M}(\cdot)$ if there are $M$ functions $P_m(\{m\}, \cdot)$ from $\mathbb{R}^{D'}$ to $[0,1]$ such that the vector

$$\left(P_1(\{m\}, T(\mathbf{r}_{obs})),\ \ldots,\ P_M(\{m\}, T(\mathbf{r}_{obs}))\right)$$

is a probability vector, and such that this probability vector is equal to

$$\left(\Pr(M = 1 \mid Y = \mathbf{r}_{obs}),\ \ldots,\ \Pr(M = M \mid Y = \mathbf{r}_{obs})\right)$$
17
The Objectives of the Current Lecture
In this lecture we consider the following detection problem. Given

$$H_i:\quad r(t) = s_i(t) + n(t), \qquad 0 \le t \le T, \quad i = 1,\ldots,M$$

where $s_i(t)$ and $P(H_i)$ are known to the receiver and $n(t)$ is additive noise with known statistics, find an optimal receiver in the sense of minimum probability of error.

In order to solve this problem we want to make use of the results of the previous talk. The major differences are:
1. $s_i(t)$ is a waveform.
2. $n(t)$ is an additive noise process, not a random vector.
18
Example: Pulse Amplitude Modulation

$$H_0:\quad r(t) = g(t) + n(t), \qquad 0 \le t \le T$$
$$H_1:\quad r(t) = -g(t) + n(t), \qquad 0 \le t \le T$$
19
FSK Transmitter
(Block diagram: the bit stream 1 0 1 1 0 0 0 1 drives two triggers; a "1" triggers the pulse generator for $p_1(t)$ at frequency $f_1$ and a "0" triggers the pulse generator for $p_0(t)$, and the two outputs are summed.)

For the bits 1 0 1 1 0 the transmitted waveform is

$$p_1(t) + p_0(t-T) + p_1(t-2T) + p_1(t-3T) + p_0(t-4T) + \ldots$$
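The pulse-train construction above can be sketched directly: for bit $b_m$ sent at time $mT$, add $p_{b_m}(t - mT)$. Since the shifted pulses do not overlap, concatenation realizes the sum; the two sinusoid-burst pulse shapes and all rates are assumed placeholders, not the slides' waveforms.

```python
import numpy as np

# Build sum_m p_{b_m}(t - mT) for the bits 1 0 1 1 0 by concatenating
# non-overlapping bursts; pulse shapes and rates are assumed values.
fs, T = 8000, 0.004                     # sample rate [Hz], bit duration [s]
t = np.arange(int(fs * T)) / fs         # one bit interval (32 samples)
p = {0: np.sin(2 * np.pi * 1000 * t),   # p_0(t): burst at f_0 = 1 kHz
     1: np.sin(2 * np.pi * 2000 * t)}   # p_1(t): burst at f_1 = 2 kHz
bits = [1, 0, 1, 1, 0]
signal = np.concatenate([p[b] for b in bits])
print(signal.shape)                     # (160,): 5 bits of 32 samples each
```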
20
Signal Space Representation
Euclidean vector space: let $R^k$ be the set of all ordered k-tuples $\mathbf{x} = (x_1, x_2, \ldots, x_k)$, where the $x_i$ are real numbers known as the coordinates of $\mathbf{x}$; the elements of $R^k$ are known as points or vectors. We define the following operations:

(I) Addition: $\mathbf{x} + \mathbf{y} = (x_1 + y_1,\ x_2 + y_2,\ \ldots,\ x_k + y_k)$

(II) Scalar multiplication: $\alpha\mathbf{x} = (\alpha x_1,\ \alpha x_2,\ \ldots,\ \alpha x_k)$

(III) Inner product (or scalar product): $\displaystyle (\mathbf{x}, \mathbf{y}) = \sum_{i=1}^{k} x_i y_i$

(IV) Norm: $\displaystyle \|\mathbf{x}\| = (\mathbf{x}, \mathbf{x})^{1/2} = \left(\sum_{i=1}^{k} x_i x_i^*\right)^{1/2}$, with $\mathbf{x} \in L^2$, i.e. $\|\mathbf{x}\|^2 < \infty$
21
Inner Product (Scalar Product)

I. $(\mathbf{x}, \mathbf{y}) = (\mathbf{y}, \mathbf{x})^*$
II. $(\alpha\mathbf{x}, \mathbf{y}) = \alpha(\mathbf{x}, \mathbf{y})$
III. $(\mathbf{x}_1 + \mathbf{x}_2, \mathbf{y}) = (\mathbf{x}_1, \mathbf{y}) + (\mathbf{x}_2, \mathbf{y})$
IV. $(\mathbf{x}, \mathbf{x}) = 0$ iff $\mathbf{x} = (0, \ldots, 0)$
22
Example: 3-D
Let $\mathbf{a}$ be a vector in 3-D space, and let $e_x = (1,0,0)$, $e_y = (0,1,0)$, $e_z = (0,0,1)$ be unit vectors. Therefore,

$$\mathbf{a} = (4, 1, 2) = (\mathbf{a}, e_x)e_x + (\mathbf{a}, e_y)e_y + (\mathbf{a}, e_z)e_z$$
$$(\mathbf{a}, e_x) = ((4,1,2), (1,0,0)) = 4$$
$$(\mathbf{a}, e_y) = ((4,1,2), (0,1,0)) = 1$$
$$(\mathbf{a}, e_z) = ((4,1,2), (0,0,1)) = 2$$
$$\mathbf{a} = 4(1,0,0) + 1(0,1,0) + 2(0,0,1)$$
23
Example: 3-D (cont.)
If

$$\mathbf{a} = (4, 1, 2) = 4e_x + 1e_y + 2e_z$$
$$\mathbf{b} = (-2, 1, -3) = -2e_x + 1e_y - 3e_z$$

then

$$\mathbf{a} + \mathbf{b} = 2e_x + 2e_y - 1e_z$$
$$(\mathbf{a}, \mathbf{b}) = 4 \cdot (-2) + 1 \cdot 1 + 2 \cdot (-3) = -13$$
$$\|\mathbf{a}\|^2 = (\mathbf{a}, \mathbf{a}) = 4^2 + 1^2 + 2^2 = 21 \;\Rightarrow\; \|\mathbf{a}\| = \sqrt{21}$$
$$\|\mathbf{b}\|^2 = (\mathbf{b}, \mathbf{b}) = 2^2 + 1^2 + 3^2 = 14 \;\Rightarrow\; \|\mathbf{b}\| = \sqrt{14}$$
$$\cos\theta(\mathbf{a}, \mathbf{b}) = \frac{(\mathbf{a}, \mathbf{b})}{\|\mathbf{a}\|\,\|\mathbf{b}\|} = \frac{-13}{\sqrt{14 \cdot 21}} = \frac{-13}{7\sqrt{6}}$$
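The arithmetic of this example can be reproduced directly:

```python
import numpy as np

# Reproducing the 3-D example: a = (4, 1, 2), b = (-2, 1, -3).
a = np.array([4.0, 1.0, 2.0])
b = np.array([-2.0, 1.0, -3.0])

inner = np.dot(a, b)                        # (a, b) = -13
cos_theta = inner / (np.linalg.norm(a) * np.linalg.norm(b))
print(a + b, inner, cos_theta)              # [2. 2. -1.], -13, ~ -0.758
```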
24
The Fourier Series
We also know the Fourier series of an integrable function over the interval $[0, T]$:

$$f(t) = \sum_{n=-\infty}^{\infty} f_n \frac{1}{\sqrt{T}} \exp\!\left(j\frac{2\pi nt}{T}\right); \qquad f_n = \int_0^T f(t)\, \frac{1}{\sqrt{T}} \exp\!\left(-j\frac{2\pi nt}{T}\right) dt$$

and $f(t)$ is equivalent to $\{f_n\}$.

Can we find a different way to represent a set of waveforms for detection?
25
Hilbert Space: Definition
A sequence $\{h_n\}$ in a linear vector space is said to be a Cauchy sequence if for every $\varepsilon > 0$ there is an integer $N$ such that

$$\|h_n - h_m\| < \varepsilon \quad \text{if} \quad n, m > N$$

A vector space with a norm is complete if every Cauchy sequence converges.

A Hilbert space is a linear complete vector space with a scalar product.
26
Hilbert Space for Real (Complex) Functions
$H$ is the collection of all real (complex) functions on the interval $[0, T]$ such that

$$\int_0^T f(t) f^*(t)\, dt < \infty, \qquad f \in L^2$$

The inner product (scalar product) is defined as

$$(f, g) = \int_0^T f(t) g^*(t)\, dt < \infty, \qquad f, g \in L^2$$
27
Definitions
1. A family of functions $\{\varphi_k\}_{k=1}^{\infty}$ is called an orthogonal family if $(\varphi_i, \varphi_j) = 0$ for $i \ne j$.

2. A family of functions $\{\varphi_k\}_{k=1}^{\infty}$ is called an orthonormal family if

$$(\varphi_i, \varphi_j) = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$$

3. A family of functions $\{\varphi_k\}_{k=1}^{\infty}$ is called a complete family if for every $f(t)$ with nonzero energy ($\int_0^T f(t) f^*(t)\, dt = (f, f) \ne 0$) there is at least one $k$ such that

$$\int_0^T f(t)\, \varphi_k^*(t)\, dt \ne 0$$
28
The Riesz-Fischer Theorem
A complete orthonormal family of functions includes all possible orthonormal vectors in the space: no vector outside this set can possibly be orthogonal to all the vectors of the family.

The Riesz-Fischer theorem: let $\{\varphi_k\}_{k=1}^{\infty}$ be orthonormal and complete on $[0, T]$, and let $f(t) \in L^2$; then

$$\lim_{N \to \infty} \int_0^T \left(f(t) - \sum_{i=1}^{N} f_i \varphi_i(t)\right)^2 dt = 0, \qquad f_i = (f, \varphi_i(t))$$

An orthonormal family of functions is called complete in a vector space of $f(t) \in L^2[0, T]$ iff

$$(f, \varphi_k) = 0 \ \ \forall k \;\Rightarrow\; f = 0$$

This may be stated as follows: the only function orthogonal to all the members of the complete set is the zero function, in the sense of the norm.
29
Example
Thus we can express the Fourier representation (an orthonormal base) as

$$f(t) = \sum_{i=-\infty}^{\infty} f_i \varphi_i(t); \qquad \varphi_i(t) = \frac{1}{\sqrt{T}} \exp\!\left(j\frac{2\pi it}{T}\right)$$

$$f_n = \int_0^T f(t)\, \frac{1}{\sqrt{T}} \exp\!\left(-j\frac{2\pi nt}{T}\right) dt = (f, \varphi_n)$$

The above function series satisfies

$$(\varphi_i(t), \varphi_j(t)) = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$$

In a similar way we have

$$\|f(t)\|^2 = \sum_i |f_i|^2 = E \quad (\text{energy})$$
30
Parseval's Theorem
Let $f(t)$ and $g(t)$ be real functions which can be represented by a function series

$$f(t) = \sum_{i=1}^{\infty} f_i \varphi_i(t), \qquad g(t) = \sum_{i=1}^{\infty} g_i \varphi_i(t)$$

where $f_i = (f(t), \varphi_i(t))$ and $g_i = (g(t), \varphi_i(t))$; then

a. $\displaystyle \int_0^T f(t) g^*(t)\, dt = (f, g) = \sum_i f_i g_i^*$

b. $\displaystyle \int_0^T f(t) f^*(t)\, dt = (f, f) = \sum_i |f_i|^2 = E \quad (\text{energy})$
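Parseval's relation can be checked numerically with the orthonormal Fourier basis of the earlier slides; the square-wave test function, the truncation to $|n| \le 200$, and the Riemann-sum discretization are assumptions of this sketch.

```python
import numpy as np

# Numerical check of Parseval with the orthonormal Fourier basis
# phi_n(t) = exp(j 2 pi n t / T) / sqrt(T) on [0, T].
T, L = 1.0, 4096
dt = T / L
t = (np.arange(L) + 0.5) * dt               # midpoint sampling grid
f = np.where(t < T / 2, 1.0, -1.0)          # square wave, unit energy

n = np.arange(-200, 201)
phi = np.exp(2j * np.pi * np.outer(n, t) / T) / np.sqrt(T)
coeffs = phi.conj() @ f * dt                # f_n = (f, phi_n), Riemann sum

energy_time = np.sum(np.abs(f) ** 2) * dt   # integral of |f(t)|^2
energy_coef = np.sum(np.abs(coeffs) ** 2)   # sum of |f_n|^2
print(energy_time, energy_coef)             # ~1.0 and ~0.998 (truncation)
```

The small gap between the two energies is the tail $\sum_{|n|>200} |f_n|^2$, which shrinks as more coefficients are kept.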
31
Signal Space Representation
Any signal in a set of M waveforms $\{s_i(t),\ 1 \le i \le M\}$ can be represented as a linear combination of a set of N orthonormal functions $\{\varphi_j(t),\ 1 \le j \le N\}$, where $N \le M$:

$$s_i(t) = \sum_{k=1}^{N} s_{ik}\, \varphi_k(t), \qquad i = 1, 2, \ldots, M, \quad 0 \le t \le T \tag{1}$$

$$s_{ik} = (s_i, \varphi_k(t)) = \int_0^T s_i(t)\, \varphi_k^*(t)\, dt$$

$s_{ik}$ is the projection of $s_i(t)$ on the direction of $\varphi_k(t)$.
32
Signal Space Representation (cont.)
The representation (1) has the following matrix form:

$$s_i(t) = (s_{i1}, \ldots, s_{iN}) \begin{pmatrix} \varphi_1(t) \\ \vdots \\ \varphi_N(t) \end{pmatrix}, \qquad \mathbf{s}_i = (s_{i1}, \ldots, s_{iN})$$

Thus, using the set of basis functions, each signal $s_i(t)$ maps to a set of N real numbers, i.e. an N-dimensional real-valued vector. This is a 1-1 correspondence between the signal set (or equivalently, the message symbol set) and the N-dimensional vector space.
33
Gram-Schmidt Orthogonalization
Step 1:

$$g_1(t) = s_1(t)$$
$$\varphi_1(t) = \frac{g_1(t)}{\|g_1(t)\|} = \frac{s_1(t)}{\sqrt{E_1}} \quad (\text{unit length}), \qquad E_1 = \int_0^T s_1^2(t)\, dt$$

Note:

$$s_1(t) = \sqrt{E_1}\, \varphi_1(t) \;\Rightarrow\; s_{11} = \sqrt{E_1}, \qquad \varphi_1(t) \text{ has unit energy}$$
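The full procedure (repeating this step while subtracting projections onto the basis built so far) can be sketched on sampled waveforms, where inner products become Riemann sums; the three signals below are assumed toy examples, not the slides' figures, and the third is deliberately linearly dependent.

```python
import numpy as np

# Gram-Schmidt on sampled waveforms: rows of S are signals s_i(t) on [0, T].
def gram_schmidt(S, dt, tol=1e-10):
    basis = []
    for s in S:
        g = s.astype(float).copy()
        for phi in basis:                    # subtract projection (s, phi) phi
            g = g - (np.sum(s * phi) * dt) * phi
        energy = np.sum(g ** 2) * dt
        if energy > tol:                     # drop linearly dependent signals
            basis.append(g / np.sqrt(energy))
    return np.array(basis)

T, L = 1.0, 1000
dt = T / L
t = (np.arange(L) + 0.5) * dt
S = np.vstack([np.ones(L), t, 1.0 + t])      # s3 = s1 + s2 (dependent)
B = gram_schmidt(S, dt)
print(B.shape)                               # (2, 1000): only N = 2 survive
```

This illustrates why $N \le M$: dependent signals contribute no new basis function.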
34
35
36
Example: 4 Signals - Gram-Schmidt procedure
37
The Procedure for the example
38
39
The final results
The set of signals is equivalent to

$$s_1(t) \leftrightarrow \mathbf{s}_1 = (\sqrt{2},\, 0,\, 0), \quad s_2(t) \leftrightarrow \mathbf{s}_2 = (0,\, \sqrt{2},\, 0), \quad s_3(t) \leftrightarrow \mathbf{s}_3 = (0,\, \sqrt{2},\, 1), \quad s_4(t) \leftrightarrow \mathbf{s}_4 = (\sqrt{2},\, 0,\, 1)$$
40
Another Example
The Gram-Schmidt procedure applied to the set of functions $\{1, t, t^2, t^3, t^4\}$ on $[-1, 1]$ yields

$$\frac{1}{\sqrt{2}}, \quad \sqrt{\frac{3}{2}}\, t, \quad \frac{1}{2}\sqrt{\frac{5}{2}}\,\left(3t^2 - 1\right), \quad \frac{1}{2}\sqrt{\frac{7}{2}}\,\left(5t^3 - 3t\right), \quad \frac{1}{8}\sqrt{\frac{9}{2}}\,\left(35t^4 - 30t^2 + 3\right)$$

i.e. the normalized Legendre polynomials. (The original slide plots these functions on $[-1, 1]$.)
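This result can be checked against NumPy's Legendre utilities: the functions above are $\sqrt{(2n+1)/2}\, P_n(t)$, orthonormal on $[-1, 1]$.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Verify orthonormality of sqrt((2n+1)/2) * P_n(t) on [-1, 1] using
# Gauss-Legendre quadrature (10 nodes: exact for degree <= 19).
x, w = leggauss(10)
phi = [np.sqrt((2 * n + 1) / 2) * Legendre.basis(n)(x) for n in range(5)]
G = np.array([[np.sum(w * p * q) for q in phi] for p in phi])

# The third family member matches the explicit formula from the slide.
phi2_explicit = np.sqrt(5 / 2) * (3 * x ** 2 - 1) / 2
print(np.allclose(G, np.eye(5)), np.allclose(phi[2], phi2_explicit))  # True True
```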
41
Conclusions
Any set of M signals can be represented by a finite set of N basis functions, with N <= M.
The basis functions can be used to generate the signals.
The main issue is how to treat the noise process.
42
Revisit of Random Processes
A stochastic process $X(t)$ is defined as an ensemble of sample functions; the values of the process at time instants $t_1 < t_2 < \ldots < t_n$ are given by the random variables $x(t_1), x(t_2), \ldots, x(t_n)$.

These random variables are characterized statistically by their joint pdf $\mathrm{pdf}(x(t_1), x(t_2), \ldots, x(t_n))$.

A stochastic process is stationary in the narrow (strict) sense if the probability law of this vector is invariant to time shifts:

$$\mathrm{pdf}(x(t_1), x(t_2), \ldots, x(t_n)) = \mathrm{pdf}(x(t_1+a), x(t_2+a), \ldots, x(t_n+a))$$
43
Autocorrelation; Wide-Sense Stationary Processes
A stochastic process is stationary in the wide sense if the mean of the process is constant and the covariance is a function of the time difference only.

The mean and autocorrelation function of $X(t)$:

$$E(X(t)) = \int_{-\infty}^{\infty} x_1\, p(X(t) = x_1)\, dx_1$$
$$R_x(t, s) = E(X(t)X(s)) = \iint x_1 x_2\, p(X(t) = x_1, X(s) = x_2)\, dx_1\, dx_2$$

The covariance function of $X(t)$:

$$K_x(t, s) = E(X(t)X(s)) - E(X(t))E(X(s)) = R_x(t, s) - E(X(t))E(X(s))$$

For a wide-sense stationary process:

$$E(X(t)) = \text{constant}; \qquad R_x(t, s) = R_x(|t - s|)$$
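A quick empirical illustration: an i.i.d. zero-mean sequence is (trivially) wide-sense stationary, with $R_x(0) = \sigma^2$ and $R_x(\tau) = 0$ for $\tau \ne 0$; the value of $\sigma$ and the sample size are assumed for the sketch.

```python
import numpy as np

# Sample-average estimate of R_x(lag) = E[X(t) X(t + lag)] for i.i.d.
# zero-mean Gaussian samples (sigma = 2, so R_x(0) should be near 4).
rng = np.random.default_rng(0)
sigma, N = 2.0, 200_000
x = rng.normal(0.0, sigma, N)

def autocorr(x, lag):
    # estimate R_x(lag) by averaging products at the given time offset
    return float(np.mean(x[:len(x) - lag] * x[lag:]))

print(autocorr(x, 0), autocorr(x, 5))        # ~4.0 and ~0.0
```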
44
Reminder: Circular Gaussian Random Noise Vector
A circular Gaussian real random k-dimensional vector is defined as a zero-mean i.i.d. (independent and identically distributed) Gaussian vector with probability density function

$$p(\mathbf{n}) = p(n_1, n_2, \ldots, n_k) = \frac{1}{(2\pi\sigma^2)^{k/2}} \exp\!\left(-\frac{n_1^2 + n_2^2 + \cdots + n_k^2}{2\sigma^2}\right) = \frac{1}{(2\pi\sigma^2)^{k/2}} \exp\!\left(-\frac{\|\mathbf{n}\|^2}{2\sigma^2}\right)$$
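Because the components are i.i.d., this density factors into a product of one-dimensional Gaussians; a small sketch (with assumed test values) checks the vector form against the per-component product.

```python
import numpy as np

# Vector-form circular Gaussian pdf vs. the product of 1-D Gaussians.
def circular_gaussian_pdf(n, sigma):
    k = len(n)
    return (np.exp(-np.dot(n, n) / (2 * sigma ** 2))
            / (2 * np.pi * sigma ** 2) ** (k / 2))

n = np.array([0.3, -1.2, 0.7])               # assumed test point
sigma = 1.5
per_component = np.prod(np.exp(-n ** 2 / (2 * sigma ** 2))
                        / np.sqrt(2 * np.pi * sigma ** 2))
print(np.isclose(circular_gaussian_pdf(n, sigma), per_component))  # True
```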
45
Vector Space-Revisit
46
Normed Linear Space