
0512-4261 Introduction to Statistical Signal Processing          28.11.2017

Exercise 1
Gal Cohen 201306909

1. Denote T for "Tails" and H for "Heads". Let X ∼ U(−1, 1) and let Y be a r.v. representing the result of a coin toss. The PDF and CDF of X are

$$f_X(x) = \begin{cases} 0.5, & x \in [-1, 1] \\ 0, & \text{o.w.} \end{cases} \qquad F_X(x) = \begin{cases} 0, & x \le -1 \\ \dfrac{x+1}{2}, & -1 < x \le 1 \\ 1, & 1 < x \end{cases}$$
Let Z be the generated random variable.


(a) $Z = \begin{cases} X, & Y = T \\ 0, & Y = H \end{cases}$, thus

$$F_Z(z) = P(Z \le z) = P(Y = T) \cdot P(Z \le z \mid Y = T) + P(Y = H) \cdot P(Z \le z \mid Y = H)$$

$$= \frac{1}{2} P(X \le z) + \frac{1}{2} P(0 \le z) = \frac{1}{2} F_X(z) + \frac{1}{2} U(z) = \begin{cases} 0, & z \le -1 \\ \dfrac{z+1}{4}, & -1 < z \le 0 \\ \dfrac{z+3}{4}, & 0 < z \le 1 \\ 1, & 1 < z \end{cases}$$

(a random variable which takes exactly one value c has CDF U(x − c); here c = 0) and

$$f_Z(z) = \frac{d}{dz} F_Z(z) = \frac{1}{2} f_X(z) + \frac{1}{2} \delta(z)$$
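As a quick numerical sanity check (an editor's addition, not part of the graded solution), both the piecewise CDF and the point mass at zero can be verified by Monte Carlo simulation in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.uniform(-1.0, 1.0, n)          # X ~ U(-1, 1)
tails = rng.random(n) < 0.5            # Y = T with probability 1/2
Z = np.where(tails, X, 0.0)            # Z = X if tails, 0 if heads

F = lambda z: np.mean(Z <= z)          # empirical CDF
# compare to F_Z(z) = (z+1)/4 on (-1, 0) and (z+3)/4 on (0, 1]
assert abs(F(-0.5) - 0.125) < 0.01
assert abs(F(0.5) - 0.875) < 0.01
# the (1/2) * delta(z) atom: P(Z = 0) should be about 1/2
assert abs(np.mean(Z == 0.0) - 0.5) < 0.01
```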
(b) $Z = \begin{cases} X, & Y = T \\ X + 1, & Y = H \end{cases}$, thus

$$F_Z(z) = P(Z \le z) = P(Y = T) \cdot P(Z \le z \mid Y = T) + P(Y = H) \cdot P(Z \le z \mid Y = H)$$

$$= \frac{1}{2} P(X \le z) + \frac{1}{2} P(X + 1 \le z) = \frac{1}{2} F_X(z) + \frac{1}{2} F_X(z - 1) = \begin{cases} 0, & z \le -1 \\ \dfrac{z+1}{4}, & -1 < z \le 0 \\ \dfrac{2z+1}{4}, & 0 < z \le 1 \\ \dfrac{z+2}{4}, & 1 < z \le 2 \\ 1, & 2 < z \end{cases}$$

Accordingly,

$$f_Z(z) = \frac{d}{dz} F_Z(z) = \begin{cases} 0.25, & -1 < z \le 0 \\ 0.5, & 0 < z \le 1 \\ 0.25, & 1 < z \le 2 \\ 0, & \text{o.w.} \end{cases}$$
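The same kind of Monte Carlo check (a sketch, not part of the original solution) confirms the three linear pieces of this CDF:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = rng.uniform(-1.0, 1.0, n)
tails = rng.random(n) < 0.5
Z = np.where(tails, X, X + 1.0)        # Z = X if tails, X + 1 if heads

F = lambda z: np.mean(Z <= z)
# compare to F_Z(z) = (z+1)/4, (2z+1)/4, (z+2)/4 on the three pieces
assert abs(F(-0.5) - 0.125) < 0.01
assert abs(F(0.5) - 0.5) < 0.01
assert abs(F(1.5) - 0.875) < 0.01
```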

(c) $Z = \begin{cases} X, & Y = T \\ 2X, & Y = H \end{cases}$, thus

$$F_Z(z) = P(Z \le z) = P(Y = T) \cdot P(Z \le z \mid Y = T) + P(Y = H) \cdot P(Z \le z \mid Y = H)$$

$$= \frac{1}{2} P(X \le z) + \frac{1}{2} P(2X \le z) = \frac{1}{2} F_X(z) + \frac{1}{2} F_X\!\left(\frac{z}{2}\right) = \begin{cases} 0, & z \le -2 \\ \dfrac{z+2}{8}, & -2 < z \le -1 \\ \dfrac{3z+4}{8}, & -1 < z \le 1 \\ \dfrac{z+6}{8}, & 1 < z \le 2 \\ 1, & 2 < z \end{cases}$$

Accordingly,

$$f_Z(z) = \frac{d}{dz} F_Z(z) = \begin{cases} 0.125, & -2 < z \le -1 \\ 0.375, & -1 < z \le 1 \\ 0.125, & 1 < z \le 2 \\ 0, & \text{o.w.} \end{cases}$$
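This CDF, too, can be checked by simulation (an editorial sanity check, not part of the original derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
X = rng.uniform(-1.0, 1.0, n)
tails = rng.random(n) < 0.5
Z = np.where(tails, X, 2.0 * X)        # Z = X if tails, 2X if heads

F = lambda z: np.mean(Z <= z)
# compare to F_Z(z) = (z+2)/8, (3z+4)/8, (z+6)/8 on the three pieces
assert abs(F(-1.5) - 0.0625) < 0.01
assert abs(F(0.0) - 0.5) < 0.01
assert abs(F(1.5) - 0.9375) < 0.01
```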

(d) $Z = \begin{cases} X, & Y = T \\ -X, & Y = H \end{cases}$, thus

$$F_Z(z) = P(Z \le z) = P(Y = T) \cdot P(Z \le z \mid Y = T) + P(Y = H) \cdot P(Z \le z \mid Y = H) = \frac{1}{2} F_X(z) + \frac{1}{2} F_{-X}(z)$$

Now, since X is continuous,

$$F_{-X}(x) = P(-X \le x) = P(X \ge -x) = 1 - P(X < -x) = 1 - F_X(-x)$$

$$= 1 - \begin{cases} 0, & -x \le -1 \\ -0.5x + 0.5, & -1 < -x \le 1 \\ 1, & 1 < -x \end{cases} = \begin{cases} 0, & x \le -1 \\ 0.5x + 0.5, & -1 < x \le 1 \\ 1, & 1 < x \end{cases} = F_X(x)$$

and eventually $F_Z(z) = F_X(z)$, i.e. $Z \sim U(-1, 1)$.
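The distributional identity Z ∼ U(−1, 1) can also be seen numerically (an illustrative sketch added by the editor):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
X = rng.uniform(-1.0, 1.0, n)
tails = rng.random(n) < 0.5
Z = np.where(tails, X, -X)             # Z = X if tails, -X if heads

# Z should again be U(-1, 1): F_Z(z) = (z + 1)/2, mean 0, variance 1/3
F = lambda z: np.mean(Z <= z)
assert abs(F(-0.5) - 0.25) < 0.01
assert abs(F(0.5) - 0.75) < 0.01
assert abs(Z.mean()) < 0.01 and abs(Z.var() - 1/3) < 0.01
```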

2. First, we compute some required values:

$$E[A] = E[E[A \mid Y]] = \frac{1}{2} E[X] + \frac{1}{2} E[0] = 0$$
$$E[B] = E[E[B \mid Y]] = \frac{1}{2} E[X] + \frac{1}{2} E[X + 1] = \frac{1}{2}$$
$$E[C] = E[E[C \mid Y]] = \frac{1}{2} E[X] + \frac{1}{2} E[2X] = 0$$
$$E[D] = E[E[D \mid Y]] = \frac{1}{2} E[X] + \frac{1}{2} E[-X] = 0$$
$$V[A] = E[A^2] - E^2[A] = E[E[A^2 \mid Y]] = \frac{1}{2} E[X^2] + \frac{1}{2} E[0] = \frac{1}{6}$$
$$V[B] = E[B^2] - E^2[B] = E[E[B^2 \mid Y]] - \frac{1}{4} = \frac{1}{2} E[X^2] + \frac{1}{2} E[(X+1)^2] - \frac{1}{4} = E[X^2] + E[X] + \frac{1}{4} = \frac{7}{12}$$
$$V[C] = E[C^2] - E^2[C] = E[E[C^2 \mid Y]] = \frac{1}{2} E[X^2] + \frac{1}{2} E[(2X)^2] = \frac{5}{6}$$
$$V[D] = E[D^2] - E^2[D] = E[E[D^2 \mid Y]] = \frac{1}{2} E[X^2] + \frac{1}{2} E[(-X)^2] = E[X^2] = \frac{1}{3}$$
$$\sigma_{AB} = E[AB] - E[A] E[B] = E[E[AB \mid Y]] = \frac{1}{2} E[X^2] + \frac{1}{2} E[0] = \frac{1}{6}$$
$$\sigma_{CD} = E[CD] - E[C] E[D] = E[E[CD \mid Y]] = \frac{1}{2} E[X^2] + \frac{1}{2} E[-2X^2] = -\frac{1}{6}$$
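All of these moments can be confirmed by simulation. The sketch below (an editorial addition) assumes, as the covariance computations imply, that A, B, C, D are all generated from the same X and the same coin Y:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400_000
X = rng.uniform(-1.0, 1.0, n)
T = rng.random(n) < 0.5                 # one coin drives all four variables
A = np.where(T, X, 0.0)
B = np.where(T, X, X + 1.0)
C = np.where(T, X, 2.0 * X)
D = np.where(T, X, -X)

assert abs(A.mean()) < 0.01 and abs(B.mean() - 0.5) < 0.01
assert abs(C.mean()) < 0.01 and abs(D.mean()) < 0.01
assert abs(A.var() - 1/6) < 0.01 and abs(B.var() - 7/12) < 0.01
assert abs(C.var() - 5/6) < 0.01 and abs(D.var() - 1/3) < 0.01
assert abs(np.cov(A, B)[0, 1] - 1/6) < 0.01   # sigma_AB
assert abs(np.cov(C, D)[0, 1] + 1/6) < 0.01   # sigma_CD
```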
(a) The optimal estimator of B from A is

$$\tilde{B}(A) = E[B \mid A] = E[E[(B \mid A) \mid Y]] = \frac{1}{2} E[X \mid X] + \frac{1}{2} E[X + 1 \mid 0] = X + \frac{1}{2}$$

The MSE is

$$E[(B - X - 0.5)^2] = E[E[(B - X - 0.5)^2 \mid Y]] = \frac{1}{2} E[0.25] + \frac{1}{2} E[0.25] = 0.25$$
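A numerical check of this MSE (an editor's sketch): the error B − X − 0.5 equals −0.5 when Y = T and +0.5 when Y = H, so the squared error is 0.25 for every outcome.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
X = rng.uniform(-1.0, 1.0, n)
T = rng.random(n) < 0.5
B = np.where(T, X, X + 1.0)

# estimator X + 1/2, written in terms of X as in the solution
mse = np.mean((B - (X + 0.5)) ** 2)
assert abs(mse - 0.25) < 1e-6           # exactly 0.25 up to rounding
```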

(b) The optimal linear estimator of B from A is

$$\tilde{B}(A) = \eta_B + \frac{\sigma_{AB}}{V[A]} (A - \eta_A) = A + \frac{1}{2}$$

The MSE is

$$E[(B - A - 0.5)^2] = E[E[(B - A - 0.5)^2 \mid Y]] = \frac{1}{2} E[0.25] + \frac{1}{2} E[(X + 0.5)^2]$$
$$= \frac{1}{8} + \frac{1}{2}\left(E[X^2] + E[X] + \frac{1}{4}\right) = \frac{1}{8} + \frac{7}{24} = \frac{5}{12}$$

(given Y = H we have A = 0 and B = X + 1, so B − A − 0.5 = X + 0.5; the result matches the general formula V[B] − σ²_AB / V[A] = 7/12 − 1/6 = 5/12).
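A numerical check of the linear estimator's MSE (editorial sketch), which should agree with V[B] − σ²_AB / V[A] = 5/12:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400_000
X = rng.uniform(-1.0, 1.0, n)
T = rng.random(n) < 0.5
A = np.where(T, X, 0.0)
B = np.where(T, X, X + 1.0)

mse = np.mean((B - (A + 0.5)) ** 2)    # linear estimator A + 1/2
assert abs(mse - 5/12) < 0.01
```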

(c) The optimal estimator of D from C is

$$\tilde{D}(C) = E[D \mid C] = E[E[(D \mid C) \mid Y]] = \frac{1}{2} E[X \mid X] + \frac{1}{2} E[-X \mid 2X] = -\frac{1}{2} X$$

The MSE is

$$E[(D + 0.5X)^2] = E[E[(D + 0.5X)^2 \mid Y]] = \frac{1}{2} E[(1.5X)^2] + \frac{1}{2} E[(-0.5X)^2] = \frac{5}{12}$$

(d) The optimal linear estimator of D from C is

$$\tilde{D}(C) = \eta_D + \frac{\sigma_{CD}}{V[C]} (C - \eta_C) = -\frac{1}{5} C$$
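The coefficient σ_CD / V[C] = −1/5 can be recovered empirically as a regression slope (editorial check); the corresponding MSE, V[D] − σ²_CD / V[C] = 1/3 − 1/30 = 3/10, is not stated in the text but follows from the moments computed above:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400_000
X = rng.uniform(-1.0, 1.0, n)
T = rng.random(n) < 0.5
C = np.where(T, X, 2.0 * X)
D = np.where(T, X, -X)

slope = np.cov(C, D)[0, 1] / C.var()    # sigma_CD / V[C]
assert abs(slope + 0.2) < 0.01          # = -1/5
mse = np.mean((D - slope * C) ** 2)
assert abs(mse - 0.3) < 0.01            # V[D] - sigma_CD^2 / V[C] = 3/10
```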

3. (a) Let $\tilde{x}(y) = cy + d$ be some linear estimator of x from y, and denote by $\hat{x}(y) = ay + b$ the linear estimator whose error $\varepsilon(x, y) = \hat{x}(y) - x$ satisfies the orthogonality conditions $E[\varepsilon] = E[\varepsilon y] = 0$. We have

$$E[(\tilde{x} - x)^2] = E[(\hat{x} - x + \tilde{x} - \hat{x})^2] = E[(\hat{x} - x)^2] + 2 E[(\hat{x} - x)(\tilde{x} - \hat{x})] + \underbrace{E[(\tilde{x} - \hat{x})^2]}_{\ge 0}$$
$$\ge E[(\hat{x} - x)^2] + 2 \underbrace{E\big[\varepsilon(x, y) \left((c - a) y + (d - b)\right)\big]}_{= 0} = E[(\hat{x} - x)^2]$$

i.e., the MSE of $\hat{x}(y)$ is minimal, hence it is the optimal linear estimator.
(b) Let $\tilde{x}(y) = q(y)$ be some estimator of x from y. We have

$$E[(\tilde{x} - x)^2] = E[(\hat{x} - x + \tilde{x} - \hat{x})^2] = E[(\hat{x} - x)^2] + 2 E[(\hat{x} - x)(\tilde{x} - \hat{x})] + \underbrace{E[(\tilde{x} - \hat{x})^2]}_{\ge 0}$$
$$\ge E[(\hat{x} - x)^2] + 2 \underbrace{E\big[\varepsilon(x, y) \left(q(y) - \hat{x}(y)\right)\big]}_{= 0} = E[(\hat{x} - x)^2]$$

i.e., the MSE of $\hat{x}(y)$ is minimal, hence it is the optimal estimator.
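The orthogonality argument can be illustrated numerically (an editor's sketch): for the LMMSE coefficients the error has zero mean and is uncorrelated with y, so any other linear estimator does worse. The joint law of (x, y) below is an arbitrary illustrative assumption, not taken from the exercise:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 400_000
y = rng.normal(0.0, 1.0, n)
x = 2.0 * y + rng.normal(0.0, 0.5, n)   # an arbitrary illustrative joint law

a = np.cov(y, x)[0, 1] / y.var()        # LMMSE coefficients a, b
b = x.mean() - a * y.mean()
eps = (a * y + b) - x                   # estimation error epsilon(x, y)
assert abs(np.mean(eps * y)) < 0.01     # E[eps * y] = 0 (orthogonality)
assert abs(np.mean(eps)) < 0.01         # E[eps] = 0

mse_hat = np.mean(eps ** 2)
mse_tilde = np.mean(((a + 0.3) * y + (b - 0.1) - x) ** 2)   # any other c, d
assert mse_hat <= mse_tilde             # perturbing (a, b) only hurts
```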


4. (a) The MSE of the optimal linear estimator of x from y is $\sigma_x^2(1 - \rho^2)$, and the MSE of the optimal linear estimator of y from x is $\sigma_y^2(1 - \rho^2)$. Given that $\sigma_x^2 = \sigma_y^2$, the claim is clearly true.
(b) The claim is false, but I could not manage to find a counterexample.

