
MAE 600: Homework 1

Due on February 2nd, 2012


Instructor: Professor P. Singla
Priyanshu Agarwal (Contribution: 50 %)
Suren Kumar (Contribution: 50 %)
Contents

Problem 1
Problem 2
Problem 3
Problem 4
    (a) $E[\hat{x}]$
    (b) $E\left[(x - \hat{x})(x - \hat{x})^T\right]$
    (c) $E\left[(\tilde{y} - \hat{y})(\tilde{y} - \hat{y})^T\right]$
Problem 5
Problem 6
Problem 1
Prove that the Gaussian distribution function
$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
is a probability density function.
Solution 1: To prove that $p(x)$ is a probability density function we need to prove:
1. $p(x) \geq 0$
2. $\int_{-\infty}^{\infty} p(x)\,dx = 1$

Proving (1) is trivial, because an exponential raised to any power is a positive quantity and the standard deviation $\sigma$ is a positive quantity.
$$\int_{-\infty}^{\infty} p(x)\,dx = \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx \quad (1)$$
$$= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx \quad (2)$$
Let $y = \frac{x-\mu}{\sigma\sqrt{2}}$, which implies $dy = \frac{dx}{\sigma\sqrt{2}}$. Substituting $y$ and $dy$ in Equation 2, we get
$$\int_{-\infty}^{\infty} p(x)\,dx = \frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{-y^2}\,dy \quad (3)$$
The integral in Equation 3 is in the form of a Gaussian integral. Consider $\int_{-\infty}^{\infty} e^{-y^2}\,dy$:
$$\left(\int_{-\infty}^{\infty} e^{-y^2}\,dy\right)^2 = \int_{-\infty}^{\infty} e^{-y^2}\,dy \int_{-\infty}^{\infty} e^{-z^2}\,dz \quad (4)$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(y^2+z^2)}\,dz\,dy \quad (5)$$
$$= \iint e^{-(y^2+z^2)}\,dA \quad (6)$$
where $z$ is a dummy variable and $dA = dz\,dy$ is the area of a small element in the $YZ$ plane. Physically, Equation 6 integrates over the complete $YZ$ plane. This integral can also be evaluated in polar coordinates $(r, \theta)$ with $r \in [0, \infty)$, $\theta \in [0, 2\pi]$. The original coordinates $(y, z)$ can be expressed in terms of polar coordinates as
$$y = r\cos\theta, \quad z = r\sin\theta, \quad y^2 + z^2 = r^2$$
Also, in polar coordinates the area of a small element can be written as $dA = r\,dr\,d\theta$. Equation 6 can now be written as
$$\left(\int_{-\infty}^{\infty} e^{-y^2}\,dy\right)^2 = \int_0^{2\pi}\!\!\int_0^{\infty} r\,e^{-r^2}\,dr\,d\theta = \int_0^{2\pi}\left[-\frac{1}{2}e^{-r^2}\right]_0^{\infty}d\theta = \int_0^{2\pi}\frac{1}{2}\,d\theta = \frac{1}{2}\theta\,\Big|_0^{2\pi} = \pi$$
This implies that $\int_{-\infty}^{\infty} e^{-y^2}\,dy = \sqrt{\pi}$. By substituting this result in Equation 3, we get
$$\int_{-\infty}^{\infty} p(x)\,dx = 1$$
Hence, the Gaussian density function is a valid probability density function.
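As a numerical cross-check (an addition to the analytical proof above, not part of the original derivation), the integral can be evaluated in MATLAB for an arbitrary choice of mean and standard deviation; the values below are illustrative assumptions.

% Numerical check: the Gaussian pdf integrates to 1 for any mu, sigma
mu = 1.5; sigma = 2.0;                         % assumed example parameters
p = @(x) exp(-(x - mu).^2./(2*sigma^2))./(sigma*sqrt(2*pi));
I = integral(p, -Inf, Inf);                    % adaptive quadrature over the real line
fprintf('integral of p(x) = %.10f\n', I);      % prints 1.0000000000 to numerical precision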
Problem 2
The number $\nu = \frac{1}{\sigma^3}E\left[(x-\mu)^3\right]$ is called the skewness of the random variable $x$. Show that the Poisson distribution has $\nu = \frac{1}{\sqrt{\lambda}}$.
Solution 2: The probability mass function for the Poisson distribution of a discrete random variable is given by
$$p(x, \lambda) = \frac{\lambda^x e^{-\lambda}}{x!} \quad (7)$$
where $x \in \mathbb{Z}^{+}$ (the set of non-negative integers) and $\lambda$ is a positive real number.
The mean of $x$ is given by
$$\mu = E[x] = \sum_{x=0}^{\infty} x\,\frac{\lambda^x e^{-\lambda}}{x!} = e^{-\lambda}\sum_{x=0}^{\infty} x\,\frac{\lambda^x}{x!} = e^{-\lambda}\left[0 + \lambda + 2\frac{\lambda^2}{2!} + 3\frac{\lambda^3}{3!} + \ldots\right] = \lambda e^{-\lambda}\left[1 + \frac{\lambda}{1!} + \frac{\lambda^2}{2!} + \ldots\right]$$
From the Taylor series expansion of $e^{\lambda} = 1 + \frac{\lambda}{1!} + \frac{\lambda^2}{2!} + \ldots$ So,
$$\mu = \lambda e^{-\lambda}e^{\lambda} = \lambda \quad (8)$$
Now, the variance of $x$ is given by
$$\sigma^2 = E\left[(x - \mu)^2\right] = E\left[x^2 + \mu^2 - 2\mu x\right]$$
$$= E\left[x^2\right] + E\left[\mu^2\right] - E[2\mu x] \quad (\because E[x + y] = E[x] + E[y])$$
$$= E\left[x^2\right] + \mu^2 - 2\mu E[x] \quad (\because E[c] = c,\ E[cx] = cE[x],\ \text{where } c \text{ is a constant})$$
$$= E\left[x^2\right] + \mu^2 - 2\mu^2 \quad (\because E[x] = \mu)$$
$$\sigma^2 = E\left[x^2\right] - \mu^2 \quad (9)$$
From Eqns. 8 and 9, we get
$$\sigma^2 = E\left[x^2\right] - \lambda^2 \quad (10)$$
Now, by definition, $E[x^2]$ is given by
$$E\left[x^2\right] = \sum_{x=0}^{\infty} x^2 p(x, \lambda) = \sum_{x=0}^{\infty} x^2\,\frac{\lambda^x e^{-\lambda}}{x!} = e^{-\lambda}\sum_{x=0}^{\infty} x^2\,\frac{\lambda^x}{x!} = e^{-\lambda}\left[0 + 1^2\lambda + 2^2\frac{\lambda^2}{2!} + 3^2\frac{\lambda^3}{3!} + \ldots\right]$$
$$E\left[x^2\right] = \lambda e^{-\lambda}\left[1 + 2\frac{\lambda}{1!} + 3\frac{\lambda^2}{2!} + \ldots\right] \quad (11)$$
Now, using the Taylor series expansion of $e^{\lambda}$, we have
$$e^{\lambda} = 1 + \frac{\lambda}{1!} + \frac{\lambda^2}{2!} + \ldots$$
Multiplying both sides by $\lambda$, we get
$$\lambda e^{\lambda} = \lambda + \frac{\lambda^2}{1!} + \frac{\lambda^3}{2!} + \frac{\lambda^4}{3!} + \ldots$$
Differentiating both sides w.r.t. $\lambda$, we get
$$\lambda e^{\lambda} + e^{\lambda} = 1 + 2\frac{\lambda}{1!} + 3\frac{\lambda^2}{2!} + 4\frac{\lambda^3}{3!} + \ldots$$
$$(\lambda + 1)e^{\lambda} = 1 + 2\frac{\lambda}{1!} + 3\frac{\lambda^2}{2!} + 4\frac{\lambda^3}{3!} + \ldots \quad (12)$$
From Eqns. 11 and 12, we get
$$E\left[x^2\right] = \lambda e^{-\lambda}(\lambda + 1)e^{\lambda}$$
$$E\left[x^2\right] = \lambda(\lambda + 1) \quad (13)$$
From Eqns. 10 and 13, we get
$$\sigma^2 = \lambda(\lambda + 1) - \lambda^2 = \lambda$$
$$\sigma = \sqrt{\lambda} \quad (14)$$
Now, $E\left[(x - \mu)^3\right]$ is given by
$$E\left[(x - \mu)^3\right] = E\left[x^3 - \mu^3 - 3\mu x(x - \mu)\right] = E\left[x^3\right] - E\left[\mu^3\right] - 3\mu E\left[x^2\right] + E\left[3\mu^2 x\right]$$
$$E\left[(x - \mu)^3\right] = E\left[x^3\right] - \mu^3 - 3\mu E\left[x^2\right] + 3\mu^3 \quad (\because E[x] = \mu,\ E[c] = c,\ E[cx] = cE[x],\ \text{where } c \text{ is a constant}) \quad (15)$$
From Eqns. 8 and 15, we get
$$E\left[(x - \mu)^3\right] = E\left[x^3\right] - \lambda^3 - 3\lambda E\left[x^2\right] + 3\lambda^3 \quad (16)$$
Now, by definition, $E[x^3]$ is given by
$$E\left[x^3\right] = \sum_{x=0}^{\infty} x^3 p(x, \lambda) = \sum_{x=0}^{\infty} x^3\,\frac{\lambda^x e^{-\lambda}}{x!} = e^{-\lambda}\sum_{x=0}^{\infty} x^3\,\frac{\lambda^x}{x!} = e^{-\lambda}\left[0 + 1^3\lambda + 2^3\frac{\lambda^2}{2!} + 3^3\frac{\lambda^3}{3!} + \ldots\right]$$
$$E\left[x^3\right] = \lambda e^{-\lambda}\left[1 + 2^2\frac{\lambda}{1!} + 3^2\frac{\lambda^2}{2!} + \ldots\right] \quad (17)$$
Now, to evaluate the series in Eqn. 17, multiply Eqn. 12 on both sides by $\lambda$:
$$\lambda(\lambda + 1)e^{\lambda} = \lambda + 2\frac{\lambda^2}{1!} + 3\frac{\lambda^3}{2!} + 4\frac{\lambda^4}{3!} + \ldots$$
Differentiating both sides w.r.t. $\lambda$, we get
$$\lambda(\lambda + 1)e^{\lambda} + (\lambda + 1)e^{\lambda} + \lambda e^{\lambda} = 1 + 2^2\frac{\lambda}{1!} + 3^2\frac{\lambda^2}{2!} + 4^2\frac{\lambda^3}{3!} + \ldots$$
$$\left(\lambda^2 + 3\lambda + 1\right)e^{\lambda} = 1 + 2^2\frac{\lambda}{1!} + 3^2\frac{\lambda^2}{2!} + 4^2\frac{\lambda^3}{3!} + \ldots \quad (18)$$
From Eqns. 17 and 18, we get
$$E\left[x^3\right] = \lambda e^{-\lambda}\left(\lambda^2 + 3\lambda + 1\right)e^{\lambda}$$
$$E\left[x^3\right] = \lambda^3 + 3\lambda^2 + \lambda \quad (19)$$
Now, from Eqns. 13, 16 and 19, we get
$$E\left[(x - \mu)^3\right] = \lambda^3 + 3\lambda^2 + \lambda - \lambda^3 - 3\lambda\cdot\lambda(\lambda + 1) + 3\lambda^3 = \lambda^3 + 3\lambda^2 + \lambda - \lambda^3 - 3\lambda^3 - 3\lambda^2 + 3\lambda^3$$
$$E\left[(x - \mu)^3\right] = \lambda \quad (20)$$
Now, the skewness of a random variable is defined as
$$\nu = \frac{1}{\sigma^3}E\left[(x - \mu)^3\right] \quad (21)$$
So, from Eqns. 14, 20 and 21, we get
$$\nu = \frac{\lambda}{\lambda^{3/2}} = \frac{1}{\sqrt{\lambda}} \quad (22)$$
Hence, the skewness of a random variable with a Poisson distribution is $\nu = \frac{1}{\sqrt{\lambda}}$.
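The result can also be checked numerically (an added sketch, not part of the original solution) by summing the Poisson pmf directly for an assumed rate $\lambda$; the truncation point and the rate below are illustrative assumptions.

% Numerical check: skewness of Poisson(lambda) equals 1/sqrt(lambda)
lambda = 4;                                           % assumed example rate
x = 0:200;                                            % truncated support (adequate here)
pmf = exp(x*log(lambda) - lambda - gammaln(x + 1));   % lambda^x e^-lambda / x!
mu  = sum(x.*pmf);                                    % should equal lambda
sig = sqrt(sum((x - mu).^2.*pmf));                    % should equal sqrt(lambda)
nu  = sum((x - mu).^3.*pmf)/sig^3;                    % skewness
fprintf('nu = %.6f, 1/sqrt(lambda) = %.6f\n', nu, 1/sqrt(lambda));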
Problem 3
Show that the equal probability contours for a multi-variate Gaussian density function will be hyper-ellipsoid surfaces. Also, draw equal probability contours (without using the contour command in MATLAB) for a normally distributed 2-dimensional random vector with zero mean and the following covariance matrix:
$$P = \begin{bmatrix} 4 & 1 \\ 1 & 4 \end{bmatrix}$$
Solution 3: The n-dimensional Gaussian density function is given by
$$p(x) = \frac{1}{(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}}\exp\left(-\frac{1}{2}[x - \mu]^T\Sigma^{-1}[x - \mu]\right)$$
where $\Sigma$ is the $n \times n$ covariance matrix and $\mu$ is the n-dimensional mean vector. Equal probability density contours are obtained by finding the $x$ for which $p(x) = k$, where $k$ is a constant. To show that the equal probability density contours for a multi-variate Gaussian are hyper-ellipsoid surfaces, set
$$k = \frac{1}{(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}}\exp\left(-\frac{1}{2}[x - \mu]^T\Sigma^{-1}[x - \mu]\right)$$
$$k(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}} = \exp\left(-\frac{1}{2}[x - \mu]^T\Sigma^{-1}[x - \mu]\right)$$
Taking the log on both sides, we get
$$\ln\left(k(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}\right) = -\frac{1}{2}[x - \mu]^T\Sigma^{-1}[x - \mu] \quad (23)$$
$$-2\ln\left(k(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}\right) = [x - \mu]^T\Sigma^{-1}[x - \mu] \quad (24)$$
The LHS of Equation 24 is a constant; furthermore, let it be equal to $k_1$:
$$k_1 = [x - \mu]^T\Sigma^{-1}[x - \mu] \quad (25)$$
Equation 25 is the equation of a general hyper-ellipsoid in n-dimensional space, given that $\Sigma$ is a positive definite matrix. This can be demonstrated further using the eigendecomposition of the positive definite matrix $\Sigma^{-1} = VDV^T$, where $D$ is the diagonal matrix containing the eigenvalues $\lambda_i$ and $V$ is the orthonormal matrix containing the eigenvectors in its columns; the diagonal entries of $D$ are positive, and $V$ has the form of a rotation matrix. Let $Y$ be a new vector defined as $Y = V^T[x - \mu]$. Substituting this in Equation 25,
$$k_1 = Y^T DY \quad (26)$$
$$1 = \sum_{i=1}^{n}\frac{\lambda_i}{k_1}\,y_i^2 \quad (27)$$
Since each eigenvalue of a positive definite symmetric matrix is greater than zero, Equation 27 is a hyper-ellipsoid in n-dimensional space that is rotated with respect to the original space; hence it is a hyper-ellipsoid in the original space as well.
For the current case, after substituting $\Sigma = P$ and $\mu = 0$ in Equation 25, we get
$$k_1 = \frac{1}{15}\begin{bmatrix} x & y \end{bmatrix}\begin{bmatrix} 4 & -1 \\ -1 & 4 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} \quad (28)$$
$$15k_1 = 4x^2 - 2xy + 4y^2 \quad (29)$$
Equation 29 can be compared with the equation of a general ellipse centered at $(x, y) = (0, 0)$, $Ax^2 + Bxy + Cy^2$, to get the angle of inclination with the x-axis as $\tan 2\theta = \frac{B}{C - A}$. In the current case the angle with the x-axis is $\theta = 45°$. Applying the rotation and writing the ellipse equation in rotated coordinates $(x_1, y_1)$, we get
$$x = x_1\cos\theta - y_1\sin\theta = \frac{x_1}{\sqrt{2}} - \frac{y_1}{\sqrt{2}}, \quad y = x_1\sin\theta + y_1\cos\theta = \frac{x_1}{\sqrt{2}} + \frac{y_1}{\sqrt{2}}$$
$$15k_1 = 3x_1^2 + 5y_1^2$$
The equation of the ellipse in $(x_1, y_1)$ coordinates can be written in parametric form as $(x_1, y_1) = \left(\sqrt{5k_1}\cos\psi,\ \sqrt{3k_1}\sin\psi\right)$ as $\psi$ is varied from 0 to $2\pi$. Listing 1 shows the MATLAB script that was used to generate Figure 1.
[Figure 1: Equal probability density contours for the 2-D Gaussian pdf, plotted in the X-Y plane for p(x) = 0.001, 0.005, and 0.01.]
Listing 1: MATLAB code used to generate Figure 1

% Drawing a hyperellipsoid using equal probability contours of the density
clc; clear all; close all;
% Equal probability density contour values
k = [0.001 0.005 0.01];
types = {'-rs','--bo',':k+'};
detE = 15;                               % determinant of the covariance matrix
for i = 1:length(k)
    k1 = -2*log(k(i)*(2*pi)*(detE^0.5))*detE;
    % Get the data points to plot an ellipse (721 samples of psi)
    data = zeros(721,2);
    counter = 0;
    for psi = 0:pi/360:2*pi
        x1 = sqrt(k1/3)*sin(psi);
        y1 = sqrt(k1/5)*cos(psi);
        counter = counter + 1;
        data(counter,:) = [(x1-y1)/sqrt(2), (x1+y1)/sqrt(2)];
    end
    % Plot the data
    plot(data(:,1), data(:,2), types{i});
    hold on;
end
xlabel('X axis \rightarrow');
ylabel('Y axis \rightarrow');
title('Equal Probability Density Contours for Gaussian Pdf');
legend('p(x) = 0.001','p(x) = 0.005','p(x) = 0.01');
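As an added sanity check (not part of the original solution), the points produced by the script can be fed back into the Gaussian density: if the derivation is right, every point on a curve returns the same density value. The snippet below assumes it is run immediately after Listing 1, so that data (holding the last curve, for p(x) = 0.01) and k are still in the workspace.

% Verify that the last generated curve is an equal-density contour
Sigma = [4 1; 1 4];                      % given covariance matrix
invS = inv(Sigma);
pvals = zeros(size(data,1),1);
for j = 1:size(data,1)
    v = data(j,:)';                      % point on the contour (zero mean)
    pvals(j) = exp(-0.5*(v'*invS*v)) / (2*pi*sqrt(det(Sigma)));
end
fprintf('min p = %.6f, max p = %.6f (target %.6f)\n', min(pvals), max(pvals), k(end));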
Problem 4
Consider the following linear model:
$$\tilde{y} = Hx + \epsilon \quad (30)$$
where $\tilde{y} \in \mathbb{R}^m$, $x \in \mathbb{R}^n$, and $\epsilon$ is a Gaussian white noise vector with zero mean and covariance $R$. Given that $\hat{x} = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}\tilde{y}$, derive the relations for the following:
(a) $E[\hat{x}]$

Solution 4(a): Using the given expression for $\hat{x}$, we get
$$E[\hat{x}] = E\left[\left(H^T R^{-1}H\right)^{-1}H^T R^{-1}\tilde{y}\right] = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}E[\tilde{y}] \quad (\because E[cx] = cE[x],\ \text{where } c \text{ is a constant})$$
Substituting the expression for $\tilde{y}$ from the given linear model (Eqn. 30),
$$E[\hat{x}] = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}E[Hx + \epsilon] = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}\left(HE[x] + E[\epsilon]\right)$$
$$= \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}HE[x] \quad (\because \text{noise is zero mean, so } E[\epsilon] = 0)$$
$$= \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}Hx \quad (\because x \text{ is the true value, which is treated as a constant, so } E[x] = x)$$
$$E[\hat{x}] = x \quad \left(\because \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}H = I\right) \quad (31)$$
(b) $E\left[(x - \hat{x})(x - \hat{x})^T\right]$

Solution 4(b):
$$E\left[(x - \hat{x})(x - \hat{x})^T\right] = E\left[(x - \hat{x})\left(x^T - \hat{x}^T\right)\right] = E\left[xx^T - x\hat{x}^T - \hat{x}x^T + \hat{x}\hat{x}^T\right]$$
$$= E\left[xx^T\right] - E\left[x\hat{x}^T\right] - E\left[\hat{x}x^T\right] + E\left[\hat{x}\hat{x}^T\right]$$
$$= xx^T - xE\left[\hat{x}^T\right] - E[\hat{x}]x^T + E\left[\hat{x}\hat{x}^T\right] \quad (32)$$
($\because$ $x$ is the true value, which is treated as a constant, so $E[x] = x$; also $E\left[\hat{x}x^T\right] = E[\hat{x}]x^T$ and $E\left[x\hat{x}^T\right] = xE\left[\hat{x}^T\right]$)
Now, from Eqns. 31 and 32, we get
$$E\left[(x - \hat{x})(x - \hat{x})^T\right] = xx^T - xx^T - xx^T + E\left[\hat{x}\hat{x}^T\right]$$
$$E\left[(x - \hat{x})(x - \hat{x})^T\right] = -xx^T + E\left[\hat{x}\hat{x}^T\right] \quad (33)$$
Now, let us consider $E\left[\hat{x}\hat{x}^T\right]$ and use the expression given for $\hat{x}$:
$$E\left[\hat{x}\hat{x}^T\right] = E\left[\left(\left(H^T R^{-1}H\right)^{-1}H^T R^{-1}\tilde{y}\right)\left(\left(H^T R^{-1}H\right)^{-1}H^T R^{-1}\tilde{y}\right)^T\right] = E\left[\left(H^T R^{-1}H\right)^{-1}H^T R^{-1}\tilde{y}\tilde{y}^T R^{-T}H\left(H^T R^{-1}H\right)^{-T}\right]$$
$$E\left[\hat{x}\hat{x}^T\right] = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}E\left[\tilde{y}\tilde{y}^T\right]R^{-T}H\left(H^T R^{-1}H\right)^{-T} \quad (34)$$
Now, let us consider $E\left[\tilde{y}\tilde{y}^T\right]$ and use the linear model (Eqn. 30):
$$E\left[\tilde{y}\tilde{y}^T\right] = E\left[(Hx + \epsilon)(Hx + \epsilon)^T\right] = E\left[Hxx^T H^T + Hx\epsilon^T + \epsilon x^T H^T + \epsilon\epsilon^T\right] = Hxx^T H^T + HxE\left[\epsilon^T\right] + E[\epsilon]x^T H^T + E\left[\epsilon\epsilon^T\right]$$
($\because$ $x$ is the true value, which is treated as a constant, so $E\left[Hxx^T H^T\right] = Hxx^T H^T$, $E\left[Hx\epsilon^T\right] = HxE\left[\epsilon^T\right]$, and $E\left[\epsilon x^T H^T\right] = E[\epsilon]x^T H^T$)
$$E\left[\tilde{y}\tilde{y}^T\right] = Hxx^T H^T + R \quad (\because \text{noise is zero mean, so } E[\epsilon] = 0,\ \text{and } E\left[\epsilon\epsilon^T\right] = R) \quad (35)$$
Now, from Eqns. 34 and 35, we get
$$E\left[\hat{x}\hat{x}^T\right] = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}\left(Hxx^T H^T + R\right)R^{-T}H\left(H^T R^{-1}H\right)^{-T}$$
$$= \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}Hxx^T H^T R^{-T}H\left(H^T R^{-1}H\right)^{-T} + \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}RR^{-T}H\left(H^T R^{-1}H\right)^{-T}$$
$$= xx^T + \left(H^T R^{-1}H\right)^{-1}H^T R^{-T}H\left(H^T R^{-1}H\right)^{-T} \quad \left(\because \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}H = I\right)$$
$$= xx^T + \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}H\left(H^T R^{-1}H\right)^{-1} \quad \left(\because R^{-T} = R^{-1}\right)$$
$$E\left[\hat{x}\hat{x}^T\right] = xx^T + \left(H^T R^{-1}H\right)^{-1} \quad (36)$$
Now, from Eqns. 33 and 36, we get
$$E\left[(x - \hat{x})(x - \hat{x})^T\right] = -xx^T + xx^T + \left(H^T R^{-1}H\right)^{-1}$$
$$E\left[(x - \hat{x})(x - \hat{x})^T\right] = \left(H^T R^{-1}H\right)^{-1} \quad (37)$$
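Results 31 and 37 can be illustrated with a small Monte Carlo experiment (an added sketch; the particular H, R, and x below are arbitrary assumptions, not given in the problem):

% Monte Carlo check: E[xhat] = x and cov(xhat - x) = inv(H'*inv(R)*H)
rng(0);
H = [1 0; 0 1; 1 1];                     % assumed 3x2 measurement matrix
R = diag([0.5 1.0 2.0]);                 % assumed noise covariance
x = [2; -1];                             % assumed true state
M = (H'/R*H)\(H'/R);                     % estimator gain (H' R^-1 H)^-1 H' R^-1
Ns = 1e5; err = zeros(2, Ns);
L = chol(R, 'lower');
for i = 1:Ns
    y = H*x + L*randn(3,1);              % simulated measurement
    err(:,i) = M*y - x;                  % estimation error xhat - x
end
disp('mean error (should be near zero):'); disp(mean(err,2)');
disp('sample covariance and (H''R^{-1}H)^{-1}:');
disp(cov(err')); disp(inv(H'/R*H));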
Problem 4 continued on next page. . . Page 13 of 23
Priyanshu Agarwal, Suren Kumar MAE 600 : Homework 1 Problem 4 [(c) E
_
( y y) ( y y)
T
_
]
(c) $E\left[(\tilde{y} - \hat{y})(\tilde{y} - \hat{y})^T\right]$

Solution 4(c): Here $\hat{y} = H\hat{x}$.
$$E\left[(\tilde{y} - \hat{y})(\tilde{y} - \hat{y})^T\right] = E\left[(\tilde{y} - \hat{y})\left(\tilde{y}^T - \hat{y}^T\right)\right] = E\left[\tilde{y}\tilde{y}^T - \tilde{y}\hat{y}^T - \hat{y}\tilde{y}^T + \hat{y}\hat{y}^T\right]$$
$$= E\left[\tilde{y}\tilde{y}^T\right] - E\left[\tilde{y}\hat{y}^T\right] - E\left[\hat{y}\tilde{y}^T\right] + E\left[\hat{y}\hat{y}^T\right] \quad (38)$$
Now, from Eqns. 35 and 38, the given linear model (Eqn. 30), and $\hat{y} = H\hat{x}$, we get
$$E\left[(\tilde{y} - \hat{y})(\tilde{y} - \hat{y})^T\right] = E\left[\tilde{y}\tilde{y}^T\right] - E\left[\tilde{y}\hat{x}^T H^T\right] - E\left[H\hat{x}\tilde{y}^T\right] + E\left[H\hat{x}\hat{x}^T H^T\right]$$
$$= Hxx^T H^T + R - E\left[(Hx + \epsilon)\hat{x}^T\right]H^T - HE\left[\hat{x}(Hx + \epsilon)^T\right] + HE\left[\hat{x}\hat{x}^T\right]H^T$$
$$= Hxx^T H^T + R - HxE\left[\hat{x}^T\right]H^T - E\left[\epsilon\hat{x}^T\right]H^T - HE[\hat{x}]x^T H^T - HE\left[\hat{x}\epsilon^T\right] + HE\left[\hat{x}\hat{x}^T\right]H^T \quad (39)$$
($\because$ $x$ is the true value, which is treated as a constant)
Now, from Eqns. 31 and 36, Eqn. 39, and the given expression for $\hat{x}$ (which gives $E\left[\epsilon\hat{x}^T\right] = E\left[\epsilon\tilde{y}^T\right]R^{-T}H\left(H^T R^{-1}H\right)^{-T}$ and $E\left[\hat{x}\epsilon^T\right] = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}E\left[\tilde{y}\epsilon^T\right]$), we get
$$E\left[(\tilde{y} - \hat{y})(\tilde{y} - \hat{y})^T\right] = Hxx^T H^T + R - Hxx^T H^T - E\left[\epsilon\tilde{y}^T\right]R^{-T}H\left(H^T R^{-1}H\right)^{-T}H^T - Hxx^T H^T$$
$$\qquad - H\left(H^T R^{-1}H\right)^{-1}H^T R^{-1}E\left[\tilde{y}\epsilon^T\right] + Hxx^T H^T + H\left(H^T R^{-1}H\right)^{-1}H^T \quad (40)$$
Now, again using the given linear model (Eqn. 30), $E\left[\epsilon\tilde{y}^T\right] = E\left[\epsilon(Hx + \epsilon)^T\right] = E[\epsilon]x^T H^T + E\left[\epsilon\epsilon^T\right] = R$, and likewise $E\left[\tilde{y}\epsilon^T\right] = R$ ($\because$ noise is zero mean, so $E[\epsilon] = 0$). Substituting in Eqn. 40, the $Hxx^T H^T$ terms cancel and we get
$$E\left[(\tilde{y} - \hat{y})(\tilde{y} - \hat{y})^T\right] = R - RR^{-T}H\left(H^T R^{-1}H\right)^{-T}H^T - H\left(H^T R^{-1}H\right)^{-1}H^T R^{-1}R + H\left(H^T R^{-1}H\right)^{-1}H^T$$
$$= R - H\left(H^T R^{-1}H\right)^{-T}H^T - H\left(H^T R^{-1}H\right)^{-1}H^T + H\left(H^T R^{-1}H\right)^{-1}H^T$$
$$= R - H\left(H^T R^{-1}H\right)^{-1}H^T \quad \left(\because R^{-T} = R^{-1}\right)$$
Hence
$$E\left[(\tilde{y} - \hat{y})(\tilde{y} - \hat{y})^T\right] = R - H\left(H^T R^{-1}H\right)^{-1}H^T$$
(In the special case where $H$ is square and invertible, $H\left(H^T R^{-1}H\right)^{-1}H^T = HH^{-1}RH^{-T}H^T = R$, and the residual covariance reduces to 0.)
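The residual covariance result can be checked with the same kind of simulation used for parts (a) and (b) (again an added sketch with the same assumed H, R, and x):

% Monte Carlo check: E[(y - yhat)(y - yhat)'] = R - H*inv(H'*inv(R)*H)*H'
rng(1);
H = [1 0; 0 1; 1 1]; R = diag([0.5 1.0 2.0]); x = [2; -1];   % assumed values
M = (H'/R*H)\(H'/R);
Ns = 1e5; res = zeros(3, Ns);
L = chol(R, 'lower');
for i = 1:Ns
    y = H*x + L*randn(3,1);
    res(:,i) = y - H*(M*y);              % measurement residual y - yhat
end
disp('sample residual covariance:'); disp(cov(res'));
disp('R - H*(H''R^{-1}H)^{-1}*H'':'); disp(R - H*((H'/R*H)\H'));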
Problem 5
Let us consider the problem of choosing an appropriate radar system for precise position tracking of an airplane. You are supposed to estimate the range of an airplane, which must have an error variance of less than or equal to $50\,\text{m}^2$, using two radar stations' measurements:
$$\tilde{y}_1 = r + \epsilon_1$$
$$\tilde{y}_2 = r + \epsilon_2$$
where $r$ is the true range of the aircraft. It is given that one radar station has been built such that $\epsilon_1$ is zero-mean Gaussian noise with a variance of $100\,\text{m}^2$. The other radar station has not yet been built, and $\epsilon_2$ can be assumed to be zero-mean Gaussian noise with a variance of $R\,\text{m}^2$, where $R$ is a design parameter. Since accurate radar stations cost money, your objective is to find the maximum value of $R$ that is acceptable. Furthermore, assume that a priori knowledge of $r$ indicates that it is well modeled as zero-mean Gaussian with a variance of $1\,\text{km}^2$. Once again, find the maximum value of $R$ that is acceptable and discuss how the a priori knowledge affects your solution for $R$. With that $R$, find the equations for an estimator that incorporates both $\tilde{y}_1$ and $\tilde{y}_2$ simultaneously. Also find the equations for an estimator that incorporates $\tilde{y}_1$ and $\tilde{y}_2$ recursively. Show that these are the same estimates with the same variance.
Solution 5: Consider a combined model for the two observations:
$$\begin{bmatrix} \tilde{y}_1 \\ \tilde{y}_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} r + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \end{bmatrix} \quad (41)$$
$$Y = Hr + \epsilon, \quad x = r \quad (42)$$
For this part of the problem there is no prior information available, and hence we choose the maximum likelihood solution as the best estimate, which is given by $\hat{x} = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}Y$. Here $R$ is the covariance matrix of the measurement noise, given by $E\left(\epsilon\epsilon^T\right)$ because $\epsilon$ has zero mean. Also, the two observations and their errors are uncorrelated. Hence $R$ can be determined as
$$R = \begin{bmatrix} E\left(\epsilon_1\epsilon_1^T\right) & 0 \\ 0 & E\left(\epsilon_2\epsilon_2^T\right) \end{bmatrix} = \begin{bmatrix} 100 & 0 \\ 0 & R \end{bmatrix}$$
(with a slight abuse of notation, the scalar design parameter $R$ appears as the (2,2) entry of the noise covariance matrix).
The error covariance (which is a scalar variance in the current case because there is only one true quantity, the range) is given by
$$E\left((x - \hat{x})(x - \hat{x})^T\right) = E\left(\left(x - \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}Y\right)\left(x - \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}Y\right)^T\right)$$
Substituting $Y$ from Equation 42, and noting that $\left(H^T R^{-1}H\right)^{-1}\left(H^T R^{-1}H\right) = I$ and $E\left(\epsilon\epsilon^T\right) = R$, we get
$$E\left((x - \hat{x})(x - \hat{x})^T\right) = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}E\left(\epsilon\epsilon^T\right)R^{-T}H\left(H^T R^{-1}H\right)^{-T} = \left(H^T R^{-1}H\right)^{-1}$$
$$E\left((x - \hat{x})(x - \hat{x})^T\right) = \left(\begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} \frac{1}{100} & 0 \\ 0 & \frac{1}{R} \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}\right)^{-1} = \left(\frac{1}{100} + \frac{1}{R}\right)^{-1}$$
As a design requirement, we are given that the error variance should be less than or equal to $50\,\text{m}^2$:
$$\left(\frac{1}{100} + \frac{1}{R}\right)^{-1} \leq 50 \implies \frac{1}{100} + \frac{1}{R} \geq \frac{1}{50} \implies \frac{1}{R} \geq \frac{1}{100} \implies R \leq 100$$
Hence the maximum acceptable value of $R$ is $100\,\text{m}^2$.
In the case of a priori information, we can use the maximum a posteriori solution, in which case the estimate is given by $\hat{x} = \left(H^T R^{-1}H + Q^{-1}\right)^{-1}\left(H^T R^{-1}Y + Q^{-1}\hat{x}_a\right)$. In the current case, the prior is given as zero mean ($\hat{x}_a = 0$) with a variance of $Q = 1\,\text{km}^2 = 10^6\,\text{m}^2$. The error covariance (again a scalar variance here) is given by
$$E\left((x - \hat{x})(x - \hat{x})^T\right) = E\left(\left(x - \left(H^T R^{-1}H + Q^{-1}\right)^{-1}H^T R^{-1}Y\right)\left(x - \left(H^T R^{-1}H + Q^{-1}\right)^{-1}H^T R^{-1}Y\right)^T\right)$$
The estimation error can be written as
$$x - \hat{x} = x - \left(H^T R^{-1}H + Q^{-1}\right)^{-1}\left(H^T R^{-1}Hx + H^T R^{-1}\epsilon\right)$$
$$= x - \left(H^T R^{-1}H + Q^{-1}\right)^{-1}\left(\left(H^T R^{-1}H + Q^{-1}\right)x - Q^{-1}x + H^T R^{-1}\epsilon\right)$$
$$= \left(H^T R^{-1}H + Q^{-1}\right)^{-1}Q^{-1}x - \left(H^T R^{-1}H + Q^{-1}\right)^{-1}H^T R^{-1}\epsilon$$
Noting that $x$ and $\epsilon$ are uncorrelated, we can simplify this expression further by using the linearity of the expectation operator:
$$E\left((x - \hat{x})(x - \hat{x})^T\right) = \left(H^T R^{-1}H + Q^{-1}\right)^{-1}E\left(Q^{-1}xx^T Q^{-T} + H^T R^{-1}\epsilon\epsilon^T R^{-T}H\right)\left(H^T R^{-1}H + Q^{-1}\right)^{-T}$$
Noting that $E\left(xx^T\right) = Q$ (as given by the prior with zero mean and covariance $Q$) and $E\left(\epsilon\epsilon^T\right) = R$ (by definition), we can further simplify this expression as
$$E\left((x - \hat{x})(x - \hat{x})^T\right) = \left(H^T R^{-1}H + Q^{-1}\right)^{-1}\left(Q^{-T} + H^T R^{-T}H\right)\left(H^T R^{-1}H + Q^{-1}\right)^{-T} = \left(H^T R^{-1}H + Q^{-1}\right)^{-1}$$
$$= \left(\begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} \frac{1}{100} & 0 \\ 0 & \frac{1}{R} \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} + \frac{1}{10^6}\right)^{-1}$$
As a design requirement, we are given that the final error variance should be less than or equal to $50\,\text{m}^2$:
$$\left(\frac{1}{100} + \frac{1}{R} + \frac{1}{10^6}\right)^{-1} \leq 50 \implies \frac{1}{100} + \frac{1}{R} + \frac{1}{10^6} \geq \frac{1}{50} \implies \frac{1}{R} \geq \frac{9999}{10^6} \implies R \leq 100.010001$$
Hence the maximum acceptable value of $R$ is $100.01\,\text{m}^2$. We can see from the results that even when the prior has a comparatively high variance, including it reduces the error variance, and this allows us to choose a greater design value of $R$. So unless the variance of the prior information is $\infty$, it always improves the covariance of the estimate. In this case, the prior information allowed us to choose a bigger noise variance for one of our observations while achieving the same error variance.
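The two design bounds can be reproduced numerically (a small added check; the closed forms were already derived above):

% Maximum acceptable R without and with the prior (target variance 50 m^2)
target = 50; sigma1sq = 100; Q = 1e6;        % variances in m^2
Rmax_ml  = 1/(1/target - 1/sigma1sq);        % no prior: 100.0000
Rmax_map = 1/(1/target - 1/sigma1sq - 1/Q);  % with 1 km^2 prior: 100.0100
fprintf('Rmax (ML) = %.4f m^2, Rmax (MAP) = %.4f m^2\n', Rmax_ml, Rmax_map);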
To design an estimator incorporating $\tilde{y}_1, \tilde{y}_2$ simultaneously, let us assume the form of the estimator as $\hat{x} = MY + n$. For this estimator to be unbiased ($E(\hat{x}) = x$), we get the constraints $MH = I$, $n = 0$. Designing the estimator to have minimum variance, and using
$$E\left(\epsilon\epsilon^T\right) = R, \quad E\left(x\epsilon^T\right) = E\left(\epsilon x^T\right) = 0 \quad (x,\ \epsilon \text{ are uncorrelated}),$$
we get
$$E\left(\hat{x}\hat{x}^T\right) = MHE\left(xx^T\right)H^T M^T + MRM^T = E\left(xx^T\right) + MRM^T$$
$$E\left((\hat{x} - x)(\hat{x} - x)^T\right) = E\left(\hat{x}\hat{x}^T\right) - E\left(xx^T\right) = MRM^T$$
This is a constrained optimization problem, which can be solved using Lagrange multipliers. The loss function can be formed as
$$J = \frac{1}{2}\mathrm{Tr}\left(MRM^T\right) + \mathrm{Tr}\left(\Lambda(I - MH)\right)$$
$$\nabla_M J = MR - \Lambda^T H^T = 0 \implies M = \Lambda^T H^T R^{-1}$$
$$\nabla_\Lambda J = I - MH = 0 \implies \Lambda^T = \left(H^T R^{-1}H\right)^{-1}$$
$$M = \left(H^T R^{-1}H\right)^{-1}H^T R^{-1}$$
The error variance of this estimator is given by $E\left((\hat{x} - x)(\hat{x} - x)^T\right) = MRM^T = \left(H^T R^{-1}H\right)^{-1}$. Substituting values for the current case,
$$H = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad R = \begin{bmatrix} 100 & 0 \\ 0 & R \end{bmatrix}$$
$$E\left((\hat{x} - x)(\hat{x} - x)^T\right) = \left(H^T R^{-1}H\right)^{-1} = \left(\frac{1}{R} + \frac{1}{100}\right)^{-1} = 50\,\text{m}^2 \quad (\text{for } R = 100)$$
Hence the error variance of the simultaneous (batch) estimator is $50\,\text{m}^2$.
To design an estimator that incorporates $\tilde{y}_1, \tilde{y}_2$ sequentially, consider the information given by the first observation as the prior for the second observation. The form of our system is now $Y = r + \epsilon_2$, with the $H$ matrix equal to 1 and the $R$ matrix equal to $E\left(\epsilon_2\epsilon_2^T\right)$. An a priori estimate of the true state $x$ is available as $\hat{x}_a = x + w$, and the associated a priori error covariance matrix is given by $E\left(ww^T\right) = Q$. Since $\epsilon_1$ in the first observation equation has zero mean, $w$ also has zero mean. Also, $\epsilon_1$ and $\epsilon_2$ are uncorrelated, which implies that $E\left(w\epsilon_2^T\right) = 0$.

Consider the form of the estimator as $\hat{x} = MY + N\hat{x}_a + n$. For a perfect observation ($\epsilon_2 = 0$) our observation model is $Y = Hx$, and if we also assume a perfect a priori estimate ($\hat{x}_a = x$, $w = 0$), our estimator should give us perfect results. Using these conditions, we get
$$x = MHx + Nx + n = (MH + N)x + n$$
This gives us two constraints, $MH + N = I$ and $n = 0$. Thus our desired estimator has the form $\hat{x} = MY + N\hat{x}_a$. The error in the estimate is given by
$$\hat{x} - x = MY + N\hat{x}_a - x = MHx + M\epsilon + Nx + Nw - x = (MH + N)x + M\epsilon + Nw - x = M\epsilon + Nw$$
Our aim is to minimize the variance of the error, given by
$$E\left((\hat{x} - x)(\hat{x} - x)^T\right) = E\left((M\epsilon + Nw)(M\epsilon + Nw)^T\right) = ME\left(\epsilon\epsilon^T\right)M^T + NE\left(ww^T\right)N^T = MRM^T + NQN^T$$
(where $\epsilon$ here denotes $\epsilon_2$, and the cross terms vanish because $E\left(w\epsilon^T\right) = 0$).
The variance of the error needs to be minimized, and due to the presence of constraints the final problem can be formed as a Lagrange-multiplier-based minimization. The loss function can be formulated as
$$J = \frac{1}{2}\mathrm{Tr}\left(MRM^T + NQN^T\right) + \mathrm{Tr}\left(\Lambda(I - MH - N)\right)$$
$$\nabla_M J = MR - \Lambda^T H^T = 0 \implies M = \Lambda^T H^T R^{-1}$$
$$\nabla_N J = NQ - \Lambda^T = 0 \implies N = \Lambda^T Q^{-1}$$
$$\nabla_\Lambda J = I - MH - N = 0 \implies \Lambda^T = \left(H^T R^{-1}H + Q^{-1}\right)^{-1}$$
$$M = \left(H^T R^{-1}H + Q^{-1}\right)^{-1}H^T R^{-1}, \quad N = \left(H^T R^{-1}H + Q^{-1}\right)^{-1}Q^{-1}$$
The error variance estimate is given by $E\left((\hat{x} - x)(\hat{x} - x)^T\right) = MRM^T + NQN^T$. Substituting values for the current case, where $H = 1$, $R = R$, and $Q = 100\,\text{m}^2$, we get
$$M = \left(\frac{1}{R} + \frac{1}{100}\right)^{-1}\frac{1}{R}, \quad N = \left(\frac{1}{R} + \frac{1}{100}\right)^{-1}\frac{1}{100}$$
$$MRM^T + NQN^T = \left(\frac{1}{R} + \frac{1}{100}\right)^{-2}\left(\frac{1}{R} + \frac{1}{100}\right) = \left(\frac{1}{R} + \frac{1}{100}\right)^{-1}$$
With $R = 100$,
$$E\left((\hat{x} - x)(\hat{x} - x)^T\right) = 50\,\text{m}^2$$
This implies that the error variances of the sequential and combined estimators are the same.
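This equivalence can also be seen numerically (an added sketch; the measurement values below are arbitrary assumptions):

% Batch vs. sequential estimate for R = 100 (both should agree)
R2 = 100; y1 = 1012.3; y2 = 995.7;           % assumed measurements (m)
Hb = [1; 1]; Rc = diag([100 R2]);            % batch model
xb = (Hb'/Rc*Hb)\(Hb'/Rc*[y1; y2]);          % batch estimate
Pb = inv(Hb'/Rc*Hb);                         % batch variance
M = 1/(1/R2 + 1/100)*(1/R2);                 % sequential gains
N = 1/(1/R2 + 1/100)*(1/100);
xs = M*y2 + N*y1;                            % first observation acts as the prior
Ps = M^2*R2 + N^2*100;                       % sequential variance
fprintf('batch: %.4f m (var %.1f m^2), sequential: %.4f m (var %.1f m^2)\n', ...
        xb, Pb, xs, Ps);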
Problem 6
Let us consider the problem of estimation of a constant vector $x \in \mathbb{R}^n$ based upon $m$ measurements given by the following model:
$$y = Hx + \epsilon \quad (43)$$
Let $x$ and $\epsilon$ be modeled as jointly Gaussian, zero-mean vectors with
$$E\left[xx^T\right] = X, \quad E\left[\epsilon\epsilon^T\right] = R, \quad E\left[x\epsilon^T\right] = S \quad (44)$$
Your task is to find the conditional mean $E[x|y]$ and the corresponding conditional error covariance.
Solution 6: $x$ and $\epsilon$ are given to be jointly normal with zero mean. Since any linear combination of jointly Gaussian random variables is also jointly Gaussian, $x$ and $y$ are also jointly Gaussian. The mean of $y$, using the given model (Eqn. 43), is given by
$$E[y] = E[Hx + \epsilon] = HE[x] + E[\epsilon] = 0 \quad (\because E[x] = 0,\ E[\epsilon] = 0)$$
So, the joint distribution of $x$ and $y$ can be written as
$$p(x, y) = \frac{1}{(2\pi)^{\frac{m+n}{2}}\det(P)^{\frac{1}{2}}}\, e^{-\frac{1}{2}\begin{bmatrix} x^T & y^T \end{bmatrix}P^{-1}\begin{bmatrix} x \\ y \end{bmatrix}} \quad (45)$$
where
$$P = \begin{bmatrix} E\left[xx^T\right] & E\left[xy^T\right] \\ E\left[yx^T\right] & E\left[yy^T\right] \end{bmatrix}, \quad E\left[xy^T\right] = E\left[yx^T\right]^T$$
Now, let us evaluate $E\left[xy^T\right]$ using the given model (Eqn. 43):
$$E\left[xy^T\right] = E\left[x(Hx + \epsilon)^T\right] = E\left[xx^T H^T + x\epsilon^T\right] = E\left[xx^T\right]H^T + E\left[x\epsilon^T\right] = XH^T + S \quad (46)$$
Now, evaluate $E\left[yy^T\right]$, again using the given model (Eqn. 43):
$$E\left[yy^T\right] = E\left[(Hx + \epsilon)(Hx + \epsilon)^T\right] = E\left[Hxx^T H^T + Hx\epsilon^T + \epsilon x^T H^T + \epsilon\epsilon^T\right]$$
$$= HE\left[xx^T\right]H^T + HE\left[x\epsilon^T\right] + E\left[\epsilon x^T\right]H^T + E\left[\epsilon\epsilon^T\right] = HXH^T + HS + S^T H^T + R \quad (47)$$
So, $P$ is given by
$$P = \begin{bmatrix} X & XH^T + S \\ \left(XH^T + S\right)^T & HXH^T + HS + S^T H^T + R \end{bmatrix}$$
Now, let $P$ be written in block form as
$$P = \begin{bmatrix} P_x & P_{xy} \\ P_{xy}^T & P_y \end{bmatrix}$$
So, $P^{-1}$, using the block matrix inverse (from the Matrix Cookbook), can be written as
$$P^{-1} = \begin{bmatrix} \left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{-1} & -\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{-1}P_{xy}P_y^{-1} \\ -P_y^{-1}P_{xy}^T\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{-1} & P_y^{-1} + P_y^{-1}P_{xy}^T\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{-1}P_{xy}P_y^{-1} \end{bmatrix}$$
Now, expanding the quadratic form with the shorthand $\Delta \equiv \left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{-1}$,
$$\begin{bmatrix} x^T & y^T \end{bmatrix}P^{-1}\begin{bmatrix} x \\ y \end{bmatrix} = x^T\Delta x - x^T\Delta P_{xy}P_y^{-1}y - y^T P_y^{-1}P_{xy}^T\Delta x + y^T\left(P_y^{-1} + P_y^{-1}P_{xy}^T\Delta P_{xy}P_y^{-1}\right)y$$
$$= y^T P_y^{-1}y + x^T\Delta\left(x - P_{xy}P_y^{-1}y\right) - y^T P_y^{-1}P_{xy}^T\Delta\left(x - P_{xy}P_y^{-1}y\right)$$
$$= y^T P_y^{-1}y + \left(x - P_{xy}P_y^{-1}y\right)^T\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{-1}\left(x - P_{xy}P_y^{-1}y\right)$$
So, using
$$\det(P) = \det(P_y)\det\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right) \quad (48)$$
the joint distribution of $x$ and $y$ becomes
$$p(x, y) = \frac{1}{(2\pi)^{\frac{m+n}{2}}\det(P_y)^{\frac{1}{2}}\det\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{\frac{1}{2}}}\, e^{-\frac{1}{2}\left[y^T P_y^{-1}y + \left(x - P_{xy}P_y^{-1}y\right)^T\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{-1}\left(x - P_{xy}P_y^{-1}y\right)\right]} \quad (49)$$
Now, since the marginal $p(y)$ is the $m$-dimensional Gaussian density with zero mean and covariance $P_y$,
$$p(x|y) = \frac{p(x, y)}{p(y)} = \frac{1}{(2\pi)^{\frac{n}{2}}\det\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{\frac{1}{2}}}\, e^{-\frac{1}{2}\left(x - P_{xy}P_y^{-1}y\right)^T\left(P_x - P_{xy}P_y^{-1}P_{xy}^T\right)^{-1}\left(x - P_{xy}P_y^{-1}y\right)} \quad (50)$$
Hence, the mean and covariance of the conditional distribution are given by
$$E[x|y] = P_{xy}P_y^{-1}y$$
$$E\left[\left(x - E[x|y]\right)\left(x - E[x|y]\right)^T \,\middle|\, y\right] = P_x - P_{xy}P_y^{-1}P_{xy}^T$$
where
$$P_x = X, \quad P_{xy} = XH^T + S, \quad P_y = HXH^T + HS + S^T H^T + R$$
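A Monte Carlo sketch (an addition; the scalar X, R, S, and H values below are assumed examples, not given in the problem) confirms the conditional mean and covariance by regressing x on y:

% Monte Carlo check of E[x|y] = Pxy*inv(Py)*y for a scalar example (n = m = 1)
rng(2);
X = 2; R = 1; S = 0.5; H = 1.5;              % assumed moments and model
C = [X S; S R];                              % joint covariance of [x; eps]
Lc = chol(C, 'lower');
Ns = 2e5; z = Lc*randn(2, Ns);               % jointly Gaussian samples
x = z(1,:); y = H*x + z(2,:);
Pxy = X*H' + S; Py = H*X*H' + H*S + S'*H' + R;
b = (x*y')/(y*y');                           % least-squares slope of x on y
resid = x - (Pxy/Py)*y;                      % conditional residual
fprintf('slope: sample %.4f vs Pxy/Py %.4f\n', b, Pxy/Py);
fprintf('cond. var: sample %.4f vs Px - Pxy^2/Py %.4f\n', var(resid), X - Pxy^2/Py);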