For a scalar parameter we looked at its variance, but for a vector parameter we look at its covariance matrix:

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}} = E\left\{\left(\hat{\boldsymbol{\theta}} - E[\hat{\boldsymbol{\theta}}]\right)\left(\hat{\boldsymbol{\theta}} - E[\hat{\boldsymbol{\theta}}]\right)^T\right\}$$

For example, for $\boldsymbol{\theta} = [x\ y\ z]^T$:

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}} = \begin{bmatrix} \operatorname{var}(\hat{x}) & \operatorname{cov}(\hat{x},\hat{y}) & \operatorname{cov}(\hat{x},\hat{z}) \\ \operatorname{cov}(\hat{y},\hat{x}) & \operatorname{var}(\hat{y}) & \operatorname{cov}(\hat{y},\hat{z}) \\ \operatorname{cov}(\hat{z},\hat{x}) & \operatorname{cov}(\hat{z},\hat{y}) & \operatorname{var}(\hat{z}) \end{bmatrix}$$
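As a quick numerical illustration (my own sketch, not from the lecture; the covariance values below are made up), such a covariance matrix can be estimated from repeated trials of a 3-D estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100,000 trials of a 3-D estimate [x, y, z] with a known covariance.
true_cov = np.array([[2.0, 0.5, 0.0],
                     [0.5, 1.0, 0.3],
                     [0.0, 0.3, 1.5]])
samples = rng.multivariate_normal(mean=[0, 0, 0], cov=true_cov, size=100_000)

# Sample covariance matrix, estimating C = E{(th - E[th])(th - E[th])^T}:
# diagonal entries are variances, off-diagonal entries are covariances.
C = np.cov(samples, rowvar=False)

print(np.round(C, 2))  # close to true_cov
```

Note that the result is symmetric, as any covariance matrix must be; this symmetry is used later when discussing the inverse.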
Fisher Information Matrix
For the vector parameter case, the Fisher information becomes the Fisher Information Matrix (FIM) $\mathbf{I}(\boldsymbol{\theta})$, whose $mn$-th element is given by:

$$[\mathbf{I}(\boldsymbol{\theta})]_{mn} = -E\left[\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_m\,\partial\theta_n}\right], \qquad m, n = 1, 2, \ldots, p$$

evaluated at the true value of $\boldsymbol{\theta}$.
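A sketch of this definition in action (my own example, not from the slides): for a single sample $x \sim N(\mu, \sigma^2)$ with $\boldsymbol{\theta} = [\mu, \sigma^2]$, the second partial derivatives of $\ln p(x;\boldsymbol{\theta})$ have a closed form, and averaging the negative Hessian over many draws approximates the known FIM $\operatorname{diag}(1/\sigma^2,\ 1/(2\sigma^4))$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, var = 2.0, 1.5          # true parameter values, theta = [mu, sigma^2]
x = rng.normal(mu, np.sqrt(var), size=200_000)

# Second partial derivatives of ln p(x; theta) for a single Gaussian sample:
# d2/dmu2      = -1/var
# d2/dmu dvar  = -(x - mu)/var^2
# d2/dvar2     =  1/(2 var^2) - (x - mu)^2 / var^3
H11 = np.full_like(x, -1.0 / var)
H12 = -(x - mu) / var**2
H22 = 1.0 / (2 * var**2) - (x - mu) ** 2 / var**3

# [I(theta)]_mn = -E[ d2 ln p / d theta_m d theta_n ], estimated by averaging
I_hat = -np.array([[H11.mean(), H12.mean()],
                   [H12.mean(), H22.mean()]])

I_exact = np.array([[1.0 / var, 0.0],
                    [0.0, 1.0 / (2 * var**2)]])
print(np.round(I_hat, 3))  # close to I_exact
```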
The CRLB Matrix
Then, under the same kind of regularity conditions,
the CRLB matrix is the inverse of the FIM:
$$\mathrm{CRLB} = \mathbf{I}^{-1}(\boldsymbol{\theta})$$

So what this means is:

$$[\mathbf{C}_{\hat{\boldsymbol{\theta}}}]_{nn} \ge [\mathbf{I}^{-1}(\boldsymbol{\theta})]_{nn}, \qquad n = 1, 2, \ldots, p$$

Writing $\mathbf{I}^{-1}(\boldsymbol{\theta}) = [b_{ij}]$, this compares the two matrices

$$\begin{bmatrix} \operatorname{var}(\hat{x}) & \operatorname{cov}(\hat{x},\hat{y}) & \operatorname{cov}(\hat{x},\hat{z}) \\ \operatorname{cov}(\hat{y},\hat{x}) & \operatorname{var}(\hat{y}) & \operatorname{cov}(\hat{y},\hat{z}) \\ \operatorname{cov}(\hat{z},\hat{x}) & \operatorname{cov}(\hat{z},\hat{y}) & \operatorname{var}(\hat{z}) \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix}$$

on the diagonal only (!): $\operatorname{var}(\hat{x}) \ge b_{11}$, $\operatorname{var}(\hat{y}) \ge b_{22}$, $\operatorname{var}(\hat{z}) \ge b_{33}$.
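A concrete, textbook-standard instance of the diagonal bound (a sketch; the values of $N$ and $\sigma^2$ are illustrative): $N$ i.i.d. samples from $N(\mu, \sigma^2)$ give $\mathbf{I}(\boldsymbol{\theta}) = N\cdot\operatorname{diag}(1/\sigma^2,\ 1/(2\sigma^4))$, and inverting it yields the familiar per-parameter bounds:

```python
import numpy as np

N, var = 50, 2.0  # sample size and true variance (illustrative values)

# FIM for N i.i.d. Gaussian samples, theta = [mu, sigma^2]
fim = N * np.array([[1.0 / var, 0.0],
                    [0.0, 1.0 / (2 * var**2)]])

crlb = np.linalg.inv(fim)  # CRLB matrix = I^{-1}(theta)

# Diagonal entries bound each parameter's variance:
# var(mu_hat) >= sigma^2 / N,  var(var_hat) >= 2 sigma^4 / N
print(crlb[0, 0], crlb[1, 1])  # 0.04 0.16
```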
More General Form of The CRLB Matrix
$\mathbf{C}_{\hat{\boldsymbol{\theta}}} - \mathbf{I}^{-1}(\boldsymbol{\theta})$ is positive semi-definite:

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}} - \mathbf{I}^{-1}(\boldsymbol{\theta}) \succeq \mathbf{0}$$
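Positive semi-definiteness can be checked numerically: every eigenvalue of the difference matrix must be non-negative. A minimal sketch with made-up matrices (an efficient estimator would make the difference exactly zero):

```python
import numpy as np

# Hypothetical CRLB matrix and an estimator covariance that exceeds it
crlb = np.array([[0.04, 0.01],
                 [0.01, 0.16]])
C = crlb + np.array([[0.02, 0.00],
                     [0.00, 0.05]])   # some excess above the bound

diff = C - crlb
eigvals = np.linalg.eigvalsh(diff)    # eigvalsh: eigenvalues of a symmetric matrix
print(eigvals)                        # all >= 0  =>  C - CRLB is positive semi-definite
```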
For the 2-D case, let the estimation error be $\mathbf{e} = [x_e\ y_e]^T$. For a jointly Gaussian, zero-mean error vector the PDF is

$$p(\mathbf{e}) = \frac{1}{(2\pi)^{N/2}\,|\mathbf{C}|^{1/2}}\exp\left(-\frac{1}{2}\,\mathbf{e}^T\mathbf{C}^{-1}\mathbf{e}\right)$$

The exponent is a quadratic form!! (Recall: it is scalar-valued.)

So the equi-height contours of this PDF are given by the values of $\mathbf{e}$ such that

$$\mathbf{e}^T\mathbf{A}\,\mathbf{e} = k \qquad \text{(some constant)}$$

where for ease we let $\mathbf{A} = \mathbf{C}^{-1}$.

Note: $\mathbf{A}$ is symmetric, so $a_{12} = a_{21}$, because any covariance matrix is symmetric and the inverse of a symmetric matrix is symmetric.
What does this look like? Writing out the quadratic form:

$$a_{11}x_e^2 + 2a_{12}x_e y_e + a_{22}y_e^2 = k$$

where

$$\mathbf{C} = \begin{bmatrix} \sigma_{x_e}^2 & \operatorname{cov}(x_e, y_e) \\ \operatorname{cov}(y_e, x_e) & \sigma_{y_e}^2 \end{bmatrix}$$

If $a_{12} = 0$, then $x_e$ & $y_e$ are uncorrelated, and both $\mathbf{C}$ and $\mathbf{A}$ are diagonal:

$$\mathbf{A} = \begin{bmatrix} a_{11} & 0 \\ 0 & a_{22} \end{bmatrix} \quad\Longleftrightarrow\quad \mathbf{C} = \mathbf{A}^{-1} = \begin{bmatrix} 1/a_{11} & 0 \\ 0 & 1/a_{22} \end{bmatrix}$$

Note: $a_{12} \ne 0$ means $x_e$ & $y_e$ are correlated.
[Figure: equi-height contours in the $(x_e, y_e)$ plane. When $x_e$ & $y_e$ are uncorrelated, the ellipse's axes align with the coordinate axes, with widths set by $\sigma_{x_e}$ and $\sigma_{y_e}$; when $x_e$ & $y_e$ are correlated, the ellipse is tilted.]
Not In Book: Error Ellipsoids and Correlation

Choosing the k Value

For the 2-D case,

$$k = -2\ln(1 - P_e)$$

where $P_e$ is the probability that the estimate will lie inside the ellipse. See the posted paper by Torrieri.
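This formula is the 2-D chi-square quantile relation: for a Gaussian error vector, $\mathbf{e}^T\mathbf{C}^{-1}\mathbf{e}$ is chi-square with 2 degrees of freedom, whose CDF is $1 - e^{-k/2}$. A quick check of both the identity and the coverage (my own sketch, standard library only):

```python
import math
import random

P_e = 0.9                       # desired probability of falling inside the ellipse
k = -2.0 * math.log(1.0 - P_e)  # k = -2 ln(1 - P_e)

# Analytic check: P(chi^2_2 <= k) = 1 - exp(-k/2) recovers P_e exactly
assert abs((1.0 - math.exp(-k / 2.0)) - P_e) < 1e-12

# Monte Carlo check: fraction of standard 2-D Gaussian samples with
# x^2 + y^2 <= k should be close to P_e
random.seed(0)
n = 200_000
inside = sum(random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 <= k
             for _ in range(n))
print(inside / n)  # close to 0.9
```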
Ellipsoids and Eigen-Structure
Consider a symmetric matrix A and its quadratic form $\mathbf{x}^T\mathbf{A}\mathbf{x}$.

Ellipsoid:

$$\mathbf{x}^T\mathbf{A}\mathbf{x} = k, \quad\text{or}\quad \langle\mathbf{x}, \mathbf{A}\mathbf{x}\rangle = k$$

The principal axes of the ellipse are orthogonal to each other and are orthogonal to the tangent line on the ellipse. [Figure: ellipse in the $(x_1, x_2)$ plane with its principal axes.]

Theorem: The principal axes of the ellipsoid $\mathbf{x}^T\mathbf{A}\mathbf{x} = k$ are eigenvectors of matrix A. (Not in book.)
Proof: From multi-dimensional calculus, the gradient of a scalar-valued function $\Phi(x_1, \ldots, x_n)$ is orthogonal to the surface $\Phi(\mathbf{x}) = \text{const}$. [Figure: level curve in the $(x_1, x_2)$ plane with its gradient vector.]

$$\operatorname{grad}\Phi(\mathbf{x}) = \nabla\Phi(\mathbf{x}) = \frac{\partial\Phi(\mathbf{x})}{\partial\mathbf{x}} = \left[\frac{\partial\Phi}{\partial x_1}\ \cdots\ \frac{\partial\Phi}{\partial x_n}\right]^T$$

(different notations for the same thing). See the handout posted on Blackboard on Gradients and Derivatives.
Compute the gradient of the quadratic form component-wise:

$$\frac{\partial}{\partial x_k}\left(\mathbf{x}^T\mathbf{A}\mathbf{x}\right) = \frac{\partial}{\partial x_k}\sum_i\sum_j a_{ij}\,x_i x_j = \sum_i\sum_j a_{ij}\,\frac{\partial(x_i x_j)}{\partial x_k} \qquad (1)$$

Product rule:

$$\frac{\partial(x_i x_j)}{\partial x_k} = x_i\underbrace{\frac{\partial x_j}{\partial x_k}}_{\delta_{jk}} + x_j\underbrace{\frac{\partial x_i}{\partial x_k}}_{\delta_{ik}} \qquad (2)$$

Using (2) in (1) gives:

$$\frac{\partial}{\partial x_k}\left(\mathbf{x}^T\mathbf{A}\mathbf{x}\right) = \sum_j a_{kj}\,x_j + \sum_i a_{ik}\,x_i$$

By symmetry, $a_{ik} = a_{ki}$, and from this we get:

$$\nabla_{\mathbf{x}}\left(\mathbf{x}^T\mathbf{A}\mathbf{x}\right) = 2\mathbf{A}\mathbf{x}$$
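The identity $\nabla(\mathbf{x}^T\mathbf{A}\mathbf{x}) = 2\mathbf{A}\mathbf{x}$ is easy to verify numerically (a sketch; A and x below are random, with A symmetrized as the identity requires):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
A = (A + A.T) / 2             # make A symmetric (required for the 2Ax formula)
x = rng.normal(size=3)

f = lambda v: v @ A @ v       # the quadratic form, scalar-valued

# Central finite differences vs. the closed-form gradient 2 A x
h = 1e-6
grad_fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(3)])
grad_exact = 2 * A @ x

print(np.max(np.abs(grad_fd - grad_exact)))  # tiny: only floating-point roundoff
```

For a quadratic function the central difference has no truncation error, so the two gradients agree to near machine precision.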
Since the gradient is orthogonal to the ellipse, this says $\mathbf{A}\mathbf{x}$ is orthogonal to the ellipse $\mathbf{x}^T\mathbf{A}\mathbf{x} = k$. [Figure: ellipse in the $(x_1, x_2)$ plane showing the vectors $\mathbf{x}$ and $\mathbf{A}\mathbf{x}$.]

When $\mathbf{x}$ is a principal axis, then $\mathbf{x}$ and $\mathbf{A}\mathbf{x}$ are aligned:

$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$$

Eigenvectors are principal axes!!!

< End of Proof >
Theorem: The length of the principal axis associated with eigenvalue $\lambda_i$ is $\sqrt{k/\lambda_i}$.

Proof: If $\mathbf{x}$ is a principal axis, then $\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$. Take the inner product of both sides with $\mathbf{x}$:

$$\langle\mathbf{A}\mathbf{x}, \mathbf{x}\rangle = \langle\lambda\mathbf{x}, \mathbf{x}\rangle \;\Longrightarrow\; \underbrace{\langle\mathbf{A}\mathbf{x}, \mathbf{x}\rangle}_{=\,k} = \lambda\underbrace{\langle\mathbf{x}, \mathbf{x}\rangle}_{\|\mathbf{x}\|^2} \;\Longrightarrow\; \|\mathbf{x}\| = \sqrt{k/\lambda}$$

< End of Proof >

Note: This says that if A has a zero eigenvalue, then the error ellipse will have an infinite-length principal axis. NOT GOOD!! So we'll require that all $\lambda_i > 0$: A must be positive definite.
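Both eigen-results can be checked in a few lines (a sketch; the matrix A and constant k are made up): for a symmetric positive definite A, the point $\mathbf{x} = \sqrt{k/\lambda_i}\,\mathbf{v}_i$ lies on the ellipsoid $\mathbf{x}^T\mathbf{A}\mathbf{x} = k$, and $\mathbf{A}\mathbf{x}$ is aligned with $\mathbf{x}$ there:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric, positive definite (illustrative)
k = 4.0

eigvals, eigvecs = np.linalg.eigh(A)   # columns of eigvecs are unit eigenvectors

for lam, v in zip(eigvals, eigvecs.T):
    x = np.sqrt(k / lam) * v           # principal-axis endpoint, length sqrt(k/lam)
    # The endpoint lies on the ellipsoid:
    assert abs(x @ A @ x - k) < 1e-9
    # A x is aligned with x (x is an eigenvector):
    assert np.allclose(A @ x, lam * x)
    print(np.linalg.norm(x), np.sqrt(k / lam))  # the two lengths agree
```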
Application of Eigen-Results to Error Ellipsoids
The error ellipsoid corresponding to the estimator covariance matrix must satisfy:

$$\mathbf{e}^T\mathbf{C}_{\hat{\boldsymbol{\theta}}}^{-1}\,\mathbf{e} = k$$

Recall: a positive definite matrix A and its inverse $\mathbf{A}^{-1}$ have the same eigenvectors and reciprocal eigenvalues.

Thus, we could instead find the eigenvalues of

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}} = \mathbf{I}^{-1}(\boldsymbol{\theta}) \qquad \text{(the inverse FIM!)}$$

and then the principal axes would have lengths set by its eigenvalues, not inverted.
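The "same eigenvectors, reciprocal eigenvalues" fact is quick to confirm numerically (a sketch with an arbitrary positive definite matrix, values made up):

```python
import numpy as np

C = np.array([[2.0, 0.6],
              [0.6, 1.0]])            # positive definite covariance (illustrative)
C_inv = np.linalg.inv(C)

w_C, V_C = np.linalg.eigh(C)
w_inv, _ = np.linalg.eigh(C_inv)

# Eigenvalues of C^{-1} are the reciprocals of the eigenvalues of C
print(np.sort(w_inv), np.sort(1.0 / w_C))

# Each eigenvector of C is also an eigenvector of C^{-1}, with eigenvalue 1/lam
for lam, v in zip(w_C, V_C.T):
    assert np.allclose(C_inv @ v, (1.0 / lam) * v)
```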
Illustrate with the 2-D case:

$$\mathbf{e}^T\mathbf{C}^{-1}\mathbf{e} = k$$

with eigenvectors $\mathbf{v}_1$ & $\mathbf{v}_2$ and eigenvalues $\lambda_1$ & $\lambda_2$ of $\mathbf{C}$ (not the inverse!).

[Figure: ellipse with principal axes along $\mathbf{v}_1$ and $\mathbf{v}_2$, with half-axis lengths $\sqrt{k\lambda_1}$ and $\sqrt{k\lambda_2}$.]
The CRLB/FIM Ellipse
We can re-state this in terms of the FIM. Once we find the FIM we can:

- Find the inverse FIM
- Find its eigenvectors: these give the principal axes
- Find its eigenvalues: the principal axis lengths are then $\sqrt{k\lambda_i}$

So we can make an ellipse from the CRLB matrix instead of the covariance matrix. This ellipse will be the smallest error ellipse that an unbiased estimator can achieve!
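The steps above can be put together as one short sketch (the FIM used here is the Gaussian mean/variance example; its values are an assumption for illustration):

```python
import numpy as np

# 1. A FIM (here: N = 50 i.i.d. samples of N(mu, sigma^2) with sigma^2 = 2)
N, var = 50, 2.0
fim = N * np.array([[1.0 / var, 0.0],
                    [0.0, 1.0 / (2 * var**2)]])

# 2. Inverse FIM = CRLB matrix
crlb = np.linalg.inv(fim)

# 3. Eigen-decomposition: eigenvectors give the ellipse's principal axes
lam, V = np.linalg.eigh(crlb)

# 4. Choose k for 90% concentration; axis lengths are sqrt(k * lam_i)
#    (eigenvalues of the CRLB matrix directly, not inverted)
k = -2.0 * np.log(1.0 - 0.9)
axis_lengths = np.sqrt(k * lam)

print(V)             # principal axis directions
print(axis_lengths)  # the smallest error ellipse any unbiased estimator can achieve
```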