
3.7 CRLB for Vector Parameter Case


Vector parameter:     \theta = [\theta_1\ \theta_2\ \cdots\ \theta_p]^T

Its estimate:         \hat\theta = [\hat\theta_1\ \hat\theta_2\ \cdots\ \hat\theta_p]^T

Assume that the estimate is unbiased:   E\{\hat\theta\} = \theta

For a scalar parameter we looked at its variance,
but for a vector parameter we look at its covariance matrix:

    C_{\hat\theta} = E\{ (\hat\theta - \theta)(\hat\theta - \theta)^T \}

For example, for \theta = [x\ y\ z]^T:

    C_{\hat\theta} = \begin{bmatrix}
        var(x)   & cov(x,y) & cov(x,z) \\
        cov(y,x) & var(y)   & cov(y,z) \\
        cov(z,x) & cov(z,y) & var(z)
    \end{bmatrix}
Fisher Information Matrix
For the vector parameter case,
the Fisher info becomes the Fisher Information Matrix (FIM) I(\theta),
whose mn-th element is given by:

    [I(\theta)]_{mn} = -E\left[ \frac{\partial^2 \ln p(x;\theta)}{\partial\theta_m\,\partial\theta_n} \right],
    \quad m, n = 1, 2, \ldots, p

evaluated at the true value of \theta.
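To make the definition concrete, here is a small sketch with an assumed model that is not from the slide: the standard DC level A in white Gaussian noise with known variance. For that model the Hessian of the log-likelihood in A happens not to depend on the data, so a finite-difference second derivative reproduces the 11-element of the FIM directly:

```python
import numpy as np

# Assumed model (for illustration only): x[n] = A + w[n], w[n] ~ N(0, sigma2)
def log_lik(x, A, sigma2):
    N = len(x)
    return -N / 2 * np.log(2 * np.pi * sigma2) - np.sum((x - A) ** 2) / (2 * sigma2)

rng = np.random.default_rng(0)
N, A_true, sigma2 = 50, 1.5, 2.0
x = A_true + rng.normal(0.0, np.sqrt(sigma2), N)

# [I(theta)]_11 = -E[d^2 ln p / dA^2]; estimate the second derivative
# by central finite differences at the true parameter value
h = 1e-4
d2 = (log_lik(x, A_true + h, sigma2) - 2 * log_lik(x, A_true, sigma2)
      + log_lik(x, A_true - h, sigma2)) / h ** 2

fim_11 = N / sigma2   # known closed form for this model: N / sigma^2
print(-d2, fim_11)    # agree (here the Hessian does not depend on the data)
```

The expectation in the definition is trivial here; for models whose Hessian is data-dependent one would also average over realizations.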
The CRLB Matrix
Then, under the same kind of regularity conditions,
the CRLB matrix is the inverse of the FIM:

    CRLB = I^{-1}(\theta)

So what this means is:

    var(\hat\theta_n) = [C_{\hat\theta}]_{nn} \ge [I^{-1}(\theta)]_{nn}     (!)

Diagonal elements of the inverse FIM bound the parameter variances,
which are the diagonal elements of the parameter covariance matrix.
For the \theta = [x\ y\ z]^T example, writing I^{-1}(\theta) = [b_{mn}]:

    C_{\hat\theta} = \begin{bmatrix}
        var(x)   & cov(x,y) & cov(x,z) \\
        cov(y,x) & var(y)   & cov(y,z) \\
        cov(z,x) & cov(z,y) & var(z)
    \end{bmatrix},
    \qquad
    I^{-1}(\theta) = \begin{bmatrix}
        b_{11} & b_{12} & b_{13} \\
        b_{21} & b_{22} & b_{23} \\
        b_{31} & b_{32} & b_{33}
    \end{bmatrix}

so var(x) \ge b_{11},  var(y) \ge b_{22},  var(z) \ge b_{33}.
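As a worked numeric sketch (the model is assumed, not taken from this slide): for the standard line-fit model x[n] = A + B n + w[n] with white Gaussian noise of variance sigma2, the FIM is (1/sigma2) [[N, sum n], [sum n, sum n^2]]. Inverting it and reading off the diagonal gives the variance bounds:

```python
import numpy as np

# Assumed example: x[n] = A + B*n + w[n], n = 0..N-1, theta = [A, B]^T
N, sigma2 = 20, 1.0
n = np.arange(N)
I = np.array([[N, n.sum()],
              [n.sum(), (n ** 2).sum()]]) / sigma2

crlb = np.linalg.inv(I)   # CRLB matrix = inverse FIM
# Diagonal elements bound the variances of any unbiased estimator:
print("var(A-hat) >=", crlb[0, 0])
print("var(B-hat) >=", crlb[1, 1])
```

Note the off-diagonal entries of the CRLB matrix are nonzero here: the bounds on A-hat and B-hat are coupled.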
More General Form of The CRLB Matrix

    C_{\hat\theta} - I^{-1}(\theta)  is positive semi-definite.

Mathematical notation for this is:

    C_{\hat\theta} - I^{-1}(\theta) \succeq 0     (!!)

Note: property #5 about p.d. matrices on p. 573
states that (!!) \Rightarrow (!)
CRLB Off-Diagonal Elements Insight
Let \theta = [x_e\ y_e]^T represent the 2-D x-y location of a
transmitter (emitter) to be estimated.

Consider the two cases of scatter plots for the estimated location:

[Figure: two scatter plots of the estimates (\hat x_e, \hat y_e): one with an
axis-aligned cloud, one with a tilted (correlated) cloud.]

Each case has the same variances, but the location accuracy
characteristics are very different. This is the effect of the
off-diagonal elements of the covariance.

Should consider the effect of off-diagonal CRLB elements!!!

(Not In Book)
CRLB Matrix and Error Ellipsoids
Not In Book
Assume \hat\theta = [\hat x_e\ \hat y_e]^T is 2-D Gaussian with zero mean
(only for convenience) and covariance matrix C_{\hat\theta}.

Then its PDF is given by:

    p(\hat\theta) = \frac{1}{(2\pi)^{N/2} |C_{\hat\theta}|^{1/2}}
                    \exp\left( -\frac{1}{2}\, \hat\theta^T C_{\hat\theta}^{-1} \hat\theta \right)

The exponent is a quadratic form!! (Recall: it's scalar-valued.)

So the equi-height contours of this PDF are given by the
values of \hat\theta such that:

    \hat\theta^T A \hat\theta = k     (some constant)

where for ease we let A = C_{\hat\theta}^{-1}.

Note: A is symmetric, so a_{12} = a_{21}, because any covariance matrix is
symmetric, and the inverse of a symmetric matrix is symmetric.
What does this look like?
    a_{11} \hat x_e^2 + 2 a_{12} \hat x_e \hat y_e + a_{22} \hat y_e^2 = k

An ellipse!!! (Look it up in your calculus book!!!)

Recall: if a_{12} = 0, then the ellipse is aligned with the axes, and
a_{11} and a_{22} control the size of the ellipse along the axes.

Note: a_{12} = 0 gives

    A = \begin{bmatrix} a_{11} & 0 \\ 0 & a_{22} \end{bmatrix}
    \quad\Rightarrow\quad
    C_{\hat\theta} = A^{-1} = \begin{bmatrix} 1/a_{11} & 0 \\ 0 & 1/a_{22} \end{bmatrix}

so \hat x_e and \hat y_e are uncorrelated.

Note: a_{12} \ne 0 means \hat x_e and \hat y_e are correlated:

    C_{\hat\theta} = \begin{bmatrix}
        \sigma_{x_e}^2  & \sigma_{x_e y_e} \\
        \sigma_{y_e x_e} & \sigma_{y_e}^2
    \end{bmatrix}
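Both statements are easy to check numerically; the covariance values below are made up for illustration. The last two lines also expand the quadratic form term by term to match the ellipse equation above:

```python
import numpy as np

# Uncorrelated case: diagonal covariance -> diagonal A, axis-aligned ellipse
C_uncorr = np.array([[2.0, 0.0], [0.0, 0.5]])
A = np.linalg.inv(C_uncorr)

# Correlated case: nonzero off-diagonal -> a12 != 0, tilted ellipse
C_corr = np.array([[2.0, 0.7], [0.7, 0.5]])
A2 = np.linalg.inv(C_corr)
print(A[0, 1], A2[0, 1])   # 0 in the first case, nonzero in the second

# The contour level through a point v is the quadratic form value,
# which equals a11*x^2 + 2*a12*x*y + a22*y^2
v = np.array([0.3, -0.4])
k_matrix = v @ A2 @ v
k_expanded = (A2[0, 0] * v[0] ** 2 + 2 * A2[0, 1] * v[0] * v[1]
              + A2[1, 1] * v[1] ** 2)
print(k_matrix, k_expanded)   # same value
```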
Error Ellipsoids and Correlation                          (Not In Book)

[Figure: if \hat x_e and \hat y_e are uncorrelated, the error ellipse is
aligned with the axes, with widths ~ 2\sigma_{x_e} and 2\sigma_{y_e};
if they are correlated, the ellipse is tilted relative to the axes.]

Choosing the k value: for the 2-D case,

    k = -2 \ln(1 - P_e)

where P_e is the probability that the estimate will lie inside the ellipse.

See posted paper by Torrieri.
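The k formula can be sanity-checked: for a 2-D zero-mean Gaussian, the quadratic form is chi-square with 2 degrees of freedom, so P(inside) = 1 - exp(-k/2), which inverts to the formula above. A Monte Carlo sketch (covariance values made up):

```python
import numpy as np

Pe = 0.95
k = -2.0 * np.log(1.0 - Pe)
print(k)  # about 5.99

# Monte Carlo check with an arbitrary made-up covariance matrix
rng = np.random.default_rng(1)
C = np.array([[2.0, 0.7], [0.7, 0.5]])
samples = rng.multivariate_normal([0.0, 0.0], C, size=200_000)

# quadratic form q = theta^T C^{-1} theta for every sample
q = np.einsum('ni,ij,nj->n', samples, np.linalg.inv(C), samples)
frac_inside = np.mean(q <= k)
print(frac_inside)  # close to 0.95
```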
Ellipsoids and Eigen-Structure                            (Not In Book)

Consider a symmetric matrix A and its quadratic form x^T A x.

Ellipsoid:   x^T A x = k,   or   \langle x, Ax \rangle = k

The principal axes of the ellipse are orthogonal to each other
and are orthogonal to the tangent line on the ellipse:

[Figure: ellipse in the (x_1, x_2) plane with its two principal axes drawn.]

Theorem: The principal axes of the ellipsoid x^T A x = k are
eigenvectors of the matrix A.
Proof: From multi-dimensional calculus, the gradient of a scalar-valued
function \phi(x_1, \ldots, x_n) is orthogonal to the surface \phi(x) = const:

    grad\,\phi(x) = \nabla_x \phi(x)
                  = \left[ \frac{\partial\phi}{\partial x_1}\ \cdots\ \frac{\partial\phi}{\partial x_n} \right]^T
    (different notations)

See handout posted on Blackboard on Gradients and Derivatives.
11

    \frac{\partial}{\partial x_k} (x^T A x)
    = \frac{\partial}{\partial x_k} \sum_i \sum_j a_{ij} x_i x_j
    = \sum_i \sum_j a_{ij} \frac{\partial (x_i x_j)}{\partial x_k}     (*)

Product rule:

    \frac{\partial (x_i x_j)}{\partial x_k}
    = \frac{\partial x_i}{\partial x_k} x_j + x_i \frac{\partial x_j}{\partial x_k}
    = \delta_{ik} x_j + x_i \delta_{jk}     (**)

where \delta_{ik} = 1 if i = k and 0 otherwise.

Using (**) in (*) gives:

    \frac{\partial}{\partial x_k} (x^T A x) = \sum_j a_{kj} x_j + \sum_i a_{ik} x_i

By symmetry, a_{ik} = a_{ki}, and from this we get:

    \nabla_x (x^T A x) = 2Ax
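A quick finite-difference check of the result grad(x^T A x) = 2Ax, with made-up values (central differences are exact for quadratics up to roundoff):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # made-up symmetric matrix
x = np.array([0.7, -1.2])

f = lambda v: v @ A @ v                   # the quadratic form

# numerical gradient by central finite differences, one coordinate at a time
h = 1e-6
grad_fd = np.array([(f(x + h * np.eye(2)[i]) - f(x - h * np.eye(2)[i])) / (2 * h)
                    for i in range(2)])
print(grad_fd, 2 * A @ x)   # the two agree
```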
[Figure: ellipse x^T A x = k with the vector Ax drawn at a point x on it.]

Since the gradient is \perp to the ellipse, this says Ax is \perp to the
ellipse x^T A x = k at the point x.

When x is a principal axis, then x and Ax are aligned:

    Ax = \lambda x

Eigenvectors are principal axes!!!

< End of Proof >
Theorem: The length of the principal axis associated with
eigenvalue \lambda_i is \sqrt{k/\lambda_i}.

Proof: If x is a principal axis, then Ax = \lambda x. Take the inner product
of both sides with x:

    \langle x, Ax \rangle = \lambda \langle x, x \rangle = \lambda \|x\|^2

But \langle x, Ax \rangle = k, so \|x\|^2 = k/\lambda and \|x\| = \sqrt{k/\lambda}.

< End of Proof >

Note: This says that if A has a zero eigenvalue, then the error ellipse
will have an infinite-length principal axis. NOT GOOD!!
So we'll require that all \lambda_i > 0, i.e., A must be positive definite.
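Numeric check of the theorem with a made-up positive definite A: placing a point at distance sqrt(k/lambda_i) along each unit eigenvector lands exactly on the ellipse x^T A x = k:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # made-up symmetric p.d. matrix
k = 4.0
lam, V = np.linalg.eigh(A)               # unit-norm eigenvectors as columns

# tip of each principal axis, at distance sqrt(k/lambda_i) from the center
tips = [np.sqrt(k / l) * V[:, i] for i, l in enumerate(lam)]
for x in tips:
    print(x @ A @ x)   # each axis tip satisfies x^T A x = k
```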

Application of Eigen-Results to Error Ellipsoids

The error ellipsoid corresponding to the estimator covariance
matrix C_{\hat\theta} must satisfy:

    \hat\theta^T C_{\hat\theta}^{-1} \hat\theta = k

Note that the error ellipse is formed using the inverse covariance.
Thus finding the eigenvectors/eigenvalues of C_{\hat\theta}^{-1}
shows the structure of the error ellipse.

Recall: a positive definite matrix A and its inverse A^{-1} have the
same eigenvectors and reciprocal eigenvalues.

Thus, we could instead find the eigenvalues of

    C_{\hat\theta} = I^{-1}(\theta)     (the inverse FIM!!)

and then the principal axes would have lengths
set by its eigenvalues, not inverted.
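The "same eigenvectors, reciprocal eigenvalues" fact is easy to confirm numerically (covariance values made up):

```python
import numpy as np

C = np.array([[2.0, 0.7], [0.7, 0.5]])   # made-up p.d. covariance
lamC, VC = np.linalg.eigh(C)             # ascending eigenvalues
lamCi, VCi = np.linalg.eigh(np.linalg.inv(C))

# eigenvalues of the inverse are the reciprocals of the eigenvalues of C
recip_match = np.allclose(np.sort(1.0 / lamC), np.sort(lamCi))

# eigenvectors agree up to ordering and sign: smallest eigenvalue of C
# pairs with the largest eigenvalue of C^{-1}
align = abs(VC[:, 0] @ VCi[:, -1])
print(recip_match, align)   # True, and alignment 1
```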
Illustrate with the 2-D case:

    \hat\theta^T C_{\hat\theta}^{-1} \hat\theta = k

Let v_1 & v_2 and \lambda_1 & \lambda_2 be the eigenvectors/eigenvalues
of C_{\hat\theta} (not the inverse!).

[Figure: error ellipse with principal axes along v_1 and v_2 and
semi-axis lengths \sqrt{k\lambda_1} and \sqrt{k\lambda_2}.]
The CRLB/FIM Ellipse

We can re-state this in terms of the FIM.
Once we find the FIM we can:
  - Find the inverse FIM
  - Find its eigenvectors: gives the principal axes
  - Find its eigenvalues: the principal axis lengths are then \sqrt{k\lambda_i}

Can make an ellipse from the CRLB matrix instead of the cov. matrix.
This ellipse will be the smallest error ellipse that an unbiased estimator
can achieve!