The log-likelihood is
$$l(\theta) = \log[f(y, \theta)] = \log\{f[y, h(x)]\},$$
where $\theta = h(x)$. Let
$$U(\theta) = \frac{\partial l(\theta)}{\partial \theta} =
\begin{pmatrix} U_1(\theta) \\ U_2(\theta) \\ \vdots \\ U_p(\theta) \end{pmatrix} =
\begin{pmatrix}
\partial l(\theta)/\partial \theta_1 \\
\partial l(\theta)/\partial \theta_2 \\
\vdots \\
\partial l(\theta)/\partial \theta_p
\end{pmatrix}$$
and
$$A(\theta) = -\frac{\partial U(\theta)}{\partial \theta^{\mathsf{T}}}
= -\frac{\partial^2 l(\theta)}{\partial \theta \, \partial \theta^{\mathsf{T}}}
= -\begin{pmatrix}
\dfrac{\partial^2 l(\theta)}{\partial \theta_1^2} & \dfrac{\partial^2 l(\theta)}{\partial \theta_1 \partial \theta_2} & \cdots & \dfrac{\partial^2 l(\theta)}{\partial \theta_1 \partial \theta_p} \\
\dfrac{\partial^2 l(\theta)}{\partial \theta_2 \partial \theta_1} & \dfrac{\partial^2 l(\theta)}{\partial \theta_2^2} & \cdots & \dfrac{\partial^2 l(\theta)}{\partial \theta_2 \partial \theta_p} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial^2 l(\theta)}{\partial \theta_p \partial \theta_1} & \dfrac{\partial^2 l(\theta)}{\partial \theta_p \partial \theta_2} & \cdots & \dfrac{\partial^2 l(\theta)}{\partial \theta_p^2}
\end{pmatrix}
= \begin{pmatrix}
A_{11}(\theta) & A_{12}(\theta) & \cdots & A_{1p}(\theta) \\
A_{21}(\theta) & A_{22}(\theta) & \cdots & A_{2p}(\theta) \\
\vdots & \vdots & \ddots & \vdots \\
A_{p1}(\theta) & A_{p2}(\theta) & \cdots & A_{pp}(\theta)
\end{pmatrix},$$
where $A_{jk}(\theta) = -\partial^2 l(\theta)/\partial \theta_j \, \partial \theta_k$.
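As a concrete illustration (a sketch with made-up data, not from the source), for an i.i.d. Poisson($\lambda$) sample the score and observed information reduce to scalars with closed forms, and the score vanishes at the maximum likelihood estimate $\hat{\lambda} = \bar{y}$:

```python
import numpy as np

# Hypothetical i.i.d. Poisson(lam) sample y_1, ..., y_n.
# Up to an additive constant, l(lam) = sum(y) * log(lam) - n * lam, so
#   U(lam) = sum(y)/lam - n        (score)
#   A(lam) = sum(y)/lam**2         (observed information, -d2l/dlam2)
y = np.array([2, 4, 3, 5, 1, 3, 2, 4])
n, s = len(y), y.sum()

def score(lam):
    return s / lam - n

def obs_info(lam):
    return s / lam**2

lam_hat = y.mean()        # closed-form MLE for the Poisson rate
print(score(lam_hat))     # -> 0.0, the score vanishes at the MLE
```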
If $\hat{\theta}$ is the maximum likelihood estimate of $\theta$, then
$$U(\hat{\theta}) = 0.$$
A first-order Taylor expansion of $U(\hat{\theta})$ about $\theta_0$ gives
$$0 = U(\hat{\theta}) \approx U(\theta_0) - A(\theta_0)(\hat{\theta} - \theta_0),$$
where $\theta_0$ is a value close to $\hat{\theta}$. Thus,
$$\hat{\theta} \approx \theta_0 + A^{-1}(\theta_0)\,U(\theta_0).$$
Motivated by the last equation, two algorithms can be used to obtain the maximum likelihood estimate $\hat{\theta}$. Let
$$\theta_t = \begin{pmatrix} \theta_{t1} \\ \theta_{t2} \\ \vdots \\ \theta_{tp} \end{pmatrix}
\quad\text{and}\quad
\theta_{t+1} = \begin{pmatrix} \theta_{(t+1)1} \\ \theta_{(t+1)2} \\ \vdots \\ \theta_{(t+1)p} \end{pmatrix}$$
be the approximations to the maximum likelihood estimate at iterations $t$ and $t+1$, respectively.
The first algorithm (Newton–Raphson) uses the observed information matrix $A(\theta)$:
$$\theta_{t+1} = \theta_t + A^{-1}(\theta_t)\,U(\theta_t), \qquad t = 0, 1, 2, \ldots,$$
or, equivalently,
$$A(\theta_t)\,\theta_{t+1} = A(\theta_t)\,\theta_t + U(\theta_t).$$
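A minimal sketch of this update for the one-parameter Poisson model above (hypothetical data; with $p = 1$, $A^{-1}(\theta_t)U(\theta_t)$ is simply a ratio):

```python
import numpy as np

y = np.array([2, 4, 3, 5, 1, 3, 2, 4])   # hypothetical Poisson data
n, s = len(y), y.sum()

def newton_raphson(lam0, tol=1e-10, max_iter=100):
    """Iterate lam <- lam + U(lam)/A(lam) until the step is negligible."""
    lam = lam0
    for _ in range(max_iter):
        U = s / lam - n       # score
        A = s / lam**2        # observed information
        step = U / A
        lam += step
        if abs(step) < tol:   # stopping rule on the size of the update
            break
    return lam

print(newton_raphson(1.0))    # converges to the sample mean ybar = 3.0
```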
The second algorithm (Fisher scoring) replaces $A(\theta)$ by its expectation $I(\theta)$:
$$\theta_{t+1} = \theta_t + I^{-1}(\theta_t)\,U(\theta_t), \qquad t = 0, 1, 2, \ldots,$$
or, equivalently,
$$I(\theta_t)\,\theta_{t+1} = I(\theta_t)\,\theta_t + U(\theta_t),$$
where
$$I(\theta) = E[A(\theta)] = -E\!\left[\frac{\partial^2 l(\theta)}{\partial \theta \, \partial \theta^{\mathsf{T}}}\right]
= \begin{pmatrix}
I_{11}(\theta) & I_{12}(\theta) & \cdots & I_{1p}(\theta) \\
I_{21}(\theta) & I_{22}(\theta) & \cdots & I_{2p}(\theta) \\
\vdots & \vdots & \ddots & \vdots \\
I_{p1}(\theta) & I_{p2}(\theta) & \cdots & I_{pp}(\theta)
\end{pmatrix}.$$
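A matching sketch for the Poisson model, where $I(\lambda) = E[A(\lambda)] = n/\lambda$. For this model the update $\lambda_t + (\lambda_t/n)(\sum_i y_i/\lambda_t - n) = \bar{y}$ lands on the maximum likelihood estimate in a single step, which the code below illustrates:

```python
import numpy as np

y = np.array([2, 4, 3, 5, 1, 3, 2, 4])   # hypothetical Poisson data
n, s = len(y), y.sum()

def fisher_scoring(lam0, tol=1e-10, max_iter=100):
    """Iterate lam <- lam + U(lam)/I(lam), with I(lam) = E[A(lam)] = n/lam."""
    lam = lam0
    for _ in range(max_iter):
        U = s / lam - n       # score
        I = n / lam           # expected (Fisher) information
        step = U / I
        lam += step
        if abs(step) < tol:
            break
    return lam

print(fisher_scoring(1.0))    # -> 3.0, reached after a single update
```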
The iterations are repeated until convergence, for example until $\lVert \theta_{t+1} - \theta_t \rVert < \epsilon$ or $|l(\theta_{t+1}) - l(\theta_t)| < \epsilon$, where $\epsilon$ is a prespecified small positive number.
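For $p > 1$ parameters the updates are genuinely vector equations. The sketch below (hypothetical data; `fisher_scoring_normal` is my own helper name) applies Fisher scoring to $\theta = (\mu, v)$ for an i.i.d. Normal$(\mu, v)$ sample, using the stopping rule $\lVert \theta_{t+1} - \theta_t \rVert < \epsilon$:

```python
import numpy as np

y = np.array([1.0, 3.0, 2.0, 4.0, 5.0, 3.0])   # hypothetical normal sample
n = len(y)

def fisher_scoring_normal(theta0, tol=1e-10, max_iter=100):
    """Fisher scoring for theta = (mu, v) of a Normal(mu, v) sample.

    Score:    U = ( sum(y - mu)/v,  -n/(2v) + sum((y - mu)^2)/(2v^2) )
    Expected information: I = diag( n/v,  n/(2v^2) ).
    """
    theta = np.array(theta0, dtype=float)
    for _ in range(max_iter):
        mu, v = theta
        r = y - mu
        U = np.array([r.sum() / v,
                      -n / (2 * v) + (r**2).sum() / (2 * v**2)])
        I = np.diag([n / v, n / (2 * v**2)])
        step = np.linalg.solve(I, U)   # I(theta_t)^{-1} U(theta_t)
        theta = theta + step
        if np.linalg.norm(step) < tol:  # ||theta_{t+1} - theta_t|| < eps
            break
    return theta

print(fisher_scoring_normal([0.0, 1.0]))   # approx (ybar, sum((y-ybar)^2)/n)
```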
Note: $U(\theta)$ is called the score vector, and $I(\theta)$ is called the information matrix.