
Gauss-Markov Theorem

Dr. Frank Wood


Digression: Gauss-Markov Theorem

In a regression model where $E\{\epsilon_i\} = 0$, $\sigma^2\{\epsilon_i\} = \sigma^2 < \infty$, and $\epsilon_i$ and $\epsilon_j$ are uncorrelated for all $i \neq j$, the least squares estimators $b_0$ and $b_1$ are unbiased and have minimum variance among all unbiased linear estimators.

Remember

$b_1 = \frac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2} = \sum k_i Y_i, \qquad k_i = \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2}$

$b_0 = \bar{Y} - b_1 \bar{X}$

$\sigma^2\{b_1\} = \sigma^2\Big\{\sum k_i Y_i\Big\} = \sum k_i^2 \, \sigma^2\{Y_i\} = \sigma^2 \frac{1}{\sum (X_i - \bar{X})^2}$

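These identities are easy to verify numerically. The following is a minimal Python sketch (NumPy assumed; the data x, y and the parameter values are purely illustrative) that builds the weights $k_i$, forms $b_1 = \sum k_i Y_i$, and checks it against the ratio-of-sums formula and the variance expression above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative data from a simple linear model Y = beta0 + beta1*X + eps
    beta0, beta1, sigma = 2.0, 0.5, 1.0
    x = np.linspace(0, 10, 30)
    y = beta0 + beta1 * x + rng.normal(0, sigma, size=x.size)

    # Weights k_i = (X_i - Xbar) / sum (X_i - Xbar)^2
    k = (x - x.mean()) / np.sum((x - x.mean()) ** 2)

    # b1 as a linear combination of the Y_i ...
    b1_linear = np.sum(k * y)
    # ... agrees with the ratio-of-sums least squares formula
    b1_ratio = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    print(b1_linear, b1_ratio)                    # equal up to floating point error

    # Sampling variance of b1 implied by the weights: sigma^2 * sum k_i^2
    print(sigma**2 * np.sum(k**2), sigma**2 / np.sum((x - x.mean()) ** 2))
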
Gauss-Markov Theorem

- The theorem states that $b_1$ has minimum variance among all unbiased linear estimators of the form
  $\hat{\beta}_1 = \sum c_i Y_i$
- As this estimator must be unbiased we have
  $E\{\hat{\beta}_1\} = \sum c_i E\{Y_i\} = \beta_1$
  $= \sum c_i (\beta_0 + \beta_1 X_i) = \beta_0 \sum c_i + \beta_1 \sum c_i X_i = \beta_1$
- This imposes some restrictions on the $c_i$'s.


Proof

- Given these constraints
  $\beta_0 \sum c_i + \beta_1 \sum c_i X_i = \beta_1$
  which must hold whatever the values of $\beta_0$ and $\beta_1$, clearly it must be the case that $\sum c_i = 0$ and $\sum c_i X_i = 1$.
- The variance of this estimator is
  $\sigma^2\{\hat{\beta}_1\} = \sum c_i^2 \, \sigma^2\{Y_i\} = \sigma^2 \sum c_i^2$
- This also places a kind of constraint on the $c_i$'s (see the sketch below).

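As a quick sanity check on these constraints, here is a small Python sketch (continuing the illustrative x from the earlier snippet) confirming that the least squares weights $k_i$ themselves satisfy $\sum k_i = 0$ and $\sum k_i X_i = 1$, i.e. that $b_1$ is a member of the class of unbiased linear estimators under consideration.

    import numpy as np

    x = np.linspace(0, 10, 30)                       # illustrative X values
    k = (x - x.mean()) / np.sum((x - x.mean()) ** 2)

    # The least squares weights satisfy the unbiasedness constraints,
    # so b1 = sum k_i Y_i belongs to the class of unbiased linear estimators.
    print(np.isclose(np.sum(k), 0.0))        # sum k_i     = 0
    print(np.isclose(np.sum(k * x), 1.0))    # sum k_i X_i = 1
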

Proof cont.

Now define $c_i = k_i + d_i$, where the $k_i$ are the constants we already defined and the $d_i$ are arbitrary constants. Let's look at the variance of the estimator:

$\sigma^2\{\hat{\beta}_1\} = \sum c_i^2 \, \sigma^2\{Y_i\} = \sigma^2 \sum (k_i + d_i)^2$
$= \sigma^2 \Big( \sum k_i^2 + \sum d_i^2 + 2 \sum k_i d_i \Big)$

Note we just demonstrated that

$\sigma^2 \sum k_i^2 = \sigma^2\{b_1\}$

So $\sigma^2\{\hat{\beta}_1\}$ is $\sigma^2\{b_1\}$ plus some extra stuff.


Proof cont.

Now by showing that $\sum k_i d_i = 0$ we are almost done:

$\sum k_i d_i = \sum k_i (c_i - k_i)$
$= \sum k_i c_i - \sum k_i^2$
$= \sum c_i \left( \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2} \right) - \frac{1}{\sum (X_i - \bar{X})^2}$
$= \frac{\sum c_i X_i - \bar{X} \sum c_i}{\sum (X_i - \bar{X})^2} - \frac{1}{\sum (X_i - \bar{X})^2} = 0$

where the last step uses the constraints $\sum c_i = 0$ and $\sum c_i X_i = 1$.

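To see this with numbers, here is a short Python check using one concrete alternative set of weights, the "endpoint" slope estimator $(Y_n - Y_1)/(X_n - X_1)$ (an illustrative choice, not taken from the slides); its weights satisfy the two constraints, and the cross term $\sum k_i d_i$ with $d_i = c_i - k_i$ vanishes as derived above.

    import numpy as np

    x = np.linspace(0, 10, 30)                       # illustrative X values
    k = (x - x.mean()) / np.sum((x - x.mean()) ** 2)

    # Endpoint estimator weights: c_1 = -1/(X_n - X_1), c_n = 1/(X_n - X_1), else 0.
    # They satisfy sum c_i = 0 and sum c_i X_i = 1, so sum c_i Y_i is unbiased.
    c = np.zeros_like(x)
    c[0], c[-1] = -1.0 / (x[-1] - x[0]), 1.0 / (x[-1] - x[0])

    d = c - k
    print(np.isclose(np.sum(k * d), 0.0))    # sum k_i d_i = 0
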
Proof end

So we are left with

$\sigma^2\{\hat{\beta}_1\} = \sigma^2 \Big( \sum k_i^2 + \sum d_i^2 \Big)$
$= \sigma^2\{b_1\} + \sigma^2 \sum d_i^2$

which is minimized when $d_i = 0$ for all $i$.

If $d_i = 0$ then $c_i = k_i$.

This means that the least squares estimator $b_1$ has minimum variance among all unbiased linear estimators.
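
The conclusion can also be illustrated by simulation. Below is a minimal Monte Carlo sketch under the model assumptions above (NumPy assumed; the parameter values and the endpoint estimator are illustrative): both estimators are unbiased, but the least squares slope has the smaller sampling variance, matching $\sigma^2 / \sum (X_i - \bar{X})^2$.

    import numpy as np

    rng = np.random.default_rng(1)
    beta0, beta1, sigma = 2.0, 0.5, 1.0              # illustrative true parameters
    x = np.linspace(0, 10, 30)
    k = (x - x.mean()) / np.sum((x - x.mean()) ** 2)

    # Weights of the alternative (endpoint) unbiased linear estimator used above
    c = np.zeros_like(x)
    c[0], c[-1] = -1.0 / (x[-1] - x[0]), 1.0 / (x[-1] - x[0])

    b1_ols, b1_alt = [], []
    for _ in range(20000):
        y = beta0 + beta1 * x + rng.normal(0, sigma, size=x.size)
        b1_ols.append(np.sum(k * y))
        b1_alt.append(np.sum(c * y))

    # Both estimators are unbiased for beta1 = 0.5 ...
    print(np.mean(b1_ols), np.mean(b1_alt))
    # ... but the least squares estimator has the minimum variance,
    # in agreement with sigma^2 / sum (X_i - Xbar)^2.
    print(np.var(b1_ols), sigma**2 / np.sum((x - x.mean()) ** 2))
    print(np.var(b1_alt), sigma**2 * np.sum(c ** 2))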
