
Lecture 8

Generalized Least Squares Estimation

8.1 Generalized Linear Regression Model


When Assumption A4 is not satisfied, the model becomes

y = Xβ + ε
E[ε|X] = 0
E[εε′|X] = σ²Ω

where Ω is a positive definite matrix. We refer to this model as the generalized
regression model.
• heteroskedasticity

        ⎡ σ₁²  0    ⋯   0   ⎤
        ⎢ 0    σ₂²  ⋯   0   ⎥
  σ²Ω = ⎢ ⋮    ⋮    ⋱   ⋮   ⎥
        ⎣ 0    0    ⋯   σₙ² ⎦
• autocorrelation

        ⎡ 1     ρ₁    ⋯   ρₙ₋₁ ⎤
        ⎢ ρ₁    1     ⋯   ρₙ₋₂ ⎥
  σ²Ω = ⎢ ⋮     ⋮     ⋱   ⋮    ⎥
        ⎣ ρₙ₋₁  ρₙ₋₂  ⋯   1    ⎦
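As a concrete illustration, here is a minimal numpy sketch of both Ω patterns; the σᵢ² values are hypothetical, and the autocorrelation case assumes the AR(1) special form ρₖ = ρᵏ:

```python
import numpy as np

# A minimal sketch: the two Omega patterns above for n = 5 observations.
# The variances and rho below are hypothetical, chosen only for illustration.
n = 5

# Heteroskedasticity: Omega is diagonal with observation-specific variances.
sigma2_i = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
omega_het = np.diag(sigma2_i)

# Autocorrelation, AR(1) special case: rho_k = rho**k gives the Toeplitz
# matrix above, with rho**|i - j| in position (i, j).
rho = 0.6
omega_ar1 = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

print(omega_het)
print(omega_ar1)
```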


8.2 Consequences for Least Squares Estimation

Under Assumptions A3 (zero conditional mean) and A4 (homoskedasticity
and nonautocorrelation), we sometimes refer to the disturbances as spherical
disturbances:

E[ε|X] = 0

and

E[εε′|X] = σ²I

The OLS estimator for β in the model y = Xβ + ε is

b = (X′X)⁻¹X′y
  = β + (X′X)⁻¹X′ε

The OLS estimator b is best linear unbiased, consistent and asymptotically
normally distributed (CAN), and, if the disturbances are normally distributed,
asymptotically efficient among all CAN estimators:

E[b] = E_X[E[b|X]] = β
plim b = β

If E[ε|X] = 0, then the unbiasedness of the least squares estimator is
unaffected by the violation of Assumption A4; moreover, consistency still
holds (refer to Greene for details).
However, the sampling variance of b is

Var[b|X] = E[(b − β)(b − β)′|X]
         = E[(X′X)⁻¹X′εε′X(X′X)⁻¹|X]
         = (X′X)⁻¹X′(σ²Ω)X(X′X)⁻¹

Therefore, the sampling distribution of b is

b|X ∼ N[β, σ²(X′X)⁻¹(X′ΩX)(X′X)⁻¹]

If we still use s²(X′X)⁻¹ to estimate Var[b|X], inference is misleading, since
the variance of b is no longer σ²(X′X)⁻¹. Furthermore, s² is a biased estimator
of σ² when the disturbances are heteroskedastic and/or serially correlated.
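A quick numerical check of the two formulas above, on a simulated design with a hypothetical diagonal Ω (disturbance variances increasing in x²), shows how far the naive formula can be from the true sandwich variance:

```python
import numpy as np

# A minimal sketch: exact Var[b|X] under heteroskedasticity (sandwich form)
# versus the naive sigma^2 (X'X)^{-1} that assumes A4. The design and Omega
# are hypothetical, chosen only to make the distortion visible.
rng = np.random.default_rng(42)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])

sigma2 = 1.0
omega = np.diag(1.0 + 3.0 * X[:, 1] ** 2)   # variances rise with x^2

XtX_inv = np.linalg.inv(X.T @ X)
var_true = XtX_inv @ X.T @ (sigma2 * omega) @ X @ XtX_inv   # sandwich formula
var_naive = sigma2 * XtX_inv                                # wrong under A4 violation

print(np.diag(var_true))    # true sampling variances of b
print(np.diag(var_naive))   # typically understates the slope variance here
```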


If Ω is known, then a simple and efficient estimator based on it is available
(in such a situation, we may discard the OLS estimator). However, if Ω is
completely unknown, the OLS estimator may be the only estimator available,
and the only feasible strategy is to devise an estimator for the appropriate
asymptotic covariance matrix of b.

Robust Estimation of Asymptotic Covariance Matrices

White (1980) proposed a consistent estimator of Var[b|X], the White
heteroskedasticity-consistent estimator,

Est.Var[b|X] = (X′X)⁻¹ ( Σᵢ eᵢ²xᵢxᵢ′ ) (X′X)⁻¹

where eᵢ is the ith least squares residual. It can be used to estimate the
asymptotic covariance matrix of b.
The result of White (1980) implies that, without actually specifying the type
of heteroskedasticity, we can still make appropriate inferences based on the
results of least squares.
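A minimal sketch of the White estimator on simulated heteroskedastic data follows; the data-generating process is hypothetical, while the estimator itself uses only the OLS residuals:

```python
import numpy as np

# White (1980) HC0 estimator: (X'X)^{-1} (sum_i e_i^2 x_i x_i') (X'X)^{-1}.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Hypothetical DGP: error standard deviation grows with |x|, violating A4.
eps = rng.normal(size=n) * (0.5 + np.abs(X[:, 1]))
y = X @ np.array([1.0, 2.0]) + eps

b = np.linalg.solve(X.T @ X, X.T @ y)    # OLS coefficients
e = y - X @ b                            # OLS residuals

XtX_inv = np.linalg.inv(X.T @ X)
meat = (X * e[:, None] ** 2).T @ X       # sum_i e_i^2 x_i x_i'
var_white = XtX_inv @ meat @ XtX_inv     # robust covariance estimate

s2 = e @ e / (n - X.shape[1])
var_naive = s2 * XtX_inv                 # conventional s^2 (X'X)^{-1}, misleading here

print(np.sqrt(np.diag(var_white)))       # robust standard errors
print(np.sqrt(np.diag(var_naive)))       # naive standard errors
```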

8.3 Efficient Estimation


Efficient estimation of β requires knowledge of Ω.

8.3.1 Generalized Least Squares (GLS)


Since Ω is a positive definite symmetric matrix, it can be factored into

Ω = CΛC′

where the columns of C are the characteristic vectors of Ω and the charac-
teristic roots of Ω are arrayed in the diagonal matrix Λ. Let Λ^{1/2} be the
diagonal matrix with ith diagonal element √λᵢ, and let T = CΛ^{1/2}. Then
Ω = TT′. Let P = Λ^{−1/2}C′, so that P′P = Ω⁻¹.
Given the model

y = Xβ + ε

transform the data by multiplying by P to obtain

Py = PXβ + Pε

or

y∗ = X∗β + ε∗


The variance matrix of ε∗ is

E[ε∗ε∗′|X] = Pσ²ΩP′ = σ²I

Since Ω is known, y∗ and X∗ are observable data; hence the OLS estimator
computed from y∗ and X∗ is efficient.
The generalized least squares (GLS) estimator of β,

β̂ = (X∗′X∗)⁻¹X∗′y∗
  = (X′P′PX)⁻¹X′P′Py
  = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y

is the efficient estimator of β.
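The construction can be checked numerically. The sketch below (simulated data with a hypothetical AR(1) Ω) forms P = Λ^{−1/2}C′ from the spectral factorization, verifies PΩP′ = I, and confirms that OLS on the transformed data reproduces the closed-form GLS estimator:

```python
import numpy as np

# A minimal sketch: GLS computed two ways, via the transformation
# P = Lambda^{-1/2} C' and via (X' Omega^{-1} X)^{-1} X' Omega^{-1} y.
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])

rho = 0.7   # hypothetical AR(1) Omega
omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Draw eps with covariance omega and generate y.
eps = np.linalg.cholesky(omega) @ rng.normal(size=n)
y = X @ np.array([1.0, 2.0]) + eps

# Factor Omega = C Lambda C' and form P = Lambda^{-1/2} C'.
lam, C = np.linalg.eigh(omega)
P = np.diag(lam ** -0.5) @ C.T
assert np.allclose(P @ omega @ P.T, np.eye(n))   # P Omega P' = I

# OLS on the transformed data equals the closed-form GLS estimator.
y_star, X_star = P @ y, P @ X
b_transformed = np.linalg.solve(X_star.T @ X_star, X_star.T @ y_star)

omega_inv = np.linalg.inv(omega)
b_gls = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
print(np.allclose(b_transformed, b_gls))          # True
```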

8.3.2 Maximum Likelihood Estimation When Ω Is Known


Assume that the disturbances are multivariate normally distributed; the log-
likelihood function is

ln L = −(n/2) ln(2π) − (1/2) ln|σ²Ω| − (1/2) ε′(σ²Ω)⁻¹ε
     = −(n/2) ln(2π) − (n/2) ln σ² − (1/2) ln|Ω| − (1/(2σ²)) (y − Xβ)′Ω⁻¹(y − Xβ)

We can show that, when Ω is known, the maximum likelihood estimator of β
is the vector that minimizes the generalized sum of squares,

S∗(β) = (y − Xβ)′Ω⁻¹(y − Xβ)

Result: with normally distributed disturbances, the generalized least squares
estimator is also the maximum likelihood estimator.
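To verify this claim, differentiate the generalized sum of squares with respect to β and set the result to zero:

∂S∗(β)/∂β = −2X′Ω⁻¹(y − Xβ) = 0

so that β̂ = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y, which is exactly the GLS estimator of Section 8.3.1.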

8.4 Estimation When Ω Is Unknown


8.4.1 Feasible Generalized Least Squares
Suppose we know the structure of Ω, that is, Ω = Ω(θ). First, we obtain θ̂,
a consistent estimator of θ, and then use

Ω̂ = Ω(θ̂)

Finally, the feasible generalized least squares (FGLS) estimator is

β̃ = (X′Ω̂⁻¹X)⁻¹X′Ω̂⁻¹y
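A minimal sketch of the two-step procedure for the AR(1) case on simulated data (here θ is the single parameter ρ, estimated from the lagged OLS residuals; the true ρ = 0.7 is hypothetical and unknown to the estimator):

```python
import numpy as np

# Two-step FGLS when Omega = Omega(theta) has the AR(1) pattern.
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Hypothetical DGP with AR(1) disturbances, rho = 0.7.
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.7 * eps[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + eps

# Step 1: OLS residuals give a consistent estimate of rho.
e = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
rho_hat = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])

# Step 2: plug rho_hat into Omega(theta) and apply the GLS formula.
omega_hat = rho_hat ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
omega_inv = np.linalg.inv(omega_hat)
b_fgls = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
print(rho_hat, b_fgls)
```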


8.4.2 Maximum Likelihood Estimation When Ω Is Unknown

To make the estimation problem tractable, we have to assume that Ω =
Ω(θ), where θ is a vector of a small number of parameters and θ is not a
function of any elements of β.
The MLEs of β and σ² will be the FGLS estimators, given the assumption
of normally distributed disturbances.

References
Greene, W. H., 2003, Econometric Analysis, 5th ed., Prentice Hall. Chapter
10.
Ruud, P. A., 2000, An Introduction to Classical Econometric Theory, 1st ed.,
Oxford University Press. Chapter 18.

© Yin-Feng Gau 2002, ECONOMETRICS
