
In this lesson we take a look at autocorrelation and try to answer the following questions:

1. What is the nature of autocorrelation?
2. What are the consequences of autocorrelation?
3. How can autocorrelation be detected in an application?
4. How can the problem of autocorrelation be remedied?

I. NATURE OF AUTOCORRELATION

No-autocorrelation assumption: $\text{cov}(u_i, u_j) = 0$, or $E(u_i u_j) = 0$, for $i \neq j$.

Violation of no autocorrelation: $\text{cov}(u_i, u_j) \neq 0$, or $E(u_i u_j) \neq 0$, for some $i \neq j$.

Patterns of autocorrelation and non-autocorrelation:

[Figure: patterns of autocorrelation and non-autocorrelation in residual plots]

1. Impure Autocorrelation

Autocorrelation caused by a specification error, such as an omitted variable or an incorrect functional form.

Omitted variable: the true equation contains $X_2$, but $X_2$ is omitted from the estimated equation:
$$Y_t = \beta_0 + \beta_1 X_{1t} + \beta_2 X_{2t} + u_t$$
$$Y_t = \beta_0 + \beta_1 X_{1t} + v_t, \qquad v_t = \beta_2 X_{2t} + u_t$$
$v_t$ is not a classical error term. Instead, it is a function of one of the independent variables, $X_2$, so $v_t$ can be autocorrelated even if the true error term $u_t$ is not.

Incorrect functional form: the correct model in a cost-output study is
$$\text{Marginal cost}_t = \beta_1 + \beta_2\,\text{Output}_t + \beta_3\,\text{Output}_t^2 + u_t$$
but the estimated model is
$$\text{Marginal cost}_t = \beta_1 + \beta_2\,\text{Output}_t + v_t$$
The error term $v_t = \beta_3\,\text{Output}_t^2 + u_t$ contains the systematic effect of the $\text{Output}^2$ term on marginal cost, so $v_t$ will reflect autocorrelation because of the use of an incorrect functional form.

2. Pure Autocorrelation

Occurs in a correctly specified equation: it is caused by the underlying distribution of the error term itself. It often takes place in time-series applications.

Since $\text{cov}(u_t, u_{t+s}) \neq 0$, or $E(u_t u_{t+s}) \neq 0$ ($s \neq 0$), is too general, we must assume a mechanism that generates the error term $u_t$.

Types of autocorrelation

First-order autoregressive scheme, AR(1): the current value of the error term is a function of the previous value of the error term,
$$u_t = \rho u_{t-1} + e_t, \qquad -1 < \rho < 1$$
where $e_t$ is a stochastic error term that satisfies the standard OLS assumptions, and $\rho$ is the coefficient of autocovariance (or first-order autocorrelation coefficient):

$\rho = 0$: no autocorrelation
$\rho > 0$: positive autocorrelation
$\rho < 0$: negative autocorrelation

[Figure: residual pattern under positive autocorrelation]

[Figure: residual pattern under negative autocorrelation]

Second-order autoregressive scheme, AR(2):
$$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + e_t$$

pth-order autoregressive scheme, AR(p):
$$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \dots + \rho_p u_{t-p} + e_t$$
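To make these schemes concrete, here is a minimal Python sketch (not part of the original lesson; the values 0.8 and -0.8 for $\rho$ are arbitrary illustrations) that simulates AR(1) errors and checks their first-order correlation:

```python
# Simulate AR(1) errors u_t = rho * u_{t-1} + e_t to contrast
# positive and negative autocorrelation.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, n=100):
    """Generate n error terms following u_t = rho * u_{t-1} + e_t."""
    e = rng.standard_normal(n)   # e_t: classical (white-noise) error term
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]
    return u

u_pos = simulate_ar1(rho=0.8)    # positive autocorrelation: long smooth runs
u_neg = simulate_ar1(rho=-0.8)   # negative autocorrelation: zig-zag pattern
print(np.corrcoef(u_pos[1:], u_pos[:-1])[0, 1])  # close to 0.8
print(np.corrcoef(u_neg[1:], u_neg[:-1])[0, 1])  # close to -0.8
```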
II. CONSEQUENCES OF AUTOCORRELATION

1. Pure autocorrelation does not cause bias in the coefficient estimates, but OLS is no longer the minimum-variance estimator.
2. Autocorrelation causes the OLS estimates of $\text{var}(\hat\beta_j)$ to be biased, leading to unreliable hypothesis testing with the t-test and F-test.
3. The residual variance $\hat\sigma^2$ is likely to underestimate the true $\sigma^2$ and, as a result, overestimate $R^2$.

III. DETECTION

1. Graphical method

Plot the residuals against time (time sequence plot).

[Figure: time sequence plot of the residuals (uhat1), 1960-1985]

Plot $\hat u_t$ against $\hat u_{t-1}$.

[Figure: scatter plot of uhat1 against its lag, uhat1_1]
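A minimal Python sketch of both diagnostic plots, using simulated positively autocorrelated residuals in place of the lesson's uhat1 series:

```python
# Two graphical checks: residuals against time, and u_t against u_{t-1}.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
uhat = np.zeros(50)
for t in range(1, 50):                  # simulated positively autocorrelated residuals
    uhat[t] = 0.8 * uhat[t - 1] + rng.standard_normal()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(uhat)                          # time sequence plot: long runs on one side
ax1.axhline(0, color="gray")            # of zero suggest positive autocorrelation
ax1.set_xlabel("time"); ax1.set_ylabel("uhat1")

ax2.scatter(uhat[:-1], uhat[1:])        # an upward-sloping cloud indicates rho > 0
ax2.set_xlabel("uhat1_1"); ax2.set_ylabel("uhat1")
plt.show()
```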

2. Durbin-Watson test

d=

(u
t =2

t =n

ut 1 ) 2
2 t

u
t =1

t =n

Rewritten in an approximate form:

$$d \approx 2(1 - \hat\rho)$$
Since $-1 \le \hat\rho \le 1$, it follows that $0 \le d \le 4$. Decisions: $d \approx 2$ indicates no first-order autocorrelation; $d$ close to 0 indicates positive autocorrelation; $d$ close to 4 indicates negative autocorrelation.
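A minimal Python sketch, on simulated residuals, that computes d by the formula above and checks it against statsmodels' durbin_watson:

```python
# Compute the Durbin-Watson d statistic by hand and via statsmodels.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
uhat = np.cumsum(rng.standard_normal(50))   # random walk: strong positive autocorrelation

# d = sum_{t=2..n} (u_t - u_{t-1})^2 / sum_{t=1..n} u_t^2
d = np.sum(np.diff(uhat) ** 2) / np.sum(uhat ** 2)
assert np.isclose(d, durbin_watson(uhat))   # statsmodels computes the same statistic

rho_hat = 1 - d / 2   # from d ~ 2(1 - rho): d is near 0 here, so rho_hat is near 1
print(d, rho_hat)
```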

Important assumptions underlying the d statistic:
1. The regression model includes an intercept term.
2. The explanatory variables, the X's, are non-stochastic, or fixed in repeated sampling.
3. The disturbances are generated by the first-order autoregressive scheme $u_t = \rho u_{t-1} + e_t$; the test cannot detect higher-order autoregressive schemes.
4. The error term is assumed to be normally distributed.
5. The regression model does not include lagged value(s) of the dependent variable among the explanatory variables.
6. There are no missing observations in the data.

3. The Breusch-Godfrey LM test (BG test)

Used in large samples.

It allows for (1) stochastic regressors, such as lagged values of the regressand, and (2) higher-order autoregressive schemes, such as AR(1), AR(2), ..., AR(p).

Model:

$$Y_t = \beta_1 + \beta_2 X_t + u_t$$
Assume that the error term follows the pth-order autoregressive scheme, AR(p):

$$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \dots + \rho_p u_{t-p} + e_t$$


Null hypothesis of no autocorrelation of any order:
$$H_0: \rho_1 = \rho_2 = \dots = \rho_p = 0$$
Chi-squared version:
1. Estimate the original model by OLS and obtain the residuals $\hat u_t$.
2. Run the auxiliary regression and obtain its $R^2$:
$$\hat u_t = \alpha_1 + \alpha_2 X_t + \hat\rho_1 \hat u_{t-1} + \hat\rho_2 \hat u_{t-2} + \dots + \hat\rho_p \hat u_{t-p} + e_t$$
3. If $(n - p)R^2 > \chi^2(p)$, then reject $H_0$: there is autocorrelation of order p.
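A minimal Python sketch of the BG test using statsmodels' acorr_breusch_godfrey on simulated data (the variable names and AR(1) design are illustrative); it returns both the chi-squared (LM) version above and the F version described next:

```python
# Breusch-Godfrey test on a simple regression with AR(1) errors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(3)
x = rng.standard_normal(100)
u = np.zeros(100)
for t in range(1, 100):               # AR(1) errors, so the test has something to find
    u[t] = 0.7 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u

results = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(results, nlags=2)
# lm_stat is the chi-squared version, f_stat the F version; small p-values
# reject H0 of no autocorrelation up to order p = 2.
print(lm_pval, f_pval)
```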

F version:
1. Estimate the original model by OLS and obtain the residuals $\hat u_t$.
2. Run the unrestricted regression and obtain $R_1^2$ from
$$\hat u_t = \alpha_1 + \alpha_2 X_t + \hat\rho_1 \hat u_{t-1} + \hat\rho_2 \hat u_{t-2} + \dots + \hat\rho_p \hat u_{t-p} + e_t$$
3. Run the restricted regression and obtain $R_2^2$ from
$$\hat u_t = \alpha_1 + \alpha_2 X_t + e_t$$
4. If
$$F = \frac{(R_1^2 - R_2^2)/p}{(1 - R_1^2)/(n - k^*)} > F(p,\, n - k^*),$$
then reject $H_0$: there is autocorrelation of order p. Here $k^*$ = number of regressors + p + 1 (the original regressors, the p lagged residuals, and the intercept).

IV. REMEDIAL MEASURES FOR PURE AUTOCORRELATION

Model:

$$Y_t = \beta_1 + \beta_2 X_t + u_t$$
Assume the error term follows the AR(1) scheme. Usually, $\rho$ is not known.

1. First Difference Method

As Maddala suggests, this can be used when $d < R^2$. Suppose $\rho = 1$; then

$$u_t = u_{t-1} + e_t$$

$$Y_t = \beta_1 + \beta_2 X_t + u_t$$
$$Y_{t-1} = \beta_1 + \beta_2 X_{t-1} + u_{t-1}$$
Subtracting,
$$Y_t - Y_{t-1} = \beta_2 (X_t - X_{t-1}) + (u_t - u_{t-1})$$
or, running a no-intercept model,
$$\Delta Y_t = \beta_2 \Delta X_t + e_t$$
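A minimal Python sketch of the first-difference remedy on simulated random-walk errors (the coefficient values 1 and 2 are arbitrary):

```python
# First-difference remedy: regress delta-Y on delta-X with no intercept
# (the intercept cancels when rho = 1).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.standard_normal(100)
u = np.cumsum(rng.standard_normal(100))   # rho = 1 (random-walk) errors
y = 1.0 + 2.0 * x + u

dy = np.diff(y)                      # Y_t - Y_{t-1}
dx = np.diff(x)                      # X_t - X_{t-1}
diff_model = sm.OLS(dy, dx).fit()    # no add_constant: a no-intercept model
print(diff_model.params)             # estimate of beta_2 (close to 2)
```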

2. Estimate $\rho$ based on the Durbin-Watson d statistic (in reasonably large samples)

Use
$$\hat\rho \approx 1 - \frac{d}{2}$$
Transform the data as $Y_t^* = Y_t - \hat\rho\, Y_{t-1}$ and $X_t^* = X_t - \hat\rho\, X_{t-1}$, then run
$$Y_t^* = \beta_1^* + \beta_2^* X_t^* + e_t^*$$
where $\beta_1^* = \beta_1 (1 - \hat\rho)$, $e_t^* = u_t - \hat\rho\, u_{t-1}$, and $\beta_2^* = \beta_2$.

This is feasible generalized least squares (FGLS).

One observation is lost because the first observation has no predecessor. The Prais-Winsten transformation retains the first observation: the first observation on Y and X is transformed as follows:
$$Y_1^* = \sqrt{1 - \hat\rho^2}\; Y_1 \quad \text{and} \quad X_1^* = \sqrt{1 - \hat\rho^2}\; X_1$$
3. $\rho$ Estimated from the Residuals

Run the following no-intercept regression to get $\hat\rho$:

$$\hat u_t = \rho\, \hat u_{t-1} + v_t$$
Then use FGLS as in the previous method.
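A minimal Python sketch of this two-step procedure on simulated AR(1) errors (the design values are arbitrary): estimate $\hat\rho$ from the lagged-residual regression, then apply the quasi-difference transformation:

```python
# Estimate rho from the residuals, then run the FGLS (quasi-differenced) regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.standard_normal(100)
u = np.zeros(100)
for t in range(1, 100):
    u[t] = 0.7 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u

uhat = sm.OLS(y, sm.add_constant(x)).fit().resid
rho_hat = sm.OLS(uhat[1:], uhat[:-1]).fit().params[0]   # u_t = rho u_{t-1} + v_t

y_star = y[1:] - rho_hat * y[:-1]       # Y*_t = Y_t - rho_hat * Y_{t-1}
x_star = x[1:] - rho_hat * x[:-1]       # X*_t = X_t - rho_hat * X_{t-1}
fgls = sm.OLS(y_star, sm.add_constant(x_star)).fit()
print(rho_hat, fgls.params)  # intercept estimates beta_1(1 - rho); slope estimates beta_2
```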
4. Iterative Methods of Estimating $\rho$

Cochrane-Orcutt iterative procedure
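The procedure alternates between estimating $\rho$ from the residuals and re-estimating the quasi-differenced regression until $\rho$ converges. A minimal sketch using statsmodels' GLSAR, whose iterative_fit works in this spirit (simulated data, not the lesson's example):

```python
# Iterative rho estimation in the spirit of Cochrane-Orcutt, via GLSAR.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = rng.standard_normal(100)
u = np.zeros(100)
for t in range(1, 100):
    u[t] = 0.7 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u

model = sm.GLSAR(y, sm.add_constant(x), rho=1)  # rho=1 here means one AR lag
results = model.iterative_fit(maxiter=10)       # iterate until rho converges
print(model.rho, results.params)                # estimated rho and coefficients
```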


5. Newey-West Method of Correcting the OLS Standard Errors

Run OLS but correct the standard errors for autocorrelation. Only valid in large samples.
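A minimal sketch of Newey-West (HAC) standard errors via statsmodels on simulated data; the maxlags choice is illustrative:

```python
# Keep the OLS point estimates but replace the standard errors with HAC
# (Newey-West) ones, which are robust to autocorrelation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.standard_normal(200)                    # large sample, as the method requires
u = np.zeros(200)
for t in range(1, 200):
    u[t] = 0.7 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u

ols = sm.OLS(y, sm.add_constant(x))
nw = ols.fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(nw.bse)   # autocorrelation-robust standard errors; coefficients unchanged
```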
