ECONOMETRICS
Lecture 1: Classical Regression Model

Chrispin Mphuka
University of Zambia

January 2011

Assumptions

Generic form of the linear regression model: the population regression function is

    y = f(x_1, x_2, ..., x_K) + \varepsilon = x_1\beta_1 + x_2\beta_2 + \cdots + x_K\beta_K + \varepsilon

The classical regression model is a set of joint distributions satisfying Assumptions 1.1-1.4.

Classical Linear Regression Assumptions

Assumption 1.1 (Linearity).

    y_i = x_{i1}\beta_1 + x_{i2}\beta_2 + \cdots + x_{iK}\beta_K + \varepsilon_i    (i = 1, 2, ..., n)    (1)

where the \beta's are unknown parameters to be estimated and \varepsilon_i is the unobserved error term, with certain properties to be specified below.
- The RHS, x_{i1}\beta_1 + x_{i2}\beta_2 + \cdots + x_{iK}\beta_K, is called the regression function.
- The \beta's are regression coefficients; they represent the marginal and separate effects of the regressors.
- E.g., \beta_2 represents the change in the dependent variable when the second regressor increases by one unit, holding the other variables constant, i.e., \partial y_i / \partial x_{i2} = \beta_2.
- Linearity implies that the marginal effects do not depend on the levels of the regressors.
- Example (wage equation):

    \log(WAGE_i) = \beta_1 + \beta_2 S_i + \beta_3 TENURE_i + \beta_4 EXPR_i + \varepsilon_i
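As a quick numerical illustration of linearity, here is a minimal Python sketch of the wage equation; the coefficient values and data below are made up purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated regressors: schooling, tenure, and experience (hypothetical units)
S = rng.uniform(8, 18, size=n)
TENURE = rng.uniform(0, 20, size=n)
EXPR = rng.uniform(0, 30, size=n)
eps = rng.normal(0.0, 0.3, size=n)

# Made-up coefficient values
b1, b2, b3, b4 = 1.0, 0.08, 0.02, 0.01
log_wage = b1 + b2 * S + b3 * TENURE + b4 * EXPR + eps

# Linearity: one more unit of S changes log(WAGE) by exactly b2,
# whatever the levels of S, TENURE, and EXPR
log_wage_new = b1 + b2 * (S + 1) + b3 * TENURE + b4 * EXPR + eps
print(np.allclose(log_wage_new - log_wage, b2))  # True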

Classical Linear Regression Assumptions

Define the K-dimensional column vectors

    X_i = (x_{i1}, x_{i2}, ..., x_{iK})'    and    \beta = (\beta_1, \beta_2, ..., \beta_K)'

Thus X_i'\beta = x_{i1}\beta_1 + x_{i2}\beta_2 + \cdots + x_{iK}\beta_K, and

    y_i = X_i'\beta + \varepsilon_i    (i = 1, 2, ..., n)
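In code, X_i'\beta is just an inner product. A sketch with K = 3 and made-up numbers:

import numpy as np

X_i = np.array([1.0, 12.0, 5.0])    # (x_{i1}, x_{i2}, x_{i3}), hypothetical values
beta = np.array([0.5, 0.08, 0.02])  # (beta_1, beta_2, beta_3), hypothetical values

# X_i' beta = x_{i1} beta_1 + x_{i2} beta_2 + x_{i3} beta_3
lhs = X_i @ beta
rhs = X_i[0] * beta[0] + X_i[1] * beta[1] + X_i[2] * beta[2]
print(np.isclose(lhs, rhs))  # True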

Classical Linear Regression Assumptions

Also define the stacked vectors and matrix

    y = (y_1, ..., y_n)',    \varepsilon = (\varepsilon_1, ..., \varepsilon_n)',    X = (X_1, ..., X_n)'

so that X is the n x K matrix whose i-th row is X_i', with (i, k) element x_{ik}. Therefore Assumption 1.1 can be written as

    y = X\beta + \varepsilon
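A sketch of the stacked matrix form with simulated data (n = 5, K = 2); row i of y = X\beta + \varepsilon reproduces the observation-level equation:

import numpy as np

rng = np.random.default_rng(1)
n, K = 5, 2

X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=n)])  # n x K design matrix
beta = np.array([2.0, 0.5])                                    # hypothetical coefficients
eps = rng.normal(size=n)

y = X @ beta + eps  # Assumption 1.1 in matrix form

# Row i reproduces y_i = X_i' beta + eps_i
i = 3
print(np.isclose(y[i], X[i] @ beta + eps[i]))  # True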

Classical Linear Regression Assumptions

Assumption 1.2 (Strict Exogeneity).

    E(\varepsilon_i | X) = 0    (i = 1, 2, ..., n)

Put differently, E(\varepsilon_i | X_1, ..., X_n) = 0    (i = 1, 2, ..., n).

Implications of strict exogeneity:
- The unconditional mean of the error term is zero, i.e., E(\varepsilon_i) = 0 (i = 1, 2, ..., n): by the law of total expectations, E(\varepsilon_i) = E[E(\varepsilon_i | X)] = 0.
- If the cross moment E(xy) = 0, then x is said to be orthogonal to y. Under strict exogeneity the regressors are orthogonal to the error terms, i.e., E(x_{jk}\varepsilon_i) = 0 (i, j = 1, ..., n; k = 1, ..., K), since E(x_{jk}\varepsilon_i) = E[x_{jk} E(\varepsilon_i | X)] = 0.
- Strict exogeneity thus requires that the regressors be orthogonal not only to the error term from the same observation but also to the error terms from all other observations.
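A Monte Carlo sketch of the orthogonality implication, using simulated data in which strict exogeneity holds by construction: the sample averages of x_{jk}\varepsilon_i should be near zero for every (i, j, k).

import numpy as np

rng = np.random.default_rng(2)
R, n, K = 10000, 4, 2  # R independent replications of a small sample

# Regressors and errors drawn independently, so E(eps_i | X) = 0 by construction
X = rng.normal(size=(R, n, K))
eps = rng.normal(size=(R, n))

# Monte Carlo estimate of E(x_{jk} eps_i) for every (i, j, k)
cross = np.einsum('rjk,ri->ijk', X, eps) / R
print(np.abs(cross).max())  # small, and shrinks toward 0 as R grows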

Classical Linear Regression Assumptions

Assumption 1.3 (No Multicollinearity).
- The rank of the n x K data matrix X is K with probability 1.
- This implies that X has full column rank: no regressor is an exact linear combination of the others.
- It also implies that n >= K, i.e., there must be at least as many observations as regressors.
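A rank check with NumPy on simulated data: the third column of X_bad below is an exact linear combination of the first two, so Assumption 1.3 fails for it.

import numpy as np

rng = np.random.default_rng(3)
n = 50

x1 = np.ones(n)
x2 = rng.uniform(0, 10, size=n)
x3 = 2 * x1 + 3 * x2  # exact linear dependence

X_bad = np.column_stack([x1, x2, x3])
X_ok = np.column_stack([x1, x2, rng.normal(size=n)])

print(np.linalg.matrix_rank(X_bad))  # 2 < K = 3: multicollinearity
print(np.linalg.matrix_rank(X_ok))   # 3 = K: full column rank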

Classical Linear Regression Assumptions

Assumption 1.4 (Spherical Error Variance).
- (Homoskedasticity) E(\varepsilon_i^2 | X) = \sigma^2 > 0    (i = 1, 2, ..., n).
- Note that Var(\varepsilon_i | X) = E(\varepsilon_i^2 | X) - [E(\varepsilon_i | X)]^2 = E(\varepsilon_i^2 | X), since E(\varepsilon_i | X) = 0 by Assumption 1.2.
- The homoskedasticity assumption says that the conditional second moment is a constant.
- (No correlation between errors) E(\varepsilon_i \varepsilon_j | X) = 0    (i, j = 1, 2, ..., n; i \neq j).
- The two assumptions combined imply E(\varepsilon\varepsilon' | X) = \sigma^2 I_n, or equivalently Var(\varepsilon | X) = \sigma^2 I_n.
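A sketch of the spherical error covariance \sigma^2 I_n, plus a simulated check that homoskedastic, uncorrelated errors produce a near-diagonal sample second-moment matrix (the sigma value is made up):

import numpy as np

rng = np.random.default_rng(4)
n, R, sigma = 4, 100000, 1.5

# Spherical errors: Var(eps | X) = sigma^2 I_n
V = sigma ** 2 * np.eye(n)
print(V)

# R draws of the n-vector eps with constant variance and no correlation
eps = rng.normal(0.0, sigma, size=(R, n))
print(np.round(eps.T @ eps / R, 2))  # approximately sigma^2 I_n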

Classical Linear Regression Assumptions

Random Sample
- Assume (y_i, X_i) is a random sample, i.e., (y_i, X_i) is i.i.d. across observations.
- Then
    E(\varepsilon_i | X) = E(\varepsilon_i | X_i)
    E(\varepsilon_i^2 | X) = E(\varepsilon_i^2 | X_i)
    E(\varepsilon_i \varepsilon_j | X) = E(\varepsilon_i | X_i) E(\varepsilon_j | X_j)    (i \neq j)
- That is, under a random sample the conditioning in Assumptions 1.2 and 1.4 can be done observation by observation.
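A sketch of drawing a random sample under a simple, made-up data-generating process: each (y_i, X_i) pair is an independent draw from the same joint distribution.

import numpy as np

rng = np.random.default_rng(5)
n, K = 100, 2
beta = np.array([1.0, -0.5])  # hypothetical coefficients

X = rng.normal(size=(n, K))  # rows X_i drawn i.i.d.
eps = rng.normal(size=n)     # errors i.i.d. and independent of X
y = X @ beta + eps           # the pairs (y_i, X_i) are i.i.d. across i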

Algebra of Least Squares
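As a preview of this algebra, the OLS coefficient vector solves the normal equations (X'X)b = X'y, i.e., b = (X'X)^{-1}X'y. A minimal sketch on simulated data (coefficient values are made up):

import numpy as np

rng = np.random.default_rng(6)
n = 200
beta = np.array([1.0, 2.0, -0.5])  # hypothetical true coefficients

X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ beta + rng.normal(size=n)

# Solve (X'X) b = X'y rather than inverting X'X explicitly
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)  # close to beta for moderate n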

Working with color

- Beamer color themes define the use of color in a presentation. This presentation uses the default color theme.
- To use a different color theme, add the command \usecolortheme{colorthemename} to the preamble of your document, replacing colorthemename with the name of the desired color theme.

Working with fonts

- Beamer font themes define the use of fonts in a presentation. This presentation uses the default font theme.
- To use a different font theme, add the command \usefonttheme{fontthemename} to the preamble of your document, replacing fontthemename with the name of the desired font theme.

Adding graphics

- Frames can contain graphics and animations.
- Columns provide support for laying out graphics and text.
- See examples in SWSamples/PackageSample-beamer.tex in your program installation.

Setting class options

Use class options to:
- Set the base font size for the presentation.
- Set text alignment.
- Set equation numbering.
- Set print quality.
- Format displayed equations.
- Create a presentation, handout, or set of transparencies.
- Hide or display notes.

Setting class options: Notes

- This shell is originally supplied with the notes class option set to Show.
- This frame contains a note so that you can test the notes options.
- To see the note, scroll to the next frame.

Note (2015-04-02), section "The Classical Multiple Linear Regression Model", frame "Setting class options": Here is a Beamer note.

Learn more about Beamer

- This shell and the associated fragments provide basic support for Beamer in SWP and SW.
- The current support is a beta version.
- To learn more about Beamer, see SWSamples/PackageSample-beamer.tex in your program installation.
- For complete information, read the BeamerUserGuide.pdf manual provided with this shell.
- For support, contact support@mackichan.com.
