Statistical Inference
4.1 The distribution of the OLS estimator
$\dfrac{\hat\beta_i - \beta_i}{se(\hat\beta_i)} \sim t(n-k)$
4.2 Testing hypotheses about one of the regression coefficients
4.2.1. Confidence interval
4.2.2. T-statistic and Student t distribution
4.2.1. Confidence interval
Under the CRM assumptions, we can easily construct a confidence interval (CI) for the population parameter βi. Confidence intervals are also called interval estimates because they provide a range of likely values for the population parameter, not just a point estimate.
$t = \dfrac{\hat\beta_i - \beta_i}{se(\hat\beta_i)} \sim t(n-k)$
Then the CI is:
$\hat\beta_i - t_{\alpha/2}\,se(\hat\beta_i) \le \beta_i \le \hat\beta_i + t_{\alpha/2}\,se(\hat\beta_i), \qquad i = 1,\dots,3$
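As a quick numerical sketch of this interval (all numbers here are hypothetical: an estimate of 0.50 with standard error 0.10, and 2.06 as the 2.5% critical value of t with 25 df):

```python
def confidence_interval(beta_hat, se, t_crit):
    """Two-sided CI: beta_hat +/- t_crit * se(beta_hat)."""
    half_width = t_crit * se
    return (beta_hat - half_width, beta_hat + half_width)

# Hypothetical estimates: beta_hat = 0.50, se = 0.10,
# t_crit = 2.06 (the 2.5% critical value of t with 25 df).
low, high = confidence_interval(0.50, 0.10, 2.06)
print(round(low, 3), round(high, 3))  # 0.294 0.706
```

A wider interval (larger se or larger critical value) signals a less precise point estimate.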
Testing against a two-sided alternative
Example: Determinants of college GPA (Example 4.3, p. 129)
Example 2: Determinants of college GPA
A reminder on the language of
classical hypothesis testing
• When H0 is not rejected, say "We fail to reject H0 at the x% level"; do not say "H0 is accepted at the x% level".
• Statistical significance vs. economic significance: statistical significance is determined by the size of the t-statistic, whereas economic significance is related to the size and sign of the estimated coefficients.
Testing against a one-sided alternative
Testing hypotheses on the coefficients, H0: β_j = 0:
• Two tail (H1: β_j ≠ 0): reject H0 if |t0| > t_{α/2}(n−k)
• Right tail (H1: β_j > 0): reject H0 if t0 > t_α(n−k)
• Left tail (H1: β_j < 0): reject H0 if t0 < −t_α(n−k)
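The three decision rules can be collected into one small helper; this is an illustrative sketch, with `reject_h0` a name chosen here, not a library function:

```python
def reject_h0(t0, t_crit, tail):
    """Decision rule for H0: beta_j = 0 at significance level alpha.
    tail: 'two'   -> H1: beta_j != 0, reject if |t0| > t_{alpha/2}
          'right' -> H1: beta_j >  0, reject if  t0 >  t_{alpha}
          'left'  -> H1: beta_j <  0, reject if  t0 < -t_{alpha}
    """
    if tail == 'two':
        return abs(t0) > t_crit
    if tail == 'right':
        return t0 > t_crit
    if tail == 'left':
        return t0 < -t_crit
    raise ValueError("tail must be 'two', 'right' or 'left'")

# With t0 = -2.5: two-tailed at 5% (critical 1.96) rejects,
# but a right-tailed test (critical 1.645) does not.
print(reject_h0(-2.5, 1.96, 'two'))    # True
print(reject_h0(-2.5, 1.645, 'right')) # False
```

Note that the same t0 can reject under one alternative and not another, which is why the alternative must be chosen before looking at the data.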
4.3 Testing of joint hypotheses
• 4.3.1 Testing hypotheses on two and more
coefficients
• 4.3.2 F-statistic and Fisher F distribution
4.3.1 Testing the Overall Significance of
the Sample Regression
For Y_i = β1 + β2X2i + β3X3i + … + βkXki + ui,
to test the hypothesis
H0: β2 = β3 = … = βk = 0 (all slope coefficients are simultaneously zero)
(this is also a test of the significance of R²)
H1: not all slope coefficients are simultaneously zero
Option 1: t-test
• If the t variable exceeds the critical t value at the designated level of significance for the given df, then you can reject the null hypothesis; otherwise, you do not reject it.
Option 2: F test
$F = \dfrac{R^2/(k-1)}{(1-R^2)/(n-k)}$   (8.5.7)
(k = total number of parameters to be estimated, including the intercept)
If F > F critical = F_α(k−1, n−k), reject H0; otherwise do not reject it.
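Eq. (8.5.7) is straightforward to evaluate; a sketch with hypothetical figures (R² = 0.60 from a regression with n = 30 observations and k = 3 parameters):

```python
def overall_F(r2, n, k):
    """F-statistic for H0: all slope coefficients are zero (Eq. 8.5.7).
    k counts all parameters including the intercept, so the test has
    (k - 1, n - k) degrees of freedom."""
    return (r2 / (k - 1)) / ((1 - r2) / (n - k))

# Hypothetical fit: R^2 = 0.60, n = 30, k = 3.
print(round(overall_F(0.60, 30, 3), 2))  # 20.25
```

Compare the result with F_α(k−1, n−k); here that is F_α(2, 27).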
Example : Testing the Overall Significance of
the Sample Regression
4.3.2 F-statistic and Fisher F distribution
4.4 Testing single restrictions involving multiple coefficients
a. Testing the Equality of Two Regression Coefficients
• Suppose in the multiple regression
Yi = β1 + β2X2i + β3X3i + β4X4i + ui
we want to test the hypotheses
H0: β3 = β4 or (β3 − β4) = 0
H1: β3 ≠ β4 or (β3 − β4) ≠ 0
that is, the two slope coefficients β3 and β4 are equal.
a. Testing the Equality of Two Regression Coefficients
H0: β3 = β4 vs H1: β3 ≠ β4
Option 1: t-test (test directly)
$t = \dfrac{\hat\beta_3 - \hat\beta_4}{se(\hat\beta_3 - \hat\beta_4)}$
• If the computed t value exceeds the critical t value at the designated level of significance for the given df, reject the null hypothesis; otherwise, do not reject it.
a. Testing the Equality of Two Regression Coefficients
• Review (variables in deviation form x2, x3):
$\mathrm{Var}(\hat\beta_2) = \dfrac{\sigma^2 \sum x_3^2}{\sum x_2^2 \sum x_3^2 - \left(\sum x_2 x_3\right)^2} = \dfrac{\sigma^2}{(1 - r_{2,3}^2)\sum x_2^2}$
$\mathrm{Var}(\hat\beta_3) = \dfrac{\sigma^2 \sum x_2^2}{\sum x_2^2 \sum x_3^2 - \left(\sum x_2 x_3\right)^2} = \dfrac{\sigma^2}{(1 - r_{2,3}^2)\sum x_3^2}$
$\mathrm{Cov}(\hat\beta_2, \hat\beta_3) = \dfrac{-\,r_{2,3}\,\sigma^2}{(1 - r_{2,3}^2)\sqrt{\sum x_2^2}\sqrt{\sum x_3^2}}$
so that $se(\hat\beta_3 - \hat\beta_4) = \sqrt{\mathrm{Var}(\hat\beta_3) + \mathrm{Var}(\hat\beta_4) - 2\,\mathrm{Cov}(\hat\beta_3,\hat\beta_4)}$.
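The standard error of the difference is assembled from the variances and the covariance; a minimal sketch with hypothetical values (not taken from any regression output in these slides):

```python
import math

def t_for_equality(b3, b4, var3, var4, cov34):
    """t-statistic for H0: beta3 = beta4, using
    se(b3 - b4) = sqrt(var(b3) + var(b4) - 2*cov(b3, b4))."""
    se_diff = math.sqrt(var3 + var4 - 2 * cov34)
    return (b3 - b4) / se_diff, se_diff

# Hypothetical estimates and (co)variances for illustration only.
t, se = t_for_equality(0.020, 0.140, 0.0004, 0.0005, 0.0001)
print(round(t, 2))  # -4.54, |t| > 1.96 -> reject H0 at 5%
```

A positive covariance between the two estimators shrinks se(β̂3 − β̂4) and so sharpens the test.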
Example: EViews output
• Model: wage = f(educ, exper, tenure)
Example: EViews output
• We have se(β̂3 − β̂4) = 0.029635
• t = −4.958, and |t| = 4.958 > t_{0.025,522} ≈ 2
• Reject H0
a. Testing the Equality of Two Regression Coefficients
Option 2: Transform the regression
a. Testing the Equality of Two Regression Coefficients
. test exper=tenure
( 1) exper - tenure = 0
F( 1, 522) = 24.58
Prob > F = 0.0000
We reject the hypothesis that the two effects are equal. (For a single restriction the F- and t-tests agree: F = t², and indeed 24.58 ≈ (−4.958)².)
b. Restricted Least Squares: Testing Linear Equality
Restrictions
The t-Test Approach
• The simplest procedure is to estimate Eq. (8.6.2) in the usual manner.
• A test of the hypothesis or restriction can be conducted by the t test:
$t = \dfrac{(\hat\beta_2 + \hat\beta_3) - 1}{se(\hat\beta_2 + \hat\beta_3)}$   (8.6.4)
• If the computed t value exceeds the critical t value at the chosen level of significance, we reject the hypothesis of constant returns to scale; otherwise we do not reject it.
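The constant-returns-to-scale statistic can be computed the same way; a sketch with made-up Cobb–Douglas estimates (the standard error of a sum uses + 2·cov, unlike the difference above):

```python
import math

def t_crs(b2, b3, var2, var3, cov23):
    """t-statistic for H0: beta2 + beta3 = 1 (constant returns to scale):
    t = (b2 + b3 - 1) / se(b2 + b3),
    se(b2 + b3) = sqrt(var(b2) + var(b3) + 2*cov(b2, b3))."""
    se = math.sqrt(var2 + var3 + 2 * cov23)
    return (b2 + b3 - 1) / se

# Hypothetical estimates: b2 = 0.34, b3 = 0.85 and illustrative (co)variances.
t = t_crs(0.34, 0.85, 0.01, 0.008, -0.002)
print(round(t, 2))  # about 1.61: below 1.96, do not reject CRS at 5%
```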
EXAMPLE 4.4 The Cobb–Douglas Production Function for the Mexican Economy, 1955–1974 (Table 8.8)

| Year | GDP (millions of 1960 pesos) | Employment (thousands of people) | Fixed Capital (millions of 1960 pesos) |
|------|------------------------------|----------------------------------|----------------------------------------|
| 1955 | 114,043 | 8,310  | 182,113 |
| 1956 | 120,410 | 8,529  | 193,749 |
| 1957 | 129,187 | 8,738  | 205,192 |
| 1958 | 134,705 | 8,952  | 215,130 |
| 1959 | 139,960 | 9,171  | 225,021 |
| 1960 | 150,511 | 9,569  | 237,026 |
| 1961 | 157,897 | 9,527  | 248,897 |
| 1962 | 165,286 | 9,662  | 260,661 |
| 1963 | 178,491 | 10,334 | 275,466 |
| 1964 | 199,457 | 10,981 | 295,378 |
| 1965 | 212,323 | 11,746 | 315,715 |
| 1966 | 226,977 | 11,521 | 337,642 |
| 1967 | 241,194 | 11,540 | 363,599 |
| 1968 | 260,881 | 12,066 | 391,847 |
| 1969 | 277,498 | 12,297 | 422,382 |
| 1970 | 296,530 | 12,955 | 455,049 |
| 1971 | 306,712 | 13,338 | 484,677 |
| 1972 | 329,030 | 13,738 | 520,553 |
| 1973 | 354,057 | 15,924 | 561,531 |
| 1974 | 374,977 | 14,154 | 609,825 |
Example
• General F Testing
• Model: wage = f(educ,exper, tenure )
H0: beta(exper)=0
Example
• Unrestricted model
Example
• Restricted model
Example
General F Testing
• In Exercise 7.19, you were asked to consider the following demand function for chicken:
lnY_t = β1 + β2 lnX2t + β3 lnX3t + β4 lnX4t + β5 lnX5t + u_t   (8.6.19)
where Y = per capita consumption of chicken (lb),
X2 = real disposable per capita income ($),
X3 = real retail price of chicken per lb,
X4 = real retail price of pork per lb,
X5 = real retail price of beef per lb.
TABLE 4.4c: Savings and Personal Disposable Income (billions of dollars), United States, 1970–1995
c. Testing for Structural or Parameter Stability of
Regression Models: The Chow Test
• RSS_UR = RSS1 + RSS2 = 1,785.032 + 10,005.22 = 11,790.252
• RSS_R = RSS3 = 23,248.30
• $F = \dfrac{(RSS_R - RSS_{UR})/k}{RSS_{UR}/(n_1 + n_2 - 2k)} = \dfrac{(23{,}248.30 - 11{,}790.252)/2}{11{,}790.252/22} = 10.69$
• From the F tables, for 2 and 22 df the 1 percent critical F value is 5.72.
• The Chow test therefore supports our earlier hunch that the savings–income relation underwent a structural change in the United States over the period 1970–1995.
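Plugging the RSS figures above into the Chow F formula reproduces the result; the subsample sizes assumed below (12 and 14 years) are illustrative, and only n1 + n2 − 2k = 22 actually matters here:

```python
def chow_F(rss_pooled, rss1, rss2, k, n1, n2):
    """Chow test: F = [(RSS_R - RSS_UR)/k] / [RSS_UR/(n1 + n2 - 2k)],
    where RSS_UR = RSS1 + RSS2 from the two subsample regressions."""
    rss_ur = rss1 + rss2
    num = (rss_pooled - rss_ur) / k
    den = rss_ur / (n1 + n2 - 2 * k)
    return num / den

# Figures from the savings-income example: k = 2 parameters per regime;
# the 26 yearly observations are assumed split 12/14 for illustration.
F = chow_F(23248.30, 1785.032, 10005.22, 2, 12, 14)
print(round(F, 2))  # 10.69
```

Since 10.69 exceeds the 1% critical value of 5.72, the null of parameter stability is rejected.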
c. Testing the Functional Form of Regression: Choosing
between Linear and Log–Linear Models
• We can use a test proposed by MacKinnon, White, and Davidson,
which for brevity we call the MWD test, to choose between the two
models
H0: The true model is linear
H1: The true model is Log–Linear
Step I: Estimate the linear model and obtain the fitted Y values; call them Yf.
Step II: Estimate the log–linear model and obtain the fitted ln Y values; call them ln f.
Step III: Obtain Z1 = ln(Yf) − ln f.
Step IV: Regress Y on the X's and Z1. Reject H0 if the coefficient of Z1 is statistically significant by the usual t test.
Step V: Obtain Z2 = antilog(ln f) − Yf.
Step VI: Regress ln Y on the logs of the X's and Z2. Reject H1 if the coefficient of Z2 is statistically significant by the usual t test.
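The construction of Z1 and Z2 can be sketched in Python; this is a minimal illustration with made-up data and a single regressor, and `ols_fit` is a small helper written here, not a library routine:

```python
import math

def ols_fit(x, y):
    """Simple OLS of y on a constant and one regressor; returns fitted values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [a + b * xi for xi in x]

# Hypothetical data for a one-regressor model.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 8.1, 9.8]

yf = ols_fit(X, Y)                               # Step I: fitted Y
lnf = ols_fit([math.log(x) for x in X],
              [math.log(y) for y in Y])          # Step II: fitted ln Y
z1 = [math.log(f) - g for f, g in zip(yf, lnf)]  # Step III: Z1
z2 = [math.exp(g) - f for f, g in zip(lnf, yf)]  # Step V: Z2
# Steps IV and VI: add z1 (resp. z2) as an extra regressor in the linear
# (resp. log-linear) model and t-test its coefficient.
```

The final regressions of Steps IV and VI require multiple-regression routines, so they are only indicated in comments here.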
EXAMPLE 4.4 The Demand for Roses
The Log-Linear Model
Semilog Models: Log–Lin and Lin–Log Models