
7A.4 MAXIMUM LIKELIHOOD ESTIMATION OF THE MULTIPLE REGRESSION MODEL

Extending the ideas introduced in Chapter 4, Appendix 4A, we can write the log-likelihood function for the k-variable linear regression model (7.4.20) as

$$\ln L = -\frac{n}{2}\ln \sigma^2 - \frac{n}{2}\ln (2\pi) - \frac{1}{2} \sum \frac{(Y_i - \beta_1 - \beta_2 X_{2i} - \cdots - \beta_k X_{ki})^2}{\sigma^2}$$

Differentiating this function partially with respect to $\beta_1, \beta_2, \ldots, \beta_k$ and $\sigma^2$, we obtain the following (K + 1) equations:

$$\frac{\partial \ln L}{\partial \beta_1} = -\frac{1}{\sigma^2} \sum (Y_i - \beta_1 - \beta_2 X_{2i} - \cdots - \beta_k X_{ki})(-1) \qquad (1)$$

$$\frac{\partial \ln L}{\partial \beta_2} = -\frac{1}{\sigma^2} \sum (Y_i - \beta_1 - \beta_2 X_{2i} - \cdots - \beta_k X_{ki})(-X_{2i}) \qquad (2)$$

$$\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots$$

$$\frac{\partial \ln L}{\partial \beta_k} = -\frac{1}{\sigma^2} \sum (Y_i - \beta_1 - \beta_2 X_{2i} - \cdots - \beta_k X_{ki})(-X_{ki}) \qquad (K)$$

$$\frac{\partial \ln L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum (Y_i - \beta_1 - \beta_2 X_{2i} - \cdots - \beta_k X_{ki})^2 \qquad (K + 1)$$

Setting these equations equal to zero (the first-order condition for optimization) and letting $\tilde{\beta}_1, \tilde{\beta}_2, \ldots, \tilde{\beta}_k$ and $\tilde{\sigma}^2$ denote the ML estimators, we obtain, after simple algebraic manipulations,

$$\sum Y_i = n\tilde{\beta}_1 + \tilde{\beta}_2 \sum X_{2i} + \cdots + \tilde{\beta}_k \sum X_{ki}$$
$$\sum Y_i X_{2i} = \tilde{\beta}_1 \sum X_{2i} + \tilde{\beta}_2 \sum X_{2i}^2 + \cdots + \tilde{\beta}_k \sum X_{2i} X_{ki}$$
$$\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots$$
$$\sum Y_i X_{ki} = \tilde{\beta}_1 \sum X_{ki} + \tilde{\beta}_2 \sum X_{2i} X_{ki} + \cdots + \tilde{\beta}_k \sum X_{ki}^2$$

which are precisely the normal equations of the least-squares theory, as can be seen from Appendix 7A, Section 7A.1. Therefore, the ML estimators, the $\tilde{\beta}$'s, are the same as the OLS estimators, the $\hat{\beta}$'s, given previously. But as noted in Chapter 4, Appendix 4A, this equality is not accidental.

Substituting the ML (= OLS) estimators into the (K + 1)st equation just given, we obtain, after simplification, the ML estimator of $\sigma^2$ as

$$\tilde{\sigma}^2 = \frac{1}{n} \sum (Y_i - \tilde{\beta}_1 - \tilde{\beta}_2 X_{2i} - \cdots - \tilde{\beta}_k X_{ki})^2 = \frac{1}{n} \sum \hat{u}_i^2$$

As noted in the text, this estimator differs from the OLS estimator $\hat{\sigma}^2 = \sum \hat{u}_i^2 / (n - k)$. And since the latter is an unbiased estimator of $\sigma^2$, this conclusion implies that the ML estimator $\tilde{\sigma}^2$ is a biased estimator. But, as can be readily verified, asymptotically, $\tilde{\sigma}^2$ is unbiased too.
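The equivalence of the ML and OLS coefficient estimators, and the bias of $\tilde{\sigma}^2$, are easy to verify numerically. The following is a minimal sketch (not from the text), assuming simulated data: solving the normal equations X'Xb = X'y gives the one coefficient vector shared by ML and OLS, and averaging the two variance estimators over many replications exposes the finite-sample bias of RSS/n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                                   # k coefficients, including the intercept
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true, sigma2_true = np.array([1.0, 2.0, -0.5]), 4.0

ml_var, ols_var = [], []
for _ in range(5000):
    u = rng.normal(scale=np.sqrt(sigma2_true), size=n)
    y = X @ beta_true + u
    b = np.linalg.solve(X.T @ X, X.T @ y)      # normal equations: same b for ML and OLS
    rss = np.sum((y - X @ b) ** 2)
    ml_var.append(rss / n)                     # ML estimator of sigma^2 (biased)
    ols_var.append(rss / (n - k))              # OLS estimator of sigma^2 (unbiased)

# ML mean ~ sigma^2 * (n - k)/n = 3.76, versus ~ 4.0 for the OLS estimator
print(np.mean(ml_var), np.mean(ols_var))
```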

7A.5 SAS OUTPUT OF THE COBB-DOUGLAS PRODUCTION FUNCTION (7.9.4)

DEP VARIABLE: Y1

                 SUM OF      MEAN
SOURCE     DF    SQUARES     SQUARE      F VALUE    PROB > F
MODEL       2    0.538038    0.269019    48.069     0.0001
ERROR      12    0.067153    0.005596
C TOTAL    14    0.605196

ROOT MSE    0.074810     R-SQUARE    0.8890
DEP MEAN   10.096535     ADJ R-SQ    0.8705
C.V.        0.7409469

                 PARAMETER    STANDARD    T FOR H0:
VARIABLE   DF    ESTIMATE     ERROR       PARAMETER = 0    PROB > |T|
INTERCEP    1    -3.338455    2.449508    -1.363           0.1979
Y2          1     1.498767    0.539803     2.777           0.0168
Y3          1     0.489858    0.102043     4.800           0.0004

COVARIANCE OF ESTIMATES

COVB        INTERCEP      Y2            Y3
INTERCEP    6.000091      1.26056       0.01121951
Y2          1.26056       0.2913868     0.0384272
Y3          0.01121951    0.0384272     0.01041288

[The signs of the off-diagonal covariance entries did not survive extraction; only magnitudes are shown. The output next lists, for each of the 15 observations, Y, X2, X3, their logarithms (Y1, Y2, Y3), the fitted values (Y1HAT), and the residuals (Y1RESID); that observation-level listing is too garbled in this copy to reproduce.]

COLLINEARITY DIAGNOSTICS

                          CONDITION    VARIANCE PROPORTIONS
NUMBER    EIGENVALUE      INDEX        INTERCEP    Y2        Y3
1         3.000             1.000      0.0000      0.0000    0.0000
2         0.000375451      89.383      0.0491      0.0069    0.5959
3         0.000024219     351.925      0.9509      0.9931    0.4040

DURBIN-WATSON d               0.891
1ST ORDER AUTOCORRELATION     0.366

Notes: Y1 = ln Y; Y2 = ln X2; Y3 = ln X3. The numbers under the heading PROB > |T| represent p values. See Chapter 10 for a discussion of collinearity diagnostics.
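For readers who want to reproduce this kind of output, here is a hedged sketch in Python using statsmodels; it is not part of the original text, and the variable names and generated numbers are placeholders only. The actual inputs are the fifteen observations on output, labor days, and capital stock used in Section 7.9.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder series (NOT the book's data), only so the sketch runs end to end;
# substitute the 15 observations on output (Y), labor days (X2), capital (X3).
rng = np.random.default_rng(1)
labor = rng.uniform(260, 310, size=15)
capital = rng.uniform(17000, 42000, size=15)
output = np.exp(-3.34 + 1.50 * np.log(labor) + 0.49 * np.log(capital)
                + rng.normal(scale=0.075, size=15))

y1 = np.log(output)                                      # Y1 = ln Y
X = sm.add_constant(np.column_stack([np.log(labor),      # Y2 = ln X2
                                     np.log(capital)]))  # Y3 = ln X3
fit = sm.OLS(y1, X).fit()
print(fit.summary())       # estimates, se's, t's, R^2, F, Durbin-Watson d
print(fit.cov_params())    # analogue of the COVARIANCE OF ESTIMATES block
```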

8 MULTIPLE REGRESSION ANALYSIS: THE PROBLEM OF INFERENCE

This chapter, a continuation of Chapter 5, extends the ideas of interval estimation and hypothesis testing developed there to models involving three or more variables. Although in many ways the concepts developed in Chapter 5 can be applied straightforwardly to the multiple regression model, a few additional features are unique to such models, and it is these features that will receive more attention in this chapter.

8.1 THE NORMALITY ASSUMPTION ONCE AGAIN

We know by now that if our sole objective is point estimation of the parameters of the regression models, the method of ordinary least squares (OLS), which does not make any assumption about the probability distribution of the disturbances $u_i$, will suffice. But if our objective is estimation as well as inference, then, as argued in Chapters 4 and 5, we need to assume that the $u_i$ follow some probability distribution.

For reasons already clearly spelled out, we assumed that the $u_i$ follow the normal distribution with zero mean and constant variance $\sigma^2$. We continue to make the same assumption for multiple regression models. With the normality assumption and following the discussion of Chapters 4 and 7, we find that the OLS estimators of the partial regression coefficients, which are identical with the maximum likelihood (ML) estimators, are best linear unbiased estimators (BLUE).[1] Moreover, the estimators $\hat{\beta}_2$, $\hat{\beta}_3$, and $\hat{\beta}_1$ are themselves normally distributed with means equal to the true $\beta_2$, $\beta_3$, and $\beta_1$ and the variances given in Chapter 7. Furthermore, $(n - 3)\hat{\sigma}^2/\sigma^2$ follows the $\chi^2$ distribution with $n - 3$ df, and the three OLS estimators are distributed independently of $\hat{\sigma}^2$. The proofs follow the two-variable case discussed in Appendix 3. As a result and following Chapter 5, one can show that, upon replacing $\sigma^2$ by its unbiased estimator $\hat{\sigma}^2$ in the computation of the standard errors, each of the following variables follows the t distribution with $n - 3$ df:

$$t = \frac{\hat{\beta}_1 - \beta_1}{se(\hat{\beta}_1)} \qquad (8.1.1)$$

$$t = \frac{\hat{\beta}_2 - \beta_2}{se(\hat{\beta}_2)} \qquad (8.1.2)$$

$$t = \frac{\hat{\beta}_3 - \beta_3}{se(\hat{\beta}_3)} \qquad (8.1.3)$$

Note that the df are now $n - 3$ because in computing $\sum \hat{u}_i^2$ and hence $\hat{\sigma}^2$ we first need to estimate the three partial regression coefficients, which therefore put three restrictions on the residual sum of squares (RSS) (following this logic, in the four-variable case there will be $n - 4$ df, and so on).

[1] With the normality assumption, the OLS estimators $\hat{\beta}_2$, $\hat{\beta}_3$, and $\hat{\beta}_1$ are minimum-variance estimators in the entire class of unbiased estimators, whether linear or not. In short, they are BUE (best unbiased estimators). See C. R. Rao, Linear Statistical Inference and Its Applications, John Wiley & Sons, New York, 1965, p. 258.
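The three-restrictions argument can be checked directly in code. A minimal sketch with simulated data (all names illustrative, not from the text): the OLS residuals are orthogonal to every column of X, so estimating three coefficients removes exactly three degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept, X2, X3
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ b

print(X.T @ u_hat)                    # ~ [0, 0, 0]: three linear restrictions on residuals
sigma2_hat = u_hat @ u_hat / (n - 3)  # hence the unbiased estimator divides by n - 3
print(sigma2_hat)
```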

Therefore, the t distribution can be used to establish confidence intervals as well as to test statistical hypotheses about the true population partial regression coefficients. Similarly, the $\chi^2$ distribution can be used to test hypotheses about the true $\sigma^2$. To demonstrate the actual mechanics, we use the following illustrative example.

8.2 EXAMPLE 8.1: CHILD MORTALITY EXAMPLE REVISITED

In Chapter 7 we regressed child mortality (CM) on per capita GNP (PGNP) and the female literacy rate (FLR) for a sample of 64 countries. The regression results given in (7.6.2) are reproduced below with some additional information:

$$\widehat{CM}_i = 263.6416 - 0.0056\,PGNP_i - 2.2316\,FLR_i$$
se      = (11.5932)   (0.0020)    (0.2099)
t       = (22.7411)   (-2.8187)   (-10.6293)            (8.2.1)
p value = (0.0000)*   (0.0065)    (0.0000)*
$R^2$ = 0.7077        $\bar{R}^2$ = 0.6981

where * denotes an extremely low value.

In Eq. (8.2.1) we have followed the format first introduced in Eq. (5.11.1), where the figures in the first set of parentheses are the estimated standard errors, those in the second set are the t values under the null hypothesis that the relevant population coefficient has a value of zero, and those in the third are the estimated p values. Also given are the $R^2$ and adjusted $R^2$ values. We have already interpreted this regression in Example 7.1.

What about the statistical significance of the observed results? Consider, for example, the coefficient of PGNP of -0.0056. Is this coefficient statistically significant, that is, statistically different from zero? Likewise, is the coefficient of FLR of -2.2316 statistically significant? Are both coefficients statistically significant? To answer this and related questions, let us first consider the kinds of hypothesis testing that one may encounter in the context of a multiple regression model.

8.3 HYPOTHESIS TESTING IN MULTIPLE REGRESSION: GENERAL COMMENTS

Once we go beyond the simple world of the two-variable linear regression model, hypothesis testing assumes several interesting forms, such as the following:

1. Testing hypotheses about an individual partial regression coefficient (Section 8.4)
2. Testing the overall significance of the estimated multiple regression model, that is, finding out if all the partial slope coefficients are simultaneously equal to zero (Section 8.5)
3. Testing that two or more coefficients are equal to one another (Section 8.6)
4. Testing that the partial regression coefficients satisfy certain restrictions (Section 8.7)
5. Testing the stability of the estimated regression model over time or in different cross-sectional units (Section 8.8)
6. Testing the functional form of regression models (Section 8.9)

Since testing of one or more of these types occurs so commonly in empirical analysis, we devote a section to each type.

8.4 HYPOTHESIS TESTING ABOUT INDIVIDUAL REGRESSION COEFFICIENTS

If we invoke the assumption that $u_i \sim N(0, \sigma^2)$, then, as noted in Section 8.1, we can use the t test to test a hypothesis about any individual partial regression coefficient. To illustrate the mechanics, consider the child mortality regression, (8.2.1). Let us postulate that

$$H_0: \beta_2 = 0 \qquad \text{and} \qquad H_1: \beta_2 \neq 0$$

The null hypothesis states that, with $X_3$ (female literacy rate) held constant, $X_2$ (PGNP) has no (linear) influence on Y (child mortality).[2] To test the null hypothesis, we use the t test given in (8.1.2). Following Chapter 5 (see Table 5.1), if the computed t value exceeds the critical t value at the chosen level of significance, we may reject the null hypothesis; otherwise, we may not reject it. For our illustrative example, using (8.1.2) and noting that $\beta_2 = 0$ under the null hypothesis, we obtain

$$t = \frac{-0.0056}{0.0020} = -2.8187 \qquad (8.4.1)$$

as shown in Eq. (8.2.1).

Notice that we have 64 observations. Therefore, the degrees of freedom in this example are 61 (why?). If you refer to the t table given in Appendix D, we do not have data corresponding to 61 df. The closest we have are for 60 df. If we use these df, and assume $\alpha$, the level of significance (i.e., the probability of committing a Type I error), of 5 percent, the critical t value is 2.0 for a two-tail test (look up $t_{\alpha/2}$ for 60 df) or 1.671 for a one-tail test (look up $t_\alpha$ for 60 df).

For our example, the alternative hypothesis is two-sided. Therefore, we use the two-tail t value. Since the computed t value of 2.8187 (in absolute terms) exceeds the critical t value of 2, we can reject the null hypothesis that PGNP has no effect on child mortality. To put it more positively, with the female literacy rate held constant, per capita GNP has a significant (negative) effect on child mortality, as one would expect a priori. Graphically, the situation is as shown in Figure 8.1.

FIGURE 8.1 The 95% confidence interval for t (60 df). [The figure itself did not survive extraction; it shows the t density with the 95% region of acceptance between -2.0 and +2.0, 2.5% critical regions in each tail, and the computed t = -2.82 falling in the left critical region.]

In practice, one does not have to assume a particular value of $\alpha$ to conduct hypothesis testing. One can simply use the p value given in (8.2.1).

[2] In most empirical investigations the null hypothesis is stated in this form, that is, taking the extreme position (a kind of straw man) that there is no relationship between the dependent variable and the explanatory variable under consideration. The idea here is to find out whether the relationship between the two is a trivial one to begin with.
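The same calculation is easy to script. Here is a hedged sketch using scipy (not from the text), with the figures from (8.2.1); note that -0.0056/0.0020 equals -2.8 with the rounded values shown, while the text's -2.8187 comes from the unrounded estimates.

```python
from scipy import stats

beta_hat, se = -0.0056, 0.0020        # PGNP coefficient and its se from (8.2.1)
t_stat = beta_hat / se                # -2.8 with rounded inputs; text reports -2.8187
df = 64 - 3                           # n - 3 = 61

t_crit = stats.t.ppf(1 - 0.05 / 2, df)        # two-tail 5% critical value, ~2.000
p_value = 2 * stats.t.sf(2.8187, df)          # two-sided p value, ~0.0065
print(t_crit, p_value, abs(t_stat) > t_crit)  # H0 is rejected
```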
