
2-13 (Gao Liu, R Hanson) What is the principle of parsimony? How does this relate to forecasting?

The parsimony principle is one of the important principles of model building. It states that, all else being equal, simpler models are better. Applied to forecasting, it means that a simple model that fits the past as well as a more complex model will almost always forecast the future better. The principle is therefore one of the criteria for choosing among competing forecasting models: given theories with approximately the same explanatory power (i.e., about the same error), the fewer the predictor variables or estimated coefficients (that is, the simpler the theory), the more accurate the forecasts. (Pg 54, Pg 733)

2-14 (Gao Liu, R Hanson) Explain the meanings of the following forecasting accuracy measures: percentage error (PE), mean percentage error (MPE), and mean absolute percentage error (MAPE).

All three measures are relative measures:

PE - Percentage error. Measures the ratio of the error to the actual value:
PEt = ((Yt - Y^t) / Yt) * 100

MPE - Mean percentage error. For a good model it should be near zero, since positive and negative errors offset each other:
MPE = ΣPEt / n

MAPE - Mean absolute percentage error. Absolute values are used in the summation of the PEs, so errors cannot offset each other:
MAPE = Σ|PEt| / n
(Pg 54-56)
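These three measures can be illustrated with a short numeric sketch (the actual and forecast values below are made up for illustration):

```python
# Illustrative PE/MPE/MAPE calculation with made-up actuals and forecasts.
actual   = [100.0, 120.0, 110.0, 130.0]
forecast = [ 95.0, 125.0, 108.0, 128.0]

# PEt = ((Yt - Y^t) / Yt) * 100 for each period
pe = [(y - f) / y * 100 for y, f in zip(actual, forecast)]

# MPE: positive and negative errors offset each other
mpe = sum(pe) / len(pe)

# MAPE: absolute values prevent offsetting
mape = sum(abs(p) for p in pe) / len(pe)

print([round(p, 2) for p in pe])
print(round(mpe, 2), round(mape, 2))
```

Note how the MPE (about 1.05 here) is much closer to zero than the MAPE (about 3.13), because the period-2 underestimate partly cancels the other overestimates in the MPE but not in the MAPE.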

2-15 (Gao Liu, R Hanson) Question - Explain the concept of correlation between Y and X using common English terms.

Correlation is a statistical technique that can show whether, and how strongly, a dependent variable Y is related to an independent variable X. In other words, it measures the degree of dependence or association between the two variables X and Y. If there is a weak association between X and Y, we say there is a low correlation; conversely, if there is a strong relationship between X and Y, there is a high correlation. By way of example, height and weight are related: taller people tend to be heavier than shorter people. The relationship isn't perfect. People of the same height vary in weight, and you can easily think of two people you know where the shorter one is heavier than the taller one. Nonetheless, the average weight of people 5'5" is less than the average weight of people 5'6", and their average weight is less than that of people 5'7", etc. Correlation can tell you just how much of the variation in people's weights is related to their heights. (Pg 16, Pg 58-59)

2-16 (Gao Liu, R Hanson) Question - Explain the concept of correlation between Y and X using standard deviations.

To apply the concept of correlation numerically, we must begin with a measure called covariance:

COV(X,Y) = Σ(X - X̄)(Y - Ȳ) / (n - 1)

Covariance is a measure of the association between two variables. By itself this measure is often not very insightful, but it is the basis of the other measure of association, correlation. The next step combines the covariance with the standard deviations of X and Y to construct a correlation coefficient, giving meaning to both positive and negative relationships between X and Y (high correlation as well as low correlation).

The standard deviations of X (Sx) and Y (Sy) below are the basis of the correlation coefficient (rxy):

Sx = (Σ(X - X̄)² / (n - 1))^(1/2)
Sy = (Σ(Y - Ȳ)² / (n - 1))^(1/2)

rxy = COV(X,Y) / (Sx * Sy)

The correlation coefficient (also called the Pearson correlation coefficient) is the ratio of the covariance of X and Y to the product of their standard deviations. More conceptually, it can be interpreted as the average standard-deviation change in Y associated with a one-standard-deviation change in X. Values range between +1 and -1:

r = -1  perfect negative correlation
r = 0   no relationship
r = +1  perfect positive correlation
(Pg 59-62)

2-18 (Gao Liu, R Hanson) Question - Explain the concept of statistical significance, particularly as it relates to the correlation coefficient.

With a small sample, we cannot always be sure that a strong relationship between X and Y (say, rxy = 0.88) is real rather than the result of chance. Conversely, we may want to know whether there is a significant relationship between X and Y even though rxy is low (say, 0.02). To answer such questions we perform a test of statistical significance, which tells us whether the statistic (here, the correlation coefficient) gives a high degree of confidence that there is a relationship between X and Y. In other words, the test asks whether the correlation coefficient r is significantly different from 0. The test uses the standard error of the correlation coefficient (Ser):

Ser = ((1 - r²) / (n - 2))^(1/2)
tr = r / Ser

The sample size (n) affects the statistical significance of the correlation coefficient. Comparing tr against the t-table (a hypothesis test) indicates the chance that the population correlation is not zero, and determines whether we accept H0: r = 0 or reject H0 and accept H1: r ≠ 0.
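The covariance, standard-deviation, correlation, and significance formulas above can be traced through a small numeric sketch (the paired observations are made up; all sample statistics use n - 1 in the denominator):

```python
import math

# Made-up paired observations for X and Y.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 1.5, 3.5, 3.0, 5.0]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# COV(X,Y) = sum((X - X_bar)(Y - Y_bar)) / (n - 1)
cov = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)

# Sample standard deviations Sx and Sy
sx = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n - 1))
sy = math.sqrt(sum((yi - y_bar) ** 2 for yi in y) / (n - 1))

# Pearson correlation coefficient: rxy = COV(X,Y) / (Sx * Sy)
r = cov / (sx * sy)

# Standard error of r, and the t statistic for H0: r = 0
se_r = math.sqrt((1 - r ** 2) / (n - 2))
t_r = r / se_r
```

For this data r works out to about 0.866 and tr to 3.0. Comparing tr = 3.0 against the two-tailed 5% critical value for n - 2 = 3 degrees of freedom (about 3.18) shows that even a fairly high r can fall short of significance when the sample is this small.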

2-19 (Gao Liu, R Hanson) Why is cause and effect so difficult to prove in business and economics?

It is difficult to prove a cause-and-effect relationship between two variables in business and economics, even when we have found a statistically significant association between them, because in these fields it is hard to measure and control for all of the other factors influencing the dependent variable. While it may seem logical to infer a cause-and-effect relationship between two highly correlated variables, it is a mistake to assume a true cause-and-effect relationship based on high correlation alone. Other factors influencing the dependent variable may have changed at the same time; it is entirely possible for two variables to be related (correlated) without one causing the other. Past studies have shown high correlations between women's skirt lengths and stock market performance, between church attendance and beer consumption, etc. These examples show the need for caution in concluding that a cause-and-effect relationship exists between two variables simply because they are highly correlated. We can cautiously infer cause and effect only if we have measured and controlled for all other factors influencing the dependent variable, and have measured or manipulated the independent variable before the observed changes occur in the dependent variable. (Pg 66, Pg 120-121)
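The point that two variables can be highly correlated without one causing the other can be shown with a small simulation (entirely made-up data): a hidden third variable Z drives both X and Y, so X and Y end up strongly correlated even though neither causes the other.

```python
import math
import random

random.seed(42)

# Z is a hidden common driver (e.g., overall economic activity).
z = [random.gauss(0, 1) for _ in range(500)]

# X and Y are each driven by Z plus independent noise;
# neither X nor Y has any causal effect on the other.
x = [zi + random.gauss(0, 0.3) for zi in z]
y = [zi + random.gauss(0, 0.3) for zi in z]

def pearson(a, b):
    """Pearson correlation coefficient with n - 1 denominators."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (n - 1)
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a) / (n - 1))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b) / (n - 1))
    return cov / (sa * sb)

# High correlation (roughly 0.9 with these noise levels),
# yet there is no cause-and-effect link between X and Y.
r_xy = pearson(x, y)
```

Controlling for the lurking variable Z (as the answer above requires) would reveal that the X-Y association is spurious.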
