
FORECASTING U.S. CONSUMER PRICE INDEX INFLATION

Submitted by Group Number: 5
Names of Group Members: ADITI SINGHAL, PRATIYUSH, REEMA GUPTA, SHRUTI TAGRA

Forecasting Methods & Applications (Course 404) Term Paper
Summer Semester, 2010
Department of Economics, Delhi School of Economics, University of Delhi
Term Paper Submitted to: Professor PAMI DUA
Dated: 26th March 2010

ACKNOWLEDGEMENT
We are deeply grateful to Dr. Pami Dua, under whose guidance and supervision the present study was planned and executed, for her invaluable help and suggestions. We also express our deep gratitude to the CDE staff and Miss Divya Tuteja for extending their kind help and sharing their knowledge with us. Our thanks are also due to the management and employees of the Centre for Development Economics and the Ratan Tata Library.

CONTENTS

Section 1: Introduction
Section 2: Review of Literature
Section 3: Econometric Methodology and Estimation of Models
    Exponential Smoothing Model
    Decomposition Method
    ARIMA Model
    Single Equation Regression Model
    Simultaneous Equation Model
Section 4: Evaluation of Forecasts
Section 5: Conclusion
Appendix

SECTION I
ABSTRACT

In mainstream economics, the word inflation refers to a general rise in prices measured against a standard level of purchasing power. Previously the term was used to refer to an increase in the money supply, which is now referred to as expansionary monetary policy or monetary inflation. Inflation is measured by comparing two sets of goods at two points in time and computing the increase in cost not reflected by an increase in quality. There are, therefore, many measures of inflation depending on the specific circumstances. The most well known are the CPI, which measures consumer prices, and the GDP deflator, which measures inflation in the whole of the domestic economy. The prevailing view in mainstream economics is that inflation is caused by the interaction of the supply of money with output and interest rates. Mainstream economists' views can be broadly divided into two camps: the "monetarists", who believe that monetary effects dominate all others in setting the rate of inflation, and the "Keynesians", who believe that the interaction of money, interest and output dominates over other effects. Other theories, such as those of the Austrian school of economics, hold that inflation of overall prices results from an increase in the supply of money by central banking authorities.

INTRODUCTION
Inflation forecasts play an important role in the effective implementation of an inflation-targeting regime. Moreover, many economic decisions, whether made by policy makers, firms, investors or consumers, are often based on inflation forecasts. The accuracy of these forecasts can thus have important repercussions for the economy. Our paper focuses on forecasting inflation in the US economy. We chose this topic because of the global hegemony of the dollar, which has implications for the rest of the world. Such an important variable clearly merits forecasting, since forecasts can help investors hedge risk and enable the central bank to gauge how changes in this variable affect its policies. This has been the primary motive of our paper. A secondary question we ask is how sensitive interest rates are to fluctuations in foreign markets. These considerations define our motivation for embarking on this project, and we hope the pages that follow do them justice. Regards, Authors

SECTION II
REVIEW OF LITERATURE

1. Gary Koop, Dimitris Korobilis (The Rimini Centre for Economic Analysis)
Title: Forecasting Inflation Using Dynamic Model Averaging
Objective: This study develops the dynamic model averaging (DMA) approach to forecast U.S. inflation.
Estimation period: 1959 Q1 through 2008 Q2
Frequency: Quarterly
Variables used in the paper: The following independent variables have been used in the analysis:
UNEMP: unemployment rate.
CONS: percentage change in real personal consumption expenditures.
INV: percentage change in private residential fixed investment.
GDP: percentage change in real GDP.
HSTARTS: log of housing starts (total new privately owned housing units).
EMPLOY: percentage change in employment (all employees, total private industries, seasonally adjusted).
PMI: change in the Institute of Supply Management (Manufacturing) Purchasing Managers Composite Index.
WAGE: percentage change in average hourly earnings in manufacturing.
TBILL: three-month Treasury bill (secondary market) rate.

METHODOLOGY
This paper uses econometric methods which incorporate dynamic model averaging. These not only allow the coefficients to change over time (i.e. the marginal effect of a predictor of inflation can change), but also allow the entire forecasting model to change over time (i.e. different sets of predictors can be relevant at different points in time).

Flow of the paper: There is a large literature on forecasting inflation using the generalized Phillips curve (i.e. forecasting models in which inflation depends on past inflation, the unemployment rate and other predictors). The present paper extends this literature through the use of econometric methods which incorporate dynamic model averaging. As predictors, authors such as Stock and Watson (1999) consider measures of real activity, including the unemployment rate. Various other predictors (e.g. cost variables, the growth of the money supply, the slope of the term structure, etc.) are suggested by economic theory. The paper uses two measures of inflation and fifteen predictors, and compares the forecasting performance of DMA and DMS with a wide variety of alternative forecasting procedures. DMA and DMS indicate that the set of good predictors of inflation changes substantially over time. Inflation is measured both by the GDP deflator and by the consumer price index (CPI).

Conclusion: The paper concludes that dynamic model averaging leads to substantial forecasting improvements over simple benchmark approaches (e.g. random walk or recursive OLS forecasts) and over more sophisticated approaches such as those using time-varying coefficient models.

2. David Hargreaves, Hannah Kite and Bernard Hodgetts
Title: Modelling New Zealand Inflation in a Phillips Curve
Objective: This study tries to identify a suitable model to predict New Zealand's inflation rate.
Variables used: Dependent variable: inflation. Independent variables: the output gap (to represent inflationary pressures) and lagged inflation.
Estimation period: 1992 to 2005
Frequency of data: Quarterly
Flow of paper: The FPS model of New Zealand models inflation with a semi-structural specification based on the bank's beliefs about how an economy should work, rather than on formal macro foundations alone. To measure the impact of inflationary pressure, the authors use the output gap as the determinant variable; quarterly data for the period 1992-2005 have been used.
Conclusion: The paper concludes that a Phillips curve in which the explanatory variables are only the output gap and lagged inflation fits the data well.

3. Athanasios Orphanides and Simon van Norden
Title: The Reliability of Inflation Forecasts Based on Output Gap Estimates in Real Time
Objective: To evaluate the usefulness of alternative univariate and multivariate estimates of the output gap for predicting inflation.
Dependent variable: Inflation, calculated as the change in the log of the consumer price index:
π_t = log(p_t) - log(p_{t-h})

Estimation period: Different for each measure covered.
Methodology: The paper examines simple linear forecasting models, estimating the unknown coefficients by ordinary least squares. To provide benchmarks for comparison, they estimate a univariate forecasting model of inflation based on a similar equation but omitting the output gap (an AR model), and a model known as the TF model, which replaces the output gap with the first difference of the log of real output.

Flow of the paper: The paper presents summary reliability indicators for twelve alternative measures of the output gap employed in the analysis. Their findings show that, for many of the alternative methods, historical and real-time estimates frequently have opposite signs, and forecasts using ex post estimates of the output gap severely overstate the gap's usefulness for predicting inflation.
Conclusion: The relative usefulness of real-time output gap estimates diminishes further when compared to simple bivariate forecasting models which use past inflation and output growth. Forecast performance also appears to be unstable over time, with models often performing differently over periods of high and low inflation. These results call into question the practical usefulness of the output gap concept for forecasting inflation.

SECTION III
ECONOMETRIC METHODOLOGY & SPECIFICATION OF MODELS
A strategy for appraising any forecasting method involves the following steps (Makridakis, Wheelwright and McGee, Forecasting: Methods and Applications):

1) The time series of interest is identified. The data set is then divided into two parts: an initialization set (which in our case is Jan 2001 to Dec 2004) and a test set (which in our study is Jan 2005 to Dec 2006), so that an appraisal of a forecasting method can be conducted.
2) A forecasting method is chosen from the list of smoothing methods.
3) The initialization data set is used to get the forecasting method started.
4) Estimates of any trend components, seasonal components and parameter values are made at this stage.
5) The method is applied to the test set to see how it does. After each forecast the forecasting error is determined, and over the complete test set certain measures of forecasting accuracy are determined.
6) This stage requires modification of the initialization process and/or searching for the optimal values of the parameters in the model.
7) The forecasting method is appraised as to its suitability for various kinds of data patterns.
In the study, we have taken monthly time series data from Jan 2001 to Dec 2007 on U.S. CPI inflation. The estimation period is Jan 2001 to Dec 2004, and we then forecast CPI inflation for the period Jan 2005 to Dec 2006. We employ both univariate and multivariate techniques to get the best forecast for the period mentioned above. The techniques used are summarized in the following flowchart:

Forecasting Techniques
- Univariate Techniques
    - Exponential Smoothing
        - Simple Exponential Smoothing
        - Brown's Double Exponential Smoothing
        - Holt's Linear Trend Algorithm (No Seasonal)
        - Winters' Additive
        - Winters' Multiplicative
    - Decomposition
        - Additive Decomposition
        - Multiplicative Decomposition
    - ARIMA
- Multivariate Techniques
    - Single Equation Model
    - Simultaneous Equation Model

A. UNIVARIATE TECHNIQUES
(I) EXPONENTIAL SMOOTHING: Exponential smoothing involves the use of an exponentially weighted average model for smoothing a given time series. That is, exponential smoothing methods apply an unequal set of weights to past data; these weights decay in an exponential manner from the most recent data value to the most distant one. The basic notion inherent in exponential smoothing is that there is some underlying pattern in the values of the variable to be forecast. The goal of these forecasting methods is therefore to distinguish between the random fluctuations and the basic underlying pattern by smoothing the historical values. The exponentially smoothed series is given by:

S_t = αX_t + α(1-α)X_{t-1} + α(1-α)²X_{t-2} + ...

where X_t is the actual value at time t, S_t is the smoothed value at time t, and α is a weight between 0 and 1. The weights α, α(1-α), α(1-α)², etc. have exponentially decreasing values.

1) Single Exponential Smoothing (SES): Forecasting from this method requires only three pieces of data: the most recent forecast, the most recent actual value, and a smoothing constant. The smoothing constant α determines the weight given to the most recent past observations and therefore controls the rate of smoothing or averaging. The equation for SES is:

F_t = αX_{t-1} + (1-α)F_{t-1}

where
F_t = exponentially smoothed forecast for period t
X_{t-1} = actual value in the prior period
F_{t-1} = exponentially smoothed forecast of the prior period
α = smoothing constant

Alternatively, the forecast based on this method can be computed as:

F_t = αX_{t-1} + α(1-α)X_{t-2} + α(1-α)²X_{t-3} + α(1-α)³X_{t-4} + ... + (1-α)^n F_{t-n}

where n = total number of observations. Thus, the forecast in period t is a weighted average of all the past values and one initial forecast, and the sum of the weights equals 1. This method is usually appropriate when the data series contains a horizontal pattern, that is, it does not have a trend.

2) Linear (Holt's) Exponential Smoothing: The method of linear exponential smoothing takes into account the presence of trend in the data series, i.e. it further adjusts each smoothed value for the trend of the previous period before calculating the new smoothed value. It can be used when there is a consistent and persistent trend in the data, as often happens with yearly data. The smoothing equations used in this model are:

S_t = αX_t + (1-α)(S_{t-1} + T_{t-1})
T_t = γ(S_t - S_{t-1}) + (1-γ)T_{t-1}

where
S_t = equivalent of the single exponentially smoothed value
γ = smoothing coefficient, analogous to α
T_t = smoothed trend in the data series

The most recent trend, (S_t - S_{t-1}), is weighted by γ and the last smoothed trend, T_{t-1}, is weighted by (1-γ). To prepare a forecast, we add the trend component, multiplied by the number of periods ahead to be forecast, to the basic smoothed value. The general equation is:

F_{t+m} = S_t + T_t·m

3) Winters' Multiplicative (Linear and Seasonal) Exponential Smoothing: This method produces results similar to those of linear exponential smoothing, but has the extra advantage of being able to deal with seasonal data in addition to data that have a trend. It is based on three equations, each of which smooths a factor associated with one of the three components of the pattern: randomness, trend and seasonality. They are as follows:

S_t = α(X_t / I_{t-L}) + (1-α)(S_{t-1} + T_{t-1})
T_t = γ(S_t - S_{t-1}) + (1-γ)T_{t-1}
I_t = β(X_t / S_t) + (1-β)I_{t-L}

where
S = smoothed value of the deseasonalized series
T = smoothed value of the trend
I = smoothed value of the seasonal factor
L = length of seasonality (e.g., number of months or quarters in a year)

The forecast based on Winters' method is computed as:

F_{t+m} = (S_t + T_t·m) I_{t-L+m}

Brown's Double Exponential Smoothing: This method applies the single smoothing method twice (using the same parameter) and is appropriate for a non-seasonal series with a linear trend. Brown's double exponential smoothing is an alternative to Holt's linear trend algorithm (Newbold and Bos, Introductory Business and Economic Forecasting). Re-smoothing the singly smoothed series from the single exponential smoothing model yields the doubly smoothed statistic. Since forecasts can be expressed as a function of the single and double smoothed statistics, this procedure is known as double exponential smoothing. It is used on data that show a linear trend over time and can forecast multiple time periods into the future.

This method involves using the simple exponential algorithm twice. The observed series A_t is smoothed as per the following equation, yielding the series S_t(1):

S_t(1) = αA_t + (1-α)S_{t-1}(1)

This series is then smoothed again, using the same smoothing constant, to produce the doubly smoothed series S_t(2):

S_t(2) = αS_t(1) + (1-α)S_{t-1}(2)

The forecast is given by the following equation:

F_{t+h} = 2S_t(1) - S_t(2) + h(α/(1-α))(S_t(1) - S_t(2)),  h = 1, 2, ...
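The two smoothing passes and the forecast equation above can be sketched as follows; the data and α are illustrative only.

```python
def brown_forecast(x, alpha, h=1):
    """Brown's double smoothing: smooth twice with the same alpha, then
    F_{t+h} = 2*S1_t - S2_t + h*(alpha/(1-alpha))*(S1_t - S2_t)."""
    s1 = s2 = x[0]                                 # a common initialization choice
    for actual in x[1:]:
        s1 = alpha * actual + (1 - alpha) * s1     # single smoothing
        s2 = alpha * s1 + (1 - alpha) * s2         # smoothing of the smoothed series
    return 2 * s1 - s2 + h * (alpha / (1 - alpha)) * (s1 - s2)

print(brown_forecast([3.0, 3.1, 3.2, 3.4, 2.8, 2.5], alpha=0.4, h=2))
```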

4) Winters' Additive Exponential Smoothing: This method is appropriate for series with a linear time trend and additive seasonal variation. The Holt-Winters additive method is applicable when the time series contains a seasonal component. It assumes the time series is composed of a linear trend and a seasonal cycle; it constructs three statistically correlated series (smoothed, seasonal and trend) and projects forward the identified trend and seasonality:

S_t = α(A_t - I_{t-l}) + (1-α)(S_{t-1} + T_{t-1})
T_t = γ(S_t - S_{t-1}) + (1-γ)T_{t-1}
I_t = β(A_t - S_t) + (1-β)I_{t-l}
F_{t+h} = S_t + hT_t + I_{t-l+h},  where l is the length of seasonality

where
S_t is the smoothed value of the series
A_t is the actual value at time period t
T_t is the trend estimate at time period t
F_{t+h} is the forecast for period t+h made at time period t
I_t is the seasonal index for time period t
α, β and γ are the three smoothing constants

(II) DECOMPOSITION: The decomposition method identifies three separate components of the basic underlying pattern that characterize any time series: trend, cycle and seasonal factors. Decomposition assumes that the data consist of a pattern and an error (random component), where the pattern is made up of trend, cycle and seasonality. The overall pattern can thus be broken up into sub-patterns that identify each component of the time series separately. Such decomposition can frequently facilitate forecasting and help the forecaster understand the behaviour of the series. Thus,

X_t = f(S_t, T_t, C_t, R_t)

where
X_t = time series value at period t
S_t = seasonal component at period t
T_t = trend component at period t
C_t = cyclical component at period t
R_t = random component at period t

Two types of decomposition models are used for the above purpose:

(a) Multiplicative Decomposition: This method of decomposition is used if the differences between the peaks and troughs grow larger as the trend increases. [Figure: series with seasonal swings of +/-10% around a rising trend.] It is represented by

X_t = T_t x C_t x S_t x R_t

(b) Additive Decomposition: This method is used when it is evident from plots of the data that the seasonal and cyclical influences are unrelated to the general level of the series. In other words, the differences between the peaks and the troughs stay the same and are independent of the level of the series. [Figure: series with seasonal swings of +/-100 units around the trend.] It is represented by

X_t = T_t + C_t + S_t + R_t

The main steps in the multiplicative decomposition method are:
(a) Compute yearly (moving) averages so that both randomness and seasonality are removed, since yearly data do not contain seasonality. The averages computed are known as moving averages (MA) and include only trend and cycle (T x C).
(b) Divide the actual values (X) by the moving average (T x C) to isolate seasonality and randomness (S x R): X / MA = (T x C x S x R) / (T x C) = S x R.
(c) Arrange the values so obtained by quarter (or month) across years and generate quarterly or monthly averages.
(d) Adjust the values obtained so that they sum to 4 (for quarterly data), giving the seasonal indexes for each quarter or month.
(e) To get the trend values, regress the deseasonalized values on time:

T_t = α + βt

where α is the constant and β is the trend coefficient. The trend value can then be obtained from this equation.
(f) The cycle value can be obtained by dividing the moving average by the trend value: MA / T = (T x C) / T = C.

(III) AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA): When using the ARIMA methodology we first need to test whether the time series of the dependent variable is stationary. A stochastic process is said to be stationary if its mean and variance are constant over time and the covariance between two time periods depends only on the distance, gap or lag between them, not on the actual time at which the covariance is computed. Stationarity of the time series is required because, if the series is non-stationary, we can study its behaviour only for the period under consideration; each set of time series data would then represent a particular episode, so non-stationary series may be of limited practical value for forecasting. The stationarity of the series is tested by the following methods:

- Graphical method (we plot the ACF against the lags; if the ACF dies down quickly, the series is stationary)
- Unit root test

ARIMA models predict future values of a variable exclusively on the basis of its own past history. An ARIMA(p, d, q) model can be represented as:

φ(L)(1-L)^d y_t = δ + θ(L)ε_t

where
L = backward shift operator
φ(L) = autoregressive operator = 1 - φ₁L - φ₂L² - ... - φ_p L^p
θ(L) = moving average operator = 1 - θ₁L - θ₂L² - ... - θ_q L^q

The stationarity condition for an AR(p) process implies that the roots of φ(L) lie outside the unit circle, i.e., all the roots of φ(L) are greater than one in absolute value.


In fact, the first step in estimating ARIMA models is to test whether the series is nonstationary, i.e. contains a unit root. Several tests have been developed to test for the presence of a unit root. In this study, we focus on the augmented Dickey-Fuller (1979) test. To test whether a sequence y_t contains a unit root, three different regression equations are considered:

Δy_t = a₀ + γy_{t-1} + a₂t + Σ_{i=2}^{p} β_i Δy_{t-i+1} + ε_t   (1)
Δy_t = a₀ + γy_{t-1} + Σ_{i=2}^{p} β_i Δy_{t-i+1} + ε_t          (2)
Δy_t = γy_{t-1} + Σ_{i=2}^{p} β_i Δy_{t-i+1} + ε_t               (3)

The first equation includes both a drift term and a deterministic trend; the second excludes the deterministic trend; and the third contains neither an intercept nor a trend term. In all three equations, the parameter of interest is γ. If γ = 0, the y_t sequence has a unit root. The estimated t statistic is compared with the appropriate critical value in the Dickey-Fuller tables to determine whether the null hypothesis is valid. The critical values are denoted by τ_τ, τ_μ and τ for equations (1), (2) and (3) respectively. The sequential procedure involves testing the most general model first (equation 1). Since the power of the test is low, if we reject the null hypothesis we stop at this stage and conclude that there is no unit root. If we do not reject the null hypothesis, we proceed to determine whether the trend term is significant under the null of a unit root. If the trend is significant, we retest for the presence of a unit root using the standardized normal distribution; if the null of a unit root is then not rejected, we conclude that the series contains a unit root, and otherwise that it does not. If the trend is not significant, we estimate equation (2) and test for the presence of a unit root. If the null of a unit root is rejected, we conclude that there is no unit root and stop at this point. If the null is not rejected, we test for the significance of the drift term in the presence of a unit root. If the drift term is significant, we test for a unit root using the standardized normal distribution. If the drift is not significant, we estimate equation (3) and test for a unit root. Restrictions are also imposed on θ(L) to ensure invertibility, so that the MA(q) part can be written as an infinite autoregression in y. Furthermore, if a series requires differencing d times to yield a stationary series, then the differenced series is modeled as an ARMA(p, q) process or, equivalently, an ARIMA(p, d, q) model is fitted to the series. This is done by looking at the plots of the ACF and PACF. Other criteria employed to select the best-fit model include parameter significance, residual diagnostics, and minimization of the Akaike Information Criterion and the Schwarz Bayesian Criterion. The ARMA model is also tested for stability using the Chow test.


Thus the steps followed in the ARIMA methodology are:

(i) Testing for stationarity of the time series and identifying the order of integration
(ii) Identifying the order of the AR and MA components
(iii) Checking the invertibility of the model
(iv) Estimating and comparing the alternative models
(v) Estimating out-of-sample forecasts
(vi) Evaluating the alternative forecasts
(vii) Testing the stability of the model

TESTING FOR THE PRESENCE OF A UNIT ROOT
The first econometric step is to test whether the series is nonstationary, i.e. contains a unit root. The stationarity of the model is tested by the following methods: (1) the unit root test and (2) the graphical method.

a. Unit Root Test
Several tests have been developed to test for the presence of a unit root. In this study, we focus on the augmented Dickey-Fuller (1979, 1981) test.

Steps in the ADF test, with H₀: γ = 0 (the series is nonstationary) against H₁: γ < 0:

(1) Estimate Δy_t = a₀ + γy_{t-1} + a₂t + Σ β_i Δy_{t-i} + e_t. Is γ = 0? If it is not, stop and conclude there is no unit root. If it is, go to step (2) to test for the presence of trend.

(2) Is a₂ = 0 given γ = 0? If the trend is not significant, go to step (3). If the trend is significant, retest for the presence of a unit root using the standard normal distribution, i.e. test whether γ = 0. If it is not, conclude there is no unit root; if it is, conclude there is a unit root.

(3) Estimate Δy_t = a₀ + γy_{t-1} + Σ β_i Δy_{t-i} + e_t and test H₀: γ = 0. If the null is rejected, there is no unit root. If the null is not rejected, test a₀ = 0 given γ = 0. If the drift is not significant, go to step (4); if the drift is significant, test for the presence of a unit root using the standard normal distribution.

(4) Estimate Δy_t = γy_{t-1} + Σ β_i Δy_{t-i} + e_t and test H₀: γ = 0. If the null is rejected, there is no unit root; if it is not rejected, the series contains a unit root.

b. Graphical Method
Plotting the original series:

[Figure: plot of the CPI series, Jan 2001 to Dec 2007.]

Clearly the mean and variance of CPI are changing over time. This implies that our series is non-stationary. A formal check with the augmented Dickey-Fuller test is sketched below.
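As an illustration, the ADF test can be run with statsmodels; `regression="ct"` corresponds to equation (1) above, `"c"` to (2) and `"n"` to (3). The series here is synthetic, standing in for the study's CPI data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
cpi = pd.Series(3 + np.cumsum(rng.normal(0, 0.3, 84)))  # random-walk-like stand-in

for name, series in [("CPI", cpi), ("d(CPI)", cpi.diff().dropna())]:
    stat, pvalue, *_ = adfuller(series, regression="ct")
    print(f"{name}: ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
```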

PLOTTING THE CORRELOGRAM

[Figure: correlogram (ACF) of the CPI series in levels.]


High values around the line of significance indicate non-stationarity. We cannot use CPI in levels as the dependent variable; therefore we take the first difference, D(CPI).

Plotting the first difference of CPI:

[Figure: plot of d(CPI), the first difference of the CPI series.]

The first difference of CPI seems to be stationary, but we need to confirm this using the ACF and PACF functions, unit root tests and random walk model tests.

IDENTIFYING THE ORDER OF AR & MA
In order to identify the order of the AR and MA components, we can look at the correlogram plotted above to get some indication. We additionally plot the ACF and PACF:

[Figure: ACF and PACF of d(CPI), lags 1 to 31.]

Here we notice that the order of the AR component can be gauged by looking at the partial autocorrelation function (PACF), whereas the autocorrelation function (ACF) identifies the order of the MA component. The PACF of d(CPI) suggests that an ARIMA model should be fitted, with the order of integration being one. Since the PACF at lag lengths 1, 2, 12 and 13 is significantly different from zero, d(CPI) seems to be an AR process of order 1, 2 or 12. As for q, we need to try out several values because the ACF is significant up to the 13th lag. To confirm the order of the AR and MA processes we need to compare estimates of different orders of AR and MA.

INVERTIBILITY
The different orders of the AR and MA processes are estimated and compared taking invertibility into account. A series y_t is said to be invertible if it can be represented by a finite-order or convergent autoregressive process. Invertibility is important because the use of the ACF and PACF implicitly assumes that the y_t sequence can be represented by an autoregressive model. The models estimated were found to be invertible, with finite invertible roots.

COMPARISON OF DIFFERENT ORDERS OF AR & MA
We tried many permutations and combinations of ARIMA(p,1,q) and finally ended up with the following models (all of which satisfy the invertibility condition):

S.No.  Model          AIC       BIC       Significant variables
1      ARIMA(1,1,1)   0.518085  0.637344  MA(1)
2      ARIMA(1,1,2)   0.309824  0.468836  MA(2), AR(1)
3      ARIMA(1,1,3)   0.464677  0.663442  AR(1), MA(3), MA(2), MA(1)
4      ARIMA(1,1,4)   0.382699  0.621217  AR(1), MA(3), MA(1), MA(2)
5      ARIMA(1,1,5)   0.292015  0.570286  AR(1), MA(4), MA(2)
6      ARIMA(2,1,1)   0.431230  0.591822  None significant at the 10% level
7      ARIMA(2,1,2)   0.148223  0.348964  AR(2), MA(2), MA(1)
8      ARIMA(2,1,3)   0.362461  0.603349  MA(3), AR(2), MA(2), AR(1)
9      ARIMA(2,1,4)   0.232530  0.513566  AR(1), MA(2), MA(1), AR(2)
10     ARIMA(12,1,1)  0.072888  0.695028  AR(12)

We compared the models in terms of (i) the AIC and the SBC and (ii) the significance of the coefficients.

While the AR process of order 12 still lowers the AIC, its BIC rises considerably. We therefore take ARIMA(2,1,2) as our forecasting model: setting the order-12 model aside, it has the lowest AIC and BIC, and all its coefficients are statistically significant, as suggested by their low p-values. Overfitting and underfitting checks, trying combinations such as ARIMA(1,1,2), ARIMA(2,1,1) and ARIMA(2,1,3), do not improve the fit much, as can be seen from the indicators mentioned earlier.

ARIMA(2,1,2) MODEL
Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:33
Sample (adjusted): 2001:04 2004:12
Included observations: 45 after adjusting endpoints
Convergence achieved after 30 iterations
Backcast: 2001:02 2001:03

Variable   Coefficient  Std. Error  t-Statistic  Prob.
C          -0.014524    0.053299    -0.272507    0.7866
AR(1)      -0.260571    0.105989    -2.458479    0.0184
AR(2)      -0.735207    0.107479    -6.840480    0.0000
MA(1)       0.953543    0.031515    30.25661     0.0000
MA(2)       0.979856    0.000964    1016.256     0.0000

R-squared           0.526545   Mean dependent var     0.006667
Adjusted R-squared  0.479200   S.D. dependent var     0.342716
S.E. of regression  0.247326   Akaike info criterion  0.148223
Sum squared resid   2.446814   Schwarz criterion      0.348964
Log likelihood      1.664973   F-statistic            11.12135
Durbin-Watson stat  1.851120   Prob(F-statistic)      0.000004
Inverted AR Roots   -.13+.85i  -.13-.85i
Inverted MA Roots   -.48-.87i  -.48+.87i
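For reference, the chosen specification can be estimated in a few lines with statsmodels, continuing with the synthetic `cpi` series from the unit-root sketch above as a stand-in for the study's data:

```python
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

res = ARIMA(cpi, order=(2, 1, 2)).fit()       # ARIMA(p=2, d=1, q=2) on the level series
print(res.summary())                           # coefficients plus AIC/BIC for comparison
print(res.forecast(steps=12))                  # out-of-sample forecasts
print(acorr_ljungbox(res.resid, lags=[12]))    # Ljung-Box check on the residuals
```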


DIAGNOSTIC CHECKS
1. Graph of residuals:

[Figure: actual, fitted and residual plot for the ARIMA(2,1,2) model, 2001:07 to 2004:12.]

2. Correlogram of residuals: Owing to the insignificant values of the Ljung-Box Q-statistic, we do not reject the null hypothesis of no serial correlation.

3. Structural stability: For the AR process to be valid we require structural stability. For this purpose we run the Chow stability test (breakpoint test). Here we do not reject the null hypothesis of no structural break, confirming the structural stability of our model.

MULTIVARIATE TECHNIQUES


In the multivariate analysis we use two techniques; in both, we calculate 1-month, 3-month, 6-month and 12-month-ahead forecasts. The two techniques are:

A. SINGLE EQUATION ESTIMATION
In the single equation estimation technique we run an ordinary least squares (OLS) regression of the percentage change in CPI over the previous year on six independent variables initially. The forecasts are calculated after correcting for serial correlation and heteroscedasticity.

REGRESSION MODEL
Y_t = β₀ + β₁IMP + β₂M1 + β₃IIP + β₄UNEMP + β₅TBILL + β₆AR(1) + ε_t

where
IMP = percentage change in import prices
M1 = money supply
IIP = Index of Industrial Average
UNEMP = unemployment rate
TBILL = T-bill rate
AR(1) is used for correcting first-order serial correlation.

Independent variables and their expected signs:

Unemployment Rate (UNEMP): the percentage of the work force that is unemployed at any given date. Data source: www.economagic.com. Expected sign: negative. Higher unemployment implies the availability of surplus labour in the economy, hence a negative relationship between the two.

Money Supply (M1): defined as currency plus demand deposits; it is regulated and controlled by the Federal Reserve. Data source: www.economagic.com. Expected sign: positive. With higher money supply, aggregate demand in the economy will be higher, leading to higher rates of inflation.

Percentage Change in Import Prices (IMP): the change in the price of imports (including raw materials, intermediate goods and commodities) relative to the change in domestic prices; it reflects demand pressures in the economy. Data source: Bureau of Labor Statistics. Expected sign: positive. Higher import prices raise the cost of production (through dearer raw materials), leading to higher inflation.

3-Month T-Bill Rate in the Secondary Market (TBILL): we use the Treasury bill discount rate as a proxy for the rate of interest in the economy. Like zero-coupon bonds, T-bills do not pay a rate of interest but are sold at a discount to create a positive yield to maturity. Data source: www.economagic.com. Expected sign: positive. Higher T-bill rates imply a higher cost of investment in the economy, which translates into higher prices of goods and services.

Index of Industrial Average (IIP): an index that shows how 30 large, publicly owned companies based in the United States have traded during a standard trading session in the stock market. Data source: www.economagic.com. Expected sign: negative. A higher index implies higher production in the economy, hence lower prices and inflation.

The expected signs of the independent variables, as explained above, are summarized in the following table:

Independent variable   Expected sign
IMP                    +
M1 (money supply)      +
TBILL                  +
IIP                    -
UNEMP                  -

FINAL MODEL
Y = β₀ + β₁IMP + β₂IIP + β₃M1 + β₄TBILL + β₅UNEMP + e

The output is given below:

Dependent Variable: CPI
Method: Least Squares
Sample: 2001:01 2004:12


Included observations: 48

Variable   Coefficient  Std. Error  t-Statistic  Prob.
C           21.74381    5.722040     3.800010    0.0005
IMP          0.157876   0.020366     7.751774    0.0000
INDAVG      -0.161944   0.052386    -3.091386    0.0035
M1          -0.720882   0.547567    -1.316519    0.1951
TBILL        0.217346   0.097650     2.225761    0.0315
UNEMP       -0.633283   0.196889    -3.216456    0.0025

R-squared           0.824420   Mean dependent var     2.337500
Adjusted R-squared  0.803518   S.D. dependent var     0.723945
S.E. of regression  0.320898   Akaike info criterion  0.681083
Sum squared resid   4.324976   Schwarz criterion      0.914983
Log likelihood     -10.34598   F-statistic            39.44143
Durbin-Watson stat  0.965086   Prob(F-statistic)      0.000000

Since the adjusted R-squared and the R-squared are both high (greater than 0.7), the model fits well. Additionally, the low values of the Akaike information criterion and the Schwarz criterion indicate that the variables included are relevant.

DIAGNOSTIC TESTS
(A) Test for Autocorrelation: The low Durbin-Watson statistic (0.97, well below 2) indicates positive autocorrelation. To correct for this we applied an AR(1) process. The Durbin-Watson statistic has now become 1.74, which is close enough to 2.

Date: 02/03/10  Time: 16:41
Sample (adjusted): 2001:02 2004:12
Included observations: 48
Convergence achieved after 27 iterations

Variable   Coefficient  Std. Error  t-Statistic  Prob.
C            6.477821   8.556915     0.757028    0.4535
IMP          0.143111   0.032100     4.458231    0.0001
INDAVG      -0.043508   0.082963    -0.524423    0.6029
M1          -0.508573   0.344268    -1.477258    0.1474
T_BILL01     0.150514   0.134568     1.118492    0.2700
UNEMP       -0.028426   0.173850    -0.163509    0.8709
AR(1)        0.872810   0.101232     8.621896    0.0000

R-squared           0.885422   Mean dependent var     2.308511
Adjusted R-squared  0.868235   S.D. dependent var     0.703046
S.E. of regression  0.255202   Akaike info criterion  0.243078
Sum squared resid   2.605114   Schwarz criterion      0.518632
Log likelihood      1.287660   F-statistic            51.51785
Durbin-Watson stat  1.743239   Prob(F-statistic)      0.000000
Inverted AR Roots   .87

(B) Test for Heteroscedasticity: To test for the presence of heteroscedasticity in the data we performed White's heteroscedasticity test, under which
H₀: no heteroscedasticity
Hₐ: presence of heteroscedasticity
By the decision rule, if Obs*R-squared has a very small p-value, the null hypothesis is rejected. Looking at the output, the p-value is very high, so the null hypothesis cannot be rejected; hence there is no heteroscedasticity.

White Heteroscedasticity Test:
F-statistic     0.684459   Probability   0.731580
Obs*R-squared   7.508433   Probability   0.676730

Test Equation:
Dependent Variable: RESID^2
Method: Least Squares
Date: 02/03/10  Time: 14:13
Sample: 2001:02 2004:12
Included observations: 47

Variable   Coefficient  Std. Error  t-Statistic  Prob.
C           -2.069356   58.91648    -0.035124    0.9722
IMP          0.004029    0.006029    0.668263    0.5082
IMP^2        0.000399    0.000766    0.520486    0.6059
INDAVG       0.071380    1.146981    0.062233    0.9507
INDAVG^2    -0.000385    0.005604   -0.068744    0.9456
M1          -0.164202    0.131823   -1.245629    0.2209
M1^2        -0.910335    0.616364   -1.476943    0.1484
TBILL        0.019675    0.055060    0.357334    0.7229
TBILL^2     -0.009361    0.011238   -0.832912    0.4104
UNEMP       -0.369500    0.731885   -0.504860    0.6167
UNEMP^2      0.029131    0.066730    0.436548    0.6650

R-squared           0.159754   Mean dependent var     0.054004
Adjusted R-squared -0.073648   S.D. dependent var     0.067316
S.E. of regression  0.069751   Akaike info criterion -2.286305
Sum squared resid   0.175149   Schwarz criterion     -1.853292
Log likelihood      64.72817   F-statistic            0.684459
Durbin-Watson stat  2.169865   Prob(F-statistic)      0.731580

(C) Test for Multicollinearity: Checking for multicollinearity using the correlation matrix, we notice high correlation between the unemployment rate and the T-bill rate, as well as between the percentage change in import prices and the Index of Industrial Average. However, the correlations are not high enough to warrant dropping variables; moreover, the intuitive appeal of these variables is too strong for them to be dropped.

         CPI     IMP     INDAVG  M1      TBILL   UNEMP
CPI      1.000   0.548   0.509   0.143   0.506   0.584
IMP      0.548   1.000   0.855   0.212   0.257   0.153
INDAVG   0.509   0.855   1.000   0.143   0.065   0.072
M1       0.143   0.212   0.143   1.000   0.118   0.047
TBILL    0.506   0.257   0.065   0.118   1.000   0.889
UNEMP    0.584   0.153   0.072   0.047   0.889   1.000

(values rounded to three decimal places)
WAY TO THE FINAL REGRESSION MODEL


Y = a + b1unemp + b0 iip + b3 Tbill + b4 M1 + b5 imp+e

To correct for serial correlation

Y = a + b1unemp + b0 iip + b3 Tbill + b4 M1 + b5 imp + cpi(-1)+e B. SIMULTANEOUS EQUATION MODEL

1. In this technique, we form a model of two simultaneous equations.

Regression equation, as explained above:
CPI_t = β₀ + β₁IMP + β₂IIP + β₃UNEMP + β₄TBILL + β₅CPI(-1) + β₆M1 + ε_t

Change in import prices equation:
IMP = α₀ + α₁CPI_t + α₂IIP + α₃TBILL + α₄ER + υ_t

where ER is the exchange rate (defined below) and all other variables are as explained above. So, in this model:
Number of equations (G) = 2
Endogenous variables: CPI, IMP
Exogenous variables: CPI(-1), TBILL, M1, ER, IIP, UNEMP

2. Checking Order and Rank Conditions for Identification
There are two conditions that have to be satisfied for identification:

a) The order condition is a necessary condition for identification. For this condition to be satisfied, the number of excluded variables must be at least as large as G-1. If exactly (G-1) variables are excluded, the equation is exactly identified; if more than (G-1) variables are excluded, the equation is overidentified.

Equation   No. of excluded variables   Identification
I          1                           Exactly identified
II         3                           Overidentified

b) The rank condition constitutes a sufficient condition for identification. In a model of G linear equations, an equation is identified if and only if at least one non-zero (G-1)x(G-1) determinant is contained in the array of coefficients of those variables that are excluded from the equation in question but appear in the other equation. This condition is also satisfied in our model. Therefore, the model can be estimated.


3. Test for Simultaneity
We conducted the Hausman test to check for simultaneity. This is important because, if simultaneity is present, running an OLS regression will give inconsistent estimators; in that case, instrumental variable estimation/2SLS gives consistent and efficient estimators. A regression-based sketch of the test is given below.
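A regression-based version of the test (the Durbin-Wu-Hausman form) regresses the suspected endogenous regressor on all exogenous variables and then checks whether its reduced-form residual enters the structural equation significantly. A sketch, continuing the hypothetical `df`, with columns er and cpi_lag assumed to hold the exchange rate and lagged CPI:

```python
import statsmodels.api as sm

instruments = sm.add_constant(df[["iip", "unemp", "tbill", "m1", "er", "cpi_lag"]])
stage1 = sm.OLS(df["imp"], instruments).fit()   # reduced form for suspected-endogenous IMP
df["v_hat"] = stage1.resid

struct = sm.OLS(df["cpi"], sm.add_constant(
    df[["imp", "iip", "unemp", "tbill", "m1", "cpi_lag", "v_hat"]])).fit()
print(struct.pvalues["v_hat"])   # significant => IMP is endogenous, OLS inconsistent
```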

4. Estimating the Model
We used 2SLS to estimate the equations. 2SLS is a very useful estimation procedure for obtaining the values of structural parameters in an overidentified equation. Since our model contains an overidentified equation, we find 2SLS to be the appropriate estimation technique.

How we arrived at the simultaneous equation model: starting from the regression model, we checked for possible reverse causation of each independent variable with the dependent variable:

T-bill: The Treasury bill rate has been used as a proxy for the interest rate in the U.S. economy. Since these rates are set by the Federal Reserve, we may take the T-bill rate to be an exogenous variable.

M1: Money supply in an economy = money multiplier x high-powered money (H). While the multiplier depends on certain behavioural parameters, H is influenced by endogenous and exogenous factors, the former governed by the government's fiscal and monetary desires. Hence, we may take M1 to be an exogenous variable.

CPI(-1): The lagged value of CPI has been used as a correction for serial correlation, and since lagged values are already known, we can treat CPI(-1) as an exogenous factor.

ER: The exchange rate between the U.S. dollar and the euro has been taken, since there exists a strong correlation between the U.S. and European markets. Because it is influenced by factors external to the economy, we have taken ER to be an exogenous variable.


IIP: A high interest rate discourages investment and hence puts a downward pressure on IIP. Thus, IIP
is also an endogenous variable.

IMP: Change in import prices is expected to be influenced by rate of inflation (percentage change in
CPI). Hence, we have taken IMP again as an endogenous variable.

UNEMP: The unemployment rate has a clear-cut relation with the rate of inflation through Phillips curve analysis and is therefore an endogenous variable.

Hence, the possible endogenous variables are IIP, IMP and UNEMP (three in all). A Hausman test conducted to test for simultaneity labels IIP and UNEMP as exogenous and IMP as endogenous. We accept the results of the Hausman test and take IMP and CPI as the two simultaneous variables:

CPI_t = β₀ + β₁IMP + β₂IIP + β₃UNEMP + β₄TBILL + β₅CPI(-1) + β₆M1 + ε_t
IMP = α₀ + α₁CPI_t + α₂IIP + α₃TBILL + α₄ER + υ_t
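The mechanics of 2SLS can be sketched directly with numpy: stage one projects each structural regressor on the full instrument set, stage two regresses the dependent variable on the fitted values. A minimal sketch giving point estimates only (no corrected standard errors):

```python
import numpy as np

def two_sls(y, X, Z):
    """2SLS point estimates. X: structural regressors (constant included),
    Z: all exogenous variables / instruments (constant included)."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # stage 1: fitted values of X
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # stage 2: regress y on fitted X
```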


SECTION IV
EVALUATION OF FORECASTS
For CPI, a search for the best forecasting model is conducted. The best model is defined as the one that produces the most accurate forecasts, such that the predicted levels are close to the actual realized values. Furthermore, the predicted variable should move in the same direction as the actual series: if the series is rising (falling), the forecasts should reflect the same direction of change, and if the series is changing direction, the forecasts should identify this. To select the best model, the alternative models are initially estimated using monthly data over the period Jan 2001 through Dec 2004, and out-of-sample forecasts up to 24 months ahead are made from Jan 2005 through Dec 2006. In other words, by continuously updating and re-estimating, a real-world forecasting exercise is conducted to see how the models perform.

UNIVARIATE METHODS

A. EXPONENTIAL SMOOTHING METHODS

1. SIMPLE EXPONENTIAL SMOOTHING

[Figure: actual CPI with 1-, 3-, 6- and 12-month-ahead simple exponential smoothing forecasts, Dec 2004 to Dec 2006.]


             1-PRD    3-PRD     6-PRD     12-PRD
RMSE         0.585    1.0877    1.165     1.693
RMSPE        24.155   53.047    54.557    82.275
MAPE         16.268   31.04954  37.86735  57.02683
THEIL'S U2   0.697    0.852     0.910     0.988

Looking at the graph, one cannot unambiguously tell which forecast horizon gives the best forecasts. Intuitively, one would expect errors to increase as the forecasting horizon increases, and this can be inferred from the table above: RMSE, MAPE and RMSPE all increase with the forecasting horizon. On comparing these three measures of accuracy, one finds that one-period-ahead forecasts are best, as they have the minimum error.

Looking at Theil's coefficient, it is less than one at every horizon and smallest for the one-period-ahead forecasts. This implies that simple exponential smoothing outperforms the naive method (in which the most recent actual value is used as the forecast for the next period), though the advantage shrinks as the horizon lengthens and the coefficient approaches one. Thus, overall, one-period-ahead forecasts turn out to be best.
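The four accuracy measures used throughout this section can be computed as below. The Theil's U2 form here compares forecast errors with those of the naive no-change forecast; it is one common definition and may differ in detail from the one used in the study.

```python
import numpy as np

def accuracy(actual, forecast):
    """Return RMSE, RMSPE (%), MAPE (%) and Theil's U2 versus the naive forecast."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    err = f - a
    rmse = np.sqrt(np.mean(err ** 2))
    rmspe = 100 * np.sqrt(np.mean((err / a) ** 2))
    mape = 100 * np.mean(np.abs(err / a))
    # U2 < 1: the method beats the naive "next value = current value" forecast
    u2 = np.sqrt(np.sum(((f[1:] - a[1:]) / a[:-1]) ** 2)
                 / np.sum(((a[1:] - a[:-1]) / a[:-1]) ** 2))
    return rmse, rmspe, mape, u2
```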

2. HOLT'S LINEAR TREND ALGORITHM (NO SEASONAL)

[Figure: actual CPI with 1-, 3-, 6- and 12-month-ahead Holt-Winters no-seasonal forecasts, Dec 2004 to Dec 2006.]

             1-PRD    3-PRD    6-PRD    12-PRD
RMSE         0.595    1.1508   1.238    1.844
RMSPE        24.022   54.146   52.437   93.464
MAPE         16.523   32.990   39.644   60.715
THEIL'S U2   0.709    0.901    0.967    1.076

Looking at the graph, one cannot unambiguously tell which forecasting horizon provides the best results. From the table one infers that the errors do not all increase monotonically with the forecasting horizon; still, one-period-ahead forecasts are best on the basis of the first three accuracy measures. Theil's coefficient is below one for all but the twelve-month horizon and is lowest for the one-period-ahead forecasts, implying the technique beats the naive method at those horizons. For the twelve-month horizon Theil's coefficient exceeds one, which means there is no point in using the formal forecasting method there, since the naive method would produce better results.

3. WINTERS' ADDITIVE EXPONENTIAL SMOOTHING

[Figure: actual CPI with 1-, 3-, 6- and 12-month-ahead Holt-Winters additive forecasts, Dec 2004 to Dec 2006.]

             1-PRD    3-PRD    6-PRD    12-PRD
RMSE         0.644    1.192    1.248    1.472
RMSPE        26.800   58.873   59.599   75.646
MAPE         17.992   34.553   40.760   51.358
THEIL'S U2   0.754    0.934    0.975    0.859

Looking at the graph, one cannot unambiguously tell which forecasting horizon provides the best results, so we turn to the four measures of accuracy (RMSE, MAPE, RMSPE and Theil's U2) in the table above. Again, the first three errors increase as the forecasting horizon increases, so one-period-ahead forecasts are best. Theil's coefficient is less than one at every horizon and smallest for one period ahead, implying Winters' additive exponential smoothing outperforms the naive method. Overall, one-period-ahead forecasts are best.

4. BROWN'S DOUBLE EXPONENTIAL SMOOTHING

[Figure: actual CPI with 1-, 3-, 6- and 12-month-ahead Brown's double exponential smoothing forecasts, Dec 2004 to Dec 2006.]

             1-PRD    3-PRD    6-PRD    12-PRD
RMSE         0.784    1.817    2.379    4.689
RMSPE        30.168   77.321   80.003   241.152
MAPE         21.212   53.646   62.156   145.298
THEIL'S U2   0.926    1.424    1.858    2.736

From the graph one cannot unambiguously tell which forecasts are best. From the table, RMSE, MAPE and RMSPE increase as the forecasting horizon increases; thus one-period-ahead forecasts give the minimum errors. Theil's coefficient is less than one for the one-period horizon, indicating that the forecasting technique is better than the naive method there; for all other horizons it is greater than one, indicating no point in using the formal method. On the whole, one-period-ahead forecasts turn out to be best, as they have the least RMSE, MAPE and RMSPE, with Theil's coefficient less than one.

5. WINTERS' MULTIPLICATIVE EXPONENTIAL SMOOTHING

[Figure: actual CPI with 1-, 3-, 6- and 12-month-ahead Holt-Winters multiplicative forecasts, Dec 2004 to Dec 2006.]

             1-PRD    3-PRD    6-PRD    12-PRD
RMSE         0.652    1.250    1.296    1.472
RMSPE        26.540   62.284   62.109   75.626
MAPE         17.690   36.448   41.312   51.348
THEIL'S U2   0.772    0.979    1.012    0.860

From the graph one cannot unambiguously tell which forecasts are best. From the table one infers that the errors do not increase monotonically with the forecasting horizon. Theil's coefficient is less than one for the one-period-ahead horizon (and close to one at longer horizons), indicating that the technique beats the naive method at short horizons. Overall, one can conclude that one-period-ahead forecasts are best, as they have the minimum RMSE, RMSPE and MAPE with a Theil's coefficient below one.

B. DECOMPOSITION

1. MULTIPLICATIVE

[Figure: actual CPI with 1-, 3-, 6- and 12-month-ahead multiplicative decomposition forecasts, Dec 2004 to Dec 2006.]

             1-PRD    3-PRD    6-PRD    12-PRD
RMSE         1.033    1.142    1.228    1.273
RMSPE        46.455   50.882   51.902   52.452
MAPE         31.310   35.417   39.372   44.776
THEIL'S U2   1.209    0.895    0.959    0.743

From the graph one can see that the forecasts are not close to the actual series, indicating high forecast errors. Looking at the four measures of accuracy (RMSE, MAPE, RMSPE and Theil's U2) in the table above, the first three errors again increase as the forecasting horizon increases, so one-period-ahead forecasts are best on those measures. Theil's coefficient, however, exceeds one for the one-period-ahead forecasts (falling below one at longer horizons), implying that at short horizons the naive method would produce better results than this formal forecasting method.

2. ADDITIVE

             1-PRD    3-PRD     6-PRD    12-PRD
RMSE         0.987    1.195     1.282    1.503
RMSPE        45.171   51.17651  52.630   65.173
MAPE         30.217   36.814    40.816   52.179
THEIL'S U2   1.151    0.936     1.001    0.877

From the graph one can see that the forecasts are not close to the actual series, indicating high forecast errors. From the table, the errors increase as the forecasting horizon increases. Theil's coefficient exceeds one for the one-period-ahead horizon (and is around or below one thereafter), indicating that a naive method would do at least as well at short horizons.


C. ARIMA

[Figure: actual CPI with 1-, 3-, 6- and 12-step-ahead ARIMA forecasts, Dec 2004 to Dec 2006.]

             1-PRD    3-PRD    6-PRD    12-PRD
RMSE         0.737    1.208    1.415    2.115
RMSPE        25.911   55.187   56.807   101.073
MAPE         17.409   33.690   39.573   68.987
THEIL'S U2   1.270    1.120    1.3153   1.554

From the graph one can infer that the forecasts are not close to the actual series at any horizon. Looking at the table, RMSE, MAPE and RMSPE increase with the forecasting horizon, suggesting that the longer the horizon, the less precise the estimates. Theil's coefficient is greater than one at every horizon, indicating that naive forecasts would give better results. Within the ARIMA forecasts, the one-period-ahead forecasts are still the best, as they have the minimum RMSE, RMSPE and MAPE.


MULTIVARIATE TECHNIQUES

A. SINGLE EQUATION MODEL

[Figure: actual CPI with 1-, 3-, 6- and 12-month-ahead single equation (OLS) forecasts, Dec 2004 to Dec 2006.]

             1-PRD    3-PRD    6-PRD    12-PRD
RMSE         0.484    0.558    0.592    0.581
RMSPE        22.503   26.789   28.070   28.399
MAPE         14.123   16.758   19.350   20.713
THEIL'S U2   0.580    0.437    0.462    0.339

Looking at the graph, one cannot unambiguously tell which forecast horizons give the best results. From the table, the errors generally increase with the forecasting horizon, so one-period-ahead forecasts are best on the first three measures. Theil's coefficient is less than one for all horizons, implying the single equation method gives forecasts better than the naive method.

B. SIMULTANEOUS EQUATION MODEL

1. 2SLS

[Figure: actual CPI with 1-, 3-, 6- and 12-month-ahead 2SLS forecasts, Dec 2004 to Dec 2006.]

             1-PRD    3-PRD    6-PRD    12-PRD
RMSE         1.299    1.386    1.530    2.006
RMSPE        65.182   70.457   78.078   104.876
MAPE         41.034   45.302   51.151   73.458
THEIL'S U2   2.072    1.274    1.312    1.380

Looking at the graph, one cannot unambiguously tell which forecast horizons give the best results. From the table, the first three errors increase as the forecasting horizon increases; thus one-period-ahead forecasts are best. Theil's coefficient is greater than one for all horizons, implying there is no point in using this formal forecasting method, since the naive method would produce better results.

COMPARISON ACROSS MODELS


A. ONE-MONTH-AHEAD FORECASTS

[Figure: one-period-ahead forecasts from all ten methods against the actual CPI series, Jan 2005 to Jan 2007.]

S.No  Method  RMSE   RMSPE   MAPE    Theil's U2
1     HNS     0.595  24.022  16.523  0.710
2     BDS     0.785  30.168  21.212  0.926
3     WMS     0.652  26.540  17.690  0.772
4     WAS     0.644  26.800  17.993  0.754
5     SES     0.586  24.155  16.268  0.697
6     DM      1.034  46.455  31.311  1.209
7     DA      0.988  45.171  30.217  1.151
8     OLS     0.485  22.504  14.124  0.580
9     2SLS    1.299  65.182  41.034  2.073
10    ARIMA   0.737  25.912  17.409  1.270

From the graph we infer that the forecast values generated by simple exponential smoothing, Winters' additive, Winters' multiplicative, Holt's no-seasonal and Brown's exponential smoothing are close to the actual values, indicating good forecasts. The same can be inferred from the table above: these methods have low errors, which are quite close to one another. Theil's coefficient is less than one for all methods except the decomposition methods, 2SLS and ARIMA, implying that the remaining forecasting techniques beat the naive method. Note that the best forecasts are given by the OLS method, implying that the model considered has strong predictive power.


We also see that 2SLS has the highest errors, which is consistent with its forecast series shown in the graph above.

B. THREE-PERIOD-AHEAD FORECASTS

[Figure: three-period-ahead forecasts from all ten methods against the actual CPI series, Dec 2004 to Dec 2006.]

S.No  Method  RMSE   RMSPE   MAPE    Theil's U2
1     HNS     1.151  54.147  32.990  0.902
2     BDS     1.818  77.322  53.646  1.425
3     WMS     1.250  62.284  36.449  0.980
4     WAS     1.192  58.874  34.554  0.934
5     SES     1.088  53.047  31.050  0.852
6     DM      1.143  50.883  35.418  0.896
7     DA      1.195  51.177  36.815  0.937
8     OLS     0.559  26.789  16.758  0.438
9     2SLS    1.386  70.457  45.303  1.274
10    ARIMA   1.208  55.188  33.691  1.120

From the graph we infer that the forecast values generated by the smoothing methods are no longer close to the actual values, and the errors across methods are no longer close to one another. Theil's coefficient is greater than one for Brown's double smoothing, 2SLS and ARIMA, and is smallest by far for OLS. We also see that Brown's exponential smoothing has the highest errors, which is consistent with its forecast series in the graph above.


C. SIX-PERIOD-AHEAD FORECASTS

[Figure: six-period-ahead forecasts from all ten methods against the actual CPI series, Dec 2004 to Dec 2006.]

S.No  Method  RMSE   RMSPE   MAPE    Theil's U2
1     HNS     1.239  52.438  39.644  0.968
2     BDS     2.379  80.003  62.156  1.859
3     WMS     1.297  62.109  41.312  1.013
4     WAS     1.249  59.600  40.760  0.976
5     SES     1.165  54.558  37.867  0.910
6     DM      1.229  51.902  39.373  0.960
7     DA      1.282  52.631  40.817  1.002
8     OLS     0.592  28.071  19.350  0.463
9     2SLS    1.530  78.078  51.151  1.313
10    ARIMA   1.416  56.807  39.577  1.315

From the graph we infer that the forecast series for all methods apart from OLS are not close to the actual series. Forecasts generated through Brown's exponential smoothing and 2SLS have the highest errors. Theil's coefficient is around or above one for most methods, and clearly above one for Brown's, 2SLS and ARIMA, implying that forecasts equal to the most recent values would have done about as well; for OLS, however, it is well below one, indicating that the OLS forecasts decisively beat the naive method.


D. TWELVE-PERIOD-AHEAD FORECASTS

[Figure: twelve-period-ahead forecasts from all ten methods against the actual CPI series, Dec 2004 to Dec 2006.]

S.No  Method  RMSE   RMSPE    MAPE     Theil's U2
1     HNS     1.844  93.464   60.715   1.076
2     BDS     4.690  241.153  145.298  2.737
3     WMS     1.472  75.627   51.348   0.859
4     WAS     1.473  75.646   51.359   0.859
5     SES     1.693  82.276   57.027   0.988
6     DM      1.274  52.452   44.776   0.743
7     DA      1.503  65.173   52.179   0.877
8     OLS     0.582  28.399   20.717   0.339
9     2SLS    2.006  104.876  73.458   1.381
10    ARIMA   2.115  101.074  68.985   2.446

From the above table one infers that OLS shows the smallest errors, while Brown's exponential smoothing again has the highest. Thus, even among the twelve-period-ahead forecasts, OLS turns out to be the best.

CONCLUSIONS & IMPLICATIONS


The results we obtained show that economic theory is applicable in the real world, since movements in interest rates, money supply, output, unemployment and interactions in international markets affect our dependent variable. However, we notice that only the one-step-ahead forecasts had the required predictive power across all methods. This was expected, since inflation fluctuates rapidly; hence the three-, six- and twelve-month forecast horizons failed to produce good forecasts.

Another point to note is that the best forecasts of our dependent variable were obtained with the ordinary least squares method, among the various univariate and multivariate techniques. The reason could be that the OLS model captures the current economic scenario through the various independent variables included, which the other methods could not. Also, the consistently poor performance of the 2SLS method leads us to believe that perhaps we have not accounted for all the endogenous variables that should have been included.

Among the univariate techniques, Holt's linear trend (no seasonal) algorithm gave the best results, while the forecast errors for Brown's exponential smoothing method were very high. This suggests that the series for our forecast variable exhibits a general trend but no seasonality. Though both methods of decomposition have high forecast errors, multiplicative decomposition gave marginally better results.

Since OLS gave the best results, we can conclude that the expectations-augmented Phillips curve fits our sample data well, which is in consonance with the large literature available to us.
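For completeness, the expectations-augmented Phillips curve referred to above can be written in its standard textbook form (the notation below is generic; our estimated equation also includes the money, output and foreign-market variables listed earlier):

    \pi_t = \pi_t^{e} - \beta (u_t - u_n) + \varepsilon_t

where \pi_t is inflation, \pi_t^{e} is expected inflation (commonly proxied by lagged inflation), u_t is the unemployment rate, u_n the natural rate, and \varepsilon_t a supply shock.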


APPENDICES

APPENDIX A: Reporting the results

a. SIMPLE EXPONENTIAL SMOOTHING

Month     Actual    1-step    3-step    6-step    12-step
Jan-05    3.000     3.300
Feb-05    3.100     3.000
Mar-05    3.200     3.100     3.300
Apr-05    3.400     3.200     3.000
May-05    2.800     3.400     3.100
Jun-05    2.500     2.801     3.200     3.300
Jul-05    3.000     2.500     3.400     3.000
Aug-05    3.600     3.000     2.801     3.100
Sep-05    4.700     3.599     2.500     3.200
Oct-05    4.400     4.699     3.000     3.400
Nov-05    3.500     4.400     3.599     2.801
Dec-05    3.400     3.501     4.699     2.500     3.300
Jan-06    4.000     3.400     4.400     3.000     3.000
Feb-06    3.700     3.999     3.501     3.599     3.100
Mar-06    3.500     3.700     3.400     4.699     3.200
Apr-06    3.600     3.500     3.999     4.400     3.400
May-06    4.000     3.600     3.700     3.501     2.801
Jun-06    4.200     4.000     3.500     3.400     2.500
Jul-06    4.100     4.200     3.600     3.999     3.000
Aug-06    3.900     4.100     4.000     3.700     3.599
Sep-06    2.100     3.900     4.200     3.500     4.699
Oct-06    1.400     2.102     4.100     3.600     4.400
Nov-06    2.000     1.401     3.900     4.000     3.501
Dec-06    2.500     1.999     2.102     4.200     3.400
Jan-07    2.100     2.499     1.401     4.100     3.999

b. HOLT'S LINEAR TREND ALGORITHM (NO SEASONAL)

Month     Actual    1-step    3-step    6-step    12-step
Jan-05    3.000     3.263
Feb-05    3.100     2.963
Mar-05    3.200     3.076     3.188
Apr-05    3.400     3.176     2.888
May-05    2.800     3.373     3.028
Jun-05    2.500     2.773     3.128     3.075
Jul-05    3.000     2.444     3.319     2.775
Aug-05    3.600     2.944     2.719     2.956
Sep-05    4.700     3.614     2.333     3.056
Oct-05    4.400     4.795     2.833     3.238
Nov-05    3.500     4.458     3.641     2.638
Dec-05    3.400     3.468     4.985     2.167     2.850
Jan-06    4.000     3.347     4.573     2.667     2.550
Feb-06    3.700     4.004     3.405     3.682     2.812
Mar-06    3.500     3.652     3.240     5.270     2.912
Apr-06    3.600     3.452     4.013     4.746     3.077
May-06    4.000     3.559     3.555     3.310     2.477
Jun-06    4.200     3.959     3.355     3.080     1.833
Jul-06    4.100     4.209     3.478     4.027     2.333
Aug-06    3.900     4.107     3.878     3.410     3.763
Sep-06    2.100     3.879     4.227     3.210     5.839
Oct-06    1.400     2.047     4.120     3.356     5.092
Nov-06    2.000     1.351     3.838     3.756     3.120
Dec-06    2.500     1.951     1.941     4.254     2.760
Jan-07    2.100     2.453     1.254     4.141     4.054

c. WINTERS' ADDITIVE EXPONENTIAL SMOOTHING

Month     Actual    1-step    3-step    6-step    12-step
Jan-05    3.000     3.263
Feb-05    3.100     2.963
Mar-05    3.200     3.076     3.188
Apr-05    3.400     3.176     2.888
May-05    2.800     3.373     3.028
Jun-05    2.500     2.773     3.128     3.075
Jul-05    3.000     2.444     3.319     2.775
Aug-05    3.600     2.944     2.719     2.956
Sep-05    4.700     3.614     2.333     3.056
Oct-05    4.400     4.795     2.833     3.238
Nov-05    3.500     4.458     3.641     2.638
Dec-05    3.400     3.468     4.985     2.167     2.850
Jan-06    4.000     3.347     4.573     2.667     2.550
Feb-06    3.700     4.004     3.405     3.682     2.812
Mar-06    3.500     3.652     3.240     5.270     2.912
Apr-06    3.600     3.452     4.013     4.746     3.077
May-06    4.000     3.559     3.555     3.310     2.477
Jun-06    4.200     3.959     3.355     3.080     1.833
Jul-06    4.100     4.209     3.478     4.027     2.333
Aug-06    3.900     4.107     3.878     3.410     3.763
Sep-06    2.100     3.879     4.227     3.210     5.839
Oct-06    1.400     2.047     4.120     3.356     5.092
Nov-06    2.000     1.351     3.838     3.756     3.120
Dec-06    2.500     1.951     1.941     4.254     2.760
Jan-07    2.100     2.453     1.254     4.141     4.054

d. BROWN'S DOUBLE EXPONENTIAL SMOOTHING

Month     Actual    1-step    3-step    6-step    12-step
Jan-05    3.000     3.518
Feb-05    3.100     3.010
Mar-05    3.200     3.055     3.760
Apr-05    3.400     3.188     2.871
May-05    2.800     3.441     2.988
Jun-05    2.500     2.838     3.217     4.123
Jul-05    3.000     2.444     3.604     2.663
Aug-05    3.600     2.822     2.636     2.887
Sep-05    4.700     3.646     2.333     3.261
Oct-05    4.400     5.546     2.759     3.848
Nov-05    3.500     4.864     4.052     2.332
Dec-05    3.400     2.792     7.296     2.167      4.850
Jan-06    4.000     3.122     5.544     2.665      2.246
Feb-06    3.700     3.885     1.353     4.662      2.687
Mar-06    3.500     3.741     2.579     9.920      3.347
Apr-06    3.600     3.495     3.959     6.563      4.336
May-06    4.000     3.533     3.726    -0.806      1.725
Jun-06    4.200     3.947     3.362     1.766      1.833
Jul-06    4.100     4.278     3.452     4.070      2.476
Aug-06    3.900     4.235     4.087     3.704      5.882
Sep-06    2.100     4.001     4.552     3.163     15.169
Oct-06    1.400     1.329     4.418     3.330      8.602
Nov-06    2.000     0.513     4.025     4.296     -5.125
Dec-06    2.500     1.362    -0.531     4.964      0.140
Jan-07    2.100     2.828    -1.258     4.693      4.293


e. WINTERS' MULTIPLICATIVE EXPONENTIAL SMOOTHING

Month     Actual    1-step    3-step    6-step    12-step
Jan-05    3.000     3.058
Feb-05    3.100     2.915
Mar-05    3.200     3.075     2.948
Apr-05    3.400     3.263     2.948
May-05    2.800     3.391     3.127
Jun-05    2.500     2.736     3.181     2.929
Jul-05    3.000     2.532     3.358     2.910
Aug-05    3.600     3.016     2.785     3.112
Sep-05    4.700     3.535     2.497     3.181
Oct-05    4.400     4.983     3.137     3.513
Nov-05    3.500     4.648     3.957     3.058
Dec-05    3.400     3.463     5.210     2.761     3.240
Jan-06    4.000     3.380     4.265     3.037     2.944
Feb-06    3.700     3.922     3.119     3.528     3.046
Mar-06    3.500     3.693     3.310     4.662     3.146
Apr-06    3.600     3.590     4.012     4.197     3.345
May-06    4.000     3.455     3.634     3.146     2.745
Jun-06    4.200     3.842     3.311     3.133     2.446
Jul-06    4.100     4.369     3.455     3.847     2.946
Aug-06    3.900     4.258     4.151     3.774     3.545
Sep-06    2.100     4.090     4.755     3.756     4.646
Oct-06    1.400     2.156     4.569     3.857     4.343
Nov-06    2.000     1.394     4.145     4.409     3.440
Dec-06    2.500     1.971     2.109     4.728     3.545
Jan-07    2.100     2.646     1.376     4.411     4.144

f. DECOMPOSITION (ADDITIVE)

Month     Actual    1-step    3-step    6-step    12-step
Dec-04    3.300
Jan-05    3.000     2.350
Feb-05    3.100     2.365
Mar-05    3.200     2.504     2.390
Apr-05    3.400     2.559     2.427
May-05    2.800     2.629     2.427
Jun-05    2.500     2.585     2.519     2.327
Jul-05    3.000     2.638     2.592     2.440
Aug-05    3.600     2.658     2.646     2.503
Sep-05    4.700     2.709     2.617     2.558
Oct-05    4.400     5.213     2.723     2.637
Nov-05    3.500     3.149     2.907     2.825
Dec-05    3.400     3.139     5.334     2.759     2.399
Jan-06    4.000     3.094     3.028     2.700     2.428
Feb-06    3.700     3.165     3.066     2.771     2.482
Mar-06    3.500     3.260     3.161     5.274     2.640
Apr-06    3.600     3.278     3.251     3.113     2.730
May-06    4.000     3.088     3.088     2.950     2.657
Jun-06    4.200     3.108     3.042     2.942     2.669
Jul-06    4.100     3.379     3.245     3.209     2.768
Aug-06    3.900     3.539     3.424     3.360     2.848
Sep-06    2.100     3.760     3.717     3.569     5.330
Oct-06    1.400     3.705     3.775     3.589     3.228
Nov-06    2.000     3.489     3.695     3.570     3.390
Dec-06    2.500     3.341     3.553     3.581     3.385
Jan-07    2.100     3.294     3.458     3.670     3.389


g. DECOMPOSITION (MULTIPLICATIVE)

Month     Actual    1-step    3-step    6-step    12-step
Dec-04    3.300
Jan-05    3.000     2.315
Feb-05    3.100     2.311
Mar-05    3.200     2.510     2.386
Apr-05    3.400     2.616     2.478
May-05    2.800     2.496     2.384
Jun-05    2.500     2.495     2.432     2.244
Jul-05    3.000     2.648     2.644     2.440
Aug-05    3.600     2.694     2.677     2.537
Sep-05    4.700     2.746     2.661     2.595
Oct-05    4.400     2.952     2.750     2.718
Nov-05    3.500     3.175     2.925     2.837
Dec-05    3.400     3.151     3.021     2.778     2.400
Jan-06    4.000     3.049     2.996     2.684     2.389
Feb-06    3.700     3.094     3.001     2.726     2.426
Mar-06    3.500     3.264     3.155     2.991     2.641
Apr-06    3.600     3.327     3.296     3.165     2.761
May-06    4.000     3.020     3.026     2.896     2.597
Jun-06    4.200     3.009     2.946     2.850     2.578
Jul-06    4.100     3.393     3.240     3.205     2.775
Aug-06    3.900     3.582     3.452     3.385     2.878
Sep-06    2.100     3.827     3.764     3.587     3.046
Oct-06    1.400     3.749     3.825     3.602     3.249
Nov-06    2.000     3.550     3.778     3.615     3.416
Dec-06    2.500     3.367     3.599     3.631     3.394
Jan-07    2.100     3.271     3.448     3.656     3.338

h. ARIMA

Month     Actual    1-step    3-step    6-step    12-step
Jan-05    3.000     2.643
Feb-05    3.100     3.102
Mar-05    3.200     2.438     2.613
Apr-05    3.400     2.090     3.093
May-05    2.800     2.066     2.571
Jun-05    2.500     2.711     2.041     2.570
Jul-05    3.000     2.005     2.015     3.079
Aug-05    3.600     3.127     2.687     2.448
Sep-05    4.700     3.084     1.955     1.967
Oct-05    4.400     5.237     3.121     1.941
Nov-05    3.500     4.790     3.075     2.651
Dec-05    3.400     3.744     5.306     1.880     2.483
Jan-06    4.000     3.873     4.842     3.111     3.052
Feb-06    3.700     4.031     3.764     3.062     2.278
Mar-06    3.500     3.375     3.896     5.410     1.817
Apr-06    3.600     3.752     4.059     4.921     1.793
May-06    4.000     3.699     3.379     3.793     2.580
Jun-06    4.200     3.958     3.770     3.932     1.730
Jul-06    4.100     4.110     3.714     4.101     3.091
Aug-06    3.900     4.111     3.981     3.386     3.034
Sep-06    2.100     4.004     4.138     3.795     5.617
Oct-06    1.400     1.817     4.138     3.737     5.077
Nov-06    2.000     2.178     4.028     4.016     3.852
Dec-06    2.500     2.217     1.775     4.179     4.002
Jan-07    2.100     2.197     2.149     4.179


i. REGRESSION MODEL (ORDINARY LEAST SQUARES)

Month     Actual    1-step    3-step    6-step    12-step
Jan-05    3.000     2.686
Feb-05    3.100     2.618
Mar-05    3.200     3.177     2.998
Apr-05    3.400     3.433     3.342
May-05    2.800     2.933     2.933
Jun-05    2.500     3.083     3.110     2.909
Jul-05    3.000     3.479     3.529     3.426
Aug-05    3.600     3.314     3.448     3.470
Sep-05    4.700     3.816     3.846     3.884
Oct-05    4.400     3.598     3.473     3.614
Nov-05    3.500     3.321     3.136     3.260
Dec-05    3.400     3.545     3.408     3.372     3.213
Jan-06    4.000     3.774     3.766     3.538     3.582
Feb-06    3.700     3.597     3.590     3.349     3.518
Mar-06    3.500     3.254     3.207     3.078     3.218
Apr-06    3.600     3.547     3.495     3.451     3.429
May-06    4.000     3.961     3.926     3.903     3.805
Jun-06    4.200     3.925     3.915     3.836     3.645
Jul-06    4.100     3.852     3.814     3.751     3.393
Aug-06    3.900     3.651     3.588     3.533     3.230
Sep-06    2.100     3.155     3.093     3.045     2.794
Oct-06    1.400     2.504     2.671     2.611     2.472
Nov-06    2.000     2.767     3.106     3.015     2.926
Dec-06    2.500     2.835     3.092     3.210     3.059
Jan-07    2.100     2.293     2.431     2.793     2.637

j. SIMULTANEOUS EQUATIONS MODEL (2SLS)

Month     Actual    1-period  3-period  6-period  12-period
Dec-04    3.300
Jan-05    3.000     3.709
Feb-05    3.100     3.555
Mar-05    3.200     2.676     2.937
Apr-05    3.400     3.734     3.673
May-05    2.800     3.745     3.725
Jun-05    2.500     3.556     3.807     3.871
Jul-05    3.000     3.588     4.040     4.080
Aug-05    3.600     3.639     3.892     4.061
Sep-05    4.700     3.710     3.614     4.078
Oct-05    4.400     4.069     3.730     4.164
Nov-05    3.500     4.222     3.987     4.121
Dec-05    3.400     4.122     4.337     3.953     4.615
Jan-06    4.000     4.144     4.443     4.050     4.578
Feb-06    3.700     4.353     4.380     4.332     4.738
Mar-06    3.500     4.343     4.369     4.711     4.954
Apr-06    3.600     4.420     4.557     4.840     4.976
May-06    4.000     4.379     4.469     4.597     4.723
Jun-06    4.200     4.599     4.630     4.700     4.536
Jul-06    4.100     4.713     4.735     4.895     4.665
Aug-06    3.900     4.664     4.730     4.819     4.899
Sep-06    2.100     4.664     4.750     4.817     5.325
Oct-06    1.400     4.298     4.584     4.687     5.228
Nov-06    2.000     4.728     4.589     4.699     4.931
Dec-06    2.500     4.753     4.362     4.750     4.930
Jan-07    2.100     4.557     4.635     4.630     4.917

APPENDIX B: ON ARIMA


Testing for the joint significance of a unit root and the trend, a random walk test is carried out. The random walk model is estimated as:

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/16/10  Time: 12:54
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C             0.314320     0.187069      1.680232     0.1003
TREND         0.004011     0.003612      1.110558     0.2731
CPI(-1)      -0.181966     0.068478     -2.657306     0.0111
D(CPI(-1))    0.365385     0.144214      2.533625     0.0151

R-squared            0.254409    Mean dependent var   -0.004348
Adjusted R-squared   0.201153    S.D. dependent var    0.347023
S.E. of regression   0.310163    Akaike info criterion 0.579506
Sum squared resid    4.040454    Schwarz criterion     0.738518
Log likelihood      -9.328635    F-statistic           4.777053
Durbin-Watson stat   1.639553    Prob(F-statistic)     0.005929

Now to check the joint significance of the trend and lagged CPI:

Redundant Variables: TREND CPI(-1)
F-statistic            4.202846   Probability   0.021685
Log likelihood ratio   8.391986   Probability   0.015056

The F-statistic here does not follow the standard F-distribution, so the p-values reported above are not applicable. The 5% Dickey-Fuller critical value is 6.49; since the estimated F-statistic of 4.202846 falls below it, we do not reject the null hypothesis of a stochastic trend without a deterministic trend.
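The mechanics of this joint test are worth spelling out: the unrestricted Dickey-Fuller regression is compared with the restricted one (trend and lagged CPI dropped) through the usual F-ratio, which is then judged against the Dickey-Fuller critical value rather than the F table. A minimal Python sketch, with a simulated random walk standing in for our CPI series:

    import numpy as np
    import statsmodels.api as sm

    cpi = np.cumsum(np.random.default_rng(0).normal(size=200))  # simulated I(1) stand-in

    d = np.diff(cpi)
    y = d[1:]                                  # lose one observation to D(CPI(-1))
    X_u = np.column_stack([np.ones(len(y)),    # constant
                           np.arange(len(y)),  # trend
                           cpi[1:-1],          # CPI(-1)
                           d[:-1]])            # D(CPI(-1))
    ssr_u = sm.OLS(y, X_u).fit().ssr           # unrestricted SSR

    X_r = np.column_stack([np.ones(len(y)), d[:-1]])
    ssr_r = sm.OLS(y, X_r).fit().ssr           # restricted: no trend, no CPI(-1)

    q, dof = 2, len(y) - X_u.shape[1]
    F = ((ssr_r - ssr_u) / q) / (ssr_u / dof)
    print(F)  # compare with the Dickey-Fuller critical value (6.49), not the F table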

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/16/10  Time: 13:23
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C             0.415888     0.163631      2.541623     0.0147
CPI(-1)      -0.183361     0.068652     -2.670888     0.0106
D(CPI(-1))    0.412847     0.138110      2.989266     0.0046

R-squared            0.232515    Mean dependent var   -0.004348
Adjusted R-squared   0.196818    S.D. dependent var    0.347023
S.E. of regression   0.311004    Akaike info criterion 0.564970
Sum squared resid    4.159103    Schwarz criterion     0.684229
Log likelihood      -9.994307    F-statistic           6.513565
Durbin-Watson stat   1.640188    Prob(F-statistic)     0.003381

Checking the joint significance of the constant term and lagged CPI:

Redundant Variables: C CPI(-1)
F-statistic            3.569971   Probability   0.036781
Log likelihood ratio   7.066420   Probability   0.029211

Here again a unit root is present, but the constant term is not significant. We therefore run the random walk model without the constant term and without the trend.

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/16/10  Time: 13:46
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints

Variable     Coefficient   Std. Error   t-Statistic   Prob.
CPI(-1)      -0.015866     0.020398     -0.777830     0.4408
D(CPI(-1))    0.332739     0.142564      2.333963     0.0242

R-squared            0.117216    Mean dependent var   -0.004348
Adjusted R-squared   0.097153    S.D. dependent var    0.347023
S.E. of regression   0.329735    Akaike info criterion 0.661453
Sum squared resid    4.783921    Schwarz criterion     0.740959
Log likelihood     -13.21341     Durbin-Watson stat    1.657080

Redundant Variables: CPI(-1)
F-statistic            0.605020   Probability   0.440832
Log likelihood ratio   0.628211   Probability   0.428012

We therefore do not reject the null hypothesis of a unit root.

DIFFERENT MODELS TRIED OUT:

ARIMA(1,1,1)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 12:57
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints
Convergence achieved after 10 iterations
Backcast: 2001:02

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          -0.007347     0.069488     -0.105735     0.9163
AR(1)      -0.073500     0.243657     -0.301654     0.7644
MA(1)       0.671212     0.178423      3.761910     0.0005

R-squared            0.267668    Mean dependent var   -0.004348
Adjusted R-squared   0.233606    S.D. dependent var    0.347023
S.E. of regression   0.303798    Akaike info criterion 0.518085
Sum squared resid    3.968604    Schwarz criterion     0.637344
Log likelihood      -8.915949    F-statistic           7.858263
Durbin-Watson stat   2.016696    Prob(F-statistic)     0.001234
Inverted AR Roots   -.07
Inverted MA Roots   -.67

ARIMA(2,1,1)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:01
Sample (adjusted): 2001:04 2004:12
Included observations: 45 after adjusting endpoints
Convergence achieved after 10 iterations
Backcast: 2001:03

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C           0.001066     0.048720      0.021873     0.9827
AR(1)       0.246991     0.244182      1.011502     0.3177
AR(2)      -0.412032     0.170839     -2.411818     0.0204
MA(1)       0.328709     0.263907      1.245545     0.2200

R-squared            0.343116    Mean dependent var    0.006667
Adjusted R-squared   0.295052    S.D. dependent var    0.342716
S.E. of regression   0.287749    Akaike info criterion 0.431230
Sum squared resid    3.394775    Schwarz criterion     0.591822
Log likelihood      -5.702674    F-statistic           7.138637
Durbin-Watson stat   1.907621    Prob(F-statistic)     0.000576
Inverted AR Roots    .12+.63i   .12-.63i
Inverted MA Roots   -.33

ARIMA(1,1,2)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:03
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints
Convergence achieved after 28 iterations
Backcast: OFF (roots of MA process too large for backcast)

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          -0.026575     0.089111     -0.298223     0.7670
AR(1)       0.909160     0.072449     12.54889      0.0000
MA(1)      -0.529773     0.182449     -2.903676     0.0059
MA(2)      -0.736895     0.176954     -4.164342     0.0002

R-squared            0.430650    Mean dependent var   -0.004348
Adjusted R-squared   0.389982    S.D. dependent var    0.347023
S.E. of regression   0.271038    Akaike info criterion 0.309824
Sum squared resid    3.085382    Schwarz criterion     0.468836
Log likelihood      -3.125953    F-statistic          10.58945
Durbin-Watson stat   2.116917    Prob(F-statistic)     0.000026
Inverted AR Roots    .91
Inverted MA Roots   1.16  -.63


The estimated MA process is noninvertible.

ARIMA(1,1,3)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:13
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints
Convergence achieved after 17 iterations
Backcast: 2000:12 2001:02

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C           0.022087     0.123434      0.178934     0.8589
AR(1)       0.729045     0.109214      6.675382     0.0000
MA(1)      -0.321323     0.000777   -413.3455       0.0000
MA(2)      -0.640392     0.153676     -4.167165     0.0002
MA(3)      -0.021279     0.152289     -0.139727     0.8896

R-squared            0.363573    Mean dependent var   -0.004348
Adjusted R-squared   0.301482    S.D. dependent var    0.347023
S.E. of regression   0.290033    Akaike info criterion 0.464677
Sum squared resid    3.448882    Schwarz criterion     0.663442
Log likelihood      -5.687572    F-statistic           5.855535
Durbin-Watson stat   1.925696    Prob(F-statistic)     0.000801
Inverted AR Roots    .73
Inverted MA Roots    .99  -.03  -.63

ARIMA(1,1,4)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:17
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints
Convergence achieved after 38 iterations
Backcast: 2000:11 2001:02

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C           0.009820     0.099608      0.098588     0.9220
AR(1)       0.626987     0.111704      5.612916     0.0000
MA(1)       0.062532     0.000541    115.6577       0.0000
MA(2)      -0.731159     0.130235     -5.614143     0.0000
MA(3)      -0.526050     0.114603     -4.590214     0.0000
MA(4)       0.216364     0.134327      1.610730     0.1151

R-squared            0.438611    Mean dependent var   -0.004348
Adjusted R-squared   0.368438    S.D. dependent var    0.347023
S.E. of regression   0.275782    Akaike info criterion 0.382699
Sum squared resid    3.042239    Schwarz criterion     0.621217
Log likelihood      -2.802075    F-statistic           6.250375
Durbin-Watson stat   2.172361    Prob(F-statistic)     0.000226
Inverted AR Roots    .63
Inverted MA Roots    .99   .30  -.68-.51i  -.68+.51i

ARIMA(1,1,5)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:21
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints
Convergence achieved after 21 iterations
Backcast: 2000:10 2001:02

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C           0.030013     0.033213      0.903637     0.3717
AR(1)       0.839439     0.119196      7.042514     0.0000
MA(1)      -0.252059     0.189044     -1.333334     0.1902
MA(2)      -0.918210     0.155569     -5.902285     0.0000
MA(3)      -0.252725     0.186546     -1.354756     0.1833
MA(4)       0.671983     0.170088      3.950785     0.0003
MA(5)      -0.235370     0.167619     -1.404200     0.1682

R-squared            0.509095    Mean dependent var   -0.004348
Adjusted R-squared   0.433571    S.D. dependent var    0.347023
S.E. of regression   0.261175    Akaike info criterion 0.292015
Sum squared resid    2.660280    Schwarz criterion     0.570286
Log likelihood       0.283661    F-statistic           6.740844
Durbin-Watson stat   1.925868    Prob(F-statistic)     0.000059
Inverted AR Roots    .84
Inverted MA Roots    .99   .43-.24i   .43+.24i  -.80-.56i  -.80+.56i

ARIMA(1,1,6)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:24
Sample (adjusted): 2001:03 2004:12
Included observations: 46 after adjusting endpoints
Convergence achieved after 13 iterations
Backcast: 2000:09 2001:02

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C           0.014930     0.091320      0.163486     0.8710
AR(1)       0.361371     0.149825      2.411959     0.0208
MA(1)       0.260751     0.027806      9.377374     0.0000
MA(2)      -0.541281     0.065135     -8.310161     0.0000
MA(3)      -0.413278     0.106851     -3.867795     0.0004
MA(4)       0.629700     0.089594      7.028360     0.0000
MA(5)       0.322290     0.027868     11.56484      0.0000
MA(6)       0.318231     0.049704      6.402501     0.0000

R-squared            0.517470    Mean dependent var   -0.004348
Adjusted R-squared   0.428583    S.D. dependent var    0.347023
S.E. of regression   0.262322    Akaike info criterion 0.318285
Sum squared resid    2.614894    Schwarz criterion     0.636310
Log likelihood       0.679437    F-statistic           5.821650
Durbin-Watson stat   1.947647    Prob(F-statistic)     0.000124
Inverted AR Roots    .36
Inverted MA Roots    .86+.49i   .86-.49i  -.18+.55i  -.18-.55i  -.81+.55i  -.81-.55i

The AIC and BIC start rising as the order of the MA term is increased above 5.

ARIMA(2,1,5)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:29
Sample (adjusted): 2001:04 2004:12
Included observations: 45 after adjusting endpoints
Convergence achieved after 19 iterations
Backcast: 2000:11 2001:03

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          -0.004346     0.053065     -0.081897     0.9352
AR(1)      -0.301002     0.249627     -1.205808     0.2355
AR(2)      -0.558042     0.195104     -2.860227     0.0069
MA(1)       0.965369     0.236673      4.078923     0.0002
MA(2)       0.660481     0.236331      2.794727     0.0082
MA(3)      -0.145872     0.155880     -0.935800     0.3554
MA(4)      -0.076424     0.256029     -0.298498     0.7670
MA(5)       0.173387     0.236250      0.733914     0.4676

R-squared            0.510750    Mean dependent var    0.006667
Adjusted R-squared   0.418189    S.D. dependent var    0.342716
S.E. of regression   0.261412    Akaike info criterion 0.314375
Sum squared resid    2.528446    Schwarz criterion     0.635560
Log likelihood       0.926559    F-statistic           5.517984
Durbin-Watson stat   1.827076    Prob(F-statistic)     0.000211
Inverted AR Roots   -.15+.73i  -.15-.73i
Inverted MA Roots    .37+.34i   .37-.34i  -.51+.85i  -.51-.85i  -.69

ARIMA(2,1,4)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:30
Sample (adjusted): 2001:04 2004:12
Included observations: 45 after adjusting endpoints
Convergence achieved after 33 iterations
Backcast: 2000:12 2001:03

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          -0.018261     0.052424     -0.348326     0.7295
AR(1)      -0.331556     0.088485     -3.747028     0.0006
AR(2)      -0.612220     0.142112     -4.308020     0.0001
MA(1)       1.096117     0.135798      8.071669     0.0000
MA(2)       0.927579     0.043188     21.47764      0.0000
MA(3)      -0.051258     0.092874     -0.551914     0.5842
MA(4)      -0.156399     0.138781     -1.126945     0.2668

R-squared            0.528710    Mean dependent var    0.006667
Adjusted R-squared   0.454296    S.D. dependent var    0.342716
S.E. of regression   0.253171    Akaike info criterion 0.232530
Sum squared resid    2.435627    Schwarz criterion     0.513566
Log likelihood       1.768079    F-statistic           7.104960
Durbin-Watson stat   1.996398    Prob(F-statistic)     0.000040
Inverted AR Roots   -.17+.76i  -.17-.76i
Inverted MA Roots    .35  -.46  -.49+.86i  -.49-.86i

ARIMA(2,1,3)

Dependent Variable: D(CPI)
Method: Least Squares
Date: 03/12/10  Time: 13:32
Sample (adjusted): 2001:04 2004:12
Included observations: 45 after adjusting endpoints
Convergence achieved after 14 iterations
Backcast: 2001:01 2001:03

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C           0.006757     0.058166      0.116171     0.9081
AR(1)       0.703441     0.055530     12.66768      0.0000
AR(2)      -1.006753     0.053100    -18.95965      0.0000
MA(1)      -0.213851     0.132882     -1.609326     0.1156
MA(2)       0.544751     0.110537      4.928232     0.0000
MA(3)       0.540041     0.133860      4.034379     0.0002

R-squared            0.438928    Mean dependent var    0.006667
Adjusted R-squared   0.366996    S.D. dependent var    0.342716
S.E. of regression   0.272671    Akaike info criterion 0.362461
Sum squared resid    2.899619    Schwarz criterion     0.603349
Log likelihood      -2.155364    F-statistic           6.101964
Durbin-Watson stat   1.921353    Prob(F-statistic)     0.000289
Inverted AR Roots    .35+.94i   .35-.94i  (the estimated AR process is nonstationary)
Inverted MA Roots    .38-.91i   .38+.91i  -.56


APPENDIX C: COMMAND FILE

The software used to generate these results is EViews. We first stacked the data in Excel and then imported it into EViews.

UNIVARIATE METHODS

1. Simple exponential smoothing: Open the CPI inflation series (referred to as y subsequently). Click Procs/Exponential Smoothing in the series dialogue box. In the box that appears, select Single; for the smoothing constant enter E, label the smoothed series smy1, enter the estimation period as 2000:04 to 2005:07, and set the cycle for seasonal as 12. Click OK and the smoothed series appears in the workfile. Now open this series to obtain the forecasts as below.

Estimation period: till July '05

Forecast           Value for
1 period ahead     Aug '05
3 period ahead     Oct '05
6 period ahead     Jan '06
12 period ahead    July '06

Now, return to the workfile, open y, and click Procs/Exponential Smoothing. Generate another smoothed series (call it smy2) using the estimation period 2000:04 to 2005:08. Open this and obtain the forecasts as below:

Estimation period: till August '05

Forecast           Value for
1 period ahead     Sept '05
3 period ahead     Nov '05
6 period ahead     Feb '06
12 period ahead    Aug '06

Repeat the above procedure, extending the estimation period by one month at each step, generating a new smoothed series for y and obtaining new values for the 1-, 3-, 6- and 12-period-ahead forecasts.

Stack the resulting numbers in an Excel worksheet and obtain the various measures of accuracy, viz. ME, MAE, RMSE, MPE, MAPE, RMSPE and Theil's U coefficient.
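The recursion EViews runs in the background is worth noting. A minimal Python sketch (the smoothing constant alpha is an assumed value here; EViews estimates it when E is entered):

    def ses_forecast(y, alpha):
        # s[t] = alpha*y[t] + (1-alpha)*s[t-1]; the forecast for every
        # horizon beyond the sample equals the final smoothed value.
        s = y[0]                        # initialise with the first observation
        for obs in y[1:]:
            s = alpha * obs + (1 - alpha) * s
        return s

    # expanding-window recursive forecasts, as described above
    series = [3.3, 3.0, 3.1, 3.2, 3.4, 2.8, 2.5]   # illustrative values only
    for t in range(4, len(series)):
        print(t, ses_forecast(series[:t], alpha=0.9))

Note that the flat forecast function is why, in table a of Appendix A, the 1-, 3-, 6- and 12-period-ahead columns contain the same values, merely shifted.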


2. Brown's Double Exponential Smoothing:

The methodology for obtaining forecasts is the same as above, except that in the second step select Double instead of Single. The forecasts thus generated are again stacked in Excel and measures of accuracy computed.
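Brown's method smooths the already-smoothed series a second time with the same constant, which yields a local linear trend. A sketch of the recursion (alpha again an assumed value):

    def brown_forecast(y, alpha, h):
        # double smoothing: s1 smooths y, s2 smooths s1
        s1 = s2 = y[0]
        for obs in y[1:]:
            s1 = alpha * obs + (1 - alpha) * s1
            s2 = alpha * s1 + (1 - alpha) * s2
        level = 2 * s1 - s2                      # estimated level at end of sample
        trend = alpha / (1 - alpha) * (s1 - s2)  # estimated slope
        return level + h * trend                 # h-step-ahead forecast

The extrapolated trend term is also why Brown's forecasts deteriorate so sharply at the longer horizons reported in Section 4.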

3. Holt's Two-Parameter Trend Model:

The methodology for obtaining forecasts is the same as above, except that in the second step select Holt-Winters - No Seasonal instead of Single. The forecasts thus generated are again stacked in Excel and measures of accuracy computed.

4. Winters' Three-Parameter Exponential Smoothing Method:

The methodology for obtaining forecasts is the same as above, except that in the second step select Holt-Winters - Additive (or Holt-Winters - Multiplicative) instead of Single. The forecasts thus generated are again stacked in Excel and measures of accuracy computed; the recursion behind this method is sketched below.
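For reference, a sketch of the additive Holt-Winters recursion used in this option (Holt's no-seasonal model in item 3 is the special case with the seasonal terms dropped); the three smoothing constants here are assumed values, not those estimated by EViews:

    def holt_winters_additive(y, alpha, beta, gamma, m=12):
        # returns (level, trend, seasonals) after the last observation;
        # the h-step-ahead forecast is level + h*trend + matching seasonal
        level, trend = y[0], y[1] - y[0]         # crude initialisation
        seas = [0.0] * m                         # flat initial seasonal factors
        for t, obs in enumerate(y):
            last_level = level
            level = alpha * (obs - seas[t % m]) + (1 - alpha) * (level + trend)
            trend = beta * (level - last_level) + (1 - beta) * trend
            seas[t % m] = gamma * (obs - level) + (1 - gamma) * seas[t % m]
        return level, trend, seas

    def hw_forecast(level, trend, seas, t_last, h, m=12):
        return level + h * trend + seas[(t_last + h) % m]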

5. Decomposition

Decomposition (Additive):

Click on Sample in the workfile and set the sample to 2000:04 to 2005:07. Open the y series. Next, click Procs, then Seasonal Adjustment, then select Difference from moving average - Additive and click OK. Now we must make the scaling factors sum to zero. To do so, we compute their deviations from the mean. In the command window, type:

Scalar s1a=(unadjusted value - mean)
Scalar s2a=(unadjusted value - mean)
...
Scalar s12a=(unadjusted value - mean)

This gives us the adjusted scaling factors.

To get the seasonal indices, first generate a dummy seasonal factor for each season. For this, on the workfile click genr and key in the following:

Seasonal01=@seas(1)*s1a
Seasonal02=@seas(2)*s2a
...
Seasonal12=@seas(12)*s12a

Next, click genr and get the seasonal index as follows:

Seasonalindex=seasonal01+seasonal02+...+seasonal12

To deseasonalize the time series, write yd=y-seasonalindex in the dialogue box.

Next, estimate the trend-cyclical regression equation using the deseasonalized data (yd). Before running the regression, we need to generate a trend variable: choose GENR and type trend=@trend(2000:04) in the dialogue box. To make the trend equal 1 for the first month of the sample, generate another series by writing trend1=trend+1 in the dialogue box. Now go to Quick/Estimate Equation and write: yd c trend1. Use the resulting intercept and slope coefficients to compute (in Excel) the fitted trend for August and October 2005 and January and July 2006. To get the forecasted values of y, add to these the appropriate seasonal indices, read from the seasonal index series against August '04, October '04, January '04 and July '04 respectively. Thus we get the 1-, 3-, 6- and 12-period-ahead forecasts for the estimation period 2000:04 to 2005:07. Repeat the above steps, extending the estimation period by one month each time.

The resulting forecasts can then be stacked in Excel and measures of accuracy computed.
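The whole additive procedure can equally be scripted end to end. A compact Python sketch under the same logic (centred seasonal indices, least-squares trend on the deseasonalised series, seasonal index added back at the forecast month); the monthly-mean seasonal step below is a simplification of EViews' difference-from-moving-average factors:

    import numpy as np

    def additive_decomposition_forecast(y, h, m=12):
        y = np.asarray(y, dtype=float)
        t = np.arange(1, len(y) + 1)
        # seasonal index: mean deviation of each calendar month from the overall mean
        seas = np.array([y[i::m].mean() - y.mean() for i in range(m)])
        seas -= seas.mean()                      # centre so the indices sum to zero
        yd = y - seas[(t - 1) % m]               # deseasonalised series
        b, a = np.polyfit(t, yd, 1)              # fitted trend: yd = a + b*t
        t_f = len(y) + h
        return a + b * t_f + seas[(t_f - 1) % m]

The multiplicative variant, described next, divides by (rather than subtracts) indices rescaled to sum to twelve, and multiplies the fitted trend by the index at the forecast month.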

Decomposition (Multiplicative):

Click on Sample in the workfile and set the sample to 2000:04 to 2005:07. Open the y series. Next, click Procs, then Seasonal Adjustment, then select Ratio to moving average - Multiplicative and click OK. Now we must make the scaling factors sum to twelve. For this, type the following in the command window:

Scalar s1a=(unadjusted value/sum of scaling factors)*12
Scalar s2a=(unadjusted value/sum of scaling factors)*12
...
Scalar s12a=(unadjusted value/sum of scaling factors)*12

This gives us the adjusted scaling factors.

To get the seasonal indices, first generate a dummy seasonal factor for each season. For this, on the workfile click genr and key in the following:

Seasonal01=@seas(1)*s1a
Seasonal02=@seas(2)*s2a
...
Seasonal12=@seas(12)*s12a

Next, click genr and get the seasonal index as follows:

Seasonalindex=seasonal01+seasonal02+...+seasonal12

To deseasonalize the time series, write yd= y/seasonalindex in the dialogue box.

Next, estimate the trend-cyclical regression equation using the deseasonalized data (yd). Before running the regression, we need to generate a trend variable: choose GENR and type trend=@trend(2000:04) in the dialogue box. To make the trend equal 1 for the first month of the sample, generate another series by writing trend1=trend+1 in the dialogue box. Now go to Quick/Estimate Equation and write: yd c trend1. Use the resulting intercept and slope coefficients to compute (in Excel) the fitted trend for August and October 2005 and January and July 2006. To get the forecasted values of y, multiply these by the appropriate seasonal indices, read from the seasonal index series against August '04, October '04, January '04 and July '04 respectively. Thus we get the 1-, 3-, 6- and 12-period-ahead forecasts for the estimation period 2000:04 to 2005:07. Repeat the above steps by extending the estimation period by one month each time. The resulting forecasts can then be stacked in Excel and measures of accuracy computed.

6. Unit Roots Testing & ARIMA

Load the workfile: click on File/New/Workfile; this opens the workfile range window. In the range window select monthly and fill the data range as 2000:04 to 2006:07.


Now, to import the data, click on Procs/Import/Read Text-Lotus-Excel, type in the file name and enter the number of series in the file.

For the ADF and PP stationarity tests in Shazam, type the following commands:

sample 1 64
read(D:\kk.prn) / names
coint y / max type=df
coint y / max type=pp
file close D:\kk.prn
stop

To plot the graph of the dependent variable (say y from here on), click View/Line Graph. Click View/Correlogram to view the correlogram in levels.

To plot and compute the autocorrelations of the first difference of y, type show d(y). View the series via View/Line Graph. To view the correlogram of the same, click View/Correlogram and choose level.

To estimate the different models, click Quick/Estimate Equation and change the sample period to the estimation period. For estimating the ARIMA(1,1,1) model, write in the Estimate Equation dialogue box: D(y) C AR(1) MA(1). For the diagnostic check, view the model fit via View/Residual Tests/Correlogram-Q-statistics.

To generate forecasts, click on Forecast, give a name to the forecasted series and specify the forecast range as 2005:08 to 2006:07. Keep changing the estimation and forecasting periods to obtain recursive forecasts, including one more month in the estimation period each time, as sketched below.
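Outside EViews, the same estimate-forecast-extend loop takes only a few lines. A sketch using statsmodels (the ARIMA(1,1,1) order is the one specified above; the series and window endpoints are placeholders):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    y = np.cumsum(np.random.default_rng(1).normal(size=76))  # stand-in for the data

    forecasts = {1: [], 3: [], 6: [], 12: []}
    for t in range(64, len(y)):                 # expanding estimation window
        fit = ARIMA(y[:t], order=(1, 1, 1)).fit()
        path = fit.forecast(steps=12)           # forecasts for horizons 1..12
        for h in (1, 3, 6, 12):
            forecasts[h].append(path[h - 1])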

MULTIVARIATE METHODS

7. Single Equation Model

Load the workfile: click on File/New/Workfile; this opens the workfile range window. In the range window select monthly and fill the data range as 2000:04 to 2006:07. Now, to import the data, click on Procs/Import/Read Text-Lotus-Excel, type in the file name and enter the number of series in the file.


Click Quick/Estimate Equation. Specify the estimation period as 2000:04 to 2005:07 and the equation in the dialogue box, i.e. y c log(iip) log(ms) fwdprethree libor3 log(govtexp) AR(1), and click OK.

Transfer the coefficients to an Excel sheet and create tables of one-, three-, six- and twelve-month-ahead forecasts.

Now keep extending the estimation period by one month at a time and compute the forecasts accordingly, until the last estimation period, 2000:04 to 2006:06, is reached. Then compute the measures of accuracy. A sketch of this loop appears below.
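A sketch of that expanding-window loop in Python (placeholder regressors; the AR(1) error term, which EViews handles internally, is omitted here for brevity):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    T = 76
    X = rng.normal(size=(T, 3))           # stand-ins for log(iip), log(ms), libor3
    y = X @ np.array([0.5, -0.2, 0.3]) + rng.normal(size=T)

    one_step = []
    for t in range(64, T):
        fit = sm.OLS(y[:t], sm.add_constant(X[:t])).fit()
        x_next = np.concatenate(([1.0], X[t]))         # regressors for month t
        one_step.append(fit.predict(x_next.reshape(1, -1))[0])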

8. Simultaneous Equation Model

Load the workfile: click on File/New/Workfile; this opens the workfile range window. In the range window select monthly and fill the data range as 2000:04 to 2006:07. Now, to import the data, click on Procs/Import/Read Text-Lotus-Excel, type in the file name and enter the number of series in the file.

To test for endogeneity and simultaneity with respect to the variable log iip, first click on Objects/New Object/Equation. In the equation dialogue box, type: log iip c log govtexp logms forex fwdprethre libor3. Then specify the estimation period and click OK. Type genr res=resid in the command window. Next, click Quick/Estimate Equation to estimate the required equation.

Click on Objects/New Object, select System and click OK. In the workfile, type the instrumental variables and the constant and then the system of equations, and click Estimate. Now check the signs and significance of the coefficients, and the various measures of accuracy.

Now click Quick/Estimate Equation, specify the equation type as TSLS, the estimation period as 2000:04 to 2005:07, and the equation and instrumental variables in the dialogue box, and click OK. Click on Forecast, select the dynamic model, give a name to the forecasted series and specify the forecast range as 2005:08 to 2006:07. Keep changing the estimation and forecasting periods to obtain recursive forecasts, including one more month in the estimation period each time.


Transfer the forecasted values to an Excel sheet, create tables of one-, three-, six- and twelve-month-ahead forecasts, and calculate the measures of accuracy. Repeat the above procedure for three-stage least squares (3SLS). The two OLS passes behind 2SLS are sketched below.
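Two-stage least squares itself is just two OLS passes, which makes the EViews output easy to verify by hand: regress the endogenous regressor on all instruments, then use the fitted values in the structural equation. A numpy sketch with hypothetical array names:

    import numpy as np

    def tsls(y, X_endog, X_exog, Z_excl):
        # first stage: project the endogenous regressors on the full
        # instrument set (included exogenous variables + excluded instruments)
        Z = np.column_stack([X_exog, Z_excl])
        fitted = Z @ np.linalg.lstsq(Z, X_endog, rcond=None)[0]
        # second stage: OLS of y on the fitted values and the exogenous regressors
        X2 = np.column_stack([fitted, X_exog])
        return np.linalg.lstsq(X2, y, rcond=None)[0]   # 2SLS coefficient vector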


DATA DEFINITIONS AND SOURCES

1. Yield on ten-year government securities. Government of India dated securities of residual maturity of ten years and above, based on secondary-market outright transactions in government securities (face value) as reported in Subsidiary General Ledger (SGL) accounts at RBI, Mumbai. Source: RBI Handbook of Statistics on the Indian Economy and RBI Monthly Bulletin.
2. Output (proxy: IIP). The total amount of goods produced in the economy. Source: CSO.
3. Foreign interest rate (proxy: 3- and 6-month LIBOR). Three- and six-month LIBOR on USD deposits. Source: US Federal Reserve Bank.
4. Inflation rate (proxy: WPI, annual change). Only the year-on-year inflation rate has been used. Source: RBI Handbook of Statistics on the Indian Economy and RBI Monthly Bulletin.
5. Exchange rate (nominal exchange rate). The Rs. per $ rate as determined in the forex market. Source: RBI Handbook of Statistics on the Indian Economy and RBI Monthly Bulletin.
6. Forward premium. Three- and six-month forward premia. Source: RBI Handbook of Statistics on the Indian Economy and RBI Monthly Bulletin.
7. Bank credit (gross bank credit). Total credit (food and non-food). Data on food and non-food credit are available on a fortnightly basis; the weekly data are generated by taking the average of the previous-year and succeeding-year figures.
8. Bank rate. The policy rate at which the Reserve Bank gives credit to commercial banks.
9. FII inflows. The portfolio investment inflows from financial institutions abroad.
10. Money supply (proxy: M3). Broad money supply.
11. Government expenditure. The planned expenditure of the Government.

Sources for variables 7 to 11: RBI Handbook of Statistics on the Indian Economy, RBI Monthly Bulletin and IFS (Government Statistics).
