4.5 ARIMA Model Building

http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/tutorials/xegbohtmlnode39.html


We have determined the population properties of a wide class of models; in practice, however, we have a time series and we want to infer which model could have generated it. The selection of the appropriate model for the data is achieved by an iterative procedure based on three steps (Box, Jenkins and Reinsel; 1994):

Identification: use of the data and of any available information to suggest a subclass of parsimonious models to describe how the data have been generated.

Estimation: efficient use of the data to make inference about the parameters. It is conditioned on the adequacy of the selected model.

Diagnostic checking: checking the adequacy of the fitted model to the data in order to reveal model inadequacies and to achieve model improvement.

In this section we explain this procedure in detail, illustrating each step with an example.

4.5.1 Inference for the Moments of Stationary Processes


Since a stationary process is characterized in terms of the moments of its distribution, mainly its mean, ACF and PACF, it is necessary to estimate them from the available data in order to make inference about the underlying process.

4.5.1.0.1 Mean. A consistent estimator of the mean of the process, \mu = E(y_t), is the sample mean of the time series y_1, \dots, y_T:

\bar{y} = \frac{1}{T} \sum_{t=1}^{T} y_t \qquad (4.39)

whose asymptotic variance can be approximated by T^{-1} times the sample variance S_y^2.

Sometimes it is useful to check whether the mean of a process is zero or not, that is, to test H_0: \mu = 0 against H_1: \mu \neq 0. Under H_0, the test statistic

\frac{\bar{y}}{\sqrt{S_y^2 / T}}

follows approximately a standard normal distribution.
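This test is immediate to compute; the following NumPy sketch (ours, with the hypothetical helper name mean_zero_test) uses the variance approximation adopted in the text:

import numpy as np

def mean_zero_test(y):
    # Test H0: E(y_t) = 0 with the statistic ybar / sqrt(S^2 / T),
    # using the sample variance as the text's approximation.
    y = np.asarray(y, dtype=float)
    T = y.size
    ybar = y.mean()
    s2 = y.var(ddof=1)                 # sample variance S_y^2
    stat = ybar / np.sqrt(s2 / T)
    return ybar, stat                  # reject at ~5% if |stat| > 1.96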

4.5.1.0.2 Autocorrelation Function. A consistent estimator of the ACF is the sample autocorrelation function. It is defined as:

r_j = \frac{\sum_{t=j+1}^{T} (y_t - \bar{y})(y_{t-j} - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^2}

In order to identify the underlying process, it is useful to check whether these coefficients are statistically nonzero or, more specifically, to check whether y_t is a white noise. Under the assumption that y_t is a white noise, the distribution of these coefficients in large samples can be approximated by:

r_j \sim N(0, T^{-1}) \qquad (4.40)

Then a usual test of individual significance can be applied, i.e., H_0: \rho_j = 0 against H_1: \rho_j \neq 0 for any j = 1, 2, \dots. The null hypothesis would be rejected at the 5% level of significance if:

|r_j| > \frac{1.96}{\sqrt{T}} \qquad (4.41)

Usually, the correlogram plots the ACF jointly with these two-standard-error bands around zero, approximated by \pm 2/\sqrt{T}, that allow us to carry out this significance test by means of an easy graphic method.
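A minimal sketch of the sample ACF with its bands, written in NumPy rather than the tutorial's XploRe (the function name sample_acf is ours):

import numpy as np

def sample_acf(y, nlags=20):
    # Sample autocorrelations r_1..r_nlags and the +/- 2/sqrt(T) bands.
    y = np.asarray(y, dtype=float)
    T = y.size
    yc = y - y.mean()
    denom = np.sum(yc ** 2)
    r = np.array([np.sum(yc[j:] * yc[:T - j]) / denom for j in range(1, nlags + 1)])
    band = 2.0 / np.sqrt(T)
    return r, band                     # r_j is significant at ~5% if |r_j| > band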


We are also interested in whether a set of M autocorrelations is jointly zero or not, that is, in testing H_0: \rho_1 = \dots = \rho_M = 0. The most usual test statistic is the Ljung-Box statistic:

Q_{LB} = T(T+2) \sum_{j=1}^{M} \frac{r_j^2}{T-j} \qquad (4.42)

which follows asymptotically a \chi^2(M) distribution under H_0.
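Equation (4.42) translates directly into code; a sketch (our helper, assuming SciPy for the chi-squared tail probability):

import numpy as np
from scipy import stats

def ljung_box(y, M=10):
    # Ljung-Box Q statistic for H0: rho_1 = ... = rho_M = 0.
    y = np.asarray(y, dtype=float)
    T = y.size
    yc = y - y.mean()
    denom = np.sum(yc ** 2)
    r = np.array([np.sum(yc[j:] * yc[:T - j]) / denom for j in range(1, M + 1)])
    Q = T * (T + 2) * np.sum(r ** 2 / (T - np.arange(1, M + 1)))
    pval = stats.chi2.sf(Q, df=M)      # use df = M - p - q on ARMA residuals
    return Q, pval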

4.5.1.0.3 Partial Autocorrelation Function. The partial autocorrelation function is estimated by the last OLS coefficient of the autoregression of order i given in expression (4.11); this estimator is known as the sample PACF.

Under the assumption that y_t is a white noise, the distribution of the sample PACF coefficients in large samples is identical to that of the sample ACF (4.40). In consequence, the rule for rejecting the null hypothesis of individual non-significance (4.41) also applies to the PACF. The bar plot of the sample PACF is called the sample partial correlogram and usually includes the two-standard-error bands to assess individual significance.
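The sequence of OLS regressions can be sketched as follows (our illustration; the i-th partial autocorrelation is the coefficient on the i-th lag of an AR(i) fit):

import numpy as np

def sample_pacf(y, nlags=20):
    # Sample PACF: the last OLS coefficient of an AR(i) regression, i = 1..nlags.
    yc = np.asarray(y, dtype=float)
    yc = yc - yc.mean()
    T = yc.size
    pacf = []
    for i in range(1, nlags + 1):
        Y = yc[i:]
        X = np.column_stack([yc[i - k: T - k] for k in range(1, i + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        pacf.append(beta[-1])          # coefficient on yc_{t-i}
    return np.array(pacf)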

4.5.2 Identification of ARIMA Models


The objective of the identification is to select a subclass of the family of ARIMA models appropriate to represent a time series. We follow a two-step procedure: first, we get a stationary time series, i.e., we select the parameter \lambda of the Box-Cox transformation and the order of integration d; secondly, we identify a set of stationary ARMA processes to represent the stationary process, i.e., we choose the orders (p, q).

4.5.2.0.1 Selection of Stationary Transformations. Our task is to identify whether the time series could have been generated by a stationary process. First, we use the timeplot of the series to analyze whether it is variance stationary. The series departs from this property when the dispersion of the data varies along time. In this case, stationarity in variance is achieved by applying the appropriate Box-Cox transformation (4.17), and as a result we get the series y_t.

The second part is the analysis of stationarity in mean. The instruments are the timeplot, the sample correlograms and the tests for unit roots and stationarity. The path of a nonstationary series usually shows an upward or downward slope or jumps in the level, whereas a stationary series moves around a unique level along time. The sample autocorrelations of stationary processes are consistent estimates of the corresponding population coefficients, so the sample correlograms of stationary processes go to zero for moderate lags. This type of reasoning does not follow for nonstationary processes because their theoretical autocorrelations are not well defined, but we can argue that a 'non-decaying' behavior of the sample ACF should be due to a lack of stationarity. Moreover, typical profiles of sample correlograms of integrated series are shown in figure 4.14: the sample ACF tends to damp very slowly and the sample PACF decays very quickly, at lag 1, with the first value close to unity.

When the series shows nonstationary patterns, we should take first differences and analyze whether \Delta y_t is stationary or not in a similar way. This process of taking successive differences continues until a stationary time series is achieved. The graphical methods can be supported with the unit-root and stationarity tests developed in subsection 4.3.3. As a result, we have a stationary time series z_t = \Delta^d y_t, and the order of integration d will be the number of times that we have differenced the series y_t.
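This difference-and-test loop can be sketched with the adfuller unit-root test from statsmodels (a Python stand-in for the XploRe quantlets used in the tutorial; the helper name is ours):

import numpy as np
from statsmodels.tsa.stattools import adfuller

def difference_until_stationary(y, alpha=0.05, max_d=2):
    # Difference the series until the ADF test rejects the unit-root null.
    z = np.asarray(y, dtype=float)
    for d in range(max_d + 1):
        pvalue = adfuller(z, regression='c')[1]
        if pvalue < alpha:
            return z, d                # stationary series and order of integration d
        z = np.diff(z)                 # take first differences and test again
    return z, max_d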

XEGutsm11.xpl

Figure 4.14: Sample correlograms of random walk

4.5.2.0.2 Selection of Stationary ARMA Models. The choice of the appropriate ARMA(p, q) model for the stationary series z_t is carried out on the grounds of its characteristics, that is, the mean, the ACF and the PACF.


Table 4.1: Autocorrelation patterns of ARMA processes

Process     ACF                                    PACF
AR(p)       Infinite: exponential and/or           Finite: cut off at lag p
            sine-cosine wave decay
MA(q)       Finite: cut off at lag q               Infinite: exponential and/or
                                                   sine-cosine wave decay
ARMA(p,q)   Infinite: exponential and/or           Infinite: exponential and/or
            sine-cosine wave decay                 sine-cosine wave decay
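These patterns are easy to reproduce by simulation; the following sketch (ours, using statsmodels' ArmaProcess) generates an AR(2) and an MA(1) and prints their first sample ACF and PACF coefficients:

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import acf, pacf

np.random.seed(0)
# AR(2): (1 - 0.9L + 0.3L^2) z_t = eps_t -> ACF decays, PACF cuts off at lag 2
ar2 = ArmaProcess(ar=[1, -0.9, 0.3], ma=[1]).generate_sample(500)
# MA(1): z_t = (1 + 0.7L) eps_t          -> ACF cuts off at lag 1, PACF decays
ma1 = ArmaProcess(ar=[1], ma=[1, 0.7]).generate_sample(500)

for name, z in [("AR(2)", ar2), ("MA(1)", ma1)]:
    print(name, np.round(acf(z, nlags=5)[1:], 2), np.round(pacf(z, nlags=5)[1:], 2))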

The mean of the process is closely connected with the parameter \delta: when the constant term is zero, the process has zero mean (see equation (4.22)). Then a constant term will be added to the model if the hypothesis H_0: \mu = 0 is rejected. The orders (p, q) are selected by comparing the sample ACF and PACF of z_t with the theoretical patterns of ARMA processes that are summarized in table 4.1.

4.5.2.0.3 Example: Minks time series. To illustrate the identification procedure, we analyze the annual number of mink furs traded by the Hudson's Bay Company in Canada, denoted by x_t.

XEGutsm12.xpl

Figure 4.15: Minks series and stationary transformation

Figure 4.16: Minks series. Sample ACF and PACF of y_t

Table 4.2: Minks series. Sample ACF and PACF of y_t

lag   Two standard error   ACF        PACF
1     0.254                 0.6274     0.6274
2     0.254                 0.23622   -0.2596
3     0.254                 0.00247   -0.03770
4     0.254                -0.15074   -0.13233
5     0.254                -0.22492   -0.07182

Figure 4.15 plots the time series x_t. It suggests that the variance could be changing. The plot of y_t = \ln(x_t) shows a more stable pattern in variance, so we select the transformation parameter \lambda = 0. This series appears to be stationary since it evolves around a constant level and the correlograms decay quickly (see figure 4.16). Furthermore, the ADF test-value clearly rejects the unit-root hypothesis. Therefore, the stationary time series we are going to analyze is y_t = \ln(x_t).


As far as the selection of the orders (p, q) is concerned, we study the correlograms of figure 4.16 and the numerical values of the first five coefficients that are reported in table 4.2. The main feature of the ACF is its damping sine-cosine wave structure, which reflects the behavior of an AR process with complex roots. The PACF, where the null hypothesis of non-significance is rejected only for the first two lags, leads us to select the values p = 2, q = 0. The statistic for testing the null hypothesis H_0: \mu = 0 takes the value 221.02. Given a significance level of 5%, we reject the null hypothesis of zero mean, so a constant should be included into the model. As a result, we propose an AR(2) model with constant for y_t:

y_t = \delta + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \varepsilon_t

4.5.3 Parameter Estimation


The parameters of the selected model can be estimated consistently by least squares or by maximum likelihood. Both estimation procedures are based on the computation of the innovations \varepsilon_t from the values of the stationary variable. The least-squares methods minimize the sum of squared innovations,

\min \sum_{t} \varepsilon_t^2 \qquad (4.43)

The log-likelihood can be derived from the joint probability density function of the innovations \varepsilon_1, \dots, \varepsilon_T, which takes the following form under the normality assumption, \varepsilon_t \sim N(0, \sigma_\varepsilon^2):

f(\varepsilon_1, \dots, \varepsilon_T) = (2\pi\sigma_\varepsilon^2)^{-T/2} \exp\Big(-\frac{\sum_{t=1}^{T}\varepsilon_t^2}{2\sigma_\varepsilon^2}\Big) \qquad (4.44)

In order to solve the estimation problem, equations (4.43) and (4.44) should be written in terms of the observed data and the set of parameters (\delta, \phi, \theta). An ARMA(p, q) process for the stationary transformation z_t can be expressed as:

\varepsilon_t = z_t - \delta - \sum_{i=1}^{p} \phi_i z_{t-i} - \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} \qquad (4.45)

Then, to compute the innovations corresponding to a given set of observations (z_1, \dots, z_T) and parameters, it is necessary to count with the starting values z_0, \dots, z_{1-p}, \varepsilon_0, \dots, \varepsilon_{1-q}. More realistically, the innovations should be approximated by setting appropriate conditions about the initial values, giving rise to conditional least squares or conditional maximum likelihood estimators.

One procedure consists of setting the initial values equal to their unconditional expectations. For example, for the MA(1) process with zero mean, equation (4.45) is \varepsilon_t = z_t - \theta \varepsilon_{t-1}. Assuming \varepsilon_0 = 0, we compute the innovations recursively: \varepsilon_1 = z_1, \varepsilon_2 = z_2 - \theta\varepsilon_1, and so on. That is,

\varepsilon_t = \sum_{j=0}^{t-1} (-\theta)^j z_{t-j} \qquad (4.46)
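The recursion is straightforward to implement; a sketch under the convention z_t = \varepsilon_t + \theta\varepsilon_{t-1} used above (the helper name is ours):

import numpy as np

def ma1_innovations(z, theta):
    # Conditional innovations of a zero-mean MA(1), eps_t = z_t - theta*eps_{t-1},
    # with the starting condition eps_0 = 0 (equation 4.46).
    eps = np.zeros(len(z))
    prev = 0.0
    for t, zt in enumerate(z):
        eps[t] = zt - theta * prev
        prev = eps[t]
    return eps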

A second useful mechanism is to assume that the first p observations of z_t are the starting values and that the previous innovations are again equal to zero. In this case we run equation (4.45) from t = p + 1 onwards. For example, for an AR(p) process, it is

\varepsilon_t = z_t - \delta - \phi_1 z_{t-1} - \dots - \phi_p z_{t-p}, \qquad t = p+1, \dots, T \qquad (4.47)

thus, for given values of the parameters and conditional on the initial values (z_1, \dots, z_p), we can get innovations from t = p + 1 until T. Both procedures are the same for pure AR models, but the first one could be less suitable when an AR component is close to the nonstationary boundary. For example, let's consider an AR(1) process with zero mean and \phi parameter close to the nonstationary boundary. In this case, the initial value z_0 could deviate from its unconditional expectation and the condition z_0 = 0 could distort the estimation results.

4.5.3.0.1 Conditional Least Squares. Least-squares estimation conditioned on the first observations becomes straightforward in the case of pure AR models, leading to linear least squares. For example, for the AR(1) process with zero mean and conditioned on the first value z_1, equation (4.43) becomes the linear problem

\min_{\phi} \sum_{t=2}^{T} (z_t - \phi z_{t-1})^2

leading to the usual estimator

\hat{\phi} = \frac{\sum_{t=2}^{T} z_t z_{t-1}}{\sum_{t=2}^{T} z_{t-1}^2}

which is consistent and asymptotically normal. In a general model with an MA component, the optimization problem (4.43) is nonlinear. For example, to estimate the parameter \theta of the MA(1) process, we substitute equation (4.46) in (4.43),

\min_{\theta} \sum_{t=1}^{T} \Big( \sum_{j=0}^{t-1} (-\theta)^j z_{t-j} \Big)^2

which is a nonlinear function of \theta. Then, common nonlinear optimization algorithms such as Gauss-Newton can be applied in order to get the estimates.
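Both cases can be sketched in a few lines; here the MA(1) objective is minimized with a generic bounded scalar optimizer instead of Gauss-Newton (our simplification, assuming SciPy):

import numpy as np
from scipy.optimize import minimize_scalar

def ar1_cls(z):
    # Conditional (linear) LS estimate of phi for a zero-mean AR(1).
    z = np.asarray(z, dtype=float)
    return np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)

def ma1_cls(z):
    # Conditional nonlinear LS for a zero-mean MA(1): minimize the sum of
    # squared innovations computed recursively with eps_0 = 0.
    z = np.asarray(z, dtype=float)
    def sse(theta):
        prev, total = 0.0, 0.0
        for zt in z:
            prev = zt - theta * prev
            total += prev ** 2
        return total
    return minimize_scalar(sse, bounds=(-0.99, 0.99), method='bounded').x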

4.5.3.0.2 Maximum Likelihood. The ML estimator conditional on the first values is equal to the conditional LS estimator. For example, returning to the AR(1) specification, we substitute the innovations \varepsilon_t = z_t - \phi z_{t-1} in the ML principle (4.44). Taking logarithms, we get the corresponding log-likelihood conditional on the first value z_1:

\ell(\phi, \sigma_\varepsilon^2 \mid z_1) = -\frac{T-1}{2}\ln(2\pi\sigma_\varepsilon^2) - \frac{1}{2\sigma_\varepsilon^2}\sum_{t=2}^{T}(z_t - \phi z_{t-1})^2 \qquad (4.48)

The maximization of this function gives the LS estimator. Instead of setting the initial conditions, we can compute the unconditional likelihood. For an AR(1) model, the joint density function can be decomposed as:

f(z_1, \dots, z_T) = f(z_1) f(z_2, \dots, z_T \mid z_1)

where the marginal distribution of z_1 is normal with zero mean, if \delta = 0, and variance \sigma_\varepsilon^2/(1-\phi^2). Then, the exact log-likelihood under the normality assumption is:

\ell(\phi, \sigma_\varepsilon^2) = -\frac{1}{2}\ln\Big(\frac{2\pi\sigma_\varepsilon^2}{1-\phi^2}\Big) - \frac{(1-\phi^2) z_1^2}{2\sigma_\varepsilon^2} + \ell(\phi, \sigma_\varepsilon^2 \mid z_1)

where the second term \ell(\phi, \sigma_\varepsilon^2 \mid z_1) is equation (4.48). Then, the exact likelihood for a general ARMA model is the combination of the conditional likelihood and the unconditional probability density function of the initial values. As can be shown for the AR(1) model, the exact ML estimator is not linear and these estimates are the solution of a nonlinear optimization problem that becomes quite complex. This unconditional likelihood can be computed for ARMA models via the prediction error decomposition by applying the Kalman filter (Harvey; 1993), which is also a useful tool for bayesian estimation of ARMA models (Bauwens, Lubrano and Richard; 1999; Box, Jenkins and Reinsel; 1994). As the sample size increases, the relative contribution of these initial values to the likelihood tends to be negligible, and so do the differences between conditional and unconditional estimation.
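For the AR(1) case the exact log-likelihood above can be maximized numerically; a sketch (ours, assuming SciPy's general-purpose optimizer rather than the Kalman filter):

import numpy as np
from scipy.optimize import minimize

def ar1_exact_ml(z):
    # Exact (unconditional) ML for a zero-mean AR(1): conditional
    # log-likelihood (4.48) plus the marginal density of z_1.
    z = np.asarray(z, dtype=float)
    T = z.size
    def negloglik(params):
        phi, lnsig2 = params
        if abs(phi) >= 1:
            return np.inf                      # enforce stationarity
        sig2 = np.exp(lnsig2)                  # enforce sigma^2 > 0
        e = z[1:] - phi * z[:-1]
        ll = -0.5 * (T - 1) * np.log(2 * np.pi * sig2) - np.sum(e ** 2) / (2 * sig2)
        ll += -0.5 * np.log(2 * np.pi * sig2 / (1 - phi ** 2)) \
              - (1 - phi ** 2) * z[0] ** 2 / (2 * sig2)
        return -ll
    res = minimize(negloglik, x0=[0.0, np.log(np.var(z))], method='Nelder-Mead')
    return res.x[0], np.exp(res.x[1])          # (phi_hat, sigma2_hat)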


4.5.3.0.3 Example: Minks time series. As an example, let us estimate the following models for the series y_t = \ln(x_t):

y_t = \delta + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \varepsilon_t \qquad (4.49)

y_t = \delta + \varepsilon_t + \theta \varepsilon_{t-1} \qquad (4.50)

y_t = \delta + \phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1} \qquad (4.51)

The ariols quantlet may be applied to compute the linear LS estimates of pure AR processes such as the AR(2) model (4.49). In this case we use the code

ar2 = ariols(z, p, d, "constant")

with p = 2 and d = 0, and the results are stored in the object ar2. The first three elements are the basic results: ar2.b is the vector of parameter estimates, ar2.bst is the vector of their corresponding asymptotic standard errors and ar2.wnv is the innovation variance estimate. The last three components, ar2.checkr, ar2.checkp and ar2.models, are lists that include the diagnostic checking statistics and model selection criteria when the optional strings rcheck, pcheck and ic are included, and take zero value otherwise.

The MA(1) model (4.50) is estimated by conditional nonlinear LS with the arimacls quantlet. The basic results of the code

ma1 = arimacls(z, p, d, q, "constant")

with p = 0, d = 0 and q = 1 consist of: the vector ma1.b of the estimated parameters, the innovation variance estimate ma1.wnv and the vector ma1.conv with the number of iterations and a 0-1 scalar indicating convergence. The other output results, ma1.checkr and ma1.ic, are the same as the ariols components.

The exact maximum likelihood estimation of the ARMA(1,1) process (4.51) can be done by applying the quantlet arima11. The corresponding code is

arima11 = arima11v(z, d, "constant")

with d = 0, and the output includes the vector of parameter estimates, the asymptotic standard errors of the ARMA components, the innovation variance estimate and the optional results arima11.checkr, arima11.checkp and arima11.ic.

The parameter estimates are summarized in table 4.3.

Table 4.3: Minks series. Estimated models

Model        delta     phi1      phi2      theta     sigma2
AR(2)        4.4337    0.8769   -0.2875    --        0.0800
MA(1)        10.7970   --        --        0.6690    0.0888
ARMA(1,1)    4.6889    0.5657    --        0.3477    0.0763

XEGutsm13.xpl
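For readers without XploRe, an approximate equivalent in Python with statsmodels (our translation; estimates may differ slightly from table 4.3 because statsmodels fits all three models by exact ML, and its constant is parameterized as the process mean rather than the Box-Jenkins \delta):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_candidates(y):
    # Fit the AR(2), MA(1) and ARMA(1,1) candidates with a constant term.
    results = {}
    for name, order in [("AR(2)", (2, 0, 0)), ("MA(1)", (0, 0, 1)),
                        ("ARMA(1,1)", (1, 0, 1))]:
        res = ARIMA(y, order=order, trend='c').fit()
        results[name] = res
        print(name, np.round(res.params, 4))   # const, AR/MA terms, sigma2
    return results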

4.5.4 Diagnostic Checking


Once we have identified and estimated the candidate models, we want to assess the adequacy of the selected models to the data. This model diagnostic checking step involves both parameter and residual analysis.

4.5.4.0.1 Diagnostic Testing for Residuals. If the fitted model is adequate, the residuals should be approximately white noise. So, we should check whether the residuals have zero mean and whether they are uncorrelated. The key instruments are the timeplot, the ACF and the PACF of the residuals. The theoretical ACF and PACF of white noise processes take value zero for lags j \neq 0, so if the model is appropriate, most of the coefficients of the sample ACF and PACF of the residuals should be close to zero. In practice, we require that about 95% of these coefficients fall within the non-significance bounds. Moreover, the Ljung-Box statistic (4.42) should take small values, as corresponds to uncorrelated variables. The degrees of freedom of this statistic take into account the number of estimated parameters, so under H_0 the statistic follows approximately a \chi^2 distribution with M - p - q degrees of freedom. If the model is not appropriate, we expect the correlograms (simple and partial) of the residuals to depart from white noise, suggesting the reformulation of the model.
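These residual checks can be sketched with statsmodels' Ljung-Box implementation (our helper; model_df implements the degrees-of-freedom correction mentioned above):

import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

def residual_checks(resid, n_params, M=15):
    # Zero-mean statistic (approx. N(0,1)) and Ljung-Box Q for ARMA residuals.
    resid = np.asarray(resid, dtype=float)
    T = resid.size
    mean_stat = resid.mean() / np.sqrt(resid.var(ddof=1) / T)
    lb = acorr_ljungbox(resid, lags=[M], model_df=n_params)  # chi2(M - p - q)
    return mean_stat, lb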


4.5.4.0.2 Example: Minks time series. We will check the adequacy of the three fitted models to the ln(minks) series. It can be done by using the optional string "rcheck" of the estimation quantlets. For example, for the MA(1) model we can use the code ma1 = arimacls(z,0,0,1, "constant","rcheck"). This option plots the residuals with the usual standard error bounds and the simple and partial correlograms of the residuals (see figure 4.17). The output ma1.checkr also stores the residuals in the vector a, the statistic for testing the zero mean hypothesis in the scalar stat, and the ACF, Ljung-Box statistic and PACF in the matrix acfQ.

With regard to the zero mean condition, the timeplot shows that the residuals of the MA(1) model evolve around zero, and this behavior is supported by the corresponding hypothesis test: the hypothesis of zero mean errors is not rejected. We can see in the correlogram of these residuals in figure 4.17 that several coefficients are significant and, besides, the correlogram shows a decaying sine-cosine wave. The Ljung-Box statistics reject the required hypothesis of uncorrelated errors. These results lead us to reformulate the model.

XEGutsm14.xpl

Figure 4.17: Minks series. Residual diagnostics of the MA(1) model

Next, we will check the adequacy of the AR(2) model (4.49) to the ln(minks) data by means of the code ar2 = ariols(z,2,0, "constant", "rcheck"). The output ar2.checkr provides us with the same results as the arimacls one, namely ar2.checkr.a, ar2.checkr.stat and ar2.checkr.acfQ. Most of the coefficients of the ACF lie under the non-significance bounds, and the hypothesis of uncorrelated errors is not rejected by the Ljung-Box statistic in any case.
XEGutsm15.xpl

Finally, the residual diagnostics for the ARMA(1,1) model (4.51) are computed with the optional string "rcheck" of the code arma11 = arima11(z,0, "constant", "rcheck"). These results show that, given a significance level of 5%, both hypotheses of uncorrelated errors and zero mean errors are not rejected.
XEGutsm16.xpl

4.5.4.0.3 Diagnostic Testing for Parameters. The usual t-statistics to test the statistical significance of the AR and MA parameters should be carried out to check whether the model is overspecified. But it is important, as well, to assess whether the stationarity and invertibility conditions are satisfied: if we factorize the AR and MA polynomials \phi_p(L) and \theta_q(L) and one of their roots is close to unity, it may be an indication of lack of stationarity and/or invertibility. An inspection of the covariance matrix of the estimated parameters allows us to detect the possible presence of high correlation between the estimates of some parameters, which can be a manifestation of the presence of a 'common factor' in the model (Box and Jenkins; 1976).

4.5.4.0.4 Example: Minks time series. We will analyse the parameter diagnostics of the estimated AR(2) and ARMA(1,1) models (see equations (4.49) and (4.51)). The quantlets ariols for OLS estimation of pure AR models and arima11 for exact ML estimation of ARMA(1,1) models provide us with these diagnostics by means of the optional string "pcheck". The ariols output stores the t-statistics in the vector ar2.checkp.bt, the estimate of the asymptotic covariance matrix in ar2.checkp.bvar and the result of checking the necessary condition for stationarity (4.16) in the string ar2.checkp.est. When the process is stationary, this string takes value 0 and otherwise a warning message appears. The arima11 output also checks the stationarity and invertibility conditions of the estimated model and stores the t-statistics


and the asymptotic covariance matrix of the ARMA parameters. The following table shows the results for the AR(2) model:

[1,] " ln(Minks), AR(2) model "
[2,] " Parameter    Estimate    t-ratio "
[3,] "_____________________________________________"
[4,] " delta         4.434       3.598"
[5,] " phi1          0.877       6.754"
[6,] " phi2         -0.288      -2.125"

XEGutsm17.xpl

It can be observed that the parameters of the AR(2) model are statistically significant and that the roots of the AR polynomial are complex and lie outside the unit circle, indicating that the stationarity condition is clearly satisfied. Then, the AR(2) specification seems to be an appropriate model for the ln(minks) series. Similar results are obtained for the ARMA(1,1) model.
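Checking the roots numerically is simple; a sketch (ours) for the lag-polynomial conventions used in this section:

import numpy as np

def polynomial_root_moduli(phi=(), theta=()):
    # Moduli of the roots of phi(L) = 1 - phi1*L - ... and
    # theta(L) = 1 + theta1*L + ...; all moduli must exceed 1 for
    # stationarity (AR) and invertibility (MA).
    out = {}
    for name, coefs, sign in [("AR", phi, -1.0), ("MA", theta, 1.0)]:
        if len(coefs):
            poly = np.r_[1.0, sign * np.asarray(coefs)]   # increasing powers of L
            out[name] = np.abs(np.roots(poly[::-1]))      # np.roots: highest degree first
    return out

# e.g. polynomial_root_moduli(phi=(0.877, -0.288)) -> both moduli about 1.86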

4.5.5 Model Selection Criteria


Once a set of models has been identified and estimated, it is possible that more than one of them is not rejected in the diagnostic checking step. Although we may want to use all of them to check which performs best in forecasting, usually we want to select among them. In general, the model which minimizes a certain criterion function is selected.

The standard goodness-of-fit criterion in Econometrics is the coefficient of determination, R^2 = 1 - \hat{\sigma}_\varepsilon^2 / \hat{\sigma}_y^2, where \hat{\sigma}_y^2 is the sample variance of the series. Therefore, maximizing R^2 is equivalent to minimizing the sum of squared residuals. This measure presents some problems that limit its usefulness for model selection. First, the R^2 cannot decrease when more parameters are added to a model, so it favours overparameterized models. Besides, economic time series usually present strong trends and/or seasonalities, and any model that captures these facts to some extent will have a very large R^2. Harvey (1989) proposes modifications to this coefficient to solve this problem.

Due to the limitations of the R^2, a number of criteria have been proposed in the literature to evaluate the fit of the model versus the number of parameters (see Pötscher and Srinivasan (1994) for a survey). These criteria were developed for pure AR models but have been extended for ARMA models. It is assumed that the degree of differencing has been decided and that the object of the criterion is to determine the most appropriate values of p and q. The most applied model selection criteria are the Akaike Information Criterion, AIC (Akaike; 1974), and the Schwarz Information Criterion, SIC (Schwarz; 1978), given by:

AIC = \ln \hat{\sigma}_\varepsilon^2 + \frac{2k}{T} \qquad (4.52)

SIC = \ln \hat{\sigma}_\varepsilon^2 + \frac{k \ln T}{T} \qquad (4.53)

where k is the number of estimated ARMA parameters and T is the number of observations used for estimation. Both criteria are based on the estimated variance \hat{\sigma}_\varepsilon^2 plus a penalty adjustment depending on the number of estimated parameters, and it is in the extent of this penalty that these criteria differ. The penalty proposed by SIC is larger than AIC's, since \ln T > 2 for T \geq 8. Therefore, the difference between both criteria can be very large if T is large; SIC tends to select simpler models than those chosen by AIC. In practical work, both criteria are usually examined. If they do not select the same model, many authors tend to recommend using the more parsimonious model selected by SIC.
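Both criteria are one-liners given the innovation variance estimate; a sketch (our helper name):

import numpy as np

def aic_sic(sigma2_hat, k, T):
    # Information criteria (4.52)-(4.53): log innovation variance plus a
    # penalty in the number of estimated ARMA parameters k.
    aic = np.log(sigma2_hat) + 2 * k / T
    sic = np.log(sigma2_hat) + k * np.log(T) / T
    return aic, sic

# pick the model with the smaller criterion value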

4.5.5.0.1 Example: Minks time series. We will apply these information criteria to select between the AR(2) and ARMA(1,1) models that were not rejected at the diagnostic checking step. The optional string "msc" of the estimation quantlets provides us with the values of both criteria, AIC and SIC. For example, the output of the code ar2 = ariols(z,2,0, "constant","nor", "nop", "msc") includes the vector ar2.ic with these values. The following table summarizes the results for these fitted models:

Model        AIC       SIC
AR(2)        -2.510    -2.440
ARMA(1,1)    -2.501    -2.432
XEGutsm18.xpl

XEGutsm19.xpl

Figure 4.18: Minks series. Forecasts for the AR(2) model

Both criteria select the AR(2) model, and we use this model to forecast. This model generates a cyclical behavior of period equal to 10.27 years. The forecast function of this model for the minks series can be seen in figure 4.18.

4.5.6 Example: European Union G.D.P.


To illustrate the time series modelling methodology we have presented so far, we analyze a quarterly, seasonally adjusted series of the European Union G.D.P. from the first quarter of 1962 until the first quarter of 2001 (157 observations).

Table 4.4: GDP (E.U. series). Unit-root and stationarity tests

Statistic             Test value   Critical value (5%)
ADF (constant)         0.716       -2.86
ADF (linear trend)    -1.301       -3.41
KPSS (constant)        7.779        0.46
KPSS (linear trend)    0.535        0.15

This series is plotted in the first graphic of figure 4.19. It can be observed that the series displays a nonstationary pattern with an upward trending behavior. Moreover, the shape of the correlograms (left column of figure 4.19) is typical of a nonstationary process, with a slow decay in the ACF and a coefficient close to unity in the PACF. The ADF test-values (see table 4.4) do not reject the unit-root null hypothesis, either against the alternative of a process stationary around a constant or around a linear trend. Furthermore, the KPSS statistics clearly reject the null hypothesis of stationarity around a constant or a linear trend. Thus, we should analyze the stationarity of the first differences of the series, \Delta y_t.
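The four tests of table 4.4 can be reproduced with statsmodels (a sketch; the tutorial itself uses XploRe quantlets):

import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def unit_root_tests(y):
    # ADF (H0: unit root) and KPSS (H0: stationarity), each computed with
    # a constant ('c') and with a linear trend ('ct'), as in table 4.4.
    rows = []
    for reg in ['c', 'ct']:
        adf_stat = adfuller(y, regression=reg)[0]
        kpss_stat = kpss(y, regression=reg)[0]
        rows.append((reg, adf_stat, kpss_stat))
    return rows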

XEGutsm20.xpl

Figure 4.19: European Union G.D.P. (US Dollars, 1995 prices)


The right column of figure 4.19 displays the timeplot of the differenced series \Delta y_t and its estimated ACF and PACF. The graph of \Delta y_t shows a series that moves around a constant mean with approximately constant variance. The estimated ACF decreases quickly, and the ADF test-value clearly rejects the unit-root hypothesis against the alternative of a process stationary around a constant. Given these results, we may conclude that \Delta y_t is a stationary series.

The first coefficients of the ACF of \Delta y_t are statistically significant and decay, as in AR or ARMA models. With regard to the PACF, its first coefficient is clearly significant and large, indicating that an AR(1) model could be appropriate for the differenced series. But given that the first ACF coefficients show some decreasing structure and a further PACF coefficient is statistically significant, perhaps an ARMA(1,1) model should be tried as well. With regard to the mean, the value of the statistic for the hypothesis H_0: \mu = 0 leads us to reject the zero mean hypothesis.

Therefore we will analyze the following two models:

\Delta y_t = \delta + \phi \Delta y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}

\Delta y_t = \delta + \phi \Delta y_{t-1} + \varepsilon_t

Table 4.5: GDP (E.U. series). Estimation results

                            ARMA(1,1)        AR(1)
delta                        0.22             0.28  (7.37)
phi   (t-ratio)              0.40  (0.93)     0.25  (3.15)
theta (t-ratio)             -0.24 (-0.54)     --
sigma2                       0.089            0.089
Zero-mean test               0.020           -4.46e-15
Ljung-Box Q (two lags)       6.90 / 9.96      6.60 / 10.58
AIC                         -2.387           -2.397
SIC                         -2.348           -2.377

Estimation results of the ARMA(1,1) and AR(1) models are summarized in table 4.5 and figure 4.20. Table 4.5 reports parameter estimates, t-statistics, the zero mean hypothesis test-value, Ljung-Box statistic values and the usual model selection criteria, AIC and SIC, for the two models. Figure 4.20 shows the plot of the residuals and their correlograms.

Figure 4.20: GDP (E.U. series). Residuals and ACF

Both models pass the residual diagnostics with very similar results: the zero mean hypothesis for the residuals is not rejected, and the correlograms and the Ljung-Box statistics indicate that the residuals behave as white noise processes. However, the parameters of the ARMA(1,1) model are not statistically significant. Given that including an MA term does not seem to improve the results (see the AIC and SIC values), we select the more


parsimonious AR(1) model.

Figure 4.21: GDP (E.U. series). Actual and Forecasts

Figure 4.21 plots the point and interval forecasts for the next 5 years generated by this model. As expected, since the model for the GDP series is integrated of order one with a nonzero constant, the eventual forecast function is a straight line with positive slope.
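A sketch of this final step in Python (ours): fit the AR(1) with constant to the differences and integrate the forecasts back to the level of the series, which produces the linear eventual forecast function described above.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def gdp_forecast(y, steps=20):
    # AR(1) with constant on the first differences; forecasts of the
    # differences are cumulated and added to the last observed level.
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    res = ARIMA(dy, order=(1, 0, 0), trend='c').fit()
    f = res.forecast(steps=steps)          # future differences
    return y[-1] + np.cumsum(f)            # level forecasts: eventually a straight line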
