
Bootstrap-after-Bootstrap Prediction Intervals for Autoregressive Models
Author(s): Jae H. Kim
Source: Journal of Business & Economic Statistics, Vol. 19, No. 1 (Jan., 2001), pp. 117-128
Published by: American Statistical Association
Stable URL: http://www.jstor.org/stable/1392547

Bootstrap-After-Bootstrap Prediction Intervals for Autoregressive Models


Jae H. KIM
School of Business, La Trobe University, Bundoora, Victoria 3083, Australia (j.kim@latrobe.edu.au)
The use of the Bonferroni prediction interval based on the bootstrap-after-bootstrap is proposed for autoregressive (AR) models. Monte Carlo simulations are conducted using a number of AR models including stationary, unit-root, and near-unit-root processes. The major finding is that the bootstrap-after-bootstrap provides a superior small-sample alternative to asymptotic and standard bootstrap prediction intervals. The latter are often too narrow, substantially underestimating future uncertainty, especially when the model has unit roots or near unit roots. Bootstrap-after-bootstrap prediction intervals are found to provide an accurate and conservative assessment of future uncertainty under nearly all circumstances considered.

KEY WORDS: Bias correction; Interval forecasting; Nonnormality; Unit roots.

The bootstrap method (Efron and Tibshirani 1993) has been found to be a useful small-sample alternative to conventional methods of constructing confidence intervals in autoregressive (AR) models. Past studies on AR forecasting include those of Findley (1986), Stine (1987), Masarotto (1990), Thombs and Schucany (1990), Kabaila (1993), McCullough (1994), Breidt, Davis, and Dunsmuir (1995), Grigoletto (1998), and Kim (1999). The standard (nonparametric) bootstrap employed in past studies generates bootstrap replicates necessarily biased in small samples, due to biases present in AR parameter estimators (see Tjostheim and Paulsen 1983; Nicholls and Pope 1988; Shaman and Stine 1988; Pope 1990). Recently, Kilian (1998a) proposed what is called the bootstrap-after-bootstrap to construct confidence intervals for impulse responses of vector autoregressive (VAR) models. It has a built-in bias-correction procedure that adjusts biases in bootstrap replicates and is implemented by two successive applications of the standard bootstrap. Monte Carlo simulations by Kilian (1998a) revealed that bootstrap-after-bootstrap intervals perform substantially better than those based on the conventional methods in small samples.

In this article, the bootstrap-after-bootstrap is applied to prediction intervals for AR models. A feature fundamentally different from Kilian's (1998a) procedure is the use of the backward AR model in resampling. This is to incorporate the conditionality of AR forecasts on past observations into the bootstrap forecasts. This extends the works of Thombs and Schucany (1990) and Kim (1999), where the bootstrap with resampling based on the backward AR model is used to construct what may be called the standard bootstrap prediction intervals (BPI's). Although these authors concluded that the bootstrap provides a useful small-sample alternative to asymptotic prediction intervals (API's), they found that both API's and BPI's exhibit unsatisfactory performances when the model has characteristic roots close to unity. If biases of parameter estimators are the major cause of their poor performances, the bootstrap-after-bootstrap can provide a superior alternative.

The bootstrap prediction intervals proposed by Masarotto (1990) and Grigoletto (1998) use forward AR models to generate bootstrap forecasts. The bootstrap-after-bootstrap can be applied to their framework to generate bias-free bootstrap replicates, but the major weakness is that their bootstrap forecasts are not conditional on past observations. The bootstrap based on backward AR resampling adopted here requires innovations to be normal for asymptotic validity, because backward AR innovations are independent only when forward innovations are normal (see Thombs and Schucany 1990). To circumvent this problem, Kabaila (1993) and Breidt et al. (1995) proposed bootstrap intervals applicable to AR models with nonnormal innovations. However, due to their high computational costs and unknown small-sample properties, direct resampling of backward residuals adopted in this article seems to be a more practical alternative (see Kilian and Demiroglu 2000). Nevertheless, because there is evidence of nonnormality in economic and business time series (see Sims 1988; Kilian 1998b), the properties of prediction intervals under nonnormal innovations are examined in this article.

The purpose of this article is to evaluate properties of bootstrap-after-bootstrap prediction intervals (BBPI's) for univariate and VAR models. Small-sample properties of BBPI's are compared with those of asymptotic (Lütkepohl 1991) and standard bootstrap prediction intervals (Thombs and Schucany 1990; Kim 1999). As in Kim (1999), the use of Bonferroni-type joint prediction intervals is considered. Monte Carlo simulations are conducted using a number of univariate and bivariate AR models of orders 1 and 2, including stationary and unit-root AR models, under normal and nonnormal innovations. Only the results associated with the bivariate AR(1) models are presented in this article because qualitatively similar results were evident from the other AR models. Bootstrap intervals are constructed based on the percentile and percentile-t methods detailed by Efron and Tibshirani (1993, p. 160, p. 170). Other bootstrap procedures that are potentially superior to the percentile method are the BC and BCa methods (Efron and Tibshirani 1993, p. 184). However, past studies reported that they are inferior to the percentile-t in econometric applications: See Rilstone and Veall (1996) for statistical inference in the seemingly unrelated regression context and Kim (1999) for VAR forecasting. In view of these results, the BC and BCa methods are not considered in this article.
The major finding of the article is that the bootstrap-after-bootstrap provides a superior alternative to the asymptotic and standard bootstrap prediction intervals. BBPI's tend to provide the most accurate and conservative assessment of future uncertainty, especially when the sample size is small, under nearly all circumstances including the AR models with roots close to or equal to unity. In Section 1, asymptotic and bootstrap prediction intervals for AR models are presented. The asymptotic validity of the bootstrap is also given in Section 1. Section 2 presents the experimental design, followed by Section 3 in which simulation results are presented. In Section 4, the case of a VAR model with high AR order and dimension is presented using an empirical example and simulation. The conclusions are drawn in Section 5.

1. ASYMPTOTIC AND BOOTSTRAP PREDICTION INTERVALS

We consider the K-dimensional stationary AR(p) model of the form

Y_t = ν + A_1 Y_{t-1} + ... + A_p Y_{t-p} + u_t,   t = 0, ±1, ±2, ...,   (1)

where the K x 1 vector of iid innovations u_t is such that E(u_t) = 0 and E(u_t u_t') = Σ_u, a K x K symmetric positive definite matrix with finite elements. Note that the AR order p is assumed to be finite and known. The backward AR(p) model associated with the forward model (1) can be written as

Y_t = μ + H_1 Y_{t+1} + ... + H_p Y_{t+p} + v_t,   t = 0, ±1, ±2, ...,   (2)

where E(v_t) = 0 and E(v_t v_t') = Σ_v, a K x K symmetric positive definite matrix with finite elements. The backward model is used for the bootstrap as a means of generating bootstrap forecasts conditionally on the last p observations of the original series. Maekawa (1987) showed for the univariate AR(1) case that the conditionality on past observations is an important determinant of forecast bias and variability in small samples. Chatfield (1993) also stressed the importance of considering this conditionality for prediction intervals. The forward and backward models (1) and (2) are closely related in the VAR case; see Kim (1997, 1998) for details.

1.1 Asymptotic Prediction Intervals

The unknown coefficient matrices in (1) and (2) are estimated using the least squares (LS) method. Let Â = (ν̂, Â_1, ..., Â_p) and Ĥ = (μ̂, Ĥ_1, ..., Ĥ_p) denote the LS estimators for A = (ν, A_1, ..., A_p) and H = (μ, H_1, ..., H_p). Forecasts are generated in the usual way using the estimated coefficients as

Ŷ_n(h) = ν̂ + Â_1 Ŷ_n(h-1) + ... + Â_p Ŷ_n(h-p),

where Ŷ_n(j) = Y_{n+j} for j ≤ 0. It can be shown that, under the assumption of normal innovations,

[Y_{n+h} - Ŷ_n(h)]' Σ̂_Y(h)^{-1} [Y_{n+h} - Ŷ_n(h)] →_a χ²(K),

where Σ̂_Y(h) is the asymptotic mean squared error (MSE) matrix of Ŷ_n(h) (Lütkepohl 1991, pp. 86-87) and "→_a" denotes "asymptotically distributed." Truncation of the preceding quadratic form at the 100(1 - α)th percentile of χ²(K) defines an ellipsoid on the K-dimensional plane of forecasts with the nominal coverage rate of 100(1 - α)%. By Bonferroni's method (Lütkepohl 1991, pp. 34-35), a rectangular region formed jointly by K prediction intervals can approximate this ellipsoid. More precisely, K component intervals each with the nominal coverage rate 100(1 - α/K)% form a rectangular region with the nominal coverage rate of at least 100(1 - α)%. The asymptotic prediction interval (API) for the kth AR component, with the nominal coverage rate of 100(1 - α/K)%, can be defined as

API_k ≡ [Ŷ_{k,n}(h) ± z_τ σ̂_k(h)],   k = 1, ..., K,

where Ŷ_{k,n}(h) is the kth component of Ŷ_n(h), z_τ is the upper 100τth percentile of the standard normal distribution with τ = .5(α/K), and σ̂_k(h) is the square root of the kth diagonal element of Σ̂_Y(h). The rectangular region formed jointly by these K Bonferroni API_k's has the nominal coverage rate of at least 100(1 - α)%.

For an AR process that is suspected to have roots close to unity or exact unit roots, an AR model in levels is often fitted for the purpose of forecasting in practice. This is because differencing may destroy valuable long-run information present in the original time series. Because statistical tests for unit roots and cointegration tend to have unsatisfactory small-sample properties, practitioners may mistakenly specify AR models in levels or may not wish to use pre-test information related to the presence of unit roots or cointegrating restrictions. Asymptotic interval forecasting of a nonstationary AR process can be conducted in the same way as described previously for the stationary case. However, as pointed out by Lütkepohl (1991, p. 378), the use of API's for the nonstationary case may be questionable. He stated, "There is some danger that the confidence level of corresponding forecast intervals is overstated. These statements are a bit speculative, however, because little is known about the small sample properties of forecasts based on the estimated unstable processes." In view of the preceding statement, the nonstationary AR case will receive particular attention in this article. Since Σ̂_Y(h) is derived based on the assumption of stationarity, Lütkepohl (1991) proposed the use of a slightly modified version of Σ̂_Y(h) for nonstationary processes. However, preliminary simulations found that API's based on the modified version perform similarly to those based on Σ̂_Y(h). Based on this result, API's are constructed using Σ̂_Y(h) throughout.
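To make the construction above concrete, the Bonferroni API for a bivariate AR(1) fitted by LS can be sketched as follows. This is only a sketch under simplifying assumptions, not code from the article: the function names are ours, and Σ̂_Y(h) is approximated by its leading term, the sum of Â_1^i Σ̂_u (Â_1^i)' for i = 0, ..., h-1, ignoring the parameter-estimation correction in Lütkepohl (1991, pp. 86-87).

```python
import numpy as np
from statistics import NormalDist

def fit_var1(y):
    """LS fit of Y_t = v + A1 Y_{t-1} + u_t; y is an (n, K) array."""
    X = np.hstack([np.ones((len(y) - 1, 1)), y[:-1]])   # regressors (1, Y_{t-1})
    B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)       # shape (1 + K, K)
    v, A1 = B[0], B[1:].T
    resid = y[1:] - X @ B
    Sigma_u = resid.T @ resid / (len(y) - 1 - (1 + y.shape[1]))
    return v, A1, Sigma_u

def bonferroni_api(y, h, alpha=0.05):
    """Bonferroni asymptotic PI for each component at horizon h,
    using only the leading term of the forecast MSE matrix."""
    v, A1, Sigma_u = fit_var1(y)
    K = y.shape[1]
    f = y[-1].copy()                       # point forecast by recursion
    for _ in range(h):
        f = v + A1 @ f
    Sigma_Y = np.zeros((K, K))             # sum of A1^i Sigma_u (A1^i)'
    Phi = np.eye(K)
    for _ in range(h):
        Sigma_Y += Phi @ Sigma_u @ Phi.T
        Phi = Phi @ A1
    tau = 0.5 * alpha / K                  # per-component tail area
    z = NormalDist().inv_cdf(1 - tau)
    s = np.sqrt(np.diag(Sigma_Y))
    return f - z * s, f + z * s
```

The per-component tail area tau = .5(α/K) is exactly the Bonferroni allocation described in the text; the K resulting intervals jointly have nominal coverage of at least 100(1 - α)%.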

1.2 Bootstrap Prediction Intervals

The BBPI's can be obtained as follows:

Stage 1a. Given n realizations (Y_1, ..., Y_n) of (1), calculate Â, Ĥ, and the corresponding LS residuals {û_t}_{t=p+1}^n and {v̂_t}_{t=1}^{n-p}. Note that these LS residuals are scaled in the same way as in Thombs and Schucany (1990).

Stage 1b. Using the standard nonparametric bootstrap, obtain the bootstrap estimators Â* and Ĥ* for A and H based on the LS method. For the former case, pseudodatasets are generated as Y_t* = ν̂ + Â_1 Y*_{t-1} + ... + Â_p Y*_{t-p} + u_t*, where u_t* is a random draw with replacement from {û_t}, and, for the latter, Y_t* = μ̂ + Ĥ_1 Y*_{t+1} + ... + Ĥ_p Y*_{t+p} + v_t*, where v_t* is a random draw from {v̂_t} with replacement. The estimates of the biases of Â and Ĥ are calculated as bias(Â) = Â* - Â and bias(Ĥ) = Ĥ* - Ĥ. Adapting the procedure proposed by Kilian (1998a), the bias-corrected estimators are calculated from bias(Â) and bias(Ĥ) and denoted as Â^c and Ĥ^c.

Stage 2. Generate pseudodatasets recursively based on (2) as

Y_t* = μ̂^c + Ĥ_1^c Y*_{t+1} + ... + Ĥ_p^c Y*_{t+p} + v_t*,   (3)

where the p starting values are set equal to the last p values of the original series. Using these pseudodatasets, the coefficient matrices of the forward model (1) are estimated using the LS method and the estimators are denoted Â*. Adapting again the bias-correction procedure of Kilian (1998a), the biases in Â* are corrected to yield the bias-corrected estimator Â*^c. Note that bias(Â) calculated in Stage 1 is used for this purpose to ease the burden of computation, as suggested by Kilian (1998a). The bootstrap replicates of AR forecasts are generated recursively as

Y_n*(h) = ν̂*^c + Â_1*^c Y_n*(h-1) + ... + Â_p*^c Y_n*(h-p) + u*_{n+h},   (4)

where Y_n*(j) = Y_{n+j} for j ≤ 0 and u*_{n+h} is a random draw from {û_t} with replacement. Repeated generation of pseudodatasets as in (4), say B times, will yield the bootstrap forecast distribution {Y_n*(h; i)}_{i=1}^B. Note that, for simplicity of exposition, the notation Y_t* is used to indicate different types of pseudodatasets in Stages 1b and 2. By generating pseudodatasets and bootstrap forecasts as in (3) and (4), the conditionality of AR forecasts on the last p observations of the original series can explicitly be incorporated into bootstrap forecasts.

The BBPI for the kth AR component, based on the percentile method with the nominal coverage rate of 100(1 - α/K)%, can be defined as

BBPI_{p,k} ≡ [Y*_{k,n}(h, τ), Y*_{k,n}(h, 1 - τ)],   (5)

where Y*_{k,n}(h, τ) is the 100τth percentile of the kth component of the bootstrap distribution {Y_n*(h; i)}_{i=1}^B and τ = .5(α/K). The BBPI for the kth AR component, based on the percentile-t method with the nominal coverage rate of 100(1 - α/K)%, can be defined as

BBPI_{pt,k} ≡ [Ŷ^c_{k,n}(h) - z*_{k,n}(h, 1 - τ) σ̂_k(h), Ŷ^c_{k,n}(h) - z*_{k,n}(h, τ) σ̂_k(h)],   (6)

where Ŷ^c_{k,n}(h) is the kth component of the AR forecasts generated using Â^c and σ̂_k(h) is the square root of the kth diagonal element of the asymptotic MSE matrix of Ŷ_n(h) calculated using Â^c. Note that z*_{k,n}(h, τ) is the 100τth percentile of {z*_{k,n}(h; i)}_{i=1}^B, where

z*_{k,n}(h; i) = [Y*_{k,n}(h; i) - Ŷ^c_{k,n}(h)] / σ̂*_k(h)

and σ̂*_k(h) is the bootstrap counterpart of σ̂_k(h). By Bonferroni's method, the rectangular regions formed jointly by K Bonferroni BBPI's have the nominal coverage rate of at least 100(1 - α)%.

The standard bootstrap prediction intervals proposed by Thombs and Schucany (1990) and Kim (1999) are constructed using the pseudodatasets (3) and bootstrap replicates (4), both generated with Ĥ and Â instead of their bias-corrected versions. This version of the percentile interval (5) is called the standard bootstrap prediction interval (BPI) based on the percentile method and denoted as BPI_p. Similarly, based on the bootstrap replicates generated with Ĥ and Â, the standard bootstrap version of (6) can be constructed using Ŷ_{k,n}(h) and σ̂_k(h) in place of their bias-corrected counterparts. This is called the standard bootstrap interval based on the percentile-t method and denoted as BPI_pt. As mentioned earlier, Thombs and Schucany (1990) and Kim (1999) found that these BPI's exhibit unsatisfactory small-sample performances when the model is a near-unit-root process. Because BBPI's are constructed using bias-corrected parameter estimators, they have strong potential to perform better than the BPI's.

For the bootstrap to be a valid small-sample alternative to the asymptotic method, it should be shown to satisfy some desirable asymptotic properties. As the following theorem states, the bootstrap-after-bootstrap estimators and forecasts satisfy desirable asymptotic properties under certain conditions.

Theorem (Asymptotic Validity of the Bootstrap). Consider a stationary AR process given in (1). Under the assumption that u_t follows an iid normal distribution, along almost all sample sequences, conditionally on the data, (a) Â^c and Â*^c converge to A in conditional probability, (b) Ĥ^c converges to H in conditional probability, and (c) Y_n*(h) converges to Y_{n+h} in distribution.

The proof of the preceding theorem is given in the appendix. It should be noted that both the standard bootstrap and bootstrap-after-bootstrap procedures have been shown to be asymptotically valid only for stationary AR processes, but we follow Kilian (1998a,b) in also investigating their performance in small samples for exact unit-root processes.

In establishing the asymptotic validity of the bootstrap in this article, the assumption of normal innovations is required. This is because the backward disturbance terms v_t in (2) are independent only when the forward disturbance terms u_t in (1) are normally distributed. It should be noted that the API also relies on the normality assumption for its asymptotic validity (see Lütkepohl 1991, p. 33).
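The two-stage procedure above can be sketched for a univariate AR(1) (K = 1, p = 1) as follows. This is a simplified illustration rather than the author's implementation: residuals are merely centered instead of rescaled as in Thombs and Schucany (1990), Kilian's (1998a) stationarity adjustment to the bias correction is omitted, only the percentile interval (5) is formed, and all function names are ours.

```python
import numpy as np

def _fit_ar1(y, backward=False):
    """LS fit of the forward model y_t = c + a*y_{t-1} + u_t,
    or the backward model y_t = d + b*y_{t+1} + v_t."""
    x, z = (y[1:], y[:-1]) if backward else (y[:-1], y[1:])
    X = np.column_stack([np.ones_like(x), x])
    (c, a), *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - (c + a * x)
    return c, a, resid - resid.mean()          # centered residuals

def _sim(c, a, resid, y0, n, rng):
    """Recursive pseudo-series of length n started at y0."""
    out = np.empty(n)
    out[0] = y0
    for t in range(1, n):
        out[t] = c + a * out[t - 1] + rng.choice(resid)
    return out

def _boot_bias(c, a, resid, y0, n, B, rng):
    """Stage-1 bootstrap estimate of the bias of (c, a)."""
    reps = np.empty((B, 2))
    for i in range(B):
        reps[i] = _fit_ar1(_sim(c, a, resid, y0, n, rng))[:2]
    return reps.mean(axis=0) - np.array([c, a])

def bbpi_ar1(y, h, alpha=0.05, B1=200, B2=999, seed=0):
    """Bootstrap-after-bootstrap percentile interval for an AR(1),
    conditional on the last observation."""
    rng = np.random.default_rng(seed)
    n = len(y)
    c, a, u = _fit_ar1(y)                      # forward fit
    d, b, v = _fit_ar1(y, backward=True)       # backward fit
    # Stage 1: bias-correct both sets of LS estimates.
    # (A backward pseudo-series is a forward recursion in reversed time.)
    bias_f = _boot_bias(c, a, u, y[0], n, B1, rng)
    bias_b = _boot_bias(d, b, v, y[-1], n, B1, rng)
    dc, bc = np.array([d, b]) - bias_b
    fcst = np.empty((B2, h))
    for i in range(B2):
        # Stage 2: backward pseudo-series anchored at the observed y_n,
        # so that bootstrap forecasts are conditional on it.
        ys = _sim(dc, bc, v, y[-1], n, rng)[::-1]
        ci, ai = np.array(_fit_ar1(ys)[:2]) - bias_f   # reuse Stage-1 bias
        prev = y[-1]
        for j in range(h):                     # bootstrap forecast path
            prev = ci + ai * prev + rng.choice(u)
            fcst[i, j] = prev
    tau = alpha / 2                            # K = 1, so tau = .5 * alpha
    return np.quantile(fcst, tau, axis=0), np.quantile(fcst, 1 - tau, axis=0)
```

Dropping the two subtractions of `bias_f`/`bias_b` recovers the standard BPI of Thombs and Schucany (1990) in this sketch; the VAR case replaces the scalar recursions by matrix ones, as in (3) and (4).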


For bootstrap intervals, an alternative under nonnormal innovations is to resample forward residuals, whose underlying innovations are independent, and obtain backward residuals by using the relationship between the forward and backward AR models (Breidt et al. 1992; McCullough 1994). An investigation of these methods is beyond the scope of this article. Instead, the bootstrap intervals proposed in this article are simulated under a wide range of nonnormal innovations.

2. EXPERIMENTAL DESIGN

Table 1 presents the coefficient matrices of five bivariate AR(1) models simulated, labeled M1 to M5. These models are chosen so that important parts of the parameter space (for example, unit root, near unit root, and stationarity) are systematically dealt with in simulations. Note that the model M4 is cointegrated. Other design settings include ν ∈ {(0, 0)', (1, 1)'} and vech(Σ_u) = (1, .3, 1)', similar to the design adopted by Kilian (1998a,b), where vech is the column-stacking operator that stacks elements on and below the diagonal only. Sample sizes considered are 50, 100, and 200 to represent small, moderate, and large sample sizes. The forecast horizon h ranges from 1 to 8. The number of Monte Carlo trials is set to 500, while the number of bootstrap replications in Stage 1 is set to 500 and that in Stage 2 is set to 999 (the latter choice is to avoid the discreteness problem; see Booth and Hall 1994). The nominal coverage rate (1 - α) for joint Bonferroni prediction intervals is set to .95. The results associated with the nominal coverage rate of .9 provided qualitatively similar results.

Table 1. Design of the Bivariate AR(1) Models

[The A_1 matrices for M1-M5 are not recoverable from this copy; the roots reported include 2, -2; 1.03, 2; and 1, 2.]
NOTE: The entries in the second row indicate the values of the A_1 matrices; model M4 is cointegrated; ν ∈ {(0, 0)', (1, 1)'}; and, unless stated otherwise, u_t ~ iid N(0, Σ_u) with vech(Σ_u) = (1, .3, 1)'.

The criteria of comparison are the conditional coverage rate and the average area covered by the joint prediction interval. The former is the average frequency of the true future values jointly belonging to the K Bonferroni prediction intervals, and the latter is the square root (or the power of K^{-1}) of the mean area (volume) of the rectangle (rectangular region) formed jointly by the K Bonferroni prediction intervals. To calculate the conditional coverage rate, 100 true future values are generated conditionally on the last p observations of the AR process for each Monte Carlo trial (see Thombs and Schucany 1990).

3. SIMULATION RESULTS

3.1 Stationary Models

Figure 1 exhibits conditional coverage rates of the stationary VAR(1) models M1 to M3 when ν = (0, 0)'. It can be seen from the results associated with models M1 and M2 that API's and BPI's underestimate the nominal coverage rate to a degree. The degree of underestimation, however, decreases as the sample size increases. When n = 200, API's and BPI's exhibit conditional coverage rates nearly identical to 95% in most cases. There is a tendency for BPI_pt to perform better than API and BPI_p. Contrary to API's and BPI's, BBPI's overestimate the nominal coverage rate. BBPI_p substantially overestimates the nominal coverage rate when the sample size is small, while BBPI_pt performs much better than BBPI_p, slightly overestimating the nominal coverage rate. The degree of overestimation by BBPI's decreases as the sample size increases. For the case of the near-unit-root model M3, BBPI's perform much better than the others. When n ∈ {50, 100}, API's and BPI's seriously underestimate the nominal coverage rate, and the degree of underestimation increases as h increases. BBPI's do not show such sensitivity to the increasing values of h when n ∈ {50, 100}, although they overestimate the nominal coverage rate to a degree. It is also evident that BBPI_pt performs better than BBPI_p in most cases. When n = 200, API's and BPI's perform satisfactorily, outperforming BBPI's, which slightly overestimate the nominal coverage rate.

Figure 2 reports average areas of joint prediction intervals for models M2 and M3 when ν = (0, 0)'. Plausibly, the average area increases as h increases. BBPI's are wider than API's and BPI's for all cases, although the differences in average areas become smaller as n increases. For model M2, BBPI_p can be much wider than BBPI_pt, especially when n is small. For model M3, they show virtually identical values of average area. Taking into account the coverage properties reported in Figure 1, it can be said that API's and BPI's are often too narrow and BBPI_p is often too wide, when the sample size is small or moderate. Because the average area properties observed in Figure 2 are also evident for all other models under all possible situations simulated, further details of the average area properties are not reported.

3.2 Unit-Root Models

Figure 3 presents conditional coverage rates for models M4 and M5. The first panel exhibits the case of model M4 when ν = (0, 0)'. As in the near-unit-root case, serious underestimation of the nominal coverage rate by API's and BPI's is evident, especially when h is long. When n = 200, however, API's and BPI's perform satisfactorily, with their conditional coverage rates fairly close to 95%. BBPI's slightly overestimate the nominal coverage rate in most cases but do not deteriorate substantially as h increases. As before, BBPI_p shows a higher degree of overestimation than BBPI_pt. The second panel shows the case of model M5 when ν = (1, 1)'. The API exhibits a substantial degree of underestimation, especially when n is small and h is long. BPI's perform much better than API's, but they underestimate the nominal coverage rate to a degree, especially when n is small. API's and BPI's underestimate the nominal coverage rate to some extent even when n = 200. BBPI's overestimate the nominal coverage rate to a degree when n = 50 but provide fairly accurate conditional coverage rates when the sample size is larger. It should also be noted that the simulation results associated with model M4 when ν = (1, 1)' are qualitatively similar to the case when ν = (0, 0)'.

3.3 Robustness to Nonnormality

It was mentioned earlier that the assumption of normality is crucial for the asymptotic validity of the bootstrap proposed in this article.


Figure 1. Conditional Coverage Rates for Bonferroni Prediction Intervals: Stationary AR Models, Normal Innovations, Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Y_t = ν + A_1 Y_{t-1} + u_t, with u_t ~ iid N(0, Σ_u), vech(Σ_u) = (1, .3, 1)', and ν = (0, 0)'. (Panels: models M1-M3; n = 50, 100, 200; h = 1-8.)
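The conditional coverage rates plotted in these figures can be computed along the following lines for a bivariate AR(1) with normal innovations. This is a sketch with our own naming: the interval bounds are taken as given, p = 1, and 100 future paths are simulated conditionally on the last observation, as described in Section 2.

```python
import numpy as np

def conditional_coverage(y_n, nu, A1, chol_Sigma, lower, upper,
                         n_future=100, seed=0):
    """Per-horizon joint coverage over the K components.

    y_n          : (K,) last observed vector (p = 1)
    nu, A1       : true intercept and coefficient matrix of the DGP
    chol_Sigma   : Cholesky factor of the true innovation covariance
    lower, upper : (h, K) Bonferroni interval bounds
    """
    rng = np.random.default_rng(seed)
    h, K = lower.shape
    hits = np.zeros(h)
    for _ in range(n_future):
        yprev = y_n
        for j in range(h):
            # true future path, conditional on the observed y_n
            yprev = nu + A1 @ yprev + chol_Sigma @ rng.standard_normal(K)
            hits[j] += np.all((lower[j] <= yprev) & (yprev <= upper[j]))
    return hits / n_future
```

Averaging the returned vector over Monte Carlo trials gives the coverage curves against h shown in the figures.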

However, it is often the case in practice that innovations are suspected of being nonnormal. It is therefore instructive to examine the small-sample properties of the asymptotic and bootstrap prediction intervals under nonnormal innovations. Nonnormal innovations are generated using the Student-t(5) and χ²(4) distributions, which are representative of fat-tailed and asymmetric distributions. These nonnormal distributions are adjusted so that they have zero means and the same variance-covariance structure as Σ_u given in Section 2. Figure 4 presents the conditional coverage rates of prediction intervals for model M4 under nonnormal innovations

when ν = (0, 0)'. The results reported here can be compared with those under normal innovations in Figure 3. Under Student-t innovations, the results are virtually unchanged, except that the performance of API's has somewhat worsened. However, under χ²(4) innovations, some prediction intervals show drastic changes. BBPI_p and API do not show any noticeable changes, while BPI_p performs slightly better than under normal innovations. It is noticeable that BBPI_pt and BPI_pt show substantial deterioration when h is short. The degree of deterioration does not decrease as n increases. The third and fourth panels of Figure 4 present conditional coverage rates from model M5 when ν = (1, 1)' under nonnormal innovations. These figures can be compared to those for model M5 in Figure 3 under normal innovations. As in the case of model M4, BBPI_pt and BPI_pt show substantial deterioration under χ²(4) innovations, especially when h is short and n is large. Under Student-t innovations, all prediction intervals seem to be robust to the departure from normality, except for API, which shows a higher degree of underestimation. These findings suggest that bootstrap intervals are reasonably robust to nonnormal innovations. However, care should be taken when asymmetry of the innovation distribution is suspected. In this case, the bootstrap prediction intervals based on the percentile-t method may perform undesirably. Rather, the use of the bootstrap intervals based on the percentile method is strongly recommended.

Figure 2. Average Area of Bonferroni Prediction Intervals: Stationary AR Models, Normal Innovations, Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Y_t = ν + A_1 Y_{t-1} + u_t, with u_t ~ iid N(0, Σ_u), vech(Σ_u) = (1, .3, 1)', and ν = (0, 0)'. (Panels: models M2 and M3; n = 50, 100, 200; h = 1-8.)

One may also be interested in the performances of asymptotic and bootstrap prediction intervals when innovations are generated with conditional heteroscedasticity. As a further extension, the case of VAR(1) models with autoregressive conditional heteroscedasticity (ARCH) innovations (Engle 1982) is also examined. We consider an ARCH(1) process that can be written as

e_t = η_t (φ_0 + φ_1 e²_{t-1})^{1/2},   (7)

where η_t is generated randomly from the standard normal distribution. The parameter values are chosen such that φ_1 ∈ {.3, .6, .9} and φ_0 = 1. For the bivariate case, two independent ARCH(1) processes are generated as in (7) and transformed so that they share the same variance-covariance structure as vech(Σ_u) = (1, .3, 1)'. Figure 5 presents simulation results for models M4 and M5 when ARCH(1) innovations are generated with φ_1 = .6. From the first panel, in which the results from model M4 with ν = (0, 0)' are presented, it is again evident that BBPI's perform much better than the other alternatives, especially when the sample size is small. API's and BPI's underestimate the nominal coverage rate substantially when n ∈ {50, 100} and h is long, while BBPI's provide accurate coverage properties for all sample sizes. A similar feature is evident from model M5 with ν = (1, 1)', in which BBPI's perform much better than the others even when the sample size is 200. The case of model M5 with ν = (0, 0)' provided qualitatively similar results. It is also evident from both models that BBPI_p performs slightly better than BBPI_pt, especially when the sample size is small. Comparing Figure 5 with Figure 3, it can be seen that the performances of all prediction intervals are somewhat worsened under ARCH(1) innovations. Another interesting feature under ARCH(1) innovations is that BBPI's show a tendency to slightly underestimate the nominal coverage rate, a feature not generally observed in the case of other innovations. This may be due to the volatility present in ARCH(1) innovations.

The results presented in Sections 3.1 and 3.2 indicate that the use of BBPI's is highly desirable under normality of innovations, especially for near- or exact-unit-root AR models. This section provides simulation evidence that the desirable properties of BBPI's remain mostly unchanged when innovations

Figure 3. Conditional Coverage Rates for Bonferroni Prediction Intervals: Unit-Root AR Models, Normal Innovations, Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Y_t = ν + A_1 Y_{t-1} + u_t, with u_t ~ iid N(0, Σ_u), vech(Σ_u) = (1, .3, 1)'. For model M4, ν = (0, 0)', and for model M5, ν = (1, 1)'. (Panels: models M4 and M5; n = 50, 100, 200; h = 1-8.)

are generated from nonnormal distributions, including those with conditional heteroscedasticity.
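The innovation designs of Section 3.3 can be sketched as follows. The standardizations are our own simplification: the article adjusts the draws to zero mean and the stated variance-covariance structure, but does not spell out the exact transformation, so here each series is simply scaled to unit variance (for the bivariate case one would then premultiply two independent series by the Cholesky factor of Σ_u).

```python
import numpy as np

def innovations(kind, n, rng, phi0=1.0, phi1=0.6):
    """Unit-variance innovation draws: standardized t(5),
    centered-and-scaled chi-square(4), or the ARCH(1) scheme
    e_t = eta_t * (phi0 + phi1 * e_{t-1}**2)**0.5 of (7),
    rescaled to unit unconditional variance."""
    if kind == "t5":
        e = rng.standard_t(5, n) / np.sqrt(5 / 3)     # Var[t(5)] = 5/3
    elif kind == "chi2_4":
        e = (rng.chisquare(4, n) - 4) / np.sqrt(8)    # mean 4, variance 8
    elif kind == "arch":
        eta = rng.standard_normal(n)
        e = np.empty(n)
        prev = 0.0
        for t in range(n):
            prev = eta[t] * np.sqrt(phi0 + phi1 * prev ** 2)
            e[t] = prev
        e /= np.sqrt(phi0 / (1 - phi1))               # unconditional sd
    else:
        raise ValueError(kind)
    return e
```

Each branch delivers mean-zero, unit-variance draws, so the same cross-sectional transformation can be applied regardless of the marginal distribution.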

3.4 Further Simulation Findings and Discussions


It is of interest to examine the variability of conditional coverage rates of joint Bonferronipredictionintervals.This is because the conditionalcoverage rates associatedwith BBPI's may show high variability as a result of bias correction. Table 2 reportsthe results associated with the models M2 and M4 when n = 100 and v = (0, 0)', because other results are qualitativelysimilar. The conditional coverage rates of API's and BPI's show similar degrees of variability,but BBPI's generate less variable conditional coverage rates than API's and BPI's for all cases. It is interestingto observe that the conditional coverage rates of BBPIPare far less variablethan those associated with BBPIPt.However, as evident from the simulation results, BBPIP shows a higher degree of overestimation than BBPIp,,in most cases. It seems that the use of BBPIp, is highly attractive,because its conditionalcoverage rates are fairly close to the nominal coverage rate in most cases and less variablethan those of API's and BPI's. Because AR models with deterministic time trend are popular for forecasting in practice, it is of interest to examine small-sample propertiesof asymptotic and bootstrapprediction intervals when a deterministic linear time trend is included in the model. The results associated with models

M2 to M4 when v = (1, 1)' are presented in Figure 6. These models are generated with the coefficients of the linear time-trend variables set to zero, but they are treated as unknowns for estimation and forecasting. The results are in accordance with those from the case without the linear trend. For unit-root models M3 and M4, API's and BPI's exhibit serious underestimation when n is small and h is long. BBPI's overestimate the nominal coverage rate to a degree but provide much more accurate coverage properties than API's and BPI's for all cases. For model M2, API's and BPI's show underestimation of the nominal coverage rate when n = 50, but their performance improves as n increases. As in the case of the no-trend model, BBPI's overestimate the nominal coverage rate for model M2 even when n = 200. It is again evident that BBPIpt provides the most accurate conditional coverage rates, especially for the model with near or exact unit roots. The results suggest that BBPI's perform in general better than other alternatives when AR models with a deterministic time trend are considered.

An interesting feature associated with BBPI's is that they tend to overestimate the nominal coverage rate in most cases. This tendency is particularly strong for stationary AR models whose roots are far from unity. The following conjecture can be made. For these AR models, whose biases of parameter estimators are likely to be negligible, the bias-correction procedure of the bootstrap-after-bootstrap can bring more sampling variability than gain in precision. This may result in overestimation of the nominal coverage rate. For AR models with near or exact unit roots, BBPI's still show a small degree of overestimation even when n = 200. These AR models may have fairly small biases of parameter estimators when the sample size is as large as 200. Similarly to the case of stationary AR models whose roots are far from unity, gain in precision can be outweighed by extra sampling variability brought about by bias correction.

Figure 4. Conditional Coverage Rates of Bonferroni Prediction Intervals for Models M4 and M5 (with drift): Nonnormal Innovations (Student-t and Chi-Squared Distributions), Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Yt = v + A1Yt-1 + ut, vech(Σu) = (1, .3, 1)'. For model M4, v = (0, 0)', and for model M5, v = (1, 1)'.

Figure 5. Conditional Coverage Rates of Bonferroni Prediction Intervals: ARCH(1) Innovations With Coefficients 1 and .6, Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Yt = v + A1Yt-1 + ut, vech(Σu) = (1, .3, 1)'. For model M4, v = (0, 0)', and for model M5, v = (1, 1)'.

Table 2. Standard Deviations of Conditional Coverage Rates for Bonferroni Prediction Intervals

n = 100, Model M4
h    API     BBPIp   BBPIpt  BPIp    BPIpt
1    3.19    3.06    3.21    3.08    3.20
2    3.67    3.22    2.88    2.65    3.01
3    3.99    3.15    2.52    2.36    3.33
4    4.71    3.38    2.47    2.38    3.59
5    5.28    3.55    2.62    2.31    4.16
6    6.20    3.31    2.84    2.07    4.36
7    6.84    3.41    2.75    2.11    5.18
8    7.39    3.58    3.03    2.15    5.55

n = 100, Model M2
h    API     BBPIp   BBPIpt  BPIp    BPIpt
1    3.65    3.73    3.44    3.33    3.05
2    3.72    3.41    3.53    3.26    2.72
3    4.38    3.46    4.15    3.26    2.74
4    4.76    3.49    4.45    3.54    3.06
5    5.70    3.78    5.33    3.72    3.18
6    6.28    3.77    5.96    3.80    3.00
7    6.87    3.79    6.62    3.83    3.13
8    7.33    3.80    7.34    3.78    3.06

NOTE: For both models, v = (0, 0)' under normal innovations. API: asymptotic prediction intervals; BBPIp: bootstrap-after-bootstrap prediction intervals based on the percentile method; BBPIpt: bootstrap-after-bootstrap prediction intervals based on the percentile-t method; BPIp: standard bootstrap prediction intervals based on the percentile method; BPIpt: standard bootstrap prediction intervals based on the percentile-t method.

4. A MODEL WITH HIGHER ORDER AND DIMENSION: EMPIRICAL EXAMPLE AND SIMULATION

There have been many empirical studies on VAR forecasting, but the use of interval forecasts has been neglected. For example, Litterman (1986), McNees (1986), Bessler and Babula (1987), and Simkins (1995) used only point forecasts to evaluate the forecasting ability of VAR models. However, as Chatfield (1993) and Christoffersen (1998) stressed, interval forecasts should be used (together with or instead of point forecasts) for a more thorough and informative assessment of future uncertainty. As an application, the five-dimensional VAR model examined by Simkins (1995) is used for interval forecasting of U.S. macroeconomic time series. The dataset was given by Simkins (1995) and includes 172 quarterly observations from 48:1 to 90:4 for real gross national product (GNP), the GNP deflator, real business fixed investment, money supply (M1), and the unemployment rate. A VAR(6) model is fitted following Simkins (1995). The parameters are estimated using the data points ranging from 48:1 to 88:4, and prediction intervals are calculated for the period from 89:1 to 90:4. All data are transformed to natural logarithms for estimation and forecasting. Five of the characteristic roots calculated from the estimated coefficients are

Figure 6. Conditional Coverage Rates of Bonferroni Prediction Intervals: AR Models With Time Trend, Normal Innovations, Nominal Coverage Rate of at Least 95%. The data-generation process is Yt = v + δt + A1Yt-1 + ut, with vech(Σu) = (1, .3, 1)' and δ = 0. For all models M2 to M4, v = (1, 1)'.

close to the unit circle, with their moduli all less than 1.1. Tests for normality of innovations (Lütkepohl 1991; Kilian and Demiroglu 2000) are conducted. The test statistics for skewness and kurtosis, respectively, yield 7.57 and 18.77, both asymptotically following a χ²(5) distribution. However, Kilian and Demiroglu (2000) found that asymptotic critical values can be highly misleading and proposed the use of bootstrap critical values. These are calculated as suggested by Kilian and Demiroglu (2000) and found to be 5.95 and 51.81, respectively, for skewness and kurtosis, at the 5% level of significance. They are drastically different from the asymptotic critical value of 11.07, as expected from the simulation results of Kilian and Demiroglu (2000) for large VAR models. The inference based on the bootstrap critical values indicates that

there is evidence for the presence of asymmetric innovations. This is also supported by visual inspection of other descriptive measures. A simple Lagrange multiplier (LM) test for ARCH errors (Franses 1998, p. 165) has also found evidence for ARCH(1) errors in the residuals of the unemployment rate, real business fixed investment, and money supply. The previous section found that bootstrap intervals, especially those based on the percentile method, are reasonably robust to this type of nonnormality. API's, BPI's, and BBPI's are calculated for the nominal coverage rate of at least 95% for joint Bonferroni prediction intervals. For BBPI's, the numbers of bootstrap replications for stages 1 and 2 are set, respectively, to 500 and 4,999. Although the details are not given here, the results are found

Figure 7. Conditional Coverage Rates for Bonferroni Prediction Intervals for Five-Dimensional VAR(6) Model. Estimated coefficients from Simkins (1995) are used as a data-generation process. n = 100 with normal innovations, and the nominal coverage rate for Bonferroni prediction intervals is 95%.

to be compatible with those of the Monte Carlo simulations conducted in this article. That is, API's and BPI's behave similarly, and they are narrower than BBPI's for all cases. BBPIp's and BBPIpt's are also found to be similar for all variables. Another feature worth mentioning is that prediction intervals for all variables in the system can be very wide, especially for h > 4, indicating a high degree of uncertainty involved. This suggests that evaluation of point forecasts only may provide misleading interpretation of future uncertainty. In addition, a simulation experiment is conducted using the estimated coefficients of the preceding VAR model as the data-generating process when the sample size is 100. The model contains a nonzero intercept vector as estimated from the preceding example. Normal innovations are generated treating the estimated Σu as the true variance-covariance matrix of innovations. Other design settings are the same as those summarized in Section 2. The results are presented in Figure 7, where the conditional coverage rates of asymptotic and bootstrap Bonferroni prediction intervals are plotted. API substantially underestimates 95% even when h = 1, and the degree of underestimation sharply increases as h increases. BPI's exhibit a similar pattern, but with far less underestimation than API. The desirable properties of BBPI's are again evident, providing conditional coverage rates fairly close to 95% for all h values. Simulations conducted with the value of the nominal coverage rate (1 - α) set to .75 yield qualitatively similar results. Although a pilot study, the simulation presented in this section demonstrates that the results obtained in the bivariate AR case are highly suggestive of the case of AR models with higher order and dimension. It seems that serious underestimation of future uncertainty by API is more pronounced in a large VAR system with a rich lag structure widely used in practice.
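The two-stage procedure evaluated in these experiments can be sketched in a minimal univariate form. This is an illustrative reconstruction, not the article's code: the article resamples a backward AR model so that forecasts are conditional on the last p observations and uses 500 and 4,999 replications for a five-variable VAR(6), whereas this sketch uses forward residual resampling for an AR(1) with smaller replication counts; the function names, the stationarity guard, and the intercept re-centering are assumptions. The Bonferroni allocation splits α equally across the h horizons (and would also split across the K variables in the VAR case), with percentile-method endpoints.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_ar1(y):
    # LS estimates of (intercept, slope) for y_t = c + a*y_{t-1} + u_t.
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ coef
    return coef, resid

def bias_corrected_ar1(y, B1=200):
    # Stage 1: estimate the small-sample bias of the slope by bootstrap,
    # then subtract it (the bootstrap-after-bootstrap idea, sketched).
    (c, a), resid = ols_ar1(y)
    slopes = np.empty(B1)
    for b in range(B1):
        u = rng.choice(resid, size=len(y))
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = c + a * yb[t - 1] + u[t]
        slopes[b] = ols_ar1(yb)[0][1]
    a_c = a - (slopes.mean() - a)   # bias-corrected slope
    a_c = min(a_c, 1.0)             # crude stationarity guard (assumption)
    c_c = y.mean() * (1 - a_c)      # re-center intercept (assumption)
    resid_c = y[1:] - c_c - a_c * y[:-1]
    return c_c, a_c, resid_c

def bb_prediction_intervals(y, h=4, B2=999, alpha=0.05):
    # Stage 2: simulate h-step forecast paths from the bias-corrected model,
    # conditional on the last observation, and take percentile intervals
    # with a Bonferroni split of alpha across the h horizons.
    c_c, a_c, resid_c = bias_corrected_ar1(y)
    paths = np.empty((B2, h))
    for b in range(B2):
        u = rng.choice(resid_c, size=h)
        prev = y[-1]
        for j in range(h):
            prev = c_c + a_c * prev + u[j]
            paths[b, j] = prev
    a_each = alpha / h  # Bonferroni allocation across horizons
    lo = np.quantile(paths, a_each / 2, axis=0)
    hi = np.quantile(paths, 1 - a_each / 2, axis=0)
    return lo, hi

# Simulated AR(1) data, then joint >= 95% Bonferroni intervals for h = 1..4.
y = np.empty(100)
y[0] = 0.0
for t in range(1, 100):
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()
lo, hi = bb_prediction_intervals(y)
print(np.all(lo < hi))
```

A percentile-t variant would studentize each horizon's bootstrap forecasts before taking quantiles; the percentile form is kept here because the article finds it preferable under asymmetric innovations.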
The use of the bootstrap-after-bootstrap is strongly recommended when interval forecasting is conducted with a large VAR system.

5. CONCLUDING REMARKS

This article proposes the use of prediction intervals based on the bootstrap-after-bootstrap for AR models. Simulation results suggest that bootstrap-after-bootstrap Bonferroni prediction intervals perform substantially better than their asymptotic and standard bootstrap alternatives in most cases, especially when the sample size is small or moderate. The performances of API's and BPI's deteriorate substantially for models with near unit roots, but bootstrap-after-bootstrap prediction intervals seem robust. Asymptotic and standard bootstrap prediction intervals tend to be too narrow compared to BBPI's, substantially underestimating the nominal coverage rate. BBPI's also tend to perform satisfactorily for integrated and cointegrated AR processes under both normal and nonnormal innovations. Hence, the use of bootstrap-after-bootstrap is strongly recommended for AR forecasting of economic and business time series in practice.

In general, BBPI's based on the percentile-t method perform better than those based on the percentile method. But the latter can provide a superior alternative to the former when the model is near or exactly unit-root nonstationary. Simulation results suggest that BBPI's are reasonably robust to innovations generated with a fat-tailed distribution or conditional heteroscedasticity. When innovations are generated from an asymmetric distribution, however, bootstrap prediction intervals based on the percentile method should be favored over those based on the percentile-t method.

This article examined small-sample properties of alternative Bonferroni prediction intervals for AR models under the ideal condition where the AR order is known and there is no model misspecification. Future research efforts should be directed toward investigation of the effect of the unknown AR order and model misspecification (see Kilian 1998c). The use of pre-test information related to the presence of unit roots and cointegrating restrictions can affect the small-sample performance of prediction intervals (Diebold and Kilian 2000) and is a subject of future research. Given that the assumption of normality is essential for the bootstrap-after-bootstrap employed in this article, an interesting extension in future research is the imposition of normality when the residuals from the backward AR model are resampled. Another possibility for AR models with nonnormal innovations is the use of resampling based on the block bootstrap instead of backward AR models.

ACKNOWLEDGMENTS

I thank the editor and anonymous referees for their constructive and detailed comments that greatly improved the article in many ways. All remaining errors are mine. An earlier version of this article was presented at the 1999 Australasian Meetings of the Econometric Society held in Sydney, Australia. This research was partly supported by a grant from James Cook University.

APPENDIX: PROOF OF THEOREM

The forward AR model (1) can be written as Wt = ΠWt-1 + Ut, where Wt = (Yt', ..., Y't-p+1)' and Ut = (ut', 0', ..., 0')' are Kp × 1 vectors and Π is a Kp × Kp matrix of the form

Π = [ A ]
    [ I 0 ],

where A = [A1, ..., Ap], I is a K(p - 1) identity matrix, and 0 is a K(p - 1) × K null matrix. Similarly, the backward model (2) can be expressed as Wt = ΩWt+1 + Vt, where Vt = (vt', 0', ..., 0')' is a Kp × 1 vector and Ω is a Kp × Kp matrix of the form

Ω = [ H ]
    [ I 0 ],

with H = [H1, ..., Hp]. Note that Π and Ω are related as Ω = FΠ'F⁻¹, where F = E(WtWt'). The LS estimators for Π and Ω can be written as

Π̂ = (Σ WtW't-1)(Σ Wt-1W't-1)⁻¹ = Π + ωnFn⁻¹,
Ω̂ = (Σ WtW't+1)(Σ Wt+1W't+1)⁻¹ = Ω + δnDn⁻¹,

where ωn = (1/n) Σ UtW't-1, Fn = (1/n) Σ Wt-1W't-1, δn = (1/n) Σ VtW't+1, and Dn = (1/n) Σ Wt+1W't+1. The usual asymptotic theory applies so that plim(ωn) = 0, plim(Fn) = F, plim(δn) = 0, and plim(Dn) = F, where "plim" indicates the probability limit, implying the consistency of the LS estimators.

The bootstrap LS estimators based on residual resampling can be expressed as

Π* = Π̂ + ωn*Fn*⁻¹,  Ω* = Ω̂ + δn*Dn*⁻¹,

where ωn*, Fn*, δn*, and Dn* are the bootstrap counterparts of ωn, Fn, δn, and Dn. Theorem 4 of Freedman (1984) indicates that, as n increases, Fn* and Dn* converge to F in conditional probability. Moreover, the conditional law of n^(1/2)ωn* has the same limit as the unconditional law of n^(1/2)ωn, and that of n^(1/2)δn* has the same limit as the unconditional law of n^(1/2)δn. This means that A* and H*, respectively, converge to A and H in conditional probability. Because the bias correction is negligible in large samples, the same is true for A*c and H*c. In a similar way, it can be shown that Âc converges to A in conditional probability.

Since A*c converges to A in conditional probability and u*n+h converges to un+h in distribution by theorem 4 of Freedman (1984) as the sample size increases, it follows that Yn*(h) = A*1c Yn*(h - 1) + ... + A*pc Yn*(h - p) + u*n+h converges to Yn+h in distribution for all h.

[Received December 1998. Revised April 2000.]

REFERENCES

Bessler, D. A., and Babula, R. A. (1987), "Forecasting Wheat Exports: Do Exchange Rates Matter?" Journal of Business & Economic Statistics, 5, 397-406.
Booth, J. G., and Hall, P. (1994), "Monte Carlo Approximation and the Iterated Bootstrap," Biometrika, 81, 331-340.
Breidt, F. J., Davis, R. A., and Dunsmuir, W. T. M. (1992), "On Backcasting in Linear Time Series Models," in New Directions in Time Series Analysis Part I, eds. D. Brillinger et al., New York: Springer-Verlag, pp. 25-40.
--- (1995), "Improved Bootstrap Prediction Intervals for Autoregressions," Journal of Time Series Analysis, 16, 177-200.
Chatfield, C. (1993), "Calculating Interval Forecasts," Journal of Business & Economic Statistics, 11, 121-135.
Christoffersen, P. F. (1998), "Evaluating Interval Forecasts," International Economic Review, 39, 841-862.
Diebold, F. X., and Kilian, L. (2000), "Unit-Root Tests Are Useful for Selecting Forecasting Models," Journal of Business & Economic Statistics, 18, 265-273.
Efron, B., and Tibshirani, R. J. (1993), An Introduction to the Bootstrap, New York: Chapman & Hall.
Engle, R. F. (1982), "Autoregressive Conditional Heteroskedasticity With Estimates of the Variance of U.K. Inflation," Econometrica, 50, 987-1008.
Findley, D. F. (1986), "On Bootstrap Estimates of Forecast Mean Square Errors for Autoregressive Processes," in Computer Science and Statistics: The Interface, ed. D. M. Allen, Amsterdam: Elsevier Science, pp. 11-17.
Franses, P. H. (1998), Time Series Models for Business and Economic Forecasting, Cambridge, U.K.: Cambridge University Press.
Freedman, D. A. (1984), "On Bootstrapping Two-Stage Least-Squares Estimates in Stationary Linear Models," The Annals of Statistics, 12, 827-842.
Grigoletto, M. (1998), "Bootstrap Prediction Intervals for Autoregressions: Some Alternatives," International Journal of Forecasting, 14, 447-456.
Kabaila, P. (1993), "On Bootstrap Predictive Inference for Autoregressive Processes," Journal of Time Series Analysis, 14, 473-484.
Kilian, L. (1998a), "Small Sample Confidence Intervals for Impulse Response Functions," The Review of Economics and Statistics, 80, 218-230.
--- (1998b), "Confidence Intervals for Impulse Responses Under Departure From Normality," Econometric Reviews, 17, 1-29.
--- (1998c), "Accounting for Lag-Order Uncertainty in Autoregressions: The Endogenous Lag Order Bootstrap Algorithm," Journal of Time Series Analysis, 19, 531-548.
Kilian, L., and Demiroglu, U. (2000), "Residual-Based Bootstrap Tests for Normality in Autoregressions: Asymptotic Theory and Simulation Evidence," Journal of Business & Economic Statistics, 18, 40-50.
Kim, J. H. (1997), "Relationship Between the Forward and Backward Representations of the Stationary VAR Model, Problem 97.5.2," Econometric Theory, 13, 889-890.
--- (1998), "Relationship Between the Forward and Backward Representations of the Stationary VAR Model, Solution 97.5.2," Econometric Theory, 14, 691-693.
--- (1999), "Asymptotic and Bootstrap Prediction Regions for Vector Autoregression," International Journal of Forecasting, 15, 393-403.
Litterman, R. B. (1986), "Forecasting With Bayesian Vector Autoregressions-Five Years of Experience," Journal of Business & Economic Statistics, 4, 25-38.
Lütkepohl, H. (1991), Introduction to Multiple Time Series Analysis, Berlin: Springer-Verlag.
Maekawa, K. (1987), "Finite Sample Properties of Several Predictors From an Autoregressive Model," Econometric Theory, 3, 359-370.
Masarotto, G. (1990), "Bootstrap Prediction Intervals for Autoregression," International Journal of Forecasting, 6, 229-239.
McCullough, B. D. (1994), "Bootstrapping Forecast Intervals: An Application to AR(p) Models," Journal of Forecasting, 13, 51-66.
McNees, S. K. (1986), "Forecasting Accuracy of Alternative Techniques: A Comparison of U.S. Macroeconomic Forecasts," Journal of Business & Economic Statistics, 4, 5-15.
Nicholls, D. F., and Pope, A. L. (1988), "Bias in Estimation of Multivariate Autoregression," Australian Journal of Statistics, 30A, 296-309.
Pope, A. L. (1990), "Biases of Estimators in Multivariate Non-Gaussian Autoregressions," Journal of Time Series Analysis, 11, 249-258.
Rilstone, P., and Veall, M. (1996), "Using Bootstrapped Confidence Intervals for Improved Inferences With Seemingly Unrelated Regression Equations," Econometric Theory, 12, 569-580.
Shaman, P., and Stine, R. A. (1988), "The Bias of Autoregressive Coefficient Estimators," Journal of the American Statistical Association, 83, 842-848.
Simkins, S. (1995), "Forecasting With Vector Autoregressive (VAR) Models Subject to Business Cycle Restrictions," International Journal of Forecasting, 11, 569-583.
Sims, C. A. (1988), "Bayesian Skepticism on Unit Root Econometrics," Journal of Economic Dynamics and Control, 12, 463-474.
Stine, R. A. (1987), "Estimating Properties of Autoregressive Forecasts," Journal of the American Statistical Association, 82, 1072-1078.
Thombs, L. A., and Schucany, W. R. (1990), "Bootstrap Prediction Intervals for Autoregression," Journal of the American Statistical Association, 85, 486-492.
Tjostheim, D., and Paulsen, J. (1983), "Bias of Some Commonly-Used Time Series Estimates," Biometrika, 70, 389-399.
