Vector Autoregressions
John Stapleton
3.1 Introduction
3.2 Stationary Vector Autoregressions (VARs)
3.3 Estimating Stationary VARs
3.4 Forecasting Stationary VARs
3.5 Likelihood Ratio Tests in VARs
3.6 VAR Order Selection
3.6.1 Introduction
3.6.2 A sequential test for selecting the VAR order
3.6.3 Selecting the VAR order using the FPE
3.6.4 Selecting the VAR order using information criteria
3.7 Testing for Granger-causality
3.7.1 An alternative representation of VARs
3.7.2 Testing for Granger-causality using an LR test
3.7.3 Limitations of Granger-causality tests
\[
x_t = \phi_0 + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + e_t,
\]
where $x_t$ denotes the value of the exchange rate in period t, may not be a very good representation of the DGP for $x_t$, since the exchange rate depends on many other macroeconomic variables such as interest rates, inflation, the balance of payments etc.
Consequently, a multivariate framework which explicitly recognizes the inter-relationships between economic variables, as well as their dynamic character, may be required to adequately capture the underlying data generating process.
\[
\underset{(2\times 1)}{X_t} = \underset{(2\times 1)}{A_0} + \underset{(2\times 2)}{A_1}\,\underset{(2\times 1)}{X_{t-1}} + \underset{(2\times 2)}{A_2}\,\underset{(2\times 1)}{X_{t-2}} + \underset{(2\times 1)}{\varepsilon_t} \qquad (3.3)
\]
\[
\underset{(m\times 1)}{X_t} = \underset{(m\times 1)}{A_0} + \underset{(m\times m)}{A_1}\,\underset{(m\times 1)}{X_{t-1}} + \underset{(m\times m)}{A_2}\,\underset{(m\times 1)}{X_{t-2}} + \cdots + \underset{(m\times m)}{A_p}\,\underset{(m\times 1)}{X_{t-p}} + \underset{(m\times 1)}{\varepsilon_t}, \qquad (3.6)
\]
$4^2 = 16.$
If we increase the dimension of the VAR by one (from m to m+1), the number of parameters in the VAR will increase by
\[
p(m+1) + 1 + mp = 2mp + p + 1 = p(2m+1) + 1
\]
(the new equation contains $p(m+1)+1$ parameters, and each of the m original equations gains p new parameters). For example, with m = 4 and p = 3,
\[
p(2m+1) + 1 = 3(8+1) + 1 = 28.
\]
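The counting rule above is easy to verify numerically. The following is a minimal sketch (my own illustration, not part of the original notes; the function name n_params is hypothetical), using the fact that an m-variable VAR(p) has m(1 + mp) coefficients in total:

```python
# A minimal sketch (not from the notes) verifying the parameter-count algebra above.
# An m-variable VAR(p) has m*(1 + m*p) coefficients (one intercept plus p lag
# matrices per equation).
def n_params(m, p):
    return m * (1 + m * p)

m, p = 4, 3
increase = n_params(m + 1, p) - n_params(m, p)
print(increase)                 # 28
print(p * (2 * m + 1) + 1)      # 28, matches p(2m+1)+1
```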
then $x_t$ is stationary if:
The mean of $x_t$ is finite and time invariant, i.e.
\[
E(\underset{(m\times 1)}{x_t}) = \underset{(m\times 1)}{\mu} < \infty \;\;\forall t.
\]
The covariance matrix between $x_t$ and $x_{t-j}$ is finite and time invariant, i.e.
\[
\mathrm{Cov}(x_t, x_{t-j}) = E\left[(x_t - \mu)(x_{t-j} - \mu)'\right] = \underset{(m\times m)}{\Gamma_j} < \infty \;\;\forall t, j.
\]
For example, if m = 2, then
\[
\underset{(2\times 1)}{E(x_t)} = \begin{bmatrix} E(x_{1t}) \\ E(x_{2t}) \end{bmatrix},
\]
\[
\underset{(m\times 1)}{X_t} = \underset{(m\times 1)}{A_0} + \underset{(m\times m)}{A_1}\,\underset{(m\times 1)}{X_{t-1}} + \underset{(m\times m)}{A_2}\,\underset{(m\times 1)}{X_{t-2}} + \cdots + \underset{(m\times m)}{A_p}\,\underset{(m\times 1)}{X_{t-p}} + \underset{(m\times 1)}{\varepsilon_t}, \qquad (3.6)
\]
may be written as
\[
X_t - A_1 X_{t-1} - A_2 X_{t-2} - \cdots - A_p X_{t-p} = A_0 + \varepsilon_t
\]
or
\[
X_t - A_1 L X_t - A_2 L^2 X_t - \cdots - A_p L^p X_t = A_0 + \varepsilon_t
\]
or
\[
(I_m - A_1 L - A_2 L^2 - \cdots - A_p L^p) X_t = A_0 + \varepsilon_t
\]
or
\[
\Phi(L) X_t = A_0 + \varepsilon_t,
\]
where the roots $r_i$ of the VAR are the solutions of the determinantal equation
\[
|I_m - A_1 r - A_2 r^2 - \cdots - A_p r^p| = 0,
\]
in which $|\cdot|$ denotes the determinant of the $m\times m$ matrix $\Phi(r) = I_m - A_1 r - A_2 r^2 - \cdots - A_p r^p$.
Theorem 3.1. The VAR(p) process given by (3.6) is stationary if the roots of $\Phi(L)$ have modulus greater than one. That is,
\[
\mathrm{mod}(r_i) > 1 \;\;\forall i.
\]
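As an illustration of Theorem 3.1 (my own sketch, not from the notes): one standard way to check the condition numerically is to stack the VAR(p) into its companion form and verify that every eigenvalue of the companion matrix has modulus less than one, which is equivalent to every root of $\det(I_m - A_1 r - \cdots - A_p r^p) = 0$ having modulus greater than one. The coefficient matrix below is hypothetical.

```python
import numpy as np

def is_stationary(A_list):
    """A_list = [A1, ..., Ap], each an (m x m) coefficient matrix.
    Builds the (mp x mp) companion matrix and checks that every eigenvalue
    has modulus < 1 (equivalently, every root of det(I - A1 r - ... - Ap r^p) = 0
    has modulus > 1)."""
    m = A_list[0].shape[0]
    p = len(A_list)
    companion = np.zeros((m * p, m * p))
    companion[:m, :] = np.hstack(A_list)
    if p > 1:
        companion[m:, :-m] = np.eye(m * (p - 1))
    eigvals = np.linalg.eigvals(companion)
    return bool(np.all(np.abs(eigvals) < 1))

# Example: a hypothetical bivariate VAR(1), x_t = A1 x_{t-1} + e_t
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
print(is_stationary([A1]))   # True: both eigenvalues lie inside the unit circle
```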
Theorem 3.2. Every stationary VAR(p) process has an infinite vector moving average representation given by
\[
\underset{(m\times 1)}{x_t} = \underset{(m\times 1)}{\mu} + \underset{(m\times 1)}{\varepsilon_t} + \underset{(m\times m)}{\Psi_1}\,\underset{(m\times 1)}{\varepsilon_{t-1}} + \underset{(m\times m)}{\Psi_2}\,\underset{(m\times 1)}{\varepsilon_{t-2}} + \cdots
\]
where
\[
\mu = (I_m - A_1 - A_2 - \cdots - A_p)^{-1} A_0
\]
and
\[
\underset{(m\times 1)}{\varepsilon_t} \sim (0, \Omega) \;\;\forall t \qquad \text{and} \qquad E(\varepsilon_t \varepsilon_{t+s}') = 0 \;\;\forall s \neq 0,
\]
i)
\[
E(X_t) = E(\mu + \varepsilon_t + \Psi_1 \varepsilon_{t-1} + \Psi_2 \varepsilon_{t-2} + \cdots) = \mu
\]
ii)
\[
\mathrm{Cov}(X_t) = \Gamma_x = \sum_{j=0}^{\infty} \Psi_j\, \Omega\, \Psi_j'
\]
iii)
\[
\mathrm{Cov}(X_t, X_{t-j}) = \Gamma_j = \sum_{s=0}^{\infty} \Psi_{j+s}\, \Omega\, \Psi_s'.
\]
\[
\frac{\hat{\phi}_{12,1} - \phi_{12,1}}{SE(\hat{\phi}_{12,1})} \;\xrightarrow{d}\; N(0,1), \qquad (3.10)
\]
where $\hat{\phi}_{12,1}$ is the OLS estimator of $\phi_{12,1}$ and $SE(\hat{\phi}_{12,1})$ is its standard error. Consequently, we can test the individual significance of the regressors in (3.1) and (3.2) by using the usual test statistic and taking the critical value for the test from the standard normal table.
We can test a set of linear restrictions on one or both of the equations
by performing a likelihood ratio test.
However, these tests are valid only asymptotically and may give misleading results in small finite samples.
Note that autocorrelation in the error term in the ith equation is
potentially a serious problem since, depending on the order of the
autocorrelation, it can render the OLS estimator inconsistent.
Recall that
\[
\mathrm{cov}(\varepsilon_t) = \underset{(m\times m)}{\Omega} =
\begin{bmatrix}
\sigma_1^2 & \sigma_{12} & \sigma_{13} & \cdots & \sigma_{1m} \\
\sigma_{21} & \sigma_2^2 & \sigma_{23} & \cdots & \sigma_{2m} \\
\vdots & & & \ddots & \vdots \\
\sigma_{m1} & \sigma_{m2} & \cdots & \cdots & \sigma_m^2
\end{bmatrix}
\;\;\forall t, \qquad (3.7)
\]
Let $\hat{\varepsilon}_i$ denote the vector of residuals obtained when the ith equation in the VAR(p) is estimated by OLS. That is,
\[
\underset{(T\times 1)}{\hat{\varepsilon}_i} = \left(\hat{\varepsilon}_{i1}, \hat{\varepsilon}_{i2}, \ldots, \hat{\varepsilon}_{iT}\right)'.
\]
where
\[
\hat{\sigma}_i^2 = \frac{\hat{\varepsilon}_i'\hat{\varepsilon}_i}{(T - mp - 1)}, \qquad (3.12)
\]
\[
\hat{\sigma}_{ij} = \frac{\hat{\varepsilon}_i'\hat{\varepsilon}_j}{(T - mp - 1)}. \qquad (3.13)
\]
3.3 Estimating Stationary VARs VII
For example,
\[
\hat{\sigma}_{12} = \frac{\hat{\varepsilon}_1'\hat{\varepsilon}_2}{(T - mp - 1)}.
\]
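A minimal sketch of the estimators (3.12)–(3.13) (my own illustration; the residuals below are simulated rather than taken from an estimated VAR):

```python
import numpy as np

def residual_covariances(resids, m, p):
    """resids: T x m array whose i-th column holds the OLS residuals of
    equation i in the VAR(p). Returns the matrix of sigma_hat_{ij}
    computed as resid_i' resid_j / (T - m*p - 1), as in (3.12)-(3.13)."""
    T = resids.shape[0]
    dof = T - m * p - 1
    return resids.T @ resids / dof

# Hypothetical residuals from a 2-variable VAR(2) estimated on T = 100 observations
rng = np.random.default_rng(0)
e_hat = rng.standard_normal((100, 2))
Omega_hat = residual_covariances(e_hat, m=2, p=2)
print(Omega_hat[0, 1])   # sigma_hat_12 = e1' e2 / (T - mp - 1)
```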
\[
\underset{(m\times 1)}{X_t} = \underset{(m\times 1)}{A_0} + \underset{(m\times m)}{A_1}\,\underset{(m\times 1)}{X_{t-1}} + \cdots + \underset{(m\times m)}{A_p}\,\underset{(m\times 1)}{X_{t-p}} + \underset{(m\times 1)}{\varepsilon_t}, \qquad (3.6)
\]
\[
\underset{(m\times 1)}{X_{T+1}} = A_0 + A_1 X_T + A_2 X_{T-1} + \cdots + A_p X_{T-p+1} + \varepsilon_{T+1}. \qquad (3.15)
\]
3.4 Forecasting a Stationary VAR(p) II
Then
\[
X_{T+2} = A_0 + A_1 X_{T+1} + A_2 X_T + \cdots + A_p X_{T-p+2} + \varepsilon_{T+2},
\]
and therefore
\[
\begin{aligned}
P_{T+2|T} = E_T(X_{T+2}) &= A_0 + A_1 E_T(X_{T+1}) + A_2 X_T + \cdots + A_p X_{T-p+2} \\
&= A_0 + A_1 P_{T+1|T} + A_2 X_T + \cdots + A_p X_{T-p+2}. \qquad (3.16a)
\end{aligned}
\]
In general, the forecast function for the VAR(p) will also be a VAR(p). That is,
\[
P_{T+h|T} = A_0 + A_1 P_{T+h-1|T} + A_2 P_{T+h-2|T} + \cdots + A_p P_{T+h-p|T},
\]
where $P_{T+j|T} = X_{T+j}$ for $j \le 0$.
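A short sketch of this forecasting recursion (my own illustration; the coefficient values and the function name var_forecast are hypothetical):

```python
import numpy as np

def var_forecast(A0, A_list, X_hist, h):
    """Iterated h-step forecasts for X_t = A0 + A1 X_{t-1} + ... + Ap X_{t-p} + e_t.
    A0: (m,) intercept; A_list: [A1, ..., Ap], each (m x m);
    X_hist: the p most recent observations, ordered [X_T, X_{T-1}, ..., X_{T-p+1}]."""
    hist = [x.copy() for x in X_hist]
    forecasts = []
    for _ in range(h):
        x_next = A0 + sum(A @ x for A, x in zip(A_list, hist))
        forecasts.append(x_next)
        hist = [x_next] + hist[:-1]     # the forecast replaces the oldest lag
    return forecasts

# Hypothetical bivariate VAR(2)
A0 = np.array([0.1, 0.2])
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
A2 = np.array([[0.1, 0.0], [0.0, 0.1]])
X_T, X_Tm1 = np.array([1.0, 0.5]), np.array([0.8, 0.4])
print(var_forecast(A0, [A1, A2], [X_T, X_Tm1], h=2))  # P_{T+1|T}, P_{T+2|T}
```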
\[
LR = 2(L_u - L_r) \;\overset{asy}{\sim}\; \chi^2(q),
\]
where:
LR is the likelihood ratio test statistic.
$L_u$ is the maximized log-likelihood of the unrestricted model.
$L_r$ is the maximized log-likelihood of the restricted model.
q is the number of restrictions imposed under the null hypothesis.
The likelihood ratio test statistic for testing exclusion restrictions may also be written as
\[
LR = T\left[\log|\hat{\Omega}_R| - \log|\hat{\Omega}_u|\right] \;\overset{asy}{\sim}\; \chi^2(q), \qquad (3.20)
\]
where $\hat{\Omega}_R$ and $\hat{\Omega}_u$ are the estimated error covariance matrices of the restricted and unrestricted models respectively. A degrees-of-freedom corrected version of the statistic replaces T with (T − k), where
\[
k = (1 + mp)
\]
is the number of regressors per equation in the unrestricted model.
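A small sketch of the statistic in (3.20), together with the (T − k) corrected version (my own illustration; the covariance matrices below are hypothetical):

```python
import numpy as np
from scipy.stats import chi2

def lr_exclusion_test(Omega_r, Omega_u, T, q, k=None):
    """LR = T [ln|Omega_r| - ln|Omega_u|], asymptotically chi-square(q).
    If k is supplied, the small-sample corrected factor (T - k) is used instead."""
    scale = T - k if k is not None else T
    lr = scale * (np.log(np.linalg.det(Omega_r)) - np.log(np.linalg.det(Omega_u)))
    return lr, 1 - chi2.cdf(lr, q)

# Hypothetical restricted/unrestricted residual covariance matrices
Omega_u = np.array([[1.00, 0.30], [0.30, 0.80]])
Omega_r = np.array([[1.10, 0.35], [0.35, 0.90]])
print(lr_exclusion_test(Omega_r, Omega_u, T=200, q=4, k=1 + 2 * 3))
```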
$A_p \neq 0$ and $A_i = 0 \;\;\forall\, i > p.$
\[
H_0: A_p = 0 \qquad (3.22)
\]
\[
H_1: A_p \neq 0.
\]
\[
X_t = A_0 + A_1 X_{t-1} + \cdots + A_{p-2} X_{t-(p-2)} + A_{p-1} X_{t-(p-1)} + A_p X_{t-p} + \varepsilon_t, \qquad (3.23)
\]
and the restricted model is given by
\[
X_t = A_0 + A_1 X_{t-1} + \cdots + A_{p-2} X_{t-(p-2)} + A_{p-1} X_{t-(p-1)} + \varepsilon_t. \qquad (3.24)
\]
Notice that (3.22) requires that every element of the $m\times m$ matrix $A_p$ is zero ($A_p$ is a null matrix). Therefore (3.22) imposes $m^2$ linear restrictions. Since the null hypothesis imposes $m^2$ linear restrictions on the coefficients of (3.23), under the null hypothesis
\[
LR = 2(L_u - L_r) \;\overset{asy}{\sim}\; \chi^2(m^2),
\]
\[
LR = (T - k)\left[\log|\hat{\Omega}_R| - \log|\hat{\Omega}_u|\right] \;\overset{asy}{\sim}\; \chi^2(m^2).
\]
S5 Test
\[
H_0: A_{p-1} = 0 \qquad (3.25)
\]
\[
H_1: A_{p-1} \neq 0.
\]
The unrestricted model is
\[
X_t = A_0 + A_1 X_{t-1} + \cdots + A_{p-2} X_{t-(p-2)} + A_{p-1} X_{t-(p-1)} + \varepsilon_t, \qquad (3.26)
\]
and the restricted model is
\[
X_t = A_0 + A_1 X_{t-1} + \cdots + A_{p-2} X_{t-(p-2)} + \varepsilon_t. \qquad (3.27)
\]
If
LRcalc > LRcrit
we reject the null and conclude that the VAR order is p-1.
3.6 VAR Order Selection V
3.6.2 A sequential test for selecting the VAR order
If
\[
LR_{calc} < LR_{crit}
\]
we do not reject the null and proceed to test
\[
H_0: A_{p-2} = 0
\]
\[
H_1: A_{p-2} \neq 0.
\]
Information criteria are often used for VAR order selection. The two
most popular information criteria are
\[
AIC(p) = \ln\left[\det \hat{\Omega}(p)\right] + (pm^2)\frac{2}{T}. \qquad (3.29)
\]
\[
BIC(p) = \ln\left[\det \hat{\Omega}(p)\right] + (pm^2)\frac{\ln T}{T}. \qquad (3.30)
\]
The AIC and BIC (referred to in EViews as the Schwarz criterion) attempt to achieve a compromise between choosing a model which fits the data well and one which is parsimonious (i.e. does not have too many parameters to estimate).
Goodness of fit is measured by
\[
\ln\left[\det \hat{\Omega}(p)\right], \qquad (3.31)
\]
Since ln(T) > 2 for T ≥ 8, the BIC penalizes additional lags more severely than the AIC, thereby encouraging the selection of a more parsimonious model.
Because it penalizes additional lags more severely than the AIC does, the BIC will never select a larger value of p than that selected by the AIC.
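A minimal sketch of (3.29)–(3.30) (my own illustration; the candidate covariance matrices are hypothetical):

```python
import numpy as np

def aic(Omega_hat, p, m, T):
    return np.log(np.linalg.det(Omega_hat)) + p * m**2 * 2 / T          # (3.29)

def bic(Omega_hat, p, m, T):
    return np.log(np.linalg.det(Omega_hat)) + p * m**2 * np.log(T) / T  # (3.30)

# Hypothetical estimated error covariance matrices for p = 1, 2, 3 (m = 2, T = 200)
candidates = {1: np.diag([1.00, 0.90]),
              2: np.diag([0.95, 0.88]),
              3: np.diag([0.94, 0.87])}
for p, Om in candidates.items():
    print(p, round(aic(Om, p, 2, 200), 4), round(bic(Om, p, 2, 200), 4))
# Select the p that minimizes the criterion; BIC penalizes extra lags more heavily.
```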
After estimating the selected VAR one can perform various diagnostic
tests to determine whether or not the error term in each equation is
"well behaved".
One can test the hypothesis that the error term is normally
distributed in each equation in the VAR.
One can test for evidence of autocorrelation in the error term in each
equation in the VAR.
These tests are automated in Eviews.
\[
X_t = A_0 + A_1 X_{t-1} + A_2 X_{t-2} + \cdots + A_{p-1} X_{t-(p-1)} + A_p X_{t-p} + \varepsilon_t, \qquad (3.32)
\]
where $X_t$, $A_0$ and $\varepsilon_t$ are $m\times 1$ column vectors and $A_j$ is an $m\times m$ matrix for all j = 1, 2, ..., p. Using summation notation, equation (3.32) may be written more compactly as
\[
X_t = A_0 + \sum_{i=1}^{p} A_i X_{t-i} + \varepsilon_t. \qquad (3.33)
\]
\[
X_t = A_0 + A_1 X_{t-1} + A_2 X_{t-2} + \cdots + A_{p-1} X_{t-(p-1)} + A_p X_{t-p} + \varepsilon_t, \qquad (3.32)
\]
as
\[
\begin{aligned}
x_{1t} &= \phi_{10} + \sum_{j=1}^{p}\phi_{11,j}\, x_{1,t-j} + \sum_{j=1}^{p}\phi_{12,j}\, x_{2,t-j} + \cdots + \sum_{j=1}^{p}\phi_{1m,j}\, x_{m,t-j} + \varepsilon_{1t} \\
x_{2t} &= \phi_{20} + \sum_{j=1}^{p}\phi_{21,j}\, x_{1,t-j} + \sum_{j=1}^{p}\phi_{22,j}\, x_{2,t-j} + \cdots + \sum_{j=1}^{p}\phi_{2m,j}\, x_{m,t-j} + \varepsilon_{2t} \\
&\;\;\vdots \\
x_{mt} &= \phi_{m0} + \sum_{j=1}^{p}\phi_{m1,j}\, x_{1,t-j} + \sum_{j=1}^{p}\phi_{m2,j}\, x_{2,t-j} + \cdots + \sum_{j=1}^{p}\phi_{mm,j}\, x_{m,t-j} + \varepsilon_{mt}
\end{aligned}
\qquad (3.34)
\]
3.7 Testing for Granger-causality I
3.7.2 Testing for Granger-causality using a likelihood ratio test
Definition 3.2. Let $I_t$ denote the information set containing all the relevant information in the universe up to and including period t. Let
\[
\bar{I}_t = I_t - \{x_{2s} \,|\, s \le t\}
\]
denote the information set containing all relevant information in the universe excluding past and present values of $x_2$. Then we say that $x_2$ Granger-causes $x_1$ if
\[
mse(\hat{x}_{1,t+k}\,|\,I_t) < mse(\hat{x}_{1,t+k}\,|\,\bar{I}_t)
\]
for at least one k = 1, 2, ..., where $mse(\hat{x}_{1,t+k}\,|\,I_t)$ and $mse(\hat{x}_{1,t+k}\,|\,\bar{I}_t)$ denote the mean squared errors of the optimal forecasts of $x_{1,t+k}$ based on $I_t$ and $\bar{I}_t$ respectively.
\[
x_{1t} = \phi_{10} + \phi_{11,1}\, x_{1,t-1} + \phi_{12,1}\, x_{2,t-1} + \phi_{11,2}\, x_{1,t-2} + \phi_{12,2}\, x_{2,t-2} + \varepsilon_{1t} \qquad (3.1)
\]
\[
H_0: \phi_{12,1} = \phi_{12,2} = 0
\]
against
\[
H_1: \phi_{12,j} \neq 0 \;\text{for at least one value of } j.
\]
Under the null hypothesis,
\[
LR = 2(L_u - L_r) \;\overset{asy}{\sim}\; \chi^2(2).
\]
S1 Estimate
\[
x_{1t} = \phi_{10} + \phi_{11,1}\, x_{1,t-1} + \phi_{12,1}\, x_{2,t-1} + \phi_{11,2}\, x_{1,t-2} + \phi_{12,2}\, x_{2,t-2} + \varepsilon_{1t} \qquad (3.1)
\]
Note:
The null hypothesis for the test is that $x_2$ does not Granger-cause $x_1$. That is, the null hypothesis is that there is no Granger causality.
If we reject the null, we conclude that x2 does Granger-cause x1 .
The test is an asymptotic test, since only the asymptotic distribution of
the test statistic is known. Consequently, the test may be unreliable in
small samples.
To test the null hypothesis that $x_1$ does not Granger-cause $x_2$, we test the null hypothesis
\[
\phi_{21,1} = \phi_{21,2} = 0.
\]
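The following is a rough sketch of the LR test for Granger-causality in a bivariate VAR(p) (my own implementation with simulated data, not the EViews procedure used in the notes). It estimates each equation by OLS and compares the log-determinants of the restricted and unrestricted residual covariance matrices, as in (3.20):

```python
import numpy as np
from scipy.stats import chi2

def lagmat(y, p):
    """Stack [y_{t-1}, ..., y_{t-p}] row-wise for t = p, ..., T-1; y is (T, m)."""
    T = y.shape[0]
    return np.hstack([y[p - j:T - j] for j in range(1, p + 1)])

def granger_lr(y, caused=0, p=2):
    """LR test of the null that the *other* variable does not Granger-cause
    column `caused` in a bivariate VAR(p):
    LR = T [ln|Omega_r| - ln|Omega_u|]  ~asy~  chi-square(p).
    (Simple equation-by-equation OLS approximation to the likelihoods.)"""
    T_eff = y.shape[0] - p
    Y = y[p:]
    X_u = np.hstack([np.ones((T_eff, 1)), lagmat(y, p)])
    own = [0] + [1 + 2 * j + caused for j in range(p)]   # constant + own lags only
    X_r = X_u[:, own]

    def resid(X, yy):
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        return yy - X @ beta

    E_u = np.column_stack([resid(X_u, Y[:, 0]), resid(X_u, Y[:, 1])])
    E_r = E_u.copy()
    E_r[:, caused] = resid(X_r, Y[:, caused])            # restricted `caused` equation
    Om_u = E_u.T @ E_u / T_eff
    Om_r = E_r.T @ E_r / T_eff
    lr = T_eff * (np.log(np.linalg.det(Om_r)) - np.log(np.linalg.det(Om_u)))
    return lr, 1 - chi2.cdf(lr, p)

# Hypothetical data in which x2 leads x1, so "x2 does not cause x1" should be rejected
rng = np.random.default_rng(1)
x2 = rng.standard_normal(300)
x1 = 0.5 * np.roll(x2, 1) + 0.3 * rng.standard_normal(300)
data = np.column_stack([x1, x2])
print(granger_lr(data, caused=0, p=2))   # small p-value expected
print(granger_lr(data, caused=1, p=2))   # large p-value expected
```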
\[
\begin{aligned}
x_{1t} &= \phi_{10} + \sum_{j=1}^{p}\phi_{11,j}\, x_{1,t-j} + \sum_{j=1}^{p}\phi_{12,j}\, x_{2,t-j} + \sum_{j=1}^{p}\phi_{13,j}\, x_{3,t-j} + \sum_{j=1}^{p}\phi_{14,j}\, x_{4,t-j} + \varepsilon_{1t} \\
x_{2t} &= \phi_{20} + \sum_{j=1}^{p}\phi_{21,j}\, x_{1,t-j} + \sum_{j=1}^{p}\phi_{22,j}\, x_{2,t-j} + \sum_{j=1}^{p}\phi_{23,j}\, x_{3,t-j} + \sum_{j=1}^{p}\phi_{24,j}\, x_{4,t-j} + \varepsilon_{2t} \\
x_{3t} &= \phi_{30} + \sum_{j=1}^{p}\phi_{31,j}\, x_{1,t-j} + \sum_{j=1}^{p}\phi_{32,j}\, x_{2,t-j} + \sum_{j=1}^{p}\phi_{33,j}\, x_{3,t-j} + \sum_{j=1}^{p}\phi_{34,j}\, x_{4,t-j} + \varepsilon_{3t} \\
x_{4t} &= \phi_{40} + \sum_{j=1}^{p}\phi_{41,j}\, x_{1,t-j} + \sum_{j=1}^{p}\phi_{42,j}\, x_{2,t-j} + \sum_{j=1}^{p}\phi_{43,j}\, x_{3,t-j} + \sum_{j=1}^{p}\phi_{44,j}\, x_{4,t-j} + \varepsilon_{4t}
\end{aligned}
\]
3.7 Testing for Granger-causality I
3.7.2 Testing for Granger-causality using a likelihood ratio test
(3.39)
(3.40)
We perform the test by executing the following steps:
S1 Estimate (3.39) and obtain $L_u$.
S2 Estimate (3.40) and obtain $L_r$.
The VAR orders selected by the various VAR order selection criteria that we have discussed are reported in the table below.
We next test the null hypothesis that dTB does not Granger-cause
dR3 or dR10.
The null hypothesis is
\[
\phi_{21,j} = 0, \;\; j = 1, 2, \ldots, 7
\qquad\text{and}\qquad
\phi_{31,j} = 0, \;\; j = 1, 2, \ldots, 7. \qquad (3.41)
\]
\[
LR_{crit} = 23.68.
\]
Since
\[
LR_{calc} > LR_{crit}
\]
we reject the null hypothesis and conclude that dTB does Granger-cause dR3 and/or dR10.
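As a quick check (my own, using scipy), the 5% critical value of a chi-square distribution with 2 × 7 = 14 degrees of freedom is indeed approximately 23.68:

```python
from scipy.stats import chi2

# 5% critical value for chi-square with 14 degrees of freedom (2 x 7 restrictions)
print(round(chi2.ppf(0.95, df=14), 2))   # 23.68
```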
\[
\underset{(m\times 1)}{X_t} = \underset{(m\times 1)}{A_0} + \underset{(m\times m)}{A_1}\,\underset{(m\times 1)}{X_{t-1}} + \underset{(m\times m)}{A_2}\,\underset{(m\times 1)}{X_{t-2}} + \cdots + \underset{(m\times m)}{A_p}\,\underset{(m\times 1)}{X_{t-p}} + \underset{(m\times 1)}{\varepsilon_t} \qquad (3.6)
\]
\[
\begin{bmatrix} 1 & a_1 \\ b_1 & 1 \end{bmatrix}
\begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix}
=
\begin{bmatrix} a_0 \\ b_0 \end{bmatrix}
+
\begin{bmatrix} a_2 & a_3 \\ b_2 & b_3 \end{bmatrix}
\begin{bmatrix} x_{1,t-1} \\ x_{2,t-1} \end{bmatrix}
+
\begin{bmatrix} a_4 & a_5 \\ b_4 & b_5 \end{bmatrix}
\begin{bmatrix} x_{1,t-2} \\ x_{2,t-2} \end{bmatrix}
+
\begin{bmatrix} u_{1t} \\ u_{2t} \end{bmatrix}. \qquad (3.44)
\]
3.9. Structural versus reduced form VARs IV
\[
\underset{(2\times 2)}{S} = \begin{bmatrix} 1 & a_1 \\ b_1 & 1 \end{bmatrix}, \quad
\underset{(2\times 1)}{S_0} = \begin{bmatrix} a_0 \\ b_0 \end{bmatrix}, \quad
\underset{(2\times 2)}{S_1} = \begin{bmatrix} a_2 & a_3 \\ b_2 & b_3 \end{bmatrix}, \quad
\underset{(2\times 2)}{S_2} = \begin{bmatrix} a_4 & a_5 \\ b_4 & b_5 \end{bmatrix}.
\]
3.9. Structural versus reduced form VARs V
\[
S X_t = S_0 + S_1 X_{t-1} + S_2 X_{t-2} + u_t, \qquad (3.45)
\]
(3.45) is an example of a SVAR. Because of the endogeneity bias problem alluded to above, (3.45) cannot be estimated consistently by OLS. Premultiplying (3.45) by $S^{-1}$ we obtain
\[
\underset{(2\times 1)}{X_t} = \underset{(2\times 1)}{A_0} + \underset{(2\times 2)}{A_1}\,\underset{(2\times 1)}{X_{t-1}} + \underset{(2\times 2)}{A_2}\,\underset{(2\times 1)}{X_{t-2}} + \underset{(2\times 1)}{\varepsilon_t}, \qquad (3.46)
\]
where
\[
A_0 = S^{-1} S_0, \quad A_1 = S^{-1} S_1, \quad A_2 = S^{-1} S_2, \quad \varepsilon_t = S^{-1} u_t. \qquad (3.47)
\]
Since
\[
E(\varepsilon_t \varepsilon_{t+s}') = 0 \;\;\forall\, s \neq 0,
\]
all the regressors on the right-hand side of (3.46) are exogenous, and (3.46) may be interpreted as the reduced form of the SVAR
\[
S X_t = S_0 + S_1 X_{t-1} + S_2 X_{t-2} + u_t. \qquad (3.45)
\]
Notice that the reduced form parameters and the reduced form errors are nonlinear functions of their structural counterparts.
Notice that even though there is no cross-equation correlation in the structural errors, that is,
\[
E(u_t u_t') = I_2,
\]
the reduced form errors will in general be contemporaneously correlated, since $E(\varepsilon_t \varepsilon_t') = S^{-1}(S^{-1})'$ is not in general a diagonal matrix.
Let
\[
X_t = A_1 X_{t-1} + \varepsilon_t, \qquad (3.46)
\]
where
\[
\varepsilon_t \sim VWN(0, \Omega).
\]
Rearranging (3.46) and exploiting the stationarity assumption we obtain
\[
\begin{aligned}
X_t - A_1 X_{t-1} &= \varepsilon_t \\
(I_m - A_1 L)X_t &= \varepsilon_t \\
X_t &= (I_m - A_1 L)^{-1}\varepsilon_t \\
X_t &= (I_m + \Psi_1 L + \Psi_2 L^2 + \Psi_3 L^3 + \cdots)\varepsilon_t \\
X_t &= \varepsilon_t + \Psi_1 \varepsilon_{t-1} + \Psi_2 \varepsilon_{t-2} + \Psi_3 \varepsilon_{t-3} + \cdots \qquad (3.47)
\end{aligned}
\]
\[
\frac{\partial X_t}{\partial \varepsilon_{t-j}} = \Psi_j. \qquad (3.48)
\]
\[
\frac{\partial \hat{X}_t}{\partial \varepsilon_{t-j}} = \hat{\Psi}_j. \qquad (3.49)
\]
It immediately follows from (3.51) that all the coefficient matrices on the right-hand side must be null matrices. Therefore,
\[
\begin{aligned}
\Psi_1 - A_1 &= 0_m \;\Rightarrow\; \Psi_1 = A_1 \\
\Psi_2 - A_1\Psi_1 &= 0_m \;\Rightarrow\; \Psi_2 = A_1\Psi_1 = A_1(A_1) = A_1^2 \\
\Psi_3 - A_1\Psi_2 &= 0_m \;\Rightarrow\; \Psi_3 = A_1\Psi_2 = A_1(A_1^2) = A_1^3 \\
&\;\;\vdots
\end{aligned}
\]
\[
X_t = A_1 X_{t-1} + \varepsilon_t \qquad (3.46)
\]
\[
\hat{\Psi}_1 = \hat{A}_1, \quad \hat{\Psi}_2 = \hat{A}_1^2, \quad \hat{\Psi}_3 = \hat{A}_1^3, \;\ldots \qquad (3.53)
\]
The model given by (3.54) and (3.55) is a special case of the general 3-dimensional VAR(1) process
\[
X_t = A_1 X_{t-1} + \varepsilon_t, \qquad (3.44)
\]
where
\[
\varepsilon_t \sim VWN(0, \Omega),
\]
with
\[
A_1 = \begin{bmatrix} 0.5 & 0.2 & 0.1 \\ 0.1 & 0.1 & 0.3 \\ 0.3 & 0.2 & 0.3 \end{bmatrix}, \qquad
\Omega = \begin{bmatrix} 4.0 & 0.5 & 0.0 \\ 0.5 & 1.0 & 0.5 \\ 0.0 & 0.5 & 0.74 \end{bmatrix}.
\]
Since
\[
\begin{aligned}
X_t &= A_1 X_{t-1} + \varepsilon_t \\
X_t - A_1 X_{t-1} &= \varepsilon_t \\
(I_3 - A_1 L)X_t &= \varepsilon_t,
\end{aligned}
\]
\[
\underset{(3\times 3)}{\Phi(L)} \equiv I_3 - A_1 L. \qquad (3.56)
\]
\[
|I_3 - r A_1| = 0,
\]
where
\[
A_1 = \begin{bmatrix} 0.5 & 0.2 & 0.1 \\ 0.1 & 0.1 & 0.3 \\ 0.3 & 0.2 & 0.3 \end{bmatrix}.
\]
It is straightforward to show that
\[
r_1 = 1.37, \quad r_2 = 5, \quad r_3 = -36.
\]
Since these roots are all greater than one in absolute value (since each root is a real number, the modulus is the absolute value), it follows from Theorem 3.1 above that (3.54) is a stationary vector time series.
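The roots quoted above can be verified numerically (my own sketch, not part of the notes):

```python
import numpy as np

A1 = np.array([[0.5, 0.2, 0.1],
               [0.1, 0.1, 0.3],
               [0.3, 0.2, 0.3]])

# The roots of det(I3 - r*A1) = 0 are the reciprocals of the eigenvalues of A1.
roots = 1.0 / np.linalg.eigvals(A1)
print(np.round(roots.real, 2))        # approximately 1.37, 5.0, -36 (in some order)
print(bool(np.all(np.abs(roots) > 1)))  # True, so the VAR(1) is stationary
```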
Suppose that we wish to trace the effects on the system of a one unit shock to $x_1$ in period 1, assuming that no other shocks occur in any time period. That is, we assume the following with respect to the shocks impacting on the system:
\[
\varepsilon_{11} = 1, \quad \varepsilon_{21} = 0, \quad \varepsilon_{31} = 0, \qquad
\varepsilon_{1t} = \varepsilon_{2t} = \varepsilon_{3t} = 0, \;\; t > 1. \qquad (3.57)
\]
\[
\frac{\partial X_t}{\partial \varepsilon_{t-j}} = \Psi_j = A_1^j. \qquad (3.58)
\]
Setting
\[
t = 1, \; j = 0
\]
3.10 Impulse Response Analysis XI
3.10.1 Introduction
in (3.58) we obtain
\[
\frac{\partial X_1}{\partial \varepsilon_1} = \Psi_0 = A_1^0 = I_3. \qquad (3.59)
\]
Using (3.59),
\[
dX_1 = \frac{\partial X_1}{\partial \varepsilon_1}\, d\varepsilon_1 = I_3\, d\varepsilon_1 = d\varepsilon_1.
\]
That is,
\[
\begin{pmatrix} dx_{11} \\ dx_{21} \\ dx_{31} \end{pmatrix}
= \begin{pmatrix} d\varepsilon_{11} \\ d\varepsilon_{21} \\ d\varepsilon_{31} \end{pmatrix}
= \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}. \qquad (3.60)
\]
3.10 Impulse Response Analysis XII
3.10.1 Introduction
Setting
\[
t = 2, \; j = 1
\]
in
\[
\frac{\partial X_t}{\partial \varepsilon_{t-j}} = \Psi_j = A_1^j \qquad (3.58)
\]
we obtain
\[
\frac{\partial X_2}{\partial \varepsilon_1} = \Psi_1 = A_1^1 = A_1. \qquad (3.61)
\]
Therefore
\[
dX_2 = \frac{\partial X_2}{\partial \varepsilon_1}\, d\varepsilon_1 = A_1\, d\varepsilon_1.
\]
That is,
\[
\begin{pmatrix} dx_{12} \\ dx_{22} \\ dx_{32} \end{pmatrix}
= \begin{bmatrix} 0.5 & 0.2 & 0.1 \\ 0.1 & 0.1 & 0.3 \\ 0.3 & 0.2 & 0.3 \end{bmatrix}
\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
= \begin{pmatrix} 0.5 \\ 0.1 \\ 0.3 \end{pmatrix}. \qquad (3.62)
\]
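The responses in (3.60)–(3.62), and those for later periods, can be generated by iterating $\Psi_j = A_1^j$ (my own sketch):

```python
import numpy as np

A1 = np.array([[0.5, 0.2, 0.1],
               [0.1, 0.1, 0.3],
               [0.3, 0.2, 0.3]])

shock = np.array([1.0, 0.0, 0.0])     # one-unit shock to x1 in period 1, as in (3.57)
for j in range(4):                    # responses in periods 1, 2, 3, 4
    psi_j = np.linalg.matrix_power(A1, j)   # Psi_j = A1^j, as in (3.58)
    print(j + 1, np.round(psi_j @ shock, 3))
# Period 1: [1, 0, 0]; period 2: [0.5, 0.1, 0.3]; later periods decay towards zero.
```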
\[
X_t = A_0 + A_1 X_{t-1} + A_2 X_{t-2} + \cdots + A_{p-1} X_{t-(p-1)} + A_p X_{t-p} + \varepsilon_t,
\]
For example, in our three-dimensional VAR(1) the long run effect on all the variables in the VAR of a shock to $x_1$ in period 1 is given by the first column of the matrix
\[
(I_3 - A_1)^{-1}
= \left\{
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
-
\begin{bmatrix} 0.5 & 0.2 & 0.1 \\ 0.1 & 0.1 & 0.3 \\ 0.3 & 0.2 & 0.3 \end{bmatrix}
\right\}^{-1}
= \begin{bmatrix} 0.5 & -0.2 & -0.1 \\ -0.1 & 0.9 & -0.3 \\ -0.3 & -0.2 & 0.7 \end{bmatrix}^{-1}
= \begin{bmatrix} 2.5 & 0.7 & 0.7 \\ 0.7 & 1.4 & 0.7 \\ 1.3 & 0.7 & 1.9 \end{bmatrix}.
\]
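A quick numerical check of this long-run matrix (my own sketch):

```python
import numpy as np

A1 = np.array([[0.5, 0.2, 0.1],
               [0.1, 0.1, 0.3],
               [0.3, 0.2, 0.3]])

long_run = np.linalg.inv(np.eye(3) - A1)
print(np.round(long_run, 1))
# [[ 2.5  0.7  0.7]
#  [ 0.7  1.4  0.7]
#  [ 1.3  0.7  1.9]]
# The first column gives the cumulative long-run effect of a unit shock to x1.
```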
In deriving these impulse responses we assumed that
\[
\varepsilon_{21} = \varepsilon_{31} = 0.
\]
However, it is clear from the specification of $\Omega$ given above that
\[
\mathrm{cov}(\varepsilon_{11}, \varepsilon_{21}) = 0.5.
\]
Therefore, assuming that
\[
\varepsilon_{11} \neq 0, \;\text{but}\; \varepsilon_{21} = 0
\]
does not make sense, since it ignores the fact that $\varepsilon_{11}$ and $\varepsilon_{21}$ are correlated.
In general, impulse response analysis which ignores the
contemporaneous correlation between the errors in the VAR is unlikely
to provide an accurate description of the dynamic relationships
between the variables.
The problem of contemporaneous correlation of the errors can be
resolved by using the errors from the SVAR rather than the errors
from the VAR to conduct impulse response analysis.
3.10 Impulse Response Analysis III
3.10.2 Orthogonalized impulse responses
(3.66)
where
\[
\mathrm{cov}(u_{1t}, u_{2t}) = \mathrm{cov}(u_{1t}, u_{3t}) = \mathrm{cov}(u_{2t}, u_{3t}) = 0 \;\;\forall t. \qquad (3.67)
\]
\[
S X_t = S_0 + S_1 X_{t-1} + u_t, \qquad (3.68)
\]
where
\[
\underset{(3\times 1)}{X_t} = \begin{bmatrix} x_{1t} \\ x_{2t} \\ x_{3t} \end{bmatrix}, \quad
\underset{(3\times 1)}{X_{t-1}} = \begin{bmatrix} x_{1,t-1} \\ x_{2,t-1} \\ x_{3,t-1} \end{bmatrix}, \quad
\underset{(3\times 1)}{u_t} = \begin{bmatrix} u_{1t} \\ u_{2t} \\ u_{3t} \end{bmatrix},
\]
\[
\underset{(3\times 3)}{S} = \begin{bmatrix} 1 & a_1 & a_2 \\ b_1 & 1 & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}, \quad
\underset{(3\times 1)}{S_0} = \begin{bmatrix} a_0 \\ b_0 \\ c_0 \end{bmatrix}, \quad
\underset{(3\times 3)}{S_1} = \begin{bmatrix} a_3 & a_4 & a_5 \\ b_3 & b_4 & b_5 \\ c_3 & c_4 & c_5 \end{bmatrix},
\]
or
\[
\underset{(3\times 1)}{X_t} = \underset{(3\times 1)}{A_0} + \underset{(3\times 3)}{A_1}\,\underset{(3\times 1)}{X_{t-1}} + \underset{(3\times 1)}{\varepsilon_t}, \qquad (3.69)
\]
where
\[
A_0 = S^{-1} S_0, \quad A_1 = S^{-1} S_1, \quad \varepsilon_t = S^{-1} u_t. \qquad (3.70)
\]
Substituting
\[
\varepsilon_t = S^{-1} u_t \qquad (3.71)
\]
into the VMA representation of our VAR given by
\[
X_t = \varepsilon_t + \Psi_1 \varepsilon_{t-1} + \Psi_2 \varepsilon_{t-2} + \Psi_3 \varepsilon_{t-3} + \cdots \qquad (3.72)
\]
we obtain
\[
X_t = S^{-1} u_t + \Psi_1 S^{-1} u_{t-1} + \Psi_2 S^{-1} u_{t-2} + \Psi_3 S^{-1} u_{t-3} + \cdots \qquad (3.73)
\]
It immediately follows from (3.73) that
\[
\frac{\partial X_t}{\partial u_{t-j}} = \Psi_j S^{-1}. \qquad (3.74)
\]
For example,
\[
\frac{\partial X_t}{\partial u_{t-1}} = \Psi_1 S^{-1}.
\]
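A small sketch of how the orthogonalized responses $\Psi_j S^{-1}$ can be computed (my own illustration, reusing the hypothetical $A_1$ and $\Omega$ from the stylized three-variable example above). The notes obtain S from the recursive SVAR; a common practical choice, assumed here, is to take $\hat{S}^{-1}$ as the lower-triangular Cholesky factor of $\hat{\Omega}$:

```python
import numpy as np

A1 = np.array([[0.5, 0.2, 0.1],
               [0.1, 0.1, 0.3],
               [0.3, 0.2, 0.3]])
Omega = np.array([[4.0, 0.5, 0.0],
                  [0.5, 1.0, 0.5],
                  [0.0, 0.5, 0.74]])

P = np.linalg.cholesky(Omega)          # lower-triangular P with Omega = P P'
for j in range(3):
    psi_j = np.linalg.matrix_power(A1, j)
    print(j + 1, np.round(psi_j @ P, 3))   # orthogonalized responses Psi_j S^{-1}
```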
3.10 Impulse Response Analysis VII
3.10.2 Orthogonalized impulse responses
\[
\frac{\partial \hat{X}_t}{\partial u_{t-j}} = \hat{\Psi}_j \hat{S}^{-1}, \qquad (3.75)
\]
where
\[
\hat{\Psi}_1 = \hat{A}_1, \quad \hat{\Psi}_2 = \hat{A}_1^2, \quad \hat{\Psi}_3 = \hat{A}_1^3, \;\ldots
\]
\[
\frac{\partial X_t}{\partial u_{t-j}} = \Psi_j S^{-1}. \qquad (3.74)
\]
However, estimating
\[
S = \begin{bmatrix} 1 & a_1 & a_2 \\ b_1 & 1 & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}
\]
in (3.74) is problematic. We cannot estimate the SVAR (3.76) by least squares, since (3.76) suffers from simultaneity bias.
\[
X_t = A_0 + A_1 X_{t-1} + \varepsilon_t
\]
\[
\frac{\partial X_t}{\partial u_{t-j}} = \Psi_j S^{-1} = A_1^j S^{-1}, \qquad (3.79)
\]
The approach based on (3.76) suffers from a major limitation: the orthogonalized impulse responses depend on the ordering of the variables in the VAR.
and
\[
S^{-1} =
\begin{bmatrix} 1 & 0 & 0 \\ b_1 & 1 & 0 \\ c_1 & c_2 & 1 \end{bmatrix}^{-1}
=
\begin{bmatrix} 1 & 0 & 0 \\ -b_1 & 1 & 0 \\ b_1 c_2 - c_1 & -c_2 & 1 \end{bmatrix}.
\]
Notice that:
Now suppose we change the ordering of the variables in the VAR and
rewrite our VAR as
(3.82)
Imposing a recursive structure on (3.82) we obtain
\[
\frac{\partial X_t}{\partial u_t} = \Psi_0 S^{-1} = A_1^0 S^{-1} = S^{-1}
\]
implies that
\[
\begin{bmatrix}
\dfrac{\partial x_{2t}}{\partial u_{2t}} & \dfrac{\partial x_{2t}}{\partial u_{1t}} & \dfrac{\partial x_{2t}}{\partial u_{3t}} \\[2ex]
\dfrac{\partial x_{1t}}{\partial u_{2t}} & \dfrac{\partial x_{1t}}{\partial u_{1t}} & \dfrac{\partial x_{1t}}{\partial u_{3t}} \\[2ex]
\dfrac{\partial x_{3t}}{\partial u_{2t}} & \dfrac{\partial x_{3t}}{\partial u_{1t}} & \dfrac{\partial x_{3t}}{\partial u_{3t}}
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 \\
-a_1 & 1 & 0 \\
a_1 c_2 & -c_2 & 1
\end{bmatrix}. \qquad (3.84)
\]
\[
\underset{(3\times 1)}{x_t} = \underset{(3\times 1)}{A_0} + \underset{(3\times 3)}{A_1}\,\underset{(3\times 1)}{x_{t-1}} + \cdots + \underset{(3\times 3)}{A_7}\,\underset{(3\times 1)}{x_{t-7}} + \underset{(3\times 1)}{\varepsilon_t},
\]
where
\[
x_t' = (dtb_t,\; dr3_t,\; dr10_t)
\]
and
\[
\varepsilon_t \sim VWN(0, \Omega).
\]
Table 1 below shows the response of dtb over four time periods to
orthogonalized shocks to each of the variables in time period 1 (which
is 1960q1).
Table 1: Response of dtb to orthogonalized shocks

Period   Shock to dtb   Shock to dr3   Shock to dr10
1        0.761033       0.000000       0.000000
2        0.272200       0.001457       0.050738
3        0.249042       0.168280       0.006059
4        0.091407       0.121443       0.174631
Notice that, because they come after dtb in the VAR ordering, shocks to dr3 and dr10 have no impact effect on dtb.
Table 2 below shows the response of dtb over four time periods to
orthogonalized shocks to each of the variables in time period 1 when
we change the VAR ordering to
Table 2: Response of dtb to orthogonalized shocks (alternative ordering)

Period   Shock to dtb   Shock to dr3   Shock to dr10
1        0.340359       0.658571       0.172081
2        0.097719       0.236282       0.106259
3        0.015845       0.299844       0.014800
4        0.213461       0.018240       0.087759
Notice that shocks to dr3 and dr10 now affect dtb in period 1.
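A small sketch of why the ordering matters (my own illustration, reusing the $\Omega$ from the stylized three-variable example above rather than the estimated dtb/dr3/dr10 system): permuting the variables changes the Cholesky factor, and therefore the impact-period responses.

```python
import numpy as np

Omega = np.array([[4.0, 0.5, 0.0],
                  [0.5, 1.0, 0.5],
                  [0.0, 0.5, 0.74]])

perm = [1, 0, 2]                                # reorder the variables: x2, x1, x3
Omega_perm = Omega[np.ix_(perm, perm)]

print(np.round(np.linalg.cholesky(Omega), 3))       # impact responses, original ordering
print(np.round(np.linalg.cholesky(Omega_perm), 3))  # impact responses differ after reordering
```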
[Figure: Orthogonalized impulse responses. Panels: Response of D(R3) to D(TBILL), D(R3), D(R10); Response of D(R10) to D(TBILL), D(R3), D(R10).]
\[
\frac{\partial\, dR3_t}{\partial\, dTB_{t-2}} = \phi_{21,2},
\]
which has no intrinsic economic interest.
While they don't require the imposition of identifying restrictions, VARs ignore valid restrictions suggested by economic theory.
The number of parameters to be estimated can increase substantially as the number of lags and/or variables included in the VAR increases. In a high order, high dimensional VAR we may have a large number of parameters to estimate relative to the sample size.
The decision as to which variables to include in the VAR is somewhat
arbitrary.
3.11 Advantages and Disadvantages of VARs
3.11.2 Disadvantages of VARs
\[
x_t = A_0 + A_1 x_{t-1} + \cdots + A_p x_{t-p} + \varepsilon_t + B_1 \varepsilon_{t-1} + B_2 \varepsilon_{t-2} + \cdots + B_q \varepsilon_{t-q}, \qquad (3.86)
\]
where
\[
\varepsilon_t \sim VWN(0, \Omega),
\]
$A_i$, i = 1, 2, ..., p, and $B_j$, j = 1, 2, ..., q, are $m\times m$ matrices and $A_0$ is an $m\times 1$ vector.